POST /embeddings/search-raw-embeddings
[
  {
    "source_id": "CortexDoc1234",
    "embedding": {
      "chunk_id": "<chunk_id>",
      "embedding": []
    },
    "score": 1,
    "distance": 1,
    "metadata": {}
  }
]

Examples

curl -X 'POST' \
  'https://api.usecortex.ai/embeddings/search-raw-embeddings' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <token>' \
  -H 'Content-Type: application/json' \
  -d '{
    "tenant_id": "string",
    "sub_tenant_id": "string",
    "query_embedding": [
      0
    ],
    "limit": 10,
    "filter_expr": "string",
    "output_fields": [
      "string"
    ]
  }'
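The Python sketch below mirrors the curl example above. It assumes the requests library, a placeholder bearer token and tenant IDs, and a query embedding produced elsewhere; it then iterates over the documented response shape (a list of result objects).

import requests

# Minimal sketch of the request shown above; the token, tenant IDs and the
# query embedding are placeholders, not working values.
url = "https://api.usecortex.ai/embeddings/search-raw-embeddings"
headers = {
    "accept": "application/json",
    "Authorization": "Bearer <token>",
    "Content-Type": "application/json",
}
payload = {
    "tenant_id": "tenant_1234",
    "sub_tenant_id": "sub_tenant_4567",
    "query_embedding": [0.1, -0.2, 0.3],  # replace with a real embedding of the correct dimension
    "limit": 10,
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
for item in response.json():
    # Each result object carries a source_id and a similarity score.
    print(item["source_id"], item["score"])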
Search for similar content using vector embeddings: your query embedding is compared against the vector database and the most similar content chunks are returned.

Vector Search Concepts

What are Embeddings?

Embeddings are high-dimensional vector representations of text that capture semantic meaning:
  • Semantic Understanding: Similar concepts have similar vector representations
  • Mathematical Distance: Content similarity is measured by vector distance
  • Language Agnostic: Works across different languages and formats
  • Context Preservation: Maintains meaning and relationships between concepts
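As a rough, optional illustration (not part of the Cortex API), here is how such a vector might be generated with the open-source sentence-transformers library. The model named below is only an example; whichever model you use must match the one your content was indexed with.

from sentence_transformers import SentenceTransformer

# Illustrative only: any embedding model can be used, as long as it matches
# the model used when your content was indexed.
model = SentenceTransformer("all-MiniLM-L6-v2")  # produces 384-dimensional vectors

texts = ["How do I reset my password?", "Steps to recover account access"]
vectors = model.encode(texts)  # numpy array of shape (2, 384)

# Semantically similar sentences map to nearby vectors.
print(vectors.shape)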

How Vector Search Works

  1. Input Processing: Your embedding vector is compared against all stored embeddings
  2. Similarity Calculation: Cosine similarity or other distance metrics are computed (see the sketch after this list)
  3. Ranking: Results are ranked by similarity score (higher = more similar)
  4. Retrieval: Most similar chunks are returned with their similarity scores
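A minimal sketch of steps 2-3, using toy vectors rather than real embeddings: cosine similarity is computed with NumPy and the stored chunks are ranked by score.

import numpy as np

def cosine_similarity(a, b):
    # Dot product divided by the product of magnitudes; 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.1, -0.2, 0.3])
stored = {
    "chunk_a": np.array([0.11, -0.19, 0.29]),  # nearly the same direction -> high score
    "chunk_b": np.array([-0.3, 0.5, -0.1]),    # different direction -> low score
}

# Rank stored chunks by similarity, highest (most similar) first.
ranked = sorted(
    ((cosine_similarity(query, vec), chunk_id) for chunk_id, vec in stored.items()),
    reverse=True,
)
for score, chunk_id in ranked:
    print(f"{chunk_id}: {score:.3f}")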

Embedding Dimensions

  • Standard Dimensions: Most embeddings use 384, 512, 768, or 1536 dimensions
  • Quality vs Speed: Higher dimensions = better quality, slower search
  • Compatibility: Ensure your embedding model matches Cortex’s expected format

Search Parameters

Limit (Max Chunks)

Controls the number of results returned via the limit parameter:
  • Range: 1-1000 chunks
  • Default: 10 chunks
  • Recommendation:
    • Start with 10-20 for most use cases
    • Use 50-100 for comprehensive searches
    • Use 1-5 for precise, top results only

Embedding Format

  • Type: Single embedding vector (1D array of numeric values)
  • Values: Floating-point numbers (typically between -1 and 1)
  • Length: Must match the embedding model’s dimension size
  • Example: [0.1, -0.2, 0.3, 0.4, -0.5, ...]
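A small client-side check along these lines can catch malformed vectors before they are sent. The 768 below is a placeholder dimension, not a Cortex requirement; use whatever your embedding model produces.

import math

EXPECTED_DIM = 768  # placeholder: must match your embedding model's output size

def validate_query_embedding(embedding):
    # Must be a flat (1-D) list whose length matches the model's dimension.
    if not isinstance(embedding, list) or len(embedding) != EXPECTED_DIM:
        raise ValueError(f"expected a flat list of {EXPECTED_DIM} numbers")
    # Every value must be a finite number.
    if not all(isinstance(x, (int, float)) and math.isfinite(x) for x in embedding):
        raise ValueError("embedding values must be finite numbers")
    return embedding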

Use Cases

  • Content Discovery: Find documents similar to a reference document
  • Recommendation Systems: Suggest related content based on user interests
  • Duplicate Detection: Identify similar or duplicate content
  • Content Clustering: Group related documents together
  • Multilingual Content: Find similar content across different languages
  • Translation Support: Search for content in one language using another
  • Global Knowledge: Access information regardless of original language

Advanced Retrieval

  • Conceptual Search: Find content based on meaning, not exact keywords
  • Context-Aware Search: Retrieve content that matches conceptual context
  • Fuzzy Matching: Find content even with different wording or phrasing

Best Practices

Embedding Quality

  • Use High-Quality Models: Choose well-trained embedding models (OpenAI, Cohere, etc.)
  • Consistent Models: Use the same embedding model for both indexing and searching
  • Preprocessing: Clean and normalize text before generating embeddings
  • Batch Processing: Generate embeddings in batches for better performance

Search Optimization

  • Appropriate Limit: Start with 10-20 results, adjust based on your needs
  • Similarity Thresholds: Set minimum similarity scores to filter low-quality matches (see the sketch after this list)
  • Multiple Queries: Try different embedding representations of the same concept
  • Hybrid Approaches: Combine vector search with keyword search for better results
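One simple way to apply a similarity threshold is to filter the documented response client-side. The 0.75 cutoff below is an arbitrary example and should be tuned per embedding model and corpus.

MIN_SCORE = 0.75  # arbitrary example threshold; tune for your embedding model and data

def filter_by_score(results, min_score=MIN_SCORE):
    # `results` is the documented response: a list of objects with a "score" field.
    return [r for r in results if r.get("score", 0) >= min_score]

# With scores like those in the sample response further down:
results = [{"score": 0.95}, {"score": 0.89}, {"score": 0.62}]
print(filter_by_score(results))  # keeps only the first two entries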

Performance Considerations

  • Vector Size: Larger vectors provide better quality but slower search
  • Index Size: More indexed content = longer search times
  • Batch Requests: Process multiple embeddings simultaneously when possible
  • Caching: Cache frequently used embeddings to improve response times
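For the caching point, one lightweight option is memoizing embedding generation for repeated query strings. This sketch reuses the illustrative sentence-transformers model from the earlier example and is not Cortex-specific.

from functools import lru_cache
from sentence_transformers import SentenceTransformer

_model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model, as above

@lru_cache(maxsize=1024)
def embed_text_cached(text: str) -> tuple[float, ...]:
    # Repeated query strings reuse the cached vector instead of re-running the model.
    return tuple(float(x) for x in _model.encode(text))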

Common Patterns

Document Similarity

{
  "query_embedding": [0.1, 0.2, 0.3, ...],
  "limit": 20
}
Use when you want to find documents similar to a reference document.

Concept Search

{
  "query_embedding": [0.4, -0.1, 0.8, ...],
  "limit": 10
}
Use when searching for content related to a specific concept or topic.

Recommendation Engine

{
  "query_embedding": [0.2, 0.5, -0.3, ...],
  "limit": 50
}
Use when building recommendation systems that need many similar items.

Sample Response

[
  {
    "source_id": "CortexEmbeddings123",
    "embedding": {
      "chunk_id": "CortexEmbeddings123_0",
      "embedding": []
    },
    "score": 0.95,
    "distance": 0.05,
    "metadata": {}
  },
  {
    "source_id": "CortexEmbeddings456",
    "embedding": {
      "chunk_id": "CortexEmbeddings456_0",
      "embedding": []
    },
    "score": 0.89,
    "distance": 0.11,
    "metadata": {}
  }
]

Error Responses

All endpoints return consistent error responses following the standard format. For detailed error information, see our Error Responses documentation.

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
tenant_id
string
required

Unique identifier for the tenant/organization

Example:

"tenant_1234"

sub_tenant_id
string
required

Optional sub-tenant identifier used to organize data within a tenant. If omitted, the default sub-tenant created during tenant setup will be used.

Example:

"sub_tenant_4567"

query_embedding
number[]
required

Query embedding vector to search for

Example:
[]
limit
integer
default:10

Maximum number of results to return

Required range: 1 <= x <= 1000
Example:

1

filter_expr
string | null

Optional Milvus filter expression for additional filtering (for example, source_id == "CortexDoc1234")

output_fields
string[] | null

Optional list of fields to return in results (default: chunk_id, source_id, metadata)

Response

Successful Response

source_id
string
required

Source identifier

Example:

"CortexDoc1234"

embedding
RawEmbeddingVector · object

Embedding payload with chunk id and vector (if set)

score
number
default:0

Similarity score

Example:

1

distance
number
default:0

Vector distance

Example:

1

metadata
Metadata · object

Metadata associated with the embedding