Vector Search Concepts
What are Embeddings?
Embeddings are high-dimensional vector representations of text that capture semantic meaning:
- Semantic Understanding: Similar concepts have similar vector representations
- Mathematical Distance: Content similarity is measured by vector distance
- Language Agnostic: Works across different languages and formats
- Context Preservation: Maintains meaning and relationships between concepts
How Vector Search Works
- Input Processing: Your embedding vector is compared against all stored embeddings
- Similarity Calculation: Cosine similarity or other distance metrics are computed
- Ranking: Results are ranked by similarity score (higher = more similar)
- Retrieval: Most similar chunks are returned with their similarity scores
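The similarity and ranking steps above can be sketched in plain Python. This is an illustrative implementation of cosine-similarity ranking, not the Cortex internals; the chunk IDs and data layout are assumptions for the example:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_chunks(query_embedding, stored):
    # stored: list of (chunk_id, embedding) pairs
    scored = [
        (chunk_id, cosine_similarity(query_embedding, emb))
        for chunk_id, emb in stored
    ]
    # Higher score = more similar, so sort descending
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

query = [0.1, 0.3, 0.5]
stored = [
    ("chunk-a", [0.1, 0.3, 0.5]),  # same direction as the query
    ("chunk-b", [0.5, 0.1, 0.0]),
]
results = rank_chunks(query, stored)
```

In production, vector databases use approximate nearest-neighbor indexes rather than this exhaustive scan, but the scoring and ranking logic is the same.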
Embedding Dimensions
- Standard Dimensions: Most embeddings use 384, 512, 768, or 1536 dimensions
- Quality vs Speed: Higher dimensions = better quality, slower search
- Compatibility: Ensure your embedding model matches Cortex’s expected format
Search Parameters
Max Chunks
Controls the number of results returned:
- Range: 1-200 chunks
- Default: 10 chunks
- Recommendation:
- Start with 10-20 for most use cases
- Use 50-100 for comprehensive searches
- Use 1-5 for precise, top results only
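As a sketch, a search request body using these parameters might be built like this. The `embedding` and `max_chunks` field names are illustrative assumptions; check the endpoint's Body schema for the exact names:

```python
import json

# Hypothetical request payload -- field names are assumptions,
# not confirmed parts of the Cortex request schema.
payload = {
    "embedding": [0.1, -0.2, 0.3],  # must match your model's dimension size
    "max_chunks": 10,               # default; raise toward 50-100 for broad recall
}
body = json.dumps(payload)
```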
Embedding Format
- Type: Single embedding vector (1D array of numeric values)
- Values: Floating-point numbers (typically between -1 and 1)
- Length: Must match the embedding model’s dimension size
- Example:
[0.1, -0.2, 0.3, 0.4, -0.5, ...]
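A quick client-side check of these format rules can catch dimension mismatches before they reach the API. This is a minimal sketch; the expected dimension depends on your embedding model and is an assumption here:

```python
def validate_embedding(vector, expected_dim):
    # expected_dim must match your embedding model's output size
    if not isinstance(vector, list):
        raise TypeError("Embedding must be a 1D list of numbers")
    if not all(isinstance(v, (int, float)) for v in vector):
        raise TypeError("Embedding values must be numeric")
    if len(vector) != expected_dim:
        raise ValueError(
            f"Embedding has {len(vector)} dimensions, expected {expected_dim}"
        )
    return vector

validate_embedding([0.1, -0.2, 0.3], expected_dim=3)  # passes
```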
Use Cases
Semantic Similarity Search
- Content Discovery: Find documents similar to a reference document
- Recommendation Systems: Suggest related content based on user interests
- Duplicate Detection: Identify similar or duplicate content
- Content Clustering: Group related documents together
Cross-Language Search
- Multilingual Content: Find similar content across different languages
- Translation Support: Search for content in one language using another
- Global Knowledge: Access information regardless of original language
Advanced Retrieval
- Conceptual Search: Find content based on meaning, not exact keywords
- Context-Aware Search: Retrieve content that matches conceptual context
- Fuzzy Matching: Find content even with different wording or phrasing
Best Practices
Embedding Quality
- Use High-Quality Models: Choose well-trained embedding models (OpenAI, Cohere, etc.)
- Consistent Models: Use the same embedding model for both indexing and searching
- Preprocessing: Clean and normalize text before generating embeddings
- Batch Processing: Generate embeddings in batches for better performance
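The batch-processing recommendation can be sketched as a simple chunking loop. `embed_fn` is a hypothetical stand-in for whatever embedding client you use (e.g. a provider SDK that accepts a list of strings and returns one vector per input):

```python
def embed_in_batches(texts, embed_fn, batch_size=64):
    # embed_fn: hypothetical function mapping a list of strings to a
    # list of embedding vectors, one per input string
    embeddings = []
    for i in range(0, len(texts), batch_size):
        embeddings.extend(embed_fn(texts[i:i + batch_size]))
    return embeddings

# Usage with a stand-in embed function (length-1 vectors for illustration):
fake_embed = lambda batch: [[float(len(t))] for t in batch]
vectors = embed_in_batches(["a", "bb", "ccc"], fake_embed, batch_size=2)
```

Batching amortizes per-request overhead; most embedding providers also enforce a per-request input limit, which `batch_size` should stay under.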
Search Optimization
- Appropriate Max Chunks: Start with 10-20, adjust based on your needs
- Similarity Thresholds: Set minimum similarity scores to filter low-quality matches
- Multiple Queries: Try different embedding representations of the same concept
- Hybrid Approaches: Combine vector search with keyword search for better results
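The hybrid approach mentioned above is often implemented as a weighted blend of the two scores. A minimal sketch, assuming both scores are already normalized to [0, 1] and using an illustrative weighting of 0.7:

```python
def hybrid_score(vector_score, keyword_score, alpha=0.7):
    # Blend vector similarity with a keyword relevance score
    # (e.g. BM25 normalized to [0, 1]); alpha = 0.7 is an assumption
    return alpha * vector_score + (1 - alpha) * keyword_score

# Re-rank candidates that appear in either result set
candidates = {
    "chunk-a": {"vector": 0.92, "keyword": 0.10},
    "chunk-b": {"vector": 0.55, "keyword": 0.95},
}
ranked = sorted(
    candidates,
    key=lambda cid: hybrid_score(candidates[cid]["vector"],
                                 candidates[cid]["keyword"]),
    reverse=True,
)
```

Tuning `alpha` toward 1.0 favors semantic matches; toward 0.0 it favors exact keyword hits. Reciprocal rank fusion is a common alternative when score scales differ.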
Performance Considerations
- Vector Size: Larger vectors provide better quality but make search slower
- Index Size: More indexed content = longer search times
- Batch Requests: Process multiple embeddings simultaneously when possible
- Caching: Cache frequently used embeddings to improve response times
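The caching suggestion can be sketched with a simple in-memory dictionary keyed by a hash of the input text, so repeated queries skip recomputation. `embed_fn` is again a hypothetical single-text embedding function:

```python
import hashlib

_embedding_cache = {}

def cached_embedding(text, embed_fn):
    # embed_fn: hypothetical function mapping a string to an embedding vector
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if key not in _embedding_cache:
        _embedding_cache[key] = embed_fn(text)
    return _embedding_cache[key]

calls = []
def fake_embed(text):
    calls.append(text)  # track how often the "model" is actually invoked
    return [float(len(text))]

first = cached_embedding("hello", fake_embed)
second = cached_embedding("hello", fake_embed)  # served from the cache
```

For long-running services, prefer a bounded cache (e.g. an LRU) so memory use stays predictable.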
Common Patterns
Document Similarity
Concept Search
Recommendation Engine
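A common recommendation-engine pattern is to average a user's interest embeddings into a single profile vector and use that as the search query. A minimal sketch (not Cortex-specific):

```python
def profile_vector(interest_embeddings):
    # Average several interest embeddings into one query vector --
    # a simple, widely used recommendation heuristic
    dim = len(interest_embeddings[0])
    count = len(interest_embeddings)
    return [
        sum(vec[i] for vec in interest_embeddings) / count
        for i in range(dim)
    ]

# Two orthogonal interests blend into a midpoint query vector
profile = profile_vector([[1.0, 0.0], [0.0, 1.0]])
```

The resulting vector is then sent as the search embedding; content similar to the blended profile ranks highest.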
Sample Response
Error Responses
All endpoints return consistent error responses following the standard format. For detailed error information, see our Error Responses documentation.