News

LlamaIndex OSS Documentation
developers.llamaindex.ai > python > cloud > general > limitations

Limitations

13+ hours, 1+ min ago  (132+ words) This page documents the current service limitations across LlamaParse products. See also Rate Limits for API rate-limiting details. These limits apply across all LlamaParse services. Parse jobs have a configurable timeout composed of: … The total timeout is calculated as: …
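The snippet above is truncated before the actual formula, but a timeout "composed of" several parts typically means a fixed base plus a per-page allowance. A minimal sketch of that idea, with parameter names that are assumptions rather than the documented LlamaParse fields:

```python
# Illustrative sketch only: these names are assumptions, not the
# documented LlamaParse configuration fields.
def total_parse_timeout(base_timeout_s: float,
                        extra_time_per_page_s: float,
                        num_pages: int) -> float:
    """Total timeout = fixed base + a per-page allowance."""
    return base_timeout_s + extra_time_per_page_s * num_pages

# e.g. a 300-second base plus 2 seconds per page for a 90-page document
timeout = total_parse_timeout(300, 2, 90)
```

See the linked Limitations page for the actual parameters and formula.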

LlamaIndex OSS Documentation
developers.llamaindex.ai > python > cloud > llamaclassify > examples > classify_with_saved_config

Classify with a Saved Configuration

11+ hours, 50+ min ago  (172+ words) In this example, we'll save a reusable classify configuration and use it across multiple jobs. Instead of passing inline rules every time, you create a configuration once and reference it by ID. This is useful when you have a standard set…
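The pattern the entry describes — create a rule set once, then reference it by ID across many jobs — can be sketched with a tiny in-memory store. The class and method names below are illustrative, not the actual LlamaClassify API:

```python
# Minimal in-memory sketch of the "saved configuration" pattern:
# store a rule set once, then reference it by ID for many jobs.
# Names here are illustrative, not the real LlamaClassify client.
import uuid

class ConfigStore:
    def __init__(self):
        self._configs = {}

    def create(self, rules):
        """Save a rule set once and get back a reusable ID."""
        config_id = str(uuid.uuid4())
        self._configs[config_id] = rules
        return config_id

    def run_job(self, config_id, document):
        """Run a job against the saved rules -- no inline copy needed."""
        rules = self._configs[config_id]
        return {"document": document, "rules_applied": len(rules)}

store = ConfigStore()
cfg_id = store.create([{"label": "invoice"}, {"label": "receipt"}])
result = store.run_job(cfg_id, "doc-1")
```

The real service presumably persists the configuration server-side; the point is that every job passes only the ID, keeping rule definitions in one place.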

@llama_index
developers.llamaindex.ai > python > cloud > llamaparse

Overview of LlamaParse

6+ months, 1+ week ago  (290+ words) Built for teams that care about quality, LlamaParse turns complex, messy files into structured, clean outputs. LlamaParse is a highly accurate parser for complex documents like financial reports, research papers, and scanned PDFs. It handles tables, images, and charts with…

@llama_index
developers.llamaindex.ai > python > examples > embeddings > llm_rails

LLMRails Embeddings | LlamaIndex Python Documentation

6+ months, 1+ week ago  (67+ words) LLMRails Embeddings. If you're opening this Notebook on Colab, you will probably need to install LlamaIndex.

# imports
from llama_index.embeddings.llm_rails import LLMRailsEmbedding

# get credentials and create embeddings
import os

api_key = os.environ.get("API_KEY", "your-api-key")
model_id = os.environ.get("MODEL_ID", "your-model-id")
embed_model = LLMRailsEmbedding(model_id=model_id, api_key=api_key)
embeddings = embed_model.get_text_embedding("It is…

@llama_index
developers.llamaindex.ai > python > framework > module_guides > querying

Querying | LlamaIndex Python Documentation

6+ months, 1+ week ago  (79+ words) Querying is the most important part of your LLM application. To learn more about getting a final product that you can deploy, check out the query engine and chat engine. If you wish to combine advanced reasoning with tool use,…

@llama_index
developers.llamaindex.ai > python > framework > use_cases

Use Cases | LlamaIndex Python Documentation

6+ months, 1+ week ago  (55+ words) LlamaIndex offers powerful capabilities for a wide range of AI applications. Explore the following use cases to learn how to leverage LlamaIndex for your specific needs: Prompting - learn advanced prompting techniques with LlamaIndex; Structured Data Extraction - extract structured…

@llama_index
developers.llamaindex.ai > python > framework > module_guides > deploying > query_engine

Query Engine | LlamaIndex Python Documentation

6+ months, 1+ week ago  (103+ words) A query engine is a generic interface that allows you to ask questions over your data. A query engine takes in a natural language query and returns a rich response. It is most often (but not always) built on one or…
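The "generic interface" idea above can be sketched in a few lines: any query engine takes a natural-language string and returns a response. The toy engine below is purely illustrative; in LlamaIndex itself you would typically get a real one from an index via `index.as_query_engine()`:

```python
# Hedged sketch of the generic query-engine interface: a single
# query() method from natural-language string to response. The toy
# engine is illustrative, not the LlamaIndex implementation.
from typing import Protocol

class QueryEngine(Protocol):
    def query(self, query_str: str) -> str: ...

class KeywordQueryEngine:
    """Toy engine: answers from an in-memory mapping of keywords to facts."""

    def __init__(self, facts: dict):
        self.facts = facts

    def query(self, query_str: str) -> str:
        for keyword, fact in self.facts.items():
            if keyword in query_str.lower():
                return fact
        return "No answer found."

engine = KeywordQueryEngine(
    {"llamaparse": "LlamaParse parses complex documents."}
)
answer = engine.query("What does LlamaParse do?")
```

The value of the interface is that callers depend only on `query()`, so a retrieval-backed engine, a chat engine wrapper, or a toy like this are all interchangeable at the call site.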