Build a just-in-time knowledge base with Amazon Bedrock

By Advanced AI Editor | July 8, 2025 | 15 min read


Software as a service (SaaS) companies managing multiple tenants face a critical challenge: efficiently extracting meaningful insights from vast document collections while controlling costs. Traditional approaches often lead to unnecessary spending on unused storage and processing resources, hurting both operational efficiency and profitability. Organizations need solutions that intelligently scale processing and storage based on actual tenant usage patterns while maintaining data isolation. Traditional Retrieval Augmented Generation (RAG) systems consume valuable resources by ingesting and maintaining embeddings for documents that might never be queried, resulting in unnecessary storage costs and reduced system efficiency. Systems designed to handle large numbers of small to mid-sized tenants can exceed cost and infrastructure limits, or might need silo-style deployments to keep each tenant's information and usage separate. Adding to this complexity, many projects are transitory, with work completed intermittently, so data occupies space in the knowledge base that could serve other active tenants.

To address these challenges, this post presents a just-in-time knowledge base solution that reduces unused consumption through intelligent document processing. The solution processes documents only when needed and automatically removes unused resources, so organizations can scale their document repositories without proportionally increasing infrastructure costs.

With configurable per-tenant limits in a multi-tenant architecture, service providers can offer tiered pricing models while maintaining strict data isolation, making the solution ideal for SaaS applications serving multiple clients with varying needs. Automatic document expiration through a Time-to-Live (TTL) value makes sure the system remains lean and focused on relevant content, while refreshing the TTL for frequently accessed documents maintains optimal performance for the information that matters. This architecture also makes it possible to limit how many files each tenant can ingest at a given time and the rate at which tenants can query a set of files.

The solution uses serverless technologies to alleviate operational overhead and provide automatic scaling, so teams can focus on business logic rather than infrastructure management. By organizing documents into groups with metadata-based filtering, the system enables contextual querying that delivers more relevant results while maintaining security boundaries between tenants. The architecture's flexibility supports customization of tenant configurations, query rates, and document retention policies, making it adaptable to evolving business requirements without significant rearchitecting.

Solution overview

This architecture combines several AWS services to create a cost-effective, multi-tenant knowledge base solution that processes documents on demand. The key components include:

Vector-based knowledge base – Uses Amazon Bedrock and Amazon OpenSearch Serverless for efficient document processing and querying
On-demand document ingestion – Implements just-in-time processing using the Amazon Bedrock CUSTOM data source type
TTL management – Provides automatic cleanup of unused documents using the TTL feature in Amazon DynamoDB
Multi-tenant isolation – Enforces secure data separation between users and organizations with configurable resource limits

The solution enables granular control through metadata-based filtering at the user, tenant, and file level. The DynamoDB TTL tracking system supports tiered pricing structures, where tenants can pay for different TTL durations, document ingestion limits, and query rates.

The following diagram illustrates the key components and workflow of the solution.

[Architecture diagram: multi-tier AWS serverless architecture showing the data flow and integration of the services described in this post]

The workflow consists of the following steps:

The user logs in to the system, which attaches a tenant ID to the current user for calls to the Amazon Bedrock knowledge base. This authentication step is crucial because it establishes the security context and makes sure subsequent interactions are properly associated with the correct tenant. The tenant ID becomes the foundational piece of metadata that enables proper multi-tenant isolation and resource management throughout the entire workflow.
After authentication, the user creates a project that will serve as a container for the files they want to query. This project creation step establishes the organizational structure needed to manage related documents together. The system generates appropriate metadata and creates the necessary database entries to track the project’s association with the specific tenant, enabling proper access control and resource management at the project level.
With a project established, the user can begin uploading files. The system manages this process by generating pre-signed URLs for secure file upload. As files are uploaded, they are stored in Amazon Simple Storage Service (Amazon S3), and the system automatically creates entries in DynamoDB that associate each file with both the project and the tenant. This three-way relationship (file-project-tenant) is essential for maintaining proper data isolation and enabling efficient querying later.
When a user requests to create a chat with a knowledge base for a specific project, the system begins ingesting the project files using the CUSTOM data source. This is where the just-in-time processing begins. During ingestion, the system applies a TTL value based on the tenant’s tier-specific TTL interval. The TTL makes sure project files remain available during the chat session while setting up the framework for automatic cleanup later. This step represents the core of the on-demand processing strategy, because files are only processed when they are needed.
Each chat session actively updates the TTL for the project files being used. This dynamic TTL management makes sure frequently accessed files remain in the knowledge base while allowing rarely used files to expire naturally. The system continually refreshes the TTL values based on actual usage, creating an efficient balance between resource availability and cost optimization. This approach maintains optimal performance for actively used content while helping to prevent resource waste on unused documents (a minimal sketch of such a refresh follows this list).
After the chat session ends and the TTL value expires, the system automatically removes files from the knowledge base. This cleanup process is triggered by Amazon DynamoDB Streams monitoring TTL expiration events, which activate an AWS Lambda function to remove the expired documents. This final step reduces the load on the underlying OpenSearch Serverless cluster and optimizes system resources, making sure the knowledge base remains lean and efficient.
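
To make the TTL refresh in step 5 concrete, the following is a minimal sketch using a single DynamoDB UpdateItem call. The table handle and attribute names mirror the ingestion code shown later in this post; the environment variable name is illustrative, not the repository's exact wiring.

import os
import time

import boto3

# Table that tracks ingested files, keyed by file ID (variable names are illustrative)
knowledge_base_files_table = boto3.resource('dynamodb').Table(os.environ['KNOWLEDGE_BASE_FILES_TABLE'])

def refresh_ttl(file_id, ttl_hours):
    """Push a file's expiration forward whenever a chat session touches it."""
    new_ttl = int(time.time()) + int(ttl_hours) * 3600
    knowledge_base_files_table.update_item(
        Key={'id': file_id},
        # Alias the attribute name to avoid any reserved-word clash with 'ttl'
        UpdateExpression='SET #ttl = :ttl',
        ExpressionAttributeNames={'#ttl': 'ttl'},
        ExpressionAttributeValues={':ttl': new_ttl}
    )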

Prerequisites

You need the following prerequisites before you can proceed with the solution. For this post, we use the us-east-1 AWS Region.

Deploy the solution

Complete the following steps to deploy the solution:

Download the AWS CDK project from the GitHub repo.
Install the project dependencies (see the sketch after these steps).

Deploy the solution (see the sketch after these steps).

Create a user and log in to the system after validating your email.
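
The repository's README has the authoritative commands for steps 2 and 3; they are not reproduced here. For a typical TypeScript-based AWS CDK project they would look like the following, assuming Node.js, the AWS CDK CLI, and configured AWS credentials (a Python-based CDK app would use pip install -r requirements.txt for dependencies instead):

npm install        # install project dependencies
npx cdk bootstrap  # one time per account and Region, if not already done
npx cdk deploy     # deploy the solution stack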

Validate the knowledge base and run a query

Before allowing users to chat with their documents, the system performs the following steps:

Performs a validation check to determine if documents need to be ingested. This process happens transparently to the user and includes checking document status in DynamoDB and the knowledge base.
Validates that the required documents are successfully ingested and properly indexed before allowing queries (a minimal sketch of this check follows the list).
Returns both the AI-generated answers and relevant citations to source documents, maintaining traceability and empowering users to verify the accuracy of responses.
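
The following is a minimal sketch of that readiness check, using the GetKnowledgeBaseDocuments API against the same CUSTOM document identifiers used at ingestion; the helper is illustrative, not the repository's exact code.

import boto3

bedrock_agent = boto3.client('bedrock-agent')

def documents_ready(knowledge_base_id, data_source_id, file_ids):
    """Return True only when every project file is indexed in the knowledge base."""
    response = bedrock_agent.get_knowledge_base_documents(
        knowledgeBaseId=knowledge_base_id,
        dataSourceId=data_source_id,
        documentIdentifiers=[
            {'dataSourceType': 'CUSTOM', 'custom': {'id': file_id}}
            for file_id in file_ids
        ]
    )
    # Each entry reports a per-document status such as 'INDEXED' or 'PENDING'
    return all(
        detail.get('status') == 'INDEXED'
        for detail in response.get('documentDetails', [])
    )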

The following screenshot illustrates an example of chatting with the documents.

[Screenshot: the just-in-time knowledge base interface displaying project files and the AI-powered question-answering feature]

The following example method for file ingestion stores file information in DynamoDB with a TTL value for automatic expiration. The IngestKnowledgeBaseDocuments call includes essential metadata (user ID, tenant ID, and project ID), enabling precise filtering of this tenant's files in subsequent operations.

# Ingesting files with tenant-specific TTL values
import json
import os
import time
import uuid

import boto3

# Client and table handles (environment variable names are illustrative;
# see the repository for the actual wiring)
bedrock_agent = boto3.client('bedrock-agent')
knowledge_base_files_table = boto3.resource('dynamodb').Table(os.environ['KNOWLEDGE_BASE_FILES_TABLE'])
KNOWLEDGE_BASE_ID = os.environ['KNOWLEDGE_BASE_ID']
DATA_SOURCE_ID = os.environ['DATA_SOURCE_ID']

def ingest_files(user_id, tenant_id, project_id, files):
    # Get tenant configuration and calculate TTL
    tenants = json.loads(os.environ.get('TENANTS'))['Tenants']
    tenant = find_tenant(tenant_id, tenants)  # repo helper that selects the tenant record by ID
    ttl = int(time.time()) + (int(tenant['FilesTTLHours']) * 3600)

    # For each file, create a record with TTL and start ingestion
    for file in files:
        file_id = file['id']
        s3_key = file.get('s3Key')
        bucket = file.get('bucket')

        # Create a record in the knowledge base files table with TTL
        knowledge_base_files_table.put_item(
            Item={
                'id': file_id,
                'userId': user_id,
                'tenantId': tenant_id,
                'projectId': project_id,
                'documentStatus': 'ready',
                'createdAt': int(time.time()),
                'ttl': ttl  # TTL value for automatic expiration
            }
        )

        # Start the ingestion job with tenant, user, and project metadata for filtering
        bedrock_agent.ingest_knowledge_base_documents(
            knowledgeBaseId=KNOWLEDGE_BASE_ID,
            dataSourceId=DATA_SOURCE_ID,
            clientToken=str(uuid.uuid4()),
            documents=[
                {
                    'content': {
                        'dataSourceType': 'CUSTOM',
                        'custom': {
                            'customDocumentIdentifier': {
                                'id': file_id
                            },
                            's3Location': {
                                'uri': f"s3://{bucket}/{s3_key}"
                            },
                            'sourceType': 'S3_LOCATION'
                        }
                    },
                    'metadata': {
                        'type': 'IN_LINE_ATTRIBUTE',
                        'inlineAttributes': [
                            {'key': 'userId', 'value': {'stringValue': user_id, 'type': 'STRING'}},
                            {'key': 'tenantId', 'value': {'stringValue': tenant_id, 'type': 'STRING'}},
                            {'key': 'projectId', 'value': {'stringValue': project_id, 'type': 'STRING'}},
                            {'key': 'fileId', 'value': {'stringValue': file_id, 'type': 'STRING'}}
                        ]
                    }
                }
            ]
        )

During a query, you can use the associated metadata to construct parameters that make sure you only retrieve files belonging to this specific tenant. For example:

    filter_expression = {
        "andAll": [
            {
                "equals": {
                    "key": "tenantId",
                    "value": tenant_id
                }
            },
            {
                "equals": {
                    "key": "projectId",
                    "value": project_id
                }
            },
            {
                "in": {
                    "key": "fileId",
                    "value": file_ids
                }
            }
        ]
    }

    # Create base parameters for the API call
    # (bedrock_agent_runtime = boto3.client('bedrock-agent-runtime'), created at module scope)
    retrieve_params = {
        'input': {
            'text': query
        },
        'retrieveAndGenerateConfiguration': {
            'type': 'KNOWLEDGE_BASE',
            'knowledgeBaseConfiguration': {
                'knowledgeBaseId': knowledge_base_id,
                'modelArn': 'arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-pro-v1:0',
                'retrievalConfiguration': {
                    'vectorSearchConfiguration': {
                        'numberOfResults': limit,
                        'filter': filter_expression
                    }
                }
            }
        }
    }
    response = bedrock_agent_runtime.retrieve_and_generate(**retrieve_params)
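
The retrieve_and_generate response contains both the generated answer and the citations mentioned earlier. A typical way to unpack it:

    # Unpack the generated answer and its supporting citations
    answer = response['output']['text']
    for citation in response.get('citations', []):
        for reference in citation.get('retrievedReferences', []):
            location = reference.get('location')  # where the cited chunk came from (for example, an S3 URI)
            metadata = reference.get('metadata')  # includes the tenantId, projectId, and fileId set at ingestion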

Manage the document lifecycle with TTL

To further optimize resource usage and costs, you can implement an intelligent document lifecycle management system using the DynamoDB Time-to-Live (TTL) feature. This consists of the following steps:

When a document is ingested into the knowledge base, a record is created with a configurable TTL value.
This TTL is refreshed when the document is accessed.
A DynamoDB stream, filtered for TTL expiration events, triggers a cleanup Lambda function (a sketch of the event source mapping filter follows the function code below).
The Lambda function removes expired documents from the knowledge base.

See the following code:

# Lambda function triggered by DynamoDB Streams when TTL expires items
import os
import uuid

import boto3

bedrock_agent = boto3.client('bedrock-agent')
# Environment variable names are illustrative
knowledge_base_id = os.environ['KNOWLEDGE_BASE_ID']
data_source_id = os.environ['DATA_SOURCE_ID']

def lambda_handler(event, context):
    """
    This function is triggered by DynamoDB Streams when TTL expires items.
    It removes expired documents from the knowledge base.
    """

    # Process each record in the event
    for record in event.get('Records', []):
        # Check if this is a TTL expiration event (REMOVE event from DynamoDB Stream)
        if record.get('eventName') == 'REMOVE':
            # TTL deletions are performed by the DynamoDB service principal
            user_identity = record.get('userIdentity', {})
            if user_identity.get('type') == 'Service' and user_identity.get('principalId') == 'dynamodb.amazonaws.com':
                # Extract the file ID from the record's keys
                keys = record.get('dynamodb', {}).get('Keys', {})
                file_id = keys.get('id', {}).get('S')

                # Delete the document from the knowledge base
                bedrock_agent.delete_knowledge_base_documents(
                    clientToken=str(uuid.uuid4()),
                    knowledgeBaseId=knowledge_base_id,
                    dataSourceId=data_source_id,
                    documentIdentifiers=[
                        {
                            'custom': {
                                'id': file_id
                            },
                            'dataSourceType': 'CUSTOM'
                        }
                    ]
                )
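
The stream filter mentioned in the lifecycle steps can be applied at the event source mapping so this function is only invoked for TTL deletions rather than for every stream record. The following is a sketch using boto3, with placeholder ARN and function names:

import json

import boto3

lambda_client = boto3.client('lambda')

# Invoke the cleanup function only for REMOVE events written by the DynamoDB
# service principal, which is how TTL expirations appear in the stream
lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:dynamodb:us-east-1:111122223333:table/ExampleTable/stream/ExampleTimestamp',  # placeholder
    FunctionName='ttl-cleanup-function',  # placeholder
    StartingPosition='LATEST',
    FilterCriteria={
        'Filters': [
            {
                'Pattern': json.dumps({
                    'eventName': ['REMOVE'],
                    'userIdentity': {
                        'type': ['Service'],
                        'principalId': ['dynamodb.amazonaws.com']
                    }
                })
            }
        ]
    }
)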

Multi-tenant isolation with tiered service levels

Our architecture enables sophisticated multi-tenant isolation with tiered service levels:

Tenant-specific document filtering – Each query includes user, tenant, and file-specific filters, allowing the system to reduce the number of documents being queried.
Configurable TTL values – Different tenant tiers can have different TTL configurations. For example:

Free tier: Five documents ingested with a 7-day TTL and five queries per minute.
Standard tier: 100 documents ingested with a 30-day TTL and 10 queries per minute.
Premium tier: 1,000 documents ingested with a 90-day TTL and 50 queries per minute.
You can configure additional limits, such as total queries per month or total ingested files per day or month (an example tenant configuration follows).
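
The ingestion code shown earlier reads this configuration from a TENANTS environment variable containing a Tenants list with a FilesTTLHours value per tenant. The other keys in the following example are illustrative names for the remaining limits, not necessarily those used in the repository:

{
    "Tenants": [
        {"Id": "free-tier-tenant", "FilesTTLHours": 168, "MaxFiles": 5, "QueriesPerMinute": 5},
        {"Id": "standard-tier-tenant", "FilesTTLHours": 720, "MaxFiles": 100, "QueriesPerMinute": 10},
        {"Id": "premium-tier-tenant", "FilesTTLHours": 2160, "MaxFiles": 1000, "QueriesPerMinute": 50}
    ]
}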

Clean up

To clean up the resources created in this post, run the following command from the same location where you performed the deploy step:
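
The exact command depends on the project; for a standard AWS CDK app it is typically:

npx cdk destroy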

Conclusion

The just-in-time knowledge base architecture presented in this post transforms document management across multiple tenants by processing documents only when queried, reducing the unused consumption of traditional RAG systems. This serverless implementation uses Amazon Bedrock, OpenSearch Serverless, and the DynamoDB TTL feature to create a lean system with intelligent document lifecycle management, configurable tenant limits, and strict data isolation, which is essential for SaaS providers offering tiered pricing models.

This solution directly addresses cost structure and infrastructure limitations of traditional systems, particularly for deployments handling numerous small to mid-sized tenants with transitory projects. This architecture combines on-demand document processing with automated lifecycle management, delivering a cost-effective, scalable resource that empowers organizations to focus on extracting insights rather than managing infrastructure, while maintaining security boundaries between tenants.

Ready to implement this architecture? The full sample code is available in the GitHub repository.

About the author

Steven Warwick is a Senior Solutions Architect at AWS, where he leads customer engagements to drive successful cloud adoption and specializes in SaaS architectures and Generative AI solutions. He produces educational content including blog posts and sample code to help customers implement best practices, and has led programs on GenAI topics for solution architects. Steven brings decades of technology experience to his role, helping customers with architectural reviews, cost optimization, and proof-of-concept development.


