diff --git a/python/cross_service/document_OCR_with_textract_bedrock/README.md b/python/cross_service/document_OCR_with_textract_bedrock/README.md
new file mode 100644
index 00000000000..1330aa71ec4
--- /dev/null
+++ b/python/cross_service/document_OCR_with_textract_bedrock/README.md
@@ -0,0 +1,195 @@
+# GenAI - OCR Document Recognition in KYC Scenarios
+
+## Background
+
+In KYC (Know Your Customer) applications, a typical scenario requires users to upload identification documents for automated information extraction. Manual extraction is too slow to scale, so professional OCR tools have become the standard solution.
+
+### Professional OCR Tools
+
+Professional OCR pipelines typically apply image preprocessing techniques (noise reduction, binarization, tilt correction) and employ deep learning models (CNNs, LSTMs) for text detection and recognition.
+
+#### Advantages:
+- Supports multi-language recognition
+- Can preserve table structures
+- Handles complex layout analysis
+- Adapts to low-quality images and handwriting
+- Easy to integrate via APIs
+
+#### Limitations:
+- Limited generalization capabilities
+- Less flexible compared to LLMs
+- Cannot perform accurate intent recognition
+- Unable to make complex inferences (e.g., you cannot ask an OCR tool open-ended questions about a document)
+
+### Multimodal Large Language Models (MM-LLM)
+
+MM-LLMs combine the ability to "read" images with the ability to reason about their content.
+#### Advantages:
+- Strong generalization: can answer questions directly from the image
+
+#### Limitations (as of March 21, 2025):
+- Lower "visual acuity" than dedicated OCR tools; accuracy drops significantly with tilted images
+- Particularly challenging with user-submitted photos that may be rotated or flipped
+
+## Solution: AWS OCR + LLM Integration
+
+Based on the observations above, we introduce a solution that combines OCR (Amazon Textract) with an LLM (Amazon Bedrock). The solution architecture is shown below:
+
+![Solution architecture](images/textract+llm.png)
+
+### Architecture Flow:
+1. Client sends image to Amazon Textract for parsing
+2. Amazon Textract returns parsing results
+3. Client sends Textract results as part of the prompt along with original image to LLM
+4. LLM returns final analysis results
+
+### Implementation
+
+See [app.py](app.py).
+
+#### Sample Prompt
+```python
+prompt = f"""
+
+{ocr_text}
+
+Analyze this image and the corresponding OCR results carefully, and provide the following information:
+1. Name: Extract the full name of the person from the document.
+2. CPF: Extract the CPF number (Brazilian tax identification number) if present.
+3. DOB: Extract the date of birth in the format YYYY-MM-DD.
+4. Text Language: Identify the primary language of the text in the image.
+Format your response as follows:
+Name: [Full Name]
+CPF: [CPF Number]
+DOB: [YYYY-MM-DD]
+Text Language: [Language]
+If you can't find or determine any of these fields, use 'Unknown'. Please note the picture may be tilted.
+"""
+```
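Because the prompt pins the response to a fixed `Field: value` layout, the reply can be parsed with simple line matching. A minimal standalone sketch of that parsing step (the function name is illustrative; app.py does the equivalent inline):

```python
import re

def parse_kyc_response(text):
    """Parse the fixed 'Field: value' layout requested in the prompt."""
    fields = {"Name": "Unknown", "CPF": "Unknown",
              "DOB": "Unknown", "Text Language": "Unknown"}
    for line in text.splitlines():
        for field in fields:
            if line.startswith(field + ":"):
                fields[field] = line.split(":", 1)[1].strip()
    # Basic validation: a Brazilian CPF has exactly 11 digits
    digits = re.sub(r"\D", "", fields["CPF"])
    fields["CPF"] = digits if len(digits) == 11 else "Unknown"
    return fields
```

Anything the model reports as `Unknown`, or a CPF that does not yield 11 digits, falls back to `Unknown` rather than propagating a malformed value downstream.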
+
+## Performance Analysis
+
+### Source data:
+
+![Sample source documents](images/source.png)
+
+### Results
+
+```
+% python app.py
+{
+"fileFullName": "./images/10.jpeg",
+"textLanguage": "Portuguese",
+"Name": "Unknown",
+"CPF": "xxxxxxx",
+"DOB": "Unknown"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/06.jpeg",
+"textLanguage": "Portuguese",
+"Name": "VAGNER TAVARES DE OLIVEIRA",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/07.jpeg",
+"textLanguage": "Portuguese",
+"Name": "CICERA EUGENIA DOS SANTOS",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/01.jpeg",
+"textLanguage": "Portuguese/English (Bilingual document)",
+"Name": "FRANCIELE DANIELA RODRIGUES",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/11.png",
+"textLanguage": "Portuguese",
+"Name": "THALES HENRIQUE MOGARI",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/02.jpeg",
+"textLanguage": "Portuguese",
+"Name": "LUIS FELIPE GONCALVES NOGUEIRA DA SILVA",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/03.jpeg",
+"textLanguage": "Portuguese",
+"Name": "LUCAS DENILSON DA SILVA COSTA",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/04.jpeg",
+"textLanguage": "Portuguese",
+"Name": "JESSICA OLIVEIRA DE SOUSA",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/08.jpeg",
+"textLanguage": "Portuguese/English (Bilingual document)",
+"Name": "Arthur Marcel Dos Santos",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/09.jpeg",
+"textLanguage": "Portuguese",
+"Name": "LUCAS FERNANDES DE OLIVEIRA",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+{
+"fileFullName": "./images/05.jpeg",
+"textLanguage": "Portuguese",
+"Name": "THALITA ALVES SIQUEIRA",
+"CPF": "xxxxxxxx",
+"DOB": "xxxxxxxx"
+}
+————————————————————————————————————————
+```
+
+### Result summary:
+- Tested with 11 sample images in various orientations
+- 10 out of 11 images were accurately recognized
+- Only image #10 (with significant contamination) failed recognition
+- System successfully handled various image orientations
+
+Image #10 failed because it was heavily tainted, so this result was expected:
+
+![Tainted sample image](images/tainted.png)
+
+### Cost Analysis
+
+
+For processing 1 document:
+| Service | Average Usage per Image | Unit Price | Total Price | Notes |
+|---------|-------------------------|------------|-------------|-------|
+| Amazon Textract | 1 page | $0.0015 | $0.0015 | - |
+| Amazon Bedrock (Claude 3.5 Sonnet v2) | Input: 1300 tokens<br>Output: 500 tokens | Input: $0.003/1K tokens<br>Output: $0.015/1K tokens | $0.0105 | Image input tokens ≈ pixels/750 |
+| **Total** | - | - | **$0.012** | - |
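As a sanity check, the table can be approximated with a few lines of arithmetic (the helper names are illustrative; the prices and the pixels/750 token estimate come from the table, and the computed figure lands within a tenth of a cent of the rounded totals above):

```python
def estimate_image_tokens(width: int, height: int) -> int:
    """Claude image input tokens are roughly (width * height) / 750."""
    return (width * height) // 750

def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      textract_per_page: float = 0.0015,
                      in_per_1k: float = 0.003,
                      out_per_1k: float = 0.015) -> float:
    """Per-document cost: one Textract page plus Bedrock token charges."""
    bedrock = input_tokens / 1000 * in_per_1k + output_tokens / 1000 * out_per_1k
    return textract_per_page + bedrock

# A 1500x1000 px document image contributes about 2000 input tokens
print(estimate_image_tokens(1500, 1000))
print(round(estimate_cost_usd(input_tokens=1300, output_tokens=500), 4))
```

Token counts vary per document, so treat these as rough averages rather than a billing guarantee.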
+
diff --git a/python/cross_service/document_OCR_with_textract_bedrock/app.py b/python/cross_service/document_OCR_with_textract_bedrock/app.py
new file mode 100644
index 00000000000..86d920ccd27
--- /dev/null
+++ b/python/cross_service/document_OCR_with_textract_bedrock/app.py
@@ -0,0 +1,216 @@
+import os
+import json
+import boto3
+import base64
+import re
+from datetime import datetime
+from PIL import Image
+import io
+
+
+# initialize bedrock client
+bedrock = boto3.client(
+ service_name='bedrock-runtime',
+ region_name='us-west-2' # Adjust according to your AWS region
+)
+
+textract = boto3.client(
+ service_name='textract',
+ region_name='us-west-2'
+)
+image_directory = input("Enter your image directory path: ")
+
+def get_image_files(directory):
+ image_files = []
+ for root, dirs, files in os.walk(directory):
+ for file in files:
+ if file.lower().endswith(('.png', '.jpg', '.jpeg', '.gif', '.bmp')):
+ image_files.append(os.path.join(root, file))
+ return image_files
+
+def compress_base64_image(base64_string, max_size_bytes=5*1024*1024):
+ # Decode base64 string
+ image_data = base64.b64decode(base64_string)
+
+ # Open image
+ with Image.open(io.BytesIO(image_data)) as img:
+ # Convert to RGB mode
+        if img.mode in ('RGBA', 'LA') or (img.mode == 'P' and 'transparency' in img.info):
+            if img.mode == 'P':
+                img = img.convert('RGBA')  # palette images must be converted before alpha can be split out
+            bg = Image.new('RGB', img.size, (255, 255, 255))
+            bg.paste(img, mask=img.split()[-1])  # last band is the alpha channel
+            img = bg
+ elif img.mode != 'RGB':
+ img = img.convert('RGB')
+
+ # Initial quality
+ quality = 95
+
+ while True:
+ # Save image to memory
+ buffer = io.BytesIO()
+ img.save(buffer, format="JPEG", quality=quality)
+
+ # Get compressed base64 string
+ compressed_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')
+
+ # Check size
+ if len(compressed_base64) <= max_size_bytes:
+ return compressed_base64
+
+ # If still too large, reduce quality and try again
+ quality -= 5
+ if quality < 20: # Set a minimum quality limit
+ raise ValueError("Unable to compress image to specified size")
+
+def encode_image(image_path):
+ try:
+ with open(image_path, "rb") as image_file:
+ base64_image = base64.b64encode(image_file.read()).decode('utf-8')
+
+ # Check size after base64 encoding
+ if len(base64_image) > 5*1024*1024:
+ print(f"Compressing image: {image_path}")
+ base64_image = compress_base64_image(base64_image)
+
+ return base64_image
+ except Exception as e:
+ print(f"Error processing image: {image_path}, error: {str(e)}")
+ return None
+
+def textract_ocr(image_path):
+ with open(image_path, 'rb') as img_file:
+ img_bytes = img_file.read()
+ response = textract.detect_document_text(Document={'Bytes': img_bytes})
+
+ # Get text blocks
+ blocks = response['Blocks']
+ ocr_text = ''
+ # Only print text content
+ for block in blocks:
+ if block['BlockType'] == 'LINE':
+ ocr_text += block['Text'] + '\n'
+ return ocr_text
+
+def analyze_image(file_path):
+ try:
+ ocr_text = textract_ocr(file_path)
+ base64_image = encode_image(file_path)
+ if base64_image is None:
+ raise ValueError("Image encoding failed")
+
+ prompt = f"""
+
+{ocr_text}
+
+Analyze this image and the corresponding OCR results carefully, and provide the following information:
+
+1. Name: Extract the full name of the person from the document.
+2. CPF: Extract the CPF number (Brazilian tax identification number) if present.
+3. DOB: Extract the date of birth in the format YYYY-MM-DD.
+4. Text Language: Identify the primary language of the text in the image.
+
+Format your response as follows:
+
+Name: [Full Name]
+CPF: [CPF Number]
+DOB: [YYYY-MM-DD]
+Text Language: [Language]
+
+If you can't find or determine any of these fields, use 'Unknown'. Please note the picture may be tilted.
+"""
+
+ body = json.dumps({
+ "anthropic_version": "bedrock-2023-05-31",
+ "max_tokens": 1000,
+ "messages": [
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "image",
+ "source": {
+ "type": "base64",
+                                "media_type": "image/jpeg",  # assumes JPEG input; use image/png when sending uncompressed PNGs
+ "data": base64_image
+ }
+ },
+ {
+ "type": "text",
+ "text": prompt
+ }
+ ]
+ },
+ {
+ "role": "assistant",
+                "content": "I'll analyze the image and the OCR results, and respond in exactly the requested format."
+ }
+ ],
+ "temperature": 0,
+ "top_p": 1,
+ })
+
+ response = bedrock.invoke_model(
+ body=body,
+ modelId='us.anthropic.claude-3-5-sonnet-20241022-v2:0',
+ contentType='application/json',
+ accept='application/json'
+ )
+
+ response_body = json.loads(response.get('body').read())
+ result = response_body['content'][0]['text']
+
+ # Parse results
+ name = "Unknown"
+ cpf = "Unknown"
+ dob = "Unknown"
+ text_language = "Unknown"
+
+        # Split on the first colon so a missing space after it can't raise an IndexError
+        for line in result.split('\n'):
+            if line.startswith("Name:"):
+                name = line.split(':', 1)[1].strip()
+            elif line.startswith("CPF:"):
+                cpf = line.split(':', 1)[1].strip()
+            elif line.startswith("DOB:"):
+                dob = line.split(':', 1)[1].strip()
+            elif line.startswith("Text Language:"):
+                text_language = line.split(':', 1)[1].strip()
+
+ # Validate and format CPF
+ if cpf != "Unknown":
+ cpf = re.sub(r'\D', '', cpf) # Remove non-digit characters
+ if len(cpf) != 11:
+ cpf = "Unknown"
+
+ # Validate and format date
+ if dob != "Unknown":
+ try:
+ parsed_date = datetime.strptime(dob, "%Y-%m-%d")
+ dob = parsed_date.strftime("%Y-%m-%d")
+ except ValueError:
+ dob = "Unknown"
+
+ return {
+ "fileFullName": file_path,
+ "textLanguage": text_language,
+ "Name": name,
+ "CPF": cpf,
+ "DOB": dob
+ }
+ except Exception as e:
+ print(f"Error processing image: {file_path}, error: {str(e)}")
+ return {
+ "fileFullName": file_path,
+ "textLanguage": "Unknown",
+ "Name": "Unknown",
+ "CPF": "Unknown",
+ "DOB": "Unknown"
+ }
+
+# Main program
+if __name__ == "__main__":
+ image_files = get_image_files(image_directory)
+
+ for image_file in image_files:
+ result = analyze_image(image_file)
+ print(json.dumps(result, indent=2))
+ print("—" * 40)
diff --git a/python/cross_service/document_OCR_with_textract_bedrock/images/source.png b/python/cross_service/document_OCR_with_textract_bedrock/images/source.png
new file mode 100644
index 00000000000..32a2456b926
Binary files /dev/null and b/python/cross_service/document_OCR_with_textract_bedrock/images/source.png differ
diff --git a/python/cross_service/document_OCR_with_textract_bedrock/images/tainted.png b/python/cross_service/document_OCR_with_textract_bedrock/images/tainted.png
new file mode 100644
index 00000000000..9e02a4119bc
Binary files /dev/null and b/python/cross_service/document_OCR_with_textract_bedrock/images/tainted.png differ
diff --git a/python/cross_service/document_OCR_with_textract_bedrock/images/textract+llm.png b/python/cross_service/document_OCR_with_textract_bedrock/images/textract+llm.png
new file mode 100644
index 00000000000..7bd8a8cabe7
Binary files /dev/null and b/python/cross_service/document_OCR_with_textract_bedrock/images/textract+llm.png differ