This tutorial uses the surya-ocr library to build a local RAG pipeline,
with bert-base-multilingual-cased (multilingual) as the embedding model
and Qwen1.5-1.8B-Chat as the generation model.
Surya is a document OCR toolkit that can process PDF files, images, and more:
- OCR in 90+ languages that compares favorably with cloud services
- Line-level text detection in any language
- Layout analysis (detection of tables, images, headers, etc.)
- Reading order detection
Test environment: WSL2; a GPU is required.
1. Install the required libraries
```python
!pip install surya-ocr
!pip install python-magic
!pip install -U transformers
!pip install -U sentence_transformers
!pip install -U numpy
!pip install faiss-cpu
```
2. Import all the libraries
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sentence_transformers import SentenceTransformer
import faiss
import numpy as np
import io
import magic
import pypdfium2
from typing import List
from PIL import Image

from surya.detection import batch_text_detection
from surya.layout import batch_layout_detection
from surya.model.detection.segformer import load_model, load_processor
from surya.model.recognition.model import load_model as load_rec_model
from surya.model.recognition.processor import load_processor as load_rec_processor
from surya.model.ordering.processor import load_processor as load_order_processor
from surya.model.ordering.model import load_model as load_order_model
from surya.ordering import batch_ordering
from surya.postprocessing.heatmap import draw_polys_on_image
from surya.ocr import run_ocr
from surya.postprocessing.text import draw_text_on_image
from surya.languages import CODE_TO_LANGUAGE
from surya.input.langs import replace_lang_with_code
from surya.schema import OCRResult, TextDetectionResult, LayoutResult, OrderResult
from surya.settings import settings
```
3. Define the PDF-processing functions
```python
def open_pdf(pdf_file):
    # Read the file into memory, then open it as a pypdfium2 document
    with open(pdf_file, 'rb') as file:
        pdf_data = file.read()
    stream = io.BytesIO(pdf_data)
    return pypdfium2.PdfDocument(stream)


def page_count(pdf_file):
    doc = open_pdf(pdf_file)
    return len(doc)


def get_page_image(pdf_file, page_num, dpi=96):
    doc = open_pdf(pdf_file)
    renderer = doc.render(
        pypdfium2.PdfBitmap.to_pil,
        page_indices=[page_num - 1],
        scale=dpi / 72,
    )
    png = list(renderer)[0]
    png_image = png.convert("RGB")
    return png_image


def ocr(img, langs: List[str]) -> (Image.Image, OCRResult):
    replace_lang_with_code(langs)
    img_pred = run_ocr([img], [langs], det_model, det_processor, rec_model, rec_processor)[0]
    bboxes = [l.bbox for l in img_pred.text_lines]
    text = [l.text for l in img_pred.text_lines]
    rec_img = draw_text_on_image(bboxes, text, img.size, langs, has_math="_math" in langs)
    return rec_img, img_pred


def load_det_cached():
    checkpoint = settings.DETECTOR_MODEL_CHECKPOINT
    return load_model(checkpoint=checkpoint), load_processor(checkpoint=checkpoint)


def load_rec_cached():
    return load_rec_model(), load_rec_processor()
```
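In `get_page_image`, the `scale=dpi / 72` factor converts the page's size in PDF points (1 point = 1/72 inch) into the requested pixel density. A small stdlib-only sketch of that arithmetic (the US Letter dimensions below are illustrative, not taken from the tutorial's PDF):

```python
def pixels_for_page(width_pt: float, height_pt: float, dpi: int = 96) -> tuple:
    """Convert a page size in PDF points (72 per inch) to pixel
    dimensions at a given DPI -- the same factor passed to doc.render(scale=...)."""
    scale = dpi / 72
    return round(width_pt * scale), round(height_pt * scale)

# A US Letter page (612 x 792 pt) rendered at the default dpi=96:
print(pixels_for_page(612, 792))  # (816, 1056)
```

Raising `dpi` gives the OCR model more pixels to work with at the cost of slower rendering and recognition.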
4. The embedding model and the generation model
```python
# Small model for creating embeddings
#embedder = SentenceTransformer('Qwen/Qwen1.5-0.5B-Chat')
embedder = SentenceTransformer('bert-base-multilingual-cased')

# Larger model for generation
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen1.5-1.8B-Chat')

device = "cuda"  # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-1.8B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
```
5. Initialize the parameters
```python
languages = ["English"]

# Initialize empty lists to store the embeddings and page texts
embeddings_list = []
documents = []

det_model, det_processor = load_det_cached()
rec_model, rec_processor = load_rec_cached()
```
`languages=["English"]` supports multiple languages; you can add others yourself, for example Chinese:
```python
languages = ["English", "Chinese"]
```
6. Process the PDF file
```python
in_file = "data/Learning to Model the World with Language.pdf"
file_type = magic.from_file(in_file, mime=True)
#print(file_type)  # e.g. 'application/pdf'

if "pdf" in file_type:
    # Use a separate variable name so the page_count() function is not shadowed
    num_pages = page_count(in_file)
    #print(f"num_pages=", num_pages)

    # Loop over every page
    for page_number in range(num_pages):
        pil_image = get_page_image(in_file, page_number + 1)
        rec_img, pred = ocr(pil_image, languages)
        document = "\n".join([p.text for p in pred.text_lines])
        embeddings = embedder.encode(document)
        embeddings_list.append(embeddings)
        print(f"page {page_number + 1},{len(document)}:", document)
        # print(f"embeddings:{len(embeddings)},", embeddings)
        documents.append(document)
```
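python-magic detects the MIME type by inspecting the file's contents via libmagic. If libmagic is not installed on your system, a rough stdlib fallback is `mimetypes`, which only looks at the file extension — a sketch, not a substitute for content-based detection:

```python
import mimetypes

def guess_file_type(path: str) -> str:
    # Extension-based guess; returns e.g. 'application/pdf' for *.pdf
    mime, _encoding = mimetypes.guess_type(path)
    return mime or "application/octet-stream"

print(guess_file_type("data/Learning to Model the World with Language.pdf"))  # application/pdf
```

Unlike `magic.from_file`, this will misreport a renamed file, so prefer python-magic when it is available.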
7. Build the FAISS index and run inference
```python
# Build the FAISS index
if embeddings_list:
    embeddings_array = np.vstack(embeddings_list)
    index = faiss.IndexFlatL2(embeddings_array.shape[1])
    index.add(embeddings_array.astype('float32'))

    # Embed the user's question
    #question = "What is the theme of the document? "
    question = "这份文档的主题是什么?"  # "What is the theme of this document?"
    query_embedding = embedder.encode([question])[0].astype('float32')

    # Retrieve the most relevant document segments
    combined_segments = ""
    k = 3  # number of relevant documents you want to retrieve
    D, I = index.search(np.array([query_embedding]), k=k)
    print("D:", D)
    print("I:", I)

    print("Top", k, "most relevant document segments:")
    for idx, segment_index in enumerate(I[0]):
        most_relevant_segment = documents[segment_index]
        print(f"{idx+1}: {most_relevant_segment}\n")
        combined_segments += " " + most_relevant_segment

    prompt = combined_segments + "\n\n###\n\n" + question + "\n\n用中文回答"  # "Answer in Chinese"
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(device)

    generated_ids = model.generate(
        model_inputs.input_ids,
        max_new_tokens=512
    )
    # Strip the prompt tokens so only the newly generated answer is decoded
    generated_ids = [
        output_ids[len(input_ids):]
        for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print("Answer to the question:", response)
else:
    print("No embeddings found. Please check your data.")
```
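`IndexFlatL2` performs exact, brute-force nearest-neighbor search and returns *squared* Euclidean distances; the `D` (distances) and `I` (indices) arrays printed above hold exactly what this dependency-free sketch computes, on toy 2-D vectors standing in for the page embeddings:

```python
def l2_search(vectors, query, k=3):
    """Brute-force equivalent of faiss.IndexFlatL2 search: return the k
    nearest (squared-L2 distance, index) pairs for `query` among `vectors`."""
    scored = [
        (sum((q - v) ** 2 for q, v in zip(query, vec)), i)
        for i, vec in enumerate(vectors)
    ]
    scored.sort()
    return scored[:k]

# Toy "page embeddings": the closest page comes back first.
pages = [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]]
print(l2_search(pages, [0.9, 0.1], k=2))  # nearest is page index 1, then index 0
```

FAISS does the same computation with SIMD-optimized code, which is why it stays fast even for thousands of high-dimensional page embeddings.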
A portion of the output (the model answers in Chinese, as the prompt requests):
```
Answer to the question: 本文探讨了利用语言来与人类在视觉环境中的交互,并且提出了一种名为"Dynalang"的Agent,它通过预测未来文本和图像输入的方式来学习如何使用这种语言。具体步骤如下:

1. **提出Dynalang**:该Agent利用多模态世界模型,即编码包含所有感官输入(如视频和文本)的压缩表示,并将这些信息与其行动相结合来实现视觉环境下的行为理解。

2. **环境模型学习**:世界模型以每帧视频和每个时间步语言输入为输入,同时对动作序列进行压缩表示并将其馈送给序列模型预测下一个代表变量 ˆ z t + 1。这个世界模型由一个循环状态空间模型(RSTM)组成,其中序列模型由GRU(格雷码编码器)实现,能够从当前状态(h t)中连续学习和推断输出的编码表示。

3. **语言生成能力**:Dynalang能够根据其感知向量和之前的经验生成语言,例如,在"我放下了碗"这样的描述性任务中,当语言不谈论任务时,仅与其关联的部分语言信号可以作为优化行动的潜在线索。

4. **模型融合与更新**:训练过程中,Dynalang不仅会利用过去语言对当前状态进行预测,还会不断更新世界模型以最小化基于未来奖励的期望折扣总和,即Lpred + Lrepr,其中L是未来奖励函数,γ<1 是折扣因子,T是episode长度,CT=0 表示Episode结束。

5. **应用范围与任务多样性**:Dynalang可以应用于多种不同类型的环境中,包括复杂、视觉密集型的家庭清洁环境,以及具有符号输入的任务,如游戏手册,需要进行复杂的多级推理和上下文理解。此外,它还可以通过预训练数据集,即文本或视频数据集,在没有实际执行或任务奖励的情况下,进行在线训练,使Agent持续学习语言及其如何与现实世界的相互关系。

6. **总结与讨论**:本文介绍了一个利用语言进行交互的新模型-Dynalang,通过构建语言条件化的世界模型并与未来预言相结合,实现了视觉环境中不同种类语言的理解和有效应用,包括环境描述、游戏规则和指示。尽管主要研究集中在使用任务导向的语言命令直接向策略传递信息上,但文中还展示了如何将广泛涵盖语言的多样性与单一模型中next-token预测
```