chore: update PraisonAI version to 2.2.21 across Dockerfiles and docu… #556
```diff
@@ -12,7 +12,7 @@
 from dotenv import load_dotenv
 from PIL import Image
 from tavily import TavilyClient
-from crawl4ai import AsyncWebCrawler
+from crawl4ai import AsyncAsyncWebCrawler

 # Local application/library imports
 import chainlit as cl
```

```diff
@@ -72,7 +72,7 @@ async def tavily_web_search(query):
 response = tavily_client.search(query)
 logger.debug(f"Tavily search response: {response}")

-async with AsyncWebCrawler() as crawler:
+async with AsyncAsyncWebCrawler() as crawler:

 results = []
 for result in response.get('results', []):
     url = result.get('url')
```

**Contributor** commented:

> Similar to the import, this instantiation of `AsyncAsyncWebCrawler` is likely intended to be `AsyncWebCrawler`.
>
> Suggested change: revert `AsyncAsyncWebCrawler()` to `AsyncWebCrawler()`.
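For reference, the search-then-crawl pattern these hunks implement can be sketched with a stand-in crawler class, since `crawl4ai` and `tavily` may not be installed in every environment. `StubCrawler` and its `fetch` method are placeholders for illustration, not the real `crawl4ai` API:

```python
import asyncio

class StubCrawler:
    """Placeholder async context manager standing in for the real crawler."""
    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc_info):
        return False

    async def fetch(self, url):
        # Hypothetical method; the real library's call will differ.
        return f"content of {url}"

async def crawl_search_results(response):
    """Mirror the loop in the diff: crawl each URL from a search response."""
    results = []
    async with StubCrawler() as crawler:
        for result in response.get('results', []):
            url = result.get('url')
            if url:
                results.append(await crawler.fetch(url))
    return results

response = {'results': [{'url': 'https://example.com'}]}
print(asyncio.run(crawl_search_results(response)))
```

Whatever the crawler class is named, this is the shape of the code the typo breaks: the name must resolve at import time and again at instantiation.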
||||||
```diff
@@ -12,7 +12,7 @@
 from PIL import Image
 from context import ContextGatherer
 from tavily import TavilyClient
-from crawl4ai import AsyncWebCrawler
+from crawl4ai import AsyncAsyncWebCrawler

 # Local application/library imports
 import chainlit as cl
```

**Contributor** commented:

> There appears to be a typo in the imported class name here as well.
>
> Suggested change: revert `AsyncAsyncWebCrawler` to `AsyncWebCrawler`.

```diff
@@ -153,8 +153,8 @@ async def tavily_web_search(query):
 response = tavily_client.search(query)
 logger.debug(f"Tavily search response: {response}")

-# Create an instance of AsyncWebCrawler
-async with AsyncWebCrawler() as crawler:
+# Create an instance of AsyncAsyncWebCrawler
+async with AsyncAsyncWebCrawler() as crawler:

 # Prepare the results
 results = []
 for result in response.get('results', []):
```
|
||||||
```diff
@@ -7,7 +7,7 @@
 import json
 import dotenv
 from tavily import TavilyClient
-from crawl4ai import AsyncWebCrawler
+from crawl4ai import AsyncAsyncWebCrawler

 dotenv.load_dotenv()
```

```diff
@@ -144,7 +144,7 @@ async def tavily_web_search(self, query):
 })
 response = self.tavily_client.search(query)
 results = []
-async with AsyncWebCrawler() as crawler:
+async with AsyncAsyncWebCrawler() as crawler:

 for result in response.get('results', []):
     url = result.get('url')
     if url:
```
> It seems there might be a typo in the imported class name. `AsyncAsyncWebCrawler` is likely intended to be `AsyncWebCrawler`. Could you please verify the correct class name from the `crawl4ai` library? If this is a typo, it will cause an `ImportError` at runtime.
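Since a misspelled class name only fails when the import line executes, one way to surface this class of typo early and clearly is a small resolver helper. This is a generic sketch, not part of the PR, demonstrated with a stdlib class because `crawl4ai` may not be installed:

```python
import importlib

def resolve_class(module_name, class_name):
    """Import module_name and return the named class, raising a clear
    ImportError if the attribute does not exist (e.g. a typo like
    'AsyncAsyncWebCrawler' instead of 'AsyncWebCrawler')."""
    module = importlib.import_module(module_name)
    try:
        return getattr(module, class_name)
    except AttributeError:
        raise ImportError(
            f"module {module_name!r} has no class {class_name!r}"
        ) from None

# Demonstrate with a stdlib class, since crawl4ai may not be installed:
Counter = resolve_class("collections", "Counter")
print(Counter("aab"))  # → Counter({'a': 2, 'b': 1})
```

With the typo in this PR, `resolve_class("crawl4ai", "AsyncAsyncWebCrawler")` would raise immediately with a message naming the missing class, rather than failing deep inside the search handler.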