Merged
7 changes: 5 additions & 2 deletions apps/dataset/task/sync.py
@@ -54,5 +54,8 @@ def sync_replace_web_dataset(dataset_id: str, url: str, selector: str):
 def sync_web_document(dataset_id, source_url_list: List[str], selector: str):
     handler = get_sync_web_document_handler(dataset_id)
     for source_url in source_url_list:
-        result = Fork(base_fork_url=source_url, selector_list=selector.split(' ')).fork()
-        handler(source_url, selector, result)
+        try:
+            result = Fork(base_fork_url=source_url, selector_list=selector.split(' ')).fork()
+            handler(source_url, selector, result)
+        except Exception as e:
+            pass
Contributor Author
Here are some observations and a few optimizations for the provided code:

  1. Exception Handling: The current except block is too broad and silently swallows every error (the exception is bound to e but never used, and the handler just does pass). It would be more informative to catch specific exception types and log them, especially if this runs in an environment where logging is enabled.

  2. Fork Functionality: Ensure that the Fork class actually defines a fork method. If selector_list could be empty (for example, when the selector string is blank), consider validating it up front and raising an error with a clear message.

  3. Logging: Log the URL being processed and each successful operation. This helps with debugging and understanding the flow of data.
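As a hypothetical illustration of points 1 and 2, the pattern below narrows the caught exception types and decides per type whether to abort or skip. The FetchError class and process_url function are invented stand-ins, since the exceptions Fork actually raises are not shown in this diff:

```python
import logging

logger = logging.getLogger(__name__)


class FetchError(Exception):
    """Hypothetical error type standing in for whatever Fork raises on fetch failure."""


def process_url(source_url: str, selector: str) -> str:
    """Stand-in for fetching one URL; raises instead of silently failing."""
    if not selector.strip():
        raise ValueError("selector must not be empty")
    if not source_url.startswith(("http://", "https://")):
        raise FetchError(f"unsupported URL: {source_url}")
    return f"fetched {source_url}"


def process_all(urls, selector):
    results = []
    for url in urls:
        try:
            results.append(process_url(url, selector))
        except ValueError:
            # A bad selector affects every URL, so log and stop early.
            logger.error("invalid selector %r", selector)
            raise
        except FetchError as e:
            # A bad URL only affects this iteration; log it and move on.
            logger.warning("skipping %s: %s", url, e)
    return results
```

The key difference from a bare `except Exception: pass` is that each failure mode is named, logged, and handled deliberately.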

With these considerations, here's an optimized version of the function:

from logging import getLogger
from typing import List

from fork_module import Fork  # Replace with the actual module and class imports

logger = getLogger(__name__)


def sync_web_document(dataset_id, source_url_list: List[str], selector: str):
    """Synchronize web documents for a dataset from the given URLs and selector."""
    handler = get_sync_web_document_handler(dataset_id)

    for source_url in source_url_list:
        try:
            result = Fork(base_fork_url=source_url, selector_list=selector.split(' ')).fork()
            handler(source_url, selector, result)

            # Log successful completion
            logger.info(f"Successfully processed {source_url} using selector '{selector}'")

        except Exception as e:
            # Log the failure along with details and move on to the next URL
            logger.error(f"Failed processing {source_url}: {e}")

Key Changes:

  • Logging Integration: Added a log line for each successfully processed URL and for each failure.
  • More Specific Error Handling: Failures are now logged with the failing URL and exception details instead of being silently swallowed by pass, so problems are traceable.
  • Imports and Naming: Moved the imports to module level, added the missing getLogger and List imports, and kept names like logger descriptive. Adjust the fork_module import to match your project structure.

This should make the code more robust and easier to debug while providing valuable insights into its performance and operations.