Thank you for your interest in contributing to the llama-github project! We welcome contributions from the community to help improve and enhance the library. This document outlines the guidelines and best practices for contributing to the project.
## Code of Conduct
By participating in this project, you agree to abide by the [Code of Conduct](CODE_OF_CONDUCT.md). Please read and follow the guidelines to ensure a welcoming and inclusive environment for all contributors.
## Getting Started
To get started with contributing to llama-github, follow these steps:
1. Fork the repository on GitHub.
2. Clone your forked repository to your local machine.
3. Create a new branch for your feature or bug fix.
4. Make your changes and commit them with descriptive commit messages.
5. Push your changes to your forked repository.
6. Submit a pull request to the main repository.
## Development Setup
To set up a development environment for llama-github:

Requirements:
- Python `3.10+`
Setup:
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```
Validation:
```bash
pytest -q
python -m build
```
## Guidelines
- Keep public API changes intentional and well documented.
- Add or update tests for behavioral changes.
- Update docs when changing return shapes, examples, or supported runtime versions.
- Prefer small, reviewable pull requests over broad rewrites.
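
As a sketch of the "add or update tests for behavioral changes" guideline, a minimal pytest-style test might look like the following. The helper `normalize_context` and the file name are hypothetical illustrations, not part of llama-github's public API:

```python
# test_contexts.py -- illustrative only; normalize_context is a hypothetical helper,
# not llama-github's real API.

def normalize_context(item: dict) -> dict:
    """Reduce a retrieved item to the keys downstream code relies on."""
    return {"url": item.get("url", ""), "context": item.get("context", "")}

def test_normalize_context_keeps_required_keys():
    item = {"url": "https://github.com/JetXu-LLM/llama-github",
            "context": "snippet", "score": 0.9}
    out = normalize_context(item)
    # A behavioral change to the helper's output shape would surface here.
    assert set(out) == {"url", "context"}
    assert out["url"].startswith("https://")
```

Run it with `pytest -q` so a behavioral regression is caught before review.
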
## Pull Requests
- Use a clear title and description.
- Include migration notes when behavior changes in a user-visible way.
- Reference related issues when applicable.
- Make sure the repository still passes `pytest -q` and `python -m build`.
## License
By contributing to llama-github, you agree that your contributions will be licensed under the [Apache License 2.0](LICENSE).
## Recognition
We value and appreciate all contributions to the llama-github project. Your contributions will be recognized in the project's release notes and contributor list.
## Contact
If you have any questions or need further assistance, feel free to reach out to the project maintainers at [Voldemort.xu@foxmail.com](mailto:Voldemort.xu@foxmail.com).
Thank you for your contributions and happy coding!

If you like this project or believe it has potential, please give it a ⭐️.

```bash
pip install llama-github
```
Current maintained runtime target: Python `3.10+`.
## Usage
Here's a simple example of how to use llama-github:
```python
from llama_github import GithubRAG

github_rag = GithubRAG(
    # ... (constructor arguments elided in this excerpt)
)

# Retrieve context for a coding question (simple_mode is default set to False)
query = "How to create a NumPy array in Python?"
contexts = github_rag.retrieve_context(
query,
# simple_mode = True
)
print(contexts[0]["url"])
print(contexts[0]["context"])
```
`retrieve_context()` returns a list of context dictionaries. Each item contains at least `context` and `url`.
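
For example, the returned list can be folded into a single prompt string. This is a sketch against the documented `context` and `url` keys; the sample data below is illustrative, not real retrieval output:

```python
def build_prompt(contexts: list[dict], question: str) -> str:
    """Concatenate retrieved contexts, citing each source URL, then append the question."""
    sections = [f"# Source: {c['url']}\n{c['context']}" for c in contexts]
    return "\n\n".join(sections) + f"\n\nQuestion: {question}"

# Illustrative data shaped like retrieve_context() output
sample = [{"url": "https://github.com/numpy/numpy",
           "context": "Use numpy.array([1, 2, 3])."}]
print(build_prompt(sample, "How to create a NumPy array in Python?"))
```
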
For more advanced usage and examples, please refer to the [documentation](docs/usage.md). Runnable low-cost examples are also available in [`examples/`](examples).
## Key Features
- **🔍 Intelligent GitHub Retrieval**: Harness the power of llama-github to retrieve highly relevant code snippets, issues, and repository information from GitHub based on user queries. Our advanced retrieval techniques ensure you find the most pertinent information quickly and efficiently.
- **⚡ Repository Pool Caching**: Llama-github has an innovative repository pool caching mechanism. By caching repositories (including READMEs, structures, code, and issues) across threads, it significantly improves GitHub search retrieval efficiency and minimizes the consumption of GitHub API tokens.
- **🧠 LLM-Powered Question Analysis**: Leverage state-of-the-art language models to analyze user questions and generate highly effective search strategies and criteria. Llama-github intelligently breaks down complex queries, ensuring that you retrieve the most relevant information from GitHub's vast repository network.
- **📚 Comprehensive Context Generation**: Generate rich, contextually relevant answers by seamlessly combining information retrieved from GitHub with the reasoning capabilities of advanced language models. Llama-github excels at handling even the most complex and lengthy questions, providing comprehensive and insightful responses that include extensive context to support your development needs.
- **🚀 Asynchronous Processing Excellence**: Llama-github is built from the ground up to leverage asynchronous programming. With asynchronous mechanisms implemented throughout the codebase, it can handle multiple requests concurrently, significantly boosting overall performance.
- **🔧 Flexible LLM Integration**: Easily integrate llama-github with various LLM providers, embedding models, reranking models, or an injected LangChain-compatible chat model to tailor the library's capabilities to your specific requirements.
- **🔒 Robust Authentication Options**: Llama-github supports both personal access tokens and GitHub App authentication, providing the flexibility to integrate it into different development setups. Whether you're an individual developer or working within an organization, llama-github offers secure and reliable authentication mechanisms.

Our vision is to become a pivotal module in the future of AI-driven development.
### Roadmap
For a historical view of the earlier roadmap, please visit [Vision and Roadmap](VISION_AND_ROADMAP.md).