
Commit a34d70e

bug fix

1 parent 9ebf604 commit a34d70e

File tree

2 files changed: +18 -10 lines

README.md

Lines changed: 17 additions & 9 deletions
@@ -119,7 +119,7 @@ hints to the student to arrive at correct answer, enhancing student engagement a
 ```
 
-- Train/Fine-tune the Extractive QA Multilingual Model (Part of our Ask Question/Doubt Component).
+- Train/Fine-tune the Extractive QA Multilingual Model (Part of our **Ask Question/Doubt** Component).
   Please note that, by default, we use this (https://huggingface.co/ai4bharat/indic-bert) as a Backbone (BERT topology)
   and finetune it on the SQuAD v1 dataset. Moreover, IndicBERT is a multilingual ALBERT model pretrained exclusively on 12 major Indian languages. It is pre-trained on a novel monolingual corpus of around 9 billion tokens and subsequently evaluated on a set of diverse tasks. So finetuning on the SQuAD v1 (English) dataset automatically results in cross-lingual
   transfer to the other 11 Indian languages.
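The extractive QA described in this hunk can be illustrated with a minimal sketch of the span-selection step that runs on top of a BERT-style backbone such as ai4bharat/indic-bert: the model emits one start logit and one end logit per token, and the answer is the highest-scoring bounded-length span. The helper name and the sample logits below are hypothetical, not from the repository.

```python
# Illustration only: span selection for extractive QA. A hypothetical
# sketch, assuming per-token start/end logits from a BERT-style model.

def best_span(start_logits, end_logits, max_answer_len=30):
    """Return (start, end, score) for the best-scoring answer span."""
    best = (0, 0, float("-inf"))
    for s, s_logit in enumerate(start_logits):
        # Only consider spans that start at s and stay within max_answer_len.
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best[2]:
                best = (s, e, score)
    return best

# Tokens 2..3 carry the strongest start/end logits, so they form the answer.
print(best_span([0.1, 0.2, 5.0, 0.3], [0.0, 0.1, 0.2, 4.0])[:2])  # (2, 3)
```

Because the logits come from a multilingual backbone, this same selection logic is what makes the cross-lingual transfer described above work unchanged for any of the supported languages.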
@@ -250,19 +250,21 @@ Here is the detailed architecture of `Ask Question/Doubt` component:
 ```
 
-- For our Interactive Conversational AI Examiner component, as of now we are not doing any training as its based on
-  recent Generative AI LLM (Large Language model) (open access models like LLaMA, Falcon etc.). You can update the API configuration by specifying hf_model_name (LLM name available in huggingface Hub). Please checkout https://huggingface.co/models
+- For our **Interactive Conversational AI Examiner** Component, as of now we are not doing any training, as it's based on
+  recent Generative AI LLMs (Large Language Models) (open-access models like LLaMA, Falcon, etc.). You can update the API configuration by specifying `hf_model_name` (an LLM name available on the Hugging Face Hub). Please check out https://huggingface.co/models for LLMs.
 
   Here, for a performance gain, we can use an INT8 quantized model optimized using Intel® Neural Compressor (a few options are https://huggingface.co/decapoda-research/llama-7b-hf-int8, etc.)
 
-  Please Note that for fun 😄, we also provide usage of Azure OpenAI Cognitive Service to use models like GPT3 paid subscription API. You just need to provide `azure_deployment_name` below configuration and `<your_key>`
+  Please note that, for fun 😄, we also provide usage of the Azure OpenAI Cognitive Service to use models like the GPT-3 paid-subscription API. You just need to provide `azure_deployment_name`, set `llm_name` to `azure_gpt3` in the configuration below, and then add `<your_key>`
 
   ```python
 
   AI_EXAMINER_CONFIG = {
-      "llm_name": "azure_gpt3",
+      "llm_name": "azure_gpt3",  # azure_gpt3, hf_pipeline
       "azure_deployment_name": "text-davinci-003-prod",
+
       "hf_model_name": "TheBloke/falcon-7b-instruct-GPTQ",  # mosaicml/mpt-7b-instruct
+
       "device": 0,  # cuda:0
       "llm_kwargs": {
           "do_sample": True,
@@ -274,6 +276,9 @@ Please Note that for fun 😄, we also provide usage of Azure OpenAI Cognitive S
           "num_return_sequences": 1,
           "stop_sequence": "<|endoftext|>"
       }
+  ...
+
+  os.environ["OPENAI_API_KEY"] = "<your_key>"
   ```
 
 - Start the API server
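The `llm_name` switch documented in the config above can be sketched as a simple dispatch between the two backends. This is a hypothetical helper for illustration, not the repository's actual server wiring; `resolve_backend` and its return shape are assumptions.

```python
# Hypothetical sketch: "azure_gpt3" routes to the Azure OpenAI deployment,
# while "hf_pipeline" routes to the local Hugging Face model named by
# "hf_model_name". Not the repository's actual code.

AI_EXAMINER_CONFIG = {
    "llm_name": "azure_gpt3",  # azure_gpt3, hf_pipeline
    "azure_deployment_name": "text-davinci-003-prod",
    "hf_model_name": "TheBloke/falcon-7b-instruct-GPTQ",
}

def resolve_backend(config):
    """Return (backend, model_or_deployment) selected by the config."""
    name = config["llm_name"]
    if name == "azure_gpt3":
        return "azure", config["azure_deployment_name"]
    if name == "hf_pipeline":
        return "huggingface", config["hf_model_name"]
    raise ValueError(f"unknown llm_name: {name!r}")

print(resolve_backend(AI_EXAMINER_CONFIG))  # ('azure', 'text-davinci-003-prod')
```

With `llm_name` set to `hf_pipeline`, the same call would instead select the Hugging Face model, which is why a single config dict can drive both paths.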
@@ -303,14 +308,17 @@ Please Note that for fun 😄, we also provide usage of Azure OpenAI Cognitive S
 
 - We have already added several benchmark results to compare how beneficial the Intel® oneAPI AI Analytics Toolkit is compared to the baseline. Please go to the `benchmark` folder to view the results. Please note that the shared results are based
   on the provided Intel® Dev Cloud machine *(Intel Xeon Processor (Skylake, IBRS) - 10 vCPUs, 16 GB RAM)*
+
+# Comprehensive Implementation PPT (Presentation)
+
+- Please view `ppt/Intel-oneAPI-Hackathon-Implementation.pdf` for more details.
 
 # What we learned ![image](https://user-images.githubusercontent.com/72274851/218499685-e8d445fc-e35e-4ab5-abc1-c32462592603.png)
 
-
 ![image](assets/Intel-ai-analytics-banner.png)
 
-✅ Utilizing the Intel® AI Analytics Toolkit: By utilizing the Intel® AI Analytics Toolkit, developers can leverage familiar Python* tools and frameworks to accelerate the entire data science and analytics process on Intel® architecture. This toolkit incorporates oneAPI libraries for optimized low-level computations, ensuring maximum performance from data preprocessing to deep learning and machine learning tasks. Additionally, it facilitates efficient model development through interoperability.
+**Utilizing the Intel® AI Analytics Toolkit**: By utilizing the Intel® AI Analytics Toolkit, developers can leverage familiar Python* tools and frameworks to accelerate the entire data science and analytics process on Intel® architecture. This toolkit incorporates oneAPI libraries for optimized low-level computations, ensuring maximum performance from data preprocessing to deep learning and machine learning tasks. Additionally, it facilitates efficient model development through interoperability.
 
-✅ Seamless Adaptability: The Intel® AI Analytics Toolkit enables smooth integration with machine learning and deep learning workloads, requiring minimal modifications.
+**Seamless Adaptability**: The Intel® AI Analytics Toolkit enables smooth integration with machine learning and deep learning workloads, requiring minimal modifications.
 
-✅ Fostered Collaboration: The development of such an application likely involved collaboration with a team comprising experts from diverse fields, including deep learning and data analysis. This experience likely emphasized the significance of collaborative efforts in attaining shared objectives.
+**Fostered Collaboration**: The development of such an application involved collaboration with a team comprising experts from diverse fields, including deep learning and data analysis. This experience emphasized the significance of collaborative efforts in attaining shared objectives.

api/src/config.py

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@
 
 
 AI_EXAMINER_CONFIG = {
-    "llm_name": "azure_gpt3",
+    "llm_name": "azure_gpt3",  # azure_gpt3, hf_pipeline
     "azure_deployment_name": "text-davinci-003-prod",
     "hf_model_name": "TheBloke/falcon-7b-instruct-GPTQ",  # mosaicml/mpt-7b-instruct
     "device": 0,  # cuda:0
