- Train/Fine-tune the Extractive QA Multilingual Model (part of our Ask Question/Doubt component).
Please note that, by default, we use https://huggingface.co/ai4bharat/indic-bert as the backbone (BERT topology) and fine-tune it on the SQuAD v1 dataset. IndicBERT is a multilingual ALBERT model pretrained exclusively on 12 major Indian languages: it is pre-trained on a novel monolingual corpus of around 9 billion tokens and subsequently evaluated on a set of diverse tasks. Fine-tuning on the SQuAD v1 (English) dataset therefore automatically results in cross-lingual transfer to the other 11 Indian languages.
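For reference, SQuAD v1-style training data pairs each question with a context passage and character-level answer offsets. A minimal toy record showing the shape the fine-tuning step consumes (the sentence itself is illustrative, not from the dataset; the field names follow the SQuAD v1 format):

```python
# Illustrative SQuAD v1-style record (toy sentence, real field names)
example = {
    "context": "IndicBERT is pretrained on 12 major Indian languages.",
    "question": "How many languages is IndicBERT pretrained on?",
    "answers": {"text": ["12"], "answer_start": [27]},
}

# answer_start is a character offset into the context string
start = example["answers"]["answer_start"][0]
print(example["context"][start:start + 2])  # → 12
```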
Here is the detailed architecture of the `Ask Question/Doubt` component:

```console
$ cd nlp/question_answering

# install dependencies
$ pip install -r requirements.txt

# modify the fine-tuning params mentioned in finetune_qa.sh
```
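Once fine-tuned, extractive QA models of this kind emit start and end logits over the context tokens, and the answer is the highest-scoring valid (start, end) pair. A minimal NumPy sketch of that decoding step (the function name and toy logits are illustrative, not from this repository):

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=30):
    """Pick the (start, end) pair with the highest combined logit score,
    subject to start <= end and span length <= max_len tokens."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

# toy logits: the answer span should be tokens 2..3
start = np.array([0.1, 0.2, 5.0, 0.3, 0.1])
end = np.array([0.0, 0.1, 0.2, 4.0, 0.2])
print(best_span(start, end))  # → (2, 3)
```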
- For our Interactive Conversational AI Examiner component, we currently do not do any training, since it is based on recent generative-AI LLMs (Large Language Models), i.e. open-access models like LLaMA, Falcon, etc. You can update the API configuration by specifying `hf_model_name` (an LLM name available on the Hugging Face Hub). Please check out https://huggingface.co/models

For a performance gain, we can use an INT8-quantized model optimized with Intel® Neural Compressor (one option is https://huggingface.co/decapoda-research/llama-7b-hf-int8).

Please note that, just for fun 😄, we also provide usage of the Azure OpenAI Cognitive Service to access paid-subscription models like GPT-3. You just need to provide `azure_deployment_name` in the configuration below, along with `<your_key>`.
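The backend selection described above might be sketched as follows. The dispatch function and the default model are hypothetical; only the `hf_model_name` and `azure_deployment_name` configuration keys come from the text:

```python
# Hypothetical config dispatch: prefer an Azure OpenAI deployment when one is
# configured, otherwise fall back to a Hugging Face Hub model name.
def resolve_backend(config):
    if config.get("azure_deployment_name"):
        return ("azure", config["azure_deployment_name"])
    # default model name is an illustrative assumption
    return ("huggingface", config.get("hf_model_name", "tiiuae/falcon-7b-instruct"))

print(resolve_backend({"hf_model_name": "decapoda-research/llama-7b-hf-int8"}))
# → ('huggingface', 'decapoda-research/llama-7b-hf-int8')
```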
# Benchmark Results with Intel® oneAPI AI Analytics Toolkit

- We have already added several benchmark results comparing how beneficial the Intel® oneAPI AI Analytics Toolkit is relative to the baseline. Please go to the `benchmark` folder to view the results.
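A benchmark comparison like the ones in the `benchmark` folder ultimately reduces to timing the same workload under both setups. A minimal, library-agnostic timing harness (illustrative only; the toy workload stands in for baseline vs. oneAPI-optimized inference):

```python
import time

def benchmark(fn, *args, repeats=5):
    """Best wall-clock time over several runs (taking the min reduces
    scheduler noise in simple measurements)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# toy workload standing in for a model-inference call
baseline = lambda n: sum(i * i for i in range(n))
t = benchmark(baseline, 100_000)
print(f"best of 5: {t:.4f}s")
```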
# What I learned 