```
- Train/Fine-tune the Extractive QA Multilingual Model (Part of our **Ask Question/Doubt** Component).
Please note that, by default, we use IndicBERT (https://huggingface.co/ai4bharat/indic-bert) as the backbone (BERT topology) and fine-tune it on the SQuAD v1 dataset. IndicBERT is a multilingual ALBERT model pretrained exclusively on 12 major Indian languages, using a novel monolingual corpus of around 9 billion tokens, and subsequently evaluated on a set of diverse tasks. Fine-tuning on the SQuAD v1 (English) dataset therefore yields cross-lingual transfer to the other 11 Indian languages.
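As a rough illustration of what extractive QA data preparation involves, the sketch below maps a SQuAD-style character-level answer span to token indices. It is a toy, assuming a plain whitespace tokenizer; a real pipeline would use the IndicBERT tokenizer's offset mappings (e.g. `return_offsets_mapping=True`).

```python
# Toy sketch: convert a SQuAD-style character-level answer span into
# start/end token indices, the core preprocessing step for extractive QA.
# A whitespace tokenizer stands in for the real IndicBERT tokenizer.

def tokenize_with_offsets(text):
    """Split on whitespace, returning (token, start_char, end_char) triples."""
    tokens, pos = [], 0
    for tok in text.split():
        start = text.index(tok, pos)
        tokens.append((tok, start, start + len(tok)))
        pos = start + len(tok)
    return tokens

def char_span_to_token_span(text, answer_start, answer_text):
    """Return (start_token_idx, end_token_idx) covering the answer span."""
    answer_end = answer_start + len(answer_text)
    start_idx = end_idx = None
    for i, (_, s, e) in enumerate(tokenize_with_offsets(text)):
        if start_idx is None and e > answer_start:
            start_idx = i  # first token overlapping the answer
        if s < answer_end:
            end_idx = i    # last token overlapping the answer
    return start_idx, end_idx

context = "IndicBERT is pretrained on 12 major Indian languages"
answer = "12 major Indian languages"
span = char_span_to_token_span(context, context.index(answer), answer)
print(span)  # → (4, 7)
```

These token indices become the start/end position labels that the QA head is trained to predict.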
Here is the detailed architecture of the `Ask Question/Doubt` component:
```
- For our **Interactive Conversational AI Examiner** component, we currently do no training, since it is based on recent generative AI LLMs (Large Language Models), i.e., open-access models like LLaMA, Falcon, etc. You can update the API configuration by specifying `hf_model_name` (any LLM name available on the Hugging Face Hub). Please check out https://huggingface.co/models for available LLMs.
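The configuration step above can be sketched as follows. Note that `build_llm_config` and every field other than `hf_model_name` are hypothetical illustrations; only the `hf_model_name` parameter is named in this excerpt.

```python
# Hedged sketch: assemble a generation config keyed by a Hugging Face Hub
# model id. The helper name and extra fields are hypothetical; the README
# only specifies that the API config takes an `hf_model_name`.

def build_llm_config(hf_model_name, max_new_tokens=256, temperature=0.7):
    """Return a config dict for an open-access LLM from the Hub."""
    if "/" not in hf_model_name:
        raise ValueError("expected a Hub id like 'org/model-name'")
    return {
        "hf_model_name": hf_model_name,
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
    }

cfg = build_llm_config("tiiuae/falcon-7b-instruct")
print(cfg["hf_model_name"])  # → tiiuae/falcon-7b-instruct
```

Swapping models is then a one-line change to `hf_model_name`.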
Here, for a performance gain, we can use an INT8 quantized model optimized using Intel® Neural Compressor (a few options include https://huggingface.co/decapoda-research/llama-7b-hf-int8, etc.).
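Intel® Neural Compressor automates calibration and operator selection, but the underlying idea of symmetric per-tensor INT8 quantization can be sketched in a few lines (a conceptual toy, not the library's API):

```python
# Conceptual sketch of symmetric per-tensor INT8 quantization: weights are
# rescaled so the largest magnitude maps to 127, rounded to integers, and
# dequantized by the same scale at inference time.

def quantize_int8(weights):
    """Map floats to int8 values with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.5, -1.27, 0.03, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Rounding error is bounded by half a quantization step (scale / 2).
assert all(abs(a - b) <= s / 2 for a, b in zip(w, w_hat))
```

The int8 values occupy a quarter of the memory of float32 weights, which is where the inference speedup and footprint reduction come from.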
Please note that, for fun 😄, we also provide usage of the Azure OpenAI Cognitive Service to use models like GPT-3 via its paid subscription API. You just need to provide `azure_deployment_name`, set `llm_name` to `hf_pipeline` in the below configuration, and then add `<your_key>`:
    "num_return_sequences": 1,
    "stop_sequence": "<|endoftext|>"
}

...

os.environ["OPENAI_API_KEY"] = "<your_key>"
```
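Rather than hard-coding the key, one common pattern is to read `OPENAI_API_KEY` from the environment and fail fast when it is missing. The helper below is a hedged sketch, not part of the project's actual code:

```python
import os

# Hedged sketch: fetch the Azure OpenAI key from the environment and raise
# a clear error when it is absent, instead of embedding it in source files.
def get_openai_key():
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key

os.environ["OPENAI_API_KEY"] = "<your_key>"  # placeholder from the README
print(get_openai_key())  # → <your_key>
```

In production the variable would be exported in the shell or injected by the deployment environment rather than set in code.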
- Start the API server
- We have already added several benchmark results to show how beneficial the Intel® oneAPI AI Analytics Toolkit is compared to the baseline. Please go to the `benchmark` folder to view the results. Please note that the shared results are based on the provided Intel® Dev Cloud machine *(Intel Xeon Processor (Skylake, IBRS), 10 vCPUs, 16 GB RAM)*.
# Comprehensive Implementation PPT (Presentation)
- Please view `ppt/Intel-oneAPI-Hackathon-Implementation.pdf` for more details.
# What we learned
✅ **Utilizing the Intel® AI Analytics Toolkit**: Developers can leverage familiar Python* tools and frameworks to accelerate the entire data science and analytics pipeline on Intel® architecture. The toolkit incorporates oneAPI libraries for optimized low-level computations, ensuring maximum performance from data preprocessing through machine learning and deep learning tasks, and it facilitates efficient model development through interoperability.
✅ **Seamless Adaptability**: The Intel® AI Analytics Toolkit enables smooth integration with machine learning and deep learning workloads, requiring minimal modifications.
✅ **Fostered Collaboration**: Developing this application involved collaboration with a team of experts from diverse fields, including deep learning and data analysis, underscoring the significance of collaborative effort in attaining shared objectives.