Commit 1260d14

Bump to latest

1 parent 1befb66 · commit 1260d14

File tree

5 files changed: +13 −5 lines changed


README.md

Lines changed: 12 additions & 4 deletions
@@ -57,7 +57,7 @@ hints to the student to arrive at correct answer, enhancing student engagement a

 - Prototype webapp Tech Stack

-![](./assets/Tech-Stack.png)
+![](./assets/Prototype-Tech-Stack.png)

 # Demo Video

@@ -69,7 +69,7 @@ hints to the student to arrive at correct answer, enhancing student engagement a

 - Clone the Repository
 ```python
-$ git clone https://github.com/rohitc5/intel-oneAPI/tree/main
+$ git clone https://github.com/rohitc5/intel-oneAPI/
 $ cd Intel-oneAPI

 ```
@@ -251,7 +251,11 @@ Here is the detailed architecture of `Ask Question/Doubt` component:
 ```

 - For our **Interactive Conversational AI Examiner** Component, as of now we are not doing any training as its based on
-recent Generative AI LLMs (Large Language models) (open access models like LLaMA, Falcon etc.). You can update the API configuration by specifying hf_model_name (LLM name available in huggingface Hub). Please checkout https://huggingface.co/models for LLMs
+recent Generative AI LLMs (Large Language Models) (open-access models like LLaMA, Falcon, etc.). You can update the API configuration by specifying hf_model_name (an LLM name available on the Hugging Face Hub). Please check out https://huggingface.co/models for LLMs.
+
+Here is the architecture of the `Interactive Conversational AI Examiner` component:
+
+![](./assets/AI-Examiner.png)

 Here for performance gain, we can use INT8 quantized model optimized using Intel® Neural Compressor (Few options are like https://huggingface.co/decapoda-research/llama-7b-hf-int8 etc.)
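The INT8 quantization step mentioned in that hunk can be sketched with Intel® Neural Compressor's post-training quantization API. This is a minimal sketch assuming the neural_compressor 2.x `quantization.fit` interface; `fp32_model` and `calib_loader` are hypothetical placeholders, not names from this repo:

```python
import numpy as np

def quantize_to_int8(fp32_model, calib_loader):
    """Sketch: post-training static INT8 quantization with Intel Neural Compressor.

    Assumes the neural_compressor 2.x API; the model and the calibration
    dataloader are supplied by the caller (both hypothetical here).
    """
    # Imported lazily so the rest of the module works without INC installed.
    from neural_compressor import PostTrainingQuantConfig, quantization

    conf = PostTrainingQuantConfig(approach="static")  # calibrated static PTQ
    return quantization.fit(fp32_model, conf, calib_dataloader=calib_loader)

def int8_memory_ratio() -> float:
    """FP32 weights take 4 bytes each, INT8 weights 1 byte: a ~4x reduction."""
    return np.dtype(np.float32).itemsize / np.dtype(np.int8).itemsize
```

Whether the ~4x memory saving also yields a matching speedup depends on the CPU's native INT8 support.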

@@ -306,6 +310,10 @@ Please Note that for fun 😄, we also provide usage of Azure OpenAI Cognitive S

 # Benchmark Results with Intel® oneAPI AI Analytics Toolkit

+- We follow the process flow below to optimize the models from both components
+
+![benchmark](./assets/Intel-Optimization.png)
+
 - We have already added several benchmark results to compare how beneficial Intel® oneAPI AI Analytics Toolkit is compared to baseline. Please go to `benchmark` folder to view the results. Please Note that the shared results are based
 on provided Intel® Dev Cloud machine *[Intel Xeon Processor (Skylake, IBRS) - 10v CPUs 16GB RAM]*
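The baseline-versus-optimized comparisons that hunk refers to reduce to timing the same workload under both setups. A generic, hypothetical harness (not the repo's actual benchmark code) might look like:

```python
import time
from statistics import median

def benchmark(fn, *args, warmup=2, runs=5):
    """Time fn(*args), returning the median wall-clock seconds over `runs`."""
    for _ in range(warmup):              # warm caches before measuring
        fn(*args)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return median(times)

def speedup(baseline_s, optimized_s):
    """How many times faster the optimized run is than the baseline."""
    return baseline_s / optimized_s
```

Taking the median over several runs, with warmup iterations discarded, keeps one-off scheduling noise from dominating the comparison.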

@@ -315,7 +323,7 @@ on provided Intel® Dev Cloud machine *[Intel Xeon Processor (Skylake, IBRS) - 1

 # What we learned ![image](https://user-images.githubusercontent.com/72274851/218499685-e8d445fc-e35e-4ab5-abc1-c32462592603.png)

-![image](assets/Intel-ai-analytics-banner.png)
+![image](assets/Intel-AI-Analytics-Banner.png)

 **Utilizing the Intel® AI Analytics Toolkit**: By utilizing the Intel® AI Analytics Toolkit, developers can leverage familiar Python* tools and frameworks to accelerate the entire data science and analytics process on Intel® architecture. This toolkit incorporates oneAPI libraries for optimized low-level computations, ensuring maximum performance from data preprocessing to deep learning and machine learning tasks. Additionally, it facilitates efficient model development through interoperability.

api/src/config.py

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@


 AI_EXAMINER_CONFIG = {
-    "llm_name": "azure_gpt3",  # azure_gpt3, hf_pipeline
+    "llm_name": "hf_pipeline",  # azure_gpt3, hf_pipeline
     "azure_deployment_name": "text-davinci-003-prod",
     "hf_model_name": "TheBloke/falcon-7b-instruct-GPTQ",  # mosaicml/mpt-7b-instruct
     "device": 0,  # cuda:0

assets/Intel-Optimization.png

Binary file not shown (249 KB).

0 commit comments
