Commit 24ffcbd
bug fixes
2 parents 1260d14 + 1b92c06

File tree

1 file changed: +6 additions, -6 deletions


README.md

Lines changed: 6 additions & 6 deletions
@@ -9,7 +9,7 @@

 # LEAP

-Intel® oneAPI Hackathon - Prototype Implementation for our LEAP Platform
+Intel® oneAPI Hackathon 2023 - Prototype Implementation for our LEAP Solution

 # A Brief of the Prototype:

@@ -25,7 +25,7 @@ Online learning is crucial for students even post-pandemic due to its flexibilit
 - It can be challenging to sift through pile of lengthy videos or documents to find relevant information.
 - Teachers or instructors may not be available around the clock to offer guidance

-#### Our Proposed Solution ![image](https://user-images.githubusercontent.com/72274851/218503394-b52dfcc9-0620-4f44-94f5-46a09a5cc970.png)
+#### PROPOSED SOLUTION ![image](https://user-images.githubusercontent.com/72274851/218503394-b52dfcc9-0620-4f44-94f5-46a09a5cc970.png)

 To mitigate the above challenges, we propose LEAP (Learning Enhancement and Assistance Platform), which is an AI-powered
 platform designed to enhance student learning outcomes and provide equitable access to quality education. The platform comprises two main features that aim to improve the overall learning experience of the student:
@@ -49,9 +49,9 @@ hints to the student to arrive at correct answer, enhancing student engagement a

 ![](./assets/Intel-Tech-Stack.png)

-1. Intel® Extension for Pytorch: Used for Multilingual Extractive QA model training optimization.
-2. Intel® Neural Compressor: Used for Multilingual Extractive QA model inference and Generative AI model inference optimization.
-3. Intel® Extension for Scikit-Learn: Used for Multilingual Embedding model training optimization.
+1. Intel® Extension for Pytorch: Used for our Multilingual Extractive QA model Training/Inference optimization.
+2. Intel® Neural Compressor: Used for Multilingual Extractive QA model and Generative AI model Inference optimization.
+3. Intel® Extension for Scikit-Learn: Used for Multilingual Embedding model Training/Inference optimization.
 4. Intel® distribution for Modin: Used for basic initial data analysis/EDA.
 5. Intel® optimized Python: Used for data pre-processing, reading etc.

@@ -329,4 +329,4 @@ on provided Intel® Dev Cloud machine *[Intel Xeon Processor (Skylake, IBRS) - 1

 **Seamless Adaptability**: The Intel® AI Analytics Toolkit enables smooth integration with machine learning and deep learning workloads, requiring minimal modifications.

-**Fostered Collaboration**: The development of such an application likely involved collaboration with a team comprising experts from diverse fields, including deep learning and data analysis. This experience likely emphasized the significance of collaborative efforts in attaining shared objectives.
+**Fostered Collaboration**: The development of such an application likely involved collaboration with a team comprising experts from diverse fields, including deep learning and data analysis. This experience likely emphasized the significance of collaborative efforts in attaining shared objectives.
