A modern web application demonstrating how to integrate Streamlit's chat interface with Google's Gemini family of Large Language Models (LLMs).
This project showcases a complete chatbot pipeline. It features a clean frontend built with Streamlit (`st.chat_message`, `st.chat_input`) that maintains persistent conversation history during the user session. The backend is integrated with the Google Gemini API (specifically `gemini-1.5-flash` or `gemini-pro`), which processes user prompts and generates real-time responses.
Below is an illustration of the chatbot interface running locally:
- Native Streamlit UI: Uses Streamlit's new `chat` elements for a seamless user experience.
- Powered by Gemini: Integrates with Google Generative AI for fast and capable language model responses.
- Session Memory: Leverages `st.session_state` to store and display the full chat history during a single session.
- Cross-Compatibility: Demonstrates adaptable code structures capable of switching between LLM providers (e.g., from OpenAI to Gemini).
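The cross-compatibility point can be sketched as a thin provider abstraction. The names below (`Provider`, `make_gemini_provider`, `ask`) are hypothetical helpers for illustration, not part of the course code:

```python
from typing import Callable

# A "provider" is simply a function mapping a prompt to a reply,
# so the rest of the app never needs to know which LLM sits behind it.
Provider = Callable[[str], str]


def make_gemini_provider(model_name: str = "gemini-1.5-flash") -> Provider:
    """Wrap the Google Generative AI SDK as a Provider (imported lazily)."""
    import google.generativeai as genai

    model = genai.GenerativeModel(model_name)
    return lambda prompt: model.generate_content(prompt).text


def ask(provider: Provider, prompt: str) -> str:
    # App-level code stays identical when the provider is swapped out.
    return provider(prompt)


# Any callable with the same shape works, e.g. a stub for offline testing:
echo_provider: Provider = lambda prompt: f"echo: {prompt}"
```

Swapping from OpenAI to Gemini then only means passing a different provider to `ask`, leaving the Streamlit UI code untouched.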
- Python 3.12+
- Streamlit: For the web interface and frontend elements.
- `google-generativeai`: Python library to interact with Google's Gemini API.
You will need a Google Gemini API key. You can obtain one for free (within limits) at Google AI Studio.
Navigate to your project folder in the terminal:
```bash
# Create a virtual environment
python -m venv venv

# Activate the environment (Mac/Linux)
source venv/bin/activate

# Activate the environment (Windows)
# venv\Scripts\activate

# Install the dependencies
pip install streamlit google-generativeai
```

Open `main.py` and replace `"API_KEY"` with your actual Google API key.
```bash
streamlit run main.py
```

A new tab will automatically open in your browser displaying the chatbot.
Developed as part of the Hashtag Programação course.
