Is there an existing issue for this?
Feature Description
I feel we shouldn't embed all the logic in the frontend. Scraping leaderboard data directly from the frontend and displaying it there would be poor practice.
Instead, let's create a dedicated backend that updates every 2-3 hours. We built one before for the GSSoC program and hosted the backend on Railway, where it runs as a scheduled job and refreshes the data every 3 hours.
Right now, when a user's PR gets merged, it instantly appears on the frontend. Once we have more users, this will exhaust all of our GitHub API usage limits.
So we can have a separate backend directory in the repository and pull data from there to the frontend.
This backend can handle all the complexity, including how to scrape GitHub PRs and manage the data.
By keeping the complexity in the backend, we can send simple, clean data to the frontend. This approach prevents unnecessary repetitive API calls: imagine scaling to 100 active users; they'd exhaust the API limits in just 1-2 days.
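To make the scaling concern concrete, here is a rough back-of-the-envelope sketch. The rate limit comes from GitHub's documented REST API limits (5,000 requests/hour per authenticated token); the calls-per-page-load and visit-frequency figures are purely illustrative assumptions, not measurements of this dashboard:

```python
# Rough estimate of how quickly client-side scraping burns through
# GitHub's REST API rate limit. The per-page-load call count and
# visit frequency below are illustrative assumptions.

RATE_LIMIT_PER_HOUR = 5000     # authenticated limit per token (GitHub docs)
CALLS_PER_PAGE_LOAD = 5        # assumed: PR list, contributors, user details, ...
ACTIVE_USERS = 100
VISITS_PER_USER_PER_HOUR = 12  # assumed: users refreshing the leaderboard

calls_per_hour = ACTIVE_USERS * VISITS_PER_USER_PER_HOUR * CALLS_PER_PAGE_LOAD
print(calls_per_hour)                        # 6000 calls per hour
print(calls_per_hour > RATE_LIMIT_PER_HOUR)  # True: the token's limit is gone within the hour
```

Even with generous assumptions, every visitor hitting the API directly scales linearly with traffic, while a cached backend makes the API cost constant regardless of user count.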
In the backend, we can implement a simple mechanism to scrape data every 2 hours and serve a cached version to the frontend. This will eliminate a lot of redundant API calls and make the frontend much cleaner, faster and easier to maintain.
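A minimal sketch of the caching mechanism described above, assuming a 2-hour TTL; the class, the fetcher, and all names here are hypothetical illustrations, not code from the existing backend:

```python
import time
from typing import Any, Callable

CACHE_TTL_SECONDS = 2 * 60 * 60  # re-scrape GitHub data every 2 hours


class CachedLeaderboard:
    """Serve a cached snapshot; re-run the scrape only after the TTL expires."""

    def __init__(self, fetch: Callable[[], Any], ttl: float = CACHE_TTL_SECONDS):
        self._fetch = fetch              # e.g. a function that calls the GitHub API
        self._ttl = ttl
        self._data: Any = None
        self._fetched_at: float = float("-inf")  # force a fetch on first access

    def get(self) -> Any:
        now = time.monotonic()
        if now - self._fetched_at >= self._ttl:
            self._data = self._fetch()   # the expensive GitHub scrape happens here
            self._fetched_at = now
        return self._data                # every other request is served from cache
```

The frontend would then hit a single backend endpoint backed by `get()`, so no matter how many users load the leaderboard, GitHub sees at most one scrape per TTL window.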
If you're okay with this approach, I can add a backend folder to the repository, move all the logic there, and clean up the frontend accordingly.
Use Case
Backend code base: https://github.com/recodehive/leaderboard
Dashboard live view: https://gssoc-new-dashboard-two.vercel.app/leaderboard
Benefits
Watch the explanation video
https://www.loom.com/share/acff412de5894a869368623996cdefde
Add ScreenShots
No response
Priority
High
Record