Conversation
🍹 The Update (preview) for dailydotdev/api/prod (at ff37ad2) was successful.

Resource Changes (Name / Type / Operation):
+ vpc-native-api-db-migration-12bfa57c kubernetes:batch/v1:Job create
~ vpc-native-validate-active-users-cron kubernetes:batch/v1:CronJob update
~ vpc-native-update-source-tag-view-cron kubernetes:batch/v1:CronJob update
~ vpc-native-clean-gifted-plus-cron kubernetes:batch/v1:CronJob update
~ vpc-native-update-highlighted-views-cron kubernetes:batch/v1:CronJob update
~ vpc-native-clean-zombie-users-cron kubernetes:batch/v1:CronJob update
~ vpc-native-update-tag-recommendations-cron kubernetes:batch/v1:CronJob update
~ vpc-native-update-views-cron kubernetes:batch/v1:CronJob update
~ vpc-native-personalized-digest-deployment kubernetes:apps/v1:Deployment update
- vpc-native-api-clickhouse-migration-af54e36f kubernetes:batch/v1:Job delete
~ vpc-native-generic-referral-reminder-cron kubernetes:batch/v1:CronJob update
~ vpc-native-deployment kubernetes:apps/v1:Deployment update
~ vpc-native-post-analytics-clickhouse-cron kubernetes:batch/v1:CronJob update
~ vpc-native-update-source-public-threshold-cron kubernetes:batch/v1:CronJob update
~ vpc-native-check-analytics-report-cron kubernetes:batch/v1:CronJob update
+- vpc-native-k8s-secret kubernetes:core/v1:Secret create-replacement
~ vpc-native-personalized-digest-cron kubernetes:batch/v1:CronJob update
~ vpc-native-clean-zombie-opportunities-cron kubernetes:batch/v1:CronJob update
~ vpc-native-private-deployment kubernetes:apps/v1:Deployment update
~ vpc-native-hourly-notification-cron kubernetes:batch/v1:CronJob update
~ vpc-native-update-tags-str-cron kubernetes:batch/v1:CronJob update
~ vpc-native-sync-subscription-with-cio-cron kubernetes:batch/v1:CronJob update
~ vpc-native-clean-stale-user-transactions-cron kubernetes:batch/v1:CronJob update
~ vpc-native-temporal-deployment kubernetes:apps/v1:Deployment update
~ vpc-native-ws-deployment kubernetes:apps/v1:Deployment update
~ vpc-native-bg-deployment kubernetes:apps/v1:Deployment update
+ vpc-native-api-clickhouse-migration-12bfa57c kubernetes:batch/v1:Job create
~ vpc-native-clean-zombie-images-cron kubernetes:batch/v1:CronJob update
~ vpc-native-user-profile-updated-sync-cron kubernetes:batch/v1:CronJob update
~ vpc-native-update-current-streak-cron kubernetes:batch/v1:CronJob update
~ vpc-native-update-trending-cron kubernetes:batch/v1:CronJob update
~ vpc-native-post-analytics-history-day-clickhouse-cron kubernetes:batch/v1:CronJob update
~ vpc-native-clean-zombie-user-companies-cron kubernetes:batch/v1:CronJob update
~ vpc-native-generate-search-invites-cron kubernetes:batch/v1:CronJob update
~ vpc-native-daily-digest-cron kubernetes:batch/v1:CronJob update
~ vpc-native-calculate-top-readers-cron kubernetes:batch/v1:CronJob update
- vpc-native-api-db-migration-af54e36f kubernetes:batch/v1:Job delete
import { tenorClient } from '../integrations/tenor';
import { logger } from '../logger';

export default async function (fastify: FastifyInstance): Promise<void> {
we should protect these endpoints (only logged-in users) and also rate limit them ourselves
also why not gql? A lot of manual work here (error handling); schemas can be defined in gql and zod, which already works there
Yeah, I'd go for gql too
also why not gql? A lot of manual work here (error handling); schemas can be defined in gql and zod, which already works there
What extra benefits would gql provide? We can use zod here if we want to as well. Personally, I think it makes sense to use plain API endpoints for simple services like this.
and also rate limit them ourself
I disagree with this. I want to let people search as freely as possible without adding more ways for the user to not get the desired result.
I think if it becomes a noticeable issue we can request a higher rate limit from Tenor
I disagree with this. I want to let people search as freely as possible without adding more ways for the user to not get the desired result.
I think if it becomes a noticeable issue we can request a higher rate limit from Tenor
The issue is that, like this, you expose a free endpoint for anyone to take advantage of, especially bad actors. We have to guard it at least somehow; the limits do not need to be low, they can be generous, but they need to block overuse.
The issue is that, like this, you expose a free endpoint for anyone to take advantage of, especially bad actors. We have to guard it at least somehow; the limits do not need to be low, they can be generous, but they need to block overuse.
Fair point, but won't checking for an authenticated user already solve this problem?
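The "generous but bounded" limiting discussed above could be sketched as a per-user token bucket. This is not the PR's code: `RateLimiter`, the burst size of 30, and the 1 token/sec refill rate are all illustrative assumptions.

```typescript
// Minimal per-user token-bucket rate limiter (illustrative sketch,
// not the actual implementation from this PR).
type Bucket = { tokens: number; last: number };

class RateLimiter {
  private buckets = new Map<string, Bucket>();

  constructor(
    private capacity: number, // max burst size (assumed value)
    private refillPerSec: number, // tokens added back per second
  ) {}

  // Returns true if this user's request is allowed right now.
  allow(userId: string, now: number = Date.now()): boolean {
    const b = this.buckets.get(userId) ?? { tokens: this.capacity, last: now };
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (now - b.last) / 1000;
    b.tokens = Math.min(this.capacity, b.tokens + elapsedSec * this.refillPerSec);
    b.last = now;
    const allowed = b.tokens >= 1;
    if (allowed) {
      b.tokens -= 1; // consume one token for this request
    }
    this.buckets.set(userId, b);
    return allowed;
  }
}

// Generous limits, per the thread: a burst of 30, refilling 1/sec.
const limiter = new RateLimiter(30, 1);
```

In a Fastify app this check would presumably run in a preHandler hook after authentication, keyed by the logged-in user's id, so anonymous callers never reach the Tenor proxy at all.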
looking now, is this what we use? https://developers.google.com/tenor/guides/rate-limits-and-caching#rate-limits
it has 1 req/s; couldn't this break even if just 2 users search in parallel?
@capJavert Correct, I mentioned that in my OP. Realistically I don't think it's going to be that huge of a problem because of debouncing and caching. Also, we don't even have 1 comment per minute on avg. on our site 😅
I think the announcement post will be the best test to see if it's an actual issue, lol. I can imagine people will be posting gifs in the changelog.
const gifToToggle = req.body as Gif;
const gifs: Gif[] = [];
if (existingFavorites?.meta) {
  gifs.push(...existingFavorites.meta.favorites);
this can be a pretty big array, no, if a user favorites a lot? Would it not be better to save it as a relation in the db?
maybe content preference or something
in theory, yes, but also I'm not sure I see a problem with it.
I think Discord does it in a similar way, because I have a lot of favorites and they load them all 😅
We could maybe limit the amount of favorites users can have to 50 or something, but honestly I think we can just keep it as is.
yes, but I assure you Discord does not store it in a single JSON row on the server. You can limit it and keep it to 50, up to you, but it's just dooming/locking the feature unnecessarily from the beginning.
also we can maybe just launch gifs without favorites and see where it goes
Nah, I think the favorite gifs are an important aspect.
Reading around a bit, it seems like the JSONB causing issues would be very unlikely, as they would need to favorite an absolutely massive amount of gifs.
If we ever run into this issue, I can look at maybe storing them differently. :)
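The toggle-plus-cap behavior discussed in this thread could look like the sketch below. The `Gif` shape, the `toggleFavorite` helper, and the cap of 50 are assumptions taken from the conversation, not the PR's actual schema.

```typescript
// Sketch of toggling a gif in a JSONB-backed favorites array,
// with the (hypothetical) 50-item cap floated in the review.
interface Gif {
  id: string;
  url: string;
}

const MAX_FAVORITES = 50; // assumed cap, not confirmed in the PR

// Returns the new favorites list: removes the gif if it is already
// favorited, otherwise prepends it and drops the oldest entries
// beyond the cap.
function toggleFavorite(favorites: Gif[], gif: Gif): Gif[] {
  const without = favorites.filter((g) => g.id !== gif.id);
  if (without.length < favorites.length) {
    return without; // it was already a favorite: toggled off
  }
  return [gif, ...without].slice(0, MAX_FAVORITES);
}
```

Keeping the list capped and newest-first means the serialized JSONB value stays small and the most recently favorited gifs are the ones retained, which sidesteps the unbounded-growth concern without moving to a relation table.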
Backend portion of chat gifs
I opted to implement caching with Redis because Tenor recommends caching due to its 1 RPS rate limit. They also recommend not caching too aggressively, because rankings can change multiple times during the day, although how often was not really explained. I set the TTL to 3 hours arbitrarily, but it can be more or less.
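The caching described above could be sketched as a simple TTL cache keyed by search query. The PR uses Redis; this in-memory stand-in only illustrates the TTL semantics, and the `TtlCache` name and 3-hour TTL mirror the description rather than the actual code.

```typescript
// In-memory stand-in for the Redis cache described above (sketch only;
// the real implementation presumably uses Redis SET with an EX TTL).
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();

  constructor(private ttlMs: number) {}

  // Returns the cached value, or undefined if missing or expired.
  get(key: string, now: number = Date.now()): T | undefined {
    const e = this.store.get(key);
    if (!e) return undefined;
    if (now >= e.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: T, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// Cache Tenor search results for the arbitrary 3-hour TTL, keyed by query.
const searchCache = new TtlCache<string[]>(3 * 60 * 60 * 1000);
```

On a cache hit the 1 RPS Tenor limit is never touched; on a miss the handler would call Tenor once, store the result, and serve every repeat of that query from the cache for the next 3 hours, which is why stale rankings are the trade-off mentioned above.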