This repository has been archived by the owner on Apr 7, 2020. It is now read-only.
Right now, we are SCPing all of these files up to prod:

- `allTerms.json`, to load the data into Elasticsearch
- `employeeDump.json`, to load the data into Elasticsearch
- all the files in `public/getTermDump/`, for the API
- `employees.json`, for the API
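If the data lived only in Elasticsearch, the loading step could be a single bulk index from `allTerms.json`. A minimal sketch, assuming an `@elastic/elasticsearch` client; the `classes` index name and the `classHash` field on each document are assumptions, not searchneu's real schema:

```javascript
// Sketch: build an Elasticsearch bulk-index body from the parsed
// contents of allTerms.json, so the derived files don't need to be
// SCPed separately. Index name and doc shape are assumptions.
function buildBulkBody(terms, index = 'classes') {
  const body = [];
  for (const doc of terms) {
    // Address each document by its classHash so the API can later
    // fetch it by ID without re-reading the JSON dumps.
    body.push({ index: { _index: index, _id: doc.classHash } });
    body.push(doc);
  }
  return body;
}

// Usage (esClient is an assumed @elastic/elasticsearch Client):
// const terms = JSON.parse(require('fs').readFileSync('allTerms.json', 'utf8'));
// await esClient.bulk({ body: buildBulkBody(terms) });
```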
Would it be possible to load the data from the first two files (or from Elasticsearch directly) when someone hits the API? That way we could avoid SCPing a lot of duplicate files to the production server.
So I think ideally we add a `/classes/:school/:termId/:subject/:classId` endpoint that hits Elasticsearch (notice that the path is literally the classHash). So for example: `GET /classes/neu.edu/201930/CS/2500`.
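A minimal sketch of that endpoint, assuming an Express-style handler and an `@elastic/elasticsearch` client; `buildClassHash`, the `classes` index name, and the path-joined hash format are hypothetical stand-ins for whatever searchneu actually uses:

```javascript
// Sketch (assumed names): the path segments ARE the classHash, so the
// handler can fetch the Elasticsearch document directly by ID.
function buildClassHash(school, termId, subject, classId) {
  return [school, termId, subject, classId].join('/');
}

// Hypothetical Express handler; esClient is an assumed
// @elastic/elasticsearch Client instance.
async function getClass(req, res, esClient) {
  const { school, termId, subject, classId } = req.params;
  const id = buildClassHash(school, termId, subject, classId);
  try {
    const result = await esClient.get({ index: 'classes', id });
    res.json(result.body._source);
  } catch (err) {
    res.status(404).json({ error: 'class not found' });
  }
}
```

The nice property is that there is no lookup table at all: the URL maps one-to-one onto the document ID.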
Additionally, we could support collection URLs like `GET /classes/neu.edu/201930` to return all course data for that term, and even `GET /classes/neu.edu` for the whole school. All of these would hit Elasticsearch. To avoid bricking the server with these potentially massive payloads, we would probably paginate these queries.
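The collection endpoints could share one query builder that turns whichever path segments are present into Elasticsearch term filters, with `from`/`size` pagination. A sketch under assumed field names (`school`, `termId`, `subject`) and an arbitrary page size:

```javascript
// Sketch: build a paginated Elasticsearch search body from optional
// path segments. Field names and PAGE_SIZE are assumptions.
const PAGE_SIZE = 100;

function buildCollectionQuery({ school, termId, subject }, page = 0) {
  const filters = [];
  if (school) filters.push({ term: { school } });
  if (termId) filters.push({ term: { termId } });
  if (subject) filters.push({ term: { subject } });
  return {
    query: { bool: { filter: filters } },
    // Offset pagination keeps any single response bounded.
    from: page * PAGE_SIZE,
    size: PAGE_SIZE,
  };
}

// e.g. GET /classes/neu.edu/201930?page=2 →
// buildCollectionQuery({ school: 'neu.edu', termId: '201930' }, 2)
```

For very deep result sets, Elasticsearch's `search_after` would scale better than `from`/`size`, but offset pagination is the simpler starting point.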
https://github.com/ryanhugh/searchneu/blob/master/docs/API.md