package cif

This package follows a similar pattern to package corpus.

First, CheckCorpus() retrieves CIF metadata from the database and determines whether an update is required and, if so, which type of update.

Then, one of the update functions is called to run through the update process. There are two update types: 'full' and 'update'. A 'full' update drops the entire timetable collection and rebuilds it from a full CIF download; an 'update' downloads CIF update files for the specified days and applies them to the existing data.
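The decision between the two update types can be sketched as follows. This is an illustrative example only — the type and function names (`UpdateType`, `chooseUpdate`, the `haveMetadata`/`daysBehind` inputs) are hypothetical and not the package's actual identifiers:

```go
package main

import "fmt"

// UpdateType mirrors the two update modes described above.
type UpdateType int

const (
	UpdateNone  UpdateType = iota
	UpdateFull             // drop the timetable collection, rebuild from a full CIF download
	UpdateDelta            // apply daily CIF update files on top of existing data
)

// chooseUpdate is a hypothetical sketch of the kind of decision the
// check step makes: no stored metadata forces a full rebuild, while
// stale metadata triggers daily updates.
func chooseUpdate(haveMetadata bool, daysBehind int) UpdateType {
	switch {
	case !haveMetadata:
		return UpdateFull
	case daysBehind > 0:
		return UpdateDelta
	default:
		return UpdateNone
	}
}

func main() {
	fmt.Println(chooseUpdate(false, 0)) // full rebuild required
	fmt.Println(chooseUpdate(true, 2))  // two days of updates to apply
	fmt.Println(chooseUpdate(true, 0))  // up to date, nothing to do
}
```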

Downloads are handled by package nrod, which returns an io.ReadCloser that is passed to the parsing function.

Currently the parsing function returns a parsedCif pointer; however, this uses significant memory due to the size of a full CIF download (often around 4.5 GB). The intention is to use a worker pool to handle the data instead.