I’m actually ingesting raw data into my database! I fixed some errors, mostly to do with type conversion (NumPy data types to JSON), and improved some error catching and logging, and now the ingest is running. This is already a milestone. With this operation, more archaeological information from the Netherlands is being gathered into one database than probably in the combined history of Dutch archaeological research.
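The NumPy-to-JSON issue is a classic one: `json.dumps` refuses NumPy scalars and arrays. A common fix, sketched here as an illustration (the class name and the sample record are my own, not from the actual ingest code), is a custom `JSONEncoder` that converts NumPy types to plain Python before serializing:

```python
import json
import numpy as np

class NumpyJSONEncoder(json.JSONEncoder):
    """Convert NumPy scalars and arrays to plain Python types before dumping."""
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        # Anything else falls through to the default error handling.
        return super().default(obj)

# Hypothetical record with NumPy-typed values, as might come out of a dataframe.
record = {"count": np.int64(3), "depth": np.float64(1.5), "coords": np.array([1, 2])}
print(json.dumps(record, cls=NumpyJSONEncoder))
# → {"count": 3, "depth": 1.5, "coords": [1, 2]}
```

Passing `cls=NumpyJSONEncoder` at each `dumps` call keeps the conversion in one place instead of scattering casts through the ingest code.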

I also improved the code base quite a bit. Encoding inference now runs on at most 1 kilobyte of data, which speeds things up considerably.
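The idea behind the speed-up is simple: you rarely need the whole file to guess its encoding, so read only a small sample. A minimal sketch of that pattern, with a deliberately naive detector standing in for a real one such as chardet (the function name and the UTF-8/Latin-1 fallback are my assumptions, not the project's actual code):

```python
def sniff_encoding(path, sample_size=1024):
    """Guess a file's encoding from at most `sample_size` bytes.

    Naive stand-in for a real detector: if the sample decodes as UTF-8,
    assume UTF-8; otherwise fall back to Latin-1, which always decodes.
    """
    with open(path, "rb") as f:
        sample = f.read(sample_size)  # cap the read: this is the speed win
    try:
        sample.decode("utf-8")
        return "utf-8"
    except UnicodeDecodeError:
        return "latin-1"
```

With a detection library the structure is the same: pass only the first kilobyte to the detector instead of the whole file, trading a little accuracy on pathological files for a large speed gain on big ones.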

There are new bugs to sort out, of course: