Graceful Forgetting II. Data as a Process

20 Nov 2022  ·  Alain de Cheveigné

Data are rapidly growing in size and importance for society, a trend driven by their enabling power. The accumulation of new data, sustained by progress in technology, leads to a boundless expansion of stored data, in some cases with an exponential increase in the accrual rate itself. Massive data are hard to process, transmit, store, and exploit, and it is particularly hard to keep abreast of the data store as a whole. This paper distinguishes three phases in the life of data: acquisition, curation, and exploitation. Each involves a distinct process that may be separated from the others in time and that carries a different set of priorities. The function of the second phase, curation, is to maximize the future value of the data given limited storage. I argue that this requires (a) that the data take the form of summary statistics and (b) that these statistics follow an endless process of rescaling. The summary may be more compact than the original data, but its data structure is more complex, and it requires an ongoing computational process far more sophisticated than mere storage. Rescaling yields a dimensionality reduction that may benefit learning, but it must be carefully controlled to preserve relevance. Rescaling may be tuned based on feedback from usage, with the proviso that our memory of the past serves the future, whose needs are not fully known.
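One minimal illustration of the idea that a compact summary statistic can stand in for raw data: a running count, mean, and variance maintained with Welford's online algorithm, which uses constant memory no matter how many samples arrive. This is a generic sketch of the summary-statistics concept, not the paper's own curation method; the class name and interface are hypothetical.

```python
# Hypothetical sketch (not the paper's method): replace raw-sample
# storage with a constant-size summary statistic of the stream,
# updated with Welford's numerically stable online algorithm.

class RunningStats:
    """Constant-size summary of a growing data stream."""

    def __init__(self):
        self.n = 0        # number of samples seen so far
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        # Welford's O(1)-memory update step.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        # Unbiased sample variance; 0.0 until two samples exist.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


# Usage: eight raw samples are reduced to three stored numbers.
stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)
```

The summary (n, mean, m2) is smaller than the data it replaces, but, in the spirit of the abstract, it demands ongoing computation rather than mere storage, and discarding the raw samples is an irreversible act of forgetting.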
