A selective data retention approach in massive databases
Exponentially growing databases have been tackled on two basic fronts: technological and methodological. Technology has offered solutions in storage capacity, processing power, and access speed. Methodological approaches include indexing, views, data mining, and temporal databases; combinations of the two appear in the form of data warehousing. All are designed to get the most out of, and best manage, mounting and complex databases. The basic premise underlying these approaches is to store everything. We challenge that premise by suggesting a selective retention approach for operational data, thus curtailing the size of databases and warehouses without losing content or information value. A model and methodology for selective data retention are introduced. The model, using cost/benefit analysis, allows assessment of data elements currently stored in the database and provides a retention policy for current and prospective data. A case study on commercial data illustrates the model and the concepts of the method.
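The abstract does not specify the form of the cost/benefit analysis, so the following is only a minimal illustrative sketch of how a selective retention policy of this kind might be expressed: each data element's expected access benefit is weighed against its storage cost, and elements whose net benefit falls below a threshold are candidates for non-retention. All names, fields, and weights here are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a cost/benefit retention rule.
# The paper's actual model is not given in the abstract; every field
# and number below is illustrative only.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    storage_cost: float      # cost of keeping the element per period
    access_frequency: float  # expected accesses per period
    value_per_access: float  # estimated benefit of each access

    def net_benefit(self) -> float:
        # expected benefit of retaining minus the cost of storing
        return self.access_frequency * self.value_per_access - self.storage_cost

def retention_policy(elements, threshold=0.0):
    """Map each element to True (retain) if its net benefit exceeds the threshold."""
    return {e.name: e.net_benefit() > threshold for e in elements}

elements = [
    DataElement("order_history", storage_cost=5.0,
                access_frequency=20, value_per_access=1.0),
    DataElement("old_clickstream", storage_cost=50.0,
                access_frequency=2, value_per_access=0.5),
]
policy = retention_policy(elements)
```

Under this toy rule, frequently accessed, valuable data is retained while rarely used, costly data is flagged for removal; the paper's model presumably refines such a trade-off with richer cost and benefit terms.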
Volume (Year): 32 (2004)
Issue (Month): 2 (April)
Handle: RePEc:eee:jomega:v:32:y:2004:i:2:p:87-95