Printed from https://ideas.repec.org/a/rdc/journl/v10y2019i3p16-29.html

Using the Information and Communications Technology Data Deluge from a Semantic Perspective of a Dynamic Challenge: What to Learn and What to Ignore?

Author

Listed:
  • GREU, Victor

Abstract

The paper approaches the Data Deluge generated, at world scale, by the flows of data created by the proliferation and exponential development of Information and Communications Technologies (ICT), the main driving factor of the progress of the Information Society (IS) toward the Knowledge Based Society (KBS). The analysis takes a systemic approach in order to observe the main premises and features of these complex processes, aiming at the optimal efficiency of data generation and use, for the benefit of humankind and the Earth's survival. The main emergent (hype) technologies driving ICT's exponential development in 2019 are considered as contributors to the Data Deluge, including Artificial Intelligence (AI), the Internet of Things (IoT), Cloud, Big Data, 3D Printing, Robotic Process Automation, Hardware Robotics, Blockchain and Augmented/Virtual Reality. ICT, and mainly the emergent (hype) technologies mentioned above, contribute, through complex processes that involve people, machines and devices in a planetary digital disruption, to the huge phenomenon called the Data Deluge; behind it still stands, mostly but not exclusively, the Internet (the backbone of the third industrial revolution). As an entry point into the Data Deluge issue, the paper chooses CERN, an amazing "temple" of science and technology, although social media have also produced crucial changes in the way we live, including business models, and entertainment and other applications carried over broadband mobile communications are likewise impressive generators of the Data Deluge. All these Data Deluge sources are illustrated with global figures that seem more impressive every year (over half a Yottabyte is generated), as the overwhelming park of connected devices and people increases exponentially, approximately following the consequences of Moore's Law as an invitation to the Data Deluge.
Although the cost of computing and communication is falling toward zero, this reality does not guarantee that the benefit in information/knowledge is automatically high; on the contrary, higher expertise (means/methods) is needed to extract information, and eventually knowledge, from the Data Deluge. As a prominent and relevant source of the Data Deluge, the CERN project is presented in detail (it includes 22 member states and a global community of 15,000 researchers). CERN's mission centres on international research, technology, education and collaboration. CERN advances the frontiers of knowledge: the fundamental structure of the Universe, its generation by the Big Bang, the kinds of matter present in the very first moments of the Universe's existence, and the search for Dark Matter and Antimatter. In addition, CERN develops new technologies for particle accelerators and detectors, but also for emergent fields such as advanced ICT (including Quantum Computing), the Web (the World Wide Web was invented at CERN in 1989 by British scientist Tim Berners-Lee) and the computing GRID. Medical diagnosis and therapy will also benefit considerably from the unprecedented advances achieved at CERN. The CERN LHC is a machine of records, including: the hottest spots in the galaxy; temperatures colder than outer space; and the most sophisticated detectors ever built, like gigantic digital cameras installed in cathedral-sized caverns. Concluding that CERN is just the tip of the iceberg that the Data Deluge is or could become, other relevant examples could be given, though none reaches CERN's unique records (although, in the same class of Data Deluge "giants", there are Facebook, Google, Amazon, Netflix etc.).
One goal of the paper is to analyse what and how, under these storm waves of the Data Deluge, will melt such icebergs, in order to extract and use the best of the information/knowledge that humankind and the Earth need today and especially tomorrow. In the second section, a deliberately disproportionate comparison of the Pyramids of antiquity with CERN is chosen, to emphasise the huge role of technological advances (mainly enabled by ICT) in generating the Data Deluge and then extracting information, and eventually knowledge, even from sources (like the Pyramids) which had almost "run dry" before this new technological support. The paper notes the importance, for the relevance of any analysis results, of the phase of humankind's evolution we are in, since in every phase technological advances push the generation of data, information and eventually knowledge to a higher level (quite incredible before), which also explains the "miracle" in the case of the Pyramids. The analysis also considers the deep and complex processes in which data, information and eventually knowledge are linked with the multitude of goals people may have when they expect the desired data and look, from a semantic perspective, to use them to fulfil those wishes. The difficulty and complexity of such analyses and optimisation approaches are greatly increased by the fact that all the premises of these processes are changing quickly and nonlinearly, mainly because of the exponential pace of ICT/IS/KBS, thereby generating everywhere a dynamic challenge for the mentioned semantic perspective, which in a simple expression could be: what to learn and what to ignore? The paper also approaches the difference between data and knowledge, observing that leveraging knowledge refining requires timely thought and the creation of appropriate ICT tools, because producing large amounts of data does not automatically generate lots of knowledge.
Approaching both the tools and the thoughts related to the processes involved in knowledge creation in this epoch of the Data Deluge, the paper points out the diversity, complexity and difficulty of the semantic contexts in which we have to select the optimal data (amount) leading to the desired information and, eventually, the knowledge beneficial to wisdom. As a prominent example of tools, the heart of CERN's computing infrastructure is given: the Worldwide LHC Computing Grid (WLCG) includes 170 computing centres in 42 countries, 1 million CPU cores, 1 EB of storage, 340 Gb/s of transatlantic capacity, and 3 PB of data moved per day. The analysis of the complex processes in which data streams toward information and eventually knowledge points to two main factors that influence this (long) road: environmental factors, which can be located among the Data Deluge sources (where the Data Deluge comes from) discussed mainly in the first section, and cultural factors, which refer to the intimate, diverse, complex and dynamic processes in which data are analysed, interpreted or selected by humans or machines, usually (but not exclusively) by semantic methods that naturally benefit from prominent ICT advances like AI/ML/CAIS. The final conclusion is that this analysis needs to be continued further, in order to gain deeper (usually, but not always, scientific) insights into the Data Deluge, which nowadays comes faster and from everywhere.
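The WLCG figures above can be sanity-checked with simple arithmetic: a minimal sketch (assuming decimal SI units, 1 PB = 10^15 bytes) showing that moving 3 PB per day implies a sustained throughput of roughly 278 Gb/s, on the same order as the quoted 340 Gb/s transatlantic capacity. This is an order-of-magnitude illustration, not WLCG's actual traffic accounting.

```python
# Back-of-the-envelope check of the WLCG figures quoted above:
# 3 PB moved per day versus 340 Gb/s of transatlantic capacity.
# Assumes decimal SI units: 1 PB = 1e15 bytes, 1 Gb = 1e9 bits.

def sustained_rate_gbps(petabytes_per_day: float) -> float:
    """Average throughput in Gb/s implied by a daily data volume."""
    bits_per_day = petabytes_per_day * 1e15 * 8   # bytes -> bits
    seconds_per_day = 24 * 60 * 60                # 86,400 s
    return bits_per_day / seconds_per_day / 1e9   # bits/s -> Gb/s

avg = sustained_rate_gbps(3)
print(f"3 PB/day ~ {avg:.0f} Gb/s sustained")       # roughly 278 Gb/s
print(f"Fraction of a 340 Gb/s link: {avg / 340:.0%}")
```

The point of the comparison is that the daily volume alone nearly saturates the quoted link capacity on average, which illustrates why extracting information from the Data Deluge requires dedicated infrastructure rather than incidental bandwidth.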

Suggested Citation

  • GREU, Victor, 2019. "Using the Information and Communications Technology Data Deluge from a Semantic Perspective of a Dynamic Challenge: What to Learn and What to Ignore?," Romanian Distribution Committee Magazine, Romanian Distribution Committee, vol. 10(3), pages 16-29, September.
  • Handle: RePEc:rdc:journl:v:10:y:2019:i:3:p:16-29

    Download full text from publisher

    File URL: http://crd-aida.ro/RePEc/rdc/v10i3/2.pdf
    Download Restriction: no

    More about this item

    Keywords

    Data Deluge; CERN Large Hadron Collider; semantic methods; Worldwide LHC Computing Grid (WLCG); World Wide Web; Particle Physics; Digital Disruption; Internet of Things; information society; knowledge based society; broadband mobile communications;

    JEL classification:

    • L63 - Industrial Organization - - Industry Studies: Manufacturing - - - Microelectronics; Computers; Communications Equipment
    • L86 - Industrial Organization - - Industry Studies: Services - - - Information and Internet Services; Computer Software
    • M15 - Business Administration and Business Economics; Marketing; Accounting; Personnel Economics - - Business Administration - - - IT Management
    • O31 - Economic Development, Innovation, Technological Change, and Growth - - Innovation; Research and Development; Technological Change; Intellectual Property Rights - - - Innovation and Invention: Processes and Incentives
    • O33 - Economic Development, Innovation, Technological Change, and Growth - - Innovation; Research and Development; Technological Change; Intellectual Property Rights - - - Technological Change: Choices and Consequences; Diffusion Processes


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:rdc:journl:v:10:y:2019:i:3:p:16-29. See general information about how to correct material in RePEc.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (Theodor Valentin Purcarea) The email address of this maintainer does not seem to be valid anymore. Please ask Theodor Valentin Purcarea to update the entry or send us the correct email address. General contact details of provider: http://www.distribution-magazine.eu .


    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis . RePEc uses bibliographic data supplied by the respective publishers.