On the future of computer science

Author: Raphael ‘kena’ Poss
Date: September 2014

Note

The latest version of this document can be found online at http://science.raphael.poss.name/on-the-future.html. Alternate formats: Source, PDF.

Prologue

A famous researcher once said that a scientist should not work more than 5-6 years in the same research area. The argument had to do with losing the “fresh eye” of a newcomer and coming to accept established boundaries as inescapable. As I have been working in processor architecture research for 6 years, I feel my time to move on may be coming. However, before I turn this page, I feel compelled by duty and responsibility as a researcher to report a seemingly new understanding in my field, one I have not yet seen voiced by my peers.

Theses

  1. Defining computing models is precisely how humans capture and externalize their thought patterns;

  2. It is the human activity of defining or refining computing models that gives humans the incentive to build computers and runnable software, i.e. artefacts of the physical world that can simulate (imitate) those models;

  3. The effective teaching of computing models from one human to another requires working computers and software that illustrate these models in a way that can be observed without the physical presence of the designer or programmer;

  4. There exists a fundamental process, hereafter named “innovation”, carried out by humans, of assembling artefacts in an initially meaningless way, then observing the results in the physical world, and then deriving new meaning and possibly new models, in that order over time;

  5. This form of “innovation” in turn requires artefacts that can be composed in an initially meaningless way; conversely, this form of innovation is inhibited by rules that require all combinations of artefacts to have a meaning according to a predefined, known thought system;

  6. There exists a system of background culture, accepted models, prevailing ideas and tacit knowledge, established by CS experts in the period 1950-2015, that limits how new computing artefacts can be assembled and thus fundamentally limits further innovation;

  7. This rule system is predicated on the Church-Turing thesis (“there is no function that can be described by human language that cannot be computed by a Turing-equivalent machine”) and on the requirement that each new computing artefact must simulate thought patterns that could be those of a single human person;

  8. A population of two or more people together forms a complex system whose collective thought patterns can be neither comprehended nor modeled by any of them individually [1]; however, they can together build an artefact that simulates their collective pattern of thoughts;

    [1] This statement derives from the observation that Heisenberg's uncertainty principle interferes with the possibility of an effectively computable simulation of the passage of time in a parallel system.

  9. The computing models corresponding to human collectives, and the artefacts built from them by human collectives, are strictly more powerful than those produced within the cultural mindset described in #6, insofar as they enable humans to express and communicate new higher-order understandings of the world around us, understandings that would be impossible to express or communicate using individual patterns of thought or machines built to simulate Turing-equivalent models;

  10. It is the role and duty of those computer scientists who have combined expertise in programming language design, operating system design and computer architecture to explore the nature, characteristics and programmability of this new generation of computing models.

Accompanying thoughts

I am, obviously, keenly interested in further interactions with peers to evaluate and discuss the validity of these hypotheses and the means to test them. In addition to the 10 points above, I disclose the following beliefs, which have accompanied me in the past few years and helped shape those points, but whose truth I take less for granted:

  1. Hypothesis #5 may be an indirect corollary of Gödel's incompleteness theorem.
  2. Parallel “accelerators”, recently seen as the means to achieve a “next generation” of computer systems, merely provide quantitative improvements in artefact performance and do not require the invention of new computing models. As such, they will not yield any fundamentally new understanding of human thought patterns.
  3. Bitcoin and other distributed computing models of currency and monetary exchange, in contrast, have provided mankind with a fertile ground to discuss and develop new collective understandings of power and incentive models.
  4. Fully distributed multi-core processors and Cloud infrastructure, due to the decentralized nature of their control structures in hardware and the physical impossibility of simulating them accurately using an abstract Turing machine, may also be fertile ground for mankind to achieve a new level of understanding; however, any full realization of this will be beyond the grasp of any individual human.
  5. (from #5) Programming languages where every program is only valid if its correctness can be decided a priori within the language's formal semantics are sterile ground for innovation; this includes Haskell.
  6. By #3, the process of innovation using artefacts built by the collective mind cannot be taught by a single teacher to students working individually.
  7. (from the point above) The prevailing mentor-mentee relationship of PhD trajectories makes them an inappropriate means to train future innovators.
  8. The statements #1, #7 and #8 that accompanied my PhD thesis in 2012 (see link below) were the forerunner to this document and their truth would be a consequence of the hypotheses above.

Bibliography


