Healthcare a Go Go

“Ceci n’est pas un pogo stick”.

How can healthcare organizations and companies make the shift to becoming treatment service providers offering highly customized and ongoing ‘cure and care’? Clearly telemedicine is going to play a significant role in this shift. However, data grows faster than processing power, while idea-to-use cycles keep speeding up as well. And how are current ‘big data’ efforts going to help with fine-grained information at a level of detail where statistical tools need to move beyond assumptions? Maybe the past has some keys to offer.

Old Dutch pharmacies often have a so-called ‘gaper’ (yawner) bust hanging outside as a signboard, usually a Mussulman to signify the relation with Turkey, China and other far-away cultures the Dutch were trading with in the 1600s. This ‘gaper’ usually sticks out his tongue, eyes wide open. That pose not only signifies the often awful taste of some potion the pharmacist brewed, it also hints at their use of the tongue and eye for diagnosis, and at something we seem to have forgotten in the meantime: performing this stretching motion actually helps release muscle tension in the neck and head, and thus helps relieve a headache. Pharmacists at the time used to be the doctors, while the local barber usually acted as surgeon; a mixed role dating back millennia that only recently got split, when industrialization allowed for a more specific organizational form centered on the hospital, with centralized procedures and processes further enforcing specialization and delegation.

Undeniably there is a certain minimal role division, but recent organizational forms have been around so long that we have gotten stuck in the administrative hierarchy. Technical capabilities are pushing towards a new level of abstraction: described as the “Nexus of Forces”, the combination of ‘social’, ‘mobile’, ‘information’ and ‘cloud’ brings together four trends which essentially translate into the capability to have to-the-point, relevant and detailed information anytime, anywhere. In other words, big data allows for de-specialization by de-bureaucratization, which in turn aids not only a dramatic reduction of costs but also much better healthcare at the same time, healthcare that is more future compliant than the current approach. Most medical errors are due to inefficient workflow processes, work breakdown, and task allocation and organization, so current advances will happen in several healthcare domains at the same time, mutually reinforcing if done well. Ongoing technological progress in the information sciences allows both for dealing with massive amounts of data and for doing so much faster. This may be most apparent in the possibilities offered by 3D printing and synthetic biology. Not only are we nearing an era when someone’s skin cells can be reprogrammed into stem cells and then used to print organs or other body parts (recent advances even allow growing in vivo structures as complex as eyes), but this will also allow for designer pharmaceutics generated on-the-fly.

This is happening now. The healthcare, pharmaceutical and life science industries are drinking from a fire hose, drowning in the abundance of genuinely needed information that they themselves help create. This presents us with an emerging new world governed by an as-yet-imperfect integration of biology, bio-technology and healthcare. It comprises data about vast numbers of molecules, cells, tissues, organs, and populations of organisms including humans, their actions and the outcomes of those actions. These entities and the data from them collectively constitute a new global biosystem. It has to: the highly detailed granularity of this information moves us beyond abstraction, so to make sense of it all we have to mimic nature in several ways. The health of the biosystem, including our own health, benefits from taking action and from recording and analyzing its consequences. On a macro scale, this new biosystem also includes the conceptual and physical consequences of its evolution: ideas, economics, wealth, engineering, and footprints good and bad. This may very well fit the “ecological civilization” pursued by China’s government; the focus here is on medical aspects, but as recent policies clearly show, the drivers are substantial.
New technologies are emerging: modalities that enable, extract data from, and compute knowledge from genomics and proteomics, medical text and images, and, not least, the rise of the digital patient record. The data is collectively escalating into many thousands of petabytes globally. Much is summarized in huge amounts of natural language text, but as to the details of the primary research and data capture, there is no time and capacity for, nor merit in, traditional methods of publication. Yet there is no time and capacity for, nor merit in, humans reading the conclusions in text form either. This has inspired a meta-science, translational research, which facilitates the transformation of findings from basic science and medical data into practical applications that enhance human health and wellness in meaningful ways. With the mathematical and technological solutions being developed in the biomedical arena, other kinds of industry and human endeavor will benefit… and generate more data.

Personalized medical treatment is obviously a next step, yet with the increasing advances and cross-fertilization in nano-technology, bio-technology, information technology and the cognitive sciences, medical companies are likely to expand beyond the primarily chemical focus and embrace not just genetic and epigenetic approaches, but also bio-physical ones. Instead of swallowing chemicals, a future pill may contain a self-coordinated swarm of dual-layer carbon nanotubes acting as nano-speakers that help massage internal organs. But likely the same feat can be accomplished by rhythmic sound pulses from a smart phone, a medical domotics system, or guided finger drumming and simulated yawning as described above. Mature ‘cure and care’ providers would want to deal with all of the above.

Of the medical specializations, pharmaceutics is the most likely to turn more explicitly into an information science, but an evolving information science. In a way the pharmaceutical industry has always been an information industry, albeit frequently confined to adapting nature’s ideas from its own combinatorial chemistry and “event simulation”, i.e. the 4-billion-year trial-and-error algorithm that Darwinian evolution allows, run in both a causally closed and a causally open manner. In the process of evolving as an industry, pharmaceutical companies will move from ‘pushing pills’ to becoming treatment service providers, where the idea of the ‘patient’ is exchanged for that of participating users who are monitored nonstop and receive adaptive, personalized treatment involving a possible cocktail of maybe 30-300 substances. So much knowledge has been accumulated, much more will be added, and time cycles are speeding up so fast that the existing modality, limited to one-size-fits-all treatments, is failing. This shift is known already, but as their second core business of research and development grows into an information science, pharmaceutical companies will need to adapt their information infrastructure to suit that inevitable next step.

Stratified and personalized medicine allow for targeted and hence safer drugs, but the technologies mentioned can surely help address the need to eliminate costly medical errors and inefficient paperwork, amplified by the pressures put on the healthcare system by the aging baby boomer population. Optimizing healthcare IT can already reduce costs on the order of trillions of dollars. These form substantial opportunities to integrate industries, and not least to broaden the scope of the pharmaceutical industry from its current state of seeking to develop biologically active molecules in some 10 to 15 years by increasingly informed methods, whereas biological evolution took a third of the age of the universe to do the same thing by trial and error. The trick is to also engage in the ultimate therapeutic actions involving these molecules, and to keep track of their outcomes.

Such informational feedback can have an invaluable impact on goal-based artificial selection. We cannot take action without knowledge. We cannot have knowledge without data. We cannot have new data without knowledge. To varying extents, the issue of converting less structured information into actionable knowledge as more structured information concerns not so much the amount of data, but the combinatorial explosion that relates to the way in which the individual pieces of information interact. For example, for medical records each with a mere 100 basic pieces of information such as age, gender, events, clinical observations and measurements, diagnoses, therapies, and outcomes, there are 2^100 possible combinations of factors with their probabilities that we can express as decision making rules, i.e. about 1,000,000,000,000,000,000,000,000,000,000 rules. A daunting task, besides the fact that current clinical decision support systems and semantic web technologies are hampered by the difficulties in quantifying indeterminacies and drawing inferences from them.
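
To make that scale concrete, here is a quick back-of-the-envelope check (a minimal sketch in Python; the figure of 100 items per record is simply the example above):

    # Back-of-the-envelope check of the combinatorial explosion described above:
    # with 100 basic pieces of information per record, every subset of those
    # factors is a candidate decision-making rule.
    n_factors = 100
    n_rules = 2 ** n_factors                  # all possible combinations of the factors
    print(f"{n_rules:.3e} possible rules")    # ~1.268e+30, on the order of 10^30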

Part of the solution for a sufficiently complex ontological infrastructure involves realizing designs for a logical framework for intra- and inter-cloud interoperability and integration, constructed of swarms of network-centric inference robots using template-based addressing for adaptive connectivity, as this infrastructure continuously seeks the best ‘semiotic interface’ while merging mobility, virtualization and security. Ongoing coarse-graining is foreseen for conceptual convolution (using deductive and inductive reasoning and affine association for concluding, abstracting and adjacent entrainment) to develop modelless intermediate representations suitable for transmitting information between the faster and slower reasoning and remembering cycles in the system. In that way the infrastructure aims to be adaptive at such a granular level that it can dynamically seek an optimal combination of vector processing and stream processing.
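
As a rough illustration of what template-based addressing could look like in practice (a hypothetical sketch in Python; the class names, capability fields and matching rule are invented for illustration, not taken from any existing system), inference agents advertise a capability template and requests are routed by template match rather than by fixed address:

    # Hypothetical sketch: inference agents advertise capability templates and a
    # request is delivered to whichever agents match the requested template,
    # instead of being sent to a fixed network address.
    from dataclasses import dataclass, field

    @dataclass
    class InferenceAgent:
        name: str
        capabilities: dict = field(default_factory=dict)  # e.g. {"domain": "genomics", "task": "inference"}

    class TemplateRouter:
        def __init__(self):
            self.agents = []

        def register(self, agent):
            self.agents.append(agent)

        def route(self, template):
            # An agent matches when every key/value pair in the template is satisfied.
            return [a for a in self.agents
                    if all(a.capabilities.get(k) == v for k, v in template.items())]

    router = TemplateRouter()
    router.register(InferenceAgent("robot-1", {"domain": "proteomics", "task": "inference"}))
    router.register(InferenceAgent("robot-2", {"domain": "genomics", "task": "inference"}))
    print([a.name for a in router.route({"domain": "genomics", "task": "inference"})])  # ['robot-2']

The point of the sketch is only that connectivity adapts to advertised capabilities rather than to a fixed topology.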

Such a multipolar, multifaceted, distributed decision support system can be used to allow for the transition from a “pill focus” towards a patient focus and from products to treatment services, and to support initiatives for collaborative treatment lifecycle management, ongoing testing and monitoring of patients, and cross-translation between discrete knowledge domains and research areas. Although these are clearly different areas, they are best served by a universal approach. Only in the days of early computing did this translate into a uniform approach; with self-learning, self-organizing systems that step is not so self-evident anymore. Nowadays a new system is needed that allows both for multipolar viewpoints and for cross-correlation discovery and dependency mappings between different sciences.

In that sense, within the existing business model, pharmaceutical treatment is fairly limited, as people’s genetic makeup may be involved for only some 30% in the formation of a disease, while many outside forces besides genetic and epigenetic responses remain largely unknown. Blaming it all on the placebo effect doesn’t make the oddities go away. Personalized treatment will in fact greatly reduce risk, both through a medical advisory service for doctors, double-checking prescription drugs and treatments (most medical errors are simply mistakes), and by moving from symptom-based diagnostics to a more realistic function-based approach, which adheres much more closely to all the medical and bio-tech research being done. Within the context of an adaptive treatment, medical risk is greatly reduced, and as a result financial risk as well. In order to get there, the information infrastructure needs to be able to learn from every single treatment, to enable the move to greater transparency, greater robustness and greater reliability. As an unspoken leftover from earlier times, systems tend to make a snapshot of a knowledge domain, describe it with lots of effort, classify it as an ontology, and if it is outdated a new snapshot is made, a new version. Eventually, like a flipbook or a movie strip, one ends up having to deal with many of these snapshots at the same time.

This 19th century approach to scientific exploration translates for companies into the so-called dependency hell. Since then we have come a long way: we now know that ongoing knowledge acquisition is irreducibly ambiguous, due to the necessary use of conceptual modeling. Some knowledge domains may be an amalgam of 40 or 50 different abstract models, each telling a part of the whole story, but not really fitting each other, as scientific exploration tends to move from unknown to unknown.

The edge of science is balancing on the push of what has already happened and the pull of the potential, limited to the nearest neighbor in systematic composition while being directed by correspondence. But science is unable to reach far outside of the known, as we do not have a meta-language to express what we do not know that we do not know, although we can sometimes express what we know that we do not know. And yet, even though science can grow towards almost endless recombinations of the known, she moves one step at a time, or one time at a step. Exploration happens where these two overlap, and the space of possibilities is not just the next step as in an unreachable potential; it is the act of stepping itself, a shaping force in a state of becoming. It is time to leave 19th century photography behind and move to streaming media. If ‘one size fits all’ were true, then we wouldn’t see the move into big data and high performance computing as a cloud, but as a big ball of water. Loosely-coupled virtualization greatly adds to flexibility, which in turn leads to agility, scalability, reliability, reduced costs, and improved security and operations. Many computing infrastructures are suffering from dependencies which have grown so complex that normal design patterns no longer apply. The same holds for knowledge domains, and as the time between idea and usage keeps shrinking, we cannot be pinned down by assumptions in our approach anymore, as that limits the results.

There is a much simpler way, involving ongoing data-mining: facilitated, self-organizing swarms of search agents reporting to a central system to maintain a real-time dependency map, and goal-driven emergent hierarchies with just a touch of ‘unstructuredness’. Employing a mix of dynamic approaches, from process objects and automata to complex adaptive systems, in some sense this offers a modelless approach, data without representation. In other words, the use of the proper tools for adaptive knowledge curation is a natural result of setting up a truly learning organization. Borrowing approaches from the cognitive neurosciences and robotics, we get reentrant transparallel experiential selection leading to dynamic synchronization and feature binding, mixed with self-organizing incremental neural networks for unsupervised learning. This results in an adaptive system of systems mixing distributed decision support and multipolar, multifaceted, evolving ontologies. Phased knowledge harvests offer an analytical scheme in which both administrative and medical improvements are addressed.
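
A minimal sketch of that idea (plain Python; the concept names are purely illustrative), with search agents reporting discovered dependencies to a central, continuously updated map rather than to a versioned snapshot:

    # Minimal sketch: self-organizing search agents report discovered dependencies
    # (edges between concepts) to a central map that is updated incrementally,
    # instead of re-describing the whole knowledge domain as a new snapshot.
    from collections import defaultdict

    class DependencyMap:
        def __init__(self):
            self.edges = defaultdict(set)       # concept -> concepts it depends on

        def report(self, source, depends_on):
            self.edges[source].add(depends_on)  # incremental, real-time update

        def dependencies(self, concept, seen=None):
            # Transitive closure of everything a concept depends on.
            if seen is None:
                seen = set()
            for dep in self.edges[concept]:
                if dep not in seen:
                    seen.add(dep)
                    self.dependencies(dep, seen)
            return seen

    dep_map = DependencyMap()
    # Each agent mines its own slice of the corpus and reports what it finds.
    dep_map.report("treatment_A", "gene_expression_X")
    dep_map.report("gene_expression_X", "pathway_Y")
    print(dep_map.dependencies("treatment_A"))  # {'gene_expression_X', 'pathway_Y'}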

As mentioned with regard to the US Million Veteran Program: “Genetically speaking, each person’s cells carry within them some 3.2 billion bits of data. That’s how many pairs of nucleotides, or chemical bases, are in the human genome. This represents tens of thousands of protein-coding genes, plus lots of other DNA. By and large, the precise role of one stretch of DNA versus another remains a vast unsolved mystery. There are countless possible variants that could affect health, and scientists have yet to learn about most of them.” Not only that, an intimate relation between the DNA sequence and the folding structure of the entire strand has been uncovered, as well as a possible coordinating role of the collagen triple helix. This is not just raw information, but also a matter of how information is organized and systematized, requiring a sufficiently versatile approach to knowledge management. “With great power comes great responsibility”… with Big Data comes big information, big knowledge, big wisdom and maybe even big intelligence. A ‘feature’ which is often dodged is that Big Data requires an entirely renewed approach to statistical analysis, databases and queries. “Big” is the elephant in the room, a Trojan elephant, and maybe even, it is elephants all the way down.
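
For a rough sense of scale (a back-of-the-envelope sketch, assuming the usual 2 bits per base for the four nucleotides and counting raw sequence only; the million-participant figure simply echoes the program’s name):

    # Back-of-the-envelope: raw storage for human genome sequence at 2 bits per base.
    base_pairs = 3_200_000_000             # ~3.2 billion base pairs per genome
    bytes_per_genome = base_pairs * 2 / 8  # four possible bases -> 2 bits each
    print(f"{bytes_per_genome / 1e9:.1f} GB per genome")                                # ~0.8 GB raw sequence
    print(f"{1_000_000 * bytes_per_genome / 1e15:.1f} PB for a million participants")   # ~0.8 PB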

A special thanks to Barry Robson, Distinguished Scientist, pioneer in Bioinformatics and Artificial Intelligence, and much much more, for the inspirational correspondence and ongoing opportunity to cooperate.
