Secure cumulative knowledge memories, transactional logs and blockchain research & development since 1986


Knowledge base
Basic principles

Accumulogs are permanent records of transactions accumulated from the collection of data input as an intermittent or continuous data stream. Unlike blockchains, which have a rigid, convergent data set (e.g. cryptocurrency transactions[1]), Accumulogs qualify data elements by adding an associated layer containing Locational State[2] elements. Every data set element is linked to a location (space-time), a state (an object dataset with object, properties and methods) and a validation procedure.
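As a rough sketch of this structure, the record below pairs a data element with a Locational State layer and a validation procedure. All field names here are illustrative assumptions, not an actual Accumulog schema:

```javascript
// Minimal sketch of an Accumulog-style record: each data element carries
// a Locational State layer (space-time coordinates plus an object state)
// and the result of its validation procedure.
function makeRecord(data, location, state, validate) {
  return {
    data,                  // the raw data element
    locationalState: {
      location,            // space-time coordinates, e.g. { lat, lon, timestamp }
      state,               // object dataset: object, properties, methods
    },
    valid: validate(data), // result of the validation procedure
  };
}

// Example: a rainfall reading validated against a plausible range.
const record = makeRecord(
  { rainfallMm: 12.4 },
  { lat: 52.2, lon: 0.12, timestamp: "2024-05-01T06:00:00Z" },
  { object: "rain-gauge-7", properties: { units: "mm" } },
  (d) => d.rainfallMm >= 0 && d.rainfallMm < 500
);
```

The key point is that the location, state and validation travel with the data element rather than being held in a separate system.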

Transparency & reliability

Accumulogs were designed to support learning and decision-making, so the quality and reliability of this information must be of a high order. The access protocols and information presentation dialogs are designed to facilitate recall on the part of users by providing context as well as exposing inter-relationships within the dataset. The role of the validation procedure is to introduce a data reliability and quality control check. The current versions run in a Plasma Operating System[3] and use the Object Profile Elements Extension (OPEE)[4]. OPEE is an addition to OOP[5], developed specifically to support Accumulog operations by validating data to prevent inaccurate or erroneous data being recorded.
Unlike blockchains, Accumulogs are not restricted to transactional information, which is generally recorded through automated reference coding (convergent) that supports transactions. Accumulog data sets include additional information input by people, introducing the risk of wider margins of error as well as the possibility of intentional misrepresentation (divergence). Therefore, if validation detects data quality deficits, the data should be corrected and the change(s) logged. However, if a user leaves data on the system that validation has flagged as erroneous, this fact is also recorded.
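The validation-and-logging behaviour described above can be sketched as an append-only log in which both corrections and uncorrected flagged entries are recorded. This is an illustrative model, not the actual OPEE API:

```javascript
// Sketch of the validation flow: entries are only ever appended, and the
// log records whether each entry was valid, corrected, or left flagged.
function appendValidated(log, entry, validate) {
  const result = validate(entry.value);
  if (result.ok) {
    log.push({ ...entry, status: "valid" });
  } else if (entry.correctedValue !== undefined) {
    // Deficit detected and corrected: the change itself is logged.
    log.push({ ...entry, status: "corrected", note: result.reason });
  } else {
    // User left flagged data on the system: that fact is recorded too.
    log.push({ ...entry, status: "flagged", note: result.reason });
  }
  return log;
}

const log = [];
const checkYield = (v) =>
  v >= 0 ? { ok: true } : { ok: false, reason: "negative yield" };
appendValidated(log, { field: "yieldTHa", value: 3.2 }, checkYield);
appendValidated(log, { field: "yieldTHa", value: -1 }, checkYield);
```

Because nothing is ever overwritten, the presence of a "flagged" entry is itself part of the permanent record, which is what makes the stream transparent.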

This makes Accumulog data streams more reliable and transparent, conforming to the original objective of Accumulogs and Locational State Theory: to support learning systems and decision analysis based on reliable information.


The potential applications are vast. Because of the original expertise of the developer and the focus of professional effort[6], Accumulogs have been developed to address some of the most complex decision analysis domains: natural resources, agriculture, innovation and economic development. These domains combine a wide range of exacting technical and economic information with conditions of uncertainty. The general approach is to build decision analysis models of any proposed action, such as a project, and use Locational State considerations to evaluate the potential impacts of those factors characterized by uncertainty, so as to quantify the dimensions of risk as a basis for selecting preferable options for actions or project designs.


Although Accumulogs and Locational State Theory combine to create novel and sometimes complex concepts, the development of Accumulogs has been directed towards eminently practical applications. Data streams are generated by:
  • ongoing activity implementation (such as a production system or project)
  • simulations of agricultural development, innovation and economic development processes or projects during their design phases
This data is streamed to an Accumulog that records all data to support:
  • the management of all phases of individual project or process design and implementation
  • the management of all projects based on a transparent multi-project or process portfolio operation
All data streams into a Portfolio Data Warehouse (PDW)[7]. A PDW is created by configuring all of the project data sets contained within the Accumulog as a portfolio. This provides transparent support for donors, investors and project managers, who have total oversight through a real time audit (RTA)[8] and real time decision support in response to change. The Data Warehouse model enables the sharing of benchmarks and lessons learned on projects under the management of a single organization. The data acquisition and sharing model is completely decentralized, enabling those who generate the data to monetise selected elements: for example, supplying operational performance benchmarks achieved by different types of project or business process to agricultural or manufacturing extension services or statistical organizations. As a portfolio is extended through the addition of new projects, the datasets provide an increasingly refined knowledge base on the association of performance benchmarks with specific project level contexts. This increases the value of the information.
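The portfolio-level benchmarking described above can be sketched as a simple aggregation over project data sets. The project structure and metric names are invented for illustration and are not the Navatec or PDW schema:

```javascript
// Sketch of deriving a shared benchmark across a portfolio of projects,
// the kind of cross-project metric a PDW makes available.
function portfolioBenchmark(projects, metric) {
  const values = projects
    .map((p) => p.metrics[metric])
    .filter((v) => v !== undefined);
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  return { metric, sampleSize: values.length, mean };
}

// Three hypothetical projects reporting cost per hectare.
const projects = [
  { id: "P1", metrics: { costPerHa: 420 } },
  { id: "P2", metrics: { costPerHa: 380 } },
  { id: "P3", metrics: { costPerHa: 460 } },
];
const bench = portfolioBenchmark(projects, "costPerHa");
```

As new projects join the portfolio, the sample size grows and benchmarks of this kind become more representative, which is the mechanism behind the rising value of the information.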

The Accumulog model applied to project cycle and portfolio management contains data covering the whole project cycle, including design, procurement, implementation and post-funding activity. Part of the OPEE structure is that projects are designed on the basis of deterministic decision analysis models[9], so as to enable simulation of project option scenarios, including alternative input and output circumstances, processes and techniques, and such factors as weather impacts. The system also enables evaluation of potential external and internal change impacts on a project, as well as impacts on communities, the environment and ecosystems. The simulation techniques so far deployed in Accumulog configurations include well-established operations research techniques: Monte Carlo simulation, SIMPLEX optimization, sensitivity analysis, human resource learning curves, dynamic chain sequences (including Markov chains) and population growth impacts of resource consumption and requirements. Such simulation models are assessed against actual data and benchmarks to ensure reliability for the purposes of project design and in support of implementation decision-making.
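To illustrate the kind of scenario analysis listed above, here is a toy Monte Carlo simulation of a weather-driven yield factor and its effect on a project margin. The distribution and figures are invented for illustration only:

```javascript
// Toy Monte Carlo sketch: sample an uncertain yield factor (e.g. weather
// impact), compute the resulting margin, and estimate the probability of
// a loss across many trials.
function monteCarloLossProbability(trials, rng = Math.random) {
  let losses = 0;
  for (let i = 0; i < trials; i++) {
    const yieldFactor = 0.7 + 0.6 * rng(); // uniform on [0.7, 1.3]
    const revenue = 1000 * yieldFactor;    // hypothetical revenue model
    const cost = 900;                      // hypothetical fixed cost
    if (revenue - cost < 0) losses++;
  }
  return losses / trials;                  // estimated probability of loss
}

const pLoss = monteCarloLossProbability(100000);
```

In this toy setup a loss occurs whenever the yield factor falls below 0.9, so the true loss probability is one third; the estimate converges on that value as the number of trials grows, which is exactly how such models are checked against known benchmarks.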

Improving the value of historic datasets

Amongst the most difficult data to collect is accurate locational state data, especially that relating to phenomena such as weather patterns, environmental conditions and natural processes, or even political and economic events. Standard blockchains are a static ledger. Accumulogs, on the other hand, continue to collect new, validated data, and especially locational state data. This enables Accumulog records input in the past to be related to increasingly refined locational state data, improving the understanding of the nature of, and relationships between, historical data elements. This provides a powerful vector for learning based on instructional simulation using more refined and complete data. The original data records are not altered; rather, layers of more in-depth interpretative analysis are added, helping to reveal previously "hidden" relationships. As a result, Accumulogs are not static ledgers in the common sense of the term: they extend blockchains with additional non-intrusive data that can expose valuable data relationships based on the OPEE approach and simulation. This is consistent with the original purpose of Accumulogs: to support learning systems and decision analysis.
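The non-intrusive layering described above can be sketched as follows: original records are frozen, and later, more refined locational state interpretations are appended as a separate layer that merely references them. The record shapes here are illustrative assumptions:

```javascript
// Sketch of non-intrusive layering: original records stay immutable,
// while refined interpretations are appended in a separate layer.
const originals = Object.freeze([
  Object.freeze({ id: 1, rainfallMm: 12.4, recordedAt: "2001-06-01" }),
]);

const interpretations = [];
function addInterpretation(recordId, refinedContext) {
  // The original record is never modified; analysis is layered on top.
  interpretations.push({ recordId, refinedContext, addedAt: Date.now() });
}

// A later, more refined climatic context for a historical reading
// (the context values are hypothetical).
addInterpretation(1, { climatePhase: "La Niña", source: "reanalysis dataset" });
```

Freezing the originals makes the "non-intrusive" property explicit: any attempt to rewrite history fails, while the interpretation layer can keep growing as better locational state data arrives.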


1. Crypto-currency transactions on blockchains, such as Bitcoin, usually register the identities of the transacting parties, the direction of transfer, a date-time stamp, the currency ID, the quantities involved and the exchange rate (price quotation).
2. Locational State theory embeds time-space elements into datasets to secure absolute coordinates of events and transitions. Locational State site
3. Plasma Operating System (POS) is an internal control framework for cloud-based server-side scripts that has been developed for the Plasma DataBase by
4. Object Profile Elements Extension (OPEE) is an extension to the OOP profile that adds validation as an essential support for Accumulog operations. In terms of scripting it represents an added component to the ECMAScript and ISO international JavaScript server side extension. Developed by
5. Object-Oriented Programming (OOP) was developed in the 1960s as a means of scripting reality to build computer-based simulation models such as the SIMULA series. This development took place in Norway. Kristen Nygaard started writing computer simulation programs in 1957. Nygaard saw a need for a better way of describing the heterogeneity and the operation of a system. To go further with his ideas on a formal computer language for describing a system, Nygaard realized that he needed someone with more programming skills than he had; Ole-Johan Dahl joined him in January 1962. SIMULA 67 and modifications of SIMULA were used in the design of VLSI circuitry (Intel, Caltech and Stanford). Alan Kay's group at Xerox PARC used SIMULA as a platform for their development of Smalltalk (first language versions in the 1970s), extending object-oriented programming significantly through the integration of graphical user interfaces and interactive program execution. Bjarne Stroustrup, from Denmark, started his development of C++ (in the late 1970s) by bringing the key concepts of SIMULA into the C programming language; the idea arose from his doctoral work at Cambridge University. SIMULA also inspired much work in the area of program component reuse and the construction of program libraries. The central operational implementation strategy in support of Accumulogs and LST is the use of a server-side extension of the international standards for JavaScript, which supports OOP. As a result, the operational framework builds simulation models by default.
6. Accumulogs and Locational State Theory were both conceived by H. McNeill in 1985, then a Senior Scientific Officer at the Information Technology & Telecommunications Task Force (ITTTF) in Brussels who managed the DELTA learning systems initiative. McNeill is an agronomist with postgraduate qualifications in economics and systems engineering from Cambridge and Stanford Universities. He is a specialist in the development of coding methods and procedures for decision analysis applied to agricultural economic development project cycle and portfolio management.
7. Portfolio Data Warehouse (PDW) was proposed by Hector McNeill in 2016 as a more effective substitute for General Data Warehouses, sometimes referred to as Big Data. McNeill has argued that PDWs contain a convergent and coherent body of knowledge of evolving quality and value, enabling the specification of coherent deterministic decision analysis models. General Data Warehouses (GDW) combine data collected for different purposes, including administrative and regulatory information. This information is often modified by data suppliers and even authorities, so these inaccurate and unreliable data elements can add too much noise to the system. The result is often a lack of the required levels of associative coherence, reducing the utility of this information and its value in specifying good quality deterministic decision analysis models. See Agricultural innovation.
8. Real Time Audit (RTA) is a 24/7 real time oversight system that operates within the Navatec Cloud, with global coverage, to access all projects in portfolios that use the Navatec System, a cloud-based Software as a Service (SaaS) for project cycle and portfolio management. See also RTA.Systems
9. Decision Analysis Models (DAMs) are based on R. Howard's deterministic decision analysis models developed at the Stanford Research Institute. These have been advanced by SEEL (Systems Engineering Economics Lab) to combine OOP and OPEE as a Seel-Telesis Systems Development Programme activity.