⚠️ This is the first half of the post but it contains most of the important basic information.

Please take a moment and observe the red wire in this picture. It could be the most interesting thing you see today. A simple red wire. Look at it like a child would. Isn't it quite mysterious? What are all the things it is capable of? How does it actually work? Which basic concepts from it could we apply to understanding the digital equivalent of a connection?

This **red wire together with two tin cans** is a **connection**.

Message order is guaranteed. It is not possible to say two things into such a microphone and have the *channel* mix them up, making the earlier one arrive later. This is a great property to have and one that is usually taken for granted.

Messages in this simple analogy are **not encrypted** 🔐 but this is not relevant for the journey of understanding we are about to embark on. Encryption is easily added on top of the basic design we are going to invent now.

In our physical case we can clearly identify these two aspects:

- **a wire**: *a physical medium* over which a connection can take place
- **a connection**: a *wire that connects a sender and a receiver*

Sender and receiver can be instantly reversed over the same connection, so it would be more accurate to say that a *bidirectional connection is a wire that connects two transceiver nodes*. Our physical example can only work in one direction at any given time, but the general definition still holds:

A bidirectional connection connects two nodes and allows message passing in both directions.

We will start with the basic Connectome API design now.

Let's now move into the digital realm.

**channel**: an abstract view of both the physical medium and the connection.

From the perspective of one node it looks like this:

It is an object exactly like this:

We have exactly one channel for each connection. It is a clear 1:1 analogy with our toy example. The analogy will break a little at some point, but it is actually quite amazing how far it holds.

When we have the channel, we can use it immediately; **it comes ready to go.**

However, once closed, we lose it forever. Like this:

So far we have talked about how a channel gets destroyed (when the connection drops), but how does it get created in the first place?

A channel gets created each time the other side connects to us.

Let's go back to the basics for a minute.

**Each connection consists of exactly two nodes** and they can be equal in all aspects except in one.

In the Connectome design they can never be equal in one respect: only **one node** was *initiating* the connection while the other was the *receiver* of the connection. Once the connection is active, it technically does not matter which is which, but we still like to keep things clean, so we call the "channel" on the initiating side a **connector**. This is the connector object:

This is the **entire picture:**

From browser:

```
import { connectBrowser } from 'connectome';
const connector = connectBrowser({ address, protocol, lane });
```

From node.js:

```
import { connect } from 'connectome';
const connector = connect({ address, protocol, lane });
```

When we do this we will have a `connector` object immediately, but it may or may not be connected yet. We can check like this:

`connector.isReady();`

However we usually want to do this instead:

```
connector.on('ready', () => {
  // now we are ready to send and receive messages of any kind
});
```

At almost the same instant as we get our `ready` event, the other side will get a completely prepared `channel` handed to them inside an `onConnect` function:

```
function onConnect({ channel }) {
  // we have our channel now, also ready to send and receive messages
}
```

We can see that both `connector` and `channel` are conceptually the same thing. They can both be treated as a *channel*, which is a very broad concept from communication theory.

Differences that are good to know:

- we always have a `connector` object on the side that initiated the connection
- the recipient of a successful connection attempt will be using a `channel` object for further communication
- a `channel` is never used in the browser because the browser always initiates the connection and never receives one
- a `connector`, when disconnected, will keep retrying to connect
- a `channel`, when disconnected, will be destroyed, but a new `channel` will be created when the other side reconnects successfully
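The retry behaviour of a connector is typically implemented with some form of backoff between attempts; a minimal sketch (the function name and the delay values are illustrative, not actual Connectome internals):

```javascript
// Capped exponential backoff: 100 ms, 200 ms, 400 ms, ... up to 10 s.
function reconnectDelay(attempt, baseMs = 100, maxMs = 10000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

reconnectDelay(0); // → 100
reconnectDelay(3); // → 800
reconnectDelay(9); // → 10000 (capped)
```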

More experienced programmers might ask: what is new in this design?

My answer: if you knew everything already, then nothing is new for you; if you knew almost nothing about this topic, then **everything is new for you**. If you never or rarely used raw WebSockets then by default this information will be new for you.

There is no such thing as a "connection" in the physical sense. All connections are just logical layers. Even *"open connections"* have some time resolution within which the process can be certain that anything sent to the remote endpoint will indeed arrive. With every microsecond that passes since the last received ping message, the process can be less sure whether the connection is "still open" or only seems so.
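A minimal sketch of such a liveness check, assuming the remote side sends periodic ping messages (all names here are illustrative):

```javascript
// If no ping has arrived within `timeoutMs`, we can no longer assume the
// connection is open. `now` is a parameter so the logic is easy to reason about.
function makeLivenessCheck(timeoutMs) {
  let lastPing = Date.now();
  return {
    onPing() { lastPing = Date.now(); },
    seemsOpen(now = Date.now()) { return now - lastPing < timeoutMs; },
  };
}

const liveness = makeLivenessCheck(5000); // 5 s of silence: assume dead
```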

We will go into these details in a day or two. This post will be updated and increased in size by about 100%.

The concept of a connection is further complicated by packet-switched networks (which the Internet is). WebSockets abstract the details away, so we can think of our connection as the red wire from the first picture, although in actual reality it is anything but.

Conceptually it is exactly like that though.

What can we send over it?

Everything. How? Stay tuned :)

My plan with this post is to give the initial pointers for everyone's individual further research if they are so inclined. I'm clarifying that I'm learning all of this in a more formal way myself and am inviting other previously inexperienced (younger?) programmers and thinkers in this area to consider investing time into this fascinating and very broad subject.

We cannot possibly cover everything in one post, not even in one book. This area is an inexhaustible source of interesting problems and very effective but partial solutions to specific subsets of problems. The beauty of it is that all of it can be described very well with mathematics and computation primitives.

The most important notion to take away from this introduction is that there are **always tradeoffs** in Distributed Systems. We cannot have *all the nice properties at the same time*, something always has to give. The reason for this is the underlying "imperfection" (or better: *reality*) of the physical medium on which we are computing. See fallacies of distributed computing.

Distributed Systems are intimately connected with *state*, please read the previous *(recently updated and expanded)* post about State Machines as well.

Let's now begin with a definition from a famous research paper written in 1978:

A distributed system consists of a collection of distinct processes which are spatially separated and which communicate with one another by exchanging messages.

Source: Time, Clocks and the Ordering of Events in a Distributed System, 1978

Here is one computer:

And here are two *spatially separated computers:*

They *communicate with one another by exchanging messages:*

If time behaved as we usually think it does, everything would be very simple and we would always know the Order of Things.

The second part of the definition from the famous paper by Lamport:

A system is distributed if the message transmission delay is not negligible compared to the time between events in a single process.

Also in the paper:

A single computer can also be viewed as a distributed system in which the central control unit, the memory units, and the input-output channels are separate processes.

When considering a network of computers (or even a pair of computers), our logical unit (process) is one computer. The time needed for inter-process messages is much higher than the time for internal messaging: \( t \ggg t_{int} \).

As we saw in our state machines post, where we draw the abstraction line matters. We have to decide at which level of detail we are actually looking.

**Distributed systems as a network of spatially separated computers with non-negligible messaging delays** are thus not concerned with implementation details of each computer separately. Each computer is a single process from this perspective. Abstraction is a very powerful mental model when working with both mathematics and computation. We only bring important things in focus at each level and *we ignore all the internal details of implementation*.

This concludes the introduction to the paper but does not explain any of its results, findings, or importance. Please watch the interview with the author of the paper linked at the bottom of this document. The interview takes some *time* but is very well worth it. It *flows* nicely and builds the tempo masterfully.

With each historically important and broadly cited scientific paper it is great to know a little bit (a lot) of the context of how it came about and other such "trivia". Fortunately we are able to obtain this **incredibly interesting contextual information** about this particular paper 🤩. Here it is:

💡 Consider reading the really succinct background story written by Leslie Lamport, the author of the paper.

These are the two important pieces of information:

Jim Gray once told me that he had heard two different opinions of this paper: that it’s trivial and that it’s brilliant. I can’t argue with the former, and I am disinclined to argue with the latter.

Read this excerpt to see what the brilliant part was and what was actually trivial:

The origin of this paper was the note The Maintenance of Duplicate Databases by Paul Johnson and Bob Thomas. I believe their note introduced the idea of using message timestamps in a distributed algorithm. I happen to have a solid, visceral understanding of special relativity. This enabled me to grasp immediately the essence of what they were trying to do.

Special relativity teaches us that there is no invariant total ordering of events in space-time; different observers can disagree about which of two events happened first. There is only a partial order in which an event e1 precedes an event e2 iff e1 can causally affect e2. I realized that the essence of Johnson and Thomas's algorithm was the use of timestamps to provide a total ordering of events that was consistent with the causal order. This realization may have been brilliant. Having realized it, everything else was trivial.

Because Thomas and Johnson didn’t understand exactly what they were doing 🥺, they didn’t get the algorithm quite right; their algorithm permitted anomalous behavior that essentially violated causality. I quickly wrote a short note pointing this out and correcting the algorithm.

Source: **Time, Clocks and the Ordering of Events in a Distributed System**

↑ 1st important piece of information is the importance of the notion of **relative, subjective time** for each process and implications for **possible ordering of events** that is even theoretically achievable.

Fundamental theory of distributed systems should be based on fundamental theory of information propagation through space-time and that is special relativity published in 1905 by Albert Einstein. Leslie Lamport was the first person to really understand the importance of this fundamental connection between mathematics, physics and (distributed) computing.

Read more about **Special Relativity, 4D Space-Time Mathematical Construct** and **Partial Ordering of Events** here:

And especially from this great book from 2018:

The answer is: most distributed systems have aspects of State Machines and it is very important to apply state transitions in the 'correct order' or else we end up with **inconsistent state.**

Further comment from Leslie Lamport from the background story of his paper:

This is my most often cited paper. Many computer scientists claim to have read it. But I have rarely encountered anyone who was aware that the paper said anything about state machines 🙄. People seem to think that it is about either the causality relation on events in a distributed system, or the distributed mutual exclusion problem. People have insisted that there is nothing about state machines in the paper. I've even had to go back and reread it to convince myself that I really did remember what I had written 😒.

↑ 2nd important notion is that if we want to maintain a consistent distributed state, we have to take great care when thinking about the **ordering of events**.
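The mechanism at the heart of the paper can be sketched as a Lamport logical clock: each process keeps a counter that ticks on local events and sends, and on every receive fast-forwards past the incoming timestamp, so timestamps never contradict causality. A minimal sketch:

```javascript
// Minimal Lamport logical clock (a sketch of the paper's core mechanism).
function lamportClock() {
  let time = 0;
  return {
    tick() { return ++time; },      // local event
    send() { return ++time; },      // returns the timestamp for the outgoing message
    receive(msgTime) {              // merge the sender's timestamp on arrival
      time = Math.max(time, msgTime) + 1;
      return time;
    },
    now() { return time; },
  };
}

// Two spatially separated processes exchanging one message:
const a = lamportClock();
const b = lamportClock();
a.tick();               // a has a local event: a's clock is 1
const stamp = a.send(); // a sends a message stamped 2
b.receive(stamp);       // b jumps to max(0, 2) + 1 = 3, preserving causal order
```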

We haven't talked about the byzantine type of distributed systems failures; that is not the topic for this introductory post, as these developments came some time later in the research history of distributed systems. They bring the level of difficulty and attention needed when designing such systems to yet another level. With byzantine fault tolerant systems we have to account not just for the physical reality of computation and the proper ordering of events but also for lost messages, intentionally faulty processes emitting wrong information (attacks on the distributed system), etc. All of this builds firmly on Leslie Lamport's findings: all of it, be it modern blockchain systems or the big data-center synchronization protocols used by a few of our well-known technological giants.

Leslie Lamport modestly says that people from Amazon told him that although they don't understand his Paxos algorithm (1989) completely, the practical and empirically tested difference between it and other algorithms in a similar category is that Paxos always works and never fails in any *of the weird edge cases*.

The asynchronous nature of messaging between different processes of a Distributed System means that there is no common clock between processes and also no reliable upper bound on how much time messages are expected to take (we never know how long a message roundtrip should be; any cutoff is an arbitrary number). If we add an unreliable communication medium into the mix, we have a perfect storm ⛈️ How do we know if a message actually arrived? Usually we have to receive a confirmation (another message in the reverse direction). But how long do we wait for a confirmation before deciding that the message is lost? And what if the confirmation message gets lost?
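A common practical pattern is to wait for the confirmation only up to some (necessarily arbitrary) deadline; a sketch:

```javascript
// Wait for an acknowledgement, but only up to `ms` milliseconds; past the
// deadline we simply decide the message (or its ack) was lost. The cutoff
// is necessarily arbitrary, exactly as described above.
function withTimeout(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('ack timed out')), ms);
  });
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}
```

Usage would look like `withTimeout(sendAndAwaitAck(msg), 5000)`, where `sendAndAwaitAck` is a hypothetical function returning a promise of the confirmation message.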

Let's keep this subject open and investigate it further in the post about synchronous vs. asynchronous messaging.

The second part of this post takes us one level higher: it is about distributed data stores. Distributed data stores can be thought of as an **upgrade of the definition of Distributed Systems in some sense**. What Leslie Lamport managed to define are not distributed data stores per se but rather a mechanism for distributed synchronization. This, however, is needed for any kind of distributed data store. Many more people built on top of Leslie Lamport's firm foundation and they created a whole blossoming, ever growing field based on his lifetime of theories about *concurrency*.

The most mentioned concept at the conceptual level of distributed data stores is the famous CAP theorem, also named **Brewer's theorem**.

The CAP theorem is a **trilemma** (we have 3 possible options):

⚠️ *Common misconception:* our options are **not** *Consistency (C)*, *Availability (A)* and *Partition Tolerance (P)* individually but rather any combination of **two of these three concepts**:

We can **never have all three great things together.**

More formally we calculate the number of our options with this equation:

\( \binom{n}{k}\) where \(n=3\) and \(k=2\):

This means all possible combinations of *k* items out of a set of *n* items. This is how we calculate the actual result denoted by this special bracket symbol:

\[ \binom{n}{k} =\frac{n!}{(n-k)!k!} \]

\( \binom{3}{2}=3 \); we interpret this as: there are 3 possible combinations when choosing 2 options out of the 3 available.
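The same calculation as a quick sanity check in code:

```javascript
// Direct check of the binomial formula above: C(n, k) = n! / ((n - k)! k!)
const fact = n => (n <= 1 ? 1 : n * fact(n - 1));
const choose = (n, k) => fact(n) / (fact(n - k) * fact(k));

choose(3, 2); // → 3
```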

All this is very obvious and writing binomial equations here is kind of useless, but I had to throw it in because binomials are otherwise great, and besides, a small LaTeX exercise seemed useful. Sorry! This has nothing to do with distributed systems though; sorry again, back to the gist of *distributed data stores* now! 😸

As we saw, we can only pick two from the menu of three. This gives us these possibilities:

It should be mentioned that the exact descriptions of what the CA, CP and AP options actually mean in practice can differ greatly. It depends on who exactly is explaining, what practical experience they have, etc. There are indeed many compatible possible viewpoints.

Also be aware that the CAP theorem is not without its critiques.

I hope that you sensed (I certainly did, today, finally 🙃) that the CAP theorem is not in the same category of scientific results as some other formulations and theories, for example Lamport's. It is much muddier but still a very useful framework for thinking about distributed systems / data stores.

Here are some interesting slides suggesting that partitions are almost always a part of a distributed system, and the question is then which of the remaining two properties, Availability or Consistency, we decide to give up. We further learn that there are at least three types of consistency:

- Weak consistency
- Strong consistency
- Eventual consistency

Each one with subcategories of its own! None is formally defined, but they are still somewhat agreed upon among modern researchers.

Partial conclusion: **it is complicated** but still understandable, however this is not strict math with formal proofs, it is more like a thinking framework which is **evolving over time as well**. CAP theorem might mean something a bit different a few years down the road. It will probably not change in its basic form but we will come to understand these tradeoffs even better as we test more systems in practice with ever growing **scale**.

Here is another interesting article called CAP Twelve Years Later: How the “Rules” Have Changed by the author of the original theorem himself.

To get even more sense of the nuance see this excerpt from here:

One last resource that hints about derived concepts of **safety **and **liveness** of distributed systems is here. This is just for my personal reference when I come back for more information / research but feel free to 🐠 explore as well, of course :)

If we are not aware of the basics of what is possible and what we can expect, then the whole subject of computing is rather frustrating instead of enjoyable. We get used to being told that restarting devices or programs frequently is normal and that this is just how things are. The truth is that we can do better, but the difference between 95%-correct systems and 100% formally correct ones is *more than just 5% in perspiration and time invested*, and usually it is not economical for companies to do a proper job when sometimes even 80% of the job is good enough for an average user. Note that a 100% formally correct distributed system does not mean 100% reliability; instead it means:

the best theoretical reliability possible under given conditions and according to the agreed-upon tradeoffs in the specification step

In general we can be certain that some "mysteries" of Distributed Systems are going to be a common knowledge in a decade or less. These kinds of systems are becoming prevalent around and among us.

Formal verification ✓

Formal verification of distributed systems is the future. You can learn more about it here, here and here. If you have additional 5.5h to spare for a more lightweight interview with Leslie Lamport, by all means listen to him speak about his life of research: part 1 | part 2. Somewhere in the interview he describes why he believes there are no errors in any of his many papers.

✓✓✓

UPDATE: this is probably the best resource for hard-core dive into different Byzantine-Fault Tolerant distributed protocols. Check out the entire tarilabs site for more great information.

🐇 Deeper and deeper down we went... Time to resurface!

Please enjoy the upcoming week as much as currently possible.

This and the next post are the longest posts of our *Distributed Systems* series. The first one about Computation and Mathematics was the shortest. We plan to stabilize somewhere in between as we progress. This post also touches almost all the other topics we will yet properly explain in future posts.

State Machines and especially the notion of **state** is the central and most important concept in all of computing. Maybe even in the universe. Without state and *changes in state*, there is no past, present or future. Nothing ever happens. The present is encoded in "the *current state* of the world". *The world* 🌐 can mean either our entire planet or anything else that can be understood as a well defined, mathematically observable (sub)system. We don't understand our planet as a computing system because *emergence* and *complex systems theory* prevent us from ever really understanding each detail of the system in realtime as it is happening. We can understand simpler, isolated parts of state though, individual parts of our (computing) universe. Please read the post about Computation and Mathematics to see what is meant by *computation* (as opposed to *calculation*).

Read this post slowly and with careful thought if you never had the chance to learn how computers manage to be interesting and useful to us while only doing simple operations on *bits*.

*Machine* in this context means an *abstract computing concept*, not an actual physical machine.

Finite State Machines are the simplest possible model of computation.

Please note that "abstract" is written as opposed to *concrete* or *physical* and not as opposed to *fuzzy* or *undefined*. Abstract in this case means mathematical and *very exact*.

State Machines are usually represented as diagrams:

This is a **door** represented with a state machine model. **Circles are states** and **arrows are transitions between states** when certain actions are executed.

State Machines model the behaviour of the system.

A door can only be in one of these states (*Locked, Unlocked or Opened*) at any given time. Direct transitions between some states (for example between *Locked* and *Opened*) are impossible. We have a **finite number** of states — 3.

Each state transition takes some *time*. We can never open the door in 0.0 seconds.
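The door diagram can be written down directly as a transition table; a minimal sketch (the action names `unlock`, `lock`, `open`, `close` are read off a typical such diagram):

```javascript
// The door as an explicit finite state machine: three states, and only
// the transitions drawn in the diagram are allowed.
const doorTransitions = {
  Locked:   { unlock: 'Unlocked' },
  Unlocked: { lock: 'Locked', open: 'Opened' },
  Opened:   { close: 'Unlocked' },
};

function step(state, action) {
  const next = doorTransitions[state][action];
  if (!next) throw new Error(`cannot "${action}" while in state "${state}"`);
  return next;
}

step('Locked', 'unlock');               // → 'Unlocked'
step(step('Locked', 'unlock'), 'open'); // → 'Opened'
```

Trying `step('Locked', 'open')` throws, which is exactly the point: the impossible *Locked* to *Opened* transition is unrepresentable.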

**Our exploration plan:**

- 1st half: philosophical but still realistic, present a few non-everyday viewpoints that should stimulate thinking.
- 2nd half: exact mathematical theory and definition of Finite State Machines. No room for poetic interpretation here. We will leave this to
*verified others*though. We will curate all links and make sure they contain only information of the highest quality possible on this important subject. You can then invest a few days, a few months or a few years into further exploration down this road 🛣️. I certainly will 🐰.

Let's now start with the first half and take a bit of a curious look at the concept of *state *at the lowest possible level we can.

What could be the shortest possible definition to cover all kinds of *states* no matter from which perspective we are looking? We go with this definition:

State is the proto-basic unit of information.

For example, each **bit** has **2 states**. State is thus a *lower-level* concept than the bit.

From 1948 paper by **Claude Shannon**:

If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information.

Notice the concept of **two stable positions** (≡ states). Also notice the term *flip-flop circuit*. A flip-flop circuit is a digital circuit that *traps (saves) state*. We give it an electronic input (*set* or *reset*) and it remembers it. It does so by circulating the current and staying in that *stable* state in perpetuity until we manually change it by giving it the opposite input. This is the hardest diagram in this post but it is still possible to understand it exactly with some effort. It is not crucial that you do though; what follows can be understood in its entirety even if the basic mechanics of this *very interesting* little device at the center of computing are not entirely understood. You will understand the outside behaviour of a *flip-flop* device after continuing the read for sure.

Bistable multivibrator, in which the circuit is stable in either state. It can be flipped from one state to the other by an external trigger pulse. This circuit is also known as a flip-flop. It can store one bit of information, and is widely used in digital logic and computer memory.

An interactive animation where you can try switching between the two states is here. The important elements are two **transistors**. A transistor is a miniature component that conducts current between its two contacts, the *collector* and the *emitter*, under the control of its third contact, the *base*.

⚠️ We usually mark the electrical current *(I) *as flowing from [+] to [-] *(red arrow)*. This is actually wrong because electrons flow in the reverse direction *(blue arrow)*.

This convention was established before the actual facts were fully known. It is not important for understanding basic circuits but it is worth knowing. Electrons have negative charge [-] and are attracted to the positive [+] electrode. The basic symbol for the transistor in the entire human literature has the arrow pointing in the wrong direction :) Once something is widely used it is very hard to change.

**Transistors** can do two things:

- **amplify the current** (for example when you speak into a microphone and the speaker makes your voice louder)
- **act as little switches**, as in the *flip-flop* circuit. Here the main purpose is not to amplify the signal but to use it as a trigger for turning something on and off. This is how we physically transition from continuous currents in physics to discrete systems with well defined separate states. Another name for this is **digital logic**.

It is interesting to notice that a flip-flop circuit is itself conceptually a state machine!
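We can even simulate the two cross-coupled NOR gates directly and watch the latch remember a bit (a toy model of the logic only, not of real electronic timing):

```javascript
// Toy simulation of a NOR-gate SR latch (the core idea behind a flip-flop).
// With S = R = 0 the cross-coupled gates hold their previous outputs:
// that held value is the stored bit.
function srLatch(initial = 0) {
  let Q = initial;
  let Qn = initial ? 0 : 1; // the complementary output
  function settle(S, R) {
    // feed the outputs back into the gates until they stop changing
    for (let i = 0; i < 4; i++) {
      const nextQ = (R || Qn) ? 0 : 1;  // Q  = NOR(R, Qn)
      const nextQn = (S || Q) ? 0 : 1;  // Qn = NOR(S, Q)
      if (nextQ === Q && nextQn === Qn) break;
      Q = nextQ;
      Qn = nextQn;
    }
    return Q;
  }
  return {
    set()   { return settle(1, 0); }, // store a 1
    reset() { return settle(0, 1); }, // store a 0
    hold()  { return settle(0, 0); }, // remember whatever is stored
  };
}
```

Calling `set()` once and then `hold()` repeatedly keeps returning 1: the circuit remembers its state with no further input, exactly the "two stable positions" from Shannon's quote.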

As we see, *'zero'* and *'one'* (or *'high'* and *'low'*) are just names for states; computers don't actually operate on 'zeros' and 'ones' as usual natural numbers.

Information emerges out of the distinction between states and out of the assigned meaning of what these distinct states represent.

We can represent two states with one bit, 4 states with 2 bits, 8 states with 3 bits etc.

With 3 bits in each box we could represent these 8 distinct states: 000, 001, 010, 011, 100, 101, 110, 111. For each additional bit we add, we get twice as many possible states as before (the jump from 4 to 8 states between 2 and 3 bits)! The general formula for this is:

\[ n_{states} = 2^{n_{bits}} \]

This is exponential growth of \( n_{states} \) because \( n_{bits} \) is in the exponent of the equation.
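The formula in action:

```javascript
// All 2^n distinct states representable with n bits, as bit strings:
function allStates(nBits) {
  return Array.from({ length: 2 ** nBits }, (_, i) =>
    i.toString(2).padStart(nBits, '0'));
}

allStates(3);
// → ['000', '001', '010', '011', '100', '101', '110', '111']
```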

Early computers grouped 8 bits together and this group is still called **a byte**. 1 byte (= 8 bits) can store \( 2^8=256 \) different combinations (states). This was enough for basic symbols: all normal letters *A-Z*, all digits *0-9* and a few other symbols. This is the ASCII table and it proved a bit limiting over time. Modern computers now take 32 or even 64 bits as a basic unit for general computation instead of just 8. The modern UTF-8 symbol encoding standard uses up to 4 bytes (32 bits) for each symbol representation.
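We can observe these byte counts directly (`TextEncoder` is available in modern browsers and in Node.js):

```javascript
// How many UTF-8 bytes a given symbol takes:
const bytes = s => new TextEncoder().encode(s).length;

bytes('A');  // → 1 byte: fits in the original ASCII range
bytes('é');  // → 2 bytes
bytes('🧠'); // → 4 bytes: the UTF-8 maximum per symbol
```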

This is how **symbols** emerge out of tiny electric gates! These symbols don't have any *meaning* without a human observer. We show a potentially meaningful *state representation* on a computer screen, and this is how symbols (or User Interfaces in general) cross into our 🧠 *wetware*, where they finally *mean something*. At almost any point in the state processing hierarchy we can have something that we consider meaningful information.

Performance- and stability-oriented computer programmers *(especially those working with distributed systems)* have to be exactly aware of how state is stored and manipulated, while end users don't have to know almost anything about it. They usually notice a bad design immediately, but they don't realize it is very likely related to *state mismanagement* at some level of the program's computation.

Distributed systems are notoriously hard exactly because we must make sure the various *copies* of our state always stay consistent with each other. We can either get different versions of the state and not be sure which is right, or we can corrupt our state by accident so it doesn't represent reality anymore. With fast systems where messages arrive quickly one after another (sometimes nanoseconds apart) we have to be sure to apply them in the right order or we can get a wrong final state. It is not always possible to know which order is correct because **nature does not keep an absolute and objective clock.** Each processing unit is working with its own *relative time* ticking. This is a subject more appropriate for next week (a general post about *distributed systems*).

Let's go back to the main topic of this post now.

To quote the Turing Award winner Leslie Lamport again:

I believe that the best way to get better programs is to teach programmers how to think better. Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages.

Manipulation of *state* (e.g. *DATA STRUCTURES*, with or without explicit State Machines) is the fundamental concept of Computer Science and will make you think about programming in a tremendously more structured way than just thinking about *if / else statements* and *for loops*.

Consider this representation of information:

```
{
  "currentSong": "Van Morrison - Alan Watts Blues",
  "status": "playing"
}
```

If we save this somewhere, we can consider this *JSON document* our representation of **state**. It gives us information about the media player, and this information is represented exactly by this document, which can be easily converted into *a string of letters*. Computers know exactly how to save strings into memory or onto a hard drive. This is the same as with bits, which are also saved and represent state. Here we have a document that also represents state, and information emerges somewhere between that document inside the computer and our interpretation and confirmation of its effect (the song actually playing).
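The round trip from state object to string and back:

```javascript
// The media player state, saved as a plain string and restored:
const state = {
  currentSong: 'Van Morrison - Alan Watts Blues',
  status: 'playing',
};

const saved = JSON.stringify(state); // a plain string, trivial to store anywhere
const restored = JSON.parse(saved);  // the state object, reconstructed

restored.status; // → 'playing'
```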

If the **state which is saved in the computer** is not synchronized with its actual meaning (for example, a different song is playing), then the entire theory of deterministic state manipulation and interpretation fails and we don't have a working computing system. We always have to take care that our state machines are in sync with reality.

The machine has to perfectly and reliably execute its behaviour according to the inputs presented, never making a mistake. If mistakes are made because of physical reality, they have to be immediately recognized and mitigated. We also have to know when we are not able to know whether a mistake has been made.

**Blockchains** are also an example of a computing system that is best thought of as a **State Machine**, albeit with some interesting **additional security properties** (prevention of undesired state modifications while operating permissionlessly as an open protocol). The distributed security (consensus) protocol is not directly part of the state machine, so we can forget about it for now (until our installment about Blockchains in a few weeks).

We can think of a blockchain as a repository of various states (for regular account balances as well as for each smart contract). We can name the sum of all these states simply as **"Blockchain state"**.

At each block (usually every few seconds) the blockchain accepts valid pending transactions and *bakes them immutably into this particular block*. We always get to the end (current) state of the blockchain by applying all the state transitions in each block, from the genesis (zeroth) block onward, one after the other. All **state transitions** from the beginning give us the **end state**. The blockchain stores the entire history (state transitions) but we are usually only interested in the end state. We need the history to be sure that our end state is valid and that there were never invalid or fake transitions involved.

What are blockchain transactions, actually? Each transaction can either move some balance between a pair of accounts or **execute a more complex computation inside some smart contract**. Both of these are instances of valid state transitions.
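A toy sketch of this idea, with invented account names and only the simple balance-moving kind of transaction (real blockchains are vastly more involved): the end state is just the genesis state with every valid transition applied in order.

```javascript
// Genesis state: account balances at block zero (made-up accounts).
const genesis = { alice: 100, bob: 0 };

// Each transition moves some balance between a pair of accounts.
const transitions = [
  { from: 'alice', to: 'bob', amount: 30 },
  { from: 'bob', to: 'alice', amount: 10 }
];

// Applying one valid state transition produces the next state.
// Invalid transitions (overspending) are rejected.
function applyTransition(state, tx) {
  if (state[tx.from] < tx.amount) throw new Error('invalid transition');
  return {
    ...state,
    [tx.from]: state[tx.from] - tx.amount,
    [tx.to]: state[tx.to] + tx.amount
  };
}

// "All state transitions from the beginning give us the end state":
const endState = transitions.reduce(applyTransition, genesis);
console.log(endState); // { alice: 80, bob: 20 }
```

Replaying the whole history through `applyTransition` is exactly how we convince ourselves the end state is valid: any invalid or fake transition would make the replay fail.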

[short comment --> some of this will probably be moved into a separate post about Blockchains because it got a bit too long]

Computers somehow manage to appear 'in touch with us' although all they do is manipulate electric currents to quickly turn billions of little digital switches on and off.

We sometimes manage to continually put computers into the correct state to reflect our own thoughts, organize our ideas and make interpersonal communication easier.

They are actually our mirrors, and how we behave is reflected and amplified back to us through our *globally interconnected information amplifiers*. Before 2000 it was mostly a one-to-one relationship with our digital state manipulators, but since then, and especially after 2020, we are starting to see a few *unified global state machines* emerging, which will increasingly hold all our important state information and secure our digitally-mirrored real-life relationships with one another.

Here things get weird, and this is the current edge of research. The goal is to save *state* not in "giant flip-flop circuits" (in relative terms) but to **save state directly in elementary particles**! These are many orders of magnitude smaller building blocks for this kind of computation. Small size is not the only difference: at this scale we bump into physical laws we are not really familiar with in our everyday lives and have no intuitive notion about. Quantum computing is weird, but so was our current computing roughly 70 years ago.

**Quantum computers** (if possible in practice) will once again change our entire worldview within less than one generation.

Perhaps an 🐠 exploratory learning course in quantum computing is coming this time next year. A promise given to oneself is only good if fulfilled. Promising others makes fulfilment more likely, because what we openly promise should hurt us in some way in case of non-delivery. This was a self-made, unprovoked promise, but it is still a promise :) If not next year, then one of the coming years **for sure**.

As promised, here are some of the greatest resources on State Machines and Statecharts:

Expansion of the concept of State Machines:

Library implementing standard compliant State Machines and Statecharts:

Read the docs! Lots of great resources, video tutorials and other great useful information in there.

For even more resources in this area please consider using the **ZetaSeek** re/search engine. The engine is powered by the Connectome library.

And have a great week! 👍


We often think of Mathematics as the art and science of manipulating numbers. This is not entirely wrong but is still quite a limited view.

⬆this is how mathematics and computation looked* many centuries ago*

This is modern mathematics:

More recently mathematics is about **big abstract ideas** and then trying to make them useful in *the real world* (which is desirable but not always possible). The abstract space of mathematics is conceptually larger than anything else we know, and only minuscule parts of it find *applications in practice*. We often find uses for very abstract theories only after a long time has passed since the initial discoveries.

One of the big ideas of mathematics is the notion of *computation*. Please note that *computation* is not the same thing as *calculation*:

*Calculation* is understood as *applying basic operations to numbers*.

**Computation** is all about continuously and reliably **manipulating state** to achieve the appropriate results.

Here is one prominent way of describing computation — a **State Machine**:
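As a tiny illustration (with state and input names invented for this sketch, not taken from any particular library), a state machine can be written down as nothing more than a set of states and a transition table:

```javascript
// A minimal state machine sketch: an initial state and a table
// mapping (state, input) -> next state. Names are illustrative.
const machine = {
  initial: 'stopped',
  transitions: {
    stopped: { play: 'playing' },
    playing: { pause: 'paused', stop: 'stopped' },
    paused:  { play: 'playing', stop: 'stopped' }
  }
};

// Deterministic step function: inputs with no defined transition
// leave the state unchanged.
function next(state, input) {
  const row = machine.transitions[state] || {};
  return row[input] || state;
}

let state = machine.initial;
state = next(state, 'play');  // 'playing'
state = next(state, 'pause'); // 'paused'
```

Everything about this machine's behaviour is captured by plain data and one pure function, which is exactly why state machines can be described with ordinary mathematics.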

Please read this quote from the 2008 paper by Leslie Lamport:

I have tried to show that state machines provide a simple conceptual framework for describing computation.

...

The advantage of state machines is that they can be described using ordinary mathematics. Mathematics is simpler and more expressive than any language I know that computer scientists have devised to describe computations. It is the basis of all other branches of science and engineering. Mathematics should provide the foundation for computer science as well.

I hope that with this explanation we can better appreciate how Mathematics and Computation are related but not the same.

We have many hills to climb together on further explorations of the territory.

This is the plan for **The Introduction to Distributed Systems and State Management** *exploratory course / blog series*:

01. FRI 2020-10-23 ◈ **Computation and Mathematics**

02. FRI 2020-10-30 ◈ **State Machines**

03. FRI 2020-11-06 ◈ **Distributed Systems**

04. FRI 2020-11-20 ◈ **Connectivity**

05. FRI 2020-11-27 ◈ **Synchronous vs. Asynchronous Transmissions**

06. FRI 2020-12-04 ◈ **Distributed State**

07. FRI 2020-12-11 ◈ **Graphical User Interfaces as State Machines**

08. FRI 2020-12-18 ◈ **Brief Introduction to Blockchains**

09. FRI 2020-12-25 ◈ **Introduction to Connectome JavaScript library**

10. FRI 2021-01-01 ◈ **Connectome 1.0 Release**

**Distributed systems** sit at the intersection of **Mathematics**, **Physics** and **Computer Science**. Can you think of anything more interesting? ☺

This is a practical course to make us aware of the basic concepts, challenges and some ways around these challenges.

Once physics comes into play, we start having **real problems**, and we have to improvise our way around them as well as Nature allows.

More in the next installments. Next time we will still be in the ideal, made-up *physics-free* world.

**Have a great week!**