Components and connectors are the backbone of the runtime view in software architecture, but the connector appears to play second fiddle to the component. The 1968 NATO conference that coined the term “software engineering” specifically asked for components, but did not mention connectors. Components are shown as two-dimensional boxes, while connectors must make do as one-dimensional lines.
Despite the perception of connectors as simple data movers, real work can be done in connectors. Connectors can convert, transform, or translate datatypes between components. They can adapt protocols and mediate between a collection of components. They can broadcast events, possibly cleaning up duplicate events or prioritizing important ones. Significantly, they can do the work that enables quality attributes, such as encryption, compression, synchronization/replication, and thread-safe communication. It is hard to imagine systems achieving qualities like reliability, durability, latency, and auditability if their connectors are not contributing.
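As a minimal sketch of a connector doing real work, consider a hypothetical connector that translates a producer's records into the consumer's expected datatype and compresses the payload in transit. All names here (`TranslatingConnector`, `receive`, the record fields) are illustrative assumptions, not from any particular system:

```python
import json
import zlib

class TranslatingConnector:
    """Hypothetical connector: converts a producer's dict records into the
    consumer's expected JSON bytes, compressing them in transit."""

    def __init__(self, consumer):
        self.consumer = consumer  # any object with a receive(bytes) method

    def send(self, record: dict) -> None:
        # Translate: dict -> canonical JSON bytes (the consumer's datatype).
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        # Quality-attribute work lives in the connector: compress in transit.
        self.consumer.receive(zlib.compress(payload))

class Consumer:
    def __init__(self):
        self.received = []

    def receive(self, data: bytes) -> None:
        # Decompress and parse back into the consumer's native form.
        self.received.append(json.loads(zlib.decompress(data)))

consumer = Consumer()
connector = TranslatingConnector(consumer)
connector.send({"temp": 37.2, "pulse": 72})
print(consumer.received[0])  # the record survives translation intact
```

Neither component knows about JSON, compression, or the other component; that knowledge lives entirely in the connector.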
It is useful to identify two perspectives on connectors. The first is the “micromanaged” connector, a part that just does a job we assign to it. If it fails, that is because we did not supervise it sufficiently; its job is only to do what we told it to do. Micromanaged connectors do the simplest job possible and are usually simple connectors. The second kind, a “goal” connector, has an assigned goal, or objective, that it is responsible for accomplishing. A developer who builds a goal connector must avoid failure by looking into the problem, discovering possible failure cases, and ensuring that the connector handles them. Goal connectors are usually complex, as they have real domain work to do and are responsible for seeing it completed.
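The contrast can be sketched in code. Assuming an illustrative transport that drops the first attempt of each message, a micromanaged connector makes one attempt and silently loses the message, while a goal connector owns the outcome and retries until delivery is confirmed:

```python
class FlakyTransport:
    """Illustrative transport that drops the first attempt of each message."""
    def __init__(self):
        self.delivered = []
        self._attempts = {}

    def try_send(self, msg) -> bool:
        n = self._attempts.get(msg, 0)
        self._attempts[msg] = n + 1
        if n == 0:
            return False  # first attempt is dropped
        self.delivered.append(msg)
        return True

def micromanaged_send(transport, msg):
    # Does exactly what it was told: one attempt; failure is not its problem.
    transport.try_send(msg)

def goal_send(transport, msg, max_retries=5):
    # Owns the goal "msg is delivered": anticipates the failure case, retries.
    for _ in range(max_retries):
        if transport.try_send(msg):
            return True
    raise RuntimeError("could not deliver: " + repr(msg))

t1 = FlakyTransport()
micromanaged_send(t1, "hello")
print(t1.delivered)  # [] -- the message was silently lost

t2 = FlakyTransport()
goal_send(t2, "hello")
print(t2.delivered)  # ['hello'] -- the goal connector saw it through
```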
Consider the seemingly simple task of keeping a hot backup copy of a component, ready for failover. There must be communication between the master and slave, because the slave should maintain the same state as the master. Our first thought may be to make a procedure call to the slave every time the master changes. That might work if the two components were co-located on the same machine, but backups are often kept on separate machines for reliability, so we consider using remote procedure calls or events. But now there are more concerns: What if messages do not arrive? Is the latency between master and slave acceptable? Does the master process the replication synchronously or asynchronously? Does the data need to be compressed, or can we efficiently send deltas? Perhaps worst of all, are there transactional problems, where if a master fails in a transitional state we need to revert the slave to the last known good state? By assigning a goal to this connector, we reduce the chance that we revert to treating it as a trivial mover of data. If it were simpler, one or both of the components would be forced to assume additional responsibilities, diluting their cohesion and purpose. Assigning the synchronization goal to a connector simplifies our components, making them easier to build, maintain, and comprehend. It also simplifies our system description by raising its level of abstraction.
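The transactional concern above can be sketched as a hypothetical replication connector that checkpoints the backup's last known good state and rolls back if the master fails mid-transaction. The class and method names are illustrative assumptions, and real replication would also handle the networking, latency, and compression concerns listed above:

```python
import copy

class ReplicationConnector:
    """Hypothetical goal connector: keeps a backup's state in sync with the
    master, reverting the backup to its last known good state if the master
    fails partway through a transaction."""

    def __init__(self, backup_state: dict):
        self.backup = backup_state
        self._checkpoint = copy.deepcopy(backup_state)
        self._in_txn = False

    def begin(self):
        # Remember the last known good state before applying changes.
        self._checkpoint = copy.deepcopy(self.backup)
        self._in_txn = True

    def replicate(self, key, value):
        self.backup[key] = value

    def commit(self):
        self._in_txn = False

    def master_failed(self):
        # Master died mid-transaction: roll the backup back to the checkpoint.
        if self._in_txn:
            self.backup.clear()
            self.backup.update(self._checkpoint)
            self._in_txn = False

backup = {"balance": 100}
conn = ReplicationConnector(backup)
conn.begin()
conn.replicate("balance", 250)   # partial update...
conn.master_failed()             # ...then the master crashes mid-transaction
print(backup)  # {'balance': 100} -- reverted to the last known good state
```

Because the connector owns the rollback logic, neither the master nor the backup component needs to know that transactions can fail partway through.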
One way to encourage interesting connectors is to assign them goals. Another is to treat components as domains and assign the job of bridging the domains to the connectors. Michael Jackson described a patient monitoring system in which sensors on the patient reported body temperature and pulse, and the system’s job was to alert a nurse in case of emergency. He showed that two different kinds of alarms were needed: one where the patient is suffering a heart attack, and another, less urgent, where the patient has removed the sensors. Let’s look at this example from the perspective of using connectors to bridge the domains.

The first domain is that of collecting accurate sensor readings. There may be analog-to-digital conversion, smoothing, signal transformation, and other work to be done in order to sense the patient’s temperature and pulse. The second domain is that of alarms. There will be several severities of alarms and various ways of informing people. We might configure low severity alarms to blink a light, medium severity alarms to sound a local beeper, and high severity alarms to do all that plus sound a remote beeper. Defining the domains this way, we might even be able to reuse these components in a context other than patient monitoring, because each component handles a single domain, rather than knowing about the other component or about patient monitoring.

When two domain-specific components interact, we will need to write some code that understands both domains. In this case, we need code that sounds a medium severity alarm if the patient has accidentally removed his sensors, and a high severity alarm if he is suffering a heart attack. If we place this code in the sensing or alarm components, we lose their domain neutrality. We can instead place it in the connector, which at one end takes in sensor events and at the other end emits alarm events.
It is impossible to avoid having code that knows about both sensors and alarms, but we can locate this code in the connector and insulate the components.
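A minimal sketch of such a bridging connector follows. The severity names, event fields, and the pulse thresholds standing in for "heart attack" are all illustrative assumptions; the point is only that the sensor-to-alarm mapping lives in one place:

```python
# Severity levels used by the (hypothetical) alarm component.
LOW, MEDIUM, HIGH = "low", "medium", "high"

def alarm_component(severity):
    """Alarm domain: knows only about severities, not about patients."""
    actions = {
        LOW: ["blink light"],
        MEDIUM: ["local beeper"],
        HIGH: ["blink light", "local beeper", "remote beeper"],
    }
    return actions[severity]

def monitoring_connector(sensor_event):
    """The connector is the only code that knows both domains: it maps
    sensor events to alarm severities. (Thresholds are illustrative.)"""
    if sensor_event.get("sensors_attached") is False:
        return alarm_component(MEDIUM)   # patient removed the sensors
    if sensor_event["pulse"] == 0 or sensor_event["pulse"] > 180:
        return alarm_component(HIGH)     # possible heart attack
    return []

print(monitoring_connector({"sensors_attached": False}))
print(monitoring_connector({"sensors_attached": True, "pulse": 0}))
```

If the alarm policy changes, or the sensing component is reused outside patient monitoring, only the connector needs to be rewritten.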
The takeaway lesson is that connectors should be treated as equals of components in software architecture. If we give them simple jobs then we do ourselves a disservice, and likely pollute our components, hurting their cohesion and increasing coupling. Two concrete strategies are to assign goals to connectors and to use connectors to bridge domains.