I found this in a 1970s magazine (Popular Electronics, Jan. 1975, page 34) about the Altair 8800. Their definition of software made me stop and think:
"A computer is basically a piece of variable hardware. By changing the bit pattern stored in the memory, the hardware (electronic circuitry) can be altered from one type of device to another. When the bit pattern, and thus the hardware, is changed, we have what is referred to as 'software.'"
Software isn't the bit patterns. Software is when the hardware is changed. Software is the act of transformation itself.
Software doesn't exist between clock cycles
The machine is different every nanosecond
Between clock cycles, the computer maintains its current state but performs no transformations. Each tick of the clock triggers a new reconfiguration based on the next instruction. The machine at time T and time T+1 are physically different devices.
Stored patterns aren't software
The instructions stored in memory are not software. They are patterns of electrical charge that have the potential to cause reconfigurations. Software is the actual reconfiguration process when these patterns activate.
Consider what sits in memory when a program is "loaded" but not running. In RAM, you have millions of capacitors holding electrical charges. Some hold high voltage (1), others low (0). These form patterns - 10000010 here, 01111001 there. But at this moment, they are inert. They cause no reconfigurations. No transistor pathways are changing. No computation occurs. These patterns possess potential in the strict physical sense - they store energy that could cause change but currently does not.
The distinction is critical. The bit patterns in memory contain potential reconfigurations but are not themselves software. They become software only at the moment of execution - when the instruction pointer references them, when they flow into the instruction register, when the decoder transforms them into control signals that open and close transmission gates throughout the processor.
This explains why the same program can exist simultaneously in thousands of computers yet software only exists where execution occurs. The pattern 10000010 stored in inactive memory is not software - it's merely organized electrical charge. But when that pattern enters an active instruction pipeline and triggers the cascade of transistor reconfigurations that transform the processor into an adder, then and only then does software exist. The potential becomes actual through physical transformation.
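To make the distinction concrete, here is a toy sketch in C (my own illustration, not anything from the magazine): the byte 0x82 - binary 10000010, which on the Altair's Intel 8080 happens to encode ADD D - sits in a buffer as inert data until a fetch-and-decode step, here a trivial stand-in for the CPU's decoder, turns the pattern into a configuration choice.

```c
/* A minimal sketch (hypothetical): the byte 0x82 sitting in a buffer is just
 * stored charge. It only "becomes software" when something fetches and decodes
 * it - here, a toy decoder standing in for the CPU's instruction decoder. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t memory[] = { 0x82 };   /* inert pattern: organized charge, nothing more */

    /* Until this loop runs, the pattern causes no reconfiguration at all. */
    for (size_t pc = 0; pc < sizeof memory; pc++) {
        uint8_t opcode = memory[pc];           /* "fetch": pattern flows out of storage */
        if ((opcode & 0xF8) == 0x80)           /* "decode": pattern selects a circuit   */
            printf("opcode 0x%02X -> configure the ALU as an adder\n", (unsigned)opcode);
        else
            printf("opcode 0x%02X -> some other configuration\n", (unsigned)opcode);
    }
    return 0;
}
```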
Running means physically transforming
When we "run" a program, we initiate a cascade of physical transformations. Each instruction transforms potentiality into actuality by physically rewiring the processor. The program text is the score; the execution is the performance.
Software only exists during execution
Software cannot exist outside of time. It is not a thing that exists in space. It is a process that occurs through time. A program that is not executing is not software - it's just patterns of stored charge waiting to become software.
This temporal nature is not incidental but fundamental. Software is change, and change requires time. When we say a program "runs for 10 seconds", we're saying software existed for 10 seconds. Before execution began, there were only stored patterns. After execution ends, only stored patterns remain. During those 10 seconds, billions of reconfiguration events occurred - that sequence of reconfigurations through time IS the software.
Modern computing obscures this by creating the illusion of persistent software. We speak of "installing software" as if we're placing a thing onto the computer. But installation merely copies bit patterns into storage. These patterns are not software until execution transforms them into reconfiguration sequences. When we see "Microsoft Word" in our Applications folder, we're not looking at software - we're looking at the potential for software to exist when executed.
The clock signal makes this temporal nature physically explicit. Each tick represents a quantum of time in which one reconfiguration can occur. A 3 GHz processor experiences 3 billion clock cycles per second, potentially executing multiple instructions per cycle through pipelining and superscalar execution. Software exists in these discrete temporal moments. Between clock cycles, no software exists - only the physical results of the previous reconfiguration and the potential for the next one. The processor retains its current state between cycles - pipeline registers, cache contents, and control logic hold their values - but no new transformations occur until the next clock edge.
This is why software performance is always about time. When we optimize code, we're reducing the temporal duration required for a sequence of reconfigurations. When we parallelize, we're performing multiple reconfigurations simultaneously to compress time. Every software metric - latency, throughput, response time - acknowledges that software IS time.
The CPU has no fixed identity
During execution, the processor transforms into billions of different configurations per second - an adder, then a data mover, then a comparator, then a jumper. It has no fixed identity. Its identity is flux.
What code actually does to transistors
Variables claim transistors
When we write int x = 5, we're not creating an abstract container for a number. We're claiming ownership of specific transistors in the machine's memory. The bit pattern 00000101 gets stored as actual electrical charges - high voltage in transistors 0 and 2, low voltage in the others. These transistors are now ours. No other part of the program can use them until we release them.
The stack pointer - itself a collection of transistors configured to represent a memory address - physically changes its electrical state to point past our newly claimed territory. The memory allocator marks these transistors as occupied in its own transistor-based bookkeeping. We haven't just declared a variable. We've physically colonized a piece of the machine.
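A small C sketch (my own, and what it prints depends on the compiler and platform) makes the claim tangible: x is a concrete group of memory cells at a concrete address, and on a typical little-endian machine its lowest byte holds the pattern 00000101.

```c
/* Print where x physically lives and the charge pattern it holds. */
#include <stdio.h>

int main(void) {
    int x = 5;                               /* claim sizeof(int) bytes' worth of cells */
    unsigned char *bytes = (unsigned char *)&x;

    printf("x lives at address %p\n", (void *)&x);
    for (size_t i = 0; i < sizeof x; i++)    /* dump the pattern, byte by byte */
        printf("byte %zu: 0x%02X\n", i, (unsigned)bytes[i]);
    return 0;
}
```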
Function calls rewire the CPU
Calling foo() isn't jumping to a different part of our code. It's physically rewiring the CPU's instruction fetch circuitry. The current value of the instruction pointer - the transistors that tell the CPU which instruction to execute next - gets copied into stack memory. This requires opening transmission gates between the instruction pointer register and memory, allowing electrical signals to flow until the stack transistors hold the same pattern.
Then the instruction pointer's transistors get reconfigured to hold foo's address. The CPU's fetch circuit, which was pulling instructions from one part of memory, now pulls from a completely different region. The transistor pathways that were routing instruction signals from our main function are closed. New pathways open to foo's location. The CPU has been physically rewired to execute different code.
When foo returns, this process reverses. The saved instruction pointer gets copied back, transmission gates switch, and the CPU rewires itself to continue where it left off. Each function call is a physical reconfiguration of the machine's control flow.
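We can even watch the saved instruction pointer from inside the callee. The sketch below is mine and assumes GCC or Clang, whose __builtin_return_address exposes the value that the call copied onto the stack.

```c
/* Peek at the return address saved by each call (GCC/Clang builtin). */
#include <stdio.h>

void foo(void) {
    /* The pattern that the call copied out of the instruction pointer:
     * the address the CPU will be rewired back to when foo returns. */
    printf("saved return address: %p\n", __builtin_return_address(0));
}

int main(void) {
    foo();      /* save instruction pointer, repoint the fetch circuitry at foo */
    foo();      /* a second call site saves a slightly different return address */
    return 0;   /* each return already restored the saved pointer */
}
```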
Loops make the CPU repeat transformations
A loop like for(i=0; i<1000; i++) forces the CPU to physically reconfigure itself in the same pattern over and over. At the end of each iteration, a jump instruction reconfigures the instruction pointer's transistors to hold an earlier address. This isn't moving backwards in some abstract space - it's electrically changing which memory transistors the fetch circuit connects to.
Each iteration transforms the CPU through the exact same sequence of physical states. First it becomes an incrementer (i++), transistors in the ALU forming an addition circuit. Then a comparator (i<1000), different transistors creating subtraction circuits to test the condition. Then whatever machines our loop body requires. The same transistors switch on and off in the same pattern, the same electrical pathways open and close, the same transformations occur.
The CPU's control logic doesn't "remember" previous iterations, though branch predictors and caches do retain state that affects how fast those iterations run. It simply becomes these thousand different configurations, one after another, because the jump instruction keeps resetting its fetch circuit to pull the same instruction sequence. We've created a physical cycle in silicon and electricity.
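One way to see the cycle is to write the loop with the jump spelled out. This goto version is my own rewrite, purely illustrative; without optimization, a compiler emits essentially this compare, body, increment, jump shape for the ordinary for loop.

```c
/* The same loop with the backward jump made explicit: each pass ends by
 * resetting the instruction pointer to an earlier address ('top'). */
#include <stdio.h>

int main(void) {
    int i = 0;
    int sum = 0;
top:
    if (!(i < 1000)) goto done;  /* comparator: ALU configured to test i < 1000 */
    sum += i;                    /* loop body: whatever machines it requires    */
    i++;                         /* incrementer: ALU configured as an adder     */
    goto top;                    /* jump: fetch circuit repointed at 'top'      */
done:
    printf("sum = %d\n", sum);
    return 0;
}
```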
Data structures are transistor layouts
Every data structure is a specific physical arrangement of transistors. An array isn't an abstract sequence - it's a contiguous block of transistors in memory. When we access array[5], the CPU performs arithmetic on the base address (adding 5 times the element size), then configures its memory access circuits to connect to those specific transistors. The contiguity matters because sequential accesses touch physically neighboring transistors, letting caches and prefetchers keep the electrical round trips short.
A linked list scatters its elements across memory. Each node's "next" pointer is a group of transistors storing the electrical pattern that represents another memory address. Following a pointer means reading this pattern, reconfiguring the memory access circuits to point to those transistors, then reading what's there. Each pointer traversal is a physical reconfiguration of the memory subsystem.
Trees arrange these pointer relationships hierarchically. Hash tables use arithmetic on keys to calculate which transistors to access, trading computation (ALU reconfigurations) for direct access to scattered memory locations. These performance characteristics aren't abstract - they directly reflect the physical work of reconfiguring circuits to access transistors in different arrangements.
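Here is a small contrast in C (the node type and sizes are mine, chosen only for illustration): reaching array[5] is one address calculation, while reaching the sixth node of a list means following five stored addresses, reconfiguring the memory access path at every hop.

```c
/* Array indexing versus pointer chasing: one address computation versus
 * five dependent memory lookups. */
#include <stdio.h>
#include <stdlib.h>

struct node { int value; struct node *next; };

int main(void) {
    /* Array: contiguous transistors, element address = base + index * size. */
    int array[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    printf("array base %p, array[5] at %p\n", (void *)array, (void *)&array[5]);

    /* Linked list: nodes scattered wherever malloc placed them. */
    struct node *head = NULL;
    for (int i = 7; i >= 0; i--) {
        struct node *n = malloc(sizeof *n);
        n->value = i;
        n->next = head;
        head = n;
    }

    struct node *p = head;
    for (int i = 0; i < 5; i++)   /* five pointer traversals to reach element 5 */
        p = p->next;
    printf("list node 5 holds %d at %p\n", p->value, (void *)p);

    while (head) {                /* release the claimed transistors */
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}
```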
Cache is about electron travel distance
Cache hierarchy is pure physics. L1 cache sits micrometers from the execution units - electrical signals travel this distance in picoseconds. L2 cache is millimeters away, requiring nanoseconds for signals to propagate. RAM sits centimeters away on separate chips, with access times dominated by DRAM row/column access mechanics rather than signal propagation.
When our code accesses memory, it's not retrieving abstract data. It's sending electrical signals to specific transistors and waiting for the response. A cache hit means those transistors are physically close, so the electrical round-trip completes quickly. A cache miss means the signals must travel to distant RAM chips, through longer wires with more electrical resistance and capacitance.
This is why cache-friendly code runs faster. We're minimizing the physical distance electrons must travel. When we optimize for cache locality, we're arranging our program's behavior so the CPU mostly accesses nearby transistors. The performance difference between L1 cache and RAM isn't architectural abstraction - it's pure physics: SRAM transistors can flip states in picoseconds while DRAM capacitors need careful nanosecond-scale charging.
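A quick experiment shows the physics directly. The sketch below is mine - the matrix size and the use of clock() are arbitrary choices - but on most machines the row-major walk, which touches neighboring transistors already pulled into cache, typically finishes several times faster than the column-major walk that keeps reaching out to distant DRAM.

```c
/* Sum the same matrix row-by-row and column-by-column and compare the time. */
#include <stdio.h>
#include <time.h>

#define N 4096

static int matrix[N][N];

int main(void) {
    long long sum = 0;
    clock_t t0, t1;

    t0 = clock();
    for (int i = 0; i < N; i++)        /* row-major: sequential addresses */
        for (int j = 0; j < N; j++)
            sum += matrix[i][j];
    t1 = clock();
    printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int j = 0; j < N; j++)        /* column-major: stride of N ints   */
        for (int i = 0; i < N; i++)
            sum += matrix[i][j];
    t1 = clock();
    printf("column-major: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return (int)(sum & 1);  /* keep the compiler from discarding the loops */
}
```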
What this means for actual programming
Bugs are wrong transistor states
A bug isn't just a logical error - it's transistors in the wrong electrical state. When our program crashes with a segmentation fault, it's because we tried to access memory transistors that the operating system won't let us touch. The memory management unit checks our access patterns against its transistor-stored permission tables and halts the illegal access before it can happen.
Debugging tools reveal this physical reality. A breakpoint inserts a special instruction that pauses our program's reconfiguration cascade when reached. While the CPU continues its endless transformations running the debugger, our program's instruction stream freezes - its next reconfiguration suspended in time. When we step through code, we're manually releasing one transformation at a time, watching our program's slice of the machine transform in slow motion.
Memory dumps show us the actual electrical patterns stored in transistors at the moment of the crash. Those hexadecimal values represent real voltage levels in real transistors. When we fix a bug, we're correcting the sequence of physical transformations so transistors end up in the right electrical states.
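For a concrete example of the MMU stepping in, here is a deliberately broken sketch (mine, and intentionally invalid code): the pointer holds a pattern naming memory our process never claimed, so the permission check fails and the kernel stops the program with a segmentation fault before the write can change any transistor there.

```c
/* This program is meant to crash: it attempts to reconfigure transistors
 * at an address the operating system never granted to us. */
#include <stdio.h>

int main(void) {
    int *p = (int *)0x10;   /* a pattern pointing at transistors we never claimed */
    printf("about to dereference %p...\n", (void *)p);
    fflush(stdout);
    *p = 42;                /* the reconfiguration the MMU refuses to allow */
    printf("this line never runs\n");
    return 0;
}
```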
Performance is transformation count
When code runs slowly, it's because we're forcing too many physical reconfigurations. Every instruction requires time for transistors to switch states, for electrical signals to propagate, for new configurations to stabilize. A nested loop with O(n²) complexity doesn't just mean "quadratic time" - it means n² actual physical transformations of our CPU.
Optimization is the art of achieving the same result with fewer reconfigurations. When we replace a bubble sort with quicksort, we're reducing the number of times the CPU must transform into a comparator and data mover. When we cache a calculated value instead of recomputing it, we're eliminating thousands of ALU reconfigurations.
This is why algorithmic optimization matters more than micro-optimization. Changing i++ to ++i in a loop header might save one transistor reconfiguration per iteration. But changing from O(n²) to O(n log n) eliminates millions of physical transformations. We're literally asking the machine to do less physical work.
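A rough measurement makes the point. The sketch below is mine (the element count is arbitrary): bubble sort drags the CPU through on the order of n² comparator-and-data-mover reconfigurations, while the C library's qsort needs on the order of n log n, and the timing gap reflects exactly that difference in physical work.

```c
/* Sort the same data twice: bubble sort versus the library's qsort. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static void bubble_sort(int *a, int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {            /* comparator, then data mover */
                int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
            }
}

int main(void) {
    int *a = malloc(N * sizeof *a);
    int *b = malloc(N * sizeof *b);
    srand(42);
    for (int i = 0; i < N; i++) a[i] = b[i] = rand();

    clock_t t0 = clock();
    bubble_sort(a, N);
    clock_t t1 = clock();
    qsort(b, N, sizeof *b, cmp_int);
    clock_t t2 = clock();

    printf("bubble sort: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("qsort:       %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);

    free(a);
    free(b);
    return 0;
}
```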
Threads fight over transistors
When we spawn multiple threads, we're creating competing reconfiguration sequences that want to transform the same transistors. A race condition occurs when two threads try to reconfigure the same memory transistors at the same time. The final electrical state depends on which thread's signals arrive first - a race determined by nanosecond timing variations.
Locks are hardware-supported transistor reservation systems. When a thread acquires a lock, it uses special atomic instructions that reconfigure control transistors in a way that prevents other threads from doing the same. These instructions are guaranteed to complete without interruption on their core - other cores continue their own execution paths independently.
Deadlocks happen when threads create circular transistor dependencies. Thread A has reconfigured lock transistors X and waits to reconfigure Y. Thread B has reconfigured Y and waits for X. Neither can proceed because each needs the other to release transistors first. The physical configurations have created an impossible state that can only be resolved by killing one of the threads - forcibly resetting its transistor claims.
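The race and the lock are easy to demonstrate. This sketch is mine (compile with -pthread; the iteration count is arbitrary): two threads increment an unprotected counter and a mutex-protected one. The unprotected total usually comes up short because the threads' read-modify-write signals collide, while the protected total is exact.

```c
/* Two threads racing on one counter, and the same increments serialized
 * by a mutex. Build with: cc -pthread race.c */
#include <stdio.h>
#include <pthread.h>

#define ITERATIONS 1000000

static long unprotected = 0;
static long protected_count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        unprotected++;                 /* read-modify-write: two threads collide  */

        pthread_mutex_lock(&lock);     /* atomic claim on the lock's transistors  */
        protected_count++;
        pthread_mutex_unlock(&lock);   /* release the claim for the other thread  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("expected:    %d\n", 2 * ITERATIONS);
    printf("unprotected: %ld\n", unprotected);   /* typically less than expected */
    printf("protected:   %ld\n", protected_count);
    return 0;
}
```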
We're not coding, we're moving electrons
When we write code, we're authoring sequences of physical transformations. Each line we type will become patterns of electrical activity in silicon.
Modern languages pile on abstractions, but they can't change the fundamental reality. Our Python code gets compiled to bytecode, interpreted into machine instructions, and ultimately becomes the same transistor reconfigurations as hand-toggled Altair switches. The abstractions help us think, but they don't change what actually happens. At the bottom, it's all electrons moving through silicon.
This perspective is crucial. Performance problems aren't abstract - they're too many physical transformations. Security vulnerabilities aren't just logic errors - they're unintended transistor access. Concurrency bugs aren't just abstract race conditions - they're transistor configuration conflicts. Every programming problem is ultimately a physical problem.
The 1970s programmers understood this because they could see it happening. They watched lights blink as registers changed state. They felt the machine transform beneath their fingers. We've hidden this reality behind layers of abstraction, pretending software is something ethereal that floats above the hardware.
But software has always been hardware reconfiguration. We just forgot.