Monday, April 6, 2026

Make C++ memory safe by never using "new"

Every C++ developer eventually encounters a memory leak. You allocate something on the heap, write some logic, hit an early return, and suddenly that heap memory is leaked: still allocated, but with no pointer left to free it:

std::vector<int>* numbers = new std::vector<int>({1, 2, 3});
// ... what if we return early?
// ... what if an exception fires?
delete numbers; // only runs if we get here

The good news is that it is entirely possible to write professional C++ without malloc or new. The above example can be written as follows, without new and delete:

std::vector<int> numbers = {1, 2, 3}; // Clean and safe

The vector internally does roughly this:

 Stack              Heap
┌───────────────┐  ┌───────────────┐
│ numbers       │  │               │
│ _data ────────┼──┼─► [1] [2] [3] │
│ _size = 3     │  │               │
│ _capacity= 3  │  │               │
└───────────────┘  └───────────────┘

The vector object itself lives on the stack, but the actual integers are allocated on the heap by the vector's own code, via its allocator (ultimately operator new). The vector acts as a "wrapper" or "manager" for a raw block of memory on the heap. The std::vector object itself is just a small, fixed-size handle (three 8-byte members = 24 bytes on a typical 64-bit system). It doesn't grow or shrink, helping you preserve the precious stack. It knows exactly where the heap memory starts, how much is used, and how much is left. The actual data (your integers, strings, or custom objects) lives on the heap, where its size can reach gigabytes.

In C++, the destructor of a stack-allocated object is automatically inserted into the generated code by the compiler at all the places where the object goes out of scope:

int complexFunction(int x) {
    std::vector<int> numbers = {1, 2, 3};

    if (x < 0) {
        // COMPILER INSERTS: numbers.~vector();
        return -1; 
    }

    // COMPILER INSERTS: numbers.~vector();
    return x * 2;
}

In the assembly listing, you will see lines like this:

call std::vector<int, std::allocator<int>>::~vector() [base object destructor]

When the std::vector destructor runs, it automatically performs two critical tasks:

  1. Element Destruction: it calls the destructor for every individual object currently stored in the vector. If you have a vector of strings, it ensures each string cleans up its own character buffer first.
  2. Deallocation: Once the elements are destroyed, the vector calls the underlying deallocation function (typically a wrapper around operator delete[] or a custom allocator) to return the entire block of heap memory to the system.

If you were using malloc or new manually, you would have to remember to call free or delete in every possible exit path of your function, including when an error occurs.

std::vector removes this "human element" by making the cleanup a language-level guarantee.

If you have a custom MyClass with a constructor that takes runtime parameters:

// ❌ with new — obj on heap, you manage lifetime manually
MyClass* obj = new MyClass(size, name);
delete obj; // must remember this at every exit point

// ✅ without new — obj on stack, lifetime managed automatically
MyClass obj(size, name);
// no delete needed

In C++11 and beyond, smart pointers cover every legitimate use case for new and delete. An example of an object that outlives its scope:

// ❌ old way
MyClass* obj = new MyClass(size, name);
return obj; // caller must remember to delete

// ✅ modern way
return std::make_unique<MyClass>(size, name); // ownership transfers automatically

When a class OneClass has a member variable myMember that is itself of class type MyClass, and myMember's constructor parameters are only known at runtime:

class OneClass {
public:
    OneClass(int size, std::string name)
        : myMember(size, name) // ← myMember constructed here, with runtime args
    {
        // constructor body, myMember is already fully constructed here
    }

private:
    MyClass myMember;  // ← no "new", lives inside OneClass
};

OneClass obj(size, name) created
├── myMember(size, name) constructed ← initializer list
└── OneClass constructor body runs

If myMember cannot be constructed in the OneClass constructor, but only later in some other method call:

#include <memory>
class OneClass {
public:
    OneClass() {} // myMember is nullptr

    void initialize(int size, std::string name) {
        myMember = std::make_unique<MyClass>(size, name);   // constructed here
    }

private:
    std::unique_ptr<MyClass> myMember; // nullptr until initialize() is called
};

unique_ptr starts as nullptr and takes ownership when assigned. ~MyClass() is called automatically when OneClass is destroyed, no manual cleanup needed.

The general term for this mechanism is called RAII (Resource Acquisition Is Initialization). In this paradigm, you use objects that manage their own memory. When the object goes out of scope, it automatically cleans up.

Music: Passenger - Let Her Go

Monday, March 23, 2026

PID Theory

PID control trades optimality for simplicity: it is sub-optimal, but good enough for most real systems, and it needs no mathematical model of the plant. For a lunar lander, for example, bang-bang control achieves a faster landing, but you need to model the physics. With PID, you find the control parameters through simulations and tests. The general form of the PID control force is:

F(t) = K_p·e(t) + K_i·∫e(τ)dτ + K_d·de(t)/dt

where e(t) is the error between the target and the measured value.
What would be the simplest controller for a mass to stay at a specific height from the surface of a planet with only gravity acting and no atmosphere?

Without an atmosphere, your system is a pure double integrator driven by the control force and gravity:

m·(d²x/dt²) = F − m·g

where x is the height of the mass.
If you use only a Proportional (P) controller, your control force is:

F = K_p·(h_target − x)
This effectively turns your mass into a pure spring in a vacuum. It will oscillate up and down forever, centered around the target height, because there is nothing to remove the kinetic energy (no damping). To stay at a specific height, you need to "electronically" create the friction/damping that the atmosphere is missing, i.e. a PD controller with a gravity bias:

F = K_p·(h_target − x) − K_d·(dx/dt) + m·g
The P-term (K_p) provides the "restoring force" that pushes the mass toward the target height. The D-term (K_d) acts as artificial friction: it resists the velocity of the mass, allowing it to slow down as it approaches the target and eventually stop. The gravity bias (m·g) is needed because, to hover perfectly with a PD controller, you have to "cancel out" the constant pull of gravity so the controller only has to deal with the displacement error. Simulating P against PD makes the difference obvious: the P controller oscillates forever, while the PD controller settles at the target.

The double integrator has two poles at the origin (s = 0, 0). Without a zero to "pull" them into the Left Half Plane (LHP), the poles have nowhere to go but up and down the imaginary axis as you increase K_p. By placing a zero in the LHP, the PD controller creates a "target" in the stable region: as you increase the gain, the two poles at the origin are "pulled" off the imaginary axis and toward the LHP zero.

While an Integral (I) term is usually used to eliminate steady-state error (the "droop" caused by gravity), in a vacuum with a double integrator, adding an "I" term without a very strong "D" term is dangerous. It introduces more phase lag, which often leads to instability. In the frequency domain, an integrator introduces a 90° phase lag. A double integrator (1/s^2) already has a 180° phase lag, so adding an integral term pushes the total phase lag toward 270°.

In control systems, if your feedback is delayed by 180° or more, your "correction" starts acting in the same direction as the error. Instead of pulling the mass back to the target, the controller begins pushing it away, which shows up as an unstable root locus.

Integral windup is another problem: when your mass (the plant) is stuck (perhaps at a mechanical limit, or behind a saturated actuator), the error remains constant because the mass isn't moving, yet the integral term keeps summing that error over time. The "I" value grows (winds up) to a massive number. When the mass finally breaks free, the controller has a "memory" of a huge error that no longer exists. It applies a massive, unnecessary force, causing the mass to overshoot violently or even crash into the hardware. You can mitigate windup by stopping the integrator from growing once the actuator reaches its maximum output, or by only turning the "I" term on when the mass is very close to the target height.

Thursday, March 12, 2026

Digital vs Analog Simulation

While a purely digital simulation (Model-in-the-Loop) is great for testing logic, an analog simulation (Hardware-in-the-Loop) tests the electrical reality of your system. In a digital simulation, you use values like pressure directly from your atmosphere model. In reality, that pressure goes through a sensor which outputs voltage/current. Your electronics have to read that analog signal and convert it to digital before feeding it to your controller.

A real controller output has to drive a load. Analog simulation ensures the controller's transistors don't overheat or drop voltage when trying to move a high-pressure valve.

Your internal Analog-to-Digital Converter (ADC) might add extra error. For example, your atmosphere model says 101.325 kPa, but your ADC might convert it to 101.328 kPa due to its internal tolerance. Analog simulation reveals whether your control algorithm is robust enough to handle that 0.003 kPa error without oscillating. It also verifies that your controller's ADC is actually calibrated correctly. The signal chain:

physical quantity → sensor → analog voltage/current → signal conditioning → ADC → digital value → controller

You cannot "short a wire to ground" in a purely digital simulation and see the smoke. With hardware like NI PXI Fault Insertion Units, you can physically short an analog input to a 24V rail. This allows you to verify that your hardware's protection diodes work and that your software enters a "Safe State" immediately.

Wednesday, February 4, 2026

Embedding Visual C++ Runtime into DLL

A user of one of my old desktop programs (written in Java 8 and C++) reported that they were getting a "[FileName].dll is not a valid Win32 application." error (the Turkish version: "... geçerli bir Win32 uygulaması değil"). Because of the "Win32" in the message, my first thought was that they were running my 64-bit app on a 32-bit setup. However, that was not the case. They compared the PC where my app worked with the PC showing the error, and found that they could make it work by copying the msvcr120.dll file into the Windows/System32 folder. After chatting with Gemini, here are my findings:
  • In the world of Windows development, Win32 is the name of the entire programming interface (the API) used to interact with the operating system. When Microsoft later moved to 64-bit, instead of renaming it to "Win64," they kept the name Win32 for the API itself to maintain developer familiarity. Technically, 64-bit Windows programs run on the Win32 API for 64-bit systems. So, when the OS says "not a valid Win32 application," it really means "not a valid Windows DLL/EXE".
  • msvcr120.dll is a Dynamic Link Library (DLL) file that is a core component of the Microsoft Visual C++ Redistributable (Runtime) for Visual Studio 2013. Since the problematic PC never had any Visual Studio installed on it, it was missing the runtime dependency of my DLL.
  • Shared runtimes are used to reduce the size of the compiled binary, but they introduce a dependency on the target operating system to provide the runtime.
  • You can check the dependencies of your DLL or EXE by using Visual Studio's dumpbin.exe. On cmd, dumpbin /dependents filename.dll shows you the DLLs filename.dll depends on.
    • If you see MSVCR....dll, you need the C Runtime.
    • If you see MSVCP....dll, you also need the C++ Standard Library.
    • If you see KERNEL32.dll or USER32.dll, don't worry, those are part of Windows itself and are always present.
  • Previously, I discussed how to embed the C runtime in Linux. You can also embed/bake the C/C++ Runtime into your binary with Visual Studio via Project Properties > C/C++ > Code Generation > Runtime Library
    • The default of /MD (Multi-threaded DLL) or /MDd (Multi-threaded Debug DLL) uses shared runtime
    • Changing it to /MT (Multi-threaded) embeds the runtime code into the DLL, leaving it with zero external dependencies. You can verify that your DLL has no dependencies (besides KERNEL32.dll) with dumpbin.exe.
    • The disadvantages of /MT:
      • Larger file size
      • If you have five different DLLs all compiled with /MT, each one has its own copy of the runtime in RAM. If they were compiled with /MD, they would all share a single instance of the shared DLL in memory.
      • If a security flaw is found in the Microsoft C++ Runtime, Windows Update cannot fix your app. You would have to recompile your project with the latest patches and send the new DLL to your users.
      • If you use /MT, make sure that any object created inside your DLL is also destroyed inside your DLL (e.g., using a DestroyObject() function you provide).
      • For Debug, use /MDd because it is optimized for finding bugs, filling uninitialized memory with specific patterns (like 0xCCCCCCCC) etc.
  • Windows folder names can be confusing:
    • C:\Windows\System32: Contrary to the name, this folder is for 64-bit DLLs on a 64-bit version of Windows.
    • C:\Windows\SysWOW64: This folder is for 32-bit DLLs. WOW64 stands for "Windows on Windows 64-bit".