Monday, March 23, 2026

PID Theory

General form of PID control force terms:

F(t) = Kp·e(t) + Ki·∫e(t)dt + Kd·(de(t)/dt)

where e(t) is the error between the target and the measured value, and Kp, Ki, Kd are the proportional, integral, and derivative gains.

What would be the simplest controller for a mass to stay at a specific height from the surface of a planet with only gravity acting and no atmosphere?

Without an atmosphere, your system is a pure double integrator; only the control force and gravity act on the mass:

m·(d²h/dt²) = F − m·g

where h is the height and F is the control force.

If you use only a Proportional (P) controller, your control force is:

F = Kp·(h_target − h)

This effectively turns your mass into a pure spring in a vacuum. It will oscillate up and down forever, centered around the target height, because there is no way to remove the kinetic energy (no damping). To stay at a specific height, you need to "electronically" create the friction/damping that the atmosphere is missing:

  • The P-term (Kp) provides the restoring force that pushes the mass toward the target height.
  • The D-term (Kd) acts as artificial friction: it resists the velocity of the mass, allowing it to slow down as it approaches the target and eventually stop.
  • The gravity bias (mg): to hover perfectly with a PD controller, you need to cancel out the constant pull of gravity so the controller only has to deal with the displacement error.
Here is a P vs PD comparison using a Python script:
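A minimal simulation sketch of that comparison (the mass, gains, and time step are illustrative assumptions, not values from any real system):

```python
# P vs PD control of a hovering mass in a vacuum (double integrator).
# Explicit-Euler simulation; all numbers here are illustrative.
m, g, dt = 1.0, 9.81, 0.001   # mass (kg), gravity (m/s^2), step (s)
target = 10.0                 # desired height (m)

def simulate(kp, kd, steps=20000):
    h, v = 0.0, 0.0           # start on the ground, at rest
    heights = []
    for _ in range(steps):
        error = target - h
        force = kp * error - kd * v + m * g   # the m*g bias cancels gravity
        v += (force / m - g) * dt
        h += v * dt
        heights.append(h)
    return heights

p_only = simulate(kp=20.0, kd=0.0)   # keeps oscillating around the target
pd     = simulate(kp=20.0, kd=8.0)   # damps out and settles at the target
```

With only P, the closed loop is an undamped spring, so `p_only` keeps swinging around the 10 m target; the PD run converges to it.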

The double integrator has two poles at the origin (s = 0, 0). Without a zero to "pull" them into the Left Half Plane (LHP), the poles have nowhere to go but up and down the imaginary axis as you increase Kp. By placing a zero in the LHP, the PD controller creates a "target" in the stable region: as you increase the gain, the two poles at the origin are pulled off the imaginary axis and toward the LHP zero.
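The pole movement can be checked numerically (a sketch; the mass and gains are illustrative values, and any positive Kp, Kd would show the same effect):

```python
import numpy as np

# Closed-loop characteristic polynomials for the double-integrator
# plant 1/(m s^2); m, Kp, Kd are illustrative.
m, kp, kd = 1.0, 20.0, 8.0

p_poles  = np.roots([m, 0.0, kp])  # P only:  m s^2 + Kp = 0
pd_poles = np.roots([m, kd, kp])   # PD:      m s^2 + Kd s + Kp = 0

print("P poles: ", p_poles)   # purely imaginary pair -> sustained oscillation
print("PD poles:", pd_poles)  # complex pair with negative real part -> stable
```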

While an Integral (I) term is usually used to eliminate steady-state error (the "droop" caused by gravity), in a vacuum with a double integrator, adding an I term without a very strong D term is dangerous: it introduces more phase lag, which often leads to instability. In the frequency domain, an integrator introduces a 90° phase lag, and the double integrator (1/s²) already has a 180° phase lag, so adding an integral term pushes the total phase lag toward 270°.

In control systems, if your feedback is delayed by 180° or more, your "correction" starts acting in the same direction as the error. Instead of pulling the mass back to the target, the controller begins pushing it away, producing an unstable closed loop.

Integral windup is another problem: if your mass (plant) is stuck (perhaps at a mechanical limit, or with a saturated actuator), the error remains constant because the mass isn't moving, and the integral term keeps summing that error over time. The "I" value grows (winds up) to a massive number. When the mass finally breaks free, the controller has a "memory" of a huge error that no longer exists, so it applies a massive, unnecessary force, causing the mass to overshoot violently or even crash into the hardware. You can mitigate windup by stopping the integrator from growing once the actuator reaches its maximum output, or by enabling the "I" term only when the mass is close to the target height.
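A sketch of the first mitigation, integrator clamping; the gains and the actuator limit are made-up values:

```python
# Anti-windup by clamping: the integral state is updated only when the
# commanded force is within the actuator's limits. Illustrative values.
def pi_step(error, dt, integral, kp=20.0, ki=5.0, f_max=50.0):
    proposed = integral + error * dt
    u = kp * error + ki * proposed
    if abs(u) > f_max:
        # Actuator saturated: clip the output and KEEP the old
        # integral state so it cannot wind up.
        return max(-f_max, min(f_max, u)), integral
    return u, proposed

u, integral = pi_step(error=10.0, dt=0.01, integral=0.0)
# Large error saturates the actuator, so the integral stays frozen at 0.0
```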

Thursday, March 12, 2026

Digital vs Analog Simulation

While a purely digital simulation (Model-in-the-Loop) is great for testing logic, an analog simulation (Hardware-in-the-Loop) tests the electrical reality of your system. In a digital simulation, you use values like pressure directly from your atmosphere model. In reality, that pressure goes through a sensor which outputs voltage/current. Your electronics have to read that analog signal and convert it to digital before feeding it to your controller.

A real controller output has to drive a load. Analog simulation ensures the controller's transistors don't overheat or drop voltage when trying to move a high-pressure valve.

Your internal Analog-to-Digital Converter (ADC) might add extra error. For example, your atmosphere model says 101.325 kPa, but your ADC might convert it to 101.328 kPa due to its internal tolerance. Analog simulation reveals whether your control algorithm is robust enough to handle that 0.003 kPa error without oscillating. It also verifies that your controller's ADC is actually calibrated correctly. The signal chain: atmosphere model → sensor (voltage/current) → wiring → ADC → digital value → controller.
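The ADC's contribution can be approximated with a simple quantization model (the 0–120 kPa sensor range and 12-bit resolution below are assumptions for illustration):

```python
# Quantize a pressure reading the way an ideal N-bit ADC would.
# The 0-120 kPa range and 12-bit resolution are assumptions.
def adc_read(pressure_kpa, bits=12, p_min=0.0, p_max=120.0):
    step = (p_max - p_min) / (2 ** bits - 1)  # kPa per ADC count
    count = round((pressure_kpa - p_min) / step)
    return count * step

true_p = 101.325
measured = adc_read(true_p)
# The controller sees `measured`, which differs from the model's value
# by up to half an ADC step (~0.015 kPa with these assumptions).
```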

You cannot "short a wire to ground" in a purely digital simulation and see the smoke. With hardware like NI PXI Fault Insertion Units, you can physically short an analog input to a 24V rail. This allows you to verify that your hardware's protection diodes work and that your software enters a "Safe State" immediately.

Wednesday, February 4, 2026

Embedding Visual C++ Runtime into DLL

A user of one of my old desktop programs (written in Java 8 and C++) reported that they were getting a "[FileName].dll is not a valid Win32 application." error (the Turkish version: "... geçerli bir Win32 uygulaması değil"). Because of the "Win32" in the error message, my first thought was that they were running my 64-bit app on a 32-bit setup. However, that was not the case. They compared the PC where my app was working with the PC showing the error and found that they could make it work by copying the msvcr120.dll file to the Windows/System32 folder. After chatting with Gemini, here are my findings:
  • In the world of Windows development, Win32 is the name of the entire programming interface (the API) used to interact with the operating system. When Microsoft later moved to 64-bit, instead of renaming it to "Win64," they kept the name Win32 for the API itself to maintain developer familiarity. Technically, 64-bit Windows programs run on the Win32 API for 64-bit systems. So, when the OS says "not a valid Win32 application," it really means "not a valid Windows DLL/EXE".
  • msvcr120.dll is a Dynamic Link Library (DLL) file that is a core component of the Microsoft Visual C++ Redistributable (Runtime) for Visual Studio 2013. Since the problematic PC never had any Visual Studio installed on it, it was missing the runtime dependency of my DLL.
  • Shared runtimes are used to reduce the size of the compiled binary, but they introduce a dependency on the target operating system to provide the runtime.
  • You can check the dependencies of your DLL or EXE by using Visual Studio's dumpbin.exe. On cmd, dumpbin /dependents filename.dll shows you the DLLs filename.dll depends on.
    • If you see MSVCR....dll, you need the C Runtime.
    • If you see MSVCP....dll, you also need the C++ Standard Library.
    • If you see KERNEL32.dll or USER32.dll, don't worry, those are part of Windows itself and are always present.
  • Previously, I discussed how to embed the C runtime in Linux. You can also embed/bake the C/C++ Runtime into your binary with Visual Studio via Project Properties > C/C++ > Code Generation > Runtime Library:
    • The default of /MD (Multi-threaded DLL) or /MDd (Multi-threaded Debug DLL) uses shared runtime
    • Changing it to /MT (Multi-threaded) embeds the runtime code into the DLL, leaving it with zero external dependencies. You can verify that your DLL has no dependencies (besides KERNEL32.dll) with dumpbin.exe.
    • The disadvantages of /MT:
      • Larger file size
      • If you have five different DLLs all compiled with /MT, each one has its own copy of the runtime in RAM. If they were compiled with /MD, they would all share a single instance of the shared DLL in memory.
      • If a security flaw is found in the Microsoft C++ Runtime, Windows Update cannot fix your app. You would have to recompile your project with the latest patches and send the new DLL to your users.
      • If you use /MT, make sure that any object created inside your DLL is also destroyed inside your DLL (e.g., using a DestroyObject() function you provide).
      • For Debug builds, use a debug runtime (/MDd, or /MTd for the static case) because it is optimized for finding bugs, filling uninitialized memory with recognizable patterns (like 0xCCCCCCCC), etc.
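The "destroy inside the DLL" rule can be sketched like this (CreateSensor/DestroySensor are hypothetical exported names, and the export plumbing is reduced to a portable stub):

```cpp
#include <cstdint>

// With /MT, each DLL carries its own copy of the CRT (and its own
// heap), so memory allocated inside the DLL must be freed by the DLL.
#if defined(_WIN32)
  #define DLL_EXPORT __declspec(dllexport)
#else
  #define DLL_EXPORT
#endif

struct Sensor { std::int32_t id; };

extern "C" DLL_EXPORT Sensor* CreateSensor(std::int32_t id) {
    return new Sensor{id};   // allocated on THIS module's CRT heap
}

extern "C" DLL_EXPORT void DestroySensor(Sensor* s) {
    delete s;                // freed on the same heap it came from
}
// Callers must use DestroySensor(), never plain `delete`: the caller's
// CRT (a different /MT copy) does not own that allocation.
```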
  • Windows folder names can be confusing:
    • C:\Windows\System32: Contrary to the name, this folder is for 64-bit DLLs on a 64-bit version of Windows.
    • C:\Windows\SysWOW64: This folder is for 32-bit DLLs. WOW64 stands for "Windows on Windows 64-bit".

Saturday, December 27, 2025

Reading unsigned data in Java

In binary files, a single byte is often used to represent numbers from 0 to 255 (unsigned). However, in Java, a byte is signed, ranging from -128 to 127, because Java doesn't have unsigned types (with the exception of the 2-byte char type). A raw byte with value 0xF0 in a file, meant to represent 240, will become -16 when read with Java's ByteBuffer.get(). To fix this:

int unsignedByte = buffer.get() & 0xFF;

Explanation: When performing bitwise operations, Java automatically "promotes" the 8-bit byte to a 32-bit signed integer. If the byte is 0xF0 = 240 (11110000), Java sees the leading 1 and assumes it is a negative number. Through Sign Extension, it fills the new 24 bits with 1s to preserve that negative value (-16) in the larger container.

Original Byte: 11110000 (-16, see two's complement)
Promoted Int : 11111111 11111111 11111111 11110000 (Still -16)
Now, you apply the mask 0xFF (255). In binary, 0xFF as a 32-bit integer is 00000000 00000000 00000000 11111111:
  11111111 11111111 11111111 11110000 (The promoted -16)
& 00000000 00000000 00000000 11111111 (The 0xFF mask)
 -------------------------------------
  00000000 00000000 00000000 11110000 (The result: 240)

By "ANDing" the promoted integer with 0xFF, you effectively clear out all the 1s created by sign extension, leaving only the original 8 bits and giving you the correct unsigned value of 240.

Similarly, reading an unsigned short (16-bit) from a file:

int unsignedShort = buffer.getShort() & 0xFFFF;

Reading an unsigned int (32-bit):

long unsignedInt = buffer.getInt() & 0xFFFFFFFFL;

Note that an unsigned 32-bit integer can exceed the capacity of a Java int. You must jump up to a long and use a long literal mask (noted by the L at the end of the mask value).
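The three masks together in a runnable sketch (the sample values are arbitrary):

```java
import java.nio.ByteBuffer;

// Write three "unsigned" values as raw bytes, then read them back
// with the masks described above.
public class UnsignedReads {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(7);
        buf.put((byte) 0xF0);          // unsigned byte  240
        buf.putShort((short) 0xFFF0);  // unsigned short 65520
        buf.putInt(0xFFFFFFF0);        // unsigned int   4294967280
        buf.flip();

        int u8   = buf.get()      & 0xFF;
        int u16  = buf.getShort() & 0xFFFF;
        long u32 = buf.getInt()   & 0xFFFFFFFFL;

        System.out.println(u8 + " " + u16 + " " + u32);
        // prints: 240 65520 4294967280
    }
}
```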

When writing a value that is first multiplied by a scale factor of 2^31 (1 << 31), Java treats 1 << 31 as an int, and shifting 1 left by 31 places puts it into the sign-bit position of a 32-bit int, which results in the negative value -2147483648. Correct usage:

double val = 123;
long scaleFactor = (1L << 31); // Will be positive 2147483648 because long is 64bit
                               // and shifting by 31 won't put the 1 into sign bit
                               // position
long val_scaled = (long) (val * scaleFactor);

Wednesday, October 1, 2025

Why C/C++ circular dependency causes "syntax error"

I have a headerA with a methodA that takes a structB from headerB, so headerA includes headerB. headerB in turn includes headerA, creating a circular dependency. When I compile, I get a "syntax error" inside headerA at the point where methodA uses structB. Details:
// headerB.h
#ifndef HEADER_B_H
#define HEADER_B_H
#include "headerA.h" // circular: headerB also depends on headerA
struct structB { int x; };
#endif

// headerA.h
#ifndef HEADER_A_H
#define HEADER_A_H
#include "headerB.h"
void methodA(structB param);
#endif

// main.cpp
#include "headerB.h" // HEADER_B_H is defined, then headerA.h is entered;
                     // headerA's own #include "headerB.h" does NOTHING
                     // (HEADER_B_H is already defined), so at
                     // `void methodA(structB param);` structB is still
                     // unknown → SYNTAX ERROR
#include "headerA.h" // skipped entirely (HEADER_A_H already defined)
The "syntax error" occurs because the compiler doesn't know what structB is when it tries to compile methodA. If headerB includes headerA (directly or indirectly), you have a circular dependency. The C++ preprocessor just does text substitution - it doesn't understand C++ syntax. It can't detect circular dependencies because include guards prevent infinite loops, so the circular dependency becomes an incomplete type error instead.
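One common way to break such a cycle is a forward declaration instead of the include. A sketch (single file for brevity; in the real project the definition would still live in headerB.h):

```cpp
// The fix: headerA forward-declares structB instead of including
// headerB.h, which breaks the include cycle. A function declaration
// with an incomplete parameter type is legal; only the places that
// define or call methodA need the complete type.
struct structB;               // forward declaration (replaces the include)
int methodA(structB param);   // OK: declaration only

struct structB { int x; };    // full definition (normally from headerB.h)

int methodA(structB param) {  // definition needs the complete type
    return param.x * 2;
}
```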

Friday, July 11, 2025

Double-to-Int Conversion with Bit Shifting

We often need to pack a large numeric range into 32 bits. For instance, timestamps in microseconds over a 36-minute period exceed Integer.MAX_VALUE. By discarding the least significant bits (via right shift), we can fit the value, and later we can recover an approximation of it by shifting left by the same amount. However, every extra bit of shift doubles the worst-case error, so the goal is to find the smallest shift that still covers the range. The following Java code investigates this:
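A sketch of such an investigation (the 36-minute span comes from the text; everything else is illustrative):

```java
// For each right-shift amount, check whether the largest timestamp in
// a 36-minute window (in microseconds) fits into a signed 32-bit int
// after the shift, and what the worst-case reconstruction error is.
public class ShiftTradeoff {
    public static void main(String[] args) {
        long maxTimestampUs = 36L * 60 * 1_000_000; // 2,160,000,000 µs
        for (int shift = 0; shift <= 4; shift++) {
            boolean fits = (maxTimestampUs >> shift) <= Integer.MAX_VALUE;
            long maxErrorUs = (1L << shift) - 1;    // discarded low bits
            System.out.println("shift=" + shift + " fits=" + fits
                    + " maxError=" + maxErrorUs + " µs");
        }
        // With these numbers, shift=1 is the smallest shift that fits,
        // at a worst-case error of 1 µs.
    }
}
```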


Monday, June 16, 2025

C/C++ header mismatch bug

I encountered a problem where a field in a global struct (myStruct) held a valid value before entering a function foo, but turned into garbage after entering it. When I consulted AI tools, they suggested that foo might be allocating very large local arrays, causing a stack overflow that could corrupt the global structure. Another possibility was an out-of-bounds write elsewhere in the code.

After a week of debugging and trying various solutions—such as increasing the thread's stack size—I discovered the root cause: The function foo was defined in a C library with multiple versions. Each version resided in a different folder but had the same file names. Which folder was used depended on a #define. I was including the header from one version of the library, but linking against the implementation from another. If the struct definitions had matched, this wouldn’t have caused an issue, but they differed—evident from the differing sizeof(myStruct). As a result, myStruct was interpreted using the wrong layout, leading to corrupted values from an incorrect memory region.