Interactive Educational Platform for Control Barrier Functions - Explore Safety-Critical Control Theory
Watch the robot navigate safely around obstacles using Control Barrier Functions. Includes automatic deadlock resolution when the robot gets stuck at saddle points.
Click on canvas to set new goal position!
Control Barrier Functions provide a systematic framework for safety-critical control. The safe set $\mathcal{C}$ is defined by a differentiable barrier function $h(x)$, and forward invariance of $\mathcal{C}$ is enforced by filtering a nominal input through a minimal-intervention safety constraint. This maintains safety while preserving nominal performance whenever possible.
Safety Set Definition:
$$\mathcal{C} = \{x \in \mathbb{R}^n : h(x) \geq 0\}$$The safe set $\mathcal{C}$ contains all states where the barrier function is non-negative.
Barrier Function (Squared Distance Formulation):
$$h(x) = \|x - x_{obs}\|^2 - r_{safe}^2$$Using squared distance $\|x - x_{obs}\|^2$ avoids the gradient singularity that the Euclidean distance has at the obstacle center and provides smooth gradients everywhere.
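For concreteness, a tiny NumPy check of that claim (the specific coordinates are only illustrative):

```python
import numpy as np

x = np.array([1.0, 0.0]); x_obs = np.array([1.0, 0.0])   # robot exactly at the obstacle center
# The Euclidean-distance barrier h = ||x - x_obs|| - r has gradient (x - x_obs)/||x - x_obs||,
# which divides by zero here; the squared-distance gradient stays well defined:
grad_squared = 2.0 * (x - x_obs)
print(grad_squared)   # [0. 0.] -- finite, no singularity
```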
CBF Safety Constraint:
$$\dot{h}(x) + \alpha h(x) \geq 0$$This ensures that if $h(x) > 0$ (safe), then $h(x)$ cannot decrease too rapidly toward the unsafe region.
Lie Derivative Expansion:
$$\dot{h}(x) = \nabla h(x) \cdot f(x) + \nabla h(x) \cdot g(x) u$$For single integrator: $f(x) = 0$, $g(x) = I$, so $\dot{h} = \nabla h \cdot u$.
Gradient (Continuous & Smooth):
$$\nabla h(x) = 2(x - x_{obs})$$Linear in distance, no singularities, computationally efficient.
Control Constraint:
$$u \in \{v \in \mathbb{R}^m : \nabla h(x) \cdot v \geq -\alpha h(x)\}$$Any control satisfying this constraint ensures forward invariance of the safe set.
Key Advantages of Squared Distance CBF:
• Continuous gradients (no division by zero)
• Better numerical conditioning
• Smoother control synthesis
• $\alpha > 0$ sets how quickly the system is allowed to approach the safety boundary (smaller values are more conservative)
Nominal-to-safe projection (QP):
$$\begin{aligned} \mathbf{u}^*(\mathbf{x})=\arg\min_{\mathbf{u},\;\delta\ge 0}\;&\tfrac{1}{2}\,\|\mathbf{u}-\mathbf{u}_{\text{nom}}\|_2^2 + \tfrac{\rho}{2}\,\delta^2\\ \text{s.t. }\;&\nabla h_i(\mathbf{x})^T\mathbf{u} + \alpha\,h_i(\mathbf{x}) \ge -\delta,\quad \forall i\in\mathcal{I}_{\text{obs}} \end{aligned}$$A small slack $\delta$ (penalized by $\rho\gg 1$) preserves feasibility when obstacles are very close; $\delta\to 0$ in nominal cases.
Single-obstacle closed-form projection:
$$\mathbf{u}^* = \mathbf{u}_{\text{nom}} + \max\!\left\{0,\;\frac{-\alpha h - \nabla h^T\mathbf{u}_{\text{nom}}}{\|\nabla h\|_2^2}\right\}\,\nabla h$$The demo applies this idea iteratively across obstacles, yielding a fast approximation of the QP solution.
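For reference, a minimal NumPy sketch of this iterative projection (the function name `cbf_filter`, the gain `alpha`, and the sweep count are illustrative, not the demo's actual source):

```python
import numpy as np

def cbf_filter(x, u_nom, obstacles, alpha=1.0, n_iters=3):
    """Iteratively project u_nom onto each obstacle's CBF half-space.

    x         : robot position, shape (2,)
    u_nom     : nominal velocity command, shape (2,)
    obstacles : list of (center, r_safe) pairs
    alpha     : class-K gain in  dh/dt + alpha*h >= 0
    """
    u = u_nom.copy()
    for _ in range(n_iters):                      # a few sweeps approximate the QP solution
        for c, r in obstacles:
            h = np.dot(x - c, x - c) - r**2       # h(x) = ||x - x_obs||^2 - r_safe^2
            grad = 2.0 * (x - c)                  # grad h(x) = 2 (x - x_obs)
            viol = -alpha * h - grad @ u          # amount by which  grad_h . u >= -alpha h  is violated
            if viol > 0.0:
                u = u + viol / (grad @ grad) * grad   # minimum-norm correction along grad h
    return u

# Example: the nominal command points straight at an obstacle
x = np.array([0.0, 0.0])
u_nom = np.array([1.0, 0.0])
obstacles = [(np.array([1.5, 0.0]), 1.0)]
print(cbf_filter(x, u_nom, obstacles))
```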
Deadlock handling (practical):
When multiple obstacles create opposing constraints, the projection can stall progress. The demo detects low progress with high modifications and injects a tiny one-time perturbation to escape local deadlocks while always respecting the CBF constraint.
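A hedged sketch of that heuristic; the thresholds, the perturbation size, and the helper `cbf_filter` above are assumptions for illustration, not the demo's exact logic:

```python
import numpy as np

def resolve_deadlock(u_nom, u_safe, rng, progress_tol=1e-2, mod_tol=0.5):
    """If the filtered control makes little progress but differs strongly from
    the nominal one, add a small one-time tangential perturbation."""
    modification = np.linalg.norm(u_safe - u_nom)
    progress = np.linalg.norm(u_safe)
    if progress < progress_tol and modification > mod_tol:
        # Perturb perpendicular to the nominal direction to slide off the saddle point.
        # The perturbed command is passed through the CBF filter again, so safety is preserved.
        direction = u_nom / (np.linalg.norm(u_nom) + 1e-9)
        perp = np.array([-direction[1], direction[0]])
        u_safe = u_safe + 0.2 * rng.choice([-1.0, 1.0]) * perp
    return u_safe

rng = np.random.default_rng(0)
print(resolve_deadlock(np.array([1.0, 0.0]), np.array([0.0, 0.0]), rng))
```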
Interactive demonstration of High-Order CBF on acceleration-controlled robot.
High-Order Control Barrier Functions (HOCBFs) extend CBFs to systems with relative degree $r>1$ by constructing a sequence of barrier states and enforcing a differential inequality at the input level. For a double-integrator model, the constraint couples position and velocity so that admissible accelerations guarantee forward invariance while respecting actuation limits.
For double integrator dynamics $\ddot{\mathbf{p}} = \mathbf{u}$ where $\mathbf{p} \in \mathbb{R}^2$ and $\mathbf{u} \in \mathbb{R}^2$, control appears at relative degree 2. We construct high-order barriers recursively.
System Dynamics (2D Double Integrator):
$$\begin{align} \dot{\mathbf{p}} &= \mathbf{v} \\ \dot{\mathbf{v}} &= \mathbf{u} \end{align}$$State: $\mathbf{x} = [\mathbf{p}^T, \mathbf{v}^T]^T \in \mathbb{R}^4$ with position $\mathbf{p} = [p_x, p_y]^T$ and velocity $\mathbf{v} = [v_x, v_y]^T$.
Zeroth-Order Barrier (Obstacle Avoidance):
$$h_0(\mathbf{x}) = \|\mathbf{p} - \mathbf{o}\|^2 - r^2 = (p_x - o_x)^2 + (p_y - o_y)^2 - r^2$$Safe set: $\mathcal{C}_0 = \{\mathbf{x} : h_0(\mathbf{x}) \geq 0\}$ ensures distance from obstacle center $\mathbf{o}$ exceeds radius $r$.
First Lie Derivative:
$$\dot{h}_0(\mathbf{x}) = \nabla_{\mathbf{p}} h_0 \cdot \mathbf{v} = 2(\mathbf{p} - \mathbf{o})^T \mathbf{v}$$Control $\mathbf{u}$ does not appear! Relative degree is 2, requiring HOCBF construction.
First-Order Barrier Construction:
$$h_1(\mathbf{x}) = \dot{h}_0(\mathbf{x}) + \alpha_1 h_0(\mathbf{x}) = 2(\mathbf{p} - \mathbf{o})^T \mathbf{v} + \alpha_1[\|\mathbf{p} - \mathbf{o}\|^2 - r^2]$$Augment position safety with velocity-scaled term using class-$\mathcal{K}$ function $\alpha_1$.
Second Lie Derivative:
$$\begin{align} \ddot{h}_0(\mathbf{x}, \mathbf{u}) &= 2\|\mathbf{v}\|^2 + 2(\mathbf{p} - \mathbf{o})^T \mathbf{u} \\ \dot{h}_1(\mathbf{x}, \mathbf{u}) &= \ddot{h}_0 + \alpha_1 \dot{h}_0 = 2\|\mathbf{v}\|^2 + 2(\mathbf{p} - \mathbf{o})^T \mathbf{u} + \alpha_1 \dot{h}_0 \end{align}$$HOCBF Safety Constraint:
$$\dot{h}_1(\mathbf{x}, \mathbf{u}) + \alpha_2 h_1(\mathbf{x}) \geq 0$$Control Constraint (Linear in $\mathbf{u}$):
$$(\mathbf{p} - \mathbf{o})^T \mathbf{u} \geq -\|\mathbf{v}\|^2 - \frac{\alpha_2}{2}[\dot{h}_0 + \alpha_1 h_0] - \frac{\alpha_1}{2}\dot{h}_0$$Half-space constraint in control space $\mathbb{R}^2$. Minimum-norm projection yields safe control.
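A minimal sketch of this half-space constraint and the corresponding minimum-norm projection for a single obstacle; the function name and the gains `alpha1`, `alpha2` are illustrative:

```python
import numpy as np

def hocbf_filter(p, v, u_nom, obs, r, alpha1=1.0, alpha2=2.0):
    """Project a nominal acceleration onto the HOCBF half-space
       (p - o)^T u >= -||v||^2 - (alpha2/2)(h0_dot + alpha1*h0) - (alpha1/2)*h0_dot."""
    d = p - obs
    h0 = d @ d - r**2                 # h0 = ||p - o||^2 - r^2
    h0_dot = 2.0 * d @ v              # dh0/dt = 2 (p - o)^T v
    rhs = -v @ v - 0.5 * alpha2 * (h0_dot + alpha1 * h0) - 0.5 * alpha1 * h0_dot
    slack = d @ u_nom - rhs           # >= 0 means u_nom already satisfies the constraint
    if slack >= 0.0:
        return u_nom
    return u_nom + (-slack) / (d @ d) * d   # minimum-norm correction along (p - o)

# Example: moving toward an obstacle; the filter commands braking
p = np.array([0.0, 0.0]); v = np.array([1.0, 0.0])
print(hocbf_filter(p, v, np.array([1.0, 0.0]), obs=np.array([2.0, 0.0]), r=1.0))
```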
Key Properties:
• Relative degree 2 requires two class-$\mathcal{K}$ functions: $\alpha_1, \alpha_2 > 0$
• Velocity-aware safety: accounts for momentum and provides predictive braking
• Smooth deceleration: continuous acceleration prevents jerky motion
• Forward invariance: $h_0(\mathbf{x}(0)) \geq 0 \implies h_0(\mathbf{x}(t)) \geq 0, \; \forall t \geq 0$
• Exponential convergence to safe set when $h_1 < 0$
QP-Based Safety Filter:
$$\begin{align} \mathbf{u}^* = \arg\min_{\mathbf{u}} &\quad \|\mathbf{u} - \mathbf{u}_{\text{nom}}\|^2 \\ \text{s.t.} &\quad (\mathbf{p} - \mathbf{o}_i)^T \mathbf{u} \geq b_i, \quad \forall i \in \mathcal{I}_{\text{obs}} \end{align}$$Minimal modification to nominal control $\mathbf{u}_{\text{nom}}$ while satisfying all obstacle constraints.
Formal HOCBF recursion (relative degree $r=2$):
$$\psi_0(\mathbf{x})=h_0(\mathbf{x}),\quad \psi_1(\mathbf{x})=\dot{\psi}_0(\mathbf{x})+\alpha_1\big(\psi_0(\mathbf{x})\big)$$ $$\psi_2(\mathbf{x},\mathbf{u})=\dot{\psi}_1(\mathbf{x},\mathbf{u})+\alpha_2\big(\psi_1(\mathbf{x})\big)\;\;\ge 0$$Choosing class-$\mathcal{K}$ functions $\alpha_1,\alpha_2$ ensures forward invariance of $\{\psi_0\ge 0\}$ when $\psi_2\ge 0$ is enforced.
Acceleration bounds and feasibility:
$$\begin{aligned} \min_{\mathbf{u},\delta\ge 0}\;&\tfrac{1}{2}\|\mathbf{u}-\mathbf{u}_{\text{nom}}\|_2^2 + \tfrac{\rho}{2}\delta^2\\ \text{s.t. }\;&\psi_2(\mathbf{x},\mathbf{u}) \ge -\delta,\quad \|\mathbf{u}\|_\infty \le u_{\max} \end{aligned}$$The demo enforces the linear half-space equivalent of $\psi_2\ge 0$ and clips accelerations to emulate $\|\mathbf{u}\|$ limits.
Watch the robot navigate with unknown disturbances while a RISE observer estimates and compensates for them. The system uses vector-valued CBFs for multiple simultaneous safety constraints. Click to set goal. Toggle disturbances to see real-time adaptation.
Unknown disturbances and model errors enlarge worst-case CBF margins. A RISE (Robust Integral of the Sign of the Error) observer provides bounded, fast disturbance estimation that tightens admissible control constraints in real time. Vector-valued CBFs encode multiple barrier inequalities simultaneously for multi-obstacle environments.
Single Integrator with Disturbance:
$$\dot{\mathbf{x}} = \mathbf{u} + \mathbf{d}(\mathbf{x},t)$$Unknown bounded disturbance $\mathbf{d}$ satisfies $\|\mathbf{d}(\mathbf{x},t)\| \leq \bar d$ in the safe region.
Observer dynamics (component-wise):
$$\tilde{\mathbf{x}} = \mathbf{x} - \hat{\mathbf{x}}, \quad \text{dir}(\tilde{\mathbf{x}}) = \frac{\tilde{\mathbf{x}}}{\|\tilde{\mathbf{x}}\|}$$ $$\dot{\hat{\mathbf{x}}} = \mathbf{u} + \hat{\mathbf{d}} + \alpha \, \tilde{\mathbf{x}}$$ $$\hat{\mathbf{d}}(t) = \hat{\mathbf{d}}(0) + k_d \, \tilde{\mathbf{x}} + \int_0^t \big[(k_d \alpha + 1)\,\tilde{\mathbf{x}}(\tau) + \beta\,\text{dir}(\tilde{\mathbf{x}}(\tau))\big] d\tau$$Gains satisfy $\alpha > 1,\; k_d,\beta>0$. Define $\lambda = \tfrac{1}{2}\min\{\alpha-1,\,k_d\}$. Then $\|\tilde{\mathbf{d}}(t)\| \le 2\bar d\,e^{-\lambda t}$.
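An Euler-discretized sketch of this observer; the gains `alpha`, `k_d`, `beta` and the step `dt` are illustrative values, not the demo's tuning:

```python
import numpy as np

class RiseObserver:
    """Euler-discretized RISE disturbance observer for dx/dt = u + d(x, t)."""
    def __init__(self, x0, alpha=2.0, k_d=3.0, beta=0.5, dt=0.01):
        self.x_hat = x0.astype(float).copy()
        self.d_int = np.zeros_like(self.x_hat)    # d_hat(0) plus the integral term
        self.alpha, self.k_d, self.beta, self.dt = alpha, k_d, beta, dt

    def update(self, x, u):
        x_tilde = x - self.x_hat
        direction = x_tilde / (np.linalg.norm(x_tilde) + 1e-9)
        # Integral part of d_hat:  (k_d*alpha + 1) x_tilde + beta * dir(x_tilde)
        self.d_int += self.dt * ((self.k_d * self.alpha + 1.0) * x_tilde
                                 + self.beta * direction)
        d_hat = self.d_int + self.k_d * x_tilde   # full estimate incl. proportional term
        # Observer propagation: x_hat_dot = u + d_hat + alpha * x_tilde
        self.x_hat += self.dt * (u + d_hat + self.alpha * x_tilde)
        return d_hat

obs = RiseObserver(np.zeros(2))
print(obs.update(x=np.array([0.1, 0.0]), u=np.zeros(2)))
```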
Vector CBF Definition (safe set):
$$\mathbf{B}(\mathbf{x}) = \begin{bmatrix} B_1(\mathbf{x}) \\ \vdots \\ B_m(\mathbf{x}) \end{bmatrix},\quad \mathcal{S} = \{\mathbf{x} : \mathbf{B}(\mathbf{x}) \le \mathbf{0}\}$$Here we use $B_i(\mathbf{x}) = r_i^2 - \|\mathbf{x}-\mathbf{o}_i\|^2$ so $B_i\le 0$ is safe.
Constraint form (single integrator):
$$\Gamma_i(\mathbf{x},\mathbf{u}) = \nabla B_i(\mathbf{x})^T (\mathbf{u} + \mathbf{d}) \le -\gamma_i(\mathbf{x})$$Implement using an upper bound $\chi_i$ on $\nabla B_i^T\mathbf{d}$.
Observer-aided robust constraint:
$$\chi_i(\mathbf{x}) = \min\{\bar d\,\|\nabla B_i\|,\; \nabla B_i^T\hat{\mathbf{d}} + \tilde d_{UB}(t)\,\|\nabla B_i\|\}$$ $$\nabla B_i(\mathbf{x})^T\,\mathbf{u} \le -\gamma_i(\mathbf{x}) - \chi_i(\mathbf{x})$$With $\tilde d_{UB}(t) = 2\bar d\,e^{-\lambda t}$ and $\gamma_i(\mathbf{x}) = \gamma\,\max\{B_i(\mathbf{x}),0\}$.
Minimum-modification control (conceptually QP):
$$\begin{align} \mathbf{u}^* = \arg\min_{\mathbf{u}} &\quad \|\mathbf{u} - \mathbf{u}_{\text{nom}}\|^2 \\ \text{subject to} &\quad \nabla B_i(\mathbf{x})^T\,\mathbf{u} \le -\gamma_i(\mathbf{x}) - \chi_i(\mathbf{x}),\quad i=1,\ldots,m \end{align}$$Minimally modify nominal control $\mathbf{u}_{\text{nom}}$ while satisfying all $m$ vector CBF constraints.
Why observer-aided CBFs reduce conservatism:
Using a fixed worst-case bound $\bar d$ forces large control modifications near the boundary. The RISE estimate $\hat{\mathbf{d}}$ yields a tighter bound via $\chi_i(\mathbf{x})=\min\{\bar d\,\|\nabla B_i\|,\;\nabla B_i^T\hat{\mathbf{d}}+\tilde d_{UB}(t)\,\|\nabla B_i\|\}$, shrinking the admissible control set only as much as needed.
QP with slack (conceptual):
$$\min_{\mathbf{u},\;\delta\ge 0}\;\tfrac{1}{2}\|\mathbf{u}-\mathbf{u}_{\text{nom}}\|_2^2 + \tfrac{\rho}{2}\delta^2\quad \text{s.t.}\quad \nabla B_i^T\mathbf{u} \le -\gamma_i - \chi_i + \delta.$$The implementation performs fast projected updates equivalent to KKT for one active constraint per step.
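A sketch of that projected update with the observer-aided bound $\chi_i$, assuming the estimate `d_hat` and decaying bound `d_tilde_ub` come from the RISE observer above (all names and values are illustrative):

```python
import numpy as np

def robust_cbf_filter(x, u_nom, obstacles, d_hat, d_bar, d_tilde_ub, gamma=1.0):
    """Project u_nom onto the observer-tightened constraints
       grad B_i^T u <= -gamma_i - chi_i,  with B_i = r_i^2 - ||x - o_i||^2 (B_i <= 0 is safe)."""
    u = u_nom.copy()
    for c, r in obstacles:
        B = r**2 - np.dot(x - c, x - c)
        grad = -2.0 * (x - c)                       # grad B_i
        g_norm = np.linalg.norm(grad)
        gamma_i = gamma * max(B, 0.0)
        # chi_i: the tighter of the worst-case and observer-aided disturbance bounds
        chi = min(d_bar * g_norm, grad @ d_hat + d_tilde_ub * g_norm)
        viol = grad @ u - (-gamma_i - chi)          # > 0 means the constraint is violated
        if viol > 0.0 and g_norm > 1e-9:
            u = u - viol / (g_norm**2) * grad       # one-step projection onto the half-space
    return u

x = np.array([0.0, 0.0]); u_nom = np.array([1.0, 0.0])
obstacles = [(np.array([1.5, 0.0]), 1.0)]
print(robust_cbf_filter(x, u_nom, obstacles,
                        d_hat=np.array([0.1, 0.0]), d_bar=0.5, d_tilde_ub=0.05))
```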
Real-time learning of uncertain dynamics with safety guarantees. The DNN adapts online using Jacobian-based weight updates without pre-training, while CBF constraints ensure forward invariance of the safe set. Watch the network learn the unknown drift model in real-time!
Nominal Control with Feedforward Cancellation:
$$\mathbf{u}_{\text{nom}}(\mathbf{x}) = k\,(\mathbf{x}_g - \mathbf{x}) - \Phi(\mathbf{x}, \hat{\boldsymbol{\theta}}), \quad k>0$$
In this demo $g(\mathbf{x}) = I$ (single integrator) and $\mathbf{x}_g$ is the goal. The feedforward term is hard-capped at $\|\Phi(\mathbf{x},\hat{\boldsymbol{\theta}})\| \le 10$ for numerical robustness.
An adaptive neural network approximates unknown drift $f(x)$ online, while the CBF enforces safety by constraining the closed-loop input. Parameter adaptation uses Jacobian information with regularization and clipping for numerical robustness. During intermittent feedback loss, safety is preserved by tightening constraints using a priori bounds on the state-estimation error.
System Dynamics (Unknown $f$):
$$\dot{\mathbf{x}} = f(\mathbf{x}) + g(\mathbf{x})\mathbf{u}$$
DNN Universal Approximation:
$$f(\mathbf{x}) = \Phi(\mathbf{x}, \boldsymbol{\theta}^*) + \boldsymbol{\varepsilon}(\mathbf{x})$$
where $\Phi$ is the DNN, $\boldsymbol{\theta}^*$ are ideal weights, $\|\boldsymbol{\varepsilon}\| \le \bar{\varepsilon}$
High-Gain State-Derivative Estimator:
$$\dot{\hat{\mathbf{x}}} = \hat{f} + g(\mathbf{x})\mathbf{u} + k_x \tilde{\mathbf{x}}$$
$$\dot{\hat{f}} = k_f(\dot{\tilde{\mathbf{x}}} + k_x \tilde{\mathbf{x}}) + \tilde{\mathbf{x}}$$
where $\tilde{\mathbf{x}} = \mathbf{x} - \hat{\mathbf{x}}$, $\tilde{f} = f(\mathbf{x}) - \hat{f}$
DNN Weight Adaptation (Least Squares):
$$\dot{\hat{\boldsymbol{\theta}}} = \text{proj}\left(\boldsymbol{\Gamma}\left[-k_\theta \hat{\boldsymbol{\theta}} + \alpha \Phi'^T(\mathbf{x}, \hat{\boldsymbol{\theta}})(\hat{f} - \Phi(\mathbf{x}, \hat{\boldsymbol{\theta}}))\right]\right)$$
$\Phi' = \frac{\partial \Phi}{\partial \hat{\boldsymbol{\theta}}}$ computed via backpropagation
Adaptive Gain Matrix:
$$\frac{d}{dt}\boldsymbol{\Gamma}^{-1} = -\beta(t)\boldsymbol{\Gamma}^{-1} + \Phi'^T(\mathbf{x}, \hat{\boldsymbol{\theta}})\Phi'(\mathbf{x}, \hat{\boldsymbol{\theta}})$$
with forgetting factor $\beta(t) = \beta_0(1 - \|\boldsymbol{\Gamma}\|/\kappa_0)$
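A simplified sketch of the online adaptation for a single-hidden-layer network. The network shape, the `tanh` activation, and the clipping values are assumptions, and a plain gradient step stands in for the least-squares gain $\boldsymbol{\Gamma}$ above:

```python
import numpy as np

class AdaptiveNet:
    """phi(x) = W2 @ tanh(W1 @ x + b1); all weights adapted online, no pre-training."""
    def __init__(self, n_in=2, n_hidden=8, n_out=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = 0.1 * rng.standard_normal((n_out, n_hidden))

    def forward(self, x):
        return self.W2 @ np.tanh(self.W1 @ x + self.b1)

    def adapt(self, x, f_hat, lr=0.05, k_theta=1e-3, clip=1.0):
        """Jacobian-based update toward the estimated drift f_hat, with
        sigma-modification (k_theta) and clipping for numerical robustness."""
        a = np.tanh(self.W1 @ x + self.b1)
        err = f_hat - self.W2 @ a                   # drift prediction error
        dW2 = np.outer(err, a) - k_theta * self.W2  # output-layer update
        back = (self.W2.T @ err) * (1.0 - a**2)     # backpropagation through tanh
        dW1 = np.outer(back, x) - k_theta * self.W1
        db1 = back - k_theta * self.b1
        for W, dW in ((self.W2, dW2), (self.W1, dW1), (self.b1, db1)):
            W += lr * np.clip(dW, -clip, clip)

net = AdaptiveNet()
net.adapt(x=np.array([0.5, -0.2]), f_hat=np.array([0.1, 0.0]))
print(net.forward(np.array([0.5, -0.2])))
```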
Vector-Valued CBF with DNN Estimate:
$$\mathcal{S} = \{\mathbf{x} \in \mathbb{R}^n : \mathbf{B}(\mathbf{x}) \le \mathbf{0}\}$$
$$K_c(\mathbf{x}) = \left\{\mathbf{u} : \nabla \mathbf{B}^T(\mathbf{x})[\Phi(\mathbf{x}, \hat{\boldsymbol{\theta}}) + g(\mathbf{x})\mathbf{u}] \le -\boldsymbol{\gamma}(\mathbf{x}) - \boldsymbol{\chi}(\mathbf{x})\right\}$$
$\boldsymbol{\chi}(\mathbf{x})$ accounts for $\|\tilde{\boldsymbol{\theta}}\|$ and $\|\boldsymbol{\varepsilon}\|$ bounds
Safety Filter as a QP (conceptual):
$$\begin{aligned} \mathbf{u}^*(\mathbf{x}) \;=\; &\arg\min_{\mathbf{u}}\; \tfrac{1}{2}\,\|\mathbf{u} - \mathbf{u}_{\text{nom}}(\mathbf{x})\|_2^2 \\ \text{s.t.}\; &\nabla B_i(\mathbf{x})^T\big(\Phi(\mathbf{x}, \hat{\boldsymbol{\theta}}) + \mathbf{u}\big) + \gamma_i(\mathbf{x}) + \chi\,\|\nabla B_i(\mathbf{x})\|_2 \;\le\; 0,\\ &\forall i \in \{1,\dots,m\}. \end{aligned}$$
The implementation uses an efficient projection step equivalent to the KKT solution for a single active constraint per iteration.
Intermittent feedback safety (loss-of-feedback):
During temporary sensor dropouts, an open-loop predictor $\hat{\mathbf{X}}$ evolves using the last available model; the state error bound satisfies
$$\|\tilde{\mathbf{X}}(t)\|_2 \le L_U\,t + \Delta_U \quad \text{for } t\in[0, T_{\text{loss}}],$$with tunable $L_U,\Delta_U$. The CBF constraint is tightened by a margin $\rho\,\|\nabla B_i\|\,\|\tilde{\mathbf{X}}\|$ to remain safe under prediction error.
Tightened constraint during loss:
$$\nabla B_i(\hat{\mathbf{X}})^T\big(\Phi(\hat{\mathbf{X}},\hat{\boldsymbol{\theta}})+\mathbf{u}\big) + \gamma_i(\hat{\mathbf{X}}) + \rho\,\|\nabla B_i\|\,\|\tilde{\mathbf{X}}\| \le 0.$$When feedback resumes, the standard constraint with observer terms is restored and adaptation unfreezes.
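A sketch of how the tightening margin could grow during a dropout; the bounds `L_U`, `Delta_U` and the gain `rho` are illustrative placeholders:

```python
import numpy as np

def tightened_margin(t_since_loss, grad_B, L_U=0.5, Delta_U=0.05, rho=1.0):
    """Extra margin rho * ||grad B_i|| * ||X_tilde bound|| added to the CBF
    constraint while running open loop on the predicted state."""
    x_tilde_bound = L_U * t_since_loss + Delta_U      # ||X_tilde(t)|| <= L_U * t + Delta_U
    return rho * np.linalg.norm(grad_B) * x_tilde_bound

# The margin grows with time since the last measurement, so the safety filter
# becomes increasingly conservative until feedback resumes.
print([round(tightened_margin(t, np.array([2.0, 0.0])), 3) for t in (0.0, 0.5, 1.0)])
```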
Explore various multi-agent formations and bio-inspired scenarios using Control Barrier Functions for safety. Try different formation types and shepherding scenarios to see how agents navigate complex scenarios while maintaining safety. Use the dropdown to select different formation challenges and herding simulations.
Inter-Agent Distance Constraint:
$$h_{ij}(x_i, x_j) = \|x_i - x_j\|^2 - d_{safe}^2 \geq 0$$Agents $i$ and $j$ must maintain squared distance greater than safety threshold.
Lie Derivative for Agent $i$:
$$\dot{h}_{ij} = \nabla_{x_i} h_{ij} \cdot \dot{x}_i + \nabla_{x_j} h_{ij} \cdot \dot{x}_j$$ $$= 2(x_i - x_j) \cdot u_i + 2(x_j - x_i) \cdot u_j$$ $$= 2(x_i - x_j) \cdot (u_i - u_j)$$Distributed CBF Constraint:
$$\dot{h}_{ij} + \alpha h_{ij} \geq 0$$ $$2(x_i - x_j) \cdot (u_i - u_j) + \alpha h_{ij} \geq 0$$Decentralized Implementation:
$$2(x_i - x_j) \cdot u_i \geq -\frac{\alpha}{2} h_{ij}$$Each agent enforces half of the required constraint; when both agents do so, summing the two per-agent inequalities recovers the full condition above.
QP Formulation for Agent $i$:
$$\begin{align} \min_{u_i} &\quad \|u_i - u_i^{nom}\|^2 \\ \text{s.t.} &\quad 2(x_i - x_j) \cdot u_i \geq -\frac{\alpha}{2} h_{ij}, \; \forall j \neq i \end{align}$$
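A per-agent sketch of this projection, applied as an iterative pass over neighbors; the parameters `d_safe`, `alpha`, and the sweep count are illustrative:

```python
import numpy as np

def agent_safe_control(i, positions, u_nom, d_safe=0.5, alpha=2.0, n_iters=3):
    """Decentralized CBF filter for agent i: project its nominal control onto
       2 (x_i - x_j) . u_i >= -(alpha/2) h_ij  for every other agent j."""
    x_i = positions[i]
    u = u_nom.copy()
    for _ in range(n_iters):
        for j, x_j in enumerate(positions):
            if j == i:
                continue
            diff = x_i - x_j
            h_ij = diff @ diff - d_safe**2            # pairwise squared-distance barrier
            grad = 2.0 * diff                         # gradient of h_ij w.r.t. x_i
            viol = -0.5 * alpha * h_ij - grad @ u     # half-responsibility constraint residual
            if viol > 0.0:
                u = u + viol / (grad @ grad) * grad
    return u

positions = [np.array([0.0, 0.0]), np.array([0.6, 0.0])]
print(agent_safe_control(0, positions, u_nom=np.array([1.0, 0.0])))
```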
Key Features:
• Distributed: each agent computes own control
• Scalable: $O(n)$ constraints per agent for $n$ agents
• Reciprocal: both agents share avoidance responsibility
• Real-time: efficient QP solution for large swarms
Logic: Two-way crossing through gates only. Safety is the union: away from the wall by at least \(\varepsilon\) or inside any gate band. We encode this with \(\mathbf{H}=\mathrm{diag}(h_i)\), where \(h_i=\max(|x- x_{\text{wall}}| - \varepsilon,\; D_i)\) and \(D_i=\tfrac{w_i}{2} - |y - y_i|\). Projection uses the indefinite MCBF with Schur-based eigen handling for continuity at eigenvalue multiplicities.
Sometimes being safe means satisfying any one of several conditions: “go through gate A, B, or C.” Matrix CBFs model this disjunctive logic smoothly, avoiding the abrupt switches you get from hard max/min. We project the control so a matrix inequality stays satisfied, which encodes the Boolean OR set exactly.
Safe Set via Maximum Eigenvalue:
\(\mathcal{C} = \{ \mathbf{x} \in \mathbb{R}^n \mid \mathbf{H}(\mathbf{x}) \not\prec 0 \} \Leftrightarrow \{ \mathbf{x} \mid \lambda_{\max}(\mathbf{H}(\mathbf{x})) \geq 0 \}\)
Indefinite MCBF Condition:
\(\dot{\mathbf{H}}(\mathbf{x}, \mathbf{u}) \succeq -\alpha(\lambda_{\max}(\mathbf{H})) \mathbf{I} - c_\perp (\lambda_{\max}(\mathbf{H}) \mathbf{I} - \mathbf{H}(\mathbf{x}))\)
Disjunctive Composition: For \(\mathbf{H} = \text{diag}(h_1, \ldots, h_p)\):
\(\lambda_{\max}(\mathbf{H}) = \max_i h_i \geq 0 \Leftrightarrow \bigvee_{i=1}^p (h_i \geq 0)\)
Indefinite MCBFs enable continuous safety filters for disjunctive (OR) constraints without soft-max relaxations. This demo shows a robot navigating through alternative "gates" where safety requires passing through at least one opening. The MCBF formulation maintains the exact safe set while ensuring control continuity at eigenvalue multiplicities.
Projection via eigenvalue gradient:
Let \(\mathbf{M}(\mathbf{u}) = \dot{\mathbf{H}}(\mathbf{u}) + \alpha(\lambda_{\max})\,\mathbf{I} + c_\perp\,(\lambda_{\max}\,\mathbf{I}-\mathbf{H})\).
Enforce \(\lambda_{\min}(\mathbf{M}(\mathbf{u})) \ge \delta\). For eigenvector \(\mathbf{v}\) of the minimal eigenvalue,
$$\frac{\partial\,\lambda_{\min}}{\partial u_j} = \mathbf{v}^T\,\frac{\partial\mathbf{M}}{\partial u_j}\,\mathbf{v}.$$The minimum-norm correction is
$$\mathbf{u}^* = \mathbf{u}_{\text{nom}} + \frac{\delta - \lambda_{\min}}{\|\nabla_{\!\mathbf{u}}\lambda_{\min}\|_2^2}\,\nabla_{\!\mathbf{u}}\lambda_{\min},\quad \nabla_{\!\mathbf{u}}\lambda_{\min} = \big[\mathbf{v}^T\tfrac{\partial\mathbf{M}}{\partial u_j}\mathbf{v}\big]_j.$$The implementation computes this using a numerically robust Schur-based routine for small matrices.
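A numerical sketch of this eigen-gradient correction for the diagonal case \(\mathbf{H}=\mathrm{diag}(h_i)\), with a linear class-\(\mathcal{K}\) choice \(\alpha(s)=\alpha s\); a production version would use the Schur-based routine mentioned above, and all names here are illustrative:

```python
import numpy as np

def mcbf_project(u_nom, h, G, alpha=1.0, c_perp=1.0, delta=0.0):
    """Minimum-norm correction enforcing lambda_min(M(u)) >= delta, where
    H = diag(h_i), hdot_i = G[i] @ u, and
    M(u) = diag(G u) + alpha*lam_max*I + c_perp*(lam_max*I - diag(h))."""
    lam_max = np.max(h)
    def M(u):
        return (np.diag(G @ u) + alpha * lam_max * np.eye(len(h))
                + c_perp * (lam_max * np.eye(len(h)) - np.diag(h)))
    w, V = np.linalg.eigh(M(u_nom))
    lam_min, v = w[0], V[:, 0]
    if lam_min >= delta:
        return u_nom
    # d lambda_min / d u_j = v^T (dM/du_j) v = sum_i v_i^2 * G[i, j]
    grad = (v**2) @ G
    return u_nom + (delta - lam_min) / (grad @ grad) * grad

# Two "gates": the robot currently satisfies gate 1 (h_1 >= 0) but not gate 2.
h = np.array([0.2, -0.5])
G = np.array([[1.0, 0.0],     # gradient of h_1, so hdot_1 = G[0] @ u
              [0.0, 1.0]])
print(mcbf_project(np.array([-1.0, 0.0]), h, G))
```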
Comparison to Scalar CBF: Scalar eigenvalue-based CBFs suffer from control chattering when eigenvalues merge (e.g., λ₂ and λ₃ become equal). The MCBF formulation avoids this by working directly with the matrix \(\mathbf{H}\), using Schur decomposition to handle multiplicities smoothly via orthogonal eigenspace projectors.
A robot team stays connected when information can flow across the whole group. Instead of directly constraining a single eigenvalue (which can be numerically tricky), Matrix CBFs keep a whole matrix positive semidefinite, yielding smooth controls even when eigenvalues merge.
Laplacian Matrix: \(\mathbf{L}(\mathbf{x}) = \mathbf{D}(\mathbf{x}) - \mathbf{A}(\mathbf{x})\)
where \(\mathbf{A}_{ij}\) is the adjacency weight and \(\mathbf{D}_{ii} = \sum_j \mathbf{A}_{ij}\).
Connectivity via Fiedler Eigenvalue:
Network is connected \(\Leftrightarrow \lambda_2(\mathbf{L}(\mathbf{x})) > 0\)
MCBF Construction:
\(\mathbf{H}(\mathbf{x}) = \mathbf{L}(\mathbf{x}) + \frac{\varepsilon}{p} \mathbf{1}\mathbf{1}^\top - \varepsilon \mathbf{I}\)
Then \(\mathbf{H} \succeq 0\) is equivalent to \(\lambda_2(\mathbf{L}) \geq \varepsilon\): the rank-one term keeps the consensus direction \(\mathbf{1}\) at eigenvalue zero while every other eigenvalue of \(\mathbf{L}\) is shifted down by \(\varepsilon\), so positive semidefiniteness of \(\mathbf{H}\) certifies connectivity with margin \(\varepsilon > 0\). Here \(p\) is the number of agents.
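A sketch of this construction from agent positions, using a standard Gaussian distance-based adjacency weight; the weight function, `comm_range`, and `eps` are illustrative assumptions:

```python
import numpy as np

def connectivity_barrier(positions, comm_range=2.0, eps=0.1):
    """Build H = L + (eps/p) 11^T - eps I and return its minimum eigenvalue.
    lambda_min(H) >= 0 certifies connectivity with margin eps; the consensus
    direction always contributes an exact zero eigenvalue."""
    P = np.asarray(positions, dtype=float)
    p = len(P)
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    A = np.where((D < comm_range) & (D > 0.0), np.exp(-D**2), 0.0)  # smooth adjacency weights
    L = np.diag(A.sum(axis=1)) - A                                  # graph Laplacian
    H = L + (eps / p) * np.ones((p, p)) - eps * np.eye(p)
    return np.linalg.eigvalsh(H).min()

# Connected triangle vs. a configuration with an isolated agent
print(connectivity_barrier([[0, 0], [1, 0], [0.5, 1]]) >= -1e-9)   # True: connected with margin eps
print(connectivity_barrier([[0, 0], [1, 0], [10, 10]]) >= -1e-9)   # False: third agent is disconnected
```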
Multi-agent network connectivity can be maintained using MCBFs without the nonsmoothness issues of scalar eigenvalue-based CBFs. This demo shows agents maintaining communication links while tracking references. The MCBF formulation handles eigenvalue merging (when multiple eigenvalues become equal) continuously via Schur decomposition.
Semidefinite MCBF constraint:
With \(\mathbf{H}(\mathbf{x})\) as above, enforce
$$\dot{\mathbf{H}}(\mathbf{x},\mathbf{u}) + c_\alpha\,\mathbf{H}(\mathbf{x}) \succeq \delta\,\mathbf{I}.$$Projecting a nominal control onto this convex cone uses the same eigen-gradient idea:
$$\frac{\partial\,\lambda_{\min}}{\partial u_j} = \mathbf{v}^T\,\frac{\partial}{\partial u_j}\big(\dot{\mathbf{H}} + c_\alpha\,\mathbf{H}\big)\,\mathbf{v},\quad \lambda_{\min}\big(\dot{\mathbf{H}} + c_\alpha\mathbf{H}\big) \ge \delta.$$This avoids the non-smoothness of directly constraining $\lambda_2(\mathbf{L})$ and yields smooth, chatter-free controls.