Dynamic Defense Reallocation
Markov Decision Process · Approximate Dynamic Programming
Reallocate interceptors in real time as alien attack waves arrive. Each wave brings new threats while weapon ammunition depletes. The optimal policy requires solving a Bellman equation over an exponentially large state space — this demo uses a myopic one-step lookahead heuristic instead.
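The myopic heuristic can be sketched in a few lines. This is an illustrative reconstruction, not the demo's actual code: the hit probabilities, threat values, and ammo counts below are made-up parameters, and the greedy per-weapon argmax is one simple way to realize "one-step lookahead".

```python
def myopic_assignment(ammo, hit_prob, threat_value):
    """Pick, weapon by weapon, the target with the highest expected
    immediate reward: hit_prob[i][j] * threat_value[j].
    Future waves are ignored entirely -- that is what makes it myopic."""
    assignment = {}
    for i, shots in enumerate(ammo):
        if shots == 0:
            continue  # depleted weapons sit out the turn
        # argmax over targets of the one-step expected reward
        j = max(range(len(threat_value)),
                key=lambda j: hit_prob[i][j] * threat_value[j])
        assignment[i] = j
    return assignment

# Two weapons, two threats; weapon 0 is better against threat 1.
ammo = (3, 2)
hit_prob = [[0.5, 0.8],
            [0.7, 0.4]]
threat_value = [100, 100]
print(myopic_assignment(ammo, hit_prob, threat_value))  # {0: 1, 1: 0}
```

Because each turn is scored in isolation, this policy can happily spend scarce ammo on low-value threats now and have nothing left for a high-value wave later, which is exactly the failure mode a full MDP solution avoids.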
Wave Defense
| Defense Domain | OR Element | Symbol | Example |
|---|---|---|---|
| Current battle state | State | s = (ammo, threats, time) | ammo (3, 2, 4), active threats, wave 2 |
| Assignment this turn | Action | a = {(i,j)} | Laser-1 → Scout-3 |
| Engagement outcomes | Transition | P(s′|s,a) | Hit with prob 0.8 |
| Threat value destroyed | Reward | R(s,a) | 100 points |
| Future value | Value function | V(s) | Expected total reward |
| Time horizon | Discount factor | γ = 0.9 | Future rewards discounted |
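The table's mapping translates directly into a state container. A minimal sketch, with illustrative field names (the demo's internal representation may differ):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BattleState:
    ammo: tuple     # remaining shots per weapon, e.g. (3, 2, 4)
    threats: tuple  # point values of active threats
    wave: int       # current wave number (the "time" component)

# An action a = {(i, j)} is a set of (weapon, threat) pairs,
# e.g. {(0, 2)} for Laser-1 -> Scout-3.
s = BattleState(ammo=(3, 2, 4), threats=(100, 100), wave=2)
print(s.ammo, s.wave)  # (3, 2, 4) 2
```

The frozen dataclass makes states hashable, which matters if you ever want to memoize V(s) over the reachable state space.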
★☆☆ Educational Demo
This is a simplified simulation demonstrating the sequential decision structure of an MDP. The “Myopic AI” uses one-step lookahead only — it does NOT solve the full Bellman equation. A true ADP solver would use value function approximation (e.g., linear basis functions or neural networks) to estimate V*(s), which is far beyond the scope of a browser demo. See Bertsekas (2012) and Powell (2011) for full treatments.
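For reference, the Bellman optimality equation the disclaimer refers to, written with the symbols from the table above, together with the linear value-function approximation an ADP solver might substitute for V* (the basis functions φ_k are hypothetical, not part of the demo):

```latex
V^*(s) = \max_{a} \Big[\, R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^*(s') \,\Big]
\qquad
\hat{V}(s;\theta) = \sum_{k} \theta_k \,\phi_k(s)
```

The myopic AI keeps only the first term, choosing a = argmax R(s, a) and dropping the discounted continuation value entirely.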
Wave Simulator
3 weapons (ammo: 3, 3, 4). Threats arrive in waves. Compare your manual decisions against the myopic AI.
Preparing for First Contact
If the aliens arrive, we suspect you will not be visiting a GitHub Pages site. We do recommend the Hungarian algorithm. It works on any planet.
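The Hungarian algorithm solves the one-shot weapon-to-threat assignment as a linear assignment problem in O(n³). The brute-force check below (with made-up costs) shows what it computes; in practice you would call `scipy.optimize.linear_sum_assignment` rather than enumerate permutations:

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustively minimize total cost over all one-to-one
    weapon -> threat assignments. The Hungarian algorithm finds
    the same optimum in O(n^3) instead of O(n!)."""
    n = len(cost)
    return min(
        (sum(cost[i][p[i]] for i in range(n)), p)
        for p in permutations(range(n))
    )

# cost[i][j]: e.g. the negated expected reward of weapon i engaging threat j
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
total, assign = best_assignment(cost)
print(total, assign)  # 5 (1, 0, 2): weapon 0 -> threat 1, 1 -> 0, 2 -> 2
```

Note this solves a single turn optimally; sequencing such assignments across waves is what brings back the MDP.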
Educational Fiction Disclaimer
This is a fictional educational scenario.
- All “alien invasion” content exists purely to teach OR concepts
- All data and parameters are entirely fictional
- No actual military applications are intended or endorsed
- The author advocates for peace and opposes militarization