Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines.
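To make this concrete, below is a minimal sketch of how such an input can be crafted with the well-known Fast Gradient Sign Method (FGSM), which nudges an image in the direction that most increases the model's loss. The `model`, `image`, and `label` names and the `epsilon` value are hypothetical placeholders, not taken from any particular system:

```python
# Minimal FGSM sketch (PyTorch): perturb an input so a classifier is
# more likely to misclassify it. `model`, `image`, `label`, and
# `epsilon` are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then keep
    # pixel values in the valid [0, 1] range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

A tiny `epsilon` is the point: the perturbation is often imperceptible to a human, yet enough to flip the model's prediction, which is what makes the optical-illusion analogy apt.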
By the same zero-day logic, adversarial training attempts to teach the algorithm to recognise these deceptive inputs and respond the way it was designed to, even when hostile actors try to trick it. But just as with code, the algorithm's creators and testers may not have anticipated every way a hostile actor can deceive the AI.
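In practice, adversarial training means showing the model its own adversarial examples during training. Here is a minimal sketch of one such training step, reusing the hypothetical `fgsm_example` from the sketch above; the `model`, `optimizer`, `images`, and `labels` names are again illustrative assumptions:

```python
# Minimal adversarial-training sketch (PyTorch): train on both the
# clean batch and its FGSM-perturbed counterpart, so the model learns
# to classify the perturbed inputs correctly too.
def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # Craft adversarial versions of the current batch (see fgsm_example above).
    adv_images = fgsm_example(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The limitation the paragraph describes shows up directly in this sketch: the model only becomes robust to the kinds of attacks it was trained against (here, FGSM at one `epsilon`), and an attacker using a method the creators and testers never considered may still slip through.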