We explore ways of covertly delivering interventions into an adversary's decision cycle in order to shape adversary decision-making and performance without arousing suspicion. Recognizing that completely covert interventions, while most effective, are difficult to implement, we focus on a more general mode of covertness. Drawing on insights from human abductive reasoning, we propose a delivery scheme whose interventions may be noticeable but whose true meanings are hidden or distorted; for example, human operators do not readily attribute the interventions to malicious attacks. We evaluate, both theoretically and empirically, the effectiveness and robustness of this scheme in escaping detection and disrupting adversary performance.