Theories of causal reasoning and learning often implicitly assume that the structural implications of causal models and empirical evidence are consistent. For probabilistic causal relations, however, this need not be the case. We propose a causal consistency hypothesis, which holds that people tend to create consistency between these two types of knowledge. As a consequence, mismatches between structural implications and empirical evidence may be resolved by distorting the evidence. In the present research we used trial-by-trial learning tasks to study how people attempt to create consistency between structural assumptions and learning data. Experiment 1 shows that causal chain assumptions bias judgments of empirical evidence, even after repeated testing of the direct and indirect relations. Experiment 2 investigates whether different causal models lead to different judgments despite identical data patterns. Overall, the findings support the idea that people try to reconcile assumptions about causal structure with probabilistic data, but they also suggest that this tendency may depend on the type of causal structure under consideration.
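To make the potential mismatch concrete, consider a minimal illustration with hypothetical numbers that are not taken from the experiments. One standard structural implication of a causal chain A → B → C is the causal Markov condition, under which the strength of the indirect relation is fixed by the two direct links:

\[
P(c \mid a) = P(c \mid b)\,P(b \mid a) + P(c \mid \neg b)\,P(\neg b \mid a).
\]

With, say, \(P(b \mid a) = .8\), \(P(c \mid b) = .8\), and \(P(c \mid \neg b) = 0\), the chain structure implies \(P(c \mid a) = .64\). If the observed frequency of c given a departs from this implied value, structural knowledge and empirical evidence conflict, and the causal consistency hypothesis predicts that one of the two, typically the evidence, will be adjusted.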