In standard models of Bayesian learning, agents resolve their uncertainty about an event's true probability because their consistent estimator concentrates almost surely at this true value as the number of observations becomes large. This paper takes the empirically observed violations of Savage's (1954) sure-thing principle seriously and asks whether Bayesian learners with ambiguity attitudes reduce their ambiguity when sample information becomes large. To address this question, I develop closed-form models of Bayesian learning in which beliefs are described as Choquet estimators with respect to neoadditive capacities (Chateauneuf et al., 2007). Under the optimistic, the pessimistic, and the full Bayesian update rules, a Bayesian learner's ambiguity increases rather than decreases, so that these agents express ambiguity attitudes regardless of whether they have access to large sample information. Although consistent Bayesian learning occurs under the Sarin and Wakker (1998b) revealed-likelihood and Knightian-uncertainty update rules, this result comes with the descriptive drawback that it does not apply to agents who still express ambiguity attitudes after one round of updating.
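To fix ideas, a sketch of the belief representation the abstract refers to: in the standard parametrization of Chateauneuf et al. (2007), a neoadditive capacity $\nu$ on a state space $S$ distorts an additive probability measure $\mu$ by a degree of ambiguity $\delta \in [0,1]$ and a degree of optimism $\lambda \in [0,1]$,

```latex
\nu(A) \;=\; \delta\,\lambda \;+\; (1-\delta)\,\mu(A)
\qquad \text{for } \emptyset \subsetneq A \subsetneq S,
\qquad \nu(\emptyset)=0,\;\; \nu(S)=1,
```

so that the Choquet expectation of a payoff $u$ becomes a convex combination of the best case, the worst case, and the standard expected value,

```latex
\int u \, \mathrm{d}\nu
\;=\; \delta\Bigl(\lambda \max_{s \in S} u(s) + (1-\lambda)\min_{s \in S} u(s)\Bigr)
\;+\; (1-\delta)\,\mathbb{E}_{\mu}[u].
```

On this reading, "ambiguity increases under updating" means $\delta$ does not vanish as observations accumulate; how $\delta$ and $\lambda$ evolve under each update rule is exactly what the paper's closed-form models characterize.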