Privacy-preserving collaborative filtering methods are effective ways of coping with the information overload problem while protecting confidential data. Their success depends on the quality of the collected data. However, malicious entities might create fake profiles and insert them into the user-item matrices of such filtering schemes; such shilling attacks can therefore significantly degrade data quality. Designing effective shilling attacks and analyzing the robustness of privacy-preserving collaborative filtering methods against them are receiving increasing attention. In this study, six shilling attack models are designed to attack binary disguised user-item matrices in privacy-preserving collaborative filtering methods. The attack models are applied to a naive Bayesian classifier-based collaborative filtering scheme with privacy to measure its robustness against fake profiles. Empirical results on real data show that designing effective shilling attacks on perturbed binary ratings is still possible, and that the naive Bayesian classifier-based privacy-preserving collaborative filtering algorithm is vulnerable to such attacks.
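The setting described above can be illustrated with a minimal sketch. The six attack models and the exact disguise procedure are not specified here, so the following assumes a simple randomized bit-flipping disguise for binary ratings and a generic "random" push attack that likes a target item plus a few filler items; the function names, the flip probability `theta`, and the profile sizes are all illustrative assumptions, not the paper's method.

```python
import random

random.seed(0)

def disguise(profile, theta=0.2):
    """Illustrative randomized-response disguise: flip each binary
    rating with probability theta before it reaches the server."""
    return [1 - r if random.random() < theta else r for r in profile]

def make_shill(n_items, target, n_filler=3):
    """A simple push-attack profile (hypothetical): like the target
    item, plus a few random filler items to mimic a genuine user."""
    profile = [0] * n_items
    profile[target] = 1
    fillers = [i for i in range(n_items) if i != target]
    for i in random.sample(fillers, n_filler):
        profile[i] = 1
    return profile

n_items, target = 10, 4
# genuine users submit disguised binary profiles
genuine = [disguise([random.randint(0, 1) for _ in range(n_items)])
           for _ in range(50)]
# the attacker disguises shill profiles the same way, so they blend in
shills = [disguise(make_shill(n_items, target)) for _ in range(25)]

before = sum(p[target] for p in genuine) / len(genuine)
after = sum(p[target] for p in genuine + shills) / (len(genuine) + len(shills))
print(f"like-rate for target item: {before:.2f} -> {after:.2f}")
```

Because the shill profiles are disguised exactly like genuine ones, the server cannot trivially filter them out, which is the core reason perturbed binary ratings remain attackable.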