Privacy-preserving deep learning (PPDL) has largely relied on "one-size-fits-all" mechanisms that protect all data uniformly. Such approaches often sacrifice utility and fail to capture the heterogeneous nature of privacy. This dissertation advances a data-centric perspective built on three assumptions: (1) not all parts of a data sample are equally sensitive, so protection should focus on regions of interest (ROIs); (2) privacy preferences vary across data owners, requiring personalized mechanisms; and (3) data contains hierarchical levels of sensitivity, necessitating multi-level protection. Guided by these principles, we introduce novel privacy-preserving mechanisms that move beyond rigid uniform protection toward adaptive, context-aware privacy frameworks for deep learning.