We present DurLAR, a high-fidelity 128-channel 3D LiDAR dataset with panoramic ambient (near infrared) and reflectivity imagery, as well as a sample benchmark using depth estimation for autonomous driving applications. Our driving platform is equipped with a high-resolution 128-channel LiDAR, a 2MPix stereo camera, a lux meter and a GNSS/INS system. Ambient and reflectivity images are made available along with the LiDAR point clouds to facilitate multi-modal use of concurrent ambient and reflectivity scene information. Leveraging DurLAR, with a resolution exceeding that of prior benchmarks, we consider the task of monocular depth estimation and use this increased availability of higher resolution, yet sparse, ground truth scene depth information to propose a novel joint supervised/self-supervised loss formulation. We compare performance over our new DurLAR dataset, the established KITTI benchmark and the Cityscapes dataset. Our evaluation shows that the joint use of supervised and self-supervised loss terms, enabled by the superior ground truth resolution and availability within DurLAR, improves the quantitative and qualitative performance of leading contemporary monocular depth estimation approaches (RMSE = 3.639, Sq Rel = 0.936).