3D printing offers the opportunity to perform automated restoration of objects, reducing household waste, restoring objects of cultural heritage, and automating repair in medical and manufacturing domains. We present an approach that takes a 3D model of a broken object and retrieves proxy 3D models of corresponding complete objects from a library of 3D models, with the goal of using the complete proxy to repair the broken object. We input multi-view renders and point cloud representations of the query to neural networks that output learned visual and geometric feature encodings. Our approach returns complete proxies that are visually and geometrically similar to the broken query object by searching for the learned encodings in the library of complete models. We demonstrate retrieval of complete proxies for broken object models with synthetically generated breaks using models from the ShapeNet dataset, and for publicly available datasets of scanned everyday objects and cultural heritage objects. By combining visual and geometric features, our approach achieves consistently lower Chamfer distance than when either feature is used alone, and it outperforms the existing state-of-the-art method for retrieval of proxies for broken objects in terms of Chamfer distance. The 3D proxies returned by our approach enable reasoning about object geometry to identify portions requiring repair, incorporate user preferences, and generate 3D-printable restoration components. Our code for broken object model generation, feature extraction, and object retrieval is available at https://git.io/JuKaJ.
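The retrieval and evaluation steps above can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: the fused encoding is taken to be a concatenation of visual and geometric feature vectors, retrieval is a brute-force nearest-neighbor search in that encoding space, and the Chamfer distance is the symmetric point-set distance used for evaluation.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3)."""
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # shape (N, M)
    # Average nearest-neighbor distance in both directions.
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

def fuse(visual_feat: np.ndarray, geometric_feat: np.ndarray) -> np.ndarray:
    """Hypothetical fusion: concatenate visual and geometric encodings."""
    return np.concatenate([visual_feat, geometric_feat])

def retrieve(query_encoding: np.ndarray, library_encodings: np.ndarray) -> int:
    """Return the index of the library model nearest to the query encoding."""
    dists = np.linalg.norm(library_encodings - query_encoding, axis=1)
    return int(np.argmin(dists))

# Toy usage: a query encoding retrieves its nearest library entry.
library = np.array([[1.0, 0.0], [0.0, 1.0]])
idx = retrieve(np.array([0.9, 0.1]), library)
```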