Though several approaches exist to automatically generate repair parts for fractured objects, little prior work addresses the automatic assembly of the generated repair parts. Assembling repair parts to fractured objects is challenging due to the complex, high-frequency geometry at the fracture region, which limits the effectiveness of traditional controllers. We present a reinforcement learning approach that combines visual and tactile information to automatically assemble repair parts to fractured objects. Our approach overcomes the limitations of existing assembly approaches that require objects to have a specific structure, that require training on a large dataset to generalize to new objects, or that require the assembled state to be easily identifiable, as in peg-in-hole assembly. We propose two visual metrics that estimate the assembly state in 3 degrees of freedom. Tactile information allows our approach to assemble objects under occlusion, which occurs when the objects are nearly assembled. Our approach assembles objects with complex interfaces without placing requirements on object structure.
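As a rough illustration of how a policy might fuse the two modalities described above, the sketch below concatenates a 3-DoF visual state estimate with tactile readings into a single observation vector. All names and structures here are assumptions for illustration, not the paper's actual interface:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    """Combined visual-tactile observation for an assembly policy.

    Hypothetical structure: the visual metrics yield a 3-DoF estimate
    of the assembly state, and tactile sensors report contact values
    that remain informative when the fracture interface is occluded.
    """
    visual_pose: List[float]  # 3-DoF assembly-state estimate
    tactile: List[float]      # per-sensor contact readings


def make_observation(visual_pose: List[float],
                     tactile: List[float]) -> List[float]:
    # Flatten both modalities into one vector for the policy network.
    # Near full assembly the visual estimate may be occluded, so the
    # tactile entries carry the remaining state information.
    if len(visual_pose) != 3:
        raise ValueError("expected a 3-DoF pose estimate")
    return list(visual_pose) + list(tactile)
```

In practice such a vector would feed a reinforcement learning policy; the split into modalities also makes it easy to ablate the visual or tactile channel independently.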