In this work, we present a new approach to Extended Reality (XR), denoted as iCOPYWAVES, which seeks to offer naturally low-latency operation and cost-effectiveness, overcoming the critical scalability issues faced by existing solutions. Specifically, iCOPYWAVES is enabled by emerging Programmable Wireless Environments (PWEs), a recently proposed technology in wireless communications. Empowered by intelligent metasurfaces, PWEs transform wave propagation into a software-defined process. To this end, we leverage PWEs to: i) create, and then ii) selectively copy the scattered RF wavefront of an object from one location in space to another, where a machine learning module, accelerated by FPGAs, translates it into visual input for an XR headset using PWE-driven RF imaging principles (XR-RF). The result is an XR system whose operation is confined to the physical layer and, hence, has the prospect of minimal end-to-end latency. For large distances, RF-to-fiber/fiber-to-RF conversion is employed to provide intermediate connectivity. The paper provides a tutorial on the iCOPYWAVES system architecture and workflow. Finally, a proof-of-concept implementation via simulations is presented, demonstrating the reconstruction of challenging objects in iCOPYWAVES-produced computer graphics.