GPUs do what they're told. They're closer to an accelerator than a CPU, since they don't really handle branching logic. It's not hard to fully utilize a GPU; that's exactly what they're designed for. It's all a single SIMD instruction applied across huge batches of data, so you just need enough data to make it worth anything. Not maxing out a GPU (when the environment permits) is a sign of spaghetti code, because it means your underlying systems can't feed data to the GPU fast enough.
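To make the "one instruction, many data elements" point concrete, here's a loose CPU-side analogy using NumPy (not GPU code, just an illustration of the same principle): one vectorized operation runs over every element at once, and throughput depends on getting the data there, not on the arithmetic.

```python
import numpy as np

# GPUs apply one instruction across many data elements (SIMD/SIMT).
# A vectorized NumPy op is a rough CPU-side analogy: one "instruction",
# applied to every element in lockstep, limited mostly by how fast
# the data can be streamed in rather than by the math itself.
a = np.arange(8, dtype=np.float32)
b = np.full(8, 2.0, dtype=np.float32)

c = a * b  # a single operation over all elements at once

print(c.tolist())  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
```

If the array were tiny, the fixed overhead would dominate and the hardware would sit mostly idle, which is the same reason a GPU starved of data never hits full utilization.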
As for the CPU, if you're using the new renderer with DX12, DE addressed this in the patch notes when they released the update for it. It does not cache shaders.