I really would encourage you to read the linked article.
I have the impression that your idea of LUTs is a little off from what they really are.
A LUT is not a snapshot of absolute state that, once found, can be applied to any footage, as different as it might be.
It is just a table that says: if any pixel on the input side has the value ‘x’, then convert it to ‘y’ in the resulting output video. And it does this for every ‘x’ (color) it wants to change. Colors that should stay put simply have no ‘x to y’ entry in the table (or map to themselves). Hence LUT - LookUp Table.
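To make the idea concrete, here is a tiny sketch of that lookup mechanism (all names and values are made up for illustration; this is a per-channel 1D table with 8-bit values, while real color LUTs are usually 3D tables mapping full RGB triplets, but the principle is identical: output = table[input]):

```python
# Start with an identity table: every value maps to itself ("no change").
lut = list(range(256))

# A "creative" tweak: lift the shadows a bit (values below 64 get brightened).
for x in range(64):
    lut[x] = min(255, x + 16)

def apply_lut(pixels, table):
    """Replace every pixel value x with table[x] - that is all a LUT does."""
    return [table[p] for p in pixels]

print(apply_lut([10, 100, 200], lut))  # -> [26, 100, 200]: shadows shift, the rest stays put
```

Note that the table has no idea what footage it is applied to - it just blindly swaps values, which is exactly why the input has to match what the LUT was built for.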
That’s why, when you use a LUT as a ‘creative’ LUT, you have to bring all your footage to a common color scheme first. From there the ‘x to y’ conversion makes sense, because ideally the source footage already has matching colors.
On the other hand, to bring differently colored, logged, flattened footage to a common denominator, you’d need a LUT for each device you own, translating its proprietary color science to a commonly accepted scheme like Rec. 709 or similar...
In your example you seem to have already-straight video from your drone. Color grading and saving a LUT in Affinity just means you are creating a translation table from ‘straight drone footage’ to ‘better-looking drone footage’. That’s a creative LUT. If you first bring your GoPro footage to the same level the drone footage was at in the first place, then applying that ‘creative’ LUT to both clips makes sense. But this single ‘creative’ LUT definitely canNOT make up for the missing GoPro ‘flat/log footage’ to ‘straight footage’ conversion. That step needs its own ‘technical’ LUT...
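The chain described above can be sketched like this (the curves are toy stand-ins, not real camera color science - the point is only the order of the tables):

```python
# Hypothetical "technical" LUT: log/flat footage -> common scheme (toy gamma curve).
technical = [min(255, int((x / 255) ** 0.5 * 255)) for x in range(256)]

# Hypothetical shared "creative" LUT: the look you graded on the drone footage.
creative = [min(255, x + 10) for x in range(256)]

def chain(pixel, *tables):
    """Run a pixel value through one or more LUTs in order."""
    for t in tables:
        pixel = t[pixel]
    return pixel

# GoPro log footage needs BOTH steps; the drone clip only needs the creative one.
gopro_pixel = chain(64, technical, creative)   # technical first, then the look
drone_pixel = chain(128, creative)             # already straight, look only
```

Skipping the technical table for the GoPro clip would feed log-encoded values into a table built for straight footage, which is exactly the mismatch described above.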
Read the article; they do a much better job of explaining it accurately.