Shallow depth of field is the holy grail of smartphone production, and it's simply not achievable in many circumstances.
First off, Filmic Pro has no means of shooting with shallow depth of field; the limitation comes from the lens and sensor size of the phone camera, not from the software. The newer iPhones that simulate shallow depth of field can only do so on stills, because it's a trick: the two lenses are used to guess what's foreground and what's background, and then the phone's computer brain blurs the parts it believes to be background. It works, and it works pretty well, but it's not perfect, and the processors just aren't fast enough to do this kind of thing 24 times per second (or 30 or 60, depending on your personal taste in frame rate). I'm cautiously optimistic that either Apple or a third-party developer will come up with a tool that lets newer phones simulate shallow depth of field in video at some point, but who knows when that'll happen?
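To make the trick concrete, here's a minimal sketch of the general idea (my assumption of the approach, not Apple's actual pipeline): given a per-pixel depth estimate, blur only the pixels judged to be background. The function name and threshold are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(frame, depth, threshold=0.5, sigma=5.0):
    """Blur pixels whose estimated depth exceeds `threshold`.

    frame: 2-D grayscale image; depth: same-shaped depth estimate,
    larger values meaning farther from the camera (an assumption).
    """
    blurred = gaussian_filter(frame, sigma=sigma)
    mask = depth > threshold            # True = background
    return np.where(mask, blurred, frame)

# Tiny synthetic example: left half is a "near" subject,
# right half is a bright "far" wall.
frame = np.zeros((8, 8))
frame[:, 4:] = 1.0
depth = np.zeros((8, 8))
depth[:, 4:] = 1.0                      # background rows marked "far"
out = portrait_blur(frame, depth)
```

The foreground pixels pass through untouched while the background gets softened, which is exactly why depth-estimation errors show up as sharp halos or wrongly blurred edges, and why doing this per frame in real time is expensive.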
The shot you provided would be very difficult to simulate with tools available in iOS, but if you have a shot with a single person who doesn't move around in frame much, you can use an app called Tilt Shift Video to apply a soft, non-articulated mask that blurs part of the frame (in your case, the background). It looks pretty decent in the right circumstances, but it's not perfect, and if the person in frame moves, or if the camera pans, the mask can't be moved, so the subject will slide into the blurry background. (I have written to that developer to ask whether they might add a feature for moving the mask, but haven't heard back from them.)
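For the curious, a fixed soft-mask blur of this kind can be sketched like so (my guess at the general technique, not the app's actual code — the function name and parameters are hypothetical): blend a blurred copy back in through a feathered mask that never moves, which is why a subject who drifts out of the sharp zone "slides into" the blur.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def static_mask_blur(frame, sharp_top, sharp_bottom, feather=2.0, sigma=4.0):
    """Keep rows sharp_top..sharp_bottom sharp; feather into blur elsewhere."""
    h = frame.shape[0]
    rows = np.arange(h)
    # Distance (in rows) from the sharp band; zero inside the band.
    dist = np.maximum(sharp_top - rows, rows - sharp_bottom).clip(min=0)
    # Soft falloff: weight 1.0 inside the band, fading toward 0.0 outside.
    weight = np.exp(-(dist / feather) ** 2)[:, None]
    blurred = gaussian_filter(frame, sigma=sigma)
    return weight * frame + (1 - weight) * blurred

frame = np.random.default_rng(0).random((12, 12))
out = static_mask_blur(frame, sharp_top=4, sharp_bottom=7)
```

Note that the mask depends only on row position, never on the image content, so nothing in this approach tracks the subject.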
There's another app called Fabby that uses face detection and a host of other tricks to blur the background (or add flowers or other weird elements), but again, it's just designed for stills. (You could conceivably use screen recording in one of these apps to capture video with the simulated shallow depth of field, but the interface elements would be recorded as well, so you'd have to crop the video in LumaFusion to get rid of them. It's not an elegant solution by any means.)
I have yet to find a solution that works for video and actively finds your subject and blurs the background, either while shooting or in post. It could probably be built by a developer, but I'm not aware of anything now.
iPhones are remarkable tools for production (I just shot a PSA using my iPhone 8+ and it looks great), but they're not going to be able to do the things that large-sensor cameras can do because of the physical properties of the optics in the device. And post-production simulation of shallow depth of field is difficult even in the most advanced of tools, like After Effects, on powerful desktop computers.
Sorry there's not a better answer.