Try to get podman working correctly so we can run ollama rootless. You'll need to get networking working, at least on localhost.
I usually use the ROCm/AMDGPU versions of the containers.
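Roughly what I'd expect that to look like (just a sketch, assuming the ollama/ollama:rocm image and ollama's default port 11434; the device paths and groups may need adjusting for your GPU):

  # rootless run: pass the AMD GPU device nodes through and keep the
  # host user's supplementary groups (video/render); keep-groups needs crun
  podman run -d --name ollama \
    --device /dev/kfd --device /dev/dri \
    --group-add keep-groups \
    -p 127.0.0.1:11434:11434 \
    -v ollama:/root/.ollama \
    ollama/ollama:rocm

  # confirm localhost networking works from the host
  curl http://127.0.0.1:11434/api/version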
If you can think of a better runtime than ollama that we can set up here (tinygrad, vLLM, etc.), let me know.
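For comparison, vLLM under the same rootless setup might look something like this; the rocm/vllm image name and server flags are assumptions (check the current vLLM ROCm docs), and the model is just a placeholder:

  # rough sketch only -- vLLM's OpenAI-compatible server listens on 8000
  podman run -d --name vllm \
    --device /dev/kfd --device /dev/dri \
    --group-add keep-groups \
    -p 127.0.0.1:8000:8000 \
    rocm/vllm \
    --model Qwen/Qwen2.5-Coder-3B-Instruct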
It will be for running 4-5B parameter coding or voice models, plus embeddings.
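The embeddings part can be served from the same ollama container; nomic-embed-text here is just a placeholder model:

  # pull a small embedding model and hit ollama's embeddings endpoint
  podman exec ollama ollama pull nomic-embed-text
  curl http://127.0.0.1:11434/api/embeddings \
    -d '{"model": "nomic-embed-text", "prompt": "hello world"}'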