After trying to follow the instructions here with both the OpenAI and OpenRouter providers, I finally got it to work by disabling streaming. With OpenRouter, I was able to run a test with the Run test button in the LLM manual config screen for gpt-4o. But running the same test for perplexity/sonar-deep-research resulted in a 502 error and the following output in the console. It took quite a long time (~30 seconds) for this error to appear, whereas testing other models returns immediately.
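For reference, here is a minimal sketch of the kind of request body where disabling streaming made the difference for me, assuming an OpenAI-compatible `/chat/completions` endpoint (which is what OpenRouter exposes). The `build_chat_request` helper name is my own, not part of any library:

```python
import json

def build_chat_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-compatible /chat/completions payload.

    Setting "stream": False is what got the test to pass for me;
    with streaming enabled the request failed.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

# Slow models like perplexity/sonar-deep-research can take ~30s or more,
# so the HTTP client's timeout also needs to be generous when sending this.
payload = build_chat_request("perplexity/sonar-deep-research", "ping")
print(json.dumps(payload, indent=2))
```

The payload itself is standard; the point is that `"stream"` must be `false`, and the client timeout must be long enough for deep-research models that legitimately take tens of seconds to respond.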