This is one of those tweaks I’ve been using for a while, but I never realized how useful it actually is until a colleague recently asked me about it.
He was concerned about having to update variables and store them temporarily while also passing the current value into multiple flows, since the value changes dynamically. You know, creating a new column (in our case) just to store temporary values is not elegant. On top of that, you still have to keep track of those values, know where they are being used and hope you’re not messing up the current value 😩
So, here it is, a quick walkthrough on how to build a Power Automate flow that takes care of environment variable updates for you.
Scenario: Dynamic file path for File System connector
Here is the exact scenario that got me writing this post. In one of my solutions, I’m creating files locally. Nothing fancy: I’m using the File System connector and a regular “Create file” action. The annoying part is that the path where the file gets saved is dynamic and changes depending on the conditional logic I’m using in the flow. Later, I also need to reuse the same value across multiple flows.
That’s when I decided to store the file path in an environment variable instead, and have the flow update it automatically depending on the audience. No more errors caused by hardcoded values.
Behind the scenes
When we create environment variables in Power Platform, they’re stored in Dataverse across two tables:
- Environment Variable Definitions: this is where the variable’s schema name and metadata are stored.
- Environment Variable Values: this is where the actual value is stored.
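To make the relationship concrete, here is a minimal Python sketch of the query the “List rows” step effectively issues against the values table, filtered through the lookup to the definition. The entity set name `environmentvariablevalues` and the API version are assumptions based on standard Dataverse Web API conventions; verify them in your environment.

```python
# Sketch: build the Dataverse Web API URL behind a "List rows" step.
# Entity set name and API version are assumptions to verify.
from urllib.parse import quote

def build_values_query(org_url: str, schema_name: str) -> str:
    """Return a GET URL that filters Environment Variable Values
    by the parent definition's schema name."""
    filter_expr = f"EnvironmentVariableDefinitionId/schemaname eq '{schema_name}'"
    return (f"{org_url}/api/data/v9.2/environmentvariablevalues"
            f"?$filter={quote(filter_expr)}&$select=value")

print(build_values_query("https://contoso.crm.dynamics.com", "new_MyFilePath"))
```

The lookup traversal (`EnvironmentVariableDefinitionId/schemaname`) is exactly why the filter query later in this post works: the value row doesn’t store the schema name itself, only a reference to its definition.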
Here’s how we make it work
Our flow will:
- Look for the definition based on the schema name
- Check if there is already a value saved
- Update it or create a new one
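The last two steps boil down to a simple branch: if “List rows” returns a record, update it; otherwise create one. A hedged sketch of that decision in Python (the field names mirror what the Dataverse connector typically returns, but treat `environmentvariablevalueid` and `value` as assumptions):

```python
# Sketch of the update-or-create decision the flow makes after "List rows".
# Field names ("environmentvariablevalueid", "value") are assumptions.

def plan_action(rows: list, new_value: str) -> dict:
    """Decide whether the flow should update an existing value row
    or create a new one."""
    if rows:  # a value record already exists -> update it
        return {"action": "update",
                "row_id": rows[0]["environmentvariablevalueid"],
                "value": new_value}
    # no value record yet -> create one linked to the definition
    return {"action": "create", "value": new_value}

print(plan_action([{"environmentvariablevalueid": "guid-123"}], "C:\\Exports\\HR"))
print(plan_action([], "C:\\Exports\\HR"))
```

In the flow itself this branch is just a condition on the length of the “List rows” output, but spelling it out makes the logic easier to reason about.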
Start by creating the environment variable:
I’m running this flow manually, so in my case I’m using an instant trigger.
Add a “List rows” action and select the “Environment Variable Values” table.
Filter rows: Filter the records to locate the exact environment variable using this query:
EnvironmentVariableDefinitionId/schemaname eq 'YOUR_ENV.VARIABLE'
Note: use the schema name, not the display name.
Tip: You can verify the returned row by running the flow once and checking the output of the “List rows” action.
In my case, I added a switch control to handle different audiences. The value is passed in the trigger.
Each case inside the Switch updates the environment variable with the correct value for that specific audience.
Below, I’m using the “Update a row” action to update the value of the environment variable without having to manually change it each time.
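Under the hood, “Update a row” corresponds to a PATCH against the specific value record. Here is a minimal sketch of the request it maps to; the URL pattern and the `value` column name follow standard Dataverse Web API conventions, but treat them as assumptions to verify:

```python
# Sketch: the (url, body) pair an "Update a row" step maps to.
# Entity set name and "value" column are assumptions to verify.

def build_update_request(org_url: str, row_id: str, new_value: str):
    """Return the URL and JSON body for a PATCH that overwrites
    the stored environment variable value."""
    url = f"{org_url}/api/data/v9.2/environmentvariablevalues({row_id})"
    body = {"value": new_value}  # the column holding the actual value
    return url, body

url, body = build_update_request("https://contoso.crm.dynamics.com",
                                 "guid-123", "\\\\server\\share\\HR")
print(url)
print(body)
```

The row id comes straight from the “List rows” output, and the body only needs the one column you’re changing.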
Voilà
The first compose shows the current value of the environment variable.
Then the flow runs the update action to change it.
Finally, the second compose confirms the new value was applied.
Conclusion
Honestly, this little trick has saved me so much time. Updating environment variables might not sound like a big deal, but when you are working with multiple flows, especially when you need to pass values like endpoints, target audiences or other dynamic data, it really helps. Instead of creating workaround columns or hardcoding values, environment variables make everything cleaner and way easier to maintain.
Save this for later; you might run into a moment where it comes in handy.