
Exploring DAX UDFs — Challenges, Learnings, and What Comes Next

  • Jihwan Kim
  • 2 hours ago
  • 4 min read

In this post, I’d like to share how I started learning and creating my own DAX UDF (User Defined Function) library, the pain points I’ve faced while trying to make the functions as general as possible, and how I plan to take the next steps forward.


You can explore the project here: jihwankimdata/dax_udf_library


How It Started

Like many Power BI developers, I’ve written the same logic dozens of times — percentage growth, variance analysis, ranking, or window calculations — slightly customized for each model. Every project had its own flavor of the same idea.

When Microsoft announced DAX User Defined Functions (UDFs), it immediately felt like the missing piece in the DAX ecosystem.


The idea was simple: write once, reuse everywhere. So I started building my own DAX UDF Library — a place where reusable, documented, and versioned functions could live.

The first version of the library is small — only two UDFs so far — but each one is built to be reused, refined, and scaled over time.




The Challenge: Making a “General” DAX UDF

My original goal was to write completely model-independent functions — something anyone could copy and use without modification. But once I started implementing them, I hit a fundamental limitation: DAX doesn’t know about my model structure.


For instance, I wrote a DAX UDF called _fn_percentagechange.


DEFINE
	FUNCTION _fn_percentagechange = (CurrentValue, PreviousValue) =>
		// Relative change; DIVIDE returns BLANK when PreviousValue is zero
		DIVIDE(
			CurrentValue - PreviousValue,
			PreviousValue
		)

_fn_percentagechange(CurrentValue, PreviousValue) looks simple, but it assumes:

  • I have measures for both the current and the previous value.

  • My model has a proper date table and relationship.

  • My calculation context is aligned with the fact table.


Without those assumptions, the function breaks or produces unexpected results.
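
To make those assumptions concrete, here is a query sketch showing how the function is meant to be called. The 'Date'[Year] column and the [Sales This Year] / [Sales Last Year] measures are hypothetical names, not part of the library:

DEFINE
	FUNCTION _fn_percentagechange = (CurrentValue, PreviousValue) =>
		DIVIDE( CurrentValue - PreviousValue, PreviousValue )
EVALUATE
	SUMMARIZECOLUMNS(
		'Date'[Year],
		"Pct Change", _fn_percentagechange( [Sales This Year], [Sales Last Year] )
	)

If the date table or the relationship behind [Sales Last Year] is missing, the query still runs; it just returns blanks or misleading numbers, which is exactly the silent failure mode described above.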

That’s when I realized — truly generic DAX UDFs are very difficult to achieve because every semantic model has different naming conventions, hierarchies, and data structures.


So instead of forcing over-generalization, I decided to be explicit about the assumptions.



Why I Introduced the “PBI Data Model Requirements” Section

In my GitHub repository jihwankimdata/dax_udf_library, every UDF’s .md file includes a dedicated section called PBI Data Model Requirements.

This section clearly describes what the function expects from the model — for example:

  • Required columns or table names (e.g., a Date table, a Sales fact table).

  • Relationship requirements (e.g., active relationship between Date and Sales).

  • Measure dependencies.

Here’s what a simple example looks like in the documentation: a short checklist at the top of each function’s page stating the required tables, the expected relationships, and any measures the function depends on.

It’s not as universal as I originally imagined, but it’s honest and practical.

It helps future-me, or other developers, understand the context needed for the function to work — and, importantly, it keeps the logic reproducible across models.



The Pain Point — Generality vs. Usability

Trying to write general DAX UDFs forced me to think deeply about reusability boundaries.

If the function is too general, it becomes abstract and hard to use — I end up needing to redefine measures every time I implement it. If it’s too model-specific, it loses portability.


Finding the right balance has been the hardest part so far.


Another pain point is that DAX doesn’t currently support dynamic metadata discovery (for example, a function can’t “check” if a model has a specific table). That makes defensive programming nearly impossible — I have to trust the model matches the assumptions.
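
One narrow workaround is to guard the inputs inside the function itself. A hypothetical variant (a sketch, not part of the library) that returns BLANK instead of a misleading number when either argument is missing:

DEFINE
	// A UDF can validate its inputs, but it cannot verify that the model
	// contains a given table, column, or relationship.
	FUNCTION _fn_percentagechange_safe = (CurrentValue, PreviousValue) =>
		IF(
			ISBLANK( CurrentValue ) || ISBLANK( PreviousValue ),
			BLANK(),
			DIVIDE( CurrentValue - PreviousValue, PreviousValue )
		)

This catches missing data, but it cannot catch a missing date table; that still has to be documented rather than detected.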

It’s not a problem of syntax, but of context.



How I Plan to Overcome This

I’ve accepted that UDFs can’t be perfectly generic, but there are several ways to make them more flexible and maintainable.


1. Explicit Documentation Is the First Step

The PBI Data Model Requirements section will stay — it’s now a standard in my repository. Each function will have clear prerequisites. Transparency matters more than pretending everything is plug-and-play.

2. Parameterize Assumptions

Where possible, I plan to add function parameters for column names or table references, so the same logic can adapt to different models.
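
A sketch of the idea, assuming a date column can be passed into the function (the names _fn_yoychange, [SalesAmount], and 'Date'[Date] are hypothetical, and passing a measure expression this way may require the explicit parameter-mode annotations available in the UDF preview):

DEFINE
	FUNCTION _fn_yoychange = (ValueExpr, DateColumn) =>
		// The caller supplies both the value expression and the date column,
		// so the logic is not tied to one model's naming conventions.
		VAR PreviousValue =
			CALCULATE( ValueExpr, DATEADD( DateColumn, -1, YEAR ) )
		RETURN
			DIVIDE( ValueExpr - PreviousValue, PreviousValue )
EVALUATE
	SUMMARIZECOLUMNS(
		'Date'[Year],
		"YoY %", _fn_yoychange( [SalesAmount], 'Date'[Date] )
	)

The same logic then works against any model, as long as the caller supplies its own date column.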

3. Build Reference Models for Testing

I’ll create a small reference semantic model that serves as a sandbox for validating new UDFs. Each function will be tested in this controlled model to verify it works as documented before I add it to the library.
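
For a function like _fn_percentagechange, such a test can be as simple as a query with known inputs and a known expected output, for example (110 - 100) / 100 = 0.10:

DEFINE
	FUNCTION _fn_percentagechange = (CurrentValue, PreviousValue) =>
		DIVIDE( CurrentValue - PreviousValue, PreviousValue )
EVALUATE
	ROW( "Expected 0.10", _fn_percentagechange( 110, 100 ) )

Running a handful of these queries against the reference model before each release keeps the documented behavior and the actual behavior in sync.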

4. Improve Documentation Automation

Eventually, I plan to automate the generation of each .md file — extracting DAX function metadata and creating standardized “Model Requirements,” “Parameters,” and “Example Usage” sections automatically.

5. Community Feedback

Once the library grows beyond a few functions, I’ll open GitHub Discussions to collect ideas and feedback from other DAX developers. Shared pain points lead to better patterns.



What I’ve Learned

Building this DAX UDF library has been as much about learning DAX design as about writing functions. I learned that:

  • Reusability isn’t free — it takes careful scoping and clear documentation.

  • Generality has limits — but well-documented assumptions can still make a function valuable.

  • Transparency beats abstraction — clear “Model Requirements” help users use UDFs correctly.

By documenting assumptions, I’m not restricting the function — I’m trying to make it reliable.



What’s Next

Right now, the library includes only two functions, but this is just the start. I plan to add more functions step by step — focusing on patterns that have proven value across multiple real-world projects: ranking, conditional filtering, and period-over-period analysis.

Each new function will include detailed documentation and a PBI Data Model Requirements section so it’s easy to understand and adopt.

Over time, I want this repository to become a go-to DAX UDF library: reusable, transparent, and version-controlled functions that teams can trust in enterprise models.



Closing Thoughts

Creating the DAX UDF library has been a great learning experience — not just about DAX, but about structure, documentation, and governance. It taught me that making something reusable is less about writing clever code and more about communicating its context clearly.

The PBI Data Model Requirements section came out of frustration — but it’s turning into the foundation for consistency.



I hope this helps you have fun learning, documenting, and building your own DAX UDFs in Power BI and Microsoft Fabric.
