Hi All,
I'm having a debate with a colleague, and I need some expert help....
I have a single table of data that I am loading into PBI from a CSV. It is roughly 17k records and ~20 columns.
Each row is a unique product, which may or may not have been reserved for hire. So some products will have a start and end date for the reservation, while products that aren't hired will have blank start and end dates.
Along with the dates, each product will have a set of attributes - location, type, size, etc.
To build all the visualisations I need, we are debating the most efficient and maintainable method for constructing the queries on the data.
Is it better to take the single flat table in PBI and create a load of mildly complex measures and calculated columns (many of which would include largely replicated logic)? Or is it better to take the initial flat table, create a number of calculated or filtered tables that represent subsets of that data, and then create much simpler measures on those tables?
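
To make that concrete, here's a rough DAX sketch of both options. The table and column names (Products, Start Date) are just placeholders for illustration, not my real schema:

// Option 1: flat table, with the filter logic repeated inside each measure
Hired Count =
CALCULATE (
    COUNTROWS ( Products ),
    NOT ISBLANK ( Products[Start Date] )
)

// Option 2: a calculated table holding just the hired subset...
HiredProducts =
FILTER ( Products, NOT ISBLANK ( Products[Start Date] ) )

// ...and then a much simpler measure sitting on top of it
Hired Count (v2) = COUNTROWS ( HiredProducts )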
To maintain interactions between the visualisations, the subset tables would (I think) need relationships to each other (?).
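
If it matters, the way I was imagining the relationships is a distinct key table that each subset relates to, so slicers cross-filter everything (again assuming a Product ID column, which is a placeholder name):

// Hypothetical shared dimension: one row per product key
ProductKeys = DISTINCT ( Products[Product ID] )

...and then relating ProductKeys[Product ID] to each subset table in Model view.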
I may not have explained this very well....