Location Analytics: Sometimes It’s More Than Pins On A Map

Sure, it’s always useful to see where your potential clients are on a map, but sometimes you must combine that picture with something else, such as driving distances. The following example is from the auto industry but can easily be adapted to other industries, like healthcare or manufacturing.

The picture below shows all the dealers and garages licensed in Connecticut.


Now, say you are in the parts supply business, and you want to open a new location on Route 4, west of Hartford. You can use location analytics to see, and to calculate, how many potential clients are within a 10-, 15-, and 20-minute drive of several potential sites.


The picture above shows a proposed location in Burlington along with 10- and 15-minute drive rings. The 10-minute ring includes only 5 potential clients, but expanding it to 15 minutes increases the number of potential clients to 50.

What happens if we expand the ring to 20 minutes?


As shown above, a 20-minute drive from the proposed location encompasses 200 potential clients: 150 more than the 15-minute ring, for just 5 more minutes of driving.
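The ring comparison above boils down to a simple count once drive times are known. Here is a minimal sketch in Python, assuming drive times in minutes from the proposed site to each licensed dealer or garage have already been computed by a routing or isochrone service (the location names and times below are made up for illustration):

```python
# Hypothetical drive times (minutes) from the proposed site to each
# potential client, as produced by a routing service.
drive_times = {
    "Dealer A": 7, "Garage B": 9, "Dealer C": 12, "Garage D": 14,
    "Dealer E": 16, "Garage F": 18, "Dealer G": 19, "Garage H": 25,
}

def clients_within(minutes, times):
    """Count locations reachable within the given drive time."""
    return sum(1 for t in times.values() if t <= minutes)

for ring in (10, 15, 20):
    print(f"Within {ring} min: {clients_within(ring, drive_times)} potential clients")
```

Running the same counts against each candidate site makes the trade-off between ring size and client count easy to compare across locations.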

This is only one example of the power of location analytics. How wide to cast the net is only one part of the decision-making equation, but it is a significant one.

As mentioned above, this can be adapted to other industries, including healthcare. If, for example, you are looking to expand your orthopedic service line, you can map the location of all the orthopedic clinics, rehabilitation facilities, and sports medicine centers, and see how many of them are within a certain driving radius of your potential sites.

If you are interested in location analytics services, please email me at ELYanalytics@Outlook.com, or call 860-580-5177.

Should Staff be Asking Clients for 5 Stars on Customer Surveys?


Most of us have been there. The scenario may be different, but it goes something like this: you get your oil changed, and after you pay, the staff member tells you that you will be receiving a customer survey and that they would really appreciate it if you gave them all 5s.

For whatever reason, this request does not sit well with me. On the one hand, I feel that a 5 is for above and beyond, and a 4 may be good enough for most uneventful transactions. On the other hand, I don’t know if their corporate office penalizes them for anything below a 5.

What is the point of the survey? One would assume that surveys are designed to give honest feedback so that processes can be improved, but that is not always the case. I have heard of a survey where customers have only 4 choices: a) Definitely Not Recommend, b) Not Recommend, c) Recommend, and d) Definitely Recommend. That survey became binary once the business combined the last two and marketed the results as “95% of our customers recommend us to their friends and family”. This is an example of a survey designed to give good news.
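The "95% recommend" headline is just a top-two-box calculation. A minimal sketch, with hypothetical response counts for the four choices described above, shows how a four-point survey collapses into a binary marketing number:

```python
# Hypothetical response counts for the four survey choices.
responses = {
    "Definitely Not Recommend": 2,
    "Not Recommend": 3,
    "Recommend": 55,
    "Definitely Recommend": 40,
}

total = sum(responses.values())

# Combining the last two choices yields the "top-two-box" score
# that gets marketed as the recommendation rate.
recommend = responses["Recommend"] + responses["Definitely Recommend"]
print(f"{100 * recommend / total:.0f}% of our customers recommend us")
```

Note that the same 95% headline would hold whether most customers chose "Recommend" or "Definitely Recommend", which is exactly the distinction a survey built for honest feedback would want to preserve.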

Survey design and purpose are important. Most of us would agree that surveys should be designed to receive honest feedback for the purposes of improving operations. Ideally, if the business acts on the survey results, then future scores would naturally improve and there would be more 5s. That approach requires patience and consistency. Patience because you need time to increase your sample size and be able to trend the data, and consistency because the questions should not change during that period. Frankly, I am not even sure that a 1-5 scale can accomplish that.

My recommendation is to abandon the 1-5 scale and go with a 1-10 scale. When you have a larger spread, it is easier to detect small movements of the needle (provided your sample size is large enough). I also do not recommend tying too much of the staff’s compensation to the number of 10s. Sometimes customers are unhappy for reasons beyond the control of the staff.

Finally, if you are curious, I do give all 5s when staff request it, provided that my experience is seamless and uneventful. I prefer to err on the side of not having them penalized for anything less than a 5. If the staff does not ask me for all 5s, and I do fill out a survey, I often give a 4 when I am satisfied, and a 5 only when I am wowed.

I am curious to know what your thoughts are. Do you think there is a problem with asking for the highest scores? Should 5s be given if the service simply met expectations?

Staffing to Demand: One Common Approach

Experienced supervisors, generally speaking, do a good job scheduling their staff to meet demand. They are good at predicting when they need more or fewer staff. Many, unfortunately, may be tempted to overstaff to meet all demand at all times.

In queueing theory, having no wait time for customers means having overcapacity (a.k.a. waste). In most non-life-threatening situations, it is OK to have wait times, as long as they’re not excessive. So how do you achieve that balance?
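That wait-versus-waste tradeoff can be illustrated with the simplest textbook model, the M/M/1 queue, where the expected wait in queue is Wq = rho / (mu * (1 - rho)), with rho = lambda/mu being server utilization. This is a stylized sketch, not an ED staffing model; the rates below are made up:

```python
def expected_wait(arrival_rate, service_rate):
    """Expected time in queue for an M/M/1 queue, in the same
    time units as the rates (here, hours)."""
    rho = arrival_rate / service_rate  # utilization
    if rho >= 1:
        raise ValueError("Queue is unstable: utilization >= 100%")
    return rho / (service_rate * (1 - rho))

service_rate = 10.0  # customers a single server can handle per hour
for arrival_rate in (5.0, 8.0, 9.0, 9.9):
    w = expected_wait(arrival_rate, service_rate)
    print(f"Utilization {arrival_rate / service_rate:.0%}: "
          f"avg wait {60 * w:.1f} min")
```

The pattern is the point: waits grow explosively as utilization approaches 100%, and driving waits to zero requires holding utilization (and thus staff productivity) well below capacity. Modest waits are the price of reasonable utilization.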

If you’re looking at staffing levels in a department (say, the emergency department, or ED), then you need to look at historical customer demand. In this case it is represented by the ED patient arrival pattern (i.e., the average number of patients arriving each hour) for each day of the week. This allows you to take into account the differences between days, including weekends and weekdays.

In this example, the patient arrival pattern is typical of that found in a mid-sized emergency department. As you can see, the staffing pattern does not match the demand pattern, so the staff schedules should be adjusted to match the demand (similar shape).

This graph would need to be recreated for each day of the week (or you can combine Mondays through Thursdays, which often have similar patterns, and create separate graphs for Fridays, Saturdays, and Sundays).

This example is basic, but it can be refined in several ways, including by reducing the arrival calculation from hourly to 15-minute increments, and calculating the minimum and maximum number of patients arriving in each increment. The latter allows you to create a band surrounding the line which represents the range of arrivals. You can also refine it by breaking the annual data into quarterly chunks, to account for seasonality.

You can also calculate how many staff you need if you know how many patients one staff member can serve each hour. If doing that, you may want to consider using in your calculations the average number of patients arriving each hour plus 1 standard deviation. This gives you a small buffer to accommodate the times when more patients than average arrive. Basically, erring slightly on the side of overstaffing.
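The staffing calculation above is straightforward to sketch. This minimal example, with made-up arrival counts for a single hour of the day, uses the mean plus one standard deviation as the planning number and divides by the hypothetical per-staff throughput:

```python
import math
import statistics

# Hypothetical historical arrival counts for the 3 PM hour,
# one value per observed day.
arrivals_3pm = [6, 8, 7, 9, 5, 8, 7, 10, 6, 8]

# Assumed throughput: patients one staff member can serve per hour.
patients_per_staff_per_hour = 2.5

# Mean + 1 standard deviation gives a small buffer for busier-than-
# average hours (erring slightly on the side of overstaffing).
planning_number = statistics.mean(arrivals_3pm) + statistics.stdev(arrivals_3pm)
staff_needed = math.ceil(planning_number / patients_per_staff_per_hour)
print(f"Plan for {planning_number:.1f} patients -> {staff_needed} staff")
```

Repeating this for every hour (or 15-minute increment) and every day-of-week group produces the full staffing grid; the ceiling function reflects that you cannot schedule a fraction of a person.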

One thing to keep in mind is that there is always an element of guessing when doing staff schedules. Spreadsheets and historical data analysis are only tools that need to be supplemented with input from experienced supervisory staff. Never take the human input out of your analysis.

10 Useful Tips When Asking IT Staff for Data

Your IT staff possess the data that are the key to your operational success. I have always believed that canned reports can only take you so far, and that you need raw data to have the flexibility to analyze your operations in ways that are meaningful to you.

You don’t always need fancy and expensive software to analyze your data. Pivot tables in Excel and simple queries and reports in Access often suffice.

So how do you obtain raw data in a format that you can use in Excel or Access? Here’s what to ask your IT staff for:

  1. Ask for the data export to be in Excel or CSV (comma separated value) format.
  2. Data output should be “pivot table friendly”. Most IT folks know exactly what you mean by that. Don’t just ask them for an Excel export.
  3. One row per record; each row should be unique, and preferably have a unique identifier (e.g., a visit number or accession number).
  4. Data should be stacked: every column is a field, with the same type of data repeating. For example, instead of three columns with the headings “First shift”, “Second shift”, and “Third shift”, ask for one column with the heading “Shift”, whose values can be First, Second, or Third.
  5. Dates and times should be combined in one cell, whenever possible. For example, instead of having the date in cell F2 and the time in G2, it is easier to have both date and time in cell F2. You can always separate them afterwards, but it is easier to do math calculations when both are combined.
  6. Make column headings descriptive. Instead of “TurnAroundTime”, it should state “OrderToComplete” or “OrderToResult” to remove ambiguity.
  7. Make sure you fully understand what every field represents. For example, what does “Scheduled Time” mean? Is it the time the exam was scheduled ON, or the time the exam was scheduled FOR? Also, Patient Status is a changing field. Do you want the patient status at time of order or at time of discharge? Neither is wrong; you just need to specify what you want.
  8. Beware of averages; they are heavily influenced by outliers. In fact, don’t ask for averages; calculate them yourself.
  9. Make sure that fields that have numeric values are in a numeric, not text, format. For example, if you are looking at RVUs, make sure they are numeric so that you can sum and average them, etc.
  10. Make sure the zip code field is in text format. This way you don’t lose any leading zeros.
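A few of the tips above are easy to see in a small Python sketch (the field names and values are hypothetical):

```python
from datetime import datetime

# Tip 5: combined date+time in one field makes duration math trivial.
ordered = datetime(2024, 3, 1, 14, 30)
completed = datetime(2024, 3, 1, 16, 15)
turnaround_minutes = (completed - ordered).total_seconds() / 60
print(turnaround_minutes)  # 105.0

# Tip 4: "stacked" layout -- one Shift column instead of three shift
# columns, so each row is one record and pivots work naturally.
rows = [
    {"VisitNumber": "V001", "Shift": "First",  "Exams": 12},
    {"VisitNumber": "V002", "Shift": "Second", "Exams": 9},
    {"VisitNumber": "V003", "Shift": "First",  "Exams": 7},
]

# Tip 10: zip codes stay text, so leading zeros survive.
zip_code = "06105"  # int("06105") would become 6105 and lose the zero
print(zip_code)
```

The same principles apply whether the destination is Excel, Access, or any other analysis tool: one record per row, one field per column, dates and times combined, and identifiers kept as text.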

By following the tips above, you will save yourself a lot of time and frustration, and your analysis effort will be more productive.

Data Analysis—it’s more than Pivot Tables

Excel pivot tables are incredibly powerful. They are simple and easy to learn, as long as you are working with clean data. Every tutorial, for understandable reasons, uses simple tables. Sometimes you have to create the table, and other times it is provided to you, but in all cases, there is nothing that you need to do to prepare (or “prep”) the data for analysis.

In the real world, how often are you able to extract data and start analyzing them immediately? Anyone who has ever worked with large data sets will tell you that you will spend the majority of your time prepping the data before you get to use pivot tables (or Minitab or any other program). Once the data are clean, then it’s a fairly straightforward process.

Large data sets often have missing or corrupt data points and will often require flagging of records based on different criteria. Before using these sets in your analysis, you need to address those issues.
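A minimal sketch of that kind of prep work in plain Python follows; the field names and flagging rules are hypothetical examples, not a general recipe:

```python
# Hypothetical extracted records, including a missing value and a
# corrupt (negative) duration.
records = [
    {"visit": "V001", "age": 54,   "duration_min": 32},
    {"visit": "V002", "age": None, "duration_min": 28},   # missing age
    {"visit": "V003", "age": 41,   "duration_min": -5},   # corrupt duration
    {"visit": "V004", "age": 67,   "duration_min": 45},
]

def flag(rec):
    """Return a list of data-quality issues for one record."""
    issues = []
    if rec["age"] is None:
        issues.append("missing age")
    if rec["duration_min"] is not None and rec["duration_min"] <= 0:
        issues.append("non-positive duration")
    return issues

clean = [r for r in records if not flag(r)]
flagged = [(r["visit"], flag(r)) for r in records if flag(r)]
print(len(clean), flagged)
```

Keeping the flagged records (with their reasons) rather than silently deleting them lets you review how much data was excluded and why, before the pivot-table stage begins.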

For most end users, pivot tables are fine for simple counts, sums, and averages, but when it comes to mining massive data sets for improvement opportunities, you should consider working with someone with data prepping experience.

The Power of Visuals in Storytelling: Minard’s Chart of Napoleon’s Russian Campaign

It is said that a picture is worth a thousand words. Years ago, I was lucky enough to attend a seminar on data visualization. In that talk, we were introduced to the work of Charles Joseph Minard, a French civil engineer who was renowned for his data visualization maps—the most famous of which is Napoleon’s failed Russian Campaign.

The map shows how Napoleon’s army started with 422,000 soldiers (in beige), and by the time it reached Moscow, it was down to 100,000. It retreated (in black), and only 10,000 arrived back where they started.


If you study the chart a little, several details jump out at you. For example, 60,000 soldiers split off at Wilna, but only 33,000 made it to Polotzk. Also, on their return from Moscow, the troops suffered heavy losses at the Berezina river: 30,000 soldiers from Polotzk join 20,000 retreating soldiers, but only 28,000 make it across the river.

Although no software that I am aware of can create a masterpiece like this one, I believe that, whenever possible, a good graphic should always accompany tabular data. The story is simply easier to tell, and easier to remember.

Do you trust your data?

If you are trying to use data for decision making, the first thing you should do is establish their trustworthiness. Many managers feel that if the numbers come out of a computer, then they must be good. Well, in some cases that is true, especially if the numbers (say timestamps) are generated automatically by the system. In other cases, data accuracy relies on staff inputting data correctly.

One way to establish data trustworthiness is to look for unusual patterns. You can run a statistical analysis or simply plot the data. In this example, MRI exam durations (Begin to Complete) were analyzed and the technologists’ results were compared to each other.

The descriptive statistics and dotplot immediately showed that Tech5’s data were unusual and different from the others’. The durations were less scattered and tended to cluster around two values.
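A minimal sketch of that comparison, using per-technologist mean and standard deviation of exam durations in minutes: the numbers below are made up for illustration, with Tech5’s durations deliberately clustered around two values the way the real data were.

```python
import statistics

# Hypothetical MRI exam durations (Begin to Complete, minutes).
durations = {
    "Tech1": [22, 48, 65, 37, 70, 29, 54],
    "Tech2": [40, 55, 33, 47, 50, 62, 44],
    "Tech5": [30, 30, 45, 30, 45, 30, 45],  # suspiciously clustered
}

for tech, d in durations.items():
    print(f"{tech}: mean={statistics.mean(d):.1f} min, "
          f"stdev={statistics.stdev(d):.1f} min")
```

Tech5’s much smaller spread is the kind of pattern that stands out immediately in a dotplot or a table of descriptive statistics, and it is the cue to start asking the follow-up questions below.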

It is important, though, not to jump to conclusions. Data analysis is meant to be iterative, and results from an analysis often lead to more questions. In this example, it would behoove the analyst to ask more questions, such as: Is the technologist manually changing the timestamps to ensure the exams are as long as their manager expects? Or does the technologist “specialize” in only certain exams (perhaps due to equipment limitations or comfort level, in which case the numbers may be accurate)?

When analyzing large data sets, it is important to include the client early in the analysis before delving too deeply. They can help you decide if your early observations make sense. After all, it is their operation.

It is rare that you will find large data sets that are 100% accurate and clean. You will almost always need to remove bogus data points. As long as they are few and random, you can proceed with your in-depth analysis.