Outtakes – Effective Needs Analysis

I’m writing a series of Mobile Enterprise Magazine articles on effective strategies and processes for implementing mobile workforce apps. I always get more comments and insights from interviews than I can use, so I’m going to share some of those interview outtakes here.

My recent July article was on effective needs analysis. Among those I spoke with were:   

  • Warren Engard, Dir – Distribution Operations at Dunkin’ Donuts (picking system and warehouse management app using voice recognition and Voxware wearable computers)
  • Michael Kovash, Sr. IT Project Manager of Work Force Automation at Cox Communications (MDSI’s Advantex wireless workforce management software and Panasonic Toughbooks)
  • Mike Lento, VP of Operations at laundry facilities management company Mac-Gray (Vettro field service application) 

Here are their thoughts on several key issues.

Selecting the right technology to meet business needs

Engard: We put a team together with the heads of each department – the General Manager, Accounting, Traffic, HR and IT. Operations really drove our project. We documented everyone’s operational and data needs, then put together an RFP. At a lot of companies, IT seems to be at the forefront making decisions for other departments. Operations gets compromised and ends up taking a back seat, which creates challenges. But you need a partnership. Operations needs applications to function a certain way so we run our business the way it needs to run, and IT needs to support this.

Kovash: Geography is something we had to grapple with. Rather than restrict users to one device or carrier, we needed a compatibility testing process so that whatever solution was tested had to work with different carriers. People from corporate and regional offices form a team, and each has responsibilities. For example, the local site has to determine technology viability within its region. So the people in the pilot project did local testing to prove that the application works, then bounced their findings up to corporate.

Selecting the right pilot project participants

Lento: We wanted to be sure everyone would be able to use it. We worked with our tech training manager, and he advised us on which technicians were the least tech savvy. We picked the tech that we felt would be most likely to fail. This person didn’t know a computer from a hole in the wall, and was most resistant to technology, a pure technophobe. There were a couple of others with better skills, and we gave all of them test phones. After we were satisfied with the results, we gave the entire branch devices.

Getting the best results from the pilot

Lento: One of the things was that if people don’t use the application, you’re dead in the water. So we did a lot of explaining. Once we got users to buy into automation, we had to be sure we could measure results. We focused on the productivity of the technician. Whenever the tech would press a button to say ‘I’m en route to next stop,’ we would get a GPS signal and timestamp that went back to our database. We were able to track travel time, time at each stop and so forth, and measure this against performance without the application.
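For readers building a similar measurement pipeline, here is a minimal sketch of how timestamped status events like the ones Lento describes could be rolled up into travel-time and on-site metrics. The event names, timestamp format, and record layout are illustrative assumptions, not the actual Vettro schema.

```python
from datetime import datetime

# Hypothetical status events: each time a tech presses a button
# ("en route" or "arrived"), a timestamped record lands in the database.
events = [
    ("en_route", "2006-07-11 08:00"),
    ("arrived",  "2006-07-11 08:25"),
    ("en_route", "2006-07-11 09:10"),
    ("arrived",  "2006-07-11 09:30"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

travel_minutes = 0.0   # driving between stops
onsite_minutes = 0.0   # working at a stop

# Each interval runs from one event to the next; its type is set by
# the status at the start of the interval.
for (status, start), (_, end) in zip(events, events[1:]):
    minutes = (parse(end) - parse(start)).total_seconds() / 60
    if status == "en_route":
        travel_minutes += minutes   # en_route -> arrived = travel
    else:
        onsite_minutes += minutes   # arrived -> en_route = time at stop

print(travel_minutes, onsite_minutes)  # 45.0 45.0 for the sample data
```

Comparing these totals week over week, against a baseline captured before rollout, gives the before/after productivity picture Lento describes.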

Kovash: We’ve learned the hard way how important it is to make it easy for respondents to share feedback with you. Surveys by themselves aren’t enough. If IT staff can’t be in the truck with users, then business-side project leaders need to be in the truck. You have to plan this into the project. Someone has to go to the meetings.

When I was doing a laptop trial, we picked some users to cycle through the devices we were considering. I brought people together and said ‘this is feedback I’ll be measuring, these are the type of questions I’ll ask, and here’s the section where you can put information that I didn’t think of. I won’t be offended by your comments.’ On a weekly basis, if I didn’t get e-mail, I would contact them to ask pointed questions. I summarized results at the end, then brought everyone together to see if my assessment was accurate. 

