
Archive for July, 2011

The Great Liquidity Crunch – Private Equity Firms Re-invent Themselves

July 31, 2011

By Vaibhav Gupta – ZENeSYS Certified Consultant, IIM Indore (2010-12 Batch). Vaibhav completed his B.Tech. in Mechanical Engineering from Delhi College of Engineering. Before his MBA, Vaibhav worked in Project Management for 4 years with Siemens Energy. Vaibhav intends to join the Management Consulting industry after his MBA.

Abstract: The liquidity crunch, tighter regulations and the greater control demanded by limited partners are forcing the PE industry to re-invent itself.

The three key areas of transformation are:

  1.   More secondary deals – establishing the best valuations and finding the right buyers
  2.   Restructuring deals – creative ways of re-negotiating past unrealistic deals
  3.   Employing turnaround specialists – building a consultancy-like capability for operational improvements

Over the last decade, contrary to what one might think, the private equity industry has gone through an economic cycle much like the stock market – up and down. The up phase of the PE market started around 2003, and the boom continued until 2007-2008, just as the financial crisis kicked in.

During the up phase, which began after the telecom bust of 2002, the PE industry saw the largest leveraged buyouts (LBOs) in history. In 2006 alone, PE firms bought more than 650 US companies for a total of $375 billion, around 18 times the value of the transactions closed in 2003 [ref 1].

The LBO movement was fueled by falling interest rates, loose lending standards and the tighter regulatory burden of owning a public enterprise (the Sarbanes-Oxley Act) [ref 2]. That regulatory burden is also one of the reasons venture capitalists started relying on sales to strategic buyers, rather than public listings, for an exit.

The arrival of the financial crisis in 2008 and the accompanying liquidity crunch brought the LBO frenzy to an abrupt halt. The leveraged finance market stood almost still during and after the crisis, and many deals were withdrawn or had to be re-negotiated [ref 3].

The PE industry is measured with the help of two metrics – fundraising and investment activity. Fundraising refers to the money investors have committed to PE funds in a year. Global fundraising fell to $150 billion in 2009 from $450 billion in 2008, the lowest figure since 2004. The continued lack of debt in 2010 meant there was no hope of a speedy recovery.

The other metric, investment activity, which represents the financing of businesses, fell from $181 billion globally in 2008 to just over $90 billion in 2009. In 2010 it picked up to $110 billion [ref 5]. This modest pick-up could be attributed to increased investment in small and medium enterprises and in emerging markets such as Brazil, China and India.

Since PE funds acquire firms in order to sell them later at a profit, life becomes difficult for them during an economic slowdown, and more so when a liquidity crunch is added to the mix. This was apparent from the total value of PE exit transactions, which fell from $151 billion in 2008 to $81 billion in 2009.

Over the next five years, through 2016, more than $800 billion in loans extended on committed deals will come due or have to be refinanced [ref 6]. This $800 billion is split almost equally between bank loans and high-yield bonds. To further complicate the task of hiving off these assets, the US government has passed legislation requiring any PE firm with more than $150 million in assets under management to register with the SEC.

The implications are public disclosure of risks, business activities, names of the personnel involved, assets owned, amounts owed to each creditor, performance metrics, debt and defaults [ref 7].

According to estimates [ref 9], there is one trillion dollars' worth of dry powder in PE funds globally. However, limited partners (LPs) are now demanding more control and requesting more information about their investments. The LPs want to keep track of drawn-down capital so that general partners (GPs) do not overdraw their limits [ref 8]. This has, in effect, created a liquidity crunch of sorts within the PE community itself.

This puts PE funds in a tight spot: how should they service their existing debts and acquire new assets? For servicing debts, the options are secondary markets, restructuring deals, or employing turnaround specialists to improve valuations. For acquiring new assets, they must look harder in the marketplace or find greener pastures in emerging markets.

Hence turnaround specialists are in demand. PE firms are turning away from traditional leveraged deals and looking at investing in distressed companies. They feel it is better to restructure deals around a change in strategy than to pull their money out and pursue the matter in the courts.

Turnaround capability is becoming more important as top lines (revenues) shrink and even vendors stop extending favorable credit terms. The success of a PE acquisition therefore comes down to operational excellence: improved management, optimized expenditure, and rooting out inefficiencies such as the overcapacity created in the high-growth years.

Interestingly, PE firms have identified a new gap in the market: companies that do not have the means to hire expensive management consultants now see PE ownership as a welcome opportunity to bring high-quality leadership aboard.

Emerging markets such as Brazil, China and India remain attractive, and PE firms need to find ways of entering them. China and India have been the two fastest-growing economies even during the recession, and both need to develop infrastructure to support that growth. Brazil is set to host the 2014 World Cup and the 2016 Olympics.

With increasing pressure from regulators, a lack of liquidity, and tighter control demanded by limited partners, future PE deals will need to be financed and executed with better insight and strategic planning. This means deeper research before deals are struck, awareness of best practices in target markets for operational excellence, and market intelligence for finding the right buyers quickly and at the best price.

This article was written as part of a research project undertaken by one of our certified consultants. All information and content has been derived from secondary research and from insights gained on recent ZENeSYS projects for the PE industry. Credits are provided in the references for text and data. ZENeSYS regularly provides market research and market intelligence services to the private equity industry.

References:

  1. http://www.washingtonpost.com/wp-dyn/content/article/2007/03/14/AR2007031402177.html
  2. http://www.nytimes.com/2006/12/01/business/01regs.html
  3. http://www.economist.com/finance/displaystory.cfm?story_id=9566005
  4. http://www.thecityuk.com/media/179004/private%20equity%202010.pdf
  5. Wharton PE Report 2010
  6. http://www.freshfields.com/industries/reports/new_normal/assets/new_normal.pdf
  7. "A Closer Look at the Dodd-Frank Wall Street Reform and Consumer Protection Act – Impact on Advisers to PE Funds", a report by PwC, May 2011
  8. http://blogs.wsj.com/privateequity/2011/06/10/limited-partners-talk-about-tightening-the-screws/
  9. http://www.cfo.com/article.cfm/12958163/c_12931795?f=TodayInFinance_Inside

Tips for Quality in Data Analytics Projects

July 18, 2011

Abstract

If you are thinking of launching a data analytics project at the departmental level, following best practices in gathering, preparing and loading data into your analytics system will be helpful. This white paper was written by ZENeSYS for iQCodex, the maker of a leading data collection and transformation tool. Visit http://www.iqcodex.com for more information.

What is Data Analytics?
Companies are eager to harness the sea of information around them for deep insights. For example, with the recent standardization of medical health records, health care providers now have access to historical data on chronic illness from a variety of sources, which can help them make better diagnoses. The Internet, business databases, news feeds, partners, clients and internal company data mean access to market conditions in near real time. Because of this, companies feel pressure to process information for operational efficiency and competitive advantage using data analytics – the science of analyzing data to develop insights.

Data Quality is Key
A significant part of any data analytics effort is data collection, scrubbing and validation, better known as ETL (extraction, transformation and loading). According to some studies, between 40% and 70% of the cost of a data analytics project ends up being spent on extraction, transformation and loading.

Business Units Take on Data Analytics Projects
With the arrival of affordable and easy-to-use ETL tools, business units rather than IT departments are taking on data analytics initiatives. This is understandable because both the input and the output of a data analytics initiative belong to the business units themselves: the input is raw data, which only the business units know where to find, and the output is insights, which again only the business units are capable of defining. If your business unit is considering a data analytics project, it is important to understand the best practices involved in capturing, transforming, validating and updating data for analysis.

Best Practices for Ensuring Data Quality
Extraction, Transformation and Loading (ETL), in simple terms, means the job of (1) collecting or capturing data, (2) ensuring its accuracy, (3) converting it for relevance and homogeneity, and (4) updating it to keep it fresh.

We will explain each of these in more detail now.

Collection of Data: Data will inevitably come from a variety of sources, and we can rest assured that the format will vary from one source to another. If it comes from a structured source such as a database, it will be accessible through a database language such as SQL. This is good news, as structured data is the easiest to deal with.

The bad news is that structured data from external sources, such as medical records or financial transactions, comes with a twist. It is likely to arrive in XML-based file formats – industry-specific data exchange standards that vary from one industry to another, e.g. insurance, finance or health.

Non-structured data can come from text, PDF, Word or Excel files, though in most cases it will be an Excel or comma-separated (CSV) file. This type of data generally comes from customers, news, industry reports and internal project documents, and tapping into it is often underestimated. By some estimates, between 40% and 70% of the data in analytics projects comes from MS Excel, MS Access or some other desktop or legacy application. The important thing to note here is that the ETL tool you use must be able to handle all of your required input formats with equal ease.
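
As a rough illustration, the Python sketch below (standard library only; the file names, table name and field names are invented for the example) shows what pulling the same kind of record from a CSV file, an SQL database and an XML export into one common structure might look like:

    # Minimal sketch: read records from three source formats into one
    # common structure. File, table and field names are illustrative only.
    import csv
    import sqlite3
    import xml.etree.ElementTree as ET

    def from_csv(path):
        # Each CSV row becomes a dict keyed by the header row.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def from_sql(db_path):
        # Structured source: a table in a SQLite database.
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        rows = conn.execute("SELECT id, name, amount FROM transactions")
        return [dict(r) for r in rows]

    def from_xml(path):
        # External structured source: an XML export with <record> elements.
        root = ET.parse(path).getroot()
        return [{child.tag: child.text for child in rec} for rec in root.iter("record")]

    # All three feeds end up in the same shape, ready for validation and loading.
    records = from_csv("sales.csv") + from_sql("erp.db") + from_xml("claims.xml")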

Accuracy of Data: The foremost requirement for ensuring accuracy is avoiding manual data entry. This means an ETL tool should be able to "read" all data sources electronically. For this to happen, the ETL interface should be able to handle all the formats described in the previous section, i.e. SQL, XML, Excel, Access, and plain-text CSV files.

Electronic reading not only assures accuracy; it also reduces cost, speeds up data collection and, most importantly, safeguards against errors and fraud – something that is becoming increasingly important as regulators tighten the rules.
A good ETL tool will also provide a means of cross-checking data for errors. For example, there should be validation checks for date formats, phone numbers and postcodes. Some ETL tools provide look-up tables and programming capability for cross-checking data accuracy.

For example, an insurance claim record may contain a claim date for an incident that occurred before the coverage start date. Here a date-format check alone is not enough; we need a way to "validate" the incident date by fetching the policy validity dates before the claim record is allowed through.
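
A minimal sketch of that kind of two-level check in Python follows; the field names, the date format and the policy look-up table are assumptions made for the example rather than features of any particular tool:

    # Minimal validation sketch: a format check plus a cross-check against
    # reference data. Field names and the policy table are illustrative only.
    from datetime import date, datetime

    # Assumed look-up of policy coverage windows, keyed by policy number.
    POLICIES = {"P-1001": (date(2011, 1, 1), date(2011, 12, 31))}

    def validate_claim(claim):
        # Level 1: is the incident date even a well-formed date?
        try:
            incident = datetime.strptime(claim["incident_date"], "%Y-%m-%d").date()
        except ValueError:
            return False, "incident_date is not a valid YYYY-MM-DD date"
        # Level 2: does the incident fall inside the policy's coverage window?
        policy = POLICIES.get(claim["policy_no"])
        if policy is None:
            return False, "unknown policy number"
        start, end = policy
        if not (start <= incident <= end):
            return False, "incident occurred outside the coverage period"
        return True, "ok"

    print(validate_claim({"policy_no": "P-1001", "incident_date": "2010-11-03"}))
    # (False, 'incident occurred outside the coverage period')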

Data Conversion: Conversion of data is probably the most complex feature of an ETL tool, and there are several aspects to it. Sometimes data is simply not in the units we need; a simple example would be the need to convert currency. ETL tools should be able to convert values to the desired alternative units on the fly.
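
A trivial sketch of on-the-fly conversion, with an assumed static exchange-rate table (a real tool would fetch current rates from a reference source):

    # Minimal on-the-fly conversion sketch: amounts are converted to a single
    # reporting currency as they are captured. Rates here are static assumptions.
    RATES_TO_USD = {"USD": 1.0, "EUR": 1.35, "INR": 0.022}

    def to_usd(record):
        rate = RATES_TO_USD[record["currency"]]
        record["amount_usd"] = round(record["amount"] * rate, 2)
        return record

    print(to_usd({"amount": 100.0, "currency": "EUR"}))
    # {'amount': 100.0, 'currency': 'EUR', 'amount_usd': 135.0}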

The second type of conversion requirement is semantic normalization. Consider a role that is referred to in multiple ways, e.g. a student, an apprentice, a trainee, or an observer. If they all mean the same thing for your analysis, they have to be normalized into a single preferred label before they enter your analytics engine.
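
In code, this usually amounts to nothing more than a synonym map applied before loading; the labels below simply mirror the example above:

    # Minimal normalization sketch: collapse synonymous labels into one
    # preferred term before the record reaches the analytics engine.
    CANONICAL = {
        "student": "trainee",
        "apprentice": "trainee",
        "observer": "trainee",
        "trainee": "trainee",
    }

    def normalize_role(raw_label):
        # Unknown labels are flagged rather than silently passed through.
        key = raw_label.strip().lower()
        if key not in CANONICAL:
            raise ValueError(f"unrecognised role label: {raw_label!r}")
        return CANONICAL[key]

    print(normalize_role("Apprentice"))   # trainee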

A third type of conversion is a bit more involved than the currency example. What if we need only net-of-tax sales figures, while only gross sales figures are available at source? Your ETL tool should be able to compute net sales from gross sales by deducting the prevailing tax rate for that region as the source data is captured. This is the capability to allow a "computed value" as an input.
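
A sketch of such a computed-value rule, assuming a flat tax-rate table by region and a tax-inclusive gross figure (the rates and field names are invented for illustration):

    # Minimal computed-value sketch: derive net sales from a tax-inclusive
    # gross figure at capture time. Regions and rates are assumptions.
    TAX_RATE = {"UK": 0.20, "DE": 0.19, "IN": 0.18}

    def add_net_sales(record):
        rate = TAX_RATE[record["region"]]
        record["net_sales"] = round(record["gross_sales"] / (1 + rate), 2)
        return record

    print(add_net_sales({"region": "UK", "gross_sales": 1200.0}))
    # {'region': 'UK', 'gross_sales': 1200.0, 'net_sales': 1000.0}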

A fourth type of data conversion is "protocol conversion". With the advent of standardization, the interchange of data between organizations is now facilitated by communications standards (also called protocols) such as SWIFT in financial services, HL7 for health records, and ACORD for the insurance industry. A good ETL tool must provide the necessary "hooks" to translate these standards and render the data in a format that is ready for analysis in your analytics engine.

The fifth type of conversion challenge is handling ad-hoc formats. Even today many organizations have "system islands", meaning different computer applications running with no connection between them. The challenge is that a data analytics project may need to tap into data from one or many of these disjointed systems, such as human resources, accounts payable, inventory management, or customer relationship management. Some of these systems may be old, with proprietary data formats. In such cases an ETL tool must at least be able to read comma-separated files – something even older systems are capable of generating. However, conversion from such file formats rarely goes entirely according to plan.

When an error is encountered, an ETL tool needs to "flag" an exception. An administrator of the ETL tool then examines the failed conversion step and makes the conversion manually. Sophisticated ETL tools will "learn" these exception conditions so that the same condition, encountered again, does not require manual intervention.
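
One common way to implement this, sketched below under assumed names, is to route rows that fail conversion into an exception queue and keep a table of manual corrections that is applied automatically the next time the same bad value appears:

    # Minimal exception-handling sketch: failed conversions are flagged for an
    # administrator; once corrected, the fix is remembered and reused.
    LEARNED_FIXES = {}        # bad input value -> corrected value
    exception_queue = []      # rows waiting for manual review

    def convert_amount(row):
        value = LEARNED_FIXES.get(row["amount"], row["amount"])
        try:
            row["amount"] = float(value)
            return row
        except ValueError:
            exception_queue.append(row)   # flag an exception for manual work
            return None

    def record_manual_fix(bad_value, corrected_value):
        # Called by the administrator; future occurrences convert automatically.
        LEARNED_FIXES[bad_value] = corrected_value

    convert_amount({"amount": "12,50"})          # flagged the first time
    record_manual_fix("12,50", "12.50")
    print(convert_amount({"amount": "12,50"}))   # {'amount': 12.5}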

Duplicate data is a very common exception condition. The same piece of data may appear more than once, quite possibly with conflicting parts. For example, an accounts payable system may hold a customer's bank account number as XYZ while a purchase order from the same customer shows it as ABC. In such cases the ETL tool must de-duplicate the records and, in case of conflict, establish the correct bank account number.
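
The sketch below shows the simplest version of this: records are grouped by a customer key and, on conflict, the value from the more trusted source wins. The source ranking is an assumption made for the example:

    # Minimal de-duplication sketch: the same customer appears in two feeds
    # with conflicting bank account numbers; the more trusted source wins.
    SOURCE_PRIORITY = {"accounts_payable": 2, "purchase_orders": 1}   # assumed ranking

    def dedupe(records):
        best = {}
        for rec in records:
            key = rec["customer_id"]
            if key not in best or SOURCE_PRIORITY[rec["source"]] > SOURCE_PRIORITY[best[key]["source"]]:
                best[key] = rec
        return list(best.values())

    feed = [
        {"customer_id": "C42", "bank_account": "ABC", "source": "purchase_orders"},
        {"customer_id": "C42", "bank_account": "XYZ", "source": "accounts_payable"},
    ]
    print(dedupe(feed))   # keeps the accounts-payable record with account XYZ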

Stripping out non-relevant data is another commonly encountered condition. This can and will happen when data comes from legacy systems or an external organization. Take, for example, a data analytics exercise on eye-related problems in diabetic patients. The patient-history data sources will likely contain many data points other than eye history. The ETL tool in this case has to systematically filter in only the eye-related data from diabetic patient records in the incoming source.
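
In practice this is a filter applied as the source is read; the field names and keyword list below are invented to mirror the diabetic eye-care example:

    # Minimal filtering sketch: keep only eye-related entries for diabetic
    # patients; everything else in the patient history is dropped.
    EYE_KEYWORDS = ("retinopathy", "cataract", "glaucoma", "vision")

    def relevant(entry):
        is_diabetic = entry.get("diabetic", False)
        is_eye_related = any(k in entry["diagnosis"].lower() for k in EYE_KEYWORDS)
        return is_diabetic and is_eye_related

    history = [
        {"patient": 1, "diabetic": True,  "diagnosis": "Diabetic retinopathy"},
        {"patient": 1, "diabetic": True,  "diagnosis": "Fractured wrist"},
        {"patient": 2, "diabetic": False, "diagnosis": "Cataract"},
    ]
    print([e for e in history if relevant(e)])   # only the retinopathy entry survives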

Updating of Data: Data analytics is not a one-time effort. It is an ongoing task that requires data to be refreshed on a continuous basis. However, to update data, an ETL tool needs to do more than just fetch new records.

Synchronizing the sequence of data updates: An address change, for instance, needs to be applied before new eligibility rates by region are applied to a particular customer record. Each update action should have a way to take into account its dependency on another update or on the planned time of that update.
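
One simple way to honour such dependencies, assuming each pending update declares what it depends on, is to sort the updates so that prerequisites are applied first, roughly as sketched here:

    # Minimal sequencing sketch: apply updates so that any declared
    # prerequisite (e.g. the address change) runs before its dependants.
    pending = [
        {"name": "apply_regional_rates", "depends_on": ["update_address"]},
        {"name": "update_address", "depends_on": []},
    ]

    def in_dependency_order(updates):
        done, ordered = set(), []
        while len(ordered) < len(updates):
            progressed = False
            for u in updates:
                if u["name"] not in done and all(d in done for d in u["depends_on"]):
                    ordered.append(u)
                    done.add(u["name"])
                    progressed = True
            if not progressed:
                raise ValueError("circular or unsatisfied dependency among updates")
        return ordered

    print([u["name"] for u in in_dependency_order(pending)])
    # ['update_address', 'apply_regional_rates']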

Timing of data updates: Even though certain data may be available immediately, the actual update may need to occur at a specific time to preserve data integrity. This often happens with legacy or older systems, where processing takes place overnight and the required data can only be extracted after 5 am. Here the ETL tool needs to ensure that updates which depend on that nightly batch start only after 5 am.

Partial or selective updates: Ad-hoc sources of data, such as external systems and personal data files, are often not capable of providing "selective updates", meaning they deliver the full set of data records each time. So if you are importing the credit history of your customers from an external source, each time you want an update you will have to deal with the entire data set – even though you are only interested in the records that have changed since your last load. Sometimes there will be a flag or indicator showing which items have changed since the last batch, but most likely there will not. If change flags are present, the ETL system should detect them and update only the changed records. If there are none, the ETL tool first needs to determine which records have changed. Doing a full load for each update is not a good idea: it can slow down your data analytics server significantly, and you may end up with data you no longer need.
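
When the source provides no change indicator at all, a common workaround (sketched below under that assumption) is to fingerprint each incoming record and compare it against the fingerprints stored from the previous load, so that only new or changed records are pushed into the analytics store:

    # Minimal change-detection sketch: fingerprint every incoming record and
    # update only those whose fingerprint differs from the previous load.
    import hashlib
    import json

    def fingerprint(record):
        # Stable hash of the record's content (key order made deterministic).
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def incremental_update(full_feed, previous_prints):
        changed, current_prints = [], {}
        for rec in full_feed:
            fp = fingerprint(rec)
            current_prints[rec["id"]] = fp
            if previous_prints.get(rec["id"]) != fp:
                changed.append(rec)       # new or modified since the last load
        return changed, current_prints

    feed = [{"id": 1, "score": "A"}, {"id": 2, "score": "B"}]
    changed, prints = incremental_update(feed, previous_prints={})
    print(len(changed))                   # 2 on the first run, fewer afterwards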

Changes in transformation logic: The entire ETL process is, in effect, a series of algorithms and business rules that enable the capture, transformation and loading of data. Over time these algorithms and rules will need to be modified because of changes at source, the discovery of a better way of doing things, or a change in policy. An ETL tool should allow an administrator to do this easily: to modify a formula or rule, the administrator must first be able to see how it is currently set up and then be able to edit it. This is a very important consideration when selecting an ETL tool – tools that do not provide such a transparent setup eventually become unmanageable, as people forget how the logic was set up long ago.

Business process changes: Let's say that today your data analytics system is set up to import credit ratings directly from a rating agency, and tomorrow your accounts department wants to take on the role of approving the ratings because you have access to data from multiple rating agencies. Some ETL systems are workflow-enabled to accommodate such requirements by building in approval processes. iQRoute from iQCodex excels here, with workflow features such as escalation, authorization, approval and overriding.

Keeping an audit trail: With the advent of new laws such as Sarbanes-Oxley in the US and Solvency II in the EU, proper audit trails are becoming vital. Regulatory pressure aside, it is good practice to have an audit-trail feature so that if things go wrong, the system can pinpoint what was changed, who made the change, and when the change was made.
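
At its simplest, an audit trail is an append-only log written alongside every change, recording what changed, who changed it and when; the sketch below uses an in-memory list purely for illustration, where a real tool would use durable storage:

    # Minimal audit-trail sketch: every change appends a who/what/when entry.
    from datetime import datetime, timezone

    audit_log = []   # a real tool would use durable, append-only storage

    def apply_change(record, field, new_value, user):
        old_value = record.get(field)
        record[field] = new_value
        audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": user,
            "what": f"{field}: {old_value!r} -> {new_value!r}",
        })
        return record

    customer = {"id": "C42", "rating": "BBB"}
    apply_change(customer, "rating", "BB", user="analyst_jane")
    print(audit_log[-1]["what"])   # rating: 'BBB' -> 'BB'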

Ability to roll back changes: Things do go wrong, and mistakes are often discovered days or weeks later. In such instances the audit trail will indicate the source of the problem, but to get back on your feet a rollback to the former state is a huge lifesaver. A good ETL system will provide date- and time-stamped snapshots of the changes made; an administrator can then review each historical change and decide how far back to go before starting again.
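
A rough sketch of the snapshot-and-rollback idea: the data set is copied with a timestamp before each load, and the administrator restores the most recent snapshot taken before the problem appeared. This is illustrative only, not any particular tool's mechanism:

    # Minimal snapshot/rollback sketch: keep timestamped copies of the data
    # set and restore the latest one taken at or before a chosen point in time.
    import copy
    from datetime import datetime, timezone

    snapshots = []   # list of (timestamp, deep copy of the data set)

    def take_snapshot(dataset):
        snapshots.append((datetime.now(timezone.utc), copy.deepcopy(dataset)))

    def roll_back_to(point_in_time):
        candidates = [(ts, data) for ts, data in snapshots if ts <= point_in_time]
        if not candidates:
            raise ValueError("no snapshot exists before that time")
        return copy.deepcopy(max(candidates, key=lambda item: item[0])[1])

    data = {"C42": {"rating": "BBB"}}
    take_snapshot(data)                                # before a risky load
    data["C42"]["rating"] = "BB"                       # the load goes wrong
    data = roll_back_to(datetime.now(timezone.utc))    # restore the snapshot
    print(data["C42"]["rating"])                       # BBB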

Summary
Many organizations are launching data analytics initiatives at the departmental level to gain invaluable market and operational insights. However, data analytics outcomes depend on the quality of the underlying data. Several affordable tools are available on the market to provide ETL (extract, transform and load) capability. Before you launch a data analytics project, use this checklist to ensure the ETL tool you select has all of these features – they are essential for data quality and therefore for the validity of the insights you derive.