Tuesday, December 19, 2006

The 10 Best of Shanker-Jaikishen

Let me try and list the 10 best songs this dynamic duo created – this is a list of personal favorites and in no way reflects anything other than individual interest. Songs in Hindi movies are irrevocably linked to their picturization, and I try to explain the rationale behind each of my picks – with the caveat that this is intensely personal.

1. Awara Hoon: Awara (1951) – the song that began it all: my blog, and the rekindling of my passion for SJ so tragically lost in the pressure of work. Immense re-hear value, lovely touching lyrics, a smooth song to sing at all parties – almost everyone knows the tune, and the music is lilting and foot-tapping. The song was photographed lovingly in electric black and white, the mischief in Raj Kapoor’s eyes was inimitable, and the motion of the chain-watch was hypnotic and in perfect cadence with the SJ background score. Mukesh, as usual, excelled when singing for Raj; if he was the body in ‘Mera Joota Hai Japani’, he was the soul in this song. Truly timeless.
2. Pyar Hua Ikrar Hua: Shree 420 (1955) – My all-time favorite RK song – I just love that interlude music especially, and THAT scene is truly amazing – the scene where Nargis and Raj are huddled under the black umbrella in a shower of rain. Nargis points to the three Kapoor kids and mouths – hum na rahenge tum na rahenge phir bhi rahengi nishaniyan. I dare you to think of a more poetic romantic situation. Evocative.
3. Nanhe Munhe Bachhe Teri: Boot Polish (1954) - This song made a convert of my niece – a young six-year-old kid who only loved new-age music. She stood absolutely still when she heard this for the first time and then, by the second stanza, began humming it. Music, as they say, is divine, and the look on her face was indeed divine and reflected pure bliss. I can still visualize the precocious Baby Naaz and the vibrant Master Rattan dancing to this amazing Shanker-Jaikishen beat. David was on hand to lip-sync to the magnificent Rafi. Add to the heady mix the fact that Boot Polish was meant to be a songless film and the Aah disaster forced Raj Kapoor to do a rethink and create songs and song situations within a few weeks. Invigorating stuff.
4. Hai Sabse Madhur Woh Geet: (Patita, 1953) - Dev Anand had this dreamy look throughout the movie, and perhaps never more than in this song. This is a song which makes you wonder why there weren’t a lot more Dev-SJ combinations in the early fifties. In fact, the next Dev-SJ combination after Patita came as late as 1959 - Love Marriage, which had the famous dig at O P Nayyar (Tin Kanister Peet Peet Kar). The Talat numbers in Patita picturised on Agha – Andhe Jahan Ke & Tujhe Apne Paas Bulati Hai – the Lata solos picturised on Usha Kiron – Mitti Se Khelte Ho & Kisine Apna Banake – and the Lata-Hemant lullaby picturised on Dev and Usha – Yaad Kiya Dil Ne – were all amazing and deserved a place in any list of best songs, but the song I chose from Patita is for that rare combination of wistfulness, passion and desperation; it evokes a sense of nostalgia and ‘what could have been but for…’. Trust SJ to bring out the right mood and Talat to express it so evocatively. The icing was the debonair Dev looking handsome, lost and forlorn…pictures that stay with you long after the movie is over. Timeless...
5. Main Piya Teri: (Basant Bahar, 1956) – This movie is the stuff musical folklore is made of. Anil Biswas, the great music director, was initially chosen for this musical magnum opus, but the distributors insisted on SJ – whereupon Anil made his displeasure very clear and reportedly passed some derogatory remarks on SJ's lack of classical expertise. Shanker picked up the gauntlet and, along with Jaikishen, proceeded to give some of the finest classical-based songs ever created. Each of the songs was a gem – the Manna Dey classic in Pilu - Sur Na Saje Kya Gaon Re - and his ode in Miya Ki Malhar (Bhay Bhanjana), the Manna-Bhimsen number – Ketaki Gulab Juhi - in Basant Bahar, the Rafi classics Badi Der Bhai (Pilu) & Duniya Ne Bhaaye (Gurjari Todi or Lalit – I am not sure, comments welcome), and the Lata-Manna sweet duet in SJ’s favored Bhairavi – Main Piya Teri…which is probably why I pick Main Piya Teri amongst the galaxy of shining stars in Basant Bahar as my personal favorite. Listen to it and let me know if you feel as I do – SJ and Bhairavi were truly made for each other. Also, the old story of Shanker telling Pannalal Ghosh what, how and when to play his famed flute is the stuff legends are made of – and what are they if SJ aren’t one! The best amongst the very best. Truly classical...
6. Ae Mere Dil Kahin Aur Chal (Sad/Happy, Daag, 1952) – An amazing song, Talat at his very best. Somehow Talat was never as popular as a Rafi or a Mukesh amongst post-1970 listeners, which is somewhat of a tragedy. I thought he had an amazing voice, admirably suited to the soft and sad songs he sang with so much feeling, principally for a less debonair Dev in the early fifties and a more composed Dilip around the same time. The two sides of the Daag classic are a case in point – Dilip as the drunken hero in the sad version made the audience look up to the heavens with him as he cried ‘Chup raha beraham aasman’, and the same audience tapped their feet with energy as the strains of the happier version floated like the season’s first snowflakes on a cold Boston morning. SJ at it – the sad and soft allied to the enterprising and energetic. Super stuff.
7. Ae Bhai Zara Dekh Ke Chalo (Mera Naam Joker, 1970) - For sheer joie de vivre this is hard to beat. Mera Naam Joker was a disaster at the box office, prompting Raj Kapoor to banish himself from the screen and SJ from the music studio, but the movie was a musical marvel - one of the finest all-round musicals of all time. The pathos-laden strains of Jaane Kahan Gaye Wo Din contrasted perfectly with the playful Asha classic Paan Khaiyo Saiyaan Hamaro. The philosophical Jeena Yahan Marna Yahan found a telling riposte in the sparkling Kehta Hai Joker. But for me the finest was the Manna contribution to Mera Naam Joker - the song fluctuated constantly between fast and slow strains, its meaningful lyrics ending quite magnificently with Veeran duniya ka basera hai. It got Manna a well-deserved Filmfare best singer award. Philosophical stuff with a rhythm about it.

8. Ye Shaam Ki Tanhaiyan (Aah, 1953) - Again I am picking a song from what was a box-office disaster, the one that prompted Raj Kapoor to make Boot Polish a musical instead of the bold songless wonder it was supposed to be. Hear this and leave a comment - has Lata ever sounded better? Aah had amazing numbers - the Mukesh-Lata duet (Jaane Na Nazar), the Lata solo picturised on the other sister (Sunte The Naam Hum) and the Mukesh lullaby (Chhoti Si Zindagani) come to mind immediately - but this Lata solo was, for me, the pick of the jewels in the Aah crown. Sheer melody from the melody queen.

9. Baharon Phool Barsao (Suraj, 1966) - The pick of the prolific Rafi-SJ-Rajendra Kumar collaboration for me. Suraj was a great musical despite Shanker's fascination with Sharda in Titli Udi. Baharon Phool Barsao had it all - excellent initial music breaking into a crescendo, good lyrics, Rafi's romantic voice, the amazing use of the orchestra and the clever use of the variation in Rafi's voice. My friends may wonder why I haven't picked some of the better-known Rafi-SJ combinations, but for me melody is key; rhythm and beat come later. For sheer romance and melody, this ditty is hard to ignore amongst the ten best. The most romantic voice singing one of the most romantic numbers. Beautiful.

10. Jiya Bekarar Hai (Barsaat, 1949): The song that started a rage - SJ's initiation into Bollywood as a musical entity. Legend has it that the team sat on the footpath outside the recording studio wondering about their future - little knowing they had created history: the best, most versatile and most popular music directors of all time were born. Barsaat had several firsts - Lata singing all the songs for both the heroines, the use of a significant orchestra, a single song sung by two different people in two different locations to different sentiments (the rumbustious Mukesh and the doleful Lata in Patli Kamar Hai) - but for me Jiya Bekarar Hai is the key song in the SJ repertoire. Had this song not been as good as it was, SJ would not have become what they did - the best of all time.

You may realize I have been partial to the early fifties – I think the duo gave their best music when they were together – in the true sense of the word – and were a lot less effective or evocative when a lot more than the hyphen separated them.

Monday, December 11, 2006

Consultative Elicitation

Introduction:

I have been a consultant for a significant portion of my working career, working mainly in Financial Services (Capital Markets principally), Retail and Telecom as vertical domains, and CRM and Offshore Development Process/Methodology Definition as horizontal competencies, in geographies ranging from North and South America to Continental Europe, Japan and APAC. This experience has stood me in good stead in understanding the nuances of what to expect from customers who know something is not quite right but probably don’t know exactly what, or how to influence the outcome positively. Consulting has been a much-maligned term, especially in India and some developing markets, but one must know that before one can commence writing code or testing it, one needs to ensure the requirements are elicited and documented to the best extent possible. As one moves from effort-based pricing to outcome-based pricing, this is becoming critical to defining ROC and ROI.

Consultants will always tell you that the customer never knows what they want to do. However, customers almost always know what they want at the end of it all – the problem is that this ‘to-be state’ is very often information driven and not data driven, emotion driven and not facts driven, and most times individual driven and not organization driven. Unfortunately, few of us are experts in interviewing techniques (‘Elicitation’) that get customers to talk candidly about their issues, perceptions, opinions and tasks, and bring them into the context of defining necessities, needs, wants and desires. Sherlock Holmes once commented that the trouble with simple cases is the abundance of clues, and the good detective knows which are important and which are not – it’s often similar in Elicitation. Most consultative Elicitation commences from a position of extreme information overload but little reliable, actionable data. So the first step is often to determine which data elements need further elicitation and which need to be kept in abeyance. Let’s now understand how to approach such cases of Consultative Elicitation.

Research: I always tell people that the key to a good consulting effort is the background research one does on the prospect and the areas of initially isolated concern. The research needs to be comprehensive, ranging from market-facing data (revenue, growth %, areas of growth, areas of growth vis-à-vis the market, profitability, profitability across different business areas, strategic direction, cost of support, cost of operations etc.) to internal strategic initiatives – internal research data, HR management and initiatives, press releases etc. This gives a very good idea of where the issues and concerns could lie, thus focusing attention much better during the next phases. It ensures the initial discussions and interviews become validation statements rather than Q&A sessions, saving valuable time and increasing management attention. Articulating a research objective sets the stage for a successful interview.
Developing the Base Questionnaire - The right questions are those that help us get beneath the surface, understand the customer’s world, work and concerns, and validate assumptions. These would be more in terms of validating some of the data and the associated conclusions one could have drawn from extensive research. This results either in focusing more on the areas of concern identified earlier or in going back to the drawing board for further focused research. So initially, it’s recommended that one speaks about overall objectives, options, directions and broad-level concerns.
Developing a Good Questioning Technique: It’s a good practice to clearly enunciate and understand the overall objectives internally before meeting the customer and commencing to ask questions. What do you hope to accomplish by interviewing the customer? Do you want to explore broad options or understand a specific business process? A wandering, unfocused interaction will yield paltry results and frustrate the customer. Once you’ve defined the broad objective, brainstorm a list of all the questions, suggestions etc. related to the topic. It’s a good practice to organize the questions in a set of matrices – on the horizontal plane from general to specific and familiar to unfamiliar, and on the vertical plane around opinions, recommendations, perceptions, risks and emotional issues (including HR issues); a minimal sketch of such a matrix follows the list below. The process of preparing questions helps to identify key topic areas to cover. Following a set list of questions isn’t the point: successful interviewers invest time in designing and testing questions – but then use them as a guide, not a script. As you prepare for an interview, consider different types of questions. Each type serves a purpose and elicits a different response:
1. Context-Free Questions
2. Meta Questions
3. Open Ended Questions
4. Closed Loop Questions
5. Leading and Show Me Questions
6. Past-Present-Future Questions
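Here is that sketch, in Java – every class, field and sample question is hypothetical, invented purely for illustration rather than drawn from any standard elicitation toolkit:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the question matrix described above.
// Horizontal axes: general -> specific and familiar -> unfamiliar;
// vertical axis: the theme the question probes.
public class QuestionBank {

    enum QuestionType {
        CONTEXT_FREE, META, OPEN_ENDED, CLOSED_LOOP, LEADING_SHOW_ME, PAST_PRESENT_FUTURE
    }

    enum Theme { OPINION, RECOMMENDATION, PERCEPTION, RISK, EMOTIONAL }

    static class Question {
        final QuestionType type;
        final Theme theme;
        final int specificity;  // 0 = general ... 10 = specific
        final int familiarity;  // 0 = familiar ... 10 = unfamiliar
        final String text;

        Question(QuestionType type, Theme theme, int specificity, int familiarity, String text) {
            this.type = type;
            this.theme = theme;
            this.specificity = specificity;
            this.familiarity = familiarity;
            this.text = text;
        }
    }

    private final List<Question> questions = new ArrayList<>();

    void add(Question q) { questions.add(q); }

    // A guide, not a script: surface the general, familiar questions first.
    List<Question> orderedGuide() {
        List<Question> copy = new ArrayList<>(questions);
        copy.sort((a, b) -> (a.specificity + a.familiarity) - (b.specificity + b.familiarity));
        return copy;
    }

    public static void main(String[] args) {
        QuestionBank bank = new QuestionBank();
        bank.add(new Question(QuestionType.CONTEXT_FREE, Theme.PERCEPTION, 1, 1,
                "What does a successful outcome look like for you?"));
        bank.add(new Question(QuestionType.CLOSED_LOOP, Theme.RISK, 8, 6,
                "Is the month-end settlement run completed within the SLA today?"));
        for (Question q : bank.orderedGuide()) {
            System.out.println(q.type + " / " + q.theme + ": " + q.text);
        }
    }
}

Sorting on specificity plus familiarity gives a rough ordering consistent with the matrix: open the interview on general, familiar ground and work toward the specific and unfamiliar.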
Develop In-Depth Questionnaire Patterns: The key to eliciting the detailed requirements is analyzing the data obtained from Research, User Interviews, Manager Interviews etc. and reverting with a series of if-else, what-if and but-if logic loops. These enable the interviewer to focus on the key core issues while ensuring the tangential ones and the symptoms of problems get documented as such. You can consider the data adequate when all the data from Research, User Group Interviews, Other Interviews and focus interviews is able to identify a set of concerns/issues (process, technology or people) which explains at least 90% of the issues and concerns expressed.
Data Analysis Template – FTSC: One good practice which I have come across is to analyze all data on the basis of the FTSC dispersal – Foundation, Tactical, Strategic and Continuum. Basically it means that all issues or data can be related to Foundation (base issues, typically operational, which can be rectified by minor operational modifications), Tactical (short term, needing some operational and process tweaking), Strategic (medium term, market-facing issues which need broader and senior management attention) or Continuum (broad operational areas that are fairly efficient but need constant monitoring and efficiency checks).
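A minimal sketch of the FTSC dispersal as a simple classification, in hypothetical Java that merely paraphrases the category definitions above:

// Hypothetical sketch of the FTSC dispersal as an enum classification.
public enum FtscCategory {
    FOUNDATION("Base issues; operational; fixed by minor operational modifications"),
    TACTICAL("Short term; needs some operational and process tweaking"),
    STRATEGIC("Medium term; market-facing; needs broader and senior management attention"),
    CONTINUUM("Broadly efficient; needs constant monitoring and efficiency checks");

    private final String description;

    FtscCategory(String description) { this.description = description; }

    public String description() { return description; }

    public static void main(String[] args) {
        for (FtscCategory c : values()) {
            System.out.println(c + ": " + c.description());
        }
    }
}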
Conclusion Template: The most critical aspect is the presentation of the issues, suggestions and recommendations. Always remember the client is almost always aware of the issues – the symptoms and sometimes the underlying causes – and tailor the analysis to:
1. Symptoms observed
2. Possible Causes
3. Probable Causes after data analysis
4. Criticality of causes
5. Recommendations – Process, Technology, Others
6. Suggestions – Process, Technology, Others
Some Pointers: Before you rush off to try out your interviewing skills, practice. Start with a colleague, and then try your interview with an internal customer proxy or subject-matter expert. It's always a good practice to work in pairs, with the members taking turns to ask questions and BOTH taking notes. It's better to limit the questioning sessions to 4-5 hours in a working day and keep at least 30 minutes between sessions for note-sharing and data-sharing. Avoid using this time to draw conclusions; use it solely to flesh out all the details and document them effectively. A quick 30-minute discussion with all interviewers at the end of the day – typically informal – is a good idea to gather some first impressions about the people: who seemed very open and who probably was not. At the end of the day, just remember consulting is not rocket science – all it needs is a thinking mind, perception, an eye for detail and documentation ability. It's good to have the requisite domain knowledge, but I have often seen it's needed at the level of the team – not in every individual.

Sunday, December 10, 2006

An Evening in Sydney

The most arresting aspect of Sydney – it’s a lovely city and I have nothing against it or the beautiful people who inhabit it – is the fact that all shops and commercial establishments close at 5 PM. Yes, you read that right: 5 PM every day including weekends (actually they close earlier then), except Thursdays. I am not a great one for shopping or doing the tourist rounds – in fact, the only occasions when I went across the street to view the Eiffel Tower when I lived in Paris were those infrequent occasions when friends and relatives used to drop in and insist I show them the Eiffel Tower. My idea of a holiday is to be in the midst of 10 million other human beings in Calcutta, reading a good book and sipping the local hot brew in the daytime, munching on samosas or kachauris, or drinking something a bit bitter later in the day…anyway, more about my likes and dislikes later, but about Sydney first before I digress once again.

This aberration – in global terms at least – of establishment owners preferring to find places to drink rather than keep one open themselves leads to some peculiar problems for folks like us who have to earn an honest living by working till pretty late in the evening. When I come out at about 10 PM, which is late by Sydney standards, I am very often the only person about in the North Sydney suburb where I live and work. The biggest tragedy is that the bars close at 10 (PM, not AM), leaving people like me to search and scrounge around for places to get an honest bitter. Tough times, folks.

More to come on Sydney…interesting experience it's been so far...

Transforming Contact Centers Using Knowledge Centered Support


Introduction:
I was driving a team responsible for building one of the leading Knowledge Management tools in the world while heading the CRM practice for my organization. Without a lot of background knowledge on CRM – especially Knowledge Management or Contact Center applications – given my background in software engineering for Financial Services, I was initially swamped by the data and my corresponding lack of knowledge...but the topic was so interesting and topical that I was intrigued. Intrigued enough to spend the past 2 1/2 years – incidentally loving it – working on several contact center consulting projects and CRM implementations, especially in the Telecom and Financial Services industries. An amazing experience, and this article is a compilation of some of my thoughts around Contact Centers and Knowledge Management over the past 2-odd years...

Base Camp:
Over 92 percent of U.S. consumers form their image of a company based on their experience using a call center. However, the call center is regarded as one of the major 'cost elements' for accounting purposes. The tool or process that is most essential for preventing customer attrition is regarded by many as a 'cost element'. Funny, right?

The Customer Service and Support (CSS) centers of today need to transcend far beyond handling customer requests at minimal cost. CSS has to be enabled to make the transition from being regarded as a ‘necessary cost center’ to an ‘indispensable profit center’. The keys to this, I argue, are Knowledge Management, Channel Integration, Effective HR Training, Integration with Back-end Systems and a Culture of Continuous Customer Centricity.

Why is KM so important?
Let's understand why KM plays such a crucial role in a contact center or a CSS channel:

1. Minimize time to resolution – the key to customer satisfaction during a call or a transaction is not just the time taken to identify the customer (which is pretty straightforward given the investments most companies have made in standalone CRM and IVRS systems) but the time and efficiency with which the customer issue or query is resolved to complete satisfaction. This requires a high degree of sophistication in terms of Knowledge Management, Integration etc.
2. Knowledge-Driven Service Model – A good KM model allows companies to capture unstructured information and structured data and make it available to all employees and customers, as well as outsourced contact center agents/companies. This presents a model for effectively updating and modifying the knowledge base and ensuring the knowledge is captured and processed into identifiable and re-usable chunks of data (a minimal sketch of such a chunk follows this list).
3. Customer Loyalty: Effective KM sharply reduces the need for escalation or a change of agent due to specialization within a contact center. There is a direct cost benefit from reduced escalation and, additionally, a more strategic impact on customer satisfaction. We all, as customers, recollect positive experiences with pleasure, which makes customer retention easier and earns good press through increased CSAT.
4. Customer Data: Market research practitioners will agree that customer feedback is extremely difficult to gather, analyze, authenticate and draw actionable items from with a large degree of certainty. With effective KM in contact centers there is an easier way out than asking customers to fill in long questionnaires or answer interminable questions from an often bored and dispassionate interviewer. Knowledge-powered Web sites and other channels enable the business to capture the customer's dialogue with the organization, which can be easily analyzed and action items derived for customer insight and increased wallet share.
5. Pursuit of constant improvement: A structured KM program allied to effective tools breeds a team of people constantly trying to improve the available knowledge. This is especially true for companies with a large and diversified customer base and a correspondingly large Channel Management office.
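Here is that sketch – hypothetical Java, where real KM suites are far richer and the naive keyword search merely stands in for a real search engine:

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of knowledge captured as an identifiable, re-usable chunk.
public class KnowledgeArticle {
    final String id;
    final String symptom;      // the issue in the customer's own words
    final String resolution;   // the reusable answer
    final Set<String> tags = new LinkedHashSet<>();
    int reuseCount;            // how often agents resolved a call with this chunk

    KnowledgeArticle(String id, String symptom, String resolution) {
        this.id = id;
        this.symptom = symptom;
        this.resolution = resolution;
    }

    // A naive keyword match standing in for a real KM search engine.
    static List<KnowledgeArticle> search(List<KnowledgeArticle> base, String query) {
        List<KnowledgeArticle> hits = new ArrayList<>();
        for (KnowledgeArticle a : base) {
            if (a.symptom.toLowerCase().contains(query.toLowerCase())
                    || a.tags.contains(query.toLowerCase())) {
                hits.add(a);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        KnowledgeArticle a = new KnowledgeArticle("KB-101",
                "Broadband drops every evening", "Re-provision the port profile");
        a.tags.add("broadband");
        List<KnowledgeArticle> base = new ArrayList<>();
        base.add(a);
        System.out.println(search(base, "broadband").size() + " match(es)");
    }
}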

A Step-by-Step Primer to Implementing a KM Solution for a CSS:
Coming soon...

Friday, December 08, 2006

My Present Area of non-professional interest...

Presently, my interest and research center around Shanker-Jaikishen - a duo of music directors who revolutionized music in Hindi films with some breathtaking compositions in the early fifties and remained the supreme duo till the untimely death of Jaikishen in 1971...This interest got rekindled on a recent visit to the Bay Area. While I was in a cab proceeding from SFO Airport to Menlo Park, my phone rang - the ringtone was set to a song from a movie called Awara, released in 1951. The cab driver - a white Caucasian - immediately began humming the tune under his breath. Once he realized I had completed my call, he turned to me and asked if I was from India and whether I knew of Raj Kapoor - the actor on whom this song was picturised. It's incredible to imagine this scene - a Russian who emigrated to the US 30 years back still remembers a tune he had briefly heard when he would have been 10 or 12 years old...amazing...That set me thinking - what kind of geniuses could have created a tune that rests in the sub-conscious mind for 55 years and remains fresh?

Operational Risk

Introduction:
Operational Risk is an area of emerging interest; it was a lot more nascent when I got into it, and the sheer paucity of information - reliable, simple and accurate - prompted me to write this article along with a good friend and cousin (in that order), Raghu. It was published in several magazines a few years back and, I hope, proved somewhat useful for practitioners in Financial Risk.

Summary: Barings, Daiwa, Natwest, Sumitomo suffered catastrophic losses in the area of operational risk, causing regulators and banks world over to refocus on the topic.
As businesses become more and more competitive, as the pace of change inside and outside the organization continues to increase exponentially, and as the marketplace becomes more and more complex due to technological advancement and innovation, the management of operational change and its risks has become a critical success factor. Business operations will need to be as efficient as possible to deliver seamless service to the customer. Risk management structures and practices will need to mature to satisfy all stakeholders - shareholders, employees, government, regulators and society as a whole.
As the new millennium unfolds, the challenge for business leaders and decision-makers is to adopt an integrated approach to strategy, value proposition, customer service, capital management, finance, operations, risk management and corporate culture.
What is Operational Risk?
Operational risks are enterprise-wide and inherent in any business. They are more pronounced in industries like nuclear power plants and chemicals and, as has been seen lately, in the banking industry.
An accepted and recognized definition for OR is yet to evolve. However, descriptions of OR range from narrow definitions covering operational breakdowns in processes to broad definitions which capture all risks that are not credit or market risks.
Historically, OR was associated with only operations and technology. The Financial Services Authority (FSA), U.K., describes OR as the "risk of loss, resulting from inadequate or failed internal processes, people and systems, or from external events." Significant operational losses in recent years in the banking industry have highlighted that OR can arise from internal and external fraud, failure to comply with employment laws or meet workplace safety standards, policy breaches, compliance breaches, key personnel risks, damage to physical assets, business disruptions and system failures, transaction processing failures, information security breaches and the like. With increasing attention being paid to social, ethical and environmental issues, the scope of OR management has extended to monitoring and managing these risks as well.
The Basel Committee on Banking Supervision has recognized that managing OR is becoming an important feature of sound risk management practice in modern financial markets. The committee has noted that the most important types of operational risk involve breakdowns in internal controls and corporate governance. Such breakdowns can lead to financial losses through error, fraud or failure to perform within accepted timelines, or cause the interests of the bank to be compromised in some other way - for example, by its dealers, lending officers or other staff exceeding their authority or conducting business in an unethical or risky manner. Other aspects of operational risk include major failures of information technology systems or events such as major fires or other disasters.
Operational Risk - The Drivers
The banking industry is by far more advanced than any other in attempting to manage credit, market and operational risks in an integrated manner. Regulatory pressure has helped and goaded the banks to adopt a strategic approach to operational continuity and risk management. It is being realized that by managing risks on the operational side, banks can maximize returns through more efficient use of capital, thereby increasing shareholders' wealth.
Globalization, consolidation, outsourcing, nearshoring and offshoring, the breaking of geographical barriers by new technology, the growth of e-commerce, competition etc. have significantly increased the profit-making opportunities of the banking industry. At the same time, increased regulatory focus, increased awareness of "uninsurable" risks, greater focus on corporate governance, renewed emphasis on corporate accountability and directors' liability, public expectations etc. have become the key drivers compelling organizations to focus on the management of operational risks. The industry's risk-control structure has, more often than not, failed to keep pace with the hectic changes taking place at the operational level of the bank.
The Basel II Norms
The Basel committee, saying that operational risk has become too important to ignore, decided that banks must take a disciplined and proactive approach to managing it. Though the final guidelines are still not very clearly defined, banks are required to apply an explicit capital charge to cover losses arising from operational risks. Ultimately, this requires two measurement models:
1. Measuring operational risk, and
2. Measuring to determine how much capital must be allocated.
These models are currently in their formative stages, with multiple ideas and proposals being discussed. In the meantime, many "best practices" banks have created reserves for operational risk losses by substituting non-interest expense for the data that the models would otherwise provide. A few banks have made a fair degree of progress in developing more advanced techniques for allocating capital with regard to operational risk. To determine their capital allocation, they simply use a percentage of non-interest expense - anywhere from 8 percent of non-interest expense to sometimes as much as 20 percent. If the top 100 banks - which have a combined $500 billion of non-interest expense - set aside 30 percent, the allocation would total $150 billion. As with buying insurance, the banks would have to take an annual charge - using current interest rates of about 5 percent - of $7 billion to $8 billion to buy access to this reserve.
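Restating the arithmetic above as a back-of-the-envelope calculation:

\[
\$500\,\text{billion} \times 30\% = \$150\,\text{billion (reserve)},
\qquad
\$150\,\text{billion} \times 5\% = \$7.5\,\text{billion per year}
\]

which lands within the $7 billion to $8 billion range quoted.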
On a microeconomic level, a commercial bank with, say, $1.0 billion of non-interest expense would, by similar calculations, have to take an annual charge of about $40 million to finance a $250 million allocation. This would severely limit the bank's risk-taking and consequent profit-making abilities. In the absence of good models or best-of-breed operational risk management scenarios, banks could rely on this percentage calculation of averaged historical data, or another data substitute, to establish the required capital cushion. The illogical aspect of this plan is that, regardless of operational risk performance, all banks would be treated alike, and better performers would be penalized despite having access to correct and accurate data that would justify a lower minimum capital allocation calculated on standard derivative functions.
On the other hand, banks that can define, design, develop and follow best practices models to accurately measure their operational risk can allocate just enough capital to cover their exposure by using the newer and more accurate derivative functions. As a result, banks that manage their risk efficiently, measure it effectively, and allocate capital effectively would be rewarded with a smaller regulatory burden and more capital to support innovation and expansion. At the same time, their customers would be protected not by unnecessary amounts of external insurance but by solid operational risk management.
More importantly, true competitive advantage arises from developing an organizational culture that proactively manages day-to-day risk, identifies new risks progressively, shares best practices within the organization and beyond, and systematically tracks risk exposures. Building the right culture begins with instituting a disciplined approach to operational risk management, starting with the board and filtering down through every level and business unit and across every major process in the organization, using a software framework customized to the peculiar needs of each bank and country of operation. Once the infrastructure is in place, banks must learn to assess the quality of their risk-management programs continuously, assign monetary values to the risks they confront, understand them and take effective measures to mitigate them. This is the starting point for building a model that banks can use instead of the standard percentage rate that regulators will probably assign across the industry, which would leave the better banks with less leverage.
Measuring Operational Risk
Operational risk is more difficult to measure than market or credit risk due to the non-availability of objective data, redundant data, lack of knowledge of what to measure etc. The data requirements for measuring market risk are pretty straightforward - prices, volatility and other external data, packaged with significant history in large databases, easily accessible and measurable. Similarly, credit risk relies on the assessment and analysis of historical and factual data, which is readily available in most core banking systems.
Operational risk, however, is an ill-defined "inside measurement," related to the measures of internal performance, such as internal audit ratings, volume, turnover, error rates and income volatility, interaction of people, processes, methodologies, technology systems, business terminology and culture. Uncertainty about which factors are important arises from the absence of a direct relationship between the risk factors usually identified and the size and frequency of losses.
Capturing operational loss experience also raises measurement questions. Further the costs of investigating and correcting the problems underlying a loss event could be significant and in some cases exceed the direct costs of operational losses. Measuring operational risk requires both estimating the probability of an operational loss event and the potential size of the loss.
Thus any mathematical approach to operational risk struggles with a lack of objective data. Operational risk could cost the 100 biggest banks $14-15 billion a year; given the evolving nature of operations, a single enterprise-wide historical view of operational risk may not be the right approach.
Instead, banks should develop suitable internal measures of operational risk to substitute for or add to the available historical risk data. This means identifying categories and classes of risk and gathering all readily available information, which together can support a reliable measure of operational risk in each area of activity and for each category or sub-category. The information can be data on risk experience, inherent risk from risk-scoring mechanisms, or subjectively based measurements of risk impact and likelihood. Better operational risk management means that banks are less likely to have major losses through error, fraud or failure to deliver quality service.
Along with protecting a company from potential damage, proactive risk management contributes to the bottom line. The benefits include protection of assets by preventing major losses, protection of shareholder value, avoidance of regulatory censure, the ability to render services without interruption, and the maintenance of a good reputation and public confidence. In the long run, the new Basel II guidelines will motivate better control of operational risk, leading to greater efficiencies in pricing and, ultimately, lower costs for lending money. Institutions with enterprise wide operational risk awareness and ownership and clear processes to monitor and manage it will be best equipped to embrace change and profit from it.
Risk Management Tools
A robust operational risk management process consists of clearly defined steps which involve identification of the risk events, analysis, assessment of the impact, treatment and reporting.
While sophisticated tools for measuring and managing operational risks are still to evolve, the current practices in this area are based on self-assessment. The starting point is the development of enterprise-wide generic standards for OR, which include corporate governance standards. It is extremely important for a robust risk management framework that operational risks are managed where they originate. Risk management and compliance monitoring are line management functions, and the risk culture has to be driven by the line manager. It is, therefore, the line manager's responsibility to develop the generic operational risk standards applicable to his line of business. The purpose of this tool is to set minimum operational risk standards for all business and functional units to establish controls and monitor risks through control standards and risk indicators.
Once the standards are set, the line manager has to undertake a periodic operational risk self-assessment to identify key areas of risk so that necessary risk-based controls and checks can be developed to monitor and mitigate the risks.
Control standards set minimum controls and minimum requirements for self-assessment of effectiveness of controls for the key processes.
The risk indicators identify operational risks and control weaknesses through statistical trend analysis. The risk indicators are reviewed periodically to ensure that they are constantly updated.
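As a minimal sketch of such a periodically reviewed indicator (hypothetical Java; production indicator frameworks use proper statistics rather than this naive moving average, and the indicator name and readings are invented):

// Hypothetical sketch of a risk indicator reviewed through simple trend analysis:
// flag the indicator when its recent average drifts above a control threshold.
public class RiskIndicator {
    private final String name;
    private final double threshold;        // e.g. acceptable failed-trade rate
    private final double[] observations;   // periodic readings, oldest first

    RiskIndicator(String name, double threshold, double[] observations) {
        this.name = name;
        this.threshold = threshold;
        this.observations = observations;
    }

    double movingAverage(int window) {
        double sum = 0;
        int start = Math.max(0, observations.length - window);
        for (int i = start; i < observations.length; i++) sum += observations[i];
        return sum / (observations.length - start);
    }

    boolean breachesThreshold(int window) {
        return movingAverage(window) > threshold;
    }

    public static void main(String[] args) {
        RiskIndicator failedTrades = new RiskIndicator(
                "Failed trades per 1,000", 2.0,
                new double[] {1.2, 1.4, 1.9, 2.4, 2.6});
        System.out.println(failedTrades.name + " breach: "
                + failedTrades.breachesThreshold(3)); // avg of 1.9, 2.4, 2.6 = 2.3 > 2.0
    }
}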
Reporting is a very important tool in the management of operational risks since it ensures timely escalation and senior management overview. Reporting should include significant operational risk exceptions, corporate governance exceptions, minutes of meetings of operations risk committee and real-time incident reports.
Operational risk management is one of the most complex and fastest-growing areas in the financial services industry. The methods to quantify the risk are evolving rapidly, though they are not likely, in the near future, to attain the sophistication with which market and credit risks are measured. Nevertheless, it is extremely important that the significance and impact of this risk area on the overall viability of a banking enterprise is given due recognition, so that there are strong incentives for banks to continue working towards developing models to measure operational risk and to hold the required capital buffers for this risk.

Tsunami Disaster & Rotary Members Visit Dec 26-31 2004

Introduction:
Rotary, along with Sherlock Holmes, P G Wodehouse, Shanker-Jaikishen, Mohd Rafi and Pink Floyd, has been one of my abiding passions. Befitting a Gemini, I am prone to frequent mood and interest changes, but Rotary has been one of the 'fixed points in a changing universe', as Holmes said about Watson.

At Rotary, we are committed to making the society we live in a better place while enjoying all the camaraderie, friendship and fun we share. I am privileged to belong to a Club in India called Rotary Club of Madras, Chenna Patna (RI Dist 3230), and we did a fair bit of work during and after the Tsunami that struck our coast on the morning of 26th Dec 2004. I have attached a report which I published after I visited the coastal areas a few hours after the disaster, trying to lend a helping hand to my fellow countrymen.

RELIEF OPERATIONS AT KARAIKAL/TARANGAMBADI

The sight that greeted us was incongruous and would have been funny but for the fact that the mood was somber. One Ford Ikon, a Cielo, an ambulance and a few buses were scattered around in a large field along with scores of two-wheelers, and one could appreciate the fury that had been wrought when one looked back and saw that the now-calm sea was a good 700 meters away. Once in a while, Nature reminds us she is the boss, and we helpless human beings can only look on in awe at her power when on display, and in sorrow at the destruction that is wrought.

Rotarians Capt Ravi, Saiseshan, Major Lakshmanan, Praveen Mehra and I, accompanied by my colleague Anil of HCL Technologies and a few others, left early for Karaikal in two cars. It was a good 6-hour drive, and we reached Mr Ilangovan’s spacious office close to 1 pm; the mood in the normally cheerful little town was one of distress and sadness. Mr Ilangovan and Mr Chockalingam – two Lions as committed as any – accompanied us, along with some of their associates, most notably Mr Kesava, who runs an orphanage, and Mr Satya, for a tour of the affected areas, and we saw for ourselves the state of roads, houses, bridges and boats.


Roads which once proudly carried 3 buses side by side and boasted a view of the sea rivaled by few now looked forlorn, their width reduced to a few feet, incapable of carrying even our car; the proud embankments, reduced to rubble, lay scattered around us; the sides of the road fell away in a steep gradient, with water on all sides. A few feet away we saw a most distressing sight – a blue plastic chair on which, presumably, an old man had been sunning himself in the early morning warmth and had watched helplessly as the waves hit him.

The tragedy hasn’t spared anyone from any stratum of society. A young doctor slated to go to the UK for further doctoral studies in a week’s time lost his life as he went to play tennis with his friends; his body was found about 2 kilometers from the shore - a young life and talent lost in its prime. Scores of fishermen and their families lost their lives, homes and livelihoods as the waves struck with force and surprise. Several pilgrims lost their lives while they were on the beach after spending the morning in prayer. More than a hundred patients lost their lives as their hospital – more than a kilometer from the seashore - was washed away, and we saw the now ghost-like frame of the building with no doors or windows.

After reconnoitring the area with our new friends, we were informed that there was a good deal of aid material available in Karaikal proper, but the areas near Karaikal like Poompuhar, Tarangambadi etc. were without proper aid as they were difficult to reach. We then went to one of the major relief camps at Tarangambadi, where we saw 1500 inmates packed into a small school building, living off aid. They were eating out of leaves and whatever they could get their hands upon, and our distribution of plates and glasses was very welcome. There were not enough medicines, and the doctors at the site were glad to see the quantity and quality of drugs that we were carrying; drinking water was in perennially short supply, and we were able to unload some of the water packets we were carrying. Clothes and bedsheets were to be distributed in areas where they were really needed, and our friends in Karaikal were planning a trip to Thirukkadayur to distribute the balance over the next day or two. We also met a hard-working Rajya Sabha MP, Mrs Gokulindira, and the Project Officer, Mr Balasubramaniam, who asked us for further relief material and long-term assistance to rebuild houses, schools, hospitals and boats.

There are requests for further material, most notably utensils, medicines, drinking water sachets, bedsheets and chatais (mats used for sleeping). There is also a need for like-minded people to volunteer and ensure distributions reach the right people. Most important, there is a very real need to assist the affected people in rebuilding their shattered lives – vocational training for widows; houses, boats and fishing nets for the fishermen; schools for children; and long-term medical assistance for people affected both physically and mentally by this most unexpected tragedy.

The most poignant memory of this trip was when we called the number of a certain Dr Valluvan, who was a close associate of our new friends in Karaikal. The recorded message – ‘this is Dr Valluvan speaking, I am fine…’ – died away to the sound of waves pounding in our ears, and we could visualize the valiant doctor, even at the last moment, letting his friends know he was fine. In heaven.

Coleridge came to mind...Water, water everywhere, but not a drop to drink….

XP - The New Way to Develop Software

Introduction –
Object-oriented programming using the Java language has become immensely popular. It has revolutionized software development to some degree, but recent studies show that half of software development projects are late and one-third are over budget. The problem isn't the technology; it's the way software is developed. So-called "lightweight" or "agile" approaches, coupled with the power and flexibility of object-oriented languages like the Java language, offer an intriguing solution. The most popular agile approach is called Extreme Programming, or XP. Using XP on object-oriented language projects can increase the chances of success dramatically.
Extreme Programming (XP) is a deliberate and disciplined approach to software development. We found XP to be successful because it stresses customer satisfaction, and the methodology is designed to deliver software as per customer needs, when it is required. XP empowers our developers to confidently respond to changing customer requirements, even late in the life cycle. XP prescribes a core set of values and practices that allow software developers to do what they do best: write code. XP eliminates the unnecessary artifacts of most heavyweight processes that distract from that goal by slowing down and draining the development staff (for example, Gantt charts, status reports, and multi-volume requirements documents).
This methodology also emphasizes teamwork. Managers, customers, and developers are all part of a team dedicated to delivering quality software. XP implements a simple yet effective way to enable groupware-style development.
XP improves a software project in four essential ways: communication, simplicity, feedback, and courage. Our XP programmers communicate with our customers – proxy or real – and fellow programmers on a continuous basis. Our designs are kept simple and clean, and we get feedback by testing the software starting on day one. Delivery of the system to the customers starts early, and changes are implemented without the traditional problems of change management as in the classical SDLC. With this foundation our XP programmers are able to courageously respond to changing requirements and technology.

The anxiety about what XP can do to a development process is a typical example of resistance to change: from the structured classical SDLC, with its phased development lasting long periods of time and its defined stages, to a programming concept where an entire SDLC can actually be completed in a single day. As an analogy, it's like the early days of Java development. There were programmers who understood object-oriented programming and took advantage of some of its facilities, especially inheritance; however, there were many more programmers who ported their C code to the Java language and then announced that they were developing as per OOP concepts, which led to serious repercussions. Technically, these developers were doing object-oriented programming, but the approach - building one huge object that contained all the code that used to be embedded in their procedural programs - resulted in a serious hit to performance.

The 12 practices of XP –

Extreme Programming, or XP, is constructed on 12 basic practices, given below; for the most part, these basic practices are rarely demanding or difficult to use and follow.
1. The Planning Process - allows the customer to define the business value of desired features, using cost estimates provided by the programmers to choose what should be done and what should be deferred. XP planning addresses two key questions in software development: predicting what will be accomplished by the due date, and determining what to do next. The emphasis is on steering the project rather than on exact prediction of what will be needed and how long it will take. There are two key planning steps in XP, addressing these two questions:
Release Planning is a practice where the Customer presents the desired features to the programmers, and the programmers estimate their difficulty. With the cost estimates in hand, and with knowledge of the importance of the features, the Customer lays out a plan for the project. Initial release plans are necessarily imprecise: neither the priorities nor the estimates are truly solid, and until the team begins to work, no one can effectively predict just how fast it will go. Even the first release plan is accurate enough for decision making, however, and XP teams revise the release plan regularly.
Iteration Planning is the practice whereby the team is given direction every couple of weeks. XP teams build software in two-week "iterations", delivering running useful software at the end of each iteration. During Iteration Planning, the Customer presents the features desired for the next two weeks. The programmers break them down into tasks, and estimate their cost - at a finer level of detail than in Release Planning. Based on the amount of work accomplished in the previous iteration, the team signs up for what will be undertaken in the current iteration.
These planning steps are very simple, yet they provide very good information and excellent steering control in the hands of the Customer. Every couple of weeks, the amount of progress is entirely visible. There is no "ninety percent done" in XP: an application is completed, or it is not. This focus on visibility results in a nice little paradox: on the one hand, with so much visibility, the Customer is in a position to cancel the project if progress is not sufficient. On the other hand, progress is so visible, and the ability to decide what will be done next is so complete, that XP projects tend to deliver more of what is needed, with less pressure and stress. A minimal sketch of the iteration sign-up rule follows.
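The sketch is hypothetical Java (the task names and numbers are invented); XP folklore calls this rule "yesterday's weather" – sign up for no more work than was actually finished last time:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of XP iteration planning: the team signs up for no more
// work than it actually finished in the previous iteration.
public class IterationPlanner {

    static class Task {
        final String name;
        final int estimate; // ideal engineering days

        Task(String name, int estimate) {
            this.name = name;
            this.estimate = estimate;
        }
    }

    // Fill the iteration, in the Customer's priority order, up to last iteration's velocity.
    static List<Task> plan(List<Task> prioritized, int lastIterationVelocity) {
        List<Task> signedUp = new ArrayList<>();
        int committed = 0;
        for (Task t : prioritized) {
            if (committed + t.estimate > lastIterationVelocity) break;
            committed += t.estimate;
            signedUp.add(t);
        }
        return signedUp;
    }

    public static void main(String[] args) {
        List<Task> backlog = new ArrayList<>();
        backlog.add(new Task("Customer search screen", 4));
        backlog.add(new Task("CSV export", 3));
        backlog.add(new Task("Audit trail", 5));
        for (Task t : plan(backlog, 8)) {       // last iteration's velocity: 8 days
            System.out.println("Signed up: " + t.name);
        }
    }
}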
2. Small Releases - means the developers put a simple system into production early and update it frequently on a short cycle. Releases should be as small as possible while still delivering enough business value to make them worthwhile. XP suggests that releases happen as soon as it makes sense to do so. This provides value to the customer as early as possible. Small releases also provide concrete feedback to developers on what meets customer needs and what doesn't. The team can then include these lessons in its planning for the next release.
3. Metaphor - means the team uses a common "system of names" and a common system description in development and communication. Extreme Programming teams develop a common vision of how the program works, which is called the "metaphor". At its best, the metaphor is a simple evocative description of how the program works. XP teams use a common system of names to be sure that everyone understands how the system works and where to look to find the functionality one is looking for, or to find the right place to put the functionality one is about to add. The system metaphor in XP is analogous to what most methodologies call architecture. The metaphor gives the team a consistent picture they can use to describe the way the existing system works, where new parts fit, and what form they should take.

4. Simple Design - The program should be the simplest design that meets the current requirements - without much thought about future versions. (That doesn't mean the program shouldn't scale, or that it should be inflexible.) The classical heavyweight SDLC approach says that even the most trivial design tasks have to be accomplished up front. XP says design should not be done all at once, up front, under the delusion that things won't change. XP considers design so important that it should be a constant affair. The XP methodology always tries to use the simplest design that could possibly work at any point, changing it as the development proceeds to reflect emerging reality. The simplest design should follow the basic premises given below –
a. Runs all the tests
b. Contains no duplicate code
c. States the programmers' intent for all code clearly
d. Contains the fewest possible classes and methods

5. Acceptance Test Plans – First the test plans are written; then the application is tested and validated against them to see whether the software passes the tests. Extreme Programming is obsessed with feedback, and in software development, good feedback requires good testing. XP teams practice "test-first development", working in very short cycles of adding a test, then making it work (a minimal sketch follows the list below). Almost effortlessly, teams produce code with nearly 100 percent test coverage, which is a great step forward in most shops. These "programmer tests", or "unit tests", are all collected together, and every time any programmer releases any code to the repository (and pairs typically release twice a day or more), every single one of the programmer tests must run correctly. This means that programmers get immediate feedback on how they're doing. Additionally, these tests provide invaluable support as the software design is improved. The point is simple. Writing tests first ensures:
· The most complete set of tests possible
· The simplest code that could possibly work
· A clear vision of the intent of the code
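Here is that sketch, assuming JUnit 4 on the classpath – the Fare class and its numbers are invented for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical test-first sketch: the test is written before Fare exists,
// then just enough code is written to make it pass.
public class FareTest {

    @Test
    public void offPeakFareIsDiscountedByTwentyPercent() {
        Fare fare = new Fare(100);           // base fare in cents
        assertEquals(80, fare.offPeak());    // fails until Fare is implemented
    }
}

// The simplest code that could possibly make the test pass.
class Fare {
    private final int baseCents;

    Fare(int baseCents) { this.baseCents = baseCents; }

    int offPeak() { return baseCents * 80 / 100; }
}

The test fails first, documenting the intent; the implementation is then the simplest code that could possibly work.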
6. Refactoring - With Refactoring, the team improves the design of the system throughout the entire development. The refactoring process focuses on the removal of duplication, which is a sure sign of poor design. The result is that XP teams start with a good, simple design and always end up with a good, simple design for the software. This lets them sustain their development speed, and in fact generally increase speed as the project goes forward. Refactoring has to be strongly supported by comprehensive testing to be sure that, as the design evolves, nothing is broken. A small before-and-after sketch follows.
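The sketch below is hypothetical Java illustrating the duplication-removal step:

// Before: the discount calculation is duplicated in two methods.
class InvoiceBefore {
    double retailTotal(double amount)    { return amount - amount * 0.05; }
    double wholesaleTotal(double amount) { return amount - amount * 0.12; }
}

// After: one method states the intent once; callers supply the rate.
class InvoiceAfter {
    private static final double RETAIL_DISCOUNT = 0.05;
    private static final double WHOLESALE_DISCOUNT = 0.12;

    double retailTotal(double amount)    { return discounted(amount, RETAIL_DISCOUNT); }
    double wholesaleTotal(double amount) { return discounted(amount, WHOLESALE_DISCOUNT); }

    private double discounted(double amount, double rate) {
        return amount - amount * rate;
    }
}

Behavior is unchanged – the programmer tests must pass before and after – but the intent is now stated exactly once.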

7. Pair Programming - In Pair Programming, all production code is written by two programmers working together at one machine. There are studies demonstrating that this method produces better software at the same or lower cost than using lone programmers. This practice ensures that all production code is reviewed by at least one other programmer, and results in better design, better testing, and better code. Pairing, in addition to providing better code and tests, also serves to communicate knowledge throughout the team. As pairs switch, everyone gets the benefit of everyone's specialized knowledge. Programmers learn, their skills improve, and they become more valuable to the team and to the company.

8. Collective Ownership - Each piece of code is subject to Collective Ownership, so any programmer can alter any piece of code with proper use of a tool to monitor the changes. All the contributors to an XP project sit together, members of one team. This team must include a business representative - the "Customer" - who provides the requirements, sets the priorities, and steers the project. It's best if the Customer or one of her aides is a real end user who knows the domain and what is needed. The team will of course have programmers. The team will include testers, who help the Customer define the customer acceptance tests. Analysts may serve as helpers to the Customer, helping to define the requirements. There is commonly a coach, who helps the team keep on track, and facilitates the process. There may be a manager, providing resources, handling external communication, coordinating activities. None of these roles is necessarily the exclusive property of just one individual: Everyone on an XP team contributes in any way that they can. The best teams have no specialists, only general contributors with special skills. Any person on the team should have the authority to make changes to the code to improve it. Everybody owns all the code, meaning everybody is responsible for it. This technique allows people to make necessary changes to a piece of code without going through the bottleneck of an individual code owner. The fact that everybody is responsible negates the chaos that ensues from no code ownership.

9. Continuous Integration - With Continuous Integration, code is integrated several times a day; progress is rapid and many integration problems are eliminated. Infrequent integration leads to serious problems on a software project. First of all, although integration is critical to shipping good working code, the team is not practiced at it, and often it is delegated to people who are not familiar with the whole system. Second, infrequently integrated code is often full of bugs. Problems creep in at integration time that are not detected by any of the testing that takes place on an unintegrated system. Third, a weak integration process leads to long code freezes. Code freezes mean long time periods when the programmers could be working on important shippable features, but those features must be held back.

10. 40-Hour Week - Tired programmers make more mistakes, so the team is generally limited to working 40 hours every week.

11. On-site Customer – It is better to have an on-site customer available with the authority to determine requirements, set priorities, and answer questions – or, in his absence, a suitable authority who can check the progress and see if it conforms to the requirements.

12. Coding Standards – It is essential to establish a Coding Standard, so programmers can meet the requirements of the other practices as this type of development requires continuous interaction between the programmers. Having a coding standard does two things:
· It keeps the team from being distracted by stupid arguments about things that don't matter as much as going at maximum speed.
· It supports the other practices.
Without coding standards, it is harder to refactor code, harder to switch pairs as often as one should, and harder to go fast. The goal should be that no one on the team can recognize who wrote which piece of code. The goal isn't to have an exhaustive list of rules, but to provide guidelines that will make sure the code communicates clearly. The coding standard should begin simply, then evolve over time based on team experience.

My Experience

Some of the above are essential to follow and followed as such at our development factory, some are followed on a case to case basis depending on the development patterns. Some of the practices not always used include –

No big up-front design – This is not always possible especially in large development with a vast database structure and most of our clients want to see a design before commencing the development – however, the entire development can be broken down into a series of iterative steps.

Metaphor and stories for coding – Not always required – its importance increases with the number of interdependent coding teams. Generally we ensure that Metaphors are utilized during any development process consisting of more than 4 paired teams over a time period exceeding 40 working days.

The 40-48-hour work week – We believe that human beings, especially as intelligent as our coders are talented enough to understand when they are tired and they need a break, however over a long period, we generally ensure and follow the 40-44 hr/week regime to ensure that all the programmers are fresh to undertake the necessary work-load.

· An on-site customer - Very often, we use a separate Project Manager, who is generally a very senior person, as the proxy for an on-site customer with one additional modification - The Project Manager does not drive or monitor the project development in any way. As part of presenting each desired feature, the XP Customer or his proxy defines one or more automated acceptance tests to show that the feature is working. The team builds these tests and uses them to prove to themselves, and to our customer, that the feature is implemented correctly. All our tests are automated – this is important because in the press of time, manual tests are skipped. Our XP teams ensure that once the test runs, the team keeps it running correctly thereafter by our rigorous Regression Testing Techniques. This means that the system only improves, always notching forward, never backsliding.

Frequent changes – If customers want to add features or change requirements, they are generally are allowed to depending on the complexity of change. The Management team just re-prioritzes features, but we believe that suggested changes should held until the next iteration.

Pair programming - falls into the "sometimes" bucket. One can easily visualize the advantages of pair programming during the design and algorithm-construction phase, but the efficacy during the coding and production phase is subject to actual environment. Also, very often the Project Manager alters the pair-programming rule in another way by assigning the pairs himself. He assign two programmers to work on a set task for one to three weeks. He meets with them and gives them the problem and the architected solution and sets them loose.

Advantages of XP -

Simple and elegant code – Software, which is engineered to be simple and elegant is more valuable than software that is complex and hard to maintain and XP methodology generally throws up simple and easily comprehensible code.

ROI – Our experience in Software Development has shown that a typical project will spend about twenty times as much on people than on hardware. That means a project spending 2 million dollars on programmers per year will spend about 100 thousand dollars on computer equipment each year. Let's say that we find a way to save 20% of the hardware costs by some very clever programming tricks. It will make the source code harder to understand and maintain, but we are saving 20% or 20 thousand dollars per year, which is a big saving. Now what if instead we wrote our programs such that they were easy to understand and extend. We could expect to save no less than 10% of our people costs. That would come to 200 thousand dollars, a much bigger savings. This is certainly something our customers appreciate.
Bugs - Another important issue to customers are bugs. XP emphasizes not just testing, but testing well. Tests are automated and provide a safety net for programmers and customers alike. Tests are created before the code is written, while the code is written, and after the code is written. As bugs are found new tests are added. Our strong Regression Testing methodologies sometimes to about 30% of code ensures bugs rarely get through twice.
Changing Requirements - XP enables us to embrace change. Too often we have found a customer will see a real opportunity for making a system useful with some changes after it has been delivered. XP short cuts this by getting customer feed back early while there is still time to change functionality or improve user acceptance. XP is especially useful when customers may not have a firm idea of what the system should do.

Project Risk - XP was also set up to address the problems of project risk. If customers need a new system by a specific date the risk is high. If that system is a new challenge for any software group the risk is even greater. If that system is a new challenge to the entire software industry the risk is greater even still. The XP practices are set up to mitigate the risk and increase the likelihood of success due to tight and controlled iterations, continuous feedback and repeated tests.
Smaller teams - We use XP generally for small groups of programmers - between 2 and 12, though we have occasionally used this methodology for larger projects of 30 or more people with success. We have found that on projects with dynamic requirements or high risk a small team of XP programmers will be more effective than a large team.

Continuous Interaction - XP requires an extended development team. The XP team includes not only the developers, but the managers and customers as well, all working together elbow to elbow. Asking questions, negotiating scope and schedules, and creating functional tests require more than just the developers be involved in producing the software.
Testability – Our testing methodology is geared to create automated unit and functional tests. Sometimes, we change your system design to be easier to test, but at the end of the day, every functionality is tested - where there is a will there is a way to test.
Productivity - The last thing on the list is productivity. XP projects unanimously report greater programmer productivity when compared to other projects within the same corporate environment. But this was never a goal of the XP methodology. The real goal has always been to deliver the software that is needed when it is needed.
Introduction –
Object-oriented programming using the Java language has become immensely popular. It has revolutionized software development to some degree, but recent studies show that half of software development projects are late and one-third are over budget. The problem isn't the technology; it's the way software is developed. So-called "lightweight" or "agile" approaches, coupled with the power and flexibility of object-oriented languages like Java, offer an intriguing solution. The most popular agile approach is called Extreme Programming, or XP. Using XP on object-oriented language projects can increase the chances of success dramatically.
Extreme Programming (XP) is a deliberate and disciplined approach to software development. We have found XP successful because it stresses customer satisfaction: the methodology is designed to deliver the software the customer needs, when it is needed. XP empowers our developers to respond confidently to changing customer requirements, even late in the life cycle. XP prescribes a core set of values and practices that allow software developers to do what they do best: write code. XP eliminates the unnecessary artifacts of most heavyweight processes - Gantt charts, status reports, multi-volume requirements documents - that distract from that goal and drain the development staff.
This methodology also emphasizes teamwork. Managers, customers, and developers are all part of a team dedicated to delivering quality software. XP implements a simple yet effective way to enable groupware-style development.
XP improves a software project in four essential ways: communication, simplicity, feedback, and courage. Our XP programmers communicate with our customers – proxy or real – and with fellow programmers on a continuous basis. Our designs are kept simple and clean, and we get feedback by testing the software starting on day one. Delivery of the system to the customer starts early, and changes are implemented without the traditional problems of change management seen in the classical SDLC. With this foundation, our XP programmers are able to respond courageously to changing requirements and technology.

The anxiety about what XP can do to a development process is a typical example of resistance to change: from the structured classical SDLC, with its phased development lasting long periods and its rigidly defined stages, to a programming concept where an entire development cycle can be completed in a single day. As an analogy, it's like the early days of Java development. There were programmers who understood object-oriented programming and took advantage of its facilities, especially inheritance; however, there were many more who simply ported their C code to the Java language and then announced that they were developing to object-oriented principles, with serious repercussions. Technically, these developers were doing object-oriented programming, but the approach - building one huge object that contained all the code that used to be embedded in their procedural programs - resulted in a serious hit to performance.

The 12 practices of XP –

Extreme Programming, or XP, is built on the 12 basic practices given below. For the most part, these practices are neither demanding nor difficult to use and follow.
1. The Planning Process - allows the customer to define the business value of desired features, using cost estimates provided by the programmers to choose what should be done and what should be deferred. XP planning addresses two key questions in software development: predicting what will be accomplished by the due date, and determining what to do next. The emphasis is on steering the project rather than on exact prediction of what will be needed and how long it will take. There are two key planning steps in XP, addressing these two questions:
Release Planning is a practice where the Customer presents the desired features to the programmers, and the programmers estimate their difficulty. With the costs estimates in hand, and with knowledge of the importance of the features, the Customer lays out a plan for the project. Initial release plans are necessarily imprecise: neither the priorities nor the estimates are truly solid, and until the team begins to work, no one can effectively predict just how fast they will go. Even the first release plan is accurate enough for decision making, however, and XP teams revise the release plan regularly.
Iteration Planning is the practice whereby the team is given direction every couple of weeks. XP teams build software in two-week "iterations", delivering running useful software at the end of each iteration. During Iteration Planning, the Customer presents the features desired for the next two weeks. The programmers break them down into tasks, and estimate their cost - at a finer level of detail than in Release Planning. Based on the amount of work accomplished in the previous iteration, the team signs up for what will be undertaken in the current iteration.
These planning steps are very simple, yet they provide very good information and excellent steering control in the hands of the Customer. Every couple of weeks, the amount of progress is entirely visible. There is no "ninety percent done" in XP: a feature is either complete or it is not. This focus on visibility results in a nice little paradox: on the one hand, with so much visibility, the Customer is in a position to cancel the project if progress is not sufficient. On the other hand, progress is so visible, and the ability to decide what will be done next is so complete, that XP projects tend to deliver more of what is needed, with less pressure and stress.
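As an aside, the steering arithmetic is simple enough to sketch in a few lines of Java. The sketch below is purely illustrative - the class and names are mine, not part of any XP toolkit - and shows the core rule: the team signs up for features, in the Customer's priority order, only until the estimates consume the velocity actually measured in the previous iteration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: picks features for the next iteration, in the
// Customer's priority order, until last iteration's velocity is used up.
public class IterationPlanner {

    static class Feature {
        final String name;
        final int estimateInPoints; // the programmers' difficulty estimate

        Feature(String name, int estimateInPoints) {
            this.name = name;
            this.estimateInPoints = estimateInPoints;
        }
    }

    // velocity = points actually completed in the previous iteration
    static List<Feature> plan(List<Feature> prioritized, int velocity) {
        List<Feature> committed = new ArrayList<Feature>();
        int remaining = velocity;
        for (Feature f : prioritized) {
            if (f.estimateInPoints <= remaining) {
                committed.add(f);
                remaining -= f.estimateInPoints;
            }
        }
        return committed;
    }
}
```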
2. Small Releases - means the developers put a simple system into production early and update it frequently on a short cycle. Releases should be as small as possible while still delivering enough business value to make them worthwhile; XP suggests releasing as soon as it makes sense to do so. This provides value to the customer as early as possible. Small releases also provide concrete feedback to developers on what meets customer needs and what doesn't. The team can then fold these lessons into its planning for the next release.
3. Metaphor - means the team uses a common "system of names" and a common system description in development and communication. Extreme Programming teams develop a common vision of how the program works, which is called the "metaphor". At its best, the metaphor is a simple evocative description of how the program works. XP teams use a common system of names to be sure that everyone understands how the system works and where to look to find the functionality one is looking for, or to find the right place to put the functionality one is about to add. The system metaphor in XP is analogous to what most methodologies call architecture. The metaphor gives the team a consistent picture they can use to describe the way the existing system works, where new parts fit, and what form they should take.

4. Simple Design - The program should be the simplest design that meets the current requirements - without much thought about future versions. (That doesn't mean the program can't scale or must be inflexible.) The classical heavyweight SDLC approach says that even trivial design tasks must be accomplished up front. XP says design should not be done all at once, up front, under the delusion that things won't change. XP considers design so important that it should be a constant affair. The methodology always uses the simplest design that could possibly work at any point, changing it as development proceeds to reflect emerging reality. The simplest design should follow the basic premises given below –
a. Runs all the tests
b. Contains no duplicate code
c. States the programmers' intent for all code clearly
d. Contains the fewest possible classes and methods

5. Acceptance Test Plans – First the test plans are written; then the application is built and validated against them. Extreme Programming is obsessed with feedback, and in software development, good feedback requires good testing. XP teams practice "test-first development", working in very short cycles of adding a test, then making it work. Almost effortlessly, teams produce code with nearly 100 percent test coverage, which is a great step forward in most shops. These "programmer tests", or "unit tests", are all collected together, and every time any programmer releases any code to the repository (and pairs typically release twice a day or more), every single one of the programmer tests must run correctly. This means that programmers get immediate feedback on how they're doing. Additionally, these tests provide invaluable support as the software design is improved. The point is simple. Writing tests first (a small sketch follows the list below) ensures
· The most complete set of tests possible
· The simplest code that could possibly work
· A clear vision of the intent of the code
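To make "test-first" concrete, here is a minimal sketch in Java using JUnit. The Money class and its methods are invented for illustration; the point is the order of events - the test is written before the class it exercises exists, fails first, and only then is the simplest code written that makes it pass.

```java
import junit.framework.TestCase;

// Written BEFORE Money exists: the failing test drives the design.
public class MoneyTest extends TestCase {

    public void testAddition() {
        Money five = new Money(5);
        Money seven = new Money(7);
        assertEquals(12, five.add(seven).amount());
    }
}

// The simplest code that could possibly make the test pass.
class Money {
    private final int amount;

    Money(int amount) {
        this.amount = amount;
    }

    Money add(Money other) {
        return new Money(this.amount + other.amount);
    }

    int amount() {
        return amount;
    }
}
```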
6. Refactoring - With Refactoring, the team improves the design of the system throughout the entire development. The refactoring process focuses on the removal of duplication, which is a sure sign of poor design. The result is that XP teams start with a good, simple design and always end up with a good, simple design for the software. This lets them sustain their development speed, and in fact generally increase speed as the project goes forward. Refactoring has to be strongly supported by comprehensive testing to be sure that, as the design evolves, nothing is broken.
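A small before-and-after sketch of the duplication-removing refactoring meant here, with the test suite acting as the safety net. The Invoice class and its tax arithmetic are hypothetical; behavior is identical in both versions, only the design improves.

```java
// BEFORE: the same total-with-tax arithmetic appears twice.
class Invoice {
    double netTotal(double[] amounts) {
        double sum = 0;
        for (double a : amounts) sum += a;
        return sum * 1.10; // duplicated tax logic
    }

    double grossTotal(double[] amounts, double shipping) {
        double sum = 0;
        for (double a : amounts) sum += a;
        return sum * 1.10 + shipping; // duplicated tax logic
    }
}

// AFTER: Extract Method removes the duplication; behavior is unchanged.
class RefactoredInvoice {
    private static final double TAX_RATE = 1.10;

    double netTotal(double[] amounts) {
        return taxed(sum(amounts));
    }

    double grossTotal(double[] amounts, double shipping) {
        return taxed(sum(amounts)) + shipping;
    }

    private double sum(double[] amounts) {
        double sum = 0;
        for (double a : amounts) sum += a;
        return sum;
    }

    private double taxed(double net) {
        return net * TAX_RATE;
    }
}
```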

7. Pair Programming - In Pair Programming, all production code is written by two programmers working together at one machine. Studies demonstrate that this method produces better software at the same or lower cost than lone programmers. This practice ensures that all production code is reviewed by at least one other programmer, and it results in better design, better testing, and better code. Pairing, in addition to providing better code and tests, also serves to spread knowledge throughout the team. As pairs switch, everyone gets the benefit of everyone's specialized knowledge. Programmers learn, their skills improve, and they become more valuable to the team and to the company.

8. Collective Ownership - Each piece of code is subject to Collective Ownership: any programmer can alter any piece of code, with a version-control tool tracking the changes. All the contributors to an XP project sit together as members of one team. This team must include a business representative - the "Customer" - who provides the requirements, sets the priorities, and steers the project. It's best if the Customer or one of her aides is a real end user who knows the domain and what is needed. The team will of course have programmers. It will include testers, who help the Customer define the customer acceptance tests, and analysts may serve as helpers to the Customer in defining the requirements. There is commonly a coach, who helps the team keep on track and facilitates the process, and there may be a manager, providing resources, handling external communication, and coordinating activities. None of these roles is necessarily the exclusive property of just one individual: everyone on an XP team contributes in any way they can, and the best teams have no specialists, only general contributors with special skills.
Any person on the team should have the authority to make changes to the code to improve it. Everybody owns all the code, meaning everybody is responsible for it. This allows people to make necessary changes to a piece of code without going through the bottleneck of an individual code owner, while the fact that everybody is responsible negates the chaos that would ensue from no code ownership at all.

9. Continuous Integration - With Continuous Integration, the team builds and integrates the system several times a day; progress is rapid and many integration problems are eliminated. Infrequent integration leads to serious problems on a software project. First, although integration is critical to shipping good working code, the team is not practiced at it, and it is often delegated to people who are not familiar with the whole system. Second, infrequently integrated code is often full of bugs: problems creep in at integration time that are not detected by any of the testing done on an unintegrated system. Third, a weak integration process leads to long code freezes - long periods when the programmers could be working on important shippable features, but those features must be held back.
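One concrete discipline behind this practice: before every integration, the pair runs the team's full collected test suite and integrates only on a green bar. A minimal JUnit-style sketch of such a suite follows, reusing the illustrative MoneyTest from above; real teams of the period typically wired a suite like this into a build tool as well.

```java
import junit.framework.Test;
import junit.framework.TestSuite;

// The whole team's programmer tests, collected in one place.
// Run before every check-in: integrate only if every test passes.
public class AllTests {

    public static Test suite() {
        TestSuite suite = new TestSuite("All programmer tests");
        suite.addTestSuite(MoneyTest.class);
        // ...every other TestCase in the system is added here...
        return suite;
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}
```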

10. 40-Hour Week - Tired programmers make more mistakes, so the team is generally limited to 40 hours of work every week.

11. On-site Customer – It is best to have an On-site Customer available with the authority to determine requirements, set priorities, and answer questions - or, in his absence, a suitable authority who can check progress and confirm that it conforms to the requirements.

12. Coding Standards – It is essential to establish a Coding Standard so that programmers can meet the requirements of the other practices, since this type of development demands continuous interaction between programmers. Having a coding standard does two things:
· It keeps the team from being distracted by stupid arguments about things that don't matter as much as going at maximum speed.
· It supports the other practices.
Without coding standards, it is harder to refactor code, harder to switch pairs as often as one should, and harder to go fast. The goal should be that no one on the team can recognize who wrote which piece of code. The goal isn't to have an exhaustive list of rules, but to provide guidelines that will make sure the code communicates clearly. The coding standard should begin simply, then evolve over time based on team experience.
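As an illustration of "simple guidelines, clear communication", a team's standard might begin with only a handful of rules like the ones reflected in this fragment. The specific rules and the class are invented for illustration - XP does not prescribe any particular standard, only that the team agree on one.

```java
// Illustrative house rules, kept deliberately short:
//  - intention-revealing names, no abbreviations
//  - small methods, one responsibility per class
//  - constants named, never inlined as magic numbers
public class OverdueNotifier {

    private static final int GRACE_PERIOD_IN_DAYS = 14;

    public boolean isOverdue(int daysSinceInvoice) {
        return daysSinceInvoice > GRACE_PERIOD_IN_DAYS;
    }
}
```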

My Experience

Some of the above are essential and are followed as such at our development factory; others are applied on a case-to-case basis, depending on the development pattern. The practices not always used include –

No big up-front design – This is not always possible, especially in large developments with vast database structures, and most of our clients want to see a design before development commences. However, the entire development can still be broken down into a series of iterative steps.

Metaphor and stories for coding – Not always required; its importance increases with the number of interdependent coding teams. We generally ensure that metaphors are used in any development involving more than 4 paired teams over a period exceeding 40 working days.

The 40-48-hour work week – We believe that people as intelligent as our coders are talented enough to know when they are tired and need a break. Over a long period, however, we generally enforce a 40-44 hour/week regime to ensure that all programmers stay fresh for the necessary workload.

An on-site customer - Very often we use a separate Project Manager, generally a very senior person, as the proxy for an on-site customer, with one modification: the Project Manager does not drive or monitor the project development in any way. As part of presenting each desired feature, the XP Customer or his proxy defines one or more automated acceptance tests to show that the feature is working. The team builds these tests and uses them to prove to themselves, and to our customer, that the feature is implemented correctly. All our tests are automated - this is important because in the press of time, manual tests get skipped. Once a test runs, our XP teams keep it running correctly thereafter through our rigorous regression-testing techniques. This means the system only improves, always notching forward, never backsliding.
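A minimal sketch of what such an automated acceptance test can look like, again in JUnit style. It is expressed against the system's outward behavior rather than its internals; the OrderSystem facade is hypothetical and stubbed here only so the sketch stands alone.

```java
import java.util.ArrayList;
import java.util.List;

import junit.framework.TestCase;

// Customer-defined acceptance test: states the expected business
// behavior from the outside, and runs forever as a regression check.
public class PlaceOrderAcceptanceTest extends TestCase {

    public void testPlacedOrderAppearsInOpenOrders() {
        OrderSystem system = new OrderSystem();
        String orderId = system.placeOrder("ACME", 100);
        assertTrue(system.openOrders().contains(orderId));
    }
}

// Hypothetical facade, stubbed so the sketch is self-contained.
class OrderSystem {
    private final List<String> open = new ArrayList<String>();
    private int nextId = 1;

    String placeOrder(String customer, int quantity) {
        String id = customer + "-" + (nextId++);
        open.add(id);
        return id;
    }

    List<String> openOrders() {
        return open;
    }
}
```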

Frequent changes – If customers want to add features or change requirements, they are generally allowed to, depending on the complexity of the change. The management team simply re-prioritizes the features, but we believe suggested changes should be held until the next iteration.

Pair programming - This falls into the "sometimes" bucket. One can easily see the advantages of pair programming during the design and algorithm-construction phase, but its efficacy during the coding and production phase depends on the actual environment. Very often the Project Manager also alters the pair-programming rule in another way, by assigning the pairs himself: he assigns two programmers to a set task for one to three weeks, meets with them, gives them the problem and the architected solution, and sets them loose.

Advantages of XP -

Simple and elegant code – Software engineered to be simple and elegant is more valuable than software that is complex and hard to maintain, and the XP methodology generally yields simple, easily comprehensible code.

ROI – Our experience in software development has shown that a typical project spends about twenty times as much on people as on hardware: a project spending 2 million dollars on programmers per year will spend about 100 thousand dollars on computer equipment in the same year. Say we find a way to save 20% of the hardware costs through some very clever programming tricks. That makes the source code harder to understand and maintain, and saves 20 thousand dollars per year. Now suppose instead we write our programs to be easy to understand and extend, and thereby save even a modest 10% of our people costs. That comes to 200 thousand dollars - a much bigger saving. This is certainly something our customers appreciate.
Bugs - Another issue important to customers is bugs. XP emphasizes not just testing, but testing well. Tests are automated and provide a safety net for programmers and customers alike. Tests are created before the code is written, while the code is written, and after the code is written. As bugs are found, new tests are added. Our strong regression-testing methodology - sometimes amounting to about 30% of the code - ensures that bugs rarely get through twice.
Changing Requirements - XP enables us to embrace change. Too often we have found that a customer sees a real opportunity for making a system more useful only after it has been delivered. XP short-circuits this by getting customer feedback early, while there is still time to change functionality or improve user acceptance. XP is especially useful when customers do not have a firm idea of what the system should do.

Project Risk - XP was also set up to address the problems of project risk. If customers need a new system by a specific date, the risk is high. If that system is a new challenge for the software group, the risk is greater. If it is a new challenge to the entire software industry, the risk is greater still. The XP practices are set up to mitigate these risks and increase the likelihood of success through tight, controlled iterations, continuous feedback and repeated tests.
Smaller teams - We use XP generally for small groups of programmers - between 2 and 12, though we have occasionally used this methodology for larger projects of 30 or more people with success. We have found that on projects with dynamic requirements or high risk a small team of XP programmers will be more effective than a large team.

Continuous Interaction - XP requires an extended development team. The XP team includes not only the developers but also the managers and customers, all working together elbow to elbow. Asking questions, negotiating scope and schedules, and creating functional tests require that more than just the developers be involved in producing the software.
Testability – Our testing methodology is geared to creating automated unit and functional tests. Sometimes we change the system design to make it easier to test, but at the end of the day every piece of functionality is tested - where there is a will, there is a way to test.
Productivity - The last thing on the list is productivity. XP projects unanimously report greater programmer productivity when compared to other projects within the same corporate environment. But this was never a goal of the XP methodology. The real goal has always been to deliver the software that is needed when it is needed.

Outsourcing & Operational Risk

It is my contention that effective outsourcing and offshoring actually mitigates operational risk. Please read further if you are a practitioner or student of information technology applications in financial services.

1. Introduction
2. Why outsource
3. Operational Risk & Basel II
4. Frameworks governing Operational Risk in Outsourcing
5. How does Outsourcing impact Operational Risk?
6. Measurement of Operational Risk in Outsourcing
7. Minimize Operational Risk in Outsourcing
7.1 Internal Readiness
7.2 Choice of vendor
7.3 Tools and processes for comprehensive management control
7.4 Benchmarking and frequent audits
8. Mitigation Plan
9. Does Outsourcing decrease overall Operational Risk?
10. Conclusion

1. Introduction

Outsourcing of business functions to specialist providers is common practice, and nowhere more so than in the financial services industry. The mandate for the top banking and financial services firms around the world is clear: be competitive or perish. This has forced most of them to renew their focus on bottom-line cost strategy, of which outsourcing has become a vital component. It is by now fairly obvious that if US and European banks are to remain competitive in a globalized economy, they have to follow the laws of economics and outsource as many activities as possible to places that are much cheaper and sometimes more competent. The banks are increasingly aware that non-core activities, which do not create immediate tangible value for the organization, can very well be done by outside experts at a fraction of existing costs. Outsourcing, especially offshore, offers significant benefits in cost savings and in the conversion of fixed costs into variable costs. It is all the more attractive to financial institutions and banks because significant effort goes into back-office processing, which is by nature technology intensive and a strong case for outsourcing. Thus, despite considerable press reaction and popular opposition, outsourcing is only growing stronger by the day. Most of the big banks around the world have begun outsourcing significant parts of their business to countries like India, which offer better bang for the dollar. Some of the better-known names include Citi, World Bank, Bank of America, Merrill Lynch, Lehman Brothers and Deutsche Bank, all of which have transferred the bulk of their back-office operations and new system development to India. Some have even outsourced high-value, risk-sensitive work such as trend analysis for the derivatives and equity markets, and are reaping the benefits of continued cost advantages and equally, if not better, qualified and functionally competent technical personnel.

This does not mean that outsourcing does not have its own problems. Many of the banks and financial service organizations that joined the first outsourcing wave without adequate research and preparation have had a bad experience or two. Even today, clients find it difficult to coordinate, monitor and control the performance of their vendors effectively. Still, the value proposition of the offshore vendors is so strong that these issues have not deterred people from going ahead.

It has been estimated that 47% of losses in capital markets and banking are due to systemic process and systems failures. In this paper, let us try to understand the impact of this outsourcing - of business processes, new technology development and existing system maintenance, by external vendors and/or by subsidiaries of the Firm in a geographically disparate location - on overall operational risk, and whether an arithmetic correlation between the two is possible.


2. Why outsource

Outsourcing has significant advantages: cost reduction, increased operational efficiency and better management of quality human resources. PwC's Paul Halpin said: "Many people think that operational risk inevitably increases when processes are outsourced. However the introduction of more effective controls and better management of risk, by an outsource provider, can often reduce operational risk". High-profile accountancy scandals, systemic failures and the lack of BCP/DRS systems, in conjunction with the proposals contained in Basel II and the EU Credit Directive, are increasing awareness of operational risk. As the financial markets move towards further statutory regulation, operational risk is something the market makers and executives need to be considering. Outsourcing providers can play a key part in their clients' operational risk strategies.

There have been several instances of significant quality improvement at the Firm due to better processes at the vendor site. Very often vendors, especially software development companies in places like India, are world leaders in processes for the software development life cycle, and many of them actually enable Firms to improve existing operational and process efficiencies through two-way knowledge transfer.

Outsourcing a range of functions to third party vendors is an attractive risk mitigation option. Outsourcing allows better alignment between cost structure and revenues, greater flexibility to introduce new products, more innovative investment structures, access to new technology, rapid integration of the same into the company’s systems and greater ability to keep pace with changing regulations and markets.

Given the complex and global nature of investment management and the varying functions that can be outsourced, identifying and developing the right model is often difficult. A four-step assessment can help recognize the appropriate outsourcing model – inshore, nearshore, offshore or combination/mix of the three. First, the divisional managers should identify why they want to outsource a particular function. Second, they need to isolate potential issues with outsourcing. Third, they should determine what to outsource. Finally, they need to understand their current and projected cost and revenue structures well enough to align those in an outsourced relationship.

As with any business activity, outsourcing has risks. Such risks depend on several factors, but are most clearly measured by the size, nature and criticality of the outsourced activity. If managed appropriately, outsourcing can be an efficient operational risk mitigation tool. Regardless of the EC disposition on operational risk capital charges, it is likely that more investment managers will turn to outsourcing as a source of flexibility in developing their businesses, reducing cost, and aligning their core competencies and risks with their value added.




3. Operational Risk & Basel II

There are various risks associated with outsourcing. Predominant among them is the 'fear of the unknown': customers often feel outsourcing and 'off-shoring' could be a black hole. The only way to mitigate this risk, apart from top-management commitment, is to involve the customer at every stage of the delivery process.
In the January 2001 Basel II Consultative Package, operational risk was defined as: "the risk of direct or indirect loss resulting from inadequate or failed internal processes, people and systems or from external events". The January 2001 paper went on to clarify that this definition included legal risk, but that strategic and reputational risks were not included in this definition for the purpose of a minimum regulatory operational risk capital charge. However in this paper for the purpose of better understanding, we shall also look at the possible impact of such risk on the overall portfolio of risks and ways to minimize the probability of such occurrences.
This focus on operational risk has been generally welcomed by the banking and financial services community, although concerns were expressed about the exact meaning of `direct and indirect loss'. As mentioned above, for the purposes of the Basel II Pillar 1 capital charge, strategic and reputational risks are not included, and neither is it the intention for the capital charge to cover all indirect losses or opportunity costs. As a result, reference to `direct and indirect' in the overall definition has been dropped. By directly defining the types of loss events that should be recorded in internal loss data, the RMG can give much clearer guidance on which losses are relevant for regulatory capital purposes. This leads to a slightly revised definition, as follows: "the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events". The RMG confirms that this definition does not include systemic risk and the operational risk charge will be calibrated accordingly.
It is important to note that this definition is based on the underlying causes of operational risk. It seeks to identify why a loss happened and at the broadest level includes the breakdown by four causes: people, processes, systems and external factors. This very basic definition, and more detailed specifications of it, is particularly useful for managing operational risk within institutions. However, for the purpose of operational risk loss quantification and the pooling of loss data across banks, it is necessary to rely on definitions that are measurable and comparable. Thus several banks and supervisors make distinctions between operational risk causes, actual measurable events (which may be due to a number of causes, many of which may not be fully understood), and the P&L effects (costs) of those events.
The most significant issue facing banks in relation to Basel II is aligning and upgrading data and existing IT systems infrastructure for completeness, consistency and integrity across the organization. The systems needed to comply with Basel II requirements under the advanced approaches for market, credit and operational risk must be compatible with the existing IT architecture and provide suitable reporting facilities and analytics. The second driver is governance and buy-in: the role and responsibilities of each individual and department must be clearly defined to avoid confusion, especially with regard to operational risk. The third is a clear risk-awareness culture. Outsourcing of IT functions can free up several teams within the IT division, and management itself, for risk measurement and mitigation.


4. Frameworks governing Operational Risk in Outsourcing

89% of lenders surveyed at a recent industry briefing hosted by mortgage outsourcing firm Marlborough Stirling Mortgage Services (MSMS) said they do not think lenders are as aware of operational risk as they should be. In such a scenario, it is vital that the available regulations and their impact on total VaR be studied, as they have a direct bearing on the capital adequacy ratio.

The U.K. Financial Services Authority's proposed guidance on operational risk recognizes that while outsourcing may reduce a company's level of risk, careful management is required to yield benefits. The Policy Statement on Prudential Risks, Systems and Controls issued in October by the FSA is an example of the action being taken by regulators to ensure that risks are kept to a minimum in outsourcing contracts. The FSA points out that this statement forms part of the Integrated Prudential Sourcebook it is developing as part of its regulation of financial activities in the UK. The system is due to be complete for all regulated firms, except banks, by the end of next year. Banks will be covered by the Basel II Capital Accord, which is being finalized over the coming months and is expected to be implemented by several regulatory bodies the world over from 2007. The statement includes new guidance on outsourcing. It is wide-ranging, because the FSA makes clear that, although the guidance is designed primarily to cover outsourcing arrangements, firms should consider its applicability to all forms of dependency on third parties. Among the issues covered are:
- Effect of the outsourcing arrangement on a firm's operational risk profile
- Desirable controls over the outsourcing supplier's employees and subcontractors
- The customer's business continuity requirements
- Due diligence requirements
- Appropriate performance measures
- Service management
- Special audit rights
- Change management procedures
- Confidentiality and security
- Rights relating to the termination of the arrangement
- Offshore outsourcing
- Outsourcing of a controlled function
- Risk profile of the vendor

The consultation paper pointed out that a firm's operational risk profile might vary through the life of an outsourcing arrangement. The policy statement amplifies this by pointing out that operational risk may vary, for example, when the decision to outsource is made, during the negotiation phase, during implementation and maintenance, and on termination of the contract - some issues, such as IPR protection, may remain pertinent well after the contract ends. The statement also stresses that outsourcing may reduce operational risk, and that in such circumstances a suitably proportionate approach to the application of the guidance may be appropriate.

Another development relates to control over outsourced suppliers. The consultation paper suggested that firms consider imposing far-reaching controls on the supplier's employees involved in providing the outsourced services. The policy statement goes further, stressing that it may even be necessary for the firm to review and consider the adequacy of the staffing arrangements and policies of a service provider. The statement also calls on firms to look beyond the specifics of the outsourcing arrangements to consider the extent to which they support their business strategies, and to review this whenever the arrangements change. Even with these additions, however, firms should not consider the statement as covering all circumstances.


5. How does Outsourcing impact Operational Risk?

So does it mean that Firms involved in offshoring must necessarily bear more operational risk than those that do not? Not in most cases. Outsourcing sometimes allows a rapid upgrade of technology and process, for example, since vendors are IT specialists and are usually well ahead of the Firms on the technology curve.

This can translate into more efficient processes and systems, which bring down the risk coefficient and allow operations to be carried out more efficiently. The Firm is also able to pass some of the risks to vendors, whose financial terms are determined by service levels. Mature vendors bring with them a wealth of information gained from working with best-of-breed organizations, and these best practices can be shared with other organizations as well. Besides, most of the concerns raised here can be mitigated easily if firms are aware of them and follow a methodical risk-management approach.

In the sections that follow, we discuss the impact of outsourcing on the four basic premises of operational risk - people, process, technology and non-predictive events.

People Risk –
People risks generally arise for a variety of reasons, ranging from potential loss of jobs at the parent site to attrition at the client/offshore site.
Loss of jobs – there is a very real risk arising from the perceived and actual loss of jobs at the Firm's principal location when services are outsourced to vendors or to offshore geographies. This leads to employee dissatisfaction, security lapses due to laid-off employees, and technology or process risks from attrition in a potentially unstable environment.
Culture issues – the culture-disparity issue is less understood and not given the attention it deserves. Different cultures and different approaches to problem resolution give rise to systemic and process risks, increasing the probability of operational defaults.
Knowledge transfer gaps – typically any outsourcing engagement starts with a knowledge-transition process, in which vendor staff visit the Firm's locations and vice versa for a thorough de-briefing. The effectiveness of this transition is one of the critical success factors for the overall engagement. Very often, lack of domain knowledge severely limits understanding of the fundamental concerns of the business users. This can significantly increase the operational risk faced by the bank, as contracted vendor employees might take actions or decisions inappropriate to the situation.
Strategic alignment – vendors typically focus on day-to-day operations that meet the agreed SLAs, but lack strategic thinking and alignment with the Firm's long-term goals. This can result in inadequate or inappropriate systems and processes.
Attrition – a significant percentage of vendor employees might not stay on the same assignment for long; this is one of the major issues facing companies now, given the ever-increasing opportunities in a burgeoning job market. It means Firms have to share proprietary process knowledge with a larger set of external people who may not be on their project(s) after some time, and there is also the issue of personnel moving between projects for two different clients. Besides the obvious potential for security lapses, there is the added burden of repeated knowledge transfer.


Process Risk -
Maturity of process – in an outsourcing engagement, the vendor essentially becomes an extended arm of the Firm, and vendor processes and performance directly impact service or product quality. If these processes are not robust and do not conform to quality standards, the Firm is in for unpleasant surprises. For example, the lack of a constantly monitored, effective knowledge-management process can make vendors heavily dependent on certain key people - who are perhaps the most hunted employees in the business anyway - so that if they change organizations, retraining and process-knowledge transfer might not be smooth, resulting in immediate performance issues.
Response time – because of the complicated engagement structure and onsite-offshore coordination issues, decision-making and escalation processes often slow down, at least initially. In unexpected critical situations, people lacking authority might be unable to respond adequately and become mere spectators to potential harbingers of disaster.
Alignment of objectives & processes – the Firm and the vendor need to be closely aligned in their expectations and goals. For example, an existing vendor might not be able to scale up its processes when the Firm suddenly requires it, for lack of funds, capability or choice, since it might not be in the vendor's best interest. On the other hand, a vendor might invest significantly in processes that are not very critical for one Firm but significant to others, and vice versa.
Management and reporting
BCP & DRS – most offshore IT vendors claim to have robust business-continuity and disaster-recovery processes, but the Firm needs to thoroughly vet these initiatives for the practicality and robustness of the defined processes.
Physical security – access to vendor facilities can never be as closely monitored as the Firm's own facilities. Even the vendor's own employees might be a threat, as reference and authorization checks in vendor countries might not be as stringent. Internal leaks are a very real threat.
Continuous process improvements – the vendor's interest in quality levels will generally conflict with the Firm's, which may cause differences in generic operational processes.
Regulatory processes – certain risks arise out of government regulations, especially those relating to people processes, and these may significantly impact the associated VaR models. For example, under Indian law, compensation for breach of contract paid in India cannot be repatriated to a non-Indian company, by virtue of India's foreign exchange control regime.

Technology Risk
The U.K. Financial Services Authority noted, "The increasing automation of systems and our reliance on IT has the potential to transform risks from minor manual processing errors to major systematic failures." That's particularly true in the banking industry, where for example, the outsourcing of check processing is a widespread practice. A major failure in the information technology process would bring a bank to its knees in just a few minutes.

· A recent Gartner report shows that 2 out of 5 enterprises that suffer a disaster go out of business within 12 months.
· According to Computer Economics, computer viruses and worm attacks cost business $17.1 billion in 2000 compared to $12.1 billion in 1999.
· In the 2001 Computer Crime and Security Survey, conducted annually by the Computer Security Institute (CSI) and the FBI, eighty-five percent of respondents reported unauthorized use of their computer systems. The study also found that while sixty-four percent of respondents reported that their organizations suffered direct financial loss because of security breaches, only thirty-five percent could accurately determine how much was lost.
· The CERT Coordination Center (CERT/CC) at Carnegie Mellon, a federally funded research and development center that studies Internet security vulnerabilities, recently issued its vulnerability statistics for the first two quarters of 2001. The data suggests a dramatic increase in digital risk activity - an almost 70% increase in the number of security incidents in 2001 over 2000.

While traditional risks like fire and flood are relatively containable in the physical world with good communication and continuity systems, network security breaches can inflict damage and losses on others linked to a Firm's network through the Internet, at an uncontrollable rate and with unprecedented reach. Any organization connected to the Internet for back-office processing or software development at a remote location, regardless of how it uses that connection, must be concerned with several potential points of compromise, such as:
· Data theft - involves unauthorized insiders or outsiders stealing sensitive information and intellectual property
· "Island hopping" - attackers can gain access to an insecure computer network and use it to launch attacks on the other networks. By compromising security weaknesses at multiple points, attackers can use victim hosts as "zombies" to target denial-of-service assaults that are traceable back to the victim's IP address.
· E-mail compromise - places companies at risk of unknowingly spreading a virus or Trojan horse and harboring legally sensitive unprotected e-mail content.
· Web site exposures - occur when a site becomes unavailable or is maliciously altered to include erroneous information.

Thus the operational risks associated with technology failures, obsolescence or data security can be broadly classified as

· Risks associated with Information sensitivity/Information availability and data Security
· Risks associated with performance of technology systems
· Risks associated with Transaction
· Risks arising due to non-availability of BCP& Disaster Recovery Systems
· Risks arising due to Technology obsolescence
· Risks due to Virus and malicious attacks

Non-Predictive / Other risks
Country Risks – Some of the risks associated with countries of operation would include -
· Macro-economic evaluation of the domestic economy;
· Extent to which government policies are conducive to competitiveness;
· Extent to which enterprises are performing in an innovative, profitable and responsible manner; and
· Extent to which basic, technological, scientific and human resources meet the needs of business.
Reputation Risk & Legal Risk
Though strictly not part of operational risk, the sheer potential for disaster and the operational impact these events can have make it imperative to consider them among the potential operational risks in outsourcing.
Perhaps the greatest risk of all in the e-business world is harm to reputation and the catastrophic, unlimited financial consequences that could stem from liability claims by damaged stakeholders (customers, suppliers, shareholders, etc). As the Internet continues to evolve as a business tool, stakeholder accountability will be the prime motivator, and certain events may even open the possibility of criminal action.
Some of the horror stories could come true if –
· Firm secrets are stolen by a competitor and used against the Firms
· Productivity loss due to system crashes throughout the interconnected supply chain
· Public display of intimate & sensitive information by a hacker
· Loss of employee morale when internal hackers gain access to private human resource records
· Failure to fulfill SLA and impact on existing customer and vendor relationships
· Liability claims that result from digital risk exposures inherited from Firm acquisitions and outsourcing
Force Majeure events
This is one area where outsourcing can significantly decrease operational risk, as the probability of natural disasters occurring at two separate places at the same time is extremely low. A case in point is the collapse of the Standard Chartered Bank data center in Mumbai a decade back, or the man-made 9/11 attacks - in both cases, significant outsourcing of data and processes played a major part in getting systems running again within hours. An attack of the magnitude the World Bank suffered during 9/11 would have been far worse if operations had not been running in parallel at Chennai, India and other places.



6. Measurement of Operational Risk in Outsourcing

Why measure? Simple: anything that cannot be measured cannot be improved. Unlike credit risk and market risk, operational risk is not well researched, and there are no one-size-fits-all software programs available for procedure definition, measurement and mitigation. Outsourcing as a concept is only a few years old, and the many horror stories tend to skew the available historical loss-event data, leading to erroneous results. Overall, outsourcing presents a risk that must be managed within the ambit of the Operational Risk component of Basel II. As with all operational risk, the measurement of risk for regulatory capital allocation uses the Value at Risk (VaR) measure. Operational Risk VaR is the amount that represents the maximum likely loss a bank or other institution is exposed to over a given time, with a specific level of confidence. This figure, which many banks allocate at the level of 15% of regulatory capital, is based on the experience of operational failures over a given historical period.
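As a minimal sketch of the historical-simulation idea behind such a figure - assuming a series of periodic operational loss amounts is available; the numbers and the method are illustrative, not a Basel-prescribed calculation:

```java
import java.util.Arrays;

// Illustrative historical-simulation VaR: the loss amount that is not
// exceeded in a given fraction (the confidence level) of observed periods.
public class OperationalVaR {

    static double var(double[] periodicLosses, double confidence) {
        double[] sorted = periodicLosses.clone();
        Arrays.sort(sorted); // ascending order of loss
        int index = (int) Math.ceil(confidence * sorted.length) - 1;
        return sorted[Math.max(index, 0)];
    }

    public static void main(String[] args) {
        // hypothetical quarterly operational losses, in $ millions
        double[] losses = { 0.2, 0.5, 1.1, 0.0, 3.4, 0.7, 0.3, 2.2 };
        System.out.println("99% VaR: $" + var(losses, 0.99) + "M");
    }
}
```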

As with all risk-measurement techniques, the first step is to identify and draw up a laundry list of possible causes of failure. Having identified the potential sources of failure, and thus the potential risks, the second step is to measure the probability and impact of those risks. Newer operational risks are identified all the time, and non-predictable events keep occurring, so there also needs to be a clear policy for including newly identified risks, with measurement policies for their likely impact on the overall VaR.

There are no off-the-shelf tools or processes that can enable a Firm to measure such risks with any degree of accuracy. There therefore needs to be a clearly defined process for identifying stress points, analyzing the impact of those stress areas, and gathering loss data for probability of occurrence, to provide a measurement guideline for operational risk in such business cases. This approach also makes practical sense, as most of these risks are very specific to the organizations, the outsourcing deals they enter into, and the vendor parties involved - there may be different stress points for different outsourced business practices, and for different vendors operating in different geographies across different timelines.

However, in addition to this figure, the Accord will require banks to track losses such as legal costs, loss of reputation and unrealized profits. Banks can approach operational risk "top down", seeking an overall measure - e.g. a percentage of gross income or a multiple of certain costs - without identifying the specific risk events suffered by the bank. But any organization that moves onto the more advanced bases of measurement for Basel will require a "bottom up" assessment of actual risks.

The matrix can look something like this (probability and impact can be scored separately for the in-house and the outsourced case, and the Singular VaR column filled in as loss data accumulates):

| Business Area | Possible operational risks | Functional effect | Probability of Occurrence | Impact | Singular VaR |
|---|---|---|---|---|---|
| Trade reconciliation | Data not updated | Trade failure | Medium (quantify using past data) | Medium (adjust SLA to achieve business objective) | |
| | Virus attack at partner site | Customer loss, inaccurate data, perhaps transaction losses; the entire outsourced operation will be affected | High | Very High | |
| | Attrition of critical manpower | Customer losses, trade secrets may be let out, key information non-availability | High | High (maybe a bit less than for in-house occurrence) | |
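A minimal sketch of how such a matrix might be held in code, so that each entry can later carry its own Singular VaR figure; the rating-to-probability mapping, the impact figures and the entries themselves are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative mapping from qualitative ratings to annual probabilities
RATING_TO_PROB = {"Low": 0.05, "Medium": 0.20, "High": 0.45, "Very High": 0.70}

@dataclass
class RiskEntry:
    business_area: str
    risk: str
    functional_effect: str
    probability: str        # qualitative rating, refined from past data
    impact_estimate: float  # assumed expected loss if the event occurs

    def expected_annual_loss(self) -> float:
        """Probability-weighted loss, a crude stand-in for Singular VaR."""
        return RATING_TO_PROB[self.probability] * self.impact_estimate

register = [
    RiskEntry("Trade reconciliation", "Data not updated", "Trade failure",
              "Medium", 250_000.0),
    RiskEntry("Trade reconciliation", "Virus attack at partner site",
              "Customer loss, inaccurate data", "High", 1_500_000.0),
]
print(sum(e.expected_annual_loss() for e in register))
```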













7. Minimize Operational Risk in Outsourcing

Operational risk is generally a result of process failures and people-related issues, and can thus be minimized by systematically identifying stress points and mitigating risk issues as they are found.
7.1 Internal Readiness
The first question to be asked is: is the enterprise ready for an offshore outsourcing initiative? To ensure readiness, certain steps need to be taken: developing communication plans and channels, securing senior executive sponsorship, assessing the portfolio of technologies and processes, preparing for remote management, deciding whether to outsource to a vendor or start a subsidiary, training for cultural differences, and drawing up lay-off, re-training, re-skilling and re-deployment plans. Some of the critical decisions can be tabulated as

· Build vs buy decision
· Investment decision – huge investments are generally required
· Clear understanding of what can be outsourced
· Start small; increase volumes and complexity gradually
One of the key aspects is not to try to outsource very sensitive, high-end customer service calls, or calls involving significant interaction with front office traders and institutional customers, before the vendor's offshore centers or subsidiary offshore operations have achieved sufficient maturity in terms of processes, knowledge transfer and people.
One of the rules of outsourcing is that if a process can be codified, it can be done remotely and supported by IT. If it is still tacit and requires a lot of unstructured discussion, it has to stay in the geography of operation.
7.2 Choice of vendor
One of the most common traps for large Firms that start outsourcing is to go straight to the equivalent large three or four companies in the outsourcing/IT/ITES space. The problem is that though they may possess a great deal of experience in generic outsourcing, they may not necessarily have the right domain knowledge for the specialized part of the business identified for outsourcing, or the adaptability to make the changes the Firm requires. Thus it sometimes makes sense to locate smaller firms with definable functional skills and the credibility required to manage processes, including legal issues. Smaller vendors can also typically be molded more easily to suit specialized processes and cultures. Some of the key parameters in choosing the right vendor include the following (a simple weighted-scoring sketch for comparing candidates follows this list) –
Location of vendor
Geopolitical Risk - border unrest, religious strife, political processes, government policies (taxes, duties, regulatory hurdles), relations between countries, war, legal frameworks and the probability of terrorist-related incidents.
Socioeconomic Risks - Are the shareholders and the local community willing to accept the significant socio-economic gaps, or will they see this as job loss to a lower-cost sweatshop?
Vendor Landscape - Many offshore vendors lack maturity and focus and there is a great disparity in quality and processes. Sometimes, the number of suppliers (and locations) in the market also adds to the difficulty in evaluating vendors.
Cultural Differences - Cultural differences need to be managed on both sides of the value chain, often across oceans. There needs to be a defined process of knowledge management on both sides, and a clear understanding of cultural differences at the outset, with the gap narrowed over time.
Legal/Contractual – some of the key questions that need to be answered include –
- How can companies ensure that key industry regulations and standards are designed into the offshore solution?
- How can companies monitor and manage offshore compliance?
- What are the legal protections given to IPR related clauses?
- What are the legal consequences arising out of security breaches?
Internal Policies of vendor
Human Resource Policies – this is the key to the success or failure of any offshore vendor – how well the company manages its people, its attrition rate and quality of employees.
Knowledge Transfer - What is the best way to manage the transitioning of knowledge and key resources? Should a phased strategy be used to mitigate risk and manage productivity?
Change Management - Offshore deals require significant change management within the enterprise. How can companies effectively communicate and work with impacted employees in understanding and supporting the use of offshore resources? How should personnel and issues be managed to minimize the potential for disruption?
Communication channels – the vendor needs to have a clear process of communication, both internally and with the Firm, geared for time-based, schedule-based and event-based impacts.
BCP/DRS - How can companies ensure they maintain flexibility and responsiveness in meeting customer demand? How do they maintain data security? What steps need to be taken to formalize offshore security and data privacy plans that comply with International standards like ISO17799 / BS7799, CoBIT, Safe Harbor etc.?
Pricing - How should companies manage currency and project scope risk? Should companies choose fixed pricing or time and materials? What are the key factors to include in arriving at fixed pricing?
Treasury - For long-term offshore engagements, treasury issues can be either a competitive advantage or a risk to the financial viability of the model. Companies need to build exchange rate fluctuations, inflation rates, interest rates and other treasury issues into their financial models. Technically this is part of Market Risk, but the country's rating will affect forex rates, and that effect becomes part of Operational Risk.
Exit Planning - What happens if the offshore engagement does not work? What happens next? Companies need to invest time in building an exit plan that answers questions beyond IP issues. There need to be clear financial models for exit, knowledge and/or resource transition plans, timing, who is involved, etc.
Domain knowledge and skill sets of the vendor
Offshore vendors are often lacking in domain expertise, industry-specific expertise and the ability to support multiple applications and/or business processes.
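As referenced above, one way to make these parameters comparable across candidate vendors is a simple weighted scoring model; the criteria, weights and 1-5 scores below are purely illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical weights over the parameters discussed above (sum to 1.0)
WEIGHTS = {
    "geopolitical": 0.15,
    "legal_contractual": 0.20,
    "hr_policies": 0.20,
    "domain_knowledge": 0.25,
    "bcp_drs": 0.10,
    "pricing": 0.10,
}

def vendor_score(scores: dict) -> float:
    """Weighted average of per-criterion scores (each on a 1-5 scale)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

candidates = {
    "Vendor A": {"geopolitical": 4, "legal_contractual": 3, "hr_policies": 4,
                 "domain_knowledge": 5, "bcp_drs": 3, "pricing": 4},
    "Vendor B": {"geopolitical": 3, "legal_contractual": 4, "hr_policies": 3,
                 "domain_knowledge": 3, "bcp_drs": 5, "pricing": 5},
}
for name, s in candidates.items():
    print(name, round(vendor_score(s), 2))  # A: 3.95, B: 3.6
```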
7.3 Tools and processes for comprehensive management control
Communication tools
One of the key areas of investment is a good set of industry-standard communication tools: a collaborative workspace enables instant decision-making, video conferencing tools can increase productivity besides controlling potential risk factors, and a consistently followed communication protocol ensures unpleasant surprises are kept to a minimum.
Software Tools
Partners need a good CRM system which listens to customers carefully and records and analyzes complaints to track early symptoms. MIS tools need to be used to constantly track performance, process tools to track schedule variance, and near-real-time risk tools that give top management a good snapshot of the risk status and performance of various departments are essential for minimizing operational risk probabilities.
Measurement of deliverables versus expectations
Specialized risk managers need to be involved in, and part of, the teams that evaluate vendor performance against deliverables, to bring the risk perspective to the outsourced business function. It is imperative to have a powerful MIS tool in place which tracks any deviation from the norm, and data collation needs to be a constant exercise. Collecting relevant data over a period of time might look costly and redundant initially, but later becomes a powerful and sophisticated tool, not only to measure risk and comply with regulations but also as a competitive advantage in process efficiency and decision-making. These metrics should be job/process-specific, not vendor-specific.
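A minimal sketch of the kind of deviation tracking such an MIS tool might perform, assuming a simple rolling baseline; the metric, history and threshold are illustrative.

```python
from statistics import mean, stdev

def flag_deviation(history, latest, z_threshold=2.0):
    """Flag a metric whose latest reading deviates from its rolling
    baseline by more than z_threshold standard deviations."""
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return latest != mu
    return abs(latest - mu) / sd > z_threshold

# Illustrative: daily settlement error counts for an outsourced process
error_history = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
print(flag_deviation(error_history, latest=11))  # True -> raise an alert
```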
7.4 Benchmarking and frequent audits
Setting operational process benchmarks for errors and complaints, and incorporating them in the SLAs, is essential for monitoring vendor performance. External benchmarks such as CMMi levels for process definition or P-CMM levels for human resource performance can serve here; most such certifications also require frequent audits for sustained process maturity.
7.5 Service Level Agreements
Some of the critical issues in an SLA involve understanding what the financial institution requires from its vendors and how to ensure that the minimum levels are met X% of the time. Next is how the processes and deliverables are built into a constant improvement cycle.
The key points could be
- Understanding needs - Minimum expected levels of service
- Protecting data and processes
- Insuring worst case scenarios
- Reward programs linked to achievement of certain base level agreements, with bonuses based on the degree of performance above the base benchmark (a sketch of such a scheme follows this list)
- Penalties need to be clearly incorporated
- Continuous improvements on a defined time scale
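A minimal sketch of a reward/penalty scheme of the kind described above, assuming an uptime-style service level; the base level, bonus rate and penalty rate are illustrative assumptions.

```python
def sla_adjustment(achieved: float, base: float = 0.98,
                   bonus_rate: float = 50_000.0,
                   penalty_rate: float = 80_000.0) -> float:
    """Fee adjustment per percentage point above/below the base service
    level. Positive = bonus to vendor, negative = penalty. The penalty
    rate is deliberately steeper than the bonus rate."""
    gap_pct = (achieved - base) * 100
    return gap_pct * (bonus_rate if gap_pct >= 0 else penalty_rate)

print(sla_adjustment(0.995))  # above base -> bonus of 75,000
print(sla_adjustment(0.96))   # below base -> penalty of -160,000
```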





8. Mitigation Plan

Though outsourcing, especially offshore, seems to offer banks and financial institutions significant benefits in terms of cost savings and conversion of fixed costs into variable costs, it is not without its own problems. Risk management is not free and is frequently regarded as a considerable cost center. Just as managing a financial portfolio requires extra research, trading commissions and time, the creation of an outsourcing portfolio that balances risks and tracks return on investment requires data, analysis, constant monitoring and planning. Typically, companies pursue one of two strategies: engage multiple vendors, or engage a single vendor with an inventory of outsourcing facilities deployed across several geographies. To extend the financial analogy, just as many of us prefer a single, highly diversified mutual fund for our investments, Firms could consider using a single vendor with a broad geographic footprint and clear, demonstrable processes and risk mitigation mechanisms. That footprint addresses the geographic risk issue, while the single management structure at both the vendor and the client helps keep risk management costs lower, allowing companies to continue achieving high returns from outsourcing.
Understand – the first step is to understand the nature of the operational risks involved and gauge the probability of the various risk types impacting the outsourced project or process. Section 7 began what could be an endless enumeration of risks and risk types; each Firm will need to build its own list, with its own probabilities and potential impacts.
Measure – an arithmetic measure is the easiest and earliest indication that things are fine or going wrong somewhere. Financial institutions may use whatever VaR calculations they choose and track them closely for even minor changes – an advanced measurement approach, an internal rating approach or any simpler formula for tracking risk areas and monitoring expected impacts. To benefit from a reduction in regulatory capital, banks and other financial institutions have to demonstrate to the regulators that there is an effective decrease in operational risk in outsourcing, that major impact areas are identified, and that back-up plans are in place.
Loss event database building – it is imperative to have access to a good loss event database to build an initial understanding of the total operational risk, and of the Bank/Financial Institution's risk appetite for the outsourcing process. This data needs to be constantly updated with further event types and event losses, and may also need reconfiguring when changes in processes, external factors or relevance alter an event's impact.
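A minimal sketch of what such a loss event database might look like, using an embedded SQLite store; the table layout, column names and the sample event are illustrative assumptions.

```python
import sqlite3

# Illustrative schema for a loss event database (names are assumptions)
conn = sqlite3.connect("loss_events.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS loss_events (
        event_id      INTEGER PRIMARY KEY,
        event_date    TEXT,
        business_area TEXT,
        event_type    TEXT,   -- e.g. 'process failure', 'virus attack'
        vendor        TEXT,   -- NULL for in-house events
        gross_loss    REAL,
        recovered     REAL,
        description   TEXT
    )
""")
conn.execute(
    "INSERT INTO loss_events (event_date, business_area, event_type,"
    " vendor, gross_loss, recovered, description) VALUES (?,?,?,?,?,?,?)",
    ("2006-11-02", "Trade reconciliation", "process failure",
     "Vendor A", 120000.0, 45000.0, "Stale positions fed to settlement"),
)
conn.commit()
# Net losses by event type feed the frequency/severity estimates above
for row in conn.execute(
        "SELECT event_type, SUM(gross_loss - recovered) FROM loss_events"
        " GROUP BY event_type"):
    print(row)
```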
Report, Monitor, Manage & Improve – base risk elements need to be determined and base levels constantly monitored for any fall in delivery quality or movement in defined parameters such as the following (a monitoring sketch follows this list):
· Increase in Attrition
· Increase in end-product or mid-project errors
· Increased tolerance of errors
· Training schedule variances
· Increase in Communication link failures
· Decrease in communication frequency
· Increased absenteeism
Reporting is an essential element, as it ensures top management is kept abreast of the financial institution's risk appetite and current risk level.
Insure – essential to protect against worst-case scenarios.
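A minimal sketch of base-level monitoring across parameters like those listed above; the baselines and tolerance bands are invented for illustration.

```python
# Hypothetical monthly baselines and tolerance bands for monitored parameters
BASELINES = {
    "attrition_rate":     (0.03, 0.01),  # (base level, allowed drift)
    "error_rate":         (0.02, 0.005),
    "comm_link_failures": (2.0,  1.0),
    "absenteeism_rate":   (0.05, 0.02),
}

def breaches(observed: dict) -> list[str]:
    """Return the parameters whose observed value exceeds base + drift."""
    return [name for name, (base, drift) in BASELINES.items()
            if observed.get(name, base) > base + drift]

month = {"attrition_rate": 0.06, "error_rate": 0.021,
         "comm_link_failures": 2.0, "absenteeism_rate": 0.09}
print(breaches(month))  # -> ['attrition_rate', 'absenteeism_rate']
```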



9. Does Outsourcing decrease overall Operational Risk?

Outsourcing in the financial services market may finally be coming of age. Financial services organizations continue to struggle with capital adequacy, operational costs, and the need to improve shareholder return. As a result, most industry analysts are predicting strong double-digit growth in outsourcing in the sector over the next few years, particularly for business process outsourcing (BPO). As recent contract awards have shown, companies that may in the past have thought long and hard about turning over the management of just small parts of their IT operation to services vendors are now outsourcing whole back office and customer-facing processes. And those that aren't yet doing so are at least seriously considering the option, even where processes previously considered core functions are involved.
Some of the areas where one can see a significant reduction in operational risk due to effective outsourcing could include -

- Lower risk probabilities due to better processes at the vendor site
- Benefits to the Firm from process and operations improvements driven by vendor value-additions
- Risk alerts are more closely watched, as the vendor is in some cases more liable than the Firm
- BCP & DRS at geographically disparate sites
- Knowledge transfer and wider availability of knowledge, due to deployment of specialist personnel in the training function
- 24X7 extended enterprises, hence better response times
- Improvements in technology mean systems can be maintained offshore, making them less liable to failure on account of more personnel, better processes and 24X7 support
- People-related security risks are minimized by good security policies
- Lower probability of geographic risks and losses from natural calamities, due to multiple centers across the world

Regardless of the adequacy of checks and balances in the vendor selection process, FSA CP142 implies that effective risk management of any outsourcing to an operationally lower-cost location will "help to reduce direct losses to consumers arising from operational failures at firms" and mitigate the "frequency and impact of operational losses that may deplete a firm's financial resources" that might otherwise arise from loss of control over outsourcing arrangements. Any offshore operating model must therefore be sufficiently robust to support integrated, end-to-end process execution, with appropriate controls in place for compliance and for managing business risks.

Companies can shift their geo-political risks within lower risk tolerance levels, as part of the overall Operational Risk portfolio, while continuing to generate cost savings, by performing a risk portfolio assessment and using the results to adjust their offshore outsourcing strategy. Using multiple service delivery centers in different geographies creates options and enables a company to transfer application development and support to parallel, unaffected geographies in the event of an emergency. Operating different global support locations means more than one support center has knowledge of the in-scope applications and access to the associated data.


10. Conclusion

Case studies on failed outsourcing agreements are few and far between. This is partly understandable – failures reflect on both parties, and neither would wish to publicize them – but it is also, to a large extent, because outsourcing is delivering significant cost and quality benefits to both the Firm and the vendor companies. The overwhelming majority of outsourcing deals appear to be driven by the outsourcing party's desire for short- to medium-term savings, and in consequence economies of scale and contract duration do not play enough of a part to ensure more than medium-term cost savings. Outsourcing contracts that are long, properly framed and backed by long-term committed management buy-in from both parties can deliver significant results, not only in improved bottom-line performance but also in reducing the overall operational risks associated with BFSI operations.

Such deals will then tend to place more priority and focus on improved quality of delivery and service and on future upgrades and developments, and less on immediate cost benefits to the parties. The vendor will thus need the commitment to respond adequately to the ever-changing business environment, technology needs and functional demands of the Firms. The way forward could be a mix of outsourcing across different geographies and a mix of vendors, to mitigate the aspects of operational risk associated with geography, vendor performance and IP protection.

In the final analysis, it is probably only through offshore outsourcing deals that deliver services from economies with a substantially lower cost base and mature processes that substantial cost-saving outsourcing can be conducted to mutual satisfaction. One clear example of such an economy is India, with its armies of software professionals working in software giants, many of them certified at CMMi Levels 4 and 5, inherent strengths in an English-based education system, and a vibrant democracy with a free and fair legal system.

In many outsourcing deals, senior management involvement decreases once the deal moves into operational mode. Cultural issues are more likely to be addressed and resolved through active participation in the operational aspects of the agreement. Senior management's active participation in the conduct of operational processes, not merely review and oversight of them, would identify risk and relationship issues earlier.

‘Outsourcing arrangements can actually reduce risk, however it is important that the regulator be able to satisfy depositors that all arrangements have had associated risks identified and mitigated’ – Graeme Thompson, CEO of the Australian Prudential Regulation Authority (APRA)
