Re-thinking IT Paradigms

I have been working on some projects recently with a heavy dependency on virtualization. It works very well, and there are certainly advantages. You can resize things with minimal effort, and there is even a level of redundancy built in. Every instance can have its own virtual host (no more stacking multiple E-Business Suite instances on a single box!). Of course, one downside is that the “old way” of stacking multiple environments on a single box was somewhat self-limiting. Virtualization requires more discipline to prevent “instance creep” (where everybody must have their own private instance) on a project.

But some of this has me wondering. Manageability benefits aside, the primary selling point behind virtualization is “more efficient use of hardware”.

When we size a system on traditional hardware, we size it for the busiest day of the year and then add some fudge factor to account for anticipated growth. The end result is a system that runs at 30-40% CPU utilization and maybe 50-60% memory utilization most of the time. The business views this as waste.

The solution being sold to address this is virtualization. You can have virtual machines that are sized smaller and dynamically scaled to handle growth or those busy days. The basic idea is that some systems need more CPU/memory today while others will need more tomorrow. The end result is that you theoretically need fewer CPUs (which will operate closer to 100% utilization) and less RAM in aggregate across your enterprise.

This is all predicated on an assumption that I am beginning to think is either flawed or has simply changed.  The assumption in question is that “hardware is expensive and must be used efficiently”.

The truth is, hardware costs continue to fall even as compute power increases.  Moore’s Law is very much alive.

Contrast this with the proposed solution: large-scale engineered systems (we’re using a vBlock from VCE on this project) at extremely high cost. These systems introduce their own management challenges, licensing and personnel costs, licensing complications (we’re in an Oracle world, remember?), and even technology challenges. How much is it going to cost to upgrade these systems when they become old and slow? (Moore’s Law strikes again.)

So, to me, this raises the question: which is more expensive? Individual servers with “wasted” capacity? Or the solution we’re deploying to solve that “problem”?

— James

Why I don’t depend on TOAD (or OEM) and neither should you.

My apologies in advance, as this posting may sound like something of a rant.

The first thing I’d like to point out is that I have no real problem with TOAD, Oracle Enterprise Manager, or Windows-based editors. They are all excellent tools that can be extremely helpful in your environment. My objection to these tools is based solely on a lowest-common-denominator argument.

First, a little background. Back in the early 1990s, I was working as a Unix Systems Administrator for a company in Kansas City, MO. Since then, I’ve worked mainly as a consultant.

Shortly before I started that job in Kansas City, the company had hired a new CIO who let go about half of the legacy (mainframe, COBOL) IT department. The new direction for the company was implementation of Oracle E-Business Suite on Data General Unix (DG/UX).

The mainframe IT staff who survived were being re-trained in the new technology. At one point, several of them came to me insisting that I install ISPF (an editor they were used to on the mainframe) onto the DG/UX boxes because they were struggling to learn the vi editor. I pointed out that, while they (as a group) might carry enough weight to convince the CIO to direct me to install it (assuming it was even available), when they went to their next job and claimed that “they know Unix”, they would be alone and wouldn’t have that leverage. My suggestion was that I would help them learn the vi editor. (I did offer emacs as an alternative, since it is and was extremely common on Unix systems… Unfortunately, friendlier editors like pico, nano, and joe didn’t exist yet.)

If your primary job is software development, a tool like TOAD is generally something you can depend on having. However, as a DBA, you can’t necessarily depend on having TOAD (or even Oracle Enterprise Manager) at your disposal at all times. Maybe you’re starting a new job and the previous DBA hadn’t set up Enterprise Manager (or you haven’t gotten around to it yet). Even in environments where those tools are available, they may or may not be working when you need them.

So, my advice? There are certain tools that are almost ALWAYS there. Get comfortable with ssh, SQL*Plus, and vi (or vim).  They are your friends.
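
To make that concrete, here is a minimal sketch of that lowest-common-denominator workflow. The host name, SID, and paths below are placeholders, not from any real environment:

    # Get onto the database server (nothing but an ssh client needed on your end).
    ssh oracle@dbhost.example.com

    # Set the environment for the instance you care about (values are placeholders).
    export ORACLE_SID=PROD
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
    export PATH=$ORACLE_HOME/bin:$PATH

    # A quick sanity check from SQL*Plus.
    sqlplus / as sysdba <<'EOF'
    select instance_name, status from v$instance;
    EOF

    # And vi for any file that needs touching -- no GUI required.
    vi $ORACLE_HOME/network/admin/tnsnames.ora

Every one of those commands works on the most bare-bones of servers, which is exactly the point.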

— James

Useful desktop tools for the EBS Consultant (and DBAs)

This is a short list of some of the tools that I (and others) depend on when we’re working at clients.

Snag-IT! Commercial ($49.95). (Windows & Mac) http://www.techsmith.com/snagit.html Excellent tool if you have to document graphically intensive tasks. Provides nice ways to highlight and circle captured images, and also allows you to quickly capture just the active window (or a pre-defined region of the screen), dump multiple captures into a directory, and even name the files sequentially.

Toad Commercial ($955.00 and up). (Windows only) http://www.quest.com/toad/ For many developers and implementation consultants, Toad has a virtually religious following. It is an excellent SQL query/development tool and allows you to easily save query results in Microsoft Excel format (which will keep Oracle Support plenty happy). [Full disclosure: Those of you who have worked with me in the past know that I don’t use Toad (or TOra, or SQL Developer, for that matter) for a variety of reasons (which will be explained in a later posting).]

TOra Open Source (Free). (Windows, Mac, Linux, Solaris) http://torasql.com/about Has many of the same core features as Toad.

Oracle SQL Developer Commercial (Free-ish). (Windows, Linux, others). http://www.oracle.com/sqldeveloper Also has many of the same core features as Toad.

VirtualBox Open Source (Free). (Windows, Mac, Linux, Solaris) http://www.virtualbox.org Do you want to run a Virtual Machine on your laptop? Maybe you’re running Windows 7 and need a copy of Windows XP and an old version of IE so that you can test things and support your users? What if you’d really rather be using Linux but still have to run some Windows software? VirtualBox provides an easy way to create and run a VM of another operating system. Literally run Linux in a window! (Or even fullscreen.)

SecureCRT Commercial ($139.99) (Windows, Mac, Linux) http://www.vandyke.com/products/securecrt/index.html This is a very nice ssh client. It works well and has a polished user interface. Unfortunately, it’s fairly expensive. Other (free) choices on Windows include: PuTTY http://www.putty.org/, BitVise Tunnelier http://www.bitvise.com/tunnelier, OpenSSH for Windows http://sshwindows.sourceforge.net/, and Cygwin http://www.cygwin.com/. If you’re on a Mac or Linux system (or running a copy of Linux in a Virtual Machine), then you already have a built-in ssh client.
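
Whichever client you settle on, it’s worth a minute to set up host aliases so you aren’t retyping connection details all day. A minimal sketch for OpenSSH-style clients (the host names and user here are made up):

    # Append a host alias to your ssh client configuration.
    cat >> ~/.ssh/config <<'EOF'
    Host clientdb
        HostName dbhost.client.example.com
        User oracle
        Port 22
    EOF

    # After that, this is all you need to type:
    ssh clientdb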

— James

On DBAs, Developers, and Performance

Cary Millsap has an excellent (as usual) blog posting today about the software development process employed by so many organizations.

You can read his full posting here:

http://carymillsap.blogspot.com/2012/06/organizational-constraint-that.html

This plays into something that I see quite a bit as a DBA at client sites. Most developers I encounter don’t tend to focus much on performance (unless it is painful during their testing). This isn’t, specifically, their fault. In many cases, the developers are under significant pressure to “make it work” and move on to the next thing. As a result, they don’t really have the time to worry about performance. Besides, that’s the DBA’s job, isn’t it?

Well, actually, it isn’t. See, here’s the thing: from a DBA’s perspective, we have a relatively small handful of tools at our disposal. From the bottom up, here is a basic list of things that a DBA generally does to impact the performance of a system:

Properly configure disk and I/O. This really is (or should be) a collaboration between the DBA, the Systems Administrators, and the Storage Administrators. Done correctly, this shouldn’t be a problem. However, as with everything, it is still possible to screw it up. Make sure that you have redundant I/O paths with sufficient bandwidth for your system. Spread your I/O across many physical drives. With RAID technologies (particularly RAID 1/0) this is VERY easy to accomplish. Forget about the old concept of the DBA moving datafiles around to avoid a “hot disk”; the disk array can do this at a much more granular and consistent level than any human possibly can. Your primary goal here should be to have as many spindles involved in every I/O operation as possible.
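
If you want to eyeball whether I/O really is spread evenly, the standard v$filestat and v$datafile views report per-datafile read and write counts since instance startup. A minimal sketch:

    sqlplus / as sysdba <<'EOF'
    -- Physical reads and writes per datafile since instance startup.
    select df.name, fs.phyrds, fs.phywrts
      from v$filestat fs, v$datafile df
     where fs.file# = df.file#
     order by fs.phyrds + fs.phywrts desc;
    EOF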

Properly size your hardware. Another collaboration between the DBA and Systems Administrator. Make sure you have adequate horsepower and RAM. And ALWAYS plan for growth! My general recommendation is always to put more RAM into the box than you think you’ll need. Given that so many systems that I encounter these days are x86-based Linux systems (rather than “big iron” like Solaris or AIX), memory for these systems is a relatively small portion of their cost. Also, RAM doesn’t impact your Oracle licensing!
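
Checking what a Linux box actually has takes seconds; a quick sketch:

    # Total and free memory, in gigabytes.
    free -g

    # Or straight from the kernel.
    grep MemTotal /proc/meminfo

    # CPU count as the OS sees it.
    grep -c ^processor /proc/cpuinfo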

Properly tune your OS kernel and database parameters. I think this is one area where developers, managers, and users tend to have gross misconceptions. While it’s true that tuning a system to truly “optimal” performance is a dark art, the truth is that, unless you’ve really screwed something up (sized your SGA too small, too few buffers, etc.), odds are you’re not going to see huge performance gains by tweaking these parameters. In most cases, “decent performance” is fairly easy to achieve. Now, squeezing out that extra 20%? That can require in-depth analysis of statistics, utilization, I/O patterns, etc. This is where the “dark art” comes into play. And, honestly, it requires time to observe and adjust.
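
For the curious, these are the sorts of knobs in question. A sketch of how to inspect them (nothing here is a recommendation, just where to look):

    # Linux shared-memory kernel parameters (run as root).
    sysctl kernel.shmmax kernel.shmall

    # The big database memory knobs.
    sqlplus / as sysdba <<'EOF'
    show parameter sga_target
    show parameter pga_aggregate_target
    EOF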

Unfortunately, too many developers, managers, and even users, seem to wonder why that idiot DBA didn’t just set the magic “MAKE_SQL_RUN_FASTER=TRUE” parameter. (Shh! Don’t tell anyone, that’s OUR little secret!)

The truth is, unless something is wrong at these lower levels, the biggest performance gains are going to come from tuning the code. And, in my opinion, since it’s the developer’s job to produce a quality product, it’s ultimately the developer’s job to tune their code. Unfortunately, as Cary points out, too many organizations are structured in a manner that breaks the feedback loop required for developers to do this properly.

That MUST be fixed.
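
For what it’s worth, once that loop is intact, the mechanics on the developer’s side are simple. A minimal sketch of seeing what a query actually did (using the stock scott/tiger demo schema, purely for illustration):

    sqlplus scott/tiger <<'EOF'
    -- Run the statement with rowsource statistics collected.
    select /*+ gather_plan_statistics */ count(*)
      from emp
     where deptno = 10;

    -- Then pull the actual plan, with estimated vs. actual row counts.
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    EOF

Comparing the estimated and actual row counts in that output is where most code-level tuning conversations start.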

— James

E-Business Suite and the 32-bit vs. 64-bit question

Before I get flamed on this, I want to make clear that, for the purpose of this posting, I’m speaking specifically about operating systems (not hardware). Most of the hardware being sold today is already 64-bit; however, you can run most 32-bit operating systems on 64-bit hardware. It’s that distinction I’m discussing here.

The first thing you need to know is that the big benefit of a 64-bit operating system really is memory. In particular, it is not about the total amount of memory that can be installed in the machine (that is largely a hardware limit), but about the addressable size of “per-process” memory.

In the case of components such as those used on an appsTier in EBS, per-process addressable memory doesn’t matter so much, as each process has its own private memory (and isn’t depending on “shared memory” the way the database server is). So, aside from the fact that it makes our lives much easier from an administrative standpoint (and the industry is going that way), there really isn’t much of a technical advantage to a 64-bit appsTier.

For EBS 11i (where the DB is certified on x86-64, but the appsTier is only certified on x86-32), you can still use much more than 4GB on an appsTier node (the operating system can address large memory via PAE on Linux). It’s just that the amount of memory addressable by a single process is limited to something between 3 and 4 GB.

In the case of EBS R12, the appsTier binaries are still 32-bit, even when you’re running on a 64-bit operating system. This makes sense because the only component that can really take advantage of it is the database (because the database processes all attach to the same large chunk of memory [the SGA]).
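
If you ever need to verify which flavor you’re actually running, a quick check from the shell (the path is the usual default; adjust for your environment):

    # Is the operating system 32- or 64-bit?
    uname -m          # e.g. i686 vs. x86_64
    getconf LONG_BIT  # 32 or 64

    # And the Oracle executable itself?
    file $ORACLE_HOME/bin/oracle
    # Look for "ELF 32-bit" vs. "ELF 64-bit" in the output.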

Note that EBS R12 appsTier is certified on both Linux x86-32 and Linux x86-64.

So, for 11i, the best you can hope for is a separate dbTier (database only) running on Linux x86-64 and Linux x86-32 for the appsTier nodes. Remember that the 11i appsTier is NOT certified on Linux x86-64. That doesn’t mean it can’t be done, but I seriously doubt that Oracle has any intention of certifying a release that old on what is, effectively, a different platform. In both cases, the Linux release can/should be 5.X (5.7 is current). Having, effectively, two different platforms will be something of a headache from a Linux administration standpoint, but it’s something you’ll have to deal with.

When you get to R12, use Linux x86-64 on all tiers (to simplify administrative tasks, as well as to be “among the mainstream” of installations). Keep in mind that 64-bit is where the market is going. Even though you can run R12 on x86-32 (it is certified), you’re better positioned on x86-64.

— James

Spreadsheet Risk (and why ad-hoc reporting tools make me twitchy)

First, let me say that I’m a DBA, not an accountant. We tend to trust databases to hold and organize data. We use applications and reports developed by professional developers to retrieve that data. Those applications and reports go through a software development lifecycle for a reason: to make sure that they are accurate.

Despite this, many professional developers aren’t writing “well-tuned code”. They’re generally happy to get the right results and, as long as it isn’t painfully slow, performance is either an afterthought or the DBA’s problem. I’ve got news for you… some 80% of performance issues are caused by poorly tuned code!

This is not to denigrate developers.  I’m saying this mostly to prove a point:  if you can’t reliably expect well-tuned code from a professional developer, you’re insane if you expect anything better from end-users with an ad-hoc query tool.

This is one reason why tools that allow end-users to produce their own reports (Discoverer, ADI, et al.) have always made me (and, I’m sure, other DBAs out there) somewhat nervous.

The other reason I’ve always been a little twitchy about those tools is accuracy. Professional developers understand the need for testing and accuracy. End-users, however, frequently don’t have that same appreciation. So, when you allow end-users to develop their own queries and reports, or to extract and manipulate data in a spreadsheet, what kind of risks are you taking?

CIO Magazine has a thought-provoking article on this. Definitely worth a read.
http://www.cio.com/article/print/699592

— James

Quick Status Update

Just a quick status update. After the dissolution of TriOra Group, I became an independent contractor working with RedRiver Solutions on most projects. RedRiver Solutions is a Dallas, TX-based IT consultancy specializing in Oracle and Oracle E-Business Suite.

— James

First Entry

Thus begins my foray into blogging. I suppose I should start with a quick introduction. My name is James Morrow. I have been an Oracle E-Business Suite DBA since about 1993 and have worked as a consultant in that area since 1995. I’ve been affiliated with a number of “boutique firms” over that time, including Computer Systems Authority (CSA), Nascent Systems, Solution Beacon and, most recently, TriOra Group.

In this blog, I plan to cover various tips, tricks, techniques, pitfalls, practices, and even pratfalls as they relate to the life of an Oracle E-Business Suite DBA. So, stay tuned!