Quickly built new Python graph of SQL execution time by plan

[Graph: SQL_ID c6m8w0rxsa92v on mydb database with plans]

I created a new graph in my PythonDBAGraphs program to show how a plan change affected execution time. The legend in the upper left lists the plan hash values. Normally I run the equivalent query as a sqlplus script and just look for plans with higher execution times. I used it today for the SQL statement with SQL_ID c6m8w0rxsa92v. It has been running slow since 10/11/2016.
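
The sqlplus version is essentially the same query that the script below builds, with the SQL_ID plugged in:

select
to_char(sn.END_INTERVAL_TIME,'MM-DD HH24:MI') DATE_TIME,
plan_hash_value,
ELAPSED_TIME_DELTA/(executions_delta*1000000) ELAPSED_AVG_SEC
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = 'c6m8w0rxsa92v'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id,plan_hash_value;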

Since I had just split up my Python graphs into multiple smaller scripts, I decided to build this new script to see how easy it would be to graph a SQL statement's execution time for its different plans. It was not hard to build. Here is the script (sqlstatwithplans.py):

import myplot
import util

def sqlstatwithplans(sql_id):
    # build a query that returns the average elapsed seconds per
    # execution for each plan of the given sql_id, one row per
    # AWR snapshot
    q_string = """
select 
to_char(sn.END_INTERVAL_TIME,'MM-DD HH24:MI') DATE_TIME,
plan_hash_value,
ELAPSED_TIME_DELTA/(executions_delta*1000000) ELAPSED_AVG_SEC
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = '""" 
    q_string += sql_id
    q_string += """'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id,plan_hash_value"""
    return q_string

database,dbconnection = util.script_startup('Graph execution time by plan')

# Get user input

sql_id=util.input_with_default('SQL_ID','acrg0q0qtx3gr')

mainquery = sqlstatwithplans(sql_id)

mainresults = dbconnection.run_return_flipped_results(mainquery)

util.exit_no_results(mainresults)

date_times = mainresults[0]
plan_hash_values = mainresults[1]
elapsed_times = mainresults[2]
num_rows = len(date_times)

# build list of distinct plan hash values

distinct_plans = []
for phv in plan_hash_values:
    string_phv = str(phv)
    if string_phv not in distinct_plans:
        distinct_plans.append(string_phv)
        
# build a list of elapsed times by plan

# create list with num plans empty lists     
                        
elapsed_by_plan = []
for p in distinct_plans:
    elapsed_by_plan.append([])
    
# for each row, append the row's elapsed time to its plan's
# list and None to every other plan's list

for i in range(num_rows):
    plan_num = distinct_plans.index(str(plan_hash_values[i]))
    for p in range(len(distinct_plans)):
        if p == plan_num:
            elapsed_by_plan[p].append(elapsed_times[i])
        else:
            elapsed_by_plan[p].append(None)
            
# plot query
    
myplot.xlabels = date_times
myplot.ylists = elapsed_by_plan

myplot.title = "Sql_id "+sql_id+" on "+database+" database with plans"
myplot.ylabel1 = "Average Elapsed Seconds"
    
myplot.ylistlabels=distinct_plans

myplot.line()

Having all of the Python code for this one graph in a single file made it much faster to put together a new graph. Pretty neat.

Bobby


Tim Gorman at AZORA meeting tomorrow in Scottsdale

Arizona Oracle User Group – October 20, 2016

Thursday, Oct 20, 2016, 12:30 PM

Republic Services – 3rd Floor Conference Room
14400 N 87th St (AZ101 & Raintree) Scottsdale, AZ


Change In Plans - Tim Gorman comes to Phoenix! Stephen Andert had a sudden business commitment making it impossible for him to speak at Thursday’s meeting. Fortunately, Tim Gorman of Delphix will be coming from Denver to speak instead. Tim is an internationally-renowned speaker, performance specialist, member of the Oak Table, Oracle Ace Director, …


Phoenix area readers – I just found out that Oracle performance specialist and Delphix employee Tim Gorman will be speaking at the Arizona User Group meeting tomorrow in Scottsdale.  I am looking forward to it.

Bobby


Thinking about using Python scripts like SQL scripts

I’ve used Python to make graphs of Oracle database performance information. I put the scripts out on GitHub at https://github.com/bobbydurrett/PythonDBAGraphs. As a result I’m keeping my Python skills a little fresher and learning about git for version control and GitHub as a forum for sharing Open Source. Really, these Python scripts were an experiment. I don’t claim that I have done any great programming or that I will.

But, as I review what I have done so far it makes me think about how to change what I am doing so that Python would be more usable to me. I mainly use SQL scripts for Oracle database tuning. I run them through sqlplus on my laptop. I think I would like to make the way I’m using Python more like the way I use SQL scripts. My idea is that all the pieces would be in place so that I could write a new Python script as easily and quickly as I would a SQL script.

I started out with my PythonDBAGraphs project with a main script called dbgraphs.py that gives you several graphs to choose from. I also have a script called perfq.py that includes the code to build a select statement. To add a new graph I have had to add entries to both of these files. They are getting kind of long and unwieldy. I’m thinking of breaking these two scripts up into a separate script for each graph like ashcpu.py, onewait.py, etc.

You may wonder why I am talking about changes I might make to this simple set of scripts. I am thinking that my new approach is more in line with how businesses think about using Python. I have heard people say that business users could use Python and the same graphing library that I am using to build reports without having a developer work with them. Of course, people think the same about SQL and it is not always true. But, I think that my first approach to these Python scripts was to build them like one large standalone program, as if I were building an app to sell or publish, like a compiler or a new database system. Instead, I think it makes sense to build an environment where I can quickly write custom standalone scripts, just as I can quickly put together custom SQL scripts.

Anyway, this is my end of the week, end of the work day blogging thoughts. I’m thinking of changing my Python scripts from one big program to an environment that I can use to quickly build new smaller scripts.

Bobby

P.S. It took less time than I thought to split this up. I pushed out the new version with dbgraphs.py and perfq.py split up into a separate script for each graph. I did keep one class in a file by itself, signatures.py, because several graphs use it. I guess I won’t really know if splitting these large scripts up is better until I need to build a new graph. But it seems less overwhelming to build a new script instead of editing some large existing scripts.


Need classes directory to run ENCRYPT_PASSWORD on PeopleTools 8.53

I had worked on creating a Delphix virtual copy of our production PeopleTools 8.53 database and wanted to use ENCRYPT_PASSWORD in Datamover to change a user’s password. But I got this ugly error:

Error: Process aborted. Possibly due to JVM is not available or missing java class or empty password.

What the heck! I have used Datamover to change passwords this way for 20 years and never seen this error. Evidently in PeopleTools 8.53 they increased the complexity of the encryption by adding a “salt” component. So, now when Datamover runs the ENCRYPT_PASSWORD command it calls Java for part of the calculation. For those of you who don’t know, Datamover is a Windows executable, psdmt.exe. But, now it is calling java.exe to run ENCRYPT_PASSWORD.
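
For anyone who has not seen it, changing a password this way in Datamover looks something like this sketch (TEST and NEWPASSWORD are made-up values):

REM set the password in plain text and flag it as unencrypted
UPDATE PSOPRDEFN SET OPERPSWD = 'NEWPASSWORD', ENCRYPTED = 0 WHERE OPRID = 'TEST';
REM hash the password - in 8.53 this is the step that calls Java
ENCRYPT_PASSWORD TEST;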

I looked at Oracle’s support site and tried the things they recommended but they didn’t resolve the error. Here are a couple of the notes:

E-SEC: ENCRYPT_PASSWORD Error: Process aborted. Possibly due to JVM is not available or missing java class or empty password. (Doc ID 2001214.1)

E-UPG PT8.53, PT8.54: PeopleTools Only Upgrade – ENCRYPT_PASSWORD Error: Process aborted. Possibly due to JVM is not available or missing java class or empty password. (Doc ID 1532033.1)

They seemed to focus on a situation during an upgrade when you are trying to encrypt all the passwords and some have spaces in their passwords. But that wasn’t the case for me. I was just trying to change one user’s password and it didn’t contain spaces.

Another recommendation was to put PS_HOME/jre/bin in the path. This totally made sense. I have a really stripped down PS_HOME with only the minimum set of directories that I need to do migrations and tax updates. I only have a 120 gig SSD C: drive on my laptop so I didn’t want a full multi-gigabyte PS_HOME. So, I copied the jre directory down from our windows batch server and tried several ways of putting the bin directory in my path, but I still got the same error.

Finally, I ran across an idea that the Oracle support documents did not address, probably because no one else is using partial PS_HOME directories like me. I realized that I needed to download the classes directory. I found a cool documentation page about the Java class search path for app servers in PeopleTools 8.53. It made me guess that psdmt.exe would search the PS_HOME/classes directory for the classes it needed to do the ENCRYPT_PASSWORD command. So, I copied classes down from the windows batch server and put the jre/bin directory back in the path and success!

Password hashed for TEST
Ended: Tue Oct 11 16:36:55 2016
Successful completion
Script Completed.

So, I thought I would pass this along for the unusual case where someone like me not only needs to put the jre/bin directory in their path but is also missing the classes directory.

Bobby


JDBC executeBatch looks odd in AWR

A project team asked me to look at the performance of an Oracle database application that does a bunch of inserts into a table. But when I started looking at the AWR data for the insert, the data confused me.

The SQL by elapsed time section looked like this:

[AWR screenshot: SQL ordered by Elapsed Time]

So, 1514 executions of an insert with 1 second of elapsed time each, almost all of which was CPU. But then I looked at the SQL text:

[AWR screenshot: the SQL statement's text]

Hmm. It is a simple insert values statement. Usually this means it is inserting one row. But 1 second is a lot of CPU time to insert a row. So, I used my sqlstat.sql script to query DBA_HIST_SQLSTAT about this sql_id.

select ss.sql_id,
ss.plan_hash_value,
sn.END_INTERVAL_TIME,
ss.executions_delta,
ELAPSED_TIME_DELTA/(executions_delta*1000) "Elapsed Average ms",
CPU_TIME_DELTA/(executions_delta*1000) "CPU Average ms",
IOWAIT_DELTA/(executions_delta*1000) "IO Average ms",
CLWAIT_DELTA/(executions_delta*1000) "Cluster Average ms",
APWAIT_DELTA/(executions_delta*1000) "Application Average ms",
CCWAIT_DELTA/(executions_delta*1000) "Concurrency Average ms",
BUFFER_GETS_DELTA/executions_delta "Average buffer gets",
DISK_READS_DELTA/executions_delta "Average disk reads",
ROWS_PROCESSED_DELTA/executions_delta "Average rows processed"
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.sql_id = 'fxtt03b43z4vc'
and ss.snap_id=sn.snap_id
and executions_delta > 0
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
order by ss.snap_id,ss.sql_id;

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
fxtt03b43z4vc               0 29-SEP-16 07.00.34.682 PM              441         1100.68922     1093.06512     .32522449                  0                      0             .000492063           60930.449         .047619048             4992.20181
fxtt03b43z4vc               0 29-SEP-16 08.00.43.395 PM               91         1069.36489     1069.00231    .058494505                  0                      0                      0          56606.3846         .010989011                   5000
fxtt03b43z4vc               0 29-SEP-16 09.00.52.016 PM               75         1055.05561     1053.73324        .00172                  0                      0                      0          55667.1333                  0             4986.86667
fxtt03b43z4vc               0 29-SEP-16 10.00.01.885 PM              212         1048.44043     1047.14276    .073080189                  0                      0             .005287736          58434.6934         .004716981             4949.35377

Again it was about 1 second of CPU and elapsed time, but almost 5000 rows per execution. This seemed weird. How can a one row insert affect 5000 rows?

I found an entry in Oracle’s support site about AWR sometimes getting corrupt with inserts into tables with blobs so I thought that might be the case here. But then the dev team told me they were using some sort of app that did inserts in batches of 1000 rows each. I asked for the source code. Fortunately, and this was very cool, the app is open source and I was able to look at the Java code on GitHub. It was using executeBatch in JDBC to run a bunch of inserts at once. I guess you load up a bunch of bind variable values in a batch and execute them all at once. Makes sense, but it looked weird in the AWR.

Here is the Java test program that I hacked together to test this phenomenon:

import java.sql.*;
import oracle.jdbc.*;
import oracle.jdbc.pool.OracleDataSource;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.*;

public class InsertMil5k
{
  public static void main (String args [])
       throws SQLException
  {
    OracleDataSource ods = new OracleDataSource();
    ods.setUser("MYUSER");
    ods.setPassword("MYPASSWORD");
    ods.setURL("jdbc:oracle:thin:@MYHOST:1521:MYSID");
    OracleConnection conn =
      (OracleConnection)(ods.getConnection ());
    conn.setAutoCommit(false);

    PreparedStatement stmt = conn.prepareStatement("insert into test values (:1,:2,:3,:4)");
    byte [] bytes = new byte[255];
    int k;
    for (k=0;k<255;k++)
      bytes[k]=(byte)k;

/* loop 200 times. Make sure i is unique */
    int i,j;
    for (j=0;j < 200; j++) {

/* load 5000 sets of bind variables */

      for (i=j*5000;i < (j*5000)+5000; i++) {
        stmt.setString(1, Integer.toString(i));
        stmt.setInt(2, 1);
        stmt.setBinaryStream(3, new ByteArrayInputStream(bytes), bytes.length);
        stmt.setLong(4, 1);
        stmt.addBatch();
      }

    stmt.executeBatch();
    conn.commit();
    }

    conn.close();
  }
}

I started with one of the Oracle JDBC samples and grabbed the batch features from the app’s GitHub site. I just made up some random data which wasn’t super realistic. It took me a while to realize that they were actually, at times, doing 5000 row batches. The other AWR entries had 1000 rows per execution so that finally makes sense with what the dev team told me.

I guess the lesson here is that the AWR records each call to executeBatch as an execution but the number of rows is the size of the batch. So, that explains why a simple one row insert values statement showed up as 5000 rows per execution.

Bobby

P.S. The other thing that I didn’t mention when I wrote this blog post on Friday was my confusion over the number of buffer gets. If you look at the sqlstat output above you will see around 60,000 buffer gets per execution. This really puzzled me before I realized that it was really inserting 5000 rows. How can a single row insert get 60,000 buffers? It almost seemed like it was scanning the indexes or that there was something weird with the inline blob column. The top segments part of the AWR report pointed to the three indexes on this table and not to the blob segment so my guess was that somehow the index lookup had degraded into an index scan. But, when I finally realized that it was 5000 rows per execution I divided the 60,000 buffer gets by 5000 rows to get 12 gets per inserted row. Given three indexes this didn’t seem crazy.

 


Ask Tom table about NOLOGGING and redo generation

I was googling for things related to NOLOGGING operations and found this useful post on the Ask Tom web site: url

There is a nice table in the post that shows when insert operations generate redo log activity. But it isn’t formatted very well so I thought I would format the table here so it lines up better.

Table Mode    Insert Mode     ArchiveLog mode      Result
-----------   -------------   -----------------    --------------
LOGGING       APPEND          ARCHIVE LOG          redo generated
NOLOGGING     APPEND          ARCHIVE LOG          no redo
LOGGING       no append       ""                   redo generated
NOLOGGING     no append       ""                   redo generated
LOGGING       APPEND          noarchive log mode   no redo
NOLOGGING     APPEND          noarchive log mode   no redo
LOGGING       no append       noarchive log mode   redo generated
NOLOGGING     no append       noarchive log mode   redo generated

All of this is from Ask Tom. My contribution here is just the formatting.

I ran a couple of tests whose results agree with this table. I ran insert append on a database that was not in archivelog mode and the insert ran for the same amount of time with the table set for LOGGING as it did with the table set for NOLOGGING. I ran the same test on a database that is in archivelog mode and saw a big difference in run time between LOGGING and NOLOGGING. I didn’t prove it but I assume that the redo generation caused the difference in run time.

No archivelog and logging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.36

No archivelog and nologging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.38

Archivelog and logging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.84

Archivelog and nologging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.53
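
One way to prove that redo generation caused the difference would be to check the session statistic 'redo size' before and after each insert. Here is a minimal sketch of that check:

select ms.value redo_size
from v$mystat ms, v$statname sn
where ms.statistic# = sn.statistic#
and sn.name = 'redo size';

insert /*+append*/ into target select * from source;

-- rerunning the redo size query here and taking the difference
-- shows how much redo the insert generated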

I haven’t tested all the table options but I thought it was worth formatting for my reference and for others who find it useful.

Bobby


New graph: Average Active Sessions per minute

I am working on a production issue. I do not think that we have a database issue but I am graphing some performance metrics to make sure. I made a new graph in my PythonDBAGraphs program.

[Graph: average active sessions per minute on a production database today]

It shows the average number of active sessions for each minute. It prompts you for start and stop date and time. It works best with a relatively small interval or the graph gets too busy. Red is sessions active on CPU and blue is all active sessions. This graph is from a production database today. Activity peaked around midday.
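
I am not showing the script’s query here, but the idea is something like this sketch against V$ACTIVE_SESSION_HISTORY, which samples active sessions once per second, so the number of samples in a minute divided by 60 approximates the average active sessions for that minute:

select to_char(sample_time,'YYYY-MM-DD HH24:MI') sample_minute,
count(*)/60 all_active,
sum(case when session_state = 'ON CPU' then 1 else 0 end)/60 cpu_active
from V$ACTIVE_SESSION_HISTORY
group by to_char(sample_time,'YYYY-MM-DD HH24:MI')
order by sample_minute;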

It is kind of like the OEM performance screen but at least having it in Python lets me tinker with the graph to meet my needs. Check out the README on the GitHub link above if you want to run this in your environment.

Bobby


Bulk collect workaround for memory bug

A coworker passed a test script on to me that was failing with the following memory error:

ORA-04030: out of process memory when trying to allocate 4088 bytes (PLS CGA hp,pdzgM64_New_Link)

The error occurred when initializing a PL/SQL table variable with 7500 objects. Here is my sanitized version of the code:

CREATE OR REPLACE TYPE ARRAY_ELEMENT
AS
  OBJECT
  (
    n1 NUMBER,
    n2 NUMBER,
    n3 NUMBER,
    n4 NUMBER );
/

CREATE OR REPLACE TYPE MY_ARRAY
IS
  TABLE OF ARRAY_ELEMENT;
/

DECLARE
  MY_LIST MY_ARRAY;
BEGIN
  MY_LIST := MY_ARRAY(
    ARRAY_ELEMENT(1234,5678,1314,245234),
    ARRAY_ELEMENT(1234,5678,1314,245234),
    ARRAY_ELEMENT(1234,5678,1314,245234),
...
    ARRAY_ELEMENT(1234,5678,1314,245234),
    ARRAY_ELEMENT(1234,5678,1314,245234)
  );
END;
/

The real code had different meaningful constants for each entry in the table. Here is the error:

8004      ARRAY_ELEMENT(1234,5678,1314,245234)
8005    );
8006  
8007  END;
8008  /
DECLARE
*
ERROR at line 1:
ORA-04030: out of process memory when trying to allocate 4088 bytes 
(PLS CGA hp,pdzgM64_New_Link)


Elapsed: 00:02:51.31

I wrapped the error code manually so it would fit on the page.

The solution looks like this:

create table MY_OBJECTS
  (
    o ARRAY_ELEMENT );

DECLARE
  MY_LIST MY_ARRAY;
BEGIN
 MY_LIST := MY_ARRAY( );
  
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
...
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
 insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));

 commit;
  
 SELECT o
   BULK COLLECT INTO MY_LIST
   FROM MY_OBJECTS; 

END;
/

Here is what the successful run looks like:

8004    insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
8005    insert into MY_OBJECTS values(ARRAY_ELEMENT(1234,5678,1314,245234));
8006  
8007    commit;
8008  
8009    SELECT o
8010      BULK COLLECT INTO MY_LIST
8011      FROM MY_OBJECTS;
8012  
8013  END;
8014  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:21.36
SQL> 

There is an Oracle document about this bug:

ORA-4030 (PLSQL Opt Pool,pdziM01_Create: New Set), ORA-4030 (PLS CGA hp,pdzgM64_New_Link) (Doc ID 1551115.1)

It doesn’t list using bulk collect as a workaround. My solution may only be useful in very specific cases, but I thought it was worth sharing.

Here are my scripts and their logs: zip

This is on HP-UX Itanium Oracle 11.2.0.3.

Bobby


Finished Mathematics for Computer Science class

Today I finally finished the Mathematics for Computer Science class that I have worked on since December. For the last year or two I have wanted to do some general Computer Science study in my free time that is not directly related to my work. I documented a lot of this journey in an earlier blog post.

The math class is on MIT’s OpenCourseWare (OCW) web site. It was an undergraduate semester class and I spent about 9 months on it mostly in my spare time outside of work. I wanted to test out OCW as a source for training just as I had experimented with edX before. So, I thought I would share my thoughts on the experience.

The class contained high quality material. It was an undergraduate class so it may not have been as deep as a graduate level class could be but world-class MIT professors taught the class. Some of my favorite parts of the video lectures were where professor Leighton made comments about how the material applied in the real world.

The biggest negative was that a lot of the problems did not have answers. Also, I was pretty much working through this class on my own. There were some helpful people on a Facebook group that some of my edX classmates created that helped keep me motivated. But there wasn’t a large community of people taking the same class.

Also, it makes me wonder where I should spend time developing myself. Should I be working more on my communication and leadership skills through Toastmasters? Should I be working on my writing? Should I be learning more Oracle features?

I spent months studying for Oracle’s 12c OCP certification exam and I kind of got burnt out on that type of study. The OCP exam has a lot of syntax. To me syntax, which you can look up in a manual, is boring. The underlying computer science is interesting. It is fun to try to understand the Oracle optimizer and Oracle internals, locking, backup and recovery, etc. There is a never-ending well of Oracle knowledge that I could pursue.

Also, there is a lot of cloud stuff going on. I could dive into Amazon and other cloud providers. I also have an interest in open source. MySQL and PostgreSQL intrigue me because I could actually have the source code.

But, there is only so much time in the day and I can’t do everything. I don’t regret taking the math for computer science class even if it was a diversion from my Toastmasters activities and not directly related to work. Now I have a feel for the kind of materials that you have on OCW: high quality, general computer science, mostly self-directed. Now I just have to think about what is next.

Bobby

Posted in Uncategorized | 2 Comments

Trying VirtualBox

I have been using VMware Player to build test virtual machines on my laptop with an external drive for some time now. I used to use the free VMware Server. My test VMs weren’t fast because of the slow disk drive but they were good enough to run small Linux VMs to evaluate software. I also had one VM to do some C hacking of the game Nethack for fun. I got a lot of good use out of these free VMware products and VMware is a great company so I’m not knocking them. But, this week I accidentally wiped out all the VMs that I had on my external drive, so I tried to rebuild one so that I would at least have a test Linux VM to boot up when I need one. I spent several hours trying to get the Oracle Linux 6.8 VM that I created to work with a screen resolution that matched my monitor. I have a laptop with a smaller 14 inch 1366 x 768 resolution built-in monitor and a nice new 27 inch 1920 x 1080 resolution external monitor. VMware Player wouldn’t let me set the resolution to more than 1366 x 768 no matter what I did.

Finally after a lot of googling and trying all kinds of X Windows and VMware settings I finally gave up and decided to try VirtualBox. I was able to quickly install it and get my OEL 6.8 VM up with a larger resolution with no problem. It still didn’t give me 1920 x 1080 for some reason but had a variety of large resolutions to choose from.

After getting my Linux 6.8 machine to work acceptably I remembered that I was not able to get Linux 7 to run on VMware either. I had wanted to build a VM with the latest Linux but couldn’t get it to install. So, I downloaded the 7.2 iso and voilà it installed like a charm in VirtualBox. Plus I was able to set the resolution to exactly 1920 x 1080 and run in full screen mode taking up my entire 27 inch monitor.  Very nice!

I have not yet tried it, but VirtualBox seems to come with the ability to take a snapshot of a VM and to clone a VM. To get these features on VMware I’m pretty sure you need to buy the $249 VMware Workstation. I have a feeling that Workstation is a good product but I think it makes sense to try VirtualBox and see if the features that it comes with meet all my needs.

I installed VirtualBox at the end of the work day today so I haven’t had a lot of time to find its weaknesses and limitations. But so far it seems to have addressed several weaknesses that I found in VMware Player so it may have a lot of value to me. I think it is definitely worth trying out before moving on to the commercial version of VMware.

Bobby

P.S. Just tried the snapshot and clone features. Very neat. Also I forgot another nuisance with VMware Player. It always took a long time to shut down a machine. I think it was saving the current state. I didn’t really care about saving the state or whatever it was doing. Usually I just wanted to bring something up real quick and shut it down fast. This works like a charm on VirtualBox. It shuts down a VM in seconds. So far so good with VirtualBox.

P.P.S. This morning I easily got both my Linux 6.8 and 7.2 VMs to run with a nice screen size that takes up my entire 27 inch monitor but leaves room so I can see the menu at the top of the VM window and my Windows 7 status bar below the VM’s console window. Very nice. I was up late last night tossing and turning in bed thinking about all that I could do with the snapshot and linked clone features. 🙂

P.P.P.S. Now the bad news. VirtualBox is not working for me with USB flash drives. This works flawlessly out of the box with VMware Player. I guess it was inevitable that I would find things that VMware does better. Maybe VMware works better with Windows hosts. I seem to be hitting this issue. It seems to be a known bug using USB flash drives on Windows 7 with VirtualBox, and it seems to have been a bug for about 4 years. The workaround seems to be to edit the registry on my work laptop. Not going to do that.
