News: Comparing Oracle 10g and SQL Server Yukon

  1. Comparing Oracle 10g and SQL Server Yukon (115 messages)

    The purpose of the following technical paper from WisdomForce is to provide a feature comparison between MSSQL 2005 (Yukon) and Oracle 10g.

    The document compares VLDB/OLTP-related features and discusses issues with performance, utilities, and replication. It covers several new features that Microsoft developed to provide competitive functionality against its commercial rival, the Oracle database. It also describes, from a database administrator's standpoint, the changes made in the MSSQL 2005 release compared to previous releases.

    I hope this article will trigger serious discussion rather than a war between people who prefer either the Oracle or the Microsoft database.

    Read the paper: Features, strengths, and weaknesses comparison between Microsoft SQL Server 2005 (Yukon) and Oracle 10g databases

    Threaded Messages (115)

  2. I think the only problems with the paper are:

    1) The author had a preconceived conclusion in mind

    2) There is no explicit disclaimer regarding any communication with, incentive from (etc.) the vendors involved.

    That said, MSSQL has been a pretty good choice for the types of apps he suggests for some time now. ("For department level servers and small/mid range applications the MSSQL server would be a preferred choice.") Also, I think that "preferred" is a matter of compatibility, pricing and personal / organizational preference.

    One other thing is that for "department level servers and small/mid range applications," there are much less expensive and more portable options, like Postgres and SAPDB. If cost weren't an issue, MSSQL would almost always win over those (IMHO .. I happen to think MSSQL is pretty good, having personally started out by learning Sybase years ago,) but cost _can_ be a deciding issue for small apps.

    Oh, and one other thing ;-) .. isn't a version of MSSQL now being built into the Windows OS?

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  3. One other thing is that for "department level servers and small/mid range applications," there are much less expensive and more portable options, like Postgres and SAPDB.
    Firebird: runs on Linux, Windows, and a variety of Unix platforms, based on the source code open-sourced by Borland in 2000.
    http://firebird.sourceforge.net
    http://www.ibphoenix.com
  4. Pretty good idea for a paper, but I really hope it's a draft and not final for the following reasons:

    1) It's really poorly written in terms of grammar, etc. This might not seem like such a big deal, but it detracts slightly from credibility.

    2) It leaves the audience hanging with inside jabs at either platform (e.g. remarks such as "what does this remind you of?" without clarification, etc.) and incomplete discussions of some topic areas, such as shared-nothing failover in SQL Server.

    3) Oracle 10g is really new in its stated aim of grid computing (the paper itself kind of pokes fun at this assertion), and the version of Yukon being compared is Public Beta 2, which I think is a long way off from presenting a compelling comparison story. Lots of problems were experienced when trying to work with this version.

    The comparison between the two is interesting from a capability and strategy perspective (i.e., what kinds of theoretical workloads each can handle and how they target different admin audiences) but the two platforms really do serve different application markets and therefore I think any observations of how they compete head-to-head are kind of irrelevant.

    Mike
  5. Cameron, since you have been using SQL Server for a while I'm sure you know it uses page level locking instead of row level locking (perhaps from its Sybase roots?). This is great for toy applications, but any kind of transaction load starts causing massive page locks. I'm assuming Yukon will allow row level locks. Personally I would use Postgres or other OS DBs in place of MS SQL Server 2000 for workgroup style loads. Oracle and DB2 (on iSeries) do not suffer with large transactional loads.
  6. Cameron, since you have been using SQL Server for a while I'm sure you know it uses page level locking instead of row level locking (perhaps from its Sybase roots?).
    Are you implying that SQL Server 2000 *always* uses page-level locking? That is not only untrue but sounds extremely silly to me.
  7. Cameron, since you have been using SQL Server for a while I'm sure you know it uses page level locking instead of row level locking (perhaps from its Sybase roots?).
    Are you implying that SQL Server 2000 *always* uses page-level locking? That is not only untrue but sounds extremely silly to me.

    On page 8 it states:
    Locking strategy: The main problem of MSSQL 2000 compared with Oracle 8i/9i for VLDB/OLTP and enterprise applications on high-end machines was the well-known MSSQL locking strategy. This locking would cause high lock contention with a very high possibility of deadlocks in correct transactional flows that would work well on any database providing row versioning. In addition, the MSSQL 2000 lock escalation mechanism caused locks to be escalated to the block or even table level in cases of high concurrency, which is very common on high-end SMP machines.

    then on page 10, it states:
    Locking: The most notable feature added in MSSQL 2005 is probably a new isolation level called Snapshot Isolation (SI). The idea has been to add a row versioning option to MSSQL so that:
    • Updates will not block the select operation.
    • If working in the so-called "Read committed snapshot" isolation level, which is equivalent to SCN-based consistent read mode, then all fetched rows will be returned in the same state as they were on select statement execution.
    • If working in the so-called "Snapshot Isolation (SI)" isolation level, then all fetched rows will be returned in the same state as they were at transaction begin.

    As far as I know, that is accurate, but I could be wrong - I'm not a SQL Server expert. It would appear that this is fixed in 2005.
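    For reference, this is roughly how the new behavior is switched on in 2005 - a minimal T-SQL sketch, where the database MyDb and the Orders table are just examples:

    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;   -- enables the SNAPSHOT level
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;    -- makes READ COMMITTED use row versions

    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;              -- per-session opt-in
    BEGIN TRANSACTION;
    SELECT * FROM Orders WHERE CustomerId = 42;            -- sees data as of transaction start, not blocked by writers
    COMMIT;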
  8. As far as I know, that is accurate, but I could be wrong - I'm not a SQL Server expert. It would appear that this is fixed in 2005.
    I am not sure how much credence one should give to that document but the horse's mouth, the MSDN documentation, says:

    "... SQL Server locks are applied at various levels of granularity in the database. Locks can be acquired on rows, pages, keys, ranges of keys, indexes, tables, or databases. SQL Server dynamically determines the appropriate level at which to place locks for each Transact-SQL statement"

    There is no implication that page-level locking is preferred blindly.
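    If anyone wants to check what granularity the server actually picks under their own load, SQL Server 2000 ships sp_lock - a small sketch (the spid value is just an example):

    EXEC sp_lock;      -- lists current locks; the Type column shows RID, KEY, PAG, TAB, DB, ...
    EXEC sp_lock 53;   -- locks held by a single session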
  9. I've only benchmarked SQL Server 2000 on 2- and 4-CPU boxes under moderate concurrent load, so I don't have first-hand experience reproducing what they described.

    Hopefully one of the SQL Server developers can respond with a more authoritative answer.
  10. My Impressions

    I read the white paper. Below are my impressions:

    1. Written by production DBAs deeply familiar with Oracle & SQL Server
    2. They have a lot of experience with all three databases (you can tell from the war stories)
    3. Contrary to Mr. Purdy's opinion, the paper is not biased towards SQL Server. In fact, it details how SQL Server is trying to catch up to Oracle, which is ahead by two or three releases on the bigger features.
    4. Very good opinions from an operations point of view (as opposed to developers or academics)
    5. Scattered material (organization and style could be improved)
    6. The material will be Greek to Java developers
    7. Way over the heads of non-database professionals
    8. I learned a lot and will keep it for future reference
  11. My Impressions

    3. Contrary to Mr. Purdy's opinion, the paper is not biased towards SQL Server. In fact, it details how SQL Server is trying to catch up to Oracle, which is ahead by two or three releases on the bigger features.

    I didn't say "biased." I said that in my opinion, they had a pre-conceived conclusion.

    And speaking of pre-conceived conclusions, you're a good candidate .. http://blogs.msdn.com/michael_howard/archive/2004/10/25/247470.aspx

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  12. Mr. Purdy,

    What is this?

    A) A personal attack in a public forum?
    B) A background investigation?

    Anyway, I only have high regard for you.

    Ricky
  13. My Impressions

    I didn't say "biased." I said that in my opinion, they had a pre-conceived conclusion.

    They may have had to give a conclusion at the end of the paper, but I don't think it was a pre-conceived conclusion.
  14. a quick search reveals

    http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=00AZ7K

    I did a quick Google search, and it would appear the problem isn't isolated to the article; it does appear in other real-world deployments. As usual, take it with a grain of salt until someone authoritative confirms or clarifies it :)

    I'd try to reproduce it myself, but I don't have a SQL Server 2000 instance handy.
  15. Better late than never?

    The most notable feature added in MSSQL 2005 is probably a new isolation level called Snapshot Isolation (SI). The idea has been to add row versioning option to MSSQL so that updates will not block the select operation.
    Welcome to versioning database servers. From the Interbase documentation: "SNAPSHOT - this isolation level allows the transaction to see a view of the records as they were when the transaction began. Subsequent changes by other transactions that occur after the transaction began will not be seen."
    http://bdn.borland.com/article/0,1410,25686,00.html
    It seems that Oracle has the same feature, but calls it "serializable".
  16. Cameron, since you have been using SQL Server for a while I'm sure you know it uses page level locking instead of row level locking (perhaps from its Sybase roots?). This is great for toy applications, but any kind of transaction load starts causing massive page locks. I'm assuming Yukon will allow row level locks.
    MSSQL 6.5 allows row locking for INSERT
    MSSQL 7.0 allows row locking for INSERT, UPDATE and DELETE
    Row locking can be escalated automatically to page locking and even to table locking. MSSQL does not promise to set or keep the requested type of lock, so the locking specifier is called a "hint".

    This is an old debate between locking servers and versioning servers. Oracle (and Interbase/Firebird) are versioning servers, MSSQL/Sybase are locking servers.
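    For completeness, this is what those hints look like in T-SQL (table and column names are made up, and the server may still escalate despite the hint):

    SELECT * FROM Orders WITH (ROWLOCK) WHERE OrderId = 1;
    UPDATE Orders WITH (ROWLOCK) SET Status = 'shipped' WHERE OrderId = 1;
    SELECT * FROM Orders WITH (TABLOCKX) WHERE OrderId = 1;   -- ask for an exclusive table lock up front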
  17. Cameron, since you have been using SQL Server for a while I'm sure you know it uses page level locking instead of row level locking (perhaps from its Sybase roots?). This is great for toy applications, but any kind of transaction load starts causing massive page locks. I'm assuming Yukon will allow row level locks.
    MSSQL 6.5 allows row locking for INSERT.
    MSSQL 7.0 allows row locking for INSERT, UPDATE and DELETE.
    Row locking can be escalated automatically to page locking and even to table locking. MSSQL does not promise to set or keep the requested type of lock, so the locking specifier is called a "hint".
    This is an old debate between locking servers and versioning servers. Oracle (and Interbase/Firebird) are versioning servers, MSSQL/Sybase are locking servers.

    The important feature of a versioning server is reading:
    you can read without locks and still have read consistency.
    The Oracle mantra "readers don't block writers and writers don't block readers (ever)" is correct only for a versioning system.

    You also get reads (selects) without a transaction, etc.

    regards
  18. The important feature of a versioning server is reading: you can read without locks and still have read consistency. The Oracle mantra: readers don't block writers and writers don't block readers (ever).
    Yep, I guess this is the Interbase mantra too. These are some of my findings about the Interbase SNAPSHOT isolation level, made four years ago:

    The SNAPSHOT isolation level corresponds to the ANSI/ISO SERIALIZABLE isolation level (thus, Oracle calls it SERIALIZABLE quite correctly). When a transaction starts, it receives a "snapshot" of the database. This snapshot is kept unmodified until the end of the transaction. It is possible to read rows which are being updated by another concurrent transaction; the unmodified data would be fetched. An attempt to change rows already updated by another transaction (either still running or committed) results in a deadlock (SQLCODE = -913).

    Took MS some time to implement this "new feature" for 2005. Go Yukon!
  19. As I understand it, the SNAPSHOT isolation level is more about implementation (versioning), and it can correspond to any ANSI/ISO isolation level (those levels are defined in terms of phenomena).
    In theory the definition of transaction serializability is trivial: concurrent transactions must produce the same execution history as some serial execution of those transactions. There are many concurrency control algorithms that produce this kind of history (directed acyclic graphs); Two-Phase Locking and Multi-Version Concurrency Control are the most popular in practice.
  20. As I understand it, the SNAPSHOT isolation level is more about implementation (versioning), and it can correspond to any ANSI/ISO isolation level (those levels are defined in terms of phenomena).

    No, SNAPSHOT is a different isolation level; it differs from SERIALIZABLE because it makes write skew and phantoms possible. Please review the following well-known, but difficult to read, article for more details:
    http://www.cs.umb.edu/cs734/CritiqueANSI_Iso.pdf
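    For anyone curious what write skew actually looks like, here is the classic "on call" sketch in plain SQL (table and column names are invented, and transaction syntax differs slightly per product). The business rule is that at least one doctor must stay on duty; each transaction checks the rule against its own snapshot, and both commits succeed:

    -- Session 1:
    SELECT COUNT(*) FROM oncall WHERE on_duty = 1;   -- sees 2
    -- Session 2 (concurrently):
    SELECT COUNT(*) FROM oncall WHERE on_duty = 1;   -- also sees 2
    -- Session 1:
    UPDATE oncall SET on_duty = 0 WHERE doc = 'A';
    COMMIT;
    -- Session 2:
    UPDATE oncall SET on_duty = 0 WHERE doc = 'B';
    COMMIT;
    -- Under snapshot isolation neither update conflicts, so nobody is left on duty;
    -- a truly serializable execution would have aborted one of the two.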

    Maxim Kramarenko,
    TrackStudio - Hierarchical Issue Tracking Software
  21. I see, it is about the concurrency control algorithm implementation, so in ANSI/ISO terms it is the READ COMMITTED isolation level implemented using a SNAPSHOT strategy.
  22. No, SNAPSHOT is a different isolation level; it differs from SERIALIZABLE because it makes write skew and phantoms possible.

    It certainly is not totally different, only weaker in certain situations.

    It is a bit funny that half the authors of the paper you mention are from MS ;)

    Anyway, MS SQL will support both snapshot and real serializable...

    Other than that, thanks for the pointer - it is a really, really good article :)

    Oliver
  23. The important feature of a versioning server is reading: you can read without locks and still have read consistency. The Oracle mantra: readers don't block writers and writers don't block readers (ever).
    Yep, I guess this is the Interbase mantra too. These are some of my findings about the Interbase SNAPSHOT isolation level, made four years ago: The SNAPSHOT isolation level corresponds to the ANSI/ISO SERIALIZABLE isolation level (thus, Oracle calls it SERIALIZABLE quite correctly). When a transaction starts, it receives a "snapshot" of the database. This snapshot is kept unmodified until the end of the transaction. It is possible to read rows which are being updated by another concurrent transaction; the unmodified data would be fetched. An attempt to change rows already updated by another transaction (either still running or committed) results in a deadlock (SQLCODE = -913). Took MS some time to implement this "new feature" for 2005. Go Yukon!

    That isn't completely correct. Oracle is MVCC. Every transaction has a number (SCN), and Oracle keeps every transaction's data in a rollback segment. A user starts a query at SCN x and ends the query at SCN x - he reads only data from before transaction x+1.
    It isn't important how many transactions start afterwards.
    Read-only access (a select without a mutating function) doesn't start a transaction. I don't know another database with this behavior. PostgreSQL is MVCC, but it starts a transaction for every command (read-only or not).
    regards
  24. Read-only access (a select without a mutating function) doesn't start a transaction. I don't know another database with this behavior. PostgreSQL is MVCC, but it starts a transaction for every command (read-only or not).
    PostgreSQL does not start transactions for commands; the user demarcates transactions, and the user can make them read-only:
    http://www.postgresql.org/docs/7.4/static/sql-set-transaction.html
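    A minimal sketch of what that looks like in PostgreSQL 7.4 (the table name is just an example):

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY;
    SELECT count(*) FROM accounts;   -- any write attempted in this transaction would be rejected
    COMMIT;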
  25. I wasn't thinking of server commands but of SQL commands like select - PostgreSQL starts a transaction for every select,
    Oracle doesn't. PostgreSQL has to do it because it doesn't have pure
    MVCC - in PostgreSQL the transaction guarantees read consistency for the select, while Oracle's MVCC guarantees read consistency and Oracle doesn't need a transaction for a select.

    regards
  26. Sorry, I do not get how that is different. As I understand it, "transaction start" means "generate the next TxId and register it as active"; this Id is used to find the correct versions. Does Oracle have some magic way to find the correct version for a transactional read without any information about current and active transactions?
  27. Sorry, I do not get how that is different. As I understand it, "transaction start" means "generate the next TxId and register it as active"; this Id is used to find the correct versions. Does Oracle have some magic way to find the correct version for a transactional read without any information about current and active transactions?

    Yes, Oracle has magic.
    Every transaction id (SCN) increases by 1, and this is the scenario:

    - User A reads (does a select) and the current SCN is, for instance, 10
    - User B starts an update (insert, update, delete or DDL command);
    he starts a new transaction B with SCN 11 (last SCN + 1)

    - User A reads only blocks with SCN 10 (from the rollback segment), and transaction B does not exist for him

    - User C starts transaction C with SCN 12, etc.

    - Users B and C commit

    - User A is still in his query and reads SCN 10

    - User A finishes the query and starts a new query - the SCN is now 12,
    and he reads the committed transactions B and C

    regards
  28. Sorry, I do not get how that is different. As I understand it, "transaction start" means "generate the next TxId and register it as active"; this Id is used to find the correct versions. Does Oracle have some magic way to find the correct version for a transactional read without any information about current and active transactions?
    Yes, Oracle has magic. Every transaction id (SCN) increases by 1, and this is the scenario: User A reads (does a select) and the current SCN is, for instance, 10. User B starts an update (insert, update, delete or DDL command); he starts a new transaction B with SCN 11 (last SCN + 1). User A reads only blocks with SCN 10 (from the rollback segment), and transaction B does not exist for him. User C starts transaction C with SCN 12, etc. Users B and C commit. User A is still in his query and reads SCN 10. User A finishes the query and starts a new query - the SCN is now 12, and he reads the committed transactions B and C.

    I forgot the most important part:

    User A doesn't start a transaction (he only reads).
    User A doesn't block users B and C, user B doesn't block A and C,
    and C doesn't block A and B.
     - User B blocks user C (or the reverse) only when they write to the same row.

    It is true MVCC.

    regards
  29. I forgot the most important part: User A doesn't start a transaction (he only reads).
    The fact that A only reads data does not mean that there is no transaction running.
    User A doesn't block users B and C, user B doesn't block A and C, and C doesn't block A and B - user B blocks user C (or the reverse) only when they write to the same row. It is true MVCC.
    What if user A reads a row which has been written by user B?
  30. The fact that A only reads data does not mean that there is no transaction running.
    Fortunately, for Oracle it means that no transaction is running.
    I had to prove this on the Hibernate list, and I can do it again for you - or see this: http://forum.hibernate.org/viewtopic.php?t=933214
    What if user A reads a row which has been written by user B?
    Oracle reads the rollback block with the old SCN. When a user starts a transaction he writes a rollback block with SCN+1, and a user with an older read (select) command reads the old SCN - Oracle has a version (in rollback) for every transaction since startup (committed or not).
    It is MVCC, it is cool.

    regards
  31. Read/write conflicts do not exist in any MVCC implementation. That is true for all transactions, read-only and read/write. PostgreSQL uses an MVCC implementation too, but some exceptions exist, depending on the index type.
  32. Read/write conflicts do not exist in any MVCC implementation. That is true for all transactions, read-only and read/write. PostgreSQL uses an MVCC implementation too, but some exceptions exist, depending on the index type.

    Juozas,
    Read-only transactions don't exist in Oracle - there is no transaction for read-only work. Postgres starts a transaction for a select because it doesn't know whether the select calls a function that changes the database.
    Oracle knows this, because it analyzes functions/procedures better.
    I like PostgreSQL, but I need object dependencies - when I recreate or drop any object I then have to recreate all the dependent objects - it is ugly, and I can't work with that.
    Read-only transactions are important only for big OLTP applications.

    regards
  33. Yes, Oracle has magic. Every transaction id (SCN) increases by 1, and this is the scenario: User A reads (does a select) and the current SCN is, for instance, 10. User B starts an update (insert, update, delete or DDL command); he starts a new transaction B with SCN 11 (last SCN + 1). User A reads only blocks with SCN 10 (from the rollback segment), and transaction B does not exist for him. User C starts transaction C with SCN 12, etc. Users B and C commit. User A is still in his query and reads SCN 10. User A finishes the query and starts a new query - the SCN is now 12, and he reads the committed transactions B and C.

    That's not completely true.

    Transaction IDs are different from the SCN (System Change Number). The latter is incremented when a transaction is committed, not when a transaction starts.

    In your particular example, depending on the isolation level, what happens is either:
     - [if the isolation level is SERIALIZABLE] A, B, and C will see data with SCN 10; in this isolation level each statement in a transaction sees the data with the SCN from the start of the transaction
     - or [if the isolation level is READ COMMITTED] A will see data with SCN 10; B and C will initially see data with SCN 10, but then when one of the transactions commits (say B, for example) the statements in C that are executed after B's commit will see data with SCN 11 (since the B commit will increment the SCN, and in this isolation level each statement in a transaction sees data with the SCN from the start of the statement)

    See chapter 13 of the Oracle Database Concepts guide.
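    For reference, this is how the two modes are chosen in Oracle (READ COMMITTED being the default) - a small sketch:

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;        -- for the current transaction only
    ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;    -- for every subsequent transaction in the session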
  34. That's not completely true. Transaction IDs are different from the SCN (System Change Number). The latter is incremented when a transaction is committed, not when a transaction starts. In your particular example, depending on the isolation level, what happens is either: [if the isolation level is SERIALIZABLE] A, B, and C will see data with SCN 10, and in this isolation level each statement in a transaction sees the data with the SCN from the start of the transaction; or [if the isolation level is READ COMMITTED] A will see data with SCN 10, while B and C will initially see data with SCN 10, but when one of the transactions commits (say B, for example) the statements in C that are executed after B's commit will see data with SCN 11 (since the B commit will increment the SCN, and in this isolation level each statement in a transaction sees data with the SCN from the start of the statement). See chapter 13 of the Oracle Database Concepts guide.
      The important principle is that a read-only query doesn't acquire a transaction at any isolation level - my example was for the default behavior.

    regards
  35. Doe's oracle has some magic way to find correct version for transactional read without any information about current and active transactions ?

    Active transactions are irrelevant for a query (otherwise a dirty read could occur). Transactions committed after your query has started are irrelevant as well.

    At any point in time there is a single System Change Number (SCN) in the database, which is incremented each time a transaction is committed (well, almost - some internal database work may increment it too, see http://www.jlcomp.demon.co.uk/ch_01.html#Errata).

    When your query (select) starts, the database checks the current SCN, and then your query will see only data which was committed prior to the start of your query (i.e. data blocks that have an SCN <= the SCN of your query).

    Even if a DML statement modifies part of the data while your query is still running, your query won't be affected, since it will access the "old" versions of the data from the rollback segments (the new versions will have an SCN greater than your query's SCN).
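    Oracle 9i/10g actually expose this SCN-based versioning to the user through flashback queries - a small sketch (the table name and SCN value are illustrative):

    SELECT current_scn FROM v$database;                          -- the current SCN (10g)
    SELECT * FROM employees AS OF SCN 1234567 WHERE dept = 10;   -- the rows as they were at that SCN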
  36. Snapshot vs. Serializable

    The SNAPSHOT isolation level corresponds to the ANSI/ISO SERIALIZABLE isolation level (thus, Oracle calls it SERIALIZABLE quite correctly).

    I have always thought that Oracle does not support the ANSI/ISO serializable isolation level at all. Am I wrong? I found some references to back up my statement:

    http://tinyurl.com/5daaw
    http://tinyurl.com/58ot4
  37. Snapshot vs. Serializable

    I have always thought that Oracle does not support the ANSI/ISO serializable isolation level at all. Am I wrong? I found some references to back up my statement:
    http://tinyurl.com/5daaw
    http://tinyurl.com/58ot4
    As far as I can understand the SQL standard (without reading the actual specification, which isn't freely available), Oracle is compliant. SERIALIZABLE is defined in the SQL standard as a transaction level where dirty, non-repeatable and phantom reads cannot occur. It does not mean that the end state of the database (after executing some transactions) must be as if all transactions were executed one after another, i.e. "serially".

    As was mentioned in the second link, this can bite unsuspecting developers in the a**. Happened to me - I lost a whole day trying to figure out why a write skew was happening even under SERIALIZABLE...

    It would be a nice feature for Oracle (and other MVCC DBMSes) to have additional isolation levels where SELECTs would automatically lock rows or tables, so one wouldn't have to add "FOR UPDATE" to all statements when higher "isolation" is needed.
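    The usual workaround today is explicit pessimistic locking in the statement itself - a sketch in Oracle syntax (table and values are made up):

    SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;          -- blocks concurrent writers until commit
    SELECT balance FROM accounts WHERE id = 42 FOR UPDATE NOWAIT;   -- or fail immediately instead of waiting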
  38. Snapshot vs. Serializable

    As far as I can understand the SQL standard (without reading the actual specification, which isn't freely available), Oracle is compliant. SERIALIZABLE is defined in the SQL standard as a transaction level where dirty, non-repeatable and phantom reads cannot occur. It does not mean that the end state of the database (after executing some transactions) must be as if all transactions were executed one after another, i.e. "serially".

    No, it does :-)
    Please check out section 4.28 in the draft here:
    http://www.cs.pdx.edu/~len/587/sql-92.pdf

    "A serializable execution is defined to be an execution of the operations of concurrently
    executing SQL-transactions that produces the same effect as some serial execution of those same
    SQL-transactions. A serial execution is one in which each SQL-transaction executes to completion
    before the next SQL-transaction begins."

    Then:

    "The execution of a <rollback statement> may be initiated implicitly by an implementation when it detects the inability to guarantee the serializability of two or more concurrent SQL-transactions. When this error occurs, an exception condition is raised: transaction rollback-serialization failure."

    Preventing the bad reads is necessary but not sufficient to be serializable; there must also be logic that "detects the inability to guarantee the serializability".

    Seems like both ORACLE and MS SQL don't support ANSI SERIALIZABLE :-)

    Maxim Kramarenko
    TrackStudio - Hierarchical Bug Tracking Software.
  39. Snapshot vs. Serializable

    Yes, serializability in concurrency control is not the same as isolation level.
  40. Snapshot vs. Serializable

    Yes, serializability in concurrency control is not the same as isolation level.

    I am not sure exactly what the "Yes" refers to, but they should be the same thing :-)

    From 4.28
    "The execution of concurrent SQL-transactions at isolation level SERIALIZABLE is guaranteed to be
    serializable."

    Maxim Kramarenko,
    TrackStudio - Hierarchical Bug Tracking Software
  41. Snapshot vs. Serializable

    "The execution of concurrent SQL-transactions at isolation level SERIALIZABLE is guaranteed to be serializable."
    That would be true if they were the same:

    "if the isolation level is SERIALIZABLE then the execution is serializable, and if the execution is serializable then the isolation level is SERIALIZABLE"
    It would be interesting to read the proof, if it exists.
  42. Snapshot vs. Serializable

    No, it does :-)
    Please check out section 4.28 in the draft here:
    http://www.cs.pdx.edu/~len/587/sql-92.pdf
    Thanks for the link! I have been searching for such a document for so long - obviously without any success, or my explanation wouldn't be so blatantly wrong.
    Then: "The execution of a <rollback statement> may be initiated implicitly by an implementation when it detects the inability to guarantee the serializability of two or more concurrent SQL-transactions. When this error occurs, an exception condition is raised: transaction rollback-serialization failure." Preventing the bad reads is necessary but not sufficient to be serializable; there must also be logic that "detects the inability to guarantee the serializability".
    Hm, I don't interpret this statement the same way you do. Note: "The execution ... may be initiated...". To me, this looks like a relaxation of the SERIALIZABLE rules, so that a transaction may be rolled back by the DBMS "even when the transaction did nothing wrong". But if a DBMS can always schedule every transaction so that no deadlock and no resource shortage occurs (and the end result is the same as in some serial transaction schedule), it is free not to implement this logic. Tough luck finding one, though ;-)
    Seems like both ORACLE and MS SQL don't support ANSI SERIALIZABLE :-)
    Why am I not surprised at all?
  43. Snapshot vs. Serializable

    Seems like both ORACLE and MS SQL don't support ANSI SERIALIZABLE :-)
    Why am I not surprised at all?

    But at least ORACLE should support ANSI SERIALIZABLE:

    "The SERIALIZALE setting specifies serializable transaction isolation mode as defined in the SQL92 standard. If a serializable transaction contains data manipulation language (DML) that attempts to update any resource that may have been updated in a transaction uncommitted at the start of the serializable transaction, then the DML statement fails."

    I can't explain to myself how this can happen.

    Another strange point: TRANSACTION_SERIALIZABLE in Java is not equal to SERIALIZABLE in the standard:
    http://java.sun.com/j2se/1.4.2/docs/api/java/sql/Connection.html
    "TRANSACTION_SERIALIZABLE. A constant indicating that dirty reads, non-repeatable reads and phantom reads are prevented."

    Maxim Kramarenko,
    TrackStudio - Hierarchical Bug Tracking Software
  44. You guys are all confused

    Oracle supports ANSI serialization by rolling back your xact if it cannot keep its behavior isolated from other transactions. SQL Server has proper serialization support. JDBC serialization is the same thing as ANSI.

    Serialization/isolation level 3 is typically defined by the phenomena that it prevents:

    dirty reads (see uncommitted data)
    non-repeatable reads (read the same data 2x, see rows changed)
    phantoms (read the same data 2x, see new rows)

    Oracle's support isn't particularly useful, though it is adequate to get them TPC-C certification. But you can imagine that forcing users to code their applications to restart, rather than preventing phantoms properly, would be very painful for anyone that needs iso level 3.

    By the way, this report is replete with all sorts of misunderstandings about both products. If you are serious about databases you will steer clear of any of this noise. My favorite stupid statement is the following, about Oracle not having the same locking modes as SQL Server: "There is no need for those pessimistic locking modes as the same performance is achieved with Oracle optimistic locking mode". This is patently ridiculous and shows that the authors haven't spent much time writing database applications and don't understand how Oracle's model works when there are concurrent readers and writers. Don't get me wrong, Oracle's model has some very nice properties, but these guys with their "experience and expertise" don't have a clue.
  45. dumb question

    I've never written a database, so I'm curious about the specifics.
    Oracle supports ANSI serialization by rolling back your xact if it cannot keep its behavior isolated from other transactions. SQL Server has proper serialization support. JDBC serialization is the same thing as ANSI. Serialization/isolation level 3 is typically defined by the phenomena that it prevents:
    dirty reads (see uncommitted data)
    non-repeatable reads (read the same data 2x, see rows changed)
    phantoms (read the same data 2x, see new rows)

    Oracle's support isn't particularly useful, though it is adequate to get them TPCC certification.

    I believe I understand isolation level three (repeatable read), but I don't get why Oracle's support is not useful. There have only been a few cases in my work experience where repeatable read was really necessary. In those cases, we would take the penalty and lock the rows. Most of the time optimistic locking was sufficient for the requirements we were given.
    By the way, this report is replete with all sorts of misunderstandings about both products. If you are serious about databases you will steer clear of any of this noise. My favorite stupid statement is the following, about Oracle not having the same locking modes as SQL Server: "There is no need for those pessimistic locking modes as the same performance is achieved with Oracle optimistic locking mode". This is patently ridiculous and shows that the authors haven't spent much time writing database applications and don't understand how Oracle's model works when there are concurrent readers and writers. Don't get me wrong, Oracle's model has some very nice properties, but these guys with their "experience and expertise" don't have a clue.

    That's a bit harsh :) What I got out of the article is that the authors have experience with a specific type of application and know those cases well. It's impossible for them to know everything about both products.

    You could send them an email and suggest some revisions. That way it gets rid of the noise, they learn some things, and you get the satisfaction of helping them and the community. Deep, intimate knowledge of database internals is hard to obtain, so I'm sure everyone would be interested in your knowledge of the nuts and bolts.
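    For what it's worth, the optimistic locking mentioned above usually boils down to a version column - a sketch (names are invented):

    UPDATE orders
       SET status = 'shipped', version = version + 1
     WHERE id = 42 AND version = 7;   -- 7 is the version that was read earlier
    -- if zero rows were updated, somebody else changed the row first: re-read and retry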
  46. Cameron, since you have been using SQL Server for a while I'm sure you know it uses page level locking instead of row level locking (perhaps from its Sybase roots?). This is great for toy applications, but any kind of transaction load starts causing massive page locks. I'm assuming Yukon will allow row level locks. Personally I would use Postgres or other OS DBs in place of MS SQL Server 2000 for workgroup style loads. Oracle and DB2 (on iSeries) do not suffer with large transactional loads.

    True from an out-of-date and theoretical standpoint, but I can assure you that Sybase never suffered from transactional throughput in the hands of a good developer .. lots of options including scatter-based key generation with clustered indexes to rows padded to just over 1000 bytes (row == page) .. lots of tricks depending on the problem.

    Same with Oracle .. extremely tunable, trickable, etc. Also a bear at times ;-) .. I found it much harder to learn.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  47. I can assure you that Sybase never suffered from transactional throughput in the hands of a good developer.
    Some things just need to be caressed to work properly ;)
  48. 250 connection limit

    Does anyone know if Yukon will still have the same connection limit as SQL Server 2000? I ask because back in 99/00 I was stress testing an application, and based on the requirements I was given, 250 just wasn't enough. This was in a setup with connection pooling; I was simulating a cluster of web servers connecting to the same database.
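    If the limit you hit was the server-side 'user connections' option (rather than a client-side pool setting), it can be inspected and raised with sp_configure - a sketch:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'user connections';         -- 0 means the server manages it, up to 32767
    EXEC sp_configure 'user connections', 500;
    RECONFIGURE;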
  49. I can assure you that Sybase never suffered from transactional throughput in the hands of a good developer.
    Some things just need to be caressed to work properly ;)

    How about cussing instead of caressing?

    I've had just as much fun with Sybase as I have Squeal Server. I must not be a good developer. I've not had the same issue with DB2, Oracle, ...
  50. Yes, that's true. They got around this by allowing *hints* in their T-SQL dialect, but... hey! that's a pain!!!

    In a broad sense, SQL Server is no more than MS Access on steroids when compared to real enterprise database products.
  51. In the document, it is said that "Identity is a usefull feature that Oracle historically does not have is the identity.".

    I'm sorry, but after trying Sybase identity (the famous identity gap) and MySQL identity, I fell in love with Oracle's sequence concept.

    It is a very smart way to manage a table's keys.
    If you only want an auto-counter, use it directly in the insert query:
    INSERT INTO MY_TABLE VALUES (MY_SEQUENCE.NEXTVAL, 'firstname', 'lastname');

    If you want to retrieve a counter value to use in multiple INSERTs:
    SELECT MY_SEQUENCE.NEXTVAL FROM DUAL

    With identity, it's always hard to retrieve the identity's value.
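    A side-by-side sketch of the two approaches, for anyone who hasn't used both (table, column and sequence names are just examples):

    -- Oracle
    CREATE SEQUENCE my_sequence START WITH 1 INCREMENT BY 1;
    INSERT INTO my_table VALUES (my_sequence.NEXTVAL, 'firstname', 'lastname');
    SELECT my_sequence.CURRVAL FROM dual;   -- the value just generated in this session

    -- SQL Server 2000, for comparison
    INSERT INTO my_table (firstname, lastname) VALUES ('firstname', 'lastname');
    SELECT SCOPE_IDENTITY();                -- identity value generated by the last insert in this scope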
  52. In the document, it is said that "Identity is a usefull feature that Oracle historically does not have is the identity.".

    I'm sorry, but after trying Sybase identity (the famous identity gap) and MySQL identity, I fell in love with Oracle's sequence concept.

    It is a very smart way to manage a table's keys.
    If you only want an auto-counter, use it directly in the insert query:
    INSERT INTO MY_TABLE VALUES (MY_SEQUENCE.NEXTVAL, 'firstname', 'lastname');

    If you want to retrieve a counter value to use in multiple INSERTs:
    SELECT MY_SEQUENCE.NEXTVAL FROM DUAL

    With identity, it's always hard to retrieve the identity's value.

    This is soooooooooo true!

    In fact, you can prove that it's better using mathematical principles, because the IDENTITY concept is non-recursive, while SEQUENCE still works! (This becomes important when you are completely wrapping or virtualizing a database .. e.g. when doing database federation of different server types.)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  53. Short Sighted

    "And Linux is secure, most sercure corps such as finacial industry bans windows due to security. End of story! You can't relink any MS product to make it secure, since no source."


    If only life were so simple that one could ever say "end of story". It's never the end of the story in business, Vic, and when you have some more experience I trust you'll learn that. Hang in there, tiger.

    Tony
  54. Sql Server 2005 Express = Free

    One other thing is that for "department level servers and small/mid range applications," there are much less expensive and more portable options, like Postgres and SAPDB. If cost weren't an issue, MSSQL would almost always win over those (IMHO .. I happen to think MSSQL is pretty good, having personally started out by learning Sybase years ago,) but cost _can_ be a deciding issue for small apps.

    SQL Server 2005 Express will be free as long as you can live with the following:

    1 CPU
    1 GB RAM
    4 GB database
  55. Sql Server 2005 Express = Free

    Unfortunately, this would mean it is not for production use.
  56. filled with good details

    The article was filled with some nice details, which I usually get from Oracle DBA friends. It looks like Yukon is making good progress and finally getting better cluster support. The only thing I wish Oracle would fix is PL/SQL, which isn't as nice as T-SQL from a syntax perspective. That's mostly a personal preference, so it's not like PL/SQL is broken or anything.

    Thanks for writing the article and sticking to the facts.
  57. .NET in MSSQL vs. Java in Oracle

    On .NET in MSSQL vs. Java in Oracle, the author is missing the following information: we have discontinued support for the CORBA and EJB stack since release 9iR2; however, we do provide a JPublisher-generated client stub to invoke Java-in-the-database without a user-provided PL/SQL wrapper.
    See the JPublisher documentation @ http://download-west.oracle.com/docs/cd/B14117_01/java.101/b10983.pdf
    (Publishing Server-side Java Classes).

    Kuassi Mensah
    Java-in-the-Database, JPublisher and JDBC Product Management
    Oracle
  58. Weakest part is OS.

    Here is the difference with anything MS: it runs on MS Windoze.
    Let's say you have a large application that is heavily loaded, and the web app displays images that are managed in the db.
    The JVM on MS is 32-bit, so DAO caching is limited to less than 2 gig of RAM.
    W/ Linux based solutions, you can do DAO caching over 2 gig.
    And Linux is secure; the most secure corps, such as the financial industry, ban Windows due to security. End of story! You can't relink any MS product to make it secure, since there is no source.
    What if you did a mix of Linux web app and MS SQL back end?
    Well, the stress test that I did w/ dual gigabit cards showed that the LAN cards saturated.
    So best practice to me is:
    Linux app server and DB on the same box, 64-bit, to avoid the traffic btwn app server and DB.
    Also, always use a caching controller, as per TPC.org results.
    I am a certified MS SQL and Sybase performance and tuning instructor. :-)

    .V
    ps: I think that PostgreSQL is better than the rest, but what do I know. It has a sweet JDBC 3 driver, stored procedures, ANSI SQL, simple full text search. Oh, and it's FREE to distribute on ANY OS.
    pps: Ex. site of mine: http://wiki.apache.org/struts/PoweredBy
  59. Weakest part is OS.

    Here is the difference with anything MS: it runs on MS Windoze.

    True, but you mis-spelled Windows.

    You could always run it inside VMWare inside Linux ;-)
    Let's say you have a large application that is heavily loaded, and the web app displays images that are managed in the db.
    The JVM on MS is 32-bit, so DAO caching is limited to less than 2 gig of RAM.
    W/ Linux based solutions, you can do DAO caching over 2 gig.

    You can cache over 2GB on Windows, just not all on-heap (or not all on a single heap.) We have customers that cache over 2GB on Windows ;-)

    Also, there is a 64-bit Windows, supported by both Sun's and BEA's JVMs.
    And Linux is secure; the most secure corps, such as the financial industry, ban Windows due to security.

    Linux has a more secure architecture, although it's not as good as Unix. Nonetheless, I've actually run into Windows servers being used in financial services .. rare, but it is there.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  60. Weakest part is OS.

    True, but you mis-spelled Windows.
    You can cache over 2GB on Windows, just not all on-heap (or not all on a single heap.) We have customers that cache over 2GB on Windows ;-)
    Win-doze: as in "doze" because it's slower, what with all the virus and anti-virus memory-resident crap.

    No, you can't do a DAO cache in a Windoze JVM, and no customers could possibly be doing it.
    Ex:
    I have an AMD 64 PC. There is no such thing as a Win-doze 64 OS, it's only 32 (OK, the one on MSDN in pre-release does not work!)
    Therefore you can't install a 64-bit JVM; it complains that you have a 32-bit OS. I tried. (I love JRockit, but that is Itanic-only, and again, no such Windoze).
    Since there is no 64-bit JVM, the JVM can't address the RAM.
    Therefore my DAO (iBATIS in my case) can't address more than 2 gig.
    Am I wrong? Is there a setting in a 32-bit JVM that lets you address > 2G?

    MS SQL can cache, but not the DAO.
    Yes, one day there will be a Windoze 64 and this will work, but there is always security on servers, and in Windoze you can't re-link and there are no chambers like Linux.
    I do not think anyone believes that heavy-lifting servers and their market share are even threatened by Windoze. Windoze is OK for a few small departmental servers. If you have a server farm, the cost of operations kills it.
    .V
    ps: Itanic = Itanium + Titanic.
  61. Weakest part is OS.

    No, you can't do a DAO cache in a Windoze JVM, and no customers could possibly be doing it. Ex: I have an AMD 64 PC. There is no such thing as a Win-doze 64 OS, it's only 32 (OK, the one on MSDN in pre-release does not work!) Therefore you can't install a 64-bit JVM; it complains that you have a 32-bit OS. I tried. (I love JRockit, but that is Itanic-only, and again, no such Windoze). Since there is no 64-bit JVM, the JVM can't address the RAM. Therefore my DAO (iBATIS in my case) can't address more than 2 gig. Am I wrong? Is there a setting in a 32-bit JVM that lets you address > 2G?

    Shared memory across multiple JVMs on multiple Windows servers .. unlimited virtual cache size even with 1.6GB Windows JVM limit. ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  62. Weakest part is OS.

    Shared memory across multiple JVMs on multiple Windows servers .. unlimited virtual cache size even with 1.6GB Windows JVM limit. ;-)

    ?? How?
    What about network traffic?

    .V
  63. Weakest part is OS.

    Hi Vic,
    Shared memory across multiple JVMs on multiple Windows servers .. unlimited virtual cache size even with 1.6GB Windows JVM limit. ;-)

    ?? How? What about network traffic?
    .V

    Yes, of course -- if it is distributed across multiple servers. No different from any clustered shared memory in that respect.

    Main difference is that instead of sharing / distributing pages of memory, we share / distribute objects.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  64. Weakest part is OS.

    Shared memory across multiple JVMs on multiple Windows servers .. unlimited virtual cache size even with 1.6GB Windows JVM limit. ;-)

    Such fun toys to play with. Out of curiosity, what type of network topology is recommended?
  65. Weakest part is OS.

    Such fun toys to play with. Out of curiosity, what type of network topology is recommended?

    switched ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  66. funny

    switched ;-)

    That's a good joke. I guess I should have been more specific and asked "Is a hypercube, toroidal, or star topology recommended, and which is more optimal with Coherence?" Does the old rule of keeping the hops between nodes to two switches still apply even with multi-gigabit switches?
  67. funny

    I guess I should have been more specific and asked "Is a hypercube, toroidal, or star topology recommended, and which is more optimal with Coherence?" Does the old rule of keeping the hops between nodes to two switches still apply even with multi-gigabit switches?

    One interesting thing with GigE compared to Fast Ethernet compared to 10Mb Ethernet is that the latency has actually dropped in line with the higher throughput, with only a few exceptions. So if you look at switching speeds for GigE, they are literally 100x as fast (as opposed to just the wire's theoretical throughput limit going up 100x) so while it may be important to optimize the network, it's not _as_ important any more.

    Also, as always, it's still very important to avoid going out on the network in the first place .. ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  68. mucho gracias

    One interesting thing with GigE compared to Fast Ethernet compared to 10Mb Ethernet is that the latency has actually dropped in line with the higher throughput, with only a few exceptions. So if you look at switching speeds for GigE, they are literally 100x as fast (as opposed to just the wire's theoretical throughput limit going up 100x) so while it may be important to optimize the network, it's not _as_ important any more. Also, as always, it's still very important to avoid going out on the network in the first place .. ;-)

    Thanks for answering my question. Now if only I could set up a network of 40 nodes to play with, I could see the differences for myself.
  69. Oracle is being Netscaped

    "NET in MSSQL vs. Java in Oracle It is looks like MSSQL 2005 has a serious advantage over Oracle".
    And best of all the free version is not throttled anymore!

    Microsoft’s marketshare has already surpassed Oracle:
    "Top Deployed Databases. When respondents were asked which databases were in use at their company, the lion’s share went to Microsoft, with 78 percent saying that SQL Server was deployed. It was followed closely by Oracle, at 55 percent
    Integration and Reporting Study conducted by BZ Research
    http://sdtimes.com/news/105/story15.htm

    From the "pious expectations" department,

    "Oracle Could Regain Database Market Share"
    http://www.forbes.com/markets/2004/03/12/0312automarketscan10.html

    Regards
    Rolf Tollerud
  70. Oracle is being Netscaped

    Microsoft's market share has already surpassed Oracle's..

    Rolf, there are more Java installs than Windows installs out there .. but it's a bit unfair to compare cell phones and office PCs, isn't it?

    You have got to learn to tie your emotional well-being to something other than a company. See also fanboy ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  71. back so soon

    I thought Rolf was going to take a break from TSS.com. Why back so soon :)
    </joke>
  72. sorry

    Video meliora proboque deteriora sequor ("I see the better and approve it, but I follow the worse")
  73. Oracle waiting around the corner

    The Middleware Company
    Company Info:

    "The Middleware Company is the world's leading knowledge network for
    middleware professionals enabling developers, architects..."


    "TheServerSide Joins TechTarget, TMC discontinued"

    It seems strange that they discontinue "the world's leading company"? :)

    So it was disclosed as just another Potemkin façade then, as the Java EJB application servers were, and as so many other things in the non-MS world. Oracle is next in line.

    "it's a bit unfair to compare cell phones and office PCs"

    MS now gives away SQL Server 2005 for projects that can be handled with one PC with 1 GB RAM and 1 CPU. That means that they are not interested in the small market.

    As one such server (+ one standby) can easily handle two to three thousand simultaneous users, the real market segment of SQL Server is more or less the same as Oracle's.

    Regards
    Rolf Tollerud
  74. clarification on simultaneous

    As one such server (+ one standby) can easily handle two to three thousand simultaneous users, the real market segment of SQL Server is more or less the same as Oracle's.

    Did you mean simultaneous users, queries per second or concurrent connections? Supporting thousands of users is no big deal if each person is making 1 query every 10 minutes. It's not even a big deal if each person is making a query once every 2 minutes.

    Since SQL Server 2000 has a connection pool limit of 250, I'm going to guess you don't mean thousands of concurrent queries. By design, the maximum number of concurrent queries SQL Server 2000 can handle is 250. That's assuming there are 250 open connections and 250 queries are executed at exactly the same time. In reality, we know that SQL Server sets the ratio of worker threads to 2x the number of CPUs by default.

    Not many people actually need to support 1K concurrent connections with an average of 15-20% executing queries. When you do, it usually means using COM+ for the mid-tier on several dedicated systems. Oracle, on the other hand, will handle 1K concurrent connections even on a cheap dual-CPU box. It runs slowly under that load, but from first-hand experience it will handle the load reliably.
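    For the record, both server-side knobs can be checked with sp_configure on SQL Server 2000 - a sketch:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max worker threads';   -- the configured worker-thread ceiling
    EXEC sp_configure 'user connections';     -- 0 = managed by the server, up to 32767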
  75. Peter, how am I going to stop posting if you persist in clouding the issue with boring details? The point is that one or two (one for failover) computers can do a lot of work - "a lot". Exactly how much is irrelevant. What is interesting here is that MS is giving away the whole low-end market to focus on the middle and high end.

    Some time it would be fun to set up a test with a quality ($10000 - $20000) server and press it to the max, but not now. Concentrate on the gist of my post instead. The free version is changing the rules of the game.

    Regards
    Rolf Tollerud
  76. I have to laugh

    Peter, how am I going to stop posting if you persist in clouding the issue with boring details? The point is that one or two (one for failover) computers can do a lot of work - "a lot". Exactly how much is irrelevant. What is interesting here is that MS is giving away the whole low-end market to focus on the middle and high end. Some time it would be fun to set up a test with a quality ($10000 - $20000) server and press it to the max, but not now. Concentrate on the gist of my post instead. The free version is changing the rules of the game.

    How is that clouding the issue? I was simply asking for clarification on your definition of simultaneous users, since obviously not everyone has the same definition of what "simultaneous" means. I'll go back to my boring details now, since that's actually what gets the job done and steers me clear of marketing hyperbole.

    By the way, a 10-20K dollar server can't really handle 2K concurrent connections in a reliable way 24x7. Is that from first-hand experience, or someone else's blog? :)

    By the way, I'm just joking, since I doubt you'll actually answer with boring details.
  77. Weakest part is OS.

    I'm interested in speed. Is PostgreSQL really faster than Oracle on the same HW? I'm still not convinced about PostgreSQL's speed because all the caching is done in the OS. In a real database you need caching priorities, e.g. the index cache should survive longer than the data cache.
    Is there somebody here who has compared Oracle with PostgreSQL with real data on the same HW?
  78. Weakest part is OS - pgSQL

    Is PostgreSQL really faster than Oracle on the same HW? I'm still not convinced ...

    There are comparisons on the pgSQL site.

    Ex: Red Hat DB is pgSQL!

    In any case I do a stress test of the entire system - who knows what happens! That way I am not worried about making a decision.

    .V
  79. Weakest part is OS - pgSQL

    No. Maybe if you have a simple schema, but with a complicated database Oracle's MVCC is superior.

    regards
  80. Weakest part is OS.

    I'm interested in speed. Is PostgreSQL really faster than Oracle on the same HW? I'm still not convinced about PostgreSQL's speed because all the caching is done in the OS. In a real database you need caching priorities, e.g. the index cache should survive longer than the data cache. Is there somebody here who has compared Oracle with PostgreSQL with real data on the same HW?

    Postgres uses MVCC like Oracle does - it's very responsive. It's quite fast, a bit slower than Oracle on the same hardware. But I have used Postgres only for simple, non-critical applications with few concurrent hits. You must know that Postgres doesn't have support for threading (on Unices; I'm not sure about Windows). So it probably performs much slower on heavy SMP machines than Oracle (in shared server mode).

    Artur
  81. Weakest part is OS.

    Postgres uses MVCC like Oracle does - it's very responsive. It's quite fast, a bit slower than Oracle on the same hardware.
    Probably it depends on the use case; I found PostgreSQL very performant and stable as a backend for web applications. Connection startup is very fast, which makes it possible to drop connection pools and makes applications more stable. I suppose Oracle is faster than PostgreSQL in many ways, but I found both performant.
    I am not sure about MS SQL; it looks a bit useless if I cannot install it on my server (it is Windows-only, isn't it?)
  82. Weakest part is OS.

    It probably depends on the use case. I found PostgreSQL very performant and stable as a backend for web applications. [..] I suppose Oracle is faster than PostgreSQL in many ways, but I found both perform well.

    Under heavy load, though? I don't think I'd use MySQL or Postgres under any real load .. I've only heard horror stories (both reliability and throughput) about either one of them when they are stressed. In fact, when TSS used to go down all the time, it was the load on Postgres that killed the database and thus the site ..

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  83. Weakest part is OS.

    Under heavy load, though? I don't think I'd use MySQL or Postgres under any real load ..
    How are MySQL and PostgreSQL similar?
    I've only heard horror stories (both reliability and throughput) about either one of them when they are stressed. In fact, when TSS used to go down all the time, it was the load on Postgres that killed the database and thus the site .. Peace, Cameron Purdy
    I have heard many strange stories about TSS too; it looks like IBM tools were bad for them as well (there was some case study about it).
    PostgreSQL performs very well, but it needs some tuning to make it perform; the same is true of most databases.
  84. Weakest part is OS.

    How are MySQL and PostgreSQL similar?

    They are both [supposedly] free, and thus are an alternative to purchasing Oracle or SQL Server licenses.
    PostgreSQL performs very well, but it needs some tuning to make it perform; the same is true of most databases.

    First, from what I've seen, Postgres only performs well if you are looking at one-off transaction processing, not heavy parallel load. So I would not use Postgres for a transaction-heavy system, even though I think it's a great fit for other types of applications (e.g. single user.)

    Second, if it gets hammered, the database processes shouldn't die. Who cares how much money you saved on the database (or even how fast it is) if it can't stay up ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  85. MySQL, PostgreSQL.

    Mixed up:
    They are both free, and thus are an alternative to purchasing Oracle or SQL Server licenses.
    PostgreSQL performs very well, but it needs some tuning to make it perform; the same is true of most databases. ... from what I've seen, Postgres only performs well if you are looking at one-off transaction processing, not heavy parallel load. So I would not use Postgres for a transaction-heavy system

    The MySQL license says you can't use it in production on Windows.

    PgSQL you can use any way you want - even legally rename the product, remove the attribution, and call it your own DB.

    In my testing of a heavily transactional DB on the same large server, I found PgSQL easily outdid Oracle.
    Also, it was Oracle that required a lot of config.
    In PgSQL, we just told it to use 4 gigs of RAM.
    PgSQL is VERY similar to Oracle internally; if you know one, you know the other, but PgSQL is easier to install.
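
    (For the curious, the "use the RAM" part is just a few lines in postgresql.conf. Illustrative values only, not our exact settings, and the parameter names below are from the 8.x series - older releases call work_mem sort_mem:)

        # postgresql.conf - memory-related settings (illustrative)
        shared_buffers = 262144          # 8 kB buffers, roughly 2 GB of shared cache
        effective_cache_size = 393216    # planner hint: ~3 GB of OS file cache
        work_mem = 16384                 # per-sort/hash memory, in kB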

    (if anyone needs help w/ larger pgSQL let me know - but there are others listed on PostgreSQL.org)

    .V
  86. curious

    If you don't mind sharing, how many connections and inserts/updates were used in your test? I've never stress-tested PostgreSQL myself, so I'm curious what kinds of loads it can support in terms of connections and concurrent inserts/updates.
  87. curious

    I do not have a reason to test database performance; marketing can do it better anyway. But there are many reasons to test APPLICATION performance.
  88. MySQL, PostgreSQL.

    Mixed up:
    They are both free, and thus are an alternative to purchasing Oracle or SQL Server licenses. PostgreSQL performs very well, but it needs some tuning to make it perform; the same is true of most databases. ... from what I've seen, Postgres only performs well if you are looking at one-off transaction processing, not heavy parallel load. So I would not use Postgres for a transaction-heavy system
    The MySQL license says you can't use it in production on Windows.

    No, it does not. (I don't think it ever has.) Yes, 5 years or so ago there wasn't a 'free' (as in freedom or beer, with a nod towards Cameron's recent survey) version for Windows, but that's very old history, as are many arguments against MySQL.
  89. MySQL, PostgreSQL.

    The MySQL license says you can't use it in production on Windows.

    First you call it Windoze, now you want to run software on it ;-)

    Seriously though, if you're going to use MySQL for a non-GPL application, please do some research first:

    http://www.mysql.com/company/legal/licensing/
    http://www.intencha.com/adrian/jdbc_mysql_and_the_gpl.php
    http://mmmysql.sourceforge.net/

    I like the MySQL software, but I've been made uncomfortable by some of the comments (which could be FUD?) floating around. I hope that my comment here is not construed as FUD.
    In my testing of a heavily transactional DB on the same large server, I found PgSQL easily outdid Oracle.

    OK, your results are much better than mine, but I believe you .. every benchmark stresses something different, and in a different way .. and software gets better over time.

    I think your comment about testing a particular application's usage of a database is a good one to stick with ;-) .. and I should find some time to look at an up-to-date build of Postgres.

    (Vic, this comment is generic, and is not directed at you.) The concept that someone uses Oracle (or any commercial product) because "they want to have someone to yell at" really does the Oracle software and the entire market that supports it an injustice. There's a lot of well-built and well-tested technology inside Oracle (and other commercial database engines like DB2, Sybase, MSSQL, etc.) and people pay for the trust factor, not just for the "yelling" factor.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  90. MySQL, PostgreSQL.

    The MySQL license says you can't use it in production on Windows.

    Where does the MySQL license say it can't be deployed in production on Windows?

    I'm asking because a lot of CodeFutures' customers use MySQL on Windows.


    PJ Murray

    CodeFutures Software

    Java Code Generation for Data Persistence
  91. Weakest part is OS.

    It does not surprise me; many people like to blame tools for failures. A good alternative is purchasing licenses - you have more reason to blame the tool if you pay for it. All free databases are similar and 'crappy' for this reason.
  92. OpenSta + DBMonster

    My developer used DBMonster to insert records and OpenSta to run scripts simulating 10 different types of users.
    The scripts had a mix of inserts, etc.; I have no idea what that mix is now.

    No matter what the app, or how much you paid for SW/HW, you should stress test.

    .V
  93. Weakest part is OS

    Postgres only performs well if you are looking at one-off transaction processing, not heavy parallel load.

    I have been using Postgres under parallel load on a high-traffic website, and have seen no problems so far: it's been very stable, and reasonably fast.

    I always got the impression that MySQL was fast for low loads, but Postgres showed its strengths at high loads..
  94. Weakest part is OS.

    It probably depends on the use case. I found PostgreSQL very performant and stable as a backend for web applications. [..] I suppose Oracle is faster than PostgreSQL in many ways, but I found both perform well.
    Under heavy load, though? I don't think I'd use MySQL or Postgres under any real load .. I've only heard horror stories (both reliability as well as throughput) about either one of them when they are stressed.

    At least with MySQL, we've found almost all horror stories of throughput or stability under load to be based on misconfiguration or poor data model choices (as is the case with _any_ database).

    The current situation is that there are a _lot_ more people with Oracle skills out there who know how to get the best performance under heavy load, and fewer such people with MySQL or PostgreSQL skills.

    If MySQL didn't work under heavy load, you wouldn't find organizations like Sabre using it to do all of their low-fare searches for properties like Travelocity, or Yahoo! using it to power many of their web-based applications, or Google using it for infrastructure stuff.
  95. Weakest part is OS.

    If MySQL didn't work under heavy load, you wouldn't find organizations like Sabre using it to do all of their low-fare searches for properties like Travelocity or Yahoo! using it to power many of their web-based applications, or Google using it for infrastructure stuff.

    I tried to be explicit about the transactional workload, as opposed to queries. Well-tuned, MySQL can be very good at queries.

    I think Vic's comments on load-testing are very appropriate -- you need to test your specific application on your choice of software etc. to make sure that it works, works well, handles the load you expect (and then some), etc. Buying Oracle licenses doesn't absolve you of that responsibility ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  96. Weakest part is OS.

    If MySQL didn't work under heavy load, you wouldn't find organizations like Sabre using it to do all of their low-fare searches for properties like Travelocity or Yahoo! using it to power many of their web-based applications, or Google using it for infrastructure stuff.
    I tried to be explicit about the transactional workload, as opposed to queries. Well-tuned, MySQL can be very good at queries.

    Cameron,

    Actually, quite a few of our larger customers are running transactional workloads (Sabre being one of them - yes, you do a lot of querying when searching airfares, but the fare rules and prices change so often that they're actually seeing on the order of 1.2 million write transactions per day, many of them touching a large portion of the entire database, along with approximately 50 million 'queries' per day, to use your terminology :).

    They are using InnoDB, the transactional storage engine, exclusively in their low-fare search application.

    It seems your anecdotal evidence might be slightly dated (of course, that's almost always true of any 'stories from the field', so I'm not holding it against you ;) ).
    I think Vic's comments on load-testing are very appropriate -- you need to test your specific application on your choice of software etc. to make sure that it works, works well, handles the load you expect (and then some), etc. Buying Oracle licenses doesn't absolve you of that responsibility ;-)

    Agreed.

       -Mark
  97. Weakest part is OS.

    We have a large Windows server install base - and every time there is a major virus/worm outbreak, we have 24hr shifts patching and rebooting servers.
    Then occasionally we have servers that don't come back after patch installation. More work.

    Due to security issues (since Slammer), having a local copy of SQL Server is forbidden. PCs are port-scanned for running SQL Server instances.

    While SQL Server is nice, this is a situation you want to avoid.

    -Nick
  98. it is just no fun anymore

    A good example of why there is no point in posting here anymore. Like Cameron's emotional outbursts, your post has about the same level of credibility as Al Jazeera.

    The Java camp has become desperate beyond any sense or reason.
  99. vacation?

    Maybe you need a vacation? I hear Disney cruises are cheap these days. Treat yourself and take a nice, month-long vacation.
    <joke>
  100. Interesting footnotes

    Some of the footnotes in the article are very interesting. One of the references points to an article on Microsoft's website. Microsoft's explanation of snapshot isolation is a bit different:
    In the commercial relational database management system world, prior to SQL Server 2005, there were two camps. The first were the systems that implemented a pessimistic concurrency based on locking schemes that enable support for the four ANSI-standard isolation levels as defined in the SQL-92 standard (ANSI X3.135-1992, American National Standard for Information Systems — Database Language — SQL, November, 1992) – these systems include Microsoft SQL Server, IBM DB2 (all code bases/platforms) and Sybase Adaptive Server. The second camp implemented a non-standard transaction isolation model with optimistic concurrency based on retaining a view of the data as of the start of the transaction – the only system in this camp was Oracle.

    This division has led to three types of software developer:

    1. Develops on Oracle, ports to Microsoft SQL Server
     
    2. Develops on Microsoft SQL Server, ports to Oracle
     
    3. Develops and optimizes for both camps.
     
    Generally only the large software companies can afford to be "type 3" – companies such as SAP, Siebel and Peoplesoft. Most developers must pick between type 1 or type 2, their choice normally being predicated by the degree to which the Unix market matters to their sales.

    Hopefully all this competition will mean cheaper licensing costs from both and more improvements in the near future.
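
    For what it's worth, the new behavior in Yukon is opt-in per database; going by the Beta 2 documentation it looks roughly like this (database and table names are made up):

        -- enable row versioning for a database
        ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;

        -- a session can then read a consistent snapshot instead of blocking on writers
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRANSACTION;
        SELECT SUM(amount) FROM orders;   -- sees data as of the start of the transaction
        COMMIT TRANSACTION;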
  101. Interesting footnotes

    This is not that interesting, and optimistic concurrency control is a different thing anyway.
    There is an interesting book on the Microsoft Research site (http://research.microsoft.com/pubs/ccontrol/); you can download it for free. There are probably many terms for the same thing, but I prefer the math.
  102. Good to hear that MS is finally implementing such bleeding-edge technology as functional indexes (found in Oracle since 8i and in a few OSS databases like Postgres). Generally I can see that Yukon is at the level 8i was at 4 years ago :). But hey - what about other features found in 8i/9i like real BEFORE triggers, bitmap indexes (or anything more sophisticated than b-tree indexes), flashback queries, etc.?
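
    For anyone who has not used them, the Oracle versions look roughly like this (table and index names are made up):

        -- function-based index, available since Oracle 8i
        CREATE INDEX emp_upper_name_ix ON emp (UPPER(ename));

        -- bitmap index on a low-cardinality column
        CREATE BITMAP INDEX orders_status_bix ON orders (status);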

    The author of this report stated that T-SQL is better than PL/SQL. I don't like procedural languages at all, but PL/SQL has a notion of exceptions which is *much* more elegant than the MSSQL way of dealing with errors.

    Artur
  103. bitmap indexes in Analysis Services

    Good to hear that MS is finally implementing such bleeding-edge technology as functional indexes (found in Oracle since 8i and in a few OSS databases like Postgres). Generally I can see that Yukon is at the level 8i was at 4 years ago :). But hey - what about other features found in 8i/9i like real BEFORE triggers, bitmap indexes (or anything more sophisticated than b-tree indexes), flashback queries, etc.? The author of this report stated that T-SQL is better than PL/SQL. I don't like procedural languages at all, but PL/SQL has a notion of exceptions which is *much* more elegant than the MSSQL way of dealing with errors. - Artur

    I thought SQL Server 2005 is supposed to come with Analysis Services. If that is still the case, then I think one "could" say SQL Server 2005 supports bitmap indexes. Outside of aggregate OLAP queries, though, I'm not sure how useful bitmap indexes would be for simple join queries.
  104. OHHHH YEAHHH!!!
    Have you ever tried to implement proper exception handling using @@ERROR in the MS-SQL world?? IT IS A PAIN!!!

    PL/SQL may look old and hard to chew, but hey... you can do things with it that you will never be able to do with T-SQL.
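
    Just to illustrate the pain - the T-SQL below is the classic SQL Server 2000 pattern and the PL/SQL block is the rough Oracle equivalent (table and column names are made up):

        -- T-SQL: @@ERROR must be checked after every single statement
        BEGIN TRAN
        UPDATE accounts SET balance = balance - 100 WHERE id = 1
        IF @@ERROR <> 0 GOTO err
        UPDATE accounts SET balance = balance + 100 WHERE id = 2
        IF @@ERROR <> 0 GOTO err
        COMMIT TRAN
        RETURN
        err:
        ROLLBACK TRAN

        -- PL/SQL: one EXCEPTION section covers the whole block
        BEGIN
          UPDATE accounts SET balance = balance - 100 WHERE id = 1;
          UPDATE accounts SET balance = balance + 100 WHERE id = 2;
          COMMIT;
        EXCEPTION
          WHEN OTHERS THEN
            ROLLBACK;
            RAISE;
        END;

    To be fair, Yukon does finally add a TRY/CATCH construct to T-SQL, which should close most of this gap.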
  105. Main advantages

    MS SQL Server has the advantage of being MUCH easier to use. (In fact, it is about the easiest to use I've seen.)

    Oracle has more features, is probably faster, etc.

    So if you don't need all the extras or high-end features, MSSQL Server is probably better. However, if you don't need all that, PostgreSQL is probably good enough for you too!

    And, of course, MS SQL doesn't work on UNIX/Linux (if that is important to you).
  106. Main advantages

    MS SQL Server has the advantage of being MUCH easier to use.

    Completely agreed. SQL Server comes with much better client admin/inspection software, whereas with Oracle you would need something like Toad or similar tools.

    Oliver
  107. Main advantages

    It is very important at the beginning, but later it is more important to have easy tools for simple SQL, like SQL*Plus.

    regards
  108. Why not SYBASE ASE 15?

    As far as I know, Sybase ASE 15 is in its Beta 1 test phase. I have read some of its documentation. I think ASE 15, not MSSQL 2005, is what should be compared to Oracle 10g.
    For example: hash, range, list, and round-robin partitioning; the Galaxy cluster (only one set of system files and database files in the disk array); and more.

    Flybean
  109. M$ Access on steroids - that's what we call it at work.
  110. decent paper

    This was a decent comparison paper. That in itself is striking -- it is such a rarity to find a comparison by obvious practitioners who don't seem to have an agenda to promote one product over another (except their own :). Though the conversational tone took a bit away from the credibility at first, the content more than made up for it.

    General notes (and related to the above postings):

    Oracle supports a particular reading of the ANSI SQL isolation levels. The problem is not with Oracle, it is with the definition of the levels, as described here. If you view isolation as "freedom from certain strictly defined anomalies", then yes, Oracle supports SERIALIZABLE. But if you scratch the surface, ANSI SQL isolation misses some corner cases and even whole anomalies (like "write skew"!). Most of the problems you find with Oracle's snapshot isolation are related to constraint enforcement -- one can't (without a table or user lock) easily enforce aggregate constraints.
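
    A concrete illustration of the constraint-enforcement problem - the classic "at least one doctor on call" case (the schema is made up):

        -- invariant we want: at least one row in on_call has on_duty = 'Y'
        -- Session A:
        SELECT COUNT(*) FROM on_call WHERE on_duty = 'Y';        -- sees 2
        -- Session B, concurrently:
        SELECT COUNT(*) FROM on_call WHERE on_duty = 'Y';        -- also sees 2
        -- Session A:
        UPDATE on_call SET on_duty = 'N' WHERE doctor = 'Alice';
        COMMIT;
        -- Session B:
        UPDATE on_call SET on_duty = 'N' WHERE doctor = 'Bob';
        COMMIT;
        -- the two updates touch different rows, so snapshot isolation sees no
        -- write-write conflict; both commit and the invariant is silently broken,
        -- which is exactly the "write skew" anomaly the ANSI definitions miss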

    SQL Server originally had page-level locking, like Sybase, but by SQL Server 2000 it had row-level locking (as does Sybase ASE 12). BUT, like DB2, it uses a traditional "lock manager" to manage locks -- meaning every lock eats up resources and memory. When thousands to millions of row locks are taken out, it has to coalesce them into a page or table lock -- this is "escalation". SQL Server 2005 does away with this by adopting Oracle's approach of storing snapshots of prior row versions. Both MySQL with InnoDB and PostgreSQL do something similar.

    Actually, the best part of the paper was their rebuttal to Microsoft's Snapshot Isolation whitepaper, in which Microsoft critiques Oracle's approach. That MS whitepaper was a crapfest, as are most MS "comparison" papers, a practice I wish they'd stop if they want to keep developers on their side. The whitepaper criticized the most inane things (like ORA-1555 snapshot errors) that would only be useful to scare a complete Oracle neophyte.

    One more note... I find SQL Server's IDENTITY columns useful for simple databases but they are very hairy otherwise. Google Joe Celko (SQL guru) -- he has lots of quotes about why he hates them -- though he hates Oracle too, so you can find a good balance in his views :-)

    Good points were made about Oracle: their RAC clustering is light years ahead of Microsoft's offerings, as are (in my opinion) their online reorganization and backup & recovery support. Sybase & SQL Server were always much more finicky about recovery in my view.

    Their notes about RAC "reconfiguration time" vs. SQL Server need to be taken with a grain of salt, imho. RAC is an active/active cluster -- all nodes contribute, none are "standby". If a node fails, the cluster has to replay the undo/redo logs to ensure any written dirty blocks are in a consistent state. A more traditional active/passive standby cluster is a different beast -- the other node is a replica that's waiting in the wings for a failure, and can take over almost immediately. Oracle has a few approaches for this (Data Guard logical log-shipping is one -- and I think they were working on a RAC-based variant too).

    Good points about SQL Server: Oracle tablespaces vs. self-contained dictionaries in SQL Server was a good one. And MS' function-based partitioning is a GODSEND; Oracle really should look at building this into 10g Release 2 (as it is, they already allow a lot, but MS' approach is more general and elegant).
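
    For those who have not seen it yet, the Yukon partitioning DDL looks roughly like this (names and boundary dates are made up):

        -- a partition function maps a column value to a partition number
        CREATE PARTITION FUNCTION pf_by_year (datetime)
            AS RANGE RIGHT FOR VALUES ('2003-01-01', '2004-01-01');

        -- a partition scheme maps those partitions onto filegroups
        CREATE PARTITION SCHEME ps_by_year
            AS PARTITION pf_by_year ALL TO ([PRIMARY]);

        -- the table is then created on the scheme, keyed by the partitioning column
        CREATE TABLE orders (order_id int, order_date datetime)
            ON ps_by_year (order_date);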

    Note to those who think this is only important for DBAs... if you've got "solutions architect", or something like that, in your title, this stuff really should be second nature.
  111. Document by zombie researchers

    I have read the entire document. I have over 25 years of experience in software development and about 15 years in RDBMS technology, having worked with Sybase, Oracle and Microsoft. I have always worked for very big companies, started as a programmer trainee and worked my way up to corporate architect. At the moment we are building with IBM WebSphere/Java and Microsoft .NET/C#, using Sybase and Oracle databases. From my own experience I draw this conclusion:

    The people who wrote this document hardly understand what they are talking about. This is what I call zombie researchers. They read the manuals and talked to the companies making the software, but they are hardly aware of the history of the products, nor of what it means to use these products in real life.

    Their conclusions are all wrong. I will not go into why the comparison of locking strategies is wrong, what the difference is between scale-up and scale-out solutions for high availability, or why many of the stressed points are of no use in real life. I just see very clearly that the products are not compared in a way that does justice to both in the same manner. I also suspect Oracle wins because the writers of the document have (access to) better knowledge of Oracle.

    Since no one will value my personal opinion on this (and it was not asked for), I will give it anyway:

    If you are building big systems, have unlimited amounts of money and people, and value doing business with a big name, buy Oracle. It will never be cutting-edge technology, but in the end it will work. If it were up to me, however, I would surely fire you for doing so.

    If you build small systems in a small environment and want to buy a big name, don't mind vendor lock-in, and don't mind restarting your servers on occasion and even losing some data at times, buy Microsoft. It will serve you well, and only if at some point you need to scale beyond Microsoft's capabilities or migrate to more reliable platforms will you be in real trouble. I will, however, not fire you for buying Microsoft if you match these criteria.

    All other companies should buy Sybase and be happy out of the box.

    Again, my €0.05.
  112. You made me laugh, man

    Hans,
    You may have read the entire document, but you probably missed the conclusion in the preface of the article, which is not very different from yours:

    "if you have a complicated application or systems running on high end machines you may still want to consider using Oracle. For department level servers and small/mid range applications the MSSQL server would be a preferred choice."

    From your response it is not hard to see that you are a Sybase promoter (I hope you are not getting commissions on that, since it would not be fair to us). Sybase has less than 2% of the database market share. I think you are giving people very bad advice, locking them into a database that might disappear in the near future. But at least you spoke up :)
  113. Hmm.. more squirming excuses - excuse me if I am falling asleep.

    "The people writing this document hardly have understanding of what they are talking about. This is what I call zombie researchers."

    The usual reaction from the Sun/Java/Oracle world to any study, test or benchmark. He forgot to say this time that they were all bought by MS!

    He also forgot to mention that there already is an industry-accepted body for database benchmarks: the TPC (Transaction Processing Performance Council).

    "The TPC is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable TPC performance data to the industry."

    There you can see that SQL Server holds first, second and third place on the list, in addition to already holding 7 of the top 10.
    http://www.tpc.org/tpcc/results/tpcc_price_perf_results.asp
    Several years ago I worked for a multi-billion dollar distributor with a steep investment in Microsoft technologies. Oracle showed up and sold our senior executives on scalability FUD based upon their dominance of the TPC performance benchmarks at that time (MS did not place in the top ten). In fact, they made a BIG deal of it.

    Needless to say, when MS perfected their federated model (which Oracle could not support) and cleaned Oracle's (and EVERYONE's) clock in that benchmark, for some odd reason, suddenly, "benchmarks" didn't matter any more.

    Regards
    Rolf Tollerud
  114. Why didn't you link these results?

    http://www.tpc.org/tpcc/results/tpcc_perf_results.asp

    Artur
  115. I have always been of the opinion that the MS marketing machine provides release dates for half-baked products without a clue of when they will actually be released.

    Feature comparisons are fair only if there are definite release dates.

    In human years, would you compare a 30-year-old to a 20-year-old?

    In software terms, comparing a product (Yukon) that is a couple of quarters away from release to a product that can be bought today is unfair; in human terms it would be a 10-year age gap!!!!!

    After working as a SQL DBA on 6.5, 7.0, and 2000, trust me: Yukon is going to be like a teenager when it comes to delivering those great features they promise.
  116. Forgot to mention this, but as a rule, consider Yukon only after SP3. By then, who knows, you might have a new version of MySQL that will take care of most major needs for the department.