Hi,
During the weekend I heard that SQL Server 2000 is very slow with tables containing millions of rows, so I ran some tests with our new database on SQL Server 2005 Standard Edition. The machine is a Windows 2003 Standard Server box with 3.5 GB RAM and a single 3.0 GHz P4 CPU. There is no RAID configuration. It has two drives: C: with 11 GB free space and D: with 169 GB free space. SQL Server 2005 was installed on the D: drive.
The database is 10 GB and contains 47 tables. The main table, Items, has 96 columns and 30 million rows. All columns are varchar, and two of them are varchar(2500). There is no index on the table at all.
The times for some SQL statements against this table were as follows:
=====================================================================
No.  SQL Statement                                              Time
=====================================================================
1    SELECT * FROM Items WHERE item_num='10029'                 16 minutes
---------------------------------------------------------------------
2    SELECT COUNT(*) FROM Items                                 13 minutes
---------------------------------------------------------------------
3    ALTER TABLE Items ADD rid INT PRIMARY KEY IDENTITY(1,1)    10 hours 50 minutes
---------------------------------------------------------------------
4    SELECT COUNT(*) FROM Items                                 20 minutes
---------------------------------------------------------------------
5    SELECT * FROM Items WHERE item_num='10029'                 18 minutes
=====================================================================
The first two statements were run before the table had a primary key. Statements #4 and #5 were run after "rid" was added as the primary key.
This is the first time I have worked with a database of this size, and the performance of SQL Server 2005 surprised me. Would you please tell me:
1. Is such performance normal with this number of rows?
2. Do I have to do something to improve the performance of such simple statements when the table has more than 1 million or 10 million rows?
3. Is SQL Server 2005 the right product to handle a database with such a big table, or should I consider DB2 9 or Oracle 10g?
Thank you
Hongbo
Try to find out whether your system has an IO or memory bottleneck; if possible, post the SHOWPLAN_ALL result as well.
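For reference, capturing that output can look roughly like this (a minimal sketch; it only assumes the Items table and item_num column named in your post, and the plan comes back as result rows you can copy into a reply):

SET SHOWPLAN_ALL ON;
GO
-- The statement below is compiled but not executed; its estimated plan is returned instead.
SELECT * FROM Items WHERE item_num = '10029';
GO
SET SHOWPLAN_ALL OFF;
GO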
vinu
"Hongbo" <hongbo@.goodoffices.com> wrote in message =news:uKS%23AXoAHHA.3836@.TK2MSFTNGP02.phx.gbl...
Hi,
During the weekend, I heard that SQL Server 2000 is very slow with =tables containing rows over millions. So I did some tests with our new database on SQL Server 2005 Standard Edition. The =machine is Windows 2003 Standard Server box with 3.5 GB RAM and single 3.0GHZ P4 CPU. There is no RAID =configuration. It has drives: C: with 11GB free space and D: with 169GB free space. SQL 2005 was installed on D: drive.
The size of the database is 10GB containing 47 tables. The main table, =Items, contains 96 columns and 30 millions rows.
All columns are in varchar data type, and 2 of them are in =varchar(2500). There is no any index setup in the table neither.
The time consumed for some SQL statements with this table are the =followings:
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D
No. SQL Statement = Time
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D ==3D=3D=3D=3D=3D=3D=3D=3D=3D
1 SELECT * FROM Items WHERE item_num=3D'10029' 16 minutes
=----=-- --
2 SELECT COUNT(*) FROM Items 13 =minutes
=----=-- --
3 ALTER TABLE Items ADD rid INT PRIMARY KEY IDENTITY(1,1) 10 =hours 50 minutes
=----=-- --
4 SELECT COUNT(*) FROM Items 20 =minutes
=----=-- --
5 SELECT * FROM Items WHERE item_num=3D'10029' 18 minutes
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D
The first 2 statements were run without primary key in the table. The =statements #4 and #5 were run after "rid" was added as primary key.
This is the first time I work with a database in such size. But the =performance of SQL Server 2005 surprised me.
Would you please tell me:
1. Is such performance normal with such number of rows? 2. Do I have to do something to improve the performance with such =simple statement when the number of rows>1 million or >10 millions? 3. Is SQL Server 2005 the right one to handle a database with such big =table, or should I consider DB2 9 or Oracle 10g?
Thank you
Hongbo
Hongbo wrote:
Do I read this right: you have 30 million rows and no indexes? I would think about adding a few to help you out. SQL Server 2005 can handle that volume with proper design and hardware.
I believe there is a tremendous amount of paging going on to accommodate your full table scans, all on a single drive, hence the poor performance.
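As a starting point, an index that covers the lookups in statements #1 and #5 might look like the following (a sketch only; the index name is illustrative, and building it on a 30-million-row heap will itself take a while and need disk space):

-- Nonclustered index on the column used in the WHERE clause.
CREATE NONCLUSTERED INDEX IX_Items_item_num
    ON Items (item_num);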
--
Ryan Sanders
http://ryanlsanders.blogspot.com
Hi Vinu and Ryan,
Thank you for your responses.
I know that some measures could improve the performance.
My question is: Are the results normal in SQL Server 2005 before I take any other measures to improve the performance?
"Hongbo" <hongbo@.goodoffices.com> wrote in message =news:uKS%23AXoAHHA.3836@.TK2MSFTNGP02.phx.gbl...
Hi,
During the weekend, I heard that SQL Server 2000 is very slow with =tables containing rows over millions. So I did some tests with our new database on SQL Server 2005 Standard Edition. The =machine is Windows 2003 Standard Server box with 3.5 GB RAM and single 3.0GHZ P4 CPU. There is no RAID =configuration. It has drives: C: with 11GB free space and D: with 169GB free space. SQL 2005 was installed on D: drive.
The size of the database is 10GB containing 47 tables. The main table, =Items, contains 96 columns and 30 millions rows.
All columns are in varchar data type, and 2 of them are in =varchar(2500). There is no any index setup in the table neither.
The time consumed for some SQL statements with this table are the =followings:
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D
No. SQL Statement = Time
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D ==3D=3D=3D=3D=3D=3D=3D=3D=3D
1 SELECT * FROM Items WHERE item_num=3D'10029' 16 minutes
=----=-- --
2 SELECT COUNT(*) FROM Items 13 =minutes
=----=-- --
3 ALTER TABLE Items ADD rid INT PRIMARY KEY IDENTITY(1,1) 10 =hours 50 minutes
=----=-- --
4 SELECT COUNT(*) FROM Items 20 =minutes
=----=-- --
5 SELECT * FROM Items WHERE item_num=3D'10029' 18 minutes
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D
The first 2 statements were run without primary key in the table. The =statements #4 and #5 were run after "rid" was added as primary key.
This is the first time I work with a database in such size. But the =performance of SQL Server 2005 surprised me.
Would you please tell me:
1. Is such performance normal with such number of rows? 2. Do I have to do something to improve the performance with such =simple statement when the number of rows>1 million or >10 millions? 3. Is SQL Server 2005 the right one to handle a database with such big =table, or should I consider DB2 9 or Oracle 10g?
Thank you
Hongbo
--=_NextPart_000_0015_01C703E5.BD4BE410
Content-Type: text/html;
charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
&
Hi Vinu and Ryan,
Thank you for your =responses.
I know that some measures could improve =the performance.
My question is: Are the results =normal in SQL Server 2005 before I take any other measures to improve the performance?
"Hongbo"
Hi,
During the weekend, I heard that SQL =Server 2000 is very slow with tables containing rows over millions. So I did some tests with our new database on SQL Server 2005 Standard Edition. =The machine is Windows 2003 Standard Server box with 3.5 GB RAM and =single 3.0GHZ P4 CPU. There is no RAID configuration. It has drives: C: with =11GB free space and D: with 169GB free space. SQL 2005 was installed on =D: drive.
The size of the database is 10GB =containing 47 tables. The main table, Items, contains 96 columns and 30 millions rows.All columns are in varchar data type, and 2 of them are in varchar(2500). There is no any index setup in the table =neither.
The time consumed for some SQL =statements with this table are the =followings:=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3DNo. &nb=sp; SQL =Statement &nbs=p;  =; = &=nbsp; =Time=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D &nbs=p; =3D=3D=3D=3D=3D=3D=3D=3D=3D 1 SELECT * FROM =Items WHERE item_num=3D'10029' 16 =minutes---=-- -- 2 SELECT COUNT(*) FROM =Items &n=bsp; &nb=sp; 13 =minutes---=-- -- 3 ALTER TABLE Items ADD rid INT PRIMARY KEY =IDENTITY(1,1) = 10 hours 50 =minutes---=-- -- 4 SELECT COUNT(*) FROM =Items &n=bsp; &nb=sp; 20 =minutes---=-- -- 5 SELECT * FROM Items =WHERE item_num=3D'10029' 18 =minutes=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D
The first 2 statements were run =without primary key in the table. The statements #4 and #5 were run after "rid" was =added as primary key.
This is the first time I work with a =database in such size. But the performance of SQL Server 2005 surprised =me.
Would you please tell me:1. Is =such performance normal with such number of rows? 2. Do I have to do =something to improve the performance with such simple statement when the number =of rows>1 million or >10 millions? 3. Is SQL Server 2005 the =right one to handle a database with such big table, or should I consider DB2 9 =or Oracle 10g?
Thank you
Hongbo
--=_NextPart_000_0015_01C703E5.BD4BE410--|||This is a multi-part message in MIME format.
--=_NextPart_000_0404_01C706F9.89BE1E30
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
"Hongbo" <hongbo@.goodoffices.com> wrote in message =news:ecxxF5ABHHA.5060@.TK2MSFTNGP02.phx.gbl...
(Please refrain from posting in HTML; most newsreaders don't like it.)
Anyway, that performance seems about right given your setup.
First, if this database is important, get something with RAID; otherwise a single disk failure will kill you.
Also, move your LOG file to a different physical drive (or set of drives). This will speed up your update/insert/delete statements.
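A rough sketch of the log move (the database name, logical file name, and E: path here are all placeholders; check sys.database_files for the real names, pick whatever other physical drive you actually have, and the .ldf file itself has to be relocated while the database is offline):

-- Point the log at its new location; takes effect the next time the database is brought online.
ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log,
                 FILENAME = 'E:\SQLLogs\YourDatabase_log.ldf');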
Finally, step 3: what is rid? Does it match anything in the outside world? If not, you may prefer to find a natural key, perhaps item_num.
If you're selecting against item_num, you will want that as at least one of your indices.
What other indices you will want depends greatly on how you normally access your data.
Also, I would hope you're not using SELECT * in production code.
Finally, 96 columns is a fairly wide table. Is this normalized? If not, why not? What's the average width of a row? You mentioned varchar, but not what the average usage is.
Personally, if item_num is unique and a good candidate key, I'd drop the rid column you added, make item_num my primary key, and then go from there.
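That swap could look something like this (a sketch only, not something to run as-is: it assumes item_num really is unique and declared NOT NULL, the constraint name on rid is a placeholder for whatever system-generated name shows up in sys.key_constraints, and rebuilding a 30-million-row table around a clustered key will take a long time):

-- Remove the surrogate key added in step 3.
ALTER TABLE Items DROP CONSTRAINT PK__Items__rid;   -- placeholder constraint name
ALTER TABLE Items DROP COLUMN rid;

-- Promote the natural key; clustering on it also orders the table for the item_num lookups.
ALTER TABLE Items ADD CONSTRAINT PK_Items_item_num PRIMARY KEY CLUSTERED (item_num);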
"Hongbo" <hongbo@.goodoffices.com> wrote in message =news:uKS%23AXoAHHA.3836@.TK2MSFTNGP02.phx.gbl...
Hi,
During the weekend, I heard that SQL Server 2000 is very slow with =tables containing rows over millions. So I did some tests with our new database on SQL Server 2005 Standard Edition. The =machine is Windows 2003 Standard Server box with 3.5 GB RAM and single 3.0GHZ P4 CPU. There is no RAID =configuration. It has drives: C: with 11GB free space and D: with 169GB free space. SQL 2005 was installed on D: drive.
The size of the database is 10GB containing 47 tables. The main =table, Items, contains 96 columns and 30 millions rows.
All columns are in varchar data type, and 2 of them are in =varchar(2500). There is no any index setup in the table neither.
The time consumed for some SQL statements with this table are the =followings:
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D
No. SQL Statement = Time
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D ==3D=3D=3D=3D=3D=3D=3D=3D=3D
1 SELECT * FROM Items WHERE item_num=3D'10029' 16 =minutes
=----=-- --
2 SELECT COUNT(*) FROM Items =13 minutes
=----=-- --
3 ALTER TABLE Items ADD rid INT PRIMARY KEY IDENTITY(1,1) 10 =hours 50 minutes
=----=-- --
4 SELECT COUNT(*) FROM Items =20 minutes
=----=-- --
5 SELECT * FROM Items WHERE item_num=3D'10029' 18 =minutes
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D
The first 2 statements were run without primary key in the table. =The statements #4 and #5 were run after "rid" was added as primary key.
This is the first time I work with a database in such size. But the =performance of SQL Server 2005 surprised me.
Would you please tell me:
1. Is such performance normal with such number of rows? 2. Do I have to do something to improve the performance with such =simple statement when the number of rows>1 million or >10 millions? 3. Is SQL Server 2005 the right one to handle a database with such =big table, or should I consider DB2 9 or Oracle 10g?
Thank you
Hongbo
--=_NextPart_000_0404_01C706F9.89BE1E30
Content-Type: text/html;
charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
&
"Hongbo"
Hi Vinu and Ryan,
Thank you for your =responses.
I know that some measures could =improve the performance.
My question is: Are the results =normal in SQL Server 2005 before I take any other measures to improve the performance?
(please refrain from posting in HTML, =most newsreaders don't like it...)
Anyway...that performance seems =about right given your setup.
First, if this database is important, =get something with RAID, otherwise a single disk failure will kill you.
Also, move your LOG file to a =different physical drive (or set of drives). This will increase your =update/insert/delete statements.
Finally, Step 3... what is RID? =Does it match anything in the outside world? If not, you may prefer to =find a natural key. Perhaps item_num.
If you're selecting against item_num, =you will want that as at least one of your indices.
What other indices you will want will =depend greatly on how your normally access your data.
Also, I would hope you're not using =select * in production code.
Finally, 96 columns is a fairly wide database. is this normalized? If not, why not? =What's the average width of a road. You mentioned varchar, but not what the =average use is.
Personally, if the item_num is unique =and a good candidate key, I'd drop the rid column you added, make item_num my =primary key and then go from there.
"Hongbo"
Hi,
During the weekend, I heard that =SQL Server 2000 is very slow with tables containing rows over millions. So I =did some tests with our new database on SQL Server 2005 Standard Edition. =The machine is Windows 2003 Standard Server box with 3.5 GB RAM and =single 3.0GHZ P4 CPU. There is no RAID configuration. It has drives: C: =with 11GB free space and D: with 169GB free space. SQL 2005 was installed =on D: drive.
The size of the database is 10GB =containing 47 tables. The main table, Items, contains 96 columns and 30 millions rows.All columns are in varchar data type, and 2 of them are in varchar(2500). There is no any index setup in the table neither.
The time consumed for some SQL =statements with this table are the =followings:=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3DNo. &nb=sp; SQL =Statement &nbs=p;  =; = &=nbsp; =Time=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D &nbs=p; =3D=3D=3D=3D=3D=3D=3D=3D=3D 1 SELECT * =FROM Items WHERE item_num=3D'10029' =16 =minutes---=-- -- 2 SELECT COUNT(*) FROM = =Items &n=bsp; &nb=sp; 13 =minutes---=-- -- 3 ALTER TABLE Items ADD rid INT PRIMARY KEY =IDENTITY(1,1) = 10 hours 50 =minutes---=-- -- 4 SELECT COUNT(*) FROM = =Items &n=bsp; &nb=sp; 20 =minutes---=-- -- 5 SELECT * FROM Items =WHERE item_num=3D'10029' =18 =minutes=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D
The first 2 statements were run =without primary key in the table. The statements #4 and #5 were run after "rid" was =added as primary key.
This is the first time I work with =a database in such size. But the performance of SQL Server 2005 surprised me.
Would you please tell me:1. Is =such performance normal with such number of rows? 2. Do I have to do something to improve the performance with such simple statement when =the number of rows>1 million or >10 millions? 3. Is SQL Server =2005 the right one to handle a database with such big table, or should I =consider DB2 9 or Oracle 10g?
Thank you
Hongbo
--=_NextPart_000_0404_01C706F9.89BE1E30--|||This is a multi-part message in MIME format.
--=_NextPart_000_0041_01C70705.B2737350
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Hi Greg,
Thank you for your message.
Regardless of what measures should be taken to improve the performance, would you please tell me whether the results I got are normal under the given circumstances?
Thank you
"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in =message news:OM6hzNyBHHA.4844@.TK2MSFTNGP02.phx.gbl...
"Hongbo" <hongbo@.goodoffices.com> wrote in message =news:ecxxF5ABHHA.5060@.TK2MSFTNGP02.phx.gbl...
Hi Vinu and Ryan,
Thank you for your responses.
I know that some measures could improve the performance.
My question is: Are the results normal in SQL Server 2005 before I =take any other measures to improve the performance?
(please refrain from posting in HTML, most newsreaders don't like =it...)
Anyway...that performance seems about right given your setup.
First, if this database is important, get something with RAID, =otherwise a single disk failure will kill you.
Also, move your LOG file to a different physical drive (or set of =drives). This will increase your update/insert/delete statements.
Finally, Step 3... what is RID? Does it match anything in the =outside world? If not, you may prefer to find a natural key. Perhaps =item_num.
If you're selecting against item_num, you will want that as at least =one of your indices.
What other indices you will want will depend greatly on how your =normally access your data.
Also, I would hope you're not using select * in production code.
Finally, 96 columns is a fairly wide database. is this normalized? =If not, why not? What's the average width of a road. You mentioned =varchar, but not what the average use is.
Personally, if the item_num is unique and a good candidate key, I'd =drop the rid column you added, make item_num my primary key and then go =from there.
"Hongbo" <hongbo@.goodoffices.com> wrote in message =news:uKS%23AXoAHHA.3836@.TK2MSFTNGP02.phx.gbl...
Hi,
During the weekend, I heard that SQL Server 2000 is very slow with =tables containing rows over millions. So I did some tests with our new database on SQL Server 2005 Standard Edition. =The machine is Windows 2003 Standard Server box with 3.5 GB RAM and single 3.0GHZ P4 CPU. There is no RAID =configuration. It has drives: C: with 11GB free space and D: with 169GB free space. SQL 2005 was installed on D: drive.
The size of the database is 10GB containing 47 tables. The main =table, Items, contains 96 columns and 30 millions rows.
All columns are in varchar data type, and 2 of them are in =varchar(2500). There is no any index setup in the table neither.
The time consumed for some SQL statements with this table are the =followings:
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D
No. SQL Statement = Time
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D ==3D=3D=3D=3D=3D=3D=3D=3D=3D
1 SELECT * FROM Items WHERE item_num=3D'10029' 16 =minutes
=----=-- --
2 SELECT COUNT(*) FROM Items = 13 minutes
=----=-- --
3 ALTER TABLE Items ADD rid INT PRIMARY KEY IDENTITY(1,1) =10 hours 50 minutes
=----=-- --
4 SELECT COUNT(*) FROM Items = 20 minutes
=----=-- --
5 SELECT * FROM Items WHERE item_num=3D'10029' 18 =minutes
==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D
The first 2 statements were run without primary key in the table. =The statements #4 and #5 were run after "rid" was added as primary key.
This is the first time I work with a database in such size. But =the performance of SQL Server 2005 surprised me.
Would you please tell me:
1. Is such performance normal with such number of rows? 2. Do I have to do something to improve the performance with such =simple statement when the number of rows>1 million or >10 millions? 3. Is SQL Server 2005 the right one to handle a database with such =big table, or should I consider DB2 9 or Oracle 10g?
Thank you
Hongbo
--=_NextPart_000_0041_01C70705.B2737350
Content-Type: text/html;
charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
&
Hi, Greg,
Thank you for your =message.
Regardless what kind of measure should =be taken to improve the performance,
would you please tell whether the =result I got is normal under the given circumstance?
Thank you
"Greg D. Moore (Strider)"
"Hongbo"
Hi Vinu and Ryan,
Thank you for your =responses.
I know that some measures could =improve the performance.
My question is: Are the =results normal in SQL Server 2005 before I take any other measures to improve the performance?
(please refrain from posting in =HTML, most newsreaders don't like it...)
Anyway...that performance seems =about right given your setup.
First, if this database is =important, get something with RAID, otherwise a single disk failure will kill you.
Also, move your LOG file to a =different physical drive (or set of drives). This will increase your update/insert/delete statements.
Finally, Step 3... what is =RID? Does it match anything in the outside world? If not, you may prefer to =find a natural key. Perhaps item_num.
If you're selecting against =item_num, you will want that as at least one of your indices.
What other indices you will want =will depend greatly on how your normally access your data.
Also, I would hope you're not using =select * in production code.
Finally, 96 columns is a fairly =wide database. is this normalized? If not, why not? =What's the average width of a road. You mentioned varchar, but not what =the average use is.
Personally, if the item_num is =unique and a good candidate key, I'd drop the rid column you added, make item_num =my primary key and then go from there.
"Hongbo"
Hi,
During the weekend, I heard that =SQL Server 2000 is very slow with tables containing rows over millions. So I =did some tests with our new database on SQL Server 2005 Standard =Edition. The machine is Windows 2003 Standard Server box with 3.5 GB RAM =and single 3.0GHZ P4 CPU. There is no RAID configuration. It has drives: C: =with 11GB free space and D: with 169GB free space. SQL 2005 was =installed on D: drive.
The size of the database is 10GB =containing 47 tables. The main table, Items, contains 96 columns and 30 =millions rows.All columns are in varchar data type, and 2 of them are =in varchar(2500). There is no any index setup in the table neither.
The time consumed for some SQL =statements with this table are the =followings:=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3DNo. &nb=sp; SQL =Statement &nbs=p;  =; = &=nbsp; =Time=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D &nbs=p; =3D=3D=3D=3D=3D=3D=3D=3D=3D 1 SELECT * =FROM Items WHERE item_num=3D'10029' =16 =minutes---=-- -- 2 SELECT COUNT(*) =FROM =Items &n=bsp; &nb=sp; 13 =minutes---=-- -- 3 ALTER TABLE Items ADD rid INT PRIMARY KEY =IDENTITY(1,1) = 10 hours 50 =minutes---=-- -- 4 SELECT COUNT(*) =FROM =Items &n=bsp; &nb=sp; 20 =minutes---=-- -- 5 SELECT * FROM =Items WHERE item_num=3D'10029' =18 =minutes=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D==3D=3D=3D=3D=3D=3D=3D=3D
The first 2 statements were run =without primary key in the table. The statements #4 and #5 were run after ="rid" was added as primary key.
This is the first time I work =with a database in such size. But the performance of SQL Server 2005 surprised me.
Would you please tell me:1. =Is such performance normal with such number of rows? 2. Do I have to =do something to improve the performance with such simple statement =when the number of rows>1 million or >10 millions? 3. Is SQL =Server 2005 the right one to handle a database with such big table, or should =I consider DB2 9 or Oracle 10g?
Thank you
Hongbo
Hongbo wrote:
Yes, without any index set up, this is normal performance. Even if you move this index-less design over to DB2 or 10g, you will see the same performance.
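One way to see the difference an index makes is to compare the logical reads before and after adding one (a minimal sketch; in Management Studio the read counts appear on the Messages tab):

SET STATISTICS IO ON;
-- On the unindexed table this reports a read of every page in Items;
-- with an index on item_num the logical reads should drop to a handful.
SELECT * FROM Items WHERE item_num = '10029';
SET STATISTICS IO OFF;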
"Hongbo" <hongbo@.goodoffices.com> wrote in message =news:%23yeU24yBHHA.4292@.TK2MSFTNGP02.phx.gbl...
> Regardless of what measures should be taken to improve the performance,
> would you please tell me whether the results I got are normal under the
> given circumstances?
Quite possibly yes.