
DBMS Installation and Configuration Issues

Every database management system provides parameters that allow the DBA to configure various aspects of the database environment. Configuration is accomplished in a variety of ways, depending on the DBMS. Some popular configuration methods include executing system procedures to set and reset values, editing files with parameter settings, issuing commands at a DBMS prompt, and assembling parameter specifications using an option within the DBMS. Regardless of the manner of configuring the DBMS, the DBA will be required to specify various parameters to the DBMS that affect the way the DBMS operates.

Most DBMS software ships with default values supplied by the vendor. However, default values are usually not sufficient to support the development of robust, production applications. This section will discuss some of the common configuration parameters and how to tune them.

Default values are usually not sufficient to support the development of robust, production applications.


Types of Configuration

The DBMS can be configured when the DBMS is installed or after the DBMS is operational. During installation, the DBA or SA installing the DBMS will have the option to change the configuration parameters or to allow the parameters to default. However, defaults are almost always wrong. It is almost always better to set the parameters based on knowledge of the data and applications that will be using the DBMS.

Once the DBMS is installed and operational, the DBMS will provide a method of changing the configuration parameters. Depending on the DBMS, parameters may be changed dynamically, nondynamically, or both.

Depending on the DBMS, parameters may be changed dynamically, nondynamically, or both.


Dynamic parameters can take effect without the need to restart the DBMS. In SQL Server, for example, executing the RECONFIGURE command causes dynamic parameters to take effect immediately. After issuing the appropriate commands, the parameter value changes and the DBMS will behave accordingly.

In order for nondynamic parameters to take effect, the DBMS must be shut down and restarted. Of course, the parameter value must be changed—usually in the same or a similar manner as a dynamic parameter. However, a nondynamic parameter will not take effect until the DBMS is restarted.
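For instance, a dynamic SQL Server option can be changed and applied on the fly. The following is a minimal sketch using the sp_configure procedure and the RECONFIGURE command; the option and value shown are purely illustrative:

-- Expose advanced options, then change a dynamic parameter.
-- RECONFIGURE applies the new value without restarting the DBMS.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE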

Memory Usage

Relational databases love memory. The single biggest system performance tuning task that a DBA will face is configuring RDBMS memory usage. The DBMS uses random access memory to cache data and other resources required by the DBMS. This is done because reading data from memory is much less costly than reading the data from disk. So, as a rule of thumb, the more memory you can provide to the DBMS, the better performance will be. Of course, the DBMS has to be configured properly to use the memory efficiently.

Relational databases love memory.
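As a concrete illustration, SQL Server exposes its overall memory appetite through server-wide options. A minimal sketch, assuming the standard max server memory option (the 2048MB value is hypothetical):

-- Cap the memory the DBMS will claim so it coexists with other software
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 2048
RECONFIGURE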


The American Heritage dictionary defines cache as “a place for concealing and safekeeping valuables.” This is a good starting point for understanding what a cache means to a relational database. The “place” used by the DBMS is memory (as opposed to disk). The “valuables” are data pages, query plans, and other database resources. A typical DBMS will use a cache to keep resources in memory longer. The longer a resource is cached in a memory location, the better the chance for subsequent requests for the resource to avoid incurring costly I/O operations, as shown in Figure 10-2.

Figure 10-2. The value of caching resources in memory


There are multiple caches or buffers utilized by DBMS products to reduce the cost of I/O. Each DBMS uses different terminology but generally caches the same resources.

A data cache is used to avoid I/O operations when actual data is being read from the database. Accessing data in memory is substantially faster than accessing data on disk. Therefore, all data access from a relational database goes through a caching area. (Refer to Figure 10-3.) Access to memory is typically measured in microseconds, while access to disk I/O is usually measured in milliseconds. When a program needs a row from a table, the DBMS retrieves the page from disk and stores the page in the data cache. Basically, the DBMS uses the cache as a staging area. If the row changes, the change is written to the page in the data cache. Eventually, the DBMS will write the page in the data cache back to disk. When data needed by an application is on a page already in the data cache, the application will not have to wait for the page to be retrieved from disk. Depending on the DBMS, this memory structure also may be called a buffer pool.

Figure 10-3. Data cache (or buffer pool)


A data cache is used to avoid I/O operations when actual data is being read from the database.


A procedure cache stores SQL and program-related structures. Before SQL statements can be issued to retrieve or modify data, the statement must first be optimized by the DBMS. The optimization process creates an internal structure representing the access path that will be used by the DBMS to read the data. The DBMS can store these access paths in the procedure cache and reuse them each time the program or SQL statement is run. This optimizes application performance because the optimization process need not be performed every time the SQL is run. Instead, optimization occurs the first time the SQL is issued, and subsequent executions retrieve the access path from the procedure cache. Each DBMS provides similar functionality, although with different names and differing features.
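One way to observe this reuse on a more recent SQL Server release is to query the plan cache directly. A sketch, assuming the sys.dm_exec_cached_plans and sys.dm_exec_sql_text dynamic management objects, which postdate the era of this text:

-- List cached plans by how often each has been reused
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC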

Another memory structure commonly deployed within a DBMS is a sort cache. The sort cache is used instead of temporary disk storage to store intermediate sort results in memory. The more sorting functionality that can be performed in memory, the better a sort will perform. Many relational database operations require sorts, for example, grouping, ordering, UNION operations, and certain types of joins.

The sort cache is used instead of temporary disk storage to store intermediate sort results in memory.
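In SQL Server, for example, the memory available to sorts is influenced by the per-query memory grant. A sketch, assuming the min memory per query option (the value shown is illustrative):

-- Raise the minimum memory granted to each query, which sort work can draw upon
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'min memory per query (KB)', 2048
RECONFIGURE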


The DBMS may also use other internal structure caches. The implementation of each DBMS is unique. To accomplish relational operations, the DBMS may need to create internal structures that are not necessarily visible to the end user. However, DBAs, and sometimes programmers, will need to know about the internal structures. One example is the internal DBD (database descriptor) structure used by DB2 to manage databases. The DBD is never externalized to users, but every time an application accesses any object within a database, DB2 must load the DBD into a memory area known as the EDM pool. DB2 uses the EDM pool to cache dynamic SQL access paths and other internal structures, as well. DB2 DBAs need to allocate sufficient memory to the EDM pool and monitor the efficiency of the EDM pool as processing requirements and usage patterns change.

The DBMS also may buffer log records to a separate database log cache. Furthermore, the DBMS may implement two log caches, one for log writes and one for log reads. The database log keeps a record of all changes made to the database. The log write cache is used to speed up database modifications. The changed data is written to the log cache, and over time the cache is written asynchronously to disk. By buffering log writes in this way, the database log becomes less of a bottleneck to system and application performance. The log read cache is used for ROLLBACK and RECOVER operations. A rollback or a recovery needs to access the log to undo or reapply database changes. As the log records are requested, they will be buffered in memory in the log read cache.

The DBMS also may buffer log records to a separate database log cache.


Additional Areas of Memory Consumption

In addition to the various caches and buffer pools used by relational database systems, memory is required for other purposes. Generally, the DBMS installation or configuration routines allow the DBA to allocate and tune the memory consumption of the DBMS. Some of the more common areas of DBMS memory consumption include

  • User connections. Each concurrent user connection to the DBMS, regardless of the type of client connection, requires memory for the DBMS to maintain and manage the connection.

  • Devices. The devices used by databases may require system memory to maintain and use.

  • Open databases. Most DBMSs provide a parameter to specify the maximum number of databases that can be open at any one time. Each open database requires DBMS memory.

  • Open objects. Another parameter may exist to identify the maximum number of database objects that can be open at any one time, including tables, indexes, and any other database object in use. Each open database object requires memory.

  • Locks. Each concurrently held lock will require memory. The DBMS should provide a configuration parameter for the number of concurrent locks that can be held at one time.

  • Caches. The various caches are discussed in the previous section.

How Much Memory Is Enough?

So, if relational databases love memory, just how much memory should be allocated? This is a difficult (if not impossible) question to answer by using generalities. The tempting answer is “Enough to get the job done,” but that does not help the DBA who has to allocate the right amount of memory in the right place.

Every DBMS uses memory, but in different amounts and for different things. The best approach is to search your vendor's DBMS manual to determine how much memory is required for each resource. Then you can estimate the usage requirements for each resource and calculate the approximate amount of memory required by the DBMS.

Every DBMS uses memory, but in different amounts and for different things.


For example, SQL Server requires about 75 bytes of memory per lock. To configure the amount of memory required for locking, the DBA will need to estimate the total number of concurrent transactions and the average number of locks per transaction. After doing so, the calculation for memory required by the DBMS for locking is simple:

memory for locking = concurrent transactions × average locks per transaction × 75 bytes
The result of this calculation provides an estimate for the memory (in bytes) used for locking in a SQL Server system. Of course, the number of concurrent transactions and the number of locks per transaction may be difficult to ascertain for your environment. You may be able to substitute the number of concurrent transactions with the DBMS configuration value used for total number of user connections. If locks/transaction is unavailable, you can examine a few typical programs to come up with an estimate. This process is then repeated for each database resource that consumes memory. Of course, the data cache should be treated separately and is covered in the next section.
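For illustration, assume 500 concurrent transactions that each hold an average of 20 locks: 500 × 20 × 75 bytes = 750,000 bytes (roughly 732KB) of memory for locking. In the SQL Server of this era, the number of available locks could then be reserved explicitly; a sketch, assuming the SQL Server 2000 locks option (a value of 0 lets the DBMS allocate locks dynamically):

-- Reserve 10,000 lock structures (about 750KB at 75 bytes apiece)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'locks', 10000
RECONFIGURE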

After repeating this process for each resource, you will arrive at the amount of memory to install for the database server. It is a good practice to leave some breathing room to support the addition of unanticipated application programs, DBMS and operating system upgrades requiring additional memory, and other unforeseen items. This means installing a little bit more memory than you calculate. However, do not leave too much breathing room. The memory was installed to be used, not conserved.

It is a good practice to leave some breathing room.


Data Cache Details

It is important for the DBA to understand the basics of data cache operation. At any given point in time, the data cache will consist of three different types of pages:

  • In-use pages—pages that are currently being read and updated by the DBMS. These pages are not available to be used for other database processing.

  • Updated pages—pages where data has been modified in some way, but the data has not yet been written to disk. These pages are not available to be used for other database processing.

  • Available pages—pages not being used. These pages are available to be used for other database processing. New data can be written to available pages in the data cache.

The performance of the data cache depends on how efficiently memory has been allocated for its use. For example, if unavailable pages dominate the data cache, the DBMS may trigger synchronous writes to increase the space in the data cache. Synchronous writes can slow down database processing because each write request has to wait for the data to be actually physically written to disk. Depending on the type of processing the database applications are using, the DBA may be able to tune data caching parameters and sizes to enable more efficient buffering of data.

Monitoring and Tuning the Data Cache

Ensuring the cache is the proper size is one of the most critical aspects of having an efficient data cache. A data cache that is too large wastes memory and can cause pages in the cache to be moved out to auxiliary storage. A data cache that is too small forces frequent writes to disk and, in the most severe cases, results in swapping the data cache pages back and forth from disk.

Ensuring the cache is the proper size is critical.


The complexity of tuning the data cache varies by DBMS and depends on the configuration and tuning parameters that are available for the data cache. Some DBMSs, such as DB2, provide multiple buffer pools that can be configured and tuned independently with multiple parameters. Others, such as SQL Server, are more basic, with a data cache per database. But regardless of the DBMS, the DBA should monitor the read efficiency of each data cache or buffer pool.

The read efficiency of the data cache is a percentage that tracks how well the cache is performing its primary duty—to avoid physical disk I/O operations. The read efficiency of each data cache is calculated as the number of actual I/Os performed subtracted from the total number of data requests, then divided by the total number of data requests, or:

read efficiency = ((total data requests − actual physical I/Os) / total data requests) × 100
In other words, read efficiency shows the percentage of times a data page is found in the data cache (or buffer pool). The higher this percentage is, the more efficient the buffer pool is. When data pages can be found in buffers in memory without requiring a physical I/O, performance will be enhanced.
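On a more recent SQL Server release, this figure is exposed as the buffer cache hit ratio performance counter. A sketch, assuming the sys.dm_os_performance_counters view, which postdates this text; the ratio counter must be divided by its companion base counter:

-- Compute the buffer cache hit ratio as a percentage
SELECT 100.0 * r.cntr_value / b.cntr_value AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
  ON b.object_name = r.object_name
WHERE r.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'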

The actual numbers for I/O requests and actual physical I/O operations can be found by examining DBMS trace records or by using a database performance monitor. Depending on the DBMS, the DBA may need to turn on traces that externalize instrumentation details of the DBMS. Each DBMS has its own set of instrumentation information that can be examined. Also, depending on the type of performance monitor, traces may not need to be started by the DBA because the monitor may start and stop traces as required. The monitor may also use other means to capture the performance information from the DBMS.

As a rule of thumb, an 80% or better read efficiency is good for a data cache. Of course, the read efficiency value will depend on the type of processing. Many sequential processes can cause the data cache to be overrun and the efficiency to drop. Furthermore, systems with many processes that access data only weekly or monthly may have lower read efficiencies because less data is frequently reused. The DBA should know the type of database processing that occurs for each data cache and gauge the read efficiency of that data cache in light of the processing that is occurring.

An 80% or better read efficiency is good for a data cache.


When read efficiency is consistently below 80%, consider tuning by increasing the size of the data cache or reducing the number of tables and indexes assigned to the data cache. Making such changes may impact availability because the DBMS must be stopped and restarted to register the change. Each DBMS has different requirements for making data cache changes.

Depending on the DBMS, the DBA may be able to configure the data cache better by changing the amount of cache reserved for sequential and parallel operations. Large table scans can quickly monopolize a data cache. By reserving only a subset of the entire cache for sequential scans, the overall performance of the data cache may be improved. Of course, the reverse may be true if the majority of operations that use the data cache are sequential in nature.

Large table scans can quickly monopolize a data cache.


Additional possibilities for tuning the data cache—offered by some DBMS vendors or as add-on products—include creating an additional cache to back up the data cache (see the sidebar “DB2 Hiperpools”), pegging specific pages in memory, and automatically growing the size of the data cache as throughput increases.

Monitoring and Tuning the Procedure Cache

The DBA must monitor the effectiveness of the procedure cache to help improve the efficiency of database applications and queries. Although the procedure cache differs dramatically from DBMS to DBMS, the general idea is the same: to keep optimized SQL structures in memory so that they can be reused by subsequent tasks instead of being reformulated or reread from disk.

DB2 Hiperpools

DB2 for OS/390 provides a secondary level of data caching called a hiperpool. One hiperpool can be defined for each buffer pool. DB2 can move infrequently accessed pages in the buffer pool to the hiperpool. Only pages that have not been changed, or pages that have changed but have already been written to disk, can be written to a hiperpool.

When a row of data is needed from the hiperpool, the page is written back to the buffer pool. If the row is changed, the page is not written back to the hiperpool until the changed page has been written to disk.

With hiperpools, more data can be maintained in memory for longer periods. Of course, the cost is additional memory that must be installed and configured for use by the DBMS.


DB2's EDM Pool

IBM's DB2 for OS/390 caches SQL in a memory area known as the environmental descriptor manager pool, or EDM Pool for short. The EDM Pool contains active and skeleton structures for application plans and packages, as well as optimized dynamic SQL plans, database descriptors, and program authorization lists. The EDM Pool is a sort of catchall memory area for “things” that are used by application programs and SQL statements as they are being run.

An EDM Pool that is too small causes increased I/O activity in the DB2 Directory, a specialized system catalog for internal DB2 structures. An additional symptom that may occur is increased response times as the required structures are read from disk and loaded into memory in the EDM Pool.


To ensure optimal performance, the procedure cache must be sized properly to accommodate all the SQL that may be run concurrently. DBAs need information about the application programs that will be run in order to size the procedure cache effectively. The read-efficiency calculation we used for gauging the effectiveness of the data cache can be used for the procedure cache also. In the case of the procedure cache, the read efficiency calculates how often the DBMS needs to reoptimize SQL. Procedure cache read efficiency usually ranges from 60% to 80%. Of course, this percentage range will differ with the type of DBMS and the type of caching performed, as well as with the type of applications and the number of times the same SQL and programs are run. Refer to the sidebar “DB2's EDM Pool” for an example of a procedure cache.

The procedure cache must be sized properly to accommodate all the SQL that may be run concurrently.


“Open” Database Objects

As database applications process, the DBMS will need to open database objects for access and maintain the open objects. The DBMS will probably have a configuration option for setting this number. However, these numbers are difficult to arrive at originally. As time progresses, the database system can be monitored and better values can be provided.

To begin with, it is not uncommon simply to specify a value such that all production databases and objects can be open. However, each open database object consumes memory. Therefore, it is a good idea to temper this value, taking into account the size of the database implementation, the type of application processing, and the amount of memory available.

Each open database object consumes memory.


Database Logs

The database log is another database system configuration option that can impact performance. The database log, sometimes referred to as the transaction log, is a fundamental component of a database management system. All changes to application data in the database are recorded serially in the database log (see Figure 10-4). Using this information, the DBMS can track which transaction made which changes to the database. Furthermore, ROLLBACK and RECOVER operations utilize the database log to reset the database to a particular point in time.

Figure 10-4. Logging of database transactions.


All changes to application data in the database are recorded serially in the database log.


The manner in which the database log is created depends on the DBMS. Some DBMSs specify the log at the database system level; others define a database log for each database that is created within the database system. Some DBMSs provide a parameter to enable and disable logging. In general, avoid disabling database logging for any database or database system where the data is valuable. In other words, consider turning off database logging only for test databases.

Depending on the DBMS, the database log may consist of several files. For example, Oracle uses a transaction log and rollback segments to accomplish all of the functionality described in this section.

During normal database application processing, SQL INSERT, UPDATE, and DELETE statements will be issued to modify data in the database. As these database modifications are made, the transaction log for the database will grow. Because each database change is logged, the DBA will need to actively monitor the size of the transaction log files. And since data is constantly changing, the log will be continually growing.

The database transaction log is a write-ahead log. This means that changes are made to the transaction log before they are made to the data in the database tables. When the database modification has been fully recorded on the log, recovery of the transaction is guaranteed.

The database transaction log is a write-ahead log.


Typically, the DBMS takes a system checkpoint to guarantee that all log records and all modified database pages are written safely to disk. The frequency of database system checkpoints can be set up by the DBA using database configuration parameters. Checkpoint frequency is usually set as either a predetermined time interval or a preset number of log records written.
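SQL Server, for example, expresses checkpoint frequency in terms of recovery time rather than log volume. A sketch, assuming the recovery interval option (the 5-minute value is illustrative):

-- Checkpoint often enough that restart recovery takes roughly five minutes
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'recovery interval (min)', 5
RECONFIGURE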

Generally, the following type of information is recorded on the database log:

  • Beginning and ending time of each transaction

  • Actual changes made to the data and enough information (using “before” and “after” images of the data) to undo the modifications made during each transaction

  • Allocation and deallocation of database pages

  • Actual COMMIT or ROLLBACK of each transaction

Using this information, the DBMS can accomplish data integrity operations to ensure consistent data. The transaction log is used when the DBMS is restarted, when transactions are rolled back, and to restore a database to a prior state. Let's examine each of these scenarios.

When the DBMS is restarted, each database goes through a recovery process. During restart processing, the DBMS will check to determine which transactions must be rolled forward. This occurs for transactions where it is unknown if all the modifications were actually written from the cache to disk. A checkpoint forces all modified pages to disk. Therefore, it represents the point at which the startup recovery must start to roll transactions forward. Because all pages modified before the checkpoint are guaranteed to be written accurately to disk, there is no need to roll forward anything done before the checkpoint.

When a transaction is rolled back, the DBMS copies the “before” images of every modification made since the transaction began back to the database.

During a recovery scenario, the DBA can use the transaction log to restore a database. First, a backup copy of the database is restored, and then subsequent transaction log backups can be restored. This causes a roll forward of the transaction log. During a roll forward, the DBMS will copy to the database “after” images of each modification. Using the logged data, the DBMS ensures that each modification is applied in the same order that it originally occurred.

During a recovery scenario, the DBA can use the transaction log to restore a database.


You can see that the transaction log is a useful item to have around in case of database or transaction errors, and to ensure data integrity.

Database Log Configuration Considerations

The configuration of the database log can be a complicated task. Depending on the DBMS, the DBA may need to make multiple configuration decisions to set up the database transaction log, such as defining input buffers and output buffers, setting up log offloading, and defining the actual log files.

Defining output buffers for the database log can optimize log write operations. Writing log records to memory instead of directly to disk creates more efficient database processing. The DBMS can write log records asynchronously from the output buffer to the physical log file later. When logging is implemented in this fashion, database processing does not need to wait for synchronous log writes to occur to disk.

Defining input buffers for the database log can optimize operations—such as ROLLBACK and RECOVER—that read the database log.

When configuring the database log, it is a good idea to set up dual logs. With dual logs, the DBMS will log changes to two separate and independent log files. Implementing dual logging provides a redundant log for use in case one of the logs fails. A log can fail for many reasons, including device failure, media problems, or simple carelessness. When setting up dual logging, be sure to define each log on separate devices and on separate controllers to minimize the possibility of both logs failing at the same time.

It is a good idea to set up dual logs.
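Oracle, for example, supports this directly through multiplexed redo logs. A sketch of adding a second member to an existing log group (the file path and group number are hypothetical):

-- Place the new member on a different device and controller than the first
ALTER DATABASE ADD LOGFILE MEMBER '/disk2/oradata/redo01b.log' TO GROUP 1;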


Another log configuration detail is deciding how database logs are to be handled when they fill up. Once again, the implementation details depend on the DBMS. Some DBMSs provide for automatic log offloading. Log offloading is the process of archiving an active log to an archival log and switching log writes to a new active log.

If the DBMS performs automatic log offloading, you can improve performance by offloading to an archive log on disk instead of tape. By archiving to disk, the log offloading process will run faster, and backout and recovery processes can also run faster because the log records will be on disk—meaning not just faster I/O but no wait for tapes to be mounted. The DBA can use a storage management system to automatically migrate archive logs to tape after a predetermined amount of time.

Some DBMSs require the database log to be explicitly backed up. In this case, you will need to implement periodic backups of the transaction log. Typically, the DBMS provides a specific backup command to create a transaction log backup. When the DBMS finishes backing up the transaction log, it truncates the inactive portion of the transaction log to free up space on the transaction log. Truncation of the log enables the DBMS to reuse space.
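In SQL Server, such a backup is taken with the BACKUP LOG command. A sketch using the pubs sample database (the backup device path is hypothetical):

-- Back up the transaction log; the inactive portion is truncated afterward
BACKUP LOG pubs TO DISK = 'D:\backups\pubs_log.bak'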

Of course, as a DBA you will need to learn how each DBMS you use handles backing up the database log. DB2 automatically archives the transaction log; Microsoft SQL Server requires the DBA to back up transaction log files. In fact, Microsoft SQL Server provides some odd features for handling the transaction log; the sidebar “Microsoft SQL Server Log Options” discusses the parameters that can impact the operation of the database log.

Are All Database Operations Logged?

Depending on the DBMS, certain situations and commands may not be logged. To avoid “out of space” conditions caused by rapid growth in transaction log files, the DBMS may turn off logging. For example, some DDL operations and database utility executions might not be logged. Beware of these situations and plan accordingly for your recovery needs.

Certain situations and commands may not be logged.


During some large operations, such as CREATE INDEX, the DBMS will probably not log every new page. Instead, the DBMS will record enough information in the database log to determine that a CREATE INDEX happened, so that it can either be recreated during a roll forward or removed during a rollback.

Additionally, some DBMSs provide configuration options at the database level to turn off logging for some types of processing. For example, in Microsoft SQL Server, when the SELECT INTO/BULKCOPY database option is set to TRUE, the following operations will not be recorded in the database transaction log: bulk load operations, SELECT INTO statements, and WRITETEXT and UPDATETEXT statements. These four operations usually cause a large volume of data to be changed in the database. As such, logging can slow down these processes, so logging can optionally be disabled for these operations only. However, because these operations are not recorded in the transaction log, SQL Server cannot use the RESTORE operation on the transaction log to recover these operations. If the DBMS has no knowledge of the operations in the log, it cannot recover the data.
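The option itself is toggled like any other database option. A sketch in the style of the sidebar that follows, using the pubs sample database:

-- Permit nonlogged bulk copies and SELECT INTO statements in pubs
EXEC SP_DBOPTION 'pubs', 'select into/bulkcopy', 'true'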

Microsoft SQL Server Log Options

Microsoft SQL Server provides a system configuration parameter that affects the behavior of database logging. The TRUNC LOG ON CHKPT option can be changed at the database level. Use the system procedure named SP_DBOPTION to change the configuration settings for a database.

EXEC SP_DBOPTION 'pubs', 'trunc. log on chkpt.', 'false'

Issuing this command causes the TRUNC LOG ON CHKPT option to be set to FALSE for the pubs database. To see a list of all current database options set for a database, simply issue the system procedure without additional parameters.

EXEC SP_DBOPTION pubs

The TRUNC LOG ON CHKPT option is potentially quite dangerous. When this option is set to TRUE, every checkpoint operation will cause the database log to be truncated—that is, the database log will be emptied and reset, causing all of the logged changes to be lost. A database cannot be recovered using a log that is truncated on checkpoints.

You might consider setting the TRUNC LOG ON CHKPT option to TRUE for test databases during your application development cycle, but not for mission-critical, production databases. The DBA needs to be aware of these options and ensure that the appropriate settings are made, based on the needs of the users of the data and the applications that access that data.


Of course, SQL Server is not the only DBMS that disables logging for certain operations. Another example is the DB2 REORG utility. A parameter is provided to disable or enable logging during the reorganization process. As a DBA, you will need to learn how and when to turn off logging using your DBMS. Moreover, keep in mind that whenever logging is turned off, the data must be backed up both before and after the nonlogged process to ensure point-in-time recoverability.

Locking and Contention

Concurrency operations such as deadlock detection and lock manager settings can greatly impact database system performance. Database processing relies on locking to ensure consistent data based on user requirements and to avoid losing data during updates. You must balance the need for concurrency with the need for performance. If at all possible, seek to minimize the following situations.

  • Lock suspensions occur when an application process requests a lock that is already held by another application process and cannot be shared. The suspended process temporarily stops running until the requested lock becomes available.

  • Timeouts occur when an application process is terminated because it has been suspended for longer than a preset interval. This interval can usually be set by using a configuration parameter (see the sketch following this list).

  • Deadlocks occur when two or more application processes hold locks on resources that the others need and without which they cannot proceed. The deadlock detection cycle—that is, the time interval between checking for deadlocks—can also be set by using a configuration parameter.
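As an example of bounding lock waits, SQL Server lets each session set its own timeout with the SET LOCK_TIMEOUT statement. A minimal sketch (the 5,000-millisecond value is illustrative):

-- Fail a blocked statement after 5 seconds instead of waiting indefinitely
SET LOCK_TIMEOUT 5000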

Balance the need for concurrency with the need for performance.


When accessing a relational database, the locking process can be quite complex. It depends on the type of processing, the lock size specified when the table was created, the isolation level of the program or SQL statement, the method of data access, and the DBMS configuration parameters. To tune database locking requires a combination of system, database, and application tuning.

The System Catalog

The physical location and setup of the system catalog will have an impact on system performance. The DBA must decide where it will be installed, on what type of disk, and how much space to allocate. These decisions typically are made at installation time.

As a rule of thumb, place the system catalog on a separate disk device so that it can be managed and tuned independently from other application data. If possible, consider completely dedicating a disk volume or two to the system catalog. Consider placing the indexes and tables on separate disk volumes. In addition, if the DBMS does not already provide a separate data cache for the system catalog, consider isolating the system catalog into its own dedicated data cache. Doing this makes it easier to track the efficiency of system I/O versus application I/O.

Place the system catalog on a separate disk device.


When changes need to be made to the system catalog database, utilities such as REORG, COPY, and RECOVER or file system commands need to be used. Changes may need to be made to increase the size of the system catalog, to add a new index, or to migrate to a new release of the DBMS. Usually a migration utility is provided to make system catalog changes.

DBAs should use the system catalog to actively manage their database environment. It is a good practice to actively monitor the database objects in the system catalog and delete obsolete objects. For example, in DB2 for OS/390, IBM delivers sample tables with every new version of DB2. The database object names have the DB2 version number embedded within them. Every new release causes new sample tables to be created. The DBA should delete the old sample tables when new ones are installed. This advice applies to all unused database objects, not just to sample tables. If a tablespace exists that is no longer used, it is consuming valuable resources (disk space, system catalog space, etc.) that can be freed up when the database object is dropped.

Other Configuration Options

Every DBMS will have many configuration and tuning options that are particular to that DBMS. The DBA needs to become an expert in the options that are available and understand the impact of each permissible setting.

Some example configuration options that may be encountered include

  • Nested trigger calls. Some DBMSs can enable and disable nested trigger calls. A nested trigger call is when one trigger causes another trigger to fire. Some DBMSs may provide additional control over trigger nesting by providing a maximum value for it. By setting this value, the DBA can control how many levels of nested trigger calls are allowable. Control over triggers can have an enormous impact on performance. For example, if an application hits the maximum number of nested triggers, all of the changes caused by all previous triggers need to be rolled back—potentially causing considerable performance degradation. (A configuration sketch follows this list.)

  • Security options. The functionality of security and authorization can be controlled by DBMS configuration options. Some DBMSs allow database security to be turned over to external security and control software.

  • Identity values. The identity property can be assigned to a column such that the DBMS automatically assigns numerically sequential values when data is inserted into the table. The DBMS can allow the configuration of the pool size from which identity values are obtained.

  • Distributed database. To configure a distributed database implementation, the DBMS most likely will provide options for connecting databases at various locations.
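As an example of the first item above, SQL Server exposes nested trigger behavior as a server-wide option. A sketch (0 disables nesting, 1 allows it):

-- Prevent one trigger from firing another
EXEC sp_configure 'nested triggers', 0
RECONFIGURE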

General Advice

When configuring your database environment, avoid defaults. Most configuration options will default to a predefined value if no specific value is assigned. The major problem with using default values is that they almost never are the best choice for your particular environment. It is always a good idea to specify the value of every configuration parameter even if the default value is the one you wish to choose.

Avoid default configuration options.


Finally, beware of configuration options that change the behavior of the DBMS. Simply setting a single configuration parameter to the wrong value can cause a lot of damage. Consider the following parameters and their potential impact.

  • Oracle provides an optimization-mode parameter to choose between cost-based and rule-based SQL optimization. These two methods can create dramatically different SQL access paths with dramatically different performance.

  • Sybase provides a parameter called ALLOW UPDATES that controls whether the system catalog tables can be modified using SQL. When this option is turned on, the system catalog can be changed quite easily. This is to be avoided except under the guidance of Sybase technical support. System catalog changes should be driven by the DBMS itself, not by user-issued SQL. (A sketch of toggling this option follows this list.)

  • DB2 provides several parameters that control the behavior of database operations such as parallel queries, data sharing, and dynamic SQL caching.
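As an illustration of the second bullet, the catalog-update switch is set with sp_configure in both the Sybase and Microsoft SQL Server lineages. A sketch for SQL Server 2000, where WITH OVERRIDE is required for this option; turn it back off immediately after use:

-- Dangerous: allow direct SQL updates to system catalog tables
EXEC sp_configure 'allow updates', 1
RECONFIGURE WITH OVERRIDE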
