Delphi
Database Programming
Single-user databases:
- Considerations that need to be addressed:
- which database backend to use:
- which interface to the backend:
- normalisation of database structure:
- pros:
- reduces space requirements of the
database
- eradicates duplicate fields thus:
- easier querying a field
rather than multiple
fields
- field-level validation & lookups easier
- can have unlimited fields
via child table
- cons:
- tables need to be joined for
queries - can slow system
- updating master-detail tables
using cached updates can be a
headache
- indexes on tables:
- primary key:
- incrementing value for each new
record:
- automatic:
- use an
autoincrementing field:
- Paradox:
- maintains the last known value even if the table is emptied; thus to reset it to zero you need to either keep a "clean" empty table and rename it, or restructure the field to LongInt and then back to Autoincrement
- Access:
- if used, make sure the table does not have default values in other fields, otherwise autoincrement values tend to skip
- SQL Servers:
- use a trigger to
generate new value
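The trigger approach can be sketched in Transact-SQL. This is an illustrative sketch only: the table, counter table and column names (Customers, CustNo, KeyCounter) are hypothetical, and it assumes single-row inserts with CustNo = 0 as a placeholder.

```sql
-- Sketch only: Customers, CustNo and KeyCounter are hypothetical names.
-- Assumes single-row inserts with CustNo = 0 as a placeholder;
-- multi-row inserts would need per-row numbering instead.
CREATE TRIGGER trCustomersNewKey ON Customers
FOR INSERT
AS
BEGIN
    DECLARE @NewKey int
    -- reserve the next value; the UPDATE also locks the counter row
    UPDATE KeyCounter SET LastKey = LastKey + 1
        WHERE TableName = 'Customers'
    SELECT @NewKey = LastKey FROM KeyCounter
        WHERE TableName = 'Customers'
    -- stamp the newly inserted row with the generated key
    UPDATE Customers SET CustNo = @NewKey
        WHERE CustNo IN (SELECT CustNo FROM inserted)
END
```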
- manual:
- use a shadow table: find the highest value in its primary key, increment it and use that as the new key in the main table, ensuring the shadow table is refreshed with the new value.
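A minimal Delphi sketch of this shadow-table approach, assuming hypothetical component and table names (qryShadow, ShadowKeys, OrderNo); in a multi-user setting the two steps would also need wrapping in a transaction or lock:

```pascal
// Sketch only: qryShadow (a TQuery), ShadowKeys and OrderNo are
// hypothetical names.
function TDataModule1.NextOrderNo: Integer;
begin
  // find the highest key recorded in the shadow table
  qryShadow.SQL.Text := 'SELECT MAX(OrderNo) AS MaxNo FROM ShadowKeys';
  qryShadow.Open;
  Result := qryShadow.FieldByName('MaxNo').AsInteger + 1;
  qryShadow.Close;
  // refresh the shadow table so the next caller sees the new value
  qryShadow.SQL.Text := 'INSERT INTO ShadowKeys (OrderNo) VALUES (:No)';
  qryShadow.ParamByName('No').AsInteger := Result;
  qryShadow.ExecSQL;
end;
```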
- secondary indexes:
- enable rapid sorting on other
fields
- indexes slow down insertions & updates on large databases, thus don't create them if they will not be utilised much
- NB. TClientDataset can rapidly create indexes for its in-memory data, making sorting very fast.
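For example (a sketch; cdsOrders is a hypothetical TClientDataset already filled with data):

```pascal
// switching IndexFieldNames builds an in-memory index on the fly
cdsOrders.IndexFieldNames := 'Surname;FirstName';

// or define a named descending index once and switch to it by name
cdsOrders.AddIndex('byDateDesc', 'OrderDate', [], 'OrderDate');
cdsOrders.IndexName := 'byDateDesc';
```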
- referential integrity:
- automatic at database table level:
- pros:
- ensures all applications
that access the database
adhere to the rules
- cons:
- migration difficulties
- a corrupt table may
require rebuilding all
linked tables not just
that one table, this may
mean backing all tables
up, emptying them all,
then sequentially
restoring data in each so
that master tables are
restored first
- difficulties in batch insertions - esp. if self-referential
- batch updates are slower
as for each record the
lookup table must be
checked even though it
has already been checked
on a prior record
- manually via business rules in
application:
- pros:
- avoids the migration & rebuild difficulties of table-level referential integrity
- cons:
- if database updated by
other application then
rules will not be applied
- coding required to
maintain rules
- lookup table to
select valid values from
and either:
- prevent user from
altering a primary key,
or,
- ensure that if it is
edited then any dependent
tables that use it as a
lookup have their values
edited, and, prevent
deleting a master record
if dependent table
records exist
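The last rule above might be sketched in Delphi as follows (tblMaster and tblDetail are hypothetical TTable components linked master-detail, so tblDetail shows only the current master's children):

```pascal
// prevent deleting a master record while dependent detail records exist
procedure TDataModule1.tblMasterBeforeDelete(DataSet: TDataSet);
begin
  if not tblDetail.IsEmpty then
    raise Exception.Create('Cannot delete: dependent records exist');
end;
```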
- data validation:
- data backup:
- file-based databases require all users to be disconnected, then the physical file copied to backup; unfortunately these do not usually have a transaction log that can be used to sequentially restore a corrupted database
- set-based databases can usually backup
while live, including their transaction
log
- database optimisation:
- periodic restructuring to compress tables
and their indices
- database size:
- file-based databases only perform well if database size < 100Mb, thus if more than this consider one or more of:
- ensure record size is optimised
(eg. don't use large string
fields when they are not needed)
- defragment hard disk
- archiving old data to another
table
- use a set-based database server
such as Sybase
- security:
- password protection at various user
access levels:
- table or database level
- application level - avoids
revealing the low-level passwords
- database encryption to avoid table being
read by word processors (this will slow
access to data!!)
- transaction control:
- may be needed if updates to more than one record must all succeed together or all be rolled back if errors occur
- eg. changing a master records
balance field when its child
table's record is updated
- client-server databases have transaction support:
- implicit transactions:
- these are SQL statements that are not bounded by a BEGIN TRAN...COMMIT TRAN statement
- each such unbounded DML statement is treated as one atomic transaction
- explicit transactions:
- DML statements that are bound by a BEGIN
TRAN...COMMIT TRAN transaction statement
- allows multiple DML statements to be embedded
within one transaction and all statements rolled
back (ie. cancelled) if any one of the statements
fails
- this thus allows data integrity where one table
may be updated only if a update to another table
succeeds, thereby preventing critical errors that
may otherwise occur.
- BEGIN TRAN must always be paired with a COMMIT TRAN, otherwise the transaction remains open and its locks are held
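A minimal explicit transaction in Transact-SQL (the table and column names are hypothetical):

```sql
-- both updates commit together or, combined with @@error checks,
-- can be rolled back together
BEGIN TRAN
    UPDATE Accounts SET Balance = Balance - 100 WHERE AcctNo = 1
    UPDATE Accounts SET Balance = Balance + 100 WHERE AcctNo = 2
COMMIT TRAN
```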
- You can name transactions by placing the name
after the word TRAN so that multiple concurrent
transactions can be used and executed properly
HOWEVER nesting transactions within a single SQL
program is usually a bad idea and likely to
provide few benefits & many dangers as the
name may only be retained for the outermost
transaction!
- Sybase & MS SQL Server both have the global variable @@trancount to help alleviate the problem of nested transactions: each BEGIN TRAN increments it, each COMMIT decrements it, and data is not committed until the count reaches zero.
- NEVER allow a server transaction to wait on a
user input else this will create locks while
waiting, thus best to reserve server transaction
usage to the backend such as stored procedures,
batch programming.
- NB. Delphi TClientDatasets have client-side transactions which obviate this problem if used appropriately
- triggers are regarded as being a nested implied
transaction, thus if transaction is rolled back so
is trigger and vice versa
- transactions generally are confined to the one
server unless BEGIN DISTRIBUTED TRANSACTION is
used
- performance tips:
- keep transactions short (in terms of time)
and quick (in terms of CPU cycles) to minimise
use of system resources and reduce concurrency
problems
- error checking of transactions:
- Transact SQL allows use of the global variable @@error to track & report the transaction error:
IF @@error <> 0
BEGIN
RAISERROR 50000 'Place error message here'
ROLLBACK TRAN
END
- save points:
- successful DML statements within a
transaction can be saved so that if any
statements that follow fail, they can be
re-tried without having to re-try the initial
successful statements
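In Transact-SQL a save point looks like this (a sketch with hypothetical table names):

```sql
BEGIN TRAN
    INSERT INTO OrderHeader (OrderNo) VALUES (1001)
    -- mark the work done so far
    SAVE TRAN AfterHeader
    INSERT INTO OrderLines (OrderNo, LineNo) VALUES (1001, 1)
    IF @@error <> 0
        -- undo only the statements after the save point;
        -- the header insert survives for the final COMMIT
        ROLLBACK TRAN AfterHeader
COMMIT TRAN
```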
- design for easy upgrade to multi-user
and/or n-tier and/or client server:
- avoid proprietary database
solutions:
- triggers, stored procedures
- proprietary SQL statements
- database-specific validation
& default field values
- consider placing TDatasets and
TDatabase components in a data module:
- allows multiple forms to access
them
- allows easier migration to a
remote data module for n-tier
- don't do this if:
- for reports as these
should not be shared with
user's table
- multiple instance forms
that need their own
cursors to a dataset
- very simple application
where few datasets and
only one form accessing
it
- consider using TClientDatasets as
this will allow:
- pros:
- easier upsizing to client
server
- optimistic record locking
via cached
updates and read only
access
- briefcase model (store
copy of data to local
drive, delayed editing
& applying of
updates)
- does not need BDE if only
use local ClientDataset
tables - ideal for very
simple database
applications where
single-user only
- cons:
- more complex programming
as need:
- management of master
detail updates
- additional components
TProvider(D4)/TAppServer(D5)
- limited access to
TDataset properties (eg.
cannot change SQL easily
although can change its
parameter values)
- may be problems updating
Paradox tables????
- requires registering on client machines
of:
- Delphi 4: dbclient.dll and
stdvcl40.dll
- Delphi 6: midas.dll
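The briefcase model mentioned above can be sketched like this (cdsOrders is a hypothetical TClientDataset connected to a provider; the file name is illustrative):

```pascal
procedure TForm1.SaveBriefcase;
begin
  // store the data plus its pending change log on the local drive
  cdsOrders.SaveToFile('c:\briefcase\orders.cds');
end;

procedure TForm1.ApplyBriefcase;
begin
  // reload the local copy, then send the accumulated edits to the server
  cdsOrders.LoadFromFile('c:\briefcase\orders.cds');
  cdsOrders.ApplyUpdates(0);  // 0 = stop on the first reconcile error
end;
```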
Multi-user databases:
- in addition to the above considerations for single-user
databases, one needs to consider:
- record locking:
- this is a major problem as:
- if record is only locked during
posting of data (optimistic
locking), someone else may have
changed the data since you first
read it, so that your post may
either:
- over-write their data
without you realising it
- be cancelled
- if record is locked when a user starts editing it (pessimistic locking), everyone else is locked out until that user has posted or cancelled his edits, which could be a long time & potentially forever if that user's computer crashes during this process (thereby perhaps requiring a manual fix to the record lock, such as killing the process, but this may lead to other sequelae such as zombie processes which in turn may leave other locks in place)
- in order to perform certain
routines on a table it may need
to have an exclusive lock placed
on it, so that other users would
need to be notified and
disconnected.
- most databases (Access & SQL servers) lock not only the record but also its neighbouring records, as a "page" lock is used.
- solutions:
- optimistic locking with
reconciliation dialogs:
- automatically built in
for Access front end
- Delphi:
- cached updates, thus only a read lock is placed until ApplyUpdates is called:
- TClientDataset and reconciliation dialog
- TDataSet using cached updates and reconciliation dialog????
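A sketch of the TClientDataset reconciliation approach, assuming the standard Reconcile Error Dialog that Delphi generates from its repository template (the RecError unit and cdsOrders are illustrative names):

```pascal
// add RecError (the generated reconcile dialog unit) to the uses clause
procedure TForm1.cdsOrdersReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind;
  var Action: TReconcileAction);
begin
  // show the user the conflicting old/new values and let them choose
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;
```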
- use of an application server (ie. middle tier):
- pros:
- allows a thin-client database solution as only the application server needs the BDE
- allows centralisation of business
rules if multiple applications
share data
- cons:
- cost - $5000 per server for MIDAS
licence
- more complex programming
- minimise network traffic: