[Freeswitch-trunk] [commit] r3735 - in freeswitch/trunk/libs/sqlite: . art art/tmp contrib doc ext ext/fts1 notes src src/ex test tool www
Freeswitch SVN
mikej at freeswitch.org
Tue Dec 19 15:12:23 EST 2006
Author: mikej
Date: Tue Dec 19 15:11:50 2006
New Revision: 3735
Added:
freeswitch/trunk/libs/sqlite/
freeswitch/trunk/libs/sqlite/Makefile.in
freeswitch/trunk/libs/sqlite/Makefile.linux-gcc
freeswitch/trunk/libs/sqlite/README
freeswitch/trunk/libs/sqlite/VERSION
freeswitch/trunk/libs/sqlite/aclocal.m4
freeswitch/trunk/libs/sqlite/addopcodes.awk
freeswitch/trunk/libs/sqlite/art/
freeswitch/trunk/libs/sqlite/art/2005osaward.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/art/SQLite.eps (contents, props changed)
freeswitch/trunk/libs/sqlite/art/SQLite.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/art/SQLiteLogo3.tiff (contents, props changed)
freeswitch/trunk/libs/sqlite/art/tmp/
freeswitch/trunk/libs/sqlite/config.guess
freeswitch/trunk/libs/sqlite/config.sub
freeswitch/trunk/libs/sqlite/configure (contents, props changed)
freeswitch/trunk/libs/sqlite/configure.ac
freeswitch/trunk/libs/sqlite/contrib/
freeswitch/trunk/libs/sqlite/contrib/sqlitecon.tcl
freeswitch/trunk/libs/sqlite/doc/
freeswitch/trunk/libs/sqlite/doc/lemon.html
freeswitch/trunk/libs/sqlite/doc/report1.txt
freeswitch/trunk/libs/sqlite/ext/
freeswitch/trunk/libs/sqlite/ext/README.txt
freeswitch/trunk/libs/sqlite/ext/fts1/
freeswitch/trunk/libs/sqlite/ext/fts1/README.txt
freeswitch/trunk/libs/sqlite/ext/fts1/fts1.c
freeswitch/trunk/libs/sqlite/ext/fts1/fts1.h
freeswitch/trunk/libs/sqlite/ext/fts1/fts1_hash.c
freeswitch/trunk/libs/sqlite/ext/fts1/fts1_hash.h
freeswitch/trunk/libs/sqlite/ext/fts1/fts1_porter.c
freeswitch/trunk/libs/sqlite/ext/fts1/fts1_tokenizer.h
freeswitch/trunk/libs/sqlite/ext/fts1/fts1_tokenizer1.c
freeswitch/trunk/libs/sqlite/install-sh (contents, props changed)
freeswitch/trunk/libs/sqlite/ltmain.sh
freeswitch/trunk/libs/sqlite/main.mk
freeswitch/trunk/libs/sqlite/mkdll.sh
freeswitch/trunk/libs/sqlite/mkopcodec.awk
freeswitch/trunk/libs/sqlite/mkopcodeh.awk
freeswitch/trunk/libs/sqlite/mkso.sh
freeswitch/trunk/libs/sqlite/notes/
freeswitch/trunk/libs/sqlite/publish.sh
freeswitch/trunk/libs/sqlite/spec.template
freeswitch/trunk/libs/sqlite/sqlite.pc.in
freeswitch/trunk/libs/sqlite/sqlite3.1
freeswitch/trunk/libs/sqlite/sqlite3.pc.in
freeswitch/trunk/libs/sqlite/src/
freeswitch/trunk/libs/sqlite/src/alter.c
freeswitch/trunk/libs/sqlite/src/analyze.c
freeswitch/trunk/libs/sqlite/src/attach.c
freeswitch/trunk/libs/sqlite/src/auth.c
freeswitch/trunk/libs/sqlite/src/btree.c
freeswitch/trunk/libs/sqlite/src/btree.h
freeswitch/trunk/libs/sqlite/src/build.c
freeswitch/trunk/libs/sqlite/src/callback.c
freeswitch/trunk/libs/sqlite/src/complete.c
freeswitch/trunk/libs/sqlite/src/date.c
freeswitch/trunk/libs/sqlite/src/delete.c
freeswitch/trunk/libs/sqlite/src/ex/
freeswitch/trunk/libs/sqlite/src/expr.c
freeswitch/trunk/libs/sqlite/src/func.c
freeswitch/trunk/libs/sqlite/src/hash.c
freeswitch/trunk/libs/sqlite/src/hash.h
freeswitch/trunk/libs/sqlite/src/insert.c
freeswitch/trunk/libs/sqlite/src/legacy.c
freeswitch/trunk/libs/sqlite/src/loadext.c
freeswitch/trunk/libs/sqlite/src/main.c
freeswitch/trunk/libs/sqlite/src/os.c
freeswitch/trunk/libs/sqlite/src/os.h
freeswitch/trunk/libs/sqlite/src/os_common.h
freeswitch/trunk/libs/sqlite/src/os_os2.c
freeswitch/trunk/libs/sqlite/src/os_os2.h
freeswitch/trunk/libs/sqlite/src/os_unix.c
freeswitch/trunk/libs/sqlite/src/os_win.c
freeswitch/trunk/libs/sqlite/src/pager.c
freeswitch/trunk/libs/sqlite/src/pager.h
freeswitch/trunk/libs/sqlite/src/parse.y
freeswitch/trunk/libs/sqlite/src/pragma.c
freeswitch/trunk/libs/sqlite/src/prepare.c
freeswitch/trunk/libs/sqlite/src/printf.c
freeswitch/trunk/libs/sqlite/src/random.c
freeswitch/trunk/libs/sqlite/src/select.c
freeswitch/trunk/libs/sqlite/src/shell.c
freeswitch/trunk/libs/sqlite/src/sqlite.h.in
freeswitch/trunk/libs/sqlite/src/sqlite3ext.h
freeswitch/trunk/libs/sqlite/src/sqliteInt.h
freeswitch/trunk/libs/sqlite/src/table.c
freeswitch/trunk/libs/sqlite/src/tclsqlite.c
freeswitch/trunk/libs/sqlite/src/test1.c
freeswitch/trunk/libs/sqlite/src/test2.c
freeswitch/trunk/libs/sqlite/src/test3.c
freeswitch/trunk/libs/sqlite/src/test4.c
freeswitch/trunk/libs/sqlite/src/test5.c
freeswitch/trunk/libs/sqlite/src/test6.c
freeswitch/trunk/libs/sqlite/src/test7.c
freeswitch/trunk/libs/sqlite/src/test8.c
freeswitch/trunk/libs/sqlite/src/test_async.c
freeswitch/trunk/libs/sqlite/src/test_autoext.c
freeswitch/trunk/libs/sqlite/src/test_loadext.c
freeswitch/trunk/libs/sqlite/src/test_md5.c
freeswitch/trunk/libs/sqlite/src/test_schema.c
freeswitch/trunk/libs/sqlite/src/test_server.c
freeswitch/trunk/libs/sqlite/src/test_tclvar.c
freeswitch/trunk/libs/sqlite/src/tokenize.c
freeswitch/trunk/libs/sqlite/src/trigger.c
freeswitch/trunk/libs/sqlite/src/update.c
freeswitch/trunk/libs/sqlite/src/utf.c
freeswitch/trunk/libs/sqlite/src/util.c
freeswitch/trunk/libs/sqlite/src/vacuum.c
freeswitch/trunk/libs/sqlite/src/vdbe.c
freeswitch/trunk/libs/sqlite/src/vdbe.h
freeswitch/trunk/libs/sqlite/src/vdbeInt.h
freeswitch/trunk/libs/sqlite/src/vdbeapi.c
freeswitch/trunk/libs/sqlite/src/vdbeaux.c
freeswitch/trunk/libs/sqlite/src/vdbefifo.c
freeswitch/trunk/libs/sqlite/src/vdbemem.c
freeswitch/trunk/libs/sqlite/src/vtab.c
freeswitch/trunk/libs/sqlite/src/where.c
freeswitch/trunk/libs/sqlite/tclinstaller.tcl
freeswitch/trunk/libs/sqlite/test/
freeswitch/trunk/libs/sqlite/test/aggerror.test
freeswitch/trunk/libs/sqlite/test/all.test
freeswitch/trunk/libs/sqlite/test/alter.test
freeswitch/trunk/libs/sqlite/test/alter2.test
freeswitch/trunk/libs/sqlite/test/alter3.test
freeswitch/trunk/libs/sqlite/test/altermalloc.test
freeswitch/trunk/libs/sqlite/test/analyze.test
freeswitch/trunk/libs/sqlite/test/async.test
freeswitch/trunk/libs/sqlite/test/async2.test
freeswitch/trunk/libs/sqlite/test/attach.test
freeswitch/trunk/libs/sqlite/test/attach2.test
freeswitch/trunk/libs/sqlite/test/attach3.test
freeswitch/trunk/libs/sqlite/test/attachmalloc.test
freeswitch/trunk/libs/sqlite/test/auth.test
freeswitch/trunk/libs/sqlite/test/auth2.test
freeswitch/trunk/libs/sqlite/test/autoinc.test
freeswitch/trunk/libs/sqlite/test/autovacuum.test
freeswitch/trunk/libs/sqlite/test/autovacuum_crash.test
freeswitch/trunk/libs/sqlite/test/autovacuum_ioerr.test
freeswitch/trunk/libs/sqlite/test/autovacuum_ioerr2.test
freeswitch/trunk/libs/sqlite/test/avtrans.test
freeswitch/trunk/libs/sqlite/test/between.test
freeswitch/trunk/libs/sqlite/test/bigfile.test
freeswitch/trunk/libs/sqlite/test/bigrow.test
freeswitch/trunk/libs/sqlite/test/bind.test
freeswitch/trunk/libs/sqlite/test/bindxfer.test
freeswitch/trunk/libs/sqlite/test/blob.test
freeswitch/trunk/libs/sqlite/test/btree.test
freeswitch/trunk/libs/sqlite/test/btree2.test
freeswitch/trunk/libs/sqlite/test/btree4.test
freeswitch/trunk/libs/sqlite/test/btree5.test
freeswitch/trunk/libs/sqlite/test/btree6.test
freeswitch/trunk/libs/sqlite/test/btree7.test
freeswitch/trunk/libs/sqlite/test/btree8.test
freeswitch/trunk/libs/sqlite/test/busy.test
freeswitch/trunk/libs/sqlite/test/capi2.test
freeswitch/trunk/libs/sqlite/test/capi3.test
freeswitch/trunk/libs/sqlite/test/capi3b.test
freeswitch/trunk/libs/sqlite/test/cast.test
freeswitch/trunk/libs/sqlite/test/check.test
freeswitch/trunk/libs/sqlite/test/collate1.test
freeswitch/trunk/libs/sqlite/test/collate2.test
freeswitch/trunk/libs/sqlite/test/collate3.test
freeswitch/trunk/libs/sqlite/test/collate4.test
freeswitch/trunk/libs/sqlite/test/collate5.test
freeswitch/trunk/libs/sqlite/test/collate6.test
freeswitch/trunk/libs/sqlite/test/colmeta.test
freeswitch/trunk/libs/sqlite/test/conflict.test
freeswitch/trunk/libs/sqlite/test/corrupt.test
freeswitch/trunk/libs/sqlite/test/corrupt2.test
freeswitch/trunk/libs/sqlite/test/crash.test
freeswitch/trunk/libs/sqlite/test/date.test
freeswitch/trunk/libs/sqlite/test/default.test
freeswitch/trunk/libs/sqlite/test/delete.test
freeswitch/trunk/libs/sqlite/test/delete2.test
freeswitch/trunk/libs/sqlite/test/delete3.test
freeswitch/trunk/libs/sqlite/test/descidx1.test
freeswitch/trunk/libs/sqlite/test/descidx2.test
freeswitch/trunk/libs/sqlite/test/descidx3.test
freeswitch/trunk/libs/sqlite/test/diskfull.test
freeswitch/trunk/libs/sqlite/test/distinctagg.test
freeswitch/trunk/libs/sqlite/test/enc.test
freeswitch/trunk/libs/sqlite/test/enc2.test
freeswitch/trunk/libs/sqlite/test/enc3.test
freeswitch/trunk/libs/sqlite/test/expr.test
freeswitch/trunk/libs/sqlite/test/fkey1.test
freeswitch/trunk/libs/sqlite/test/format4.test
freeswitch/trunk/libs/sqlite/test/fts1a.test
freeswitch/trunk/libs/sqlite/test/fts1b.test
freeswitch/trunk/libs/sqlite/test/fts1c.test
freeswitch/trunk/libs/sqlite/test/fts1d.test
freeswitch/trunk/libs/sqlite/test/fts1porter.test
freeswitch/trunk/libs/sqlite/test/func.test
freeswitch/trunk/libs/sqlite/test/hook.test
freeswitch/trunk/libs/sqlite/test/in.test
freeswitch/trunk/libs/sqlite/test/index.test
freeswitch/trunk/libs/sqlite/test/index2.test
freeswitch/trunk/libs/sqlite/test/index3.test
freeswitch/trunk/libs/sqlite/test/insert.test
freeswitch/trunk/libs/sqlite/test/insert2.test
freeswitch/trunk/libs/sqlite/test/insert3.test
freeswitch/trunk/libs/sqlite/test/interrupt.test
freeswitch/trunk/libs/sqlite/test/intpkey.test
freeswitch/trunk/libs/sqlite/test/ioerr.test
freeswitch/trunk/libs/sqlite/test/join.test
freeswitch/trunk/libs/sqlite/test/join2.test
freeswitch/trunk/libs/sqlite/test/join3.test
freeswitch/trunk/libs/sqlite/test/join4.test
freeswitch/trunk/libs/sqlite/test/join5.test
freeswitch/trunk/libs/sqlite/test/journal1.test
freeswitch/trunk/libs/sqlite/test/lastinsert.test
freeswitch/trunk/libs/sqlite/test/laststmtchanges.test
freeswitch/trunk/libs/sqlite/test/like.test
freeswitch/trunk/libs/sqlite/test/limit.test
freeswitch/trunk/libs/sqlite/test/loadext.test
freeswitch/trunk/libs/sqlite/test/loadext2.test
freeswitch/trunk/libs/sqlite/test/lock.test
freeswitch/trunk/libs/sqlite/test/lock2.test
freeswitch/trunk/libs/sqlite/test/lock3.test
freeswitch/trunk/libs/sqlite/test/main.test
freeswitch/trunk/libs/sqlite/test/malloc.test
freeswitch/trunk/libs/sqlite/test/malloc2.test
freeswitch/trunk/libs/sqlite/test/malloc3.test
freeswitch/trunk/libs/sqlite/test/malloc4.test
freeswitch/trunk/libs/sqlite/test/malloc5.test
freeswitch/trunk/libs/sqlite/test/malloc6.test
freeswitch/trunk/libs/sqlite/test/malloc7.test
freeswitch/trunk/libs/sqlite/test/manydb.test
freeswitch/trunk/libs/sqlite/test/memdb.test
freeswitch/trunk/libs/sqlite/test/memleak.test
freeswitch/trunk/libs/sqlite/test/minmax.test
freeswitch/trunk/libs/sqlite/test/misc1.test
freeswitch/trunk/libs/sqlite/test/misc2.test
freeswitch/trunk/libs/sqlite/test/misc3.test
freeswitch/trunk/libs/sqlite/test/misc4.test
freeswitch/trunk/libs/sqlite/test/misc5.test
freeswitch/trunk/libs/sqlite/test/misc6.test
freeswitch/trunk/libs/sqlite/test/misuse.test
freeswitch/trunk/libs/sqlite/test/notnull.test
freeswitch/trunk/libs/sqlite/test/null.test
freeswitch/trunk/libs/sqlite/test/pager.test
freeswitch/trunk/libs/sqlite/test/pager2.test
freeswitch/trunk/libs/sqlite/test/pager3.test
freeswitch/trunk/libs/sqlite/test/pagesize.test
freeswitch/trunk/libs/sqlite/test/pragma.test
freeswitch/trunk/libs/sqlite/test/printf.test
freeswitch/trunk/libs/sqlite/test/progress.test (contents, props changed)
freeswitch/trunk/libs/sqlite/test/quick.test
freeswitch/trunk/libs/sqlite/test/quote.test
freeswitch/trunk/libs/sqlite/test/reindex.test
freeswitch/trunk/libs/sqlite/test/rollback.test
freeswitch/trunk/libs/sqlite/test/rowid.test
freeswitch/trunk/libs/sqlite/test/safety.test
freeswitch/trunk/libs/sqlite/test/schema.test
freeswitch/trunk/libs/sqlite/test/select1.test
freeswitch/trunk/libs/sqlite/test/select2.test
freeswitch/trunk/libs/sqlite/test/select3.test
freeswitch/trunk/libs/sqlite/test/select4.test
freeswitch/trunk/libs/sqlite/test/select5.test
freeswitch/trunk/libs/sqlite/test/select6.test
freeswitch/trunk/libs/sqlite/test/select7.test
freeswitch/trunk/libs/sqlite/test/server1.test
freeswitch/trunk/libs/sqlite/test/shared.test
freeswitch/trunk/libs/sqlite/test/shared2.test
freeswitch/trunk/libs/sqlite/test/shared3.test
freeswitch/trunk/libs/sqlite/test/shared_err.test
freeswitch/trunk/libs/sqlite/test/sort.test
freeswitch/trunk/libs/sqlite/test/subquery.test
freeswitch/trunk/libs/sqlite/test/subselect.test
freeswitch/trunk/libs/sqlite/test/sync.test
freeswitch/trunk/libs/sqlite/test/table.test
freeswitch/trunk/libs/sqlite/test/tableapi.test
freeswitch/trunk/libs/sqlite/test/tclsqlite.test
freeswitch/trunk/libs/sqlite/test/temptable.test
freeswitch/trunk/libs/sqlite/test/tester.tcl
freeswitch/trunk/libs/sqlite/test/thread1.test
freeswitch/trunk/libs/sqlite/test/thread2.test
freeswitch/trunk/libs/sqlite/test/threadtest1.c
freeswitch/trunk/libs/sqlite/test/threadtest2.c
freeswitch/trunk/libs/sqlite/test/tkt1435.test
freeswitch/trunk/libs/sqlite/test/tkt1443.test
freeswitch/trunk/libs/sqlite/test/tkt1444.test
freeswitch/trunk/libs/sqlite/test/tkt1449.test
freeswitch/trunk/libs/sqlite/test/tkt1473.test
freeswitch/trunk/libs/sqlite/test/tkt1501.test
freeswitch/trunk/libs/sqlite/test/tkt1512.test
freeswitch/trunk/libs/sqlite/test/tkt1514.test
freeswitch/trunk/libs/sqlite/test/tkt1536.test
freeswitch/trunk/libs/sqlite/test/tkt1537.test
freeswitch/trunk/libs/sqlite/test/tkt1567.test
freeswitch/trunk/libs/sqlite/test/tkt1644.test
freeswitch/trunk/libs/sqlite/test/tkt1667.test
freeswitch/trunk/libs/sqlite/test/tkt1873.test
freeswitch/trunk/libs/sqlite/test/trace.test
freeswitch/trunk/libs/sqlite/test/trans.test
freeswitch/trunk/libs/sqlite/test/trigger1.test
freeswitch/trunk/libs/sqlite/test/trigger2.test
freeswitch/trunk/libs/sqlite/test/trigger3.test
freeswitch/trunk/libs/sqlite/test/trigger4.test
freeswitch/trunk/libs/sqlite/test/trigger5.test
freeswitch/trunk/libs/sqlite/test/trigger6.test
freeswitch/trunk/libs/sqlite/test/trigger7.test
freeswitch/trunk/libs/sqlite/test/trigger8.test
freeswitch/trunk/libs/sqlite/test/types.test
freeswitch/trunk/libs/sqlite/test/types2.test
freeswitch/trunk/libs/sqlite/test/types3.test
freeswitch/trunk/libs/sqlite/test/unique.test
freeswitch/trunk/libs/sqlite/test/update.test
freeswitch/trunk/libs/sqlite/test/utf16.test
freeswitch/trunk/libs/sqlite/test/utf16align.test
freeswitch/trunk/libs/sqlite/test/vacuum.test
freeswitch/trunk/libs/sqlite/test/vacuum2.test
freeswitch/trunk/libs/sqlite/test/varint.test
freeswitch/trunk/libs/sqlite/test/view.test
freeswitch/trunk/libs/sqlite/test/vtab1.test
freeswitch/trunk/libs/sqlite/test/vtab2.test
freeswitch/trunk/libs/sqlite/test/vtab3.test
freeswitch/trunk/libs/sqlite/test/vtab4.test
freeswitch/trunk/libs/sqlite/test/vtab5.test
freeswitch/trunk/libs/sqlite/test/vtab6.test
freeswitch/trunk/libs/sqlite/test/vtab7.test
freeswitch/trunk/libs/sqlite/test/vtab9.test
freeswitch/trunk/libs/sqlite/test/vtab_err.test
freeswitch/trunk/libs/sqlite/test/where.test
freeswitch/trunk/libs/sqlite/test/where2.test
freeswitch/trunk/libs/sqlite/test/where3.test
freeswitch/trunk/libs/sqlite/tool/
freeswitch/trunk/libs/sqlite/tool/diffdb.c
freeswitch/trunk/libs/sqlite/tool/lemon.c
freeswitch/trunk/libs/sqlite/tool/lempar.c
freeswitch/trunk/libs/sqlite/tool/memleak.awk
freeswitch/trunk/libs/sqlite/tool/memleak2.awk
freeswitch/trunk/libs/sqlite/tool/memleak3.tcl
freeswitch/trunk/libs/sqlite/tool/mkkeywordhash.c
freeswitch/trunk/libs/sqlite/tool/mkopts.tcl (contents, props changed)
freeswitch/trunk/libs/sqlite/tool/omittest.tcl
freeswitch/trunk/libs/sqlite/tool/opcodeDoc.awk
freeswitch/trunk/libs/sqlite/tool/report1.txt
freeswitch/trunk/libs/sqlite/tool/showdb.c
freeswitch/trunk/libs/sqlite/tool/showjournal.c
freeswitch/trunk/libs/sqlite/tool/space_used.tcl
freeswitch/trunk/libs/sqlite/tool/spaceanal.tcl
freeswitch/trunk/libs/sqlite/tool/speedtest.tcl
freeswitch/trunk/libs/sqlite/tool/speedtest2.tcl
freeswitch/trunk/libs/sqlite/www/
freeswitch/trunk/libs/sqlite/www/arch.fig
freeswitch/trunk/libs/sqlite/www/arch.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/www/arch.png (contents, props changed)
freeswitch/trunk/libs/sqlite/www/arch.tcl
freeswitch/trunk/libs/sqlite/www/arch2.fig
freeswitch/trunk/libs/sqlite/www/arch2.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/www/arch2b.fig
freeswitch/trunk/libs/sqlite/www/audit.tcl
freeswitch/trunk/libs/sqlite/www/autoinc.tcl
freeswitch/trunk/libs/sqlite/www/c_interface.tcl
freeswitch/trunk/libs/sqlite/www/capi3.tcl
freeswitch/trunk/libs/sqlite/www/capi3ref.tcl
freeswitch/trunk/libs/sqlite/www/changes.tcl
freeswitch/trunk/libs/sqlite/www/common.tcl
freeswitch/trunk/libs/sqlite/www/compile.tcl
freeswitch/trunk/libs/sqlite/www/conflict.tcl
freeswitch/trunk/libs/sqlite/www/copyright-release.html
freeswitch/trunk/libs/sqlite/www/copyright-release.pdf (contents, props changed)
freeswitch/trunk/libs/sqlite/www/copyright.tcl
freeswitch/trunk/libs/sqlite/www/datatype3.tcl
freeswitch/trunk/libs/sqlite/www/datatypes.tcl
freeswitch/trunk/libs/sqlite/www/different.tcl
freeswitch/trunk/libs/sqlite/www/direct1b.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/www/docs.tcl
freeswitch/trunk/libs/sqlite/www/download.tcl
freeswitch/trunk/libs/sqlite/www/dynload.tcl
freeswitch/trunk/libs/sqlite/www/faq.tcl
freeswitch/trunk/libs/sqlite/www/fileformat.tcl
freeswitch/trunk/libs/sqlite/www/formatchng.tcl
freeswitch/trunk/libs/sqlite/www/fullscanb.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/www/index-ex1-x-b.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/www/index.tcl
freeswitch/trunk/libs/sqlite/www/indirect1b1.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/www/lang.tcl
freeswitch/trunk/libs/sqlite/www/lockingv3.tcl
freeswitch/trunk/libs/sqlite/www/mingw.tcl
freeswitch/trunk/libs/sqlite/www/nulls.tcl
freeswitch/trunk/libs/sqlite/www/oldnews.tcl
freeswitch/trunk/libs/sqlite/www/omitted.tcl
freeswitch/trunk/libs/sqlite/www/opcode.tcl
freeswitch/trunk/libs/sqlite/www/optimizer.tcl
freeswitch/trunk/libs/sqlite/www/optimizing.tcl
freeswitch/trunk/libs/sqlite/www/optoverview.tcl
freeswitch/trunk/libs/sqlite/www/pragma.tcl
freeswitch/trunk/libs/sqlite/www/quickstart.tcl
freeswitch/trunk/libs/sqlite/www/shared.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/www/sharedcache.tcl
freeswitch/trunk/libs/sqlite/www/speed.tcl
freeswitch/trunk/libs/sqlite/www/sqlite.tcl
freeswitch/trunk/libs/sqlite/www/support.tcl
freeswitch/trunk/libs/sqlite/www/table-ex1b2.gif (contents, props changed)
freeswitch/trunk/libs/sqlite/www/tclsqlite.tcl
freeswitch/trunk/libs/sqlite/www/vdbe.tcl
freeswitch/trunk/libs/sqlite/www/version3.tcl
freeswitch/trunk/libs/sqlite/www/whentouse.tcl
Log:
add sqlite 3.3.8 to in-tree libs
Added: freeswitch/trunk/libs/sqlite/Makefile.in
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/Makefile.in Tue Dec 19 15:11:50 2006
@@ -0,0 +1,708 @@
+#!/usr/make
+#
+# Makefile for SQLITE
+#
+# This makefile is supposed to be configured automatically using
+# autoconf. But if that does not work for you, you can configure
+# the makefile manually. Just set the parameters below to values that
+# work well for your system.
+#
+# If the configure script does not work out-of-the-box, you might
+# be able to get it to work by giving it some hints. See the comment
+# at the beginning of configure.in for additional information.
+#
+
+# The toplevel directory of the source tree. This is the directory
+# that contains this "Makefile.in" and the "configure.in" script.
+#
+TOP = @srcdir@
+
+# C Compiler and options for use in building executables that
+# will run on the platform that is doing the build.
+#
+BCC = @BUILD_CC@ @BUILD_CFLAGS@
+
+# C Compiler and options for use in building executables that
+# will run on the target platform. (BCC and TCC are usually the
+# same unless you are cross-compiling.)
+#
+TCC = @TARGET_CC@ @TARGET_CFLAGS@ -I. -I${TOP}/src
+
+# Define -DNDEBUG to compile without debugging (i.e., for production use).
+# Omitting the define will cause extra debugging code to be inserted and
+# extra comments to be included when "EXPLAIN stmt" is used.
+#
+TCC += @TARGET_DEBUG@ @XTHREADCONNECT@
+
+# Compiler options needed for programs that use the TCL library.
+#
+TCC += @TCL_INCLUDE_SPEC@
+
+# The library that programs using TCL must link against.
+#
+LIBTCL = @TCL_LIB_SPEC@ @TCL_LIBS@
+
+# Compiler options needed for programs that use the readline() library.
+#
+READLINE_FLAGS = -DHAVE_READLINE=@TARGET_HAVE_READLINE@ @TARGET_READLINE_INC@
+
+# The library that programs using readline() must link against.
+#
+LIBREADLINE = @TARGET_READLINE_LIBS@
+
+# Should the database engine be compiled threadsafe
+#
+TCC += -DTHREADSAFE=@THREADSAFE@
+
+# The pthreads library if needed
+#
+LIBPTHREAD=@TARGET_THREAD_LIB@
+
+# Do threads override each other's locks by default (1), or do we test (-1)?
+#
+TCC += -DSQLITE_THREAD_OVERRIDE_LOCK=@THREADSOVERRIDELOCKS@
+
+# The fdatasync library
+TLIBS = @TARGET_LIBS@
+
+# Flags controlling use of the in memory btree implementation
+#
+# TEMP_STORE is 0 to force temporary tables to be in a file, 1 to
+# default to file, 2 to default to memory, and 3 to force temporary
+# tables to always be in memory.
+#
+TEMP_STORE = -DTEMP_STORE=@TEMP_STORE@
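+#
+# For illustration only, a build that always keeps temporary tables in
+# memory could hard-code the value instead of using the configure
+# substitution, e.g.:
+#
+#   TEMP_STORE = -DTEMP_STORE=3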
+
+# Version numbers and release number for the SQLite being compiled.
+#
+VERSION = @VERSION@
+VERSION_NUMBER = @VERSION_NUMBER@
+RELEASE = @RELEASE@
+
+# Filename extensions
+#
+BEXE = @BUILD_EXEEXT@
+TEXE = @TARGET_EXEEXT@
+
+# The following variable is "1" if the configure script was able to locate
+# the tclConfig.sh file. It is an empty string otherwise. When this
+# variable is "1", the TCL extension library (libtclsqlite3.so) is built
+# and installed.
+#
+HAVE_TCL = @HAVE_TCL@
+
+# The suffix used on shared libraries. Ex: ".dll", ".so", ".dylib"
+#
+SHLIB_SUFFIX = @TCL_SHLIB_SUFFIX@
+
+# The directory into which to store package information for
+
+# Some standard variables and programs
+#
+prefix = @prefix@
+exec_prefix = @exec_prefix@
+libdir = @libdir@
+INSTALL = @INSTALL@
+LIBTOOL = ./libtool
+ALLOWRELEASE = @ALLOWRELEASE@
+
+# libtool compile/link/install
+LTCOMPILE = $(LIBTOOL) --mode=compile $(TCC)
+LTLINK = $(LIBTOOL) --mode=link $(TCC)
+LTINSTALL = $(LIBTOOL) --mode=install $(INSTALL)
+
+# nawk compatible awk.
+NAWK = @AWK@
+
+# You should not have to change anything below this line
+###############################################################################
+OPTS =
+OPTS += -DSQLITE_OMIT_CURSOR # Cursors do not work at this time
+TCC += -DSQLITE_OMIT_CURSOR
+
+# Object files for the SQLite library.
+#
+LIBOBJ = alter.lo analyze.lo attach.lo auth.lo btree.lo build.lo \
+ callback.lo complete.lo date.lo \
+ delete.lo expr.lo func.lo hash.lo insert.lo loadext.lo \
+ main.lo opcodes.lo os.lo os_unix.lo os_win.lo os_os2.lo \
+ pager.lo parse.lo pragma.lo prepare.lo printf.lo random.lo \
+ select.lo table.lo tokenize.lo trigger.lo update.lo \
+ util.lo vacuum.lo \
+ vdbe.lo vdbeapi.lo vdbeaux.lo vdbefifo.lo vdbemem.lo \
+ where.lo utf.lo legacy.lo vtab.lo
+
+# All of the source code files.
+#
+SRC = \
+ $(TOP)/src/alter.c \
+ $(TOP)/src/analyze.c \
+ $(TOP)/src/attach.c \
+ $(TOP)/src/auth.c \
+ $(TOP)/src/btree.c \
+ $(TOP)/src/btree.h \
+ $(TOP)/src/build.c \
+ $(TOP)/src/callback.c \
+ $(TOP)/src/complete.c \
+ $(TOP)/src/date.c \
+ $(TOP)/src/delete.c \
+ $(TOP)/src/expr.c \
+ $(TOP)/src/func.c \
+ $(TOP)/src/hash.c \
+ $(TOP)/src/hash.h \
+ $(TOP)/src/insert.c \
+ $(TOP)/src/legacy.c \
+ $(TOP)/src/loadext.c \
+ $(TOP)/src/main.c \
+ $(TOP)/src/os.c \
+ $(TOP)/src/os_unix.c \
+ $(TOP)/src/os_win.c \
+ $(TOP)/src/os_os2.c \
+ $(TOP)/src/pager.c \
+ $(TOP)/src/pager.h \
+ $(TOP)/src/parse.y \
+ $(TOP)/src/pragma.c \
+ $(TOP)/src/prepare.c \
+ $(TOP)/src/printf.c \
+ $(TOP)/src/random.c \
+ $(TOP)/src/select.c \
+ $(TOP)/src/shell.c \
+ $(TOP)/src/sqlite.h.in \
+ $(TOP)/src/sqliteInt.h \
+ $(TOP)/src/table.c \
+ $(TOP)/src/tclsqlite.c \
+ $(TOP)/src/tokenize.c \
+ $(TOP)/src/trigger.c \
+ $(TOP)/src/utf.c \
+ $(TOP)/src/update.c \
+ $(TOP)/src/util.c \
+ $(TOP)/src/vacuum.c \
+ $(TOP)/src/vdbe.c \
+ $(TOP)/src/vdbe.h \
+ $(TOP)/src/vdbeapi.c \
+ $(TOP)/src/vdbeaux.c \
+ $(TOP)/src/vdbefifo.c \
+ $(TOP)/src/vdbemem.c \
+ $(TOP)/src/vdbeInt.h \
+ $(TOP)/src/vtab.c \
+ $(TOP)/src/where.c
+
+# Source code for extensions
+#
+SRC += \
+ $(TOP)/ext/fts1/fts1.c \
+ $(TOP)/ext/fts1/fts1.h \
+ $(TOP)/ext/fts1/fts1_hash.c \
+ $(TOP)/ext/fts1/fts1_hash.h \
+ $(TOP)/ext/fts1/fts1_porter.c \
+ $(TOP)/ext/fts1/fts1_tokenizer.h \
+ $(TOP)/ext/fts1/fts1_tokenizer1.c
+
+
+# Source code to the test files.
+#
+TESTSRC = \
+ $(TOP)/src/btree.c \
+ $(TOP)/src/date.c \
+ $(TOP)/src/func.c \
+ $(TOP)/src/os.c \
+ $(TOP)/src/os_os2.c \
+ $(TOP)/src/os_unix.c \
+ $(TOP)/src/os_win.c \
+ $(TOP)/src/pager.c \
+ $(TOP)/src/pragma.c \
+ $(TOP)/src/printf.c \
+ $(TOP)/src/test1.c \
+ $(TOP)/src/test2.c \
+ $(TOP)/src/test3.c \
+ $(TOP)/src/test4.c \
+ $(TOP)/src/test5.c \
+ $(TOP)/src/test6.c \
+ $(TOP)/src/test7.c \
+ $(TOP)/src/test8.c \
+ $(TOP)/src/test_autoext.c \
+ $(TOP)/src/test_async.c \
+ $(TOP)/src/test_md5.c \
+ $(TOP)/src/test_schema.c \
+ $(TOP)/src/test_server.c \
+ $(TOP)/src/test_tclvar.c \
+ $(TOP)/src/utf.c \
+ $(TOP)/src/util.c \
+ $(TOP)/src/vdbe.c \
+ $(TOP)/src/vdbeaux.c \
+ $(TOP)/src/where.c
+
+# Header files used by all library source files.
+#
+HDR = \
+ sqlite3.h \
+ $(TOP)/src/btree.h \
+ $(TOP)/src/hash.h \
+ opcodes.h \
+ $(TOP)/src/os.h \
+ $(TOP)/src/os_common.h \
+ $(TOP)/src/sqlite3ext.h \
+ $(TOP)/src/sqliteInt.h \
+ $(TOP)/src/vdbe.h \
+ parse.h
+
+# Header files used by extensions
+#
+HDR += \
+ $(TOP)/ext/fts1/fts1.h \
+ $(TOP)/ext/fts1/fts1_hash.h \
+ $(TOP)/ext/fts1/fts1_tokenizer.h
+
+# Header files used by the VDBE submodule
+#
+VDBEHDR = \
+ $(HDR) \
+ $(TOP)/src/vdbeInt.h
+
+# This is the default Makefile target. The objects listed here
+# are what get built when you type just "make" with no arguments.
+#
+all: sqlite3.h libsqlite3.la sqlite3$(TEXE) $(HAVE_TCL:1=libtclsqlite3.la)
+
+Makefile: $(TOP)/Makefile.in
+ ./config.status
+
+# Generate the file "last_change" which contains the date of change
+# of the most recently modified source code file
+#
+last_change: $(SRC)
+ cat $(SRC) | grep '$$Id: ' | sort -k 5 | tail -1 \
+ | $(NAWK) '{print $$5,$$6}' >last_change
+
+libsqlite3.la: $(LIBOBJ)
+ $(LTLINK) -o libsqlite3.la $(LIBOBJ) $(LIBPTHREAD) \
+ ${ALLOWRELEASE} -rpath $(libdir) -version-info "8:6:8"
+
+libtclsqlite3.la: tclsqlite.lo libsqlite3.la
+ $(LTLINK) -o libtclsqlite3.la tclsqlite.lo \
+ $(LIBOBJ) @TCL_STUB_LIB_SPEC@ $(LIBPTHREAD) \
+ -rpath $(libdir)/sqlite \
+ -version-info "8:6:8"
+
+sqlite3$(TEXE): $(TOP)/src/shell.c libsqlite3.la sqlite3.h
+ $(LTLINK) $(READLINE_FLAGS) $(LIBPTHREAD) \
+ -o $@ $(TOP)/src/shell.c libsqlite3.la \
+ $(LIBREADLINE) $(TLIBS)
+
+# This target creates a directory named "tsrc" and fills it with
+# copies of all of the C source code and header files needed to
+# build on the target system. Some of the C source code and header
+# files are automatically generated. This target takes care of
+# all that automatic generation.
+#
+target_source: $(SRC) parse.c opcodes.c keywordhash.h $(VDBEHDR)
+ rm -rf tsrc
+ mkdir -p tsrc
+ cp $(SRC) $(VDBEHDR) tsrc
+ rm tsrc/sqlite.h.in tsrc/parse.y
+ cp parse.c opcodes.c keywordhash.h tsrc
+
+# Rules to build the LEMON compiler generator
+#
+lemon$(BEXE): $(TOP)/tool/lemon.c $(TOP)/tool/lempar.c
+ $(BCC) -o lemon $(TOP)/tool/lemon.c
+ cp $(TOP)/tool/lempar.c .
+
+
+# Rules to build individual files
+#
+alter.lo: $(TOP)/src/alter.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/alter.c
+
+analyze.lo: $(TOP)/src/analyze.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/analyze.c
+
+attach.lo: $(TOP)/src/attach.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/attach.c
+
+auth.lo: $(TOP)/src/auth.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/auth.c
+
+btree.lo: $(TOP)/src/btree.c $(HDR) $(TOP)/src/pager.h
+ $(LTCOMPILE) -c $(TOP)/src/btree.c
+
+build.lo: $(TOP)/src/build.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/build.c
+
+callback.lo: $(TOP)/src/callback.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/callback.c
+
+complete.lo: $(TOP)/src/complete.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/complete.c
+
+date.lo: $(TOP)/src/date.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/date.c
+
+delete.lo: $(TOP)/src/delete.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/delete.c
+
+expr.lo: $(TOP)/src/expr.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/expr.c
+
+func.lo: $(TOP)/src/func.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/func.c
+
+hash.lo: $(TOP)/src/hash.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/hash.c
+
+insert.lo: $(TOP)/src/insert.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/insert.c
+
+legacy.lo: $(TOP)/src/legacy.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/legacy.c
+
+loadext.lo: $(TOP)/src/loadext.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/loadext.c
+
+main.lo: $(TOP)/src/main.c $(HDR)
+ $(LTCOMPILE) $(TEMP_STORE) -c $(TOP)/src/main.c
+
+pager.lo: $(TOP)/src/pager.c $(HDR) $(TOP)/src/pager.h
+ $(LTCOMPILE) -c $(TOP)/src/pager.c
+
+opcodes.lo: opcodes.c
+ $(LTCOMPILE) -c opcodes.c
+
+opcodes.c: opcodes.h $(TOP)/mkopcodec.awk
+ sort -n -b -k 3 opcodes.h | $(NAWK) -f $(TOP)/mkopcodec.awk >opcodes.c
+
+opcodes.h: parse.h $(TOP)/src/vdbe.c $(TOP)/mkopcodeh.awk
+ cat parse.h $(TOP)/src/vdbe.c | $(NAWK) -f $(TOP)/mkopcodeh.awk >opcodes.h
+
+os.lo: $(TOP)/src/os.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/os.c
+
+os_unix.lo: $(TOP)/src/os_unix.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/os_unix.c
+
+os_win.lo: $(TOP)/src/os_win.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/os_win.c
+
+os_os2.lo: $(TOP)/src/os_os2.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/os_os2.c
+
+parse.lo: parse.c $(HDR)
+ $(LTCOMPILE) -c parse.c
+
+parse.h: parse.c
+
+parse.c: $(TOP)/src/parse.y lemon$(BEXE) $(TOP)/addopcodes.awk
+ cp $(TOP)/src/parse.y .
+ ./lemon $(OPTS) parse.y
+ mv parse.h parse.h.temp
+ awk -f $(TOP)/addopcodes.awk parse.h.temp >parse.h
+
+pragma.lo: $(TOP)/src/pragma.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/pragma.c
+
+prepare.lo: $(TOP)/src/prepare.c $(HDR)
+ $(LTCOMPILE) $(TEMP_STORE) -c $(TOP)/src/prepare.c
+
+printf.lo: $(TOP)/src/printf.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/printf.c
+
+random.lo: $(TOP)/src/random.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/random.c
+
+select.lo: $(TOP)/src/select.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/select.c
+
+sqlite3.h: $(TOP)/src/sqlite.h.in
+ sed -e s/--VERS--/$(RELEASE)/ $(TOP)/src/sqlite.h.in | \
+ sed -e s/--VERSION-NUMBER--/$(VERSION_NUMBER)/ >sqlite3.h
+
+table.lo: $(TOP)/src/table.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/table.c
+
+tclsqlite.lo: $(TOP)/src/tclsqlite.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/tclsqlite.c
+
+tokenize.lo: $(TOP)/src/tokenize.c keywordhash.h $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/tokenize.c
+
+keywordhash.h: $(TOP)/tool/mkkeywordhash.c
+ $(BCC) -o mkkeywordhash$(BEXE) $(OPTS) $(TOP)/tool/mkkeywordhash.c
+ ./mkkeywordhash$(BEXE) >keywordhash.h
+
+trigger.lo: $(TOP)/src/trigger.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/trigger.c
+
+update.lo: $(TOP)/src/update.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/update.c
+
+utf.lo: $(TOP)/src/utf.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/utf.c
+
+util.lo: $(TOP)/src/util.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/util.c
+
+vacuum.lo: $(TOP)/src/vacuum.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/vacuum.c
+
+vdbe.lo: $(TOP)/src/vdbe.c $(VDBEHDR)
+ $(LTCOMPILE) -c $(TOP)/src/vdbe.c
+
+vdbeapi.lo: $(TOP)/src/vdbeapi.c $(VDBEHDR)
+ $(LTCOMPILE) -c $(TOP)/src/vdbeapi.c
+
+vdbeaux.lo: $(TOP)/src/vdbeaux.c $(VDBEHDR)
+ $(LTCOMPILE) -c $(TOP)/src/vdbeaux.c
+
+vdbefifo.lo: $(TOP)/src/vdbefifo.c $(VDBEHDR)
+ $(LTCOMPILE) -c $(TOP)/src/vdbefifo.c
+
+vdbemem.lo: $(TOP)/src/vdbemem.c $(VDBEHDR)
+ $(LTCOMPILE) -c $(TOP)/src/vdbemem.c
+
+vtab.lo: $(TOP)/src/vtab.c $(VDBEHDR)
+ $(LTCOMPILE) -c $(TOP)/src/vtab.c
+
+where.lo: $(TOP)/src/where.c $(HDR)
+ $(LTCOMPILE) -c $(TOP)/src/where.c
+
+tclsqlite-shell.lo: $(TOP)/src/tclsqlite.c $(HDR)
+ $(LTCOMPILE) -DTCLSH=1 -o $@ -c $(TOP)/src/tclsqlite.c
+
+tclsqlite-stubs.lo: $(TOP)/src/tclsqlite.c $(HDR)
+ $(LTCOMPILE) -DTCL_USE_STUBS=1 -o $@ -c $(TOP)/src/tclsqlite.c
+
+tclsqlite3: tclsqlite-shell.lo libsqlite3.la
+ $(LTLINK) -o tclsqlite3 tclsqlite-shell.lo \
+ libsqlite3.la $(LIBTCL)
+
+testfixture$(TEXE): $(TOP)/src/tclsqlite.c libsqlite3.la $(TESTSRC)
+ $(LTLINK) -DTCLSH=1 -DSQLITE_TEST=1 -DSQLITE_CRASH_TEST=1 \
+ -DSQLITE_NO_SYNC=1 $(TEMP_STORE) \
+ -o testfixture $(TESTSRC) $(TOP)/src/tclsqlite.c \
+ libsqlite3.la $(LIBTCL)
+
+
+fulltest: testfixture$(TEXE) sqlite3$(TEXE)
+ ./testfixture $(TOP)/test/all.test
+
+test: testfixture$(TEXE) sqlite3$(TEXE)
+ ./testfixture $(TOP)/test/quick.test
+
+sqlite3_analyzer$(TEXE): $(TOP)/src/tclsqlite.c libtclsqlite3.la \
+ $(TESTSRC) $(TOP)/tool/spaceanal.tcl
+ sed \
+ -e '/^#/d' \
+ -e 's,\\,\\\\,g' \
+ -e 's,",\\",g' \
+ -e 's,^,",' \
+ -e 's,$$,\\n",' \
+ $(TOP)/tool/spaceanal.tcl >spaceanal_tcl.h
+ $(LTLINK) -DTCLSH=2 -DSQLITE_TEST=1 $(TEMP_STORE)\
+ -o sqlite3_analyzer$(TEXE) $(TESTSRC) $(TOP)/src/tclsqlite.c \
+ libtclsqlite3.la $(LIBTCL)
+
+# Rules used to build documentation
+#
+arch.html: $(TOP)/www/arch.tcl
+ tclsh $(TOP)/www/arch.tcl >arch.html
+
+arch2.gif: $(TOP)/www/arch2.gif
+ cp $(TOP)/www/arch2.gif .
+
+autoinc.html: $(TOP)/www/autoinc.tcl
+ tclsh $(TOP)/www/autoinc.tcl >autoinc.html
+
+c_interface.html: $(TOP)/www/c_interface.tcl
+ tclsh $(TOP)/www/c_interface.tcl >c_interface.html
+
+capi3.html: $(TOP)/www/capi3.tcl
+ tclsh $(TOP)/www/capi3.tcl >capi3.html
+
+capi3ref.html: $(TOP)/www/capi3ref.tcl
+ tclsh $(TOP)/www/capi3ref.tcl >capi3ref.html
+
+changes.html: $(TOP)/www/changes.tcl
+ tclsh $(TOP)/www/changes.tcl >changes.html
+
+compile.html: $(TOP)/www/compile.tcl
+ tclsh $(TOP)/www/compile.tcl >compile.html
+
+copyright.html: $(TOP)/www/copyright.tcl
+ tclsh $(TOP)/www/copyright.tcl >copyright.html
+
+copyright-release.html: $(TOP)/www/copyright-release.html
+ cp $(TOP)/www/copyright-release.html .
+
+copyright-release.pdf: $(TOP)/www/copyright-release.pdf
+ cp $(TOP)/www/copyright-release.pdf .
+
+common.tcl: $(TOP)/www/common.tcl
+ cp $(TOP)/www/common.tcl .
+
+conflict.html: $(TOP)/www/conflict.tcl
+ tclsh $(TOP)/www/conflict.tcl >conflict.html
+
+datatypes.html: $(TOP)/www/datatypes.tcl
+ tclsh $(TOP)/www/datatypes.tcl >datatypes.html
+
+datatype3.html: $(TOP)/www/datatype3.tcl
+ tclsh $(TOP)/www/datatype3.tcl >datatype3.html
+
+docs.html: $(TOP)/www/docs.tcl
+ tclsh $(TOP)/www/docs.tcl >docs.html
+
+download.html: $(TOP)/www/download.tcl
+ mkdir -p doc
+ tclsh $(TOP)/www/download.tcl >download.html
+
+faq.html: $(TOP)/www/faq.tcl
+ tclsh $(TOP)/www/faq.tcl >faq.html
+
+fileformat.html: $(TOP)/www/fileformat.tcl
+ tclsh $(TOP)/www/fileformat.tcl >fileformat.html
+
+formatchng.html: $(TOP)/www/formatchng.tcl
+ tclsh $(TOP)/www/formatchng.tcl >formatchng.html
+
+index.html: $(TOP)/www/index.tcl last_change
+ tclsh $(TOP)/www/index.tcl >index.html
+
+lang.html: $(TOP)/www/lang.tcl
+ tclsh $(TOP)/www/lang.tcl >lang.html
+
+pragma.html: $(TOP)/www/pragma.tcl
+ tclsh $(TOP)/www/pragma.tcl >pragma.html
+
+lockingv3.html: $(TOP)/www/lockingv3.tcl
+ tclsh $(TOP)/www/lockingv3.tcl >lockingv3.html
+
+oldnews.html: $(TOP)/www/oldnews.tcl
+ tclsh $(TOP)/www/oldnews.tcl >oldnews.html
+
+omitted.html: $(TOP)/www/omitted.tcl
+ tclsh $(TOP)/www/omitted.tcl >omitted.html
+
+opcode.html: $(TOP)/www/opcode.tcl $(TOP)/src/vdbe.c
+ tclsh $(TOP)/www/opcode.tcl $(TOP)/src/vdbe.c >opcode.html
+
+mingw.html: $(TOP)/www/mingw.tcl
+ tclsh $(TOP)/www/mingw.tcl >mingw.html
+
+nulls.html: $(TOP)/www/nulls.tcl
+ tclsh $(TOP)/www/nulls.tcl >nulls.html
+
+quickstart.html: $(TOP)/www/quickstart.tcl
+ tclsh $(TOP)/www/quickstart.tcl >quickstart.html
+
+speed.html: $(TOP)/www/speed.tcl
+ tclsh $(TOP)/www/speed.tcl >speed.html
+
+sqlite.gif: $(TOP)/art/SQLite.gif
+ cp $(TOP)/art/SQLite.gif sqlite.gif
+
+sqlite.html: $(TOP)/www/sqlite.tcl
+ tclsh $(TOP)/www/sqlite.tcl >sqlite.html
+
+support.html: $(TOP)/www/support.tcl
+ tclsh $(TOP)/www/support.tcl >support.html
+
+tclsqlite.html: $(TOP)/www/tclsqlite.tcl
+ tclsh $(TOP)/www/tclsqlite.tcl >tclsqlite.html
+
+vdbe.html: $(TOP)/www/vdbe.tcl
+ tclsh $(TOP)/www/vdbe.tcl >vdbe.html
+
+version3.html: $(TOP)/www/version3.tcl
+ tclsh $(TOP)/www/version3.tcl >version3.html
+
+
+# Files to be published on the website.
+#
+DOC = \
+ arch.html \
+ arch2.gif \
+ autoinc.html \
+ c_interface.html \
+ capi3.html \
+ capi3ref.html \
+ changes.html \
+ compile.html \
+ copyright.html \
+ copyright-release.html \
+ copyright-release.pdf \
+ conflict.html \
+ datatypes.html \
+ datatype3.html \
+ docs.html \
+ download.html \
+ faq.html \
+ fileformat.html \
+ formatchng.html \
+ index.html \
+ lang.html \
+ lockingv3.html \
+ mingw.html \
+ nulls.html \
+ oldnews.html \
+ omitted.html \
+ opcode.html \
+ pragma.html \
+ quickstart.html \
+ speed.html \
+ sqlite.gif \
+ sqlite.html \
+ support.html \
+ tclsqlite.html \
+ vdbe.html \
+ version3.html
+
+doc: common.tcl $(DOC)
+ mkdir -p doc
+ mv $(DOC) doc
+
+install: sqlite3 libsqlite3.la sqlite3.h ${HAVE_TCL:1=tcl_install}
+ $(INSTALL) -d $(DESTDIR)$(libdir)
+ $(LTINSTALL) libsqlite3.la $(DESTDIR)$(libdir)
+ $(INSTALL) -d $(DESTDIR)$(exec_prefix)/bin
+ $(LTINSTALL) sqlite3 $(DESTDIR)$(exec_prefix)/bin
+ $(INSTALL) -d $(DESTDIR)$(prefix)/include
+ $(INSTALL) -m 0644 sqlite3.h $(DESTDIR)$(prefix)/include
+ $(INSTALL) -d $(DESTDIR)$(libdir)/pkgconfig;
+ $(INSTALL) -m 0644 sqlite3.pc $(DESTDIR)$(libdir)/pkgconfig;
+
+tcl_install: libtclsqlite3.la
+ tclsh $(TOP)/tclinstaller.tcl $(VERSION)
+
+clean:
+ rm -f *.lo *.la *.o sqlite3$(TEXE) libsqlite3.la
+ rm -f sqlite3.h opcodes.*
+ rm -rf .libs .deps
+ rm -f lemon$(BEXE) lempar.c parse.* sqlite*.tar.gz
+ rm -f mkkeywordhash$(BEXE) keywordhash.h
+ rm -f $(PUBLISH)
+ rm -f *.da *.bb *.bbg gmon.out
+ rm -f testfixture$(TEXE) test.db
+ rm -rf doc
+ rm -f common.tcl
+ rm -f sqlite3.dll sqlite3.lib sqlite3.def
+
+distclean: clean
+ rm -f config.log config.status libtool Makefile config.h
+
+#
+# Windows section
+#
+dll: sqlite3.dll
+
+REAL_LIBOBJ = $(LIBOBJ:%.lo=.libs/%.o)
+
+$(REAL_LIBOBJ): $(LIBOBJ)
+
+sqlite3.def: $(REAL_LIBOBJ)
+ echo 'EXPORTS' >sqlite3.def
+ nm $(REAL_LIBOBJ) | grep ' T ' | grep ' _sqlite3_' \
+ | sed 's/^.* _//' >>sqlite3.def
+
+sqlite3.dll: $(REAL_LIBOBJ) sqlite3.def
+ $(TCC) -shared -o sqlite3.dll sqlite3.def \
+ -Wl,"--strip-all" $(REAL_LIBOBJ)
Added: freeswitch/trunk/libs/sqlite/Makefile.linux-gcc
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/Makefile.linux-gcc Tue Dec 19 15:11:50 2006
@@ -0,0 +1,130 @@
+#!/usr/make
+#
+# Makefile for SQLITE
+#
+# This is a template makefile for SQLite. Most people prefer to
+# use the autoconf generated "configure" script to generate the
+# makefile automatically. But that does not work for everybody
+# and in every situation. If you are having problems with the
+# "configure" script, you might want to try this makefile as an
+# alternative. Create a copy of this file, edit the parameters
+# below and type "make".
+#
+
+#### The toplevel directory of the source tree. This is the directory
+# that contains this "Makefile.in" and the "configure.in" script.
+#
+TOP = ../sqlite
+
+#### C Compiler and options for use in building executables that
+# will run on the platform that is doing the build.
+#
+BCC = gcc -g -O2
+#BCC = /opt/ancic/bin/c89 -0
+
+#### If the target operating system supports the "usleep()" system
+# call, then define the HAVE_USLEEP macro for all C modules.
+#
+#USLEEP =
+USLEEP = -DHAVE_USLEEP=1
+
+#### If you want the SQLite library to be safe for use within a
+# multi-threaded program, then define the following macro
+# appropriately:
+#
+#THREADSAFE = -DTHREADSAFE=1
+THREADSAFE = -DTHREADSAFE=0
+
+#### Specify any extra linker options needed to make the library
+# thread safe
+#
+#THREADLIB = -lpthread
+THREADLIB =
+
+#### Specify any extra libraries needed to access required functions.
+#
+#TLIBS = -lrt # fdatasync on Solaris 8
+TLIBS =
+
+#### Leave SQLITE_DEBUG undefined for maximum speed. Use SQLITE_DEBUG=1
+# to check for memory leaks. Use SQLITE_DEBUG=2 to print a log of all
+# malloc()s and free()s in order to track down memory leaks.
+#
+# SQLite uses some expensive assert() statements in the inner loop.
+# You can make the library go almost twice as fast if you compile
+# with -DNDEBUG=1
+#
+#OPTS = -DSQLITE_DEBUG=2
+#OPTS = -DSQLITE_DEBUG=1
+#OPTS =
+OPTS = -DNDEBUG=1
+OPTS += -DHAVE_FDATASYNC=1
+
+#### The suffix to add to executable files. ".exe" for windows.
+# Nothing for unix.
+#
+#EXE = .exe
+EXE =
+
+#### C Compiler and options for use in building executables that
+# will run on the target platform. This is usually the same
+# as BCC, unless you are cross-compiling.
+#
+TCC = gcc -O6
+#TCC = gcc -g -O0 -Wall
+#TCC = gcc -g -O0 -Wall -fprofile-arcs -ftest-coverage
+#TCC = /opt/mingw/bin/i386-mingw32-gcc -O6
+#TCC = /opt/ansic/bin/c89 -O +z -Wl,-a,archive
+
+#### Tools used to build a static library.
+#
+AR = ar cr
+#AR = /opt/mingw/bin/i386-mingw32-ar cr
+RANLIB = ranlib
+#RANLIB = /opt/mingw/bin/i386-mingw32-ranlib
+
+MKSHLIB = gcc -shared
+SO = so
+SHPREFIX = lib
+# SO = dll
+# SHPREFIX =
+
+#### Extra compiler options needed for programs that use the TCL library.
+#
+#TCL_FLAGS =
+#TCL_FLAGS = -DSTATIC_BUILD=1
+TCL_FLAGS = -I/home/drh/tcltk/8.4linux
+#TCL_FLAGS = -I/home/drh/tcltk/8.4win -DSTATIC_BUILD=1
+#TCL_FLAGS = -I/home/drh/tcltk/8.3hpux
+
+#### Linker options needed to link against the TCL library.
+#
+#LIBTCL = -ltcl -lm -ldl
+LIBTCL = /home/drh/tcltk/8.4linux/libtcl8.4g.a -lm -ldl
+#LIBTCL = /home/drh/tcltk/8.4win/libtcl84s.a -lmsvcrt
+#LIBTCL = /home/drh/tcltk/8.3hpux/libtcl8.3.a -ldld -lm -lc
+
+#### Compiler options needed for programs that use the readline() library.
+#
+READLINE_FLAGS =
+#READLINE_FLAGS = -DHAVE_READLINE=1 -I/usr/include/readline
+
+#### Linker options that programs using the readline() library must link against.
+#
+LIBREADLINE =
+#LIBREADLINE = -static -lreadline -ltermcap
+
+#### Should the database engine assume text is coded as UTF-8 or iso8859?
+#
+# ENCODING = UTF8
+ENCODING = ISO8859
+
+
+#### Which "awk" program provides nawk compatibilty
+#
+# NAWK = nawk
+NAWK = awk
+
+# You should not have to change anything below this line
+###############################################################################
+include $(TOP)/main.mk
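
A minimal sketch of using this template makefile (file names are
illustrative; the parameters above would be edited to match the system
first):

    cp Makefile.linux-gcc Makefile   ;# work on a copy of the template
    vi Makefile                      ;# adjust TOP, TCC, OPTS, TCL_FLAGS, ...
    make                             ;# the actual rules come from $(TOP)/main.mk
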
Added: freeswitch/trunk/libs/sqlite/README
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/README Tue Dec 19 15:11:50 2006
@@ -0,0 +1,35 @@
+This directory contains source code to
+
+ SQLite: An Embeddable SQL Database Engine
+
+To compile the project, first create a directory in which to place
+the build products. It is recommended, but not required, that the
+build directory be separate from the source directory. Cd into the
+build directory and then from the build directory run the configure
+script found at the root of the source tree. Then run "make".
+
+For example:
+
+ tar xzf sqlite.tar.gz ;# Unpack the source tree into "sqlite"
+ mkdir bld ;# Build will occur in a sibling directory
+ cd bld ;# Change to the build directory
+ ../sqlite/configure ;# Run the configure script
+ make ;# Run the makefile.
+ make install ;# (Optional) Install the build products
+
+The configure script uses autoconf 2.50 and libtool. If the configure
+script does not work out for you, there is a generic makefile named
+"Makefile.linux-gcc" in the top directory of the source tree that you
+can copy and edit to suit your needs. Comments on the generic makefile
+show what changes are needed.
+
+The Linux binaries on the website are created using the generic makefile,
+not the configure script. The configure script is unmaintained. (You
+can volunteer to take over maintenance of the configure script, if you want!)
+The Windows binaries on the website are created using MinGW32 configured
+as a cross-compiler running under Linux. For details, see the ./publish.sh
+script at the top-level of the source tree.
+
+Contacts:
+
+ http://www.sqlite.org/
Added: freeswitch/trunk/libs/sqlite/VERSION
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/VERSION Tue Dec 19 15:11:50 2006
@@ -0,0 +1 @@
+3.3.8
Added: freeswitch/trunk/libs/sqlite/aclocal.m4
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/aclocal.m4 Tue Dec 19 15:11:50 2006
@@ -0,0 +1,5913 @@
+# generated automatically by aclocal 1.8.2 -*- Autoconf -*-
+
+# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004
+# Free Software Foundation, Inc.
+# This file is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+# libtool.m4 - Configure libtool for the host system. -*-Autoconf-*-
+
+# serial 47 AC_PROG_LIBTOOL
+# Debian $Rev: 192 $
+
+
+# AC_PROVIDE_IFELSE(MACRO-NAME, IF-PROVIDED, IF-NOT-PROVIDED)
+# -----------------------------------------------------------
+# If this macro is not defined by Autoconf, define it here.
+m4_ifdef([AC_PROVIDE_IFELSE],
+ [],
+ [m4_define([AC_PROVIDE_IFELSE],
+ [m4_ifdef([AC_PROVIDE_$1],
+ [$2], [$3])])])
+
+
+# AC_PROG_LIBTOOL
+# ---------------
+AC_DEFUN([AC_PROG_LIBTOOL],
+[AC_REQUIRE([_AC_PROG_LIBTOOL])dnl
+dnl If AC_PROG_CXX has already been expanded, run AC_LIBTOOL_CXX
+dnl immediately, otherwise, hook it in at the end of AC_PROG_CXX.
+ AC_PROVIDE_IFELSE([AC_PROG_CXX],
+ [AC_LIBTOOL_CXX],
+ [define([AC_PROG_CXX], defn([AC_PROG_CXX])[AC_LIBTOOL_CXX
+ ])])
+dnl And a similar setup for Fortran 77 support
+ AC_PROVIDE_IFELSE([AC_PROG_F77],
+ [AC_LIBTOOL_F77],
+ [define([AC_PROG_F77], defn([AC_PROG_F77])[AC_LIBTOOL_F77
+])])
+
+dnl Quote A][M_PROG_GCJ so that aclocal doesn't bring it in needlessly.
+dnl If either AC_PROG_GCJ or A][M_PROG_GCJ have already been expanded, run
+dnl AC_LIBTOOL_GCJ immediately, otherwise, hook it in at the end of both.
+ AC_PROVIDE_IFELSE([AC_PROG_GCJ],
+ [AC_LIBTOOL_GCJ],
+ [AC_PROVIDE_IFELSE([A][M_PROG_GCJ],
+ [AC_LIBTOOL_GCJ],
+ [AC_PROVIDE_IFELSE([LT_AC_PROG_GCJ],
+ [AC_LIBTOOL_GCJ],
+ [ifdef([AC_PROG_GCJ],
+ [define([AC_PROG_GCJ], defn([AC_PROG_GCJ])[AC_LIBTOOL_GCJ])])
+ ifdef([A][M_PROG_GCJ],
+ [define([A][M_PROG_GCJ], defn([A][M_PROG_GCJ])[AC_LIBTOOL_GCJ])])
+ ifdef([LT_AC_PROG_GCJ],
+ [define([LT_AC_PROG_GCJ],
+ defn([LT_AC_PROG_GCJ])[AC_LIBTOOL_GCJ])])])])
+])])# AC_PROG_LIBTOOL
+
+
+# _AC_PROG_LIBTOOL
+# ----------------
+AC_DEFUN([_AC_PROG_LIBTOOL],
+[AC_REQUIRE([AC_LIBTOOL_SETUP])dnl
+AC_BEFORE([$0],[AC_LIBTOOL_CXX])dnl
+AC_BEFORE([$0],[AC_LIBTOOL_F77])dnl
+AC_BEFORE([$0],[AC_LIBTOOL_GCJ])dnl
+
+# This can be used to rebuild libtool when needed
+LIBTOOL_DEPS="$ac_aux_dir/ltmain.sh"
+
+# Always use our own libtool.
+LIBTOOL='$(SHELL) $(top_builddir)/libtool'
+AC_SUBST(LIBTOOL)dnl
+
+# Prevent multiple expansion
+define([AC_PROG_LIBTOOL], [])
+])# _AC_PROG_LIBTOOL
+
+
+# AC_LIBTOOL_SETUP
+# ----------------
+AC_DEFUN([AC_LIBTOOL_SETUP],
+[AC_PREREQ(2.50)dnl
+AC_REQUIRE([AC_ENABLE_SHARED])dnl
+AC_REQUIRE([AC_ENABLE_STATIC])dnl
+AC_REQUIRE([AC_ENABLE_FAST_INSTALL])dnl
+AC_REQUIRE([AC_CANONICAL_HOST])dnl
+AC_REQUIRE([AC_CANONICAL_BUILD])dnl
+AC_REQUIRE([AC_PROG_CC])dnl
+AC_REQUIRE([AC_PROG_LD])dnl
+AC_REQUIRE([AC_PROG_LD_RELOAD_FLAG])dnl
+AC_REQUIRE([AC_PROG_NM])dnl
+
+AC_REQUIRE([AC_PROG_LN_S])dnl
+AC_REQUIRE([AC_DEPLIBS_CHECK_METHOD])dnl
+# Autoconf 2.13's AC_OBJEXT and AC_EXEEXT macros only work for C compilers!
+AC_REQUIRE([AC_OBJEXT])dnl
+AC_REQUIRE([AC_EXEEXT])dnl
+dnl
+
+AC_LIBTOOL_SYS_MAX_CMD_LEN
+AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE
+AC_LIBTOOL_OBJDIR
+
+AC_REQUIRE([_LT_AC_SYS_COMPILER])dnl
+_LT_AC_PROG_ECHO_BACKSLASH
+
+case $host_os in
+aix3*)
+ # AIX sometimes has problems with the GCC collect2 program. For some
+ # reason, if we set the COLLECT_NAMES environment variable, the problems
+ # vanish in a puff of smoke.
+ if test "X${COLLECT_NAMES+set}" != Xset; then
+ COLLECT_NAMES=
+ export COLLECT_NAMES
+ fi
+ ;;
+esac
+
+# Sed substitution that helps us do robust quoting. It backslashifies
+# metacharacters that are still active within double-quoted strings.
+Xsed='sed -e s/^X//'
+[sed_quote_subst='s/\([\\"\\`$\\\\]\)/\\\1/g']
+
+# Same as above, but do not quote variable references.
+[double_quote_subst='s/\([\\"\\`\\\\]\)/\\\1/g']
+
+# Sed substitution to delay expansion of an escaped shell variable in a
+# double_quote_subst'ed string.
+delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g'
+
+# Sed substitution to avoid accidental globbing in evaled expressions
+no_glob_subst='s/\*/\\\*/g'
+
+# Constants:
+rm="rm -f"
+
+# Global variables:
+default_ofile=libtool
+can_build_shared=yes
+
+# All known linkers require a `.a' archive for static linking (except M$VC,
+# which needs '.lib').
+libext=a
+ltmain="$ac_aux_dir/ltmain.sh"
+ofile="$default_ofile"
+with_gnu_ld="$lt_cv_prog_gnu_ld"
+
+AC_CHECK_TOOL(AR, ar, AC_CHECK_TOOL(AR, emxomfar, false))
+AC_CHECK_TOOL(RANLIB, ranlib, :)
+AC_CHECK_TOOL(STRIP, strip, :)
+
+old_CC="$CC"
+old_CFLAGS="$CFLAGS"
+
+# Set sane defaults for various variables
+test -z "$AR" && AR=ar
+test -z "$AR_FLAGS" && AR_FLAGS=cru
+test -z "$AS" && AS=as
+test -z "$CC" && CC=cc
+test -z "$LTCC" && LTCC=$CC
+test -z "$DLLTOOL" && DLLTOOL=dlltool
+test -z "$LD" && LD=ld
+test -z "$LN_S" && LN_S="ln -s"
+test -z "$MAGIC_CMD" && MAGIC_CMD=file
+test -z "$NM" && NM=nm
+test -z "$SED" && SED=sed
+test -z "$OBJDUMP" && OBJDUMP=objdump
+test -z "$RANLIB" && RANLIB=:
+test -z "$STRIP" && STRIP=:
+test -z "$ac_objext" && ac_objext=o
+
+# Determine commands to create old-style static archives.
+old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs'
+old_postinstall_cmds='chmod 644 $oldlib'
+old_postuninstall_cmds=
+
+if test -n "$RANLIB"; then
+ case $host_os in
+ openbsd*)
+ old_postinstall_cmds="\$RANLIB -t \$oldlib~$old_postinstall_cmds"
+ ;;
+ *)
+ old_postinstall_cmds="\$RANLIB \$oldlib~$old_postinstall_cmds"
+ ;;
+ esac
+ old_archive_cmds="$old_archive_cmds~\$RANLIB \$oldlib"
+fi
+
+# Only perform the check for file, if the check method requires it
+case $deplibs_check_method in
+file_magic*)
+ if test "$file_magic_cmd" = '$MAGIC_CMD'; then
+ AC_PATH_MAGIC
+ fi
+ ;;
+esac
+
+AC_PROVIDE_IFELSE([AC_LIBTOOL_DLOPEN], enable_dlopen=yes, enable_dlopen=no)
+AC_PROVIDE_IFELSE([AC_LIBTOOL_WIN32_DLL],
+enable_win32_dll=yes, enable_win32_dll=no)
+
+AC_ARG_ENABLE([libtool-lock],
+ [AC_HELP_STRING([--disable-libtool-lock],
+ [avoid locking (might break parallel builds)])])
+test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes
+
+AC_ARG_WITH([pic],
+ [AC_HELP_STRING([--with-pic],
+ [try to use only PIC/non-PIC objects @<:@default=use both@:>@])],
+ [pic_mode="$withval"],
+ [pic_mode=default])
+test -z "$pic_mode" && pic_mode=default
+
+# Use C for the default configuration in the libtool script
+tagname=
+AC_LIBTOOL_LANG_C_CONFIG
+_LT_AC_TAGCONFIG
+])# AC_LIBTOOL_SETUP
+
+
+# _LT_AC_SYS_COMPILER
+# -------------------
+AC_DEFUN([_LT_AC_SYS_COMPILER],
+[AC_REQUIRE([AC_PROG_CC])dnl
+
+# If no C compiler was specified, use CC.
+LTCC=${LTCC-"$CC"}
+
+# Allow CC to be a program name with arguments.
+compiler=$CC
+])# _LT_AC_SYS_COMPILER
+
+
+# _LT_AC_SYS_LIBPATH_AIX
+# ----------------------
+# Links a minimal program and checks the executable
+# for the system default hardcoded library path. In most cases,
+# this is /usr/lib:/lib, but when the MPI compilers are used
+# the location of the communication and MPI libs are included too.
+# If we don't find anything, use the default library path according
+# to the aix ld manual.
+AC_DEFUN([_LT_AC_SYS_LIBPATH_AIX],
+[AC_LINK_IFELSE(AC_LANG_PROGRAM,[
+aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`
+# Check for a 64-bit object if we didn't find anything.
+if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`; fi],[])
+if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+])# _LT_AC_SYS_LIBPATH_AIX
+
+
+# _LT_AC_SHELL_INIT(ARG)
+# ----------------------
+AC_DEFUN([_LT_AC_SHELL_INIT],
+[ifdef([AC_DIVERSION_NOTICE],
+ [AC_DIVERT_PUSH(AC_DIVERSION_NOTICE)],
+ [AC_DIVERT_PUSH(NOTICE)])
+$1
+AC_DIVERT_POP
+])# _LT_AC_SHELL_INIT
+
+
+# _LT_AC_PROG_ECHO_BACKSLASH
+# --------------------------
+# Add some code to the start of the generated configure script which
+# will find an echo command which doesn't interpret backslashes.
+AC_DEFUN([_LT_AC_PROG_ECHO_BACKSLASH],
+[_LT_AC_SHELL_INIT([
+# Check that we are running under the correct shell.
+SHELL=${CONFIG_SHELL-/bin/sh}
+
+case X$ECHO in
+X*--fallback-echo)
+ # Remove one level of quotation (which was required for Make).
+ ECHO=`echo "$ECHO" | sed 's,\\\\\[$]\\[$]0,'[$]0','`
+ ;;
+esac
+
+echo=${ECHO-echo}
+if test "X[$]1" = X--no-reexec; then
+ # Discard the --no-reexec flag, and continue.
+ shift
+elif test "X[$]1" = X--fallback-echo; then
+ # Avoid inline document here, it may be left over
+ :
+elif test "X`($echo '\t') 2>/dev/null`" = 'X\t' ; then
+ # Yippee, $echo works!
+ :
+else
+ # Restart under the correct shell.
+ exec $SHELL "[$]0" --no-reexec ${1+"[$]@"}
+fi
+
+if test "X[$]1" = X--fallback-echo; then
+ # used as fallback echo
+ shift
+ cat <<EOF
+[$]*
+EOF
+ exit 0
+fi
+
+# The HP-UX ksh and POSIX shell print the target directory to stdout
+# if CDPATH is set.
+if test "X${CDPATH+set}" = Xset; then CDPATH=:; export CDPATH; fi
+
+if test -z "$ECHO"; then
+if test "X${echo_test_string+set}" != Xset; then
+# find a string as large as possible, as long as the shell can cope with it
+ for cmd in 'sed 50q "[$]0"' 'sed 20q "[$]0"' 'sed 10q "[$]0"' 'sed 2q "[$]0"' 'echo test'; do
+ # expected sizes: less than 2Kb, 1Kb, 512 bytes, 16 bytes, ...
+ if (echo_test_string="`eval $cmd`") 2>/dev/null &&
+ echo_test_string="`eval $cmd`" &&
+ (test "X$echo_test_string" = "X$echo_test_string") 2>/dev/null
+ then
+ break
+ fi
+ done
+fi
+
+if test "X`($echo '\t') 2>/dev/null`" = 'X\t' &&
+ echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ :
+else
+ # The Solaris, AIX, and Digital Unix default echo programs unquote
+ # backslashes. This makes it impossible to quote backslashes using
+ # echo "$something" | sed 's/\\/\\\\/g'
+ #
+ # So, first we look for a working echo in the user's PATH.
+
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+ for dir in $PATH /usr/ucb; do
+ IFS="$lt_save_ifs"
+ if (test -f $dir/echo || test -f $dir/echo$ac_exeext) &&
+ test "X`($dir/echo '\t') 2>/dev/null`" = 'X\t' &&
+ echo_testing_string=`($dir/echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ echo="$dir/echo"
+ break
+ fi
+ done
+ IFS="$lt_save_ifs"
+
+ if test "X$echo" = Xecho; then
+ # We didn't find a better echo, so look for alternatives.
+ if test "X`(print -r '\t') 2>/dev/null`" = 'X\t' &&
+ echo_testing_string=`(print -r "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ # This shell has a builtin print -r that does the trick.
+ echo='print -r'
+ elif (test -f /bin/ksh || test -f /bin/ksh$ac_exeext) &&
+ test "X$CONFIG_SHELL" != X/bin/ksh; then
+ # If we have ksh, try running configure again with it.
+ ORIGINAL_CONFIG_SHELL=${CONFIG_SHELL-/bin/sh}
+ export ORIGINAL_CONFIG_SHELL
+ CONFIG_SHELL=/bin/ksh
+ export CONFIG_SHELL
+ exec $CONFIG_SHELL "[$]0" --no-reexec ${1+"[$]@"}
+ else
+ # Try using printf.
+ echo='printf %s\n'
+ if test "X`($echo '\t') 2>/dev/null`" = 'X\t' &&
+ echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ # Cool, printf works
+ :
+ elif echo_testing_string=`($ORIGINAL_CONFIG_SHELL "[$]0" --fallback-echo '\t') 2>/dev/null` &&
+ test "X$echo_testing_string" = 'X\t' &&
+ echo_testing_string=`($ORIGINAL_CONFIG_SHELL "[$]0" --fallback-echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ CONFIG_SHELL=$ORIGINAL_CONFIG_SHELL
+ export CONFIG_SHELL
+ SHELL="$CONFIG_SHELL"
+ export SHELL
+ echo="$CONFIG_SHELL [$]0 --fallback-echo"
+ elif echo_testing_string=`($CONFIG_SHELL "[$]0" --fallback-echo '\t') 2>/dev/null` &&
+ test "X$echo_testing_string" = 'X\t' &&
+ echo_testing_string=`($CONFIG_SHELL "[$]0" --fallback-echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ echo="$CONFIG_SHELL [$]0 --fallback-echo"
+ else
+ # maybe with a smaller string...
+ prev=:
+
+ for cmd in 'echo test' 'sed 2q "[$]0"' 'sed 10q "[$]0"' 'sed 20q "[$]0"' 'sed 50q "[$]0"'; do
+ if (test "X$echo_test_string" = "X`eval $cmd`") 2>/dev/null
+ then
+ break
+ fi
+ prev="$cmd"
+ done
+
+ if test "$prev" != 'sed 50q "[$]0"'; then
+ echo_test_string=`eval $prev`
+ export echo_test_string
+ exec ${ORIGINAL_CONFIG_SHELL-${CONFIG_SHELL-/bin/sh}} "[$]0" ${1+"[$]@"}
+ else
+ # Oops. We lost completely, so just stick with echo.
+ echo=echo
+ fi
+ fi
+ fi
+ fi
+fi
+fi
+
+# Copy echo and quote the copy suitably for passing to libtool from
+# the Makefile, instead of quoting the original, which is used later.
+ECHO=$echo
+if test "X$ECHO" = "X$CONFIG_SHELL [$]0 --fallback-echo"; then
+ ECHO="$CONFIG_SHELL \\\$\[$]0 --fallback-echo"
+fi
+
+AC_SUBST(ECHO)
+])])# _LT_AC_PROG_ECHO_BACKSLASH
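+
+# Illustration (a sketch only, not part of the macro above): with an echo that
+# unquotes backslashes (see the Solaris/AIX note above), '\t' style sequences
+# are mangled, which is exactly what the probing above guards against:
+#
+#   if test "X`$echo 'x\ty'`" = 'Xx\ty'; then
+#     : # good: backslash sequences survive unmodified
+#   else
+#     : # bad: this echo interprets \t, so it cannot be used for quoting
+#   fi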
+
+
+# _LT_AC_LOCK
+# -----------
+AC_DEFUN([_LT_AC_LOCK],
+[AC_ARG_ENABLE([libtool-lock],
+ [AC_HELP_STRING([--disable-libtool-lock],
+ [avoid locking (might break parallel builds)])])
+test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes
+
+# Some flags need to be propagated to the compiler or linker for good
+# libtool support.
+case $host in
+ia64-*-hpux*)
+ # Find out which ABI we are using.
+ echo 'int i;' > conftest.$ac_ext
+ if AC_TRY_EVAL(ac_compile); then
+ case `/usr/bin/file conftest.$ac_objext` in
+ *ELF-32*)
+ HPUX_IA64_MODE="32"
+ ;;
+ *ELF-64*)
+ HPUX_IA64_MODE="64"
+ ;;
+ esac
+ fi
+ rm -rf conftest*
+ ;;
+*-*-irix6*)
+ # Find out which ABI we are using.
+ echo '[#]line __oline__ "configure"' > conftest.$ac_ext
+ if AC_TRY_EVAL(ac_compile); then
+ if test "$lt_cv_prog_gnu_ld" = yes; then
+ case `/usr/bin/file conftest.$ac_objext` in
+ *32-bit*)
+ LD="${LD-ld} -melf32bsmip"
+ ;;
+ *N32*)
+ LD="${LD-ld} -melf32bmipn32"
+ ;;
+ *64-bit*)
+ LD="${LD-ld} -melf64bmip"
+ ;;
+ esac
+ else
+ case `/usr/bin/file conftest.$ac_objext` in
+ *32-bit*)
+ LD="${LD-ld} -32"
+ ;;
+ *N32*)
+ LD="${LD-ld} -n32"
+ ;;
+ *64-bit*)
+ LD="${LD-ld} -64"
+ ;;
+ esac
+ fi
+ fi
+ rm -rf conftest*
+ ;;
+
+x86_64-*linux*|ppc*-*linux*|powerpc*-*linux*|s390*-*linux*|sparc*-*linux*)
+ # Find out which ABI we are using.
+ echo 'int i;' > conftest.$ac_ext
+ if AC_TRY_EVAL(ac_compile); then
+ case "`/usr/bin/file conftest.o`" in
+ *32-bit*)
+ case $host in
+ x86_64-*linux*)
+ LD="${LD-ld} -m elf_i386"
+ ;;
+ ppc64-*linux*|powerpc64-*linux*)
+ LD="${LD-ld} -m elf32ppclinux"
+ ;;
+ s390x-*linux*)
+ LD="${LD-ld} -m elf_s390"
+ ;;
+ sparc64-*linux*)
+ LD="${LD-ld} -m elf32_sparc"
+ ;;
+ esac
+ ;;
+ *64-bit*)
+ case $host in
+ x86_64-*linux*)
+ LD="${LD-ld} -m elf_x86_64"
+ ;;
+ ppc*-*linux*|powerpc*-*linux*)
+ LD="${LD-ld} -m elf64ppc"
+ ;;
+ s390*-*linux*)
+ LD="${LD-ld} -m elf64_s390"
+ ;;
+ sparc*-*linux*)
+ LD="${LD-ld} -m elf64_sparc"
+ ;;
+ esac
+ ;;
+ esac
+ fi
+ rm -rf conftest*
+ ;;
+
+*-*-sco3.2v5*)
+ # On SCO OpenServer 5, we need -belf to get full-featured binaries.
+ SAVE_CFLAGS="$CFLAGS"
+ CFLAGS="$CFLAGS -belf"
+ AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf,
+ [AC_LANG_PUSH(C)
+ AC_TRY_LINK([],[],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no])
+ AC_LANG_POP])
+ if test x"$lt_cv_cc_needs_belf" != x"yes"; then
+ # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf
+ CFLAGS="$SAVE_CFLAGS"
+ fi
+ ;;
+AC_PROVIDE_IFELSE([AC_LIBTOOL_WIN32_DLL],
+[*-*-cygwin* | *-*-mingw* | *-*-pw32*)
+ AC_CHECK_TOOL(DLLTOOL, dlltool, false)
+ AC_CHECK_TOOL(AS, as, false)
+ AC_CHECK_TOOL(OBJDUMP, objdump, false)
+ ;;
+ ])
+esac
+
+need_locks="$enable_libtool_lock"
+
+])# _LT_AC_LOCK
+
+
+# AC_LIBTOOL_COMPILER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS,
+# [OUTPUT-FILE], [ACTION-SUCCESS], [ACTION-FAILURE])
+# ----------------------------------------------------------------
+# Check whether the given compiler option works
+AC_DEFUN([AC_LIBTOOL_COMPILER_OPTION],
+[AC_REQUIRE([LT_AC_PROG_SED])
+AC_CACHE_CHECK([$1], [$2],
+ [$2=no
+ ifelse([$4], , [ac_outfile=conftest.$ac_objext], [ac_outfile=$4])
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+ lt_compiler_flag="$3"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ # The option is referenced via a variable to avoid confusing sed.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:__oline__: $lt_compile\"" >&AS_MESSAGE_LOG_FD)
+ (eval "$lt_compile" 2>conftest.err)
+ ac_status=$?
+ cat conftest.err >&AS_MESSAGE_LOG_FD
+ echo "$as_me:__oline__: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
+ if (exit $ac_status) && test -s "$ac_outfile"; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s conftest.err; then
+ $2=yes
+ fi
+ fi
+ $rm conftest*
+])
+
+if test x"[$]$2" = xyes; then
+ ifelse([$5], , :, [$5])
+else
+ ifelse([$6], , :, [$6])
+fi
+])# AC_LIBTOOL_COMPILER_OPTION
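+
+# Illustrative usage (a sketch, not part of this file; the flag, cache
+# variable and action below are hypothetical): a caller probes a single
+# compiler flag and reacts to the cached result, e.g.
+#
+#   AC_LIBTOOL_COMPILER_OPTION([if $compiler supports -fno-rtti],
+#     [lt_cv_prog_compiler_no_rtti], [-fno-rtti], [],
+#     [CFLAGS="$CFLAGS -fno-rtti"], [:])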
+
+
+# AC_LIBTOOL_LINKER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS,
+# [ACTION-SUCCESS], [ACTION-FAILURE])
+# ------------------------------------------------------------
+# Check whether the given linker option works
+AC_DEFUN([AC_LIBTOOL_LINKER_OPTION],
+[AC_CACHE_CHECK([$1], [$2],
+ [$2=no
+ save_LDFLAGS="$LDFLAGS"
+ LDFLAGS="$LDFLAGS $3"
+ printf "$lt_simple_link_test_code" > conftest.$ac_ext
+ if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test -s conftest.err; then
+ # Append any errors to the config.log.
+ cat conftest.err 1>&AS_MESSAGE_LOG_FD
+ else
+ $2=yes
+ fi
+ fi
+ $rm conftest*
+ LDFLAGS="$save_LDFLAGS"
+])
+
+if test x"[$]$2" = xyes; then
+ ifelse([$4], , :, [$4])
+else
+ ifelse([$5], , :, [$5])
+fi
+])# AC_LIBTOOL_LINKER_OPTION
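+
+# Illustrative usage (a sketch with hypothetical names): same pattern as
+# above, but the flag is exercised at link time through LDFLAGS, e.g.
+#
+#   AC_LIBTOOL_LINKER_OPTION([if the linker accepts -Wl,--as-needed],
+#     [lt_cv_ld_as_needed], [-Wl,--as-needed],
+#     [LDFLAGS="$LDFLAGS -Wl,--as-needed"], [:])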
+
+
+# AC_LIBTOOL_SYS_MAX_CMD_LEN
+# --------------------------
+AC_DEFUN([AC_LIBTOOL_SYS_MAX_CMD_LEN],
+[# find the maximum length of command line arguments
+AC_MSG_CHECKING([the maximum length of command line arguments])
+AC_CACHE_VAL([lt_cv_sys_max_cmd_len], [dnl
+ i=0
+ testring="ABCD"
+
+ case $build_os in
+ msdosdjgpp*)
+ # On DJGPP, this test can blow up pretty badly due to problems in libc
+ # (any single argument exceeding 2000 bytes causes a buffer overrun
+ # during glob expansion). Even if it were fixed, the result of this
+ # check would be larger than it should be.
+ lt_cv_sys_max_cmd_len=12288; # 12K is about right
+ ;;
+
+ gnu*)
+ # Under GNU Hurd, this test is not required because there is
+ # no limit to the length of command line arguments.
+ # Libtool will interpret -1 as no limit whatsoever
+ lt_cv_sys_max_cmd_len=-1;
+ ;;
+
+ cygwin* | mingw*)
+ # On Win9x/ME, this test blows up -- it succeeds, but takes
+ # about 5 minutes as the teststring grows exponentially.
+ # Worse, since 9x/ME are not pre-emptively multitasking,
+ # you end up with a "frozen" computer, even though with patience
+ # the test eventually succeeds (with a max line length of 256k).
+ # Instead, let's just punt: use the minimum linelength reported by
+ # all of the supported platforms: 8192 (on NT/2K/XP).
+ lt_cv_sys_max_cmd_len=8192;
+ ;;
+
+ amigaos*)
+ # On AmigaOS with pdksh, this test takes hours, literally.
+ # So we just punt and use a minimum line length of 8192.
+ lt_cv_sys_max_cmd_len=8192;
+ ;;
+
+ *)
+ # If test is not a shell built-in, we'll probably end up computing a
+ # maximum length that is only half of the actual maximum length, but
+ # we can't tell.
+ while (test "X"`$CONFIG_SHELL [$]0 --fallback-echo "X$testring" 2>/dev/null` \
+ = "XX$testring") >/dev/null 2>&1 &&
+ new_result=`expr "X$testring" : ".*" 2>&1` &&
+ lt_cv_sys_max_cmd_len=$new_result &&
+ test $i != 17 # 1/2 MB should be enough
+ do
+ i=`expr $i + 1`
+ testring=$testring$testring
+ done
+ testring=
+ # Add a significant safety factor because C++ compilers can tack on massive
+ # amounts of additional arguments before passing them to the linker.
+ # It appears as though 1/2 is a usable value.
+ lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2`
+ ;;
+ esac
+])
+if test -n "$lt_cv_sys_max_cmd_len" ; then
+ AC_MSG_RESULT($lt_cv_sys_max_cmd_len)
+else
+ AC_MSG_RESULT(none)
+fi
+])# AC_LIBTOOL_SYS_MAX_CMD_LEN
+
+
+# _LT_AC_CHECK_DLFCN
+# --------------------
+AC_DEFUN([_LT_AC_CHECK_DLFCN],
+[AC_CHECK_HEADERS(dlfcn.h)dnl
+])# _LT_AC_CHECK_DLFCN
+
+
+# _LT_AC_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE,
+# ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING)
+# ------------------------------------------------------------------
+AC_DEFUN([_LT_AC_TRY_DLOPEN_SELF],
+[AC_REQUIRE([_LT_AC_CHECK_DLFCN])dnl
+if test "$cross_compiling" = yes; then :
+ [$4]
+else
+ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+ lt_status=$lt_dlunknown
+ cat > conftest.$ac_ext <<EOF
+[#line __oline__ "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+# define LT_DLGLOBAL DL_GLOBAL
+# else
+# define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
+   find out that it does not work on some platforms. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+# define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+# ifdef DL_LAZY
+# define LT_DLLAZY_OR_NOW DL_LAZY
+# else
+# ifdef RTLD_NOW
+# define LT_DLLAZY_OR_NOW RTLD_NOW
+# else
+# ifdef DL_NOW
+# define LT_DLLAZY_OR_NOW DL_NOW
+# else
+# define LT_DLLAZY_OR_NOW 0
+# endif
+# endif
+# endif
+# endif
+#endif
+
+#ifdef __cplusplus
+extern "C" void exit (int);
+#endif
+
+void fnord() { int i=42;}
+int main ()
+{
+ void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+ int status = $lt_dlunknown;
+
+ if (self)
+ {
+ if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+ else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+ /* dlclose (self); */
+ }
+
+ exit (status);
+}]
+EOF
+ if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext} 2>/dev/null; then
+ (./conftest; exit; ) 2>/dev/null
+ lt_status=$?
+ case x$lt_status in
+ x$lt_dlno_uscore) $1 ;;
+ x$lt_dlneed_uscore) $2 ;;
+      x$lt_dlunknown|x*) $3 ;;
+ esac
+ else :
+ # compilation failed
+ $3
+ fi
+fi
+rm -fr conftest*
+])# _LT_AC_TRY_DLOPEN_SELF
+
+
+# AC_LIBTOOL_DLOPEN_SELF
+# ----------------------
+AC_DEFUN([AC_LIBTOOL_DLOPEN_SELF],
+[AC_REQUIRE([_LT_AC_CHECK_DLFCN])dnl
+if test "x$enable_dlopen" != xyes; then
+ enable_dlopen=unknown
+ enable_dlopen_self=unknown
+ enable_dlopen_self_static=unknown
+else
+ lt_cv_dlopen=no
+ lt_cv_dlopen_libs=
+
+ case $host_os in
+ beos*)
+ lt_cv_dlopen="load_add_on"
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+ ;;
+
+ mingw* | pw32*)
+ lt_cv_dlopen="LoadLibrary"
+ lt_cv_dlopen_libs=
+ ;;
+
+ cygwin*)
+ lt_cv_dlopen="dlopen"
+ lt_cv_dlopen_libs=
+ ;;
+
+ darwin*)
+ # if libdl is installed we need to link against it
+ AC_CHECK_LIB([dl], [dlopen],
+ [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"],[
+ lt_cv_dlopen="dyld"
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+ ])
+ ;;
+
+ *)
+ AC_CHECK_FUNC([shl_load],
+ [lt_cv_dlopen="shl_load"],
+ [AC_CHECK_LIB([dld], [shl_load],
+ [lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-dld"],
+ [AC_CHECK_FUNC([dlopen],
+ [lt_cv_dlopen="dlopen"],
+ [AC_CHECK_LIB([dl], [dlopen],
+ [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"],
+ [AC_CHECK_LIB([svld], [dlopen],
+ [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld"],
+ [AC_CHECK_LIB([dld], [dld_link],
+ [lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-dld"])
+ ])
+ ])
+ ])
+ ])
+ ])
+ ;;
+ esac
+
+ if test "x$lt_cv_dlopen" != xno; then
+ enable_dlopen=yes
+ else
+ enable_dlopen=no
+ fi
+
+ case $lt_cv_dlopen in
+ dlopen)
+ save_CPPFLAGS="$CPPFLAGS"
+ test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
+
+ save_LDFLAGS="$LDFLAGS"
+ eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"
+
+ save_LIBS="$LIBS"
+ LIBS="$lt_cv_dlopen_libs $LIBS"
+
+ AC_CACHE_CHECK([whether a program can dlopen itself],
+ lt_cv_dlopen_self, [dnl
+ _LT_AC_TRY_DLOPEN_SELF(
+ lt_cv_dlopen_self=yes, lt_cv_dlopen_self=yes,
+ lt_cv_dlopen_self=no, lt_cv_dlopen_self=cross)
+ ])
+
+ if test "x$lt_cv_dlopen_self" = xyes; then
+ LDFLAGS="$LDFLAGS $link_static_flag"
+ AC_CACHE_CHECK([whether a statically linked program can dlopen itself],
+ lt_cv_dlopen_self_static, [dnl
+ _LT_AC_TRY_DLOPEN_SELF(
+ lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=yes,
+ lt_cv_dlopen_self_static=no, lt_cv_dlopen_self_static=cross)
+ ])
+ fi
+
+ CPPFLAGS="$save_CPPFLAGS"
+ LDFLAGS="$save_LDFLAGS"
+ LIBS="$save_LIBS"
+ ;;
+ esac
+
+ case $lt_cv_dlopen_self in
+ yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;;
+ *) enable_dlopen_self=unknown ;;
+ esac
+
+ case $lt_cv_dlopen_self_static in
+ yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;;
+ *) enable_dlopen_self_static=unknown ;;
+ esac
+fi
+])# AC_LIBTOOL_DLOPEN_SELF
+
+
+# AC_LIBTOOL_PROG_CC_C_O([TAGNAME])
+# ---------------------------------
+# Check to see if options -c and -o are simultaneously supported by the compiler
+AC_DEFUN([AC_LIBTOOL_PROG_CC_C_O],
+[AC_REQUIRE([_LT_AC_SYS_COMPILER])dnl
+AC_CACHE_CHECK([if $compiler supports -c -o file.$ac_objext],
+ [_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)],
+ [_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)=no
+ $rm -r conftest 2>/dev/null
+ mkdir conftest
+ cd conftest
+ mkdir out
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ lt_compiler_flag="-o out/conftest2.$ac_objext"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:__oline__: $lt_compile\"" >&AS_MESSAGE_LOG_FD)
+ (eval "$lt_compile" 2>out/conftest.err)
+ ac_status=$?
+ cat out/conftest.err >&AS_MESSAGE_LOG_FD
+ echo "$as_me:__oline__: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
+ if (exit $ac_status) && test -s out/conftest2.$ac_objext
+ then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s out/conftest.err; then
+ _LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes
+ fi
+ fi
+ chmod u+w .
+ $rm conftest*
+   # The SGI C++ compiler will create the directory out/ii_files/ for
+   # template instantiation.
+ test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files
+ $rm out/* && rmdir out
+ cd ..
+ rmdir conftest
+ $rm conftest*
+])
+])# AC_LIBTOOL_PROG_CC_C_O
+
+
+# AC_LIBTOOL_SYS_HARD_LINK_LOCKS([TAGNAME])
+# -----------------------------------------
+# Check to see if we can do hard links to lock some files if needed
+AC_DEFUN([AC_LIBTOOL_SYS_HARD_LINK_LOCKS],
+[AC_REQUIRE([_LT_AC_LOCK])dnl
+
+hard_links="nottested"
+if test "$_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)" = no && test "$need_locks" != no; then
+ # do not overwrite the value of need_locks provided by the user
+ AC_MSG_CHECKING([if we can lock with hard links])
+ hard_links=yes
+ $rm conftest*
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ touch conftest.a
+ ln conftest.a conftest.b 2>&5 || hard_links=no
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ AC_MSG_RESULT([$hard_links])
+ if test "$hard_links" = no; then
+ AC_MSG_WARN([`$CC' does not support `-c -o', so `make -j' may be unsafe])
+ need_locks=warn
+ fi
+else
+ need_locks=no
+fi
+])# AC_LIBTOOL_SYS_HARD_LINK_LOCKS
+
+
+# AC_LIBTOOL_OBJDIR
+# -----------------
+AC_DEFUN([AC_LIBTOOL_OBJDIR],
+[AC_CACHE_CHECK([for objdir], [lt_cv_objdir],
+[rm -f .libs 2>/dev/null
+mkdir .libs 2>/dev/null
+if test -d .libs; then
+ lt_cv_objdir=.libs
+else
+ # MS-DOS does not allow filenames that begin with a dot.
+ lt_cv_objdir=_libs
+fi
+rmdir .libs 2>/dev/null])
+objdir=$lt_cv_objdir
+])# AC_LIBTOOL_OBJDIR
+
+
+# AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH([TAGNAME])
+# ----------------------------------------------
+# Check hardcoding attributes.
+AC_DEFUN([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH],
+[AC_MSG_CHECKING([how to hardcode library paths into programs])
+_LT_AC_TAGVAR(hardcode_action, $1)=
+if test -n "$_LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)" || \
+   test -n "$_LT_AC_TAGVAR(runpath_var, $1)" || \
+   test "X$_LT_AC_TAGVAR(hardcode_automatic, $1)" = "Xyes" ; then
+
+  # We can hardcode non-existent directories.
+ if test "$_LT_AC_TAGVAR(hardcode_direct, $1)" != no &&
+ # If the only mechanism to avoid hardcoding is shlibpath_var, we
+ # have to relink, otherwise we might link with an installed library
+ # when we should be linking with a yet-to-be-installed one
+ ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, $1)" != no &&
+ test "$_LT_AC_TAGVAR(hardcode_minus_L, $1)" != no; then
+ # Linking always hardcodes the temporary library directory.
+ _LT_AC_TAGVAR(hardcode_action, $1)=relink
+ else
+    # We can link without hardcoding, and we can hardcode nonexistent directories.
+ _LT_AC_TAGVAR(hardcode_action, $1)=immediate
+ fi
+else
+ # We cannot hardcode anything, or else we can only hardcode existing
+ # directories.
+ _LT_AC_TAGVAR(hardcode_action, $1)=unsupported
+fi
+AC_MSG_RESULT([$_LT_AC_TAGVAR(hardcode_action, $1)])
+
+if test "$_LT_AC_TAGVAR(hardcode_action, $1)" = relink; then
+ # Fast installation is not supported
+ enable_fast_install=no
+elif test "$shlibpath_overrides_runpath" = yes ||
+ test "$enable_shared" = no; then
+ # Fast installation is not necessary
+ enable_fast_install=needless
+fi
+])# AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH
+
+
+# AC_LIBTOOL_SYS_LIB_STRIP
+# ------------------------
+AC_DEFUN([AC_LIBTOOL_SYS_LIB_STRIP],
+[striplib=
+old_striplib=
+AC_MSG_CHECKING([whether stripping libraries is possible])
+if test -n "$STRIP" && $STRIP -V 2>&1 | grep "GNU strip" >/dev/null; then
+ test -z "$old_striplib" && old_striplib="$STRIP --strip-debug"
+ test -z "$striplib" && striplib="$STRIP --strip-unneeded"
+ AC_MSG_RESULT([yes])
+else
+# FIXME - insert some real tests, host_os isn't really good enough
+ case $host_os in
+ darwin*)
+ if test -n "$STRIP" ; then
+ striplib="$STRIP -x"
+ AC_MSG_RESULT([yes])
+ else
+ AC_MSG_RESULT([no])
+fi
+ ;;
+ *)
+ AC_MSG_RESULT([no])
+ ;;
+ esac
+fi
+])# AC_LIBTOOL_SYS_LIB_STRIP
+
+
+# AC_LIBTOOL_SYS_DYNAMIC_LINKER
+# -----------------------------
+# PORTME Fill in your ld.so characteristics
+AC_DEFUN([AC_LIBTOOL_SYS_DYNAMIC_LINKER],
+[AC_MSG_CHECKING([dynamic linker characteristics])
+library_names_spec=
+libname_spec='lib$name'
+soname_spec=
+shrext=".so"
+postinstall_cmds=
+postuninstall_cmds=
+finish_cmds=
+finish_eval=
+shlibpath_var=
+shlibpath_overrides_runpath=unknown
+version_type=none
+dynamic_linker="$host_os ld.so"
+sys_lib_dlsearch_path_spec="/lib /usr/lib"
+if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | grep ';' >/dev/null ; then
+ # if the path contains ";" then we assume it to be the separator
+ # otherwise default to the standard path separator (i.e. ":") - it is
+ # assumed that no part of a normal pathname contains ";" but that should
+    # be okay in the real world where ";" in dirpaths is itself problematic.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+else
+ sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
+fi
+need_lib_prefix=unknown
+hardcode_into_libs=no
+
+# when you set need_version to no, make sure it does not cause -set_version
+# flags to be left without arguments
+need_version=unknown
+
+case $host_os in
+aix3*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a'
+ shlibpath_var=LIBPATH
+
+ # AIX 3 has no versioning support, so we append a major version to the name.
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+
+aix4* | aix5*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ hardcode_into_libs=yes
+ if test "$host_cpu" = ia64; then
+ # AIX 5 supports IA64
+ library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ else
+ # With GCC up to 2.95.x, collect2 would create an import file
+ # for dependence libraries. The import file would start with
+ # the line `#! .'. This would cause the generated library to
+ # depend on `.', always an invalid library. This was fixed in
+ # development snapshots of GCC prior to 3.0.
+ case $host_os in
+ aix4 | aix4.[[01]] | aix4.[[01]].*)
+ if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
+ echo ' yes '
+ echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then
+ :
+ else
+ can_build_shared=no
+ fi
+ ;;
+ esac
+    # AIX (on Power*) has no versioning support, so currently we cannot hardcode a
+    # correct soname into executables. Probably we can add versioning support to
+    # collect2, so additional links can be useful in the future.
+ if test "$aix_use_runtimelinking" = yes; then
+ # If using run time linking (on AIX 4.2 or later) use lib<name>.so
+ # instead of lib<name>.a to let people know that these are not
+ # typical AIX shared libraries.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ else
+ # We preserve .a as extension for shared libraries through AIX4.2
+ # and later when we are not doing run time linking.
+ library_names_spec='${libname}${release}.a $libname.a'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ fi
+ shlibpath_var=LIBPATH
+ fi
+ ;;
+
+amigaos*)
+ library_names_spec='$libname.ixlibrary $libname.a'
+ # Create ${libname}_ixlibrary.a entries in /sys/libs.
+ finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
+ ;;
+
+beos*)
+ library_names_spec='${libname}${shared_ext}'
+ dynamic_linker="$host_os ld.so"
+ shlibpath_var=LIBRARY_PATH
+ ;;
+
+bsdi4*)
+ version_type=linux
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
+ sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
+ # the default ld.so.conf also contains /usr/contrib/lib and
+ # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
+ # libtool to hard-code these into programs
+ ;;
+
+cygwin* | mingw* | pw32*)
+ version_type=windows
+ shrext=".dll"
+ need_version=no
+ need_lib_prefix=no
+
+ case $GCC,$host_os in
+ yes,cygwin* | yes,mingw* | yes,pw32*)
+ library_names_spec='$libname.dll.a'
+ # DLL is installed to $(libdir)/../bin by postinstall_cmds
+ postinstall_cmds='base_file=`basename \${file}`~
+ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~
+ dldir=$destdir/`dirname \$dlpath`~
+ test -d \$dldir || mkdir -p \$dldir~
+ $install_prog $dir/$dlname \$dldir/$dlname'
+ postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
+ dlpath=$dir/\$dldll~
+ $rm \$dlpath'
+ shlibpath_overrides_runpath=yes
+
+ case $host_os in
+ cygwin*)
+ # Cygwin DLLs use 'cyg' prefix rather than 'lib'
+ soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib"
+ ;;
+ mingw*)
+ # MinGW DLLs use traditional 'lib' prefix
+ soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | [grep ';[c-zC-Z]:/' >/dev/null]; then
+ # It is most probably a Windows format PATH printed by
+ # mingw gcc, but we are running on Cygwin. Gcc prints its search
+ # path with ; separators, and with drive letters. We can handle the
+ # drive letters (cygwin fileutils understands them), so leave them,
+ # especially as we might pass files found there to a mingw objdump,
+ # which wouldn't understand a cygwinified path. Ahh.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+ ;;
+ pw32*)
+ # pw32 DLLs use 'pw' prefix rather than 'lib'
+ library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ ;;
+ esac
+ ;;
+
+ *)
+ library_names_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext} $libname.lib'
+ ;;
+ esac
+ dynamic_linker='Win32 ld.exe'
+ # FIXME: first we should search . and the directory the executable is in
+ shlibpath_var=PATH
+ ;;
+
+darwin* | rhapsody*)
+ dynamic_linker="$host_os dyld"
+ version_type=darwin
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext'
+ soname_spec='${libname}${release}${major}$shared_ext'
+ shlibpath_overrides_runpath=yes
+ shlibpath_var=DYLD_LIBRARY_PATH
+ shrext='$(test .$module = .yes && echo .so || echo .dylib)'
+  # The output of Apple's gcc for 'gcc -print-search-dirs' is formatted
+  # differently from other platforms, so it needs the extra massaging below.
+ if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | tr "\n" "$PATH_SEPARATOR" | sed -e 's/libraries:/@libraries:/' | tr "@" "\n" | grep "^libraries:" | sed -e "s/^libraries://" -e "s,=/,/,g" -e "s,$PATH_SEPARATOR, ,g" -e "s,.*,& /lib /usr/lib /usr/local/lib,g"`
+ else
+ sys_lib_search_path_spec='/lib /usr/lib /usr/local/lib'
+ fi
+ sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
+ ;;
+
+dgux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+freebsd1*)
+ dynamic_linker=no
+ ;;
+
+kfreebsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+freebsd*)
+ objformat=`test -x /usr/bin/objformat && /usr/bin/objformat || echo aout`
+ version_type=freebsd-$objformat
+ case $version_type in
+ freebsd-elf*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}'
+ need_version=no
+ need_lib_prefix=no
+ ;;
+ freebsd-*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix'
+ need_version=yes
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_os in
+ freebsd2*)
+ shlibpath_overrides_runpath=yes
+ ;;
+ freebsd3.[01]* | freebsdelf3.[01]*)
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+ *) # from 3.2 on
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ ;;
+ esac
+ ;;
+
+gnu*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ hardcode_into_libs=yes
+ ;;
+
+hpux9* | hpux10* | hpux11*)
+ # Give a soname corresponding to the major version so that dld.sl refuses to
+ # link against other versions.
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ case "$host_cpu" in
+ ia64*)
+ shrext='.so'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.so"
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ if test "X$HPUX_IA64_MODE" = X32; then
+ sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib"
+ else
+ sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64"
+ fi
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ hppa*64*)
+ shrext='.sl'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64"
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ *)
+ shrext='.sl'
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=SHLIB_PATH
+ shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+ esac
+ # HP-UX runs *really* slowly unless shared libraries are mode 555.
+ postinstall_cmds='chmod 555 $lib'
+ ;;
+
+irix5* | irix6* | nonstopux*)
+ case $host_os in
+ nonstopux*) version_type=nonstopux ;;
+ *)
+ if test "$lt_cv_prog_gnu_ld" = yes; then
+ version_type=linux
+ else
+ version_type=irix
+ fi ;;
+ esac
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}'
+ case $host_os in
+ irix5* | nonstopux*)
+ libsuff= shlibsuff=
+ ;;
+ *)
+ case $LD in # libtool.m4 will add one of these switches to LD
+ *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
+ libsuff= shlibsuff= libmagic=32-bit;;
+ *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
+ libsuff=32 shlibsuff=N32 libmagic=N32;;
+ *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
+ libsuff=64 shlibsuff=64 libmagic=64-bit;;
+ *) libsuff= shlibsuff= libmagic=never-match;;
+ esac
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}"
+ sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}"
+ hardcode_into_libs=yes
+ ;;
+
+# No shared lib support for Linux oldld, aout, or coff.
+linux*oldld* | linux*aout* | linux*coff*)
+ dynamic_linker=no
+ ;;
+
+# This must be Linux ELF.
+linux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ # This implies no fast_install, which is unacceptable.
+ # Some rework will be needed to allow for fast_install
+ # before this can be enabled.
+ hardcode_into_libs=yes
+
+ # Append ld.so.conf contents to the search path
+ if test -f /etc/ld.so.conf; then
+ ld_extra=`$SED -e 's/[:,\t]/ /g;s/=[^=]*$//;s/=[^= ]* / /g' /etc/ld.so.conf`
+ sys_lib_dlsearch_path_spec="/lib /usr/lib $ld_extra"
+ fi
+
+ # We used to test for /lib/ld.so.1 and disable shared libraries on
+ # powerpc, because MkLinux only supported shared libraries with the
+ # GNU dynamic linker. Since this was broken with cross compilers,
+ # most powerpc-linux boxes support dynamic linking these days and
+ # people can always --disable-shared, the test was removed, and we
+ # assume the GNU/Linux dynamic linker is in use.
+ dynamic_linker='GNU/Linux ld.so'
+ ;;
+
+knetbsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+netbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ dynamic_linker='NetBSD (a.out) ld.so'
+ else
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ dynamic_linker='NetBSD ld.elf_so'
+ fi
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+
+newsos6)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+nto-qnx*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+openbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=yes
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ case $host_os in
+ openbsd2.[[89]] | openbsd2.[[89]].*)
+ shlibpath_overrides_runpath=no
+ ;;
+ *)
+ shlibpath_overrides_runpath=yes
+ ;;
+ esac
+ else
+ shlibpath_overrides_runpath=yes
+ fi
+ ;;
+
+os2*)
+ libname_spec='$name'
+ shrext=".dll"
+ need_lib_prefix=no
+ library_names_spec='$libname${shared_ext} $libname.a'
+ dynamic_linker='OS/2 ld.exe'
+ shlibpath_var=LIBPATH
+ ;;
+
+osf3* | osf4* | osf5*)
+ version_type=osf
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib"
+ sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec"
+ ;;
+
+sco3.2v5*)
+ version_type=osf
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+solaris*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ # ldd complains unless libraries are executable
+ postinstall_cmds='chmod +x $lib'
+ ;;
+
+sunos4*)
+ version_type=sunos
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ if test "$with_gnu_ld" = yes; then
+ need_lib_prefix=no
+ fi
+ need_version=yes
+ ;;
+
+sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_vendor in
+ sni)
+ shlibpath_overrides_runpath=no
+ need_lib_prefix=no
+ export_dynamic_flag_spec='${wl}-Blargedynsym'
+ runpath_var=LD_RUN_PATH
+ ;;
+ siemens)
+ need_lib_prefix=no
+ ;;
+ motorola)
+ need_lib_prefix=no
+ need_version=no
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib'
+ ;;
+ esac
+ ;;
+
+sysv4*MP*)
+  if test -d /usr/nec; then
+ version_type=linux
+ library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}'
+ soname_spec='$libname${shared_ext}.$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ fi
+ ;;
+
+uts4*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+*)
+ dynamic_linker=no
+ ;;
+esac
+AC_MSG_RESULT([$dynamic_linker])
+test "$dynamic_linker" = no && can_build_shared=no
+])# AC_LIBTOOL_SYS_DYNAMIC_LINKER
+
+
+# _LT_AC_TAGCONFIG
+# ----------------
+AC_DEFUN([_LT_AC_TAGCONFIG],
+[AC_ARG_WITH([tags],
+ [AC_HELP_STRING([--with-tags@<:@=TAGS@:>@],
+ [include additional configurations @<:@automatic@:>@])],
+ [tagnames="$withval"])
+
+if test -f "$ltmain" && test -n "$tagnames"; then
+ if test ! -f "${ofile}"; then
+ AC_MSG_WARN([output file `$ofile' does not exist])
+ fi
+
+ if test -z "$LTCC"; then
+ eval "`$SHELL ${ofile} --config | grep '^LTCC='`"
+ if test -z "$LTCC"; then
+ AC_MSG_WARN([output file `$ofile' does not look like a libtool script])
+ else
+ AC_MSG_WARN([using `LTCC=$LTCC', extracted from `$ofile'])
+ fi
+ fi
+
+ # Extract list of available tagged configurations in $ofile.
+ # Note that this assumes the entire list is on one line.
+ available_tags=`grep "^available_tags=" "${ofile}" | $SED -e 's/available_tags=\(.*$\)/\1/' -e 's/\"//g'`
+
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR,"
+ for tagname in $tagnames; do
+ IFS="$lt_save_ifs"
+ # Check whether tagname contains only valid characters
+ case `$echo "X$tagname" | $Xsed -e 's:[[-_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890,/]]::g'` in
+ "") ;;
+ *) AC_MSG_ERROR([invalid tag name: $tagname])
+ ;;
+ esac
+
+ if grep "^# ### BEGIN LIBTOOL TAG CONFIG: $tagname$" < "${ofile}" > /dev/null
+ then
+ AC_MSG_ERROR([tag name \"$tagname\" already exists])
+ fi
+
+ # Update the list of available tags.
+ if test -n "$tagname"; then
+ echo appending configuration tag \"$tagname\" to $ofile
+
+ case $tagname in
+ CXX)
+ if test -n "$CXX" && test "X$CXX" != "Xno"; then
+ AC_LIBTOOL_LANG_CXX_CONFIG
+ else
+ tagname=""
+ fi
+ ;;
+
+ F77)
+ if test -n "$F77" && test "X$F77" != "Xno"; then
+ AC_LIBTOOL_LANG_F77_CONFIG
+ else
+ tagname=""
+ fi
+ ;;
+
+ GCJ)
+ if test -n "$GCJ" && test "X$GCJ" != "Xno"; then
+ AC_LIBTOOL_LANG_GCJ_CONFIG
+ else
+ tagname=""
+ fi
+ ;;
+
+ RC)
+ AC_LIBTOOL_LANG_RC_CONFIG
+ ;;
+
+ *)
+ AC_MSG_ERROR([Unsupported tag name: $tagname])
+ ;;
+ esac
+
+ # Append the new tag name to the list of available tags.
+ if test -n "$tagname" ; then
+ available_tags="$available_tags $tagname"
+ fi
+ fi
+ done
+ IFS="$lt_save_ifs"
+
+ # Now substitute the updated list of available tags.
+ if eval "sed -e 's/^available_tags=.*\$/available_tags=\"$available_tags\"/' \"$ofile\" > \"${ofile}T\""; then
+ mv "${ofile}T" "$ofile"
+ chmod +x "$ofile"
+ else
+ rm -f "${ofile}T"
+ AC_MSG_ERROR([unable to update list of available tagged configurations.])
+ fi
+fi
+])# _LT_AC_TAGCONFIG
+
+
+# AC_LIBTOOL_DLOPEN
+# -----------------
+# enable checks for dlopen support
+AC_DEFUN([AC_LIBTOOL_DLOPEN],
+ [AC_BEFORE([$0],[AC_LIBTOOL_SETUP])
+])# AC_LIBTOOL_DLOPEN
+
+
+# AC_LIBTOOL_WIN32_DLL
+# --------------------
+# declare package support for building win32 dll's
+AC_DEFUN([AC_LIBTOOL_WIN32_DLL],
+[AC_BEFORE([$0], [AC_LIBTOOL_SETUP])
+])# AC_LIBTOOL_WIN32_DLL
+
+
+# AC_ENABLE_SHARED([DEFAULT])
+# ---------------------------
+# implement the --enable-shared flag
+# DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'.
+AC_DEFUN([AC_ENABLE_SHARED],
+[define([AC_ENABLE_SHARED_DEFAULT], ifelse($1, no, no, yes))dnl
+AC_ARG_ENABLE([shared],
+ [AC_HELP_STRING([--enable-shared@<:@=PKGS@:>@],
+ [build shared libraries @<:@default=]AC_ENABLE_SHARED_DEFAULT[@:>@])],
+ [p=${PACKAGE-default}
+ case $enableval in
+ yes) enable_shared=yes ;;
+ no) enable_shared=no ;;
+ *)
+ enable_shared=no
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR,"
+ for pkg in $enableval; do
+ IFS="$lt_save_ifs"
+ if test "X$pkg" = "X$p"; then
+ enable_shared=yes
+ fi
+ done
+ IFS="$lt_save_ifs"
+ ;;
+ esac],
+ [enable_shared=]AC_ENABLE_SHARED_DEFAULT)
+])# AC_ENABLE_SHARED
+
+
+# AC_DISABLE_SHARED
+# -----------------
+# set the default shared flag to --disable-shared
+AC_DEFUN([AC_DISABLE_SHARED],
+[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
+AC_ENABLE_SHARED(no)
+])# AC_DISABLE_SHARED
+
+
+# AC_ENABLE_STATIC([DEFAULT])
+# ---------------------------
+# implement the --enable-static flag
+# DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'.
+AC_DEFUN([AC_ENABLE_STATIC],
+[define([AC_ENABLE_STATIC_DEFAULT], ifelse($1, no, no, yes))dnl
+AC_ARG_ENABLE([static],
+ [AC_HELP_STRING([--enable-static@<:@=PKGS@:>@],
+ [build static libraries @<:@default=]AC_ENABLE_STATIC_DEFAULT[@:>@])],
+ [p=${PACKAGE-default}
+ case $enableval in
+ yes) enable_static=yes ;;
+ no) enable_static=no ;;
+ *)
+ enable_static=no
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR,"
+ for pkg in $enableval; do
+ IFS="$lt_save_ifs"
+ if test "X$pkg" = "X$p"; then
+ enable_static=yes
+ fi
+ done
+ IFS="$lt_save_ifs"
+ ;;
+ esac],
+ [enable_static=]AC_ENABLE_STATIC_DEFAULT)
+])# AC_ENABLE_STATIC
+
+
+# AC_DISABLE_STATIC
+# -----------------
+# set the default static flag to --disable-static
+AC_DEFUN([AC_DISABLE_STATIC],
+[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
+AC_ENABLE_STATIC(no)
+])# AC_DISABLE_STATIC
+
+
+# AC_ENABLE_FAST_INSTALL([DEFAULT])
+# ---------------------------------
+# implement the --enable-fast-install flag
+# DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'.
+AC_DEFUN([AC_ENABLE_FAST_INSTALL],
+[define([AC_ENABLE_FAST_INSTALL_DEFAULT], ifelse($1, no, no, yes))dnl
+AC_ARG_ENABLE([fast-install],
+ [AC_HELP_STRING([--enable-fast-install@<:@=PKGS@:>@],
+ [optimize for fast installation @<:@default=]AC_ENABLE_FAST_INSTALL_DEFAULT[@:>@])],
+ [p=${PACKAGE-default}
+ case $enableval in
+ yes) enable_fast_install=yes ;;
+ no) enable_fast_install=no ;;
+ *)
+ enable_fast_install=no
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR,"
+ for pkg in $enableval; do
+ IFS="$lt_save_ifs"
+ if test "X$pkg" = "X$p"; then
+ enable_fast_install=yes
+ fi
+ done
+ IFS="$lt_save_ifs"
+ ;;
+ esac],
+ [enable_fast_install=]AC_ENABLE_FAST_INSTALL_DEFAULT)
+])# AC_ENABLE_FAST_INSTALL
+
+
+# AC_DISABLE_FAST_INSTALL
+# -----------------------
+# set the default to --disable-fast-install
+AC_DEFUN([AC_DISABLE_FAST_INSTALL],
+[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
+AC_ENABLE_FAST_INSTALL(no)
+])# AC_DISABLE_FAST_INSTALL
+
+
+# AC_LIBTOOL_PICMODE([MODE])
+# --------------------------
+# implement the --with-pic flag
+# MODE is either `yes' or `no'. If omitted, it defaults to `default'.
+AC_DEFUN([AC_LIBTOOL_PICMODE],
+[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
+pic_mode=ifelse($#,1,$1,default)
+])# AC_LIBTOOL_PICMODE
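+
+# Illustrative configure.ac ordering (a sketch for a hypothetical package):
+# the option macros above are meant to run before the main libtool setup,
+# for example
+#
+#   AC_LIBTOOL_DLOPEN
+#   AC_DISABLE_STATIC
+#   AC_ENABLE_SHARED
+#   AC_LIBTOOL_PICMODE([yes])
+#   AC_PROG_LIBTOOL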
+
+
+# AC_PROG_EGREP
+# -------------
+# This is predefined starting with Autoconf 2.54, so this conditional
+# definition can be removed once we require Autoconf 2.54 or later.
+m4_ifndef([AC_PROG_EGREP], [AC_DEFUN([AC_PROG_EGREP],
+[AC_CACHE_CHECK([for egrep], [ac_cv_prog_egrep],
+ [if echo a | (grep -E '(a|b)') >/dev/null 2>&1
+ then ac_cv_prog_egrep='grep -E'
+ else ac_cv_prog_egrep='egrep'
+ fi])
+ EGREP=$ac_cv_prog_egrep
+ AC_SUBST([EGREP])
+])])
+
+
+# AC_PATH_TOOL_PREFIX
+# -------------------
+# find a file program which can recognise a shared library
+AC_DEFUN([AC_PATH_TOOL_PREFIX],
+[AC_REQUIRE([AC_PROG_EGREP])dnl
+AC_MSG_CHECKING([for $1])
+AC_CACHE_VAL(lt_cv_path_MAGIC_CMD,
+[case $MAGIC_CMD in
+[[\\/*] | ?:[\\/]*])
+ lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path.
+ ;;
+*)
+ lt_save_MAGIC_CMD="$MAGIC_CMD"
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+dnl $ac_dummy forces splitting on constant user-supplied paths.
+dnl POSIX.2 word splitting is done only on the output of word expansions,
+dnl not every word. This closes a longstanding sh security hole.
+ ac_dummy="ifelse([$2], , $PATH, [$2])"
+ for ac_dir in $ac_dummy; do
+ IFS="$lt_save_ifs"
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/$1; then
+ lt_cv_path_MAGIC_CMD="$ac_dir/$1"
+ if test -n "$file_magic_test_file"; then
+ case $deplibs_check_method in
+ "file_magic "*)
+ file_magic_regex="`expr \"$deplibs_check_method\" : \"file_magic \(.*\)\"`"
+ MAGIC_CMD="$lt_cv_path_MAGIC_CMD"
+ if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null |
+ $EGREP "$file_magic_regex" > /dev/null; then
+ :
+ else
+ cat <<EOF 1>&2
+
+*** Warning: the command libtool uses to detect shared libraries,
+*** $file_magic_cmd, produces output that libtool cannot recognize.
+*** The result is that libtool may fail to recognize shared libraries
+*** as such. This will affect the creation of libtool libraries that
+*** depend on shared libraries, but programs linked with such libtool
+*** libraries will work regardless of this problem. Nevertheless, you
+*** may want to report the problem to your system manager and/or to
+*** bug-libtool at gnu.org
+
+EOF
+ fi ;;
+ esac
+ fi
+ break
+ fi
+ done
+ IFS="$lt_save_ifs"
+ MAGIC_CMD="$lt_save_MAGIC_CMD"
+ ;;
+esac])
+MAGIC_CMD="$lt_cv_path_MAGIC_CMD"
+if test -n "$MAGIC_CMD"; then
+ AC_MSG_RESULT($MAGIC_CMD)
+else
+ AC_MSG_RESULT(no)
+fi
+])# AC_PATH_TOOL_PREFIX
+
+
+# AC_PATH_MAGIC
+# -------------
+# find a file program which can recognise a shared library
+AC_DEFUN([AC_PATH_MAGIC],
+[AC_PATH_TOOL_PREFIX(${ac_tool_prefix}file, /usr/bin$PATH_SEPARATOR$PATH)
+if test -z "$lt_cv_path_MAGIC_CMD"; then
+ if test -n "$ac_tool_prefix"; then
+ AC_PATH_TOOL_PREFIX(file, /usr/bin$PATH_SEPARATOR$PATH)
+ else
+ MAGIC_CMD=:
+ fi
+fi
+])# AC_PATH_MAGIC
+
+
+# AC_PROG_LD
+# ----------
+# find the pathname to the GNU or non-GNU linker
+AC_DEFUN([AC_PROG_LD],
+[AC_ARG_WITH([gnu-ld],
+ [AC_HELP_STRING([--with-gnu-ld],
+ [assume the C compiler uses GNU ld @<:@default=no@:>@])],
+ [test "$withval" = no || with_gnu_ld=yes],
+ [with_gnu_ld=no])
+AC_REQUIRE([LT_AC_PROG_SED])dnl
+AC_REQUIRE([AC_PROG_CC])dnl
+AC_REQUIRE([AC_CANONICAL_HOST])dnl
+AC_REQUIRE([AC_CANONICAL_BUILD])dnl
+ac_prog=ld
+if test "$GCC" = yes; then
+ # Check if gcc -print-prog-name=ld gives a path.
+ AC_MSG_CHECKING([for ld used by $CC])
+ case $host in
+ *-*-mingw*)
+ # gcc leaves a trailing carriage return which upsets mingw
+ ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;;
+ *)
+ ac_prog=`($CC -print-prog-name=ld) 2>&5` ;;
+ esac
+ case $ac_prog in
+ # Accept absolute paths.
+ [[\\/]]* | ?:[[\\/]]*)
+ re_direlt='/[[^/]][[^/]]*/\.\./'
+ # Canonicalize the pathname of ld
+ ac_prog=`echo $ac_prog| $SED 's%\\\\%/%g'`
+ while echo $ac_prog | grep "$re_direlt" > /dev/null 2>&1; do
+ ac_prog=`echo $ac_prog| $SED "s%$re_direlt%/%"`
+ done
+ test -z "$LD" && LD="$ac_prog"
+ ;;
+ "")
+ # If it fails, then pretend we aren't using GCC.
+ ac_prog=ld
+ ;;
+ *)
+ # If it is relative, then search for the first ld in PATH.
+ with_gnu_ld=unknown
+ ;;
+ esac
+elif test "$with_gnu_ld" = yes; then
+ AC_MSG_CHECKING([for GNU ld])
+else
+ AC_MSG_CHECKING([for non-GNU ld])
+fi
+AC_CACHE_VAL(lt_cv_path_LD,
+[if test -z "$LD"; then
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+ for ac_dir in $PATH; do
+ IFS="$lt_save_ifs"
+ test -z "$ac_dir" && ac_dir=.
+ if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then
+ lt_cv_path_LD="$ac_dir/$ac_prog"
+ # Check to see if the program is GNU ld. I'd rather use --version,
+ # but apparently some GNU ld's only accept -v.
+ # Break only if it was the GNU/non-GNU ld that we prefer.
+ case `"$lt_cv_path_LD" -v 2>&1 </dev/null` in
+ *GNU* | *'with BFD'*)
+ test "$with_gnu_ld" != no && break
+ ;;
+ *)
+ test "$with_gnu_ld" != yes && break
+ ;;
+ esac
+ fi
+ done
+ IFS="$lt_save_ifs"
+else
+ lt_cv_path_LD="$LD" # Let the user override the test with a path.
+fi])
+LD="$lt_cv_path_LD"
+if test -n "$LD"; then
+ AC_MSG_RESULT($LD)
+else
+ AC_MSG_RESULT(no)
+fi
+test -z "$LD" && AC_MSG_ERROR([no acceptable ld found in \$PATH])
+AC_PROG_LD_GNU
+])# AC_PROG_LD
+
+
+# AC_PROG_LD_GNU
+# --------------
+AC_DEFUN([AC_PROG_LD_GNU],
+[AC_REQUIRE([AC_PROG_EGREP])dnl
+AC_CACHE_CHECK([if the linker ($LD) is GNU ld], lt_cv_prog_gnu_ld,
+[# I'd rather use --version here, but apparently some GNU ld's only accept -v.
+case `$LD -v 2>&1 </dev/null` in
+*GNU* | *'with BFD'*)
+ lt_cv_prog_gnu_ld=yes
+ ;;
+*)
+ lt_cv_prog_gnu_ld=no
+ ;;
+esac])
+with_gnu_ld=$lt_cv_prog_gnu_ld
+])# AC_PROG_LD_GNU
+
+
+# AC_PROG_LD_RELOAD_FLAG
+# ----------------------
+# find reload flag for linker
+# -- PORTME Some linkers may need a different reload flag.
+AC_DEFUN([AC_PROG_LD_RELOAD_FLAG],
+[AC_CACHE_CHECK([for $LD option to reload object files],
+ lt_cv_ld_reload_flag,
+ [lt_cv_ld_reload_flag='-r'])
+reload_flag=$lt_cv_ld_reload_flag
+case $reload_flag in
+"" | " "*) ;;
+*) reload_flag=" $reload_flag" ;;
+esac
+reload_cmds='$LD$reload_flag -o $output$reload_objs'
+])# AC_PROG_LD_RELOAD_FLAG
+
+
+# AC_DEPLIBS_CHECK_METHOD
+# -----------------------
+# how to check for library dependencies
+# -- PORTME fill in with the dynamic library characteristics
+AC_DEFUN([AC_DEPLIBS_CHECK_METHOD],
+[AC_CACHE_CHECK([how to recognise dependent libraries],
+lt_cv_deplibs_check_method,
+[lt_cv_file_magic_cmd='$MAGIC_CMD'
+lt_cv_file_magic_test_file=
+lt_cv_deplibs_check_method='unknown'
+# Need to set the preceding variable on all platforms that support
+# interlibrary dependencies.
+# `none' -- dependencies not supported.
+# `unknown' -- same as none, but documents that we really don't know.
+# `pass_all' -- all dependencies passed with no checks.
+# `test_compile' -- check by making a test program.
+# `file_magic [[regex]]' -- check by looking for files in the library path
+# that respond to the $file_magic_cmd with a given extended regex.
+# If you have `file' or equivalent on your system and you're not sure
+# whether `pass_all' will *always* work, you probably want this one.
+
+case $host_os in
+aix4* | aix5*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+beos*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+bsdi4*)
+ lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib)'
+ lt_cv_file_magic_cmd='/usr/bin/file -L'
+ lt_cv_file_magic_test_file=/shlib/libc.so
+ ;;
+
+cygwin*)
+ # win32_libid is a shell function defined in ltmain.sh
+ lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+ lt_cv_file_magic_cmd='win32_libid'
+ ;;
+
+mingw* | pw32*)
+ # Base MSYS/MinGW do not provide the 'file' command needed by
+ # win32_libid shell function, so use a weaker test based on 'objdump'.
+ lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
+ lt_cv_file_magic_cmd='$OBJDUMP -f'
+ ;;
+
+darwin* | rhapsody*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+freebsd* | kfreebsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then
+ case $host_cpu in
+ i*86 )
+ # Not sure whether the presence of OpenBSD here was a mistake.
+ # Let's accept both of them until this is cleared up.
+ lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD)/i[[3-9]]86 (compact )?demand paged shared library'
+ lt_cv_file_magic_cmd=/usr/bin/file
+ lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*`
+ ;;
+ esac
+ else
+ lt_cv_deplibs_check_method=pass_all
+ fi
+ ;;
+
+gnu*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+hpux10.20* | hpux11*)
+ lt_cv_file_magic_cmd=/usr/bin/file
+ case "$host_cpu" in
+ ia64*)
+ lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|ELF-[[0-9]][[0-9]]) shared object file - IA64'
+ lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so
+ ;;
+ hppa*64*)
+ [lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - PA-RISC [0-9].[0-9]']
+ lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl
+ ;;
+ *)
+ lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|PA-RISC[[0-9]].[[0-9]]) shared library'
+ lt_cv_file_magic_test_file=/usr/lib/libc.sl
+ ;;
+ esac
+ ;;
+
+irix5* | irix6* | nonstopux*)
+ case $LD in
+ *-32|*"-32 ") libmagic=32-bit;;
+ *-n32|*"-n32 ") libmagic=N32;;
+ *-64|*"-64 ") libmagic=64-bit;;
+ *) libmagic=never-match;;
+ esac
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+# This must be Linux ELF.
+linux*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then
+ lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$'
+ else
+ lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|_pic\.a)$'
+ fi
+ ;;
+
+newos6*)
+ lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (executable|dynamic lib)'
+ lt_cv_file_magic_cmd=/usr/bin/file
+ lt_cv_file_magic_test_file=/usr/lib/libnls.so
+ ;;
+
+nto-qnx*)
+ lt_cv_deplibs_check_method=unknown
+ ;;
+
+openbsd*)
+ lt_cv_file_magic_cmd=/usr/bin/file
+ lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*`
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB shared object'
+ else
+ lt_cv_deplibs_check_method='file_magic OpenBSD.* shared library'
+ fi
+ ;;
+
+osf3* | osf4* | osf5*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+sco3.2v5*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+solaris*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ case $host_vendor in
+ motorola)
+ lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib) M[[0-9]][[0-9]]* Version [[0-9]]'
+ lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*`
+ ;;
+ ncr)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+ sequent)
+ lt_cv_file_magic_cmd='/bin/file'
+ lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )'
+ ;;
+ sni)
+ lt_cv_file_magic_cmd='/bin/file'
+ lt_cv_deplibs_check_method="file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB dynamic lib"
+ lt_cv_file_magic_test_file=/lib/libc.so
+ ;;
+ siemens)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+ esac
+ ;;
+
+sysv5OpenUNIX8* | sysv5UnixWare7* | sysv5uw[[78]]* | unixware7* | sysv4*uw2*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+esac
+])
+file_magic_cmd=$lt_cv_file_magic_cmd
+deplibs_check_method=$lt_cv_deplibs_check_method
+test -z "$deplibs_check_method" && deplibs_check_method=unknown
+])# AC_DEPLIBS_CHECK_METHOD
+
+
+# AC_PROG_NM
+# ----------
+# find the pathname to a BSD-compatible name lister
+AC_DEFUN([AC_PROG_NM],
+[AC_CACHE_CHECK([for BSD-compatible nm], lt_cv_path_NM,
+[if test -n "$NM"; then
+ # Let the user override the test.
+ lt_cv_path_NM="$NM"
+else
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+ for ac_dir in $PATH /usr/ccs/bin /usr/ucb /bin; do
+ IFS="$lt_save_ifs"
+ test -z "$ac_dir" && ac_dir=.
+ tmp_nm="$ac_dir/${ac_tool_prefix}nm"
+ if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then
+ # Check to see if the nm accepts a BSD-compat flag.
+ # Adding the `sed 1q' prevents false positives on HP-UX, which says:
+ # nm: unknown option "B" ignored
+ # Tru64's nm complains that /dev/null is an invalid object file
+ case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in
+ */dev/null* | *'Invalid file or object type'*)
+ lt_cv_path_NM="$tmp_nm -B"
+ break
+ ;;
+ *)
+ case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in
+ */dev/null*)
+ lt_cv_path_NM="$tmp_nm -p"
+ break
+ ;;
+ *)
+ lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but
+ continue # so that we can try to find one that supports BSD flags
+ ;;
+ esac
+ esac
+ fi
+ done
+ IFS="$lt_save_ifs"
+ test -z "$lt_cv_path_NM" && lt_cv_path_NM=nm
+fi])
+NM="$lt_cv_path_NM"
+])# AC_PROG_NM
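Worth noting from the probe above: the NM environment variable short-circuits the whole search, so a build that needs a particular name lister can simply pre-seed it. A minimal sketch (the toolchain path below is purely illustrative):

    # any nm that accepts the BSD-style -B flag (or -p) will satisfy the check
    NM="/opt/cross/bin/arm-linux-nm -B" ./configure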
+
+
+# AC_CHECK_LIBM
+# -------------
+# check for math library
+AC_DEFUN([AC_CHECK_LIBM],
+[AC_REQUIRE([AC_CANONICAL_HOST])dnl
+LIBM=
+case $host in
+*-*-beos* | *-*-cygwin* | *-*-pw32* | *-*-darwin*)
+ # These systems don't have libm, or don't need it
+ ;;
+*-ncr-sysv4.3*)
+ AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM="-lmw")
+ AC_CHECK_LIB(m, cos, LIBM="$LIBM -lm")
+ ;;
+*)
+ AC_CHECK_LIB(m, cos, LIBM="-lm")
+ ;;
+esac
+])# AC_CHECK_LIBM
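AC_CHECK_LIBM only sets the shell variable LIBM; it neither AC_SUBSTs it nor appends it to LIBS, so a configure.ac that wants the value in its Makefiles has to export it itself. A minimal sketch of that usage (the program name is illustrative):

    AC_CHECK_LIBM
    AC_SUBST([LIBM])
    dnl then, e.g. in Makefile.am:  myprog_LDADD = $(LIBM)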
+
+
+# AC_LIBLTDL_CONVENIENCE([DIRECTORY])
+# -----------------------------------
+# sets LIBLTDL to the link flags for the libltdl convenience library and
+# LTDLINCL to the include flags for the libltdl header and adds
+# --enable-ltdl-convenience to the configure arguments. Note that LIBLTDL
+# and LTDLINCL are not AC_SUBSTed, nor is AC_CONFIG_SUBDIRS called. If
+# DIRECTORY is not provided, it is assumed to be `libltdl'. LIBLTDL will
+# be prefixed with '${top_builddir}/' and LTDLINCL will be prefixed with
+# '${top_srcdir}/' (note the single quotes!). If your package is not
+# flat and you're not using automake, define top_builddir and
+# top_srcdir appropriately in the Makefiles.
+AC_DEFUN([AC_LIBLTDL_CONVENIENCE],
+[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
+ case $enable_ltdl_convenience in
+ no) AC_MSG_ERROR([this package needs a convenience libltdl]) ;;
+ "") enable_ltdl_convenience=yes
+ ac_configure_args="$ac_configure_args --enable-ltdl-convenience" ;;
+ esac
+ LIBLTDL='${top_builddir}/'ifelse($#,1,[$1],['libltdl'])/libltdlc.la
+ LTDLINCL='-I${top_srcdir}/'ifelse($#,1,[$1],['libltdl'])
+ # For backwards compatibility, also provide the old INCLTDL name...
+ INCLTDL="$LTDLINCL"
+])# AC_LIBLTDL_CONVENIENCE
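Because the macro deliberately leaves AC_SUBST and AC_CONFIG_SUBDIRS to the caller, a flat package bundling libltdl in the default `libltdl' subdirectory would normally add those calls itself, ahead of the libtool setup that the AC_BEFORE above checks for. A sketch of such a configure.ac fragment:

    AC_LIBLTDL_CONVENIENCE
    AC_SUBST([LIBLTDL])
    AC_SUBST([LTDLINCL])
    AC_CONFIG_SUBDIRS([libltdl])
    AC_PROG_LIBTOOL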
+
+
+# AC_LIBLTDL_INSTALLABLE([DIRECTORY])
+# -----------------------------------
+# sets LIBLTDL to the link flags for the libltdl installable library and
+# LTDLINCL to the include flags for the libltdl header and adds
+# --enable-ltdl-install to the configure arguments. Note that LIBLTDL
+# and LTDLINCL are not AC_SUBSTed, nor is AC_CONFIG_SUBDIRS called. If
+# DIRECTORY is not provided and an installed libltdl is not found, it is
+# assumed to be `libltdl'. LIBLTDL will be prefixed with '${top_builddir}/'
+# and LTDLINCL will be prefixed with '${top_srcdir}/' (note the single
+# quotes!). If your package is not flat and you're not using automake,
+# define top_builddir and top_srcdir appropriately in the Makefiles.
+# In the future, this macro may have to be called after AC_PROG_LIBTOOL.
+AC_DEFUN([AC_LIBLTDL_INSTALLABLE],
+[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
+ AC_CHECK_LIB(ltdl, lt_dlinit,
+ [test x"$enable_ltdl_install" != xyes && enable_ltdl_install=no],
+ [if test x"$enable_ltdl_install" = xno; then
+ AC_MSG_WARN([libltdl not installed, but installation disabled])
+ else
+ enable_ltdl_install=yes
+ fi
+ ])
+ if test x"$enable_ltdl_install" = x"yes"; then
+ ac_configure_args="$ac_configure_args --enable-ltdl-install"
+ LIBLTDL='${top_builddir}/'ifelse($#,1,[$1],['libltdl'])/libltdl.la
+ LTDLINCL='-I${top_srcdir}/'ifelse($#,1,[$1],['libltdl'])
+ else
+ ac_configure_args="$ac_configure_args --enable-ltdl-install=no"
+ LIBLTDL="-lltdl"
+ LTDLINCL=
+ fi
+ # For backwards compatibility, also provide the old INCLTDL name...
+ INCLTDL="$LTDLINCL"
+])# AC_LIBLTDL_INSTALLABLE
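In contrast to the convenience variant, this one first probes for an installed libltdl (via lt_dlinit) and only falls back to building the bundled copy when none is found; end users can also force the choice from the command line, for example:

    ./configure --enable-ltdl-install        # build and install the bundled libltdl
    ./configure --enable-ltdl-install=no     # insist on linking against a system -lltdl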
+
+
+# AC_LIBTOOL_CXX
+# --------------
+# enable support for C++ libraries
+AC_DEFUN([AC_LIBTOOL_CXX],
+[AC_REQUIRE([_LT_AC_LANG_CXX])
+])# AC_LIBTOOL_CXX
+
+
+# _LT_AC_LANG_CXX
+# ---------------
+AC_DEFUN([_LT_AC_LANG_CXX],
+[AC_REQUIRE([AC_PROG_CXX])
+AC_REQUIRE([AC_PROG_CXXCPP])
+_LT_AC_SHELL_INIT([tagnames=${tagnames+${tagnames},}CXX])
+])# _LT_AC_LANG_CXX
+
+
+# AC_LIBTOOL_F77
+# --------------
+# enable support for Fortran 77 libraries
+AC_DEFUN([AC_LIBTOOL_F77],
+[AC_REQUIRE([_LT_AC_LANG_F77])
+])# AC_LIBTOOL_F77
+
+
+# _LT_AC_LANG_F77
+# ---------------
+AC_DEFUN([_LT_AC_LANG_F77],
+[AC_REQUIRE([AC_PROG_F77])
+_LT_AC_SHELL_INIT([tagnames=${tagnames+${tagnames},}F77])
+])# _LT_AC_LANG_F77
+
+
+# AC_LIBTOOL_GCJ
+# --------------
+# enable support for GCJ libraries
+AC_DEFUN([AC_LIBTOOL_GCJ],
+[AC_REQUIRE([_LT_AC_LANG_GCJ])
+])# AC_LIBTOOL_GCJ
+
+
+# _LT_AC_LANG_GCJ
+# ---------------
+AC_DEFUN([_LT_AC_LANG_GCJ],
+[AC_PROVIDE_IFELSE([AC_PROG_GCJ],[],
+ [AC_PROVIDE_IFELSE([A][M_PROG_GCJ],[],
+ [AC_PROVIDE_IFELSE([LT_AC_PROG_GCJ],[],
+ [ifdef([AC_PROG_GCJ],[AC_REQUIRE([AC_PROG_GCJ])],
+ [ifdef([A][M_PROG_GCJ],[AC_REQUIRE([A][M_PROG_GCJ])],
+ [AC_REQUIRE([A][C_PROG_GCJ_OR_A][M_PROG_GCJ])])])])])])
+_LT_AC_SHELL_INIT([tagnames=${tagnames+${tagnames},}GCJ])
+])# _LT_AC_LANG_GCJ
+
+
+# AC_LIBTOOL_RC
+# --------------
+# enable support for Windows resource files
+AC_DEFUN([AC_LIBTOOL_RC],
+[AC_REQUIRE([LT_AC_PROG_RC])
+_LT_AC_SHELL_INIT([tagnames=${tagnames+${tagnames},}RC])
+])# AC_LIBTOOL_RC
+
+
+# AC_LIBTOOL_LANG_C_CONFIG
+# ------------------------
+# Ensure that the configuration vars for the C compiler are
+# suitably defined. Those variables are subsequently used by
+# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'.
+AC_DEFUN([AC_LIBTOOL_LANG_C_CONFIG], [_LT_AC_LANG_C_CONFIG])
+AC_DEFUN([_LT_AC_LANG_C_CONFIG],
+[lt_save_CC="$CC"
+AC_LANG_PUSH(C)
+
+# Source file extension for C test sources.
+ac_ext=c
+
+# Object file extension for compiled C test sources.
+objext=o
+_LT_AC_TAGVAR(objext, $1)=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code="int some_variable = 0;\n"
+
+# Code to be used in simple link tests
+lt_simple_link_test_code='int main(){return(0);}\n'
+
+_LT_AC_SYS_COMPILER
+
+#
+# Check for any special shared library compilation flags.
+#
+_LT_AC_TAGVAR(lt_prog_cc_shlib, $1)=
+if test "$GCC" = no; then
+ case $host_os in
+ sco3.2v5*)
+ _LT_AC_TAGVAR(lt_prog_cc_shlib, $1)='-belf'
+ ;;
+ esac
+fi
+if test -n "$_LT_AC_TAGVAR(lt_prog_cc_shlib, $1)"; then
+ AC_MSG_WARN([`$CC' requires `$_LT_AC_TAGVAR(lt_prog_cc_shlib, $1)' to build shared libraries])
+ if echo "$old_CC $old_CFLAGS " | grep "[[ ]]$_LT_AC_TAGVAR(lt_prog_cc_shlib, $1)[[ ]]" >/dev/null; then :
+ else
+ AC_MSG_WARN([add `$_LT_AC_TAGVAR(lt_prog_cc_shlib, $1)' to the CC or CFLAGS env variable and reconfigure])
+ _LT_AC_TAGVAR(lt_cv_prog_cc_can_build_shared, $1)=no
+ fi
+fi
+
+
+#
+# Check to make sure the static flag actually works.
+#
+AC_LIBTOOL_LINKER_OPTION([if $compiler static flag $_LT_AC_TAGVAR(lt_prog_compiler_static, $1) works],
+ _LT_AC_TAGVAR(lt_prog_compiler_static_works, $1),
+ $_LT_AC_TAGVAR(lt_prog_compiler_static, $1),
+ [],
+ [_LT_AC_TAGVAR(lt_prog_compiler_static, $1)=])
+
+
+AC_LIBTOOL_PROG_COMPILER_NO_RTTI($1)
+AC_LIBTOOL_PROG_COMPILER_PIC($1)
+AC_LIBTOOL_PROG_CC_C_O($1)
+AC_LIBTOOL_SYS_HARD_LINK_LOCKS($1)
+AC_LIBTOOL_PROG_LD_SHLIBS($1)
+AC_LIBTOOL_SYS_DYNAMIC_LINKER($1)
+AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH($1)
+AC_LIBTOOL_SYS_LIB_STRIP
+AC_LIBTOOL_DLOPEN_SELF($1)
+
+# Report which library types will actually be built
+AC_MSG_CHECKING([if libtool supports shared libraries])
+AC_MSG_RESULT([$can_build_shared])
+
+AC_MSG_CHECKING([whether to build shared libraries])
+test "$can_build_shared" = "no" && enable_shared=no
+
+# On AIX, shared libraries and static libraries use the same namespace, and
+# are all built from PIC.
+case "$host_os" in
+aix3*)
+ test "$enable_shared" = yes && enable_static=no
+ if test -n "$RANLIB"; then
+ archive_cmds="$archive_cmds~\$RANLIB \$lib"
+ postinstall_cmds='$RANLIB $lib'
+ fi
+ ;;
+
+aix4*)
+ if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then
+ test "$enable_shared" = yes && enable_static=no
+ fi
+ ;;
+ darwin* | rhapsody*)
+ if test "$GCC" = yes; then
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+ case "$host_os" in
+ rhapsody* | darwin1.[[012]])
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-undefined suppress'
+ ;;
+ *) # Darwin 1.3 on
+ if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-flat_namespace -undefined suppress'
+ else
+ case ${MACOSX_DEPLOYMENT_TARGET} in
+ 10.[[012]])
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-flat_namespace -undefined suppress'
+ ;;
+ 10.*)
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-undefined dynamic_lookup'
+ ;;
+ esac
+ fi
+ ;;
+ esac
+ output_verbose_link_cmd='echo'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs$compiler_flags -install_name $rpath/$soname $verstring'
+ _LT_AC_TAGVAR(module_cmds, $1)='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags'
+ # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin ld's
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs$compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ _LT_AC_TAGVAR(module_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no
+ _LT_AC_TAGVAR(hardcode_automatic, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='-all_load $convenience'
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=yes
+ else
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+esac
+AC_MSG_RESULT([$enable_shared])
+
+AC_MSG_CHECKING([whether to build static libraries])
+# Make sure either enable_shared or enable_static is yes.
+test "$enable_shared" = yes || enable_static=yes
+AC_MSG_RESULT([$enable_static])
+
+AC_LIBTOOL_CONFIG($1)
+
+AC_LANG_POP
+CC="$lt_save_CC"
+])# AC_LIBTOOL_LANG_C_CONFIG
+
+
+# AC_LIBTOOL_LANG_CXX_CONFIG
+# --------------------------
+# Ensure that the configuration vars for the C++ compiler are
+# suitably defined. Those variables are subsequently used by
+# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'.
+AC_DEFUN([AC_LIBTOOL_LANG_CXX_CONFIG], [_LT_AC_LANG_CXX_CONFIG(CXX)])
+AC_DEFUN([_LT_AC_LANG_CXX_CONFIG],
+[AC_LANG_PUSH(C++)
+AC_REQUIRE([AC_PROG_CXX])
+AC_REQUIRE([AC_PROG_CXXCPP])
+
+_LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+_LT_AC_TAGVAR(allow_undefined_flag, $1)=
+_LT_AC_TAGVAR(always_export_symbols, $1)=no
+_LT_AC_TAGVAR(archive_expsym_cmds, $1)=
+_LT_AC_TAGVAR(export_dynamic_flag_spec, $1)=
+_LT_AC_TAGVAR(hardcode_direct, $1)=no
+_LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)=
+_LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)=
+_LT_AC_TAGVAR(hardcode_libdir_separator, $1)=
+_LT_AC_TAGVAR(hardcode_minus_L, $1)=no
+_LT_AC_TAGVAR(hardcode_automatic, $1)=no
+_LT_AC_TAGVAR(module_cmds, $1)=
+_LT_AC_TAGVAR(module_expsym_cmds, $1)=
+_LT_AC_TAGVAR(link_all_deplibs, $1)=unknown
+_LT_AC_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
+_LT_AC_TAGVAR(no_undefined_flag, $1)=
+_LT_AC_TAGVAR(whole_archive_flag_spec, $1)=
+_LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=no
+
+# Dependencies to place before and after the object being linked:
+_LT_AC_TAGVAR(predep_objects, $1)=
+_LT_AC_TAGVAR(postdep_objects, $1)=
+_LT_AC_TAGVAR(predeps, $1)=
+_LT_AC_TAGVAR(postdeps, $1)=
+_LT_AC_TAGVAR(compiler_lib_search_path, $1)=
+
+# Source file extension for C++ test sources.
+ac_ext=cc
+
+# Object file extension for compiled C++ test sources.
+objext=o
+_LT_AC_TAGVAR(objext, $1)=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code="int some_variable = 0;\n"
+
+# Code to be used in simple link tests
+lt_simple_link_test_code='int main(int, char *[]) { return(0); }\n'
+
+# ltmain only uses $CC for tagged configurations so make sure $CC is set.
+_LT_AC_SYS_COMPILER
+
+# Allow CC to be a program name with arguments.
+lt_save_CC=$CC
+lt_save_LD=$LD
+lt_save_GCC=$GCC
+GCC=$GXX
+lt_save_with_gnu_ld=$with_gnu_ld
+lt_save_path_LD=$lt_cv_path_LD
+if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then
+ lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx
+else
+ unset lt_cv_prog_gnu_ld
+fi
+if test -n "${lt_cv_path_LDCXX+set}"; then
+ lt_cv_path_LD=$lt_cv_path_LDCXX
+else
+ unset lt_cv_path_LD
+fi
+test -z "${LDCXX+set}" || LD=$LDCXX
+CC=${CXX-"c++"}
+compiler=$CC
+_LT_AC_TAGVAR(compiler, $1)=$CC
+cc_basename=`$echo X"$compiler" | $Xsed -e 's%^.*/%%'`
+
+# We don't want -fno-exceptions when compiling C++ code, so set the
+# no_builtin_flag separately
+if test "$GXX" = yes; then
+ _LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin'
+else
+ _LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=
+fi
+
+if test "$GXX" = yes; then
+ # Set up default GNU C++ configuration
+
+ AC_PROG_LD
+
+ # Check if GNU C++ uses GNU ld as the underlying linker, since the
+ # archiving commands below assume that GNU ld is being used.
+ if test "$with_gnu_ld" = yes; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath ${wl}$libdir'
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'
+
+ # If archive_cmds runs LD, not CC, wlarc should be empty
+ # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to
+ # investigate it a little bit more. (MM)
+ wlarc='${wl}'
+
+ # ancient GNU ld didn't support --whole-archive et al.
+ if eval "`$CC -print-prog-name=ld` --help 2>&1" | \
+ grep 'no-whole-archive' > /dev/null; then
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
+ else
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)=
+ fi
+ else
+ with_gnu_ld=no
+ wlarc=
+
+ # A generic and very simple default shared library creation
+ # command for GNU C++ for the case where it uses the native
+ # linker, instead of GNU ld. If possible, this setting should be
+ # overridden to take advantage of the native linker features on
+ # the platform it is being used on.
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib'
+ fi
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"'
+
+else
+ GXX=no
+ with_gnu_ld=no
+ wlarc=
+fi
+
+# PORTME: fill in a description of your system's C++ link characteristics
+AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
+_LT_AC_TAGVAR(ld_shlibs, $1)=yes
+case $host_os in
+ aix3*)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ aix4* | aix5*)
+ if test "$host_cpu" = ia64; then
+ # On IA64, the linker does run time linking by default, so we don't
+ # have to do anything special.
+ aix_use_runtimelinking=no
+ exp_sym_flag='-Bexport'
+ no_entry_flag=""
+ else
+ aix_use_runtimelinking=no
+
+ # Test if we are trying to use run time linking or normal
+ # AIX style linking. If -brtl is somewhere in LDFLAGS, we
+ # need to do runtime linking.
+ case $host_os in aix4.[[23]]|aix4.[[23]].*|aix5*)
+ for ld_flag in $LDFLAGS; do
+ case $ld_flag in
+ *-brtl*)
+ aix_use_runtimelinking=yes
+ break
+ ;;
+ esac
+ done
+ esac
+
+ exp_sym_flag='-bexport'
+ no_entry_flag='-bnoentry'
+ fi
+
+ # When large executables or shared objects are built, AIX ld can
+ # have problems creating the table of contents. If linking a library
+ # or program results in "error TOC overflow" add -mminimal-toc to
+ # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
+ # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
+
+ _LT_AC_TAGVAR(archive_cmds, $1)=''
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=':'
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=yes
+
+ if test "$GXX" = yes; then
+ case $host_os in aix4.[012]|aix4.[012].*)
+ # We only want to do this on AIX 4.2 and lower, the check
+ # below for broken collect2 doesn't work under 4.3+
+ collect2name=`${CC} -print-prog-name=collect2`
+ if test -f "$collect2name" && \
+ strings "$collect2name" | grep resolve_lib_name >/dev/null
+ then
+ # We have reworked collect2
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ else
+ # We have old collect2
+ _LT_AC_TAGVAR(hardcode_direct, $1)=unsupported
+ # It fails to find uninstalled libraries when the uninstalled
+ # path is not listed in the libpath. Setting hardcode_minus_L
+ # to unsupported forces relinking
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=
+ fi
+ esac
+ shared_flag='-shared'
+ else
+ # not using gcc
+ if test "$host_cpu" = ia64; then
+ # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
+ # chokes on -Wl,-G. The following line is correct:
+ shared_flag='-G'
+ else
+ if test "$aix_use_runtimelinking" = yes; then
+ shared_flag='${wl}-G'
+ else
+ shared_flag='${wl}-bM:SRE'
+ fi
+ fi
+ fi
+
+ # It seems that -bexpall does not export symbols beginning with
+ # underscore (_), so it is better to generate a list of symbols to export.
+ _LT_AC_TAGVAR(always_export_symbols, $1)=yes
+ if test "$aix_use_runtimelinking" = yes; then
+ # Warning - without using the other runtime loading flags (-brtl),
+ # -berok will link without error, but may produce a broken library.
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-berok'
+ # Determine the default libpath from the value encoded in an empty executable.
+ _LT_AC_SYS_LIBPATH_AIX
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+ else
+ if test "$host_cpu" = ia64; then
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib'
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)="-z nodefs"
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols"
+ else
+ # Determine the default libpath from the value encoded in an empty executable.
+ _LT_AC_SYS_LIBPATH_AIX
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+ # Warning - without using the other run time loading flags,
+ # -berok will link without error, but may produce a broken library.
+ _LT_AC_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok'
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok'
+ # -bexpall does not export symbols beginning with underscore (_)
+ _LT_AC_TAGVAR(always_export_symbols, $1)=yes
+ # Exported symbols can be pulled into shared objects from archives
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)=' '
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=yes
+ # This is similar to how AIX traditionally builds its shared libraries.
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}-bE:$export_symbols ${wl}-bnoentry${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname'
+ fi
+ fi
+ ;;
+ chorus*)
+ case $cc_basename in
+ *)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ esac
+ ;;
+
+ cygwin* | mingw* | pw32*)
+ # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
+ # as there is no search path for DLLs.
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported
+ _LT_AC_TAGVAR(always_export_symbols, $1)=no
+ _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
+
+ if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ # If the export-symbols file already is a .def file (1st line
+ # is EXPORTS), use it as is; otherwise, prepend...
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+ cp $export_symbols $output_objdir/$soname.def;
+ else
+ echo EXPORTS > $output_objdir/$soname.def;
+ cat $export_symbols >> $output_objdir/$soname.def;
+ fi~
+ $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ else
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+
+ darwin* | rhapsody*)
+ if test "$GXX" = yes; then
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+ case "$host_os" in
+ rhapsody* | darwin1.[[012]])
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-undefined suppress'
+ ;;
+ *) # Darwin 1.3 on
+ if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-flat_namespace -undefined suppress'
+ else
+ case ${MACOSX_DEPLOYMENT_TARGET} in
+ 10.[[012]])
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-flat_namespace -undefined suppress'
+ ;;
+ 10.*)
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-undefined dynamic_lookup'
+ ;;
+ esac
+ fi
+ ;;
+ esac
+ lt_int_apple_cc_single_mod=no
+ output_verbose_link_cmd='echo'
+ if $CC -dumpspecs 2>&1 | grep 'single_module' >/dev/null ; then
+ lt_int_apple_cc_single_mod=yes
+ fi
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ fi
+ _LT_AC_TAGVAR(module_cmds, $1)='$CC ${wl}-bind_at_load $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags'
+
+ # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin ld's
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ else
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ fi
+ _LT_AC_TAGVAR(module_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no
+ _LT_AC_TAGVAR(hardcode_automatic, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='-all_load $convenience'
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=yes
+ else
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+
+ dgux*)
+ case $cc_basename in
+ ec++)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ ghcx)
+ # Green Hills C++ Compiler
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ esac
+ ;;
+ freebsd[12]*)
+ # C++ shared libraries reported to be fairly broken before switch to ELF
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ freebsd-elf*)
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+ ;;
+ freebsd* | kfreebsd*-gnu)
+ # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF
+ # conventions
+ _LT_AC_TAGVAR(ld_shlibs, $1)=yes
+ ;;
+ gnu*)
+ ;;
+ hpux9*)
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH,
+ # but as the default
+ # location of the library.
+
+ case $cc_basename in
+ CC)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ aCC)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/$soname~$CC -b ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | egrep "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ *)
+ if test "$GXX" = yes; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ else
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+ esac
+ ;;
+ hpux10*|hpux11*)
+ if test $with_gnu_ld = no; then
+ case "$host_cpu" in
+ hppa*64*)
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)='+b $libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ ;;
+ ia64*)
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ ;;
+ *)
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
+ ;;
+ esac
+ fi
+ case "$host_cpu" in
+ hppa*64*)
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+ ia64*)
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH,
+ # but as the default
+ # location of the library.
+ ;;
+ *)
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH,
+ # but as the default
+ # location of the library.
+ ;;
+ esac
+
+ case $cc_basename in
+ CC)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ aCC)
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -b +h $soname -o $lib $linker_flags $libobjs $deplibs'
+ ;;
+ *)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ ;;
+ esac
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | grep "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ *)
+ if test "$GXX" = yes; then
+ if test $with_gnu_ld = no; then
+ case "$host_cpu" in
+ ia64*|hppa*64*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -b +h $soname -o $lib $linker_flags $libobjs $deplibs'
+ ;;
+ *)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ ;;
+ esac
+ fi
+ else
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+ esac
+ ;;
+ irix5* | irix6*)
+ case $cc_basename in
+ CC)
+ # SGI C++
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib'
+
+ # Archives containing C++ object files must be created using
+ # "CC -ar", where "CC" is the IRIX C++ compiler. This is
+ # necessary to make sure instantiated templates are included
+ # in the archive.
+ _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -ar -WR,-u -o $oldlib $oldobjs'
+ ;;
+ *)
+ if test "$GXX" = yes; then
+ if test "$with_gnu_ld" = no; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${objdir}/so_locations -o $lib'
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` -o $lib'
+ fi
+ fi
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=yes
+ ;;
+ esac
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ ;;
+ linux*)
+ case $cc_basename in
+ KCC)
+ # Kuck and Associates, Inc. (KAI) C++ Compiler
+
+ # KCC will only create a shared library if the output file
+ # ends with ".so" (or ".sl" for HP-UX), so rename the library
+ # to its proper name (with version) after linking.
+ _LT_AC_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib ${wl}-retain-symbols-file,$export_symbols; mv \$templib $lib'
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | grep "ld"`; rm -f libconftest$shared_ext; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath,$libdir'
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'
+
+ # Archives containing C++ object files must be created using
+ # "CC -Bstatic", where "CC" is the KAI C++ compiler.
+ _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs'
+ ;;
+ icpc)
+ # Intel C++
+ with_gnu_ld=yes
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive'
+ ;;
+ cxx)
+ # Compaq C++
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib ${wl}-retain-symbols-file $wl$export_symbols'
+
+ runpath_var=LD_RUN_PATH
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ esac
+ ;;
+ lynxos*)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ m88k*)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ mvs*)
+ case $cc_basename in
+ cxx)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ esac
+ ;;
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags'
+ wlarc=
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ fi
+ # Workaround some broken pre-1.5 toolchains
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"'
+ ;;
+ osf3*)
+ case $cc_basename in
+ KCC)
+ # Kuck and Associates, Inc. (KAI) C++ Compiler
+
+ # KCC will only create a shared library if the output file
+ # ends with ".so" (or ".sl" for HP-UX), so rename the library
+ # to its proper name (with version) after linking.
+ _LT_AC_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+
+ # Archives containing C++ object files must be created using
+ # "CC -Bstatic", where "CC" is the KAI C++ compiler.
+ _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs'
+
+ ;;
+ RCC)
+ # Rational C++ 2.4.1
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ cxx)
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $soname `test -n "$verstring" && echo ${wl}-set_version $verstring` -update_registry ${objdir}/so_locations -o $lib'
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld" | grep -v "ld:"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ *)
+ if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${objdir}/so_locations -o $lib'
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"'
+
+ else
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+ esac
+ ;;
+ osf4* | osf5*)
+ case $cc_basename in
+ KCC)
+ # Kuck and Associates, Inc. (KAI) C++ Compiler
+
+ # KCC will only create a shared library if the output file
+ # ends with ".so" (or ".sl" for HP-UX), so rename the library
+ # to its proper name (with version) after linking.
+ _LT_AC_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+
+ # Archives containing C++ object files must be created using
+ # the KAI C++ compiler.
+ _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -o $oldlib $oldobjs'
+ ;;
+ RCC)
+ # Rational C++ 2.4.1
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ cxx)
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~
+ echo "-hidden">> $lib.exp~
+ $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname -Wl,-input -Wl,$lib.exp `test -n "$verstring" && echo -set_version $verstring` -update_registry $objdir/so_locations -o $lib~
+ $rm $lib.exp'
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld" | grep -v "ld:"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ *)
+ if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${objdir}/so_locations -o $lib'
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"'
+
+ else
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+ esac
+ ;;
+ psos*)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ sco*)
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+ case $cc_basename in
+ CC)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ esac
+ ;;
+ sunos4*)
+ case $cc_basename in
+ CC)
+ # Sun C++ 4.x
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ lcc)
+ # Lucid
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ esac
+ ;;
+ solaris*)
+ case $cc_basename in
+ CC)
+ # Sun C++ 4.2, 5.x and Centerline C++
+ _LT_AC_TAGVAR(no_undefined_flag, $1)=' -zdefs'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -nolib -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -G${allow_undefined_flag} -nolib ${wl}-M ${wl}$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp'
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ case $host_os in
+ solaris2.[0-5] | solaris2.[0-5].*) ;;
+ *)
+ # The C++ compiler is used as linker so we must use $wl
+ # flag to pass the commands to the underlying system
+ # linker.
+ # Supported since Solaris 2.6 (maybe 2.5.1?)
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract'
+ ;;
+ esac
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=yes
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC -G $CFLAGS -v conftest.$objext 2>&1 | grep "\-[[LR]]"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+
+ # Archives containing C++ object files must be created using
+ # "CC -xar", where "CC" is the Sun C++ compiler. This is
+ # necessary to make sure instantiated templates are included
+ # in the archive.
+ _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs'
+ ;;
+ gcx)
+ # Green Hills C++ Compiler
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+
+ # The C++ compiler must be used to create the archive.
+ _LT_AC_TAGVAR(old_archive_cmds, $1)='$CC $LDFLAGS -archive -o $oldlib $oldobjs'
+ ;;
+ *)
+ # GNU C++ compiler with Solaris linker
+ if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ _LT_AC_TAGVAR(no_undefined_flag, $1)=' ${wl}-z ${wl}defs'
+ if $CC --version | grep -v '^2\.7' > /dev/null; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp'
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd="$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep \"\-L\""
+ else
+ # g++ 2.7 appears to require `-G' NOT `-shared' on this
+ # platform.
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -G -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp'
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd="$CC -G $CFLAGS -v conftest.$objext 2>&1 | grep \"\-L\""
+ fi
+
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $wl$libdir'
+ fi
+ ;;
+ esac
+ ;;
+ sysv5OpenUNIX8* | sysv5UnixWare7* | sysv5uw[[78]]* | unixware7*)
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+ ;;
+ tandem*)
+ case $cc_basename in
+ NCC)
+ # NonStop-UX NCC 3.20
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ esac
+ ;;
+ vxworks*)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+esac
+AC_MSG_RESULT([$_LT_AC_TAGVAR(ld_shlibs, $1)])
+test "$_LT_AC_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no
+
+_LT_AC_TAGVAR(GCC, $1)="$GXX"
+_LT_AC_TAGVAR(LD, $1)="$LD"
+
+AC_LIBTOOL_POSTDEP_PREDEP($1)
+AC_LIBTOOL_PROG_COMPILER_PIC($1)
+AC_LIBTOOL_PROG_CC_C_O($1)
+AC_LIBTOOL_SYS_HARD_LINK_LOCKS($1)
+AC_LIBTOOL_PROG_LD_SHLIBS($1)
+AC_LIBTOOL_SYS_DYNAMIC_LINKER($1)
+AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH($1)
+AC_LIBTOOL_SYS_LIB_STRIP
+AC_LIBTOOL_DLOPEN_SELF($1)
+
+AC_LIBTOOL_CONFIG($1)
+
+AC_LANG_POP
+CC=$lt_save_CC
+LDCXX=$LD
+LD=$lt_save_LD
+GCC=$lt_save_GCC
+with_gnu_ldcxx=$with_gnu_ld
+with_gnu_ld=$lt_save_with_gnu_ld
+lt_cv_path_LDCXX=$lt_cv_path_LD
+lt_cv_path_LD=$lt_save_path_LD
+lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld
+lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld
+])# AC_LIBTOOL_LANG_CXX_CONFIG
+
+# AC_LIBTOOL_POSTDEP_PREDEP([TAGNAME])
+# ------------------------
+# Figure out "hidden" library dependencies from verbose
+# compiler output when linking a shared library.
+# Parse the compiler output and extract the necessary
+# objects, libraries and library flags.
+AC_DEFUN([AC_LIBTOOL_POSTDEP_PREDEP],[
+dnl we can't use the lt_simple_compile_test_code here,
+dnl because it contains code intended for an executable,
+dnl not a library. It's possible we should let each
+dnl tag define a new lt_????_link_test_code variable,
+dnl but it's only used here...
+ifelse([$1],[],[cat > conftest.$ac_ext <<EOF
+int a;
+void foo (void) { a = 0; }
+EOF
+],[$1],[CXX],[cat > conftest.$ac_ext <<EOF
+class Foo
+{
+public:
+ Foo (void) { a = 0; }
+private:
+ int a;
+};
+EOF
+],[$1],[F77],[cat > conftest.$ac_ext <<EOF
+ subroutine foo
+ implicit none
+ integer*4 a
+ a=0
+ return
+ end
+EOF
+],[$1],[GCJ],[cat > conftest.$ac_ext <<EOF
+public class foo {
+ private int a;
+ public void bar (void) {
+ a = 0;
+ }
+};
+EOF
+])
+dnl Parse the compiler output and extract the necessary
+dnl objects, libraries and library flags.
+if AC_TRY_EVAL(ac_compile); then
+ # Parse the compiler output and extract the necessary
+ # objects, libraries and library flags.
+
+ # Sentinel used to keep track of whether or not we are before
+ # the conftest object file.
+ pre_test_object_deps_done=no
+
+ # Some architectures use `case' statements (and thus `*' patterns) in
+ # $output_verbose_link_cmd; quote them here so they cannot trigger glob
+ # expansion during the loop eval below.
+ output_verbose_link_cmd="`$echo \"X$output_verbose_link_cmd\" | $Xsed -e \"$no_glob_subst\"`"
+
+ for p in `eval $output_verbose_link_cmd`; do
+ case $p in
+
+ -L* | -R* | -l*)
+ # Some compilers place space between "-{L,R}" and the path.
+ # Remove the space.
+ if test $p = "-L" \
+ || test $p = "-R"; then
+ prev=$p
+ continue
+ else
+ prev=
+ fi
+
+ if test "$pre_test_object_deps_done" = no; then
+ case $p in
+ -L* | -R*)
+ # Internal compiler library paths should come after those
+ # provided by the user. The postdeps already come after the
+ # user supplied libs so there is no need to process them.
+ if test -z "$_LT_AC_TAGVAR(compiler_lib_search_path, $1)"; then
+ _LT_AC_TAGVAR(compiler_lib_search_path, $1)="${prev}${p}"
+ else
+ _LT_AC_TAGVAR(compiler_lib_search_path, $1)="${_LT_AC_TAGVAR(compiler_lib_search_path, $1)} ${prev}${p}"
+ fi
+ ;;
+ # The "-l" case would never come before the object being
+ # linked, so don't bother handling this case.
+ esac
+ else
+ if test -z "$_LT_AC_TAGVAR(postdeps, $1)"; then
+ _LT_AC_TAGVAR(postdeps, $1)="${prev}${p}"
+ else
+ _LT_AC_TAGVAR(postdeps, $1)="${_LT_AC_TAGVAR(postdeps, $1)} ${prev}${p}"
+ fi
+ fi
+ ;;
+
+ *.$objext)
+ # This assumes that the test object file only shows up
+ # once in the compiler output.
+ if test "$p" = "conftest.$objext"; then
+ pre_test_object_deps_done=yes
+ continue
+ fi
+
+ if test "$pre_test_object_deps_done" = no; then
+ if test -z "$_LT_AC_TAGVAR(predep_objects, $1)"; then
+ _LT_AC_TAGVAR(predep_objects, $1)="$p"
+ else
+ _LT_AC_TAGVAR(predep_objects, $1)="$_LT_AC_TAGVAR(predep_objects, $1) $p"
+ fi
+ else
+ if test -z "$_LT_AC_TAGVAR(postdep_objects, $1)"; then
+ _LT_AC_TAGVAR(postdep_objects, $1)="$p"
+ else
+ _LT_AC_TAGVAR(postdep_objects, $1)="$_LT_AC_TAGVAR(postdep_objects, $1) $p"
+ fi
+ fi
+ ;;
+
+ *) ;; # Ignore the rest.
+
+ esac
+ done
+
+ # Clean up.
+ rm -f a.out a.exe
+else
+ echo "libtool.m4: error: problem compiling $1 test program"
+fi
+
+$rm -f conftest.$objext
+
+case " $_LT_AC_TAGVAR(postdeps, $1) " in
+*" -lc "*) _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no ;;
+esac
+])# AC_LIBTOOL_POSTDEP_PREDEP
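To make the bookkeeping above concrete, here is a hypothetical verbose C++ link line (every path is invented for illustration):

    .../crtbegin.o -L/usr/lib/gcc/... conftest.o -lstdc++ -lm -lc .../crtend.o

Everything emitted before conftest.o ends up in predep_objects and compiler_lib_search_path, while the libraries and objects after it become postdeps and postdep_objects; those are the values that AC_LIBTOOL_CONFIG later writes into the generated `libtool' script.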
+
+# AC_LIBTOOL_LANG_F77_CONFIG
+# ------------------------
+# Ensure that the configuration vars for the Fortran 77 compiler are
+# suitably defined. Those variables are subsequently used by
+# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'.
+AC_DEFUN([AC_LIBTOOL_LANG_F77_CONFIG], [_LT_AC_LANG_F77_CONFIG(F77)])
+AC_DEFUN([_LT_AC_LANG_F77_CONFIG],
+[AC_REQUIRE([AC_PROG_F77])
+AC_LANG_PUSH(Fortran 77)
+
+_LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+_LT_AC_TAGVAR(allow_undefined_flag, $1)=
+_LT_AC_TAGVAR(always_export_symbols, $1)=no
+_LT_AC_TAGVAR(archive_expsym_cmds, $1)=
+_LT_AC_TAGVAR(export_dynamic_flag_spec, $1)=
+_LT_AC_TAGVAR(hardcode_direct, $1)=no
+_LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)=
+_LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)=
+_LT_AC_TAGVAR(hardcode_libdir_separator, $1)=
+_LT_AC_TAGVAR(hardcode_minus_L, $1)=no
+_LT_AC_TAGVAR(hardcode_automatic, $1)=no
+_LT_AC_TAGVAR(module_cmds, $1)=
+_LT_AC_TAGVAR(module_expsym_cmds, $1)=
+_LT_AC_TAGVAR(link_all_deplibs, $1)=unknown
+_LT_AC_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
+_LT_AC_TAGVAR(no_undefined_flag, $1)=
+_LT_AC_TAGVAR(whole_archive_flag_spec, $1)=
+_LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=no
+
+# Source file extension for f77 test sources.
+ac_ext=f
+
+# Object file extension for compiled f77 test sources.
+objext=o
+_LT_AC_TAGVAR(objext, $1)=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code=" subroutine t\n return\n end\n"
+
+# Code to be used in simple link tests
+lt_simple_link_test_code=" program t\n end\n"
+
+# ltmain only uses $CC for tagged configurations so make sure $CC is set.
+_LT_AC_SYS_COMPILER
+
+# Allow CC to be a program name with arguments.
+lt_save_CC="$CC"
+CC=${F77-"f77"}
+compiler=$CC
+_LT_AC_TAGVAR(compiler, $1)=$CC
+cc_basename=`$echo X"$compiler" | $Xsed -e 's%^.*/%%'`
+
+AC_MSG_CHECKING([if libtool supports shared libraries])
+AC_MSG_RESULT([$can_build_shared])
+
+AC_MSG_CHECKING([whether to build shared libraries])
+test "$can_build_shared" = "no" && enable_shared=no
+
+# On AIX, shared libraries and static libraries use the same namespace, and
+# are all built from PIC.
+case "$host_os" in
+aix3*)
+ test "$enable_shared" = yes && enable_static=no
+ if test -n "$RANLIB"; then
+ archive_cmds="$archive_cmds~\$RANLIB \$lib"
+ postinstall_cmds='$RANLIB $lib'
+ fi
+ ;;
+aix4*)
+ test "$enable_shared" = yes && enable_static=no
+ ;;
+esac
+AC_MSG_RESULT([$enable_shared])
+
+AC_MSG_CHECKING([whether to build static libraries])
+# Make sure either enable_shared or enable_static is yes.
+test "$enable_shared" = yes || enable_static=yes
+AC_MSG_RESULT([$enable_static])
+
+test "$_LT_AC_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no
+
+_LT_AC_TAGVAR(GCC, $1)="$G77"
+_LT_AC_TAGVAR(LD, $1)="$LD"
+
+AC_LIBTOOL_PROG_COMPILER_PIC($1)
+AC_LIBTOOL_PROG_CC_C_O($1)
+AC_LIBTOOL_SYS_HARD_LINK_LOCKS($1)
+AC_LIBTOOL_PROG_LD_SHLIBS($1)
+AC_LIBTOOL_SYS_DYNAMIC_LINKER($1)
+AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH($1)
+AC_LIBTOOL_SYS_LIB_STRIP
+
+
+AC_LIBTOOL_CONFIG($1)
+
+AC_LANG_POP
+CC="$lt_save_CC"
+])# AC_LIBTOOL_LANG_F77_CONFIG
+
+
+# AC_LIBTOOL_LANG_GCJ_CONFIG
+# --------------------------
+# Ensure that the configuration vars for the GCJ compiler are
+# suitably defined. Those variables are subsequently used by
+# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'.
+AC_DEFUN([AC_LIBTOOL_LANG_GCJ_CONFIG], [_LT_AC_LANG_GCJ_CONFIG(GCJ)])
+AC_DEFUN([_LT_AC_LANG_GCJ_CONFIG],
+[AC_LANG_SAVE
+
+# Source file extension for Java test sources.
+ac_ext=java
+
+# Object file extension for compiled Java test sources.
+objext=o
+_LT_AC_TAGVAR(objext, $1)=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code="class foo {}\n"
+
+# Code to be used in simple link tests
+lt_simple_link_test_code='public class conftest { public static void main(String[] argv) {}; }\n'
+
+# ltmain only uses $CC for tagged configurations so make sure $CC is set.
+_LT_AC_SYS_COMPILER
+
+# Allow CC to be a program name with arguments.
+lt_save_CC="$CC"
+CC=${GCJ-"gcj"}
+compiler=$CC
+_LT_AC_TAGVAR(compiler, $1)=$CC
+
+# GCJ did not yet exist back when GCC did not implicitly link libc in.
+_LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+
+AC_LIBTOOL_PROG_COMPILER_NO_RTTI($1)
+AC_LIBTOOL_PROG_COMPILER_PIC($1)
+AC_LIBTOOL_PROG_CC_C_O($1)
+AC_LIBTOOL_SYS_HARD_LINK_LOCKS($1)
+AC_LIBTOOL_PROG_LD_SHLIBS($1)
+AC_LIBTOOL_SYS_DYNAMIC_LINKER($1)
+AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH($1)
+AC_LIBTOOL_SYS_LIB_STRIP
+AC_LIBTOOL_DLOPEN_SELF($1)
+
+AC_LIBTOOL_CONFIG($1)
+
+AC_LANG_RESTORE
+CC="$lt_save_CC"
+])# AC_LIBTOOL_LANG_GCJ_CONFIG
+
+
+# AC_LIBTOOL_LANG_RC_CONFIG
+# -------------------------
+# Ensure that the configuration vars for the Windows resource compiler are
+# suitably defined. Those variables are subsequently used by
+# AC_LIBTOOL_CONFIG to write the compiler configuration to `libtool'.
+AC_DEFUN([AC_LIBTOOL_LANG_RC_CONFIG], [_LT_AC_LANG_RC_CONFIG(RC)])
+AC_DEFUN([_LT_AC_LANG_RC_CONFIG],
+[AC_LANG_SAVE
+
+# Source file extension for RC test sources.
+ac_ext=rc
+
+# Object file extension for compiled RC test sources.
+objext=o
+_LT_AC_TAGVAR(objext, $1)=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }\n'
+
+# Code to be used in simple link tests
+lt_simple_link_test_code="$lt_simple_compile_test_code"
+
+# ltmain only uses $CC for tagged configurations so make sure $CC is set.
+_LT_AC_SYS_COMPILER
+
+# Allow CC to be a program name with arguments.
+lt_save_CC="$CC"
+CC=${RC-"windres"}
+compiler=$CC
+_LT_AC_TAGVAR(compiler, $1)=$CC
+_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes
+
+AC_LIBTOOL_CONFIG($1)
+
+AC_LANG_RESTORE
+CC="$lt_save_CC"
+])# AC_LIBTOOL_LANG_RC_CONFIG
+
+
+# AC_LIBTOOL_CONFIG([TAGNAME])
+# ----------------------------
+# If TAGNAME is not passed, then create an initial libtool script
+# with a default configuration from the untagged config vars. Otherwise
+# add code to config.status for appending the configuration named by
+# TAGNAME from the matching tagged config vars.
+AC_DEFUN([AC_LIBTOOL_CONFIG],
+[# The else clause should only fire when bootstrapping the
+# libtool distribution, otherwise you forgot to ship ltmain.sh
+# with your package, and you will get complaints that there are
+# no rules to generate ltmain.sh.
+if test -f "$ltmain"; then
+ # See if we are running on zsh, and set the options which allow our commands through
+ # without removal of \ escapes.
+ if test -n "${ZSH_VERSION+set}" ; then
+ setopt NO_GLOB_SUBST
+ fi
+ # Now quote all the things that may contain metacharacters while being
+ # careful not to overquote the AC_SUBSTed values. We take copies of the
+ # variables and quote the copies for generation of the libtool script.
+ for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC NM \
+ SED SHELL STRIP \
+ libname_spec library_names_spec soname_spec extract_expsyms_cmds \
+ old_striplib striplib file_magic_cmd finish_cmds finish_eval \
+ deplibs_check_method reload_flag reload_cmds need_locks \
+ lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ sys_lib_search_path_spec sys_lib_dlsearch_path_spec \
+ old_postinstall_cmds old_postuninstall_cmds \
+ _LT_AC_TAGVAR(compiler, $1) \
+ _LT_AC_TAGVAR(CC, $1) \
+ _LT_AC_TAGVAR(LD, $1) \
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1) \
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1) \
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1) \
+ _LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) \
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1) \
+ _LT_AC_TAGVAR(thread_safe_flag_spec, $1) \
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1) \
+ _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1) \
+ _LT_AC_TAGVAR(old_archive_cmds, $1) \
+ _LT_AC_TAGVAR(old_archive_from_new_cmds, $1) \
+ _LT_AC_TAGVAR(predep_objects, $1) \
+ _LT_AC_TAGVAR(postdep_objects, $1) \
+ _LT_AC_TAGVAR(predeps, $1) \
+ _LT_AC_TAGVAR(postdeps, $1) \
+ _LT_AC_TAGVAR(compiler_lib_search_path, $1) \
+ _LT_AC_TAGVAR(archive_cmds, $1) \
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1) \
+ _LT_AC_TAGVAR(postinstall_cmds, $1) \
+ _LT_AC_TAGVAR(postuninstall_cmds, $1) \
+ _LT_AC_TAGVAR(old_archive_from_expsyms_cmds, $1) \
+ _LT_AC_TAGVAR(allow_undefined_flag, $1) \
+ _LT_AC_TAGVAR(no_undefined_flag, $1) \
+ _LT_AC_TAGVAR(export_symbols_cmds, $1) \
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1) \
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1) \
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1) \
+ _LT_AC_TAGVAR(hardcode_automatic, $1) \
+ _LT_AC_TAGVAR(module_cmds, $1) \
+ _LT_AC_TAGVAR(module_expsym_cmds, $1) \
+ _LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1) \
+ _LT_AC_TAGVAR(exclude_expsyms, $1) \
+ _LT_AC_TAGVAR(include_expsyms, $1); do
+
+ case $var in
+ _LT_AC_TAGVAR(old_archive_cmds, $1) | \
+ _LT_AC_TAGVAR(old_archive_from_new_cmds, $1) | \
+ _LT_AC_TAGVAR(archive_cmds, $1) | \
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1) | \
+ _LT_AC_TAGVAR(module_cmds, $1) | \
+ _LT_AC_TAGVAR(module_expsym_cmds, $1) | \
+ _LT_AC_TAGVAR(old_archive_from_expsyms_cmds, $1) | \
+ _LT_AC_TAGVAR(export_symbols_cmds, $1) | \
+ extract_expsyms_cmds | reload_cmds | finish_cmds | \
+ postinstall_cmds | postuninstall_cmds | \
+ old_postinstall_cmds | old_postuninstall_cmds | \
+ sys_lib_search_path_spec | sys_lib_dlsearch_path_spec)
+ # Double-quote double-evaled strings.
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\""
+ ;;
+ *)
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\""
+ ;;
+ esac
+ done
+
+ case $lt_echo in
+ *'\[$]0 --fallback-echo"')
+ lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\[$]0 --fallback-echo"[$]/[$]0 --fallback-echo"/'`
+ ;;
+ esac
+
+ifelse([$1], [],
+ [cfgfile="${ofile}T"
+ trap "$rm \"$cfgfile\"; exit 1" 1 2 15
+ $rm -f "$cfgfile"
+ AC_MSG_NOTICE([creating $ofile])],
+ [cfgfile="$ofile"])
+
+ cat <<__EOF__ >> "$cfgfile"
+ifelse([$1], [],
+[#! $SHELL
+
+# `$echo "$cfgfile" | sed 's%^.*/%%'` - Provide generalized library-building support services.
+# Generated automatically by $PROGRAM (GNU $PACKAGE $VERSION$TIMESTAMP)
+# NOTE: Changes made to this file will be lost: look at ltmain.sh.
+#
+# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001
+# Free Software Foundation, Inc.
+#
+# This file is part of GNU Libtool:
+# Originally by Gordon Matzigkeit <gord at gnu.ai.mit.edu>, 1996
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# A sed program that does not truncate output.
+SED=$lt_SED
+
+# Sed that helps us avoid accidentally triggering echo(1) options like -n.
+Xsed="$SED -e s/^X//"
+
+# The HP-UX ksh and POSIX shell print the target directory to stdout
+# if CDPATH is set.
+if test "X\${CDPATH+set}" = Xset; then CDPATH=:; export CDPATH; fi
+
+# The names of the tagged configurations supported by this script.
+available_tags=
+
+# ### BEGIN LIBTOOL CONFIG],
+[# ### BEGIN LIBTOOL TAG CONFIG: $tagname])
+
+# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`:
+
+# Shell to use when invoking shell scripts.
+SHELL=$lt_SHELL
+
+# Whether or not to build shared libraries.
+build_libtool_libs=$enable_shared
+
+# Whether or not to build static libraries.
+build_old_libs=$enable_static
+
+# Whether or not to add -lc for building shared libraries.
+build_libtool_need_lc=$_LT_AC_TAGVAR(archive_cmds_need_lc, $1)
+
+# Whether or not to disallow shared libs when runtime libs are static
+allow_libtool_libs_with_static_runtimes=$_LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)
+
+# Whether or not to optimize for fast installation.
+fast_install=$enable_fast_install
+
+# The host system.
+host_alias=$host_alias
+host=$host
+
+# An echo program that does not interpret backslashes.
+echo=$lt_echo
+
+# The archiver.
+AR=$lt_AR
+AR_FLAGS=$lt_AR_FLAGS
+
+# A C compiler.
+LTCC=$lt_LTCC
+
+# A language-specific compiler.
+CC=$lt_[]_LT_AC_TAGVAR(compiler, $1)
+
+# Is the compiler the GNU C compiler?
+with_gcc=$_LT_AC_TAGVAR(GCC, $1)
+
+# An ERE matcher.
+EGREP=$lt_EGREP
+
+# The linker used to build libraries.
+LD=$lt_[]_LT_AC_TAGVAR(LD, $1)
+
+# Whether we need hard or soft links.
+LN_S=$lt_LN_S
+
+# A BSD-compatible nm program.
+NM=$lt_NM
+
+# A symbol stripping program
+STRIP=$lt_STRIP
+
+# Used to examine libraries when file_magic_cmd begins with "file"
+MAGIC_CMD=$MAGIC_CMD
+
+# Used on cygwin: DLL creation program.
+DLLTOOL="$DLLTOOL"
+
+# Used on cygwin: object dumper.
+OBJDUMP="$OBJDUMP"
+
+# Used on cygwin: assembler.
+AS="$AS"
+
+# The name of the directory that contains temporary libtool files.
+objdir=$objdir
+
+# How to create reloadable object files.
+reload_flag=$lt_reload_flag
+reload_cmds=$lt_reload_cmds
+
+# How to pass a linker flag through the compiler.
+wl=$lt_[]_LT_AC_TAGVAR(lt_prog_compiler_wl, $1)
+
+# Object file suffix (normally "o").
+objext="$ac_objext"
+
+# Old archive suffix (normally "a").
+libext="$libext"
+
+# Shared library suffix (normally ".so").
+shrext='$shrext'
+
+# Executable file suffix (normally "").
+exeext="$exeext"
+
+# Additional compiler flags for building library objects.
+pic_flag=$lt_[]_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)
+pic_mode=$pic_mode
+
+# What is the maximum length of a command?
+max_cmd_len=$lt_cv_sys_max_cmd_len
+
+# Does compiler simultaneously support -c and -o options?
+compiler_c_o=$lt_[]_LT_AC_TAGVAR(lt_cv_prog_compiler_c_o, $1)
+
+# Must we lock files when doing compilation?
+need_locks=$lt_need_locks
+
+# Do we need the lib prefix for modules?
+need_lib_prefix=$need_lib_prefix
+
+# Do we need a version for libraries?
+need_version=$need_version
+
+# Whether dlopen is supported.
+dlopen_support=$enable_dlopen
+
+# Whether dlopen of programs is supported.
+dlopen_self=$enable_dlopen_self
+
+# Whether dlopen of statically linked programs is supported.
+dlopen_self_static=$enable_dlopen_self_static
+
+# Compiler flag to prevent dynamic linking.
+link_static_flag=$lt_[]_LT_AC_TAGVAR(lt_prog_compiler_static, $1)
+
+# Compiler flag to turn off builtin functions.
+no_builtin_flag=$lt_[]_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)
+
+# Compiler flag to allow reflexive dlopens.
+export_dynamic_flag_spec=$lt_[]_LT_AC_TAGVAR(export_dynamic_flag_spec, $1)
+
+# Compiler flag to generate shared objects directly from archives.
+whole_archive_flag_spec=$lt_[]_LT_AC_TAGVAR(whole_archive_flag_spec, $1)
+
+# Compiler flag to generate thread-safe objects.
+thread_safe_flag_spec=$lt_[]_LT_AC_TAGVAR(thread_safe_flag_spec, $1)
+
+# Library versioning type.
+version_type=$version_type
+
+# Format of library name prefix.
+libname_spec=$lt_libname_spec
+
+# List of archive names. First name is the real one, the rest are links.
+# The last name is the one that the linker finds with -lNAME.
+library_names_spec=$lt_library_names_spec
+
+# The coded name of the library, if different from the real name.
+soname_spec=$lt_soname_spec
+
+# Commands used to build and install an old-style archive.
+RANLIB=$lt_RANLIB
+old_archive_cmds=$lt_[]_LT_AC_TAGVAR(old_archive_cmds, $1)
+old_postinstall_cmds=$lt_old_postinstall_cmds
+old_postuninstall_cmds=$lt_old_postuninstall_cmds
+
+# Create an old-style archive from a shared archive.
+old_archive_from_new_cmds=$lt_[]_LT_AC_TAGVAR(old_archive_from_new_cmds, $1)
+
+# Create a temporary old-style archive to link instead of a shared archive.
+old_archive_from_expsyms_cmds=$lt_[]_LT_AC_TAGVAR(old_archive_from_expsyms_cmds, $1)
+
+# Commands used to build and install a shared archive.
+archive_cmds=$lt_[]_LT_AC_TAGVAR(archive_cmds, $1)
+archive_expsym_cmds=$lt_[]_LT_AC_TAGVAR(archive_expsym_cmds, $1)
+postinstall_cmds=$lt_postinstall_cmds
+postuninstall_cmds=$lt_postuninstall_cmds
+
+# Commands used to build a loadable module (assumed same as above if empty)
+module_cmds=$lt_[]_LT_AC_TAGVAR(module_cmds, $1)
+module_expsym_cmds=$lt_[]_LT_AC_TAGVAR(module_expsym_cmds, $1)
+
+# Commands to strip libraries.
+old_striplib=$lt_old_striplib
+striplib=$lt_striplib
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predep_objects=$lt_[]_LT_AC_TAGVAR(predep_objects, $1)
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdep_objects=$lt_[]_LT_AC_TAGVAR(postdep_objects, $1)
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predeps=$lt_[]_LT_AC_TAGVAR(predeps, $1)
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdeps=$lt_[]_LT_AC_TAGVAR(postdeps, $1)
+
+# The library search path used internally by the compiler when linking
+# a shared library.
+compiler_lib_search_path=$lt_[]_LT_AC_TAGVAR(compiler_lib_search_path, $1)
+
+# Method to check whether dependent libraries are shared objects.
+deplibs_check_method=$lt_deplibs_check_method
+
+# Command to use when deplibs_check_method == file_magic.
+file_magic_cmd=$lt_file_magic_cmd
+
+# Flag that allows shared libraries with undefined symbols to be built.
+allow_undefined_flag=$lt_[]_LT_AC_TAGVAR(allow_undefined_flag, $1)
+
+# Flag that forces no undefined symbols.
+no_undefined_flag=$lt_[]_LT_AC_TAGVAR(no_undefined_flag, $1)
+
+# Commands used to finish a libtool library installation in a directory.
+finish_cmds=$lt_finish_cmds
+
+# Same as above, but a single script fragment to be evaled but not shown.
+finish_eval=$lt_finish_eval
+
+# Take the output of nm and produce a listing of raw symbols and C names.
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe
+
+# Transform the output of nm into a proper C declaration
+global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl
+
+# Transform the output of nm into a C name and address pair
+global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+
+# This is the shared library runtime path variable.
+runpath_var=$runpath_var
+
+# This is the shared library path variable.
+shlibpath_var=$shlibpath_var
+
+# Is shlibpath searched before the hard-coded library search path?
+shlibpath_overrides_runpath=$shlibpath_overrides_runpath
+
+# How to hardcode a shared library path into an executable.
+hardcode_action=$_LT_AC_TAGVAR(hardcode_action, $1)
+
+# Whether we should hardcode library paths into libraries.
+hardcode_into_libs=$hardcode_into_libs
+
+# Flag to hardcode \$libdir into a binary during linking.
+# This must work even if \$libdir does not exist.
+hardcode_libdir_flag_spec=$lt_[]_LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)
+
+# If ld is used when linking, flag to hardcode \$libdir into
+# a binary during linking. This must work even if \$libdir does
+# not exist.
+hardcode_libdir_flag_spec_ld=$lt_[]_LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)
+
+# Whether we need a single -rpath flag with a separated argument.
+hardcode_libdir_separator=$lt_[]_LT_AC_TAGVAR(hardcode_libdir_separator, $1)
+
+# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the
+# resulting binary.
+hardcode_direct=$_LT_AC_TAGVAR(hardcode_direct, $1)
+
+# Set to yes if using the -LDIR flag during linking hardcodes DIR into the
+# resulting binary.
+hardcode_minus_L=$_LT_AC_TAGVAR(hardcode_minus_L, $1)
+
+# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into
+# the resulting binary.
+hardcode_shlibpath_var=$_LT_AC_TAGVAR(hardcode_shlibpath_var, $1)
+
+# Set to yes if building a shared library automatically hardcodes DIR into the library
+# and all subsequent libraries and executables linked against it.
+hardcode_automatic=$_LT_AC_TAGVAR(hardcode_automatic, $1)
+
+# Variables whose values should be saved in libtool wrapper scripts and
+# restored at relink time.
+variables_saved_for_relink="$variables_saved_for_relink"
+
+# Whether libtool must link a program against all its dependency libraries.
+link_all_deplibs=$_LT_AC_TAGVAR(link_all_deplibs, $1)
+
+# Compile-time system search path for libraries
+sys_lib_search_path_spec=$lt_sys_lib_search_path_spec
+
+# Run-time system search path for libraries
+sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec
+
+# Fix the shell variable \$srcfile for the compiler.
+fix_srcfile_path="$_LT_AC_TAGVAR(fix_srcfile_path, $1)"
+
+# Set to yes if exported symbols are required.
+always_export_symbols=$_LT_AC_TAGVAR(always_export_symbols, $1)
+
+# The commands to list exported symbols.
+export_symbols_cmds=$lt_[]_LT_AC_TAGVAR(export_symbols_cmds, $1)
+
+# The commands to extract the exported symbol list from a shared archive.
+extract_expsyms_cmds=$lt_extract_expsyms_cmds
+
+# Symbols that should not be listed in the preloaded symbols.
+exclude_expsyms=$lt_[]_LT_AC_TAGVAR(exclude_expsyms, $1)
+
+# Symbols that must always be exported.
+include_expsyms=$lt_[]_LT_AC_TAGVAR(include_expsyms, $1)
+
+ifelse([$1],[],
+[# ### END LIBTOOL CONFIG],
+[# ### END LIBTOOL TAG CONFIG: $tagname])
+
+__EOF__
+
+ifelse([$1],[], [
+ case $host_os in
+ aix3*)
+ cat <<\EOF >> "$cfgfile"
+
+# AIX sometimes has problems with the GCC collect2 program. For some
+# reason, if we set the COLLECT_NAMES environment variable, the problems
+# vanish in a puff of smoke.
+if test "X${COLLECT_NAMES+set}" != Xset; then
+ COLLECT_NAMES=
+ export COLLECT_NAMES
+fi
+EOF
+ ;;
+ esac
+
+ # We use sed instead of cat because bash on DJGPP gets confused if
+ # it finds mixed CR/LF and LF-only lines. Since sed operates in
+ # text mode, it properly converts lines to CR/LF. This bash problem
+ # is reportedly fixed, but why not run on old versions too?
+ sed '$q' "$ltmain" >> "$cfgfile" || (rm -f "$cfgfile"; exit 1)
+
+ mv -f "$cfgfile" "$ofile" || \
+ (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+ chmod +x "$ofile"
+])
+else
+ # If there is no Makefile yet, we rely on a make rule to execute
+ # `config.status --recheck' to rerun these tests and create the
+ # libtool script then.
+ ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'`
+ if test -f "$ltmain_in"; then
+ test -f Makefile && make "$ltmain"
+ fi
+fi
+])# AC_LIBTOOL_CONFIG
+
+
+# AC_LIBTOOL_PROG_COMPILER_NO_RTTI([TAGNAME])
+# -------------------------------------------
+AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_NO_RTTI],
+[AC_REQUIRE([_LT_AC_SYS_COMPILER])dnl
+
+_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=
+
+if test "$GCC" = yes; then
+ _LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin'
+
+ AC_LIBTOOL_COMPILER_OPTION([if $compiler supports -fno-rtti -fno-exceptions],
+ lt_cv_prog_compiler_rtti_exceptions,
+ [-fno-rtti -fno-exceptions], [],
+ [_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)="$_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) -fno-rtti -fno-exceptions"])
+fi
+])# AC_LIBTOOL_PROG_COMPILER_NO_RTTI
+
+
+# AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE
+# ---------------------------------
+AC_DEFUN([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE],
+[AC_REQUIRE([AC_CANONICAL_HOST])
+AC_REQUIRE([AC_PROG_NM])
+AC_REQUIRE([AC_OBJEXT])
+# Check for command to grab the raw symbol name followed by C symbol from nm.
+AC_MSG_CHECKING([command to parse $NM output from $compiler object])
+AC_CACHE_VAL([lt_cv_sys_global_symbol_pipe],
+[
+# These are sane defaults that work on at least a few old systems.
+# [They come from Ultrix. What could be older than Ultrix?!! ;)]
+
+# Character class describing NM global symbol codes.
+symcode='[[BCDEGRST]]'
+
+# Regexp to match symbols that can be accessed directly from C.
+sympat='\([[_A-Za-z]][[_A-Za-z0-9]]*\)'
+
+# Transform the above into a raw symbol and a C symbol.
+symxfrm='\1 \2\3 \3'
+
+# Transform an extracted symbol line into a proper C declaration
+lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^. .* \(.*\)$/extern int \1;/p'"
+
+# Transform an extracted symbol line into symbol name and symbol address
+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'"
+
+# Define system-specific variables.
+case $host_os in
+aix*)
+ symcode='[[BCDT]]'
+ ;;
+cygwin* | mingw* | pw32*)
+ symcode='[[ABCDGISTW]]'
+ ;;
+hpux*) # Its linker distinguishes data from code symbols
+ if test "$host_cpu" = ia64; then
+ symcode='[[ABCDEGRST]]'
+ fi
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'"
+ ;;
+irix* | nonstopux*)
+ symcode='[[BCDEGRST]]'
+ ;;
+osf*)
+ symcode='[[BCDEGQRST]]'
+ ;;
+solaris* | sysv5*)
+ symcode='[[BDRT]]'
+ ;;
+sysv4)
+ symcode='[[DFNSTU]]'
+ ;;
+esac
+
+# Handle CRLF in mingw tool chain
+opt_cr=
+case $build_os in
+mingw*)
+ opt_cr=`echo 'x\{0,1\}' | tr x '\015'` # option cr in regexp
+ ;;
+esac
+
+# If we're using GNU nm, then use its standard symbol codes.
+case `$NM -V 2>&1` in
+*GNU* | *'with BFD'*)
+ symcode='[[ABCDGIRSTW]]' ;;
+esac
+
+# Try without a prefix underscore, then with it.
+for ac_symprfx in "" "_"; do
+
+ # Write the raw and C identifiers.
+ lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[ ]]\($symcode$symcode*\)[[ ]][[ ]]*\($ac_symprfx\)$sympat$opt_cr$/$symxfrm/p'"
+
+ # Check to see that the pipe works correctly.
+ pipe_works=no
+
+ rm -f conftest*
+ cat > conftest.$ac_ext <<EOF
+#ifdef __cplusplus
+extern "C" {
+#endif
+char nm_test_var;
+void nm_test_func(){}
+#ifdef __cplusplus
+}
+#endif
+int main(){nm_test_var='a';nm_test_func();return(0);}
+EOF
+
+ if AC_TRY_EVAL(ac_compile); then
+ # Now try to grab the symbols.
+ nlist=conftest.nm
+ if AC_TRY_EVAL(NM conftest.$ac_objext \| $lt_cv_sys_global_symbol_pipe \> $nlist) && test -s "$nlist"; then
+ # Try sorting and uniquifying the output.
+ if sort "$nlist" | uniq > "$nlist"T; then
+ mv -f "$nlist"T "$nlist"
+ else
+ rm -f "$nlist"T
+ fi
+
+ # Make sure that we snagged all the symbols we need.
+ if grep ' nm_test_var$' "$nlist" >/dev/null; then
+ if grep ' nm_test_func$' "$nlist" >/dev/null; then
+ cat <<EOF > conftest.$ac_ext
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+EOF
+ # Now generate the symbol file.
+ eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | grep -v main >> conftest.$ac_ext'
+
+ cat <<EOF >> conftest.$ac_ext
+#if defined (__STDC__) && __STDC__
+# define lt_ptr_t void *
+#else
+# define lt_ptr_t char *
+# define const
+#endif
+
+/* The mapping between symbol names and symbols. */
+const struct {
+ const char *name;
+ lt_ptr_t address;
+}
+lt_preloaded_symbols[[]] =
+{
+EOF
+ $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (lt_ptr_t) \&\2},/" < "$nlist" | grep -v main >> conftest.$ac_ext
+ cat <<\EOF >> conftest.$ac_ext
+ {0, (lt_ptr_t) 0}
+};
+
+#ifdef __cplusplus
+}
+#endif
+EOF
+ # Now try linking the two files.
+ mv conftest.$ac_objext conftstm.$ac_objext
+ lt_save_LIBS="$LIBS"
+ lt_save_CFLAGS="$CFLAGS"
+ LIBS="conftstm.$ac_objext"
+ CFLAGS="$CFLAGS$_LT_AC_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)"
+ if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext}; then
+ pipe_works=yes
+ fi
+ LIBS="$lt_save_LIBS"
+ CFLAGS="$lt_save_CFLAGS"
+ else
+ echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD
+ fi
+ else
+ echo "cannot find nm_test_var in $nlist" >&AS_MESSAGE_LOG_FD
+ fi
+ else
+ echo "cannot run $lt_cv_sys_global_symbol_pipe" >&AS_MESSAGE_LOG_FD
+ fi
+ else
+ echo "$progname: failed program was:" >&AS_MESSAGE_LOG_FD
+ cat conftest.$ac_ext >&AS_MESSAGE_LOG_FD
+ fi
+ rm -f conftest* conftst*
+
+ # Do not use the global_symbol_pipe unless it works.
+ if test "$pipe_works" = yes; then
+ break
+ else
+ lt_cv_sys_global_symbol_pipe=
+ fi
+done
+])
+if test -z "$lt_cv_sys_global_symbol_pipe"; then
+ lt_cv_sys_global_symbol_to_cdecl=
+fi
+if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then
+ AC_MSG_RESULT(failed)
+else
+ AC_MSG_RESULT(ok)
+fi
+]) # AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE
+
+
+# AC_LIBTOOL_PROG_COMPILER_PIC([TAGNAME])
+# ---------------------------------------
+AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_PIC],
+[_LT_AC_TAGVAR(lt_prog_compiler_wl, $1)=
+_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=
+_LT_AC_TAGVAR(lt_prog_compiler_static, $1)=
+
+AC_MSG_CHECKING([for $compiler option to produce PIC])
+ ifelse([$1],[CXX],[
+ # C++ specific cases for pic, static, wl, etc.
+ if test "$GXX" = yes; then
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-static'
+
+ case $host_os in
+ aix*)
+ # All AIX code is PIC.
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ fi
+ ;;
+ amigaos*)
+ # FIXME: we need at least 68020 code to build shared libraries, but
+ # adding the `-m68020' flag to GCC prevents building anything better,
+ # like `-m68040'.
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4'
+ ;;
+ beos* | cygwin* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
+ # PIC is the default for these OSes.
+ ;;
+ mingw* | os2* | pw32*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'
+ ;;
+ darwin* | rhapsody*)
+ # PIC is the default on this platform
+ # Common symbols not allowed in MH_DYLIB files
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
+ ;;
+ *djgpp*)
+ # DJGPP does not support shared libraries at all
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=
+ ;;
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic
+ fi
+ ;;
+ hpux*)
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ ;;
+ *)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
+ ;;
+ esac
+ ;;
+ *)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
+ ;;
+ esac
+ else
+ case $host_os in
+ aix4* | aix5*)
+ # All AIX code is PIC.
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ else
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp'
+ fi
+ ;;
+ chorus*)
+ case $cc_basename in
+ cxch68)
+ # Green Hills C++ Compiler
+ # _LT_AC_TAGVAR(lt_prog_compiler_static, $1)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a"
+ ;;
+ esac
+ ;;
+ dgux*)
+ case $cc_basename in
+ ec++)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ ;;
+ ghcx)
+ # Green Hills C++ Compiler
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ freebsd* | kfreebsd*-gnu)
+ # FreeBSD uses GNU C++
+ ;;
+ hpux9* | hpux10* | hpux11*)
+ case $cc_basename in
+ CC)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)="${ac_cv_prog_cc_wl}-a ${ac_cv_prog_cc_wl}archive"
+ if test "$host_cpu" != ia64; then
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
+ fi
+ ;;
+ aCC)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)="${ac_cv_prog_cc_wl}-a ${ac_cv_prog_cc_wl}archive"
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
+ ;;
+ esac
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ irix5* | irix6* | nonstopux*)
+ case $cc_basename in
+ CC)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
+ # CC pic flag -KPIC is the default.
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ linux*)
+ case $cc_basename in
+ KCC)
+ # KAI C++ Compiler
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
+ ;;
+ icpc)
+ # Intel C++
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-static'
+ ;;
+ cxx)
+ # Compaq C++
+ # Make sure the PIC flag is empty. It appears that all Alpha
+ # Linux and Compaq Tru64 Unix objects are PIC.
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ lynxos*)
+ ;;
+ m88k*)
+ ;;
+ mvs*)
+ case $cc_basename in
+ cxx)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-W c,exportall'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ netbsd* | knetbsd*-gnu)
+ ;;
+ osf3* | osf4* | osf5*)
+ case $cc_basename in
+ KCC)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,'
+ ;;
+ RCC)
+ # Rational C++ 2.4.1
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
+ ;;
+ cxx)
+ # Digital/Compaq C++
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ # Make sure the PIC flag is empty. It appears that all Alpha
+ # Linux and Compaq Tru64 Unix objects are PIC.
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ psos*)
+ ;;
+ sco*)
+ case $cc_basename in
+ CC)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ solaris*)
+ case $cc_basename in
+ CC)
+ # Sun C++ 4.2, 5.x and Centerline C++
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
+ ;;
+ gcx)
+ # Green Hills C++ Compiler
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ sunos4*)
+ case $cc_basename in
+ CC)
+ # Sun C++ 4.x
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ ;;
+ lcc)
+ # Lucid
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ tandem*)
+ case $cc_basename in
+ NCC)
+ # NonStop-UX NCC 3.20
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ unixware*)
+ ;;
+ vxworks*)
+ ;;
+ *)
+ _LT_AC_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
+ ;;
+ esac
+ fi
+],
+[
+ if test "$GCC" = yes; then
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-static'
+
+ case $host_os in
+ aix*)
+ # All AIX code is PIC.
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ fi
+ ;;
+
+ amigaos*)
+ # FIXME: we need at least 68020 code to build shared libraries, but
+ # adding the `-m68020' flag to GCC prevents building anything better,
+ # like `-m68040'.
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4'
+ ;;
+
+ beos* | cygwin* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
+ # PIC is the default for these OSes.
+ ;;
+
+ mingw* | pw32* | os2*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'
+ ;;
+
+ darwin* | rhapsody*)
+ # PIC is the default on this platform
+ # Common symbols not allowed in MH_DYLIB files
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
+ ;;
+
+ msdosdjgpp*)
+ # Just because we use GCC doesn't mean we suddenly get shared libraries
+ # on systems that don't support them.
+ _LT_AC_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
+ enable_shared=no
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic
+ fi
+ ;;
+
+ hpux*)
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
+ ;;
+ esac
+ ;;
+
+ *)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
+ ;;
+ esac
+ else
+ # PORTME Check for flag to pass linker flags through the system compiler.
+ case $host_os in
+ aix*)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ else
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp'
+ fi
+ ;;
+
+ mingw* | pw32* | os2*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'
+ ;;
+
+ hpux9* | hpux10* | hpux11*)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
+ ;;
+ esac
+ # Is there a better lt_prog_compiler_static that works with the bundled CC?
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive'
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ # PIC (with -KPIC) is the default.
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
+ ;;
+
+ newsos6)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ ;;
+
+ linux*)
+ case $CC in
+ icc* | ecc*)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-static'
+ ;;
+ ccc*)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ # All Alpha code is PIC.
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
+ ;;
+ esac
+ ;;
+
+ osf3* | osf4* | osf5*)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ # All OSF/1 code is PIC.
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
+ ;;
+
+ sco3.2v5*)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-Kpic'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-dn'
+ ;;
+
+ solaris*)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ ;;
+
+ sunos4*)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ ;;
+
+ sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ _LT_AC_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-Kconform_pic'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ fi
+ ;;
+
+ uts4*)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
+ _LT_AC_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+ ;;
+
+ *)
+ _LT_AC_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
+ ;;
+ esac
+ fi
+])
+AC_MSG_RESULT([$_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)])
+
+#
+# Check to make sure the PIC flag actually works.
+#
+if test -n "$_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)"; then
+ AC_LIBTOOL_COMPILER_OPTION([if $compiler PIC flag $_LT_AC_TAGVAR(lt_prog_compiler_pic, $1) works],
+ _LT_AC_TAGVAR(lt_prog_compiler_pic_works, $1),
+ [$_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)ifelse([$1],[],[ -DPIC],[ifelse([$1],[CXX],[ -DPIC],[])])], [],
+ [case $_LT_AC_TAGVAR(lt_prog_compiler_pic, $1) in
+ "" | " "*) ;;
+ *) _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=" $_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)" ;;
+ esac],
+ [_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=
+ _LT_AC_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no])
+fi
+case "$host_os" in
+ # For platforms which do not support PIC, -DPIC is meaningless:
+ *djgpp*)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)=
+ ;;
+ *)
+ _LT_AC_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_AC_TAGVAR(lt_prog_compiler_pic, $1)ifelse([$1],[],[ -DPIC],[ifelse([$1],[CXX],[ -DPIC],[])])"
+ ;;
+esac
+])
+
+
+# AC_LIBTOOL_PROG_LD_SHLIBS([TAGNAME])
+# ------------------------------------
+# See if the linker supports building shared libraries.
+AC_DEFUN([AC_LIBTOOL_PROG_LD_SHLIBS],
+[AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
+ifelse([$1],[CXX],[
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+ case $host_os in
+ aix4* | aix5*)
+ # If we're using GNU nm, then we don't want the "-C" option.
+ # -C means demangle to AIX nm, but to GNU nm it means don't demangle
+ if $NM -V 2>&1 | grep 'GNU' > /dev/null; then
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\[$]2 == "T") || (\[$]2 == "D") || (\[$]2 == "B")) && ([substr](\[$]3,1,1) != ".")) { print \[$]3 } }'\'' | sort -u > $export_symbols'
+ else
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\[$]2 == "T") || (\[$]2 == "D") || (\[$]2 == "B")) && ([substr](\[$]3,1,1) != ".")) { print \[$]3 } }'\'' | sort -u > $export_symbols'
+ fi
+ ;;
+ pw32*)
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)="$ltdll_cmds"
+ ;;
+ cygwin* | mingw*)
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGS]] /s/.* \([[^ ]]*\)/\1 DATA/'\'' | $SED -e '\''/^[[AITW]] /s/.* //'\'' | sort | uniq > $export_symbols'
+ ;;
+ *)
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+ ;;
+ esac
+],[
+ runpath_var=
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=
+ _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=no
+ _LT_AC_TAGVAR(archive_cmds, $1)=
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)=
+ _LT_AC_TAGVAR(old_archive_from_new_cmds, $1)=
+ _LT_AC_TAGVAR(old_archive_from_expsyms_cmds, $1)=
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)=
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)=
+ _LT_AC_TAGVAR(thread_safe_flag_spec, $1)=
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)=
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)=
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=no
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=unknown
+ _LT_AC_TAGVAR(hardcode_automatic, $1)=no
+ _LT_AC_TAGVAR(module_cmds, $1)=
+ _LT_AC_TAGVAR(module_expsym_cmds, $1)=
+ _LT_AC_TAGVAR(always_export_symbols, $1)=no
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+ # include_expsyms should be a list of space-separated symbols to be *always*
+ # included in the symbol list
+ _LT_AC_TAGVAR(include_expsyms, $1)=
+ # exclude_expsyms can be an extended regexp of symbols to exclude
+ # it will be wrapped by ` (' and `)$', so one must not match beginning or
+ # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc',
+ # as well as any symbol that contains `d'.
+ _LT_AC_TAGVAR(exclude_expsyms, $1)="_GLOBAL_OFFSET_TABLE_"
+ # Although _GLOBAL_OFFSET_TABLE_ is a valid C symbol name, most a.out
+ # platforms (ab)use it in PIC code, but their linkers get confused if
+ # the symbol is explicitly referenced. Since portable code cannot
+ # rely on this symbol name, it's probably fine to never include it in
+ # preloaded symbol tables.
+ extract_expsyms_cmds=
+
+ case $host_os in
+ cygwin* | mingw* | pw32*)
+ # FIXME: the MSVC++ port hasn't been tested in a loooong time
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++.
+ if test "$GCC" != yes; then
+ with_gnu_ld=no
+ fi
+ ;;
+ openbsd*)
+ with_gnu_ld=no
+ ;;
+ esac
+
+ _LT_AC_TAGVAR(ld_shlibs, $1)=yes
+ if test "$with_gnu_ld" = yes; then
+ # If archive_cmds runs LD, not CC, wlarc should be empty
+ wlarc='${wl}'
+
+ # See if GNU ld supports shared libraries.
+ case $host_os in
+ aix3* | aix4* | aix5*)
+ # On AIX/PPC, the GNU linker is very broken
+ if test "$host_cpu" != ia64; then
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ cat <<EOF 1>&2
+
+*** Warning: the GNU linker, at least up to release 2.9.1, is reported
+*** to be unable to reliably create shared libraries on AIX.
+*** Therefore, libtool is disabling shared libraries support. If you
+*** really care for shared libraries, you may want to modify your PATH
+*** so that a non-GNU linker is found, and then restart.
+
+EOF
+ fi
+ ;;
+
+ amigaos*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+
+ # Samuel A. Falvo II <kc5tja at dolphin.openprojects.net> reports
+ # that the semantics of dynamic libraries on AmigaOS, at least up
+ # to version 4, is to share data among multiple programs linked
+ # with the same dynamic library. Since this doesn't match the
+ # behavior of shared libraries on other platforms, we can't use
+ # them.
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+
+ beos*)
+ if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported
+ # Joseph Beckenbach <jrb3 at best.com> says some releases of gcc
+ # support --undefined. This deserves some investigation. FIXME
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ else
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+
+ cygwin* | mingw* | pw32*)
+ # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
+ # as there is no search path for DLLs.
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported
+ _LT_AC_TAGVAR(always_export_symbols, $1)=no
+ _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGS]] /s/.* \([[^ ]]*\)/\1 DATA/'\'' | $SED -e '\''/^[[AITW]] /s/.* //'\'' | sort | uniq > $export_symbols'
+
+ if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ # If the export-symbols file already is a .def file (1st line
+ # is EXPORTS), use it as is; otherwise, prepend...
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+ cp $export_symbols $output_objdir/$soname.def;
+ else
+ echo EXPORTS > $output_objdir/$soname.def;
+ cat $export_symbols >> $output_objdir/$soname.def;
+ fi~
+ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ else
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ wlarc=
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ fi
+ ;;
+
+ solaris* | sysv5*)
+ if $LD -v 2>&1 | grep 'BFD 2\.8' > /dev/null; then
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ cat <<EOF 1>&2
+
+*** Warning: The releases 2.8.* of the GNU linker cannot reliably
+*** create shared libraries on Solaris systems. Therefore, libtool
+*** is disabling shared libraries support. We urge you to upgrade GNU
+*** binutils to release 2.9.1 or newer. Another option is to modify
+*** your PATH or compiler configuration so that the native linker is
+*** used, and then restart.
+
+EOF
+ elif $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+
+ sunos4*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ wlarc=
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ linux*)
+ if $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then
+ tmp_archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ _LT_AC_TAGVAR(archive_cmds, $1)="$tmp_archive_cmds"
+ supports_anon_versioning=no
+ case `$LD -v 2>/dev/null` in
+ *\ [01].* | *\ 2.[[0-9]].* | *\ 2.10.*) ;; # catch versions < 2.11
+ *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ...
+ *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ...
+ *\ 2.11.*) ;; # other 2.11 versions
+ *) supports_anon_versioning=yes ;;
+ esac
+ if test $supports_anon_versioning = yes; then
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $output_objdir/$libname.ver~
+cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+$echo "local: *; };" >> $output_objdir/$libname.ver~
+ $CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib'
+ else
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)="$tmp_archive_cmds"
+ fi
+ else
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+
+ *)
+ if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+ esac
+
+ if test "$_LT_AC_TAGVAR(ld_shlibs, $1)" = yes; then
+ runpath_var=LD_RUN_PATH
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath ${wl}$libdir'
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'
+ # ancient GNU ld didn't support --whole-archive et al.
+ if $LD --help 2>&1 | grep 'no-whole-archive' > /dev/null; then
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
+ else
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)=
+ fi
+ fi
+ else
+ # PORTME fill in a description of your system's linker (not GNU ld)
+ case $host_os in
+ aix3*)
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported
+ _LT_AC_TAGVAR(always_export_symbols, $1)=yes
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname'
+ # Note: this linker hardcodes the directories in LIBPATH if there
+ # are no directories specified by -L.
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ if test "$GCC" = yes && test -z "$link_static_flag"; then
+ # Neither direct hardcoding nor static linking is supported with a
+ # broken collect2.
+ _LT_AC_TAGVAR(hardcode_direct, $1)=unsupported
+ fi
+ ;;
+
+ aix4* | aix5*)
+ if test "$host_cpu" = ia64; then
+ # On IA64, the linker does run time linking by default, so we don't
+ # have to do anything special.
+ aix_use_runtimelinking=no
+ exp_sym_flag='-Bexport'
+ no_entry_flag=""
+ else
+ # If we're using GNU nm, then we don't want the "-C" option.
+ # -C means demangle to AIX nm, but to GNU nm it means don't demangle
+ if $NM -V 2>&1 | grep 'GNU' > /dev/null; then
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\[$]2 == "T") || (\[$]2 == "D") || (\[$]2 == "B")) && ([substr](\[$]3,1,1) != ".")) { print \[$]3 } }'\'' | sort -u > $export_symbols'
+ else
+ _LT_AC_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\[$]2 == "T") || (\[$]2 == "D") || (\[$]2 == "B")) && ([substr](\[$]3,1,1) != ".")) { print \[$]3 } }'\'' | sort -u > $export_symbols'
+ fi
+ aix_use_runtimelinking=no
+
+ # Test if we are trying to use run time linking or normal
+ # AIX style linking. If -brtl is somewhere in LDFLAGS, we
+ # need to do runtime linking.
+ case $host_os in aix4.[[23]]|aix4.[[23]].*|aix5*)
+ for ld_flag in $LDFLAGS; do
+ if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then
+ aix_use_runtimelinking=yes
+ break
+ fi
+ done
+ esac
+
+ exp_sym_flag='-bexport'
+ no_entry_flag='-bnoentry'
+ fi
+
+ # When large executables or shared objects are built, AIX ld can
+ # have problems creating the table of contents. If linking a library
+ # or program results in "error TOC overflow" add -mminimal-toc to
+ # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
+ # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
+
+ _LT_AC_TAGVAR(archive_cmds, $1)=''
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=':'
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=yes
+
+ if test "$GCC" = yes; then
+ case $host_os in aix4.[012]|aix4.[012].*)
+ # We only want to do this on AIX 4.2 and lower; the check
+ # below for broken collect2 doesn't work under 4.3+
+ collect2name=`${CC} -print-prog-name=collect2`
+ if test -f "$collect2name" && \
+ strings "$collect2name" | grep resolve_lib_name >/dev/null
+ then
+ # We have reworked collect2
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ else
+ # We have old collect2
+ _LT_AC_TAGVAR(hardcode_direct, $1)=unsupported
+ # It fails to find uninstalled libraries when the uninstalled
+ # path is not listed in the libpath. Setting hardcode_minus_L
+ # to unsupported forces relinking
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=
+ fi
+ esac
+ shared_flag='-shared'
+ else
+ # not using gcc
+ if test "$host_cpu" = ia64; then
+ # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
+ # chokes on -Wl,-G. The following line is correct:
+ shared_flag='-G'
+ else
+ if test "$aix_use_runtimelinking" = yes; then
+ shared_flag='${wl}-G'
+ else
+ shared_flag='${wl}-bM:SRE'
+ fi
+ fi
+ fi
+
+ # It seems that -bexpall does not export symbols beginning with
+ # underscore (_), so it is better to generate a list of symbols to export.
+ _LT_AC_TAGVAR(always_export_symbols, $1)=yes
+ if test "$aix_use_runtimelinking" = yes; then
+ # Warning - without using the other runtime loading flags (-brtl),
+ # -berok will link without error, but may produce a broken library.
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-berok'
+ # Determine the default libpath from the value encoded in an empty executable.
+ _LT_AC_SYS_LIBPATH_AIX
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+ else
+ if test "$host_cpu" = ia64; then
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib'
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)="-z nodefs"
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols"
+ else
+ # Determine the default libpath from the value encoded in an empty executable.
+ _LT_AC_SYS_LIBPATH_AIX
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+ # Warning - without using the other run time loading flags,
+ # -berok will link without error, but may produce a broken library.
+ _LT_AC_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok'
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok'
+ # -bexpall does not export symbols beginning with underscore (_)
+ _LT_AC_TAGVAR(always_export_symbols, $1)=yes
+ # Exported symbols can be pulled into shared objects from archives
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)=' '
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=yes
+ # This is similar to how AIX traditionally builds its shared libraries.
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}-bE:$export_symbols ${wl}-bnoentry${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname'
+ fi
+ fi
+ ;;
+
+ amigaos*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ # see comment about different semantics in the GNU ld section
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+
+ bsdi4*)
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)=-rdynamic
+ ;;
+
+ cygwin* | mingw* | pw32*)
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++.
+ # hardcode_libdir_flag_spec is actually meaningless, as there is
+ # no search path for DLLs.
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported
+ # Tell ltmain to make .lib files, not .a files.
+ libext=lib
+ # Tell ltmain to make .dll files, not .so files.
+ shrext=".dll"
+ # FIXME: Setting linknames here is a bad hack.
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | $SED -e '\''s/ -lc$//'\''` -link -dll~linknames='
+ # The linker will automatically build a .lib file if we build a DLL.
+ _LT_AC_TAGVAR(old_archive_from_new_cmds, $1)='true'
+ # FIXME: Should let the user specify the lib program.
+ _LT_AC_TAGVAR(old_archive_cmds, $1)='lib /OUT:$oldlib$oldobjs$old_deplibs'
+ fix_srcfile_path='`cygpath -w "$srcfile"`'
+ _LT_AC_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
+ ;;
+
+ darwin* | rhapsody*)
+ if test "$GXX" = yes ; then
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+ case "$host_os" in
+ rhapsody* | darwin1.[[012]])
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-undefined suppress'
+ ;;
+ *) # Darwin 1.3 on
+ if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-flat_namespace -undefined suppress'
+ else
+ case ${MACOSX_DEPLOYMENT_TARGET} in
+ 10.[[012]])
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-flat_namespace -undefined suppress'
+ ;;
+ 10.*)
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)='-undefined dynamic_lookup'
+ ;;
+ esac
+ fi
+ ;;
+ esac
+ lt_int_apple_cc_single_mod=no
+ output_verbose_link_cmd='echo'
+ if $CC -dumpspecs 2>&1 | grep 'single_module' >/dev/null ; then
+ lt_int_apple_cc_single_mod=yes
+ fi
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ fi
+ _LT_AC_TAGVAR(module_cmds, $1)='$CC ${wl}-bind_at_load $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags'
+ # Don't fix this by using the ld -exported_symbols_list flag; it doesn't exist in older Darwin ld's
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ else
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ fi
+ _LT_AC_TAGVAR(module_expsym_cmds, $1)='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no
+ _LT_AC_TAGVAR(hardcode_automatic, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='-all_load $convenience'
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=yes
+ else
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ fi
+ ;;
+
+ dgux*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ freebsd1*)
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+
+ # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor
+ # support. Future versions do this automatically, but an explicit c++rt0.o
+ # does not break anything, and helps significantly (at the cost of a little
+ # extra space).
+ freebsd2.2*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ # Unfortunately, older versions of FreeBSD 2 do not have this feature.
+ freebsd2*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+ freebsd* | kfreebsd*-gnu)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ hpux9*)
+ if test "$GCC" = yes; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ fi
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
+ ;;
+
+ hpux10* | hpux11*)
+ if test "$GCC" = yes -a "$with_gnu_ld" = no; then
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ *)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ esac
+ else
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -b +h $soname -o $lib $libobjs $deplibs $linker_flags'
+ ;;
+ *)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+ ;;
+ esac
+ fi
+ if test "$with_gnu_ld" = no; then
+ case "$host_cpu" in
+ hppa*64*)
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)='+b $libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+ ia64*)
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ ;;
+ *)
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ ;;
+ esac
+ fi
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ if test "$GCC" = yes; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec_ld, $1)='-rpath $libdir'
+ fi
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=yes
+ ;;
+
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF
+ fi
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ newsos6)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ openbsd*)
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
+ else
+ case $host_os in
+ openbsd[[01]].* | openbsd2.[[0-7]] | openbsd2.[[0-7]].*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
+ ;;
+ *)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
+ ;;
+ esac
+ fi
+ ;;
+
+ os2*)
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=unsupported
+ _LT_AC_TAGVAR(archive_cmds, $1)='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def'
+ _LT_AC_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def'
+ ;;
+
+ osf3*)
+ if test "$GCC" = yes; then
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ else
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ fi
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ ;;
+
+ osf4* | osf5*) # as osf3* with the addition of -msym flag
+ if test "$GCC" = yes; then
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+ else
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~
+ $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib~$rm $lib.exp'
+
+ # Both the C and C++ compilers support -rpath directly
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
+ fi
+ _LT_AC_TAGVAR(hardcode_libdir_separator, $1)=:
+ ;;
+
+ sco3.2v5*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Bexport'
+ runpath_var=LD_RUN_PATH
+ hardcode_runpath_var=yes
+ ;;
+
+ solaris*)
+ _LT_AC_TAGVAR(no_undefined_flag, $1)=' -z text'
+ if test "$GCC" = yes; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -shared ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$rm $lib.exp'
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp'
+ fi
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ case $host_os in
+ solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
+ *) # Supported since Solaris 2.6 (maybe 2.5.1?)
+ _LT_AC_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract' ;;
+ esac
+ _LT_AC_TAGVAR(link_all_deplibs, $1)=yes
+ ;;
+
+ sunos4*)
+ if test "x$host_vendor" = xsequent; then
+ # Use $CC to link under sequent, because it throws in some extra .o
+ # files that make .init and .fini sections work.
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags'
+ fi
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=yes
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ sysv4)
+ case $host_vendor in
+ sni)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes # is this really true???
+ ;;
+ siemens)
+ ## LD is ld; it makes a PLAMLIB
+ ## CC just makes a GrossModule.
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(reload_cmds, $1)='$CC -r -o $output$reload_objs'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no
+ ;;
+ motorola)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=no #Motorola manual says yes, but my tests say they lie
+ ;;
+ esac
+ runpath_var='LD_RUN_PATH'
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ sysv4.3*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ _LT_AC_TAGVAR(export_dynamic_flag_spec, $1)='-Bexport'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ runpath_var=LD_RUN_PATH
+ hardcode_runpath_var=yes
+ _LT_AC_TAGVAR(ld_shlibs, $1)=yes
+ fi
+ ;;
+
+ sysv4.2uw2*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_direct, $1)=yes
+ _LT_AC_TAGVAR(hardcode_minus_L, $1)=no
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ hardcode_runpath_var=yes
+ runpath_var=LD_RUN_PATH
+ ;;
+
+ sysv5OpenUNIX8* | sysv5UnixWare7* | sysv5uw[[78]]* | unixware7*)
+ _LT_AC_TAGVAR(no_undefined_flag, $1)='${wl}-z ${wl}text'
+ if test "$GCC" = yes; then
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ _LT_AC_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ fi
+ runpath_var='LD_RUN_PATH'
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ sysv5*)
+ _LT_AC_TAGVAR(no_undefined_flag, $1)=' -z text'
+ # $CC -shared without GNU ld will not create a library from C++
+ # object files and a static libstdc++, so better avoid it for now
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(archive_expsym_cmds, $1)='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)=
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ runpath_var='LD_RUN_PATH'
+ ;;
+
+ uts4*)
+ _LT_AC_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ _LT_AC_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+ _LT_AC_TAGVAR(hardcode_shlibpath_var, $1)=no
+ ;;
+
+ *)
+ _LT_AC_TAGVAR(ld_shlibs, $1)=no
+ ;;
+ esac
+ fi
+])
+AC_MSG_RESULT([$_LT_AC_TAGVAR(ld_shlibs, $1)])
+test "$_LT_AC_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no
+
+variables_saved_for_relink="PATH $shlibpath_var $runpath_var"
+if test "$GCC" = yes; then
+ variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH"
+fi
+
+#
+# Do we need to explicitly link libc?
+#
+case "x$_LT_AC_TAGVAR(archive_cmds_need_lc, $1)" in
+x|xyes)
+ # Assume -lc should be added
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=yes
+
+ if test "$enable_shared" = yes && test "$GCC" = yes; then
+ case $_LT_AC_TAGVAR(archive_cmds, $1) in
+ *'~'*)
+ # FIXME: we may have to deal with multi-command sequences.
+ ;;
+ '$CC '*)
+ # Test whether the compiler implicitly links with -lc since on some
+ # systems, -lgcc has to come before -lc. If gcc already passes -lc
+ # to ld, don't add -lc before -lgcc.
+ AC_MSG_CHECKING([whether -lc should be explicitly linked in])
+ $rm conftest*
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ if AC_TRY_EVAL(ac_compile) 2>conftest.err; then
+ soname=conftest
+ lib=conftest
+ libobjs=conftest.$ac_objext
+ deplibs=
+ wl=$_LT_AC_TAGVAR(lt_prog_compiler_wl, $1)
+ compiler_flags=-v
+ linker_flags=-v
+ verstring=
+ output_objdir=.
+ libname=conftest
+ lt_save_allow_undefined_flag=$_LT_AC_TAGVAR(allow_undefined_flag, $1)
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=
+ if AC_TRY_EVAL(_LT_AC_TAGVAR(archive_cmds, $1) 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1)
+ then
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=no
+ else
+ _LT_AC_TAGVAR(archive_cmds_need_lc, $1)=yes
+ fi
+ _LT_AC_TAGVAR(allow_undefined_flag, $1)=$lt_save_allow_undefined_flag
+ else
+ cat conftest.err 1>&5
+ fi
+ $rm conftest*
+ AC_MSG_RESULT([$_LT_AC_TAGVAR(archive_cmds_need_lc, $1)])
+ ;;
+ esac
+ fi
+ ;;
+esac
+])# AC_LIBTOOL_PROG_LD_SHLIBS
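For reference, the -lc probe performed at the end of this macro boils down to the following stand-alone sketch (plain sh; gcc and the conftest file names are illustrative assumptions, not part of the imported macro):

  # Hedged sketch of the -lc probe (assumes gcc is available).
  printf 'int lt_probe(void) { return 0; }\n' > conftest.c
  gcc -c conftest.c
  # If the compiler driver already passes -lc to ld when linking a shared
  # object, there is no need for libtool to append -lc itself.
  if gcc -shared conftest.o -v -o conftest.so 2>&1 | grep ' -lc ' >/dev/null; then
    echo "archive_cmds_need_lc=no"
  else
    echo "archive_cmds_need_lc=yes"
  fi
  rm -f conftest.c conftest.o conftest.so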
+
+
+# _LT_AC_FILE_LTDLL_C
+# -------------------
+# Be careful that the start marker always follows a newline.
+AC_DEFUN([_LT_AC_FILE_LTDLL_C], [
+# /* ltdll.c starts here */
+# #define WIN32_LEAN_AND_MEAN
+# #include <windows.h>
+# #undef WIN32_LEAN_AND_MEAN
+# #include <stdio.h>
+#
+# #ifndef __CYGWIN__
+# # ifdef __CYGWIN32__
+# # define __CYGWIN__ __CYGWIN32__
+# # endif
+# #endif
+#
+# #ifdef __cplusplus
+# extern "C" {
+# #endif
+# BOOL APIENTRY DllMain (HINSTANCE hInst, DWORD reason, LPVOID reserved);
+# #ifdef __cplusplus
+# }
+# #endif
+#
+# #ifdef __CYGWIN__
+# #include <cygwin/cygwin_dll.h>
+# DECLARE_CYGWIN_DLL( DllMain );
+# #endif
+# HINSTANCE __hDllInstance_base;
+#
+# BOOL APIENTRY
+# DllMain (HINSTANCE hInst, DWORD reason, LPVOID reserved)
+# {
+# __hDllInstance_base = hInst;
+# return TRUE;
+# }
+# /* ltdll.c ends here */
+])# _LT_AC_FILE_LTDLL_C
+
+
+# _LT_AC_TAGVAR(VARNAME, [TAGNAME])
+# ---------------------------------
+AC_DEFUN([_LT_AC_TAGVAR], [ifelse([$2], [], [$1], [$1_$2])])
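The macro above only implements a naming rule: with an empty tag the variable name is used as-is, otherwise the tag is appended after an underscore. A minimal sh sketch of the same rule (the helper name tagvar is made up for illustration):

  tagvar() {
    # Mirror _LT_AC_TAGVAR: no tag -> plain name, tag -> name_TAG
    if test -z "$2"; then
      echo "$1"
    else
      echo "${1}_$2"
    fi
  }
  tagvar archive_cmds       # prints: archive_cmds
  tagvar archive_cmds CXX   # prints: archive_cmds_CXX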
+
+
+# old names
+AC_DEFUN([AM_PROG_LIBTOOL], [AC_PROG_LIBTOOL])
+AC_DEFUN([AM_ENABLE_SHARED], [AC_ENABLE_SHARED($@)])
+AC_DEFUN([AM_ENABLE_STATIC], [AC_ENABLE_STATIC($@)])
+AC_DEFUN([AM_DISABLE_SHARED], [AC_DISABLE_SHARED($@)])
+AC_DEFUN([AM_DISABLE_STATIC], [AC_DISABLE_STATIC($@)])
+AC_DEFUN([AM_PROG_LD], [AC_PROG_LD])
+AC_DEFUN([AM_PROG_NM], [AC_PROG_NM])
+
+# This is just to silence aclocal about the macro not being used
+ifelse([AC_DISABLE_FAST_INSTALL])
+
+AC_DEFUN([LT_AC_PROG_GCJ],
+[AC_CHECK_TOOL(GCJ, gcj, no)
+ test "x${GCJFLAGS+set}" = xset || GCJFLAGS="-g -O2"
+ AC_SUBST(GCJFLAGS)
+])
+
+AC_DEFUN([LT_AC_PROG_RC],
+[AC_CHECK_TOOL(RC, windres, no)
+])
+
+# NOTE: This macro has been submitted for inclusion into #
+# GNU Autoconf as AC_PROG_SED. When it is available in #
+# a released version of Autoconf we should remove this #
+# macro and use it instead. #
+# LT_AC_PROG_SED
+# --------------
+# Check for a fully functional sed program that truncates
+# as few characters as possible. Prefer GNU sed if found.
+# (The truncation probe is sketched just after this macro.)
+AC_DEFUN([LT_AC_PROG_SED],
+[AC_MSG_CHECKING([for a sed that does not truncate output])
+AC_CACHE_VAL(lt_cv_path_SED,
+[# Loop through the user's path and test for sed and gsed.
+# Then use that list of sed's as ones to test for truncation.
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for lt_ac_prog in sed gsed; do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$lt_ac_prog$ac_exec_ext"; then
+ lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext"
+ fi
+ done
+ done
+done
+lt_ac_max=0
+lt_ac_count=0
+# Add /usr/xpg4/bin/sed as it is typically found on Solaris
+# along with /bin/sed that truncates output.
+for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do
+ test ! -f $lt_ac_sed && break
+ cat /dev/null > conftest.in
+ lt_ac_count=0
+ echo $ECHO_N "0123456789$ECHO_C" >conftest.in
+ # Check for GNU sed and select it if it is found.
+ if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then
+ lt_cv_path_SED=$lt_ac_sed
+ break
+ fi
+ while true; do
+ cat conftest.in conftest.in >conftest.tmp
+ mv conftest.tmp conftest.in
+ cp conftest.in conftest.nl
+ echo >>conftest.nl
+ $lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break
+ cmp -s conftest.out conftest.nl || break
+ # 10000 chars as input seems more than enough
+ test $lt_ac_count -gt 10 && break
+ lt_ac_count=`expr $lt_ac_count + 1`
+ if test $lt_ac_count -gt $lt_ac_max; then
+ lt_ac_max=$lt_ac_count
+ lt_cv_path_SED=$lt_ac_sed
+ fi
+ done
+done
+SED=$lt_cv_path_SED
+])
+AC_MSG_RESULT([$SED])
+])
+
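The truncation test in LT_AC_PROG_SED can be read as the following stand-alone sketch (plain sh; probe_sed and the conftest file names are illustrative): keep doubling a sample file and fail as soon as sed no longer echoes it back unchanged.

  probe_sed() {
    printf '0123456789' > conftest.in
    count=0
    while test $count -le 10; do
      # Double the input, append a newline to the copy sed will read,
      # and check that sed reproduces it byte for byte.
      cat conftest.in conftest.in > conftest.tmp
      mv conftest.tmp conftest.in
      cp conftest.in conftest.nl
      echo >> conftest.nl
      "$1" -e 's/a$//' < conftest.nl > conftest.out || return 1
      cmp -s conftest.out conftest.nl || return 1
      count=`expr $count + 1`
    done
    rm -f conftest.in conftest.nl conftest.out
    return 0
  }
  probe_sed /usr/bin/sed && echo "/usr/bin/sed does not truncate long lines"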
Added: freeswitch/trunk/libs/sqlite/addopcodes.awk
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/addopcodes.awk Tue Dec 19 15:11:50 2006
@@ -0,0 +1,32 @@
+#!/usr/bin/awk
+#
+# This script appends additional token codes to the end of the
+# parse.h file that lemon generates. These extra token codes are
+# not used by the parser, but they are used by the tokenizer and/or
+# the code generator. (A sample invocation is sketched just after
+# this file.)
+#
+BEGIN {
+ max = 0
+}
+/^#define TK_/ {
+ print $0
+ if( max<$3 ) max = $3
+}
+END {
+ printf "#define TK_%-29s %4d\n", "TO_TEXT", max+1
+ printf "#define TK_%-29s %4d\n", "TO_BLOB", max+2
+ printf "#define TK_%-29s %4d\n", "TO_NUMERIC", max+3
+ printf "#define TK_%-29s %4d\n", "TO_INT", max+4
+ printf "#define TK_%-29s %4d\n", "TO_REAL", max+5
+ printf "#define TK_%-29s %4d\n", "END_OF_FILE", max+6
+ printf "#define TK_%-29s %4d\n", "ILLEGAL", max+7
+ printf "#define TK_%-29s %4d\n", "SPACE", max+8
+ printf "#define TK_%-29s %4d\n", "UNCLOSED_STRING", max+9
+ printf "#define TK_%-29s %4d\n", "COMMENT", max+10
+ printf "#define TK_%-29s %4d\n", "FUNCTION", max+11
+ printf "#define TK_%-29s %4d\n", "COLUMN", max+12
+ printf "#define TK_%-29s %4d\n", "AGG_FUNCTION", max+13
+ printf "#define TK_%-29s %4d\n", "AGG_COLUMN", max+14
+ printf "#define TK_%-29s %4d\n", "CONST_FUNC", max+15
+}
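As a quick illustration of the script above (the file name parse_sample.h is made up for this example), feeding it a tiny parse.h fragment prints the existing defines followed by the fifteen extra token codes, numbered from just past the largest value already present:

  printf '#define TK_SEMI      1\n#define TK_EXPLAIN   2\n' > parse_sample.h
  awk -f addopcodes.awk parse_sample.h
  # Expected output: the two defines above, then TK_TO_TEXT = 3,
  # TK_TO_BLOB = 4, ... up to TK_CONST_FUNC = 17.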
Added: freeswitch/trunk/libs/sqlite/art/2005osaward.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/art/SQLite.eps
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/art/SQLite.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/art/SQLiteLogo3.tiff
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/config.guess
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/config.guess Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1432 @@
+#! /bin/sh
+# Attempt to guess a canonical system name.
+# Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999,
+# 2000, 2001, 2002, 2003 Free Software Foundation, Inc.
+
+timestamp='2004-01-05'
+
+# This file is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# Originally written by Per Bothner <per at bothner.com>.
+# Please send patches to <config-patches at gnu.org>. Submit a context
+# diff and a properly formatted ChangeLog entry.
+#
+# This script attempts to guess a canonical system name similar to
+# config.sub. If it succeeds, it prints the system name on stdout, and
+# exits with 0. Otherwise, it exits with 1.
+#
+# The plan is that this can be called by configure scripts if you
+# don't specify an explicit build system type.
+
+me=`echo "$0" | sed -e 's,.*/,,'`
+
+usage="\
+Usage: $0 [OPTION]
+
+Output the configuration name of the system \`$me' is run on.
+
+Operation modes:
+ -h, --help print this help, then exit
+ -t, --time-stamp print date of last modification, then exit
+ -v, --version print version number, then exit
+
+Report bugs and patches to <config-patches at gnu.org>."
+
+version="\
+GNU config.guess ($timestamp)
+
+Originally written by Per Bothner.
+Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001
+Free Software Foundation, Inc.
+
+This is free software; see the source for copying conditions. There is NO
+warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
+
+help="
+Try \`$me --help' for more information."
+
+# Parse command line
+while test $# -gt 0 ; do
+ case $1 in
+ --time-stamp | --time* | -t )
+ echo "$timestamp" ; exit 0 ;;
+ --version | -v )
+ echo "$version" ; exit 0 ;;
+ --help | --h* | -h )
+ echo "$usage"; exit 0 ;;
+ -- ) # Stop option processing
+ shift; break ;;
+ - ) # Use stdin as input.
+ break ;;
+ -* )
+ echo "$me: invalid option $1$help" >&2
+ exit 1 ;;
+ * )
+ break ;;
+ esac
+done
+
+if test $# != 0; then
+ echo "$me: too many arguments$help" >&2
+ exit 1
+fi
+
+trap 'exit 1' 1 2 15
+
+# CC_FOR_BUILD -- compiler used by this script. Note that the use of a
+# compiler to aid in system detection is discouraged as it requires
+# temporary files to be created and, as you can see below, it is a
+# headache to deal with in a portable fashion.
+
+# Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. We still
+# use `HOST_CC' if defined, but it is deprecated.
+
+# Portable tmp directory creation inspired by the Autoconf team.
+
+set_cc_for_build='
+trap "exitcode=\$?; (rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null) && exit \$exitcode" 0 ;
+trap "rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null; exit 1" 1 2 13 15 ;
+: ${TMPDIR=/tmp} ;
+ { tmp=`(umask 077 && mktemp -d -q "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } ||
+ { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir $tmp) ; } ||
+ { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir $tmp) && echo "Warning: creating insecure temp directory" >&2 ; } ||
+ { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } ;
+dummy=$tmp/dummy ;
+tmpfiles="$dummy.c $dummy.o $dummy.rel $dummy" ;
+case $CC_FOR_BUILD,$HOST_CC,$CC in
+ ,,) echo "int x;" > $dummy.c ;
+ for c in cc gcc c89 c99 ; do
+ if ($c -c -o $dummy.o $dummy.c) >/dev/null 2>&1 ; then
+ CC_FOR_BUILD="$c"; break ;
+ fi ;
+ done ;
+ if test x"$CC_FOR_BUILD" = x ; then
+ CC_FOR_BUILD=no_compiler_found ;
+ fi
+ ;;
+ ,,*) CC_FOR_BUILD=$CC ;;
+ ,*,*) CC_FOR_BUILD=$HOST_CC ;;
+esac ;'
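The compiler selection encoded in set_cc_for_build follows a simple precedence, sketched below (plain sh with echo statements only; the real code also creates the temporary directory and compiles a test file):

  case "$CC_FOR_BUILD,$HOST_CC,$CC" in
    ,,)   echo "none set: probe cc, gcc, c89, c99 in turn" ;;
    ,,*)  echo "only CC set: fall back to \$CC" ;;
    ,*,*) echo "HOST_CC set (deprecated): fall back to \$HOST_CC" ;;
    *)    echo "CC_FOR_BUILD set: use it as given" ;;
  esac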
+
+# This is needed to find uname on a Pyramid OSx when run in the BSD universe.
+# (ghazi at noc.rutgers.edu 1994-08-24)
+if (test -f /.attbin/uname) >/dev/null 2>&1 ; then
+ PATH=$PATH:/.attbin ; export PATH
+fi
+
+UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown
+UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown
+UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown
+UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown
+
+# Note: order is significant - the case branches are not exclusive.
+
+case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
+ *:NetBSD:*:*)
+ # NetBSD (nbsd) targets should (where applicable) match one or
+ # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*,
+ # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently
+ # switched to ELF, *-*-netbsd* would select the old
+ # object file format. This provides both forward
+ # compatibility and a consistent mechanism for selecting the
+ # object file format.
+ #
+ # Note: NetBSD doesn't particularly care about the vendor
+ # portion of the name. We always set it to "unknown".
+ sysctl="sysctl -n hw.machine_arch"
+ UNAME_MACHINE_ARCH=`(/sbin/$sysctl 2>/dev/null || \
+ /usr/sbin/$sysctl 2>/dev/null || echo unknown)`
+ case "${UNAME_MACHINE_ARCH}" in
+ armeb) machine=armeb-unknown ;;
+ arm*) machine=arm-unknown ;;
+ sh3el) machine=shl-unknown ;;
+ sh3eb) machine=sh-unknown ;;
+ *) machine=${UNAME_MACHINE_ARCH}-unknown ;;
+ esac
+ # The Operating System including object format, if it has switched
+ # to ELF recently, or will in the future.
+ case "${UNAME_MACHINE_ARCH}" in
+ arm*|i386|m68k|ns32k|sh3*|sparc|vax)
+ eval $set_cc_for_build
+ if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \
+ | grep __ELF__ >/dev/null
+ then
+ # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout).
+ # Return netbsd for either. FIX?
+ os=netbsd
+ else
+ os=netbsdelf
+ fi
+ ;;
+ *)
+ os=netbsd
+ ;;
+ esac
+ # The OS release
+ # Debian GNU/NetBSD machines have a different userland, and
+ # thus, need a distinct triplet. However, they do not need
+ # kernel version information, so it can be replaced with a
+ # suitable tag, in the style of linux-gnu.
+ case "${UNAME_VERSION}" in
+ Debian*)
+ release='-gnu'
+ ;;
+ *)
+ release=`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'`
+ ;;
+ esac
+ # Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM:
+ # contains redundant information, the shorter form:
+ # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used.
+ echo "${machine}-${os}${release}"
+ exit 0 ;;
+ amiga:OpenBSD:*:*)
+ echo m68k-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ arc:OpenBSD:*:*)
+ echo mipsel-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ hp300:OpenBSD:*:*)
+ echo m68k-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ mac68k:OpenBSD:*:*)
+ echo m68k-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ macppc:OpenBSD:*:*)
+ echo powerpc-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ mvme68k:OpenBSD:*:*)
+ echo m68k-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ mvme88k:OpenBSD:*:*)
+ echo m88k-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ mvmeppc:OpenBSD:*:*)
+ echo powerpc-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ pegasos:OpenBSD:*:*)
+ echo powerpc-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ pmax:OpenBSD:*:*)
+ echo mipsel-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ sgi:OpenBSD:*:*)
+ echo mipseb-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ sun3:OpenBSD:*:*)
+ echo m68k-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ wgrisc:OpenBSD:*:*)
+ echo mipsel-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ *:OpenBSD:*:*)
+ echo ${UNAME_MACHINE}-unknown-openbsd${UNAME_RELEASE}
+ exit 0 ;;
+ alpha:OSF1:*:*)
+ if test $UNAME_RELEASE = "V4.0"; then
+ UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'`
+ fi
+ # According to Compaq, /usr/sbin/psrinfo has been available on
+ # OSF/1 and Tru64 systems produced since 1995. I hope that
+ # covers most systems running today. This code pipes the CPU
+ # types through head -n 1, so we only detect the type of CPU 0.
+ ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1`
+ case "$ALPHA_CPU_TYPE" in
+ "EV4 (21064)")
+ UNAME_MACHINE="alpha" ;;
+ "EV4.5 (21064)")
+ UNAME_MACHINE="alpha" ;;
+ "LCA4 (21066/21068)")
+ UNAME_MACHINE="alpha" ;;
+ "EV5 (21164)")
+ UNAME_MACHINE="alphaev5" ;;
+ "EV5.6 (21164A)")
+ UNAME_MACHINE="alphaev56" ;;
+ "EV5.6 (21164PC)")
+ UNAME_MACHINE="alphapca56" ;;
+ "EV5.7 (21164PC)")
+ UNAME_MACHINE="alphapca57" ;;
+ "EV6 (21264)")
+ UNAME_MACHINE="alphaev6" ;;
+ "EV6.7 (21264A)")
+ UNAME_MACHINE="alphaev67" ;;
+ "EV6.8CB (21264C)")
+ UNAME_MACHINE="alphaev68" ;;
+ "EV6.8AL (21264B)")
+ UNAME_MACHINE="alphaev68" ;;
+ "EV6.8CX (21264D)")
+ UNAME_MACHINE="alphaev68" ;;
+ "EV6.9A (21264/EV69A)")
+ UNAME_MACHINE="alphaev69" ;;
+ "EV7 (21364)")
+ UNAME_MACHINE="alphaev7" ;;
+ "EV7.9 (21364A)")
+ UNAME_MACHINE="alphaev79" ;;
+ esac
+ # A Vn.n version is a released version.
+ # A Tn.n version is a released field test version.
+ # A Xn.n version is an unreleased experimental baselevel.
+ # 1.2 uses "1.2" for uname -r.
+ echo ${UNAME_MACHINE}-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[VTX]//' | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'`
+ exit 0 ;;
+ Alpha*:OpenVMS:*:*)
+ echo alpha-hp-vms
+ exit 0 ;;
+ Alpha\ *:Windows_NT*:*)
+ # How do we know it's Interix rather than the generic POSIX subsystem?
+ # Should we change UNAME_MACHINE based on the output of uname instead
+ # of the specific Alpha model?
+ echo alpha-pc-interix
+ exit 0 ;;
+ 21064:Windows_NT:50:3)
+ echo alpha-dec-winnt3.5
+ exit 0 ;;
+ Amiga*:UNIX_System_V:4.0:*)
+ echo m68k-unknown-sysv4
+ exit 0;;
+ *:[Aa]miga[Oo][Ss]:*:*)
+ echo ${UNAME_MACHINE}-unknown-amigaos
+ exit 0 ;;
+ *:[Mm]orph[Oo][Ss]:*:*)
+ echo ${UNAME_MACHINE}-unknown-morphos
+ exit 0 ;;
+ *:OS/390:*:*)
+ echo i370-ibm-openedition
+ exit 0 ;;
+ *:OS400:*:*)
+ echo powerpc-ibm-os400
+ exit 0 ;;
+ arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*)
+ echo arm-acorn-riscix${UNAME_RELEASE}
+ exit 0;;
+ SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*)
+ echo hppa1.1-hitachi-hiuxmpp
+ exit 0;;
+ Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*)
+ # akee at wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE.
+ if test "`(/bin/universe) 2>/dev/null`" = att ; then
+ echo pyramid-pyramid-sysv3
+ else
+ echo pyramid-pyramid-bsd
+ fi
+ exit 0 ;;
+ NILE*:*:*:dcosx)
+ echo pyramid-pyramid-svr4
+ exit 0 ;;
+ DRS?6000:unix:4.0:6*)
+ echo sparc-icl-nx6
+ exit 0 ;;
+ DRS?6000:UNIX_SV:4.2*:7*)
+ case `/usr/bin/uname -p` in
+ sparc) echo sparc-icl-nx7 && exit 0 ;;
+ esac ;;
+ sun4H:SunOS:5.*:*)
+ echo sparc-hal-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+ exit 0 ;;
+ sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*)
+ echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+ exit 0 ;;
+ i86pc:SunOS:5.*:*)
+ echo i386-pc-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+ exit 0 ;;
+ sun4*:SunOS:6*:*)
+ # According to config.sub, this is the proper way to canonicalize
+ # SunOS6. Hard to guess exactly what SunOS6 will be like, but
+ # it's likely to be more like Solaris than SunOS4.
+ echo sparc-sun-solaris3`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+ exit 0 ;;
+ sun4*:SunOS:*:*)
+ case "`/usr/bin/arch -k`" in
+ Series*|S4*)
+ UNAME_RELEASE=`uname -v`
+ ;;
+ esac
+ # Japanese Language versions have a version number like `4.1.3-JL'.
+ echo sparc-sun-sunos`echo ${UNAME_RELEASE}|sed -e 's/-/_/'`
+ exit 0 ;;
+ sun3*:SunOS:*:*)
+ echo m68k-sun-sunos${UNAME_RELEASE}
+ exit 0 ;;
+ sun*:*:4.2BSD:*)
+ UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null`
+ test "x${UNAME_RELEASE}" = "x" && UNAME_RELEASE=3
+ case "`/bin/arch`" in
+ sun3)
+ echo m68k-sun-sunos${UNAME_RELEASE}
+ ;;
+ sun4)
+ echo sparc-sun-sunos${UNAME_RELEASE}
+ ;;
+ esac
+ exit 0 ;;
+ aushp:SunOS:*:*)
+ echo sparc-auspex-sunos${UNAME_RELEASE}
+ exit 0 ;;
+ # The situation for MiNT is a little confusing. The machine name
+ # can be almost anything (anything that is not "atarist" or
+ # "atariste" should at least have a processor > m68000). The
+ # system name ranges from "MiNT" through "FreeMiNT" down to the
+ # lowercase versions "mint" (or "freemint"). Finally, the system
+ # name "TOS" denotes a system which is actually not MiNT. But
+ # MiNT is downward compatible with TOS, so this should not be a
+ # problem.
+ atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*)
+ echo m68k-atari-mint${UNAME_RELEASE}
+ exit 0 ;;
+ atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*)
+ echo m68k-atari-mint${UNAME_RELEASE}
+ exit 0 ;;
+ *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*)
+ echo m68k-atari-mint${UNAME_RELEASE}
+ exit 0 ;;
+ milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*)
+ echo m68k-milan-mint${UNAME_RELEASE}
+ exit 0 ;;
+ hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*)
+ echo m68k-hades-mint${UNAME_RELEASE}
+ exit 0 ;;
+ *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*)
+ echo m68k-unknown-mint${UNAME_RELEASE}
+ exit 0 ;;
+ powerpc:machten:*:*)
+ echo powerpc-apple-machten${UNAME_RELEASE}
+ exit 0 ;;
+ RISC*:Mach:*:*)
+ echo mips-dec-mach_bsd4.3
+ exit 0 ;;
+ RISC*:ULTRIX:*:*)
+ echo mips-dec-ultrix${UNAME_RELEASE}
+ exit 0 ;;
+ VAX*:ULTRIX*:*:*)
+ echo vax-dec-ultrix${UNAME_RELEASE}
+ exit 0 ;;
+ 2020:CLIX:*:* | 2430:CLIX:*:*)
+ echo clipper-intergraph-clix${UNAME_RELEASE}
+ exit 0 ;;
+ mips:*:*:UMIPS | mips:*:*:RISCos)
+ eval $set_cc_for_build
+ sed 's/^ //' << EOF >$dummy.c
+#ifdef __cplusplus
+#include <stdio.h> /* for printf() prototype */
+ int main (int argc, char *argv[]) {
+#else
+ int main (argc, argv) int argc; char *argv[]; {
+#endif
+ #if defined (host_mips) && defined (MIPSEB)
+ #if defined (SYSTYPE_SYSV)
+ printf ("mips-mips-riscos%ssysv\n", argv[1]); exit (0);
+ #endif
+ #if defined (SYSTYPE_SVR4)
+ printf ("mips-mips-riscos%ssvr4\n", argv[1]); exit (0);
+ #endif
+ #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD)
+ printf ("mips-mips-riscos%sbsd\n", argv[1]); exit (0);
+ #endif
+ #endif
+ exit (-1);
+ }
+EOF
+ $CC_FOR_BUILD -o $dummy $dummy.c \
+ && $dummy `echo "${UNAME_RELEASE}" | sed -n 's/\([0-9]*\).*/\1/p'` \
+ && exit 0
+ echo mips-mips-riscos${UNAME_RELEASE}
+ exit 0 ;;
+ Motorola:PowerMAX_OS:*:*)
+ echo powerpc-motorola-powermax
+ exit 0 ;;
+ Motorola:*:4.3:PL8-*)
+ echo powerpc-harris-powermax
+ exit 0 ;;
+ Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*)
+ echo powerpc-harris-powermax
+ exit 0 ;;
+ Night_Hawk:Power_UNIX:*:*)
+ echo powerpc-harris-powerunix
+ exit 0 ;;
+ m88k:CX/UX:7*:*)
+ echo m88k-harris-cxux7
+ exit 0 ;;
+ m88k:*:4*:R4*)
+ echo m88k-motorola-sysv4
+ exit 0 ;;
+ m88k:*:3*:R3*)
+ echo m88k-motorola-sysv3
+ exit 0 ;;
+ AViiON:dgux:*:*)
+ # DG/UX returns AViiON for all architectures
+ UNAME_PROCESSOR=`/usr/bin/uname -p`
+ if [ $UNAME_PROCESSOR = mc88100 ] || [ $UNAME_PROCESSOR = mc88110 ]
+ then
+ if [ ${TARGET_BINARY_INTERFACE}x = m88kdguxelfx ] || \
+ [ ${TARGET_BINARY_INTERFACE}x = x ]
+ then
+ echo m88k-dg-dgux${UNAME_RELEASE}
+ else
+ echo m88k-dg-dguxbcs${UNAME_RELEASE}
+ fi
+ else
+ echo i586-dg-dgux${UNAME_RELEASE}
+ fi
+ exit 0 ;;
+ M88*:DolphinOS:*:*) # DolphinOS (SVR3)
+ echo m88k-dolphin-sysv3
+ exit 0 ;;
+ M88*:*:R3*:*)
+ # Delta 88k system running SVR3
+ echo m88k-motorola-sysv3
+ exit 0 ;;
+ XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3)
+ echo m88k-tektronix-sysv3
+ exit 0 ;;
+ Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD)
+ echo m68k-tektronix-bsd
+ exit 0 ;;
+ *:IRIX*:*:*)
+ echo mips-sgi-irix`echo ${UNAME_RELEASE}|sed -e 's/-/_/g'`
+ exit 0 ;;
+ ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX.
+ echo romp-ibm-aix # uname -m gives an 8 hex-code CPU id
+ exit 0 ;; # Note that: echo "'`uname -s`'" gives 'AIX '
+ i*86:AIX:*:*)
+ echo i386-ibm-aix
+ exit 0 ;;
+ ia64:AIX:*:*)
+ if [ -x /usr/bin/oslevel ] ; then
+ IBM_REV=`/usr/bin/oslevel`
+ else
+ IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE}
+ fi
+ echo ${UNAME_MACHINE}-ibm-aix${IBM_REV}
+ exit 0 ;;
+ *:AIX:2:3)
+ if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then
+ eval $set_cc_for_build
+ sed 's/^ //' << EOF >$dummy.c
+ #include <sys/systemcfg.h>
+
+ main()
+ {
+ if (!__power_pc())
+ exit(1);
+ puts("powerpc-ibm-aix3.2.5");
+ exit(0);
+ }
+EOF
+ $CC_FOR_BUILD -o $dummy $dummy.c && $dummy && exit 0
+ echo rs6000-ibm-aix3.2.5
+ elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then
+ echo rs6000-ibm-aix3.2.4
+ else
+ echo rs6000-ibm-aix3.2
+ fi
+ exit 0 ;;
+ *:AIX:*:[45])
+ IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'`
+ if /usr/sbin/lsattr -El ${IBM_CPU_ID} | grep ' POWER' >/dev/null 2>&1; then
+ IBM_ARCH=rs6000
+ else
+ IBM_ARCH=powerpc
+ fi
+ if [ -x /usr/bin/oslevel ] ; then
+ IBM_REV=`/usr/bin/oslevel`
+ else
+ IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE}
+ fi
+ echo ${IBM_ARCH}-ibm-aix${IBM_REV}
+ exit 0 ;;
+ *:AIX:*:*)
+ echo rs6000-ibm-aix
+ exit 0 ;;
+ ibmrt:4.4BSD:*|romp-ibm:BSD:*)
+ echo romp-ibm-bsd4.4
+ exit 0 ;;
+ ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and
+ echo romp-ibm-bsd${UNAME_RELEASE} # 4.3 with uname added to
+ exit 0 ;; # report: romp-ibm BSD 4.3
+ *:BOSX:*:*)
+ echo rs6000-bull-bosx
+ exit 0 ;;
+ DPX/2?00:B.O.S.:*:*)
+ echo m68k-bull-sysv3
+ exit 0 ;;
+ 9000/[34]??:4.3bsd:1.*:*)
+ echo m68k-hp-bsd
+ exit 0 ;;
+ hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*)
+ echo m68k-hp-bsd4.4
+ exit 0 ;;
+ 9000/[34678]??:HP-UX:*:*)
+ HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'`
+ case "${UNAME_MACHINE}" in
+ 9000/31? ) HP_ARCH=m68000 ;;
+ 9000/[34]?? ) HP_ARCH=m68k ;;
+ 9000/[678][0-9][0-9])
+ if [ -x /usr/bin/getconf ]; then
+ sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null`
+ sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null`
+ case "${sc_cpu_version}" in
+ 523) HP_ARCH="hppa1.0" ;; # CPU_PA_RISC1_0
+ 528) HP_ARCH="hppa1.1" ;; # CPU_PA_RISC1_1
+ 532) # CPU_PA_RISC2_0
+ case "${sc_kernel_bits}" in
+ 32) HP_ARCH="hppa2.0n" ;;
+ 64) HP_ARCH="hppa2.0w" ;;
+ '') HP_ARCH="hppa2.0" ;; # HP-UX 10.20
+ esac ;;
+ esac
+ fi
+ if [ "${HP_ARCH}" = "" ]; then
+ eval $set_cc_for_build
+ sed 's/^ //' << EOF >$dummy.c
+
+ #define _HPUX_SOURCE
+ #include <stdlib.h>
+ #include <unistd.h>
+
+ int main ()
+ {
+ #if defined(_SC_KERNEL_BITS)
+ long bits = sysconf(_SC_KERNEL_BITS);
+ #endif
+ long cpu = sysconf (_SC_CPU_VERSION);
+
+ switch (cpu)
+ {
+ case CPU_PA_RISC1_0: puts ("hppa1.0"); break;
+ case CPU_PA_RISC1_1: puts ("hppa1.1"); break;
+ case CPU_PA_RISC2_0:
+ #if defined(_SC_KERNEL_BITS)
+ switch (bits)
+ {
+ case 64: puts ("hppa2.0w"); break;
+ case 32: puts ("hppa2.0n"); break;
+ default: puts ("hppa2.0"); break;
+ } break;
+ #else /* !defined(_SC_KERNEL_BITS) */
+ puts ("hppa2.0"); break;
+ #endif
+ default: puts ("hppa1.0"); break;
+ }
+ exit (0);
+ }
+EOF
+ (CCOPTS= $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null) && HP_ARCH=`$dummy`
+ test -z "$HP_ARCH" && HP_ARCH=hppa
+ fi ;;
+ esac
+ if [ ${HP_ARCH} = "hppa2.0w" ]
+ then
+ # avoid double evaluation of $set_cc_for_build
+ test -n "$CC_FOR_BUILD" || eval $set_cc_for_build
+ if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E -) | grep __LP64__ >/dev/null
+ then
+ HP_ARCH="hppa2.0w"
+ else
+ HP_ARCH="hppa64"
+ fi
+ fi
+ echo ${HP_ARCH}-hp-hpux${HPUX_REV}
+ exit 0 ;;
+ ia64:HP-UX:*:*)
+ HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'`
+ echo ia64-hp-hpux${HPUX_REV}
+ exit 0 ;;
+ 3050*:HI-UX:*:*)
+ eval $set_cc_for_build
+ sed 's/^ //' << EOF >$dummy.c
+ #include <unistd.h>
+ int
+ main ()
+ {
+ long cpu = sysconf (_SC_CPU_VERSION);
+ /* The order matters, because CPU_IS_HP_MC68K erroneously returns
+ true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct
+ results, however. */
+ if (CPU_IS_PA_RISC (cpu))
+ {
+ switch (cpu)
+ {
+ case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break;
+ case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break;
+ case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break;
+ default: puts ("hppa-hitachi-hiuxwe2"); break;
+ }
+ }
+ else if (CPU_IS_HP_MC68K (cpu))
+ puts ("m68k-hitachi-hiuxwe2");
+ else puts ("unknown-hitachi-hiuxwe2");
+ exit (0);
+ }
+EOF
+ $CC_FOR_BUILD -o $dummy $dummy.c && $dummy && exit 0
+ echo unknown-hitachi-hiuxwe2
+ exit 0 ;;
+ 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:* )
+ echo hppa1.1-hp-bsd
+ exit 0 ;;
+ 9000/8??:4.3bsd:*:*)
+ echo hppa1.0-hp-bsd
+ exit 0 ;;
+ *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*)
+ echo hppa1.0-hp-mpeix
+ exit 0 ;;
+ hp7??:OSF1:*:* | hp8?[79]:OSF1:*:* )
+ echo hppa1.1-hp-osf
+ exit 0 ;;
+ hp8??:OSF1:*:*)
+ echo hppa1.0-hp-osf
+ exit 0 ;;
+ i*86:OSF1:*:*)
+ if [ -x /usr/sbin/sysversion ] ; then
+ echo ${UNAME_MACHINE}-unknown-osf1mk
+ else
+ echo ${UNAME_MACHINE}-unknown-osf1
+ fi
+ exit 0 ;;
+ parisc*:Lites*:*:*)
+ echo hppa1.1-hp-lites
+ exit 0 ;;
+ C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*)
+ echo c1-convex-bsd
+ exit 0 ;;
+ C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*)
+ if getsysinfo -f scalar_acc
+ then echo c32-convex-bsd
+ else echo c2-convex-bsd
+ fi
+ exit 0 ;;
+ C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*)
+ echo c34-convex-bsd
+ exit 0 ;;
+ C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*)
+ echo c38-convex-bsd
+ exit 0 ;;
+ C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*)
+ echo c4-convex-bsd
+ exit 0 ;;
+ CRAY*Y-MP:*:*:*)
+ echo ymp-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
+ exit 0 ;;
+ CRAY*[A-Z]90:*:*:*)
+ echo ${UNAME_MACHINE}-cray-unicos${UNAME_RELEASE} \
+ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \
+ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \
+ -e 's/\.[^.]*$/.X/'
+ exit 0 ;;
+ CRAY*TS:*:*:*)
+ echo t90-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
+ exit 0 ;;
+ CRAY*T3E:*:*:*)
+ echo alphaev5-cray-unicosmk${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
+ exit 0 ;;
+ CRAY*SV1:*:*:*)
+ echo sv1-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
+ exit 0 ;;
+ *:UNICOS/mp:*:*)
+ echo nv1-cray-unicosmp${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/'
+ exit 0 ;;
+ F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*)
+ FUJITSU_PROC=`uname -m | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'`
+ FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'`
+ FUJITSU_REL=`echo ${UNAME_RELEASE} | sed -e 's/ /_/'`
+ echo "${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
+ exit 0 ;;
+ 5000:UNIX_System_V:4.*:*)
+ FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'`
+ FUJITSU_REL=`echo ${UNAME_RELEASE} | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/ /_/'`
+ echo "sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
+ exit 0 ;;
+ i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*)
+ echo ${UNAME_MACHINE}-pc-bsdi${UNAME_RELEASE}
+ exit 0 ;;
+ sparc*:BSD/OS:*:*)
+ echo sparc-unknown-bsdi${UNAME_RELEASE}
+ exit 0 ;;
+ *:BSD/OS:*:*)
+ echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE}
+ exit 0 ;;
+ *:FreeBSD:*:*)
+ # Determine whether the default compiler uses glibc.
+ eval $set_cc_for_build
+ sed 's/^ //' << EOF >$dummy.c
+ #include <features.h>
+ #if __GLIBC__ >= 2
+ LIBC=gnu
+ #else
+ LIBC=
+ #endif
+EOF
+ eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep ^LIBC=`
+ # GNU/KFreeBSD systems have a "k" prefix to indicate we are using
+ # FreeBSD's kernel, but not the complete OS.
+ case ${LIBC} in gnu) kernel_only='k' ;; esac
+ echo ${UNAME_MACHINE}-unknown-${kernel_only}freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`${LIBC:+-$LIBC}
+ exit 0 ;;
+ i*:CYGWIN*:*)
+ echo ${UNAME_MACHINE}-pc-cygwin
+ exit 0 ;;
+ i*:MINGW*:*)
+ echo ${UNAME_MACHINE}-pc-mingw32
+ exit 0 ;;
+ i*:PW*:*)
+ echo ${UNAME_MACHINE}-pc-pw32
+ exit 0 ;;
+ x86:Interix*:[34]*)
+ echo i586-pc-interix${UNAME_RELEASE}|sed -e 's/\..*//'
+ exit 0 ;;
+ [345]86:Windows_95:* | [345]86:Windows_98:* | [345]86:Windows_NT:*)
+ echo i${UNAME_MACHINE}-pc-mks
+ exit 0 ;;
+ i*:Windows_NT*:* | Pentium*:Windows_NT*:*)
+ # How do we know it's Interix rather than the generic POSIX subsystem?
+ # It also conflicts with pre-2.0 versions of AT&T UWIN. Should we
+ # change UNAME_MACHINE based on the output of uname instead of i386?
+ echo i586-pc-interix
+ exit 0 ;;
+ i*:UWIN*:*)
+ echo ${UNAME_MACHINE}-pc-uwin
+ exit 0 ;;
+ p*:CYGWIN*:*)
+ echo powerpcle-unknown-cygwin
+ exit 0 ;;
+ prep*:SunOS:5.*:*)
+ echo powerpcle-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'`
+ exit 0 ;;
+ *:GNU:*:*)
+ # the GNU system
+ echo `echo ${UNAME_MACHINE}|sed -e 's,[-/].*$,,'`-unknown-gnu`echo ${UNAME_RELEASE}|sed -e 's,/.*$,,'`
+ exit 0 ;;
+ *:GNU/*:*:*)
+ # other systems with GNU libc and userland
+ echo ${UNAME_MACHINE}-unknown-`echo ${UNAME_SYSTEM} | sed 's,^[^/]*/,,' | tr '[A-Z]' '[a-z]'``echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`-gnu
+ exit 0 ;;
+ i*86:Minix:*:*)
+ echo ${UNAME_MACHINE}-pc-minix
+ exit 0 ;;
+ arm*:Linux:*:*)
+ echo ${UNAME_MACHINE}-unknown-linux-gnu
+ exit 0 ;;
+ cris:Linux:*:*)
+ echo cris-axis-linux-gnu
+ exit 0 ;;
+ ia64:Linux:*:*)
+ echo ${UNAME_MACHINE}-unknown-linux-gnu
+ exit 0 ;;
+ m68*:Linux:*:*)
+ echo ${UNAME_MACHINE}-unknown-linux-gnu
+ exit 0 ;;
+ mips:Linux:*:*)
+ eval $set_cc_for_build
+ sed 's/^ //' << EOF >$dummy.c
+ #undef CPU
+ #undef mips
+ #undef mipsel
+ #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL)
+ CPU=mipsel
+ #else
+ #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB)
+ CPU=mips
+ #else
+ CPU=
+ #endif
+ #endif
+EOF
+ eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep ^CPU=`
+ test x"${CPU}" != x && echo "${CPU}-unknown-linux-gnu" && exit 0
+ ;;
+ mips64:Linux:*:*)
+ eval $set_cc_for_build
+ sed 's/^ //' << EOF >$dummy.c
+ #undef CPU
+ #undef mips64
+ #undef mips64el
+ #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL)
+ CPU=mips64el
+ #else
+ #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB)
+ CPU=mips64
+ #else
+ CPU=
+ #endif
+ #endif
+EOF
+ eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep ^CPU=`
+ test x"${CPU}" != x && echo "${CPU}-unknown-linux-gnu" && exit 0
+ ;;
+ ppc:Linux:*:*)
+ echo powerpc-unknown-linux-gnu
+ exit 0 ;;
+ ppc64:Linux:*:*)
+ echo powerpc64-unknown-linux-gnu
+ exit 0 ;;
+ alpha:Linux:*:*)
+ case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' < /proc/cpuinfo` in
+ EV5) UNAME_MACHINE=alphaev5 ;;
+ EV56) UNAME_MACHINE=alphaev56 ;;
+ PCA56) UNAME_MACHINE=alphapca56 ;;
+ PCA57) UNAME_MACHINE=alphapca56 ;;
+ EV6) UNAME_MACHINE=alphaev6 ;;
+ EV67) UNAME_MACHINE=alphaev67 ;;
+ EV68*) UNAME_MACHINE=alphaev68 ;;
+ esac
+ objdump --private-headers /bin/sh | grep ld.so.1 >/dev/null
+ if test "$?" = 0 ; then LIBC="libc1" ; else LIBC="" ; fi
+ echo ${UNAME_MACHINE}-unknown-linux-gnu${LIBC}
+ exit 0 ;;
+ parisc:Linux:*:* | hppa:Linux:*:*)
+ # Look for CPU level
+ case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in
+ PA7*) echo hppa1.1-unknown-linux-gnu ;;
+ PA8*) echo hppa2.0-unknown-linux-gnu ;;
+ *) echo hppa-unknown-linux-gnu ;;
+ esac
+ exit 0 ;;
+ parisc64:Linux:*:* | hppa64:Linux:*:*)
+ echo hppa64-unknown-linux-gnu
+ exit 0 ;;
+ s390:Linux:*:* | s390x:Linux:*:*)
+ echo ${UNAME_MACHINE}-ibm-linux
+ exit 0 ;;
+ sh64*:Linux:*:*)
+ echo ${UNAME_MACHINE}-unknown-linux-gnu
+ exit 0 ;;
+ sh*:Linux:*:*)
+ echo ${UNAME_MACHINE}-unknown-linux-gnu
+ exit 0 ;;
+ sparc:Linux:*:* | sparc64:Linux:*:*)
+ echo ${UNAME_MACHINE}-unknown-linux-gnu
+ exit 0 ;;
+ x86_64:Linux:*:*)
+ echo x86_64-unknown-linux-gnu
+ exit 0 ;;
+ i*86:Linux:*:*)
+ # The BFD linker knows what the default object file format is, so
+ # first see if it will tell us. cd to the root directory to prevent
+ # problems with other programs or directories called `ld' in the path.
+ # Set LC_ALL=C to ensure ld outputs messages in English.
+ ld_supported_targets=`cd /; LC_ALL=C ld --help 2>&1 \
+ | sed -ne '/supported targets:/!d
+ s/[ ][ ]*/ /g
+ s/.*supported targets: *//
+ s/ .*//
+ p'`
+ case "$ld_supported_targets" in
+ elf32-i386)
+ TENTATIVE="${UNAME_MACHINE}-pc-linux-gnu"
+ ;;
+ a.out-i386-linux)
+ echo "${UNAME_MACHINE}-pc-linux-gnuaout"
+ exit 0 ;;
+ coff-i386)
+ echo "${UNAME_MACHINE}-pc-linux-gnucoff"
+ exit 0 ;;
+ "")
+ # Either a pre-BFD a.out linker (linux-gnuoldld) or
+ # one that does not give us useful --help.
+ echo "${UNAME_MACHINE}-pc-linux-gnuoldld"
+ exit 0 ;;
+ esac
+ # Determine whether the default compiler is a.out or elf
+ eval $set_cc_for_build
+ sed 's/^ //' << EOF >$dummy.c
+ #include <features.h>
+ #ifdef __ELF__
+ # ifdef __GLIBC__
+ # if __GLIBC__ >= 2
+ LIBC=gnu
+ # else
+ LIBC=gnulibc1
+ # endif
+ # else
+ LIBC=gnulibc1
+ # endif
+ #else
+ #ifdef __INTEL_COMPILER
+ LIBC=gnu
+ #else
+ LIBC=gnuaout
+ #endif
+ #endif
+ #ifdef __dietlibc__
+ LIBC=dietlibc
+ #endif
+EOF
+ eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep ^LIBC=`
+ test x"${LIBC}" != x && echo "${UNAME_MACHINE}-pc-linux-${LIBC}" && exit 0
+ test x"${TENTATIVE}" != x && echo "${TENTATIVE}" && exit 0
+ ;;
+ i*86:DYNIX/ptx:4*:*)
+ # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there.
+ # earlier versions are messed up and put the nodename in both
+ # sysname and nodename.
+ echo i386-sequent-sysv4
+ exit 0 ;;
+ i*86:UNIX_SV:4.2MP:2.*)
+ # Unixware is an offshoot of SVR4, but it has its own version
+ # number series starting with 2...
+ # I am not positive that other SVR4 systems won't match this,
+ # I just have to hope. -- rms.
+ # Use sysv4.2uw... so that sysv4* matches it.
+ echo ${UNAME_MACHINE}-pc-sysv4.2uw${UNAME_VERSION}
+ exit 0 ;;
+ i*86:OS/2:*:*)
+ # If we were able to find `uname', then EMX Unix compatibility
+ # is probably installed.
+ echo ${UNAME_MACHINE}-pc-os2-emx
+ exit 0 ;;
+ i*86:XTS-300:*:STOP)
+ echo ${UNAME_MACHINE}-unknown-stop
+ exit 0 ;;
+ i*86:atheos:*:*)
+ echo ${UNAME_MACHINE}-unknown-atheos
+ exit 0 ;;
+ i*86:syllable:*:*)
+ echo ${UNAME_MACHINE}-pc-syllable
+ exit 0 ;;
+ i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.0*:*)
+ echo i386-unknown-lynxos${UNAME_RELEASE}
+ exit 0 ;;
+ i*86:*DOS:*:*)
+ echo ${UNAME_MACHINE}-pc-msdosdjgpp
+ exit 0 ;;
+ i*86:*:4.*:* | i*86:SYSTEM_V:4.*:*)
+ UNAME_REL=`echo ${UNAME_RELEASE} | sed 's/\/MP$//'`
+ if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then
+ echo ${UNAME_MACHINE}-univel-sysv${UNAME_REL}
+ else
+ echo ${UNAME_MACHINE}-pc-sysv${UNAME_REL}
+ fi
+ exit 0 ;;
+ i*86:*:5:[78]*)
+ case `/bin/uname -X | grep "^Machine"` in
+ *486*) UNAME_MACHINE=i486 ;;
+ *Pentium) UNAME_MACHINE=i586 ;;
+ *Pent*|*Celeron) UNAME_MACHINE=i686 ;;
+ esac
+ echo ${UNAME_MACHINE}-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION}
+ exit 0 ;;
+ i*86:*:3.2:*)
+ if test -f /usr/options/cb.name; then
+ UNAME_REL=`sed -n 's/.*Version //p' </usr/options/cb.name`
+ echo ${UNAME_MACHINE}-pc-isc$UNAME_REL
+ elif /bin/uname -X 2>/dev/null >/dev/null ; then
+ UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')`
+ (/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486
+ (/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \
+ && UNAME_MACHINE=i586
+ (/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \
+ && UNAME_MACHINE=i686
+ (/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \
+ && UNAME_MACHINE=i686
+ echo ${UNAME_MACHINE}-pc-sco$UNAME_REL
+ else
+ echo ${UNAME_MACHINE}-pc-sysv32
+ fi
+ exit 0 ;;
+ pc:*:*:*)
+ # Left here for compatibility:
+ # uname -m always prints 'pc' for DJGPP, but it prints nothing about
+ # the processor, so we play it safe by assuming i386.
+ echo i386-pc-msdosdjgpp
+ exit 0 ;;
+ Intel:Mach:3*:*)
+ echo i386-pc-mach3
+ exit 0 ;;
+ paragon:*:*:*)
+ echo i860-intel-osf1
+ exit 0 ;;
+ i860:*:4.*:*) # i860-SVR4
+ if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then
+ echo i860-stardent-sysv${UNAME_RELEASE} # Stardent Vistra i860-SVR4
+ else # Add other i860-SVR4 vendors below as they are discovered.
+ echo i860-unknown-sysv${UNAME_RELEASE} # Unknown i860-SVR4
+ fi
+ exit 0 ;;
+ mini*:CTIX:SYS*5:*)
+ # "miniframe"
+ echo m68010-convergent-sysv
+ exit 0 ;;
+ mc68k:UNIX:SYSTEM5:3.51m)
+ echo m68k-convergent-sysv
+ exit 0 ;;
+ M680?0:D-NIX:5.3:*)
+ echo m68k-diab-dnix
+ exit 0 ;;
+ M68*:*:R3V[567]*:*)
+ test -r /sysV68 && echo 'm68k-motorola-sysv' && exit 0 ;;
+ 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0)
+ OS_REL=''
+ test -r /etc/.relid \
+ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
+ /bin/uname -p 2>/dev/null | grep 86 >/dev/null \
+ && echo i486-ncr-sysv4.3${OS_REL} && exit 0
+ /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
+ && echo i586-ncr-sysv4.3${OS_REL} && exit 0 ;;
+ 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*)
+ /bin/uname -p 2>/dev/null | grep 86 >/dev/null \
+ && echo i486-ncr-sysv4 && exit 0 ;;
+ m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*)
+ echo m68k-unknown-lynxos${UNAME_RELEASE}
+ exit 0 ;;
+ mc68030:UNIX_System_V:4.*:*)
+ echo m68k-atari-sysv4
+ exit 0 ;;
+ TSUNAMI:LynxOS:2.*:*)
+ echo sparc-unknown-lynxos${UNAME_RELEASE}
+ exit 0 ;;
+ rs6000:LynxOS:2.*:*)
+ echo rs6000-unknown-lynxos${UNAME_RELEASE}
+ exit 0 ;;
+ PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.0*:*)
+ echo powerpc-unknown-lynxos${UNAME_RELEASE}
+ exit 0 ;;
+ SM[BE]S:UNIX_SV:*:*)
+ echo mips-dde-sysv${UNAME_RELEASE}
+ exit 0 ;;
+ RM*:ReliantUNIX-*:*:*)
+ echo mips-sni-sysv4
+ exit 0 ;;
+ RM*:SINIX-*:*:*)
+ echo mips-sni-sysv4
+ exit 0 ;;
+ *:SINIX-*:*:*)
+ if uname -p 2>/dev/null >/dev/null ; then
+ UNAME_MACHINE=`(uname -p) 2>/dev/null`
+ echo ${UNAME_MACHINE}-sni-sysv4
+ else
+ echo ns32k-sni-sysv
+ fi
+ exit 0 ;;
+ PENTIUM:*:4.0*:*) # Unisys `ClearPath HMP IX 4000' SVR4/MP effort
+ # says <Richard.M.Bartel at ccMail.Census.GOV>
+ echo i586-unisys-sysv4
+ exit 0 ;;
+ *:UNIX_System_V:4*:FTX*)
+ # From Gerald Hewes <hewes at openmarket.com>.
+ # How about differentiating between stratus architectures? -djm
+ echo hppa1.1-stratus-sysv4
+ exit 0 ;;
+ *:*:*:FTX*)
+ # From seanf at swdc.stratus.com.
+ echo i860-stratus-sysv4
+ exit 0 ;;
+ *:VOS:*:*)
+ # From Paul.Green at stratus.com.
+ echo hppa1.1-stratus-vos
+ exit 0 ;;
+ mc68*:A/UX:*:*)
+ echo m68k-apple-aux${UNAME_RELEASE}
+ exit 0 ;;
+ news*:NEWS-OS:6*:*)
+ echo mips-sony-newsos6
+ exit 0 ;;
+ R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*)
+ if [ -d /usr/nec ]; then
+ echo mips-nec-sysv${UNAME_RELEASE}
+ else
+ echo mips-unknown-sysv${UNAME_RELEASE}
+ fi
+ exit 0 ;;
+ BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only.
+ echo powerpc-be-beos
+ exit 0 ;;
+ BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only.
+ echo powerpc-apple-beos
+ exit 0 ;;
+ BePC:BeOS:*:*) # BeOS running on Intel PC compatible.
+ echo i586-pc-beos
+ exit 0 ;;
+ SX-4:SUPER-UX:*:*)
+ echo sx4-nec-superux${UNAME_RELEASE}
+ exit 0 ;;
+ SX-5:SUPER-UX:*:*)
+ echo sx5-nec-superux${UNAME_RELEASE}
+ exit 0 ;;
+ SX-6:SUPER-UX:*:*)
+ echo sx6-nec-superux${UNAME_RELEASE}
+ exit 0 ;;
+ Power*:Rhapsody:*:*)
+ echo powerpc-apple-rhapsody${UNAME_RELEASE}
+ exit 0 ;;
+ *:Rhapsody:*:*)
+ echo ${UNAME_MACHINE}-apple-rhapsody${UNAME_RELEASE}
+ exit 0 ;;
+ *:Darwin:*:*)
+ case `uname -p` in
+ *86) UNAME_PROCESSOR=i686 ;;
+ powerpc) UNAME_PROCESSOR=powerpc ;;
+ esac
+ echo ${UNAME_PROCESSOR}-apple-darwin${UNAME_RELEASE}
+ exit 0 ;;
+ *:procnto*:*:* | *:QNX:[0123456789]*:*)
+ UNAME_PROCESSOR=`uname -p`
+ if test "$UNAME_PROCESSOR" = "x86"; then
+ UNAME_PROCESSOR=i386
+ UNAME_MACHINE=pc
+ fi
+ echo ${UNAME_PROCESSOR}-${UNAME_MACHINE}-nto-qnx${UNAME_RELEASE}
+ exit 0 ;;
+ *:QNX:*:4*)
+ echo i386-pc-qnx
+ exit 0 ;;
+ NSR-?:NONSTOP_KERNEL:*:*)
+ echo nsr-tandem-nsk${UNAME_RELEASE}
+ exit 0 ;;
+ *:NonStop-UX:*:*)
+ echo mips-compaq-nonstopux
+ exit 0 ;;
+ BS2000:POSIX*:*:*)
+ echo bs2000-siemens-sysv
+ exit 0 ;;
+ DS/*:UNIX_System_V:*:*)
+ echo ${UNAME_MACHINE}-${UNAME_SYSTEM}-${UNAME_RELEASE}
+ exit 0 ;;
+ *:Plan9:*:*)
+ # "uname -m" is not consistent, so use $cputype instead. 386
+ # is converted to i386 for consistency with other x86
+ # operating systems.
+ if test "$cputype" = "386"; then
+ UNAME_MACHINE=i386
+ else
+ UNAME_MACHINE="$cputype"
+ fi
+ echo ${UNAME_MACHINE}-unknown-plan9
+ exit 0 ;;
+ *:TOPS-10:*:*)
+ echo pdp10-unknown-tops10
+ exit 0 ;;
+ *:TENEX:*:*)
+ echo pdp10-unknown-tenex
+ exit 0 ;;
+ KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*)
+ echo pdp10-dec-tops20
+ exit 0 ;;
+ XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*)
+ echo pdp10-xkl-tops20
+ exit 0 ;;
+ *:TOPS-20:*:*)
+ echo pdp10-unknown-tops20
+ exit 0 ;;
+ *:ITS:*:*)
+ echo pdp10-unknown-its
+ exit 0 ;;
+ SEI:*:*:SEIUX)
+ echo mips-sei-seiux${UNAME_RELEASE}
+ exit 0 ;;
+ *:DRAGONFLY:*:*)
+ echo ${UNAME_MACHINE}-unknown-dragonfly${UNAME_RELEASE}
+ exit 0 ;;
+esac
+
+#echo '(No uname command or uname output not recognized.)' 1>&2
+#echo "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" 1>&2
+
+eval $set_cc_for_build
+cat >$dummy.c <<EOF
+#ifdef _SEQUENT_
+# include <sys/types.h>
+# include <sys/utsname.h>
+#endif
+main ()
+{
+#if defined (sony)
+#if defined (MIPSEB)
+ /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed,
+ I don't know.... */
+ printf ("mips-sony-bsd\n"); exit (0);
+#else
+#include <sys/param.h>
+ printf ("m68k-sony-newsos%s\n",
+#ifdef NEWSOS4
+ "4"
+#else
+ ""
+#endif
+ ); exit (0);
+#endif
+#endif
+
+#if defined (__arm) && defined (__acorn) && defined (__unix)
+ printf ("arm-acorn-riscix"); exit (0);
+#endif
+
+#if defined (hp300) && !defined (hpux)
+ printf ("m68k-hp-bsd\n"); exit (0);
+#endif
+
+#if defined (NeXT)
+#if !defined (__ARCHITECTURE__)
+#define __ARCHITECTURE__ "m68k"
+#endif
+ int version;
+ version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`;
+ if (version < 4)
+ printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version);
+ else
+ printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version);
+ exit (0);
+#endif
+
+#if defined (MULTIMAX) || defined (n16)
+#if defined (UMAXV)
+ printf ("ns32k-encore-sysv\n"); exit (0);
+#else
+#if defined (CMU)
+ printf ("ns32k-encore-mach\n"); exit (0);
+#else
+ printf ("ns32k-encore-bsd\n"); exit (0);
+#endif
+#endif
+#endif
+
+#if defined (__386BSD__)
+ printf ("i386-pc-bsd\n"); exit (0);
+#endif
+
+#if defined (sequent)
+#if defined (i386)
+ printf ("i386-sequent-dynix\n"); exit (0);
+#endif
+#if defined (ns32000)
+ printf ("ns32k-sequent-dynix\n"); exit (0);
+#endif
+#endif
+
+#if defined (_SEQUENT_)
+ struct utsname un;
+
+ uname(&un);
+
+ if (strncmp(un.version, "V2", 2) == 0) {
+ printf ("i386-sequent-ptx2\n"); exit (0);
+ }
+ if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? */
+ printf ("i386-sequent-ptx1\n"); exit (0);
+ }
+ printf ("i386-sequent-ptx\n"); exit (0);
+
+#endif
+
+#if defined (vax)
+# if !defined (ultrix)
+# include <sys/param.h>
+# if defined (BSD)
+# if BSD == 43
+ printf ("vax-dec-bsd4.3\n"); exit (0);
+# else
+# if BSD == 199006
+ printf ("vax-dec-bsd4.3reno\n"); exit (0);
+# else
+ printf ("vax-dec-bsd\n"); exit (0);
+# endif
+# endif
+# else
+ printf ("vax-dec-bsd\n"); exit (0);
+# endif
+# else
+ printf ("vax-dec-ultrix\n"); exit (0);
+# endif
+#endif
+
+#if defined (alliant) && defined (i860)
+ printf ("i860-alliant-bsd\n"); exit (0);
+#endif
+
+ exit (1);
+}
+EOF
+
+$CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null && $dummy && exit 0
+
+# Apollos put the system type in the environment.
+
+test -d /usr/apollo && { echo ${ISP}-apollo-${SYSTYPE}; exit 0; }
+
+# Convex versions that predate uname can use getsysinfo(1)
+
+if [ -x /usr/convex/getsysinfo ]
+then
+ case `getsysinfo -f cpu_type` in
+ c1*)
+ echo c1-convex-bsd
+ exit 0 ;;
+ c2*)
+ if getsysinfo -f scalar_acc
+ then echo c32-convex-bsd
+ else echo c2-convex-bsd
+ fi
+ exit 0 ;;
+ c34*)
+ echo c34-convex-bsd
+ exit 0 ;;
+ c38*)
+ echo c38-convex-bsd
+ exit 0 ;;
+ c4*)
+ echo c4-convex-bsd
+ exit 0 ;;
+ esac
+fi
+
+cat >&2 <<EOF
+$0: unable to guess system type
+
+This script, last modified $timestamp, has failed to recognize
+the operating system you are using. It is advised that you
+download the most up-to-date version of the config scripts from
+
+ ftp://ftp.gnu.org/pub/gnu/config/
+
+If the version you run ($0) is already up to date, please send the
+following data, along with any other information you think might be
+pertinent, to <config-patches at gnu.org> so that the maintainers have
+what they need to handle your system.
+
+config.guess timestamp = $timestamp
+
+uname -m = `(uname -m) 2>/dev/null || echo unknown`
+uname -r = `(uname -r) 2>/dev/null || echo unknown`
+uname -s = `(uname -s) 2>/dev/null || echo unknown`
+uname -v = `(uname -v) 2>/dev/null || echo unknown`
+
+/usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null`
+/bin/uname -X = `(/bin/uname -X) 2>/dev/null`
+
+hostinfo = `(hostinfo) 2>/dev/null`
+/bin/universe = `(/bin/universe) 2>/dev/null`
+/usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null`
+/bin/arch = `(/bin/arch) 2>/dev/null`
+/usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null`
+/usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null`
+
+UNAME_MACHINE = ${UNAME_MACHINE}
+UNAME_RELEASE = ${UNAME_RELEASE}
+UNAME_SYSTEM = ${UNAME_SYSTEM}
+UNAME_VERSION = ${UNAME_VERSION}
+EOF
+
+exit 1
+
+# Local variables:
+# eval: (add-hook 'write-file-hooks 'time-stamp)
+# time-stamp-start: "timestamp='"
+# time-stamp-format: "%:y-%02m-%02d"
+# time-stamp-end: "'"
+# End:
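
(Illustration only, not part of the committed file: a minimal sketch of
running the config.guess script above, assuming a 32-bit x86 GNU/Linux
build host; the triplet printed will differ on other systems.)

    $ sh config.guess
    i686-pc-linux-gnu

When none of the uname-based patterns in the case statement match, the
script falls back to compiling the small $dummy.c program above with
$CC_FOR_BUILD; only if that also fails does it print the "unable to
guess system type" report and exit 1.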
Added: freeswitch/trunk/libs/sqlite/config.sub
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/config.sub Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1537 @@
+#! /bin/sh
+# Configuration validation subroutine script.
+# Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999,
+# 2000, 2001, 2002, 2003 Free Software Foundation, Inc.
+
+timestamp='2004-01-05'
+
+# This file is (in principle) common to ALL GNU software.
+# The presence of a machine in this file suggests that SOME GNU software
+# can handle that machine. It does not imply ALL GNU software can.
+#
+# This file is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330,
+# Boston, MA 02111-1307, USA.
+
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# Please send patches to <config-patches at gnu.org>. Submit a context
+# diff and a properly formatted ChangeLog entry.
+#
+# Configuration subroutine to validate and canonicalize a configuration type.
+# Supply the specified configuration type as an argument.
+# If it is invalid, we print an error message on stderr and exit with code 1.
+# Otherwise, we print the canonical config type on stdout and succeed.
+
+# This file is supposed to be the same for all GNU packages
+# and recognize all the CPU types, system types and aliases
+# that are meaningful with *any* GNU software.
+# Each package is responsible for reporting which valid configurations
+# it does not support. The user should be able to distinguish
+# a failure to support a valid configuration from a meaningless
+# configuration.
+
+# The goal of this file is to map all the various variations of a given
+# machine specification into a single specification in the form:
+# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM
+# or in some cases, the newer four-part form:
+# CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM
+# It is wrong to echo any other type of specification.
+
+me=`echo "$0" | sed -e 's,.*/,,'`
+
+usage="\
+Usage: $0 [OPTION] CPU-MFR-OPSYS
+ $0 [OPTION] ALIAS
+
+Canonicalize a configuration name.
+
+Operation modes:
+ -h, --help print this help, then exit
+ -t, --time-stamp print date of last modification, then exit
+ -v, --version print version number, then exit
+
+Report bugs and patches to <config-patches at gnu.org>."
+
+version="\
+GNU config.sub ($timestamp)
+
+Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001
+Free Software Foundation, Inc.
+
+This is free software; see the source for copying conditions. There is NO
+warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
+
+help="
+Try \`$me --help' for more information."
+
+# Parse command line
+while test $# -gt 0 ; do
+ case $1 in
+ --time-stamp | --time* | -t )
+ echo "$timestamp" ; exit 0 ;;
+ --version | -v )
+ echo "$version" ; exit 0 ;;
+ --help | --h* | -h )
+ echo "$usage"; exit 0 ;;
+ -- ) # Stop option processing
+ shift; break ;;
+ - ) # Use stdin as input.
+ break ;;
+ -* )
+ echo "$me: invalid option $1$help"
+ exit 1 ;;
+
+ *local*)
+ # First pass through any local machine types.
+ echo $1
+ exit 0;;
+
+ * )
+ break ;;
+ esac
+done
+
+case $# in
+ 0) echo "$me: missing argument$help" >&2
+ exit 1;;
+ 1) ;;
+ *) echo "$me: too many arguments$help" >&2
+ exit 1;;
+esac
+
+# Separate what the user gave into CPU-COMPANY and OS or KERNEL-OS (if any).
+# Here we must recognize all the valid KERNEL-OS combinations.
+maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'`
+case $maybe_os in
+ nto-qnx* | linux-gnu* | linux-dietlibc | linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | \
+ kfreebsd*-gnu* | knetbsd*-gnu* | netbsd*-gnu* | storm-chaos* | os2-emx* | rtmk-nova*)
+ os=-$maybe_os
+ basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`
+ ;;
+ *)
+ basic_machine=`echo $1 | sed 's/-[^-]*$//'`
+ if [ $basic_machine != $1 ]
+ then os=`echo $1 | sed 's/.*-/-/'`
+ else os=; fi
+ ;;
+esac
+
+### Let's recognize common machines as not being operating systems so
+### that things like config.sub decstation-3100 work. We also
+### recognize some manufacturers as not being operating systems, so we
+### can provide default operating systems below.
+case $os in
+ -sun*os*)
+ # Prevent following clause from handling this invalid input.
+ ;;
+ -dec* | -mips* | -sequent* | -encore* | -pc532* | -sgi* | -sony* | \
+ -att* | -7300* | -3300* | -delta* | -motorola* | -sun[234]* | \
+ -unicom* | -ibm* | -next | -hp | -isi* | -apollo | -altos* | \
+ -convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\
+ -c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \
+ -harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp | \
+ -apple | -axis)
+ os=
+ basic_machine=$1
+ ;;
+ -sim | -cisco | -oki | -wec | -winbond)
+ os=
+ basic_machine=$1
+ ;;
+ -scout)
+ ;;
+ -wrs)
+ os=-vxworks
+ basic_machine=$1
+ ;;
+ -chorusos*)
+ os=-chorusos
+ basic_machine=$1
+ ;;
+ -chorusrdb)
+ os=-chorusrdb
+ basic_machine=$1
+ ;;
+ -hiux*)
+ os=-hiuxwe2
+ ;;
+ -sco5)
+ os=-sco3.2v5
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -sco4)
+ os=-sco3.2v4
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -sco3.2.[4-9]*)
+ os=`echo $os | sed -e 's/sco3.2./sco3.2v/'`
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -sco3.2v[4-9]*)
+ # Don't forget version if it is 3.2v4 or newer.
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -sco*)
+ os=-sco3.2v2
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -udk*)
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -isc)
+ os=-isc2.2
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -clix*)
+ basic_machine=clipper-intergraph
+ ;;
+ -isc*)
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'`
+ ;;
+ -lynx*)
+ os=-lynxos
+ ;;
+ -ptx*)
+ basic_machine=`echo $1 | sed -e 's/86-.*/86-sequent/'`
+ ;;
+ -windowsnt*)
+ os=`echo $os | sed -e 's/windowsnt/winnt/'`
+ ;;
+ -psos*)
+ os=-psos
+ ;;
+ -mint | -mint[0-9]*)
+ basic_machine=m68k-atari
+ os=-mint
+ ;;
+esac
+
+# Decode aliases for certain CPU-COMPANY combinations.
+case $basic_machine in
+ # Recognize the basic CPU types without company name.
+ # Some are omitted here because they have special meanings below.
+ 1750a | 580 \
+ | a29k \
+ | alpha | alphaev[4-8] | alphaev56 | alphaev6[78] | alphapca5[67] \
+ | alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] | alpha64pca5[67] \
+ | am33_2.0 \
+ | arc | arm | arm[bl]e | arme[lb] | armv[2345] | armv[345][lb] | avr \
+ | c4x | clipper \
+ | d10v | d30v | dlx | dsp16xx \
+ | fr30 | frv \
+ | h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \
+ | i370 | i860 | i960 | ia64 \
+ | ip2k | iq2000 \
+ | m32r | m68000 | m68k | m88k | mcore \
+ | mips | mipsbe | mipseb | mipsel | mipsle \
+ | mips16 \
+ | mips64 | mips64el \
+ | mips64vr | mips64vrel \
+ | mips64orion | mips64orionel \
+ | mips64vr4100 | mips64vr4100el \
+ | mips64vr4300 | mips64vr4300el \
+ | mips64vr5000 | mips64vr5000el \
+ | mipsisa32 | mipsisa32el \
+ | mipsisa32r2 | mipsisa32r2el \
+ | mipsisa64 | mipsisa64el \
+ | mipsisa64r2 | mipsisa64r2el \
+ | mipsisa64sb1 | mipsisa64sb1el \
+ | mipsisa64sr71k | mipsisa64sr71kel \
+ | mipstx39 | mipstx39el \
+ | mn10200 | mn10300 \
+ | msp430 \
+ | ns16k | ns32k \
+ | openrisc | or32 \
+ | pdp10 | pdp11 | pj | pjl \
+ | powerpc | powerpc64 | powerpc64le | powerpcle | ppcbe \
+ | pyramid \
+ | sh | sh[1234] | sh[23]e | sh[34]eb | shbe | shle | sh[1234]le | sh3ele \
+ | sh64 | sh64le \
+ | sparc | sparc64 | sparc86x | sparclet | sparclite | sparcv9 | sparcv9b \
+ | strongarm \
+ | tahoe | thumb | tic4x | tic80 | tron \
+ | v850 | v850e \
+ | we32k \
+ | x86 | xscale | xstormy16 | xtensa \
+ | z8k)
+ basic_machine=$basic_machine-unknown
+ ;;
+ m6811 | m68hc11 | m6812 | m68hc12)
+ # Motorola 68HC11/12.
+ basic_machine=$basic_machine-unknown
+ os=-none
+ ;;
+ m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65 | z8k)
+ ;;
+
+ # We use `pc' rather than `unknown'
+ # because (1) that's what they normally are, and
+ # (2) the word "unknown" tends to confuse beginning users.
+ i*86 | x86_64)
+ basic_machine=$basic_machine-pc
+ ;;
+ # Object if there is more than one company-name word.
+ *-*-*)
+ echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2
+ exit 1
+ ;;
+ # Recognize the basic CPU types with company name.
+ 580-* \
+ | a29k-* \
+ | alpha-* | alphaev[4-8]-* | alphaev56-* | alphaev6[78]-* \
+ | alpha64-* | alpha64ev[4-8]-* | alpha64ev56-* | alpha64ev6[78]-* \
+ | alphapca5[67]-* | alpha64pca5[67]-* | arc-* \
+ | arm-* | armbe-* | armle-* | armeb-* | armv*-* \
+ | avr-* \
+ | bs2000-* \
+ | c[123]* | c30-* | [cjt]90-* | c4x-* | c54x-* | c55x-* | c6x-* \
+ | clipper-* | cydra-* \
+ | d10v-* | d30v-* | dlx-* \
+ | elxsi-* \
+ | f30[01]-* | f700-* | fr30-* | frv-* | fx80-* \
+ | h8300-* | h8500-* \
+ | hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \
+ | i*86-* | i860-* | i960-* | ia64-* \
+ | ip2k-* | iq2000-* \
+ | m32r-* \
+ | m68000-* | m680[012346]0-* | m68360-* | m683?2-* | m68k-* \
+ | m88110-* | m88k-* | mcore-* \
+ | mips-* | mipsbe-* | mipseb-* | mipsel-* | mipsle-* \
+ | mips16-* \
+ | mips64-* | mips64el-* \
+ | mips64vr-* | mips64vrel-* \
+ | mips64orion-* | mips64orionel-* \
+ | mips64vr4100-* | mips64vr4100el-* \
+ | mips64vr4300-* | mips64vr4300el-* \
+ | mips64vr5000-* | mips64vr5000el-* \
+ | mipsisa32-* | mipsisa32el-* \
+ | mipsisa32r2-* | mipsisa32r2el-* \
+ | mipsisa64-* | mipsisa64el-* \
+ | mipsisa64r2-* | mipsisa64r2el-* \
+ | mipsisa64sb1-* | mipsisa64sb1el-* \
+ | mipsisa64sr71k-* | mipsisa64sr71kel-* \
+ | mipstx39-* | mipstx39el-* \
+ | msp430-* \
+ | none-* | np1-* | nv1-* | ns16k-* | ns32k-* \
+ | orion-* \
+ | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \
+ | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* | ppcbe-* \
+ | pyramid-* \
+ | romp-* | rs6000-* \
+ | sh-* | sh[1234]-* | sh[23]e-* | sh[34]eb-* | shbe-* \
+ | shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \
+ | sparc-* | sparc64-* | sparc86x-* | sparclet-* | sparclite-* \
+ | sparcv9-* | sparcv9b-* | strongarm-* | sv1-* | sx?-* \
+ | tahoe-* | thumb-* \
+ | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \
+ | tron-* \
+ | v850-* | v850e-* | vax-* \
+ | we32k-* \
+ | x86-* | x86_64-* | xps100-* | xscale-* | xstormy16-* \
+ | xtensa-* \
+ | ymp-* \
+ | z8k-*)
+ ;;
+ # Recognize the various machine names and aliases which stand
+ # for a CPU type and a company and sometimes even an OS.
+ 386bsd)
+ basic_machine=i386-unknown
+ os=-bsd
+ ;;
+ 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc)
+ basic_machine=m68000-att
+ ;;
+ 3b*)
+ basic_machine=we32k-att
+ ;;
+ a29khif)
+ basic_machine=a29k-amd
+ os=-udi
+ ;;
+ adobe68k)
+ basic_machine=m68010-adobe
+ os=-scout
+ ;;
+ alliant | fx80)
+ basic_machine=fx80-alliant
+ ;;
+ altos | altos3068)
+ basic_machine=m68k-altos
+ ;;
+ am29k)
+ basic_machine=a29k-none
+ os=-bsd
+ ;;
+ amd64)
+ basic_machine=x86_64-pc
+ ;;
+ amd64-*)
+ basic_machine=x86_64-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ amdahl)
+ basic_machine=580-amdahl
+ os=-sysv
+ ;;
+ amiga | amiga-*)
+ basic_machine=m68k-unknown
+ ;;
+ amigaos | amigados)
+ basic_machine=m68k-unknown
+ os=-amigaos
+ ;;
+ amigaunix | amix)
+ basic_machine=m68k-unknown
+ os=-sysv4
+ ;;
+ apollo68)
+ basic_machine=m68k-apollo
+ os=-sysv
+ ;;
+ apollo68bsd)
+ basic_machine=m68k-apollo
+ os=-bsd
+ ;;
+ aux)
+ basic_machine=m68k-apple
+ os=-aux
+ ;;
+ balance)
+ basic_machine=ns32k-sequent
+ os=-dynix
+ ;;
+ c90)
+ basic_machine=c90-cray
+ os=-unicos
+ ;;
+ convex-c1)
+ basic_machine=c1-convex
+ os=-bsd
+ ;;
+ convex-c2)
+ basic_machine=c2-convex
+ os=-bsd
+ ;;
+ convex-c32)
+ basic_machine=c32-convex
+ os=-bsd
+ ;;
+ convex-c34)
+ basic_machine=c34-convex
+ os=-bsd
+ ;;
+ convex-c38)
+ basic_machine=c38-convex
+ os=-bsd
+ ;;
+ cray | j90)
+ basic_machine=j90-cray
+ os=-unicos
+ ;;
+ crds | unos)
+ basic_machine=m68k-crds
+ ;;
+ cris | cris-* | etrax*)
+ basic_machine=cris-axis
+ ;;
+ da30 | da30-*)
+ basic_machine=m68k-da30
+ ;;
+ decstation | decstation-3100 | pmax | pmax-* | pmin | dec3100 | decstatn)
+ basic_machine=mips-dec
+ ;;
+ decsystem10* | dec10*)
+ basic_machine=pdp10-dec
+ os=-tops10
+ ;;
+ decsystem20* | dec20*)
+ basic_machine=pdp10-dec
+ os=-tops20
+ ;;
+ delta | 3300 | motorola-3300 | motorola-delta \
+ | 3300-motorola | delta-motorola)
+ basic_machine=m68k-motorola
+ ;;
+ delta88)
+ basic_machine=m88k-motorola
+ os=-sysv3
+ ;;
+ dpx20 | dpx20-*)
+ basic_machine=rs6000-bull
+ os=-bosx
+ ;;
+ dpx2* | dpx2*-bull)
+ basic_machine=m68k-bull
+ os=-sysv3
+ ;;
+ ebmon29k)
+ basic_machine=a29k-amd
+ os=-ebmon
+ ;;
+ elxsi)
+ basic_machine=elxsi-elxsi
+ os=-bsd
+ ;;
+ encore | umax | mmax)
+ basic_machine=ns32k-encore
+ ;;
+ es1800 | OSE68k | ose68k | ose | OSE)
+ basic_machine=m68k-ericsson
+ os=-ose
+ ;;
+ fx2800)
+ basic_machine=i860-alliant
+ ;;
+ genix)
+ basic_machine=ns32k-ns
+ ;;
+ gmicro)
+ basic_machine=tron-gmicro
+ os=-sysv
+ ;;
+ go32)
+ basic_machine=i386-pc
+ os=-go32
+ ;;
+ h3050r* | hiux*)
+ basic_machine=hppa1.1-hitachi
+ os=-hiuxwe2
+ ;;
+ h8300hms)
+ basic_machine=h8300-hitachi
+ os=-hms
+ ;;
+ h8300xray)
+ basic_machine=h8300-hitachi
+ os=-xray
+ ;;
+ h8500hms)
+ basic_machine=h8500-hitachi
+ os=-hms
+ ;;
+ harris)
+ basic_machine=m88k-harris
+ os=-sysv3
+ ;;
+ hp300-*)
+ basic_machine=m68k-hp
+ ;;
+ hp300bsd)
+ basic_machine=m68k-hp
+ os=-bsd
+ ;;
+ hp300hpux)
+ basic_machine=m68k-hp
+ os=-hpux
+ ;;
+ hp3k9[0-9][0-9] | hp9[0-9][0-9])
+ basic_machine=hppa1.0-hp
+ ;;
+ hp9k2[0-9][0-9] | hp9k31[0-9])
+ basic_machine=m68000-hp
+ ;;
+ hp9k3[2-9][0-9])
+ basic_machine=m68k-hp
+ ;;
+ hp9k6[0-9][0-9] | hp6[0-9][0-9])
+ basic_machine=hppa1.0-hp
+ ;;
+ hp9k7[0-79][0-9] | hp7[0-79][0-9])
+ basic_machine=hppa1.1-hp
+ ;;
+ hp9k78[0-9] | hp78[0-9])
+ # FIXME: really hppa2.0-hp
+ basic_machine=hppa1.1-hp
+ ;;
+ hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893)
+ # FIXME: really hppa2.0-hp
+ basic_machine=hppa1.1-hp
+ ;;
+ hp9k8[0-9][13679] | hp8[0-9][13679])
+ basic_machine=hppa1.1-hp
+ ;;
+ hp9k8[0-9][0-9] | hp8[0-9][0-9])
+ basic_machine=hppa1.0-hp
+ ;;
+ hppa-next)
+ os=-nextstep3
+ ;;
+ hppaosf)
+ basic_machine=hppa1.1-hp
+ os=-osf
+ ;;
+ hppro)
+ basic_machine=hppa1.1-hp
+ os=-proelf
+ ;;
+ i370-ibm* | ibm*)
+ basic_machine=i370-ibm
+ ;;
+# I'm not sure what "Sysv32" means. Should this be sysv3.2?
+ i*86v32)
+ basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'`
+ os=-sysv32
+ ;;
+ i*86v4*)
+ basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'`
+ os=-sysv4
+ ;;
+ i*86v)
+ basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'`
+ os=-sysv
+ ;;
+ i*86sol2)
+ basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'`
+ os=-solaris2
+ ;;
+ i386mach)
+ basic_machine=i386-mach
+ os=-mach
+ ;;
+ i386-vsta | vsta)
+ basic_machine=i386-unknown
+ os=-vsta
+ ;;
+ iris | iris4d)
+ basic_machine=mips-sgi
+ case $os in
+ -irix*)
+ ;;
+ *)
+ os=-irix4
+ ;;
+ esac
+ ;;
+ isi68 | isi)
+ basic_machine=m68k-isi
+ os=-sysv
+ ;;
+ m88k-omron*)
+ basic_machine=m88k-omron
+ ;;
+ magnum | m3230)
+ basic_machine=mips-mips
+ os=-sysv
+ ;;
+ merlin)
+ basic_machine=ns32k-utek
+ os=-sysv
+ ;;
+ mingw32)
+ basic_machine=i386-pc
+ os=-mingw32
+ ;;
+ miniframe)
+ basic_machine=m68000-convergent
+ ;;
+ *mint | -mint[0-9]* | *MiNT | *MiNT[0-9]*)
+ basic_machine=m68k-atari
+ os=-mint
+ ;;
+ mips3*-*)
+ basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`
+ ;;
+ mips3*)
+ basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`-unknown
+ ;;
+ mmix*)
+ basic_machine=mmix-knuth
+ os=-mmixware
+ ;;
+ monitor)
+ basic_machine=m68k-rom68k
+ os=-coff
+ ;;
+ morphos)
+ basic_machine=powerpc-unknown
+ os=-morphos
+ ;;
+ msdos)
+ basic_machine=i386-pc
+ os=-msdos
+ ;;
+ mvs)
+ basic_machine=i370-ibm
+ os=-mvs
+ ;;
+ ncr3000)
+ basic_machine=i486-ncr
+ os=-sysv4
+ ;;
+ netbsd386)
+ basic_machine=i386-unknown
+ os=-netbsd
+ ;;
+ netwinder)
+ basic_machine=armv4l-rebel
+ os=-linux
+ ;;
+ news | news700 | news800 | news900)
+ basic_machine=m68k-sony
+ os=-newsos
+ ;;
+ news1000)
+ basic_machine=m68030-sony
+ os=-newsos
+ ;;
+ news-3600 | risc-news)
+ basic_machine=mips-sony
+ os=-newsos
+ ;;
+ necv70)
+ basic_machine=v70-nec
+ os=-sysv
+ ;;
+ next | m*-next )
+ basic_machine=m68k-next
+ case $os in
+ -nextstep* )
+ ;;
+ -ns2*)
+ os=-nextstep2
+ ;;
+ *)
+ os=-nextstep3
+ ;;
+ esac
+ ;;
+ nh3000)
+ basic_machine=m68k-harris
+ os=-cxux
+ ;;
+ nh[45]000)
+ basic_machine=m88k-harris
+ os=-cxux
+ ;;
+ nindy960)
+ basic_machine=i960-intel
+ os=-nindy
+ ;;
+ mon960)
+ basic_machine=i960-intel
+ os=-mon960
+ ;;
+ nonstopux)
+ basic_machine=mips-compaq
+ os=-nonstopux
+ ;;
+ np1)
+ basic_machine=np1-gould
+ ;;
+ nv1)
+ basic_machine=nv1-cray
+ os=-unicosmp
+ ;;
+ nsr-tandem)
+ basic_machine=nsr-tandem
+ ;;
+ op50n-* | op60c-*)
+ basic_machine=hppa1.1-oki
+ os=-proelf
+ ;;
+ or32 | or32-*)
+ basic_machine=or32-unknown
+ os=-coff
+ ;;
+ os400)
+ basic_machine=powerpc-ibm
+ os=-os400
+ ;;
+ OSE68000 | ose68000)
+ basic_machine=m68000-ericsson
+ os=-ose
+ ;;
+ os68k)
+ basic_machine=m68k-none
+ os=-os68k
+ ;;
+ pa-hitachi)
+ basic_machine=hppa1.1-hitachi
+ os=-hiuxwe2
+ ;;
+ paragon)
+ basic_machine=i860-intel
+ os=-osf
+ ;;
+ pbd)
+ basic_machine=sparc-tti
+ ;;
+ pbb)
+ basic_machine=m68k-tti
+ ;;
+ pc532 | pc532-*)
+ basic_machine=ns32k-pc532
+ ;;
+ pentium | p5 | k5 | k6 | nexgen | viac3)
+ basic_machine=i586-pc
+ ;;
+ pentiumpro | p6 | 6x86 | athlon | athlon_*)
+ basic_machine=i686-pc
+ ;;
+ pentiumii | pentium2 | pentiumiii | pentium3)
+ basic_machine=i686-pc
+ ;;
+ pentium4)
+ basic_machine=i786-pc
+ ;;
+ pentium-* | p5-* | k5-* | k6-* | nexgen-* | viac3-*)
+ basic_machine=i586-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ pentiumpro-* | p6-* | 6x86-* | athlon-*)
+ basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ pentiumii-* | pentium2-* | pentiumiii-* | pentium3-*)
+ basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ pentium4-*)
+ basic_machine=i786-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ pn)
+ basic_machine=pn-gould
+ ;;
+ power) basic_machine=power-ibm
+ ;;
+ ppc) basic_machine=powerpc-unknown
+ ;;
+ ppc-*) basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ ppcle | powerpclittle | ppc-le | powerpc-little)
+ basic_machine=powerpcle-unknown
+ ;;
+ ppcle-* | powerpclittle-*)
+ basic_machine=powerpcle-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ ppc64) basic_machine=powerpc64-unknown
+ ;;
+ ppc64-*) basic_machine=powerpc64-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ ppc64le | powerpc64little | ppc64-le | powerpc64-little)
+ basic_machine=powerpc64le-unknown
+ ;;
+ ppc64le-* | powerpc64little-*)
+ basic_machine=powerpc64le-`echo $basic_machine | sed 's/^[^-]*-//'`
+ ;;
+ ps2)
+ basic_machine=i386-ibm
+ ;;
+ pw32)
+ basic_machine=i586-unknown
+ os=-pw32
+ ;;
+ rom68k)
+ basic_machine=m68k-rom68k
+ os=-coff
+ ;;
+ rm[46]00)
+ basic_machine=mips-siemens
+ ;;
+ rtpc | rtpc-*)
+ basic_machine=romp-ibm
+ ;;
+ s390 | s390-*)
+ basic_machine=s390-ibm
+ ;;
+ s390x | s390x-*)
+ basic_machine=s390x-ibm
+ ;;
+ sa29200)
+ basic_machine=a29k-amd
+ os=-udi
+ ;;
+ sb1)
+ basic_machine=mipsisa64sb1-unknown
+ ;;
+ sb1el)
+ basic_machine=mipsisa64sb1el-unknown
+ ;;
+ sei)
+ basic_machine=mips-sei
+ os=-seiux
+ ;;
+ sequent)
+ basic_machine=i386-sequent
+ ;;
+ sh)
+ basic_machine=sh-hitachi
+ os=-hms
+ ;;
+ sh64)
+ basic_machine=sh64-unknown
+ ;;
+ sparclite-wrs | simso-wrs)
+ basic_machine=sparclite-wrs
+ os=-vxworks
+ ;;
+ sps7)
+ basic_machine=m68k-bull
+ os=-sysv2
+ ;;
+ spur)
+ basic_machine=spur-unknown
+ ;;
+ st2000)
+ basic_machine=m68k-tandem
+ ;;
+ stratus)
+ basic_machine=i860-stratus
+ os=-sysv4
+ ;;
+ sun2)
+ basic_machine=m68000-sun
+ ;;
+ sun2os3)
+ basic_machine=m68000-sun
+ os=-sunos3
+ ;;
+ sun2os4)
+ basic_machine=m68000-sun
+ os=-sunos4
+ ;;
+ sun3os3)
+ basic_machine=m68k-sun
+ os=-sunos3
+ ;;
+ sun3os4)
+ basic_machine=m68k-sun
+ os=-sunos4
+ ;;
+ sun4os3)
+ basic_machine=sparc-sun
+ os=-sunos3
+ ;;
+ sun4os4)
+ basic_machine=sparc-sun
+ os=-sunos4
+ ;;
+ sun4sol2)
+ basic_machine=sparc-sun
+ os=-solaris2
+ ;;
+ sun3 | sun3-*)
+ basic_machine=m68k-sun
+ ;;
+ sun4)
+ basic_machine=sparc-sun
+ ;;
+ sun386 | sun386i | roadrunner)
+ basic_machine=i386-sun
+ ;;
+ sv1)
+ basic_machine=sv1-cray
+ os=-unicos
+ ;;
+ symmetry)
+ basic_machine=i386-sequent
+ os=-dynix
+ ;;
+ t3e)
+ basic_machine=alphaev5-cray
+ os=-unicos
+ ;;
+ t90)
+ basic_machine=t90-cray
+ os=-unicos
+ ;;
+ tic54x | c54x*)
+ basic_machine=tic54x-unknown
+ os=-coff
+ ;;
+ tic55x | c55x*)
+ basic_machine=tic55x-unknown
+ os=-coff
+ ;;
+ tic6x | c6x*)
+ basic_machine=tic6x-unknown
+ os=-coff
+ ;;
+ tx39)
+ basic_machine=mipstx39-unknown
+ ;;
+ tx39el)
+ basic_machine=mipstx39el-unknown
+ ;;
+ toad1)
+ basic_machine=pdp10-xkl
+ os=-tops20
+ ;;
+ tower | tower-32)
+ basic_machine=m68k-ncr
+ ;;
+ tpf)
+ basic_machine=s390x-ibm
+ os=-tpf
+ ;;
+ udi29k)
+ basic_machine=a29k-amd
+ os=-udi
+ ;;
+ ultra3)
+ basic_machine=a29k-nyu
+ os=-sym1
+ ;;
+ v810 | necv810)
+ basic_machine=v810-nec
+ os=-none
+ ;;
+ vaxv)
+ basic_machine=vax-dec
+ os=-sysv
+ ;;
+ vms)
+ basic_machine=vax-dec
+ os=-vms
+ ;;
+ vpp*|vx|vx-*)
+ basic_machine=f301-fujitsu
+ ;;
+ vxworks960)
+ basic_machine=i960-wrs
+ os=-vxworks
+ ;;
+ vxworks68)
+ basic_machine=m68k-wrs
+ os=-vxworks
+ ;;
+ vxworks29k)
+ basic_machine=a29k-wrs
+ os=-vxworks
+ ;;
+ w65*)
+ basic_machine=w65-wdc
+ os=-none
+ ;;
+ w89k-*)
+ basic_machine=hppa1.1-winbond
+ os=-proelf
+ ;;
+ xps | xps100)
+ basic_machine=xps100-honeywell
+ ;;
+ ymp)
+ basic_machine=ymp-cray
+ os=-unicos
+ ;;
+ z8k-*-coff)
+ basic_machine=z8k-unknown
+ os=-sim
+ ;;
+ none)
+ basic_machine=none-none
+ os=-none
+ ;;
+
+# Here we handle the default manufacturer of certain CPU types. It is in
+# some cases the only manufacturer; in others, it is the most popular.
+ w89k)
+ basic_machine=hppa1.1-winbond
+ ;;
+ op50n)
+ basic_machine=hppa1.1-oki
+ ;;
+ op60c)
+ basic_machine=hppa1.1-oki
+ ;;
+ romp)
+ basic_machine=romp-ibm
+ ;;
+ rs6000)
+ basic_machine=rs6000-ibm
+ ;;
+ vax)
+ basic_machine=vax-dec
+ ;;
+ pdp10)
+ # there are many clones, so DEC is not a safe bet
+ basic_machine=pdp10-unknown
+ ;;
+ pdp11)
+ basic_machine=pdp11-dec
+ ;;
+ we32k)
+ basic_machine=we32k-att
+ ;;
+ sh3 | sh4 | sh[34]eb | sh[1234]le | sh[23]ele)
+ basic_machine=sh-unknown
+ ;;
+ sh64)
+ basic_machine=sh64-unknown
+ ;;
+ sparc | sparcv9 | sparcv9b)
+ basic_machine=sparc-sun
+ ;;
+ cydra)
+ basic_machine=cydra-cydrome
+ ;;
+ orion)
+ basic_machine=orion-highlevel
+ ;;
+ orion105)
+ basic_machine=clipper-highlevel
+ ;;
+ mac | mpw | mac-mpw)
+ basic_machine=m68k-apple
+ ;;
+ pmac | pmac-mpw)
+ basic_machine=powerpc-apple
+ ;;
+ *-unknown)
+ # Make sure to match an already-canonicalized machine name.
+ ;;
+ *)
+ echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2
+ exit 1
+ ;;
+esac
+
+# Here we canonicalize certain aliases for manufacturers.
+case $basic_machine in
+ *-digital*)
+ basic_machine=`echo $basic_machine | sed 's/digital.*/dec/'`
+ ;;
+ *-commodore*)
+ basic_machine=`echo $basic_machine | sed 's/commodore.*/cbm/'`
+ ;;
+ *)
+ ;;
+esac
+
+# Decode manufacturer-specific aliases for certain operating systems.
+
+if [ x"$os" != x"" ]
+then
+case $os in
+ # First match some system type aliases
+ # that might get confused with valid system types.
+ # -solaris* is a basic system type, with this one exception.
+ -solaris1 | -solaris1.*)
+ os=`echo $os | sed -e 's|solaris1|sunos4|'`
+ ;;
+ -solaris)
+ os=-solaris2
+ ;;
+ -svr4*)
+ os=-sysv4
+ ;;
+ -unixware*)
+ os=-sysv4.2uw
+ ;;
+ -gnu/linux*)
+ os=`echo $os | sed -e 's|gnu/linux|linux-gnu|'`
+ ;;
+ # First accept the basic system types.
+ # The portable systems come first.
+ # Each alternative MUST END IN A *, to match a version number.
+ # -sysv* is not here because it comes later, after sysvr4.
+ -gnu* | -bsd* | -mach* | -minix* | -genix* | -ultrix* | -irix* \
+ | -*vms* | -sco* | -esix* | -isc* | -aix* | -sunos | -sunos[34]*\
+ | -hpux* | -unos* | -osf* | -luna* | -dgux* | -solaris* | -sym* \
+ | -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \
+ | -aos* \
+ | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \
+ | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \
+ | -hiux* | -386bsd* | -knetbsd* | -netbsd* | -openbsd* | -kfreebsd* | -freebsd* | -riscix* \
+ | -lynxos* | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \
+ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \
+ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \
+ | -chorusos* | -chorusrdb* \
+ | -cygwin* | -pe* | -psos* | -moss* | -proelf* | -rtems* \
+ | -mingw32* | -linux-gnu* | -linux-uclibc* | -uxpv* | -beos* | -mpeix* | -udk* \
+ | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \
+ | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \
+ | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \
+ | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \
+ | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \
+ | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly*)
+ # Remember, each alternative MUST END IN *, to match a version number.
+ ;;
+ -qnx*)
+ case $basic_machine in
+ x86-* | i*86-*)
+ ;;
+ *)
+ os=-nto$os
+ ;;
+ esac
+ ;;
+ -nto-qnx*)
+ ;;
+ -nto*)
+ os=`echo $os | sed -e 's|nto|nto-qnx|'`
+ ;;
+ -sim | -es1800* | -hms* | -xray | -os68k* | -none* | -v88r* \
+ | -windows* | -osx | -abug | -netware* | -os9* | -beos* \
+ | -macos* | -mpw* | -magic* | -mmixware* | -mon960* | -lnews*)
+ ;;
+ -mac*)
+ os=`echo $os | sed -e 's|mac|macos|'`
+ ;;
+ -linux-dietlibc)
+ os=-linux-dietlibc
+ ;;
+ -linux*)
+ os=`echo $os | sed -e 's|linux|linux-gnu|'`
+ ;;
+ -sunos5*)
+ os=`echo $os | sed -e 's|sunos5|solaris2|'`
+ ;;
+ -sunos6*)
+ os=`echo $os | sed -e 's|sunos6|solaris3|'`
+ ;;
+ -opened*)
+ os=-openedition
+ ;;
+ -os400*)
+ os=-os400
+ ;;
+ -wince*)
+ os=-wince
+ ;;
+ -osfrose*)
+ os=-osfrose
+ ;;
+ -osf*)
+ os=-osf
+ ;;
+ -utek*)
+ os=-bsd
+ ;;
+ -dynix*)
+ os=-bsd
+ ;;
+ -acis*)
+ os=-aos
+ ;;
+ -atheos*)
+ os=-atheos
+ ;;
+ -syllable*)
+ os=-syllable
+ ;;
+ -386bsd)
+ os=-bsd
+ ;;
+ -ctix* | -uts*)
+ os=-sysv
+ ;;
+ -nova*)
+ os=-rtmk-nova
+ ;;
+ -ns2 )
+ os=-nextstep2
+ ;;
+ -nsk*)
+ os=-nsk
+ ;;
+ # Preserve the version number of sinix5.
+ -sinix5.*)
+ os=`echo $os | sed -e 's|sinix|sysv|'`
+ ;;
+ -sinix*)
+ os=-sysv4
+ ;;
+ -tpf*)
+ os=-tpf
+ ;;
+ -triton*)
+ os=-sysv3
+ ;;
+ -oss*)
+ os=-sysv3
+ ;;
+ -svr4)
+ os=-sysv4
+ ;;
+ -svr3)
+ os=-sysv3
+ ;;
+ -sysvr4)
+ os=-sysv4
+ ;;
+ # This must come after -sysvr4.
+ -sysv*)
+ ;;
+ -ose*)
+ os=-ose
+ ;;
+ -es1800*)
+ os=-ose
+ ;;
+ -xenix)
+ os=-xenix
+ ;;
+ -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*)
+ os=-mint
+ ;;
+ -aros*)
+ os=-aros
+ ;;
+ -kaos*)
+ os=-kaos
+ ;;
+ -none)
+ ;;
+ *)
+ # Get rid of the `-' at the beginning of $os.
+ os=`echo $os | sed 's/[^-]*-//'`
+ echo Invalid configuration \`$1\': system \`$os\' not recognized 1>&2
+ exit 1
+ ;;
+esac
+else
+
+# Here we handle the default operating systems that come with various machines.
+# The value should be what the vendor currently ships out the door with their
+# machine, or, put another way, the most popular OS provided with the machine.
+
+# Note that if you're going to try to match "-MANUFACTURER" here (say,
+# "-sun"), then you have to tell the case statement up towards the top
+# that MANUFACTURER isn't an operating system. Otherwise, code above
+# will signal an error saying that MANUFACTURER isn't an operating
+# system, and we'll never get to this point.
+
+case $basic_machine in
+ *-acorn)
+ os=-riscix1.2
+ ;;
+ arm*-rebel)
+ os=-linux
+ ;;
+ arm*-semi)
+ os=-aout
+ ;;
+ c4x-* | tic4x-*)
+ os=-coff
+ ;;
+ # This must come before the *-dec entry.
+ pdp10-*)
+ os=-tops20
+ ;;
+ pdp11-*)
+ os=-none
+ ;;
+ *-dec | vax-*)
+ os=-ultrix4.2
+ ;;
+ m68*-apollo)
+ os=-domain
+ ;;
+ i386-sun)
+ os=-sunos4.0.2
+ ;;
+ m68000-sun)
+ os=-sunos3
+ # This also exists in the configure program, but was not the
+ # default.
+ # os=-sunos4
+ ;;
+ m68*-cisco)
+ os=-aout
+ ;;
+ mips*-cisco)
+ os=-elf
+ ;;
+ mips*-*)
+ os=-elf
+ ;;
+ or32-*)
+ os=-coff
+ ;;
+ *-tti) # must be before sparc entry or we get the wrong os.
+ os=-sysv3
+ ;;
+ sparc-* | *-sun)
+ os=-sunos4.1.1
+ ;;
+ *-be)
+ os=-beos
+ ;;
+ *-ibm)
+ os=-aix
+ ;;
+ *-wec)
+ os=-proelf
+ ;;
+ *-winbond)
+ os=-proelf
+ ;;
+ *-oki)
+ os=-proelf
+ ;;
+ *-hp)
+ os=-hpux
+ ;;
+ *-hitachi)
+ os=-hiux
+ ;;
+ i860-* | *-att | *-ncr | *-altos | *-motorola | *-convergent)
+ os=-sysv
+ ;;
+ *-cbm)
+ os=-amigaos
+ ;;
+ *-dg)
+ os=-dgux
+ ;;
+ *-dolphin)
+ os=-sysv3
+ ;;
+ m68k-ccur)
+ os=-rtu
+ ;;
+ m88k-omron*)
+ os=-luna
+ ;;
+ *-next )
+ os=-nextstep
+ ;;
+ *-sequent)
+ os=-ptx
+ ;;
+ *-crds)
+ os=-unos
+ ;;
+ *-ns)
+ os=-genix
+ ;;
+ i370-*)
+ os=-mvs
+ ;;
+ *-next)
+ os=-nextstep3
+ ;;
+ *-gould)
+ os=-sysv
+ ;;
+ *-highlevel)
+ os=-bsd
+ ;;
+ *-encore)
+ os=-bsd
+ ;;
+ *-sgi)
+ os=-irix
+ ;;
+ *-siemens)
+ os=-sysv4
+ ;;
+ *-masscomp)
+ os=-rtu
+ ;;
+ f30[01]-fujitsu | f700-fujitsu)
+ os=-uxpv
+ ;;
+ *-rom68k)
+ os=-coff
+ ;;
+ *-*bug)
+ os=-coff
+ ;;
+ *-apple)
+ os=-macos
+ ;;
+ *-atari*)
+ os=-mint
+ ;;
+ *)
+ os=-none
+ ;;
+esac
+fi
+
+# Here we handle the case where we know the os and the CPU type, but not the
+# manufacturer. We pick the logical manufacturer.
+vendor=unknown
+case $basic_machine in
+ *-unknown)
+ case $os in
+ -riscix*)
+ vendor=acorn
+ ;;
+ -sunos*)
+ vendor=sun
+ ;;
+ -aix*)
+ vendor=ibm
+ ;;
+ -beos*)
+ vendor=be
+ ;;
+ -hpux*)
+ vendor=hp
+ ;;
+ -mpeix*)
+ vendor=hp
+ ;;
+ -hiux*)
+ vendor=hitachi
+ ;;
+ -unos*)
+ vendor=crds
+ ;;
+ -dgux*)
+ vendor=dg
+ ;;
+ -luna*)
+ vendor=omron
+ ;;
+ -genix*)
+ vendor=ns
+ ;;
+ -mvs* | -opened*)
+ vendor=ibm
+ ;;
+ -os400*)
+ vendor=ibm
+ ;;
+ -ptx*)
+ vendor=sequent
+ ;;
+ -tpf*)
+ vendor=ibm
+ ;;
+ -vxsim* | -vxworks* | -windiss*)
+ vendor=wrs
+ ;;
+ -aux*)
+ vendor=apple
+ ;;
+ -hms*)
+ vendor=hitachi
+ ;;
+ -mpw* | -macos*)
+ vendor=apple
+ ;;
+ -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*)
+ vendor=atari
+ ;;
+ -vos*)
+ vendor=stratus
+ ;;
+ esac
+ basic_machine=`echo $basic_machine | sed "s/unknown/$vendor/"`
+ ;;
+esac
+
+echo $basic_machine$os
+exit 0
+
+# Local variables:
+# eval: (add-hook 'write-file-hooks 'time-stamp)
+# time-stamp-start: "timestamp='"
+# time-stamp-format: "%:y-%02m-%02d"
+# time-stamp-end: "'"
+# End:
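
(Illustration only, not part of the committed file: two hedged examples
of the canonicalization performed by the config.sub script above,
traced from the case arms shown; on a POSIX shell they should print the
triplets below.)

    $ sh config.sub i686-linux
    i686-pc-linux-gnu
    $ sh config.sub sun4
    sparc-sun-sunos4.1.1

The first call fills in the default `pc' manufacturer for an i*86 CPU
and rewrites the OS alias linux to linux-gnu; the second resolves the
machine alias sun4 to sparc-sun and then applies the default operating
system for Sun hardware.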
Added: freeswitch/trunk/libs/sqlite/configure
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/configure Tue Dec 19 15:11:50 2006
@@ -0,0 +1,21682 @@
+#! /bin/sh
+# Guess values for system-dependent variables and create Makefiles.
+# Generated by GNU Autoconf 2.59.
+#
+# Copyright (C) 2003 Free Software Foundation, Inc.
+# This configure script is free software; the Free Software Foundation
+# gives unlimited permission to copy, distribute and modify it.
+## --------------------- ##
+## M4sh Initialization. ##
+## --------------------- ##
+
+# Be Bourne compatible
+if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
+ emulate sh
+ NULLCMD=:
+ # Zsh 3.x and 4.x perform word splitting on ${1+"$@"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '${1+"$@"}'='"$@"'
+elif test -n "${BASH_VERSION+set}" && (set -o posix) >/dev/null 2>&1; then
+ set -o posix
+fi
+DUALCASE=1; export DUALCASE # for MKS sh
+
+# Support unset when possible.
+if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then
+ as_unset=unset
+else
+ as_unset=false
+fi
+
+
+# Work around bugs in pre-3.0 UWIN ksh.
+$as_unset ENV MAIL MAILPATH
+PS1='$ '
+PS2='> '
+PS4='+ '
+
+# NLS nuisances.
+for as_var in \
+ LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \
+ LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \
+ LC_TELEPHONE LC_TIME
+do
+ if (set +x; test -z "`(eval $as_var=C; export $as_var) 2>&1`"); then
+ eval $as_var=C; export $as_var
+ else
+ $as_unset $as_var
+ fi
+done
+
+# Required to use basename.
+if expr a : '\(a\)' >/dev/null 2>&1; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+if (basename /) >/dev/null 2>&1 && test "X`basename / 2>&1`" = "X/"; then
+ as_basename=basename
+else
+ as_basename=false
+fi
+
+
+# Name of the executable.
+as_me=`$as_basename "$0" ||
+$as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \
+ X"$0" : 'X\(//\)$' \| \
+ X"$0" : 'X\(/\)$' \| \
+ . : '\(.\)' 2>/dev/null ||
+echo X/"$0" |
+ sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/; q; }
+ /^X\/\(\/\/\)$/{ s//\1/; q; }
+ /^X\/\(\/\).*/{ s//\1/; q; }
+ s/.*/./; q'`
+
+
+# PATH needs CR, and LINENO needs CR and PATH.
+# Avoid depending upon Character Ranges.
+as_cr_letters='abcdefghijklmnopqrstuvwxyz'
+as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
+as_cr_Letters=$as_cr_letters$as_cr_LETTERS
+as_cr_digits='0123456789'
+as_cr_alnum=$as_cr_Letters$as_cr_digits
+
+# The user is always right.
+if test "${PATH_SEPARATOR+set}" != set; then
+ echo "#! /bin/sh" >conf$$.sh
+ echo "exit 0" >>conf$$.sh
+ chmod +x conf$$.sh
+ if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then
+ PATH_SEPARATOR=';'
+ else
+ PATH_SEPARATOR=:
+ fi
+ rm -f conf$$.sh
+fi
+
+
+ as_lineno_1=$LINENO
+ as_lineno_2=$LINENO
+ as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null`
+ test "x$as_lineno_1" != "x$as_lineno_2" &&
+ test "x$as_lineno_3" = "x$as_lineno_2" || {
+ # Find who we are. Look for ourselves in $PATH if our name contains
+ # no path component at all, whether relative or absolute.
+ case $0 in
+ *[\\/]* ) as_myself=$0 ;;
+ *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break
+done
+
+ ;;
+ esac
+ # We did not find ourselves; most probably we were run as `sh COMMAND',
+ # in which case we are not to be found in the path.
+ if test "x$as_myself" = x; then
+ as_myself=$0
+ fi
+ if test ! -f "$as_myself"; then
+ { echo "$as_me: error: cannot find myself; rerun with an absolute path" >&2
+ { (exit 1); exit 1; }; }
+ fi
+ case $CONFIG_SHELL in
+ '')
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for as_base in sh bash ksh sh5; do
+ case $as_dir in
+ /*)
+ if ("$as_dir/$as_base" -c '
+ as_lineno_1=$LINENO
+ as_lineno_2=$LINENO
+ as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null`
+ test "x$as_lineno_1" != "x$as_lineno_2" &&
+ test "x$as_lineno_3" = "x$as_lineno_2" ') 2>/dev/null; then
+ $as_unset BASH_ENV || test "${BASH_ENV+set}" != set || { BASH_ENV=; export BASH_ENV; }
+ $as_unset ENV || test "${ENV+set}" != set || { ENV=; export ENV; }
+ CONFIG_SHELL=$as_dir/$as_base
+ export CONFIG_SHELL
+ exec "$CONFIG_SHELL" "$0" ${1+"$@"}
+ fi;;
+ esac
+ done
+done
+;;
+ esac
+
+ # Create $as_me.lineno as a copy of $as_myself, but with $LINENO
+ # uniformly replaced by the line number. The first 'sed' inserts a
+ # line-number line before each line; the second 'sed' does the real
+ # work. The second script uses 'N' to pair each line-number line
+ # with the numbered line, and appends trailing '-' during
+ # substitution so that $LINENO is not a special case at line end.
+ # (Raja R Harinath suggested sed '=', and Paul Eggert wrote the
+ # second 'sed' script. Blame Lee E. McMahon for sed's syntax. :-)
+ sed '=' <$as_myself |
+ sed '
+ N
+ s,$,-,
+ : loop
+ s,^\(['$as_cr_digits']*\)\(.*\)[$]LINENO\([^'$as_cr_alnum'_]\),\1\2\1\3,
+ t loop
+ s,-$,,
+ s,^['$as_cr_digits']*\n,,
+ ' >$as_me.lineno &&
+ chmod +x $as_me.lineno ||
+ { echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2
+ { (exit 1); exit 1; }; }
+
+ # Don't try to exec, as it changes $[0], causing all sorts of problems
+ # (the dirname of $[0] is not the place where we might find the
+ # original, and so on; Autoconf is especially sensitive to this).
+ . ./$as_me.lineno
+ # Exit status is that of the last command.
+ exit
+}
+
+
+case `echo "testing\c"; echo 1,2,3`,`echo -n testing; echo 1,2,3` in
+ *c*,-n*) ECHO_N= ECHO_C='
+' ECHO_T=' ' ;;
+ *c*,* ) ECHO_N=-n ECHO_C= ECHO_T= ;;
+ *) ECHO_N= ECHO_C='\c' ECHO_T= ;;
+esac
+
+if expr a : '\(a\)' >/dev/null 2>&1; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+rm -f conf$$ conf$$.exe conf$$.file
+echo >conf$$.file
+if ln -s conf$$.file conf$$ 2>/dev/null; then
+ # We could just check for DJGPP; but this test a) works, b) is more generic
+ # and c) will remain valid once DJGPP supports symlinks (DJGPP 2.04).
+ if test -f conf$$.exe; then
+ # Don't use ln at all; we don't have any links
+ as_ln_s='cp -p'
+ else
+ as_ln_s='ln -s'
+ fi
+elif ln conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s=ln
+else
+ as_ln_s='cp -p'
+fi
+rm -f conf$$ conf$$.exe conf$$.file
+
+if mkdir -p . 2>/dev/null; then
+ as_mkdir_p=:
+else
+ test -d ./-p && rmdir ./-p
+ as_mkdir_p=false
+fi
+
+as_executable_p="test -f"
+
+# Sed expression to map a string onto a valid CPP name.
+as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'"
+
+# Sed expression to map a string onto a valid variable name.
+as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'"
+
+
+# IFS
+# We need space, tab and new line, in precisely that order.
+as_nl='
+'
+IFS=" $as_nl"
+
+# CDPATH.
+$as_unset CDPATH
+
+
+
+# Check that we are running under the correct shell.
+SHELL=${CONFIG_SHELL-/bin/sh}
+
+case X$ECHO in
+X*--fallback-echo)
+ # Remove one level of quotation (which was required for Make).
+ ECHO=`echo "$ECHO" | sed 's,\\\\\$\\$0,'$0','`
+ ;;
+esac
+
+echo=${ECHO-echo}
+if test "X$1" = X--no-reexec; then
+ # Discard the --no-reexec flag, and continue.
+ shift
+elif test "X$1" = X--fallback-echo; then
+ # Avoid inline document here, it may be left over
+ :
+elif test "X`($echo '\t') 2>/dev/null`" = 'X\t' ; then
+ # Yippee, $echo works!
+ :
+else
+ # Restart under the correct shell.
+ exec $SHELL "$0" --no-reexec ${1+"$@"}
+fi
+
+if test "X$1" = X--fallback-echo; then
+ # used as fallback echo
+ shift
+ cat <<EOF
+$*
+EOF
+ exit 0
+fi
+
+# The HP-UX ksh and POSIX shell print the target directory to stdout
+# if CDPATH is set.
+if test "X${CDPATH+set}" = Xset; then CDPATH=:; export CDPATH; fi
+
+if test -z "$ECHO"; then
+if test "X${echo_test_string+set}" != Xset; then
+# find a string as large as possible, as long as the shell can cope with it
+ for cmd in 'sed 50q "$0"' 'sed 20q "$0"' 'sed 10q "$0"' 'sed 2q "$0"' 'echo test'; do
+ # expected sizes: less than 2Kb, 1Kb, 512 bytes, 16 bytes, ...
+ if (echo_test_string="`eval $cmd`") 2>/dev/null &&
+ echo_test_string="`eval $cmd`" &&
+ (test "X$echo_test_string" = "X$echo_test_string") 2>/dev/null
+ then
+ break
+ fi
+ done
+fi
+
+if test "X`($echo '\t') 2>/dev/null`" = 'X\t' &&
+ echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ :
+else
+ # The Solaris, AIX, and Digital Unix default echo programs unquote
+ # backslashes. This makes it impossible to quote backslashes using
+ # echo "$something" | sed 's/\\/\\\\/g'
+ #
+ # So, first we look for a working echo in the user's PATH.
+
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+ for dir in $PATH /usr/ucb; do
+ IFS="$lt_save_ifs"
+ if (test -f $dir/echo || test -f $dir/echo$ac_exeext) &&
+ test "X`($dir/echo '\t') 2>/dev/null`" = 'X\t' &&
+ echo_testing_string=`($dir/echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ echo="$dir/echo"
+ break
+ fi
+ done
+ IFS="$lt_save_ifs"
+
+ if test "X$echo" = Xecho; then
+ # We didn't find a better echo, so look for alternatives.
+ if test "X`(print -r '\t') 2>/dev/null`" = 'X\t' &&
+ echo_testing_string=`(print -r "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ # This shell has a builtin print -r that does the trick.
+ echo='print -r'
+ elif (test -f /bin/ksh || test -f /bin/ksh$ac_exeext) &&
+ test "X$CONFIG_SHELL" != X/bin/ksh; then
+ # If we have ksh, try running configure again with it.
+ ORIGINAL_CONFIG_SHELL=${CONFIG_SHELL-/bin/sh}
+ export ORIGINAL_CONFIG_SHELL
+ CONFIG_SHELL=/bin/ksh
+ export CONFIG_SHELL
+ exec $CONFIG_SHELL "$0" --no-reexec ${1+"$@"}
+ else
+ # Try using printf.
+ echo='printf %s\n'
+ if test "X`($echo '\t') 2>/dev/null`" = 'X\t' &&
+ echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ # Cool, printf works
+ :
+ elif echo_testing_string=`($ORIGINAL_CONFIG_SHELL "$0" --fallback-echo '\t') 2>/dev/null` &&
+ test "X$echo_testing_string" = 'X\t' &&
+ echo_testing_string=`($ORIGINAL_CONFIG_SHELL "$0" --fallback-echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ CONFIG_SHELL=$ORIGINAL_CONFIG_SHELL
+ export CONFIG_SHELL
+ SHELL="$CONFIG_SHELL"
+ export SHELL
+ echo="$CONFIG_SHELL $0 --fallback-echo"
+ elif echo_testing_string=`($CONFIG_SHELL "$0" --fallback-echo '\t') 2>/dev/null` &&
+ test "X$echo_testing_string" = 'X\t' &&
+ echo_testing_string=`($CONFIG_SHELL "$0" --fallback-echo "$echo_test_string") 2>/dev/null` &&
+ test "X$echo_testing_string" = "X$echo_test_string"; then
+ echo="$CONFIG_SHELL $0 --fallback-echo"
+ else
+ # maybe with a smaller string...
+ prev=:
+
+ for cmd in 'echo test' 'sed 2q "$0"' 'sed 10q "$0"' 'sed 20q "$0"' 'sed 50q "$0"'; do
+ if (test "X$echo_test_string" = "X`eval $cmd`") 2>/dev/null
+ then
+ break
+ fi
+ prev="$cmd"
+ done
+
+ if test "$prev" != 'sed 50q "$0"'; then
+ echo_test_string=`eval $prev`
+ export echo_test_string
+ exec ${ORIGINAL_CONFIG_SHELL-${CONFIG_SHELL-/bin/sh}} "$0" ${1+"$@"}
+ else
+ # Oops. We lost completely, so just stick with echo.
+ echo=echo
+ fi
+ fi
+ fi
+ fi
+fi
+fi
+
+# Copy echo and quote the copy suitably for passing to libtool from
+# the Makefile, instead of quoting the original, which is used later.
+ECHO=$echo
+if test "X$ECHO" = "X$CONFIG_SHELL $0 --fallback-echo"; then
+ ECHO="$CONFIG_SHELL \\\$\$0 --fallback-echo"
+fi
+
+
+
+
+tagnames=${tagnames+${tagnames},}CXX
+
+tagnames=${tagnames+${tagnames},}F77
+
+# Name of the host.
+# hostname on some systems (SVR3.2, Linux) returns a bogus exit status,
+# so uname gets run too.
+ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q`
+
+exec 6>&1
+
+#
+# Initializations.
+#
+ac_default_prefix=/usr/local
+ac_config_libobj_dir=.
+cross_compiling=no
+subdirs=
+MFLAGS=
+MAKEFLAGS=
+SHELL=${CONFIG_SHELL-/bin/sh}
+
+# Maximum number of lines to put in a shell here document.
+# This variable seems obsolete. It should probably be removed, and
+# only ac_max_sed_lines should be used.
+: ${ac_max_here_lines=38}
+
+# Identity of this package.
+PACKAGE_NAME=
+PACKAGE_TARNAME=
+PACKAGE_VERSION=
+PACKAGE_STRING=
+PACKAGE_BUGREPORT=
+
+ac_unique_file="src/sqlite.h.in"
+# Factoring default headers for most tests.
+ac_includes_default="\
+#include <stdio.h>
+#if HAVE_SYS_TYPES_H
+# include <sys/types.h>
+#endif
+#if HAVE_SYS_STAT_H
+# include <sys/stat.h>
+#endif
+#if STDC_HEADERS
+# include <stdlib.h>
+# include <stddef.h>
+#else
+# if HAVE_STDLIB_H
+# include <stdlib.h>
+# endif
+#endif
+#if HAVE_STRING_H
+# if !STDC_HEADERS && HAVE_MEMORY_H
+# include <memory.h>
+# endif
+# include <string.h>
+#endif
+#if HAVE_STRINGS_H
+# include <strings.h>
+#endif
+#if HAVE_INTTYPES_H
+# include <inttypes.h>
+#else
+# if HAVE_STDINT_H
+# include <stdint.h>
+# endif
+#endif
+#if HAVE_UNISTD_H
+# include <unistd.h>
+#endif"
+
+ac_subst_vars='SHELL PATH_SEPARATOR PACKAGE_NAME PACKAGE_TARNAME PACKAGE_VERSION PACKAGE_STRING PACKAGE_BUGREPORT exec_prefix prefix program_transform_name bindir sbindir libexecdir datadir sysconfdir sharedstatedir localstatedir libdir includedir oldincludedir infodir mandir build_alias host_alias target_alias DEFS ECHO_C ECHO_N ECHO_T LIBS build build_cpu build_vendor build_os host host_cpu host_vendor host_os CC CFLAGS LDFLAGS CPPFLAGS ac_ct_CC EXEEXT OBJEXT EGREP LN_S ECHO AR ac_ct_AR RANLIB ac_ct_RANLIB STRIP ac_ct_STRIP CPP CXX CXXFLAGS ac_ct_CXX CXXCPP F77 FFLAGS ac_ct_F77 LIBTOOL INSTALL_PROGRAM INSTALL_SCRIPT INSTALL_DATA AWK program_prefix VERSION RELEASE VERSION_NUMBER BUILD_CC BUILD_CFLAGS BUILD_LIBS TARGET_CC TARGET_CFLAGS TARGET_LINK TARGET_LFLAGS TARGET_RANLIB TARGET_AR THREADSAFE TARGET_THREAD_LIB XTHREADCONNECT THREADSOVERRIDELOCKS ALLOWRELEASE TEMP_STORE BUILD_EXEEXT OS_UNIX OS_WIN OS_OS2 TARGET_EXEEXT TCL_VERSION TCL_BIN_DIR TCL_SRC_DIR TCL_LIBS TCL_INCLUDE_SPEC TCL_LIB_FILE TCL_LIB_FLAG TCL_LIB_SPEC TCL_STUB_LIB_FILE TCL_STUB_LIB_FLAG TCL_STUB_LIB_SPEC HAVE_TCL TARGET_READLINE_LIBS TARGET_READLINE_INC TARGET_HAVE_READLINE TARGET_DEBUG TARGET_LIBS LIBOBJS LTLIBOBJS'
+ac_subst_files=''
+
+# Initialize some variables set by options.
+ac_init_help=
+ac_init_version=false
+# The variables have the same names as the options, with
+# dashes changed to underlines.
+cache_file=/dev/null
+exec_prefix=NONE
+no_create=
+no_recursion=
+prefix=NONE
+program_prefix=NONE
+program_suffix=NONE
+program_transform_name=s,x,x,
+silent=
+site=
+srcdir=
+verbose=
+x_includes=NONE
+x_libraries=NONE
+
+# Installation directory options.
+# These are left unexpanded so users can "make install exec_prefix=/foo"
+# and all the variables that are supposed to be based on exec_prefix
+# by default will actually change.
+# Use braces instead of parens because sh, perl, etc. also accept them.
+bindir='${exec_prefix}/bin'
+sbindir='${exec_prefix}/sbin'
+libexecdir='${exec_prefix}/libexec'
+datadir='${prefix}/share'
+sysconfdir='${prefix}/etc'
+sharedstatedir='${prefix}/com'
+localstatedir='${prefix}/var'
+libdir='${exec_prefix}/lib'
+includedir='${prefix}/include'
+oldincludedir='/usr/include'
+infodir='${prefix}/info'
+mandir='${prefix}/man'
+
+ac_prev=
+for ac_option
+do
+ # If the previous option needs an argument, assign it.
+ if test -n "$ac_prev"; then
+ eval "$ac_prev=\$ac_option"
+ ac_prev=
+ continue
+ fi
+
+ ac_optarg=`expr "x$ac_option" : 'x[^=]*=\(.*\)'`
+
+ # Accept the important Cygnus configure options, so we can diagnose typos.
+
+ case $ac_option in
+
+ -bindir | --bindir | --bindi | --bind | --bin | --bi)
+ ac_prev=bindir ;;
+ -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*)
+ bindir=$ac_optarg ;;
+
+ -build | --build | --buil | --bui | --bu)
+ ac_prev=build_alias ;;
+ -build=* | --build=* | --buil=* | --bui=* | --bu=*)
+ build_alias=$ac_optarg ;;
+
+ -cache-file | --cache-file | --cache-fil | --cache-fi \
+ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c)
+ ac_prev=cache_file ;;
+ -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \
+ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*)
+ cache_file=$ac_optarg ;;
+
+ --config-cache | -C)
+ cache_file=config.cache ;;
+
+ -datadir | --datadir | --datadi | --datad | --data | --dat | --da)
+ ac_prev=datadir ;;
+ -datadir=* | --datadir=* | --datadi=* | --datad=* | --data=* | --dat=* \
+ | --da=*)
+ datadir=$ac_optarg ;;
+
+ -disable-* | --disable-*)
+ ac_feature=`expr "x$ac_option" : 'x-*disable-\(.*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_feature" : ".*[^-_$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid feature name: $ac_feature" >&2
+ { (exit 1); exit 1; }; }
+ ac_feature=`echo $ac_feature | sed 's/-/_/g'`
+ eval "enable_$ac_feature=no" ;;
+
+ -enable-* | --enable-*)
+ ac_feature=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_feature" : ".*[^-_$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid feature name: $ac_feature" >&2
+ { (exit 1); exit 1; }; }
+ ac_feature=`echo $ac_feature | sed 's/-/_/g'`
+ case $ac_option in
+ *=*) ac_optarg=`echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"`;;
+ *) ac_optarg=yes ;;
+ esac
+ eval "enable_$ac_feature='$ac_optarg'" ;;
+
+ -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \
+ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \
+ | --exec | --exe | --ex)
+ ac_prev=exec_prefix ;;
+ -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \
+ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \
+ | --exec=* | --exe=* | --ex=*)
+ exec_prefix=$ac_optarg ;;
+
+ -gas | --gas | --ga | --g)
+ # Obsolete; use --with-gas.
+ with_gas=yes ;;
+
+ -help | --help | --hel | --he | -h)
+ ac_init_help=long ;;
+ -help=r* | --help=r* | --hel=r* | --he=r* | -hr*)
+ ac_init_help=recursive ;;
+ -help=s* | --help=s* | --hel=s* | --he=s* | -hs*)
+ ac_init_help=short ;;
+
+ -host | --host | --hos | --ho)
+ ac_prev=host_alias ;;
+ -host=* | --host=* | --hos=* | --ho=*)
+ host_alias=$ac_optarg ;;
+
+ -includedir | --includedir | --includedi | --included | --include \
+ | --includ | --inclu | --incl | --inc)
+ ac_prev=includedir ;;
+ -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \
+ | --includ=* | --inclu=* | --incl=* | --inc=*)
+ includedir=$ac_optarg ;;
+
+ -infodir | --infodir | --infodi | --infod | --info | --inf)
+ ac_prev=infodir ;;
+ -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*)
+ infodir=$ac_optarg ;;
+
+ -libdir | --libdir | --libdi | --libd)
+ ac_prev=libdir ;;
+ -libdir=* | --libdir=* | --libdi=* | --libd=*)
+ libdir=$ac_optarg ;;
+
+ -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \
+ | --libexe | --libex | --libe)
+ ac_prev=libexecdir ;;
+ -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \
+ | --libexe=* | --libex=* | --libe=*)
+ libexecdir=$ac_optarg ;;
+
+ -localstatedir | --localstatedir | --localstatedi | --localstated \
+ | --localstate | --localstat | --localsta | --localst \
+ | --locals | --local | --loca | --loc | --lo)
+ ac_prev=localstatedir ;;
+ -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \
+ | --localstate=* | --localstat=* | --localsta=* | --localst=* \
+ | --locals=* | --local=* | --loca=* | --loc=* | --lo=*)
+ localstatedir=$ac_optarg ;;
+
+ -mandir | --mandir | --mandi | --mand | --man | --ma | --m)
+ ac_prev=mandir ;;
+ -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*)
+ mandir=$ac_optarg ;;
+
+ -nfp | --nfp | --nf)
+ # Obsolete; use --without-fp.
+ with_fp=no ;;
+
+ -no-create | --no-create | --no-creat | --no-crea | --no-cre \
+ | --no-cr | --no-c | -n)
+ no_create=yes ;;
+
+ -no-recursion | --no-recursion | --no-recursio | --no-recursi \
+ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r)
+ no_recursion=yes ;;
+
+ -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \
+ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \
+ | --oldin | --oldi | --old | --ol | --o)
+ ac_prev=oldincludedir ;;
+ -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \
+ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \
+ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*)
+ oldincludedir=$ac_optarg ;;
+
+ -prefix | --prefix | --prefi | --pref | --pre | --pr | --p)
+ ac_prev=prefix ;;
+ -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*)
+ prefix=$ac_optarg ;;
+
+ -program-prefix | --program-prefix | --program-prefi | --program-pref \
+ | --program-pre | --program-pr | --program-p)
+ ac_prev=program_prefix ;;
+ -program-prefix=* | --program-prefix=* | --program-prefi=* \
+ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*)
+ program_prefix=$ac_optarg ;;
+
+ -program-suffix | --program-suffix | --program-suffi | --program-suff \
+ | --program-suf | --program-su | --program-s)
+ ac_prev=program_suffix ;;
+ -program-suffix=* | --program-suffix=* | --program-suffi=* \
+ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*)
+ program_suffix=$ac_optarg ;;
+
+ -program-transform-name | --program-transform-name \
+ | --program-transform-nam | --program-transform-na \
+ | --program-transform-n | --program-transform- \
+ | --program-transform | --program-transfor \
+ | --program-transfo | --program-transf \
+ | --program-trans | --program-tran \
+ | --progr-tra | --program-tr | --program-t)
+ ac_prev=program_transform_name ;;
+ -program-transform-name=* | --program-transform-name=* \
+ | --program-transform-nam=* | --program-transform-na=* \
+ | --program-transform-n=* | --program-transform-=* \
+ | --program-transform=* | --program-transfor=* \
+ | --program-transfo=* | --program-transf=* \
+ | --program-trans=* | --program-tran=* \
+ | --progr-tra=* | --program-tr=* | --program-t=*)
+ program_transform_name=$ac_optarg ;;
+
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil)
+ silent=yes ;;
+
+ -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
+ ac_prev=sbindir ;;
+ -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
+ | --sbi=* | --sb=*)
+ sbindir=$ac_optarg ;;
+
+ -sharedstatedir | --sharedstatedir | --sharedstatedi \
+ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \
+ | --sharedst | --shareds | --shared | --share | --shar \
+ | --sha | --sh)
+ ac_prev=sharedstatedir ;;
+ -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \
+ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \
+ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \
+ | --sha=* | --sh=*)
+ sharedstatedir=$ac_optarg ;;
+
+ -site | --site | --sit)
+ ac_prev=site ;;
+ -site=* | --site=* | --sit=*)
+ site=$ac_optarg ;;
+
+ -srcdir | --srcdir | --srcdi | --srcd | --src | --sr)
+ ac_prev=srcdir ;;
+ -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*)
+ srcdir=$ac_optarg ;;
+
+ -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \
+ | --syscon | --sysco | --sysc | --sys | --sy)
+ ac_prev=sysconfdir ;;
+ -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \
+ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*)
+ sysconfdir=$ac_optarg ;;
+
+ -target | --target | --targe | --targ | --tar | --ta | --t)
+ ac_prev=target_alias ;;
+ -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*)
+ target_alias=$ac_optarg ;;
+
+ -v | -verbose | --verbose | --verbos | --verbo | --verb)
+ verbose=yes ;;
+
+ -version | --version | --versio | --versi | --vers | -V)
+ ac_init_version=: ;;
+
+ -with-* | --with-*)
+ ac_package=`expr "x$ac_option" : 'x-*with-\([^=]*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_package" : ".*[^-_$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid package name: $ac_package" >&2
+ { (exit 1); exit 1; }; }
+ ac_package=`echo $ac_package| sed 's/-/_/g'`
+ case $ac_option in
+ *=*) ac_optarg=`echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"`;;
+ *) ac_optarg=yes ;;
+ esac
+ eval "with_$ac_package='$ac_optarg'" ;;
+
+ -without-* | --without-*)
+ ac_package=`expr "x$ac_option" : 'x-*without-\(.*\)'`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_package" : ".*[^-_$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid package name: $ac_package" >&2
+ { (exit 1); exit 1; }; }
+ ac_package=`echo $ac_package | sed 's/-/_/g'`
+ eval "with_$ac_package=no" ;;
+
+ --x)
+ # Obsolete; use --with-x.
+ with_x=yes ;;
+
+ -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \
+ | --x-incl | --x-inc | --x-in | --x-i)
+ ac_prev=x_includes ;;
+ -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \
+ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*)
+ x_includes=$ac_optarg ;;
+
+ -x-libraries | --x-libraries | --x-librarie | --x-librari \
+ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l)
+ ac_prev=x_libraries ;;
+ -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \
+ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*)
+ x_libraries=$ac_optarg ;;
+
+ -*) { echo "$as_me: error: unrecognized option: $ac_option
+Try \`$0 --help' for more information." >&2
+ { (exit 1); exit 1; }; }
+ ;;
+
+ *=*)
+ ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='`
+ # Reject names that are not valid shell variable names.
+ expr "x$ac_envvar" : ".*[^_$as_cr_alnum]" >/dev/null &&
+ { echo "$as_me: error: invalid variable name: $ac_envvar" >&2
+ { (exit 1); exit 1; }; }
+ ac_optarg=`echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"`
+ eval "$ac_envvar='$ac_optarg'"
+ export $ac_envvar ;;
+
+ *)
+ # FIXME: should be removed in autoconf 3.0.
+ echo "$as_me: WARNING: you should use --build, --host, --target" >&2
+ expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null &&
+ echo "$as_me: WARNING: invalid host type: $ac_option" >&2
+ : ${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}
+ ;;
+
+ esac
+done
+
+if test -n "$ac_prev"; then
+ ac_option=--`echo $ac_prev | sed 's/_/-/g'`
+ { echo "$as_me: error: missing argument to $ac_option" >&2
+ { (exit 1); exit 1; }; }
+fi
+
+# Be sure to have absolute paths.
+for ac_var in exec_prefix prefix
+do
+ eval ac_val=$`echo $ac_var`
+ case $ac_val in
+ [\\/$]* | ?:[\\/]* | NONE | '' ) ;;
+ *) { echo "$as_me: error: expected an absolute directory name for --$ac_var: $ac_val" >&2
+ { (exit 1); exit 1; }; };;
+ esac
+done
+
+# Be sure to have absolute paths.
+for ac_var in bindir sbindir libexecdir datadir sysconfdir sharedstatedir \
+ localstatedir libdir includedir oldincludedir infodir mandir
+do
+ eval ac_val=$`echo $ac_var`
+ case $ac_val in
+ [\\/$]* | ?:[\\/]* ) ;;
+ *) { echo "$as_me: error: expected an absolute directory name for --$ac_var: $ac_val" >&2
+ { (exit 1); exit 1; }; };;
+ esac
+done
+
+# There might be people who depend on the old broken behavior: `$host'
+# used to hold the argument of --host etc.
+# FIXME: To remove some day.
+build=$build_alias
+host=$host_alias
+target=$target_alias
+
+# FIXME: To remove some day.
+if test "x$host_alias" != x; then
+ if test "x$build_alias" = x; then
+ cross_compiling=maybe
+ echo "$as_me: WARNING: If you wanted to set the --build type, don't use --host.
+ If a cross compiler is detected then cross compile mode will be used." >&2
+ elif test "x$build_alias" != "x$host_alias"; then
+ cross_compiling=yes
+ fi
+fi
+
+ac_tool_prefix=
+test -n "$host_alias" && ac_tool_prefix=$host_alias-
+
+test "$silent" = yes && exec 6>/dev/null
+
+
+# Find the source files, if location was not specified.
+if test -z "$srcdir"; then
+ ac_srcdir_defaulted=yes
+ # Try the directory containing this script, then its parent.
+ ac_confdir=`(dirname "$0") 2>/dev/null ||
+$as_expr X"$0" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$0" : 'X\(//\)[^/]' \| \
+ X"$0" : 'X\(//\)$' \| \
+ X"$0" : 'X\(/\)' \| \
+ . : '\(.\)' 2>/dev/null ||
+echo X"$0" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; }
+ /^X\(\/\/\)[^/].*/{ s//\1/; q; }
+ /^X\(\/\/\)$/{ s//\1/; q; }
+ /^X\(\/\).*/{ s//\1/; q; }
+ s/.*/./; q'`
+ srcdir=$ac_confdir
+ if test ! -r $srcdir/$ac_unique_file; then
+ srcdir=..
+ fi
+else
+ ac_srcdir_defaulted=no
+fi
+if test ! -r $srcdir/$ac_unique_file; then
+ if test "$ac_srcdir_defaulted" = yes; then
+ { echo "$as_me: error: cannot find sources ($ac_unique_file) in $ac_confdir or .." >&2
+ { (exit 1); exit 1; }; }
+ else
+ { echo "$as_me: error: cannot find sources ($ac_unique_file) in $srcdir" >&2
+ { (exit 1); exit 1; }; }
+ fi
+fi
+(cd $srcdir && test -r ./$ac_unique_file) 2>/dev/null ||
+ { echo "$as_me: error: sources are in $srcdir, but \`cd $srcdir' does not work" >&2
+ { (exit 1); exit 1; }; }
+srcdir=`echo "$srcdir" | sed 's%\([^\\/]\)[\\/]*$%\1%'`
+ac_env_build_alias_set=${build_alias+set}
+ac_env_build_alias_value=$build_alias
+ac_cv_env_build_alias_set=${build_alias+set}
+ac_cv_env_build_alias_value=$build_alias
+ac_env_host_alias_set=${host_alias+set}
+ac_env_host_alias_value=$host_alias
+ac_cv_env_host_alias_set=${host_alias+set}
+ac_cv_env_host_alias_value=$host_alias
+ac_env_target_alias_set=${target_alias+set}
+ac_env_target_alias_value=$target_alias
+ac_cv_env_target_alias_set=${target_alias+set}
+ac_cv_env_target_alias_value=$target_alias
+ac_env_CC_set=${CC+set}
+ac_env_CC_value=$CC
+ac_cv_env_CC_set=${CC+set}
+ac_cv_env_CC_value=$CC
+ac_env_CFLAGS_set=${CFLAGS+set}
+ac_env_CFLAGS_value=$CFLAGS
+ac_cv_env_CFLAGS_set=${CFLAGS+set}
+ac_cv_env_CFLAGS_value=$CFLAGS
+ac_env_LDFLAGS_set=${LDFLAGS+set}
+ac_env_LDFLAGS_value=$LDFLAGS
+ac_cv_env_LDFLAGS_set=${LDFLAGS+set}
+ac_cv_env_LDFLAGS_value=$LDFLAGS
+ac_env_CPPFLAGS_set=${CPPFLAGS+set}
+ac_env_CPPFLAGS_value=$CPPFLAGS
+ac_cv_env_CPPFLAGS_set=${CPPFLAGS+set}
+ac_cv_env_CPPFLAGS_value=$CPPFLAGS
+ac_env_CPP_set=${CPP+set}
+ac_env_CPP_value=$CPP
+ac_cv_env_CPP_set=${CPP+set}
+ac_cv_env_CPP_value=$CPP
+ac_env_CXX_set=${CXX+set}
+ac_env_CXX_value=$CXX
+ac_cv_env_CXX_set=${CXX+set}
+ac_cv_env_CXX_value=$CXX
+ac_env_CXXFLAGS_set=${CXXFLAGS+set}
+ac_env_CXXFLAGS_value=$CXXFLAGS
+ac_cv_env_CXXFLAGS_set=${CXXFLAGS+set}
+ac_cv_env_CXXFLAGS_value=$CXXFLAGS
+ac_env_CXXCPP_set=${CXXCPP+set}
+ac_env_CXXCPP_value=$CXXCPP
+ac_cv_env_CXXCPP_set=${CXXCPP+set}
+ac_cv_env_CXXCPP_value=$CXXCPP
+ac_env_F77_set=${F77+set}
+ac_env_F77_value=$F77
+ac_cv_env_F77_set=${F77+set}
+ac_cv_env_F77_value=$F77
+ac_env_FFLAGS_set=${FFLAGS+set}
+ac_env_FFLAGS_value=$FFLAGS
+ac_cv_env_FFLAGS_set=${FFLAGS+set}
+ac_cv_env_FFLAGS_value=$FFLAGS
+
+#
+# Report the --help message.
+#
+if test "$ac_init_help" = "long"; then
+ # Omit some internal or obsolete options to make the list less imposing.
+ # This message is too long to be a string in the A/UX 3.1 sh.
+ cat <<_ACEOF
+\`configure' configures this package to adapt to many kinds of systems.
+
+Usage: $0 [OPTION]... [VAR=VALUE]...
+
+To assign environment variables (e.g., CC, CFLAGS...), specify them as
+VAR=VALUE. See below for descriptions of some of the useful variables.
+
+Defaults for the options are specified in brackets.
+
+Configuration:
+ -h, --help display this help and exit
+ --help=short display options specific to this package
+ --help=recursive display the short help of all the included packages
+ -V, --version display version information and exit
+ -q, --quiet, --silent do not print \`checking...' messages
+ --cache-file=FILE cache test results in FILE [disabled]
+ -C, --config-cache alias for \`--cache-file=config.cache'
+ -n, --no-create do not create output files
+ --srcdir=DIR find the sources in DIR [configure dir or \`..']
+
+_ACEOF
+
+ cat <<_ACEOF
+Installation directories:
+ --prefix=PREFIX install architecture-independent files in PREFIX
+ [$ac_default_prefix]
+ --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX
+ [PREFIX]
+
+By default, \`make install' will install all the files in
+\`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify
+an installation prefix other than \`$ac_default_prefix' using \`--prefix',
+for instance \`--prefix=\$HOME'.
+
+For better control, use the options below.
+
+Fine tuning of the installation directories:
+ --bindir=DIR user executables [EPREFIX/bin]
+ --sbindir=DIR system admin executables [EPREFIX/sbin]
+ --libexecdir=DIR program executables [EPREFIX/libexec]
+ --datadir=DIR read-only architecture-independent data [PREFIX/share]
+ --sysconfdir=DIR read-only single-machine data [PREFIX/etc]
+ --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com]
+ --localstatedir=DIR modifiable single-machine data [PREFIX/var]
+ --libdir=DIR object code libraries [EPREFIX/lib]
+ --includedir=DIR C header files [PREFIX/include]
+ --oldincludedir=DIR C header files for non-gcc [/usr/include]
+ --infodir=DIR info documentation [PREFIX/info]
+ --mandir=DIR man documentation [PREFIX/man]
+_ACEOF
+
+ cat <<\_ACEOF
+
+System types:
+ --build=BUILD configure for building on BUILD [guessed]
+ --host=HOST cross-compile to build programs to run on HOST [BUILD]
+_ACEOF
+fi
+
+if test -n "$ac_init_help"; then
+
+ cat <<\_ACEOF
+
+Optional Features:
+ --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no)
+ --enable-FEATURE[=ARG] include FEATURE [ARG=yes]
+ --enable-shared[=PKGS]
+ build shared libraries [default=yes]
+ --enable-static[=PKGS]
+ build static libraries [default=yes]
+ --enable-fast-install[=PKGS]
+ optimize for fast installation [default=yes]
+ --disable-libtool-lock avoid locking (might break parallel builds)
+ --enable-threadsafe Support threadsafe operation
+ --enable-cross-thread-connections
+ Allow connection sharing across threads
+  --enable-threads-override-locks
+                          Threads can override each other's locks
+ --enable-releasemode Support libtool link to release mode
+ --enable-tempstore Use an in-ram database for temporary tables
+ (never,no,yes,always)
+ --disable-tcl do not build TCL extension
+ --enable-debug enable debugging & verbose explain
+
+Optional Packages:
+ --with-PACKAGE[=ARG] use PACKAGE [ARG=yes]
+ --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no)
+ --with-gnu-ld assume the C compiler uses GNU ld [default=no]
+ --with-pic try to use only PIC/non-PIC objects [default=use
+ both]
+ --with-tags[=TAGS]
+ include additional configurations [automatic]
+ --with-hints=FILE Read configuration options from FILE
+ --with-tcl=DIR directory containing tcl configuration
+ (tclConfig.sh)
+
+Some influential environment variables:
+ CC C compiler command
+ CFLAGS C compiler flags
+ LDFLAGS linker flags, e.g. -L<lib dir> if you have libraries in a
+ nonstandard directory <lib dir>
+ CPPFLAGS C/C++ preprocessor flags, e.g. -I<include dir> if you have
+ headers in a nonstandard directory <include dir>
+ CPP C preprocessor
+ CXX C++ compiler command
+ CXXFLAGS C++ compiler flags
+ CXXCPP C++ preprocessor
+ F77 Fortran 77 compiler command
+ FFLAGS Fortran 77 compiler flags
+
+Use these variables to override the choices made by `configure' or to help
+it to find libraries and programs with nonstandard names/locations.
+
+_ACEOF
+fi
+
+if test "$ac_init_help" = "recursive"; then
+ # If there are subdirs, report their specific --help.
+ ac_popdir=`pwd`
+ for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue
+ test -d $ac_dir || continue
+ ac_builddir=.
+
+if test "$ac_dir" != .; then
+ ac_dir_suffix=/`echo "$ac_dir" | sed 's,^\.[\\/],,'`
+ # A "../" for each directory in $ac_dir_suffix.
+ ac_top_builddir=`echo "$ac_dir_suffix" | sed 's,/[^\\/]*,../,g'`
+else
+ ac_dir_suffix= ac_top_builddir=
+fi
+
+case $srcdir in
+ .) # No --srcdir option. We are building in place.
+ ac_srcdir=.
+ if test -z "$ac_top_builddir"; then
+ ac_top_srcdir=.
+ else
+ ac_top_srcdir=`echo $ac_top_builddir | sed 's,/$,,'`
+ fi ;;
+ [\\/]* | ?:[\\/]* ) # Absolute path.
+ ac_srcdir=$srcdir$ac_dir_suffix;
+ ac_top_srcdir=$srcdir ;;
+ *) # Relative path.
+ ac_srcdir=$ac_top_builddir$srcdir$ac_dir_suffix
+ ac_top_srcdir=$ac_top_builddir$srcdir ;;
+esac
+
+# Do not use `cd foo && pwd` to compute absolute paths, because
+# the directories may not exist.
+case `pwd` in
+.) ac_abs_builddir="$ac_dir";;
+*)
+ case "$ac_dir" in
+ .) ac_abs_builddir=`pwd`;;
+ [\\/]* | ?:[\\/]* ) ac_abs_builddir="$ac_dir";;
+ *) ac_abs_builddir=`pwd`/"$ac_dir";;
+ esac;;
+esac
+case $ac_abs_builddir in
+.) ac_abs_top_builddir=${ac_top_builddir}.;;
+*)
+ case ${ac_top_builddir}. in
+ .) ac_abs_top_builddir=$ac_abs_builddir;;
+ [\\/]* | ?:[\\/]* ) ac_abs_top_builddir=${ac_top_builddir}.;;
+ *) ac_abs_top_builddir=$ac_abs_builddir/${ac_top_builddir}.;;
+ esac;;
+esac
+case $ac_abs_builddir in
+.) ac_abs_srcdir=$ac_srcdir;;
+*)
+ case $ac_srcdir in
+ .) ac_abs_srcdir=$ac_abs_builddir;;
+ [\\/]* | ?:[\\/]* ) ac_abs_srcdir=$ac_srcdir;;
+ *) ac_abs_srcdir=$ac_abs_builddir/$ac_srcdir;;
+ esac;;
+esac
+case $ac_abs_builddir in
+.) ac_abs_top_srcdir=$ac_top_srcdir;;
+*)
+ case $ac_top_srcdir in
+ .) ac_abs_top_srcdir=$ac_abs_builddir;;
+ [\\/]* | ?:[\\/]* ) ac_abs_top_srcdir=$ac_top_srcdir;;
+ *) ac_abs_top_srcdir=$ac_abs_builddir/$ac_top_srcdir;;
+ esac;;
+esac
+
+ cd $ac_dir
+ # Check for guested configure; otherwise get Cygnus style configure.
+ if test -f $ac_srcdir/configure.gnu; then
+ echo
+ $SHELL $ac_srcdir/configure.gnu --help=recursive
+ elif test -f $ac_srcdir/configure; then
+ echo
+ $SHELL $ac_srcdir/configure --help=recursive
+ elif test -f $ac_srcdir/configure.ac ||
+ test -f $ac_srcdir/configure.in; then
+ echo
+ $ac_configure --help
+ else
+ echo "$as_me: WARNING: no configuration information is in $ac_dir" >&2
+ fi
+ cd $ac_popdir
+ done
+fi
+
+test -n "$ac_init_help" && exit 0
+if $ac_init_version; then
+ cat <<\_ACEOF
+
+Copyright (C) 2003 Free Software Foundation, Inc.
+This configure script is free software; the Free Software Foundation
+gives unlimited permission to copy, distribute and modify it.
+_ACEOF
+ exit 0
+fi
+exec 5>config.log
+cat >&5 <<_ACEOF
+This file contains any messages produced by compilers while
+running configure, to aid debugging if configure makes a mistake.
+
+It was created by $as_me, which was
+generated by GNU Autoconf 2.59. Invocation command line was
+
+ $ $0 $@
+
+_ACEOF
+{
+cat <<_ASUNAME
+## --------- ##
+## Platform. ##
+## --------- ##
+
+hostname = `(hostname || uname -n) 2>/dev/null | sed 1q`
+uname -m = `(uname -m) 2>/dev/null || echo unknown`
+uname -r = `(uname -r) 2>/dev/null || echo unknown`
+uname -s = `(uname -s) 2>/dev/null || echo unknown`
+uname -v = `(uname -v) 2>/dev/null || echo unknown`
+
+/usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown`
+/bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown`
+
+/bin/arch = `(/bin/arch) 2>/dev/null || echo unknown`
+/usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown`
+/usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown`
+hostinfo = `(hostinfo) 2>/dev/null || echo unknown`
+/bin/machine = `(/bin/machine) 2>/dev/null || echo unknown`
+/usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown`
+/bin/universe = `(/bin/universe) 2>/dev/null || echo unknown`
+
+_ASUNAME
+
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ echo "PATH: $as_dir"
+done
+
+} >&5
+
+cat >&5 <<_ACEOF
+
+
+## ----------- ##
+## Core tests. ##
+## ----------- ##
+
+_ACEOF
+
+
+# Keep a trace of the command line.
+# Strip out --no-create and --no-recursion so they do not pile up.
+# Strip out --silent because we don't want to record it for future runs.
+# Also quote any args containing shell meta-characters.
+# Make two passes to allow for proper duplicate-argument suppression.
+ac_configure_args=
+ac_configure_args0=
+ac_configure_args1=
+ac_sep=
+ac_must_keep_next=false
+for ac_pass in 1 2
+do
+ for ac_arg
+ do
+ case $ac_arg in
+ -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;;
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil)
+ continue ;;
+ *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?\"\']*)
+ ac_arg=`echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;;
+ esac
+ case $ac_pass in
+ 1) ac_configure_args0="$ac_configure_args0 '$ac_arg'" ;;
+ 2)
+ ac_configure_args1="$ac_configure_args1 '$ac_arg'"
+ if test $ac_must_keep_next = true; then
+ ac_must_keep_next=false # Got value, back to normal.
+ else
+ case $ac_arg in
+ *=* | --config-cache | -C | -disable-* | --disable-* \
+ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \
+ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \
+ | -with-* | --with-* | -without-* | --without-* | --x)
+ case "$ac_configure_args0 " in
+ "$ac_configure_args1"*" '$ac_arg' "* ) continue ;;
+ esac
+ ;;
+ -* ) ac_must_keep_next=true ;;
+ esac
+ fi
+ ac_configure_args="$ac_configure_args$ac_sep'$ac_arg'"
+ # Get rid of the leading space.
+ ac_sep=" "
+ ;;
+ esac
+ done
+done
+$as_unset ac_configure_args0 || test "${ac_configure_args0+set}" != set || { ac_configure_args0=; export ac_configure_args0; }
+$as_unset ac_configure_args1 || test "${ac_configure_args1+set}" != set || { ac_configure_args1=; export ac_configure_args1; }
+
+# When interrupted or exit'd, clean up temporary files and complete
+# config.log. We remove comments because the quotes in them would
+# cause problems or look ugly.
+# WARNING: Be sure not to use single quotes in there, as some shells,
+# such as our DU 5.0 friend, will then `close' the trap.
+trap 'exit_status=$?
+ # Save into config.log some information that might help in debugging.
+ {
+ echo
+
+ cat <<\_ASBOX
+## ---------------- ##
+## Cache variables. ##
+## ---------------- ##
+_ASBOX
+ echo
+ # The following way of writing the cache mishandles newlines in values,
+{
+ (set) 2>&1 |
+ case `(ac_space='"'"' '"'"'; set | grep ac_space) 2>&1` in
+ *ac_space=\ *)
+ sed -n \
+ "s/'"'"'/'"'"'\\\\'"'"''"'"'/g;
+ s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='"'"'\\2'"'"'/p"
+ ;;
+ *)
+ sed -n \
+ "s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1=\\2/p"
+ ;;
+ esac;
+}
+ echo
+
+ cat <<\_ASBOX
+## ----------------- ##
+## Output variables. ##
+## ----------------- ##
+_ASBOX
+ echo
+ for ac_var in $ac_subst_vars
+ do
+ eval ac_val=$`echo $ac_var`
+ echo "$ac_var='"'"'$ac_val'"'"'"
+ done | sort
+ echo
+
+ if test -n "$ac_subst_files"; then
+ cat <<\_ASBOX
+## ------------- ##
+## Output files. ##
+## ------------- ##
+_ASBOX
+ echo
+ for ac_var in $ac_subst_files
+ do
+ eval ac_val=$`echo $ac_var`
+ echo "$ac_var='"'"'$ac_val'"'"'"
+ done | sort
+ echo
+ fi
+
+ if test -s confdefs.h; then
+ cat <<\_ASBOX
+## ----------- ##
+## confdefs.h. ##
+## ----------- ##
+_ASBOX
+ echo
+ sed "/^$/d" confdefs.h | sort
+ echo
+ fi
+ test "$ac_signal" != 0 &&
+ echo "$as_me: caught signal $ac_signal"
+ echo "$as_me: exit $exit_status"
+ } >&5
+ rm -f core *.core &&
+ rm -rf conftest* confdefs* conf$$* $ac_clean_files &&
+ exit $exit_status
+ ' 0
+for ac_signal in 1 2 13 15; do
+ trap 'ac_signal='$ac_signal'; { (exit 1); exit 1; }' $ac_signal
+done
+ac_signal=0
+
+# confdefs.h avoids OS command line length limits that DEFS can exceed.
+rm -rf conftest* confdefs.h
+# AIX cpp loses on an empty file, so make sure it contains at least a newline.
+echo >confdefs.h
+
+# Predefined preprocessor variables.
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_NAME "$PACKAGE_NAME"
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_TARNAME "$PACKAGE_TARNAME"
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_VERSION "$PACKAGE_VERSION"
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_STRING "$PACKAGE_STRING"
+_ACEOF
+
+
+cat >>confdefs.h <<_ACEOF
+#define PACKAGE_BUGREPORT "$PACKAGE_BUGREPORT"
+_ACEOF
+
+
+# Let the site file select an alternate cache file if it wants to.
+# Prefer explicitly selected file to automatically selected ones.
+if test -z "$CONFIG_SITE"; then
+ if test "x$prefix" != xNONE; then
+ CONFIG_SITE="$prefix/share/config.site $prefix/etc/config.site"
+ else
+ CONFIG_SITE="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site"
+ fi
+fi
+for ac_site_file in $CONFIG_SITE; do
+ if test -r "$ac_site_file"; then
+ { echo "$as_me:$LINENO: loading site script $ac_site_file" >&5
+echo "$as_me: loading site script $ac_site_file" >&6;}
+ sed 's/^/| /' "$ac_site_file" >&5
+ . "$ac_site_file"
+ fi
+done
+
+if test -r "$cache_file"; then
+ # Some versions of bash will fail to source /dev/null (special
+ # files actually), so we avoid doing that.
+ if test -f "$cache_file"; then
+ { echo "$as_me:$LINENO: loading cache $cache_file" >&5
+echo "$as_me: loading cache $cache_file" >&6;}
+ case $cache_file in
+ [\\/]* | ?:[\\/]* ) . $cache_file;;
+ *) . ./$cache_file;;
+ esac
+ fi
+else
+ { echo "$as_me:$LINENO: creating cache $cache_file" >&5
+echo "$as_me: creating cache $cache_file" >&6;}
+ >$cache_file
+fi
+
+# Check that the precious variables saved in the cache have kept the same
+# value.
+ac_cache_corrupted=false
+for ac_var in `(set) 2>&1 |
+ sed -n 's/^ac_env_\([a-zA-Z_0-9]*\)_set=.*/\1/p'`; do
+ eval ac_old_set=\$ac_cv_env_${ac_var}_set
+ eval ac_new_set=\$ac_env_${ac_var}_set
+ eval ac_old_val="\$ac_cv_env_${ac_var}_value"
+ eval ac_new_val="\$ac_env_${ac_var}_value"
+ case $ac_old_set,$ac_new_set in
+ set,)
+ { echo "$as_me:$LINENO: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5
+echo "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;}
+ ac_cache_corrupted=: ;;
+ ,set)
+ { echo "$as_me:$LINENO: error: \`$ac_var' was not set in the previous run" >&5
+echo "$as_me: error: \`$ac_var' was not set in the previous run" >&2;}
+ ac_cache_corrupted=: ;;
+ ,);;
+ *)
+ if test "x$ac_old_val" != "x$ac_new_val"; then
+ { echo "$as_me:$LINENO: error: \`$ac_var' has changed since the previous run:" >&5
+echo "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;}
+ { echo "$as_me:$LINENO: former value: $ac_old_val" >&5
+echo "$as_me: former value: $ac_old_val" >&2;}
+ { echo "$as_me:$LINENO: current value: $ac_new_val" >&5
+echo "$as_me: current value: $ac_new_val" >&2;}
+ ac_cache_corrupted=:
+ fi;;
+ esac
+ # Pass precious variables to config.status.
+ if test "$ac_new_set" = set; then
+ case $ac_new_val in
+ *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?\"\']*)
+ ac_arg=$ac_var=`echo "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;;
+ *) ac_arg=$ac_var=$ac_new_val ;;
+ esac
+ case " $ac_configure_args " in
+ *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy.
+ *) ac_configure_args="$ac_configure_args '$ac_arg'" ;;
+ esac
+ fi
+done
+if $ac_cache_corrupted; then
+ { echo "$as_me:$LINENO: error: changes in the environment can compromise the build" >&5
+echo "$as_me: error: changes in the environment can compromise the build" >&2;}
+ { { echo "$as_me:$LINENO: error: run \`make distclean' and/or \`rm $cache_file' and start over" >&5
+echo "$as_me: error: run \`make distclean' and/or \`rm $cache_file' and start over" >&2;}
+ { (exit 1); exit 1; }; }
+fi
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# The following RCS revision string applies to configure.in
+# $Revision: 1.40 $
+
+#########
+# Programs needed
+#
+# Check whether --enable-shared or --disable-shared was given.
+if test "${enable_shared+set}" = set; then
+ enableval="$enable_shared"
+ p=${PACKAGE-default}
+ case $enableval in
+ yes) enable_shared=yes ;;
+ no) enable_shared=no ;;
+ *)
+ enable_shared=no
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR,"
+ for pkg in $enableval; do
+ IFS="$lt_save_ifs"
+ if test "X$pkg" = "X$p"; then
+ enable_shared=yes
+ fi
+ done
+ IFS="$lt_save_ifs"
+ ;;
+ esac
+else
+ enable_shared=yes
+fi;
+
+# Check whether --enable-static or --disable-static was given.
+if test "${enable_static+set}" = set; then
+ enableval="$enable_static"
+ p=${PACKAGE-default}
+ case $enableval in
+ yes) enable_static=yes ;;
+ no) enable_static=no ;;
+ *)
+ enable_static=no
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR,"
+ for pkg in $enableval; do
+ IFS="$lt_save_ifs"
+ if test "X$pkg" = "X$p"; then
+ enable_static=yes
+ fi
+ done
+ IFS="$lt_save_ifs"
+ ;;
+ esac
+else
+ enable_static=yes
+fi;
+
+# Check whether --enable-fast-install or --disable-fast-install was given.
+if test "${enable_fast_install+set}" = set; then
+ enableval="$enable_fast_install"
+ p=${PACKAGE-default}
+ case $enableval in
+ yes) enable_fast_install=yes ;;
+ no) enable_fast_install=no ;;
+ *)
+ enable_fast_install=no
+ # Look at the argument we got. We use all the common list separators.
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR,"
+ for pkg in $enableval; do
+ IFS="$lt_save_ifs"
+ if test "X$pkg" = "X$p"; then
+ enable_fast_install=yes
+ fi
+ done
+ IFS="$lt_save_ifs"
+ ;;
+ esac
+else
+ enable_fast_install=yes
+fi;
+
+ac_aux_dir=
+for ac_dir in $srcdir $srcdir/.. $srcdir/../..; do
+ if test -f $ac_dir/install-sh; then
+ ac_aux_dir=$ac_dir
+ ac_install_sh="$ac_aux_dir/install-sh -c"
+ break
+ elif test -f $ac_dir/install.sh; then
+ ac_aux_dir=$ac_dir
+ ac_install_sh="$ac_aux_dir/install.sh -c"
+ break
+ elif test -f $ac_dir/shtool; then
+ ac_aux_dir=$ac_dir
+ ac_install_sh="$ac_aux_dir/shtool install -c"
+ break
+ fi
+done
+if test -z "$ac_aux_dir"; then
+ { { echo "$as_me:$LINENO: error: cannot find install-sh or install.sh in $srcdir $srcdir/.. $srcdir/../.." >&5
+echo "$as_me: error: cannot find install-sh or install.sh in $srcdir $srcdir/.. $srcdir/../.." >&2;}
+ { (exit 1); exit 1; }; }
+fi
+ac_config_guess="$SHELL $ac_aux_dir/config.guess"
+ac_config_sub="$SHELL $ac_aux_dir/config.sub"
+ac_configure="$SHELL $ac_aux_dir/configure" # This should be Cygnus configure.
+
+# Make sure we can run config.sub.
+$ac_config_sub sun4 >/dev/null 2>&1 ||
+ { { echo "$as_me:$LINENO: error: cannot run $ac_config_sub" >&5
+echo "$as_me: error: cannot run $ac_config_sub" >&2;}
+ { (exit 1); exit 1; }; }
+
+echo "$as_me:$LINENO: checking build system type" >&5
+echo $ECHO_N "checking build system type... $ECHO_C" >&6
+if test "${ac_cv_build+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_cv_build_alias=$build_alias
+test -z "$ac_cv_build_alias" &&
+ ac_cv_build_alias=`$ac_config_guess`
+test -z "$ac_cv_build_alias" &&
+ { { echo "$as_me:$LINENO: error: cannot guess build type; you must specify one" >&5
+echo "$as_me: error: cannot guess build type; you must specify one" >&2;}
+ { (exit 1); exit 1; }; }
+ac_cv_build=`$ac_config_sub $ac_cv_build_alias` ||
+ { { echo "$as_me:$LINENO: error: $ac_config_sub $ac_cv_build_alias failed" >&5
+echo "$as_me: error: $ac_config_sub $ac_cv_build_alias failed" >&2;}
+ { (exit 1); exit 1; }; }
+
+fi
+echo "$as_me:$LINENO: result: $ac_cv_build" >&5
+echo "${ECHO_T}$ac_cv_build" >&6
+build=$ac_cv_build
+build_cpu=`echo $ac_cv_build | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\1/'`
+build_vendor=`echo $ac_cv_build | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\2/'`
+build_os=`echo $ac_cv_build | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'`
+
+
+echo "$as_me:$LINENO: checking host system type" >&5
+echo $ECHO_N "checking host system type... $ECHO_C" >&6
+if test "${ac_cv_host+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_cv_host_alias=$host_alias
+test -z "$ac_cv_host_alias" &&
+ ac_cv_host_alias=$ac_cv_build_alias
+ac_cv_host=`$ac_config_sub $ac_cv_host_alias` ||
+ { { echo "$as_me:$LINENO: error: $ac_config_sub $ac_cv_host_alias failed" >&5
+echo "$as_me: error: $ac_config_sub $ac_cv_host_alias failed" >&2;}
+ { (exit 1); exit 1; }; }
+
+fi
+echo "$as_me:$LINENO: result: $ac_cv_host" >&5
+echo "${ECHO_T}$ac_cv_host" >&6
+host=$ac_cv_host
+host_cpu=`echo $ac_cv_host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\1/'`
+host_vendor=`echo $ac_cv_host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\2/'`
+host_os=`echo $ac_cv_host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'`
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args.
+set dummy ${ac_tool_prefix}gcc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}gcc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ echo "$as_me:$LINENO: result: $CC" >&5
+echo "${ECHO_T}$CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$ac_cv_prog_CC"; then
+ ac_ct_CC=$CC
+ # Extract the first word of "gcc", so it can be a program name with args.
+set dummy gcc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="gcc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ echo "$as_me:$LINENO: result: $ac_ct_CC" >&5
+echo "${ECHO_T}$ac_ct_CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ CC=$ac_ct_CC
+else
+ CC="$ac_cv_prog_CC"
+fi
+
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args.
+set dummy ${ac_tool_prefix}cc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}cc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ echo "$as_me:$LINENO: result: $CC" >&5
+echo "${ECHO_T}$CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$ac_cv_prog_CC"; then
+ ac_ct_CC=$CC
+ # Extract the first word of "cc", so it can be a program name with args.
+set dummy cc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="cc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ echo "$as_me:$LINENO: result: $ac_ct_CC" >&5
+echo "${ECHO_T}$ac_ct_CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ CC=$ac_ct_CC
+else
+ CC="$ac_cv_prog_CC"
+fi
+
+fi
+if test -z "$CC"; then
+ # Extract the first word of "cc", so it can be a program name with args.
+set dummy cc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+ ac_prog_rejected=no
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then
+ ac_prog_rejected=yes
+ continue
+ fi
+ ac_cv_prog_CC="cc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+if test $ac_prog_rejected = yes; then
+ # We found a bogon in the path, so make sure we never use it.
+ set dummy $ac_cv_prog_CC
+ shift
+ if test $# != 0; then
+ # We chose a different compiler from the bogus one.
+ # However, it has the same basename, so the bogon will be chosen
+ # first if we set CC to just the basename; use the full file name.
+ shift
+ ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@"
+ fi
+fi
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ echo "$as_me:$LINENO: result: $CC" >&5
+echo "${ECHO_T}$CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ for ac_prog in cl
+ do
+ # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="$ac_tool_prefix$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ echo "$as_me:$LINENO: result: $CC" >&5
+echo "${ECHO_T}$CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ test -n "$CC" && break
+ done
+fi
+if test -z "$CC"; then
+ ac_ct_CC=$CC
+ for ac_prog in cl
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ echo "$as_me:$LINENO: result: $ac_ct_CC" >&5
+echo "${ECHO_T}$ac_ct_CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ test -n "$ac_ct_CC" && break
+done
+
+ CC=$ac_ct_CC
+fi
+
+fi
+
+
+test -z "$CC" && { { echo "$as_me:$LINENO: error: no acceptable C compiler found in \$PATH
+See \`config.log' for more details." >&5
+echo "$as_me: error: no acceptable C compiler found in \$PATH
+See \`config.log' for more details." >&2;}
+ { (exit 1); exit 1; }; }
+
+# Provide some information about the compiler.
+echo "$as_me:$LINENO:" \
+ "checking for C compiler version" >&5
+ac_compiler=`set X $ac_compile; echo $2`
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler --version </dev/null >&5\"") >&5
+ (eval $ac_compiler --version </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler -v </dev/null >&5\"") >&5
+ (eval $ac_compiler -v </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler -V </dev/null >&5\"") >&5
+ (eval $ac_compiler -V </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+ac_clean_files_save=$ac_clean_files
+ac_clean_files="$ac_clean_files a.out a.exe b.out"
+# Try to create an executable without -o first, disregard a.out.
+# It will help us diagnose broken compilers and get an intuition
+# of exeext.
+echo "$as_me:$LINENO: checking for C compiler default output file name" >&5
+echo $ECHO_N "checking for C compiler default output file name... $ECHO_C" >&6
+ac_link_default=`echo "$ac_link" | sed 's/ -o *conftest[^ ]*//'`
+if { (eval echo "$as_me:$LINENO: \"$ac_link_default\"") >&5
+ (eval $ac_link_default) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; then
+ # Find the output, starting from the most likely. This scheme is
+# not robust to junk in `.', hence go to wildcards (a.*) only as a last
+# resort.
+
+# Be careful to initialize this variable, since it used to be cached.
+# Otherwise an old cache value of `no' led to `EXEEXT = no' in a Makefile.
+ac_cv_exeext=
+# b.out is created by i960 compilers.
+for ac_file in a_out.exe a.exe conftest.exe a.out conftest a.* conftest.* b.out
+do
+ test -f "$ac_file" || continue
+ case $ac_file in
+ *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.o | *.obj )
+ ;;
+ conftest.$ac_ext )
+ # This is the source file.
+ ;;
+ [ab].out )
+ # We found the default executable, but exeext='' is most
+ # certainly right.
+ break;;
+ *.* )
+ ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'`
+ # FIXME: I believe we export ac_cv_exeext for Libtool,
+ # but it would be cool to find out if it's true. Does anybody
+ # maintain Libtool? --akim.
+ export ac_cv_exeext
+ break;;
+ * )
+ break;;
+ esac
+done
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+{ { echo "$as_me:$LINENO: error: C compiler cannot create executables
+See \`config.log' for more details." >&5
+echo "$as_me: error: C compiler cannot create executables
+See \`config.log' for more details." >&2;}
+ { (exit 77); exit 77; }; }
+fi
+
+ac_exeext=$ac_cv_exeext
+echo "$as_me:$LINENO: result: $ac_file" >&5
+echo "${ECHO_T}$ac_file" >&6
+
+# Check the compiler produces executables we can run. If not, either
+# the compiler is broken, or we cross compile.
+echo "$as_me:$LINENO: checking whether the C compiler works" >&5
+echo $ECHO_N "checking whether the C compiler works... $ECHO_C" >&6
+# FIXME: These cross compiler hacks should be removed for Autoconf 3.0
+# If not cross compiling, check that we can run a simple program.
+if test "$cross_compiling" != yes; then
+ if { ac_try='./$ac_file'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ cross_compiling=no
+ else
+ if test "$cross_compiling" = maybe; then
+ cross_compiling=yes
+ else
+ { { echo "$as_me:$LINENO: error: cannot run C compiled programs.
+If you meant to cross compile, use \`--host'.
+See \`config.log' for more details." >&5
+echo "$as_me: error: cannot run C compiled programs.
+If you meant to cross compile, use \`--host'.
+See \`config.log' for more details." >&2;}
+ { (exit 1); exit 1; }; }
+ fi
+ fi
+fi
+echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+
+rm -f a.out a.exe conftest$ac_cv_exeext b.out
+ac_clean_files=$ac_clean_files_save
+# Check the compiler produces executables we can run. If not, either
+# the compiler is broken, or we cross compile.
+echo "$as_me:$LINENO: checking whether we are cross compiling" >&5
+echo $ECHO_N "checking whether we are cross compiling... $ECHO_C" >&6
+echo "$as_me:$LINENO: result: $cross_compiling" >&5
+echo "${ECHO_T}$cross_compiling" >&6
+
+echo "$as_me:$LINENO: checking for suffix of executables" >&5
+echo $ECHO_N "checking for suffix of executables... $ECHO_C" >&6
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; then
+ # If both `conftest.exe' and `conftest' are `present' (well, observable)
+# catch `conftest.exe'. For instance with Cygwin, `ls conftest' will
+# work properly (i.e., refer to `conftest.exe'), while it won't with
+# `rm'.
+for ac_file in conftest.exe conftest conftest.*; do
+ test -f "$ac_file" || continue
+ case $ac_file in
+ *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.o | *.obj ) ;;
+ *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'`
+ export ac_cv_exeext
+ break;;
+ * ) break;;
+ esac
+done
+else
+ { { echo "$as_me:$LINENO: error: cannot compute suffix of executables: cannot compile and link
+See \`config.log' for more details." >&5
+echo "$as_me: error: cannot compute suffix of executables: cannot compile and link
+See \`config.log' for more details." >&2;}
+ { (exit 1); exit 1; }; }
+fi
+
+rm -f conftest$ac_cv_exeext
+echo "$as_me:$LINENO: result: $ac_cv_exeext" >&5
+echo "${ECHO_T}$ac_cv_exeext" >&6
+
+rm -f conftest.$ac_ext
+EXEEXT=$ac_cv_exeext
+ac_exeext=$EXEEXT
+echo "$as_me:$LINENO: checking for suffix of object files" >&5
+echo $ECHO_N "checking for suffix of object files... $ECHO_C" >&6
+if test "${ac_cv_objext+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.o conftest.obj
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; then
+ for ac_file in `(ls conftest.o conftest.obj; ls conftest.*) 2>/dev/null`; do
+ case $ac_file in
+ *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg ) ;;
+ *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'`
+ break;;
+ esac
+done
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+{ { echo "$as_me:$LINENO: error: cannot compute suffix of object files: cannot compile
+See \`config.log' for more details." >&5
+echo "$as_me: error: cannot compute suffix of object files: cannot compile
+See \`config.log' for more details." >&2;}
+ { (exit 1); exit 1; }; }
+fi
+
+rm -f conftest.$ac_cv_objext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_objext" >&5
+echo "${ECHO_T}$ac_cv_objext" >&6
+OBJEXT=$ac_cv_objext
+ac_objext=$OBJEXT
+echo "$as_me:$LINENO: checking whether we are using the GNU C compiler" >&5
+echo $ECHO_N "checking whether we are using the GNU C compiler... $ECHO_C" >&6
+if test "${ac_cv_c_compiler_gnu+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+#ifndef __GNUC__
+ choke me
+#endif
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_compiler_gnu=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_compiler_gnu=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_cv_c_compiler_gnu=$ac_compiler_gnu
+
+fi
+echo "$as_me:$LINENO: result: $ac_cv_c_compiler_gnu" >&5
+echo "${ECHO_T}$ac_cv_c_compiler_gnu" >&6
+GCC=`test $ac_compiler_gnu = yes && echo yes`
+ac_test_CFLAGS=${CFLAGS+set}
+ac_save_CFLAGS=$CFLAGS
+CFLAGS="-g"
+echo "$as_me:$LINENO: checking whether $CC accepts -g" >&5
+echo $ECHO_N "checking whether $CC accepts -g... $ECHO_C" >&6
+if test "${ac_cv_prog_cc_g+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_prog_cc_g=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_prog_cc_g=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_prog_cc_g" >&5
+echo "${ECHO_T}$ac_cv_prog_cc_g" >&6
+if test "$ac_test_CFLAGS" = set; then
+ CFLAGS=$ac_save_CFLAGS
+elif test $ac_cv_prog_cc_g = yes; then
+ if test "$GCC" = yes; then
+ CFLAGS="-g -O2"
+ else
+ CFLAGS="-g"
+ fi
+else
+ if test "$GCC" = yes; then
+ CFLAGS="-O2"
+ else
+ CFLAGS=
+ fi
+fi
+echo "$as_me:$LINENO: checking for $CC option to accept ANSI C" >&5
+echo $ECHO_N "checking for $CC option to accept ANSI C... $ECHO_C" >&6
+if test "${ac_cv_prog_cc_stdc+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_cv_prog_cc_stdc=no
+ac_save_CC=$CC
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <stdarg.h>
+#include <stdio.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+/* Most of the following tests are stolen from RCS 5.7's src/conf.sh. */
+struct buf { int x; };
+FILE * (*rcsopen) (struct buf *, struct stat *, int);
+static char *e (p, i)
+ char **p;
+ int i;
+{
+ return p[i];
+}
+static char *f (char * (*g) (char **, int), char **p, ...)
+{
+ char *s;
+ va_list v;
+ va_start (v,p);
+ s = g (p, va_arg (v,int));
+ va_end (v);
+ return s;
+}
+
+/* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has
+ function prototypes and stuff, but not '\xHH' hex character constants.
+ These don't provoke an error unfortunately, instead are silently treated
+ as 'x'. The following induces an error, until -std1 is added to get
+ proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an
+ array size at least. It's necessary to write '\x00'==0 to get something
+ that's true only with -std1. */
+int osf4_cc_array ['\x00' == 0 ? 1 : -1];
+
+int test (int i, double x);
+struct s1 {int (*f) (int a);};
+struct s2 {int (*f) (double a);};
+int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int);
+int argc;
+char **argv;
+int
+main ()
+{
+return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1];
+ ;
+ return 0;
+}
+_ACEOF
+# Don't try gcc -ansi; that turns off useful extensions and
+# breaks some systems' header files.
+# AIX -qlanglvl=ansi
+# Ultrix and OSF/1 -std1
+# HP-UX 10.20 and later -Ae
+# HP-UX older versions -Aa -D_HPUX_SOURCE
+# SVR4 -Xc -D__EXTENSIONS__
+for ac_arg in "" -qlanglvl=ansi -std1 -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__"
+do
+ CC="$ac_save_CC $ac_arg"
+ rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_prog_cc_stdc=$ac_arg
+break
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext
+done
+rm -f conftest.$ac_ext conftest.$ac_objext
+CC=$ac_save_CC
+
+fi
+
+case "x$ac_cv_prog_cc_stdc" in
+ x|xno)
+ echo "$as_me:$LINENO: result: none needed" >&5
+echo "${ECHO_T}none needed" >&6 ;;
+ *)
+ echo "$as_me:$LINENO: result: $ac_cv_prog_cc_stdc" >&5
+echo "${ECHO_T}$ac_cv_prog_cc_stdc" >&6
+ CC="$CC $ac_cv_prog_cc_stdc" ;;
+esac
+
+# Some people use a C++ compiler to compile C. Since we use `exit',
+# in C++ we need to declare it. In case someone uses the same compiler
+# for both compiling C and C++ we need to have the C++ compiler decide
+# the declaration of exit, since it's the most demanding environment.
+cat >conftest.$ac_ext <<_ACEOF
+#ifndef __cplusplus
+ choke me
+#endif
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ for ac_declaration in \
+ '' \
+ 'extern "C" void std::exit (int) throw (); using std::exit;' \
+ 'extern "C" void std::exit (int); using std::exit;' \
+ 'extern "C" void exit (int) throw ();' \
+ 'extern "C" void exit (int);' \
+ 'void exit (int);'
+do
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+$ac_declaration
+#include <stdlib.h>
+int
+main ()
+{
+exit (42);
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ :
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+continue
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+$ac_declaration
+int
+main ()
+{
+exit (42);
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ break
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+done
+rm -f conftest*
+if test -n "$ac_declaration"; then
+ echo '#ifdef __cplusplus' >>confdefs.h
+ echo $ac_declaration >>confdefs.h
+ echo '#endif' >>confdefs.h
+fi
+
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+echo "$as_me:$LINENO: checking for a sed that does not truncate output" >&5
+echo $ECHO_N "checking for a sed that does not truncate output... $ECHO_C" >&6
+if test "${lt_cv_path_SED+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ # Loop through the user's path and test for sed and gsed.
+# Then use that list of sed's as ones to test for truncation.
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for lt_ac_prog in sed gsed; do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$lt_ac_prog$ac_exec_ext"; then
+ lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext"
+ fi
+ done
+ done
+done
+lt_ac_max=0
+lt_ac_count=0
+# Add /usr/xpg4/bin/sed as it is typically found on Solaris
+# along with /bin/sed that truncates output.
+for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do
+ test ! -f $lt_ac_sed && break
+ cat /dev/null > conftest.in
+ lt_ac_count=0
+ echo $ECHO_N "0123456789$ECHO_C" >conftest.in
+ # Check for GNU sed and select it if it is found.
+ if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then
+ lt_cv_path_SED=$lt_ac_sed
+ break
+ fi
+ while true; do
+ cat conftest.in conftest.in >conftest.tmp
+ mv conftest.tmp conftest.in
+ cp conftest.in conftest.nl
+ echo >>conftest.nl
+ $lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break
+ cmp -s conftest.out conftest.nl || break
+ # 10000 chars as input seems more than enough
+ test $lt_ac_count -gt 10 && break
+ lt_ac_count=`expr $lt_ac_count + 1`
+ if test $lt_ac_count -gt $lt_ac_max; then
+ lt_ac_max=$lt_ac_count
+ lt_cv_path_SED=$lt_ac_sed
+ fi
+ done
+done
+SED=$lt_cv_path_SED
+
+fi
+
+echo "$as_me:$LINENO: result: $SED" >&5
+echo "${ECHO_T}$SED" >&6
+
+echo "$as_me:$LINENO: checking for egrep" >&5
+echo $ECHO_N "checking for egrep... $ECHO_C" >&6
+if test "${ac_cv_prog_egrep+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if echo a | (grep -E '(a|b)') >/dev/null 2>&1
+ then ac_cv_prog_egrep='grep -E'
+ else ac_cv_prog_egrep='egrep'
+ fi
+fi
+echo "$as_me:$LINENO: result: $ac_cv_prog_egrep" >&5
+echo "${ECHO_T}$ac_cv_prog_egrep" >&6
+ EGREP=$ac_cv_prog_egrep
+
+
+
+# Check whether --with-gnu-ld or --without-gnu-ld was given.
+if test "${with_gnu_ld+set}" = set; then
+ withval="$with_gnu_ld"
+ test "$withval" = no || with_gnu_ld=yes
+else
+ with_gnu_ld=no
+fi;
+ac_prog=ld
+if test "$GCC" = yes; then
+ # Check if gcc -print-prog-name=ld gives a path.
+ echo "$as_me:$LINENO: checking for ld used by $CC" >&5
+echo $ECHO_N "checking for ld used by $CC... $ECHO_C" >&6
+ case $host in
+ *-*-mingw*)
+ # gcc leaves a trailing carriage return which upsets mingw
+ ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;;
+ *)
+ ac_prog=`($CC -print-prog-name=ld) 2>&5` ;;
+ esac
+ case $ac_prog in
+ # Accept absolute paths.
+ [\\/]* | ?:[\\/]*)
+ re_direlt='/[^/][^/]*/\.\./'
+ # Canonicalize the pathname of ld
+ ac_prog=`echo $ac_prog| $SED 's%\\\\%/%g'`
+ while echo $ac_prog | grep "$re_direlt" > /dev/null 2>&1; do
+ ac_prog=`echo $ac_prog| $SED "s%$re_direlt%/%"`
+ done
+ test -z "$LD" && LD="$ac_prog"
+ ;;
+ "")
+ # If it fails, then pretend we aren't using GCC.
+ ac_prog=ld
+ ;;
+ *)
+ # If it is relative, then search for the first ld in PATH.
+ with_gnu_ld=unknown
+ ;;
+ esac
+elif test "$with_gnu_ld" = yes; then
+ echo "$as_me:$LINENO: checking for GNU ld" >&5
+echo $ECHO_N "checking for GNU ld... $ECHO_C" >&6
+else
+ echo "$as_me:$LINENO: checking for non-GNU ld" >&5
+echo $ECHO_N "checking for non-GNU ld... $ECHO_C" >&6
+fi
+if test "${lt_cv_path_LD+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -z "$LD"; then
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+ for ac_dir in $PATH; do
+ IFS="$lt_save_ifs"
+ test -z "$ac_dir" && ac_dir=.
+ if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then
+ lt_cv_path_LD="$ac_dir/$ac_prog"
+ # Check to see if the program is GNU ld. I'd rather use --version,
+ # but apparently some GNU ld's only accept -v.
+ # Break only if it was the GNU/non-GNU ld that we prefer.
+ case `"$lt_cv_path_LD" -v 2>&1 </dev/null` in
+ *GNU* | *'with BFD'*)
+ test "$with_gnu_ld" != no && break
+ ;;
+ *)
+ test "$with_gnu_ld" != yes && break
+ ;;
+ esac
+ fi
+ done
+ IFS="$lt_save_ifs"
+else
+ lt_cv_path_LD="$LD" # Let the user override the test with a path.
+fi
+fi
+
+LD="$lt_cv_path_LD"
+if test -n "$LD"; then
+ echo "$as_me:$LINENO: result: $LD" >&5
+echo "${ECHO_T}$LD" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+test -z "$LD" && { { echo "$as_me:$LINENO: error: no acceptable ld found in \$PATH" >&5
+echo "$as_me: error: no acceptable ld found in \$PATH" >&2;}
+ { (exit 1); exit 1; }; }
+echo "$as_me:$LINENO: checking if the linker ($LD) is GNU ld" >&5
+echo $ECHO_N "checking if the linker ($LD) is GNU ld... $ECHO_C" >&6
+if test "${lt_cv_prog_gnu_ld+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ # I'd rather use --version here, but apparently some GNU ld's only accept -v.
+case `$LD -v 2>&1 </dev/null` in
+*GNU* | *'with BFD'*)
+ lt_cv_prog_gnu_ld=yes
+ ;;
+*)
+ lt_cv_prog_gnu_ld=no
+ ;;
+esac
+fi
+echo "$as_me:$LINENO: result: $lt_cv_prog_gnu_ld" >&5
+echo "${ECHO_T}$lt_cv_prog_gnu_ld" >&6
+with_gnu_ld=$lt_cv_prog_gnu_ld
+
+
+echo "$as_me:$LINENO: checking for $LD option to reload object files" >&5
+echo $ECHO_N "checking for $LD option to reload object files... $ECHO_C" >&6
+if test "${lt_cv_ld_reload_flag+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_cv_ld_reload_flag='-r'
+fi
+echo "$as_me:$LINENO: result: $lt_cv_ld_reload_flag" >&5
+echo "${ECHO_T}$lt_cv_ld_reload_flag" >&6
+reload_flag=$lt_cv_ld_reload_flag
+case $reload_flag in
+"" | " "*) ;;
+*) reload_flag=" $reload_flag" ;;
+esac
+reload_cmds='$LD$reload_flag -o $output$reload_objs'
+
+echo "$as_me:$LINENO: checking for BSD-compatible nm" >&5
+echo $ECHO_N "checking for BSD-compatible nm... $ECHO_C" >&6
+if test "${lt_cv_path_NM+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$NM"; then
+ # Let the user override the test.
+ lt_cv_path_NM="$NM"
+else
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+ for ac_dir in $PATH /usr/ccs/bin /usr/ucb /bin; do
+ IFS="$lt_save_ifs"
+ test -z "$ac_dir" && ac_dir=.
+ tmp_nm="$ac_dir/${ac_tool_prefix}nm"
+ if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then
+ # Check to see if the nm accepts a BSD-compat flag.
+ # Adding the `sed 1q' prevents false positives on HP-UX, which says:
+ # nm: unknown option "B" ignored
+ # Tru64's nm complains that /dev/null is an invalid object file
+ case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in
+ */dev/null* | *'Invalid file or object type'*)
+ lt_cv_path_NM="$tmp_nm -B"
+ break
+ ;;
+ *)
+ case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in
+ */dev/null*)
+ lt_cv_path_NM="$tmp_nm -p"
+ break
+ ;;
+ *)
+ lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but
+ continue # so that we can try to find one that supports BSD flags
+ ;;
+ esac
+ esac
+ fi
+ done
+ IFS="$lt_save_ifs"
+ test -z "$lt_cv_path_NM" && lt_cv_path_NM=nm
+fi
+fi
+echo "$as_me:$LINENO: result: $lt_cv_path_NM" >&5
+echo "${ECHO_T}$lt_cv_path_NM" >&6
+NM="$lt_cv_path_NM"
+
+echo "$as_me:$LINENO: checking whether ln -s works" >&5
+echo $ECHO_N "checking whether ln -s works... $ECHO_C" >&6
+LN_S=$as_ln_s
+if test "$LN_S" = "ln -s"; then
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+else
+ echo "$as_me:$LINENO: result: no, using $LN_S" >&5
+echo "${ECHO_T}no, using $LN_S" >&6
+fi
+
+echo "$as_me:$LINENO: checking how to recognise dependent libraries" >&5
+echo $ECHO_N "checking how to recognise dependent libraries... $ECHO_C" >&6
+if test "${lt_cv_deplibs_check_method+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_cv_file_magic_cmd='$MAGIC_CMD'
+lt_cv_file_magic_test_file=
+lt_cv_deplibs_check_method='unknown'
+# Need to set the preceding variable on all platforms that support
+# interlibrary dependencies.
+# 'none' -- dependencies not supported.
+# `unknown' -- same as none, but documents that we really don't know.
+# 'pass_all' -- all dependencies passed with no checks.
+# 'test_compile' -- check by making test program.
+# 'file_magic [[regex]]' -- check by looking for files in library path
+# which responds to the $file_magic_cmd with a given extended regex.
+# If you have `file' or equivalent on your system and you're not sure
+# whether `pass_all' will *always* work, you probably want this one.
+
+case $host_os in
+aix4* | aix5*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+beos*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+bsdi4*)
+ lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib)'
+ lt_cv_file_magic_cmd='/usr/bin/file -L'
+ lt_cv_file_magic_test_file=/shlib/libc.so
+ ;;
+
+cygwin*)
+ # win32_libid is a shell function defined in ltmain.sh
+ lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+ lt_cv_file_magic_cmd='win32_libid'
+ ;;
+
+mingw* | pw32*)
+ # Base MSYS/MinGW do not provide the 'file' command needed by
+ # win32_libid shell function, so use a weaker test based on 'objdump'.
+ lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
+ lt_cv_file_magic_cmd='$OBJDUMP -f'
+ ;;
+
+darwin* | rhapsody*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+freebsd* | kfreebsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then
+ case $host_cpu in
+ i*86 )
+ # Not sure whether the presence of OpenBSD here was a mistake.
+ # Let's accept both of them until this is cleared up.
+ lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD)/i[3-9]86 (compact )?demand paged shared library'
+ lt_cv_file_magic_cmd=/usr/bin/file
+ lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*`
+ ;;
+ esac
+ else
+ lt_cv_deplibs_check_method=pass_all
+ fi
+ ;;
+
+gnu*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+hpux10.20* | hpux11*)
+ lt_cv_file_magic_cmd=/usr/bin/file
+ case "$host_cpu" in
+ ia64*)
+ lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - IA64'
+ lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so
+ ;;
+ hppa*64*)
+ lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - PA-RISC [0-9].[0-9]'
+ lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl
+ ;;
+ *)
+ lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|PA-RISC[0-9].[0-9]) shared library'
+ lt_cv_file_magic_test_file=/usr/lib/libc.sl
+ ;;
+ esac
+ ;;
+
+irix5* | irix6* | nonstopux*)
+ case $LD in
+ *-32|*"-32 ") libmagic=32-bit;;
+ *-n32|*"-n32 ") libmagic=N32;;
+ *-64|*"-64 ") libmagic=64-bit;;
+ *) libmagic=never-match;;
+ esac
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+# This must be Linux ELF.
+linux*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then
+ lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$'
+ else
+ lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|_pic\.a)$'
+ fi
+ ;;
+
+newos6*)
+ lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (executable|dynamic lib)'
+ lt_cv_file_magic_cmd=/usr/bin/file
+ lt_cv_file_magic_test_file=/usr/lib/libnls.so
+ ;;
+
+nto-qnx*)
+ lt_cv_deplibs_check_method=unknown
+ ;;
+
+openbsd*)
+ lt_cv_file_magic_cmd=/usr/bin/file
+ lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*`
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB shared object'
+ else
+ lt_cv_deplibs_check_method='file_magic OpenBSD.* shared library'
+ fi
+ ;;
+
+osf3* | osf4* | osf5*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+sco3.2v5*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+solaris*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+
+sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ case $host_vendor in
+ motorola)
+ lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib) M[0-9][0-9]* Version [0-9]'
+ lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*`
+ ;;
+ ncr)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+ sequent)
+ lt_cv_file_magic_cmd='/bin/file'
+ lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )'
+ ;;
+ sni)
+ lt_cv_file_magic_cmd='/bin/file'
+ lt_cv_deplibs_check_method="file_magic ELF [0-9][0-9]*-bit [LM]SB dynamic lib"
+ lt_cv_file_magic_test_file=/lib/libc.so
+ ;;
+ siemens)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+ esac
+ ;;
+
+sysv5OpenUNIX8* | sysv5UnixWare7* | sysv5uw[78]* | unixware7* | sysv4*uw2*)
+ lt_cv_deplibs_check_method=pass_all
+ ;;
+esac
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_deplibs_check_method" >&5
+echo "${ECHO_T}$lt_cv_deplibs_check_method" >&6
+file_magic_cmd=$lt_cv_file_magic_cmd
+deplibs_check_method=$lt_cv_deplibs_check_method
+test -z "$deplibs_check_method" && deplibs_check_method=unknown
+
+
+
+
+# If no C compiler was specified, use CC.
+LTCC=${LTCC-"$CC"}
+
+# Allow CC to be a program name with arguments.
+compiler=$CC
+
+
+# Check whether --enable-libtool-lock or --disable-libtool-lock was given.
+if test "${enable_libtool_lock+set}" = set; then
+ enableval="$enable_libtool_lock"
+
+fi;
+test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes
+
+# Some flags need to be propagated to the compiler or linker for good
+# libtool support.
+case $host in
+ia64-*-hpux*)
+ # Find out which ABI we are using.
+ echo 'int i;' > conftest.$ac_ext
+ if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; then
+ case `/usr/bin/file conftest.$ac_objext` in
+ *ELF-32*)
+ HPUX_IA64_MODE="32"
+ ;;
+ *ELF-64*)
+ HPUX_IA64_MODE="64"
+ ;;
+ esac
+ fi
+ rm -rf conftest*
+ ;;
+*-*-irix6*)
+ # Find out which ABI we are using.
+ echo '#line 3064 "configure"' > conftest.$ac_ext
+ if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; then
+ if test "$lt_cv_prog_gnu_ld" = yes; then
+ case `/usr/bin/file conftest.$ac_objext` in
+ *32-bit*)
+ LD="${LD-ld} -melf32bsmip"
+ ;;
+ *N32*)
+ LD="${LD-ld} -melf32bmipn32"
+ ;;
+ *64-bit*)
+ LD="${LD-ld} -melf64bmip"
+ ;;
+ esac
+ else
+ case `/usr/bin/file conftest.$ac_objext` in
+ *32-bit*)
+ LD="${LD-ld} -32"
+ ;;
+ *N32*)
+ LD="${LD-ld} -n32"
+ ;;
+ *64-bit*)
+ LD="${LD-ld} -64"
+ ;;
+ esac
+ fi
+ fi
+ rm -rf conftest*
+ ;;
+
+x86_64-*linux*|ppc*-*linux*|powerpc*-*linux*|s390*-*linux*|sparc*-*linux*)
+ # Find out which ABI we are using.
+ echo 'int i;' > conftest.$ac_ext
+ if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; then
+ case "`/usr/bin/file conftest.o`" in
+ *32-bit*)
+ case $host in
+ x86_64-*linux*)
+ LD="${LD-ld} -m elf_i386"
+ ;;
+ ppc64-*linux*|powerpc64-*linux*)
+ LD="${LD-ld} -m elf32ppclinux"
+ ;;
+ s390x-*linux*)
+ LD="${LD-ld} -m elf_s390"
+ ;;
+ sparc64-*linux*)
+ LD="${LD-ld} -m elf32_sparc"
+ ;;
+ esac
+ ;;
+ *64-bit*)
+ case $host in
+ x86_64-*linux*)
+ LD="${LD-ld} -m elf_x86_64"
+ ;;
+ ppc*-*linux*|powerpc*-*linux*)
+ LD="${LD-ld} -m elf64ppc"
+ ;;
+ s390*-*linux*)
+ LD="${LD-ld} -m elf64_s390"
+ ;;
+ sparc*-*linux*)
+ LD="${LD-ld} -m elf64_sparc"
+ ;;
+ esac
+ ;;
+ esac
+ fi
+ rm -rf conftest*
+ ;;
+
+*-*-sco3.2v5*)
+ # On SCO OpenServer 5, we need -belf to get full-featured binaries.
+ SAVE_CFLAGS="$CFLAGS"
+ CFLAGS="$CFLAGS -belf"
+ echo "$as_me:$LINENO: checking whether the C compiler needs -belf" >&5
+echo $ECHO_N "checking whether the C compiler needs -belf... $ECHO_C" >&6
+if test "${lt_cv_cc_needs_belf+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ lt_cv_cc_needs_belf=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+lt_cv_cc_needs_belf=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+ ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_cc_needs_belf" >&5
+echo "${ECHO_T}$lt_cv_cc_needs_belf" >&6
+ if test x"$lt_cv_cc_needs_belf" != x"yes"; then
+ # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf
+ CFLAGS="$SAVE_CFLAGS"
+ fi
+ ;;
+
+esac
+
+need_locks="$enable_libtool_lock"
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+echo "$as_me:$LINENO: checking how to run the C preprocessor" >&5
+echo $ECHO_N "checking how to run the C preprocessor... $ECHO_C" >&6
+# On Suns, sometimes $CPP names a directory.
+if test -n "$CPP" && test -d "$CPP"; then
+ CPP=
+fi
+if test -z "$CPP"; then
+ if test "${ac_cv_prog_CPP+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ # Double quotes because CPP needs to be expanded
+ for CPP in "$CC -E" "$CC -E -traditional-cpp" "/lib/cpp"
+ do
+ ac_preproc_ok=false
+for ac_c_preproc_warn_flag in '' yes
+do
+ # Use a header file that comes with gcc, so configuring glibc
+ # with a fresh cross-compiler works.
+ # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ # <limits.h> exists even on freestanding compilers.
+ # On the NeXT, cc -E runs the code through the compiler's parser,
+ # not just through cpp. "Syntax error" is here to catch this case.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+ Syntax error
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_c_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_c_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ :
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ # Broken: fails on valid input.
+continue
+fi
+rm -f conftest.err conftest.$ac_ext
+
+ # OK, works on sane cases. Now check whether non-existent headers
+ # can be detected and how.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <ac_nonexistent.h>
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_c_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_c_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ # Broken: success on invalid input.
+continue
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ # Passes both tests.
+ac_preproc_ok=:
+break
+fi
+rm -f conftest.err conftest.$ac_ext
+
+done
+# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
+rm -f conftest.err conftest.$ac_ext
+if $ac_preproc_ok; then
+ break
+fi
+
+ done
+ ac_cv_prog_CPP=$CPP
+
+fi
+ CPP=$ac_cv_prog_CPP
+else
+ ac_cv_prog_CPP=$CPP
+fi
+echo "$as_me:$LINENO: result: $CPP" >&5
+echo "${ECHO_T}$CPP" >&6
+ac_preproc_ok=false
+for ac_c_preproc_warn_flag in '' yes
+do
+ # Use a header file that comes with gcc, so configuring glibc
+ # with a fresh cross-compiler works.
+ # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ # <limits.h> exists even on freestanding compilers.
+ # On the NeXT, cc -E runs the code through the compiler's parser,
+ # not just through cpp. "Syntax error" is here to catch this case.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+ Syntax error
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_c_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_c_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ :
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ # Broken: fails on valid input.
+continue
+fi
+rm -f conftest.err conftest.$ac_ext
+
+ # OK, works on sane cases. Now check whether non-existent headers
+ # can be detected and how.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <ac_nonexistent.h>
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_c_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_c_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ # Broken: success on invalid input.
+continue
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ # Passes both tests.
+ac_preproc_ok=:
+break
+fi
+rm -f conftest.err conftest.$ac_ext
+
+done
+# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
+rm -f conftest.err conftest.$ac_ext
+if $ac_preproc_ok; then
+ :
+else
+ { { echo "$as_me:$LINENO: error: C preprocessor \"$CPP\" fails sanity check
+See \`config.log' for more details." >&5
+echo "$as_me: error: C preprocessor \"$CPP\" fails sanity check
+See \`config.log' for more details." >&2;}
+ { (exit 1); exit 1; }; }
+fi
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+echo "$as_me:$LINENO: checking for ANSI C header files" >&5
+echo $ECHO_N "checking for ANSI C header files... $ECHO_C" >&6
+if test "${ac_cv_header_stdc+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <stdlib.h>
+#include <stdarg.h>
+#include <string.h>
+#include <float.h>
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_header_stdc=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_header_stdc=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+
+if test $ac_cv_header_stdc = yes; then
+ # SunOS 4.x string.h does not declare mem*, contrary to ANSI.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <string.h>
+
+_ACEOF
+if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+ $EGREP "memchr" >/dev/null 2>&1; then
+ :
+else
+ ac_cv_header_stdc=no
+fi
+rm -f conftest*
+
+fi
+
+if test $ac_cv_header_stdc = yes; then
+ # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <stdlib.h>
+
+_ACEOF
+if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+ $EGREP "free" >/dev/null 2>&1; then
+ :
+else
+ ac_cv_header_stdc=no
+fi
+rm -f conftest*
+
+fi
+
+if test $ac_cv_header_stdc = yes; then
+ # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi.
+ if test "$cross_compiling" = yes; then
+ :
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <ctype.h>
+#if ((' ' & 0x0FF) == 0x020)
+# define ISLOWER(c) ('a' <= (c) && (c) <= 'z')
+# define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c))
+#else
+# define ISLOWER(c) \
+ (('a' <= (c) && (c) <= 'i') \
+ || ('j' <= (c) && (c) <= 'r') \
+ || ('s' <= (c) && (c) <= 'z'))
+# define TOUPPER(c) (ISLOWER(c) ? ((c) | 0x40) : (c))
+#endif
+
+#define XOR(e, f) (((e) && !(f)) || (!(e) && (f)))
+int
+main ()
+{
+ int i;
+ for (i = 0; i < 256; i++)
+ if (XOR (islower (i), ISLOWER (i))
+ || toupper (i) != TOUPPER (i))
+ exit(2);
+ exit (0);
+}
+_ACEOF
+rm -f conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && { ac_try='./conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ :
+else
+ echo "$as_me: program exited with status $ac_status" >&5
+echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+( exit $ac_status )
+ac_cv_header_stdc=no
+fi
+rm -f core *.core gmon.out bb.out conftest$ac_exeext conftest.$ac_objext conftest.$ac_ext
+fi
+fi
+fi
+echo "$as_me:$LINENO: result: $ac_cv_header_stdc" >&5
+echo "${ECHO_T}$ac_cv_header_stdc" >&6
+if test $ac_cv_header_stdc = yes; then
+
+cat >>confdefs.h <<\_ACEOF
+#define STDC_HEADERS 1
+_ACEOF
+
+fi
+
+# On IRIX 5.3, sys/types and inttypes.h are conflicting.
+
+
+
+
+
+
+
+
+
+for ac_header in sys/types.h sys/stat.h stdlib.h string.h memory.h strings.h \
+ inttypes.h stdint.h unistd.h
+do
+as_ac_Header=`echo "ac_cv_header_$ac_header" | $as_tr_sh`
+echo "$as_me:$LINENO: checking for $ac_header" >&5
+echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6
+if eval "test \"\${$as_ac_Header+set}\" = set"; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+$ac_includes_default
+
+#include <$ac_header>
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ eval "$as_ac_Header=yes"
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+eval "$as_ac_Header=no"
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5
+echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6
+if test `eval echo '${'$as_ac_Header'}'` = yes; then
+ cat >>confdefs.h <<_ACEOF
+#define `echo "HAVE_$ac_header" | $as_tr_cpp` 1
+_ACEOF
+
+fi
+
+done
+
+
+
+for ac_header in dlfcn.h
+do
+as_ac_Header=`echo "ac_cv_header_$ac_header" | $as_tr_sh`
+if eval "test \"\${$as_ac_Header+set}\" = set"; then
+ echo "$as_me:$LINENO: checking for $ac_header" >&5
+echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6
+if eval "test \"\${$as_ac_Header+set}\" = set"; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+fi
+echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5
+echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6
+else
+ # Is the header compilable?
+echo "$as_me:$LINENO: checking $ac_header usability" >&5
+echo $ECHO_N "checking $ac_header usability... $ECHO_C" >&6
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+$ac_includes_default
+#include <$ac_header>
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_header_compiler=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_header_compiler=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+echo "$as_me:$LINENO: result: $ac_header_compiler" >&5
+echo "${ECHO_T}$ac_header_compiler" >&6
+
+# Is the header present?
+echo "$as_me:$LINENO: checking $ac_header presence" >&5
+echo $ECHO_N "checking $ac_header presence... $ECHO_C" >&6
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <$ac_header>
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_c_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_c_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ ac_header_preproc=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ ac_header_preproc=no
+fi
+rm -f conftest.err conftest.$ac_ext
+echo "$as_me:$LINENO: result: $ac_header_preproc" >&5
+echo "${ECHO_T}$ac_header_preproc" >&6
+
+# So? What about this header?
+case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in
+ yes:no: )
+ { echo "$as_me:$LINENO: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&5
+echo "$as_me: WARNING: $ac_header: accepted by the compiler, rejected by the preprocessor!" >&2;}
+ { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the compiler's result" >&5
+echo "$as_me: WARNING: $ac_header: proceeding with the compiler's result" >&2;}
+ ac_header_preproc=yes
+ ;;
+ no:yes:* )
+ { echo "$as_me:$LINENO: WARNING: $ac_header: present but cannot be compiled" >&5
+echo "$as_me: WARNING: $ac_header: present but cannot be compiled" >&2;}
+ { echo "$as_me:$LINENO: WARNING: $ac_header: check for missing prerequisite headers?" >&5
+echo "$as_me: WARNING: $ac_header: check for missing prerequisite headers?" >&2;}
+ { echo "$as_me:$LINENO: WARNING: $ac_header: see the Autoconf documentation" >&5
+echo "$as_me: WARNING: $ac_header: see the Autoconf documentation" >&2;}
+ { echo "$as_me:$LINENO: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&5
+echo "$as_me: WARNING: $ac_header: section \"Present But Cannot Be Compiled\"" >&2;}
+ { echo "$as_me:$LINENO: WARNING: $ac_header: proceeding with the preprocessor's result" >&5
+echo "$as_me: WARNING: $ac_header: proceeding with the preprocessor's result" >&2;}
+ { echo "$as_me:$LINENO: WARNING: $ac_header: in the future, the compiler will take precedence" >&5
+echo "$as_me: WARNING: $ac_header: in the future, the compiler will take precedence" >&2;}
+ (
+ cat <<\_ASBOX
+## ------------------------------------------ ##
+## Report this to the AC_PACKAGE_NAME lists. ##
+## ------------------------------------------ ##
+_ASBOX
+ ) |
+ sed "s/^/$as_me: WARNING: /" >&2
+ ;;
+esac
+echo "$as_me:$LINENO: checking for $ac_header" >&5
+echo $ECHO_N "checking for $ac_header... $ECHO_C" >&6
+if eval "test \"\${$as_ac_Header+set}\" = set"; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ eval "$as_ac_Header=\$ac_header_preproc"
+fi
+echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_Header'}'`" >&5
+echo "${ECHO_T}`eval echo '${'$as_ac_Header'}'`" >&6
+
+fi
+if test `eval echo '${'$as_ac_Header'}'` = yes; then
+ cat >>confdefs.h <<_ACEOF
+#define `echo "HAVE_$ac_header" | $as_tr_cpp` 1
+_ACEOF
+
+fi
+
+done
+
+ac_ext=cc
+ac_cpp='$CXXCPP $CPPFLAGS'
+ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_cxx_compiler_gnu
+if test -n "$ac_tool_prefix"; then
+ for ac_prog in $CCC g++ c++ gpp aCC CC cxx cc++ cl FCC KCC RCC xlC_r xlC
+ do
+ # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_CXX+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$CXX"; then
+ ac_cv_prog_CXX="$CXX" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CXX="$ac_tool_prefix$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+CXX=$ac_cv_prog_CXX
+if test -n "$CXX"; then
+ echo "$as_me:$LINENO: result: $CXX" >&5
+echo "${ECHO_T}$CXX" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ test -n "$CXX" && break
+ done
+fi
+if test -z "$CXX"; then
+ ac_ct_CXX=$CXX
+ for ac_prog in $CCC g++ c++ gpp aCC CC cxx cc++ cl FCC KCC RCC xlC_r xlC
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_CXX+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_CXX"; then
+ ac_cv_prog_ac_ct_CXX="$ac_ct_CXX" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CXX="$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+ac_ct_CXX=$ac_cv_prog_ac_ct_CXX
+if test -n "$ac_ct_CXX"; then
+ echo "$as_me:$LINENO: result: $ac_ct_CXX" >&5
+echo "${ECHO_T}$ac_ct_CXX" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ test -n "$ac_ct_CXX" && break
+done
+test -n "$ac_ct_CXX" || ac_ct_CXX="g++"
+
+ CXX=$ac_ct_CXX
+fi
+
+
+# Provide some information about the compiler.
+echo "$as_me:$LINENO:" \
+ "checking for C++ compiler version" >&5
+ac_compiler=`set X $ac_compile; echo $2`
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler --version </dev/null >&5\"") >&5
+ (eval $ac_compiler --version </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler -v </dev/null >&5\"") >&5
+ (eval $ac_compiler -v </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler -V </dev/null >&5\"") >&5
+ (eval $ac_compiler -V </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+
+echo "$as_me:$LINENO: checking whether we are using the GNU C++ compiler" >&5
+echo $ECHO_N "checking whether we are using the GNU C++ compiler... $ECHO_C" >&6
+if test "${ac_cv_cxx_compiler_gnu+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+#ifndef __GNUC__
+ choke me
+#endif
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_compiler_gnu=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_compiler_gnu=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_cv_cxx_compiler_gnu=$ac_compiler_gnu
+
+fi
+echo "$as_me:$LINENO: result: $ac_cv_cxx_compiler_gnu" >&5
+echo "${ECHO_T}$ac_cv_cxx_compiler_gnu" >&6
+GXX=`test $ac_compiler_gnu = yes && echo yes`
+ac_test_CXXFLAGS=${CXXFLAGS+set}
+ac_save_CXXFLAGS=$CXXFLAGS
+CXXFLAGS="-g"
+echo "$as_me:$LINENO: checking whether $CXX accepts -g" >&5
+echo $ECHO_N "checking whether $CXX accepts -g... $ECHO_C" >&6
+if test "${ac_cv_prog_cxx_g+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_prog_cxx_g=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_prog_cxx_g=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_prog_cxx_g" >&5
+echo "${ECHO_T}$ac_cv_prog_cxx_g" >&6
+if test "$ac_test_CXXFLAGS" = set; then
+ CXXFLAGS=$ac_save_CXXFLAGS
+elif test $ac_cv_prog_cxx_g = yes; then
+ if test "$GXX" = yes; then
+ CXXFLAGS="-g -O2"
+ else
+ CXXFLAGS="-g"
+ fi
+else
+ if test "$GXX" = yes; then
+ CXXFLAGS="-O2"
+ else
+ CXXFLAGS=
+ fi
+fi
+for ac_declaration in \
+ '' \
+ 'extern "C" void std::exit (int) throw (); using std::exit;' \
+ 'extern "C" void std::exit (int); using std::exit;' \
+ 'extern "C" void exit (int) throw ();' \
+ 'extern "C" void exit (int);' \
+ 'void exit (int);'
+do
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+$ac_declaration
+#include <stdlib.h>
+int
+main ()
+{
+exit (42);
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ :
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+continue
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+$ac_declaration
+int
+main ()
+{
+exit (42);
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ break
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+done
+rm -f conftest*
+if test -n "$ac_declaration"; then
+ echo '#ifdef __cplusplus' >>confdefs.h
+ echo $ac_declaration >>confdefs.h
+ echo '#endif' >>confdefs.h
+fi
+
+ac_ext=cc
+ac_cpp='$CXXCPP $CPPFLAGS'
+ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_cxx_compiler_gnu
+
+ac_ext=cc
+ac_cpp='$CXXCPP $CPPFLAGS'
+ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_cxx_compiler_gnu
+echo "$as_me:$LINENO: checking how to run the C++ preprocessor" >&5
+echo $ECHO_N "checking how to run the C++ preprocessor... $ECHO_C" >&6
+if test -z "$CXXCPP"; then
+ if test "${ac_cv_prog_CXXCPP+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ # Double quotes because CXXCPP needs to be expanded
+ for CXXCPP in "$CXX -E" "/lib/cpp"
+ do
+ ac_preproc_ok=false
+for ac_cxx_preproc_warn_flag in '' yes
+do
+ # Use a header file that comes with gcc, so configuring glibc
+ # with a fresh cross-compiler works.
+ # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ # <limits.h> exists even on freestanding compilers.
+ # On the NeXT, cc -E runs the code through the compiler's parser,
+ # not just through cpp. "Syntax error" is here to catch this case.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+ Syntax error
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_cxx_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_cxx_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ :
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ # Broken: fails on valid input.
+continue
+fi
+rm -f conftest.err conftest.$ac_ext
+
+ # OK, works on sane cases. Now check whether non-existent headers
+ # can be detected and how.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <ac_nonexistent.h>
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_cxx_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_cxx_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ # Broken: success on invalid input.
+continue
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ # Passes both tests.
+ac_preproc_ok=:
+break
+fi
+rm -f conftest.err conftest.$ac_ext
+
+done
+# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
+rm -f conftest.err conftest.$ac_ext
+if $ac_preproc_ok; then
+ break
+fi
+
+ done
+ ac_cv_prog_CXXCPP=$CXXCPP
+
+fi
+ CXXCPP=$ac_cv_prog_CXXCPP
+else
+ ac_cv_prog_CXXCPP=$CXXCPP
+fi
+echo "$as_me:$LINENO: result: $CXXCPP" >&5
+echo "${ECHO_T}$CXXCPP" >&6
+ac_preproc_ok=false
+for ac_cxx_preproc_warn_flag in '' yes
+do
+ # Use a header file that comes with gcc, so configuring glibc
+ # with a fresh cross-compiler works.
+ # Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ # <limits.h> exists even on freestanding compilers.
+ # On the NeXT, cc -E runs the code through the compiler's parser,
+ # not just through cpp. "Syntax error" is here to catch this case.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+ Syntax error
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_cxx_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_cxx_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ :
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ # Broken: fails on valid input.
+continue
+fi
+rm -f conftest.err conftest.$ac_ext
+
+ # OK, works on sane cases. Now check whether non-existent headers
+ # can be detected and how.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <ac_nonexistent.h>
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_cxx_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_cxx_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ # Broken: success on invalid input.
+continue
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ # Passes both tests.
+ac_preproc_ok=:
+break
+fi
+rm -f conftest.err conftest.$ac_ext
+
+done
+# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.
+rm -f conftest.err conftest.$ac_ext
+if $ac_preproc_ok; then
+ :
+else
+ { { echo "$as_me:$LINENO: error: C++ preprocessor \"$CXXCPP\" fails sanity check
+See \`config.log' for more details." >&5
+echo "$as_me: error: C++ preprocessor \"$CXXCPP\" fails sanity check
+See \`config.log' for more details." >&2;}
+ { (exit 1); exit 1; }; }
+fi
+
+ac_ext=cc
+ac_cpp='$CXXCPP $CPPFLAGS'
+ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_cxx_compiler_gnu
+
+
+ac_ext=f
+ac_compile='$F77 -c $FFLAGS conftest.$ac_ext >&5'
+ac_link='$F77 -o conftest$ac_exeext $FFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_f77_compiler_gnu
+if test -n "$ac_tool_prefix"; then
+ for ac_prog in g77 f77 xlf frt pgf77 fort77 fl32 af77 f90 xlf90 pgf90 epcf90 f95 fort xlf95 ifc efc pgf95 lf95 gfortran
+ do
+ # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_F77+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$F77"; then
+ ac_cv_prog_F77="$F77" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_F77="$ac_tool_prefix$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+F77=$ac_cv_prog_F77
+if test -n "$F77"; then
+ echo "$as_me:$LINENO: result: $F77" >&5
+echo "${ECHO_T}$F77" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ test -n "$F77" && break
+ done
+fi
+if test -z "$F77"; then
+ ac_ct_F77=$F77
+ for ac_prog in g77 f77 xlf frt pgf77 fort77 fl32 af77 f90 xlf90 pgf90 epcf90 f95 fort xlf95 ifc efc pgf95 lf95 gfortran
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_F77+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_F77"; then
+ ac_cv_prog_ac_ct_F77="$ac_ct_F77" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_F77="$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+ac_ct_F77=$ac_cv_prog_ac_ct_F77
+if test -n "$ac_ct_F77"; then
+ echo "$as_me:$LINENO: result: $ac_ct_F77" >&5
+echo "${ECHO_T}$ac_ct_F77" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ test -n "$ac_ct_F77" && break
+done
+
+ F77=$ac_ct_F77
+fi
+
+
+# Provide some information about the compiler.
+echo "$as_me:4527:" \
+ "checking for Fortran 77 compiler version" >&5
+ac_compiler=`set X $ac_compile; echo $2`
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler --version </dev/null >&5\"") >&5
+ (eval $ac_compiler --version </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler -v </dev/null >&5\"") >&5
+ (eval $ac_compiler -v </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler -V </dev/null >&5\"") >&5
+ (eval $ac_compiler -V </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+rm -f a.out
+
+# If we don't use `.F' as extension, the preprocessor is not run on the
+# input file. (Note that this only needs to work for GNU compilers.)
+ac_save_ext=$ac_ext
+ac_ext=F
+echo "$as_me:$LINENO: checking whether we are using the GNU Fortran 77 compiler" >&5
+echo $ECHO_N "checking whether we are using the GNU Fortran 77 compiler... $ECHO_C" >&6
+if test "${ac_cv_f77_compiler_gnu+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+ program main
+#ifndef __GNUC__
+ choke me
+#endif
+
+ end
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_f77_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_compiler_gnu=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_compiler_gnu=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_cv_f77_compiler_gnu=$ac_compiler_gnu
+
+fi
+echo "$as_me:$LINENO: result: $ac_cv_f77_compiler_gnu" >&5
+echo "${ECHO_T}$ac_cv_f77_compiler_gnu" >&6
+ac_ext=$ac_save_ext
+ac_test_FFLAGS=${FFLAGS+set}
+ac_save_FFLAGS=$FFLAGS
+FFLAGS=
+echo "$as_me:$LINENO: checking whether $F77 accepts -g" >&5
+echo $ECHO_N "checking whether $F77 accepts -g... $ECHO_C" >&6
+if test "${ac_cv_prog_f77_g+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ FFLAGS=-g
+cat >conftest.$ac_ext <<_ACEOF
+ program main
+
+ end
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_f77_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_prog_f77_g=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_prog_f77_g=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+
+fi
+echo "$as_me:$LINENO: result: $ac_cv_prog_f77_g" >&5
+echo "${ECHO_T}$ac_cv_prog_f77_g" >&6
+if test "$ac_test_FFLAGS" = set; then
+ FFLAGS=$ac_save_FFLAGS
+elif test $ac_cv_prog_f77_g = yes; then
+ if test "x$ac_cv_f77_compiler_gnu" = xyes; then
+ FFLAGS="-g -O2"
+ else
+ FFLAGS="-g"
+ fi
+else
+ if test "x$ac_cv_f77_compiler_gnu" = xyes; then
+ FFLAGS="-O2"
+ else
+ FFLAGS=
+ fi
+fi
+
+G77=`test $ac_compiler_gnu = yes && echo yes`
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+
+# Autoconf 2.13's AC_OBJEXT and AC_EXEEXT macros only work for C compilers!
+
+# find the maximum length of command line arguments
+echo "$as_me:$LINENO: checking the maximum length of command line arguments" >&5
+echo $ECHO_N "checking the maximum length of command line arguments... $ECHO_C" >&6
+if test "${lt_cv_sys_max_cmd_len+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ i=0
+ testring="ABCD"
+
+ case $build_os in
+ msdosdjgpp*)
+ # On DJGPP, this test can blow up pretty badly due to problems in libc
+ # (any single argument exceeding 2000 bytes causes a buffer overrun
+ # during glob expansion). Even if it were fixed, the result of this
+ # check would be larger than it should be.
+ lt_cv_sys_max_cmd_len=12288; # 12K is about right
+ ;;
+
+ gnu*)
+ # Under GNU Hurd, this test is not required because there is
+ # no limit to the length of command line arguments.
+ # Libtool will interpret -1 as no limit whatsoever
+ lt_cv_sys_max_cmd_len=-1;
+ ;;
+
+ cygwin* | mingw*)
+ # On Win9x/ME, this test blows up -- it succeeds, but takes
+ # about 5 minutes as the teststring grows exponentially.
+ # Worse, since 9x/ME are not pre-emptively multitasking,
+ # you end up with a "frozen" computer, even though with patience
+ # the test eventually succeeds (with a max line length of 256k).
+ # Instead, let's just punt: use the minimum linelength reported by
+ # all of the supported platforms: 8192 (on NT/2K/XP).
+ lt_cv_sys_max_cmd_len=8192;
+ ;;
+
+ amigaos*)
+ # On AmigaOS with pdksh, this test takes hours, literally.
+ # So we just punt and use a minimum line length of 8192.
+ lt_cv_sys_max_cmd_len=8192;
+ ;;
+
+ *)
+ # If test is not a shell built-in, we'll probably end up computing a
+ # maximum length that is only half of the actual maximum length, but
+ # we can't tell.
+ while (test "X"`$CONFIG_SHELL $0 --fallback-echo "X$testring" 2>/dev/null` \
+ = "XX$testring") >/dev/null 2>&1 &&
+ new_result=`expr "X$testring" : ".*" 2>&1` &&
+ lt_cv_sys_max_cmd_len=$new_result &&
+ test $i != 17 # 1/2 MB should be enough
+ do
+ i=`expr $i + 1`
+ testring=$testring$testring
+ done
+ testring=
+ # Add a significant safety factor because C++ compilers can tack on massive
+ # amounts of additional arguments before passing them to the linker.
+ # It appears as though 1/2 is a usable value.
+ lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2`
+ ;;
+ esac
+
+fi
+
+if test -n "$lt_cv_sys_max_cmd_len" ; then
+ echo "$as_me:$LINENO: result: $lt_cv_sys_max_cmd_len" >&5
+echo "${ECHO_T}$lt_cv_sys_max_cmd_len" >&6
+else
+ echo "$as_me:$LINENO: result: none" >&5
+echo "${ECHO_T}none" >&6
+fi
+
+
+
+
+# Check for command to grab the raw symbol name followed by C symbol from nm.
+echo "$as_me:$LINENO: checking command to parse $NM output from $compiler object" >&5
+echo $ECHO_N "checking command to parse $NM output from $compiler object... $ECHO_C" >&6
+if test "${lt_cv_sys_global_symbol_pipe+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+
+# These are sane defaults that work on at least a few old systems.
+# [They come from Ultrix. What could be older than Ultrix?!! ;)]
+
+# Character class describing NM global symbol codes.
+symcode='[BCDEGRST]'
+
+# Regexp to match symbols that can be accessed directly from C.
+sympat='\([_A-Za-z][_A-Za-z0-9]*\)'
+
+# Transform the above into a raw symbol and a C symbol.
+symxfrm='\1 \2\3 \3'
+
+# Transform an extracted symbol line into a proper C declaration
+lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^. .* \(.*\)$/extern int \1;/p'"
+
+# Transform an extracted symbol line into symbol name and symbol address
+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode \([^ ]*\) \([^ ]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'"
+
+# Define system-specific variables.
+case $host_os in
+aix*)
+ symcode='[BCDT]'
+ ;;
+cygwin* | mingw* | pw32*)
+ symcode='[ABCDGISTW]'
+ ;;
+hpux*) # Its linker distinguishes data from code symbols
+ if test "$host_cpu" = ia64; then
+ symcode='[ABCDEGRST]'
+ fi
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'"
+ ;;
+irix* | nonstopux*)
+ symcode='[BCDEGRST]'
+ ;;
+osf*)
+ symcode='[BCDEGQRST]'
+ ;;
+solaris* | sysv5*)
+ symcode='[BDRT]'
+ ;;
+sysv4)
+ symcode='[DFNSTU]'
+ ;;
+esac
+
+# Handle CRLF in mingw tool chain
+opt_cr=
+case $build_os in
+mingw*)
+ opt_cr=`echo 'x\{0,1\}' | tr x '\015'` # option cr in regexp
+ ;;
+esac
+
+# If we're using GNU nm, then use its standard symbol codes.
+case `$NM -V 2>&1` in
+*GNU* | *'with BFD'*)
+ symcode='[ABCDGIRSTW]' ;;
+esac
+
+# Try without a prefix underscore, then with it.
+for ac_symprfx in "" "_"; do
+
+ # Write the raw and C identifiers.
+ lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[ ]\($symcode$symcode*\)[ ][ ]*\($ac_symprfx\)$sympat$opt_cr$/$symxfrm/p'"
+
+ # Check to see that the pipe works correctly.
+ pipe_works=no
+
+ rm -f conftest*
+ cat > conftest.$ac_ext <<EOF
+#ifdef __cplusplus
+extern "C" {
+#endif
+char nm_test_var;
+void nm_test_func(){}
+#ifdef __cplusplus
+}
+#endif
+int main(){nm_test_var='a';nm_test_func();return(0);}
+EOF
+
+ if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; then
+ # Now try to grab the symbols.
+ nlist=conftest.nm
+ if { (eval echo "$as_me:$LINENO: \"$NM conftest.$ac_objext \| $lt_cv_sys_global_symbol_pipe \> $nlist\"") >&5
+ (eval $NM conftest.$ac_objext \| $lt_cv_sys_global_symbol_pipe \> $nlist) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && test -s "$nlist"; then
+ # Try sorting and uniquifying the output.
+ if sort "$nlist" | uniq > "$nlist"T; then
+ mv -f "$nlist"T "$nlist"
+ else
+ rm -f "$nlist"T
+ fi
+
+ # Make sure that we snagged all the symbols we need.
+ if grep ' nm_test_var$' "$nlist" >/dev/null; then
+ if grep ' nm_test_func$' "$nlist" >/dev/null; then
+ cat <<EOF > conftest.$ac_ext
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+EOF
+ # Now generate the symbol file.
+ eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | grep -v main >> conftest.$ac_ext'
+
+ cat <<EOF >> conftest.$ac_ext
+#if defined (__STDC__) && __STDC__
+# define lt_ptr_t void *
+#else
+# define lt_ptr_t char *
+# define const
+#endif
+
+/* The mapping between symbol names and symbols. */
+const struct {
+ const char *name;
+ lt_ptr_t address;
+}
+lt_preloaded_symbols[] =
+{
+EOF
+ $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (lt_ptr_t) \&\2},/" < "$nlist" | grep -v main >> conftest.$ac_ext
+ cat <<\EOF >> conftest.$ac_ext
+ {0, (lt_ptr_t) 0}
+};
+
+#ifdef __cplusplus
+}
+#endif
+EOF
+ # Now try linking the two files.
+ mv conftest.$ac_objext conftstm.$ac_objext
+ lt_save_LIBS="$LIBS"
+ lt_save_CFLAGS="$CFLAGS"
+ LIBS="conftstm.$ac_objext"
+ CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && test -s conftest${ac_exeext}; then
+ pipe_works=yes
+ fi
+ LIBS="$lt_save_LIBS"
+ CFLAGS="$lt_save_CFLAGS"
+ else
+ echo "cannot find nm_test_func in $nlist" >&5
+ fi
+ else
+ echo "cannot find nm_test_var in $nlist" >&5
+ fi
+ else
+ echo "cannot run $lt_cv_sys_global_symbol_pipe" >&5
+ fi
+ else
+ echo "$progname: failed program was:" >&5
+ cat conftest.$ac_ext >&5
+ fi
+ rm -f conftest* conftst*
+
+ # Do not use the global_symbol_pipe unless it works.
+ if test "$pipe_works" = yes; then
+ break
+ else
+ lt_cv_sys_global_symbol_pipe=
+ fi
+done
+
+fi
+
+if test -z "$lt_cv_sys_global_symbol_pipe"; then
+ lt_cv_sys_global_symbol_to_cdecl=
+fi
+if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then
+ echo "$as_me:$LINENO: result: failed" >&5
+echo "${ECHO_T}failed" >&6
+else
+ echo "$as_me:$LINENO: result: ok" >&5
+echo "${ECHO_T}ok" >&6
+fi
+
+echo "$as_me:$LINENO: checking for objdir" >&5
+echo $ECHO_N "checking for objdir... $ECHO_C" >&6
+if test "${lt_cv_objdir+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ rm -f .libs 2>/dev/null
+mkdir .libs 2>/dev/null
+if test -d .libs; then
+ lt_cv_objdir=.libs
+else
+ # MS-DOS does not allow filenames that begin with a dot.
+ lt_cv_objdir=_libs
+fi
+rmdir .libs 2>/dev/null
+fi
+echo "$as_me:$LINENO: result: $lt_cv_objdir" >&5
+echo "${ECHO_T}$lt_cv_objdir" >&6
+objdir=$lt_cv_objdir
+
+
+
+
+
+case $host_os in
+aix3*)
+ # AIX sometimes has problems with the GCC collect2 program. For some
+ # reason, if we set the COLLECT_NAMES environment variable, the problems
+ # vanish in a puff of smoke.
+ if test "X${COLLECT_NAMES+set}" != Xset; then
+ COLLECT_NAMES=
+ export COLLECT_NAMES
+ fi
+ ;;
+esac
+
+# Sed substitution that helps us do robust quoting. It backslashifies
+# metacharacters that are still active within double-quoted strings.
+Xsed='sed -e s/^X//'
+sed_quote_subst='s/\([\\"\\`$\\\\]\)/\\\1/g'
+
+# Same as above, but do not quote variable references.
+double_quote_subst='s/\([\\"\\`\\\\]\)/\\\1/g'
+
+# Sed substitution to delay expansion of an escaped shell variable in a
+# double_quote_subst'ed string.
+delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g'
+
+# Sed substitution to avoid accidental globbing in evaled expressions
+no_glob_subst='s/\*/\\\*/g'
+
+# Constants:
+rm="rm -f"
+
+# Global variables:
+default_ofile=libtool
+can_build_shared=yes
+
+# All known linkers require a `.a' archive for static linking (except M$VC,
+# which needs '.lib').
+libext=a
+ltmain="$ac_aux_dir/ltmain.sh"
+ofile="$default_ofile"
+with_gnu_ld="$lt_cv_prog_gnu_ld"
+
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+set dummy ${ac_tool_prefix}ar; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_AR+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$AR"; then
+ ac_cv_prog_AR="$AR" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_AR="${ac_tool_prefix}ar"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+AR=$ac_cv_prog_AR
+if test -n "$AR"; then
+ echo "$as_me:$LINENO: result: $AR" >&5
+echo "${ECHO_T}$AR" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$ac_cv_prog_AR"; then
+ ac_ct_AR=$AR
+ # Extract the first word of "ar", so it can be a program name with args.
+set dummy ar; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_AR+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_AR"; then
+ ac_cv_prog_ac_ct_AR="$ac_ct_AR" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_AR="ar"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+ test -z "$ac_cv_prog_ac_ct_AR" && ac_cv_prog_ac_ct_AR="false"
+fi
+fi
+ac_ct_AR=$ac_cv_prog_ac_ct_AR
+if test -n "$ac_ct_AR"; then
+ echo "$as_me:$LINENO: result: $ac_ct_AR" >&5
+echo "${ECHO_T}$ac_ct_AR" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ AR=$ac_ct_AR
+else
+ AR="$ac_cv_prog_AR"
+fi
+
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args.
+set dummy ${ac_tool_prefix}ranlib; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_RANLIB+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$RANLIB"; then
+ ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+RANLIB=$ac_cv_prog_RANLIB
+if test -n "$RANLIB"; then
+ echo "$as_me:$LINENO: result: $RANLIB" >&5
+echo "${ECHO_T}$RANLIB" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$ac_cv_prog_RANLIB"; then
+ ac_ct_RANLIB=$RANLIB
+ # Extract the first word of "ranlib", so it can be a program name with args.
+set dummy ranlib; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_RANLIB+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_RANLIB"; then
+ ac_cv_prog_ac_ct_RANLIB="$ac_ct_RANLIB" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_RANLIB="ranlib"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+ test -z "$ac_cv_prog_ac_ct_RANLIB" && ac_cv_prog_ac_ct_RANLIB=":"
+fi
+fi
+ac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB
+if test -n "$ac_ct_RANLIB"; then
+ echo "$as_me:$LINENO: result: $ac_ct_RANLIB" >&5
+echo "${ECHO_T}$ac_ct_RANLIB" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ RANLIB=$ac_ct_RANLIB
+else
+ RANLIB="$ac_cv_prog_RANLIB"
+fi
+
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+set dummy ${ac_tool_prefix}strip; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_STRIP+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$STRIP"; then
+ ac_cv_prog_STRIP="$STRIP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_STRIP="${ac_tool_prefix}strip"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+STRIP=$ac_cv_prog_STRIP
+if test -n "$STRIP"; then
+ echo "$as_me:$LINENO: result: $STRIP" >&5
+echo "${ECHO_T}$STRIP" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$ac_cv_prog_STRIP"; then
+ ac_ct_STRIP=$STRIP
+ # Extract the first word of "strip", so it can be a program name with args.
+set dummy strip; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_STRIP+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_STRIP"; then
+ ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_STRIP="strip"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+ test -z "$ac_cv_prog_ac_ct_STRIP" && ac_cv_prog_ac_ct_STRIP=":"
+fi
+fi
+ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP
+if test -n "$ac_ct_STRIP"; then
+ echo "$as_me:$LINENO: result: $ac_ct_STRIP" >&5
+echo "${ECHO_T}$ac_ct_STRIP" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ STRIP=$ac_ct_STRIP
+else
+ STRIP="$ac_cv_prog_STRIP"
+fi
+
+
+old_CC="$CC"
+old_CFLAGS="$CFLAGS"
+
+# Set sane defaults for various variables
+test -z "$AR" && AR=ar
+test -z "$AR_FLAGS" && AR_FLAGS=cru
+test -z "$AS" && AS=as
+test -z "$CC" && CC=cc
+test -z "$LTCC" && LTCC=$CC
+test -z "$DLLTOOL" && DLLTOOL=dlltool
+test -z "$LD" && LD=ld
+test -z "$LN_S" && LN_S="ln -s"
+test -z "$MAGIC_CMD" && MAGIC_CMD=file
+test -z "$NM" && NM=nm
+test -z "$SED" && SED=sed
+test -z "$OBJDUMP" && OBJDUMP=objdump
+test -z "$RANLIB" && RANLIB=:
+test -z "$STRIP" && STRIP=:
+test -z "$ac_objext" && ac_objext=o
+
+# Determine commands to create old-style static archives.
+old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs'
+old_postinstall_cmds='chmod 644 $oldlib'
+old_postuninstall_cmds=
+
+if test -n "$RANLIB"; then
+ case $host_os in
+ openbsd*)
+ old_postinstall_cmds="\$RANLIB -t \$oldlib~$old_postinstall_cmds"
+ ;;
+ *)
+ old_postinstall_cmds="\$RANLIB \$oldlib~$old_postinstall_cmds"
+ ;;
+ esac
+ old_archive_cmds="$old_archive_cmds~\$RANLIB \$oldlib"
+fi
+
+# Only perform the check for file, if the check method requires it
+case $deplibs_check_method in
+file_magic*)
+ if test "$file_magic_cmd" = '$MAGIC_CMD'; then
+ echo "$as_me:$LINENO: checking for ${ac_tool_prefix}file" >&5
+echo $ECHO_N "checking for ${ac_tool_prefix}file... $ECHO_C" >&6
+if test "${lt_cv_path_MAGIC_CMD+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ case $MAGIC_CMD in
+[\\/*] | ?:[\\/]*)
+ lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path.
+ ;;
+*)
+ lt_save_MAGIC_CMD="$MAGIC_CMD"
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+ ac_dummy="/usr/bin$PATH_SEPARATOR$PATH"
+ for ac_dir in $ac_dummy; do
+ IFS="$lt_save_ifs"
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/${ac_tool_prefix}file; then
+ lt_cv_path_MAGIC_CMD="$ac_dir/${ac_tool_prefix}file"
+ if test -n "$file_magic_test_file"; then
+ case $deplibs_check_method in
+ "file_magic "*)
+ file_magic_regex="`expr \"$deplibs_check_method\" : \"file_magic \(.*\)\"`"
+ MAGIC_CMD="$lt_cv_path_MAGIC_CMD"
+ if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null |
+ $EGREP "$file_magic_regex" > /dev/null; then
+ :
+ else
+ cat <<EOF 1>&2
+
+*** Warning: the command libtool uses to detect shared libraries,
+*** $file_magic_cmd, produces output that libtool cannot recognize.
+*** The result is that libtool may fail to recognize shared libraries
+*** as such. This will affect the creation of libtool libraries that
+*** depend on shared libraries, but programs linked with such libtool
+*** libraries will work regardless of this problem. Nevertheless, you
+*** may want to report the problem to your system manager and/or to
+*** bug-libtool at gnu.org
+
+EOF
+ fi ;;
+ esac
+ fi
+ break
+ fi
+ done
+ IFS="$lt_save_ifs"
+ MAGIC_CMD="$lt_save_MAGIC_CMD"
+ ;;
+esac
+fi
+
+MAGIC_CMD="$lt_cv_path_MAGIC_CMD"
+if test -n "$MAGIC_CMD"; then
+ echo "$as_me:$LINENO: result: $MAGIC_CMD" >&5
+echo "${ECHO_T}$MAGIC_CMD" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+if test -z "$lt_cv_path_MAGIC_CMD"; then
+ if test -n "$ac_tool_prefix"; then
+ echo "$as_me:$LINENO: checking for file" >&5
+echo $ECHO_N "checking for file... $ECHO_C" >&6
+if test "${lt_cv_path_MAGIC_CMD+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ case $MAGIC_CMD in
+[\\/*] | ?:[\\/]*)
+ lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path.
+ ;;
+*)
+ lt_save_MAGIC_CMD="$MAGIC_CMD"
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+ ac_dummy="/usr/bin$PATH_SEPARATOR$PATH"
+ for ac_dir in $ac_dummy; do
+ IFS="$lt_save_ifs"
+ test -z "$ac_dir" && ac_dir=.
+ if test -f $ac_dir/file; then
+ lt_cv_path_MAGIC_CMD="$ac_dir/file"
+ if test -n "$file_magic_test_file"; then
+ case $deplibs_check_method in
+ "file_magic "*)
+ file_magic_regex="`expr \"$deplibs_check_method\" : \"file_magic \(.*\)\"`"
+ MAGIC_CMD="$lt_cv_path_MAGIC_CMD"
+ if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null |
+ $EGREP "$file_magic_regex" > /dev/null; then
+ :
+ else
+ cat <<EOF 1>&2
+
+*** Warning: the command libtool uses to detect shared libraries,
+*** $file_magic_cmd, produces output that libtool cannot recognize.
+*** The result is that libtool may fail to recognize shared libraries
+*** as such. This will affect the creation of libtool libraries that
+*** depend on shared libraries, but programs linked with such libtool
+*** libraries will work regardless of this problem. Nevertheless, you
+*** may want to report the problem to your system manager and/or to
+*** bug-libtool at gnu.org
+
+EOF
+ fi ;;
+ esac
+ fi
+ break
+ fi
+ done
+ IFS="$lt_save_ifs"
+ MAGIC_CMD="$lt_save_MAGIC_CMD"
+ ;;
+esac
+fi
+
+MAGIC_CMD="$lt_cv_path_MAGIC_CMD"
+if test -n "$MAGIC_CMD"; then
+ echo "$as_me:$LINENO: result: $MAGIC_CMD" >&5
+echo "${ECHO_T}$MAGIC_CMD" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ else
+ MAGIC_CMD=:
+ fi
+fi
+
+ fi
+ ;;
+esac
+
+enable_dlopen=no
+enable_win32_dll=no
+
+# Check whether --enable-libtool-lock or --disable-libtool-lock was given.
+if test "${enable_libtool_lock+set}" = set; then
+ enableval="$enable_libtool_lock"
+
+fi;
+test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes
+
+
+# Check whether --with-pic or --without-pic was given.
+if test "${with_pic+set}" = set; then
+ withval="$with_pic"
+ pic_mode="$withval"
+else
+ pic_mode=default
+fi;
+test -z "$pic_mode" && pic_mode=default
+
+# Use C for the default configuration in the libtool script
+tagname=
+lt_save_CC="$CC"
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+
+# Source file extension for C test sources.
+ac_ext=c
+
+# Object file extension for compiled C test sources.
+objext=o
+objext=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code="int some_variable = 0;\n"
+
+# Code to be used in simple link tests
+lt_simple_link_test_code='int main(){return(0);}\n'
+
+
+# If no C compiler was specified, use CC.
+LTCC=${LTCC-"$CC"}
+
+# Allow CC to be a program name with arguments.
+compiler=$CC
+
+
+#
+# Check for any special shared library compilation flags.
+#
+lt_prog_cc_shlib=
+if test "$GCC" = no; then
+ case $host_os in
+ sco3.2v5*)
+ lt_prog_cc_shlib='-belf'
+ ;;
+ esac
+fi
+if test -n "$lt_prog_cc_shlib"; then
+ { echo "$as_me:$LINENO: WARNING: \`$CC' requires \`$lt_prog_cc_shlib' to build shared libraries" >&5
+echo "$as_me: WARNING: \`$CC' requires \`$lt_prog_cc_shlib' to build shared libraries" >&2;}
+ if echo "$old_CC $old_CFLAGS " | grep "[ ]$lt_prog_cc_shlib[ ]" >/dev/null; then :
+ else
+ { echo "$as_me:$LINENO: WARNING: add \`$lt_prog_cc_shlib' to the CC or CFLAGS env variable and reconfigure" >&5
+echo "$as_me: WARNING: add \`$lt_prog_cc_shlib' to the CC or CFLAGS env variable and reconfigure" >&2;}
+ lt_cv_prog_cc_can_build_shared=no
+ fi
+fi
+
+
+#
+# Check to make sure the static flag actually works.
+#
+echo "$as_me:$LINENO: checking if $compiler static flag $lt_prog_compiler_static works" >&5
+echo $ECHO_N "checking if $compiler static flag $lt_prog_compiler_static works... $ECHO_C" >&6
+if test "${lt_prog_compiler_static_works+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_prog_compiler_static_works=no
+ save_LDFLAGS="$LDFLAGS"
+ LDFLAGS="$LDFLAGS $lt_prog_compiler_static"
+ printf "$lt_simple_link_test_code" > conftest.$ac_ext
+ if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test -s conftest.err; then
+ # Append any errors to the config.log.
+ cat conftest.err 1>&5
+ else
+ lt_prog_compiler_static_works=yes
+ fi
+ fi
+ $rm conftest*
+ LDFLAGS="$save_LDFLAGS"
+
+fi
+echo "$as_me:$LINENO: result: $lt_prog_compiler_static_works" >&5
+echo "${ECHO_T}$lt_prog_compiler_static_works" >&6
+
+if test x"$lt_prog_compiler_static_works" = xyes; then
+ :
+else
+ lt_prog_compiler_static=
+fi
+
+
+
+
+lt_prog_compiler_no_builtin_flag=
+
+if test "$GCC" = yes; then
+ lt_prog_compiler_no_builtin_flag=' -fno-builtin'
+
+
+echo "$as_me:$LINENO: checking if $compiler supports -fno-rtti -fno-exceptions" >&5
+echo $ECHO_N "checking if $compiler supports -fno-rtti -fno-exceptions... $ECHO_C" >&6
+if test "${lt_cv_prog_compiler_rtti_exceptions+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_cv_prog_compiler_rtti_exceptions=no
+ ac_outfile=conftest.$ac_objext
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+ lt_compiler_flag="-fno-rtti -fno-exceptions"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ # The option is referenced via a variable to avoid confusing sed.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:5561: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>conftest.err)
+ ac_status=$?
+ cat conftest.err >&5
+ echo "$as_me:5565: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s "$ac_outfile"; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s conftest.err; then
+ lt_cv_prog_compiler_rtti_exceptions=yes
+ fi
+ fi
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_rtti_exceptions" >&5
+echo "${ECHO_T}$lt_cv_prog_compiler_rtti_exceptions" >&6
+
+if test x"$lt_cv_prog_compiler_rtti_exceptions" = xyes; then
+ lt_prog_compiler_no_builtin_flag="$lt_prog_compiler_no_builtin_flag -fno-rtti -fno-exceptions"
+else
+ :
+fi
+
+fi
+
+lt_prog_compiler_wl=
+lt_prog_compiler_pic=
+lt_prog_compiler_static=
+
+echo "$as_me:$LINENO: checking for $compiler option to produce PIC" >&5
+echo $ECHO_N "checking for $compiler option to produce PIC... $ECHO_C" >&6
+
+ if test "$GCC" = yes; then
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_static='-static'
+
+ case $host_os in
+ aix*)
+ # All AIX code is PIC.
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static='-Bstatic'
+ fi
+ ;;
+
+ amigaos*)
+ # FIXME: we need at least 68020 code to build shared libraries, but
+ # adding the `-m68020' flag to GCC prevents building anything better,
+ # like `-m68040'.
+ lt_prog_compiler_pic='-m68020 -resident32 -malways-restore-a4'
+ ;;
+
+ beos* | cygwin* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
+ # PIC is the default for these OSes.
+ ;;
+
+ mingw* | pw32* | os2*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ lt_prog_compiler_pic='-DDLL_EXPORT'
+ ;;
+
+ darwin* | rhapsody*)
+ # PIC is the default on this platform
+ # Common symbols not allowed in MH_DYLIB files
+ lt_prog_compiler_pic='-fno-common'
+ ;;
+
+ msdosdjgpp*)
+ # Just because we use GCC doesn't mean we suddenly get shared libraries
+ # on systems that don't support them.
+ lt_prog_compiler_can_build_shared=no
+ enable_shared=no
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ lt_prog_compiler_pic=-Kconform_pic
+ fi
+ ;;
+
+ hpux*)
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ lt_prog_compiler_pic='-fPIC'
+ ;;
+ esac
+ ;;
+
+ *)
+ lt_prog_compiler_pic='-fPIC'
+ ;;
+ esac
+ else
+ # PORTME Check for flag to pass linker flags through the system compiler.
+ case $host_os in
+ aix*)
+ lt_prog_compiler_wl='-Wl,'
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static='-Bstatic'
+ else
+ lt_prog_compiler_static='-bnso -bI:/lib/syscalls.exp'
+ fi
+ ;;
+
+ mingw* | pw32* | os2*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ lt_prog_compiler_pic='-DDLL_EXPORT'
+ ;;
+
+ hpux9* | hpux10* | hpux11*)
+ lt_prog_compiler_wl='-Wl,'
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ lt_prog_compiler_pic='+Z'
+ ;;
+ esac
+ # Is there a better lt_prog_compiler_static that works with the bundled CC?
+ lt_prog_compiler_static='${wl}-a ${wl}archive'
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ lt_prog_compiler_wl='-Wl,'
+ # PIC (with -KPIC) is the default.
+ lt_prog_compiler_static='-non_shared'
+ ;;
+
+ newsos6)
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ linux*)
+ case $CC in
+ icc* | ecc*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-static'
+ ;;
+ ccc*)
+ lt_prog_compiler_wl='-Wl,'
+ # All Alpha code is PIC.
+ lt_prog_compiler_static='-non_shared'
+ ;;
+ esac
+ ;;
+
+ osf3* | osf4* | osf5*)
+ lt_prog_compiler_wl='-Wl,'
+ # All OSF/1 code is PIC.
+ lt_prog_compiler_static='-non_shared'
+ ;;
+
+ sco3.2v5*)
+ lt_prog_compiler_pic='-Kpic'
+ lt_prog_compiler_static='-dn'
+ ;;
+
+ solaris*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ sunos4*)
+ lt_prog_compiler_wl='-Qoption ld '
+ lt_prog_compiler_pic='-PIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ lt_prog_compiler_wl='-Wl,'
+ lt_prog_compiler_pic='-KPIC'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec ;then
+ lt_prog_compiler_pic='-Kconform_pic'
+ lt_prog_compiler_static='-Bstatic'
+ fi
+ ;;
+
+ uts4*)
+ lt_prog_compiler_pic='-pic'
+ lt_prog_compiler_static='-Bstatic'
+ ;;
+
+ *)
+ lt_prog_compiler_can_build_shared=no
+ ;;
+ esac
+ fi
+
+echo "$as_me:$LINENO: result: $lt_prog_compiler_pic" >&5
+echo "${ECHO_T}$lt_prog_compiler_pic" >&6
+
+#
+# Check to make sure the PIC flag actually works.
+#
+if test -n "$lt_prog_compiler_pic"; then
+
+echo "$as_me:$LINENO: checking if $compiler PIC flag $lt_prog_compiler_pic works" >&5
+echo $ECHO_N "checking if $compiler PIC flag $lt_prog_compiler_pic works... $ECHO_C" >&6
+if test "${lt_prog_compiler_pic_works+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_prog_compiler_pic_works=no
+ ac_outfile=conftest.$ac_objext
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+ lt_compiler_flag="$lt_prog_compiler_pic -DPIC"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ # The option is referenced via a variable to avoid confusing sed.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:5794: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>conftest.err)
+ ac_status=$?
+ cat conftest.err >&5
+ echo "$as_me:5798: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s "$ac_outfile"; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s conftest.err; then
+ lt_prog_compiler_pic_works=yes
+ fi
+ fi
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_works" >&5
+echo "${ECHO_T}$lt_prog_compiler_pic_works" >&6
+
+if test x"$lt_prog_compiler_pic_works" = xyes; then
+ case $lt_prog_compiler_pic in
+ "" | " "*) ;;
+ *) lt_prog_compiler_pic=" $lt_prog_compiler_pic" ;;
+ esac
+else
+ lt_prog_compiler_pic=
+ lt_prog_compiler_can_build_shared=no
+fi
+
+fi
+case "$host_os" in
+ # For platforms which do not support PIC, -DPIC is meaningless:
+ *djgpp*)
+ lt_prog_compiler_pic=
+ ;;
+ *)
+ lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+ ;;
+esac
+
+echo "$as_me:$LINENO: checking if $compiler supports -c -o file.$ac_objext" >&5
+echo $ECHO_N "checking if $compiler supports -c -o file.$ac_objext... $ECHO_C" >&6
+if test "${lt_cv_prog_compiler_c_o+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_cv_prog_compiler_c_o=no
+ $rm -r conftest 2>/dev/null
+ mkdir conftest
+ cd conftest
+ mkdir out
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ lt_compiler_flag="-o out/conftest2.$ac_objext"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:5854: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>out/conftest.err)
+ ac_status=$?
+ cat out/conftest.err >&5
+ echo "$as_me:5858: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s out/conftest2.$ac_objext
+ then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s out/conftest.err; then
+ lt_cv_prog_compiler_c_o=yes
+ fi
+ fi
+ chmod u+w .
+ $rm conftest*
+ # SGI C++ compiler will create directory out/ii_files/ for
+ # template instantiation
+ test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files
+ $rm out/* && rmdir out
+ cd ..
+ rmdir conftest
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_c_o" >&5
+echo "${ECHO_T}$lt_cv_prog_compiler_c_o" >&6
+
+
+hard_links="nottested"
+if test "$lt_cv_prog_compiler_c_o" = no && test "$need_locks" != no; then
+ # do not overwrite the value of need_locks provided by the user
+ echo "$as_me:$LINENO: checking if we can lock with hard links" >&5
+echo $ECHO_N "checking if we can lock with hard links... $ECHO_C" >&6
+ hard_links=yes
+ $rm conftest*
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ touch conftest.a
+ ln conftest.a conftest.b 2>&5 || hard_links=no
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ echo "$as_me:$LINENO: result: $hard_links" >&5
+echo "${ECHO_T}$hard_links" >&6
+ if test "$hard_links" = no; then
+ { echo "$as_me:$LINENO: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5
+echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;}
+ need_locks=warn
+ fi
+else
+ need_locks=no
+fi
+
+echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5
+echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6
+
+ runpath_var=
+ allow_undefined_flag=
+ enable_shared_with_static_runtimes=no
+ archive_cmds=
+ archive_expsym_cmds=
+ old_archive_From_new_cmds=
+ old_archive_from_expsyms_cmds=
+ export_dynamic_flag_spec=
+ whole_archive_flag_spec=
+ thread_safe_flag_spec=
+ hardcode_libdir_flag_spec=
+ hardcode_libdir_flag_spec_ld=
+ hardcode_libdir_separator=
+ hardcode_direct=no
+ hardcode_minus_L=no
+ hardcode_shlibpath_var=unsupported
+ link_all_deplibs=unknown
+ hardcode_automatic=no
+ module_cmds=
+ module_expsym_cmds=
+ always_export_symbols=no
+ export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+ # include_expsyms should be a list of space-separated symbols to be *always*
+ # included in the symbol list
+ include_expsyms=
+ # exclude_expsyms can be an extended regexp of symbols to exclude
+ # it will be wrapped by ` (' and `)$', so one must not match beginning or
+ # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc',
+ # as well as any symbol that contains `d'.
+ exclude_expsyms="_GLOBAL_OFFSET_TABLE_"
+ # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out
+ # platforms (ab)use it in PIC code, but their linkers get confused if
+ # the symbol is explicitly referenced. Since portable code cannot
+ # rely on this symbol name, it's probably fine to never include it in
+ # preloaded symbol tables.
+ extract_expsyms_cmds=
+
+ case $host_os in
+ cygwin* | mingw* | pw32*)
+ # FIXME: the MSVC++ port hasn't been tested in a loooong time
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++.
+ if test "$GCC" != yes; then
+ with_gnu_ld=no
+ fi
+ ;;
+ openbsd*)
+ with_gnu_ld=no
+ ;;
+ esac
+
+ ld_shlibs=yes
+ if test "$with_gnu_ld" = yes; then
+ # If archive_cmds runs LD, not CC, wlarc should be empty
+ wlarc='${wl}'
+
+ # See if GNU ld supports shared libraries.
+ case $host_os in
+ aix3* | aix4* | aix5*)
+ # On AIX/PPC, the GNU linker is very broken
+ if test "$host_cpu" != ia64; then
+ ld_shlibs=no
+ cat <<EOF 1>&2
+
+*** Warning: the GNU linker, at least up to release 2.9.1, is reported
+*** to be unable to reliably create shared libraries on AIX.
+*** Therefore, libtool is disabling shared libraries support. If you
+*** really care for shared libraries, you may want to modify your PATH
+*** so that a non-GNU linker is found, and then restart.
+
+EOF
+ fi
+ ;;
+
+ amigaos*)
+ archive_cmds='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_minus_L=yes
+
+ # Samuel A. Falvo II <kc5tja at dolphin.openprojects.net> reports
+ # that the semantics of dynamic libraries on AmigaOS, at least up
+ # to version 4, is to share data among multiple programs linked
+ # with the same dynamic library. Since this doesn't match the
+ # behavior of shared libraries on other platforms, we can't use
+ # them.
+ ld_shlibs=no
+ ;;
+
+ beos*)
+ if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ allow_undefined_flag=unsupported
+ # Joseph Beckenbach <jrb3 at best.com> says some releases of gcc
+ # support --undefined. This deserves some investigation. FIXME
+ archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ else
+ ld_shlibs=no
+ fi
+ ;;
+
+ cygwin* | mingw* | pw32*)
+ # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, ) is actually meaningless,
+ # as there is no search path for DLLs.
+ hardcode_libdir_flag_spec='-L$libdir'
+ allow_undefined_flag=unsupported
+ always_export_symbols=no
+ enable_shared_with_static_runtimes=yes
+ export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGS] /s/.* \([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW] /s/.* //'\'' | sort | uniq > $export_symbols'
+
+ if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ # If the export-symbols file already is a .def file (1st line
+ # is EXPORTS), use it as is; otherwise, prepend...
+ archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+ cp $export_symbols $output_objdir/$soname.def;
+ else
+ echo EXPORTS > $output_objdir/$soname.def;
+ cat $export_symbols >> $output_objdir/$soname.def;
+ fi~
+ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ else
+ ld_shlibs=no
+ fi
+ ;;
+
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ wlarc=
+ else
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ fi
+ ;;
+
+ solaris* | sysv5*)
+ if $LD -v 2>&1 | grep 'BFD 2\.8' > /dev/null; then
+ ld_shlibs=no
+ cat <<EOF 1>&2
+
+*** Warning: The releases 2.8.* of the GNU linker cannot reliably
+*** create shared libraries on Solaris systems. Therefore, libtool
+*** is disabling shared libraries support. We urge you to upgrade GNU
+*** binutils to release 2.9.1 or newer. Another option is to modify
+*** your PATH or compiler configuration so that the native linker is
+*** used, and then restart.
+
+EOF
+ elif $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ ld_shlibs=no
+ fi
+ ;;
+
+ sunos4*)
+ archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ wlarc=
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ linux*)
+ if $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then
+ tmp_archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_cmds="$tmp_archive_cmds"
+ supports_anon_versioning=no
+ case `$LD -v 2>/dev/null` in
+ *\ 01.* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11
+ *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ...
+ *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ...
+ *\ 2.11.*) ;; # other 2.11 versions
+ *) supports_anon_versioning=yes ;;
+ esac
+ if test $supports_anon_versioning = yes; then
+ archive_expsym_cmds='$echo "{ global:" > $output_objdir/$libname.ver~
+cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+$echo "local: *; };" >> $output_objdir/$libname.ver~
+ $CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib'
+ else
+ archive_expsym_cmds="$tmp_archive_cmds"
+ fi
+ else
+ ld_shlibs=no
+ fi
+ ;;
+
+ *)
+ if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ ld_shlibs=no
+ fi
+ ;;
+ esac
+
+ if test "$ld_shlibs" = yes; then
+ runpath_var=LD_RUN_PATH
+ hardcode_libdir_flag_spec='${wl}--rpath ${wl}$libdir'
+ export_dynamic_flag_spec='${wl}--export-dynamic'
+ # ancient GNU ld didn't support --whole-archive et. al.
+ if $LD --help 2>&1 | grep 'no-whole-archive' > /dev/null; then
+ whole_archive_flag_spec="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
+ else
+ whole_archive_flag_spec=
+ fi
+ fi
+ else
+ # PORTME fill in a description of your system's linker (not GNU ld)
+ case $host_os in
+ aix3*)
+ allow_undefined_flag=unsupported
+ always_export_symbols=yes
+ archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname'
+ # Note: this linker hardcodes the directories in LIBPATH if there
+ # are no directories specified by -L.
+ hardcode_minus_L=yes
+ if test "$GCC" = yes && test -z "$link_static_flag"; then
+ # Neither direct hardcoding nor static linking is supported with a
+ # broken collect2.
+ hardcode_direct=unsupported
+ fi
+ ;;
+
+ aix4* | aix5*)
+ if test "$host_cpu" = ia64; then
+ # On IA64, the linker does run time linking by default, so we don't
+ # have to do anything special.
+ aix_use_runtimelinking=no
+ exp_sym_flag='-Bexport'
+ no_entry_flag=""
+ else
+ # If we're using GNU nm, then we don't want the "-C" option.
+ # -C means demangle to AIX nm, but means don't demangle with GNU nm
+ if $NM -V 2>&1 | grep 'GNU' > /dev/null; then
+ export_symbols_cmds='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols'
+ else
+ export_symbols_cmds='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols'
+ fi
+ aix_use_runtimelinking=no
+
+ # Test if we are trying to use run time linking or normal
+ # AIX style linking. If -brtl is somewhere in LDFLAGS, we
+ # need to do runtime linking.
+ case $host_os in aix4.[23]|aix4.[23].*|aix5*)
+ for ld_flag in $LDFLAGS; do
+ if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then
+ aix_use_runtimelinking=yes
+ break
+ fi
+ done
+ esac
+
+ exp_sym_flag='-bexport'
+ no_entry_flag='-bnoentry'
+ fi
+
+ # When large executables or shared objects are built, AIX ld can
+ # have problems creating the table of contents. If linking a library
+ # or program results in "error TOC overflow" add -mminimal-toc to
+ # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
+ # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
+
+ archive_cmds=''
+ hardcode_direct=yes
+ hardcode_libdir_separator=':'
+ link_all_deplibs=yes
+
+ if test "$GCC" = yes; then
+ case $host_os in aix4.012|aix4.012.*)
+ # We only want to do this on AIX 4.2 and lower, the check
+ # below for broken collect2 doesn't work under 4.3+
+ collect2name=`${CC} -print-prog-name=collect2`
+ if test -f "$collect2name" && \
+ strings "$collect2name" | grep resolve_lib_name >/dev/null
+ then
+ # We have reworked collect2
+ hardcode_direct=yes
+ else
+ # We have old collect2
+ hardcode_direct=unsupported
+ # It fails to find uninstalled libraries when the uninstalled
+ # path is not listed in the libpath. Setting hardcode_minus_L
+ # to unsupported forces relinking
+ hardcode_minus_L=yes
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_libdir_separator=
+ fi
+ esac
+ shared_flag='-shared'
+ else
+ # not using gcc
+ if test "$host_cpu" = ia64; then
+ # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
+ # chokes on -Wl,-G. The following line is correct:
+ shared_flag='-G'
+ else
+ if test "$aix_use_runtimelinking" = yes; then
+ shared_flag='${wl}-G'
+ else
+ shared_flag='${wl}-bM:SRE'
+ fi
+ fi
+ fi
+
+ # It seems that -bexpall does not export symbols beginning with
+ # underscore (_), so it is better to generate a list of symbols to export.
+ always_export_symbols=yes
+ if test "$aix_use_runtimelinking" = yes; then
+ # Warning - without using the other runtime loading flags (-brtl),
+ # -berok will link without error, but may produce a broken library.
+ allow_undefined_flag='-berok'
+ # Determine the default libpath from the value encoded in an empty executable.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+
+aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`
+# Check for a 64-bit object if we didn't find anything.
+if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`; fi
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+
+ hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ archive_expsym_cmds="\$CC"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+ else
+ if test "$host_cpu" = ia64; then
+ hardcode_libdir_flag_spec='${wl}-R $libdir:/usr/lib:/lib'
+ allow_undefined_flag="-z nodefs"
+ archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols"
+ else
+ # Determine the default libpath from the value encoded in an empty executable.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+
+aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`
+# Check for a 64-bit object if we didn't find anything.
+if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`; fi
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+
+ hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ # Warning - without using the other run time loading flags,
+ # -berok will link without error, but may produce a broken library.
+ no_undefined_flag=' ${wl}-bernotok'
+ allow_undefined_flag=' ${wl}-berok'
+ # -bexpall does not export symbols beginning with underscore (_)
+ always_export_symbols=yes
+ # Exported symbols can be pulled into shared objects from archives
+ whole_archive_flag_spec=' '
+ archive_cmds_need_lc=yes
+	  # This is similar to how AIX traditionally builds its shared libraries.
+ archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}-bE:$export_symbols ${wl}-bnoentry${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname'
+ fi
+ fi
+ ;;
+
+ amigaos*)
+ archive_cmds='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_minus_L=yes
+ # see comment about different semantics on the GNU ld section
+ ld_shlibs=no
+ ;;
+
+ bsdi4*)
+ export_dynamic_flag_spec=-rdynamic
+ ;;
+
+ cygwin* | mingw* | pw32*)
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++.
+ # hardcode_libdir_flag_spec is actually meaningless, as there is
+ # no search path for DLLs.
+ hardcode_libdir_flag_spec=' '
+ allow_undefined_flag=unsupported
+ # Tell ltmain to make .lib files, not .a files.
+ libext=lib
+ # Tell ltmain to make .dll files, not .so files.
+ shrext=".dll"
+ # FIXME: Setting linknames here is a bad hack.
+ archive_cmds='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | $SED -e '\''s/ -lc$//'\''` -link -dll~linknames='
+ # The linker will automatically build a .lib file if we build a DLL.
+ old_archive_From_new_cmds='true'
+ # FIXME: Should let the user specify the lib program.
+ old_archive_cmds='lib /OUT:$oldlib$oldobjs$old_deplibs'
+ fix_srcfile_path='`cygpath -w "$srcfile"`'
+ enable_shared_with_static_runtimes=yes
+ ;;
+
+ darwin* | rhapsody*)
+ if test "$GXX" = yes ; then
+ archive_cmds_need_lc=no
+ case "$host_os" in
+ rhapsody* | darwin1.[012])
+ allow_undefined_flag='-undefined suppress'
+ ;;
+ *) # Darwin 1.3 on
+ if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then
+ allow_undefined_flag='-flat_namespace -undefined suppress'
+ else
+ case ${MACOSX_DEPLOYMENT_TARGET} in
+ 10.[012])
+ allow_undefined_flag='-flat_namespace -undefined suppress'
+ ;;
+ 10.*)
+ allow_undefined_flag='-undefined dynamic_lookup'
+ ;;
+ esac
+ fi
+ ;;
+ esac
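+      # With no deployment target set, or one before 10.3, undefined symbols are
+      # handled via -flat_namespace -undefined suppress; 10.3 and later can use
+      # -undefined dynamic_lookup.  The -dumpspecs grep below checks whether this
+      # compiler understands -single_module; if not, the library is built through
+      # an intermediate ${lib}-master.o link.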
+ lt_int_apple_cc_single_mod=no
+ output_verbose_link_cmd='echo'
+ if $CC -dumpspecs 2>&1 | grep 'single_module' >/dev/null ; then
+ lt_int_apple_cc_single_mod=yes
+ fi
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ archive_cmds='$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ else
+ archive_cmds='$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ fi
+ module_cmds='$CC ${wl}-bind_at_load $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags'
+      # Don't fix this by using the ld -exported_symbols_list flag; it doesn't exist in older versions of Darwin ld
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ archive_expsym_cmds='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ else
+ archive_expsym_cmds='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ fi
+ module_expsym_cmds='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ hardcode_direct=no
+ hardcode_automatic=yes
+ hardcode_shlibpath_var=unsupported
+ whole_archive_flag_spec='-all_load $convenience'
+ link_all_deplibs=yes
+ else
+ ld_shlibs=no
+ fi
+ ;;
+
+ dgux*)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_shlibpath_var=no
+ ;;
+
+ freebsd1*)
+ ld_shlibs=no
+ ;;
+
+ # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor
+ # support. Future versions do this automatically, but an explicit c++rt0.o
+ # does not break anything, and helps significantly (at the cost of a little
+ # extra space).
+ freebsd2.2*)
+ archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o'
+ hardcode_libdir_flag_spec='-R$libdir'
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ # Unfortunately, older versions of FreeBSD 2 do not have this feature.
+ freebsd2*)
+ archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct=yes
+ hardcode_minus_L=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+  # FreeBSD 3 and greater use gcc -shared to build shared libraries.
+ freebsd* | kfreebsd*-gnu)
+ archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec='-R$libdir'
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ hpux9*)
+ if test "$GCC" = yes; then
+ archive_cmds='$rm $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ else
+ archive_cmds='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ fi
+ hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir'
+ hardcode_libdir_separator=:
+ hardcode_direct=yes
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L=yes
+ export_dynamic_flag_spec='${wl}-E'
+ ;;
+
+ hpux10* | hpux11*)
+ if test "$GCC" = yes -a "$with_gnu_ld" = no; then
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ *)
+ archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ esac
+ else
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ archive_cmds='$LD -b +h $soname -o $lib $libobjs $deplibs $linker_flags'
+ ;;
+ *)
+ archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+ ;;
+ esac
+ fi
+ if test "$with_gnu_ld" = no; then
+ case "$host_cpu" in
+ hppa*64*)
+ hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir'
+ hardcode_libdir_flag_spec_ld='+b $libdir'
+ hardcode_libdir_separator=:
+ hardcode_direct=no
+ hardcode_shlibpath_var=no
+ ;;
+ ia64*)
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_direct=no
+ hardcode_shlibpath_var=no
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L=yes
+ ;;
+ *)
+ hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir'
+ hardcode_libdir_separator=:
+ hardcode_direct=yes
+ export_dynamic_flag_spec='${wl}-E'
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L=yes
+ ;;
+ esac
+ fi
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ if test "$GCC" = yes; then
+ archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ else
+ archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ hardcode_libdir_flag_spec_ld='-rpath $libdir'
+ fi
+ hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator=:
+ link_all_deplibs=yes
+ ;;
+
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out
+ else
+ archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF
+ fi
+ hardcode_libdir_flag_spec='-R$libdir'
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ newsos6)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct=yes
+ hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator=:
+ hardcode_shlibpath_var=no
+ ;;
+
+ openbsd*)
+ hardcode_direct=yes
+ hardcode_shlibpath_var=no
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec='${wl}-rpath,$libdir'
+ export_dynamic_flag_spec='${wl}-E'
+ else
+ case $host_os in
+ openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*)
+ archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec='-R$libdir'
+ ;;
+ *)
+ archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec='${wl}-rpath,$libdir'
+ ;;
+ esac
+ fi
+ ;;
+
+ os2*)
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_minus_L=yes
+ allow_undefined_flag=unsupported
+ archive_cmds='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def'
+ old_archive_From_new_cmds='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def'
+ ;;
+
+ osf3*)
+ if test "$GCC" = yes; then
+ allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+ archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ else
+ allow_undefined_flag=' -expect_unresolved \*'
+ archive_cmds='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ fi
+ hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator=:
+ ;;
+
+ osf4* | osf5*) # as osf3* with the addition of -msym flag
+ if test "$GCC" = yes; then
+ allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+ archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+ else
+ allow_undefined_flag=' -expect_unresolved \*'
+ archive_cmds='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ archive_expsym_cmds='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~
+ $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib~$rm $lib.exp'
+
+	# Both the C and C++ compilers support -rpath directly
+ hardcode_libdir_flag_spec='-rpath $libdir'
+ fi
+ hardcode_libdir_separator=:
+ ;;
+
+ sco3.2v5*)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var=no
+ export_dynamic_flag_spec='${wl}-Bexport'
+ runpath_var=LD_RUN_PATH
+ hardcode_runpath_var=yes
+ ;;
+
+ solaris*)
+ no_undefined_flag=' -z text'
+ if test "$GCC" = yes; then
+ archive_cmds='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -shared ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$rm $lib.exp'
+ else
+ archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ archive_expsym_cmds='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp'
+ fi
+ hardcode_libdir_flag_spec='-R$libdir'
+ hardcode_shlibpath_var=no
+ case $host_os in
+ solaris2.[0-5] | solaris2.[0-5].*) ;;
+ *) # Supported since Solaris 2.6 (maybe 2.5.1?)
+ whole_archive_flag_spec='-z allextract$convenience -z defaultextract' ;;
+ esac
+ link_all_deplibs=yes
+ ;;
+
+ sunos4*)
+ if test "x$host_vendor" = xsequent; then
+ # Use $CC to link under sequent, because it throws in some extra .o
+ # files that make .init and .fini sections work.
+ archive_cmds='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags'
+ fi
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_direct=yes
+ hardcode_minus_L=yes
+ hardcode_shlibpath_var=no
+ ;;
+
+ sysv4)
+ case $host_vendor in
+ sni)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct=yes # is this really true???
+ ;;
+ siemens)
+	## LD is ld; it makes a PLAMLIB
+ ## CC just makes a GrossModule.
+ archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags'
+ reload_cmds='$CC -r -o $output$reload_objs'
+ hardcode_direct=no
+ ;;
+ motorola)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct=no #Motorola manual says yes, but my tests say they lie
+ ;;
+ esac
+ runpath_var='LD_RUN_PATH'
+ hardcode_shlibpath_var=no
+ ;;
+
+ sysv4.3*)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var=no
+ export_dynamic_flag_spec='-Bexport'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var=no
+ runpath_var=LD_RUN_PATH
+ hardcode_runpath_var=yes
+ ld_shlibs=yes
+ fi
+ ;;
+
+ sysv4.2uw2*)
+ archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct=yes
+ hardcode_minus_L=no
+ hardcode_shlibpath_var=no
+ hardcode_runpath_var=yes
+ runpath_var=LD_RUN_PATH
+ ;;
+
+ sysv5OpenUNIX8* | sysv5UnixWare7* | sysv5uw[78]* | unixware7*)
+ no_undefined_flag='${wl}-z ${wl}text'
+ if test "$GCC" = yes; then
+ archive_cmds='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds='$CC -G ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ fi
+ runpath_var='LD_RUN_PATH'
+ hardcode_shlibpath_var=no
+ ;;
+
+ sysv5*)
+ no_undefined_flag=' -z text'
+ # $CC -shared without GNU ld will not create a library from C++
+ # object files and a static libstdc++, better avoid it by now
+ archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ archive_expsym_cmds='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp'
+ hardcode_libdir_flag_spec=
+ hardcode_shlibpath_var=no
+ runpath_var='LD_RUN_PATH'
+ ;;
+
+ uts4*)
+ archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec='-L$libdir'
+ hardcode_shlibpath_var=no
+ ;;
+
+ *)
+ ld_shlibs=no
+ ;;
+ esac
+ fi
+
+echo "$as_me:$LINENO: result: $ld_shlibs" >&5
+echo "${ECHO_T}$ld_shlibs" >&6
+test "$ld_shlibs" = no && can_build_shared=no
+
+variables_saved_for_relink="PATH $shlibpath_var $runpath_var"
+if test "$GCC" = yes; then
+ variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH"
+fi
+
+#
+# Do we need to explicitly link libc?
+#
+case "x$archive_cmds_need_lc" in
+x|xyes)
+ # Assume -lc should be added
+ archive_cmds_need_lc=yes
+
+ if test "$enable_shared" = yes && test "$GCC" = yes; then
+ case $archive_cmds in
+ *'~'*)
+ # FIXME: we may have to deal with multi-command sequences.
+ ;;
+ '$CC '*)
+ # Test whether the compiler implicitly links with -lc since on some
+ # systems, -lgcc has to come before -lc. If gcc already passes -lc
+ # to ld, don't add -lc before -lgcc.
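+      # The check compiles a trivial object, runs $archive_cmds with -v, and
+      # greps the verbose link line for " -lc "; if the compiler already passes
+      # -lc itself, archive_cmds_need_lc is set back to no.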
+ echo "$as_me:$LINENO: checking whether -lc should be explicitly linked in" >&5
+echo $ECHO_N "checking whether -lc should be explicitly linked in... $ECHO_C" >&6
+ $rm conftest*
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } 2>conftest.err; then
+ soname=conftest
+ lib=conftest
+ libobjs=conftest.$ac_objext
+ deplibs=
+ wl=$lt_prog_compiler_wl
+ compiler_flags=-v
+ linker_flags=-v
+ verstring=
+ output_objdir=.
+ libname=conftest
+ lt_save_allow_undefined_flag=$allow_undefined_flag
+ allow_undefined_flag=
+ if { (eval echo "$as_me:$LINENO: \"$archive_cmds 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1\"") >&5
+ (eval $archive_cmds 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+ then
+ archive_cmds_need_lc=no
+ else
+ archive_cmds_need_lc=yes
+ fi
+ allow_undefined_flag=$lt_save_allow_undefined_flag
+ else
+ cat conftest.err 1>&5
+ fi
+ $rm conftest*
+ echo "$as_me:$LINENO: result: $archive_cmds_need_lc" >&5
+echo "${ECHO_T}$archive_cmds_need_lc" >&6
+ ;;
+ esac
+ fi
+ ;;
+esac
+
+echo "$as_me:$LINENO: checking dynamic linker characteristics" >&5
+echo $ECHO_N "checking dynamic linker characteristics... $ECHO_C" >&6
+library_names_spec=
+libname_spec='lib$name'
+soname_spec=
+shrext=".so"
+postinstall_cmds=
+postuninstall_cmds=
+finish_cmds=
+finish_eval=
+shlibpath_var=
+shlibpath_overrides_runpath=unknown
+version_type=none
+dynamic_linker="$host_os ld.so"
+sys_lib_dlsearch_path_spec="/lib /usr/lib"
+if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | grep ';' >/dev/null ; then
+ # if the path contains ";" then we assume it to be the separator
+ # otherwise default to the standard path separator (i.e. ":") - it is
+    # assumed that no part of a normal pathname contains ";", but that should
+    # be okay in the real world where ";" in dirpaths is itself problematic.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+else
+ sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
+fi
+need_lib_prefix=unknown
+hardcode_into_libs=no
+
+# when you set need_version to no, make sure it does not cause -set_version
+# flags to be left without arguments
+need_version=unknown
+
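+# For each host, the case statement below fills in library_names_spec (the file
+# names created for a shared library), soname_spec (the name recorded inside the
+# library), shlibpath_var (the run-time search path variable), and related settings.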
+case $host_os in
+aix3*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a'
+ shlibpath_var=LIBPATH
+
+ # AIX 3 has no versioning support, so we append a major version to the name.
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+
+aix4* | aix5*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ hardcode_into_libs=yes
+ if test "$host_cpu" = ia64; then
+ # AIX 5 supports IA64
+ library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ else
+ # With GCC up to 2.95.x, collect2 would create an import file
+    # for dependent libraries.  The import file would start with
+ # the line `#! .'. This would cause the generated library to
+ # depend on `.', always an invalid library. This was fixed in
+ # development snapshots of GCC prior to 3.0.
+ case $host_os in
+ aix4 | aix4.[01] | aix4.[01].*)
+ if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
+ echo ' yes '
+ echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then
+ :
+ else
+ can_build_shared=no
+ fi
+ ;;
+ esac
+    # AIX (on Power*) has no versioning support, so we currently cannot hardcode the
+    # correct soname into the executable.  Versioning support could probably be added
+    # to collect2, so additional links could be useful in the future.
+ if test "$aix_use_runtimelinking" = yes; then
+ # If using run time linking (on AIX 4.2 or later) use lib<name>.so
+ # instead of lib<name>.a to let people know that these are not
+ # typical AIX shared libraries.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ else
+ # We preserve .a as extension for shared libraries through AIX4.2
+ # and later when we are not doing run time linking.
+ library_names_spec='${libname}${release}.a $libname.a'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ fi
+ shlibpath_var=LIBPATH
+ fi
+ ;;
+
+amigaos*)
+ library_names_spec='$libname.ixlibrary $libname.a'
+ # Create ${libname}_ixlibrary.a entries in /sys/libs.
+ finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
+ ;;
+
+beos*)
+ library_names_spec='${libname}${shared_ext}'
+ dynamic_linker="$host_os ld.so"
+ shlibpath_var=LIBRARY_PATH
+ ;;
+
+bsdi4*)
+ version_type=linux
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
+ sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
+ # the default ld.so.conf also contains /usr/contrib/lib and
+ # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
+ # libtool to hard-code these into programs
+ ;;
+
+cygwin* | mingw* | pw32*)
+ version_type=windows
+ shrext=".dll"
+ need_version=no
+ need_lib_prefix=no
+
+ case $GCC,$host_os in
+ yes,cygwin* | yes,mingw* | yes,pw32*)
+ library_names_spec='$libname.dll.a'
+ # DLL is installed to $(libdir)/../bin by postinstall_cmds
+ postinstall_cmds='base_file=`basename \${file}`~
+ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~
+ dldir=$destdir/`dirname \$dlpath`~
+ test -d \$dldir || mkdir -p \$dldir~
+ $install_prog $dir/$dlname \$dldir/$dlname'
+ postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
+ dlpath=$dir/\$dldll~
+ $rm \$dlpath'
+ shlibpath_overrides_runpath=yes
+
+ case $host_os in
+ cygwin*)
+ # Cygwin DLLs use 'cyg' prefix rather than 'lib'
+ soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib"
+ ;;
+ mingw*)
+ # MinGW DLLs use traditional 'lib' prefix
+ soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | grep ';[c-zC-Z]:/' >/dev/null; then
+ # It is most probably a Windows format PATH printed by
+ # mingw gcc, but we are running on Cygwin. Gcc prints its search
+ # path with ; separators, and with drive letters. We can handle the
+ # drive letters (cygwin fileutils understands them), so leave them,
+ # especially as we might pass files found there to a mingw objdump,
+ # which wouldn't understand a cygwinified path. Ahh.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+ ;;
+ pw32*)
+ # pw32 DLLs use 'pw' prefix rather than 'lib'
+ library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/./-/g'`${versuffix}${shared_ext}'
+ ;;
+ esac
+ ;;
+
+ *)
+ library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
+ ;;
+ esac
+ dynamic_linker='Win32 ld.exe'
+ # FIXME: first we should search . and the directory the executable is in
+ shlibpath_var=PATH
+ ;;
+
+darwin* | rhapsody*)
+ dynamic_linker="$host_os dyld"
+ version_type=darwin
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext'
+ soname_spec='${libname}${release}${major}$shared_ext'
+ shlibpath_overrides_runpath=yes
+ shlibpath_var=DYLD_LIBRARY_PATH
+ shrext='$(test .$module = .yes && echo .so || echo .dylib)'
+  # Apple's gcc prints the 'gcc -print-search-dirs' output in a different format, so handle it specially.
+ if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | tr "\n" "$PATH_SEPARATOR" | sed -e 's/libraries:/@libraries:/' | tr "@" "\n" | grep "^libraries:" | sed -e "s/^libraries://" -e "s,=/,/,g" -e "s,$PATH_SEPARATOR, ,g" -e "s,.*,& /lib /usr/lib /usr/local/lib,g"`
+ else
+ sys_lib_search_path_spec='/lib /usr/lib /usr/local/lib'
+ fi
+ sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
+ ;;
+
+dgux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+freebsd1*)
+ dynamic_linker=no
+ ;;
+
+kfreebsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+freebsd*)
+ objformat=`test -x /usr/bin/objformat && /usr/bin/objformat || echo aout`
+ version_type=freebsd-$objformat
+ case $version_type in
+ freebsd-elf*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}'
+ need_version=no
+ need_lib_prefix=no
+ ;;
+ freebsd-*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix'
+ need_version=yes
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_os in
+ freebsd2*)
+ shlibpath_overrides_runpath=yes
+ ;;
+ freebsd3.01* | freebsdelf3.01*)
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+ *) # from 3.2 on
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ ;;
+ esac
+ ;;
+
+gnu*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ hardcode_into_libs=yes
+ ;;
+
+hpux9* | hpux10* | hpux11*)
+ # Give a soname corresponding to the major version so that dld.sl refuses to
+ # link against other versions.
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ case "$host_cpu" in
+ ia64*)
+ shrext='.so'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.so"
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ if test "X$HPUX_IA64_MODE" = X32; then
+ sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib"
+ else
+ sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64"
+ fi
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ hppa*64*)
+ shrext='.sl'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64"
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ *)
+ shrext='.sl'
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=SHLIB_PATH
+ shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+ esac
+ # HP-UX runs *really* slowly unless shared libraries are mode 555.
+ postinstall_cmds='chmod 555 $lib'
+ ;;
+
+irix5* | irix6* | nonstopux*)
+ case $host_os in
+ nonstopux*) version_type=nonstopux ;;
+ *)
+ if test "$lt_cv_prog_gnu_ld" = yes; then
+ version_type=linux
+ else
+ version_type=irix
+ fi ;;
+ esac
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}'
+ case $host_os in
+ irix5* | nonstopux*)
+ libsuff= shlibsuff=
+ ;;
+ *)
+ case $LD in # libtool.m4 will add one of these switches to LD
+ *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
+ libsuff= shlibsuff= libmagic=32-bit;;
+ *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
+ libsuff=32 shlibsuff=N32 libmagic=N32;;
+ *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
+ libsuff=64 shlibsuff=64 libmagic=64-bit;;
+ *) libsuff= shlibsuff= libmagic=never-match;;
+ esac
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}"
+ sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}"
+ hardcode_into_libs=yes
+ ;;
+
+# No shared lib support for Linux oldld, aout, or coff.
+linux*oldld* | linux*aout* | linux*coff*)
+ dynamic_linker=no
+ ;;
+
+# This must be Linux ELF.
+linux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ # This implies no fast_install, which is unacceptable.
+ # Some rework will be needed to allow for fast_install
+ # before this can be enabled.
+ hardcode_into_libs=yes
+
+ # Append ld.so.conf contents to the search path
+ if test -f /etc/ld.so.conf; then
+ ld_extra=`$SED -e 's/:,\t/ /g;s/=^=*$//;s/=^= * / /g' /etc/ld.so.conf`
+ sys_lib_dlsearch_path_spec="/lib /usr/lib $ld_extra"
+ fi
+
+ # We used to test for /lib/ld.so.1 and disable shared libraries on
+ # powerpc, because MkLinux only supported shared libraries with the
+ # GNU dynamic linker. Since this was broken with cross compilers,
+ # most powerpc-linux boxes support dynamic linking these days and
+ # people can always --disable-shared, the test was removed, and we
+ # assume the GNU/Linux dynamic linker is in use.
+ dynamic_linker='GNU/Linux ld.so'
+ ;;
+
+knetbsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+netbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ dynamic_linker='NetBSD (a.out) ld.so'
+ else
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ dynamic_linker='NetBSD ld.elf_so'
+ fi
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+
+newsos6)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+nto-qnx*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+openbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=yes
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ case $host_os in
+ openbsd2.[89] | openbsd2.[89].*)
+ shlibpath_overrides_runpath=no
+ ;;
+ *)
+ shlibpath_overrides_runpath=yes
+ ;;
+ esac
+ else
+ shlibpath_overrides_runpath=yes
+ fi
+ ;;
+
+os2*)
+ libname_spec='$name'
+ shrext=".dll"
+ need_lib_prefix=no
+ library_names_spec='$libname${shared_ext} $libname.a'
+ dynamic_linker='OS/2 ld.exe'
+ shlibpath_var=LIBPATH
+ ;;
+
+osf3* | osf4* | osf5*)
+ version_type=osf
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib"
+ sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec"
+ ;;
+
+sco3.2v5*)
+ version_type=osf
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+solaris*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ # ldd complains unless libraries are executable
+ postinstall_cmds='chmod +x $lib'
+ ;;
+
+sunos4*)
+ version_type=sunos
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ if test "$with_gnu_ld" = yes; then
+ need_lib_prefix=no
+ fi
+ need_version=yes
+ ;;
+
+sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_vendor in
+ sni)
+ shlibpath_overrides_runpath=no
+ need_lib_prefix=no
+ export_dynamic_flag_spec='${wl}-Blargedynsym'
+ runpath_var=LD_RUN_PATH
+ ;;
+ siemens)
+ need_lib_prefix=no
+ ;;
+ motorola)
+ need_lib_prefix=no
+ need_version=no
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib'
+ ;;
+ esac
+ ;;
+
+sysv4*MP*)
+ if test -d /usr/nec ;then
+ version_type=linux
+ library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}'
+ soname_spec='$libname${shared_ext}.$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ fi
+ ;;
+
+uts4*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+*)
+ dynamic_linker=no
+ ;;
+esac
+echo "$as_me:$LINENO: result: $dynamic_linker" >&5
+echo "${ECHO_T}$dynamic_linker" >&6
+test "$dynamic_linker" = no && can_build_shared=no
+
+echo "$as_me:$LINENO: checking how to hardcode library paths into programs" >&5
+echo $ECHO_N "checking how to hardcode library paths into programs... $ECHO_C" >&6
+hardcode_action=
+if test -n "$hardcode_libdir_flag_spec" || \
+   test -n "$runpath_var" || \
+   test "X$hardcode_automatic" = "Xyes" ; then
+
+  # We can hardcode non-existent directories.
+ if test "$hardcode_direct" != no &&
+ # If the only mechanism to avoid hardcoding is shlibpath_var, we
+ # have to relink, otherwise we might link with an installed library
+ # when we should be linking with a yet-to-be-installed one
+ ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, )" != no &&
+ test "$hardcode_minus_L" != no; then
+ # Linking always hardcodes the temporary library directory.
+ hardcode_action=relink
+ else
+    # We can link without hardcoding, and we can hardcode non-existent dirs.
+ hardcode_action=immediate
+ fi
+else
+ # We cannot hardcode anything, or else we can only hardcode existing
+ # directories.
+ hardcode_action=unsupported
+fi
+echo "$as_me:$LINENO: result: $hardcode_action" >&5
+echo "${ECHO_T}$hardcode_action" >&6
+
+if test "$hardcode_action" = relink; then
+ # Fast installation is not supported
+ enable_fast_install=no
+elif test "$shlibpath_overrides_runpath" = yes ||
+ test "$enable_shared" = no; then
+ # Fast installation is not necessary
+ enable_fast_install=needless
+fi
+
+striplib=
+old_striplib=
+echo "$as_me:$LINENO: checking whether stripping libraries is possible" >&5
+echo $ECHO_N "checking whether stripping libraries is possible... $ECHO_C" >&6
+if test -n "$STRIP" && $STRIP -V 2>&1 | grep "GNU strip" >/dev/null; then
+ test -z "$old_striplib" && old_striplib="$STRIP --strip-debug"
+ test -z "$striplib" && striplib="$STRIP --strip-unneeded"
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+else
+# FIXME - insert some real tests, host_os isn't really good enough
+ case $host_os in
+ darwin*)
+ if test -n "$STRIP" ; then
+ striplib="$STRIP -x"
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+ else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+ ;;
+ *)
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+ ;;
+ esac
+fi
+
+if test "x$enable_dlopen" != xyes; then
+ enable_dlopen=unknown
+ enable_dlopen_self=unknown
+ enable_dlopen_self_static=unknown
+else
+ lt_cv_dlopen=no
+ lt_cv_dlopen_libs=
+
+ case $host_os in
+ beos*)
+ lt_cv_dlopen="load_add_on"
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+ ;;
+
+ mingw* | pw32*)
+ lt_cv_dlopen="LoadLibrary"
+ lt_cv_dlopen_libs=
+ ;;
+
+ cygwin*)
+ lt_cv_dlopen="dlopen"
+ lt_cv_dlopen_libs=
+ ;;
+
+ darwin*)
+ # if libdl is installed we need to link against it
+ echo "$as_me:$LINENO: checking for dlopen in -ldl" >&5
+echo $ECHO_N "checking for dlopen in -ldl... $ECHO_C" >&6
+if test "${ac_cv_lib_dl_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldl $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main ()
+{
+dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dl_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dl_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dl_dlopen" >&5
+echo "${ECHO_T}$ac_cv_lib_dl_dlopen" >&6
+if test $ac_cv_lib_dl_dlopen = yes; then
+ lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"
+else
+
+ lt_cv_dlopen="dyld"
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+
+fi
+
+ ;;
+
+ *)
+ echo "$as_me:$LINENO: checking for shl_load" >&5
+echo $ECHO_N "checking for shl_load... $ECHO_C" >&6
+if test "${ac_cv_func_shl_load+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+/* Define shl_load to an innocuous variant, in case <limits.h> declares shl_load.
+ For example, HP-UX 11i <limits.h> declares gettimeofday. */
+#define shl_load innocuous_shl_load
+
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char shl_load (); below.
+ Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ <limits.h> exists even on freestanding compilers. */
+
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+
+#undef shl_load
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char shl_load ();
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_shl_load) || defined (__stub___shl_load)
+choke me
+#else
+char (*f) () = shl_load;
+#endif
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+return f != shl_load;
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_func_shl_load=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_func_shl_load=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_func_shl_load" >&5
+echo "${ECHO_T}$ac_cv_func_shl_load" >&6
+if test $ac_cv_func_shl_load = yes; then
+ lt_cv_dlopen="shl_load"
+else
+ echo "$as_me:$LINENO: checking for shl_load in -ldld" >&5
+echo $ECHO_N "checking for shl_load in -ldld... $ECHO_C" >&6
+if test "${ac_cv_lib_dld_shl_load+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldld $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char shl_load ();
+int
+main ()
+{
+shl_load ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dld_shl_load=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dld_shl_load=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dld_shl_load" >&5
+echo "${ECHO_T}$ac_cv_lib_dld_shl_load" >&6
+if test $ac_cv_lib_dld_shl_load = yes; then
+ lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-dld"
+else
+ echo "$as_me:$LINENO: checking for dlopen" >&5
+echo $ECHO_N "checking for dlopen... $ECHO_C" >&6
+if test "${ac_cv_func_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+/* Define dlopen to an innocuous variant, in case <limits.h> declares dlopen.
+ For example, HP-UX 11i <limits.h> declares gettimeofday. */
+#define dlopen innocuous_dlopen
+
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char dlopen (); below.
+ Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ <limits.h> exists even on freestanding compilers. */
+
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+
+#undef dlopen
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_dlopen) || defined (__stub___dlopen)
+choke me
+#else
+char (*f) () = dlopen;
+#endif
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+return f != dlopen;
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_func_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_func_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_func_dlopen" >&5
+echo "${ECHO_T}$ac_cv_func_dlopen" >&6
+if test $ac_cv_func_dlopen = yes; then
+ lt_cv_dlopen="dlopen"
+else
+ echo "$as_me:$LINENO: checking for dlopen in -ldl" >&5
+echo $ECHO_N "checking for dlopen in -ldl... $ECHO_C" >&6
+if test "${ac_cv_lib_dl_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldl $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main ()
+{
+dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dl_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dl_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dl_dlopen" >&5
+echo "${ECHO_T}$ac_cv_lib_dl_dlopen" >&6
+if test $ac_cv_lib_dl_dlopen = yes; then
+ lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"
+else
+ echo "$as_me:$LINENO: checking for dlopen in -lsvld" >&5
+echo $ECHO_N "checking for dlopen in -lsvld... $ECHO_C" >&6
+if test "${ac_cv_lib_svld_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-lsvld $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main ()
+{
+dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_svld_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_svld_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_svld_dlopen" >&5
+echo "${ECHO_T}$ac_cv_lib_svld_dlopen" >&6
+if test $ac_cv_lib_svld_dlopen = yes; then
+ lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld"
+else
+ echo "$as_me:$LINENO: checking for dld_link in -ldld" >&5
+echo $ECHO_N "checking for dld_link in -ldld... $ECHO_C" >&6
+if test "${ac_cv_lib_dld_dld_link+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldld $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dld_link ();
+int
+main ()
+{
+dld_link ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dld_dld_link=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dld_dld_link=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dld_dld_link" >&5
+echo "${ECHO_T}$ac_cv_lib_dld_dld_link" >&6
+if test $ac_cv_lib_dld_dld_link = yes; then
+ lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-dld"
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+ ;;
+ esac
+
+ if test "x$lt_cv_dlopen" != xno; then
+ enable_dlopen=yes
+ else
+ enable_dlopen=no
+ fi
+
+ case $lt_cv_dlopen in
+ dlopen)
+ save_CPPFLAGS="$CPPFLAGS"
+ test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
+
+ save_LDFLAGS="$LDFLAGS"
+ eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"
+
+ save_LIBS="$LIBS"
+ LIBS="$lt_cv_dlopen_libs $LIBS"
+
+ echo "$as_me:$LINENO: checking whether a program can dlopen itself" >&5
+echo $ECHO_N "checking whether a program can dlopen itself... $ECHO_C" >&6
+if test "${lt_cv_dlopen_self+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test "$cross_compiling" = yes; then :
+ lt_cv_dlopen_self=cross
+else
+ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+ lt_status=$lt_dlunknown
+ cat > conftest.$ac_ext <<EOF
+#line 8038 "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+# define LT_DLGLOBAL DL_GLOBAL
+# else
+# define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
+   find out it does not work on some platforms. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+# define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+# ifdef DL_LAZY
+# define LT_DLLAZY_OR_NOW DL_LAZY
+# else
+# ifdef RTLD_NOW
+# define LT_DLLAZY_OR_NOW RTLD_NOW
+# else
+# ifdef DL_NOW
+# define LT_DLLAZY_OR_NOW DL_NOW
+# else
+# define LT_DLLAZY_OR_NOW 0
+# endif
+# endif
+# endif
+# endif
+#endif
+
+#ifdef __cplusplus
+extern "C" void exit (int);
+#endif
+
+void fnord() { int i=42;}
+int main ()
+{
+ void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+ int status = $lt_dlunknown;
+
+ if (self)
+ {
+ if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+ else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+ /* dlclose (self); */
+ }
+
+ exit (status);
+}
+EOF
+ if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && test -s conftest${ac_exeext} 2>/dev/null; then
+ (./conftest; exit; ) 2>/dev/null
+ lt_status=$?
+ case x$lt_status in
+ x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;;
+ x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;;
+ x$lt_unknown|x*) lt_cv_dlopen_self=no ;;
+ esac
+ else :
+ # compilation failed
+ lt_cv_dlopen_self=no
+ fi
+fi
+rm -fr conftest*
+
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_dlopen_self" >&5
+echo "${ECHO_T}$lt_cv_dlopen_self" >&6
+
+ if test "x$lt_cv_dlopen_self" = xyes; then
+ LDFLAGS="$LDFLAGS $link_static_flag"
+ echo "$as_me:$LINENO: checking whether a statically linked program can dlopen itself" >&5
+echo $ECHO_N "checking whether a statically linked program can dlopen itself... $ECHO_C" >&6
+if test "${lt_cv_dlopen_self_static+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test "$cross_compiling" = yes; then :
+ lt_cv_dlopen_self_static=cross
+else
+ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+ lt_status=$lt_dlunknown
+ cat > conftest.$ac_ext <<EOF
+#line 8136 "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+# define LT_DLGLOBAL DL_GLOBAL
+# else
+# define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
+   find out it does not work on some platforms. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+# define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+# ifdef DL_LAZY
+# define LT_DLLAZY_OR_NOW DL_LAZY
+# else
+# ifdef RTLD_NOW
+# define LT_DLLAZY_OR_NOW RTLD_NOW
+# else
+# ifdef DL_NOW
+# define LT_DLLAZY_OR_NOW DL_NOW
+# else
+# define LT_DLLAZY_OR_NOW 0
+# endif
+# endif
+# endif
+# endif
+#endif
+
+#ifdef __cplusplus
+extern "C" void exit (int);
+#endif
+
+void fnord() { int i=42;}
+int main ()
+{
+ void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+ int status = $lt_dlunknown;
+
+ if (self)
+ {
+ if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+ else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+ /* dlclose (self); */
+ }
+
+ exit (status);
+}
+EOF
+ if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && test -s conftest${ac_exeext} 2>/dev/null; then
+ (./conftest; exit; ) 2>/dev/null
+ lt_status=$?
+ case x$lt_status in
+ x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;;
+ x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;;
+ x$lt_unknown|x*) lt_cv_dlopen_self_static=no ;;
+ esac
+ else :
+ # compilation failed
+ lt_cv_dlopen_self_static=no
+ fi
+fi
+rm -fr conftest*
+
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_dlopen_self_static" >&5
+echo "${ECHO_T}$lt_cv_dlopen_self_static" >&6
+ fi
+
+ CPPFLAGS="$save_CPPFLAGS"
+ LDFLAGS="$save_LDFLAGS"
+ LIBS="$save_LIBS"
+ ;;
+ esac
+
+ case $lt_cv_dlopen_self in
+ yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;;
+ *) enable_dlopen_self=unknown ;;
+ esac
+
+ case $lt_cv_dlopen_self_static in
+ yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;;
+ *) enable_dlopen_self_static=unknown ;;
+ esac
+fi
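+
The two probes above work the same way: configure compiles a small program that dlopens the running executable and then looks up one of its own symbols, first as "fnord" and then as "_fnord"; a child exit status of 1 or 2 tells configure whether C symbols carry a leading underscore on this platform, and anything else leaves the answer unknown. A rough way to repeat the check by hand is sketched below; gcc, -rdynamic and -ldl are illustrative assumptions, not part of the committed script.

cat > self_dlopen.c <<'SKETCH'
#include <dlfcn.h>
#include <stdio.h>

void fnord (void) {}

int main (void)
{
  /* Open the running executable itself, as the configure probe does. */
  void *self = dlopen (0, RTLD_GLOBAL | RTLD_LAZY);
  if (self && dlsym (self, "fnord"))
    puts ("self-dlopen works; no underscore prefix needed");
  else if (self && dlsym (self, "_fnord"))
    puts ("self-dlopen works; underscore prefix needed");
  else
    puts ("could not resolve the symbol through dlopen(0)");
  return 0;
}
SKETCH
# -rdynamic plays the role of $export_dynamic_flag_spec in the probe above.
gcc -rdynamic -o self_dlopen self_dlopen.c -ldl && ./self_dlopen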
+
+
+# Report which library types will actually be built
+echo "$as_me:$LINENO: checking if libtool supports shared libraries" >&5
+echo $ECHO_N "checking if libtool supports shared libraries... $ECHO_C" >&6
+echo "$as_me:$LINENO: result: $can_build_shared" >&5
+echo "${ECHO_T}$can_build_shared" >&6
+
+echo "$as_me:$LINENO: checking whether to build shared libraries" >&5
+echo $ECHO_N "checking whether to build shared libraries... $ECHO_C" >&6
+test "$can_build_shared" = "no" && enable_shared=no
+
+# On AIX, shared libraries and static libraries use the same namespace, and
+# are all built from PIC.
+case "$host_os" in
+aix3*)
+ test "$enable_shared" = yes && enable_static=no
+ if test -n "$RANLIB"; then
+ archive_cmds="$archive_cmds~\$RANLIB \$lib"
+ postinstall_cmds='$RANLIB $lib'
+ fi
+ ;;
+
+aix4*)
+ if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then
+ test "$enable_shared" = yes && enable_static=no
+ fi
+ ;;
+ darwin* | rhapsody*)
+ if test "$GCC" = yes; then
+ archive_cmds_need_lc=no
+ case "$host_os" in
+ rhapsody* | darwin1.[012])
+ allow_undefined_flag='-undefined suppress'
+ ;;
+ *) # Darwin 1.3 on
+ if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then
+ allow_undefined_flag='-flat_namespace -undefined suppress'
+ else
+ case ${MACOSX_DEPLOYMENT_TARGET} in
+ 10.[012])
+ allow_undefined_flag='-flat_namespace -undefined suppress'
+ ;;
+ 10.*)
+ allow_undefined_flag='-undefined dynamic_lookup'
+ ;;
+ esac
+ fi
+ ;;
+ esac
+ output_verbose_link_cmd='echo'
+ archive_cmds='$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs$compiler_flags -install_name $rpath/$soname $verstring'
+ module_cmds='$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags'
+      # Don't fix this by using the ld -exported_symbols_list flag; it doesn't exist in older darwin ld's
+ archive_expsym_cmds='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib $allow_undefined_flag -o $lib $libobjs $deplibs$compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ module_expsym_cmds='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ hardcode_direct=no
+ hardcode_automatic=yes
+ hardcode_shlibpath_var=unsupported
+ whole_archive_flag_spec='-all_load $convenience'
+ link_all_deplibs=yes
+ else
+ ld_shlibs=no
+ fi
+ ;;
+esac
+echo "$as_me:$LINENO: result: $enable_shared" >&5
+echo "${ECHO_T}$enable_shared" >&6
+
+echo "$as_me:$LINENO: checking whether to build static libraries" >&5
+echo $ECHO_N "checking whether to build static libraries... $ECHO_C" >&6
+# Make sure either enable_shared or enable_static is yes.
+test "$enable_shared" = yes || enable_static=yes
+echo "$as_me:$LINENO: result: $enable_static" >&5
+echo "${ECHO_T}$enable_static" >&6
+
+# The else clause should only fire when bootstrapping the
+# libtool distribution; otherwise you forgot to ship ltmain.sh
+# with your package, and you will get complaints that there are
+# no rules to generate ltmain.sh.
+if test -f "$ltmain"; then
+ # See if we are running on zsh, and set the options which allow our commands through
+ # without removal of \ escapes.
+ if test -n "${ZSH_VERSION+set}" ; then
+ setopt NO_GLOB_SUBST
+ fi
+ # Now quote all the things that may contain metacharacters while being
+ # careful not to overquote the AC_SUBSTed values. We take copies of the
+ # variables and quote the copies for generation of the libtool script.
+ for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC NM \
+ SED SHELL STRIP \
+ libname_spec library_names_spec soname_spec extract_expsyms_cmds \
+ old_striplib striplib file_magic_cmd finish_cmds finish_eval \
+ deplibs_check_method reload_flag reload_cmds need_locks \
+ lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ sys_lib_search_path_spec sys_lib_dlsearch_path_spec \
+ old_postinstall_cmds old_postuninstall_cmds \
+ compiler \
+ CC \
+ LD \
+ lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
+ lt_prog_compiler_static \
+ lt_prog_compiler_no_builtin_flag \
+ export_dynamic_flag_spec \
+ thread_safe_flag_spec \
+ whole_archive_flag_spec \
+ enable_shared_with_static_runtimes \
+ old_archive_cmds \
+ old_archive_from_new_cmds \
+ predep_objects \
+ postdep_objects \
+ predeps \
+ postdeps \
+ compiler_lib_search_path \
+ archive_cmds \
+ archive_expsym_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ old_archive_from_expsyms_cmds \
+ allow_undefined_flag \
+ no_undefined_flag \
+ export_symbols_cmds \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+ hardcode_automatic \
+ module_cmds \
+ module_expsym_cmds \
+ lt_cv_prog_compiler_c_o \
+ exclude_expsyms \
+ include_expsyms; do
+
+ case $var in
+ old_archive_cmds | \
+ old_archive_from_new_cmds | \
+ archive_cmds | \
+ archive_expsym_cmds | \
+ module_cmds | \
+ module_expsym_cmds | \
+ old_archive_from_expsyms_cmds | \
+ export_symbols_cmds | \
+ extract_expsyms_cmds | reload_cmds | finish_cmds | \
+ postinstall_cmds | postuninstall_cmds | \
+ old_postinstall_cmds | old_postuninstall_cmds | \
+ sys_lib_search_path_spec | sys_lib_dlsearch_path_spec)
+ # Double-quote double-evaled strings.
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\""
+ ;;
+ *)
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\""
+ ;;
+ esac
+ done
+
+ case $lt_echo in
+ *'\$0 --fallback-echo"')
+ lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'`
+ ;;
+ esac
+
+cfgfile="${ofile}T"
+ trap "$rm \"$cfgfile\"; exit 1" 1 2 15
+ $rm -f "$cfgfile"
+ { echo "$as_me:$LINENO: creating $ofile" >&5
+echo "$as_me: creating $ofile" >&6;}
+
+ cat <<__EOF__ >> "$cfgfile"
+#! $SHELL
+
+# `$echo "$cfgfile" | sed 's%^.*/%%'` - Provide generalized library-building support services.
+# Generated automatically by $PROGRAM (GNU $PACKAGE $VERSION$TIMESTAMP)
+# NOTE: Changes made to this file will be lost: look at ltmain.sh.
+#
+# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001
+# Free Software Foundation, Inc.
+#
+# This file is part of GNU Libtool:
+# Originally by Gordon Matzigkeit <gord at gnu.ai.mit.edu>, 1996
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
+
+# A sed program that does not truncate output.
+SED=$lt_SED
+
+# Sed that helps us avoid accidentally triggering echo(1) options like -n.
+Xsed="$SED -e s/^X//"
+
+# The HP-UX ksh and POSIX shell print the target directory to stdout
+# if CDPATH is set.
+if test "X\${CDPATH+set}" = Xset; then CDPATH=:; export CDPATH; fi
+
+# The names of the tagged configurations supported by this script.
+available_tags=
+
+# ### BEGIN LIBTOOL CONFIG
+
+# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`:
+
+# Shell to use when invoking shell scripts.
+SHELL=$lt_SHELL
+
+# Whether or not to build shared libraries.
+build_libtool_libs=$enable_shared
+
+# Whether or not to build static libraries.
+build_old_libs=$enable_static
+
+# Whether or not to add -lc for building shared libraries.
+build_libtool_need_lc=$archive_cmds_need_lc
+
+# Whether or not to disallow shared libs when runtime libs are static
+allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes
+
+# Whether or not to optimize for fast installation.
+fast_install=$enable_fast_install
+
+# The host system.
+host_alias=$host_alias
+host=$host
+
+# An echo program that does not interpret backslashes.
+echo=$lt_echo
+
+# The archiver.
+AR=$lt_AR
+AR_FLAGS=$lt_AR_FLAGS
+
+# A C compiler.
+LTCC=$lt_LTCC
+
+# A language-specific compiler.
+CC=$lt_compiler
+
+# Is the compiler the GNU C compiler?
+with_gcc=$GCC
+
+# An ERE matcher.
+EGREP=$lt_EGREP
+
+# The linker used to build libraries.
+LD=$lt_LD
+
+# Whether we need hard or soft links.
+LN_S=$lt_LN_S
+
+# A BSD-compatible nm program.
+NM=$lt_NM
+
+# A symbol stripping program
+STRIP=$lt_STRIP
+
+# Used to examine libraries when file_magic_cmd begins "file"
+MAGIC_CMD=$MAGIC_CMD
+
+# Used on cygwin: DLL creation program.
+DLLTOOL="$DLLTOOL"
+
+# Used on cygwin: object dumper.
+OBJDUMP="$OBJDUMP"
+
+# Used on cygwin: assembler.
+AS="$AS"
+
+# The name of the directory that contains temporary libtool files.
+objdir=$objdir
+
+# How to create reloadable object files.
+reload_flag=$lt_reload_flag
+reload_cmds=$lt_reload_cmds
+
+# How to pass a linker flag through the compiler.
+wl=$lt_lt_prog_compiler_wl
+
+# Object file suffix (normally "o").
+objext="$ac_objext"
+
+# Old archive suffix (normally "a").
+libext="$libext"
+
+# Shared library suffix (normally ".so").
+shrext='$shrext'
+
+# Executable file suffix (normally "").
+exeext="$exeext"
+
+# Additional compiler flags for building library objects.
+pic_flag=$lt_lt_prog_compiler_pic
+pic_mode=$pic_mode
+
+# What is the maximum length of a command?
+max_cmd_len=$lt_cv_sys_max_cmd_len
+
+# Does compiler simultaneously support -c and -o options?
+compiler_c_o=$lt_lt_cv_prog_compiler_c_o
+
+# Must we lock files when doing compilation?
+need_locks=$lt_need_locks
+
+# Do we need the lib prefix for modules?
+need_lib_prefix=$need_lib_prefix
+
+# Do we need a version for libraries?
+need_version=$need_version
+
+# Whether dlopen is supported.
+dlopen_support=$enable_dlopen
+
+# Whether dlopen of programs is supported.
+dlopen_self=$enable_dlopen_self
+
+# Whether dlopen of statically linked programs is supported.
+dlopen_self_static=$enable_dlopen_self_static
+
+# Compiler flag to prevent dynamic linking.
+link_static_flag=$lt_lt_prog_compiler_static
+
+# Compiler flag to turn off builtin functions.
+no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+
+# Compiler flag to allow reflexive dlopens.
+export_dynamic_flag_spec=$lt_export_dynamic_flag_spec
+
+# Compiler flag to generate shared objects directly from archives.
+whole_archive_flag_spec=$lt_whole_archive_flag_spec
+
+# Compiler flag to generate thread-safe objects.
+thread_safe_flag_spec=$lt_thread_safe_flag_spec
+
+# Library versioning type.
+version_type=$version_type
+
+# Format of library name prefix.
+libname_spec=$lt_libname_spec
+
+# List of archive names. First name is the real one, the rest are links.
+# The last name is the one that the linker finds with -lNAME.
+library_names_spec=$lt_library_names_spec
+
+# The coded name of the library, if different from the real name.
+soname_spec=$lt_soname_spec
+
+# Commands used to build and install an old-style archive.
+RANLIB=$lt_RANLIB
+old_archive_cmds=$lt_old_archive_cmds
+old_postinstall_cmds=$lt_old_postinstall_cmds
+old_postuninstall_cmds=$lt_old_postuninstall_cmds
+
+# Create an old-style archive from a shared archive.
+old_archive_from_new_cmds=$lt_old_archive_from_new_cmds
+
+# Create a temporary old-style archive to link instead of a shared archive.
+old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds
+
+# Commands used to build and install a shared archive.
+archive_cmds=$lt_archive_cmds
+archive_expsym_cmds=$lt_archive_expsym_cmds
+postinstall_cmds=$lt_postinstall_cmds
+postuninstall_cmds=$lt_postuninstall_cmds
+
+# Commands used to build a loadable module (assumed same as above if empty)
+module_cmds=$lt_module_cmds
+module_expsym_cmds=$lt_module_expsym_cmds
+
+# Commands to strip libraries.
+old_striplib=$lt_old_striplib
+striplib=$lt_striplib
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predep_objects=$lt_predep_objects
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdep_objects=$lt_postdep_objects
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predeps=$lt_predeps
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdeps=$lt_postdeps
+
+# The library search path used internally by the compiler when linking
+# a shared library.
+compiler_lib_search_path=$lt_compiler_lib_search_path
+
+# Method to check whether dependent libraries are shared objects.
+deplibs_check_method=$lt_deplibs_check_method
+
+# Command to use when deplibs_check_method == file_magic.
+file_magic_cmd=$lt_file_magic_cmd
+
+# Flag that allows shared libraries with undefined symbols to be built.
+allow_undefined_flag=$lt_allow_undefined_flag
+
+# Flag that forces no undefined symbols.
+no_undefined_flag=$lt_no_undefined_flag
+
+# Commands used to finish a libtool library installation in a directory.
+finish_cmds=$lt_finish_cmds
+
+# Same as above, but a single script fragment to be evaled but not shown.
+finish_eval=$lt_finish_eval
+
+# Take the output of nm and produce a listing of raw symbols and C names.
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe
+
+# Transform the output of nm into a proper C declaration
+global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl
+
+# Transform the output of nm into a C name/address pair
+global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+
+# This is the shared library runtime path variable.
+runpath_var=$runpath_var
+
+# This is the shared library path variable.
+shlibpath_var=$shlibpath_var
+
+# Is shlibpath searched before the hard-coded library search path?
+shlibpath_overrides_runpath=$shlibpath_overrides_runpath
+
+# How to hardcode a shared library path into an executable.
+hardcode_action=$hardcode_action
+
+# Whether we should hardcode library paths into libraries.
+hardcode_into_libs=$hardcode_into_libs
+
+# Flag to hardcode \$libdir into a binary during linking.
+# This must work even if \$libdir does not exist.
+hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec
+
+# If ld is used when linking, flag to hardcode \$libdir into
+# a binary during linking. This must work even if \$libdir does
+# not exist.
+hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld
+
+# Whether we need a single -rpath flag with a separated argument.
+hardcode_libdir_separator=$lt_hardcode_libdir_separator
+
+# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the
+# resulting binary.
+hardcode_direct=$hardcode_direct
+
+# Set to yes if using the -LDIR flag during linking hardcodes DIR into the
+# resulting binary.
+hardcode_minus_L=$hardcode_minus_L
+
+# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into
+# the resulting binary.
+hardcode_shlibpath_var=$hardcode_shlibpath_var
+
+# Set to yes if building a shared library automatically hardcodes DIR into the library
+# and all subsequent libraries and executables linked against it.
+hardcode_automatic=$hardcode_automatic
+
+# Variables whose values should be saved in libtool wrapper scripts and
+# restored at relink time.
+variables_saved_for_relink="$variables_saved_for_relink"
+
+# Whether libtool must link a program against all its dependency libraries.
+link_all_deplibs=$link_all_deplibs
+
+# Compile-time system search path for libraries
+sys_lib_search_path_spec=$lt_sys_lib_search_path_spec
+
+# Run-time system search path for libraries
+sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec
+
+# Fix the shell variable \$srcfile for the compiler.
+fix_srcfile_path="$fix_srcfile_path"
+
+# Set to yes if exported symbols are required.
+always_export_symbols=$always_export_symbols
+
+# The commands to list exported symbols.
+export_symbols_cmds=$lt_export_symbols_cmds
+
+# The commands to extract the exported symbol list from a shared archive.
+extract_expsyms_cmds=$lt_extract_expsyms_cmds
+
+# Symbols that should not be listed in the preloaded symbols.
+exclude_expsyms=$lt_exclude_expsyms
+
+# Symbols that must always be exported.
+include_expsyms=$lt_include_expsyms
+
+# ### END LIBTOOL CONFIG
+
+__EOF__
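
Everything between the "### BEGIN LIBTOOL CONFIG" and "### END LIBTOOL CONFIG" markers above is written into the generated libtool script, which prints it back when invoked with --config; configure itself relies on that further down when it recovers LTCC from an existing libtool. A couple of illustrative queries, assuming the script has already been generated in the build directory:

./libtool --config | grep '^build_libtool_libs='      # were shared libraries enabled?
eval "`./libtool --config | grep '^dlopen_support='`"
echo "dlopen support: $dlopen_support"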
+
+
+ case $host_os in
+ aix3*)
+ cat <<\EOF >> "$cfgfile"
+
+# AIX sometimes has problems with the GCC collect2 program. For some
+# reason, if we set the COLLECT_NAMES environment variable, the problems
+# vanish in a puff of smoke.
+if test "X${COLLECT_NAMES+set}" != Xset; then
+ COLLECT_NAMES=
+ export COLLECT_NAMES
+fi
+EOF
+ ;;
+ esac
+
+ # We use sed instead of cat because bash on DJGPP gets confused if
+# it finds mixed CR/LF and LF-only lines. Since sed operates in
+ # text mode, it properly converts lines to CR/LF. This bash problem
+ # is reportedly fixed, but why not run on old versions too?
+ sed '$q' "$ltmain" >> "$cfgfile" || (rm -f "$cfgfile"; exit 1)
+
+ mv -f "$cfgfile" "$ofile" || \
+ (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+ chmod +x "$ofile"
+
+else
+ # If there is no Makefile yet, we rely on a make rule to execute
+ # `config.status --recheck' to rerun these tests and create the
+ # libtool script then.
+ ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'`
+ if test -f "$ltmain_in"; then
+ test -f Makefile && make "$ltmain"
+ fi
+fi
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+CC="$lt_save_CC"
+
+
+# Check whether --with-tags or --without-tags was given.
+if test "${with_tags+set}" = set; then
+ withval="$with_tags"
+ tagnames="$withval"
+fi;
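
The block above handles libtool's standard --with-tags option, which requests extra per-language configurations to be appended to the generated libtool script; the only tag this configure knows how to set up is CXX, handled just below. A typical invocation might look like the following, where the choice of g++ is only an example:

./configure --with-tags=CXX CXX=g++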
+
+if test -f "$ltmain" && test -n "$tagnames"; then
+ if test ! -f "${ofile}"; then
+ { echo "$as_me:$LINENO: WARNING: output file \`$ofile' does not exist" >&5
+echo "$as_me: WARNING: output file \`$ofile' does not exist" >&2;}
+ fi
+
+ if test -z "$LTCC"; then
+ eval "`$SHELL ${ofile} --config | grep '^LTCC='`"
+ if test -z "$LTCC"; then
+ { echo "$as_me:$LINENO: WARNING: output file \`$ofile' does not look like a libtool script" >&5
+echo "$as_me: WARNING: output file \`$ofile' does not look like a libtool script" >&2;}
+ else
+ { echo "$as_me:$LINENO: WARNING: using \`LTCC=$LTCC', extracted from \`$ofile'" >&5
+echo "$as_me: WARNING: using \`LTCC=$LTCC', extracted from \`$ofile'" >&2;}
+ fi
+ fi
+
+ # Extract list of available tagged configurations in $ofile.
+ # Note that this assumes the entire list is on one line.
+ available_tags=`grep "^available_tags=" "${ofile}" | $SED -e 's/available_tags=\(.*$\)/\1/' -e 's/\"//g'`
+
+ lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR,"
+ for tagname in $tagnames; do
+ IFS="$lt_save_ifs"
+ # Check whether tagname contains only valid characters
+ case `$echo "X$tagname" | $Xsed -e 's:[-_ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890,/]::g'` in
+ "") ;;
+ *) { { echo "$as_me:$LINENO: error: invalid tag name: $tagname" >&5
+echo "$as_me: error: invalid tag name: $tagname" >&2;}
+ { (exit 1); exit 1; }; }
+ ;;
+ esac
+
+ if grep "^# ### BEGIN LIBTOOL TAG CONFIG: $tagname$" < "${ofile}" > /dev/null
+ then
+ { { echo "$as_me:$LINENO: error: tag name \"$tagname\" already exists" >&5
+echo "$as_me: error: tag name \"$tagname\" already exists" >&2;}
+ { (exit 1); exit 1; }; }
+ fi
+
+ # Update the list of available tags.
+ if test -n "$tagname"; then
+ echo appending configuration tag \"$tagname\" to $ofile
+
+ case $tagname in
+ CXX)
+ if test -n "$CXX" && test "X$CXX" != "Xno"; then
+ ac_ext=cc
+ac_cpp='$CXXCPP $CPPFLAGS'
+ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_cxx_compiler_gnu
+
+
+
+
+archive_cmds_need_lc_CXX=no
+allow_undefined_flag_CXX=
+always_export_symbols_CXX=no
+archive_expsym_cmds_CXX=
+export_dynamic_flag_spec_CXX=
+hardcode_direct_CXX=no
+hardcode_libdir_flag_spec_CXX=
+hardcode_libdir_flag_spec_ld_CXX=
+hardcode_libdir_separator_CXX=
+hardcode_minus_L_CXX=no
+hardcode_automatic_CXX=no
+module_cmds_CXX=
+module_expsym_cmds_CXX=
+link_all_deplibs_CXX=unknown
+old_archive_cmds_CXX=$old_archive_cmds
+no_undefined_flag_CXX=
+whole_archive_flag_spec_CXX=
+enable_shared_with_static_runtimes_CXX=no
+
+# Dependencies to place before and after the object being linked:
+predep_objects_CXX=
+postdep_objects_CXX=
+predeps_CXX=
+postdeps_CXX=
+compiler_lib_search_path_CXX=
+
+# Source file extension for C++ test sources.
+ac_ext=cc
+
+# Object file extension for compiled C++ test sources.
+objext=o
+objext_CXX=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code="int some_variable = 0;\n"
+
+# Code to be used in simple link tests
+lt_simple_link_test_code='int main(int, char *) { return(0); }\n'
+
+# ltmain only uses $CC for tagged configurations so make sure $CC is set.
+
+# If no C compiler was specified, use CC.
+LTCC=${LTCC-"$CC"}
+
+# Allow CC to be a program name with arguments.
+compiler=$CC
+
+
+# Allow CC to be a program name with arguments.
+lt_save_CC=$CC
+lt_save_LD=$LD
+lt_save_GCC=$GCC
+GCC=$GXX
+lt_save_with_gnu_ld=$with_gnu_ld
+lt_save_path_LD=$lt_cv_path_LD
+if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then
+ lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx
+else
+ unset lt_cv_prog_gnu_ld
+fi
+if test -n "${lt_cv_path_LDCXX+set}"; then
+ lt_cv_path_LD=$lt_cv_path_LDCXX
+else
+ unset lt_cv_path_LD
+fi
+test -z "${LDCXX+set}" || LD=$LDCXX
+CC=${CXX-"c++"}
+compiler=$CC
+compiler_CXX=$CC
+cc_basename=`$echo X"$compiler" | $Xsed -e 's%^.*/%%'`
+
+# We don't want -fno-exceptions when compiling C++ code, so set the
+# no_builtin_flag separately
+if test "$GXX" = yes; then
+ lt_prog_compiler_no_builtin_flag_CXX=' -fno-builtin'
+else
+ lt_prog_compiler_no_builtin_flag_CXX=
+fi
+
+if test "$GXX" = yes; then
+ # Set up default GNU C++ configuration
+
+
+# Check whether --with-gnu-ld or --without-gnu-ld was given.
+if test "${with_gnu_ld+set}" = set; then
+ withval="$with_gnu_ld"
+ test "$withval" = no || with_gnu_ld=yes
+else
+ with_gnu_ld=no
+fi;
+ac_prog=ld
+if test "$GCC" = yes; then
+ # Check if gcc -print-prog-name=ld gives a path.
+ echo "$as_me:$LINENO: checking for ld used by $CC" >&5
+echo $ECHO_N "checking for ld used by $CC... $ECHO_C" >&6
+ case $host in
+ *-*-mingw*)
+ # gcc leaves a trailing carriage return which upsets mingw
+ ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;;
+ *)
+ ac_prog=`($CC -print-prog-name=ld) 2>&5` ;;
+ esac
+ case $ac_prog in
+ # Accept absolute paths.
+ [\\/]* | ?:[\\/]*)
+ re_direlt='/[^/][^/]*/\.\./'
+ # Canonicalize the pathname of ld
+ ac_prog=`echo $ac_prog| $SED 's%\\\\%/%g'`
+ while echo $ac_prog | grep "$re_direlt" > /dev/null 2>&1; do
+ ac_prog=`echo $ac_prog| $SED "s%$re_direlt%/%"`
+ done
+ test -z "$LD" && LD="$ac_prog"
+ ;;
+ "")
+ # If it fails, then pretend we aren't using GCC.
+ ac_prog=ld
+ ;;
+ *)
+ # If it is relative, then search for the first ld in PATH.
+ with_gnu_ld=unknown
+ ;;
+ esac
+elif test "$with_gnu_ld" = yes; then
+ echo "$as_me:$LINENO: checking for GNU ld" >&5
+echo $ECHO_N "checking for GNU ld... $ECHO_C" >&6
+else
+ echo "$as_me:$LINENO: checking for non-GNU ld" >&5
+echo $ECHO_N "checking for non-GNU ld... $ECHO_C" >&6
+fi
+if test "${lt_cv_path_LD+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -z "$LD"; then
+ lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+ for ac_dir in $PATH; do
+ IFS="$lt_save_ifs"
+ test -z "$ac_dir" && ac_dir=.
+ if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then
+ lt_cv_path_LD="$ac_dir/$ac_prog"
+ # Check to see if the program is GNU ld. I'd rather use --version,
+ # but apparently some GNU ld's only accept -v.
+ # Break only if it was the GNU/non-GNU ld that we prefer.
+ case `"$lt_cv_path_LD" -v 2>&1 </dev/null` in
+ *GNU* | *'with BFD'*)
+ test "$with_gnu_ld" != no && break
+ ;;
+ *)
+ test "$with_gnu_ld" != yes && break
+ ;;
+ esac
+ fi
+ done
+ IFS="$lt_save_ifs"
+else
+ lt_cv_path_LD="$LD" # Let the user override the test with a path.
+fi
+fi
+
+LD="$lt_cv_path_LD"
+if test -n "$LD"; then
+ echo "$as_me:$LINENO: result: $LD" >&5
+echo "${ECHO_T}$LD" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+test -z "$LD" && { { echo "$as_me:$LINENO: error: no acceptable ld found in \$PATH" >&5
+echo "$as_me: error: no acceptable ld found in \$PATH" >&2;}
+ { (exit 1); exit 1; }; }
+echo "$as_me:$LINENO: checking if the linker ($LD) is GNU ld" >&5
+echo $ECHO_N "checking if the linker ($LD) is GNU ld... $ECHO_C" >&6
+if test "${lt_cv_prog_gnu_ld+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ # I'd rather use --version here, but apparently some GNU ld's only accept -v.
+case `$LD -v 2>&1 </dev/null` in
+*GNU* | *'with BFD'*)
+ lt_cv_prog_gnu_ld=yes
+ ;;
+*)
+ lt_cv_prog_gnu_ld=no
+ ;;
+esac
+fi
+echo "$as_me:$LINENO: result: $lt_cv_prog_gnu_ld" >&5
+echo "${ECHO_T}$lt_cv_prog_gnu_ld" >&6
+with_gnu_ld=$lt_cv_prog_gnu_ld
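
The detection above asks the compiler which ld it will run and then pattern-matches that linker's version banner for "GNU" or "with BFD". The same check can be reproduced by hand; g++ here merely stands in for whatever $CC is at this point:

g++ -print-prog-name=ld           # which ld will the compiler invoke?
`g++ -print-prog-name=ld` -v      # GNU ld identifies itself as "GNU ld ..." in this banner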
+
+
+
+ # Check if GNU C++ uses GNU ld as the underlying linker, since the
+ # archiving commands below assume that GNU ld is being used.
+ if test "$with_gnu_ld" = yes; then
+ archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+
+ hardcode_libdir_flag_spec_CXX='${wl}--rpath ${wl}$libdir'
+ export_dynamic_flag_spec_CXX='${wl}--export-dynamic'
+
+ # If archive_cmds runs LD, not CC, wlarc should be empty
+ # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to
+ # investigate it a little bit more. (MM)
+ wlarc='${wl}'
+
+    # ancient GNU ld didn't support --whole-archive et al.
+ if eval "`$CC -print-prog-name=ld` --help 2>&1" | \
+ grep 'no-whole-archive' > /dev/null; then
+ whole_archive_flag_spec_CXX="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
+ else
+ whole_archive_flag_spec_CXX=
+ fi
+ else
+ with_gnu_ld=no
+ wlarc=
+
+ # A generic and very simple default shared library creation
+ # command for GNU C++ for the case where it uses the native
+    # linker, instead of GNU ld. If possible, this setting should be
+ # overridden to take advantage of the native linker features on
+ # the platform it is being used on.
+ archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib'
+ fi
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"'
+
+else
+ GXX=no
+ with_gnu_ld=no
+ wlarc=
+fi
+
+# PORTME: fill in a description of your system's C++ link characteristics
+echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5
+echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6
+ld_shlibs_CXX=yes
+case $host_os in
+ aix3*)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ aix4* | aix5*)
+ if test "$host_cpu" = ia64; then
+ # On IA64, the linker does run time linking by default, so we don't
+ # have to do anything special.
+ aix_use_runtimelinking=no
+ exp_sym_flag='-Bexport'
+ no_entry_flag=""
+ else
+ aix_use_runtimelinking=no
+
+ # Test if we are trying to use run time linking or normal
+ # AIX style linking. If -brtl is somewhere in LDFLAGS, we
+ # need to do runtime linking.
+ case $host_os in aix4.[23]|aix4.[23].*|aix5*)
+ for ld_flag in $LDFLAGS; do
+ case $ld_flag in
+ *-brtl*)
+ aix_use_runtimelinking=yes
+ break
+ ;;
+ esac
+ done
+ esac
+
+ exp_sym_flag='-bexport'
+ no_entry_flag='-bnoentry'
+ fi
+
+ # When large executables or shared objects are built, AIX ld can
+ # have problems creating the table of contents. If linking a library
+ # or program results in "error TOC overflow" add -mminimal-toc to
+ # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
+ # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
+
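As the comment above explains, a TOC overflow while linking large objects on AIX is worked around by the user at configure time rather than by libtool itself; the flag spellings are the ones named in the comment, and the invocation itself is only illustrative:

./configure CFLAGS="-mminimal-toc" CXXFLAGS="-mminimal-toc" LDFLAGS="-Wl,-bbigtoc"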
+ archive_cmds_CXX=''
+ hardcode_direct_CXX=yes
+ hardcode_libdir_separator_CXX=':'
+ link_all_deplibs_CXX=yes
+
+ if test "$GXX" = yes; then
+      case $host_os in aix4.[012]|aix4.[012].*)
+      # We only want to do this on AIX 4.2 and lower; the check
+ # below for broken collect2 doesn't work under 4.3+
+ collect2name=`${CC} -print-prog-name=collect2`
+ if test -f "$collect2name" && \
+ strings "$collect2name" | grep resolve_lib_name >/dev/null
+ then
+ # We have reworked collect2
+ hardcode_direct_CXX=yes
+ else
+ # We have old collect2
+ hardcode_direct_CXX=unsupported
+ # It fails to find uninstalled libraries when the uninstalled
+ # path is not listed in the libpath. Setting hardcode_minus_L
+ # to unsupported forces relinking
+ hardcode_minus_L_CXX=yes
+ hardcode_libdir_flag_spec_CXX='-L$libdir'
+ hardcode_libdir_separator_CXX=
+ fi
+ esac
+ shared_flag='-shared'
+ else
+ # not using gcc
+ if test "$host_cpu" = ia64; then
+ # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
+ # chokes on -Wl,-G. The following line is correct:
+ shared_flag='-G'
+ else
+ if test "$aix_use_runtimelinking" = yes; then
+ shared_flag='${wl}-G'
+ else
+ shared_flag='${wl}-bM:SRE'
+ fi
+ fi
+ fi
+
+ # It seems that -bexpall does not export symbols beginning with
+ # underscore (_), so it is better to generate a list of symbols to export.
+ always_export_symbols_CXX=yes
+ if test "$aix_use_runtimelinking" = yes; then
+ # Warning - without using the other runtime loading flags (-brtl),
+ # -berok will link without error, but may produce a broken library.
+ allow_undefined_flag_CXX='-berok'
+ # Determine the default libpath from the value encoded in an empty executable.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+
+aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`
+# Check for a 64-bit object if we didn't find anything.
+if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`; fi
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+
+ hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+
+ archive_expsym_cmds_CXX="\$CC"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+ else
+ if test "$host_cpu" = ia64; then
+ hardcode_libdir_flag_spec_CXX='${wl}-R $libdir:/usr/lib:/lib'
+ allow_undefined_flag_CXX="-z nodefs"
+ archive_expsym_cmds_CXX="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols"
+ else
+ # Determine the default libpath from the value encoded in an empty executable.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+
+aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`
+# Check for a 64-bit object if we didn't find anything.
+if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`; fi
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+
+ hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+ # Warning - without using the other run time loading flags,
+ # -berok will link without error, but may produce a broken library.
+ no_undefined_flag_CXX=' ${wl}-bernotok'
+ allow_undefined_flag_CXX=' ${wl}-berok'
+ # -bexpall does not export symbols beginning with underscore (_)
+ always_export_symbols_CXX=yes
+ # Exported symbols can be pulled into shared objects from archives
+ whole_archive_flag_spec_CXX=' '
+ archive_cmds_need_lc_CXX=yes
+          # This is similar to how AIX traditionally builds its shared libraries.
+ archive_expsym_cmds_CXX="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}-bE:$export_symbols ${wl}-bnoentry${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname'
+ fi
+ fi
+ ;;
+ chorus*)
+ case $cc_basename in
+ *)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ esac
+ ;;
+
+ cygwin* | mingw* | pw32*)
+ # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
+ # as there is no search path for DLLs.
+ hardcode_libdir_flag_spec_CXX='-L$libdir'
+ allow_undefined_flag_CXX=unsupported
+ always_export_symbols_CXX=no
+ enable_shared_with_static_runtimes_CXX=yes
+
+ if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then
+ archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ # If the export-symbols file already is a .def file (1st line
+ # is EXPORTS), use it as is; otherwise, prepend...
+ archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+ cp $export_symbols $output_objdir/$soname.def;
+ else
+ echo EXPORTS > $output_objdir/$soname.def;
+ cat $export_symbols >> $output_objdir/$soname.def;
+ fi~
+ $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ else
+ ld_shlibs_CXX=no
+ fi
+ ;;
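
The export-symbols handling above either takes a ready-made .def file (recognized by an EXPORTS first line) or synthesizes one by prepending that header to a plain symbol list. A minimal hand-written equivalent, with purely illustrative symbol names, would be:

cat > mylib.def <<'SKETCH'
EXPORTS
my_exported_function
another_exported_symbol
SKETCH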
+
+ darwin* | rhapsody*)
+ if test "$GXX" = yes; then
+ archive_cmds_need_lc_CXX=no
+ case "$host_os" in
+ rhapsody* | darwin1.[012])
+ allow_undefined_flag_CXX='-undefined suppress'
+ ;;
+ *) # Darwin 1.3 on
+ if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then
+ allow_undefined_flag_CXX='-flat_namespace -undefined suppress'
+ else
+ case ${MACOSX_DEPLOYMENT_TARGET} in
+ 10.[012])
+ allow_undefined_flag_CXX='-flat_namespace -undefined suppress'
+ ;;
+ 10.*)
+ allow_undefined_flag_CXX='-undefined dynamic_lookup'
+ ;;
+ esac
+ fi
+ ;;
+ esac
+ lt_int_apple_cc_single_mod=no
+ output_verbose_link_cmd='echo'
+ if $CC -dumpspecs 2>&1 | grep 'single_module' >/dev/null ; then
+ lt_int_apple_cc_single_mod=yes
+ fi
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ archive_cmds_CXX='$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ else
+ archive_cmds_CXX='$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ fi
+ module_cmds_CXX='$CC ${wl}-bind_at_load $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags'
+
+    # Don't fix this by using the ld -exported_symbols_list flag; it doesn't exist in older darwin ld's
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ archive_expsym_cmds_CXX='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ else
+ archive_expsym_cmds_CXX='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ fi
+ module_expsym_cmds_CXX='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ hardcode_direct_CXX=no
+ hardcode_automatic_CXX=yes
+ hardcode_shlibpath_var_CXX=unsupported
+ whole_archive_flag_spec_CXX='-all_load $convenience'
+ link_all_deplibs_CXX=yes
+ else
+ ld_shlibs_CXX=no
+ fi
+ ;;
+
+ dgux*)
+ case $cc_basename in
+ ec++)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ ghcx)
+ # Green Hills C++ Compiler
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ esac
+ ;;
+  freebsd[12]*)
+ # C++ shared libraries reported to be fairly broken before switch to ELF
+ ld_shlibs_CXX=no
+ ;;
+ freebsd-elf*)
+ archive_cmds_need_lc_CXX=no
+ ;;
+ freebsd* | kfreebsd*-gnu)
+ # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF
+ # conventions
+ ld_shlibs_CXX=yes
+ ;;
+ gnu*)
+ ;;
+ hpux9*)
+ hardcode_libdir_flag_spec_CXX='${wl}+b ${wl}$libdir'
+ hardcode_libdir_separator_CXX=:
+ export_dynamic_flag_spec_CXX='${wl}-E'
+ hardcode_direct_CXX=yes
+ hardcode_minus_L_CXX=yes # Not in the search PATH,
+ # but as the default
+ # location of the library.
+
+ case $cc_basename in
+ CC)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ aCC)
+ archive_cmds_CXX='$rm $output_objdir/$soname~$CC -b ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | egrep "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ *)
+ if test "$GXX" = yes; then
+ archive_cmds_CXX='$rm $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ else
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ fi
+ ;;
+ esac
+ ;;
+ hpux10*|hpux11*)
+ if test $with_gnu_ld = no; then
+ case "$host_cpu" in
+ hppa*64*)
+ hardcode_libdir_flag_spec_CXX='${wl}+b ${wl}$libdir'
+ hardcode_libdir_flag_spec_ld_CXX='+b $libdir'
+ hardcode_libdir_separator_CXX=:
+ ;;
+ ia64*)
+ hardcode_libdir_flag_spec_CXX='-L$libdir'
+ ;;
+ *)
+ hardcode_libdir_flag_spec_CXX='${wl}+b ${wl}$libdir'
+ hardcode_libdir_separator_CXX=:
+ export_dynamic_flag_spec_CXX='${wl}-E'
+ ;;
+ esac
+ fi
+ case "$host_cpu" in
+ hppa*64*)
+ hardcode_direct_CXX=no
+ hardcode_shlibpath_var_CXX=no
+ ;;
+ ia64*)
+ hardcode_direct_CXX=no
+ hardcode_shlibpath_var_CXX=no
+ hardcode_minus_L_CXX=yes # Not in the search PATH,
+ # but as the default
+ # location of the library.
+ ;;
+ *)
+ hardcode_direct_CXX=yes
+ hardcode_minus_L_CXX=yes # Not in the search PATH,
+ # but as the default
+ # location of the library.
+ ;;
+ esac
+
+ case $cc_basename in
+ CC)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ aCC)
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ archive_cmds_CXX='$LD -b +h $soname -o $lib $linker_flags $libobjs $deplibs'
+ ;;
+ *)
+ archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ ;;
+ esac
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | grep "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ *)
+ if test "$GXX" = yes; then
+ if test $with_gnu_ld = no; then
+ case "$host_cpu" in
+ ia64*|hppa*64*)
+ archive_cmds_CXX='$LD -b +h $soname -o $lib $linker_flags $libobjs $deplibs'
+ ;;
+ *)
+ archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ ;;
+ esac
+ fi
+ else
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ fi
+ ;;
+ esac
+ ;;
+ irix5* | irix6*)
+ case $cc_basename in
+ CC)
+ # SGI C++
+ archive_cmds_CXX='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib'
+
+ # Archives containing C++ object files must be created using
+ # "CC -ar", where "CC" is the IRIX C++ compiler. This is
+ # necessary to make sure instantiated templates are included
+ # in the archive.
+ old_archive_cmds_CXX='$CC -ar -WR,-u -o $oldlib $oldobjs'
+ ;;
+ *)
+ if test "$GXX" = yes; then
+ if test "$with_gnu_ld" = no; then
+ archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${objdir}/so_locations -o $lib'
+ else
+ archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` -o $lib'
+ fi
+ fi
+ link_all_deplibs_CXX=yes
+ ;;
+ esac
+ hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_CXX=:
+ ;;
+ linux*)
+ case $cc_basename in
+ KCC)
+ # Kuck and Associates, Inc. (KAI) C++ Compiler
+
+ # KCC will only create a shared library if the output file
+ # ends with ".so" (or ".sl" for HP-UX), so rename the library
+ # to its proper name (with version) after linking.
+ archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
+ archive_expsym_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib ${wl}-retain-symbols-file,$export_symbols; mv \$templib $lib'
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | grep "ld"`; rm -f libconftest$shared_ext; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+
+ hardcode_libdir_flag_spec_CXX='${wl}--rpath,$libdir'
+ export_dynamic_flag_spec_CXX='${wl}--export-dynamic'
+
+ # Archives containing C++ object files must be created using
+ # "CC -Bstatic", where "CC" is the KAI C++ compiler.
+ old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs'
+ ;;
+ icpc)
+ # Intel C++
+ with_gnu_ld=yes
+ archive_cmds_need_lc_CXX=no
+ archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir'
+ export_dynamic_flag_spec_CXX='${wl}--export-dynamic'
+ whole_archive_flag_spec_CXX='${wl}--whole-archive$convenience ${wl}--no-whole-archive'
+ ;;
+ cxx)
+ # Compaq C++
+ archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib ${wl}-retain-symbols-file $wl$export_symbols'
+
+ runpath_var=LD_RUN_PATH
+ hardcode_libdir_flag_spec_CXX='-rpath $libdir'
+ hardcode_libdir_separator_CXX=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ esac
+ ;;
+ lynxos*)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ m88k*)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ mvs*)
+ case $cc_basename in
+ cxx)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ esac
+ ;;
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ archive_cmds_CXX='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags'
+ wlarc=
+ hardcode_libdir_flag_spec_CXX='-R$libdir'
+ hardcode_direct_CXX=yes
+ hardcode_shlibpath_var_CXX=no
+ fi
+    # Work around some broken pre-1.5 toolchains
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"'
+ ;;
+ osf3*)
+ case $cc_basename in
+ KCC)
+ # Kuck and Associates, Inc. (KAI) C++ Compiler
+
+ # KCC will only create a shared library if the output file
+ # ends with ".so" (or ".sl" for HP-UX), so rename the library
+ # to its proper name (with version) after linking.
+ archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
+
+ hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir'
+ hardcode_libdir_separator_CXX=:
+
+ # Archives containing C++ object files must be created using
+ # "CC -Bstatic", where "CC" is the KAI C++ compiler.
+ old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs'
+
+ ;;
+ RCC)
+ # Rational C++ 2.4.1
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ cxx)
+ allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*'
+ archive_cmds_CXX='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $soname `test -n "$verstring" && echo ${wl}-set_version $verstring` -update_registry ${objdir}/so_locations -o $lib'
+
+ hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_CXX=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld" | grep -v "ld:"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ *)
+ if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*'
+ archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${objdir}/so_locations -o $lib'
+
+ hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_CXX=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"'
+
+ else
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ fi
+ ;;
+ esac
+ ;;
+ osf4* | osf5*)
+ case $cc_basename in
+ KCC)
+ # Kuck and Associates, Inc. (KAI) C++ Compiler
+
+ # KCC will only create a shared library if the output file
+ # ends with ".so" (or ".sl" for HP-UX), so rename the library
+ # to its proper name (with version) after linking.
+ archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
+
+ hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir'
+ hardcode_libdir_separator_CXX=:
+
+ # Archives containing C++ object files must be created using
+ # the KAI C++ compiler.
+ old_archive_cmds_CXX='$CC -o $oldlib $oldobjs'
+ ;;
+ RCC)
+ # Rational C++ 2.4.1
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ cxx)
+ allow_undefined_flag_CXX=' -expect_unresolved \*'
+ archive_cmds_CXX='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib'
+ archive_expsym_cmds_CXX='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~
+ echo "-hidden">> $lib.exp~
+ $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname -Wl,-input -Wl,$lib.exp `test -n "$verstring" && echo -set_version $verstring` -update_registry $objdir/so_locations -o $lib~
+ $rm $lib.exp'
+
+ hardcode_libdir_flag_spec_CXX='-rpath $libdir'
+ hardcode_libdir_separator_CXX=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "ld" | grep -v "ld:"`; templist=`echo $templist | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+ ;;
+ *)
+ if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*'
+ archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${objdir}/so_locations -o $lib'
+
+ hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_CXX=:
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep "\-L"'
+
+ else
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ fi
+ ;;
+ esac
+ ;;
+ psos*)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ sco*)
+ archive_cmds_need_lc_CXX=no
+ case $cc_basename in
+ CC)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ esac
+ ;;
+ sunos4*)
+ case $cc_basename in
+ CC)
+ # Sun C++ 4.x
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ lcc)
+ # Lucid
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ esac
+ ;;
+ solaris*)
+ case $cc_basename in
+ CC)
+ # Sun C++ 4.2, 5.x and Centerline C++
+ no_undefined_flag_CXX=' -zdefs'
+ archive_cmds_CXX='$CC -G${allow_undefined_flag} -nolib -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ archive_expsym_cmds_CXX='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -G${allow_undefined_flag} -nolib ${wl}-M ${wl}$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp'
+
+ hardcode_libdir_flag_spec_CXX='-R$libdir'
+ hardcode_shlibpath_var_CXX=no
+ case $host_os in
+ solaris2.[0-5] | solaris2.[0-5].*) ;;
+ *)
+ # The C++ compiler is used as the linker, so we must use the $wl
+ # flag to pass options through to the underlying system linker.
+ # Supported since Solaris 2.6 (maybe 2.5.1?)
+ whole_archive_flag_spec_CXX='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract'
+ ;;
+ esac
+ link_all_deplibs_CXX=yes
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ #
+ # There doesn't appear to be a way to prevent this compiler from
+ # explicitly linking system object files so we need to strip them
+ # from the output so that they don't get included in the library
+ # dependencies.
+ output_verbose_link_cmd='templist=`$CC -G $CFLAGS -v conftest.$objext 2>&1 | grep "\-[LR]"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; echo $list'
+
+ # Archives containing C++ object files must be created using
+ # "CC -xar", where "CC" is the Sun C++ compiler. This is
+ # necessary to make sure instantiated templates are included
+ # in the archive.
+ old_archive_cmds_CXX='$CC -xar -o $oldlib $oldobjs'
+ ;;
+ gcx)
+ # Green Hills C++ Compiler
+ archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+
+ # The C++ compiler must be used to create the archive.
+ old_archive_cmds_CXX='$CC $LDFLAGS -archive -o $oldlib $oldobjs'
+ ;;
+ *)
+ # GNU C++ compiler with Solaris linker
+ if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ no_undefined_flag_CXX=' ${wl}-z ${wl}defs'
+ if $CC --version | grep -v '^2\.7' > /dev/null; then
+ archive_cmds_CXX='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ archive_expsym_cmds_CXX='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp'
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd="$CC -shared $CFLAGS -v conftest.$objext 2>&1 | grep \"\-L\""
+ else
+ # g++ 2.7 appears to require `-G' NOT `-shared' on this
+ # platform.
+ archive_cmds_CXX='$CC -G -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ archive_expsym_cmds_CXX='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -G -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$rm $lib.exp'
+
+ # Commands to make compiler produce verbose output that lists
+ # what "hidden" libraries, object files and flags are used when
+ # linking a shared library.
+ output_verbose_link_cmd="$CC -G $CFLAGS -v conftest.$objext 2>&1 | grep \"\-L\""
+ fi
+
+ hardcode_libdir_flag_spec_CXX='${wl}-R $wl$libdir'
+ fi
+ ;;
+ esac
+ ;;
+ sysv5OpenUNIX8* | sysv5UnixWare7* | sysv5uw[78]* | unixware7*)
+ archive_cmds_need_lc_CXX=no
+ ;;
+ tandem*)
+ case $cc_basename in
+ NCC)
+ # NonStop-UX NCC 3.20
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ esac
+ ;;
+ vxworks*)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+ *)
+ # FIXME: insert proper C++ library support
+ ld_shlibs_CXX=no
+ ;;
+esac
+echo "$as_me:$LINENO: result: $ld_shlibs_CXX" >&5
+echo "${ECHO_T}$ld_shlibs_CXX" >&6
+test "$ld_shlibs_CXX" = no && can_build_shared=no
+
+GCC_CXX="$GXX"
+LD_CXX="$LD"
+
+
+cat > conftest.$ac_ext <<EOF
+class Foo
+{
+public:
+ Foo (void) { a = 0; }
+private:
+ int a;
+};
+EOF
+
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; then
+ # Parse the compiler output and extract the necessary
+ # objects, libraries and library flags.
+
+ # Sentinel used to keep track of whether or not we are before
+ # the conftest object file.
+ pre_test_object_deps_done=no
+
+ # The `*' patterns in `case' statements used by $output_verbose_link_cmd on
+ # some architectures can trigger glob expansion during the loop eval below,
+ # so neutralize them with this substitution.
+ output_verbose_link_cmd="`$echo \"X$output_verbose_link_cmd\" | $Xsed -e \"$no_glob_subst\"`"
+
+ for p in `eval $output_verbose_link_cmd`; do
+ case $p in
+
+ -L* | -R* | -l*)
+ # Some compilers place a space between "-{L,R}" and the path.
+ # Remove the space.
+ if test $p = "-L" \
+ || test $p = "-R"; then
+ prev=$p
+ continue
+ else
+ prev=
+ fi
+
+ if test "$pre_test_object_deps_done" = no; then
+ case $p in
+ -L* | -R*)
+ # Internal compiler library paths should come after those
+ # provided by the user. The postdeps already come after the
+ # user-supplied libs, so there is no need to process them.
+ if test -z "$compiler_lib_search_path_CXX"; then
+ compiler_lib_search_path_CXX="${prev}${p}"
+ else
+ compiler_lib_search_path_CXX="${compiler_lib_search_path_CXX} ${prev}${p}"
+ fi
+ ;;
+ # The "-l" case would never come before the object being
+ # linked, so don't bother handling this case.
+ esac
+ else
+ if test -z "$postdeps_CXX"; then
+ postdeps_CXX="${prev}${p}"
+ else
+ postdeps_CXX="${postdeps_CXX} ${prev}${p}"
+ fi
+ fi
+ ;;
+
+ *.$objext)
+ # This assumes that the test object file only shows up
+ # once in the compiler output.
+ if test "$p" = "conftest.$objext"; then
+ pre_test_object_deps_done=yes
+ continue
+ fi
+
+ if test "$pre_test_object_deps_done" = no; then
+ if test -z "$predep_objects_CXX"; then
+ predep_objects_CXX="$p"
+ else
+ predep_objects_CXX="$predep_objects_CXX $p"
+ fi
+ else
+ if test -z "$postdep_objects_CXX"; then
+ postdep_objects_CXX="$p"
+ else
+ postdep_objects_CXX="$postdep_objects_CXX $p"
+ fi
+ fi
+ ;;
+
+ *) ;; # Ignore the rest.
+
+ esac
+ done
+
+ # Clean up.
+ rm -f a.out a.exe
+else
+ echo "libtool.m4: error: problem compiling CXX test program"
+fi
+
+$rm -f conftest.$objext
+
+case " $postdeps_CXX " in
+*" -lc "*) archive_cmds_need_lc_CXX=no ;;
+esac
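+
+# Hedged sketch (defined but never invoked): the parsing loop above splits the
+# verbose link line into the objects and flags seen before and after the
+# conftest object. For a hypothetical line it classifies tokens roughly like
+# this simplified helper.
+lt_example_split_deps () {
+  example_line='crtbegin.o conftest.o -L/usr/lib/gcc -lstdc++ -lm crtend.o'
+  pre= post= seen=no
+  for tok in $example_line; do
+    case $tok in
+      conftest.o) seen=yes ;;
+      *) if test "$seen" = no; then pre="$pre $tok"; else post="$post $tok"; fi ;;
+    esac
+  done
+  echo "predeps: $pre"    # -> predeps:  crtbegin.o
+  echo "postdeps:$post"   # -> postdeps: -L/usr/lib/gcc -lstdc++ -lm crtend.o
+}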
+
+lt_prog_compiler_wl_CXX=
+lt_prog_compiler_pic_CXX=
+lt_prog_compiler_static_CXX=
+
+echo "$as_me:$LINENO: checking for $compiler option to produce PIC" >&5
+echo $ECHO_N "checking for $compiler option to produce PIC... $ECHO_C" >&6
+
+ # C++ specific cases for pic, static, wl, etc.
+ if test "$GXX" = yes; then
+ lt_prog_compiler_wl_CXX='-Wl,'
+ lt_prog_compiler_static_CXX='-static'
+
+ case $host_os in
+ aix*)
+ # All AIX code is PIC.
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static_CXX='-Bstatic'
+ fi
+ ;;
+ amigaos*)
+ # FIXME: we need at least 68020 code to build shared libraries, but
+ # adding the `-m68020' flag to GCC prevents building anything better,
+ # like `-m68040'.
+ lt_prog_compiler_pic_CXX='-m68020 -resident32 -malways-restore-a4'
+ ;;
+ beos* | cygwin* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
+ # PIC is the default for these OSes.
+ ;;
+ mingw* | os2* | pw32*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ lt_prog_compiler_pic_CXX='-DDLL_EXPORT'
+ ;;
+ darwin* | rhapsody*)
+ # PIC is the default on this platform
+ # Common symbols not allowed in MH_DYLIB files
+ lt_prog_compiler_pic_CXX='-fno-common'
+ ;;
+ *djgpp*)
+ # DJGPP does not support shared libraries at all
+ lt_prog_compiler_pic_CXX=
+ ;;
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ lt_prog_compiler_pic_CXX=-Kconform_pic
+ fi
+ ;;
+ hpux*)
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ ;;
+ *)
+ lt_prog_compiler_pic_CXX='-fPIC'
+ ;;
+ esac
+ ;;
+ *)
+ lt_prog_compiler_pic_CXX='-fPIC'
+ ;;
+ esac
+ else
+ case $host_os in
+ aix4* | aix5*)
+ # All AIX code is PIC.
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static_CXX='-Bstatic'
+ else
+ lt_prog_compiler_static_CXX='-bnso -bI:/lib/syscalls.exp'
+ fi
+ ;;
+ chorus*)
+ case $cc_basename in
+ cxch68)
+ # Green Hills C++ Compiler
+ # _LT_AC_TAGVAR(lt_prog_compiler_static, CXX)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a"
+ ;;
+ esac
+ ;;
+ dgux*)
+ case $cc_basename in
+ ec++)
+ lt_prog_compiler_pic_CXX='-KPIC'
+ ;;
+ ghcx)
+ # Green Hills C++ Compiler
+ lt_prog_compiler_pic_CXX='-pic'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ freebsd* | kfreebsd*-gnu)
+ # FreeBSD uses GNU C++
+ ;;
+ hpux9* | hpux10* | hpux11*)
+ case $cc_basename in
+ CC)
+ lt_prog_compiler_wl_CXX='-Wl,'
+ lt_prog_compiler_static_CXX="${ac_cv_prog_cc_wl}-a ${ac_cv_prog_cc_wl}archive"
+ if test "$host_cpu" != ia64; then
+ lt_prog_compiler_pic_CXX='+Z'
+ fi
+ ;;
+ aCC)
+ lt_prog_compiler_wl_CXX='-Wl,'
+ lt_prog_compiler_static_CXX="${ac_cv_prog_cc_wl}-a ${ac_cv_prog_cc_wl}archive"
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ lt_prog_compiler_pic_CXX='+Z'
+ ;;
+ esac
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ irix5* | irix6* | nonstopux*)
+ case $cc_basename in
+ CC)
+ lt_prog_compiler_wl_CXX='-Wl,'
+ lt_prog_compiler_static_CXX='-non_shared'
+ # CC pic flag -KPIC is the default.
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ linux*)
+ case $cc_basename in
+ KCC)
+ # KAI C++ Compiler
+ lt_prog_compiler_wl_CXX='--backend -Wl,'
+ lt_prog_compiler_pic_CXX='-fPIC'
+ ;;
+ icpc)
+ # Intel C++
+ lt_prog_compiler_wl_CXX='-Wl,'
+ lt_prog_compiler_pic_CXX='-KPIC'
+ lt_prog_compiler_static_CXX='-static'
+ ;;
+ cxx)
+ # Compaq C++
+ # Make sure the PIC flag is empty. It appears that all Alpha
+ # Linux and Compaq Tru64 Unix objects are PIC.
+ lt_prog_compiler_pic_CXX=
+ lt_prog_compiler_static_CXX='-non_shared'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ lynxos*)
+ ;;
+ m88k*)
+ ;;
+ mvs*)
+ case $cc_basename in
+ cxx)
+ lt_prog_compiler_pic_CXX='-W c,exportall'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ netbsd* | knetbsd*-gnu)
+ ;;
+ osf3* | osf4* | osf5*)
+ case $cc_basename in
+ KCC)
+ lt_prog_compiler_wl_CXX='--backend -Wl,'
+ ;;
+ RCC)
+ # Rational C++ 2.4.1
+ lt_prog_compiler_pic_CXX='-pic'
+ ;;
+ cxx)
+ # Digital/Compaq C++
+ lt_prog_compiler_wl_CXX='-Wl,'
+ # Make sure the PIC flag is empty. It appears that all Alpha
+ # Linux and Compaq Tru64 Unix objects are PIC.
+ lt_prog_compiler_pic_CXX=
+ lt_prog_compiler_static_CXX='-non_shared'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ psos*)
+ ;;
+ sco*)
+ case $cc_basename in
+ CC)
+ lt_prog_compiler_pic_CXX='-fPIC'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ solaris*)
+ case $cc_basename in
+ CC)
+ # Sun C++ 4.2, 5.x and Centerline C++
+ lt_prog_compiler_pic_CXX='-KPIC'
+ lt_prog_compiler_static_CXX='-Bstatic'
+ lt_prog_compiler_wl_CXX='-Qoption ld '
+ ;;
+ gcx)
+ # Green Hills C++ Compiler
+ lt_prog_compiler_pic_CXX='-PIC'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ sunos4*)
+ case $cc_basename in
+ CC)
+ # Sun C++ 4.x
+ lt_prog_compiler_pic_CXX='-pic'
+ lt_prog_compiler_static_CXX='-Bstatic'
+ ;;
+ lcc)
+ # Lucid
+ lt_prog_compiler_pic_CXX='-pic'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ tandem*)
+ case $cc_basename in
+ NCC)
+ # NonStop-UX NCC 3.20
+ lt_prog_compiler_pic_CXX='-KPIC'
+ ;;
+ *)
+ ;;
+ esac
+ ;;
+ unixware*)
+ ;;
+ vxworks*)
+ ;;
+ *)
+ lt_prog_compiler_can_build_shared_CXX=no
+ ;;
+ esac
+ fi
+
+echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_CXX" >&5
+echo "${ECHO_T}$lt_prog_compiler_pic_CXX" >&6
+
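+# Hedged example (defined but never called): once a PIC flag has been chosen,
+# libtool-style builds simply add it to the compile command for objects that
+# will go into a shared library, roughly as in this hypothetical helper.
+lt_example_pic_compile () {
+  src=$1 obj=$2
+  $CXX -c $CXXFLAGS $lt_prog_compiler_pic_CXX -o "$obj" "$src"
+}
+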
+#
+# Check to make sure the PIC flag actually works.
+#
+if test -n "$lt_prog_compiler_pic_CXX"; then
+
+echo "$as_me:$LINENO: checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works" >&5
+echo $ECHO_N "checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works... $ECHO_C" >&6
+if test "${lt_prog_compiler_pic_works_CXX+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_prog_compiler_pic_works_CXX=no
+ ac_outfile=conftest.$ac_objext
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+ lt_compiler_flag="$lt_prog_compiler_pic_CXX -DPIC"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ # The option is referenced via a variable to avoid confusing sed.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:10315: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>conftest.err)
+ ac_status=$?
+ cat conftest.err >&5
+ echo "$as_me:10319: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s "$ac_outfile"; then
+ # The compiler can only warn and ignore the option if it is not recognized,
+ # so say no if there are warnings.
+ if test ! -s conftest.err; then
+ lt_prog_compiler_pic_works_CXX=yes
+ fi
+ fi
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_works_CXX" >&5
+echo "${ECHO_T}$lt_prog_compiler_pic_works_CXX" >&6
+
+if test x"$lt_prog_compiler_pic_works_CXX" = xyes; then
+ case $lt_prog_compiler_pic_CXX in
+ "" | " "*) ;;
+ *) lt_prog_compiler_pic_CXX=" $lt_prog_compiler_pic_CXX" ;;
+ esac
+else
+ lt_prog_compiler_pic_CXX=
+ lt_prog_compiler_can_build_shared_CXX=no
+fi
+
+fi
+case "$host_os" in
+ # For platforms which do not support PIC, -DPIC is meaningless:
+ *djgpp*)
+ lt_prog_compiler_pic_CXX=
+ ;;
+ *)
+ lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC"
+ ;;
+esac
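+
+# Hedged illustration (never executed): the probe above smuggles the candidate
+# flag into $ac_compile with a sed substitution; on a hypothetical compile
+# command the effect is the one shown by this helper, using a literal
+# "-fPIC -DPIC" as a stand-in for the flag under test.
+lt_example_insert_flag () {
+  echo 'g++ -c conftest.cpp' | $SED -e 's: [^ ]*conftest\.: -fPIC -DPIC&:; t'
+  # prints: g++ -c -fPIC -DPIC conftest.cpp
+}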
+
+echo "$as_me:$LINENO: checking if $compiler supports -c -o file.$ac_objext" >&5
+echo $ECHO_N "checking if $compiler supports -c -o file.$ac_objext... $ECHO_C" >&6
+if test "${lt_cv_prog_compiler_c_o_CXX+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_cv_prog_compiler_c_o_CXX=no
+ $rm -r conftest 2>/dev/null
+ mkdir conftest
+ cd conftest
+ mkdir out
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ lt_compiler_flag="-o out/conftest2.$ac_objext"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:10375: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>out/conftest.err)
+ ac_status=$?
+ cat out/conftest.err >&5
+ echo "$as_me:10379: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s out/conftest2.$ac_objext
+ then
+ # The compiler can only warn and ignore the option if it is not recognized,
+ # so say no if there are warnings.
+ if test ! -s out/conftest.err; then
+ lt_cv_prog_compiler_c_o_CXX=yes
+ fi
+ fi
+ chmod u+w .
+ $rm conftest*
+ # SGI C++ compiler will create directory out/ii_files/ for
+ # template instantiation
+ test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files
+ $rm out/* && rmdir out
+ cd ..
+ rmdir conftest
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_c_o_CXX" >&5
+echo "${ECHO_T}$lt_cv_prog_compiler_c_o_CXX" >&6
+
+
+hard_links="nottested"
+if test "$lt_cv_prog_compiler_c_o_CXX" = no && test "$need_locks" != no; then
+ # do not overwrite the value of need_locks provided by the user
+ echo "$as_me:$LINENO: checking if we can lock with hard links" >&5
+echo $ECHO_N "checking if we can lock with hard links... $ECHO_C" >&6
+ hard_links=yes
+ $rm conftest*
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ touch conftest.a
+ ln conftest.a conftest.b 2>&5 || hard_links=no
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ echo "$as_me:$LINENO: result: $hard_links" >&5
+echo "${ECHO_T}$hard_links" >&6
+ if test "$hard_links" = no; then
+ { echo "$as_me:$LINENO: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5
+echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;}
+ need_locks=warn
+ fi
+else
+ need_locks=no
+fi
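+
+# Hedged sketch (defined but never used): the check above relies on hard link
+# creation being atomic, so `ln' can serve as a crude lock when `-c -o' is
+# not supported. A hypothetical lock helper built on the same idea, given an
+# existing file as its argument:
+lt_example_hardlink_lock () {
+  lockfile=$1
+  if ln "$lockfile" "$lockfile.lock" 2>/dev/null; then
+    echo "lock acquired"
+  else
+    echo "lock is held elsewhere (or hard links are unsupported)"
+  fi
+}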
+
+echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5
+echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6
+
+ export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+ case $host_os in
+ aix4* | aix5*)
+ # If we're using GNU nm, then we don't want the "-C" option.
+ # To AIX nm, -C means demangle; to GNU nm it means do not demangle
+ if $NM -V 2>&1 | grep 'GNU' > /dev/null; then
+ export_symbols_cmds_CXX='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols'
+ else
+ export_symbols_cmds_CXX='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols'
+ fi
+ ;;
+ pw32*)
+ export_symbols_cmds_CXX="$ltdll_cmds"
+ ;;
+ cygwin* | mingw*)
+ export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGS] /s/.* \([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW] /s/.* //'\'' | sort | uniq > $export_symbols'
+ ;;
+ *)
+ export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+ ;;
+ esac
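+
+# Hedged illustration (never called): the default command above boils down to
+# listing symbols with $NM, keeping only the names, and sorting them into the
+# export file. With a made-up three-symbol listing it behaves like this; the
+# real pipeline also filters non-global symbols via $global_symbol_pipe,
+# which this sketch omits.
+lt_example_export_symbols () {
+  printf '0000 T foo\n0004 D bar\n0008 T foo\n' | $SED 's/.* //' | sort | uniq
+  # prints: bar, then foo (the duplicate foo is collapsed by sort | uniq)
+}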
+
+echo "$as_me:$LINENO: result: $ld_shlibs_CXX" >&5
+echo "${ECHO_T}$ld_shlibs_CXX" >&6
+test "$ld_shlibs_CXX" = no && can_build_shared=no
+
+variables_saved_for_relink="PATH $shlibpath_var $runpath_var"
+if test "$GCC" = yes; then
+ variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH"
+fi
+
+#
+# Do we need to explicitly link libc?
+#
+case "x$archive_cmds_need_lc_CXX" in
+x|xyes)
+ # Assume -lc should be added
+ archive_cmds_need_lc_CXX=yes
+
+ if test "$enable_shared" = yes && test "$GCC" = yes; then
+ case $archive_cmds_CXX in
+ *'~'*)
+ # FIXME: we may have to deal with multi-command sequences.
+ ;;
+ '$CC '*)
+ # Test whether the compiler implicitly links with -lc since on some
+ # systems, -lgcc has to come before -lc. If gcc already passes -lc
+ # to ld, don't add -lc before -lgcc.
+ echo "$as_me:$LINENO: checking whether -lc should be explicitly linked in" >&5
+echo $ECHO_N "checking whether -lc should be explicitly linked in... $ECHO_C" >&6
+ $rm conftest*
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } 2>conftest.err; then
+ soname=conftest
+ lib=conftest
+ libobjs=conftest.$ac_objext
+ deplibs=
+ wl=$lt_prog_compiler_wl_CXX
+ compiler_flags=-v
+ linker_flags=-v
+ verstring=
+ output_objdir=.
+ libname=conftest
+ lt_save_allow_undefined_flag=$allow_undefined_flag_CXX
+ allow_undefined_flag_CXX=
+ if { (eval echo "$as_me:$LINENO: \"$archive_cmds_CXX 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1\"") >&5
+ (eval $archive_cmds_CXX 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+ then
+ archive_cmds_need_lc_CXX=no
+ else
+ archive_cmds_need_lc_CXX=yes
+ fi
+ allow_undefined_flag_CXX=$lt_save_allow_undefined_flag
+ else
+ cat conftest.err 1>&5
+ fi
+ $rm conftest*
+ echo "$as_me:$LINENO: result: $archive_cmds_need_lc_CXX" >&5
+echo "${ECHO_T}$archive_cmds_need_lc_CXX" >&6
+ ;;
+ esac
+ fi
+ ;;
+esac
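+
+# Hedged illustration (defined but never called): the essence of the probe
+# above is to replay the archive command with verbose output and look for an
+# implicit " -lc " already supplied by the compiler driver. File names in
+# this stand-alone sketch are hypothetical.
+lt_example_needs_lc () {
+  if $CC -shared -v example.$ac_objext -o example.so 2>&1 | grep " -lc " >/dev/null; then
+    echo "driver already passes -lc; archive_cmds need not add it"
+  else
+    echo "-lc must be added to archive_cmds explicitly"
+  fi
+}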
+
+echo "$as_me:$LINENO: checking dynamic linker characteristics" >&5
+echo $ECHO_N "checking dynamic linker characteristics... $ECHO_C" >&6
+library_names_spec=
+libname_spec='lib$name'
+soname_spec=
+shrext=".so"
+postinstall_cmds=
+postuninstall_cmds=
+finish_cmds=
+finish_eval=
+shlibpath_var=
+shlibpath_overrides_runpath=unknown
+version_type=none
+dynamic_linker="$host_os ld.so"
+sys_lib_dlsearch_path_spec="/lib /usr/lib"
+if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | grep ';' >/dev/null ; then
+ # If the path contains ";" then we assume it to be the separator;
+ # otherwise default to the standard path separator (i.e. ":"). It is
+ # assumed that no part of a normal pathname contains ";", but that should
+ # be okay in the real world, where ";" in dirpaths is itself problematic.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+else
+ sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
+fi
+need_lib_prefix=unknown
+hardcode_into_libs=no
+
+# when you set need_version to no, make sure it does not cause -set_version
+# flags to be left without arguments
+need_version=unknown
+
+case $host_os in
+aix3*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a'
+ shlibpath_var=LIBPATH
+
+ # AIX 3 has no versioning support, so we append a major version to the name.
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+
+aix4* | aix5*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ hardcode_into_libs=yes
+ if test "$host_cpu" = ia64; then
+ # AIX 5 supports IA64
+ library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ else
+ # With GCC up to 2.95.x, collect2 would create an import file
+ # for dependent libraries. The import file would start with
+ # the line `#! .'. This would cause the generated library to
+ # depend on `.', always an invalid library. This was fixed in
+ # development snapshots of GCC prior to 3.0.
+ case $host_os in
+ aix4 | aix4.[01] | aix4.[01].*)
+ if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
+ echo ' yes '
+ echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then
+ :
+ else
+ can_build_shared=no
+ fi
+ ;;
+ esac
+ # AIX (on Power*) has no versioning support, so currently we cannot hardcode a
+ # correct soname into the executable. Versioning support could probably be
+ # added to collect2, so additional links may be useful in the future.
+ if test "$aix_use_runtimelinking" = yes; then
+ # If using run time linking (on AIX 4.2 or later) use lib<name>.so
+ # instead of lib<name>.a to let people know that these are not
+ # typical AIX shared libraries.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ else
+ # We preserve .a as extension for shared libraries through AIX4.2
+ # and later when we are not doing run time linking.
+ library_names_spec='${libname}${release}.a $libname.a'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ fi
+ shlibpath_var=LIBPATH
+ fi
+ ;;
+
+amigaos*)
+ library_names_spec='$libname.ixlibrary $libname.a'
+ # Create ${libname}_ixlibrary.a entries in /sys/libs.
+ finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
+ ;;
+
+beos*)
+ library_names_spec='${libname}${shared_ext}'
+ dynamic_linker="$host_os ld.so"
+ shlibpath_var=LIBRARY_PATH
+ ;;
+
+bsdi4*)
+ version_type=linux
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
+ sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
+ # the default ld.so.conf also contains /usr/contrib/lib and
+ # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
+ # libtool to hard-code these into programs
+ ;;
+
+cygwin* | mingw* | pw32*)
+ version_type=windows
+ shrext=".dll"
+ need_version=no
+ need_lib_prefix=no
+
+ case $GCC,$host_os in
+ yes,cygwin* | yes,mingw* | yes,pw32*)
+ library_names_spec='$libname.dll.a'
+ # DLL is installed to $(libdir)/../bin by postinstall_cmds
+ postinstall_cmds='base_file=`basename \${file}`~
+ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~
+ dldir=$destdir/`dirname \$dlpath`~
+ test -d \$dldir || mkdir -p \$dldir~
+ $install_prog $dir/$dlname \$dldir/$dlname'
+ postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
+ dlpath=$dir/\$dldll~
+ $rm \$dlpath'
+ shlibpath_overrides_runpath=yes
+
+ case $host_os in
+ cygwin*)
+ # Cygwin DLLs use 'cyg' prefix rather than 'lib'
+ soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib"
+ ;;
+ mingw*)
+ # MinGW DLLs use traditional 'lib' prefix
+ soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | grep ';[c-zC-Z]:/' >/dev/null; then
+ # It is most probably a Windows format PATH printed by
+ # mingw gcc, but we are running on Cygwin. Gcc prints its search
+ # path with ; separators, and with drive letters. We can handle the
+ # drive letters (cygwin fileutils understands them), so leave them,
+ # especially as we might pass files found there to a mingw objdump,
+ # which wouldn't understand a cygwinified path. Ahh.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+ ;;
+ pw32*)
+ # pw32 DLLs use 'pw' prefix rather than 'lib'
+ library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ ;;
+ esac
+ ;;
+
+ *)
+ library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
+ ;;
+ esac
+ dynamic_linker='Win32 ld.exe'
+ # FIXME: first we should search . and the directory the executable is in
+ shlibpath_var=PATH
+ ;;
+
+darwin* | rhapsody*)
+ dynamic_linker="$host_os dyld"
+ version_type=darwin
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext'
+ soname_spec='${libname}${release}${major}$shared_ext'
+ shlibpath_overrides_runpath=yes
+ shlibpath_var=DYLD_LIBRARY_PATH
+ shrext='$(test .$module = .yes && echo .so || echo .dylib)'
+ # Apple's gcc handles 'gcc -print-search-dirs' differently from FSF gcc,
+ # so its output needs the extra massaging below.
+ if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | tr "\n" "$PATH_SEPARATOR" | sed -e 's/libraries:/@libraries:/' | tr "@" "\n" | grep "^libraries:" | sed -e "s/^libraries://" -e "s,=/,/,g" -e "s,$PATH_SEPARATOR, ,g" -e "s,.*,& /lib /usr/lib /usr/local/lib,g"`
+ else
+ sys_lib_search_path_spec='/lib /usr/lib /usr/local/lib'
+ fi
+ sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
+ ;;
+
+dgux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+freebsd1*)
+ dynamic_linker=no
+ ;;
+
+kfreebsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+freebsd*)
+ objformat=`test -x /usr/bin/objformat && /usr/bin/objformat || echo aout`
+ version_type=freebsd-$objformat
+ case $version_type in
+ freebsd-elf*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}'
+ need_version=no
+ need_lib_prefix=no
+ ;;
+ freebsd-*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix'
+ need_version=yes
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_os in
+ freebsd2*)
+ shlibpath_overrides_runpath=yes
+ ;;
+ freebsd3.[01]* | freebsdelf3.[01]*)
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+ *) # from 3.2 on
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ ;;
+ esac
+ ;;
+
+gnu*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ hardcode_into_libs=yes
+ ;;
+
+hpux9* | hpux10* | hpux11*)
+ # Give a soname corresponding to the major version so that dld.sl refuses to
+ # link against other versions.
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ case "$host_cpu" in
+ ia64*)
+ shrext='.so'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.so"
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ if test "X$HPUX_IA64_MODE" = X32; then
+ sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib"
+ else
+ sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64"
+ fi
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ hppa*64*)
+ shrext='.sl'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64"
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ *)
+ shrext='.sl'
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=SHLIB_PATH
+ shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+ esac
+ # HP-UX runs *really* slowly unless shared libraries are mode 555.
+ postinstall_cmds='chmod 555 $lib'
+ ;;
+
+irix5* | irix6* | nonstopux*)
+ case $host_os in
+ nonstopux*) version_type=nonstopux ;;
+ *)
+ if test "$lt_cv_prog_gnu_ld" = yes; then
+ version_type=linux
+ else
+ version_type=irix
+ fi ;;
+ esac
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}'
+ case $host_os in
+ irix5* | nonstopux*)
+ libsuff= shlibsuff=
+ ;;
+ *)
+ case $LD in # libtool.m4 will add one of these switches to LD
+ *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
+ libsuff= shlibsuff= libmagic=32-bit;;
+ *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
+ libsuff=32 shlibsuff=N32 libmagic=N32;;
+ *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
+ libsuff=64 shlibsuff=64 libmagic=64-bit;;
+ *) libsuff= shlibsuff= libmagic=never-match;;
+ esac
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}"
+ sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}"
+ hardcode_into_libs=yes
+ ;;
+
+# No shared lib support for Linux oldld, aout, or coff.
+linux*oldld* | linux*aout* | linux*coff*)
+ dynamic_linker=no
+ ;;
+
+# This must be Linux ELF.
+linux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ # This implies no fast_install, which is unacceptable.
+ # Some rework will be needed to allow for fast_install
+ # before this can be enabled.
+ hardcode_into_libs=yes
+
+ # Append ld.so.conf contents to the search path
+ if test -f /etc/ld.so.conf; then
+ ld_extra=`$SED -e 's/[:,\t]/ /g;s/=[^=]*$//;s/=[^= ]* / /g' /etc/ld.so.conf`
+ sys_lib_dlsearch_path_spec="/lib /usr/lib $ld_extra"
+ fi
+
+ # We used to test for /lib/ld.so.1 and disable shared libraries on
+ # powerpc, because MkLinux only supported shared libraries with the
+ # GNU dynamic linker. Since that check was broken with cross compilers,
+ # most powerpc-linux boxes support dynamic linking these days, and
+ # people can always use --disable-shared, the test was removed and we
+ # now assume the GNU/Linux dynamic linker is in use.
+ dynamic_linker='GNU/Linux ld.so'
+ ;;
+
+knetbsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+netbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ dynamic_linker='NetBSD (a.out) ld.so'
+ else
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ dynamic_linker='NetBSD ld.elf_so'
+ fi
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+
+newsos6)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+nto-qnx*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+openbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=yes
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ case $host_os in
+ openbsd2.[89] | openbsd2.[89].*)
+ shlibpath_overrides_runpath=no
+ ;;
+ *)
+ shlibpath_overrides_runpath=yes
+ ;;
+ esac
+ else
+ shlibpath_overrides_runpath=yes
+ fi
+ ;;
+
+os2*)
+ libname_spec='$name'
+ shrext=".dll"
+ need_lib_prefix=no
+ library_names_spec='$libname${shared_ext} $libname.a'
+ dynamic_linker='OS/2 ld.exe'
+ shlibpath_var=LIBPATH
+ ;;
+
+osf3* | osf4* | osf5*)
+ version_type=osf
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib"
+ sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec"
+ ;;
+
+sco3.2v5*)
+ version_type=osf
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+solaris*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ # ldd complains unless libraries are executable
+ postinstall_cmds='chmod +x $lib'
+ ;;
+
+sunos4*)
+ version_type=sunos
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ if test "$with_gnu_ld" = yes; then
+ need_lib_prefix=no
+ fi
+ need_version=yes
+ ;;
+
+sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_vendor in
+ sni)
+ shlibpath_overrides_runpath=no
+ need_lib_prefix=no
+ export_dynamic_flag_spec='${wl}-Blargedynsym'
+ runpath_var=LD_RUN_PATH
+ ;;
+ siemens)
+ need_lib_prefix=no
+ ;;
+ motorola)
+ need_lib_prefix=no
+ need_version=no
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib'
+ ;;
+ esac
+ ;;
+
+sysv4*MP*)
+ if test -d /usr/nec ;then
+ version_type=linux
+ library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}'
+ soname_spec='$libname${shared_ext}.$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ fi
+ ;;
+
+uts4*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+*)
+ dynamic_linker=no
+ ;;
+esac
+echo "$as_me:$LINENO: result: $dynamic_linker" >&5
+echo "${ECHO_T}$dynamic_linker" >&6
+test "$dynamic_linker" = no && can_build_shared=no
+
+echo "$as_me:$LINENO: checking how to hardcode library paths into programs" >&5
+echo $ECHO_N "checking how to hardcode library paths into programs... $ECHO_C" >&6
+hardcode_action_CXX=
+if test -n "$hardcode_libdir_flag_spec_CXX" || \
+ test -n "$runpath_var CXX" || \
+ test "X$hardcode_automatic_CXX"="Xyes" ; then
+
+ # We can hardcode non-existent directories.
+ if test "$hardcode_direct_CXX" != no &&
+ # If the only mechanism to avoid hardcoding is shlibpath_var, we
+ # have to relink, otherwise we might link with an installed library
+ # when we should be linking with a yet-to-be-installed one
+ ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, CXX)" != no &&
+ test "$hardcode_minus_L_CXX" != no; then
+ # Linking always hardcodes the temporary library directory.
+ hardcode_action_CXX=relink
+ else
+ # We can link without hardcoding, and we can hardcode nonexistent dirs.
+ hardcode_action_CXX=immediate
+ fi
+else
+ # We cannot hardcode anything, or else we can only hardcode existing
+ # directories.
+ hardcode_action_CXX=unsupported
+fi
+echo "$as_me:$LINENO: result: $hardcode_action_CXX" >&5
+echo "${ECHO_T}$hardcode_action_CXX" >&6
+
+if test "$hardcode_action_CXX" = relink; then
+ # Fast installation is not supported
+ enable_fast_install=no
+elif test "$shlibpath_overrides_runpath" = yes ||
+ test "$enable_shared" = no; then
+ # Fast installation is not necessary
+ enable_fast_install=needless
+fi
+
+striplib=
+old_striplib=
+echo "$as_me:$LINENO: checking whether stripping libraries is possible" >&5
+echo $ECHO_N "checking whether stripping libraries is possible... $ECHO_C" >&6
+if test -n "$STRIP" && $STRIP -V 2>&1 | grep "GNU strip" >/dev/null; then
+ test -z "$old_striplib" && old_striplib="$STRIP --strip-debug"
+ test -z "$striplib" && striplib="$STRIP --strip-unneeded"
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+else
+# FIXME - insert some real tests, host_os isn't really good enough
+ case $host_os in
+ darwin*)
+ if test -n "$STRIP" ; then
+ striplib="$STRIP -x"
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+ else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+ ;;
+ *)
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+ ;;
+ esac
+fi
+
+if test "x$enable_dlopen" != xyes; then
+ enable_dlopen=unknown
+ enable_dlopen_self=unknown
+ enable_dlopen_self_static=unknown
+else
+ lt_cv_dlopen=no
+ lt_cv_dlopen_libs=
+
+ case $host_os in
+ beos*)
+ lt_cv_dlopen="load_add_on"
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+ ;;
+
+ mingw* | pw32*)
+ lt_cv_dlopen="LoadLibrary"
+ lt_cv_dlopen_libs=
+ ;;
+
+ cygwin*)
+ lt_cv_dlopen="dlopen"
+ lt_cv_dlopen_libs=
+ ;;
+
+ darwin*)
+ # if libdl is installed we need to link against it
+ echo "$as_me:$LINENO: checking for dlopen in -ldl" >&5
+echo $ECHO_N "checking for dlopen in -ldl... $ECHO_C" >&6
+if test "${ac_cv_lib_dl_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldl $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main ()
+{
+dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dl_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dl_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dl_dlopen" >&5
+echo "${ECHO_T}$ac_cv_lib_dl_dlopen" >&6
+if test $ac_cv_lib_dl_dlopen = yes; then
+ lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"
+else
+
+ lt_cv_dlopen="dyld"
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+
+fi
+
+ ;;
+
+ *)
+ echo "$as_me:$LINENO: checking for shl_load" >&5
+echo $ECHO_N "checking for shl_load... $ECHO_C" >&6
+if test "${ac_cv_func_shl_load+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+/* Define shl_load to an innocuous variant, in case <limits.h> declares shl_load.
+ For example, HP-UX 11i <limits.h> declares gettimeofday. */
+#define shl_load innocuous_shl_load
+
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char shl_load (); below.
+ Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ <limits.h> exists even on freestanding compilers. */
+
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+
+#undef shl_load
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char shl_load ();
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_shl_load) || defined (__stub___shl_load)
+choke me
+#else
+char (*f) () = shl_load;
+#endif
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+return f != shl_load;
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_func_shl_load=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_func_shl_load=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_func_shl_load" >&5
+echo "${ECHO_T}$ac_cv_func_shl_load" >&6
+if test $ac_cv_func_shl_load = yes; then
+ lt_cv_dlopen="shl_load"
+else
+ echo "$as_me:$LINENO: checking for shl_load in -ldld" >&5
+echo $ECHO_N "checking for shl_load in -ldld... $ECHO_C" >&6
+if test "${ac_cv_lib_dld_shl_load+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldld $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char shl_load ();
+int
+main ()
+{
+shl_load ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dld_shl_load=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dld_shl_load=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dld_shl_load" >&5
+echo "${ECHO_T}$ac_cv_lib_dld_shl_load" >&6
+if test $ac_cv_lib_dld_shl_load = yes; then
+ lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-dld"
+else
+ echo "$as_me:$LINENO: checking for dlopen" >&5
+echo $ECHO_N "checking for dlopen... $ECHO_C" >&6
+if test "${ac_cv_func_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+/* Define dlopen to an innocuous variant, in case <limits.h> declares dlopen.
+ For example, HP-UX 11i <limits.h> declares gettimeofday. */
+#define dlopen innocuous_dlopen
+
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char dlopen (); below.
+ Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ <limits.h> exists even on freestanding compilers. */
+
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+
+#undef dlopen
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_dlopen) || defined (__stub___dlopen)
+choke me
+#else
+char (*f) () = dlopen;
+#endif
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+return f != dlopen;
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_func_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_func_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_func_dlopen" >&5
+echo "${ECHO_T}$ac_cv_func_dlopen" >&6
+if test $ac_cv_func_dlopen = yes; then
+ lt_cv_dlopen="dlopen"
+else
+ echo "$as_me:$LINENO: checking for dlopen in -ldl" >&5
+echo $ECHO_N "checking for dlopen in -ldl... $ECHO_C" >&6
+if test "${ac_cv_lib_dl_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldl $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main ()
+{
+dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dl_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dl_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dl_dlopen" >&5
+echo "${ECHO_T}$ac_cv_lib_dl_dlopen" >&6
+if test $ac_cv_lib_dl_dlopen = yes; then
+ lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"
+else
+ echo "$as_me:$LINENO: checking for dlopen in -lsvld" >&5
+echo $ECHO_N "checking for dlopen in -lsvld... $ECHO_C" >&6
+if test "${ac_cv_lib_svld_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-lsvld $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main ()
+{
+dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_svld_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_svld_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_svld_dlopen" >&5
+echo "${ECHO_T}$ac_cv_lib_svld_dlopen" >&6
+if test $ac_cv_lib_svld_dlopen = yes; then
+ lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld"
+else
+ echo "$as_me:$LINENO: checking for dld_link in -ldld" >&5
+echo $ECHO_N "checking for dld_link in -ldld... $ECHO_C" >&6
+if test "${ac_cv_lib_dld_dld_link+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldld $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dld_link ();
+int
+main ()
+{
+dld_link ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_cxx_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dld_dld_link=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dld_dld_link=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dld_dld_link" >&5
+echo "${ECHO_T}$ac_cv_lib_dld_dld_link" >&6
+if test $ac_cv_lib_dld_dld_link = yes; then
+ lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-dld"
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+ ;;
+ esac
+
+ if test "x$lt_cv_dlopen" != xno; then
+ enable_dlopen=yes
+ else
+ enable_dlopen=no
+ fi
+
+ case $lt_cv_dlopen in
+ dlopen)
+ save_CPPFLAGS="$CPPFLAGS"
+ test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
+
+ save_LDFLAGS="$LDFLAGS"
+ eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"
+
+ save_LIBS="$LIBS"
+ LIBS="$lt_cv_dlopen_libs $LIBS"
+
+ echo "$as_me:$LINENO: checking whether a program can dlopen itself" >&5
+echo $ECHO_N "checking whether a program can dlopen itself... $ECHO_C" >&6
+if test "${lt_cv_dlopen_self+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test "$cross_compiling" = yes; then :
+ lt_cv_dlopen_self=cross
+else
+ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+ lt_status=$lt_dlunknown
+ cat > conftest.$ac_ext <<EOF
+#line 11736 "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+# define LT_DLGLOBAL DL_GLOBAL
+# else
+# define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW on the command line if we
+ find out it does not work on some platforms. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+# define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+# ifdef DL_LAZY
+# define LT_DLLAZY_OR_NOW DL_LAZY
+# else
+# ifdef RTLD_NOW
+# define LT_DLLAZY_OR_NOW RTLD_NOW
+# else
+# ifdef DL_NOW
+# define LT_DLLAZY_OR_NOW DL_NOW
+# else
+# define LT_DLLAZY_OR_NOW 0
+# endif
+# endif
+# endif
+# endif
+#endif
+
+#ifdef __cplusplus
+extern "C" void exit (int);
+#endif
+
+void fnord() { int i=42;}
+int main ()
+{
+ void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+ int status = $lt_dlunknown;
+
+ if (self)
+ {
+ if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+ else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+ /* dlclose (self); */
+ }
+
+ exit (status);
+}
+EOF
+ if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && test -s conftest${ac_exeext} 2>/dev/null; then
+ (./conftest; exit; ) 2>/dev/null
+ lt_status=$?
+ case x$lt_status in
+ x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;;
+ x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;;
+ x$lt_dlunknown|x*) lt_cv_dlopen_self=no ;;
+ esac
+ else :
+ # compilation failed
+ lt_cv_dlopen_self=no
+ fi
+fi
+rm -fr conftest*
+
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_dlopen_self" >&5
+echo "${ECHO_T}$lt_cv_dlopen_self" >&6
+
+ if test "x$lt_cv_dlopen_self" = xyes; then
+ LDFLAGS="$LDFLAGS $link_static_flag"
+ echo "$as_me:$LINENO: checking whether a statically linked program can dlopen itself" >&5
+echo $ECHO_N "checking whether a statically linked program can dlopen itself... $ECHO_C" >&6
+if test "${lt_cv_dlopen_self_static+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test "$cross_compiling" = yes; then :
+ lt_cv_dlopen_self_static=cross
+else
+ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+ lt_status=$lt_dlunknown
+ cat > conftest.$ac_ext <<EOF
+#line 11834 "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+# define LT_DLGLOBAL DL_GLOBAL
+# else
+# define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW on the command line if we
+ find out it does not work on some platforms. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+# define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+# ifdef DL_LAZY
+# define LT_DLLAZY_OR_NOW DL_LAZY
+# else
+# ifdef RTLD_NOW
+# define LT_DLLAZY_OR_NOW RTLD_NOW
+# else
+# ifdef DL_NOW
+# define LT_DLLAZY_OR_NOW DL_NOW
+# else
+# define LT_DLLAZY_OR_NOW 0
+# endif
+# endif
+# endif
+# endif
+#endif
+
+#ifdef __cplusplus
+extern "C" void exit (int);
+#endif
+
+void fnord() { int i=42;}
+int main ()
+{
+ void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+ int status = $lt_dlunknown;
+
+ if (self)
+ {
+ if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+ else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+ /* dlclose (self); */
+ }
+
+ exit (status);
+}
+EOF
+ if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && test -s conftest${ac_exeext} 2>/dev/null; then
+ (./conftest; exit; ) 2>/dev/null
+ lt_status=$?
+ case x$lt_status in
+ x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;;
+ x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;;
+ x$lt_dlunknown|x*) lt_cv_dlopen_self_static=no ;;
+ esac
+ else :
+ # compilation failed
+ lt_cv_dlopen_self_static=no
+ fi
+fi
+rm -fr conftest*
+
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_dlopen_self_static" >&5
+echo "${ECHO_T}$lt_cv_dlopen_self_static" >&6
+ fi
+
+ CPPFLAGS="$save_CPPFLAGS"
+ LDFLAGS="$save_LDFLAGS"
+ LIBS="$save_LIBS"
+ ;;
+ esac
+
+ case $lt_cv_dlopen_self in
+ yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;;
+ *) enable_dlopen_self=unknown ;;
+ esac
+
+ case $lt_cv_dlopen_self_static in
+ yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;;
+ *) enable_dlopen_self_static=unknown ;;
+ esac
+fi
+
+
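For context, the block above settles the dlopen fallback chain (shl_load in -ldld, plain dlopen, then -ldl, -lsvld, and dld_link in -ldld) and then test-runs a small program to see whether an executable can dlopen itself, probing both "fnord" and "_fnord" to detect underscore-prefixed symbol names. A minimal C sketch of the runtime pattern these probes are checking for might look like the following; it assumes HAVE_DLFCN_H is defined as above and that the program is linked against whatever lt_cv_dlopen_libs resolved to (e.g. -ldl), and the symbol name it looks up is only a placeholder:

#include <stdio.h>
#ifdef HAVE_DLFCN_H
#include <dlfcn.h>
#endif

int main (void)
{
#ifdef HAVE_DLFCN_H
  /* Get a handle on the running program itself, as the conftest above does. */
  void *self = dlopen (0, RTLD_GLOBAL|RTLD_LAZY);
  if (!self)
    {
      fprintf (stderr, "dlopen: %s\n", dlerror ());
      return 1;
    }
  /* Look up a symbol by name; "fnord" mirrors the probe above and is only
     a placeholder for a real symbol.  */
  if (dlsym (self, "fnord") || dlsym (self, "_fnord"))
    printf ("symbol found\n");
  dlclose (self);
#endif
  return 0;
}

On ELF systems, resolving symbols from the executable itself generally also needs the export-dynamic linker flag that the test above temporarily adds to LDFLAGS.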
+# The else clause should only fire when bootstrapping the
+# libtool distribution, otherwise you forgot to ship ltmain.sh
+# with your package, and you will get complaints that there are
+# no rules to generate ltmain.sh.
+if test -f "$ltmain"; then
+ # See if we are running on zsh, and set the options which allow our commands through
+ # without removal of \ escapes.
+ if test -n "${ZSH_VERSION+set}" ; then
+ setopt NO_GLOB_SUBST
+ fi
+ # Now quote all the things that may contain metacharacters while being
+ # careful not to overquote the AC_SUBSTed values. We take copies of the
+ # variables and quote the copies for generation of the libtool script.
+ for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC NM \
+ SED SHELL STRIP \
+ libname_spec library_names_spec soname_spec extract_expsyms_cmds \
+ old_striplib striplib file_magic_cmd finish_cmds finish_eval \
+ deplibs_check_method reload_flag reload_cmds need_locks \
+ lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ sys_lib_search_path_spec sys_lib_dlsearch_path_spec \
+ old_postinstall_cmds old_postuninstall_cmds \
+ compiler_CXX \
+ CC_CXX \
+ LD_CXX \
+ lt_prog_compiler_wl_CXX \
+ lt_prog_compiler_pic_CXX \
+ lt_prog_compiler_static_CXX \
+ lt_prog_compiler_no_builtin_flag_CXX \
+ export_dynamic_flag_spec_CXX \
+ thread_safe_flag_spec_CXX \
+ whole_archive_flag_spec_CXX \
+ enable_shared_with_static_runtimes_CXX \
+ old_archive_cmds_CXX \
+ old_archive_from_new_cmds_CXX \
+ predep_objects_CXX \
+ postdep_objects_CXX \
+ predeps_CXX \
+ postdeps_CXX \
+ compiler_lib_search_path_CXX \
+ archive_cmds_CXX \
+ archive_expsym_cmds_CXX \
+ postinstall_cmds_CXX \
+ postuninstall_cmds_CXX \
+ old_archive_from_expsyms_cmds_CXX \
+ allow_undefined_flag_CXX \
+ no_undefined_flag_CXX \
+ export_symbols_cmds_CXX \
+ hardcode_libdir_flag_spec_CXX \
+ hardcode_libdir_flag_spec_ld_CXX \
+ hardcode_libdir_separator_CXX \
+ hardcode_automatic_CXX \
+ module_cmds_CXX \
+ module_expsym_cmds_CXX \
+ lt_cv_prog_compiler_c_o_CXX \
+ exclude_expsyms_CXX \
+ include_expsyms_CXX; do
+
+ case $var in
+ old_archive_cmds_CXX | \
+ old_archive_from_new_cmds_CXX | \
+ archive_cmds_CXX | \
+ archive_expsym_cmds_CXX | \
+ module_cmds_CXX | \
+ module_expsym_cmds_CXX | \
+ old_archive_from_expsyms_cmds_CXX | \
+ export_symbols_cmds_CXX | \
+ extract_expsyms_cmds | reload_cmds | finish_cmds | \
+ postinstall_cmds | postuninstall_cmds | \
+ old_postinstall_cmds | old_postuninstall_cmds | \
+ sys_lib_search_path_spec | sys_lib_dlsearch_path_spec)
+ # Double-quote double-evaled strings.
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\""
+ ;;
+ *)
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\""
+ ;;
+ esac
+ done
+
+ case $lt_echo in
+ *'\$0 --fallback-echo"')
+ lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'`
+ ;;
+ esac
+
+cfgfile="$ofile"
+
+ cat <<__EOF__ >> "$cfgfile"
+# ### BEGIN LIBTOOL TAG CONFIG: $tagname
+
+# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`:
+
+# Shell to use when invoking shell scripts.
+SHELL=$lt_SHELL
+
+# Whether or not to build shared libraries.
+build_libtool_libs=$enable_shared
+
+# Whether or not to build static libraries.
+build_old_libs=$enable_static
+
+# Whether or not to add -lc for building shared libraries.
+build_libtool_need_lc=$archive_cmds_need_lc_CXX
+
+# Whether or not to disallow shared libs when runtime libs are static
+allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_CXX
+
+# Whether or not to optimize for fast installation.
+fast_install=$enable_fast_install
+
+# The host system.
+host_alias=$host_alias
+host=$host
+
+# An echo program that does not interpret backslashes.
+echo=$lt_echo
+
+# The archiver.
+AR=$lt_AR
+AR_FLAGS=$lt_AR_FLAGS
+
+# A C compiler.
+LTCC=$lt_LTCC
+
+# A language-specific compiler.
+CC=$lt_compiler_CXX
+
+# Is the compiler the GNU C compiler?
+with_gcc=$GCC_CXX
+
+# An ERE matcher.
+EGREP=$lt_EGREP
+
+# The linker used to build libraries.
+LD=$lt_LD_CXX
+
+# Whether we need hard or soft links.
+LN_S=$lt_LN_S
+
+# A BSD-compatible nm program.
+NM=$lt_NM
+
+# A symbol stripping program
+STRIP=$lt_STRIP
+
+# Used to examine libraries when file_magic_cmd begins with "file"
+MAGIC_CMD=$MAGIC_CMD
+
+# Used on cygwin: DLL creation program.
+DLLTOOL="$DLLTOOL"
+
+# Used on cygwin: object dumper.
+OBJDUMP="$OBJDUMP"
+
+# Used on cygwin: assembler.
+AS="$AS"
+
+# The name of the directory that contains temporary libtool files.
+objdir=$objdir
+
+# How to create reloadable object files.
+reload_flag=$lt_reload_flag
+reload_cmds=$lt_reload_cmds
+
+# How to pass a linker flag through the compiler.
+wl=$lt_lt_prog_compiler_wl_CXX
+
+# Object file suffix (normally "o").
+objext="$ac_objext"
+
+# Old archive suffix (normally "a").
+libext="$libext"
+
+# Shared library suffix (normally ".so").
+shrext='$shrext'
+
+# Executable file suffix (normally "").
+exeext="$exeext"
+
+# Additional compiler flags for building library objects.
+pic_flag=$lt_lt_prog_compiler_pic_CXX
+pic_mode=$pic_mode
+
+# What is the maximum length of a command?
+max_cmd_len=$lt_cv_sys_max_cmd_len
+
+# Does compiler simultaneously support -c and -o options?
+compiler_c_o=$lt_lt_cv_prog_compiler_c_o_CXX
+
+# Must we lock files when doing compilation?
+need_locks=$lt_need_locks
+
+# Do we need the lib prefix for modules?
+need_lib_prefix=$need_lib_prefix
+
+# Do we need a version for libraries?
+need_version=$need_version
+
+# Whether dlopen is supported.
+dlopen_support=$enable_dlopen
+
+# Whether dlopen of programs is supported.
+dlopen_self=$enable_dlopen_self
+
+# Whether dlopen of statically linked programs is supported.
+dlopen_self_static=$enable_dlopen_self_static
+
+# Compiler flag to prevent dynamic linking.
+link_static_flag=$lt_lt_prog_compiler_static_CXX
+
+# Compiler flag to turn off builtin functions.
+no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX
+
+# Compiler flag to allow reflexive dlopens.
+export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_CXX
+
+# Compiler flag to generate shared objects directly from archives.
+whole_archive_flag_spec=$lt_whole_archive_flag_spec_CXX
+
+# Compiler flag to generate thread-safe objects.
+thread_safe_flag_spec=$lt_thread_safe_flag_spec_CXX
+
+# Library versioning type.
+version_type=$version_type
+
+# Format of library name prefix.
+libname_spec=$lt_libname_spec
+
+# List of archive names. First name is the real one, the rest are links.
+# The last name is the one that the linker finds with -lNAME.
+library_names_spec=$lt_library_names_spec
+
+# The coded name of the library, if different from the real name.
+soname_spec=$lt_soname_spec
+
+# Commands used to build and install an old-style archive.
+RANLIB=$lt_RANLIB
+old_archive_cmds=$lt_old_archive_cmds_CXX
+old_postinstall_cmds=$lt_old_postinstall_cmds
+old_postuninstall_cmds=$lt_old_postuninstall_cmds
+
+# Create an old-style archive from a shared archive.
+old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_CXX
+
+# Create a temporary old-style archive to link instead of a shared archive.
+old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_CXX
+
+# Commands used to build and install a shared archive.
+archive_cmds=$lt_archive_cmds_CXX
+archive_expsym_cmds=$lt_archive_expsym_cmds_CXX
+postinstall_cmds=$lt_postinstall_cmds
+postuninstall_cmds=$lt_postuninstall_cmds
+
+# Commands used to build a loadable module (assumed same as above if empty)
+module_cmds=$lt_module_cmds_CXX
+module_expsym_cmds=$lt_module_expsym_cmds_CXX
+
+# Commands to strip libraries.
+old_striplib=$lt_old_striplib
+striplib=$lt_striplib
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predep_objects=$lt_predep_objects_CXX
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdep_objects=$lt_postdep_objects_CXX
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predeps=$lt_predeps_CXX
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdeps=$lt_postdeps_CXX
+
+# The library search path used internally by the compiler when linking
+# a shared library.
+compiler_lib_search_path=$lt_compiler_lib_search_path_CXX
+
+# Method to check whether dependent libraries are shared objects.
+deplibs_check_method=$lt_deplibs_check_method
+
+# Command to use when deplibs_check_method == file_magic.
+file_magic_cmd=$lt_file_magic_cmd
+
+# Flag that allows shared libraries with undefined symbols to be built.
+allow_undefined_flag=$lt_allow_undefined_flag_CXX
+
+# Flag that forces no undefined symbols.
+no_undefined_flag=$lt_no_undefined_flag_CXX
+
+# Commands used to finish a libtool library installation in a directory.
+finish_cmds=$lt_finish_cmds
+
+# Same as above, but a single script fragment to be evaled but not shown.
+finish_eval=$lt_finish_eval
+
+# Take the output of nm and produce a listing of raw symbols and C names.
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe
+
+# Transform the output of nm into a proper C declaration
+global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl
+
+# Transform the output of nm into a C name/address pair
+global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+
+# This is the shared library runtime path variable.
+runpath_var=$runpath_var
+
+# This is the shared library path variable.
+shlibpath_var=$shlibpath_var
+
+# Is shlibpath searched before the hard-coded library search path?
+shlibpath_overrides_runpath=$shlibpath_overrides_runpath
+
+# How to hardcode a shared library path into an executable.
+hardcode_action=$hardcode_action_CXX
+
+# Whether we should hardcode library paths into libraries.
+hardcode_into_libs=$hardcode_into_libs
+
+# Flag to hardcode \$libdir into a binary during linking.
+# This must work even if \$libdir does not exist.
+hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_CXX
+
+# If ld is used when linking, flag to hardcode \$libdir into
+# a binary during linking. This must work even if \$libdir does
+# not exist.
+hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld_CXX
+
+# Whether we need a single -rpath flag with a separated argument.
+hardcode_libdir_separator=$lt_hardcode_libdir_separator_CXX
+
+# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the
+# resulting binary.
+hardcode_direct=$hardcode_direct_CXX
+
+# Set to yes if using the -LDIR flag during linking hardcodes DIR into the
+# resulting binary.
+hardcode_minus_L=$hardcode_minus_L_CXX
+
+# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into
+# the resulting binary.
+hardcode_shlibpath_var=$hardcode_shlibpath_var_CXX
+
+# Set to yes if building a shared library automatically hardcodes DIR into the library
+# and all subsequent libraries and executables linked against it.
+hardcode_automatic=$hardcode_automatic_CXX
+
+# Variables whose values should be saved in libtool wrapper scripts and
+# restored at relink time.
+variables_saved_for_relink="$variables_saved_for_relink"
+
+# Whether libtool must link a program against all its dependency libraries.
+link_all_deplibs=$link_all_deplibs_CXX
+
+# Compile-time system search path for libraries
+sys_lib_search_path_spec=$lt_sys_lib_search_path_spec
+
+# Run-time system search path for libraries
+sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec
+
+# Fix the shell variable \$srcfile for the compiler.
+fix_srcfile_path="$fix_srcfile_path_CXX"
+
+# Set to yes if exported symbols are required.
+always_export_symbols=$always_export_symbols_CXX
+
+# The commands to list exported symbols.
+export_symbols_cmds=$lt_export_symbols_cmds_CXX
+
+# The commands to extract the exported symbol list from a shared archive.
+extract_expsyms_cmds=$lt_extract_expsyms_cmds
+
+# Symbols that should not be listed in the preloaded symbols.
+exclude_expsyms=$lt_exclude_expsyms_CXX
+
+# Symbols that must always be exported.
+include_expsyms=$lt_include_expsyms_CXX
+
+# ### END LIBTOOL TAG CONFIG: $tagname
+
+__EOF__
+
+
+else
+ # If there is no Makefile yet, we rely on a make rule to execute
+ # `config.status --recheck' to rerun these tests and create the
+ # libtool script then.
+ ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'`
+ if test -f "$ltmain_in"; then
+ test -f Makefile && make "$ltmain"
+ fi
+fi
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+CC=$lt_save_CC
+LDCXX=$LD
+LD=$lt_save_LD
+GCC=$lt_save_GCC
+with_gnu_ldcxx=$with_gnu_ld
+with_gnu_ld=$lt_save_with_gnu_ld
+lt_cv_path_LDCXX=$lt_cv_path_LD
+lt_cv_path_LD=$lt_save_path_LD
+lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld
+lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld
+
+ else
+ tagname=""
+ fi
+ ;;
+
+ F77)
+ if test -n "$F77" && test "X$F77" != "Xno"; then
+
+ac_ext=f
+ac_compile='$F77 -c $FFLAGS conftest.$ac_ext >&5'
+ac_link='$F77 -o conftest$ac_exeext $FFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_f77_compiler_gnu
+
+
+archive_cmds_need_lc_F77=no
+allow_undefined_flag_F77=
+always_export_symbols_F77=no
+archive_expsym_cmds_F77=
+export_dynamic_flag_spec_F77=
+hardcode_direct_F77=no
+hardcode_libdir_flag_spec_F77=
+hardcode_libdir_flag_spec_ld_F77=
+hardcode_libdir_separator_F77=
+hardcode_minus_L_F77=no
+hardcode_automatic_F77=no
+module_cmds_F77=
+module_expsym_cmds_F77=
+link_all_deplibs_F77=unknown
+old_archive_cmds_F77=$old_archive_cmds
+no_undefined_flag_F77=
+whole_archive_flag_spec_F77=
+enable_shared_with_static_runtimes_F77=no
+
+# Source file extension for f77 test sources.
+ac_ext=f
+
+# Object file extension for compiled f77 test sources.
+objext=o
+objext_F77=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code=" subroutine t\n return\n end\n"
+
+# Code to be used in simple link tests
+lt_simple_link_test_code=" program t\n end\n"
+
+# ltmain only uses $CC for tagged configurations so make sure $CC is set.
+
+# If no C compiler was specified, use CC.
+LTCC=${LTCC-"$CC"}
+
+# Allow CC to be a program name with arguments.
+compiler=$CC
+
+
+# Allow CC to be a program name with arguments.
+lt_save_CC="$CC"
+CC=${F77-"f77"}
+compiler=$CC
+compiler_F77=$CC
+cc_basename=`$echo X"$compiler" | $Xsed -e 's%^.*/%%'`
+
+echo "$as_me:$LINENO: checking if libtool supports shared libraries" >&5
+echo $ECHO_N "checking if libtool supports shared libraries... $ECHO_C" >&6
+echo "$as_me:$LINENO: result: $can_build_shared" >&5
+echo "${ECHO_T}$can_build_shared" >&6
+
+echo "$as_me:$LINENO: checking whether to build shared libraries" >&5
+echo $ECHO_N "checking whether to build shared libraries... $ECHO_C" >&6
+test "$can_build_shared" = "no" && enable_shared=no
+
+# On AIX, shared libraries and static libraries use the same namespace, and
+# are all built from PIC.
+case "$host_os" in
+aix3*)
+ test "$enable_shared" = yes && enable_static=no
+ if test -n "$RANLIB"; then
+ archive_cmds="$archive_cmds~\$RANLIB \$lib"
+ postinstall_cmds='$RANLIB $lib'
+ fi
+ ;;
+aix4*)
+ test "$enable_shared" = yes && enable_static=no
+ ;;
+esac
+echo "$as_me:$LINENO: result: $enable_shared" >&5
+echo "${ECHO_T}$enable_shared" >&6
+
+echo "$as_me:$LINENO: checking whether to build static libraries" >&5
+echo $ECHO_N "checking whether to build static libraries... $ECHO_C" >&6
+# Make sure either enable_shared or enable_static is yes.
+test "$enable_shared" = yes || enable_static=yes
+echo "$as_me:$LINENO: result: $enable_static" >&5
+echo "${ECHO_T}$enable_static" >&6
+
+test "$ld_shlibs_F77" = no && can_build_shared=no
+
+GCC_F77="$G77"
+LD_F77="$LD"
+
+lt_prog_compiler_wl_F77=
+lt_prog_compiler_pic_F77=
+lt_prog_compiler_static_F77=
+
+echo "$as_me:$LINENO: checking for $compiler option to produce PIC" >&5
+echo $ECHO_N "checking for $compiler option to produce PIC... $ECHO_C" >&6
+
+ if test "$GCC" = yes; then
+ lt_prog_compiler_wl_F77='-Wl,'
+ lt_prog_compiler_static_F77='-static'
+
+ case $host_os in
+ aix*)
+ # All AIX code is PIC.
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static_F77='-Bstatic'
+ fi
+ ;;
+
+ amigaos*)
+ # FIXME: we need at least 68020 code to build shared libraries, but
+ # adding the `-m68020' flag to GCC prevents building anything better,
+ # like `-m68040'.
+ lt_prog_compiler_pic_F77='-m68020 -resident32 -malways-restore-a4'
+ ;;
+
+ beos* | cygwin* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
+ # PIC is the default for these OSes.
+ ;;
+
+ mingw* | pw32* | os2*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ lt_prog_compiler_pic_F77='-DDLL_EXPORT'
+ ;;
+
+ darwin* | rhapsody*)
+ # PIC is the default on this platform
+ # Common symbols not allowed in MH_DYLIB files
+ lt_prog_compiler_pic_F77='-fno-common'
+ ;;
+
+ msdosdjgpp*)
+ # Just because we use GCC doesn't mean we suddenly get shared libraries
+ # on systems that don't support them.
+ lt_prog_compiler_can_build_shared_F77=no
+ enable_shared=no
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ lt_prog_compiler_pic_F77=-Kconform_pic
+ fi
+ ;;
+
+ hpux*)
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ lt_prog_compiler_pic_F77='-fPIC'
+ ;;
+ esac
+ ;;
+
+ *)
+ lt_prog_compiler_pic_F77='-fPIC'
+ ;;
+ esac
+ else
+ # PORTME Check for flag to pass linker flags through the system compiler.
+ case $host_os in
+ aix*)
+ lt_prog_compiler_wl_F77='-Wl,'
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static_F77='-Bstatic'
+ else
+ lt_prog_compiler_static_F77='-bnso -bI:/lib/syscalls.exp'
+ fi
+ ;;
+
+ mingw* | pw32* | os2*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ lt_prog_compiler_pic_F77='-DDLL_EXPORT'
+ ;;
+
+ hpux9* | hpux10* | hpux11*)
+ lt_prog_compiler_wl_F77='-Wl,'
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ lt_prog_compiler_pic_F77='+Z'
+ ;;
+ esac
+ # Is there a better lt_prog_compiler_static that works with the bundled CC?
+ lt_prog_compiler_static_F77='${wl}-a ${wl}archive'
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ lt_prog_compiler_wl_F77='-Wl,'
+ # PIC (with -KPIC) is the default.
+ lt_prog_compiler_static_F77='-non_shared'
+ ;;
+
+ newsos6)
+ lt_prog_compiler_pic_F77='-KPIC'
+ lt_prog_compiler_static_F77='-Bstatic'
+ ;;
+
+ linux*)
+ case $CC in
+ icc* | ecc*)
+ lt_prog_compiler_wl_F77='-Wl,'
+ lt_prog_compiler_pic_F77='-KPIC'
+ lt_prog_compiler_static_F77='-static'
+ ;;
+ ccc*)
+ lt_prog_compiler_wl_F77='-Wl,'
+ # All Alpha code is PIC.
+ lt_prog_compiler_static_F77='-non_shared'
+ ;;
+ esac
+ ;;
+
+ osf3* | osf4* | osf5*)
+ lt_prog_compiler_wl_F77='-Wl,'
+ # All OSF/1 code is PIC.
+ lt_prog_compiler_static_F77='-non_shared'
+ ;;
+
+ sco3.2v5*)
+ lt_prog_compiler_pic_F77='-Kpic'
+ lt_prog_compiler_static_F77='-dn'
+ ;;
+
+ solaris*)
+ lt_prog_compiler_wl_F77='-Wl,'
+ lt_prog_compiler_pic_F77='-KPIC'
+ lt_prog_compiler_static_F77='-Bstatic'
+ ;;
+
+ sunos4*)
+ lt_prog_compiler_wl_F77='-Qoption ld '
+ lt_prog_compiler_pic_F77='-PIC'
+ lt_prog_compiler_static_F77='-Bstatic'
+ ;;
+
+ sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ lt_prog_compiler_wl_F77='-Wl,'
+ lt_prog_compiler_pic_F77='-KPIC'
+ lt_prog_compiler_static_F77='-Bstatic'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec ;then
+ lt_prog_compiler_pic_F77='-Kconform_pic'
+ lt_prog_compiler_static_F77='-Bstatic'
+ fi
+ ;;
+
+ uts4*)
+ lt_prog_compiler_pic_F77='-pic'
+ lt_prog_compiler_static_F77='-Bstatic'
+ ;;
+
+ *)
+ lt_prog_compiler_can_build_shared_F77=no
+ ;;
+ esac
+ fi
+
+echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_F77" >&5
+echo "${ECHO_T}$lt_prog_compiler_pic_F77" >&6
+
+#
+# Check to make sure the PIC flag actually works.
+#
+if test -n "$lt_prog_compiler_pic_F77"; then
+
+echo "$as_me:$LINENO: checking if $compiler PIC flag $lt_prog_compiler_pic_F77 works" >&5
+echo $ECHO_N "checking if $compiler PIC flag $lt_prog_compiler_pic_F77 works... $ECHO_C" >&6
+if test "${lt_prog_compiler_pic_works_F77+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_prog_compiler_pic_works_F77=no
+ ac_outfile=conftest.$ac_objext
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+ lt_compiler_flag="$lt_prog_compiler_pic_F77"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ # The option is referenced via a variable to avoid confusing sed.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:12661: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>conftest.err)
+ ac_status=$?
+ cat conftest.err >&5
+ echo "$as_me:12665: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s "$ac_outfile"; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s conftest.err; then
+ lt_prog_compiler_pic_works_F77=yes
+ fi
+ fi
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_works_F77" >&5
+echo "${ECHO_T}$lt_prog_compiler_pic_works_F77" >&6
+
+if test x"$lt_prog_compiler_pic_works_F77" = xyes; then
+ case $lt_prog_compiler_pic_F77 in
+ "" | " "*) ;;
+ *) lt_prog_compiler_pic_F77=" $lt_prog_compiler_pic_F77" ;;
+ esac
+else
+ lt_prog_compiler_pic_F77=
+ lt_prog_compiler_can_build_shared_F77=no
+fi
+
+fi
+case "$host_os" in
+ # For platforms which do not support PIC, -DPIC is meaningless:
+ *djgpp*)
+ lt_prog_compiler_pic_F77=
+ ;;
+ *)
+ lt_prog_compiler_pic_F77="$lt_prog_compiler_pic_F77"
+ ;;
+esac
+
+echo "$as_me:$LINENO: checking if $compiler supports -c -o file.$ac_objext" >&5
+echo $ECHO_N "checking if $compiler supports -c -o file.$ac_objext... $ECHO_C" >&6
+if test "${lt_cv_prog_compiler_c_o_F77+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_cv_prog_compiler_c_o_F77=no
+ $rm -r conftest 2>/dev/null
+ mkdir conftest
+ cd conftest
+ mkdir out
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ lt_compiler_flag="-o out/conftest2.$ac_objext"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:12721: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>out/conftest.err)
+ ac_status=$?
+ cat out/conftest.err >&5
+ echo "$as_me:12725: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s out/conftest2.$ac_objext
+ then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s out/conftest.err; then
+ lt_cv_prog_compiler_c_o_F77=yes
+ fi
+ fi
+ chmod u+w .
+ $rm conftest*
+ # SGI C++ compiler will create directory out/ii_files/ for
+ # template instantiation
+ test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files
+ $rm out/* && rmdir out
+ cd ..
+ rmdir conftest
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_c_o_F77" >&5
+echo "${ECHO_T}$lt_cv_prog_compiler_c_o_F77" >&6
+
+
+hard_links="nottested"
+if test "$lt_cv_prog_compiler_c_o_F77" = no && test "$need_locks" != no; then
+ # do not overwrite the value of need_locks provided by the user
+ echo "$as_me:$LINENO: checking if we can lock with hard links" >&5
+echo $ECHO_N "checking if we can lock with hard links... $ECHO_C" >&6
+ hard_links=yes
+ $rm conftest*
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ touch conftest.a
+ ln conftest.a conftest.b 2>&5 || hard_links=no
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ echo "$as_me:$LINENO: result: $hard_links" >&5
+echo "${ECHO_T}$hard_links" >&6
+ if test "$hard_links" = no; then
+ { echo "$as_me:$LINENO: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5
+echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;}
+ need_locks=warn
+ fi
+else
+ need_locks=no
+fi
+
+echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5
+echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6
+
+ runpath_var=
+ allow_undefined_flag_F77=
+ enable_shared_with_static_runtimes_F77=no
+ archive_cmds_F77=
+ archive_expsym_cmds_F77=
+ old_archive_From_new_cmds_F77=
+ old_archive_from_expsyms_cmds_F77=
+ export_dynamic_flag_spec_F77=
+ whole_archive_flag_spec_F77=
+ thread_safe_flag_spec_F77=
+ hardcode_libdir_flag_spec_F77=
+ hardcode_libdir_flag_spec_ld_F77=
+ hardcode_libdir_separator_F77=
+ hardcode_direct_F77=no
+ hardcode_minus_L_F77=no
+ hardcode_shlibpath_var_F77=unsupported
+ link_all_deplibs_F77=unknown
+ hardcode_automatic_F77=no
+ module_cmds_F77=
+ module_expsym_cmds_F77=
+ always_export_symbols_F77=no
+ export_symbols_cmds_F77='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+ # include_expsyms should be a list of space-separated symbols to be *always*
+ # included in the symbol list
+ include_expsyms_F77=
+ # exclude_expsyms can be an extended regexp of symbols to exclude
+ # it will be wrapped by ` (' and `)$', so one must not match beginning or
+ # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc',
+ # as well as any symbol that contains `d'.
+ exclude_expsyms_F77="_GLOBAL_OFFSET_TABLE_"
+ # Although _GLOBAL_OFFSET_TABLE_ is a valid C symbol name, most a.out
+ # platforms (ab)use it in PIC code, but their linkers get confused if
+ # the symbol is explicitly referenced. Since portable code cannot
+ # rely on this symbol name, it's probably fine to never include it in
+ # preloaded symbol tables.
+ extract_expsyms_cmds=
+
+ case $host_os in
+ cygwin* | mingw* | pw32*)
+ # FIXME: the MSVC++ port hasn't been tested in a loooong time
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++.
+ if test "$GCC" != yes; then
+ with_gnu_ld=no
+ fi
+ ;;
+ openbsd*)
+ with_gnu_ld=no
+ ;;
+ esac
+
+ ld_shlibs_F77=yes
+ if test "$with_gnu_ld" = yes; then
+ # If archive_cmds runs LD, not CC, wlarc should be empty
+ wlarc='${wl}'
+
+ # See if GNU ld supports shared libraries.
+ case $host_os in
+ aix3* | aix4* | aix5*)
+ # On AIX/PPC, the GNU linker is very broken
+ if test "$host_cpu" != ia64; then
+ ld_shlibs_F77=no
+ cat <<EOF 1>&2
+
+*** Warning: the GNU linker, at least up to release 2.9.1, is reported
+*** to be unable to reliably create shared libraries on AIX.
+*** Therefore, libtool is disabling shared libraries support. If you
+*** really care for shared libraries, you may want to modify your PATH
+*** so that a non-GNU linker is found, and then restart.
+
+EOF
+ fi
+ ;;
+
+ amigaos*)
+ archive_cmds_F77='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ hardcode_libdir_flag_spec_F77='-L$libdir'
+ hardcode_minus_L_F77=yes
+
+ # Samuel A. Falvo II <kc5tja at dolphin.openprojects.net> reports
+ # that the semantics of dynamic libraries on AmigaOS, at least up
+ # to version 4, is to share data among multiple programs linked
+ # with the same dynamic library. Since this doesn't match the
+ # behavior of shared libraries on other platforms, we can't use
+ # them.
+ ld_shlibs_F77=no
+ ;;
+
+ beos*)
+ if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ allow_undefined_flag_F77=unsupported
+ # Joseph Beckenbach <jrb3 at best.com> says some releases of gcc
+ # support --undefined. This deserves some investigation. FIXME
+ archive_cmds_F77='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ else
+ ld_shlibs_F77=no
+ fi
+ ;;
+
+ cygwin* | mingw* | pw32*)
+ # _LT_AC_TAGVAR(hardcode_libdir_flag_spec, F77) is actually meaningless,
+ # as there is no search path for DLLs.
+ hardcode_libdir_flag_spec_F77='-L$libdir'
+ allow_undefined_flag_F77=unsupported
+ always_export_symbols_F77=no
+ enable_shared_with_static_runtimes_F77=yes
+ export_symbols_cmds_F77='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGS] /s/.* \([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW] /s/.* //'\'' | sort | uniq > $export_symbols'
+
+ if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then
+ archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ # If the export-symbols file already is a .def file (1st line
+ # is EXPORTS), use it as is; otherwise, prepend...
+ archive_expsym_cmds_F77='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+ cp $export_symbols $output_objdir/$soname.def;
+ else
+ echo EXPORTS > $output_objdir/$soname.def;
+ cat $export_symbols >> $output_objdir/$soname.def;
+ fi~
+ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ else
+ ld_shlibs_F77=no
+ fi
+ ;;
+
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ archive_cmds_F77='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ wlarc=
+ else
+ archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ fi
+ ;;
+
+ solaris* | sysv5*)
+ if $LD -v 2>&1 | grep 'BFD 2\.8' > /dev/null; then
+ ld_shlibs_F77=no
+ cat <<EOF 1>&2
+
+*** Warning: The releases 2.8.* of the GNU linker cannot reliably
+*** create shared libraries on Solaris systems. Therefore, libtool
+*** is disabling shared libraries support. We urge you to upgrade GNU
+*** binutils to release 2.9.1 or newer. Another option is to modify
+*** your PATH or compiler configuration so that the native linker is
+*** used, and then restart.
+
+EOF
+ elif $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ ld_shlibs_F77=no
+ fi
+ ;;
+
+ sunos4*)
+ archive_cmds_F77='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ wlarc=
+ hardcode_direct_F77=yes
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ linux*)
+ if $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then
+ tmp_archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_cmds_F77="$tmp_archive_cmds"
+ supports_anon_versioning=no
+ case `$LD -v 2>/dev/null` in
+ *\ 01.* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11
+ *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ...
+ *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ...
+ *\ 2.11.*) ;; # other 2.11 versions
+ *) supports_anon_versioning=yes ;;
+ esac
+ if test $supports_anon_versioning = yes; then
+ archive_expsym_cmds_F77='$echo "{ global:" > $output_objdir/$libname.ver~
+cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+$echo "local: *; };" >> $output_objdir/$libname.ver~
+ $CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib'
+ else
+ archive_expsym_cmds_F77="$tmp_archive_cmds"
+ fi
+ else
+ ld_shlibs_F77=no
+ fi
+ ;;
+
+ *)
+ if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ ld_shlibs_F77=no
+ fi
+ ;;
+ esac
+
+ if test "$ld_shlibs_F77" = yes; then
+ runpath_var=LD_RUN_PATH
+ hardcode_libdir_flag_spec_F77='${wl}--rpath ${wl}$libdir'
+ export_dynamic_flag_spec_F77='${wl}--export-dynamic'
+ # ancient GNU ld didn't support --whole-archive et al.
+ if $LD --help 2>&1 | grep 'no-whole-archive' > /dev/null; then
+ whole_archive_flag_spec_F77="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
+ else
+ whole_archive_flag_spec_F77=
+ fi
+ fi
+ else
+ # PORTME fill in a description of your system's linker (not GNU ld)
+ case $host_os in
+ aix3*)
+ allow_undefined_flag_F77=unsupported
+ always_export_symbols_F77=yes
+ archive_expsym_cmds_F77='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname'
+ # Note: this linker hardcodes the directories in LIBPATH if there
+ # are no directories specified by -L.
+ hardcode_minus_L_F77=yes
+ if test "$GCC" = yes && test -z "$link_static_flag"; then
+ # Neither direct hardcoding nor static linking is supported with a
+ # broken collect2.
+ hardcode_direct_F77=unsupported
+ fi
+ ;;
+
+ aix4* | aix5*)
+ if test "$host_cpu" = ia64; then
+ # On IA64, the linker does run time linking by default, so we don't
+ # have to do anything special.
+ aix_use_runtimelinking=no
+ exp_sym_flag='-Bexport'
+ no_entry_flag=""
+ else
+ # If we're using GNU nm, then we don't want the "-C" option.
+ # -C means demangle to AIX nm, but to GNU nm it means don't demangle
+ if $NM -V 2>&1 | grep 'GNU' > /dev/null; then
+ export_symbols_cmds_F77='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols'
+ else
+ export_symbols_cmds_F77='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols'
+ fi
+ aix_use_runtimelinking=no
+
+ # Test if we are trying to use run time linking or normal
+ # AIX style linking. If -brtl is somewhere in LDFLAGS, we
+ # need to do runtime linking.
+ case $host_os in aix4.[23]|aix4.[23].*|aix5*)
+ for ld_flag in $LDFLAGS; do
+ if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then
+ aix_use_runtimelinking=yes
+ break
+ fi
+ done
+ esac
+
+ exp_sym_flag='-bexport'
+ no_entry_flag='-bnoentry'
+ fi
+
+ # When large executables or shared objects are built, AIX ld can
+ # have problems creating the table of contents. If linking a library
+ # or program results in "error TOC overflow" add -mminimal-toc to
+ # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
+ # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
+
+ archive_cmds_F77=''
+ hardcode_direct_F77=yes
+ hardcode_libdir_separator_F77=':'
+ link_all_deplibs_F77=yes
+
+ if test "$GCC" = yes; then
+ case $host_os in aix4.[012]|aix4.[012].*)
+ # We only want to do this on AIX 4.2 and lower, the check
+ # below for broken collect2 doesn't work under 4.3+
+ collect2name=`${CC} -print-prog-name=collect2`
+ if test -f "$collect2name" && \
+ strings "$collect2name" | grep resolve_lib_name >/dev/null
+ then
+ # We have reworked collect2
+ hardcode_direct_F77=yes
+ else
+ # We have old collect2
+ hardcode_direct_F77=unsupported
+ # It fails to find uninstalled libraries when the uninstalled
+ # path is not listed in the libpath. Setting hardcode_minus_L
+ # to unsupported forces relinking
+ hardcode_minus_L_F77=yes
+ hardcode_libdir_flag_spec_F77='-L$libdir'
+ hardcode_libdir_separator_F77=
+ fi
+ esac
+ shared_flag='-shared'
+ else
+ # not using gcc
+ if test "$host_cpu" = ia64; then
+ # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
+ # chokes on -Wl,-G. The following line is correct:
+ shared_flag='-G'
+ else
+ if test "$aix_use_runtimelinking" = yes; then
+ shared_flag='${wl}-G'
+ else
+ shared_flag='${wl}-bM:SRE'
+ fi
+ fi
+ fi
+
+ # It seems that -bexpall does not export symbols beginning with
+ # underscore (_), so it is better to generate a list of symbols to export.
+ always_export_symbols_F77=yes
+ if test "$aix_use_runtimelinking" = yes; then
+ # Warning - without using the other runtime loading flags (-brtl),
+ # -berok will link without error, but may produce a broken library.
+ allow_undefined_flag_F77='-berok'
+ # Determine the default libpath from the value encoded in an empty executable.
+ cat >conftest.$ac_ext <<_ACEOF
+ program main
+
+ end
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_f77_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+
+aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`
+# Check for a 64-bit object if we didn't find anything.
+if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`; fi
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+
+ hardcode_libdir_flag_spec_F77='${wl}-blibpath:$libdir:'"$aix_libpath"
+ archive_expsym_cmds_F77="\$CC"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+ else
+ if test "$host_cpu" = ia64; then
+ hardcode_libdir_flag_spec_F77='${wl}-R $libdir:/usr/lib:/lib'
+ allow_undefined_flag_F77="-z nodefs"
+ archive_expsym_cmds_F77="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols"
+ else
+ # Determine the default libpath from the value encoded in an empty executable.
+ cat >conftest.$ac_ext <<_ACEOF
+ program main
+
+ end
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_f77_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+
+aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`
+# Check for a 64-bit object if we didn't find anything.
+if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`; fi
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+
+ hardcode_libdir_flag_spec_F77='${wl}-blibpath:$libdir:'"$aix_libpath"
+ # Warning - without using the other run time loading flags,
+ # -berok will link without error, but may produce a broken library.
+ no_undefined_flag_F77=' ${wl}-bernotok'
+ allow_undefined_flag_F77=' ${wl}-berok'
+ # -bexpall does not export symbols beginning with underscore (_)
+ always_export_symbols_F77=yes
+ # Exported symbols can be pulled into shared objects from archives
+ whole_archive_flag_spec_F77=' '
+ archive_cmds_need_lc_F77=yes
+ # This is similar to how AIX traditionally builds its shared libraries.
+ archive_expsym_cmds_F77="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}-bE:$export_symbols ${wl}-bnoentry${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname'
+ fi
+ fi
+ ;;
+
+ amigaos*)
+ archive_cmds_F77='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ hardcode_libdir_flag_spec_F77='-L$libdir'
+ hardcode_minus_L_F77=yes
+ # see comment about different semantics on the GNU ld section
+ ld_shlibs_F77=no
+ ;;
+
+ bsdi4*)
+ export_dynamic_flag_spec_F77=-rdynamic
+ ;;
+
+ cygwin* | mingw* | pw32*)
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++.
+ # hardcode_libdir_flag_spec is actually meaningless, as there is
+ # no search path for DLLs.
+ hardcode_libdir_flag_spec_F77=' '
+ allow_undefined_flag_F77=unsupported
+ # Tell ltmain to make .lib files, not .a files.
+ libext=lib
+ # Tell ltmain to make .dll files, not .so files.
+ shrext=".dll"
+ # FIXME: Setting linknames here is a bad hack.
+ archive_cmds_F77='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | $SED -e '\''s/ -lc$//'\''` -link -dll~linknames='
+ # The linker will automatically build a .lib file if we build a DLL.
+ old_archive_From_new_cmds_F77='true'
+ # FIXME: Should let the user specify the lib program.
+ old_archive_cmds_F77='lib /OUT:$oldlib$oldobjs$old_deplibs'
+ fix_srcfile_path='`cygpath -w "$srcfile"`'
+ enable_shared_with_static_runtimes_F77=yes
+ ;;
+
+ darwin* | rhapsody*)
+ if test "$GXX" = yes ; then
+ archive_cmds_need_lc_F77=no
+ case "$host_os" in
+ rhapsody* | darwin1.[012])
+ allow_undefined_flag_F77='-undefined suppress'
+ ;;
+ *) # Darwin 1.3 on
+ if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then
+ allow_undefined_flag_F77='-flat_namespace -undefined suppress'
+ else
+ case ${MACOSX_DEPLOYMENT_TARGET} in
+ 10.[012])
+ allow_undefined_flag_F77='-flat_namespace -undefined suppress'
+ ;;
+ 10.*)
+ allow_undefined_flag_F77='-undefined dynamic_lookup'
+ ;;
+ esac
+ fi
+ ;;
+ esac
+ lt_int_apple_cc_single_mod=no
+ output_verbose_link_cmd='echo'
+ if $CC -dumpspecs 2>&1 | grep 'single_module' >/dev/null ; then
+ lt_int_apple_cc_single_mod=yes
+ fi
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ archive_cmds_F77='$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ else
+ archive_cmds_F77='$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ fi
+ module_cmds_F77='$CC ${wl}-bind_at_load $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags'
+ # Don't fix this by using the ld -exported_symbols_list flag, it doesn't exist in older darwin ld's
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ archive_expsym_cmds_F77='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ else
+ archive_expsym_cmds_F77='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ fi
+ module_expsym_cmds_F77='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ hardcode_direct_F77=no
+ hardcode_automatic_F77=yes
+ hardcode_shlibpath_var_F77=unsupported
+ whole_archive_flag_spec_F77='-all_load $convenience'
+ link_all_deplibs_F77=yes
+ else
+ ld_shlibs_F77=no
+ fi
+ ;;
+
+ dgux*)
+ archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec_F77='-L$libdir'
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ freebsd1*)
+ ld_shlibs_F77=no
+ ;;
+
+ # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor
+ # support. Future versions do this automatically, but an explicit c++rt0.o
+ # does not break anything, and helps significantly (at the cost of a little
+ # extra space).
+ freebsd2.2*)
+ archive_cmds_F77='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o'
+ hardcode_libdir_flag_spec_F77='-R$libdir'
+ hardcode_direct_F77=yes
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ # Unfortunately, older versions of FreeBSD 2 do not have this feature.
+ freebsd2*)
+ archive_cmds_F77='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_F77=yes
+ hardcode_minus_L_F77=yes
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+ freebsd* | kfreebsd*-gnu)
+ archive_cmds_F77='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec_F77='-R$libdir'
+ hardcode_direct_F77=yes
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ hpux9*)
+ if test "$GCC" = yes; then
+ archive_cmds_F77='$rm $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ else
+ archive_cmds_F77='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ fi
+ hardcode_libdir_flag_spec_F77='${wl}+b ${wl}$libdir'
+ hardcode_libdir_separator_F77=:
+ hardcode_direct_F77=yes
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L_F77=yes
+ export_dynamic_flag_spec_F77='${wl}-E'
+ ;;
+
+ hpux10* | hpux11*)
+ if test "$GCC" = yes -a "$with_gnu_ld" = no; then
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ archive_cmds_F77='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ *)
+ archive_cmds_F77='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ esac
+ else
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ archive_cmds_F77='$LD -b +h $soname -o $lib $libobjs $deplibs $linker_flags'
+ ;;
+ *)
+ archive_cmds_F77='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+ ;;
+ esac
+ fi
+ if test "$with_gnu_ld" = no; then
+ case "$host_cpu" in
+ hppa*64*)
+ hardcode_libdir_flag_spec_F77='${wl}+b ${wl}$libdir'
+ hardcode_libdir_flag_spec_ld_F77='+b $libdir'
+ hardcode_libdir_separator_F77=:
+ hardcode_direct_F77=no
+ hardcode_shlibpath_var_F77=no
+ ;;
+ ia64*)
+ hardcode_libdir_flag_spec_F77='-L$libdir'
+ hardcode_direct_F77=no
+ hardcode_shlibpath_var_F77=no
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L_F77=yes
+ ;;
+ *)
+ hardcode_libdir_flag_spec_F77='${wl}+b ${wl}$libdir'
+ hardcode_libdir_separator_F77=:
+ hardcode_direct_F77=yes
+ export_dynamic_flag_spec_F77='${wl}-E'
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L_F77=yes
+ ;;
+ esac
+ fi
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ if test "$GCC" = yes; then
+ archive_cmds_F77='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ else
+ archive_cmds_F77='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ hardcode_libdir_flag_spec_ld_F77='-rpath $libdir'
+ fi
+ hardcode_libdir_flag_spec_F77='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_F77=:
+ link_all_deplibs_F77=yes
+ ;;
+
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ archive_cmds_F77='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out
+ else
+ archive_cmds_F77='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF
+ fi
+ hardcode_libdir_flag_spec_F77='-R$libdir'
+ hardcode_direct_F77=yes
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ newsos6)
+ archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_F77=yes
+ hardcode_libdir_flag_spec_F77='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_F77=:
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ openbsd*)
+ hardcode_direct_F77=yes
+ hardcode_shlibpath_var_F77=no
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ archive_cmds_F77='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec_F77='${wl}-rpath,$libdir'
+ export_dynamic_flag_spec_F77='${wl}-E'
+ else
+ case $host_os in
+ openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*)
+ archive_cmds_F77='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec_F77='-R$libdir'
+ ;;
+ *)
+ archive_cmds_F77='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec_F77='${wl}-rpath,$libdir'
+ ;;
+ esac
+ fi
+ ;;
+
+ os2*)
+ hardcode_libdir_flag_spec_F77='-L$libdir'
+ hardcode_minus_L_F77=yes
+ allow_undefined_flag_F77=unsupported
+ archive_cmds_F77='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def'
+ old_archive_from_new_cmds_F77='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def'
+ ;;
+
+ osf3*)
+ if test "$GCC" = yes; then
+ allow_undefined_flag_F77=' ${wl}-expect_unresolved ${wl}\*'
+ archive_cmds_F77='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ else
+ allow_undefined_flag_F77=' -expect_unresolved \*'
+ archive_cmds_F77='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ fi
+ hardcode_libdir_flag_spec_F77='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_F77=:
+ ;;
+
+ osf4* | osf5*) # as osf3* with the addition of -msym flag
+ if test "$GCC" = yes; then
+ allow_undefined_flag_F77=' ${wl}-expect_unresolved ${wl}\*'
+ archive_cmds_F77='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ hardcode_libdir_flag_spec_F77='${wl}-rpath ${wl}$libdir'
+ else
+ allow_undefined_flag_F77=' -expect_unresolved \*'
+ archive_cmds_F77='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ archive_expsym_cmds_F77='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~
+ $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib~$rm $lib.exp'
+
+ # Both c and cxx compiler support -rpath directly
+ hardcode_libdir_flag_spec_F77='-rpath $libdir'
+ fi
+ hardcode_libdir_separator_F77=:
+ ;;
+
+ sco3.2v5*)
+ archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var_F77=no
+ export_dynamic_flag_spec_F77='${wl}-Bexport'
+ runpath_var=LD_RUN_PATH
+ hardcode_runpath_var=yes
+ ;;
+
+ solaris*)
+ no_undefined_flag_F77=' -z text'
+ if test "$GCC" = yes; then
+ archive_cmds_F77='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds_F77='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -shared ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$rm $lib.exp'
+ else
+ archive_cmds_F77='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ archive_expsym_cmds_F77='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp'
+ fi
+ hardcode_libdir_flag_spec_F77='-R$libdir'
+ hardcode_shlibpath_var_F77=no
+ case $host_os in
+ solaris2.[0-5] | solaris2.[0-5].*) ;;
+ *) # Supported since Solaris 2.6 (maybe 2.5.1?)
+ whole_archive_flag_spec_F77='-z allextract$convenience -z defaultextract' ;;
+ esac
+ link_all_deplibs_F77=yes
+ ;;
+
+ sunos4*)
+ if test "x$host_vendor" = xsequent; then
+ # Use $CC to link under sequent, because it throws in some extra .o
+ # files that make .init and .fini sections work.
+ archive_cmds_F77='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds_F77='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags'
+ fi
+ hardcode_libdir_flag_spec_F77='-L$libdir'
+ hardcode_direct_F77=yes
+ hardcode_minus_L_F77=yes
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ sysv4)
+ case $host_vendor in
+ sni)
+ archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_F77=yes # is this really true???
+ ;;
+ siemens)
+ ## LD is ld; it makes a PLAMLIB.
+ ## CC just makes a GrossModule.
+ archive_cmds_F77='$LD -G -o $lib $libobjs $deplibs $linker_flags'
+ reload_cmds_F77='$CC -r -o $output$reload_objs'
+ hardcode_direct_F77=no
+ ;;
+ motorola)
+ archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_F77=no #Motorola manual says yes, but my tests say they lie
+ ;;
+ esac
+ runpath_var='LD_RUN_PATH'
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ sysv4.3*)
+ archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var_F77=no
+ export_dynamic_flag_spec_F77='-Bexport'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var_F77=no
+ runpath_var=LD_RUN_PATH
+ hardcode_runpath_var=yes
+ ld_shlibs_F77=yes
+ fi
+ ;;
+
+ sysv4.2uw2*)
+ archive_cmds_F77='$LD -G -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_F77=yes
+ hardcode_minus_L_F77=no
+ hardcode_shlibpath_var_F77=no
+ hardcode_runpath_var=yes
+ runpath_var=LD_RUN_PATH
+ ;;
+
+ sysv5OpenUNIX8* | sysv5UnixWare7* | sysv5uw[78]* | unixware7*)
+ no_undefined_flag_F77='${wl}-z ${wl}text'
+ if test "$GCC" = yes; then
+ archive_cmds_F77='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds_F77='$CC -G ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ fi
+ runpath_var='LD_RUN_PATH'
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ sysv5*)
+ no_undefined_flag_F77=' -z text'
+ # $CC -shared without GNU ld will not create a library from C++
+ # object files and a static libstdc++, better avoid it by now
+ archive_cmds_F77='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ archive_expsym_cmds_F77='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp'
+ hardcode_libdir_flag_spec_F77=
+ hardcode_shlibpath_var_F77=no
+ runpath_var='LD_RUN_PATH'
+ ;;
+
+ uts4*)
+ archive_cmds_F77='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec_F77='-L$libdir'
+ hardcode_shlibpath_var_F77=no
+ ;;
+
+ *)
+ ld_shlibs_F77=no
+ ;;
+ esac
+ fi
+
+echo "$as_me:$LINENO: result: $ld_shlibs_F77" >&5
+echo "${ECHO_T}$ld_shlibs_F77" >&6
+test "$ld_shlibs_F77" = no && can_build_shared=no
+
+variables_saved_for_relink="PATH $shlibpath_var $runpath_var"
+if test "$GCC" = yes; then
+ variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH"
+fi
+
+#
+# Do we need to explicitly link libc?
+#
+case "x$archive_cmds_need_lc_F77" in
+x|xyes)
+ # Assume -lc should be added
+ archive_cmds_need_lc_F77=yes
+
+ if test "$enable_shared" = yes && test "$GCC" = yes; then
+ case $archive_cmds_F77 in
+ *'~'*)
+ # FIXME: we may have to deal with multi-command sequences.
+ ;;
+ '$CC '*)
+ # Test whether the compiler implicitly links with -lc since on some
+ # systems, -lgcc has to come before -lc. If gcc already passes -lc
+ # to ld, don't add -lc before -lgcc.
+ echo "$as_me:$LINENO: checking whether -lc should be explicitly linked in" >&5
+echo $ECHO_N "checking whether -lc should be explicitly linked in... $ECHO_C" >&6
+ $rm conftest*
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } 2>conftest.err; then
+ soname=conftest
+ lib=conftest
+ libobjs=conftest.$ac_objext
+ deplibs=
+ wl=$lt_prog_compiler_wl_F77
+ compiler_flags=-v
+ linker_flags=-v
+ verstring=
+ output_objdir=.
+ libname=conftest
+ lt_save_allow_undefined_flag=$allow_undefined_flag_F77
+ allow_undefined_flag_F77=
+ if { (eval echo "$as_me:$LINENO: \"$archive_cmds_F77 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1\"") >&5
+ (eval $archive_cmds_F77 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+ then
+ archive_cmds_need_lc_F77=no
+ else
+ archive_cmds_need_lc_F77=yes
+ fi
+ allow_undefined_flag_F77=$lt_save_allow_undefined_flag
+ else
+ cat conftest.err 1>&5
+ fi
+ $rm conftest*
+ echo "$as_me:$LINENO: result: $archive_cmds_need_lc_F77" >&5
+echo "${ECHO_T}$archive_cmds_need_lc_F77" >&6
+ ;;
+ esac
+ fi
+ ;;
+esac
+
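+# The per-OS case statement below records how shared libraries are named and
+# versioned on each host (library_names_spec, soname_spec) and which variable
+# the run-time linker consults (shlibpath_var); these values are written into
+# the generated libtool script further down.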
+echo "$as_me:$LINENO: checking dynamic linker characteristics" >&5
+echo $ECHO_N "checking dynamic linker characteristics... $ECHO_C" >&6
+library_names_spec=
+libname_spec='lib$name'
+soname_spec=
+shrext=".so"
+postinstall_cmds=
+postuninstall_cmds=
+finish_cmds=
+finish_eval=
+shlibpath_var=
+shlibpath_overrides_runpath=unknown
+version_type=none
+dynamic_linker="$host_os ld.so"
+sys_lib_dlsearch_path_spec="/lib /usr/lib"
+if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | grep ';' >/dev/null ; then
+ # if the path contains ";" then we assume it to be the separator
+ # otherwise default to the standard path separator (i.e. ":") - it is
+ # assumed that no part of a normal pathname contains ";" but that should
+ # be okay in the real world where ";" in dirpaths is itself problematic.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+else
+ sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
+fi
+need_lib_prefix=unknown
+hardcode_into_libs=no
+
+# when you set need_version to no, make sure it does not cause -set_version
+# flags to be left without arguments
+need_version=unknown
+
+case $host_os in
+aix3*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a'
+ shlibpath_var=LIBPATH
+
+ # AIX 3 has no versioning support, so we append a major version to the name.
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+
+aix4* | aix5*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ hardcode_into_libs=yes
+ if test "$host_cpu" = ia64; then
+ # AIX 5 supports IA64
+ library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ else
+ # With GCC up to 2.95.x, collect2 would create an import file
+ # for dependent libraries. The import file would start with
+ # the line `#! .'. This would cause the generated library to
+ # depend on `.', always an invalid library. This was fixed in
+ # development snapshots of GCC prior to 3.0.
+ case $host_os in
+ aix4 | aix4.[01] | aix4.[01].*)
+ if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
+ echo ' yes '
+ echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then
+ :
+ else
+ can_build_shared=no
+ fi
+ ;;
+ esac
+ # AIX (on Power*) has no versioning support, so currently we cannot hardcode the
+ # correct soname into the executable. Versioning support could probably be added
+ # to collect2, so additional links may be useful in the future.
+ if test "$aix_use_runtimelinking" = yes; then
+ # If using run time linking (on AIX 4.2 or later) use lib<name>.so
+ # instead of lib<name>.a to let people know that these are not
+ # typical AIX shared libraries.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ else
+ # We preserve .a as extension for shared libraries through AIX4.2
+ # and later when we are not doing run time linking.
+ library_names_spec='${libname}${release}.a $libname.a'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ fi
+ shlibpath_var=LIBPATH
+ fi
+ ;;
+
+amigaos*)
+ library_names_spec='$libname.ixlibrary $libname.a'
+ # Create ${libname}_ixlibrary.a entries in /sys/libs.
+ finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
+ ;;
+
+beos*)
+ library_names_spec='${libname}${shared_ext}'
+ dynamic_linker="$host_os ld.so"
+ shlibpath_var=LIBRARY_PATH
+ ;;
+
+bsdi4*)
+ version_type=linux
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
+ sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
+ # the default ld.so.conf also contains /usr/contrib/lib and
+ # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
+ # libtool to hard-code these into programs
+ ;;
+
+cygwin* | mingw* | pw32*)
+ version_type=windows
+ shrext=".dll"
+ need_version=no
+ need_lib_prefix=no
+
+ case $GCC,$host_os in
+ yes,cygwin* | yes,mingw* | yes,pw32*)
+ library_names_spec='$libname.dll.a'
+ # DLL is installed to $(libdir)/../bin by postinstall_cmds
+ postinstall_cmds='base_file=`basename \${file}`~
+ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~
+ dldir=$destdir/`dirname \$dlpath`~
+ test -d \$dldir || mkdir -p \$dldir~
+ $install_prog $dir/$dlname \$dldir/$dlname'
+ postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
+ dlpath=$dir/\$dldll~
+ $rm \$dlpath'
+ shlibpath_overrides_runpath=yes
+
+ case $host_os in
+ cygwin*)
+ # Cygwin DLLs use 'cyg' prefix rather than 'lib'
+ soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib"
+ ;;
+ mingw*)
+ # MinGW DLLs use traditional 'lib' prefix
+ soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | grep ';[c-zC-Z]:/' >/dev/null; then
+ # It is most probably a Windows format PATH printed by
+ # mingw gcc, but we are running on Cygwin. Gcc prints its search
+ # path with ; separators, and with drive letters. We can handle the
+ # drive letters (cygwin fileutils understands them), so leave them,
+ # especially as we might pass files found there to a mingw objdump,
+ # which wouldn't understand a cygwinified path. Ahh.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+ ;;
+ pw32*)
+ # pw32 DLLs use 'pw' prefix rather than 'lib'
+ library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ ;;
+ esac
+ ;;
+
+ *)
+ library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
+ ;;
+ esac
+ dynamic_linker='Win32 ld.exe'
+ # FIXME: first we should search . and the directory the executable is in
+ shlibpath_var=PATH
+ ;;
+
+darwin* | rhapsody*)
+ dynamic_linker="$host_os dyld"
+ version_type=darwin
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext'
+ soname_spec='${libname}${release}${major}$shared_ext'
+ shlibpath_overrides_runpath=yes
+ shlibpath_var=DYLD_LIBRARY_PATH
+ shrext='$(test .$module = .yes && echo .so || echo .dylib)'
+ # Apple's gcc formats the output of 'gcc -print-search-dirs' differently, so it needs the extra massaging below.
+ if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | tr "\n" "$PATH_SEPARATOR" | sed -e 's/libraries:/@libraries:/' | tr "@" "\n" | grep "^libraries:" | sed -e "s/^libraries://" -e "s,=/,/,g" -e "s,$PATH_SEPARATOR, ,g" -e "s,.*,& /lib /usr/lib /usr/local/lib,g"`
+ else
+ sys_lib_search_path_spec='/lib /usr/lib /usr/local/lib'
+ fi
+ sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
+ ;;
+
+dgux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+freebsd1*)
+ dynamic_linker=no
+ ;;
+
+kfreebsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+freebsd*)
+ objformat=`test -x /usr/bin/objformat && /usr/bin/objformat || echo aout`
+ version_type=freebsd-$objformat
+ case $version_type in
+ freebsd-elf*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}'
+ need_version=no
+ need_lib_prefix=no
+ ;;
+ freebsd-*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix'
+ need_version=yes
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_os in
+ freebsd2*)
+ shlibpath_overrides_runpath=yes
+ ;;
+ freebsd3.[01]* | freebsdelf3.[01]*)
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+ *) # from 3.2 on
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ ;;
+ esac
+ ;;
+
+gnu*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ hardcode_into_libs=yes
+ ;;
+
+hpux9* | hpux10* | hpux11*)
+ # Give a soname corresponding to the major version so that dld.sl refuses to
+ # link against other versions.
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ case "$host_cpu" in
+ ia64*)
+ shrext='.so'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.so"
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ if test "X$HPUX_IA64_MODE" = X32; then
+ sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib"
+ else
+ sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64"
+ fi
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ hppa*64*)
+ shrext='.sl'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64"
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ *)
+ shrext='.sl'
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=SHLIB_PATH
+ shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+ esac
+ # HP-UX runs *really* slowly unless shared libraries are mode 555.
+ postinstall_cmds='chmod 555 $lib'
+ ;;
+
+irix5* | irix6* | nonstopux*)
+ case $host_os in
+ nonstopux*) version_type=nonstopux ;;
+ *)
+ if test "$lt_cv_prog_gnu_ld" = yes; then
+ version_type=linux
+ else
+ version_type=irix
+ fi ;;
+ esac
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}'
+ case $host_os in
+ irix5* | nonstopux*)
+ libsuff= shlibsuff=
+ ;;
+ *)
+ case $LD in # libtool.m4 will add one of these switches to LD
+ *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
+ libsuff= shlibsuff= libmagic=32-bit;;
+ *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
+ libsuff=32 shlibsuff=N32 libmagic=N32;;
+ *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
+ libsuff=64 shlibsuff=64 libmagic=64-bit;;
+ *) libsuff= shlibsuff= libmagic=never-match;;
+ esac
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}"
+ sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}"
+ hardcode_into_libs=yes
+ ;;
+
+# No shared lib support for Linux oldld, aout, or coff.
+linux*oldld* | linux*aout* | linux*coff*)
+ dynamic_linker=no
+ ;;
+
+# This must be Linux ELF.
+linux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ # This implies no fast_install, which is unacceptable.
+ # Some rework will be needed to allow for fast_install
+ # before this can be enabled.
+ hardcode_into_libs=yes
+
+ # Append ld.so.conf contents to the search path
+ if test -f /etc/ld.so.conf; then
+ ld_extra=`$SED -e 's/[:,\t]/ /g;s/=[^=]*$//;s/=[^= ]* / /g' /etc/ld.so.conf`
+ sys_lib_dlsearch_path_spec="/lib /usr/lib $ld_extra"
+ fi
+
+ # We used to test for /lib/ld.so.1 and disable shared libraries on
+ # powerpc, because MkLinux only supported shared libraries with the
+ # GNU dynamic linker. Since this was broken with cross compilers,
+ # most powerpc-linux boxes support dynamic linking these days and
+ # people can always --disable-shared, the test was removed, and we
+ # assume the GNU/Linux dynamic linker is in use.
+ dynamic_linker='GNU/Linux ld.so'
+ ;;
+
+knetbsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+netbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ dynamic_linker='NetBSD (a.out) ld.so'
+ else
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ dynamic_linker='NetBSD ld.elf_so'
+ fi
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+
+newsos6)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+nto-qnx*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+openbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=yes
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ case $host_os in
+ openbsd2.[89] | openbsd2.[89].*)
+ shlibpath_overrides_runpath=no
+ ;;
+ *)
+ shlibpath_overrides_runpath=yes
+ ;;
+ esac
+ else
+ shlibpath_overrides_runpath=yes
+ fi
+ ;;
+
+os2*)
+ libname_spec='$name'
+ shrext=".dll"
+ need_lib_prefix=no
+ library_names_spec='$libname${shared_ext} $libname.a'
+ dynamic_linker='OS/2 ld.exe'
+ shlibpath_var=LIBPATH
+ ;;
+
+osf3* | osf4* | osf5*)
+ version_type=osf
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib"
+ sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec"
+ ;;
+
+sco3.2v5*)
+ version_type=osf
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+solaris*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ # ldd complains unless libraries are executable
+ postinstall_cmds='chmod +x $lib'
+ ;;
+
+sunos4*)
+ version_type=sunos
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ if test "$with_gnu_ld" = yes; then
+ need_lib_prefix=no
+ fi
+ need_version=yes
+ ;;
+
+sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_vendor in
+ sni)
+ shlibpath_overrides_runpath=no
+ need_lib_prefix=no
+ export_dynamic_flag_spec='${wl}-Blargedynsym'
+ runpath_var=LD_RUN_PATH
+ ;;
+ siemens)
+ need_lib_prefix=no
+ ;;
+ motorola)
+ need_lib_prefix=no
+ need_version=no
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib'
+ ;;
+ esac
+ ;;
+
+sysv4*MP*)
+ if test -d /usr/nec ;then
+ version_type=linux
+ library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}'
+ soname_spec='$libname${shared_ext}.$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ fi
+ ;;
+
+uts4*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+*)
+ dynamic_linker=no
+ ;;
+esac
+echo "$as_me:$LINENO: result: $dynamic_linker" >&5
+echo "${ECHO_T}$dynamic_linker" >&6
+test "$dynamic_linker" = no && can_build_shared=no
+
+echo "$as_me:$LINENO: checking how to hardcode library paths into programs" >&5
+echo $ECHO_N "checking how to hardcode library paths into programs... $ECHO_C" >&6
+hardcode_action_F77=
+if test -n "$hardcode_libdir_flag_spec_F77" || \
+ test -n "$runpath_var" || \
+ test "X$hardcode_automatic_F77" = "Xyes" ; then
+
+ # We can hardcode non-existent directories.
+ if test "$hardcode_direct_F77" != no &&
+ # If the only mechanism to avoid hardcoding is shlibpath_var, we
+ # have to relink, otherwise we might link with an installed library
+ # when we should be linking with a yet-to-be-installed one
+ ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, F77)" != no &&
+ test "$hardcode_minus_L_F77" != no; then
+ # Linking always hardcodes the temporary library directory.
+ hardcode_action_F77=relink
+ else
+ # We can link without hardcoding, and we can hardcode nonexistent dirs.
+ hardcode_action_F77=immediate
+ fi
+else
+ # We cannot hardcode anything, or else we can only hardcode existing
+ # directories.
+ hardcode_action_F77=unsupported
+fi
+echo "$as_me:$LINENO: result: $hardcode_action_F77" >&5
+echo "${ECHO_T}$hardcode_action_F77" >&6
+
+if test "$hardcode_action_F77" = relink; then
+ # Fast installation is not supported
+ enable_fast_install=no
+elif test "$shlibpath_overrides_runpath" = yes ||
+ test "$enable_shared" = no; then
+ # Fast installation is not necessary
+ enable_fast_install=needless
+fi
+
+striplib=
+old_striplib=
+echo "$as_me:$LINENO: checking whether stripping libraries is possible" >&5
+echo $ECHO_N "checking whether stripping libraries is possible... $ECHO_C" >&6
+if test -n "$STRIP" && $STRIP -V 2>&1 | grep "GNU strip" >/dev/null; then
+ test -z "$old_striplib" && old_striplib="$STRIP --strip-debug"
+ test -z "$striplib" && striplib="$STRIP --strip-unneeded"
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+else
+# FIXME - insert some real tests, host_os isn't really good enough
+ case $host_os in
+ darwin*)
+ if test -n "$STRIP" ; then
+ striplib="$STRIP -x"
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+ else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+ ;;
+ *)
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+ ;;
+ esac
+fi
+
+
+
+# The else clause should only fire when bootstrapping the
+# libtool distribution; otherwise you forgot to ship ltmain.sh
+# with your package, and you will get complaints that there are
+# no rules to generate ltmain.sh.
+if test -f "$ltmain"; then
+ # See if we are running on zsh, and set the options which allow our commands through
+ # without removal of \ escapes.
+ if test -n "${ZSH_VERSION+set}" ; then
+ setopt NO_GLOB_SUBST
+ fi
+ # Now quote all the things that may contain metacharacters while being
+ # careful not to overquote the AC_SUBSTed values. We take copies of the
+ # variables and quote the copies for generation of the libtool script.
+ for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC NM \
+ SED SHELL STRIP \
+ libname_spec library_names_spec soname_spec extract_expsyms_cmds \
+ old_striplib striplib file_magic_cmd finish_cmds finish_eval \
+ deplibs_check_method reload_flag reload_cmds need_locks \
+ lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ sys_lib_search_path_spec sys_lib_dlsearch_path_spec \
+ old_postinstall_cmds old_postuninstall_cmds \
+ compiler_F77 \
+ CC_F77 \
+ LD_F77 \
+ lt_prog_compiler_wl_F77 \
+ lt_prog_compiler_pic_F77 \
+ lt_prog_compiler_static_F77 \
+ lt_prog_compiler_no_builtin_flag_F77 \
+ export_dynamic_flag_spec_F77 \
+ thread_safe_flag_spec_F77 \
+ whole_archive_flag_spec_F77 \
+ enable_shared_with_static_runtimes_F77 \
+ old_archive_cmds_F77 \
+ old_archive_from_new_cmds_F77 \
+ predep_objects_F77 \
+ postdep_objects_F77 \
+ predeps_F77 \
+ postdeps_F77 \
+ compiler_lib_search_path_F77 \
+ archive_cmds_F77 \
+ archive_expsym_cmds_F77 \
+ postinstall_cmds_F77 \
+ postuninstall_cmds_F77 \
+ old_archive_from_expsyms_cmds_F77 \
+ allow_undefined_flag_F77 \
+ no_undefined_flag_F77 \
+ export_symbols_cmds_F77 \
+ hardcode_libdir_flag_spec_F77 \
+ hardcode_libdir_flag_spec_ld_F77 \
+ hardcode_libdir_separator_F77 \
+ hardcode_automatic_F77 \
+ module_cmds_F77 \
+ module_expsym_cmds_F77 \
+ lt_cv_prog_compiler_c_o_F77 \
+ exclude_expsyms_F77 \
+ include_expsyms_F77; do
+
+ case $var in
+ old_archive_cmds_F77 | \
+ old_archive_from_new_cmds_F77 | \
+ archive_cmds_F77 | \
+ archive_expsym_cmds_F77 | \
+ module_cmds_F77 | \
+ module_expsym_cmds_F77 | \
+ old_archive_from_expsyms_cmds_F77 | \
+ export_symbols_cmds_F77 | \
+ extract_expsyms_cmds | reload_cmds | finish_cmds | \
+ postinstall_cmds | postuninstall_cmds | \
+ old_postinstall_cmds | old_postuninstall_cmds | \
+ sys_lib_search_path_spec | sys_lib_dlsearch_path_spec)
+ # Double-quote double-evaled strings.
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\""
+ ;;
+ *)
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\""
+ ;;
+ esac
+ done
+
+ case $lt_echo in
+ *'\$0 --fallback-echo"')
+ lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'`
+ ;;
+ esac
+
+cfgfile="$ofile"
+
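+# Append the F77 tag configuration block to the generated libtool script ($ofile).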
+ cat <<__EOF__ >> "$cfgfile"
+# ### BEGIN LIBTOOL TAG CONFIG: $tagname
+
+# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`:
+
+# Shell to use when invoking shell scripts.
+SHELL=$lt_SHELL
+
+# Whether or not to build shared libraries.
+build_libtool_libs=$enable_shared
+
+# Whether or not to build static libraries.
+build_old_libs=$enable_static
+
+# Whether or not to add -lc for building shared libraries.
+build_libtool_need_lc=$archive_cmds_need_lc_F77
+
+# Whether or not to disallow shared libs when runtime libs are static
+allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_F77
+
+# Whether or not to optimize for fast installation.
+fast_install=$enable_fast_install
+
+# The host system.
+host_alias=$host_alias
+host=$host
+
+# An echo program that does not interpret backslashes.
+echo=$lt_echo
+
+# The archiver.
+AR=$lt_AR
+AR_FLAGS=$lt_AR_FLAGS
+
+# A C compiler.
+LTCC=$lt_LTCC
+
+# A language-specific compiler.
+CC=$lt_compiler_F77
+
+# Is the compiler the GNU C compiler?
+with_gcc=$GCC_F77
+
+# An ERE matcher.
+EGREP=$lt_EGREP
+
+# The linker used to build libraries.
+LD=$lt_LD_F77
+
+# Whether we need hard or soft links.
+LN_S=$lt_LN_S
+
+# A BSD-compatible nm program.
+NM=$lt_NM
+
+# A symbol stripping program
+STRIP=$lt_STRIP
+
+# Used to examine libraries when file_magic_cmd begins with "file"
+MAGIC_CMD=$MAGIC_CMD
+
+# Used on cygwin: DLL creation program.
+DLLTOOL="$DLLTOOL"
+
+# Used on cygwin: object dumper.
+OBJDUMP="$OBJDUMP"
+
+# Used on cygwin: assembler.
+AS="$AS"
+
+# The name of the directory that contains temporary libtool files.
+objdir=$objdir
+
+# How to create reloadable object files.
+reload_flag=$lt_reload_flag
+reload_cmds=$lt_reload_cmds
+
+# How to pass a linker flag through the compiler.
+wl=$lt_lt_prog_compiler_wl_F77
+
+# Object file suffix (normally "o").
+objext="$ac_objext"
+
+# Old archive suffix (normally "a").
+libext="$libext"
+
+# Shared library suffix (normally ".so").
+shrext='$shrext'
+
+# Executable file suffix (normally "").
+exeext="$exeext"
+
+# Additional compiler flags for building library objects.
+pic_flag=$lt_lt_prog_compiler_pic_F77
+pic_mode=$pic_mode
+
+# What is the maximum length of a command?
+max_cmd_len=$lt_cv_sys_max_cmd_len
+
+# Does compiler simultaneously support -c and -o options?
+compiler_c_o=$lt_lt_cv_prog_compiler_c_o_F77
+
+# Must we lock files when doing compilation?
+need_locks=$lt_need_locks
+
+# Do we need the lib prefix for modules?
+need_lib_prefix=$need_lib_prefix
+
+# Do we need a version for libraries?
+need_version=$need_version
+
+# Whether dlopen is supported.
+dlopen_support=$enable_dlopen
+
+# Whether dlopen of programs is supported.
+dlopen_self=$enable_dlopen_self
+
+# Whether dlopen of statically linked programs is supported.
+dlopen_self_static=$enable_dlopen_self_static
+
+# Compiler flag to prevent dynamic linking.
+link_static_flag=$lt_lt_prog_compiler_static_F77
+
+# Compiler flag to turn off builtin functions.
+no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_F77
+
+# Compiler flag to allow reflexive dlopens.
+export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_F77
+
+# Compiler flag to generate shared objects directly from archives.
+whole_archive_flag_spec=$lt_whole_archive_flag_spec_F77
+
+# Compiler flag to generate thread-safe objects.
+thread_safe_flag_spec=$lt_thread_safe_flag_spec_F77
+
+# Library versioning type.
+version_type=$version_type
+
+# Format of library name prefix.
+libname_spec=$lt_libname_spec
+
+# List of archive names. First name is the real one, the rest are links.
+# The last name is the one that the linker finds with -lNAME.
+library_names_spec=$lt_library_names_spec
+
+# The coded name of the library, if different from the real name.
+soname_spec=$lt_soname_spec
+
+# Commands used to build and install an old-style archive.
+RANLIB=$lt_RANLIB
+old_archive_cmds=$lt_old_archive_cmds_F77
+old_postinstall_cmds=$lt_old_postinstall_cmds
+old_postuninstall_cmds=$lt_old_postuninstall_cmds
+
+# Create an old-style archive from a shared archive.
+old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_F77
+
+# Create a temporary old-style archive to link instead of a shared archive.
+old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_F77
+
+# Commands used to build and install a shared archive.
+archive_cmds=$lt_archive_cmds_F77
+archive_expsym_cmds=$lt_archive_expsym_cmds_F77
+postinstall_cmds=$lt_postinstall_cmds
+postuninstall_cmds=$lt_postuninstall_cmds
+
+# Commands used to build a loadable module (assumed same as above if empty)
+module_cmds=$lt_module_cmds_F77
+module_expsym_cmds=$lt_module_expsym_cmds_F77
+
+# Commands to strip libraries.
+old_striplib=$lt_old_striplib
+striplib=$lt_striplib
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predep_objects=$lt_predep_objects_F77
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdep_objects=$lt_postdep_objects_F77
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predeps=$lt_predeps_F77
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdeps=$lt_postdeps_F77
+
+# The library search path used internally by the compiler when linking
+# a shared library.
+compiler_lib_search_path=$lt_compiler_lib_search_path_F77
+
+# Method to check whether dependent libraries are shared objects.
+deplibs_check_method=$lt_deplibs_check_method
+
+# Command to use when deplibs_check_method == file_magic.
+file_magic_cmd=$lt_file_magic_cmd
+
+# Flag that allows shared libraries with undefined symbols to be built.
+allow_undefined_flag=$lt_allow_undefined_flag_F77
+
+# Flag that forces no undefined symbols.
+no_undefined_flag=$lt_no_undefined_flag_F77
+
+# Commands used to finish a libtool library installation in a directory.
+finish_cmds=$lt_finish_cmds
+
+# Same as above, but a single script fragment to be evaled but not shown.
+finish_eval=$lt_finish_eval
+
+# Take the output of nm and produce a listing of raw symbols and C names.
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe
+
+# Transform the output of nm into a proper C declaration
+global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl
+
+# Transform the output of nm into a C name/address pair
+global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+
+# This is the shared library runtime path variable.
+runpath_var=$runpath_var
+
+# This is the shared library path variable.
+shlibpath_var=$shlibpath_var
+
+# Is shlibpath searched before the hard-coded library search path?
+shlibpath_overrides_runpath=$shlibpath_overrides_runpath
+
+# How to hardcode a shared library path into an executable.
+hardcode_action=$hardcode_action_F77
+
+# Whether we should hardcode library paths into libraries.
+hardcode_into_libs=$hardcode_into_libs
+
+# Flag to hardcode \$libdir into a binary during linking.
+# This must work even if \$libdir does not exist.
+hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_F77
+
+# If ld is used when linking, flag to hardcode \$libdir into
+# a binary during linking. This must work even if \$libdir does
+# not exist.
+hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld_F77
+
+# Whether we need a single -rpath flag with a separated argument.
+hardcode_libdir_separator=$lt_hardcode_libdir_separator_F77
+
+# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the
+# resulting binary.
+hardcode_direct=$hardcode_direct_F77
+
+# Set to yes if using the -LDIR flag during linking hardcodes DIR into the
+# resulting binary.
+hardcode_minus_L=$hardcode_minus_L_F77
+
+# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into
+# the resulting binary.
+hardcode_shlibpath_var=$hardcode_shlibpath_var_F77
+
+# Set to yes if building a shared library automatically hardcodes DIR into the library
+# and all subsequent libraries and executables linked against it.
+hardcode_automatic=$hardcode_automatic_F77
+
+# Variables whose values should be saved in libtool wrapper scripts and
+# restored at relink time.
+variables_saved_for_relink="$variables_saved_for_relink"
+
+# Whether libtool must link a program against all its dependency libraries.
+link_all_deplibs=$link_all_deplibs_F77
+
+# Compile-time system search path for libraries
+sys_lib_search_path_spec=$lt_sys_lib_search_path_spec
+
+# Run-time system search path for libraries
+sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec
+
+# Fix the shell variable \$srcfile for the compiler.
+fix_srcfile_path="$fix_srcfile_path_F77"
+
+# Set to yes if exported symbols are required.
+always_export_symbols=$always_export_symbols_F77
+
+# The commands to list exported symbols.
+export_symbols_cmds=$lt_export_symbols_cmds_F77
+
+# The commands to extract the exported symbol list from a shared archive.
+extract_expsyms_cmds=$lt_extract_expsyms_cmds
+
+# Symbols that should not be listed in the preloaded symbols.
+exclude_expsyms=$lt_exclude_expsyms_F77
+
+# Symbols that must always be exported.
+include_expsyms=$lt_include_expsyms_F77
+
+# ### END LIBTOOL TAG CONFIG: $tagname
+
+__EOF__
+
+
+else
+ # If there is no Makefile yet, we rely on a make rule to execute
+ # `config.status --recheck' to rerun these tests and create the
+ # libtool script then.
+ ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'`
+ if test -f "$ltmain_in"; then
+ test -f Makefile && make "$ltmain"
+ fi
+fi
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+CC="$lt_save_CC"
+
+ else
+ tagname=""
+ fi
+ ;;
+
+ GCJ)
+ if test -n "$GCJ" && test "X$GCJ" != "Xno"; then
+
+
+
+# Source file extension for Java test sources.
+ac_ext=java
+
+# Object file extension for compiled Java test sources.
+objext=o
+objext_GCJ=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code="class foo {}\n"
+
+# Code to be used in simple link tests
+lt_simple_link_test_code='public class conftest { public static void main(String argv) {}; }\n'
+
+# ltmain only uses $CC for tagged configurations so make sure $CC is set.
+
+# If no C compiler was specified, use CC.
+LTCC=${LTCC-"$CC"}
+
+# Allow CC to be a program name with arguments.
+compiler=$CC
+
+
+# Allow CC to be a program name with arguments.
+lt_save_CC="$CC"
+CC=${GCJ-"gcj"}
+compiler=$CC
+compiler_GCJ=$CC
+
+# GCJ postdates the GCC versions that did not implicitly link in libc, so -lc never needs to be added explicitly.
+archive_cmds_need_lc_GCJ=no
+
+
+lt_prog_compiler_no_builtin_flag_GCJ=
+
+if test "$GCC" = yes; then
+ lt_prog_compiler_no_builtin_flag_GCJ=' -fno-builtin'
+
+
+echo "$as_me:$LINENO: checking if $compiler supports -fno-rtti -fno-exceptions" >&5
+echo $ECHO_N "checking if $compiler supports -fno-rtti -fno-exceptions... $ECHO_C" >&6
+if test "${lt_cv_prog_compiler_rtti_exceptions+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_cv_prog_compiler_rtti_exceptions=no
+ ac_outfile=conftest.$ac_objext
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+ lt_compiler_flag="-fno-rtti -fno-exceptions"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ # The option is referenced via a variable to avoid confusing sed.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:14755: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>conftest.err)
+ ac_status=$?
+ cat conftest.err >&5
+ echo "$as_me:14759: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s "$ac_outfile"; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s conftest.err; then
+ lt_cv_prog_compiler_rtti_exceptions=yes
+ fi
+ fi
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_rtti_exceptions" >&5
+echo "${ECHO_T}$lt_cv_prog_compiler_rtti_exceptions" >&6
+
+if test x"$lt_cv_prog_compiler_rtti_exceptions" = xyes; then
+ lt_prog_compiler_no_builtin_flag_GCJ="$lt_prog_compiler_no_builtin_flag_GCJ -fno-rtti -fno-exceptions"
+else
+ :
+fi
+
+fi
+
+lt_prog_compiler_wl_GCJ=
+lt_prog_compiler_pic_GCJ=
+lt_prog_compiler_static_GCJ=
+
+echo "$as_me:$LINENO: checking for $compiler option to produce PIC" >&5
+echo $ECHO_N "checking for $compiler option to produce PIC... $ECHO_C" >&6
+
+ if test "$GCC" = yes; then
+ lt_prog_compiler_wl_GCJ='-Wl,'
+ lt_prog_compiler_static_GCJ='-static'
+
+ case $host_os in
+ aix*)
+ # All AIX code is PIC.
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static_GCJ='-Bstatic'
+ fi
+ ;;
+
+ amigaos*)
+ # FIXME: we need at least 68020 code to build shared libraries, but
+ # adding the `-m68020' flag to GCC prevents building anything better,
+ # like `-m68040'.
+ lt_prog_compiler_pic_GCJ='-m68020 -resident32 -malways-restore-a4'
+ ;;
+
+ beos* | cygwin* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
+ # PIC is the default for these OSes.
+ ;;
+
+ mingw* | pw32* | os2*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ lt_prog_compiler_pic_GCJ='-DDLL_EXPORT'
+ ;;
+
+ darwin* | rhapsody*)
+ # PIC is the default on this platform
+ # Common symbols not allowed in MH_DYLIB files
+ lt_prog_compiler_pic_GCJ='-fno-common'
+ ;;
+
+ msdosdjgpp*)
+ # Just because we use GCC doesn't mean we suddenly get shared libraries
+ # on systems that don't support them.
+ lt_prog_compiler_can_build_shared_GCJ=no
+ enable_shared=no
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ lt_prog_compiler_pic_GCJ=-Kconform_pic
+ fi
+ ;;
+
+ hpux*)
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ lt_prog_compiler_pic_GCJ='-fPIC'
+ ;;
+ esac
+ ;;
+
+ *)
+ lt_prog_compiler_pic_GCJ='-fPIC'
+ ;;
+ esac
+ else
+ # PORTME Check for flag to pass linker flags through the system compiler.
+ case $host_os in
+ aix*)
+ lt_prog_compiler_wl_GCJ='-Wl,'
+ if test "$host_cpu" = ia64; then
+ # AIX 5 now supports IA64 processor
+ lt_prog_compiler_static_GCJ='-Bstatic'
+ else
+ lt_prog_compiler_static_GCJ='-bnso -bI:/lib/syscalls.exp'
+ fi
+ ;;
+
+ mingw* | pw32* | os2*)
+ # This hack is so that the source file can tell whether it is being
+ # built for inclusion in a dll (and should export symbols for example).
+ lt_prog_compiler_pic_GCJ='-DDLL_EXPORT'
+ ;;
+
+ hpux9* | hpux10* | hpux11*)
+ lt_prog_compiler_wl_GCJ='-Wl,'
+ # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
+ # not for PA HP-UX.
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ # +Z the default
+ ;;
+ *)
+ lt_prog_compiler_pic_GCJ='+Z'
+ ;;
+ esac
+ # Is there a better lt_prog_compiler_static that works with the bundled CC?
+ lt_prog_compiler_static_GCJ='${wl}-a ${wl}archive'
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ lt_prog_compiler_wl_GCJ='-Wl,'
+ # PIC (with -KPIC) is the default.
+ lt_prog_compiler_static_GCJ='-non_shared'
+ ;;
+
+ newsos6)
+ lt_prog_compiler_pic_GCJ='-KPIC'
+ lt_prog_compiler_static_GCJ='-Bstatic'
+ ;;
+
+ linux*)
+ case $CC in
+ icc* | ecc*)
+ lt_prog_compiler_wl_GCJ='-Wl,'
+ lt_prog_compiler_pic_GCJ='-KPIC'
+ lt_prog_compiler_static_GCJ='-static'
+ ;;
+ ccc*)
+ lt_prog_compiler_wl_GCJ='-Wl,'
+ # All Alpha code is PIC.
+ lt_prog_compiler_static_GCJ='-non_shared'
+ ;;
+ esac
+ ;;
+
+ osf3* | osf4* | osf5*)
+ lt_prog_compiler_wl_GCJ='-Wl,'
+ # All OSF/1 code is PIC.
+ lt_prog_compiler_static_GCJ='-non_shared'
+ ;;
+
+ sco3.2v5*)
+ lt_prog_compiler_pic_GCJ='-Kpic'
+ lt_prog_compiler_static_GCJ='-dn'
+ ;;
+
+ solaris*)
+ lt_prog_compiler_wl_GCJ='-Wl,'
+ lt_prog_compiler_pic_GCJ='-KPIC'
+ lt_prog_compiler_static_GCJ='-Bstatic'
+ ;;
+
+ sunos4*)
+ lt_prog_compiler_wl_GCJ='-Qoption ld '
+ lt_prog_compiler_pic_GCJ='-PIC'
+ lt_prog_compiler_static_GCJ='-Bstatic'
+ ;;
+
+ sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ lt_prog_compiler_wl_GCJ='-Wl,'
+ lt_prog_compiler_pic_GCJ='-KPIC'
+ lt_prog_compiler_static_GCJ='-Bstatic'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec ;then
+ lt_prog_compiler_pic_GCJ='-Kconform_pic'
+ lt_prog_compiler_static_GCJ='-Bstatic'
+ fi
+ ;;
+
+ uts4*)
+ lt_prog_compiler_pic_GCJ='-pic'
+ lt_prog_compiler_static_GCJ='-Bstatic'
+ ;;
+
+ *)
+ lt_prog_compiler_can_build_shared_GCJ=no
+ ;;
+ esac
+ fi
+
+echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_GCJ" >&5
+echo "${ECHO_T}$lt_prog_compiler_pic_GCJ" >&6
+
+#
+# Check to make sure the PIC flag actually works.
+#
+if test -n "$lt_prog_compiler_pic_GCJ"; then
+
+echo "$as_me:$LINENO: checking if $compiler PIC flag $lt_prog_compiler_pic_GCJ works" >&5
+echo $ECHO_N "checking if $compiler PIC flag $lt_prog_compiler_pic_GCJ works... $ECHO_C" >&6
+if test "${lt_prog_compiler_pic_works_GCJ+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_prog_compiler_pic_works_GCJ=no
+ ac_outfile=conftest.$ac_objext
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+ lt_compiler_flag="$lt_prog_compiler_pic_GCJ"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ # The option is referenced via a variable to avoid confusing sed.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:14988: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>conftest.err)
+ ac_status=$?
+ cat conftest.err >&5
+ echo "$as_me:14992: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s "$ac_outfile"; then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s conftest.err; then
+ lt_prog_compiler_pic_works_GCJ=yes
+ fi
+ fi
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_prog_compiler_pic_works_GCJ" >&5
+echo "${ECHO_T}$lt_prog_compiler_pic_works_GCJ" >&6
+
+if test x"$lt_prog_compiler_pic_works_GCJ" = xyes; then
+ case $lt_prog_compiler_pic_GCJ in
+ "" | " "*) ;;
+ *) lt_prog_compiler_pic_GCJ=" $lt_prog_compiler_pic_GCJ" ;;
+ esac
+else
+ lt_prog_compiler_pic_GCJ=
+ lt_prog_compiler_can_build_shared_GCJ=no
+fi
+
+fi
+case "$host_os" in
+ # For platforms which do not support PIC, -DPIC is meaningless:
+ *djgpp*)
+ lt_prog_compiler_pic_GCJ=
+ ;;
+ *)
+ lt_prog_compiler_pic_GCJ="$lt_prog_compiler_pic_GCJ"
+ ;;
+esac
+
+echo "$as_me:$LINENO: checking if $compiler supports -c -o file.$ac_objext" >&5
+echo $ECHO_N "checking if $compiler supports -c -o file.$ac_objext... $ECHO_C" >&6
+if test "${lt_cv_prog_compiler_c_o_GCJ+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ lt_cv_prog_compiler_c_o_GCJ=no
+ $rm -r conftest 2>/dev/null
+ mkdir conftest
+ cd conftest
+ mkdir out
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ lt_compiler_flag="-o out/conftest2.$ac_objext"
+ # Insert the option either (1) after the last *FLAGS variable, or
+ # (2) before a word containing "conftest.", or (3) at the end.
+ # Note that $ac_compile itself does not contain backslashes and begins
+ # with a dollar sign (not a hyphen), so the echo should work correctly.
+ lt_compile=`echo "$ac_compile" | $SED \
+ -e 's:.*FLAGS}? :&$lt_compiler_flag :; t' \
+ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \
+ -e 's:$: $lt_compiler_flag:'`
+ (eval echo "\"\$as_me:15048: $lt_compile\"" >&5)
+ (eval "$lt_compile" 2>out/conftest.err)
+ ac_status=$?
+ cat out/conftest.err >&5
+ echo "$as_me:15052: \$? = $ac_status" >&5
+ if (exit $ac_status) && test -s out/conftest2.$ac_objext
+ then
+ # The compiler can only warn and ignore the option if not recognized
+ # So say no if there are warnings
+ if test ! -s out/conftest.err; then
+ lt_cv_prog_compiler_c_o_GCJ=yes
+ fi
+ fi
+ chmod u+w .
+ $rm conftest*
+ # SGI C++ compiler will create directory out/ii_files/ for
+ # template instantiation
+ test -d out/ii_files && $rm out/ii_files/* && rmdir out/ii_files
+ $rm out/* && rmdir out
+ cd ..
+ rmdir conftest
+ $rm conftest*
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_prog_compiler_c_o_GCJ" >&5
+echo "${ECHO_T}$lt_cv_prog_compiler_c_o_GCJ" >&6
+
+
+hard_links="nottested"
+if test "$lt_cv_prog_compiler_c_o_GCJ" = no && test "$need_locks" != no; then
+ # do not overwrite the value of need_locks provided by the user
+ echo "$as_me:$LINENO: checking if we can lock with hard links" >&5
+echo $ECHO_N "checking if we can lock with hard links... $ECHO_C" >&6
+ hard_links=yes
+ $rm conftest*
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ touch conftest.a
+ ln conftest.a conftest.b 2>&5 || hard_links=no
+ ln conftest.a conftest.b 2>/dev/null && hard_links=no
+ echo "$as_me:$LINENO: result: $hard_links" >&5
+echo "${ECHO_T}$hard_links" >&6
+ if test "$hard_links" = no; then
+ { echo "$as_me:$LINENO: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5
+echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;}
+ need_locks=warn
+ fi
+else
+ need_locks=no
+fi
+
+echo "$as_me:$LINENO: checking whether the $compiler linker ($LD) supports shared libraries" >&5
+echo $ECHO_N "checking whether the $compiler linker ($LD) supports shared libraries... $ECHO_C" >&6
+
+ runpath_var=
+ allow_undefined_flag_GCJ=
+ enable_shared_with_static_runtimes_GCJ=no
+ archive_cmds_GCJ=
+ archive_expsym_cmds_GCJ=
+ old_archive_From_new_cmds_GCJ=
+ old_archive_from_expsyms_cmds_GCJ=
+ export_dynamic_flag_spec_GCJ=
+ whole_archive_flag_spec_GCJ=
+ thread_safe_flag_spec_GCJ=
+ hardcode_libdir_flag_spec_GCJ=
+ hardcode_libdir_flag_spec_ld_GCJ=
+ hardcode_libdir_separator_GCJ=
+ hardcode_direct_GCJ=no
+ hardcode_minus_L_GCJ=no
+ hardcode_shlibpath_var_GCJ=unsupported
+ link_all_deplibs_GCJ=unknown
+ hardcode_automatic_GCJ=no
+ module_cmds_GCJ=
+ module_expsym_cmds_GCJ=
+ always_export_symbols_GCJ=no
+ export_symbols_cmds_GCJ='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+ # include_expsyms should be a list of space-separated symbols to be *always*
+ # included in the symbol list
+ include_expsyms_GCJ=
+ # exclude_expsyms can be an extended regexp of symbols to exclude
+ # it will be wrapped by ` (' and `)$', so one must not match beginning or
+ # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc',
+ # as well as any symbol that contains `d'.
+ exclude_expsyms_GCJ="_GLOBAL_OFFSET_TABLE_"
+  # Although _GLOBAL_OFFSET_TABLE_ is a valid C symbol name, most a.out
+ # platforms (ab)use it in PIC code, but their linkers get confused if
+ # the symbol is explicitly referenced. Since portable code cannot
+ # rely on this symbol name, it's probably fine to never include it in
+ # preloaded symbol tables.
+ extract_expsyms_cmds=
+
+ case $host_os in
+ cygwin* | mingw* | pw32*)
+ # FIXME: the MSVC++ port hasn't been tested in a loooong time
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++.
+ if test "$GCC" != yes; then
+ with_gnu_ld=no
+ fi
+ ;;
+ openbsd*)
+ with_gnu_ld=no
+ ;;
+ esac
+
+ ld_shlibs_GCJ=yes
+ if test "$with_gnu_ld" = yes; then
+ # If archive_cmds runs LD, not CC, wlarc should be empty
+ wlarc='${wl}'
+
+ # See if GNU ld supports shared libraries.
+ case $host_os in
+ aix3* | aix4* | aix5*)
+ # On AIX/PPC, the GNU linker is very broken
+ if test "$host_cpu" != ia64; then
+ ld_shlibs_GCJ=no
+ cat <<EOF 1>&2
+
+*** Warning: the GNU linker, at least up to release 2.9.1, is reported
+*** to be unable to reliably create shared libraries on AIX.
+*** Therefore, libtool is disabling shared libraries support. If you
+*** really care for shared libraries, you may want to modify your PATH
+*** so that a non-GNU linker is found, and then restart.
+
+EOF
+ fi
+ ;;
+
+ amigaos*)
+ archive_cmds_GCJ='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ hardcode_libdir_flag_spec_GCJ='-L$libdir'
+ hardcode_minus_L_GCJ=yes
+
+ # Samuel A. Falvo II <kc5tja at dolphin.openprojects.net> reports
+ # that the semantics of dynamic libraries on AmigaOS, at least up
+ # to version 4, is to share data among multiple programs linked
+ # with the same dynamic library. Since this doesn't match the
+ # behavior of shared libraries on other platforms, we can't use
+ # them.
+ ld_shlibs_GCJ=no
+ ;;
+
+ beos*)
+ if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ allow_undefined_flag_GCJ=unsupported
+ # Joseph Beckenbach <jrb3 at best.com> says some releases of gcc
+ # support --undefined. This deserves some investigation. FIXME
+ archive_cmds_GCJ='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ else
+ ld_shlibs_GCJ=no
+ fi
+ ;;
+
+ cygwin* | mingw* | pw32*)
+      # hardcode_libdir_flag_spec_GCJ is actually meaningless,
+ # as there is no search path for DLLs.
+ hardcode_libdir_flag_spec_GCJ='-L$libdir'
+ allow_undefined_flag_GCJ=unsupported
+ always_export_symbols_GCJ=no
+ enable_shared_with_static_runtimes_GCJ=yes
+ export_symbols_cmds_GCJ='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGS] /s/.* \([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW] /s/.* //'\'' | sort | uniq > $export_symbols'
+
+ if $LD --help 2>&1 | grep 'auto-import' > /dev/null; then
+ archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ # If the export-symbols file already is a .def file (1st line
+ # is EXPORTS), use it as is; otherwise, prepend...
+ archive_expsym_cmds_GCJ='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+ cp $export_symbols $output_objdir/$soname.def;
+ else
+ echo EXPORTS > $output_objdir/$soname.def;
+ cat $export_symbols >> $output_objdir/$soname.def;
+ fi~
+ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--image-base=0x10000000 ${wl}--out-implib,$lib'
+ else
+	ld_shlibs_GCJ=no
+ fi
+ ;;
+
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ archive_cmds_GCJ='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ wlarc=
+ else
+ archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ fi
+ ;;
+
+ solaris* | sysv5*)
+ if $LD -v 2>&1 | grep 'BFD 2\.8' > /dev/null; then
+ ld_shlibs_GCJ=no
+ cat <<EOF 1>&2
+
+*** Warning: The releases 2.8.* of the GNU linker cannot reliably
+*** create shared libraries on Solaris systems. Therefore, libtool
+*** is disabling shared libraries support. We urge you to upgrade GNU
+*** binutils to release 2.9.1 or newer. Another option is to modify
+*** your PATH or compiler configuration so that the native linker is
+*** used, and then restart.
+
+EOF
+ elif $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ ld_shlibs_GCJ=no
+ fi
+ ;;
+
+ sunos4*)
+ archive_cmds_GCJ='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ wlarc=
+ hardcode_direct_GCJ=yes
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ linux*)
+ if $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then
+ tmp_archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_cmds_GCJ="$tmp_archive_cmds"
+ supports_anon_versioning=no
+ case `$LD -v 2>/dev/null` in
+ *\ 01.* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11
+ *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ...
+ *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ...
+ *\ 2.11.*) ;; # other 2.11 versions
+ *) supports_anon_versioning=yes ;;
+ esac
+ if test $supports_anon_versioning = yes; then
+ archive_expsym_cmds_GCJ='$echo "{ global:" > $output_objdir/$libname.ver~
+cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+$echo "local: *; };" >> $output_objdir/$libname.ver~
+ $CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib'
+ else
+ archive_expsym_cmds_GCJ="$tmp_archive_cmds"
+ fi
+ else
+ ld_shlibs_GCJ=no
+ fi
+ ;;
+
+ *)
+ if $LD --help 2>&1 | grep ': supported targets:.* elf' > /dev/null; then
+ archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+ archive_expsym_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+ else
+ ld_shlibs_GCJ=no
+ fi
+ ;;
+ esac
+
+ if test "$ld_shlibs_GCJ" = yes; then
+ runpath_var=LD_RUN_PATH
+ hardcode_libdir_flag_spec_GCJ='${wl}--rpath ${wl}$libdir'
+ export_dynamic_flag_spec_GCJ='${wl}--export-dynamic'
+      # ancient GNU ld didn't support --whole-archive et al.
+ if $LD --help 2>&1 | grep 'no-whole-archive' > /dev/null; then
+ whole_archive_flag_spec_GCJ="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
+ else
+ whole_archive_flag_spec_GCJ=
+ fi
+ fi
+ else
+ # PORTME fill in a description of your system's linker (not GNU ld)
+ case $host_os in
+ aix3*)
+ allow_undefined_flag_GCJ=unsupported
+ always_export_symbols_GCJ=yes
+ archive_expsym_cmds_GCJ='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname'
+ # Note: this linker hardcodes the directories in LIBPATH if there
+ # are no directories specified by -L.
+ hardcode_minus_L_GCJ=yes
+ if test "$GCC" = yes && test -z "$link_static_flag"; then
+ # Neither direct hardcoding nor static linking is supported with a
+ # broken collect2.
+ hardcode_direct_GCJ=unsupported
+ fi
+ ;;
+
+ aix4* | aix5*)
+ if test "$host_cpu" = ia64; then
+ # On IA64, the linker does run time linking by default, so we don't
+ # have to do anything special.
+ aix_use_runtimelinking=no
+ exp_sym_flag='-Bexport'
+ no_entry_flag=""
+ else
+ # If we're using GNU nm, then we don't want the "-C" option.
+ # -C means demangle to AIX nm, but means don't demangle with GNU nm
+ if $NM -V 2>&1 | grep 'GNU' > /dev/null; then
+ export_symbols_cmds_GCJ='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols'
+ else
+ export_symbols_cmds_GCJ='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$2 == "T") || (\$2 == "D") || (\$2 == "B")) && (substr(\$3,1,1) != ".")) { print \$3 } }'\'' | sort -u > $export_symbols'
+ fi
+ aix_use_runtimelinking=no
+
+ # Test if we are trying to use run time linking or normal
+ # AIX style linking. If -brtl is somewhere in LDFLAGS, we
+ # need to do runtime linking.
+ case $host_os in aix4.[23]|aix4.[23].*|aix5*)
+ for ld_flag in $LDFLAGS; do
+ if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then
+ aix_use_runtimelinking=yes
+ break
+ fi
+ done
+ esac
+
+ exp_sym_flag='-bexport'
+ no_entry_flag='-bnoentry'
+ fi
+
+ # When large executables or shared objects are built, AIX ld can
+ # have problems creating the table of contents. If linking a library
+ # or program results in "error TOC overflow" add -mminimal-toc to
+ # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
+ # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
+
+ archive_cmds_GCJ=''
+ hardcode_direct_GCJ=yes
+ hardcode_libdir_separator_GCJ=':'
+ link_all_deplibs_GCJ=yes
+
+ if test "$GCC" = yes; then
+	case $host_os in aix4.[012]|aix4.[012].*)
+	# We only want to do this on AIX 4.2 and lower; the check
+	# below for broken collect2 doesn't work under 4.3+
+ collect2name=`${CC} -print-prog-name=collect2`
+ if test -f "$collect2name" && \
+ strings "$collect2name" | grep resolve_lib_name >/dev/null
+ then
+ # We have reworked collect2
+ hardcode_direct_GCJ=yes
+ else
+ # We have old collect2
+ hardcode_direct_GCJ=unsupported
+ # It fails to find uninstalled libraries when the uninstalled
+ # path is not listed in the libpath. Setting hardcode_minus_L
+ # to unsupported forces relinking
+ hardcode_minus_L_GCJ=yes
+ hardcode_libdir_flag_spec_GCJ='-L$libdir'
+ hardcode_libdir_separator_GCJ=
+ fi
+ esac
+ shared_flag='-shared'
+ else
+ # not using gcc
+ if test "$host_cpu" = ia64; then
+ # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
+ # chokes on -Wl,-G. The following line is correct:
+ shared_flag='-G'
+ else
+ if test "$aix_use_runtimelinking" = yes; then
+ shared_flag='${wl}-G'
+ else
+ shared_flag='${wl}-bM:SRE'
+ fi
+ fi
+ fi
+
+ # It seems that -bexpall does not export symbols beginning with
+ # underscore (_), so it is better to generate a list of symbols to export.
+ always_export_symbols_GCJ=yes
+ if test "$aix_use_runtimelinking" = yes; then
+ # Warning - without using the other runtime loading flags (-brtl),
+ # -berok will link without error, but may produce a broken library.
+ allow_undefined_flag_GCJ='-berok'
+ # Determine the default libpath from the value encoded in an empty executable.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+
+aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`
+# Check for a 64-bit object if we didn't find anything.
+if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`; fi
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+
+ hardcode_libdir_flag_spec_GCJ='${wl}-blibpath:$libdir:'"$aix_libpath"
+ archive_expsym_cmds_GCJ="\$CC"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+ else
+ if test "$host_cpu" = ia64; then
+ hardcode_libdir_flag_spec_GCJ='${wl}-R $libdir:/usr/lib:/lib'
+ allow_undefined_flag_GCJ="-z nodefs"
+ archive_expsym_cmds_GCJ="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols"
+ else
+ # Determine the default libpath from the value encoded in an empty executable.
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+
+aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`
+# Check for a 64-bit object if we didn't find anything.
+if test -z "$aix_libpath"; then aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e '/Import File Strings/,/^$/ { /^0/ { s/^0 *\(.*\)$/\1/; p; }
+}'`; fi
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+
+ hardcode_libdir_flag_spec_GCJ='${wl}-blibpath:$libdir:'"$aix_libpath"
+ # Warning - without using the other run time loading flags,
+ # -berok will link without error, but may produce a broken library.
+ no_undefined_flag_GCJ=' ${wl}-bernotok'
+ allow_undefined_flag_GCJ=' ${wl}-berok'
+ # -bexpall does not export symbols beginning with underscore (_)
+ always_export_symbols_GCJ=yes
+ # Exported symbols can be pulled into shared objects from archives
+ whole_archive_flag_spec_GCJ=' '
+ archive_cmds_need_lc_GCJ=yes
+      # This is similar to how AIX traditionally builds its shared libraries.
+ archive_expsym_cmds_GCJ="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${wl}-bE:$export_symbols ${wl}-bnoentry${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname'
+ fi
+ fi
+ ;;
+
+ amigaos*)
+ archive_cmds_GCJ='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
+ hardcode_libdir_flag_spec_GCJ='-L$libdir'
+ hardcode_minus_L_GCJ=yes
+      # see comment about different semantics in the GNU ld section
+ ld_shlibs_GCJ=no
+ ;;
+
+ bsdi4*)
+ export_dynamic_flag_spec_GCJ=-rdynamic
+ ;;
+
+ cygwin* | mingw* | pw32*)
+ # When not using gcc, we currently assume that we are using
+ # Microsoft Visual C++.
+ # hardcode_libdir_flag_spec is actually meaningless, as there is
+ # no search path for DLLs.
+ hardcode_libdir_flag_spec_GCJ=' '
+ allow_undefined_flag_GCJ=unsupported
+ # Tell ltmain to make .lib files, not .a files.
+ libext=lib
+ # Tell ltmain to make .dll files, not .so files.
+ shrext=".dll"
+ # FIXME: Setting linknames here is a bad hack.
+ archive_cmds_GCJ='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | $SED -e '\''s/ -lc$//'\''` -link -dll~linknames='
+ # The linker will automatically build a .lib file if we build a DLL.
+ old_archive_From_new_cmds_GCJ='true'
+ # FIXME: Should let the user specify the lib program.
+ old_archive_cmds_GCJ='lib /OUT:$oldlib$oldobjs$old_deplibs'
+ fix_srcfile_path='`cygpath -w "$srcfile"`'
+ enable_shared_with_static_runtimes_GCJ=yes
+ ;;
+
+ darwin* | rhapsody*)
+ if test "$GXX" = yes ; then
+ archive_cmds_need_lc_GCJ=no
+ case "$host_os" in
+ rhapsody* | darwin1.[012])
+ allow_undefined_flag_GCJ='-undefined suppress'
+ ;;
+ *) # Darwin 1.3 on
+ if test -z ${MACOSX_DEPLOYMENT_TARGET} ; then
+ allow_undefined_flag_GCJ='-flat_namespace -undefined suppress'
+ else
+ case ${MACOSX_DEPLOYMENT_TARGET} in
+ 10.[012])
+ allow_undefined_flag_GCJ='-flat_namespace -undefined suppress'
+ ;;
+ 10.*)
+ allow_undefined_flag_GCJ='-undefined dynamic_lookup'
+ ;;
+ esac
+ fi
+ ;;
+ esac
+ lt_int_apple_cc_single_mod=no
+ output_verbose_link_cmd='echo'
+ if $CC -dumpspecs 2>&1 | grep 'single_module' >/dev/null ; then
+ lt_int_apple_cc_single_mod=yes
+ fi
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ archive_cmds_GCJ='$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ else
+ archive_cmds_GCJ='$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring'
+ fi
+ module_cmds_GCJ='$CC ${wl}-bind_at_load $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags'
+      # Don't fix this by using the ld -exported_symbols_list flag; it doesn't exist in older darwin ld's
+ if test "X$lt_int_apple_cc_single_mod" = Xyes ; then
+ archive_expsym_cmds_GCJ='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -dynamiclib -single_module $allow_undefined_flag -o $lib $libobjs $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ else
+ archive_expsym_cmds_GCJ='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC -r ${wl}-bind_at_load -keep_private_externs -nostdlib -o ${lib}-master.o $libobjs~$CC -dynamiclib $allow_undefined_flag -o $lib ${lib}-master.o $deplibs $compiler_flags -install_name $rpath/$soname $verstring~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ fi
+ module_expsym_cmds_GCJ='sed -e "s,#.*,," -e "s,^[ ]*,," -e "s,^\(..*\),_&," < $export_symbols > $output_objdir/${libname}-symbols.expsym~$CC $allow_undefined_flag -o $lib -bundle $libobjs $deplibs$compiler_flags~nmedit -s $output_objdir/${libname}-symbols.expsym ${lib}'
+ hardcode_direct_GCJ=no
+ hardcode_automatic_GCJ=yes
+ hardcode_shlibpath_var_GCJ=unsupported
+ whole_archive_flag_spec_GCJ='-all_load $convenience'
+ link_all_deplibs_GCJ=yes
+ else
+ ld_shlibs_GCJ=no
+ fi
+ ;;
+
+ dgux*)
+ archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec_GCJ='-L$libdir'
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ freebsd1*)
+ ld_shlibs_GCJ=no
+ ;;
+
+ # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor
+ # support. Future versions do this automatically, but an explicit c++rt0.o
+ # does not break anything, and helps significantly (at the cost of a little
+ # extra space).
+ freebsd2.2*)
+ archive_cmds_GCJ='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o'
+ hardcode_libdir_flag_spec_GCJ='-R$libdir'
+ hardcode_direct_GCJ=yes
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ # Unfortunately, older versions of FreeBSD 2 do not have this feature.
+ freebsd2*)
+ archive_cmds_GCJ='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_GCJ=yes
+ hardcode_minus_L_GCJ=yes
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+ freebsd* | kfreebsd*-gnu)
+ archive_cmds_GCJ='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec_GCJ='-R$libdir'
+ hardcode_direct_GCJ=yes
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ hpux9*)
+ if test "$GCC" = yes; then
+ archive_cmds_GCJ='$rm $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ else
+ archive_cmds_GCJ='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+ fi
+ hardcode_libdir_flag_spec_GCJ='${wl}+b ${wl}$libdir'
+ hardcode_libdir_separator_GCJ=:
+ hardcode_direct_GCJ=yes
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L_GCJ=yes
+ export_dynamic_flag_spec_GCJ='${wl}-E'
+ ;;
+
+ hpux10* | hpux11*)
+ if test "$GCC" = yes -a "$with_gnu_ld" = no; then
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ archive_cmds_GCJ='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ *)
+ archive_cmds_GCJ='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ ;;
+ esac
+ else
+ case "$host_cpu" in
+ hppa*64*|ia64*)
+ archive_cmds_GCJ='$LD -b +h $soname -o $lib $libobjs $deplibs $linker_flags'
+ ;;
+ *)
+ archive_cmds_GCJ='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+ ;;
+ esac
+ fi
+ if test "$with_gnu_ld" = no; then
+ case "$host_cpu" in
+ hppa*64*)
+ hardcode_libdir_flag_spec_GCJ='${wl}+b ${wl}$libdir'
+ hardcode_libdir_flag_spec_ld_GCJ='+b $libdir'
+ hardcode_libdir_separator_GCJ=:
+ hardcode_direct_GCJ=no
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+ ia64*)
+ hardcode_libdir_flag_spec_GCJ='-L$libdir'
+ hardcode_direct_GCJ=no
+ hardcode_shlibpath_var_GCJ=no
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L_GCJ=yes
+ ;;
+ *)
+ hardcode_libdir_flag_spec_GCJ='${wl}+b ${wl}$libdir'
+ hardcode_libdir_separator_GCJ=:
+ hardcode_direct_GCJ=yes
+ export_dynamic_flag_spec_GCJ='${wl}-E'
+
+ # hardcode_minus_L: Not really in the search PATH,
+ # but as the default location of the library.
+ hardcode_minus_L_GCJ=yes
+ ;;
+ esac
+ fi
+ ;;
+
+ irix5* | irix6* | nonstopux*)
+ if test "$GCC" = yes; then
+ archive_cmds_GCJ='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ else
+ archive_cmds_GCJ='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ hardcode_libdir_flag_spec_ld_GCJ='-rpath $libdir'
+ fi
+ hardcode_libdir_flag_spec_GCJ='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_GCJ=:
+ link_all_deplibs_GCJ=yes
+ ;;
+
+ netbsd* | knetbsd*-gnu)
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ archive_cmds_GCJ='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out
+ else
+ archive_cmds_GCJ='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF
+ fi
+ hardcode_libdir_flag_spec_GCJ='-R$libdir'
+ hardcode_direct_GCJ=yes
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ newsos6)
+ archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_GCJ=yes
+ hardcode_libdir_flag_spec_GCJ='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_GCJ=:
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ openbsd*)
+ hardcode_direct_GCJ=yes
+ hardcode_shlibpath_var_GCJ=no
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ archive_cmds_GCJ='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec_GCJ='${wl}-rpath,$libdir'
+ export_dynamic_flag_spec_GCJ='${wl}-E'
+ else
+ case $host_os in
+ openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*)
+ archive_cmds_GCJ='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec_GCJ='-R$libdir'
+ ;;
+ *)
+ archive_cmds_GCJ='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+ hardcode_libdir_flag_spec_GCJ='${wl}-rpath,$libdir'
+ ;;
+ esac
+ fi
+ ;;
+
+ os2*)
+ hardcode_libdir_flag_spec_GCJ='-L$libdir'
+ hardcode_minus_L_GCJ=yes
+ allow_undefined_flag_GCJ=unsupported
+ archive_cmds_GCJ='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def'
+ old_archive_From_new_cmds_GCJ='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def'
+ ;;
+
+ osf3*)
+ if test "$GCC" = yes; then
+ allow_undefined_flag_GCJ=' ${wl}-expect_unresolved ${wl}\*'
+ archive_cmds_GCJ='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ else
+ allow_undefined_flag_GCJ=' -expect_unresolved \*'
+ archive_cmds_GCJ='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ fi
+ hardcode_libdir_flag_spec_GCJ='${wl}-rpath ${wl}$libdir'
+ hardcode_libdir_separator_GCJ=:
+ ;;
+
+ osf4* | osf5*) # as osf3* with the addition of -msym flag
+ if test "$GCC" = yes; then
+ allow_undefined_flag_GCJ=' ${wl}-expect_unresolved ${wl}\*'
+ archive_cmds_GCJ='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ hardcode_libdir_flag_spec_GCJ='${wl}-rpath ${wl}$libdir'
+ else
+ allow_undefined_flag_GCJ=' -expect_unresolved \*'
+ archive_cmds_GCJ='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib'
+ archive_expsym_cmds_GCJ='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~
+ $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib~$rm $lib.exp'
+
+ # Both c and cxx compiler support -rpath directly
+ hardcode_libdir_flag_spec_GCJ='-rpath $libdir'
+ fi
+ hardcode_libdir_separator_GCJ=:
+ ;;
+
+ sco3.2v5*)
+ archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var_GCJ=no
+ export_dynamic_flag_spec_GCJ='${wl}-Bexport'
+ runpath_var=LD_RUN_PATH
+ hardcode_runpath_var=yes
+ ;;
+
+ solaris*)
+ no_undefined_flag_GCJ=' -z text'
+ if test "$GCC" = yes; then
+ archive_cmds_GCJ='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ archive_expsym_cmds_GCJ='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $CC -shared ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$rm $lib.exp'
+ else
+ archive_cmds_GCJ='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ archive_expsym_cmds_GCJ='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp'
+ fi
+ hardcode_libdir_flag_spec_GCJ='-R$libdir'
+ hardcode_shlibpath_var_GCJ=no
+ case $host_os in
+ solaris2.[0-5] | solaris2.[0-5].*) ;;
+ *) # Supported since Solaris 2.6 (maybe 2.5.1?)
+ whole_archive_flag_spec_GCJ='-z allextract$convenience -z defaultextract' ;;
+ esac
+ link_all_deplibs_GCJ=yes
+ ;;
+
+ sunos4*)
+ if test "x$host_vendor" = xsequent; then
+ # Use $CC to link under sequent, because it throws in some extra .o
+ # files that make .init and .fini sections work.
+ archive_cmds_GCJ='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds_GCJ='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags'
+ fi
+ hardcode_libdir_flag_spec_GCJ='-L$libdir'
+ hardcode_direct_GCJ=yes
+ hardcode_minus_L_GCJ=yes
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ sysv4)
+ case $host_vendor in
+ sni)
+ archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_GCJ=yes # is this really true???
+ ;;
+ siemens)
+	## LD is ld; it makes a PLAMLIB.
+	## CC just makes a GrossModule.
+ archive_cmds_GCJ='$LD -G -o $lib $libobjs $deplibs $linker_flags'
+ reload_cmds_GCJ='$CC -r -o $output$reload_objs'
+ hardcode_direct_GCJ=no
+ ;;
+ motorola)
+ archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_GCJ=no #Motorola manual says yes, but my tests say they lie
+ ;;
+ esac
+ runpath_var='LD_RUN_PATH'
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ sysv4.3*)
+ archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var_GCJ=no
+ export_dynamic_flag_spec_GCJ='-Bexport'
+ ;;
+
+ sysv4*MP*)
+ if test -d /usr/nec; then
+ archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_shlibpath_var_GCJ=no
+ runpath_var=LD_RUN_PATH
+ hardcode_runpath_var=yes
+ ld_shlibs_GCJ=yes
+ fi
+ ;;
+
+ sysv4.2uw2*)
+ archive_cmds_GCJ='$LD -G -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_direct_GCJ=yes
+ hardcode_minus_L_GCJ=no
+ hardcode_shlibpath_var_GCJ=no
+ hardcode_runpath_var=yes
+ runpath_var=LD_RUN_PATH
+ ;;
+
+ sysv5OpenUNIX8* | sysv5UnixWare7* | sysv5uw[78]* | unixware7*)
+ no_undefined_flag_GCJ='${wl}-z ${wl}text'
+ if test "$GCC" = yes; then
+ archive_cmds_GCJ='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ else
+ archive_cmds_GCJ='$CC -G ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ fi
+ runpath_var='LD_RUN_PATH'
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ sysv5*)
+ no_undefined_flag_GCJ=' -z text'
+      # $CC -shared without GNU ld will not create a library from C++
+      # object files and a static libstdc++, so better avoid it for now
+ archive_cmds_GCJ='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ archive_expsym_cmds_GCJ='$echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~
+ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp'
+ hardcode_libdir_flag_spec_GCJ=
+ hardcode_shlibpath_var_GCJ=no
+ runpath_var='LD_RUN_PATH'
+ ;;
+
+ uts4*)
+ archive_cmds_GCJ='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
+ hardcode_libdir_flag_spec_GCJ='-L$libdir'
+ hardcode_shlibpath_var_GCJ=no
+ ;;
+
+ *)
+ ld_shlibs_GCJ=no
+ ;;
+ esac
+ fi
+
+echo "$as_me:$LINENO: result: $ld_shlibs_GCJ" >&5
+echo "${ECHO_T}$ld_shlibs_GCJ" >&6
+test "$ld_shlibs_GCJ" = no && can_build_shared=no
+
+variables_saved_for_relink="PATH $shlibpath_var $runpath_var"
+if test "$GCC" = yes; then
+ variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH"
+fi
+
+#
+# Do we need to explicitly link libc?
+#
+case "x$archive_cmds_need_lc_GCJ" in
+x|xyes)
+ # Assume -lc should be added
+ archive_cmds_need_lc_GCJ=yes
+
+ if test "$enable_shared" = yes && test "$GCC" = yes; then
+ case $archive_cmds_GCJ in
+ *'~'*)
+ # FIXME: we may have to deal with multi-command sequences.
+ ;;
+ '$CC '*)
+ # Test whether the compiler implicitly links with -lc since on some
+ # systems, -lgcc has to come before -lc. If gcc already passes -lc
+ # to ld, don't add -lc before -lgcc.
+ echo "$as_me:$LINENO: checking whether -lc should be explicitly linked in" >&5
+echo $ECHO_N "checking whether -lc should be explicitly linked in... $ECHO_C" >&6
+ $rm conftest*
+ printf "$lt_simple_compile_test_code" > conftest.$ac_ext
+
+ if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } 2>conftest.err; then
+ soname=conftest
+ lib=conftest
+ libobjs=conftest.$ac_objext
+ deplibs=
+ wl=$lt_prog_compiler_wl_GCJ
+ compiler_flags=-v
+ linker_flags=-v
+ verstring=
+ output_objdir=.
+ libname=conftest
+ lt_save_allow_undefined_flag=$allow_undefined_flag_GCJ
+ allow_undefined_flag_GCJ=
+ if { (eval echo "$as_me:$LINENO: \"$archive_cmds_GCJ 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1\"") >&5
+ (eval $archive_cmds_GCJ 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+ then
+ archive_cmds_need_lc_GCJ=no
+ else
+ archive_cmds_need_lc_GCJ=yes
+ fi
+ allow_undefined_flag_GCJ=$lt_save_allow_undefined_flag
+ else
+ cat conftest.err 1>&5
+ fi
+ $rm conftest*
+ echo "$as_me:$LINENO: result: $archive_cmds_need_lc_GCJ" >&5
+echo "${ECHO_T}$archive_cmds_need_lc_GCJ" >&6
+ ;;
+ esac
+ fi
+ ;;
+esac
+
+echo "$as_me:$LINENO: checking dynamic linker characteristics" >&5
+echo $ECHO_N "checking dynamic linker characteristics... $ECHO_C" >&6
+library_names_spec=
+libname_spec='lib$name'
+soname_spec=
+shrext=".so"
+postinstall_cmds=
+postuninstall_cmds=
+finish_cmds=
+finish_eval=
+shlibpath_var=
+shlibpath_overrides_runpath=unknown
+version_type=none
+dynamic_linker="$host_os ld.so"
+sys_lib_dlsearch_path_spec="/lib /usr/lib"
+if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | grep ';' >/dev/null ; then
+ # if the path contains ";" then we assume it to be the separator
+ # otherwise default to the standard path separator (i.e. ":") - it is
+ # assumed that no part of a normal pathname contains ";" but that should
+    # be okay in the real world where ";" in dirpaths is itself problematic.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+else
+ sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
+fi
+need_lib_prefix=unknown
+hardcode_into_libs=no
+
+# when you set need_version to no, make sure it does not cause -set_version
+# flags to be left without arguments
+need_version=unknown
+
+case $host_os in
+aix3*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a'
+ shlibpath_var=LIBPATH
+
+ # AIX 3 has no versioning support, so we append a major version to the name.
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+
+aix4* | aix5*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ hardcode_into_libs=yes
+ if test "$host_cpu" = ia64; then
+ # AIX 5 supports IA64
+ library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ else
+ # With GCC up to 2.95.x, collect2 would create an import file
+ # for dependence libraries. The import file would start with
+ # the line `#! .'. This would cause the generated library to
+ # depend on `.', always an invalid library. This was fixed in
+ # development snapshots of GCC prior to 3.0.
+ case $host_os in
+ aix4 | aix4.[01] | aix4.[01].*)
+ if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
+ echo ' yes '
+ echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then
+ :
+ else
+ can_build_shared=no
+ fi
+ ;;
+ esac
+  # AIX (on Power*) has no versioning support, so currently we cannot hardcode the correct
+  # soname into the executable. Probably we can add versioning support to
+  # collect2, so additional links can be useful in the future.
+ if test "$aix_use_runtimelinking" = yes; then
+ # If using run time linking (on AIX 4.2 or later) use lib<name>.so
+ # instead of lib<name>.a to let people know that these are not
+ # typical AIX shared libraries.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ else
+ # We preserve .a as extension for shared libraries through AIX4.2
+ # and later when we are not doing run time linking.
+ library_names_spec='${libname}${release}.a $libname.a'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ fi
+ shlibpath_var=LIBPATH
+ fi
+ ;;
+
+amigaos*)
+ library_names_spec='$libname.ixlibrary $libname.a'
+ # Create ${libname}_ixlibrary.a entries in /sys/libs.
+ finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
+ ;;
+
+beos*)
+ library_names_spec='${libname}${shared_ext}'
+ dynamic_linker="$host_os ld.so"
+ shlibpath_var=LIBRARY_PATH
+ ;;
+
+bsdi4*)
+ version_type=linux
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
+ sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
+ # the default ld.so.conf also contains /usr/contrib/lib and
+ # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
+ # libtool to hard-code these into programs
+ ;;
+
+cygwin* | mingw* | pw32*)
+ version_type=windows
+ shrext=".dll"
+ need_version=no
+ need_lib_prefix=no
+
+ case $GCC,$host_os in
+ yes,cygwin* | yes,mingw* | yes,pw32*)
+ library_names_spec='$libname.dll.a'
+ # DLL is installed to $(libdir)/../bin by postinstall_cmds
+ postinstall_cmds='base_file=`basename \${file}`~
+ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i;echo \$dlname'\''`~
+ dldir=$destdir/`dirname \$dlpath`~
+ test -d \$dldir || mkdir -p \$dldir~
+ $install_prog $dir/$dlname \$dldir/$dlname'
+ postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
+ dlpath=$dir/\$dldll~
+ $rm \$dlpath'
+ shlibpath_overrides_runpath=yes
+
+ case $host_os in
+ cygwin*)
+ # Cygwin DLLs use 'cyg' prefix rather than 'lib'
+ soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec="/usr/lib /lib/w32api /lib /usr/local/lib"
+ ;;
+ mingw*)
+ # MinGW DLLs use traditional 'lib' prefix
+ soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+ sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | $SED -e "s/^libraries://" -e "s,=/,/,g"`
+ if echo "$sys_lib_search_path_spec" | grep ';[c-zC-Z]:/' >/dev/null; then
+ # It is most probably a Windows format PATH printed by
+ # mingw gcc, but we are running on Cygwin. Gcc prints its search
+ # path with ; separators, and with drive letters. We can handle the
+ # drive letters (cygwin fileutils understands them), so leave them,
+ # especially as we might pass files found there to a mingw objdump,
+ # which wouldn't understand a cygwinified path. Ahh.
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
+ else
+ sys_lib_search_path_spec=`echo "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
+ fi
+ ;;
+ pw32*)
+ # pw32 DLLs use 'pw' prefix rather than 'lib'
+ library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/./-/g'`${versuffix}${shared_ext}'
+ ;;
+ esac
+ ;;
+
+ *)
+ library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
+ ;;
+ esac
+ dynamic_linker='Win32 ld.exe'
+ # FIXME: first we should search . and the directory the executable is in
+ shlibpath_var=PATH
+ ;;
+
+darwin* | rhapsody*)
+ dynamic_linker="$host_os dyld"
+ version_type=darwin
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${versuffix}$shared_ext ${libname}${release}${major}$shared_ext ${libname}$shared_ext'
+ soname_spec='${libname}${release}${major}$shared_ext'
+ shlibpath_overrides_runpath=yes
+ shlibpath_var=DYLD_LIBRARY_PATH
+ shrext='$(test .$module = .yes && echo .so || echo .dylib)'
+  # The output of Apple's 'gcc -print-search-dirs' doesn't follow the usual format.
+ if test "$GCC" = yes; then
+ sys_lib_search_path_spec=`$CC -print-search-dirs | tr "\n" "$PATH_SEPARATOR" | sed -e 's/libraries:/@libraries:/' | tr "@" "\n" | grep "^libraries:" | sed -e "s/^libraries://" -e "s,=/,/,g" -e "s,$PATH_SEPARATOR, ,g" -e "s,.*,& /lib /usr/lib /usr/local/lib,g"`
+ else
+ sys_lib_search_path_spec='/lib /usr/lib /usr/local/lib'
+ fi
+ sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
+ ;;
+
+dgux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+freebsd1*)
+ dynamic_linker=no
+ ;;
+
+kfreebsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+freebsd*)
+ objformat=`test -x /usr/bin/objformat && /usr/bin/objformat || echo aout`
+ version_type=freebsd-$objformat
+ case $version_type in
+ freebsd-elf*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}'
+ need_version=no
+ need_lib_prefix=no
+ ;;
+ freebsd-*)
+ library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix'
+ need_version=yes
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_os in
+ freebsd2*)
+ shlibpath_overrides_runpath=yes
+ ;;
+  freebsd3.[01]* | freebsdelf3.[01]*)
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+ *) # from 3.2 on
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ ;;
+ esac
+ ;;
+
+gnu*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ hardcode_into_libs=yes
+ ;;
+
+hpux9* | hpux10* | hpux11*)
+ # Give a soname corresponding to the major version so that dld.sl refuses to
+ # link against other versions.
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ case "$host_cpu" in
+ ia64*)
+ shrext='.so'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.so"
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ if test "X$HPUX_IA64_MODE" = X32; then
+ sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib"
+ else
+ sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64"
+ fi
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ hppa*64*)
+ shrext='.sl'
+ hardcode_into_libs=yes
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH
+ shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64"
+ sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
+ ;;
+ *)
+ shrext='.sl'
+ dynamic_linker="$host_os dld.sl"
+ shlibpath_var=SHLIB_PATH
+ shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ ;;
+ esac
+ # HP-UX runs *really* slowly unless shared libraries are mode 555.
+ postinstall_cmds='chmod 555 $lib'
+ ;;
+
+irix5* | irix6* | nonstopux*)
+ case $host_os in
+ nonstopux*) version_type=nonstopux ;;
+ *)
+ if test "$lt_cv_prog_gnu_ld" = yes; then
+ version_type=linux
+ else
+ version_type=irix
+ fi ;;
+ esac
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}'
+ case $host_os in
+ irix5* | nonstopux*)
+ libsuff= shlibsuff=
+ ;;
+ *)
+ case $LD in # libtool.m4 will add one of these switches to LD
+ *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
+ libsuff= shlibsuff= libmagic=32-bit;;
+ *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
+ libsuff=32 shlibsuff=N32 libmagic=N32;;
+ *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
+ libsuff=64 shlibsuff=64 libmagic=64-bit;;
+ *) libsuff= shlibsuff= libmagic=never-match;;
+ esac
+ ;;
+ esac
+ shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}"
+ sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}"
+ hardcode_into_libs=yes
+ ;;
+
+# No shared lib support for Linux oldld, aout, or coff.
+linux*oldld* | linux*aout* | linux*coff*)
+ dynamic_linker=no
+ ;;
+
+# This must be Linux ELF.
+linux*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ # This implies no fast_install, which is unacceptable.
+ # Some rework will be needed to allow for fast_install
+ # before this can be enabled.
+ hardcode_into_libs=yes
+
+ # Append ld.so.conf contents to the search path
+ if test -f /etc/ld.so.conf; then
+    ld_extra=`$SED -e 's/[:,\t]/ /g;s/=[^=]*$//;s/=[^= ]* / /g' /etc/ld.so.conf`
+ sys_lib_dlsearch_path_spec="/lib /usr/lib $ld_extra"
+ fi
+
+ # We used to test for /lib/ld.so.1 and disable shared libraries on
+ # powerpc, because MkLinux only supported shared libraries with the
+ # GNU dynamic linker. Since this was broken with cross compilers,
+ # most powerpc-linux boxes support dynamic linking these days and
+ # people can always --disable-shared, the test was removed, and we
+ # assume the GNU/Linux dynamic linker is in use.
+ dynamic_linker='GNU/Linux ld.so'
+ ;;
+
+knetbsd*-gnu)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=no
+ hardcode_into_libs=yes
+ dynamic_linker='GNU ld.so'
+ ;;
+
+netbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=no
+ if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ dynamic_linker='NetBSD (a.out) ld.so'
+ else
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ dynamic_linker='NetBSD ld.elf_so'
+ fi
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ ;;
+
+newsos6)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+nto-qnx*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ ;;
+
+openbsd*)
+ version_type=sunos
+ need_lib_prefix=no
+ need_version=yes
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
+ case $host_os in
+ openbsd2.[89] | openbsd2.[89].*)
+ shlibpath_overrides_runpath=no
+ ;;
+ *)
+ shlibpath_overrides_runpath=yes
+ ;;
+ esac
+ else
+ shlibpath_overrides_runpath=yes
+ fi
+ ;;
+
+os2*)
+ libname_spec='$name'
+ shrext=".dll"
+ need_lib_prefix=no
+ library_names_spec='$libname${shared_ext} $libname.a'
+ dynamic_linker='OS/2 ld.exe'
+ shlibpath_var=LIBPATH
+ ;;
+
+osf3* | osf4* | osf5*)
+ version_type=osf
+ need_lib_prefix=no
+ need_version=no
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib"
+ sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec"
+ ;;
+
+sco3.2v5*)
+ version_type=osf
+ soname_spec='${libname}${release}${shared_ext}$major'
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+solaris*)
+ version_type=linux
+ need_lib_prefix=no
+ need_version=no
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ hardcode_into_libs=yes
+ # ldd complains unless libraries are executable
+ postinstall_cmds='chmod +x $lib'
+ ;;
+
+sunos4*)
+ version_type=sunos
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
+ finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir'
+ shlibpath_var=LD_LIBRARY_PATH
+ shlibpath_overrides_runpath=yes
+ if test "$with_gnu_ld" = yes; then
+ need_lib_prefix=no
+ fi
+ need_version=yes
+ ;;
+
+sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ case $host_vendor in
+ sni)
+ shlibpath_overrides_runpath=no
+ need_lib_prefix=no
+ export_dynamic_flag_spec='${wl}-Blargedynsym'
+ runpath_var=LD_RUN_PATH
+ ;;
+ siemens)
+ need_lib_prefix=no
+ ;;
+ motorola)
+ need_lib_prefix=no
+ need_version=no
+ shlibpath_overrides_runpath=no
+ sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib'
+ ;;
+ esac
+ ;;
+
+sysv4*MP*)
+ if test -d /usr/nec ;then
+ version_type=linux
+ library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}'
+ soname_spec='$libname${shared_ext}.$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ fi
+ ;;
+
+uts4*)
+ version_type=linux
+ library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
+ soname_spec='${libname}${release}${shared_ext}$major'
+ shlibpath_var=LD_LIBRARY_PATH
+ ;;
+
+*)
+ dynamic_linker=no
+ ;;
+esac
+echo "$as_me:$LINENO: result: $dynamic_linker" >&5
+echo "${ECHO_T}$dynamic_linker" >&6
+test "$dynamic_linker" = no && can_build_shared=no
+
+echo "$as_me:$LINENO: checking how to hardcode library paths into programs" >&5
+echo $ECHO_N "checking how to hardcode library paths into programs... $ECHO_C" >&6
+hardcode_action_GCJ=
+if test -n "$hardcode_libdir_flag_spec_GCJ" || \
+ test -n "$runpath_var GCJ" || \
+ test "X$hardcode_automatic_GCJ"="Xyes" ; then
+
+ # We can hardcode nonexistent directories.
+ if test "$hardcode_direct_GCJ" != no &&
+ # If the only mechanism to avoid hardcoding is shlibpath_var, we
+ # have to relink, otherwise we might link with an installed library
+ # when we should be linking with a yet-to-be-installed one
+ ## test "$_LT_AC_TAGVAR(hardcode_shlibpath_var, GCJ)" != no &&
+ test "$hardcode_minus_L_GCJ" != no; then
+ # Linking always hardcodes the temporary library directory.
+ hardcode_action_GCJ=relink
+ else
+ # We can link without hardcoding, and we can hardcode nonexistent dirs.
+ hardcode_action_GCJ=immediate
+ fi
+else
+ # We cannot hardcode anything, or else we can only hardcode existing
+ # directories.
+ hardcode_action_GCJ=unsupported
+fi
+echo "$as_me:$LINENO: result: $hardcode_action_GCJ" >&5
+echo "${ECHO_T}$hardcode_action_GCJ" >&6
+
+if test "$hardcode_action_GCJ" = relink; then
+ # Fast installation is not supported
+ enable_fast_install=no
+elif test "$shlibpath_overrides_runpath" = yes ||
+ test "$enable_shared" = no; then
+ # Fast installation is not necessary
+ enable_fast_install=needless
+fi
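
Here "hardcoding" means embedding the library directory in the binary at link time; "relink" means libtool must relink at install time so the temporary build directory does not end up baked into the installed program. A minimal sketch of a hardcoded link line, assuming a GCC/ELF toolchain and a hypothetical prefix:
    cc -o app app.o -L/usr/local/lib -lfoo -Wl,-rpath,/usr/local/lib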
+
+striplib=
+old_striplib=
+echo "$as_me:$LINENO: checking whether stripping libraries is possible" >&5
+echo $ECHO_N "checking whether stripping libraries is possible... $ECHO_C" >&6
+if test -n "$STRIP" && $STRIP -V 2>&1 | grep "GNU strip" >/dev/null; then
+ test -z "$old_striplib" && old_striplib="$STRIP --strip-debug"
+ test -z "$striplib" && striplib="$STRIP --strip-unneeded"
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+else
+# FIXME - insert some real tests, host_os isn't really good enough
+ case $host_os in
+ darwin*)
+ if test -n "$STRIP" ; then
+ striplib="$STRIP -x"
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+ else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+ ;;
+ *)
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+ ;;
+ esac
+fi
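
The two commands chosen above are what libtool runs later when asked to strip what it installs; with GNU binutils they behave roughly as follows (libfoo is hypothetical):
    strip --strip-debug    libfoo.a     # old_striplib: drop debug info from static archives
    strip --strip-unneeded libfoo.so    # striplib: also drop symbols not needed for relocation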
+
+if test "x$enable_dlopen" != xyes; then
+ enable_dlopen=unknown
+ enable_dlopen_self=unknown
+ enable_dlopen_self_static=unknown
+else
+ lt_cv_dlopen=no
+ lt_cv_dlopen_libs=
+
+ case $host_os in
+ beos*)
+ lt_cv_dlopen="load_add_on"
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+ ;;
+
+ mingw* | pw32*)
+ lt_cv_dlopen="LoadLibrary"
+ lt_cv_dlopen_libs=
+ ;;
+
+ cygwin*)
+ lt_cv_dlopen="dlopen"
+ lt_cv_dlopen_libs=
+ ;;
+
+ darwin*)
+ # if libdl is installed we need to link against it
+ echo "$as_me:$LINENO: checking for dlopen in -ldl" >&5
+echo $ECHO_N "checking for dlopen in -ldl... $ECHO_C" >&6
+if test "${ac_cv_lib_dl_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldl $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main ()
+{
+dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dl_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dl_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dl_dlopen" >&5
+echo "${ECHO_T}$ac_cv_lib_dl_dlopen" >&6
+if test $ac_cv_lib_dl_dlopen = yes; then
+ lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"
+else
+
+ lt_cv_dlopen="dyld"
+ lt_cv_dlopen_libs=
+ lt_cv_dlopen_self=yes
+
+fi
+
+ ;;
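
The link test above (and the shl_load, dlopen and dld_link probes that follow) deliberately declares the function as "char dlopen ();" instead of including <dlfcn.h>: only the link step matters, and the cache variable is set to yes when the linker can resolve the symbol from the library being tried. Stripped of the autoconf plumbing, it amounts to roughly this sketch:
    printf 'char dlopen ();\nint main () { dlopen (); return 0; }\n' > conftest.c
    if cc -o conftest conftest.c -ldl 2>/dev/null; then
      echo "dlopen found in -ldl"
    fi
    rm -f conftest conftest.c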
+
+ *)
+ echo "$as_me:$LINENO: checking for shl_load" >&5
+echo $ECHO_N "checking for shl_load... $ECHO_C" >&6
+if test "${ac_cv_func_shl_load+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+/* Define shl_load to an innocuous variant, in case <limits.h> declares shl_load.
+ For example, HP-UX 11i <limits.h> declares gettimeofday. */
+#define shl_load innocuous_shl_load
+
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char shl_load (); below.
+ Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ <limits.h> exists even on freestanding compilers. */
+
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+
+#undef shl_load
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char shl_load ();
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_shl_load) || defined (__stub___shl_load)
+choke me
+#else
+char (*f) () = shl_load;
+#endif
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+return f != shl_load;
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_func_shl_load=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_func_shl_load=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_func_shl_load" >&5
+echo "${ECHO_T}$ac_cv_func_shl_load" >&6
+if test $ac_cv_func_shl_load = yes; then
+ lt_cv_dlopen="shl_load"
+else
+ echo "$as_me:$LINENO: checking for shl_load in -ldld" >&5
+echo $ECHO_N "checking for shl_load in -ldld... $ECHO_C" >&6
+if test "${ac_cv_lib_dld_shl_load+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldld $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char shl_load ();
+int
+main ()
+{
+shl_load ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dld_shl_load=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dld_shl_load=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dld_shl_load" >&5
+echo "${ECHO_T}$ac_cv_lib_dld_shl_load" >&6
+if test $ac_cv_lib_dld_shl_load = yes; then
+ lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-dld"
+else
+ echo "$as_me:$LINENO: checking for dlopen" >&5
+echo $ECHO_N "checking for dlopen... $ECHO_C" >&6
+if test "${ac_cv_func_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+/* Define dlopen to an innocuous variant, in case <limits.h> declares dlopen.
+ For example, HP-UX 11i <limits.h> declares gettimeofday. */
+#define dlopen innocuous_dlopen
+
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char dlopen (); below.
+ Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ <limits.h> exists even on freestanding compilers. */
+
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+
+#undef dlopen
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_dlopen) || defined (__stub___dlopen)
+choke me
+#else
+char (*f) () = dlopen;
+#endif
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+return f != dlopen;
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_func_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_func_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_func_dlopen" >&5
+echo "${ECHO_T}$ac_cv_func_dlopen" >&6
+if test $ac_cv_func_dlopen = yes; then
+ lt_cv_dlopen="dlopen"
+else
+ echo "$as_me:$LINENO: checking for dlopen in -ldl" >&5
+echo $ECHO_N "checking for dlopen in -ldl... $ECHO_C" >&6
+if test "${ac_cv_lib_dl_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldl $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main ()
+{
+dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dl_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dl_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dl_dlopen" >&5
+echo "${ECHO_T}$ac_cv_lib_dl_dlopen" >&6
+if test $ac_cv_lib_dl_dlopen = yes; then
+ lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"
+else
+ echo "$as_me:$LINENO: checking for dlopen in -lsvld" >&5
+echo $ECHO_N "checking for dlopen in -lsvld... $ECHO_C" >&6
+if test "${ac_cv_lib_svld_dlopen+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-lsvld $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dlopen ();
+int
+main ()
+{
+dlopen ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_svld_dlopen=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_svld_dlopen=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_svld_dlopen" >&5
+echo "${ECHO_T}$ac_cv_lib_svld_dlopen" >&6
+if test $ac_cv_lib_svld_dlopen = yes; then
+ lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld"
+else
+ echo "$as_me:$LINENO: checking for dld_link in -ldld" >&5
+echo $ECHO_N "checking for dld_link in -ldld... $ECHO_C" >&6
+if test "${ac_cv_lib_dld_dld_link+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-ldld $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char dld_link ();
+int
+main ()
+{
+dld_link ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_dld_dld_link=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_dld_dld_link=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_dld_dld_link" >&5
+echo "${ECHO_T}$ac_cv_lib_dld_dld_link" >&6
+if test $ac_cv_lib_dld_dld_link = yes; then
+ lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-dld"
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+
+fi
+
+ ;;
+ esac
+
+ if test "x$lt_cv_dlopen" != xno; then
+ enable_dlopen=yes
+ else
+ enable_dlopen=no
+ fi
+
+ case $lt_cv_dlopen in
+ dlopen)
+ save_CPPFLAGS="$CPPFLAGS"
+ test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
+
+ save_LDFLAGS="$LDFLAGS"
+ eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"
+
+ save_LIBS="$LIBS"
+ LIBS="$lt_cv_dlopen_libs $LIBS"
+
+ echo "$as_me:$LINENO: checking whether a program can dlopen itself" >&5
+echo $ECHO_N "checking whether a program can dlopen itself... $ECHO_C" >&6
+if test "${lt_cv_dlopen_self+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test "$cross_compiling" = yes; then :
+ lt_cv_dlopen_self=cross
+else
+ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+ lt_status=$lt_dlunknown
+ cat > conftest.$ac_ext <<EOF
+#line 17232 "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+# define LT_DLGLOBAL DL_GLOBAL
+# else
+# define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW on the command line if we
+ find out it does not work on some platform. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+# define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+# ifdef DL_LAZY
+# define LT_DLLAZY_OR_NOW DL_LAZY
+# else
+# ifdef RTLD_NOW
+# define LT_DLLAZY_OR_NOW RTLD_NOW
+# else
+# ifdef DL_NOW
+# define LT_DLLAZY_OR_NOW DL_NOW
+# else
+# define LT_DLLAZY_OR_NOW 0
+# endif
+# endif
+# endif
+# endif
+#endif
+
+#ifdef __cplusplus
+extern "C" void exit (int);
+#endif
+
+void fnord() { int i=42;}
+int main ()
+{
+ void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+ int status = $lt_dlunknown;
+
+ if (self)
+ {
+ if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+ else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+ /* dlclose (self); */
+ }
+
+ exit (status);
+}
+EOF
+ if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && test -s conftest${ac_exeext} 2>/dev/null; then
+ (./conftest; exit; ) 2>/dev/null
+ lt_status=$?
+ case x$lt_status in
+ x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;;
+ x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;;
+ x$lt_dlunknown|x*) lt_cv_dlopen_self=no ;;
+ esac
+ else :
+ # compilation failed
+ lt_cv_dlopen_self=no
+ fi
+fi
+rm -fr conftest*
+
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_dlopen_self" >&5
+echo "${ECHO_T}$lt_cv_dlopen_self" >&6
+
+ if test "x$lt_cv_dlopen_self" = xyes; then
+ LDFLAGS="$LDFLAGS $link_static_flag"
+ echo "$as_me:$LINENO: checking whether a statically linked program can dlopen itself" >&5
+echo $ECHO_N "checking whether a statically linked program can dlopen itself... $ECHO_C" >&6
+if test "${lt_cv_dlopen_self_static+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test "$cross_compiling" = yes; then :
+ lt_cv_dlopen_self_static=cross
+else
+ lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+ lt_status=$lt_dlunknown
+ cat > conftest.$ac_ext <<EOF
+#line 17330 "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+# define LT_DLGLOBAL DL_GLOBAL
+# else
+# define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW on the command line if we
+ find out it does not work on some platform. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+# define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+# ifdef DL_LAZY
+# define LT_DLLAZY_OR_NOW DL_LAZY
+# else
+# ifdef RTLD_NOW
+# define LT_DLLAZY_OR_NOW RTLD_NOW
+# else
+# ifdef DL_NOW
+# define LT_DLLAZY_OR_NOW DL_NOW
+# else
+# define LT_DLLAZY_OR_NOW 0
+# endif
+# endif
+# endif
+# endif
+#endif
+
+#ifdef __cplusplus
+extern "C" void exit (int);
+#endif
+
+void fnord() { int i=42;}
+int main ()
+{
+ void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+ int status = $lt_dlunknown;
+
+ if (self)
+ {
+ if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+ else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+ /* dlclose (self); */
+ }
+
+ exit (status);
+}
+EOF
+ if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } && test -s conftest${ac_exeext} 2>/dev/null; then
+ (./conftest; exit; ) 2>/dev/null
+ lt_status=$?
+ case x$lt_status in
+ x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;;
+ x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;;
+ x$lt_dlunknown|x*) lt_cv_dlopen_self_static=no ;;
+ esac
+ else :
+ # compilation failed
+ lt_cv_dlopen_self_static=no
+ fi
+fi
+rm -fr conftest*
+
+
+fi
+echo "$as_me:$LINENO: result: $lt_cv_dlopen_self_static" >&5
+echo "${ECHO_T}$lt_cv_dlopen_self_static" >&6
+ fi
+
+ CPPFLAGS="$save_CPPFLAGS"
+ LDFLAGS="$save_LDFLAGS"
+ LIBS="$save_LIBS"
+ ;;
+ esac
+
+ case $lt_cv_dlopen_self in
+ yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;;
+ *) enable_dlopen_self=unknown ;;
+ esac
+
+ case $lt_cv_dlopen_self_static in
+ yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;;
+ *) enable_dlopen_self_static=unknown ;;
+ esac
+fi
+
+
+# The else clause should only fire when bootstrapping the
+# libtool distribution, otherwise you forgot to ship ltmain.sh
+# with your package, and you will get complaints that there are
+# no rules to generate ltmain.sh.
+if test -f "$ltmain"; then
+ # See if we are running on zsh, and set the options which allow our commands through
+ # without removal of \ escapes.
+ if test -n "${ZSH_VERSION+set}" ; then
+ setopt NO_GLOB_SUBST
+ fi
+ # Now quote all the things that may contain metacharacters while being
+ # careful not to overquote the AC_SUBSTed values. We take copies of the
+ # variables and quote the copies for generation of the libtool script.
+ for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC NM \
+ SED SHELL STRIP \
+ libname_spec library_names_spec soname_spec extract_expsyms_cmds \
+ old_striplib striplib file_magic_cmd finish_cmds finish_eval \
+ deplibs_check_method reload_flag reload_cmds need_locks \
+ lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ sys_lib_search_path_spec sys_lib_dlsearch_path_spec \
+ old_postinstall_cmds old_postuninstall_cmds \
+ compiler_GCJ \
+ CC_GCJ \
+ LD_GCJ \
+ lt_prog_compiler_wl_GCJ \
+ lt_prog_compiler_pic_GCJ \
+ lt_prog_compiler_static_GCJ \
+ lt_prog_compiler_no_builtin_flag_GCJ \
+ export_dynamic_flag_spec_GCJ \
+ thread_safe_flag_spec_GCJ \
+ whole_archive_flag_spec_GCJ \
+ enable_shared_with_static_runtimes_GCJ \
+ old_archive_cmds_GCJ \
+ old_archive_from_new_cmds_GCJ \
+ predep_objects_GCJ \
+ postdep_objects_GCJ \
+ predeps_GCJ \
+ postdeps_GCJ \
+ compiler_lib_search_path_GCJ \
+ archive_cmds_GCJ \
+ archive_expsym_cmds_GCJ \
+ postinstall_cmds_GCJ \
+ postuninstall_cmds_GCJ \
+ old_archive_from_expsyms_cmds_GCJ \
+ allow_undefined_flag_GCJ \
+ no_undefined_flag_GCJ \
+ export_symbols_cmds_GCJ \
+ hardcode_libdir_flag_spec_GCJ \
+ hardcode_libdir_flag_spec_ld_GCJ \
+ hardcode_libdir_separator_GCJ \
+ hardcode_automatic_GCJ \
+ module_cmds_GCJ \
+ module_expsym_cmds_GCJ \
+ lt_cv_prog_compiler_c_o_GCJ \
+ exclude_expsyms_GCJ \
+ include_expsyms_GCJ; do
+
+ case $var in
+ old_archive_cmds_GCJ | \
+ old_archive_from_new_cmds_GCJ | \
+ archive_cmds_GCJ | \
+ archive_expsym_cmds_GCJ | \
+ module_cmds_GCJ | \
+ module_expsym_cmds_GCJ | \
+ old_archive_from_expsyms_cmds_GCJ | \
+ export_symbols_cmds_GCJ | \
+ extract_expsyms_cmds | reload_cmds | finish_cmds | \
+ postinstall_cmds | postuninstall_cmds | \
+ old_postinstall_cmds | old_postuninstall_cmds | \
+ sys_lib_search_path_spec | sys_lib_dlsearch_path_spec)
+ # Double-quote double-evaled strings.
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\""
+ ;;
+ *)
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\""
+ ;;
+ esac
+ done
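
The quoting step matters because these values are re-read from the generated script: $sed_quote_subst backslash-escapes the characters that are special inside double quotes ($, `, \ and "). For a hypothetical value such as
    hardcode_libdir_flag_spec_GCJ='${wl}-rpath ${wl}$libdir'
the generated file ends up holding something like
    hardcode_libdir_flag_spec="\${wl}-rpath \${wl}\$libdir"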
+
+ case $lt_echo in
+ *'\$0 --fallback-echo"')
+ lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'`
+ ;;
+ esac
+
+cfgfile="$ofile"
+
+ cat <<__EOF__ >> "$cfgfile"
+# ### BEGIN LIBTOOL TAG CONFIG: $tagname
+
+# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`:
+
+# Shell to use when invoking shell scripts.
+SHELL=$lt_SHELL
+
+# Whether or not to build shared libraries.
+build_libtool_libs=$enable_shared
+
+# Whether or not to build static libraries.
+build_old_libs=$enable_static
+
+# Whether or not to add -lc for building shared libraries.
+build_libtool_need_lc=$archive_cmds_need_lc_GCJ
+
+# Whether or not to disallow shared libs when runtime libs are static
+allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_GCJ
+
+# Whether or not to optimize for fast installation.
+fast_install=$enable_fast_install
+
+# The host system.
+host_alias=$host_alias
+host=$host
+
+# An echo program that does not interpret backslashes.
+echo=$lt_echo
+
+# The archiver.
+AR=$lt_AR
+AR_FLAGS=$lt_AR_FLAGS
+
+# A C compiler.
+LTCC=$lt_LTCC
+
+# A language-specific compiler.
+CC=$lt_compiler_GCJ
+
+# Is the compiler the GNU C compiler?
+with_gcc=$GCC_GCJ
+
+# An ERE matcher.
+EGREP=$lt_EGREP
+
+# The linker used to build libraries.
+LD=$lt_LD_GCJ
+
+# Whether we need hard or soft links.
+LN_S=$lt_LN_S
+
+# A BSD-compatible nm program.
+NM=$lt_NM
+
+# A symbol stripping program
+STRIP=$lt_STRIP
+
+# Used to examine libraries when file_magic_cmd begins "file"
+MAGIC_CMD=$MAGIC_CMD
+
+# Used on cygwin: DLL creation program.
+DLLTOOL="$DLLTOOL"
+
+# Used on cygwin: object dumper.
+OBJDUMP="$OBJDUMP"
+
+# Used on cygwin: assembler.
+AS="$AS"
+
+# The name of the directory that contains temporary libtool files.
+objdir=$objdir
+
+# How to create reloadable object files.
+reload_flag=$lt_reload_flag
+reload_cmds=$lt_reload_cmds
+
+# How to pass a linker flag through the compiler.
+wl=$lt_lt_prog_compiler_wl_GCJ
+
+# Object file suffix (normally "o").
+objext="$ac_objext"
+
+# Old archive suffix (normally "a").
+libext="$libext"
+
+# Shared library suffix (normally ".so").
+shrext='$shrext'
+
+# Executable file suffix (normally "").
+exeext="$exeext"
+
+# Additional compiler flags for building library objects.
+pic_flag=$lt_lt_prog_compiler_pic_GCJ
+pic_mode=$pic_mode
+
+# What is the maximum length of a command?
+max_cmd_len=$lt_cv_sys_max_cmd_len
+
+# Does compiler simultaneously support -c and -o options?
+compiler_c_o=$lt_lt_cv_prog_compiler_c_o_GCJ
+
+# Must we lock files when doing compilation ?
+need_locks=$lt_need_locks
+
+# Do we need the lib prefix for modules?
+need_lib_prefix=$need_lib_prefix
+
+# Do we need a version for libraries?
+need_version=$need_version
+
+# Whether dlopen is supported.
+dlopen_support=$enable_dlopen
+
+# Whether dlopen of programs is supported.
+dlopen_self=$enable_dlopen_self
+
+# Whether dlopen of statically linked programs is supported.
+dlopen_self_static=$enable_dlopen_self_static
+
+# Compiler flag to prevent dynamic linking.
+link_static_flag=$lt_lt_prog_compiler_static_GCJ
+
+# Compiler flag to turn off builtin functions.
+no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_GCJ
+
+# Compiler flag to allow reflexive dlopens.
+export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_GCJ
+
+# Compiler flag to generate shared objects directly from archives.
+whole_archive_flag_spec=$lt_whole_archive_flag_spec_GCJ
+
+# Compiler flag to generate thread-safe objects.
+thread_safe_flag_spec=$lt_thread_safe_flag_spec_GCJ
+
+# Library versioning type.
+version_type=$version_type
+
+# Format of library name prefix.
+libname_spec=$lt_libname_spec
+
+# List of archive names. First name is the real one, the rest are links.
+# The last name is the one that the linker finds with -lNAME.
+library_names_spec=$lt_library_names_spec
+
+# The coded name of the library, if different from the real name.
+soname_spec=$lt_soname_spec
+
+# Commands used to build and install an old-style archive.
+RANLIB=$lt_RANLIB
+old_archive_cmds=$lt_old_archive_cmds_GCJ
+old_postinstall_cmds=$lt_old_postinstall_cmds
+old_postuninstall_cmds=$lt_old_postuninstall_cmds
+
+# Create an old-style archive from a shared archive.
+old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_GCJ
+
+# Create a temporary old-style archive to link instead of a shared archive.
+old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_GCJ
+
+# Commands used to build and install a shared archive.
+archive_cmds=$lt_archive_cmds_GCJ
+archive_expsym_cmds=$lt_archive_expsym_cmds_GCJ
+postinstall_cmds=$lt_postinstall_cmds
+postuninstall_cmds=$lt_postuninstall_cmds
+
+# Commands used to build a loadable module (assumed same as above if empty)
+module_cmds=$lt_module_cmds_GCJ
+module_expsym_cmds=$lt_module_expsym_cmds_GCJ
+
+# Commands to strip libraries.
+old_striplib=$lt_old_striplib
+striplib=$lt_striplib
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predep_objects=$lt_predep_objects_GCJ
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdep_objects=$lt_postdep_objects_GCJ
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predeps=$lt_predeps_GCJ
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdeps=$lt_postdeps_GCJ
+
+# The library search path used internally by the compiler when linking
+# a shared library.
+compiler_lib_search_path=$lt_compiler_lib_search_path_GCJ
+
+# Method to check whether dependent libraries are shared objects.
+deplibs_check_method=$lt_deplibs_check_method
+
+# Command to use when deplibs_check_method == file_magic.
+file_magic_cmd=$lt_file_magic_cmd
+
+# Flag that allows shared libraries with undefined symbols to be built.
+allow_undefined_flag=$lt_allow_undefined_flag_GCJ
+
+# Flag that forces no undefined symbols.
+no_undefined_flag=$lt_no_undefined_flag_GCJ
+
+# Commands used to finish a libtool library installation in a directory.
+finish_cmds=$lt_finish_cmds
+
+# Same as above, but a single script fragment to be evaled but not shown.
+finish_eval=$lt_finish_eval
+
+# Take the output of nm and produce a listing of raw symbols and C names.
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe
+
+# Transform the output of nm into a proper C declaration.
+global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl
+
+# Transform the output of nm into a C name/address pair.
+global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+
+# This is the shared library runtime path variable.
+runpath_var=$runpath_var
+
+# This is the shared library path variable.
+shlibpath_var=$shlibpath_var
+
+# Is shlibpath searched before the hard-coded library search path?
+shlibpath_overrides_runpath=$shlibpath_overrides_runpath
+
+# How to hardcode a shared library path into an executable.
+hardcode_action=$hardcode_action_GCJ
+
+# Whether we should hardcode library paths into libraries.
+hardcode_into_libs=$hardcode_into_libs
+
+# Flag to hardcode \$libdir into a binary during linking.
+# This must work even if \$libdir does not exist.
+hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_GCJ
+
+# If ld is used when linking, flag to hardcode \$libdir into
+# a binary during linking. This must work even if \$libdir does
+# not exist.
+hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld_GCJ
+
+# Whether we need a single -rpath flag with a separated argument.
+hardcode_libdir_separator=$lt_hardcode_libdir_separator_GCJ
+
+# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the
+# resulting binary.
+hardcode_direct=$hardcode_direct_GCJ
+
+# Set to yes if using the -LDIR flag during linking hardcodes DIR into the
+# resulting binary.
+hardcode_minus_L=$hardcode_minus_L_GCJ
+
+# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into
+# the resulting binary.
+hardcode_shlibpath_var=$hardcode_shlibpath_var_GCJ
+
+# Set to yes if building a shared library automatically hardcodes DIR into the library
+# and all subsequent libraries and executables linked against it.
+hardcode_automatic=$hardcode_automatic_GCJ
+
+# Variables whose values should be saved in libtool wrapper scripts and
+# restored at relink time.
+variables_saved_for_relink="$variables_saved_for_relink"
+
+# Whether libtool must link a program against all its dependency libraries.
+link_all_deplibs=$link_all_deplibs_GCJ
+
+# Compile-time system search path for libraries
+sys_lib_search_path_spec=$lt_sys_lib_search_path_spec
+
+# Run-time system search path for libraries
+sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec
+
+# Fix the shell variable \$srcfile for the compiler.
+fix_srcfile_path="$fix_srcfile_path_GCJ"
+
+# Set to yes if exported symbols are required.
+always_export_symbols=$always_export_symbols_GCJ
+
+# The commands to list exported symbols.
+export_symbols_cmds=$lt_export_symbols_cmds_GCJ
+
+# The commands to extract the exported symbol list from a shared archive.
+extract_expsyms_cmds=$lt_extract_expsyms_cmds
+
+# Symbols that should not be listed in the preloaded symbols.
+exclude_expsyms=$lt_exclude_expsyms_GCJ
+
+# Symbols that must always be exported.
+include_expsyms=$lt_include_expsyms_GCJ
+
+# ### END LIBTOOL TAG CONFIG: $tagname
+
+__EOF__
+
+
+else
+ # If there is no Makefile yet, we rely on a make rule to execute
+ # `config.status --recheck' to rerun these tests and create the
+ # libtool script then.
+ ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'`
+ if test -f "$ltmain_in"; then
+ test -f Makefile && make "$ltmain"
+ fi
+fi
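
To make the library_names_spec/soname_spec entries written above concrete: on an ELF host such as Linux they typically expand, for a hypothetical libfoo with version suffix 1.2.3, to
    libfoo.so.1.2.3    # the real file
    libfoo.so.1        # the soname recorded by the dynamic linker
    libfoo.so          # the name the link editor finds via -lfoo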
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+CC="$lt_save_CC"
+
+ else
+ tagname=""
+ fi
+ ;;
+
+ RC)
+
+
+
+# Source file extension for RC test sources.
+ac_ext=rc
+
+# Object file extension for compiled RC test sources.
+objext=o
+objext_RC=$objext
+
+# Code to be used in simple compile tests
+lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }\n'
+
+# Code to be used in simple link tests
+lt_simple_link_test_code="$lt_simple_compile_test_code"
+
+# ltmain only uses $CC for tagged configurations so make sure $CC is set.
+
+# If no C compiler was specified, use CC.
+LTCC=${LTCC-"$CC"}
+
+# Allow CC to be a program name with arguments.
+compiler=$CC
+
+
+# Allow CC to be a program name with arguments.
+lt_save_CC="$CC"
+CC=${RC-"windres"}
+compiler=$CC
+compiler_RC=$CC
+lt_cv_prog_compiler_c_o_RC=yes
+
+# The else clause should only fire when bootstrapping the
+# libtool distribution, otherwise you forgot to ship ltmain.sh
+# with your package, and you will get complaints that there are
+# no rules to generate ltmain.sh.
+if test -f "$ltmain"; then
+ # See if we are running on zsh, and set the options which allow our commands through
+ # without removal of \ escapes.
+ if test -n "${ZSH_VERSION+set}" ; then
+ setopt NO_GLOB_SUBST
+ fi
+ # Now quote all the things that may contain metacharacters while being
+ # careful not to overquote the AC_SUBSTed values. We take copies of the
+ # variables and quote the copies for generation of the libtool script.
+ for var in echo old_CC old_CFLAGS AR AR_FLAGS EGREP RANLIB LN_S LTCC NM \
+ SED SHELL STRIP \
+ libname_spec library_names_spec soname_spec extract_expsyms_cmds \
+ old_striplib striplib file_magic_cmd finish_cmds finish_eval \
+ deplibs_check_method reload_flag reload_cmds need_locks \
+ lt_cv_sys_global_symbol_pipe lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ sys_lib_search_path_spec sys_lib_dlsearch_path_spec \
+ old_postinstall_cmds old_postuninstall_cmds \
+ compiler_RC \
+ CC_RC \
+ LD_RC \
+ lt_prog_compiler_wl_RC \
+ lt_prog_compiler_pic_RC \
+ lt_prog_compiler_static_RC \
+ lt_prog_compiler_no_builtin_flag_RC \
+ export_dynamic_flag_spec_RC \
+ thread_safe_flag_spec_RC \
+ whole_archive_flag_spec_RC \
+ enable_shared_with_static_runtimes_RC \
+ old_archive_cmds_RC \
+ old_archive_from_new_cmds_RC \
+ predep_objects_RC \
+ postdep_objects_RC \
+ predeps_RC \
+ postdeps_RC \
+ compiler_lib_search_path_RC \
+ archive_cmds_RC \
+ archive_expsym_cmds_RC \
+ postinstall_cmds_RC \
+ postuninstall_cmds_RC \
+ old_archive_from_expsyms_cmds_RC \
+ allow_undefined_flag_RC \
+ no_undefined_flag_RC \
+ export_symbols_cmds_RC \
+ hardcode_libdir_flag_spec_RC \
+ hardcode_libdir_flag_spec_ld_RC \
+ hardcode_libdir_separator_RC \
+ hardcode_automatic_RC \
+ module_cmds_RC \
+ module_expsym_cmds_RC \
+ lt_cv_prog_compiler_c_o_RC \
+ exclude_expsyms_RC \
+ include_expsyms_RC; do
+
+ case $var in
+ old_archive_cmds_RC | \
+ old_archive_from_new_cmds_RC | \
+ archive_cmds_RC | \
+ archive_expsym_cmds_RC | \
+ module_cmds_RC | \
+ module_expsym_cmds_RC | \
+ old_archive_from_expsyms_cmds_RC | \
+ export_symbols_cmds_RC | \
+ extract_expsyms_cmds | reload_cmds | finish_cmds | \
+ postinstall_cmds | postuninstall_cmds | \
+ old_postinstall_cmds | old_postuninstall_cmds | \
+ sys_lib_search_path_spec | sys_lib_dlsearch_path_spec)
+ # Double-quote double-evaled strings.
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\""
+ ;;
+ *)
+ eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\""
+ ;;
+ esac
+ done
+
+ case $lt_echo in
+ *'\$0 --fallback-echo"')
+ lt_echo=`$echo "X$lt_echo" | $Xsed -e 's/\\\\\\\$0 --fallback-echo"$/$0 --fallback-echo"/'`
+ ;;
+ esac
+
+cfgfile="$ofile"
+
+ cat <<__EOF__ >> "$cfgfile"
+# ### BEGIN LIBTOOL TAG CONFIG: $tagname
+
+# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`:
+
+# Shell to use when invoking shell scripts.
+SHELL=$lt_SHELL
+
+# Whether or not to build shared libraries.
+build_libtool_libs=$enable_shared
+
+# Whether or not to build static libraries.
+build_old_libs=$enable_static
+
+# Whether or not to add -lc for building shared libraries.
+build_libtool_need_lc=$archive_cmds_need_lc_RC
+
+# Whether or not to disallow shared libs when runtime libs are static
+allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_RC
+
+# Whether or not to optimize for fast installation.
+fast_install=$enable_fast_install
+
+# The host system.
+host_alias=$host_alias
+host=$host
+
+# An echo program that does not interpret backslashes.
+echo=$lt_echo
+
+# The archiver.
+AR=$lt_AR
+AR_FLAGS=$lt_AR_FLAGS
+
+# A C compiler.
+LTCC=$lt_LTCC
+
+# A language-specific compiler.
+CC=$lt_compiler_RC
+
+# Is the compiler the GNU C compiler?
+with_gcc=$GCC_RC
+
+# An ERE matcher.
+EGREP=$lt_EGREP
+
+# The linker used to build libraries.
+LD=$lt_LD_RC
+
+# Whether we need hard or soft links.
+LN_S=$lt_LN_S
+
+# A BSD-compatible nm program.
+NM=$lt_NM
+
+# A symbol stripping program
+STRIP=$lt_STRIP
+
+# Used to examine libraries when file_magic_cmd begins "file"
+MAGIC_CMD=$MAGIC_CMD
+
+# Used on cygwin: DLL creation program.
+DLLTOOL="$DLLTOOL"
+
+# Used on cygwin: object dumper.
+OBJDUMP="$OBJDUMP"
+
+# Used on cygwin: assembler.
+AS="$AS"
+
+# The name of the directory that contains temporary libtool files.
+objdir=$objdir
+
+# How to create reloadable object files.
+reload_flag=$lt_reload_flag
+reload_cmds=$lt_reload_cmds
+
+# How to pass a linker flag through the compiler.
+wl=$lt_lt_prog_compiler_wl_RC
+
+# Object file suffix (normally "o").
+objext="$ac_objext"
+
+# Old archive suffix (normally "a").
+libext="$libext"
+
+# Shared library suffix (normally ".so").
+shrext='$shrext'
+
+# Executable file suffix (normally "").
+exeext="$exeext"
+
+# Additional compiler flags for building library objects.
+pic_flag=$lt_lt_prog_compiler_pic_RC
+pic_mode=$pic_mode
+
+# What is the maximum length of a command?
+max_cmd_len=$lt_cv_sys_max_cmd_len
+
+# Does compiler simultaneously support -c and -o options?
+compiler_c_o=$lt_lt_cv_prog_compiler_c_o_RC
+
+# Must we lock files when doing compilation ?
+need_locks=$lt_need_locks
+
+# Do we need the lib prefix for modules?
+need_lib_prefix=$need_lib_prefix
+
+# Do we need a version for libraries?
+need_version=$need_version
+
+# Whether dlopen is supported.
+dlopen_support=$enable_dlopen
+
+# Whether dlopen of programs is supported.
+dlopen_self=$enable_dlopen_self
+
+# Whether dlopen of statically linked programs is supported.
+dlopen_self_static=$enable_dlopen_self_static
+
+# Compiler flag to prevent dynamic linking.
+link_static_flag=$lt_lt_prog_compiler_static_RC
+
+# Compiler flag to turn off builtin functions.
+no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_RC
+
+# Compiler flag to allow reflexive dlopens.
+export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_RC
+
+# Compiler flag to generate shared objects directly from archives.
+whole_archive_flag_spec=$lt_whole_archive_flag_spec_RC
+
+# Compiler flag to generate thread-safe objects.
+thread_safe_flag_spec=$lt_thread_safe_flag_spec_RC
+
+# Library versioning type.
+version_type=$version_type
+
+# Format of library name prefix.
+libname_spec=$lt_libname_spec
+
+# List of archive names. First name is the real one, the rest are links.
+# The last name is the one that the linker finds with -lNAME.
+library_names_spec=$lt_library_names_spec
+
+# The coded name of the library, if different from the real name.
+soname_spec=$lt_soname_spec
+
+# Commands used to build and install an old-style archive.
+RANLIB=$lt_RANLIB
+old_archive_cmds=$lt_old_archive_cmds_RC
+old_postinstall_cmds=$lt_old_postinstall_cmds
+old_postuninstall_cmds=$lt_old_postuninstall_cmds
+
+# Create an old-style archive from a shared archive.
+old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_RC
+
+# Create a temporary old-style archive to link instead of a shared archive.
+old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_RC
+
+# Commands used to build and install a shared archive.
+archive_cmds=$lt_archive_cmds_RC
+archive_expsym_cmds=$lt_archive_expsym_cmds_RC
+postinstall_cmds=$lt_postinstall_cmds
+postuninstall_cmds=$lt_postuninstall_cmds
+
+# Commands used to build a loadable module (assumed same as above if empty)
+module_cmds=$lt_module_cmds_RC
+module_expsym_cmds=$lt_module_expsym_cmds_RC
+
+# Commands to strip libraries.
+old_striplib=$lt_old_striplib
+striplib=$lt_striplib
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predep_objects=$lt_predep_objects_RC
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdep_objects=$lt_postdep_objects_RC
+
+# Dependencies to place before the objects being linked to create a
+# shared library.
+predeps=$lt_predeps_RC
+
+# Dependencies to place after the objects being linked to create a
+# shared library.
+postdeps=$lt_postdeps_RC
+
+# The library search path used internally by the compiler when linking
+# a shared library.
+compiler_lib_search_path=$lt_compiler_lib_search_path_RC
+
+# Method to check whether dependent libraries are shared objects.
+deplibs_check_method=$lt_deplibs_check_method
+
+# Command to use when deplibs_check_method == file_magic.
+file_magic_cmd=$lt_file_magic_cmd
+
+# Flag that allows shared libraries with undefined symbols to be built.
+allow_undefined_flag=$lt_allow_undefined_flag_RC
+
+# Flag that forces no undefined symbols.
+no_undefined_flag=$lt_no_undefined_flag_RC
+
+# Commands used to finish a libtool library installation in a directory.
+finish_cmds=$lt_finish_cmds
+
+# Same as above, but a single script fragment to be evaled but not shown.
+finish_eval=$lt_finish_eval
+
+# Take the output of nm and produce a listing of raw symbols and C names.
+global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe
+
+# Transform the output of nm into a proper C declaration.
+global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl
+
+# Transform the output of nm into a C name/address pair.
+global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+
+# This is the shared library runtime path variable.
+runpath_var=$runpath_var
+
+# This is the shared library path variable.
+shlibpath_var=$shlibpath_var
+
+# Is shlibpath searched before the hard-coded library search path?
+shlibpath_overrides_runpath=$shlibpath_overrides_runpath
+
+# How to hardcode a shared library path into an executable.
+hardcode_action=$hardcode_action_RC
+
+# Whether we should hardcode library paths into libraries.
+hardcode_into_libs=$hardcode_into_libs
+
+# Flag to hardcode \$libdir into a binary during linking.
+# This must work even if \$libdir does not exist.
+hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_RC
+
+# If ld is used when linking, flag to hardcode \$libdir into
+# a binary during linking. This must work even if \$libdir does
+# not exist.
+hardcode_libdir_flag_spec_ld=$lt_hardcode_libdir_flag_spec_ld_RC
+
+# Whether we need a single -rpath flag with a separated argument.
+hardcode_libdir_separator=$lt_hardcode_libdir_separator_RC
+
+# Set to yes if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the
+# resulting binary.
+hardcode_direct=$hardcode_direct_RC
+
+# Set to yes if using the -LDIR flag during linking hardcodes DIR into the
+# resulting binary.
+hardcode_minus_L=$hardcode_minus_L_RC
+
+# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into
+# the resulting binary.
+hardcode_shlibpath_var=$hardcode_shlibpath_var_RC
+
+# Set to yes if building a shared library automatically hardcodes DIR into the library
+# and all subsequent libraries and executables linked against it.
+hardcode_automatic=$hardcode_automatic_RC
+
+# Variables whose values should be saved in libtool wrapper scripts and
+# restored at relink time.
+variables_saved_for_relink="$variables_saved_for_relink"
+
+# Whether libtool must link a program against all its dependency libraries.
+link_all_deplibs=$link_all_deplibs_RC
+
+# Compile-time system search path for libraries
+sys_lib_search_path_spec=$lt_sys_lib_search_path_spec
+
+# Run-time system search path for libraries
+sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec
+
+# Fix the shell variable \$srcfile for the compiler.
+fix_srcfile_path="$fix_srcfile_path_RC"
+
+# Set to yes if exported symbols are required.
+always_export_symbols=$always_export_symbols_RC
+
+# The commands to list exported symbols.
+export_symbols_cmds=$lt_export_symbols_cmds_RC
+
+# The commands to extract the exported symbol list from a shared archive.
+extract_expsyms_cmds=$lt_extract_expsyms_cmds
+
+# Symbols that should not be listed in the preloaded symbols.
+exclude_expsyms=$lt_exclude_expsyms_RC
+
+# Symbols that must always be exported.
+include_expsyms=$lt_include_expsyms_RC
+
+# ### END LIBTOOL TAG CONFIG: $tagname
+
+__EOF__
+
+
+else
+ # If there is no Makefile yet, we rely on a make rule to execute
+ # `config.status --recheck' to rerun these tests and create the
+ # libtool script then.
+ ltmain_in=`echo $ltmain | sed -e 's/\.sh$/.in/'`
+ if test -f "$ltmain_in"; then
+ test -f Makefile && make "$ltmain"
+ fi
+fi
+
+
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+CC="$lt_save_CC"
+
+ ;;
+
+ *)
+ { { echo "$as_me:$LINENO: error: Unsupported tag name: $tagname" >&5
+echo "$as_me: error: Unsupported tag name: $tagname" >&2;}
+ { (exit 1); exit 1; }; }
+ ;;
+ esac
+
+ # Append the new tag name to the list of available tags.
+ if test -n "$tagname" ; then
+ available_tags="$available_tags $tagname"
+ fi
+ fi
+ done
+ IFS="$lt_save_ifs"
+
+ # Now substitute the updated list of available tags.
+ if eval "sed -e 's/^available_tags=.*\$/available_tags=\"$available_tags\"/' \"$ofile\" > \"${ofile}T\""; then
+ mv "${ofile}T" "$ofile"
+ chmod +x "$ofile"
+ else
+ rm -f "${ofile}T"
+ { { echo "$as_me:$LINENO: error: unable to update list of available tagged configurations." >&5
+echo "$as_me: error: unable to update list of available tagged configurations." >&2;}
+ { (exit 1); exit 1; }; }
+ fi
+fi
+
+
+
+# This can be used to rebuild libtool when needed
+LIBTOOL_DEPS="$ac_aux_dir/ltmain.sh"
+
+# Always use our own libtool.
+LIBTOOL='$(SHELL) $(top_builddir)/libtool'
+
+# Prevent multiple expansion
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# Find a good install program. We prefer a C program (faster),
+# so one script is as good as another. But avoid the broken or
+# incompatible versions:
+# SysV /etc/install, /usr/sbin/install
+# SunOS /usr/etc/install
+# IRIX /sbin/install
+# AIX /bin/install
+# AmigaOS /C/install, which installs bootblocks on floppy discs
+# AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag
+# AFS /usr/afsws/bin/install, which mishandles nonexistent args
+# SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff"
+# OS/2's system install, which has a completely different semantic
+# ./install, which can be erroneously created by make from ./install.sh.
+echo "$as_me:$LINENO: checking for a BSD-compatible install" >&5
+echo $ECHO_N "checking for a BSD-compatible install... $ECHO_C" >&6
+if test -z "$INSTALL"; then
+if test "${ac_cv_path_install+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ # Account for people who put trailing slashes in PATH elements.
+case $as_dir/ in
+ ./ | .// | /[cC]/* | \
+ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \
+ ?:\\/os2\\/install\\/* | ?:\\/OS2\\/INSTALL\\/* | \
+ /usr/ucb/* ) ;;
+ *)
+ # OSF1 and SCO ODT 3.0 have their own names for install.
+ # Don't use installbsd from OSF since it installs stuff as root
+ # by default.
+ for ac_prog in ginstall scoinst install; do
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_prog$ac_exec_ext"; then
+ if test $ac_prog = install &&
+ grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then
+ # AIX install. It has an incompatible calling convention.
+ :
+ elif test $ac_prog = install &&
+ grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then
+ # program-specific install script used by HP pwplus--don't use.
+ :
+ else
+ ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c"
+ break 3
+ fi
+ fi
+ done
+ done
+ ;;
+esac
+done
+
+
+fi
+ if test "${ac_cv_path_install+set}" = set; then
+ INSTALL=$ac_cv_path_install
+ else
+ # As a last resort, use the slow shell script. We don't cache a
+ # path for INSTALL within a source directory, because that will
+ # break other packages using the cache if that directory is
+ # removed, or if the path is relative.
+ INSTALL=$ac_install_sh
+ fi
+fi
+echo "$as_me:$LINENO: result: $INSTALL" >&5
+echo "${ECHO_T}$INSTALL" >&6
+
+# Use test -z because SunOS4 sh mishandles braces in ${var-val}.
+# It thinks the first close brace ends the variable substitution.
+test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}'
+
+test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}'
+
+test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644'
+
+for ac_prog in gawk mawk nawk awk
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_AWK+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$AWK"; then
+ ac_cv_prog_AWK="$AWK" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_AWK="$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+AWK=$ac_cv_prog_AWK
+if test -n "$AWK"; then
+ echo "$as_me:$LINENO: result: $AWK" >&5
+echo "${ECHO_T}$AWK" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ test -n "$AWK" && break
+done
+
+
+#########
+# Set up an appropriate program prefix
+#
+if test "$program_prefix" = "NONE"; then
+ program_prefix=""
+fi
+
+
+VERSION=`cat $srcdir/VERSION | sed 's/^\([0-9]*\.*[0-9]*\).*/\1/'`
+echo "Version set to $VERSION"
+
+RELEASE=`cat $srcdir/VERSION`
+echo "Release set to $RELEASE"
+
+VERSION_NUMBER=`cat $srcdir/VERSION \
+ | sed 's/[^0-9]/ /g' \
+ | awk '{printf "%d%03d%03d",$1,$2,$3}'`
+echo "Version number set to $VERSION_NUMBER"
+
+
+#########
+# Check to see if the --with-hints=FILE option is used. If there is none,
+# then check for files named "$host.hints" and ../$host.hints where
+# $host is the hostname of the build system. If still no hints are
+# found, try looking in $system.hints and ../$system.hints where
+# $system is the result of uname -s.
+#
+
+# Check whether --with-hints or --without-hints was given.
+if test "${with_hints+set}" = set; then
+ withval="$with_hints"
+ hints=$withval
+fi;
+if test "$hints" = ""; then
+ host=`hostname | sed 's/\..*//'`
+ if test -r $host.hints; then
+ hints=$host.hints
+ else
+ if test -r ../$host.hints; then
+ hints=../$host.hints
+ fi
+ fi
+fi
+if test "$hints" = ""; then
+ sys=`uname -s`
+ if test -r $sys.hints; then
+ hints=$sys.hints
+ else
+ if test -r ../$sys.hints; then
+ hints=../$sys.hints
+ fi
+ fi
+fi
+if test "$hints" != ""; then
+ echo "$as_me:$LINENO: result: reading hints from $hints" >&5
+echo "${ECHO_T}reading hints from $hints" >&6
+ . $hints
+fi
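+# Illustrative usage of the hints mechanism above (hypothetical file name):
+# a hints file is an ordinary shell fragment that this script sources, so it
+# can preset "config_*" variables consulted later on, for example:
+#
+#   ./configure --with-hints=myhost.hints
+#
+# where myhost.hints might contain something like:
+#
+#   config_BUILD_CC=gcc
+#   config_TARGET_CC=arm-linux-gcc     # hypothetical cross compiler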
+
+#########
+# Locate a compiler for the build machine. This compiler should
+# generate command-line programs that run on the build machine.
+#
+default_build_cflags="-g"
+if test "$config_BUILD_CC" = ""; then
+ ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args.
+set dummy ${ac_tool_prefix}gcc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}gcc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ echo "$as_me:$LINENO: result: $CC" >&5
+echo "${ECHO_T}$CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$ac_cv_prog_CC"; then
+ ac_ct_CC=$CC
+ # Extract the first word of "gcc", so it can be a program name with args.
+set dummy gcc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="gcc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ echo "$as_me:$LINENO: result: $ac_ct_CC" >&5
+echo "${ECHO_T}$ac_ct_CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ CC=$ac_ct_CC
+else
+ CC="$ac_cv_prog_CC"
+fi
+
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args.
+set dummy ${ac_tool_prefix}cc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="${ac_tool_prefix}cc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ echo "$as_me:$LINENO: result: $CC" >&5
+echo "${ECHO_T}$CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$ac_cv_prog_CC"; then
+ ac_ct_CC=$CC
+ # Extract the first word of "cc", so it can be a program name with args.
+set dummy cc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="cc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ echo "$as_me:$LINENO: result: $ac_ct_CC" >&5
+echo "${ECHO_T}$ac_ct_CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ CC=$ac_ct_CC
+else
+ CC="$ac_cv_prog_CC"
+fi
+
+fi
+if test -z "$CC"; then
+ # Extract the first word of "cc", so it can be a program name with args.
+set dummy cc; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+ ac_prog_rejected=no
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then
+ ac_prog_rejected=yes
+ continue
+ fi
+ ac_cv_prog_CC="cc"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+if test $ac_prog_rejected = yes; then
+ # We found a bogon in the path, so make sure we never use it.
+ set dummy $ac_cv_prog_CC
+ shift
+ if test $# != 0; then
+ # We chose a different compiler from the bogus one.
+ # However, it has the same basename, so the bogon will be chosen
+ # first if we set CC to just the basename; use the full file name.
+ shift
+ ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@"
+ fi
+fi
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ echo "$as_me:$LINENO: result: $CC" >&5
+echo "${ECHO_T}$CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$CC"; then
+ if test -n "$ac_tool_prefix"; then
+ for ac_prog in cl
+ do
+ # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$CC"; then
+ ac_cv_prog_CC="$CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_CC="$ac_tool_prefix$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+CC=$ac_cv_prog_CC
+if test -n "$CC"; then
+ echo "$as_me:$LINENO: result: $CC" >&5
+echo "${ECHO_T}$CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ test -n "$CC" && break
+ done
+fi
+if test -z "$CC"; then
+ ac_ct_CC=$CC
+ for ac_prog in cl
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_CC+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_CC"; then
+ ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_CC="$ac_prog"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+ac_ct_CC=$ac_cv_prog_ac_ct_CC
+if test -n "$ac_ct_CC"; then
+ echo "$as_me:$LINENO: result: $ac_ct_CC" >&5
+echo "${ECHO_T}$ac_ct_CC" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ test -n "$ac_ct_CC" && break
+done
+
+ CC=$ac_ct_CC
+fi
+
+fi
+
+
+test -z "$CC" && { { echo "$as_me:$LINENO: error: no acceptable C compiler found in \$PATH
+See \`config.log' for more details." >&5
+echo "$as_me: error: no acceptable C compiler found in \$PATH
+See \`config.log' for more details." >&2;}
+ { (exit 1); exit 1; }; }
+
+# Provide some information about the compiler.
+echo "$as_me:$LINENO:" \
+ "checking for C compiler version" >&5
+ac_compiler=`set X $ac_compile; echo $2`
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler --version </dev/null >&5\"") >&5
+ (eval $ac_compiler --version </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler -v </dev/null >&5\"") >&5
+ (eval $ac_compiler -v </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+{ (eval echo "$as_me:$LINENO: \"$ac_compiler -V </dev/null >&5\"") >&5
+ (eval $ac_compiler -V </dev/null >&5) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }
+
+echo "$as_me:$LINENO: checking whether we are using the GNU C compiler" >&5
+echo $ECHO_N "checking whether we are using the GNU C compiler... $ECHO_C" >&6
+if test "${ac_cv_c_compiler_gnu+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+#ifndef __GNUC__
+ choke me
+#endif
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_compiler_gnu=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_compiler_gnu=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_cv_c_compiler_gnu=$ac_compiler_gnu
+
+fi
+echo "$as_me:$LINENO: result: $ac_cv_c_compiler_gnu" >&5
+echo "${ECHO_T}$ac_cv_c_compiler_gnu" >&6
+GCC=`test $ac_compiler_gnu = yes && echo yes`
+ac_test_CFLAGS=${CFLAGS+set}
+ac_save_CFLAGS=$CFLAGS
+CFLAGS="-g"
+echo "$as_me:$LINENO: checking whether $CC accepts -g" >&5
+echo $ECHO_N "checking whether $CC accepts -g... $ECHO_C" >&6
+if test "${ac_cv_prog_cc_g+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+int
+main ()
+{
+
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_prog_cc_g=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_prog_cc_g=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_prog_cc_g" >&5
+echo "${ECHO_T}$ac_cv_prog_cc_g" >&6
+if test "$ac_test_CFLAGS" = set; then
+ CFLAGS=$ac_save_CFLAGS
+elif test $ac_cv_prog_cc_g = yes; then
+ if test "$GCC" = yes; then
+ CFLAGS="-g -O2"
+ else
+ CFLAGS="-g"
+ fi
+else
+ if test "$GCC" = yes; then
+ CFLAGS="-O2"
+ else
+ CFLAGS=
+ fi
+fi
+echo "$as_me:$LINENO: checking for $CC option to accept ANSI C" >&5
+echo $ECHO_N "checking for $CC option to accept ANSI C... $ECHO_C" >&6
+if test "${ac_cv_prog_cc_stdc+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_cv_prog_cc_stdc=no
+ac_save_CC=$CC
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <stdarg.h>
+#include <stdio.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+/* Most of the following tests are stolen from RCS 5.7's src/conf.sh. */
+struct buf { int x; };
+FILE * (*rcsopen) (struct buf *, struct stat *, int);
+static char *e (p, i)
+ char **p;
+ int i;
+{
+ return p[i];
+}
+static char *f (char * (*g) (char **, int), char **p, ...)
+{
+ char *s;
+ va_list v;
+ va_start (v,p);
+ s = g (p, va_arg (v,int));
+ va_end (v);
+ return s;
+}
+
+/* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has
+ function prototypes and stuff, but not '\xHH' hex character constants.
+ These don't provoke an error unfortunately, instead are silently treated
+ as 'x'. The following induces an error, until -std1 is added to get
+ proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an
+ array size at least. It's necessary to write '\x00'==0 to get something
+ that's true only with -std1. */
+int osf4_cc_array ['\x00' == 0 ? 1 : -1];
+
+int test (int i, double x);
+struct s1 {int (*f) (int a);};
+struct s2 {int (*f) (double a);};
+int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int);
+int argc;
+char **argv;
+int
+main ()
+{
+return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1];
+ ;
+ return 0;
+}
+_ACEOF
+# Don't try gcc -ansi; that turns off useful extensions and
+# breaks some systems' header files.
+# AIX -qlanglvl=ansi
+# Ultrix and OSF/1 -std1
+# HP-UX 10.20 and later -Ae
+# HP-UX older versions -Aa -D_HPUX_SOURCE
+# SVR4 -Xc -D__EXTENSIONS__
+for ac_arg in "" -qlanglvl=ansi -std1 -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__"
+do
+ CC="$ac_save_CC $ac_arg"
+ rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_prog_cc_stdc=$ac_arg
+break
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext
+done
+rm -f conftest.$ac_ext conftest.$ac_objext
+CC=$ac_save_CC
+
+fi
+
+case "x$ac_cv_prog_cc_stdc" in
+ x|xno)
+ echo "$as_me:$LINENO: result: none needed" >&5
+echo "${ECHO_T}none needed" >&6 ;;
+ *)
+ echo "$as_me:$LINENO: result: $ac_cv_prog_cc_stdc" >&5
+echo "${ECHO_T}$ac_cv_prog_cc_stdc" >&6
+ CC="$CC $ac_cv_prog_cc_stdc" ;;
+esac
+
+# Some people use a C++ compiler to compile C. Since we use `exit',
+# in C++ we need to declare it. In case someone uses the same compiler
+# for both compiling C and C++ we need to have the C++ compiler decide
+# the declaration of exit, since it's the most demanding environment.
+cat >conftest.$ac_ext <<_ACEOF
+#ifndef __cplusplus
+ choke me
+#endif
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ for ac_declaration in \
+ '' \
+ 'extern "C" void std::exit (int) throw (); using std::exit;' \
+ 'extern "C" void std::exit (int); using std::exit;' \
+ 'extern "C" void exit (int) throw ();' \
+ 'extern "C" void exit (int);' \
+ 'void exit (int);'
+do
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+$ac_declaration
+#include <stdlib.h>
+int
+main ()
+{
+exit (42);
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ :
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+continue
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+$ac_declaration
+int
+main ()
+{
+exit (42);
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ break
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+done
+rm -f conftest*
+if test -n "$ac_declaration"; then
+ echo '#ifdef __cplusplus' >>confdefs.h
+ echo $ac_declaration >>confdefs.h
+ echo '#endif' >>confdefs.h
+fi
+
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+ac_ext=c
+ac_cpp='$CPP $CPPFLAGS'
+ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ac_compiler_gnu=$ac_cv_c_compiler_gnu
+
+ if test "$cross_compiling" = "yes"; then
+ { { echo "$as_me:$LINENO: error: unable to find a compiler for building build tools" >&5
+echo "$as_me: error: unable to find a compiler for building build tools" >&2;}
+ { (exit 1); exit 1; }; }
+ fi
+ BUILD_CC=$CC
+ default_build_cflags=$CFLAGS
+else
+ BUILD_CC=$config_BUILD_CC
+ echo "$as_me:$LINENO: checking host compiler" >&5
+echo $ECHO_N "checking host compiler... $ECHO_C" >&6
+ CC=$BUILD_CC
+ echo "$as_me:$LINENO: result: $BUILD_CC" >&5
+echo "${ECHO_T}$BUILD_CC" >&6
+fi
+echo "$as_me:$LINENO: checking switches for the host compiler" >&5
+echo $ECHO_N "checking switches for the host compiler... $ECHO_C" >&6
+if test "$config_BUILD_CFLAGS" != ""; then
+ CFLAGS=$config_BUILD_CFLAGS
+ BUILD_CFLAGS=$config_BUILD_CFLAGS
+else
+ BUILD_CFLAGS=$default_build_cflags
+fi
+echo "$as_me:$LINENO: result: $BUILD_CFLAGS" >&5
+echo "${ECHO_T}$BUILD_CFLAGS" >&6
+if test "$config_BUILD_LIBS" != ""; then
+ BUILD_LIBS=$config_BUILD_LIBS
+fi
+
+
+
+
+##########
+# Locate a compiler that converts C code into *.o files that run on
+# the target machine.
+#
+echo "$as_me:$LINENO: checking target compiler" >&5
+echo $ECHO_N "checking target compiler... $ECHO_C" >&6
+if test "$config_TARGET_CC" != ""; then
+ TARGET_CC=$config_TARGET_CC
+else
+ TARGET_CC=$BUILD_CC
+fi
+echo "$as_me:$LINENO: result: $TARGET_CC" >&5
+echo "${ECHO_T}$TARGET_CC" >&6
+echo "$as_me:$LINENO: checking switches on the target compiler" >&5
+echo $ECHO_N "checking switches on the target compiler... $ECHO_C" >&6
+if test "$config_TARGET_CFLAGS" != ""; then
+ TARGET_CFLAGS=$config_TARGET_CFLAGS
+else
+ TARGET_CFLAGS=$BUILD_CFLAGS
+fi
+echo "$as_me:$LINENO: result: $TARGET_CFLAGS" >&5
+echo "${ECHO_T}$TARGET_CFLAGS" >&6
+echo "$as_me:$LINENO: checking target linker" >&5
+echo $ECHO_N "checking target linker... $ECHO_C" >&6
+if test "$config_TARGET_LINK" = ""; then
+ TARGET_LINK=$TARGET_CC
+else
+ TARGET_LINK=$config_TARGET_LINK
+fi
+echo "$as_me:$LINENO: result: $TARGET_LINK" >&5
+echo "${ECHO_T}$TARGET_LINK" >&6
+echo "$as_me:$LINENO: checking switches on the target compiler" >&5
+echo $ECHO_N "checking switches on the target compiler... $ECHO_C" >&6
+if test "$config_TARGET_TFLAGS" != ""; then
+ TARGET_TFLAGS=$config_TARGET_TFLAGS
+else
+ TARGET_TFLAGS=$BUILD_CFLAGS
+fi
+if test "$config_TARGET_RANLIB" != ""; then
+ TARGET_RANLIB=$config_TARGET_RANLIB
+else
+ if test -n "$ac_tool_prefix"; then
+ # Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args.
+set dummy ${ac_tool_prefix}ranlib; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_RANLIB+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$RANLIB"; then
+ ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+fi
+fi
+RANLIB=$ac_cv_prog_RANLIB
+if test -n "$RANLIB"; then
+ echo "$as_me:$LINENO: result: $RANLIB" >&5
+echo "${ECHO_T}$RANLIB" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+fi
+if test -z "$ac_cv_prog_RANLIB"; then
+ ac_ct_RANLIB=$RANLIB
+ # Extract the first word of "ranlib", so it can be a program name with args.
+set dummy ranlib; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_ac_ct_RANLIB+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ if test -n "$ac_ct_RANLIB"; then
+ ac_cv_prog_ac_ct_RANLIB="$ac_ct_RANLIB" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_prog_ac_ct_RANLIB="ranlib"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+ test -z "$ac_cv_prog_ac_ct_RANLIB" && ac_cv_prog_ac_ct_RANLIB=":"
+fi
+fi
+ac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB
+if test -n "$ac_ct_RANLIB"; then
+ echo "$as_me:$LINENO: result: $ac_ct_RANLIB" >&5
+echo "${ECHO_T}$ac_ct_RANLIB" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+ RANLIB=$ac_ct_RANLIB
+else
+ RANLIB="$ac_cv_prog_RANLIB"
+fi
+
+ TARGET_RANLIB=$RANLIB
+fi
+if test "$config_TARGET_AR" != ""; then
+ TARGET_AR=$config_TARGET_AR
+else
+ TARGET_AR='ar cr'
+fi
+echo "$as_me:$LINENO: result: $TARGET_TFLAGS" >&5
+echo "${ECHO_T}$TARGET_TFLAGS" >&6
+
+
+
+
+
+
+
+# Set the $cross variable if we are cross-compiling. Make
+# it 0 if we are not.
+#
+echo "$as_me:$LINENO: checking if host and target compilers are the same" >&5
+echo $ECHO_N "checking if host and target compilers are the same... $ECHO_C" >&6
+if test "$BUILD_CC" = "$TARGET_CC"; then
+ cross=0
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+else
+ cross=1
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
+##########
+# Do we want to support multithreaded use of sqlite
+#
+# Check whether --enable-threadsafe or --disable-threadsafe was given.
+if test "${enable_threadsafe+set}" = set; then
+ enableval="$enable_threadsafe"
+
+else
+ enable_threadsafe=no
+fi;
+echo "$as_me:$LINENO: checking whether to support threadsafe operation" >&5
+echo $ECHO_N "checking whether to support threadsafe operation... $ECHO_C" >&6
+if test "$enable_threadsafe" = "no"; then
+ THREADSAFE=0
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+else
+ THREADSAFE=1
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+fi
+
+
+if test "$THREADSAFE" = "1"; then
+ LIBS=""
+
+echo "$as_me:$LINENO: checking for pthread_create in -lpthread" >&5
+echo $ECHO_N "checking for pthread_create in -lpthread... $ECHO_C" >&6
+if test "${ac_cv_lib_pthread_pthread_create+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-lpthread $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char pthread_create ();
+int
+main ()
+{
+pthread_create ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_pthread_pthread_create=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_pthread_pthread_create=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_pthread_pthread_create" >&5
+echo "${ECHO_T}$ac_cv_lib_pthread_pthread_create" >&6
+if test $ac_cv_lib_pthread_pthread_create = yes; then
+ cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBPTHREAD 1
+_ACEOF
+
+ LIBS="-lpthread $LIBS"
+
+fi
+
+ TARGET_THREAD_LIB="$LIBS"
+ LIBS=""
+else
+ TARGET_THREAD_LIB=""
+fi
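+# Illustrative invocation for the option handled above (a sketch only):
+#
+#   ./configure --enable-threadsafe
+#
+# With THREADSAFE=1 the script probes for pthread_create in -lpthread,
+# defines HAVE_LIBPTHREAD when found, and records the result in
+# TARGET_THREAD_LIB; without it TARGET_THREAD_LIB is left empty.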
+
+
+##########
+# Do we want to allow a connection created in one thread to be used
+# in another thread? This does not work on many Linux systems (e.g. RedHat 9)
+# due to bugs in the threading implementations, so it is off by default.
+#
+# Check whether --enable-cross-thread-connections or --disable-cross-thread-connections was given.
+if test "${enable_cross_thread_connections+set}" = set; then
+ enableval="$enable_cross_thread_connections"
+
+else
+ enable_xthreadconnect=no
+fi;
+echo "$as_me:$LINENO: checking whether to allow connections to be shared across threads" >&5
+echo $ECHO_N "checking whether to allow connections to be shared across threads... $ECHO_C" >&6
+if test "$enable_xthreadconnect" = "no"; then
+ XTHREADCONNECT=''
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+else
+ XTHREADCONNECT='-DSQLITE_ALLOW_XTHREAD_CONNECT=1'
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+fi
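+# Illustrative invocation for the option handled above (sketch): enabling it
+# adds -DSQLITE_ALLOW_XTHREAD_CONNECT=1 to the build via XTHREADCONNECT:
+#
+#   ./configure --enable-cross-thread-connections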
+
+
+##########
+# Do we want the threadsOverrideEachOthersLocks variable to default to 1 (true)?
+# Normally, a test at runtime is performed to determine the appropriate value
+# of this variable. Use this option only if you're sure that threads can
+# safely override each other's locks in all runtime situations.
+#
+# Check whether --enable-threads-override-locks or --disable-threads-override-locks was given.
+if test "${enable_threads_override_locks+set}" = set; then
+ enableval="$enable_threads_override_locks"
+
+else
+ enable_threads_override_locks=no
+fi;
+echo "$as_me:$LINENO: checking whether threads can override each others locks" >&5
+echo $ECHO_N "checking whether threads can override each others locks... $ECHO_C" >&6
+if test "$enable_threads_override_locks" = "no"; then
+ THREADSOVERRIDELOCKS='-1'
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+else
+ THREADSOVERRIDELOCKS='1'
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+fi
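+# Illustrative invocation for the option handled above (sketch):
+#
+#   ./configure --enable-threads-override-locks
+#
+# which sets THREADSOVERRIDELOCKS to 1 rather than the default -1
+# (the -1 leaves the decision to the runtime test mentioned above).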
+
+
+##########
+# Do we want to support building the shared library in release mode
+#
+# Check whether --enable-releasemode or --disable-releasemode was given.
+if test "${enable_releasemode+set}" = set; then
+ enableval="$enable_releasemode"
+
+else
+ enable_releasemode=no
+fi;
+echo "$as_me:$LINENO: checking whether to support shared library linked as release mode or not" >&5
+echo $ECHO_N "checking whether to support shared library linked as release mode or not... $ECHO_C" >&6
+if test "$enable_releasemode" = "no"; then
+ ALLOWRELEASE=""
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+else
+ ALLOWRELEASE="-release `cat VERSION`"
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+fi
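+# Illustrative invocation for the option handled above (sketch):
+#
+#   ./configure --enable-releasemode
+#
+# which sets ALLOWRELEASE to "-release <contents of VERSION>" for use when
+# linking the shared library.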
+
+
+##########
+# Do we want temporary databases in memory
+#
+# Check whether --enable-tempstore or --disable-tempstore was given.
+if test "${enable_tempstore+set}" = set; then
+ enableval="$enable_tempstore"
+
+else
+ enable_tempstore=no
+fi;
+echo "$as_me:$LINENO: checking whether to use an in-ram database for temporary tables" >&5
+echo $ECHO_N "checking whether to use an in-ram database for temporary tables... $ECHO_C" >&6
+case "$enable_tempstore" in
+ never )
+ TEMP_STORE=0
+ echo "$as_me:$LINENO: result: never" >&5
+echo "${ECHO_T}never" >&6
+ ;;
+ no )
+ TEMP_STORE=1
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+ ;;
+ always )
+ TEMP_STORE=3
+ echo "$as_me:$LINENO: result: always" >&5
+echo "${ECHO_T}always" >&6
+ ;;
+ yes )
+ TEMP_STORE=3
+ echo "$as_me:$LINENO: result: always" >&5
+echo "${ECHO_T}always" >&6
+ ;;
+ * )
+ TEMP_STORE=1
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+ ;;
+esac
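+# Summary of the mapping implemented by the case statement above, with an
+# illustrative invocation (sketch only):
+#
+#   --enable-tempstore=never   -> TEMP_STORE=0
+#   --enable-tempstore=no      -> TEMP_STORE=1   (also the default)
+#   --enable-tempstore=yes     -> TEMP_STORE=3
+#   --enable-tempstore=always  -> TEMP_STORE=3
+#
+#   ./configure --enable-tempstore=always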
+
+
+
+###########
+# Lots of things are different if we are compiling for Windows using
+# the CYGWIN environment. So check for that special case and handle
+# things accordingly.
+#
+echo "$as_me:$LINENO: checking if executables have the .exe suffix" >&5
+echo $ECHO_N "checking if executables have the .exe suffix... $ECHO_C" >&6
+if test "$config_BUILD_EXEEXT" = ".exe"; then
+ CYGWIN=yes
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+else
+ echo "$as_me:$LINENO: result: unknown" >&5
+echo "${ECHO_T}unknown" >&6
+fi
+if test "$CYGWIN" != "yes"; then
+
+case $host_os in
+ *cygwin* ) CYGWIN=yes;;
+ * ) CYGWIN=no;;
+esac
+
+fi
+if test "$CYGWIN" = "yes"; then
+ BUILD_EXEEXT=.exe
+else
+ BUILD_EXEEXT=$EXEEXT
+fi
+if test "$cross" = "0"; then
+ TARGET_EXEEXT=$BUILD_EXEEXT
+else
+ TARGET_EXEEXT=$config_TARGET_EXEEXT
+fi
+if test "$TARGET_EXEEXT" = ".exe"; then
+ if test $OS2_SHELL ; then
+ OS_UNIX=0
+ OS_WIN=0
+ OS_OS2=1
+ TARGET_CFLAGS="$TARGET_CFLAGS -DOS_OS2=1"
+ if test "$ac_compiler_gnu" == "yes" ; then
+ TARGET_CFLAGS="$TARGET_CFLAGS -Zomf -Zexe -Zmap"
+ BUILD_CFLAGS="$BUILD_CFLAGS -Zomf -Zexe"
+ fi
+ else
+ OS_UNIX=0
+ OS_WIN=1
+ OS_OS2=0
+ tclsubdir=win
+ TARGET_CFLAGS="$TARGET_CFLAGS -DOS_WIN=1"
+ fi
+else
+ OS_UNIX=1
+ OS_WIN=0
+ OS_OS2=0
+ tclsubdir=unix
+ TARGET_CFLAGS="$TARGET_CFLAGS -DOS_UNIX=1"
+fi
+
+
+
+
+
+
+##########
+# Extract generic linker options from the environment.
+#
+if test "$config_TARGET_LIBS" != ""; then
+ TARGET_LIBS=$config_TARGET_LIBS
+else
+ TARGET_LIBS=""
+fi
+
+##########
+# Figure out all the parameters needed to compile against Tcl.
+#
+# This code is derived from the SC_PATH_TCLCONFIG and SC_LOAD_TCLCONFIG
+# macros in the tcl.m4 file of the standard TCL distribution.
+# Those macros could not be used directly since we have to make some
+# minor changes to accommodate systems that do not have TCL installed.
+#
+# Check whether --enable-tcl or --disable-tcl was given.
+if test "${enable_tcl+set}" = set; then
+ enableval="$enable_tcl"
+ use_tcl=$enableval
+else
+ use_tcl=yes
+fi;
+if test "${use_tcl}" = "yes" ; then
+
+# Check whether --with-tcl or --without-tcl was given.
+if test "${with_tcl+set}" = set; then
+ withval="$with_tcl"
+ with_tclconfig=${withval}
+fi;
+ echo "$as_me:$LINENO: checking for Tcl configuration" >&5
+echo $ECHO_N "checking for Tcl configuration... $ECHO_C" >&6
+ if test "${ac_cv_c_tclconfig+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+
+ # First check to see if --with-tcl was specified.
+ if test x"${with_tclconfig}" != x ; then
+ if test -f "${with_tclconfig}/tclConfig.sh" ; then
+ ac_cv_c_tclconfig=`(cd ${with_tclconfig}; pwd)`
+ else
+ { { echo "$as_me:$LINENO: error: ${with_tclconfig} directory doesn't contain tclConfig.sh" >&5
+echo "$as_me: error: ${with_tclconfig} directory doesn't contain tclConfig.sh" >&2;}
+ { (exit 1); exit 1; }; }
+ fi
+ fi
+ # then check for a private Tcl installation
+ if test x"${ac_cv_c_tclconfig}" = x ; then
+ for i in \
+ ../tcl \
+ `ls -dr ../tcl[8-9].[0-9].[0-9]* 2>/dev/null` \
+ `ls -dr ../tcl[8-9].[0-9] 2>/dev/null` \
+ `ls -dr ../tcl[8-9].[0-9]* 2>/dev/null` \
+ ../../tcl \
+ `ls -dr ../../tcl[8-9].[0-9].[0-9]* 2>/dev/null` \
+ `ls -dr ../../tcl[8-9].[0-9] 2>/dev/null` \
+ `ls -dr ../../tcl[8-9].[0-9]* 2>/dev/null` \
+ ../../../tcl \
+ `ls -dr ../../../tcl[8-9].[0-9].[0-9]* 2>/dev/null` \
+ `ls -dr ../../../tcl[8-9].[0-9] 2>/dev/null` \
+ `ls -dr ../../../tcl[8-9].[0-9]* 2>/dev/null`
+ do
+ if test -f "$i/unix/tclConfig.sh" ; then
+ ac_cv_c_tclconfig=`(cd $i/unix; pwd)`
+ break
+ fi
+ done
+ fi
+
+ # check in a few common install locations
+ if test x"${ac_cv_c_tclconfig}" = x ; then
+ for i in \
+ `ls -d ${libdir} 2>/dev/null` \
+ `ls -d /usr/local/lib 2>/dev/null` \
+ `ls -d /usr/contrib/lib 2>/dev/null` \
+ `ls -d /usr/lib 2>/dev/null`
+ do
+ if test -f "$i/tclConfig.sh" ; then
+ ac_cv_c_tclconfig=`(cd $i; pwd)`
+ break
+ fi
+ done
+ fi
+
+ # check in a few other private locations
+ if test x"${ac_cv_c_tclconfig}" = x ; then
+ for i in \
+ ${srcdir}/../tcl \
+ `ls -dr ${srcdir}/../tcl[8-9].[0-9].[0-9]* 2>/dev/null` \
+ `ls -dr ${srcdir}/../tcl[8-9].[0-9] 2>/dev/null` \
+ `ls -dr ${srcdir}/../tcl[8-9].[0-9]* 2>/dev/null`
+ do
+ if test -f "$i/unix/tclConfig.sh" ; then
+ ac_cv_c_tclconfig=`(cd $i/unix; pwd)`
+ break
+ fi
+ done
+ fi
+
+fi
+
+
+ if test x"${ac_cv_c_tclconfig}" = x ; then
+ use_tcl=no
+ { echo "$as_me:$LINENO: WARNING: Can't find Tcl configuration definitions" >&5
+echo "$as_me: WARNING: Can't find Tcl configuration definitions" >&2;}
+ { echo "$as_me:$LINENO: WARNING: *** Without Tcl the regression tests cannot be executed ***" >&5
+echo "$as_me: WARNING: *** Without Tcl the regression tests cannot be executed ***" >&2;}
+ { echo "$as_me:$LINENO: WARNING: *** Consider using --with-tcl=... to define location of Tcl ***" >&5
+echo "$as_me: WARNING: *** Consider using --with-tcl=... to define location of Tcl ***" >&2;}
+ else
+ TCL_BIN_DIR=${ac_cv_c_tclconfig}
+ echo "$as_me:$LINENO: result: found $TCL_BIN_DIR/tclConfig.sh" >&5
+echo "${ECHO_T}found $TCL_BIN_DIR/tclConfig.sh" >&6
+
+ echo "$as_me:$LINENO: checking for existence of $TCL_BIN_DIR/tclConfig.sh" >&5
+echo $ECHO_N "checking for existence of $TCL_BIN_DIR/tclConfig.sh... $ECHO_C" >&6
+ if test -f "$TCL_BIN_DIR/tclConfig.sh" ; then
+ echo "$as_me:$LINENO: result: loading" >&5
+echo "${ECHO_T}loading" >&6
+ . $TCL_BIN_DIR/tclConfig.sh
+ else
+ echo "$as_me:$LINENO: result: file not found" >&5
+echo "${ECHO_T}file not found" >&6
+ fi
+
+ #
+ # If the TCL_BIN_DIR is the build directory (not the install directory),
+ # then set the common variable name to the value of the build variables.
+ # For example, the variable TCL_LIB_SPEC will be set to the value
+ # of TCL_BUILD_LIB_SPEC. An extension should make use of TCL_LIB_SPEC
+ # instead of TCL_BUILD_LIB_SPEC since it will work with both an
+ # installed and uninstalled version of Tcl.
+ #
+
+ if test -f $TCL_BIN_DIR/Makefile ; then
+ TCL_LIB_SPEC=${TCL_BUILD_LIB_SPEC}
+ TCL_STUB_LIB_SPEC=${TCL_BUILD_STUB_LIB_SPEC}
+ TCL_STUB_LIB_PATH=${TCL_BUILD_STUB_LIB_PATH}
+ fi
+
+ #
+ # eval is required to do the TCL_DBGX substitution
+ #
+
+ eval "TCL_LIB_FILE=\"${TCL_LIB_FILE}\""
+ eval "TCL_LIB_FLAG=\"${TCL_LIB_FLAG}\""
+ eval "TCL_LIB_SPEC=\"${TCL_LIB_SPEC}\""
+
+ eval "TCL_STUB_LIB_FILE=\"${TCL_STUB_LIB_FILE}\""
+ eval "TCL_STUB_LIB_FLAG=\"${TCL_STUB_LIB_FLAG}\""
+ eval "TCL_STUB_LIB_SPEC=\"${TCL_STUB_LIB_SPEC}\""
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ fi
+fi
+if test "${use_tcl}" = "no" ; then
+ HAVE_TCL=""
+else
+ HAVE_TCL=1
+fi
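+# Illustrative invocations for the Tcl handling above (sketches only):
+#
+#   ./configure --with-tcl=/usr/lib/tcl8.4    # directory containing tclConfig.sh
+#   ./configure --disable-tcl                 # skip Tcl; regression tests unavailable
+#
+# The path given to --with-tcl is hypothetical; any directory holding a
+# readable tclConfig.sh will do.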
+
+
+##########
+# Figure out what C libraries are required to compile programs
+# that use "readline()" library.
+#
+if test "$config_TARGET_READLINE_LIBS" != ""; then
+ TARGET_READLINE_LIBS="$config_TARGET_READLINE_LIBS"
+else
+ CC=$TARGET_CC
+ LIBS=""
+ echo "$as_me:$LINENO: checking for library containing tgetent" >&5
+echo $ECHO_N "checking for library containing tgetent... $ECHO_C" >&6
+if test "${ac_cv_search_tgetent+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_func_search_save_LIBS=$LIBS
+ac_cv_search_tgetent=no
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char tgetent ();
+int
+main ()
+{
+tgetent ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_search_tgetent="none required"
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test "$ac_cv_search_tgetent" = no; then
+ for ac_lib in readline ncurses curses termcap; do
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char tgetent ();
+int
+main ()
+{
+tgetent ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_search_tgetent="-l$ac_lib"
+break
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+ done
+fi
+LIBS=$ac_func_search_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_search_tgetent" >&5
+echo "${ECHO_T}$ac_cv_search_tgetent" >&6
+if test "$ac_cv_search_tgetent" != no; then
+ test "$ac_cv_search_tgetent" = "none required" || LIBS="$ac_cv_search_tgetent $LIBS"
+
+fi
+
+
+echo "$as_me:$LINENO: checking for readline in -lreadline" >&5
+echo $ECHO_N "checking for readline in -lreadline... $ECHO_C" >&6
+if test "${ac_cv_lib_readline_readline+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_check_lib_save_LIBS=$LIBS
+LIBS="-lreadline $LIBS"
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char readline ();
+int
+main ()
+{
+readline ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_lib_readline_readline=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_lib_readline_readline=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_lib_readline_readline" >&5
+echo "${ECHO_T}$ac_cv_lib_readline_readline" >&6
+if test $ac_cv_lib_readline_readline = yes; then
+ cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBREADLINE 1
+_ACEOF
+
+ LIBS="-lreadline $LIBS"
+
+fi
+
+ TARGET_READLINE_LIBS="$LIBS"
+fi
+
+
+##########
+# Figure out what C libraries are required to compile programs
+# that use "fdatasync()" function.
+#
+CC=$TARGET_CC
+LIBS=$TARGET_LIBS
+echo "$as_me:$LINENO: checking for library containing fdatasync" >&5
+echo $ECHO_N "checking for library containing fdatasync... $ECHO_C" >&6
+if test "${ac_cv_search_fdatasync+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_func_search_save_LIBS=$LIBS
+ac_cv_search_fdatasync=no
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char fdatasync ();
+int
+main ()
+{
+fdatasync ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_search_fdatasync="none required"
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+if test "$ac_cv_search_fdatasync" = no; then
+ for ac_lib in rt; do
+ LIBS="-l$ac_lib $ac_func_search_save_LIBS"
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char fdatasync ();
+int
+main ()
+{
+fdatasync ();
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_search_fdatasync="-l$ac_lib"
+break
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+ done
+fi
+LIBS=$ac_func_search_save_LIBS
+fi
+echo "$as_me:$LINENO: result: $ac_cv_search_fdatasync" >&5
+echo "${ECHO_T}$ac_cv_search_fdatasync" >&6
+if test "$ac_cv_search_fdatasync" != no; then
+ test "$ac_cv_search_fdatasync" = "none required" || LIBS="$ac_cv_search_fdatasync $LIBS"
+
+fi
+
+TARGET_LIBS="$LIBS"
+
+##########
+# Figure out where to get the READLINE header files.
+#
+echo "$as_me:$LINENO: checking readline header files" >&5
+echo $ECHO_N "checking readline header files... $ECHO_C" >&6
+found=no
+if test "$config_TARGET_READLINE_INC" != ""; then
+ TARGET_READLINE_INC=$config_TARGET_READLINE_INC
+ found=yes
+fi
+if test "$found" = "yes"; then
+ echo "$as_me:$LINENO: result: $TARGET_READLINE_INC" >&5
+echo "${ECHO_T}$TARGET_READLINE_INC" >&6
+else
+ echo "$as_me:$LINENO: result: not specified: still searching..." >&5
+echo "${ECHO_T}not specified: still searching..." >&6
+ if test "${ac_cv_header_readline_h+set}" = set; then
+ echo "$as_me:$LINENO: checking for readline.h" >&5
+echo $ECHO_N "checking for readline.h... $ECHO_C" >&6
+if test "${ac_cv_header_readline_h+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+fi
+echo "$as_me:$LINENO: result: $ac_cv_header_readline_h" >&5
+echo "${ECHO_T}$ac_cv_header_readline_h" >&6
+else
+ # Is the header compilable?
+echo "$as_me:$LINENO: checking readline.h usability" >&5
+echo $ECHO_N "checking readline.h usability... $ECHO_C" >&6
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+$ac_includes_default
+#include <readline.h>
+_ACEOF
+rm -f conftest.$ac_objext
+if { (eval echo "$as_me:$LINENO: \"$ac_compile\"") >&5
+ (eval $ac_compile) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest.$ac_objext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_header_compiler=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_header_compiler=no
+fi
+rm -f conftest.err conftest.$ac_objext conftest.$ac_ext
+echo "$as_me:$LINENO: result: $ac_header_compiler" >&5
+echo "${ECHO_T}$ac_header_compiler" >&6
+
+# Is the header present?
+echo "$as_me:$LINENO: checking readline.h presence" >&5
+echo $ECHO_N "checking readline.h presence... $ECHO_C" >&6
+cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+#include <readline.h>
+_ACEOF
+if { (eval echo "$as_me:$LINENO: \"$ac_cpp conftest.$ac_ext\"") >&5
+ (eval $ac_cpp conftest.$ac_ext) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } >/dev/null; then
+ if test -s conftest.err; then
+ ac_cpp_err=$ac_c_preproc_warn_flag
+ ac_cpp_err=$ac_cpp_err$ac_c_werror_flag
+ else
+ ac_cpp_err=
+ fi
+else
+ ac_cpp_err=yes
+fi
+if test -z "$ac_cpp_err"; then
+ ac_header_preproc=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ ac_header_preproc=no
+fi
+rm -f conftest.err conftest.$ac_ext
+echo "$as_me:$LINENO: result: $ac_header_preproc" >&5
+echo "${ECHO_T}$ac_header_preproc" >&6
+
+# So? What about this header?
+case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in
+ yes:no: )
+ { echo "$as_me:$LINENO: WARNING: readline.h: accepted by the compiler, rejected by the preprocessor!" >&5
+echo "$as_me: WARNING: readline.h: accepted by the compiler, rejected by the preprocessor!" >&2;}
+ { echo "$as_me:$LINENO: WARNING: readline.h: proceeding with the compiler's result" >&5
+echo "$as_me: WARNING: readline.h: proceeding with the compiler's result" >&2;}
+ ac_header_preproc=yes
+ ;;
+ no:yes:* )
+ { echo "$as_me:$LINENO: WARNING: readline.h: present but cannot be compiled" >&5
+echo "$as_me: WARNING: readline.h: present but cannot be compiled" >&2;}
+ { echo "$as_me:$LINENO: WARNING: readline.h: check for missing prerequisite headers?" >&5
+echo "$as_me: WARNING: readline.h: check for missing prerequisite headers?" >&2;}
+ { echo "$as_me:$LINENO: WARNING: readline.h: see the Autoconf documentation" >&5
+echo "$as_me: WARNING: readline.h: see the Autoconf documentation" >&2;}
+ { echo "$as_me:$LINENO: WARNING: readline.h: section \"Present But Cannot Be Compiled\"" >&5
+echo "$as_me: WARNING: readline.h: section \"Present But Cannot Be Compiled\"" >&2;}
+ { echo "$as_me:$LINENO: WARNING: readline.h: proceeding with the preprocessor's result" >&5
+echo "$as_me: WARNING: readline.h: proceeding with the preprocessor's result" >&2;}
+ { echo "$as_me:$LINENO: WARNING: readline.h: in the future, the compiler will take precedence" >&5
+echo "$as_me: WARNING: readline.h: in the future, the compiler will take precedence" >&2;}
+ (
+ cat <<\_ASBOX
+## ------------------------------------------ ##
+## Report this to the AC_PACKAGE_NAME lists. ##
+## ------------------------------------------ ##
+_ASBOX
+ ) |
+ sed "s/^/$as_me: WARNING: /" >&2
+ ;;
+esac
+echo "$as_me:$LINENO: checking for readline.h" >&5
+echo $ECHO_N "checking for readline.h... $ECHO_C" >&6
+if test "${ac_cv_header_readline_h+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ ac_cv_header_readline_h=$ac_header_preproc
+fi
+echo "$as_me:$LINENO: result: $ac_cv_header_readline_h" >&5
+echo "${ECHO_T}$ac_cv_header_readline_h" >&6
+
+fi
+if test $ac_cv_header_readline_h = yes; then
+ found=yes
+fi
+
+
+fi
+if test "$found" = "no"; then
+ for dir in /usr /usr/local /usr/local/readline /usr/contrib /mingw; do
+ as_ac_File=`echo "ac_cv_file_$dir/include/readline.h" | $as_tr_sh`
+echo "$as_me:$LINENO: checking for $dir/include/readline.h" >&5
+echo $ECHO_N "checking for $dir/include/readline.h... $ECHO_C" >&6
+if eval "test \"\${$as_ac_File+set}\" = set"; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ test "$cross_compiling" = yes &&
+ { { echo "$as_me:$LINENO: error: cannot check for file existence when cross compiling" >&5
+echo "$as_me: error: cannot check for file existence when cross compiling" >&2;}
+ { (exit 1); exit 1; }; }
+if test -r "$dir/include/readline.h"; then
+ eval "$as_ac_File=yes"
+else
+ eval "$as_ac_File=no"
+fi
+fi
+echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_File'}'`" >&5
+echo "${ECHO_T}`eval echo '${'$as_ac_File'}'`" >&6
+if test `eval echo '${'$as_ac_File'}'` = yes; then
+ found=yes
+fi
+
+ if test "$found" = "yes"; then
+ TARGET_READLINE_INC="-I$dir/include"
+ break
+ fi
+ as_ac_File=`echo "ac_cv_file_$dir/include/readline/readline.h" | $as_tr_sh`
+echo "$as_me:$LINENO: checking for $dir/include/readline/readline.h" >&5
+echo $ECHO_N "checking for $dir/include/readline/readline.h... $ECHO_C" >&6
+if eval "test \"\${$as_ac_File+set}\" = set"; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ test "$cross_compiling" = yes &&
+ { { echo "$as_me:$LINENO: error: cannot check for file existence when cross compiling" >&5
+echo "$as_me: error: cannot check for file existence when cross compiling" >&2;}
+ { (exit 1); exit 1; }; }
+if test -r "$dir/include/readline/readline.h"; then
+ eval "$as_ac_File=yes"
+else
+ eval "$as_ac_File=no"
+fi
+fi
+echo "$as_me:$LINENO: result: `eval echo '${'$as_ac_File'}'`" >&5
+echo "${ECHO_T}`eval echo '${'$as_ac_File'}'`" >&6
+if test `eval echo '${'$as_ac_File'}'` = yes; then
+ found=yes
+fi
+
+ if test "$found" = "yes"; then
+ TARGET_READLINE_INC="-I$dir/include/readline"
+ break
+ fi
+ done
+fi
+if test "$found" = "yes"; then
+ if test "$TARGET_READLINE_LIBS" = ""; then
+ TARGET_HAVE_READLINE=0
+ else
+ TARGET_HAVE_READLINE=1
+ fi
+else
+ TARGET_HAVE_READLINE=0
+fi
+
+
+
+#########
+# check for debug enabled
+# Check whether --enable-debug or --disable-debug was given.
+if test "${enable_debug+set}" = set; then
+ enableval="$enable_debug"
+ use_debug=$enableval
+else
+ use_debug=no
+fi;
+if test "${use_debug}" = "yes" ; then
+ TARGET_DEBUG="-DSQLITE_DEBUG=1"
+else
+ TARGET_DEBUG="-DNDEBUG"
+fi
+
+
+#########
+# Figure out whether or not we have a "usleep()" function.
+#
+echo "$as_me:$LINENO: checking for usleep" >&5
+echo $ECHO_N "checking for usleep... $ECHO_C" >&6
+if test "${ac_cv_func_usleep+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+/* Define usleep to an innocuous variant, in case <limits.h> declares usleep.
+ For example, HP-UX 11i <limits.h> declares gettimeofday. */
+#define usleep innocuous_usleep
+
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char usleep (); below.
+ Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ <limits.h> exists even on freestanding compilers. */
+
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+
+#undef usleep
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char usleep ();
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_usleep) || defined (__stub___usleep)
+choke me
+#else
+char (*f) () = usleep;
+#endif
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+return f != usleep;
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_func_usleep=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_func_usleep=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_func_usleep" >&5
+echo "${ECHO_T}$ac_cv_func_usleep" >&6
+if test $ac_cv_func_usleep = yes; then
+ TARGET_CFLAGS="$TARGET_CFLAGS -DHAVE_USLEEP=1"
+fi
+
+
+#--------------------------------------------------------------------
+# Redefine fdatasync as fsync on systems that lack fdatasync
+#--------------------------------------------------------------------
+
+echo "$as_me:$LINENO: checking for fdatasync" >&5
+echo $ECHO_N "checking for fdatasync... $ECHO_C" >&6
+if test "${ac_cv_func_fdatasync+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ cat >conftest.$ac_ext <<_ACEOF
+/* confdefs.h. */
+_ACEOF
+cat confdefs.h >>conftest.$ac_ext
+cat >>conftest.$ac_ext <<_ACEOF
+/* end confdefs.h. */
+/* Define fdatasync to an innocuous variant, in case <limits.h> declares fdatasync.
+ For example, HP-UX 11i <limits.h> declares gettimeofday. */
+#define fdatasync innocuous_fdatasync
+
+/* System header to define __stub macros and hopefully few prototypes,
+ which can conflict with char fdatasync (); below.
+ Prefer <limits.h> to <assert.h> if __STDC__ is defined, since
+ <limits.h> exists even on freestanding compilers. */
+
+#ifdef __STDC__
+# include <limits.h>
+#else
+# include <assert.h>
+#endif
+
+#undef fdatasync
+
+/* Override any gcc2 internal prototype to avoid an error. */
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+/* We use char because int might match the return type of a gcc2
+ builtin and then its argument prototype would still apply. */
+char fdatasync ();
+/* The GNU C library defines this for functions which it implements
+ to always fail with ENOSYS. Some functions are actually named
+ something starting with __ and the normal name is an alias. */
+#if defined (__stub_fdatasync) || defined (__stub___fdatasync)
+choke me
+#else
+char (*f) () = fdatasync;
+#endif
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+return f != fdatasync;
+ ;
+ return 0;
+}
+_ACEOF
+rm -f conftest.$ac_objext conftest$ac_exeext
+if { (eval echo "$as_me:$LINENO: \"$ac_link\"") >&5
+ (eval $ac_link) 2>conftest.er1
+ ac_status=$?
+ grep -v '^ *+' conftest.er1 >conftest.err
+ rm -f conftest.er1
+ cat conftest.err >&5
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); } &&
+ { ac_try='test -z "$ac_c_werror_flag"
+ || test ! -s conftest.err'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; } &&
+ { ac_try='test -s conftest$ac_exeext'
+ { (eval echo "$as_me:$LINENO: \"$ac_try\"") >&5
+ (eval $ac_try) 2>&5
+ ac_status=$?
+ echo "$as_me:$LINENO: \$? = $ac_status" >&5
+ (exit $ac_status); }; }; then
+ ac_cv_func_fdatasync=yes
+else
+ echo "$as_me: failed program was:" >&5
+sed 's/^/| /' conftest.$ac_ext >&5
+
+ac_cv_func_fdatasync=no
+fi
+rm -f conftest.err conftest.$ac_objext \
+ conftest$ac_exeext conftest.$ac_ext
+fi
+echo "$as_me:$LINENO: result: $ac_cv_func_fdatasync" >&5
+echo "${ECHO_T}$ac_cv_func_fdatasync" >&6
+if test $ac_cv_func_fdatasync = yes; then
+ TARGET_CFLAGS="$TARGET_CFLAGS -DHAVE_FDATASYNC=1"
+fi
+
+
+#########
+# Put out accumulated miscellaneous LIBRARIES
+#
+
+
+#########
+# Generate the output files.
+#
+ ac_config_files="$ac_config_files Makefile sqlite3.pc"
+cat >confcache <<\_ACEOF
+# This file is a shell script that caches the results of configure
+# tests run on this system so they can be shared between configure
+# scripts and configure runs, see configure's option --config-cache.
+# It is not useful on other systems. If it contains results you don't
+# want to keep, you may remove or edit it.
+#
+# config.status only pays attention to the cache file if you give it
+# the --recheck option to rerun configure.
+#
+# `ac_cv_env_foo' variables (set or unset) will be overridden when
+# loading this file, other *unset* `ac_cv_foo' will be assigned the
+# following values.
+
+_ACEOF
+
+# The following way of writing the cache mishandles newlines in values,
+# but we know of no workaround that is simple, portable, and efficient.
+# So, don't put newlines in cache variables' values.
+# Ultrix sh set writes to stderr and can't be redirected directly,
+# and sets the high bit in the cache file unless we assign to the vars.
+{
+ (set) 2>&1 |
+ case `(ac_space=' '; set | grep ac_space) 2>&1` in
+ *ac_space=\ *)
+ # `set' does not quote correctly, so add quotes (double-quote
+ # substitution turns \\\\ into \\, and sed turns \\ into \).
+ sed -n \
+ "s/'/'\\\\''/g;
+ s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p"
+ ;;
+ *)
+ # `set' quotes correctly as required by POSIX, so do not add quotes.
+ sed -n \
+ "s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1=\\2/p"
+ ;;
+ esac;
+} |
+ sed '
+ t clear
+ : clear
+ s/^\([^=]*\)=\(.*[{}].*\)$/test "${\1+set}" = set || &/
+ t end
+ /^ac_cv_env/!s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/
+ : end' >>confcache
+if diff $cache_file confcache >/dev/null 2>&1; then :; else
+ if test -w $cache_file; then
+ test "x$cache_file" != "x/dev/null" && echo "updating cache $cache_file"
+ cat confcache >$cache_file
+ else
+ echo "not updating unwritable cache $cache_file"
+ fi
+fi
+rm -f confcache
+
+test "x$prefix" = xNONE && prefix=$ac_default_prefix
+# Let make expand exec_prefix.
+test "x$exec_prefix" = xNONE && exec_prefix='${prefix}'
+
+# VPATH may cause trouble with some makes, so we remove $(srcdir),
+# ${srcdir} and @srcdir@ from VPATH if srcdir is ".", strip leading and
+# trailing colons and then remove the whole line if VPATH becomes empty
+# (actually we leave an empty line to preserve line numbers).
+if test "x$srcdir" = x.; then
+ ac_vpsub='/^[ ]*VPATH[ ]*=/{
+s/:*\$(srcdir):*/:/;
+s/:*\${srcdir}:*/:/;
+s/:*@srcdir@:*/:/;
+s/^\([^=]*=[ ]*\):*/\1/;
+s/:*$//;
+s/^[^=]*=[ ]*$//;
+}'
+fi
+
+# Transform confdefs.h into DEFS.
+# Protect against shell expansion while executing Makefile rules.
+# Protect against Makefile macro expansion.
+#
+# If the first sed substitution is executed (which looks for macros that
+# take arguments), then we branch to the quote section. Otherwise,
+# look for a macro that doesn't take arguments.
+cat >confdef2opt.sed <<\_ACEOF
+t clear
+: clear
+s,^[ ]*#[ ]*define[ ][ ]*\([^ (][^ (]*([^)]*)\)[ ]*\(.*\),-D\1=\2,g
+t quote
+s,^[ ]*#[ ]*define[ ][ ]*\([^ ][^ ]*\)[ ]*\(.*\),-D\1=\2,g
+t quote
+d
+: quote
+s,[ `~#$^&*(){}\\|;'"<>?],\\&,g
+s,\[,\\&,g
+s,\],\\&,g
+s,\$,$$,g
+p
+_ACEOF
+# We use echo to avoid assuming a particular line-breaking character.
+# The extra dot is to prevent the shell from consuming trailing
+# line-breaks from the sub-command output. A line-break within
+# single-quotes doesn't work because, if this script is created in a
+# platform that uses two characters for line-breaks (e.g., DOS), tr
+# would break.
+ac_LF_and_DOT=`echo; echo .`
+DEFS=`sed -n -f confdef2opt.sed confdefs.h | tr "$ac_LF_and_DOT" ' .'`
+rm -f confdef2opt.sed
+
+
+ac_libobjs=
+ac_ltlibobjs=
+for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue
+ # 1. Remove the extension, and $U if already installed.
+ ac_i=`echo "$ac_i" |
+ sed 's/\$U\././;s/\.o$//;s/\.obj$//'`
+ # 2. Add them.
+ ac_libobjs="$ac_libobjs $ac_i\$U.$ac_objext"
+ ac_ltlibobjs="$ac_ltlibobjs $ac_i"'$U.lo'
+done
+LIBOBJS=$ac_libobjs
+
+LTLIBOBJS=$ac_ltlibobjs
+
+
+
+: ${CONFIG_STATUS=./config.status}
+ac_clean_files_save=$ac_clean_files
+ac_clean_files="$ac_clean_files $CONFIG_STATUS"
+{ echo "$as_me:$LINENO: creating $CONFIG_STATUS" >&5
+echo "$as_me: creating $CONFIG_STATUS" >&6;}
+cat >$CONFIG_STATUS <<_ACEOF
+#! $SHELL
+# Generated by $as_me.
+# Run this file to recreate the current configuration.
+# Compiler output produced by configure, useful for debugging
+# configure, is in config.log if it exists.
+
+debug=false
+ac_cs_recheck=false
+ac_cs_silent=false
+SHELL=\${CONFIG_SHELL-$SHELL}
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+## --------------------- ##
+## M4sh Initialization. ##
+## --------------------- ##
+
+# Be Bourne compatible
+if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
+ emulate sh
+ NULLCMD=:
+ # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which
+ # is contrary to our usage. Disable this feature.
+ alias -g '${1+"$@"}'='"$@"'
+elif test -n "${BASH_VERSION+set}" && (set -o posix) >/dev/null 2>&1; then
+ set -o posix
+fi
+DUALCASE=1; export DUALCASE # for MKS sh
+
+# Support unset when possible.
+if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then
+ as_unset=unset
+else
+ as_unset=false
+fi
+
+
+# Work around bugs in pre-3.0 UWIN ksh.
+$as_unset ENV MAIL MAILPATH
+PS1='$ '
+PS2='> '
+PS4='+ '
+
+# NLS nuisances.
+for as_var in \
+ LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \
+ LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \
+ LC_TELEPHONE LC_TIME
+do
+ if (set +x; test -z "`(eval $as_var=C; export $as_var) 2>&1`"); then
+ eval $as_var=C; export $as_var
+ else
+ $as_unset $as_var
+ fi
+done
+
+# Required to use basename.
+if expr a : '\(a\)' >/dev/null 2>&1; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+if (basename /) >/dev/null 2>&1 && test "X`basename / 2>&1`" = "X/"; then
+ as_basename=basename
+else
+ as_basename=false
+fi
+
+
+# Name of the executable.
+as_me=`$as_basename "$0" ||
+$as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \
+ X"$0" : 'X\(//\)$' \| \
+ X"$0" : 'X\(/\)$' \| \
+ . : '\(.\)' 2>/dev/null ||
+echo X/"$0" |
+ sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/; q; }
+ /^X\/\(\/\/\)$/{ s//\1/; q; }
+ /^X\/\(\/\).*/{ s//\1/; q; }
+ s/.*/./; q'`
+
+
+# PATH needs CR, and LINENO needs CR and PATH.
+# Avoid depending upon Character Ranges.
+as_cr_letters='abcdefghijklmnopqrstuvwxyz'
+as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
+as_cr_Letters=$as_cr_letters$as_cr_LETTERS
+as_cr_digits='0123456789'
+as_cr_alnum=$as_cr_Letters$as_cr_digits
+
+# The user is always right.
+if test "${PATH_SEPARATOR+set}" != set; then
+ echo "#! /bin/sh" >conf$$.sh
+ echo "exit 0" >>conf$$.sh
+ chmod +x conf$$.sh
+ if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then
+ PATH_SEPARATOR=';'
+ else
+ PATH_SEPARATOR=:
+ fi
+ rm -f conf$$.sh
+fi
+
+
+ as_lineno_1=$LINENO
+ as_lineno_2=$LINENO
+ as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null`
+ test "x$as_lineno_1" != "x$as_lineno_2" &&
+ test "x$as_lineno_3" = "x$as_lineno_2" || {
+ # Find who we are. Look in the path if we contain no path at all
+ # relative or not.
+ case $0 in
+ *[\\/]* ) as_myself=$0 ;;
+ *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break
+done
+
+ ;;
+ esac
+ # We did not find ourselves, most probably we were run as `sh COMMAND'
+ # in which case we are not to be found in the path.
+ if test "x$as_myself" = x; then
+ as_myself=$0
+ fi
+ if test ! -f "$as_myself"; then
+ { { echo "$as_me:$LINENO: error: cannot find myself; rerun with an absolute path" >&5
+echo "$as_me: error: cannot find myself; rerun with an absolute path" >&2;}
+ { (exit 1); exit 1; }; }
+ fi
+ case $CONFIG_SHELL in
+ '')
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for as_base in sh bash ksh sh5; do
+ case $as_dir in
+ /*)
+ if ("$as_dir/$as_base" -c '
+ as_lineno_1=$LINENO
+ as_lineno_2=$LINENO
+ as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null`
+ test "x$as_lineno_1" != "x$as_lineno_2" &&
+ test "x$as_lineno_3" = "x$as_lineno_2" ') 2>/dev/null; then
+ $as_unset BASH_ENV || test "${BASH_ENV+set}" != set || { BASH_ENV=; export BASH_ENV; }
+ $as_unset ENV || test "${ENV+set}" != set || { ENV=; export ENV; }
+ CONFIG_SHELL=$as_dir/$as_base
+ export CONFIG_SHELL
+ exec "$CONFIG_SHELL" "$0" ${1+"$@"}
+ fi;;
+ esac
+ done
+done
+;;
+ esac
+
+ # Create $as_me.lineno as a copy of $as_myself, but with $LINENO
+ # uniformly replaced by the line number. The first 'sed' inserts a
+ # line-number line before each line; the second 'sed' does the real
+ # work. The second script uses 'N' to pair each line-number line
+ # with the numbered line, and appends trailing '-' during
+ # substitution so that $LINENO is not a special case at line end.
+ # (Raja R Harinath suggested sed '=', and Paul Eggert wrote the
+ # second 'sed' script. Blame Lee E. McMahon for sed's syntax. :-)
+ sed '=' <$as_myself |
+ sed '
+ N
+ s,$,-,
+ : loop
+ s,^\(['$as_cr_digits']*\)\(.*\)[$]LINENO\([^'$as_cr_alnum'_]\),\1\2\1\3,
+ t loop
+ s,-$,,
+ s,^['$as_cr_digits']*\n,,
+ ' >$as_me.lineno &&
+ chmod +x $as_me.lineno ||
+ { { echo "$as_me:$LINENO: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&5
+echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2;}
+ { (exit 1); exit 1; }; }
+
+ # Don't try to exec as it changes $[0], causing all sort of problems
+ # (the dirname of $[0] is not the place where we might find the
+ # original and so on. Autoconf is especially sensible to this).
+ . ./$as_me.lineno
+ # Exit status is that of the last command.
+ exit
+}
+
+
+case `echo "testing\c"; echo 1,2,3`,`echo -n testing; echo 1,2,3` in
+ *c*,-n*) ECHO_N= ECHO_C='
+' ECHO_T=' ' ;;
+ *c*,* ) ECHO_N=-n ECHO_C= ECHO_T= ;;
+ *) ECHO_N= ECHO_C='\c' ECHO_T= ;;
+esac
+
+if expr a : '\(a\)' >/dev/null 2>&1; then
+ as_expr=expr
+else
+ as_expr=false
+fi
+
+rm -f conf$$ conf$$.exe conf$$.file
+echo >conf$$.file
+if ln -s conf$$.file conf$$ 2>/dev/null; then
+ # We could just check for DJGPP; but this test a) works b) is more generic
+ # and c) will remain valid once DJGPP supports symlinks (DJGPP 2.04).
+ if test -f conf$$.exe; then
+ # Don't use ln at all; we don't have any links
+ as_ln_s='cp -p'
+ else
+ as_ln_s='ln -s'
+ fi
+elif ln conf$$.file conf$$ 2>/dev/null; then
+ as_ln_s=ln
+else
+ as_ln_s='cp -p'
+fi
+rm -f conf$$ conf$$.exe conf$$.file
+
+if mkdir -p . 2>/dev/null; then
+ as_mkdir_p=:
+else
+ test -d ./-p && rmdir ./-p
+ as_mkdir_p=false
+fi
+
+as_executable_p="test -f"
+
+# Sed expression to map a string onto a valid CPP name.
+as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'"
+
+# Sed expression to map a string onto a valid variable name.
+as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'"
+
+
+# IFS
+# We need space, tab and new line, in precisely that order.
+as_nl='
+'
+IFS=" $as_nl"
+
+# CDPATH.
+$as_unset CDPATH
+
+exec 6>&1
+
+# Open the log real soon, to keep \$[0] and so on meaningful, and to
+# report actual input values of CONFIG_FILES etc. instead of their
+# values after options handling. Logging --version etc. is OK.
+exec 5>>config.log
+{
+ echo
+ sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX
+## Running $as_me. ##
+_ASBOX
+} >&5
+cat >&5 <<_CSEOF
+
+This file was extended by $as_me, which was
+generated by GNU Autoconf 2.59. Invocation command line was
+
+ CONFIG_FILES = $CONFIG_FILES
+ CONFIG_HEADERS = $CONFIG_HEADERS
+ CONFIG_LINKS = $CONFIG_LINKS
+ CONFIG_COMMANDS = $CONFIG_COMMANDS
+ $ $0 $@
+
+_CSEOF
+echo "on `(hostname || uname -n) 2>/dev/null | sed 1q`" >&5
+echo >&5
+_ACEOF
+
+# Files that config.status was made for.
+if test -n "$ac_config_files"; then
+ echo "config_files=\"$ac_config_files\"" >>$CONFIG_STATUS
+fi
+
+if test -n "$ac_config_headers"; then
+ echo "config_headers=\"$ac_config_headers\"" >>$CONFIG_STATUS
+fi
+
+if test -n "$ac_config_links"; then
+ echo "config_links=\"$ac_config_links\"" >>$CONFIG_STATUS
+fi
+
+if test -n "$ac_config_commands"; then
+ echo "config_commands=\"$ac_config_commands\"" >>$CONFIG_STATUS
+fi
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+
+ac_cs_usage="\
+\`$as_me' instantiates files from templates according to the
+current configuration.
+
+Usage: $0 [OPTIONS] [FILE]...
+
+ -h, --help print this help, then exit
+ -V, --version print version number, then exit
+ -q, --quiet do not print progress messages
+ -d, --debug don't remove temporary files
+ --recheck update $as_me by reconfiguring in the same conditions
+ --file=FILE[:TEMPLATE]
+ instantiate the configuration file FILE
+
+Configuration files:
+$config_files
+
+Report bugs to <bug-autoconf at gnu.org>."
+_ACEOF
+
+cat >>$CONFIG_STATUS <<_ACEOF
+ac_cs_version="\\
+config.status
+configured by $0, generated by GNU Autoconf 2.59,
+ with options \\"`echo "$ac_configure_args" | sed 's/[\\""\`\$]/\\\\&/g'`\\"
+
+Copyright (C) 2003 Free Software Foundation, Inc.
+This config.status script is free software; the Free Software Foundation
+gives unlimited permission to copy, distribute and modify it."
+srcdir=$srcdir
+INSTALL="$INSTALL"
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+# If no files are specified by the user, then we need to provide default
+# values. But we need to know if files were specified by the user.
+ac_need_defaults=:
+while test $# != 0
+do
+ case $1 in
+ --*=*)
+ ac_option=`expr "x$1" : 'x\([^=]*\)='`
+ ac_optarg=`expr "x$1" : 'x[^=]*=\(.*\)'`
+ ac_shift=:
+ ;;
+ -*)
+ ac_option=$1
+ ac_optarg=$2
+ ac_shift=shift
+ ;;
+ *) # This is not an option, so the user has probably given explicit
+ # arguments.
+ ac_option=$1
+ ac_need_defaults=false;;
+ esac
+
+ case $ac_option in
+ # Handling of the options.
+_ACEOF
+cat >>$CONFIG_STATUS <<\_ACEOF
+ -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r)
+ ac_cs_recheck=: ;;
+ --version | --vers* | -V )
+ echo "$ac_cs_version"; exit 0 ;;
+ --he | --h)
+ # Conflict between --help and --header
+ { { echo "$as_me:$LINENO: error: ambiguous option: $1
+Try \`$0 --help' for more information." >&5
+echo "$as_me: error: ambiguous option: $1
+Try \`$0 --help' for more information." >&2;}
+ { (exit 1); exit 1; }; };;
+ --help | --hel | -h )
+ echo "$ac_cs_usage"; exit 0 ;;
+ --debug | --d* | -d )
+ debug=: ;;
+ --file | --fil | --fi | --f )
+ $ac_shift
+ CONFIG_FILES="$CONFIG_FILES $ac_optarg"
+ ac_need_defaults=false;;
+ --header | --heade | --head | --hea )
+ $ac_shift
+ CONFIG_HEADERS="$CONFIG_HEADERS $ac_optarg"
+ ac_need_defaults=false;;
+ -q | -quiet | --quiet | --quie | --qui | --qu | --q \
+ | -silent | --silent | --silen | --sile | --sil | --si | --s)
+ ac_cs_silent=: ;;
+
+ # This is an error.
+ -*) { { echo "$as_me:$LINENO: error: unrecognized option: $1
+Try \`$0 --help' for more information." >&5
+echo "$as_me: error: unrecognized option: $1
+Try \`$0 --help' for more information." >&2;}
+ { (exit 1); exit 1; }; } ;;
+
+ *) ac_config_targets="$ac_config_targets $1" ;;
+
+ esac
+ shift
+done
+
+ac_configure_extra_args=
+
+if $ac_cs_silent; then
+ exec 6>/dev/null
+ ac_configure_extra_args="$ac_configure_extra_args --silent"
+fi
+
+_ACEOF
+cat >>$CONFIG_STATUS <<_ACEOF
+if \$ac_cs_recheck; then
+ echo "running $SHELL $0 " $ac_configure_args \$ac_configure_extra_args " --no-create --no-recursion" >&6
+ exec $SHELL $0 $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion
+fi
+
+_ACEOF
+
+
+
+
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+for ac_config_target in $ac_config_targets
+do
+ case "$ac_config_target" in
+ # Handling of arguments.
+ "Makefile" ) CONFIG_FILES="$CONFIG_FILES Makefile" ;;
+ "sqlite3.pc" ) CONFIG_FILES="$CONFIG_FILES sqlite3.pc" ;;
+ *) { { echo "$as_me:$LINENO: error: invalid argument: $ac_config_target" >&5
+echo "$as_me: error: invalid argument: $ac_config_target" >&2;}
+ { (exit 1); exit 1; }; };;
+ esac
+done
+
+# If the user did not use the arguments to specify the items to instantiate,
+# then the envvar interface is used. Set only those that are not.
+# We use the long form for the default assignment because of an extremely
+# bizarre bug on SunOS 4.1.3.
+if $ac_need_defaults; then
+ test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files
+fi
+
+# Have a temporary directory for convenience. Make it in the build tree
+# simply because there is no reason to put it here, and in addition,
+# creating and moving files from /tmp can sometimes cause problems.
+# Create a temporary directory, and hook for its removal unless debugging.
+$debug ||
+{
+ trap 'exit_status=$?; rm -rf $tmp && exit $exit_status' 0
+ trap '{ (exit 1); exit 1; }' 1 2 13 15
+}
+
+# Create a (secure) tmp directory for tmp files.
+
+{
+ tmp=`(umask 077 && mktemp -d -q "./confstatXXXXXX") 2>/dev/null` &&
+ test -n "$tmp" && test -d "$tmp"
+} ||
+{
+ tmp=./confstat$$-$RANDOM
+ (umask 077 && mkdir $tmp)
+} ||
+{
+ echo "$me: cannot create a temporary directory in ." >&2
+ { (exit 1); exit 1; }
+}
+
+_ACEOF
+
+cat >>$CONFIG_STATUS <<_ACEOF
+
+#
+# CONFIG_FILES section.
+#
+
+# No need to generate the scripts if there are no CONFIG_FILES.
+# This happens for instance when ./config.status config.h
+if test -n "\$CONFIG_FILES"; then
+ # Protect against being on the right side of a sed subst in config.status.
+ sed 's/,@/@@/; s/@,/@@/; s/,;t t\$/@;t t/; /@;t t\$/s/[\\\\&,]/\\\\&/g;
+ s/@@/,@/; s/@@/@,/; s/@;t t\$/,;t t/' >\$tmp/subs.sed <<\\CEOF
+s,@SHELL@,$SHELL,;t t
+s,@PATH_SEPARATOR@,$PATH_SEPARATOR,;t t
+s,@PACKAGE_NAME@,$PACKAGE_NAME,;t t
+s,@PACKAGE_TARNAME@,$PACKAGE_TARNAME,;t t
+s,@PACKAGE_VERSION@,$PACKAGE_VERSION,;t t
+s,@PACKAGE_STRING@,$PACKAGE_STRING,;t t
+s,@PACKAGE_BUGREPORT@,$PACKAGE_BUGREPORT,;t t
+s,@exec_prefix@,$exec_prefix,;t t
+s,@prefix@,$prefix,;t t
+s,@program_transform_name@,$program_transform_name,;t t
+s,@bindir@,$bindir,;t t
+s,@sbindir@,$sbindir,;t t
+s,@libexecdir@,$libexecdir,;t t
+s,@datadir@,$datadir,;t t
+s,@sysconfdir@,$sysconfdir,;t t
+s,@sharedstatedir@,$sharedstatedir,;t t
+s,@localstatedir@,$localstatedir,;t t
+s,@libdir@,$libdir,;t t
+s,@includedir@,$includedir,;t t
+s,@oldincludedir@,$oldincludedir,;t t
+s,@infodir@,$infodir,;t t
+s,@mandir@,$mandir,;t t
+s,@build_alias@,$build_alias,;t t
+s,@host_alias@,$host_alias,;t t
+s,@target_alias@,$target_alias,;t t
+s,@DEFS@,$DEFS,;t t
+s,@ECHO_C@,$ECHO_C,;t t
+s,@ECHO_N@,$ECHO_N,;t t
+s,@ECHO_T@,$ECHO_T,;t t
+s,@LIBS@,$LIBS,;t t
+s,@build@,$build,;t t
+s,@build_cpu@,$build_cpu,;t t
+s,@build_vendor@,$build_vendor,;t t
+s,@build_os@,$build_os,;t t
+s,@host@,$host,;t t
+s,@host_cpu@,$host_cpu,;t t
+s,@host_vendor@,$host_vendor,;t t
+s,@host_os@,$host_os,;t t
+s,@CC@,$CC,;t t
+s,@CFLAGS@,$CFLAGS,;t t
+s,@LDFLAGS@,$LDFLAGS,;t t
+s,@CPPFLAGS@,$CPPFLAGS,;t t
+s,@ac_ct_CC@,$ac_ct_CC,;t t
+s,@EXEEXT@,$EXEEXT,;t t
+s,@OBJEXT@,$OBJEXT,;t t
+s,@EGREP@,$EGREP,;t t
+s,@LN_S@,$LN_S,;t t
+s,@ECHO@,$ECHO,;t t
+s,@AR@,$AR,;t t
+s,@ac_ct_AR@,$ac_ct_AR,;t t
+s,@RANLIB@,$RANLIB,;t t
+s,@ac_ct_RANLIB@,$ac_ct_RANLIB,;t t
+s,@STRIP@,$STRIP,;t t
+s,@ac_ct_STRIP@,$ac_ct_STRIP,;t t
+s,@CPP@,$CPP,;t t
+s,@CXX@,$CXX,;t t
+s,@CXXFLAGS@,$CXXFLAGS,;t t
+s,@ac_ct_CXX@,$ac_ct_CXX,;t t
+s,@CXXCPP@,$CXXCPP,;t t
+s,@F77@,$F77,;t t
+s,@FFLAGS@,$FFLAGS,;t t
+s,@ac_ct_F77@,$ac_ct_F77,;t t
+s,@LIBTOOL@,$LIBTOOL,;t t
+s,@INSTALL_PROGRAM@,$INSTALL_PROGRAM,;t t
+s,@INSTALL_SCRIPT@,$INSTALL_SCRIPT,;t t
+s,@INSTALL_DATA@,$INSTALL_DATA,;t t
+s,@AWK@,$AWK,;t t
+s,@program_prefix@,$program_prefix,;t t
+s,@VERSION@,$VERSION,;t t
+s,@RELEASE@,$RELEASE,;t t
+s,@VERSION_NUMBER@,$VERSION_NUMBER,;t t
+s,@BUILD_CC@,$BUILD_CC,;t t
+s,@BUILD_CFLAGS@,$BUILD_CFLAGS,;t t
+s,@BUILD_LIBS@,$BUILD_LIBS,;t t
+s,@TARGET_CC@,$TARGET_CC,;t t
+s,@TARGET_CFLAGS@,$TARGET_CFLAGS,;t t
+s,@TARGET_LINK@,$TARGET_LINK,;t t
+s,@TARGET_LFLAGS@,$TARGET_LFLAGS,;t t
+s,@TARGET_RANLIB@,$TARGET_RANLIB,;t t
+s,@TARGET_AR@,$TARGET_AR,;t t
+s,@THREADSAFE@,$THREADSAFE,;t t
+s,@TARGET_THREAD_LIB@,$TARGET_THREAD_LIB,;t t
+s,@XTHREADCONNECT@,$XTHREADCONNECT,;t t
+s,@THREADSOVERRIDELOCKS@,$THREADSOVERRIDELOCKS,;t t
+s,@ALLOWRELEASE@,$ALLOWRELEASE,;t t
+s,@TEMP_STORE@,$TEMP_STORE,;t t
+s,@BUILD_EXEEXT@,$BUILD_EXEEXT,;t t
+s,@OS_UNIX@,$OS_UNIX,;t t
+s,@OS_WIN@,$OS_WIN,;t t
+s,@OS_OS2@,$OS_OS2,;t t
+s,@TARGET_EXEEXT@,$TARGET_EXEEXT,;t t
+s,@TCL_VERSION@,$TCL_VERSION,;t t
+s,@TCL_BIN_DIR@,$TCL_BIN_DIR,;t t
+s,@TCL_SRC_DIR@,$TCL_SRC_DIR,;t t
+s,@TCL_LIBS@,$TCL_LIBS,;t t
+s,@TCL_INCLUDE_SPEC@,$TCL_INCLUDE_SPEC,;t t
+s,@TCL_LIB_FILE@,$TCL_LIB_FILE,;t t
+s,@TCL_LIB_FLAG@,$TCL_LIB_FLAG,;t t
+s,@TCL_LIB_SPEC@,$TCL_LIB_SPEC,;t t
+s,@TCL_STUB_LIB_FILE@,$TCL_STUB_LIB_FILE,;t t
+s,@TCL_STUB_LIB_FLAG@,$TCL_STUB_LIB_FLAG,;t t
+s,@TCL_STUB_LIB_SPEC@,$TCL_STUB_LIB_SPEC,;t t
+s,@HAVE_TCL@,$HAVE_TCL,;t t
+s,@TARGET_READLINE_LIBS@,$TARGET_READLINE_LIBS,;t t
+s,@TARGET_READLINE_INC@,$TARGET_READLINE_INC,;t t
+s,@TARGET_HAVE_READLINE@,$TARGET_HAVE_READLINE,;t t
+s,@TARGET_DEBUG@,$TARGET_DEBUG,;t t
+s,@TARGET_LIBS@,$TARGET_LIBS,;t t
+s,@LIBOBJS@,$LIBOBJS,;t t
+s,@LTLIBOBJS@,$LTLIBOBJS,;t t
+CEOF
+
+_ACEOF
+
+ cat >>$CONFIG_STATUS <<\_ACEOF
+ # Split the substitutions into bite-sized pieces for seds with
+ # small command number limits, like on Digital OSF/1 and HP-UX.
+ ac_max_sed_lines=48
+ ac_sed_frag=1 # Number of current file.
+ ac_beg=1 # First line for current file.
+ ac_end=$ac_max_sed_lines # Line after last line for current file.
+ ac_more_lines=:
+ ac_sed_cmds=
+ while $ac_more_lines; do
+ if test $ac_beg -gt 1; then
+ sed "1,${ac_beg}d; ${ac_end}q" $tmp/subs.sed >$tmp/subs.frag
+ else
+ sed "${ac_end}q" $tmp/subs.sed >$tmp/subs.frag
+ fi
+ if test ! -s $tmp/subs.frag; then
+ ac_more_lines=false
+ else
+ # The purpose of the label and of the branching condition is to
+ # speed up the sed processing (if there are no `@' at all, there
+ # is no need to browse any of the substitutions).
+ # These are the two extra sed commands mentioned above.
+ (echo ':t
+ /@[a-zA-Z_][a-zA-Z_0-9]*@/!b' && cat $tmp/subs.frag) >$tmp/subs-$ac_sed_frag.sed
+ if test -z "$ac_sed_cmds"; then
+ ac_sed_cmds="sed -f $tmp/subs-$ac_sed_frag.sed"
+ else
+ ac_sed_cmds="$ac_sed_cmds | sed -f $tmp/subs-$ac_sed_frag.sed"
+ fi
+ ac_sed_frag=`expr $ac_sed_frag + 1`
+ ac_beg=$ac_end
+ ac_end=`expr $ac_end + $ac_max_sed_lines`
+ fi
+ done
+ if test -z "$ac_sed_cmds"; then
+ ac_sed_cmds=cat
+ fi
+fi # test -n "$CONFIG_FILES"
+
+_ACEOF
+cat >>$CONFIG_STATUS <<\_ACEOF
+for ac_file in : $CONFIG_FILES; do test "x$ac_file" = x: && continue
+ # Support "outfile[:infile[:infile...]]", defaulting infile="outfile.in".
+ case $ac_file in
+ - | *:- | *:-:* ) # input from stdin
+ cat >$tmp/stdin
+ ac_file_in=`echo "$ac_file" | sed 's,[^:]*:,,'`
+ ac_file=`echo "$ac_file" | sed 's,:.*,,'` ;;
+ *:* ) ac_file_in=`echo "$ac_file" | sed 's,[^:]*:,,'`
+ ac_file=`echo "$ac_file" | sed 's,:.*,,'` ;;
+ * ) ac_file_in=$ac_file.in ;;
+ esac
+
+ # Compute @srcdir@, @top_srcdir@, and @INSTALL@ for subdirectories.
+ ac_dir=`(dirname "$ac_file") 2>/dev/null ||
+$as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$ac_file" : 'X\(//\)[^/]' \| \
+ X"$ac_file" : 'X\(//\)$' \| \
+ X"$ac_file" : 'X\(/\)' \| \
+ . : '\(.\)' 2>/dev/null ||
+echo X"$ac_file" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; }
+ /^X\(\/\/\)[^/].*/{ s//\1/; q; }
+ /^X\(\/\/\)$/{ s//\1/; q; }
+ /^X\(\/\).*/{ s//\1/; q; }
+ s/.*/./; q'`
+ { if $as_mkdir_p; then
+ mkdir -p "$ac_dir"
+ else
+ as_dir="$ac_dir"
+ as_dirs=
+ while test ! -d "$as_dir"; do
+ as_dirs="$as_dir $as_dirs"
+ as_dir=`(dirname "$as_dir") 2>/dev/null ||
+$as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
+ X"$as_dir" : 'X\(//\)[^/]' \| \
+ X"$as_dir" : 'X\(//\)$' \| \
+ X"$as_dir" : 'X\(/\)' \| \
+ . : '\(.\)' 2>/dev/null ||
+echo X"$as_dir" |
+ sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; }
+ /^X\(\/\/\)[^/].*/{ s//\1/; q; }
+ /^X\(\/\/\)$/{ s//\1/; q; }
+ /^X\(\/\).*/{ s//\1/; q; }
+ s/.*/./; q'`
+ done
+ test ! -n "$as_dirs" || mkdir $as_dirs
+ fi || { { echo "$as_me:$LINENO: error: cannot create directory \"$ac_dir\"" >&5
+echo "$as_me: error: cannot create directory \"$ac_dir\"" >&2;}
+ { (exit 1); exit 1; }; }; }
+
+ ac_builddir=.
+
+if test "$ac_dir" != .; then
+ ac_dir_suffix=/`echo "$ac_dir" | sed 's,^\.[\\/],,'`
+ # A "../" for each directory in $ac_dir_suffix.
+ ac_top_builddir=`echo "$ac_dir_suffix" | sed 's,/[^\\/]*,../,g'`
+else
+ ac_dir_suffix= ac_top_builddir=
+fi
+
+case $srcdir in
+ .) # No --srcdir option. We are building in place.
+ ac_srcdir=.
+ if test -z "$ac_top_builddir"; then
+ ac_top_srcdir=.
+ else
+ ac_top_srcdir=`echo $ac_top_builddir | sed 's,/$,,'`
+ fi ;;
+ [\\/]* | ?:[\\/]* ) # Absolute path.
+ ac_srcdir=$srcdir$ac_dir_suffix;
+ ac_top_srcdir=$srcdir ;;
+ *) # Relative path.
+ ac_srcdir=$ac_top_builddir$srcdir$ac_dir_suffix
+ ac_top_srcdir=$ac_top_builddir$srcdir ;;
+esac
+
+# Do not use `cd foo && pwd` to compute absolute paths, because
+# the directories may not exist.
+case `pwd` in
+.) ac_abs_builddir="$ac_dir";;
+*)
+ case "$ac_dir" in
+ .) ac_abs_builddir=`pwd`;;
+ [\\/]* | ?:[\\/]* ) ac_abs_builddir="$ac_dir";;
+ *) ac_abs_builddir=`pwd`/"$ac_dir";;
+ esac;;
+esac
+case $ac_abs_builddir in
+.) ac_abs_top_builddir=${ac_top_builddir}.;;
+*)
+ case ${ac_top_builddir}. in
+ .) ac_abs_top_builddir=$ac_abs_builddir;;
+ [\\/]* | ?:[\\/]* ) ac_abs_top_builddir=${ac_top_builddir}.;;
+ *) ac_abs_top_builddir=$ac_abs_builddir/${ac_top_builddir}.;;
+ esac;;
+esac
+case $ac_abs_builddir in
+.) ac_abs_srcdir=$ac_srcdir;;
+*)
+ case $ac_srcdir in
+ .) ac_abs_srcdir=$ac_abs_builddir;;
+ [\\/]* | ?:[\\/]* ) ac_abs_srcdir=$ac_srcdir;;
+ *) ac_abs_srcdir=$ac_abs_builddir/$ac_srcdir;;
+ esac;;
+esac
+case $ac_abs_builddir in
+.) ac_abs_top_srcdir=$ac_top_srcdir;;
+*)
+ case $ac_top_srcdir in
+ .) ac_abs_top_srcdir=$ac_abs_builddir;;
+ [\\/]* | ?:[\\/]* ) ac_abs_top_srcdir=$ac_top_srcdir;;
+ *) ac_abs_top_srcdir=$ac_abs_builddir/$ac_top_srcdir;;
+ esac;;
+esac
+
+
+ case $INSTALL in
+ [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;;
+ *) ac_INSTALL=$ac_top_builddir$INSTALL ;;
+ esac
+
+ if test x"$ac_file" != x-; then
+ { echo "$as_me:$LINENO: creating $ac_file" >&5
+echo "$as_me: creating $ac_file" >&6;}
+ rm -f "$ac_file"
+ fi
+ # Let's still pretend it is `configure' which instantiates (i.e., don't
+ # use $as_me), people would be surprised to read:
+ # /* config.h. Generated by config.status. */
+ if test x"$ac_file" = x-; then
+ configure_input=
+ else
+ configure_input="$ac_file. "
+ fi
+ configure_input=$configure_input"Generated from `echo $ac_file_in |
+ sed 's,.*/,,'` by configure."
+
+ # First look for the input files in the build tree, otherwise in the
+ # src tree.
+ ac_file_inputs=`IFS=:
+ for f in $ac_file_in; do
+ case $f in
+ -) echo $tmp/stdin ;;
+ [\\/$]*)
+ # Absolute (can't be DOS-style, as IFS=:)
+ test -f "$f" || { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5
+echo "$as_me: error: cannot find input file: $f" >&2;}
+ { (exit 1); exit 1; }; }
+ echo "$f";;
+ *) # Relative
+ if test -f "$f"; then
+ # Build tree
+ echo "$f"
+ elif test -f "$srcdir/$f"; then
+ # Source tree
+ echo "$srcdir/$f"
+ else
+ # /dev/null tree
+ { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5
+echo "$as_me: error: cannot find input file: $f" >&2;}
+ { (exit 1); exit 1; }; }
+ fi;;
+ esac
+ done` || { (exit 1); exit 1; }
+_ACEOF
+cat >>$CONFIG_STATUS <<_ACEOF
+ sed "$ac_vpsub
+$extrasub
+_ACEOF
+cat >>$CONFIG_STATUS <<\_ACEOF
+:t
+/@[a-zA-Z_][a-zA-Z_0-9]*@/!b
+s,@configure_input@,$configure_input,;t t
+s,@srcdir@,$ac_srcdir,;t t
+s,@abs_srcdir@,$ac_abs_srcdir,;t t
+s,@top_srcdir@,$ac_top_srcdir,;t t
+s,@abs_top_srcdir@,$ac_abs_top_srcdir,;t t
+s,@builddir@,$ac_builddir,;t t
+s,@abs_builddir@,$ac_abs_builddir,;t t
+s,@top_builddir@,$ac_top_builddir,;t t
+s,@abs_top_builddir@,$ac_abs_top_builddir,;t t
+s,@INSTALL@,$ac_INSTALL,;t t
+" $ac_file_inputs | (eval "$ac_sed_cmds") >$tmp/out
+ rm -f $tmp/stdin
+ if test x"$ac_file" != x-; then
+ mv $tmp/out $ac_file
+ else
+ cat $tmp/out
+ rm -f $tmp/out
+ fi
+
+done
+_ACEOF
+
+cat >>$CONFIG_STATUS <<\_ACEOF
+
+{ (exit 0); exit 0; }
+_ACEOF
+chmod +x $CONFIG_STATUS
+ac_clean_files=$ac_clean_files_save
+
+
+# configure is writing to config.log, and then calls config.status.
+# config.status does its own redirection, appending to config.log.
+# Unfortunately, on DOS this fails, as config.log is still kept open
+# by configure, so config.status won't be able to write to it; its
+# output is simply discarded. So we exec the FD to /dev/null,
+# effectively closing config.log, so it can be properly (re)opened and
+# appended to by config.status. When coming back to configure, we
+# need to make the FD available again.
+if test "$no_create" != yes; then
+ ac_cs_success=:
+ ac_config_status_args=
+ test "$silent" = yes &&
+ ac_config_status_args="$ac_config_status_args --quiet"
+ exec 5>/dev/null
+ $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false
+ exec 5>>config.log
+ # Use ||, not &&, to avoid exiting from the if with $? = 1, which
+ # would make configure fail if this is the last instruction.
+ $ac_cs_success || { (exit 1); exit 1; }
+fi
+
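For reference, a minimal sketch of driving the generated script above; the prefix, readline paths, and library names are illustrative only, and the config_* variables are the override hooks the script itself documents:

    # hypothetical values shown; adjust for the actual build host
    config_TARGET_READLINE_INC="-I/usr/local/include/readline" \
    config_TARGET_READLINE_LIBS="-lreadline -lncurses" \
    ./configure --enable-debug --prefix=/usr/local
    make

With --enable-debug the script substitutes TARGET_DEBUG="-DSQLITE_DEBUG=1" into the generated Makefile; without it, -DNDEBUG is substituted instead.
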
Added: freeswitch/trunk/libs/sqlite/configure.ac
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/configure.ac Tue Dec 19 15:11:50 2006
@@ -0,0 +1,689 @@
+#
+# The build process allows for using a cross-compiler. But the default
+# action is to target the same platform that we are running on. The
+# configure script needs to discover the following properties of the
+# build and target systems:
+#
+# srcdir
+#
+# This is the name of the directory that contains the
+# "configure" shell script. All source files are
+# located relative to this directory.
+#
+# bindir
+#
+# The name of the directory where executables should be
+# written by the "install" target of the makefile.
+#
+# program_prefix
+#
+# Add this prefix to the names of all executables that run
+# on the target machine. Default: ""
+#
+# ENABLE_SHARED
+#
+# True if shared libraries should be generated.
+#
+# BUILD_CC
+#
+# The name of a command that is used to convert C
+# source files into executables that run on the build
+# platform.
+#
+# BUILD_CFLAGS
+#
+# Switches that the build compiler needs in order to construct
+# command-line programs.
+#
+# BUILD_LIBS
+#
+# Libraries that the build compiler needs in order to construct
+# command-line programs.
+#
+# BUILD_EXEEXT
+#
+# The filename extension for executables on the build
+# platform. "" for Unix and ".exe" for Windows.
+#
+# TARGET_CC
+#
+# The name of a command that runs on the build platform
+# and converts C source files into *.o files for the
+# target platform. In other words, the cross-compiler.
+#
+# TARGET_CFLAGS
+#
+# Switches that the target compiler needs to turn C source files
+# into *.o files. Do not include TARGET_TCL_INC in this list.
+# Makefiles might add additional switches such as "-I.".
+#
+# TCL_*
+#
+# Lots of values are read in from the tclConfig.sh script,
+# if that script is available. These values are used for
+# constructing and installing the TCL extension.
+#
+# TARGET_READLINE_LIBS
+#
+# These are the library directives passed to the target linker
+# that cause the executable to link against the readline library.
+# This might be a switch like "-lreadline" or the pathname of a library
+# file like "../../src/libreadline.a".
+#
+# TARGET_READLINE_INC
+#
+# This variable defines the directory that contains the header
+# files for the readline library. If the compiler is able
+# to find <readline.h> on its own, then this can be blank.
+#
+# TARGET_LINK
+#
+# The name of the linker that combines *.o files generated
+# by TARGET_CC into executables for the target platform.
+#
+# TARGET_LIBS
+#
+# Additional libraries or other switches that the target linker needs
+# to build an executable on the target. Do not include
+# in this list any libraries in TARGET_TCL_LIBS and
+# TARGET_READLINE_LIBS, etc.
+#
+# TARGET_EXEEXT
+#
+# The filename extension for executables on the
+# target platform. "" for Unix and ".exe" for Windows.
+#
+# The generated configure script will make an attempt to guess
+# at all of the above parameters. You can override any of
+# the guesses by setting the environment variable named
+# "config_AAAA" where "AAAA" is the name of the parameter
+# described above. (Exception: srcdir cannot be set this way.)
+# If you have a file that sets one or more of these environment
+# variables, you can invoke configure as follows:
+#
+# configure --with-hints=FILE
+#
+# where FILE is the name of the file that sets the environment
+# variables. FILE should be an absolute pathname.
+#
+# This configure.in file is easy to reuse on other projects. Just
+# change the argument to AC_INIT(). And disable any features that
+# you don't need (for example BLT) by erasing or commenting out
+# the corresponding code.
+#
+AC_INIT(src/sqlite.h.in)
+
+dnl Put the RCS revision string after AC_INIT so that it will also
+dnl show in configure.
+# The following RCS revision string applies to configure.in
+# $Revision: 1.26 $
+
+#########
+# Programs needed
+#
+AC_PROG_LIBTOOL
+AC_PROG_INSTALL
+AC_PROG_AWK
+
+#########
+# Set up an appropriate program prefix
+#
+if test "$program_prefix" = "NONE"; then
+ program_prefix=""
+fi
+AC_SUBST(program_prefix)
+
+VERSION=[`cat $srcdir/VERSION | sed 's/^\([0-9]*\.*[0-9]*\).*/\1/'`]
+echo "Version set to $VERSION"
+AC_SUBST(VERSION)
+RELEASE=`cat $srcdir/VERSION`
+echo "Release set to $RELEASE"
+AC_SUBST(RELEASE)
+VERSION_NUMBER=[`cat $srcdir/VERSION \
+ | sed 's/[^0-9]/ /g' \
+ | awk '{printf "%d%03d%03d",$1,$2,$3}'`]
+echo "Version number set to $VERSION_NUMBER"
+AC_SUBST(VERSION_NUMBER)
+
+#########
+# Check to see if the --with-hints=FILE option is used. If there is none,
+# then check for files named "$host.hints" and ../$host.hints where
+# $host is the hostname of the build system. If still no hints are
+# found, try looking in $system.hints and ../$system.hints where
+# $system is the result of uname -s.
+#
+AC_ARG_WITH(hints,
+ AC_HELP_STRING([--with-hints=FILE],[Read configuration options from FILE]),
+ hints=$withval)
+if test "$hints" = ""; then
+ host=`hostname | sed 's/\..*//'`
+ if test -r $host.hints; then
+ hints=$host.hints
+ else
+ if test -r ../$host.hints; then
+ hints=../$host.hints
+ fi
+ fi
+fi
+if test "$hints" = ""; then
+ sys=`uname -s`
+ if test -r $sys.hints; then
+ hints=$sys.hints
+ else
+ if test -r ../$sys.hints; then
+ hints=../$sys.hints
+ fi
+ fi
+fi
+if test "$hints" != ""; then
+ AC_MSG_RESULT(reading hints from $hints)
+ . $hints
+fi
+
+#########
+# Locate a compiler for the build machine. This compiler should
+# generate command-line programs that run on the build machine.
+#
+default_build_cflags="-g"
+if test "$config_BUILD_CC" = ""; then
+ AC_PROG_CC
+ if test "$cross_compiling" = "yes"; then
+ AC_MSG_ERROR([unable to find a compiler for building build tools])
+ fi
+ BUILD_CC=$CC
+ default_build_cflags=$CFLAGS
+else
+ BUILD_CC=$config_BUILD_CC
+ AC_MSG_CHECKING([host compiler])
+ CC=$BUILD_CC
+ AC_MSG_RESULT($BUILD_CC)
+fi
+AC_MSG_CHECKING([switches for the host compiler])
+if test "$config_BUILD_CFLAGS" != ""; then
+ CFLAGS=$config_BUILD_CFLAGS
+ BUILD_CFLAGS=$config_BUILD_CFLAGS
+else
+ BUILD_CFLAGS=$default_build_cflags
+fi
+AC_MSG_RESULT($BUILD_CFLAGS)
+if test "$config_BUILD_LIBS" != ""; then
+ BUILD_LIBS=$config_BUILD_LIBS
+fi
+AC_SUBST(BUILD_CC)
+AC_SUBST(BUILD_CFLAGS)
+AC_SUBST(BUILD_LIBS)
+
+##########
+# Locate a compiler that converts C code into *.o files that run on
+# the target machine.
+#
+AC_MSG_CHECKING([target compiler])
+if test "$config_TARGET_CC" != ""; then
+ TARGET_CC=$config_TARGET_CC
+else
+ TARGET_CC=$BUILD_CC
+fi
+AC_MSG_RESULT($TARGET_CC)
+AC_MSG_CHECKING([switches on the target compiler])
+if test "$config_TARGET_CFLAGS" != ""; then
+ TARGET_CFLAGS=$config_TARGET_CFLAGS
+else
+ TARGET_CFLAGS=$BUILD_CFLAGS
+fi
+AC_MSG_RESULT($TARGET_CFLAGS)
+AC_MSG_CHECKING([target linker])
+if test "$config_TARGET_LINK" = ""; then
+ TARGET_LINK=$TARGET_CC
+else
+ TARGET_LINK=$config_TARGET_LINK
+fi
+AC_MSG_RESULT($TARGET_LINK)
+AC_MSG_CHECKING([switches on the target compiler])
+if test "$config_TARGET_TFLAGS" != ""; then
+ TARGET_TFLAGS=$config_TARGET_TFLAGS
+else
+ TARGET_TFLAGS=$BUILD_CFLAGS
+fi
+if test "$config_TARGET_RANLIB" != ""; then
+ TARGET_RANLIB=$config_TARGET_RANLIB
+else
+ AC_PROG_RANLIB
+ TARGET_RANLIB=$RANLIB
+fi
+if test "$config_TARGET_AR" != ""; then
+ TARGET_AR=$config_TARGET_AR
+else
+ TARGET_AR='ar cr'
+fi
+AC_MSG_RESULT($TARGET_TFLAGS)
+AC_SUBST(TARGET_CC)
+AC_SUBST(TARGET_CFLAGS)
+AC_SUBST(TARGET_LINK)
+AC_SUBST(TARGET_LFLAGS)
+AC_SUBST(TARGET_RANLIB)
+AC_SUBST(TARGET_AR)
+
+# Set the $cross variable if we are cross-compiling. Make
+# it 0 if we are not.
+#
+AC_MSG_CHECKING([if host and target compilers are the same])
+if test "$BUILD_CC" = "$TARGET_CC"; then
+ cross=0
+ AC_MSG_RESULT(yes)
+else
+ cross=1
+ AC_MSG_RESULT(no)
+fi
+
+##########
+# Do we want to support multithreaded use of sqlite
+#
+AC_ARG_ENABLE(threadsafe,
+AC_HELP_STRING([--enable-threadsafe],[Support threadsafe operation]),,enable_threadsafe=no)
+AC_MSG_CHECKING([whether to support threadsafe operation])
+if test "$enable_threadsafe" = "no"; then
+ THREADSAFE=0
+ AC_MSG_RESULT([no])
+else
+ THREADSAFE=1
+ AC_MSG_RESULT([yes])
+fi
+AC_SUBST(THREADSAFE)
+
+if test "$THREADSAFE" = "1"; then
+ LIBS=""
+ AC_CHECK_LIB(pthread, pthread_create)
+ TARGET_THREAD_LIB="$LIBS"
+ LIBS=""
+else
+ TARGET_THREAD_LIB=""
+fi
+AC_SUBST(TARGET_THREAD_LIB)
+
+##########
+# Do we want to allow a connection created in one thread to be used
+# in another thread. This does not work on many Linux systems (ex: RedHat 9)
+# due to bugs in the threading implementations. This is thus off by default.
+#
+AC_ARG_ENABLE(cross-thread-connections,
+AC_HELP_STRING([--enable-cross-thread-connections],[Allow connection sharing across threads]),,enable_xthreadconnect=no)
+AC_MSG_CHECKING([whether to allow connections to be shared across threads])
+if test "$enable_xthreadconnect" = "no"; then
+ XTHREADCONNECT=''
+ AC_MSG_RESULT([no])
+else
+ XTHREADCONNECT='-DSQLITE_ALLOW_XTHREAD_CONNECT=1'
+ AC_MSG_RESULT([yes])
+fi
+AC_SUBST(XTHREADCONNECT)
+
+##########
+# Do we want to set the threadsOverrideEachOthersLocks variable to be 1 (true) by
+# default. Normally, a test at runtime is performed to determine the
+# appropriate value of this variable. Use this option only if you're sure that
+# threads can safely override each other's locks in all runtime situations.
+#
+AC_ARG_ENABLE(threads-override-locks,
+AC_HELP_STRING([--enable-threads-override-locks],[Threads can override each others locks]),,enable_threads_override_locks=no)
+AC_MSG_CHECKING([whether threads can override each others locks])
+if test "$enable_threads_override_locks" = "no"; then
+ THREADSOVERRIDELOCKS='-1'
+ AC_MSG_RESULT([no])
+else
+ THREADSOVERRIDELOCKS='1'
+ AC_MSG_RESULT([yes])
+fi
+AC_SUBST(THREADSOVERRIDELOCKS)
+
+##########
+# Do we want to support release
+#
+AC_ARG_ENABLE(releasemode,
+AC_HELP_STRING([--enable-releasemode],[Support libtool link to release mode]),,enable_releasemode=no)
+AC_MSG_CHECKING([whether to support shared library linked as release mode or not])
+if test "$enable_releasemode" = "no"; then
+ ALLOWRELEASE=""
+ AC_MSG_RESULT([no])
+else
+ ALLOWRELEASE="-release `cat VERSION`"
+ AC_MSG_RESULT([yes])
+fi
+AC_SUBST(ALLOWRELEASE)
+
+##########
+# Do we want temporary databases in memory
+#
+AC_ARG_ENABLE(tempstore,
+AC_HELP_STRING([--enable-tempstore],[Use an in-ram database for temporary tables (never,no,yes,always)]),,enable_tempstore=no)
+AC_MSG_CHECKING([whether to use an in-ram database for temporary tables])
+case "$enable_tempstore" in
+ never )
+ TEMP_STORE=0
+ AC_MSG_RESULT([never])
+ ;;
+ no )
+ TEMP_STORE=1
+ AC_MSG_RESULT([no])
+ ;;
+ always )
+ TEMP_STORE=3
+ AC_MSG_RESULT([always])
+ ;;
+ yes )
+ TEMP_STORE=3
+ AC_MSG_RESULT([always])
+ ;;
+ * )
+ TEMP_STORE=1
+ AC_MSG_RESULT([yes])
+ ;;
+esac
+
+AC_SUBST(TEMP_STORE)
+
+###########
+# Lots of things are different if we are compiling for Windows using
+# the CYGWIN environment. So check for that special case and handle
+# things accordingly.
+#
+AC_MSG_CHECKING([if executables have the .exe suffix])
+if test "$config_BUILD_EXEEXT" = ".exe"; then
+ CYGWIN=yes
+ AC_MSG_RESULT(yes)
+else
+ AC_MSG_RESULT(unknown)
+fi
+if test "$CYGWIN" != "yes"; then
+ AC_CYGWIN
+fi
+if test "$CYGWIN" = "yes"; then
+ BUILD_EXEEXT=.exe
+else
+ BUILD_EXEEXT=$EXEEXT
+fi
+if test "$cross" = "0"; then
+ TARGET_EXEEXT=$BUILD_EXEEXT
+else
+ TARGET_EXEEXT=$config_TARGET_EXEEXT
+fi
+if test "$TARGET_EXEEXT" = ".exe"; then
+ if test $OS2_SHELL ; then
+ OS_UNIX=0
+ OS_WIN=0
+ OS_OS2=1
+ TARGET_CFLAGS="$TARGET_CFLAGS -DOS_OS2=1"
+ if test "$ac_compiler_gnu" == "yes" ; then
+ TARGET_CFLAGS="$TARGET_CFLAGS -Zomf -Zexe -Zmap"
+ BUILD_CFLAGS="$BUILD_CFLAGS -Zomf -Zexe"
+ fi
+ else
+ OS_UNIX=0
+ OS_WIN=1
+ OS_OS2=0
+ tclsubdir=win
+ TARGET_CFLAGS="$TARGET_CFLAGS -DOS_WIN=1"
+ fi
+else
+ OS_UNIX=1
+ OS_WIN=0
+ OS_OS2=0
+ tclsubdir=unix
+ TARGET_CFLAGS="$TARGET_CFLAGS -DOS_UNIX=1"
+fi
+
+AC_SUBST(BUILD_EXEEXT)
+AC_SUBST(OS_UNIX)
+AC_SUBST(OS_WIN)
+AC_SUBST(OS_OS2)
+AC_SUBST(TARGET_EXEEXT)
+
+##########
+# Extract generic linker options from the environment.
+#
+if test "$config_TARGET_LIBS" != ""; then
+ TARGET_LIBS=$config_TARGET_LIBS
+else
+ TARGET_LIBS=""
+fi
+
+##########
+# Figure out all the parameters needed to compile against Tcl.
+#
+# This code is derived from the SC_PATH_TCLCONFIG and SC_LOAD_TCLCONFIG
+# macros in the tcl.m4 file of the standard TCL distribution.
+# Those macros could not be used directly since we have to make some
+# minor changes to accommodate systems that do not have TCL installed.
+#
+AC_ARG_ENABLE(tcl, AC_HELP_STRING([--disable-tcl],[do not build TCL extension]),
+ [use_tcl=$enableval],[use_tcl=yes])
+if test "${use_tcl}" = "yes" ; then
+ AC_ARG_WITH(tcl, AC_HELP_STRING([--with-tcl=DIR],[directory containing tcl configuration (tclConfig.sh)]), with_tclconfig=${withval})
+ AC_MSG_CHECKING([for Tcl configuration])
+ AC_CACHE_VAL(ac_cv_c_tclconfig,[
+ # First check to see if --with-tcl was specified.
+ if test x"${with_tclconfig}" != x ; then
+ if test -f "${with_tclconfig}/tclConfig.sh" ; then
+ ac_cv_c_tclconfig=`(cd ${with_tclconfig}; pwd)`
+ else
+ AC_MSG_ERROR([${with_tclconfig} directory doesn't contain tclConfig.sh])
+ fi
+ fi
+ # then check for a private Tcl installation
+ if test x"${ac_cv_c_tclconfig}" = x ; then
+ for i in \
+ ../tcl \
+ `ls -dr ../tcl[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
+ `ls -dr ../tcl[[8-9]].[[0-9]] 2>/dev/null` \
+ `ls -dr ../tcl[[8-9]].[[0-9]]* 2>/dev/null` \
+ ../../tcl \
+ `ls -dr ../../tcl[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
+ `ls -dr ../../tcl[[8-9]].[[0-9]] 2>/dev/null` \
+ `ls -dr ../../tcl[[8-9]].[[0-9]]* 2>/dev/null` \
+ ../../../tcl \
+ `ls -dr ../../../tcl[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
+ `ls -dr ../../../tcl[[8-9]].[[0-9]] 2>/dev/null` \
+ `ls -dr ../../../tcl[[8-9]].[[0-9]]* 2>/dev/null`
+ do
+ if test -f "$i/unix/tclConfig.sh" ; then
+ ac_cv_c_tclconfig=`(cd $i/unix; pwd)`
+ break
+ fi
+ done
+ fi
+
+ # check in a few common install locations
+ if test x"${ac_cv_c_tclconfig}" = x ; then
+ for i in \
+ `ls -d ${libdir} 2>/dev/null` \
+ `ls -d /usr/local/lib 2>/dev/null` \
+ `ls -d /usr/contrib/lib 2>/dev/null` \
+ `ls -d /usr/lib 2>/dev/null`
+ do
+ if test -f "$i/tclConfig.sh" ; then
+ ac_cv_c_tclconfig=`(cd $i; pwd)`
+ break
+ fi
+ done
+ fi
+
+ # check in a few other private locations
+ if test x"${ac_cv_c_tclconfig}" = x ; then
+ for i in \
+ ${srcdir}/../tcl \
+ `ls -dr ${srcdir}/../tcl[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
+ `ls -dr ${srcdir}/../tcl[[8-9]].[[0-9]] 2>/dev/null` \
+ `ls -dr ${srcdir}/../tcl[[8-9]].[[0-9]]* 2>/dev/null`
+ do
+ if test -f "$i/unix/tclConfig.sh" ; then
+ ac_cv_c_tclconfig=`(cd $i/unix; pwd)`
+ break
+ fi
+ done
+ fi
+ ])
+
+ if test x"${ac_cv_c_tclconfig}" = x ; then
+ use_tcl=no
+ AC_MSG_WARN(Can't find Tcl configuration definitions)
+ AC_MSG_WARN(*** Without Tcl the regression tests cannot be executed ***)
+ AC_MSG_WARN(*** Consider using --with-tcl=... to define location of Tcl ***)
+ else
+ TCL_BIN_DIR=${ac_cv_c_tclconfig}
+ AC_MSG_RESULT(found $TCL_BIN_DIR/tclConfig.sh)
+
+ AC_MSG_CHECKING([for existence of $TCL_BIN_DIR/tclConfig.sh])
+ if test -f "$TCL_BIN_DIR/tclConfig.sh" ; then
+ AC_MSG_RESULT([loading])
+ . $TCL_BIN_DIR/tclConfig.sh
+ else
+ AC_MSG_RESULT([file not found])
+ fi
+
+ #
+ # If the TCL_BIN_DIR is the build directory (not the install directory),
+ # then set the common variable name to the value of the build variables.
+ # For example, the variable TCL_LIB_SPEC will be set to the value
+ # of TCL_BUILD_LIB_SPEC. An extension should make use of TCL_LIB_SPEC
+ # instead of TCL_BUILD_LIB_SPEC since it will work with both an
+ # installed and uninstalled version of Tcl.
+ #
+
+ if test -f $TCL_BIN_DIR/Makefile ; then
+ TCL_LIB_SPEC=${TCL_BUILD_LIB_SPEC}
+ TCL_STUB_LIB_SPEC=${TCL_BUILD_STUB_LIB_SPEC}
+ TCL_STUB_LIB_PATH=${TCL_BUILD_STUB_LIB_PATH}
+ fi
+
+ #
+ # eval is required to do the TCL_DBGX substitution
+ #
+
+ eval "TCL_LIB_FILE=\"${TCL_LIB_FILE}\""
+ eval "TCL_LIB_FLAG=\"${TCL_LIB_FLAG}\""
+ eval "TCL_LIB_SPEC=\"${TCL_LIB_SPEC}\""
+
+ eval "TCL_STUB_LIB_FILE=\"${TCL_STUB_LIB_FILE}\""
+ eval "TCL_STUB_LIB_FLAG=\"${TCL_STUB_LIB_FLAG}\""
+ eval "TCL_STUB_LIB_SPEC=\"${TCL_STUB_LIB_SPEC}\""
+
+ AC_SUBST(TCL_VERSION)
+ AC_SUBST(TCL_BIN_DIR)
+ AC_SUBST(TCL_SRC_DIR)
+ AC_SUBST(TCL_LIBS)
+ AC_SUBST(TCL_INCLUDE_SPEC)
+
+ AC_SUBST(TCL_LIB_FILE)
+ AC_SUBST(TCL_LIB_FLAG)
+ AC_SUBST(TCL_LIB_SPEC)
+
+ AC_SUBST(TCL_STUB_LIB_FILE)
+ AC_SUBST(TCL_STUB_LIB_FLAG)
+ AC_SUBST(TCL_STUB_LIB_SPEC)
+ fi
+fi
+if test "${use_tcl}" = "no" ; then
+ HAVE_TCL=""
+else
+ HAVE_TCL=1
+fi
+AC_SUBST(HAVE_TCL)
+
+##########
+# Figure out what C libraries are required to compile programs
+# that use "readline()" library.
+#
+if test "$config_TARGET_READLINE_LIBS" != ""; then
+ TARGET_READLINE_LIBS="$config_TARGET_READLINE_LIBS"
+else
+ CC=$TARGET_CC
+ LIBS=""
+ AC_SEARCH_LIBS(tgetent, [readline ncurses curses termcap])
+ AC_CHECK_LIB([readline], [readline])
+ TARGET_READLINE_LIBS="$LIBS"
+fi
+AC_SUBST(TARGET_READLINE_LIBS)
+
+##########
+# Figure out what C libraries are required to compile programs
+# that use "fdatasync()" function.
+#
+CC=$TARGET_CC
+LIBS=$TARGET_LIBS
+AC_SEARCH_LIBS(fdatasync, [rt])
+TARGET_LIBS="$LIBS"
+
+##########
+# Figure out where to get the READLINE header files.
+#
+AC_MSG_CHECKING([readline header files])
+found=no
+if test "$config_TARGET_READLINE_INC" != ""; then
+ TARGET_READLINE_INC=$config_TARGET_READLINE_INC
+ found=yes
+fi
+if test "$found" = "yes"; then
+ AC_MSG_RESULT($TARGET_READLINE_INC)
+else
+ AC_MSG_RESULT(not specified: still searching...)
+ AC_CHECK_HEADER(readline.h, [found=yes])
+fi
+if test "$found" = "no"; then
+ for dir in /usr /usr/local /usr/local/readline /usr/contrib /mingw; do
+ AC_CHECK_FILE($dir/include/readline.h, found=yes)
+ if test "$found" = "yes"; then
+ TARGET_READLINE_INC="-I$dir/include"
+ break
+ fi
+ AC_CHECK_FILE($dir/include/readline/readline.h, found=yes)
+ if test "$found" = "yes"; then
+ TARGET_READLINE_INC="-I$dir/include/readline"
+ break
+ fi
+ done
+fi
+if test "$found" = "yes"; then
+ if test "$TARGET_READLINE_LIBS" = ""; then
+ TARGET_HAVE_READLINE=0
+ else
+ TARGET_HAVE_READLINE=1
+ fi
+else
+ TARGET_HAVE_READLINE=0
+fi
+AC_SUBST(TARGET_READLINE_INC)
+AC_SUBST(TARGET_HAVE_READLINE)
+
+#########
+# check for debug enabled
+AC_ARG_ENABLE(debug, AC_HELP_STRING([--enable-debug],[enable debugging & verbose explain]),
+ [use_debug=$enableval],[use_debug=no])
+if test "${use_debug}" = "yes" ; then
+ TARGET_DEBUG="-DSQLITE_DEBUG=1"
+else
+ TARGET_DEBUG="-DNDEBUG"
+fi
+AC_SUBST(TARGET_DEBUG)
+
+#########
+# Figure out whether or not we have a "usleep()" function.
+#
+AC_CHECK_FUNC(usleep, [TARGET_CFLAGS="$TARGET_CFLAGS -DHAVE_USLEEP=1"])
+
+#--------------------------------------------------------------------
+# Redefine fdatasync as fsync on systems that lack fdatasync
+#--------------------------------------------------------------------
+
+AC_CHECK_FUNC(fdatasync, [TARGET_CFLAGS="$TARGET_CFLAGS -DHAVE_FDATASYNC=1"])
+
+#########
+# Put out accumulated miscellaneous LIBRARIES
+#
+AC_SUBST(TARGET_LIBS)
+
+#########
+# Generate the output files.
+#
+AC_OUTPUT([
+Makefile
+sqlite3.pc
+])
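The --with-hints=FILE mechanism described at the top of this configure.ac reads a small shell fragment of config_* assignments. A sketch of such a file follows; the cross-compiler name and path are purely hypothetical, and the file should be given as an absolute pathname as the comments request:

    # /path/to/arm.hints  (illustrative cross-compile hints)
    config_TARGET_CC=arm-linux-gcc
    config_TARGET_CFLAGS="-O2"
    config_TARGET_EXEEXT=""
    config_TARGET_READLINE_LIBS=""

    ./configure --with-hints=/path/to/arm.hints
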
Added: freeswitch/trunk/libs/sqlite/contrib/sqlitecon.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/contrib/sqlitecon.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,679 @@
+# A Tk console widget for SQLite. Invoke sqlitecon::create with a window name,
+# a prompt string, a title to set a new top-level window, and the SQLite
+# database handle. For example:
+#
+# sqlitecon::create .sqlcon {sql:- } {SQL Console} db
+#
+# A toplevel window is created that allows you to type in SQL commands to
+# be processed on the spot.
+#
+# A limited set of dot-commands are supported:
+#
+# .table
+# .schema ?TABLE?
+# .mode list|column|multicolumn|line
+# .exit
+#
+# In addition, a new SQL function named "edit()" is created. This function
+# takes a single text argument and returns a text result. Whenever the
+# function is called, it pops up a new toplevel window containing a
+# text editor screen initialized to the argument. When the "OK" button
+# is pressed, whatever revised text is in the text editor is returned as
+# the result of the edit() function. This allows text fields of SQL tables
+# to be edited quickly and easily as follows:
+#
+# UPDATE table1 SET dscr = edit(dscr) WHERE rowid=15;
+#
+
+
+# Create a namespace to work in
+#
+namespace eval ::sqlitecon {
+ # do nothing
+}
+
+# Create a console widget named $w. The prompt string is $prompt.
+# The title at the top of the window is $title. The database connection
+# object is $db
+#
+proc sqlitecon::create {w prompt title db} {
+ upvar #0 $w.t v
+ if {[winfo exists $w]} {destroy $w}
+ if {[info exists v]} {unset v}
+ toplevel $w
+ wm title $w $title
+ wm iconname $w $title
+ frame $w.mb -bd 2 -relief raised
+ pack $w.mb -side top -fill x
+ menubutton $w.mb.file -text File -menu $w.mb.file.m
+ menubutton $w.mb.edit -text Edit -menu $w.mb.edit.m
+ pack $w.mb.file $w.mb.edit -side left -padx 8 -pady 1
+ set m [menu $w.mb.file.m -tearoff 0]
+ $m add command -label {Close} -command "destroy $w"
+ sqlitecon::create_child $w $prompt $w.mb.edit.m
+ set v(db) $db
+ $db function edit ::sqlitecon::_edit
+}
+
+# This routine creates a console as a child window within a larger
+# window. It also creates an edit menu named "$editmenu" if $editmenu!="".
+# The calling function is responsible for posting the edit menu.
+#
+proc sqlitecon::create_child {w prompt editmenu} {
+ upvar #0 $w.t v
+ if {$editmenu!=""} {
+ set m [menu $editmenu -tearoff 0]
+ $m add command -label Cut -command "sqlitecon::Cut $w.t"
+ $m add command -label Copy -command "sqlitecon::Copy $w.t"
+ $m add command -label Paste -command "sqlitecon::Paste $w.t"
+ $m add command -label {Clear Screen} -command "sqlitecon::Clear $w.t"
+ $m add separator
+ $m add command -label {Save As...} -command "sqlitecon::SaveFile $w.t"
+ catch {$editmenu config -postcommand "sqlitecon::EnableEditMenu $w"}
+ }
+ scrollbar $w.sb -orient vertical -command "$w.t yview"
+ pack $w.sb -side right -fill y
+ text $w.t -font fixed -yscrollcommand "$w.sb set"
+ pack $w.t -side right -fill both -expand 1
+ bindtags $w.t Sqlitecon
+ set v(editmenu) $editmenu
+ set v(history) 0
+ set v(historycnt) 0
+ set v(current) -1
+ set v(prompt) $prompt
+ set v(prior) {}
+ set v(plength) [string length $v(prompt)]
+ set v(x) 0
+ set v(y) 0
+ set v(mode) column
+ set v(header) on
+ $w.t mark set insert end
+ $w.t tag config ok -foreground blue
+ $w.t tag config err -foreground red
+ $w.t insert end $v(prompt)
+ $w.t mark set out 1.0
+ after idle "focus $w.t"
+}
+
+bind Sqlitecon <1> {sqlitecon::Button1 %W %x %y}
+bind Sqlitecon <B1-Motion> {sqlitecon::B1Motion %W %x %y}
+bind Sqlitecon <B1-Leave> {sqlitecon::B1Leave %W %x %y}
+bind Sqlitecon <B1-Enter> {sqlitecon::cancelMotor %W}
+bind Sqlitecon <ButtonRelease-1> {sqlitecon::cancelMotor %W}
+bind Sqlitecon <KeyPress> {sqlitecon::Insert %W %A}
+bind Sqlitecon <Left> {sqlitecon::Left %W}
+bind Sqlitecon <Control-b> {sqlitecon::Left %W}
+bind Sqlitecon <Right> {sqlitecon::Right %W}
+bind Sqlitecon <Control-f> {sqlitecon::Right %W}
+bind Sqlitecon <BackSpace> {sqlitecon::Backspace %W}
+bind Sqlitecon <Control-h> {sqlitecon::Backspace %W}
+bind Sqlitecon <Delete> {sqlitecon::Delete %W}
+bind Sqlitecon <Control-d> {sqlitecon::Delete %W}
+bind Sqlitecon <Home> {sqlitecon::Home %W}
+bind Sqlitecon <Control-a> {sqlitecon::Home %W}
+bind Sqlitecon <End> {sqlitecon::End %W}
+bind Sqlitecon <Control-e> {sqlitecon::End %W}
+bind Sqlitecon <Return> {sqlitecon::Enter %W}
+bind Sqlitecon <KP_Enter> {sqlitecon::Enter %W}
+bind Sqlitecon <Up> {sqlitecon::Prior %W}
+bind Sqlitecon <Control-p> {sqlitecon::Prior %W}
+bind Sqlitecon <Down> {sqlitecon::Next %W}
+bind Sqlitecon <Control-n> {sqlitecon::Next %W}
+bind Sqlitecon <Control-k> {sqlitecon::EraseEOL %W}
+bind Sqlitecon <<Cut>> {sqlitecon::Cut %W}
+bind Sqlitecon <<Copy>> {sqlitecon::Copy %W}
+bind Sqlitecon <<Paste>> {sqlitecon::Paste %W}
+bind Sqlitecon <<Clear>> {sqlitecon::Clear %W}
+
+# Insert a single character at the insertion cursor
+#
+proc sqlitecon::Insert {w a} {
+ $w insert insert $a
+ $w yview insert
+}
+
+# Move the cursor one character to the left
+#
+proc sqlitecon::Left {w} {
+ upvar #0 $w v
+ scan [$w index insert] %d.%d row col
+ if {$col>$v(plength)} {
+ $w mark set insert "insert -1c"
+ }
+}
+
+# Erase the character to the left of the cursor
+#
+proc sqlitecon::Backspace {w} {
+ upvar #0 $w v
+ scan [$w index insert] %d.%d row col
+ if {$col>$v(plength)} {
+ $w delete {insert -1c}
+ }
+}
+
+# Erase to the end of the line
+#
+proc sqlitecon::EraseEOL {w} {
+ upvar #0 $w v
+ scan [$w index insert] %d.%d row col
+ if {$col>=$v(plength)} {
+ $w delete insert {insert lineend}
+ }
+}
+
+# Move the cursor one character to the right
+#
+proc sqlitecon::Right {w} {
+ $w mark set insert "insert +1c"
+}
+
+# Erase the character to the right of the cursor
+#
+proc sqlitecon::Delete w {
+ $w delete insert
+}
+
+# Move the cursor to the beginning of the current line
+#
+proc sqlitecon::Home w {
+ upvar #0 $w v
+ scan [$w index insert] %d.%d row col
+ $w mark set insert $row.$v(plength)
+}
+
+# Move the cursor to the end of the current line
+#
+proc sqlitecon::End w {
+ $w mark set insert {insert lineend}
+}
+
+# Add a line to the history
+#
+proc sqlitecon::addHistory {w line} {
+ upvar #0 $w v
+ if {$v(historycnt)>0} {
+ set last [lindex $v(history) [expr $v(historycnt)-1]]
+ if {[string compare $last $line]} {
+ lappend v(history) $line
+ incr v(historycnt)
+ }
+ } else {
+ set v(history) [list $line]
+ set v(historycnt) 1
+ }
+ set v(current) $v(historycnt)
+}
+
+# Called when "Enter" is pressed. Do something with the line
+# of text that was entered.
+#
+proc sqlitecon::Enter w {
+ upvar #0 $w v
+ scan [$w index insert] %d.%d row col
+ set start $row.$v(plength)
+ set line [$w get $start "$start lineend"]
+ $w insert end \n
+ $w mark set out end
+ if {$v(prior)==""} {
+ set cmd $line
+ } else {
+ set cmd $v(prior)\n$line
+ }
+ if {[string index $cmd 0]=="." || [$v(db) complete $cmd]} {
+ regsub -all {\n} [string trim $cmd] { } cmd2
+ addHistory $w $cmd2
+ set rc [catch {DoCommand $w $cmd} res]
+ if {![winfo exists $w]} return
+ if {$rc} {
+ $w insert end $res\n err
+ } elseif {[string length $res]>0} {
+ $w insert end $res\n ok
+ }
+ set v(prior) {}
+ $w insert end $v(prompt)
+ } else {
+ set v(prior) $cmd
+ regsub -all {[^ ]} $v(prompt) . x
+ $w insert end $x
+ }
+ $w mark set insert end
+ $w mark set out {insert linestart}
+ $w yview insert
+}
+
+# Execute a single SQL command. Pay special attention to control
+# directives that begin with "."
+#
+# The return value is the text output from the command, properly
+# formatted.
+#
+proc sqlitecon::DoCommand {w cmd} {
+ upvar #0 $w v
+ set mode $v(mode)
+ set header $v(header)
+ if {[regexp {^(\.[a-z]+)} $cmd all word]} {
+ if {$word==".mode"} {
+ regexp {^.[a-z]+ +([a-z]+)} $cmd all v(mode)
+ return {}
+ } elseif {$word==".exit"} {
+ destroy [winfo toplevel $w]
+ return {}
+ } elseif {$word==".header"} {
+ regexp {^.[a-z]+ +([a-z]+)} $cmd all v(header)
+ return {}
+ } elseif {$word==".tables"} {
+ set mode multicolumn
+ set cmd {SELECT name FROM sqlite_master WHERE type='table'
+ UNION ALL
+ SELECT name FROM sqlite_temp_master WHERE type='table'}
+ $v(db) eval {PRAGMA database_list} {
+ if {$name!="temp" && $name!="main"} {
+ append cmd "UNION ALL SELECT name FROM $name.sqlite_master\
+ WHERE type='table'"
+ }
+ }
+ append cmd { ORDER BY 1}
+ } elseif {$word==".fullschema"} {
+ set pattern %
+ regexp {^.[a-z]+ +([^ ]+)} $cmd all pattern
+ set mode list
+ set header 0
+ set cmd "SELECT sql FROM sqlite_master WHERE tbl_name LIKE '$pattern'
+ AND sql NOT NULL UNION ALL SELECT sql FROM sqlite_temp_master
+ WHERE tbl_name LIKE '$pattern' AND sql NOT NULL"
+ $v(db) eval {PRAGMA database_list} {
+ if {$name!="temp" && $name!="main"} {
+ append cmd " UNION ALL SELECT sql FROM $name.sqlite_master\
+ WHERE tbl_name LIKE '$pattern' AND sql NOT NULL"
+ }
+ }
+ } elseif {$word==".schema"} {
+ set pattern %
+ regexp {^.[a-z]+ +([^ ]+)} $cmd all pattern
+ set mode list
+ set header 0
+ set cmd "SELECT sql FROM sqlite_master WHERE name LIKE '$pattern'
+ AND sql NOT NULL UNION ALL SELECT sql FROM sqlite_temp_master
+ WHERE name LIKE '$pattern' AND sql NOT NULL"
+ $v(db) eval {PRAGMA database_list} {
+ if {$name!="temp" && $name!="main"} {
+ append cmd " UNION ALL SELECT sql FROM $name.sqlite_master\
+ WHERE name LIKE '$pattern' AND sql NOT NULL"
+ }
+ }
+ } else {
+ return \
+ ".exit\n.mode line|list|column\n.schema ?TABLENAME?\n.tables"
+ }
+ }
+ set res {}
+ if {$mode=="list"} {
+ $v(db) eval $cmd x {
+ set sep {}
+ foreach col $x(*) {
+ append res $sep$x($col)
+ set sep |
+ }
+ append res \n
+ }
+ if {[info exists x(*)] && $header} {
+ set sep {}
+ set hdr {}
+ foreach col $x(*) {
+ append hdr $sep$col
+ set sep |
+ }
+ set res $hdr\n$res
+ }
+ } elseif {[string range $mode 0 2]=="col"} {
+ set y {}
+ $v(db) eval $cmd x {
+ foreach col $x(*) {
+ if {![info exists cw($col)] || $cw($col)<[string length $x($col)]} {
+ set cw($col) [string length $x($col)]
+ }
+ lappend y $x($col)
+ }
+ }
+ if {[info exists x(*)] && $header} {
+ set hdr {}
+ set ln {}
+ set dash ---------------------------------------------------------------
+ append dash ------------------------------------------------------------
+ foreach col $x(*) {
+ if {![info exists cw($col)] || $cw($col)<[string length $col]} {
+ set cw($col) [string length $col]
+ }
+ lappend hdr $col
+ lappend ln [string range $dash 1 $cw($col)]
+ }
+ set y [concat $hdr $ln $y]
+ }
+ if {[info exists x(*)]} {
+ set format {}
+ set arglist {}
+ set arglist2 {}
+ set i 0
+ foreach col $x(*) {
+ lappend arglist x$i
+ append arglist2 " \$x$i"
+ incr i
+ append format " %-$cw($col)s"
+ }
+ set format [string trimleft $format]\n
+ if {[llength $arglist]>0} {
+ foreach $arglist $y "append res \[format [list $format] $arglist2\]"
+ }
+ }
+ } elseif {$mode=="multicolumn"} {
+ set y [$v(db) eval $cmd]
+ set max 0
+ foreach e $y {
+ if {$max<[string length $e]} {set max [string length $e]}
+ }
+ set ncol [expr {int(80/($max+2))}]
+ if {$ncol<1} {set ncol 1}
+ set nelem [llength $y]
+ set nrow [expr {($nelem+$ncol-1)/$ncol}]
+ set format "%-${max}s"
+ for {set i 0} {$i<$nrow} {incr i} {
+ set j $i
+ while 1 {
+ append res [format $format [lindex $y $j]]
+ incr j $nrow
+ if {$j>=$nelem} break
+ append res { }
+ }
+ append res \n
+ }
+ } else {
+ $v(db) eval $cmd x {
+ foreach col $x(*) {append res "$col = $x($col)\n"}
+ append res \n
+ }
+ }
+ return [string trimright $res]
+}
+
+# Change the line to the previous line
+#
+proc sqlitecon::Prior w {
+ upvar #0 $w v
+ if {$v(current)<=0} return
+ incr v(current) -1
+ set line [lindex $v(history) $v(current)]
+ sqlitecon::SetLine $w $line
+}
+
+# Change the line to the next line
+#
+proc sqlitecon::Next w {
+ upvar #0 $w v
+ if {$v(current)>=$v(historycnt)} return
+ incr v(current) 1
+ set line [lindex $v(history) $v(current)]
+ sqlitecon::SetLine $w $line
+}
+
+# Change the contents of the entry line
+#
+proc sqlitecon::SetLine {w line} {
+ upvar #0 $w v
+ scan [$w index insert] %d.%d row col
+ set start $row.$v(plength)
+ $w delete $start end
+ $w insert end $line
+ $w mark set insert end
+ $w yview insert
+}
+
+# Called when the mouse button is pressed at position $x,$y on
+# the console widget.
+#
+proc sqlitecon::Button1 {w x y} {
+ global tkPriv
+ upvar #0 $w v
+ set v(mouseMoved) 0
+ set v(pressX) $x
+ set p [sqlitecon::nearestBoundry $w $x $y]
+ scan [$w index insert] %d.%d ix iy
+ scan $p %d.%d px py
+ if {$px==$ix} {
+ $w mark set insert $p
+ }
+ $w mark set anchor $p
+ focus $w
+}
+
+# Find the boundary between characters that is nearest
+# to $x,$y
+#
+proc sqlitecon::nearestBoundry {w x y} {
+ set p [$w index @$x,$y]
+ set bb [$w bbox $p]
+ if {![string compare $bb ""]} {return $p}
+ if {($x-[lindex $bb 0])<([lindex $bb 2]/2)} {return $p}
+ $w index "$p + 1 char"
+}
+
+# This routine extends the selection to the point specified by $x,$y
+#
+proc sqlitecon::SelectTo {w x y} {
+ upvar #0 $w v
+ set cur [sqlitecon::nearestBoundry $w $x $y]
+ if {[catch {$w index anchor}]} {
+ $w mark set anchor $cur
+ }
+ set anchor [$w index anchor]
+ if {[$w compare $cur != $anchor] || (abs($v(pressX) - $x) >= 3)} {
+ if {$v(mouseMoved)==0} {
+ $w tag remove sel 0.0 end
+ }
+ set v(mouseMoved) 1
+ }
+ if {[$w compare $cur < anchor]} {
+ set first $cur
+ set last anchor
+ } else {
+ set first anchor
+ set last $cur
+ }
+ if {$v(mouseMoved)} {
+ $w tag remove sel 0.0 $first
+ $w tag add sel $first $last
+ $w tag remove sel $last end
+ update idletasks
+ }
+}
+
+# Called whenever the mouse moves while button-1 is held down.
+#
+proc sqlitecon::B1Motion {w x y} {
+ upvar #0 $w v
+ set v(y) $y
+ set v(x) $x
+ sqlitecon::SelectTo $w $x $y
+}
+
+# Called whenever the mouse leaves the boundaries of the widget
+# while button 1 is held down.
+#
+proc sqlitecon::B1Leave {w x y} {
+ upvar #0 $w v
+ set v(y) $y
+ set v(x) $x
+ sqlitecon::motor $w
+}
+
+# This routine is called to automatically scroll the window when
+# the mouse drags offscreen.
+#
+proc sqlitecon::motor w {
+ upvar #0 $w v
+ if {![winfo exists $w]} return
+ if {$v(y)>=[winfo height $w]} {
+ $w yview scroll 1 units
+ } elseif {$v(y)<0} {
+ $w yview scroll -1 units
+ } else {
+ return
+ }
+ sqlitecon::SelectTo $w $v(x) $v(y)
+ set v(timer) [after 50 sqlitecon::motor $w]
+}
+
+# This routine cancels the scrolling motor if it is active
+#
+proc sqlitecon::cancelMotor w {
+ upvar #0 $w v
+ catch {after cancel $v(timer)}
+ catch {unset v(timer)}
+}
+
+# Do a Copy operation on the stuff currently selected.
+#
+proc sqlitecon::Copy w {
+ if {![catch {set text [$w get sel.first sel.last]}]} {
+ clipboard clear -displayof $w
+ clipboard append -displayof $w $text
+ }
+}
+
+# Return 1 if the selection exists and is contained
+# entirely on the input line. Return 2 if the selection
+# exists but is not entirely on the input line. Return 0
+# if the selection does not exist.
+#
+proc sqlitecon::canCut w {
+ set r [catch {
+ scan [$w index sel.first] %d.%d s1x s1y
+ scan [$w index sel.last] %d.%d s2x s2y
+ scan [$w index insert] %d.%d ix iy
+ }]
+ if {$r==1} {return 0}
+ if {$s1x==$ix && $s2x==$ix} {return 1}
+ return 2
+}
+
+# Do a Cut operation if possible. Cuts are only allowed
+# if the current selection is entirely contained on the
+# current input line.
+#
+proc sqlitecon::Cut w {
+ if {[sqlitecon::canCut $w]==1} {
+ sqlitecon::Copy $w
+ $w delete sel.first sel.last
+ }
+}
+
+# Do a paste operation.
+#
+proc sqlitecon::Paste w {
+ if {[sqlitecon::canCut $w]==1} {
+ $w delete sel.first sel.last
+ }
+ if {[catch {selection get -displayof $w -selection CLIPBOARD} topaste]
+ && [catch {selection get -displayof $w -selection PRIMARY} topaste]} {
+ return
+ }
+ if {[info exists ::$w]} {
+ set prior 0
+ foreach line [split $topaste \n] {
+ if {$prior} {
+ sqlitecon::Enter $w
+ update
+ }
+ set prior 1
+ $w insert insert $line
+ }
+ } else {
+ $w insert insert $topaste
+ }
+}
+
+# Enable or disable entries in the Edit menu
+#
+proc sqlitecon::EnableEditMenu w {
+ upvar #0 $w.t v
+ set m $v(editmenu)
+ if {$m=="" || ![winfo exists $m]} return
+ switch [sqlitecon::canCut $w.t] {
+ 0 {
+ $m entryconf Copy -state disabled
+ $m entryconf Cut -state disabled
+ }
+ 1 {
+ $m entryconf Copy -state normal
+ $m entryconf Cut -state normal
+ }
+ 2 {
+ $m entryconf Copy -state normal
+ $m entryconf Cut -state disabled
+ }
+ }
+}
+
+# Prompt the user for the name of a writable file. Then write the
+# entire contents of the console screen to that file.
+#
+proc sqlitecon::SaveFile w {
+ set types {
+ {{Text Files} {.txt}}
+ {{All Files} *}
+ }
+ set f [tk_getSaveFile -filetypes $types -title "Write Screen To..."]
+ if {$f!=""} {
+ if {[catch {open $f w} fd]} {
+ tk_messageBox -type ok -icon error -message $fd
+ } else {
+ puts $fd [string trimright [$w get 1.0 end] \n]
+ close $fd
+ }
+ }
+}
+
+# Erase everything from the console above the insertion line.
+#
+proc sqlitecon::Clear w {
+ $w delete 1.0 {insert linestart}
+}
+
+# An in-line editor for SQL
+#
+proc sqlitecon::_edit {origtxt {title {}}} {
+ for {set i 0} {[winfo exists .ed$i]} {incr i} continue
+ set w .ed$i
+ toplevel $w
+ wm protocol $w WM_DELETE_WINDOW "$w.b.can invoke"
+ wm title $w {Inline SQL Editor}
+ frame $w.b
+ pack $w.b -side bottom -fill x
+ button $w.b.can -text Cancel -width 6 -command [list set ::$w 0]
+ button $w.b.ok -text OK -width 6 -command [list set ::$w 1]
+ button $w.b.cut -text Cut -width 6 -command [list ::sqlitecon::Cut $w.t]
+ button $w.b.copy -text Copy -width 6 -command [list ::sqlitecon::Copy $w.t]
+ button $w.b.paste -text Paste -width 6 -command [list ::sqlitecon::Paste $w.t]
+ set ::$w {}
+ pack $w.b.cut $w.b.copy $w.b.paste $w.b.can $w.b.ok\
+ -side left -padx 5 -pady 5 -expand 1
+ if {$title!=""} {
+ label $w.title -text $title
+ pack $w.title -side top -padx 5 -pady 5
+ }
+ text $w.t -bg white -fg black -yscrollcommand [list $w.sb set]
+ pack $w.t -side left -fill both -expand 1
+ scrollbar $w.sb -orient vertical -command [list $w.t yview]
+ pack $w.sb -side left -fill y
+ $w.t insert end $origtxt
+
+ vwait ::$w
+
+ if {[set ::$w]} {
+ set txt [string trimright [$w.t get 1.0 end]]
+ } else {
+ set txt $origtxt
+ }
+ destroy $w
+ return $txt
+}
Added: freeswitch/trunk/libs/sqlite/doc/lemon.html
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/doc/lemon.html Tue Dec 19 15:11:50 2006
@@ -0,0 +1,892 @@
+<html>
+<head>
+<title>The Lemon Parser Generator</title>
+</head>
+<body bgcolor=white>
+<h1 align=center>The Lemon Parser Generator</h1>
+
+<p>Lemon is an LALR(1) parser generator for C or C++.
+It does the same job as ``bison'' and ``yacc''.
+But lemon is not another bison or yacc clone. It
+uses a different grammar syntax which is designed to
+reduce the number of coding errors. Lemon also uses a more
+sophisticated parsing engine that is faster than yacc and
+bison and which is both reentrant and thread-safe.
+Furthermore, Lemon implements features that can be used
+to eliminate resource leaks, making it suitable for use
+in long-running programs such as graphical user interfaces
+or embedded controllers.</p>
+
+<p>This document is an introduction to the Lemon
+parser generator.</p>
+
+<h2>Theory of Operation</h2>
+
+<p>The main goal of Lemon is to translate a context free grammar (CFG)
+for a particular language into C code that implements a parser for
+that language.
+The program has two inputs:
+<ul>
+<li>The grammar specification.
+<li>A parser template file.
+</ul>
+Typically, only the grammar specification is supplied by the programmer.
+Lemon comes with a default parser template which works fine for most
+applications. But the user is free to substitute a different parser
+template if desired.</p>
+
+<p>Depending on command-line options, Lemon will generate between
+one and three output files.
+<ul>
+<li>C code to implement the parser.
+<li>A header file defining an integer ID for each terminal symbol.
+<li>An information file that describes the states of the generated parser
+ automaton.
+</ul>
+By default, all three of these output files are generated.
+The header file is suppressed if the ``-m'' command-line option is
+used and the report file is omitted when ``-q'' is selected.</p>
+
+<p>The grammar specification file uses a ``.y'' suffix, by convention.
+In the examples used in this document, we'll assume the name of the
+grammar file is ``gram.y''. A typical use of Lemon would be the
+following command:
+<pre>
+ lemon gram.y
+</pre>
+This command will generate three output files named ``gram.c'',
+``gram.h'' and ``gram.out''.
+The first is C code to implement the parser. The second
+is the header file that defines numerical values for all
+terminal symbols, and the last is the report that explains
+the states used by the parser automaton.</p>
+
+<h3>Command Line Options</h3>
+
+<p>The behavior of Lemon can be modified using command-line options.
+You can obtain a list of the available command-line options together
+with a brief explanation of what each does by typing
+<pre>
+ lemon -?
+</pre>
+As of this writing, the following command-line options are supported:
+<ul>
+<li><tt>-b</tt>
+<li><tt>-c</tt>
+<li><tt>-g</tt>
+<li><tt>-m</tt>
+<li><tt>-q</tt>
+<li><tt>-s</tt>
+<li><tt>-x</tt>
+</ul>
+The ``-b'' option reduces the amount of text in the report file by
+printing only the basis of each parser state, rather than the full
+configuration.
+The ``-c'' option suppresses action table compression. Using -c
+will make the parser a little larger and slower but it will detect
+syntax errors sooner.
+The ``-g'' option causes no output files to be generated at all.
+Instead, the input grammar file is printed on standard output but
+with all comments, actions and other extraneous text deleted. This
+is a useful way to get a quick summary of a grammar.
+The ``-m'' option causes the output C source file to be compatible
+with the ``makeheaders'' program.
+Makeheaders is a program that automatically generates header files
+from C source code. When the ``-m'' option is used, the header
+file is not output since the makeheaders program will take care
+of generating all header files automatically.
+The ``-q'' option suppresses the report file.
+Using ``-s'' causes a brief summary of parser statistics to be
+printed, like this:
+<pre>
+ Parser statistics: 74 terminals, 70 nonterminals, 179 rules
+ 340 states, 2026 parser table entries, 0 conflicts
+</pre>
+Finally, the ``-x'' option causes Lemon to print its version number
+and then stop without attempting to read the grammar or generate a parser.</p>
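+
+<p>For illustration only, these options can be combined; a command such as
+<pre>
+   lemon -q -m gram.y
+</pre>
+would generate the parser code for ``gram.y'' while suppressing the
+report file (because of ``-q'') and omitting the header file in favor
+of the makeheaders program (because of ``-m'').</p>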
+
+<h3>The Parser Interface</h3>
+
+<p>Lemon doesn't generate a complete, working program. It only generates
+a few subroutines that implement a parser. This section describes
+the interface to those subroutines. It is up to the programmer to
+call these subroutines in an appropriate way in order to produce a
+complete system.</p>
+
+<p>Before a program begins using a Lemon-generated parser, the program
+must first create the parser.
+A new parser is created as follows:
+<pre>
+ void *pParser = ParseAlloc( malloc );
+</pre>
+The ParseAlloc() routine allocates and initializes a new parser and
+returns a pointer to it.
+The actual data structure used to represent a parser is opaque --
+its internal structure is not visible or usable by the calling routine.
+For this reason, the ParseAlloc() routine returns a pointer to void
+rather than a pointer to some particular structure.
+The sole argument to the ParseAlloc() routine is a pointer to the
+subroutine used to allocate memory. Typically this means ``malloc()''.</p>
+
+<p>After a program is finished using a parser, it can reclaim all
+memory allocated by that parser by calling
+<pre>
+ ParseFree(pParser, free);
+</pre>
+The first argument is the same pointer returned by ParseAlloc(). The
+second argument is a pointer to the function used to release bulk
+memory back to the system.</p>
+
+<p>After a parser has been allocated using ParseAlloc(), the programmer
+must supply the parser with a sequence of tokens (terminal symbols) to
+be parsed. This is accomplished by calling the following function
+once for each token:
+<pre>
+ Parse(pParser, hTokenID, sTokenData, pArg);
+</pre>
+The first argument to the Parse() routine is the pointer returned by
+ParseAlloc().
+The second argument is a small positive integer that tells the parser the
+type of the next token in the data stream.
+There is one token type for each terminal symbol in the grammar.
+The gram.h file generated by Lemon contains #define statements that
+map symbolic terminal symbol names into appropriate integer values.
+(A value of 0 for the second argument is a special flag to the
+parser to indicate that the end of input has been reached.)
+The third argument is the value of the given token. By default,
+the type of the third argument is integer, but the grammar will
+usually redefine this type to be some kind of structure.
+Typically the second argument will be a broad category of tokens
+such as ``identifier'' or ``number'' and the third argument will
+be the name of the identifier or the value of the number.</p>
+
+<p>The Parse() function may have either three or four arguments,
+depending on the grammar. If the grammar specification file requests
+it, the Parse() function will have a fourth parameter that can be
+of any type chosen by the programmer. The parser doesn't do anything
+with this argument except to pass it through to action routines.
+This is a convenient mechanism for passing state information down
+to the action routines without having to use global variables.</p>
+
+<p>A typical use of a Lemon parser might look something like the
+following:
+<pre>
+ 01 ParseTree *ParseFile(const char *zFilename){
+ 02 Tokenizer *pTokenizer;
+ 03 void *pParser;
+ 04 Token sToken;
+ 05 int hTokenId;
+ 06 ParserState sState;
+ 07
+ 08 pTokenizer = TokenizerCreate(zFilename);
+ 09 pParser = ParseAlloc( malloc );
+ 10 InitParserState(&sState);
+ 11 while( GetNextToken(pTokenizer, &hTokenId, &sToken) ){
+ 12 Parse(pParser, hTokenId, sToken, &sState);
+ 13 }
+ 14 Parse(pParser, 0, sToken, &sState);
+ 15 ParseFree(pParser, free );
+ 16 TokenizerFree(pTokenizer);
+ 17 return sState.treeRoot;
+ 18 }
+</pre>
+This example shows a user-written routine that parses a file of
+text and returns a pointer to the parse tree.
+(We've omitted all error-handling from this example to keep it
+simple.)
+We assume the existence of some kind of tokenizer which is created
+using TokenizerCreate() on line 8 and deleted by TokenizerFree()
+on line 16. The GetNextToken() function on line 11 retrieves the
+next token from the input file and puts its type in the
+integer variable hTokenId. The sToken variable is assumed to be
+some kind of structure that contains details about each token,
+such as its complete text, what line it occurs on, etc. </p>
+
+<p>This example also assumes the existence of a structure of type
+ParserState that holds state information about a particular parse.
+An instance of such a structure is created on line 6 and initialized
+on line 10. A pointer to this structure is passed into the Parse()
+routine as the optional 4th argument.
+The action routine specified by the grammar for the parser can use
+the ParserState structure to hold whatever information is useful and
+appropriate. In the example, we note that the treeRoot field of
+the ParserState structure is left pointing to the root of the parse
+tree.</p>
+
+<p>The core of this example as it relates to Lemon is as follows:
+<pre>
+ ParseFile(){
+ pParser = ParseAlloc( malloc );
+ while( GetNextToken(pTokenizer,&hTokenId, &sToken) ){
+ Parse(pParser, hTokenId, sToken);
+ }
+ Parse(pParser, 0, sToken);
+ ParseFree(pParser, free );
+ }
+</pre>
+Basically, what a program has to do to use a Lemon-generated parser
+is first create the parser, then send it lots of tokens obtained by
+tokenizing an input source. When the end of input is reached, the
+Parse() routine should be called one last time with a token type
+of 0. This step is necessary to inform the parser that the end of
+input has been reached. Finally, we reclaim memory used by the
+parser by calling ParseFree().</p>
+
+<p>There is one other interface routine that should be mentioned
+before we move on.
+The ParseTrace() function can be used to generate debugging output
+from the parser. A prototype for this routine is as follows:
+<pre>
+ ParseTrace(FILE *stream, char *zPrefix);
+</pre>
+After this routine is called, a short (one-line) message is written
+to the designated output stream every time the parser changes states
+or calls an action routine. Each such message is prefaced using
+the text given by zPrefix. This debugging output can be turned off
+by calling ParseTrace() again with a first argument of NULL (0).</p>
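+
+<p>For example (the prefix string shown is arbitrary), tracing might be
+switched on and later off again like this:
+<pre>
+   ParseTrace(stderr, "parser: ");   /* start tracing */
+   /* ... calls to Parse() ... */
+   ParseTrace(0, 0);                 /* stop tracing */
+</pre></p>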
+
+<h3>Differences With YACC and BISON</h3>
+
+<p>Programmers who have previously used the yacc or bison parser
+generator will notice several important differences between yacc and/or
+bison and Lemon.
+<ul>
+<li>In yacc and bison, the parser calls the tokenizer. In Lemon,
+ the tokenizer calls the parser.
+<li>Lemon uses no global variables. Yacc and bison use global variables
+ to pass information between the tokenizer and parser.
+<li>Lemon allows multiple parsers to be running simultaneously. Yacc
+ and bison do not.
+</ul>
+These differences may cause some initial confusion for programmers
+with prior yacc and bison experience.
+But after years of experience using Lemon, I firmly
+believe that the Lemon way of doing things is better.</p>
+
+<h2>Input File Syntax</h2>
+
+<p>The main purpose of the grammar specification file for Lemon is
+to define the grammar for the parser. But the input file also
+specifies additional information Lemon requires to do its job.
+Most of the work in using Lemon is in writing an appropriate
+grammar file.</p>
+
+<p>The grammar file for lemon is, for the most part, free format.
+It does not have sections or divisions like yacc or bison. Any
+declaration can occur at any point in the file.
+Lemon ignores whitespace (except where it is needed to separate
+tokens) and it honors the same commenting conventions as C and C++.</p>
+
+<h3>Terminals and Nonterminals</h3>
+
+<p>A terminal symbol (token) is any string of alphanumeric
+and underscore characters
+that begins with an upper case letter.
+A terminal can contain lower case letters after the first character,
+but the usual convention is to make terminals all upper case.
+A nonterminal, on the other hand, is any string of alphanumeric
+and underscore characters that begins with a lower case letter.
+Again, the usual convention is to make nonterminals use all lower
+case letters.</p>
+
+<p>In Lemon, terminal and nonterminal symbols do not need to
+be declared or identified in a separate section of the grammar file.
+Lemon is able to generate a list of all terminals and nonterminals
+by examining the grammar rules, and it can always distinguish a
+terminal from a nonterminal by checking the case of the first
+character of the name.</p>
+
+<p>Yacc and bison allow terminal symbols to have either alphanumeric
+names or to be individual characters included in single quotes, like
+this: ')' or '$'. Lemon does not allow this alternative form for
+terminal symbols. With Lemon, all symbols, terminals and nonterminals,
+must have alphanumeric names.</p>
+
+<h3>Grammar Rules</h3>
+
+<p>The main component of a Lemon grammar file is a sequence of grammar
+rules.
+Each grammar rule consists of a nonterminal symbol followed by
+the special symbol ``::='' and then a list of terminals and/or nonterminals.
+The rule is terminated by a period.
+The list of terminals and nonterminals on the right-hand side of the
+rule can be empty.
+Rules can occur in any order, except that the left-hand side of the
+first rule is assumed to be the start symbol for the grammar (unless
+specified otherwise using the <tt>%start_symbol</tt> directive described below.)
+A typical sequence of grammar rules might look something like this:
+<pre>
+ expr ::= expr PLUS expr.
+ expr ::= expr TIMES expr.
+ expr ::= LPAREN expr RPAREN.
+ expr ::= VALUE.
+</pre>
+</p>
+
+<p>There is one non-terminal in this example, ``expr'', and five
+terminal symbols or tokens: ``PLUS'', ``TIMES'', ``LPAREN'',
+``RPAREN'' and ``VALUE''.</p>
+
+<p>Like yacc and bison, Lemon allows the grammar to specify a block
+of C code that will be executed whenever a grammar rule is reduced
+by the parser.
+In Lemon, this action is specified by putting the C code (contained
+within curly braces <tt>{...}</tt>) immediately after the
+period that closes the rule.
+For example:
+<pre>
+ expr ::= expr PLUS expr. { printf("Doing an addition...\n"); }
+</pre>
+</p>
+
+<p>In order to be useful, grammar actions must normally be linked to
+their associated grammar rules.
+In yacc and bison, this is accomplished by embedding a ``$$'' in the
+action to stand for the value of the left-hand side of the rule and
+symbols ``$1'', ``$2'', and so forth to stand for the value of
+the terminal or nonterminal at position 1, 2 and so forth on the
+right-hand side of the rule.
+This idea is very powerful, but it is also very error-prone. The
+single most common source of errors in a yacc or bison grammar is
+to miscount the number of symbols on the right-hand side of a grammar
+rule and say ``$7'' when you really mean ``$8''.</p>
+
+<p>Lemon avoids the need to count grammar symbols by assigning symbolic
+names to each symbol in a grammar rule and then using those symbolic
+names in the action.
+In yacc or bison, one would write this:
+<pre>
+ expr -> expr PLUS expr { $$ = $1 + $3; };
+</pre>
+But in Lemon, the same rule becomes the following:
+<pre>
+ expr(A) ::= expr(B) PLUS expr(C). { A = B+C; }
+</pre>
+In the Lemon rule, any symbol in parentheses after a grammar rule
+symbol becomes a place holder for that symbol in the grammar rule.
+This place holder can then be used in the associated C action to
+stand for the value of that symbol.</p>
+
+<p>The Lemon notation for linking a grammar rule with its reduce
+action is superior to yacc/bison on several counts.
+First, as mentioned above, the Lemon method avoids the need to
+count grammar symbols.
+Secondly, if a terminal or nonterminal in a Lemon grammar rule
+includes a linking symbol in parentheses but that linking symbol
+is not actually used in the reduce action, then an error message
+is generated.
+For example, the rule
+<pre>
+ expr(A) ::= expr(B) PLUS expr(C). { A = B; }
+</pre>
+will generate an error because the linking symbol ``C'' is used
+in the grammar rule but not in the reduce action.</p>
+
+<p>The Lemon notation for linking grammar rules to reduce actions
+also facilitates the use of destructors for reclaiming memory
+allocated by the values of terminals and nonterminals on the
+right-hand side of a rule.</p>
+
+<h3>Precedence Rules</h3>
+
+<p>Lemon resolves parsing ambiguities in exactly the same way as
+yacc and bison. A shift-reduce conflict is resolved in favor
+of the shift, and a reduce-reduce conflict is resolved by reducing
+whichever rule comes first in the grammar file.</p>
+
+<p>Just like in
+yacc and bison, Lemon allows a measure of control
+over the resolution of parsing conflicts using precedence rules.
+A precedence value can be assigned to any terminal symbol
+using the %left, %right or %nonassoc directives. Terminal symbols
+mentioned in earlier directives have a lower precedence than
+terminal symbols mentioned in later directives. For example:</p>
+
+<p><pre>
+ %left AND.
+ %left OR.
+ %nonassoc EQ NE GT GE LT LE.
+ %left PLUS MINUS.
+ %left TIMES DIVIDE MOD.
+ %right EXP NOT.
+</pre></p>
+
+<p>In the preceding sequence of directives, the AND operator is
+defined to have the lowest precedence. The OR operator is one
+precedence level higher. And so forth. Hence, the grammar would
+attempt to group the ambiguous expression
+<pre>
+ a AND b OR c
+</pre>
+like this
+<pre>
+ a AND (b OR c).
+</pre>
+The associativity (left, right or nonassoc) is used to determine
+the grouping when the precedence is the same. AND is left-associative
+in our example, so
+<pre>
+ a AND b AND c
+</pre>
+is parsed like this
+<pre>
+ (a AND b) AND c.
+</pre>
+The EXP operator is right-associative, though, so
+<pre>
+ a EXP b EXP c
+</pre>
+is parsed like this
+<pre>
+ a EXP (b EXP c).
+</pre>
+The nonassoc precedence is used for non-associative operators.
+So
+<pre>
+ a EQ b EQ c
+</pre>
+is an error.</p>
+
+<p>The precedence of non-terminals is transferred to rules as follows:
+The precedence of a grammar rule is equal to the precedence of the
+left-most terminal symbol in the rule for which a precedence is
+defined. This is normally what you want, but in those cases where
+you want the precedence of a grammar rule to be something different,
+you can specify an alternative precedence symbol by putting the
+symbol in square brackets after the period at the end of the rule and
+before any C-code. For example:</p>
+
+<p><pre>
+   expr ::= MINUS expr.  [NOT]
+</pre></p>
+
+<p>This rule has a precedence equal to that of the NOT symbol, not the
+MINUS symbol as would have been the case by default.</p>
+
+<p>With the knowledge of how precedence is assigned to terminal
+symbols and individual
+grammar rules, we can now explain precisely how parsing conflicts
+are resolved in Lemon. Shift-reduce conflicts are resolved
+as follows:
+<ul>
+<li> If either the token to be shifted or the rule to be reduced
+ lacks precedence information, then resolve in favor of the
+ shift, but report a parsing conflict.
+<li> If the precedence of the token to be shifted is greater than
+ the precedence of the rule to reduce, then resolve in favor
+ of the shift. No parsing conflict is reported.
+<li> If the precedence of the token to be shifted is less than the
+ precedence of the rule to reduce, then resolve in favor of the
+ reduce action. No parsing conflict is reported.
+<li> If the precedences are the same and the shift token is
+ right-associative, then resolve in favor of the shift.
+ No parsing conflict is reported.
+<li> If the precedences are the same and the shift token is
+ left-associative, then resolve in favor of the reduce.
+ No parsing conflict is reported.
+<li> Otherwise, resolve the conflict by doing the shift and
+ report the parsing conflict.
+</ul>
+Reduce-reduce conflicts are resolved this way:
+<ul>
+<li> If either reduce rule
+ lacks precedence information, then resolve in favor of the
+ rule that appears first in the grammar and report a parsing
+ conflict.
+<li> If both rules have precedence and the precedence is different
+ then resolve the dispute in favor of the rule with the highest
+ precedence and do not report a conflict.
+<li> Otherwise, resolve the conflict by reducing by the rule that
+ appears first in the grammar and report a parsing conflict.
+</ul>
+
+<h3>Special Directives</h3>
+
+<p>The input grammar to Lemon consists of grammar rules and special
+directives. We've described all the grammar rules, so now we'll
+talk about the special directives.</p>
+
+<p>Directives in lemon can occur in any order. You can put them before
+the grammar rules, or after the grammar rules, or in the midst of the
+grammar rules. It doesn't matter. The relative order of
+directives used to assign precedence to terminals is important, but
+other than that, the order of directives in Lemon is arbitrary.</p>
+
+<p>Lemon supports the following special directives:
+<ul>
+<li><tt>%code</tt>
+<li><tt>%default_destructor</tt>
+<li><tt>%default_type</tt>
+<li><tt>%destructor</tt>
+<li><tt>%extra_argument</tt>
+<li><tt>%include</tt>
+<li><tt>%left</tt>
+<li><tt>%name</tt>
+<li><tt>%nonassoc</tt>
+<li><tt>%parse_accept</tt>
+<li><tt>%parse_failure </tt>
+<li><tt>%right</tt>
+<li><tt>%stack_overflow</tt>
+<li><tt>%stack_size</tt>
+<li><tt>%start_symbol</tt>
+<li><tt>%syntax_error</tt>
+<li><tt>%token_destructor</tt>
+<li><tt>%token_prefix</tt>
+<li><tt>%token_type</tt>
+<li><tt>%type</tt>
+</ul>
+Each of these directives will be described separately in the
+following sections:</p>
+
+<h4>The <tt>%code</tt> directive</h4>
+
+<p>The %code directive is used to specify additional C/C++ code that
+is added to the end of the main output file. This is similar to
+the %include directive except that %include is inserted at the
+beginning of the main output file.</p>
+
+<p>%code is typically used to include some action routines or perhaps
+a tokenizer as part of the output file.</p>
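+
+<p>A minimal sketch (the helper function is purely illustrative, and it
+assumes malloc() has been declared, for example via an %include directive):
+<pre>
+   %code {
+      /* Copied verbatim to the end of the generated parser */
+      void *MyParserNew(void){ return ParseAlloc( malloc ); }
+   }
+</pre></p>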
+
+<h4>The <tt>%default_destructor</tt> directive</h4>
+
+<p>The %default_destructor directive specifies a destructor to
+use for non-terminals that do not have their own destructor
+specified by a separate %destructor directive. See the documentation
+on the %destructor directive below for additional information.</p>
+
+<p>In some grammars, many different non-terminal symbols have the
+same datatype and hence the same destructor. This directive is
+a convenient way to specify the same destructor for all those
+non-terminals using a single statement.</p>
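+
+<p>A minimal sketch, assuming the affected non-terminal values are all
+pointers obtained from malloc():
+<pre>
+   %default_destructor { free($$); }
+</pre>
+Non-terminals that have their own %destructor directive are not affected.</p>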
+
+<h4>The <tt>%default_type</tt> directive</h4>
+
+<p>The %default_type directive specifies the datatype of non-terminal
+symbols that do not have their own datatype defined using a separate
+%type directive. See the documentation on %type below for additional
+information.</p>
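+
+<p>A minimal sketch (the ``Token*'' type is illustrative only):
+<pre>
+   %default_type {Token*}
+</pre>
+Non-terminals with their own %type directive keep the type given there.</p>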
+
+<h4>The <tt>%destructor</tt> directive</h4>
+
+<p>The %destructor directive is used to specify a destructor for
+a non-terminal symbol.
+(See also the %token_destructor directive which is used to
+specify a destructor for terminal symbols.)</p>
+
+<p>A non-terminal's destructor is called to dispose of the
+non-terminal's value whenever the non-terminal is popped from
+the stack. This includes all of the following circumstances:
+<ul>
+<li> When a rule reduces and the value of a non-terminal on
+ the right-hand side is not linked to C code.
+<li> When the stack is popped during error processing.
+<li> When the ParseFree() function runs.
+</ul>
+The destructor can do whatever it wants with the value of
+the non-terminal, but it is designed to deallocate memory
+or other resources held by that non-terminal.</p>
+
+<p>Consider an example:
+<pre>
+ %type nt {void*}
+ %destructor nt { free($$); }
+ nt(A) ::= ID NUM. { A = malloc( 100 ); }
+</pre>
+This example is a bit contrived but it serves to illustrate how
+destructors work. The example shows a non-terminal named
+``nt'' that holds values of type ``void*''. When the rule for
+an ``nt'' reduces, it sets the value of the non-terminal to
+space obtained from malloc(). Later, when the nt non-terminal
+is popped from the stack, the destructor will fire and call
+free() on this malloced space, thus avoiding a memory leak.
+(Note that the symbol ``$$'' in the destructor code is replaced
+by the value of the non-terminal.)</p>
+
+<p>It is important to note that the value of a non-terminal is passed
+to the destructor whenever the non-terminal is removed from the
+stack, unless the non-terminal is used in a C-code action. If
+the non-terminal is used by C-code, then it is assumed that the
+C-code will take care of destroying it if it should really
+be destroyed. More commonly, the value is used to build some
+larger structure and we don't want to destroy it, which is why
+the destructor is not called in this circumstance.</p>
+
+<p>By appropriate use of destructors, it is possible to
+build a parser using Lemon that can be used within a long-running
+program, such as a GUI, that will not leak memory or other resources.
+To do the same using yacc or bison is much more difficult.</p>
+
+<h4>The <tt>%extra_argument</tt> directive</h4>
+
+<p>The %extra_argument directive instructs Lemon to add a 4th parameter
+to the parameter list of the Parse() function it generates. Lemon
+doesn't do anything itself with this extra argument, but it does
+make the argument available to C-code action routines, destructors,
+and so forth. For example, if the grammar file contains:</p>
+
+<p><pre>
+ %extra_argument { MyStruct *pAbc }
+</pre></p>
+
+<p>Then the Parse() function generated will have a 4th parameter
+of type ``MyStruct*'' and all action routines will have access to
+a variable named ``pAbc'' that is the value of the 4th parameter
+in the most recent call to Parse().</p>
+
+<h4>The <tt>%include</tt> directive</h4>
+
+<p>The %include directive specifies C code that is included at the
+top of the generated parser. You can include any text you want --
+the Lemon parser generator copies it blindly. If you have multiple
+%include directives in your grammar file, the value of the last
+%include directive overwrites all the others.</p>
+
+<p>The %include directive is very handy for getting some extra #include
+preprocessor statements at the beginning of the generated parser.
+For example:</p>
+
+<p><pre>
+ %include {#include <unistd.h>}
+</pre></p>
+
+<p>This might be needed, for example, if some of the C actions in the
+grammar call functions that are prototyped in unistd.h.</p>
+
+<h4>The <tt>%left</tt> directive</h4>
+
+<p>The %left directive is used (along with the %right and
+%nonassoc directives) to declare precedences of terminal
+symbols. Every terminal symbol whose name appears after
+a %left directive but before the next period (``.'') is
+given the same left-associative precedence value. Subsequent
+%left directives have higher precedence. For example:</p>
+
+<p><pre>
+ %left AND.
+ %left OR.
+ %nonassoc EQ NE GT GE LT LE.
+ %left PLUS MINUS.
+ %left TIMES DIVIDE MOD.
+ %right EXP NOT.
+</pre></p>
+
+<p>Note the period that terminates each %left, %right or %nonassoc
+directive.</p>
+
+<p>LALR(1) grammars can get into a situation where they require
+a large amount of stack space if you make heavy use of right-associative
+operators. For this reason, it is recommended that you use %left
+rather than %right whenever possible.</p>
+
+<h4>The <tt>%name</tt> directive</h4>
+
+<p>By default, the functions generated by Lemon all begin with the
+five-character string ``Parse''. You can change this string to something
+different using the %name directive. For instance:</p>
+
+<p><pre>
+ %name Abcde
+</pre></p>
+
+<p>Putting this directive in the grammar file will cause Lemon to generate
+functions named
+<ul>
+<li> AbcdeAlloc(),
+<li> AbcdeFree(),
+<li> AbcdeTrace(), and
+<li> Abcde().
+</ul>
+The %name directive allows you to generate two or more different
+parsers and link them all into the same executable.
+</p>
+
+<h4>The <tt>%nonassoc</tt> directive</h4>
+
+<p>This directive is used to assign non-associative precedence to
+one or more terminal symbols. See the section on precedence rules
+or on the %left directive for additional information.</p>
+
+<h4>The <tt>%parse_accept</tt> directive</h4>
+
+<p>The %parse_accept directive specifies a block of C code that is
+executed whenever the parser accepts its input string. To ``accept''
+an input string means that the parser was able to process all tokens
+without error.</p>
+
+<p>For example:</p>
+
+<p><pre>
+ %parse_accept {
+ printf("parsing complete!\n");
+ }
+</pre></p>
+
+
+<h4>The <tt>%parse_failure</tt> directive</h4>
+
+<p>The %parse_failure directive specifies a block of C code that
+is executed whenever the parser fails to complete. This code is not
+executed until the parser has tried and failed to resolve an input
+error using its usual error recovery strategy. The routine is
+only invoked when parsing is unable to continue.</p>
+
+<p><pre>
+ %parse_failure {
+ fprintf(stderr,"Giving up. Parser is hopelessly lost...\n");
+ }
+</pre></p>
+
+<h4>The <tt>%right</tt> directive</h4>
+
+<p>This directive is used to assign right-associative precedence to
+one or more terminal symbols. See the section on precedence rules
+or on the %left directive for additional information.</p>
+
+<h4>The <tt>%stack_overflow</tt> directive</h4>
+
+<p>The %stack_overflow directive specifies a block of C code that
+is executed if the parser's internal stack ever overflows. Typically
+this just prints an error message. After a stack overflow, the parser
+will be unable to continue and must be reset.</p>
+
+<p><pre>
+ %stack_overflow {
+ fprintf(stderr,"Giving up. Parser stack overflow\n");
+ }
+</pre></p>
+
+<p>You can help prevent parser stack overflows by avoiding the use
+of right recursion and right-precedence operators in your grammar.
+Use left recursion and left-precedence operators instead, to
+encourage rules to reduce sooner and keep the stack size down.
+For example, do rules like this:
+<pre>
+ list ::= list element. // left-recursion. Good!
+ list ::= .
+</pre>
+Not like this:
+<pre>
+ list ::= element list. // right-recursion. Bad!
+ list ::= .
+</pre>
+
+<h4>The <tt>%stack_size</tt> directive</h4>
+
+<p>If stack overflow is a problem and you can't resolve the trouble
+by using left-recursion, then you might want to increase the size
+of the parser's stack using this directive. Put a positive integer
+after the %stack_size directive and Lemon will generate a parser
+with a stack of the requested size. The default value is 100.</p>
+
+<p><pre>
+ %stack_size 2000
+</pre></p>
+
+<h4>The <tt>%start_symbol</tt> directive</h4>
+
+<p>By default, the start-symbol for the grammar that Lemon generates
+is the first non-terminal that appears in the grammar file. But you
+can choose a different start-symbol using the %start_symbol directive.</p>
+
+<p><pre>
+ %start_symbol prog
+</pre></p>
+
+<h4>The <tt>%token_destructor</tt> directive</h4>
+
+<p>The %destructor directive assigns a destructor to a non-terminal
+symbol. (See the description of the %destructor directive above.)
+The %token_destructor directive does the same thing for all terminal symbols.</p>
+
+<p>Unlike non-terminal symbols which may each have a different data type
+for their values, terminals all use the same data type (defined by
+the %token_type directive) and so they use a common destructor. Other
+than that, the token destructor works just like the non-terminal
+destructors.</p>
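+
+<p>A minimal sketch (the Token type and the freeToken() helper are
+hypothetical):
+<pre>
+   %token_type {Token*}
+   %token_destructor { freeToken($$); }
+</pre></p>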
+
+<h4>The <tt>%token_prefix</tt> directive</h4>
+
+<p>Lemon generates #defines that assign small integer constants
+to each terminal symbol in the grammar. If desired, Lemon will
+add a prefix specified by this directive
+to each of the #defines it generates.
+So if the default output of Lemon looked like this:
+<pre>
+ #define AND 1
+ #define MINUS 2
+ #define OR 3
+ #define PLUS 4
+</pre>
+You can insert a statement into the grammar like this:
+<pre>
+ %token_prefix TOKEN_
+</pre>
+to cause Lemon to produce these symbols instead:
+<pre>
+ #define TOKEN_AND 1
+ #define TOKEN_MINUS 2
+ #define TOKEN_OR 3
+ #define TOKEN_PLUS 4
+</pre>
+
+<h4>The <tt>%token_type</tt> and <tt>%type</tt> directives</h4>
+
+<p>These directives are used to specify the data types for values
+on the parser's stack associated with terminal and non-terminal
+symbols. The values of all terminal symbols must be of the same
+type. This turns out to be the same data type as the 3rd parameter
+to the Parse() function generated by Lemon. Typically, you will
+make the value of a terminal symbol be a pointer to some kind of
+token structure, like this:</p>
+
+<p><pre>
+ %token_type {Token*}
+</pre></p>
+
+<p>If the data type of terminals is not specified, the default value
+is ``int''.</p>
+
+<p>Non-terminal symbols can each have their own data types. Typically
+the data type of a non-terminal is a pointer to the root of a parse-tree
+structure that contains all information about that non-terminal.
+For example:</p>
+
+<p><pre>
+ %type expr {Expr*}
+</pre></p>
+
+<p>Each entry on the parser's stack is actually a union containing
+instances of all data types for every non-terminal and terminal symbol.
+Lemon will automatically use the correct element of this union depending
+on what the corresponding non-terminal or terminal symbol is. But
+the grammar designer should keep in mind that the size of the union
+will be the size of its largest element. So if you have a single
+non-terminal whose data type requires 1K of storage, then your 100
+entry parser stack will require 100K of heap space. If you are willing
+and able to pay that price, fine. You just need to know.</p>
+
+<h3>Error Processing</h3>
+
+<p>After extensive experimentation over several years, it has been
+discovered that the error recovery strategy used by yacc is about
+as good as it gets. And so that is what Lemon uses.</p>
+
+<p>When a Lemon-generated parser encounters a syntax error, it
+first invokes the code specified by the %syntax_error directive, if
+any. It then enters its error recovery strategy. The error recovery
+strategy is to begin popping the parser's stack until it enters a
+state where it is permitted to shift a special non-terminal symbol
+named ``error''. It then shifts this non-terminal and continues
+parsing. But the %syntax_error routine will not be called again
+until at least three new tokens have been successfully shifted.</p>
+
+<p>If the parser pops its stack until the stack is empty, and it still
+is unable to shift the error symbol, then the %parse_failure routine
+is invoked and the parser resets itself to its start state, ready
+to begin parsing a new file. This is what will happen at the very
+first syntax error, of course, if there are no instances of the
+``error'' non-terminal in your grammar.</p>
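+
+<p>As a sketch only (the ``cmd'' and ``SEMI'' symbols are hypothetical),
+a grammar might report errors and resynchronize on semicolons like this:
+<pre>
+   %syntax_error {
+     fprintf(stderr, "syntax error\n");
+   }
+   cmd ::= error SEMI.
+</pre>
+The %syntax_error code runs when the error is first detected, and the
+``error'' non-terminal in the rule gives the parser a state in which the
+error symbol can be shifted so that parsing can continue.</p>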
+
+</body>
+</html>
Added: freeswitch/trunk/libs/sqlite/doc/report1.txt
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/doc/report1.txt Tue Dec 19 15:11:50 2006
@@ -0,0 +1,121 @@
+An SQLite (version 1.0) database was used in a large military application
+where the database contained 105 tables and indices. The following is
+a breakdown of the sizes of keys and data within these tables and indices:
+
+Entries: 967089
+Size: 45896104
+Avg Size: 48
+Key Size: 11112265
+Avg Key Size: 12
+Max Key Size: 99
+
+ 0..8 263 0%
+ 9..12 5560 0%
+ 13..16 71394 7%
+ 17..24 180717 26%
+ 25..32 215442 48%
+ 33..40 151118 64%
+ 41..48 77479 72%
+ 49..56 13983 74%
+ 57..64 14481 75%
+ 65..80 41342 79%
+ 81..96 127098 92%
+ 97..112 38054 96%
+ 113..128 14197 98%
+ 129..144 8208 99%
+ 145..160 3326 99%
+ 161..176 1242 99%
+ 177..192 604 99%
+ 193..208 222 99%
+ 209..224 213 99%
+ 225..240 132 99%
+ 241..256 58 99%
+ 257..288 515 99%
+ 289..320 64 99%
+ 321..352 39 99%
+ 353..384 44 99%
+ 385..416 25 99%
+ 417..448 24 99%
+ 449..480 26 99%
+ 481..512 27 99%
+ 513..1024 470 99%
+ 1025..2048 396 99%
+ 2049..4096 187 99%
+ 4097..8192 78 99%
+ 8193..16384 35 99%
+16385..32768 17 99%
+32769..65536 6 99%
+65537..65541 3 100%
+
+If the indices are omitted, the statistics for the 49 tables
+become the following:
+
+Entries: 451103
+Size: 30930282
+Avg Size: 69
+Key Size: 1804412
+Avg Key Size: 4
+Max Key Size: 4
+
+ 0..24 89 0%
+ 25..32 9417 2%
+ 33..40 119162 28%
+ 41..48 68710 43%
+ 49..56 9539 45%
+ 57..64 12435 48%
+ 65..80 38650 57%
+ 81..96 126877 85%
+ 97..112 38030 93%
+ 113..128 14183 96%
+ 129..144 7668 98%
+ 145..160 3302 99%
+ 161..176 1238 99%
+ 177..192 597 99%
+ 193..208 217 99%
+ 209..224 211 99%
+ 225..240 130 99%
+ 241..256 57 99%
+ 257..288 100 99%
+ 289..320 62 99%
+ 321..352 34 99%
+ 353..384 43 99%
+ 385..416 24 99%
+ 417..448 24 99%
+ 449..480 25 99%
+ 481..512 27 99%
+ 513..1024 153 99%
+ 1025..2048 92 99%
+ 2049..4096 7 100%
+
+The 56 indices have these statistics:
+
+Entries: 512422
+Size: 14879828
+Avg Size: 30
+Key Size: 9253204
+Avg Key Size: 19
+Max Key Size: 99
+
+ 0..8 246 0%
+ 9..12 5486 1%
+ 13..16 70717 14%
+ 17..24 178246 49%
+ 25..32 205722 89%
+ 33..40 31951 96%
+ 41..48 8768 97%
+ 49..56 4444 98%
+ 57..64 2046 99%
+ 65..80 2691 99%
+ 81..96 202 99%
+ 97..112 11 99%
+ 113..144 527 99%
+ 145..160 20 99%
+ 161..288 406 99%
+ 289..1024 316 99%
+ 1025..2048 304 99%
+ 2049..4096 180 99%
+ 4097..8192 78 99%
+ 8193..16384 35 99%
+16385..32768 17 99%
+32769..65536 6 99%
+65537..65541 3 100%
Added: freeswitch/trunk/libs/sqlite/ext/README.txt
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/README.txt Tue Dec 19 15:11:50 2006
@@ -0,0 +1,2 @@
+Various loadable extensions to SQLite are found in subfolders
+of this folder.
Added: freeswitch/trunk/libs/sqlite/ext/fts1/README.txt
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/README.txt Tue Dec 19 15:11:50 2006
@@ -0,0 +1,2 @@
+This folder contains source code to the first full-text search
+extension for SQLite.
Added: freeswitch/trunk/libs/sqlite/ext/fts1/fts1.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/fts1.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,3264 @@
+/* The author disclaims copyright to this source code.
+ *
+ * This is an SQLite module implementing full-text search.
+ */
+
+/*
+** The code in this file is only compiled if:
+**
+** * The FTS1 module is being built as an extension
+** (in which case SQLITE_CORE is not defined), or
+**
+** * The FTS1 module is being built into the core of
+** SQLite (in which case SQLITE_ENABLE_FTS1 is defined).
+*/
+#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS1)
+
+#if defined(SQLITE_ENABLE_FTS1) && !defined(SQLITE_CORE)
+# define SQLITE_CORE 1
+#endif
+
+#include <assert.h>
+#if !defined(__APPLE__)
+#include <malloc.h>
+#else
+#include <stdlib.h>
+#endif
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+
+#include "fts1.h"
+#include "fts1_hash.h"
+#include "fts1_tokenizer.h"
+#include "sqlite3.h"
+#include "sqlite3ext.h"
+SQLITE_EXTENSION_INIT1
+
+
+#if 0
+# define TRACE(A) printf A; fflush(stdout)
+#else
+# define TRACE(A)
+#endif
+
+/* utility functions */
+
+typedef struct StringBuffer {
+ int len; /* length, not including null terminator */
+ int alloced; /* Space allocated for s[] */
+ char *s; /* Content of the string */
+} StringBuffer;
+
+void initStringBuffer(StringBuffer *sb){
+ sb->len = 0;
+ sb->alloced = 100;
+ sb->s = malloc(100);
+ sb->s[0] = '\0';
+}
+
+void nappend(StringBuffer *sb, const char *zFrom, int nFrom){
+ if( sb->len + nFrom >= sb->alloced ){
+ sb->alloced = sb->len + nFrom + 100;
+ sb->s = realloc(sb->s, sb->alloced+1);
+ if( sb->s==0 ){
+ initStringBuffer(sb);
+ return;
+ }
+ }
+ memcpy(sb->s + sb->len, zFrom, nFrom);
+ sb->len += nFrom;
+ sb->s[sb->len] = 0;
+}
+void append(StringBuffer *sb, const char *zFrom){
+ nappend(sb, zFrom, strlen(zFrom));
+}
+
+/* We encode variable-length integers in little-endian order using seven bits
+ * per byte as follows:
+**
+** KEY:
+** A = 0xxxxxxx 7 bits of data and one flag bit
+** B = 1xxxxxxx 7 bits of data and one flag bit
+**
+** 7 bits - A
+** 14 bits - BA
+** 21 bits - BBA
+** and so on.
+*/
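+
+/* A worked example (illustrative only): the value 300 has binary form
+** 10 0101100.  Its low seven bits (0101100 = 0x2C) are written first
+** with the continuation bit set (0xAC), then the remaining bits
+** (10 = 0x02) with the continuation bit clear, giving the byte
+** sequence 0xAC 0x02.  Values below 128 occupy a single byte equal
+** to the value itself (e.g. 42 encodes as 0x2A).
+*/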
+
+/* We may need up to VARINT_MAX bytes to store an encoded 64-bit integer. */
+#define VARINT_MAX 10
+
+/* Write a 64-bit variable-length integer to memory starting at p[0].
+ * The length of data written will be between 1 and VARINT_MAX bytes.
+ * The number of bytes written is returned. */
+static int putVarint(char *p, sqlite_int64 v){
+ unsigned char *q = (unsigned char *) p;
+ sqlite_uint64 vu = v;
+ do{
+ *q++ = (unsigned char) ((vu & 0x7f) | 0x80);
+ vu >>= 7;
+ }while( vu!=0 );
+ q[-1] &= 0x7f; /* turn off high bit in final byte */
+ assert( q - (unsigned char *)p <= VARINT_MAX );
+ return (int) (q - (unsigned char *)p);
+}
+
+/* Read a 64-bit variable-length integer from memory starting at p[0].
+ * Return the number of bytes read, or 0 on error.
+ * The value is stored in *v. */
+static int getVarint(const char *p, sqlite_int64 *v){
+ const unsigned char *q = (const unsigned char *) p;
+ sqlite_uint64 x = 0, y = 1;
+ while( (*q & 0x80) == 0x80 ){
+ x += y * (*q++ & 0x7f);
+ y <<= 7;
+ if( q - (unsigned char *)p >= VARINT_MAX ){ /* bad data */
+ assert( 0 );
+ return 0;
+ }
+ }
+ x += y * (*q++);
+ *v = (sqlite_int64) x;
+ return (int) (q - (unsigned char *)p);
+}
+
+static int getVarint32(const char *p, int *pi){
+ sqlite_int64 i;
+ int ret = getVarint(p, &i);
+ *pi = (int) i;
+ assert( *pi==i );
+ return ret;
+}
+
+/*** Document lists ***
+ *
+ * A document list holds a sorted list of varint-encoded document IDs.
+ *
+ * A doclist with type DL_POSITIONS_OFFSETS is stored like this:
+ *
+ * array {
+ * varint docid;
+ * array {
+ * varint position; (delta from previous position plus POS_BASE)
+ * varint startOffset; (delta from previous startOffset)
+ * varint endOffset; (delta from startOffset)
+ * }
+ * }
+ *
+ * Here, array { X } means zero or more occurrences of X, adjacent in memory.
+ *
+ * A position list may hold positions for text in multiple columns. A position
+ * POS_COLUMN is followed by a varint containing the index of the column for
+ * following positions in the list. Any positions appearing before any
+ * occurrences of POS_COLUMN are for column 0.
+ *
+ * A doclist with type DL_POSITIONS is like the above, but holds only docids
+ * and positions without offset information.
+ *
+ * A doclist with type DL_DOCIDS is like the above, but holds only docids
+ * without positions or offset information.
+ *
+ * On disk, every document list has positions and offsets, so we don't bother
+ * to serialize a doclist's type.
+ *
+ * We don't yet delta-encode document IDs; doing so will probably be a
+ * modest win.
+ *
+ * NOTE(shess) I've thought of a slightly (1%) better offset encoding.
+ * After the first offset, estimate the next offset by using the
+ * current token position and the previous token position and offset,
+ * offset to handle some variance. So the estimate would be
+ * (iPosition*w->iStartOffset/w->iPosition-64), which is delta-encoded
+ * as normal. Offsets more than 64 chars from the estimate are
+ * encoded as the delta to the previous start offset + 128. An
+ * additional tiny increment can be gained by using the end offset of
+ * the previous token to make the estimate a tiny bit more precise.
+*/
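+
+/* A worked example of the DL_POSITIONS_OFFSETS layout (illustrative
+** values only): a doclist holding docid 42 with one term occurrence in
+** column 0 at position 3 and byte offsets [10,15) is built by
+** docListAddDocid() followed by docListAddPosOffset() as:
+**
+**   varint(42)       docid                             -> 0x2A
+**   varint(3-0+2)    position delta plus POS_BASE      -> 0x05
+**   varint(10-0)     startOffset delta                 -> 0x0A
+**   varint(15-10)    endOffset delta from startOffset  -> 0x05
+**   varint(POS_END)  position-list terminator          -> 0x00
+*/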
+
+typedef enum DocListType {
+ DL_DOCIDS, /* docids only */
+ DL_POSITIONS, /* docids + positions */
+ DL_POSITIONS_OFFSETS /* docids + positions + offsets */
+} DocListType;
+
+/*
+** By default, only positions and not offsets are stored in the doclists.
+** To change this so that offsets are stored too, compile with
+**
+** -DDL_DEFAULT=DL_POSITIONS_OFFSETS
+**
+*/
+#ifndef DL_DEFAULT
+# define DL_DEFAULT DL_POSITIONS
+#endif
+
+typedef struct DocList {
+ char *pData;
+ int nData;
+ DocListType iType;
+ int iLastColumn; /* the last column written */
+ int iLastPos; /* the last position written */
+ int iLastOffset; /* the last start offset written */
+} DocList;
+
+enum {
+ POS_END = 0, /* end of this position list */
+ POS_COLUMN, /* followed by new column number */
+ POS_BASE
+};
+
+/* Initialize a new DocList to hold the given data. */
+static void docListInit(DocList *d, DocListType iType,
+ const char *pData, int nData){
+ d->nData = nData;
+ if( nData>0 ){
+ d->pData = malloc(nData);
+ memcpy(d->pData, pData, nData);
+ } else {
+ d->pData = NULL;
+ }
+ d->iType = iType;
+ d->iLastColumn = 0;
+ d->iLastPos = d->iLastOffset = 0;
+}
+
+/* Create a new dynamically-allocated DocList. */
+static DocList *docListNew(DocListType iType){
+ DocList *d = (DocList *) malloc(sizeof(DocList));
+ docListInit(d, iType, 0, 0);
+ return d;
+}
+
+static void docListDestroy(DocList *d){
+ free(d->pData);
+#ifndef NDEBUG
+ memset(d, 0x55, sizeof(*d));
+#endif
+}
+
+static void docListDelete(DocList *d){
+ docListDestroy(d);
+ free(d);
+}
+
+static char *docListEnd(DocList *d){
+ return d->pData + d->nData;
+}
+
+/* Append a varint to a DocList's data. */
+static void appendVarint(DocList *d, sqlite_int64 i){
+ char c[VARINT_MAX];
+ int n = putVarint(c, i);
+ d->pData = realloc(d->pData, d->nData + n);
+ memcpy(d->pData + d->nData, c, n);
+ d->nData += n;
+}
+
+static void docListAddDocid(DocList *d, sqlite_int64 iDocid){
+ appendVarint(d, iDocid);
+ if( d->iType>=DL_POSITIONS ){
+ appendVarint(d, POS_END); /* initially empty position list */
+ d->iLastColumn = 0;
+ d->iLastPos = d->iLastOffset = 0;
+ }
+}
+
+/* helper function for docListAddPos and docListAddPosOffset */
+static void addPos(DocList *d, int iColumn, int iPos){
+ assert( d->nData>0 );
+ --d->nData; /* remove previous terminator */
+ if( iColumn!=d->iLastColumn ){
+ assert( iColumn>d->iLastColumn );
+ appendVarint(d, POS_COLUMN);
+ appendVarint(d, iColumn);
+ d->iLastColumn = iColumn;
+ d->iLastPos = d->iLastOffset = 0;
+ }
+ assert( iPos>=d->iLastPos );
+ appendVarint(d, iPos-d->iLastPos+POS_BASE);
+ d->iLastPos = iPos;
+}
+
+/* Add a position to the last position list in a doclist. */
+static void docListAddPos(DocList *d, int iColumn, int iPos){
+ assert( d->iType==DL_POSITIONS );
+ addPos(d, iColumn, iPos);
+ appendVarint(d, POS_END); /* add new terminator */
+}
+
+/*
+** Add a position and starting and ending offsets to a doclist.
+**
+** If the doclist is setup to handle only positions, then insert
+** the position only and ignore the offsets.
+*/
+static void docListAddPosOffset(
+ DocList *d, /* Doclist under construction */
+ int iColumn, /* Column the inserted term is part of */
+ int iPos, /* Position of the inserted term */
+ int iStartOffset, /* Starting offset of inserted term */
+ int iEndOffset /* Ending offset of inserted term */
+){
+ assert( d->iType>=DL_POSITIONS );
+ addPos(d, iColumn, iPos);
+ if( d->iType==DL_POSITIONS_OFFSETS ){
+ assert( iStartOffset>=d->iLastOffset );
+ appendVarint(d, iStartOffset-d->iLastOffset);
+ d->iLastOffset = iStartOffset;
+ assert( iEndOffset>=iStartOffset );
+ appendVarint(d, iEndOffset-iStartOffset);
+ }
+ appendVarint(d, POS_END); /* add new terminator */
+}
+
+/*
+** A DocListReader object is a cursor into a doclist. Initialize
+** the cursor to the beginning of the doclist by calling readerInit().
+** Then use routines
+**
+** peekDocid()
+** readDocid()
+** readPosition()
+** skipPositionList()
+** and so forth...
+**
+** to read information out of the doclist. When we reach the end
+** of the doclist, atEnd() returns TRUE.
+*/
+typedef struct DocListReader {
+ DocList *pDoclist; /* The document list we are stepping through */
+ char *p; /* Pointer to next unread byte in the doclist */
+ int iLastColumn;
+ int iLastPos; /* the last position read, or -1 when not in a position list */
+} DocListReader;
+
+/*
+** Initialize the DocListReader r to point to the beginning of pDoclist.
+*/
+static void readerInit(DocListReader *r, DocList *pDoclist){
+ r->pDoclist = pDoclist;
+ if( pDoclist!=NULL ){
+ r->p = pDoclist->pData;
+ }
+ r->iLastColumn = -1;
+ r->iLastPos = -1;
+}
+
+/*
+** Return TRUE if we have reached the end of pReader and there is
+** nothing else left to read.
+*/
+static int atEnd(DocListReader *pReader){
+ return pReader->pDoclist==0 || (pReader->p >= docListEnd(pReader->pDoclist));
+}
+
+/* Peek at the next docid without advancing the read pointer.
+*/
+static sqlite_int64 peekDocid(DocListReader *pReader){
+ sqlite_int64 ret;
+ assert( !atEnd(pReader) );
+ assert( pReader->iLastPos==-1 );
+ getVarint(pReader->p, &ret);
+ return ret;
+}
+
+/* Read the next docid. See also nextDocid().
+*/
+static sqlite_int64 readDocid(DocListReader *pReader){
+ sqlite_int64 ret;
+ assert( !atEnd(pReader) );
+ assert( pReader->iLastPos==-1 );
+ pReader->p += getVarint(pReader->p, &ret);
+ if( pReader->pDoclist->iType>=DL_POSITIONS ){
+ pReader->iLastColumn = 0;
+ pReader->iLastPos = 0;
+ }
+ return ret;
+}
+
+/* Read the next position and column index from a position list.
+ * Returns the position, or -1 at the end of the list. */
+static int readPosition(DocListReader *pReader, int *iColumn){
+ int i;
+ int iType = pReader->pDoclist->iType;
+
+ if( pReader->iLastPos==-1 ){
+ return -1;
+ }
+ assert( !atEnd(pReader) );
+
+ if( iType<DL_POSITIONS ){
+ return -1;
+ }
+ pReader->p += getVarint32(pReader->p, &i);
+ if( i==POS_END ){
+ pReader->iLastColumn = pReader->iLastPos = -1;
+ *iColumn = -1;
+ return -1;
+ }
+ if( i==POS_COLUMN ){
+ pReader->p += getVarint32(pReader->p, &pReader->iLastColumn);
+ pReader->iLastPos = 0;
+ pReader->p += getVarint32(pReader->p, &i);
+ assert( i>=POS_BASE );
+ }
+ pReader->iLastPos += ((int) i)-POS_BASE;
+ if( iType>=DL_POSITIONS_OFFSETS ){
+ /* Skip over offsets, ignoring them for now. */
+ int iStart, iEnd;
+ pReader->p += getVarint32(pReader->p, &iStart);
+ pReader->p += getVarint32(pReader->p, &iEnd);
+ }
+ *iColumn = pReader->iLastColumn;
+ return pReader->iLastPos;
+}
+
+/* Skip past the end of a position list. */
+static void skipPositionList(DocListReader *pReader){
+ DocList *p = pReader->pDoclist;
+ if( p && p->iType>=DL_POSITIONS ){
+ int iColumn;
+ while( readPosition(pReader, &iColumn)!=-1 ){}
+ }
+}
+
+/* Skip over a docid, including its position list if the doclist has
+ * positions. */
+static void skipDocument(DocListReader *pReader){
+ readDocid(pReader);
+ skipPositionList(pReader);
+}
+
+/* Skip past all docids which are less than [iDocid]. Returns 1 if a docid
+ * matching [iDocid] was found. */
+static int skipToDocid(DocListReader *pReader, sqlite_int64 iDocid){
+ sqlite_int64 d = 0;
+ while( !atEnd(pReader) && (d=peekDocid(pReader))<iDocid ){
+ skipDocument(pReader);
+ }
+ return !atEnd(pReader) && d==iDocid;
+}
+
+/* Return the first document in a document list.
+*/
+static sqlite_int64 firstDocid(DocList *d){
+ DocListReader r;
+ readerInit(&r, d);
+ return readDocid(&r);
+}
+
+#ifdef SQLITE_DEBUG
+/*
+** This routine is used for debugging purposes only.
+**
+** Write the content of a doclist to standard output.
+*/
+static void printDoclist(DocList *p){
+ DocListReader r;
+ const char *zSep = "";
+
+ readerInit(&r, p);
+ while( !atEnd(&r) ){
+ sqlite_int64 docid = readDocid(&r);
+ if( docid==0 ){
+ skipPositionList(&r);
+ continue;
+ }
+ printf("%s%lld", zSep, docid);
+ zSep = ",";
+ if( p->iType>=DL_POSITIONS ){
+ int iPos, iCol;
+ const char *zDiv = "";
+ printf("(");
+ while( (iPos = readPosition(&r, &iCol))>=0 ){
+ printf("%s%d:%d", zDiv, iCol, iPos);
+ zDiv = ":";
+ }
+ printf(")");
+ }
+ }
+ printf("\n");
+ fflush(stdout);
+}
+#endif /* SQLITE_DEBUG */
+
+/* Trim the given doclist to contain only positions in column
+ * [iRestrictColumn]. */
+static void docListRestrictColumn(DocList *in, int iRestrictColumn){
+ DocListReader r;
+ DocList out;
+
+ assert( in->iType>=DL_POSITIONS );
+ readerInit(&r, in);
+ docListInit(&out, DL_POSITIONS, NULL, 0);
+
+ while( !atEnd(&r) ){
+ sqlite_int64 iDocid = readDocid(&r);
+ int iPos, iColumn;
+
+ docListAddDocid(&out, iDocid);
+ while( (iPos = readPosition(&r, &iColumn)) != -1 ){
+ if( iColumn==iRestrictColumn ){
+ docListAddPos(&out, iColumn, iPos);
+ }
+ }
+ }
+
+ docListDestroy(in);
+ *in = out;
+}
+
+/* Trim the given doclist by discarding any docids without any remaining
+ * positions. */
+static void docListDiscardEmpty(DocList *in) {
+ DocListReader r;
+ DocList out;
+
+ /* TODO: It would be nice to implement this operation in place; that
+ * could save a significant amount of memory in queries with long doclists. */
+ assert( in->iType>=DL_POSITIONS );
+ readerInit(&r, in);
+ docListInit(&out, DL_POSITIONS, NULL, 0);
+
+ while( !atEnd(&r) ){
+ sqlite_int64 iDocid = readDocid(&r);
+ int match = 0;
+ int iPos, iColumn;
+ while( (iPos = readPosition(&r, &iColumn)) != -1 ){
+ if( !match ){
+ docListAddDocid(&out, iDocid);
+ match = 1;
+ }
+ docListAddPos(&out, iColumn, iPos);
+ }
+ }
+
+ docListDestroy(in);
+ *in = out;
+}
+
+/* Helper function for docListUpdate() and docListAccumulate().
+** Splices a doclist element into the doclist represented by r,
+** leaving r pointing after the newly spliced element.
+*/
+static void docListSpliceElement(DocListReader *r, sqlite_int64 iDocid,
+ const char *pSource, int nSource){
+ DocList *d = r->pDoclist;
+ char *pTarget;
+ int nTarget, found;
+
+ found = skipToDocid(r, iDocid);
+
+ /* Describe slice in d to place pSource/nSource. */
+ pTarget = r->p;
+ if( found ){
+ skipDocument(r);
+ nTarget = r->p-pTarget;
+ }else{
+ nTarget = 0;
+ }
+
+ /* The sense of the following is that there are three possibilities.
+ ** If nTarget==nSource, we should not move any memory nor realloc.
+ ** If nTarget>nSource, trim target and realloc.
+ ** If nTarget<nSource, realloc then expand target.
+ */
+ if( nTarget>nSource ){
+ memmove(pTarget+nSource, pTarget+nTarget, docListEnd(d)-(pTarget+nTarget));
+ }
+ if( nTarget!=nSource ){
+ int iDoclist = pTarget-d->pData;
+ d->pData = realloc(d->pData, d->nData+nSource-nTarget);
+ pTarget = d->pData+iDoclist;
+ }
+ if( nTarget<nSource ){
+ memmove(pTarget+nSource, pTarget+nTarget, docListEnd(d)-(pTarget+nTarget));
+ }
+
+ memcpy(pTarget, pSource, nSource);
+ d->nData += nSource-nTarget;
+ r->p = pTarget+nSource;
+}
+
+/* Insert/update pUpdate into the doclist. */
+static void docListUpdate(DocList *d, DocList *pUpdate){
+ DocListReader reader;
+
+ assert( d!=NULL && pUpdate!=NULL );
+ assert( d->iType==pUpdate->iType);
+
+ readerInit(&reader, d);
+ docListSpliceElement(&reader, firstDocid(pUpdate),
+ pUpdate->pData, pUpdate->nData);
+}
+
+/* Propagate elements from pUpdate to pAcc, overwriting elements with
+** matching docids.
+*/
+static void docListAccumulate(DocList *pAcc, DocList *pUpdate){
+ DocListReader accReader, updateReader;
+
+ /* Handle edge cases where one doclist is empty. */
+ assert( pAcc!=NULL );
+ if( pUpdate==NULL || pUpdate->nData==0 ) return;
+ if( pAcc->nData==0 ){
+ pAcc->pData = malloc(pUpdate->nData);
+ memcpy(pAcc->pData, pUpdate->pData, pUpdate->nData);
+ pAcc->nData = pUpdate->nData;
+ return;
+ }
+
+ readerInit(&accReader, pAcc);
+ readerInit(&updateReader, pUpdate);
+
+ while( !atEnd(&updateReader) ){
+ char *pSource = updateReader.p;
+ sqlite_int64 iDocid = readDocid(&updateReader);
+ skipPositionList(&updateReader);
+ docListSpliceElement(&accReader, iDocid, pSource, updateReader.p-pSource);
+ }
+}
+
+/*
+** Read the next docid off of pIn. Return 0 if we reach the end.
+**
+** TODO: This assumes that docids are never 0, but they may actually be 0 since
+** users can choose docids when inserting into a full-text table.  Fix this.
+*/
+static sqlite_int64 nextDocid(DocListReader *pIn){
+ skipPositionList(pIn);
+ return atEnd(pIn) ? 0 : readDocid(pIn);
+}
+
+/*
+** pLeft and pRight are two DocListReaders that are pointing to
+** positions lists of the same document: iDocid.
+**
+** If there are no instances in pLeft or pRight where the position
+** of pLeft is one less than the position of pRight, then this
+** routine adds nothing to pOut.
+**
+** If there are one or more instances where positions from pLeft
+** are exactly one less than positions from pRight, then add a new
+** document record to pOut. If pOut wants to hold positions, then
+** include the positions from pRight that are one more than a
+** position in pLeft. In other words: pRight.iPos==pLeft.iPos+1.
+**
+** pLeft and pRight are left pointing at the next document record.
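+**
+** For example (hypothetical positions): if pLeft holds positions
+** {4, 9} and pRight holds {5, 12} in the same column, the pair (4,5)
+** satisfies pLeft.iPos+1==pRight.iPos, so a single record for iDocid
+** is added to pOut, holding position 5 if pOut keeps positions.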
+*/
+static void mergePosList(
+ DocListReader *pLeft, /* Left position list */
+ DocListReader *pRight, /* Right position list */
+ sqlite_int64 iDocid, /* The docid from pLeft and pRight */
+ DocList *pOut /* Write the merged document record here */
+){
+ int iLeftCol, iLeftPos = readPosition(pLeft, &iLeftCol);
+ int iRightCol, iRightPos = readPosition(pRight, &iRightCol);
+ int match = 0;
+
+ /* Loop until we've reached the end of both position lists. */
+ while( iLeftPos!=-1 && iRightPos!=-1 ){
+ if( iLeftCol==iRightCol && iLeftPos+1==iRightPos ){
+ if( !match ){
+ docListAddDocid(pOut, iDocid);
+ match = 1;
+ }
+ if( pOut->iType>=DL_POSITIONS ){
+ docListAddPos(pOut, iRightCol, iRightPos);
+ }
+ iLeftPos = readPosition(pLeft, &iLeftCol);
+ iRightPos = readPosition(pRight, &iRightCol);
+ }else if( iRightCol<iLeftCol ||
+ (iRightCol==iLeftCol && iRightPos<iLeftPos+1) ){
+ iRightPos = readPosition(pRight, &iRightCol);
+ }else{
+ iLeftPos = readPosition(pLeft, &iLeftCol);
+ }
+ }
+ if( iLeftPos>=0 ) skipPositionList(pLeft);
+ if( iRightPos>=0 ) skipPositionList(pRight);
+}
+
+/* We have two doclists: pLeft and pRight.
+** Write the phrase intersection of these two doclists into pOut.
+**
+** A phrase intersection means that two documents only match
+** if pLeft.iPos+1==pRight.iPos.
+**
+** The output pOut may or may not contain positions. If pOut
+** does contain positions, they are the positions of pRight.
+*/
+static void docListPhraseMerge(
+ DocList *pLeft, /* Doclist resulting from the words on the left */
+ DocList *pRight, /* Doclist for the next word to the right */
+ DocList *pOut /* Write the combined doclist here */
+){
+ DocListReader left, right;
+ sqlite_int64 docidLeft, docidRight;
+
+ readerInit(&left, pLeft);
+ readerInit(&right, pRight);
+ docidLeft = nextDocid(&left);
+ docidRight = nextDocid(&right);
+
+ while( docidLeft>0 && docidRight>0 ){
+ if( docidLeft<docidRight ){
+ docidLeft = nextDocid(&left);
+ }else if( docidRight<docidLeft ){
+ docidRight = nextDocid(&right);
+ }else{
+ mergePosList(&left, &right, docidLeft, pOut);
+ docidLeft = nextDocid(&left);
+ docidRight = nextDocid(&right);
+ }
+ }
+}
+
+/* We have two doclists: pLeft and pRight.
+** Write the intersection of these two doclists into pOut.
+** Only docids are matched. Position information is ignored.
+**
+** The output pOut never holds positions.
+*/
+static void docListAndMerge(
+ DocList *pLeft, /* Doclist resulting from the words on the left */
+ DocList *pRight, /* Doclist for the next word to the right */
+ DocList *pOut /* Write the combined doclist here */
+){
+ DocListReader left, right;
+ sqlite_int64 docidLeft, docidRight;
+
+ assert( pOut->iType<DL_POSITIONS );
+
+ readerInit(&left, pLeft);
+ readerInit(&right, pRight);
+ docidLeft = nextDocid(&left);
+ docidRight = nextDocid(&right);
+
+ while( docidLeft>0 && docidRight>0 ){
+ if( docidLeft<docidRight ){
+ docidLeft = nextDocid(&left);
+ }else if( docidRight<docidLeft ){
+ docidRight = nextDocid(&right);
+ }else{
+ docListAddDocid(pOut, docidLeft);
+ docidLeft = nextDocid(&left);
+ docidRight = nextDocid(&right);
+ }
+ }
+}
+
+/* We have two doclists: pLeft and pRight.
+** Write the union of these two doclists into pOut.
+** Only docids are matched. Position information is ignored.
+**
+** The output pOut never holds positions.
+*/
+static void docListOrMerge(
+ DocList *pLeft, /* Doclist resulting from the words on the left */
+ DocList *pRight, /* Doclist for the next word to the right */
+ DocList *pOut /* Write the combined doclist here */
+){
+ DocListReader left, right;
+ sqlite_int64 docidLeft, docidRight, priorLeft;
+
+ readerInit(&left, pLeft);
+ readerInit(&right, pRight);
+ docidLeft = nextDocid(&left);
+ docidRight = nextDocid(&right);
+
+ while( docidLeft>0 && docidRight>0 ){
+ if( docidLeft<=docidRight ){
+ docListAddDocid(pOut, docidLeft);
+ }else{
+ docListAddDocid(pOut, docidRight);
+ }
+ priorLeft = docidLeft;
+ if( docidLeft<=docidRight ){
+ docidLeft = nextDocid(&left);
+ }
+ if( docidRight>0 && docidRight<=priorLeft ){
+ docidRight = nextDocid(&right);
+ }
+ }
+ while( docidLeft>0 ){
+ docListAddDocid(pOut, docidLeft);
+ docidLeft = nextDocid(&left);
+ }
+ while( docidRight>0 ){
+ docListAddDocid(pOut, docidRight);
+ docidRight = nextDocid(&right);
+ }
+}
+
+/* We have two doclists: pLeft and pRight.
+** Write into pOut all documents that occur in pLeft but not
+** in pRight.
+**
+** Only docids are matched. Position information is ignored.
+**
+** The output pOut never holds positions.
+*/
+static void docListExceptMerge(
+ DocList *pLeft, /* Doclist resulting from the words on the left */
+ DocList *pRight, /* Doclist for the next word to the right */
+ DocList *pOut /* Write the combined doclist here */
+){
+ DocListReader left, right;
+ sqlite_int64 docidLeft, docidRight, priorLeft;
+
+ readerInit(&left, pLeft);
+ readerInit(&right, pRight);
+ docidLeft = nextDocid(&left);
+ docidRight = nextDocid(&right);
+
+ while( docidLeft>0 && docidRight>0 ){
+ priorLeft = docidLeft;
+ if( docidLeft<docidRight ){
+ docListAddDocid(pOut, docidLeft);
+ }
+ if( docidLeft<=docidRight ){
+ docidLeft = nextDocid(&left);
+ }
+ if( docidRight>0 && docidRight<=priorLeft ){
+ docidRight = nextDocid(&right);
+ }
+ }
+ while( docidLeft>0 ){
+ docListAddDocid(pOut, docidLeft);
+ docidLeft = nextDocid(&left);
+ }
+}
+
+static char *string_dup_n(const char *s, int n){
+ char *str = malloc(n + 1);
+ memcpy(str, s, n);
+ str[n] = '\0';
+ return str;
+}
+
+/* Duplicate a string; the caller must free() the returned string.
+ * (We don't use strdup() since it's not part of the standard C library and
+ * may not be available everywhere.) */
+static char *string_dup(const char *s){
+ return string_dup_n(s, strlen(s));
+}
+
+/* Format a string, replacing each occurrence of the % character with
+ * zName. This may be more convenient than sqlite_mprintf()
+ * when one string is used repeatedly in a format string.
+ * The caller must free() the returned string. */
+static char *string_format(const char *zFormat, const char *zName){
+ const char *p;
+ size_t len = 0;
+ size_t nName = strlen(zName);
+ char *result;
+ char *r;
+
+ /* first compute length needed */
+ for(p = zFormat ; *p ; ++p){
+ len += (*p=='%' ? nName : 1);
+ }
+ len += 1; /* for null terminator */
+
+ r = result = malloc(len);
+ for(p = zFormat; *p; ++p){
+ if( *p=='%' ){
+ memcpy(r, zName, nName);
+ r += nName;
+ } else {
+ *r++ = *p;
+ }
+ }
+ *r++ = '\0';
+ assert( r == result + len );
+ return result;
+}
+
+static int sql_exec(sqlite3 *db, const char *zName, const char *zFormat){
+ char *zCommand = string_format(zFormat, zName);
+ int rc;
+ TRACE(("FTS1 sql: %s\n", zCommand));
+ rc = sqlite3_exec(db, zCommand, NULL, 0, NULL);
+ free(zCommand);
+ return rc;
+}
+
+static int sql_prepare(sqlite3 *db, const char *zName, sqlite3_stmt **ppStmt,
+ const char *zFormat){
+ char *zCommand = string_format(zFormat, zName);
+ int rc;
+ TRACE(("FTS1 prepare: %s\n", zCommand));
+ rc = sqlite3_prepare(db, zCommand, -1, ppStmt, NULL);
+ free(zCommand);
+ return rc;
+}
+
+/* end utility functions */
+
+/* Forward reference */
+typedef struct fulltext_vtab fulltext_vtab;
+
+/* A single term in a query is represented by an instance of
+** the following structure.
+*/
+typedef struct QueryTerm {
+ short int nPhrase; /* How many following terms are part of the same phrase */
+ short int iPhrase; /* This is the i-th term of a phrase. */
+ short int iColumn; /* Column of the index that must match this term */
+ signed char isOr; /* this term is preceded by "OR" */
+ signed char isNot; /* this term is preceded by "-" */
+ char *pTerm; /* text of the term. '\000' terminated. malloced */
+ int nTerm; /* Number of bytes in pTerm[] */
+} QueryTerm;
+
+
+/* A query string is parsed into a Query structure.
+ *
+ * We could, in theory, allow query strings to be complicated
+ * nested expressions with precedence determined by parentheses.
+ * But none of the major search engines do this. (Perhaps the
+ * feeling is that a parenthesized expression is too complex an
+ * idea for the average user to grasp.)  Taking our lead from
+ * the major search engines, we will allow queries to be a list
+ * of terms (with an implied AND operator) or phrases in double-quotes,
+ * with a single optional "-" before each non-phrase term to designate
+ * negation and an optional OR connector.
+ *
+ * OR binds more tightly than the implied AND, which is what the
+ * major search engines seem to do. So, for example:
+ *
+ * [one two OR three] ==> one AND (two OR three)
+ * [one OR two three] ==> (one OR two) AND three
+ *
+ * A "-" before a term matches all entries that lack that term.
+ * The "-" must occur immediately before the term with no intervening
+ * space.  This is how the search engines do it.
+ *
+ * A NOT term cannot be the right-hand operand of an OR. If this
+ * occurs in the query string, the NOT is ignored:
+ *
+ * [one OR -two] ==> one OR two
+ *
+ */
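+
+/* An illustrative usage sketch only (the table "email" and its columns
+** "subject" and "body" are hypothetical, assuming a table created with
+** CREATE VIRTUAL TABLE email USING fts1(subject, body)):
+**
+**   SELECT rowid FROM email WHERE email MATCH 'one two OR three';
+**   SELECT rowid FROM email WHERE subject MATCH '"quarterly report" -draft';
+**
+** Matching against the extra column named after the table (see
+** fulltextSchema()) searches all columns; matching against a declared
+** column restricts the search to that column.
+*/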
+typedef struct Query {
+ fulltext_vtab *pFts; /* The full text index */
+ int nTerms; /* Number of terms in the query */
+ QueryTerm *pTerms; /* Array of terms. Space obtained from malloc() */
+ int nextIsOr; /* Set the isOr flag on the next inserted term */
+ int nextColumn; /* Next word parsed must be in this column */
+ int dfltColumn; /* The default column */
+} Query;
+
+
+/*
+** An instance of the following structure keeps track of generated
+** matching-word offset information and snippets.
+*/
+typedef struct Snippet {
+ int nMatch; /* Total number of matches */
+ int nAlloc; /* Space allocated for aMatch[] */
+ struct snippetMatch { /* One entry for each matching term */
+ char snStatus; /* Status flag for use while constructing snippets */
+ short int iCol; /* The column that contains the match */
+ short int iTerm; /* The index in Query.pTerms[] of the matching term */
+ short int nByte; /* Number of bytes in the term */
+ int iStart; /* The offset to the first character of the term */
+ } *aMatch; /* Points to space obtained from malloc */
+ char *zOffset; /* Text rendering of aMatch[] */
+ int nOffset; /* strlen(zOffset) */
+ char *zSnippet; /* Snippet text */
+ int nSnippet; /* strlen(zSnippet) */
+} Snippet;
+
+
+typedef enum QueryType {
+ QUERY_GENERIC, /* table scan */
+ QUERY_ROWID, /* lookup by rowid */
+ QUERY_FULLTEXT /* QUERY_FULLTEXT + [i] is a full-text search for column i*/
+} QueryType;
+
+/* TODO(shess) CHUNK_MAX controls how much data we allow in segment 0
+** before we start aggregating into larger segments. Lower CHUNK_MAX
+** means that for a given input we have more individual segments per
+** term, which means more rows in the table and a bigger index (due to
+** both more rows and bigger rowids). But it also reduces the average
+** cost of adding new elements to the segment 0 doclist, and it seems
+** to reduce the number of pages read and written during inserts. 256
+** was chosen by measuring insertion times for a certain input (first
+** 10k documents of Enron corpus), though including query performance
+** in the decision may argue for a larger value.
+*/
+#define CHUNK_MAX 256
+
+typedef enum fulltext_statement {
+ CONTENT_INSERT_STMT,
+ CONTENT_SELECT_STMT,
+ CONTENT_UPDATE_STMT,
+ CONTENT_DELETE_STMT,
+
+ TERM_SELECT_STMT,
+ TERM_SELECT_ALL_STMT,
+ TERM_INSERT_STMT,
+ TERM_UPDATE_STMT,
+ TERM_DELETE_STMT,
+
+ MAX_STMT /* Always at end! */
+} fulltext_statement;
+
+/* These must exactly match the enum above. */
+/* TODO(adam): Is there some risk that a statement (in particular,
+** pTermSelectStmt) will be used in two cursors at once, e.g. if a
+** query joins a virtual table to itself? If so perhaps we should
+** move some of these to the cursor object.
+*/
+static const char *const fulltext_zStatement[MAX_STMT] = {
+ /* CONTENT_INSERT */ NULL, /* generated in contentInsertStatement() */
+ /* CONTENT_SELECT */ "select * from %_content where rowid = ?",
+ /* CONTENT_UPDATE */ NULL, /* generated in contentUpdateStatement() */
+ /* CONTENT_DELETE */ "delete from %_content where rowid = ?",
+
+ /* TERM_SELECT */
+ "select rowid, doclist from %_term where term = ? and segment = ?",
+ /* TERM_SELECT_ALL */
+ "select doclist from %_term where term = ? order by segment",
+ /* TERM_INSERT */
+ "insert into %_term (rowid, term, segment, doclist) values (?, ?, ?, ?)",
+ /* TERM_UPDATE */ "update %_term set doclist = ? where rowid = ?",
+ /* TERM_DELETE */ "delete from %_term where rowid = ?",
+};
+
+/*
+** A connection to a fulltext index is an instance of the following
+** structure. The xCreate and xConnect methods create an instance
+** of this structure and xDestroy and xDisconnect free that instance.
+** All other methods receive a pointer to the structure as one of their
+** arguments.
+*/
+struct fulltext_vtab {
+ sqlite3_vtab base; /* Base class used by SQLite core */
+ sqlite3 *db; /* The database connection */
+ const char *zName; /* virtual table name */
+ int nColumn; /* number of columns in virtual table */
+ char **azColumn; /* column names. malloced */
+ char **azContentColumn; /* column names in content table; malloced */
+ sqlite3_tokenizer *pTokenizer; /* tokenizer for inserts and queries */
+
+ /* Precompiled statements which we keep as long as the table is
+ ** open.
+ */
+ sqlite3_stmt *pFulltextStatements[MAX_STMT];
+};
+
+/*
+** When the core wants to do a query, it creates a cursor using a
+** call to xOpen. This structure is an instance of a cursor. It
+** is destroyed by xClose.
+*/
+typedef struct fulltext_cursor {
+ sqlite3_vtab_cursor base; /* Base class used by SQLite core */
+ QueryType iCursorType; /* Copy of sqlite3_index_info.idxNum */
+ sqlite3_stmt *pStmt; /* Prepared statement in use by the cursor */
+ int eof; /* True if at End Of Results */
+ Query q; /* Parsed query string */
+ Snippet snippet; /* Cached snippet for the current row */
+ int iColumn; /* Column being searched */
+ DocListReader result; /* used when iCursorType == QUERY_FULLTEXT */
+} fulltext_cursor;
+
+static struct fulltext_vtab *cursor_vtab(fulltext_cursor *c){
+ return (fulltext_vtab *) c->base.pVtab;
+}
+
+static const sqlite3_module fulltextModule; /* forward declaration */
+
+/* Append a list of strings separated by commas to a StringBuffer. */
+static void appendList(StringBuffer *sb, int nString, char **azString){
+ int i;
+ for(i=0; i<nString; ++i){
+ if( i>0 ) append(sb, ", ");
+ append(sb, azString[i]);
+ }
+}
+
+/* Return a dynamically generated statement of the form
+ * insert into %_content (rowid, ...) values (?, ...)
+ */
+static const char *contentInsertStatement(fulltext_vtab *v){
+ StringBuffer sb;
+ int i;
+
+ initStringBuffer(&sb);
+ append(&sb, "insert into %_content (rowid, ");
+ appendList(&sb, v->nColumn, v->azContentColumn);
+ append(&sb, ") values (?");
+ for(i=0; i<v->nColumn; ++i)
+ append(&sb, ", ?");
+ append(&sb, ")");
+ return sb.s;
+}
+
+/* Return a dynamically generated statement of the form
+ * update %_content set [col_0] = ?, [col_1] = ?, ...
+ * where rowid = ?
+ */
+static const char *contentUpdateStatement(fulltext_vtab *v){
+ StringBuffer sb;
+ int i;
+
+ initStringBuffer(&sb);
+ append(&sb, "update %_content set ");
+ for(i=0; i<v->nColumn; ++i) {
+ if( i>0 ){
+ append(&sb, ", ");
+ }
+ append(&sb, v->azContentColumn[i]);
+ append(&sb, " = ?");
+ }
+ append(&sb, " where rowid = ?");
+ return sb.s;
+}
+
+/* Puts a freshly-prepared statement determined by iStmt in *ppStmt.
+** If the indicated statement has never been prepared, it is prepared
+** and cached, otherwise the cached version is reset.
+*/
+static int sql_get_statement(fulltext_vtab *v, fulltext_statement iStmt,
+ sqlite3_stmt **ppStmt){
+ assert( iStmt<MAX_STMT );
+ if( v->pFulltextStatements[iStmt]==NULL ){
+ const char *zStmt;
+ int rc;
+ switch( iStmt ){
+ case CONTENT_INSERT_STMT:
+ zStmt = contentInsertStatement(v); break;
+ case CONTENT_UPDATE_STMT:
+ zStmt = contentUpdateStatement(v); break;
+ default:
+ zStmt = fulltext_zStatement[iStmt];
+ }
+ rc = sql_prepare(v->db, v->zName, &v->pFulltextStatements[iStmt],
+ zStmt);
+ if( zStmt != fulltext_zStatement[iStmt]) free((void *) zStmt);
+ if( rc!=SQLITE_OK ) return rc;
+ } else {
+ int rc = sqlite3_reset(v->pFulltextStatements[iStmt]);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+
+ *ppStmt = v->pFulltextStatements[iStmt];
+ return SQLITE_OK;
+}
+
+/* Step the indicated statement, handling errors SQLITE_BUSY (by
+** retrying) and SQLITE_SCHEMA (by re-preparing and transferring
+** bindings to the new statement).
+** TODO(adam): We should extend this function so that it can work with
+** statements declared locally, not only globally cached statements.
+*/
+static int sql_step_statement(fulltext_vtab *v, fulltext_statement iStmt,
+ sqlite3_stmt **ppStmt){
+ int rc;
+ sqlite3_stmt *s = *ppStmt;
+ assert( iStmt<MAX_STMT );
+ assert( s==v->pFulltextStatements[iStmt] );
+
+ while( (rc=sqlite3_step(s))!=SQLITE_DONE && rc!=SQLITE_ROW ){
+ sqlite3_stmt *pNewStmt;
+
+ if( rc==SQLITE_BUSY ) continue;
+ if( rc!=SQLITE_ERROR ) return rc;
+
+ rc = sqlite3_reset(s);
+ if( rc!=SQLITE_SCHEMA ) return SQLITE_ERROR;
+
+ v->pFulltextStatements[iStmt] = NULL; /* Still in s */
+ rc = sql_get_statement(v, iStmt, &pNewStmt);
+ if( rc!=SQLITE_OK ) goto err;
+ *ppStmt = pNewStmt;
+
+ rc = sqlite3_transfer_bindings(s, pNewStmt);
+ if( rc!=SQLITE_OK ) goto err;
+
+ rc = sqlite3_finalize(s);
+ if( rc!=SQLITE_OK ) return rc;
+ s = pNewStmt;
+ }
+ return rc;
+
+ err:
+ sqlite3_finalize(s);
+ return rc;
+}
+
+/* Like sql_step_statement(), but convert SQLITE_DONE to SQLITE_OK.
+** Useful for statements like UPDATE, where we expect no results.
+*/
+static int sql_single_step_statement(fulltext_vtab *v,
+ fulltext_statement iStmt,
+ sqlite3_stmt **ppStmt){
+ int rc = sql_step_statement(v, iStmt, ppStmt);
+ return (rc==SQLITE_DONE) ? SQLITE_OK : rc;
+}
+
+/* insert into %_content (rowid, ...) values ([rowid], [pValues]) */
+static int content_insert(fulltext_vtab *v, sqlite3_value *rowid,
+ sqlite3_value **pValues){
+ sqlite3_stmt *s;
+ int i;
+ int rc = sql_get_statement(v, CONTENT_INSERT_STMT, &s);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_value(s, 1, rowid);
+ if( rc!=SQLITE_OK ) return rc;
+
+ for(i=0; i<v->nColumn; ++i){
+ rc = sqlite3_bind_value(s, 2+i, pValues[i]);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+
+ return sql_single_step_statement(v, CONTENT_INSERT_STMT, &s);
+}
+
+/* update %_content set col0 = pValues[0], col1 = pValues[1], ...
+ * where rowid = [iRowid] */
+static int content_update(fulltext_vtab *v, sqlite3_value **pValues,
+ sqlite_int64 iRowid){
+ sqlite3_stmt *s;
+ int i;
+ int rc = sql_get_statement(v, CONTENT_UPDATE_STMT, &s);
+ if( rc!=SQLITE_OK ) return rc;
+
+ for(i=0; i<v->nColumn; ++i){
+ rc = sqlite3_bind_value(s, 1+i, pValues[i]);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+
+ rc = sqlite3_bind_int64(s, 1+v->nColumn, iRowid);
+ if( rc!=SQLITE_OK ) return rc;
+
+ return sql_single_step_statement(v, CONTENT_UPDATE_STMT, &s);
+}
+
+void freeStringArray(int nString, const char **pString){
+ int i;
+
+ for (i=0 ; i < nString ; ++i) {
+ free((void *) pString[i]);
+ }
+ free((void *) pString);
+}
+
+/* select * from %_content where rowid = [iRow]
+ * The caller must delete the returned array and all strings in it.
+ *
+ * TODO: Perhaps we should return pointer/length strings here for consistency
+ * with other code which uses pointer/length. */
+static int content_select(fulltext_vtab *v, sqlite_int64 iRow,
+ const char ***pValues){
+ sqlite3_stmt *s;
+ const char **values;
+ int i;
+ int rc;
+
+ *pValues = NULL;
+
+ rc = sql_get_statement(v, CONTENT_SELECT_STMT, &s);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_int64(s, 1, iRow);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sql_step_statement(v, CONTENT_SELECT_STMT, &s);
+ if( rc!=SQLITE_ROW ) return rc;
+
+ values = (const char **) malloc(v->nColumn * sizeof(const char *));
+ for(i=0; i<v->nColumn; ++i){
+ values[i] = string_dup((char*)sqlite3_column_text(s, i));
+ }
+
+ /* We expect only one row. We must execute another sqlite3_step()
+ * to complete the iteration; otherwise the table will remain locked. */
+ rc = sqlite3_step(s);
+ if( rc==SQLITE_DONE ){
+ *pValues = values;
+ return SQLITE_OK;
+ }
+
+ freeStringArray(v->nColumn, values);
+ return rc;
+}
+
+/* delete from %_content where rowid = [iRow] */
+static int content_delete(fulltext_vtab *v, sqlite_int64 iRow){
+ sqlite3_stmt *s;
+ int rc = sql_get_statement(v, CONTENT_DELETE_STMT, &s);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_int64(s, 1, iRow);
+ if( rc!=SQLITE_OK ) return rc;
+
+ return sql_single_step_statement(v, CONTENT_DELETE_STMT, &s);
+}
+
+/* select rowid, doclist from %_term
+ * where term = [pTerm] and segment = [iSegment]
+ * If found, returns SQLITE_ROW; the caller must free the
+ * returned doclist. If no rows found, returns SQLITE_DONE. */
+static int term_select(fulltext_vtab *v, const char *pTerm, int nTerm,
+ int iSegment,
+ sqlite_int64 *rowid, DocList *out){
+ sqlite3_stmt *s;
+ int rc = sql_get_statement(v, TERM_SELECT_STMT, &s);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_text(s, 1, pTerm, nTerm, SQLITE_STATIC);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_int(s, 2, iSegment);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sql_step_statement(v, TERM_SELECT_STMT, &s);
+ if( rc!=SQLITE_ROW ) return rc;
+
+ *rowid = sqlite3_column_int64(s, 0);
+ docListInit(out, DL_DEFAULT,
+ sqlite3_column_blob(s, 1), sqlite3_column_bytes(s, 1));
+
+ /* We expect only one row. We must execute another sqlite3_step()
+ * to complete the iteration; otherwise the table will remain locked. */
+ rc = sqlite3_step(s);
+ return rc==SQLITE_DONE ? SQLITE_ROW : rc;
+}
+
+/* Load the segment doclists for term pTerm and merge them in
+** appropriate order into out. Returns SQLITE_OK if successful. If
+** there are no segments for pTerm, successfully returns an empty
+** doclist in out.
+**
+** Each document consists of 1 or more "columns". The number of
+** columns is v->nColumn. If iColumn==v->nColumn, then return
+** position information about all columns. If iColumn<v->nColumn,
+** then only return position information about the iColumn-th column
+** (where the first column is 0).
+*/
+static int term_select_all(
+ fulltext_vtab *v, /* The fulltext index we are querying against */
+ int iColumn, /* If <nColumn, only look at the iColumn-th column */
+ const char *pTerm, /* The term whose posting lists we want */
+ int nTerm, /* Number of bytes in pTerm */
+ DocList *out /* Write the resulting doclist here */
+){
+ DocList doclist;
+ sqlite3_stmt *s;
+ int rc = sql_get_statement(v, TERM_SELECT_ALL_STMT, &s);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_text(s, 1, pTerm, nTerm, SQLITE_STATIC);
+ if( rc!=SQLITE_OK ) return rc;
+
+ docListInit(&doclist, DL_DEFAULT, 0, 0);
+
+ /* TODO(shess) Handle schema and busy errors. */
+ while( (rc=sql_step_statement(v, TERM_SELECT_ALL_STMT, &s))==SQLITE_ROW ){
+ DocList old;
+
+ /* TODO(shess) If we processed doclists from oldest to newest, we
+ ** could skip the malloc() involved with the following call. For
+ ** now, I'd rather keep this logic similar to index_insert_term().
+ ** We could additionally drop elements when we see deletes, but
+ ** that would require a distinct version of docListAccumulate().
+ */
+ docListInit(&old, DL_DEFAULT,
+ sqlite3_column_blob(s, 0), sqlite3_column_bytes(s, 0));
+
+ if( iColumn<v->nColumn ){ /* querying a single column */
+ docListRestrictColumn(&old, iColumn);
+ }
+
+ /* doclist contains the newer data, so write it over old. Then
+ ** steal accumulated result for doclist.
+ */
+ docListAccumulate(&old, &doclist);
+ docListDestroy(&doclist);
+ doclist = old;
+ }
+ if( rc!=SQLITE_DONE ){
+ docListDestroy(&doclist);
+ return rc;
+ }
+
+ docListDiscardEmpty(&doclist);
+ *out = doclist;
+ return SQLITE_OK;
+}
+
+/* insert into %_term (rowid, term, segment, doclist)
+ values ([piRowid], [pTerm], [iSegment], [doclist])
+** Lets sqlite select rowid if piRowid is NULL, else uses *piRowid.
+**
+** NOTE(shess) piRowid is an IN parameter whose value is either an
+** int64 or NULL; it is not used to pass data back to the caller.
+*/
+static int term_insert(fulltext_vtab *v, sqlite_int64 *piRowid,
+ const char *pTerm, int nTerm,
+ int iSegment, DocList *doclist){
+ sqlite3_stmt *s;
+ int rc = sql_get_statement(v, TERM_INSERT_STMT, &s);
+ if( rc!=SQLITE_OK ) return rc;
+
+ if( piRowid==NULL ){
+ rc = sqlite3_bind_null(s, 1);
+ }else{
+ rc = sqlite3_bind_int64(s, 1, *piRowid);
+ }
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_text(s, 2, pTerm, nTerm, SQLITE_STATIC);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_int(s, 3, iSegment);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_blob(s, 4, doclist->pData, doclist->nData, SQLITE_STATIC);
+ if( rc!=SQLITE_OK ) return rc;
+
+ return sql_single_step_statement(v, TERM_INSERT_STMT, &s);
+}
+
+/* update %_term set doclist = [doclist] where rowid = [rowid] */
+static int term_update(fulltext_vtab *v, sqlite_int64 rowid,
+ DocList *doclist){
+ sqlite3_stmt *s;
+ int rc = sql_get_statement(v, TERM_UPDATE_STMT, &s);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_blob(s, 1, doclist->pData, doclist->nData, SQLITE_STATIC);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_int64(s, 2, rowid);
+ if( rc!=SQLITE_OK ) return rc;
+
+ return sql_single_step_statement(v, TERM_UPDATE_STMT, &s);
+}
+
+static int term_delete(fulltext_vtab *v, sqlite_int64 rowid){
+ sqlite3_stmt *s;
+ int rc = sql_get_statement(v, TERM_DELETE_STMT, &s);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3_bind_int64(s, 1, rowid);
+ if( rc!=SQLITE_OK ) return rc;
+
+ return sql_single_step_statement(v, TERM_DELETE_STMT, &s);
+}
+
+/*
+** Free the memory used to contain a fulltext_vtab structure.
+*/
+static void fulltext_vtab_destroy(fulltext_vtab *v){
+ int iStmt, i;
+
+ TRACE(("FTS1 Destroy %p\n", v));
+ for( iStmt=0; iStmt<MAX_STMT; iStmt++ ){
+ if( v->pFulltextStatements[iStmt]!=NULL ){
+ sqlite3_finalize(v->pFulltextStatements[iStmt]);
+ v->pFulltextStatements[iStmt] = NULL;
+ }
+ }
+
+ if( v->pTokenizer!=NULL ){
+ v->pTokenizer->pModule->xDestroy(v->pTokenizer);
+ v->pTokenizer = NULL;
+ }
+
+ free(v->azColumn);
+ for(i = 0; i < v->nColumn; ++i) {
+ sqlite3_free(v->azContentColumn[i]);
+ }
+ free(v->azContentColumn);
+ free(v);
+}
+
+/*
+** Token types for parsing the arguments to xConnect or xCreate.
+*/
+#define TOKEN_EOF 0 /* End of file */
+#define TOKEN_SPACE 1 /* Any kind of whitespace */
+#define TOKEN_ID 2 /* An identifier */
+#define TOKEN_STRING 3 /* A string literal */
+#define TOKEN_PUNCT 4 /* A single punctuation character */
+
+/*
+** If X is a character that can be used in an identifier then
+** IdChar(X) will be true. Otherwise it is false.
+**
+** For ASCII, any character with the high-order bit set is
+** allowed in an identifier. For 7-bit characters,
+** sqlite3IsIdChar[X] must be 1.
+**
+** Ticket #1066.  The SQL standard does not allow '$' in the
+** middle of identifiers.  But many SQL implementations do.
+** SQLite will allow '$' in identifiers for compatibility.
+** But the feature is undocumented.
+*/
+static const char isIdChar[] = {
+/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */
+ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 2x */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 3x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 4x */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, /* 5x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 6x */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 7x */
+};
+#define IdChar(C) (((c=C)&0x80)!=0 || (c>0x1f && isIdChar[c-0x20]))
+
+
+/*
+** Return the length of the token that begins at z[0].
+** Store the token type in *tokenType before returning.
+*/
+static int getToken(const char *z, int *tokenType){
+ int i, c;
+ switch( *z ){
+ case 0: {
+ *tokenType = TOKEN_EOF;
+ return 0;
+ }
+ case ' ': case '\t': case '\n': case '\f': case '\r': {
+ for(i=1; isspace(z[i]); i++){}
+ *tokenType = TOKEN_SPACE;
+ return i;
+ }
+ case '\'':
+ case '"': {
+ int delim = z[0];
+ for(i=1; (c=z[i])!=0; i++){
+ if( c==delim ){
+ if( z[i+1]==delim ){
+ i++;
+ }else{
+ break;
+ }
+ }
+ }
+ *tokenType = TOKEN_STRING;
+ return i + (c!=0);
+ }
+ case '[': {
+ for(i=1, c=z[0]; c!=']' && (c=z[i])!=0; i++){}
+ *tokenType = TOKEN_ID;
+ return i;
+ }
+ default: {
+ if( !IdChar(*z) ){
+ break;
+ }
+ for(i=1; IdChar(z[i]); i++){}
+ *tokenType = TOKEN_ID;
+ return i;
+ }
+ }
+ *tokenType = TOKEN_PUNCT;
+ return 1;
+}
+
+/*
+** A token extracted from a string is an instance of the following
+** structure.
+*/
+typedef struct Token {
+ const char *z; /* Pointer to token text. Not '\000' terminated */
+ short int n; /* Length of the token text in bytes. */
+} Token;
+
+/*
+** Given an input string (which is really one of the argv[] parameters
+** passed into xConnect or xCreate) split the string up into tokens.
+** Return an array of pointers to '\000' terminated strings, one string
+** for each non-whitespace token.
+**
+** The returned array is terminated by a single NULL pointer.
+**
+** Space to hold the returned array is obtained from a single
+** malloc and should be freed by passing the return value to free().
+** The individual strings within the token list are all a part of
+** the single memory allocation and will all be freed at once.
+*/
+static char **tokenizeString(const char *z, int *pnToken){
+ int nToken = 0;
+ Token *aToken = malloc( strlen(z) * sizeof(aToken[0]) );
+ int n = 1;
+ int e, i;
+ int totalSize = 0;
+ char **azToken;
+ char *zCopy;
+ while( n>0 ){
+ n = getToken(z, &e);
+ if( e!=TOKEN_SPACE ){
+ aToken[nToken].z = z;
+ aToken[nToken].n = n;
+ nToken++;
+ totalSize += n+1;
+ }
+ z += n;
+ }
+ azToken = (char**)malloc( nToken*sizeof(char*) + totalSize );
+ zCopy = (char*)&azToken[nToken];
+ nToken--;
+ for(i=0; i<nToken; i++){
+ azToken[i] = zCopy;
+ n = aToken[i].n;
+ memcpy(zCopy, aToken[i].z, n);
+ zCopy[n] = 0;
+ zCopy += n+1;
+ }
+ azToken[nToken] = 0;
+ free(aToken);
+ *pnToken = nToken;
+ return azToken;
+}
+
+/*
+** Convert an SQL-style quoted string into a normal string by removing
+** the quote characters. The conversion is done in-place. If the
+** input does not begin with a quote character, then this routine
+** is a no-op.
+**
+** Examples:
+**
+** "abc" becomes abc
+** 'xyz' becomes xyz
+** [pqr] becomes pqr
+** `mno` becomes mno
+*/
+void dequoteString(char *z){
+ int quote;
+ int i, j;
+ if( z==0 ) return;
+ quote = z[0];
+ switch( quote ){
+ case '\'': break;
+ case '"': break;
+ case '`': break; /* For MySQL compatibility */
+ case '[': quote = ']'; break; /* For MS SqlServer compatibility */
+ default: return;
+ }
+ for(i=1, j=0; z[i]; i++){
+ if( z[i]==quote ){
+ if( z[i+1]==quote ){
+ z[j++] = quote;
+ i++;
+ }else{
+ z[j++] = 0;
+ break;
+ }
+ }else{
+ z[j++] = z[i];
+ }
+ }
+}
+
+/*
+** The input azIn is a NULL-terminated list of tokens. Remove the first
+** token and all punctuation tokens. Remove the quotes from
+** around string literal tokens.
+**
+** Example:
+**
+** input: tokenize chinese ( 'simplifed' , 'mixed' )
+** output: chinese simplifed mixed
+**
+** Another example:
+**
+** input: delimiters ( '[' , ']' , '...' )
+** output: [ ] ...
+*/
+void tokenListToIdList(char **azIn){
+ int i, j;
+ if( azIn ){
+ for(i=0, j=-1; azIn[i]; i++){
+ if( isalnum(azIn[i][0]) || azIn[i][1] ){
+ dequoteString(azIn[i]);
+ if( j>=0 ){
+ azIn[j] = azIn[i];
+ }
+ j++;
+ }
+ }
+ azIn[j] = 0;
+ }
+}
+
+
+/*
+** Find the first alphanumeric token in the string zIn. Null-terminate
+** this token, remove any quotation marks, and return a pointer to
+** the result.
+*/
+static char *firstToken(char *zIn, char **pzTail){
+ int i, n, ttype;
+ i = 0;
+ while(1){
+ n = getToken(zIn, &ttype);
+ if( ttype==TOKEN_SPACE ){
+ zIn += n;
+ }else if( ttype==TOKEN_EOF ){
+ *pzTail = zIn;
+ return 0;
+ }else{
+ zIn[n] = 0;
+ *pzTail = &zIn[1];
+ dequoteString(zIn);
+ return zIn;
+ }
+ }
+ /*NOTREACHED*/
+}
+
+/* Return true if...
+**
+** * s begins with the string t, ignoring case
+** * s is longer than t
+**   *  The first character of s beyond t is not alphanumeric
+**
+** Ignore leading space in *s.
+**
+** To put it another way, return true if the first token of
+** s[] is t[].
+*/
+static int startsWith(const char *s, const char *t){
+ while( isspace(*s) ){ s++; }
+ while( *t ){
+ if( tolower(*s++)!=tolower(*t++) ) return 0;
+ }
+ return *s!='_' && !isalnum(*s);
+}
+
+/*
+** An instance of this structure defines the "spec" of a
+** full text index. This structure is populated by parseSpec
+** and used by fulltextConnect and fulltextCreate.
+*/
+typedef struct TableSpec {
+ const char *zName; /* Name of the full-text index */
+ int nColumn; /* Number of columns to be indexed */
+ char **azColumn; /* Original names of columns to be indexed */
+ char **azContentColumn; /* Column names for %_content */
+ char **azTokenizer; /* Name of tokenizer and its arguments */
+} TableSpec;
+
+/*
+** Reclaim all of the memory used by a TableSpec
+*/
+void clearTableSpec(TableSpec *p) {
+ free(p->azColumn);
+ free(p->azContentColumn);
+ free(p->azTokenizer);
+}
+
+/* Parse a CREATE VIRTUAL TABLE statement, which looks like this:
+ *
+ * CREATE VIRTUAL TABLE email
+ * USING fts1(subject, body, tokenize mytokenizer(myarg))
+ *
+ * We return parsed information in a TableSpec structure.
+ *
+ */
+int parseSpec(TableSpec *pSpec, int argc, const char *const*argv, char**pzErr){
+ int i, j, n;
+ char *z, *zDummy;
+ char **azArg;
+ const char *zTokenizer = 0; /* argv[] entry describing the tokenizer */
+
+ assert( argc>=3 );
+ /* Current interface:
+ ** argv[0] - module name
+ ** argv[1] - database name
+ ** argv[2] - table name
+ ** argv[3..] - columns, optionally followed by tokenizer specification
+ ** and snippet delimiters specification.
+ */
+
+ /* Make a copy of the complete argv[][] array in a single allocation.
+ ** The argv[][] array is read-only and transient. We can write to the
+ ** copy in order to modify things and the copy is persistent.
+ */
+ memset(pSpec, 0, sizeof(*pSpec));
+ for(i=n=0; i<argc; i++){
+ n += strlen(argv[i]) + 1;
+ }
+ azArg = malloc( sizeof(char*)*argc + n );
+ if( azArg==0 ){
+ return SQLITE_NOMEM;
+ }
+ z = (char*)&azArg[argc];
+ for(i=0; i<argc; i++){
+ azArg[i] = z;
+ strcpy(z, argv[i]);
+ z += strlen(z)+1;
+ }
+
+ /* Identify the column names and the tokenizer and delimiter arguments
+ ** in the argv[][] array.
+ */
+ pSpec->zName = azArg[2];
+ pSpec->nColumn = 0;
+ pSpec->azColumn = azArg;
+ zTokenizer = "tokenize simple";
+ for(i=3, j=0; i<argc; ++i){
+ if( startsWith(azArg[i],"tokenize") ){
+ zTokenizer = azArg[i];
+ }else{
+ z = azArg[pSpec->nColumn] = firstToken(azArg[i], &zDummy);
+ pSpec->nColumn++;
+ }
+ }
+ if( pSpec->nColumn==0 ){
+ azArg[0] = "content";
+ pSpec->nColumn = 1;
+ }
+
+ /*
+ ** Construct the list of content column names.
+ **
+ ** Each content column name will be of the form cNNAAAA
+ ** where NN is the column number and AAAA is the sanitized
+ ** column name. "sanitized" means that special characters are
+ ** converted to "_". The cNN prefix guarantees that all column
+ ** names are unique.
+ **
+ ** The AAAA suffix is not strictly necessary. It is included
+ ** for the convenience of people who might examine the generated
+ ** %_content table and wonder what the columns are used for.
+ */
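+  /* For example (illustrative names only): with indexed columns
+  ** ("subject", "user-id"), the generated %_content column names would
+  ** be "c0subject" and "c1user_id"; the '-' is not alphanumeric and is
+  ** rewritten to '_' by the loop below. */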
+ pSpec->azContentColumn = malloc( pSpec->nColumn * sizeof(char *) );
+ if( pSpec->azContentColumn==0 ){
+ clearTableSpec(pSpec);
+ return SQLITE_NOMEM;
+ }
+ for(i=0; i<pSpec->nColumn; i++){
+ char *p;
+ pSpec->azContentColumn[i] = sqlite3_mprintf("c%d%s", i, azArg[i]);
+ for (p = pSpec->azContentColumn[i]; *p ; ++p) {
+ if( !isalnum(*p) ) *p = '_';
+ }
+ }
+
+ /*
+ ** Parse the tokenizer specification string.
+ */
+ pSpec->azTokenizer = tokenizeString(zTokenizer, &n);
+ tokenListToIdList(pSpec->azTokenizer);
+
+ return SQLITE_OK;
+}
+
+/*
+** Generate a CREATE TABLE statement that describes the schema of
+** the virtual table. Return a pointer to this schema string.
+**
+** Space is obtained from sqlite3_mprintf() and should be freed
+** using sqlite3_free().
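+**
+** For example (hypothetical names only): with nColumn==2, azColumn
+** {"subject","body"} and zTableName "email", the generated schema
+** string would be:
+**
+**     CREATE TABLE x('subject','body','email')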
+*/
+static char *fulltextSchema(
+ int nColumn, /* Number of columns */
+ const char *const* azColumn, /* List of columns */
+ const char *zTableName /* Name of the table */
+){
+ int i;
+ char *zSchema, *zNext;
+ const char *zSep = "(";
+ zSchema = sqlite3_mprintf("CREATE TABLE x");
+ for(i=0; i<nColumn; i++){
+ zNext = sqlite3_mprintf("%s%s%Q", zSchema, zSep, azColumn[i]);
+ sqlite3_free(zSchema);
+ zSchema = zNext;
+ zSep = ",";
+ }
+ zNext = sqlite3_mprintf("%s,%Q)", zSchema, zTableName);
+ sqlite3_free(zSchema);
+ return zNext;
+}
+
+/*
+** Build a new sqlite3_vtab structure that will describe the
+** fulltext index defined by spec.
+*/
+static int constructVtab(
+ sqlite3 *db, /* The SQLite database connection */
+ TableSpec *spec, /* Parsed spec information from parseSpec() */
+ sqlite3_vtab **ppVTab, /* Write the resulting vtab structure here */
+ char **pzErr /* Write any error message here */
+){
+ int rc;
+ int n;
+ fulltext_vtab *v = 0;
+ const sqlite3_tokenizer_module *m = NULL;
+ char *schema;
+
+ v = (fulltext_vtab *) malloc(sizeof(fulltext_vtab));
+ if( v==0 ) return SQLITE_NOMEM;
+ memset(v, 0, sizeof(*v));
+ /* sqlite will initialize v->base */
+ v->db = db;
+ v->zName = spec->zName; /* Freed when azColumn is freed */
+ v->nColumn = spec->nColumn;
+ v->azContentColumn = spec->azContentColumn;
+ spec->azContentColumn = 0;
+ v->azColumn = spec->azColumn;
+ spec->azColumn = 0;
+
+ if( spec->azTokenizer==0 ){
+ return SQLITE_NOMEM;
+ }
+ /* TODO(shess) For now, add new tokenizers as else if clauses. */
+ if( spec->azTokenizer[0]==0 || startsWith(spec->azTokenizer[0], "simple") ){
+ sqlite3Fts1SimpleTokenizerModule(&m);
+ }else if( startsWith(spec->azTokenizer[0], "porter") ){
+ sqlite3Fts1PorterTokenizerModule(&m);
+ }else{
+ *pzErr = sqlite3_mprintf("unknown tokenizer: %s", spec->azTokenizer[0]);
+ rc = SQLITE_ERROR;
+ goto err;
+ }
+ for(n=0; spec->azTokenizer[n]; n++){}
+ if( n ){
+ rc = m->xCreate(n-1, (const char*const*)&spec->azTokenizer[1],
+ &v->pTokenizer);
+ }else{
+ rc = m->xCreate(0, 0, &v->pTokenizer);
+ }
+ if( rc!=SQLITE_OK ) goto err;
+ v->pTokenizer->pModule = m;
+
+ /* TODO: verify the existence of backing tables foo_content, foo_term */
+
+ schema = fulltextSchema(v->nColumn, (const char*const*)v->azColumn,
+ spec->zName);
+ rc = sqlite3_declare_vtab(db, schema);
+ sqlite3_free(schema);
+ if( rc!=SQLITE_OK ) goto err;
+
+ memset(v->pFulltextStatements, 0, sizeof(v->pFulltextStatements));
+
+ *ppVTab = &v->base;
+ TRACE(("FTS1 Connect %p\n", v));
+
+ return rc;
+
+err:
+ fulltext_vtab_destroy(v);
+ return rc;
+}
+
+static int fulltextConnect(
+ sqlite3 *db,
+ void *pAux,
+ int argc, const char *const*argv,
+ sqlite3_vtab **ppVTab,
+ char **pzErr
+){
+ TableSpec spec;
+ int rc = parseSpec(&spec, argc, argv, pzErr);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = constructVtab(db, &spec, ppVTab, pzErr);
+ clearTableSpec(&spec);
+ return rc;
+}
+
+ /* The %_content table holds the text of each document, with
+ ** the rowid used as the docid.
+ **
+ ** The %_term table maps each term to a document list blob
+ ** containing elements sorted by ascending docid, each element
+ ** encoded as:
+ **
+ ** docid varint-encoded
+ ** token elements:
+ ** position+1 varint-encoded as delta from previous position
+ ** start offset varint-encoded as delta from previous start offset
+ ** end offset varint-encoded as delta from start offset
+ **
+ ** The sentinel position of 0 indicates the end of the token list.
+ **
+ ** Additionally, doclist blobs are chunked into multiple segments,
+ ** using segment to order the segments. New elements are added to
+ ** the segment at segment 0, until it exceeds CHUNK_MAX. Then
+ ** segment 0 is deleted, and the doclist is inserted at segment 1.
+ ** If there is already a doclist at segment 1, the segment 0 doclist
+ ** is merged with it, the segment 1 doclist is deleted, and the
+ ** merged doclist is inserted at segment 2, repeating those
+ ** operations until an insert succeeds.
+ **
+ ** Since this structure doesn't allow us to update elements in place
+ ** in case of deletion or update, these are simply written to
+ ** segment 0 (with an empty token list in case of deletion), with
+ ** docListAccumulate() taking care to retain lower-segment
+ ** information in preference to higher-segment information.
+ */
+ /* TODO(shess) Provide a VACUUM type operation which both removes
+ ** deleted elements which are no longer necessary, and duplicated
+ ** elements. I suspect this will probably not be necessary in
+ ** practice, though.
+ */
+static int fulltextCreate(sqlite3 *db, void *pAux,
+ int argc, const char * const *argv,
+ sqlite3_vtab **ppVTab, char **pzErr){
+ int rc;
+ TableSpec spec;
+ StringBuffer schema;
+ TRACE(("FTS1 Create\n"));
+
+ rc = parseSpec(&spec, argc, argv, pzErr);
+ if( rc!=SQLITE_OK ) return rc;
+
+ initStringBuffer(&schema);
+ append(&schema, "CREATE TABLE %_content(");
+ appendList(&schema, spec.nColumn, spec.azContentColumn);
+ append(&schema, ")");
+ rc = sql_exec(db, spec.zName, schema.s);
+ free(schema.s);
+ if( rc!=SQLITE_OK ) goto out;
+
+ rc = sql_exec(db, spec.zName,
+ "create table %_term(term text, segment integer, doclist blob, "
+ "primary key(term, segment));");
+ if( rc!=SQLITE_OK ) goto out;
+
+ rc = constructVtab(db, &spec, ppVTab, pzErr);
+
+out:
+ clearTableSpec(&spec);
+ return rc;
+}
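Illustrative sketch, not part of the committed file: for a hypothetical
declaration CREATE VIRTUAL TABLE t1 USING fts1(a, b), the two sql_exec()
calls above create backing tables along these lines (the %_ prefix is
expanded to the virtual table's name):

    CREATE TABLE t1_content(...);   -- one column per declared column;
                                    -- rowid doubles as the docid
    CREATE TABLE t1_term(term TEXT, segment INTEGER, doclist BLOB,
                         PRIMARY KEY(term, segment));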
+
+/* Decide how to handle an SQL query. */
+static int fulltextBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
+ int i;
+
+ for(i=0; i<pInfo->nConstraint; ++i){
+ const struct sqlite3_index_constraint *pConstraint;
+ pConstraint = &pInfo->aConstraint[i];
+ if( pConstraint->usable ) {
+ if( pConstraint->iColumn==-1 &&
+ pConstraint->op==SQLITE_INDEX_CONSTRAINT_EQ ){
+ pInfo->idxNum = QUERY_ROWID; /* lookup by rowid */
+ } else if( pConstraint->iColumn>=0 &&
+ pConstraint->op==SQLITE_INDEX_CONSTRAINT_MATCH ){
+ /* full-text search */
+ pInfo->idxNum = QUERY_FULLTEXT + pConstraint->iColumn;
+ } else continue;
+
+ pInfo->aConstraintUsage[i].argvIndex = 1;
+ pInfo->aConstraintUsage[i].omit = 1;
+
+ /* An arbitrary value for now.
+ * TODO: Perhaps rowid matches should be considered cheaper than
+ * full-text searches. */
+ pInfo->estimatedCost = 1.0;
+
+ return SQLITE_OK;
+ }
+ }
+ pInfo->idxNum = QUERY_GENERIC;
+ TRACE(("FTS1 BestIndex\n"));
+ return SQLITE_OK;
+}
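Illustrative sketch, not part of the committed file: for a hypothetical
table t1(a, b), the planner hook above chooses roughly

    SELECT * FROM t1 WHERE rowid = 5;      ->  idxNum = QUERY_ROWID
    SELECT * FROM t1 WHERE b MATCH 'x';    ->  idxNum = QUERY_FULLTEXT + 1
    SELECT * FROM t1;                      ->  idxNum = QUERY_GENERIC

and in the first two cases the constraint value reaches fulltextFilter()
as argv[0], because argvIndex is set to 1.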
+
+static int fulltextDisconnect(sqlite3_vtab *pVTab){
+ TRACE(("FTS1 Disconnect %p\n", pVTab));
+ fulltext_vtab_destroy((fulltext_vtab *)pVTab);
+ return SQLITE_OK;
+}
+
+static int fulltextDestroy(sqlite3_vtab *pVTab){
+ fulltext_vtab *v = (fulltext_vtab *)pVTab;
+ int rc;
+
+ TRACE(("FTS1 Destroy %p\n", pVTab));
+ rc = sql_exec(v->db, v->zName,
+ "drop table %_content; drop table %_term");
+ if( rc!=SQLITE_OK ) return rc;
+
+ fulltext_vtab_destroy((fulltext_vtab *)pVTab);
+ return SQLITE_OK;
+}
+
+static int fulltextOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor){
+ fulltext_cursor *c;
+
+ c = (fulltext_cursor *) calloc(sizeof(fulltext_cursor), 1);
+ /* sqlite will initialize c->base */
+ *ppCursor = &c->base;
+ TRACE(("FTS1 Open %p: %p\n", pVTab, c));
+
+ return SQLITE_OK;
+}
+
+
+/* Free all of the dynamically allocated memory held by *q
+*/
+static void queryClear(Query *q){
+ int i;
+ for(i = 0; i < q->nTerms; ++i){
+ free(q->pTerms[i].pTerm);
+ }
+ free(q->pTerms);
+ memset(q, 0, sizeof(*q));
+}
+
+/* Free all of the dynamically allocated memory held by the
+** Snippet
+*/
+static void snippetClear(Snippet *p){
+ free(p->aMatch);
+ free(p->zOffset);
+ free(p->zSnippet);
+ memset(p, 0, sizeof(*p));
+}
+/*
+** Append a single entry to the p->aMatch[] log.
+*/
+static void snippetAppendMatch(
+ Snippet *p, /* Append the entry to this snippet */
+ int iCol, int iTerm, /* The column and query term */
+ int iStart, int nByte /* Offset and size of the match */
+){
+ int i;
+ struct snippetMatch *pMatch;
+ if( p->nMatch+1>=p->nAlloc ){
+ p->nAlloc = p->nAlloc*2 + 10;
+ p->aMatch = realloc(p->aMatch, p->nAlloc*sizeof(p->aMatch[0]) );
+ if( p->aMatch==0 ){
+ p->nMatch = 0;
+ p->nAlloc = 0;
+ return;
+ }
+ }
+ i = p->nMatch++;
+ pMatch = &p->aMatch[i];
+ pMatch->iCol = iCol;
+ pMatch->iTerm = iTerm;
+ pMatch->iStart = iStart;
+ pMatch->nByte = nByte;
+}
+
+/*
+** Sizing information for the circular buffer used in snippetOffsetsOfColumn()
+*/
+#define FTS1_ROTOR_SZ (32)
+#define FTS1_ROTOR_MASK (FTS1_ROTOR_SZ-1)
+
+/*
+** Add entries to pSnippet->aMatch[] for every match that occurs against
+** document zDoc[0..nDoc-1] which is stored in column iColumn.
+*/
+static void snippetOffsetsOfColumn(
+ Query *pQuery,
+ Snippet *pSnippet,
+ int iColumn,
+ const char *zDoc,
+ int nDoc
+){
+ const sqlite3_tokenizer_module *pTModule; /* The tokenizer module */
+ sqlite3_tokenizer *pTokenizer; /* The specific tokenizer */
+ sqlite3_tokenizer_cursor *pTCursor; /* Tokenizer cursor */
+ fulltext_vtab *pVtab; /* The full text index */
+ int nColumn; /* Number of columns in the index */
+ const QueryTerm *aTerm; /* Query string terms */
+ int nTerm; /* Number of query string terms */
+ int i, j; /* Loop counters */
+ int rc; /* Return code */
+ unsigned int match, prevMatch; /* Phrase search bitmasks */
+ const char *zToken; /* Next token from the tokenizer */
+ int nToken; /* Size of zToken */
+ int iBegin, iEnd, iPos; /* Offsets of beginning and end */
+
+ /* The following variables keep a circular buffer of the last
+ ** few tokens */
+ unsigned int iRotor = 0; /* Index of current token */
+ int iRotorBegin[FTS1_ROTOR_SZ]; /* Beginning offset of token */
+ int iRotorLen[FTS1_ROTOR_SZ]; /* Length of token */
+
+ pVtab = pQuery->pFts;
+ nColumn = pVtab->nColumn;
+ pTokenizer = pVtab->pTokenizer;
+ pTModule = pTokenizer->pModule;
+ rc = pTModule->xOpen(pTokenizer, zDoc, nDoc, &pTCursor);
+ if( rc ) return;
+ pTCursor->pTokenizer = pTokenizer;
+ aTerm = pQuery->pTerms;
+ nTerm = pQuery->nTerms;
+ if( nTerm>=FTS1_ROTOR_SZ ){
+ nTerm = FTS1_ROTOR_SZ - 1;
+ }
+ prevMatch = 0;
+ while(1){
+ rc = pTModule->xNext(pTCursor, &zToken, &nToken, &iBegin, &iEnd, &iPos);
+ if( rc ) break;
+ iRotorBegin[iRotor&FTS1_ROTOR_MASK] = iBegin;
+ iRotorLen[iRotor&FTS1_ROTOR_MASK] = iEnd-iBegin;
+ match = 0;
+ for(i=0; i<nTerm; i++){
+ int iCol;
+ iCol = aTerm[i].iColumn;
+ if( iCol>=0 && iCol<nColumn && iCol!=iColumn ) continue;
+ if( aTerm[i].nTerm!=nToken ) continue;
+ if( memcmp(aTerm[i].pTerm, zToken, nToken) ) continue;
+ if( aTerm[i].iPhrase>1 && (prevMatch & (1<<i))==0 ) continue;
+ match |= 1<<i;
+ if( i==nTerm-1 || aTerm[i+1].iPhrase==1 ){
+ for(j=aTerm[i].iPhrase-1; j>=0; j--){
+ int k = (iRotor-j) & FTS1_ROTOR_MASK;
+ snippetAppendMatch(pSnippet, iColumn, i-j,
+ iRotorBegin[k], iRotorLen[k]);
+ }
+ }
+ }
+ prevMatch = match<<1;
+ iRotor++;
+ }
+ pTModule->xClose(pTCursor);
+}
+
+
+/*
+** Compute all offsets for the current row of the query.
+** If the offsets have already been computed, this routine is a no-op.
+*/
+static void snippetAllOffsets(fulltext_cursor *p){
+ int nColumn;
+ int iColumn, i;
+ int iFirst, iLast;
+ fulltext_vtab *pFts;
+
+ if( p->snippet.nMatch ) return;
+ if( p->q.nTerms==0 ) return;
+ pFts = p->q.pFts;
+ nColumn = pFts->nColumn;
+ iColumn = p->iCursorType;
+ if( iColumn<0 || iColumn>=nColumn ){
+ iFirst = 0;
+ iLast = nColumn-1;
+ }else{
+ iFirst = iColumn;
+ iLast = iColumn;
+ }
+ for(i=iFirst; i<=iLast; i++){
+ const char *zDoc;
+ int nDoc;
+ zDoc = (const char*)sqlite3_column_text(p->pStmt, i+1);
+ nDoc = sqlite3_column_bytes(p->pStmt, i+1);
+ snippetOffsetsOfColumn(&p->q, &p->snippet, i, zDoc, nDoc);
+ }
+}
+
+/*
+** Convert the information in the aMatch[] array of the snippet
+** into the string zOffset[0..nOffset-1].
+*/
+static void snippetOffsetText(Snippet *p){
+ int i;
+ int cnt = 0;
+ StringBuffer sb;
+ char zBuf[200];
+ if( p->zOffset ) return;
+ initStringBuffer(&sb);
+ for(i=0; i<p->nMatch; i++){
+ struct snippetMatch *pMatch = &p->aMatch[i];
+ zBuf[0] = ' ';
+ sprintf(&zBuf[cnt>0], "%d %d %d %d", pMatch->iCol,
+ pMatch->iTerm, pMatch->iStart, pMatch->nByte);
+ append(&sb, zBuf);
+ cnt++;
+ }
+ p->zOffset = sb.s;
+ p->nOffset = sb.len;
+}
+
+/*
+** zDoc[0..nDoc-1] is a phrase of text. aMatch[0..nMatch-1] is a set
+** of matching words, some of which might be in zDoc. zDoc is column
+** number iCol.
+**
+** iBreak is the suggested spot in zDoc where we could begin or end an
+** excerpt. Return a value similar to iBreak but possibly adjusted
+** to be a little left or right so that the break point is better.
+*/
+static int wordBoundary(
+ int iBreak, /* The suggested break point */
+ const char *zDoc, /* Document text */
+ int nDoc, /* Number of bytes in zDoc[] */
+ struct snippetMatch *aMatch, /* Matching words */
+ int nMatch, /* Number of entries in aMatch[] */
+ int iCol /* The column number for zDoc[] */
+){
+ int i;
+ if( iBreak<=10 ){
+ return 0;
+ }
+ if( iBreak>=nDoc-10 ){
+ return nDoc;
+ }
+ for(i=0; i<nMatch && aMatch[i].iCol<iCol; i++){}
+ while( i<nMatch && aMatch[i].iStart+aMatch[i].nByte<iBreak ){ i++; }
+ if( i<nMatch ){
+ if( aMatch[i].iStart<iBreak+10 ){
+ return aMatch[i].iStart;
+ }
+ if( i>0 && aMatch[i-1].iStart+aMatch[i-1].nByte>=iBreak ){
+ return aMatch[i-1].iStart;
+ }
+ }
+ for(i=1; i<=10; i++){
+ if( isspace(zDoc[iBreak-i]) ){
+ return iBreak - i + 1;
+ }
+ if( isspace(zDoc[iBreak+i]) ){
+ return iBreak + i + 1;
+ }
+ }
+ return iBreak;
+}
+
+/*
+** If the StringBuffer does not end in white space, add a single
+** space character to the end.
+*/
+static void appendWhiteSpace(StringBuffer *p){
+ if( p->len==0 ) return;
+ if( isspace(p->s[p->len-1]) ) return;
+ append(p, " ");
+}
+
+/*
+** Remove white space from the end of the StringBuffer
+*/
+static void trimWhiteSpace(StringBuffer *p){
+ while( p->len>0 && isspace(p->s[p->len-1]) ){
+ p->len--;
+ }
+}
+
+
+
+/*
+** Allowed values for Snippet.aMatch[].snStatus
+*/
+#define SNIPPET_IGNORE 0 /* It is ok to omit this match from the snippet */
+#define SNIPPET_DESIRED 1 /* We want to include this match in the snippet */
+
+/*
+** Generate the text of a snippet.
+*/
+static void snippetText(
+ fulltext_cursor *pCursor, /* The cursor we need the snippet for */
+ const char *zStartMark, /* Markup to appear before each match */
+ const char *zEndMark, /* Markup to appear after each match */
+ const char *zEllipsis /* Ellipsis mark */
+){
+ int i, j;
+ struct snippetMatch *aMatch;
+ int nMatch;
+ int nDesired;
+ StringBuffer sb;
+ int tailCol;
+ int tailOffset;
+ int iCol;
+ int nDoc;
+ const char *zDoc;
+ int iStart, iEnd;
+ int tailEllipsis = 0;
+ int iMatch;
+
+
+ free(pCursor->snippet.zSnippet);
+ pCursor->snippet.zSnippet = 0;
+ aMatch = pCursor->snippet.aMatch;
+ nMatch = pCursor->snippet.nMatch;
+ initStringBuffer(&sb);
+
+ for(i=0; i<nMatch; i++){
+ aMatch[i].snStatus = SNIPPET_IGNORE;
+ }
+ nDesired = 0;
+ for(i=0; i<pCursor->q.nTerms; i++){
+ for(j=0; j<nMatch; j++){
+ if( aMatch[j].iTerm==i ){
+ aMatch[j].snStatus = SNIPPET_DESIRED;
+ nDesired++;
+ break;
+ }
+ }
+ }
+
+ iMatch = 0;
+ tailCol = -1;
+ tailOffset = 0;
+ for(i=0; i<nMatch && nDesired>0; i++){
+ if( aMatch[i].snStatus!=SNIPPET_DESIRED ) continue;
+ nDesired--;
+ iCol = aMatch[i].iCol;
+ zDoc = (const char*)sqlite3_column_text(pCursor->pStmt, iCol+1);
+ nDoc = sqlite3_column_bytes(pCursor->pStmt, iCol+1);
+ iStart = aMatch[i].iStart - 40;
+ iStart = wordBoundary(iStart, zDoc, nDoc, aMatch, nMatch, iCol);
+ if( iStart<=10 ){
+ iStart = 0;
+ }
+ if( iCol==tailCol && iStart<=tailOffset+20 ){
+ iStart = tailOffset;
+ }
+ if( (iCol!=tailCol && tailCol>=0) || iStart!=tailOffset ){
+ trimWhiteSpace(&sb);
+ appendWhiteSpace(&sb);
+ append(&sb, zEllipsis);
+ appendWhiteSpace(&sb);
+ }
+ iEnd = aMatch[i].iStart + aMatch[i].nByte + 40;
+ iEnd = wordBoundary(iEnd, zDoc, nDoc, aMatch, nMatch, iCol);
+ if( iEnd>=nDoc-10 ){
+ iEnd = nDoc;
+ tailEllipsis = 0;
+ }else{
+ tailEllipsis = 1;
+ }
+ while( iMatch<nMatch && aMatch[iMatch].iCol<iCol ){ iMatch++; }
+ while( iStart<iEnd ){
+ while( iMatch<nMatch && aMatch[iMatch].iStart<iStart
+ && aMatch[iMatch].iCol<=iCol ){
+ iMatch++;
+ }
+ if( iMatch<nMatch && aMatch[iMatch].iStart<iEnd
+ && aMatch[iMatch].iCol==iCol ){
+ nappend(&sb, &zDoc[iStart], aMatch[iMatch].iStart - iStart);
+ iStart = aMatch[iMatch].iStart;
+ append(&sb, zStartMark);
+ nappend(&sb, &zDoc[iStart], aMatch[iMatch].nByte);
+ append(&sb, zEndMark);
+ iStart += aMatch[iMatch].nByte;
+ for(j=iMatch+1; j<nMatch; j++){
+ if( aMatch[j].iTerm==aMatch[iMatch].iTerm
+ && aMatch[j].snStatus==SNIPPET_DESIRED ){
+ nDesired--;
+ aMatch[j].snStatus = SNIPPET_IGNORE;
+ }
+ }
+ }else{
+ nappend(&sb, &zDoc[iStart], iEnd - iStart);
+ iStart = iEnd;
+ }
+ }
+ tailCol = iCol;
+ tailOffset = iEnd;
+ }
+ trimWhiteSpace(&sb);
+ if( tailEllipsis ){
+ appendWhiteSpace(&sb);
+ append(&sb, zEllipsis);
+ }
+ pCursor->snippet.zSnippet = sb.s;
+ pCursor->snippet.nSnippet = sb.len;
+}
+
+
+/*
+** Close the cursor. For additional information see the documentation
+** on the xClose method of the virtual table interface.
+*/
+static int fulltextClose(sqlite3_vtab_cursor *pCursor){
+ fulltext_cursor *c = (fulltext_cursor *) pCursor;
+ TRACE(("FTS1 Close %p\n", c));
+ sqlite3_finalize(c->pStmt);
+ queryClear(&c->q);
+ snippetClear(&c->snippet);
+ if( c->result.pDoclist!=NULL ){
+ docListDelete(c->result.pDoclist);
+ }
+ free(c);
+ return SQLITE_OK;
+}
+
+static int fulltextNext(sqlite3_vtab_cursor *pCursor){
+ fulltext_cursor *c = (fulltext_cursor *) pCursor;
+ sqlite_int64 iDocid;
+ int rc;
+
+ TRACE(("FTS1 Next %p\n", pCursor));
+ snippetClear(&c->snippet);
+ if( c->iCursorType < QUERY_FULLTEXT ){
+ /* TODO(shess) Handle SQLITE_SCHEMA AND SQLITE_BUSY. */
+ rc = sqlite3_step(c->pStmt);
+ switch( rc ){
+ case SQLITE_ROW:
+ c->eof = 0;
+ return SQLITE_OK;
+ case SQLITE_DONE:
+ c->eof = 1;
+ return SQLITE_OK;
+ default:
+ c->eof = 1;
+ return rc;
+ }
+ } else { /* full-text query */
+ rc = sqlite3_reset(c->pStmt);
+ if( rc!=SQLITE_OK ) return rc;
+
+ iDocid = nextDocid(&c->result);
+ if( iDocid==0 ){
+ c->eof = 1;
+ return SQLITE_OK;
+ }
+ rc = sqlite3_bind_int64(c->pStmt, 1, iDocid);
+ if( rc!=SQLITE_OK ) return rc;
+ /* TODO(shess) Handle SQLITE_SCHEMA AND SQLITE_BUSY. */
+ rc = sqlite3_step(c->pStmt);
+ if( rc==SQLITE_ROW ){ /* the case we expect */
+ c->eof = 0;
+ return SQLITE_OK;
+ }
+ /* an error occurred; abort */
+ return rc==SQLITE_DONE ? SQLITE_ERROR : rc;
+ }
+}
+
+
+/* Return a DocList corresponding to the query term *pQTerm. If *pQTerm
+** is the first term of a phrase query, go ahead and evaluate the phrase
+** query and return the doclist for the entire phrase.
+**
+** The result is written into *ppResult.
+*/
+static int docListOfTerm(
+ fulltext_vtab *v, /* The full text index */
+ int iColumn, /* column to restrict to. No restriction if >=nColumn */
+ QueryTerm *pQTerm, /* Term we are looking for, or 1st term of a phrase */
+ DocList **ppResult /* Write the result here */
+){
+ DocList *pLeft, *pRight, *pNew;
+ int i, rc;
+
+ pLeft = docListNew(DL_POSITIONS);
+ rc = term_select_all(v, iColumn, pQTerm->pTerm, pQTerm->nTerm, pLeft);
+ if( rc ) return rc;
+ for(i=1; i<=pQTerm->nPhrase; i++){
+ pRight = docListNew(DL_POSITIONS);
+ rc = term_select_all(v, iColumn, pQTerm[i].pTerm, pQTerm[i].nTerm, pRight);
+ if( rc ){
+ docListDelete(pLeft);
+ return rc;
+ }
+ pNew = docListNew(i<pQTerm->nPhrase ? DL_POSITIONS : DL_DOCIDS);
+ docListPhraseMerge(pLeft, pRight, pNew);
+ docListDelete(pLeft);
+ docListDelete(pRight);
+ pLeft = pNew;
+ }
+ *ppResult = pLeft;
+ return SQLITE_OK;
+}
+
+/* Add a new term pTerm[0..nTerm-1] to the query *q.
+*/
+static void queryAdd(Query *q, const char *pTerm, int nTerm){
+ QueryTerm *t;
+ ++q->nTerms;
+ q->pTerms = realloc(q->pTerms, q->nTerms * sizeof(q->pTerms[0]));
+ if( q->pTerms==0 ){
+ q->nTerms = 0;
+ return;
+ }
+ t = &q->pTerms[q->nTerms - 1];
+ memset(t, 0, sizeof(*t));
+ t->pTerm = malloc(nTerm+1);
+ memcpy(t->pTerm, pTerm, nTerm);
+ t->pTerm[nTerm] = 0;
+ t->nTerm = nTerm;
+ t->isOr = q->nextIsOr;
+ q->nextIsOr = 0;
+ t->iColumn = q->nextColumn;
+ q->nextColumn = q->dfltColumn;
+}
+
+/*
+** Check to see if the string zToken[0...nToken-1] matches any
+** column name in the virtual table. If it does,
+** return the zero-indexed column number. If not, return -1.
+*/
+static int checkColumnSpecifier(
+ fulltext_vtab *pVtab, /* The virtual table */
+ const char *zToken, /* Text of the token */
+ int nToken /* Number of characters in the token */
+){
+ int i;
+ for(i=0; i<pVtab->nColumn; i++){
+ if( memcmp(pVtab->azColumn[i], zToken, nToken)==0
+ && pVtab->azColumn[i][nToken]==0 ){
+ return i;
+ }
+ }
+ return -1;
+}
+
+/*
+** Parse the text at pSegment[0..nSegment-1]. Add additional terms
+** to the query being assembled in pQuery.
+**
+** inPhrase is true if pSegment[0..nSegment-1] is contained within
+** double-quotes. If inPhrase is true, then the first term
+** is marked with the number of terms in the phrase less one and
+** OR and "-" syntax is ignored. If inPhrase is false, then every
+** term found is marked with nPhrase=0 and OR and "-" syntax is significant.
+*/
+static int tokenizeSegment(
+ sqlite3_tokenizer *pTokenizer, /* The tokenizer to use */
+ const char *pSegment, int nSegment, /* Query expression being parsed */
+ int inPhrase, /* True if within "..." */
+ Query *pQuery /* Append results here */
+){
+ const sqlite3_tokenizer_module *pModule = pTokenizer->pModule;
+ sqlite3_tokenizer_cursor *pCursor;
+ int firstIndex = pQuery->nTerms;
+ int iCol;
+ int nTerm = 1;
+
+ int rc = pModule->xOpen(pTokenizer, pSegment, nSegment, &pCursor);
+ if( rc!=SQLITE_OK ) return rc;
+ pCursor->pTokenizer = pTokenizer;
+
+ while( 1 ){
+ const char *pToken;
+ int nToken, iBegin, iEnd, iPos;
+
+ rc = pModule->xNext(pCursor,
+ &pToken, &nToken,
+ &iBegin, &iEnd, &iPos);
+ if( rc!=SQLITE_OK ) break;
+ if( !inPhrase &&
+ pSegment[iEnd]==':' &&
+ (iCol = checkColumnSpecifier(pQuery->pFts, pToken, nToken))>=0 ){
+ pQuery->nextColumn = iCol;
+ continue;
+ }
+ if( !inPhrase && pQuery->nTerms>0 && nToken==2
+ && pSegment[iBegin]=='O' && pSegment[iBegin+1]=='R' ){
+ pQuery->nextIsOr = 1;
+ continue;
+ }
+ queryAdd(pQuery, pToken, nToken);
+ if( !inPhrase && iBegin>0 && pSegment[iBegin-1]=='-' ){
+ pQuery->pTerms[pQuery->nTerms-1].isNot = 1;
+ }
+ pQuery->pTerms[pQuery->nTerms-1].iPhrase = nTerm;
+ if( inPhrase ){
+ nTerm++;
+ }
+ }
+
+ if( inPhrase && pQuery->nTerms>firstIndex ){
+ pQuery->pTerms[firstIndex].nPhrase = pQuery->nTerms - firstIndex - 1;
+ }
+
+ return pModule->xClose(pCursor);
+}
+
+/* Parse a query string, yielding a Query object pQuery.
+**
+** The calling function will need to queryClear() to clean up
+** the dynamically allocated memory held by pQuery.
+*/
+static int parseQuery(
+ fulltext_vtab *v, /* The fulltext index */
+ const char *zInput, /* Input text of the query string */
+ int nInput, /* Size of the input text */
+ int dfltColumn, /* Default column of the index to match against */
+ Query *pQuery /* Write the parse results here. */
+){
+ int iInput, inPhrase = 0;
+
+ if( zInput==0 ) nInput = 0;
+ if( nInput<0 ) nInput = strlen(zInput);
+ pQuery->nTerms = 0;
+ pQuery->pTerms = NULL;
+ pQuery->nextIsOr = 0;
+ pQuery->nextColumn = dfltColumn;
+ pQuery->dfltColumn = dfltColumn;
+ pQuery->pFts = v;
+
+ for(iInput=0; iInput<nInput; ++iInput){
+ int i;
+ for(i=iInput; i<nInput && zInput[i]!='"'; ++i){}
+ if( i>iInput ){
+ tokenizeSegment(v->pTokenizer, zInput+iInput, i-iInput, inPhrase,
+ pQuery);
+ }
+ iInput = i;
+ if( i<nInput ){
+ assert( zInput[i]=='"' );
+ inPhrase = !inPhrase;
+ }
+ }
+
+ if( inPhrase ){
+ /* unmatched quote */
+ queryClear(pQuery);
+ return SQLITE_ERROR;
+ }
+ return SQLITE_OK;
+}
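Illustrative sketch, not part of the committed file: assuming a table
with columns a and b (b at index 1), parsing the hypothetical string
b:linux OR bsd -windows "open source" yields terms roughly like

    linux     iColumn=1   (from the "b:" column specifier)
    bsd       isOr=1
    windows   isNot=1
    open      iPhrase=1, nPhrase=1   (first term of the quoted phrase)
    source    iPhrase=2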
+
+/* Perform a full-text query using the search expression in
+** zInput[0..nInput-1]. Return a list of matching documents
+** in pResult.
+**
+** Queries must match column iColumn, or, if iColumn>=nColumn,
+** they may match against any column.
+*/
+static int fulltextQuery(
+ fulltext_vtab *v, /* The full text index */
+ int iColumn, /* Match against this column by default */
+ const char *zInput, /* The query string */
+ int nInput, /* Number of bytes in zInput[] */
+ DocList **pResult, /* Write the result doclist here */
+ Query *pQuery /* Put parsed query string here */
+){
+ int i, iNext, rc;
+ DocList *pLeft = NULL;
+ DocList *pRight, *pNew, *pOr;
+ int nNot = 0;
+ QueryTerm *aTerm;
+
+ rc = parseQuery(v, zInput, nInput, iColumn, pQuery);
+ if( rc!=SQLITE_OK ) return rc;
+
+ /* Merge AND terms. */
+ aTerm = pQuery->pTerms;
+ for(i = 0; i<pQuery->nTerms; i=iNext){
+ if( aTerm[i].isNot ){
+ /* Handle all NOT terms in a separate pass */
+ nNot++;
+ iNext = i + aTerm[i].nPhrase+1;
+ continue;
+ }
+ iNext = i + aTerm[i].nPhrase + 1;
+ rc = docListOfTerm(v, aTerm[i].iColumn, &aTerm[i], &pRight);
+ if( rc ){
+ queryClear(pQuery);
+ return rc;
+ }
+ while( iNext<pQuery->nTerms && aTerm[iNext].isOr ){
+ rc = docListOfTerm(v, aTerm[iNext].iColumn, &aTerm[iNext], &pOr);
+ iNext += aTerm[iNext].nPhrase + 1;
+ if( rc ){
+ queryClear(pQuery);
+ return rc;
+ }
+ pNew = docListNew(DL_DOCIDS);
+ docListOrMerge(pRight, pOr, pNew);
+ docListDelete(pRight);
+ docListDelete(pOr);
+ pRight = pNew;
+ }
+ if( pLeft==0 ){
+ pLeft = pRight;
+ }else{
+ pNew = docListNew(DL_DOCIDS);
+ docListAndMerge(pLeft, pRight, pNew);
+ docListDelete(pRight);
+ docListDelete(pLeft);
+ pLeft = pNew;
+ }
+ }
+
+ if( nNot && pLeft==0 ){
+ /* We do not yet know how to handle a query of only NOT terms */
+ return SQLITE_ERROR;
+ }
+
+ /* Do the EXCEPT terms */
+ for(i=0; i<pQuery->nTerms; i += aTerm[i].nPhrase + 1){
+ if( !aTerm[i].isNot ) continue;
+ rc = docListOfTerm(v, aTerm[i].iColumn, &aTerm[i], &pRight);
+ if( rc ){
+ queryClear(pQuery);
+ docListDelete(pLeft);
+ return rc;
+ }
+ pNew = docListNew(DL_DOCIDS);
+ docListExceptMerge(pLeft, pRight, pNew);
+ docListDelete(pRight);
+ docListDelete(pLeft);
+ pLeft = pNew;
+ }
+
+ *pResult = pLeft;
+ return rc;
+}
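Continuing the same hypothetical query, fulltextQuery() combines the
per-term doclists roughly as

    pLeft = docids(linux, column b)  OR-merged with  docids(bsd)
    pLeft = pLeft  AND-merged with  docids(phrase "open source")
    pLeft = pLeft  EXCEPT-merged with  docids(windows)   (the NOT pass)

so the result is (linux-in-b OR bsd) AND "open source", minus any row
containing windows.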
+
+/*
+** This is the xFilter interface for the virtual table. See
+** the virtual table xFilter method documentation for additional
+** information.
+**
+** If idxNum==QUERY_GENERIC then do a full table scan against
+** the %_content table.
+**
+** If idxNum==QUERY_ROWID then do a rowid lookup for a single entry
+** in the %_content table.
+**
+** If idxNum>=QUERY_FULLTEXT then use the full text index. The
+** column on the left-hand side of the MATCH operator is column
+** number idxNum-QUERY_FULLTEXT, 0 indexed. argv[0] is the right-hand
+** side of the MATCH operator.
+*/
+static int fulltextFilter(
+ sqlite3_vtab_cursor *pCursor, /* The cursor used for this query */
+ int idxNum, const char *idxStr, /* Which indexing scheme to use */
+ int argc, sqlite3_value **argv /* Arguments for the indexing scheme */
+){
+ fulltext_cursor *c = (fulltext_cursor *) pCursor;
+ fulltext_vtab *v = cursor_vtab(c);
+ int rc;
+ char *zSql;
+
+ TRACE(("FTS1 Filter %p\n",pCursor));
+
+ zSql = sqlite3_mprintf("select rowid, * from %%_content %s",
+ idxNum==QUERY_GENERIC ? "" : "where rowid=?");
+ rc = sql_prepare(v->db, v->zName, &c->pStmt, zSql);
+ sqlite3_free(zSql);
+ if( rc!=SQLITE_OK ) goto out;
+
+ c->iCursorType = idxNum;
+ switch( idxNum ){
+ case QUERY_GENERIC:
+ break;
+
+ case QUERY_ROWID:
+ rc = sqlite3_bind_int64(c->pStmt, 1, sqlite3_value_int64(argv[0]));
+ if( rc!=SQLITE_OK ) goto out;
+ break;
+
+ default: /* full-text search */
+ {
+ const char *zQuery = (const char *)sqlite3_value_text(argv[0]);
+ DocList *pResult;
+ assert( idxNum<=QUERY_FULLTEXT+v->nColumn);
+ assert( argc==1 );
+ queryClear(&c->q);
+ rc = fulltextQuery(v, idxNum-QUERY_FULLTEXT, zQuery, -1, &pResult, &c->q);
+ if( rc!=SQLITE_OK ) goto out;
+ readerInit(&c->result, pResult);
+ break;
+ }
+ }
+
+ rc = fulltextNext(pCursor);
+
+out:
+ return rc;
+}
+
+/* This is the xEof method of the virtual table. The SQLite core
+** calls this routine to find out if it has reached the end of
+** a query's result set.
+*/
+static int fulltextEof(sqlite3_vtab_cursor *pCursor){
+ fulltext_cursor *c = (fulltext_cursor *) pCursor;
+ return c->eof;
+}
+
+/* This is the xColumn method of the virtual table. The SQLite
+** core calls this method during a query when it needs the value
+** of a column from the virtual table. This method needs to use
+** one of the sqlite3_result_*() routines to store the requested
+** value back in the pContext.
+*/
+static int fulltextColumn(sqlite3_vtab_cursor *pCursor,
+ sqlite3_context *pContext, int idxCol){
+ fulltext_cursor *c = (fulltext_cursor *) pCursor;
+ fulltext_vtab *v = cursor_vtab(c);
+
+ if( idxCol<v->nColumn ){
+ sqlite3_value *pVal = sqlite3_column_value(c->pStmt, idxCol+1);
+ sqlite3_result_value(pContext, pVal);
+ }else if( idxCol==v->nColumn ){
+ /* The extra column whose name is the same as the table.
+ ** Return a blob which is a pointer to the cursor
+ */
+ sqlite3_result_blob(pContext, &c, sizeof(c), SQLITE_TRANSIENT);
+ }
+ return SQLITE_OK;
+}
+
+/* This is the xRowid method. The SQLite core calls this routine to
+** retrieve the rowid for the current row of the result set. The
+** rowid should be written to *pRowid.
+*/
+static int fulltextRowid(sqlite3_vtab_cursor *pCursor, sqlite_int64 *pRowid){
+ fulltext_cursor *c = (fulltext_cursor *) pCursor;
+
+ *pRowid = sqlite3_column_int64(c->pStmt, 0);
+ return SQLITE_OK;
+}
+
+/* Add all terms in [zText] to the given hash table. If [iColumn] > 0,
+ * we also store positions and offsets in the hash table using the given
+ * column number. */
+static int buildTerms(fulltext_vtab *v, fts1Hash *terms, sqlite_int64 iDocid,
+ const char *zText, int iColumn){
+ sqlite3_tokenizer *pTokenizer = v->pTokenizer;
+ sqlite3_tokenizer_cursor *pCursor;
+ const char *pToken;
+ int nTokenBytes;
+ int iStartOffset, iEndOffset, iPosition;
+ int rc;
+
+ rc = pTokenizer->pModule->xOpen(pTokenizer, zText, -1, &pCursor);
+ if( rc!=SQLITE_OK ) return rc;
+
+ pCursor->pTokenizer = pTokenizer;
+ while( SQLITE_OK==pTokenizer->pModule->xNext(pCursor,
+ &pToken, &nTokenBytes,
+ &iStartOffset, &iEndOffset,
+ &iPosition) ){
+ DocList *p;
+
+ /* Positions can't be negative; we use -1 as a terminator internally. */
+ if( iPosition<0 ){
+ pTokenizer->pModule->xClose(pCursor);
+ return SQLITE_ERROR;
+ }
+
+ p = fts1HashFind(terms, pToken, nTokenBytes);
+ if( p==NULL ){
+ p = docListNew(DL_DEFAULT);
+ docListAddDocid(p, iDocid);
+ fts1HashInsert(terms, pToken, nTokenBytes, p);
+ }
+ if( iColumn>=0 ){
+ docListAddPosOffset(p, iColumn, iPosition, iStartOffset, iEndOffset);
+ }
+ }
+
+ /* TODO(shess) Check return? Should this be able to cause errors at
+ ** this point? Actually, same question about sqlite3_finalize(),
+ ** though one could argue that failure there means that the data is
+ ** not durable. *ponder*
+ */
+ pTokenizer->pModule->xClose(pCursor);
+ return rc;
+}
+
+/* Update the %_term table to map the term [pTerm] to the given doclist. */
+static int index_insert_term(fulltext_vtab *v, const char *pTerm, int nTerm,
+ DocList *d){
+ sqlite_int64 iIndexRow;
+ DocList doclist;
+ int iSegment = 0, rc;
+
+ rc = term_select(v, pTerm, nTerm, iSegment, &iIndexRow, &doclist);
+ if( rc==SQLITE_DONE ){
+ docListInit(&doclist, DL_DEFAULT, 0, 0);
+ docListUpdate(&doclist, d);
+ /* TODO(shess) Consider length(doclist)>CHUNK_MAX? */
+ rc = term_insert(v, NULL, pTerm, nTerm, iSegment, &doclist);
+ goto err;
+ }
+ if( rc!=SQLITE_ROW ) return SQLITE_ERROR;
+
+ docListUpdate(&doclist, d);
+ if( doclist.nData<=CHUNK_MAX ){
+ rc = term_update(v, iIndexRow, &doclist);
+ goto err;
+ }
+
+ /* Doclist doesn't fit, delete what's there, and accumulate
+ ** forward.
+ */
+ rc = term_delete(v, iIndexRow);
+ if( rc!=SQLITE_OK ) goto err;
+
+ /* Try to insert the doclist into a higher segment bucket. On
+ ** failure, accumulate existing doclist with the doclist from that
+ ** bucket, and put results in the next bucket.
+ */
+ iSegment++;
+ while( (rc=term_insert(v, &iIndexRow, pTerm, nTerm, iSegment,
+ &doclist))!=SQLITE_OK ){
+ sqlite_int64 iSegmentRow;
+ DocList old;
+ int rc2;
+
+ /* Retain old error in case the term_insert() error was really an
+ ** error rather than a bounced insert.
+ */
+ rc2 = term_select(v, pTerm, nTerm, iSegment, &iSegmentRow, &old);
+ if( rc2!=SQLITE_ROW ) goto err;
+
+ rc = term_delete(v, iSegmentRow);
+ if( rc!=SQLITE_OK ) goto err;
+
+ /* Reusing the lowest-numbered deleted row keeps the index smaller. */
+ if( iSegmentRow<iIndexRow ) iIndexRow = iSegmentRow;
+
+ /* doclist contains the newer data, so accumulate it over old.
+ ** Then steal accumulated data for doclist.
+ */
+ docListAccumulate(&old, &doclist);
+ docListDestroy(&doclist);
+ doclist = old;
+
+ iSegment++;
+ }
+
+ err:
+ docListDestroy(&doclist);
+ return rc;
+}
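Illustrative trace, not part of the committed file: assuming the updated
doclist no longer fits under CHUNK_MAX and segment 1 is already occupied,
the loop above cascades roughly as

    segment 0 row deleted (doclist too large)
    insert at segment 1 bounces  ->  read + delete the segment 1 row,
                                     docListAccumulate(old, new)
    insert at segment 2 succeeds ->  done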
+
+/* Add doclists for all terms in [pValues] to the hash table [terms]. */
+static int insertTerms(fulltext_vtab *v, fts1Hash *terms, sqlite_int64 iRowid,
+ sqlite3_value **pValues){
+ int i;
+ for(i = 0; i < v->nColumn ; ++i){
+ char *zText = (char*)sqlite3_value_text(pValues[i]);
+ int rc = buildTerms(v, terms, iRowid, zText, i);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+ return SQLITE_OK;
+}
+
+/* Add empty doclists for all terms in the given row's content to the hash
+ * table [pTerms]. */
+static int deleteTerms(fulltext_vtab *v, fts1Hash *pTerms, sqlite_int64 iRowid){
+ const char **pValues;
+ int i;
+
+ int rc = content_select(v, iRowid, &pValues);
+ if( rc!=SQLITE_OK ) return rc;
+
+ for(i = 0 ; i < v->nColumn; ++i) {
+ rc = buildTerms(v, pTerms, iRowid, pValues[i], -1);
+ if( rc!=SQLITE_OK ) break;
+ }
+
+ freeStringArray(v->nColumn, pValues);
+ return rc;
+}
+
+/* Insert a row into the %_content table; set *piRowid to be the ID of the
+ * new row. Fill [pTerms] with new doclists for the %_term table. */
+static int index_insert(fulltext_vtab *v, sqlite3_value *pRequestRowid,
+ sqlite3_value **pValues,
+ sqlite_int64 *piRowid, fts1Hash *pTerms){
+ int rc;
+
+ rc = content_insert(v, pRequestRowid, pValues); /* execute an SQL INSERT */
+ if( rc!=SQLITE_OK ) return rc;
+ *piRowid = sqlite3_last_insert_rowid(v->db);
+ return insertTerms(v, pTerms, *piRowid, pValues);
+}
+
+/* Delete a row from the %_content table; fill [pTerms] with empty doclists
+ * to be written to the %_term table. */
+static int index_delete(fulltext_vtab *v, sqlite_int64 iRow, fts1Hash *pTerms){
+ int rc = deleteTerms(v, pTerms, iRow);
+ if( rc!=SQLITE_OK ) return rc;
+ return content_delete(v, iRow); /* execute an SQL DELETE */
+}
+
+/* Update a row in the %_content table; fill [pTerms] with new doclists for the
+ * %_term table. */
+static int index_update(fulltext_vtab *v, sqlite_int64 iRow,
+ sqlite3_value **pValues, fts1Hash *pTerms){
+ /* Generate an empty doclist for each term that previously appeared in this
+ * row. */
+ int rc = deleteTerms(v, pTerms, iRow);
+ if( rc!=SQLITE_OK ) return rc;
+
+ /* Now add positions for terms which appear in the updated row. */
+ rc = insertTerms(v, pTerms, iRow, pValues);
+ if( rc!=SQLITE_OK ) return rc;
+
+ return content_update(v, pValues, iRow); /* execute an SQL UPDATE */
+}
+
+/* This function implements the xUpdate callback; it's the top-level entry
+ * point for inserting, deleting or updating a row in a full-text table. */
+static int fulltextUpdate(sqlite3_vtab *pVtab, int nArg, sqlite3_value **ppArg,
+ sqlite_int64 *pRowid){
+ fulltext_vtab *v = (fulltext_vtab *) pVtab;
+ fts1Hash terms; /* maps term string -> DocList */
+ int rc;
+ fts1HashElem *e;
+
+ TRACE(("FTS1 Update %p\n", pVtab));
+
+ fts1HashInit(&terms, FTS1_HASH_STRING, 1);
+
+ if( nArg<2 ){
+ rc = index_delete(v, sqlite3_value_int64(ppArg[0]), &terms);
+ } else if( sqlite3_value_type(ppArg[0]) != SQLITE_NULL ){
+ /* An update:
+ * ppArg[0] = old rowid
+ * ppArg[1] = new rowid
+ * ppArg[2..2+v->nColumn-1] = values
+ * ppArg[2+v->nColumn] = value for magic column (we ignore this)
+ */
+ sqlite_int64 rowid = sqlite3_value_int64(ppArg[0]);
+ if( sqlite3_value_type(ppArg[1]) != SQLITE_INTEGER ||
+ sqlite3_value_int64(ppArg[1]) != rowid ){
+ rc = SQLITE_ERROR; /* we don't allow changing the rowid */
+ } else {
+ assert( nArg==2+v->nColumn+1);
+ rc = index_update(v, rowid, &ppArg[2], &terms);
+ }
+ } else {
+ /* An insert:
+ * ppArg[1] = requested rowid
+ * ppArg[2..2+v->nColumn-1] = values
+ * ppArg[2+v->nColumn] = value for magic column (we ignore this)
+ */
+ assert( nArg==2+v->nColumn+1);
+ rc = index_insert(v, ppArg[1], &ppArg[2], pRowid, &terms);
+ }
+
+ if( rc==SQLITE_OK ){
+ /* Write updated doclists to disk. */
+ for(e=fts1HashFirst(&terms); e; e=fts1HashNext(e)){
+ DocList *p = fts1HashData(e);
+ rc = index_insert_term(v, fts1HashKey(e), fts1HashKeysize(e), p);
+ if( rc!=SQLITE_OK ) break;
+ }
+ }
+
+ /* clean up */
+ for(e=fts1HashFirst(&terms); e; e=fts1HashNext(e)){
+ DocList *p = fts1HashData(e);
+ docListDelete(p);
+ }
+ fts1HashClear(&terms);
+
+ return rc;
+}
+
+/*
+** Implementation of the snippet() function for FTS1
+*/
+static void snippetFunc(
+ sqlite3_context *pContext,
+ int argc,
+ sqlite3_value **argv
+){
+ fulltext_cursor *pCursor;
+ if( argc<1 ) return;
+ if( sqlite3_value_type(argv[0])!=SQLITE_BLOB ||
+ sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){
+ sqlite3_result_error(pContext, "illegal first argument to html_snippet",-1);
+ }else{
+ const char *zStart = "<b>";
+ const char *zEnd = "</b>";
+ const char *zEllipsis = "<b>...</b>";
+ memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor));
+ if( argc>=2 ){
+ zStart = (const char*)sqlite3_value_text(argv[1]);
+ if( argc>=3 ){
+ zEnd = (const char*)sqlite3_value_text(argv[2]);
+ if( argc>=4 ){
+ zEllipsis = (const char*)sqlite3_value_text(argv[3]);
+ }
+ }
+ }
+ snippetAllOffsets(pCursor);
+ snippetText(pCursor, zStart, zEnd, zEllipsis);
+ sqlite3_result_text(pContext, pCursor->snippet.zSnippet,
+ pCursor->snippet.nSnippet, SQLITE_STATIC);
+ }
+}
+
+/*
+** Implementation of the offsets() function for FTS1
+*/
+static void snippetOffsetsFunc(
+ sqlite3_context *pContext,
+ int argc,
+ sqlite3_value **argv
+){
+ fulltext_cursor *pCursor;
+ if( argc<1 ) return;
+ if( sqlite3_value_type(argv[0])!=SQLITE_BLOB ||
+ sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){
+ sqlite3_result_error(pContext, "illegal first argument to offsets",-1);
+ }else{
+ memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor));
+ snippetAllOffsets(pCursor);
+ snippetOffsetText(&pCursor->snippet);
+ sqlite3_result_text(pContext,
+ pCursor->snippet.zOffset, pCursor->snippet.nOffset,
+ SQLITE_STATIC);
+ }
+}
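Illustrative usage, not part of the committed file: both auxiliary
functions take the "magic" column (the one named after the table) as
their first argument, e.g.

    SELECT snippet(t1), offsets(t1) FROM t1 WHERE t1 MATCH 'sqlite';

fulltextColumn() returns that column as a blob holding a pointer to the
cursor, which the two functions above unpack with memcpy().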
+
+/*
+** This routine implements the xFindFunction method for the FTS1
+** virtual table.
+*/
+static int fulltextFindFunction(
+ sqlite3_vtab *pVtab,
+ int nArg,
+ const char *zName,
+ void (**pxFunc)(sqlite3_context*,int,sqlite3_value**),
+ void **ppArg
+){
+ if( strcmp(zName,"snippet")==0 ){
+ *pxFunc = snippetFunc;
+ return 1;
+ }else if( strcmp(zName,"offsets")==0 ){
+ *pxFunc = snippetOffsetsFunc;
+ return 1;
+ }
+ return 0;
+}
+
+static const sqlite3_module fulltextModule = {
+ /* iVersion */ 0,
+ /* xCreate */ fulltextCreate,
+ /* xConnect */ fulltextConnect,
+ /* xBestIndex */ fulltextBestIndex,
+ /* xDisconnect */ fulltextDisconnect,
+ /* xDestroy */ fulltextDestroy,
+ /* xOpen */ fulltextOpen,
+ /* xClose */ fulltextClose,
+ /* xFilter */ fulltextFilter,
+ /* xNext */ fulltextNext,
+ /* xEof */ fulltextEof,
+ /* xColumn */ fulltextColumn,
+ /* xRowid */ fulltextRowid,
+ /* xUpdate */ fulltextUpdate,
+ /* xBegin */ 0,
+ /* xSync */ 0,
+ /* xCommit */ 0,
+ /* xRollback */ 0,
+ /* xFindFunction */ fulltextFindFunction,
+};
+
+int sqlite3Fts1Init(sqlite3 *db){
+ sqlite3_overload_function(db, "snippet", -1);
+ sqlite3_overload_function(db, "offsets", -1);
+ return sqlite3_create_module(db, "fts1", &fulltextModule, 0);
+}
+
+#if !SQLITE_CORE
+int sqlite3_extension_init(sqlite3 *db, char **pzErrMsg,
+ const sqlite3_api_routines *pApi){
+ SQLITE_EXTENSION_INIT2(pApi)
+ return sqlite3Fts1Init(db);
+}
+#endif
+
+#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS1) */
Added: freeswitch/trunk/libs/sqlite/ext/fts1/fts1.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/fts1.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,11 @@
+#include "sqlite3.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+int sqlite3Fts1Init(sqlite3 *db);
+
+#ifdef __cplusplus
+} /* extern "C" */
+#endif /* __cplusplus */
Added: freeswitch/trunk/libs/sqlite/ext/fts1/fts1_hash.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/fts1_hash.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,369 @@
+/*
+** 2001 September 22
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This is the implementation of generic hash-tables used in SQLite.
+** We've modified it slightly to serve as a standalone hash table
+** implementation for the full-text indexing module.
+*/
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+/*
+** The code in this file is only compiled if:
+**
+** * The FTS1 module is being built as an extension
+** (in which case SQLITE_CORE is not defined), or
+**
+** * The FTS1 module is being built into the core of
+** SQLite (in which case SQLITE_ENABLE_FTS1 is defined).
+*/
+#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS1)
+
+
+#include "fts1_hash.h"
+
+static void *malloc_and_zero(int n){
+ void *p = malloc(n);
+ if( p ){
+ memset(p, 0, n);
+ }
+ return p;
+}
+
+/* Turn bulk memory into a hash table object by initializing the
+** fields of the Hash structure.
+**
+** "pNew" is a pointer to the hash table that is to be initialized.
+** keyClass is one of the constants
+** FTS1_HASH_BINARY or FTS1_HASH_STRING. The value of keyClass
+** determines what kind of key the hash table will use. "copyKey" is
+** true if the hash table should make its own private copy of keys and
+** false if it should just use the supplied pointer.
+*/
+void sqlite3Fts1HashInit(fts1Hash *pNew, int keyClass, int copyKey){
+ assert( pNew!=0 );
+ assert( keyClass>=FTS1_HASH_STRING && keyClass<=FTS1_HASH_BINARY );
+ pNew->keyClass = keyClass;
+ pNew->copyKey = copyKey;
+ pNew->first = 0;
+ pNew->count = 0;
+ pNew->htsize = 0;
+ pNew->ht = 0;
+ pNew->xMalloc = malloc_and_zero;
+ pNew->xFree = free;
+}
+
+/* Remove all entries from a hash table. Reclaim all memory.
+** Call this routine to delete a hash table or to reset a hash table
+** to the empty state.
+*/
+void sqlite3Fts1HashClear(fts1Hash *pH){
+ fts1HashElem *elem; /* For looping over all elements of the table */
+
+ assert( pH!=0 );
+ elem = pH->first;
+ pH->first = 0;
+ if( pH->ht ) pH->xFree(pH->ht);
+ pH->ht = 0;
+ pH->htsize = 0;
+ while( elem ){
+ fts1HashElem *next_elem = elem->next;
+ if( pH->copyKey && elem->pKey ){
+ pH->xFree(elem->pKey);
+ }
+ pH->xFree(elem);
+ elem = next_elem;
+ }
+ pH->count = 0;
+}
+
+/*
+** Hash and comparison functions when the mode is FTS1_HASH_STRING
+*/
+static int strHash(const void *pKey, int nKey){
+ const char *z = (const char *)pKey;
+ int h = 0;
+ if( nKey<=0 ) nKey = (int) strlen(z);
+ while( nKey > 0 ){
+ h = (h<<3) ^ h ^ *z++;
+ nKey--;
+ }
+ return h & 0x7fffffff;
+}
+static int strCompare(const void *pKey1, int n1, const void *pKey2, int n2){
+ if( n1!=n2 ) return 1;
+ return strncmp((const char*)pKey1,(const char*)pKey2,n1);
+}
+
+/*
+** Hash and comparison functions when the mode is FTS1_HASH_BINARY
+*/
+static int binHash(const void *pKey, int nKey){
+ int h = 0;
+ const char *z = (const char *)pKey;
+ while( nKey-- > 0 ){
+ h = (h<<3) ^ h ^ *(z++);
+ }
+ return h & 0x7fffffff;
+}
+static int binCompare(const void *pKey1, int n1, const void *pKey2, int n2){
+ if( n1!=n2 ) return 1;
+ return memcmp(pKey1,pKey2,n1);
+}
+
+/*
+** Return a pointer to the appropriate hash function given the key class.
+**
+** The C syntax in this function definition may be unfamiliar to some
+** programmers, so we provide the following additional explanation:
+**
+** The name of the function is "hashFunction". The function takes a
+** single parameter "keyClass". The return value of hashFunction()
+** is a pointer to another function. Specifically, the return value
+** of hashFunction() is a pointer to a function that takes two parameters
+** with types "const void*" and "int" and returns an "int".
+*/
+static int (*hashFunction(int keyClass))(const void*,int){
+ if( keyClass==FTS1_HASH_STRING ){
+ return &strHash;
+ }else{
+ assert( keyClass==FTS1_HASH_BINARY );
+ return &binHash;
+ }
+}
+
+/*
+** Return a pointer to the appropriate comparison function given the key class.
+**
+** For help in interpreting the obscure C code in the function definition,
+** see the header comment on the previous function.
+*/
+static int (*compareFunction(int keyClass))(const void*,int,const void*,int){
+ if( keyClass==FTS1_HASH_STRING ){
+ return &strCompare;
+ }else{
+ assert( keyClass==FTS1_HASH_BINARY );
+ return &binCompare;
+ }
+}
+
+/* Link an element into the hash table
+*/
+static void insertElement(
+ fts1Hash *pH, /* The complete hash table */
+ struct _fts1ht *pEntry, /* The entry into which pNew is inserted */
+ fts1HashElem *pNew /* The element to be inserted */
+){
+ fts1HashElem *pHead; /* First element already in pEntry */
+ pHead = pEntry->chain;
+ if( pHead ){
+ pNew->next = pHead;
+ pNew->prev = pHead->prev;
+ if( pHead->prev ){ pHead->prev->next = pNew; }
+ else { pH->first = pNew; }
+ pHead->prev = pNew;
+ }else{
+ pNew->next = pH->first;
+ if( pH->first ){ pH->first->prev = pNew; }
+ pNew->prev = 0;
+ pH->first = pNew;
+ }
+ pEntry->count++;
+ pEntry->chain = pNew;
+}
+
+
+/* Resize the hash table so that it contains "new_size" buckets.
+** "new_size" must be a power of 2. The hash table might fail
+** to resize if the xMalloc() allocator fails.
+*/
+static void rehash(fts1Hash *pH, int new_size){
+ struct _fts1ht *new_ht; /* The new hash table */
+ fts1HashElem *elem, *next_elem; /* For looping over existing elements */
+ int (*xHash)(const void*,int); /* The hash function */
+
+ assert( (new_size & (new_size-1))==0 );
+ new_ht = (struct _fts1ht *)pH->xMalloc( new_size*sizeof(struct _fts1ht) );
+ if( new_ht==0 ) return;
+ if( pH->ht ) pH->xFree(pH->ht);
+ pH->ht = new_ht;
+ pH->htsize = new_size;
+ xHash = hashFunction(pH->keyClass);
+ for(elem=pH->first, pH->first=0; elem; elem = next_elem){
+ int h = (*xHash)(elem->pKey, elem->nKey) & (new_size-1);
+ next_elem = elem->next;
+ insertElement(pH, &new_ht[h], elem);
+ }
+}
+
+/* This function (for internal use only) locates an element in a
+** hash table that matches the given key. The hash for this key has
+** already been computed and is passed as the 4th parameter.
+*/
+static fts1HashElem *findElementGivenHash(
+ const fts1Hash *pH, /* The pH to be searched */
+ const void *pKey, /* The key we are searching for */
+ int nKey,
+ int h /* The hash for this key. */
+){
+ fts1HashElem *elem; /* Used to loop thru the element list */
+ int count; /* Number of elements left to test */
+ int (*xCompare)(const void*,int,const void*,int); /* comparison function */
+
+ if( pH->ht ){
+ struct _fts1ht *pEntry = &pH->ht[h];
+ elem = pEntry->chain;
+ count = pEntry->count;
+ xCompare = compareFunction(pH->keyClass);
+ while( count-- && elem ){
+ if( (*xCompare)(elem->pKey,elem->nKey,pKey,nKey)==0 ){
+ return elem;
+ }
+ elem = elem->next;
+ }
+ }
+ return 0;
+}
+
+/* Remove a single entry from the hash table given a pointer to that
+** element and a hash on the element's key.
+*/
+static void removeElementGivenHash(
+ fts1Hash *pH, /* The pH containing "elem" */
+ fts1HashElem* elem, /* The element to be removed from the pH */
+ int h /* Hash value for the element */
+){
+ struct _fts1ht *pEntry;
+ if( elem->prev ){
+ elem->prev->next = elem->next;
+ }else{
+ pH->first = elem->next;
+ }
+ if( elem->next ){
+ elem->next->prev = elem->prev;
+ }
+ pEntry = &pH->ht[h];
+ if( pEntry->chain==elem ){
+ pEntry->chain = elem->next;
+ }
+ pEntry->count--;
+ if( pEntry->count<=0 ){
+ pEntry->chain = 0;
+ }
+ if( pH->copyKey && elem->pKey ){
+ pH->xFree(elem->pKey);
+ }
+ pH->xFree( elem );
+ pH->count--;
+ if( pH->count<=0 ){
+ assert( pH->first==0 );
+ assert( pH->count==0 );
+ fts1HashClear(pH);
+ }
+}
+
+/* Attempt to locate an element of the hash table pH with a key
+** that matches pKey,nKey. Return the data for this element if it is
+** found, or NULL if there is no match.
+*/
+void *sqlite3Fts1HashFind(const fts1Hash *pH, const void *pKey, int nKey){
+ int h; /* A hash on key */
+ fts1HashElem *elem; /* The element that matches key */
+ int (*xHash)(const void*,int); /* The hash function */
+
+ if( pH==0 || pH->ht==0 ) return 0;
+ xHash = hashFunction(pH->keyClass);
+ assert( xHash!=0 );
+ h = (*xHash)(pKey,nKey);
+ assert( (pH->htsize & (pH->htsize-1))==0 );
+ elem = findElementGivenHash(pH,pKey,nKey, h & (pH->htsize-1));
+ return elem ? elem->data : 0;
+}
+
+/* Insert an element into the hash table pH. The key is pKey,nKey
+** and the data is "data".
+**
+** If no element exists with a matching key, then a new
+** element is created. A copy of the key is made if the copyKey
+** flag is set. NULL is returned.
+**
+** If another element already exists with the same key, then the
+** new data replaces the old data and the old data is returned.
+** The key is not copied in this instance. If a malloc fails, then
+** the new data is returned and the hash table is unchanged.
+**
+** If the "data" parameter to this function is NULL, then the
+** element corresponding to "key" is removed from the hash table.
+*/
+void *sqlite3Fts1HashInsert(
+ fts1Hash *pH, /* The hash table to insert into */
+ const void *pKey, /* The key */
+ int nKey, /* Number of bytes in the key */
+ void *data /* The data */
+){
+ int hraw; /* Raw hash value of the key */
+ int h; /* the hash of the key modulo hash table size */
+ fts1HashElem *elem; /* Used to loop thru the element list */
+ fts1HashElem *new_elem; /* New element added to the pH */
+ int (*xHash)(const void*,int); /* The hash function */
+
+ assert( pH!=0 );
+ xHash = hashFunction(pH->keyClass);
+ assert( xHash!=0 );
+ hraw = (*xHash)(pKey, nKey);
+ assert( (pH->htsize & (pH->htsize-1))==0 );
+ h = hraw & (pH->htsize-1);
+ elem = findElementGivenHash(pH,pKey,nKey,h);
+ if( elem ){
+ void *old_data = elem->data;
+ if( data==0 ){
+ removeElementGivenHash(pH,elem,h);
+ }else{
+ elem->data = data;
+ }
+ return old_data;
+ }
+ if( data==0 ) return 0;
+ new_elem = (fts1HashElem*)pH->xMalloc( sizeof(fts1HashElem) );
+ if( new_elem==0 ) return data;
+ if( pH->copyKey && pKey!=0 ){
+ new_elem->pKey = pH->xMalloc( nKey );
+ if( new_elem->pKey==0 ){
+ pH->xFree(new_elem);
+ return data;
+ }
+ memcpy((void*)new_elem->pKey, pKey, nKey);
+ }else{
+ new_elem->pKey = (void*)pKey;
+ }
+ new_elem->nKey = nKey;
+ pH->count++;
+ if( pH->htsize==0 ){
+ rehash(pH,8);
+ if( pH->htsize==0 ){
+ pH->count = 0;
+ pH->xFree(new_elem);
+ return data;
+ }
+ }
+ if( pH->count > pH->htsize ){
+ rehash(pH,pH->htsize*2);
+ }
+ assert( pH->htsize>0 );
+ assert( (pH->htsize & (pH->htsize-1))==0 );
+ h = hraw & (pH->htsize-1);
+ insertElement(pH, &pH->ht[h], new_elem);
+ new_elem->data = data;
+ return 0;
+}
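A minimal usage sketch of this hash API (hypothetical helper, not part of
the committed file), mirroring how fts1.c collects per-term doclists:

    static void exampleHashUsage(void *pDoclist){
      fts1Hash h;
      sqlite3Fts1HashInit(&h, FTS1_HASH_STRING, 1);   /* copyKey: key is copied */
      sqlite3Fts1HashInsert(&h, "term", 4, pDoclist); /* NULL data would delete */
      assert( sqlite3Fts1HashFind(&h, "term", 4)==pDoclist );
      sqlite3Fts1HashClear(&h);                       /* frees elements + keys */
    }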
+
+#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS1) */
Added: freeswitch/trunk/libs/sqlite/ext/fts1/fts1_hash.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/fts1_hash.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,112 @@
+/*
+** 2001 September 22
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This is the header file for the generic hash-table implementation
+** used in SQLite. We've modified it slightly to serve as a standalone
+** hash table implementation for the full-text indexing module.
+**
+*/
+#ifndef _FTS1_HASH_H_
+#define _FTS1_HASH_H_
+
+/* Forward declarations of structures. */
+typedef struct fts1Hash fts1Hash;
+typedef struct fts1HashElem fts1HashElem;
+
+/* A complete hash table is an instance of the following structure.
+** The internals of this structure are intended to be opaque -- client
+** code should not attempt to access or modify the fields of this structure
+** directly. Change this structure only by using the routines below.
+** However, many of the "procedures" and "functions" for modifying and
+** accessing this structure are really macros, so we can't really make
+** this structure opaque.
+*/
+struct fts1Hash {
+ char keyClass; /* HASH_INT, _POINTER, _STRING, _BINARY */
+ char copyKey; /* True if copy of key made on insert */
+ int count; /* Number of entries in this table */
+ fts1HashElem *first; /* The first element of the array */
+ void *(*xMalloc)(int); /* malloc() function to use */
+ void (*xFree)(void *); /* free() function to use */
+ int htsize; /* Number of buckets in the hash table */
+ struct _fts1ht { /* the hash table */
+ int count; /* Number of entries with this hash */
+ fts1HashElem *chain; /* Pointer to first entry with this hash */
+ } *ht;
+};
+
+/* Each element in the hash table is an instance of the following
+** structure. All elements are stored on a single doubly-linked list.
+**
+** Again, this structure is intended to be opaque, but it can't really
+** be opaque because it is used by macros.
+*/
+struct fts1HashElem {
+ fts1HashElem *next, *prev; /* Next and previous elements in the table */
+ void *data; /* Data associated with this element */
+ void *pKey; int nKey; /* Key associated with this element */
+};
+
+/*
+** There are 2 different modes of operation for a hash table:
+**
+** FTS1_HASH_STRING pKey points to a string that is nKey bytes long
+** (including the null-terminator, if any). Case
+** is respected in comparisons.
+**
+** FTS1_HASH_BINARY pKey points to binary data nKey bytes long.
+** memcmp() is used to compare keys.
+**
+** A copy of the key is made if the copyKey parameter to fts1HashInit is 1.
+*/
+#define FTS1_HASH_STRING 1
+#define FTS1_HASH_BINARY 2
+
+/*
+** Access routines. To delete, insert a NULL pointer.
+*/
+void sqlite3Fts1HashInit(fts1Hash*, int keytype, int copyKey);
+void *sqlite3Fts1HashInsert(fts1Hash*, const void *pKey, int nKey, void *pData);
+void *sqlite3Fts1HashFind(const fts1Hash*, const void *pKey, int nKey);
+void sqlite3Fts1HashClear(fts1Hash*);
+
+/*
+** Shorthand for the functions above
+*/
+#define fts1HashInit sqlite3Fts1HashInit
+#define fts1HashInsert sqlite3Fts1HashInsert
+#define fts1HashFind sqlite3Fts1HashFind
+#define fts1HashClear sqlite3Fts1HashClear
+
+/*
+** Macros for looping over all elements of a hash table. The idiom is
+** like this:
+**
+** fts1Hash h;
+** fts1HashElem *p;
+** ...
+** for(p=fts1HashFirst(&h); p; p=fts1HashNext(p)){
+** SomeStructure *pData = fts1HashData(p);
+** // do something with pData
+** }
+*/
+#define fts1HashFirst(H) ((H)->first)
+#define fts1HashNext(E) ((E)->next)
+#define fts1HashData(E) ((E)->data)
+#define fts1HashKey(E) ((E)->pKey)
+#define fts1HashKeysize(E) ((E)->nKey)
+
+/*
+** Number of entries in a hash table
+*/
+#define fts1HashCount(H) ((H)->count)
+
+#endif /* _FTS1_HASH_H_ */
Added: freeswitch/trunk/libs/sqlite/ext/fts1/fts1_porter.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/fts1_porter.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,645 @@
+/*
+** 2006 September 30
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Implementation of the full-text-search tokenizer that implements
+** a Porter stemmer.
+*/
+
+/*
+** The code in this file is only compiled if:
+**
+** * The FTS1 module is being built as an extension
+** (in which case SQLITE_CORE is not defined), or
+**
+** * The FTS1 module is being built into the core of
+** SQLite (in which case SQLITE_ENABLE_FTS1 is defined).
+*/
+#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS1)
+
+
+#include <assert.h>
+#if !defined(__APPLE__)
+#include <malloc.h>
+#else
+#include <stdlib.h>
+#endif
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+
+#include "fts1_tokenizer.h"
+
+/*
+** Class derived from sqlite3_tokenizer
+*/
+typedef struct porter_tokenizer {
+ sqlite3_tokenizer base; /* Base class */
+} porter_tokenizer;
+
+/*
+** Class derived from sqlite3_tokenizer_cursor
+*/
+typedef struct porter_tokenizer_cursor {
+ sqlite3_tokenizer_cursor base;
+ const char *zInput; /* input we are tokenizing */
+ int nInput; /* size of the input */
+ int iOffset; /* current position in zInput */
+ int iToken; /* index of next token to be returned */
+ char *zToken; /* storage for current token */
+ int nAllocated; /* space allocated to zToken buffer */
+} porter_tokenizer_cursor;
+
+
+/* Forward declaration */
+static const sqlite3_tokenizer_module porterTokenizerModule;
+
+
+/*
+** Create a new tokenizer instance.
+*/
+static int porterCreate(
+ int argc, const char * const *argv,
+ sqlite3_tokenizer **ppTokenizer
+){
+ porter_tokenizer *t;
+ int i;
+
+for(i=0; i<argc; i++) printf("argv[%d] = %s\n", i, argv[i]);
+ t = (porter_tokenizer *) calloc(sizeof(porter_tokenizer), 1);
+ *ppTokenizer = &t->base;
+ return SQLITE_OK;
+}
+
+/*
+** Destroy a tokenizer
+*/
+static int porterDestroy(sqlite3_tokenizer *pTokenizer){
+ free(pTokenizer);
+ return SQLITE_OK;
+}
+
+/*
+** Prepare to begin tokenizing a particular string. The input
+** string to be tokenized is zInput[0..nInput-1]. A cursor
+** used to incrementally tokenize this string is returned in
+** *ppCursor.
+*/
+static int porterOpen(
+ sqlite3_tokenizer *pTokenizer, /* The tokenizer */
+ const char *zInput, int nInput, /* String to be tokenized */
+ sqlite3_tokenizer_cursor **ppCursor /* OUT: Tokenization cursor */
+){
+ porter_tokenizer_cursor *c;
+
+ c = (porter_tokenizer_cursor *) malloc(sizeof(porter_tokenizer_cursor));
+ c->zInput = zInput;
+ if( zInput==0 ){
+ c->nInput = 0;
+ }else if( nInput<0 ){
+ c->nInput = (int)strlen(zInput);
+ }else{
+ c->nInput = nInput;
+ }
+ c->iOffset = 0; /* start tokenizing at the beginning */
+ c->iToken = 0;
+ c->zToken = NULL; /* no space allocated, yet. */
+ c->nAllocated = 0;
+
+ *ppCursor = &c->base;
+ return SQLITE_OK;
+}
+
+/*
+** Close a tokenization cursor previously opened by a call to
+** porterOpen() above.
+*/
+static int porterClose(sqlite3_tokenizer_cursor *pCursor){
+ porter_tokenizer_cursor *c = (porter_tokenizer_cursor *) pCursor;
+ free(c->zToken);
+ free(c);
+ return SQLITE_OK;
+}
+/*
+** Vowel or consonant
+*/
+static const char cType[] = {
+ 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0,
+ 1, 1, 1, 2, 1
+};
+
+/*
+** isConsonant() and isVowel() determine whether the first character of
+** the string they point to is a consonant or a vowel, according
+** to Porter rules.
+**
+** A consonant is any letter other than 'a', 'e', 'i', 'o', or 'u'.
+** 'Y' is a consonant unless it follows another consonant,
+** in which case it is a vowel.
+**
+** In these routines, the letters are in reverse order. So the 'y' rule
+** is that 'y' is a consonant unless it is followed by another
+** consonant.
+*/
+static int isVowel(const char*);
+static int isConsonant(const char *z){
+ int j;
+ char x = *z;
+ if( x==0 ) return 0;
+ assert( x>='a' && x<='z' );
+ j = cType[x-'a'];
+ if( j<2 ) return j;
+ return z[1]==0 || isVowel(z + 1);
+}
+static int isVowel(const char *z){
+ int j;
+ char x = *z;
+ if( x==0 ) return 0;
+ assert( x>='a' && x<='z' );
+ j = cType[x-'a'];
+ if( j<2 ) return 1-j;
+ return isConsonant(z + 1);
+}
+
+/*
+** Let any sequence of one or more vowels be represented by V and let
+** C be a sequence of one or more consonants. Then every word can be
+** represented as:
+**
+** [C] (VC){m} [V]
+**
+** In prose: A word is an optional consonant sequence followed by zero or
+** more vowel-consonant pairs followed by an optional vowel. "m" is the
+** number of vowel-consonant pairs. This routine tests the value
+** of m for the word z[].
+**
+** Return true if the m-value for z is 1 or more. In other words,
+** return true if z contains at least one vowel that is followed
+** by a consonant.
+**
+** In this routine z[] is in reverse order. So we are really looking
+** for an instance of of a consonant followed by a vowel.
+*/
+static int m_gt_0(const char *z){
+ while( isVowel(z) ){ z++; }
+ if( *z==0 ) return 0;
+ while( isConsonant(z) ){ z++; }
+ return *z!=0;
+}
+
+/* Like m_gt_0() above except we are looking for a value of m which is
+** exactly 1.
+*/
+static int m_eq_1(const char *z){
+ while( isVowel(z) ){ z++; }
+ if( *z==0 ) return 0;
+ while( isConsonant(z) ){ z++; }
+ if( *z==0 ) return 0;
+ while( isVowel(z) ){ z++; }
+ if( *z==0 ) return 1;
+ while( isConsonant(z) ){ z++; }
+ return *z==0;
+}
+
+/* Like m_gt_0() above except we are looking for a value of m>1 instead
+** of m>0.
+*/
+static int m_gt_1(const char *z){
+ while( isVowel(z) ){ z++; }
+ if( *z==0 ) return 0;
+ while( isConsonant(z) ){ z++; }
+ if( *z==0 ) return 0;
+ while( isVowel(z) ){ z++; }
+ if( *z==0 ) return 0;
+ while( isConsonant(z) ){ z++; }
+ return *z!=0;
+}
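+
+/*
+** A few worked examples of the m-value tests above (words shown in
+** forward order, then as the reversed strings these routines actually
+** see): "tree" is C(VC){0}V, so m_gt_0("eert")==0; "trouble" is
+** C(VC){1}V, so m_gt_0("elbuort") and m_eq_1("elbuort") are both true;
+** "oaten" is (VC){2}, so m_gt_1("netao") is true.
+*/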
+
+/*
+** Return TRUE if there is a vowel anywhere within the string z[].
+*/
+static int hasVowel(const char *z){
+ while( isConsonant(z) ){ z++; }
+ return *z!=0;
+}
+
+/*
+** Return TRUE if the word ends in a double consonant.
+**
+** The text is reversed here. So we are really looking at
+** the first two characters of z[].
+*/
+static int doubleConsonant(const char *z){
+ return isConsonant(z) && z[0]==z[1] && isConsonant(z+1);
+}
+
+/*
+** Return TRUE if the word ends with three letters which
+** are consonant-vowel-consonant and where the final consonant
+** is not 'w', 'x', or 'y'.
+**
+** The word is reversed here. So we are really checking the
+** first three letters and the first one cannot be in [wxy].
+*/
+static int star_oh(const char *z){
+ return
+ z[0]!=0 && isConsonant(z) &&
+ z[0]!='w' && z[0]!='x' && z[0]!='y' &&
+ z[1]!=0 && isVowel(z+1) &&
+ z[2]!=0 && isConsonant(z+2);
+}
+
+/*
+** If the word ends with zFrom and xCond() is true for the stem
+** of the word that precedes the zFrom ending, then change the
+** ending to zTo.
+**
+** The input word *pz and zFrom are both in reverse order. zTo
+** is in normal order.
+**
+** Return TRUE if zFrom matches. Return FALSE if zFrom does not
+** match. Note that TRUE is returned even if xCond() fails and
+** no substitution occurs.
+*/
+static int stem(
+ char **pz, /* The word being stemmed (Reversed) */
+ const char *zFrom, /* If the ending matches this... (Reversed) */
+ const char *zTo, /* ... change the ending to this (not reversed) */
+ int (*xCond)(const char*) /* Condition that must be true */
+){
+ char *z = *pz;
+ while( *zFrom && *zFrom==*z ){ z++; zFrom++; }
+ if( *zFrom!=0 ) return 0;
+ if( xCond && !xCond(z) ) return 1;
+ while( *zTo ){
+ *(--z) = *(zTo++);
+ }
+ *pz = z;
+ return 1;
+}
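+
+/*
+** For example, the Step 3 call stem(&z, "ssen", "", m_gt_0) below,
+** applied to the reversed word "ssenippah" ("happiness"), strips the
+** matched "ssen" ("ness") suffix and leaves "ippah" ("happi"), because
+** m_gt_0("ippah") is true.  Applied to "ssen" alone it still returns
+** TRUE, but makes no change because m_gt_0("") fails.
+*/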
+
+/*
+** This is the fallback stemmer used when the porter stemmer is
+** inappropriate. The input word is copied into the output with
+** US-ASCII case folding. If the input word is too long (more
+** than 20 bytes if it contains no digits or more than 6 bytes if
+** it contains digits) then the word is truncated to 20 or 6 bytes
+** by taking 10 or 3 bytes from the beginning and end.
+*/
+static void copy_stemmer(const char *zIn, int nIn, char *zOut, int *pnOut){
+ int i, mx, j;
+ int hasDigit = 0;
+ for(i=0; i<nIn; i++){
+ int c = zIn[i];
+ if( c>='A' && c<='Z' ){
+ zOut[i] = c - 'A' + 'a';
+ }else{
+ if( c>='0' && c<='9' ) hasDigit = 1;
+ zOut[i] = c;
+ }
+ }
+ mx = hasDigit ? 3 : 10;
+ if( nIn>mx*2 ){
+ for(j=mx, i=nIn-mx; i<nIn; i++, j++){
+ zOut[j] = zOut[i];
+ }
+ i = j;
+ }
+ zOut[i] = 0;
+ *pnOut = i;
+}
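+
+/*
+** For example, "Supercalifragilisticexpialidocious" (34 bytes, no
+** digits) comes out as "supercalifalidocious": the first 10 and last 10
+** bytes, folded to lower case.  If the word had contained a digit, only
+** the first 3 and last 3 bytes would have been kept.
+*/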
+
+
+/*
+** Stem the input word zIn[0..nIn-1]. Store the output in zOut.
+** zOut is at least big enough to hold nIn bytes. Write the actual
+** size of the output word (exclusive of the '\0' terminator) into *pnOut.
+**
+** Any upper-case characters in the US-ASCII character set ([A-Z])
+** are converted to lower case. Upper-case UTF characters are
+** unchanged.
+**
+** Words that are longer than about 20 bytes are stemmed by retaining
+** a few bytes from the beginning and the end of the word. If the
+** word contains digits, 3 bytes are taken from the beginning and
+** 3 bytes from the end. For long words without digits, 10 bytes
+** are taken from each end. US-ASCII case folding still applies.
+**
+** If the input word contains no digits but does contain characters not
+** in [a-zA-Z] then no stemming is attempted and this routine just
+** copies the input into the output with US-ASCII
+** case folding.
+**
+** Stemming never increases the length of the word. So there is
+** no chance of overflowing the zOut buffer.
+*/
+static void porter_stemmer(const char *zIn, int nIn, char *zOut, int *pnOut){
+ int i, j, c;
+ char zReverse[28];
+ char *z, *z2;
+ if( nIn<3 || nIn>=sizeof(zReverse)-7 ){
+ /* The word is too big or too small for the porter stemmer.
+ ** Fallback to the copy stemmer */
+ copy_stemmer(zIn, nIn, zOut, pnOut);
+ return;
+ }
+ for(i=0, j=sizeof(zReverse)-6; i<nIn; i++, j--){
+ c = zIn[i];
+ if( c>='A' && c<='Z' ){
+ zReverse[j] = c + 'a' - 'A';
+ }else if( c>='a' && c<='z' ){
+ zReverse[j] = c;
+ }else{
+ /* The use of a character not in [a-zA-Z] means that we fallback
+ ** to the copy stemmer */
+ copy_stemmer(zIn, nIn, zOut, pnOut);
+ return;
+ }
+ }
+ memset(&zReverse[sizeof(zReverse)-5], 0, 5);
+ z = &zReverse[j+1];
+
+
+ /* Step 1a */
+ if( z[0]=='s' ){
+ if(
+ !stem(&z, "sess", "ss", 0) &&
+ !stem(&z, "sei", "i", 0) &&
+ !stem(&z, "ss", "ss", 0)
+ ){
+ z++;
+ }
+ }
+
+ /* Step 1b */
+ z2 = z;
+ if( stem(&z, "dee", "ee", m_gt_0) ){
+ /* Do nothing. The work was all in the test */
+ }else if(
+ (stem(&z, "gni", "", hasVowel) || stem(&z, "de", "", hasVowel))
+ && z!=z2
+ ){
+ if( stem(&z, "ta", "ate", 0) ||
+ stem(&z, "lb", "ble", 0) ||
+ stem(&z, "zi", "ize", 0) ){
+ /* Do nothing. The work was all in the test */
+ }else if( doubleConsonant(z) && (*z!='l' && *z!='s' && *z!='z') ){
+ z++;
+ }else if( m_eq_1(z) && star_oh(z) ){
+ *(--z) = 'e';
+ }
+ }
+
+ /* Step 1c */
+ if( z[0]=='y' && hasVowel(z+1) ){
+ z[0] = 'i';
+ }
+
+ /* Step 2 */
+ switch( z[1] ){
+ case 'a':
+ stem(&z, "lanoita", "ate", m_gt_0) ||
+ stem(&z, "lanoit", "tion", m_gt_0);
+ break;
+ case 'c':
+ stem(&z, "icne", "ence", m_gt_0) ||
+ stem(&z, "icna", "ance", m_gt_0);
+ break;
+ case 'e':
+ stem(&z, "rezi", "ize", m_gt_0);
+ break;
+ case 'g':
+ stem(&z, "igol", "log", m_gt_0);
+ break;
+ case 'l':
+ stem(&z, "ilb", "ble", m_gt_0) ||
+ stem(&z, "illa", "al", m_gt_0) ||
+ stem(&z, "iltne", "ent", m_gt_0) ||
+ stem(&z, "ile", "e", m_gt_0) ||
+ stem(&z, "ilsuo", "ous", m_gt_0);
+ break;
+ case 'o':
+ stem(&z, "noitazi", "ize", m_gt_0) ||
+ stem(&z, "noita", "ate", m_gt_0) ||
+ stem(&z, "rota", "ate", m_gt_0);
+ break;
+ case 's':
+ stem(&z, "msila", "al", m_gt_0) ||
+ stem(&z, "ssenevi", "ive", m_gt_0) ||
+ stem(&z, "ssenluf", "ful", m_gt_0) ||
+ stem(&z, "ssensuo", "ous", m_gt_0);
+ break;
+ case 't':
+ stem(&z, "itila", "al", m_gt_0) ||
+ stem(&z, "itivi", "ive", m_gt_0) ||
+ stem(&z, "itilib", "ble", m_gt_0);
+ break;
+ }
+
+ /* Step 3 */
+ switch( z[0] ){
+ case 'e':
+ stem(&z, "etaci", "ic", m_gt_0) ||
+ stem(&z, "evita", "", m_gt_0) ||
+ stem(&z, "ezila", "al", m_gt_0);
+ break;
+ case 'i':
+ stem(&z, "itici", "ic", m_gt_0);
+ break;
+ case 'l':
+ stem(&z, "laci", "ic", m_gt_0) ||
+ stem(&z, "luf", "", m_gt_0);
+ break;
+ case 's':
+ stem(&z, "ssen", "", m_gt_0);
+ break;
+ }
+
+ /* Step 4 */
+ switch( z[1] ){
+ case 'a':
+ if( z[0]=='l' && m_gt_1(z+2) ){
+ z += 2;
+ }
+ break;
+ case 'c':
+ if( z[0]=='e' && z[2]=='n' && (z[3]=='a' || z[3]=='e') && m_gt_1(z+4) ){
+ z += 4;
+ }
+ break;
+ case 'e':
+ if( z[0]=='r' && m_gt_1(z+2) ){
+ z += 2;
+ }
+ break;
+ case 'i':
+ if( z[0]=='c' && m_gt_1(z+2) ){
+ z += 2;
+ }
+ break;
+ case 'l':
+ if( z[0]=='e' && z[2]=='b' && (z[3]=='a' || z[3]=='i') && m_gt_1(z+4) ){
+ z += 4;
+ }
+ break;
+ case 'n':
+ if( z[0]=='t' ){
+ if( z[2]=='a' ){
+ if( m_gt_1(z+3) ){
+ z += 3;
+ }
+ }else if( z[2]=='e' ){
+ stem(&z, "tneme", "", m_gt_1) ||
+ stem(&z, "tnem", "", m_gt_1) ||
+ stem(&z, "tne", "", m_gt_1);
+ }
+ }
+ break;
+ case 'o':
+ if( z[0]=='u' ){
+ if( m_gt_1(z+2) ){
+ z += 2;
+ }
+ }else if( z[3]=='s' || z[3]=='t' ){
+ stem(&z, "noi", "", m_gt_1);
+ }
+ break;
+ case 's':
+ if( z[0]=='m' && z[2]=='i' && m_gt_1(z+3) ){
+ z += 3;
+ }
+ break;
+ case 't':
+ stem(&z, "eta", "", m_gt_1) ||
+ stem(&z, "iti", "", m_gt_1);
+ break;
+ case 'u':
+ if( z[0]=='s' && z[2]=='o' && m_gt_1(z+3) ){
+ z += 3;
+ }
+ break;
+ case 'v':
+ case 'z':
+ if( z[0]=='e' && z[2]=='i' && m_gt_1(z+3) ){
+ z += 3;
+ }
+ break;
+ }
+
+ /* Step 5a */
+ if( z[0]=='e' ){
+ if( m_gt_1(z+1) ){
+ z++;
+ }else if( m_eq_1(z+1) && !star_oh(z+1) ){
+ z++;
+ }
+ }
+
+ /* Step 5b */
+ if( m_gt_1(z) && z[0]=='l' && z[1]=='l' ){
+ z++;
+ }
+
+ /* z[] is now the stemmed word in reverse order. Flip it back
+ ** around into forward order and return.
+ */
+ *pnOut = i = strlen(z);
+ zOut[i] = 0;
+ while( *z ){
+ zOut[--i] = *(z++);
+ }
+}
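+
+/*
+** As an end-to-end example of the routine above: the input "Connection"
+** is case-folded and reversed, Step 4 strips the "ion" suffix because
+** m("connect")>1, and the output written to zOut is "connect" with
+** *pnOut==7.
+*/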
+
+/*
+** Characters that can be part of a token. We assume any character
+** whose value is 0x80 or greater (any non-ASCII UTF-8 byte) can be
+** part of a token. In other words, delimiters all must have
+** values of 0x7f or lower.
+*/
+const char isIdChar[] = {
+/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 3x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 4x */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, /* 5x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 6x */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 7x */
+};
+#define idChar(C) (((ch=C)&0x80)!=0 || (ch>0x2f && isIdChar[ch-0x30]))
+#define isDelim(C) (((ch=C)&0x80)==0 && (ch<0x30 || !isIdChar[ch-0x30]))
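+
+/*
+** With the table and macros above, the token characters are exactly
+** [0-9], [A-Z], [a-z], '_' and any byte with the high bit set; every
+** other byte is a delimiter.
+*/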
+
+/*
+** Extract the next token from a tokenization cursor. The cursor must
+** have been opened by a prior call to porterOpen().
+*/
+static int porterNext(
+ sqlite3_tokenizer_cursor *pCursor, /* Cursor returned by porterOpen */
+ const char **pzToken, /* OUT: *pzToken is the token text */
+ int *pnBytes, /* OUT: Number of bytes in token */
+ int *piStartOffset, /* OUT: Starting offset of token */
+ int *piEndOffset, /* OUT: Ending offset of token */
+ int *piPosition /* OUT: Position integer of token */
+){
+ porter_tokenizer_cursor *c = (porter_tokenizer_cursor *) pCursor;
+ const char *z = c->zInput;
+
+ while( c->iOffset<c->nInput ){
+ int iStartOffset, ch;
+
+ /* Scan past delimiter characters */
+ while( c->iOffset<c->nInput && isDelim(z[c->iOffset]) ){
+ c->iOffset++;
+ }
+
+ /* Count non-delimiter characters. */
+ iStartOffset = c->iOffset;
+ while( c->iOffset<c->nInput && !isDelim(z[c->iOffset]) ){
+ c->iOffset++;
+ }
+
+ if( c->iOffset>iStartOffset ){
+ int n = c->iOffset-iStartOffset;
+ if( n>c->nAllocated ){
+ c->nAllocated = n+20;
+ c->zToken = realloc(c->zToken, c->nAllocated);
+ }
+ porter_stemmer(&z[iStartOffset], n, c->zToken, pnBytes);
+ *pzToken = c->zToken;
+ *piStartOffset = iStartOffset;
+ *piEndOffset = c->iOffset;
+ *piPosition = c->iToken++;
+ return SQLITE_OK;
+ }
+ }
+ return SQLITE_DONE;
+}
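+
+/*
+** For example, on the input "Stemming words" the first call to
+** porterNext() reports the token text "stem" with *pnBytes==4, but
+** *piStartOffset==0 and *piEndOffset==8: the offsets always describe
+** the span of the original input, while the token text is the stemmed,
+** case-folded form.
+*/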
+
+/*
+** The set of routines that implement the porter-stemmer tokenizer
+*/
+static const sqlite3_tokenizer_module porterTokenizerModule = {
+ 0,
+ porterCreate,
+ porterDestroy,
+ porterOpen,
+ porterClose,
+ porterNext,
+};
+
+/*
+** Allocate a new porter tokenizer. Return a pointer to the new
+** tokenizer in *ppModule
+*/
+void sqlite3Fts1PorterTokenizerModule(
+ sqlite3_tokenizer_module const**ppModule
+){
+ *ppModule = &porterTokenizerModule;
+}
+
+#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS1) */
Added: freeswitch/trunk/libs/sqlite/ext/fts1/fts1_tokenizer.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/fts1_tokenizer.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,90 @@
+/*
+** 2006 July 10
+**
+** The author disclaims copyright to this source code.
+**
+*************************************************************************
+** Defines the interface to tokenizers used by fulltext-search. There
+** are three basic components:
+**
+** sqlite3_tokenizer_module is a singleton defining the tokenizer
+** interface functions. This is essentially the class structure for
+** tokenizers.
+**
+** sqlite3_tokenizer is used to define a particular tokenizer, perhaps
+** including customization information defined at creation time.
+**
+** sqlite3_tokenizer_cursor is created by a tokenizer to iterate over
+** the tokens of a particular input.
+*/
+#ifndef _FTS1_TOKENIZER_H_
+#define _FTS1_TOKENIZER_H_
+
+/* TODO(shess) Only used for SQLITE_OK and SQLITE_DONE at this time.
+** If tokenizers are to be allowed to call sqlite3_*() functions, then
+** we will need a way to register the API consistently.
+*/
+#include "sqlite3.h"
+
+/*
+** Structures used by the tokenizer interface.
+*/
+typedef struct sqlite3_tokenizer sqlite3_tokenizer;
+typedef struct sqlite3_tokenizer_cursor sqlite3_tokenizer_cursor;
+typedef struct sqlite3_tokenizer_module sqlite3_tokenizer_module;
+
+struct sqlite3_tokenizer_module {
+ int iVersion; /* currently 0 */
+
+ /*
+ ** Create and destroy a tokenizer. argc/argv are passed down from
+ ** the fulltext virtual table creation to allow customization.
+ */
+ int (*xCreate)(int argc, const char *const*argv,
+ sqlite3_tokenizer **ppTokenizer);
+ int (*xDestroy)(sqlite3_tokenizer *pTokenizer);
+
+ /*
+ ** Tokenize a particular input. Call xOpen() to prepare to
+ ** tokenize, xNext() repeatedly until it returns SQLITE_DONE, then
+ ** xClose() to free any internal state. The pInput passed to
+ ** xOpen() must exist until the cursor is closed. The ppToken
+ ** result from xNext() is only valid until the next call to xNext()
+ ** or until xClose() is called.
+ */
+ /* TODO(shess) current implementation requires pInput to be
+ ** nul-terminated. This should either be fixed, or pInput/nBytes
+ ** should be converted to zInput.
+ */
+ int (*xOpen)(sqlite3_tokenizer *pTokenizer,
+ const char *pInput, int nBytes,
+ sqlite3_tokenizer_cursor **ppCursor);
+ int (*xClose)(sqlite3_tokenizer_cursor *pCursor);
+ int (*xNext)(sqlite3_tokenizer_cursor *pCursor,
+ const char **ppToken, int *pnBytes,
+ int *piStartOffset, int *piEndOffset, int *piPosition);
+};
+
+struct sqlite3_tokenizer {
+ const sqlite3_tokenizer_module *pModule; /* The module for this tokenizer */
+ /* Tokenizer implementations will typically add additional fields */
+};
+
+struct sqlite3_tokenizer_cursor {
+ sqlite3_tokenizer *pTokenizer; /* Tokenizer for this cursor. */
+ /* Tokenizer implementations will typically add additional fields */
+};
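+
+/*
+** A minimal sketch of how a caller is expected to drive this interface
+** (error handling omitted; pModule is assumed to have been obtained
+** from one of the registration functions declared below, and the caller
+** fills in the pModule/pTokenizer back-pointers itself):
+**
+**   sqlite3_tokenizer *pTok;
+**   sqlite3_tokenizer_cursor *pCsr;
+**   const char *zTok; int nTok, iStart, iEnd, iPos;
+**
+**   pModule->xCreate(0, NULL, &pTok);
+**   pTok->pModule = pModule;
+**   pModule->xOpen(pTok, "hello world", -1, &pCsr);
+**   pCsr->pTokenizer = pTok;
+**   while( pModule->xNext(pCsr, &zTok, &nTok, &iStart, &iEnd, &iPos)
+**          ==SQLITE_OK ){
+**     ... use zTok[0..nTok-1] ...
+**   }
+**   pModule->xClose(pCsr);
+**   pModule->xDestroy(pTok);
+*/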
+
+/*
+** Get the module for a tokenizer which generates tokens based on a
+** set of non-token characters. The default is to break tokens at any
+** non-alnum character, though the set of delimiters can also be
+** specified by the first argv argument to xCreate().
+*/
+/* TODO(shess) This doesn't belong here. Need some sort of
+** registration process.
+*/
+void sqlite3Fts1SimpleTokenizerModule(sqlite3_tokenizer_module const**ppModule);
+void sqlite3Fts1PorterTokenizerModule(sqlite3_tokenizer_module const**ppModule);
+
+#endif /* _FTS1_TOKENIZER_H_ */
Added: freeswitch/trunk/libs/sqlite/ext/fts1/fts1_tokenizer1.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/fts1_tokenizer1.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,220 @@
+/*
+** The author disclaims copyright to this source code.
+**
+*************************************************************************
+** Implementation of the "simple" full-text-search tokenizer.
+*/
+
+/*
+** The code in this file is only compiled if:
+**
+** * The FTS1 module is being built as an extension
+** (in which case SQLITE_CORE is not defined), or
+**
+** * The FTS1 module is being built into the core of
+** SQLite (in which case SQLITE_ENABLE_FTS1 is defined).
+*/
+#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS1)
+
+
+#include <assert.h>
+#if !defined(__APPLE__)
+#include <malloc.h>
+#else
+#include <stdlib.h>
+#endif
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+
+#include "fts1_tokenizer.h"
+
+typedef struct simple_tokenizer {
+ sqlite3_tokenizer base;
+ char delim[128]; /* flag ASCII delimiters */
+} simple_tokenizer;
+
+typedef struct simple_tokenizer_cursor {
+ sqlite3_tokenizer_cursor base;
+ const char *pInput; /* input we are tokenizing */
+ int nBytes; /* size of the input */
+ int iOffset; /* current position in pInput */
+ int iToken; /* index of next token to be returned */
+ char *pToken; /* storage for current token */
+ int nTokenAllocated; /* space allocated to pToken buffer */
+} simple_tokenizer_cursor;
+
+
+/* Forward declaration */
+static const sqlite3_tokenizer_module simpleTokenizerModule;
+
+static int isDelim(simple_tokenizer *t, unsigned char c){
+ return c<0x80 && t->delim[c];
+}
+
+/*
+** Create a new tokenizer instance.
+*/
+static int simpleCreate(
+ int argc, const char * const *argv,
+ sqlite3_tokenizer **ppTokenizer
+){
+ simple_tokenizer *t;
+
+ t = (simple_tokenizer *) calloc(sizeof(simple_tokenizer), 1);
+ /* TODO(shess) Delimiters need to remain the same from run to run,
+ ** else we need to reindex. One solution would be a meta-table to
+ ** track such information in the database, then we'd only want this
+ ** information on the initial create.
+ */
+ if( argc>1 ){
+ int i, n = strlen(argv[1]);
+ for(i=0; i<n; i++){
+ unsigned char ch = argv[1][i];
+ /* We explicitly don't support UTF-8 delimiters for now. */
+ if( ch>=0x80 ){
+ free(t);
+ return SQLITE_ERROR;
+ }
+ t->delim[ch] = 1;
+ }
+ } else {
+ /* Mark non-alphanumeric ASCII characters as delimiters */
+ int i;
+ for(i=1; i<0x80; i++){
+ t->delim[i] = !isalnum(i);
+ }
+ }
+
+ *ppTokenizer = &t->base;
+ return SQLITE_OK;
+}
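+
+/*
+** For example, calling xCreate() here with argc==2 and argv[1]==".,"
+** yields a tokenizer that splits only on '.' and ','.  With argc<=1
+** every non-alphanumeric ASCII character is a delimiter, so
+** "hello, WORLD" tokenizes as "hello" followed by "world" (tokens are
+** case-folded in simpleNext() below).
+*/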
+
+/*
+** Destroy a tokenizer
+*/
+static int simpleDestroy(sqlite3_tokenizer *pTokenizer){
+ free(pTokenizer);
+ return SQLITE_OK;
+}
+
+/*
+** Prepare to begin tokenizing a particular string. The input
+** string to be tokenized is pInput[0..nBytes-1]. A cursor
+** used to incrementally tokenize this string is returned in
+** *ppCursor.
+*/
+static int simpleOpen(
+ sqlite3_tokenizer *pTokenizer, /* The tokenizer */
+ const char *pInput, int nBytes, /* String to be tokenized */
+ sqlite3_tokenizer_cursor **ppCursor /* OUT: Tokenization cursor */
+){
+ simple_tokenizer_cursor *c;
+
+ c = (simple_tokenizer_cursor *) malloc(sizeof(simple_tokenizer_cursor));
+ c->pInput = pInput;
+ if( pInput==0 ){
+ c->nBytes = 0;
+ }else if( nBytes<0 ){
+ c->nBytes = (int)strlen(pInput);
+ }else{
+ c->nBytes = nBytes;
+ }
+ c->iOffset = 0; /* start tokenizing at the beginning */
+ c->iToken = 0;
+ c->pToken = NULL; /* no space allocated, yet. */
+ c->nTokenAllocated = 0;
+
+ *ppCursor = &c->base;
+ return SQLITE_OK;
+}
+
+/*
+** Close a tokenization cursor previously opened by a call to
+** simpleOpen() above.
+*/
+static int simpleClose(sqlite3_tokenizer_cursor *pCursor){
+ simple_tokenizer_cursor *c = (simple_tokenizer_cursor *) pCursor;
+ free(c->pToken);
+ free(c);
+ return SQLITE_OK;
+}
+
+/*
+** Extract the next token from a tokenization cursor. The cursor must
+** have been opened by a prior call to simpleOpen().
+*/
+static int simpleNext(
+ sqlite3_tokenizer_cursor *pCursor, /* Cursor returned by simpleOpen */
+ const char **ppToken, /* OUT: *ppToken is the token text */
+ int *pnBytes, /* OUT: Number of bytes in token */
+ int *piStartOffset, /* OUT: Starting offset of token */
+ int *piEndOffset, /* OUT: Ending offset of token */
+ int *piPosition /* OUT: Position integer of token */
+){
+ simple_tokenizer_cursor *c = (simple_tokenizer_cursor *) pCursor;
+ simple_tokenizer *t = (simple_tokenizer *) pCursor->pTokenizer;
+ unsigned char *p = (unsigned char *)c->pInput;
+
+ while( c->iOffset<c->nBytes ){
+ int iStartOffset;
+
+ /* Scan past delimiter characters */
+ while( c->iOffset<c->nBytes && isDelim(t, p[c->iOffset]) ){
+ c->iOffset++;
+ }
+
+ /* Count non-delimiter characters. */
+ iStartOffset = c->iOffset;
+ while( c->iOffset<c->nBytes && !isDelim(t, p[c->iOffset]) ){
+ c->iOffset++;
+ }
+
+ if( c->iOffset>iStartOffset ){
+ int i, n = c->iOffset-iStartOffset;
+ if( n>c->nTokenAllocated ){
+ c->nTokenAllocated = n+20;
+ c->pToken = realloc(c->pToken, c->nTokenAllocated);
+ }
+ for(i=0; i<n; i++){
+ /* TODO(shess) This needs expansion to handle UTF-8
+ ** case-insensitivity.
+ */
+ unsigned char ch = p[iStartOffset+i];
+ c->pToken[i] = ch<0x80 ? tolower(ch) : ch;
+ }
+ *ppToken = c->pToken;
+ *pnBytes = n;
+ *piStartOffset = iStartOffset;
+ *piEndOffset = c->iOffset;
+ *piPosition = c->iToken++;
+
+ return SQLITE_OK;
+ }
+ }
+ return SQLITE_DONE;
+}
+
+/*
+** The set of routines that implement the simple tokenizer
+*/
+static const sqlite3_tokenizer_module simpleTokenizerModule = {
+ 0,
+ simpleCreate,
+ simpleDestroy,
+ simpleOpen,
+ simpleClose,
+ simpleNext,
+};
+
+/*
+** Allocate a new simple tokenizer. Return a pointer to the new
+** tokenizer in *ppModule
+*/
+void sqlite3Fts1SimpleTokenizerModule(
+ sqlite3_tokenizer_module const**ppModule
+){
+ *ppModule = &simpleTokenizerModule;
+}
+
+#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS1) */
Added: freeswitch/trunk/libs/sqlite/install-sh
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/install-sh Tue Dec 19 15:11:50 2006
@@ -0,0 +1,251 @@
+#!/bin/sh
+#
+# install - install a program, script, or datafile
+# This comes from X11R5 (mit/util/scripts/install.sh).
+#
+# Copyright 1991 by the Massachusetts Institute of Technology
+#
+# Permission to use, copy, modify, distribute, and sell this software and its
+# documentation for any purpose is hereby granted without fee, provided that
+# the above copyright notice appear in all copies and that both that
+# copyright notice and this permission notice appear in supporting
+# documentation, and that the name of M.I.T. not be used in advertising or
+# publicity pertaining to distribution of the software without specific,
+# written prior permission. M.I.T. makes no representations about the
+# suitability of this software for any purpose. It is provided "as is"
+# without express or implied warranty.
+#
+# Calling this script install-sh is preferred over install.sh, to prevent
+# `make' implicit rules from creating a file called install from it
+# when there is no Makefile.
+#
+# This script is compatible with the BSD install script, but was written
+# from scratch. It can only install one file at a time, a restriction
+# shared with many OS's install programs.
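+#
+# A typical invocation (illustrative only) looks like:
+#   ./install-sh -c -m 644 sqlite3.h /usr/local/include/sqlite3.h
+# which copies the file and then applies the requested mode.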
+
+
+# set DOITPROG to echo to test this script
+
+# Don't use :- since 4.3BSD and earlier shells don't like it.
+doit="${DOITPROG-}"
+
+
+# put in absolute paths if you don't have them in your path; or use env. vars.
+
+mvprog="${MVPROG-mv}"
+cpprog="${CPPROG-cp}"
+chmodprog="${CHMODPROG-chmod}"
+chownprog="${CHOWNPROG-chown}"
+chgrpprog="${CHGRPPROG-chgrp}"
+stripprog="${STRIPPROG-strip}"
+rmprog="${RMPROG-rm}"
+mkdirprog="${MKDIRPROG-mkdir}"
+
+transformbasename=""
+transform_arg=""
+instcmd="$mvprog"
+chmodcmd="$chmodprog 0755"
+chowncmd=""
+chgrpcmd=""
+stripcmd=""
+rmcmd="$rmprog -f"
+mvcmd="$mvprog"
+src=""
+dst=""
+dir_arg=""
+
+while [ x"$1" != x ]; do
+ case $1 in
+ -c) instcmd="$cpprog"
+ shift
+ continue;;
+
+ -d) dir_arg=true
+ shift
+ continue;;
+
+ -m) chmodcmd="$chmodprog $2"
+ shift
+ shift
+ continue;;
+
+ -o) chowncmd="$chownprog $2"
+ shift
+ shift
+ continue;;
+
+ -g) chgrpcmd="$chgrpprog $2"
+ shift
+ shift
+ continue;;
+
+ -s) stripcmd="$stripprog"
+ shift
+ continue;;
+
+ -t=*) transformarg=`echo $1 | sed 's/-t=//'`
+ shift
+ continue;;
+
+ -b=*) transformbasename=`echo $1 | sed 's/-b=//'`
+ shift
+ continue;;
+
+ *) if [ x"$src" = x ]
+ then
+ src=$1
+ else
+ # this colon is to work around a 386BSD /bin/sh bug
+ :
+ dst=$1
+ fi
+ shift
+ continue;;
+ esac
+done
+
+if [ x"$src" = x ]
+then
+ echo "install: no input file specified"
+ exit 1
+else
+ true
+fi
+
+if [ x"$dir_arg" != x ]; then
+ dst=$src
+ src=""
+
+ if [ -d $dst ]; then
+ instcmd=:
+ chmodcmd=""
+ else
+ instcmd=mkdir
+ fi
+else
+
+# Waiting for this to be detected by the "$instcmd $src $dsttmp" command
+# might cause directories to be created, which would be especially bad
+# if $src (and thus $dsttmp) contains '*'.
+
+ if [ -f $src -o -d $src ]
+ then
+ true
+ else
+ echo "install: $src does not exist"
+ exit 1
+ fi
+
+ if [ x"$dst" = x ]
+ then
+ echo "install: no destination specified"
+ exit 1
+ else
+ true
+ fi
+
+# If destination is a directory, append the input filename; if your system
+# does not like double slashes in filenames, you may need to add some logic
+
+ if [ -d $dst ]
+ then
+ dst="$dst"/`basename $src`
+ else
+ true
+ fi
+fi
+
+## this sed command emulates the dirname command
+dstdir=`echo $dst | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'`
+
+# Make sure that the destination directory exists.
+# this part is taken from Noah Friedman's mkinstalldirs script
+
+# Skip lots of stat calls in the usual case.
+if [ ! -d "$dstdir" ]; then
+defaultIFS='
+'
+IFS="${IFS-${defaultIFS}}"
+
+oIFS="${IFS}"
+# Some sh's can't handle IFS=/ for some reason.
+IFS='%'
+set - `echo ${dstdir} | sed -e 's@/@%@g' -e 's@^%@/@'`
+IFS="${oIFS}"
+
+pathcomp=''
+
+while [ $# -ne 0 ] ; do
+ pathcomp="${pathcomp}${1}"
+ shift
+
+ if [ ! -d "${pathcomp}" ] ;
+ then
+ $mkdirprog "${pathcomp}"
+ else
+ true
+ fi
+
+ pathcomp="${pathcomp}/"
+done
+fi
+
+if [ x"$dir_arg" != x ]
+then
+ $doit $instcmd $dst &&
+
+ if [ x"$chowncmd" != x ]; then $doit $chowncmd $dst; else true ; fi &&
+ if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dst; else true ; fi &&
+ if [ x"$stripcmd" != x ]; then $doit $stripcmd $dst; else true ; fi &&
+ if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dst; else true ; fi
+else
+
+# If we're going to rename the final executable, determine the name now.
+
+ if [ x"$transformarg" = x ]
+ then
+ dstfile=`basename $dst`
+ else
+ dstfile=`basename $dst $transformbasename |
+ sed $transformarg`$transformbasename
+ fi
+
+# don't allow the sed command to completely eliminate the filename
+
+ if [ x"$dstfile" = x ]
+ then
+ dstfile=`basename $dst`
+ else
+ true
+ fi
+
+# Make a temp file name in the proper directory.
+
+ dsttmp=$dstdir/#inst.$$#
+
+# Move or copy the file name to the temp name
+
+ $doit $instcmd $src $dsttmp &&
+
+ trap "rm -f ${dsttmp}" 0 &&
+
+# and set any options; do chmod last to preserve setuid bits
+
+# If any of these fail, we abort the whole thing. If we want to
+# ignore errors from any of these, just make sure not to ignore
+# errors from the above "$doit $instcmd $src $dsttmp" command.
+
+ if [ x"$chowncmd" != x ]; then $doit $chowncmd $dsttmp; else true;fi &&
+ if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dsttmp; else true;fi &&
+ if [ x"$stripcmd" != x ]; then $doit $stripcmd $dsttmp; else true;fi &&
+ if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dsttmp; else true;fi &&
+
+# Now rename the file to the real destination.
+
+ $doit $rmcmd -f $dstdir/$dstfile &&
+ $doit $mvcmd $dsttmp $dstdir/$dstfile
+
+fi &&
+
+
+exit 0
Added: freeswitch/trunk/libs/sqlite/ltmain.sh
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ltmain.sh Tue Dec 19 15:11:50 2006
@@ -0,0 +1,6399 @@
+# ltmain.sh - Provide generalized library-building support services.
+# NOTE: Changing this file will not affect anything until you rerun configure.
+#
+# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003
+# Free Software Foundation, Inc.
+# Originally by Gordon Matzigkeit <gord at gnu.ai.mit.edu>, 1996
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+# As a special exception to the GNU General Public License, if you
+# distribute this file as part of a program that contains a
+# configuration script generated by Autoconf, you may include it under
+# the same distribution terms that you use for the rest of that program.
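+#
+# This file is normally invoked through the `libtool' wrapper script
+# that configure generates.  Typical invocations (illustrative only)
+# look like:
+#   libtool --mode=compile gcc -c foo.c
+#   libtool --mode=link gcc -o libfoo.la foo.lo -rpath /usr/local/lib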
+
+# Check that we have a working $echo.
+if test "X$1" = X--no-reexec; then
+ # Discard the --no-reexec flag, and continue.
+ shift
+elif test "X$1" = X--fallback-echo; then
+ # Avoid inline document here, it may be left over
+ :
+elif test "X`($echo '\t') 2>/dev/null`" = 'X\t'; then
+ # Yippee, $echo works!
+ :
+else
+ # Restart under the correct shell, and then maybe $echo will work.
+ exec $SHELL "$0" --no-reexec ${1+"$@"}
+fi
+
+if test "X$1" = X--fallback-echo; then
+ # used as fallback echo
+ shift
+ cat <<EOF
+$*
+EOF
+ exit 0
+fi
+
+# The name of this program.
+progname=`$echo "$0" | ${SED} 's%^.*/%%'`
+modename="$progname"
+
+# Constants.
+PROGRAM=ltmain.sh
+PACKAGE=libtool
+VERSION=1.5.2
+TIMESTAMP=" (1.1220.2.60 2004/01/25 12:25:08) Debian$Rev: 192 $"
+
+default_mode=
+help="Try \`$progname --help' for more information."
+magic="%%%MAGIC variable%%%"
+mkdir="mkdir"
+mv="mv -f"
+rm="rm -f"
+
+# Sed substitution that helps us do robust quoting. It backslashifies
+# metacharacters that are still active within double-quoted strings.
+Xsed="${SED}"' -e 1s/^X//'
+sed_quote_subst='s/\([\\`\\"$\\\\]\)/\\\1/g'
+# test EBCDIC or ASCII
+case `echo A|tr A '\301'` in
+ A) # EBCDIC based system
+ SP2NL="tr '\100' '\n'"
+ NL2SP="tr '\r\n' '\100\100'"
+ ;;
+ *) # Assume ASCII based system
+ SP2NL="tr '\040' '\012'"
+ NL2SP="tr '\015\012' '\040\040'"
+ ;;
+esac
+
+# NLS nuisances.
+# Only set LANG and LC_ALL to C if already set.
+# These must not be set unconditionally because not all systems understand
+# e.g. LANG=C (notably SCO).
+# We save the old values to restore during execute mode.
+if test "${LC_ALL+set}" = set; then
+ save_LC_ALL="$LC_ALL"; LC_ALL=C; export LC_ALL
+fi
+if test "${LANG+set}" = set; then
+ save_LANG="$LANG"; LANG=C; export LANG
+fi
+
+# Make sure IFS has a sensible default
+: ${IFS="
+"}
+
+if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then
+ $echo "$modename: not configured to build any kind of library" 1>&2
+ $echo "Fatal configuration error. See the $PACKAGE docs for more information." 1>&2
+ exit 1
+fi
+
+# Global variables.
+mode=$default_mode
+nonopt=
+prev=
+prevopt=
+run=
+show="$echo"
+show_help=
+execute_dlfiles=
+lo2o="s/\\.lo\$/.${objext}/"
+o2lo="s/\\.${objext}\$/.lo/"
+
+#####################################
+# Shell function definitions:
+# This seems to be the best place for them
+
+# Need a lot of goo to handle *both* DLLs and import libs
+# Has to be a shell function in order to 'eat' the argument
+# that is supplied when $file_magic_command is called.
+win32_libid () {
+ win32_libid_type="unknown"
+ win32_fileres=`file -L $1 2>/dev/null`
+ case $win32_fileres in
+ *ar\ archive\ import\ library*) # definitely import
+ win32_libid_type="x86 archive import"
+ ;;
+ *ar\ archive*) # could be an import, or static
+ if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null | \
+ grep -E 'file format pe-i386(.*architecture: i386)?' >/dev/null ; then
+ win32_nmres=`eval $NM -f posix -A $1 | \
+ sed -n -e '1,100{/ I /{x;/import/!{s/^/import/;h;p;};x;};}'`
+ if test "X$win32_nmres" = "Ximport" ; then
+ win32_libid_type="x86 archive import"
+ else
+ win32_libid_type="x86 archive static"
+ fi
+ fi
+ ;;
+ *DLL*)
+ win32_libid_type="x86 DLL"
+ ;;
+ *executable*) # but shell scripts are "executable" too...
+ case $win32_fileres in
+ *MS\ Windows\ PE\ Intel*)
+ win32_libid_type="x86 DLL"
+ ;;
+ esac
+ ;;
+ esac
+ $echo $win32_libid_type
+}
+
+# End of Shell function definitions
+#####################################
+
+# Parse our command line options once, thoroughly.
+while test "$#" -gt 0
+do
+ arg="$1"
+ shift
+
+ case $arg in
+ -*=*) optarg=`$echo "X$arg" | $Xsed -e 's/[-_a-zA-Z0-9]*=//'` ;;
+ *) optarg= ;;
+ esac
+
+ # If the previous option needs an argument, assign it.
+ if test -n "$prev"; then
+ case $prev in
+ execute_dlfiles)
+ execute_dlfiles="$execute_dlfiles $arg"
+ ;;
+ tag)
+ tagname="$arg"
+ preserve_args="${preserve_args}=$arg"
+
+ # Check whether tagname contains only valid characters
+ case $tagname in
+ *[!-_A-Za-z0-9,/]*)
+ $echo "$progname: invalid tag name: $tagname" 1>&2
+ exit 1
+ ;;
+ esac
+
+ case $tagname in
+ CC)
+ # Don't test for the "default" C tag, as we know, it's there, but
+ # not specially marked.
+ ;;
+ *)
+ if grep "^# ### BEGIN LIBTOOL TAG CONFIG: $tagname$" < "$0" > /dev/null; then
+ taglist="$taglist $tagname"
+ # Evaluate the configuration.
+ eval "`${SED} -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$tagname'$/,/^# ### END LIBTOOL TAG CONFIG: '$tagname'$/p' < $0`"
+ else
+ $echo "$progname: ignoring unknown tag $tagname" 1>&2
+ fi
+ ;;
+ esac
+ ;;
+ *)
+ eval "$prev=\$arg"
+ ;;
+ esac
+
+ prev=
+ prevopt=
+ continue
+ fi
+
+ # Have we seen a non-optional argument yet?
+ case $arg in
+ --help)
+ show_help=yes
+ ;;
+
+ --version)
+ $echo "$PROGRAM (GNU $PACKAGE) $VERSION$TIMESTAMP"
+ $echo
+ $echo "Copyright (C) 2003 Free Software Foundation, Inc."
+ $echo "This is free software; see the source for copying conditions. There is NO"
+ $echo "warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
+ exit 0
+ ;;
+
+ --config)
+ ${SED} -e '1,/^# ### BEGIN LIBTOOL CONFIG/d' -e '/^# ### END LIBTOOL CONFIG/,$d' $0
+ # Now print the configurations for the tags.
+ for tagname in $taglist; do
+ ${SED} -n -e "/^# ### BEGIN LIBTOOL TAG CONFIG: $tagname$/,/^# ### END LIBTOOL TAG CONFIG: $tagname$/p" < "$0"
+ done
+ exit 0
+ ;;
+
+ --debug)
+ $echo "$progname: enabling shell trace mode"
+ set -x
+ preserve_args="$preserve_args $arg"
+ ;;
+
+ --dry-run | -n)
+ run=:
+ ;;
+
+ --features)
+ $echo "host: $host"
+ if test "$build_libtool_libs" = yes; then
+ $echo "enable shared libraries"
+ else
+ $echo "disable shared libraries"
+ fi
+ if test "$build_old_libs" = yes; then
+ $echo "enable static libraries"
+ else
+ $echo "disable static libraries"
+ fi
+ exit 0
+ ;;
+
+ --finish) mode="finish" ;;
+
+ --mode) prevopt="--mode" prev=mode ;;
+ --mode=*) mode="$optarg" ;;
+
+ --preserve-dup-deps) duplicate_deps="yes" ;;
+
+ --quiet | --silent)
+ show=:
+ preserve_args="$preserve_args $arg"
+ ;;
+
+ --tag) prevopt="--tag" prev=tag ;;
+ --tag=*)
+ set tag "$optarg" ${1+"$@"}
+ shift
+ prev=tag
+ preserve_args="$preserve_args --tag"
+ ;;
+
+ -dlopen)
+ prevopt="-dlopen"
+ prev=execute_dlfiles
+ ;;
+
+ -*)
+ $echo "$modename: unrecognized option \`$arg'" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ ;;
+
+ *)
+ nonopt="$arg"
+ break
+ ;;
+ esac
+done
+
+if test -n "$prevopt"; then
+ $echo "$modename: option \`$prevopt' requires an argument" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+fi
+
+# If this variable is set in any of the actions, the command in it
+# will be execed at the end. This prevents here-documents from being
+# left over by shells.
+exec_cmd=
+
+if test -z "$show_help"; then
+
+ # Infer the operation mode.
+ if test -z "$mode"; then
+ $echo "*** Warning: inferring the mode of operation is deprecated." 1>&2
+ $echo "*** Future versions of Libtool will require -mode=MODE be specified." 1>&2
+ case $nonopt in
+ *cc | cc* | *++ | gcc* | *-gcc* | g++* | xlc*)
+ mode=link
+ for arg
+ do
+ case $arg in
+ -c)
+ mode=compile
+ break
+ ;;
+ esac
+ done
+ ;;
+ *db | *dbx | *strace | *truss)
+ mode=execute
+ ;;
+ *install*|cp|mv)
+ mode=install
+ ;;
+ *rm)
+ mode=uninstall
+ ;;
+ *)
+ # If we have no mode, but dlfiles were specified, then do execute mode.
+ test -n "$execute_dlfiles" && mode=execute
+
+ # Just use the default operation mode.
+ if test -z "$mode"; then
+ if test -n "$nonopt"; then
+ $echo "$modename: warning: cannot infer operation mode from \`$nonopt'" 1>&2
+ else
+ $echo "$modename: warning: cannot infer operation mode without MODE-ARGS" 1>&2
+ fi
+ fi
+ ;;
+ esac
+ fi
+
+ # Only execute mode is allowed to have -dlopen flags.
+ if test -n "$execute_dlfiles" && test "$mode" != execute; then
+ $echo "$modename: unrecognized option \`-dlopen'" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ # Change the help message to a mode-specific one.
+ generic_help="$help"
+ help="Try \`$modename --help --mode=$mode' for more information."
+
+ # These modes are in order of execution frequency so that they run quickly.
+ case $mode in
+ # libtool compile mode
+ compile)
+ modename="$modename: compile"
+ # Get the compilation command and the source file.
+ base_compile=
+ srcfile="$nonopt" # always keep a non-empty value in "srcfile"
+ suppress_opt=yes
+ suppress_output=
+ arg_mode=normal
+ libobj=
+ later=
+
+ for arg
+ do
+ case "$arg_mode" in
+ arg )
+ # do not "continue". Instead, add this to base_compile
+ lastarg="$arg"
+ arg_mode=normal
+ ;;
+
+ target )
+ libobj="$arg"
+ arg_mode=normal
+ continue
+ ;;
+
+ normal )
+ # Accept any command-line options.
+ case $arg in
+ -o)
+ if test -n "$libobj" ; then
+ $echo "$modename: you cannot specify \`-o' more than once" 1>&2
+ exit 1
+ fi
+ arg_mode=target
+ continue
+ ;;
+
+ -static | -prefer-pic | -prefer-non-pic)
+ later="$later $arg"
+ continue
+ ;;
+
+ -no-suppress)
+ suppress_opt=no
+ continue
+ ;;
+
+ -Xcompiler)
+ arg_mode=arg # the next one goes into the "base_compile" arg list
+ continue # The current "srcfile" will either be retained or
+ ;; # replaced later. I would guess that would be a bug.
+
+ -Wc,*)
+ args=`$echo "X$arg" | $Xsed -e "s/^-Wc,//"`
+ lastarg=
+ save_ifs="$IFS"; IFS=','
+ for arg in $args; do
+ IFS="$save_ifs"
+
+ # Double-quote args containing other shell metacharacters.
+ # Many Bourne shells cannot handle close brackets correctly
+ # in scan sets, so we specify it separately.
+ case $arg in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
+ arg="\"$arg\""
+ ;;
+ esac
+ lastarg="$lastarg $arg"
+ done
+ IFS="$save_ifs"
+ lastarg=`$echo "X$lastarg" | $Xsed -e "s/^ //"`
+
+ # Add the arguments to base_compile.
+ base_compile="$base_compile $lastarg"
+ continue
+ ;;
+
+ * )
+ # Accept the current argument as the source file.
+ # The previous "srcfile" becomes the current argument.
+ #
+ lastarg="$srcfile"
+ srcfile="$arg"
+ ;;
+ esac # case $arg
+ ;;
+ esac # case $arg_mode
+
+ # Aesthetically quote the previous argument.
+ lastarg=`$echo "X$lastarg" | $Xsed -e "$sed_quote_subst"`
+
+ case $lastarg in
+ # Double-quote args containing other shell metacharacters.
+ # Many Bourne shells cannot handle close brackets correctly
+ # in scan sets, so we specify it separately.
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
+ lastarg="\"$lastarg\""
+ ;;
+ esac
+
+ base_compile="$base_compile $lastarg"
+ done # for arg
+
+ case $arg_mode in
+ arg)
+ $echo "$modename: you must specify an argument for -Xcompile"
+ exit 1
+ ;;
+ target)
+ $echo "$modename: you must specify a target with \`-o'" 1>&2
+ exit 1
+ ;;
+ *)
+ # Get the name of the library object.
+ [ -z "$libobj" ] && libobj=`$echo "X$srcfile" | $Xsed -e 's%^.*/%%'`
+ ;;
+ esac
+
+ # Recognize several different file suffixes.
+ # If the user specifies -o file.o, it is replaced with file.lo
+ xform='[cCFSifmso]'
+ case $libobj in
+ *.ada) xform=ada ;;
+ *.adb) xform=adb ;;
+ *.ads) xform=ads ;;
+ *.asm) xform=asm ;;
+ *.c++) xform=c++ ;;
+ *.cc) xform=cc ;;
+ *.ii) xform=ii ;;
+ *.class) xform=class ;;
+ *.cpp) xform=cpp ;;
+ *.cxx) xform=cxx ;;
+ *.f90) xform=f90 ;;
+ *.for) xform=for ;;
+ *.java) xform=java ;;
+ esac
+
+ libobj=`$echo "X$libobj" | $Xsed -e "s/\.$xform$/.lo/"`
+
+ case $libobj in
+ *.lo) obj=`$echo "X$libobj" | $Xsed -e "$lo2o"` ;;
+ *)
+ $echo "$modename: cannot determine name of library object from \`$libobj'" 1>&2
+ exit 1
+ ;;
+ esac
+
+ # Infer tagged configuration to use if any are available and
+ # if one wasn't chosen via the "--tag" command line option.
+ # Only attempt this if the compiler in the base compile
+ # command doesn't match the default compiler.
+ if test -n "$available_tags" && test -z "$tagname"; then
+ case $base_compile in
+ # Blanks in the command may have been stripped by the calling shell,
+ # but not from the CC environment variable when configure was run.
+ " $CC "* | "$CC "* | " `$echo $CC` "* | "`$echo $CC` "*) ;;
+ # Blanks at the start of $base_compile will cause this to fail
+ # if we don't check for them as well.
+ *)
+ for z in $available_tags; do
+ if grep "^# ### BEGIN LIBTOOL TAG CONFIG: $z$" < "$0" > /dev/null; then
+ # Evaluate the configuration.
+ eval "`${SED} -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$z'$/,/^# ### END LIBTOOL TAG CONFIG: '$z'$/p' < $0`"
+ case "$base_compile " in
+ "$CC "* | " $CC "* | "`$echo $CC` "* | " `$echo $CC` "*)
+ # The compiler in the base compile command matches
+ # the one in the tagged configuration.
+ # Assume this is the tagged configuration we want.
+ tagname=$z
+ break
+ ;;
+ esac
+ fi
+ done
+ # If $tagname still isn't set, then no tagged configuration
+ # was found and let the user know that the "--tag" command
+ # line option must be used.
+ if test -z "$tagname"; then
+ $echo "$modename: unable to infer tagged configuration"
+ $echo "$modename: specify a tag with \`--tag'" 1>&2
+ exit 1
+# else
+# $echo "$modename: using $tagname tagged configuration"
+ fi
+ ;;
+ esac
+ fi
+
+ for arg in $later; do
+ case $arg in
+ -static)
+ build_old_libs=yes
+ continue
+ ;;
+
+ -prefer-pic)
+ pic_mode=yes
+ continue
+ ;;
+
+ -prefer-non-pic)
+ pic_mode=no
+ continue
+ ;;
+ esac
+ done
+
+ objname=`$echo "X$obj" | $Xsed -e 's%^.*/%%'`
+ xdir=`$echo "X$obj" | $Xsed -e 's%/[^/]*$%%'`
+ if test "X$xdir" = "X$obj"; then
+ xdir=
+ else
+ xdir=$xdir/
+ fi
+ lobj=${xdir}$objdir/$objname
+
+ if test -z "$base_compile"; then
+ $echo "$modename: you must specify a compilation command" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ # Delete any leftover library objects.
+ if test "$build_old_libs" = yes; then
+ removelist="$obj $lobj $libobj ${libobj}T"
+ else
+ removelist="$lobj $libobj ${libobj}T"
+ fi
+
+ $run $rm $removelist
+ trap "$run $rm $removelist; exit 1" 1 2 15
+
+ # On Cygwin there's no "real" PIC flag so we must build both object types
+ case $host_os in
+ cygwin* | mingw* | pw32* | os2*)
+ pic_mode=default
+ ;;
+ esac
+ if test "$pic_mode" = no && test "$deplibs_check_method" != pass_all; then
+ # non-PIC code in shared libraries is not supported
+ pic_mode=default
+ fi
+
+ # Calculate the filename of the output object if compiler does
+ # not support -o with -c
+ if test "$compiler_c_o" = no; then
+ output_obj=`$echo "X$srcfile" | $Xsed -e 's%^.*/%%' -e 's%\.[^.]*$%%'`.${objext}
+ lockfile="$output_obj.lock"
+ removelist="$removelist $output_obj $lockfile"
+ trap "$run $rm $removelist; exit 1" 1 2 15
+ else
+ output_obj=
+ need_locks=no
+ lockfile=
+ fi
+
+ # Lock this critical section if it is needed
+ # We use this script file to make the link, it avoids creating a new file
+ if test "$need_locks" = yes; then
+ until $run ln "$0" "$lockfile" 2>/dev/null; do
+ $show "Waiting for $lockfile to be removed"
+ sleep 2
+ done
+ elif test "$need_locks" = warn; then
+ if test -f "$lockfile"; then
+ $echo "\
+*** ERROR, $lockfile exists and contains:
+`cat $lockfile 2>/dev/null`
+
+This indicates that another process is trying to use the same
+temporary object file, and libtool could not work around it because
+your compiler does not support \`-c' and \`-o' together. If you
+repeat this compilation, it may succeed, by chance, but you had better
+avoid parallel builds (make -j) in this platform, or get a better
+compiler."
+
+ $run $rm $removelist
+ exit 1
+ fi
+ $echo $srcfile > "$lockfile"
+ fi
+
+ if test -n "$fix_srcfile_path"; then
+ eval srcfile=\"$fix_srcfile_path\"
+ fi
+
+ $run $rm "$libobj" "${libobj}T"
+
+ # Create a libtool object file (analogous to a ".la" file),
+ # but don't create it if we're doing a dry run.
+ test -z "$run" && cat > ${libobj}T <<EOF
+# $libobj - a libtool object file
+# Generated by $PROGRAM - GNU $PACKAGE $VERSION$TIMESTAMP
+#
+# Please DO NOT delete this file!
+# It is necessary for linking the library.
+
+# Name of the PIC object.
+EOF
+
+ # Only build a PIC object if we are building libtool libraries.
+ if test "$build_libtool_libs" = yes; then
+ # Without this assignment, base_compile gets emptied.
+ fbsd_hideous_sh_bug=$base_compile
+
+ if test "$pic_mode" != no; then
+ command="$base_compile $srcfile $pic_flag"
+ else
+ # Don't build PIC code
+ command="$base_compile $srcfile"
+ fi
+
+ if test ! -d "${xdir}$objdir"; then
+ $show "$mkdir ${xdir}$objdir"
+ $run $mkdir ${xdir}$objdir
+ status=$?
+ if test "$status" -ne 0 && test ! -d "${xdir}$objdir"; then
+ exit $status
+ fi
+ fi
+
+ if test -z "$output_obj"; then
+ # Place PIC objects in $objdir
+ command="$command -o $lobj"
+ fi
+
+ $run $rm "$lobj" "$output_obj"
+
+ $show "$command"
+ if $run eval "$command"; then :
+ else
+ test -n "$output_obj" && $run $rm $removelist
+ exit 1
+ fi
+
+ if test "$need_locks" = warn &&
+ test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then
+ $echo "\
+*** ERROR, $lockfile contains:
+`cat $lockfile 2>/dev/null`
+
+but it should contain:
+$srcfile
+
+This indicates that another process is trying to use the same
+temporary object file, and libtool could not work around it because
+your compiler does not support \`-c' and \`-o' together. If you
+repeat this compilation, it may succeed, by chance, but you had better
+avoid parallel builds (make -j) in this platform, or get a better
+compiler."
+
+ $run $rm $removelist
+ exit 1
+ fi
+
+ # Just move the object if needed, then go on to compile the next one
+ if test -n "$output_obj" && test "X$output_obj" != "X$lobj"; then
+ $show "$mv $output_obj $lobj"
+ if $run $mv $output_obj $lobj; then :
+ else
+ error=$?
+ $run $rm $removelist
+ exit $error
+ fi
+ fi
+
+ # Append the name of the PIC object to the libtool object file.
+ test -z "$run" && cat >> ${libobj}T <<EOF
+pic_object='$objdir/$objname'
+
+EOF
+
+ # Allow error messages only from the first compilation.
+ if test "$suppress_opt" = yes; then
+ suppress_output=' >/dev/null 2>&1'
+ fi
+ else
+ # No PIC object so indicate it doesn't exist in the libtool
+ # object file.
+ test -z "$run" && cat >> ${libobj}T <<EOF
+pic_object=none
+
+EOF
+ fi
+
+ # Only build a position-dependent object if we build old libraries.
+ if test "$build_old_libs" = yes; then
+ if test "$pic_mode" != yes; then
+ # Don't build PIC code
+ command="$base_compile $srcfile"
+ else
+ command="$base_compile $srcfile $pic_flag"
+ fi
+ if test "$compiler_c_o" = yes; then
+ command="$command -o $obj"
+ fi
+
+ # Suppress compiler output if we already did a PIC compilation.
+ command="$command$suppress_output"
+ $run $rm "$obj" "$output_obj"
+ $show "$command"
+ if $run eval "$command"; then :
+ else
+ $run $rm $removelist
+ exit 1
+ fi
+
+ if test "$need_locks" = warn &&
+ test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then
+ $echo "\
+*** ERROR, $lockfile contains:
+`cat $lockfile 2>/dev/null`
+
+but it should contain:
+$srcfile
+
+This indicates that another process is trying to use the same
+temporary object file, and libtool could not work around it because
+your compiler does not support \`-c' and \`-o' together. If you
+repeat this compilation, it may succeed, by chance, but you had better
+avoid parallel builds (make -j) in this platform, or get a better
+compiler."
+
+ $run $rm $removelist
+ exit 1
+ fi
+
+ # Just move the object if needed
+ if test -n "$output_obj" && test "X$output_obj" != "X$obj"; then
+ $show "$mv $output_obj $obj"
+ if $run $mv $output_obj $obj; then :
+ else
+ error=$?
+ $run $rm $removelist
+ exit $error
+ fi
+ fi
+
+ # Append the name of the non-PIC object the libtool object file.
+ # Only append if the libtool object file exists.
+ test -z "$run" && cat >> ${libobj}T <<EOF
+# Name of the non-PIC object.
+non_pic_object='$objname'
+
+EOF
+ else
+ # Append the name of the non-PIC object the libtool object file.
+ # Only append if the libtool object file exists.
+ test -z "$run" && cat >> ${libobj}T <<EOF
+# Name of the non-PIC object.
+non_pic_object=none
+
+EOF
+ fi
+
+ $run $mv "${libobj}T" "${libobj}"
+
+ # Unlock the critical section if it was locked
+ if test "$need_locks" != no; then
+ $run $rm "$lockfile"
+ fi
+
+ exit 0
+ ;;
+
+ # libtool link mode
+ link | relink)
+ modename="$modename: link"
+ case $host in
+ *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2*)
+ # It is impossible to link a dll without this setting, and
+ # we shouldn't force the makefile maintainer to figure out
+ # which system we are compiling for in order to pass an extra
+ # flag for every libtool invocation.
+ # allow_undefined=no
+
+ # FIXME: Unfortunately, there are problems with the above when trying
+ # to make a dll which has undefined symbols, in which case not
+ # even a static library is built. For now, we need to specify
+ # -no-undefined on the libtool link line when we can be certain
+ # that all symbols are satisfied, otherwise we get a static library.
+ allow_undefined=yes
+ ;;
+ *)
+ allow_undefined=yes
+ ;;
+ esac
+ libtool_args="$nonopt"
+ base_compile="$nonopt $@"
+ compile_command="$nonopt"
+ finalize_command="$nonopt"
+
+ compile_rpath=
+ finalize_rpath=
+ compile_shlibpath=
+ finalize_shlibpath=
+ convenience=
+ old_convenience=
+ deplibs=
+ old_deplibs=
+ compiler_flags=
+ linker_flags=
+ dllsearchpath=
+ lib_search_path=`pwd`
+ inst_prefix_dir=
+
+ avoid_version=no
+ dlfiles=
+ dlprefiles=
+ dlself=no
+ export_dynamic=no
+ export_symbols=
+ export_symbols_regex=
+ generated=
+ libobjs=
+ ltlibs=
+ module=no
+ no_install=no
+ objs=
+ non_pic_objects=
+ precious_files_regex=
+ prefer_static_libs=no
+ preload=no
+ prev=
+ prevarg=
+ release=
+ rpath=
+ xrpath=
+ perm_rpath=
+ temp_rpath=
+ thread_safe=no
+ vinfo=
+ vinfo_number=no
+
+ # Infer tagged configuration to use if any are available and
+ # if one wasn't chosen via the "--tag" command line option.
+ # Only attempt this if the compiler in the base link
+ # command doesn't match the default compiler.
+ if test -n "$available_tags" && test -z "$tagname"; then
+ case $base_compile in
+ # Blanks in the command may have been stripped by the calling shell,
+ # but not from the CC environment variable when configure was run.
+ "$CC "* | " $CC "* | "`$echo $CC` "* | " `$echo $CC` "*) ;;
+ # Blanks at the start of $base_compile will cause this to fail
+ # if we don't check for them as well.
+ *)
+ for z in $available_tags; do
+ if grep "^# ### BEGIN LIBTOOL TAG CONFIG: $z$" < "$0" > /dev/null; then
+ # Evaluate the configuration.
+ eval "`${SED} -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$z'$/,/^# ### END LIBTOOL TAG CONFIG: '$z'$/p' < $0`"
+ case $base_compile in
+ "$CC "* | " $CC "* | "`$echo $CC` "* | " `$echo $CC` "*)
+ # The compiler in $compile_command matches
+ # the one in the tagged configuration.
+ # Assume this is the tagged configuration we want.
+ tagname=$z
+ break
+ ;;
+ esac
+ fi
+ done
+ # If $tagname still isn't set, then no tagged configuration
+ # was found and let the user know that the "--tag" command
+ # line option must be used.
+ if test -z "$tagname"; then
+ $echo "$modename: unable to infer tagged configuration"
+ $echo "$modename: specify a tag with \`--tag'" 1>&2
+ exit 1
+# else
+# $echo "$modename: using $tagname tagged configuration"
+ fi
+ ;;
+ esac
+ fi
+
+ # We need to know -static, to get the right output filenames.
+ for arg
+ do
+ case $arg in
+ -all-static | -static)
+ if test "X$arg" = "X-all-static"; then
+ if test "$build_libtool_libs" = yes && test -z "$link_static_flag"; then
+ $echo "$modename: warning: complete static linking is impossible in this configuration" 1>&2
+ fi
+ if test -n "$link_static_flag"; then
+ dlopen_self=$dlopen_self_static
+ fi
+ else
+ if test -z "$pic_flag" && test -n "$link_static_flag"; then
+ dlopen_self=$dlopen_self_static
+ fi
+ fi
+ build_libtool_libs=no
+ build_old_libs=yes
+ prefer_static_libs=yes
+ break
+ ;;
+ esac
+ done
+
+ # See if our shared archives depend on static archives.
+ test -n "$old_archive_from_new_cmds" && build_old_libs=yes
+
+ # Go through the arguments, transforming them on the way.
+ while test "$#" -gt 0; do
+ arg="$1"
+ shift
+ case $arg in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
+ qarg=\"`$echo "X$arg" | $Xsed -e "$sed_quote_subst"`\" ### testsuite: skip nested quoting test
+ ;;
+ *) qarg=$arg ;;
+ esac
+ libtool_args="$libtool_args $qarg"
+
+ # If the previous option needs an argument, assign it.
+ if test -n "$prev"; then
+ case $prev in
+ output)
+ compile_command="$compile_command @OUTPUT@"
+ finalize_command="$finalize_command @OUTPUT@"
+ ;;
+ esac
+
+ case $prev in
+ dlfiles|dlprefiles)
+ if test "$preload" = no; then
+ # Add the symbol object into the linking commands.
+ compile_command="$compile_command @SYMFILE@"
+ finalize_command="$finalize_command @SYMFILE@"
+ preload=yes
+ fi
+ case $arg in
+ *.la | *.lo) ;; # We handle these cases below.
+ force)
+ if test "$dlself" = no; then
+ dlself=needless
+ export_dynamic=yes
+ fi
+ prev=
+ continue
+ ;;
+ self)
+ if test "$prev" = dlprefiles; then
+ dlself=yes
+ elif test "$prev" = dlfiles && test "$dlopen_self" != yes; then
+ dlself=yes
+ else
+ dlself=needless
+ export_dynamic=yes
+ fi
+ prev=
+ continue
+ ;;
+ *)
+ if test "$prev" = dlfiles; then
+ dlfiles="$dlfiles $arg"
+ else
+ dlprefiles="$dlprefiles $arg"
+ fi
+ prev=
+ continue
+ ;;
+ esac
+ ;;
+ expsyms)
+ export_symbols="$arg"
+ if test ! -f "$arg"; then
+ $echo "$modename: symbol file \`$arg' does not exist"
+ exit 1
+ fi
+ prev=
+ continue
+ ;;
+ expsyms_regex)
+ export_symbols_regex="$arg"
+ prev=
+ continue
+ ;;
+ inst_prefix)
+ inst_prefix_dir="$arg"
+ prev=
+ continue
+ ;;
+ precious_regex)
+ precious_files_regex="$arg"
+ prev=
+ continue
+ ;;
+ release)
+ release="-$arg"
+ prev=
+ continue
+ ;;
+ objectlist)
+ if test -f "$arg"; then
+ save_arg=$arg
+ moreargs=
+ for fil in `cat $save_arg`
+ do
+# moreargs="$moreargs $fil"
+ arg=$fil
+ # A libtool-controlled object.
+
+ # Check to see that this really is a libtool object.
+ if (${SED} -e '2q' $arg | grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then
+ pic_object=
+ non_pic_object=
+
+ # Read the .lo file
+ # If there is no directory component, then add one.
+ case $arg in
+ */* | *\\*) . $arg ;;
+ *) . ./$arg ;;
+ esac
+
+ if test -z "$pic_object" || \
+ test -z "$non_pic_object" ||
+ test "$pic_object" = none && \
+ test "$non_pic_object" = none; then
+ $echo "$modename: cannot find name of object for \`$arg'" 1>&2
+ exit 1
+ fi
+
+ # Extract subdirectory from the argument.
+ xdir=`$echo "X$arg" | $Xsed -e 's%/[^/]*$%%'`
+ if test "X$xdir" = "X$arg"; then
+ xdir=
+ else
+ xdir="$xdir/"
+ fi
+
+ if test "$pic_object" != none; then
+ # Prepend the subdirectory the object is found in.
+ pic_object="$xdir$pic_object"
+
+ if test "$prev" = dlfiles; then
+ if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
+ dlfiles="$dlfiles $pic_object"
+ prev=
+ continue
+ else
+ # If libtool objects are unsupported, then we need to preload.
+ prev=dlprefiles
+ fi
+ fi
+
+ # CHECK ME: I think I busted this. -Ossama
+ if test "$prev" = dlprefiles; then
+ # Preload the old-style object.
+ dlprefiles="$dlprefiles $pic_object"
+ prev=
+ fi
+
+ # A PIC object.
+ libobjs="$libobjs $pic_object"
+ arg="$pic_object"
+ fi
+
+ # Non-PIC object.
+ if test "$non_pic_object" != none; then
+ # Prepend the subdirectory the object is found in.
+ non_pic_object="$xdir$non_pic_object"
+
+ # A standard non-PIC object
+ non_pic_objects="$non_pic_objects $non_pic_object"
+ if test -z "$pic_object" || test "$pic_object" = none ; then
+ arg="$non_pic_object"
+ fi
+ fi
+ else
+ # Only an error if not doing a dry-run.
+ if test -z "$run"; then
+ $echo "$modename: \`$arg' is not a valid libtool object" 1>&2
+ exit 1
+ else
+ # Dry-run case.
+
+ # Extract subdirectory from the argument.
+ xdir=`$echo "X$arg" | $Xsed -e 's%/[^/]*$%%'`
+ if test "X$xdir" = "X$arg"; then
+ xdir=
+ else
+ xdir="$xdir/"
+ fi
+
+ pic_object=`$echo "X${xdir}${objdir}/${arg}" | $Xsed -e "$lo2o"`
+ non_pic_object=`$echo "X${xdir}${arg}" | $Xsed -e "$lo2o"`
+ libobjs="$libobjs $pic_object"
+ non_pic_objects="$non_pic_objects $non_pic_object"
+ fi
+ fi
+ done
+ else
+ $echo "$modename: link input file \`$save_arg' does not exist"
+ exit 1
+ fi
+ arg=$save_arg
+ prev=
+ continue
+ ;;
+ rpath | xrpath)
+ # We need an absolute path.
+ case $arg in
+ [\\/]* | [A-Za-z]:[\\/]*) ;;
+ *)
+ $echo "$modename: only absolute run-paths are allowed" 1>&2
+ exit 1
+ ;;
+ esac
+ if test "$prev" = rpath; then
+ case "$rpath " in
+ *" $arg "*) ;;
+ *) rpath="$rpath $arg" ;;
+ esac
+ else
+ case "$xrpath " in
+ *" $arg "*) ;;
+ *) xrpath="$xrpath $arg" ;;
+ esac
+ fi
+ prev=
+ continue
+ ;;
+ xcompiler)
+ compiler_flags="$compiler_flags $qarg"
+ prev=
+ compile_command="$compile_command $qarg"
+ finalize_command="$finalize_command $qarg"
+ continue
+ ;;
+ xlinker)
+ linker_flags="$linker_flags $qarg"
+ compiler_flags="$compiler_flags $wl$qarg"
+ prev=
+ compile_command="$compile_command $wl$qarg"
+ finalize_command="$finalize_command $wl$qarg"
+ continue
+ ;;
+ xcclinker)
+ linker_flags="$linker_flags $qarg"
+ compiler_flags="$compiler_flags $qarg"
+ prev=
+ compile_command="$compile_command $qarg"
+ finalize_command="$finalize_command $qarg"
+ continue
+ ;;
+ *)
+ eval "$prev=\"\$arg\""
+ prev=
+ continue
+ ;;
+ esac
+ fi # test -n "$prev"
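+ # Editorial note (illustrative example, not part of the upstream script):
+ # the case above consumes the argument that follows a deferred option.
+ # For instance, in `libtool --mode=link cc -o libfoo.la foo.lo -rpath
+ # /usr/local/lib', seeing -rpath sets prev=rpath, and on the next loop
+ # iteration this block stores /usr/local/lib into $rpath.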
+
+ prevarg="$arg"
+
+ case $arg in
+ -all-static)
+ if test -n "$link_static_flag"; then
+ compile_command="$compile_command $link_static_flag"
+ finalize_command="$finalize_command $link_static_flag"
+ fi
+ continue
+ ;;
+
+ -allow-undefined)
+ # FIXME: remove this flag sometime in the future.
+ $echo "$modename: \`-allow-undefined' is deprecated because it is the default" 1>&2
+ continue
+ ;;
+
+ -avoid-version)
+ avoid_version=yes
+ continue
+ ;;
+
+ -dlopen)
+ prev=dlfiles
+ continue
+ ;;
+
+ -dlpreopen)
+ prev=dlprefiles
+ continue
+ ;;
+
+ -export-dynamic)
+ export_dynamic=yes
+ continue
+ ;;
+
+ -export-symbols | -export-symbols-regex)
+ if test -n "$export_symbols" || test -n "$export_symbols_regex"; then
+ $echo "$modename: more than one -exported-symbols argument is not allowed"
+ exit 1
+ fi
+ if test "X$arg" = "X-export-symbols"; then
+ prev=expsyms
+ else
+ prev=expsyms_regex
+ fi
+ continue
+ ;;
+
+ -inst-prefix-dir)
+ prev=inst_prefix
+ continue
+ ;;
+
+ # The native IRIX linker understands -LANG:*, -LIST:* and -LNO:*
+ # so, if we see these flags be careful not to treat them like -L
+ -L[A-Z][A-Z]*:*)
+ case $with_gcc/$host in
+ no/*-*-irix* | /*-*-irix*)
+ compile_command="$compile_command $arg"
+ finalize_command="$finalize_command $arg"
+ ;;
+ esac
+ continue
+ ;;
+
+ -L*)
+ dir=`$echo "X$arg" | $Xsed -e 's/^-L//'`
+ # We need an absolute path.
+ case $dir in
+ [\\/]* | [A-Za-z]:[\\/]*) ;;
+ *)
+ absdir=`cd "$dir" && pwd`
+ if test -z "$absdir"; then
+ $echo "$modename: cannot determine absolute directory name of \`$dir'" 1>&2
+ exit 1
+ fi
+ dir="$absdir"
+ ;;
+ esac
+ case "$deplibs " in
+ *" -L$dir "*) ;;
+ *)
+ deplibs="$deplibs -L$dir"
+ lib_search_path="$lib_search_path $dir"
+ ;;
+ esac
+ case $host in
+ *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2*)
+ case :$dllsearchpath: in
+ *":$dir:"*) ;;
+ *) dllsearchpath="$dllsearchpath:$dir";;
+ esac
+ ;;
+ esac
+ continue
+ ;;
+
+ -l*)
+ if test "X$arg" = "X-lc" || test "X$arg" = "X-lm"; then
+ case $host in
+ *-*-cygwin* | *-*-pw32* | *-*-beos*)
+ # These systems don't actually have a C or math library (as such)
+ continue
+ ;;
+ *-*-mingw* | *-*-os2*)
+ # These systems don't actually have a C library (as such)
+ test "X$arg" = "X-lc" && continue
+ ;;
+ *-*-openbsd* | *-*-freebsd*)
+ # Do not include libc due to us having libc/libc_r.
+ test "X$arg" = "X-lc" && continue
+ ;;
+ *-*-rhapsody* | *-*-darwin1.[012])
+ # Rhapsody C and math libraries are in the System framework
+ deplibs="$deplibs -framework System"
+ continue
+ esac
+ elif test "X$arg" = "X-lc_r"; then
+ case $host in
+ *-*-openbsd* | *-*-freebsd*)
+ # Do not include libc_r directly, use -pthread flag.
+ continue
+ ;;
+ esac
+ fi
+ deplibs="$deplibs $arg"
+ continue
+ ;;
+
+ -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe)
+ deplibs="$deplibs $arg"
+ continue
+ ;;
+
+ -module)
+ module=yes
+ continue
+ ;;
+
+ # gcc -m* arguments should be passed to the linker via $compiler_flags
+ # in order to pass architecture information to the linker
+ # (e.g. 32 vs 64-bit). This may also be accomplished via -Wl,-mfoo
+ # but this is not reliable with gcc because gcc may use -mfoo to
+ # select a different linker, different libraries, etc, while
+ # -Wl,-mfoo simply passes -mfoo to the linker.
+ -m*)
+ # Unknown arguments in both finalize_command and compile_command need
+ # to be aesthetically quoted because they are evaled later.
+ arg=`$echo "X$arg" | $Xsed -e "$sed_quote_subst"`
+ case $arg in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
+ arg="\"$arg\""
+ ;;
+ esac
+ compile_command="$compile_command $arg"
+ finalize_command="$finalize_command $arg"
+ if test "$with_gcc" = "yes" ; then
+ compiler_flags="$compiler_flags $arg"
+ fi
+ continue
+ ;;
+
+ -shrext)
+ prev=shrext
+ continue
+ ;;
+
+ -no-fast-install)
+ fast_install=no
+ continue
+ ;;
+
+ -no-install)
+ case $host in
+ *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2*)
+ # The PATH hackery in wrapper scripts is required on Windows
+ # in order for the loader to find any dlls it needs.
+ $echo "$modename: warning: \`-no-install' is ignored for $host" 1>&2
+ $echo "$modename: warning: assuming \`-no-fast-install' instead" 1>&2
+ fast_install=no
+ ;;
+ *) no_install=yes ;;
+ esac
+ continue
+ ;;
+
+ -no-undefined)
+ allow_undefined=no
+ continue
+ ;;
+
+ -objectlist)
+ prev=objectlist
+ continue
+ ;;
+
+ -o) prev=output ;;
+
+ -precious-files-regex)
+ prev=precious_regex
+ continue
+ ;;
+
+ -release)
+ prev=release
+ continue
+ ;;
+
+ -rpath)
+ prev=rpath
+ continue
+ ;;
+
+ -R)
+ prev=xrpath
+ continue
+ ;;
+
+ -R*)
+ dir=`$echo "X$arg" | $Xsed -e 's/^-R//'`
+ # We need an absolute path.
+ case $dir in
+ [\\/]* | [A-Za-z]:[\\/]*) ;;
+ *)
+ $echo "$modename: only absolute run-paths are allowed" 1>&2
+ exit 1
+ ;;
+ esac
+ case "$xrpath " in
+ *" $dir "*) ;;
+ *) xrpath="$xrpath $dir" ;;
+ esac
+ continue
+ ;;
+
+ -static)
+ # The effects of -static are defined in a previous loop.
+ # We used to do the same as -all-static on platforms that
+ # didn't have a PIC flag, but the assumption that the effects
+ # would be equivalent was wrong. It would break on at least
+ # Digital Unix and AIX.
+ continue
+ ;;
+
+ -thread-safe)
+ thread_safe=yes
+ continue
+ ;;
+
+ -version-info)
+ prev=vinfo
+ continue
+ ;;
+ -version-number)
+ prev=vinfo
+ vinfo_number=yes
+ continue
+ ;;
+
+ -Wc,*)
+ args=`$echo "X$arg" | $Xsed -e "$sed_quote_subst" -e 's/^-Wc,//'`
+ arg=
+ save_ifs="$IFS"; IFS=','
+ for flag in $args; do
+ IFS="$save_ifs"
+ case $flag in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
+ flag="\"$flag\""
+ ;;
+ esac
+ arg="$arg $wl$flag"
+ compiler_flags="$compiler_flags $flag"
+ done
+ IFS="$save_ifs"
+ arg=`$echo "X$arg" | $Xsed -e "s/^ //"`
+ ;;
+
+ -Wl,*)
+ args=`$echo "X$arg" | $Xsed -e "$sed_quote_subst" -e 's/^-Wl,//'`
+ arg=
+ save_ifs="$IFS"; IFS=','
+ for flag in $args; do
+ IFS="$save_ifs"
+ case $flag in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
+ flag="\"$flag\""
+ ;;
+ esac
+ arg="$arg $wl$flag"
+ compiler_flags="$compiler_flags $wl$flag"
+ linker_flags="$linker_flags $flag"
+ done
+ IFS="$save_ifs"
+ arg=`$echo "X$arg" | $Xsed -e "s/^ //"`
+ ;;
+
+ -Xcompiler)
+ prev=xcompiler
+ continue
+ ;;
+
+ -Xlinker)
+ prev=xlinker
+ continue
+ ;;
+
+ -XCClinker)
+ prev=xcclinker
+ continue
+ ;;
+
+ # Some other compiler flag.
+ -* | +*)
+ # Unknown arguments in both finalize_command and compile_command need
+ # to be aesthetically quoted because they are evaled later.
+ arg=`$echo "X$arg" | $Xsed -e "$sed_quote_subst"`
+ case $arg in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
+ arg="\"$arg\""
+ ;;
+ esac
+ ;;
+
+ *.$objext)
+ # A standard object.
+ objs="$objs $arg"
+ ;;
+
+ *.lo)
+ # A libtool-controlled object.
+
+ # Check to see that this really is a libtool object.
+ if (${SED} -e '2q' $arg | grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then
+ pic_object=
+ non_pic_object=
+
+ # Read the .lo file
+ # If there is no directory component, then add one.
+ case $arg in
+ */* | *\\*) . $arg ;;
+ *) . ./$arg ;;
+ esac
+
+ if test -z "$pic_object" || \
+ test -z "$non_pic_object" ||
+ test "$pic_object" = none && \
+ test "$non_pic_object" = none; then
+ $echo "$modename: cannot find name of object for \`$arg'" 1>&2
+ exit 1
+ fi
+
+ # Extract subdirectory from the argument.
+ xdir=`$echo "X$arg" | $Xsed -e 's%/[^/]*$%%'`
+ if test "X$xdir" = "X$arg"; then
+ xdir=
+ else
+ xdir="$xdir/"
+ fi
+
+ if test "$pic_object" != none; then
+ # Prepend the subdirectory the object is found in.
+ pic_object="$xdir$pic_object"
+
+ if test "$prev" = dlfiles; then
+ if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
+ dlfiles="$dlfiles $pic_object"
+ prev=
+ continue
+ else
+ # If libtool objects are unsupported, then we need to preload.
+ prev=dlprefiles
+ fi
+ fi
+
+ # CHECK ME: I think I busted this. -Ossama
+ if test "$prev" = dlprefiles; then
+ # Preload the old-style object.
+ dlprefiles="$dlprefiles $pic_object"
+ prev=
+ fi
+
+ # A PIC object.
+ libobjs="$libobjs $pic_object"
+ arg="$pic_object"
+ fi
+
+ # Non-PIC object.
+ if test "$non_pic_object" != none; then
+ # Prepend the subdirectory the object is found in.
+ non_pic_object="$xdir$non_pic_object"
+
+ # A standard non-PIC object
+ non_pic_objects="$non_pic_objects $non_pic_object"
+ if test -z "$pic_object" || test "$pic_object" = none ; then
+ arg="$non_pic_object"
+ fi
+ fi
+ else
+ # Only an error if not doing a dry-run.
+ if test -z "$run"; then
+ $echo "$modename: \`$arg' is not a valid libtool object" 1>&2
+ exit 1
+ else
+ # Dry-run case.
+
+ # Extract subdirectory from the argument.
+ xdir=`$echo "X$arg" | $Xsed -e 's%/[^/]*$%%'`
+ if test "X$xdir" = "X$arg"; then
+ xdir=
+ else
+ xdir="$xdir/"
+ fi
+
+ pic_object=`$echo "X${xdir}${objdir}/${arg}" | $Xsed -e "$lo2o"`
+ non_pic_object=`$echo "X${xdir}${arg}" | $Xsed -e "$lo2o"`
+ libobjs="$libobjs $pic_object"
+ non_pic_objects="$non_pic_objects $non_pic_object"
+ fi
+ fi
+ ;;
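+ # Editorial note (illustrative): a .lo file is itself a short shell
+ # fragment, typically containing lines such as pic_object='.libs/foo.o'
+ # and non_pic_object='foo.o'; sourcing it above is how libtool recovers
+ # both object names for foo.lo.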
+
+ *.$libext)
+ # An archive.
+ deplibs="$deplibs $arg"
+ old_deplibs="$old_deplibs $arg"
+ continue
+ ;;
+
+ *.la)
+ # A libtool-controlled library.
+
+ if test "$prev" = dlfiles; then
+ # This library was specified with -dlopen.
+ dlfiles="$dlfiles $arg"
+ prev=
+ elif test "$prev" = dlprefiles; then
+ # The library was specified with -dlpreopen.
+ dlprefiles="$dlprefiles $arg"
+ prev=
+ else
+ deplibs="$deplibs $arg"
+ fi
+ continue
+ ;;
+
+ # Some other compiler argument.
+ *)
+ # Unknown arguments in both finalize_command and compile_command need
+ # to be aesthetically quoted because they are evaled later.
+ arg=`$echo "X$arg" | $Xsed -e "$sed_quote_subst"`
+ case $arg in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
+ arg="\"$arg\""
+ ;;
+ esac
+ ;;
+ esac # arg
+
+ # Now actually substitute the argument into the commands.
+ if test -n "$arg"; then
+ compile_command="$compile_command $arg"
+ finalize_command="$finalize_command $arg"
+ fi
+ done # argument parsing loop
+
+ if test -n "$prev"; then
+ $echo "$modename: the \`$prevarg' option requires an argument" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ if test "$export_dynamic" = yes && test -n "$export_dynamic_flag_spec"; then
+ eval arg=\"$export_dynamic_flag_spec\"
+ compile_command="$compile_command $arg"
+ finalize_command="$finalize_command $arg"
+ fi
+
+ oldlibs=
+ # calculate the name of the file, without its directory
+ outputname=`$echo "X$output" | $Xsed -e 's%^.*/%%'`
+ libobjs_save="$libobjs"
+
+ if test -n "$shlibpath_var"; then
+ # get the directories listed in $shlibpath_var
+ eval shlib_search_path=\`\$echo \"X\${$shlibpath_var}\" \| \$Xsed -e \'s/:/ /g\'\`
+ else
+ shlib_search_path=
+ fi
+ eval sys_lib_search_path=\"$sys_lib_search_path_spec\"
+ eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\"
+
+ output_objdir=`$echo "X$output" | $Xsed -e 's%/[^/]*$%%'`
+ if test "X$output_objdir" = "X$output"; then
+ output_objdir="$objdir"
+ else
+ output_objdir="$output_objdir/$objdir"
+ fi
+ # Create the object directory.
+ if test ! -d "$output_objdir"; then
+ $show "$mkdir $output_objdir"
+ $run $mkdir $output_objdir
+ status=$?
+ if test "$status" -ne 0 && test ! -d "$output_objdir"; then
+ exit $status
+ fi
+ fi
+
+ # Determine the type of output
+ case $output in
+ "")
+ $echo "$modename: you must specify an output file" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ ;;
+ *.$libext) linkmode=oldlib ;;
+ *.lo | *.$objext) linkmode=obj ;;
+ *.la) linkmode=lib ;;
+ *) linkmode=prog ;; # Anything else should be a program.
+ esac
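+ # Editorial example (not part of the upstream script): `-o libfoo.la'
+ # selects linkmode=lib, `-o foo.lo' or `-o foo.o' selects linkmode=obj,
+ # `-o libfoo.a' selects linkmode=oldlib, and any other output name such
+ # as `-o foo' is linked as a program (linkmode=prog).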
+
+ case $host in
+ *cygwin* | *mingw* | *pw32*)
+ # don't eliminate duplications in $postdeps and $predeps
+ duplicate_compiler_generated_deps=yes
+ ;;
+ *)
+ duplicate_compiler_generated_deps=$duplicate_deps
+ ;;
+ esac
+ specialdeplibs=
+
+ libs=
+ # Find all interdependent deplibs by searching for libraries
+ # that are linked more than once (e.g. -la -lb -la)
+ for deplib in $deplibs; do
+ if test "X$duplicate_deps" = "Xyes" ; then
+ case "$libs " in
+ *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
+ esac
+ fi
+ libs="$libs $deplib"
+ done
+
+ if test "$linkmode" = lib; then
+ libs="$predeps $libs $compiler_lib_search_path $postdeps"
+
+ # Compute libraries that are listed more than once in $predeps
+ # $postdeps and mark them as special (i.e., whose duplicates are
+ # not to be eliminated).
+ pre_post_deps=
+ if test "X$duplicate_compiler_generated_deps" = "Xyes" ; then
+ for pre_post_dep in $predeps $postdeps; do
+ case "$pre_post_deps " in
+ *" $pre_post_dep "*) specialdeplibs="$specialdeplibs $pre_post_deps" ;;
+ esac
+ pre_post_deps="$pre_post_deps $pre_post_dep"
+ done
+ fi
+ pre_post_deps=
+ fi
+
+ deplibs=
+ newdependency_libs=
+ newlib_search_path=
+ need_relink=no # whether we're linking any uninstalled libtool libraries
+ notinst_deplibs= # not-installed libtool libraries
+ notinst_path= # paths that contain not-installed libtool libraries
+ case $linkmode in
+ lib)
+ passes="conv link"
+ for file in $dlfiles $dlprefiles; do
+ case $file in
+ *.la) ;;
+ *)
+ $echo "$modename: libraries can \`-dlopen' only libtool libraries: $file" 1>&2
+ exit 1
+ ;;
+ esac
+ done
+ ;;
+ prog)
+ compile_deplibs=
+ finalize_deplibs=
+ alldeplibs=no
+ newdlfiles=
+ newdlprefiles=
+ passes="conv scan dlopen dlpreopen link"
+ ;;
+ *) passes="conv"
+ ;;
+ esac
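+ # Editorial note: a program link therefore walks the deplibs in five
+ # passes (conv, scan, dlopen, dlpreopen, link), a libtool library in
+ # two (conv, link), and anything else in a single conv pass.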
+ for pass in $passes; do
+ if test "$linkmode,$pass" = "lib,link" ||
+ test "$linkmode,$pass" = "prog,scan"; then
+ libs="$deplibs"
+ deplibs=
+ fi
+ if test "$linkmode" = prog; then
+ case $pass in
+ dlopen) libs="$dlfiles" ;;
+ dlpreopen) libs="$dlprefiles" ;;
+ link) libs="$deplibs %DEPLIBS% $dependency_libs" ;;
+ esac
+ fi
+ if test "$pass" = dlopen; then
+ # Collect dlpreopened libraries
+ save_deplibs="$deplibs"
+ deplibs=
+ fi
+ for deplib in $libs; do
+ lib=
+ found=no
+ case $deplib in
+ -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe)
+ if test "$linkmode,$pass" = "prog,link"; then
+ compile_deplibs="$deplib $compile_deplibs"
+ finalize_deplibs="$deplib $finalize_deplibs"
+ else
+ deplibs="$deplib $deplibs"
+ fi
+ continue
+ ;;
+ -l*)
+ if test "$linkmode" != lib && test "$linkmode" != prog; then
+ $echo "$modename: warning: \`-l' is ignored for archives/objects" 1>&2
+ continue
+ fi
+ if test "$pass" = conv; then
+ deplibs="$deplib $deplibs"
+ continue
+ fi
+ name=`$echo "X$deplib" | $Xsed -e 's/^-l//'`
+ for searchdir in $newlib_search_path $lib_search_path $sys_lib_search_path $shlib_search_path; do
+ for search_ext in .la $shrext .so .a; do
+ # Search the libtool library
+ lib="$searchdir/lib${name}${search_ext}"
+ if test -f "$lib"; then
+ if test "$search_ext" = ".la"; then
+ found=yes
+ else
+ found=no
+ fi
+ break 2
+ fi
+ done
+ done
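+ # Editorial example (assumption, not upstream text): for `-lsqlite3' each
+ # search directory is probed for libsqlite3.la, libsqlite3$shrext,
+ # libsqlite3.so and libsqlite3.a in that order; $found is set to yes only
+ # when the .la file is the one located.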
+ if test "$found" != yes; then
+ # deplib doesn't seem to be a libtool library
+ if test "$linkmode,$pass" = "prog,link"; then
+ compile_deplibs="$deplib $compile_deplibs"
+ finalize_deplibs="$deplib $finalize_deplibs"
+ else
+ deplibs="$deplib $deplibs"
+ test "$linkmode" = lib && newdependency_libs="$deplib $newdependency_libs"
+ fi
+ continue
+ else # deplib is a libtool library
+ # If $allow_libtool_libs_with_static_runtimes && $deplib is a stdlib,
+ # we need to do some special things here, and not later.
+ if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ case " $predeps $postdeps " in
+ *" $deplib "*)
+ if (${SED} -e '2q' $lib |
+ grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then
+ library_names=
+ old_library=
+ case $lib in
+ */* | *\\*) . $lib ;;
+ *) . ./$lib ;;
+ esac
+ for l in $old_library $library_names; do
+ ll="$l"
+ done
+ if test "X$ll" = "X$old_library" ; then # only static version available
+ found=no
+ ladir=`$echo "X$lib" | $Xsed -e 's%/[^/]*$%%'`
+ test "X$ladir" = "X$lib" && ladir="."
+ lib=$ladir/$old_library
+ if test "$linkmode,$pass" = "prog,link"; then
+ compile_deplibs="$deplib $compile_deplibs"
+ finalize_deplibs="$deplib $finalize_deplibs"
+ else
+ deplibs="$deplib $deplibs"
+ test "$linkmode" = lib && newdependency_libs="$deplib $newdependency_libs"
+ fi
+ continue
+ fi
+ fi
+ ;;
+ *) ;;
+ esac
+ fi
+ fi
+ ;; # -l
+ -L*)
+ case $linkmode in
+ lib)
+ deplibs="$deplib $deplibs"
+ test "$pass" = conv && continue
+ newdependency_libs="$deplib $newdependency_libs"
+ newlib_search_path="$newlib_search_path "`$echo "X$deplib" | $Xsed -e 's/^-L//'`
+ ;;
+ prog)
+ if test "$pass" = conv; then
+ deplibs="$deplib $deplibs"
+ continue
+ fi
+ if test "$pass" = scan; then
+ deplibs="$deplib $deplibs"
+ newlib_search_path="$newlib_search_path "`$echo "X$deplib" | $Xsed -e 's/^-L//'`
+ else
+ compile_deplibs="$deplib $compile_deplibs"
+ finalize_deplibs="$deplib $finalize_deplibs"
+ fi
+ ;;
+ *)
+ $echo "$modename: warning: \`-L' is ignored for archives/objects" 1>&2
+ ;;
+ esac # linkmode
+ continue
+ ;; # -L
+ -R*)
+ if test "$pass" = link; then
+ dir=`$echo "X$deplib" | $Xsed -e 's/^-R//'`
+ # Make sure the xrpath contains only unique directories.
+ case "$xrpath " in
+ *" $dir "*) ;;
+ *) xrpath="$xrpath $dir" ;;
+ esac
+ fi
+ deplibs="$deplib $deplibs"
+ continue
+ ;;
+ *.la) lib="$deplib" ;;
+ *.$libext)
+ if test "$pass" = conv; then
+ deplibs="$deplib $deplibs"
+ continue
+ fi
+ case $linkmode in
+ lib)
+ if test "$deplibs_check_method" != pass_all; then
+ $echo
+ $echo "*** Warning: Trying to link with static lib archive $deplib."
+ $echo "*** I have the capability to make that library automatically link in when"
+ $echo "*** you link to this library. But I can only do this if you have a"
+ $echo "*** shared version of the library, which you do not appear to have"
+ $echo "*** because the file extensions .$libext of this argument makes me believe"
+ $echo "*** that it is just a static archive that I should not used here."
+ else
+ $echo
+ $echo "*** Warning: Linking the shared library $output against the"
+ $echo "*** static library $deplib is not portable!"
+ deplibs="$deplib $deplibs"
+ fi
+ continue
+ ;;
+ prog)
+ if test "$pass" != link; then
+ deplibs="$deplib $deplibs"
+ else
+ compile_deplibs="$deplib $compile_deplibs"
+ finalize_deplibs="$deplib $finalize_deplibs"
+ fi
+ continue
+ ;;
+ esac # linkmode
+ ;; # *.$libext
+ *.lo | *.$objext)
+ if test "$pass" = conv; then
+ deplibs="$deplib $deplibs"
+ elif test "$linkmode" = prog; then
+ if test "$pass" = dlpreopen || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then
+ # If there is no dlopen support or we're linking statically,
+ # we need to preload.
+ newdlprefiles="$newdlprefiles $deplib"
+ compile_deplibs="$deplib $compile_deplibs"
+ finalize_deplibs="$deplib $finalize_deplibs"
+ else
+ newdlfiles="$newdlfiles $deplib"
+ fi
+ fi
+ continue
+ ;;
+ %DEPLIBS%)
+ alldeplibs=yes
+ continue
+ ;;
+ esac # case $deplib
+ if test "$found" = yes || test -f "$lib"; then :
+ else
+ $echo "$modename: cannot find the library \`$lib'" 1>&2
+ exit 1
+ fi
+
+ # Check to see that this really is a libtool archive.
+ if (${SED} -e '2q' $lib | grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then :
+ else
+ $echo "$modename: \`$lib' is not a valid libtool archive" 1>&2
+ exit 1
+ fi
+
+ ladir=`$echo "X$lib" | $Xsed -e 's%/[^/]*$%%'`
+ test "X$ladir" = "X$lib" && ladir="."
+
+ dlname=
+ dlopen=
+ dlpreopen=
+ libdir=
+ library_names=
+ old_library=
+ # If the library was installed with an old release of libtool,
+ # it will not redefine the variables installed or shouldnotlink
+ installed=yes
+ shouldnotlink=no
+
+ # Read the .la file
+ case $lib in
+ */* | *\\*) . $lib ;;
+ *) . ./$lib ;;
+ esac
+
+ if test "$linkmode,$pass" = "lib,link" ||
+ test "$linkmode,$pass" = "prog,scan" ||
+ { test "$linkmode" != prog && test "$linkmode" != lib; }; then
+ test -n "$dlopen" && dlfiles="$dlfiles $dlopen"
+ test -n "$dlpreopen" && dlprefiles="$dlprefiles $dlpreopen"
+ fi
+
+ if test "$pass" = conv; then
+ # Only check for convenience libraries
+ deplibs="$lib $deplibs"
+ if test -z "$libdir"; then
+ if test -z "$old_library"; then
+ $echo "$modename: cannot find name of link library for \`$lib'" 1>&2
+ exit 1
+ fi
+ # It is a libtool convenience library, so add in its objects.
+ convenience="$convenience $ladir/$objdir/$old_library"
+ old_convenience="$old_convenience $ladir/$objdir/$old_library"
+ tmp_libs=
+ for deplib in $dependency_libs; do
+ deplibs="$deplib $deplibs"
+ if test "X$duplicate_deps" = "Xyes" ; then
+ case "$tmp_libs " in
+ *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
+ esac
+ fi
+ tmp_libs="$tmp_libs $deplib"
+ done
+ elif test "$linkmode" != prog && test "$linkmode" != lib; then
+ $echo "$modename: \`$lib' is not a convenience library" 1>&2
+ exit 1
+ fi
+ continue
+ fi # $pass = conv
+
+
+ # Get the name of the library we link against.
+ linklib=
+ for l in $old_library $library_names; do
+ linklib="$l"
+ done
+ if test -z "$linklib"; then
+ $echo "$modename: cannot find name of link library for \`$lib'" 1>&2
+ exit 1
+ fi
+
+ # This library was specified with -dlopen.
+ if test "$pass" = dlopen; then
+ if test -z "$libdir"; then
+ $echo "$modename: cannot -dlopen a convenience library: \`$lib'" 1>&2
+ exit 1
+ fi
+ if test -z "$dlname" || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then
+ # If there is no dlname, no dlopen support or we're linking
+ # statically, we need to preload. We also need to preload any
+ # dependent libraries so libltdl's deplib preloader doesn't
+ # bomb out in the load deplibs phase.
+ dlprefiles="$dlprefiles $lib $dependency_libs"
+ else
+ newdlfiles="$newdlfiles $lib"
+ fi
+ continue
+ fi # $pass = dlopen
+
+ # We need an absolute path.
+ case $ladir in
+ [\\/]* | [A-Za-z]:[\\/]*) abs_ladir="$ladir" ;;
+ *)
+ abs_ladir=`cd "$ladir" && pwd`
+ if test -z "$abs_ladir"; then
+ $echo "$modename: warning: cannot determine absolute directory name of \`$ladir'" 1>&2
+ $echo "$modename: passing it literally to the linker, although it might fail" 1>&2
+ abs_ladir="$ladir"
+ fi
+ ;;
+ esac
+ laname=`$echo "X$lib" | $Xsed -e 's%^.*/%%'`
+
+ # Find the relevant object directory and library name.
+ if test "X$installed" = Xyes; then
+ if test ! -f "$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
+ $echo "$modename: warning: library \`$lib' was moved." 1>&2
+ dir="$ladir"
+ absdir="$abs_ladir"
+ libdir="$abs_ladir"
+ else
+ dir="$libdir"
+ absdir="$libdir"
+ fi
+ else
+ dir="$ladir/$objdir"
+ absdir="$abs_ladir/$objdir"
+ # Remove this search path later
+ notinst_path="$notinst_path $abs_ladir"
+ fi # $installed = yes
+ name=`$echo "X$laname" | $Xsed -e 's/\.la$//' -e 's/^lib//'`
+
+ # This library was specified with -dlpreopen.
+ if test "$pass" = dlpreopen; then
+ if test -z "$libdir"; then
+ $echo "$modename: cannot -dlpreopen a convenience library: \`$lib'" 1>&2
+ exit 1
+ fi
+ # Prefer using a static library (so that no silly _DYNAMIC symbols
+ # are required to link).
+ if test -n "$old_library"; then
+ newdlprefiles="$newdlprefiles $dir/$old_library"
+ # Otherwise, use the dlname, so that lt_dlopen finds it.
+ elif test -n "$dlname"; then
+ newdlprefiles="$newdlprefiles $dir/$dlname"
+ else
+ newdlprefiles="$newdlprefiles $dir/$linklib"
+ fi
+ fi # $pass = dlpreopen
+
+ if test -z "$libdir"; then
+ # Link the convenience library
+ if test "$linkmode" = lib; then
+ deplibs="$dir/$old_library $deplibs"
+ elif test "$linkmode,$pass" = "prog,link"; then
+ compile_deplibs="$dir/$old_library $compile_deplibs"
+ finalize_deplibs="$dir/$old_library $finalize_deplibs"
+ else
+ deplibs="$lib $deplibs" # used for prog,scan pass
+ fi
+ continue
+ fi
+
+
+ if test "$linkmode" = prog && test "$pass" != link; then
+ newlib_search_path="$newlib_search_path $ladir"
+ deplibs="$lib $deplibs"
+
+ linkalldeplibs=no
+ if test "$link_all_deplibs" != no || test -z "$library_names" ||
+ test "$build_libtool_libs" = no; then
+ linkalldeplibs=yes
+ fi
+
+ tmp_libs=
+ for deplib in $dependency_libs; do
+ case $deplib in
+ -L*) newlib_search_path="$newlib_search_path "`$echo "X$deplib" | $Xsed -e 's/^-L//'`;; ### testsuite: skip nested quoting test
+ esac
+ # Need to link against all dependency_libs?
+ if test "$linkalldeplibs" = yes; then
+ deplibs="$deplib $deplibs"
+ else
+ # Need to hardcode shared library paths
+ # or/and link against static libraries
+ newdependency_libs="$deplib $newdependency_libs"
+ fi
+ if test "X$duplicate_deps" = "Xyes" ; then
+ case "$tmp_libs " in
+ *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
+ esac
+ fi
+ tmp_libs="$tmp_libs $deplib"
+ done # for deplib
+ continue
+ fi # $linkmode = prog...
+
+ if test "$linkmode,$pass" = "prog,link"; then
+ if test -n "$library_names" &&
+ { test "$prefer_static_libs" = no || test -z "$old_library"; }; then
+ # We need to hardcode the library path
+ if test -n "$shlibpath_var"; then
+ # Make sure the rpath contains only unique directories.
+ case "$temp_rpath " in
+ *" $dir "*) ;;
+ *" $absdir "*) ;;
+ *) temp_rpath="$temp_rpath $dir" ;;
+ esac
+ fi
+
+ # Hardcode the library path.
+ # Skip directories that are in the system default run-time
+ # search path.
+ case " $sys_lib_dlsearch_path " in
+ *" $absdir "*) ;;
+ *)
+ case "$compile_rpath " in
+ *" $absdir "*) ;;
+ *) compile_rpath="$compile_rpath $absdir"
+ esac
+ ;;
+ esac
+ case " $sys_lib_dlsearch_path " in
+ *" $libdir "*) ;;
+ *)
+ case "$finalize_rpath " in
+ *" $libdir "*) ;;
+ *) finalize_rpath="$finalize_rpath $libdir"
+ esac
+ ;;
+ esac
+ fi # $linkmode,$pass = prog,link...
+
+ if test "$alldeplibs" = yes &&
+ { test "$deplibs_check_method" = pass_all ||
+ { test "$build_libtool_libs" = yes &&
+ test -n "$library_names"; }; }; then
+ # We only need to search for static libraries
+ continue
+ fi
+ fi
+
+ link_static=no # Whether the deplib will be linked statically
+ if test -n "$library_names" &&
+ { test "$prefer_static_libs" = no || test -z "$old_library"; }; then
+ if test "$installed" = no; then
+ notinst_deplibs="$notinst_deplibs $lib"
+ need_relink=yes
+ fi
+ # This is a shared library
+
+ # Warn about portability, can't link against -module's on some systems (darwin)
+ if test "$shouldnotlink" = yes && test "$pass" = link ; then
+ $echo
+ if test "$linkmode" = prog; then
+ $echo "*** Warning: Linking the executable $output against the loadable module"
+ else
+ $echo "*** Warning: Linking the shared library $output against the loadable module"
+ fi
+ $echo "*** $linklib is not portable!"
+ fi
+ if test "$linkmode" = lib &&
+ test "$hardcode_into_libs" = yes; then
+ # Hardcode the library path.
+ # Skip directories that are in the system default run-time
+ # search path.
+ case " $sys_lib_dlsearch_path " in
+ *" $absdir "*) ;;
+ *)
+ case "$compile_rpath " in
+ *" $absdir "*) ;;
+ *) compile_rpath="$compile_rpath $absdir"
+ esac
+ ;;
+ esac
+ case " $sys_lib_dlsearch_path " in
+ *" $libdir "*) ;;
+ *)
+ case "$finalize_rpath " in
+ *" $libdir "*) ;;
+ *) finalize_rpath="$finalize_rpath $libdir"
+ esac
+ ;;
+ esac
+ fi
+
+ if test -n "$old_archive_from_expsyms_cmds"; then
+ # figure out the soname
+ set dummy $library_names
+ realname="$2"
+ shift; shift
+ libname=`eval \\$echo \"$libname_spec\"`
+ # use dlname if we got it. it's perfectly good, no?
+ if test -n "$dlname"; then
+ soname="$dlname"
+ elif test -n "$soname_spec"; then
+ # bleh windows
+ case $host in
+ *cygwin* | mingw*)
+ major=`expr $current - $age`
+ versuffix="-$major"
+ ;;
+ esac
+ eval soname=\"$soname_spec\"
+ else
+ soname="$realname"
+ fi
+
+ # Make a new name for the extract_expsyms_cmds to use
+ soroot="$soname"
+ soname=`$echo $soroot | ${SED} -e 's/^.*\///'`
+ newlib="libimp-`$echo $soname | ${SED} 's/^lib//;s/\.dll$//'`.a"
+
+ # If the library has no export list, then create one now
+ if test -f "$output_objdir/$soname-def"; then :
+ else
+ $show "extracting exported symbol list from \`$soname'"
+ save_ifs="$IFS"; IFS='~'
+ cmds=$extract_expsyms_cmds
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ done
+ IFS="$save_ifs"
+ fi
+
+ # Create $newlib
+ if test -f "$output_objdir/$newlib"; then :; else
+ $show "generating import library for \`$soname'"
+ save_ifs="$IFS"; IFS='~'
+ cmds=$old_archive_from_expsyms_cmds
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ done
+ IFS="$save_ifs"
+ fi
+ # make sure the library variables are pointing to the new library
+ dir=$output_objdir
+ linklib=$newlib
+ fi # test -n "$old_archive_from_expsyms_cmds"
+
+ if test "$linkmode" = prog || test "$mode" != relink; then
+ add_shlibpath=
+ add_dir=
+ add=
+ lib_linked=yes
+ case $hardcode_action in
+ immediate | unsupported)
+ if test "$hardcode_direct" = no; then
+ add="$dir/$linklib"
+ case $host in
+ *-*-sco3.2v5* ) add_dir="-L$dir" ;;
+ *-*-darwin* )
+ # if the lib is a module then we can not link against it, someone
+ # is ignoring the new warnings I added
+ if /usr/bin/file -L $add 2> /dev/null | grep "bundle" >/dev/null ; then
+ $echo "** Warning, lib $linklib is a module, not a shared library"
+ if test -z "$old_library" ; then
+ $echo
+ $echo "** And there doesn't seem to be a static archive available"
+ $echo "** The link will probably fail, sorry"
+ else
+ add="$dir/$old_library"
+ fi
+ fi
+ esac
+ elif test "$hardcode_minus_L" = no; then
+ case $host in
+ *-*-sunos*) add_shlibpath="$dir" ;;
+ esac
+ add_dir="-L$dir"
+ add="-l$name"
+ elif test "$hardcode_shlibpath_var" = no; then
+ add_shlibpath="$dir"
+ add="-l$name"
+ else
+ lib_linked=no
+ fi
+ ;;
+ relink)
+ if test "$hardcode_direct" = yes; then
+ add="$dir/$linklib"
+ elif test "$hardcode_minus_L" = yes; then
+ add_dir="-L$dir"
+ # Try looking first in the location we're being installed to.
+ if test -n "$inst_prefix_dir"; then
+ case "$libdir" in
+ [\\/]*)
+ add_dir="$add_dir -L$inst_prefix_dir$libdir"
+ ;;
+ esac
+ fi
+ add="-l$name"
+ elif test "$hardcode_shlibpath_var" = yes; then
+ add_shlibpath="$dir"
+ add="-l$name"
+ else
+ lib_linked=no
+ fi
+ ;;
+ *) lib_linked=no ;;
+ esac
+
+ if test "$lib_linked" != yes; then
+ $echo "$modename: configuration error: unsupported hardcode properties"
+ exit 1
+ fi
+
+ if test -n "$add_shlibpath"; then
+ case :$compile_shlibpath: in
+ *":$add_shlibpath:"*) ;;
+ *) compile_shlibpath="$compile_shlibpath$add_shlibpath:" ;;
+ esac
+ fi
+ if test "$linkmode" = prog; then
+ test -n "$add_dir" && compile_deplibs="$add_dir $compile_deplibs"
+ test -n "$add" && compile_deplibs="$add $compile_deplibs"
+ else
+ test -n "$add_dir" && deplibs="$add_dir $deplibs"
+ test -n "$add" && deplibs="$add $deplibs"
+ if test "$hardcode_direct" != yes && \
+ test "$hardcode_minus_L" != yes && \
+ test "$hardcode_shlibpath_var" = yes; then
+ case :$finalize_shlibpath: in
+ *":$libdir:"*) ;;
+ *) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
+ esac
+ fi
+ fi
+ fi
+
+ if test "$linkmode" = prog || test "$mode" = relink; then
+ add_shlibpath=
+ add_dir=
+ add=
+ # Finalize command for both is simple: just hardcode it.
+ if test "$hardcode_direct" = yes; then
+ add="$libdir/$linklib"
+ elif test "$hardcode_minus_L" = yes; then
+ add_dir="-L$libdir"
+ add="-l$name"
+ elif test "$hardcode_shlibpath_var" = yes; then
+ case :$finalize_shlibpath: in
+ *":$libdir:"*) ;;
+ *) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
+ esac
+ add="-l$name"
+ elif test "$hardcode_automatic" = yes; then
+ if test -n "$inst_prefix_dir" && test -f "$inst_prefix_dir$libdir/$linklib" ; then
+ add="$inst_prefix_dir$libdir/$linklib"
+ else
+ add="$libdir/$linklib"
+ fi
+ else
+ # We cannot seem to hardcode it, guess we'll fake it.
+ add_dir="-L$libdir"
+ # Try looking first in the location we're being installed to.
+ if test -n "$inst_prefix_dir"; then
+ case "$libdir" in
+ [\\/]*)
+ add_dir="$add_dir -L$inst_prefix_dir$libdir"
+ ;;
+ esac
+ fi
+ add="-l$name"
+ fi
+
+ if test "$linkmode" = prog; then
+ test -n "$add_dir" && finalize_deplibs="$add_dir $finalize_deplibs"
+ test -n "$add" && finalize_deplibs="$add $finalize_deplibs"
+ else
+ test -n "$add_dir" && deplibs="$add_dir $deplibs"
+ test -n "$add" && deplibs="$add $deplibs"
+ fi
+ fi
+ elif test "$linkmode" = prog; then
+ # Here we assume that one of hardcode_direct or hardcode_minus_L
+ # is not unsupported. This is valid on all known static and
+ # shared platforms.
+ if test "$hardcode_direct" != unsupported; then
+ test -n "$old_library" && linklib="$old_library"
+ compile_deplibs="$dir/$linklib $compile_deplibs"
+ finalize_deplibs="$dir/$linklib $finalize_deplibs"
+ else
+ compile_deplibs="-l$name -L$dir $compile_deplibs"
+ finalize_deplibs="-l$name -L$dir $finalize_deplibs"
+ fi
+ elif test "$build_libtool_libs" = yes; then
+ # Not a shared library
+ if test "$deplibs_check_method" != pass_all; then
+ # We're trying to link a shared library against a static one
+ # but the system doesn't support it.
+
+ # Just print a warning and add the library to dependency_libs so
+ # that the program can be linked against the static library.
+ $echo
+ $echo "*** Warning: This system can not link to static lib archive $lib."
+ $echo "*** I have the capability to make that library automatically link in when"
+ $echo "*** you link to this library. But I can only do this if you have a"
+ $echo "*** shared version of the library, which you do not appear to have."
+ if test "$module" = yes; then
+ $echo "*** But as you try to build a module library, libtool will still create "
+ $echo "*** a static module, that should work as long as the dlopening application"
+ $echo "*** is linked with the -dlopen flag to resolve symbols at runtime."
+ if test -z "$global_symbol_pipe"; then
+ $echo
+ $echo "*** However, this would only work if libtool was able to extract symbol"
+ $echo "*** lists from a program, using \`nm' or equivalent, but libtool could"
+ $echo "*** not find such a program. So, this module is probably useless."
+ $echo "*** \`nm' from GNU binutils and a full rebuild may help."
+ fi
+ if test "$build_old_libs" = no; then
+ build_libtool_libs=module
+ build_old_libs=yes
+ else
+ build_libtool_libs=no
+ fi
+ fi
+ else
+ convenience="$convenience $dir/$old_library"
+ old_convenience="$old_convenience $dir/$old_library"
+ deplibs="$dir/$old_library $deplibs"
+ link_static=yes
+ fi
+ fi # link shared/static library?
+
+ if test "$linkmode" = lib; then
+ if test -n "$dependency_libs" &&
+ { test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes ||
+ test "$link_static" = yes; }; then
+ # Extract -R from dependency_libs
+ temp_deplibs=
+ for libdir in $dependency_libs; do
+ case $libdir in
+ -R*) temp_xrpath=`$echo "X$libdir" | $Xsed -e 's/^-R//'`
+ case " $xrpath " in
+ *" $temp_xrpath "*) ;;
+ *) xrpath="$xrpath $temp_xrpath";;
+ esac;;
+ *) temp_deplibs="$temp_deplibs $libdir";;
+ esac
+ done
+ dependency_libs="$temp_deplibs"
+ fi
+
+ newlib_search_path="$newlib_search_path $absdir"
+ # Link against this library
+ test "$link_static" = no && newdependency_libs="$abs_ladir/$laname $newdependency_libs"
+ # ... and its dependency_libs
+ tmp_libs=
+ for deplib in $dependency_libs; do
+ newdependency_libs="$deplib $newdependency_libs"
+ if test "X$duplicate_deps" = "Xyes" ; then
+ case "$tmp_libs " in
+ *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
+ esac
+ fi
+ tmp_libs="$tmp_libs $deplib"
+ done
+
+ if test "$link_all_deplibs" != no; then
+ # Add the search paths of all dependency libraries
+ for deplib in $dependency_libs; do
+ case $deplib in
+ -L*) path="$deplib" ;;
+ *.la)
+ dir=`$echo "X$deplib" | $Xsed -e 's%/[^/]*$%%'`
+ test "X$dir" = "X$deplib" && dir="."
+ # We need an absolute path.
+ case $dir in
+ [\\/]* | [A-Za-z]:[\\/]*) absdir="$dir" ;;
+ *)
+ absdir=`cd "$dir" && pwd`
+ if test -z "$absdir"; then
+ $echo "$modename: warning: cannot determine absolute directory name of \`$dir'" 1>&2
+ absdir="$dir"
+ fi
+ ;;
+ esac
+ if grep "^installed=no" $deplib > /dev/null; then
+ path="$absdir/$objdir"
+ else
+ eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
+ if test -z "$libdir"; then
+ $echo "$modename: \`$deplib' is not a valid libtool archive" 1>&2
+ exit 1
+ fi
+ if test "$absdir" != "$libdir"; then
+ $echo "$modename: warning: \`$deplib' seems to be moved" 1>&2
+ fi
+ path="$absdir"
+ fi
+ depdepl=
+ case $host in
+ *-*-darwin*)
+ # we do not want to link against static libs, but need to link against shared
+ eval deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
+ if test -n "$deplibrary_names" ; then
+ for tmp in $deplibrary_names ; do
+ depdepl=$tmp
+ done
+ if test -f "$path/$depdepl" ; then
+ depdepl="$path/$depdepl"
+ fi
+ # do not add paths which are already there
+ case " $newlib_search_path " in
+ *" $path "*) ;;
+ *) newlib_search_path="$newlib_search_path $path";;
+ esac
+ fi
+ path=""
+ ;;
+ *)
+ path="-L$path"
+ ;;
+ esac
+
+ ;;
+ -l*)
+ case $host in
+ *-*-darwin*)
+ # Again, we only want to link against shared libraries
+ eval tmp_libs=`$echo "X$deplib" | $Xsed -e "s,^\-l,,"`
+ for tmp in $newlib_search_path ; do
+ if test -f "$tmp/lib$tmp_libs.dylib" ; then
+ eval depdepl="$tmp/lib$tmp_libs.dylib"
+ break
+ fi
+ done
+ path=""
+ ;;
+ *) continue ;;
+ esac
+ ;;
+ *) continue ;;
+ esac
+ case " $deplibs " in
+ *" $depdepl "*) ;;
+ *) deplibs="$deplibs $depdepl" ;;
+ esac
+ case " $deplibs " in
+ *" $path "*) ;;
+ *) deplibs="$deplibs $path" ;;
+ esac
+ done
+ fi # link_all_deplibs != no
+ fi # linkmode = lib
+ done # for deplib in $libs
+ dependency_libs="$newdependency_libs"
+ if test "$pass" = dlpreopen; then
+ # Link the dlpreopened libraries before other libraries
+ for deplib in $save_deplibs; do
+ deplibs="$deplib $deplibs"
+ done
+ fi
+ if test "$pass" != dlopen; then
+ if test "$pass" != conv; then
+ # Make sure lib_search_path contains only unique directories.
+ lib_search_path=
+ for dir in $newlib_search_path; do
+ case "$lib_search_path " in
+ *" $dir "*) ;;
+ *) lib_search_path="$lib_search_path $dir" ;;
+ esac
+ done
+ newlib_search_path=
+ fi
+
+ if test "$linkmode,$pass" != "prog,link"; then
+ vars="deplibs"
+ else
+ vars="compile_deplibs finalize_deplibs"
+ fi
+ for var in $vars dependency_libs; do
+ # Add libraries to $var in reverse order
+ eval tmp_libs=\"\$$var\"
+ new_libs=
+ for deplib in $tmp_libs; do
+ # FIXME: Pedantically, this is the right thing to do, so
+ # that some nasty dependency loop isn't accidentally
+ # broken:
+ #new_libs="$deplib $new_libs"
+ # Pragmatically, this seems to cause very few problems in
+ # practice:
+ case $deplib in
+ -L*) new_libs="$deplib $new_libs" ;;
+ -R*) ;;
+ *)
+ # And here is the reason: when a library appears more
+ # than once as an explicit dependence of a library, or
+ # is implicitly linked in more than once by the
+ # compiler, it is considered special, and multiple
+ # occurrences thereof are not removed. Compare this
+ # with having the same library being listed as a
+ # dependency of multiple other libraries: in this case,
+ # we know (pedantically, we assume) the library does not
+ # need to be listed more than once, so we keep only the
+ # last copy. This is not always right, but it is rare
+ # enough that we require users that really mean to play
+ # such unportable linking tricks to link the library
+ # using -Wl,-lname, so that libtool does not consider it
+ # for duplicate removal.
+ case " $specialdeplibs " in
+ *" $deplib "*) new_libs="$deplib $new_libs" ;;
+ *)
+ case " $new_libs " in
+ *" $deplib "*) ;;
+ *) new_libs="$deplib $new_libs" ;;
+ esac
+ ;;
+ esac
+ ;;
+ esac
+ done
+ tmp_libs=
+ for deplib in $new_libs; do
+ case $deplib in
+ -L*)
+ case " $tmp_libs " in
+ *" $deplib "*) ;;
+ *) tmp_libs="$tmp_libs $deplib" ;;
+ esac
+ ;;
+ *) tmp_libs="$tmp_libs $deplib" ;;
+ esac
+ done
+ eval $var=\"$tmp_libs\"
+ done # for var
+ fi
+ # Last step: remove runtime libs from dependency_libs (they stay in deplibs)
+ tmp_libs=
+ for i in $dependency_libs ; do
+ case " $predeps $postdeps $compiler_lib_search_path " in
+ *" $i "*)
+ i=""
+ ;;
+ esac
+ if test -n "$i" ; then
+ tmp_libs="$tmp_libs $i"
+ fi
+ done
+ dependency_libs=$tmp_libs
+ done # for pass
+ if test "$linkmode" = prog; then
+ dlfiles="$newdlfiles"
+ dlprefiles="$newdlprefiles"
+ fi
+
+ case $linkmode in
+ oldlib)
+ if test -n "$deplibs"; then
+ $echo "$modename: warning: \`-l' and \`-L' are ignored for archives" 1>&2
+ fi
+
+ if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then
+ $echo "$modename: warning: \`-dlopen' is ignored for archives" 1>&2
+ fi
+
+ if test -n "$rpath"; then
+ $echo "$modename: warning: \`-rpath' is ignored for archives" 1>&2
+ fi
+
+ if test -n "$xrpath"; then
+ $echo "$modename: warning: \`-R' is ignored for archives" 1>&2
+ fi
+
+ if test -n "$vinfo"; then
+ $echo "$modename: warning: \`-version-info/-version-number' is ignored for archives" 1>&2
+ fi
+
+ if test -n "$release"; then
+ $echo "$modename: warning: \`-release' is ignored for archives" 1>&2
+ fi
+
+ if test -n "$export_symbols" || test -n "$export_symbols_regex"; then
+ $echo "$modename: warning: \`-export-symbols' is ignored for archives" 1>&2
+ fi
+
+ # Now set the variables for building old libraries.
+ build_libtool_libs=no
+ oldlibs="$output"
+ objs="$objs$old_deplibs"
+ ;;
+
+ lib)
+ # Make sure we only generate libraries of the form `libNAME.la'.
+ case $outputname in
+ lib*)
+ name=`$echo "X$outputname" | $Xsed -e 's/\.la$//' -e 's/^lib//'`
+ eval shared_ext=\"$shrext\"
+ eval libname=\"$libname_spec\"
+ ;;
+ *)
+ if test "$module" = no; then
+ $echo "$modename: libtool library \`$output' must begin with \`lib'" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+ if test "$need_lib_prefix" != no; then
+ # Add the "lib" prefix for modules if required
+ name=`$echo "X$outputname" | $Xsed -e 's/\.la$//'`
+ eval shared_ext=\"$shrext\"
+ eval libname=\"$libname_spec\"
+ else
+ libname=`$echo "X$outputname" | $Xsed -e 's/\.la$//'`
+ fi
+ ;;
+ esac
+
+ if test -n "$objs"; then
+ if test "$deplibs_check_method" != pass_all; then
+ $echo "$modename: cannot build libtool library \`$output' from non-libtool objects on this host:$objs" 2>&1
+ exit 1
+ else
+ $echo
+ $echo "*** Warning: Linking the shared library $output against the non-libtool"
+ $echo "*** objects $objs is not portable!"
+ libobjs="$libobjs $objs"
+ fi
+ fi
+
+ if test "$dlself" != no; then
+ $echo "$modename: warning: \`-dlopen self' is ignored for libtool libraries" 1>&2
+ fi
+
+ set dummy $rpath
+ if test "$#" -gt 2; then
+ $echo "$modename: warning: ignoring multiple \`-rpath's for a libtool library" 1>&2
+ fi
+ install_libdir="$2"
+
+ oldlibs=
+ if test -z "$rpath"; then
+ if test "$build_libtool_libs" = yes; then
+ # Building a libtool convenience library.
+ # Some compilers have problems with a `.al' extension so
+ # convenience libraries should have the same extension that an
+ # archive normally would.
+ oldlibs="$output_objdir/$libname.$libext $oldlibs"
+ build_libtool_libs=convenience
+ build_old_libs=yes
+ fi
+
+ if test -n "$vinfo"; then
+ $echo "$modename: warning: \`-version-info/-version-number' is ignored for convenience libraries" 1>&2
+ fi
+
+ if test -n "$release"; then
+ $echo "$modename: warning: \`-release' is ignored for convenience libraries" 1>&2
+ fi
+ else
+
+ # Parse the version information argument.
+ save_ifs="$IFS"; IFS=':'
+ set dummy $vinfo 0 0 0
+ IFS="$save_ifs"
+
+ if test -n "$8"; then
+ $echo "$modename: too many parameters to \`-version-info'" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ # convert absolute version numbers to libtool ages
+ # this retains compatibility with .la files and attempts
+ # to make the code below a bit more comprehensible
+
+ case $vinfo_number in
+ yes)
+ number_major="$2"
+ number_minor="$3"
+ number_revision="$4"
+ #
+ # There are really only two kinds -- those that
+ # use the current revision as the major version
+ # and those that subtract age and use age as
+ # a minor version. But, then there is irix
+ # which has an extra 1 added just for fun
+ #
+ case $version_type in
+ darwin|linux|osf|windows)
+ current=`expr $number_major + $number_minor`
+ age="$number_minor"
+ revision="$number_revision"
+ ;;
+ freebsd-aout|freebsd-elf|sunos)
+ current="$number_major"
+ revision="$number_minor"
+ age="0"
+ ;;
+ irix|nonstopux)
+ current=`expr $number_major + $number_minor - 1`
+ age="$number_minor"
+ revision="$number_minor"
+ ;;
+ esac
+ ;;
+ no)
+ current="$2"
+ revision="$3"
+ age="$4"
+ ;;
+ esac
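+ # Worked example (editorial, not part of the script): on a linux-style
+ # host, `-version-number 3:2:1' yields current=5 (3+2), age=2 and
+ # revision=1 -- the same values `-version-info 5:1:2' sets directly.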
+
+ # Check that each of the things are valid numbers.
+ case $current in
+ 0 | [1-9] | [1-9][0-9] | [1-9][0-9][0-9]) ;;
+ *)
+ $echo "$modename: CURRENT \`$current' is not a nonnegative integer" 1>&2
+ $echo "$modename: \`$vinfo' is not valid version information" 1>&2
+ exit 1
+ ;;
+ esac
+
+ case $revision in
+ 0 | [1-9] | [1-9][0-9] | [1-9][0-9][0-9]) ;;
+ *)
+ $echo "$modename: REVISION \`$revision' is not a nonnegative integer" 1>&2
+ $echo "$modename: \`$vinfo' is not valid version information" 1>&2
+ exit 1
+ ;;
+ esac
+
+ case $age in
+ 0 | [1-9] | [1-9][0-9] | [1-9][0-9][0-9]) ;;
+ *)
+ $echo "$modename: AGE \`$age' is not a nonnegative integer" 1>&2
+ $echo "$modename: \`$vinfo' is not valid version information" 1>&2
+ exit 1
+ ;;
+ esac
+
+ if test "$age" -gt "$current"; then
+ $echo "$modename: AGE \`$age' is greater than the current interface number \`$current'" 1>&2
+ $echo "$modename: \`$vinfo' is not valid version information" 1>&2
+ exit 1
+ fi
+
+ # Calculate the version variables.
+ major=
+ versuffix=
+ verstring=
+ case $version_type in
+ none) ;;
+
+ darwin)
+ # Like Linux, but with the current version available in
+ # verstring for coding it into the library header
+ major=.`expr $current - $age`
+ versuffix="$major.$age.$revision"
+ # Darwin ld doesn't like 0 for these options...
+ minor_current=`expr $current + 1`
+ verstring="-compatibility_version $minor_current -current_version $minor_current.$revision"
+ ;;
+
+ freebsd-aout)
+ major=".$current"
+ versuffix=".$current.$revision";
+ ;;
+
+ freebsd-elf)
+ major=".$current"
+ versuffix=".$current";
+ ;;
+
+ irix | nonstopux)
+ major=`expr $current - $age + 1`
+
+ case $version_type in
+ nonstopux) verstring_prefix=nonstopux ;;
+ *) verstring_prefix=sgi ;;
+ esac
+ verstring="$verstring_prefix$major.$revision"
+
+ # Add in all the interfaces that we are compatible with.
+ loop=$revision
+ while test "$loop" -ne 0; do
+ iface=`expr $revision - $loop`
+ loop=`expr $loop - 1`
+ verstring="$verstring_prefix$major.$iface:$verstring"
+ done
+
+ # Before this point, $major must not contain `.'.
+ major=.$major
+ versuffix="$major.$revision"
+ ;;
+
+ linux)
+ major=.`expr $current - $age`
+ versuffix="$major.$age.$revision"
+ ;;
+
+ osf)
+ major=.`expr $current - $age`
+ versuffix=".$current.$age.$revision"
+ verstring="$current.$age.$revision"
+
+ # Add in all the interfaces that we are compatible with.
+ loop=$age
+ while test "$loop" -ne 0; do
+ iface=`expr $current - $loop`
+ loop=`expr $loop - 1`
+ verstring="$verstring:${iface}.0"
+ done
+
+ # Make executables depend on our current version.
+ verstring="$verstring:${current}.0"
+ ;;
+
+ sunos)
+ major=".$current"
+ versuffix=".$current.$revision"
+ ;;
+
+ windows)
+ # Use '-' rather than '.', since we only want one
+ # extension on DOS 8.3 filesystems.
+ major=`expr $current - $age`
+ versuffix="-$major"
+ ;;
+
+ *)
+ $echo "$modename: unknown library version type \`$version_type'" 1>&2
+ $echo "Fatal configuration error. See the $PACKAGE docs for more information." 1>&2
+ exit 1
+ ;;
+ esac
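+ # Worked example (editorial): continuing with current=5, age=2 and
+ # revision=1 under the `linux' scheme above, major becomes `.3' (5 - 2)
+ # and versuffix becomes `.3.2.1', i.e. the library is libNAME.so.3.2.1.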
+
+ # Clear the version info if we defaulted, and they specified a release.
+ if test -z "$vinfo" && test -n "$release"; then
+ major=
+ case $version_type in
+ darwin)
+ # we can't check for "0.0" in archive_cmds due to quoting
+ # problems, so we reset it completely
+ verstring=
+ ;;
+ *)
+ verstring="0.0"
+ ;;
+ esac
+ if test "$need_version" = no; then
+ versuffix=
+ else
+ versuffix=".0.0"
+ fi
+ fi
+
+ # Remove version info from name if versioning should be avoided
+ if test "$avoid_version" = yes && test "$need_version" = no; then
+ major=
+ versuffix=
+ verstring=""
+ fi
+
+ # Check to see if the archive will have undefined symbols.
+ if test "$allow_undefined" = yes; then
+ if test "$allow_undefined_flag" = unsupported; then
+ $echo "$modename: warning: undefined symbols not allowed in $host shared libraries" 1>&2
+ build_libtool_libs=no
+ build_old_libs=yes
+ fi
+ else
+ # Don't allow undefined symbols.
+ allow_undefined_flag="$no_undefined_flag"
+ fi
+ fi
+
+ if test "$mode" != relink; then
+ # Remove our outputs, but don't remove object files since they
+ # may have been created when compiling PIC objects.
+ removelist=
+ tempremovelist=`$echo "$output_objdir/*"`
+ for p in $tempremovelist; do
+ case $p in
+ *.$objext)
+ ;;
+ $output_objdir/$outputname | $output_objdir/$libname.* | $output_objdir/${libname}${release}.*)
+ if echo $p | $EGREP -e "$precious_files_regex" >/dev/null 2>&1
+ then
+ continue
+ fi
+ removelist="$removelist $p"
+ ;;
+ *) ;;
+ esac
+ done
+ if test -n "$removelist"; then
+ $show "${rm}r $removelist"
+ $run ${rm}r $removelist
+ fi
+ fi
+
+ # Now set the variables for building old libraries.
+ if test "$build_old_libs" = yes && test "$build_libtool_libs" != convenience ; then
+ oldlibs="$oldlibs $output_objdir/$libname.$libext"
+
+ # Transform .lo files to .o files.
+ oldobjs="$objs "`$echo "X$libobjs" | $SP2NL | $Xsed -e '/\.'${libext}'$/d' -e "$lo2o" | $NL2SP`
+ fi
+
+ # Eliminate all temporary directories.
+ for path in $notinst_path; do
+ lib_search_path=`$echo "$lib_search_path " | ${SED} -e "s% $path % %g"`
+ deplibs=`$echo "$deplibs " | ${SED} -e "s% -L$path % %g"`
+ dependency_libs=`$echo "$dependency_libs " | ${SED} -e "s% -L$path % %g"`
+ done
+
+ if test -n "$xrpath"; then
+ # If the user specified any rpath flags, then add them.
+ temp_xrpath=
+ for libdir in $xrpath; do
+ temp_xrpath="$temp_xrpath -R$libdir"
+ case "$finalize_rpath " in
+ *" $libdir "*) ;;
+ *) finalize_rpath="$finalize_rpath $libdir" ;;
+ esac
+ done
+ if test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes; then
+ dependency_libs="$temp_xrpath $dependency_libs"
+ fi
+ fi
+
+ # Make sure dlfiles contains only unique files that won't be dlpreopened
+ old_dlfiles="$dlfiles"
+ dlfiles=
+ for lib in $old_dlfiles; do
+ case " $dlprefiles $dlfiles " in
+ *" $lib "*) ;;
+ *) dlfiles="$dlfiles $lib" ;;
+ esac
+ done
+
+ # Make sure dlprefiles contains only unique files
+ old_dlprefiles="$dlprefiles"
+ dlprefiles=
+ for lib in $old_dlprefiles; do
+ case "$dlprefiles " in
+ *" $lib "*) ;;
+ *) dlprefiles="$dlprefiles $lib" ;;
+ esac
+ done
+
+ if test "$build_libtool_libs" = yes; then
+ if test -n "$rpath"; then
+ case $host in
+ *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-beos*)
+ # these systems don't actually have a c library (as such)!
+ ;;
+ *-*-rhapsody* | *-*-darwin1.[012])
+ # Rhapsody C library is in the System framework
+ deplibs="$deplibs -framework System"
+ ;;
+ *-*-netbsd*)
+ # Don't link with libc until the a.out ld.so is fixed.
+ ;;
+ *-*-openbsd* | *-*-freebsd*)
+ # Do not include libc due to us having libc/libc_r.
+ test "X$arg" = "X-lc" && continue
+ ;;
+ *)
+ # Add libc to deplibs on all other systems if necessary.
+ if test "$build_libtool_need_lc" = "yes"; then
+ deplibs="$deplibs -lc"
+ fi
+ ;;
+ esac
+ fi
+
+ # Transform deplibs into only deplibs that can be linked in shared.
+ name_save=$name
+ libname_save=$libname
+ release_save=$release
+ versuffix_save=$versuffix
+ major_save=$major
+ # I'm not sure if I'm treating the release correctly. I think
+ # release should show up in the -l (ie -lgmp5) so we don't want to
+ # add it in twice. Is that correct?
+ release=""
+ versuffix=""
+ major=""
+ newdeplibs=
+ droppeddeps=no
+ case $deplibs_check_method in
+ pass_all)
+ # Don't check for shared/static. Everything works.
+ # This might be a little naive. We might want to check
+ # whether the library exists or not. But this is on
+ # osf3 & osf4 and I'm not really sure... Just
+ # implementing what was already the behavior.
+ newdeplibs=$deplibs
+ ;;
+ test_compile)
+ # This code stresses the "libraries are programs" paradigm to its
+ # limits. Maybe even breaks it. We compile a program, linking it
+ # against the deplibs as a proxy for the library. Then we can check
+ # whether they linked in statically or dynamically with ldd.
+ $rm conftest.c
+ cat > conftest.c <<EOF
+ int main() { return 0; }
+EOF
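+ # Editorial note (illustrative): if $deplibs contained `-lm', the ldd
+ # output of the conftest binary built below would list something like
+ # libm.so.6; the expr match against $deplib_match then keeps -lm in
+ # $newdeplibs, while libraries ldd never reports are dropped with a warning.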
+ $rm conftest
+ $LTCC -o conftest conftest.c $deplibs
+ if test "$?" -eq 0 ; then
+ ldd_output=`ldd conftest`
+ for i in $deplibs; do
+ name="`expr $i : '-l\(.*\)'`"
+ # If $name is empty we are operating on a -L argument.
+ if test "$name" != "" && test "$name" -ne "0"; then
+ if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ case " $predeps $postdeps " in
+ *" $i "*)
+ newdeplibs="$newdeplibs $i"
+ i=""
+ ;;
+ esac
+ fi
+ if test -n "$i" ; then
+ libname=`eval \\$echo \"$libname_spec\"`
+ deplib_matches=`eval \\$echo \"$library_names_spec\"`
+ set dummy $deplib_matches
+ deplib_match=$2
+ if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
+ newdeplibs="$newdeplibs $i"
+ else
+ droppeddeps=yes
+ $echo
+ $echo "*** Warning: dynamic linker does not accept needed library $i."
+ $echo "*** I have the capability to make that library automatically link in when"
+ $echo "*** you link to this library. But I can only do this if you have a"
+ $echo "*** shared version of the library, which I believe you do not have"
+ $echo "*** because a test_compile did reveal that the linker did not use it for"
+ $echo "*** its dynamic dependency list that programs get resolved with at runtime."
+ fi
+ fi
+ else
+ newdeplibs="$newdeplibs $i"
+ fi
+ done
+ else
+ # Error occurred in the first compile. Let's try to salvage
+ # the situation: Compile a separate program for each library.
+ for i in $deplibs; do
+ name="`expr $i : '-l\(.*\)'`"
+ # If $name is empty we are operating on a -L argument.
+ if test "$name" != "" && test "$name" != "0"; then
+ $rm conftest
+ $LTCC -o conftest conftest.c $i
+ # Did it work?
+ if test "$?" -eq 0 ; then
+ ldd_output=`ldd conftest`
+ if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ case " $predeps $postdeps " in
+ *" $i "*)
+ newdeplibs="$newdeplibs $i"
+ i=""
+ ;;
+ esac
+ fi
+ if test -n "$i" ; then
+ libname=`eval \\$echo \"$libname_spec\"`
+ deplib_matches=`eval \\$echo \"$library_names_spec\"`
+ set dummy $deplib_matches
+ deplib_match=$2
+ if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
+ newdeplibs="$newdeplibs $i"
+ else
+ droppeddeps=yes
+ $echo
+ $echo "*** Warning: dynamic linker does not accept needed library $i."
+ $echo "*** I have the capability to make that library automatically link in when"
+ $echo "*** you link to this library. But I can only do this if you have a"
+ $echo "*** shared version of the library, which you do not appear to have"
+ $echo "*** because a test_compile did reveal that the linker did not use this one"
+ $echo "*** as a dynamic dependency that programs can get resolved with at runtime."
+ fi
+ fi
+ else
+ droppeddeps=yes
+ $echo
+ $echo "*** Warning! Library $i is needed by this library but I was not able to"
+ $echo "*** make it link in! You will probably need to install it or some"
+ $echo "*** library that it depends on before this library will be fully"
+ $echo "*** functional. Installing it before continuing would be even better."
+ fi
+ else
+ newdeplibs="$newdeplibs $i"
+ fi
+ done
+ fi
+ ;;
+ file_magic*)
+ set dummy $deplibs_check_method
+ file_magic_regex=`expr "$deplibs_check_method" : "$2 \(.*\)"`
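+ # Editorial example (assumption): with deplibs_check_method set to, say,
+ # `file_magic ELF [0-9][0-9]*-bit [LM]SB shared object', the expr above
+ # leaves everything after the first word in file_magic_regex, and each
+ # candidate library's $file_magic_cmd output is grepped for it below.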
+ for a_deplib in $deplibs; do
+ name="`expr $a_deplib : '-l\(.*\)'`"
+ # If $name is empty we are operating on a -L argument.
+ if test "$name" != "" && test "$name" != "0"; then
+ if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ case " $predeps $postdeps " in
+ *" $a_deplib "*)
+ newdeplibs="$newdeplibs $a_deplib"
+ a_deplib=""
+ ;;
+ esac
+ fi
+ if test -n "$a_deplib" ; then
+ libname=`eval \\$echo \"$libname_spec\"`
+ for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
+ potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
+ for potent_lib in $potential_libs; do
+ # Follow soft links.
+ if ls -lLd "$potent_lib" 2>/dev/null \
+ | grep " -> " >/dev/null; then
+ continue
+ fi
+ # The statement above tries to avoid entering an
+ # endless loop below, in case of cyclic links.
+ # We might still enter an endless loop, since a link
+ # loop can be closed while we follow links,
+ # but so what?
+ potlib="$potent_lib"
+ while test -h "$potlib" 2>/dev/null; do
+ potliblink=`ls -ld $potlib | ${SED} 's/.* -> //'`
+ case $potliblink in
+ [\\/]* | [A-Za-z]:[\\/]*) potlib="$potliblink";;
+ *) potlib=`$echo "X$potlib" | $Xsed -e 's,[^/]*$,,'`"$potliblink";;
+ esac
+ done
+ if eval $file_magic_cmd \"\$potlib\" 2>/dev/null \
+ | ${SED} 10q \
+ | $EGREP "$file_magic_regex" > /dev/null; then
+ newdeplibs="$newdeplibs $a_deplib"
+ a_deplib=""
+ break 2
+ fi
+ done
+ done
+ fi
+ if test -n "$a_deplib" ; then
+ droppeddeps=yes
+ $echo
+ $echo "*** Warning: linker path does not have real file for library $a_deplib."
+ $echo "*** I have the capability to make that library automatically link in when"
+ $echo "*** you link to this library. But I can only do this if you have a"
+ $echo "*** shared version of the library, which you do not appear to have"
+ $echo "*** because I did check the linker path looking for a file starting"
+ if test -z "$potlib" ; then
+ $echo "*** with $libname but no candidates were found. (...for file magic test)"
+ else
+ $echo "*** with $libname and none of the candidates passed a file format test"
+ $echo "*** using a file magic. Last file checked: $potlib"
+ fi
+ fi
+ else
+ # Add a -L argument.
+ newdeplibs="$newdeplibs $a_deplib"
+ fi
+ done # Gone through all deplibs.
+ ;;
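
The file_magic branch has to chase symlinks before it can run $file_magic_cmd on the real file, which is what the inner while loop above does. A reduced sketch of that link-following loop (the starting path is hypothetical, and like the original it does not guard against cyclic links):

    potlib=/usr/lib/libfoo.so                      # hypothetical starting point
    while test -h "$potlib"; do
      target=`ls -ld "$potlib" | sed 's/.* -> //'`
      case $target in
        /*) potlib=$target ;;                      # absolute link target
        *)  potlib=`dirname "$potlib"`/$target ;;  # relative link target
      esac
    done
    echo "real file: $potlib"
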
+ match_pattern*)
+ set dummy $deplibs_check_method
+ match_pattern_regex=`expr "$deplibs_check_method" : "$2 \(.*\)"`
+ for a_deplib in $deplibs; do
+ name="`expr $a_deplib : '-l\(.*\)'`"
+ # If $name is empty we are operating on a -L argument.
+ if test -n "$name" && test "$name" != "0"; then
+ if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ case " $predeps $postdeps " in
+ *" $a_deplib "*)
+ newdeplibs="$newdeplibs $a_deplib"
+ a_deplib=""
+ ;;
+ esac
+ fi
+ if test -n "$a_deplib" ; then
+ libname=`eval \\$echo \"$libname_spec\"`
+ for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
+ potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
+ for potent_lib in $potential_libs; do
+ potlib="$potent_lib" # see symlink-check above in file_magic test
+ if eval $echo \"$potent_lib\" 2>/dev/null \
+ | ${SED} 10q \
+ | $EGREP "$match_pattern_regex" > /dev/null; then
+ newdeplibs="$newdeplibs $a_deplib"
+ a_deplib=""
+ break 2
+ fi
+ done
+ done
+ fi
+ if test -n "$a_deplib" ; then
+ droppeddeps=yes
+ $echo
+ $echo "*** Warning: linker path does not have real file for library $a_deplib."
+ $echo "*** I have the capability to make that library automatically link in when"
+ $echo "*** you link to this library. But I can only do this if you have a"
+ $echo "*** shared version of the library, which you do not appear to have"
+ $echo "*** because I did check the linker path looking for a file starting"
+ if test -z "$potlib" ; then
+ $echo "*** with $libname but no candidates were found. (...for regex pattern test)"
+ else
+ $echo "*** with $libname and none of the candidates passed a file format test"
+ $echo "*** using a regex pattern. Last file checked: $potlib"
+ fi
+ fi
+ else
+ # Add a -L argument.
+ newdeplibs="$newdeplibs $a_deplib"
+ fi
+ done # Gone through all deplibs.
+ ;;
+ none | unknown | *)
+ newdeplibs=""
+ tmp_deplibs=`$echo "X $deplibs" | $Xsed -e 's/ -lc$//' \
+ -e 's/ -[LR][^ ]*//g'`
+ if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ for i in $predeps $postdeps ; do
+ # can't use Xsed below, because $i might contain '/'
+ tmp_deplibs=`$echo "X $tmp_deplibs" | ${SED} -e "1s,^X,," -e "s,$i,,"`
+ done
+ fi
+ if $echo "X $tmp_deplibs" | $Xsed -e 's/[ ]//g' \
+ | grep . >/dev/null; then
+ $echo
+ if test "X$deplibs_check_method" = "Xnone"; then
+ $echo "*** Warning: inter-library dependencies are not supported on this platform."
+ else
+ $echo "*** Warning: inter-library dependencies are not known to be supported."
+ fi
+ $echo "*** All declared inter-library dependencies are being dropped."
+ droppeddeps=yes
+ fi
+ ;;
+ esac
+ versuffix=$versuffix_save
+ major=$major_save
+ release=$release_save
+ libname=$libname_save
+ name=$name_save
+
+ case $host in
+ *-*-rhapsody* | *-*-darwin1.[012])
+ # On Rhapsody, replace the C library with the System framework
+ newdeplibs=`$echo "X $newdeplibs" | $Xsed -e 's/ -lc / -framework System /'`
+ ;;
+ esac
+
+ if test "$droppeddeps" = yes; then
+ if test "$module" = yes; then
+ $echo
+ $echo "*** Warning: libtool could not satisfy all declared inter-library"
+ $echo "*** dependencies of module $libname. Therefore, libtool will create"
+ $echo "*** a static module, which should work as long as the dlopening"
+ $echo "*** application is linked with the -dlopen flag."
+ if test -z "$global_symbol_pipe"; then
+ $echo
+ $echo "*** However, this would only work if libtool was able to extract symbol"
+ $echo "*** lists from a program, using \`nm' or equivalent, but libtool could"
+ $echo "*** not find such a program. So, this module is probably useless."
+ $echo "*** \`nm' from GNU binutils and a full rebuild may help."
+ fi
+ if test "$build_old_libs" = no; then
+ oldlibs="$output_objdir/$libname.$libext"
+ build_libtool_libs=module
+ build_old_libs=yes
+ else
+ build_libtool_libs=no
+ fi
+ else
+ $echo "*** The inter-library dependencies that have been dropped here will be"
+ $echo "*** automatically added whenever a program is linked with this library"
+ $echo "*** or is declared to -dlopen it."
+
+ if test "$allow_undefined" = no; then
+ $echo
+ $echo "*** Since this library must not contain undefined symbols,"
+ $echo "*** because either the platform does not support them or"
+ $echo "*** it was explicitly requested with -no-undefined,"
+ $echo "*** libtool will only create a static version of it."
+ if test "$build_old_libs" = no; then
+ oldlibs="$output_objdir/$libname.$libext"
+ build_libtool_libs=module
+ build_old_libs=yes
+ else
+ build_libtool_libs=no
+ fi
+ fi
+ fi
+ fi
+ # Done checking deplibs!
+ deplibs=$newdeplibs
+ fi
+
+ # All the library-specific variables (install_libdir is set above).
+ library_names=
+ old_library=
+ dlname=
+
+ # Test again; we may have decided not to build it any more
+ if test "$build_libtool_libs" = yes; then
+ if test "$hardcode_into_libs" = yes; then
+ # Hardcode the library paths
+ hardcode_libdirs=
+ dep_rpath=
+ rpath="$finalize_rpath"
+ test "$mode" != relink && rpath="$compile_rpath$rpath"
+ for libdir in $rpath; do
+ if test -n "$hardcode_libdir_flag_spec"; then
+ if test -n "$hardcode_libdir_separator"; then
+ if test -z "$hardcode_libdirs"; then
+ hardcode_libdirs="$libdir"
+ else
+ # Just accumulate the unique libdirs.
+ case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in
+ *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
+ ;;
+ *)
+ hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
+ ;;
+ esac
+ fi
+ else
+ eval flag=\"$hardcode_libdir_flag_spec\"
+ dep_rpath="$dep_rpath $flag"
+ fi
+ elif test -n "$runpath_var"; then
+ case "$perm_rpath " in
+ *" $libdir "*) ;;
+ *) perm_rpath="$perm_rpath $libdir" ;;
+ esac
+ fi
+ done
+ # Substitute the hardcoded libdirs into the rpath.
+ if test -n "$hardcode_libdir_separator" &&
+ test -n "$hardcode_libdirs"; then
+ libdir="$hardcode_libdirs"
+ if test -n "$hardcode_libdir_flag_spec_ld"; then
+ eval dep_rpath=\"$hardcode_libdir_flag_spec_ld\"
+ else
+ eval dep_rpath=\"$hardcode_libdir_flag_spec\"
+ fi
+ fi
+ if test -n "$runpath_var" && test -n "$perm_rpath"; then
+ # We should set the runpath_var.
+ rpath=
+ for dir in $perm_rpath; do
+ rpath="$rpath$dir:"
+ done
+ eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var"
+ fi
+ test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs"
+ fi
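
The rpath-hardcoding loop above accumulates each libdir only once and then expands a flag spec for the result. A small sketch of the same dedup-then-emit pattern, using -Wl,-rpath as a stand-in for hardcode_libdir_flag_spec (the directories are hypothetical):

    rpath_dirs=""
    for libdir in /usr/local/lib /opt/foo/lib /usr/local/lib; do   # hypothetical
      case ":$rpath_dirs:" in
        *":$libdir:"*) ;;                                          # already seen
        *) rpath_dirs="${rpath_dirs:+$rpath_dirs:}$libdir" ;;
      esac
    done
    flags=""
    save_IFS=$IFS; IFS=:
    for d in $rpath_dirs; do flags="$flags -Wl,-rpath,$d"; done
    IFS=$save_IFS
    echo "linker flags:$flags"
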
+
+ shlibpath="$finalize_shlibpath"
+ test "$mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
+ if test -n "$shlibpath"; then
+ eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var"
+ fi
+
+ # Get the real and link names of the library.
+ eval shared_ext=\"$shrext\"
+ eval library_names=\"$library_names_spec\"
+ set dummy $library_names
+ realname="$2"
+ shift; shift
+
+ if test -n "$soname_spec"; then
+ eval soname=\"$soname_spec\"
+ else
+ soname="$realname"
+ fi
+ if test -z "$dlname"; then
+ dlname=$soname
+ fi
+
+ lib="$output_objdir/$realname"
+ for link
+ do
+ linknames="$linknames $link"
+ done
+
+ # Use standard objects if they are pic
+ test -z "$pic_flag" && libobjs=`$echo "X$libobjs" | $SP2NL | $Xsed -e "$lo2o" | $NL2SP`
+
+ # Prepare the list of exported symbols
+ if test -z "$export_symbols"; then
+ if test "$always_export_symbols" = yes || test -n "$export_symbols_regex"; then
+ $show "generating symbol list for \`$libname.la'"
+ export_symbols="$output_objdir/$libname.exp"
+ $run $rm $export_symbols
+ cmds=$export_symbols_cmds
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ if len=`expr "X$cmd" : ".*"` &&
+ test "$len" -le "$max_cmd_len" || test "$max_cmd_len" -le -1; then
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ skipped_export=false
+ else
+ # The command line is too long to execute in one step.
+ $show "using reloadable object file for export list..."
+ skipped_export=:
+ fi
+ done
+ IFS="$save_ifs"
+ if test -n "$export_symbols_regex"; then
+ $show "$EGREP -e \"$export_symbols_regex\" \"$export_symbols\" > \"${export_symbols}T\""
+ $run eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"'
+ $show "$mv \"${export_symbols}T\" \"$export_symbols\""
+ $run eval '$mv "${export_symbols}T" "$export_symbols"'
+ fi
+ fi
+ fi
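
export_symbols_cmds is configured per platform, but on ELF-style systems it usually boils down to piping nm output through a filter, as the symbol-list generation above suggests. A rough sketch, assuming GNU nm's "address type name" output and hypothetical objects foo.o/bar.o whose public API uses a foo_ prefix:

    # Collect the globally defined symbols from the objects
    nm foo.o bar.o | awk '$2 ~ /^[TDBRSW]$/ { print $3 }' | sort -u > libfoo.exp
    # Keep only the public API, the role export_symbols_regex plays above
    grep '^foo_' libfoo.exp > libfoo.expT && mv libfoo.expT libfoo.exp
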
+
+ if test -n "$export_symbols" && test -n "$include_expsyms"; then
+ $run eval '$echo "X$include_expsyms" | $SP2NL >> "$export_symbols"'
+ fi
+
+ tmp_deplibs=
+ for test_deplib in $deplibs; do
+ case " $convenience " in
+ *" $test_deplib "*) ;;
+ *)
+ tmp_deplibs="$tmp_deplibs $test_deplib"
+ ;;
+ esac
+ done
+ deplibs="$tmp_deplibs"
+
+ if test -n "$convenience"; then
+ if test -n "$whole_archive_flag_spec"; then
+ save_libobjs=$libobjs
+ eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
+ else
+ gentop="$output_objdir/${outputname}x"
+ $show "${rm}r $gentop"
+ $run ${rm}r "$gentop"
+ $show "$mkdir $gentop"
+ $run $mkdir "$gentop"
+ status=$?
+ if test "$status" -ne 0 && test ! -d "$gentop"; then
+ exit $status
+ fi
+ generated="$generated $gentop"
+
+ for xlib in $convenience; do
+ # Extract the objects.
+ case $xlib in
+ [\\/]* | [A-Za-z]:[\\/]*) xabs="$xlib" ;;
+ *) xabs=`pwd`"/$xlib" ;;
+ esac
+ xlib=`$echo "X$xlib" | $Xsed -e 's%^.*/%%'`
+ xdir="$gentop/$xlib"
+
+ $show "${rm}r $xdir"
+ $run ${rm}r "$xdir"
+ $show "$mkdir $xdir"
+ $run $mkdir "$xdir"
+ status=$?
+ if test "$status" -ne 0 && test ! -d "$xdir"; then
+ exit $status
+ fi
+ # We will extract separately just the conflicting names and we will no
+ # longer touch any unique names. It is faster to let $AR extract the
+ # unique ones automatically in one run.
+ $show "(cd $xdir && $AR x $xabs)"
+ $run eval "(cd \$xdir && $AR x \$xabs)" || exit $?
+ if ($AR t "$xabs" | sort | sort -uc >/dev/null 2>&1); then
+ :
+ else
+ $echo "$modename: warning: object name conflicts; renaming object files" 1>&2
+ $echo "$modename: warning: to ensure that they will not overwrite each other" 1>&2
+ $AR t "$xabs" | sort | uniq -cd | while read -r count name
+ do
+ i=1
+ while test "$i" -le "$count"
+ do
+ # Put our $i before any first dot (extension)
+ # Never overwrite any file
+ name_to="$name"
+ while test "X$name_to" = "X$name" || test -f "$xdir/$name_to"
+ do
+ name_to=`$echo "X$name_to" | $Xsed -e "s/\([^.]*\)/\1-$i/"`
+ done
+ $show "(cd $xdir && $AR xN $i $xabs '$name' && $mv '$name' '$name_to')"
+ $run eval "(cd \$xdir && $AR xN $i \$xabs '$name' && $mv '$name' '$name_to')" || exit $?
+ i=`expr $i + 1`
+ done
+ done
+ fi
+
+ libobjs="$libobjs "`find $xdir -name \*.$objext -print -o -name \*.lo -print | $NL2SP`
+ done
+ fi
+ fi
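
The renaming dance above deals with archives that contain several members sharing a basename: only the duplicated names are re-extracted one instance at a time with GNU ar's xN modifier and renamed as they come out. A compact sketch against a hypothetical libconv.a:

    ar t libconv.a | sort | uniq -d | while read dupname; do
      count=`ar t libconv.a | grep -c "^$dupname\$"`
      i=1
      while test "$i" -le "$count"; do
        ar xN "$i" libconv.a "$dupname"      # GNU ar: extract the i-th copy
        mv "$dupname" "$i-$dupname"          # keep it from being overwritten
        i=`expr $i + 1`
      done
    done
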
+
+ if test "$thread_safe" = yes && test -n "$thread_safe_flag_spec"; then
+ eval flag=\"$thread_safe_flag_spec\"
+ linker_flags="$linker_flags $flag"
+ fi
+
+ # Make a backup of the uninstalled library when relinking
+ if test "$mode" = relink; then
+ $run eval '(cd $output_objdir && $rm ${realname}U && $mv $realname ${realname}U)' || exit $?
+ fi
+
+ # Do each of the archive commands.
+ if test "$module" = yes && test -n "$module_cmds" ; then
+ if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
+ eval test_cmds=\"$module_expsym_cmds\"
+ cmds=$module_expsym_cmds
+ else
+ eval test_cmds=\"$module_cmds\"
+ cmds=$module_cmds
+ fi
+ else
+ if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
+ eval test_cmds=\"$archive_expsym_cmds\"
+ cmds=$archive_expsym_cmds
+ else
+ eval test_cmds=\"$archive_cmds\"
+ cmds=$archive_cmds
+ fi
+ fi
+
+ if test "X$skipped_export" != "X:" && len=`expr "X$test_cmds" : ".*"` &&
+ test "$len" -le "$max_cmd_len" || test "$max_cmd_len" -le -1; then
+ :
+ else
+ # The command line is too long to link in one step; link piecewise.
+ $echo "creating reloadable object files..."
+
+ # Save the value of $output and $libobjs because we want to
+ # use them later. If we have whole_archive_flag_spec, we
+ # want to use save_libobjs as it was before
+ # whole_archive_flag_spec was expanded, because we can't
+ # assume the linker understands whole_archive_flag_spec.
+ # This may have to be revisited, in case too many
+ # convenience libraries get linked in and end up exceeding
+ # the spec.
+ if test -z "$convenience" || test -z "$whole_archive_flag_spec"; then
+ save_libobjs=$libobjs
+ fi
+ save_output=$output
+
+ # Clear the reloadable object creation command queue and
+ # initialize k to one.
+ test_cmds=
+ concat_cmds=
+ objlist=
+ delfiles=
+ last_robj=
+ k=1
+ output=$output_objdir/$save_output-${k}.$objext
+ # Loop over the list of objects to be linked.
+ for obj in $save_libobjs
+ do
+ eval test_cmds=\"$reload_cmds $objlist $last_robj\"
+ if test "X$objlist" = X ||
+ { len=`expr "X$test_cmds" : ".*"` &&
+ test "$len" -le "$max_cmd_len"; }; then
+ objlist="$objlist $obj"
+ else
+ # The command $test_cmds is almost too long, add a
+ # command to the queue.
+ if test "$k" -eq 1 ; then
+ # The first file doesn't have a previous command to add.
+ eval concat_cmds=\"$reload_cmds $objlist $last_robj\"
+ else
+ # All subsequent reloadable object files will link in
+ # the last one created.
+ eval concat_cmds=\"\$concat_cmds~$reload_cmds $objlist $last_robj\"
+ fi
+ last_robj=$output_objdir/$save_output-${k}.$objext
+ k=`expr $k + 1`
+ output=$output_objdir/$save_output-${k}.$objext
+ objlist=$obj
+ len=1
+ fi
+ done
+ # Handle the remaining objects by creating one last
+ # reloadable object file. All subsequent reloadable object
+ # files will link in the last one created.
+ test -z "$concat_cmds" || concat_cmds=$concat_cmds~
+ eval concat_cmds=\"\${concat_cmds}$reload_cmds $objlist $last_robj\"
+
+ if ${skipped_export-false}; then
+ $show "generating symbol list for \`$libname.la'"
+ export_symbols="$output_objdir/$libname.exp"
+ $run $rm $export_symbols
+ libobjs=$output
+ # Append the command to create the export file.
+ eval concat_cmds=\"\$concat_cmds~$export_symbols_cmds\"
+ fi
+
+ # Set up a command to remove the reloadable object files
+ # after they are used.
+ i=0
+ while test "$i" -lt "$k"
+ do
+ i=`expr $i + 1`
+ delfiles="$delfiles $output_objdir/$save_output-${i}.$objext"
+ done
+
+ $echo "creating a temporary reloadable object file: $output"
+
+ # Loop through the commands generated above and execute them.
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $concat_cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ done
+ IFS="$save_ifs"
+
+ libobjs=$output
+ # Restore the value of output.
+ output=$save_output
+
+ if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then
+ eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
+ fi
+ # Expand the library linking commands again to reset the
+ # value of $libobjs for piecewise linking.
+
+ # Do each of the archive commands.
+ if test "$module" = yes && test -n "$module_cmds" ; then
+ if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
+ cmds=$module_expsym_cmds
+ else
+ cmds=$module_cmds
+ fi
+ else
+ if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
+ cmds=$archive_expsym_cmds
+ else
+ cmds=$archive_cmds
+ fi
+ fi
+
+ # Append the command to remove the reloadable object files
+ # to the just-reset $cmds.
+ eval cmds=\"\$cmds~\$rm $delfiles\"
+ fi
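
Piecewise linking as set up above means batching the object list so that each reload command stays under max_cmd_len (the real code additionally chains each partial object into the next one and re-runs the export-symbol step). A simplified sketch with a made-up limit and object list, using "ld -r" as the usual reload command:

    max_cmd_len=2000                   # made-up limit; configure probes the real one
    batch=""; n=1
    for obj in a.o b.o c.o d.o; do     # made-up object list
      trial="$batch $obj"
      if test `expr "X$trial" : ".*"` -le "$max_cmd_len"; then
        batch=$trial
      else
        ld -r -o "part-$n.o" $batch    # flush one reloadable object
        n=`expr $n + 1`
        batch=" $obj"
      fi
    done
    test -n "$batch" && ld -r -o "part-$n.o" $batch
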
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ done
+ IFS="$save_ifs"
+
+ # Restore the uninstalled library and exit
+ if test "$mode" = relink; then
+ $run eval '(cd $output_objdir && $rm ${realname}T && $mv $realname ${realname}T && $mv "$realname"U $realname)' || exit $?
+ exit 0
+ fi
+
+ # Create links to the real library.
+ for linkname in $linknames; do
+ if test "$realname" != "$linkname"; then
+ $show "(cd $output_objdir && $rm $linkname && $LN_S $realname $linkname)"
+ $run eval '(cd $output_objdir && $rm $linkname && $LN_S $realname $linkname)' || exit $?
+ fi
+ done
+
+ # If -module or -export-dynamic was specified, set the dlname.
+ if test "$module" = yes || test "$export_dynamic" = yes; then
+ # On all known operating systems, these are identical.
+ dlname="$soname"
+ fi
+ fi
+ ;;
+
+ obj)
+ if test -n "$deplibs"; then
+ $echo "$modename: warning: \`-l' and \`-L' are ignored for objects" 1>&2
+ fi
+
+ if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then
+ $echo "$modename: warning: \`-dlopen' is ignored for objects" 1>&2
+ fi
+
+ if test -n "$rpath"; then
+ $echo "$modename: warning: \`-rpath' is ignored for objects" 1>&2
+ fi
+
+ if test -n "$xrpath"; then
+ $echo "$modename: warning: \`-R' is ignored for objects" 1>&2
+ fi
+
+ if test -n "$vinfo"; then
+ $echo "$modename: warning: \`-version-info' is ignored for objects" 1>&2
+ fi
+
+ if test -n "$release"; then
+ $echo "$modename: warning: \`-release' is ignored for objects" 1>&2
+ fi
+
+ case $output in
+ *.lo)
+ if test -n "$objs$old_deplibs"; then
+ $echo "$modename: cannot build library object \`$output' from non-libtool objects" 1>&2
+ exit 1
+ fi
+ libobj="$output"
+ obj=`$echo "X$output" | $Xsed -e "$lo2o"`
+ ;;
+ *)
+ libobj=
+ obj="$output"
+ ;;
+ esac
+
+ # Delete the old objects.
+ $run $rm $obj $libobj
+
+ # Objects from convenience libraries. This assumes
+ # single-version convenience libraries. Whenever we create
+ # different ones for PIC/non-PIC, we'll have to duplicate
+ # the extraction.
+ reload_conv_objs=
+ gentop=
+ # reload_cmds runs $LD directly, so let us get rid of
+ # -Wl from whole_archive_flag_spec
+ wl=
+
+ if test -n "$convenience"; then
+ if test -n "$whole_archive_flag_spec"; then
+ eval reload_conv_objs=\"\$reload_objs $whole_archive_flag_spec\"
+ else
+ gentop="$output_objdir/${obj}x"
+ $show "${rm}r $gentop"
+ $run ${rm}r "$gentop"
+ $show "$mkdir $gentop"
+ $run $mkdir "$gentop"
+ status=$?
+ if test "$status" -ne 0 && test ! -d "$gentop"; then
+ exit $status
+ fi
+ generated="$generated $gentop"
+
+ for xlib in $convenience; do
+ # Extract the objects.
+ case $xlib in
+ [\\/]* | [A-Za-z]:[\\/]*) xabs="$xlib" ;;
+ *) xabs=`pwd`"/$xlib" ;;
+ esac
+ xlib=`$echo "X$xlib" | $Xsed -e 's%^.*/%%'`
+ xdir="$gentop/$xlib"
+
+ $show "${rm}r $xdir"
+ $run ${rm}r "$xdir"
+ $show "$mkdir $xdir"
+ $run $mkdir "$xdir"
+ status=$?
+ if test "$status" -ne 0 && test ! -d "$xdir"; then
+ exit $status
+ fi
+ # We will extract separately just the conflicting names and we will no
+ # longer touch any unique names. It is faster to let $AR extract the
+ # unique ones automatically in one run.
+ $show "(cd $xdir && $AR x $xabs)"
+ $run eval "(cd \$xdir && $AR x \$xabs)" || exit $?
+ if ($AR t "$xabs" | sort | sort -uc >/dev/null 2>&1); then
+ :
+ else
+ $echo "$modename: warning: object name conflicts; renaming object files" 1>&2
+ $echo "$modename: warning: to ensure that they will not overwrite each other" 1>&2
+ $AR t "$xabs" | sort | uniq -cd | while read -r count name
+ do
+ i=1
+ while test "$i" -le "$count"
+ do
+ # Put our $i before any first dot (extension)
+ # Never overwrite any file
+ name_to="$name"
+ while test "X$name_to" = "X$name" || test -f "$xdir/$name_to"
+ do
+ name_to=`$echo "X$name_to" | $Xsed -e "s/\([^.]*\)/\1-$i/"`
+ done
+ $show "(cd $xdir && $AR xN $i $xabs '$name' && $mv '$name' '$name_to')"
+ $run eval "(cd \$xdir && $AR xN $i \$xabs '$name' && $mv '$name' '$name_to')" || exit $?
+ i=`expr $i + 1`
+ done
+ done
+ fi
+
+ reload_conv_objs="$reload_objs "`find $xdir -name \*.$objext -print -o -name \*.lo -print | $NL2SP`
+ done
+ fi
+ fi
+
+ # Create the old-style object.
+ reload_objs="$objs$old_deplibs "`$echo "X$libobjs" | $SP2NL | $Xsed -e '/\.'${libext}$'/d' -e '/\.lib$/d' -e "$lo2o" | $NL2SP`" $reload_conv_objs" ### testsuite: skip nested quoting test
+
+ output="$obj"
+ cmds=$reload_cmds
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ done
+ IFS="$save_ifs"
+
+ # Exit if we aren't doing a library object file.
+ if test -z "$libobj"; then
+ if test -n "$gentop"; then
+ $show "${rm}r $gentop"
+ $run ${rm}r $gentop
+ fi
+
+ exit 0
+ fi
+
+ if test "$build_libtool_libs" != yes; then
+ if test -n "$gentop"; then
+ $show "${rm}r $gentop"
+ $run ${rm}r $gentop
+ fi
+
+ # Create an invalid libtool object if no PIC, so that we don't
+ # accidentally link it into a program.
+ # $show "echo timestamp > $libobj"
+ # $run eval "echo timestamp > $libobj" || exit $?
+ exit 0
+ fi
+
+ if test -n "$pic_flag" || test "$pic_mode" != default; then
+ # Only do commands if we really have different PIC objects.
+ reload_objs="$libobjs $reload_conv_objs"
+ output="$libobj"
+ cmds=$reload_cmds
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ done
+ IFS="$save_ifs"
+ fi
+
+ if test -n "$gentop"; then
+ $show "${rm}r $gentop"
+ $run ${rm}r $gentop
+ fi
+
+ exit 0
+ ;;
+
+ prog)
+ case $host in
+ *cygwin*) output=`$echo $output | ${SED} -e 's,.exe$,,;s,$,.exe,'` ;;
+ esac
+ if test -n "$vinfo"; then
+ $echo "$modename: warning: \`-version-info' is ignored for programs" 1>&2
+ fi
+
+ if test -n "$release"; then
+ $echo "$modename: warning: \`-release' is ignored for programs" 1>&2
+ fi
+
+ if test "$preload" = yes; then
+ if test "$dlopen_support" = unknown && test "$dlopen_self" = unknown &&
+ test "$dlopen_self_static" = unknown; then
+ $echo "$modename: warning: \`AC_LIBTOOL_DLOPEN' not used. Assuming no dlopen support."
+ fi
+ fi
+
+ case $host in
+ *-*-rhapsody* | *-*-darwin1.[012])
+ # On Rhapsody, replace the C library with the System framework
+ compile_deplibs=`$echo "X $compile_deplibs" | $Xsed -e 's/ -lc / -framework System /'`
+ finalize_deplibs=`$echo "X $finalize_deplibs" | $Xsed -e 's/ -lc / -framework System /'`
+ ;;
+ esac
+
+ case $host in
+ *darwin*)
+ # Don't allow lazy linking, it breaks C++ global constructors
+ if test "$tagname" = CXX ; then
+ compile_command="$compile_command ${wl}-bind_at_load"
+ finalize_command="$finalize_command ${wl}-bind_at_load"
+ fi
+ ;;
+ esac
+
+ compile_command="$compile_command $compile_deplibs"
+ finalize_command="$finalize_command $finalize_deplibs"
+
+ if test -n "$rpath$xrpath"; then
+ # If the user specified any rpath flags, then add them.
+ for libdir in $rpath $xrpath; do
+ # This is the magic to use -rpath.
+ case "$finalize_rpath " in
+ *" $libdir "*) ;;
+ *) finalize_rpath="$finalize_rpath $libdir" ;;
+ esac
+ done
+ fi
+
+ # Now hardcode the library paths
+ rpath=
+ hardcode_libdirs=
+ for libdir in $compile_rpath $finalize_rpath; do
+ if test -n "$hardcode_libdir_flag_spec"; then
+ if test -n "$hardcode_libdir_separator"; then
+ if test -z "$hardcode_libdirs"; then
+ hardcode_libdirs="$libdir"
+ else
+ # Just accumulate the unique libdirs.
+ case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in
+ *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
+ ;;
+ *)
+ hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
+ ;;
+ esac
+ fi
+ else
+ eval flag=\"$hardcode_libdir_flag_spec\"
+ rpath="$rpath $flag"
+ fi
+ elif test -n "$runpath_var"; then
+ case "$perm_rpath " in
+ *" $libdir "*) ;;
+ *) perm_rpath="$perm_rpath $libdir" ;;
+ esac
+ fi
+ case $host in
+ *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2*)
+ case :$dllsearchpath: in
+ *":$libdir:"*) ;;
+ *) dllsearchpath="$dllsearchpath:$libdir";;
+ esac
+ ;;
+ esac
+ done
+ # Substitute the hardcoded libdirs into the rpath.
+ if test -n "$hardcode_libdir_separator" &&
+ test -n "$hardcode_libdirs"; then
+ libdir="$hardcode_libdirs"
+ eval rpath=\" $hardcode_libdir_flag_spec\"
+ fi
+ compile_rpath="$rpath"
+
+ rpath=
+ hardcode_libdirs=
+ for libdir in $finalize_rpath; do
+ if test -n "$hardcode_libdir_flag_spec"; then
+ if test -n "$hardcode_libdir_separator"; then
+ if test -z "$hardcode_libdirs"; then
+ hardcode_libdirs="$libdir"
+ else
+ # Just accumulate the unique libdirs.
+ case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in
+ *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
+ ;;
+ *)
+ hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
+ ;;
+ esac
+ fi
+ else
+ eval flag=\"$hardcode_libdir_flag_spec\"
+ rpath="$rpath $flag"
+ fi
+ elif test -n "$runpath_var"; then
+ case "$finalize_perm_rpath " in
+ *" $libdir "*) ;;
+ *) finalize_perm_rpath="$finalize_perm_rpath $libdir" ;;
+ esac
+ fi
+ done
+ # Substitute the hardcoded libdirs into the rpath.
+ if test -n "$hardcode_libdir_separator" &&
+ test -n "$hardcode_libdirs"; then
+ libdir="$hardcode_libdirs"
+ eval rpath=\" $hardcode_libdir_flag_spec\"
+ fi
+ finalize_rpath="$rpath"
+
+ if test -n "$libobjs" && test "$build_old_libs" = yes; then
+ # Transform all the library objects into standard objects.
+ compile_command=`$echo "X$compile_command" | $SP2NL | $Xsed -e "$lo2o" | $NL2SP`
+ finalize_command=`$echo "X$finalize_command" | $SP2NL | $Xsed -e "$lo2o" | $NL2SP`
+ fi
+
+ dlsyms=
+ if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then
+ if test -n "$NM" && test -n "$global_symbol_pipe"; then
+ dlsyms="${outputname}S.c"
+ else
+ $echo "$modename: not configured to extract global symbols from dlpreopened files" 1>&2
+ fi
+ fi
+
+ if test -n "$dlsyms"; then
+ case $dlsyms in
+ "") ;;
+ *.c)
+ # Discover the nlist of each of the dlfiles.
+ nlist="$output_objdir/${outputname}.nm"
+
+ $show "$rm $nlist ${nlist}S ${nlist}T"
+ $run $rm "$nlist" "${nlist}S" "${nlist}T"
+
+ # Parse the name list into a source file.
+ $show "creating $output_objdir/$dlsyms"
+
+ test -z "$run" && $echo > "$output_objdir/$dlsyms" "\
+/* $dlsyms - symbol resolution table for \`$outputname' dlsym emulation. */
+/* Generated by $PROGRAM - GNU $PACKAGE $VERSION$TIMESTAMP */
+
+#ifdef __cplusplus
+extern \"C\" {
+#endif
+
+/* Prevent the only kind of declaration conflicts we can make. */
+#define lt_preloaded_symbols some_other_symbol
+
+/* External symbol declarations for the compiler. */\
+"
+
+ if test "$dlself" = yes; then
+ $show "generating symbol list for \`$output'"
+
+ test -z "$run" && $echo ': @PROGRAM@ ' > "$nlist"
+
+ # Add our own program objects to the symbol list.
+ progfiles=`$echo "X$objs$old_deplibs" | $SP2NL | $Xsed -e "$lo2o" | $NL2SP`
+ for arg in $progfiles; do
+ $show "extracting global C symbols from \`$arg'"
+ $run eval "$NM $arg | $global_symbol_pipe >> '$nlist'"
+ done
+
+ if test -n "$exclude_expsyms"; then
+ $run eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T'
+ $run eval '$mv "$nlist"T "$nlist"'
+ fi
+
+ if test -n "$export_symbols_regex"; then
+ $run eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T'
+ $run eval '$mv "$nlist"T "$nlist"'
+ fi
+
+ # Prepare the list of exported symbols
+ if test -z "$export_symbols"; then
+ export_symbols="$output_objdir/$output.exp"
+ $run $rm $export_symbols
+ $run eval "${SED} -n -e '/^: @PROGRAM@$/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"'
+ else
+ $run eval "${SED} -e 's/\([][.*^$]\)/\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$output.exp"'
+ $run eval 'grep -f "$output_objdir/$output.exp" < "$nlist" > "$nlist"T'
+ $run eval 'mv "$nlist"T "$nlist"'
+ fi
+ fi
+
+ for arg in $dlprefiles; do
+ $show "extracting global C symbols from \`$arg'"
+ name=`$echo "$arg" | ${SED} -e 's%^.*/%%'`
+ $run eval '$echo ": $name " >> "$nlist"'
+ $run eval "$NM $arg | $global_symbol_pipe >> '$nlist'"
+ done
+
+ if test -z "$run"; then
+ # Make sure we have at least an empty file.
+ test -f "$nlist" || : > "$nlist"
+
+ if test -n "$exclude_expsyms"; then
+ $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T
+ $mv "$nlist"T "$nlist"
+ fi
+
+ # Try sorting and uniquifying the output.
+ if grep -v "^: " < "$nlist" |
+ if sort -k 3 </dev/null >/dev/null 2>&1; then
+ sort -k 3
+ else
+ sort +2
+ fi |
+ uniq > "$nlist"S; then
+ :
+ else
+ grep -v "^: " < "$nlist" > "$nlist"S
+ fi
+
+ if test -f "$nlist"S; then
+ eval "$global_symbol_to_cdecl"' < "$nlist"S >> "$output_objdir/$dlsyms"'
+ else
+ $echo '/* NONE */' >> "$output_objdir/$dlsyms"
+ fi
+
+ $echo >> "$output_objdir/$dlsyms" "\
+
+#undef lt_preloaded_symbols
+
+#if defined (__STDC__) && __STDC__
+# define lt_ptr void *
+#else
+# define lt_ptr char *
+# define const
+#endif
+
+/* The mapping between symbol names and symbols. */
+const struct {
+ const char *name;
+ lt_ptr address;
+}
+lt_preloaded_symbols[] =
+{\
+"
+
+ eval "$global_symbol_to_c_name_address" < "$nlist" >> "$output_objdir/$dlsyms"
+
+ $echo >> "$output_objdir/$dlsyms" "\
+ {0, (lt_ptr) 0}
+};
+
+/* This works around a problem in the FreeBSD linker */
+#ifdef FREEBSD_WORKAROUND
+static const void *lt_preloaded_setup() {
+ return lt_preloaded_symbols;
+}
+#endif
+
+#ifdef __cplusplus
+}
+#endif\
+"
+ fi
+
+ pic_flag_for_symtable=
+ case $host in
+ # compiling the symbol table file with pic_flag works around
+ # a FreeBSD bug that causes programs to crash when -lm is
+ # linked before any other PIC object. But we must not use
+ # pic_flag when linking with -static. The problem exists in
+ # FreeBSD 2.2.6 and is fixed in FreeBSD 3.1.
+ *-*-freebsd2*|*-*-freebsd3.0*|*-*-freebsdelf3.0*)
+ case "$compile_command " in
+ *" -static "*) ;;
+ *) pic_flag_for_symtable=" $pic_flag -DFREEBSD_WORKAROUND";;
+ esac;;
+ *-*-hpux*)
+ case "$compile_command " in
+ *" -static "*) ;;
+ *) pic_flag_for_symtable=" $pic_flag";;
+ esac
+ esac
+
+ # Now compile the dynamic symbol file.
+ $show "(cd $output_objdir && $LTCC -c$no_builtin_flag$pic_flag_for_symtable \"$dlsyms\")"
+ $run eval '(cd $output_objdir && $LTCC -c$no_builtin_flag$pic_flag_for_symtable "$dlsyms")' || exit $?
+
+ # Clean up the generated files.
+ $show "$rm $output_objdir/$dlsyms $nlist ${nlist}S ${nlist}T"
+ $run $rm "$output_objdir/$dlsyms" "$nlist" "${nlist}S" "${nlist}T"
+
+ # Transform the symbol file into the correct name.
+ compile_command=`$echo "X$compile_command" | $Xsed -e "s%@SYMFILE@%$output_objdir/${outputname}S.${objext}%"`
+ finalize_command=`$echo "X$finalize_command" | $Xsed -e "s%@SYMFILE@%$output_objdir/${outputname}S.${objext}%"`
+ ;;
+ *)
+ $echo "$modename: unknown suffix for \`$dlsyms'" 1>&2
+ exit 1
+ ;;
+ esac
+ else
+ # We keep going just in case the user didn't refer to
+ # lt_preloaded_symbols. The linker will fail if global_symbol_pipe
+ # really was required.
+
+ # Nullify the symbol file.
+ compile_command=`$echo "X$compile_command" | $Xsed -e "s% @SYMFILE@%%"`
+ finalize_command=`$echo "X$finalize_command" | $Xsed -e "s% @SYMFILE@%%"`
+ fi
+
+ if test "$need_relink" = no || test "$build_libtool_libs" != yes; then
+ # Replace the output file specification.
+ compile_command=`$echo "X$compile_command" | $Xsed -e 's%@OUTPUT@%'"$output"'%g'`
+ link_command="$compile_command$compile_rpath"
+
+ # We have no uninstalled library dependencies, so finalize right now.
+ $show "$link_command"
+ $run eval "$link_command"
+ status=$?
+
+ # Delete the generated files.
+ if test -n "$dlsyms"; then
+ $show "$rm $output_objdir/${outputname}S.${objext}"
+ $run $rm "$output_objdir/${outputname}S.${objext}"
+ fi
+
+ exit $status
+ fi
+
+ if test -n "$shlibpath_var"; then
+ # We should set the shlibpath_var
+ rpath=
+ for dir in $temp_rpath; do
+ case $dir in
+ [\\/]* | [A-Za-z]:[\\/]*)
+ # Absolute path.
+ rpath="$rpath$dir:"
+ ;;
+ *)
+ # Relative path: add a thisdir entry.
+ rpath="$rpath\$thisdir/$dir:"
+ ;;
+ esac
+ done
+ temp_rpath="$rpath"
+ fi
+
+ if test -n "$compile_shlibpath$finalize_shlibpath"; then
+ compile_command="$shlibpath_var=\"$compile_shlibpath$finalize_shlibpath\$$shlibpath_var\" $compile_command"
+ fi
+ if test -n "$finalize_shlibpath"; then
+ finalize_command="$shlibpath_var=\"$finalize_shlibpath\$$shlibpath_var\" $finalize_command"
+ fi
+
+ compile_var=
+ finalize_var=
+ if test -n "$runpath_var"; then
+ if test -n "$perm_rpath"; then
+ # We should set the runpath_var.
+ rpath=
+ for dir in $perm_rpath; do
+ rpath="$rpath$dir:"
+ done
+ compile_var="$runpath_var=\"$rpath\$$runpath_var\" "
+ fi
+ if test -n "$finalize_perm_rpath"; then
+ # We should set the runpath_var.
+ rpath=
+ for dir in $finalize_perm_rpath; do
+ rpath="$rpath$dir:"
+ done
+ finalize_var="$runpath_var=\"$rpath\$$runpath_var\" "
+ fi
+ fi
+
+ if test "$no_install" = yes; then
+ # We don't need to create a wrapper script.
+ link_command="$compile_var$compile_command$compile_rpath"
+ # Replace the output file specification.
+ link_command=`$echo "X$link_command" | $Xsed -e 's%@OUTPUT@%'"$output"'%g'`
+ # Delete the old output file.
+ $run $rm $output
+ # Link the executable and exit
+ $show "$link_command"
+ $run eval "$link_command" || exit $?
+ exit 0
+ fi
+
+ if test "$hardcode_action" = relink; then
+ # Fast installation is not supported
+ link_command="$compile_var$compile_command$compile_rpath"
+ relink_command="$finalize_var$finalize_command$finalize_rpath"
+
+ $echo "$modename: warning: this platform does not like uninstalled shared libraries" 1>&2
+ $echo "$modename: \`$output' will be relinked during installation" 1>&2
+ else
+ if test "$fast_install" != no; then
+ link_command="$finalize_var$compile_command$finalize_rpath"
+ if test "$fast_install" = yes; then
+ relink_command=`$echo "X$compile_var$compile_command$compile_rpath" | $Xsed -e 's%@OUTPUT@%\$progdir/\$file%g'`
+ else
+ # fast_install is set to needless
+ relink_command=
+ fi
+ else
+ link_command="$compile_var$compile_command$compile_rpath"
+ relink_command="$finalize_var$finalize_command$finalize_rpath"
+ fi
+ fi
+
+ # Replace the output file specification.
+ link_command=`$echo "X$link_command" | $Xsed -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g'`
+
+ # Delete the old output files.
+ $run $rm $output $output_objdir/$outputname $output_objdir/lt-$outputname
+
+ $show "$link_command"
+ $run eval "$link_command" || exit $?
+
+ # Now create the wrapper script.
+ $show "creating $output"
+
+ # Quote the relink command for shipping.
+ if test -n "$relink_command"; then
+ # Preserve any variables that may affect compiler behavior
+ for var in $variables_saved_for_relink; do
+ if eval test -z \"\${$var+set}\"; then
+ relink_command="{ test -z \"\${$var+set}\" || unset $var || { $var=; export $var; }; }; $relink_command"
+ elif eval var_value=\$$var; test -z "$var_value"; then
+ relink_command="$var=; export $var; $relink_command"
+ else
+ var_value=`$echo "X$var_value" | $Xsed -e "$sed_quote_subst"`
+ relink_command="$var=\"$var_value\"; export $var; $relink_command"
+ fi
+ done
+ relink_command="(cd `pwd`; $relink_command)"
+ relink_command=`$echo "X$relink_command" | $Xsed -e "$sed_quote_subst"`
+ fi
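
The quoting loop above ships the build-time environment inside relink_command so the wrapper can re-run the link later under the same settings. A stripped-down sketch over a few representative variables; values containing quotes would additionally need the sed_quote_subst treatment used above, and the real list comes from variables_saved_for_relink:

    relink_command='$CC -o prog prog.o'            # hypothetical link step
    for var in CC CFLAGS LD_LIBRARY_PATH; do       # a few representative variables
      eval val=\$$var
      if test -z "$val"; then
        relink_command="$var=; export $var; $relink_command"
      else
        relink_command="$var=\"$val\"; export $var; $relink_command"
      fi
    done
    relink_command="(cd `pwd`; $relink_command)"
    echo "$relink_command"
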
+
+ # Quote $echo for shipping.
+ if test "X$echo" = "X$SHELL $0 --fallback-echo"; then
+ case $0 in
+ [\\/]* | [A-Za-z]:[\\/]*) qecho="$SHELL $0 --fallback-echo";;
+ *) qecho="$SHELL `pwd`/$0 --fallback-echo";;
+ esac
+ qecho=`$echo "X$qecho" | $Xsed -e "$sed_quote_subst"`
+ else
+ qecho=`$echo "X$echo" | $Xsed -e "$sed_quote_subst"`
+ fi
+
+ # Only actually do things if our run command is non-null.
+ if test -z "$run"; then
+ # win32 will think the script is a binary if it has
+ # a .exe suffix, so we strip it off here.
+ case $output in
+ *.exe) output=`$echo $output|${SED} 's,.exe$,,'` ;;
+ esac
+ # test for cygwin because mv fails w/o .exe extensions
+ case $host in
+ *cygwin*)
+ exeext=.exe
+ outputname=`$echo $outputname|${SED} 's,.exe$,,'` ;;
+ *) exeext= ;;
+ esac
+ case $host in
+ *cygwin* | *mingw* )
+ cwrappersource=`$echo ${objdir}/lt-${output}.c`
+ cwrapper=`$echo ${output}.exe`
+ $rm $cwrappersource $cwrapper
+ trap "$rm $cwrappersource $cwrapper; exit 1" 1 2 15
+
+ cat > $cwrappersource <<EOF
+
+/* $cwrappersource - temporary wrapper executable for $objdir/$outputname
+ Generated by $PROGRAM - GNU $PACKAGE $VERSION$TIMESTAMP
+
+ The $output program cannot be directly executed until all the libtool
+ libraries that it depends on are installed.
+
+ This wrapper executable should never be moved out of the build directory.
+ If it is, it will not operate correctly.
+
+ Currently, it simply execs the wrapper *script* "/bin/sh $output",
+ but could eventually absorb all of the script's functionality and
+ exec $objdir/$outputname directly.
+*/
+EOF
+ cat >> $cwrappersource<<"EOF"
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <malloc.h>
+#include <stdarg.h>
+#include <assert.h>
+
+#if defined(PATH_MAX)
+# define LT_PATHMAX PATH_MAX
+#elif defined(MAXPATHLEN)
+# define LT_PATHMAX MAXPATHLEN
+#else
+# define LT_PATHMAX 1024
+#endif
+
+#ifndef DIR_SEPARATOR
+#define DIR_SEPARATOR '/'
+#endif
+
+#if defined (_WIN32) || defined (__MSDOS__) || defined (__DJGPP__) || \
+ defined (__OS2__)
+#define HAVE_DOS_BASED_FILE_SYSTEM
+#ifndef DIR_SEPARATOR_2
+#define DIR_SEPARATOR_2 '\\'
+#endif
+#endif
+
+#ifndef DIR_SEPARATOR_2
+# define IS_DIR_SEPARATOR(ch) ((ch) == DIR_SEPARATOR)
+#else /* DIR_SEPARATOR_2 */
+# define IS_DIR_SEPARATOR(ch) \
+ (((ch) == DIR_SEPARATOR) || ((ch) == DIR_SEPARATOR_2))
+#endif /* DIR_SEPARATOR_2 */
+
+#define XMALLOC(type, num) ((type *) xmalloc ((num) * sizeof(type)))
+#define XFREE(stale) do { \
+ if (stale) { free ((void *) stale); stale = 0; } \
+} while (0)
+
+const char *program_name = NULL;
+
+void * xmalloc (size_t num);
+char * xstrdup (const char *string);
+char * basename (const char *name);
+char * fnqualify(const char *path);
+char * strendzap(char *str, const char *pat);
+void lt_fatal (const char *message, ...);
+
+int
+main (int argc, char *argv[])
+{
+ char **newargz;
+ int i;
+
+ program_name = (char *) xstrdup ((char *) basename (argv[0]));
+ newargz = XMALLOC(char *, argc+2);
+EOF
+
+ cat >> $cwrappersource <<EOF
+ newargz[0] = "$SHELL";
+EOF
+
+ cat >> $cwrappersource <<"EOF"
+ newargz[1] = fnqualify(argv[0]);
+ /* we know the script has the same name, without the .exe */
+ /* so make sure newargz[1] doesn't end in .exe */
+ strendzap(newargz[1],".exe");
+ for (i = 1; i < argc; i++)
+ newargz[i+1] = xstrdup(argv[i]);
+ newargz[argc+1] = NULL;
+EOF
+
+ cat >> $cwrappersource <<EOF
+ execv("$SHELL",newargz);
+EOF
+
+ cat >> $cwrappersource <<"EOF"
+}
+
+void *
+xmalloc (size_t num)
+{
+ void * p = (void *) malloc (num);
+ if (!p)
+ lt_fatal ("Memory exhausted");
+
+ return p;
+}
+
+char *
+xstrdup (const char *string)
+{
+ return string ? strcpy ((char *) xmalloc (strlen (string) + 1), string) : NULL;
+}
+
+char *
+basename (const char *name)
+{
+ const char *base;
+
+#if defined (HAVE_DOS_BASED_FILE_SYSTEM)
+ /* Skip over the disk name in MSDOS pathnames. */
+ if (isalpha (name[0]) && name[1] == ':')
+ name += 2;
+#endif
+
+ for (base = name; *name; name++)
+ if (IS_DIR_SEPARATOR (*name))
+ base = name + 1;
+ return (char *) base;
+}
+
+char *
+fnqualify(const char *path)
+{
+ size_t size;
+ char *p;
+ char tmp[LT_PATHMAX + 1];
+
+ assert(path != NULL);
+
+ /* Is it qualified already? */
+#if defined (HAVE_DOS_BASED_FILE_SYSTEM)
+ if (isalpha (path[0]) && path[1] == ':')
+ return xstrdup (path);
+#endif
+ if (IS_DIR_SEPARATOR (path[0]))
+ return xstrdup (path);
+
+ /* prepend the current directory */
+ /* doesn't handle '~' */
+ if (getcwd (tmp, LT_PATHMAX) == NULL)
+ lt_fatal ("getcwd failed");
+ size = strlen(tmp) + 1 + strlen(path) + 1; /* +2 for '/' and '\0' */
+ p = XMALLOC(char, size);
+ sprintf(p, "%s%c%s", tmp, DIR_SEPARATOR, path);
+ return p;
+}
+
+char *
+strendzap(char *str, const char *pat)
+{
+ size_t len, patlen;
+
+ assert(str != NULL);
+ assert(pat != NULL);
+
+ len = strlen(str);
+ patlen = strlen(pat);
+
+ if (patlen <= len)
+ {
+ str += len - patlen;
+ if (strcmp(str, pat) == 0)
+ *str = '\0';
+ }
+ return str;
+}
+
+static void
+lt_error_core (int exit_status, const char * mode,
+ const char * message, va_list ap)
+{
+ fprintf (stderr, "%s: %s: ", program_name, mode);
+ vfprintf (stderr, message, ap);
+ fprintf (stderr, ".\n");
+
+ if (exit_status >= 0)
+ exit (exit_status);
+}
+
+void
+lt_fatal (const char *message, ...)
+{
+ va_list ap;
+ va_start (ap, message);
+ lt_error_core (EXIT_FAILURE, "FATAL", message, ap);
+ va_end (ap);
+}
+EOF
+ # we should really use a build-platform specific compiler
+ # here, but OTOH, the wrappers (shell script and this C one)
+ # are only useful if you want to execute the "real" binary.
+ # Since the "real" binary is built for $host, this wrapper might
+ # as well be built for $host, too.
+ $run $LTCC -s -o $cwrapper $cwrappersource
+ ;;
+ esac
+ $rm $output
+ trap "$rm $output; exit 1" 1 2 15
+
+ $echo > $output "\
+#! $SHELL
+
+# $output - temporary wrapper script for $objdir/$outputname
+# Generated by $PROGRAM - GNU $PACKAGE $VERSION$TIMESTAMP
+#
+# The $output program cannot be directly executed until all the libtool
+# libraries that it depends on are installed.
+#
+# This wrapper script should never be moved out of the build directory.
+# If it is, it will not operate correctly.
+
+# Sed substitution that helps us do robust quoting. It backslashifies
+# metacharacters that are still active within double-quoted strings.
+Xsed='${SED} -e 1s/^X//'
+sed_quote_subst='$sed_quote_subst'
+
+# The HP-UX ksh and POSIX shell print the target directory to stdout
+# if CDPATH is set.
+if test \"\${CDPATH+set}\" = set; then CDPATH=:; export CDPATH; fi
+
+relink_command=\"$relink_command\"
+
+# This environment variable determines our operation mode.
+if test \"\$libtool_install_magic\" = \"$magic\"; then
+ # install mode needs the following variable:
+ notinst_deplibs='$notinst_deplibs'
+else
+ # When we are sourced in execute mode, \$file and \$echo are already set.
+ if test \"\$libtool_execute_magic\" != \"$magic\"; then
+ echo=\"$qecho\"
+ file=\"\$0\"
+ # Make sure echo works.
+ if test \"X\$1\" = X--no-reexec; then
+ # Discard the --no-reexec flag, and continue.
+ shift
+ elif test \"X\`(\$echo '\t') 2>/dev/null\`\" = 'X\t'; then
+ # Yippee, \$echo works!
+ :
+ else
+ # Restart under the correct shell, and then maybe \$echo will work.
+ exec $SHELL \"\$0\" --no-reexec \${1+\"\$@\"}
+ fi
+ fi\
+"
+ $echo >> $output "\
+
+ # Find the directory that this script lives in.
+ thisdir=\`\$echo \"X\$file\" | \$Xsed -e 's%/[^/]*$%%'\`
+ test \"x\$thisdir\" = \"x\$file\" && thisdir=.
+
+ # Follow symbolic links until we get to the real thisdir.
+ file=\`ls -ld \"\$file\" | ${SED} -n 's/.*-> //p'\`
+ while test -n \"\$file\"; do
+ destdir=\`\$echo \"X\$file\" | \$Xsed -e 's%/[^/]*\$%%'\`
+
+ # If there was a directory component, then change thisdir.
+ if test \"x\$destdir\" != \"x\$file\"; then
+ case \"\$destdir\" in
+ [\\\\/]* | [A-Za-z]:[\\\\/]*) thisdir=\"\$destdir\" ;;
+ *) thisdir=\"\$thisdir/\$destdir\" ;;
+ esac
+ fi
+
+ file=\`\$echo \"X\$file\" | \$Xsed -e 's%^.*/%%'\`
+ file=\`ls -ld \"\$thisdir/\$file\" | ${SED} -n 's/.*-> //p'\`
+ done
+
+ # Try to get the absolute directory name.
+ absdir=\`cd \"\$thisdir\" && pwd\`
+ test -n \"\$absdir\" && thisdir=\"\$absdir\"
+"
+
+ if test "$fast_install" = yes; then
+ $echo >> $output "\
+ program=lt-'$outputname'$exeext
+ progdir=\"\$thisdir/$objdir\"
+
+ if test ! -f \"\$progdir/\$program\" || \\
+ { file=\`ls -1dt \"\$progdir/\$program\" \"\$progdir/../\$program\" 2>/dev/null | ${SED} 1q\`; \\
+ test \"X\$file\" != \"X\$progdir/\$program\"; }; then
+
+ file=\"\$\$-\$program\"
+
+ if test ! -d \"\$progdir\"; then
+ $mkdir \"\$progdir\"
+ else
+ $rm \"\$progdir/\$file\"
+ fi"
+
+ $echo >> $output "\
+
+ # relink executable if necessary
+ if test -n \"\$relink_command\"; then
+ if relink_command_output=\`eval \$relink_command 2>&1\`; then :
+ else
+ $echo \"\$relink_command_output\" >&2
+ $rm \"\$progdir/\$file\"
+ exit 1
+ fi
+ fi
+
+ $mv \"\$progdir/\$file\" \"\$progdir/\$program\" 2>/dev/null ||
+ { $rm \"\$progdir/\$program\";
+ $mv \"\$progdir/\$file\" \"\$progdir/\$program\"; }
+ $rm \"\$progdir/\$file\"
+ fi"
+ else
+ $echo >> $output "\
+ program='$outputname'
+ progdir=\"\$thisdir/$objdir\"
+"
+ fi
+
+ $echo >> $output "\
+
+ if test -f \"\$progdir/\$program\"; then"
+
+ # Export our shlibpath_var if we have one.
+ if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
+ $echo >> $output "\
+ # Add our own library path to $shlibpath_var
+ $shlibpath_var=\"$temp_rpath\$$shlibpath_var\"
+
+ # Some systems cannot cope with colon-terminated $shlibpath_var
+ # The second colon is a workaround for a bug in BeOS R4 sed
+ $shlibpath_var=\`\$echo \"X\$$shlibpath_var\" | \$Xsed -e 's/::*\$//'\`
+
+ export $shlibpath_var
+"
+ fi
+
+ # fixup the dll searchpath if we need to.
+ if test -n "$dllsearchpath"; then
+ $echo >> $output "\
+ # Add the dll search path components to the executable PATH
+ PATH=$dllsearchpath:\$PATH
+"
+ fi
+
+ $echo >> $output "\
+ if test \"\$libtool_execute_magic\" != \"$magic\"; then
+ # Run the actual program with our arguments.
+"
+ case $host in
+ # Backslashes separate directories on plain windows
+ *-*-mingw | *-*-os2*)
+ $echo >> $output "\
+ exec \$progdir\\\\\$program \${1+\"\$@\"}
+"
+ ;;
+
+ *)
+ $echo >> $output "\
+ exec \$progdir/\$program \${1+\"\$@\"}
+"
+ ;;
+ esac
+ $echo >> $output "\
+ \$echo \"\$0: cannot exec \$program \${1+\"\$@\"}\"
+ exit 1
+ fi
+ else
+ # The program doesn't exist.
+ \$echo \"\$0: error: \$progdir/\$program does not exist\" 1>&2
+ \$echo \"This script is just a wrapper for \$program.\" 1>&2
+ $echo \"See the $PACKAGE documentation for more information.\" 1>&2
+ exit 1
+ fi
+fi\
+"
+ chmod +x $output
+ fi
+ exit 0
+ ;;
+ esac
+
+ # See if we need to build an old-fashioned archive.
+ for oldlib in $oldlibs; do
+
+ if test "$build_libtool_libs" = convenience; then
+ oldobjs="$libobjs_save"
+ addlibs="$convenience"
+ build_libtool_libs=no
+ else
+ if test "$build_libtool_libs" = module; then
+ oldobjs="$libobjs_save"
+ build_libtool_libs=no
+ else
+ oldobjs="$old_deplibs $non_pic_objects"
+ fi
+ addlibs="$old_convenience"
+ fi
+
+ if test -n "$addlibs"; then
+ gentop="$output_objdir/${outputname}x"
+ $show "${rm}r $gentop"
+ $run ${rm}r "$gentop"
+ $show "$mkdir $gentop"
+ $run $mkdir "$gentop"
+ status=$?
+ if test "$status" -ne 0 && test ! -d "$gentop"; then
+ exit $status
+ fi
+ generated="$generated $gentop"
+
+ # Add in members from convenience archives.
+ for xlib in $addlibs; do
+ # Extract the objects.
+ case $xlib in
+ [\\/]* | [A-Za-z]:[\\/]*) xabs="$xlib" ;;
+ *) xabs=`pwd`"/$xlib" ;;
+ esac
+ xlib=`$echo "X$xlib" | $Xsed -e 's%^.*/%%'`
+ xdir="$gentop/$xlib"
+
+ $show "${rm}r $xdir"
+ $run ${rm}r "$xdir"
+ $show "$mkdir $xdir"
+ $run $mkdir "$xdir"
+ status=$?
+ if test "$status" -ne 0 && test ! -d "$xdir"; then
+ exit $status
+ fi
+ # We will extract separately just the conflicting names and we will no
+ # longer touch any unique names. It is faster to let $AR extract the
+ # unique ones automatically in one run.
+ $show "(cd $xdir && $AR x $xabs)"
+ $run eval "(cd \$xdir && $AR x \$xabs)" || exit $?
+ if ($AR t "$xabs" | sort | sort -uc >/dev/null 2>&1); then
+ :
+ else
+ $echo "$modename: warning: object name conflicts; renaming object files" 1>&2
+ $echo "$modename: warning: to ensure that they will not overwrite each other" 1>&2
+ $AR t "$xabs" | sort | uniq -cd | while read -r count name
+ do
+ i=1
+ while test "$i" -le "$count"
+ do
+ # Put our $i before any first dot (extension)
+ # Never overwrite any file
+ name_to="$name"
+ while test "X$name_to" = "X$name" || test -f "$xdir/$name_to"
+ do
+ name_to=`$echo "X$name_to" | $Xsed -e "s/\([^.]*\)/\1-$i/"`
+ done
+ $show "(cd $xdir && $AR xN $i $xabs '$name' && $mv '$name' '$name_to')"
+ $run eval "(cd \$xdir && $AR xN $i \$xabs '$name' && $mv '$name' '$name_to')" || exit $?
+ i=`expr $i + 1`
+ done
+ done
+ fi
+
+ oldobjs="$oldobjs "`find $xdir -name \*.${objext} -print -o -name \*.lo -print | $NL2SP`
+ done
+ fi
+
+ # Do each command in the archive commands.
+ if test -n "$old_archive_from_new_cmds" && test "$build_libtool_libs" = yes; then
+ cmds=$old_archive_from_new_cmds
+ else
+ eval cmds=\"$old_archive_cmds\"
+
+ if len=`expr "X$cmds" : ".*"` &&
+ test "$len" -le "$max_cmd_len" || test "$max_cmd_len" -le -1; then
+ cmds=$old_archive_cmds
+ else
+ # the command line is too long to link in one step; link in parts
+ $echo "using piecewise archive linking..."
+ save_RANLIB=$RANLIB
+ RANLIB=:
+ objlist=
+ concat_cmds=
+ save_oldobjs=$oldobjs
+ # GNU ar 2.10+ was changed to match POSIX; thus no paths are
+ # encoded into archives. This makes 'ar r' malfunction in
+ # this piecewise linking case whenever conflicting object
+ # names appear in distinct ar calls; check, warn and compensate.
+ if (for obj in $save_oldobjs
+ do
+ $echo "X$obj" | $Xsed -e 's%^.*/%%'
+ done | sort | sort -uc >/dev/null 2>&1); then
+ :
+ else
+ $echo "$modename: warning: object name conflicts; overriding AR_FLAGS to 'cq'" 1>&2
+ $echo "$modename: warning: to ensure that POSIX-compatible ar will work" 1>&2
+ AR_FLAGS=cq
+ fi
+ # Is there a better way of finding the last object in the list?
+ for obj in $save_oldobjs
+ do
+ last_oldobj=$obj
+ done
+ for obj in $save_oldobjs
+ do
+ oldobjs="$objlist $obj"
+ objlist="$objlist $obj"
+ eval test_cmds=\"$old_archive_cmds\"
+ if len=`expr "X$test_cmds" : ".*"` &&
+ test "$len" -le "$max_cmd_len"; then
+ :
+ else
+ # the above command should be used before it gets too long
+ oldobjs=$objlist
+ if test "$obj" = "$last_oldobj" ; then
+ RANLIB=$save_RANLIB
+ fi
+ test -z "$concat_cmds" || concat_cmds=$concat_cmds~
+ eval concat_cmds=\"\${concat_cmds}$old_archive_cmds\"
+ objlist=
+ fi
+ done
+ RANLIB=$save_RANLIB
+ oldobjs=$objlist
+ if test "X$oldobjs" = "X" ; then
+ eval cmds=\"\$concat_cmds\"
+ else
+ eval cmds=\"\$concat_cmds~\$old_archive_cmds\"
+ fi
+ fi
+ fi
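
The AR_FLAGS=cq fallback above exists because a POSIX-style ar stores only member basenames, so successive "ar r" calls silently replace earlier members that happen to share a name, while quick-append keeps every copy. A quick demonstration, assuming GNU ar:

    mkdir -p a b
    echo one > a/dup.o
    echo two > b/dup.o
    rm -f libr.a libq.a
    ar rc libr.a a/dup.o           # first call adds member "dup.o"
    ar r  libr.a b/dup.o           # second call replaces it: same basename
    ar cq libq.a a/dup.o           # quick-append mode instead...
    ar q  libq.a b/dup.o           # ...keeps both copies
    echo "with r: `ar t libr.a | wc -l` member(s), with q: `ar t libq.a | wc -l`"
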
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ eval cmd=\"$cmd\"
+ IFS="$save_ifs"
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ done
+ IFS="$save_ifs"
+ done
+
+ if test -n "$generated"; then
+ $show "${rm}r$generated"
+ $run ${rm}r$generated
+ fi
+
+ # Now create the libtool archive.
+ case $output in
+ *.la)
+ old_library=
+ test "$build_old_libs" = yes && old_library="$libname.$libext"
+ $show "creating $output"
+
+ # Preserve any variables that may affect compiler behavior
+ for var in $variables_saved_for_relink; do
+ if eval test -z \"\${$var+set}\"; then
+ relink_command="{ test -z \"\${$var+set}\" || unset $var || { $var=; export $var; }; }; $relink_command"
+ elif eval var_value=\$$var; test -z "$var_value"; then
+ relink_command="$var=; export $var; $relink_command"
+ else
+ var_value=`$echo "X$var_value" | $Xsed -e "$sed_quote_subst"`
+ relink_command="$var=\"$var_value\"; export $var; $relink_command"
+ fi
+ done
+ # Quote the link command for shipping.
+ relink_command="(cd `pwd`; $SHELL $0 $preserve_args --mode=relink $libtool_args @inst_prefix_dir@)"
+ relink_command=`$echo "X$relink_command" | $Xsed -e "$sed_quote_subst"`
+ if test "$hardcode_automatic" = yes ; then
+ relink_command=
+ fi
+ # Only create the output if not a dry run.
+ if test -z "$run"; then
+ for installed in no yes; do
+ if test "$installed" = yes; then
+ if test -z "$install_libdir"; then
+ break
+ fi
+ output="$output_objdir/$outputname"i
+ # Replace all uninstalled libtool libraries with the installed ones
+ newdependency_libs=
+ for deplib in $dependency_libs; do
+ case $deplib in
+ *.la)
+ name=`$echo "X$deplib" | $Xsed -e 's%^.*/%%'`
+ eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
+ if test -z "$libdir"; then
+ $echo "$modename: \`$deplib' is not a valid libtool archive" 1>&2
+ exit 1
+ fi
+ newdependency_libs="$newdependency_libs $libdir/$name"
+ ;;
+ *) newdependency_libs="$newdependency_libs $deplib" ;;
+ esac
+ done
+ dependency_libs="$newdependency_libs"
+ newdlfiles=
+ for lib in $dlfiles; do
+ name=`$echo "X$lib" | $Xsed -e 's%^.*/%%'`
+ eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
+ if test -z "$libdir"; then
+ $echo "$modename: \`$lib' is not a valid libtool archive" 1>&2
+ exit 1
+ fi
+ newdlfiles="$newdlfiles $libdir/$name"
+ done
+ dlfiles="$newdlfiles"
+ newdlprefiles=
+ for lib in $dlprefiles; do
+ name=`$echo "X$lib" | $Xsed -e 's%^.*/%%'`
+ eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
+ if test -z "$libdir"; then
+ $echo "$modename: \`$lib' is not a valid libtool archive" 1>&2
+ exit 1
+ fi
+ newdlprefiles="$newdlprefiles $libdir/$name"
+ done
+ dlprefiles="$newdlprefiles"
+ else
+ newdlfiles=
+ for lib in $dlfiles; do
+ case $lib in
+ [\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
+ *) abs=`pwd`"/$lib" ;;
+ esac
+ newdlfiles="$newdlfiles $abs"
+ done
+ dlfiles="$newdlfiles"
+ newdlprefiles=
+ for lib in $dlprefiles; do
+ case $lib in
+ [\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
+ *) abs=`pwd`"/$lib" ;;
+ esac
+ newdlprefiles="$newdlprefiles $abs"
+ done
+ dlprefiles="$newdlprefiles"
+ fi
+ $rm $output
+ # place dlname in correct position for cygwin
+ tdlname=$dlname
+ case $host,$output,$installed,$module,$dlname in
+ *cygwin*,*lai,yes,no,*.dll | *mingw*,*lai,yes,no,*.dll) tdlname=../bin/$dlname ;;
+ esac
+ $echo > $output "\
+# $outputname - a libtool library file
+# Generated by $PROGRAM - GNU $PACKAGE $VERSION$TIMESTAMP
+#
+# Please DO NOT delete this file!
+# It is necessary for linking the library.
+
+# The name that we can dlopen(3).
+dlname='$tdlname'
+
+# Names of this library.
+library_names='$library_names'
+
+# The name of the static archive.
+old_library='$old_library'
+
+# Libraries that this one depends upon.
+dependency_libs='$dependency_libs'
+
+# Version information for $libname.
+current=$current
+age=$age
+revision=$revision
+
+# Is this an already installed library?
+installed=$installed
+
+# Should we warn about portability when linking against -modules?
+shouldnotlink=$module
+
+# Files to dlopen/dlpreopen
+dlopen='$dlfiles'
+dlpreopen='$dlprefiles'
+
+# Directory that this library needs to be installed in:
+libdir='$install_libdir'"
+ if test "$installed" = no && test "$need_relink" = yes; then
+ $echo >> $output "\
+relink_command=\"$relink_command\""
+ fi
+ done
+ fi
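
A .la file like the one written above is just a shell fragment of variable assignments; the install mode further down in this script recovers libdir and friends by sourcing it. A tiny sketch of reading one (the path is hypothetical):

    la=./libfoo.la                 # hypothetical uninstalled libtool archive
    dlname=; library_names=; old_library=; libdir=
    . "$la"
    echo "runtime name: $dlname"
    echo "static name:  $old_library"
    echo "install dir:  $libdir"
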
+
+ # Do a symbolic link so that the libtool archive can be found in
+ # LD_LIBRARY_PATH before the program is installed.
+ $show "(cd $output_objdir && $rm $outputname && $LN_S ../$outputname $outputname)"
+ $run eval '(cd $output_objdir && $rm $outputname && $LN_S ../$outputname $outputname)' || exit $?
+ ;;
+ esac
+ exit 0
+ ;;
+
+ # libtool install mode
+ install)
+ modename="$modename: install"
+
+ # There may be an optional sh(1) argument at the beginning of
+ # install_prog (especially on Windows NT).
+ if test "$nonopt" = "$SHELL" || test "$nonopt" = /bin/sh ||
+ # Allow the use of GNU shtool's install command.
+ $echo "X$nonopt" | $Xsed | grep shtool > /dev/null; then
+ # Aesthetically quote it.
+ arg=`$echo "X$nonopt" | $Xsed -e "$sed_quote_subst"`
+ case $arg in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*)
+ arg="\"$arg\""
+ ;;
+ esac
+ install_prog="$arg "
+ arg="$1"
+ shift
+ else
+ install_prog=
+ arg="$nonopt"
+ fi
+
+ # The real first argument should be the name of the installation program.
+ # Aesthetically quote it.
+ arg=`$echo "X$arg" | $Xsed -e "$sed_quote_subst"`
+ case $arg in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*)
+ arg="\"$arg\""
+ ;;
+ esac
+ install_prog="$install_prog$arg"
+
+ # We need to accept at least all the BSD install flags.
+ dest=
+ files=
+ opts=
+ prev=
+ install_type=
+ isdir=no
+ stripme=
+ for arg
+ do
+ if test -n "$dest"; then
+ files="$files $dest"
+ dest="$arg"
+ continue
+ fi
+
+ case $arg in
+ -d) isdir=yes ;;
+ -f) prev="-f" ;;
+ -g) prev="-g" ;;
+ -m) prev="-m" ;;
+ -o) prev="-o" ;;
+ -s)
+ stripme=" -s"
+ continue
+ ;;
+ -*) ;;
+
+ *)
+ # If the previous option needed an argument, then skip it.
+ if test -n "$prev"; then
+ prev=
+ else
+ dest="$arg"
+ continue
+ fi
+ ;;
+ esac
+
+ # Aesthetically quote the argument.
+ arg=`$echo "X$arg" | $Xsed -e "$sed_quote_subst"`
+ case $arg in
+ *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*)
+ arg="\"$arg\""
+ ;;
+ esac
+ install_prog="$install_prog $arg"
+ done
+
+ if test -z "$install_prog"; then
+ $echo "$modename: you must specify an install program" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ if test -n "$prev"; then
+ $echo "$modename: the \`$prev' option requires an argument" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ if test -z "$files"; then
+ if test -z "$dest"; then
+ $echo "$modename: no file or destination specified" 1>&2
+ else
+ $echo "$modename: you must specify a destination" 1>&2
+ fi
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ # Strip any trailing slash from the destination.
+ dest=`$echo "X$dest" | $Xsed -e 's%/$%%'`
+
+ # Check to see that the destination is a directory.
+ test -d "$dest" && isdir=yes
+ if test "$isdir" = yes; then
+ destdir="$dest"
+ destname=
+ else
+ destdir=`$echo "X$dest" | $Xsed -e 's%/[^/]*$%%'`
+ test "X$destdir" = "X$dest" && destdir=.
+ destname=`$echo "X$dest" | $Xsed -e 's%^.*/%%'`
+
+ # Not a directory, so check to see that there is only one file specified.
+ set dummy $files
+ if test "$#" -gt 2; then
+ $echo "$modename: \`$dest' is not a directory" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+ fi
+ case $destdir in
+ [\\/]* | [A-Za-z]:[\\/]*) ;;
+ *)
+ for file in $files; do
+ case $file in
+ *.lo) ;;
+ *)
+ $echo "$modename: \`$destdir' must be an absolute directory name" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ ;;
+ esac
+ done
+ ;;
+ esac
+
+ # This variable tells wrapper scripts just to set variables rather
+ # than running their programs.
+ libtool_install_magic="$magic"
+
+ staticlibs=
+ future_libdirs=
+ current_libdirs=
+ for file in $files; do
+
+ # Do each installation.
+ case $file in
+ *.$libext)
+ # Do the static libraries later.
+ staticlibs="$staticlibs $file"
+ ;;
+
+ *.la)
+ # Check to see that this really is a libtool archive.
+ if (${SED} -e '2q' $file | grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then :
+ else
+ $echo "$modename: \`$file' is not a valid libtool archive" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ library_names=
+ old_library=
+ relink_command=
+ # If there is no directory component, then add one.
+ case $file in
+ */* | *\\*) . $file ;;
+ *) . ./$file ;;
+ esac
+
+ # Add the libdir to current_libdirs if it is the destination.
+ if test "X$destdir" = "X$libdir"; then
+ case "$current_libdirs " in
+ *" $libdir "*) ;;
+ *) current_libdirs="$current_libdirs $libdir" ;;
+ esac
+ else
+ # Note the libdir as a future libdir.
+ case "$future_libdirs " in
+ *" $libdir "*) ;;
+ *) future_libdirs="$future_libdirs $libdir" ;;
+ esac
+ fi
+
+ dir=`$echo "X$file" | $Xsed -e 's%/[^/]*$%%'`/
+ test "X$dir" = "X$file/" && dir=
+ dir="$dir$objdir"
+
+ if test -n "$relink_command"; then
+ # Determine the prefix the user has applied to our future dir.
+ inst_prefix_dir=`$echo "$destdir" | $SED "s%$libdir\$%%"`
+
+ # Don't allow the user to place us outside of our expected
+ # location b/c this prevents finding dependent libraries that
+ # are installed to the same prefix.
+ # At present, this check doesn't affect windows .dll's that
+ # are installed into $libdir/../bin (currently, that works fine)
+ # but it's something to keep an eye on.
+ if test "$inst_prefix_dir" = "$destdir"; then
+ $echo "$modename: error: cannot install \`$file' to a directory not ending in $libdir" 1>&2
+ exit 1
+ fi
+
+ if test -n "$inst_prefix_dir"; then
+ # Stick the inst_prefix_dir data into the link command.
+ relink_command=`$echo "$relink_command" | $SED "s%@inst_prefix_dir@%-inst-prefix-dir $inst_prefix_dir%"`
+ else
+ relink_command=`$echo "$relink_command" | $SED "s%@inst_prefix_dir@%%"`
+ fi
+
+ $echo "$modename: warning: relinking \`$file'" 1>&2
+ $show "$relink_command"
+ if $run eval "$relink_command"; then :
+ else
+ $echo "$modename: error: relink \`$file' with the above command before installing it" 1>&2
+ exit 1
+ fi
+ fi
+
+ # See the names of the shared library.
+ set dummy $library_names
+ if test -n "$2"; then
+ realname="$2"
+ shift
+ shift
+
+ srcname="$realname"
+ test -n "$relink_command" && srcname="$realname"T
+
+ # Install the shared library and build the symlinks.
+ $show "$install_prog $dir/$srcname $destdir/$realname"
+ $run eval "$install_prog $dir/$srcname $destdir/$realname" || exit $?
+ if test -n "$stripme" && test -n "$striplib"; then
+ $show "$striplib $destdir/$realname"
+ $run eval "$striplib $destdir/$realname" || exit $?
+ fi
+
+ if test "$#" -gt 0; then
+ # Delete the old symlinks, and create new ones.
+ for linkname
+ do
+ if test "$linkname" != "$realname"; then
+ $show "(cd $destdir && $rm $linkname && $LN_S $realname $linkname)"
+ $run eval "(cd $destdir && $rm $linkname && $LN_S $realname $linkname)"
+ fi
+ done
+ fi
+
+ # Do each command in the postinstall commands.
+ lib="$destdir/$realname"
+ cmds=$postinstall_cmds
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ done
+ IFS="$save_ifs"
+ fi
+
+ # Install the pseudo-library for information purposes.
+ name=`$echo "X$file" | $Xsed -e 's%^.*/%%'`
+ instname="$dir/$name"i
+ $show "$install_prog $instname $destdir/$name"
+ $run eval "$install_prog $instname $destdir/$name" || exit $?
+
+ # Maybe install the static library, too.
+ test -n "$old_library" && staticlibs="$staticlibs $dir/$old_library"
+ ;;
+
+ *.lo)
+ # Install (i.e. copy) a libtool object.
+
+ # Figure out destination file name, if it wasn't already specified.
+ if test -n "$destname"; then
+ destfile="$destdir/$destname"
+ else
+ destfile=`$echo "X$file" | $Xsed -e 's%^.*/%%'`
+ destfile="$destdir/$destfile"
+ fi
+
+ # Deduce the name of the destination old-style object file.
+ case $destfile in
+ *.lo)
+ staticdest=`$echo "X$destfile" | $Xsed -e "$lo2o"`
+ ;;
+ *.$objext)
+ staticdest="$destfile"
+ destfile=
+ ;;
+ *)
+ $echo "$modename: cannot copy a libtool object to \`$destfile'" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ ;;
+ esac
+
+ # Install the libtool object if requested.
+ if test -n "$destfile"; then
+ $show "$install_prog $file $destfile"
+ $run eval "$install_prog $file $destfile" || exit $?
+ fi
+
+ # Install the old object if enabled.
+ if test "$build_old_libs" = yes; then
+ # Deduce the name of the old-style object file.
+ staticobj=`$echo "X$file" | $Xsed -e "$lo2o"`
+
+ $show "$install_prog $staticobj $staticdest"
+ $run eval "$install_prog \$staticobj \$staticdest" || exit $?
+ fi
+ exit 0
+ ;;
+
+ *)
+ # Figure out destination file name, if it wasn't already specified.
+ if test -n "$destname"; then
+ destfile="$destdir/$destname"
+ else
+ destfile=`$echo "X$file" | $Xsed -e 's%^.*/%%'`
+ destfile="$destdir/$destfile"
+ fi
+
+ # If the file is missing, and there is a .exe on the end, strip it
+ # because it is most likely a libtool script we actually want to
+ # install
+ stripped_ext=""
+ case $file in
+ *.exe)
+ if test ! -f "$file"; then
+ file=`$echo $file|${SED} 's,.exe$,,'`
+ stripped_ext=".exe"
+ fi
+ ;;
+ esac
+
+ # Do a test to see if this is really a libtool program.
+ case $host in
+ *cygwin*|*mingw*)
+ wrapper=`$echo $file | ${SED} -e 's,.exe$,,'`
+ ;;
+ *)
+ wrapper=$file
+ ;;
+ esac
+ if (${SED} -e '4q' $wrapper | grep "^# Generated by .*$PACKAGE")>/dev/null 2>&1; then
+ notinst_deplibs=
+ relink_command=
+
+ # To ensure that "foo" is sourced, and not "foo.exe",
+ # finesse the cygwin/MSYS system by explicitly sourcing "foo."
+ # which disallows the automatic-append-.exe behavior.
+ case $build in
+ *cygwin* | *mingw*) wrapperdot=${wrapper}. ;;
+ *) wrapperdot=${wrapper} ;;
+ esac
+ # If there is no directory component, then add one.
+ case $file in
+ */* | *\\*) . ${wrapperdot} ;;
+ *) . ./${wrapperdot} ;;
+ esac
+
+ # Check the variables that should have been set.
+ if test -z "$notinst_deplibs"; then
+ $echo "$modename: invalid libtool wrapper script \`$wrapper'" 1>&2
+ exit 1
+ fi
+
+ finalize=yes
+ for lib in $notinst_deplibs; do
+ # Check to see that each library is installed.
+ libdir=
+ if test -f "$lib"; then
+ # If there is no directory component, then add one.
+ case $lib in
+ */* | *\\*) . $lib ;;
+ *) . ./$lib ;;
+ esac
+ fi
+ libfile="$libdir/"`$echo "X$lib" | $Xsed -e 's%^.*/%%g'` ### testsuite: skip nested quoting test
+ if test -n "$libdir" && test ! -f "$libfile"; then
+ $echo "$modename: warning: \`$lib' has not been installed in \`$libdir'" 1>&2
+ finalize=no
+ fi
+ done
+
+ relink_command=
+ # To ensure that "foo" is sourced, and not "foo.exe",
+ # finesse the cygwin/MSYS system by explicitly sourcing "foo."
+ # which disallows the automatic-append-.exe behavior.
+ case $build in
+ *cygwin* | *mingw*) wrapperdot=${wrapper}. ;;
+ *) wrapperdot=${wrapper} ;;
+ esac
+ # If there is no directory component, then add one.
+ case $file in
+ */* | *\\*) . ${wrapperdot} ;;
+ *) . ./${wrapperdot} ;;
+ esac
+
+ outputname=
+ if test "$fast_install" = no && test -n "$relink_command"; then
+ if test "$finalize" = yes && test -z "$run"; then
+ tmpdir="/tmp"
+ test -n "$TMPDIR" && tmpdir="$TMPDIR"
+ tmpdir="$tmpdir/libtool-$$"
+ if $mkdir "$tmpdir" && chmod 700 "$tmpdir"; then :
+ else
+ $echo "$modename: error: cannot create temporary directory \`$tmpdir'" 1>&2
+ continue
+ fi
+ file=`$echo "X$file$stripped_ext" | $Xsed -e 's%^.*/%%'`
+ outputname="$tmpdir/$file"
+ # Replace the output file specification.
+ relink_command=`$echo "X$relink_command" | $Xsed -e 's%@OUTPUT@%'"$outputname"'%g'`
+
+ $show "$relink_command"
+ if $run eval "$relink_command"; then :
+ else
+ $echo "$modename: error: relink \`$file' with the above command before installing it" 1>&2
+ ${rm}r "$tmpdir"
+ continue
+ fi
+ file="$outputname"
+ else
+ $echo "$modename: warning: cannot relink \`$file'" 1>&2
+ fi
+ else
+ # Install the binary that we compiled earlier.
+ file=`$echo "X$file$stripped_ext" | $Xsed -e "s%\([^/]*\)$%$objdir/\1%"`
+ fi
+ fi
+
+ # remove .exe since cygwin /usr/bin/install will append another
+ # one anyways
+ case $install_prog,$host in
+ */usr/bin/install*,*cygwin*)
+ case $file:$destfile in
+ *.exe:*.exe)
+ # this is ok
+ ;;
+ *.exe:*)
+ destfile=$destfile.exe
+ ;;
+ *:*.exe)
+ destfile=`$echo $destfile | ${SED} -e 's,.exe$,,'`
+ ;;
+ esac
+ ;;
+ esac
+ $show "$install_prog$stripme $file $destfile"
+ $run eval "$install_prog\$stripme \$file \$destfile" || exit $?
+ test -n "$outputname" && ${rm}r "$tmpdir"
+ ;;
+ esac
+ done
+
+ for file in $staticlibs; do
+ name=`$echo "X$file" | $Xsed -e 's%^.*/%%'`
+
+ # Set up the ranlib parameters.
+ oldlib="$destdir/$name"
+
+ $show "$install_prog $file $oldlib"
+ $run eval "$install_prog \$file \$oldlib" || exit $?
+
+ if test -n "$stripme" && test -n "$old_striplib"; then
+ $show "$old_striplib $oldlib"
+ $run eval "$old_striplib $oldlib" || exit $?
+ fi
+
+ # Do each command in the postinstall commands.
+ cmds=$old_postinstall_cmds
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd" || exit $?
+ done
+ IFS="$save_ifs"
+ done
+
+ if test -n "$future_libdirs"; then
+ $echo "$modename: warning: remember to run \`$progname --finish$future_libdirs'" 1>&2
+ fi
+
+ if test -n "$current_libdirs"; then
+ # Maybe just do a dry run.
+ test -n "$run" && current_libdirs=" -n$current_libdirs"
+ exec_cmd='$SHELL $0 $preserve_args --finish$current_libdirs'
+ else
+ exit 0
+ fi
+ ;;
+
+ # libtool finish mode
+ finish)
+ modename="$modename: finish"
+ libdirs="$nonopt"
+ admincmds=
+
+ if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
+ for dir
+ do
+ libdirs="$libdirs $dir"
+ done
+
+ for libdir in $libdirs; do
+ if test -n "$finish_cmds"; then
+ # Do each command in the finish commands.
+ cmds=$finish_cmds
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd" || admincmds="$admincmds
+ $cmd"
+ done
+ IFS="$save_ifs"
+ fi
+ if test -n "$finish_eval"; then
+ # Do the single finish_eval.
+ eval cmds=\"$finish_eval\"
+ $run eval "$cmds" || admincmds="$admincmds
+ $cmds"
+ fi
+ done
+ fi
+
+ # Exit here if they wanted silent mode.
+ test "$show" = : && exit 0
+
+ $echo "----------------------------------------------------------------------"
+ $echo "Libraries have been installed in:"
+ for libdir in $libdirs; do
+ $echo " $libdir"
+ done
+ $echo
+ $echo "If you ever happen to want to link against installed libraries"
+ $echo "in a given directory, LIBDIR, you must either use libtool, and"
+ $echo "specify the full pathname of the library, or use the \`-LLIBDIR'"
+ $echo "flag during linking and do at least one of the following:"
+ if test -n "$shlibpath_var"; then
+ $echo " - add LIBDIR to the \`$shlibpath_var' environment variable"
+ $echo " during execution"
+ fi
+ if test -n "$runpath_var"; then
+ $echo " - add LIBDIR to the \`$runpath_var' environment variable"
+ $echo " during linking"
+ fi
+ if test -n "$hardcode_libdir_flag_spec"; then
+ libdir=LIBDIR
+ eval flag=\"$hardcode_libdir_flag_spec\"
+
+ $echo " - use the \`$flag' linker flag"
+ fi
+ if test -n "$admincmds"; then
+ $echo " - have your system administrator run these commands:$admincmds"
+ fi
+ if test -f /etc/ld.so.conf; then
+ $echo " - have your system administrator add LIBDIR to \`/etc/ld.so.conf'"
+ fi
+ $echo
+ $echo "See any operating system documentation about shared libraries for"
+ $echo "more information, such as the ld(1) and ld.so(8) manual pages."
+ $echo "----------------------------------------------------------------------"
+ exit 0
+ ;;
+
+ # libtool execute mode
+ execute)
+ modename="$modename: execute"
+
+ # The first argument is the command name.
+ cmd="$nonopt"
+ if test -z "$cmd"; then
+ $echo "$modename: you must specify a COMMAND" 1>&2
+ $echo "$help"
+ exit 1
+ fi
+
+ # Handle -dlopen flags immediately.
+ for file in $execute_dlfiles; do
+ if test ! -f "$file"; then
+ $echo "$modename: \`$file' is not a file" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ dir=
+ case $file in
+ *.la)
+ # Check to see that this really is a libtool archive.
+ if (${SED} -e '2q' $file | grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then :
+ else
+ $echo "$modename: \`$lib' is not a valid libtool archive" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ # Read the libtool library.
+ dlname=
+ library_names=
+
+ # If there is no directory component, then add one.
+ case $file in
+ */* | *\\*) . $file ;;
+ *) . ./$file ;;
+ esac
+
+ # Skip this library if it cannot be dlopened.
+ if test -z "$dlname"; then
+ # Warn if it was a shared library.
+ test -n "$library_names" && $echo "$modename: warning: \`$file' was not linked with \`-export-dynamic'"
+ continue
+ fi
+
+ dir=`$echo "X$file" | $Xsed -e 's%/[^/]*$%%'`
+ test "X$dir" = "X$file" && dir=.
+
+ if test -f "$dir/$objdir/$dlname"; then
+ dir="$dir/$objdir"
+ else
+ $echo "$modename: cannot find \`$dlname' in \`$dir' or \`$dir/$objdir'" 1>&2
+ exit 1
+ fi
+ ;;
+
+ *.lo)
+ # Just add the directory containing the .lo file.
+ dir=`$echo "X$file" | $Xsed -e 's%/[^/]*$%%'`
+ test "X$dir" = "X$file" && dir=.
+ ;;
+
+ *)
+ $echo "$modename: warning \`-dlopen' is ignored for non-libtool libraries and objects" 1>&2
+ continue
+ ;;
+ esac
+
+ # Get the absolute pathname.
+ absdir=`cd "$dir" && pwd`
+ test -n "$absdir" && dir="$absdir"
+
+ # Now add the directory to shlibpath_var.
+ if eval "test -z \"\$$shlibpath_var\""; then
+ eval "$shlibpath_var=\"\$dir\""
+ else
+ eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\""
+ fi
+ done
+
+ # This variable tells wrapper scripts just to set shlibpath_var
+ # rather than running their programs.
+ libtool_execute_magic="$magic"
+
+ # Check if any of the arguments is a wrapper script.
+ args=
+ for file
+ do
+ case $file in
+ -*) ;;
+ *)
+ # Do a test to see if this is really a libtool program.
+ if (${SED} -e '4q' $file | grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then
+ # If there is no directory component, then add one.
+ case $file in
+ */* | *\\*) . $file ;;
+ *) . ./$file ;;
+ esac
+
+ # Transform arg to wrapped name.
+ file="$progdir/$program"
+ fi
+ ;;
+ esac
+ # Quote arguments (to preserve shell metacharacters).
+ file=`$echo "X$file" | $Xsed -e "$sed_quote_subst"`
+ args="$args \"$file\""
+ done
+
+ if test -z "$run"; then
+ if test -n "$shlibpath_var"; then
+ # Export the shlibpath_var.
+ eval "export $shlibpath_var"
+ fi
+
+ # Restore saved environment variables
+ if test "${save_LC_ALL+set}" = set; then
+ LC_ALL="$save_LC_ALL"; export LC_ALL
+ fi
+ if test "${save_LANG+set}" = set; then
+ LANG="$save_LANG"; export LANG
+ fi
+
+ # Now prepare to actually exec the command.
+ exec_cmd="\$cmd$args"
+ else
+ # Display what would be done.
+ if test -n "$shlibpath_var"; then
+ eval "\$echo \"\$shlibpath_var=\$$shlibpath_var\""
+ $echo "export $shlibpath_var"
+ fi
+ $echo "$cmd$args"
+ exit 0
+ fi
+ ;;
+
+ # libtool clean and uninstall mode
+ clean | uninstall)
+ modename="$modename: $mode"
+ rm="$nonopt"
+ files=
+ rmforce=
+ exit_status=0
+
+ # This variable tells wrapper scripts just to set variables rather
+ # than running their programs.
+ libtool_install_magic="$magic"
+
+ for arg
+ do
+ case $arg in
+ -f) rm="$rm $arg"; rmforce=yes ;;
+ -*) rm="$rm $arg" ;;
+ *) files="$files $arg" ;;
+ esac
+ done
+
+ if test -z "$rm"; then
+ $echo "$modename: you must specify an RM program" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ fi
+
+ rmdirs=
+
+ origobjdir="$objdir"
+ for file in $files; do
+ dir=`$echo "X$file" | $Xsed -e 's%/[^/]*$%%'`
+ if test "X$dir" = "X$file"; then
+ dir=.
+ objdir="$origobjdir"
+ else
+ objdir="$dir/$origobjdir"
+ fi
+ name=`$echo "X$file" | $Xsed -e 's%^.*/%%'`
+ test "$mode" = uninstall && objdir="$dir"
+
+ # Remember objdir for removal later, being careful to avoid duplicates
+ if test "$mode" = clean; then
+ case " $rmdirs " in
+ *" $objdir "*) ;;
+ *) rmdirs="$rmdirs $objdir" ;;
+ esac
+ fi
+
+ # Don't error if the file doesn't exist and rm -f was used.
+ if (test -L "$file") >/dev/null 2>&1 \
+ || (test -h "$file") >/dev/null 2>&1 \
+ || test -f "$file"; then
+ :
+ elif test -d "$file"; then
+ exit_status=1
+ continue
+ elif test "$rmforce" = yes; then
+ continue
+ fi
+
+ rmfiles="$file"
+
+ case $name in
+ *.la)
+ # Possibly a libtool archive, so verify it.
+ if (${SED} -e '2q' $file | grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then
+ . $dir/$name
+
+ # Delete the libtool libraries and symlinks.
+ for n in $library_names; do
+ rmfiles="$rmfiles $objdir/$n"
+ done
+ test -n "$old_library" && rmfiles="$rmfiles $objdir/$old_library"
+ test "$mode" = clean && rmfiles="$rmfiles $objdir/$name $objdir/${name}i"
+
+ if test "$mode" = uninstall; then
+ if test -n "$library_names"; then
+ # Do each command in the postuninstall commands.
+ cmds=$postuninstall_cmds
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd"
+ if test "$?" -ne 0 && test "$rmforce" != yes; then
+ exit_status=1
+ fi
+ done
+ IFS="$save_ifs"
+ fi
+
+ if test -n "$old_library"; then
+ # Do each command in the old_postuninstall commands.
+ cmds=$old_postuninstall_cmds
+ save_ifs="$IFS"; IFS='~'
+ for cmd in $cmds; do
+ IFS="$save_ifs"
+ eval cmd=\"$cmd\"
+ $show "$cmd"
+ $run eval "$cmd"
+ if test "$?" -ne 0 && test "$rmforce" != yes; then
+ exit_status=1
+ fi
+ done
+ IFS="$save_ifs"
+ fi
+ # FIXME: should reinstall the best remaining shared library.
+ fi
+ fi
+ ;;
+
+ *.lo)
+ # Possibly a libtool object, so verify it.
+ if (${SED} -e '2q' $file | grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then
+
+ # Read the .lo file
+ . $dir/$name
+
+ # Add PIC object to the list of files to remove.
+ if test -n "$pic_object" \
+ && test "$pic_object" != none; then
+ rmfiles="$rmfiles $dir/$pic_object"
+ fi
+
+ # Add non-PIC object to the list of files to remove.
+ if test -n "$non_pic_object" \
+ && test "$non_pic_object" != none; then
+ rmfiles="$rmfiles $dir/$non_pic_object"
+ fi
+ fi
+ ;;
+
+ *)
+ if test "$mode" = clean ; then
+ noexename=$name
+ case $file in
+ *.exe)
+ file=`$echo $file|${SED} 's,.exe$,,'`
+ noexename=`$echo $name|${SED} 's,.exe$,,'`
+ # $file with .exe has already been added to rmfiles,
+ # add $file without .exe
+ rmfiles="$rmfiles $file"
+ ;;
+ esac
+ # Do a test to see if this is a libtool program.
+ if (${SED} -e '4q' $file | grep "^# Generated by .*$PACKAGE") >/dev/null 2>&1; then
+ relink_command=
+ . $dir/$noexename
+
+ # note $name still contains .exe if it was in $file originally
+ # as does the version of $file that was added into $rmfiles
+ rmfiles="$rmfiles $objdir/$name $objdir/${name}S.${objext}"
+ if test "$fast_install" = yes && test -n "$relink_command"; then
+ rmfiles="$rmfiles $objdir/lt-$name"
+ fi
+ if test "X$noexename" != "X$name" ; then
+ rmfiles="$rmfiles $objdir/lt-${noexename}.c"
+ fi
+ fi
+ fi
+ ;;
+ esac
+ $show "$rm $rmfiles"
+ $run $rm $rmfiles || exit_status=1
+ done
+ objdir="$origobjdir"
+
+ # Try to remove the ${objdir}s in the directories where we deleted files
+ for dir in $rmdirs; do
+ if test -d "$dir"; then
+ $show "rmdir $dir"
+ $run rmdir $dir >/dev/null 2>&1
+ fi
+ done
+
+ exit $exit_status
+ ;;
+
+ "")
+ $echo "$modename: you must specify a MODE" 1>&2
+ $echo "$generic_help" 1>&2
+ exit 1
+ ;;
+ esac
+
+ if test -z "$exec_cmd"; then
+ $echo "$modename: invalid operation mode \`$mode'" 1>&2
+ $echo "$generic_help" 1>&2
+ exit 1
+ fi
+fi # test -z "$show_help"
+
+if test -n "$exec_cmd"; then
+ eval exec $exec_cmd
+ exit 1
+fi
+
+# We need to display help for each of the modes.
+case $mode in
+"") $echo \
+"Usage: $modename [OPTION]... [MODE-ARG]...
+
+Provide generalized library-building support services.
+
+ --config show all configuration variables
+ --debug enable verbose shell tracing
+-n, --dry-run display commands without modifying any files
+ --features display basic configuration information and exit
+ --finish same as \`--mode=finish'
+ --help display this help message and exit
+ --mode=MODE use operation mode MODE [default=inferred from MODE-ARGS]
+ --quiet same as \`--silent'
+ --silent don't print informational messages
+ --tag=TAG use configuration variables from tag TAG
+ --version print version information
+
+MODE must be one of the following:
+
+ clean remove files from the build directory
+ compile compile a source file into a libtool object
+ execute automatically set library path, then run a program
+ finish complete the installation of libtool libraries
+ install install libraries or executables
+ link create a library or an executable
+ uninstall remove libraries from an installed directory
+
+MODE-ARGS vary depending on the MODE. Try \`$modename --help --mode=MODE' for
+a more detailed description of MODE.
+
+Report bugs to <bug-libtool at gnu.org>."
+ exit 0
+ ;;
+
+clean)
+ $echo \
+"Usage: $modename [OPTION]... --mode=clean RM [RM-OPTION]... FILE...
+
+Remove files from the build directory.
+
+RM is the name of the program to use to delete files associated with each FILE
+(typically \`/bin/rm'). RM-OPTIONS are options (such as \`-f') to be passed
+to RM.
+
+If FILE is a libtool library, object or program, all the files associated
+with it are deleted. Otherwise, only FILE itself is deleted using RM."
+ ;;
+
+compile)
+ $echo \
+"Usage: $modename [OPTION]... --mode=compile COMPILE-COMMAND... SOURCEFILE
+
+Compile a source file into a libtool library object.
+
+This mode accepts the following additional options:
+
+ -o OUTPUT-FILE set the output file name to OUTPUT-FILE
+ -prefer-pic try to build PIC objects only
+ -prefer-non-pic try to build non-PIC objects only
+ -static always build a \`.o' file suitable for static linking
+
+COMPILE-COMMAND is a command to be used in creating a \`standard' object file
+from the given SOURCEFILE.
+
+The output file name is determined by removing the directory component from
+SOURCEFILE, then substituting the C source code suffix \`.c' with the
+library object suffix, \`.lo'."
+ ;;
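
A typical compile-mode invocation (illustrative, with a hypothetical foo.c) is:

  libtool --mode=compile gcc -c foo.c

which produces foo.lo along with the PIC and non-PIC objects kept under the objdir (.libs).
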
+
+execute)
+ $echo \
+"Usage: $modename [OPTION]... --mode=execute COMMAND [ARGS]...
+
+Automatically set library path, then run a program.
+
+This mode accepts the following additional options:
+
+ -dlopen FILE add the directory containing FILE to the library path
+
+This mode sets the library path environment variable according to \`-dlopen'
+flags.
+
+If any of the ARGS are libtool executable wrappers, then they are translated
+into their corresponding uninstalled binary, and any of their required library
+directories are added to the library path.
+
+Then, COMMAND is executed, with ARGS as arguments."
+ ;;
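
For example (hypothetical program and library names), to debug an uninstalled binary behind a libtool wrapper:

  libtool --mode=execute -dlopen libfoo.la gdb ./myprog

The -dlopen flag adds the directory holding libfoo.la's shared library to the library path before gdb is run on the real binary behind the myprog wrapper.
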
+
+finish)
+ $echo \
+"Usage: $modename [OPTION]... --mode=finish [LIBDIR]...
+
+Complete the installation of libtool libraries.
+
+Each LIBDIR is a directory that contains libtool libraries.
+
+The commands that this mode executes may require superuser privileges. Use
+the \`--dry-run' option if you just want to see what would be executed."
+ ;;
+
+install)
+ $echo \
+"Usage: $modename [OPTION]... --mode=install INSTALL-COMMAND...
+
+Install executables or libraries.
+
+INSTALL-COMMAND is the installation command. The first component should be
+either the \`install' or \`cp' program.
+
+The rest of the components are interpreted as arguments to that command (only
+BSD-compatible install options are recognized)."
+ ;;
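
A typical install-mode invocation (illustrative, hypothetical paths) is:

  libtool --mode=install /usr/bin/install -c libfoo.la /usr/local/lib/libfoo.la

which installs the shared library, its symlinks, the static archive and the .la file, and then finishes (or reminds you to finish) the affected library directories.
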
+
+link)
+ $echo \
+"Usage: $modename [OPTION]... --mode=link LINK-COMMAND...
+
+Link object files or libraries together to form another library, or to
+create an executable program.
+
+LINK-COMMAND is a command using the C compiler that you would use to create
+a program from several object files.
+
+The following components of LINK-COMMAND are treated specially:
+
+ -all-static do not do any dynamic linking at all
+ -avoid-version do not add a version suffix if possible
+ -dlopen FILE \`-dlpreopen' FILE if it cannot be dlopened at runtime
+ -dlpreopen FILE link in FILE and add its symbols to lt_preloaded_symbols
+ -export-dynamic allow symbols from OUTPUT-FILE to be resolved with dlsym(3)
+ -export-symbols SYMFILE
+ try to export only the symbols listed in SYMFILE
+ -export-symbols-regex REGEX
+ try to export only the symbols matching REGEX
+ -LLIBDIR search LIBDIR for required installed libraries
+ -lNAME OUTPUT-FILE requires the installed library libNAME
+ -module build a library that can be dlopened
+ -no-fast-install disable the fast-install mode
+ -no-install link a not-installable executable
+ -no-undefined declare that a library does not refer to external symbols
+ -o OUTPUT-FILE create OUTPUT-FILE from the specified objects
+ -objectlist FILE Use a list of object files found in FILE to specify objects
+ -precious-files-regex REGEX
+ don't remove output files matching REGEX
+ -release RELEASE specify package release information
+ -rpath LIBDIR the created library will eventually be installed in LIBDIR
+ -R[ ]LIBDIR add LIBDIR to the runtime path of programs and libraries
+ -static do not do any dynamic linking of libtool libraries
+ -version-info CURRENT[:REVISION[:AGE]]
+ specify library version info [each variable defaults to 0]
+
+All other options (arguments beginning with \`-') are ignored.
+
+Every other argument is treated as a filename. Files ending in \`.la' are
+treated as uninstalled libtool libraries, other files are standard or library
+object files.
+
+If the OUTPUT-FILE ends in \`.la', then a libtool library is created,
+only library objects (\`.lo' files) may be specified, and \`-rpath' is
+required, except when creating a convenience library.
+
+If OUTPUT-FILE ends in \`.a' or \`.lib', then a standard library is created
+using \`ar' and \`ranlib', or on Windows using \`lib'.
+
+If OUTPUT-FILE ends in \`.lo' or \`.${objext}', then a reloadable object file
+is created, otherwise an executable program is created."
+ ;;
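
A representative link-mode command (illustrative, hypothetical names) is:

  libtool --mode=link gcc -o libfoo.la foo.lo bar.lo -rpath /usr/local/lib -version-info 1:0:0

which creates libfoo.la together with the shared and static libraries it describes, to be installed later under /usr/local/lib.
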
+
+uninstall)
+ $echo \
+"Usage: $modename [OPTION]... --mode=uninstall RM [RM-OPTION]... FILE...
+
+Remove libraries from an installation directory.
+
+RM is the name of the program to use to delete files associated with each FILE
+(typically \`/bin/rm'). RM-OPTIONS are options (such as \`-f') to be passed
+to RM.
+
+If FILE is a libtool library, all the files associated with it are deleted.
+Otherwise, only FILE itself is deleted using RM."
+ ;;
+
+*)
+ $echo "$modename: invalid operation mode \`$mode'" 1>&2
+ $echo "$help" 1>&2
+ exit 1
+ ;;
+esac
+
+$echo
+$echo "Try \`$modename --help' for more information about other modes."
+
+exit 0
+
+# The TAGs below are defined such that we never get into a situation
+# in which we disable both kinds of libraries. Given conflicting
+# choices, we go for a static library, that is the most portable,
+# since we can't tell whether shared libraries were disabled because
+# the user asked for that or because the platform doesn't support
+# them. This is particularly important on AIX, because we don't
+# support having both static and shared libraries enabled at the same
+# time on that platform, so we default to a shared-only configuration.
+# If a disable-shared tag is given, we'll fallback to a static-only
+# configuration. But we'll never go from static-only to shared-only.
+
+# ### BEGIN LIBTOOL TAG CONFIG: disable-shared
+build_libtool_libs=no
+build_old_libs=yes
+# ### END LIBTOOL TAG CONFIG: disable-shared
+
+# ### BEGIN LIBTOOL TAG CONFIG: disable-static
+build_old_libs=`case $build_libtool_libs in yes) $echo no;; *) $echo yes;; esac`
+# ### END LIBTOOL TAG CONFIG: disable-static
+
+# Local Variables:
+# mode:shell-script
+# sh-indentation:2
+# End:
Added: freeswitch/trunk/libs/sqlite/main.mk
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/main.mk Tue Dec 19 15:11:50 2006
@@ -0,0 +1,618 @@
+###############################################################################
+# The following macros should be defined before this script is
+# invoked:
+#
+# TOP The toplevel directory of the source tree. This is the
+# directory that contains this "Makefile.in" and the
+# "configure.in" script.
+#
+# BCC C Compiler and options for use in building executables that
+# will run on the platform that is doing the build.
+#
+# USLEEP If the target operating system supports the "usleep()" system
+# call, then define the HAVE_USLEEP macro for all C modules.
+#
+# THREADSAFE If you want the SQLite library to be safe for use within a
+# multi-threaded program, then define the following macro
+# appropriately:
+#
+# THREADLIB Specify any extra linker options needed to make the library
+# thread safe
+#
+# OPTS Extra compiler command-line options.
+#
+# EXE The suffix to add to executable files. ".exe" for windows
+# and "" for Unix.
+#
+# TCC C Compiler and options for use in building executables that
+# will run on the target platform. This is usually the same
+# as BCC, unless you are cross-compiling.
+#
+# AR Tools used to build a static library.
+# RANLIB
+#
+# TCL_FLAGS Extra compiler options needed for programs that use the
+# TCL library.
+#
+# LIBTCL Linker options needed to link against the TCL library.
+#
+# READLINE_FLAGS Compiler options needed for programs that use the
+# readline() library.
+#
+# LIBREADLINE Linker options needed by programs that use the
+# readline() library.
+#
+# NAWK Nawk compatible awk program. Older (obsolete?) solaris
+# systems need this to avoid using the original AT&T AWK.
+#
+# Once the macros above are defined, the rest of this make script will
+# build the SQLite library and testing tools.
+################################################################################
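
As a rough sketch of how this file is meant to be driven (the values below are assumptions in the spirit of Makefile.linux-gcc, not part of main.mk itself), a wrapper makefile defines the macros and then includes this one:

  TOP = .
  BCC = gcc -g -O2
  TCC = gcc -g -O2
  OPTS = -DNDEBUG=1
  THREADSAFE = -DTHREADSAFE=1
  THREADLIB = -lpthread
  USLEEP = -DHAVE_USLEEP=1
  EXE =
  AR = ar cr
  RANLIB = ranlib
  TCL_FLAGS =
  LIBTCL = -ltcl
  READLINE_FLAGS = -DHAVE_READLINE=1
  LIBREADLINE = -lreadline
  NAWK = awk
  include $(TOP)/main.mk

Running plain "make" with such a wrapper builds libsqlite3.a and the sqlite3 shell, per the "all" target below.
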
+
+# This is how we compile
+#
+TCCX = $(TCC) $(OPTS) $(THREADSAFE) $(USLEEP) -I. -I$(TOP)/src
+
+# Object files for the SQLite library.
+#
+LIBOBJ+= alter.o analyze.o attach.o auth.o btree.o build.o \
+ callback.o complete.o date.o delete.o \
+ expr.o func.o hash.o insert.o loadext.o \
+ main.o opcodes.o os.o os_os2.o os_unix.o os_win.o \
+ pager.o parse.o pragma.o prepare.o printf.o random.o \
+ select.o table.o tclsqlite.o tokenize.o trigger.o \
+ update.o util.o vacuum.o \
+ vdbe.o vdbeapi.o vdbeaux.o vdbefifo.o vdbemem.o \
+ where.o utf.o legacy.o vtab.o
+
+# All of the source code files.
+#
+SRC = \
+ $(TOP)/src/alter.c \
+ $(TOP)/src/analyze.c \
+ $(TOP)/src/attach.c \
+ $(TOP)/src/auth.c \
+ $(TOP)/src/btree.c \
+ $(TOP)/src/btree.h \
+ $(TOP)/src/build.c \
+ $(TOP)/src/callback.c \
+ $(TOP)/src/complete.c \
+ $(TOP)/src/date.c \
+ $(TOP)/src/delete.c \
+ $(TOP)/src/expr.c \
+ $(TOP)/src/func.c \
+ $(TOP)/src/hash.c \
+ $(TOP)/src/hash.h \
+ $(TOP)/src/insert.c \
+ $(TOP)/src/legacy.c \
+ $(TOP)/src/loadext.c \
+ $(TOP)/src/main.c \
+ $(TOP)/src/os.c \
+ $(TOP)/src/os_os2.c \
+ $(TOP)/src/os_unix.c \
+ $(TOP)/src/os_win.c \
+ $(TOP)/src/pager.c \
+ $(TOP)/src/pager.h \
+ $(TOP)/src/parse.y \
+ $(TOP)/src/pragma.c \
+ $(TOP)/src/prepare.c \
+ $(TOP)/src/printf.c \
+ $(TOP)/src/random.c \
+ $(TOP)/src/select.c \
+ $(TOP)/src/shell.c \
+ $(TOP)/src/sqlite.h.in \
+ $(TOP)/src/sqliteInt.h \
+ $(TOP)/src/table.c \
+ $(TOP)/src/tclsqlite.c \
+ $(TOP)/src/tokenize.c \
+ $(TOP)/src/trigger.c \
+ $(TOP)/src/utf.c \
+ $(TOP)/src/update.c \
+ $(TOP)/src/util.c \
+ $(TOP)/src/vacuum.c \
+ $(TOP)/src/vdbe.c \
+ $(TOP)/src/vdbe.h \
+ $(TOP)/src/vdbeapi.c \
+ $(TOP)/src/vdbeaux.c \
+ $(TOP)/src/vdbefifo.c \
+ $(TOP)/src/vdbemem.c \
+ $(TOP)/src/vdbeInt.h \
+ $(TOP)/src/vtab.c \
+ $(TOP)/src/where.c
+
+# Source code for extensions
+#
+SRC += \
+ $(TOP)/ext/fts1/fts1.c \
+ $(TOP)/ext/fts1/fts1.h \
+ $(TOP)/ext/fts1/fts1_hash.c \
+ $(TOP)/ext/fts1/fts1_hash.h \
+ $(TOP)/ext/fts1/fts1_porter.c \
+ $(TOP)/ext/fts1/fts1_tokenizer.h \
+ $(TOP)/ext/fts1/fts1_tokenizer1.c
+
+
+# Source code to the test files.
+#
+TESTSRC = \
+ $(TOP)/src/btree.c \
+ $(TOP)/src/date.c \
+ $(TOP)/src/func.c \
+ $(TOP)/src/main.c \
+ $(TOP)/src/os.c \
+ $(TOP)/src/os_os2.c \
+ $(TOP)/src/os_unix.c \
+ $(TOP)/src/os_win.c \
+ $(TOP)/src/pager.c \
+ $(TOP)/src/pragma.c \
+ $(TOP)/src/printf.c \
+ $(TOP)/src/test1.c \
+ $(TOP)/src/test2.c \
+ $(TOP)/src/test3.c \
+ $(TOP)/src/test4.c \
+ $(TOP)/src/test5.c \
+ $(TOP)/src/test6.c \
+ $(TOP)/src/test7.c \
+ $(TOP)/src/test8.c \
+ $(TOP)/src/test_autoext.c \
+ $(TOP)/src/test_async.c \
+ $(TOP)/src/test_md5.c \
+ $(TOP)/src/test_schema.c \
+ $(TOP)/src/test_server.c \
+ $(TOP)/src/test_tclvar.c \
+ $(TOP)/src/utf.c \
+ $(TOP)/src/util.c \
+ $(TOP)/src/vdbe.c \
+ $(TOP)/src/vdbeaux.c \
+ $(TOP)/src/where.c
+
+# Header files used by all library source files.
+#
+HDR = \
+ sqlite3.h \
+ $(TOP)/src/btree.h \
+ $(TOP)/src/hash.h \
+ opcodes.h \
+ $(TOP)/src/os.h \
+ $(TOP)/src/os_common.h \
+ $(TOP)/src/sqlite3ext.h \
+ $(TOP)/src/sqliteInt.h \
+ $(TOP)/src/vdbe.h \
+ parse.h
+
+# Header files used by extensions
+#
+HDR += \
+ $(TOP)/ext/fts1/fts1.h \
+ $(TOP)/ext/fts1/fts1_hash.h \
+ $(TOP)/ext/fts1/fts1_tokenizer.h
+
+
+# Header files used by the VDBE submodule
+#
+VDBEHDR = \
+ $(HDR) \
+ $(TOP)/src/vdbeInt.h
+
+# This is the default Makefile target. The objects listed here
+# are what get built when you type just "make" with no arguments.
+#
+all: sqlite3.h libsqlite3.a sqlite3$(EXE)
+
+# Generate the file "last_change" which contains the date of change
+# of the most recently modified source code file
+#
+last_change: $(SRC)
+ cat $(SRC) | grep '$$Id: ' | sort -k 5 | tail -1 \
+ | $(NAWK) '{print $$5,$$6}' >last_change
+
+libsqlite3.a: $(LIBOBJ)
+ $(AR) libsqlite3.a $(LIBOBJ)
+ $(RANLIB) libsqlite3.a
+
+sqlite3$(EXE): $(TOP)/src/shell.c libsqlite3.a sqlite3.h
+ $(TCCX) $(READLINE_FLAGS) -o sqlite3$(EXE) $(TOP)/src/shell.c \
+ libsqlite3.a $(LIBREADLINE) $(TLIBS) $(THREADLIB)
+
+objects: $(LIBOBJ_ORIG)
+
+# This target creates a directory named "tsrc" and fills it with
+# copies of all of the C source code and header files needed to
+# build on the target system. Some of the C source code and header
+# files are automatically generated. This target takes care of
+# all that automatic generation.
+#
+target_source: $(SRC) $(VDBEHDR) opcodes.c keywordhash.h
+ rm -rf tsrc
+ mkdir tsrc
+ cp $(SRC) $(VDBEHDR) tsrc
+ rm tsrc/sqlite.h.in tsrc/parse.y
+ cp parse.c opcodes.c keywordhash.h tsrc
+
+# Rules to build the LEMON compiler generator
+#
+lemon: $(TOP)/tool/lemon.c $(TOP)/tool/lempar.c
+ $(BCC) -o lemon $(TOP)/tool/lemon.c
+ cp $(TOP)/tool/lempar.c .
+
+# Rules to build individual files
+#
+alter.o: $(TOP)/src/alter.c $(HDR)
+ $(TCCX) -c $(TOP)/src/alter.c
+
+analyze.o: $(TOP)/src/analyze.c $(HDR)
+ $(TCCX) -c $(TOP)/src/analyze.c
+
+attach.o: $(TOP)/src/attach.c $(HDR)
+ $(TCCX) -c $(TOP)/src/attach.c
+
+auth.o: $(TOP)/src/auth.c $(HDR)
+ $(TCCX) -c $(TOP)/src/auth.c
+
+btree.o: $(TOP)/src/btree.c $(HDR) $(TOP)/src/pager.h
+ $(TCCX) -c $(TOP)/src/btree.c
+
+build.o: $(TOP)/src/build.c $(HDR)
+ $(TCCX) -c $(TOP)/src/build.c
+
+callback.o: $(TOP)/src/callback.c $(HDR)
+ $(TCCX) -c $(TOP)/src/callback.c
+
+complete.o: $(TOP)/src/complete.c $(HDR)
+ $(TCCX) -c $(TOP)/src/complete.c
+
+date.o: $(TOP)/src/date.c $(HDR)
+ $(TCCX) -c $(TOP)/src/date.c
+
+delete.o: $(TOP)/src/delete.c $(HDR)
+ $(TCCX) -c $(TOP)/src/delete.c
+
+expr.o: $(TOP)/src/expr.c $(HDR)
+ $(TCCX) -c $(TOP)/src/expr.c
+
+func.o: $(TOP)/src/func.c $(HDR)
+ $(TCCX) -c $(TOP)/src/func.c
+
+hash.o: $(TOP)/src/hash.c $(HDR)
+ $(TCCX) -c $(TOP)/src/hash.c
+
+insert.o: $(TOP)/src/insert.c $(HDR)
+ $(TCCX) -c $(TOP)/src/insert.c
+
+legacy.o: $(TOP)/src/legacy.c $(HDR)
+ $(TCCX) -c $(TOP)/src/legacy.c
+
+loadext.o: $(TOP)/src/loadext.c $(HDR)
+ $(TCCX) -c $(TOP)/src/loadext.c
+
+main.o: $(TOP)/src/main.c $(HDR)
+ $(TCCX) -c $(TOP)/src/main.c
+
+pager.o: $(TOP)/src/pager.c $(HDR) $(TOP)/src/pager.h
+ $(TCCX) -c $(TOP)/src/pager.c
+
+opcodes.o: opcodes.c
+ $(TCCX) -c opcodes.c
+
+opcodes.c: opcodes.h $(TOP)/mkopcodec.awk
+ sort -n -b -k 3 opcodes.h | $(NAWK) -f $(TOP)/mkopcodec.awk >opcodes.c
+
+opcodes.h: parse.h $(TOP)/src/vdbe.c $(TOP)/mkopcodeh.awk
+ cat parse.h $(TOP)/src/vdbe.c | $(NAWK) -f $(TOP)/mkopcodeh.awk >opcodes.h
+
+os.o: $(TOP)/src/os.c $(HDR)
+ $(TCCX) -c $(TOP)/src/os.c
+
+os_os2.o: $(TOP)/src/os_os2.c $(HDR)
+ $(TCCX) -c $(TOP)/src/os_os2.c
+
+os_unix.o: $(TOP)/src/os_unix.c $(HDR)
+ $(TCCX) -c $(TOP)/src/os_unix.c
+
+os_win.o: $(TOP)/src/os_win.c $(HDR)
+ $(TCCX) -c $(TOP)/src/os_win.c
+
+parse.o: parse.c $(HDR)
+ $(TCCX) -c parse.c
+
+parse.h: parse.c
+
+parse.c: $(TOP)/src/parse.y lemon $(TOP)/addopcodes.awk
+ cp $(TOP)/src/parse.y .
+ ./lemon $(OPTS) parse.y
+ mv parse.h parse.h.temp
+ awk -f $(TOP)/addopcodes.awk parse.h.temp >parse.h
+
+pragma.o: $(TOP)/src/pragma.c $(HDR)
+ $(TCCX) $(TCL_FLAGS) -c $(TOP)/src/pragma.c
+
+prepare.o: $(TOP)/src/prepare.c $(HDR)
+ $(TCCX) $(TCL_FLAGS) -c $(TOP)/src/prepare.c
+
+printf.o: $(TOP)/src/printf.c $(HDR)
+ $(TCCX) $(TCL_FLAGS) -c $(TOP)/src/printf.c
+
+random.o: $(TOP)/src/random.c $(HDR)
+ $(TCCX) -c $(TOP)/src/random.c
+
+select.o: $(TOP)/src/select.c $(HDR)
+ $(TCCX) -c $(TOP)/src/select.c
+
+sqlite3.h: $(TOP)/src/sqlite.h.in
+ sed -e s/--VERS--/`cat ${TOP}/VERSION`/ \
+ -e s/--VERSION-NUMBER--/`cat ${TOP}/VERSION | sed 's/[^0-9]/ /g' | $(NAWK) '{printf "%d%03d%03d",$$1,$$2,$$3}'`/ \
+ $(TOP)/src/sqlite.h.in >sqlite3.h
+
+table.o: $(TOP)/src/table.c $(HDR)
+ $(TCCX) -c $(TOP)/src/table.c
+
+tclsqlite.o: $(TOP)/src/tclsqlite.c $(HDR)
+ $(TCCX) $(TCL_FLAGS) -c $(TOP)/src/tclsqlite.c
+
+tokenize.o: $(TOP)/src/tokenize.c keywordhash.h $(HDR)
+ $(TCCX) -c $(TOP)/src/tokenize.c
+
+keywordhash.h: $(TOP)/tool/mkkeywordhash.c
+ $(BCC) -o mkkeywordhash $(OPTS) $(TOP)/tool/mkkeywordhash.c
+ ./mkkeywordhash >keywordhash.h
+
+trigger.o: $(TOP)/src/trigger.c $(HDR)
+ $(TCCX) -c $(TOP)/src/trigger.c
+
+update.o: $(TOP)/src/update.c $(HDR)
+ $(TCCX) -c $(TOP)/src/update.c
+
+utf.o: $(TOP)/src/utf.c $(HDR)
+ $(TCCX) -c $(TOP)/src/utf.c
+
+util.o: $(TOP)/src/util.c $(HDR)
+ $(TCCX) -c $(TOP)/src/util.c
+
+vacuum.o: $(TOP)/src/vacuum.c $(HDR)
+ $(TCCX) -c $(TOP)/src/vacuum.c
+
+vdbe.o: $(TOP)/src/vdbe.c $(VDBEHDR)
+ $(TCCX) -c $(TOP)/src/vdbe.c
+
+vdbeapi.o: $(TOP)/src/vdbeapi.c $(VDBEHDR)
+ $(TCCX) -c $(TOP)/src/vdbeapi.c
+
+vdbeaux.o: $(TOP)/src/vdbeaux.c $(VDBEHDR)
+ $(TCCX) -c $(TOP)/src/vdbeaux.c
+
+vdbefifo.o: $(TOP)/src/vdbefifo.c $(VDBEHDR)
+ $(TCCX) -c $(TOP)/src/vdbefifo.c
+
+vdbemem.o: $(TOP)/src/vdbemem.c $(VDBEHDR)
+ $(TCCX) -c $(TOP)/src/vdbemem.c
+
+vtab.o: $(TOP)/src/vtab.c $(VDBEHDR)
+ $(TCCX) -c $(TOP)/src/vtab.c
+
+where.o: $(TOP)/src/where.c $(HDR)
+ $(TCCX) -c $(TOP)/src/where.c
+
+# Rules for building test programs and for running tests
+#
+tclsqlite3: $(TOP)/src/tclsqlite.c libsqlite3.a
+ $(TCCX) $(TCL_FLAGS) -DTCLSH=1 -o tclsqlite3 \
+ $(TOP)/src/tclsqlite.c libsqlite3.a $(LIBTCL) $(THREADLIB)
+
+testfixture$(EXE): $(TOP)/src/tclsqlite.c libsqlite3.a $(TESTSRC)
+ $(TCCX) $(TCL_FLAGS) -DTCLSH=1 -DSQLITE_TEST=1 -DSQLITE_CRASH_TEST=1 \
+ -DSQLITE_SERVER=1 -o testfixture$(EXE) \
+ $(TESTSRC) $(TOP)/src/tclsqlite.c \
+ libsqlite3.a $(LIBTCL) $(THREADLIB)
+
+fulltest: testfixture$(EXE) sqlite3$(EXE)
+ ./testfixture$(EXE) $(TOP)/test/all.test
+
+test: testfixture$(EXE) sqlite3$(EXE)
+ ./testfixture$(EXE) $(TOP)/test/quick.test
+
+sqlite3_analyzer$(EXE): $(TOP)/src/tclsqlite.c libsqlite3.a $(TESTSRC) \
+ $(TOP)/tool/spaceanal.tcl
+ sed \
+ -e '/^#/d' \
+ -e 's,\\,\\\\,g' \
+ -e 's,",\\",g' \
+ -e 's,^,",' \
+ -e 's,$$,\\n",' \
+ $(TOP)/tool/spaceanal.tcl >spaceanal_tcl.h
+ $(TCCX) $(TCL_FLAGS) -DTCLSH=2 -DSQLITE_TEST=1 -DSQLITE_DEBUG=1 -o \
+ sqlite3_analyzer$(EXE) $(TESTSRC) $(TOP)/src/tclsqlite.c \
+ libsqlite3.a $(LIBTCL) $(THREADLIB)
+
+TEST_EXTENSION = $(SHPREFIX)testloadext.$(SO)
+$(TEST_EXTENSION): $(TOP)/src/test_loadext.c
+ $(MKSHLIB) $(TOP)/src/test_loadext.c -o $(TEST_EXTENSION)
+
+extensiontest: testfixture$(EXE) $(TEST_EXTENSION)
+ ./testfixture$(EXE) $(TOP)/test/loadext.test
+
+# Rules used to build documentation
+#
+arch.html: $(TOP)/www/arch.tcl
+ tclsh $(TOP)/www/arch.tcl >arch.html
+
+autoinc.html: $(TOP)/www/autoinc.tcl
+ tclsh $(TOP)/www/autoinc.tcl >autoinc.html
+
+c_interface.html: $(TOP)/www/c_interface.tcl
+ tclsh $(TOP)/www/c_interface.tcl >c_interface.html
+
+capi3.html: $(TOP)/www/capi3.tcl
+ tclsh $(TOP)/www/capi3.tcl >capi3.html
+
+capi3ref.html: $(TOP)/www/capi3ref.tcl
+ tclsh $(TOP)/www/capi3ref.tcl >capi3ref.html
+
+changes.html: $(TOP)/www/changes.tcl
+ tclsh $(TOP)/www/changes.tcl >changes.html
+
+compile.html: $(TOP)/www/compile.tcl
+ tclsh $(TOP)/www/compile.tcl >compile.html
+
+copyright.html: $(TOP)/www/copyright.tcl
+ tclsh $(TOP)/www/copyright.tcl >copyright.html
+
+copyright-release.html: $(TOP)/www/copyright-release.html
+ cp $(TOP)/www/copyright-release.html .
+
+copyright-release.pdf: $(TOP)/www/copyright-release.pdf
+ cp $(TOP)/www/copyright-release.pdf .
+
+common.tcl: $(TOP)/www/common.tcl
+ cp $(TOP)/www/common.tcl .
+
+conflict.html: $(TOP)/www/conflict.tcl
+ tclsh $(TOP)/www/conflict.tcl >conflict.html
+
+datatypes.html: $(TOP)/www/datatypes.tcl
+ tclsh $(TOP)/www/datatypes.tcl >datatypes.html
+
+datatype3.html: $(TOP)/www/datatype3.tcl
+ tclsh $(TOP)/www/datatype3.tcl >datatype3.html
+
+different.html: $(TOP)/www/different.tcl
+ tclsh $(TOP)/www/different.tcl >different.html
+
+docs.html: $(TOP)/www/docs.tcl
+ tclsh $(TOP)/www/docs.tcl >docs.html
+
+download.html: $(TOP)/www/download.tcl
+ mkdir -p doc
+ tclsh $(TOP)/www/download.tcl >download.html
+
+faq.html: $(TOP)/www/faq.tcl
+ tclsh $(TOP)/www/faq.tcl >faq.html
+
+fileformat.html: $(TOP)/www/fileformat.tcl
+ tclsh $(TOP)/www/fileformat.tcl >fileformat.html
+
+formatchng.html: $(TOP)/www/formatchng.tcl
+ tclsh $(TOP)/www/formatchng.tcl >formatchng.html
+
+index.html: $(TOP)/www/index.tcl last_change
+ tclsh $(TOP)/www/index.tcl >index.html
+
+lang.html: $(TOP)/www/lang.tcl
+ tclsh $(TOP)/www/lang.tcl doc >lang.html
+
+pragma.html: $(TOP)/www/pragma.tcl
+ tclsh $(TOP)/www/pragma.tcl >pragma.html
+
+lockingv3.html: $(TOP)/www/lockingv3.tcl
+ tclsh $(TOP)/www/lockingv3.tcl >lockingv3.html
+
+sharedcache.html: $(TOP)/www/sharedcache.tcl
+ tclsh $(TOP)/www/sharedcache.tcl >sharedcache.html
+
+mingw.html: $(TOP)/www/mingw.tcl
+ tclsh $(TOP)/www/mingw.tcl >mingw.html
+
+nulls.html: $(TOP)/www/nulls.tcl
+ tclsh $(TOP)/www/nulls.tcl >nulls.html
+
+oldnews.html: $(TOP)/www/oldnews.tcl
+ tclsh $(TOP)/www/oldnews.tcl >oldnews.html
+
+omitted.html: $(TOP)/www/omitted.tcl
+ tclsh $(TOP)/www/omitted.tcl >omitted.html
+
+opcode.html: $(TOP)/www/opcode.tcl $(TOP)/src/vdbe.c
+ tclsh $(TOP)/www/opcode.tcl $(TOP)/src/vdbe.c >opcode.html
+
+optimizer.html: $(TOP)/www/optimizer.tcl
+ tclsh $(TOP)/www/optimizer.tcl >optimizer.html
+
+optoverview.html: $(TOP)/www/optoverview.tcl
+ tclsh $(TOP)/www/optoverview.tcl >optoverview.html
+
+quickstart.html: $(TOP)/www/quickstart.tcl
+ tclsh $(TOP)/www/quickstart.tcl >quickstart.html
+
+speed.html: $(TOP)/www/speed.tcl
+ tclsh $(TOP)/www/speed.tcl >speed.html
+
+sqlite.html: $(TOP)/www/sqlite.tcl
+ tclsh $(TOP)/www/sqlite.tcl >sqlite.html
+
+support.html: $(TOP)/www/support.tcl
+ tclsh $(TOP)/www/support.tcl >support.html
+
+tclsqlite.html: $(TOP)/www/tclsqlite.tcl
+ tclsh $(TOP)/www/tclsqlite.tcl >tclsqlite.html
+
+vdbe.html: $(TOP)/www/vdbe.tcl
+ tclsh $(TOP)/www/vdbe.tcl >vdbe.html
+
+version3.html: $(TOP)/www/version3.tcl
+ tclsh $(TOP)/www/version3.tcl >version3.html
+
+whentouse.html: $(TOP)/www/whentouse.tcl
+ tclsh $(TOP)/www/whentouse.tcl >whentouse.html
+
+
+# Files to be published on the website.
+#
+DOC = \
+ arch.html \
+ autoinc.html \
+ c_interface.html \
+ capi3.html \
+ capi3ref.html \
+ changes.html \
+ compile.html \
+ copyright.html \
+ copyright-release.html \
+ copyright-release.pdf \
+ conflict.html \
+ datatypes.html \
+ datatype3.html \
+ different.html \
+ docs.html \
+ download.html \
+ faq.html \
+ fileformat.html \
+ formatchng.html \
+ index.html \
+ lang.html \
+ lockingv3.html \
+ mingw.html \
+ nulls.html \
+ oldnews.html \
+ omitted.html \
+ opcode.html \
+ optimizer.html \
+ optoverview.html \
+ pragma.html \
+ quickstart.html \
+ sharedcache.html \
+ speed.html \
+ sqlite.html \
+ support.html \
+ tclsqlite.html \
+ vdbe.html \
+ version3.html \
+ whentouse.html
+
+doc: common.tcl $(DOC)
+ mkdir -p doc
+ mv $(DOC) doc
+ cp $(TOP)/www/*.gif $(TOP)/art/*.gif doc
+
+# Standard install and cleanup targets
+#
+install: sqlite3 libsqlite3.a sqlite3.h
+ mv sqlite3 /usr/bin
+ mv libsqlite3.a /usr/lib
+ mv sqlite3.h /usr/include
+
+clean:
+ rm -f *.o sqlite3 libsqlite3.a sqlite3.h opcodes.*
+ rm -f lemon lempar.c parse.* sqlite*.tar.gz mkkeywordhash keywordhash.h
+ rm -f $(PUBLISH)
+ rm -f *.da *.bb *.bbg gmon.out
+ rm -rf tsrc
+ rm -f testloadext.dll libtestloadext.so
Added: freeswitch/trunk/libs/sqlite/mkdll.sh
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/mkdll.sh Tue Dec 19 15:11:50 2006
@@ -0,0 +1,50 @@
+#!/bin/sh
+#
+# This script is used to compile SQLite into a DLL.
+#
+# Two separate DLLs are generated. "sqlite3.dll" is the core
+# library. "tclsqlite3.dll" contains the TCL bindings and is the
+# library that is loaded into TCL in order to run SQLite.
+#
+make target_source
+cd tsrc
+PATH=$PATH:/opt/mingw/bin
+TCLDIR=/home/drh/tcltk/846/win/846win
+TCLSTUBLIB=$TCLDIR/libtcl84stub.a
+OPTS='-DUSE_TCL_STUBS=1 -DNDEBUG=1 -DTHREADSAFE=1 -DBUILD_sqlite=1'
+CC="i386-mingw32msvc-gcc -O2 $OPTS -I. -I$TCLDIR"
+NM="i386-mingw32msvc-nm"
+rm shell.c
+for i in *.c; do
+ CMD="$CC -c $i"
+ echo $CMD
+ $CMD
+done
+echo 'EXPORTS' >tclsqlite3.def
+$NM *.o | grep ' T ' >temp1
+grep '_Init$' temp1 >temp2
+grep '_SafeInit$' temp1 >>temp2
+grep ' T _sqlite3_' temp1 >>temp2
+echo 'EXPORTS' >tclsqlite3.def
+sed 's/^.* T _//' temp2 | sort | uniq >>tclsqlite3.def
+i386-mingw32msvc-dllwrap \
+ --def tclsqlite3.def -v --export-all \
+ --driver-name i386-mingw32msvc-gcc \
+ --dlltool-name i386-mingw32msvc-dlltool \
+ --as i386-mingw32msvc-as \
+ --target i386-mingw32 \
+ -dllname tclsqlite3.dll -lmsvcrt *.o $TCLSTUBLIB
+#i386-mingw32msvc-strip tclsqlite3.dll
+rm tclsqlite.o
+$NM *.o | grep ' T ' >temp1
+echo 'EXPORTS' >sqlite3.def
+grep ' _sqlite3_' temp1 | sed 's/^.* _//' >>sqlite3.def
+i386-mingw32msvc-dllwrap \
+ --def sqlite3.def -v --export-all \
+ --driver-name i386-mingw32msvc-gcc \
+ --dlltool-name i386-mingw32msvc-dlltool \
+ --as i386-mingw32msvc-as \
+ --target i386-mingw32 \
+ -dllname sqlite3.dll -lmsvcrt *.o
+#i386-mingw32msvc-strip sqlite3.dll
+cd ..
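
For reference, the nm/grep/sed pipelines above just build ordinary .def export lists; a hypothetical excerpt of the generated sqlite3.def would look like:

  EXPORTS
  sqlite3_close
  sqlite3_exec
  sqlite3_open
  sqlite3_prepare

tclsqlite3.def is produced the same way but also keeps the *_Init and *_SafeInit entry points needed by TCL.
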
Added: freeswitch/trunk/libs/sqlite/mkopcodec.awk
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/mkopcodec.awk Tue Dec 19 15:11:50 2006
@@ -0,0 +1,28 @@
+#!/usr/bin/awk -f
+#
+# This AWK script scans the opcodes.h file (which is itself generated by
+# another awk script) and uses the information gleaned to create the
+# opcodes.c source file.
+#
+# Opcodes.c contains strings which are the symbolic names for the various
+# opcodes used by the VDBE. These strings are used when disassembling a
+# VDBE program during tracing or as a result of the EXPLAIN keyword.
+#
+BEGIN {
+ print "/* Automatically generated. Do not edit */"
+ print "/* See the mkopcodec.awk script for details. */"
+ printf "#if !defined(SQLITE_OMIT_EXPLAIN)"
+ printf " || !defined(NDEBUG)"
+ printf " || defined(VDBE_PROFILE)"
+ print " || defined(SQLITE_DEBUG)"
+ print "const char *const sqlite3OpcodeNames[] = { \"?\","
+}
+/define OP_/ {
+ sub("OP_","",$2)
+ i++
+ printf " /* %3d */ \"%s\",\n", $3, $2
+}
+END {
+ print "};"
+ print "#endif"
+}
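
A worked example of the transformation (the opcode number is an assumed value): an opcodes.h line such as

  #define OP_Halt                17

is emitted into opcodes.c by the rule above as

  /*  17 */ "Halt",

so sqlite3OpcodeNames[] ends up holding the printable name for each opcode number.
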
Added: freeswitch/trunk/libs/sqlite/mkopcodeh.awk
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/mkopcodeh.awk Tue Dec 19 15:11:50 2006
@@ -0,0 +1,127 @@
+#!/usr/bin/awk -f
+#
+# Generate the file opcodes.h.
+#
+# This AWK script scans a concatenation of the parse.h output file from the
+# parser and the vdbe.c source file in order to generate the opcode numbers
+# for all opcodes.
+#
+# The lines of the vdbe.c that we are interested in are of the form:
+#
+# case OP_aaaa: /* same as TK_bbbbb */
+#
+# The TK_ comment is optional. If it is present, then the value assigned to
+# the OP_ is the same as the TK_ value. If missing, the OP_ value is assigned
+# a small integer that is different from every other OP_ value.
+#
+# We go to the trouble of making some OP_ values the same as TK_ values
+# as an optimization. During parsing, things like expression operators
+# are coded with TK_ values such as TK_ADD, TK_DIVIDE, and so forth. Later
+# during code generation, we need to generate corresponding opcodes like
+# OP_Add and OP_Divide. By making TK_ADD==OP_Add and TK_DIVIDE==OP_Divide,
+# code to translate from one to the other is avoided. This makes the
+# code generator run (infinitesimally) faster and more importantly it makes
+# the library footprint smaller.
+#
+# This script also scans for lines of the form:
+#
+# case OP_aaaa: /* no-push */
+#
+# When the no-push comment is found on an opcode, it means that that
+# opcode does not leave a result on the stack. By identifying which
+# opcodes leave results on the stack it is possible to determine a
+# much smaller upper bound on the size of the stack. This allows
+# a smaller stack to be allocated, which is important to embedded
+# systems with limited memory space. This script generates a series
+# of "NOPUSH_MASK" defines that contain bitmaps of opcodes that leave
+# results on the stack. The NOPUSH_MASK defines are used in vdbeaux.c
+# to help determine the maximum stack size.
+#
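
A small worked example of the numbering rule (all values assumed for illustration): if parse.h defines

  #define TK_ADD    70

and vdbe.c contains

  case OP_Add:       /* same as TK_ADD */

then the generated opcodes.h assigns OP_Add the value 70, while an opcode without the comment simply receives the next unused small integer. For the masks, a no-push opcode numbered n sets bit n%16 of NOPUSH_MASK_(n/16); e.g. a no-push opcode numbered 70 would contribute bit 6 (0x0040) to NOPUSH_MASK_4.
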
+
+
+# Remember the TK_ values from the parse.h file
+/^#define TK_/ {
+ tk[$2] = $3
+}
+
+# Scan for "case OP_aaaa:" lines in the vdbe.c file
+/^case OP_/ {
+ name = $2
+ sub(/:/,"",name)
+ sub("\r","",name)
+ op[name] = -1
+ for(i=3; i<NF; i++){
+ if($i=="same" && $(i+1)=="as"){
+ sym = $(i+2)
+ sub(/,/,"",sym)
+ op[name] = tk[sym]
+ used[op[name]] = 1
+ sameas[op[name]] = sym
+ }
+ if($i=="no-push"){
+ nopush[name] = 1
+ }
+ }
+}
+
+# Assign numbers to all opcodes and output the result.
+END {
+ cnt = 0
+ max = 0
+ print "/* Automatically generated. Do not edit */"
+ print "/* See the mkopcodeh.awk script for details */"
+ for(name in op){
+ if( op[name]<0 ){
+ cnt++
+ while( used[cnt] ) cnt++
+ op[name] = cnt
+ }
+ used[op[name]] = 1;
+ if( op[name]>max ) max = op[name]
+ printf "#define %-25s %15d", name, op[name]
+ if( sameas[op[name]] ) {
+ printf " /* same as %-12s*/", sameas[op[name]]
+ }
+ printf "\n"
+
+ }
+ seenUnused = 0;
+ for(i=1; i<max; i++){
+ if( !used[i] ){
+ if( !seenUnused ){
+ printf "\n/* The following opcode values are never used */\n"
+ seenUnused = 1
+ }
+ printf "#define %-25s %15d\n", sprintf( "OP_NotUsed_%-3d", i ), i
+ }
+ }
+
+ # Generate the 10 16-bit bitmasks used by function opcodeUsesStack()
+ # in vdbeaux.c. See comments in that function for details.
+ #
+ nopush[0] = 0 # 0..15
+ nopush[1] = 0 # 16..31
+ nopush[2] = 0 # 32..47
+ nopush[3] = 0 # 48..63
+ nopush[4] = 0 # 64..79
+ nopush[5] = 0 # 80..95
+ nopush[6] = 0 # 96..111
+ nopush[7] = 0 # 112..127
+ nopush[8] = 0 # 128..143
+ nopush[9] = 0 # 144..159
+ for(name in op){
+ if( nopush[name] ){
+ n = op[name]
+ j = n%16
+ i = ((n - j)/16)
+ nopush[i] = nopush[i] + (2^j)
+ }
+ }
+ printf "\n"
+ print "/* Opcodes that are guaranteed to never push a value onto the stack"
+ print "** contain a 1 their corresponding position of the following mask"
+ print "** set. See the opcodeNoPush() function in vdbeaux.c */"
+ for(i=0; i<10; i++){
+ printf "#define NOPUSH_MASK_%d 0x%04x\n", i, nopush[i]
+ }
+}
Added: freeswitch/trunk/libs/sqlite/mkso.sh
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/mkso.sh Tue Dec 19 15:11:50 2006
@@ -0,0 +1,29 @@
+#!/bin/sh
+#
+# This script is used to compile SQLite into a shared library on Linux.
+#
+# Two separate shared libraries are generated. "sqlite3.so" is the core
+# library. "tclsqlite3.so" contains the TCL bindings and is the
+# library that is loaded into TCL in order to run SQLite.
+#
+make target_source
+cd tsrc
+rm shell.c
+TCLDIR=/home/drh/tcltk/846/linux/846linux
+TCLSTUBLIB=$TCLDIR/libtclstub8.4g.a
+OPTS='-DUSE_TCL_STUBS=1 -DNDEBUG=1 -DHAVE_DLOPEN=1'
+for i in *.c; do
+ if test $i != 'keywordhash.c'; then
+ CMD="cc -fPIC $OPTS -O2 -I. -I$TCLDIR -c $i"
+ echo $CMD
+ $CMD
+ fi
+done
+echo gcc -shared *.o $TCLSTUBLIB -o tclsqlite3.so
+gcc -shared *.o $TCLSTUBLIB -o tclsqlite3.so
+strip tclsqlite3.so
+rm tclsqlite.c tclsqlite.o
+echo gcc -shared *.o -o sqlite3.so
+gcc -shared *.o -o sqlite3.so
+strip sqlite3.so
+cd ..
Added: freeswitch/trunk/libs/sqlite/publish.sh
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/publish.sh Tue Dec 19 15:11:50 2006
@@ -0,0 +1,117 @@
+#!/bin/sh
+#
+# This script is used to compile SQLite and all its documentation and
+# ship everything up to the SQLite website. This script will only work
+# on the system "zadok" at the Hwaci offices. But others might find
+# the script useful as an example.
+#
+
+# Set srcdir to the name of the directory that contains the publish.sh
+# script.
+#
+srcdir=`echo "$0" | sed 's%\(^.*\)/[^/][^/]*$%\1%'`
+
+# Get the makefile.
+#
+cp $srcdir/Makefile.linux-gcc ./Makefile
+chmod +x $srcdir/install-sh
+
+# Get the current version number - needed to help build filenames
+#
+VERS=`cat $srcdir/VERSION`
+VERSW=`sed 's/\./_/g' $srcdir/VERSION`
+
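
For example, if $srcdir/VERSION contained 3.3.8 (an assumed value), VERS would be 3.3.8 and VERSW would be 3_3_8, so the artifacts built below come out as sqlite3-3.3.8.bin.gz, sqlite-source-3_3_8.zip, sqlite-3.3.8.tar.gz, and so on.
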
+# Start by building an sqlite shell for linux.
+#
+make clean
+make sqlite3
+strip sqlite3
+mv sqlite3 sqlite3-$VERS.bin
+gzip sqlite3-$VERS.bin
+mv sqlite3-$VERS.bin.gz doc
+
+# Build a source archive useful for windows.
+#
+make target_source
+cd tsrc
+zip ../doc/sqlite-source-$VERSW.zip *
+cd ..
+
+# Build the sqlite.so and tclsqlite.so shared libraries
+# under Linux
+#
+. $srcdir/mkso.sh
+cd tsrc
+mv tclsqlite3.so tclsqlite-$VERS.so
+gzip tclsqlite-$VERS.so
+mv tclsqlite-$VERS.so.gz ../doc
+mv sqlite3.so sqlite-$VERS.so
+gzip sqlite-$VERS.so
+mv sqlite-$VERS.so.gz ../doc
+cd ..
+
+# Build the tclsqlite3.dll and sqlite3.dll shared libraries.
+#
+. $srcdir/mkdll.sh
+cd tsrc
+echo zip ../doc/tclsqlite-$VERSW.zip tclsqlite3.dll
+zip ../doc/tclsqlite-$VERSW.zip tclsqlite3.dll
+echo zip ../doc/sqlitedll-$VERSW.zip sqlite3.dll sqlite3.def
+zip ../doc/sqlitedll-$VERSW.zip sqlite3.dll sqlite3.def
+cd ..
+
+# Build the sqlite.exe executable for windows.
+#
+make target_source
+cd tsrc
+rm tclsqlite.c
+OPTS='-DSTATIC_BUILD=1 -DNDEBUG=1'
+i386-mingw32msvc-gcc -O2 $OPTS -I. -I$TCLDIR *.c -o sqlite3.exe
+zip ../doc/sqlite-$VERSW.zip sqlite3.exe
+cd ..
+
+# Construct a tarball of the source tree
+#
+ORIGIN=`pwd`
+cd $srcdir
+cd ..
+mv sqlite sqlite-$VERS
+EXCLUDE=`find sqlite-$VERS -print | grep CVS | sed 's,^, --exclude ,'`
+tar czf $ORIGIN/doc/sqlite-$VERS.tar.gz $EXCLUDE sqlite-$VERS
+mv sqlite-$VERS sqlite
+cd $ORIGIN
+
+#
+# Build RPMS (binary) and Source RPM
+#
+
+# Make sure we are properly setup to build RPMs
+#
+echo "%HOME %{expand:%%(cd; pwd)}" > $HOME/.rpmmacros
+echo "%_topdir %{HOME}/rpm" >> $HOME/.rpmmacros
+mkdir $HOME/rpm
+mkdir $HOME/rpm/BUILD
+mkdir $HOME/rpm/SOURCES
+mkdir $HOME/rpm/RPMS
+mkdir $HOME/rpm/SRPMS
+mkdir $HOME/rpm/SPECS
+
+# create the spec file from the template
+sed s/SQLITE_VERSION/$VERS/g $srcdir/spec.template > $HOME/rpm/SPECS/sqlite.spec
+
+# copy the source tarball to the rpm directory
+cp doc/sqlite-$VERS.tar.gz $HOME/rpm/SOURCES/.
+
+# build all the rpms
+rpm -ba $HOME/rpm/SPECS/sqlite.spec > rpm-$VERS.log 2>&1
+
+# copy the RPMs into the build directory.
+mv $HOME/rpm/RPMS/i386/sqlite*-$VERS*.rpm doc
+mv $HOME/rpm/SRPMS/sqlite-$VERS*.rpm doc
+
+# Build the website
+#
+#cp $srcdir/../historical/* doc
+make doc
+cd doc
+chmod 644 *.gz
Added: freeswitch/trunk/libs/sqlite/spec.template
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/spec.template Tue Dec 19 15:11:50 2006
@@ -0,0 +1,62 @@
+%define name sqlite
+%define version SQLITE_VERSION
+%define release 1
+
+Name: %{name}
+Summary: SQLite is a C library that implements an embeddable SQL database engine
+Version: %{version}
+Release: %{release}
+Source: %{name}-%{version}.tar.gz
+Group: System/Libraries
+URL: http://www.hwaci.com/sw/sqlite/
+License: Public Domain
+BuildRoot: %{_tmppath}/%{name}-%{version}-root
+
+%description
+SQLite is a C library that implements an embeddable SQL database engine.
+Programs that link with the SQLite library can have SQL database access
+without running a separate RDBMS process. The distribution comes with a
+standalone command-line access program (sqlite) that can be used to
+administer an SQLite database and which serves as an example of how to
+use the SQLite library.
+
+%package -n %{name}-devel
+Summary: Header files and libraries for developing apps which will use sqlite
+Group: Development/C
+Requires: %{name} = %{version}-%{release}
+
+%description -n %{name}-devel
+The sqlite-devel package contains the header files and libraries needed
+to develop programs that use the sqlite database library.
+
+%prep
+%setup -q -n %{name}
+
+%build
+CFLAGS="%optflags -DNDEBUG=1" CXXFLAGS="%optflags -DNDEBUG=1" ./configure --prefix=%{_prefix}
+
+make
+make doc
+
+%install
+install -d $RPM_BUILD_ROOT/%{_prefix}
+install -d $RPM_BUILD_ROOT/%{_prefix}/bin
+install -d $RPM_BUILD_ROOT/%{_prefix}/include
+install -d $RPM_BUILD_ROOT/%{_prefix}/lib
+make install prefix=$RPM_BUILD_ROOT/%{_prefix}
+
+%clean
+rm -fr $RPM_BUILD_ROOT
+
+%files
+%defattr(-, root, root)
+%{_libdir}/*.so*
+%{_bindir}/*
+
+%files -n %{name}-devel
+%defattr(-, root, root)
+%{_libdir}/pkgconfig/sqlite3.pc
+%{_libdir}/*.a
+%{_libdir}/*.la
+%{_includedir}/*
+%doc doc/*
Added: freeswitch/trunk/libs/sqlite/sqlite.pc.in
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/sqlite.pc.in Tue Dec 19 15:11:50 2006
@@ -0,0 +1,12 @@
+# Package Information for pkg-config
+
+prefix=@prefix@
+exec_prefix=@exec_prefix@
+libdir=@libdir@
+includedir=@includedir@
+
+Name: SQLite
+Description: SQL database engine
+Version: @VERSION@
+Libs: -L${libdir} -lsqlite
+Cflags: -I${includedir}
Added: freeswitch/trunk/libs/sqlite/sqlite3.1
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/sqlite3.1 Tue Dec 19 15:11:50 2006
@@ -0,0 +1,229 @@
+.\" Hey, EMACS: -*- nroff -*-
+.\" First parameter, NAME, should be all caps
+.\" Second parameter, SECTION, should be 1-8, maybe w/ subsection
+.\" other parameters are allowed: see man(7), man(1)
+.TH SQLITE3 1 "Mon Apr 15 23:49:17 2002"
+.\" Please adjust this date whenever revising the manpage.
+.\"
+.\" Some roff macros, for reference:
+.\" .nh disable hyphenation
+.\" .hy enable hyphenation
+.\" .ad l left justify
+.\" .ad b justify to both left and right margins
+.\" .nf disable filling
+.\" .fi enable filling
+.\" .br insert line break
+.\" .sp <n> insert n+1 empty lines
+.\" for manpage-specific macros, see man(7)
+.SH NAME
+.B sqlite3
+\- A command line interface for SQLite version 3
+
+.SH SYNOPSIS
+.B sqlite3
+.RI [ options ]
+.RI [ databasefile ]
+.RI [ SQL ]
+
+.SH SUMMARY
+.PP
+.B sqlite3
+is a terminal-based front-end to the SQLite library that can evaluate
+queries interactively and display the results in multiple formats.
+.B sqlite3
+can also be used within shell scripts and other applications to provide
+batch processing features.
+
+.SH DESCRIPTION
+To start a
+.B sqlite3
+interactive session, invoke the
+.B sqlite3
+command and optionally provide the name of a database file. If the
+database file does not exist, it will be created. If the database file
+does exist, it will be opened.
+
+For example, to create a new database file named "mydata.db", create
+a table named "memos" and insert a couple of records into that table:
+.sp
+$
+.B sqlite3 mydata.db
+.br
+SQLite version 3.1.3
+.br
+Enter ".help" for instructions
+.br
+sqlite>
+.B create table memos(text, priority INTEGER);
+.br
+sqlite>
+.B insert into memos values('deliver project description', 10);
+.br
+sqlite>
+.B insert into memos values('lunch with Christine', 100);
+.br
+sqlite>
+.B select * from memos;
+.br
+deliver project description|10
+.br
+lunch with Christine|100
+.br
+sqlite>
+.sp
+
+If no database name is supplied, the ATTACH SQL command can be used
+to attach to existing database files or to create new ones. ATTACH can also
+be used to attach to multiple databases within the same interactive
+session. This is useful for migrating data between databases,
+possibly changing the schema along the way.
+
+Optionally, a SQL statement or set of SQL statements can be supplied as
+a single argument. Multiple statements should be separated by
+semi-colons.
+
+For example:
+.sp
+$
+.B sqlite3 -line mydata.db 'select * from memos where priority > 20;'
+.br
+ text = lunch with Christine
+.br
+priority = 100
+.br
+.sp
+
+.SS SQLITE META-COMMANDS
+.PP
+The interactive interpreter offers a set of meta-commands that can be
+used to control the output format, examine the currently attached
+database files, or perform administrative operations upon the
+attached databases (such as rebuilding indices). Meta-commands are
+always prefixed with a dot (.).
+
+A list of available meta-commands can be viewed at any time by issuing
+the '.help' command. For example:
+.sp
+sqlite>
+.B .help
+.nf
+.cc |
+.databases List names and files of attached databases
+.dump ?TABLE? ... Dump the database in an SQL text format
+.echo ON|OFF Turn command echo on or off
+.exit Exit this program
+.explain ON|OFF Turn output mode suitable for EXPLAIN on or off.
+.header(s) ON|OFF Turn display of headers on or off
+.help Show this message
+.import FILE TABLE Import data from FILE into TABLE
+.indices TABLE Show names of all indices on TABLE
+.mode MODE ?TABLE? Set output mode where MODE is one of:
+ csv Comma-separated values
+ column Left-aligned columns. (See .width)
+ html HTML <table> code
+ insert SQL insert statements for TABLE
+ line One value per line
+ list Values delimited by .separator string
+ tabs Tab-separated values
+ tcl TCL list elements
+.nullvalue STRING Print STRING in place of NULL values
+.output FILENAME Send output to FILENAME
+.output stdout Send output to the screen
+.prompt MAIN CONTINUE Replace the standard prompts
+.quit Exit this program
+.read FILENAME Execute SQL in FILENAME
+.schema ?TABLE? Show the CREATE statements
+.separator STRING Change separator used by output mode and .import
+.show Show the current values for various settings
+.tables ?PATTERN? List names of tables matching a LIKE pattern
+.timeout MS Try opening locked tables for MS milliseconds
+.width NUM NUM ... Set column widths for "column" mode
+sqlite>
+|cc .
+.sp
+.fi
+
+.SH OPTIONS
+.B sqlite3
+has the following options:
+.TP
+.BI \-init\ file
+Read and execute commands from
+.I file
+, which can contain a mix of SQL statements and meta-commands.
+.TP
+.B \-echo
+Print commands before execution.
+.TP
+.B \-[no]header
+Turn headers on or off.
+.TP
+.B \-column
+Query results will be displayed in a table-like form, using
+whitespace characters to separate the columns and align the
+output.
+.TP
+.B \-html
+Query results will be output as simple HTML tables.
+.TP
+.B \-line
+Query results will be displayed with one value per line, rows
+separated by a blank line. Designed to be easily parsed by
+scripts or other programs.
+.TP
+.B \-list
+Query results will be displayed with the separator (|, by default)
+character between each field value. This is the default output mode.
+.TP
+.BI \-separator\ separator
+Set output field separator. Default is '|'.
+.TP
+.BI \-nullvalue\ string
+Set string used to represent NULL values. Default is ''
+(empty string).
+.TP
+.B \-version
+Show SQLite version.
+.TP
+.B \-help
+Show help on options and exit.
+
+
+.SH INIT FILE
+.B sqlite3
+reads an initialization file to set the configuration of the
+interactive environment. Throughout initialization, any previously
+specified setting can be overridden. The sequence of initialization is
+as follows:
+
+o The default configuration is established as follows:
+
+.sp
+.nf
+.cc |
+mode = LIST
+separator = "|"
+main prompt = "sqlite> "
+continue prompt = " ...> "
+|cc .
+.sp
+.fi
+
+o If the file
+.B ~/.sqliterc
+can be found in the user's home directory, it is
+read and processed. It should generally only contain meta-commands.
+
+o If the -init option is present, the specified file is processed.
+
+o All other command line options are processed.
+
+.SH SEE ALSO
+http://www.sqlite.org/
+.br
+The sqlite-doc package
+.SH AUTHOR
+This manual page was originally written by Andreas Rottmann
+<rotty at debian.org>, for the Debian GNU/Linux system (but may be used
+by others). It was subsequently revised by Bill Bumgarner <bbum at mac.com>.
Added: freeswitch/trunk/libs/sqlite/sqlite3.pc.in
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/sqlite3.pc.in Tue Dec 19 15:11:50 2006
@@ -0,0 +1,12 @@
+# Package Information for pkg-config
+
+prefix=@prefix@
+exec_prefix=@exec_prefix@
+libdir=@libdir@
+includedir=@includedir@
+
+Name: SQLite
+Description: SQL database engine
+Version: @VERSION@
+Libs: -L${libdir} -lsqlite3
+Cflags: -I${includedir}
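
For illustration, a minimal client of the library described by this pkg-config
file might look like the sketch below (hypothetical example, not part of the
commit; it would typically be built with the Cflags and Libs values above,
e.g. obtained via pkg-config --cflags --libs sqlite3):

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void){
    sqlite3 *db;
    char *zErr = 0;
    /* Open (or create) a database file. */
    if( sqlite3_open("test.db", &db)!=SQLITE_OK ){
      fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
      return 1;
    }
    /* Create a table and insert one row; errors come back through zErr. */
    if( sqlite3_exec(db,
          "CREATE TABLE IF NOT EXISTS t(x); INSERT INTO t VALUES(1);",
          0, 0, &zErr)!=SQLITE_OK ){
      fprintf(stderr, "exec failed: %s\n", zErr);
      sqlite3_free(zErr);
    }
    sqlite3_close(db);
    return 0;
  }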
Added: freeswitch/trunk/libs/sqlite/src/alter.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/alter.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,575 @@
+/*
+** 2005 February 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains C code routines that are used to generate VDBE code
+** that implements the ALTER TABLE command.
+**
+** $Id: alter.c,v 1.22 2006/09/08 12:27:37 drh Exp $
+*/
+#include "sqliteInt.h"
+#include <ctype.h>
+
+/*
+** The code in this file only exists if we are not omitting the
+** ALTER TABLE logic from the build.
+*/
+#ifndef SQLITE_OMIT_ALTERTABLE
+
+
+/*
+** This function is used by SQL generated to implement the
+** ALTER TABLE command. The first argument is the text of a CREATE TABLE or
+** CREATE INDEX command. The second is the new table name. The table name in
+** the CREATE TABLE or CREATE INDEX statement is replaced with the second
+** argument and the result returned. Examples:
+**
+** sqlite_rename_table('CREATE TABLE abc(a, b, c)', 'def')
+** -> 'CREATE TABLE def(a, b, c)'
+**
+** sqlite_rename_table('CREATE INDEX i ON abc(a)', 'def')
+** -> 'CREATE INDEX i ON def(a)'
+*/
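+/* For illustration (hypothetical table names, sketch only):
+** sqlite3AlterRenameTable() below invokes this function from a nested
+** UPDATE on the schema table, roughly of the form
+**
+**     UPDATE sqlite_master SET sql = sqlite_rename_table(sql, 'def')
+**      WHERE tbl_name = 'abc';
+*/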
+static void renameTableFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ unsigned char const *zSql = sqlite3_value_text(argv[0]);
+ unsigned char const *zTableName = sqlite3_value_text(argv[1]);
+
+ int token;
+ Token tname;
+ unsigned char const *zCsr = zSql;
+ int len = 0;
+ char *zRet;
+
+ /* The principle used to locate the table name in the CREATE TABLE
+ ** statement is that the table name is the first token that is immediately
+ ** followed by a left parenthesis - TK_LP.
+ */
+ if( zSql ){
+ do {
+ /* Store the token that zCsr points to in tname. */
+ tname.z = zCsr;
+ tname.n = len;
+
+ /* Advance zCsr to the next token. Store that token type in 'token',
+ ** and its length in 'len' (to be used next iteration of this loop).
+ */
+ do {
+ zCsr += len;
+ len = sqlite3GetToken(zCsr, &token);
+ } while( token==TK_SPACE );
+ assert( len>0 );
+ } while( token!=TK_LP );
+
+ zRet = sqlite3MPrintf("%.*s%Q%s", tname.z - zSql, zSql,
+ zTableName, tname.z+tname.n);
+ sqlite3_result_text(context, zRet, -1, sqlite3FreeX);
+ }
+}
+
+#ifndef SQLITE_OMIT_TRIGGER
+/* This function is used by SQL generated to implement the
+** ALTER TABLE command. The first argument is the text of a CREATE TRIGGER
+** statement. The second is the new table name. The table name in the CREATE
+** TRIGGER statement is replaced with the second argument and the result
+** returned. This is analogous to renameTableFunc() above, except for CREATE
+** TRIGGER, not CREATE INDEX and CREATE TABLE.
+*/
+static void renameTriggerFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ unsigned char const *zSql = sqlite3_value_text(argv[0]);
+ unsigned char const *zTableName = sqlite3_value_text(argv[1]);
+
+ int token;
+ Token tname;
+ int dist = 3;
+ unsigned char const *zCsr = zSql;
+ int len = 0;
+ char *zRet;
+
+ /* The principle used to locate the table name in the CREATE TRIGGER
+ ** statement is that the table name is the first token that is immediately
+ ** preceded by either TK_ON or TK_DOT and immediately followed by one
+ ** of TK_WHEN, TK_BEGIN or TK_FOR.
+ */
+ if( zSql ){
+ do {
+ /* Store the token that zCsr points to in tname. */
+ tname.z = zCsr;
+ tname.n = len;
+
+ /* Advance zCsr to the next token. Store that token type in 'token',
+ ** and its length in 'len' (to be used next iteration of this loop).
+ */
+ do {
+ zCsr += len;
+ len = sqlite3GetToken(zCsr, &token);
+ }while( token==TK_SPACE );
+ assert( len>0 );
+
+ /* Variable 'dist' stores the number of tokens read since the most
+ ** recent TK_DOT or TK_ON. This means that when a WHEN, FOR or BEGIN
+ ** token is read and 'dist' equals 2, the condition stated above
+ ** is met.
+ **
+ ** Note that ON cannot be a database, table or column name, so
+ ** there is no need to worry about syntax like
+ ** "CREATE TRIGGER ... ON ON.ON BEGIN ..." etc.
+ */
+ dist++;
+ if( token==TK_DOT || token==TK_ON ){
+ dist = 0;
+ }
+ } while( dist!=2 || (token!=TK_WHEN && token!=TK_FOR && token!=TK_BEGIN) );
+
+ /* Variable tname now contains the token that is the old table-name
+ ** in the CREATE TRIGGER statement.
+ */
+ zRet = sqlite3MPrintf("%.*s%Q%s", tname.z - zSql, zSql,
+ zTableName, tname.z+tname.n);
+ sqlite3_result_text(context, zRet, -1, sqlite3FreeX);
+ }
+}
+#endif /* !SQLITE_OMIT_TRIGGER */
+
+/*
+** Register built-in functions used to help implement ALTER TABLE
+*/
+void sqlite3AlterFunctions(sqlite3 *db){
+ static const struct {
+ char *zName;
+ signed char nArg;
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value **);
+ } aFuncs[] = {
+ { "sqlite_rename_table", 2, renameTableFunc},
+#ifndef SQLITE_OMIT_TRIGGER
+ { "sqlite_rename_trigger", 2, renameTriggerFunc},
+#endif
+ };
+ int i;
+
+ for(i=0; i<sizeof(aFuncs)/sizeof(aFuncs[0]); i++){
+ sqlite3CreateFunc(db, aFuncs[i].zName, aFuncs[i].nArg,
+ SQLITE_UTF8, 0, aFuncs[i].xFunc, 0, 0);
+ }
+}
+
+/*
+** Generate the text of a WHERE expression which can be used to select all
+** temporary triggers on table pTab from the sqlite_temp_master table. If
+** table pTab has no temporary triggers, or is itself stored in the
+** temporary database, NULL is returned.
+*/
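+/* For illustration (hypothetical trigger names): a table with two TEMP
+** triggers named trg1 and trg2 yields the string
+** "name='trg1' OR name='trg2'", which the callers below use either in an
+** UPDATE of sqlite_temp_master or as the WHERE clause for OP_ParseSchema.
+*/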
+static char *whereTempTriggers(Parse *pParse, Table *pTab){
+ Trigger *pTrig;
+ char *zWhere = 0;
+ char *tmp = 0;
+ const Schema *pTempSchema = pParse->db->aDb[1].pSchema; /* Temp db schema */
+
+ /* If the table is not located in the temp-db (in which case NULL is
+ ** returned), loop through the table's list of triggers. For each trigger
+ ** that is not part of the temp-db schema, add a clause to the WHERE
+ ** expression being built up in zWhere.
+ */
+ if( pTab->pSchema!=pTempSchema ){
+ for( pTrig=pTab->pTrigger; pTrig; pTrig=pTrig->pNext ){
+ if( pTrig->pSchema==pTempSchema ){
+ if( !zWhere ){
+ zWhere = sqlite3MPrintf("name=%Q", pTrig->name);
+ }else{
+ tmp = zWhere;
+ zWhere = sqlite3MPrintf("%s OR name=%Q", zWhere, pTrig->name);
+ sqliteFree(tmp);
+ }
+ }
+ }
+ }
+ return zWhere;
+}
+
+/*
+** Generate code to drop and reload the internal representation of table
+** pTab from the database, including triggers and temporary triggers.
+** Argument zName is the name of the table in the database schema at
+** the time the generated code is executed. This can be different from
+** pTab->zName if this function is being called to code part of an
+** "ALTER TABLE RENAME TO" statement.
+*/
+static void reloadTableSchema(Parse *pParse, Table *pTab, const char *zName){
+ Vdbe *v;
+ char *zWhere;
+ int iDb; /* Index of database containing pTab */
+#ifndef SQLITE_OMIT_TRIGGER
+ Trigger *pTrig;
+#endif
+
+ v = sqlite3GetVdbe(pParse);
+ if( !v ) return;
+ iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ assert( iDb>=0 );
+
+#ifndef SQLITE_OMIT_TRIGGER
+ /* Drop any table triggers from the internal schema. */
+ for(pTrig=pTab->pTrigger; pTrig; pTrig=pTrig->pNext){
+ int iTrigDb = sqlite3SchemaToIndex(pParse->db, pTrig->pSchema);
+ assert( iTrigDb==iDb || iTrigDb==1 );
+ sqlite3VdbeOp3(v, OP_DropTrigger, iTrigDb, 0, pTrig->name, 0);
+ }
+#endif
+
+ /* Drop the table and index from the internal schema */
+ sqlite3VdbeOp3(v, OP_DropTable, iDb, 0, pTab->zName, 0);
+
+ /* Reload the table, index and permanent trigger schemas. */
+ zWhere = sqlite3MPrintf("tbl_name=%Q", zName);
+ if( !zWhere ) return;
+ sqlite3VdbeOp3(v, OP_ParseSchema, iDb, 0, zWhere, P3_DYNAMIC);
+
+#ifndef SQLITE_OMIT_TRIGGER
+ /* Now, if the table is not stored in the temp database, reload any temp
+ ** triggers. Don't use IN(...) in case SQLITE_OMIT_SUBQUERY is defined.
+ */
+ if( (zWhere=whereTempTriggers(pParse, pTab))!=0 ){
+ sqlite3VdbeOp3(v, OP_ParseSchema, 1, 0, zWhere, P3_DYNAMIC);
+ }
+#endif
+}
+
+/*
+** Generate code to implement the "ALTER TABLE xxx RENAME TO yyy"
+** command.
+*/
+void sqlite3AlterRenameTable(
+ Parse *pParse, /* Parser context. */
+ SrcList *pSrc, /* The table to rename. */
+ Token *pName /* The new table name. */
+){
+ int iDb; /* Database that contains the table */
+ char *zDb; /* Name of database iDb */
+ Table *pTab; /* Table being renamed */
+ char *zName = 0; /* NULL-terminated version of pName */
+ sqlite3 *db = pParse->db; /* Database connection */
+ Vdbe *v;
+#ifndef SQLITE_OMIT_TRIGGER
+ char *zWhere = 0; /* Where clause to locate temp triggers */
+#endif
+
+ if( sqlite3MallocFailed() ) goto exit_rename_table;
+ assert( pSrc->nSrc==1 );
+
+ pTab = sqlite3LocateTable(pParse, pSrc->a[0].zName, pSrc->a[0].zDatabase);
+ if( !pTab ) goto exit_rename_table;
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( IsVirtual(pTab) ){
+ sqlite3ErrorMsg(pParse, "virtual tables may not be altered");
+ goto exit_rename_table;
+ }
+#endif
+ iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ zDb = db->aDb[iDb].zName;
+
+ /* Get a NULL terminated version of the new table name. */
+ zName = sqlite3NameFromToken(pName);
+ if( !zName ) goto exit_rename_table;
+
+ /* Check that a table or index named 'zName' does not already exist
+ ** in database iDb. If so, this is an error.
+ */
+ if( sqlite3FindTable(db, zName, zDb) || sqlite3FindIndex(db, zName, zDb) ){
+ sqlite3ErrorMsg(pParse,
+ "there is already another table or index with this name: %s", zName);
+ goto exit_rename_table;
+ }
+
+ /* Make sure it is not a system table being altered, or a reserved name
+ ** that the table is being renamed to.
+ */
+ if( strlen(pTab->zName)>6 && 0==sqlite3StrNICmp(pTab->zName, "sqlite_", 7) ){
+ sqlite3ErrorMsg(pParse, "table %s may not be altered", pTab->zName);
+ goto exit_rename_table;
+ }
+ if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){
+ goto exit_rename_table;
+ }
+
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ /* Invoke the authorization callback. */
+ if( sqlite3AuthCheck(pParse, SQLITE_ALTER_TABLE, zDb, pTab->zName, 0) ){
+ goto exit_rename_table;
+ }
+#endif
+
+ /* Begin a transaction and code the VerifyCookie for database iDb.
+ ** Then modify the schema cookie (since the ALTER TABLE modifies the
+ ** schema).
+ */
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ){
+ goto exit_rename_table;
+ }
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+ sqlite3ChangeCookie(db, v, iDb);
+
+ /* Modify the sqlite_master table to use the new table name. */
+ sqlite3NestedParse(pParse,
+ "UPDATE %Q.%s SET "
+#ifdef SQLITE_OMIT_TRIGGER
+ "sql = sqlite_rename_table(sql, %Q), "
+#else
+ "sql = CASE "
+ "WHEN type = 'trigger' THEN sqlite_rename_trigger(sql, %Q)"
+ "ELSE sqlite_rename_table(sql, %Q) END, "
+#endif
+ "tbl_name = %Q, "
+ "name = CASE "
+ "WHEN type='table' THEN %Q "
+ "WHEN name LIKE 'sqlite_autoindex%%' AND type='index' THEN "
+ "'sqlite_autoindex_' || %Q || substr(name, %d+18,10) "
+ "ELSE name END "
+ "WHERE tbl_name=%Q AND "
+ "(type='table' OR type='index' OR type='trigger');",
+ zDb, SCHEMA_TABLE(iDb), zName, zName, zName,
+#ifndef SQLITE_OMIT_TRIGGER
+ zName,
+#endif
+ zName, strlen(pTab->zName), pTab->zName
+ );
+
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ /* If the sqlite_sequence table exists in this database, then update
+ ** it with the new table name.
+ */
+ if( sqlite3FindTable(db, "sqlite_sequence", zDb) ){
+ sqlite3NestedParse(pParse,
+ "UPDATE %Q.sqlite_sequence set name = %Q WHERE name = %Q",
+ zDb, zName, pTab->zName);
+ }
+#endif
+
+#ifndef SQLITE_OMIT_TRIGGER
+ /* If there are TEMP triggers on this table, modify the sqlite_temp_master
+ ** table. Don't do this if the table being ALTERed is itself located in
+ ** the temp database.
+ */
+ if( (zWhere=whereTempTriggers(pParse, pTab))!=0 ){
+ sqlite3NestedParse(pParse,
+ "UPDATE sqlite_temp_master SET "
+ "sql = sqlite_rename_trigger(sql, %Q), "
+ "tbl_name = %Q "
+ "WHERE %s;", zName, zName, zWhere);
+ sqliteFree(zWhere);
+ }
+#endif
+
+ /* Drop and reload the internal table schema. */
+ reloadTableSchema(pParse, pTab, zName);
+
+exit_rename_table:
+ sqlite3SrcListDelete(pSrc);
+ sqliteFree(zName);
+}
+
+
+/*
+** This function is called after an "ALTER TABLE ... ADD" statement
+** has been parsed. Argument pColDef contains the text of the new
+** column definition.
+**
+** The Table structure pParse->pNewTable was extended to include
+** the new column during parsing.
+*/
+void sqlite3AlterFinishAddColumn(Parse *pParse, Token *pColDef){
+ Table *pNew; /* Copy of pParse->pNewTable */
+ Table *pTab; /* Table being altered */
+ int iDb; /* Database number */
+ const char *zDb; /* Database name */
+ const char *zTab; /* Table name */
+ char *zCol; /* Null-terminated column definition */
+ Column *pCol; /* The new column */
+ Expr *pDflt; /* Default value for the new column */
+
+ if( pParse->nErr ) return;
+ pNew = pParse->pNewTable;
+ assert( pNew );
+
+ iDb = sqlite3SchemaToIndex(pParse->db, pNew->pSchema);
+ zDb = pParse->db->aDb[iDb].zName;
+ zTab = pNew->zName;
+ pCol = &pNew->aCol[pNew->nCol-1];
+ pDflt = pCol->pDflt;
+ pTab = sqlite3FindTable(pParse->db, zTab, zDb);
+ assert( pTab );
+
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ /* Invoke the authorization callback. */
+ if( sqlite3AuthCheck(pParse, SQLITE_ALTER_TABLE, zDb, pTab->zName, 0) ){
+ return;
+ }
+#endif
+
+ /* If the default value for the new column was specified with a
+ ** literal NULL, then set pDflt to 0. This simplifies checking
+ ** for an SQL NULL default below.
+ */
+ if( pDflt && pDflt->op==TK_NULL ){
+ pDflt = 0;
+ }
+
+ /* Check that the new column is not specified as PRIMARY KEY or UNIQUE.
+ ** If there is a NOT NULL constraint, then the default value for the
+ ** column must not be NULL.
+ */
+ if( pCol->isPrimKey ){
+ sqlite3ErrorMsg(pParse, "Cannot add a PRIMARY KEY column");
+ return;
+ }
+ if( pNew->pIndex ){
+ sqlite3ErrorMsg(pParse, "Cannot add a UNIQUE column");
+ return;
+ }
+ if( pCol->notNull && !pDflt ){
+ sqlite3ErrorMsg(pParse,
+ "Cannot add a NOT NULL column with default value NULL");
+ return;
+ }
+
+ /* Ensure the default expression is something that sqlite3ValueFromExpr()
+ ** can handle (i.e. not CURRENT_TIME etc.)
+ */
+ if( pDflt ){
+ sqlite3_value *pVal;
+ if( sqlite3ValueFromExpr(pDflt, SQLITE_UTF8, SQLITE_AFF_NONE, &pVal) ){
+ /* malloc() has failed */
+ return;
+ }
+ if( !pVal ){
+ sqlite3ErrorMsg(pParse, "Cannot add a column with non-constant default");
+ return;
+ }
+ sqlite3ValueFree(pVal);
+ }
+
+ /* Modify the CREATE TABLE statement. */
+ zCol = sqliteStrNDup((char*)pColDef->z, pColDef->n);
+ if( zCol ){
+ char *zEnd = &zCol[pColDef->n-1];
+ while( (zEnd>zCol && *zEnd==';') || isspace(*(unsigned char *)zEnd) ){
+ *zEnd-- = '\0';
+ }
+ sqlite3NestedParse(pParse,
+ "UPDATE %Q.%s SET "
+ "sql = substr(sql,1,%d) || ', ' || %Q || substr(sql,%d,length(sql)) "
+ "WHERE type = 'table' AND name = %Q",
+ zDb, SCHEMA_TABLE(iDb), pNew->addColOffset, zCol, pNew->addColOffset+1,
+ zTab
+ );
+ sqliteFree(zCol);
+ }
+
+ /* If the default value of the new column is NULL, then set the file
+ ** format to 2. If the default value of the new column is not NULL,
+ ** the file format becomes 3.
+ */
+ sqlite3MinimumFileFormat(pParse, iDb, pDflt ? 3 : 2);
+
+ /* Reload the schema of the modified table. */
+ reloadTableSchema(pParse, pTab, pTab->zName);
+}
+
+/*
+** This function is called by the parser after the table-name in
+** an "ALTER TABLE <table-name> ADD" statement is parsed. Argument
+** pSrc is the full-name of the table being altered.
+**
+** This routine makes a (partial) copy of the Table structure
+** for the table being altered and sets Parse.pNewTable to point
+** to it. Routines called by the parser as the column definition
+** is parsed (i.e. sqlite3AddColumn()) add the new Column data to
+** the copy. The copy of the Table structure is deleted by tokenize.c
+** after parsing is finished.
+**
+** Routine sqlite3AlterFinishAddColumn() will be called to complete
+** coding the "ALTER TABLE ... ADD" statement.
+*/
+void sqlite3AlterBeginAddColumn(Parse *pParse, SrcList *pSrc){
+ Table *pNew;
+ Table *pTab;
+ Vdbe *v;
+ int iDb;
+ int i;
+ int nAlloc;
+
+ /* Look up the table being altered. */
+ assert( pParse->pNewTable==0 );
+ if( sqlite3MallocFailed() ) goto exit_begin_add_column;
+ pTab = sqlite3LocateTable(pParse, pSrc->a[0].zName, pSrc->a[0].zDatabase);
+ if( !pTab ) goto exit_begin_add_column;
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( IsVirtual(pTab) ){
+ sqlite3ErrorMsg(pParse, "virtual tables may not be altered");
+ goto exit_begin_add_column;
+ }
+#endif
+
+ /* Make sure this is not an attempt to ALTER a view. */
+ if( pTab->pSelect ){
+ sqlite3ErrorMsg(pParse, "Cannot add a column to a view");
+ goto exit_begin_add_column;
+ }
+
+ assert( pTab->addColOffset>0 );
+ iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+
+ /* Put a copy of the Table struct in Parse.pNewTable for the
+ ** sqlite3AddColumn() function and friends to modify.
+ */
+ pNew = (Table *)sqliteMalloc(sizeof(Table));
+ if( !pNew ) goto exit_begin_add_column;
+ pParse->pNewTable = pNew;
+ pNew->nRef = 1;
+ pNew->nCol = pTab->nCol;
+ assert( pNew->nCol>0 );
+ nAlloc = (((pNew->nCol-1)/8)*8)+8;
+ assert( nAlloc>=pNew->nCol && nAlloc%8==0 && nAlloc-pNew->nCol<8 );
+ pNew->aCol = (Column *)sqliteMalloc(sizeof(Column)*nAlloc);
+ pNew->zName = sqliteStrDup(pTab->zName);
+ if( !pNew->aCol || !pNew->zName ){
+ goto exit_begin_add_column;
+ }
+ memcpy(pNew->aCol, pTab->aCol, sizeof(Column)*pNew->nCol);
+ for(i=0; i<pNew->nCol; i++){
+ Column *pCol = &pNew->aCol[i];
+ pCol->zName = sqliteStrDup(pCol->zName);
+ pCol->zColl = 0;
+ pCol->zType = 0;
+ pCol->pDflt = 0;
+ }
+ pNew->pSchema = pParse->db->aDb[iDb].pSchema;
+ pNew->addColOffset = pTab->addColOffset;
+ pNew->nRef = 1;
+
+ /* Begin a transaction and increment the schema cookie. */
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+ v = sqlite3GetVdbe(pParse);
+ if( !v ) goto exit_begin_add_column;
+ sqlite3ChangeCookie(pParse->db, v, iDb);
+
+exit_begin_add_column:
+ sqlite3SrcListDelete(pSrc);
+ return;
+}
+#endif /* SQLITE_ALTER_TABLE */
Added: freeswitch/trunk/libs/sqlite/src/analyze.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/analyze.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,403 @@
+/*
+** 2005 July 8
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code associated with the ANALYZE command.
+**
+** @(#) $Id: analyze.c,v 1.16 2006/01/10 17:58:23 danielk1977 Exp $
+*/
+#ifndef SQLITE_OMIT_ANALYZE
+#include "sqliteInt.h"
+
+/*
+** This routine generates code that opens the sqlite_stat1 table on cursor
+** iStatCur.
+**
+** If the sqlite_stat1 table does not previously exist, it is created.
+** If it does previously exist, all entries associated with table zWhere
+** are removed. If zWhere==0 then all entries are removed.
+*/
+static void openStatTable(
+ Parse *pParse, /* Parsing context */
+ int iDb, /* The database we are looking in */
+ int iStatCur, /* Open the sqlite_stat1 table on this cursor */
+ const char *zWhere /* Delete entries associated with this table */
+){
+ sqlite3 *db = pParse->db;
+ Db *pDb;
+ int iRootPage;
+ Table *pStat;
+ Vdbe *v = sqlite3GetVdbe(pParse);
+
+ pDb = &db->aDb[iDb];
+ if( (pStat = sqlite3FindTable(db, "sqlite_stat1", pDb->zName))==0 ){
+ /* The sqlite_stat1 table does not exist. Create it.
+ ** Note that a side-effect of the CREATE TABLE statement is to leave
+ ** the rootpage of the new table on the top of the stack. This is
+ ** important because the OpenWrite opcode below will be needing it. */
+ sqlite3NestedParse(pParse,
+ "CREATE TABLE %Q.sqlite_stat1(tbl,idx,stat)",
+ pDb->zName
+ );
+ iRootPage = 0; /* Cause rootpage to be taken from top of stack */
+ }else if( zWhere ){
+ /* The sqlite_stat1 table exists. Delete all entries associated with
+ ** the table zWhere. */
+ sqlite3NestedParse(pParse,
+ "DELETE FROM %Q.sqlite_stat1 WHERE tbl=%Q",
+ pDb->zName, zWhere
+ );
+ iRootPage = pStat->tnum;
+ }else{
+ /* The sqlite_stat1 table already exists. Delete all rows. */
+ iRootPage = pStat->tnum;
+ sqlite3VdbeAddOp(v, OP_Clear, pStat->tnum, iDb);
+ }
+
+ /* Open the sqlite_stat1 table for writing. Unless it was created
+ ** by this vdbe program, lock it for writing at the shared-cache level.
+ ** If this vdbe did create the sqlite_stat1 table, then it must have
+ ** already obtained a schema-lock, making the write-lock redundant.
+ */
+ if( iRootPage>0 ){
+ sqlite3TableLock(pParse, iDb, iRootPage, 1, "sqlite_stat1");
+ }
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ sqlite3VdbeAddOp(v, OP_OpenWrite, iStatCur, iRootPage);
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, iStatCur, 3);
+}
+
+/*
+** Generate code to do an analysis of all indices associated with
+** a single table.
+*/
+static void analyzeOneTable(
+ Parse *pParse, /* Parser context */
+ Table *pTab, /* Table whose indices are to be analyzed */
+ int iStatCur, /* Cursor that writes to the sqlite_stat1 table */
+ int iMem /* Available memory locations begin here */
+){
+ Index *pIdx; /* An index being analyzed */
+ int iIdxCur; /* Cursor number for index being analyzed */
+ int nCol; /* Number of columns in the index */
+ Vdbe *v; /* The virtual machine being built up */
+ int i; /* Loop counter */
+ int topOfLoop; /* The top of the loop */
+ int endOfLoop; /* The end of the loop */
+ int addr; /* The address of an instruction */
+ int iDb; /* Index of database containing pTab */
+
+ v = sqlite3GetVdbe(pParse);
+ if( pTab==0 || pTab->pIndex==0 ){
+ /* Do no analysis for tables that have no indices */
+ return;
+ }
+
+ iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ assert( iDb>=0 );
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ if( sqlite3AuthCheck(pParse, SQLITE_ANALYZE, pTab->zName, 0,
+ pParse->db->aDb[iDb].zName ) ){
+ return;
+ }
+#endif
+
+ /* Establish a read-lock on the table at the shared-cache level. */
+ sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName);
+
+ iIdxCur = pParse->nTab;
+ for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
+ KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx);
+
+ /* Open a cursor to the index to be analyzed
+ */
+ assert( iDb==sqlite3SchemaToIndex(pParse->db, pIdx->pSchema) );
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ VdbeComment((v, "# %s", pIdx->zName));
+ sqlite3VdbeOp3(v, OP_OpenRead, iIdxCur, pIdx->tnum,
+ (char *)pKey, P3_KEYINFO_HANDOFF);
+ nCol = pIdx->nColumn;
+ if( iMem+nCol*2>=pParse->nMem ){
+ pParse->nMem = iMem+nCol*2+1;
+ }
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, iIdxCur, nCol+1);
+
+ /* Memory cells are used as follows:
+ **
+ ** mem[iMem]: The total number of rows in the table.
+ ** mem[iMem+1]: Number of distinct values in column 1
+ ** ...
+ ** mem[iMem+nCol]: Number of distinct values in column N
+ ** mem[iMem+nCol+1] Last observed value of column 1
+ ** ...
+ ** mem[iMem+nCol+nCol]: Last observed value of column N
+ **
+ ** Cells iMem through iMem+nCol are initialized to 0. The others
+ ** are initialized to NULL.
+ */
+ for(i=0; i<=nCol; i++){
+ sqlite3VdbeAddOp(v, OP_MemInt, 0, iMem+i);
+ }
+ for(i=0; i<nCol; i++){
+ sqlite3VdbeAddOp(v, OP_MemNull, iMem+nCol+i+1, 0);
+ }
+
+ /* Do the analysis.
+ */
+ endOfLoop = sqlite3VdbeMakeLabel(v);
+ sqlite3VdbeAddOp(v, OP_Rewind, iIdxCur, endOfLoop);
+ topOfLoop = sqlite3VdbeCurrentAddr(v);
+ sqlite3VdbeAddOp(v, OP_MemIncr, 1, iMem);
+ for(i=0; i<nCol; i++){
+ sqlite3VdbeAddOp(v, OP_Column, iIdxCur, i);
+ sqlite3VdbeAddOp(v, OP_MemLoad, iMem+nCol+i+1, 0);
+ sqlite3VdbeAddOp(v, OP_Ne, 0x100, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_Goto, 0, endOfLoop);
+ for(i=0; i<nCol; i++){
+ addr = sqlite3VdbeAddOp(v, OP_MemIncr, 1, iMem+i+1);
+ sqlite3VdbeChangeP2(v, topOfLoop + 3*i + 3, addr);
+ sqlite3VdbeAddOp(v, OP_Column, iIdxCur, i);
+ sqlite3VdbeAddOp(v, OP_MemStore, iMem+nCol+i+1, 1);
+ }
+ sqlite3VdbeResolveLabel(v, endOfLoop);
+ sqlite3VdbeAddOp(v, OP_Next, iIdxCur, topOfLoop);
+ sqlite3VdbeAddOp(v, OP_Close, iIdxCur, 0);
+
+ /* Store the results.
+ **
+ ** The result is a single row of the sqlite_stat1 table. The first
+ ** two columns are the names of the table and index. The third column
+ ** is a string composed of a list of integer statistics about the
+ ** index. The first integer in the list is the total number of entries
+ ** in the index. There is one additional integer in the list for each
+ ** column of the table. This additional integer is a guess of how many
+ ** rows of the table the index will select. If D is the count of distinct
+ ** values and K is the total number of rows, then the integer is computed
+ ** as:
+ **
+ ** I = (K+D-1)/D
+ **
+ ** If K==0 then no entry is made into the sqlite_stat1 table.
+ ** If K>0 then it is always the case that D>0, so division by zero
+ ** is never possible.
+ */
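+ /* Worked example (for illustration only): for an index holding K=1000
+ ** rows over a column with D=20 distinct values, the recorded integer is
+ ** (1000+20-1)/20 = 50, i.e. roughly 50 rows per distinct value.
+ */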
+ sqlite3VdbeAddOp(v, OP_MemLoad, iMem, 0);
+ addr = sqlite3VdbeAddOp(v, OP_IfNot, 0, 0);
+ sqlite3VdbeAddOp(v, OP_NewRowid, iStatCur, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, pTab->zName, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, pIdx->zName, 0);
+ sqlite3VdbeAddOp(v, OP_MemLoad, iMem, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, " ", 0);
+ for(i=0; i<nCol; i++){
+ sqlite3VdbeAddOp(v, OP_MemLoad, iMem, 0);
+ sqlite3VdbeAddOp(v, OP_MemLoad, iMem+i+1, 0);
+ sqlite3VdbeAddOp(v, OP_Add, 0, 0);
+ sqlite3VdbeAddOp(v, OP_AddImm, -1, 0);
+ sqlite3VdbeAddOp(v, OP_MemLoad, iMem+i+1, 0);
+ sqlite3VdbeAddOp(v, OP_Divide, 0, 0);
+ sqlite3VdbeAddOp(v, OP_ToInt, 0, 0);
+ if( i==nCol-1 ){
+ sqlite3VdbeAddOp(v, OP_Concat, nCol*2-1, 0);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Dup, 1, 0);
+ }
+ }
+ sqlite3VdbeOp3(v, OP_MakeRecord, 3, 0, "aaa", 0);
+ sqlite3VdbeAddOp(v, OP_Insert, iStatCur, 0);
+ sqlite3VdbeJumpHere(v, addr);
+ }
+}
+
+/*
+** Generate code that will cause the most recent index analysis to
+** be loaded into internal hash tables where it can be used.
+*/
+static void loadAnalysis(Parse *pParse, int iDb){
+ Vdbe *v = sqlite3GetVdbe(pParse);
+ sqlite3VdbeAddOp(v, OP_LoadAnalysis, iDb, 0);
+}
+
+/*
+** Generate code that will do an analysis of an entire database
+*/
+static void analyzeDatabase(Parse *pParse, int iDb){
+ sqlite3 *db = pParse->db;
+ Schema *pSchema = db->aDb[iDb].pSchema; /* Schema of database iDb */
+ HashElem *k;
+ int iStatCur;
+ int iMem;
+
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+ iStatCur = pParse->nTab++;
+ openStatTable(pParse, iDb, iStatCur, 0);
+ iMem = pParse->nMem;
+ for(k=sqliteHashFirst(&pSchema->tblHash); k; k=sqliteHashNext(k)){
+ Table *pTab = (Table*)sqliteHashData(k);
+ analyzeOneTable(pParse, pTab, iStatCur, iMem);
+ }
+ loadAnalysis(pParse, iDb);
+}
+
+/*
+** Generate code that will do an analysis of a single table in
+** a database.
+*/
+static void analyzeTable(Parse *pParse, Table *pTab){
+ int iDb;
+ int iStatCur;
+
+ assert( pTab!=0 );
+ iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+ iStatCur = pParse->nTab++;
+ openStatTable(pParse, iDb, iStatCur, pTab->zName);
+ analyzeOneTable(pParse, pTab, iStatCur, pParse->nMem);
+ loadAnalysis(pParse, iDb);
+}
+
+/*
+** Generate code for the ANALYZE command. The parser calls this routine
+** when it recognizes an ANALYZE command.
+**
+** ANALYZE -- 1
+** ANALYZE <database> -- 2
+** ANALYZE ?<database>.?<tablename> -- 3
+**
+** Form 1 causes all indices in all attached databases to be analyzed.
+** Form 2 analyzes all indices in the single database named.
+** Form 3 analyzes all indices associated with the named table.
+*/
+void sqlite3Analyze(Parse *pParse, Token *pName1, Token *pName2){
+ sqlite3 *db = pParse->db;
+ int iDb;
+ int i;
+ char *z, *zDb;
+ Table *pTab;
+ Token *pTableName;
+
+ /* Read the database schema. If an error occurs, leave an error message
+ ** and code in pParse and return NULL. */
+ if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
+ return;
+ }
+
+ if( pName1==0 ){
+ /* Form 1: Analyze everything */
+ for(i=0; i<db->nDb; i++){
+ if( i==1 ) continue; /* Do not analyze the TEMP database */
+ analyzeDatabase(pParse, i);
+ }
+ }else if( pName2==0 || pName2->n==0 ){
+ /* Form 2: Analyze the database or table named */
+ iDb = sqlite3FindDb(db, pName1);
+ if( iDb>=0 ){
+ analyzeDatabase(pParse, iDb);
+ }else{
+ z = sqlite3NameFromToken(pName1);
+ pTab = sqlite3LocateTable(pParse, z, 0);
+ sqliteFree(z);
+ if( pTab ){
+ analyzeTable(pParse, pTab);
+ }
+ }
+ }else{
+ /* Form 3: Analyze the fully qualified table name */
+ iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pTableName);
+ if( iDb>=0 ){
+ zDb = db->aDb[iDb].zName;
+ z = sqlite3NameFromToken(pTableName);
+ pTab = sqlite3LocateTable(pParse, z, zDb);
+ sqliteFree(z);
+ if( pTab ){
+ analyzeTable(pParse, pTab);
+ }
+ }
+ }
+}
+
+/*
+** Used to pass information from the analyzer reader through to the
+** callback routine.
+*/
+typedef struct analysisInfo analysisInfo;
+struct analysisInfo {
+ sqlite3 *db;
+ const char *zDatabase;
+};
+
+/*
+** This callback is invoked once for each index when reading the
+** sqlite_stat1 table.
+**
+** argv[0] = name of the index
+** argv[1] = results of analysis - one integer for each column
+*/
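+/* For illustration (hypothetical table and index names): a sqlite_stat1 row
+** such as ('t1', 'i1', '10000 50 3') sets aiRowEst[] for index i1 to
+** {10000, 50, 3}: the index holds 10000 entries, about 50 rows match an
+** equality constraint on its first column, and about 3 rows match an
+** equality constraint on its first two columns.
+*/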
+static int analysisLoader(void *pData, int argc, char **argv, char **azNotUsed){
+ analysisInfo *pInfo = (analysisInfo*)pData;
+ Index *pIndex;
+ int i, c;
+ unsigned int v;
+ const char *z;
+
+ assert( argc==2 );
+ if( argv==0 || argv[0]==0 || argv[1]==0 ){
+ return 0;
+ }
+ pIndex = sqlite3FindIndex(pInfo->db, argv[0], pInfo->zDatabase);
+ if( pIndex==0 ){
+ return 0;
+ }
+ z = argv[1];
+ for(i=0; *z && i<=pIndex->nColumn; i++){
+ v = 0;
+ while( (c=z[0])>='0' && c<='9' ){
+ v = v*10 + c - '0';
+ z++;
+ }
+ pIndex->aiRowEst[i] = v;
+ if( *z==' ' ) z++;
+ }
+ return 0;
+}
+
+/*
+** Load the content of the sqlite_stat1 table into the index hash tables.
+*/
+void sqlite3AnalysisLoad(sqlite3 *db, int iDb){
+ analysisInfo sInfo;
+ HashElem *i;
+ char *zSql;
+
+ /* Clear any prior statistics */
+ for(i=sqliteHashFirst(&db->aDb[iDb].pSchema->idxHash);i;i=sqliteHashNext(i)){
+ Index *pIdx = sqliteHashData(i);
+ sqlite3DefaultRowEst(pIdx);
+ }
+
+ /* Check to make sure the sqlite_stat1 table exists */
+ sInfo.db = db;
+ sInfo.zDatabase = db->aDb[iDb].zName;
+ if( sqlite3FindTable(db, "sqlite_stat1", sInfo.zDatabase)==0 ){
+ return;
+ }
+
+
+ /* Load new statistics out of the sqlite_stat1 table */
+ zSql = sqlite3MPrintf("SELECT idx, stat FROM %Q.sqlite_stat1",
+ sInfo.zDatabase);
+ sqlite3SafetyOff(db);
+ sqlite3_exec(db, zSql, analysisLoader, &sInfo, 0);
+ sqlite3SafetyOn(db);
+ sqliteFree(zSql);
+}
+
+
+#endif /* SQLITE_OMIT_ANALYZE */
Added: freeswitch/trunk/libs/sqlite/src/attach.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/attach.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,504 @@
+/*
+** 2003 April 6
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code used to implement the ATTACH and DETACH commands.
+**
+** $Id: attach.c,v 1.53 2006/06/27 16:34:57 danielk1977 Exp $
+*/
+#include "sqliteInt.h"
+
+/*
+** Resolve an expression that was part of an ATTACH or DETACH statement. This
+** is slightly different from resolving a normal SQL expression, because simple
+** identifiers are treated as strings, not possible column names or aliases.
+**
+** i.e. if the parser sees:
+**
+** ATTACH DATABASE abc AS def
+**
+** it treats the two expressions as literal strings 'abc' and 'def' instead of
+** looking for columns of the same name.
+**
+** This only applies to the root node of pExpr, so the statement:
+**
+** ATTACH DATABASE abc||def AS 'db2'
+**
+** will fail because neither abc nor def can be resolved.
+*/
+static int resolveAttachExpr(NameContext *pName, Expr *pExpr)
+{
+ int rc = SQLITE_OK;
+ if( pExpr ){
+ if( pExpr->op!=TK_ID ){
+ rc = sqlite3ExprResolveNames(pName, pExpr);
+ }else{
+ pExpr->op = TK_STRING;
+ }
+ }
+ return rc;
+}
+
+/*
+** An SQL user-function registered to do the work of an ATTACH statement. The
+** three arguments to the function come directly from an attach statement:
+**
+** ATTACH DATABASE x AS y KEY z
+**
+** SELECT sqlite_attach(x, y, z)
+**
+** If the optional "KEY z" syntax is omitted, an SQL NULL is passed as the
+** third argument.
+*/
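+/* For illustration (hypothetical file and table names), a session such as
+**
+**     ATTACH DATABASE 'archive.db' AS archive;
+**     INSERT INTO archive.memos SELECT * FROM main.memos;
+**     DETACH DATABASE archive;
+**
+** causes the parser to rewrite the ATTACH into roughly
+** SELECT sqlite_attach('archive.db', 'archive', NULL), which is what
+** invokes this function.
+*/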
+static void attachFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ int i;
+ int rc = 0;
+ sqlite3 *db = sqlite3_user_data(context);
+ const char *zName;
+ const char *zFile;
+ Db *aNew;
+ char zErr[128];
+ char *zErrDyn = 0;
+
+ zFile = (const char *)sqlite3_value_text(argv[0]);
+ zName = (const char *)sqlite3_value_text(argv[1]);
+ if( zFile==0 ) zFile = "";
+ if( zName==0 ) zName = "";
+
+ /* Check for the following errors:
+ **
+ ** * Too many attached databases,
+ ** * Transaction currently open
+ ** * Specified database name already being used.
+ */
+ if( db->nDb>=MAX_ATTACHED+2 ){
+ sqlite3_snprintf(
+ sizeof(zErr), zErr, "too many attached databases - max %d", MAX_ATTACHED
+ );
+ goto attach_error;
+ }
+ if( !db->autoCommit ){
+ strcpy(zErr, "cannot ATTACH database within transaction");
+ goto attach_error;
+ }
+ for(i=0; i<db->nDb; i++){
+ char *z = db->aDb[i].zName;
+ if( z && zName && sqlite3StrICmp(z, zName)==0 ){
+ sqlite3_snprintf(sizeof(zErr), zErr, "database %s is already in use", zName);
+ goto attach_error;
+ }
+ }
+
+ /* Allocate the new entry in the db->aDb[] array and initialise the schema
+ ** hash tables.
+ */
+ if( db->aDb==db->aDbStatic ){
+ aNew = sqliteMalloc( sizeof(db->aDb[0])*3 );
+ if( aNew==0 ){
+ return;
+ }
+ memcpy(aNew, db->aDb, sizeof(db->aDb[0])*2);
+ }else{
+ aNew = sqliteRealloc(db->aDb, sizeof(db->aDb[0])*(db->nDb+1) );
+ if( aNew==0 ){
+ return;
+ }
+ }
+ db->aDb = aNew;
+ aNew = &db->aDb[db->nDb++];
+ memset(aNew, 0, sizeof(*aNew));
+
+ /* Open the database file. If the btree is successfully opened, use
+ ** it to obtain the database schema. At this point the schema may
+ ** or may not be initialised.
+ */
+ rc = sqlite3BtreeFactory(db, zFile, 0, MAX_PAGES, &aNew->pBt);
+ if( rc==SQLITE_OK ){
+ aNew->pSchema = sqlite3SchemaGet(aNew->pBt);
+ if( !aNew->pSchema ){
+ rc = SQLITE_NOMEM;
+ }else if( aNew->pSchema->file_format && aNew->pSchema->enc!=ENC(db) ){
+ strcpy(zErr,
+ "attached databases must use the same text encoding as main database");
+ goto attach_error;
+ }
+ }
+ aNew->zName = sqliteStrDup(zName);
+ aNew->safety_level = 3;
+
+#if SQLITE_HAS_CODEC
+ {
+ extern int sqlite3CodecAttach(sqlite3*, int, void*, int);
+ extern void sqlite3CodecGetKey(sqlite3*, int, void**, int*);
+ int nKey;
+ char *zKey;
+ int t = sqlite3_value_type(argv[2]);
+ switch( t ){
+ case SQLITE_INTEGER:
+ case SQLITE_FLOAT:
+ zErrDyn = sqliteStrDup("Invalid key value");
+ rc = SQLITE_ERROR;
+ break;
+
+ case SQLITE_TEXT:
+ case SQLITE_BLOB:
+ nKey = sqlite3_value_bytes(argv[2]);
+ zKey = (char *)sqlite3_value_blob(argv[2]);
+ sqlite3CodecAttach(db, db->nDb-1, zKey, nKey);
+ break;
+
+ case SQLITE_NULL:
+ /* No key specified. Use the key from the main database */
+ sqlite3CodecGetKey(db, 0, (void**)&zKey, &nKey);
+ sqlite3CodecAttach(db, db->nDb-1, zKey, nKey);
+ break;
+ }
+ }
+#endif
+
+ /* If the file was opened successfully, read the schema for the new database.
+ ** If this fails, or if opening the file failed, then close the file and
+ ** remove the entry from the db->aDb[] array. i.e. put everything back the way
+ ** we found it.
+ */
+ if( rc==SQLITE_OK ){
+ sqlite3SafetyOn(db);
+ rc = sqlite3Init(db, &zErrDyn);
+ sqlite3SafetyOff(db);
+ }
+ if( rc ){
+ int iDb = db->nDb - 1;
+ assert( iDb>=2 );
+ if( db->aDb[iDb].pBt ){
+ sqlite3BtreeClose(db->aDb[iDb].pBt);
+ db->aDb[iDb].pBt = 0;
+ db->aDb[iDb].pSchema = 0;
+ }
+ sqlite3ResetInternalSchema(db, 0);
+ db->nDb = iDb;
+ if( rc==SQLITE_NOMEM ){
+ if( !sqlite3MallocFailed() ) sqlite3FailedMalloc();
+ sqlite3_snprintf(sizeof(zErr),zErr, "out of memory");
+ }else{
+ sqlite3_snprintf(sizeof(zErr),zErr, "unable to open database: %s", zFile);
+ }
+ goto attach_error;
+ }
+
+ return;
+
+attach_error:
+ /* Return an error if we get here */
+ if( zErrDyn ){
+ sqlite3_result_error(context, zErrDyn, -1);
+ sqliteFree(zErrDyn);
+ }else{
+ zErr[sizeof(zErr)-1] = 0;
+ sqlite3_result_error(context, zErr, -1);
+ }
+}
+
+/*
+** An SQL user-function registered to do the work of a DETACH statement. The
+** single argument to the function comes directly from a detach statement:
+**
+** DETACH DATABASE x
+**
+** SELECT sqlite_detach(x)
+*/
+static void detachFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ const char *zName = (const char *)sqlite3_value_text(argv[0]);
+ sqlite3 *db = sqlite3_user_data(context);
+ int i;
+ Db *pDb = 0;
+ char zErr[128];
+
+ if( zName==0 ) zName = "";
+ for(i=0; i<db->nDb; i++){
+ pDb = &db->aDb[i];
+ if( pDb->pBt==0 ) continue;
+ if( sqlite3StrICmp(pDb->zName, zName)==0 ) break;
+ }
+
+ if( i>=db->nDb ){
+ sqlite3_snprintf(sizeof(zErr),zErr, "no such database: %s", zName);
+ goto detach_error;
+ }
+ if( i<2 ){
+ sqlite3_snprintf(sizeof(zErr),zErr, "cannot detach database %s", zName);
+ goto detach_error;
+ }
+ if( !db->autoCommit ){
+ strcpy(zErr, "cannot DETACH database within transaction");
+ goto detach_error;
+ }
+ if( sqlite3BtreeIsInReadTrans(pDb->pBt) ){
+ sqlite3_snprintf(sizeof(zErr),zErr, "database %s is locked", zName);
+ goto detach_error;
+ }
+
+ sqlite3BtreeClose(pDb->pBt);
+ pDb->pBt = 0;
+ pDb->pSchema = 0;
+ sqlite3ResetInternalSchema(db, 0);
+ return;
+
+detach_error:
+ sqlite3_result_error(context, zErr, -1);
+}
+
+/*
+** This procedure generates VDBE code for a single invocation of either the
+** sqlite_detach() or sqlite_attach() SQL user functions.
+*/
+static void codeAttach(
+ Parse *pParse, /* The parser context */
+ int type, /* Either SQLITE_ATTACH or SQLITE_DETACH */
+ const char *zFunc, /* Either "sqlite_attach" or "sqlite_detach" */
+ int nFunc, /* Number of args to pass to zFunc */
+ Expr *pAuthArg, /* Expression to pass to authorization callback */
+ Expr *pFilename, /* Name of database file */
+ Expr *pDbname, /* Name of the database to use internally */
+ Expr *pKey /* Database key for encryption extension */
+){
+ int rc;
+ NameContext sName;
+ Vdbe *v;
+ FuncDef *pFunc;
+ sqlite3* db = pParse->db;
+
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ assert( sqlite3MallocFailed() || pAuthArg );
+ if( pAuthArg ){
+ char *zAuthArg = sqlite3NameFromToken(&pAuthArg->span);
+ if( !zAuthArg ){
+ goto attach_end;
+ }
+ rc = sqlite3AuthCheck(pParse, type, zAuthArg, 0, 0);
+ sqliteFree(zAuthArg);
+ if(rc!=SQLITE_OK ){
+ goto attach_end;
+ }
+ }
+#endif /* SQLITE_OMIT_AUTHORIZATION */
+
+ memset(&sName, 0, sizeof(NameContext));
+ sName.pParse = pParse;
+
+ if(
+ SQLITE_OK!=(rc = resolveAttachExpr(&sName, pFilename)) ||
+ SQLITE_OK!=(rc = resolveAttachExpr(&sName, pDbname)) ||
+ SQLITE_OK!=(rc = resolveAttachExpr(&sName, pKey))
+ ){
+ pParse->nErr++;
+ goto attach_end;
+ }
+
+ v = sqlite3GetVdbe(pParse);
+ sqlite3ExprCode(pParse, pFilename);
+ sqlite3ExprCode(pParse, pDbname);
+ sqlite3ExprCode(pParse, pKey);
+
+ assert( v || sqlite3MallocFailed() );
+ if( v ){
+ sqlite3VdbeAddOp(v, OP_Function, 0, nFunc);
+ pFunc = sqlite3FindFunction(db, zFunc, strlen(zFunc), nFunc, SQLITE_UTF8,0);
+ sqlite3VdbeChangeP3(v, -1, (char *)pFunc, P3_FUNCDEF);
+
+ /* Code an OP_Expire. For an ATTACH statement, set P1 to true (expire this
+ ** statement only). For DETACH, set it to false (expire all existing
+ ** statements).
+ */
+ sqlite3VdbeAddOp(v, OP_Expire, (type==SQLITE_ATTACH), 0);
+ }
+
+attach_end:
+ sqlite3ExprDelete(pFilename);
+ sqlite3ExprDelete(pDbname);
+ sqlite3ExprDelete(pKey);
+}
+
+/*
+** Called by the parser to compile a DETACH statement.
+**
+** DETACH pDbname
+*/
+void sqlite3Detach(Parse *pParse, Expr *pDbname){
+ codeAttach(pParse, SQLITE_DETACH, "sqlite_detach", 1, pDbname, 0, 0, pDbname);
+}
+
+/*
+** Called by the parser to compile an ATTACH statement.
+**
+** ATTACH p AS pDbname KEY pKey
+*/
+void sqlite3Attach(Parse *pParse, Expr *p, Expr *pDbname, Expr *pKey){
+ codeAttach(pParse, SQLITE_ATTACH, "sqlite_attach", 3, p, p, pDbname, pKey);
+}
+
+/*
+** Register the functions sqlite_attach and sqlite_detach.
+*/
+void sqlite3AttachFunctions(sqlite3 *db){
+ static const int enc = SQLITE_UTF8;
+ sqlite3CreateFunc(db, "sqlite_attach", 3, enc, db, attachFunc, 0, 0);
+ sqlite3CreateFunc(db, "sqlite_detach", 1, enc, db, detachFunc, 0, 0);
+}
+
+/*
+** Initialize a DbFixer structure. This routine must be called prior
+** to passing the structure to one of the sqliteFixAAAA() routines below.
+**
+** The return value indicates whether or not fixation is required. TRUE
+** means we do need to fix the database references, FALSE means we do not.
+*/
+int sqlite3FixInit(
+ DbFixer *pFix, /* The fixer to be initialized */
+ Parse *pParse, /* Error messages will be written here */
+ int iDb, /* This is the database that must be used */
+ const char *zType, /* "view", "trigger", or "index" */
+ const Token *pName /* Name of the view, trigger, or index */
+){
+ sqlite3 *db;
+
+ if( iDb<0 || iDb==1 ) return 0;
+ db = pParse->db;
+ assert( db->nDb>iDb );
+ pFix->pParse = pParse;
+ pFix->zDb = db->aDb[iDb].zName;
+ pFix->zType = zType;
+ pFix->pName = pName;
+ return 1;
+}
+
+/*
+** The following set of routines walk through the parse tree and assign
+** a specific database to all table references where the database name
+** was left unspecified in the original SQL statement. The pFix structure
+** must have been initialized by a prior call to sqlite3FixInit().
+**
+** These routines are used to make sure that an index, trigger, or
+** view in one database does not refer to objects in a different database.
+** (Exception: indices, triggers, and views in the TEMP database are
+** allowed to refer to anything.) If a reference is explicitly made
+** to an object in a different database, an error message is added to
+** pParse->zErrMsg and these routines return non-zero. If everything
+** checks out, these routines return 0.
+*/
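+/* For illustration (hypothetical object names): a trigger created in an
+** attached database 'aux' whose body refers to main.t1 is rejected with an
+** error of the form
+** "trigger <name> cannot reference objects in database main".
+*/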
+int sqlite3FixSrcList(
+ DbFixer *pFix, /* Context of the fixation */
+ SrcList *pList /* The Source list to check and modify */
+){
+ int i;
+ const char *zDb;
+ struct SrcList_item *pItem;
+
+ if( pList==0 ) return 0;
+ zDb = pFix->zDb;
+ for(i=0, pItem=pList->a; i<pList->nSrc; i++, pItem++){
+ if( pItem->zDatabase==0 ){
+ pItem->zDatabase = sqliteStrDup(zDb);
+ }else if( sqlite3StrICmp(pItem->zDatabase,zDb)!=0 ){
+ sqlite3ErrorMsg(pFix->pParse,
+ "%s %T cannot reference objects in database %s",
+ pFix->zType, pFix->pName, pItem->zDatabase);
+ return 1;
+ }
+#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_TRIGGER)
+ if( sqlite3FixSelect(pFix, pItem->pSelect) ) return 1;
+ if( sqlite3FixExpr(pFix, pItem->pOn) ) return 1;
+#endif
+ }
+ return 0;
+}
+#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_TRIGGER)
+int sqlite3FixSelect(
+ DbFixer *pFix, /* Context of the fixation */
+ Select *pSelect /* The SELECT statement to be fixed to one database */
+){
+ while( pSelect ){
+ if( sqlite3FixExprList(pFix, pSelect->pEList) ){
+ return 1;
+ }
+ if( sqlite3FixSrcList(pFix, pSelect->pSrc) ){
+ return 1;
+ }
+ if( sqlite3FixExpr(pFix, pSelect->pWhere) ){
+ return 1;
+ }
+ if( sqlite3FixExpr(pFix, pSelect->pHaving) ){
+ return 1;
+ }
+ pSelect = pSelect->pPrior;
+ }
+ return 0;
+}
+int sqlite3FixExpr(
+ DbFixer *pFix, /* Context of the fixation */
+ Expr *pExpr /* The expression to be fixed to one database */
+){
+ while( pExpr ){
+ if( sqlite3FixSelect(pFix, pExpr->pSelect) ){
+ return 1;
+ }
+ if( sqlite3FixExprList(pFix, pExpr->pList) ){
+ return 1;
+ }
+ if( sqlite3FixExpr(pFix, pExpr->pRight) ){
+ return 1;
+ }
+ pExpr = pExpr->pLeft;
+ }
+ return 0;
+}
+int sqlite3FixExprList(
+ DbFixer *pFix, /* Context of the fixation */
+ ExprList *pList /* The expression to be fixed to one database */
+){
+ int i;
+ struct ExprList_item *pItem;
+ if( pList==0 ) return 0;
+ for(i=0, pItem=pList->a; i<pList->nExpr; i++, pItem++){
+ if( sqlite3FixExpr(pFix, pItem->pExpr) ){
+ return 1;
+ }
+ }
+ return 0;
+}
+#endif
+
+#ifndef SQLITE_OMIT_TRIGGER
+int sqlite3FixTriggerStep(
+ DbFixer *pFix, /* Context of the fixation */
+ TriggerStep *pStep /* The trigger step be fixed to one database */
+){
+ while( pStep ){
+ if( sqlite3FixSelect(pFix, pStep->pSelect) ){
+ return 1;
+ }
+ if( sqlite3FixExpr(pFix, pStep->pWhere) ){
+ return 1;
+ }
+ if( sqlite3FixExprList(pFix, pStep->pExprList) ){
+ return 1;
+ }
+ pStep = pStep->pNext;
+ }
+ return 0;
+}
+#endif
Added: freeswitch/trunk/libs/sqlite/src/auth.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/auth.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,234 @@
+/*
+** 2003 January 11
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code used to implement the sqlite3_set_authorizer()
+** API. This facility is an optional feature of the library. Embedded
+** systems that do not need this facility may omit it by recompiling
+** the library with -DSQLITE_OMIT_AUTHORIZATION=1
+**
+** $Id: auth.c,v 1.25 2006/06/16 08:01:03 danielk1977 Exp $
+*/
+#include "sqliteInt.h"
+
+/*
+** All of the code in this file may be omitted by defining a single
+** macro.
+*/
+#ifndef SQLITE_OMIT_AUTHORIZATION
+
+/*
+** Set or clear the access authorization function.
+**
+** The access authorization function is called during the compilation
+** phase to verify that the user has read and/or write access permission on
+** various fields of the database. The first argument to the auth function
+** is a copy of the 3rd argument to this routine. The second argument
+** to the auth function is one of these constants:
+**
+** SQLITE_CREATE_INDEX
+** SQLITE_CREATE_TABLE
+** SQLITE_CREATE_TEMP_INDEX
+** SQLITE_CREATE_TEMP_TABLE
+** SQLITE_CREATE_TEMP_TRIGGER
+** SQLITE_CREATE_TEMP_VIEW
+** SQLITE_CREATE_TRIGGER
+** SQLITE_CREATE_VIEW
+** SQLITE_DELETE
+** SQLITE_DROP_INDEX
+** SQLITE_DROP_TABLE
+** SQLITE_DROP_TEMP_INDEX
+** SQLITE_DROP_TEMP_TABLE
+** SQLITE_DROP_TEMP_TRIGGER
+** SQLITE_DROP_TEMP_VIEW
+** SQLITE_DROP_TRIGGER
+** SQLITE_DROP_VIEW
+** SQLITE_INSERT
+** SQLITE_PRAGMA
+** SQLITE_READ
+** SQLITE_SELECT
+** SQLITE_TRANSACTION
+** SQLITE_UPDATE
+**
+** The third and fourth arguments to the auth function are the name of
+** the table and the column that are being accessed. The auth function
+** should return either SQLITE_OK, SQLITE_DENY, or SQLITE_IGNORE. If
+** SQLITE_OK is returned, it means that access is allowed. SQLITE_DENY
+** means that the SQL statement will never run - the sqlite3_exec() call
+** will return with an error. SQLITE_IGNORE means that the SQL statement
+** should run but attempts to read the specified column will return NULL
+** and attempts to write the column will be ignored.
+**
+** Setting the auth function to NULL disables this hook. The default
+** setting of the auth function is NULL.
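+**
+** Example (sketch only): an application could install an authorizer that
+** blocks all DELETE statements while allowing everything else. The callback
+** name and its user-data argument below are arbitrary:
+**
+** static int xDenyDelete(void *pNotUsed, int code, const char *z1,
+** const char *z2, const char *z3, const char *z4){
+** return code==SQLITE_DELETE ? SQLITE_DENY : SQLITE_OK;
+** }
+** ...
+** sqlite3_set_authorizer(db, xDenyDelete, 0);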
+*/
+int sqlite3_set_authorizer(
+ sqlite3 *db,
+ int (*xAuth)(void*,int,const char*,const char*,const char*,const char*),
+ void *pArg
+){
+ db->xAuth = xAuth;
+ db->pAuthArg = pArg;
+ sqlite3ExpirePreparedStatements(db);
+ return SQLITE_OK;
+}
+
+/*
+** Write an error message into pParse->zErrMsg that explains that the
+** user-supplied authorization function returned an illegal value.
+*/
+static void sqliteAuthBadReturnCode(Parse *pParse, int rc){
+ sqlite3ErrorMsg(pParse, "illegal return value (%d) from the "
+ "authorization function - should be SQLITE_OK, SQLITE_IGNORE, "
+ "or SQLITE_DENY", rc);
+ pParse->rc = SQLITE_ERROR;
+}
+
+/*
+** The pExpr should be a TK_COLUMN expression. The table referred to
+** is in pTabList or else it is the NEW or OLD table of a trigger.
+** Check to see if it is OK to read this particular column.
+**
+** If the auth function returns SQLITE_IGNORE, change the TK_COLUMN
+** instruction into a TK_NULL. If the auth function returns SQLITE_DENY,
+** then generate an error.
+*/
+void sqlite3AuthRead(
+ Parse *pParse, /* The parser context */
+ Expr *pExpr, /* The expression to check authorization on */
+ SrcList *pTabList /* All tables that pExpr might refer to */
+){
+ sqlite3 *db = pParse->db;
+ int rc;
+ Table *pTab; /* The table being read */
+ const char *zCol; /* Name of the column of the table */
+ int iSrc; /* Index in pTabList->a[] of table being read */
+ const char *zDBase; /* Name of database being accessed */
+ TriggerStack *pStack; /* The stack of current triggers */
+ int iDb; /* The index of the database the expression refers to */
+
+ if( db->xAuth==0 ) return;
+ if( pExpr->op==TK_AS ) return;
+ assert( pExpr->op==TK_COLUMN );
+ iDb = sqlite3SchemaToIndex(pParse->db, pExpr->pSchema);
+ if( iDb<0 ){
+ /* An attempt to read a column out of a subquery or other
+ ** temporary table. */
+ return;
+ }
+ for(iSrc=0; pTabList && iSrc<pTabList->nSrc; iSrc++){
+ if( pExpr->iTable==pTabList->a[iSrc].iCursor ) break;
+ }
+ if( iSrc>=0 && pTabList && iSrc<pTabList->nSrc ){
+ pTab = pTabList->a[iSrc].pTab;
+ }else if( (pStack = pParse->trigStack)!=0 ){
+ /* This must be an attempt to read the NEW or OLD pseudo-tables
+ ** of a trigger.
+ */
+ assert( pExpr->iTable==pStack->newIdx || pExpr->iTable==pStack->oldIdx );
+ pTab = pStack->pTab;
+ }else{
+ return;
+ }
+ if( pTab==0 ) return;
+ if( pExpr->iColumn>=0 ){
+ assert( pExpr->iColumn<pTab->nCol );
+ zCol = pTab->aCol[pExpr->iColumn].zName;
+ }else if( pTab->iPKey>=0 ){
+ assert( pTab->iPKey<pTab->nCol );
+ zCol = pTab->aCol[pTab->iPKey].zName;
+ }else{
+ zCol = "ROWID";
+ }
+ assert( iDb>=0 && iDb<db->nDb );
+ zDBase = db->aDb[iDb].zName;
+ rc = db->xAuth(db->pAuthArg, SQLITE_READ, pTab->zName, zCol, zDBase,
+ pParse->zAuthContext);
+ if( rc==SQLITE_IGNORE ){
+ pExpr->op = TK_NULL;
+ }else if( rc==SQLITE_DENY ){
+ if( db->nDb>2 || iDb!=0 ){
+ sqlite3ErrorMsg(pParse, "access to %s.%s.%s is prohibited",
+ zDBase, pTab->zName, zCol);
+ }else{
+ sqlite3ErrorMsg(pParse, "access to %s.%s is prohibited",pTab->zName,zCol);
+ }
+ pParse->rc = SQLITE_AUTH;
+ }else if( rc!=SQLITE_OK ){
+ sqliteAuthBadReturnCode(pParse, rc);
+ }
+}
+
+/*
+** Do an authorization check using the code and arguments given. Return
+** either SQLITE_OK (zero) or SQLITE_IGNORE or SQLITE_DENY. If SQLITE_DENY
+** is returned, then the error count and error message in pParse are
+** modified appropriately.
+*/
+int sqlite3AuthCheck(
+ Parse *pParse,
+ int code,
+ const char *zArg1,
+ const char *zArg2,
+ const char *zArg3
+){
+ sqlite3 *db = pParse->db;
+ int rc;
+
+ /* Don't do any authorization checks if the database is initialising
+ ** or if the parser is being invoked from within sqlite3_declare_vtab.
+ */
+ if( db->init.busy || IN_DECLARE_VTAB ){
+ return SQLITE_OK;
+ }
+
+ if( db->xAuth==0 ){
+ return SQLITE_OK;
+ }
+ rc = db->xAuth(db->pAuthArg, code, zArg1, zArg2, zArg3, pParse->zAuthContext);
+ if( rc==SQLITE_DENY ){
+ sqlite3ErrorMsg(pParse, "not authorized");
+ pParse->rc = SQLITE_AUTH;
+ }else if( rc!=SQLITE_OK && rc!=SQLITE_IGNORE ){
+ rc = SQLITE_DENY;
+ sqliteAuthBadReturnCode(pParse, rc);
+ }
+ return rc;
+}
+
+/*
+** Push an authorization context. After this routine is called, the
+** zArg3 argument to authorization callbacks will be zContext until
+** popped. Or if pParse==0, this routine is a no-op.
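+**
+** The push/pop pair typically brackets code generation for a statement that
+** operates on a particular table, for example (sketch; pTab is assumed to
+** be the Table being accessed):
+**
+** AuthContext sContext;
+** sqlite3AuthContextPush(pParse, &sContext, pTab->zName);
+** ... generate code that may invoke the authorizer callback ...
+** sqlite3AuthContextPop(&sContext);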
+*/
+void sqlite3AuthContextPush(
+ Parse *pParse,
+ AuthContext *pContext,
+ const char *zContext
+){
+ pContext->pParse = pParse;
+ if( pParse ){
+ pContext->zAuthContext = pParse->zAuthContext;
+ pParse->zAuthContext = zContext;
+ }
+}
+
+/*
+** Pop an authorization context that was previously pushed
+** by sqlite3AuthContextPush
+*/
+void sqlite3AuthContextPop(AuthContext *pContext){
+ if( pContext->pParse ){
+ pContext->pParse->zAuthContext = pContext->zAuthContext;
+ pContext->pParse = 0;
+ }
+}
+
+#endif /* SQLITE_OMIT_AUTHORIZATION */
Added: freeswitch/trunk/libs/sqlite/src/btree.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/btree.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,6666 @@
+/*
+** 2004 April 6
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** $Id: btree.c,v 1.328 2006/08/16 16:42:48 drh Exp $
+**
+** This file implements an external (disk-based) database using BTrees.
+** For a detailed discussion of BTrees, refer to
+**
+** Donald E. Knuth, THE ART OF COMPUTER PROGRAMMING, Volume 3:
+** "Sorting And Searching", pages 473-480. Addison-Wesley
+** Publishing Company, Reading, Massachusetts.
+**
+** The basic idea is that each page of the file contains N database
+** entries and N+1 pointers to subpages.
+**
+** ----------------------------------------------------------------
+** | Ptr(0) | Key(0) | Ptr(1) | Key(1) | ... | Key(N) | Ptr(N+1) |
+** ----------------------------------------------------------------
+**
+** All of the keys on the page that Ptr(0) points to have values less
+** than Key(0). All of the keys on page Ptr(1) and its subpages have
+** values greater than Key(0) and less than Key(1). All of the keys
+** on Ptr(N+1) and its subpages have values greater than Key(N). And
+** so forth.
+**
+** Finding a particular key requires reading O(log(M)) pages from the
+** disk where M is the number of entries in the tree.
+**
+** In this implementation, a single file can hold one or more separate
+** BTrees. Each BTree is identified by the index of its root page. The
+** key and data for any entry are combined to form the "payload". A
+** fixed amount of payload can be carried directly on the database
+** page. If the payload is larger than the preset amount then surplus
+** bytes are stored on overflow pages. The payload for an entry
+** and the preceding pointer are combined to form a "Cell". Each
+** page has a small header which contains the Ptr(N+1) pointer and other
+** information such as the size of key and data.
+**
+** FORMAT DETAILS
+**
+** The file is divided into pages. The first page is called page 1,
+** the second is page 2, and so forth. A page number of zero indicates
+** "no such page". The page size can be anything between 512 and 65536.
+** Each page can be either a btree page, a freelist page or an overflow
+** page.
+**
+** The first page is always a btree page. The first 100 bytes of the first
+** page contain a special header (the "file header") that describes the file.
+** The format of the file header is as follows:
+**
+** OFFSET SIZE DESCRIPTION
+** 0 16 Header string: "SQLite format 3\000"
+** 16 2 Page size in bytes.
+** 18 1 File format write version
+** 19 1 File format read version
+** 20 1 Bytes of unused space at the end of each page
+** 21 1 Max embedded payload fraction
+** 22 1 Min embedded payload fraction
+** 23 1 Min leaf payload fraction
+** 24 4 File change counter
+** 28 4 Reserved for future use
+** 32 4 First freelist page
+** 36 4 Number of freelist pages in the file
+** 40 60 15 4-byte meta values passed to higher layers
+**
+** All of the integer values are big-endian (most significant byte first).
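+**
+** For example, the page size can be read from a raw image of page 1 with a
+** big-endian fetch of the two bytes at offset 16 (sketch only):
+**
+** pageSize = (aData[16]<<8) | aData[17];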
+**
+** The file change counter is incremented when the database is changed more
+** than once within the same second. This counter, together with the
+** modification time of the file, allows other processes to know
+** when the file has changed and thus when they need to flush their
+** cache.
+**
+** The max embedded payload fraction is the amount of the total usable
+** space in a page that can be consumed by a single cell for standard
+** B-tree (non-LEAFDATA) tables. A value of 255 means 100%. The default
+** is to limit the maximum cell size so that at least 4 cells will fit
+** on one page. Thus the default max embedded payload fraction is 64.
+**
+** If the payload for a cell is larger than the max payload, then extra
+** payload is spilled to overflow pages. Once an overflow page is allocated,
+** as many bytes as possible are moved into the overflow pages without letting
+** the cell size drop below the min embedded payload fraction.
+**
+** The min leaf payload fraction is like the min embedded payload fraction
+** except that it applies to leaf nodes in a LEAFDATA tree. The maximum
+** payload fraction for a LEAFDATA tree is always 100% (or 255) and is
+** not specified in the header.
+**
+** Each btree page is divided into three sections: the header, the
+** cell pointer array, and the cell content area. Page 1 also has a 100-byte
+** file header that occurs before the page header.
+**
+** |----------------|
+** | file header | 100 bytes. Page 1 only.
+** |----------------|
+** | page header | 8 bytes for leaves. 12 bytes for interior nodes
+** |----------------|
+** | cell pointer | | 2 bytes per cell. Sorted order.
+** | array | | Grows downward
+** | | v
+** |----------------|
+** | unallocated |
+** | space |
+** |----------------| ^ Grows upwards
+** | cell content | | Arbitrary order interspersed with freeblocks.
+** | area | | and free space fragments.
+** |----------------|
+**
+** The page header looks like this:
+**
+** OFFSET SIZE DESCRIPTION
+** 0 1 Flags. 1: intkey, 2: zerodata, 4: leafdata, 8: leaf
+** 1 2 byte offset to the first freeblock
+** 3 2 number of cells on this page
+** 5 2 first byte of the cell content area
+** 7 1 number of fragmented free bytes
+** 8 4 Right child (the Ptr(N+1) value). Omitted on leaves.
+**
+** The flags define the format of this btree page. The leaf flag means that
+** this page has no children. The zerodata flag means that this page carries
+** only keys and no data. The intkey flag means that the key is an integer
+** which is stored in the key size entry of the cell header rather than in
+** the payload area.
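+**
+** For example, a leaf page of a rowid (intkey) table carries flag value
+** 1+4+8 = 13, while an interior page of an index btree carries only the
+** zerodata flag, 2.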
+**
+** The cell pointer array begins on the first byte after the page header.
+** The cell pointer array contains zero or more 2-byte numbers which are
+** offsets from the beginning of the page to the cell content in the cell
+** content area. The cell pointers occur in sorted order. The system strives
+** to keep free space after the last cell pointer so that new cells can
+** be easily added without having to defragment the page.
+**
+** Cell content is stored at the very end of the page and grows toward the
+** beginning of the page.
+**
+** Unused space within the cell content area is collected into a linked list of
+** freeblocks. Each freeblock is at least 4 bytes in size. The byte offset
+** to the first freeblock is given in the header. Freeblocks occur in
+** increasing order. Because a freeblock must be at least 4 bytes in size,
+** any group of 3 or fewer unused bytes in the cell content area cannot
+** exist on the freeblock chain. A group of 3 or fewer free bytes is called
+** a fragment. The total number of bytes in all fragments is recorded
+** in the page header at offset 7. Each freeblock begins with the
+** following two fields:
+**
+** SIZE DESCRIPTION
+** 2 Byte offset of the next freeblock
+** 2 Bytes in this freeblock
+**
+** Cells are of variable length. Cells are stored in the cell content area at
+** the end of the page. Pointers to the cells are in the cell pointer array
+** that immediately follows the page header. Cells are not necessarily
+** contiguous or in order, but cell pointers are contiguous and in order.
+**
+** Cell content makes use of variable length integers. A variable
+** length integer is 1 to 9 bytes where the lower 7 bits of each
+** byte are used. The integer consists of all bytes that have bit 8 set and
+** the first byte with bit 8 clear. The most significant byte of the integer
+** appears first. A variable-length integer may not be more than 9 bytes long.
+** As a special case, all 8 bits of the 9th byte are used as data. This
+** allows a 64-bit integer to be encoded in 9 bytes.
+**
+** 0x00 becomes 0x00000000
+** 0x7f becomes 0x0000007f
+** 0x81 0x00 becomes 0x00000080
+** 0x82 0x00 becomes 0x00000100
+** 0x80 0x7f becomes 0x0000007f
+** 0x81 0x91 0xd1 0xac 0x78 becomes 0x12345678
+** 0x81 0x81 0x81 0x81 0x01 becomes 0x10204081
+**
+** Variable length integers are used for rowids and to hold the number of
+** bytes of key and data in a btree cell.
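+**
+** A decoder that follows this description might look like the sketch below,
+** where p points at the first byte of the varint and v is an unsigned
+** 64-bit accumulator (illustration only, not the routine actually used):
+**
+** v = 0;
+** for(i=0; i<8 && (p[i]&0x80)!=0; i++){ v = (v<<7) | (p[i]&0x7f); }
+** if( i==8 ) v = (v<<8) | p[8]; (the ninth byte supplies all 8 bits)
+** else v = (v<<7) | p[i]; (the terminating byte has bit 8 clear)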
+**
+** The content of a cell looks like this:
+**
+** SIZE DESCRIPTION
+** 4 Page number of the left child. Omitted if leaf flag is set.
+** var Number of bytes of data. Omitted if the zerodata flag is set.
+** var Number of bytes of key. Or the key itself if intkey flag is set.
+** * Payload
+** 4 First page of the overflow chain. Omitted if no overflow
+**
+** Overflow pages form a linked list. Each page except the last is completely
+** filled with data (pagesize - 4 bytes). The last page can have as little
+** as 1 byte of data.
+**
+** SIZE DESCRIPTION
+** 4 Page number of next overflow page
+** * Data
+**
+** Freelist pages come in two subtypes: trunk pages and leaf pages. The
+** file header points to the first page in a linked list of trunk pages. Each trunk
+** page points to multiple leaf pages. The content of a leaf page is
+** unspecified. A trunk page looks like this:
+**
+** SIZE DESCRIPTION
+** 4 Page number of next trunk page
+** 4 Number of leaf pointers on this page
+** * zero or more page numbers of leaves
+*/
+#include "sqliteInt.h"
+#include "pager.h"
+#include "btree.h"
+#include "os.h"
+#include <assert.h>
+
+/* Round up a number to the next larger multiple of 8. This is used
+** to force 8-byte alignment on 64-bit architectures.
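+** For example, ROUND8(21)==24 and ROUND8(24)==24.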
+*/
+#define ROUND8(x) ((x+7)&~7)
+
+
+/* The following value is the maximum cell size assuming a maximum page
+** size given above.
+*/
+#define MX_CELL_SIZE(pBt) (pBt->pageSize-8)
+
+/* The maximum number of cells on a single page of the database. This
+** assumes a minimum cell size of 3 bytes. Such small cells will be
+** exceedingly rare, but they are possible.
+*/
+#define MX_CELL(pBt) ((pBt->pageSize-8)/3)
+
+/* Forward declarations */
+typedef struct MemPage MemPage;
+typedef struct BtLock BtLock;
+
+/*
+** This is a magic string that appears at the beginning of every
+** SQLite database in order to identify the file as a real database.
+**
+** You can change this value at compile-time by specifying a
+** -DSQLITE_FILE_HEADER="..." on the compiler command-line. The
+** header must be exactly 16 bytes including the zero-terminator so
+** the string itself should be 15 characters long. If you change
+** the header, then your custom library will not be able to read
+** databases generated by the standard tools and the standard tools
+** will not be able to read databases created by your custom library.
+*/
+#ifndef SQLITE_FILE_HEADER /* 123456789 123456 */
+# define SQLITE_FILE_HEADER "SQLite format 3"
+#endif
+static const char zMagicHeader[] = SQLITE_FILE_HEADER;
+
+/*
+** Page type flags. An ORed combination of these flags appears as the
+** first byte of every BTree page.
+*/
+#define PTF_INTKEY 0x01
+#define PTF_ZERODATA 0x02
+#define PTF_LEAFDATA 0x04
+#define PTF_LEAF 0x08
+
+/*
+** As each page of the file is loaded into memory, an instance of the following
+** structure is appended and initialized to zero. This structure stores
+** information about the page that is decoded from the raw file page.
+**
+** The pParent field points back to the parent page. This allows us to
+** walk up the BTree from any leaf to the root. Care must be taken to
+** unref() the parent page pointer when this page is no longer referenced.
+** The pageDestructor() routine handles that chore.
+*/
+struct MemPage {
+ u8 isInit; /* True if previously initialized. MUST BE FIRST! */
+ u8 idxShift; /* True if Cell indices have changed */
+ u8 nOverflow; /* Number of overflow cell bodies in aCell[] */
+ u8 intKey; /* True if intkey flag is set */
+ u8 leaf; /* True if leaf flag is set */
+ u8 zeroData; /* True if table stores keys only */
+ u8 leafData; /* True if table stores data on leaves only */
+ u8 hasData; /* True if this page stores data */
+ u8 hdrOffset; /* 100 for page 1. 0 otherwise */
+ u8 childPtrSize; /* 0 if leaf==1. 4 if leaf==0 */
+ u16 maxLocal; /* Copy of Btree.maxLocal or Btree.maxLeaf */
+ u16 minLocal; /* Copy of Btree.minLocal or Btree.minLeaf */
+ u16 cellOffset; /* Index in aData of first cell pointer */
+ u16 idxParent; /* Index in parent of this node */
+ u16 nFree; /* Number of free bytes on the page */
+ u16 nCell; /* Number of cells on this page, local and ovfl */
+ struct _OvflCell { /* Cells that will not fit on aData[] */
+ u8 *pCell; /* Pointer to the body of the overflow cell */
+ u16 idx; /* Insert this cell before idx-th non-overflow cell */
+ } aOvfl[5];
+ BtShared *pBt; /* Pointer back to BTree structure */
+ u8 *aData; /* Pointer back to the start of the page */
+ Pgno pgno; /* Page number for this page */
+ MemPage *pParent; /* The parent of this page. NULL for root */
+};
+
+/*
+** The in-memory image of a disk page has the auxiliary information appended
+** to the end. EXTRA_SIZE is the number of bytes of space needed to hold
+** that extra information.
+*/
+#define EXTRA_SIZE sizeof(MemPage)
+
+/* Btree handle */
+struct Btree {
+ sqlite3 *pSqlite;
+ BtShared *pBt;
+ u8 inTrans; /* TRANS_NONE, TRANS_READ or TRANS_WRITE */
+};
+
+/*
+** Btree.inTrans may take one of the following values.
+**
+** If the shared-data extension is enabled, there may be multiple users
+** of the Btree structure. At most one of these may open a write transaction,
+** but any number may have active read transactions. Variable Btree.pDb
+** points to the handle that owns any current write-transaction.
+*/
+#define TRANS_NONE 0
+#define TRANS_READ 1
+#define TRANS_WRITE 2
+
+/*
+** Everything we need to know about an open database
+*/
+struct BtShared {
+ Pager *pPager; /* The page cache */
+ BtCursor *pCursor; /* A list of all open cursors */
+ MemPage *pPage1; /* First page of the database */
+ u8 inStmt; /* True if we are in a statement subtransaction */
+ u8 readOnly; /* True if the underlying file is readonly */
+ u8 maxEmbedFrac; /* Maximum payload as % of total page size */
+ u8 minEmbedFrac; /* Minimum payload as % of total page size */
+ u8 minLeafFrac; /* Minimum leaf payload as % of total page size */
+ u8 pageSizeFixed; /* True if the page size can no longer be changed */
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ u8 autoVacuum; /* True if database supports auto-vacuum */
+#endif
+ u16 pageSize; /* Total number of bytes on a page */
+ u16 usableSize; /* Number of usable bytes on each page */
+ int maxLocal; /* Maximum local payload in non-LEAFDATA tables */
+ int minLocal; /* Minimum local payload in non-LEAFDATA tables */
+ int maxLeaf; /* Maximum local payload in a LEAFDATA table */
+ int minLeaf; /* Minimum local payload in a LEAFDATA table */
+ BusyHandler *pBusyHandler; /* Callback for when there is lock contention */
+ u8 inTransaction; /* Transaction state */
+ int nRef; /* Number of references to this structure */
+ int nTransaction; /* Number of open transactions (read + write) */
+ void *pSchema; /* Pointer to space allocated by sqlite3BtreeSchema() */
+ void (*xFreeSchema)(void*); /* Destructor for BtShared.pSchema */
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ BtLock *pLock; /* List of locks held on this shared-btree struct */
+ BtShared *pNext; /* Next in ThreadData.pBtree linked list */
+#endif
+};
+
+/*
+** An instance of the following structure is used to hold information
+** about a cell. The parseCellPtr() function fills in this structure
+** based on information extracted from the raw disk page.
+*/
+typedef struct CellInfo CellInfo;
+struct CellInfo {
+ u8 *pCell; /* Pointer to the start of cell content */
+ i64 nKey; /* The key for INTKEY tables, or number of bytes in key */
+ u32 nData; /* Number of bytes of data */
+ u16 nHeader; /* Size of the cell content header in bytes */
+ u16 nLocal; /* Amount of payload held locally */
+ u16 iOverflow; /* Offset to overflow page number. Zero if no overflow */
+ u16 nSize; /* Size of the cell content on the main b-tree page */
+};
+
+/*
+** A cursor is a pointer to a particular entry in the BTree.
+** The entry is identified by its MemPage and the index in
+** MemPage.aCell[] of the entry.
+*/
+struct BtCursor {
+ Btree *pBtree; /* The Btree to which this cursor belongs */
+ BtCursor *pNext, *pPrev; /* Forms a linked list of all cursors */
+ int (*xCompare)(void*,int,const void*,int,const void*); /* Key comp func */
+ void *pArg; /* First arg to xCompare() */
+ Pgno pgnoRoot; /* The root page of this tree */
+ MemPage *pPage; /* Page that contains the entry */
+ int idx; /* Index of the entry in pPage->aCell[] */
+ CellInfo info; /* A parse of the cell we are pointing at */
+ u8 wrFlag; /* True if writable */
+ u8 eState; /* One of the CURSOR_XXX constants (see below) */
+ void *pKey; /* Saved key that was cursor's last known position */
+ i64 nKey; /* Size of pKey, or last integer key */
+ int skip; /* (skip<0) -> Prev() is a no-op. (skip>0) -> Next() is a no-op */
+};
+
+/*
+** Potential values for BtCursor.eState.
+**
+** CURSOR_VALID:
+** Cursor points to a valid entry. getPayload() etc. may be called.
+**
+** CURSOR_INVALID:
+** Cursor does not point to a valid entry. This can happen (for example)
+** because the table is empty or because BtreeCursorFirst() has not been
+** called.
+**
+** CURSOR_REQUIRESEEK:
+** The table that this cursor was opened on still exists, but has been
+** modified since the cursor was last used. The cursor position is saved
+** in variables BtCursor.pKey and BtCursor.nKey. When a cursor is in
+** this state, restoreOrClearCursorPosition() can be called to attempt to
+** seek the cursor to the saved position.
+*/
+#define CURSOR_INVALID 0
+#define CURSOR_VALID 1
+#define CURSOR_REQUIRESEEK 2
+
+/*
+** The TRACE macro will print high-level status information about the
+** btree operation when the global variable sqlite3_btree_trace is
+** enabled.
+*/
+#if SQLITE_TEST
+# define TRACE(X) if( sqlite3_btree_trace )\
+ { sqlite3DebugPrintf X; fflush(stdout); }
+int sqlite3_btree_trace=0; /* True to enable tracing */
+#else
+# define TRACE(X)
+#endif
+
+/*
+** Forward declaration
+*/
+static int checkReadLocks(Btree*,Pgno,BtCursor*);
+
+/*
+** Read or write two- and four-byte big-endian integer values.
+*/
+static u32 get2byte(unsigned char *p){
+ return (p[0]<<8) | p[1];
+}
+static u32 get4byte(unsigned char *p){
+ return (p[0]<<24) | (p[1]<<16) | (p[2]<<8) | p[3];
+}
+static void put2byte(unsigned char *p, u32 v){
+ p[0] = v>>8;
+ p[1] = v;
+}
+static void put4byte(unsigned char *p, u32 v){
+ p[0] = v>>24;
+ p[1] = v>>16;
+ p[2] = v>>8;
+ p[3] = v;
+}
+
+/*
+** Routines to read and write variable-length integers. These used to
+** be defined locally, but now we use the varint routines in the util.c
+** file.
+*/
+#define getVarint sqlite3GetVarint
+/* #define getVarint32 sqlite3GetVarint32 */
+#define getVarint32(A,B) ((*B=*(A))<=0x7f?1:sqlite3GetVarint32(A,B))
+#define putVarint sqlite3PutVarint
+
+/* The database page the PENDING_BYTE occupies. This page is never used.
+** TODO: This macro is very similar to PAGER_MJ_PGNO() in pager.c. They
+** should possibly be consolidated (presumably in pager.h).
+**
+** If disk I/O is omitted (meaning that the database is stored purely
+** in memory) then there is no pending byte.
+*/
+#ifdef SQLITE_OMIT_DISKIO
+# define PENDING_BYTE_PAGE(pBt) 0x7fffffff
+#else
+# define PENDING_BYTE_PAGE(pBt) ((PENDING_BYTE/(pBt)->pageSize)+1)
+#endif
+
+/*
+** A linked list of the following structures is stored at BtShared.pLock.
+** Locks are added (or upgraded from READ_LOCK to WRITE_LOCK) when a cursor
+** is opened on the table with root page BtShared.iTable. Locks are removed
+** from this list when a transaction is committed or rolled back, or when
+** a btree handle is closed.
+*/
+struct BtLock {
+ Btree *pBtree; /* Btree handle holding this lock */
+ Pgno iTable; /* Root page of table */
+ u8 eLock; /* READ_LOCK or WRITE_LOCK */
+ BtLock *pNext; /* Next in BtShared.pLock list */
+};
+
+/* Candidate values for BtLock.eLock */
+#define READ_LOCK 1
+#define WRITE_LOCK 2
+
+#ifdef SQLITE_OMIT_SHARED_CACHE
+ /*
+ ** The functions queryTableLock(), lockTable() and unlockAllTables()
+ ** manipulate entries in the BtShared.pLock linked list used to store
+ ** shared-cache table level locks. If the library is compiled with the
+ ** shared-cache feature disabled, then there is only ever one user
+ ** of each BtShared structure and so this locking is not necessary.
+ ** So define the lock related functions as no-ops.
+ */
+ #define queryTableLock(a,b,c) SQLITE_OK
+ #define lockTable(a,b,c) SQLITE_OK
+ #define unlockAllTables(a)
+#else
+
+
+/*
+** Query to see if btree handle p may obtain a lock of type eLock
+** (READ_LOCK or WRITE_LOCK) on the table with root-page iTab. Return
+** SQLITE_OK if the lock may be obtained (by calling lockTable()), or
+** SQLITE_LOCKED if not.
+*/
+static int queryTableLock(Btree *p, Pgno iTab, u8 eLock){
+ BtShared *pBt = p->pBt;
+ BtLock *pIter;
+
+ /* This is a no-op if the shared-cache is not enabled */
+ if( 0==sqlite3ThreadDataReadOnly()->useSharedData ){
+ return SQLITE_OK;
+ }
+
+ /* This (along with lockTable()) is where the ReadUncommitted flag is
+ ** dealt with. If the caller is querying for a read-lock and the flag is
+ ** set, it is unconditionally granted - even if there are write-locks
+ ** on the table. If a write-lock is requested, the ReadUncommitted flag
+ ** is not considered.
+ **
+ ** In function lockTable(), if a read-lock is demanded and the
+ ** ReadUncommitted flag is set, no entry is added to the locks list
+ ** (BtShared.pLock).
+ **
+ ** To summarize: If the ReadUncommitted flag is set, then read cursors do
+ ** not create or respect table locks. The locking procedure for a
+ ** write-cursor does not change.
+ */
+ if(
+ !p->pSqlite ||
+ 0==(p->pSqlite->flags&SQLITE_ReadUncommitted) ||
+ eLock==WRITE_LOCK ||
+ iTab==MASTER_ROOT
+ ){
+ for(pIter=pBt->pLock; pIter; pIter=pIter->pNext){
+ if( pIter->pBtree!=p && pIter->iTable==iTab &&
+ (pIter->eLock!=eLock || eLock!=READ_LOCK) ){
+ return SQLITE_LOCKED;
+ }
+ }
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Add a lock on the table with root-page iTable to the shared-btree used
+** by Btree handle p. Parameter eLock must be either READ_LOCK or
+** WRITE_LOCK.
+**
+** SQLITE_OK is returned if the lock is added successfully. SQLITE_BUSY and
+** SQLITE_NOMEM may also be returned.
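+**
+** A typical caller (sketch only) first checks that the lock can be granted
+** and only then records it:
+**
+** rc = queryTableLock(p, iTable, eLock);
+** if( rc==SQLITE_OK ) rc = lockTable(p, iTable, eLock);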
+*/
+static int lockTable(Btree *p, Pgno iTable, u8 eLock){
+ BtShared *pBt = p->pBt;
+ BtLock *pLock = 0;
+ BtLock *pIter;
+
+ /* This is a no-op if the shared-cache is not enabled */
+ if( 0==sqlite3ThreadDataReadOnly()->useSharedData ){
+ return SQLITE_OK;
+ }
+
+ assert( SQLITE_OK==queryTableLock(p, iTable, eLock) );
+
+ /* If the read-uncommitted flag is set and a read-lock is requested,
+ ** return early without adding an entry to the BtShared.pLock list. See
+ ** comment in function queryTableLock() for more info on handling
+ ** the ReadUncommitted flag.
+ */
+ if(
+ (p->pSqlite) &&
+ (p->pSqlite->flags&SQLITE_ReadUncommitted) &&
+ (eLock==READ_LOCK) &&
+ iTable!=MASTER_ROOT
+ ){
+ return SQLITE_OK;
+ }
+
+ /* First search the list for an existing lock on this table. */
+ for(pIter=pBt->pLock; pIter; pIter=pIter->pNext){
+ if( pIter->iTable==iTable && pIter->pBtree==p ){
+ pLock = pIter;
+ break;
+ }
+ }
+
+ /* If the above search did not find a BtLock struct associating Btree p
+ ** with table iTable, allocate one and link it into the list.
+ */
+ if( !pLock ){
+ pLock = (BtLock *)sqliteMalloc(sizeof(BtLock));
+ if( !pLock ){
+ return SQLITE_NOMEM;
+ }
+ pLock->iTable = iTable;
+ pLock->pBtree = p;
+ pLock->pNext = pBt->pLock;
+ pBt->pLock = pLock;
+ }
+
+ /* Set the BtLock.eLock variable to the maximum of the current lock
+ ** and the requested lock. This means if a write-lock was already held
+ ** and a read-lock requested, we don't incorrectly downgrade the lock.
+ */
+ assert( WRITE_LOCK>READ_LOCK );
+ if( eLock>pLock->eLock ){
+ pLock->eLock = eLock;
+ }
+
+ return SQLITE_OK;
+}
+
+/*
+** Release all the table locks (locks obtained via calls to the lockTable()
+** procedure) held by Btree handle p.
+*/
+static void unlockAllTables(Btree *p){
+ BtLock **ppIter = &p->pBt->pLock;
+
+ /* If the shared-cache extension is not enabled, there should be no
+ ** locks in the BtShared.pLock list, making this procedure a no-op. Assert
+ ** that this is the case.
+ */
+ assert( sqlite3ThreadDataReadOnly()->useSharedData || 0==*ppIter );
+
+ while( *ppIter ){
+ BtLock *pLock = *ppIter;
+ if( pLock->pBtree==p ){
+ *ppIter = pLock->pNext;
+ sqliteFree(pLock);
+ }else{
+ ppIter = &pLock->pNext;
+ }
+ }
+}
+#endif /* SQLITE_OMIT_SHARED_CACHE */
+
+static void releasePage(MemPage *pPage); /* Forward reference */
+
+/*
+** Save the current cursor position in the variables BtCursor.nKey
+** and BtCursor.pKey. The cursor's state is set to CURSOR_REQUIRESEEK.
+*/
+static int saveCursorPosition(BtCursor *pCur){
+ int rc;
+
+ assert( CURSOR_VALID==pCur->eState );
+ assert( 0==pCur->pKey );
+
+ rc = sqlite3BtreeKeySize(pCur, &pCur->nKey);
+
+ /* If this is an intKey table, then the above call to BtreeKeySize()
+ ** stores the integer key in pCur->nKey. In this case this value is
+ ** all that is required. Otherwise, if pCur is not open on an intKey
+ ** table, then malloc space for and store the pCur->nKey bytes of key
+ ** data.
+ */
+ if( rc==SQLITE_OK && 0==pCur->pPage->intKey){
+ void *pKey = sqliteMalloc(pCur->nKey);
+ if( pKey ){
+ rc = sqlite3BtreeKey(pCur, 0, pCur->nKey, pKey);
+ if( rc==SQLITE_OK ){
+ pCur->pKey = pKey;
+ }else{
+ sqliteFree(pKey);
+ }
+ }else{
+ rc = SQLITE_NOMEM;
+ }
+ }
+ assert( !pCur->pPage->intKey || !pCur->pKey );
+
+ if( rc==SQLITE_OK ){
+ releasePage(pCur->pPage);
+ pCur->pPage = 0;
+ pCur->eState = CURSOR_REQUIRESEEK;
+ }
+
+ return rc;
+}
+
+/*
+** Save the positions of all cursors except pExcept open on the table
+** with root-page iRoot. Usually, this is called just before cursor
+** pExcept is used to modify the table (BtreeDelete() or BtreeInsert()).
+*/
+static int saveAllCursors(BtShared *pBt, Pgno iRoot, BtCursor *pExcept){
+ BtCursor *p;
+ for(p=pBt->pCursor; p; p=p->pNext){
+ if( p!=pExcept && (0==iRoot || p->pgnoRoot==iRoot) &&
+ p->eState==CURSOR_VALID ){
+ int rc = saveCursorPosition(p);
+ if( SQLITE_OK!=rc ){
+ return rc;
+ }
+ }
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Restore the cursor to the position it was in (or as close to as possible)
+** when saveCursorPosition() was called. Note that this call deletes the
+** saved position info stored by saveCursorPosition(), so there can be
+** at most one effective restoreOrClearCursorPosition() call after each
+** saveCursorPosition().
+**
+** If the second argument - doSeek - is false, then instead of
+** returning the cursor to its saved position, any saved position is deleted
+** and the cursor state set to CURSOR_INVALID.
+*/
+static int restoreOrClearCursorPositionX(BtCursor *pCur, int doSeek){
+ int rc = SQLITE_OK;
+ assert( pCur->eState==CURSOR_REQUIRESEEK );
+ pCur->eState = CURSOR_INVALID;
+ if( doSeek ){
+ rc = sqlite3BtreeMoveto(pCur, pCur->pKey, pCur->nKey, &pCur->skip);
+ }
+ if( rc==SQLITE_OK ){
+ sqliteFree(pCur->pKey);
+ pCur->pKey = 0;
+ assert( CURSOR_VALID==pCur->eState || CURSOR_INVALID==pCur->eState );
+ }
+ return rc;
+}
+
+#define restoreOrClearCursorPosition(p,x) \
+ (p->eState==CURSOR_REQUIRESEEK?restoreOrClearCursorPositionX(p,x):SQLITE_OK)
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+/*
+** These macros define the location of the pointer-map entry for a
+** database page. The first argument to each is the number of usable
+** bytes on each page of the database (often 1024). The second is the
+** page number to look up in the pointer map.
+**
+** PTRMAP_PAGENO returns the database page number of the pointer-map
+** page that stores the required pointer. PTRMAP_PTROFFSET returns
+** the offset of the requested map entry.
+**
+** If the pgno argument passed to PTRMAP_PAGENO is a pointer-map page,
+** then pgno is returned. So (pgno==PTRMAP_PAGENO(pgsz, pgno)) can be
+** used to test if pgno is a pointer-map page. PTRMAP_ISPAGE implements
+** this test.
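+**
+** For example, with a usable page size of 1024 bytes each pointer-map page
+** holds 1024/5 == 204 five-byte entries, so page 2 (the first pointer-map
+** page) covers pages 3 through 206, page 207 is the next pointer-map page,
+** and the entry for page 300 is found on page 207 at byte offset
+** 5*(300-207-1) == 460.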
+*/
+#define PTRMAP_PAGENO(pBt, pgno) ptrmapPageno(pBt, pgno)
+#define PTRMAP_PTROFFSET(pBt, pgno) (5*(pgno-ptrmapPageno(pBt, pgno)-1))
+#define PTRMAP_ISPAGE(pBt, pgno) (PTRMAP_PAGENO((pBt),(pgno))==(pgno))
+
+static Pgno ptrmapPageno(BtShared *pBt, Pgno pgno){
+ int nPagesPerMapPage = (pBt->usableSize/5)+1;
+ int iPtrMap = (pgno-2)/nPagesPerMapPage;
+ int ret = (iPtrMap*nPagesPerMapPage) + 2;
+ if( ret==PENDING_BYTE_PAGE(pBt) ){
+ ret++;
+ }
+ return ret;
+}
+
+/*
+** The pointer map is a lookup table that identifies the parent page for
+** each child page in the database file. The parent page is the page that
+** contains a pointer to the child. Every page in the database has
+** 0 or 1 parent pages. (In this context 'database page' refers
+** to any page that is not part of the pointer map itself.) Each pointer map
+** entry consists of a single byte 'type' and a 4 byte parent page number.
+** The PTRMAP_XXX identifiers below are the valid types.
+**
+** The purpose of the pointer map is to facilitate moving pages from one
+** position in the file to another as part of autovacuum. When a page
+** is moved, the pointer in its parent must be updated to point to the
+** new location. The pointer map is used to locate the parent page quickly.
+**
+** PTRMAP_ROOTPAGE: The database page is a root-page. The page-number is not
+** used in this case.
+**
+** PTRMAP_FREEPAGE: The database page is an unused (free) page. The page-number
+** is not used in this case.
+**
+** PTRMAP_OVERFLOW1: The database page is the first page in a list of
+** overflow pages. The page number identifies the page that
+** contains the cell with a pointer to this overflow page.
+**
+** PTRMAP_OVERFLOW2: The database page is the second or later page in a list of
+** overflow pages. The page-number identifies the previous
+** page in the overflow page list.
+**
+** PTRMAP_BTREE: The database page is a non-root btree page. The page number
+** identifies the parent page in the btree.
+*/
+#define PTRMAP_ROOTPAGE 1
+#define PTRMAP_FREEPAGE 2
+#define PTRMAP_OVERFLOW1 3
+#define PTRMAP_OVERFLOW2 4
+#define PTRMAP_BTREE 5
+
+/*
+** Write an entry into the pointer map.
+**
+** This routine updates the pointer map entry for page number 'key'
+** so that it maps to type 'eType' and parent page number 'pgno'.
+** An error code is returned if something goes wrong, otherwise SQLITE_OK.
+*/
+static int ptrmapPut(BtShared *pBt, Pgno key, u8 eType, Pgno parent){
+ u8 *pPtrmap; /* The pointer map page */
+ Pgno iPtrmap; /* The pointer map page number */
+ int offset; /* Offset in pointer map page */
+ int rc;
+
+ /* The master-journal page number must never be used as a pointer map page */
+ assert( 0==PTRMAP_ISPAGE(pBt, PENDING_BYTE_PAGE(pBt)) );
+
+ assert( pBt->autoVacuum );
+ if( key==0 ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ iPtrmap = PTRMAP_PAGENO(pBt, key);
+ rc = sqlite3pager_get(pBt->pPager, iPtrmap, (void **)&pPtrmap);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ offset = PTRMAP_PTROFFSET(pBt, key);
+
+ if( eType!=pPtrmap[offset] || get4byte(&pPtrmap[offset+1])!=parent ){
+ TRACE(("PTRMAP_UPDATE: %d->(%d,%d)\n", key, eType, parent));
+ rc = sqlite3pager_write(pPtrmap);
+ if( rc==SQLITE_OK ){
+ pPtrmap[offset] = eType;
+ put4byte(&pPtrmap[offset+1], parent);
+ }
+ }
+
+ sqlite3pager_unref(pPtrmap);
+ return rc;
+}
+
+/*
+** Read an entry from the pointer map.
+**
+** This routine retrieves the pointer map entry for page 'key', writing
+** the type and parent page number to *pEType and *pPgno respectively.
+** An error code is returned if something goes wrong, otherwise SQLITE_OK.
+*/
+static int ptrmapGet(BtShared *pBt, Pgno key, u8 *pEType, Pgno *pPgno){
+ int iPtrmap; /* Pointer map page index */
+ u8 *pPtrmap; /* Pointer map page data */
+ int offset; /* Offset of entry in pointer map */
+ int rc;
+
+ iPtrmap = PTRMAP_PAGENO(pBt, key);
+ rc = sqlite3pager_get(pBt->pPager, iPtrmap, (void **)&pPtrmap);
+ if( rc!=0 ){
+ return rc;
+ }
+
+ offset = PTRMAP_PTROFFSET(pBt, key);
+ assert( pEType!=0 );
+ *pEType = pPtrmap[offset];
+ if( pPgno ) *pPgno = get4byte(&pPtrmap[offset+1]);
+
+ sqlite3pager_unref(pPtrmap);
+ if( *pEType<1 || *pEType>5 ) return SQLITE_CORRUPT_BKPT;
+ return SQLITE_OK;
+}
+
+#endif /* SQLITE_OMIT_AUTOVACUUM */
+
+/*
+** Given a btree page and a cell index (0 means the first cell on
+** the page, 1 means the second cell, and so forth) return a pointer
+** to the cell content.
+**
+** This routine works only for pages that do not contain overflow cells.
+*/
+static u8 *findCell(MemPage *pPage, int iCell){
+ u8 *data = pPage->aData;
+ assert( iCell>=0 );
+ assert( iCell<get2byte(&data[pPage->hdrOffset+3]) );
+ return data + get2byte(&data[pPage->cellOffset+2*iCell]);
+}
+
+/*
+** This is a more complex version of findCell() that works for
+** pages that do contain overflow cells. See the cell-insertion code for
+** how overflow cells arise.
+*/
+static u8 *findOverflowCell(MemPage *pPage, int iCell){
+ int i;
+ for(i=pPage->nOverflow-1; i>=0; i--){
+ int k;
+ struct _OvflCell *pOvfl;
+ pOvfl = &pPage->aOvfl[i];
+ k = pOvfl->idx;
+ if( k<=iCell ){
+ if( k==iCell ){
+ return pOvfl->pCell;
+ }
+ iCell--;
+ }
+ }
+ return findCell(pPage, iCell);
+}
+
+/*
+** Parse a cell content block and fill in the CellInfo structure. There
+** are two versions of this function. parseCell() takes a cell index
+** as the second argument and parseCellPtr() takes a pointer to the
+** body of the cell as its second argument.
+*/
+static void parseCellPtr(
+ MemPage *pPage, /* Page containing the cell */
+ u8 *pCell, /* Pointer to the cell text. */
+ CellInfo *pInfo /* Fill in this structure */
+){
+ int n; /* Number of bytes in cell content header */
+ u32 nPayload; /* Number of bytes of cell payload */
+
+ pInfo->pCell = pCell;
+ assert( pPage->leaf==0 || pPage->leaf==1 );
+ n = pPage->childPtrSize;
+ assert( n==4-4*pPage->leaf );
+ if( pPage->hasData ){
+ n += getVarint32(&pCell[n], &nPayload);
+ }else{
+ nPayload = 0;
+ }
+ pInfo->nData = nPayload;
+ if( pPage->intKey ){
+ n += getVarint(&pCell[n], (u64 *)&pInfo->nKey);
+ }else{
+ u32 x;
+ n += getVarint32(&pCell[n], &x);
+ pInfo->nKey = x;
+ nPayload += x;
+ }
+ pInfo->nHeader = n;
+ if( nPayload<=pPage->maxLocal ){
+ /* This is the (easy) common case where the entire payload fits
+ ** on the local page. No overflow is required.
+ */
+ int nSize; /* Total size of cell content in bytes */
+ pInfo->nLocal = nPayload;
+ pInfo->iOverflow = 0;
+ nSize = nPayload + n;
+ if( nSize<4 ){
+ nSize = 4; /* Minimum cell size is 4 */
+ }
+ pInfo->nSize = nSize;
+ }else{
+ /* If the payload will not fit completely on the local page, we have
+ ** to decide how much to store locally and how much to spill onto
+ ** overflow pages. The strategy is to minimize the amount of unused
+ ** space on overflow pages while keeping the amount of local storage
+ ** in between minLocal and maxLocal.
+ **
+ ** Warning: changing the way overflow payload is distributed in any
+ ** way will result in an incompatible file format.
+ */
+ int minLocal; /* Minimum amount of payload held locally */
+ int maxLocal; /* Maximum amount of payload held locally */
+ int surplus; /* Overflow payload available for local storage */
+
+ minLocal = pPage->minLocal;
+ maxLocal = pPage->maxLocal;
+ surplus = minLocal + (nPayload - minLocal)%(pPage->pBt->usableSize - 4);
+ if( surplus <= maxLocal ){
+ pInfo->nLocal = surplus;
+ }else{
+ pInfo->nLocal = minLocal;
+ }
+ pInfo->iOverflow = pInfo->nLocal + n;
+ pInfo->nSize = pInfo->iOverflow + 4;
+ }
+}
+static void parseCell(
+ MemPage *pPage, /* Page containing the cell */
+ int iCell, /* The cell index. First cell is 0 */
+ CellInfo *pInfo /* Fill in this structure */
+){
+ parseCellPtr(pPage, findCell(pPage, iCell), pInfo);
+}
+
+/*
+** Compute the total number of bytes that a Cell needs in the cell
+** data area of the btree-page. The return number includes the cell
+** data header and the local payload, but not any overflow page or
+** the space used by the cell pointer.
+*/
+#ifndef NDEBUG
+static int cellSize(MemPage *pPage, int iCell){
+ CellInfo info;
+ parseCell(pPage, iCell, &info);
+ return info.nSize;
+}
+#endif
+static int cellSizePtr(MemPage *pPage, u8 *pCell){
+ CellInfo info;
+ parseCellPtr(pPage, pCell, &info);
+ return info.nSize;
+}
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+/*
+** If the cell pCell, part of page pPage contains a pointer
+** to an overflow page, insert an entry into the pointer-map
+** for the overflow page.
+*/
+static int ptrmapPutOvflPtr(MemPage *pPage, u8 *pCell){
+ if( pCell ){
+ CellInfo info;
+ parseCellPtr(pPage, pCell, &info);
+ if( (info.nData+(pPage->intKey?0:info.nKey))>info.nLocal ){
+ Pgno ovfl = get4byte(&pCell[info.iOverflow]);
+ return ptrmapPut(pPage->pBt, ovfl, PTRMAP_OVERFLOW1, pPage->pgno);
+ }
+ }
+ return SQLITE_OK;
+}
+/*
+** If the cell with index iCell on page pPage contains a pointer
+** to an overflow page, insert an entry into the pointer-map
+** for the overflow page.
+*/
+static int ptrmapPutOvfl(MemPage *pPage, int iCell){
+ u8 *pCell;
+ pCell = findOverflowCell(pPage, iCell);
+ return ptrmapPutOvflPtr(pPage, pCell);
+}
+#endif
+
+
+/*
+** Do sanity checking on a page. Throw an exception if anything is
+** not right.
+**
+** This routine is used for internal error checking only. It is omitted
+** from most builds.
+*/
+#if defined(BTREE_DEBUG) && !defined(NDEBUG) && 0
+static void _pageIntegrity(MemPage *pPage){
+ int usableSize;
+ u8 *data;
+ int i, j, idx, c, pc, hdr, nFree;
+ int cellOffset;
+ int nCell, cellLimit;
+ u8 *used;
+
+ used = sqliteMallocRaw( pPage->pBt->pageSize );
+ if( used==0 ) return;
+ usableSize = pPage->pBt->usableSize;
+ assert( pPage->aData==&((unsigned char*)pPage)[-pPage->pBt->pageSize] );
+ hdr = pPage->hdrOffset;
+ assert( hdr==(pPage->pgno==1 ? 100 : 0) );
+ assert( pPage->pgno==sqlite3pager_pagenumber(pPage->aData) );
+ c = pPage->aData[hdr];
+ if( pPage->isInit ){
+ assert( pPage->leaf == ((c & PTF_LEAF)!=0) );
+ assert( pPage->zeroData == ((c & PTF_ZERODATA)!=0) );
+ assert( pPage->leafData == ((c & PTF_LEAFDATA)!=0) );
+ assert( pPage->intKey == ((c & (PTF_INTKEY|PTF_LEAFDATA))!=0) );
+ assert( pPage->hasData ==
+ !(pPage->zeroData || (!pPage->leaf && pPage->leafData)) );
+ assert( pPage->cellOffset==pPage->hdrOffset+12-4*pPage->leaf );
+ assert( pPage->nCell == get2byte(&pPage->aData[hdr+3]) );
+ }
+ data = pPage->aData;
+ memset(used, 0, usableSize);
+ for(i=0; i<hdr+10-pPage->leaf*4; i++) used[i] = 1;
+ nFree = 0;
+ pc = get2byte(&data[hdr+1]);
+ while( pc ){
+ int size;
+ assert( pc>0 && pc<usableSize-4 );
+ size = get2byte(&data[pc+2]);
+ assert( pc+size<=usableSize );
+ nFree += size;
+ for(i=pc; i<pc+size; i++){
+ assert( used[i]==0 );
+ used[i] = 1;
+ }
+ pc = get2byte(&data[pc]);
+ }
+ idx = 0;
+ nCell = get2byte(&data[hdr+3]);
+ cellLimit = get2byte(&data[hdr+5]);
+ cellOffset = pPage->cellOffset;
+ assert( pPage->isInit==0
+ || pPage->nFree==nFree+data[hdr+7]+cellLimit-(cellOffset+2*nCell) );
+ for(i=0; i<nCell; i++){
+ int size;
+ pc = get2byte(&data[cellOffset+2*i]);
+ assert( pc>0 && pc<usableSize-4 );
+ size = cellSizePtr(pPage, &data[pc]);
+ assert( pc+size<=usableSize );
+ for(j=pc; j<pc+size; j++){
+ assert( used[j]==0 );
+ used[j] = 1;
+ }
+ }
+ for(i=cellOffset+2*nCell; i<cellLimit; i++){
+ assert( used[i]==0 );
+ used[i] = 1;
+ }
+ nFree = 0;
+ for(i=0; i<usableSize; i++){
+ assert( used[i]<=1 );
+ if( used[i]==0 ) nFree++;
+ }
+ assert( nFree==data[hdr+7] );
+ sqliteFree(used);
+}
+#define pageIntegrity(X) _pageIntegrity(X)
+#else
+# define pageIntegrity(X)
+#endif
+
+/* A bunch of assert() statements to check the transaction state variables
+** of handle p (type Btree*) are internally consistent.
+*/
+#define btreeIntegrity(p) \
+ assert( p->inTrans!=TRANS_NONE || p->pBt->nTransaction<p->pBt->nRef ); \
+ assert( p->pBt->nTransaction<=p->pBt->nRef ); \
+ assert( p->pBt->inTransaction!=TRANS_NONE || p->pBt->nTransaction==0 ); \
+ assert( p->pBt->inTransaction>=p->inTrans );
+
+/*
+** Defragment the page given. All Cells are moved to the
+** end of the page and all free space is collected into one
+** big FreeBlk that occurs in between the header and cell
+** pointer array and the cell content area.
+*/
+static int defragmentPage(MemPage *pPage){
+ int i; /* Loop counter */
+ int pc; /* Address of the i-th cell */
+ int addr; /* Offset of first byte after cell pointer array */
+ int hdr; /* Offset to the page header */
+ int size; /* Size of a cell */
+ int usableSize; /* Number of usable bytes on a page */
+ int cellOffset; /* Offset to the cell pointer array */
+ int brk; /* Offset to the cell content area */
+ int nCell; /* Number of cells on the page */
+ unsigned char *data; /* The page data */
+ unsigned char *temp; /* Temp area for cell content */
+
+ assert( sqlite3pager_iswriteable(pPage->aData) );
+ assert( pPage->pBt!=0 );
+ assert( pPage->pBt->usableSize <= SQLITE_MAX_PAGE_SIZE );
+ assert( pPage->nOverflow==0 );
+ temp = sqliteMalloc( pPage->pBt->pageSize );
+ if( temp==0 ) return SQLITE_NOMEM;
+ data = pPage->aData;
+ hdr = pPage->hdrOffset;
+ cellOffset = pPage->cellOffset;
+ nCell = pPage->nCell;
+ assert( nCell==get2byte(&data[hdr+3]) );
+ usableSize = pPage->pBt->usableSize;
+ brk = get2byte(&data[hdr+5]);
+ memcpy(&temp[brk], &data[brk], usableSize - brk);
+ brk = usableSize;
+ for(i=0; i<nCell; i++){
+ u8 *pAddr; /* The i-th cell pointer */
+ pAddr = &data[cellOffset + i*2];
+ pc = get2byte(pAddr);
+ assert( pc<pPage->pBt->usableSize );
+ size = cellSizePtr(pPage, &temp[pc]);
+ brk -= size;
+ memcpy(&data[brk], &temp[pc], size);
+ put2byte(pAddr, brk);
+ }
+ assert( brk>=cellOffset+2*nCell );
+ put2byte(&data[hdr+5], brk);
+ data[hdr+1] = 0;
+ data[hdr+2] = 0;
+ data[hdr+7] = 0;
+ addr = cellOffset+2*nCell;
+ memset(&data[addr], 0, brk-addr);
+ sqliteFree(temp);
+ return SQLITE_OK;
+}
+
+/*
+** Allocate nByte bytes of space on a page.
+**
+** Return the index into pPage->aData[] of the first byte of
+** the new allocation. Or return 0 if there is not enough free
+** space on the page to satisfy the allocation request.
+**
+** If the page contains nByte bytes of free space but does not contain
+** nByte bytes of contiguous free space, then this routine automatically
+** calls defragmentPage() to consolidate all free space before
+** allocating the new chunk.
+*/
+static int allocateSpace(MemPage *pPage, int nByte){
+ int addr, pc, hdr;
+ int size;
+ int nFrag;
+ int top;
+ int nCell;
+ int cellOffset;
+ unsigned char *data;
+
+ data = pPage->aData;
+ assert( sqlite3pager_iswriteable(data) );
+ assert( pPage->pBt );
+ if( nByte<4 ) nByte = 4;
+ if( pPage->nFree<nByte || pPage->nOverflow>0 ) return 0;
+ pPage->nFree -= nByte;
+ hdr = pPage->hdrOffset;
+
+ nFrag = data[hdr+7];
+ if( nFrag<60 ){
+ /* Search the freelist looking for a slot big enough to satisfy the
+ ** space request. */
+ addr = hdr+1;
+ while( (pc = get2byte(&data[addr]))>0 ){
+ size = get2byte(&data[pc+2]);
+ if( size>=nByte ){
+ if( size<nByte+4 ){
+ memcpy(&data[addr], &data[pc], 2);
+ data[hdr+7] = nFrag + size - nByte;
+ return pc;
+ }else{
+ put2byte(&data[pc+2], size-nByte);
+ return pc + size - nByte;
+ }
+ }
+ addr = pc;
+ }
+ }
+
+ /* Allocate memory from the gap in between the cell pointer array
+ ** and the cell content area.
+ */
+ top = get2byte(&data[hdr+5]);
+ nCell = get2byte(&data[hdr+3]);
+ cellOffset = pPage->cellOffset;
+ if( nFrag>=60 || cellOffset + 2*nCell > top - nByte ){
+ if( defragmentPage(pPage) ) return 0;
+ top = get2byte(&data[hdr+5]);
+ }
+ top -= nByte;
+ assert( cellOffset + 2*nCell <= top );
+ put2byte(&data[hdr+5], top);
+ return top;
+}
+
+/*
+** Return a section of the pPage->aData to the freelist.
+** The first byte of the new free block is pPage->aData[start]
+** and the size of the block is "size" bytes.
+**
+** Most of the effort here is involved in coalescing adjacent
+** free blocks into a single big free block.
+*/
+static void freeSpace(MemPage *pPage, int start, int size){
+ int addr, pbegin, hdr;
+ unsigned char *data = pPage->aData;
+
+ assert( pPage->pBt!=0 );
+ assert( sqlite3pager_iswriteable(data) );
+ assert( start>=pPage->hdrOffset+6+(pPage->leaf?0:4) );
+ assert( (start + size)<=pPage->pBt->usableSize );
+ if( size<4 ) size = 4;
+
+#ifdef SQLITE_SECURE_DELETE
+ /* Overwrite deleted information with zeros when the SECURE_DELETE
+ ** option is enabled at compile-time */
+ memset(&data[start], 0, size);
+#endif
+
+ /* Add the space back into the linked list of freeblocks */
+ hdr = pPage->hdrOffset;
+ addr = hdr + 1;
+ while( (pbegin = get2byte(&data[addr]))<start && pbegin>0 ){
+ assert( pbegin<=pPage->pBt->usableSize-4 );
+ assert( pbegin>addr );
+ addr = pbegin;
+ }
+ assert( pbegin<=pPage->pBt->usableSize-4 );
+ assert( pbegin>addr || pbegin==0 );
+ put2byte(&data[addr], start);
+ put2byte(&data[start], pbegin);
+ put2byte(&data[start+2], size);
+ pPage->nFree += size;
+
+ /* Coalesce adjacent free blocks */
+ addr = pPage->hdrOffset + 1;
+ while( (pbegin = get2byte(&data[addr]))>0 ){
+ int pnext, psize;
+ assert( pbegin>addr );
+ assert( pbegin<=pPage->pBt->usableSize-4 );
+ pnext = get2byte(&data[pbegin]);
+ psize = get2byte(&data[pbegin+2]);
+ if( pbegin + psize + 3 >= pnext && pnext>0 ){
+ int frag = pnext - (pbegin+psize);
+ assert( frag<=data[pPage->hdrOffset+7] );
+ data[pPage->hdrOffset+7] -= frag;
+ put2byte(&data[pbegin], get2byte(&data[pnext]));
+ put2byte(&data[pbegin+2], pnext+get2byte(&data[pnext+2])-pbegin);
+ }else{
+ addr = pbegin;
+ }
+ }
+
+ /* If the cell content area begins with a freeblock, remove it. */
+ if( data[hdr+1]==data[hdr+5] && data[hdr+2]==data[hdr+6] ){
+ int top;
+ pbegin = get2byte(&data[hdr+1]);
+ memcpy(&data[hdr+1], &data[pbegin], 2);
+ top = get2byte(&data[hdr+5]);
+ put2byte(&data[hdr+5], top + get2byte(&data[pbegin+2]));
+ }
+}
+
+/*
+** Decode the flags byte (the first byte of the header) for a page
+** and initialize fields of the MemPage structure accordingly.
+*/
+static void decodeFlags(MemPage *pPage, int flagByte){
+ BtShared *pBt; /* A copy of pPage->pBt */
+
+ assert( pPage->hdrOffset==(pPage->pgno==1 ? 100 : 0) );
+ pPage->intKey = (flagByte & (PTF_INTKEY|PTF_LEAFDATA))!=0;
+ pPage->zeroData = (flagByte & PTF_ZERODATA)!=0;
+ pPage->leaf = (flagByte & PTF_LEAF)!=0;
+ pPage->childPtrSize = 4*(pPage->leaf==0);
+ pBt = pPage->pBt;
+ if( flagByte & PTF_LEAFDATA ){
+ pPage->leafData = 1;
+ pPage->maxLocal = pBt->maxLeaf;
+ pPage->minLocal = pBt->minLeaf;
+ }else{
+ pPage->leafData = 0;
+ pPage->maxLocal = pBt->maxLocal;
+ pPage->minLocal = pBt->minLocal;
+ }
+ pPage->hasData = !(pPage->zeroData || (!pPage->leaf && pPage->leafData));
+}
+
+/*
+** Initialize the auxiliary information for a disk block.
+**
+** The pParent parameter must be a pointer to the MemPage which
+** is the parent of the page being initialized. The root of a
+** BTree has no parent and so for that page, pParent==NULL.
+**
+** Return SQLITE_OK on success. If we see that the page does
+** not contain a well-formed database page, then return
+** SQLITE_CORRUPT. Note that a return of SQLITE_OK does not
+** guarantee that the page is well-formed. It only shows that
+** we failed to detect any corruption.
+*/
+static int initPage(
+ MemPage *pPage, /* The page to be initialized */
+ MemPage *pParent /* The parent. Might be NULL */
+){
+ int pc; /* Address of a freeblock within pPage->aData[] */
+ int hdr; /* Offset to beginning of page header */
+ u8 *data; /* Equal to pPage->aData */
+ BtShared *pBt; /* The main btree structure */
+ int usableSize; /* Amount of usable space on each page */
+ int cellOffset; /* Offset from start of page to first cell pointer */
+ int nFree; /* Number of unused bytes on the page */
+ int top; /* First byte of the cell content area */
+
+ pBt = pPage->pBt;
+ assert( pBt!=0 );
+ assert( pParent==0 || pParent->pBt==pBt );
+ assert( pPage->pgno==sqlite3pager_pagenumber(pPage->aData) );
+ assert( pPage->aData == &((unsigned char*)pPage)[-pBt->pageSize] );
+ if( pPage->pParent!=pParent && (pPage->pParent!=0 || pPage->isInit) ){
+ /* The parent page should never change unless the file is corrupt */
+ return SQLITE_CORRUPT_BKPT;
+ }
+ if( pPage->isInit ) return SQLITE_OK;
+ if( pPage->pParent==0 && pParent!=0 ){
+ pPage->pParent = pParent;
+ sqlite3pager_ref(pParent->aData);
+ }
+ hdr = pPage->hdrOffset;
+ data = pPage->aData;
+ decodeFlags(pPage, data[hdr]);
+ pPage->nOverflow = 0;
+ pPage->idxShift = 0;
+ usableSize = pBt->usableSize;
+ pPage->cellOffset = cellOffset = hdr + 12 - 4*pPage->leaf;
+ top = get2byte(&data[hdr+5]);
+ pPage->nCell = get2byte(&data[hdr+3]);
+ if( pPage->nCell>MX_CELL(pBt) ){
+ /* Too many cells for a single page. The page must be corrupt */
+ return SQLITE_CORRUPT_BKPT;
+ }
+ if( pPage->nCell==0 && pParent!=0 && pParent->pgno!=1 ){
+ /* All pages must have at least one cell, except for root pages */
+ return SQLITE_CORRUPT_BKPT;
+ }
+
+ /* Compute the total free space on the page */
+ pc = get2byte(&data[hdr+1]);
+ nFree = data[hdr+7] + top - (cellOffset + 2*pPage->nCell);
+ while( pc>0 ){
+ int next, size;
+ if( pc>usableSize-4 ){
+ /* Free block is off the page */
+ return SQLITE_CORRUPT_BKPT;
+ }
+ next = get2byte(&data[pc]);
+ size = get2byte(&data[pc+2]);
+ if( next>0 && next<=pc+size+3 ){
+ /* Free blocks must be in ascending order */
+ return SQLITE_CORRUPT_BKPT;
+ }
+ nFree += size;
+ pc = next;
+ }
+ pPage->nFree = nFree;
+ if( nFree>=usableSize ){
+ /* Free space cannot exceed total page size */
+ return SQLITE_CORRUPT_BKPT;
+ }
+
+ pPage->isInit = 1;
+ pageIntegrity(pPage);
+ return SQLITE_OK;
+}
+
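+/* Page-header layout as used by initPage() above and zeroPage() below
+** (offsets are relative to hdrOffset: 100 on page 1, 0 elsewhere):
+**
+**     hdr+0        flags byte (see decodeFlags())
+**     hdr+1..2     offset of the first freeblock, or 0 if none
+**     hdr+3..4     number of cells on the page
+**     hdr+5..6     offset of the first byte of the cell content area
+**     hdr+7        number of fragmented free bytes
+**     hdr+8..11    right-child page number (interior pages only)
+**
+** The cell pointer array begins immediately after the header, which is why
+** cellOffset is hdr+12 for interior pages and hdr+8 for leaves.
+*/
+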
+/*
+** Set up a raw page so that it looks like a database page holding
+** no entries.
+*/
+static void zeroPage(MemPage *pPage, int flags){
+ unsigned char *data = pPage->aData;
+ BtShared *pBt = pPage->pBt;
+ int hdr = pPage->hdrOffset;
+ int first;
+
+ assert( sqlite3pager_pagenumber(data)==pPage->pgno );
+ assert( &data[pBt->pageSize] == (unsigned char*)pPage );
+ assert( sqlite3pager_iswriteable(data) );
+ memset(&data[hdr], 0, pBt->usableSize - hdr);
+ data[hdr] = flags;
+ first = hdr + 8 + 4*((flags&PTF_LEAF)==0);
+ memset(&data[hdr+1], 0, 4);
+ data[hdr+7] = 0;
+ put2byte(&data[hdr+5], pBt->usableSize);
+ pPage->nFree = pBt->usableSize - first;
+ decodeFlags(pPage, flags);
+ pPage->hdrOffset = hdr;
+ pPage->cellOffset = first;
+ pPage->nOverflow = 0;
+ pPage->idxShift = 0;
+ pPage->nCell = 0;
+ pPage->isInit = 1;
+ pageIntegrity(pPage);
+}
+
+/*
+** Get a page from the pager. Initialize the MemPage.pBt and
+** MemPage.aData elements if needed.
+*/
+static int getPage(BtShared *pBt, Pgno pgno, MemPage **ppPage){
+ int rc;
+ unsigned char *aData;
+ MemPage *pPage;
+ rc = sqlite3pager_get(pBt->pPager, pgno, (void**)&aData);
+ if( rc ) return rc;
+ pPage = (MemPage*)&aData[pBt->pageSize];
+ pPage->aData = aData;
+ pPage->pBt = pBt;
+ pPage->pgno = pgno;
+ pPage->hdrOffset = pPage->pgno==1 ? 100 : 0;
+ *ppPage = pPage;
+ return SQLITE_OK;
+}
+
+/*
+** Get a page from the pager and initialize it. This routine
+** is just a convenience wrapper around separate calls to
+** getPage() and initPage().
+*/
+static int getAndInitPage(
+ BtShared *pBt, /* The database file */
+ Pgno pgno, /* Number of the page to get */
+ MemPage **ppPage, /* Write the page pointer here */
+ MemPage *pParent /* Parent of the page */
+){
+ int rc;
+ if( pgno==0 ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ rc = getPage(pBt, pgno, ppPage);
+ if( rc==SQLITE_OK && (*ppPage)->isInit==0 ){
+ rc = initPage(*ppPage, pParent);
+ }
+ return rc;
+}
+
+/*
+** Release a MemPage. This should be called once for each prior
+** call to getPage.
+*/
+static void releasePage(MemPage *pPage){
+ if( pPage ){
+ assert( pPage->aData );
+ assert( pPage->pBt );
+ assert( &pPage->aData[pPage->pBt->pageSize]==(unsigned char*)pPage );
+ sqlite3pager_unref(pPage->aData);
+ }
+}
+
+/*
+** This routine is called when the reference count for a page
+** reaches zero. We need to unref the pParent pointer when that
+** happens.
+*/
+static void pageDestructor(void *pData, int pageSize){
+ MemPage *pPage;
+ assert( (pageSize & 7)==0 );
+ pPage = (MemPage*)&((char*)pData)[pageSize];
+ if( pPage->pParent ){
+ MemPage *pParent = pPage->pParent;
+ pPage->pParent = 0;
+ releasePage(pParent);
+ }
+ pPage->isInit = 0;
+}
+
+/*
+** During a rollback, when the pager reloads information into the cache
+** so that the cache is restored to its original state at the start of
+** the transaction, for each page restored this routine is called.
+**
+** This routine needs to reset the extra data section at the end of the
+** page to agree with the restored data.
+*/
+static void pageReinit(void *pData, int pageSize){
+ MemPage *pPage;
+ assert( (pageSize & 7)==0 );
+ pPage = (MemPage*)&((char*)pData)[pageSize];
+ if( pPage->isInit ){
+ pPage->isInit = 0;
+ initPage(pPage, pPage->pParent);
+ }
+}
+
+/*
+** Open a database file.
+**
+** zFilename is the name of the database file. If zFilename is NULL
+** a new database with a random name is created. This randomly named
+** database file will be deleted when sqlite3BtreeClose() is called.
+*/
+int sqlite3BtreeOpen(
+ const char *zFilename, /* Name of the file containing the BTree database */
+ sqlite3 *pSqlite, /* Associated database handle */
+ Btree **ppBtree, /* Pointer to new Btree object written here */
+ int flags /* Options */
+){
+ BtShared *pBt; /* Shared part of btree structure */
+ Btree *p; /* Handle to return */
+ int rc;
+ int nReserve;
+ unsigned char zDbHeader[100];
+#if !defined(SQLITE_OMIT_SHARED_CACHE) && !defined(SQLITE_OMIT_DISKIO)
+ const ThreadData *pTsdro;
+#endif
+
+ /* Set the variable isMemdb to true for an in-memory database, or
+ ** false for a file-based database. This symbol is only required if
+ ** either the shared-data or the autovacuum feature is compiled
+ ** into the library.
+ */
+#if !defined(SQLITE_OMIT_SHARED_CACHE) || !defined(SQLITE_OMIT_AUTOVACUUM)
+ #ifdef SQLITE_OMIT_MEMORYDB
+ const int isMemdb = 0;
+ #else
+ const int isMemdb = zFilename && !strcmp(zFilename, ":memory:");
+ #endif
+#endif
+
+ p = sqliteMalloc(sizeof(Btree));
+ if( !p ){
+ return SQLITE_NOMEM;
+ }
+ p->inTrans = TRANS_NONE;
+ p->pSqlite = pSqlite;
+
+ /* Try to find an existing Btree structure opened on zFilename. */
+#if !defined(SQLITE_OMIT_SHARED_CACHE) && !defined(SQLITE_OMIT_DISKIO)
+ pTsdro = sqlite3ThreadDataReadOnly();
+ if( pTsdro->useSharedData && zFilename && !isMemdb ){
+ char *zFullPathname = sqlite3OsFullPathname(zFilename);
+ if( !zFullPathname ){
+ sqliteFree(p);
+ return SQLITE_NOMEM;
+ }
+ for(pBt=pTsdro->pBtree; pBt; pBt=pBt->pNext){
+ assert( pBt->nRef>0 );
+ if( 0==strcmp(zFullPathname, sqlite3pager_filename(pBt->pPager)) ){
+ p->pBt = pBt;
+ *ppBtree = p;
+ pBt->nRef++;
+ sqliteFree(zFullPathname);
+ return SQLITE_OK;
+ }
+ }
+ sqliteFree(zFullPathname);
+ }
+#endif
+
+ /*
+ ** The following asserts make sure that structures used by the btree are
+ ** the right size. This is to guard against size changes that result
+ ** when compiling on a different architecture.
+ */
+ assert( sizeof(i64)==8 || sizeof(i64)==4 );
+ assert( sizeof(u64)==8 || sizeof(u64)==4 );
+ assert( sizeof(u32)==4 );
+ assert( sizeof(u16)==2 );
+ assert( sizeof(Pgno)==4 );
+
+ pBt = sqliteMalloc( sizeof(*pBt) );
+ if( pBt==0 ){
+ *ppBtree = 0;
+ sqliteFree(p);
+ return SQLITE_NOMEM;
+ }
+ rc = sqlite3pager_open(&pBt->pPager, zFilename, EXTRA_SIZE, flags);
+ if( rc!=SQLITE_OK ){
+ if( pBt->pPager ) sqlite3pager_close(pBt->pPager);
+ sqliteFree(pBt);
+ sqliteFree(p);
+ *ppBtree = 0;
+ return rc;
+ }
+ p->pBt = pBt;
+
+ sqlite3pager_set_destructor(pBt->pPager, pageDestructor);
+ sqlite3pager_set_reiniter(pBt->pPager, pageReinit);
+ pBt->pCursor = 0;
+ pBt->pPage1 = 0;
+ pBt->readOnly = sqlite3pager_isreadonly(pBt->pPager);
+ sqlite3pager_read_fileheader(pBt->pPager, sizeof(zDbHeader), zDbHeader);
+ pBt->pageSize = get2byte(&zDbHeader[16]);
+ if( pBt->pageSize<512 || pBt->pageSize>SQLITE_MAX_PAGE_SIZE
+ || ((pBt->pageSize-1)&pBt->pageSize)!=0 ){
+ pBt->pageSize = SQLITE_DEFAULT_PAGE_SIZE;
+ pBt->maxEmbedFrac = 64; /* 25% */
+ pBt->minEmbedFrac = 32; /* 12.5% */
+ pBt->minLeafFrac = 32; /* 12.5% */
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ /* If the magic name ":memory:" will create an in-memory database, then
+ ** do not set the auto-vacuum flag, even if SQLITE_DEFAULT_AUTOVACUUM
+ ** is true. On the other hand, if SQLITE_OMIT_MEMORYDB has been defined,
+ ** then ":memory:" is just a regular file-name. Respect the auto-vacuum
+ ** default in this case.
+ */
+ if( zFilename && !isMemdb ){
+ pBt->autoVacuum = SQLITE_DEFAULT_AUTOVACUUM;
+ }
+#endif
+ nReserve = 0;
+ }else{
+ nReserve = zDbHeader[20];
+ pBt->maxEmbedFrac = zDbHeader[21];
+ pBt->minEmbedFrac = zDbHeader[22];
+ pBt->minLeafFrac = zDbHeader[23];
+ pBt->pageSizeFixed = 1;
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ pBt->autoVacuum = (get4byte(&zDbHeader[36 + 4*4])?1:0);
+#endif
+ }
+ pBt->usableSize = pBt->pageSize - nReserve;
+ assert( (pBt->pageSize & 7)==0 ); /* 8-byte alignment of pageSize */
+ sqlite3pager_set_pagesize(pBt->pPager, pBt->pageSize);
+
+#if !defined(SQLITE_OMIT_SHARED_CACHE) && !defined(SQLITE_OMIT_DISKIO)
+ /* Add the new btree to the linked list starting at ThreadData.pBtree.
+ ** There is no chance that a malloc() may fail inside of the
+ ** sqlite3ThreadData() call, as the ThreadData structure must have already
+ ** been allocated for pTsdro->useSharedData to be non-zero.
+ */
+ if( pTsdro->useSharedData && zFilename && !isMemdb ){
+ pBt->pNext = pTsdro->pBtree;
+ sqlite3ThreadData()->pBtree = pBt;
+ }
+#endif
+ pBt->nRef = 1;
+ *ppBtree = p;
+ return SQLITE_OK;
+}
+
+/*
+** Close an open database and invalidate all cursors.
+*/
+int sqlite3BtreeClose(Btree *p){
+ BtShared *pBt = p->pBt;
+ BtCursor *pCur;
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ ThreadData *pTsd;
+#endif
+
+ /* Close all cursors opened via this handle. */
+ pCur = pBt->pCursor;
+ while( pCur ){
+ BtCursor *pTmp = pCur;
+ pCur = pCur->pNext;
+ if( pTmp->pBtree==p ){
+ sqlite3BtreeCloseCursor(pTmp);
+ }
+ }
+
+ /* Rollback any active transaction and free the handle structure.
+ ** The call to sqlite3BtreeRollback() drops any table-locks held by
+ ** this handle.
+ */
+ sqlite3BtreeRollback(p);
+ sqliteFree(p);
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ /* If there are still other outstanding references to the shared-btree
+ ** structure, return now. The remainder of this procedure cleans
+ ** up the shared-btree.
+ */
+ assert( pBt->nRef>0 );
+ pBt->nRef--;
+ if( pBt->nRef ){
+ return SQLITE_OK;
+ }
+
+ /* Remove the shared-btree from the thread wide list. Call
+ ** ThreadDataReadOnly() and then cast away the const property of the
+ ** pointer to avoid allocating thread data if it is not really required.
+ */
+ pTsd = (ThreadData *)sqlite3ThreadDataReadOnly();
+ if( pTsd->pBtree==pBt ){
+ assert( pTsd==sqlite3ThreadData() );
+ pTsd->pBtree = pBt->pNext;
+ }else{
+ BtShared *pPrev;
+ for(pPrev=pTsd->pBtree; pPrev && pPrev->pNext!=pBt; pPrev=pPrev->pNext){}
+ if( pPrev ){
+ assert( pTsd==sqlite3ThreadData() );
+ pPrev->pNext = pBt->pNext;
+ }
+ }
+#endif
+
+ /* Close the pager and free the shared-btree structure */
+ assert( !pBt->pCursor );
+ sqlite3pager_close(pBt->pPager);
+ if( pBt->xFreeSchema && pBt->pSchema ){
+ pBt->xFreeSchema(pBt->pSchema);
+ }
+ sqliteFree(pBt->pSchema);
+ sqliteFree(pBt);
+ return SQLITE_OK;
+}
+
+/*
+** Change the busy handler callback function.
+*/
+int sqlite3BtreeSetBusyHandler(Btree *p, BusyHandler *pHandler){
+ BtShared *pBt = p->pBt;
+ pBt->pBusyHandler = pHandler;
+ sqlite3pager_set_busyhandler(pBt->pPager, pHandler);
+ return SQLITE_OK;
+}
+
+/*
+** Change the limit on the number of pages allowed in the cache.
+**
+** The maximum number of cache pages is set to the absolute
+** value of mxPage. If mxPage is negative, the pager will
+** operate asynchronously - it will not stop to do fsync()s
+** to ensure data is written to the disk surface before
+** continuing. Transactions still work if synchronous is off,
+** and the database cannot be corrupted if this program
+** crashes. But if the operating system crashes or there is
+** an abrupt power failure when synchronous is off, the database
+** could be left in an inconsistent and unrecoverable state.
+** Synchronous is on by default so database corruption is not
+** normally a worry.
+*/
+int sqlite3BtreeSetCacheSize(Btree *p, int mxPage){
+ BtShared *pBt = p->pBt;
+ sqlite3pager_set_cachesize(pBt->pPager, mxPage);
+ return SQLITE_OK;
+}
+
+/*
+** Change the way data is synced to disk in order to increase or decrease
+** how well the database resists damage due to OS crashes and power
+** failures. Level 1 is the same as asynchronous (no sync()s occur and
+** there is a high probability of damage). Level 2 is the default. There
+** is a very low but non-zero probability of damage. Level 3 reduces the
+** probability of damage to near zero but with a write performance reduction.
+*/
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+int sqlite3BtreeSetSafetyLevel(Btree *p, int level, int fullSync){
+ BtShared *pBt = p->pBt;
+ sqlite3pager_set_safety_level(pBt->pPager, level, fullSync);
+ return SQLITE_OK;
+}
+#endif
+
+/*
+** Return TRUE if the given btree is set to safety level 1. In other
+** words, return TRUE if no sync() occurs on the disk files.
+*/
+int sqlite3BtreeSyncDisabled(Btree *p){
+ BtShared *pBt = p->pBt;
+ assert( pBt && pBt->pPager );
+ return sqlite3pager_nosync(pBt->pPager);
+}
+
+#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) || !defined(SQLITE_OMIT_VACUUM)
+/*
+** Change the default page size and the number of reserved bytes per page.
+**
+** The page size must be a power of 2 between 512 and 65536. If the page
+** size supplied does not meet this constraint then the page size is not
+** changed.
+**
+** Page sizes are constrained to be a power of two so that the region
+** of the database file used for locking (beginning at PENDING_BYTE,
+** the first byte past the 1GB boundary, 0x40000000) needs to occur
+** at the beginning of a page.
+**
+** If parameter nReserve is less than zero, then the number of reserved
+** bytes per page is left unchanged.
+*/
+int sqlite3BtreeSetPageSize(Btree *p, int pageSize, int nReserve){
+ BtShared *pBt = p->pBt;
+ if( pBt->pageSizeFixed ){
+ return SQLITE_READONLY;
+ }
+ if( nReserve<0 ){
+ nReserve = pBt->pageSize - pBt->usableSize;
+ }
+ if( pageSize>=512 && pageSize<=SQLITE_MAX_PAGE_SIZE &&
+ ((pageSize-1)&pageSize)==0 ){
+ assert( (pageSize & 7)==0 );
+ assert( !pBt->pPage1 && !pBt->pCursor );
+ pBt->pageSize = sqlite3pager_set_pagesize(pBt->pPager, pageSize);
+ }
+ pBt->usableSize = pBt->pageSize - nReserve;
+ return SQLITE_OK;
+}
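+
+/* Usage sketch (hypothetical handle "p"): request a 4096-byte page with no
+** reserved space.  4096 passes the power-of-two test above because
+** (4096-1)&4096 == 0:
+**
+**     rc = sqlite3BtreeSetPageSize(p, 4096, 0);
+*/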
+
+/*
+** Return the currently defined page size
+*/
+int sqlite3BtreeGetPageSize(Btree *p){
+ return p->pBt->pageSize;
+}
+int sqlite3BtreeGetReserve(Btree *p){
+ return p->pBt->pageSize - p->pBt->usableSize;
+}
+#endif /* !defined(SQLITE_OMIT_PAGER_PRAGMAS) || !defined(SQLITE_OMIT_VACUUM) */
+
+/*
+** Change the 'auto-vacuum' property of the database. If the 'autoVacuum'
+** parameter is non-zero, then auto-vacuum mode is enabled. If zero, it
+** is disabled. The default value for the auto-vacuum property is
+** determined by the SQLITE_DEFAULT_AUTOVACUUM macro.
+*/
+int sqlite3BtreeSetAutoVacuum(Btree *p, int autoVacuum){
+ BtShared *pBt = p->pBt;
+#ifdef SQLITE_OMIT_AUTOVACUUM
+ return SQLITE_READONLY;
+#else
+ if( pBt->pageSizeFixed ){
+ return SQLITE_READONLY;
+ }
+ pBt->autoVacuum = (autoVacuum?1:0);
+ return SQLITE_OK;
+#endif
+}
+
+/*
+** Return the value of the 'auto-vacuum' property. If auto-vacuum is
+** enabled 1 is returned. Otherwise 0.
+*/
+int sqlite3BtreeGetAutoVacuum(Btree *p){
+#ifdef SQLITE_OMIT_AUTOVACUUM
+ return 0;
+#else
+ return p->pBt->autoVacuum;
+#endif
+}
+
+
+/*
+** Get a reference to pPage1 of the database file. This will
+** also acquire a readlock on that file.
+**
+** SQLITE_OK is returned on success. If the file is not a
+** well-formed database file, then SQLITE_CORRUPT is returned.
+** SQLITE_BUSY is returned if the database is locked. SQLITE_NOMEM
+** is returned if we run out of memory. SQLITE_PROTOCOL is returned
+** if there is a locking protocol violation.
+*/
+static int lockBtree(BtShared *pBt){
+ int rc, pageSize;
+ MemPage *pPage1;
+ if( pBt->pPage1 ) return SQLITE_OK;
+ rc = getPage(pBt, 1, &pPage1);
+ if( rc!=SQLITE_OK ) return rc;
+
+
+ /* Do some checking to help ensure the file we opened really is
+ ** a valid database file.
+ */
+ rc = SQLITE_NOTADB;
+ if( sqlite3pager_pagecount(pBt->pPager)>0 ){
+ u8 *page1 = pPage1->aData;
+ if( memcmp(page1, zMagicHeader, 16)!=0 ){
+ goto page1_init_failed;
+ }
+ if( page1[18]>1 || page1[19]>1 ){
+ goto page1_init_failed;
+ }
+ pageSize = get2byte(&page1[16]);
+ if( ((pageSize-1)&pageSize)!=0 ){
+ goto page1_init_failed;
+ }
+ assert( (pageSize & 7)==0 );
+ pBt->pageSize = pageSize;
+ pBt->usableSize = pageSize - page1[20];
+ if( pBt->usableSize<500 ){
+ goto page1_init_failed;
+ }
+ pBt->maxEmbedFrac = page1[21];
+ pBt->minEmbedFrac = page1[22];
+ pBt->minLeafFrac = page1[23];
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ pBt->autoVacuum = (get4byte(&page1[36 + 4*4])?1:0);
+#endif
+ }
+
+ /* maxLocal is the maximum amount of payload to store locally for
+ ** a cell. Make sure it is small enough so that at least minFanout
+** cells will fit on one page. We assume a 10-byte page header.
+ ** Besides the payload, the cell must store:
+ ** 2-byte pointer to the cell
+ ** 4-byte child pointer
+ ** 9-byte nKey value
+ ** 4-byte nData value
+ ** 4-byte overflow page pointer
+** So a cell consists of a 2-byte pointer, a header which is as much as
+ ** 17 bytes long, 0 to N bytes of payload, and an optional 4 byte overflow
+ ** page pointer.
+ */
+ pBt->maxLocal = (pBt->usableSize-12)*pBt->maxEmbedFrac/255 - 23;
+ pBt->minLocal = (pBt->usableSize-12)*pBt->minEmbedFrac/255 - 23;
+ pBt->maxLeaf = pBt->usableSize - 35;
+ pBt->minLeaf = (pBt->usableSize-12)*pBt->minLeafFrac/255 - 23;
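+ /* Worked example, assuming SQLITE_DEFAULT_PAGE_SIZE is 1024 with no
+ ** reserved bytes and the fallback fractions set in sqlite3BtreeOpen()
+ ** (maxEmbedFrac==64, minEmbedFrac==32, minLeafFrac==32):
+ **   maxLocal = (1024-12)*64/255 - 23 = 230
+ **   minLocal = (1024-12)*32/255 - 23 = 103
+ **   maxLeaf  = 1024 - 35 = 989
+ **   minLeaf  = (1024-12)*32/255 - 23 = 103
+ */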
+ if( pBt->minLocal>pBt->maxLocal || pBt->maxLocal<0 ){
+ goto page1_init_failed;
+ }
+ assert( pBt->maxLeaf + 23 <= MX_CELL_SIZE(pBt) );
+ pBt->pPage1 = pPage1;
+ return SQLITE_OK;
+
+page1_init_failed:
+ releasePage(pPage1);
+ pBt->pPage1 = 0;
+ return rc;
+}
+
+/*
+** This routine works like lockBtree() except that it also invokes the
+** busy callback if there is lock contention.
+*/
+static int lockBtreeWithRetry(Btree *pRef){
+ int rc = SQLITE_OK;
+ if( pRef->inTrans==TRANS_NONE ){
+ u8 inTransaction = pRef->pBt->inTransaction;
+ btreeIntegrity(pRef);
+ rc = sqlite3BtreeBeginTrans(pRef, 0);
+ pRef->pBt->inTransaction = inTransaction;
+ pRef->inTrans = TRANS_NONE;
+ if( rc==SQLITE_OK ){
+ pRef->pBt->nTransaction--;
+ }
+ btreeIntegrity(pRef);
+ }
+ return rc;
+}
+
+
+/*
+** If there are no outstanding cursors and we are not in the middle
+** of a transaction but there is a read lock on the database, then
+** this routine unrefs the first page of the database file which
+** has the effect of releasing the read lock.
+**
+** If there are any outstanding cursors, this routine is a no-op.
+**
+** If there is a transaction in progress, this routine is a no-op.
+*/
+static void unlockBtreeIfUnused(BtShared *pBt){
+ if( pBt->inTransaction==TRANS_NONE && pBt->pCursor==0 && pBt->pPage1!=0 ){
+ if( pBt->pPage1->aData==0 ){
+ MemPage *pPage = pBt->pPage1;
+ pPage->aData = &((u8*)pPage)[-pBt->pageSize];
+ pPage->pBt = pBt;
+ pPage->pgno = 1;
+ }
+ releasePage(pBt->pPage1);
+ pBt->pPage1 = 0;
+ pBt->inStmt = 0;
+ }
+}
+
+/*
+** Create a new database by initializing the first page of the
+** file.
+*/
+static int newDatabase(BtShared *pBt){
+ MemPage *pP1;
+ unsigned char *data;
+ int rc;
+ if( sqlite3pager_pagecount(pBt->pPager)>0 ) return SQLITE_OK;
+ pP1 = pBt->pPage1;
+ assert( pP1!=0 );
+ data = pP1->aData;
+ rc = sqlite3pager_write(data);
+ if( rc ) return rc;
+ memcpy(data, zMagicHeader, sizeof(zMagicHeader));
+ assert( sizeof(zMagicHeader)==16 );
+ put2byte(&data[16], pBt->pageSize);
+ data[18] = 1;
+ data[19] = 1;
+ data[20] = pBt->pageSize - pBt->usableSize;
+ data[21] = pBt->maxEmbedFrac;
+ data[22] = pBt->minEmbedFrac;
+ data[23] = pBt->minLeafFrac;
+ memset(&data[24], 0, 100-24);
+ zeroPage(pP1, PTF_INTKEY|PTF_LEAF|PTF_LEAFDATA );
+ pBt->pageSizeFixed = 1;
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ put4byte(&data[36 + 4*4], 1);
+ }
+#endif
+ return SQLITE_OK;
+}
+
+/*
+** Attempt to start a new transaction. A write-transaction
+** is started if the second argument is nonzero, otherwise a read-
+** transaction. If the second argument is 2 or more, an exclusive
+** transaction is started, meaning that no other process is allowed
+** to access the database. A preexisting transaction may not be
+** upgraded to exclusive by calling this routine a second time - the
+** exclusivity flag only works for a new transaction.
+**
+** A write-transaction must be started before attempting any
+** changes to the database. None of the following routines
+** will work unless a transaction is started first:
+**
+** sqlite3BtreeCreateTable()
+** sqlite3BtreeCreateIndex()
+** sqlite3BtreeClearTable()
+** sqlite3BtreeDropTable()
+** sqlite3BtreeInsert()
+** sqlite3BtreeDelete()
+** sqlite3BtreeUpdateMeta()
+**
+** If an initial attempt to acquire the lock fails because of lock contention
+** and the database was previously unlocked, then invoke the busy handler
+** if there is one. But if there was previously a read-lock, do not
+** invoke the busy handler - just return SQLITE_BUSY. SQLITE_BUSY is
+** returned when there is already a read-lock in order to avoid a deadlock.
+**
+** Suppose there are two processes A and B. A has a read lock and B has
+** a reserved lock. B tries to promote to exclusive but is blocked because
+** of A's read lock. A tries to promote to reserved but is blocked by B.
+** One or the other of the two processes must give way or there can be
+** no progress. By returning SQLITE_BUSY and not invoking the busy callback
+** when A already has a read lock, we encourage A to give up and let B
+** proceed.
+*/
+int sqlite3BtreeBeginTrans(Btree *p, int wrflag){
+ BtShared *pBt = p->pBt;
+ int rc = SQLITE_OK;
+
+ btreeIntegrity(p);
+
+ /* If the btree is already in a write-transaction, or it
+ ** is already in a read-transaction and a read-transaction
+ ** is requested, this is a no-op.
+ */
+ if( p->inTrans==TRANS_WRITE || (p->inTrans==TRANS_READ && !wrflag) ){
+ return SQLITE_OK;
+ }
+
+ /* Write transactions are not possible on a read-only database */
+ if( pBt->readOnly && wrflag ){
+ return SQLITE_READONLY;
+ }
+
+ /* If another database handle has already opened a write transaction
+ ** on this shared-btree structure and a second write transaction is
+ ** requested, return SQLITE_BUSY.
+ */
+ if( pBt->inTransaction==TRANS_WRITE && wrflag ){
+ return SQLITE_BUSY;
+ }
+
+ do {
+ if( pBt->pPage1==0 ){
+ rc = lockBtree(pBt);
+ }
+
+ if( rc==SQLITE_OK && wrflag ){
+ rc = sqlite3pager_begin(pBt->pPage1->aData, wrflag>1);
+ if( rc==SQLITE_OK ){
+ rc = newDatabase(pBt);
+ }
+ }
+
+ if( rc==SQLITE_OK ){
+ if( wrflag ) pBt->inStmt = 0;
+ }else{
+ unlockBtreeIfUnused(pBt);
+ }
+ }while( rc==SQLITE_BUSY && pBt->inTransaction==TRANS_NONE &&
+ sqlite3InvokeBusyHandler(pBt->pBusyHandler) );
+
+ if( rc==SQLITE_OK ){
+ if( p->inTrans==TRANS_NONE ){
+ pBt->nTransaction++;
+ }
+ p->inTrans = (wrflag?TRANS_WRITE:TRANS_READ);
+ if( p->inTrans>pBt->inTransaction ){
+ pBt->inTransaction = p->inTrans;
+ }
+ }
+
+ btreeIntegrity(p);
+ return rc;
+}
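+
+/* Usage sketch (hypothetical handle "p"): the second argument selects the
+** transaction type as described above.
+**
+**     rc = sqlite3BtreeBeginTrans(p, 0);    read transaction
+**     rc = sqlite3BtreeBeginTrans(p, 1);    write transaction
+**     rc = sqlite3BtreeBeginTrans(p, 2);    exclusive write transaction
+*/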
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+
+/*
+** Set the pointer-map entries for all children of page pPage. Also, if
+** pPage contains cells that point to overflow pages, set the pointer
+** map entries for the overflow pages as well.
+*/
+static int setChildPtrmaps(MemPage *pPage){
+ int i; /* Counter variable */
+ int nCell; /* Number of cells in page pPage */
+ int rc = SQLITE_OK; /* Return code */
+ BtShared *pBt = pPage->pBt;
+ int isInitOrig = pPage->isInit;
+ Pgno pgno = pPage->pgno;
+
+ initPage(pPage, 0);
+ nCell = pPage->nCell;
+
+ for(i=0; i<nCell; i++){
+ u8 *pCell = findCell(pPage, i);
+
+ rc = ptrmapPutOvflPtr(pPage, pCell);
+ if( rc!=SQLITE_OK ){
+ goto set_child_ptrmaps_out;
+ }
+
+ if( !pPage->leaf ){
+ Pgno childPgno = get4byte(pCell);
+ rc = ptrmapPut(pBt, childPgno, PTRMAP_BTREE, pgno);
+ if( rc!=SQLITE_OK ) goto set_child_ptrmaps_out;
+ }
+ }
+
+ if( !pPage->leaf ){
+ Pgno childPgno = get4byte(&pPage->aData[pPage->hdrOffset+8]);
+ rc = ptrmapPut(pBt, childPgno, PTRMAP_BTREE, pgno);
+ }
+
+set_child_ptrmaps_out:
+ pPage->isInit = isInitOrig;
+ return rc;
+}
+
+/*
+** Somewhere on pPage, which is guaranteed to be a btree page, not an overflow
+** page, is a pointer to page iFrom. Modify this pointer so that it points to
+** iTo. Parameter eType describes the type of pointer to be modified, as
+** follows:
+**
+** PTRMAP_BTREE: pPage is a btree-page. The pointer points at a child
+** page of pPage.
+**
+** PTRMAP_OVERFLOW1: pPage is a btree-page. The pointer points at an overflow
+** page pointed to by one of the cells on pPage.
+**
+** PTRMAP_OVERFLOW2: pPage is an overflow-page. The pointer points at the next
+** overflow page in the list.
+*/
+static int modifyPagePointer(MemPage *pPage, Pgno iFrom, Pgno iTo, u8 eType){
+ if( eType==PTRMAP_OVERFLOW2 ){
+ /* The pointer is always the first 4 bytes of the page in this case. */
+ if( get4byte(pPage->aData)!=iFrom ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ put4byte(pPage->aData, iTo);
+ }else{
+ int isInitOrig = pPage->isInit;
+ int i;
+ int nCell;
+
+ initPage(pPage, 0);
+ nCell = pPage->nCell;
+
+ for(i=0; i<nCell; i++){
+ u8 *pCell = findCell(pPage, i);
+ if( eType==PTRMAP_OVERFLOW1 ){
+ CellInfo info;
+ parseCellPtr(pPage, pCell, &info);
+ if( info.iOverflow ){
+ if( iFrom==get4byte(&pCell[info.iOverflow]) ){
+ put4byte(&pCell[info.iOverflow], iTo);
+ break;
+ }
+ }
+ }else{
+ if( get4byte(pCell)==iFrom ){
+ put4byte(pCell, iTo);
+ break;
+ }
+ }
+ }
+
+ if( i==nCell ){
+ if( eType!=PTRMAP_BTREE ||
+ get4byte(&pPage->aData[pPage->hdrOffset+8])!=iFrom ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ put4byte(&pPage->aData[pPage->hdrOffset+8], iTo);
+ }
+
+ pPage->isInit = isInitOrig;
+ }
+ return SQLITE_OK;
+}
+
+
+/*
+** Move the open database page pDbPage to location iFreePage in the
+** database. The pDbPage reference remains valid.
+*/
+static int relocatePage(
+ BtShared *pBt, /* Btree */
+ MemPage *pDbPage, /* Open page to move */
+ u8 eType, /* Pointer map 'type' entry for pDbPage */
+ Pgno iPtrPage, /* Pointer map 'page-no' entry for pDbPage */
+ Pgno iFreePage /* The location to move pDbPage to */
+){
+ MemPage *pPtrPage; /* The page that contains a pointer to pDbPage */
+ Pgno iDbPage = pDbPage->pgno;
+ Pager *pPager = pBt->pPager;
+ int rc;
+
+ assert( eType==PTRMAP_OVERFLOW2 || eType==PTRMAP_OVERFLOW1 ||
+ eType==PTRMAP_BTREE || eType==PTRMAP_ROOTPAGE );
+
+ /* Move page iDbPage from its current location to page number iFreePage */
+ TRACE(("AUTOVACUUM: Moving %d to free page %d (ptr page %d type %d)\n",
+ iDbPage, iFreePage, iPtrPage, eType));
+ rc = sqlite3pager_movepage(pPager, pDbPage->aData, iFreePage);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ pDbPage->pgno = iFreePage;
+
+ /* If pDbPage was a btree-page, then it may have child pages and/or cells
+ ** that point to overflow pages. The pointer map entries for all these
+ ** pages need to be changed.
+ **
+ ** If pDbPage is an overflow page, then the first 4 bytes may store a
+ ** pointer to a subsequent overflow page. If this is the case, then
+ ** the pointer map needs to be updated for the subsequent overflow page.
+ */
+ if( eType==PTRMAP_BTREE || eType==PTRMAP_ROOTPAGE ){
+ rc = setChildPtrmaps(pDbPage);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ }else{
+ Pgno nextOvfl = get4byte(pDbPage->aData);
+ if( nextOvfl!=0 ){
+ rc = ptrmapPut(pBt, nextOvfl, PTRMAP_OVERFLOW2, iFreePage);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ }
+ }
+
+ /* Fix the database pointer on page iPtrPage that pointed at iDbPage so
+ ** that it points at iFreePage. Also fix the pointer map entry for
+ ** iPtrPage.
+ */
+ if( eType!=PTRMAP_ROOTPAGE ){
+ rc = getPage(pBt, iPtrPage, &pPtrPage);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ rc = sqlite3pager_write(pPtrPage->aData);
+ if( rc!=SQLITE_OK ){
+ releasePage(pPtrPage);
+ return rc;
+ }
+ rc = modifyPagePointer(pPtrPage, iDbPage, iFreePage, eType);
+ releasePage(pPtrPage);
+ if( rc==SQLITE_OK ){
+ rc = ptrmapPut(pBt, iFreePage, eType, iPtrPage);
+ }
+ }
+ return rc;
+}
+
+/* Forward declaration required by autoVacuumCommit(). */
+static int allocatePage(BtShared *, MemPage **, Pgno *, Pgno, u8);
+
+/*
+** This routine is called prior to sqlite3pager_commit when a transaction
+** is committed for an auto-vacuum database.
+*/
+static int autoVacuumCommit(BtShared *pBt, Pgno *nTrunc){
+ Pager *pPager = pBt->pPager;
+ Pgno nFreeList; /* Number of pages remaining on the free-list. */
+ int nPtrMap; /* Number of pointer-map pages deallocated */
+ Pgno origSize; /* Pages in the database file */
+ Pgno finSize; /* Pages in the database file after truncation */
+ int rc; /* Return code */
+ u8 eType;
+ int pgsz = pBt->pageSize; /* Page size for this database */
+ Pgno iDbPage; /* The database page to move */
+ MemPage *pDbMemPage = 0; /* "" */
+ Pgno iPtrPage; /* The page that contains a pointer to iDbPage */
+ Pgno iFreePage; /* The free-list page to move iDbPage to */
+ MemPage *pFreeMemPage = 0; /* "" */
+
+#ifndef NDEBUG
+ int nRef = sqlite3pager_refcount(pPager);
+#endif
+
+ assert( pBt->autoVacuum );
+ if( PTRMAP_ISPAGE(pBt, sqlite3pager_pagecount(pPager)) ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+
+ /* Figure out how many free-pages are in the database. If there are no
+ ** free pages, then auto-vacuum is a no-op.
+ */
+ nFreeList = get4byte(&pBt->pPage1->aData[36]);
+ if( nFreeList==0 ){
+ *nTrunc = 0;
+ return SQLITE_OK;
+ }
+
+ /* This block figures out how many pages there are in the database
+ ** now (variable origSize), and how many there will be after the
+ ** truncation (variable finSize).
+ **
+ ** The final size is the original size, less the number of free pages
+ ** in the database, less any pointer-map pages that will no longer
+ ** be required, less 1 if the pending-byte page was part of the database
+ ** but is not after the truncation.
+ **/
+ origSize = sqlite3pager_pagecount(pPager);
+ if( origSize==PENDING_BYTE_PAGE(pBt) ){
+ origSize--;
+ }
+ nPtrMap = (nFreeList-origSize+PTRMAP_PAGENO(pBt, origSize)+pgsz/5)/(pgsz/5);
+ finSize = origSize - nFreeList - nPtrMap;
+ if( origSize>PENDING_BYTE_PAGE(pBt) && finSize<=PENDING_BYTE_PAGE(pBt) ){
+ finSize--;
+ }
+ while( PTRMAP_ISPAGE(pBt, finSize) || finSize==PENDING_BYTE_PAGE(pBt) ){
+ finSize--;
+ }
+ TRACE(("AUTOVACUUM: Begin (db size %d->%d)\n", origSize, finSize));
+
+ /* Variable 'finSize' will be the size of the file in pages after
+ ** the auto-vacuum has completed (the current file size minus the number
+ ** of pages on the free list). Loop through the pages that lie beyond
+ ** this mark, and if they are not already on the free list, move them
+ ** to a free page earlier in the file (somewhere before finSize).
+ */
+ for( iDbPage=finSize+1; iDbPage<=origSize; iDbPage++ ){
+ /* If iDbPage is a pointer map page, or the pending-byte page, skip it. */
+ if( PTRMAP_ISPAGE(pBt, iDbPage) || iDbPage==PENDING_BYTE_PAGE(pBt) ){
+ continue;
+ }
+
+ rc = ptrmapGet(pBt, iDbPage, &eType, &iPtrPage);
+ if( rc!=SQLITE_OK ) goto autovacuum_out;
+ if( eType==PTRMAP_ROOTPAGE ){
+ rc = SQLITE_CORRUPT_BKPT;
+ goto autovacuum_out;
+ }
+
+ /* If iDbPage is free, do not swap it. */
+ if( eType==PTRMAP_FREEPAGE ){
+ continue;
+ }
+ rc = getPage(pBt, iDbPage, &pDbMemPage);
+ if( rc!=SQLITE_OK ) goto autovacuum_out;
+
+ /* Find the next page in the free-list that is not already at the end
+ ** of the file. A page can be pulled off the free list using the
+ ** allocatePage() routine.
+ */
+ do{
+ if( pFreeMemPage ){
+ releasePage(pFreeMemPage);
+ pFreeMemPage = 0;
+ }
+ rc = allocatePage(pBt, &pFreeMemPage, &iFreePage, 0, 0);
+ if( rc!=SQLITE_OK ){
+ releasePage(pDbMemPage);
+ goto autovacuum_out;
+ }
+ assert( iFreePage<=origSize );
+ }while( iFreePage>finSize );
+ releasePage(pFreeMemPage);
+ pFreeMemPage = 0;
+
+ /* Relocate the page into the body of the file. Note that although the
+ ** page has moved within the database file, the pDbMemPage pointer
+ ** remains valid. This means that this function can run without
+ ** invalidating cursors open on the btree. This is important in
+ ** shared-cache mode.
+ */
+ rc = relocatePage(pBt, pDbMemPage, eType, iPtrPage, iFreePage);
+ releasePage(pDbMemPage);
+ if( rc!=SQLITE_OK ) goto autovacuum_out;
+ }
+
+ /* The entire free-list has been swapped to the end of the file. So
+ ** truncate the database file to finSize pages and consider the
+ ** free-list empty.
+ */
+ rc = sqlite3pager_write(pBt->pPage1->aData);
+ if( rc!=SQLITE_OK ) goto autovacuum_out;
+ put4byte(&pBt->pPage1->aData[32], 0);
+ put4byte(&pBt->pPage1->aData[36], 0);
+ *nTrunc = finSize;
+ assert( finSize!=PENDING_BYTE_PAGE(pBt) );
+
+autovacuum_out:
+ assert( nRef==sqlite3pager_refcount(pPager) );
+ if( rc!=SQLITE_OK ){
+ sqlite3pager_rollback(pPager);
+ }
+ return rc;
+}
+#endif
+
+/*
+** Commit the transaction currently in progress.
+**
+** This will release the write lock on the database file. If there
+** are no active cursors, it also releases the read lock.
+*/
+int sqlite3BtreeCommit(Btree *p){
+ BtShared *pBt = p->pBt;
+
+ btreeIntegrity(p);
+
+ /* If the handle has a write-transaction open, commit the shared-btree's
+ ** transaction and set the shared state to TRANS_READ.
+ */
+ if( p->inTrans==TRANS_WRITE ){
+ int rc;
+ assert( pBt->inTransaction==TRANS_WRITE );
+ assert( pBt->nTransaction>0 );
+ rc = sqlite3pager_commit(pBt->pPager);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ pBt->inTransaction = TRANS_READ;
+ pBt->inStmt = 0;
+ }
+ unlockAllTables(p);
+
+ /* If the handle has any kind of transaction open, decrement the transaction
+ ** count of the shared btree. If the transaction count reaches 0, set
+ ** the shared state to TRANS_NONE. The unlockBtreeIfUnused() call below
+ ** will unlock the pager.
+ */
+ if( p->inTrans!=TRANS_NONE ){
+ pBt->nTransaction--;
+ if( 0==pBt->nTransaction ){
+ pBt->inTransaction = TRANS_NONE;
+ }
+ }
+
+ /* Set the handle's current transaction state to TRANS_NONE and unlock
+ ** the pager if this call closed the only read or write transaction.
+ */
+ p->inTrans = TRANS_NONE;
+ unlockBtreeIfUnused(pBt);
+
+ btreeIntegrity(p);
+ return SQLITE_OK;
+}
+
+#ifndef NDEBUG
+/*
+** Return the number of write-cursors open on this handle. This is for use
+** in assert() expressions, so it is only compiled if NDEBUG is not
+** defined.
+*/
+static int countWriteCursors(BtShared *pBt){
+ BtCursor *pCur;
+ int r = 0;
+ for(pCur=pBt->pCursor; pCur; pCur=pCur->pNext){
+ if( pCur->wrFlag ) r++;
+ }
+ return r;
+}
+#endif
+
+#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG)
+/*
+** Print debugging information about all cursors to standard output.
+*/
+void sqlite3BtreeCursorList(Btree *p){
+ BtCursor *pCur;
+ BtShared *pBt = p->pBt;
+ for(pCur=pBt->pCursor; pCur; pCur=pCur->pNext){
+ MemPage *pPage = pCur->pPage;
+ char *zMode = pCur->wrFlag ? "rw" : "ro";
+ sqlite3DebugPrintf("CURSOR %p rooted at %4d(%s) currently at %d.%d%s\n",
+ pCur, pCur->pgnoRoot, zMode,
+ pPage ? pPage->pgno : 0, pCur->idx,
+ (pCur->eState==CURSOR_VALID) ? "" : " eof"
+ );
+ }
+}
+#endif
+
+/*
+** Rollback the transaction in progress. All cursors will be
+** invalidated by this operation. Any attempt to use a cursor
+** that was open at the beginning of this operation will result
+** in an error.
+**
+** This will release the write lock on the database file. If there
+** are no active cursors, it also releases the read lock.
+*/
+int sqlite3BtreeRollback(Btree *p){
+ int rc;
+ BtShared *pBt = p->pBt;
+ MemPage *pPage1;
+
+ rc = saveAllCursors(pBt, 0, 0);
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ if( rc!=SQLITE_OK ){
+ /* This is a horrible situation. An IO or malloc() error occurred whilst
+ ** trying to save cursor positions. If this is an automatic rollback (as
+ ** the result of a constraint, malloc() failure or IO error) then
+ ** the cache may be internally inconsistent (not contain valid trees) so
+ ** we cannot simply return the error to the caller. Instead, abort
+ ** all queries that may be using any of the cursors that failed to save.
+ */
+ while( pBt->pCursor ){
+ sqlite3 *db = pBt->pCursor->pBtree->pSqlite;
+ if( db ){
+ sqlite3AbortOtherActiveVdbes(db, 0);
+ }
+ }
+ }
+#endif
+ btreeIntegrity(p);
+ unlockAllTables(p);
+
+ if( p->inTrans==TRANS_WRITE ){
+ int rc2;
+
+ assert( TRANS_WRITE==pBt->inTransaction );
+ rc2 = sqlite3pager_rollback(pBt->pPager);
+ if( rc2!=SQLITE_OK ){
+ rc = rc2;
+ }
+
+ /* The rollback may have destroyed the pPage1->aData value. So
+ ** call getPage() on page 1 again to make sure pPage1->aData is
+ ** set correctly. */
+ if( getPage(pBt, 1, &pPage1)==SQLITE_OK ){
+ releasePage(pPage1);
+ }
+ assert( countWriteCursors(pBt)==0 );
+ pBt->inTransaction = TRANS_READ;
+ }
+
+ if( p->inTrans!=TRANS_NONE ){
+ assert( pBt->nTransaction>0 );
+ pBt->nTransaction--;
+ if( 0==pBt->nTransaction ){
+ pBt->inTransaction = TRANS_NONE;
+ }
+ }
+
+ p->inTrans = TRANS_NONE;
+ pBt->inStmt = 0;
+ unlockBtreeIfUnused(pBt);
+
+ btreeIntegrity(p);
+ return rc;
+}
+
+/*
+** Start a statement subtransaction. The subtransaction can be rolled
+** back independently of the main transaction.
+** You must start a transaction before starting a subtransaction.
+** The subtransaction is ended automatically if the main transaction
+** commits or rolls back.
+**
+** Only one subtransaction may be active at a time. It is an error to try
+** to start a new subtransaction if another subtransaction is already active.
+**
+** Statement subtransactions are used around individual SQL statements
+** that are contained within a BEGIN...COMMIT block. If a constraint
+** error occurs within the statement, the effect of that one statement
+** can be rolled back without having to rollback the entire transaction.
+*/
+int sqlite3BtreeBeginStmt(Btree *p){
+ int rc;
+ BtShared *pBt = p->pBt;
+ if( (p->inTrans!=TRANS_WRITE) || pBt->inStmt ){
+ return pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR;
+ }
+ assert( pBt->inTransaction==TRANS_WRITE );
+ rc = pBt->readOnly ? SQLITE_OK : sqlite3pager_stmt_begin(pBt->pPager);
+ pBt->inStmt = 1;
+ return rc;
+}
+
+
+/*
+** Commit the statement subtransaction currently in progress. If no
+** subtransaction is active, this is a no-op.
+*/
+int sqlite3BtreeCommitStmt(Btree *p){
+ int rc;
+ BtShared *pBt = p->pBt;
+ if( pBt->inStmt && !pBt->readOnly ){
+ rc = sqlite3pager_stmt_commit(pBt->pPager);
+ }else{
+ rc = SQLITE_OK;
+ }
+ pBt->inStmt = 0;
+ return rc;
+}
+
+/*
+** Rollback the active statement subtransaction. If no subtransaction
+** is active this routine is a no-op.
+**
+** All cursors will be invalidated by this operation. Any attempt
+** to use a cursor that was open at the beginning of this operation
+** will result in an error.
+*/
+int sqlite3BtreeRollbackStmt(Btree *p){
+ int rc = SQLITE_OK;
+ BtShared *pBt = p->pBt;
+ sqlite3MallocDisallow();
+ if( pBt->inStmt && !pBt->readOnly ){
+ rc = sqlite3pager_stmt_rollback(pBt->pPager);
+ assert( countWriteCursors(pBt)==0 );
+ pBt->inStmt = 0;
+ }
+ sqlite3MallocAllow();
+ return rc;
+}
+
+/*
+** Default key comparison function to be used if no comparison function
+** is specified on the sqlite3BtreeCursor() call.
+*/
+static int dfltCompare(
+ void *NotUsed, /* User data is not used */
+ int n1, const void *p1, /* First key to compare */
+ int n2, const void *p2 /* Second key to compare */
+){
+ int c;
+ c = memcmp(p1, p2, n1<n2 ? n1 : n2);
+ if( c==0 ){
+ c = n1 - n2;
+ }
+ return c;
+}
+
+/*
+** Create a new cursor for the BTree whose root is on the page
+** iTable. The act of acquiring a cursor gets a read lock on
+** the database file.
+**
+** If wrFlag==0, then the cursor can only be used for reading.
+** If wrFlag==1, then the cursor can be used for reading or for
+** writing if other conditions for writing are also met. These
+** are the conditions that must be met in order for writing to
+** be allowed:
+**
+** 1: The cursor must have been opened with wrFlag==1
+**
+** 2: No other cursors may be open with wrFlag==0 on the same table
+**
+** 3: The database must be writable (not on read-only media)
+**
+** 4: There must be an active transaction.
+**
+** Condition 2 warrants further discussion. If any cursor is opened
+** on a table with wrFlag==0, that prevents all other cursors from
+** writing to that table. This is a kind of "read-lock". When a cursor
+** is opened with wrFlag==0 it is guaranteed that the table will not
+** change as long as the cursor is open. This allows the cursor to
+** do a sequential scan of the table without having to worry about
+** entries being inserted or deleted during the scan. Cursors should
+** be opened with wrFlag==0 only if this read-lock property is needed.
+** That is to say, cursors should be opened with wrFlag==0 only if they
+** intend to use the sqlite3BtreeNext() system call. All other cursors
+** should be opened with wrFlag==1 even if they never really intend
+** to write.
+**
+** No checking is done to make sure that page iTable really is the
+** root page of a b-tree. If it is not, then the cursor acquired
+** will not work correctly.
+**
+** The comparison function must be logically the same for every cursor
+** on a particular table. Changing the comparison function will result
+** in incorrect operations. If the comparison function is NULL, a
+** default comparison function is used. The comparison function is
+** always ignored for INTKEY tables.
+*/
+int sqlite3BtreeCursor(
+ Btree *p, /* The btree */
+ int iTable, /* Root page of table to open */
+ int wrFlag, /* 1 to write. 0 read-only */
+ int (*xCmp)(void*,int,const void*,int,const void*), /* Key Comparison func */
+ void *pArg, /* First arg to xCompare() */
+ BtCursor **ppCur /* Write new cursor here */
+){
+ int rc;
+ BtCursor *pCur;
+ BtShared *pBt = p->pBt;
+
+ *ppCur = 0;
+ if( wrFlag ){
+ if( pBt->readOnly ){
+ return SQLITE_READONLY;
+ }
+ if( checkReadLocks(p, iTable, 0) ){
+ return SQLITE_LOCKED;
+ }
+ }
+
+ if( pBt->pPage1==0 ){
+ rc = lockBtreeWithRetry(p);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ }
+ pCur = sqliteMalloc( sizeof(*pCur) );
+ if( pCur==0 ){
+ rc = SQLITE_NOMEM;
+ goto create_cursor_exception;
+ }
+ pCur->pgnoRoot = (Pgno)iTable;
+ if( iTable==1 && sqlite3pager_pagecount(pBt->pPager)==0 ){
+ rc = SQLITE_EMPTY;
+ goto create_cursor_exception;
+ }
+ rc = getAndInitPage(pBt, pCur->pgnoRoot, &pCur->pPage, 0);
+ if( rc!=SQLITE_OK ){
+ goto create_cursor_exception;
+ }
+
+ /* Now that no other errors can occur, finish filling in the BtCursor
+ ** variables, link the cursor into the BtShared list and set *ppCur (the
+ ** output argument to this function).
+ */
+ pCur->xCompare = xCmp ? xCmp : dfltCompare;
+ pCur->pArg = pArg;
+ pCur->pBtree = p;
+ pCur->wrFlag = wrFlag;
+ pCur->pNext = pBt->pCursor;
+ if( pCur->pNext ){
+ pCur->pNext->pPrev = pCur;
+ }
+ pBt->pCursor = pCur;
+ pCur->eState = CURSOR_INVALID;
+ *ppCur = pCur;
+
+ return SQLITE_OK;
+create_cursor_exception:
+ if( pCur ){
+ releasePage(pCur->pPage);
+ sqliteFree(pCur);
+ }
+ unlockBtreeIfUnused(pBt);
+ return rc;
+}
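+
+/* Usage sketch (hypothetical handle "p", root page "iTab"): open a read-only
+** cursor that uses the default key comparison function (xCmp==NULL):
+**
+**     BtCursor *pCur;
+**     rc = sqlite3BtreeCursor(p, iTab, 0, 0, 0, &pCur);
+*/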
+
+#if 0 /* Not Used */
+/*
+** Change the value of the comparison function used by a cursor.
+*/
+void sqlite3BtreeSetCompare(
+ BtCursor *pCur, /* The cursor to whose comparison function is changed */
+ int(*xCmp)(void*,int,const void*,int,const void*), /* New comparison func */
+ void *pArg /* First argument to xCmp() */
+){
+ pCur->xCompare = xCmp ? xCmp : dfltCompare;
+ pCur->pArg = pArg;
+}
+#endif
+
+/*
+** Close a cursor. The read lock on the database file is released
+** when the last cursor is closed.
+*/
+int sqlite3BtreeCloseCursor(BtCursor *pCur){
+ BtShared *pBt = pCur->pBtree->pBt;
+ restoreOrClearCursorPosition(pCur, 0);
+ if( pCur->pPrev ){
+ pCur->pPrev->pNext = pCur->pNext;
+ }else{
+ pBt->pCursor = pCur->pNext;
+ }
+ if( pCur->pNext ){
+ pCur->pNext->pPrev = pCur->pPrev;
+ }
+ releasePage(pCur->pPage);
+ unlockBtreeIfUnused(pBt);
+ sqliteFree(pCur);
+ return SQLITE_OK;
+}
+
+/*
+** Make a temporary cursor by filling in the fields of pTempCur.
+** The temporary cursor is not on the cursor list for the Btree.
+*/
+static void getTempCursor(BtCursor *pCur, BtCursor *pTempCur){
+ memcpy(pTempCur, pCur, sizeof(*pCur));
+ pTempCur->pNext = 0;
+ pTempCur->pPrev = 0;
+ if( pTempCur->pPage ){
+ sqlite3pager_ref(pTempCur->pPage->aData);
+ }
+}
+
+/*
+** Delete a temporary cursor such as was made by the getTempCursor()
+** function above.
+*/
+static void releaseTempCursor(BtCursor *pCur){
+ if( pCur->pPage ){
+ sqlite3pager_unref(pCur->pPage->aData);
+ }
+}
+
+/*
+** Make sure the BtCursor.info field of the given cursor is valid.
+** If it is not already valid, call parseCell() to fill it in.
+**
+** BtCursor.info is a cache of the information in the current cell.
+** Using this cache reduces the number of calls to parseCell().
+*/
+static void getCellInfo(BtCursor *pCur){
+ if( pCur->info.nSize==0 ){
+ parseCell(pCur->pPage, pCur->idx, &pCur->info);
+ }else{
+#ifndef NDEBUG
+ CellInfo info;
+ memset(&info, 0, sizeof(info));
+ parseCell(pCur->pPage, pCur->idx, &info);
+ assert( memcmp(&info, &pCur->info, sizeof(info))==0 );
+#endif
+ }
+}
+
+/*
+** Set *pSize to the size of the buffer needed to hold the value of
+** the key for the current entry. If the cursor is not pointing
+** to a valid entry, *pSize is set to 0.
+**
+** For a table with the INTKEY flag set, this routine returns the key
+** itself, not the number of bytes in the key.
+*/
+int sqlite3BtreeKeySize(BtCursor *pCur, i64 *pSize){
+ int rc = restoreOrClearCursorPosition(pCur, 1);
+ if( rc==SQLITE_OK ){
+ assert( pCur->eState==CURSOR_INVALID || pCur->eState==CURSOR_VALID );
+ if( pCur->eState==CURSOR_INVALID ){
+ *pSize = 0;
+ }else{
+ getCellInfo(pCur);
+ *pSize = pCur->info.nKey;
+ }
+ }
+ return rc;
+}
+
+/*
+** Set *pSize to the number of bytes of data in the entry the
+** cursor currently points to. Always return SQLITE_OK.
+** Failure is not possible. If the cursor is not currently
+** pointing to an entry (which can happen, for example, if
+** the database is empty) then *pSize is set to 0.
+*/
+int sqlite3BtreeDataSize(BtCursor *pCur, u32 *pSize){
+ int rc = restoreOrClearCursorPosition(pCur, 1);
+ if( rc==SQLITE_OK ){
+ assert( pCur->eState==CURSOR_INVALID || pCur->eState==CURSOR_VALID );
+ if( pCur->eState==CURSOR_INVALID ){
+ /* Not pointing at a valid entry - set *pSize to 0. */
+ *pSize = 0;
+ }else{
+ getCellInfo(pCur);
+ *pSize = pCur->info.nData;
+ }
+ }
+ return rc;
+}
+
+/*
+** Read payload information from the entry that the pCur cursor is
+** pointing to. Begin reading the payload at "offset" and read
+** a total of "amt" bytes. Put the result in pBuf.
+**
+** This routine does not make a distinction between key and data.
+** It just reads bytes from the payload area. Data might appear
+** on the main page or be scattered out on multiple overflow pages.
+*/
+static int getPayload(
+ BtCursor *pCur, /* Cursor pointing to entry to read from */
+ int offset, /* Begin reading this far into payload */
+ int amt, /* Read this many bytes */
+ unsigned char *pBuf, /* Write the bytes into this buffer */
+ int skipKey /* offset begins at data if this is true */
+){
+ unsigned char *aPayload;
+ Pgno nextPage;
+ int rc;
+ MemPage *pPage;
+ BtShared *pBt;
+ int ovflSize;
+ u32 nKey;
+
+ assert( pCur!=0 && pCur->pPage!=0 );
+ assert( pCur->eState==CURSOR_VALID );
+ pBt = pCur->pBtree->pBt;
+ pPage = pCur->pPage;
+ pageIntegrity(pPage);
+ assert( pCur->idx>=0 && pCur->idx<pPage->nCell );
+ getCellInfo(pCur);
+ aPayload = pCur->info.pCell + pCur->info.nHeader;
+ if( pPage->intKey ){
+ nKey = 0;
+ }else{
+ nKey = pCur->info.nKey;
+ }
+ assert( offset>=0 );
+ if( skipKey ){
+ offset += nKey;
+ }
+ if( offset+amt > nKey+pCur->info.nData ){
+ return SQLITE_ERROR;
+ }
+ if( offset<pCur->info.nLocal ){
+ int a = amt;
+ if( a+offset>pCur->info.nLocal ){
+ a = pCur->info.nLocal - offset;
+ }
+ memcpy(pBuf, &aPayload[offset], a);
+ if( a==amt ){
+ return SQLITE_OK;
+ }
+ offset = 0;
+ pBuf += a;
+ amt -= a;
+ }else{
+ offset -= pCur->info.nLocal;
+ }
+ ovflSize = pBt->usableSize - 4;
+ if( amt>0 ){
+ nextPage = get4byte(&aPayload[pCur->info.nLocal]);
+ while( amt>0 && nextPage ){
+ rc = sqlite3pager_get(pBt->pPager, nextPage, (void**)&aPayload);
+ if( rc!=0 ){
+ return rc;
+ }
+ nextPage = get4byte(aPayload);
+ if( offset<ovflSize ){
+ int a = amt;
+ if( a + offset > ovflSize ){
+ a = ovflSize - offset;
+ }
+ memcpy(pBuf, &aPayload[offset+4], a);
+ offset = 0;
+ amt -= a;
+ pBuf += a;
+ }else{
+ offset -= ovflSize;
+ }
+ sqlite3pager_unref(aPayload);
+ }
+ }
+
+ if( amt>0 ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ return SQLITE_OK;
+}
+
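+/* Note on the overflow chain walked above: each overflow page begins with
+** the 4-byte page number of the next overflow page (0 on the last page of
+** the chain) followed by usableSize-4 bytes of payload, which is why
+** ovflSize is usableSize-4 and the copy starts at &aPayload[offset+4].
+*/
+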
+/*
+** Read part of the key associated with cursor pCur. Exactly
+** "amt" bytes will be transfered into pBuf[]. The transfer
+** begins at "offset".
+**
+** Return SQLITE_OK on success or an error code if anything goes
+** wrong. An error is returned if "offset+amt" is larger than
+** the available payload.
+*/
+int sqlite3BtreeKey(BtCursor *pCur, u32 offset, u32 amt, void *pBuf){
+ int rc = restoreOrClearCursorPosition(pCur, 1);
+ if( rc==SQLITE_OK ){
+ assert( pCur->eState==CURSOR_VALID );
+ assert( pCur->pPage!=0 );
+ if( pCur->pPage->intKey ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ assert( pCur->pPage->intKey==0 );
+ assert( pCur->idx>=0 && pCur->idx<pCur->pPage->nCell );
+ rc = getPayload(pCur, offset, amt, (unsigned char*)pBuf, 0);
+ }
+ return rc;
+}
+
+/*
+** Read part of the data associated with cursor pCur. Exactly
+** "amt" bytes will be transfered into pBuf[]. The transfer
+** begins at "offset".
+**
+** Return SQLITE_OK on success or an error code if anything goes
+** wrong. An error is returned if "offset+amt" is larger than
+** the available payload.
+*/
+int sqlite3BtreeData(BtCursor *pCur, u32 offset, u32 amt, void *pBuf){
+ int rc = restoreOrClearCursorPosition(pCur, 1);
+ if( rc==SQLITE_OK ){
+ assert( pCur->eState==CURSOR_VALID );
+ assert( pCur->pPage!=0 );
+ assert( pCur->idx>=0 && pCur->idx<pCur->pPage->nCell );
+ rc = getPayload(pCur, offset, amt, pBuf, 1);
+ }
+ return rc;
+}
+
+/*
+** Return a pointer to payload information from the entry that the
+** pCur cursor is pointing to. The pointer is to the beginning of
+** the key if skipKey==0 and it points to the beginning of data if
+** skipKey==1. The number of bytes of available key/data is written
+** into *pAmt. If *pAmt==0, then the value returned will not be
+** a valid pointer.
+**
+** This routine is an optimization. It is common for the entire key
+** and data to fit on the local page and for there to be no overflow
+** pages. When that is so, this routine can be used to access the
+** key and data without making a copy. If the key and/or data spills
+** onto overflow pages, then getPayload() must be used to reassemble
+** the key/data and copy it into a preallocated buffer.
+**
+** The pointer returned by this routine looks directly into the cached
+** page of the database. The data might change or move the next time
+** any btree routine is called.
+*/
+static const unsigned char *fetchPayload(
+ BtCursor *pCur, /* Cursor pointing to entry to read from */
+ int *pAmt, /* Write the number of available bytes here */
+ int skipKey /* read beginning at data if this is true */
+){
+ unsigned char *aPayload;
+ MemPage *pPage;
+ u32 nKey;
+ int nLocal;
+
+ assert( pCur!=0 && pCur->pPage!=0 );
+ assert( pCur->eState==CURSOR_VALID );
+ pPage = pCur->pPage;
+ pageIntegrity(pPage);
+ assert( pCur->idx>=0 && pCur->idx<pPage->nCell );
+ getCellInfo(pCur);
+ aPayload = pCur->info.pCell;
+ aPayload += pCur->info.nHeader;
+ if( pPage->intKey ){
+ nKey = 0;
+ }else{
+ nKey = pCur->info.nKey;
+ }
+ if( skipKey ){
+ aPayload += nKey;
+ nLocal = pCur->info.nLocal - nKey;
+ }else{
+ nLocal = pCur->info.nLocal;
+ if( nLocal>nKey ){
+ nLocal = nKey;
+ }
+ }
+ *pAmt = nLocal;
+ return aPayload;
+}
+
+
+/*
+** For the entry that cursor pCur is pointing to, return as
+** many bytes of the key or data as are available on the local
+** b-tree page. Write the number of available bytes into *pAmt.
+**
+** The pointer returned is ephemeral. The key/data may move
+** or be destroyed on the next call to any Btree routine.
+**
+** These routines are used to get quick access to key and data
+** in the common case where no overflow pages are used.
+*/
+const void *sqlite3BtreeKeyFetch(BtCursor *pCur, int *pAmt){
+ if( pCur->eState==CURSOR_VALID ){
+ return (const void*)fetchPayload(pCur, pAmt, 0);
+ }
+ return 0;
+}
+const void *sqlite3BtreeDataFetch(BtCursor *pCur, int *pAmt){
+ if( pCur->eState==CURSOR_VALID ){
+ return (const void*)fetchPayload(pCur, pAmt, 1);
+ }
+ return 0;
+}
+
+
+/*
+** Move the cursor down to a new child page. The newPgno argument is the
+** page number of the child page to move to.
+*/
+static int moveToChild(BtCursor *pCur, u32 newPgno){
+ int rc;
+ MemPage *pNewPage;
+ MemPage *pOldPage;
+ BtShared *pBt = pCur->pBtree->pBt;
+
+ assert( pCur->eState==CURSOR_VALID );
+ rc = getAndInitPage(pBt, newPgno, &pNewPage, pCur->pPage);
+ if( rc ) return rc;
+ pageIntegrity(pNewPage);
+ pNewPage->idxParent = pCur->idx;
+ pOldPage = pCur->pPage;
+ pOldPage->idxShift = 0;
+ releasePage(pOldPage);
+ pCur->pPage = pNewPage;
+ pCur->idx = 0;
+ pCur->info.nSize = 0;
+ if( pNewPage->nCell<1 ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Return true if the page is the virtual root of its table.
+**
+** The virtual root page is the root page for most tables. But
+** for the table rooted on page 1, sometimes the real root page
+** is empty except for the right-pointer. In such cases the
+** virtual root page is the page that the right-pointer of page
+** 1 is pointing to.
+*/
+static int isRootPage(MemPage *pPage){
+ MemPage *pParent = pPage->pParent;
+ if( pParent==0 ) return 1;
+ if( pParent->pgno>1 ) return 0;
+ if( get2byte(&pParent->aData[pParent->hdrOffset+3])==0 ) return 1;
+ return 0;
+}
+
+/*
+** Move the cursor up to the parent page.
+**
+** pCur->idx is set to the cell index that contains the pointer
+** to the page we are coming from. If we are coming from the
+** right-most child page then pCur->idx is set to one more than
+** the largest cell index.
+*/
+static void moveToParent(BtCursor *pCur){
+ MemPage *pParent;
+ MemPage *pPage;
+ int idxParent;
+
+ assert( pCur->eState==CURSOR_VALID );
+ pPage = pCur->pPage;
+ assert( pPage!=0 );
+ assert( !isRootPage(pPage) );
+ pageIntegrity(pPage);
+ pParent = pPage->pParent;
+ assert( pParent!=0 );
+ pageIntegrity(pParent);
+ idxParent = pPage->idxParent;
+ sqlite3pager_ref(pParent->aData);
+ releasePage(pPage);
+ pCur->pPage = pParent;
+ pCur->info.nSize = 0;
+ assert( pParent->idxShift==0 );
+ pCur->idx = idxParent;
+}
+
+/*
+** Move the cursor to the root page
+*/
+static int moveToRoot(BtCursor *pCur){
+ MemPage *pRoot;
+ int rc = SQLITE_OK;
+ BtShared *pBt = pCur->pBtree->pBt;
+
+ restoreOrClearCursorPosition(pCur, 0);
+ pRoot = pCur->pPage;
+ if( pRoot && pRoot->pgno==pCur->pgnoRoot ){
+ assert( pRoot->isInit );
+ }else{
+ if(
+ SQLITE_OK!=(rc = getAndInitPage(pBt, pCur->pgnoRoot, &pRoot, 0))
+ ){
+ pCur->eState = CURSOR_INVALID;
+ return rc;
+ }
+ releasePage(pCur->pPage);
+ pageIntegrity(pRoot);
+ pCur->pPage = pRoot;
+ }
+ pCur->idx = 0;
+ pCur->info.nSize = 0;
+ if( pRoot->nCell==0 && !pRoot->leaf ){
+ Pgno subpage;
+ assert( pRoot->pgno==1 );
+ subpage = get4byte(&pRoot->aData[pRoot->hdrOffset+8]);
+ assert( subpage>0 );
+ pCur->eState = CURSOR_VALID;
+ rc = moveToChild(pCur, subpage);
+ }
+ pCur->eState = ((pCur->pPage->nCell>0)?CURSOR_VALID:CURSOR_INVALID);
+ return rc;
+}
+
+/*
+** Move the cursor down to the left-most leaf entry beneath the
+** entry to which it is currently pointing.
+**
+** The left-most leaf is the one with the smallest key - the first
+** in ascending order.
+*/
+static int moveToLeftmost(BtCursor *pCur){
+ Pgno pgno;
+ int rc;
+ MemPage *pPage;
+
+ assert( pCur->eState==CURSOR_VALID );
+ while( !(pPage = pCur->pPage)->leaf ){
+ assert( pCur->idx>=0 && pCur->idx<pPage->nCell );
+ pgno = get4byte(findCell(pPage, pCur->idx));
+ rc = moveToChild(pCur, pgno);
+ if( rc ) return rc;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Move the cursor down to the right-most leaf entry beneath the
+** page to which it is currently pointing. Notice the difference
+** between moveToLeftmost() and moveToRightmost(). moveToLeftmost()
+** finds the left-most entry beneath the *entry* whereas moveToRightmost()
+** finds the right-most entry beneath the *page*.
+**
+** The right-most entry is the one with the largest key - the last
+** key in ascending order.
+*/
+static int moveToRightmost(BtCursor *pCur){
+ Pgno pgno;
+ int rc;
+ MemPage *pPage;
+
+ assert( pCur->eState==CURSOR_VALID );
+ while( !(pPage = pCur->pPage)->leaf ){
+ pgno = get4byte(&pPage->aData[pPage->hdrOffset+8]);
+ pCur->idx = pPage->nCell;
+ rc = moveToChild(pCur, pgno);
+ if( rc ) return rc;
+ }
+ pCur->idx = pPage->nCell - 1;
+ pCur->info.nSize = 0;
+ return SQLITE_OK;
+}
+
+/* Move the cursor to the first entry in the table. Return SQLITE_OK
+** on success. Set *pRes to 0 if the cursor actually points to something
+** or set *pRes to 1 if the table is empty.
+*/
+int sqlite3BtreeFirst(BtCursor *pCur, int *pRes){
+ int rc;
+ rc = moveToRoot(pCur);
+ if( rc ) return rc;
+ if( pCur->eState==CURSOR_INVALID ){
+ assert( pCur->pPage->nCell==0 );
+ *pRes = 1;
+ return SQLITE_OK;
+ }
+ assert( pCur->pPage->nCell>0 );
+ *pRes = 0;
+ rc = moveToLeftmost(pCur);
+ return rc;
+}
+
+/* Move the cursor to the last entry in the table. Return SQLITE_OK
+** on success. Set *pRes to 0 if the cursor actually points to something
+** or set *pRes to 1 if the table is empty.
+*/
+int sqlite3BtreeLast(BtCursor *pCur, int *pRes){
+ int rc;
+ rc = moveToRoot(pCur);
+ if( rc ) return rc;
+ if( CURSOR_INVALID==pCur->eState ){
+ assert( pCur->pPage->nCell==0 );
+ *pRes = 1;
+ return SQLITE_OK;
+ }
+ assert( pCur->eState==CURSOR_VALID );
+ *pRes = 0;
+ rc = moveToRightmost(pCur);
+ return rc;
+}
+
+/* Move the cursor so that it points to an entry near pKey/nKey.
+** Return a success code.
+**
+** For INTKEY tables, only the nKey parameter is used. pKey is
+** ignored. For other tables, nKey is the number of bytes of data
+** in pKey. The comparison function specified when the cursor was
+** created is used to compare keys.
+**
+** If an exact match is not found, then the cursor is always
+** left pointing at a leaf page which would hold the entry if it
+** were present. The cursor might point to an entry that comes
+** before or after the key.
+**
+** The result of comparing the key with the entry to which the
+** cursor points is written to *pRes if pRes!=NULL. The meaning of
+** this value is as follows:
+**
+** *pRes<0 The cursor is left pointing at an entry that
+** is smaller than pKey or if the table is empty
+** and the cursor is therefore left pointing to nothing.
+**
+** *pRes==0 The cursor is left pointing at an entry that
+** exactly matches pKey.
+**
+** *pRes>0 The cursor is left pointing at an entry that
+** is larger than pKey.
+*/
+int sqlite3BtreeMoveto(BtCursor *pCur, const void *pKey, i64 nKey, int *pRes){
+ int rc;
+ int tryRightmost;
+ rc = moveToRoot(pCur);
+ if( rc ) return rc;
+ assert( pCur->pPage );
+ assert( pCur->pPage->isInit );
+ tryRightmost = pCur->pPage->intKey;
+ if( pCur->eState==CURSOR_INVALID ){
+ *pRes = -1;
+ assert( pCur->pPage->nCell==0 );
+ return SQLITE_OK;
+ }
+ for(;;){
+ int lwr, upr;
+ Pgno chldPg;
+ MemPage *pPage = pCur->pPage;
+ int c = -1; /* pRes return if table is empty must be -1 */
+ lwr = 0;
+ upr = pPage->nCell-1;
+ if( !pPage->intKey && pKey==0 ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ pageIntegrity(pPage);
+ while( lwr<=upr ){
+ void *pCellKey;
+ i64 nCellKey;
+ pCur->idx = (lwr+upr)/2;
+ pCur->info.nSize = 0;
+ if( pPage->intKey ){
+ u8 *pCell;
+ if( tryRightmost ){
+ pCur->idx = upr;
+ }
+ pCell = findCell(pPage, pCur->idx) + pPage->childPtrSize;
+ if( pPage->hasData ){
+ u32 dummy;
+ pCell += getVarint32(pCell, &dummy);
+ }
+ getVarint(pCell, (u64 *)&nCellKey);
+ if( nCellKey<nKey ){
+ c = -1;
+ }else if( nCellKey>nKey ){
+ c = +1;
+ tryRightmost = 0;
+ }else{
+ c = 0;
+ }
+ }else{
+ int available;
+ pCellKey = (void *)fetchPayload(pCur, &available, 0);
+ nCellKey = pCur->info.nKey;
+ if( available>=nCellKey ){
+ c = pCur->xCompare(pCur->pArg, nCellKey, pCellKey, nKey, pKey);
+ }else{
+ pCellKey = sqliteMallocRaw( nCellKey );
+ if( pCellKey==0 ) return SQLITE_NOMEM;
+ rc = sqlite3BtreeKey(pCur, 0, nCellKey, (void *)pCellKey);
+ c = pCur->xCompare(pCur->pArg, nCellKey, pCellKey, nKey, pKey);
+ sqliteFree(pCellKey);
+ if( rc ) return rc;
+ }
+ }
+ if( c==0 ){
+ if( pPage->leafData && !pPage->leaf ){
+ lwr = pCur->idx;
+ upr = lwr - 1;
+ break;
+ }else{
+ if( pRes ) *pRes = 0;
+ return SQLITE_OK;
+ }
+ }
+ if( c<0 ){
+ lwr = pCur->idx+1;
+ }else{
+ upr = pCur->idx-1;
+ }
+ }
+ assert( lwr==upr+1 );
+ assert( pPage->isInit );
+ if( pPage->leaf ){
+ chldPg = 0;
+ }else if( lwr>=pPage->nCell ){
+ chldPg = get4byte(&pPage->aData[pPage->hdrOffset+8]);
+ }else{
+ chldPg = get4byte(findCell(pPage, lwr));
+ }
+ if( chldPg==0 ){
+ assert( pCur->idx>=0 && pCur->idx<pCur->pPage->nCell );
+ if( pRes ) *pRes = c;
+ return SQLITE_OK;
+ }
+ pCur->idx = lwr;
+ pCur->info.nSize = 0;
+ rc = moveToChild(pCur, chldPg);
+ if( rc ){
+ return rc;
+ }
+ }
+ /* NOT REACHED */
+}
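+
+/* As an illustrative sketch of how callers typically consume the *pRes
+** value (pProbe and nProbe are hypothetical caller-side names and error
+** handling is elided): to position the cursor on the smallest entry that
+** is greater than or equal to a probe key, one might write
+**
+**     int res;
+**     rc = sqlite3BtreeMoveto(pCur, pProbe, nProbe, &res);
+**     if( rc==SQLITE_OK && res<0 ){
+**       rc = sqlite3BtreeNext(pCur, &res);
+**     }
+**
+** After this sequence a non-zero res means that no such entry exists.
+*/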
+
+/*
+** Return TRUE if the cursor is not pointing at an entry of the table.
+**
+** TRUE will be returned after a call to sqlite3BtreeNext() moves
+** past the last entry in the table or sqlite3BtreePrev() moves past
+** the first entry. TRUE is also returned if the table is empty.
+*/
+int sqlite3BtreeEof(BtCursor *pCur){
+ /* TODO: What if the cursor is in CURSOR_REQUIRESEEK but all table entries
+ ** have been deleted? This API will need to change to return an error code
+ ** as well as the boolean result value.
+ */
+ return (CURSOR_VALID!=pCur->eState);
+}
+
+/*
+** Advance the cursor to the next entry in the database. If
+** successful then set *pRes=0. If the cursor
+** was already pointing to the last entry in the database before
+** this routine was called, then set *pRes=1.
+*/
+int sqlite3BtreeNext(BtCursor *pCur, int *pRes){
+ int rc;
+ MemPage *pPage;
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ rc = restoreOrClearCursorPosition(pCur, 1);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ if( pCur->skip>0 ){
+ pCur->skip = 0;
+ *pRes = 0;
+ return SQLITE_OK;
+ }
+ pCur->skip = 0;
+#endif
+
+ assert( pRes!=0 );
+ pPage = pCur->pPage;
+ if( CURSOR_INVALID==pCur->eState ){
+ *pRes = 1;
+ return SQLITE_OK;
+ }
+ assert( pPage->isInit );
+ assert( pCur->idx<pPage->nCell );
+
+ pCur->idx++;
+ pCur->info.nSize = 0;
+ if( pCur->idx>=pPage->nCell ){
+ if( !pPage->leaf ){
+ rc = moveToChild(pCur, get4byte(&pPage->aData[pPage->hdrOffset+8]));
+ if( rc ) return rc;
+ rc = moveToLeftmost(pCur);
+ *pRes = 0;
+ return rc;
+ }
+ do{
+ if( isRootPage(pPage) ){
+ *pRes = 1;
+ pCur->eState = CURSOR_INVALID;
+ return SQLITE_OK;
+ }
+ moveToParent(pCur);
+ pPage = pCur->pPage;
+ }while( pCur->idx>=pPage->nCell );
+ *pRes = 0;
+ if( pPage->leafData ){
+ rc = sqlite3BtreeNext(pCur, pRes);
+ }else{
+ rc = SQLITE_OK;
+ }
+ return rc;
+ }
+ *pRes = 0;
+ if( pPage->leaf ){
+ return SQLITE_OK;
+ }
+ rc = moveToLeftmost(pCur);
+ return rc;
+}
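+
+/* A minimal sketch of a full forward scan built from the primitives above
+** (error handling elided; how the entry is read, e.g. via sqlite3BtreeKey(),
+** depends on the caller):
+**
+**     int res;
+**     rc = sqlite3BtreeFirst(pCur, &res);
+**     while( rc==SQLITE_OK && res==0 ){
+**       ...read the current entry here...
+**       rc = sqlite3BtreeNext(pCur, &res);
+**     }
+**
+** The loop terminates when res becomes 1 (past the last entry) or an
+** error code is returned.
+*/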
+
+/*
+** Step the cursor back to the previous entry in the database. If
+** successful then set *pRes=0. If the cursor
+** was already pointing to the first entry in the database before
+** this routine was called, then set *pRes=1.
+*/
+int sqlite3BtreePrevious(BtCursor *pCur, int *pRes){
+ int rc;
+ Pgno pgno;
+ MemPage *pPage;
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ rc = restoreOrClearCursorPosition(pCur, 1);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ if( pCur->skip<0 ){
+ pCur->skip = 0;
+ *pRes = 0;
+ return SQLITE_OK;
+ }
+ pCur->skip = 0;
+#endif
+
+ if( CURSOR_INVALID==pCur->eState ){
+ *pRes = 1;
+ return SQLITE_OK;
+ }
+
+ pPage = pCur->pPage;
+ assert( pPage->isInit );
+ assert( pCur->idx>=0 );
+ if( !pPage->leaf ){
+ pgno = get4byte( findCell(pPage, pCur->idx) );
+ rc = moveToChild(pCur, pgno);
+ if( rc ) return rc;
+ rc = moveToRightmost(pCur);
+ }else{
+ while( pCur->idx==0 ){
+ if( isRootPage(pPage) ){
+ pCur->eState = CURSOR_INVALID;
+ *pRes = 1;
+ return SQLITE_OK;
+ }
+ moveToParent(pCur);
+ pPage = pCur->pPage;
+ }
+ pCur->idx--;
+ pCur->info.nSize = 0;
+ if( pPage->leafData && !pPage->leaf ){
+ rc = sqlite3BtreePrevious(pCur, pRes);
+ }else{
+ rc = SQLITE_OK;
+ }
+ }
+ *pRes = 0;
+ return rc;
+}
+
+/*
+** Allocate a new page from the database file.
+**
+** The new page is marked as dirty. (In other words, sqlite3pager_write()
+** has already been called on the new page.) The new page has also
+** been referenced and the calling routine is responsible for calling
+** sqlite3pager_unref() on the new page when it is done.
+**
+** SQLITE_OK is returned on success. Any other return value indicates
+** an error. *ppPage and *pPgno are undefined in the event of an error.
+** Do not invoke sqlite3pager_unref() on *ppPage if an error is returned.
+**
+** If the "nearby" parameter is not 0, then a (feeble) effort is made to
+** locate a page close to the page number "nearby". This can be used in an
+** attempt to keep related pages close to each other in the database file,
+** which in turn can make database access faster.
+**
+** If the "exact" parameter is not 0, and the page-number nearby exists
+** anywhere on the free-list, then it is guaranteed to be returned. This
+** is only used by auto-vacuum databases when allocating a new table.
+*/
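+/*
+** As a rough sketch, the free-list structure manipulated below is:
+**
+**   Page 1, byte offset 32:  page number of the first free-list trunk page
+**   Page 1, byte offset 36:  total number of pages on the free-list
+**
+**   Each trunk page:
+**     bytes 0..3        page number of the next trunk page (0 if none)
+**     bytes 4..7        number of leaf page numbers on this trunk (k)
+**     bytes 8..8+4*k-1  k 4-byte page numbers of free leaf pages
+*/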
+static int allocatePage(
+ BtShared *pBt,
+ MemPage **ppPage,
+ Pgno *pPgno,
+ Pgno nearby,
+ u8 exact
+){
+ MemPage *pPage1;
+ int rc;
+ int n; /* Number of pages on the freelist */
+ int k; /* Number of leaves on the trunk of the freelist */
+
+ pPage1 = pBt->pPage1;
+ n = get4byte(&pPage1->aData[36]);
+ if( n>0 ){
+ /* There are pages on the freelist. Reuse one of those pages. */
+ MemPage *pTrunk = 0;
+ Pgno iTrunk;
+ MemPage *pPrevTrunk = 0;
+ u8 searchList = 0; /* If the free-list must be searched for 'nearby' */
+
+ /* If the 'exact' parameter was true and a query of the pointer-map
+ ** shows that the page 'nearby' is somewhere on the free-list, then
+ ** the entire free-list will be searched for that page.
+ */
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( exact ){
+ u8 eType;
+ assert( nearby>0 );
+ assert( pBt->autoVacuum );
+ rc = ptrmapGet(pBt, nearby, &eType, 0);
+ if( rc ) return rc;
+ if( eType==PTRMAP_FREEPAGE ){
+ searchList = 1;
+ }
+ *pPgno = nearby;
+ }
+#endif
+
+ /* Decrement the free-list count by 1. Set iTrunk to the index of the
+ ** first free-list trunk page. pPrevTrunk is initially NULL.
+ */
+ rc = sqlite3pager_write(pPage1->aData);
+ if( rc ) return rc;
+ put4byte(&pPage1->aData[36], n-1);
+
+ /* The code within this loop is run only once if the 'searchList' variable
+ ** is not true. Otherwise, it runs once for each trunk-page on the
+ ** free-list until the page 'nearby' is located.
+ */
+ do {
+ pPrevTrunk = pTrunk;
+ if( pPrevTrunk ){
+ iTrunk = get4byte(&pPrevTrunk->aData[0]);
+ }else{
+ iTrunk = get4byte(&pPage1->aData[32]);
+ }
+ rc = getPage(pBt, iTrunk, &pTrunk);
+ if( rc ){
+ releasePage(pPrevTrunk);
+ return rc;
+ }
+
+ /* TODO: This should move to after the loop? */
+ rc = sqlite3pager_write(pTrunk->aData);
+ if( rc ){
+ releasePage(pTrunk);
+ releasePage(pPrevTrunk);
+ return rc;
+ }
+
+ k = get4byte(&pTrunk->aData[4]);
+ if( k==0 && !searchList ){
+ /* The trunk has no leaves and the list is not being searched.
+ ** So extract the trunk page itself and use it as the newly
+ ** allocated page */
+ assert( pPrevTrunk==0 );
+ *pPgno = iTrunk;
+ memcpy(&pPage1->aData[32], &pTrunk->aData[0], 4);
+ *ppPage = pTrunk;
+ pTrunk = 0;
+ TRACE(("ALLOCATE: %d trunk - %d free pages left\n", *pPgno, n-1));
+ }else if( k>pBt->usableSize/4 - 8 ){
+ /* Value of k is out of range. Database corruption */
+ return SQLITE_CORRUPT_BKPT;
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ }else if( searchList && nearby==iTrunk ){
+ /* The list is being searched and this trunk page is the page
+ ** to allocate, regardless of whether it has leaves.
+ */
+ assert( *pPgno==iTrunk );
+ *ppPage = pTrunk;
+ searchList = 0;
+ if( k==0 ){
+ if( !pPrevTrunk ){
+ memcpy(&pPage1->aData[32], &pTrunk->aData[0], 4);
+ }else{
+ memcpy(&pPrevTrunk->aData[0], &pTrunk->aData[0], 4);
+ }
+ }else{
+ /* The trunk page is required by the caller but it contains
+ ** pointers to free-list leaves. The first leaf becomes a trunk
+ ** page in this case.
+ */
+ MemPage *pNewTrunk;
+ Pgno iNewTrunk = get4byte(&pTrunk->aData[8]);
+ rc = getPage(pBt, iNewTrunk, &pNewTrunk);
+ if( rc!=SQLITE_OK ){
+ releasePage(pTrunk);
+ releasePage(pPrevTrunk);
+ return rc;
+ }
+ rc = sqlite3pager_write(pNewTrunk->aData);
+ if( rc!=SQLITE_OK ){
+ releasePage(pNewTrunk);
+ releasePage(pTrunk);
+ releasePage(pPrevTrunk);
+ return rc;
+ }
+ memcpy(&pNewTrunk->aData[0], &pTrunk->aData[0], 4);
+ put4byte(&pNewTrunk->aData[4], k-1);
+ memcpy(&pNewTrunk->aData[8], &pTrunk->aData[12], (k-1)*4);
+ if( !pPrevTrunk ){
+ put4byte(&pPage1->aData[32], iNewTrunk);
+ }else{
+ put4byte(&pPrevTrunk->aData[0], iNewTrunk);
+ }
+ releasePage(pNewTrunk);
+ }
+ pTrunk = 0;
+ TRACE(("ALLOCATE: %d trunk - %d free pages left\n", *pPgno, n-1));
+#endif
+ }else{
+ /* Extract a leaf from the trunk */
+ int closest;
+ Pgno iPage;
+ unsigned char *aData = pTrunk->aData;
+ if( nearby>0 ){
+ int i, dist;
+ closest = 0;
+ dist = get4byte(&aData[8]) - nearby;
+ if( dist<0 ) dist = -dist;
+ for(i=1; i<k; i++){
+ int d2 = get4byte(&aData[8+i*4]) - nearby;
+ if( d2<0 ) d2 = -d2;
+ if( d2<dist ){
+ closest = i;
+ dist = d2;
+ }
+ }
+ }else{
+ closest = 0;
+ }
+
+ iPage = get4byte(&aData[8+closest*4]);
+ if( !searchList || iPage==nearby ){
+ *pPgno = iPage;
+ if( *pPgno>sqlite3pager_pagecount(pBt->pPager) ){
+ /* Free page off the end of the file */
+ return SQLITE_CORRUPT_BKPT;
+ }
+ TRACE(("ALLOCATE: %d was leaf %d of %d on trunk %d"
+ ": %d more free pages\n",
+ *pPgno, closest+1, k, pTrunk->pgno, n-1));
+ if( closest<k-1 ){
+ memcpy(&aData[8+closest*4], &aData[4+k*4], 4);
+ }
+ put4byte(&aData[4], k-1);
+ rc = getPage(pBt, *pPgno, ppPage);
+ if( rc==SQLITE_OK ){
+ sqlite3pager_dont_rollback((*ppPage)->aData);
+ rc = sqlite3pager_write((*ppPage)->aData);
+ if( rc!=SQLITE_OK ){
+ releasePage(*ppPage);
+ }
+ }
+ searchList = 0;
+ }
+ }
+ releasePage(pPrevTrunk);
+ }while( searchList );
+ releasePage(pTrunk);
+ }else{
+ /* There are no pages on the freelist, so create a new page at the
+ ** end of the file */
+ *pPgno = sqlite3pager_pagecount(pBt->pPager) + 1;
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum && PTRMAP_ISPAGE(pBt, *pPgno) ){
+ /* If *pPgno refers to a pointer-map page, allocate two new pages
+ ** at the end of the file instead of one. The first allocated page
+ ** becomes a new pointer-map page, the second is used by the caller.
+ */
+ TRACE(("ALLOCATE: %d from end of file (pointer-map page)\n", *pPgno));
+ assert( *pPgno!=PENDING_BYTE_PAGE(pBt) );
+ (*pPgno)++;
+ }
+#endif
+
+ assert( *pPgno!=PENDING_BYTE_PAGE(pBt) );
+ rc = getPage(pBt, *pPgno, ppPage);
+ if( rc ) return rc;
+ rc = sqlite3pager_write((*ppPage)->aData);
+ if( rc!=SQLITE_OK ){
+ releasePage(*ppPage);
+ }
+ TRACE(("ALLOCATE: %d from end of file\n", *pPgno));
+ }
+
+ assert( *pPgno!=PENDING_BYTE_PAGE(pBt) );
+ return rc;
+}
+
+/*
+** Add a page of the database file to the freelist.
+**
+** sqlite3pager_unref() is NOT called for pPage.
+*/
+static int freePage(MemPage *pPage){
+ BtShared *pBt = pPage->pBt;
+ MemPage *pPage1 = pBt->pPage1;
+ int rc, n, k;
+
+ /* Prepare the page for freeing */
+ assert( pPage->pgno>1 );
+ pPage->isInit = 0;
+ releasePage(pPage->pParent);
+ pPage->pParent = 0;
+
+ /* Increment the free page count on pPage1 */
+ rc = sqlite3pager_write(pPage1->aData);
+ if( rc ) return rc;
+ n = get4byte(&pPage1->aData[36]);
+ put4byte(&pPage1->aData[36], n+1);
+
+#ifdef SQLITE_SECURE_DELETE
+ /* If the SQLITE_SECURE_DELETE compile-time option is enabled, then
+ ** always fully overwrite deleted information with zeros.
+ */
+ rc = sqlite3pager_write(pPage->aData);
+ if( rc ) return rc;
+ memset(pPage->aData, 0, pPage->pBt->pageSize);
+#endif
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ /* If the database supports auto-vacuum, write an entry in the pointer-map
+ ** to indicate that the page is free.
+ */
+ if( pBt->autoVacuum ){
+ rc = ptrmapPut(pBt, pPage->pgno, PTRMAP_FREEPAGE, 0);
+ if( rc ) return rc;
+ }
+#endif
+
+ if( n==0 ){
+ /* This is the first free page */
+ rc = sqlite3pager_write(pPage->aData);
+ if( rc ) return rc;
+ memset(pPage->aData, 0, 8);
+ put4byte(&pPage1->aData[32], pPage->pgno);
+ TRACE(("FREE-PAGE: %d first\n", pPage->pgno));
+ }else{
+ /* Other free pages already exist. Retrieve the first trunk page
+ ** of the freelist and find out how many leaves it has. */
+ MemPage *pTrunk;
+ rc = getPage(pBt, get4byte(&pPage1->aData[32]), &pTrunk);
+ if( rc ) return rc;
+ k = get4byte(&pTrunk->aData[4]);
+ if( k>=pBt->usableSize/4 - 8 ){
+ /* The trunk is full. Turn the page being freed into a new
+ ** trunk page with no leaves. */
+ rc = sqlite3pager_write(pPage->aData);
+ if( rc ) return rc;
+ put4byte(pPage->aData, pTrunk->pgno);
+ put4byte(&pPage->aData[4], 0);
+ put4byte(&pPage1->aData[32], pPage->pgno);
+ TRACE(("FREE-PAGE: %d new trunk page replacing %d\n",
+ pPage->pgno, pTrunk->pgno));
+ }else{
+ /* Add the newly freed page as a leaf on the current trunk */
+ rc = sqlite3pager_write(pTrunk->aData);
+ if( rc ) return rc;
+ put4byte(&pTrunk->aData[4], k+1);
+ put4byte(&pTrunk->aData[8+k*4], pPage->pgno);
+#ifndef SQLITE_SECURE_DELETE
+ sqlite3pager_dont_write(pBt->pPager, pPage->pgno);
+#endif
+ TRACE(("FREE-PAGE: %d leaf on trunk page %d\n",pPage->pgno,pTrunk->pgno));
+ }
+ releasePage(pTrunk);
+ }
+ return rc;
+}
+
+/*
+** Free any overflow pages associated with the given Cell.
+*/
+static int clearCell(MemPage *pPage, unsigned char *pCell){
+ BtShared *pBt = pPage->pBt;
+ CellInfo info;
+ Pgno ovflPgno;
+ int rc;
+
+ parseCellPtr(pPage, pCell, &info);
+ if( info.iOverflow==0 ){
+ return SQLITE_OK; /* No overflow pages. Return without doing anything */
+ }
+ ovflPgno = get4byte(&pCell[info.iOverflow]);
+ while( ovflPgno!=0 ){
+ MemPage *pOvfl;
+ if( ovflPgno>sqlite3pager_pagecount(pBt->pPager) ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ rc = getPage(pBt, ovflPgno, &pOvfl);
+ if( rc ) return rc;
+ ovflPgno = get4byte(pOvfl->aData);
+ rc = freePage(pOvfl);
+ sqlite3pager_unref(pOvfl->aData);
+ if( rc ) return rc;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Create the byte sequence used to represent a cell on page pPage
+** and write that byte sequence into pCell[]. Overflow pages are
+** allocated and filled in as necessary. The calling procedure
+** is responsible for making sure sufficient space has been allocated
+** for pCell[].
+**
+** Note that pCell does not necessarily need to point to the pPage->aData
+** area. pCell might point to some temporary storage. The cell will
+** be constructed in this temporary area then copied into pPage->aData
+** later.
+*/
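+/*
+** As a rough sketch, the cell image assembled here is laid out as:
+**
+**     4-byte page number of the left child   (interior pages only)
+**     varint number of bytes of data         (only if pPage->hasData)
+**     varint key (the rowid for INTKEY tables)
+**     payload bytes that fit locally
+**     4-byte page number of the first overflow page, if the payload
+**       does not fit locally
+**
+** Overflow pages form a chain: the first 4 bytes of each overflow page
+** hold the page number of the next overflow page (0 terminates the
+** chain) and the remaining usableSize-4 bytes hold payload.
+*/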
+static int fillInCell(
+ MemPage *pPage, /* The page that contains the cell */
+ unsigned char *pCell, /* Complete text of the cell */
+ const void *pKey, i64 nKey, /* The key */
+ const void *pData,int nData, /* The data */
+ int *pnSize /* Write cell size here */
+){
+ int nPayload;
+ const u8 *pSrc;
+ int nSrc, n, rc;
+ int spaceLeft;
+ MemPage *pOvfl = 0;
+ MemPage *pToRelease = 0;
+ unsigned char *pPrior;
+ unsigned char *pPayload;
+ BtShared *pBt = pPage->pBt;
+ Pgno pgnoOvfl = 0;
+ int nHeader;
+ CellInfo info;
+
+ /* Fill in the header. */
+ nHeader = 0;
+ if( !pPage->leaf ){
+ nHeader += 4;
+ }
+ if( pPage->hasData ){
+ nHeader += putVarint(&pCell[nHeader], nData);
+ }else{
+ nData = 0;
+ }
+ nHeader += putVarint(&pCell[nHeader], *(u64*)&nKey);
+ parseCellPtr(pPage, pCell, &info);
+ assert( info.nHeader==nHeader );
+ assert( info.nKey==nKey );
+ assert( info.nData==nData );
+
+ /* Fill in the payload */
+ nPayload = nData;
+ if( pPage->intKey ){
+ pSrc = pData;
+ nSrc = nData;
+ nData = 0;
+ }else{
+ nPayload += nKey;
+ pSrc = pKey;
+ nSrc = nKey;
+ }
+ *pnSize = info.nSize;
+ spaceLeft = info.nLocal;
+ pPayload = &pCell[nHeader];
+ pPrior = &pCell[info.iOverflow];
+
+ while( nPayload>0 ){
+ if( spaceLeft==0 ){
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ Pgno pgnoPtrmap = pgnoOvfl; /* Overflow page pointer-map entry page */
+#endif
+ rc = allocatePage(pBt, &pOvfl, &pgnoOvfl, pgnoOvfl, 0);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ /* If the database supports auto-vacuum, and the second or subsequent
+ ** overflow page is being allocated, add an entry to the pointer-map
+ ** for that page now. The entry for the first overflow page will be
+ ** added later, by the insertCell() routine.
+ */
+ if( pBt->autoVacuum && pgnoPtrmap!=0 && rc==SQLITE_OK ){
+ rc = ptrmapPut(pBt, pgnoOvfl, PTRMAP_OVERFLOW2, pgnoPtrmap);
+ }
+#endif
+ if( rc ){
+ releasePage(pToRelease);
+ /* clearCell(pPage, pCell); */
+ return rc;
+ }
+ put4byte(pPrior, pgnoOvfl);
+ releasePage(pToRelease);
+ pToRelease = pOvfl;
+ pPrior = pOvfl->aData;
+ put4byte(pPrior, 0);
+ pPayload = &pOvfl->aData[4];
+ spaceLeft = pBt->usableSize - 4;
+ }
+ n = nPayload;
+ if( n>spaceLeft ) n = spaceLeft;
+ if( n>nSrc ) n = nSrc;
+ assert( pSrc );
+ memcpy(pPayload, pSrc, n);
+ nPayload -= n;
+ pPayload += n;
+ pSrc += n;
+ nSrc -= n;
+ spaceLeft -= n;
+ if( nSrc==0 ){
+ nSrc = nData;
+ pSrc = pData;
+ }
+ }
+ releasePage(pToRelease);
+ return SQLITE_OK;
+}
+
+/*
+** Change the MemPage.pParent pointer on the page whose number is
+** given in the second argument so that MemPage.pParent holds the
+** pointer in the third argument.
+*/
+static int reparentPage(BtShared *pBt, Pgno pgno, MemPage *pNewParent, int idx){
+ MemPage *pThis;
+ unsigned char *aData;
+
+ assert( pNewParent!=0 );
+ if( pgno==0 ) return SQLITE_OK;
+ assert( pBt->pPager!=0 );
+ aData = sqlite3pager_lookup(pBt->pPager, pgno);
+ if( aData ){
+ pThis = (MemPage*)&aData[pBt->pageSize];
+ assert( pThis->aData==aData );
+ if( pThis->isInit ){
+ if( pThis->pParent!=pNewParent ){
+ if( pThis->pParent ) sqlite3pager_unref(pThis->pParent->aData);
+ pThis->pParent = pNewParent;
+ sqlite3pager_ref(pNewParent->aData);
+ }
+ pThis->idxParent = idx;
+ }
+ sqlite3pager_unref(aData);
+ }
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ return ptrmapPut(pBt, pgno, PTRMAP_BTREE, pNewParent->pgno);
+ }
+#endif
+ return SQLITE_OK;
+}
+
+
+
+/*
+** Change the pParent pointer of all children of pPage to point back
+** to pPage.
+**
+** In other words, for every child of pPage, invoke reparentPage()
+** to make sure that each child knows that pPage is its parent.
+**
+** This routine gets called after you memcpy() one page into
+** another.
+*/
+static int reparentChildPages(MemPage *pPage){
+ int i;
+ BtShared *pBt = pPage->pBt;
+ int rc = SQLITE_OK;
+
+ if( pPage->leaf ) return SQLITE_OK;
+
+ for(i=0; i<pPage->nCell; i++){
+ u8 *pCell = findCell(pPage, i);
+ if( !pPage->leaf ){
+ rc = reparentPage(pBt, get4byte(pCell), pPage, i);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+ }
+ if( !pPage->leaf ){
+ rc = reparentPage(pBt, get4byte(&pPage->aData[pPage->hdrOffset+8]),
+ pPage, i);
+ pPage->idxShift = 0;
+ }
+ return rc;
+}
+
+/*
+** Remove the i-th cell from pPage. This routine affects pPage only.
+** The cell content is not freed or deallocated. It is assumed that
+** the cell content has been copied someplace else. This routine just
+** removes the reference to the cell from pPage.
+**
+** "sz" must be the number of bytes in the cell.
+*/
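+/*
+** A rough sketch of the page layout assumed by dropCell() and
+** insertCell(): the page header begins at pPage->hdrOffset; the 2-byte
+** value at hdrOffset+3 is the cell count and the 2-byte value at
+** hdrOffset+5 is the offset of the first byte of the cell content area.
+** The cell pointer array begins at pPage->cellOffset and holds one
+** 2-byte offset per cell, in key order, while the cell bodies themselves
+** live in the content area toward the end of the page.
+*/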
+static void dropCell(MemPage *pPage, int idx, int sz){
+ int i; /* Loop counter */
+ int pc; /* Offset to cell content of cell being deleted */
+ u8 *data; /* pPage->aData */
+ u8 *ptr; /* Used to move bytes around within data[] */
+
+ assert( idx>=0 && idx<pPage->nCell );
+ assert( sz==cellSize(pPage, idx) );
+ assert( sqlite3pager_iswriteable(pPage->aData) );
+ data = pPage->aData;
+ ptr = &data[pPage->cellOffset + 2*idx];
+ pc = get2byte(ptr);
+ assert( pc>10 && pc+sz<=pPage->pBt->usableSize );
+ freeSpace(pPage, pc, sz);
+ for(i=idx+1; i<pPage->nCell; i++, ptr+=2){
+ ptr[0] = ptr[2];
+ ptr[1] = ptr[3];
+ }
+ pPage->nCell--;
+ put2byte(&data[pPage->hdrOffset+3], pPage->nCell);
+ pPage->nFree += 2;
+ pPage->idxShift = 1;
+}
+
+/*
+** Insert a new cell on pPage at cell index "i". pCell points to the
+** content of the cell.
+**
+** If the cell content will fit on the page, then put it there. If it
+** will not fit, then make a copy of the cell content into pTemp if
+** pTemp is not null. Regardless of pTemp, allocate a new entry
+** in pPage->aOvfl[] and make it point to the cell content (either
+** in pTemp or the original pCell) and also record its index.
+** Allocating a new entry in pPage->aOvfl[] implies that
+** pPage->nOverflow is incremented.
+**
+** If nSkip is non-zero, then do not copy the first nSkip bytes of the
+** cell. The caller will overwrite them after this function returns. If
+** nSkip is non-zero, then pCell itself might point to invalid memory;
+** only pCell+nSkip is guaranteed to be valid.
+*/
+static int insertCell(
+ MemPage *pPage, /* Page into which we are copying */
+ int i, /* New cell becomes the i-th cell of the page */
+ u8 *pCell, /* Content of the new cell */
+ int sz, /* Bytes of content in pCell */
+ u8 *pTemp, /* Temp storage space for pCell, if needed */
+ u8 nSkip /* Do not write the first nSkip bytes of the cell */
+){
+ int idx; /* Where to write new cell content in data[] */
+ int j; /* Loop counter */
+ int top; /* First byte of content for any cell in data[] */
+ int end; /* First byte past the last cell pointer in data[] */
+ int ins; /* Index in data[] where new cell pointer is inserted */
+ int hdr; /* Offset into data[] of the page header */
+ int cellOffset; /* Address of first cell pointer in data[] */
+ u8 *data; /* The content of the whole page */
+ u8 *ptr; /* Used for moving information around in data[] */
+
+ assert( i>=0 && i<=pPage->nCell+pPage->nOverflow );
+ assert( sz==cellSizePtr(pPage, pCell) );
+ assert( sqlite3pager_iswriteable(pPage->aData) );
+ if( pPage->nOverflow || sz+2>pPage->nFree ){
+ if( pTemp ){
+ memcpy(pTemp+nSkip, pCell+nSkip, sz-nSkip);
+ pCell = pTemp;
+ }
+ j = pPage->nOverflow++;
+ assert( j<sizeof(pPage->aOvfl)/sizeof(pPage->aOvfl[0]) );
+ pPage->aOvfl[j].pCell = pCell;
+ pPage->aOvfl[j].idx = i;
+ pPage->nFree = 0;
+ }else{
+ data = pPage->aData;
+ hdr = pPage->hdrOffset;
+ top = get2byte(&data[hdr+5]);
+ cellOffset = pPage->cellOffset;
+ end = cellOffset + 2*pPage->nCell + 2;
+ ins = cellOffset + 2*i;
+ if( end > top - sz ){
+ int rc = defragmentPage(pPage);
+ if( rc!=SQLITE_OK ) return rc;
+ top = get2byte(&data[hdr+5]);
+ assert( end + sz <= top );
+ }
+ idx = allocateSpace(pPage, sz);
+ assert( idx>0 );
+ assert( end <= get2byte(&data[hdr+5]) );
+ pPage->nCell++;
+ pPage->nFree -= 2;
+ memcpy(&data[idx+nSkip], pCell+nSkip, sz-nSkip);
+ for(j=end-2, ptr=&data[j]; j>ins; j-=2, ptr-=2){
+ ptr[0] = ptr[-2];
+ ptr[1] = ptr[-1];
+ }
+ put2byte(&data[ins], idx);
+ put2byte(&data[hdr+3], pPage->nCell);
+ pPage->idxShift = 1;
+ pageIntegrity(pPage);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pPage->pBt->autoVacuum ){
+ /* The cell may contain a pointer to an overflow page. If so, write
+ ** the entry for the overflow page into the pointer map.
+ */
+ CellInfo info;
+ parseCellPtr(pPage, pCell, &info);
+ if( (info.nData+(pPage->intKey?0:info.nKey))>info.nLocal ){
+ Pgno pgnoOvfl = get4byte(&pCell[info.iOverflow]);
+ int rc = ptrmapPut(pPage->pBt, pgnoOvfl, PTRMAP_OVERFLOW1, pPage->pgno);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+ }
+#endif
+ }
+
+ return SQLITE_OK;
+}
+
+/*
+** Add a list of cells to a page. The page should be initially empty.
+** The cells are guaranteed to fit on the page.
+*/
+static void assemblePage(
+ MemPage *pPage, /* The page to be assembled */
+ int nCell, /* The number of cells to add to this page */
+ u8 **apCell, /* Pointers to cell bodies */
+ int *aSize /* Sizes of the cells */
+){
+ int i; /* Loop counter */
+ int totalSize; /* Total size of all cells */
+ int hdr; /* Index of page header */
+ int cellptr; /* Address of next cell pointer */
+ int cellbody; /* Address of next cell body */
+ u8 *data; /* Data for the page */
+
+ assert( pPage->nOverflow==0 );
+ totalSize = 0;
+ for(i=0; i<nCell; i++){
+ totalSize += aSize[i];
+ }
+ assert( totalSize+2*nCell<=pPage->nFree );
+ assert( pPage->nCell==0 );
+ cellptr = pPage->cellOffset;
+ data = pPage->aData;
+ hdr = pPage->hdrOffset;
+ put2byte(&data[hdr+3], nCell);
+ if( nCell ){
+ cellbody = allocateSpace(pPage, totalSize);
+ assert( cellbody>0 );
+ assert( pPage->nFree >= 2*nCell );
+ pPage->nFree -= 2*nCell;
+ for(i=0; i<nCell; i++){
+ put2byte(&data[cellptr], cellbody);
+ memcpy(&data[cellbody], apCell[i], aSize[i]);
+ cellptr += 2;
+ cellbody += aSize[i];
+ }
+ assert( cellbody==pPage->pBt->usableSize );
+ }
+ pPage->nCell = nCell;
+}
+
+/*
+** The following parameters determine how many adjacent pages get involved
+** in a balancing operation. NN is the number of neighbors on either side
+** of the page that participate in the balancing operation. NB is the
+** total number of pages that participate, including the target page and
+** NN neighbors on either side.
+**
+** The minimum value of NN is 1 (of course). Increasing NN above 1
+** (to 2 or 3) gives a modest improvement in SELECT and DELETE performance
+** in exchange for a larger degradation in INSERT and UPDATE performance.
+** A value of NN==1 appears to give the best results overall.
+*/
+#define NN 1 /* Number of neighbors on either side of pPage */
+#define NB (NN*2+1) /* Total pages involved in the balance */
+
+/* Forward reference */
+static int balance(MemPage*, int);
+
+#ifndef SQLITE_OMIT_QUICKBALANCE
+/*
+** This version of balance() handles the common special case where
+** a new entry is being inserted on the extreme right-end of the
+** tree, in other words, when the new entry will become the largest
+** entry in the tree.
+**
+** Instead of trying to balance the 3 right-most leaf pages, just add
+** a new page to the right-hand side and put the one new entry in
+** that page. This leaves the right side of the tree somewhat
+** unbalanced. But odds are that we will be inserting new entries
+** at the end soon afterwards so the nearly empty page will quickly
+** fill up. On average.
+**
+** pPage is the leaf page which is the right-most page in the tree.
+** pParent is its parent. pPage must have a single overflow entry
+** which is also the right-most entry on the page.
+*/
+static int balance_quick(MemPage *pPage, MemPage *pParent){
+ int rc;
+ MemPage *pNew;
+ Pgno pgnoNew;
+ u8 *pCell;
+ int szCell;
+ CellInfo info;
+ BtShared *pBt = pPage->pBt;
+ int parentIdx = pParent->nCell; /* pParent new divider cell index */
+ int parentSize; /* Size of new divider cell */
+ u8 parentCell[64]; /* Space for the new divider cell */
+
+ /* Allocate a new page. Insert the overflow cell from pPage
+ ** into it. Then remove the overflow cell from pPage.
+ */
+ rc = allocatePage(pBt, &pNew, &pgnoNew, 0, 0);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ pCell = pPage->aOvfl[0].pCell;
+ szCell = cellSizePtr(pPage, pCell);
+ zeroPage(pNew, pPage->aData[0]);
+ assemblePage(pNew, 1, &pCell, &szCell);
+ pPage->nOverflow = 0;
+
+ /* Set the parent of the newly allocated page to pParent. */
+ pNew->pParent = pParent;
+ sqlite3pager_ref(pParent->aData);
+
+ /* pPage is currently the right-child of pParent. Change this
+ ** so that the right-child is the new page allocated above and
+ ** pPage is the next-to-right child.
+ */
+ assert( pPage->nCell>0 );
+ parseCellPtr(pPage, findCell(pPage, pPage->nCell-1), &info);
+ rc = fillInCell(pParent, parentCell, 0, info.nKey, 0, 0, &parentSize);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ assert( parentSize<64 );
+ rc = insertCell(pParent, parentIdx, parentCell, parentSize, 0, 4);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ put4byte(findOverflowCell(pParent,parentIdx), pPage->pgno);
+ put4byte(&pParent->aData[pParent->hdrOffset+8], pgnoNew);
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ /* If this is an auto-vacuum database, update the pointer map
+ ** with entries for the new page, and any pointer from the
+ ** cell on the page to an overflow page.
+ */
+ if( pBt->autoVacuum ){
+ rc = ptrmapPut(pBt, pgnoNew, PTRMAP_BTREE, pParent->pgno);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ rc = ptrmapPutOvfl(pNew, 0);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ }
+#endif
+
+ /* Release the reference to the new page and balance the parent page,
+ ** in case the divider cell inserted caused it to become overfull.
+ */
+ releasePage(pNew);
+ return balance(pParent, 0);
+}
+#endif /* SQLITE_OMIT_QUICKBALANCE */
+
+/*
+** The ISAUTOVACUUM macro is used within balance_nonroot() to determine
+** if the database supports auto-vacuum or not. Because it is used
+** within an expression that is an argument to another macro
+** (sqliteMallocRaw), it is not possible to use conditional compilation.
+** So, this macro is defined instead.
+*/
+#ifndef SQLITE_OMIT_AUTOVACUUM
+#define ISAUTOVACUUM (pBt->autoVacuum)
+#else
+#define ISAUTOVACUUM 0
+#endif
+
+/*
+** This routine redistributes Cells on pPage and up to NN*2 siblings
+** of pPage so that all pages have about the same amount of free space.
+** Usually NN siblings on either side of pPage are used in the balancing,
+** though more siblings might come from one side if pPage is the first
+** or last child of its parent. If pPage has fewer than 2*NN siblings
+** (something which can only happen if pPage is the root page or a
+** child of root) then all available siblings participate in the balancing.
+**
+** The number of siblings of pPage might be increased or decreased by one or
+** two in an effort to keep pages nearly full but not over full. The root page
+** is special and is allowed to be nearly empty. If pPage is
+** the root page, then the depth of the tree might be increased
+** or decreased by one, as necessary, to keep the root page from being
+** overfull or completely empty.
+**
+** Note that when this routine is called, some of the Cells on pPage
+** might not actually be stored in pPage->aData[]. This can happen
+** if the page is overfull. Part of the job of this routine is to
+** make sure all Cells for pPage once again fit in pPage->aData[].
+**
+** In the course of balancing the siblings of pPage, the parent of pPage
+** might become overfull or underfull. If that happens, then this routine
+** is called recursively on the parent.
+**
+** If this routine fails for any reason, it might leave the database
+** in a corrupted state. So if this routine fails, the database should
+** be rolled back.
+*/
+static int balance_nonroot(MemPage *pPage){
+ MemPage *pParent; /* The parent of pPage */
+ BtShared *pBt; /* The whole database */
+ int nCell = 0; /* Number of cells in apCell[] */
+ int nMaxCells = 0; /* Allocated size of apCell, szCell, aFrom. */
+ int nOld; /* Number of pages in apOld[] */
+ int nNew; /* Number of pages in apNew[] */
+ int nDiv; /* Number of cells in apDiv[] */
+ int i, j, k; /* Loop counters */
+ int idx; /* Index of pPage in pParent->aCell[] */
+ int nxDiv; /* Next divider slot in pParent->aCell[] */
+ int rc; /* The return code */
+ int leafCorrection; /* 4 if pPage is a leaf. 0 if not */
+ int leafData; /* True if pPage is a leaf of a LEAFDATA tree */
+ int usableSpace; /* Bytes in pPage beyond the header */
+ int pageFlags; /* Value of pPage->aData[0] */
+ int subtotal; /* Subtotal of bytes in cells on one page */
+ int iSpace = 0; /* First unused byte of aSpace[] */
+ MemPage *apOld[NB]; /* pPage and up to two siblings */
+ Pgno pgnoOld[NB]; /* Page numbers for each page in apOld[] */
+ MemPage *apCopy[NB]; /* Private copies of apOld[] pages */
+ MemPage *apNew[NB+2]; /* pPage and up to NB siblings after balancing */
+ Pgno pgnoNew[NB+2]; /* Page numbers for each page in apNew[] */
+ u8 *apDiv[NB]; /* Divider cells in pParent */
+ int cntNew[NB+2]; /* Index in aCell[] of cell after i-th page */
+ int szNew[NB+2]; /* Combined size of cells place on i-th page */
+ u8 **apCell = 0; /* All cells being balanced */
+ int *szCell; /* Local size of all cells in apCell[] */
+ u8 *aCopy[NB]; /* Space for holding data of apCopy[] */
+ u8 *aSpace; /* Space to hold copies of divider cells */
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ u8 *aFrom = 0;
+#endif
+
+ /*
+ ** Find the parent page.
+ */
+ assert( pPage->isInit );
+ assert( sqlite3pager_iswriteable(pPage->aData) );
+ pBt = pPage->pBt;
+ pParent = pPage->pParent;
+ assert( pParent );
+ if( SQLITE_OK!=(rc = sqlite3pager_write(pParent->aData)) ){
+ return rc;
+ }
+ TRACE(("BALANCE: begin page %d child of %d\n", pPage->pgno, pParent->pgno));
+
+#ifndef SQLITE_OMIT_QUICKBALANCE
+ /*
+ ** A special case: If a new entry has just been inserted into a
+ ** table (that is, a btree with integer keys and all data at the leaves)
+ ** and the new entry is the right-most entry in the tree (it has the
+ ** largest key) then use the special balance_quick() routine for
+ ** balancing. balance_quick() is much faster and results in a tighter
+ ** packing of data in the common case.
+ */
+ if( pPage->leaf &&
+ pPage->intKey &&
+ pPage->leafData &&
+ pPage->nOverflow==1 &&
+ pPage->aOvfl[0].idx==pPage->nCell &&
+ pPage->pParent->pgno!=1 &&
+ get4byte(&pParent->aData[pParent->hdrOffset+8])==pPage->pgno
+ ){
+ /*
+ ** TODO: Check the siblings to the left of pPage. It may be that
+ ** they are not full and no new page is required.
+ */
+ return balance_quick(pPage, pParent);
+ }
+#endif
+
+ /*
+ ** Find the cell in the parent page whose left child points back
+ ** to pPage. The "idx" variable is the index of that cell. If pPage
+ ** is the rightmost child of pParent then set idx to pParent->nCell
+ */
+ if( pParent->idxShift ){
+ Pgno pgno;
+ pgno = pPage->pgno;
+ assert( pgno==sqlite3pager_pagenumber(pPage->aData) );
+ for(idx=0; idx<pParent->nCell; idx++){
+ if( get4byte(findCell(pParent, idx))==pgno ){
+ break;
+ }
+ }
+ assert( idx<pParent->nCell
+ || get4byte(&pParent->aData[pParent->hdrOffset+8])==pgno );
+ }else{
+ idx = pPage->idxParent;
+ }
+
+ /*
+ ** Initialize variables so that it will be safe to jump
+ ** directly to balance_cleanup at any moment.
+ */
+ nOld = nNew = 0;
+ sqlite3pager_ref(pParent->aData);
+
+ /*
+ ** Find sibling pages to pPage and the cells in pParent that divide
+ ** the siblings. An attempt is made to find NN siblings on either
+ ** side of pPage. More siblings are taken from one side, however, if
+ ** pPage there are fewer than NN siblings on the other side. If pParent
+ ** has NB or fewer children then all children of pParent are taken.
+ */
+ nxDiv = idx - NN;
+ if( nxDiv + NB > pParent->nCell ){
+ nxDiv = pParent->nCell - NB + 1;
+ }
+ if( nxDiv<0 ){
+ nxDiv = 0;
+ }
+ nDiv = 0;
+ for(i=0, k=nxDiv; i<NB; i++, k++){
+ if( k<pParent->nCell ){
+ apDiv[i] = findCell(pParent, k);
+ nDiv++;
+ assert( !pParent->leaf );
+ pgnoOld[i] = get4byte(apDiv[i]);
+ }else if( k==pParent->nCell ){
+ pgnoOld[i] = get4byte(&pParent->aData[pParent->hdrOffset+8]);
+ }else{
+ break;
+ }
+ rc = getAndInitPage(pBt, pgnoOld[i], &apOld[i], pParent);
+ if( rc ) goto balance_cleanup;
+ apOld[i]->idxParent = k;
+ apCopy[i] = 0;
+ assert( i==nOld );
+ nOld++;
+ nMaxCells += 1+apOld[i]->nCell+apOld[i]->nOverflow;
+ }
+
+ /* Make nMaxCells a multiple of 2 in order to preserve 8-byte
+ ** alignment */
+ nMaxCells = (nMaxCells + 1)&~1;
+
+ /*
+ ** Allocate space for memory structures
+ */
+ apCell = sqliteMallocRaw(
+ nMaxCells*sizeof(u8*) /* apCell */
+ + nMaxCells*sizeof(int) /* szCell */
+ + ROUND8(sizeof(MemPage))*NB /* aCopy */
+ + pBt->pageSize*(5+NB) /* aSpace */
+ + (ISAUTOVACUUM ? nMaxCells : 0) /* aFrom */
+ );
+ if( apCell==0 ){
+ rc = SQLITE_NOMEM;
+ goto balance_cleanup;
+ }
+ szCell = (int*)&apCell[nMaxCells];
+ aCopy[0] = (u8*)&szCell[nMaxCells];
+ assert( ((aCopy[0] - (u8*)apCell) & 7)==0 ); /* 8-byte alignment required */
+ for(i=1; i<NB; i++){
+ aCopy[i] = &aCopy[i-1][pBt->pageSize+ROUND8(sizeof(MemPage))];
+ assert( ((aCopy[i] - (u8*)apCell) & 7)==0 ); /* 8-byte alignment required */
+ }
+ aSpace = &aCopy[NB-1][pBt->pageSize+ROUND8(sizeof(MemPage))];
+ assert( ((aSpace - (u8*)apCell) & 7)==0 ); /* 8-byte alignment required */
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ aFrom = &aSpace[5*pBt->pageSize];
+ }
+#endif
+
+ /*
+ ** Make copies of the content of pPage and its siblings into aCopy[].
+ ** The rest of this function will use data from the copies rather
+ ** than the original pages since the original pages will be in the
+ ** process of being overwritten.
+ */
+ for(i=0; i<nOld; i++){
+ MemPage *p = apCopy[i] = (MemPage*)&aCopy[i][pBt->pageSize];
+ p->aData = &((u8*)p)[-pBt->pageSize];
+ memcpy(p->aData, apOld[i]->aData, pBt->pageSize + sizeof(MemPage));
+ /* The memcpy() above changes the value of p->aData so we have to
+ ** set it again. */
+ p->aData = &((u8*)p)[-pBt->pageSize];
+ }
+
+ /*
+ ** Load pointers to all cells on sibling pages and the divider cells
+ ** into the local apCell[] array. Make copies of the divider cells
+ ** into space obtained from aSpace[] and remove the divider cells
+ ** from pParent.
+ **
+ ** If the siblings are on leaf pages, then the child pointers of the
+ ** divider cells are stripped from the cells before they are copied
+ ** into aSpace[]. In this way, all cells in apCell[] are without
+ ** child pointers. If siblings are not leaves, then all cells in
+ ** apCell[] include child pointers. Either way, all cells in apCell[]
+ ** are alike.
+ **
+ ** leafCorrection: 4 if pPage is a leaf. 0 if pPage is not a leaf.
+ ** leafData: 1 if pPage holds key+data and pParent holds only keys.
+ */
+ nCell = 0;
+ leafCorrection = pPage->leaf*4;
+ leafData = pPage->leafData && pPage->leaf;
+ for(i=0; i<nOld; i++){
+ MemPage *pOld = apCopy[i];
+ int limit = pOld->nCell+pOld->nOverflow;
+ for(j=0; j<limit; j++){
+ assert( nCell<nMaxCells );
+ apCell[nCell] = findOverflowCell(pOld, j);
+ szCell[nCell] = cellSizePtr(pOld, apCell[nCell]);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ int a;
+ aFrom[nCell] = i;
+ for(a=0; a<pOld->nOverflow; a++){
+ if( pOld->aOvfl[a].pCell==apCell[nCell] ){
+ aFrom[nCell] = 0xFF;
+ break;
+ }
+ }
+ }
+#endif
+ nCell++;
+ }
+ if( i<nOld-1 ){
+ int sz = cellSizePtr(pParent, apDiv[i]);
+ if( leafData ){
+ /* With the LEAFDATA flag, pParent cells hold only INTKEYs that
+ ** are duplicates of keys on the child pages. We need to remove
+ ** the divider cells from pParent, but the divider cells are not
+ ** added to apCell[] because they are duplicates of child cells.
+ */
+ dropCell(pParent, nxDiv, sz);
+ }else{
+ u8 *pTemp;
+ assert( nCell<nMaxCells );
+ szCell[nCell] = sz;
+ pTemp = &aSpace[iSpace];
+ iSpace += sz;
+ assert( iSpace<=pBt->pageSize*5 );
+ memcpy(pTemp, apDiv[i], sz);
+ apCell[nCell] = pTemp+leafCorrection;
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ aFrom[nCell] = 0xFF;
+ }
+#endif
+ dropCell(pParent, nxDiv, sz);
+ szCell[nCell] -= leafCorrection;
+ assert( get4byte(pTemp)==pgnoOld[i] );
+ if( !pOld->leaf ){
+ assert( leafCorrection==0 );
+ /* The right pointer of the child page pOld becomes the left
+ ** pointer of the divider cell */
+ memcpy(apCell[nCell], &pOld->aData[pOld->hdrOffset+8], 4);
+ }else{
+ assert( leafCorrection==4 );
+ }
+ nCell++;
+ }
+ }
+ }
+
+ /*
+ ** Figure out the number of pages needed to hold all nCell cells.
+ ** Store this number in "k". Also compute szNew[] which is the total
+ ** size of all cells on the i-th page and cntNew[] which is the index
+ ** in apCell[] of the cell that divides page i from page i+1.
+ ** cntNew[k] should equal nCell.
+ **
+ ** Values computed by this block:
+ **
+ ** k: The total number of sibling pages
+ ** szNew[i]: Space used on the i-th sibling page.
+ ** cntNew[i]: Index in apCell[] and szCell[] for the first cell to
+ ** the right of the i-th sibling page.
+ ** usableSpace: Number of bytes of space available on each sibling.
+ **
+ */
+ usableSpace = pBt->usableSize - 12 + leafCorrection;
+ for(subtotal=k=i=0; i<nCell; i++){
+ assert( i<nMaxCells );
+ subtotal += szCell[i] + 2;
+ if( subtotal > usableSpace ){
+ szNew[k] = subtotal - szCell[i];
+ cntNew[k] = i;
+ if( leafData ){ i--; }
+ subtotal = 0;
+ k++;
+ }
+ }
+ szNew[k] = subtotal;
+ cntNew[k] = nCell;
+ k++;
+
+ /*
+ ** The packing computed by the previous block is biased toward the siblings
+ ** on the left side. The left siblings are always nearly full, while the
+ ** right-most sibling might be nearly empty. This block of code attempts
+ ** to adjust the packing of siblings to get a better balance.
+ **
+ ** This adjustment is more than an optimization. The packing above might
+ ** be so out of balance as to be illegal. For example, the right-most
+ ** sibling might be completely empty. This adjustment is not optional.
+ */
+ for(i=k-1; i>0; i--){
+ int szRight = szNew[i]; /* Size of sibling on the right */
+ int szLeft = szNew[i-1]; /* Size of sibling on the left */
+ int r; /* Index of right-most cell in left sibling */
+ int d; /* Index of first cell to the left of right sibling */
+
+ r = cntNew[i-1] - 1;
+ d = r + 1 - leafData;
+ assert( d<nMaxCells );
+ assert( r<nMaxCells );
+ while( szRight==0 || szRight+szCell[d]+2<=szLeft-(szCell[r]+2) ){
+ szRight += szCell[d] + 2;
+ szLeft -= szCell[r] + 2;
+ cntNew[i-1]--;
+ r = cntNew[i-1] - 1;
+ d = r + 1 - leafData;
+ }
+ szNew[i] = szRight;
+ szNew[i-1] = szLeft;
+ }
+
+ /* Either we found one or more cells (cntNew[0]>0) or pPage is
+ ** a virtual root page. A virtual root page occurs when the real root
+ ** page is page 1 and pPage is its only child.
+ */
+ assert( cntNew[0]>0 || (pParent->pgno==1 && pParent->nCell==0) );
+
+ /*
+ ** Allocate k new pages. Reuse old pages where possible.
+ */
+ assert( pPage->pgno>1 );
+ pageFlags = pPage->aData[0];
+ for(i=0; i<k; i++){
+ MemPage *pNew;
+ if( i<nOld ){
+ pNew = apNew[i] = apOld[i];
+ pgnoNew[i] = pgnoOld[i];
+ apOld[i] = 0;
+ rc = sqlite3pager_write(pNew->aData);
+ if( rc ) goto balance_cleanup;
+ }else{
+ assert( i>0 );
+ rc = allocatePage(pBt, &pNew, &pgnoNew[i], pgnoNew[i-1], 0);
+ if( rc ) goto balance_cleanup;
+ apNew[i] = pNew;
+ }
+ nNew++;
+ zeroPage(pNew, pageFlags);
+ }
+
+ /* Free any old pages that were not reused as new pages.
+ */
+ while( i<nOld ){
+ rc = freePage(apOld[i]);
+ if( rc ) goto balance_cleanup;
+ releasePage(apOld[i]);
+ apOld[i] = 0;
+ i++;
+ }
+
+ /*
+ ** Put the new pages in ascending order. This helps to
+ ** keep entries in the disk file in order so that a scan
+ ** of the table is a linear scan through the file. That
+ ** in turn helps the operating system to deliver pages
+ ** from the disk more rapidly.
+ **
+ ** An O(n^2) insertion sort algorithm is used, but since
+ ** n is never more than NB (a small constant), that should
+ ** not be a problem.
+ **
+ ** When NB==3, this one optimization makes the database
+ ** about 25% faster for large insertions and deletions.
+ */
+ for(i=0; i<k-1; i++){
+ int minV = pgnoNew[i];
+ int minI = i;
+ for(j=i+1; j<k; j++){
+ if( pgnoNew[j]<(unsigned)minV ){
+ minI = j;
+ minV = pgnoNew[j];
+ }
+ }
+ if( minI>i ){
+ int t;
+ MemPage *pT;
+ t = pgnoNew[i];
+ pT = apNew[i];
+ pgnoNew[i] = pgnoNew[minI];
+ apNew[i] = apNew[minI];
+ pgnoNew[minI] = t;
+ apNew[minI] = pT;
+ }
+ }
+ TRACE(("BALANCE: old: %d %d %d new: %d(%d) %d(%d) %d(%d) %d(%d) %d(%d)\n",
+ pgnoOld[0],
+ nOld>=2 ? pgnoOld[1] : 0,
+ nOld>=3 ? pgnoOld[2] : 0,
+ pgnoNew[0], szNew[0],
+ nNew>=2 ? pgnoNew[1] : 0, nNew>=2 ? szNew[1] : 0,
+ nNew>=3 ? pgnoNew[2] : 0, nNew>=3 ? szNew[2] : 0,
+ nNew>=4 ? pgnoNew[3] : 0, nNew>=4 ? szNew[3] : 0,
+ nNew>=5 ? pgnoNew[4] : 0, nNew>=5 ? szNew[4] : 0));
+
+ /*
+ ** Evenly distribute the data in apCell[] across the new pages.
+ ** Insert divider cells into pParent as necessary.
+ */
+ j = 0;
+ for(i=0; i<nNew; i++){
+ /* Assemble the new sibling page. */
+ MemPage *pNew = apNew[i];
+ assert( j<nMaxCells );
+ assert( pNew->pgno==pgnoNew[i] );
+ assemblePage(pNew, cntNew[i]-j, &apCell[j], &szCell[j]);
+ assert( pNew->nCell>0 || (nNew==1 && cntNew[0]==0) );
+ assert( pNew->nOverflow==0 );
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ /* If this is an auto-vacuum database, update the pointer map entries
+ ** that point to the siblings that were rearranged. These can be: left
+ ** children of cells, the right-child of the page, or overflow pages
+ ** pointed to by cells.
+ */
+ if( pBt->autoVacuum ){
+ for(k=j; k<cntNew[i]; k++){
+ assert( k<nMaxCells );
+ if( aFrom[k]==0xFF || apCopy[aFrom[k]]->pgno!=pNew->pgno ){
+ rc = ptrmapPutOvfl(pNew, k-j);
+ if( rc!=SQLITE_OK ){
+ goto balance_cleanup;
+ }
+ }
+ }
+ }
+#endif
+
+ j = cntNew[i];
+
+ /* If the sibling page assembled above was not the right-most sibling,
+ ** insert a divider cell into the parent page.
+ */
+ if( i<nNew-1 && j<nCell ){
+ u8 *pCell;
+ u8 *pTemp;
+ int sz;
+
+ assert( j<nMaxCells );
+ pCell = apCell[j];
+ sz = szCell[j] + leafCorrection;
+ if( !pNew->leaf ){
+ memcpy(&pNew->aData[8], pCell, 4);
+ pTemp = 0;
+ }else if( leafData ){
+ /* If the tree is a leaf-data tree, and the siblings are leaves,
+ ** then there is no divider cell in apCell[]. Instead, the divider
+ ** cell consists of the integer key for the right-most cell of
+ ** the sibling-page assembled above only.
+ */
+ CellInfo info;
+ j--;
+ parseCellPtr(pNew, apCell[j], &info);
+ pCell = &aSpace[iSpace];
+ fillInCell(pParent, pCell, 0, info.nKey, 0, 0, &sz);
+ iSpace += sz;
+ assert( iSpace<=pBt->pageSize*5 );
+ pTemp = 0;
+ }else{
+ pCell -= 4;
+ pTemp = &aSpace[iSpace];
+ iSpace += sz;
+ assert( iSpace<=pBt->pageSize*5 );
+ }
+ rc = insertCell(pParent, nxDiv, pCell, sz, pTemp, 4);
+ if( rc!=SQLITE_OK ) goto balance_cleanup;
+ put4byte(findOverflowCell(pParent,nxDiv), pNew->pgno);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ /* If this is an auto-vacuum database, and not a leaf-data tree,
+ ** then update the pointer map with an entry for the overflow page
+ ** that the cell just inserted points to (if any).
+ */
+ if( pBt->autoVacuum && !leafData ){
+ rc = ptrmapPutOvfl(pParent, nxDiv);
+ if( rc!=SQLITE_OK ){
+ goto balance_cleanup;
+ }
+ }
+#endif
+ j++;
+ nxDiv++;
+ }
+ }
+ assert( j==nCell );
+ assert( nOld>0 );
+ assert( nNew>0 );
+ if( (pageFlags & PTF_LEAF)==0 ){
+ memcpy(&apNew[nNew-1]->aData[8], &apCopy[nOld-1]->aData[8], 4);
+ }
+ if( nxDiv==pParent->nCell+pParent->nOverflow ){
+ /* Right-most sibling is the right-most child of pParent */
+ put4byte(&pParent->aData[pParent->hdrOffset+8], pgnoNew[nNew-1]);
+ }else{
+ /* Right-most sibling is the left child of the first entry in pParent
+ ** past the right-most divider entry */
+ put4byte(findOverflowCell(pParent, nxDiv), pgnoNew[nNew-1]);
+ }
+
+ /*
+ ** Reparent children of all cells.
+ */
+ for(i=0; i<nNew; i++){
+ rc = reparentChildPages(apNew[i]);
+ if( rc!=SQLITE_OK ) goto balance_cleanup;
+ }
+ rc = reparentChildPages(pParent);
+ if( rc!=SQLITE_OK ) goto balance_cleanup;
+
+ /*
+ ** Balance the parent page. Note that the current page (pPage) might
+ ** have been added to the freelist so it might no longer be initialized.
+ ** But the parent page will always be initialized.
+ */
+ assert( pParent->isInit );
+ /* assert( pPage->isInit ); // No! pPage might have been added to freelist */
+ /* pageIntegrity(pPage); // No! pPage might have been added to freelist */
+ rc = balance(pParent, 0);
+
+ /*
+ ** Cleanup before returning.
+ */
+balance_cleanup:
+ sqliteFree(apCell);
+ for(i=0; i<nOld; i++){
+ releasePage(apOld[i]);
+ }
+ for(i=0; i<nNew; i++){
+ releasePage(apNew[i]);
+ }
+ releasePage(pParent);
+ TRACE(("BALANCE: finished with %d: old=%d new=%d cells=%d\n",
+ pPage->pgno, nOld, nNew, nCell));
+ return rc;
+}
+
+/*
+** This routine is called for the root page of a btree when the root
+** page contains no cells. This is an opportunity to make the tree
+** shallower by one level.
+*/
+static int balance_shallower(MemPage *pPage){
+ MemPage *pChild; /* The only child page of pPage */
+ Pgno pgnoChild; /* Page number for pChild */
+ int rc = SQLITE_OK; /* Return code from subprocedures */
+ BtShared *pBt; /* The main BTree structure */
+ int mxCellPerPage; /* Maximum number of cells per page */
+ u8 **apCell; /* All cells from pages being balanced */
+ int *szCell; /* Local size of all cells */
+
+ assert( pPage->pParent==0 );
+ assert( pPage->nCell==0 );
+ pBt = pPage->pBt;
+ mxCellPerPage = MX_CELL(pBt);
+ apCell = sqliteMallocRaw( mxCellPerPage*(sizeof(u8*)+sizeof(int)) );
+ if( apCell==0 ) return SQLITE_NOMEM;
+ szCell = (int*)&apCell[mxCellPerPage];
+ if( pPage->leaf ){
+ /* The table is completely empty */
+ TRACE(("BALANCE: empty table %d\n", pPage->pgno));
+ }else{
+ /* The root page is empty but has one child. Transfer the
+ ** information from that one child into the root page if it
+ ** will fit. This reduces the depth of the tree by one.
+ **
+ ** If the root page is page 1, it has less space available than
+ ** its child (due to the 100 byte header that occurs at the beginning
+ ** of the database file), so it might not be able to hold all of the
+ ** information currently contained in the child. If this is the
+ ** case, then do not do the transfer. Leave page 1 empty except
+ ** for the right-pointer to the child page. The child page becomes
+ ** the virtual root of the tree.
+ */
+ pgnoChild = get4byte(&pPage->aData[pPage->hdrOffset+8]);
+ assert( pgnoChild>0 );
+ assert( pgnoChild<=sqlite3pager_pagecount(pPage->pBt->pPager) );
+ rc = getPage(pPage->pBt, pgnoChild, &pChild);
+ if( rc ) goto end_shallow_balance;
+ if( pPage->pgno==1 ){
+ rc = initPage(pChild, pPage);
+ if( rc ) goto end_shallow_balance;
+ assert( pChild->nOverflow==0 );
+ if( pChild->nFree>=100 ){
+ /* The child information will fit on the root page, so do the
+ ** copy */
+ int i;
+ zeroPage(pPage, pChild->aData[0]);
+ for(i=0; i<pChild->nCell; i++){
+ apCell[i] = findCell(pChild,i);
+ szCell[i] = cellSizePtr(pChild, apCell[i]);
+ }
+ assemblePage(pPage, pChild->nCell, apCell, szCell);
+ /* Copy the right-pointer of the child to the parent. */
+ put4byte(&pPage->aData[pPage->hdrOffset+8],
+ get4byte(&pChild->aData[pChild->hdrOffset+8]));
+ freePage(pChild);
+ TRACE(("BALANCE: child %d transfer to page 1\n", pChild->pgno));
+ }else{
+ /* The child has more information than will fit on the root.
+ ** The tree is already balanced. Do nothing. */
+ TRACE(("BALANCE: child %d will not fit on page 1\n", pChild->pgno));
+ }
+ }else{
+ memcpy(pPage->aData, pChild->aData, pPage->pBt->usableSize);
+ pPage->isInit = 0;
+ pPage->pParent = 0;
+ rc = initPage(pPage, 0);
+ assert( rc==SQLITE_OK );
+ freePage(pChild);
+ TRACE(("BALANCE: transfer child %d into root %d\n",
+ pChild->pgno, pPage->pgno));
+ }
+ rc = reparentChildPages(pPage);
+ assert( pPage->nOverflow==0 );
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ int i;
+ for(i=0; i<pPage->nCell; i++){
+ rc = ptrmapPutOvfl(pPage, i);
+ if( rc!=SQLITE_OK ){
+ goto end_shallow_balance;
+ }
+ }
+ }
+#endif
+ if( rc!=SQLITE_OK ) goto end_shallow_balance;
+ releasePage(pChild);
+ }
+end_shallow_balance:
+ sqliteFree(apCell);
+ return rc;
+}
+
+
+/*
+** The root page is overfull
+**
+** When this happens, create a new child page and copy the
+** contents of the root into the child. Then make the root
+** page an empty page with rightChild pointing to the new
+** child. Finally, call balance_nonroot() on the new child
+** to cause it to split.
+*/
+static int balance_deeper(MemPage *pPage){
+ int rc; /* Return value from subprocedures */
+ MemPage *pChild; /* Pointer to a new child page */
+ Pgno pgnoChild; /* Page number of the new child page */
+ BtShared *pBt; /* The BTree */
+ int usableSize; /* Total usable size of a page */
+ u8 *data; /* Content of the parent page */
+ u8 *cdata; /* Content of the child page */
+ int hdr; /* Offset to page header in parent */
+ int brk; /* Offset to content of first cell in parent */
+
+ assert( pPage->pParent==0 );
+ assert( pPage->nOverflow>0 );
+ pBt = pPage->pBt;
+ rc = allocatePage(pBt, &pChild, &pgnoChild, pPage->pgno, 0);
+ if( rc ) return rc;
+ assert( sqlite3pager_iswriteable(pChild->aData) );
+ usableSize = pBt->usableSize;
+ data = pPage->aData;
+ hdr = pPage->hdrOffset;
+ brk = get2byte(&data[hdr+5]);
+ cdata = pChild->aData;
+ memcpy(cdata, &data[hdr], pPage->cellOffset+2*pPage->nCell-hdr);
+ memcpy(&cdata[brk], &data[brk], usableSize-brk);
+ assert( pChild->isInit==0 );
+ rc = initPage(pChild, pPage);
+ if( rc ) goto balancedeeper_out;
+ memcpy(pChild->aOvfl, pPage->aOvfl, pPage->nOverflow*sizeof(pPage->aOvfl[0]));
+ pChild->nOverflow = pPage->nOverflow;
+ if( pChild->nOverflow ){
+ pChild->nFree = 0;
+ }
+ assert( pChild->nCell==pPage->nCell );
+ zeroPage(pPage, pChild->aData[0] & ~PTF_LEAF);
+ put4byte(&pPage->aData[pPage->hdrOffset+8], pgnoChild);
+ TRACE(("BALANCE: copy root %d into %d\n", pPage->pgno, pChild->pgno));
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ int i;
+ rc = ptrmapPut(pBt, pChild->pgno, PTRMAP_BTREE, pPage->pgno);
+ if( rc ) goto balancedeeper_out;
+ for(i=0; i<pChild->nCell; i++){
+ rc = ptrmapPutOvfl(pChild, i);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ }
+ }
+#endif
+ rc = balance_nonroot(pChild);
+
+balancedeeper_out:
+ releasePage(pChild);
+ return rc;
+}
+
+/*
+** Decide if the page pPage needs to be balanced. If balancing is
+** required, call the appropriate balancing routine.
+*/
+static int balance(MemPage *pPage, int insert){
+ int rc = SQLITE_OK;
+ if( pPage->pParent==0 ){
+ if( pPage->nOverflow>0 ){
+ rc = balance_deeper(pPage);
+ }
+ if( rc==SQLITE_OK && pPage->nCell==0 ){
+ rc = balance_shallower(pPage);
+ }
+ }else{
+ if( pPage->nOverflow>0 ||
+ (!insert && pPage->nFree>pPage->pBt->usableSize*2/3) ){
+ rc = balance_nonroot(pPage);
+ }
+ }
+ return rc;
+}
+
+/*
+** This routine checks all cursors that point to table pgnoRoot.
+** If any of those cursors were opened with wrFlag==0 in a different
+** database connection (a database connection that shares the pager
+** cache with the current connection) and that other connection
+** is not in the ReadUncommitted state, then this routine returns
+** SQLITE_LOCKED.
+**
+** In addition to checking for read-locks (where a read-lock
+** means a cursor opened with wrFlag==0) this routine also moves
+** all write cursors so that they are pointing to the
+** first Cell on the root page. This is necessary because an insert
+** or delete might change the number of cells on a page or delete
+** a page entirely and we do not want to leave any cursors
+** pointing to non-existent pages or cells.
+*/
+static int checkReadLocks(Btree *pBtree, Pgno pgnoRoot, BtCursor *pExclude){
+ BtCursor *p;
+ BtShared *pBt = pBtree->pBt;
+ sqlite3 *db = pBtree->pSqlite;
+ for(p=pBt->pCursor; p; p=p->pNext){
+ if( p==pExclude ) continue;
+ if( p->eState!=CURSOR_VALID ) continue;
+ if( p->pgnoRoot!=pgnoRoot ) continue;
+ if( p->wrFlag==0 ){
+ sqlite3 *dbOther = p->pBtree->pSqlite;
+ if( dbOther==0 ||
+ (dbOther!=db && (dbOther->flags & SQLITE_ReadUncommitted)==0) ){
+ return SQLITE_LOCKED;
+ }
+ }else if( p->pPage->pgno!=p->pgnoRoot ){
+ moveToRoot(p);
+ }
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Insert a new record into the BTree. The key is given by (pKey,nKey)
+** and the data is given by (pData,nData). The cursor is used only to
+** define what table the record should be inserted into. The cursor
+** is left pointing at a random location.
+**
+** For an INTKEY table, only the nKey value of the key is used. pKey is
+** ignored. For a ZERODATA table, the pData and nData are both ignored.
+*/
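+/* Editorial sketch (not part of the original source): a hypothetical
+** insert into an INTKEY table, assuming a write transaction is open
+** and pCur was opened with wrFlag==1:
+**
+**    i64 iRow = 5;
+**    rc = sqlite3BtreeInsert(pCur, 0, iRow, aRecord, nRecord);
+**
+** aRecord and nRecord are illustrative names for the serialized row
+** image; pKey is passed as 0 because INTKEY tables ignore it.
+*/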
+int sqlite3BtreeInsert(
+ BtCursor *pCur, /* Insert data into the table of this cursor */
+ const void *pKey, i64 nKey, /* The key of the new record */
+ const void *pData, int nData /* The data of the new record */
+){
+ int rc;
+ int loc;
+ int szNew;
+ MemPage *pPage;
+ BtShared *pBt = pCur->pBtree->pBt;
+ unsigned char *oldCell;
+ unsigned char *newCell = 0;
+
+ if( pBt->inTransaction!=TRANS_WRITE ){
+ /* Must start a transaction before doing an insert */
+ return pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR;
+ }
+ assert( !pBt->readOnly );
+ if( !pCur->wrFlag ){
+ return SQLITE_PERM; /* Cursor not open for writing */
+ }
+ if( checkReadLocks(pCur->pBtree, pCur->pgnoRoot, pCur) ){
+ return SQLITE_LOCKED; /* The table pCur points to has a read lock */
+ }
+
+ /* Save the positions of any other cursors open on this table */
+ restoreOrClearCursorPosition(pCur, 0);
+ if(
+ SQLITE_OK!=(rc = saveAllCursors(pBt, pCur->pgnoRoot, pCur)) ||
+ SQLITE_OK!=(rc = sqlite3BtreeMoveto(pCur, pKey, nKey, &loc))
+ ){
+ return rc;
+ }
+
+ pPage = pCur->pPage;
+ assert( pPage->intKey || nKey>=0 );
+ assert( pPage->leaf || !pPage->leafData );
+ TRACE(("INSERT: table=%d nkey=%lld ndata=%d page=%d %s\n",
+ pCur->pgnoRoot, nKey, nData, pPage->pgno,
+ loc==0 ? "overwrite" : "new entry"));
+ assert( pPage->isInit );
+ rc = sqlite3pager_write(pPage->aData);
+ if( rc ) return rc;
+ newCell = sqliteMallocRaw( MX_CELL_SIZE(pBt) );
+ if( newCell==0 ) return SQLITE_NOMEM;
+ rc = fillInCell(pPage, newCell, pKey, nKey, pData, nData, &szNew);
+ if( rc ) goto end_insert;
+ assert( szNew==cellSizePtr(pPage, newCell) );
+ assert( szNew<=MX_CELL_SIZE(pBt) );
+ if( loc==0 && CURSOR_VALID==pCur->eState ){
+ int szOld;
+ assert( pCur->idx>=0 && pCur->idx<pPage->nCell );
+ oldCell = findCell(pPage, pCur->idx);
+ if( !pPage->leaf ){
+ memcpy(newCell, oldCell, 4);
+ }
+ szOld = cellSizePtr(pPage, oldCell);
+ rc = clearCell(pPage, oldCell);
+ if( rc ) goto end_insert;
+ dropCell(pPage, pCur->idx, szOld);
+ }else if( loc<0 && pPage->nCell>0 ){
+ assert( pPage->leaf );
+ pCur->idx++;
+ pCur->info.nSize = 0;
+ }else{
+ assert( pPage->leaf );
+ }
+ rc = insertCell(pPage, pCur->idx, newCell, szNew, 0, 0);
+ if( rc!=SQLITE_OK ) goto end_insert;
+ rc = balance(pPage, 1);
+ /* sqlite3BtreePageDump(pCur->pBt, pCur->pgnoRoot, 1); */
+ /* fflush(stdout); */
+ if( rc==SQLITE_OK ){
+ moveToRoot(pCur);
+ }
+end_insert:
+ sqliteFree(newCell);
+ return rc;
+}
+
+/*
+** Delete the entry that the cursor is pointing to. The cursor
+** is left pointing at a random location.
+*/
+int sqlite3BtreeDelete(BtCursor *pCur){
+ MemPage *pPage = pCur->pPage;
+ unsigned char *pCell;
+ int rc;
+ Pgno pgnoChild = 0;
+ BtShared *pBt = pCur->pBtree->pBt;
+
+ assert( pPage->isInit );
+ if( pBt->inTransaction!=TRANS_WRITE ){
+ /* Must start a transaction before doing a delete */
+ return pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR;
+ }
+ assert( !pBt->readOnly );
+ if( pCur->idx >= pPage->nCell ){
+ return SQLITE_ERROR; /* The cursor is not pointing to anything */
+ }
+ if( !pCur->wrFlag ){
+ return SQLITE_PERM; /* Did not open this cursor for writing */
+ }
+ if( checkReadLocks(pCur->pBtree, pCur->pgnoRoot, pCur) ){
+ return SQLITE_LOCKED; /* The table pCur points to has a read lock */
+ }
+
+ /* Restore the current cursor position (a no-op if the cursor is not in
+ ** CURSOR_REQUIRESEEK state) and save the positions of any other cursors
+ ** open on the same table. Then call sqlite3pager_write() on the page
+ ** that the entry will be deleted from.
+ */
+ if(
+ (rc = restoreOrClearCursorPosition(pCur, 1))!=0 ||
+ (rc = saveAllCursors(pBt, pCur->pgnoRoot, pCur))!=0 ||
+ (rc = sqlite3pager_write(pPage->aData))!=0
+ ){
+ return rc;
+ }
+
+ /* Locate the cell within its page and leave pCell pointing to the
+ ** data. The clearCell() call frees any overflow pages associated with the
+ ** cell. The cell itself is still intact.
+ */
+ pCell = findCell(pPage, pCur->idx);
+ if( !pPage->leaf ){
+ pgnoChild = get4byte(pCell);
+ }
+ rc = clearCell(pPage, pCell);
+ if( rc ) return rc;
+
+ if( !pPage->leaf ){
+ /*
+ ** The entry we are about to delete is not a leaf so if we do not
+ ** do something we will leave a hole on an internal page.
+ ** We have to fill the hole by moving in a cell from a leaf. The
+ ** next Cell after the one to be deleted is guaranteed to exist and
+ ** to be a leaf so we can use it.
+ */
+ BtCursor leafCur;
+ unsigned char *pNext;
+ int szNext; /* The compiler warning is wrong: szNext is always
+ ** initialized before use. Adding an extra initialization
+ ** to silence the compiler slows down the code. */
+ int notUsed;
+ unsigned char *tempCell = 0;
+ assert( !pPage->leafData );
+ getTempCursor(pCur, &leafCur);
+ rc = sqlite3BtreeNext(&leafCur, &notUsed);
+ if( rc!=SQLITE_OK ){
+ if( rc!=SQLITE_NOMEM ){
+ rc = SQLITE_CORRUPT_BKPT;
+ }
+ }
+ if( rc==SQLITE_OK ){
+ rc = sqlite3pager_write(leafCur.pPage->aData);
+ }
+ if( rc==SQLITE_OK ){
+ TRACE(("DELETE: table=%d delete internal from %d replace from leaf %d\n",
+ pCur->pgnoRoot, pPage->pgno, leafCur.pPage->pgno));
+ dropCell(pPage, pCur->idx, cellSizePtr(pPage, pCell));
+ pNext = findCell(leafCur.pPage, leafCur.idx);
+ szNext = cellSizePtr(leafCur.pPage, pNext);
+ assert( MX_CELL_SIZE(pBt)>=szNext+4 );
+ tempCell = sqliteMallocRaw( MX_CELL_SIZE(pBt) );
+ if( tempCell==0 ){
+ rc = SQLITE_NOMEM;
+ }
+ }
+ if( rc==SQLITE_OK ){
+ rc = insertCell(pPage, pCur->idx, pNext-4, szNext+4, tempCell, 0);
+ }
+ if( rc==SQLITE_OK ){
+ put4byte(findOverflowCell(pPage, pCur->idx), pgnoChild);
+ rc = balance(pPage, 0);
+ }
+ if( rc==SQLITE_OK ){
+ dropCell(leafCur.pPage, leafCur.idx, szNext);
+ rc = balance(leafCur.pPage, 0);
+ }
+ sqliteFree(tempCell);
+ releaseTempCursor(&leafCur);
+ }else{
+ TRACE(("DELETE: table=%d delete from leaf %d\n",
+ pCur->pgnoRoot, pPage->pgno));
+ dropCell(pPage, pCur->idx, cellSizePtr(pPage, pCell));
+ rc = balance(pPage, 0);
+ }
+ if( rc==SQLITE_OK ){
+ moveToRoot(pCur);
+ }
+ return rc;
+}
+
+/*
+** Create a new BTree table. Write into *piTable the page
+** number for the root page of the new table.
+**
+** The type of table is determined by the flags parameter. Only the
+** following values of flags are currently in use. Other values for
+** flags might not work:
+**
+** BTREE_INTKEY|BTREE_LEAFDATA Used for SQL tables with rowid keys
+** BTREE_ZERODATA Used for SQL indices
+*/
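+/* Editorial sketch (not part of the original source): a hypothetical
+** call creating the b-tree behind an ordinary rowid table, assuming a
+** write transaction is open and no cursors are open on the database:
+**
+**    int iRoot;
+**    rc = sqlite3BtreeCreateTable(p, &iRoot, BTREE_INTKEY|BTREE_LEAFDATA);
+**
+** A b-tree for an SQL index would pass BTREE_ZERODATA instead.
+*/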
+int sqlite3BtreeCreateTable(Btree *p, int *piTable, int flags){
+ BtShared *pBt = p->pBt;
+ MemPage *pRoot;
+ Pgno pgnoRoot;
+ int rc;
+ if( pBt->inTransaction!=TRANS_WRITE ){
+ /* Must start a transaction first */
+ return pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR;
+ }
+ assert( !pBt->readOnly );
+
+ /* It is illegal to create a table if any cursors are open on the
+ ** database. This is because in auto-vacuum mode the backend may
+ ** need to move a database page to make room for the new root-page.
+ ** If an open cursor was using the page a problem would occur.
+ */
+ if( pBt->pCursor ){
+ return SQLITE_LOCKED;
+ }
+
+#ifdef SQLITE_OMIT_AUTOVACUUM
+ rc = allocatePage(pBt, &pRoot, &pgnoRoot, 1, 0);
+ if( rc ) return rc;
+#else
+ if( pBt->autoVacuum ){
+ Pgno pgnoMove; /* Move a page here to make room for the root-page */
+ MemPage *pPageMove; /* The page to move to. */
+
+ /* Read the value of meta[3] from the database to determine where the
+ ** root page of the new table should go. meta[3] is the largest root-page
+ ** created so far, so the new root-page is (meta[3]+1).
+ */
+ rc = sqlite3BtreeGetMeta(p, 4, &pgnoRoot);
+ if( rc!=SQLITE_OK ) return rc;
+ pgnoRoot++;
+
+ /* The new root-page may not be allocated on a pointer-map page, or the
+ ** PENDING_BYTE page.
+ */
+ if( pgnoRoot==PTRMAP_PAGENO(pBt, pgnoRoot) ||
+ pgnoRoot==PENDING_BYTE_PAGE(pBt) ){
+ pgnoRoot++;
+ }
+ assert( pgnoRoot>=3 );
+
+ /* Allocate a page. The page that currently resides at pgnoRoot will
+ ** be moved to the allocated page (unless the allocated page happens
+ ** to reside at pgnoRoot).
+ */
+ rc = allocatePage(pBt, &pPageMove, &pgnoMove, pgnoRoot, 1);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ if( pgnoMove!=pgnoRoot ){
+ u8 eType;
+ Pgno iPtrPage;
+
+ releasePage(pPageMove);
+ rc = getPage(pBt, pgnoRoot, &pRoot);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ rc = ptrmapGet(pBt, pgnoRoot, &eType, &iPtrPage);
+ if( rc!=SQLITE_OK || eType==PTRMAP_ROOTPAGE || eType==PTRMAP_FREEPAGE ){
+ releasePage(pRoot);
+ return rc;
+ }
+ assert( eType!=PTRMAP_ROOTPAGE );
+ assert( eType!=PTRMAP_FREEPAGE );
+ rc = sqlite3pager_write(pRoot->aData);
+ if( rc!=SQLITE_OK ){
+ releasePage(pRoot);
+ return rc;
+ }
+ rc = relocatePage(pBt, pRoot, eType, iPtrPage, pgnoMove);
+ releasePage(pRoot);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ rc = getPage(pBt, pgnoRoot, &pRoot);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ rc = sqlite3pager_write(pRoot->aData);
+ if( rc!=SQLITE_OK ){
+ releasePage(pRoot);
+ return rc;
+ }
+ }else{
+ pRoot = pPageMove;
+ }
+
+ /* Update the pointer-map and meta-data with the new root-page number. */
+ rc = ptrmapPut(pBt, pgnoRoot, PTRMAP_ROOTPAGE, 0);
+ if( rc ){
+ releasePage(pRoot);
+ return rc;
+ }
+ rc = sqlite3BtreeUpdateMeta(p, 4, pgnoRoot);
+ if( rc ){
+ releasePage(pRoot);
+ return rc;
+ }
+
+ }else{
+ rc = allocatePage(pBt, &pRoot, &pgnoRoot, 1, 0);
+ if( rc ) return rc;
+ }
+#endif
+ assert( sqlite3pager_iswriteable(pRoot->aData) );
+ zeroPage(pRoot, flags | PTF_LEAF);
+ sqlite3pager_unref(pRoot->aData);
+ *piTable = (int)pgnoRoot;
+ return SQLITE_OK;
+}
+
+/*
+** Erase the given database page and all its children. Return
+** the page to the freelist.
+*/
+static int clearDatabasePage(
+ BtShared *pBt, /* The BTree that contains the table */
+ Pgno pgno, /* Page number to clear */
+ MemPage *pParent, /* Parent page. NULL for the root */
+ int freePageFlag /* Deallocate page if true */
+){
+ MemPage *pPage = 0;
+ int rc;
+ unsigned char *pCell;
+ int i;
+
+ if( pgno>sqlite3pager_pagecount(pBt->pPager) ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+
+ rc = getAndInitPage(pBt, pgno, &pPage, pParent);
+ if( rc ) goto cleardatabasepage_out;
+ rc = sqlite3pager_write(pPage->aData);
+ if( rc ) goto cleardatabasepage_out;
+ for(i=0; i<pPage->nCell; i++){
+ pCell = findCell(pPage, i);
+ if( !pPage->leaf ){
+ rc = clearDatabasePage(pBt, get4byte(pCell), pPage->pParent, 1);
+ if( rc ) goto cleardatabasepage_out;
+ }
+ rc = clearCell(pPage, pCell);
+ if( rc ) goto cleardatabasepage_out;
+ }
+ if( !pPage->leaf ){
+ rc = clearDatabasePage(pBt, get4byte(&pPage->aData[8]), pPage->pParent, 1);
+ if( rc ) goto cleardatabasepage_out;
+ }
+ if( freePageFlag ){
+ rc = freePage(pPage);
+ }else{
+ zeroPage(pPage, pPage->aData[0] | PTF_LEAF);
+ }
+
+cleardatabasepage_out:
+ releasePage(pPage);
+ return rc;
+}
+
+/*
+** Delete all information from a single table in the database. iTable is
+** the page number of the root of the table. After this routine returns,
+** the root page is empty, but still exists.
+**
+** This routine will fail with SQLITE_LOCKED if there are any open
+** read cursors on the table. Open write cursors are moved to the
+** root of the table.
+*/
+int sqlite3BtreeClearTable(Btree *p, int iTable){
+ int rc;
+ BtShared *pBt = p->pBt;
+ if( p->inTrans!=TRANS_WRITE ){
+ return pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR;
+ }
+ rc = checkReadLocks(p, iTable, 0);
+ if( rc ){
+ return rc;
+ }
+
+ /* Save the position of all cursors open on this table */
+ if( SQLITE_OK!=(rc = saveAllCursors(pBt, iTable, 0)) ){
+ return rc;
+ }
+
+ return clearDatabasePage(pBt, (Pgno)iTable, 0, 0);
+}
+
+/*
+** Erase all information in a table and add the root of the table to
+** the freelist. Except, the root of the principal table (the one on
+** page 1) is never added to the freelist.
+**
+** This routine will fail with SQLITE_LOCKED if there are any open
+** cursors on the table.
+**
+** If AUTOVACUUM is enabled and the page at iTable is not the last
+** root page in the database file, then the last root page
+** in the database file is moved into the slot formerly occupied by
+** iTable and the slot formerly occupied by that last root page
+** is added to the freelist instead of iTable. In this way, all
+** root pages are kept at the beginning of the database file, which
+** is necessary for AUTOVACUUM to work correctly. *piMoved is set to the
+** page number that used to be the last root page in the file before
+** the move. If no page gets moved, *piMoved is set to 0.
+** The last root page is recorded in meta[3] and the value of
+** meta[3] is updated by this procedure.
+*/
+int sqlite3BtreeDropTable(Btree *p, int iTable, int *piMoved){
+ int rc;
+ MemPage *pPage = 0;
+ BtShared *pBt = p->pBt;
+
+ if( p->inTrans!=TRANS_WRITE ){
+ return pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR;
+ }
+
+ /* It is illegal to drop a table if any cursors are open on the
+ ** database. This is because in auto-vacuum mode the backend may
+ ** need to move another root-page to fill a gap left by the deleted
+ ** root page. If an open cursor was using this page a problem would
+ ** occur.
+ */
+ if( pBt->pCursor ){
+ return SQLITE_LOCKED;
+ }
+
+ rc = getPage(pBt, (Pgno)iTable, &pPage);
+ if( rc ) return rc;
+ rc = sqlite3BtreeClearTable(p, iTable);
+ if( rc ){
+ releasePage(pPage);
+ return rc;
+ }
+
+ *piMoved = 0;
+
+ if( iTable>1 ){
+#ifdef SQLITE_OMIT_AUTOVACUUM
+ rc = freePage(pPage);
+ releasePage(pPage);
+#else
+ if( pBt->autoVacuum ){
+ Pgno maxRootPgno;
+ rc = sqlite3BtreeGetMeta(p, 4, &maxRootPgno);
+ if( rc!=SQLITE_OK ){
+ releasePage(pPage);
+ return rc;
+ }
+
+ if( iTable==maxRootPgno ){
+ /* If the table being dropped is the table with the largest root-page
+ ** number in the database, put the root page on the free list.
+ */
+ rc = freePage(pPage);
+ releasePage(pPage);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ }else{
+ /* The table being dropped does not have the largest root-page
+ ** number in the database. So move the page that does into the
+ ** gap left by the deleted root-page.
+ */
+ MemPage *pMove;
+ releasePage(pPage);
+ rc = getPage(pBt, maxRootPgno, &pMove);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ rc = relocatePage(pBt, pMove, PTRMAP_ROOTPAGE, 0, iTable);
+ releasePage(pMove);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ rc = getPage(pBt, maxRootPgno, &pMove);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ rc = freePage(pMove);
+ releasePage(pMove);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ *piMoved = maxRootPgno;
+ }
+
+ /* Set the new 'max-root-page' value in the database header. This
+ ** is the old value less one, less one more if that happens to
+ ** be the PENDING_BYTE page, less one again if that is a
+ ** pointer-map page.
+ */
+ maxRootPgno--;
+ if( maxRootPgno==PENDING_BYTE_PAGE(pBt) ){
+ maxRootPgno--;
+ }
+ if( maxRootPgno==PTRMAP_PAGENO(pBt, maxRootPgno) ){
+ maxRootPgno--;
+ }
+ assert( maxRootPgno!=PENDING_BYTE_PAGE(pBt) );
+
+ rc = sqlite3BtreeUpdateMeta(p, 4, maxRootPgno);
+ }else{
+ rc = freePage(pPage);
+ releasePage(pPage);
+ }
+#endif
+ }else{
+ /* If sqlite3BtreeDropTable was called on page 1. */
+ zeroPage(pPage, PTF_INTKEY|PTF_LEAF );
+ releasePage(pPage);
+ }
+ return rc;
+}
+
+
+/*
+** Read the meta-information out of a database file. Meta[0]
+** is the number of free pages currently in the database. Meta[1]
+** through meta[15] are available for use by higher layers. Meta[0]
+** is read-only, the others are read/write.
+**
+** The schema layer numbers meta values differently. At the schema
+** layer (and the SetCookie and ReadCookie opcodes) the number of
+** free pages is not visible. So Cookie[0] is the same as Meta[1].
+*/
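+/* Editorial sketch (not part of the original source): reading the
+** largest root-page number, as the auto-vacuum code in this file does
+** using index 4 (the schema layer sees the same value as cookie 3):
+**
+**    u32 maxRoot;
+**    rc = sqlite3BtreeGetMeta(p, 4, &maxRoot);
+*/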
+int sqlite3BtreeGetMeta(Btree *p, int idx, u32 *pMeta){
+ int rc;
+ unsigned char *pP1;
+ BtShared *pBt = p->pBt;
+
+ /* Reading a meta-data value requires a read-lock on page 1 (and hence
+ ** the sqlite_master table). We grab this lock regardless of whether or
+ ** not the SQLITE_ReadUncommitted flag is set (the table rooted at page
+ ** 1 is treated as a special case by queryTableLock() and lockTable()).
+ */
+ rc = queryTableLock(p, 1, READ_LOCK);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ assert( idx>=0 && idx<=15 );
+ rc = sqlite3pager_get(pBt->pPager, 1, (void**)&pP1);
+ if( rc ) return rc;
+ *pMeta = get4byte(&pP1[36 + idx*4]);
+ sqlite3pager_unref(pP1);
+
+ /* If auto-vacuum is disabled in this build but we are trying to
+ ** access an auto-vacuumed database, then make the database readonly.
+ */
+#ifdef SQLITE_OMIT_AUTOVACUUM
+ if( idx==4 && *pMeta>0 ) pBt->readOnly = 1;
+#endif
+
+ /* Grab the read-lock on page 1. */
+ rc = lockTable(p, 1, READ_LOCK);
+ return rc;
+}
+
+/*
+** Write meta-information back into the database. Meta[0] is
+** read-only and may not be written.
+*/
+int sqlite3BtreeUpdateMeta(Btree *p, int idx, u32 iMeta){
+ BtShared *pBt = p->pBt;
+ unsigned char *pP1;
+ int rc;
+ assert( idx>=1 && idx<=15 );
+ if( p->inTrans!=TRANS_WRITE ){
+ return pBt->readOnly ? SQLITE_READONLY : SQLITE_ERROR;
+ }
+ assert( pBt->pPage1!=0 );
+ pP1 = pBt->pPage1->aData;
+ rc = sqlite3pager_write(pP1);
+ if( rc ) return rc;
+ put4byte(&pP1[36 + idx*4], iMeta);
+ return SQLITE_OK;
+}
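+/* Editorial sketch (not part of the original source): a hypothetical
+** read-modify-write of a meta value inside a write transaction, here
+** bumping the value stored at index 1 (Cookie[0] at the schema layer):
+**
+**    u32 v;
+**    rc = sqlite3BtreeGetMeta(p, 1, &v);
+**    if( rc==SQLITE_OK ) rc = sqlite3BtreeUpdateMeta(p, 1, v+1);
+*/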
+
+/*
+** Return the flag byte at the beginning of the page that the cursor
+** is currently pointing to.
+*/
+int sqlite3BtreeFlags(BtCursor *pCur){
+ /* TODO: What about CURSOR_REQUIRESEEK state? Probably need to call
+ ** restoreOrClearCursorPosition() here.
+ */
+ MemPage *pPage = pCur->pPage;
+ return pPage ? pPage->aData[pPage->hdrOffset] : 0;
+}
+
+#ifdef SQLITE_DEBUG
+/*
+** Print a disassembly of the given page on standard output. This routine
+** is used for debugging and testing only.
+*/
+static int btreePageDump(BtShared *pBt, int pgno, int recursive, MemPage *pParent){
+ int rc;
+ MemPage *pPage;
+ int i, j, c;
+ int nFree;
+ u16 idx;
+ int hdr;
+ int nCell;
+ int isInit;
+ unsigned char *data;
+ char range[20];
+ unsigned char payload[20];
+
+ rc = getPage(pBt, (Pgno)pgno, &pPage);
+ isInit = pPage->isInit;
+ if( pPage->isInit==0 ){
+ initPage(pPage, pParent);
+ }
+ if( rc ){
+ return rc;
+ }
+ hdr = pPage->hdrOffset;
+ data = pPage->aData;
+ c = data[hdr];
+ pPage->intKey = (c & (PTF_INTKEY|PTF_LEAFDATA))!=0;
+ pPage->zeroData = (c & PTF_ZERODATA)!=0;
+ pPage->leafData = (c & PTF_LEAFDATA)!=0;
+ pPage->leaf = (c & PTF_LEAF)!=0;
+ pPage->hasData = !(pPage->zeroData || (!pPage->leaf && pPage->leafData));
+ nCell = get2byte(&data[hdr+3]);
+ sqlite3DebugPrintf("PAGE %d: flags=0x%02x frag=%d parent=%d\n", pgno,
+ data[hdr], data[hdr+7],
+ (pPage->isInit && pPage->pParent) ? pPage->pParent->pgno : 0);
+ assert( hdr == (pgno==1 ? 100 : 0) );
+ idx = hdr + 12 - pPage->leaf*4;
+ for(i=0; i<nCell; i++){
+ CellInfo info;
+ Pgno child;
+ unsigned char *pCell;
+ int sz;
+ int addr;
+
+ addr = get2byte(&data[idx + 2*i]);
+ pCell = &data[addr];
+ parseCellPtr(pPage, pCell, &info);
+ sz = info.nSize;
+ sprintf(range,"%d..%d", addr, addr+sz-1);
+ if( pPage->leaf ){
+ child = 0;
+ }else{
+ child = get4byte(pCell);
+ }
+ sz = info.nData;
+ if( !pPage->intKey ) sz += info.nKey;
+ if( sz>sizeof(payload)-1 ) sz = sizeof(payload)-1;
+ memcpy(payload, &pCell[info.nHeader], sz);
+ for(j=0; j<sz; j++){
+ if( payload[j]<0x20 || payload[j]>0x7f ) payload[j] = '.';
+ }
+ payload[sz] = 0;
+ sqlite3DebugPrintf(
+ "cell %2d: i=%-10s chld=%-4d nk=%-4lld nd=%-4d payload=%s\n",
+ i, range, child, info.nKey, info.nData, payload
+ );
+ }
+ if( !pPage->leaf ){
+ sqlite3DebugPrintf("right_child: %d\n", get4byte(&data[hdr+8]));
+ }
+ nFree = 0;
+ i = 0;
+ idx = get2byte(&data[hdr+1]);
+ while( idx>0 && idx<pPage->pBt->usableSize ){
+ int sz = get2byte(&data[idx+2]);
+ sprintf(range,"%d..%d", idx, idx+sz-1);
+ nFree += sz;
+ sqlite3DebugPrintf("freeblock %2d: i=%-10s size=%-4d total=%d\n",
+ i, range, sz, nFree);
+ idx = get2byte(&data[idx]);
+ i++;
+ }
+ if( idx!=0 ){
+ sqlite3DebugPrintf("ERROR: next freeblock index out of range: %d\n", idx);
+ }
+ if( recursive && !pPage->leaf ){
+ for(i=0; i<nCell; i++){
+ unsigned char *pCell = findCell(pPage, i);
+ btreePageDump(pBt, get4byte(pCell), 1, pPage);
+ idx = get2byte(pCell);
+ }
+ btreePageDump(pBt, get4byte(&data[hdr+8]), 1, pPage);
+ }
+ pPage->isInit = isInit;
+ sqlite3pager_unref(data);
+ fflush(stdout);
+ return SQLITE_OK;
+}
+int sqlite3BtreePageDump(Btree *p, int pgno, int recursive){
+ return btreePageDump(p->pBt, pgno, recursive, 0);
+}
+#endif
+
+#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG)
+/*
+** Fill aResult[] with information about the entry and page that the
+** cursor is pointing to.
+**
+** aResult[0] = The page number
+** aResult[1] = The entry number
+** aResult[2] = Total number of entries on this page
+** aResult[3] = Cell size (local payload + header)
+** aResult[4] = Number of free bytes on this page
+** aResult[5] = Number of free blocks on the page
+** aResult[6] = Total payload size (local + overflow)
+** aResult[7] = Header size in bytes
+** aResult[8] = Local payload size
+** aResult[9] = Parent page number
+**
+** This routine is used for testing and debugging only.
+*/
+int sqlite3BtreeCursorInfo(BtCursor *pCur, int *aResult, int upCnt){
+ int cnt, idx;
+ MemPage *pPage = pCur->pPage;
+ BtCursor tmpCur;
+
+ int rc = restoreOrClearCursorPosition(pCur, 1);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ pageIntegrity(pPage);
+ assert( pPage->isInit );
+ getTempCursor(pCur, &tmpCur);
+ while( upCnt-- ){
+ moveToParent(&tmpCur);
+ }
+ pPage = tmpCur.pPage;
+ pageIntegrity(pPage);
+ aResult[0] = sqlite3pager_pagenumber(pPage->aData);
+ assert( aResult[0]==pPage->pgno );
+ aResult[1] = tmpCur.idx;
+ aResult[2] = pPage->nCell;
+ if( tmpCur.idx>=0 && tmpCur.idx<pPage->nCell ){
+ getCellInfo(&tmpCur);
+ aResult[3] = tmpCur.info.nSize;
+ aResult[6] = tmpCur.info.nData;
+ aResult[7] = tmpCur.info.nHeader;
+ aResult[8] = tmpCur.info.nLocal;
+ }else{
+ aResult[3] = 0;
+ aResult[6] = 0;
+ aResult[7] = 0;
+ aResult[8] = 0;
+ }
+ aResult[4] = pPage->nFree;
+ cnt = 0;
+ idx = get2byte(&pPage->aData[pPage->hdrOffset+1]);
+ while( idx>0 && idx<pPage->pBt->usableSize ){
+ cnt++;
+ idx = get2byte(&pPage->aData[idx]);
+ }
+ aResult[5] = cnt;
+ if( pPage->pParent==0 || isRootPage(pPage) ){
+ aResult[9] = 0;
+ }else{
+ aResult[9] = pPage->pParent->pgno;
+ }
+ releaseTempCursor(&tmpCur);
+ return SQLITE_OK;
+}
+#endif
+
+/*
+** Return the pager associated with a BTree. This routine is used for
+** testing and debugging only.
+*/
+Pager *sqlite3BtreePager(Btree *p){
+ return p->pBt->pPager;
+}
+
+/*
+** This structure is passed around through all the sanity checking routines
+** in order to keep track of some global state information.
+*/
+typedef struct IntegrityCk IntegrityCk;
+struct IntegrityCk {
+ BtShared *pBt; /* The tree being checked out */
+ Pager *pPager; /* The associated pager. Also accessible by pBt->pPager */
+ int nPage; /* Number of pages in the database */
+ int *anRef; /* Number of times each page is referenced */
+ char *zErrMsg; /* An error message. NULL if no errors seen. */
+};
+
+#ifndef SQLITE_OMIT_INTEGRITY_CHECK
+/*
+** Append a message to the error message string.
+*/
+static void checkAppendMsg(
+ IntegrityCk *pCheck,
+ char *zMsg1,
+ const char *zFormat,
+ ...
+){
+ va_list ap;
+ char *zMsg2;
+ va_start(ap, zFormat);
+ zMsg2 = sqlite3VMPrintf(zFormat, ap);
+ va_end(ap);
+ if( zMsg1==0 ) zMsg1 = "";
+ if( pCheck->zErrMsg ){
+ char *zOld = pCheck->zErrMsg;
+ pCheck->zErrMsg = 0;
+ sqlite3SetString(&pCheck->zErrMsg, zOld, "\n", zMsg1, zMsg2, (char*)0);
+ sqliteFree(zOld);
+ }else{
+ sqlite3SetString(&pCheck->zErrMsg, zMsg1, zMsg2, (char*)0);
+ }
+ sqliteFree(zMsg2);
+}
+#endif /* SQLITE_OMIT_INTEGRITY_CHECK */
+
+#ifndef SQLITE_OMIT_INTEGRITY_CHECK
+/*
+** Add 1 to the reference count for page iPage. If this is the second
+** reference to the page, add an error message to pCheck->zErrMsg.
+** Return 1 if there are 2 or more references to the page and 0
+** if this is the first reference to the page.
+**
+** Also check that the page number is in bounds.
+*/
+static int checkRef(IntegrityCk *pCheck, int iPage, char *zContext){
+ if( iPage==0 ) return 1;
+ if( iPage>pCheck->nPage || iPage<0 ){
+ checkAppendMsg(pCheck, zContext, "invalid page number %d", iPage);
+ return 1;
+ }
+ if( pCheck->anRef[iPage]==1 ){
+ checkAppendMsg(pCheck, zContext, "2nd reference to page %d", iPage);
+ return 1;
+ }
+ return (pCheck->anRef[iPage]++)>1;
+}
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+/*
+** Check that the entry in the pointer-map for page iChild maps to
+** page iParent, pointer type ptrType. If not, append an error message
+** to pCheck.
+*/
+static void checkPtrmap(
+ IntegrityCk *pCheck, /* Integrity check context */
+ Pgno iChild, /* Child page number */
+ u8 eType, /* Expected pointer map type */
+ Pgno iParent, /* Expected pointer map parent page number */
+ char *zContext /* Context description (used for error msg) */
+){
+ int rc;
+ u8 ePtrmapType;
+ Pgno iPtrmapParent;
+
+ rc = ptrmapGet(pCheck->pBt, iChild, &ePtrmapType, &iPtrmapParent);
+ if( rc!=SQLITE_OK ){
+ checkAppendMsg(pCheck, zContext, "Failed to read ptrmap key=%d", iChild);
+ return;
+ }
+
+ if( ePtrmapType!=eType || iPtrmapParent!=iParent ){
+ checkAppendMsg(pCheck, zContext,
+ "Bad ptr map entry key=%d expected=(%d,%d) got=(%d,%d)",
+ iChild, eType, iParent, ePtrmapType, iPtrmapParent);
+ }
+}
+#endif
+
+/*
+** Check the integrity of the freelist or of an overflow page list.
+** Verify that the number of pages on the list is N.
+*/
+static void checkList(
+ IntegrityCk *pCheck, /* Integrity checking context */
+ int isFreeList, /* True for a freelist. False for overflow page list */
+ int iPage, /* Page number for first page in the list */
+ int N, /* Expected number of pages in the list */
+ char *zContext /* Context for error messages */
+){
+ int i;
+ int expected = N;
+ int iFirst = iPage;
+ while( N-- > 0 ){
+ unsigned char *pOvfl;
+ if( iPage<1 ){
+ checkAppendMsg(pCheck, zContext,
+ "%d of %d pages missing from overflow list starting at %d",
+ N+1, expected, iFirst);
+ break;
+ }
+ if( checkRef(pCheck, iPage, zContext) ) break;
+ if( sqlite3pager_get(pCheck->pPager, (Pgno)iPage, (void**)&pOvfl) ){
+ checkAppendMsg(pCheck, zContext, "failed to get page %d", iPage);
+ break;
+ }
+ if( isFreeList ){
+ int n = get4byte(&pOvfl[4]);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pCheck->pBt->autoVacuum ){
+ checkPtrmap(pCheck, iPage, PTRMAP_FREEPAGE, 0, zContext);
+ }
+#endif
+ if( n>pCheck->pBt->usableSize/4-8 ){
+ checkAppendMsg(pCheck, zContext,
+ "freelist leaf count too big on page %d", iPage);
+ N--;
+ }else{
+ for(i=0; i<n; i++){
+ Pgno iFreePage = get4byte(&pOvfl[8+i*4]);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pCheck->pBt->autoVacuum ){
+ checkPtrmap(pCheck, iFreePage, PTRMAP_FREEPAGE, 0, zContext);
+ }
+#endif
+ checkRef(pCheck, iFreePage, zContext);
+ }
+ N -= n;
+ }
+ }
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ else{
+ /* If this database supports auto-vacuum and iPage is not the last
+ ** page in this overflow list, check that the pointer-map entry for
+ ** the following page matches iPage.
+ */
+ if( pCheck->pBt->autoVacuum && N>0 ){
+ i = get4byte(pOvfl);
+ checkPtrmap(pCheck, i, PTRMAP_OVERFLOW2, iPage, zContext);
+ }
+ }
+#endif
+ iPage = get4byte(pOvfl);
+ sqlite3pager_unref(pOvfl);
+ }
+}
+#endif /* SQLITE_OMIT_INTEGRITY_CHECK */
+
+#ifndef SQLITE_OMIT_INTEGRITY_CHECK
+/*
+** Do various sanity checks on a single page of a tree. Return
+** the tree depth. Root pages return 0. Parents of root pages
+** return 1, and so forth.
+**
+** These checks are done:
+**
+** 1. Make sure that cells and freeblocks do not overlap
+** but combine to completely cover the page.
+** NO 2. Make sure cell keys are in order.
+** NO 3. Make sure no key is less than or equal to zLowerBound.
+** NO 4. Make sure no key is greater than or equal to zUpperBound.
+** 5. Check the integrity of overflow pages.
+** 6. Recursively call checkTreePage on all children.
+** 7. Verify that the depth of all children is the same.
+** 8. Make sure this page is at least 33% full or else it is
+** the root of the tree.
+*/
+static int checkTreePage(
+ IntegrityCk *pCheck, /* Context for the sanity check */
+ int iPage, /* Page number of the page to check */
+ MemPage *pParent, /* Parent page */
+ char *zParentContext /* Parent context */
+){
+ MemPage *pPage;
+ int i, rc, depth, d2, pgno, cnt;
+ int hdr, cellStart;
+ int nCell;
+ u8 *data;
+ BtShared *pBt;
+ int usableSize;
+ char zContext[100];
+ char *hit;
+
+ sprintf(zContext, "Page %d: ", iPage);
+
+ /* Check that the page exists
+ */
+ pBt = pCheck->pBt;
+ usableSize = pBt->usableSize;
+ if( iPage==0 ) return 0;
+ if( checkRef(pCheck, iPage, zParentContext) ) return 0;
+ if( (rc = getPage(pBt, (Pgno)iPage, &pPage))!=0 ){
+ checkAppendMsg(pCheck, zContext,
+ "unable to get the page. error code=%d", rc);
+ return 0;
+ }
+ if( (rc = initPage(pPage, pParent))!=0 ){
+ checkAppendMsg(pCheck, zContext, "initPage() returns error code %d", rc);
+ releasePage(pPage);
+ return 0;
+ }
+
+ /* Check out all the cells.
+ */
+ depth = 0;
+ for(i=0; i<pPage->nCell; i++){
+ u8 *pCell;
+ int sz;
+ CellInfo info;
+
+ /* Check payload overflow pages
+ */
+ sprintf(zContext, "On tree page %d cell %d: ", iPage, i);
+ pCell = findCell(pPage,i);
+ parseCellPtr(pPage, pCell, &info);
+ sz = info.nData;
+ if( !pPage->intKey ) sz += info.nKey;
+ if( sz>info.nLocal ){
+ int nPage = (sz - info.nLocal + usableSize - 5)/(usableSize - 4);
+ Pgno pgnoOvfl = get4byte(&pCell[info.iOverflow]);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ checkPtrmap(pCheck, pgnoOvfl, PTRMAP_OVERFLOW1, iPage, zContext);
+ }
+#endif
+ checkList(pCheck, 0, pgnoOvfl, nPage, zContext);
+ }
+
+ /* Check sanity of left child page.
+ */
+ if( !pPage->leaf ){
+ pgno = get4byte(pCell);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ checkPtrmap(pCheck, pgno, PTRMAP_BTREE, iPage, zContext);
+ }
+#endif
+ d2 = checkTreePage(pCheck,pgno,pPage,zContext);
+ if( i>0 && d2!=depth ){
+ checkAppendMsg(pCheck, zContext, "Child page depth differs");
+ }
+ depth = d2;
+ }
+ }
+ if( !pPage->leaf ){
+ pgno = get4byte(&pPage->aData[pPage->hdrOffset+8]);
+ sprintf(zContext, "On page %d at right child: ", iPage);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ checkPtrmap(pCheck, pgno, PTRMAP_BTREE, iPage, 0);
+ }
+#endif
+ checkTreePage(pCheck, pgno, pPage, zContext);
+ }
+
+ /* Check for complete coverage of the page
+ */
+ data = pPage->aData;
+ hdr = pPage->hdrOffset;
+ hit = sqliteMalloc( usableSize );
+ if( hit ){
+ memset(hit, 1, get2byte(&data[hdr+5]));
+ nCell = get2byte(&data[hdr+3]);
+ cellStart = hdr + 12 - 4*pPage->leaf;
+ for(i=0; i<nCell; i++){
+ int pc = get2byte(&data[cellStart+i*2]);
+ int size = cellSizePtr(pPage, &data[pc]);
+ int j;
+ if( (pc+size-1)>=usableSize || pc<0 ){
+ checkAppendMsg(pCheck, 0,
+ "Corruption detected in cell %d on page %d",i,iPage,0);
+ }else{
+ for(j=pc+size-1; j>=pc; j--) hit[j]++;
+ }
+ }
+ for(cnt=0, i=get2byte(&data[hdr+1]); i>0 && i<usableSize && cnt<10000;
+ cnt++){
+ int size = get2byte(&data[i+2]);
+ int j;
+ if( (i+size-1)>=usableSize || i<0 ){
+ checkAppendMsg(pCheck, 0,
+ "Corruption detected in cell %d on page %d",i,iPage,0);
+ }else{
+ for(j=i+size-1; j>=i; j--) hit[j]++;
+ }
+ i = get2byte(&data[i]);
+ }
+ for(i=cnt=0; i<usableSize; i++){
+ if( hit[i]==0 ){
+ cnt++;
+ }else if( hit[i]>1 ){
+ checkAppendMsg(pCheck, 0,
+ "Multiple uses for byte %d of page %d", i, iPage);
+ break;
+ }
+ }
+ if( cnt!=data[hdr+7] ){
+ checkAppendMsg(pCheck, 0,
+ "Fragmented space is %d byte reported as %d on page %d",
+ cnt, data[hdr+7], iPage);
+ }
+ }
+ sqliteFree(hit);
+
+ releasePage(pPage);
+ return depth+1;
+}
+#endif /* SQLITE_OMIT_INTEGRITY_CHECK */
+
+#ifndef SQLITE_OMIT_INTEGRITY_CHECK
+/*
+** This routine does a complete check of the given BTree file. aRoot[] is
+** an array of page numbers where each page number is the root page of
+** a table. nRoot is the number of entries in aRoot.
+**
+** If everything checks out, this routine returns NULL. If something is
+** amiss, an error message is written into memory obtained from malloc()
+** and a pointer to that error message is returned. The calling function
+** is responsible for freeing the error message when it is done.
+*/
+char *sqlite3BtreeIntegrityCheck(Btree *p, int *aRoot, int nRoot){
+ int i;
+ int nRef;
+ IntegrityCk sCheck;
+ BtShared *pBt = p->pBt;
+
+ nRef = sqlite3pager_refcount(pBt->pPager);
+ if( lockBtreeWithRetry(p)!=SQLITE_OK ){
+ return sqliteStrDup("Unable to acquire a read lock on the database");
+ }
+ sCheck.pBt = pBt;
+ sCheck.pPager = pBt->pPager;
+ sCheck.nPage = sqlite3pager_pagecount(sCheck.pPager);
+ if( sCheck.nPage==0 ){
+ unlockBtreeIfUnused(pBt);
+ return 0;
+ }
+ sCheck.anRef = sqliteMallocRaw( (sCheck.nPage+1)*sizeof(sCheck.anRef[0]) );
+ if( !sCheck.anRef ){
+ unlockBtreeIfUnused(pBt);
+ return sqlite3MPrintf("Unable to malloc %d bytes",
+ (sCheck.nPage+1)*sizeof(sCheck.anRef[0]));
+ }
+ for(i=0; i<=sCheck.nPage; i++){ sCheck.anRef[i] = 0; }
+ i = PENDING_BYTE_PAGE(pBt);
+ if( i<=sCheck.nPage ){
+ sCheck.anRef[i] = 1;
+ }
+ sCheck.zErrMsg = 0;
+
+ /* Check the integrity of the freelist
+ */
+ checkList(&sCheck, 1, get4byte(&pBt->pPage1->aData[32]),
+ get4byte(&pBt->pPage1->aData[36]), "Main freelist: ");
+
+ /* Check all the tables.
+ */
+ for(i=0; i<nRoot; i++){
+ if( aRoot[i]==0 ) continue;
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum && aRoot[i]>1 ){
+ checkPtrmap(&sCheck, aRoot[i], PTRMAP_ROOTPAGE, 0, 0);
+ }
+#endif
+ checkTreePage(&sCheck, aRoot[i], 0, "List of tree roots: ");
+ }
+
+ /* Make sure every page in the file is referenced
+ */
+ for(i=1; i<=sCheck.nPage; i++){
+#ifdef SQLITE_OMIT_AUTOVACUUM
+ if( sCheck.anRef[i]==0 ){
+ checkAppendMsg(&sCheck, 0, "Page %d is never used", i);
+ }
+#else
+ /* If the database supports auto-vacuum, make sure no tables contain
+ ** references to pointer-map pages.
+ */
+ if( sCheck.anRef[i]==0 &&
+ (PTRMAP_PAGENO(pBt, i)!=i || !pBt->autoVacuum) ){
+ checkAppendMsg(&sCheck, 0, "Page %d is never used", i);
+ }
+ if( sCheck.anRef[i]!=0 &&
+ (PTRMAP_PAGENO(pBt, i)==i && pBt->autoVacuum) ){
+ checkAppendMsg(&sCheck, 0, "Pointer map page %d is referenced", i);
+ }
+#endif
+ }
+
+ /* Make sure this analysis did not leave any unref() pages
+ */
+ unlockBtreeIfUnused(pBt);
+ if( nRef != sqlite3pager_refcount(pBt->pPager) ){
+ checkAppendMsg(&sCheck, 0,
+ "Outstanding page count goes from %d to %d during this analysis",
+ nRef, sqlite3pager_refcount(pBt->pPager)
+ );
+ }
+
+ /* Clean up and report errors.
+ */
+ sqliteFree(sCheck.anRef);
+ return sCheck.zErrMsg;
+}
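+/* Editorial sketch (not part of the original source): a hypothetical
+** invocation over a caller-supplied list of root pages. The message is
+** assumed here to be released with sqliteFree(), matching the
+** allocators used inside this function:
+**
+**    char *zErr = sqlite3BtreeIntegrityCheck(p, aRoot, nRoot);
+**    if( zErr ){
+**      fprintf(stderr, "%s\n", zErr);
+**      sqliteFree(zErr);
+**    }
+*/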
+#endif /* SQLITE_OMIT_INTEGRITY_CHECK */
+
+/*
+** Return the full pathname of the underlying database file.
+*/
+const char *sqlite3BtreeGetFilename(Btree *p){
+ assert( p->pBt->pPager!=0 );
+ return sqlite3pager_filename(p->pBt->pPager);
+}
+
+/*
+** Return the pathname of the directory that contains the database file.
+*/
+const char *sqlite3BtreeGetDirname(Btree *p){
+ assert( p->pBt->pPager!=0 );
+ return sqlite3pager_dirname(p->pBt->pPager);
+}
+
+/*
+** Return the pathname of the journal file for this database. The return
+** value of this routine is the same regardless of whether the journal file
+** has been created or not.
+*/
+const char *sqlite3BtreeGetJournalname(Btree *p){
+ assert( p->pBt->pPager!=0 );
+ return sqlite3pager_journalname(p->pBt->pPager);
+}
+
+#ifndef SQLITE_OMIT_VACUUM
+/*
+** Copy the complete content of pBtFrom into pBtTo. A transaction
+** must be active for both files.
+**
+** The size of file pTo may be reduced by this operation.
+** If anything goes wrong, the transaction on pTo is rolled back.
+*/
+int sqlite3BtreeCopyFile(Btree *pTo, Btree *pFrom){
+ int rc = SQLITE_OK;
+ Pgno i, nPage, nToPage, iSkip;
+
+ BtShared *pBtTo = pTo->pBt;
+ BtShared *pBtFrom = pFrom->pBt;
+
+ if( pTo->inTrans!=TRANS_WRITE || pFrom->inTrans!=TRANS_WRITE ){
+ return SQLITE_ERROR;
+ }
+ if( pBtTo->pCursor ) return SQLITE_BUSY;
+ nToPage = sqlite3pager_pagecount(pBtTo->pPager);
+ nPage = sqlite3pager_pagecount(pBtFrom->pPager);
+ iSkip = PENDING_BYTE_PAGE(pBtTo);
+ for(i=1; rc==SQLITE_OK && i<=nPage; i++){
+ void *pPage;
+ if( i==iSkip ) continue;
+ rc = sqlite3pager_get(pBtFrom->pPager, i, &pPage);
+ if( rc ) break;
+ rc = sqlite3pager_overwrite(pBtTo->pPager, i, pPage);
+ if( rc ) break;
+ sqlite3pager_unref(pPage);
+ }
+ for(i=nPage+1; rc==SQLITE_OK && i<=nToPage; i++){
+ void *pPage;
+ if( i==iSkip ) continue;
+ rc = sqlite3pager_get(pBtTo->pPager, i, &pPage);
+ if( rc ) break;
+ rc = sqlite3pager_write(pPage);
+ sqlite3pager_unref(pPage);
+ sqlite3pager_dont_write(pBtTo->pPager, i);
+ }
+ if( !rc && nPage<nToPage ){
+ rc = sqlite3pager_truncate(pBtTo->pPager, nPage);
+ }
+ if( rc ){
+ sqlite3BtreeRollback(pTo);
+ }
+ return rc;
+}
+#endif /* SQLITE_OMIT_VACUUM */
+
+/*
+** Return non-zero if a transaction is active.
+*/
+int sqlite3BtreeIsInTrans(Btree *p){
+ return (p && (p->inTrans==TRANS_WRITE));
+}
+
+/*
+** Return non-zero if a statement transaction is active.
+*/
+int sqlite3BtreeIsInStmt(Btree *p){
+ return (p->pBt && p->pBt->inStmt);
+}
+
+/*
+** Return non-zero if a read (or write) transaction is active.
+*/
+int sqlite3BtreeIsInReadTrans(Btree *p){
+ return (p && (p->inTrans!=TRANS_NONE));
+}
+
+/*
+** This call is a no-op if no write-transaction is currently active on pBt.
+**
+** Otherwise, sync the database file for the btree pBt. zMaster points to
+** the name of a master journal file that should be written into the
+** individual journal file, or is NULL, indicating no master journal file
+** (single database transaction).
+**
+** When this is called, the master journal should already have been
+** created, populated with this journal pointer and synced to disk.
+**
+** Once this routine has returned, the only thing required to commit
+** the write-transaction for this database file is to delete the journal.
+*/
+int sqlite3BtreeSync(Btree *p, const char *zMaster){
+ int rc = SQLITE_OK;
+ if( p->inTrans==TRANS_WRITE ){
+ BtShared *pBt = p->pBt;
+ Pgno nTrunc = 0;
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( pBt->autoVacuum ){
+ rc = autoVacuumCommit(pBt, &nTrunc);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ }
+#endif
+ rc = sqlite3pager_sync(pBt->pPager, zMaster, nTrunc);
+ }
+ return rc;
+}
+
+/*
+** This function returns a pointer to a blob of memory associated with
+** a single shared-btree. The memory is used by client code for its own
+** purposes (for example, to store a high-level schema associated with
+** the shared-btree). The btree layer manages reference counting issues.
+**
+** The first time this is called on a shared-btree, nBytes bytes of memory
+** are allocated, zeroed, and returned to the caller. For each subsequent
+** call the nBytes parameter is ignored and a pointer to the same blob
+** of memory is returned.
+**
+** Just before the shared-btree is closed, the function passed as the
+** xFree argument when the memory allocation was made is invoked on the
+** blob of allocated memory. This function should not call sqliteFree()
+** on the memory; the btree layer does that.
+*/
+void *sqlite3BtreeSchema(Btree *p, int nBytes, void(*xFree)(void *)){
+ BtShared *pBt = p->pBt;
+ if( !pBt->pSchema ){
+ pBt->pSchema = sqliteMalloc(nBytes);
+ pBt->xFreeSchema = xFree;
+ }
+ return pBt->pSchema;
+}
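+/* Editorial sketch (not part of the original source): a hypothetical
+** caller obtaining (and, on first use, allocating) the shared schema
+** blob. freeSchemaCb is an illustrative destructor name:
+**
+**    Schema *pSchema = sqlite3BtreeSchema(p, sizeof(Schema), freeSchemaCb);
+**
+** On later calls the same pointer comes back and both the size and
+** destructor arguments are ignored.
+*/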
+
+/*
+** Return true if another user of the same shared btree as the argument
+** handle holds an exclusive lock on the sqlite_master table.
+*/
+int sqlite3BtreeSchemaLocked(Btree *p){
+ return (queryTableLock(p, MASTER_ROOT, READ_LOCK)!=SQLITE_OK);
+}
+
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+/*
+** Obtain a lock on the table whose root page is iTab. The
+** lock is a write lock if isWritelock is true or a read lock
+** if it is false.
+*/
+int sqlite3BtreeLockTable(Btree *p, int iTab, u8 isWriteLock){
+ int rc = SQLITE_OK;
+ u8 lockType = (isWriteLock?WRITE_LOCK:READ_LOCK);
+ rc = queryTableLock(p, iTab, lockType);
+ if( rc==SQLITE_OK ){
+ rc = lockTable(p, iTab, lockType);
+ }
+ return rc;
+}
+#endif
+
+/*
+** The following debugging interface has to be in this file (rather
+** than in, for example, test1.c) so that it can get access to
+** the definition of BtShared.
+*/
+#if defined(SQLITE_DEBUG) && defined(TCLSH)
+#include <tcl.h>
+int sqlite3_shared_cache_report(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ const ThreadData *pTd = sqlite3ThreadDataReadOnly();
+ if( pTd->useSharedData ){
+ BtShared *pBt;
+ Tcl_Obj *pRet = Tcl_NewObj();
+ for(pBt=pTd->pBtree; pBt; pBt=pBt->pNext){
+ const char *zFile = sqlite3pager_filename(pBt->pPager);
+ Tcl_ListObjAppendElement(interp, pRet, Tcl_NewStringObj(zFile, -1));
+ Tcl_ListObjAppendElement(interp, pRet, Tcl_NewIntObj(pBt->nRef));
+ }
+ Tcl_SetObjResult(interp, pRet);
+ }
+#endif
+ return TCL_OK;
+}
+#endif
Added: freeswitch/trunk/libs/sqlite/src/btree.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/btree.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,149 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This header file defines the interface to the sqlite B-Tree file
+** subsystem. See comments in the source code for a detailed description
+** of what each interface routine does.
+**
+** @(#) $Id: btree.h,v 1.71 2006/06/27 16:34:57 danielk1977 Exp $
+*/
+#ifndef _BTREE_H_
+#define _BTREE_H_
+
+/* TODO: This definition is just included so other modules compile. It
+** needs to be revisited.
+*/
+#define SQLITE_N_BTREE_META 10
+
+/*
+** If defined as non-zero, auto-vacuum is enabled by default. Otherwise
+** it must be turned on for each database using "PRAGMA auto_vacuum = 1".
+*/
+#ifndef SQLITE_DEFAULT_AUTOVACUUM
+ #define SQLITE_DEFAULT_AUTOVACUUM 0
+#endif
+
+/*
+** Forward declarations of structures
+*/
+typedef struct Btree Btree;
+typedef struct BtCursor BtCursor;
+typedef struct BtShared BtShared;
+
+
+int sqlite3BtreeOpen(
+ const char *zFilename, /* Name of database file to open */
+ sqlite3 *db, /* Associated database connection */
+ Btree **, /* Return open Btree* here */
+ int flags /* Flags */
+);
+
+/* The flags parameter to sqlite3BtreeOpen can be the bitwise OR of the
+** following values.
+**
+** NOTE: These values must match the corresponding PAGER_ values in
+** pager.h.
+*/
+#define BTREE_OMIT_JOURNAL 1 /* Do not use journal. No argument */
+#define BTREE_NO_READLOCK 2 /* Omit readlocks on readonly files */
+#define BTREE_MEMORY 4 /* In-memory DB. No argument */
+
+int sqlite3BtreeClose(Btree*);
+int sqlite3BtreeSetBusyHandler(Btree*,BusyHandler*);
+int sqlite3BtreeSetCacheSize(Btree*,int);
+int sqlite3BtreeSetSafetyLevel(Btree*,int,int);
+int sqlite3BtreeSyncDisabled(Btree*);
+int sqlite3BtreeSetPageSize(Btree*,int,int);
+int sqlite3BtreeGetPageSize(Btree*);
+int sqlite3BtreeGetReserve(Btree*);
+int sqlite3BtreeSetAutoVacuum(Btree *, int);
+int sqlite3BtreeGetAutoVacuum(Btree *);
+int sqlite3BtreeBeginTrans(Btree*,int);
+int sqlite3BtreeCommit(Btree*);
+int sqlite3BtreeRollback(Btree*);
+int sqlite3BtreeBeginStmt(Btree*);
+int sqlite3BtreeCommitStmt(Btree*);
+int sqlite3BtreeRollbackStmt(Btree*);
+int sqlite3BtreeCreateTable(Btree*, int*, int flags);
+int sqlite3BtreeIsInTrans(Btree*);
+int sqlite3BtreeIsInStmt(Btree*);
+int sqlite3BtreeIsInReadTrans(Btree*);
+int sqlite3BtreeSync(Btree*, const char *zMaster);
+void *sqlite3BtreeSchema(Btree *, int, void(*)(void *));
+int sqlite3BtreeSchemaLocked(Btree *);
+int sqlite3BtreeLockTable(Btree *, int, u8);
+
+const char *sqlite3BtreeGetFilename(Btree *);
+const char *sqlite3BtreeGetDirname(Btree *);
+const char *sqlite3BtreeGetJournalname(Btree *);
+int sqlite3BtreeCopyFile(Btree *, Btree *);
+
+/* The flags parameter to sqlite3BtreeCreateTable can be the bitwise OR
+** of the following flags:
+*/
+#define BTREE_INTKEY 1 /* Table has only 64-bit signed integer keys */
+#define BTREE_ZERODATA 2 /* Table has keys only - no data */
+#define BTREE_LEAFDATA 4 /* Data stored in leaves only. Implies INTKEY */
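+/* Editorial note (not part of the original source): in practice the
+** SQL layer passes BTREE_INTKEY|BTREE_LEAFDATA for the b-tree behind a
+** rowid table and BTREE_ZERODATA for the b-tree behind an index, as
+** described above sqlite3BtreeCreateTable() in btree.c.
+*/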
+
+int sqlite3BtreeDropTable(Btree*, int, int*);
+int sqlite3BtreeClearTable(Btree*, int);
+int sqlite3BtreeGetMeta(Btree*, int idx, u32 *pValue);
+int sqlite3BtreeUpdateMeta(Btree*, int idx, u32 value);
+
+int sqlite3BtreeCursor(
+ Btree*, /* BTree containing table to open */
+ int iTable, /* Index of root page */
+ int wrFlag, /* 1 for writing. 0 for read-only */
+ int(*)(void*,int,const void*,int,const void*), /* Key comparison function */
+ void*, /* First argument to compare function */
+ BtCursor **ppCursor /* Returned cursor */
+);
+
+void sqlite3BtreeSetCompare(
+ BtCursor *,
+ int(*)(void*,int,const void*,int,const void*),
+ void*
+);
+
+int sqlite3BtreeCloseCursor(BtCursor*);
+int sqlite3BtreeMoveto(BtCursor*, const void *pKey, i64 nKey, int *pRes);
+int sqlite3BtreeDelete(BtCursor*);
+int sqlite3BtreeInsert(BtCursor*, const void *pKey, i64 nKey,
+ const void *pData, int nData);
+int sqlite3BtreeFirst(BtCursor*, int *pRes);
+int sqlite3BtreeLast(BtCursor*, int *pRes);
+int sqlite3BtreeNext(BtCursor*, int *pRes);
+int sqlite3BtreeEof(BtCursor*);
+int sqlite3BtreeFlags(BtCursor*);
+int sqlite3BtreePrevious(BtCursor*, int *pRes);
+int sqlite3BtreeKeySize(BtCursor*, i64 *pSize);
+int sqlite3BtreeKey(BtCursor*, u32 offset, u32 amt, void*);
+const void *sqlite3BtreeKeyFetch(BtCursor*, int *pAmt);
+const void *sqlite3BtreeDataFetch(BtCursor*, int *pAmt);
+int sqlite3BtreeDataSize(BtCursor*, u32 *pSize);
+int sqlite3BtreeData(BtCursor*, u32 offset, u32 amt, void*);
+
+char *sqlite3BtreeIntegrityCheck(Btree*, int *aRoot, int nRoot);
+struct Pager *sqlite3BtreePager(Btree*);
+
+
+#ifdef SQLITE_TEST
+int sqlite3BtreeCursorInfo(BtCursor*, int*, int);
+void sqlite3BtreeCursorList(Btree*);
+#endif
+
+#ifdef SQLITE_DEBUG
+int sqlite3BtreePageDump(Btree*, int, int recursive);
+#else
+#define sqlite3BtreePageDump(X,Y,Z) SQLITE_OK
+#endif
+
+#endif /* _BTREE_H_ */
Added: freeswitch/trunk/libs/sqlite/src/build.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/build.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,3291 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains C code routines that are called by the SQLite parser
+** when syntax rules are reduced. The routines in this file handle the
+** following kinds of SQL syntax:
+**
+** CREATE TABLE
+** DROP TABLE
+** CREATE INDEX
+** DROP INDEX
+** creating ID lists
+** BEGIN TRANSACTION
+** COMMIT
+** ROLLBACK
+**
+** $Id: build.c,v 1.411 2006/09/11 23:45:49 drh Exp $
+*/
+#include "sqliteInt.h"
+#include <ctype.h>
+
+/*
+** This routine is called when a new SQL statement is beginning to
+** be parsed. Initialize the pParse structure as needed.
+*/
+void sqlite3BeginParse(Parse *pParse, int explainFlag){
+ pParse->explain = explainFlag;
+ pParse->nVar = 0;
+}
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+/*
+** The TableLock structure is only used by the sqlite3TableLock() and
+** codeTableLocks() functions.
+*/
+struct TableLock {
+ int iDb; /* The database containing the table to be locked */
+ int iTab; /* The root page of the table to be locked */
+ u8 isWriteLock; /* True for write lock. False for a read lock */
+ const char *zName; /* Name of the table */
+};
+
+/*
+** Record the fact that we want to lock a table at run-time.
+**
+** The table to be locked has root page iTab and is found in database iDb.
+** A read or a write lock can be taken depending on isWritelock.
+**
+** This routine just records the fact that the lock is desired. The
+** code to make the lock occur is generated by a later call to
+** codeTableLocks() which occurs during sqlite3FinishCoding().
+*/
+void sqlite3TableLock(
+ Parse *pParse, /* Parsing context */
+ int iDb, /* Index of the database containing the table to lock */
+ int iTab, /* Root page number of the table to be locked */
+ u8 isWriteLock, /* True for a write lock */
+ const char *zName /* Name of the table to be locked */
+){
+ int i;
+ int nBytes;
+ TableLock *p;
+
+ if( 0==sqlite3ThreadDataReadOnly()->useSharedData || iDb<0 ){
+ return;
+ }
+
+ for(i=0; i<pParse->nTableLock; i++){
+ p = &pParse->aTableLock[i];
+ if( p->iDb==iDb && p->iTab==iTab ){
+ p->isWriteLock = (p->isWriteLock || isWriteLock);
+ return;
+ }
+ }
+
+ nBytes = sizeof(TableLock) * (pParse->nTableLock+1);
+ sqliteReallocOrFree((void **)&pParse->aTableLock, nBytes);
+ if( pParse->aTableLock ){
+ p = &pParse->aTableLock[pParse->nTableLock++];
+ p->iDb = iDb;
+ p->iTab = iTab;
+ p->isWriteLock = isWriteLock;
+ p->zName = zName;
+ }
+}
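+/* Editorial sketch (not part of the original source): a hypothetical
+** call recording that the statement being compiled needs a read lock
+** on a table; pTab is an illustrative Table pointer whose tnum field
+** holds its root page:
+**
+**    sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName);
+*/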
+
+/*
+** Code an OP_TableLock instruction for each table locked by the
+** statement (configured by calls to sqlite3TableLock()).
+*/
+static void codeTableLocks(Parse *pParse){
+ int i;
+ Vdbe *pVdbe;
+ assert( sqlite3ThreadDataReadOnly()->useSharedData || pParse->nTableLock==0 );
+
+ if( 0==(pVdbe = sqlite3GetVdbe(pParse)) ){
+ return;
+ }
+
+ for(i=0; i<pParse->nTableLock; i++){
+ TableLock *p = &pParse->aTableLock[i];
+ int p1 = p->iDb;
+ if( p->isWriteLock ){
+ p1 = -1*(p1+1);
+ }
+ sqlite3VdbeOp3(pVdbe, OP_TableLock, p1, p->iTab, p->zName, P3_STATIC);
+ }
+}
+#else
+ #define codeTableLocks(x)
+#endif
+
+/*
+** This routine is called after a single SQL statement has been
+** parsed and a VDBE program to execute that statement has been
+** prepared. This routine puts the finishing touches on the
+** VDBE program and resets the pParse structure for the next
+** parse.
+**
+** Note that if an error occurred, it might be the case that
+** no VDBE code was generated.
+*/
+void sqlite3FinishCoding(Parse *pParse){
+ sqlite3 *db;
+ Vdbe *v;
+
+ if( sqlite3MallocFailed() ) return;
+ if( pParse->nested ) return;
+ if( !pParse->pVdbe ){
+ if( pParse->rc==SQLITE_OK && pParse->nErr ){
+ pParse->rc = SQLITE_ERROR;
+ return;
+ }
+ }
+
+ /* Begin by generating some termination code at the end of the
+ ** vdbe program
+ */
+ db = pParse->db;
+ v = sqlite3GetVdbe(pParse);
+ if( v ){
+ sqlite3VdbeAddOp(v, OP_Halt, 0, 0);
+
+ /* The cookie mask contains one bit for each database file open.
+ ** (Bit 0 is for main, bit 1 is for temp, and so forth.) Bits are
+ ** set for each database that is used. Generate code to start a
+ ** transaction on each used database and to verify the schema cookie
+ ** on each used database.
+ */
+ if( pParse->cookieGoto>0 ){
+ u32 mask;
+ int iDb;
+ sqlite3VdbeJumpHere(v, pParse->cookieGoto-1);
+ for(iDb=0, mask=1; iDb<db->nDb; mask<<=1, iDb++){
+ if( (mask & pParse->cookieMask)==0 ) continue;
+ sqlite3VdbeAddOp(v, OP_Transaction, iDb, (mask & pParse->writeMask)!=0);
+ sqlite3VdbeAddOp(v, OP_VerifyCookie, iDb, pParse->cookieValue[iDb]);
+ }
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( pParse->pVirtualLock ){
+ char *vtab = (char *)pParse->pVirtualLock->pVtab;
+ sqlite3VdbeOp3(v, OP_VBegin, 0, 0, vtab, P3_VTAB);
+ }
+#endif
+
+ /* Once all the cookies have been verified and transactions opened,
+ ** obtain the required table-locks. This is a no-op unless the
+ ** shared-cache feature is enabled.
+ */
+ codeTableLocks(pParse);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, pParse->cookieGoto);
+ }
+
+#ifndef SQLITE_OMIT_TRACE
+ /* Add a No-op that contains the complete text of the compiled SQL
+ ** statement as its P3 argument. This does not change the functionality
+ ** of the program.
+ **
+ ** This is used to implement sqlite3_trace().
+ */
+ sqlite3VdbeOp3(v, OP_Noop, 0, 0, pParse->zSql, pParse->zTail-pParse->zSql);
+#endif /* SQLITE_OMIT_TRACE */
+ }
+
+
+ /* Get the VDBE program ready for execution
+ */
+ if( v && pParse->nErr==0 && !sqlite3MallocFailed() ){
+ FILE *trace = (db->flags & SQLITE_VdbeTrace)!=0 ? stdout : 0;
+ sqlite3VdbeTrace(v, trace);
+ sqlite3VdbeMakeReady(v, pParse->nVar, pParse->nMem+3,
+ pParse->nTab+3, pParse->explain);
+ pParse->rc = SQLITE_DONE;
+ pParse->colNamesSet = 0;
+ }else if( pParse->rc==SQLITE_OK ){
+ pParse->rc = SQLITE_ERROR;
+ }
+ pParse->nTab = 0;
+ pParse->nMem = 0;
+ pParse->nSet = 0;
+ pParse->nVar = 0;
+ pParse->cookieMask = 0;
+ pParse->cookieGoto = 0;
+}
+
+/*
+** Run the parser and code generator recursively in order to generate
+** code for the SQL statement given onto the end of the pParse context
+** currently under construction. When the parser is run recursively
+** this way, the final OP_Halt is not appended and other initialization
+** and finalization steps are omitted because those are handled by the
+** outermost parser.
+**
+** Not everything is nestable. This facility is designed to permit
+** INSERT, UPDATE, and DELETE operations against SQLITE_MASTER. Use
+** care if you decide to try to use this routine for some other purposes.
+*/
+void sqlite3NestedParse(Parse *pParse, const char *zFormat, ...){
+ va_list ap;
+ char *zSql;
+# define SAVE_SZ (sizeof(Parse) - offsetof(Parse,nVar))
+ char saveBuf[SAVE_SZ];
+
+ if( pParse->nErr ) return;
+ assert( pParse->nested<10 ); /* Nesting should only be of limited depth */
+ va_start(ap, zFormat);
+ zSql = sqlite3VMPrintf(zFormat, ap);
+ va_end(ap);
+ if( zSql==0 ){
+ return; /* A malloc must have failed */
+ }
+ pParse->nested++;
+ memcpy(saveBuf, &pParse->nVar, SAVE_SZ);
+ memset(&pParse->nVar, 0, SAVE_SZ);
+ sqlite3RunParser(pParse, zSql, 0);
+ sqliteFree(zSql);
+ memcpy(&pParse->nVar, saveBuf, SAVE_SZ);
+ pParse->nested--;
+}
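+/* Editorial sketch (not part of the original source): a hypothetical
+** nested statement of the kind this facility exists for, editing the
+** schema table directly. zDbName and zTabName are illustrative:
+**
+**    sqlite3NestedParse(pParse,
+**       "DELETE FROM %Q.sqlite_master WHERE name=%Q", zDbName, zTabName);
+*/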
+
+/*
+** Locate the in-memory structure that describes a particular database
+** table given the name of that table and (optionally) the name of the
+** database containing the table. Return NULL if not found.
+**
+** If zDatabase is 0, all databases are searched for the table and the
+** first matching table is returned. (No checking for duplicate table
+** names is done.) The search order is TEMP first, then MAIN, then any
+** auxiliary databases added using the ATTACH command.
+**
+** See also sqlite3LocateTable().
+*/
+Table *sqlite3FindTable(sqlite3 *db, const char *zName, const char *zDatabase){
+ Table *p = 0;
+ int i;
+ assert( zName!=0 );
+ for(i=OMIT_TEMPDB; i<db->nDb; i++){
+ int j = (i<2) ? i^1 : i; /* Search TEMP before MAIN */
+ if( zDatabase!=0 && sqlite3StrICmp(zDatabase, db->aDb[j].zName) ) continue;
+ p = sqlite3HashFind(&db->aDb[j].pSchema->tblHash, zName, strlen(zName)+1);
+ if( p ) break;
+ }
+ return p;
+}
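+/* Editorial sketch (not part of the original source): hypothetical
+** lookups. Naming a database restricts the search; passing 0 searches
+** TEMP, then MAIN, then any attached databases:
+**
+**    Table *pTab = sqlite3FindTable(db, "t1", "main");
+**    Table *pAny = sqlite3FindTable(db, "t1", 0);
+*/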
+
+/*
+** Locate the in-memory structure that describes a particular database
+** table given the name of that table and (optionally) the name of the
+** database containing the table. Return NULL if not found. Also leave an
+** error message in pParse->zErrMsg.
+**
+** The difference between this routine and sqlite3FindTable() is that this
+** routine leaves an error message in pParse->zErrMsg where
+** sqlite3FindTable() does not.
+*/
+Table *sqlite3LocateTable(Parse *pParse, const char *zName, const char *zDbase){
+ Table *p;
+
+ /* Read the database schema. If an error occurs, leave an error message
+ ** and code in pParse and return NULL. */
+ if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
+ return 0;
+ }
+
+ p = sqlite3FindTable(pParse->db, zName, zDbase);
+ if( p==0 ){
+ if( zDbase ){
+ sqlite3ErrorMsg(pParse, "no such table: %s.%s", zDbase, zName);
+ }else{
+ sqlite3ErrorMsg(pParse, "no such table: %s", zName);
+ }
+ pParse->checkSchema = 1;
+ }
+ return p;
+}
+
+/*
+** Locate the in-memory structure that describes
+** a particular index given the name of that index
+** and the name of the database that contains the index.
+** Return NULL if not found.
+**
+** If zDatabase is 0, all databases are searched for the
+** table and the first matching index is returned. (No checking
+** for duplicate index names is done.) The search order is
+** TEMP first, then MAIN, then any auxiliary databases added
+** using the ATTACH command.
+*/
+Index *sqlite3FindIndex(sqlite3 *db, const char *zName, const char *zDb){
+ Index *p = 0;
+ int i;
+ for(i=OMIT_TEMPDB; i<db->nDb; i++){
+ int j = (i<2) ? i^1 : i; /* Search TEMP before MAIN */
+ Schema *pSchema = db->aDb[j].pSchema;
+ if( zDb && sqlite3StrICmp(zDb, db->aDb[j].zName) ) continue;
+ assert( pSchema || (j==1 && !db->aDb[1].pBt) );
+ if( pSchema ){
+ p = sqlite3HashFind(&pSchema->idxHash, zName, strlen(zName)+1);
+ }
+ if( p ) break;
+ }
+ return p;
+}
+
+/*
+** Reclaim the memory used by an index
+*/
+static void freeIndex(Index *p){
+ sqliteFree(p->zColAff);
+ sqliteFree(p);
+}
+
+/*
+** Remove the given index from the index hash table, and free
+** its memory structures.
+**
+** The index is removed from the database hash tables but
+** it is not unlinked from the Table that it indexes.
+** Unlinking from the Table must be done by the calling function.
+*/
+static void sqliteDeleteIndex(Index *p){
+ Index *pOld;
+ const char *zName = p->zName;
+
+ pOld = sqlite3HashInsert(&p->pSchema->idxHash, zName, strlen( zName)+1, 0);
+ assert( pOld==0 || pOld==p );
+ freeIndex(p);
+}
+
+/*
+** For the index called zIdxName which is found in the database iDb,
+** unlink that index from its Table, then remove the index from
+** the index hash table and free all memory structures associated
+** with the index.
+*/
+void sqlite3UnlinkAndDeleteIndex(sqlite3 *db, int iDb, const char *zIdxName){
+ Index *pIndex;
+ int len;
+ Hash *pHash = &db->aDb[iDb].pSchema->idxHash;
+
+ len = strlen(zIdxName);
+ pIndex = sqlite3HashInsert(pHash, zIdxName, len+1, 0);
+ if( pIndex ){
+ if( pIndex->pTable->pIndex==pIndex ){
+ pIndex->pTable->pIndex = pIndex->pNext;
+ }else{
+ Index *p;
+ for(p=pIndex->pTable->pIndex; p && p->pNext!=pIndex; p=p->pNext){}
+ if( p && p->pNext==pIndex ){
+ p->pNext = pIndex->pNext;
+ }
+ }
+ freeIndex(pIndex);
+ }
+ db->flags |= SQLITE_InternChanges;
+}
+
+/*
+** Erase all schema information from the in-memory hash tables of
+** a single database. This routine is called to reclaim memory
+** before the database closes. It is also called during a rollback
+** if there were schema changes during the transaction or if a
+** schema-cookie mismatch occurs.
+**
+** If iDb==0 then reset the internal schema tables for all database
+** files. If iDb>=1 then reset the internal schema for only the
+** single file indicated.
+*/
+void sqlite3ResetInternalSchema(sqlite3 *db, int iDb){
+ int i, j;
+
+ assert( iDb>=0 && iDb<db->nDb );
+ for(i=iDb; i<db->nDb; i++){
+ Db *pDb = &db->aDb[i];
+ if( pDb->pSchema ){
+ sqlite3SchemaFree(pDb->pSchema);
+ }
+ if( iDb>0 ) return;
+ }
+ assert( iDb==0 );
+ db->flags &= ~SQLITE_InternChanges;
+
+ /* If one or more of the auxiliary database files has been closed,
+ ** then remove them from the auxiliary database list. We take the
+ ** opportunity to do this here since we have just deleted all of the
+ ** schema hash tables and therefore do not have to make any changes
+ ** to any of those tables.
+ */
+ for(i=0; i<db->nDb; i++){
+ struct Db *pDb = &db->aDb[i];
+ if( pDb->pBt==0 ){
+ if( pDb->pAux && pDb->xFreeAux ) pDb->xFreeAux(pDb->pAux);
+ pDb->pAux = 0;
+ }
+ }
+ for(i=j=2; i<db->nDb; i++){
+ struct Db *pDb = &db->aDb[i];
+ if( pDb->pBt==0 ){
+ sqliteFree(pDb->zName);
+ pDb->zName = 0;
+ continue;
+ }
+ if( j<i ){
+ db->aDb[j] = db->aDb[i];
+ }
+ j++;
+ }
+ memset(&db->aDb[j], 0, (db->nDb-j)*sizeof(db->aDb[j]));
+ db->nDb = j;
+ if( db->nDb<=2 && db->aDb!=db->aDbStatic ){
+ memcpy(db->aDbStatic, db->aDb, 2*sizeof(db->aDb[0]));
+ sqliteFree(db->aDb);
+ db->aDb = db->aDbStatic;
+ }
+}
+
+/*
+** This routine is called whenever a rollback occurs. If there were
+** schema changes during the transaction, then we have to reset the
+** internal hash tables and reload them from disk.
+*/
+void sqlite3RollbackInternalChanges(sqlite3 *db){
+ if( db->flags & SQLITE_InternChanges ){
+ sqlite3ResetInternalSchema(db, 0);
+ }
+}
+
+/*
+** This routine is called when a commit occurs.
+*/
+void sqlite3CommitInternalChanges(sqlite3 *db){
+ db->flags &= ~SQLITE_InternChanges;
+}
+
+/*
+** Clear the column names from a table or view.
+*/
+static void sqliteResetColumnNames(Table *pTable){
+ int i;
+ Column *pCol;
+ assert( pTable!=0 );
+ if( (pCol = pTable->aCol)!=0 ){
+ for(i=0; i<pTable->nCol; i++, pCol++){
+ sqliteFree(pCol->zName);
+ sqlite3ExprDelete(pCol->pDflt);
+ sqliteFree(pCol->zType);
+ sqliteFree(pCol->zColl);
+ }
+ sqliteFree(pTable->aCol);
+ }
+ pTable->aCol = 0;
+ pTable->nCol = 0;
+}
+
+/*
+** Remove the memory data structures associated with the given
+** Table. No changes are made to disk by this routine.
+**
+** This routine just deletes the data structure. It does not unlink
+** the table data structure from the hash table. Nor does it remove
+** foreign keys from the schema's aFKey hash table. But it does destroy
+** memory structures of the indices and foreign keys associated with
+** the table.
+**
+** Indices associated with the table are unlinked from the "db"
+** data structure if db!=NULL. If db==NULL, indices attached to
+** the table are deleted, but it is assumed they have already been
+** unlinked.
+*/
+void sqlite3DeleteTable(sqlite3 *db, Table *pTable){
+ Index *pIndex, *pNext;
+ FKey *pFKey, *pNextFKey;
+
+  db = 0;  /* The db parameter is not used below; every call behaves as if db==NULL */
+
+ if( pTable==0 ) return;
+
+ /* Do not delete the table until the reference count reaches zero. */
+ pTable->nRef--;
+ if( pTable->nRef>0 ){
+ return;
+ }
+ assert( pTable->nRef==0 );
+
+ /* Delete all indices associated with this table
+ */
+ for(pIndex = pTable->pIndex; pIndex; pIndex=pNext){
+ pNext = pIndex->pNext;
+ assert( pIndex->pSchema==pTable->pSchema );
+ sqliteDeleteIndex(pIndex);
+ }
+
+#ifndef SQLITE_OMIT_FOREIGN_KEY
+ /* Delete all foreign keys associated with this table. The keys
+  ** should have already been unlinked from the schema's aFKey hash table
+ */
+ for(pFKey=pTable->pFKey; pFKey; pFKey=pNextFKey){
+ pNextFKey = pFKey->pNextFrom;
+ assert( sqlite3HashFind(&pTable->pSchema->aFKey,
+ pFKey->zTo, strlen(pFKey->zTo)+1)!=pFKey );
+ sqliteFree(pFKey);
+ }
+#endif
+
+ /* Delete the Table structure itself.
+ */
+ sqliteResetColumnNames(pTable);
+ sqliteFree(pTable->zName);
+ sqliteFree(pTable->zColAff);
+ sqlite3SelectDelete(pTable->pSelect);
+#ifndef SQLITE_OMIT_CHECK
+ sqlite3ExprDelete(pTable->pCheck);
+#endif
+ sqlite3VtabClear(pTable);
+ sqliteFree(pTable);
+}
+
+/*
+** Unlink the given table from the hash tables and then delete the
+** table structure with all its indices and foreign keys.
+*/
+void sqlite3UnlinkAndDeleteTable(sqlite3 *db, int iDb, const char *zTabName){
+ Table *p;
+ FKey *pF1, *pF2;
+ Db *pDb;
+
+ assert( db!=0 );
+ assert( iDb>=0 && iDb<db->nDb );
+ assert( zTabName && zTabName[0] );
+ pDb = &db->aDb[iDb];
+ p = sqlite3HashInsert(&pDb->pSchema->tblHash, zTabName, strlen(zTabName)+1,0);
+ if( p ){
+#ifndef SQLITE_OMIT_FOREIGN_KEY
+ for(pF1=p->pFKey; pF1; pF1=pF1->pNextFrom){
+ int nTo = strlen(pF1->zTo) + 1;
+ pF2 = sqlite3HashFind(&pDb->pSchema->aFKey, pF1->zTo, nTo);
+ if( pF2==pF1 ){
+ sqlite3HashInsert(&pDb->pSchema->aFKey, pF1->zTo, nTo, pF1->pNextTo);
+ }else{
+ while( pF2 && pF2->pNextTo!=pF1 ){ pF2=pF2->pNextTo; }
+ if( pF2 ){
+ pF2->pNextTo = pF1->pNextTo;
+ }
+ }
+ }
+#endif
+ sqlite3DeleteTable(db, p);
+ }
+ db->flags |= SQLITE_InternChanges;
+}
+
+/*
+** Given a token, return a string that consists of the text of that
+** token with any quotations removed. Space to hold the returned string
+** is obtained from sqliteMalloc() and must be freed by the calling
+** function.
+**
+** Tokens are often just pointers into the original SQL text and so
+** are not \000 terminated and are not persistent. The returned string
+** is \000 terminated and is persistent.
+*/
+char *sqlite3NameFromToken(Token *pName){
+ char *zName;
+ if( pName ){
+ zName = sqliteStrNDup((char*)pName->z, pName->n);
+ sqlite3Dequote(zName);
+ }else{
+ zName = 0;
+ }
+ return zName;
+}
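+
+/* For illustration: sqlite3Dequote() strips any of the quoting styles the
+** tokenizer accepts, so the tokens below all yield the same name:
+**
+**     "my table"   ->  my table
+**     [my table]   ->  my table
+**     `my table`   ->  my table
+*/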
+
+/*
+** Open the sqlite_master table stored in database number iDb for
+** writing. The table is opened using cursor 0.
+*/
+void sqlite3OpenMasterTable(Parse *p, int iDb){
+ Vdbe *v = sqlite3GetVdbe(p);
+ sqlite3TableLock(p, iDb, MASTER_ROOT, 1, SCHEMA_TABLE(iDb));
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ sqlite3VdbeAddOp(v, OP_OpenWrite, 0, MASTER_ROOT);
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, 0, 5); /* sqlite_master has 5 columns */
+}
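+
+/* For reference, the five columns referenced above correspond to the
+** familiar schema-table layout:
+**
+**     CREATE TABLE sqlite_master(
+**       type text,        -- 'table', 'index', 'view' or 'trigger'
+**       name text,        -- name of the object
+**       tbl_name text,    -- table the object is associated with
+**       rootpage integer, -- root b-tree page (0 for views and triggers)
+**       sql text          -- text of the CREATE statement
+**     );
+*/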
+
+/*
+** The token *pName contains the name of a database (either "main" or
+** "temp" or the name of an attached db). This routine returns the
+** index of the named database in db->aDb[], or -1 if the named db
+** does not exist.
+*/
+int sqlite3FindDb(sqlite3 *db, Token *pName){
+ int i = -1; /* Database number */
+ int n; /* Number of characters in the name */
+ Db *pDb; /* A database whose name space is being searched */
+ char *zName; /* Name we are searching for */
+
+ zName = sqlite3NameFromToken(pName);
+ if( zName ){
+ n = strlen(zName);
+ for(i=(db->nDb-1), pDb=&db->aDb[i]; i>=0; i--, pDb--){
+ if( (!OMIT_TEMPDB || i!=1 ) && n==strlen(pDb->zName) &&
+ 0==sqlite3StrICmp(pDb->zName, zName) ){
+ break;
+ }
+ }
+ sqliteFree(zName);
+ }
+ return i;
+}
+
+/* The table or view or trigger name is passed to this routine via tokens
+** pName1 and pName2. If the table name was fully qualified, for example:
+**
+** CREATE TABLE xxx.yyy (...);
+**
+** Then pName1 is set to "xxx" and pName2 "yyy". On the other hand if
+** the table name is not fully qualified, i.e.:
+**
+** CREATE TABLE yyy(...);
+**
+** Then pName1 is set to "yyy" and pName2 is "".
+**
+** This routine sets the *ppUnqual pointer to point at the token (pName1 or
+** pName2) that stores the unqualified table name. The index of the
+** database "xxx" is returned.
+*/
+int sqlite3TwoPartName(
+ Parse *pParse, /* Parsing and code generating context */
+ Token *pName1, /* The "xxx" in the name "xxx.yyy" or "xxx" */
+ Token *pName2, /* The "yyy" in the name "xxx.yyy" */
+ Token **pUnqual /* Write the unqualified object name here */
+){
+ int iDb; /* Database holding the object */
+ sqlite3 *db = pParse->db;
+
+ if( pName2 && pName2->n>0 ){
+ assert( !db->init.busy );
+ *pUnqual = pName2;
+ iDb = sqlite3FindDb(db, pName1);
+ if( iDb<0 ){
+ sqlite3ErrorMsg(pParse, "unknown database %T", pName1);
+ pParse->nErr++;
+ return -1;
+ }
+ }else{
+ assert( db->init.iDb==0 || db->init.busy );
+ iDb = db->init.iDb;
+ *pUnqual = pName1;
+ }
+ return iDb;
+}
+
+/*
+** This routine is used to check if the UTF-8 string zName is a legal
+** unqualified name for a new schema object (table, index, view or
+** trigger). All names are legal except those that begin with the string
+** "sqlite_" (in upper, lower or mixed case). This portion of the namespace
+** is reserved for internal use.
+*/
+int sqlite3CheckObjectName(Parse *pParse, const char *zName){
+ if( !pParse->db->init.busy && pParse->nested==0
+ && (pParse->db->flags & SQLITE_WriteSchema)==0
+ && 0==sqlite3StrNICmp(zName, "sqlite_", 7) ){
+ sqlite3ErrorMsg(pParse, "object name reserved for internal use: %s", zName);
+ return SQLITE_ERROR;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Begin constructing a new table representation in memory. This is
+** the first of several action routines that get called in response
+** to a CREATE TABLE statement. In particular, this routine is called
+** after seeing tokens "CREATE" and "TABLE" and the table name. The isTemp
+** flag is true if the table should be stored in the auxiliary database
+** file instead of in the main database file. This is normally the case
+** when the "TEMP" or "TEMPORARY" keyword occurs in between
+** CREATE and TABLE.
+**
+** The new table record is initialized and put in pParse->pNewTable.
+** As more of the CREATE TABLE statement is parsed, additional action
+** routines will be called to add more information to this record.
+** At the end of the CREATE TABLE statement, the sqlite3EndTable() routine
+** is called to complete the construction of the new table record.
+*/
+void sqlite3StartTable(
+ Parse *pParse, /* Parser context */
+ Token *pName1, /* First part of the name of the table or view */
+ Token *pName2, /* Second part of the name of the table or view */
+ int isTemp, /* True if this is a TEMP table */
+ int isView, /* True if this is a VIEW */
+ int isVirtual, /* True if this is a VIRTUAL table */
+ int noErr /* Do nothing if table already exists */
+){
+ Table *pTable;
+ char *zName = 0; /* The name of the new table */
+ sqlite3 *db = pParse->db;
+ Vdbe *v;
+ int iDb; /* Database number to create the table in */
+ Token *pName; /* Unqualified name of the table to create */
+
+ /* The table or view name to create is passed to this routine via tokens
+ ** pName1 and pName2. If the table name was fully qualified, for example:
+ **
+ ** CREATE TABLE xxx.yyy (...);
+ **
+ ** Then pName1 is set to "xxx" and pName2 "yyy". On the other hand if
+ ** the table name is not fully qualified, i.e.:
+ **
+ ** CREATE TABLE yyy(...);
+ **
+ ** Then pName1 is set to "yyy" and pName2 is "".
+ **
+ ** The call below sets the pName pointer to point at the token (pName1 or
+ ** pName2) that stores the unqualified table name. The variable iDb is
+ ** set to the index of the database that the table or view is to be
+ ** created in.
+ */
+ iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName);
+ if( iDb<0 ) return;
+ if( !OMIT_TEMPDB && isTemp && iDb>1 ){
+ /* If creating a temp table, the name may not be qualified */
+ sqlite3ErrorMsg(pParse, "temporary table name must be unqualified");
+ return;
+ }
+ if( !OMIT_TEMPDB && isTemp ) iDb = 1;
+
+ pParse->sNameToken = *pName;
+ zName = sqlite3NameFromToken(pName);
+ if( zName==0 ) return;
+ if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){
+ goto begin_table_error;
+ }
+ if( db->init.iDb==1 ) isTemp = 1;
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ assert( (isTemp & 1)==isTemp );
+ {
+ int code;
+ char *zDb = db->aDb[iDb].zName;
+ if( sqlite3AuthCheck(pParse, SQLITE_INSERT, SCHEMA_TABLE(isTemp), 0, zDb) ){
+ goto begin_table_error;
+ }
+ if( isView ){
+ if( !OMIT_TEMPDB && isTemp ){
+ code = SQLITE_CREATE_TEMP_VIEW;
+ }else{
+ code = SQLITE_CREATE_VIEW;
+ }
+ }else{
+ if( !OMIT_TEMPDB && isTemp ){
+ code = SQLITE_CREATE_TEMP_TABLE;
+ }else{
+ code = SQLITE_CREATE_TABLE;
+ }
+ }
+ if( !isVirtual && sqlite3AuthCheck(pParse, code, zName, 0, zDb) ){
+ goto begin_table_error;
+ }
+ }
+#endif
+
+ /* Make sure the new table name does not collide with an existing
+ ** index or table name in the same database. Issue an error message if
+ ** it does. The exception is if the statement being parsed was passed
+ ** to an sqlite3_declare_vtab() call. In that case only the column names
+ ** and types will be used, so there is no need to test for namespace
+ ** collisions.
+ */
+ if( !IN_DECLARE_VTAB ){
+ if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
+ goto begin_table_error;
+ }
+ pTable = sqlite3FindTable(db, zName, db->aDb[iDb].zName);
+ if( pTable ){
+ if( !noErr ){
+ sqlite3ErrorMsg(pParse, "table %T already exists", pName);
+ }
+ goto begin_table_error;
+ }
+ if( sqlite3FindIndex(db, zName, 0)!=0 && (iDb==0 || !db->init.busy) ){
+ sqlite3ErrorMsg(pParse, "there is already an index named %s", zName);
+ goto begin_table_error;
+ }
+ }
+
+ pTable = sqliteMalloc( sizeof(Table) );
+ if( pTable==0 ){
+ pParse->rc = SQLITE_NOMEM;
+ pParse->nErr++;
+ goto begin_table_error;
+ }
+ pTable->zName = zName;
+ pTable->iPKey = -1;
+ pTable->pSchema = db->aDb[iDb].pSchema;
+ pTable->nRef = 1;
+ if( pParse->pNewTable ) sqlite3DeleteTable(db, pParse->pNewTable);
+ pParse->pNewTable = pTable;
+
+ /* If this is the magic sqlite_sequence table used by autoincrement,
+ ** then record a pointer to this table in the main database structure
+ ** so that INSERT can find the table easily.
+ */
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ if( !pParse->nested && strcmp(zName, "sqlite_sequence")==0 ){
+ pTable->pSchema->pSeqTab = pTable;
+ }
+#endif
+
+ /* Begin generating the code that will insert the table record into
+ ** the SQLITE_MASTER table. Note in particular that we must go ahead
+  ** and allocate the record number for the table entry now, before any
+  ** PRIMARY KEY or UNIQUE keywords are parsed. Those keywords will cause
+ ** indices to be created and the table record must come before the
+ ** indices. Hence, the record number for the table must be allocated
+ ** now.
+ */
+ if( !db->init.busy && (v = sqlite3GetVdbe(pParse))!=0 ){
+ int lbl;
+ int fileFormat;
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( isVirtual ){
+ sqlite3VdbeAddOp(v, OP_VBegin, 0, 0);
+ }
+#endif
+
+ /* If the file format and encoding in the database have not been set,
+ ** set them now.
+ */
+ sqlite3VdbeAddOp(v, OP_ReadCookie, iDb, 1); /* file_format */
+ lbl = sqlite3VdbeMakeLabel(v);
+ sqlite3VdbeAddOp(v, OP_If, 0, lbl);
+ fileFormat = (db->flags & SQLITE_LegacyFileFmt)!=0 ?
+ 1 : SQLITE_MAX_FILE_FORMAT;
+ sqlite3VdbeAddOp(v, OP_Integer, fileFormat, 0);
+ sqlite3VdbeAddOp(v, OP_SetCookie, iDb, 1);
+ sqlite3VdbeAddOp(v, OP_Integer, ENC(db), 0);
+ sqlite3VdbeAddOp(v, OP_SetCookie, iDb, 4);
+ sqlite3VdbeResolveLabel(v, lbl);
+
+ /* This just creates a place-holder record in the sqlite_master table.
+ ** The record created does not contain anything yet. It will be replaced
+ ** by the real entry in code generated at sqlite3EndTable().
+ **
+ ** The rowid for the new entry is left on the top of the stack.
+ ** The rowid value is needed by the code that sqlite3EndTable will
+ ** generate.
+ */
+#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE)
+ if( isView || isVirtual ){
+ sqlite3VdbeAddOp(v, OP_Integer, 0, 0);
+ }else
+#endif
+ {
+ sqlite3VdbeAddOp(v, OP_CreateTable, iDb, 0);
+ }
+ sqlite3OpenMasterTable(pParse, iDb);
+ sqlite3VdbeAddOp(v, OP_NewRowid, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Insert, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Close, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Pull, 1, 0);
+ }
+
+ /* Normal (non-error) return. */
+ return;
+
+ /* If an error occurs, we jump here */
+begin_table_error:
+ sqliteFree(zName);
+ return;
+}
+
+/*
+** This macro is used to compare two strings in a case-insensitive manner.
+** It is slightly faster than calling sqlite3StrICmp() directly, but
+** produces larger code.
+**
+** WARNING: This macro is not compatible with the strcmp() family. It
+** returns true if the two strings are equal, otherwise false.
+*/
+#define STRICMP(x, y) (\
+sqlite3UpperToLower[*(unsigned char *)(x)]== \
+sqlite3UpperToLower[*(unsigned char *)(y)] \
+&& sqlite3StrICmp((x)+1,(y)+1)==0 )
+
+/*
+** Add a new column to the table currently being constructed.
+**
+** The parser calls this routine once for each column declaration
+** in a CREATE TABLE statement. sqlite3StartTable() gets called
+** first to get things going. Then this routine is called for each
+** column.
+*/
+void sqlite3AddColumn(Parse *pParse, Token *pName){
+ Table *p;
+ int i;
+ char *z;
+ Column *pCol;
+ if( (p = pParse->pNewTable)==0 ) return;
+ z = sqlite3NameFromToken(pName);
+ if( z==0 ) return;
+ for(i=0; i<p->nCol; i++){
+ if( STRICMP(z, p->aCol[i].zName) ){
+ sqlite3ErrorMsg(pParse, "duplicate column name: %s", z);
+ sqliteFree(z);
+ return;
+ }
+ }
+ if( (p->nCol & 0x7)==0 ){
+ Column *aNew;
+ aNew = sqliteRealloc( p->aCol, (p->nCol+8)*sizeof(p->aCol[0]));
+ if( aNew==0 ){
+ sqliteFree(z);
+ return;
+ }
+ p->aCol = aNew;
+ }
+ pCol = &p->aCol[p->nCol];
+ memset(pCol, 0, sizeof(p->aCol[0]));
+ pCol->zName = z;
+
+ /* If there is no type specified, columns have the default affinity
+ ** 'NONE'. If there is a type specified, then sqlite3AddColumnType() will
+ ** be called next to set pCol->affinity correctly.
+ */
+ pCol->affinity = SQLITE_AFF_NONE;
+ p->nCol++;
+}
+
+/*
+** This routine is called by the parser while in the middle of
+** parsing a CREATE TABLE statement. A "NOT NULL" constraint has
+** been seen on a column. This routine sets the notNull flag on
+** the column currently under construction.
+*/
+void sqlite3AddNotNull(Parse *pParse, int onError){
+ Table *p;
+ int i;
+ if( (p = pParse->pNewTable)==0 ) return;
+ i = p->nCol-1;
+ if( i>=0 ) p->aCol[i].notNull = onError;
+}
+
+/*
+** Scan the column type name given by the token pType and return the
+** associated affinity type.
+**
+** This routine does a case-independent search of zType for the
+** substrings in the following table. If one of the substrings is
+** found, the corresponding affinity is returned. If zType contains
+** more than one of the substrings, entries toward the top of
+** the table take priority. For example, if zType is 'BLOBINT',
+** SQLITE_AFF_INTEGER is returned.
+**
+** Substring | Affinity
+** --------------------------------
+** 'INT' | SQLITE_AFF_INTEGER
+** 'CHAR' | SQLITE_AFF_TEXT
+** 'CLOB' | SQLITE_AFF_TEXT
+** 'TEXT' | SQLITE_AFF_TEXT
+** 'BLOB' | SQLITE_AFF_NONE
+** 'REAL' | SQLITE_AFF_REAL
+** 'FLOA' | SQLITE_AFF_REAL
+** 'DOUB' | SQLITE_AFF_REAL
+**
+** If none of the substrings in the above table are found,
+** SQLITE_AFF_NUMERIC is returned.
+*/
+char sqlite3AffinityType(const Token *pType){
+ u32 h = 0;
+ char aff = SQLITE_AFF_NUMERIC;
+ const unsigned char *zIn = pType->z;
+ const unsigned char *zEnd = &pType->z[pType->n];
+
+ while( zIn!=zEnd ){
+ h = (h<<8) + sqlite3UpperToLower[*zIn];
+ zIn++;
+ if( h==(('c'<<24)+('h'<<16)+('a'<<8)+'r') ){ /* CHAR */
+ aff = SQLITE_AFF_TEXT;
+ }else if( h==(('c'<<24)+('l'<<16)+('o'<<8)+'b') ){ /* CLOB */
+ aff = SQLITE_AFF_TEXT;
+ }else if( h==(('t'<<24)+('e'<<16)+('x'<<8)+'t') ){ /* TEXT */
+ aff = SQLITE_AFF_TEXT;
+ }else if( h==(('b'<<24)+('l'<<16)+('o'<<8)+'b') /* BLOB */
+ && (aff==SQLITE_AFF_NUMERIC || aff==SQLITE_AFF_REAL) ){
+ aff = SQLITE_AFF_NONE;
+#ifndef SQLITE_OMIT_FLOATING_POINT
+ }else if( h==(('r'<<24)+('e'<<16)+('a'<<8)+'l') /* REAL */
+ && aff==SQLITE_AFF_NUMERIC ){
+ aff = SQLITE_AFF_REAL;
+ }else if( h==(('f'<<24)+('l'<<16)+('o'<<8)+'a') /* FLOA */
+ && aff==SQLITE_AFF_NUMERIC ){
+ aff = SQLITE_AFF_REAL;
+ }else if( h==(('d'<<24)+('o'<<16)+('u'<<8)+'b') /* DOUB */
+ && aff==SQLITE_AFF_NUMERIC ){
+ aff = SQLITE_AFF_REAL;
+#endif
+ }else if( (h&0x00FFFFFF)==(('i'<<16)+('n'<<8)+'t') ){ /* INT */
+ aff = SQLITE_AFF_INTEGER;
+ break;
+ }
+ }
+
+ return aff;
+}
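+
+/* For illustration: applying the substring rules above to a declaration like
+**
+**     CREATE TABLE t1(a VARCHAR(10), b FLOAT, c BLOB, d INT8, e DECIMAL(5,2));
+**
+** gives column affinities TEXT ('CHAR'), REAL ('FLOA'), NONE ('BLOB'),
+** INTEGER ('INT') and NUMERIC (no substring matches) respectively.
+*/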
+
+/*
+** This routine is called by the parser while in the middle of
+** parsing a CREATE TABLE statement. The pType token holds the text of
+** the declared type of the column currently under construction. Use
+** this information to construct a string that contains the typename of
+** the column and store that string in zType.
+*/
+void sqlite3AddColumnType(Parse *pParse, Token *pType){
+ Table *p;
+ int i;
+ Column *pCol;
+
+ if( (p = pParse->pNewTable)==0 ) return;
+ i = p->nCol-1;
+ if( i<0 ) return;
+ pCol = &p->aCol[i];
+ sqliteFree(pCol->zType);
+ pCol->zType = sqlite3NameFromToken(pType);
+ pCol->affinity = sqlite3AffinityType(pType);
+}
+
+/*
+** The expression is the default value for the most recently added column
+** of the table currently under construction.
+**
+** Default value expressions must be constant. Raise an exception if this
+** is not the case.
+**
+** This routine is called by the parser while in the middle of
+** parsing a CREATE TABLE statement.
+*/
+void sqlite3AddDefaultValue(Parse *pParse, Expr *pExpr){
+ Table *p;
+ Column *pCol;
+ if( (p = pParse->pNewTable)!=0 ){
+ pCol = &(p->aCol[p->nCol-1]);
+ if( !sqlite3ExprIsConstantOrFunction(pExpr) ){
+ sqlite3ErrorMsg(pParse, "default value of column [%s] is not constant",
+ pCol->zName);
+ }else{
+ Expr *pCopy;
+ sqlite3ExprDelete(pCol->pDflt);
+ pCol->pDflt = pCopy = sqlite3ExprDup(pExpr);
+ if( pCopy ){
+ sqlite3TokenCopy(&pCopy->span, &pExpr->span);
+ }
+ }
+ }
+ sqlite3ExprDelete(pExpr);
+}
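+
+/* For illustration: a constant default is accepted, while an expression that
+** refers to another column is rejected by the check above, e.g.
+**
+**     CREATE TABLE t1(a INTEGER, b TEXT DEFAULT 'none');     -- ok
+**     CREATE TABLE t2(a INTEGER, b INTEGER DEFAULT (a+1));   -- error:
+**         "default value of column [b] is not constant"
+*/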
+
+/*
+** Designate the PRIMARY KEY for the table. pList is a list of names
+** of columns that form the primary key. If pList is NULL, then the
+** most recently added column of the table is the primary key.
+**
+** A table can have at most one primary key. If the table already has
+** a primary key (and this is the second primary key) then create an
+** error.
+**
+** If the PRIMARY KEY is on a single column whose datatype is INTEGER,
+** then we will try to use that column as the rowid. Set the Table.iPKey
+** field of the table under construction to be the index of the
+** INTEGER PRIMARY KEY column. Table.iPKey is set to -1 if there is
+** no INTEGER PRIMARY KEY.
+**
+** If the key is not an INTEGER PRIMARY KEY, then create a unique
+** index for the key. No index is created for INTEGER PRIMARY KEYs.
+*/
+void sqlite3AddPrimaryKey(
+ Parse *pParse, /* Parsing context */
+ ExprList *pList, /* List of field names to be indexed */
+ int onError, /* What to do with a uniqueness conflict */
+ int autoInc, /* True if the AUTOINCREMENT keyword is present */
+ int sortOrder /* SQLITE_SO_ASC or SQLITE_SO_DESC */
+){
+ Table *pTab = pParse->pNewTable;
+ char *zType = 0;
+ int iCol = -1, i;
+ if( pTab==0 || IN_DECLARE_VTAB ) goto primary_key_exit;
+ if( pTab->hasPrimKey ){
+ sqlite3ErrorMsg(pParse,
+ "table \"%s\" has more than one primary key", pTab->zName);
+ goto primary_key_exit;
+ }
+ pTab->hasPrimKey = 1;
+ if( pList==0 ){
+ iCol = pTab->nCol - 1;
+ pTab->aCol[iCol].isPrimKey = 1;
+ }else{
+ for(i=0; i<pList->nExpr; i++){
+ for(iCol=0; iCol<pTab->nCol; iCol++){
+ if( sqlite3StrICmp(pList->a[i].zName, pTab->aCol[iCol].zName)==0 ){
+ break;
+ }
+ }
+ if( iCol<pTab->nCol ){
+ pTab->aCol[iCol].isPrimKey = 1;
+ }
+ }
+ if( pList->nExpr>1 ) iCol = -1;
+ }
+ if( iCol>=0 && iCol<pTab->nCol ){
+ zType = pTab->aCol[iCol].zType;
+ }
+ if( zType && sqlite3StrICmp(zType, "INTEGER")==0
+ && sortOrder==SQLITE_SO_ASC ){
+ pTab->iPKey = iCol;
+ pTab->keyConf = onError;
+ pTab->autoInc = autoInc;
+ }else if( autoInc ){
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ sqlite3ErrorMsg(pParse, "AUTOINCREMENT is only allowed on an "
+ "INTEGER PRIMARY KEY");
+#endif
+ }else{
+ sqlite3CreateIndex(pParse, 0, 0, 0, pList, onError, 0, 0, sortOrder, 0);
+ pList = 0;
+ }
+
+primary_key_exit:
+ sqlite3ExprListDelete(pList);
+ return;
+}
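+
+/* For illustration: only an exact "INTEGER" declaration (with ascending sort
+** order) becomes an alias for the rowid; anything else gets a real index:
+**
+**     CREATE TABLE t1(a INTEGER PRIMARY KEY, b);  -- a is the rowid, no index
+**     CREATE TABLE t2(a INT PRIMARY KEY, b);      -- separate UNIQUE index on a
+*/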
+
+/*
+** Add a new CHECK constraint to the table currently under construction.
+*/
+void sqlite3AddCheckConstraint(
+ Parse *pParse, /* Parsing context */
+ Expr *pCheckExpr /* The check expression */
+){
+#ifndef SQLITE_OMIT_CHECK
+ Table *pTab = pParse->pNewTable;
+ if( pTab && !IN_DECLARE_VTAB ){
+ /* The CHECK expression must be duplicated so that tokens refer
+ ** to malloced space and not the (ephemeral) text of the CREATE TABLE
+ ** statement */
+ pTab->pCheck = sqlite3ExprAnd(pTab->pCheck, sqlite3ExprDup(pCheckExpr));
+ }
+#endif
+ sqlite3ExprDelete(pCheckExpr);
+}
+
+/*
+** Set the collation function of the most recently parsed table column
+** to the CollSeq given.
+*/
+void sqlite3AddCollateType(Parse *pParse, const char *zType, int nType){
+ Table *p;
+ int i;
+
+ if( (p = pParse->pNewTable)==0 ) return;
+ i = p->nCol-1;
+
+ if( sqlite3LocateCollSeq(pParse, zType, nType) ){
+ Index *pIdx;
+ p->aCol[i].zColl = sqliteStrNDup(zType, nType);
+
+ /* If the column is declared as "<name> PRIMARY KEY COLLATE <type>",
+ ** then an index may have been created on this column before the
+ ** collation type was added. Correct this if it is the case.
+ */
+ for(pIdx=p->pIndex; pIdx; pIdx=pIdx->pNext){
+ assert( pIdx->nColumn==1 );
+ if( pIdx->aiColumn[0]==i ){
+ pIdx->azColl[0] = p->aCol[i].zColl;
+ }
+ }
+ }
+}
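+
+/* For illustration: the routine above handles declarations such as
+**
+**     CREATE TABLE t1(a TEXT COLLATE NOCASE, b TEXT PRIMARY KEY COLLATE NOCASE);
+**
+** For column b the implied PRIMARY KEY index has already been created by the
+** time the COLLATE clause is seen, which is why the loop re-points azColl[0].
+*/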
+
+/*
+** This function returns the collation sequence for the database's native
+** text encoding that is identified by the string zName, length nName.
+**
+** If the requested collation sequence is not available, or not available
+** in the database native encoding, the collation factory is invoked to
+** request it. If the collation factory does not supply such a sequence,
+** and the sequence is available in another text encoding, then that is
+** returned instead.
+**
+** If no version of the requested collation sequence is available, or
+** another error occurs, NULL is returned and an error message written into
+** pParse.
+*/
+CollSeq *sqlite3LocateCollSeq(Parse *pParse, const char *zName, int nName){
+ sqlite3 *db = pParse->db;
+ u8 enc = ENC(db);
+ u8 initbusy = db->init.busy;
+ CollSeq *pColl;
+
+ pColl = sqlite3FindCollSeq(db, enc, zName, nName, initbusy);
+ if( !initbusy && (!pColl || !pColl->xCmp) ){
+ pColl = sqlite3GetCollSeq(db, pColl, zName, nName);
+ if( !pColl ){
+ if( nName<0 ){
+ nName = strlen(zName);
+ }
+ sqlite3ErrorMsg(pParse, "no such collation sequence: %.*s", nName, zName);
+ pColl = 0;
+ }
+ }
+
+ return pColl;
+}
+
+
+/*
+** Generate code that will increment the schema cookie.
+**
+** The schema cookie is used to determine when the schema for the
+** database changes. After each schema change, the cookie value
+** changes. When a process first reads the schema it records the
+** cookie. Thereafter, whenever it goes to access the database,
+** it checks the cookie to make sure the schema has not changed
+** since it was last read.
+**
+** This plan is not completely bullet-proof. It is possible for
+** the schema to change multiple times and for the cookie to be
+** set back to a prior value. But schema changes are infrequent
+** and the probability of hitting the same cookie value is only
+** 1 chance in 2^32. So we're safe enough.
+*/
+void sqlite3ChangeCookie(sqlite3 *db, Vdbe *v, int iDb){
+ sqlite3VdbeAddOp(v, OP_Integer, db->aDb[iDb].pSchema->schema_cookie+1, 0);
+ sqlite3VdbeAddOp(v, OP_SetCookie, iDb, 0);
+}
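+
+/* For illustration: a cookie mismatch is what ultimately expires a prepared
+** statement, which the application sees as SQLITE_SCHEMA and answers by
+** re-preparing. A rough sketch using the legacy interface (hypothetical
+** handle "db" and SQL text "zSql"):
+**
+**     sqlite3_stmt *pStmt;
+**     sqlite3_prepare(db, zSql, -1, &pStmt, 0);
+**     while( sqlite3_step(pStmt)==SQLITE_ROW ){ ... use the row ... }
+**     if( sqlite3_finalize(pStmt)==SQLITE_SCHEMA ){
+**       ... the schema changed; prepare zSql again and retry ...
+**     }
+*/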
+
+/*
+** Measure the number of characters needed to output the given
+** identifier. The number returned includes any quotes used
+** but does not include the null terminator.
+**
+** The estimate is conservative. It might be larger than what is
+** really needed.
+*/
+static int identLength(const char *z){
+ int n;
+ for(n=0; *z; n++, z++){
+ if( *z=='"' ){ n++; }
+ }
+ return n + 2;
+}
+
+/*
+** Write an identifier onto the end of the given string. Add
+** quote characters as needed.
+*/
+static void identPut(char *z, int *pIdx, char *zSignedIdent){
+ unsigned char *zIdent = (unsigned char*)zSignedIdent;
+ int i, j, needQuote;
+ i = *pIdx;
+ for(j=0; zIdent[j]; j++){
+ if( !isalnum(zIdent[j]) && zIdent[j]!='_' ) break;
+ }
+ needQuote = zIdent[j]!=0 || isdigit(zIdent[0])
+ || sqlite3KeywordCode(zIdent, j)!=TK_ID;
+ if( needQuote ) z[i++] = '"';
+ for(j=0; zIdent[j]; j++){
+ z[i++] = zIdent[j];
+ if( zIdent[j]=='"' ) z[i++] = '"';
+ }
+ if( needQuote ) z[i++] = '"';
+ z[i] = 0;
+ *pIdx = i;
+}
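+
+/* For illustration: given the rules above,
+**
+**     abc1        is emitted as   abc1           (no quoting needed)
+**     1st_col     is emitted as   "1st_col"      (leading digit)
+**     select      is emitted as   "select"       (keyword)
+**     say "hi"    is emitted as   "say ""hi"""   (embedded quote doubled)
+*/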
+
+/*
+** Generate a CREATE TABLE statement appropriate for the given
+** table. Memory to hold the text of the statement is obtained
+** from sqliteMalloc() and must be freed by the calling function.
+*/
+static char *createTableStmt(Table *p, int isTemp){
+ int i, k, n;
+ char *zStmt;
+ char *zSep, *zSep2, *zEnd, *z;
+ Column *pCol;
+ n = 0;
+ for(pCol = p->aCol, i=0; i<p->nCol; i++, pCol++){
+ n += identLength(pCol->zName);
+ z = pCol->zType;
+ if( z ){
+ n += (strlen(z) + 1);
+ }
+ }
+ n += identLength(p->zName);
+ if( n<50 ){
+ zSep = "";
+ zSep2 = ",";
+ zEnd = ")";
+ }else{
+ zSep = "\n ";
+ zSep2 = ",\n ";
+ zEnd = "\n)";
+ }
+ n += 35 + 6*p->nCol;
+ zStmt = sqliteMallocRaw( n );
+ if( zStmt==0 ) return 0;
+ strcpy(zStmt, !OMIT_TEMPDB&&isTemp ? "CREATE TEMP TABLE ":"CREATE TABLE ");
+ k = strlen(zStmt);
+ identPut(zStmt, &k, p->zName);
+ zStmt[k++] = '(';
+ for(pCol=p->aCol, i=0; i<p->nCol; i++, pCol++){
+ strcpy(&zStmt[k], zSep);
+ k += strlen(&zStmt[k]);
+ zSep = zSep2;
+ identPut(zStmt, &k, pCol->zName);
+ if( (z = pCol->zType)!=0 ){
+ zStmt[k++] = ' ';
+ strcpy(&zStmt[k], z);
+ k += strlen(z);
+ }
+ }
+ strcpy(&zStmt[k], zEnd);
+ return zStmt;
+}
+
+/*
+** This routine is called to report the final ")" that terminates
+** a CREATE TABLE statement.
+**
+** The table structure that other action routines have been building
+** is added to the internal hash tables, assuming no errors have
+** occurred.
+**
+** An entry for the table is made in the master table on disk, unless
+** this is a temporary table or db->init.busy==1. When db->init.busy==1
+** it means we are reading the sqlite_master table because we just
+** connected to the database or because the sqlite_master table has
+** recently changed, so the entry for this table already exists in
+** the sqlite_master table. We do not want to create it again.
+**
+** If the pSelect argument is not NULL, it means that this routine
+** was called to create a table generated from a
+** "CREATE TABLE ... AS SELECT ..." statement. The column names of
+** the new table will match the result set of the SELECT.
+*/
+void sqlite3EndTable(
+ Parse *pParse, /* Parse context */
+ Token *pCons, /* The ',' token after the last column defn. */
+ Token *pEnd, /* The final ')' token in the CREATE TABLE */
+ Select *pSelect /* Select from a "CREATE ... AS SELECT" */
+){
+ Table *p;
+ sqlite3 *db = pParse->db;
+ int iDb;
+
+ if( (pEnd==0 && pSelect==0) || pParse->nErr || sqlite3MallocFailed() ) {
+ return;
+ }
+ p = pParse->pNewTable;
+ if( p==0 ) return;
+
+ assert( !db->init.busy || !pSelect );
+
+ iDb = sqlite3SchemaToIndex(db, p->pSchema);
+
+#ifndef SQLITE_OMIT_CHECK
+ /* Resolve names in all CHECK constraint expressions.
+ */
+ if( p->pCheck ){
+ SrcList sSrc; /* Fake SrcList for pParse->pNewTable */
+ NameContext sNC; /* Name context for pParse->pNewTable */
+
+ memset(&sNC, 0, sizeof(sNC));
+ memset(&sSrc, 0, sizeof(sSrc));
+ sSrc.nSrc = 1;
+ sSrc.a[0].zName = p->zName;
+ sSrc.a[0].pTab = p;
+ sSrc.a[0].iCursor = -1;
+ sNC.pParse = pParse;
+ sNC.pSrcList = &sSrc;
+ sNC.isCheck = 1;
+ if( sqlite3ExprResolveNames(&sNC, p->pCheck) ){
+ return;
+ }
+ }
+#endif /* !defined(SQLITE_OMIT_CHECK) */
+
+ /* If the db->init.busy is 1 it means we are reading the SQL off the
+ ** "sqlite_master" or "sqlite_temp_master" table on the disk.
+ ** So do not write to the disk again. Extract the root page number
+ ** for the table from the db->init.newTnum field. (The page number
+ ** should have been put there by the sqliteOpenCb routine.)
+ */
+ if( db->init.busy ){
+ p->tnum = db->init.newTnum;
+ }
+
+ /* If not initializing, then create a record for the new table
+ ** in the SQLITE_MASTER table of the database. The record number
+ ** for the new table entry should already be on the stack.
+ **
+ ** If this is a TEMPORARY table, write the entry into the auxiliary
+ ** file instead of into the main database file.
+ */
+ if( !db->init.busy ){
+ int n;
+ Vdbe *v;
+ char *zType; /* "view" or "table" */
+ char *zType2; /* "VIEW" or "TABLE" */
+ char *zStmt; /* Text of the CREATE TABLE or CREATE VIEW statement */
+
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) return;
+
+ sqlite3VdbeAddOp(v, OP_Close, 0, 0);
+
+ /* Create the rootpage for the new table and push it onto the stack.
+ ** A view has no rootpage, so just push a zero onto the stack for
+ ** views. Initialize zType at the same time.
+ */
+ if( p->pSelect==0 ){
+ /* A regular table */
+ zType = "table";
+ zType2 = "TABLE";
+#ifndef SQLITE_OMIT_VIEW
+ }else{
+ /* A view */
+ zType = "view";
+ zType2 = "VIEW";
+#endif
+ }
+
+ /* If this is a CREATE TABLE xx AS SELECT ..., execute the SELECT
+ ** statement to populate the new table. The root-page number for the
+ ** new table is on the top of the vdbe stack.
+ **
+ ** Once the SELECT has been coded by sqlite3Select(), it is in a
+ ** suitable state to query for the column names and types to be used
+ ** by the new table.
+ **
+ ** A shared-cache write-lock is not required to write to the new table,
+ ** as a schema-lock must have already been obtained to create it. Since
+ ** a schema-lock excludes all other database users, the write-lock would
+ ** be redundant.
+ */
+ if( pSelect ){
+ Table *pSelTab;
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ sqlite3VdbeAddOp(v, OP_OpenWrite, 1, 0);
+ pParse->nTab = 2;
+ sqlite3Select(pParse, pSelect, SRT_Table, 1, 0, 0, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Close, 1, 0);
+ if( pParse->nErr==0 ){
+ pSelTab = sqlite3ResultSetOfSelect(pParse, 0, pSelect);
+ if( pSelTab==0 ) return;
+ assert( p->aCol==0 );
+ p->nCol = pSelTab->nCol;
+ p->aCol = pSelTab->aCol;
+ pSelTab->nCol = 0;
+ pSelTab->aCol = 0;
+ sqlite3DeleteTable(0, pSelTab);
+ }
+ }
+
+ /* Compute the complete text of the CREATE statement */
+ if( pSelect ){
+ zStmt = createTableStmt(p, p->pSchema==pParse->db->aDb[1].pSchema);
+ }else{
+ n = pEnd->z - pParse->sNameToken.z + 1;
+ zStmt = sqlite3MPrintf("CREATE %s %.*s", zType2, n, pParse->sNameToken.z);
+ }
+
+ /* A slot for the record has already been allocated in the
+ ** SQLITE_MASTER table. We just need to update that slot with all
+ ** the information we've collected. The rowid for the preallocated
+ ** slot is the 2nd item on the stack. The top of the stack is the
+ ** root page for the new table (or a 0 if this is a view).
+ */
+ sqlite3NestedParse(pParse,
+ "UPDATE %Q.%s "
+ "SET type='%s', name=%Q, tbl_name=%Q, rootpage=#0, sql=%Q "
+ "WHERE rowid=#1",
+ db->aDb[iDb].zName, SCHEMA_TABLE(iDb),
+ zType,
+ p->zName,
+ p->zName,
+ zStmt
+ );
+ sqliteFree(zStmt);
+ sqlite3ChangeCookie(db, v, iDb);
+
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ /* Check to see if we need to create an sqlite_sequence table for
+ ** keeping track of autoincrement keys.
+ */
+ if( p->autoInc ){
+ Db *pDb = &db->aDb[iDb];
+ if( pDb->pSchema->pSeqTab==0 ){
+ sqlite3NestedParse(pParse,
+ "CREATE TABLE %Q.sqlite_sequence(name,seq)",
+ pDb->zName
+ );
+ }
+ }
+#endif
+
+ /* Reparse everything to update our internal data structures */
+ sqlite3VdbeOp3(v, OP_ParseSchema, iDb, 0,
+ sqlite3MPrintf("tbl_name='%q'",p->zName), P3_DYNAMIC);
+ }
+
+
+ /* Add the table to the in-memory representation of the database.
+ */
+ if( db->init.busy && pParse->nErr==0 ){
+ Table *pOld;
+ FKey *pFKey;
+ Schema *pSchema = p->pSchema;
+ pOld = sqlite3HashInsert(&pSchema->tblHash, p->zName, strlen(p->zName)+1,p);
+ if( pOld ){
+ assert( p==pOld ); /* Malloc must have failed inside HashInsert() */
+ return;
+ }
+#ifndef SQLITE_OMIT_FOREIGN_KEY
+ for(pFKey=p->pFKey; pFKey; pFKey=pFKey->pNextFrom){
+ int nTo = strlen(pFKey->zTo) + 1;
+ pFKey->pNextTo = sqlite3HashFind(&pSchema->aFKey, pFKey->zTo, nTo);
+ sqlite3HashInsert(&pSchema->aFKey, pFKey->zTo, nTo, pFKey);
+ }
+#endif
+ pParse->pNewTable = 0;
+ db->nTable++;
+ db->flags |= SQLITE_InternChanges;
+
+#ifndef SQLITE_OMIT_ALTERTABLE
+ if( !p->pSelect ){
+ const char *zName = (const char *)pParse->sNameToken.z;
+ int nName;
+ assert( !pSelect && pCons && pEnd );
+ if( pCons->z==0 ){
+ pCons = pEnd;
+ }
+ nName = (const char *)pCons->z - zName;
+ p->addColOffset = 13 + sqlite3utf8CharLen(zName, nName);
+ }
+#endif
+ }
+}
+
+#ifndef SQLITE_OMIT_VIEW
+/*
+** The parser calls this routine in order to create a new VIEW
+*/
+void sqlite3CreateView(
+ Parse *pParse, /* The parsing context */
+ Token *pBegin, /* The CREATE token that begins the statement */
+ Token *pName1, /* The token that holds the name of the view */
+ Token *pName2, /* The token that holds the name of the view */
+ Select *pSelect, /* A SELECT statement that will become the new view */
+ int isTemp, /* TRUE for a TEMPORARY view */
+ int noErr /* Suppress error messages if VIEW already exists */
+){
+ Table *p;
+ int n;
+ const unsigned char *z;
+ Token sEnd;
+ DbFixer sFix;
+ Token *pName;
+ int iDb;
+
+ if( pParse->nVar>0 ){
+ sqlite3ErrorMsg(pParse, "parameters are not allowed in views");
+ sqlite3SelectDelete(pSelect);
+ return;
+ }
+ sqlite3StartTable(pParse, pName1, pName2, isTemp, 1, 0, noErr);
+ p = pParse->pNewTable;
+ if( p==0 || pParse->nErr ){
+ sqlite3SelectDelete(pSelect);
+ return;
+ }
+ sqlite3TwoPartName(pParse, pName1, pName2, &pName);
+ iDb = sqlite3SchemaToIndex(pParse->db, p->pSchema);
+ if( sqlite3FixInit(&sFix, pParse, iDb, "view", pName)
+ && sqlite3FixSelect(&sFix, pSelect)
+ ){
+ sqlite3SelectDelete(pSelect);
+ return;
+ }
+
+ /* Make a copy of the entire SELECT statement that defines the view.
+ ** This will force all the Expr.token.z values to be dynamically
+ ** allocated rather than point to the input string - which means that
+ ** they will persist after the current sqlite3_exec() call returns.
+ */
+ p->pSelect = sqlite3SelectDup(pSelect);
+ sqlite3SelectDelete(pSelect);
+ if( sqlite3MallocFailed() ){
+ return;
+ }
+ if( !pParse->db->init.busy ){
+ sqlite3ViewGetColumnNames(pParse, p);
+ }
+
+ /* Locate the end of the CREATE VIEW statement. Make sEnd point to
+ ** the end.
+ */
+ sEnd = pParse->sLastToken;
+ if( sEnd.z[0]!=0 && sEnd.z[0]!=';' ){
+ sEnd.z += sEnd.n;
+ }
+ sEnd.n = 0;
+ n = sEnd.z - pBegin->z;
+ z = (const unsigned char*)pBegin->z;
+ while( n>0 && (z[n-1]==';' || isspace(z[n-1])) ){ n--; }
+ sEnd.z = &z[n-1];
+ sEnd.n = 1;
+
+ /* Use sqlite3EndTable() to add the view to the SQLITE_MASTER table */
+ sqlite3EndTable(pParse, 0, &sEnd, 0);
+ return;
+}
+#endif /* SQLITE_OMIT_VIEW */
+
+#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE)
+/*
+** The Table structure pTable is really a VIEW. Fill in the names of
+** the columns of the view in the pTable structure. Return the number
+** of errors. If an error is seen leave an error message in pParse->zErrMsg.
+*/
+int sqlite3ViewGetColumnNames(Parse *pParse, Table *pTable){
+ Table *pSelTab; /* A fake table from which we get the result set */
+ Select *pSel; /* Copy of the SELECT that implements the view */
+ int nErr = 0; /* Number of errors encountered */
+ int n; /* Temporarily holds the number of cursors assigned */
+
+ assert( pTable );
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( sqlite3VtabCallConnect(pParse, pTable) ){
+ return SQLITE_ERROR;
+ }
+ if( IsVirtual(pTable) ) return 0;
+#endif
+
+#ifndef SQLITE_OMIT_VIEW
+  /* A positive nCol means the column names for this view are
+ ** already known.
+ */
+ if( pTable->nCol>0 ) return 0;
+
+ /* A negative nCol is a special marker meaning that we are currently
+ ** trying to compute the column names. If we enter this routine with
+ ** a negative nCol, it means two or more views form a loop, like this:
+ **
+ ** CREATE VIEW one AS SELECT * FROM two;
+ ** CREATE VIEW two AS SELECT * FROM one;
+ **
+ ** Actually, this error is caught previously and so the following test
+ ** should always fail. But we will leave it in place just to be safe.
+ */
+ if( pTable->nCol<0 ){
+ sqlite3ErrorMsg(pParse, "view %s is circularly defined", pTable->zName);
+ return 1;
+ }
+ assert( pTable->nCol>=0 );
+
+  /* If we get this far, it means we need to compute the column names.
+ ** Note that the call to sqlite3ResultSetOfSelect() will expand any
+ ** "*" elements in the results set of the view and will assign cursors
+ ** to the elements of the FROM clause. But we do not want these changes
+ ** to be permanent. So the computation is done on a copy of the SELECT
+ ** statement that defines the view.
+ */
+ assert( pTable->pSelect );
+ pSel = sqlite3SelectDup(pTable->pSelect);
+ if( pSel ){
+ n = pParse->nTab;
+ sqlite3SrcListAssignCursors(pParse, pSel->pSrc);
+ pTable->nCol = -1;
+ pSelTab = sqlite3ResultSetOfSelect(pParse, 0, pSel);
+ pParse->nTab = n;
+ if( pSelTab ){
+ assert( pTable->aCol==0 );
+ pTable->nCol = pSelTab->nCol;
+ pTable->aCol = pSelTab->aCol;
+ pSelTab->nCol = 0;
+ pSelTab->aCol = 0;
+ sqlite3DeleteTable(0, pSelTab);
+ pTable->pSchema->flags |= DB_UnresetViews;
+ }else{
+ pTable->nCol = 0;
+ nErr++;
+ }
+ sqlite3SelectDelete(pSel);
+ } else {
+ nErr++;
+ }
+#endif /* SQLITE_OMIT_VIEW */
+ return nErr;
+}
+#endif /* !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE) */
+
+#ifndef SQLITE_OMIT_VIEW
+/*
+** Clear the column names from every VIEW in database idx.
+*/
+static void sqliteViewResetAll(sqlite3 *db, int idx){
+ HashElem *i;
+ if( !DbHasProperty(db, idx, DB_UnresetViews) ) return;
+ for(i=sqliteHashFirst(&db->aDb[idx].pSchema->tblHash); i;i=sqliteHashNext(i)){
+ Table *pTab = sqliteHashData(i);
+ if( pTab->pSelect ){
+ sqliteResetColumnNames(pTab);
+ }
+ }
+ DbClearProperty(db, idx, DB_UnresetViews);
+}
+#else
+# define sqliteViewResetAll(A,B)
+#endif /* SQLITE_OMIT_VIEW */
+
+/*
+** This function is called by the VDBE to adjust the internal schema
+** used by SQLite when the btree layer moves a table root page. The
+** root-page of a table or index in database iDb has changed from iFrom
+** to iTo.
+**
+** Ticket #1728: The symbol table might still contain information
+** on tables and/or indices that are in the process of being deleted.
+** If you are unlucky, one of those deleted indices or tables might
+** have the same rootpage number as the real table or index that is
+** being moved. So we cannot stop searching after the first match
+** because the first match might be for one of the deleted indices
+** or tables and not the table/index that is actually being moved.
+** We must continue looping until all tables and indices with
+** rootpage==iFrom have been converted to have a rootpage of iTo
+** in order to be certain that we got the right one.
+*/
+#ifndef SQLITE_OMIT_AUTOVACUUM
+void sqlite3RootPageMoved(Db *pDb, int iFrom, int iTo){
+ HashElem *pElem;
+ Hash *pHash;
+
+ pHash = &pDb->pSchema->tblHash;
+ for(pElem=sqliteHashFirst(pHash); pElem; pElem=sqliteHashNext(pElem)){
+ Table *pTab = sqliteHashData(pElem);
+ if( pTab->tnum==iFrom ){
+ pTab->tnum = iTo;
+ }
+ }
+ pHash = &pDb->pSchema->idxHash;
+ for(pElem=sqliteHashFirst(pHash); pElem; pElem=sqliteHashNext(pElem)){
+ Index *pIdx = sqliteHashData(pElem);
+ if( pIdx->tnum==iFrom ){
+ pIdx->tnum = iTo;
+ }
+ }
+}
+#endif
+
+/*
+** Write code to erase the table with root-page iTable from database iDb.
+** Also write code to modify the sqlite_master table and internal schema
+** if a root-page of another table is moved by the btree-layer whilst
+** erasing iTable (this can happen with an auto-vacuum database).
+*/
+static void destroyRootPage(Parse *pParse, int iTable, int iDb){
+ Vdbe *v = sqlite3GetVdbe(pParse);
+ sqlite3VdbeAddOp(v, OP_Destroy, iTable, iDb);
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ /* OP_Destroy pushes an integer onto the stack. If this integer
+ ** is non-zero, then it is the root page number of a table moved to
+ ** location iTable. The following code modifies the sqlite_master table to
+ ** reflect this.
+ **
+ ** The "#0" in the SQL is a special constant that means whatever value
+ ** is on the top of the stack. See sqlite3RegisterExpr().
+ */
+ sqlite3NestedParse(pParse,
+ "UPDATE %Q.%s SET rootpage=%d WHERE #0 AND rootpage=#0",
+ pParse->db->aDb[iDb].zName, SCHEMA_TABLE(iDb), iTable);
+#endif
+}
+
+/*
+** Write VDBE code to erase table pTab and all associated indices on disk.
+** Code to update the sqlite_master tables and internal schema definitions
+** in case a root-page belonging to another table is moved by the btree layer
+** is also added (this can happen with an auto-vacuum database).
+*/
+static void destroyTable(Parse *pParse, Table *pTab){
+#ifdef SQLITE_OMIT_AUTOVACUUM
+ Index *pIdx;
+ int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ destroyRootPage(pParse, pTab->tnum, iDb);
+ for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
+ destroyRootPage(pParse, pIdx->tnum, iDb);
+ }
+#else
+ /* If the database may be auto-vacuum capable (if SQLITE_OMIT_AUTOVACUUM
+ ** is not defined), then it is important to call OP_Destroy on the
+ ** table and index root-pages in order, starting with the numerically
+ ** largest root-page number. This guarantees that none of the root-pages
+ ** to be destroyed is relocated by an earlier OP_Destroy. i.e. if the
+ ** following were coded:
+ **
+ ** OP_Destroy 4 0
+ ** ...
+ ** OP_Destroy 5 0
+ **
+ ** and root page 5 happened to be the largest root-page number in the
+ ** database, then root page 5 would be moved to page 4 by the
+ ** "OP_Destroy 4 0" opcode. The subsequent "OP_Destroy 5 0" would hit
+ ** a free-list page.
+ */
+ int iTab = pTab->tnum;
+ int iDestroyed = 0;
+
+ while( 1 ){
+ Index *pIdx;
+ int iLargest = 0;
+
+ if( iDestroyed==0 || iTab<iDestroyed ){
+ iLargest = iTab;
+ }
+ for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
+ int iIdx = pIdx->tnum;
+ assert( pIdx->pSchema==pTab->pSchema );
+ if( (iDestroyed==0 || (iIdx<iDestroyed)) && iIdx>iLargest ){
+ iLargest = iIdx;
+ }
+ }
+ if( iLargest==0 ){
+ return;
+ }else{
+ int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ destroyRootPage(pParse, iLargest, iDb);
+ iDestroyed = iLargest;
+ }
+ }
+#endif
+}
+
+/*
+** This routine is called to do the work of a DROP TABLE statement.
+** pName is the name of the table to be dropped.
+*/
+void sqlite3DropTable(Parse *pParse, SrcList *pName, int isView, int noErr){
+ Table *pTab;
+ Vdbe *v;
+ sqlite3 *db = pParse->db;
+ int iDb;
+
+ if( pParse->nErr || sqlite3MallocFailed() ){
+ goto exit_drop_table;
+ }
+ assert( pName->nSrc==1 );
+ pTab = sqlite3LocateTable(pParse, pName->a[0].zName, pName->a[0].zDatabase);
+
+ if( pTab==0 ){
+ if( noErr ){
+ sqlite3ErrorClear(pParse);
+ }
+ goto exit_drop_table;
+ }
+ iDb = sqlite3SchemaToIndex(db, pTab->pSchema);
+ assert( iDb>=0 && iDb<db->nDb );
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ {
+ int code;
+ const char *zTab = SCHEMA_TABLE(iDb);
+ const char *zDb = db->aDb[iDb].zName;
+ const char *zArg2 = 0;
+ if( sqlite3AuthCheck(pParse, SQLITE_DELETE, zTab, 0, zDb)){
+ goto exit_drop_table;
+ }
+ if( isView ){
+ if( !OMIT_TEMPDB && iDb==1 ){
+ code = SQLITE_DROP_TEMP_VIEW;
+ }else{
+ code = SQLITE_DROP_VIEW;
+ }
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ }else if( IsVirtual(pTab) ){
+ if( sqlite3ViewGetColumnNames(pParse, pTab) ){
+ goto exit_drop_table;
+ }
+ code = SQLITE_DROP_VTABLE;
+ zArg2 = pTab->pMod->zName;
+#endif
+ }else{
+ if( !OMIT_TEMPDB && iDb==1 ){
+ code = SQLITE_DROP_TEMP_TABLE;
+ }else{
+ code = SQLITE_DROP_TABLE;
+ }
+ }
+ if( sqlite3AuthCheck(pParse, code, pTab->zName, zArg2, zDb) ){
+ goto exit_drop_table;
+ }
+ if( sqlite3AuthCheck(pParse, SQLITE_DELETE, pTab->zName, 0, zDb) ){
+ goto exit_drop_table;
+ }
+ }
+#endif
+ if( pTab->readOnly || pTab==db->aDb[iDb].pSchema->pSeqTab ){
+ sqlite3ErrorMsg(pParse, "table %s may not be dropped", pTab->zName);
+ goto exit_drop_table;
+ }
+
+#ifndef SQLITE_OMIT_VIEW
+ /* Ensure DROP TABLE is not used on a view, and DROP VIEW is not used
+ ** on a table.
+ */
+ if( isView && pTab->pSelect==0 ){
+ sqlite3ErrorMsg(pParse, "use DROP TABLE to delete table %s", pTab->zName);
+ goto exit_drop_table;
+ }
+ if( !isView && pTab->pSelect ){
+ sqlite3ErrorMsg(pParse, "use DROP VIEW to delete view %s", pTab->zName);
+ goto exit_drop_table;
+ }
+#endif
+
+ /* Generate code to remove the table from the master table
+ ** on disk.
+ */
+ v = sqlite3GetVdbe(pParse);
+ if( v ){
+ Trigger *pTrigger;
+ Db *pDb = &db->aDb[iDb];
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( IsVirtual(pTab) ){
+ Vdbe *v = sqlite3GetVdbe(pParse);
+ if( v ){
+ sqlite3VdbeAddOp(v, OP_VBegin, 0, 0);
+ }
+ }
+#endif
+
+ /* Drop all triggers associated with the table being dropped. Code
+ ** is generated to remove entries from sqlite_master and/or
+ ** sqlite_temp_master if required.
+ */
+ pTrigger = pTab->pTrigger;
+ while( pTrigger ){
+ assert( pTrigger->pSchema==pTab->pSchema ||
+ pTrigger->pSchema==db->aDb[1].pSchema );
+ sqlite3DropTriggerPtr(pParse, pTrigger);
+ pTrigger = pTrigger->pNext;
+ }
+
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ /* Remove any entries of the sqlite_sequence table associated with
+ ** the table being dropped. This is done before the table is dropped
+ ** at the btree level, in case the sqlite_sequence table needs to
+ ** move as a result of the drop (can happen in auto-vacuum mode).
+ */
+ if( pTab->autoInc ){
+ sqlite3NestedParse(pParse,
+ "DELETE FROM %s.sqlite_sequence WHERE name=%Q",
+ pDb->zName, pTab->zName
+ );
+ }
+#endif
+
+ /* Drop all SQLITE_MASTER table and index entries that refer to the
+    ** table. The program loops through the master table and deletes
+    ** every row that refers to a table of the same name as the one being
+    ** dropped. Triggers are handled separately because a trigger can be
+ ** created in the temp database that refers to a table in another
+ ** database.
+ */
+ sqlite3NestedParse(pParse,
+ "DELETE FROM %Q.%s WHERE tbl_name=%Q and type!='trigger'",
+ pDb->zName, SCHEMA_TABLE(iDb), pTab->zName);
+ if( !isView && !IsVirtual(pTab) ){
+ destroyTable(pParse, pTab);
+ }
+
+ /* Remove the table entry from SQLite's internal schema and modify
+ ** the schema cookie.
+ */
+ if( IsVirtual(pTab) ){
+ sqlite3VdbeOp3(v, OP_VDestroy, iDb, 0, pTab->zName, 0);
+ }
+ sqlite3VdbeOp3(v, OP_DropTable, iDb, 0, pTab->zName, 0);
+ sqlite3ChangeCookie(db, v, iDb);
+ }
+ sqliteViewResetAll(db, iDb);
+
+exit_drop_table:
+ sqlite3SrcListDelete(pName);
+}
+
+/*
+** This routine is called to create a new foreign key on the table
+** currently under construction. pFromCol determines which columns
+** in the current table point to the foreign key. If pFromCol==0 then
+** connect the key to the last column inserted. pTo is the name of
+** the table referred to. pToCol is a list of columns in the other
+** table pTo that the foreign key points to. flags contains all
+** information about the conflict resolution algorithms specified
+** in the ON DELETE, ON UPDATE and ON INSERT clauses.
+**
+** An FKey structure is created and added to the table currently
+** under construction in the pParse->pNewTable field. The new FKey
+** is not linked into db->aFKey at this point - that does not happen
+** until sqlite3EndTable().
+**
+** The foreign key is set for IMMEDIATE processing. A subsequent call
+** to sqlite3DeferForeignKey() might change this to DEFERRED.
+*/
+void sqlite3CreateForeignKey(
+ Parse *pParse, /* Parsing context */
+ ExprList *pFromCol, /* Columns in this table that point to other table */
+ Token *pTo, /* Name of the other table */
+ ExprList *pToCol, /* Columns in the other table */
+ int flags /* Conflict resolution algorithms. */
+){
+#ifndef SQLITE_OMIT_FOREIGN_KEY
+ FKey *pFKey = 0;
+ Table *p = pParse->pNewTable;
+ int nByte;
+ int i;
+ int nCol;
+ char *z;
+
+ assert( pTo!=0 );
+ if( p==0 || pParse->nErr || IN_DECLARE_VTAB ) goto fk_end;
+ if( pFromCol==0 ){
+ int iCol = p->nCol-1;
+ if( iCol<0 ) goto fk_end;
+ if( pToCol && pToCol->nExpr!=1 ){
+ sqlite3ErrorMsg(pParse, "foreign key on %s"
+ " should reference only one column of table %T",
+ p->aCol[iCol].zName, pTo);
+ goto fk_end;
+ }
+ nCol = 1;
+ }else if( pToCol && pToCol->nExpr!=pFromCol->nExpr ){
+ sqlite3ErrorMsg(pParse,
+ "number of columns in foreign key does not match the number of "
+ "columns in the referenced table");
+ goto fk_end;
+ }else{
+ nCol = pFromCol->nExpr;
+ }
+ nByte = sizeof(*pFKey) + nCol*sizeof(pFKey->aCol[0]) + pTo->n + 1;
+ if( pToCol ){
+ for(i=0; i<pToCol->nExpr; i++){
+ nByte += strlen(pToCol->a[i].zName) + 1;
+ }
+ }
+ pFKey = sqliteMalloc( nByte );
+ if( pFKey==0 ) goto fk_end;
+ pFKey->pFrom = p;
+ pFKey->pNextFrom = p->pFKey;
+ z = (char*)&pFKey[1];
+ pFKey->aCol = (struct sColMap*)z;
+ z += sizeof(struct sColMap)*nCol;
+ pFKey->zTo = z;
+ memcpy(z, pTo->z, pTo->n);
+ z[pTo->n] = 0;
+ z += pTo->n+1;
+ pFKey->pNextTo = 0;
+ pFKey->nCol = nCol;
+ if( pFromCol==0 ){
+ pFKey->aCol[0].iFrom = p->nCol-1;
+ }else{
+ for(i=0; i<nCol; i++){
+ int j;
+ for(j=0; j<p->nCol; j++){
+ if( sqlite3StrICmp(p->aCol[j].zName, pFromCol->a[i].zName)==0 ){
+ pFKey->aCol[i].iFrom = j;
+ break;
+ }
+ }
+ if( j>=p->nCol ){
+ sqlite3ErrorMsg(pParse,
+ "unknown column \"%s\" in foreign key definition",
+ pFromCol->a[i].zName);
+ goto fk_end;
+ }
+ }
+ }
+ if( pToCol ){
+ for(i=0; i<nCol; i++){
+ int n = strlen(pToCol->a[i].zName);
+ pFKey->aCol[i].zCol = z;
+ memcpy(z, pToCol->a[i].zName, n);
+ z[n] = 0;
+ z += n+1;
+ }
+ }
+ pFKey->isDeferred = 0;
+ pFKey->deleteConf = flags & 0xff;
+ pFKey->updateConf = (flags >> 8 ) & 0xff;
+ pFKey->insertConf = (flags >> 16 ) & 0xff;
+
+ /* Link the foreign key to the table as the last step.
+ */
+ p->pFKey = pFKey;
+ pFKey = 0;
+
+fk_end:
+ sqliteFree(pFKey);
+#endif /* !defined(SQLITE_OMIT_FOREIGN_KEY) */
+ sqlite3ExprListDelete(pFromCol);
+ sqlite3ExprListDelete(pToCol);
+}
+
+/*
+** This routine is called when an INITIALLY IMMEDIATE or INITIALLY DEFERRED
+** clause is seen as part of a foreign key definition. The isDeferred
+** parameter is 1 for INITIALLY DEFERRED and 0 for INITIALLY IMMEDIATE.
+** The behavior of the most recently created foreign key is adjusted
+** accordingly.
+*/
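+/*
+** For example (hypothetical schema), the column constraint
+**
+**     REFERENCES parent(y) DEFERRABLE INITIALLY DEFERRED
+**
+** causes the parser to invoke this routine with isDeferred==1, setting
+** the isDeferred flag on the foreign key created just before it.
+*/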
+void sqlite3DeferForeignKey(Parse *pParse, int isDeferred){
+#ifndef SQLITE_OMIT_FOREIGN_KEY
+ Table *pTab;
+ FKey *pFKey;
+ if( (pTab = pParse->pNewTable)==0 || (pFKey = pTab->pFKey)==0 ) return;
+ pFKey->isDeferred = isDeferred;
+#endif
+}
+
+/*
+** Generate code that will erase and refill index *pIdx. This is
+** used to initialize a newly created index or to recompute the
+** content of an index in response to a REINDEX command.
+**
+** If memRootPage is not negative, it means that the index is newly
+** created. The memory cell specified by memRootPage contains the
+** root page number of the index. If memRootPage is negative, then
+** the index already exists and must be cleared before being refilled and
+** the root page number of the index is taken from pIndex->tnum.
+*/
+static void sqlite3RefillIndex(Parse *pParse, Index *pIndex, int memRootPage){
+ Table *pTab = pIndex->pTable; /* The table that is indexed */
+ int iTab = pParse->nTab; /* Btree cursor used for pTab */
+ int iIdx = pParse->nTab+1; /* Btree cursor used for pIndex */
+ int addr1; /* Address of top of loop */
+ int tnum; /* Root page of index */
+ Vdbe *v; /* Generate code into this virtual machine */
+ KeyInfo *pKey; /* KeyInfo for index */
+ int iDb = sqlite3SchemaToIndex(pParse->db, pIndex->pSchema);
+
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ if( sqlite3AuthCheck(pParse, SQLITE_REINDEX, pIndex->zName, 0,
+ pParse->db->aDb[iDb].zName ) ){
+ return;
+ }
+#endif
+
+ /* Require a write-lock on the table to perform this operation */
+ sqlite3TableLock(pParse, iDb, pTab->tnum, 1, pTab->zName);
+
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) return;
+ if( memRootPage>=0 ){
+ sqlite3VdbeAddOp(v, OP_MemLoad, memRootPage, 0);
+ tnum = 0;
+ }else{
+ tnum = pIndex->tnum;
+ sqlite3VdbeAddOp(v, OP_Clear, tnum, iDb);
+ }
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ pKey = sqlite3IndexKeyinfo(pParse, pIndex);
+ sqlite3VdbeOp3(v, OP_OpenWrite, iIdx, tnum, (char *)pKey, P3_KEYINFO_HANDOFF);
+ sqlite3OpenTable(pParse, iTab, iDb, pTab, OP_OpenRead);
+ addr1 = sqlite3VdbeAddOp(v, OP_Rewind, iTab, 0);
+ sqlite3GenerateIndexKey(v, pIndex, iTab);
+ if( pIndex->onError!=OE_None ){
+ int curaddr = sqlite3VdbeCurrentAddr(v);
+ int addr2 = curaddr+4;
+ sqlite3VdbeChangeP2(v, curaddr-1, addr2);
+ sqlite3VdbeAddOp(v, OP_Rowid, iTab, 0);
+ sqlite3VdbeAddOp(v, OP_AddImm, 1, 0);
+ sqlite3VdbeAddOp(v, OP_IsUnique, iIdx, addr2);
+ sqlite3VdbeOp3(v, OP_Halt, SQLITE_CONSTRAINT, OE_Abort,
+ "indexed columns are not unique", P3_STATIC);
+ assert( addr2==sqlite3VdbeCurrentAddr(v) );
+ }
+ sqlite3VdbeAddOp(v, OP_IdxInsert, iIdx, 0);
+ sqlite3VdbeAddOp(v, OP_Next, iTab, addr1+1);
+ sqlite3VdbeJumpHere(v, addr1);
+ sqlite3VdbeAddOp(v, OP_Close, iTab, 0);
+ sqlite3VdbeAddOp(v, OP_Close, iIdx, 0);
+}
+
+/*
+** Create a new index for an SQL table. pName1.pName2 is the name of the index
+** and pTblName is the name of the table that is to be indexed. Both will
+** be NULL for a primary key or an index that is created to satisfy a
+** UNIQUE constraint. If pTblName is NULL, use pParse->pNewTable
+** as the table to be indexed. pParse->pNewTable is a table that is
+** currently being constructed by a CREATE TABLE statement.
+**
+** pList is a list of columns to be indexed. pList will be NULL if this
+** is a primary key or unique-constraint on the most recent column added
+** to the table currently under construction.
+*/
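+/*
+** To illustrate (all names are hypothetical): an explicit statement like
+**
+**     CREATE INDEX i1 ON t1(a, b DESC);
+**
+** arrives here with pName1/pName2 giving "i1", pTblName naming "t1", and
+** pList holding the two indexed columns. In contrast, a column-level
+** constraint such as
+**
+**     CREATE TABLE t1(a TEXT UNIQUE);
+**
+** arrives with pTblName==0 and pList==0, so the index is built on the
+** column most recently added to pParse->pNewTable.
+*/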
+void sqlite3CreateIndex(
+ Parse *pParse, /* All information about this parse */
+ Token *pName1, /* First part of index name. May be NULL */
+ Token *pName2, /* Second part of index name. May be NULL */
+ SrcList *pTblName, /* Table to index. Use pParse->pNewTable if 0 */
+ ExprList *pList, /* A list of columns to be indexed */
+ int onError, /* OE_Abort, OE_Ignore, OE_Replace, or OE_None */
+ Token *pStart, /* The CREATE token that begins this CREATE INDEX statement */
+ Token *pEnd, /* The ")" that closes the CREATE INDEX statement */
+ int sortOrder, /* Sort order of primary key when pList==NULL */
+ int ifNotExist /* Omit error if index already exists */
+){
+ Table *pTab = 0; /* Table to be indexed */
+ Index *pIndex = 0; /* The index to be created */
+ char *zName = 0; /* Name of the index */
+ int nName; /* Number of characters in zName */
+ int i, j;
+ Token nullId; /* Fake token for an empty ID list */
+ DbFixer sFix; /* For assigning database names to pTable */
+ int sortOrderMask; /* 1 to honor DESC in index. 0 to ignore. */
+ sqlite3 *db = pParse->db;
+ Db *pDb; /* The database that contains the table to be indexed */
+ int iDb; /* Index of the database that is being written */
+ Token *pName = 0; /* Unqualified name of the index to create */
+ struct ExprList_item *pListItem; /* For looping over pList */
+ int nCol;
+ int nExtra = 0;
+ char *zExtra;
+
+ if( pParse->nErr || sqlite3MallocFailed() || IN_DECLARE_VTAB ){
+ goto exit_create_index;
+ }
+
+ /*
+ ** Find the table that is to be indexed. Return early if not found.
+ */
+ if( pTblName!=0 ){
+
+ /* Use the two-part index name to determine the database
+ ** to search for the table. 'Fix' the table name to this db
+ ** before looking up the table.
+ */
+ assert( pName1 && pName2 );
+ iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName);
+ if( iDb<0 ) goto exit_create_index;
+
+#ifndef SQLITE_OMIT_TEMPDB
+ /* If the index name was unqualified, check if the table
+ ** is a temp table. If so, set the database to 1.
+ */
+ pTab = sqlite3SrcListLookup(pParse, pTblName);
+ if( pName2 && pName2->n==0 && pTab && pTab->pSchema==db->aDb[1].pSchema ){
+ iDb = 1;
+ }
+#endif
+
+ if( sqlite3FixInit(&sFix, pParse, iDb, "index", pName) &&
+ sqlite3FixSrcList(&sFix, pTblName)
+ ){
+ /* Because the parser constructs pTblName from a single identifier,
+ ** sqlite3FixSrcList can never fail. */
+ assert(0);
+ }
+ pTab = sqlite3LocateTable(pParse, pTblName->a[0].zName,
+ pTblName->a[0].zDatabase);
+ if( !pTab ) goto exit_create_index;
+ assert( db->aDb[iDb].pSchema==pTab->pSchema );
+ }else{
+ assert( pName==0 );
+ pTab = pParse->pNewTable;
+ if( !pTab ) goto exit_create_index;
+ iDb = sqlite3SchemaToIndex(db, pTab->pSchema);
+ }
+ pDb = &db->aDb[iDb];
+
+ if( pTab==0 || pParse->nErr ) goto exit_create_index;
+ if( pTab->readOnly ){
+ sqlite3ErrorMsg(pParse, "table %s may not be indexed", pTab->zName);
+ goto exit_create_index;
+ }
+#ifndef SQLITE_OMIT_VIEW
+ if( pTab->pSelect ){
+ sqlite3ErrorMsg(pParse, "views may not be indexed");
+ goto exit_create_index;
+ }
+#endif
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( IsVirtual(pTab) ){
+ sqlite3ErrorMsg(pParse, "virtual tables may not be indexed");
+ goto exit_create_index;
+ }
+#endif
+
+ /*
+ ** Find the name of the index. Make sure there is not already another
+ ** index or table with the same name.
+ **
+ ** Exception: If we are reading the names of permanent indices from the
+ ** sqlite_master table (because some other process changed the schema) and
+ ** one of the index names collides with the name of a temporary table or
+ ** index, then we will continue to process this index.
+ **
+ ** If pName==0 it means that we are
+ ** dealing with a primary key or UNIQUE constraint. We have to invent our
+ ** own name.
+ */
+ if( pName ){
+ zName = sqlite3NameFromToken(pName);
+ if( SQLITE_OK!=sqlite3ReadSchema(pParse) ) goto exit_create_index;
+ if( zName==0 ) goto exit_create_index;
+ if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){
+ goto exit_create_index;
+ }
+ if( !db->init.busy ){
+ if( SQLITE_OK!=sqlite3ReadSchema(pParse) ) goto exit_create_index;
+ if( sqlite3FindIndex(db, zName, pDb->zName)!=0 ){
+ if( !ifNotExist ){
+ sqlite3ErrorMsg(pParse, "index %s already exists", zName);
+ }
+ goto exit_create_index;
+ }
+ if( sqlite3FindTable(db, zName, 0)!=0 ){
+ sqlite3ErrorMsg(pParse, "there is already a table named %s", zName);
+ goto exit_create_index;
+ }
+ }
+ }else{
+ char zBuf[30];
+ int n;
+ Index *pLoop;
+ for(pLoop=pTab->pIndex, n=1; pLoop; pLoop=pLoop->pNext, n++){}
+ sprintf(zBuf,"_%d",n);
+ zName = 0;
+ sqlite3SetString(&zName, "sqlite_autoindex_", pTab->zName, zBuf, (char*)0);
+ if( zName==0 ) goto exit_create_index;
+ }
+
+ /* Check for authorization to create an index.
+ */
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ {
+ const char *zDb = pDb->zName;
+ if( sqlite3AuthCheck(pParse, SQLITE_INSERT, SCHEMA_TABLE(iDb), 0, zDb) ){
+ goto exit_create_index;
+ }
+ i = SQLITE_CREATE_INDEX;
+ if( !OMIT_TEMPDB && iDb==1 ) i = SQLITE_CREATE_TEMP_INDEX;
+ if( sqlite3AuthCheck(pParse, i, zName, pTab->zName, zDb) ){
+ goto exit_create_index;
+ }
+ }
+#endif
+
+ /* If pList==0, it means this routine was called to make a primary
+ ** key out of the last column added to the table under construction.
+ ** So create a fake list to simulate this.
+ */
+ if( pList==0 ){
+ nullId.z = (u8*)pTab->aCol[pTab->nCol-1].zName;
+ nullId.n = strlen((char*)nullId.z);
+ pList = sqlite3ExprListAppend(0, 0, &nullId);
+ if( pList==0 ) goto exit_create_index;
+ pList->a[0].sortOrder = sortOrder;
+ }
+
+ /* Figure out how many bytes of space are required to store explicitly
+ ** specified collation sequence names.
+ */
+ for(i=0; i<pList->nExpr; i++){
+ Expr *pExpr = pList->a[i].pExpr;
+ if( pExpr ){
+ nExtra += (1 + strlen(pExpr->pColl->zName));
+ }
+ }
+
+ /*
+ ** Allocate the index structure.
+ */
+ nName = strlen(zName);
+ nCol = pList->nExpr;
+ pIndex = sqliteMalloc(
+ sizeof(Index) + /* Index structure */
+ sizeof(int)*nCol + /* Index.aiColumn */
+ sizeof(int)*(nCol+1) + /* Index.aiRowEst */
+ sizeof(char *)*nCol + /* Index.azColl */
+ sizeof(u8)*nCol + /* Index.aSortOrder */
+ nName + 1 + /* Index.zName */
+ nExtra /* Collation sequence names */
+ );
+ if( sqlite3MallocFailed() ) goto exit_create_index;
+ pIndex->azColl = (char**)(&pIndex[1]);
+ pIndex->aiColumn = (int *)(&pIndex->azColl[nCol]);
+ pIndex->aiRowEst = (unsigned *)(&pIndex->aiColumn[nCol]);
+ pIndex->aSortOrder = (u8 *)(&pIndex->aiRowEst[nCol+1]);
+ pIndex->zName = (char *)(&pIndex->aSortOrder[nCol]);
+ zExtra = (char *)(&pIndex->zName[nName+1]);
+ strcpy(pIndex->zName, zName);
+ pIndex->pTable = pTab;
+ pIndex->nColumn = pList->nExpr;
+ pIndex->onError = onError;
+ pIndex->autoIndex = pName==0;
+ pIndex->pSchema = db->aDb[iDb].pSchema;
+
+ /* Check to see if we should honor DESC requests on index columns
+ */
+ if( pDb->pSchema->file_format>=4 ){
+ sortOrderMask = -1; /* Honor DESC */
+ }else{
+ sortOrderMask = 0; /* Ignore DESC */
+ }
+
+ /* Scan the names of the columns of the table to be indexed and
+ ** load the column indices into the Index structure. Report an error
+ ** if any column is not found.
+ */
+ for(i=0, pListItem=pList->a; i<pList->nExpr; i++, pListItem++){
+ const char *zColName = pListItem->zName;
+ Column *pTabCol;
+ int requestedSortOrder;
+ char *zColl; /* Collation sequence */
+
+ for(j=0, pTabCol=pTab->aCol; j<pTab->nCol; j++, pTabCol++){
+ if( sqlite3StrICmp(zColName, pTabCol->zName)==0 ) break;
+ }
+ if( j>=pTab->nCol ){
+ sqlite3ErrorMsg(pParse, "table %s has no column named %s",
+ pTab->zName, zColName);
+ goto exit_create_index;
+ }
+ pIndex->aiColumn[i] = j;
+ if( pListItem->pExpr ){
+ assert( pListItem->pExpr->pColl );
+ zColl = zExtra;
+ strcpy(zExtra, pListItem->pExpr->pColl->zName);
+ zExtra += (strlen(zColl) + 1);
+ }else{
+ zColl = pTab->aCol[j].zColl;
+ if( !zColl ){
+ zColl = db->pDfltColl->zName;
+ }
+ }
+ if( !db->init.busy && !sqlite3LocateCollSeq(pParse, zColl, -1) ){
+ goto exit_create_index;
+ }
+ pIndex->azColl[i] = zColl;
+ requestedSortOrder = pListItem->sortOrder & sortOrderMask;
+ pIndex->aSortOrder[i] = requestedSortOrder;
+ }
+ sqlite3DefaultRowEst(pIndex);
+
+ if( pTab==pParse->pNewTable ){
+ /* This routine has been called to create an automatic index as a
+ ** result of a PRIMARY KEY or UNIQUE clause on a column definition, or
+ ** a PRIMARY KEY or UNIQUE clause following the column definitions.
+ ** i.e. one of:
+ **
+ ** CREATE TABLE t(x PRIMARY KEY, y);
+ ** CREATE TABLE t(x, y, UNIQUE(x, y));
+ **
+ ** Either way, check to see if the table already has such an index. If
+ ** so, don't bother creating this one. This only applies to
+ ** automatically created indices. Users can do as they wish with
+ ** explicit indices.
+ */
+ Index *pIdx;
+ for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
+ int k;
+ assert( pIdx->onError!=OE_None );
+ assert( pIdx->autoIndex );
+ assert( pIndex->onError!=OE_None );
+
+ if( pIdx->nColumn!=pIndex->nColumn ) continue;
+ for(k=0; k<pIdx->nColumn; k++){
+ const char *z1 = pIdx->azColl[k];
+ const char *z2 = pIndex->azColl[k];
+ if( pIdx->aiColumn[k]!=pIndex->aiColumn[k] ) break;
+ if( pIdx->aSortOrder[k]!=pIndex->aSortOrder[k] ) break;
+ if( z1!=z2 && sqlite3StrICmp(z1, z2) ) break;
+ }
+ if( k==pIdx->nColumn ){
+ if( pIdx->onError!=pIndex->onError ){
+ /* This constraint creates the same index as a previous
+ ** constraint specified somewhere in the CREATE TABLE statement.
+ ** However the ON CONFLICT clauses are different. If both this
+ ** constraint and the previous equivalent constraint have explicit
+ ** ON CONFLICT clauses this is an error. Otherwise, use the
+ ** explicitly specified behaviour for the index.
+ */
+ if( !(pIdx->onError==OE_Default || pIndex->onError==OE_Default) ){
+ sqlite3ErrorMsg(pParse,
+ "conflicting ON CONFLICT clauses specified", 0);
+ }
+ if( pIdx->onError==OE_Default ){
+ pIdx->onError = pIndex->onError;
+ }
+ }
+ goto exit_create_index;
+ }
+ }
+ }
+
+ /* Link the new Index structure to its table and to the other
+ ** in-memory database structures.
+ */
+ if( db->init.busy ){
+ Index *p;
+ p = sqlite3HashInsert(&pIndex->pSchema->idxHash,
+ pIndex->zName, strlen(pIndex->zName)+1, pIndex);
+ if( p ){
+ assert( p==pIndex ); /* Malloc must have failed */
+ goto exit_create_index;
+ }
+ db->flags |= SQLITE_InternChanges;
+ if( pTblName!=0 ){
+ pIndex->tnum = db->init.newTnum;
+ }
+ }
+
+ /* If the db->init.busy is 0 then create the index on disk. This
+ ** involves writing the index into the master table and filling in the
+ ** index with the current table contents.
+ **
+ ** The db->init.busy is 0 when the user first enters a CREATE INDEX
+ ** command. db->init.busy is 1 when a database is opened and
+ ** CREATE INDEX statements are read out of the master table. In
+ ** the latter case the index already exists on disk, which is why
+ ** we don't want to recreate it.
+ **
+ ** If pTblName==0 it means this index is generated as a primary key
+ ** or UNIQUE constraint of a CREATE TABLE statement. Since the table
+ ** has just been created, it contains no data and the index initialization
+ ** step can be skipped.
+ */
+ else if( db->init.busy==0 ){
+ Vdbe *v;
+ char *zStmt;
+ int iMem = pParse->nMem++;
+
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) goto exit_create_index;
+
+
+ /* Create the rootpage for the index
+ */
+ sqlite3BeginWriteOperation(pParse, 1, iDb);
+ sqlite3VdbeAddOp(v, OP_CreateIndex, iDb, 0);
+ sqlite3VdbeAddOp(v, OP_MemStore, iMem, 0);
+
+ /* Gather the complete text of the CREATE INDEX statement into
+ ** the zStmt variable
+ */
+ if( pStart && pEnd ){
+ /* A named index with an explicit CREATE INDEX statement */
+ zStmt = sqlite3MPrintf("CREATE%s INDEX %.*s",
+ onError==OE_None ? "" : " UNIQUE",
+ pEnd->z - pName->z + 1,
+ pName->z);
+ }else{
+ /* An automatic index created by a PRIMARY KEY or UNIQUE constraint */
+ /* zStmt = sqlite3MPrintf(""); */
+ zStmt = 0;
+ }
+
+ /* Add an entry in sqlite_master for this index
+ */
+ sqlite3NestedParse(pParse,
+ "INSERT INTO %Q.%s VALUES('index',%Q,%Q,#0,%Q);",
+ db->aDb[iDb].zName, SCHEMA_TABLE(iDb),
+ pIndex->zName,
+ pTab->zName,
+ zStmt
+ );
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ sqliteFree(zStmt);
+
+ /* Fill the index with data and reparse the schema. Code an OP_Expire
+ ** to invalidate all pre-compiled statements.
+ */
+ if( pTblName ){
+ sqlite3RefillIndex(pParse, pIndex, iMem);
+ sqlite3ChangeCookie(db, v, iDb);
+ sqlite3VdbeOp3(v, OP_ParseSchema, iDb, 0,
+ sqlite3MPrintf("name='%q'", pIndex->zName), P3_DYNAMIC);
+ sqlite3VdbeAddOp(v, OP_Expire, 0, 0);
+ }
+ }
+
+ /* When adding an index to the list of indices for a table, make
+ ** sure all indices labeled OE_Replace come after all those labeled
+ ** OE_Ignore. This is necessary for the correct operation of UPDATE
+ ** and INSERT.
+ */
+ if( db->init.busy || pTblName==0 ){
+ if( onError!=OE_Replace || pTab->pIndex==0
+ || pTab->pIndex->onError==OE_Replace){
+ pIndex->pNext = pTab->pIndex;
+ pTab->pIndex = pIndex;
+ }else{
+ Index *pOther = pTab->pIndex;
+ while( pOther->pNext && pOther->pNext->onError!=OE_Replace ){
+ pOther = pOther->pNext;
+ }
+ pIndex->pNext = pOther->pNext;
+ pOther->pNext = pIndex;
+ }
+ pIndex = 0;
+ }
+
+ /* Clean up before exiting */
+exit_create_index:
+ if( pIndex ){
+ freeIndex(pIndex);
+ }
+ sqlite3ExprListDelete(pList);
+ sqlite3SrcListDelete(pTblName);
+ sqliteFree(zName);
+ return;
+}
+
+/*
+** Generate code to make sure the file format number is at least minFormat.
+** The generated code will increase the file format number if necessary.
+*/
+void sqlite3MinimumFileFormat(Parse *pParse, int iDb, int minFormat){
+ Vdbe *v;
+ v = sqlite3GetVdbe(pParse);
+ if( v ){
+ sqlite3VdbeAddOp(v, OP_ReadCookie, iDb, 1);
+ sqlite3VdbeAddOp(v, OP_Integer, minFormat, 0);
+ sqlite3VdbeAddOp(v, OP_Ge, 0, sqlite3VdbeCurrentAddr(v)+3);
+ sqlite3VdbeAddOp(v, OP_Integer, minFormat, 0);
+ sqlite3VdbeAddOp(v, OP_SetCookie, iDb, 1);
+ }
+}
+
+/*
+** Fill the Index.aiRowEst[] array with default information - information
+** to be used when we have not run the ANALYZE command.
+**
+** aiRowEst[0] is supposed to contain the number of elements in the index.
+** Since we do not know, guess 1 million. aiRowEst[1] is an estimate of the
+** number of rows in the table that match any particular value of the
+** first column of the index. aiRowEst[2] is an estimate of the number
+** of rows that match any particular combination of the first 2 columns
+** of the index. And so forth. It must always be the case that
+**
+** aiRowEst[N]<=aiRowEst[N-1]
+** aiRowEst[N]>=1
+**
+** Apart from that, we have little to go on besides intuition as to
+** how aiRowEst[] should be initialized. The numbers generated here
+** are based on typical values found in actual indices.
+*/
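+/*
+** Worked example (assuming the default values coded below): for a
+** three-column, non-unique index the loop fills in
+**
+**     aiRowEst[0] = 1000000     (estimated entries in the index)
+**     aiRowEst[1] = 10          (rows per value of column 1)
+**     aiRowEst[2] = 9           (rows per combination of columns 1,2)
+**     aiRowEst[3] = 8
+**
+** A UNIQUE index would instead end with aiRowEst[3] = 1.
+*/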
+void sqlite3DefaultRowEst(Index *pIdx){
+ unsigned *a = pIdx->aiRowEst;
+ int i;
+ assert( a!=0 );
+ a[0] = 1000000;
+ for(i=pIdx->nColumn; i>=5; i--){
+ a[i] = 5;
+ }
+ while( i>=1 ){
+ a[i] = 11 - i;
+ i--;
+ }
+ if( pIdx->onError!=OE_None ){
+ a[pIdx->nColumn] = 1;
+ }
+}
+
+/*
+** This routine will drop an existing named index. This routine
+** implements the DROP INDEX statement.
+*/
+void sqlite3DropIndex(Parse *pParse, SrcList *pName, int ifExists){
+ Index *pIndex;
+ Vdbe *v;
+ sqlite3 *db = pParse->db;
+ int iDb;
+
+ if( pParse->nErr || sqlite3MallocFailed() ){
+ goto exit_drop_index;
+ }
+ assert( pName->nSrc==1 );
+ if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
+ goto exit_drop_index;
+ }
+ pIndex = sqlite3FindIndex(db, pName->a[0].zName, pName->a[0].zDatabase);
+ if( pIndex==0 ){
+ if( !ifExists ){
+ sqlite3ErrorMsg(pParse, "no such index: %S", pName, 0);
+ }
+ pParse->checkSchema = 1;
+ goto exit_drop_index;
+ }
+ if( pIndex->autoIndex ){
+ sqlite3ErrorMsg(pParse, "index associated with UNIQUE "
+ "or PRIMARY KEY constraint cannot be dropped", 0);
+ goto exit_drop_index;
+ }
+ iDb = sqlite3SchemaToIndex(db, pIndex->pSchema);
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ {
+ int code = SQLITE_DROP_INDEX;
+ Table *pTab = pIndex->pTable;
+ const char *zDb = db->aDb[iDb].zName;
+ const char *zTab = SCHEMA_TABLE(iDb);
+ if( sqlite3AuthCheck(pParse, SQLITE_DELETE, zTab, 0, zDb) ){
+ goto exit_drop_index;
+ }
+ if( !OMIT_TEMPDB && iDb ) code = SQLITE_DROP_TEMP_INDEX;
+ if( sqlite3AuthCheck(pParse, code, pIndex->zName, pTab->zName, zDb) ){
+ goto exit_drop_index;
+ }
+ }
+#endif
+
+ /* Generate code to remove the index and its entry in the master table */
+ v = sqlite3GetVdbe(pParse);
+ if( v ){
+ sqlite3NestedParse(pParse,
+ "DELETE FROM %Q.%s WHERE name=%Q",
+ db->aDb[iDb].zName, SCHEMA_TABLE(iDb),
+ pIndex->zName
+ );
+ sqlite3ChangeCookie(db, v, iDb);
+ destroyRootPage(pParse, pIndex->tnum, iDb);
+ sqlite3VdbeOp3(v, OP_DropIndex, iDb, 0, pIndex->zName, 0);
+ }
+
+exit_drop_index:
+ sqlite3SrcListDelete(pName);
+}
+
+/*
+** ppArray points into a structure where there is an array pointer
+** followed by two integers. The first integer is the
+** number of elements in the structure array. The second integer
+** is the number of allocated slots in the array.
+**
+** In other words, the structure looks something like this:
+**
+** struct Example1 {
+** struct subElem *aEntry;
+** int nEntry;
+** int nAlloc;
+** }
+**
+** The nEntry and nAlloc integers are located immediately after the aEntry
+** pointer that ppArray points to, as in Example1 above.
+**
+** This routine allocates a new slot in the array, zeros it out,
+** and returns its index. If malloc fails a negative number is returned.
+**
+** szEntry is the size in bytes of a single array entry. initSize is the
+** number of array entries allocated on the initial allocation.
+*/
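+/*
+** Illustrative use, based on the hypothetical Example1 structure above
+** (assuming struct subElem is a complete type):
+**
+**     struct Example1 x = {0, 0, 0};
+**     int i = sqlite3ArrayAllocate((void**)&x.aEntry, sizeof(x.aEntry[0]), 5);
+**     if( i>=0 ){
+**       ...  x.aEntry[i] is a zeroed slot and x.nEntry has grown by one  ...
+**     }
+**
+** sqlite3IdListAppend() below uses this helper in the same way.
+*/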
+int sqlite3ArrayAllocate(void **ppArray, int szEntry, int initSize){
+ char *p;
+ int *an = (int*)&ppArray[1];
+ if( an[0]>=an[1] ){
+ void *pNew;
+ int newSize;
+ newSize = an[1]*2 + initSize;
+ pNew = sqliteRealloc(*ppArray, newSize*szEntry);
+ if( pNew==0 ){
+ return -1;
+ }
+ an[1] = newSize;
+ *ppArray = pNew;
+ }
+ p = *ppArray;
+ memset(&p[an[0]*szEntry], 0, szEntry);
+ return an[0]++;
+}
+
+/*
+** Append a new element to the given IdList. Create a new IdList if
+** need be.
+**
+** A new IdList is returned, or NULL if malloc() fails.
+*/
+IdList *sqlite3IdListAppend(IdList *pList, Token *pToken){
+ int i;
+ if( pList==0 ){
+ pList = sqliteMalloc( sizeof(IdList) );
+ if( pList==0 ) return 0;
+ pList->nAlloc = 0;
+ }
+ i = sqlite3ArrayAllocate((void**)&pList->a, sizeof(pList->a[0]), 5);
+ if( i<0 ){
+ sqlite3IdListDelete(pList);
+ return 0;
+ }
+ pList->a[i].zName = sqlite3NameFromToken(pToken);
+ return pList;
+}
+
+/*
+** Delete an IdList.
+*/
+void sqlite3IdListDelete(IdList *pList){
+ int i;
+ if( pList==0 ) return;
+ for(i=0; i<pList->nId; i++){
+ sqliteFree(pList->a[i].zName);
+ }
+ sqliteFree(pList->a);
+ sqliteFree(pList);
+}
+
+/*
+** Return the index in pList of the identifier named zId. Return -1
+** if not found.
+*/
+int sqlite3IdListIndex(IdList *pList, const char *zName){
+ int i;
+ if( pList==0 ) return -1;
+ for(i=0; i<pList->nId; i++){
+ if( sqlite3StrICmp(pList->a[i].zName, zName)==0 ) return i;
+ }
+ return -1;
+}
+
+/*
+** Append a new table name to the given SrcList. Create a new SrcList if
+** need be. A new entry is created in the SrcList even if pToken is NULL.
+**
+** A new SrcList is returned, or NULL if malloc() fails.
+**
+** If pDatabase is not null, it means that the table has an optional
+** database name prefix. Like this: "database.table". The pDatabase
+** points to the table name and the pTable points to the database name.
+** The SrcList.a[].zName field is filled with the table name which might
+** come from pTable (if pDatabase is NULL) or from pDatabase.
+** SrcList.a[].zDatabase is filled with the database name from pTable,
+** or with NULL if no database is specified.
+**
+** In other words, if called like this:
+**
+** sqlite3SrcListAppend(A,B,0);
+**
+** Then B is a table name and the database name is unspecified. If called
+** like this:
+**
+** sqlite3SrcListAppend(A,B,C);
+**
+** Then C is the table name and B is the database name.
+*/
+SrcList *sqlite3SrcListAppend(SrcList *pList, Token *pTable, Token *pDatabase){
+ struct SrcList_item *pItem;
+ if( pList==0 ){
+ pList = sqliteMalloc( sizeof(SrcList) );
+ if( pList==0 ) return 0;
+ pList->nAlloc = 1;
+ }
+ if( pList->nSrc>=pList->nAlloc ){
+ SrcList *pNew;
+ pList->nAlloc *= 2;
+ pNew = sqliteRealloc(pList,
+ sizeof(*pList) + (pList->nAlloc-1)*sizeof(pList->a[0]) );
+ if( pNew==0 ){
+ sqlite3SrcListDelete(pList);
+ return 0;
+ }
+ pList = pNew;
+ }
+ pItem = &pList->a[pList->nSrc];
+ memset(pItem, 0, sizeof(pList->a[0]));
+ if( pDatabase && pDatabase->z==0 ){
+ pDatabase = 0;
+ }
+ if( pDatabase && pTable ){
+ Token *pTemp = pDatabase;
+ pDatabase = pTable;
+ pTable = pTemp;
+ }
+ pItem->zName = sqlite3NameFromToken(pTable);
+ pItem->zDatabase = sqlite3NameFromToken(pDatabase);
+ pItem->iCursor = -1;
+ pItem->isPopulated = 0;
+ pList->nSrc++;
+ return pList;
+}
+
+/*
+** Assign cursors to all tables in a SrcList
+*/
+void sqlite3SrcListAssignCursors(Parse *pParse, SrcList *pList){
+ int i;
+ struct SrcList_item *pItem;
+ assert(pList || sqlite3MallocFailed() );
+ if( pList ){
+ for(i=0, pItem=pList->a; i<pList->nSrc; i++, pItem++){
+ if( pItem->iCursor>=0 ) break;
+ pItem->iCursor = pParse->nTab++;
+ if( pItem->pSelect ){
+ sqlite3SrcListAssignCursors(pParse, pItem->pSelect->pSrc);
+ }
+ }
+ }
+}
+
+/*
+** Add an alias to the last identifier on the given identifier list.
+*/
+void sqlite3SrcListAddAlias(SrcList *pList, Token *pToken){
+ if( pList && pList->nSrc>0 ){
+ pList->a[pList->nSrc-1].zAlias = sqlite3NameFromToken(pToken);
+ }
+}
+
+/*
+** Delete an entire SrcList including all its substructure.
+*/
+void sqlite3SrcListDelete(SrcList *pList){
+ int i;
+ struct SrcList_item *pItem;
+ if( pList==0 ) return;
+ for(pItem=pList->a, i=0; i<pList->nSrc; i++, pItem++){
+ sqliteFree(pItem->zDatabase);
+ sqliteFree(pItem->zName);
+ sqliteFree(pItem->zAlias);
+ sqlite3DeleteTable(0, pItem->pTab);
+ sqlite3SelectDelete(pItem->pSelect);
+ sqlite3ExprDelete(pItem->pOn);
+ sqlite3IdListDelete(pItem->pUsing);
+ }
+ sqliteFree(pList);
+}
+
+/*
+** Begin a transaction
+*/
+void sqlite3BeginTransaction(Parse *pParse, int type){
+ sqlite3 *db;
+ Vdbe *v;
+ int i;
+
+ if( pParse==0 || (db=pParse->db)==0 || db->aDb[0].pBt==0 ) return;
+ if( pParse->nErr || sqlite3MallocFailed() ) return;
+ if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "BEGIN", 0, 0) ) return;
+
+ v = sqlite3GetVdbe(pParse);
+ if( !v ) return;
+ if( type!=TK_DEFERRED ){
+ for(i=0; i<db->nDb; i++){
+ sqlite3VdbeAddOp(v, OP_Transaction, i, (type==TK_EXCLUSIVE)+1);
+ }
+ }
+ sqlite3VdbeAddOp(v, OP_AutoCommit, 0, 0);
+}
+
+/*
+** Commit a transaction
+*/
+void sqlite3CommitTransaction(Parse *pParse){
+ sqlite3 *db;
+ Vdbe *v;
+
+ if( pParse==0 || (db=pParse->db)==0 || db->aDb[0].pBt==0 ) return;
+ if( pParse->nErr || sqlite3MallocFailed() ) return;
+ if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "COMMIT", 0, 0) ) return;
+
+ v = sqlite3GetVdbe(pParse);
+ if( v ){
+ sqlite3VdbeAddOp(v, OP_AutoCommit, 1, 0);
+ }
+}
+
+/*
+** Rollback a transaction
+*/
+void sqlite3RollbackTransaction(Parse *pParse){
+ sqlite3 *db;
+ Vdbe *v;
+
+ if( pParse==0 || (db=pParse->db)==0 || db->aDb[0].pBt==0 ) return;
+ if( pParse->nErr || sqlite3MallocFailed() ) return;
+ if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "ROLLBACK", 0, 0) ) return;
+
+ v = sqlite3GetVdbe(pParse);
+ if( v ){
+ sqlite3VdbeAddOp(v, OP_AutoCommit, 1, 1);
+ }
+}
+
+/*
+** Make sure the TEMP database is open and available for use. Return
+** the number of errors. Leave any error messages in the pParse structure.
+*/
+int sqlite3OpenTempDatabase(Parse *pParse){
+ sqlite3 *db = pParse->db;
+ if( db->aDb[1].pBt==0 && !pParse->explain ){
+ int rc = sqlite3BtreeFactory(db, 0, 0, MAX_PAGES, &db->aDb[1].pBt);
+ if( rc!=SQLITE_OK ){
+ sqlite3ErrorMsg(pParse, "unable to open a temporary database "
+ "file for storing temporary tables");
+ pParse->rc = rc;
+ return 1;
+ }
+ if( db->flags & !db->autoCommit ){
+ rc = sqlite3BtreeBeginTrans(db->aDb[1].pBt, 1);
+ if( rc!=SQLITE_OK ){
+ sqlite3ErrorMsg(pParse, "unable to get a write lock on "
+ "the temporary database file");
+ pParse->rc = rc;
+ return 1;
+ }
+ }
+ assert( db->aDb[1].pSchema );
+ }
+ return 0;
+}
+
+/*
+** Generate VDBE code that will verify the schema cookie and start
+** a read-transaction for all named database files.
+**
+** It is important that all schema cookies be verified and all
+** read transactions be started before anything else happens in
+** the VDBE program. But this routine can be called after much other
+** code has been generated. So here is what we do:
+**
+** The first time this routine is called, we code an OP_Goto that
+** will jump to a subroutine at the end of the program. Then we
+** record every database that needs its schema verified in the
+** pParse->cookieMask field. Later, after all other code has been
+** generated, the subroutine that does the cookie verifications and
+** starts the transactions will be coded and the OP_Goto P2 value
+** will be made to point to that subroutine. The generation of the
+** cookie verification subroutine code happens in sqlite3FinishCoding().
+**
+** If iDb<0 then code the OP_Goto only - don't set flag to verify the
+** schema on any databases. This can be used to position the OP_Goto
+** early in the code, before we know if any database tables will be used.
+*/
+void sqlite3CodeVerifySchema(Parse *pParse, int iDb){
+ sqlite3 *db;
+ Vdbe *v;
+ int mask;
+
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) return; /* This only happens if there was a prior error */
+ db = pParse->db;
+ if( pParse->cookieGoto==0 ){
+ pParse->cookieGoto = sqlite3VdbeAddOp(v, OP_Goto, 0, 0)+1;
+ }
+ if( iDb>=0 ){
+ assert( iDb<db->nDb );
+ assert( db->aDb[iDb].pBt!=0 || iDb==1 );
+ assert( iDb<MAX_ATTACHED+2 );
+ mask = 1<<iDb;
+ if( (pParse->cookieMask & mask)==0 ){
+ pParse->cookieMask |= mask;
+ pParse->cookieValue[iDb] = db->aDb[iDb].pSchema->schema_cookie;
+ if( !OMIT_TEMPDB && iDb==1 ){
+ sqlite3OpenTempDatabase(pParse);
+ }
+ }
+ }
+}
+
+/*
+** Generate VDBE code that prepares for doing an operation that
+** might change the database.
+**
+** This routine starts a new transaction if we are not already within
+** a transaction. If we are already within a transaction, then a checkpoint
+** is set if the setStatement parameter is true. A checkpoint should
+** be set for operations that might fail (due to a constraint) part of
+** the way through and which will need to undo some writes without having to
+** rollback the whole transaction. For operations where all constraints
+** can be checked before any changes are made to the database, it is never
+** necessary to undo a write and the checkpoint should not be set.
+**
+** Only database iDb and the temp database are made writable by this call.
+** If iDb==0, then the main and temp databases are made writable. If
+** iDb==1 then only the temp database is made writable. If iDb>1 then the
+** specified auxiliary database and the temp database are made writable.
+*/
+void sqlite3BeginWriteOperation(Parse *pParse, int setStatement, int iDb){
+ Vdbe *v = sqlite3GetVdbe(pParse);
+ if( v==0 ) return;
+ sqlite3CodeVerifySchema(pParse, iDb);
+ pParse->writeMask |= 1<<iDb;
+ if( setStatement && pParse->nested==0 ){
+ sqlite3VdbeAddOp(v, OP_Statement, iDb, 0);
+ }
+ if( (OMIT_TEMPDB || iDb!=1) && pParse->db->aDb[1].pBt!=0 ){
+ sqlite3BeginWriteOperation(pParse, setStatement, 1);
+ }
+}
+
+/*
+** Check to see if pIndex uses the collating sequence pColl. Return
+** true if it does and false if it does not.
+*/
+#ifndef SQLITE_OMIT_REINDEX
+static int collationMatch(const char *zColl, Index *pIndex){
+ int i;
+ for(i=0; i<pIndex->nColumn; i++){
+ const char *z = pIndex->azColl[i];
+ if( z==zColl || (z && zColl && 0==sqlite3StrICmp(z, zColl)) ){
+ return 1;
+ }
+ }
+ return 0;
+}
+#endif
+
+/*
+** Recompute all indices of pTab that use the collating sequence pColl.
+** If pColl==0 then recompute all indices of pTab.
+*/
+#ifndef SQLITE_OMIT_REINDEX
+static void reindexTable(Parse *pParse, Table *pTab, char const *zColl){
+ Index *pIndex; /* An index associated with pTab */
+
+ for(pIndex=pTab->pIndex; pIndex; pIndex=pIndex->pNext){
+ if( zColl==0 || collationMatch(zColl, pIndex) ){
+ int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+ sqlite3RefillIndex(pParse, pIndex, -1);
+ }
+ }
+}
+#endif
+
+/*
+** Recompute all indices of all tables in all databases where the
+** indices use the collating sequence pColl. If pColl==0 then recompute
+** all indices everywhere.
+*/
+#ifndef SQLITE_OMIT_REINDEX
+static void reindexDatabases(Parse *pParse, char const *zColl){
+ Db *pDb; /* A single database */
+ int iDb; /* The database index number */
+ sqlite3 *db = pParse->db; /* The database connection */
+ HashElem *k; /* For looping over tables in pDb */
+ Table *pTab; /* A table in the database */
+
+ for(iDb=0, pDb=db->aDb; iDb<db->nDb; iDb++, pDb++){
+ assert( pDb!=0 );
+ for(k=sqliteHashFirst(&pDb->pSchema->tblHash); k; k=sqliteHashNext(k)){
+ pTab = (Table*)sqliteHashData(k);
+ reindexTable(pParse, pTab, zColl);
+ }
+ }
+}
+#endif
+
+/*
+** Generate code for the REINDEX command.
+**
+** REINDEX -- 1
+** REINDEX <collation> -- 2
+** REINDEX ?<database>.?<tablename> -- 3
+** REINDEX ?<database>.?<indexname> -- 4
+**
+** Form 1 causes all indices in all attached databases to be rebuilt.
+** Form 2 rebuilds all indices in all databases that use the named
+** collating function. Forms 3 and 4 rebuild the named index or all
+** indices associated with the named table.
+*/
+#ifndef SQLITE_OMIT_REINDEX
+void sqlite3Reindex(Parse *pParse, Token *pName1, Token *pName2){
+ CollSeq *pColl; /* Collating sequence to be reindexed, or NULL */
+ char *z; /* Name of a table or index */
+ const char *zDb; /* Name of the database */
+ Table *pTab; /* A table in the database */
+ Index *pIndex; /* An index associated with pTab */
+ int iDb; /* The database index number */
+ sqlite3 *db = pParse->db; /* The database connection */
+ Token *pObjName; /* Name of the table or index to be reindexed */
+
+ /* Read the database schema. If an error occurs, leave an error message
+ ** and code in pParse and return NULL. */
+ if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
+ return;
+ }
+
+ if( pName1==0 || pName1->z==0 ){
+ reindexDatabases(pParse, 0);
+ return;
+ }else if( pName2==0 || pName2->z==0 ){
+ assert( pName1->z );
+ pColl = sqlite3FindCollSeq(db, ENC(db), (char*)pName1->z, pName1->n, 0);
+ if( pColl ){
+ char *zColl = sqliteStrNDup((const char *)pName1->z, pName1->n);
+ if( zColl ){
+ reindexDatabases(pParse, zColl);
+ sqliteFree(zColl);
+ }
+ return;
+ }
+ }
+ iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pObjName);
+ if( iDb<0 ) return;
+ z = sqlite3NameFromToken(pObjName);
+ zDb = db->aDb[iDb].zName;
+ pTab = sqlite3FindTable(db, z, zDb);
+ if( pTab ){
+ reindexTable(pParse, pTab, 0);
+ sqliteFree(z);
+ return;
+ }
+ pIndex = sqlite3FindIndex(db, z, zDb);
+ sqliteFree(z);
+ if( pIndex ){
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+ sqlite3RefillIndex(pParse, pIndex, -1);
+ return;
+ }
+ sqlite3ErrorMsg(pParse, "unable to identify the object to be reindexed");
+}
+#endif
+
+/*
+** Return a dynamically allocated KeyInfo structure that can be used
+** with OP_OpenRead or OP_OpenWrite to access database index pIdx.
+**
+** If successful, a pointer to the new structure is returned. In this case
+** the caller is responsible for calling sqliteFree() on the returned
+** pointer. If an error occurs (out of memory or missing collation
+** sequence), NULL is returned and the state of pParse updated to reflect
+** the error.
+*/
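+/*
+** Typical use, as in sqlite3RefillIndex() above: the structure is handed
+** straight to the VDBE, which then takes over ownership, e.g.
+**
+**     pKey = sqlite3IndexKeyinfo(pParse, pIdx);
+**     sqlite3VdbeOp3(v, OP_OpenWrite, iIdx, tnum,
+**                    (char *)pKey, P3_KEYINFO_HANDOFF);
+**
+** The P3_KEYINFO_HANDOFF flag makes the VDBE responsible for eventually
+** freeing the KeyInfo, so the caller must not free it in that case.
+*/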
+KeyInfo *sqlite3IndexKeyinfo(Parse *pParse, Index *pIdx){
+ int i;
+ int nCol = pIdx->nColumn;
+ int nBytes = sizeof(KeyInfo) + (nCol-1)*sizeof(CollSeq*) + nCol;
+ KeyInfo *pKey = (KeyInfo *)sqliteMalloc(nBytes);
+
+ if( pKey ){
+ pKey->aSortOrder = (u8 *)&(pKey->aColl[nCol]);
+ assert( &pKey->aSortOrder[nCol]==&(((u8 *)pKey)[nBytes]) );
+ for(i=0; i<nCol; i++){
+ char *zColl = pIdx->azColl[i];
+ assert( zColl );
+ pKey->aColl[i] = sqlite3LocateCollSeq(pParse, zColl, -1);
+ pKey->aSortOrder[i] = pIdx->aSortOrder[i];
+ }
+ pKey->nField = nCol;
+ }
+
+ if( pParse->nErr ){
+ sqliteFree(pKey);
+ pKey = 0;
+ }
+ return pKey;
+}
Added: freeswitch/trunk/libs/sqlite/src/callback.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/callback.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,368 @@
+/*
+** 2005 May 23
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+**
+** This file contains functions used to access the internal hash tables
+** of user defined functions and collation sequences.
+**
+** $Id: callback.c,v 1.15 2006/05/24 12:43:27 drh Exp $
+*/
+
+#include "sqliteInt.h"
+
+/*
+** Invoke the 'collation needed' callback to request a collation sequence
+** in the database text encoding of name zName, length nName.
+** After the callback returns, the caller may retry sqlite3FindCollSeq() to
+** see whether the requested collation sequence has been registered.
+*/
+static void callCollNeeded(sqlite3 *db, const char *zName, int nName){
+ assert( !db->xCollNeeded || !db->xCollNeeded16 );
+ if( nName<0 ) nName = strlen(zName);
+ if( db->xCollNeeded ){
+ char *zExternal = sqliteStrNDup(zName, nName);
+ if( !zExternal ) return;
+ db->xCollNeeded(db->pCollNeededArg, db, (int)ENC(db), zExternal);
+ sqliteFree(zExternal);
+ }
+#ifndef SQLITE_OMIT_UTF16
+ if( db->xCollNeeded16 ){
+ char const *zExternal;
+ sqlite3_value *pTmp = sqlite3ValueNew();
+ sqlite3ValueSetStr(pTmp, nName, zName, SQLITE_UTF8, SQLITE_STATIC);
+ zExternal = sqlite3ValueText(pTmp, SQLITE_UTF16NATIVE);
+ if( zExternal ){
+ db->xCollNeeded16(db->pCollNeededArg, db, (int)ENC(db), zExternal);
+ }
+ sqlite3ValueFree(pTmp);
+ }
+#endif
+}
+
+/*
+** This routine is called if the collation factory fails to deliver a
+** collation function in the best encoding but there may be other versions
+** of this collation function (for other text encodings) available. Use one
+** of these instead if they exist. Avoid a UTF-8 <-> UTF-16 conversion if
+** possible.
+*/
+static int synthCollSeq(sqlite3 *db, CollSeq *pColl){
+ CollSeq *pColl2;
+ char *z = pColl->zName;
+ int n = strlen(z);
+ int i;
+ static const u8 aEnc[] = { SQLITE_UTF16BE, SQLITE_UTF16LE, SQLITE_UTF8 };
+ for(i=0; i<3; i++){
+ pColl2 = sqlite3FindCollSeq(db, aEnc[i], z, n, 0);
+ if( pColl2->xCmp!=0 ){
+ memcpy(pColl, pColl2, sizeof(CollSeq));
+ return SQLITE_OK;
+ }
+ }
+ return SQLITE_ERROR;
+}
+
+/*
+** This function is responsible for invoking the collation factory callback
+** or substituting a collation sequence of a different encoding when the
+** requested collation sequence is not available in the database native
+** encoding.
+**
+** If it is not NULL, then pColl must point to the database native encoding
+** collation sequence with name zName, length nName.
+**
+** The return value is either the collation sequence to be used in database
+** db for collation type name zName, length nName, or NULL, if no collation
+** sequence can be found.
+*/
+CollSeq *sqlite3GetCollSeq(
+ sqlite3* db,
+ CollSeq *pColl,
+ const char *zName,
+ int nName
+){
+ CollSeq *p;
+
+ p = pColl;
+ if( !p ){
+ p = sqlite3FindCollSeq(db, ENC(db), zName, nName, 0);
+ }
+ if( !p || !p->xCmp ){
+ /* No collation sequence of this type for this encoding is registered.
+ ** Call the collation factory to see if it can supply us with one.
+ */
+ callCollNeeded(db, zName, nName);
+ p = sqlite3FindCollSeq(db, ENC(db), zName, nName, 0);
+ }
+ if( p && !p->xCmp && synthCollSeq(db, p) ){
+ p = 0;
+ }
+ assert( !p || p->xCmp );
+ return p;
+}
+
+/*
+** This routine is called on a collation sequence before it is used to
+** check that it is defined. An undefined collation sequence exists when
+** a database is loaded that contains references to collation sequences
+** that have not been defined by sqlite3_create_collation() etc.
+**
+** If required, this routine calls the 'collation needed' callback to
+** request a definition of the collating sequence. If this doesn't work,
+** an equivalent collating sequence that uses a text encoding different
+** from the main database is substituted, if one is available.
+*/
+int sqlite3CheckCollSeq(Parse *pParse, CollSeq *pColl){
+ if( pColl ){
+ const char *zName = pColl->zName;
+ CollSeq *p = sqlite3GetCollSeq(pParse->db, pColl, zName, -1);
+ if( !p ){
+ if( pParse->nErr==0 ){
+ sqlite3ErrorMsg(pParse, "no such collation sequence: %s", zName);
+ }
+ pParse->nErr++;
+ return SQLITE_ERROR;
+ }
+ assert( p==pColl );
+ }
+ return SQLITE_OK;
+}
+
+
+
+/*
+** Locate and return an entry from the db.aCollSeq hash table. If the entry
+** specified by zName and nName is not found and parameter 'create' is
+** true, then create a new entry. Otherwise return NULL.
+**
+** Each pointer stored in the sqlite3.aCollSeq hash table contains an
+** array of three CollSeq structures. The first is the collation sequence
+** preferred for UTF-8, the second UTF-16le, and the third UTF-16be.
+**
+** Stored immediately after the three collation sequences is a copy of
+** the collation sequence name. A pointer to this string is stored in
+** each collation sequence structure.
+*/
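+/*
+** Layout sketch of the allocation made below, for an entry named "NOCASE"
+** (the name is only an example):
+**
+**     [CollSeq for UTF-8] [CollSeq for UTF-16LE]
+**     [CollSeq for UTF-16BE] ["NOCASE" + nul terminator]
+**
+** All three zName pointers reference the single copy of the name stored
+** at the end of the block.
+*/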
+static CollSeq *findCollSeqEntry(
+ sqlite3 *db,
+ const char *zName,
+ int nName,
+ int create
+){
+ CollSeq *pColl;
+ if( nName<0 ) nName = strlen(zName);
+ pColl = sqlite3HashFind(&db->aCollSeq, zName, nName);
+
+ if( 0==pColl && create ){
+ pColl = sqliteMalloc( 3*sizeof(*pColl) + nName + 1 );
+ if( pColl ){
+ CollSeq *pDel = 0;
+ pColl[0].zName = (char*)&pColl[3];
+ pColl[0].enc = SQLITE_UTF8;
+ pColl[1].zName = (char*)&pColl[3];
+ pColl[1].enc = SQLITE_UTF16LE;
+ pColl[2].zName = (char*)&pColl[3];
+ pColl[2].enc = SQLITE_UTF16BE;
+ memcpy(pColl[0].zName, zName, nName);
+ pColl[0].zName[nName] = 0;
+ pDel = sqlite3HashInsert(&db->aCollSeq, pColl[0].zName, nName, pColl);
+
+ /* If a malloc() failure occurred in sqlite3HashInsert(), it will
+ ** return the pColl pointer to be deleted (because it wasn't added
+ ** to the hash table).
+ */
+ assert( !pDel || (sqlite3MallocFailed() && pDel==pColl) );
+ if( pDel ){
+ sqliteFree(pDel);
+ pColl = 0;
+ }
+ }
+ }
+ return pColl;
+}
+
+/*
+** Parameter zName points to a UTF-8 encoded string nName bytes long.
+** Return the CollSeq* pointer for the collation sequence named zName
+** for the encoding 'enc' from the database 'db'.
+**
+** If the entry specified is not found and 'create' is true, then create a
+** new entry. Otherwise return NULL.
+*/
+CollSeq *sqlite3FindCollSeq(
+ sqlite3 *db,
+ u8 enc,
+ const char *zName,
+ int nName,
+ int create
+){
+ CollSeq *pColl;
+ if( zName ){
+ pColl = findCollSeqEntry(db, zName, nName, create);
+ }else{
+ pColl = db->pDfltColl;
+ }
+ assert( SQLITE_UTF8==1 && SQLITE_UTF16LE==2 && SQLITE_UTF16BE==3 );
+ assert( enc>=SQLITE_UTF8 && enc<=SQLITE_UTF16BE );
+ if( pColl ) pColl += enc-1;
+ return pColl;
+}
+
+/*
+** Locate a user function given a name, a number of arguments and a flag
+** indicating whether the function prefers UTF-16 over UTF-8. Return a
+** pointer to the FuncDef structure that defines that function, or return
+** NULL if the function does not exist.
+**
+** If the createFlag argument is true, then a new (blank) FuncDef
+** structure is created and linked into the "db" structure if
+** no matching function previously existed. When createFlag is true
+** and the nArg parameter is -1, then only a function that accepts
+** any number of arguments will be returned.
+**
+** If createFlag is false and nArg is -1, then the first valid
+** function found is returned. A function is valid if either xFunc
+** or xStep is non-zero.
+**
+** If createFlag is false, then a function with the required name and
+** number of arguments may be returned even if the eTextRep flag does not
+** match that requested.
+*/
+FuncDef *sqlite3FindFunction(
+ sqlite3 *db, /* An open database */
+ const char *zName, /* Name of the function. Not null-terminated */
+ int nName, /* Number of characters in the name */
+ int nArg, /* Number of arguments. -1 means any number */
+ u8 enc, /* Preferred text encoding */
+ int createFlag /* Create new entry if true and does not otherwise exist */
+){
+ FuncDef *p; /* Iterator variable */
+ FuncDef *pFirst; /* First function with this name */
+ FuncDef *pBest = 0; /* Best match found so far */
+ int bestmatch = 0;
+
+
+ assert( enc==SQLITE_UTF8 || enc==SQLITE_UTF16LE || enc==SQLITE_UTF16BE );
+ if( nArg<-1 ) nArg = -1;
+
+ pFirst = (FuncDef*)sqlite3HashFind(&db->aFunc, zName, nName);
+ for(p=pFirst; p; p=p->pNext){
+ /* During the search for the best function definition, bestmatch is set
+ ** as follows to indicate the quality of the match with the definition
+ ** pointed to by pBest:
+ **
+ ** 0: pBest is NULL. No match has been found.
+ ** 1: A variable arguments function that prefers UTF-8 when a UTF-16
+ ** encoding is requested, or vice versa.
+ ** 2: A variable arguments function that uses UTF-16BE when UTF-16LE is
+ ** requested, or vice versa.
+ ** 3: A variable arguments function using the same text encoding.
+ ** 4: A function with the exact number of arguments requested that
+ ** prefers UTF-8 when a UTF-16 encoding is requested, or vice versa.
+ ** 5: A function with the exact number of arguments requested that
+ ** prefers UTF-16LE when UTF-16BE is requested, or vice versa.
+ ** 6: An exact match.
+ **
+ ** A larger value of 'match' indicates a more desirable match.
+ */
+ if( p->nArg==-1 || p->nArg==nArg || nArg==-1 ){
+ int match = 1; /* Quality of this match */
+ if( p->nArg==nArg || nArg==-1 ){
+ match = 4;
+ }
+ if( enc==p->iPrefEnc ){
+ match += 2;
+ }
+ else if( (enc==SQLITE_UTF16LE && p->iPrefEnc==SQLITE_UTF16BE) ||
+ (enc==SQLITE_UTF16BE && p->iPrefEnc==SQLITE_UTF16LE) ){
+ match += 1;
+ }
+
+ if( match>bestmatch ){
+ pBest = p;
+ bestmatch = match;
+ }
+ }
+ }
+
+ /* If the createFlag parameter is true, and the search did not reveal an
+ ** exact match for the name, number of arguments and encoding, then add a
+ ** new entry to the hash table and return it.
+ */
+ if( createFlag && bestmatch<6 &&
+ (pBest = sqliteMalloc(sizeof(*pBest)+nName))!=0 ){
+ pBest->nArg = nArg;
+ pBest->pNext = pFirst;
+ pBest->iPrefEnc = enc;
+ memcpy(pBest->zName, zName, nName);
+ pBest->zName[nName] = 0;
+ if( pBest==sqlite3HashInsert(&db->aFunc,pBest->zName,nName,(void*)pBest) ){
+ sqliteFree(pBest);
+ return 0;
+ }
+ }
+
+ if( pBest && (pBest->xStep || pBest->xFunc || createFlag) ){
+ return pBest;
+ }
+ return 0;
+}
+
+/*
+** Free all resources held by the schema structure. The void* argument points
+** at a Schema struct. This function does not call sqliteFree() on the
+** pointer itself, it just cleans up subsidiary resources (i.e. the contents
+** of the schema hash tables).
+*/
+void sqlite3SchemaFree(void *p){
+ Hash temp1;
+ Hash temp2;
+ HashElem *pElem;
+ Schema *pSchema = (Schema *)p;
+
+ temp1 = pSchema->tblHash;
+ temp2 = pSchema->trigHash;
+ sqlite3HashInit(&pSchema->trigHash, SQLITE_HASH_STRING, 0);
+ sqlite3HashClear(&pSchema->aFKey);
+ sqlite3HashClear(&pSchema->idxHash);
+ for(pElem=sqliteHashFirst(&temp2); pElem; pElem=sqliteHashNext(pElem)){
+ sqlite3DeleteTrigger((Trigger*)sqliteHashData(pElem));
+ }
+ sqlite3HashClear(&temp2);
+ sqlite3HashInit(&pSchema->tblHash, SQLITE_HASH_STRING, 0);
+ for(pElem=sqliteHashFirst(&temp1); pElem; pElem=sqliteHashNext(pElem)){
+ Table *pTab = sqliteHashData(pElem);
+ sqlite3DeleteTable(0, pTab);
+ }
+ sqlite3HashClear(&temp1);
+ pSchema->pSeqTab = 0;
+ pSchema->flags &= ~DB_SchemaLoaded;
+}
+
+/*
+** Find and return the schema associated with a BTree. Create
+** a new one if necessary.
+*/
+Schema *sqlite3SchemaGet(Btree *pBt){
+ Schema * p;
+ if( pBt ){
+ p = (Schema *)sqlite3BtreeSchema(pBt,sizeof(Schema),sqlite3SchemaFree);
+ }else{
+ p = (Schema *)sqliteMalloc(sizeof(Schema));
+ }
+ if( p && 0==p->file_format ){
+ sqlite3HashInit(&p->tblHash, SQLITE_HASH_STRING, 0);
+ sqlite3HashInit(&p->idxHash, SQLITE_HASH_STRING, 0);
+ sqlite3HashInit(&p->trigHash, SQLITE_HASH_STRING, 0);
+ sqlite3HashInit(&p->aFKey, SQLITE_HASH_STRING, 1);
+ p->enc = SQLITE_UTF8;
+ }
+ return p;
+}
Added: freeswitch/trunk/libs/sqlite/src/complete.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/complete.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,263 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** A tokenizer for SQL
+**
+** This file contains C code that implements the sqlite3_complete() API.
+** This code used to be part of the tokenizer.c source file. But by
+** separating it out, the code will be automatically omitted from
+** static links that do not use it.
+**
+** $Id: complete.c,v 1.3 2006/01/18 15:25:17 danielk1977 Exp $
+*/
+#include "sqliteInt.h"
+#ifndef SQLITE_OMIT_COMPLETE
+
+/*
+** This is defined in tokenize.c. We just have to import the definition.
+*/
+extern const char sqlite3IsIdChar[];
+#define IdChar(C) (((c=C)&0x80)!=0 || (c>0x1f && sqlite3IsIdChar[c-0x20]))
+
+
+/*
+** Token types used by the sqlite3_complete() routine. See the header
+** comments on that procedure for additional information.
+*/
+#define tkSEMI 0
+#define tkWS 1
+#define tkOTHER 2
+#define tkEXPLAIN 3
+#define tkCREATE 4
+#define tkTEMP 5
+#define tkTRIGGER 6
+#define tkEND 7
+
+/*
+** Return TRUE if the given SQL string ends in a semicolon.
+**
+** Special handling is required for CREATE TRIGGER statements.
+** Whenever the CREATE TRIGGER keywords are seen, the statement
+** must end with ";END;".
+**
+** This implementation uses a state machine with 7 states:
+**
+** (0) START At the beginning or end of an SQL statement. This routine
+** returns 1 if it ends in the START state and 0 if it ends
+** in any other state.
+**
+** (1) NORMAL We are in the middle of a statement which ends with a single
+** semicolon.
+**
+** (2) EXPLAIN The keyword EXPLAIN has been seen at the beginning of
+** a statement.
+**
+** (3) CREATE The keyword CREATE has been seen at the beginning of a
+** statement, possibly preceded by EXPLAIN and/or followed by
+** TEMP or TEMPORARY
+**
+** (4) TRIGGER We are in the middle of a trigger definition that must be
+** ended by a semicolon, the keyword END, and another semicolon.
+**
+** (5) SEMI We've seen the first semicolon in the ";END;" that occurs at
+** the end of a trigger definition.
+**
+** (6) END We've seen the ";END" of the ";END;" that occurs at the end
+** of a trigger definition.
+**
+** Transitions between states above are determined by tokens extracted
+** from the input. The following tokens are significant:
+**
+** (0) tkSEMI A semicolon.
+** (1) tkWS Whitespace
+** (2) tkOTHER Any other SQL token.
+** (3) tkEXPLAIN The "explain" keyword.
+** (4) tkCREATE The "create" keyword.
+** (5) tkTEMP The "temp" or "temporary" keyword.
+** (6) tkTRIGGER The "trigger" keyword.
+** (7) tkEND The "end" keyword.
+**
+** Whitespace never causes a state transition and is always ignored.
+**
+** If we compile with SQLITE_OMIT_TRIGGER, all of the computation needed
+** to recognize the end of a trigger can be omitted. All we have to do
+** is look for a semicolon that is not part of a string or comment.
+*/
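+/*
+** Schematic examples (the SQL shown is illustrative only; "..." stands
+** for any further ordinary tokens):
+**
+**     "SELECT * FROM t1;"                                       returns 1
+**     "SELECT * FROM t1"                                        returns 0
+**     "CREATE TRIGGER r1 AFTER INSERT ON t1 BEGIN ... ;"        returns 0
+**     "CREATE TRIGGER r1 AFTER INSERT ON t1 BEGIN ... ; END;"   returns 1
+*/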
+int sqlite3_complete(const char *zSql){
+ u8 state = 0; /* Current state, using numbers defined in header comment */
+ u8 token; /* Value of the next token */
+
+#ifndef SQLITE_OMIT_TRIGGER
+ /* A complex state machine used to detect the end of a CREATE TRIGGER
+ ** statement. This is the normal case.
+ */
+ static const u8 trans[7][8] = {
+ /* Token: */
+ /* State: ** SEMI WS OTHER EXPLAIN CREATE TEMP TRIGGER END */
+ /* 0 START: */ { 0, 0, 1, 2, 3, 1, 1, 1, },
+ /* 1 NORMAL: */ { 0, 1, 1, 1, 1, 1, 1, 1, },
+ /* 2 EXPLAIN: */ { 0, 2, 1, 1, 3, 1, 1, 1, },
+ /* 3 CREATE: */ { 0, 3, 1, 1, 1, 3, 4, 1, },
+ /* 4 TRIGGER: */ { 5, 4, 4, 4, 4, 4, 4, 4, },
+ /* 5 SEMI: */ { 5, 5, 4, 4, 4, 4, 4, 6, },
+ /* 6 END: */ { 0, 6, 4, 4, 4, 4, 4, 4, },
+ };
+#else
+ /* If triggers are not supported by this compile then the state machine
+ ** used to detect the end of a statement is much simpler
+ */
+ static const u8 trans[2][3] = {
+ /* Token: */
+ /* State: ** SEMI WS OTHER */
+ /* 0 START: */ { 0, 0, 1, },
+ /* 1 NORMAL: */ { 0, 1, 1, },
+ };
+#endif /* SQLITE_OMIT_TRIGGER */
+
+ while( *zSql ){
+ switch( *zSql ){
+ case ';': { /* A semicolon */
+ token = tkSEMI;
+ break;
+ }
+ case ' ':
+ case '\r':
+ case '\t':
+ case '\n':
+ case '\f': { /* White space is ignored */
+ token = tkWS;
+ break;
+ }
+ case '/': { /* C-style comments */
+ if( zSql[1]!='*' ){
+ token = tkOTHER;
+ break;
+ }
+ zSql += 2;
+ while( zSql[0] && (zSql[0]!='*' || zSql[1]!='/') ){ zSql++; }
+ if( zSql[0]==0 ) return 0;
+ zSql++;
+ token = tkWS;
+ break;
+ }
+ case '-': { /* SQL-style comments from "--" to end of line */
+ if( zSql[1]!='-' ){
+ token = tkOTHER;
+ break;
+ }
+ while( *zSql && *zSql!='\n' ){ zSql++; }
+ if( *zSql==0 ) return state==0;
+ token = tkWS;
+ break;
+ }
+ case '[': { /* Microsoft-style identifiers in [...] */
+ zSql++;
+ while( *zSql && *zSql!=']' ){ zSql++; }
+ if( *zSql==0 ) return 0;
+ token = tkOTHER;
+ break;
+ }
+ case '`': /* Grave-accent quoted symbols used by MySQL */
+ case '"': /* single- and double-quoted strings */
+ case '\'': {
+ int c = *zSql;
+ zSql++;
+ while( *zSql && *zSql!=c ){ zSql++; }
+ if( *zSql==0 ) return 0;
+ token = tkOTHER;
+ break;
+ }
+ default: {
+ int c;
+ if( IdChar((u8)*zSql) ){
+ /* Keywords and unquoted identifiers */
+ int nId;
+ for(nId=1; IdChar(zSql[nId]); nId++){}
+#ifdef SQLITE_OMIT_TRIGGER
+ token = tkOTHER;
+#else
+ switch( *zSql ){
+ case 'c': case 'C': {
+ if( nId==6 && sqlite3StrNICmp(zSql, "create", 6)==0 ){
+ token = tkCREATE;
+ }else{
+ token = tkOTHER;
+ }
+ break;
+ }
+ case 't': case 'T': {
+ if( nId==7 && sqlite3StrNICmp(zSql, "trigger", 7)==0 ){
+ token = tkTRIGGER;
+ }else if( nId==4 && sqlite3StrNICmp(zSql, "temp", 4)==0 ){
+ token = tkTEMP;
+ }else if( nId==9 && sqlite3StrNICmp(zSql, "temporary", 9)==0 ){
+ token = tkTEMP;
+ }else{
+ token = tkOTHER;
+ }
+ break;
+ }
+ case 'e': case 'E': {
+ if( nId==3 && sqlite3StrNICmp(zSql, "end", 3)==0 ){
+ token = tkEND;
+ }else
+#ifndef SQLITE_OMIT_EXPLAIN
+ if( nId==7 && sqlite3StrNICmp(zSql, "explain", 7)==0 ){
+ token = tkEXPLAIN;
+ }else
+#endif
+ {
+ token = tkOTHER;
+ }
+ break;
+ }
+ default: {
+ token = tkOTHER;
+ break;
+ }
+ }
+#endif /* SQLITE_OMIT_TRIGGER */
+ zSql += nId-1;
+ }else{
+ /* Operators and special symbols */
+ token = tkOTHER;
+ }
+ break;
+ }
+ }
+ state = trans[state][token];
+ zSql++;
+ }
+ return state==0;
+}
+
+#ifndef SQLITE_OMIT_UTF16
+/*
+** This routine is the same as the sqlite3_complete() routine described
+** above, except that the parameter is required to be UTF-16 encoded, not
+** UTF-8.
+*/
+int sqlite3_complete16(const void *zSql){
+ sqlite3_value *pVal;
+ char const *zSql8;
+ int rc = 0;
+
+ pVal = sqlite3ValueNew();
+ sqlite3ValueSetStr(pVal, -1, zSql, SQLITE_UTF16NATIVE, SQLITE_STATIC);
+ zSql8 = sqlite3ValueText(pVal, SQLITE_UTF8);
+ if( zSql8 ){
+ rc = sqlite3_complete(zSql8);
+ }
+ sqlite3ValueFree(pVal);
+ return sqlite3ApiExit(0, rc);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+#endif /* SQLITE_OMIT_COMPLETE */
Added: freeswitch/trunk/libs/sqlite/src/date.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/date.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1026 @@
+/*
+** 2003 October 31
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains the C functions that implement date and time
+** functions for SQLite.
+**
+** There is only one exported symbol in this file - the function
+** sqlite3RegisterDateTimeFunctions() found at the bottom of the file.
+** All other code has file scope.
+**
+** $Id: date.c,v 1.58 2006/09/25 18:05:04 drh Exp $
+**
+** NOTES:
+**
+** SQLite processes all times and dates as Julian Day numbers. The
+** dates and times are stored as the number of days since noon
+** in Greenwich on November 24, 4714 B.C. according to the Gregorian
+** calendar system.
+**
+** 1970-01-01 00:00:00 is JD 2440587.5
+** 2000-01-01 00:00:00 is JD 2451544.5
+**
+** This implementation requires years to be expressed as a 4-digit number
+** which means that only dates between 0000-01-01 and 9999-12-31 can
+** be represented, even though julian day numbers allow a much wider
+** range of dates.
+**
+** The Gregorian calendar system is used for all dates and times,
+** even those that predate the Gregorian calendar. Historians usually
+** use the Julian calendar for dates prior to 1582-10-15 and for some
+** dates afterwards, depending on locale. Beware of this difference.
+**
+** The conversion algorithms are implemented based on descriptions
+** in the following text:
+**
+** Jean Meeus
+** Astronomical Algorithms, 2nd Edition, 1998
+** ISBN 0-943396-61-1
+** Willmann-Bell, Inc
+** Richmond, Virginia (USA)
+*/
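Since JD 2440587.5 is the Unix epoch (as noted above), Julian Day numbers and Unix timestamps differ only by an offset and a factor of 86400 seconds per day. A small standalone check of that relationship, using the same constant that localtimeOffset() and the %s conversion rely on further down:

    #include <stdio.h>

    int main(void){
      double jd = 2451544.5;                        /* 2000-01-01 00:00:00 UTC */
      double unixSecs = (jd - 2440587.5)*86400.0;   /* days since the epoch, in seconds */
      printf("%.0f\n", unixSecs);                   /* prints 946684800 */
      return 0;
    }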
+#include "sqliteInt.h"
+#include "os.h"
+#include <ctype.h>
+#include <stdlib.h>
+#include <assert.h>
+#include <time.h>
+
+#ifndef SQLITE_OMIT_DATETIME_FUNCS
+
+/*
+** A structure for holding a single date and time.
+*/
+typedef struct DateTime DateTime;
+struct DateTime {
+ double rJD; /* The julian day number */
+ int Y, M, D; /* Year, month, and day */
+ int h, m; /* Hour and minutes */
+ int tz; /* Timezone offset in minutes */
+ double s; /* Seconds */
+ char validYMD; /* True if Y,M,D are valid */
+ char validHMS; /* True if h,m,s are valid */
+ char validJD; /* True if rJD is valid */
+ char validTZ; /* True if tz is valid */
+};
+
+
+/*
+** Convert zDate into one or more integers. Additional arguments
+** come in groups of 5 as follows:
+**
+** N number of digits in the integer
+** min minimum allowed value of the integer
+** max maximum allowed value of the integer
+** nextC first character after the integer
+** pVal where to write the integer's value.
+**
+** Conversions continue until one with nextC==0 is encountered.
+** The function returns the number of successful conversions.
+*/
+static int getDigits(const char *zDate, ...){
+ va_list ap;
+ int val;
+ int N;
+ int min;
+ int max;
+ int nextC;
+ int *pVal;
+ int cnt = 0;
+ va_start(ap, zDate);
+ do{
+ N = va_arg(ap, int);
+ min = va_arg(ap, int);
+ max = va_arg(ap, int);
+ nextC = va_arg(ap, int);
+ pVal = va_arg(ap, int*);
+ val = 0;
+ while( N-- ){
+ if( !isdigit(*(u8*)zDate) ){
+ goto end_getDigits;
+ }
+ val = val*10 + *zDate - '0';
+ zDate++;
+ }
+ if( val<min || val>max || (nextC!=0 && nextC!=*zDate) ){
+ goto end_getDigits;
+ }
+ *pVal = val;
+ zDate++;
+ cnt++;
+ }while( nextC );
+end_getDigits:
+ va_end(ap);
+ return cnt;
+}
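To make the argument grouping concrete: the call that parseHhMmSs() makes below,

    getDigits("21:30", 2, 0, 24, ':', &h,  2, 0, 59, 0, &m)

reads one group of two digits in 0..24 that must be followed by ':' and a second group of two digits in 0..59 with no required terminator, returning 2 with h==21 and m==30. An input such as "21-30" fails the nextC check on the first group, so nothing is stored and 0 is returned.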
+
+/*
+** Read text from z[] and convert into a floating point number. Return
+** the number of digits converted.
+*/
+#define getValue sqlite3AtoF
+
+/*
+** Parse a timezone extension on the end of a date-time.
+** The extension is of the form:
+**
+** (+/-)HH:MM
+**
+** If the parse is successful, write the number of minutes
+** of change into p->tz and return 0. If a parser error occurs,
+** return non-zero.
+**
+** A missing specifier is not considered an error.
+*/
+static int parseTimezone(const char *zDate, DateTime *p){
+ int sgn = 0;
+ int nHr, nMn;
+ while( isspace(*(u8*)zDate) ){ zDate++; }
+ p->tz = 0;
+ if( *zDate=='-' ){
+ sgn = -1;
+ }else if( *zDate=='+' ){
+ sgn = +1;
+ }else{
+ return *zDate!=0;
+ }
+ zDate++;
+ if( getDigits(zDate, 2, 0, 14, ':', &nHr, 2, 0, 59, 0, &nMn)!=2 ){
+ return 1;
+ }
+ zDate += 5;
+ p->tz = sgn*(nMn + nHr*60);
+ while( isspace(*(u8*)zDate) ){ zDate++; }
+ return *zDate!=0;
+}
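Worked example: for the suffix "+05:30", the sign is +1, getDigits() yields nHr==5 and nMn==30, and p->tz becomes +(30 + 5*60) = 330 minutes. An empty or all-whitespace suffix leaves p->tz at 0 and returns 0, while trailing garbage after the offset makes the function return non-zero.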
+
+/*
+** Parse times of the form HH:MM or HH:MM:SS or HH:MM:SS.FFFF.
+** The HH, MM, and SS must each be exactly 2 digits. The
+** fractional seconds FFFF can be one or more digits.
+**
+** Return 1 if there is a parsing error and 0 on success.
+*/
+static int parseHhMmSs(const char *zDate, DateTime *p){
+ int h, m, s;
+ double ms = 0.0;
+ if( getDigits(zDate, 2, 0, 24, ':', &h, 2, 0, 59, 0, &m)!=2 ){
+ return 1;
+ }
+ zDate += 5;
+ if( *zDate==':' ){
+ zDate++;
+ if( getDigits(zDate, 2, 0, 59, 0, &s)!=1 ){
+ return 1;
+ }
+ zDate += 2;
+ if( *zDate=='.' && isdigit((u8)zDate[1]) ){
+ double rScale = 1.0;
+ zDate++;
+ while( isdigit(*(u8*)zDate) ){
+ ms = ms*10.0 + *zDate - '0';
+ rScale *= 10.0;
+ zDate++;
+ }
+ ms /= rScale;
+ }
+ }else{
+ s = 0;
+ }
+ p->validJD = 0;
+ p->validHMS = 1;
+ p->h = h;
+ p->m = m;
+ p->s = s + ms;
+ if( parseTimezone(zDate, p) ) return 1;
+ p->validTZ = p->tz!=0;
+ return 0;
+}
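Worked example: "11:30:45.25" parses as h==11 and m==30; the ':' that follows brings in s==45, and the fractional loop accumulates ms==25 with rScale==100, so p->s ends up as 45.25. With no timezone text after the seconds, parseTimezone() succeeds with tz==0 and validTZ stays clear.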
+
+/*
+** Convert from YYYY-MM-DD HH:MM:SS to julian day. We always assume
+** that the YYYY-MM-DD is according to the Gregorian calendar.
+**
+** Reference: Meeus page 61
+*/
+static void computeJD(DateTime *p){
+ int Y, M, D, A, B, X1, X2;
+
+ if( p->validJD ) return;
+ if( p->validYMD ){
+ Y = p->Y;
+ M = p->M;
+ D = p->D;
+ }else{
+ Y = 2000; /* If no YMD specified, assume 2000-Jan-01 */
+ M = 1;
+ D = 1;
+ }
+ if( M<=2 ){
+ Y--;
+ M += 12;
+ }
+ A = Y/100;
+ B = 2 - A + (A/4);
+ X1 = 365.25*(Y+4716);
+ X2 = 30.6001*(M+1);
+ p->rJD = X1 + X2 + D + B - 1524.5;
+ p->validJD = 1;
+ if( p->validHMS ){
+ p->rJD += (p->h*3600.0 + p->m*60.0 + p->s)/86400.0;
+ if( p->validTZ ){
+ p->rJD -= p->tz*60/86400.0;
+ p->validYMD = 0;
+ p->validHMS = 0;
+ p->validTZ = 0;
+ }
+ }
+}
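The 2000-01-01 value quoted in the header comment falls out of this formula directly. A minimal standalone check, using the same integer truncation as the code above:

    #include <stdio.h>

    int main(void){
      int Y = 2000, M = 1, D = 1, A, B, X1, X2;
      double rJD;
      if( M<=2 ){ Y--; M += 12; }   /* January and February count as months 13/14 of the prior year */
      A = Y/100;                    /* A = 19 */
      B = 2 - A + (A/4);            /* B = -13 */
      X1 = 365.25*(Y+4716);         /* X1 = 2452653 */
      X2 = 30.6001*(M+1);           /* X2 = 428 */
      rJD = X1 + X2 + D + B - 1524.5;
      printf("%.1f\n", rJD);        /* prints 2451544.5 */
      return 0;
    }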
+
+/*
+** Parse dates of the form
+**
+** YYYY-MM-DD HH:MM:SS.FFF
+** YYYY-MM-DD HH:MM:SS
+** YYYY-MM-DD HH:MM
+** YYYY-MM-DD
+**
+** Write the result into the DateTime structure and return 0
+** on success and 1 if the input string is not a well-formed
+** date.
+*/
+static int parseYyyyMmDd(const char *zDate, DateTime *p){
+ int Y, M, D, neg;
+
+ if( zDate[0]=='-' ){
+ zDate++;
+ neg = 1;
+ }else{
+ neg = 0;
+ }
+ if( getDigits(zDate,4,0,9999,'-',&Y,2,1,12,'-',&M,2,1,31,0,&D)!=3 ){
+ return 1;
+ }
+ zDate += 10;
+ while( isspace(*(u8*)zDate) || 'T'==*(u8*)zDate ){ zDate++; }
+ if( parseHhMmSs(zDate, p)==0 ){
+ /* We got the time */
+ }else if( *zDate==0 ){
+ p->validHMS = 0;
+ }else{
+ return 1;
+ }
+ p->validJD = 0;
+ p->validYMD = 1;
+ p->Y = neg ? -Y : Y;
+ p->M = M;
+ p->D = D;
+ if( p->validTZ ){
+ computeJD(p);
+ }
+ return 0;
+}
+
+/*
+** Attempt to parse the given string into a Julian Day Number. Return
+** the number of errors.
+**
+** The following are acceptable forms for the input string:
+**
+** YYYY-MM-DD HH:MM:SS.FFF +/-HH:MM
+** DDDD.DD
+** now
+**
+** In the first form, the +/-HH:MM is always optional. The fractional
+** seconds extension (the ".FFF") is optional. The seconds portion
+** (":SS.FFF") is option. The year and date can be omitted as long
+** as there is a time string. The time string can be omitted as long
+** as there is a year and date.
+*/
+static int parseDateOrTime(const char *zDate, DateTime *p){
+ memset(p, 0, sizeof(*p));
+ if( parseYyyyMmDd(zDate,p)==0 ){
+ return 0;
+ }else if( parseHhMmSs(zDate, p)==0 ){
+ return 0;
+ }else if( sqlite3StrICmp(zDate,"now")==0){
+ double r;
+ sqlite3OsCurrentTime(&r);
+ p->rJD = r;
+ p->validJD = 1;
+ return 0;
+ }else if( sqlite3IsNumber(zDate, 0, SQLITE_UTF8) ){
+ getValue(zDate, &p->rJD);
+ p->validJD = 1;
+ return 0;
+ }
+ return 1;
+}
+
+/*
+** Compute the Year, Month, and Day from the julian day number.
+*/
+static void computeYMD(DateTime *p){
+ int Z, A, B, C, D, E, X1;
+ if( p->validYMD ) return;
+ if( !p->validJD ){
+ p->Y = 2000;
+ p->M = 1;
+ p->D = 1;
+ }else{
+ Z = p->rJD + 0.5;
+ A = (Z - 1867216.25)/36524.25;
+ A = Z + 1 + A - (A/4);
+ B = A + 1524;
+ C = (B - 122.1)/365.25;
+ D = 365.25*C;
+ E = (B-D)/30.6001;
+ X1 = 30.6001*E;
+ p->D = B - D - X1;
+ p->M = E<14 ? E-1 : E-13;
+ p->Y = p->M>2 ? C - 4716 : C - 4715;
+ }
+ p->validYMD = 1;
+}
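Running the same 2000-01-01 example backwards through this routine: rJD = 2451544.5 gives Z = 2451545, A = 15 and then A = 2451558, B = 2453082, C = 6715, D = 2452653, E = 14, X1 = 428, so the day works out to 1, the month to E-13 = 1, and the year to C-4715 = 2000, recovering 2000-01-01 exactly.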
+
+/*
+** Compute the Hour, Minute, and Seconds from the julian day number.
+*/
+static void computeHMS(DateTime *p){
+ int Z, s;
+ if( p->validHMS ) return;
+ computeJD(p);
+ Z = p->rJD + 0.5;
+ s = (p->rJD + 0.5 - Z)*86400000.0 + 0.5;
+ p->s = 0.001*s;
+ s = p->s;
+ p->s -= s;
+ p->h = s/3600;
+ s -= p->h*3600;
+ p->m = s/60;
+ p->s += s - p->m*60;
+ p->validHMS = 1;
+}
+
+/*
+** Compute both YMD and HMS
+*/
+static void computeYMD_HMS(DateTime *p){
+ computeYMD(p);
+ computeHMS(p);
+}
+
+/*
+** Clear the YMD and HMS and the TZ
+*/
+static void clearYMD_HMS_TZ(DateTime *p){
+ p->validYMD = 0;
+ p->validHMS = 0;
+ p->validTZ = 0;
+}
+
+/*
+** Compute the difference (in days) between localtime and UTC (a.k.a. GMT)
+** for the time value p where p is in UTC.
+*/
+static double localtimeOffset(DateTime *p){
+ DateTime x, y;
+ time_t t;
+ x = *p;
+ computeYMD_HMS(&x);
+ if( x.Y<1971 || x.Y>=2038 ){
+ x.Y = 2000;
+ x.M = 1;
+ x.D = 1;
+ x.h = 0;
+ x.m = 0;
+ x.s = 0.0;
+ } else {
+ int s = x.s + 0.5;
+ x.s = s;
+ }
+ x.tz = 0;
+ x.validJD = 0;
+ computeJD(&x);
+ t = (x.rJD-2440587.5)*86400.0 + 0.5;
+#ifdef HAVE_LOCALTIME_R
+ {
+ struct tm sLocal;
+ localtime_r(&t, &sLocal);
+ y.Y = sLocal.tm_year + 1900;
+ y.M = sLocal.tm_mon + 1;
+ y.D = sLocal.tm_mday;
+ y.h = sLocal.tm_hour;
+ y.m = sLocal.tm_min;
+ y.s = sLocal.tm_sec;
+ }
+#else
+ {
+ struct tm *pTm;
+ sqlite3OsEnterMutex();
+ pTm = localtime(&t);
+ y.Y = pTm->tm_year + 1900;
+ y.M = pTm->tm_mon + 1;
+ y.D = pTm->tm_mday;
+ y.h = pTm->tm_hour;
+ y.m = pTm->tm_min;
+ y.s = pTm->tm_sec;
+ sqlite3OsLeaveMutex();
+ }
+#endif
+ y.validYMD = 1;
+ y.validHMS = 1;
+ y.validJD = 0;
+ y.validTZ = 0;
+ computeJD(&y);
+ return y.rJD - x.rJD;
+}
+
+/*
+** Process a modifier to a date-time stamp. The modifiers are
+** as follows:
+**
+** NNN days
+** NNN hours
+** NNN minutes
+** NNN.NNNN seconds
+** NNN months
+** NNN years
+** start of month
+** start of year
+** start of week
+** start of day
+** weekday N
+** unixepoch
+** localtime
+** utc
+**
+** Return 0 on success and 1 if there is any kind of error.
+*/
+static int parseModifier(const char *zMod, DateTime *p){
+ int rc = 1;
+ int n;
+ double r;
+ char *z, zBuf[30];
+ z = zBuf;
+ for(n=0; n<sizeof(zBuf)-1 && zMod[n]; n++){
+ z[n] = tolower(zMod[n]);
+ }
+ z[n] = 0;
+ switch( z[0] ){
+ case 'l': {
+ /* localtime
+ **
+ ** Assuming the current time value is UTC (a.k.a. GMT), shift it to
+ ** show local time.
+ */
+ if( strcmp(z, "localtime")==0 ){
+ computeJD(p);
+ p->rJD += localtimeOffset(p);
+ clearYMD_HMS_TZ(p);
+ rc = 0;
+ }
+ break;
+ }
+ case 'u': {
+ /*
+ ** unixepoch
+ **
+ ** Treat the current value of p->rJD as the number of
+ ** seconds since 1970. Convert to a real julian day number.
+ */
+ if( strcmp(z, "unixepoch")==0 && p->validJD ){
+ p->rJD = p->rJD/86400.0 + 2440587.5;
+ clearYMD_HMS_TZ(p);
+ rc = 0;
+ }else if( strcmp(z, "utc")==0 ){
+ double c1;
+ computeJD(p);
+ c1 = localtimeOffset(p);
+ p->rJD -= c1;
+ clearYMD_HMS_TZ(p);
+ p->rJD += c1 - localtimeOffset(p);
+ rc = 0;
+ }
+ break;
+ }
+ case 'w': {
+ /*
+ ** weekday N
+ **
+ ** Move the date to the same time on the next occurrence of
+ ** weekday N where 0==Sunday, 1==Monday, and so forth. If the
+ ** date is already on the appropriate weekday, this is a no-op.
+ */
+ if( strncmp(z, "weekday ", 8)==0 && getValue(&z[8],&r)>0
+ && (n=r)==r && n>=0 && r<7 ){
+ int Z;
+ computeYMD_HMS(p);
+ p->validTZ = 0;
+ p->validJD = 0;
+ computeJD(p);
+ Z = p->rJD + 1.5;
+ Z %= 7;
+ if( Z>n ) Z -= 7;
+ p->rJD += n - Z;
+ clearYMD_HMS_TZ(p);
+ rc = 0;
+ }
+ break;
+ }
+ case 's': {
+ /*
+ ** start of TTTTT
+ **
+ ** Move the date backwards to the beginning of the current day,
+ ** month, or year.
+ */
+ if( strncmp(z, "start of ", 9)!=0 ) break;
+ z += 9;
+ computeYMD(p);
+ p->validHMS = 1;
+ p->h = p->m = 0;
+ p->s = 0.0;
+ p->validTZ = 0;
+ p->validJD = 0;
+ if( strcmp(z,"month")==0 ){
+ p->D = 1;
+ rc = 0;
+ }else if( strcmp(z,"year")==0 ){
+ computeYMD(p);
+ p->M = 1;
+ p->D = 1;
+ rc = 0;
+ }else if( strcmp(z,"day")==0 ){
+ rc = 0;
+ }
+ break;
+ }
+ case '+':
+ case '-':
+ case '0':
+ case '1':
+ case '2':
+ case '3':
+ case '4':
+ case '5':
+ case '6':
+ case '7':
+ case '8':
+ case '9': {
+ n = getValue(z, &r);
+ if( n<=0 ) break;
+ if( z[n]==':' ){
+ /* A modifier of the form (+|-)HH:MM:SS.FFF adds (or subtracts) the
+ ** specified number of hours, minutes, seconds, and fractional seconds
+ ** to the time. The ".FFF" may be omitted. The ":SS.FFF" may be
+ ** omitted.
+ */
+ const char *z2 = z;
+ DateTime tx;
+ int day;
+ if( !isdigit(*(u8*)z2) ) z2++;
+ memset(&tx, 0, sizeof(tx));
+ if( parseHhMmSs(z2, &tx) ) break;
+ computeJD(&tx);
+ tx.rJD -= 0.5;
+ day = (int)tx.rJD;
+ tx.rJD -= day;
+ if( z[0]=='-' ) tx.rJD = -tx.rJD;
+ computeJD(p);
+ clearYMD_HMS_TZ(p);
+ p->rJD += tx.rJD;
+ rc = 0;
+ break;
+ }
+ z += n;
+ while( isspace(*(u8*)z) ) z++;
+ n = strlen(z);
+ if( n>10 || n<3 ) break;
+ if( z[n-1]=='s' ){ z[n-1] = 0; n--; }
+ computeJD(p);
+ rc = 0;
+ if( n==3 && strcmp(z,"day")==0 ){
+ p->rJD += r;
+ }else if( n==4 && strcmp(z,"hour")==0 ){
+ p->rJD += r/24.0;
+ }else if( n==6 && strcmp(z,"minute")==0 ){
+ p->rJD += r/(24.0*60.0);
+ }else if( n==6 && strcmp(z,"second")==0 ){
+ p->rJD += r/(24.0*60.0*60.0);
+ }else if( n==5 && strcmp(z,"month")==0 ){
+ int x, y;
+ computeYMD_HMS(p);
+ p->M += r;
+ x = p->M>0 ? (p->M-1)/12 : (p->M-12)/12;
+ p->Y += x;
+ p->M -= x*12;
+ p->validJD = 0;
+ computeJD(p);
+ y = r;
+ if( y!=r ){
+ p->rJD += (r - y)*30.0;
+ }
+ }else if( n==4 && strcmp(z,"year")==0 ){
+ computeYMD_HMS(p);
+ p->Y += r;
+ p->validJD = 0;
+ computeJD(p);
+ }else{
+ rc = 1;
+ }
+ clearYMD_HMS_TZ(p);
+ break;
+ }
+ default: {
+ break;
+ }
+ }
+ return rc;
+}
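The modifier list above maps directly onto SQL strings passed to the date functions. A hedged usage sketch through the public API (sqlite3_open()/sqlite3_exec() against an in-memory database; the callback simply prints the single result column):

    #include <stdio.h>
    #include "sqlite3.h"

    static int printRow(void *notUsed, int nCol, char **azVal, char **azName){
      (void)notUsed; (void)nCol; (void)azName;
      printf("%s\n", azVal[0] ? azVal[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
      /* "start of month", "+1 month", "-1 day" compose to the last day of the
      ** month containing the given date: prints 2006-12-31. */
      sqlite3_exec(db,
          "SELECT date('2006-12-19','start of month','+1 month','-1 day');",
          printRow, 0, 0);
      sqlite3_close(db);
      return 0;
    }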
+
+/*
+** Process time function arguments. argv[0] is a date-time stamp.
+** argv[1] and following are modifiers. Parse them all and write
+** the resulting time into the DateTime structure p. Return 0
+** on success and 1 if there are any errors.
+*/
+static int isDate(int argc, sqlite3_value **argv, DateTime *p){
+ int i;
+ if( argc==0 ) return 1;
+ if( SQLITE_NULL==sqlite3_value_type(argv[0]) ||
+ parseDateOrTime((char*)sqlite3_value_text(argv[0]), p) ) return 1;
+ for(i=1; i<argc; i++){
+ if( SQLITE_NULL==sqlite3_value_type(argv[i]) ||
+ parseModifier((char*)sqlite3_value_text(argv[i]), p) ) return 1;
+ }
+ return 0;
+}
+
+
+/*
+** The following routines implement the various date and time functions
+** of SQLite.
+*/
+
+/*
+** julianday( TIMESTRING, MOD, MOD, ...)
+**
+** Return the julian day number of the date specified in the arguments
+*/
+static void juliandayFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ DateTime x;
+ if( isDate(argc, argv, &x)==0 ){
+ computeJD(&x);
+ sqlite3_result_double(context, x.rJD);
+ }
+}
+
+/*
+** datetime( TIMESTRING, MOD, MOD, ...)
+**
+** Return YYYY-MM-DD HH:MM:SS
+*/
+static void datetimeFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ DateTime x;
+ if( isDate(argc, argv, &x)==0 ){
+ char zBuf[100];
+ computeYMD_HMS(&x);
+ sprintf(zBuf, "%04d-%02d-%02d %02d:%02d:%02d",x.Y, x.M, x.D, x.h, x.m,
+ (int)(x.s));
+ sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT);
+ }
+}
+
+/*
+** time( TIMESTRING, MOD, MOD, ...)
+**
+** Return HH:MM:SS
+*/
+static void timeFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ DateTime x;
+ if( isDate(argc, argv, &x)==0 ){
+ char zBuf[100];
+ computeHMS(&x);
+ sprintf(zBuf, "%02d:%02d:%02d", x.h, x.m, (int)x.s);
+ sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT);
+ }
+}
+
+/*
+** date( TIMESTRING, MOD, MOD, ...)
+**
+** Return YYYY-MM-DD
+*/
+static void dateFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ DateTime x;
+ if( isDate(argc, argv, &x)==0 ){
+ char zBuf[100];
+ computeYMD(&x);
+ sprintf(zBuf, "%04d-%02d-%02d", x.Y, x.M, x.D);
+ sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT);
+ }
+}
+
+/*
+** strftime( FORMAT, TIMESTRING, MOD, MOD, ...)
+**
+** Return a string described by FORMAT. Conversions as follows:
+**
+** %d day of month
+** %f ** fractional seconds SS.SSS
+** %H hour 00-24
+** %j day of year 001-366
+** %J ** Julian day number
+** %m month 01-12
+** %M minute 00-59
+** %s seconds since 1970-01-01
+** %S seconds 00-59
+** %w day of week 0-6 sunday==0
+** %W week of year 00-53
+** %Y year 0000-9999
+** %% %
+*/
+static void strftimeFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ DateTime x;
+ int n, i, j;
+ char *z;
+ const char *zFmt = (const char*)sqlite3_value_text(argv[0]);
+ char zBuf[100];
+ if( zFmt==0 || isDate(argc-1, argv+1, &x) ) return;
+ for(i=0, n=1; zFmt[i]; i++, n++){
+ if( zFmt[i]=='%' ){
+ switch( zFmt[i+1] ){
+ case 'd':
+ case 'H':
+ case 'm':
+ case 'M':
+ case 'S':
+ case 'W':
+ n++;
+ /* fall thru */
+ case 'w':
+ case '%':
+ break;
+ case 'f':
+ n += 8;
+ break;
+ case 'j':
+ n += 3;
+ break;
+ case 'Y':
+ n += 8;
+ break;
+ case 's':
+ case 'J':
+ n += 50;
+ break;
+ default:
+ return; /* ERROR. return a NULL */
+ }
+ i++;
+ }
+ }
+ if( n<sizeof(zBuf) ){
+ z = zBuf;
+ }else{
+ z = sqliteMalloc( n );
+ if( z==0 ) return;
+ }
+ computeJD(&x);
+ computeYMD_HMS(&x);
+ for(i=j=0; zFmt[i]; i++){
+ if( zFmt[i]!='%' ){
+ z[j++] = zFmt[i];
+ }else{
+ i++;
+ switch( zFmt[i] ){
+ case 'd': sprintf(&z[j],"%02d",x.D); j+=2; break;
+ case 'f': {
+ double s = x.s;
+ if( s>59.999 ) s = 59.999;
+ sqlite3_snprintf(7, &z[j],"%02.3f", s);
+ j += strlen(&z[j]);
+ break;
+ }
+ case 'H': sprintf(&z[j],"%02d",x.h); j+=2; break;
+ case 'W': /* Fall thru */
+ case 'j': {
+ int nDay; /* Number of days since 1st day of year */
+ DateTime y = x;
+ y.validJD = 0;
+ y.M = 1;
+ y.D = 1;
+ computeJD(&y);
+ nDay = x.rJD - y.rJD;
+ if( zFmt[i]=='W' ){
+ int wd; /* 0=Monday, 1=Tuesday, ... 6=Sunday */
+ wd = ((int)(x.rJD+0.5)) % 7;
+ sprintf(&z[j],"%02d",(nDay+7-wd)/7);
+ j += 2;
+ }else{
+ sprintf(&z[j],"%03d",nDay+1);
+ j += 3;
+ }
+ break;
+ }
+ case 'J': sprintf(&z[j],"%.16g",x.rJD); j+=strlen(&z[j]); break;
+ case 'm': sprintf(&z[j],"%02d",x.M); j+=2; break;
+ case 'M': sprintf(&z[j],"%02d",x.m); j+=2; break;
+ case 's': {
+ sprintf(&z[j],"%d",(int)((x.rJD-2440587.5)*86400.0 + 0.5));
+ j += strlen(&z[j]);
+ break;
+ }
+ case 'S': sprintf(&z[j],"%02d",(int)(x.s+0.5)); j+=2; break;
+ case 'w': z[j++] = (((int)(x.rJD+1.5)) % 7) + '0'; break;
+ case 'Y': sprintf(&z[j],"%04d",x.Y); j+=strlen(&z[j]); break;
+ case '%': z[j++] = '%'; break;
+ }
+ }
+ }
+ z[j] = 0;
+ sqlite3_result_text(context, z, -1, SQLITE_TRANSIENT);
+ if( z!=zBuf ){
+ sqliteFree(z);
+ }
+}
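A few concrete substitutions using only the conversions listed in the table above:

    strftime('%d/%m/%Y', '2006-12-19')   ->  '19/12/2006'
    strftime('%s',       '1970-01-02')   ->  '86400'
    strftime('%w',       '2006-12-19')   ->  '2'    (a Tuesday; Sunday is 0)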
+
+/*
+** current_time()
+**
+** This function returns the same value as time('now').
+*/
+static void ctimeFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ sqlite3_value *pVal = sqlite3ValueNew();
+ if( pVal ){
+ sqlite3ValueSetStr(pVal, -1, "now", SQLITE_UTF8, SQLITE_STATIC);
+ timeFunc(context, 1, &pVal);
+ sqlite3ValueFree(pVal);
+ }
+}
+
+/*
+** current_date()
+**
+** This function returns the same value as date('now').
+*/
+static void cdateFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ sqlite3_value *pVal = sqlite3ValueNew();
+ if( pVal ){
+ sqlite3ValueSetStr(pVal, -1, "now", SQLITE_UTF8, SQLITE_STATIC);
+ dateFunc(context, 1, &pVal);
+ sqlite3ValueFree(pVal);
+ }
+}
+
+/*
+** current_timestamp()
+**
+** This function returns the same value as datetime('now').
+*/
+static void ctimestampFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ sqlite3_value *pVal = sqlite3ValueNew();
+ if( pVal ){
+ sqlite3ValueSetStr(pVal, -1, "now", SQLITE_UTF8, SQLITE_STATIC);
+ datetimeFunc(context, 1, &pVal);
+ sqlite3ValueFree(pVal);
+ }
+}
+#endif /* !defined(SQLITE_OMIT_DATETIME_FUNCS) */
+
+#ifdef SQLITE_OMIT_DATETIME_FUNCS
+/*
+** If the library is compiled to omit the full-scale date and time
+** handling (to get a smaller binary), the following minimal version
+** of the functions current_time(), current_date() and current_timestamp()
+** is included instead. This is to support column declarations that
+** include "DEFAULT CURRENT_TIME" etc.
+**
+** This function uses the C-library functions time(), gmtime()
+** and strftime(). The format string to pass to strftime() is supplied
+** as the user-data for the function.
+*/
+static void currentTimeFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ time_t t;
+ char *zFormat = (char *)sqlite3_user_data(context);
+ char zBuf[20];
+
+ time(&t);
+#ifdef SQLITE_TEST
+ {
+ extern int sqlite3_current_time; /* See os_XXX.c */
+ if( sqlite3_current_time ){
+ t = sqlite3_current_time;
+ }
+ }
+#endif
+
+#ifdef HAVE_GMTIME_R
+ {
+ struct tm sNow;
+ gmtime_r(&t, &sNow);
+ strftime(zBuf, 20, zFormat, &sNow);
+ }
+#else
+ {
+ struct tm *pTm;
+ sqlite3OsEnterMutex();
+ pTm = gmtime(&t);
+ strftime(zBuf, 20, zFormat, pTm);
+ sqlite3OsLeaveMutex();
+ }
+#endif
+
+ sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT);
+}
+#endif
+
+/*
+** This function registers all of the above C functions as SQL
+** functions. This should be the only routine in this file with
+** external linkage.
+*/
+void sqlite3RegisterDateTimeFunctions(sqlite3 *db){
+#ifndef SQLITE_OMIT_DATETIME_FUNCS
+ static const struct {
+ char *zName;
+ int nArg;
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**);
+ } aFuncs[] = {
+ { "julianday", -1, juliandayFunc },
+ { "date", -1, dateFunc },
+ { "time", -1, timeFunc },
+ { "datetime", -1, datetimeFunc },
+ { "strftime", -1, strftimeFunc },
+ { "current_time", 0, ctimeFunc },
+ { "current_timestamp", 0, ctimestampFunc },
+ { "current_date", 0, cdateFunc },
+ };
+ int i;
+
+ for(i=0; i<sizeof(aFuncs)/sizeof(aFuncs[0]); i++){
+ sqlite3CreateFunc(db, aFuncs[i].zName, aFuncs[i].nArg,
+ SQLITE_UTF8, 0, aFuncs[i].xFunc, 0, 0);
+ }
+#else
+ static const struct {
+ char *zName;
+ char *zFormat;
+ } aFuncs[] = {
+ { "current_time", "%H:%M:%S" },
+ { "current_date", "%Y-%m-%d" },
+ { "current_timestamp", "%Y-%m-%d %H:%M:%S" }
+ };
+ int i;
+
+ for(i=0; i<sizeof(aFuncs)/sizeof(aFuncs[0]); i++){
+ sqlite3CreateFunc(db, aFuncs[i].zName, 0, SQLITE_UTF8,
+ aFuncs[i].zFormat, currentTimeFunc, 0, 0);
+ }
+#endif
+}
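The registration loop above goes through the internal sqlite3CreateFunc() entry point; an application adds its own SQL functions in the same spirit through the public wrapper. A minimal hedged sketch (the function name "half" and its behaviour are illustrative only):

    #include "sqlite3.h"

    /* half(X): return X divided by two, as a floating point value. */
    static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      (void)argc;
      sqlite3_result_double(ctx, 0.5*sqlite3_value_double(argv[0]));
    }

    int registerHalf(sqlite3 *db){
      return sqlite3_create_function(db, "half", 1, SQLITE_UTF8, 0,
                                     halfFunc, 0, 0);
    }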
Added: freeswitch/trunk/libs/sqlite/src/delete.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/delete.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,464 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains C code routines that are called by the parser
+** in order to generate code for DELETE FROM statements.
+**
+** $Id: delete.c,v 1.127 2006/06/19 03:05:10 danielk1977 Exp $
+*/
+#include "sqliteInt.h"
+
+/*
+** Look up every table that is named in pSrc. If any table is not found,
+** add an error message to pParse->zErrMsg and return NULL. If all tables
+** are found, return a pointer to the last table.
+*/
+Table *sqlite3SrcListLookup(Parse *pParse, SrcList *pSrc){
+ Table *pTab = 0;
+ int i;
+ struct SrcList_item *pItem;
+ for(i=0, pItem=pSrc->a; i<pSrc->nSrc; i++, pItem++){
+ pTab = sqlite3LocateTable(pParse, pItem->zName, pItem->zDatabase);
+ sqlite3DeleteTable(pParse->db, pItem->pTab);
+ pItem->pTab = pTab;
+ if( pTab ){
+ pTab->nRef++;
+ }
+ }
+ return pTab;
+}
+
+/*
+** Check to make sure the given table is writable. If it is not
+** writable, generate an error message and return 1. If it is
+** writable return 0;
+*/
+int sqlite3IsReadOnly(Parse *pParse, Table *pTab, int viewOk){
+ if( (pTab->readOnly && (pParse->db->flags & SQLITE_WriteSchema)==0
+ && pParse->nested==0)
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ || (pTab->pMod && pTab->pMod->pModule->xUpdate==0)
+#endif
+ ){
+ sqlite3ErrorMsg(pParse, "table %s may not be modified", pTab->zName);
+ return 1;
+ }
+#ifndef SQLITE_OMIT_VIEW
+ if( !viewOk && pTab->pSelect ){
+ sqlite3ErrorMsg(pParse,"cannot modify %s because it is a view",pTab->zName);
+ return 1;
+ }
+#endif
+ return 0;
+}
+
+/*
+** Generate code that will open a table for reading.
+*/
+void sqlite3OpenTable(
+ Parse *p, /* Generate code into this VDBE */
+ int iCur, /* The cursor number of the table */
+ int iDb, /* The database index in sqlite3.aDb[] */
+ Table *pTab, /* The table to be opened */
+ int opcode /* OP_OpenRead or OP_OpenWrite */
+){
+ Vdbe *v;
+ if( IsVirtual(pTab) ) return;
+ v = sqlite3GetVdbe(p);
+ assert( opcode==OP_OpenWrite || opcode==OP_OpenRead );
+ sqlite3TableLock(p, iDb, pTab->tnum, (opcode==OP_OpenWrite), pTab->zName);
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ VdbeComment((v, "# %s", pTab->zName));
+ sqlite3VdbeAddOp(v, opcode, iCur, pTab->tnum);
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, iCur, pTab->nCol);
+}
+
+
+/*
+** Generate code for a DELETE FROM statement.
+**
+** DELETE FROM table_wxyz WHERE a<5 AND b NOT NULL;
+** \________/ \________________/
+** pTabList pWhere
+*/
+void sqlite3DeleteFrom(
+ Parse *pParse, /* The parser context */
+ SrcList *pTabList, /* The table from which we should delete things */
+ Expr *pWhere /* The WHERE clause. May be null */
+){
+ Vdbe *v; /* The virtual database engine */
+ Table *pTab; /* The table from which records will be deleted */
+ const char *zDb; /* Name of database holding pTab */
+ int end, addr = 0; /* A couple addresses of generated code */
+ int i; /* Loop counter */
+ WhereInfo *pWInfo; /* Information about the WHERE clause */
+ Index *pIdx; /* For looping over indices of the table */
+ int iCur; /* VDBE Cursor number for pTab */
+ sqlite3 *db; /* Main database structure */
+ AuthContext sContext; /* Authorization context */
+ int oldIdx = -1; /* Cursor for the OLD table of AFTER triggers */
+ NameContext sNC; /* Name context to resolve expressions in */
+ int iDb;
+
+#ifndef SQLITE_OMIT_TRIGGER
+ int isView; /* True if attempting to delete from a view */
+ int triggers_exist = 0; /* True if any triggers exist */
+#endif
+
+ sContext.pParse = 0;
+ if( pParse->nErr || sqlite3MallocFailed() ){
+ goto delete_from_cleanup;
+ }
+ db = pParse->db;
+ assert( pTabList->nSrc==1 );
+
+ /* Locate the table which we want to delete. This table has to be
+ ** put in an SrcList structure because some of the subroutines we
+ ** will be calling are designed to work with multiple tables and expect
+ ** an SrcList* parameter instead of just a Table* parameter.
+ */
+ pTab = sqlite3SrcListLookup(pParse, pTabList);
+ if( pTab==0 ) goto delete_from_cleanup;
+
+ /* Figure out if we have any triggers and if the table being
+ ** deleted from is a view
+ */
+#ifndef SQLITE_OMIT_TRIGGER
+ triggers_exist = sqlite3TriggersExist(pParse, pTab, TK_DELETE, 0);
+ isView = pTab->pSelect!=0;
+#else
+# define triggers_exist 0
+# define isView 0
+#endif
+#ifdef SQLITE_OMIT_VIEW
+# undef isView
+# define isView 0
+#endif
+
+ if( sqlite3IsReadOnly(pParse, pTab, triggers_exist) ){
+ goto delete_from_cleanup;
+ }
+ iDb = sqlite3SchemaToIndex(db, pTab->pSchema);
+ assert( iDb<db->nDb );
+ zDb = db->aDb[iDb].zName;
+ if( sqlite3AuthCheck(pParse, SQLITE_DELETE, pTab->zName, 0, zDb) ){
+ goto delete_from_cleanup;
+ }
+
+ /* If pTab is really a view, make sure it has been initialized.
+ */
+ if( sqlite3ViewGetColumnNames(pParse, pTab) ){
+ goto delete_from_cleanup;
+ }
+
+ /* Allocate a cursor used to store the old.* data for a trigger.
+ */
+ if( triggers_exist ){
+ oldIdx = pParse->nTab++;
+ }
+
+ /* Resolve the column names in the WHERE clause.
+ */
+ assert( pTabList->nSrc==1 );
+ iCur = pTabList->a[0].iCursor = pParse->nTab++;
+ memset(&sNC, 0, sizeof(sNC));
+ sNC.pParse = pParse;
+ sNC.pSrcList = pTabList;
+ if( sqlite3ExprResolveNames(&sNC, pWhere) ){
+ goto delete_from_cleanup;
+ }
+
+ /* Start the view context
+ */
+ if( isView ){
+ sqlite3AuthContextPush(pParse, &sContext, pTab->zName);
+ }
+
+ /* Begin generating code.
+ */
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ){
+ goto delete_from_cleanup;
+ }
+ if( pParse->nested==0 ) sqlite3VdbeCountChanges(v);
+ sqlite3BeginWriteOperation(pParse, triggers_exist, iDb);
+
+ /* If we are trying to delete from a view, realize that view into
+ ** an ephemeral table.
+ */
+ if( isView ){
+ Select *pView = sqlite3SelectDup(pTab->pSelect);
+ sqlite3Select(pParse, pView, SRT_EphemTab, iCur, 0, 0, 0, 0);
+ sqlite3SelectDelete(pView);
+ }
+
+ /* Initialize the counter of the number of rows deleted, if
+ ** we are counting rows.
+ */
+ if( db->flags & SQLITE_CountRows ){
+ sqlite3VdbeAddOp(v, OP_Integer, 0, 0);
+ }
+
+ /* Special case: A DELETE without a WHERE clause deletes everything.
+ ** It is easier just to erase the whole table. Note, however, that
+ ** this means that the row change count will be incorrect.
+ */
+ if( pWhere==0 && !triggers_exist && !IsVirtual(pTab) ){
+ if( db->flags & SQLITE_CountRows ){
+ /* If counting rows deleted, just count the total number of
+ ** entries in the table. */
+ int endOfLoop = sqlite3VdbeMakeLabel(v);
+ int addr2;
+ if( !isView ){
+ sqlite3OpenTable(pParse, iCur, iDb, pTab, OP_OpenRead);
+ }
+ sqlite3VdbeAddOp(v, OP_Rewind, iCur, sqlite3VdbeCurrentAddr(v)+2);
+ addr2 = sqlite3VdbeAddOp(v, OP_AddImm, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Next, iCur, addr2);
+ sqlite3VdbeResolveLabel(v, endOfLoop);
+ sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
+ }
+ if( !isView ){
+ sqlite3VdbeAddOp(v, OP_Clear, pTab->tnum, iDb);
+ if( !pParse->nested ){
+ sqlite3VdbeChangeP3(v, -1, pTab->zName, P3_STATIC);
+ }
+ for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
+ assert( pIdx->pSchema==pTab->pSchema );
+ sqlite3VdbeAddOp(v, OP_Clear, pIdx->tnum, iDb);
+ }
+ }
+ }
+ /* The usual case: There is a WHERE clause so we have to scan through
+ ** the table and pick which records to delete.
+ */
+ else{
+ /* Begin the database scan
+ */
+ pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, 0);
+ if( pWInfo==0 ) goto delete_from_cleanup;
+
+ /* Remember the rowid of every item to be deleted.
+ */
+ sqlite3VdbeAddOp(v, IsVirtual(pTab) ? OP_VRowid : OP_Rowid, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_FifoWrite, 0, 0);
+ if( db->flags & SQLITE_CountRows ){
+ sqlite3VdbeAddOp(v, OP_AddImm, 1, 0);
+ }
+
+ /* End the database scan loop.
+ */
+ sqlite3WhereEnd(pWInfo);
+
+ /* Open the pseudo-table used to store OLD if there are triggers.
+ */
+ if( triggers_exist ){
+ sqlite3VdbeAddOp(v, OP_OpenPseudo, oldIdx, 0);
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, oldIdx, pTab->nCol);
+ }
+
+ /* Delete every item whose key was written to the list during the
+ ** database scan. We have to delete items after the scan is complete
+ ** because deleting an item can change the scan order.
+ */
+ end = sqlite3VdbeMakeLabel(v);
+
+ /* This is the beginning of the delete loop when there are
+ ** row triggers.
+ */
+ if( triggers_exist ){
+ addr = sqlite3VdbeAddOp(v, OP_FifoRead, 0, end);
+ if( !isView ){
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ sqlite3OpenTable(pParse, iCur, iDb, pTab, OP_OpenRead);
+ }
+ sqlite3VdbeAddOp(v, OP_MoveGe, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_Rowid, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_RowData, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_Insert, oldIdx, 0);
+ if( !isView ){
+ sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
+ }
+
+ (void)sqlite3CodeRowTrigger(pParse, TK_DELETE, 0, TRIGGER_BEFORE, pTab,
+ -1, oldIdx, (pParse->trigStack)?pParse->trigStack->orconf:OE_Default,
+ addr);
+ }
+
+ if( !isView ){
+ /* Open cursors for the table we are deleting from and all its
+ ** indices. If there are row triggers, this happens inside the
+ ** OP_FifoRead loop because the cursors all have to be closed
+ ** before the trigger fires. If there are no row triggers, the
+ ** cursors are opened only once, outside the loop.
+ */
+ sqlite3OpenTableAndIndices(pParse, pTab, iCur, OP_OpenWrite);
+
+ /* This is the beginning of the delete loop when there are no
+ ** row triggers */
+ if( !triggers_exist ){
+ addr = sqlite3VdbeAddOp(v, OP_FifoRead, 0, end);
+ }
+
+ /* Delete the row */
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( IsVirtual(pTab) ){
+ pParse->pVirtualLock = pTab;
+ sqlite3VdbeOp3(v, OP_VUpdate, 0, 1, (const char*)pTab->pVtab, P3_VTAB);
+ }else
+#endif
+ {
+ sqlite3GenerateRowDelete(db, v, pTab, iCur, pParse->nested==0);
+ }
+ }
+
+ /* If there are row triggers, close all cursors then invoke
+ ** the AFTER triggers
+ */
+ if( triggers_exist ){
+ if( !isView ){
+ for(i=1, pIdx=pTab->pIndex; pIdx; i++, pIdx=pIdx->pNext){
+ sqlite3VdbeAddOp(v, OP_Close, iCur + i, pIdx->tnum);
+ }
+ sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
+ }
+ (void)sqlite3CodeRowTrigger(pParse, TK_DELETE, 0, TRIGGER_AFTER, pTab, -1,
+ oldIdx, (pParse->trigStack)?pParse->trigStack->orconf:OE_Default,
+ addr);
+ }
+
+ /* End of the delete loop */
+ sqlite3VdbeAddOp(v, OP_Goto, 0, addr);
+ sqlite3VdbeResolveLabel(v, end);
+
+ /* Close the cursors after the loop if there are no row triggers */
+ if( !triggers_exist && !IsVirtual(pTab) ){
+ for(i=1, pIdx=pTab->pIndex; pIdx; i++, pIdx=pIdx->pNext){
+ sqlite3VdbeAddOp(v, OP_Close, iCur + i, pIdx->tnum);
+ }
+ sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
+ }
+ }
+
+ /*
+ ** Return the number of rows that were deleted. If this routine is
+ ** generating code because of a call to sqlite3NestedParse(), do not
+ ** invoke the callback function.
+ */
+ if( db->flags & SQLITE_CountRows && pParse->nested==0 && !pParse->trigStack ){
+ sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
+ sqlite3VdbeSetNumCols(v, 1);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "rows deleted", P3_STATIC);
+ }
+
+delete_from_cleanup:
+ sqlite3AuthContextPop(&sContext);
+ sqlite3SrcListDelete(pTabList);
+ sqlite3ExprDelete(pWhere);
+ return;
+}
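The truncate special case above is visible from the API: after a DELETE with no WHERE clause (and no triggers), the change counter may not reflect the rows actually erased, exactly as the comment warns. A hedged sketch, assuming a table t1 already exists on the open handle:

    #include <stdio.h>
    #include "sqlite3.h"

    int deleteAll(sqlite3 *db){
      int rc = sqlite3_exec(db, "DELETE FROM t1;", 0, 0, 0);
      if( rc==SQLITE_OK ){
        /* May report 0 even when rows were removed, because OP_Clear erases
        ** the whole table without counting unless row counting
        ** (SQLITE_CountRows) is enabled. */
        printf("changes reported: %d\n", sqlite3_changes(db));
      }
      return rc;
    }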
+
+/*
+** This routine generates VDBE code that causes a single row of a
+** single table to be deleted.
+**
+** The VDBE must be in a particular state when this routine is called.
+** These are the requirements:
+**
+** 1. A read/write cursor pointing to pTab, the table containing the row
+** to be deleted, must be opened as cursor number "base".
+**
+** 2. Read/write cursors for all indices of pTab must be open as
+** cursor number base+i for the i-th index.
+**
+** 3. The record number of the row to be deleted must be on the top
+** of the stack.
+**
+** This routine pops the top of the stack to remove the record number
+** and then generates code to remove both the table record and all index
+** entries that point to that record.
+*/
+void sqlite3GenerateRowDelete(
+ sqlite3 *db, /* The database containing the index */
+ Vdbe *v, /* Generate code into this VDBE */
+ Table *pTab, /* Table containing the row to be deleted */
+ int iCur, /* Cursor number for the table */
+ int count /* Increment the row change counter */
+){
+ int addr;
+ addr = sqlite3VdbeAddOp(v, OP_NotExists, iCur, 0);
+ sqlite3GenerateRowIndexDelete(v, pTab, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_Delete, iCur, (count?OPFLAG_NCHANGE:0));
+ if( count ){
+ sqlite3VdbeChangeP3(v, -1, pTab->zName, P3_STATIC);
+ }
+ sqlite3VdbeJumpHere(v, addr);
+}
+
+/*
+** This routine generates VDBE code that causes the deletion of all
+** index entries associated with a single row of a single table.
+**
+** The VDBE must be in a particular state when this routine is called.
+** These are the requirements:
+**
+** 1. A read/write cursor pointing to pTab, the table containing the row
+** to be deleted, must be opened as cursor number "iCur".
+**
+** 2. Read/write cursors for all indices of pTab must be open as
+** cursor number iCur+i for the i-th index.
+**
+** 3. The "iCur" cursor must be pointing to the row that is to be
+** deleted.
+*/
+void sqlite3GenerateRowIndexDelete(
+ Vdbe *v, /* Generate code into this VDBE */
+ Table *pTab, /* Table containing the row to be deleted */
+ int iCur, /* Cursor number for the table */
+ char *aIdxUsed /* Only delete if aIdxUsed!=0 && aIdxUsed[i]!=0 */
+){
+ int i;
+ Index *pIdx;
+
+ for(i=1, pIdx=pTab->pIndex; pIdx; i++, pIdx=pIdx->pNext){
+ if( aIdxUsed!=0 && aIdxUsed[i-1]==0 ) continue;
+ sqlite3GenerateIndexKey(v, pIdx, iCur);
+ sqlite3VdbeAddOp(v, OP_IdxDelete, iCur+i, 0);
+ }
+}
+
+/*
+** Generate code that will assemble an index key and put it on the top
+** of the stack. The key will be for index pIdx which is an index on pTab.
+** iCur is the index of a cursor open on the pTab table and pointing to
+** the entry that needs indexing.
+*/
+void sqlite3GenerateIndexKey(
+ Vdbe *v, /* Generate code into this VDBE */
+ Index *pIdx, /* The index for which to generate a key */
+ int iCur /* Cursor number for the pIdx->pTable table */
+){
+ int j;
+ Table *pTab = pIdx->pTable;
+
+ sqlite3VdbeAddOp(v, OP_Rowid, iCur, 0);
+ for(j=0; j<pIdx->nColumn; j++){
+ int idx = pIdx->aiColumn[j];
+ if( idx==pTab->iPKey ){
+ sqlite3VdbeAddOp(v, OP_Dup, j, 0);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Column, iCur, idx);
+ sqlite3ColumnDefault(v, pTab, idx);
+ }
+ }
+ sqlite3VdbeAddOp(v, OP_MakeIdxRec, pIdx->nColumn, 0);
+ sqlite3IndexAffinityStr(v, pIdx);
+}
Added: freeswitch/trunk/libs/sqlite/src/expr.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/expr.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,2356 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains routines used for analyzing expressions and
+** for generating VDBE code that evaluates expressions in SQLite.
+**
+** $Id: expr.c,v 1.268 2006/08/24 15:18:25 drh Exp $
+*/
+#include "sqliteInt.h"
+#include <ctype.h>
+
+/*
+** Return the 'affinity' of the expression pExpr if any.
+**
+** If pExpr is a column, a reference to a column via an 'AS' alias,
+** or a sub-select with a column as the return value, then the
+** affinity of that column is returned. Otherwise, 0x00 is returned,
+** indicating no affinity for the expression.
+**
+** i.e. the WHERE clause expressions in the following statements all
+** have an affinity:
+**
+** CREATE TABLE t1(a);
+** SELECT * FROM t1 WHERE a;
+** SELECT a AS b FROM t1 WHERE b;
+** SELECT * FROM t1 WHERE (select a from t1);
+*/
+char sqlite3ExprAffinity(Expr *pExpr){
+ int op = pExpr->op;
+ if( op==TK_AS ){
+ return sqlite3ExprAffinity(pExpr->pLeft);
+ }
+ if( op==TK_SELECT ){
+ return sqlite3ExprAffinity(pExpr->pSelect->pEList->a[0].pExpr);
+ }
+#ifndef SQLITE_OMIT_CAST
+ if( op==TK_CAST ){
+ return sqlite3AffinityType(&pExpr->token);
+ }
+#endif
+ return pExpr->affinity;
+}
+
+/*
+** Return the default collation sequence for the expression pExpr. If
+** there is no default collation type, return 0.
+*/
+CollSeq *sqlite3ExprCollSeq(Parse *pParse, Expr *pExpr){
+ CollSeq *pColl = 0;
+ if( pExpr ){
+ pColl = pExpr->pColl;
+ if( (pExpr->op==TK_AS || pExpr->op==TK_CAST) && !pColl ){
+ return sqlite3ExprCollSeq(pParse, pExpr->pLeft);
+ }
+ }
+ if( sqlite3CheckCollSeq(pParse, pColl) ){
+ pColl = 0;
+ }
+ return pColl;
+}
+
+/*
+** pExpr is an operand of a comparison operator. aff2 is the
+** type affinity of the other operand. This routine returns the
+** type affinity that should be used for the comparison operator.
+*/
+char sqlite3CompareAffinity(Expr *pExpr, char aff2){
+ char aff1 = sqlite3ExprAffinity(pExpr);
+ if( aff1 && aff2 ){
+ /* Both sides of the comparison are columns. If one has numeric
+ ** affinity, use that. Otherwise use no affinity.
+ */
+ if( sqlite3IsNumericAffinity(aff1) || sqlite3IsNumericAffinity(aff2) ){
+ return SQLITE_AFF_NUMERIC;
+ }else{
+ return SQLITE_AFF_NONE;
+ }
+ }else if( !aff1 && !aff2 ){
+ /* Neither side of the comparison is a column. Compare the
+ ** results directly.
+ */
+ return SQLITE_AFF_NONE;
+ }else{
+ /* One side is a column, the other is not. Use the column's affinity. */
+ assert( aff1==0 || aff2==0 );
+ return (aff1 + aff2);
+ }
+}
+
+/*
+** pExpr is a comparison operator. Return the type affinity that should
+** be applied to both operands prior to doing the comparison.
+*/
+static char comparisonAffinity(Expr *pExpr){
+ char aff;
+ assert( pExpr->op==TK_EQ || pExpr->op==TK_IN || pExpr->op==TK_LT ||
+ pExpr->op==TK_GT || pExpr->op==TK_GE || pExpr->op==TK_LE ||
+ pExpr->op==TK_NE );
+ assert( pExpr->pLeft );
+ aff = sqlite3ExprAffinity(pExpr->pLeft);
+ if( pExpr->pRight ){
+ aff = sqlite3CompareAffinity(pExpr->pRight, aff);
+ }
+ else if( pExpr->pSelect ){
+ aff = sqlite3CompareAffinity(pExpr->pSelect->pEList->a[0].pExpr, aff);
+ }
+ else if( !aff ){
+ aff = SQLITE_AFF_NUMERIC;
+ }
+ return aff;
+}
+
+/*
+** pExpr is a comparison expression, e.g. '=', '<', IN(...) etc.
+** idx_affinity is the affinity of an indexed column. Return true
+** if the index with affinity idx_affinity may be used to implement
+** the comparison in pExpr.
+*/
+int sqlite3IndexAffinityOk(Expr *pExpr, char idx_affinity){
+ char aff = comparisonAffinity(pExpr);
+ switch( aff ){
+ case SQLITE_AFF_NONE:
+ return 1;
+ case SQLITE_AFF_TEXT:
+ return idx_affinity==SQLITE_AFF_TEXT;
+ default:
+ return sqlite3IsNumericAffinity(idx_affinity);
+ }
+}
+
+/*
+** Return the P1 value that should be used for a binary comparison
+** opcode (OP_Eq, OP_Ge etc.) used to compare pExpr1 and pExpr2.
+** If jumpIfNull is true, then set the low byte of the returned
+** P1 value to tell the opcode to jump if either expression
+** evaluates to NULL.
+*/
+static int binaryCompareP1(Expr *pExpr1, Expr *pExpr2, int jumpIfNull){
+ char aff = sqlite3ExprAffinity(pExpr2);
+ return ((int)sqlite3CompareAffinity(pExpr1, aff))+(jumpIfNull?0x100:0);
+}
+
+/*
+** Return a pointer to the collation sequence that should be used by
+** a binary comparison operator comparing pLeft and pRight.
+**
+** If the left hand expression has a collating sequence type, then it is
+** used. Otherwise the collation sequence for the right hand expression
+** is used, or the default (BINARY) if neither expression has a collating
+** type.
+*/
+static CollSeq* binaryCompareCollSeq(Parse *pParse, Expr *pLeft, Expr *pRight){
+ CollSeq *pColl = sqlite3ExprCollSeq(pParse, pLeft);
+ if( !pColl ){
+ pColl = sqlite3ExprCollSeq(pParse, pRight);
+ }
+ return pColl;
+}
+
+/*
+** Generate code for a comparison operator.
+*/
+static int codeCompare(
+ Parse *pParse, /* The parsing (and code generating) context */
+ Expr *pLeft, /* The left operand */
+ Expr *pRight, /* The right operand */
+ int opcode, /* The comparison opcode */
+ int dest, /* Jump here if true. */
+ int jumpIfNull /* If true, jump if either operand is NULL */
+){
+ int p1 = binaryCompareP1(pLeft, pRight, jumpIfNull);
+ CollSeq *p3 = binaryCompareCollSeq(pParse, pLeft, pRight);
+ return sqlite3VdbeOp3(pParse->pVdbe, opcode, p1, dest, (void*)p3, P3_COLLSEQ);
+}
+
+/*
+** Construct a new expression node and return a pointer to it. Memory
+** for this node is obtained from sqliteMalloc(). The calling function
+** is responsible for making sure the node eventually gets freed.
+*/
+Expr *sqlite3Expr(int op, Expr *pLeft, Expr *pRight, const Token *pToken){
+ Expr *pNew;
+ pNew = sqliteMalloc( sizeof(Expr) );
+ if( pNew==0 ){
+ /* When malloc fails, delete pLeft and pRight. Expressions passed to
+ ** this function must always be allocated with sqlite3Expr() for this
+ ** reason.
+ */
+ sqlite3ExprDelete(pLeft);
+ sqlite3ExprDelete(pRight);
+ return 0;
+ }
+ pNew->op = op;
+ pNew->pLeft = pLeft;
+ pNew->pRight = pRight;
+ pNew->iAgg = -1;
+ if( pToken ){
+ assert( pToken->dyn==0 );
+ pNew->span = pNew->token = *pToken;
+ }else if( pLeft && pRight ){
+ sqlite3ExprSpan(pNew, &pLeft->span, &pRight->span);
+ }
+ return pNew;
+}
+
+/*
+** Works like sqlite3Expr() but frees its pLeft and pRight arguments
+** if it fails due to a malloc problem.
+*/
+Expr *sqlite3ExprOrFree(int op, Expr *pLeft, Expr *pRight, const Token *pToken){
+ Expr *pNew = sqlite3Expr(op, pLeft, pRight, pToken);
+ if( pNew==0 ){
+ sqlite3ExprDelete(pLeft);
+ sqlite3ExprDelete(pRight);
+ }
+ return pNew;
+}
+
+/*
+** When doing a nested parse, you can include terms in an expression
+** that look like this: #0 #1 #2 ... These terms refer to elements
+** on the stack. "#0" means the top of the stack.
+** "#1" means the next down on the stack. And so forth.
+**
+** This routine is called by the parser to deal with one of those terms.
+** It immediately generates code to store the value in a memory location.
+** It returns an expression that will generate code to extract the value from
+** that memory location as needed.
+*/
+Expr *sqlite3RegisterExpr(Parse *pParse, Token *pToken){
+ Vdbe *v = pParse->pVdbe;
+ Expr *p;
+ int depth;
+ if( pParse->nested==0 ){
+ sqlite3ErrorMsg(pParse, "near \"%T\": syntax error", pToken);
+ return 0;
+ }
+ if( v==0 ) return 0;
+ p = sqlite3Expr(TK_REGISTER, 0, 0, pToken);
+ if( p==0 ){
+ return 0; /* Malloc failed */
+ }
+ depth = atoi((char*)&pToken->z[1]);
+ p->iTable = pParse->nMem++;
+ sqlite3VdbeAddOp(v, OP_Dup, depth, 0);
+ sqlite3VdbeAddOp(v, OP_MemStore, p->iTable, 1);
+ return p;
+}
+
+/*
+** Join two expressions using an AND operator. If either expression is
+** NULL, then just return the other expression.
+*/
+Expr *sqlite3ExprAnd(Expr *pLeft, Expr *pRight){
+ if( pLeft==0 ){
+ return pRight;
+ }else if( pRight==0 ){
+ return pLeft;
+ }else{
+ return sqlite3Expr(TK_AND, pLeft, pRight, 0);
+ }
+}
+
+/*
+** Set the Expr.span field of the given expression to span all
+** text between the two given tokens.
+*/
+void sqlite3ExprSpan(Expr *pExpr, Token *pLeft, Token *pRight){
+ assert( pRight!=0 );
+ assert( pLeft!=0 );
+ if( !sqlite3MallocFailed() && pRight->z && pLeft->z ){
+ assert( pLeft->dyn==0 || pLeft->z[pLeft->n]==0 );
+ if( pLeft->dyn==0 && pRight->dyn==0 ){
+ pExpr->span.z = pLeft->z;
+ pExpr->span.n = pRight->n + (pRight->z - pLeft->z);
+ }else{
+ pExpr->span.z = 0;
+ }
+ }
+}
+
+/*
+** Construct a new expression node for a function with multiple
+** arguments.
+*/
+Expr *sqlite3ExprFunction(ExprList *pList, Token *pToken){
+ Expr *pNew;
+ assert( pToken );
+ pNew = sqliteMalloc( sizeof(Expr) );
+ if( pNew==0 ){
+ sqlite3ExprListDelete(pList); /* Avoid leaking memory when malloc fails */
+ return 0;
+ }
+ pNew->op = TK_FUNCTION;
+ pNew->pList = pList;
+ assert( pToken->dyn==0 );
+ pNew->token = *pToken;
+ pNew->span = pNew->token;
+ return pNew;
+}
+
+/*
+** Assign a variable number to an expression that encodes a wildcard
+** in the original SQL statement.
+**
+** Wildcards consisting of a single "?" are assigned the next sequential
+** variable number.
+**
+** Wildcards of the form "?nnn" are assigned the number "nnn". We make
+** sure "nnn" is not too be to avoid a denial of service attack when
+** the SQL statement comes from an external source.
+**
+** Wildcards of the form ":aaa" or "$aaa" are assigned the same number
+** as the previous instance of the same wildcard. Or if this is the first
+** instance of the wildcard, the next sequential variable number is
+** assigned.
+*/
+void sqlite3ExprAssignVarNumber(Parse *pParse, Expr *pExpr){
+ Token *pToken;
+ if( pExpr==0 ) return;
+ pToken = &pExpr->token;
+ assert( pToken->n>=1 );
+ assert( pToken->z!=0 );
+ assert( pToken->z[0]!=0 );
+ if( pToken->n==1 ){
+ /* Wildcard of the form "?". Assign the next variable number */
+ pExpr->iTable = ++pParse->nVar;
+ }else if( pToken->z[0]=='?' ){
+ /* Wildcard of the form "?nnn". Convert "nnn" to an integer and
+ ** use it as the variable number */
+ int i;
+ pExpr->iTable = i = atoi((char*)&pToken->z[1]);
+ if( i<1 || i>SQLITE_MAX_VARIABLE_NUMBER ){
+ sqlite3ErrorMsg(pParse, "variable number must be between ?1 and ?%d",
+ SQLITE_MAX_VARIABLE_NUMBER);
+ }
+ if( i>pParse->nVar ){
+ pParse->nVar = i;
+ }
+ }else{
+ /* Wildcards of the form ":aaa" or "$aaa". Reuse the same variable
+ ** number as the prior appearance of the same name, or if the name
+ ** has never appeared before, assign it the next sequential number
+ */
+ int i, n;
+ n = pToken->n;
+ for(i=0; i<pParse->nVarExpr; i++){
+ Expr *pE;
+ if( (pE = pParse->apVarExpr[i])!=0
+ && pE->token.n==n
+ && memcmp(pE->token.z, pToken->z, n)==0 ){
+ pExpr->iTable = pE->iTable;
+ break;
+ }
+ }
+ if( i>=pParse->nVarExpr ){
+ pExpr->iTable = ++pParse->nVar;
+ if( pParse->nVarExpr>=pParse->nVarExprAlloc-1 ){
+ pParse->nVarExprAlloc += pParse->nVarExprAlloc + 10;
+ sqliteReallocOrFree((void**)&pParse->apVarExpr,
+ pParse->nVarExprAlloc*sizeof(pParse->apVarExpr[0]) );
+ }
+ if( !sqlite3MallocFailed() ){
+ assert( pParse->apVarExpr!=0 );
+ pParse->apVarExpr[pParse->nVarExpr++] = pExpr;
+ }
+ }
+ }
+}
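The numbering rules above can be observed through the public bind-parameter API. A hedged sketch (assumes an open handle "db" and a table t1(a,b,c)):

    #include <stdio.h>
    #include "sqlite3.h"

    void showParams(sqlite3 *db){
      sqlite3_stmt *pStmt = 0;
      const char *zSql =
          "SELECT * FROM t1 WHERE a=? AND b=?5 AND c=:name AND a=:name";
      if( sqlite3_prepare(db, zSql, -1, &pStmt, 0)==SQLITE_OK ){
        /* "?" takes the next sequential number (1), "?5" forces 5, and both
        ** ":name" wildcards share one number (6, the next after 5). */
        printf("count=%d  :name=%d\n",
               sqlite3_bind_parameter_count(pStmt),
               sqlite3_bind_parameter_index(pStmt, ":name"));
        sqlite3_finalize(pStmt);
      }
    }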
+
+/*
+** Recursively delete an expression tree.
+*/
+void sqlite3ExprDelete(Expr *p){
+ if( p==0 ) return;
+ if( p->span.dyn ) sqliteFree((char*)p->span.z);
+ if( p->token.dyn ) sqliteFree((char*)p->token.z);
+ sqlite3ExprDelete(p->pLeft);
+ sqlite3ExprDelete(p->pRight);
+ sqlite3ExprListDelete(p->pList);
+ sqlite3SelectDelete(p->pSelect);
+ sqliteFree(p);
+}
+
+/*
+** The Expr.token field might be a string literal that is quoted.
+** If so, remove the quotation marks.
+*/
+void sqlite3DequoteExpr(Expr *p){
+ if( ExprHasAnyProperty(p, EP_Dequoted) ){
+ return;
+ }
+ ExprSetProperty(p, EP_Dequoted);
+ if( p->token.dyn==0 ){
+ sqlite3TokenCopy(&p->token, &p->token);
+ }
+ sqlite3Dequote((char*)p->token.z);
+}
+
+
+/*
+** The following group of routines make deep copies of expressions,
+** expression lists, ID lists, and select statements. The copies can
+** be deleted (by being passed to their respective ...Delete() routines)
+** without affecting the originals.
+**
+** The expression list, ID, and source lists returned by sqlite3ExprListDup(),
+** sqlite3IdListDup(), and sqlite3SrcListDup() can not be further expanded
+** by subsequent calls to sqlite*ListAppend() routines.
+**
+** Any tables that the SrcList might point to are not duplicated.
+*/
+Expr *sqlite3ExprDup(Expr *p){
+ Expr *pNew;
+ if( p==0 ) return 0;
+ pNew = sqliteMallocRaw( sizeof(*p) );
+ if( pNew==0 ) return 0;
+ memcpy(pNew, p, sizeof(*pNew));
+ if( p->token.z!=0 ){
+ pNew->token.z = (u8*)sqliteStrNDup((char*)p->token.z, p->token.n);
+ pNew->token.dyn = 1;
+ }else{
+ assert( pNew->token.z==0 );
+ }
+ pNew->span.z = 0;
+ pNew->pLeft = sqlite3ExprDup(p->pLeft);
+ pNew->pRight = sqlite3ExprDup(p->pRight);
+ pNew->pList = sqlite3ExprListDup(p->pList);
+ pNew->pSelect = sqlite3SelectDup(p->pSelect);
+ pNew->pTab = p->pTab;
+ return pNew;
+}
+void sqlite3TokenCopy(Token *pTo, Token *pFrom){
+ if( pTo->dyn ) sqliteFree((char*)pTo->z);
+ if( pFrom->z ){
+ pTo->n = pFrom->n;
+ pTo->z = (u8*)sqliteStrNDup((char*)pFrom->z, pFrom->n);
+ pTo->dyn = 1;
+ }else{
+ pTo->z = 0;
+ }
+}
+ExprList *sqlite3ExprListDup(ExprList *p){
+ ExprList *pNew;
+ struct ExprList_item *pItem, *pOldItem;
+ int i;
+ if( p==0 ) return 0;
+ pNew = sqliteMalloc( sizeof(*pNew) );
+ if( pNew==0 ) return 0;
+ pNew->nExpr = pNew->nAlloc = p->nExpr;
+ pNew->a = pItem = sqliteMalloc( p->nExpr*sizeof(p->a[0]) );
+ if( pItem==0 ){
+ sqliteFree(pNew);
+ return 0;
+ }
+ pOldItem = p->a;
+ for(i=0; i<p->nExpr; i++, pItem++, pOldItem++){
+ Expr *pNewExpr, *pOldExpr;
+ pItem->pExpr = pNewExpr = sqlite3ExprDup(pOldExpr = pOldItem->pExpr);
+ if( pOldExpr->span.z!=0 && pNewExpr ){
+ /* Always make a copy of the span for top-level expressions in the
+ ** expression list. The logic in SELECT processing that determines
+ ** the names of columns in the result set needs this information */
+ sqlite3TokenCopy(&pNewExpr->span, &pOldExpr->span);
+ }
+ assert( pNewExpr==0 || pNewExpr->span.z!=0
+ || pOldExpr->span.z==0
+ || sqlite3MallocFailed() );
+ pItem->zName = sqliteStrDup(pOldItem->zName);
+ pItem->sortOrder = pOldItem->sortOrder;
+ pItem->isAgg = pOldItem->isAgg;
+ pItem->done = 0;
+ }
+ return pNew;
+}
+
+/*
+** If cursors, triggers, views and subqueries are all omitted from
+** the build, then none of the following routines, except for
+** sqlite3SelectDup(), can be called. sqlite3SelectDup() is sometimes
+** called with a NULL argument.
+*/
+#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_TRIGGER) \
+ || !defined(SQLITE_OMIT_SUBQUERY)
+SrcList *sqlite3SrcListDup(SrcList *p){
+ SrcList *pNew;
+ int i;
+ int nByte;
+ if( p==0 ) return 0;
+ nByte = sizeof(*p) + (p->nSrc>0 ? sizeof(p->a[0]) * (p->nSrc-1) : 0);
+ pNew = sqliteMallocRaw( nByte );
+ if( pNew==0 ) return 0;
+ pNew->nSrc = pNew->nAlloc = p->nSrc;
+ for(i=0; i<p->nSrc; i++){
+ struct SrcList_item *pNewItem = &pNew->a[i];
+ struct SrcList_item *pOldItem = &p->a[i];
+ Table *pTab;
+ pNewItem->zDatabase = sqliteStrDup(pOldItem->zDatabase);
+ pNewItem->zName = sqliteStrDup(pOldItem->zName);
+ pNewItem->zAlias = sqliteStrDup(pOldItem->zAlias);
+ pNewItem->jointype = pOldItem->jointype;
+ pNewItem->iCursor = pOldItem->iCursor;
+ pNewItem->isPopulated = pOldItem->isPopulated;
+ pTab = pNewItem->pTab = pOldItem->pTab;
+ if( pTab ){
+ pTab->nRef++;
+ }
+ pNewItem->pSelect = sqlite3SelectDup(pOldItem->pSelect);
+ pNewItem->pOn = sqlite3ExprDup(pOldItem->pOn);
+ pNewItem->pUsing = sqlite3IdListDup(pOldItem->pUsing);
+ pNewItem->colUsed = pOldItem->colUsed;
+ }
+ return pNew;
+}
+IdList *sqlite3IdListDup(IdList *p){
+ IdList *pNew;
+ int i;
+ if( p==0 ) return 0;
+ pNew = sqliteMallocRaw( sizeof(*pNew) );
+ if( pNew==0 ) return 0;
+ pNew->nId = pNew->nAlloc = p->nId;
+ pNew->a = sqliteMallocRaw( p->nId*sizeof(p->a[0]) );
+ if( pNew->a==0 ){
+ sqliteFree(pNew);
+ return 0;
+ }
+ for(i=0; i<p->nId; i++){
+ struct IdList_item *pNewItem = &pNew->a[i];
+ struct IdList_item *pOldItem = &p->a[i];
+ pNewItem->zName = sqliteStrDup(pOldItem->zName);
+ pNewItem->idx = pOldItem->idx;
+ }
+ return pNew;
+}
+Select *sqlite3SelectDup(Select *p){
+ Select *pNew;
+ if( p==0 ) return 0;
+ pNew = sqliteMallocRaw( sizeof(*p) );
+ if( pNew==0 ) return 0;
+ pNew->isDistinct = p->isDistinct;
+ pNew->pEList = sqlite3ExprListDup(p->pEList);
+ pNew->pSrc = sqlite3SrcListDup(p->pSrc);
+ pNew->pWhere = sqlite3ExprDup(p->pWhere);
+ pNew->pGroupBy = sqlite3ExprListDup(p->pGroupBy);
+ pNew->pHaving = sqlite3ExprDup(p->pHaving);
+ pNew->pOrderBy = sqlite3ExprListDup(p->pOrderBy);
+ pNew->op = p->op;
+ pNew->pPrior = sqlite3SelectDup(p->pPrior);
+ pNew->pLimit = sqlite3ExprDup(p->pLimit);
+ pNew->pOffset = sqlite3ExprDup(p->pOffset);
+ pNew->iLimit = -1;
+ pNew->iOffset = -1;
+ pNew->isResolved = p->isResolved;
+ pNew->isAgg = p->isAgg;
+ pNew->usesEphm = 0;
+ pNew->disallowOrderBy = 0;
+ pNew->pRightmost = 0;
+ pNew->addrOpenEphm[0] = -1;
+ pNew->addrOpenEphm[1] = -1;
+ pNew->addrOpenEphm[2] = -1;
+ return pNew;
+}
+#else
+Select *sqlite3SelectDup(Select *p){
+ assert( p==0 );
+ return 0;
+}
+#endif
+
+
+/*
+** Add a new element to the end of an expression list. If pList is
+** initially NULL, then create a new expression list.
+*/
+ExprList *sqlite3ExprListAppend(ExprList *pList, Expr *pExpr, Token *pName){
+ if( pList==0 ){
+ pList = sqliteMalloc( sizeof(ExprList) );
+ if( pList==0 ){
+ goto no_mem;
+ }
+ assert( pList->nAlloc==0 );
+ }
+ if( pList->nAlloc<=pList->nExpr ){
+ struct ExprList_item *a;
+ int n = pList->nAlloc*2 + 4;
+ a = sqliteRealloc(pList->a, n*sizeof(pList->a[0]));
+ if( a==0 ){
+ goto no_mem;
+ }
+ pList->a = a;
+ pList->nAlloc = n;
+ }
+ assert( pList->a!=0 );
+ if( pExpr || pName ){
+ struct ExprList_item *pItem = &pList->a[pList->nExpr++];
+ memset(pItem, 0, sizeof(*pItem));
+ pItem->zName = sqlite3NameFromToken(pName);
+ pItem->pExpr = pExpr;
+ }
+ return pList;
+
+no_mem:
+ /* Avoid leaking memory if malloc has failed. */
+ sqlite3ExprDelete(pExpr);
+ sqlite3ExprListDelete(pList);
+ return 0;
+}
+
+/*
+** Delete an entire expression list.
+*/
+void sqlite3ExprListDelete(ExprList *pList){
+ int i;
+ struct ExprList_item *pItem;
+ if( pList==0 ) return;
+ assert( pList->a!=0 || (pList->nExpr==0 && pList->nAlloc==0) );
+ assert( pList->nExpr<=pList->nAlloc );
+ for(pItem=pList->a, i=0; i<pList->nExpr; i++, pItem++){
+ sqlite3ExprDelete(pItem->pExpr);
+ sqliteFree(pItem->zName);
+ }
+ sqliteFree(pList->a);
+ sqliteFree(pList);
+}
+
+/*
+** Walk an expression tree. Call xFunc for each node visited.
+**
+** The return value from xFunc determines whether the tree walk continues.
+** 0 means continue walking the tree. 1 means do not walk children
+** of the current node but continue with siblings. 2 means abandon
+** the tree walk completely.
+**
+** The return value from this routine is 1 to abandon the tree walk
+** and 0 to continue.
+**
+** NOTICE: This routine does *not* descend into subqueries.
+*/
+static int walkExprList(ExprList *, int (*)(void *, Expr*), void *);
+static int walkExprTree(Expr *pExpr, int (*xFunc)(void*,Expr*), void *pArg){
+ int rc;
+ if( pExpr==0 ) return 0;
+ rc = (*xFunc)(pArg, pExpr);
+ if( rc==0 ){
+ if( walkExprTree(pExpr->pLeft, xFunc, pArg) ) return 1;
+ if( walkExprTree(pExpr->pRight, xFunc, pArg) ) return 1;
+ if( walkExprList(pExpr->pList, xFunc, pArg) ) return 1;
+ }
+ return rc>1;
+}
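+
+/* A minimal sketch of an xFunc callback (hypothetical, for illustration):
+** count every node visited but do not descend into the children of CASE
+** expressions, using the 0/1/2 return protocol described above.
+**
+**   static int countNode(void *pArg, Expr *pExpr){
+**     (*(int*)pArg)++;
+**     return pExpr->op==TK_CASE ? 1 : 0;
+**   }
+**
+**   int nNode = 0;
+**   walkExprTree(pRoot, countNode, &nNode);
+*/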
+
+/*
+** Call walkExprTree() for every expression in list p.
+*/
+static int walkExprList(ExprList *p, int (*xFunc)(void *, Expr*), void *pArg){
+ int i;
+ struct ExprList_item *pItem;
+ if( !p ) return 0;
+ for(i=p->nExpr, pItem=p->a; i>0; i--, pItem++){
+ if( walkExprTree(pItem->pExpr, xFunc, pArg) ) return 1;
+ }
+ return 0;
+}
+
+/*
+** Call walkExprTree() for every expression in Select p, not including
+** expressions that are part of sub-selects in any FROM clause or the LIMIT
+** or OFFSET expressions.
+*/
+static int walkSelectExpr(Select *p, int (*xFunc)(void *, Expr*), void *pArg){
+ walkExprList(p->pEList, xFunc, pArg);
+ walkExprTree(p->pWhere, xFunc, pArg);
+ walkExprList(p->pGroupBy, xFunc, pArg);
+ walkExprTree(p->pHaving, xFunc, pArg);
+ walkExprList(p->pOrderBy, xFunc, pArg);
+ return 0;
+}
+
+
+/*
+** This routine is designed as an xFunc for walkExprTree().
+**
+** pArg is really a pointer to an integer. If we can tell by looking
+** at pExpr that the expression that contains pExpr is not a constant
+** expression, then set *pArg to 0 and return 2 to abandon the tree walk.
+** If pExpr does not disqualify the expression from being a constant
+** then do nothing.
+**
+** After walking the whole tree, if no nodes are found that disqualify
+** the expression as constant, then we assume the whole expression
+** is constant. See sqlite3ExprIsConstant() for additional information.
+*/
+static int exprNodeIsConstant(void *pArg, Expr *pExpr){
+ switch( pExpr->op ){
+ /* Consider functions to be constant if all their arguments are constant
+ ** and *pArg==2 */
+ case TK_FUNCTION:
+ if( *((int*)pArg)==2 ) return 0;
+ /* Fall through */
+ case TK_ID:
+ case TK_COLUMN:
+ case TK_DOT:
+ case TK_AGG_FUNCTION:
+ case TK_AGG_COLUMN:
+#ifndef SQLITE_OMIT_SUBQUERY
+ case TK_SELECT:
+ case TK_EXISTS:
+#endif
+ *((int*)pArg) = 0;
+ return 2;
+ case TK_IN:
+ if( pExpr->pSelect ){
+ *((int*)pArg) = 0;
+ return 2;
+ }
+ default:
+ return 0;
+ }
+}
+
+/*
+** Walk an expression tree. Return 1 if the expression is constant
+** and 0 if it involves variables or function calls.
+**
+** For the purposes of this function, a double-quoted string (ex: "abc")
+** is considered a variable but a single-quoted string (ex: 'abc') is
+** a constant.
+*/
+int sqlite3ExprIsConstant(Expr *p){
+ int isConst = 1;
+ walkExprTree(p, exprNodeIsConstant, &isConst);
+ return isConst;
+}
+
+/*
+** Walk an expression tree. Return 1 if the expression is constant
+** or a function call with constant arguments. Return 0 if there
+** are any variables.
+**
+** For the purposes of this function, a double-quoted string (ex: "abc")
+** is considered a variable but a single-quoted string (ex: 'abc') is
+** a constant.
+*/
+int sqlite3ExprIsConstantOrFunction(Expr *p){
+ int isConst = 2;
+ walkExprTree(p, exprNodeIsConstant, &isConst);
+ return isConst!=0;
+}
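+
+/* Illustrative examples of the two tests above: 3+4 and 'abc' are constant
+** under both; x+1 and a double-quoted "abc" are constant under neither;
+** a function call whose arguments are all constant fails
+** sqlite3ExprIsConstant() but passes sqlite3ExprIsConstantOrFunction().
+*/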
+
+/*
+** If the expression p codes a constant integer that is small enough
+** to fit in a 32-bit integer, return 1 and put the value of the integer
+** in *pValue. If the expression is not an integer or if it is too big
+** to fit in a signed 32-bit integer, return 0 and leave *pValue unchanged.
+*/
+int sqlite3ExprIsInteger(Expr *p, int *pValue){
+ switch( p->op ){
+ case TK_INTEGER: {
+ if( sqlite3GetInt32((char*)p->token.z, pValue) ){
+ return 1;
+ }
+ break;
+ }
+ case TK_UPLUS: {
+ return sqlite3ExprIsInteger(p->pLeft, pValue);
+ }
+ case TK_UMINUS: {
+ int v;
+ if( sqlite3ExprIsInteger(p->pLeft, &v) ){
+ *pValue = -v;
+ return 1;
+ }
+ break;
+ }
+ default: break;
+ }
+ return 0;
+}
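+
+/* Illustrative note: a literal such as -42 reaches this routine as a
+** TK_UMINUS node wrapping the TK_INTEGER token "42", so the TK_UMINUS
+** case above negates the value recovered from the child node.
+*/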
+
+/*
+** Return TRUE if the given string is a row-id column name.
+*/
+int sqlite3IsRowid(const char *z){
+ if( sqlite3StrICmp(z, "_ROWID_")==0 ) return 1;
+ if( sqlite3StrICmp(z, "ROWID")==0 ) return 1;
+ if( sqlite3StrICmp(z, "OID")==0 ) return 1;
+ return 0;
+}
+
+/*
+** Given the name of a column of the form X.Y.Z or Y.Z or just Z, look up
+** that name in the set of source tables in pSrcList and make the pExpr
+** expression node refer back to that source column. The following changes
+** are made to pExpr:
+**
+** pExpr->iDb Set the index in db->aDb[] of the database holding
+** the table.
+** pExpr->iTable Set to the cursor number for the table obtained
+** from pSrcList.
+** pExpr->iColumn Set to the column number within the table.
+** pExpr->op Set to TK_COLUMN.
+** pExpr->pLeft Any expression this points to is deleted
+** pExpr->pRight Any expression this points to is deleted.
+**
+** The pDbToken is the name of the database (the "X"). This value may be
+** NULL, meaning that the name is of the form Y.Z or Z. Any available database
+** can be used. The pTableToken is the name of the table (the "Y"). This
+** value can be NULL if pDbToken is also NULL. If pTableToken is NULL it
+** means that the form of the name is Z and that columns from any table
+** can be used.
+**
+** If the name cannot be resolved unambiguously, leave an error message
+** in pParse and return non-zero. Return zero on success.
+*/
+static int lookupName(
+ Parse *pParse, /* The parsing context */
+ Token *pDbToken, /* Name of the database containing table, or NULL */
+ Token *pTableToken, /* Name of table containing column, or NULL */
+ Token *pColumnToken, /* Name of the column. */
+ NameContext *pNC, /* The name context used to resolve the name */
+ Expr *pExpr /* Make this EXPR node point to the selected column */
+){
+ char *zDb = 0; /* Name of the database. The "X" in X.Y.Z */
+ char *zTab = 0; /* Name of the table. The "Y" in X.Y.Z or Y.Z */
+ char *zCol = 0; /* Name of the column. The "Z" */
+ int i, j; /* Loop counters */
+ int cnt = 0; /* Number of matching column names */
+ int cntTab = 0; /* Number of matching table names */
+ sqlite3 *db = pParse->db; /* The database */
+ struct SrcList_item *pItem; /* Use for looping over pSrcList items */
+ struct SrcList_item *pMatch = 0; /* The matching pSrcList item */
+ NameContext *pTopNC = pNC; /* First namecontext in the list */
+
+ assert( pColumnToken && pColumnToken->z ); /* The Z in X.Y.Z cannot be NULL */
+ zDb = sqlite3NameFromToken(pDbToken);
+ zTab = sqlite3NameFromToken(pTableToken);
+ zCol = sqlite3NameFromToken(pColumnToken);
+ if( sqlite3MallocFailed() ){
+ goto lookupname_end;
+ }
+
+ pExpr->iTable = -1;
+ while( pNC && cnt==0 ){
+ ExprList *pEList;
+ SrcList *pSrcList = pNC->pSrcList;
+
+ if( pSrcList ){
+ for(i=0, pItem=pSrcList->a; i<pSrcList->nSrc; i++, pItem++){
+ Table *pTab;
+ int iDb;
+ Column *pCol;
+
+ pTab = pItem->pTab;
+ assert( pTab!=0 );
+ iDb = sqlite3SchemaToIndex(db, pTab->pSchema);
+ assert( pTab->nCol>0 );
+ if( zTab ){
+ if( pItem->zAlias ){
+ char *zTabName = pItem->zAlias;
+ if( sqlite3StrICmp(zTabName, zTab)!=0 ) continue;
+ }else{
+ char *zTabName = pTab->zName;
+ if( zTabName==0 || sqlite3StrICmp(zTabName, zTab)!=0 ) continue;
+ if( zDb!=0 && sqlite3StrICmp(db->aDb[iDb].zName, zDb)!=0 ){
+ continue;
+ }
+ }
+ }
+ if( 0==(cntTab++) ){
+ pExpr->iTable = pItem->iCursor;
+ pExpr->pSchema = pTab->pSchema;
+ pMatch = pItem;
+ }
+ for(j=0, pCol=pTab->aCol; j<pTab->nCol; j++, pCol++){
+ if( sqlite3StrICmp(pCol->zName, zCol)==0 ){
+ const char *zColl = pTab->aCol[j].zColl;
+ IdList *pUsing;
+ cnt++;
+ pExpr->iTable = pItem->iCursor;
+ pMatch = pItem;
+ pExpr->pSchema = pTab->pSchema;
+ /* Substitute the rowid (column -1) for the INTEGER PRIMARY KEY */
+ pExpr->iColumn = j==pTab->iPKey ? -1 : j;
+ pExpr->affinity = pTab->aCol[j].affinity;
+ pExpr->pColl = sqlite3FindCollSeq(db, ENC(db), zColl,-1, 0);
+ if( pItem->jointype & JT_NATURAL ){
+ /* If this match occurred in the left table of a natural join,
+ ** then skip the right table to avoid a duplicate match */
+ pItem++;
+ i++;
+ }
+ if( (pUsing = pItem->pUsing)!=0 ){
+ /* If this match occurs on a column that is in the USING clause
+ ** of a join, skip the search of the right table of the join
+ ** to avoid a duplicate match there. */
+ int k;
+ for(k=0; k<pUsing->nId; k++){
+ if( sqlite3StrICmp(pUsing->a[k].zName, zCol)==0 ){
+ pItem++;
+ i++;
+ break;
+ }
+ }
+ }
+ break;
+ }
+ }
+ }
+ }
+
+#ifndef SQLITE_OMIT_TRIGGER
+ /* If we have not already resolved the name, then maybe
+ ** it is a new.* or old.* trigger argument reference
+ */
+ if( zDb==0 && zTab!=0 && cnt==0 && pParse->trigStack!=0 ){
+ TriggerStack *pTriggerStack = pParse->trigStack;
+ Table *pTab = 0;
+ if( pTriggerStack->newIdx != -1 && sqlite3StrICmp("new", zTab) == 0 ){
+ pExpr->iTable = pTriggerStack->newIdx;
+ assert( pTriggerStack->pTab );
+ pTab = pTriggerStack->pTab;
+ }else if( pTriggerStack->oldIdx != -1 && sqlite3StrICmp("old", zTab)==0 ){
+ pExpr->iTable = pTriggerStack->oldIdx;
+ assert( pTriggerStack->pTab );
+ pTab = pTriggerStack->pTab;
+ }
+
+ if( pTab ){
+ int iCol;
+ Column *pCol = pTab->aCol;
+
+ pExpr->pSchema = pTab->pSchema;
+ cntTab++;
+ for(iCol=0; iCol < pTab->nCol; iCol++, pCol++) {
+ if( sqlite3StrICmp(pCol->zName, zCol)==0 ){
+ const char *zColl = pTab->aCol[iCol].zColl;
+ cnt++;
+ pExpr->iColumn = iCol==pTab->iPKey ? -1 : iCol;
+ pExpr->affinity = pTab->aCol[iCol].affinity;
+ pExpr->pColl = sqlite3FindCollSeq(db, ENC(db), zColl,-1, 0);
+ pExpr->pTab = pTab;
+ break;
+ }
+ }
+ }
+ }
+#endif /* !defined(SQLITE_OMIT_TRIGGER) */
+
+ /*
+ ** Perhaps the name is a reference to the ROWID
+ */
+ if( cnt==0 && cntTab==1 && sqlite3IsRowid(zCol) ){
+ cnt = 1;
+ pExpr->iColumn = -1;
+ pExpr->affinity = SQLITE_AFF_INTEGER;
+ }
+
+ /*
+ ** If the input is of the form Z (not Y.Z or X.Y.Z) then the name Z
+ ** might refer to a result-set alias. This happens, for example, when
+ ** we are resolving names in the WHERE clause of the following command:
+ **
+ ** SELECT a+b AS x FROM table WHERE x<10;
+ **
+ ** In cases like this, replace pExpr with a copy of the expression that
+ ** forms the result set entry ("a+b" in the example) and return immediately.
+ ** Note that the expression in the result set should have already been
+ ** resolved by the time the WHERE clause is resolved.
+ */
+ if( cnt==0 && (pEList = pNC->pEList)!=0 && zTab==0 ){
+ for(j=0; j<pEList->nExpr; j++){
+ char *zAs = pEList->a[j].zName;
+ if( zAs!=0 && sqlite3StrICmp(zAs, zCol)==0 ){
+ assert( pExpr->pLeft==0 && pExpr->pRight==0 );
+ pExpr->op = TK_AS;
+ pExpr->iColumn = j;
+ pExpr->pLeft = sqlite3ExprDup(pEList->a[j].pExpr);
+ cnt = 1;
+ assert( zTab==0 && zDb==0 );
+ goto lookupname_end_2;
+ }
+ }
+ }
+
+ /* Advance to the next name context. The loop will exit when either
+ ** we have a match (cnt>0) or when we run out of name contexts.
+ */
+ if( cnt==0 ){
+ pNC = pNC->pNext;
+ }
+ }
+
+ /*
+ ** If X and Y are NULL (in other words if only the column name Z is
+ ** supplied) and the value of Z is enclosed in double-quotes, then
+ ** Z is a string literal if it doesn't match any column names. In that
+ ** case, we need to return right away and not make any changes to
+ ** pExpr.
+ **
+ ** Because no reference was made to outer contexts, the pNC->nRef
+ ** fields are not changed in any context.
+ */
+ if( cnt==0 && zTab==0 && pColumnToken->z[0]=='"' ){
+ sqliteFree(zCol);
+ return 0;
+ }
+
+ /*
+ ** cnt==0 means there was no match. cnt>1 means there were two or
+ ** more matches. Either way, we have an error.
+ */
+ if( cnt!=1 ){
+ char *z = 0;
+ char *zErr;
+ zErr = cnt==0 ? "no such column: %s" : "ambiguous column name: %s";
+ if( zDb ){
+ sqlite3SetString(&z, zDb, ".", zTab, ".", zCol, (char*)0);
+ }else if( zTab ){
+ sqlite3SetString(&z, zTab, ".", zCol, (char*)0);
+ }else{
+ z = sqliteStrDup(zCol);
+ }
+ sqlite3ErrorMsg(pParse, zErr, z);
+ sqliteFree(z);
+ pTopNC->nErr++;
+ }
+
+ /* If a column from a table in pSrcList is referenced, then record
+ ** this fact in the pSrcList.a[].colUsed bitmask. Column 0 causes
+ ** bit 0 to be set. Column 1 sets bit 1. And so forth. If the
+ ** column number is greater than the number of bits in the bitmask
+ ** then set the high-order bit of the bitmask.
+ */
+ if( pExpr->iColumn>=0 && pMatch!=0 ){
+ int n = pExpr->iColumn;
+ if( n>=sizeof(Bitmask)*8 ){
+ n = sizeof(Bitmask)*8-1;
+ }
+ assert( pMatch->iCursor==pExpr->iTable );
+ pMatch->colUsed |= 1<<n;
+ }
+
+lookupname_end:
+ /* Clean up and return
+ */
+ sqliteFree(zDb);
+ sqliteFree(zTab);
+ sqlite3ExprDelete(pExpr->pLeft);
+ pExpr->pLeft = 0;
+ sqlite3ExprDelete(pExpr->pRight);
+ pExpr->pRight = 0;
+ pExpr->op = TK_COLUMN;
+lookupname_end_2:
+ sqliteFree(zCol);
+ if( cnt==1 ){
+ assert( pNC!=0 );
+ sqlite3AuthRead(pParse, pExpr, pNC->pSrcList);
+ if( pMatch && !pMatch->pSelect ){
+ pExpr->pTab = pMatch->pTab;
+ }
+ /* Increment the nRef value on all name contexts from TopNC up to
+ ** the point where the name matched. */
+ for(;;){
+ assert( pTopNC!=0 );
+ pTopNC->nRef++;
+ if( pTopNC==pNC ) break;
+ pTopNC = pTopNC->pNext;
+ }
+ return 0;
+ } else {
+ return 1;
+ }
+}
+
+/*
+** This routine is designed as an xFunc for walkExprTree().
+**
+** Resolve symbolic names into TK_COLUMN operators for the current
+** node in the expression tree. Return 0 to continue the search down
+** the tree or 2 to abort the tree walk.
+**
+** This routine also does error checking and name resolution for
+** function names. The operator for aggregate functions is changed
+** to TK_AGG_FUNCTION.
+*/
+static int nameResolverStep(void *pArg, Expr *pExpr){
+ NameContext *pNC = (NameContext*)pArg;
+ Parse *pParse;
+
+ if( pExpr==0 ) return 1;
+ assert( pNC!=0 );
+ pParse = pNC->pParse;
+
+ if( ExprHasAnyProperty(pExpr, EP_Resolved) ) return 1;
+ ExprSetProperty(pExpr, EP_Resolved);
+#ifndef NDEBUG
+ if( pNC->pSrcList && pNC->pSrcList->nAlloc>0 ){
+ SrcList *pSrcList = pNC->pSrcList;
+ int i;
+ for(i=0; i<pNC->pSrcList->nSrc; i++){
+ assert( pSrcList->a[i].iCursor>=0 && pSrcList->a[i].iCursor<pParse->nTab);
+ }
+ }
+#endif
+ switch( pExpr->op ){
+ /* Double-quoted strings (ex: "abc") are used as identifiers if
+ ** possible. Otherwise they remain as strings. Single-quoted
+ ** strings (ex: 'abc') are always string literals.
+ */
+ case TK_STRING: {
+ if( pExpr->token.z[0]=='\'' ) break;
+ /* Fall thru into the TK_ID case if this is a double-quoted string */
+ }
+ /* A lone identifier is the name of a column.
+ */
+ case TK_ID: {
+ lookupName(pParse, 0, 0, &pExpr->token, pNC, pExpr);
+ return 1;
+ }
+
+ /* A table name and column name: ID.ID
+ ** Or a database, table and column: ID.ID.ID
+ */
+ case TK_DOT: {
+ Token *pColumn;
+ Token *pTable;
+ Token *pDb;
+ Expr *pRight;
+
+ /* if( pSrcList==0 ) break; */
+ pRight = pExpr->pRight;
+ if( pRight->op==TK_ID ){
+ pDb = 0;
+ pTable = &pExpr->pLeft->token;
+ pColumn = &pRight->token;
+ }else{
+ assert( pRight->op==TK_DOT );
+ pDb = &pExpr->pLeft->token;
+ pTable = &pRight->pLeft->token;
+ pColumn = &pRight->pRight->token;
+ }
+ lookupName(pParse, pDb, pTable, pColumn, pNC, pExpr);
+ return 1;
+ }
+
+ /* Resolve function names
+ */
+ case TK_CONST_FUNC:
+ case TK_FUNCTION: {
+ ExprList *pList = pExpr->pList; /* The argument list */
+ int n = pList ? pList->nExpr : 0; /* Number of arguments */
+ int no_such_func = 0; /* True if no such function exists */
+ int wrong_num_args = 0; /* True if wrong number of arguments */
+ int is_agg = 0; /* True if is an aggregate function */
+ int i;
+ int auth; /* Authorization to use the function */
+ int nId; /* Number of characters in function name */
+ const char *zId; /* The function name. */
+ FuncDef *pDef; /* Information about the function */
+ int enc = ENC(pParse->db); /* The database encoding */
+
+ zId = (char*)pExpr->token.z;
+ nId = pExpr->token.n;
+ pDef = sqlite3FindFunction(pParse->db, zId, nId, n, enc, 0);
+ if( pDef==0 ){
+ pDef = sqlite3FindFunction(pParse->db, zId, nId, -1, enc, 0);
+ if( pDef==0 ){
+ no_such_func = 1;
+ }else{
+ wrong_num_args = 1;
+ }
+ }else{
+ is_agg = pDef->xFunc==0;
+ }
+#ifndef SQLITE_OMIT_AUTHORIZER
+ if( pDef ){
+ auth = sqlite3AuthCheck(pParse, SQLITE_FUNCTION, 0, pDef->zName, 0);
+ if( auth!=SQLITE_OK ){
+ if( auth==SQLITE_DENY ){
+ sqlite3ErrorMsg(pParse, "not authorized to use function: %s",
+ pDef->zName);
+ pNC->nErr++;
+ }
+ pExpr->op = TK_NULL;
+ return 1;
+ }
+ }
+#endif
+ if( is_agg && !pNC->allowAgg ){
+ sqlite3ErrorMsg(pParse, "misuse of aggregate function %.*s()", nId,zId);
+ pNC->nErr++;
+ is_agg = 0;
+ }else if( no_such_func ){
+ sqlite3ErrorMsg(pParse, "no such function: %.*s", nId, zId);
+ pNC->nErr++;
+ }else if( wrong_num_args ){
+ sqlite3ErrorMsg(pParse,"wrong number of arguments to function %.*s()",
+ nId, zId);
+ pNC->nErr++;
+ }
+ if( is_agg ){
+ pExpr->op = TK_AGG_FUNCTION;
+ pNC->hasAgg = 1;
+ }
+ if( is_agg ) pNC->allowAgg = 0;
+ for(i=0; pNC->nErr==0 && i<n; i++){
+ walkExprTree(pList->a[i].pExpr, nameResolverStep, pNC);
+ }
+ if( is_agg ) pNC->allowAgg = 1;
+ /* FIX ME: Compute pExpr->affinity based on the expected return
+ ** type of the function
+ */
+ return is_agg;
+ }
+#ifndef SQLITE_OMIT_SUBQUERY
+ case TK_SELECT:
+ case TK_EXISTS:
+#endif
+ case TK_IN: {
+ if( pExpr->pSelect ){
+ int nRef = pNC->nRef;
+#ifndef SQLITE_OMIT_CHECK
+ if( pNC->isCheck ){
+ sqlite3ErrorMsg(pParse,"subqueries prohibited in CHECK constraints");
+ }
+#endif
+ sqlite3SelectResolve(pParse, pExpr->pSelect, pNC);
+ assert( pNC->nRef>=nRef );
+ if( nRef!=pNC->nRef ){
+ ExprSetProperty(pExpr, EP_VarSelect);
+ }
+ }
+ break;
+ }
+#ifndef SQLITE_OMIT_CHECK
+ case TK_VARIABLE: {
+ if( pNC->isCheck ){
+ sqlite3ErrorMsg(pParse,"parameters prohibited in CHECK constraints");
+ }
+ break;
+ }
+#endif
+ }
+ return 0;
+}
+
+/*
+** This routine walks an expression tree and resolves references to
+** table columns. Nodes of the form ID.ID or ID resolve into an
+** index to the table in the table list and a column offset. The
+** Expr.opcode for such nodes is changed to TK_COLUMN. The Expr.iTable
+** value is changed to the index of the referenced table in pTabList
+** plus the "base" value. The base value will ultimately become the
+** VDBE cursor number for a cursor that is pointing into the referenced
+** table. The Expr.iColumn value is changed to the index of the column
+** of the referenced table. The Expr.iColumn value for the special
+** ROWID column is -1. Any INTEGER PRIMARY KEY column is tried as an
+** alias for ROWID.
+**
+** Also resolve function names and check the functions for proper
+** usage. Make sure all function names are recognized and all functions
+** have the correct number of arguments. Leave an error message
+** in pParse->zErrMsg if anything is amiss. Return the number of errors.
+**
+** If the expression contains aggregate functions then set the EP_Agg
+** property on the expression.
+*/
+int sqlite3ExprResolveNames(
+ NameContext *pNC, /* Namespace to resolve expressions in. */
+ Expr *pExpr /* The expression to be analyzed. */
+){
+ int savedHasAgg;
+ if( pExpr==0 ) return 0;
+ savedHasAgg = pNC->hasAgg;
+ pNC->hasAgg = 0;
+ walkExprTree(pExpr, nameResolverStep, pNC);
+ if( pNC->nErr>0 ){
+ ExprSetProperty(pExpr, EP_Error);
+ }
+ if( pNC->hasAgg ){
+ ExprSetProperty(pExpr, EP_Agg);
+ }else if( savedHasAgg ){
+ pNC->hasAgg = 1;
+ }
+ return ExprHasProperty(pExpr, EP_Error);
+}
+
+/*
+** A pointer instance of this structure is used to pass information
+** through walkExprTree into codeSubqueryStep().
+*/
+typedef struct QueryCoder QueryCoder;
+struct QueryCoder {
+ Parse *pParse; /* The parsing context */
+ NameContext *pNC; /* Namespace of first enclosing query */
+};
+
+
+/*
+** Generate code for scalar subqueries used as an expression
+** and IN operators. Examples:
+**
+** (SELECT a FROM b) -- subquery
+** EXISTS (SELECT a FROM b) -- EXISTS subquery
+** x IN (4,5,11) -- IN operator with list on right-hand side
+** x IN (SELECT a FROM b) -- IN operator with subquery on the right
+**
+** The pExpr parameter describes the expression that contains the IN
+** operator or subquery.
+*/
+#ifndef SQLITE_OMIT_SUBQUERY
+void sqlite3CodeSubselect(Parse *pParse, Expr *pExpr){
+ int testAddr = 0; /* One-time test address */
+ Vdbe *v = sqlite3GetVdbe(pParse);
+ if( v==0 ) return;
+
+ /* This code must be run in its entirety every time it is encountered
+ ** if any of the following is true:
+ **
+ ** * The right-hand side is a correlated subquery
+ ** * The right-hand side is an expression list containing variables
+ ** * We are inside a trigger
+ **
+ ** If all of the above are false, then we can run this code just once,
+ ** save the results, and reuse the same result on subsequent invocations.
+ */
+ if( !ExprHasAnyProperty(pExpr, EP_VarSelect) && !pParse->trigStack ){
+ int mem = pParse->nMem++;
+ sqlite3VdbeAddOp(v, OP_MemLoad, mem, 0);
+ testAddr = sqlite3VdbeAddOp(v, OP_If, 0, 0);
+ assert( testAddr>0 || sqlite3MallocFailed() );
+ sqlite3VdbeAddOp(v, OP_MemInt, 1, mem);
+ }
+
+ switch( pExpr->op ){
+ case TK_IN: {
+ char affinity;
+ KeyInfo keyInfo;
+ int addr; /* Address of OP_OpenEphemeral instruction */
+
+ affinity = sqlite3ExprAffinity(pExpr->pLeft);
+
+ /* Whether this is an 'x IN(SELECT...)' or an 'x IN(<exprlist>)'
+ ** expression, it is handled the same way. A virtual table is
+ ** filled with single-field index keys representing the results
+ ** from the SELECT or the <exprlist>.
+ **
+ ** If the 'x' expression is a column value, or the SELECT...
+ ** statement returns a column value, then the affinity of that
+ ** column is used to build the index keys. If both 'x' and the
+ ** SELECT... statement are columns, then numeric affinity is used
+ ** if either column has NUMERIC or INTEGER affinity. If neither
+ ** 'x' nor the SELECT... statement are columns, then numeric affinity
+ ** is used.
+ */
+ pExpr->iTable = pParse->nTab++;
+ addr = sqlite3VdbeAddOp(v, OP_OpenEphemeral, pExpr->iTable, 0);
+ memset(&keyInfo, 0, sizeof(keyInfo));
+ keyInfo.nField = 1;
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, pExpr->iTable, 1);
+
+ if( pExpr->pSelect ){
+ /* Case 1: expr IN (SELECT ...)
+ **
+ ** Generate code to write the results of the select into the temporary
+ ** table allocated and opened above.
+ */
+ int iParm = pExpr->iTable + (((int)affinity)<<16);
+ ExprList *pEList;
+ assert( (pExpr->iTable&0x0000FFFF)==pExpr->iTable );
+ sqlite3Select(pParse, pExpr->pSelect, SRT_Set, iParm, 0, 0, 0, 0);
+ pEList = pExpr->pSelect->pEList;
+ if( pEList && pEList->nExpr>0 ){
+ keyInfo.aColl[0] = binaryCompareCollSeq(pParse, pExpr->pLeft,
+ pEList->a[0].pExpr);
+ }
+ }else if( pExpr->pList ){
+ /* Case 2: expr IN (exprlist)
+ **
+ ** For each expression, build an index key from the evaluation and
+ ** store it in the temporary table. If <expr> is a column, then use
+ ** that column's affinity when building index keys. If <expr> is not
+ ** a column, use numeric affinity.
+ */
+ int i;
+ ExprList *pList = pExpr->pList;
+ struct ExprList_item *pItem;
+
+ if( !affinity ){
+ affinity = SQLITE_AFF_NONE;
+ }
+ keyInfo.aColl[0] = pExpr->pLeft->pColl;
+
+ /* Loop through each expression in <exprlist>. */
+ for(i=pList->nExpr, pItem=pList->a; i>0; i--, pItem++){
+ Expr *pE2 = pItem->pExpr;
+
+ /* If the expression is not constant then we will need to
+ ** disable the test that was generated above that makes sure
+ ** this code only executes once, because for a non-constant
+ ** expression we need to rerun this code each time.
+ */
+ if( testAddr>0 && !sqlite3ExprIsConstant(pE2) ){
+ sqlite3VdbeChangeToNoop(v, testAddr-1, 3);
+ testAddr = 0;
+ }
+
+ /* Evaluate the expression and insert it into the temp table */
+ sqlite3ExprCode(pParse, pE2);
+ sqlite3VdbeOp3(v, OP_MakeRecord, 1, 0, &affinity, 1);
+ sqlite3VdbeAddOp(v, OP_IdxInsert, pExpr->iTable, 0);
+ }
+ }
+ sqlite3VdbeChangeP3(v, addr, (void *)&keyInfo, P3_KEYINFO);
+ break;
+ }
+
+ case TK_EXISTS:
+ case TK_SELECT: {
+ /* This has to be a scalar SELECT. Generate code to put the
+ ** value of this select in a memory cell and record the number
+ ** of the memory cell in iColumn.
+ */
+ static const Token one = { (u8*)"1", 0, 1 };
+ Select *pSel;
+ int iMem;
+ int sop;
+
+ pExpr->iColumn = iMem = pParse->nMem++;
+ pSel = pExpr->pSelect;
+ if( pExpr->op==TK_SELECT ){
+ sop = SRT_Mem;
+ sqlite3VdbeAddOp(v, OP_MemNull, iMem, 0);
+ VdbeComment((v, "# Init subquery result"));
+ }else{
+ sop = SRT_Exists;
+ sqlite3VdbeAddOp(v, OP_MemInt, 0, iMem);
+ VdbeComment((v, "# Init EXISTS result"));
+ }
+ sqlite3ExprDelete(pSel->pLimit);
+ pSel->pLimit = sqlite3Expr(TK_INTEGER, 0, 0, &one);
+ sqlite3Select(pParse, pSel, sop, iMem, 0, 0, 0, 0);
+ break;
+ }
+ }
+
+ if( testAddr ){
+ sqlite3VdbeJumpHere(v, testAddr);
+ }
+ return;
+}
+#endif /* SQLITE_OMIT_SUBQUERY */
+
+/*
+** Generate an instruction that will put the integer described by
+** text z[0..n-1] on the stack.
+*/
+static void codeInteger(Vdbe *v, const char *z, int n){
+ int i;
+ if( sqlite3GetInt32(z, &i) ){
+ sqlite3VdbeAddOp(v, OP_Integer, i, 0);
+ }else if( sqlite3FitsIn64Bits(z) ){
+ sqlite3VdbeOp3(v, OP_Int64, 0, 0, z, n);
+ }else{
+ sqlite3VdbeOp3(v, OP_Real, 0, 0, z, n);
+ }
+}
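+
+/* Illustrative examples: "42" fits in a 32-bit integer and is coded with
+** OP_Integer; "9223372036854775807" does not fit in 32 bits but does fit
+** in 64, so it gets OP_Int64; a literal too large even for 64 bits falls
+** through to OP_Real.
+*/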
+
+/*
+** Generate code into the current Vdbe to evaluate the given
+** expression and leave the result on the top of stack.
+**
+** This code depends on the fact that certain token values (ex: TK_EQ)
+** are the same as opcode values (ex: OP_Eq) that implement the corresponding
+** operation. Special comments in vdbe.c and the mkopcodeh.awk script in
+** the make process cause these values to align. Assert()s in the code
+** below verify that the numbers are aligned correctly.
+*/
+void sqlite3ExprCode(Parse *pParse, Expr *pExpr){
+ Vdbe *v = pParse->pVdbe;
+ int op;
+ int stackChng = 1; /* Amount of change to stack depth */
+
+ if( v==0 ) return;
+ if( pExpr==0 ){
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ return;
+ }
+ op = pExpr->op;
+ switch( op ){
+ case TK_AGG_COLUMN: {
+ AggInfo *pAggInfo = pExpr->pAggInfo;
+ struct AggInfo_col *pCol = &pAggInfo->aCol[pExpr->iAgg];
+ if( !pAggInfo->directMode ){
+ sqlite3VdbeAddOp(v, OP_MemLoad, pCol->iMem, 0);
+ break;
+ }else if( pAggInfo->useSortingIdx ){
+ sqlite3VdbeAddOp(v, OP_Column, pAggInfo->sortingIdx,
+ pCol->iSorterColumn);
+ break;
+ }
+ /* Otherwise, fall thru into the TK_COLUMN case */
+ }
+ case TK_COLUMN: {
+ if( pExpr->iTable<0 ){
+ /* This only happens when coding check constraints */
+ assert( pParse->ckOffset>0 );
+ sqlite3VdbeAddOp(v, OP_Dup, pParse->ckOffset-pExpr->iColumn-1, 1);
+ }else if( pExpr->iColumn>=0 ){
+ Table *pTab = pExpr->pTab;
+ int iCol = pExpr->iColumn;
+ int op = (pTab && IsVirtual(pTab)) ? OP_VColumn : OP_Column;
+ sqlite3VdbeAddOp(v, op, pExpr->iTable, iCol);
+ sqlite3ColumnDefault(v, pTab, iCol);
+#ifndef SQLITE_OMIT_FLOATING_POINT
+ if( pTab && pTab->aCol[iCol].affinity==SQLITE_AFF_REAL ){
+ sqlite3VdbeAddOp(v, OP_RealAffinity, 0, 0);
+ }
+#endif
+ }else{
+ Table *pTab = pExpr->pTab;
+ int op = (pTab && IsVirtual(pTab)) ? OP_VRowid : OP_Rowid;
+ sqlite3VdbeAddOp(v, op, pExpr->iTable, 0);
+ }
+ break;
+ }
+ case TK_INTEGER: {
+ codeInteger(v, (char*)pExpr->token.z, pExpr->token.n);
+ break;
+ }
+ case TK_FLOAT:
+ case TK_STRING: {
+ assert( TK_FLOAT==OP_Real );
+ assert( TK_STRING==OP_String8 );
+ sqlite3DequoteExpr(pExpr);
+ sqlite3VdbeOp3(v, op, 0, 0, (char*)pExpr->token.z, pExpr->token.n);
+ break;
+ }
+ case TK_NULL: {
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ break;
+ }
+#ifndef SQLITE_OMIT_BLOB_LITERAL
+ case TK_BLOB: {
+ int n;
+ const char *z;
+ assert( TK_BLOB==OP_HexBlob );
+ n = pExpr->token.n - 3;
+ z = (char*)pExpr->token.z + 2;
+ assert( n>=0 );
+ if( n==0 ){
+ z = "";
+ }
+ sqlite3VdbeOp3(v, op, 0, 0, z, n);
+ break;
+ }
+#endif
+ case TK_VARIABLE: {
+ sqlite3VdbeAddOp(v, OP_Variable, pExpr->iTable, 0);
+ if( pExpr->token.n>1 ){
+ sqlite3VdbeChangeP3(v, -1, (char*)pExpr->token.z, pExpr->token.n);
+ }
+ break;
+ }
+ case TK_REGISTER: {
+ sqlite3VdbeAddOp(v, OP_MemLoad, pExpr->iTable, 0);
+ break;
+ }
+#ifndef SQLITE_OMIT_CAST
+ case TK_CAST: {
+ /* Expressions of the form: CAST(pLeft AS token) */
+ int aff, to_op;
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ aff = sqlite3AffinityType(&pExpr->token);
+ to_op = aff - SQLITE_AFF_TEXT + OP_ToText;
+ assert( to_op==OP_ToText || aff!=SQLITE_AFF_TEXT );
+ assert( to_op==OP_ToBlob || aff!=SQLITE_AFF_NONE );
+ assert( to_op==OP_ToNumeric || aff!=SQLITE_AFF_NUMERIC );
+ assert( to_op==OP_ToInt || aff!=SQLITE_AFF_INTEGER );
+ assert( to_op==OP_ToReal || aff!=SQLITE_AFF_REAL );
+ sqlite3VdbeAddOp(v, to_op, 0, 0);
+ stackChng = 0;
+ break;
+ }
+#endif /* SQLITE_OMIT_CAST */
+ case TK_LT:
+ case TK_LE:
+ case TK_GT:
+ case TK_GE:
+ case TK_NE:
+ case TK_EQ: {
+ assert( TK_LT==OP_Lt );
+ assert( TK_LE==OP_Le );
+ assert( TK_GT==OP_Gt );
+ assert( TK_GE==OP_Ge );
+ assert( TK_EQ==OP_Eq );
+ assert( TK_NE==OP_Ne );
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ sqlite3ExprCode(pParse, pExpr->pRight);
+ codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, 0, 0);
+ stackChng = -1;
+ break;
+ }
+ case TK_AND:
+ case TK_OR:
+ case TK_PLUS:
+ case TK_STAR:
+ case TK_MINUS:
+ case TK_REM:
+ case TK_BITAND:
+ case TK_BITOR:
+ case TK_SLASH:
+ case TK_LSHIFT:
+ case TK_RSHIFT:
+ case TK_CONCAT: {
+ assert( TK_AND==OP_And );
+ assert( TK_OR==OP_Or );
+ assert( TK_PLUS==OP_Add );
+ assert( TK_MINUS==OP_Subtract );
+ assert( TK_REM==OP_Remainder );
+ assert( TK_BITAND==OP_BitAnd );
+ assert( TK_BITOR==OP_BitOr );
+ assert( TK_SLASH==OP_Divide );
+ assert( TK_LSHIFT==OP_ShiftLeft );
+ assert( TK_RSHIFT==OP_ShiftRight );
+ assert( TK_CONCAT==OP_Concat );
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ sqlite3ExprCode(pParse, pExpr->pRight);
+ sqlite3VdbeAddOp(v, op, 0, 0);
+ stackChng = -1;
+ break;
+ }
+ case TK_UMINUS: {
+ Expr *pLeft = pExpr->pLeft;
+ assert( pLeft );
+ if( pLeft->op==TK_FLOAT || pLeft->op==TK_INTEGER ){
+ Token *p = &pLeft->token;
+ char *z = sqlite3MPrintf("-%.*s", p->n, p->z);
+ if( pLeft->op==TK_FLOAT ){
+ sqlite3VdbeOp3(v, OP_Real, 0, 0, z, p->n+1);
+ }else{
+ codeInteger(v, z, p->n+1);
+ }
+ sqliteFree(z);
+ break;
+ }
+ /* Fall through into TK_NOT */
+ }
+ case TK_BITNOT:
+ case TK_NOT: {
+ assert( TK_BITNOT==OP_BitNot );
+ assert( TK_NOT==OP_Not );
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ sqlite3VdbeAddOp(v, op, 0, 0);
+ stackChng = 0;
+ break;
+ }
+ case TK_ISNULL:
+ case TK_NOTNULL: {
+ int dest;
+ assert( TK_ISNULL==OP_IsNull );
+ assert( TK_NOTNULL==OP_NotNull );
+ sqlite3VdbeAddOp(v, OP_Integer, 1, 0);
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ dest = sqlite3VdbeCurrentAddr(v) + 2;
+ sqlite3VdbeAddOp(v, op, 1, dest);
+ sqlite3VdbeAddOp(v, OP_AddImm, -1, 0);
+ stackChng = 0;
+ break;
+ }
+ case TK_AGG_FUNCTION: {
+ AggInfo *pInfo = pExpr->pAggInfo;
+ if( pInfo==0 ){
+ sqlite3ErrorMsg(pParse, "misuse of aggregate: %T",
+ &pExpr->span);
+ }else{
+ sqlite3VdbeAddOp(v, OP_MemLoad, pInfo->aFunc[pExpr->iAgg].iMem, 0);
+ }
+ break;
+ }
+ case TK_CONST_FUNC:
+ case TK_FUNCTION: {
+ ExprList *pList = pExpr->pList;
+ int nExpr = pList ? pList->nExpr : 0;
+ FuncDef *pDef;
+ int nId;
+ const char *zId;
+ int constMask = 0;
+ int i;
+ u8 enc = ENC(pParse->db);
+ CollSeq *pColl = 0;
+ zId = (char*)pExpr->token.z;
+ nId = pExpr->token.n;
+ pDef = sqlite3FindFunction(pParse->db, zId, nId, nExpr, enc, 0);
+ assert( pDef!=0 );
+ nExpr = sqlite3ExprCodeExprList(pParse, pList);
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ /* Possibly overload the function if the first argument is
+ ** a virtual table column.
+ **
+ ** For infix functions (LIKE, GLOB, REGEXP, and MATCH) use the
+ ** second argument, not the first, as the argument to test to
+ ** see if it is a column in a virtual table. This is done because
+ ** the left operand of infix functions (the operand we want to
+ ** control overloading) ends up as the second argument to the
+ ** function. The expression "A glob B" is equivalent to
+ ** "glob(B,A). We want to use the A in "A glob B" to test
+ ** for function overloading. But we use the B term in "glob(B,A)".
+ */
+ if( nExpr>=2 && (pExpr->flags & EP_InfixFunc) ){
+ pDef = sqlite3VtabOverloadFunction(pDef, nExpr, pList->a[1].pExpr);
+ }else if( nExpr>0 ){
+ pDef = sqlite3VtabOverloadFunction(pDef, nExpr, pList->a[0].pExpr);
+ }
+#endif
+ for(i=0; i<nExpr && i<32; i++){
+ if( sqlite3ExprIsConstant(pList->a[i].pExpr) ){
+ constMask |= (1<<i);
+ }
+ if( pDef->needCollSeq && !pColl ){
+ pColl = sqlite3ExprCollSeq(pParse, pList->a[i].pExpr);
+ }
+ }
+ if( pDef->needCollSeq ){
+ if( !pColl ) pColl = pParse->db->pDfltColl;
+ sqlite3VdbeOp3(v, OP_CollSeq, 0, 0, (char *)pColl, P3_COLLSEQ);
+ }
+ sqlite3VdbeOp3(v, OP_Function, constMask, nExpr, (char*)pDef, P3_FUNCDEF);
+ stackChng = 1-nExpr;
+ break;
+ }
+#ifndef SQLITE_OMIT_SUBQUERY
+ case TK_EXISTS:
+ case TK_SELECT: {
+ if( pExpr->iColumn==0 ){
+ sqlite3CodeSubselect(pParse, pExpr);
+ }
+ sqlite3VdbeAddOp(v, OP_MemLoad, pExpr->iColumn, 0);
+ VdbeComment((v, "# load subquery result"));
+ break;
+ }
+ case TK_IN: {
+ int addr;
+ char affinity;
+ int ckOffset = pParse->ckOffset;
+ sqlite3CodeSubselect(pParse, pExpr);
+
+ /* Figure out the affinity to use to create a key from the results
+ ** of the expression. The single affinity character computed below is
+ ** passed as the P3 argument of OP_MakeRecord.
+ */
+ affinity = comparisonAffinity(pExpr);
+
+ sqlite3VdbeAddOp(v, OP_Integer, 1, 0);
+ pParse->ckOffset = ckOffset+1;
+
+ /* Code the <expr> from "<expr> IN (...)". The temporary table
+ ** pExpr->iTable contains the values that make up the (...) set.
+ */
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ addr = sqlite3VdbeCurrentAddr(v);
+ sqlite3VdbeAddOp(v, OP_NotNull, -1, addr+4); /* addr + 0 */
+ sqlite3VdbeAddOp(v, OP_Pop, 2, 0);
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, addr+7);
+ sqlite3VdbeOp3(v, OP_MakeRecord, 1, 0, &affinity, 1); /* addr + 4 */
+ sqlite3VdbeAddOp(v, OP_Found, pExpr->iTable, addr+7);
+ sqlite3VdbeAddOp(v, OP_AddImm, -1, 0); /* addr + 6 */
+
+ break;
+ }
+#endif
+ case TK_BETWEEN: {
+ Expr *pLeft = pExpr->pLeft;
+ struct ExprList_item *pLItem = pExpr->pList->a;
+ Expr *pRight = pLItem->pExpr;
+ sqlite3ExprCode(pParse, pLeft);
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ sqlite3ExprCode(pParse, pRight);
+ codeCompare(pParse, pLeft, pRight, OP_Ge, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Pull, 1, 0);
+ pLItem++;
+ pRight = pLItem->pExpr;
+ sqlite3ExprCode(pParse, pRight);
+ codeCompare(pParse, pLeft, pRight, OP_Le, 0, 0);
+ sqlite3VdbeAddOp(v, OP_And, 0, 0);
+ break;
+ }
+ case TK_UPLUS:
+ case TK_AS: {
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ stackChng = 0;
+ break;
+ }
+ case TK_CASE: {
+ int expr_end_label;
+ int jumpInst;
+ int nExpr;
+ int i;
+ ExprList *pEList;
+ struct ExprList_item *aListelem;
+
+ assert(pExpr->pList);
+ assert((pExpr->pList->nExpr % 2) == 0);
+ assert(pExpr->pList->nExpr > 0);
+ pEList = pExpr->pList;
+ aListelem = pEList->a;
+ nExpr = pEList->nExpr;
+ expr_end_label = sqlite3VdbeMakeLabel(v);
+ if( pExpr->pLeft ){
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ }
+ for(i=0; i<nExpr; i=i+2){
+ sqlite3ExprCode(pParse, aListelem[i].pExpr);
+ if( pExpr->pLeft ){
+ sqlite3VdbeAddOp(v, OP_Dup, 1, 1);
+ jumpInst = codeCompare(pParse, pExpr->pLeft, aListelem[i].pExpr,
+ OP_Ne, 0, 1);
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ }else{
+ jumpInst = sqlite3VdbeAddOp(v, OP_IfNot, 1, 0);
+ }
+ sqlite3ExprCode(pParse, aListelem[i+1].pExpr);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, expr_end_label);
+ sqlite3VdbeJumpHere(v, jumpInst);
+ }
+ if( pExpr->pLeft ){
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ }
+ if( pExpr->pRight ){
+ sqlite3ExprCode(pParse, pExpr->pRight);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ }
+ sqlite3VdbeResolveLabel(v, expr_end_label);
+ break;
+ }
+#ifndef SQLITE_OMIT_TRIGGER
+ case TK_RAISE: {
+ if( !pParse->trigStack ){
+ sqlite3ErrorMsg(pParse,
+ "RAISE() may only be used within a trigger-program");
+ return;
+ }
+ if( pExpr->iColumn!=OE_Ignore ){
+ assert( pExpr->iColumn==OE_Rollback ||
+ pExpr->iColumn == OE_Abort ||
+ pExpr->iColumn == OE_Fail );
+ sqlite3DequoteExpr(pExpr);
+ sqlite3VdbeOp3(v, OP_Halt, SQLITE_CONSTRAINT, pExpr->iColumn,
+ (char*)pExpr->token.z, pExpr->token.n);
+ } else {
+ assert( pExpr->iColumn == OE_Ignore );
+ sqlite3VdbeAddOp(v, OP_ContextPop, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, pParse->trigStack->ignoreJump);
+ VdbeComment((v, "# raise(IGNORE)"));
+ }
+ stackChng = 0;
+ break;
+ }
+#endif
+ }
+
+ if( pParse->ckOffset ){
+ pParse->ckOffset += stackChng;
+ assert( pParse->ckOffset );
+ }
+}
+
+#ifndef SQLITE_OMIT_TRIGGER
+/*
+** Generate code that evaluates the given expression and leaves the result
+** on the stack. See also sqlite3ExprCode().
+**
+** This routine might also cache the result and modify the pExpr tree
+** so that it will make use of the cached result on subsequent evaluations
+** rather than evaluate the whole expression again. Trivial expressions are
+** not cached. If the expression is cached, its result is stored in a
+** memory location.
+*/
+void sqlite3ExprCodeAndCache(Parse *pParse, Expr *pExpr){
+ Vdbe *v = pParse->pVdbe;
+ int iMem;
+ int addr1, addr2;
+ if( v==0 ) return;
+ addr1 = sqlite3VdbeCurrentAddr(v);
+ sqlite3ExprCode(pParse, pExpr);
+ addr2 = sqlite3VdbeCurrentAddr(v);
+ if( addr2>addr1+1 || sqlite3VdbeGetOp(v, addr1)->opcode==OP_Function ){
+ iMem = pExpr->iTable = pParse->nMem++;
+ sqlite3VdbeAddOp(v, OP_MemStore, iMem, 0);
+ pExpr->op = TK_REGISTER;
+ }
+}
+#endif
+
+/*
+** Generate code that pushes the value of every element of the given
+** expression list onto the stack.
+**
+** Return the number of elements pushed onto the stack.
+*/
+int sqlite3ExprCodeExprList(
+ Parse *pParse, /* Parsing context */
+ ExprList *pList /* The expression list to be coded */
+){
+ struct ExprList_item *pItem;
+ int i, n;
+ if( pList==0 ) return 0;
+ n = pList->nExpr;
+ for(pItem=pList->a, i=n; i>0; i--, pItem++){
+ sqlite3ExprCode(pParse, pItem->pExpr);
+ }
+ return n;
+}
+
+/*
+** Generate code for a boolean expression such that a jump is made
+** to the label "dest" if the expression is true but execution
+** continues straight thru if the expression is false.
+**
+** If the expression evaluates to NULL (neither true nor false), then
+** take the jump if the jumpIfNull flag is true.
+**
+** This code depends on the fact that certain token values (ex: TK_EQ)
+** are the same as opcode values (ex: OP_Eq) that implement the corresponding
+** operation. Special comments in vdbe.c and the mkopcodeh.awk script in
+** the make process cause these values to align. Assert()s in the code
+** below verify that the numbers are aligned correctly.
+*/
+void sqlite3ExprIfTrue(Parse *pParse, Expr *pExpr, int dest, int jumpIfNull){
+ Vdbe *v = pParse->pVdbe;
+ int op = 0;
+ int ckOffset = pParse->ckOffset;
+ if( v==0 || pExpr==0 ) return;
+ op = pExpr->op;
+ switch( op ){
+ case TK_AND: {
+ int d2 = sqlite3VdbeMakeLabel(v);
+ sqlite3ExprIfFalse(pParse, pExpr->pLeft, d2, !jumpIfNull);
+ sqlite3ExprIfTrue(pParse, pExpr->pRight, dest, jumpIfNull);
+ sqlite3VdbeResolveLabel(v, d2);
+ break;
+ }
+ case TK_OR: {
+ sqlite3ExprIfTrue(pParse, pExpr->pLeft, dest, jumpIfNull);
+ sqlite3ExprIfTrue(pParse, pExpr->pRight, dest, jumpIfNull);
+ break;
+ }
+ case TK_NOT: {
+ sqlite3ExprIfFalse(pParse, pExpr->pLeft, dest, jumpIfNull);
+ break;
+ }
+ case TK_LT:
+ case TK_LE:
+ case TK_GT:
+ case TK_GE:
+ case TK_NE:
+ case TK_EQ: {
+ assert( TK_LT==OP_Lt );
+ assert( TK_LE==OP_Le );
+ assert( TK_GT==OP_Gt );
+ assert( TK_GE==OP_Ge );
+ assert( TK_EQ==OP_Eq );
+ assert( TK_NE==OP_Ne );
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ sqlite3ExprCode(pParse, pExpr->pRight);
+ codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, dest, jumpIfNull);
+ break;
+ }
+ case TK_ISNULL:
+ case TK_NOTNULL: {
+ assert( TK_ISNULL==OP_IsNull );
+ assert( TK_NOTNULL==OP_NotNull );
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ sqlite3VdbeAddOp(v, op, 1, dest);
+ break;
+ }
+ case TK_BETWEEN: {
+ /* The expression "x BETWEEN y AND z" is implemented as:
+ **
+ ** 1 IF (x < y) GOTO 3
+ ** 2 IF (x <= z) GOTO <dest>
+ ** 3 ...
+ */
+ int addr;
+ Expr *pLeft = pExpr->pLeft;
+ Expr *pRight = pExpr->pList->a[0].pExpr;
+ sqlite3ExprCode(pParse, pLeft);
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ sqlite3ExprCode(pParse, pRight);
+ addr = codeCompare(pParse, pLeft, pRight, OP_Lt, 0, !jumpIfNull);
+
+ pRight = pExpr->pList->a[1].pExpr;
+ sqlite3ExprCode(pParse, pRight);
+ codeCompare(pParse, pLeft, pRight, OP_Le, dest, jumpIfNull);
+
+ sqlite3VdbeAddOp(v, OP_Integer, 0, 0);
+ sqlite3VdbeJumpHere(v, addr);
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ break;
+ }
+ default: {
+ sqlite3ExprCode(pParse, pExpr);
+ sqlite3VdbeAddOp(v, OP_If, jumpIfNull, dest);
+ break;
+ }
+ }
+ pParse->ckOffset = ckOffset;
+}
+
+/*
+** Generate code for a boolean expression such that a jump is made
+** to the label "dest" if the expression is false but execution
+** continues straight thru if the expression is true.
+**
+** If the expression evaluates to NULL (neither true nor false) then
+** jump if jumpIfNull is true or fall through if jumpIfNull is false.
+*/
+void sqlite3ExprIfFalse(Parse *pParse, Expr *pExpr, int dest, int jumpIfNull){
+ Vdbe *v = pParse->pVdbe;
+ int op = 0;
+ int ckOffset = pParse->ckOffset;
+ if( v==0 || pExpr==0 ) return;
+
+ /* The value of pExpr->op and op are related as follows:
+ **
+ ** pExpr->op op
+ ** --------- ----------
+ ** TK_ISNULL OP_NotNull
+ ** TK_NOTNULL OP_IsNull
+ ** TK_NE OP_Eq
+ ** TK_EQ OP_Ne
+ ** TK_GT OP_Le
+ ** TK_LE OP_Gt
+ ** TK_GE OP_Lt
+ ** TK_LT OP_Ge
+ **
+ ** For other values of pExpr->op, op is undefined and unused.
+ ** The values of the TK_ and OP_ constants are arranged such that we
+ ** can compute the mapping above using the following expression.
+ ** Assert()s verify that the computation is correct.
+ */
+ op = ((pExpr->op+(TK_ISNULL&1))^1)-(TK_ISNULL&1);
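+
+ /* Illustrative note: each TK_ value above pairs with its logical
+ ** complement in an adjacent slot, so after the (TK_ISNULL&1) parity
+ ** shift the two members of a pair differ only in the low bit, and
+ ** XOR-ing that bit swaps them. */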
+
+ /* Verify correct alignment of TK_ and OP_ constants
+ */
+ assert( pExpr->op!=TK_ISNULL || op==OP_NotNull );
+ assert( pExpr->op!=TK_NOTNULL || op==OP_IsNull );
+ assert( pExpr->op!=TK_NE || op==OP_Eq );
+ assert( pExpr->op!=TK_EQ || op==OP_Ne );
+ assert( pExpr->op!=TK_LT || op==OP_Ge );
+ assert( pExpr->op!=TK_LE || op==OP_Gt );
+ assert( pExpr->op!=TK_GT || op==OP_Le );
+ assert( pExpr->op!=TK_GE || op==OP_Lt );
+
+ switch( pExpr->op ){
+ case TK_AND: {
+ sqlite3ExprIfFalse(pParse, pExpr->pLeft, dest, jumpIfNull);
+ sqlite3ExprIfFalse(pParse, pExpr->pRight, dest, jumpIfNull);
+ break;
+ }
+ case TK_OR: {
+ int d2 = sqlite3VdbeMakeLabel(v);
+ sqlite3ExprIfTrue(pParse, pExpr->pLeft, d2, !jumpIfNull);
+ sqlite3ExprIfFalse(pParse, pExpr->pRight, dest, jumpIfNull);
+ sqlite3VdbeResolveLabel(v, d2);
+ break;
+ }
+ case TK_NOT: {
+ sqlite3ExprIfTrue(pParse, pExpr->pLeft, dest, jumpIfNull);
+ break;
+ }
+ case TK_LT:
+ case TK_LE:
+ case TK_GT:
+ case TK_GE:
+ case TK_NE:
+ case TK_EQ: {
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ sqlite3ExprCode(pParse, pExpr->pRight);
+ codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, dest, jumpIfNull);
+ break;
+ }
+ case TK_ISNULL:
+ case TK_NOTNULL: {
+ sqlite3ExprCode(pParse, pExpr->pLeft);
+ sqlite3VdbeAddOp(v, op, 1, dest);
+ break;
+ }
+ case TK_BETWEEN: {
+ /* The expression is "x BETWEEN y AND z". It is implemented as:
+ **
+ ** 1 IF (x >= y) GOTO 3
+ ** 2 GOTO <dest>
+ ** 3 IF (x > z) GOTO <dest>
+ */
+ int addr;
+ Expr *pLeft = pExpr->pLeft;
+ Expr *pRight = pExpr->pList->a[0].pExpr;
+ sqlite3ExprCode(pParse, pLeft);
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ sqlite3ExprCode(pParse, pRight);
+ addr = sqlite3VdbeCurrentAddr(v);
+ codeCompare(pParse, pLeft, pRight, OP_Ge, addr+3, !jumpIfNull);
+
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, dest);
+ pRight = pExpr->pList->a[1].pExpr;
+ sqlite3ExprCode(pParse, pRight);
+ codeCompare(pParse, pLeft, pRight, OP_Gt, dest, jumpIfNull);
+ break;
+ }
+ default: {
+ sqlite3ExprCode(pParse, pExpr);
+ sqlite3VdbeAddOp(v, OP_IfNot, jumpIfNull, dest);
+ break;
+ }
+ }
+ pParse->ckOffset = ckOffset;
+}
+
+/*
+** Do a deep comparison of two expression trees. Return TRUE (non-zero)
+** if they are identical and return FALSE if they differ in any way.
+*/
+int sqlite3ExprCompare(Expr *pA, Expr *pB){
+ int i;
+ if( pA==0||pB==0 ){
+ return pB==pA;
+ }
+ if( pA->op!=pB->op ) return 0;
+ if( (pA->flags & EP_Distinct)!=(pB->flags & EP_Distinct) ) return 0;
+ if( !sqlite3ExprCompare(pA->pLeft, pB->pLeft) ) return 0;
+ if( !sqlite3ExprCompare(pA->pRight, pB->pRight) ) return 0;
+ if( pA->pList ){
+ if( pB->pList==0 ) return 0;
+ if( pA->pList->nExpr!=pB->pList->nExpr ) return 0;
+ for(i=0; i<pA->pList->nExpr; i++){
+ if( !sqlite3ExprCompare(pA->pList->a[i].pExpr, pB->pList->a[i].pExpr) ){
+ return 0;
+ }
+ }
+ }else if( pB->pList ){
+ return 0;
+ }
+ if( pA->pSelect || pB->pSelect ) return 0;
+ if( pA->iTable!=pB->iTable || pA->iColumn!=pB->iColumn ) return 0;
+ if( pA->token.z ){
+ if( pB->token.z==0 ) return 0;
+ if( pB->token.n!=pA->token.n ) return 0;
+ if( sqlite3StrNICmp((char*)pA->token.z,(char*)pB->token.z,pB->token.n)!=0 ){
+ return 0;
+ }
+ }
+ return 1;
+}
+
+
+/*
+** Add a new element to the pAggInfo->aCol[] array. Return the index of
+** the new element. Return a negative number if malloc fails.
+*/
+static int addAggInfoColumn(AggInfo *pInfo){
+ int i;
+ i = sqlite3ArrayAllocate((void**)&pInfo->aCol, sizeof(pInfo->aCol[0]), 3);
+ if( i<0 ){
+ return -1;
+ }
+ return i;
+}
+
+/*
+** Add a new element to the pAggInfo->aFunc[] array. Return the index of
+** the new element. Return a negative number if malloc fails.
+*/
+static int addAggInfoFunc(AggInfo *pInfo){
+ int i;
+ i = sqlite3ArrayAllocate((void**)&pInfo->aFunc, sizeof(pInfo->aFunc[0]), 2);
+ if( i<0 ){
+ return -1;
+ }
+ return i;
+}
+
+/*
+** This is an xFunc for walkExprTree() used to implement
+** sqlite3ExprAnalyzeAggregates(). See sqlite3ExprAnalyzeAggregates
+** for additional information.
+**
+** This routine analyzes the aggregate function at pExpr.
+*/
+static int analyzeAggregate(void *pArg, Expr *pExpr){
+ int i;
+ NameContext *pNC = (NameContext *)pArg;
+ Parse *pParse = pNC->pParse;
+ SrcList *pSrcList = pNC->pSrcList;
+ AggInfo *pAggInfo = pNC->pAggInfo;
+
+
+ switch( pExpr->op ){
+ case TK_COLUMN: {
+ /* Check to see if the column is in one of the tables in the FROM
+ ** clause of the aggregate query */
+ if( pSrcList ){
+ struct SrcList_item *pItem = pSrcList->a;
+ for(i=0; i<pSrcList->nSrc; i++, pItem++){
+ struct AggInfo_col *pCol;
+ if( pExpr->iTable==pItem->iCursor ){
+ /* If we reach this point, it means that pExpr refers to a table
+ ** that is in the FROM clause of the aggregate query.
+ **
+ ** Make an entry for the column in pAggInfo->aCol[] if there
+ ** is not an entry there already.
+ */
+ pCol = pAggInfo->aCol;
+ for(i=0; i<pAggInfo->nColumn; i++, pCol++){
+ if( pCol->iTable==pExpr->iTable &&
+ pCol->iColumn==pExpr->iColumn ){
+ break;
+ }
+ }
+ if( i>=pAggInfo->nColumn && (i = addAggInfoColumn(pAggInfo))>=0 ){
+ pCol = &pAggInfo->aCol[i];
+ pCol->iTable = pExpr->iTable;
+ pCol->iColumn = pExpr->iColumn;
+ pCol->iMem = pParse->nMem++;
+ pCol->iSorterColumn = -1;
+ pCol->pExpr = pExpr;
+ if( pAggInfo->pGroupBy ){
+ int j, n;
+ ExprList *pGB = pAggInfo->pGroupBy;
+ struct ExprList_item *pTerm = pGB->a;
+ n = pGB->nExpr;
+ for(j=0; j<n; j++, pTerm++){
+ Expr *pE = pTerm->pExpr;
+ if( pE->op==TK_COLUMN && pE->iTable==pExpr->iTable &&
+ pE->iColumn==pExpr->iColumn ){
+ pCol->iSorterColumn = j;
+ break;
+ }
+ }
+ }
+ if( pCol->iSorterColumn<0 ){
+ pCol->iSorterColumn = pAggInfo->nSortingColumn++;
+ }
+ }
+ /* There is now an entry for pExpr in pAggInfo->aCol[] (either
+ ** because it was there before or because we just created it).
+ ** Convert the pExpr to be a TK_AGG_COLUMN referring to that
+ ** pAggInfo->aCol[] entry.
+ */
+ pExpr->pAggInfo = pAggInfo;
+ pExpr->op = TK_AGG_COLUMN;
+ pExpr->iAgg = i;
+ break;
+ } /* endif pExpr->iTable==pItem->iCursor */
+ } /* end loop over pSrcList */
+ }
+ return 1;
+ }
+ case TK_AGG_FUNCTION: {
+ /* The pNC->nDepth==0 test causes aggregate functions in subqueries
+ ** to be ignored */
+ if( pNC->nDepth==0 ){
+ /* Check to see if pExpr is a duplicate of another aggregate
+ ** function that is already in the pAggInfo structure
+ */
+ struct AggInfo_func *pItem = pAggInfo->aFunc;
+ for(i=0; i<pAggInfo->nFunc; i++, pItem++){
+ if( sqlite3ExprCompare(pItem->pExpr, pExpr) ){
+ break;
+ }
+ }
+ if( i>=pAggInfo->nFunc ){
+ /* pExpr is original. Make a new entry in pAggInfo->aFunc[]
+ */
+ u8 enc = ENC(pParse->db);
+ i = addAggInfoFunc(pAggInfo);
+ if( i>=0 ){
+ pItem = &pAggInfo->aFunc[i];
+ pItem->pExpr = pExpr;
+ pItem->iMem = pParse->nMem++;
+ pItem->pFunc = sqlite3FindFunction(pParse->db,
+ (char*)pExpr->token.z, pExpr->token.n,
+ pExpr->pList ? pExpr->pList->nExpr : 0, enc, 0);
+ if( pExpr->flags & EP_Distinct ){
+ pItem->iDistinct = pParse->nTab++;
+ }else{
+ pItem->iDistinct = -1;
+ }
+ }
+ }
+ /* Make pExpr point to the appropriate pAggInfo->aFunc[] entry
+ */
+ pExpr->iAgg = i;
+ pExpr->pAggInfo = pAggInfo;
+ return 1;
+ }
+ }
+ }
+
+ /* Recursively walk subqueries looking for TK_COLUMN nodes that need
+ ** to be changed to TK_AGG_COLUMN. But increment nDepth so that
+ ** TK_AGG_FUNCTION nodes in subqueries will be unchanged.
+ */
+ if( pExpr->pSelect ){
+ pNC->nDepth++;
+ walkSelectExpr(pExpr->pSelect, analyzeAggregate, pNC);
+ pNC->nDepth--;
+ }
+ return 0;
+}
+
+/*
+** Analyze the given expression looking for aggregate functions and
+** for columns and functions that need to be added to the AggInfo object
+** that pNC->pAggInfo points to.  Additional entries are made in the
+** AggInfo aCol[] and aFunc[] arrays as necessary.
+**
+** This routine should only be called after the expression has been
+** analyzed by sqlite3ExprResolveNames().
+**
+** If errors are seen, leave an error message in zErrMsg and return
+** the number of errors.
+*/
+int sqlite3ExprAnalyzeAggregates(NameContext *pNC, Expr *pExpr){
+ int nErr = pNC->pParse->nErr;
+ walkExprTree(pExpr, analyzeAggregate, pNC);
+ return pNC->pParse->nErr - nErr;
+}
+
+/*
+** Call sqlite3ExprAnalyzeAggregates() for every expression in an
+** expression list. Return the number of errors.
+**
+** If an error is found, the analysis is cut short.
+*/
+int sqlite3ExprAnalyzeAggList(NameContext *pNC, ExprList *pList){
+ struct ExprList_item *pItem;
+ int i;
+ int nErr = 0;
+ if( pList ){
+ for(pItem=pList->a, i=0; nErr==0 && i<pList->nExpr; i++, pItem++){
+ nErr += sqlite3ExprAnalyzeAggregates(pNC, pItem->pExpr);
+ }
+ }
+ return nErr;
+}
Added: freeswitch/trunk/libs/sqlite/src/func.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/func.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1179 @@
+/*
+** 2002 February 23
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains the C functions that implement various SQL
+** functions of SQLite.
+**
+** There is only one exported symbol in this file - the function
+** sqliteRegisterBuildinFunctions() found at the bottom of the file.
+** All other code has file scope.
+**
+** $Id: func.c,v 1.134 2006/09/16 21:45:14 drh Exp $
+*/
+#include "sqliteInt.h"
+#include <ctype.h>
+/* #include <math.h> */
+#include <stdlib.h>
+#include <assert.h>
+#include "vdbeInt.h"
+#include "os.h"
+
+/*
+** Return the collating function associated with a function.
+*/
+static CollSeq *sqlite3GetFuncCollSeq(sqlite3_context *context){
+ return context->pColl;
+}
+
+/*
+** Implementation of the non-aggregate min() and max() functions
+*/
+static void minmaxFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ int i;
+ int mask; /* 0 for min() or 0xffffffff for max() */
+ int iBest;
+ CollSeq *pColl;
+
+ if( argc==0 ) return;
+ mask = sqlite3_user_data(context)==0 ? 0 : -1;
+ pColl = sqlite3GetFuncCollSeq(context);
+ assert( pColl );
+ assert( mask==-1 || mask==0 );
+ iBest = 0;
+ if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return;
+ for(i=1; i<argc; i++){
+ if( sqlite3_value_type(argv[i])==SQLITE_NULL ) return;
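+ /* With mask==0 the test below keeps iBest pointing at the smallest value
+ ** seen so far (min); with mask==-1 the XOR takes the bitwise complement
+ ** of the comparison result, so the test succeeds only when argv[i] is
+ ** strictly larger and iBest tracks the maximum instead. */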
+ if( (sqlite3MemCompare(argv[iBest], argv[i], pColl)^mask)>=0 ){
+ iBest = i;
+ }
+ }
+ sqlite3_result_value(context, argv[iBest]);
+}
+
+/*
+** Return the type of the argument.
+*/
+static void typeofFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ const char *z = 0;
+ switch( sqlite3_value_type(argv[0]) ){
+ case SQLITE_NULL: z = "null"; break;
+ case SQLITE_INTEGER: z = "integer"; break;
+ case SQLITE_TEXT: z = "text"; break;
+ case SQLITE_FLOAT: z = "real"; break;
+ case SQLITE_BLOB: z = "blob"; break;
+ }
+ sqlite3_result_text(context, z, -1, SQLITE_STATIC);
+}
+
+
+/*
+** Implementation of the length() function
+*/
+static void lengthFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ int len;
+
+ assert( argc==1 );
+ switch( sqlite3_value_type(argv[0]) ){
+ case SQLITE_BLOB:
+ case SQLITE_INTEGER:
+ case SQLITE_FLOAT: {
+ sqlite3_result_int(context, sqlite3_value_bytes(argv[0]));
+ break;
+ }
+ case SQLITE_TEXT: {
+ const unsigned char *z = sqlite3_value_text(argv[0]);
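+ /* Count only bytes that are not UTF-8 continuation bytes (10xxxxxx),
+ ** so that each multi-byte character adds exactly one to the length. */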
+ for(len=0; *z; z++){ if( (0xc0&*z)!=0x80 ) len++; }
+ sqlite3_result_int(context, len);
+ break;
+ }
+ default: {
+ sqlite3_result_null(context);
+ break;
+ }
+ }
+}
+
+/*
+** Implementation of the abs() function
+*/
+static void absFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+ assert( argc==1 );
+ switch( sqlite3_value_type(argv[0]) ){
+ case SQLITE_INTEGER: {
+ i64 iVal = sqlite3_value_int64(argv[0]);
+ if( iVal<0 ){
+ if( (iVal<<1)==0 ){
+ sqlite3_result_error(context, "integer overflow", -1);
+ return;
+ }
+ iVal = -iVal;
+ }
+ sqlite3_result_int64(context, iVal);
+ break;
+ }
+ case SQLITE_NULL: {
+ sqlite3_result_null(context);
+ break;
+ }
+ default: {
+ double rVal = sqlite3_value_double(argv[0]);
+ if( rVal<0 ) rVal = -rVal;
+ sqlite3_result_double(context, rVal);
+ break;
+ }
+ }
+}
+
+/*
+** Implementation of the substr() function
+*/
+static void substrFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ const unsigned char *z;
+ const unsigned char *z2;
+ int i;
+ int p1, p2, len;
+
+ assert( argc==3 );
+ z = sqlite3_value_text(argv[0]);
+ if( z==0 ) return;
+ p1 = sqlite3_value_int(argv[1]);
+ p2 = sqlite3_value_int(argv[2]);
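+ /* p1 is the 1-based starting position (negative values count back from
+ ** the end of the string) and p2 is the number of characters to return;
+ ** both are measured in UTF-8 characters rather than bytes. */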
+ for(len=0, z2=z; *z2; z2++){ if( (0xc0&*z2)!=0x80 ) len++; }
+ if( p1<0 ){
+ p1 += len;
+ if( p1<0 ){
+ p2 += p1;
+ p1 = 0;
+ }
+ }else if( p1>0 ){
+ p1--;
+ }
+ if( p1+p2>len ){
+ p2 = len-p1;
+ }
+ for(i=0; i<p1 && z[i]; i++){
+ if( (z[i]&0xc0)==0x80 ) p1++;
+ }
+ while( z[i] && (z[i]&0xc0)==0x80 ){ i++; p1++; }
+ for(; i<p1+p2 && z[i]; i++){
+ if( (z[i]&0xc0)==0x80 ) p2++;
+ }
+ while( z[i] && (z[i]&0xc0)==0x80 ){ i++; p2++; }
+ if( p2<0 ) p2 = 0;
+ sqlite3_result_text(context, (char*)&z[p1], p2, SQLITE_TRANSIENT);
+}
+
+/*
+** Implementation of the round() function
+*/
+static void roundFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+ int n = 0;
+ double r;
+ char zBuf[500]; /* larger than the %f representation of the largest double */
+ assert( argc==1 || argc==2 );
+ if( argc==2 ){
+ if( SQLITE_NULL==sqlite3_value_type(argv[1]) ) return;
+ n = sqlite3_value_int(argv[1]);
+ if( n>30 ) n = 30;
+ if( n<0 ) n = 0;
+ }
+ if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return;
+ r = sqlite3_value_double(argv[0]);
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%.*f",n,r);
+ sqlite3AtoF(zBuf, &r);
+ sqlite3_result_double(context, r);
+}
+
+/*
+** Implementation of the upper() and lower() SQL functions.
+*/
+static void upperFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+ unsigned char *z;
+ int i;
+ if( argc<1 || SQLITE_NULL==sqlite3_value_type(argv[0]) ) return;
+ z = sqliteMalloc(sqlite3_value_bytes(argv[0])+1);
+ if( z==0 ) return;
+ strcpy((char*)z, (char*)sqlite3_value_text(argv[0]));
+ for(i=0; z[i]; i++){
+ z[i] = toupper(z[i]);
+ }
+ sqlite3_result_text(context, (char*)z, -1, SQLITE_TRANSIENT);
+ sqliteFree(z);
+}
+static void lowerFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+ unsigned char *z;
+ int i;
+ if( argc<1 || SQLITE_NULL==sqlite3_value_type(argv[0]) ) return;
+ z = sqliteMalloc(sqlite3_value_bytes(argv[0])+1);
+ if( z==0 ) return;
+ strcpy((char*)z, (char*)sqlite3_value_text(argv[0]));
+ for(i=0; z[i]; i++){
+ z[i] = tolower(z[i]);
+ }
+ sqlite3_result_text(context, (char*)z, -1, SQLITE_TRANSIENT);
+ sqliteFree(z);
+}
+
+/*
+** Implementation of the IFNULL(), NVL(), and COALESCE() functions.
+** All three do the same thing. They return the first non-NULL
+** argument.
+*/
+static void ifnullFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ int i;
+ for(i=0; i<argc; i++){
+ if( SQLITE_NULL!=sqlite3_value_type(argv[i]) ){
+ sqlite3_result_value(context, argv[i]);
+ break;
+ }
+ }
+}
+
+/*
+** Implementation of random(). Return a random integer.
+*/
+static void randomFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ sqlite_int64 r;
+ sqlite3Randomness(sizeof(r), &r);
+ if( (r<<1)==0 ) r = 0; /* Prevent 0x8000.... as the result so that we */
+ /* can always do abs() of the result */
+ sqlite3_result_int64(context, r);
+}
+
+/*
+** Implementation of the last_insert_rowid() SQL function. The return
+** value is the same as the sqlite3_last_insert_rowid() API function.
+*/
+static void last_insert_rowid(
+ sqlite3_context *context,
+ int arg,
+ sqlite3_value **argv
+){
+ sqlite3 *db = sqlite3_user_data(context);
+ sqlite3_result_int64(context, sqlite3_last_insert_rowid(db));
+}
+
+/*
+** Implementation of the changes() SQL function. The return value is the
+** same as the sqlite3_changes() API function.
+*/
+static void changes(
+ sqlite3_context *context,
+ int arg,
+ sqlite3_value **argv
+){
+ sqlite3 *db = sqlite3_user_data(context);
+ sqlite3_result_int(context, sqlite3_changes(db));
+}
+
+/*
+** Implementation of the total_changes() SQL function. The return value is
+** the same as the sqlite3_total_changes() API function.
+*/
+static void total_changes(
+ sqlite3_context *context,
+ int arg,
+ sqlite3_value **argv
+){
+ sqlite3 *db = sqlite3_user_data(context);
+ sqlite3_result_int(context, sqlite3_total_changes(db));
+}
+
+/*
+** A structure defining how to do GLOB-style comparisons.
+*/
+struct compareInfo {
+ u8 matchAll;
+ u8 matchOne;
+ u8 matchSet;
+ u8 noCase;
+};
+
+static const struct compareInfo globInfo = { '*', '?', '[', 0 };
+/* The correct SQL-92 behavior is for the LIKE operator to ignore
+** case. Thus 'a' LIKE 'A' would be true. */
+static const struct compareInfo likeInfoNorm = { '%', '_', 0, 1 };
+/* If SQLITE_CASE_SENSITIVE_LIKE is defined, then the LIKE operator
+** is case sensitive causing 'a' LIKE 'A' to be false */
+static const struct compareInfo likeInfoAlt = { '%', '_', 0, 0 };
+
+/*
+** X is a pointer to the first byte of a UTF-8 character. Increment
+** X so that it points to the next character. This only works right
+** if X points to a well-formed UTF-8 string.
+*/
+#define sqliteNextChar(X) while( (0xc0&*++(X))==0x80 ){}
+#define sqliteCharVal(X) sqlite3ReadUtf8(X)
+
+
+/*
+** Compare two UTF-8 strings for equality where the first string can
+** potentially be a "glob" expression. Return true (1) if they
+** are the same and false (0) if they are different.
+**
+** Globbing rules:
+**
+** '*' Matches any sequence of zero or more characters.
+**
+** '?' Matches exactly one character.
+**
+** [...] Matches one character from the enclosed list of
+** characters.
+**
+** [^...] Matches one character not in the enclosed list.
+**
+** With the [...] and [^...] matching, a ']' character can be included
+** in the list by making it the first character after '[' or '^'. A
+** range of characters can be specified using '-'. Example:
+** "[a-z]" matches any single lower-case letter. To match a '-', make
+** it the last character in the list.
+**
+** This routine is usually quick, but can be N**2 in the worst case.
+**
+** Hints: to match '*' or '?', put them in "[]". Like this:
+**
+** abc[*]xyz Matches "abc*xyz" only
+*/
+static int patternCompare(
+ const u8 *zPattern, /* The glob pattern */
+ const u8 *zString, /* The string to compare against the glob */
+ const struct compareInfo *pInfo, /* Information about how to do the compare */
+ const int esc /* The escape character */
+){
+ register int c;
+ int invert;
+ int seen;
+ int c2;
+ u8 matchOne = pInfo->matchOne;
+ u8 matchAll = pInfo->matchAll;
+ u8 matchSet = pInfo->matchSet;
+ u8 noCase = pInfo->noCase;
+ int prevEscape = 0; /* True if the previous character was 'escape' */
+
+ while( (c = *zPattern)!=0 ){
+ if( !prevEscape && c==matchAll ){
+ while( (c=zPattern[1]) == matchAll || c == matchOne ){
+ if( c==matchOne ){
+ if( *zString==0 ) return 0;
+ sqliteNextChar(zString);
+ }
+ zPattern++;
+ }
+ if( c && esc && sqlite3ReadUtf8(&zPattern[1])==esc ){
+ u8 const *zTemp = &zPattern[1];
+ sqliteNextChar(zTemp);
+ c = *zTemp;
+ }
+ if( c==0 ) return 1;
+ if( c==matchSet ){
+ assert( esc==0 ); /* This is GLOB, not LIKE */
+ while( *zString && patternCompare(&zPattern[1],zString,pInfo,esc)==0 ){
+ sqliteNextChar(zString);
+ }
+ return *zString!=0;
+ }else{
+ while( (c2 = *zString)!=0 ){
+ if( noCase ){
+ c2 = sqlite3UpperToLower[c2];
+ c = sqlite3UpperToLower[c];
+ while( c2 != 0 && c2 != c ){ c2 = sqlite3UpperToLower[*++zString]; }
+ }else{
+ while( c2 != 0 && c2 != c ){ c2 = *++zString; }
+ }
+ if( c2==0 ) return 0;
+ if( patternCompare(&zPattern[1],zString,pInfo,esc) ) return 1;
+ sqliteNextChar(zString);
+ }
+ return 0;
+ }
+ }else if( !prevEscape && c==matchOne ){
+ if( *zString==0 ) return 0;
+ sqliteNextChar(zString);
+ zPattern++;
+ }else if( c==matchSet ){
+ int prior_c = 0;
+ assert( esc==0 ); /* This only occurs for GLOB, not LIKE */
+ seen = 0;
+ invert = 0;
+ c = sqliteCharVal(zString);
+ if( c==0 ) return 0;
+ c2 = *++zPattern;
+ if( c2=='^' ){ invert = 1; c2 = *++zPattern; }
+ if( c2==']' ){
+ if( c==']' ) seen = 1;
+ c2 = *++zPattern;
+ }
+ while( (c2 = sqliteCharVal(zPattern))!=0 && c2!=']' ){
+ if( c2=='-' && zPattern[1]!=']' && zPattern[1]!=0 && prior_c>0 ){
+ zPattern++;
+ c2 = sqliteCharVal(zPattern);
+ if( c>=prior_c && c<=c2 ) seen = 1;
+ prior_c = 0;
+ }else if( c==c2 ){
+ seen = 1;
+ prior_c = c2;
+ }else{
+ prior_c = c2;
+ }
+ sqliteNextChar(zPattern);
+ }
+ if( c2==0 || (seen ^ invert)==0 ) return 0;
+ sqliteNextChar(zString);
+ zPattern++;
+ }else if( esc && !prevEscape && sqlite3ReadUtf8(zPattern)==esc){
+ prevEscape = 1;
+ sqliteNextChar(zPattern);
+ }else{
+ if( noCase ){
+ if( sqlite3UpperToLower[c] != sqlite3UpperToLower[*zString] ) return 0;
+ }else{
+ if( c != *zString ) return 0;
+ }
+ zPattern++;
+ zString++;
+ prevEscape = 0;
+ }
+ }
+ return *zString==0;
+}
+
+/*
+** Count the number of times that the LIKE operator (or GLOB which is
+** just a variation of LIKE) gets called. This is used for testing
+** only.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_like_count = 0;
+#endif
+
+
+/*
+** Implementation of the like() SQL function. This function implements
+** the built-in LIKE operator. The first argument to the function is the
+** pattern and the second argument is the string. So, the SQL statement:
+**
+** A LIKE B
+**
+** is implemented as like(B,A).
+**
+** This same function (with a different compareInfo structure) computes
+** the GLOB operator.
+*/
+static void likeFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ const unsigned char *zA = sqlite3_value_text(argv[0]);
+ const unsigned char *zB = sqlite3_value_text(argv[1]);
+ int escape = 0;
+ if( argc==3 ){
+ /* The escape character string must consist of a single UTF-8 character.
+ ** Otherwise, return an error.
+ */
+ const unsigned char *zEsc = sqlite3_value_text(argv[2]);
+ if( sqlite3utf8CharLen((char*)zEsc, -1)!=1 ){
+ sqlite3_result_error(context,
+ "ESCAPE expression must be a single character", -1);
+ return;
+ }
+ escape = sqlite3ReadUtf8(zEsc);
+ }
+ if( zA && zB ){
+ struct compareInfo *pInfo = sqlite3_user_data(context);
+#ifdef SQLITE_TEST
+ sqlite3_like_count++;
+#endif
+ sqlite3_result_int(context, patternCompare(zA, zB, pInfo, escape));
+ }
+}
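+
+/* Illustrative examples, assuming the default case-insensitive LIKE:
+**
+**     'abcdef' LIKE 'abc%'                  ->  1
+**     'ABC'    LIKE 'abc'                   ->  1   (case is folded)
+**     'abc'    GLOB 'ABC'                   ->  0   (GLOB is case sensitive)
+**     '10% off' LIKE '10\% %' ESCAPE '\'    ->  1   ('\' escapes the '%')
+*/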
+
+/*
+** Implementation of the NULLIF(x,y) function. The result is the first
+** argument if the arguments are different. The result is NULL if the
+** arguments are equal to each other.
+*/
+static void nullifFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ CollSeq *pColl = sqlite3GetFuncCollSeq(context);
+ if( sqlite3MemCompare(argv[0], argv[1], pColl)!=0 ){
+ sqlite3_result_value(context, argv[0]);
+ }
+}
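+
+/* For example, nullif(5,5) returns NULL while nullif(5,6) returns 5. */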
+
+/*
+** Implementation of the sqlite_version() SQL function. The result is the version
+** of the SQLite library that is running.
+*/
+static void versionFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ sqlite3_result_text(context, sqlite3_version, -1, SQLITE_STATIC);
+}
+
+
+/*
+** EXPERIMENTAL - This is not an official function. The interface may
+** change. This function may disappear. Do not write code that depends
+** on this function.
+**
+** Implementation of the QUOTE() function. This function takes a single
+** argument. If the argument is numeric, the return value is the same as
+** the argument. If the argument is NULL, the return value is the string
+** "NULL". Otherwise, the argument is enclosed in single quotes with
+** single-quote escapes.
+*/
+static void quoteFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+ if( argc<1 ) return;
+ switch( sqlite3_value_type(argv[0]) ){
+ case SQLITE_NULL: {
+ sqlite3_result_text(context, "NULL", 4, SQLITE_STATIC);
+ break;
+ }
+ case SQLITE_INTEGER:
+ case SQLITE_FLOAT: {
+ sqlite3_result_value(context, argv[0]);
+ break;
+ }
+ case SQLITE_BLOB: {
+ static const char hexdigits[] = {
+ '0', '1', '2', '3', '4', '5', '6', '7',
+ '8', '9', 'A', 'B', 'C', 'D', 'E', 'F'
+ };
+ char *zText = 0;
+ int nBlob = sqlite3_value_bytes(argv[0]);
+ char const *zBlob = sqlite3_value_blob(argv[0]);
+
+ zText = (char *)sqliteMalloc((2*nBlob)+4);
+ if( !zText ){
+ sqlite3_result_error(context, "out of memory", -1);
+ }else{
+ int i;
+ for(i=0; i<nBlob; i++){
+ zText[(i*2)+2] = hexdigits[(zBlob[i]>>4)&0x0F];
+ zText[(i*2)+3] = hexdigits[(zBlob[i])&0x0F];
+ }
+ zText[(nBlob*2)+2] = '\'';
+ zText[(nBlob*2)+3] = '\0';
+ zText[0] = 'X';
+ zText[1] = '\'';
+ sqlite3_result_text(context, zText, -1, SQLITE_TRANSIENT);
+ sqliteFree(zText);
+ }
+ break;
+ }
+ case SQLITE_TEXT: {
+ int i,j,n;
+ const unsigned char *zArg = sqlite3_value_text(argv[0]);
+ char *z;
+
+ for(i=n=0; zArg[i]; i++){ if( zArg[i]=='\'' ) n++; }
+ z = sqliteMalloc( i+n+3 );
+ if( z==0 ) return;
+ z[0] = '\'';
+ for(i=0, j=1; zArg[i]; i++){
+ z[j++] = zArg[i];
+ if( zArg[i]=='\'' ){
+ z[j++] = '\'';
+ }
+ }
+ z[j++] = '\'';
+ z[j] = 0;
+ sqlite3_result_text(context, z, j, SQLITE_TRANSIENT);
+ sqliteFree(z);
+ }
+ }
+}
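+
+/* Illustrative examples:
+**
+**     quote('it''s')   ->  'it''s'     (wrapped in quotes, quote doubled)
+**     quote(NULL)      ->  NULL        (the four-character string)
+**     quote(X'4142')   ->  X'4142'     (blob rendered as a hex literal)
+*/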
+
+#ifdef SQLITE_SOUNDEX
+/*
+** Compute the soundex encoding of a word.
+*/
+static void soundexFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+ char zResult[8];
+ const u8 *zIn;
+ int i, j;
+ static const unsigned char iCode[] = {
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 1, 2, 3, 0, 1, 2, 0, 0, 2, 2, 4, 5, 5, 0,
+ 1, 2, 6, 2, 3, 0, 1, 0, 2, 0, 2, 0, 0, 0, 0, 0,
+ 0, 0, 1, 2, 3, 0, 1, 2, 0, 0, 2, 2, 4, 5, 5, 0,
+ 1, 2, 6, 2, 3, 0, 1, 0, 2, 0, 2, 0, 0, 0, 0, 0,
+ };
+ assert( argc==1 );
+ zIn = (u8*)sqlite3_value_text(argv[0]);
+ if( zIn==0 ) zIn = (u8*)"";
+ for(i=0; zIn[i] && !isalpha(zIn[i]); i++){}
+ if( zIn[i] ){
+ u8 prevcode = iCode[zIn[i]&0x7f];
+ zResult[0] = toupper(zIn[i]);
+ for(j=1; j<4 && zIn[i]; i++){
+ int code = iCode[zIn[i]&0x7f];
+ if( code>0 ){
+ if( code!=prevcode ){
+ prevcode = code;
+ zResult[j++] = code + '0';
+ }
+ }else{
+ prevcode = 0;
+ }
+ }
+ while( j<4 ){
+ zResult[j++] = '0';
+ }
+ zResult[j] = 0;
+ sqlite3_result_text(context, zResult, 4, SQLITE_TRANSIENT);
+ }else{
+ sqlite3_result_text(context, "?000", 4, SQLITE_STATIC);
+ }
+}
+#endif
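+
+/* When SQLITE_SOUNDEX is defined, soundex('Robert') returns 'R163', and an
+** argument containing no alphabetic characters returns '?000' as coded above.
+*/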
+
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+/*
+** A function that loads a shared-library extension then returns NULL.
+*/
+static void loadExt(sqlite3_context *context, int argc, sqlite3_value **argv){
+ const char *zFile = (const char *)sqlite3_value_text(argv[0]);
+ const char *zProc = 0;
+ sqlite3 *db = sqlite3_user_data(context);
+ char *zErrMsg = 0;
+
+ if( argc==2 ){
+ zProc = (const char *)sqlite3_value_text(argv[1]);
+ }
+ if( sqlite3_load_extension(db, zFile, zProc, &zErrMsg) ){
+ sqlite3_result_error(context, zErrMsg, -1);
+ sqlite3_free(zErrMsg);
+ }
+}
+#endif
+
+#ifdef SQLITE_TEST
+/*
+** This function generates a string of random characters. Used for
+** generating test data.
+*/
+static void randStr(sqlite3_context *context, int argc, sqlite3_value **argv){
+ static const unsigned char zSrc[] =
+ "abcdefghijklmnopqrstuvwxyz"
+ "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+ "0123456789"
+ ".-!,:*^+=_|?/<> ";
+ int iMin, iMax, n, r, i;
+ unsigned char zBuf[1000];
+ if( argc>=1 ){
+ iMin = sqlite3_value_int(argv[0]);
+ if( iMin<0 ) iMin = 0;
+ if( iMin>=sizeof(zBuf) ) iMin = sizeof(zBuf)-1;
+ }else{
+ iMin = 1;
+ }
+ if( argc>=2 ){
+ iMax = sqlite3_value_int(argv[1]);
+ if( iMax<iMin ) iMax = iMin;
+ if( iMax>=sizeof(zBuf) ) iMax = sizeof(zBuf)-1;
+ }else{
+ iMax = 50;
+ }
+ n = iMin;
+ if( iMax>iMin ){
+ sqlite3Randomness(sizeof(r), &r);
+ r &= 0x7fffffff;
+ n += r%(iMax + 1 - iMin);
+ }
+ assert( n<sizeof(zBuf) );
+ sqlite3Randomness(n, zBuf);
+ for(i=0; i<n; i++){
+ zBuf[i] = zSrc[zBuf[i]%(sizeof(zSrc)-1)];
+ }
+ zBuf[n] = 0;
+ sqlite3_result_text(context, (char*)zBuf, n, SQLITE_TRANSIENT);
+}
+#endif /* SQLITE_TEST */
+
+#ifdef SQLITE_TEST
+/*
+** The following two SQL functions are used to test returning a text
+** result with a destructor. Function 'test_destructor' takes one argument
+** and returns the same argument interpreted as TEXT. A destructor is
+** passed with the sqlite3_result_text() call.
+**
+** SQL function 'test_destructor_count' returns the number of outstanding
+** allocations made by 'test_destructor'.
+**
+** WARNING: Not threadsafe.
+*/
+static int test_destructor_count_var = 0;
+static void destructor(void *p){
+ char *zVal = (char *)p;
+ assert(zVal);
+ zVal--;
+ sqliteFree(zVal);
+ test_destructor_count_var--;
+}
+static void test_destructor(
+ sqlite3_context *pCtx,
+ int nArg,
+ sqlite3_value **argv
+){
+ char *zVal;
+ int len;
+ sqlite3 *db = sqlite3_user_data(pCtx);
+
+ test_destructor_count_var++;
+ assert( nArg==1 );
+ if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return;
+ len = sqlite3ValueBytes(argv[0], ENC(db));
+ zVal = sqliteMalloc(len+3);
+ zVal[len] = 0;
+ zVal[len-1] = 0;
+ assert( zVal );
+ zVal++;
+ memcpy(zVal, sqlite3ValueText(argv[0], ENC(db)), len);
+ if( ENC(db)==SQLITE_UTF8 ){
+ sqlite3_result_text(pCtx, zVal, -1, destructor);
+#ifndef SQLITE_OMIT_UTF16
+ }else if( ENC(db)==SQLITE_UTF16LE ){
+ sqlite3_result_text16le(pCtx, zVal, -1, destructor);
+ }else{
+ sqlite3_result_text16be(pCtx, zVal, -1, destructor);
+#endif /* SQLITE_OMIT_UTF16 */
+ }
+}
+static void test_destructor_count(
+ sqlite3_context *pCtx,
+ int nArg,
+ sqlite3_value **argv
+){
+ sqlite3_result_int(pCtx, test_destructor_count_var);
+}
+#endif /* SQLITE_TEST */
+
+#ifdef SQLITE_TEST
+/*
+** Routines for testing the sqlite3_get_auxdata() and sqlite3_set_auxdata()
+** interface.
+**
+** The test_auxdata() SQL function attempts to register each of its arguments
+** as auxiliary data. If there are no prior registrations of aux data for
+** that argument (meaning the argument is not a constant or this is its first
+** call) then the result for that argument is 0. If there is a prior
+** registration, the result for that argument is 1. The overall result
+** is the individual argument results separated by spaces.
+*/
+static void free_test_auxdata(void *p) {sqliteFree(p);}
+static void test_auxdata(
+ sqlite3_context *pCtx,
+ int nArg,
+ sqlite3_value **argv
+){
+ int i;
+ char *zRet = sqliteMalloc(nArg*2);
+ if( !zRet ) return;
+ for(i=0; i<nArg; i++){
+ char const *z = (char*)sqlite3_value_text(argv[i]);
+ if( z ){
+ char *zAux = sqlite3_get_auxdata(pCtx, i);
+ if( zAux ){
+ zRet[i*2] = '1';
+ if( strcmp(zAux, z) ){
+ sqlite3_result_error(pCtx, "Auxilary data corruption", -1);
+ return;
+ }
+ }else{
+ zRet[i*2] = '0';
+ zAux = sqliteStrDup(z);
+ sqlite3_set_auxdata(pCtx, i, zAux, free_test_auxdata);
+ }
+ zRet[i*2+1] = ' ';
+ }
+ }
+ sqlite3_result_text(pCtx, zRet, 2*nArg-1, free_test_auxdata);
+}
+#endif /* SQLITE_TEST */
+
+#ifdef SQLITE_TEST
+/*
+** A function to test error reporting from user functions. This function
+** returns a copy of its first argument as an error.
+*/
+static void test_error(
+ sqlite3_context *pCtx,
+ int nArg,
+ sqlite3_value **argv
+){
+ sqlite3_result_error(pCtx, (char*)sqlite3_value_text(argv[0]), 0);
+}
+#endif /* SQLITE_TEST */
+
+/*
+** An instance of the following structure holds the context of a
+** sum() or avg() aggregate computation.
+*/
+typedef struct SumCtx SumCtx;
+struct SumCtx {
+ double rSum; /* Floating point sum */
+ i64 iSum; /* Integer sum */
+ i64 cnt; /* Number of elements summed */
+ u8 overflow; /* True if integer overflow seen */
+ u8 approx; /* True if non-integer value was input to the sum */
+};
+
+/*
+** Routines used to compute the sum, average, and total.
+**
+** The SUM() function follows the (broken) SQL standard which means
+** that it returns NULL if it sums over no inputs. TOTAL returns
+** 0.0 in that case. In addition, TOTAL always returns a float where
+** SUM might return an integer if it never encounters a floating point
+** value. TOTAL never fails, but SUM might throw an exception if
+** it overflows an integer.
+*/
+static void sumStep(sqlite3_context *context, int argc, sqlite3_value **argv){
+ SumCtx *p;
+ int type;
+ assert( argc==1 );
+ p = sqlite3_aggregate_context(context, sizeof(*p));
+ type = sqlite3_value_numeric_type(argv[0]);
+ if( p && type!=SQLITE_NULL ){
+ p->cnt++;
+ if( type==SQLITE_INTEGER ){
+ i64 v = sqlite3_value_int64(argv[0]);
+ p->rSum += v;
+ if( (p->approx|p->overflow)==0 ){
+ i64 iNewSum = p->iSum + v;
+ int s1 = p->iSum >> (sizeof(i64)*8-1);
+ int s2 = v >> (sizeof(i64)*8-1);
+ int s3 = iNewSum >> (sizeof(i64)*8-1);
+ p->overflow = (s1&s2&~s3) | (~s1&~s2&s3);
+ p->iSum = iNewSum;
+ }
+ }else{
+ p->rSum += sqlite3_value_double(argv[0]);
+ p->approx = 1;
+ }
+ }
+}
+static void sumFinalize(sqlite3_context *context){
+ SumCtx *p;
+ p = sqlite3_aggregate_context(context, 0);
+ if( p && p->cnt>0 ){
+ if( p->overflow ){
+ sqlite3_result_error(context,"integer overflow",-1);
+ }else if( p->approx ){
+ sqlite3_result_double(context, p->rSum);
+ }else{
+ sqlite3_result_int64(context, p->iSum);
+ }
+ }
+}
+static void avgFinalize(sqlite3_context *context){
+ SumCtx *p;
+ p = sqlite3_aggregate_context(context, 0);
+ if( p && p->cnt>0 ){
+ sqlite3_result_double(context, p->rSum/(double)p->cnt);
+ }
+}
+static void totalFinalize(sqlite3_context *context){
+ SumCtx *p;
+ p = sqlite3_aggregate_context(context, 0);
+ sqlite3_result_double(context, p ? p->rSum : 0.0);
+}
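+
+/* Illustrative behavior, per the comment above sumStep() (empty_table and
+** ints are hypothetical tables):
+**
+**     SELECT sum(x)   FROM empty_table;   -- NULL
+**     SELECT total(x) FROM empty_table;   -- 0.0
+**     SELECT sum(x)   FROM ints;          -- integer result (may overflow)
+**     SELECT total(x) FROM ints;          -- always a floating point result
+*/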
+
+/*
+** The following structure keeps track of state information for the
+** count() aggregate function.
+*/
+typedef struct CountCtx CountCtx;
+struct CountCtx {
+ i64 n;
+};
+
+/*
+** Routines to implement the count() aggregate function.
+*/
+static void countStep(sqlite3_context *context, int argc, sqlite3_value **argv){
+ CountCtx *p;
+ p = sqlite3_aggregate_context(context, sizeof(*p));
+ if( (argc==0 || SQLITE_NULL!=sqlite3_value_type(argv[0])) && p ){
+ p->n++;
+ }
+}
+static void countFinalize(sqlite3_context *context){
+ CountCtx *p;
+ p = sqlite3_aggregate_context(context, 0);
+ sqlite3_result_int64(context, p ? p->n : 0);
+}
+
+/*
+** Routines to implement min() and max() aggregate functions.
+*/
+static void minmaxStep(sqlite3_context *context, int argc, sqlite3_value **argv){
+ Mem *pArg = (Mem *)argv[0];
+ Mem *pBest;
+
+ if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return;
+ pBest = (Mem *)sqlite3_aggregate_context(context, sizeof(*pBest));
+ if( !pBest ) return;
+
+ if( pBest->flags ){
+ int max;
+ int cmp;
+ CollSeq *pColl = sqlite3GetFuncCollSeq(context);
+ /* This step function is used for both the min() and max() aggregates,
+ ** the only difference between the two being that the sense of the
+ ** comparison is inverted. For the max() aggregate, the
+ ** sqlite3_user_data() function returns (void *)-1. For min() it
+ ** returns (void *)db, where db is the sqlite3* database pointer.
+ ** Therefore the next statement sets variable 'max' to 1 for the max()
+ ** aggregate, or 0 for min().
+ */
+ max = ((sqlite3_user_data(context)==(void *)-1)?1:0);
+ cmp = sqlite3MemCompare(pBest, pArg, pColl);
+ if( (max && cmp<0) || (!max && cmp>0) ){
+ sqlite3VdbeMemCopy(pBest, pArg);
+ }
+ }else{
+ sqlite3VdbeMemCopy(pBest, pArg);
+ }
+}
+static void minMaxFinalize(sqlite3_context *context){
+ sqlite3_value *pRes;
+ pRes = (sqlite3_value *)sqlite3_aggregate_context(context, 0);
+ if( pRes ){
+ if( pRes->flags ){
+ sqlite3_result_value(context, pRes);
+ }
+ sqlite3VdbeMemRelease(pRes);
+ }
+}
+
+
+/*
+** This function registers all of the above C functions as SQL
+** functions. This should be the only routine in this file with
+** external linkage.
+*/
+void sqlite3RegisterBuiltinFunctions(sqlite3 *db){
+ static const struct {
+ char *zName;
+ signed char nArg;
+ u8 argType; /* 0: none. 1: db 2: (-1) */
+ u8 eTextRep; /* 1: UTF-16. 0: UTF-8 */
+ u8 needCollSeq;
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value **);
+ } aFuncs[] = {
+ { "min", -1, 0, SQLITE_UTF8, 1, minmaxFunc },
+ { "min", 0, 0, SQLITE_UTF8, 1, 0 },
+ { "max", -1, 2, SQLITE_UTF8, 1, minmaxFunc },
+ { "max", 0, 2, SQLITE_UTF8, 1, 0 },
+ { "typeof", 1, 0, SQLITE_UTF8, 0, typeofFunc },
+ { "length", 1, 0, SQLITE_UTF8, 0, lengthFunc },
+ { "substr", 3, 0, SQLITE_UTF8, 0, substrFunc },
+#ifndef SQLITE_OMIT_UTF16
+ { "substr", 3, 0, SQLITE_UTF16LE, 0, sqlite3utf16Substr },
+#endif
+ { "abs", 1, 0, SQLITE_UTF8, 0, absFunc },
+ { "round", 1, 0, SQLITE_UTF8, 0, roundFunc },
+ { "round", 2, 0, SQLITE_UTF8, 0, roundFunc },
+ { "upper", 1, 0, SQLITE_UTF8, 0, upperFunc },
+ { "lower", 1, 0, SQLITE_UTF8, 0, lowerFunc },
+ { "coalesce", -1, 0, SQLITE_UTF8, 0, ifnullFunc },
+ { "coalesce", 0, 0, SQLITE_UTF8, 0, 0 },
+ { "coalesce", 1, 0, SQLITE_UTF8, 0, 0 },
+ { "ifnull", 2, 0, SQLITE_UTF8, 1, ifnullFunc },
+ { "random", -1, 0, SQLITE_UTF8, 0, randomFunc },
+ { "nullif", 2, 0, SQLITE_UTF8, 1, nullifFunc },
+ { "sqlite_version", 0, 0, SQLITE_UTF8, 0, versionFunc},
+ { "quote", 1, 0, SQLITE_UTF8, 0, quoteFunc },
+ { "last_insert_rowid", 0, 1, SQLITE_UTF8, 0, last_insert_rowid },
+ { "changes", 0, 1, SQLITE_UTF8, 0, changes },
+ { "total_changes", 0, 1, SQLITE_UTF8, 0, total_changes },
+#ifdef SQLITE_SOUNDEX
+ { "soundex", 1, 0, SQLITE_UTF8, 0, soundexFunc},
+#endif
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+ { "load_extension", 1, 1, SQLITE_UTF8, 0, loadExt },
+ { "load_extension", 2, 1, SQLITE_UTF8, 0, loadExt },
+#endif
+#ifdef SQLITE_TEST
+ { "randstr", 2, 0, SQLITE_UTF8, 0, randStr },
+ { "test_destructor", 1, 1, SQLITE_UTF8, 0, test_destructor},
+ { "test_destructor_count", 0, 0, SQLITE_UTF8, 0, test_destructor_count},
+ { "test_auxdata", -1, 0, SQLITE_UTF8, 0, test_auxdata},
+ { "test_error", 1, 0, SQLITE_UTF8, 0, test_error},
+#endif
+ };
+ static const struct {
+ char *zName;
+ signed char nArg;
+ u8 argType;
+ u8 needCollSeq;
+ void (*xStep)(sqlite3_context*,int,sqlite3_value**);
+ void (*xFinalize)(sqlite3_context*);
+ } aAggs[] = {
+ { "min", 1, 0, 1, minmaxStep, minMaxFinalize },
+ { "max", 1, 2, 1, minmaxStep, minMaxFinalize },
+ { "sum", 1, 0, 0, sumStep, sumFinalize },
+ { "total", 1, 0, 0, sumStep, totalFinalize },
+ { "avg", 1, 0, 0, sumStep, avgFinalize },
+ { "count", 0, 0, 0, countStep, countFinalize },
+ { "count", 1, 0, 0, countStep, countFinalize },
+ };
+ int i;
+
+ for(i=0; i<sizeof(aFuncs)/sizeof(aFuncs[0]); i++){
+ void *pArg = 0;
+ switch( aFuncs[i].argType ){
+ case 1: pArg = db; break;
+ case 2: pArg = (void *)(-1); break;
+ }
+ sqlite3CreateFunc(db, aFuncs[i].zName, aFuncs[i].nArg,
+ aFuncs[i].eTextRep, pArg, aFuncs[i].xFunc, 0, 0);
+ if( aFuncs[i].needCollSeq ){
+ FuncDef *pFunc = sqlite3FindFunction(db, aFuncs[i].zName,
+ strlen(aFuncs[i].zName), aFuncs[i].nArg, aFuncs[i].eTextRep, 0);
+ if( pFunc && aFuncs[i].needCollSeq ){
+ pFunc->needCollSeq = 1;
+ }
+ }
+ }
+#ifndef SQLITE_OMIT_ALTERTABLE
+ sqlite3AlterFunctions(db);
+#endif
+#ifndef SQLITE_OMIT_PARSER
+ sqlite3AttachFunctions(db);
+#endif
+ for(i=0; i<sizeof(aAggs)/sizeof(aAggs[0]); i++){
+ void *pArg = 0;
+ switch( aAggs[i].argType ){
+ case 1: pArg = db; break;
+ case 2: pArg = (void *)(-1); break;
+ }
+ sqlite3CreateFunc(db, aAggs[i].zName, aAggs[i].nArg, SQLITE_UTF8,
+ pArg, 0, aAggs[i].xStep, aAggs[i].xFinalize);
+ if( aAggs[i].needCollSeq ){
+ FuncDef *pFunc = sqlite3FindFunction( db, aAggs[i].zName,
+ strlen(aAggs[i].zName), aAggs[i].nArg, SQLITE_UTF8, 0);
+ if( pFunc && aAggs[i].needCollSeq ){
+ pFunc->needCollSeq = 1;
+ }
+ }
+ }
+ sqlite3RegisterDateTimeFunctions(db);
+ sqlite3_overload_function(db, "MATCH", 2);
+#ifdef SQLITE_SSE
+ (void)sqlite3SseFunctions(db);
+#endif
+#ifdef SQLITE_CASE_SENSITIVE_LIKE
+ sqlite3RegisterLikeFunctions(db, 1);
+#else
+ sqlite3RegisterLikeFunctions(db, 0);
+#endif
+}
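+
+/* Application code registers its own SQL functions through the public
+** sqlite3_create_function() API, which mirrors the internal
+** sqlite3CreateFunc() calls above.  A minimal sketch (halfFunc and "half"
+** are hypothetical, not part of this file):
+**
+**     static void halfFunc(sqlite3_context *ctx, int n, sqlite3_value **v){
+**       sqlite3_result_double(ctx, 0.5*sqlite3_value_double(v[0]));
+**     }
+**     sqlite3_create_function(db, "half", 1, SQLITE_UTF8, 0, halfFunc, 0, 0);
+*/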
+
+/*
+** Set the LIKEOPT flag on the 2-argument function with the given name.
+*/
+static void setLikeOptFlag(sqlite3 *db, const char *zName, int flagVal){
+ FuncDef *pDef;
+ pDef = sqlite3FindFunction(db, zName, strlen(zName), 2, SQLITE_UTF8, 0);
+ if( pDef ){
+ pDef->flags = flagVal;
+ }
+}
+
+/*
+** Register the built-in LIKE and GLOB functions. The caseSensitive
+** parameter determines whether or not the LIKE operator is case
+** sensitive. GLOB is always case sensitive.
+*/
+void sqlite3RegisterLikeFunctions(sqlite3 *db, int caseSensitive){
+ struct compareInfo *pInfo;
+ if( caseSensitive ){
+ pInfo = (struct compareInfo*)&likeInfoAlt;
+ }else{
+ pInfo = (struct compareInfo*)&likeInfoNorm;
+ }
+ sqlite3CreateFunc(db, "like", 2, SQLITE_UTF8, pInfo, likeFunc, 0, 0);
+ sqlite3CreateFunc(db, "like", 3, SQLITE_UTF8, pInfo, likeFunc, 0, 0);
+ sqlite3CreateFunc(db, "glob", 2, SQLITE_UTF8,
+ (struct compareInfo*)&globInfo, likeFunc, 0,0);
+ setLikeOptFlag(db, "glob", SQLITE_FUNC_LIKE | SQLITE_FUNC_CASE);
+ setLikeOptFlag(db, "like",
+ caseSensitive ? (SQLITE_FUNC_LIKE | SQLITE_FUNC_CASE) : SQLITE_FUNC_LIKE);
+}
+
+/*
+** pExpr points to an expression which implements a function. If
+** it is appropriate to apply the LIKE optimization to that function
+** then set aWc[0] through aWc[2] to the wildcard characters and
+** return TRUE. If the function is not a LIKE-style function then
+** return FALSE.
+*/
+int sqlite3IsLikeFunction(sqlite3 *db, Expr *pExpr, int *pIsNocase, char *aWc){
+ FuncDef *pDef;
+ if( pExpr->op!=TK_FUNCTION ){
+ return 0;
+ }
+ if( pExpr->pList->nExpr!=2 ){
+ return 0;
+ }
+ pDef = sqlite3FindFunction(db, (char*)pExpr->token.z, pExpr->token.n, 2,
+ SQLITE_UTF8, 0);
+ if( pDef==0 || (pDef->flags & SQLITE_FUNC_LIKE)==0 ){
+ return 0;
+ }
+
+ /* The memcpy() statement assumes that the wildcard characters are
+ ** the first three members of the compareInfo structure. The
+ ** assert() statements that follow verify that assumption.
+ */
+ memcpy(aWc, pDef->pUserData, 3);
+ assert( (char*)&likeInfoAlt == (char*)&likeInfoAlt.matchAll );
+ assert( &((char*)&likeInfoAlt)[1] == (char*)&likeInfoAlt.matchOne );
+ assert( &((char*)&likeInfoAlt)[2] == (char*)&likeInfoAlt.matchSet );
+ *pIsNocase = (pDef->flags & SQLITE_FUNC_CASE)==0;
+ return 1;
+}
Added: freeswitch/trunk/libs/sqlite/src/hash.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/hash.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,394 @@
+/*
+** 2001 September 22
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This is the implementation of generic hash-tables
+** used in SQLite.
+**
+** $Id: hash.c,v 1.18 2006/02/14 10:48:39 danielk1977 Exp $
+*/
+#include "sqliteInt.h"
+#include <assert.h>
+
+/* Turn bulk memory into a hash table object by initializing the
+** fields of the Hash structure.
+**
+** "pNew" is a pointer to the hash table that is to be initialized.
+** keyClass is one of the constants SQLITE_HASH_INT, SQLITE_HASH_POINTER,
+** SQLITE_HASH_BINARY, or SQLITE_HASH_STRING. The value of keyClass
+** determines what kind of key the hash table will use. "copyKey" is
+** true if the hash table should make its own private copy of keys and
+** false if it should just use the supplied pointer. CopyKey only makes
+** sense for SQLITE_HASH_STRING and SQLITE_HASH_BINARY and is ignored
+** for other key classes.
+*/
+void sqlite3HashInit(Hash *pNew, int keyClass, int copyKey){
+ assert( pNew!=0 );
+ assert( keyClass>=SQLITE_HASH_STRING && keyClass<=SQLITE_HASH_BINARY );
+ pNew->keyClass = keyClass;
+#if 0
+ if( keyClass==SQLITE_HASH_POINTER || keyClass==SQLITE_HASH_INT ) copyKey = 0;
+#endif
+ pNew->copyKey = copyKey;
+ pNew->first = 0;
+ pNew->count = 0;
+ pNew->htsize = 0;
+ pNew->ht = 0;
+ pNew->xMalloc = sqlite3MallocX;
+ pNew->xFree = sqlite3FreeX;
+}
+
+/* Remove all entries from a hash table. Reclaim all memory.
+** Call this routine to delete a hash table or to reset a hash table
+** to the empty state.
+*/
+void sqlite3HashClear(Hash *pH){
+ HashElem *elem; /* For looping over all elements of the table */
+
+ assert( pH!=0 );
+ elem = pH->first;
+ pH->first = 0;
+ if( pH->ht ) pH->xFree(pH->ht);
+ pH->ht = 0;
+ pH->htsize = 0;
+ while( elem ){
+ HashElem *next_elem = elem->next;
+ if( pH->copyKey && elem->pKey ){
+ pH->xFree(elem->pKey);
+ }
+ pH->xFree(elem);
+ elem = next_elem;
+ }
+ pH->count = 0;
+}
+
+#if 0 /* NOT USED */
+/*
+** Hash and comparison functions when the mode is SQLITE_HASH_INT
+*/
+static int intHash(const void *pKey, int nKey){
+ return nKey ^ (nKey<<8) ^ (nKey>>8);
+}
+static int intCompare(const void *pKey1, int n1, const void *pKey2, int n2){
+ return n2 - n1;
+}
+#endif
+
+#if 0 /* NOT USED */
+/*
+** Hash and comparison functions when the mode is SQLITE_HASH_POINTER
+*/
+static int ptrHash(const void *pKey, int nKey){
+ uptr x = Addr(pKey);
+ return x ^ (x<<8) ^ (x>>8);
+}
+static int ptrCompare(const void *pKey1, int n1, const void *pKey2, int n2){
+ if( pKey1==pKey2 ) return 0;
+ if( pKey1<pKey2 ) return -1;
+ return 1;
+}
+#endif
+
+/*
+** Hash and comparison functions when the mode is SQLITE_HASH_STRING
+*/
+static int strHash(const void *pKey, int nKey){
+ const char *z = (const char *)pKey;
+ int h = 0;
+ if( nKey<=0 ) nKey = strlen(z);
+ while( nKey > 0 ){
+ h = (h<<3) ^ h ^ sqlite3UpperToLower[(unsigned char)*z++];
+ nKey--;
+ }
+ return h & 0x7fffffff;
+}
+static int strCompare(const void *pKey1, int n1, const void *pKey2, int n2){
+ if( n1!=n2 ) return 1;
+ return sqlite3StrNICmp((const char*)pKey1,(const char*)pKey2,n1);
+}
+
+/*
+** Hash and comparison functions when the mode is SQLITE_HASH_BINARY
+*/
+static int binHash(const void *pKey, int nKey){
+ int h = 0;
+ const char *z = (const char *)pKey;
+ while( nKey-- > 0 ){
+ h = (h<<3) ^ h ^ *(z++);
+ }
+ return h & 0x7fffffff;
+}
+static int binCompare(const void *pKey1, int n1, const void *pKey2, int n2){
+ if( n1!=n2 ) return 1;
+ return memcmp(pKey1,pKey2,n1);
+}
+
+/*
+** Return a pointer to the appropriate hash function given the key class.
+**
+** The C syntax in this function definition may be unfamiliar to some
+** programmers, so we provide the following additional explanation:
+**
+** The name of the function is "hashFunction". The function takes a
+** single parameter "keyClass". The return value of hashFunction()
+** is a pointer to another function. Specifically, the return value
+** of hashFunction() is a pointer to a function that takes two parameters
+** with types "const void*" and "int" and returns an "int".
+*/
+static int (*hashFunction(int keyClass))(const void*,int){
+#if 0 /* HASH_INT and HASH_POINTER are never used */
+ switch( keyClass ){
+ case SQLITE_HASH_INT: return &intHash;
+ case SQLITE_HASH_POINTER: return &ptrHash;
+ case SQLITE_HASH_STRING: return &strHash;
+ case SQLITE_HASH_BINARY: return &binHash;
+ default: break;
+ }
+ return 0;
+#else
+ if( keyClass==SQLITE_HASH_STRING ){
+ return &strHash;
+ }else{
+ assert( keyClass==SQLITE_HASH_BINARY );
+ return &binHash;
+ }
+#endif
+}
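+
+/* The declaration above is equivalent to the following typedef form, which
+** some readers may find easier to parse (HashFunc is only an illustrative
+** name):
+**
+**     typedef int (*HashFunc)(const void*, int);
+**     static HashFunc hashFunction(int keyClass);
+*/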
+
+/*
+** Return a pointer to the appropriate comparison function given the key class.
+**
+** For help in interpreting the obscure C code in the function definition,
+** see the header comment on the previous function.
+*/
+static int (*compareFunction(int keyClass))(const void*,int,const void*,int){
+#if 0 /* HASH_INT and HASH_POINTER are never used */
+ switch( keyClass ){
+ case SQLITE_HASH_INT: return &intCompare;
+ case SQLITE_HASH_POINTER: return &ptrCompare;
+ case SQLITE_HASH_STRING: return &strCompare;
+ case SQLITE_HASH_BINARY: return &binCompare;
+ default: break;
+ }
+ return 0;
+#else
+ if( keyClass==SQLITE_HASH_STRING ){
+ return &strCompare;
+ }else{
+ assert( keyClass==SQLITE_HASH_BINARY );
+ return &binCompare;
+ }
+#endif
+}
+
+/* Link an element into the hash table
+*/
+static void insertElement(
+ Hash *pH, /* The complete hash table */
+ struct _ht *pEntry, /* The entry into which pNew is inserted */
+ HashElem *pNew /* The element to be inserted */
+){
+ HashElem *pHead; /* First element already in pEntry */
+ pHead = pEntry->chain;
+ if( pHead ){
+ pNew->next = pHead;
+ pNew->prev = pHead->prev;
+ if( pHead->prev ){ pHead->prev->next = pNew; }
+ else { pH->first = pNew; }
+ pHead->prev = pNew;
+ }else{
+ pNew->next = pH->first;
+ if( pH->first ){ pH->first->prev = pNew; }
+ pNew->prev = 0;
+ pH->first = pNew;
+ }
+ pEntry->count++;
+ pEntry->chain = pNew;
+}
+
+
+/* Resize the hash table so that it contains "new_size" buckets.
+** "new_size" must be a power of 2. The hash table might fail
+** to resize if sqliteMalloc() fails.
+*/
+static void rehash(Hash *pH, int new_size){
+ struct _ht *new_ht; /* The new hash table */
+ HashElem *elem, *next_elem; /* For looping over existing elements */
+ int (*xHash)(const void*,int); /* The hash function */
+
+ assert( (new_size & (new_size-1))==0 );
+ new_ht = (struct _ht *)pH->xMalloc( new_size*sizeof(struct _ht) );
+ if( new_ht==0 ) return;
+ if( pH->ht ) pH->xFree(pH->ht);
+ pH->ht = new_ht;
+ pH->htsize = new_size;
+ xHash = hashFunction(pH->keyClass);
+ for(elem=pH->first, pH->first=0; elem; elem = next_elem){
+ int h = (*xHash)(elem->pKey, elem->nKey) & (new_size-1);
+ next_elem = elem->next;
+ insertElement(pH, &new_ht[h], elem);
+ }
+}
+
+/* This function (for internal use only) locates an element in a
+** hash table that matches the given key. The hash for this key has
+** already been computed and is passed as the 4th parameter.
+*/
+static HashElem *findElementGivenHash(
+ const Hash *pH, /* The pH to be searched */
+ const void *pKey, /* The key we are searching for */
+ int nKey,
+ int h /* The hash for this key. */
+){
+ HashElem *elem; /* Used to loop thru the element list */
+ int count; /* Number of elements left to test */
+ int (*xCompare)(const void*,int,const void*,int); /* comparison function */
+
+ if( pH->ht ){
+ struct _ht *pEntry = &pH->ht[h];
+ elem = pEntry->chain;
+ count = pEntry->count;
+ xCompare = compareFunction(pH->keyClass);
+ while( count-- && elem ){
+ if( (*xCompare)(elem->pKey,elem->nKey,pKey,nKey)==0 ){
+ return elem;
+ }
+ elem = elem->next;
+ }
+ }
+ return 0;
+}
+
+/* Remove a single entry from the hash table given a pointer to that
+** element and a hash on the element's key.
+*/
+static void removeElementGivenHash(
+ Hash *pH, /* The pH containing "elem" */
+ HashElem* elem, /* The element to be removed from the pH */
+ int h /* Hash value for the element */
+){
+ struct _ht *pEntry;
+ if( elem->prev ){
+ elem->prev->next = elem->next;
+ }else{
+ pH->first = elem->next;
+ }
+ if( elem->next ){
+ elem->next->prev = elem->prev;
+ }
+ pEntry = &pH->ht[h];
+ if( pEntry->chain==elem ){
+ pEntry->chain = elem->next;
+ }
+ pEntry->count--;
+ if( pEntry->count<=0 ){
+ pEntry->chain = 0;
+ }
+ if( pH->copyKey && elem->pKey ){
+ pH->xFree(elem->pKey);
+ }
+ pH->xFree( elem );
+ pH->count--;
+ if( pH->count<=0 ){
+ assert( pH->first==0 );
+ assert( pH->count==0 );
+ sqlite3HashClear(pH);
+ }
+}
+
+/* Attempt to locate an element of the hash table pH with a key
+** that matches pKey,nKey. Return the data for this element if it is
+** found, or NULL if there is no match.
+*/
+void *sqlite3HashFind(const Hash *pH, const void *pKey, int nKey){
+ int h; /* A hash on key */
+ HashElem *elem; /* The element that matches key */
+ int (*xHash)(const void*,int); /* The hash function */
+
+ if( pH==0 || pH->ht==0 ) return 0;
+ xHash = hashFunction(pH->keyClass);
+ assert( xHash!=0 );
+ h = (*xHash)(pKey,nKey);
+ assert( (pH->htsize & (pH->htsize-1))==0 );
+ elem = findElementGivenHash(pH,pKey,nKey, h & (pH->htsize-1));
+ return elem ? elem->data : 0;
+}
+
+/* Insert an element into the hash table pH. The key is pKey,nKey
+** and the data is "data".
+**
+** If no element exists with a matching key, then a new
+** element is created. A copy of the key is made if the copyKey
+** flag is set. NULL is returned.
+**
+** If another element already exists with the same key, then the
+** new data replaces the old data and the old data is returned.
+** The key is not copied in this instance. If a malloc fails, then
+** the new data is returned and the hash table is unchanged.
+**
+** If the "data" parameter to this function is NULL, then the
+** element corresponding to "key" is removed from the hash table.
+*/
+void *sqlite3HashInsert(Hash *pH, const void *pKey, int nKey, void *data){
+ int hraw; /* Raw hash value of the key */
+ int h; /* the hash of the key modulo hash table size */
+ HashElem *elem; /* Used to loop thru the element list */
+ HashElem *new_elem; /* New element added to the pH */
+ int (*xHash)(const void*,int); /* The hash function */
+
+ assert( pH!=0 );
+ xHash = hashFunction(pH->keyClass);
+ assert( xHash!=0 );
+ hraw = (*xHash)(pKey, nKey);
+ assert( (pH->htsize & (pH->htsize-1))==0 );
+ h = hraw & (pH->htsize-1);
+ elem = findElementGivenHash(pH,pKey,nKey,h);
+ if( elem ){
+ void *old_data = elem->data;
+ if( data==0 ){
+ removeElementGivenHash(pH,elem,h);
+ }else{
+ elem->data = data;
+ }
+ return old_data;
+ }
+ if( data==0 ) return 0;
+ new_elem = (HashElem*)pH->xMalloc( sizeof(HashElem) );
+ if( new_elem==0 ) return data;
+ if( pH->copyKey && pKey!=0 ){
+ new_elem->pKey = pH->xMalloc( nKey );
+ if( new_elem->pKey==0 ){
+ pH->xFree(new_elem);
+ return data;
+ }
+ memcpy((void*)new_elem->pKey, pKey, nKey);
+ }else{
+ new_elem->pKey = (void*)pKey;
+ }
+ new_elem->nKey = nKey;
+ pH->count++;
+ if( pH->htsize==0 ){
+ rehash(pH,8);
+ if( pH->htsize==0 ){
+ pH->count = 0;
+ pH->xFree(new_elem);
+ return data;
+ }
+ }
+ if( pH->count > pH->htsize ){
+ rehash(pH,pH->htsize*2);
+ }
+ assert( pH->htsize>0 );
+ assert( (pH->htsize & (pH->htsize-1))==0 );
+ h = hraw & (pH->htsize-1);
+ insertElement(pH, &pH->ht[h], new_elem);
+ new_elem->data = data;
+ return 0;
+}
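+
+/* A sketch of typical usage (MyData and pData are hypothetical):
+**
+**     Hash h;
+**     MyData *pData = ...;
+**     sqlite3HashInit(&h, SQLITE_HASH_STRING, 1);   copyKey=1: keys are copied
+**     sqlite3HashInsert(&h, "alpha", 6, pData);     nKey counts the terminator
+**     pData = sqlite3HashFind(&h, "ALPHA", 6);      string keys ignore case
+**     sqlite3HashInsert(&h, "alpha", 6, 0);         NULL data deletes the entry
+**     sqlite3HashClear(&h);                         release all memory
+*/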
Added: freeswitch/trunk/libs/sqlite/src/hash.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/hash.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,111 @@
+/*
+** 2001 September 22
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This is the header file for the generic hash-table implementation
+** used in SQLite.
+**
+** $Id: hash.h,v 1.9 2006/02/14 10:48:39 danielk1977 Exp $
+*/
+#ifndef _SQLITE_HASH_H_
+#define _SQLITE_HASH_H_
+
+/* Forward declarations of structures. */
+typedef struct Hash Hash;
+typedef struct HashElem HashElem;
+
+/* A complete hash table is an instance of the following structure.
+** The internals of this structure are intended to be opaque -- client
+** code should not attempt to access or modify the fields of this structure
+** directly. Change this structure only by using the routines below.
+** However, many of the "procedures" and "functions" for modifying and
+** accessing this structure are really macros, so we can't really make
+** this structure opaque.
+*/
+struct Hash {
+ char keyClass; /* SQLITE_HASH_INT, _POINTER, _STRING, _BINARY */
+ char copyKey; /* True if copy of key made on insert */
+ int count; /* Number of entries in this table */
+ HashElem *first; /* The first element of the array */
+ void *(*xMalloc)(int); /* malloc() function to use */
+ void (*xFree)(void *); /* free() function to use */
+ int htsize; /* Number of buckets in the hash table */
+ struct _ht { /* the hash table */
+ int count; /* Number of entries with this hash */
+ HashElem *chain; /* Pointer to first entry with this hash */
+ } *ht;
+};
+
+/* Each element in the hash table is an instance of the following
+** structure. All elements are stored on a single doubly-linked list.
+**
+** Again, this structure is intended to be opaque, but it can't really
+** be opaque because it is used by macros.
+*/
+struct HashElem {
+ HashElem *next, *prev; /* Next and previous elements in the table */
+ void *data; /* Data associated with this element */
+ void *pKey; int nKey; /* Key associated with this element */
+};
+
+/*
+** There are 4 different modes of operation for a hash table:
+**
+** SQLITE_HASH_INT nKey is used as the key and pKey is ignored.
+**
+** SQLITE_HASH_POINTER pKey is used as the key and nKey is ignored.
+**
+** SQLITE_HASH_STRING pKey points to a string that is nKey bytes long
+** (including the null-terminator, if any). Case
+** is ignored in comparisons.
+**
+** SQLITE_HASH_BINARY pKey points to binary data nKey bytes long.
+** memcmp() is used to compare keys.
+**
+** A copy of the key is made for SQLITE_HASH_STRING and SQLITE_HASH_BINARY
+** if the copyKey parameter to HashInit is 1.
+*/
+/* #define SQLITE_HASH_INT 1 // NOT USED */
+/* #define SQLITE_HASH_POINTER 2 // NOT USED */
+#define SQLITE_HASH_STRING 3
+#define SQLITE_HASH_BINARY 4
+
+/*
+** Access routines. To delete, insert a NULL pointer.
+*/
+void sqlite3HashInit(Hash*, int keytype, int copyKey);
+void *sqlite3HashInsert(Hash*, const void *pKey, int nKey, void *pData);
+void *sqlite3HashFind(const Hash*, const void *pKey, int nKey);
+void sqlite3HashClear(Hash*);
+
+/*
+** Macros for looping over all elements of a hash table. The idiom is
+** like this:
+**
+** Hash h;
+** HashElem *p;
+** ...
+** for(p=sqliteHashFirst(&h); p; p=sqliteHashNext(p)){
+** SomeStructure *pData = sqliteHashData(p);
+** // do something with pData
+** }
+*/
+#define sqliteHashFirst(H) ((H)->first)
+#define sqliteHashNext(E) ((E)->next)
+#define sqliteHashData(E) ((E)->data)
+#define sqliteHashKey(E) ((E)->pKey)
+#define sqliteHashKeysize(E) ((E)->nKey)
+
+/*
+** Number of entries in a hash table
+*/
+#define sqliteHashCount(H) ((H)->count)
+
+#endif /* _SQLITE_HASH_H_ */
Added: freeswitch/trunk/libs/sqlite/src/insert.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/insert.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1142 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains C code routines that are called by the parser
+** to handle INSERT statements in SQLite.
+**
+** $Id: insert.c,v 1.172 2006/08/29 18:46:14 drh Exp $
+*/
+#include "sqliteInt.h"
+
+/*
+** Set P3 of the most recently inserted opcode to a column affinity
+** string for index pIdx. A column affinity string has one character
+** for each column in the index, according to the affinity of the column:
+**
+** Character Column affinity
+** ------------------------------
+** 'a' TEXT
+** 'b' NONE
+** 'c' NUMERIC
+** 'd' INTEGER
+** 'e' REAL
+*/
+void sqlite3IndexAffinityStr(Vdbe *v, Index *pIdx){
+ if( !pIdx->zColAff ){
+ /* The first time a column affinity string for a particular index is
+ ** required, it is allocated and populated here. It is then stored as
+ ** a member of the Index structure for subsequent use.
+ **
+ ** The column affinity string will eventually be deleted by
+ ** sqliteDeleteIndex() when the Index structure itself is cleaned
+ ** up.
+ */
+ int n;
+ Table *pTab = pIdx->pTable;
+ pIdx->zColAff = (char *)sqliteMalloc(pIdx->nColumn+1);
+ if( !pIdx->zColAff ){
+ return;
+ }
+ for(n=0; n<pIdx->nColumn; n++){
+ pIdx->zColAff[n] = pTab->aCol[pIdx->aiColumn[n]].affinity;
+ }
+ pIdx->zColAff[pIdx->nColumn] = '\0';
+ }
+
+ sqlite3VdbeChangeP3(v, -1, pIdx->zColAff, 0);
+}
+
+/*
+** Set P3 of the most recently inserted opcode to a column affinity
+** string for table pTab. A column affinity string has one character
+** for each column in the table, according to the affinity of the
+** column:
+**
+** Character Column affinity
+** ------------------------------
+** 'a' TEXT
+** 'b' NONE
+** 'c' NUMERIC
+** 'd' INTEGER
+** 'e' REAL
+*/
+void sqlite3TableAffinityStr(Vdbe *v, Table *pTab){
+ /* The first time a column affinity string for a particular table
+ ** is required, it is allocated and populated here. It is then
+ ** stored as a member of the Table structure for subsequent use.
+ **
+ ** The column affinity string will eventually be deleted by
+ ** sqlite3DeleteTable() when the Table structure itself is cleaned up.
+ */
+ if( !pTab->zColAff ){
+ char *zColAff;
+ int i;
+
+ zColAff = (char *)sqliteMalloc(pTab->nCol+1);
+ if( !zColAff ){
+ return;
+ }
+
+ for(i=0; i<pTab->nCol; i++){
+ zColAff[i] = pTab->aCol[i].affinity;
+ }
+ zColAff[pTab->nCol] = '\0';
+
+ pTab->zColAff = zColAff;
+ }
+
+ sqlite3VdbeChangeP3(v, -1, pTab->zColAff, 0);
+}
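+
+/* For example, assuming the usual affinity assignments, the hypothetical
+** table created by CREATE TABLE t1(a TEXT, b INTEGER, c REAL, d BLOB) gets
+** the column affinity string "adeb": TEXT->'a', INTEGER->'d', REAL->'e',
+** and the BLOB column, having no affinity, maps to 'b' (NONE).
+*/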
+
+/*
+** Return non-zero if SELECT statement p opens the table with rootpage
+** iTab in database iDb. This is used to see if a statement of the form
+** "INSERT INTO <iDb, iTab> SELECT ..." can run without using temporary
+** table for the results of the SELECT.
+**
+** No checking is done for sub-selects that are part of expressions.
+*/
+static int selectReadsTable(Select *p, Schema *pSchema, int iTab){
+ int i;
+ struct SrcList_item *pItem;
+ if( p->pSrc==0 ) return 0;
+ for(i=0, pItem=p->pSrc->a; i<p->pSrc->nSrc; i++, pItem++){
+ if( pItem->pSelect ){
+ if( selectReadsTable(pItem->pSelect, pSchema, iTab) ) return 1;
+ }else{
+ if( pItem->pTab->pSchema==pSchema && pItem->pTab->tnum==iTab ) return 1;
+ }
+ }
+ return 0;
+}
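+
+/* For example (t1 and t2 are hypothetical tables):
+**
+**     INSERT INTO t1 SELECT * FROM t2;   -- t1 is not read; no temp table
+**     INSERT INTO t1 SELECT * FROM t1;   -- t1 is read; a temp table is used
+*/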
+
+/*
+** This routine is called to handle SQL of the following forms:
+**
+** insert into TABLE (IDLIST) values(EXPRLIST)
+** insert into TABLE (IDLIST) select
+**
+** The IDLIST following the table name is always optional. If omitted,
+** then a list of all columns for the table is substituted. The IDLIST
+** appears in the pColumn parameter. pColumn is NULL if IDLIST is omitted.
+**
+** The pList parameter holds EXPRLIST in the first form of the INSERT
+** statement above, and pSelect is NULL. For the second form, pList is
+** NULL and pSelect is a pointer to the select statement used to generate
+** data for the insert.
+**
+** The code generated follows one of three templates. For a simple
+** select with data coming from a VALUES clause, the code executes
+** once straight down through. The template looks like this:
+**
+** open write cursor to <table> and its indices
+** puts VALUES clause expressions onto the stack
+** write the resulting record into <table>
+** cleanup
+**
+** If the statement is of the form
+**
+** INSERT INTO <table> SELECT ...
+**
+** And the SELECT clause does not read from <table> at any time, then
+** the generated code follows this template:
+**
+** goto B
+** A: setup for the SELECT
+** loop over the tables in the SELECT
+** gosub C
+** end loop
+** cleanup after the SELECT
+** goto D
+** B: open write cursor to <table> and its indices
+** goto A
+** C: insert the select result into <table>
+** return
+** D: cleanup
+**
+** The third template is used if the insert statement takes its
+** values from a SELECT but the data is being inserted into a table
+** that is also read as part of the SELECT. In the third form,
+** we have to use an intermediate table to store the results of
+** the select. The template is like this:
+**
+** goto B
+** A: setup for the SELECT
+** loop over the tables in the SELECT
+** gosub C
+** end loop
+** cleanup after the SELECT
+** goto D
+** C: insert the select result into the intermediate table
+** return
+** B: open a cursor to an intermediate table
+** goto A
+** D: open write cursor to <table> and its indices
+** loop over the intermediate table
+** transfer values from the intermediate table into <table>
+** end the loop
+** cleanup
+*/
+void sqlite3Insert(
+ Parse *pParse, /* Parser context */
+ SrcList *pTabList, /* Name of table into which we are inserting */
+ ExprList *pList, /* List of values to be inserted */
+ Select *pSelect, /* A SELECT statement to use as the data source */
+ IdList *pColumn, /* Column names corresponding to IDLIST. */
+ int onError /* How to handle constraint errors */
+){
+ Table *pTab; /* The table to insert into */
+ char *zTab; /* Name of the table into which we are inserting */
+ const char *zDb; /* Name of the database holding this table */
+ int i, j, idx; /* Loop counters */
+ Vdbe *v; /* Generate code into this virtual machine */
+ Index *pIdx; /* For looping over indices of the table */
+ int nColumn; /* Number of columns in the data */
+ int base = 0; /* VDBE Cursor number for pTab */
+ int iCont=0,iBreak=0; /* Beginning and end of the loop over srcTab */
+ sqlite3 *db; /* The main database structure */
+ int keyColumn = -1; /* Column that is the INTEGER PRIMARY KEY */
+ int endOfLoop; /* Label for the end of the insertion loop */
+ int useTempTable = 0; /* Store SELECT results in intermediate table */
+ int srcTab = 0; /* Data comes from this temporary cursor if >=0 */
+ int iSelectLoop = 0; /* Address of code that implements the SELECT */
+ int iCleanup = 0; /* Address of the cleanup code */
+ int iInsertBlock = 0; /* Address of the subroutine used to insert data */
+ int iCntMem = 0; /* Memory cell used for the row counter */
+ int newIdx = -1; /* Cursor for the NEW table */
+ Db *pDb; /* The database containing table being inserted into */
+ int counterMem = 0; /* Memory cell holding AUTOINCREMENT counter */
+ int iDb;
+
+#ifndef SQLITE_OMIT_TRIGGER
+ int isView; /* True if attempting to insert into a view */
+ int triggers_exist = 0; /* True if there are FOR EACH ROW triggers */
+#endif
+
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ int counterRowid = 0; /* Memory cell holding rowid of autoinc counter */
+#endif
+
+ if( pParse->nErr || sqlite3MallocFailed() ){
+ goto insert_cleanup;
+ }
+ db = pParse->db;
+
+ /* Locate the table into which we will be inserting new information.
+ */
+ assert( pTabList->nSrc==1 );
+ zTab = pTabList->a[0].zName;
+ if( zTab==0 ) goto insert_cleanup;
+ pTab = sqlite3SrcListLookup(pParse, pTabList);
+ if( pTab==0 ){
+ goto insert_cleanup;
+ }
+ iDb = sqlite3SchemaToIndex(db, pTab->pSchema);
+ assert( iDb<db->nDb );
+ pDb = &db->aDb[iDb];
+ zDb = pDb->zName;
+ if( sqlite3AuthCheck(pParse, SQLITE_INSERT, pTab->zName, 0, zDb) ){
+ goto insert_cleanup;
+ }
+
+ /* Figure out if we have any triggers and if the table being
+ ** inserted into is a view
+ */
+#ifndef SQLITE_OMIT_TRIGGER
+ triggers_exist = sqlite3TriggersExist(pParse, pTab, TK_INSERT, 0);
+ isView = pTab->pSelect!=0;
+#else
+# define triggers_exist 0
+# define isView 0
+#endif
+#ifdef SQLITE_OMIT_VIEW
+# undef isView
+# define isView 0
+#endif
+
+ /* Ensure that:
+ * (a) the table is not read-only,
+ * (b) if it is a view, that ON INSERT triggers exist
+ */
+ if( sqlite3IsReadOnly(pParse, pTab, triggers_exist) ){
+ goto insert_cleanup;
+ }
+ assert( pTab!=0 );
+
+ /* If pTab is really a view, make sure it has been initialized.
+ ** ViewGetColumnNames() is a no-op if pTab is not a view (or virtual
+ ** module table).
+ */
+ if( sqlite3ViewGetColumnNames(pParse, pTab) ){
+ goto insert_cleanup;
+ }
+
+ /* Allocate a VDBE
+ */
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) goto insert_cleanup;
+ if( pParse->nested==0 ) sqlite3VdbeCountChanges(v);
+ sqlite3BeginWriteOperation(pParse, pSelect || triggers_exist, iDb);
+
+ /* if there are row triggers, allocate a temp table for new.* references. */
+ if( triggers_exist ){
+ newIdx = pParse->nTab++;
+ }
+
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ /* If this is an AUTOINCREMENT table, look up the sequence number in the
+ ** sqlite_sequence table and store it in memory cell counterMem. Also
+ ** remember the rowid of the sqlite_sequence table entry in memory cell
+ ** counterRowid.
+ */
+ if( pTab->autoInc ){
+ int iCur = pParse->nTab;
+ int addr = sqlite3VdbeCurrentAddr(v);
+ counterRowid = pParse->nMem++;
+ counterMem = pParse->nMem++;
+ sqlite3OpenTable(pParse, iCur, iDb, pDb->pSchema->pSeqTab, OP_OpenRead);
+ sqlite3VdbeAddOp(v, OP_Rewind, iCur, addr+13);
+ sqlite3VdbeAddOp(v, OP_Column, iCur, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, pTab->zName, 0);
+ sqlite3VdbeAddOp(v, OP_Ne, 0x100, addr+12);
+ sqlite3VdbeAddOp(v, OP_Rowid, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_MemStore, counterRowid, 1);
+ sqlite3VdbeAddOp(v, OP_Column, iCur, 1);
+ sqlite3VdbeAddOp(v, OP_MemStore, counterMem, 1);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, addr+13);
+ sqlite3VdbeAddOp(v, OP_Next, iCur, addr+4);
+ sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
+ }
+#endif /* SQLITE_OMIT_AUTOINCREMENT */
+
+ /* Figure out how many columns of data are supplied. If the data
+ ** is coming from a SELECT statement, then this step also generates
+ ** all the code to implement the SELECT statement and invoke a subroutine
+ ** to process each row of the result. (Template 2.) If the SELECT
+ ** statement uses the table that is being inserted into, then the
+ ** subroutine is also coded here. That subroutine stores the SELECT
+ ** results in a temporary table. (Template 3.)
+ */
+ if( pSelect ){
+ /* Data is coming from a SELECT. Generate code to implement that SELECT
+ */
+ int rc, iInitCode;
+ iInitCode = sqlite3VdbeAddOp(v, OP_Goto, 0, 0);
+ iSelectLoop = sqlite3VdbeCurrentAddr(v);
+ iInsertBlock = sqlite3VdbeMakeLabel(v);
+
+ /* Resolve the expressions in the SELECT statement and execute it. */
+ rc = sqlite3Select(pParse, pSelect, SRT_Subroutine, iInsertBlock,0,0,0,0);
+ if( rc || pParse->nErr || sqlite3MallocFailed() ){
+ goto insert_cleanup;
+ }
+
+ iCleanup = sqlite3VdbeMakeLabel(v);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, iCleanup);
+ assert( pSelect->pEList );
+ nColumn = pSelect->pEList->nExpr;
+
+ /* Set useTempTable to TRUE if the result of the SELECT statement
+ ** should be written into a temporary table. Set to FALSE if each
+ ** row of the SELECT can be written directly into the result table.
+ **
+ ** A temp table must be used if the table being updated is also one
+ ** of the tables being read by the SELECT statement. Also use a
+ ** temp table in the case of row triggers.
+ */
+ if( triggers_exist || selectReadsTable(pSelect,pTab->pSchema,pTab->tnum) ){
+ useTempTable = 1;
+ }
+
+ if( useTempTable ){
+ /* Generate the subroutine that SELECT calls to process each row of
+ ** the result. Store the result in a temporary table
+ */
+ srcTab = pParse->nTab++;
+ sqlite3VdbeResolveLabel(v, iInsertBlock);
+ sqlite3VdbeAddOp(v, OP_MakeRecord, nColumn, 0);
+ sqlite3VdbeAddOp(v, OP_NewRowid, srcTab, 0);
+ sqlite3VdbeAddOp(v, OP_Pull, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Insert, srcTab, 0);
+ sqlite3VdbeAddOp(v, OP_Return, 0, 0);
+
+ /* The following code runs first because the GOTO at the very top
+ ** of the program jumps to it. Create the temporary table, then jump
+ ** back up and execute the SELECT code above.
+ */
+ sqlite3VdbeJumpHere(v, iInitCode);
+ sqlite3VdbeAddOp(v, OP_OpenEphemeral, srcTab, 0);
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, srcTab, nColumn);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, iSelectLoop);
+ sqlite3VdbeResolveLabel(v, iCleanup);
+ }else{
+ sqlite3VdbeJumpHere(v, iInitCode);
+ }
+ }else{
+ /* This is the case if the data for the INSERT is coming from a VALUES
+ ** clause
+ */
+ NameContext sNC;
+ memset(&sNC, 0, sizeof(sNC));
+ sNC.pParse = pParse;
+ srcTab = -1;
+ useTempTable = 0;
+ nColumn = pList ? pList->nExpr : 0;
+ for(i=0; i<nColumn; i++){
+ if( sqlite3ExprResolveNames(&sNC, pList->a[i].pExpr) ){
+ goto insert_cleanup;
+ }
+ }
+ }
+
+ /* Make sure the number of columns in the source data matches the number
+ ** of columns to be inserted into the table.
+ */
+ if( pColumn==0 && nColumn && nColumn!=pTab->nCol ){
+ sqlite3ErrorMsg(pParse,
+ "table %S has %d columns but %d values were supplied",
+ pTabList, 0, pTab->nCol, nColumn);
+ goto insert_cleanup;
+ }
+ if( pColumn!=0 && nColumn!=pColumn->nId ){
+ sqlite3ErrorMsg(pParse, "%d values for %d columns", nColumn, pColumn->nId);
+ goto insert_cleanup;
+ }
+
+ /* If the INSERT statement included an IDLIST term, then make sure
+ ** all elements of the IDLIST really are columns of the table and
+ ** remember the column indices.
+ **
+ ** If the table has an INTEGER PRIMARY KEY column and that column
+ ** is named in the IDLIST, then record in the keyColumn variable
+ ** the index into IDLIST of the primary key column. keyColumn is
+ ** the index of the primary key as it appears in IDLIST, not as
+ ** is appears in the original table. (The index of the primary
+ ** key in the original table is pTab->iPKey.)
+ */
+ if( pColumn ){
+ for(i=0; i<pColumn->nId; i++){
+ pColumn->a[i].idx = -1;
+ }
+ for(i=0; i<pColumn->nId; i++){
+ for(j=0; j<pTab->nCol; j++){
+ if( sqlite3StrICmp(pColumn->a[i].zName, pTab->aCol[j].zName)==0 ){
+ pColumn->a[i].idx = j;
+ if( j==pTab->iPKey ){
+ keyColumn = i;
+ }
+ break;
+ }
+ }
+ if( j>=pTab->nCol ){
+ if( sqlite3IsRowid(pColumn->a[i].zName) ){
+ keyColumn = i;
+ }else{
+ sqlite3ErrorMsg(pParse, "table %S has no column named %s",
+ pTabList, 0, pColumn->a[i].zName);
+ pParse->nErr++;
+ goto insert_cleanup;
+ }
+ }
+ }
+ }
+
+ /* If there is no IDLIST term but the table has an integer primary
+ ** key, then set the keyColumn variable to the primary key column index
+ ** in the original table definition.
+ */
+ if( pColumn==0 && nColumn>0 ){
+ keyColumn = pTab->iPKey;
+ }
+
+ /* Open the temp table for FOR EACH ROW triggers
+ */
+ if( triggers_exist ){
+ sqlite3VdbeAddOp(v, OP_OpenPseudo, newIdx, 0);
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, newIdx, pTab->nCol);
+ }
+
+ /* Initialize the count of rows to be inserted
+ */
+ if( db->flags & SQLITE_CountRows ){
+ iCntMem = pParse->nMem++;
+ sqlite3VdbeAddOp(v, OP_MemInt, 0, iCntMem);
+ }
+
+ /* Open tables and indices if there are no row triggers */
+ if( !triggers_exist ){
+ base = pParse->nTab;
+ sqlite3OpenTableAndIndices(pParse, pTab, base, OP_OpenWrite);
+ }
+
+ /* If the data source is a temporary table, then we have to create
+ ** a loop because there might be multiple rows of data. If the data
+ ** source is a subroutine call from the SELECT statement, then we need
+ ** to launch the SELECT statement processing.
+ */
+ if( useTempTable ){
+ iBreak = sqlite3VdbeMakeLabel(v);
+ sqlite3VdbeAddOp(v, OP_Rewind, srcTab, iBreak);
+ iCont = sqlite3VdbeCurrentAddr(v);
+ }else if( pSelect ){
+ sqlite3VdbeAddOp(v, OP_Goto, 0, iSelectLoop);
+ sqlite3VdbeResolveLabel(v, iInsertBlock);
+ }
+
+ /* Run the BEFORE and INSTEAD OF triggers, if there are any
+ */
+ endOfLoop = sqlite3VdbeMakeLabel(v);
+ if( triggers_exist & TRIGGER_BEFORE ){
+
+ /* build the NEW.* reference row. Note that if there is an INTEGER
+ ** PRIMARY KEY into which a NULL is being inserted, that NULL will be
+ ** translated into a unique ID for the row. But on a BEFORE trigger,
+ ** we do not know what the unique ID will be (because the insert has
+ ** not happened yet) so we substitute a rowid of -1
+ */
+ if( keyColumn<0 ){
+ sqlite3VdbeAddOp(v, OP_Integer, -1, 0);
+ }else if( useTempTable ){
+ sqlite3VdbeAddOp(v, OP_Column, srcTab, keyColumn);
+ }else{
+ assert( pSelect==0 ); /* Otherwise useTempTable is true */
+ sqlite3ExprCode(pParse, pList->a[keyColumn].pExpr);
+ sqlite3VdbeAddOp(v, OP_NotNull, -1, sqlite3VdbeCurrentAddr(v)+3);
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Integer, -1, 0);
+ sqlite3VdbeAddOp(v, OP_MustBeInt, 0, 0);
+ }
+
+ /* Create the new column data
+ */
+ for(i=0; i<pTab->nCol; i++){
+ if( pColumn==0 ){
+ j = i;
+ }else{
+ for(j=0; j<pColumn->nId; j++){
+ if( pColumn->a[j].idx==i ) break;
+ }
+ }
+ if( pColumn && j>=pColumn->nId ){
+ sqlite3ExprCode(pParse, pTab->aCol[i].pDflt);
+ }else if( useTempTable ){
+ sqlite3VdbeAddOp(v, OP_Column, srcTab, j);
+ }else{
+ assert( pSelect==0 ); /* Otherwise useTempTable is true */
+ sqlite3ExprCodeAndCache(pParse, pList->a[j].pExpr);
+ }
+ }
+ sqlite3VdbeAddOp(v, OP_MakeRecord, pTab->nCol, 0);
+
+ /* If this is an INSERT on a view with an INSTEAD OF INSERT trigger,
+ ** do not attempt any conversions before assembling the record.
+ ** If this is a real table, attempt conversions as required by the
+ ** table column affinities.
+ */
+ if( !isView ){
+ sqlite3TableAffinityStr(v, pTab);
+ }
+ sqlite3VdbeAddOp(v, OP_Insert, newIdx, 0);
+
+ /* Fire BEFORE or INSTEAD OF triggers */
+ if( sqlite3CodeRowTrigger(pParse, TK_INSERT, 0, TRIGGER_BEFORE, pTab,
+ newIdx, -1, onError, endOfLoop) ){
+ goto insert_cleanup;
+ }
+ }
+
+ /* If any triggers exist, the opening of tables and indices is deferred
+ ** until now.
+ */
+ if( triggers_exist && !isView ){
+ base = pParse->nTab;
+ sqlite3OpenTableAndIndices(pParse, pTab, base, OP_OpenWrite);
+ }
+
+ /* Push the record number for the new entry onto the stack. The
+ ** record number is a randomly generated integer created by NewRowid
+ ** except when the table has an INTEGER PRIMARY KEY column, in which
+ ** case the record number is the same as that column.
+ */
+ if( !isView ){
+ if( IsVirtual(pTab) ){
+ /* The row that the VUpdate opcode will delete: none */
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ }
+ if( keyColumn>=0 ){
+ if( useTempTable ){
+ sqlite3VdbeAddOp(v, OP_Column, srcTab, keyColumn);
+ }else if( pSelect ){
+ sqlite3VdbeAddOp(v, OP_Dup, nColumn - keyColumn - 1, 1);
+ }else{
+ sqlite3ExprCode(pParse, pList->a[keyColumn].pExpr);
+ }
+ /* If the PRIMARY KEY expression is NULL, then use OP_NewRowid
+ ** to generate a unique primary key value.
+ */
+ sqlite3VdbeAddOp(v, OP_NotNull, -1, sqlite3VdbeCurrentAddr(v)+3);
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ sqlite3VdbeAddOp(v, OP_NewRowid, base, counterMem);
+ sqlite3VdbeAddOp(v, OP_MustBeInt, 0, 0);
+ }else if( IsVirtual(pTab) ){
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ }else{
+ sqlite3VdbeAddOp(v, OP_NewRowid, base, counterMem);
+ }
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ if( pTab->autoInc ){
+ sqlite3VdbeAddOp(v, OP_MemMax, counterMem, 0);
+ }
+#endif /* SQLITE_OMIT_AUTOINCREMENT */
+
+ /* Push the data for all columns of the new entry onto the stack,
+ ** beginning with the first column.
+ */
+ for(i=0; i<pTab->nCol; i++){
+ if( i==pTab->iPKey ){
+ /* The value of the INTEGER PRIMARY KEY column is always a NULL.
+ ** Whenever this column is read, the record number will be substituted
+ ** in its place. So we fill this column with a NULL to avoid
+ ** taking up data space with information that will never be used. */
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ continue;
+ }
+ if( pColumn==0 ){
+ j = i;
+ }else{
+ for(j=0; j<pColumn->nId; j++){
+ if( pColumn->a[j].idx==i ) break;
+ }
+ }
+ if( nColumn==0 || (pColumn && j>=pColumn->nId) ){
+ sqlite3ExprCode(pParse, pTab->aCol[i].pDflt);
+ }else if( useTempTable ){
+ sqlite3VdbeAddOp(v, OP_Column, srcTab, j);
+ }else if( pSelect ){
+ sqlite3VdbeAddOp(v, OP_Dup, i+nColumn-j+IsVirtual(pTab), 1);
+ }else{
+ sqlite3ExprCode(pParse, pList->a[j].pExpr);
+ }
+ }
+
+ /* Generate code to check constraints and generate index keys and
+ ** do the insertion.
+ */
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( IsVirtual(pTab) ){
+ pParse->pVirtualLock = pTab;
+ sqlite3VdbeOp3(v, OP_VUpdate, 1, pTab->nCol+2,
+ (const char*)pTab->pVtab, P3_VTAB);
+ }else
+#endif
+ {
+ sqlite3GenerateConstraintChecks(pParse, pTab, base, 0, keyColumn>=0,
+ 0, onError, endOfLoop);
+ sqlite3CompleteInsertion(pParse, pTab, base, 0,0,0,
+ (triggers_exist & TRIGGER_AFTER)!=0 ? newIdx : -1);
+ }
+ }
+
+ /* Update the count of rows that are inserted
+ */
+ if( (db->flags & SQLITE_CountRows)!=0 ){
+ sqlite3VdbeAddOp(v, OP_MemIncr, 1, iCntMem);
+ }
+
+ if( triggers_exist ){
+ /* Close all tables opened */
+ if( !isView ){
+ sqlite3VdbeAddOp(v, OP_Close, base, 0);
+ for(idx=1, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, idx++){
+ sqlite3VdbeAddOp(v, OP_Close, idx+base, 0);
+ }
+ }
+
+ /* Code AFTER triggers */
+ if( sqlite3CodeRowTrigger(pParse, TK_INSERT, 0, TRIGGER_AFTER, pTab,
+ newIdx, -1, onError, endOfLoop) ){
+ goto insert_cleanup;
+ }
+ }
+
+ /* The bottom of the loop, if the data source is a SELECT statement
+ */
+ sqlite3VdbeResolveLabel(v, endOfLoop);
+ if( useTempTable ){
+ sqlite3VdbeAddOp(v, OP_Next, srcTab, iCont);
+ sqlite3VdbeResolveLabel(v, iBreak);
+ sqlite3VdbeAddOp(v, OP_Close, srcTab, 0);
+ }else if( pSelect ){
+ sqlite3VdbeAddOp(v, OP_Pop, nColumn, 0);
+ sqlite3VdbeAddOp(v, OP_Return, 0, 0);
+ sqlite3VdbeResolveLabel(v, iCleanup);
+ }
+
+ if( !triggers_exist && !IsVirtual(pTab) ){
+ /* Close all tables opened */
+ sqlite3VdbeAddOp(v, OP_Close, base, 0);
+ for(idx=1, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, idx++){
+ sqlite3VdbeAddOp(v, OP_Close, idx+base, 0);
+ }
+ }
+
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ /* Update the sqlite_sequence table by storing the content of the
+ ** counter value in memory counterMem back into the sqlite_sequence
+ ** table.
+ */
+ if( pTab->autoInc ){
+ int iCur = pParse->nTab;
+ int addr = sqlite3VdbeCurrentAddr(v);
+ sqlite3OpenTable(pParse, iCur, iDb, pDb->pSchema->pSeqTab, OP_OpenWrite);
+ sqlite3VdbeAddOp(v, OP_MemLoad, counterRowid, 0);
+ sqlite3VdbeAddOp(v, OP_NotNull, -1, addr+7);
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ sqlite3VdbeAddOp(v, OP_NewRowid, iCur, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, pTab->zName, 0);
+ sqlite3VdbeAddOp(v, OP_MemLoad, counterMem, 0);
+ sqlite3VdbeAddOp(v, OP_MakeRecord, 2, 0);
+ sqlite3VdbeAddOp(v, OP_Insert, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
+ }
+#endif
+
+ /*
+ ** Return the number of rows inserted. If this routine is
+ ** generating code because of a call to sqlite3NestedParse(), do not
+ ** invoke the callback function.
+ */
+ if( db->flags & SQLITE_CountRows && pParse->nested==0 && !pParse->trigStack ){
+ sqlite3VdbeAddOp(v, OP_MemLoad, iCntMem, 0);
+ sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
+ sqlite3VdbeSetNumCols(v, 1);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "rows inserted", P3_STATIC);
+ }
+
+insert_cleanup:
+ sqlite3SrcListDelete(pTabList);
+ sqlite3ExprListDelete(pList);
+ sqlite3SelectDelete(pSelect);
+ sqlite3IdListDelete(pColumn);
+}
+
+/*
+** Generate code to do a constraint check prior to an INSERT or an UPDATE.
+**
+** When this routine is called, the stack contains (from bottom to top)
+** the following values:
+**
+** 1. The rowid of the row to be updated before the update. This
+** value is omitted unless we are doing an UPDATE that involves a
+** change to the record number.
+**
+** 2. The rowid of the row after the update.
+**
+** 3. The data in the first column of the entry after the update.
+**
+** i. Data from middle columns...
+**
+** N. The data in the last column of the entry after the update.
+**
+** The old rowid shown as entry (1) above is omitted unless both isUpdate
+** and rowidChng are 1. isUpdate is true for UPDATEs and false for
+** INSERTs and rowidChng is true if the record number is being changed.
+**
+** The code generated by this routine pushes additional entries onto
+** the stack which are the keys for new index entries for the new record.
+** The order of index keys is the same as the order of the indices on
+** the pTable->pIndex list. A key is only created for index i if
+** aIdxUsed!=0 and aIdxUsed[i]!=0.
+**
+** This routine also generates code to check constraints. NOT NULL,
+** CHECK, and UNIQUE constraints are all checked. If a constraint fails,
+** then the appropriate action is performed. There are five possible
+** actions: ROLLBACK, ABORT, FAIL, REPLACE, and IGNORE.
+**
+** Constraint type Action What Happens
+** --------------- ---------- ----------------------------------------
+** any ROLLBACK The current transaction is rolled back and
+** sqlite3_exec() returns immediately with a
+** return code of SQLITE_CONSTRAINT.
+**
+** any ABORT Back out changes from the current command
+** only (do not do a complete rollback) then
+** cause sqlite3_exec() to return immediately
+** with SQLITE_CONSTRAINT.
+**
+** any FAIL sqlite3_exec() returns immediately with a
+** return code of SQLITE_CONSTRAINT. The
+** transaction is not rolled back and any
+** prior changes are retained.
+**
+** any IGNORE The record number and data are popped from
+** the stack and there is an immediate jump
+** to label ignoreDest.
+**
+** NOT NULL REPLACE The NULL value is replaced by the default
+** value for that column. If the default value
+** is NULL, the action is the same as ABORT.
+**
+** UNIQUE REPLACE The other row that conflicts with the row
+** being inserted is removed.
+**
+** CHECK REPLACE Illegal. Results in an exception.
+**
+** Which action to take is determined by the overrideError parameter.
+** Or if overrideError==OE_Default, then the pParse->onError parameter
+** is used. Or if pParse->onError==OE_Default then the onError value
+** for the constraint is used.
+**
+** The calling routine must open a read/write cursor for pTab with
+** cursor number "base". All indices of pTab must also have open
+** read/write cursors with cursor number base+i for the i-th cursor.
+** Except, if there is no possibility of a REPLACE action then
+** cursors do not need to be open for indices where aIdxUsed[i]==0.
+**
+** If the isUpdate flag is true, it means that the "base" cursor is
+** initially pointing to an entry that is being updated. The isUpdate
+** flag causes extra code to be generated so that the "base" cursor
+** is still pointing at the same entry after the routine returns.
+** Without the isUpdate flag, the "base" cursor might be moved.
+*/
+void sqlite3GenerateConstraintChecks(
+ Parse *pParse, /* The parser context */
+ Table *pTab, /* the table into which we are inserting */
+ int base, /* Index of a read/write cursor pointing at pTab */
+ char *aIdxUsed, /* Which indices are used. NULL means all are used */
+ int rowidChng, /* True if the record number will change */
+ int isUpdate, /* True for UPDATE, False for INSERT */
+ int overrideError, /* Override onError to this if not OE_Default */
+ int ignoreDest /* Jump to this label on an OE_Ignore resolution */
+){
+ int i;
+ Vdbe *v;
+ int nCol;
+ int onError;
+ int addr;
+ int extra;
+ int iCur;
+ Index *pIdx;
+ int seenReplace = 0;
+ int jumpInst1=0, jumpInst2;
+ int hasTwoRowids = (isUpdate && rowidChng);
+
+ v = sqlite3GetVdbe(pParse);
+ assert( v!=0 );
+ assert( pTab->pSelect==0 ); /* This table is not a VIEW */
+ nCol = pTab->nCol;
+
+ /* Test all NOT NULL constraints.
+ */
+ for(i=0; i<nCol; i++){
+ if( i==pTab->iPKey ){
+ continue;
+ }
+ onError = pTab->aCol[i].notNull;
+ if( onError==OE_None ) continue;
+ if( overrideError!=OE_Default ){
+ onError = overrideError;
+ }else if( onError==OE_Default ){
+ onError = OE_Abort;
+ }
+ if( onError==OE_Replace && pTab->aCol[i].pDflt==0 ){
+ onError = OE_Abort;
+ }
+ sqlite3VdbeAddOp(v, OP_Dup, nCol-1-i, 1);
+ addr = sqlite3VdbeAddOp(v, OP_NotNull, 1, 0);
+ assert( onError==OE_Rollback || onError==OE_Abort || onError==OE_Fail
+ || onError==OE_Ignore || onError==OE_Replace );
+ switch( onError ){
+ case OE_Rollback:
+ case OE_Abort:
+ case OE_Fail: {
+ char *zMsg = 0;
+ sqlite3VdbeAddOp(v, OP_Halt, SQLITE_CONSTRAINT, onError);
+ sqlite3SetString(&zMsg, pTab->zName, ".", pTab->aCol[i].zName,
+ " may not be NULL", (char*)0);
+ sqlite3VdbeChangeP3(v, -1, zMsg, P3_DYNAMIC);
+ break;
+ }
+ case OE_Ignore: {
+ sqlite3VdbeAddOp(v, OP_Pop, nCol+1+hasTwoRowids, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, ignoreDest);
+ break;
+ }
+ case OE_Replace: {
+ sqlite3ExprCode(pParse, pTab->aCol[i].pDflt);
+ sqlite3VdbeAddOp(v, OP_Push, nCol-i, 0);
+ break;
+ }
+ }
+ sqlite3VdbeJumpHere(v, addr);
+ }
+
+ /* Test all CHECK constraints
+ */
+#ifndef SQLITE_OMIT_CHECK
+ if( pTab->pCheck && (pParse->db->flags & SQLITE_IgnoreChecks)==0 ){
+ int allOk = sqlite3VdbeMakeLabel(v);
+ assert( pParse->ckOffset==0 );
+ pParse->ckOffset = nCol;
+ sqlite3ExprIfTrue(pParse, pTab->pCheck, allOk, 1);
+ assert( pParse->ckOffset==nCol );
+ pParse->ckOffset = 0;
+ onError = overrideError!=OE_Default ? overrideError : OE_Abort;
+ if( onError==OE_Ignore || onError==OE_Replace ){
+ sqlite3VdbeAddOp(v, OP_Pop, nCol+1+hasTwoRowids, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, ignoreDest);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Halt, SQLITE_CONSTRAINT, onError);
+ }
+ sqlite3VdbeResolveLabel(v, allOk);
+ }
+#endif /* !defined(SQLITE_OMIT_CHECK) */
+
+ /* If we have an INTEGER PRIMARY KEY, make sure the primary key
+ ** of the new record does not previously exist. Except, if this
+ ** is an UPDATE and the primary key is not changing, that is OK.
+ */
+ if( rowidChng ){
+ onError = pTab->keyConf;
+ if( overrideError!=OE_Default ){
+ onError = overrideError;
+ }else if( onError==OE_Default ){
+ onError = OE_Abort;
+ }
+
+ if( isUpdate ){
+ sqlite3VdbeAddOp(v, OP_Dup, nCol+1, 1);
+ sqlite3VdbeAddOp(v, OP_Dup, nCol+1, 1);
+ jumpInst1 = sqlite3VdbeAddOp(v, OP_Eq, 0, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_Dup, nCol, 1);
+ jumpInst2 = sqlite3VdbeAddOp(v, OP_NotExists, base, 0);
+ switch( onError ){
+ default: {
+ onError = OE_Abort;
+ /* Fall thru into the next case */
+ }
+ case OE_Rollback:
+ case OE_Abort:
+ case OE_Fail: {
+ sqlite3VdbeOp3(v, OP_Halt, SQLITE_CONSTRAINT, onError,
+ "PRIMARY KEY must be unique", P3_STATIC);
+ break;
+ }
+ case OE_Replace: {
+ sqlite3GenerateRowIndexDelete(v, pTab, base, 0);
+ if( isUpdate ){
+ sqlite3VdbeAddOp(v, OP_Dup, nCol+hasTwoRowids, 1);
+ sqlite3VdbeAddOp(v, OP_MoveGe, base, 0);
+ }
+ seenReplace = 1;
+ break;
+ }
+ case OE_Ignore: {
+ assert( seenReplace==0 );
+ sqlite3VdbeAddOp(v, OP_Pop, nCol+1+hasTwoRowids, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, ignoreDest);
+ break;
+ }
+ }
+ sqlite3VdbeJumpHere(v, jumpInst2);
+ if( isUpdate ){
+ sqlite3VdbeJumpHere(v, jumpInst1);
+ sqlite3VdbeAddOp(v, OP_Dup, nCol+1, 1);
+ sqlite3VdbeAddOp(v, OP_MoveGe, base, 0);
+ }
+ }
+
+ /* Test all UNIQUE constraints by creating entries for each UNIQUE
+ ** index and making sure that duplicate entries do not already exist.
+ ** Add the new records to the indices as we go.
+ */
+ extra = -1;
+ for(iCur=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, iCur++){
+ if( aIdxUsed && aIdxUsed[iCur]==0 ) continue; /* Skip unused indices */
+ extra++;
+
+ /* Create a key for accessing the index entry */
+ sqlite3VdbeAddOp(v, OP_Dup, nCol+extra, 1);
+ for(i=0; i<pIdx->nColumn; i++){
+ int idx = pIdx->aiColumn[i];
+ if( idx==pTab->iPKey ){
+ sqlite3VdbeAddOp(v, OP_Dup, i+extra+nCol+1, 1);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Dup, i+extra+nCol-idx, 1);
+ }
+ }
+ jumpInst1 = sqlite3VdbeAddOp(v, OP_MakeIdxRec, pIdx->nColumn, 0);
+ sqlite3IndexAffinityStr(v, pIdx);
+
+ /* Find out what action to take in case there is an indexing conflict */
+ onError = pIdx->onError;
+ if( onError==OE_None ) continue; /* pIdx is not a UNIQUE index */
+ if( overrideError!=OE_Default ){
+ onError = overrideError;
+ }else if( onError==OE_Default ){
+ onError = OE_Abort;
+ }
+ if( seenReplace ){
+ if( onError==OE_Ignore ) onError = OE_Replace;
+ else if( onError==OE_Fail ) onError = OE_Abort;
+ }
+
+
+ /* Check to see if the new index entry will be unique */
+ sqlite3VdbeAddOp(v, OP_Dup, extra+nCol+1+hasTwoRowids, 1);
+ jumpInst2 = sqlite3VdbeAddOp(v, OP_IsUnique, base+iCur+1, 0);
+
+ /* Generate code that executes if the new index entry is not unique */
+ assert( onError==OE_Rollback || onError==OE_Abort || onError==OE_Fail
+ || onError==OE_Ignore || onError==OE_Replace );
+ switch( onError ){
+ case OE_Rollback:
+ case OE_Abort:
+ case OE_Fail: {
+ int j, n1, n2;
+ char zErrMsg[200];
+ strcpy(zErrMsg, pIdx->nColumn>1 ? "columns " : "column ");
+ n1 = strlen(zErrMsg);
+ for(j=0; j<pIdx->nColumn && n1<sizeof(zErrMsg)-30; j++){
+ char *zCol = pTab->aCol[pIdx->aiColumn[j]].zName;
+ n2 = strlen(zCol);
+ if( j>0 ){
+ strcpy(&zErrMsg[n1], ", ");
+ n1 += 2;
+ }
+ if( n1+n2>sizeof(zErrMsg)-30 ){
+ strcpy(&zErrMsg[n1], "...");
+ n1 += 3;
+ break;
+ }else{
+ strcpy(&zErrMsg[n1], zCol);
+ n1 += n2;
+ }
+ }
+ strcpy(&zErrMsg[n1],
+ pIdx->nColumn>1 ? " are not unique" : " is not unique");
+ sqlite3VdbeOp3(v, OP_Halt, SQLITE_CONSTRAINT, onError, zErrMsg, 0);
+ break;
+ }
+ case OE_Ignore: {
+ assert( seenReplace==0 );
+ sqlite3VdbeAddOp(v, OP_Pop, nCol+extra+3+hasTwoRowids, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, ignoreDest);
+ break;
+ }
+ case OE_Replace: {
+ sqlite3GenerateRowDelete(pParse->db, v, pTab, base, 0);
+ if( isUpdate ){
+ sqlite3VdbeAddOp(v, OP_Dup, nCol+extra+1+hasTwoRowids, 1);
+ sqlite3VdbeAddOp(v, OP_MoveGe, base, 0);
+ }
+ seenReplace = 1;
+ break;
+ }
+ }
+#if NULL_DISTINCT_FOR_UNIQUE
+ sqlite3VdbeJumpHere(v, jumpInst1);
+#endif
+ sqlite3VdbeJumpHere(v, jumpInst2);
+ }
+}
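+
+/* A rough illustration of how the conflict resolution actions described
+** above are selected from SQL (the table and column names here are
+** hypothetical). The "OR <action>" clause on an INSERT or UPDATE reaches
+** this routine via the overrideError parameter, while an "ON CONFLICT"
+** clause attached to a constraint supplies that constraint's own onError
+** default:
+**
+**   CREATE TABLE t1(a INTEGER PRIMARY KEY, b TEXT UNIQUE ON CONFLICT IGNORE);
+**   INSERT INTO t1 VALUES(1, 'x');               -- default action: ABORT
+**   INSERT OR REPLACE INTO t1 VALUES(1, 'y');    -- REPLACE overrides
+**   INSERT OR ROLLBACK INTO t1 VALUES(1, 'z');   -- ROLLBACK overrides
+*/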
+
+/*
+** This routine generates code to finish the INSERT or UPDATE operation
+** that was started by a prior call to sqlite3GenerateConstraintChecks.
+** The stack must contain keys for all active indices followed by data
+** and the rowid for the new entry. This routine creates the new
+** entries in all indices and in the main table.
+**
+** The arguments to this routine should be the same as the first six
+** arguments to sqlite3GenerateConstraintChecks.
+*/
+void sqlite3CompleteInsertion(
+ Parse *pParse, /* The parser context */
+ Table *pTab, /* the table into which we are inserting */
+ int base, /* Index of a read/write cursor pointing at pTab */
+ char *aIdxUsed, /* Which indices are used. NULL means all are used */
+ int rowidChng, /* True if the record number will change */
+ int isUpdate, /* True for UPDATE, False for INSERT */
+ int newIdx /* Index of NEW table for triggers. -1 if none */
+){
+ int i;
+ Vdbe *v;
+ int nIdx;
+ Index *pIdx;
+ int pik_flags;
+
+ v = sqlite3GetVdbe(pParse);
+ assert( v!=0 );
+ assert( pTab->pSelect==0 ); /* This table is not a VIEW */
+ for(nIdx=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, nIdx++){}
+ for(i=nIdx-1; i>=0; i--){
+ if( aIdxUsed && aIdxUsed[i]==0 ) continue;
+ sqlite3VdbeAddOp(v, OP_IdxInsert, base+i+1, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_MakeRecord, pTab->nCol, 0);
+ sqlite3TableAffinityStr(v, pTab);
+#ifndef SQLITE_OMIT_TRIGGER
+ if( newIdx>=0 ){
+ sqlite3VdbeAddOp(v, OP_Dup, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Dup, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Insert, newIdx, 0);
+ }
+#endif
+ if( pParse->nested ){
+ pik_flags = 0;
+ }else{
+ pik_flags = OPFLAG_NCHANGE;
+ pik_flags |= (isUpdate?OPFLAG_ISUPDATE:OPFLAG_LASTROWID);
+ }
+ sqlite3VdbeAddOp(v, OP_Insert, base, pik_flags);
+ if( !pParse->nested ){
+ sqlite3VdbeChangeP3(v, -1, pTab->zName, P3_STATIC);
+ }
+
+ if( isUpdate && rowidChng ){
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ }
+}
+
+/*
+** Generate code that will open cursors for a table and for all
+** indices of that table. The "base" parameter is the cursor number used
+** for the table. Indices are opened on subsequent cursors.
+*/
+void sqlite3OpenTableAndIndices(
+ Parse *pParse, /* Parsing context */
+ Table *pTab, /* Table to be opened */
+ int base, /* Cursor number assigned to the table */
+ int op /* OP_OpenRead or OP_OpenWrite */
+){
+ int i;
+ int iDb;
+ Index *pIdx;
+ Vdbe *v;
+
+ if( IsVirtual(pTab) ) return;
+ iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ v = sqlite3GetVdbe(pParse);
+ assert( v!=0 );
+ sqlite3OpenTable(pParse, base, iDb, pTab, op);
+ for(i=1, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){
+ KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx);
+ assert( pIdx->pSchema==pTab->pSchema );
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ VdbeComment((v, "# %s", pIdx->zName));
+ sqlite3VdbeOp3(v, op, i+base, pIdx->tnum, (char*)pKey, P3_KEYINFO_HANDOFF);
+ }
+ if( pParse->nTab<=base+i ){
+ pParse->nTab = base+i;
+ }
+}
Added: freeswitch/trunk/libs/sqlite/src/legacy.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/legacy.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,136 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Main file for the SQLite library. The routines in this file
+** implement the programmer interface to the library. Routines in
+** other files are for internal use by SQLite and should not be
+** accessed by users of the library.
+**
+** $Id: legacy.c,v 1.16 2006/09/15 07:28:50 drh Exp $
+*/
+
+#include "sqliteInt.h"
+#include "os.h"
+#include <ctype.h>
+
+/*
+** Execute SQL code. Return one of the SQLITE_ success/failure
+** codes. Also write an error message into memory obtained from
+** malloc() and make *pzErrMsg point to that message.
+**
+** If the SQL is a query, then for each row in the query result
+** the xCallback() function is called. pArg becomes the first
+** argument to xCallback(). If xCallback=NULL then no callback
+** is invoked, even for queries.
+*/
+int sqlite3_exec(
+ sqlite3 *db, /* The database on which the SQL executes */
+ const char *zSql, /* The SQL to be executed */
+ sqlite3_callback xCallback, /* Invoke this callback routine */
+ void *pArg, /* First argument to xCallback() */
+ char **pzErrMsg /* Write error messages here */
+){
+ int rc = SQLITE_OK;
+ const char *zLeftover;
+ sqlite3_stmt *pStmt = 0;
+ char **azCols = 0;
+
+ int nRetry = 0;
+ int nChange = 0;
+ int nCallback;
+
+ if( zSql==0 ) return SQLITE_OK;
+ while( (rc==SQLITE_OK || (rc==SQLITE_SCHEMA && (++nRetry)<2)) && zSql[0] ){
+ int nCol;
+ char **azVals = 0;
+
+ pStmt = 0;
+ rc = sqlite3_prepare(db, zSql, -1, &pStmt, &zLeftover);
+ assert( rc==SQLITE_OK || pStmt==0 );
+ if( rc!=SQLITE_OK ){
+ continue;
+ }
+ if( !pStmt ){
+ /* this happens for a comment or white-space */
+ zSql = zLeftover;
+ continue;
+ }
+
+ db->nChange += nChange;
+ nCallback = 0;
+
+ nCol = sqlite3_column_count(pStmt);
+ azCols = sqliteMalloc(2*nCol*sizeof(const char *) + 1);
+ if( azCols==0 ){
+ goto exec_out;
+ }
+
+ while( 1 ){
+ int i;
+ rc = sqlite3_step(pStmt);
+
+ /* Invoke the callback function if required */
+ if( xCallback && (SQLITE_ROW==rc ||
+ (SQLITE_DONE==rc && !nCallback && db->flags&SQLITE_NullCallback)) ){
+ if( 0==nCallback ){
+ for(i=0; i<nCol; i++){
+ azCols[i] = (char *)sqlite3_column_name(pStmt, i);
+ }
+ nCallback++;
+ }
+ if( rc==SQLITE_ROW ){
+ azVals = &azCols[nCol];
+ for(i=0; i<nCol; i++){
+ azVals[i] = (char *)sqlite3_column_text(pStmt, i);
+ }
+ }
+ if( xCallback(pArg, nCol, azVals, azCols) ){
+ rc = SQLITE_ABORT;
+ goto exec_out;
+ }
+ }
+
+ if( rc!=SQLITE_ROW ){
+ rc = sqlite3_finalize(pStmt);
+ pStmt = 0;
+ if( db->pVdbe==0 ){
+ nChange = db->nChange;
+ }
+ if( rc!=SQLITE_SCHEMA ){
+ nRetry = 0;
+ zSql = zLeftover;
+ while( isspace((unsigned char)zSql[0]) ) zSql++;
+ }
+ break;
+ }
+ }
+
+ sqliteFree(azCols);
+ azCols = 0;
+ }
+
+exec_out:
+ if( pStmt ) sqlite3_finalize(pStmt);
+ if( azCols ) sqliteFree(azCols);
+
+ rc = sqlite3ApiExit(0, rc);
+ if( rc!=SQLITE_OK && rc==sqlite3_errcode(db) && pzErrMsg ){
+ *pzErrMsg = sqlite3_malloc(1+strlen(sqlite3_errmsg(db)));
+ if( *pzErrMsg ){
+ strcpy(*pzErrMsg, sqlite3_errmsg(db));
+ }
+ }else if( pzErrMsg ){
+ *pzErrMsg = 0;
+ }
+
+ assert( (rc&db->errMask)==rc );
+ return rc;
+}
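+
+/* A minimal usage sketch for sqlite3_exec(). The table name "t1" and the
+** callback below are hypothetical; the callback is invoked once per result
+** row and returning non-zero aborts the query with SQLITE_ABORT. Error
+** messages returned through the last argument are freed with sqlite3_free().
+**
+**   static int show_row(void *pArg, int nCol, char **azVal, char **azCol){
+**     int i;
+**     for(i=0; i<nCol; i++){
+**       printf("%s = %s\n", azCol[i], azVal[i] ? azVal[i] : "NULL");
+**     }
+**     return 0;
+**   }
+**
+**   char *zErr = 0;
+**   if( sqlite3_exec(db, "SELECT * FROM t1", show_row, 0, &zErr)!=SQLITE_OK ){
+**     fprintf(stderr, "SQL error: %s\n", zErr);
+**     sqlite3_free(zErr);
+**   }
+*/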
Added: freeswitch/trunk/libs/sqlite/src/loadext.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/loadext.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,439 @@
+/*
+** 2006 June 7
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code used to dynamically load extensions into
+** the SQLite library.
+*/
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+
+#define SQLITE_CORE 1 /* Disable the API redefinition in sqlite3ext.h */
+#include "sqlite3ext.h"
+#include "sqliteInt.h"
+#include "os.h"
+#include <string.h>
+#include <ctype.h>
+
+/*
+** Some API routines are omitted when various features are
+** excluded from a build of SQLite. Substitute a NULL pointer
+** for any missing APIs.
+*/
+#ifndef SQLITE_ENABLE_COLUMN_METADATA
+# define sqlite3_column_database_name 0
+# define sqlite3_column_database_name16 0
+# define sqlite3_column_table_name 0
+# define sqlite3_column_table_name16 0
+# define sqlite3_column_origin_name 0
+# define sqlite3_column_origin_name16 0
+# define sqlite3_table_column_metadata 0
+#endif
+
+#ifdef SQLITE_OMIT_AUTHORIZATION
+# define sqlite3_set_authorizer 0
+#endif
+
+#ifdef SQLITE_OMIT_UTF16
+# define sqlite3_bind_text16 0
+# define sqlite3_collation_needed16 0
+# define sqlite3_column_decltype16 0
+# define sqlite3_column_name16 0
+# define sqlite3_column_text16 0
+# define sqlite3_complete16 0
+# define sqlite3_create_collation16 0
+# define sqlite3_create_function16 0
+# define sqlite3_errmsg16 0
+# define sqlite3_open16 0
+# define sqlite3_prepare16 0
+# define sqlite3_result_error16 0
+# define sqlite3_result_text16 0
+# define sqlite3_result_text16be 0
+# define sqlite3_result_text16le 0
+# define sqlite3_value_text16 0
+# define sqlite3_value_text16be 0
+# define sqlite3_value_text16le 0
+#endif
+
+#ifdef SQLITE_OMIT_COMPLETE
+# define sqlite3_complete 0
+# define sqlite3_complete16 0
+#endif
+
+#ifdef SQLITE_OMIT_PROGRESS_CALLBACK
+# define sqlite3_progress_handler 0
+#endif
+
+#ifdef SQLITE_OMIT_VIRTUALTABLE
+# define sqlite3_create_module 0
+# define sqlite3_declare_vtab 0
+#endif
+
+/*
+** The following structure contains pointers to all SQLite API routines.
+** A pointer to this structure is passed into extensions when they are
+** loaded so that the extension can make calls back into the SQLite
+** library.
+**
+** When adding new APIs, add them to the bottom of this structure
+** in order to preserve backwards compatibility.
+**
+** Extensions that use newer APIs should first call
+** sqlite3_libversion_number() to make sure that the API they
+** intend to use is supported by the library. Extensions should
+** also check to make sure that the pointer to the function is
+** not NULL before calling it.
+*/
+const sqlite3_api_routines sqlite3_apis = {
+ sqlite3_aggregate_context,
+ sqlite3_aggregate_count,
+ sqlite3_bind_blob,
+ sqlite3_bind_double,
+ sqlite3_bind_int,
+ sqlite3_bind_int64,
+ sqlite3_bind_null,
+ sqlite3_bind_parameter_count,
+ sqlite3_bind_parameter_index,
+ sqlite3_bind_parameter_name,
+ sqlite3_bind_text,
+ sqlite3_bind_text16,
+ sqlite3_bind_value,
+ sqlite3_busy_handler,
+ sqlite3_busy_timeout,
+ sqlite3_changes,
+ sqlite3_close,
+ sqlite3_collation_needed,
+ sqlite3_collation_needed16,
+ sqlite3_column_blob,
+ sqlite3_column_bytes,
+ sqlite3_column_bytes16,
+ sqlite3_column_count,
+ sqlite3_column_database_name,
+ sqlite3_column_database_name16,
+ sqlite3_column_decltype,
+ sqlite3_column_decltype16,
+ sqlite3_column_double,
+ sqlite3_column_int,
+ sqlite3_column_int64,
+ sqlite3_column_name,
+ sqlite3_column_name16,
+ sqlite3_column_origin_name,
+ sqlite3_column_origin_name16,
+ sqlite3_column_table_name,
+ sqlite3_column_table_name16,
+ sqlite3_column_text,
+ sqlite3_column_text16,
+ sqlite3_column_type,
+ sqlite3_column_value,
+ sqlite3_commit_hook,
+ sqlite3_complete,
+ sqlite3_complete16,
+ sqlite3_create_collation,
+ sqlite3_create_collation16,
+ sqlite3_create_function,
+ sqlite3_create_function16,
+ sqlite3_create_module,
+ sqlite3_data_count,
+ sqlite3_db_handle,
+ sqlite3_declare_vtab,
+ sqlite3_enable_shared_cache,
+ sqlite3_errcode,
+ sqlite3_errmsg,
+ sqlite3_errmsg16,
+ sqlite3_exec,
+ sqlite3_expired,
+ sqlite3_finalize,
+ sqlite3_free,
+ sqlite3_free_table,
+ sqlite3_get_autocommit,
+ sqlite3_get_auxdata,
+ sqlite3_get_table,
+ sqlite3_global_recover,
+ sqlite3_interrupt,
+ sqlite3_last_insert_rowid,
+ sqlite3_libversion,
+ sqlite3_libversion_number,
+ sqlite3_malloc,
+ sqlite3_mprintf,
+ sqlite3_open,
+ sqlite3_open16,
+ sqlite3_prepare,
+ sqlite3_prepare16,
+ sqlite3_profile,
+ sqlite3_progress_handler,
+ sqlite3_realloc,
+ sqlite3_reset,
+ sqlite3_result_blob,
+ sqlite3_result_double,
+ sqlite3_result_error,
+ sqlite3_result_error16,
+ sqlite3_result_int,
+ sqlite3_result_int64,
+ sqlite3_result_null,
+ sqlite3_result_text,
+ sqlite3_result_text16,
+ sqlite3_result_text16be,
+ sqlite3_result_text16le,
+ sqlite3_result_value,
+ sqlite3_rollback_hook,
+ sqlite3_set_authorizer,
+ sqlite3_set_auxdata,
+ sqlite3_snprintf,
+ sqlite3_step,
+ sqlite3_table_column_metadata,
+ sqlite3_thread_cleanup,
+ sqlite3_total_changes,
+ sqlite3_trace,
+ sqlite3_transfer_bindings,
+ sqlite3_update_hook,
+ sqlite3_user_data,
+ sqlite3_value_blob,
+ sqlite3_value_bytes,
+ sqlite3_value_bytes16,
+ sqlite3_value_double,
+ sqlite3_value_int,
+ sqlite3_value_int64,
+ sqlite3_value_numeric_type,
+ sqlite3_value_text,
+ sqlite3_value_text16,
+ sqlite3_value_text16be,
+ sqlite3_value_text16le,
+ sqlite3_value_type,
+ sqlite3_vmprintf,
+ /*
+ ** The original API set ends here. All extensions can call any
+ ** of the APIs above provided that the pointer is not NULL. But
+ ** before calling APIs that follow, extensions should check
+ ** sqlite3_libversion_number() to make sure they are dealing with
+ ** a library that is new enough to support that API.
+ *************************************************************************
+ */
+ sqlite3_overload_function,
+};
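+
+/* A sketch of the entry point a loadable extension would export (the
+** extension itself is hypothetical). sqlite3ext.h supplies the
+** SQLITE_EXTENSION_INIT macros that route API calls through the
+** sqlite3_api_routines pointer passed in by the loader; a real extension
+** would register its functions, collations or virtual tables on db before
+** returning:
+**
+**   #include "sqlite3ext.h"
+**   SQLITE_EXTENSION_INIT1
+**
+**   int sqlite3_extension_init(
+**     sqlite3 *db,
+**     char **pzErrMsg,
+**     const sqlite3_api_routines *pApi
+**   ){
+**     SQLITE_EXTENSION_INIT2(pApi);
+**     return SQLITE_OK;
+**   }
+*/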
+
+/*
+** The windows implementation of shared-library loaders
+*/
+#if defined(_WIN32) || defined(WIN32) || defined(__MINGW32__) || defined(__BORLANDC__)
+# include <windows.h>
+# define SQLITE_LIBRARY_TYPE HANDLE
+# define SQLITE_OPEN_LIBRARY(A) LoadLibrary(A)
+# define SQLITE_FIND_SYMBOL(A,B) GetProcAddress(A,B)
+# define SQLITE_CLOSE_LIBRARY(A) FreeLibrary(A)
+#endif /* windows */
+
+/*
+** The unix implementation of shared-library loaders
+*/
+#if defined(HAVE_DLOPEN) && !defined(SQLITE_LIBRARY_TYPE)
+# include <dlfcn.h>
+# define SQLITE_LIBRARY_TYPE void*
+# define SQLITE_OPEN_LIBRARY(A) dlopen(A, RTLD_NOW | RTLD_GLOBAL)
+# define SQLITE_FIND_SYMBOL(A,B) dlsym(A,B)
+# define SQLITE_CLOSE_LIBRARY(A) dlclose(A)
+#endif
+
+/*
+** Attempt to load an SQLite extension library contained in the file
+** zFile. The entry point is zProc. zProc may be 0 in which case a
+** default entry point name (sqlite3_extension_init) is used. Use
+** of the default name is recommended.
+**
+** Return SQLITE_OK on success and SQLITE_ERROR if something goes wrong.
+**
+** If an error occurs and pzErrMsg is not 0, then fill *pzErrMsg with
+** error message text. The calling function should free this memory
+** by calling sqlite3_free().
+*/
+int sqlite3_load_extension(
+ sqlite3 *db, /* Load the extension into this database connection */
+ const char *zFile, /* Name of the shared library containing extension */
+ const char *zProc, /* Entry point. Use "sqlite3_extension_init" if 0 */
+ char **pzErrMsg /* Put error message here if not 0 */
+){
+#ifdef SQLITE_LIBRARY_TYPE
+ SQLITE_LIBRARY_TYPE handle;
+ int (*xInit)(sqlite3*,char**,const sqlite3_api_routines*);
+ char *zErrmsg = 0;
+ SQLITE_LIBRARY_TYPE *aHandle;
+
+ /* Ticket #1863. To avoid creating security problems for older
+ ** applications that relink against newer versions of SQLite, the
+ ** ability to run load_extension is turned off by default. One
+ ** must call sqlite3_enable_load_extension() to turn on extension
+ ** loading. Otherwise you get the following error.
+ */
+ if( (db->flags & SQLITE_LoadExtension)==0 ){
+ if( pzErrMsg ){
+ *pzErrMsg = sqlite3_mprintf("not authorized");
+ }
+ return SQLITE_ERROR;
+ }
+
+ if( zProc==0 ){
+ zProc = "sqlite3_extension_init";
+ }
+
+ handle = SQLITE_OPEN_LIBRARY(zFile);
+ if( handle==0 ){
+ if( pzErrMsg ){
+ *pzErrMsg = sqlite3_mprintf("unable to open shared library [%s]", zFile);
+ }
+ return SQLITE_ERROR;
+ }
+ xInit = (int(*)(sqlite3*,char**,const sqlite3_api_routines*))
+ SQLITE_FIND_SYMBOL(handle, zProc);
+ if( xInit==0 ){
+ if( pzErrMsg ){
+ *pzErrMsg = sqlite3_mprintf("no entry point [%s] in shared library [%s]",
+ zProc, zFile);
+ }
+ SQLITE_CLOSE_LIBRARY(handle);
+ return SQLITE_ERROR;
+ }else if( xInit(db, &zErrmsg, &sqlite3_apis) ){
+ if( pzErrMsg ){
+ *pzErrMsg = sqlite3_mprintf("error during initialization: %s", zErrmsg);
+ }
+ sqlite3_free(zErrmsg);
+ SQLITE_CLOSE_LIBRARY(handle);
+ return SQLITE_ERROR;
+ }
+
+ /* Append the new shared library handle to the db->aExtension array. */
+ db->nExtension++;
+ aHandle = sqliteMalloc(sizeof(handle)*db->nExtension);
+ if( aHandle==0 ){
+ return SQLITE_NOMEM;
+ }
+ if( db->nExtension>0 ){
+ memcpy(aHandle, db->aExtension, sizeof(handle)*(db->nExtension-1));
+ }
+ sqliteFree(db->aExtension);
+ db->aExtension = aHandle;
+
+ ((SQLITE_LIBRARY_TYPE*)db->aExtension)[db->nExtension-1] = handle;
+ return SQLITE_OK;
+#else
+ if( pzErrMsg ){
+ *pzErrMsg = sqlite3_mprintf("extension loading is disabled");
+ }
+ return SQLITE_ERROR;
+#endif
+}
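+
+/* A usage sketch (the shared library name is hypothetical). Extension
+** loading is off by default, so it must be enabled on the connection
+** first; passing 0 for the entry point selects the default name:
+**
+**   char *zErr = 0;
+**   sqlite3_enable_load_extension(db, 1);
+**   if( sqlite3_load_extension(db, "./libmyext.so", 0, &zErr)!=SQLITE_OK ){
+**     fprintf(stderr, "load failed: %s\n", zErr);
+**     sqlite3_free(zErr);
+**   }
+*/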
+
+/*
+** Call this routine when the database connection is closing in order
+** to clean up loaded extensions
+*/
+void sqlite3CloseExtensions(sqlite3 *db){
+#ifdef SQLITE_LIBRARY_TYPE
+ int i;
+ for(i=0; i<db->nExtension; i++){
+ SQLITE_CLOSE_LIBRARY(((SQLITE_LIBRARY_TYPE*)db->aExtension)[i]);
+ }
+ sqliteFree(db->aExtension);
+#endif
+}
+
+/*
+** Enable or disable extension loading. Extension loading is disabled by
+** default so as not to open security holes in older applications.
+*/
+int sqlite3_enable_load_extension(sqlite3 *db, int onoff){
+ if( onoff ){
+ db->flags |= SQLITE_LoadExtension;
+ }else{
+ db->flags &= ~SQLITE_LoadExtension;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** A list of automatically loaded extensions.
+**
+** This list is shared across threads, so be sure to hold the
+** mutex while accessing or changing it.
+*/
+static int nAutoExtension = 0;
+static void **aAutoExtension = 0;
+
+
+/*
+** Register a statically linked extension that is automatically
+** loaded by every new database connection.
+*/
+int sqlite3_auto_extension(void *xInit){
+ int i;
+ int rc = SQLITE_OK;
+ sqlite3OsEnterMutex();
+ for(i=0; i<nAutoExtension; i++){
+ if( aAutoExtension[i]==xInit ) break;
+ }
+ if( i==nAutoExtension ){
+ nAutoExtension++;
+ aAutoExtension = sqlite3Realloc( aAutoExtension,
+ nAutoExtension*sizeof(aAutoExtension[0]) );
+ if( aAutoExtension==0 ){
+ nAutoExtension = 0;
+ rc = SQLITE_NOMEM;
+ }else{
+ aAutoExtension[nAutoExtension-1] = xInit;
+ }
+ }
+ sqlite3OsLeaveMutex();
+ assert( (rc&0xff)==rc );
+ return rc;
+}
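+
+/* A usage sketch: register a statically linked extension (the init
+** function name is hypothetical) so that it is loaded into every new
+** database connection. Note that in this version the entry point is
+** passed as a plain void* pointer:
+**
+**   extern int myext_init(sqlite3*, char**, const sqlite3_api_routines*);
+**   sqlite3_auto_extension((void*)myext_init);
+*/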
+
+/*
+** Reset the automatic extension loading mechanism.
+*/
+void sqlite3_reset_auto_extension(void){
+ sqlite3OsEnterMutex();
+ sqliteFree(aAutoExtension);
+ aAutoExtension = 0;
+ nAutoExtension = 0;
+ sqlite3OsLeaveMutex();
+}
+
+/*
+** Load all automatic extensions.
+*/
+int sqlite3AutoLoadExtensions(sqlite3 *db){
+ int i;
+ int go = 1;
+ int rc = SQLITE_OK;
+ int (*xInit)(sqlite3*,char**,const sqlite3_api_routines*);
+
+ if( nAutoExtension==0 ){
+ /* Common case: early out without ever having to acquire a mutex */
+ return SQLITE_OK;
+ }
+ for(i=0; go; i++){
+ char *zErrmsg = 0;
+ sqlite3OsEnterMutex();
+ if( i>=nAutoExtension ){
+ xInit = 0;
+ go = 0;
+ }else{
+ xInit = (int(*)(sqlite3*,char**,const sqlite3_api_routines*))
+ aAutoExtension[i];
+ }
+ sqlite3OsLeaveMutex();
+ if( xInit && xInit(db, &zErrmsg, &sqlite3_apis) ){
+ sqlite3Error(db, SQLITE_ERROR,
+ "automatic extension loading failed: %s", zErrmsg);
+ go = 0;
+ rc = SQLITE_ERROR;
+ }
+ }
+ return rc;
+}
+
+#endif /* SQLITE_OMIT_LOAD_EXTENSION */
Added: freeswitch/trunk/libs/sqlite/src/main.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/main.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1335 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Main file for the SQLite library. The routines in this file
+** implement the programmer interface to the library. Routines in
+** other files are for internal use by SQLite and should not be
+** accessed by users of the library.
+**
+** $Id: main.c,v 1.358 2006/09/16 21:45:14 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#include <ctype.h>
+
+/*
+** The following constant value is used by the SQLITE_BIGENDIAN and
+** SQLITE_LITTLEENDIAN macros.
+*/
+const int sqlite3one = 1;
+
+/*
+** The version of the library
+*/
+const char sqlite3_version[] = SQLITE_VERSION;
+const char *sqlite3_libversion(void){ return sqlite3_version; }
+int sqlite3_libversion_number(void){ return SQLITE_VERSION_NUMBER; }
+
+/*
+** This is the default collating function named "BINARY" which is always
+** available.
+*/
+static int binCollFunc(
+ void *NotUsed,
+ int nKey1, const void *pKey1,
+ int nKey2, const void *pKey2
+){
+ int rc, n;
+ n = nKey1<nKey2 ? nKey1 : nKey2;
+ rc = memcmp(pKey1, pKey2, n);
+ if( rc==0 ){
+ rc = nKey1 - nKey2;
+ }
+ return rc;
+}
+
+/*
+** Another built-in collating sequence: NOCASE.
+**
+** This collating sequence is intended to be used for "case independent
+** comparison". SQLite's knowledge of upper and lower case equivalents
+** extends only to the 26 characters used in the English language.
+**
+** At the moment there is only a UTF-8 implementation.
+*/
+static int nocaseCollatingFunc(
+ void *NotUsed,
+ int nKey1, const void *pKey1,
+ int nKey2, const void *pKey2
+){
+ int r = sqlite3StrNICmp(
+ (const char *)pKey1, (const char *)pKey2, (nKey1<nKey2)?nKey1:nKey2);
+ if( 0==r ){
+ r = nKey1-nKey2;
+ }
+ return r;
+}
+
+/*
+** Return the ROWID of the most recent insert
+*/
+sqlite_int64 sqlite3_last_insert_rowid(sqlite3 *db){
+ return db->lastRowid;
+}
+
+/*
+** Return the number of changes in the most recent call to sqlite3_exec().
+*/
+int sqlite3_changes(sqlite3 *db){
+ return db->nChange;
+}
+
+/*
+** Return the number of changes since the database handle was opened.
+*/
+int sqlite3_total_changes(sqlite3 *db){
+ return db->nTotalChange;
+}
+
+/*
+** Close an existing SQLite database
+*/
+int sqlite3_close(sqlite3 *db){
+ HashElem *i;
+ int j;
+
+ if( !db ){
+ return SQLITE_OK;
+ }
+ if( sqlite3SafetyCheck(db) ){
+ return SQLITE_MISUSE;
+ }
+
+#ifdef SQLITE_SSE
+ {
+ extern void sqlite3SseCleanup(sqlite3*);
+ sqlite3SseCleanup(db);
+ }
+#endif
+
+ /* If there are any outstanding VMs, return SQLITE_BUSY. */
+ sqlite3ResetInternalSchema(db, 0);
+ if( db->pVdbe ){
+ sqlite3Error(db, SQLITE_BUSY,
+ "Unable to close due to unfinalised statements");
+ return SQLITE_BUSY;
+ }
+ assert( !sqlite3SafetyCheck(db) );
+
+ /* FIX ME: db->magic may be set to SQLITE_MAGIC_CLOSED if the database
+ ** cannot be opened for some reason. So this routine needs to run in
+ ** that case. But maybe there should be an extra magic value for the
+ ** "failed to open" state.
+ */
+ if( db->magic!=SQLITE_MAGIC_CLOSED && sqlite3SafetyOn(db) ){
+ /* printf("DID NOT CLOSE\n"); fflush(stdout); */
+ return SQLITE_ERROR;
+ }
+
+ sqlite3VtabRollback(db);
+
+ for(j=0; j<db->nDb; j++){
+ struct Db *pDb = &db->aDb[j];
+ if( pDb->pBt ){
+ sqlite3BtreeClose(pDb->pBt);
+ pDb->pBt = 0;
+ if( j!=1 ){
+ pDb->pSchema = 0;
+ }
+ }
+ }
+ sqlite3ResetInternalSchema(db, 0);
+ assert( db->nDb<=2 );
+ assert( db->aDb==db->aDbStatic );
+ for(i=sqliteHashFirst(&db->aFunc); i; i=sqliteHashNext(i)){
+ FuncDef *pFunc, *pNext;
+ for(pFunc = (FuncDef*)sqliteHashData(i); pFunc; pFunc=pNext){
+ pNext = pFunc->pNext;
+ sqliteFree(pFunc);
+ }
+ }
+
+ for(i=sqliteHashFirst(&db->aCollSeq); i; i=sqliteHashNext(i)){
+ CollSeq *pColl = (CollSeq *)sqliteHashData(i);
+ sqliteFree(pColl);
+ }
+ sqlite3HashClear(&db->aCollSeq);
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ for(i=sqliteHashFirst(&db->aModule); i; i=sqliteHashNext(i)){
+ Module *pMod = (Module *)sqliteHashData(i);
+ sqliteFree(pMod);
+ }
+ sqlite3HashClear(&db->aModule);
+#endif
+
+ sqlite3HashClear(&db->aFunc);
+ sqlite3Error(db, SQLITE_OK, 0); /* Deallocates any cached error strings. */
+ if( db->pErr ){
+ sqlite3ValueFree(db->pErr);
+ }
+ sqlite3CloseExtensions(db);
+
+ db->magic = SQLITE_MAGIC_ERROR;
+
+ /* The temp-database schema is allocated differently from the other schema
+ ** objects (using sqliteMalloc() directly, instead of sqlite3BtreeSchema()).
+ ** So it needs to be freed here. Todo: Why not roll the temp schema into
+ ** the same sqliteMalloc() as the one that allocates the database
+ ** structure?
+ */
+ sqliteFree(db->aDb[1].pSchema);
+ sqliteFree(db);
+ sqlite3ReleaseThreadData();
+ return SQLITE_OK;
+}
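+
+/* A usage sketch: because sqlite3_close() refuses to close a connection
+** that still has unfinalized statements, callers typically finalize their
+** prepared statements first (pStmt below is a hypothetical statement
+** handle):
+**
+**   sqlite3_finalize(pStmt);
+**   if( sqlite3_close(db)==SQLITE_BUSY ){
+**     fprintf(stderr, "close failed: %s\n", sqlite3_errmsg(db));
+**   }
+*/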
+
+/*
+** Rollback all database files.
+*/
+void sqlite3RollbackAll(sqlite3 *db){
+ int i;
+ int inTrans = 0;
+ for(i=0; i<db->nDb; i++){
+ if( db->aDb[i].pBt ){
+ if( sqlite3BtreeIsInTrans(db->aDb[i].pBt) ){
+ inTrans = 1;
+ }
+ sqlite3BtreeRollback(db->aDb[i].pBt);
+ db->aDb[i].inTrans = 0;
+ }
+ }
+ sqlite3VtabRollback(db);
+ if( db->flags&SQLITE_InternChanges ){
+ sqlite3ResetInternalSchema(db, 0);
+ }
+
+ /* If one has been configured, invoke the rollback-hook callback */
+ if( db->xRollbackCallback && (inTrans || !db->autoCommit) ){
+ db->xRollbackCallback(db->pRollbackArg);
+ }
+}
+
+/*
+** Return a static string that describes the kind of error specified in the
+** argument.
+*/
+const char *sqlite3ErrStr(int rc){
+ const char *z;
+ switch( rc & 0xff ){
+ case SQLITE_ROW:
+ case SQLITE_DONE:
+ case SQLITE_OK: z = "not an error"; break;
+ case SQLITE_ERROR: z = "SQL logic error or missing database"; break;
+ case SQLITE_PERM: z = "access permission denied"; break;
+ case SQLITE_ABORT: z = "callback requested query abort"; break;
+ case SQLITE_BUSY: z = "database is locked"; break;
+ case SQLITE_LOCKED: z = "database table is locked"; break;
+ case SQLITE_NOMEM: z = "out of memory"; break;
+ case SQLITE_READONLY: z = "attempt to write a readonly database"; break;
+ case SQLITE_INTERRUPT: z = "interrupted"; break;
+ case SQLITE_IOERR: z = "disk I/O error"; break;
+ case SQLITE_CORRUPT: z = "database disk image is malformed"; break;
+ case SQLITE_FULL: z = "database or disk is full"; break;
+ case SQLITE_CANTOPEN: z = "unable to open database file"; break;
+ case SQLITE_PROTOCOL: z = "database locking protocol failure"; break;
+ case SQLITE_EMPTY: z = "table contains no data"; break;
+ case SQLITE_SCHEMA: z = "database schema has changed"; break;
+ case SQLITE_CONSTRAINT: z = "constraint failed"; break;
+ case SQLITE_MISMATCH: z = "datatype mismatch"; break;
+ case SQLITE_MISUSE: z = "library routine called out of sequence";break;
+ case SQLITE_NOLFS: z = "kernel lacks large file support"; break;
+ case SQLITE_AUTH: z = "authorization denied"; break;
+ case SQLITE_FORMAT: z = "auxiliary database format error"; break;
+ case SQLITE_RANGE: z = "bind or column index out of range"; break;
+ case SQLITE_NOTADB: z = "file is encrypted or is not a database";break;
+ default: z = "unknown error"; break;
+ }
+ return z;
+}
+
+/*
+** This routine implements a busy callback that sleeps and tries
+** again until a timeout value is reached. The timeout value is
+** an integer number of milliseconds passed in as the first
+** argument.
+*/
+static int sqliteDefaultBusyCallback(
+ void *ptr, /* Database connection */
+ int count /* Number of times table has been busy */
+){
+#if OS_WIN || (defined(HAVE_USLEEP) && HAVE_USLEEP)
+ static const u8 delays[] =
+ { 1, 2, 5, 10, 15, 20, 25, 25, 25, 50, 50, 100 };
+ static const u8 totals[] =
+ { 0, 1, 3, 8, 18, 33, 53, 78, 103, 128, 178, 228 };
+# define NDELAY (sizeof(delays)/sizeof(delays[0]))
+ int timeout = ((sqlite3 *)ptr)->busyTimeout;
+ int delay, prior;
+
+ assert( count>=0 );
+ if( count < NDELAY ){
+ delay = delays[count];
+ prior = totals[count];
+ }else{
+ delay = delays[NDELAY-1];
+ prior = totals[NDELAY-1] + delay*(count-(NDELAY-1));
+ }
+ if( prior + delay > timeout ){
+ delay = timeout - prior;
+ if( delay<=0 ) return 0;
+ }
+ sqlite3OsSleep(delay);
+ return 1;
+#else
+ int timeout = ((sqlite3 *)ptr)->busyTimeout;
+ if( (count+1)*1000 > timeout ){
+ return 0;
+ }
+ sqlite3OsSleep(1000);
+ return 1;
+#endif
+}
+
+/*
+** Invoke the given busy handler.
+**
+** This routine is called when an operation failed with a lock.
+** If this routine returns non-zero, the lock is retried. If it
+** returns 0, the operation aborts with an SQLITE_BUSY error.
+*/
+int sqlite3InvokeBusyHandler(BusyHandler *p){
+ int rc;
+ if( p==0 || p->xFunc==0 || p->nBusy<0 ) return 0;
+ rc = p->xFunc(p->pArg, p->nBusy);
+ if( rc==0 ){
+ p->nBusy = -1;
+ }else{
+ p->nBusy++;
+ }
+ return rc;
+}
+
+/*
+** This routine sets the busy callback for an Sqlite database to the
+** given callback function with the given argument.
+*/
+int sqlite3_busy_handler(
+ sqlite3 *db,
+ int (*xBusy)(void*,int),
+ void *pArg
+){
+ if( sqlite3SafetyCheck(db) ){
+ return SQLITE_MISUSE;
+ }
+ db->busyHandler.xFunc = xBusy;
+ db->busyHandler.pArg = pArg;
+ db->busyHandler.nBusy = 0;
+ return SQLITE_OK;
+}
+
+#ifndef SQLITE_OMIT_PROGRESS_CALLBACK
+/*
+** This routine sets the progress callback for an Sqlite database to the
+** given callback function with the given argument. The progress callback will
+** be invoked every nOps opcodes.
+*/
+void sqlite3_progress_handler(
+ sqlite3 *db,
+ int nOps,
+ int (*xProgress)(void*),
+ void *pArg
+){
+ if( !sqlite3SafetyCheck(db) ){
+ if( nOps>0 ){
+ db->xProgress = xProgress;
+ db->nProgressOps = nOps;
+ db->pProgressArg = pArg;
+ }else{
+ db->xProgress = 0;
+ db->nProgressOps = 0;
+ db->pProgressArg = 0;
+ }
+ }
+}
+#endif
+
+
+/*
+** This routine installs a default busy handler that waits for the
+** specified number of milliseconds before returning 0.
+*/
+int sqlite3_busy_timeout(sqlite3 *db, int ms){
+ if( sqlite3SafetyCheck(db) ){
+ return SQLITE_MISUSE;
+ }
+ if( ms>0 ){
+ db->busyTimeout = ms;
+ sqlite3_busy_handler(db, sqliteDefaultBusyCallback, (void*)db);
+ }else{
+ sqlite3_busy_handler(db, 0, 0);
+ }
+ return SQLITE_OK;
+}
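+
+/* A usage sketch: either install the default handler with a timeout, or
+** supply a custom callback (the callback below is hypothetical; it retries
+** at most 10 times, and returning 0 makes the pending operation fail with
+** SQLITE_BUSY):
+**
+**   sqlite3_busy_timeout(db, 2000);
+**
+**   static int my_busy(void *pArg, int nPrior){
+**     return nPrior<10;
+**   }
+**   sqlite3_busy_handler(db, my_busy, 0);
+*/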
+
+/*
+** Cause any pending operation to stop at its earliest opportunity.
+*/
+void sqlite3_interrupt(sqlite3 *db){
+ if( db && (db->magic==SQLITE_MAGIC_OPEN || db->magic==SQLITE_MAGIC_BUSY) ){
+ db->u1.isInterrupted = 1;
+ }
+}
+
+/*
+** Memory allocation routines that use SQLite's internal memory
+** allocator. Depending on how SQLite is compiled, the
+** internal memory allocator might be just an alias for the
+** system default malloc/realloc/free. Or the built-in allocator
+** might do extra stuff like put sentinels around buffers to
+** check for overruns or look for memory leaks.
+**
+** Use sqlite3_free() to free memory returned by sqlite3_mprintf().
+*/
+void sqlite3_free(void *p){ if( p ) sqlite3OsFree(p); }
+void *sqlite3_malloc(int nByte){ return nByte>0 ? sqlite3OsMalloc(nByte) : 0; }
+void *sqlite3_realloc(void *pOld, int nByte){
+ if( pOld ){
+ if( nByte>0 ){
+ return sqlite3OsRealloc(pOld, nByte);
+ }else{
+ sqlite3OsFree(pOld);
+ return 0;
+ }
+ }else{
+ return sqlite3_malloc(nByte);
+ }
+}
+
+/*
+** This function is exactly the same as sqlite3_create_function(), except
+** that it is designed to be called by internal code. The difference is
+** that if a malloc() fails in sqlite3_create_function(), an error code
+** is returned and the mallocFailed flag cleared.
+*/
+int sqlite3CreateFunc(
+ sqlite3 *db,
+ const char *zFunctionName,
+ int nArg,
+ int enc,
+ void *pUserData,
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value **),
+ void (*xStep)(sqlite3_context*,int,sqlite3_value **),
+ void (*xFinal)(sqlite3_context*)
+){
+ FuncDef *p;
+ int nName;
+
+ if( sqlite3SafetyCheck(db) ){
+ return SQLITE_MISUSE;
+ }
+ if( zFunctionName==0 ||
+ (xFunc && (xFinal || xStep)) ||
+ (!xFunc && (xFinal && !xStep)) ||
+ (!xFunc && (!xFinal && xStep)) ||
+ (nArg<-1 || nArg>127) ||
+ (255<(nName = strlen(zFunctionName))) ){
+ sqlite3Error(db, SQLITE_ERROR, "bad parameters");
+ return SQLITE_ERROR;
+ }
+
+#ifndef SQLITE_OMIT_UTF16
+ /* If SQLITE_UTF16 is specified as the encoding type, transform this
+ ** to one of SQLITE_UTF16LE or SQLITE_UTF16BE using the
+ ** SQLITE_UTF16NATIVE macro. SQLITE_UTF16 is not used internally.
+ **
+ ** If SQLITE_ANY is specified, add three versions of the function
+ ** to the hash table.
+ */
+ if( enc==SQLITE_UTF16 ){
+ enc = SQLITE_UTF16NATIVE;
+ }else if( enc==SQLITE_ANY ){
+ int rc;
+ rc = sqlite3CreateFunc(db, zFunctionName, nArg, SQLITE_UTF8,
+ pUserData, xFunc, xStep, xFinal);
+ if( rc!=SQLITE_OK ) return rc;
+ rc = sqlite3CreateFunc(db, zFunctionName, nArg, SQLITE_UTF16LE,
+ pUserData, xFunc, xStep, xFinal);
+ if( rc!=SQLITE_OK ) return rc;
+ enc = SQLITE_UTF16BE;
+ }
+#else
+ enc = SQLITE_UTF8;
+#endif
+
+ /* Check if an existing function is being overridden or deleted. If so,
+ ** and there are active VMs, then return SQLITE_BUSY. If a function
+ ** is being overridden/deleted but there are no active VMs, allow the
+ ** operation to continue but invalidate all precompiled statements.
+ */
+ p = sqlite3FindFunction(db, zFunctionName, nName, nArg, enc, 0);
+ if( p && p->iPrefEnc==enc && p->nArg==nArg ){
+ if( db->activeVdbeCnt ){
+ sqlite3Error(db, SQLITE_BUSY,
+ "Unable to delete/modify user-function due to active statements");
+ assert( !sqlite3MallocFailed() );
+ return SQLITE_BUSY;
+ }else{
+ sqlite3ExpirePreparedStatements(db);
+ }
+ }
+
+ p = sqlite3FindFunction(db, zFunctionName, nName, nArg, enc, 1);
+ if( p ){
+ p->flags = 0;
+ p->xFunc = xFunc;
+ p->xStep = xStep;
+ p->xFinalize = xFinal;
+ p->pUserData = pUserData;
+ p->nArg = nArg;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Create new user functions.
+*/
+int sqlite3_create_function(
+ sqlite3 *db,
+ const char *zFunctionName,
+ int nArg,
+ int enc,
+ void *p,
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value **),
+ void (*xStep)(sqlite3_context*,int,sqlite3_value **),
+ void (*xFinal)(sqlite3_context*)
+){
+ int rc;
+ assert( !sqlite3MallocFailed() );
+ rc = sqlite3CreateFunc(db, zFunctionName, nArg, enc, p, xFunc, xStep, xFinal);
+
+ return sqlite3ApiExit(db, rc);
+}
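+
+/* A sketch of registering a simple scalar function (the function "plus1"
+** and its implementation are hypothetical):
+**
+**   static void plus1Func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+**     sqlite3_result_int64(ctx, sqlite3_value_int64(argv[0]) + 1);
+**   }
+**
+**   sqlite3_create_function(db, "plus1", 1, SQLITE_UTF8, 0, plus1Func, 0, 0);
+**
+** An aggregate instead supplies xStep and xFinal and passes 0 for xFunc.
+*/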
+
+#ifndef SQLITE_OMIT_UTF16
+int sqlite3_create_function16(
+ sqlite3 *db,
+ const void *zFunctionName,
+ int nArg,
+ int eTextRep,
+ void *p,
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
+ void (*xStep)(sqlite3_context*,int,sqlite3_value**),
+ void (*xFinal)(sqlite3_context*)
+){
+ int rc;
+ char *zFunc8;
+ assert( !sqlite3MallocFailed() );
+
+ zFunc8 = sqlite3utf16to8(zFunctionName, -1);
+ rc = sqlite3CreateFunc(db, zFunc8, nArg, eTextRep, p, xFunc, xStep, xFinal);
+ sqliteFree(zFunc8);
+
+ return sqlite3ApiExit(db, rc);
+}
+#endif
+
+
+/*
+** Declare that a function has been overloaded by a virtual table.
+**
+** If the function already exists as a regular global function, then
+** this routine is a no-op. If the function does not exist, then create
+** a new one that always throws a run-time error.
+**
+** When virtual tables intend to provide an overloaded function, they
+** should call this routine to make sure the global function exists.
+** A global function must exist in order for name resolution to work
+** properly.
+*/
+int sqlite3_overload_function(
+ sqlite3 *db,
+ const char *zName,
+ int nArg
+){
+ int nName = strlen(zName);
+ if( sqlite3FindFunction(db, zName, nName, nArg, SQLITE_UTF8, 0)==0 ){
+ sqlite3CreateFunc(db, zName, nArg, SQLITE_UTF8,
+ 0, sqlite3InvalidFunction, 0, 0);
+ }
+ return sqlite3ApiExit(db, SQLITE_OK);
+}
+
+#ifndef SQLITE_OMIT_TRACE
+/*
+** Register a trace function. The pArg from the previously registered trace
+** is returned.
+**
+** A NULL trace function means that no tracing is performed. A non-NULL
+** trace is a pointer to a function that is invoked at the start of each
+** SQL statement.
+*/
+void *sqlite3_trace(sqlite3 *db, void (*xTrace)(void*,const char*), void *pArg){
+ void *pOld = db->pTraceArg;
+ db->xTrace = xTrace;
+ db->pTraceArg = pArg;
+ return pOld;
+}
+/*
+** Register a profile function. The pArg from the previously registered
+** profile function is returned.
+**
+** A NULL profile function means that no profiling is performed. A non-NULL
+** profile is a pointer to a function that is invoked at the conclusion of
+** each SQL statement that is run.
+*/
+void *sqlite3_profile(
+ sqlite3 *db,
+ void (*xProfile)(void*,const char*,sqlite_uint64),
+ void *pArg
+){
+ void *pOld = db->pProfileArg;
+ db->xProfile = xProfile;
+ db->pProfileArg = pArg;
+ return pOld;
+}
+#endif /* SQLITE_OMIT_TRACE */
+
+/*** EXPERIMENTAL ***
+**
+** Register a function to be invoked when a transaction commits.
+** If the invoked function returns non-zero, then the commit becomes a
+** rollback.
+*/
+void *sqlite3_commit_hook(
+ sqlite3 *db, /* Attach the hook to this database */
+ int (*xCallback)(void*), /* Function to invoke on each commit */
+ void *pArg /* Argument to the function */
+){
+ void *pOld = db->pCommitArg;
+ db->xCommitCallback = xCallback;
+ db->pCommitArg = pArg;
+ return pOld;
+}
+
+/*
+** Register a callback to be invoked each time a row is updated,
+** inserted or deleted using this database connection.
+*/
+void *sqlite3_update_hook(
+ sqlite3 *db, /* Attach the hook to this database */
+ void (*xCallback)(void*,int,char const *,char const *,sqlite_int64),
+ void *pArg /* Argument to the function */
+){
+ void *pRet = db->pUpdateArg;
+ db->xUpdateCallback = xCallback;
+ db->pUpdateArg = pArg;
+ return pRet;
+}
+
+/*
+** Register a callback to be invoked each time a transaction is rolled
+** back by this database connection.
+*/
+void *sqlite3_rollback_hook(
+ sqlite3 *db, /* Attach the hook to this database */
+ void (*xCallback)(void*), /* Callback function */
+ void *pArg /* Argument to the function */
+){
+ void *pRet = db->pRollbackArg;
+ db->xRollbackCallback = xCallback;
+ db->pRollbackArg = pArg;
+ return pRet;
+}
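
A short sketch of how an application might install the commit and update hooks defined above; the callback names are hypothetical.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Returning non-zero here turns the COMMIT into a ROLLBACK. */
    static int commit_cb(void *pArg){
      (void)pArg;
      return 0;                        /* allow the commit */
    }

    /* Invoked once per row inserted, updated, or deleted. */
    static void update_cb(void *pArg, int op, const char *zDb,
                          const char *zTbl, sqlite_int64 rowid){
      (void)pArg;
      fprintf(stderr, "op=%d %s.%s rowid=%lld\n", op, zDb, zTbl, (long long)rowid);
    }

    void install_hooks(sqlite3 *db){
      sqlite3_commit_hook(db, commit_cb, 0);
      sqlite3_update_hook(db, update_cb, 0);
    }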
+
+/*
+** This routine is called to create a connection to a database BTree
+** driver. If zFilename is the name of a file, then that file is
+** opened and used. If zFilename is the magic name ":memory:" then
+** the database is stored in memory (and is thus forgotten as soon as
+** the connection is closed.) If zFilename is NULL then the database
+** is a "virtual" database for transient use only and is deleted as
+** soon as the connection is closed.
+**
+** A virtual database can be either a disk file (that is automatically
+** deleted when the file is closed) or it can be held entirely in memory,
+** depending on the values of the TEMP_STORE compile-time macro and the
+** db->temp_store variable, according to the following chart:
+**
+** TEMP_STORE db->temp_store Location of temporary database
+** ---------- -------------- ------------------------------
+** 0 any file
+** 1 1 file
+** 1 2 memory
+** 1 0 file
+** 2 1 file
+** 2 2 memory
+** 2 0 memory
+** 3 any memory
+*/
+int sqlite3BtreeFactory(
+ const sqlite3 *db, /* Main database when opening an auxiliary database; otherwise 0 */
+ const char *zFilename, /* Name of the file containing the BTree database */
+ int omitJournal, /* if TRUE then do not journal this file */
+ int nCache, /* How many pages in the page cache */
+ Btree **ppBtree /* Pointer to new Btree object written here */
+){
+ int btree_flags = 0;
+ int rc;
+
+ assert( ppBtree != 0);
+ if( omitJournal ){
+ btree_flags |= BTREE_OMIT_JOURNAL;
+ }
+ if( db->flags & SQLITE_NoReadlock ){
+ btree_flags |= BTREE_NO_READLOCK;
+ }
+ if( zFilename==0 ){
+#if TEMP_STORE==0
+ /* Do nothing */
+#endif
+#ifndef SQLITE_OMIT_MEMORYDB
+#if TEMP_STORE==1
+ if( db->temp_store==2 ) zFilename = ":memory:";
+#endif
+#if TEMP_STORE==2
+ if( db->temp_store!=1 ) zFilename = ":memory:";
+#endif
+#if TEMP_STORE==3
+ zFilename = ":memory:";
+#endif
+#endif /* SQLITE_OMIT_MEMORYDB */
+ }
+
+ rc = sqlite3BtreeOpen(zFilename, (sqlite3 *)db, ppBtree, btree_flags);
+ if( rc==SQLITE_OK ){
+ sqlite3BtreeSetBusyHandler(*ppBtree, (void*)&db->busyHandler);
+ sqlite3BtreeSetCacheSize(*ppBtree, nCache);
+ }
+ return rc;
+}
+
+/*
+** Return UTF-8 encoded English language explanation of the most recent
+** error.
+*/
+const char *sqlite3_errmsg(sqlite3 *db){
+ const char *z;
+ if( !db || sqlite3MallocFailed() ){
+ return sqlite3ErrStr(SQLITE_NOMEM);
+ }
+ if( sqlite3SafetyCheck(db) || db->errCode==SQLITE_MISUSE ){
+ return sqlite3ErrStr(SQLITE_MISUSE);
+ }
+ z = (char*)sqlite3_value_text(db->pErr);
+ if( z==0 ){
+ z = sqlite3ErrStr(db->errCode);
+ }
+ return z;
+}
+
+#ifndef SQLITE_OMIT_UTF16
+/*
+** Return UTF-16 encoded English language explanation of the most recent
+** error.
+*/
+const void *sqlite3_errmsg16(sqlite3 *db){
+ /* Because all the characters in the string are in the unicode
+ ** range 0x00-0xFF, if we pad the big-endian string with a
+ ** zero byte, we can obtain the little-endian string with
+ ** &big_endian[1].
+ */
+ static const char outOfMemBe[] = {
+ 0, 'o', 0, 'u', 0, 't', 0, ' ',
+ 0, 'o', 0, 'f', 0, ' ',
+ 0, 'm', 0, 'e', 0, 'm', 0, 'o', 0, 'r', 0, 'y', 0, 0, 0
+ };
+ static const char misuseBe [] = {
+ 0, 'l', 0, 'i', 0, 'b', 0, 'r', 0, 'a', 0, 'r', 0, 'y', 0, ' ',
+ 0, 'r', 0, 'o', 0, 'u', 0, 't', 0, 'i', 0, 'n', 0, 'e', 0, ' ',
+ 0, 'c', 0, 'a', 0, 'l', 0, 'l', 0, 'e', 0, 'd', 0, ' ',
+ 0, 'o', 0, 'u', 0, 't', 0, ' ',
+ 0, 'o', 0, 'f', 0, ' ',
+ 0, 's', 0, 'e', 0, 'q', 0, 'u', 0, 'e', 0, 'n', 0, 'c', 0, 'e', 0, 0, 0
+ };
+
+ const void *z;
+ if( sqlite3MallocFailed() ){
+ return (void *)(&outOfMemBe[SQLITE_UTF16NATIVE==SQLITE_UTF16LE?1:0]);
+ }
+ if( sqlite3SafetyCheck(db) || db->errCode==SQLITE_MISUSE ){
+ return (void *)(&misuseBe[SQLITE_UTF16NATIVE==SQLITE_UTF16LE?1:0]);
+ }
+ z = sqlite3_value_text16(db->pErr);
+ if( z==0 ){
+ sqlite3ValueSetStr(db->pErr, -1, sqlite3ErrStr(db->errCode),
+ SQLITE_UTF8, SQLITE_STATIC);
+ z = sqlite3_value_text16(db->pErr);
+ }
+ sqlite3ApiExit(0, 0);
+ return z;
+}
+#endif /* SQLITE_OMIT_UTF16 */
+
+/*
+** Return the most recent error code generated by an SQLite routine. If NULL is
+** passed to this function, we assume a malloc() failed during sqlite3_open().
+*/
+int sqlite3_errcode(sqlite3 *db){
+ if( !db || sqlite3MallocFailed() ){
+ return SQLITE_NOMEM;
+ }
+ if( sqlite3SafetyCheck(db) ){
+ return SQLITE_MISUSE;
+ }
+ return db->errCode & db->errMask;
+}
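
For reference, a minimal error-reporting sketch built on sqlite3_errcode() and sqlite3_errmsg(); run_sql() is a hypothetical helper.

    #include <stdio.h>
    #include <sqlite3.h>

    int run_sql(sqlite3 *db, const char *zSql){
      int rc = sqlite3_exec(db, zSql, 0, 0, 0);
      if( rc!=SQLITE_OK ){
        fprintf(stderr, "error %d: %s\n", sqlite3_errcode(db), sqlite3_errmsg(db));
      }
      return rc;
    }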
+
+/*
+** Create a new collating function for database "db". The name is zName
+** and the encoding is enc.
+*/
+static int createCollation(
+ sqlite3* db,
+ const char *zName,
+ int enc,
+ void* pCtx,
+ int(*xCompare)(void*,int,const void*,int,const void*)
+){
+ CollSeq *pColl;
+ int enc2;
+
+ if( sqlite3SafetyCheck(db) ){
+ return SQLITE_MISUSE;
+ }
+
+ /* If SQLITE_UTF16 is specified as the encoding type, transform this
+ ** to one of SQLITE_UTF16LE or SQLITE_UTF16BE using the
+ ** SQLITE_UTF16NATIVE macro. SQLITE_UTF16 is not used internally.
+ */
+ enc2 = enc & ~SQLITE_UTF16_ALIGNED;
+ if( enc2==SQLITE_UTF16 ){
+ enc2 = SQLITE_UTF16NATIVE;
+ }
+
+ if( (enc2&~3)!=0 ){
+ sqlite3Error(db, SQLITE_ERROR, "unknown encoding");
+ return SQLITE_ERROR;
+ }
+
+ /* Check if this call is removing or replacing an existing collation
+ ** sequence. If so, and there are active VMs, return busy. If there
+ ** are no active VMs, invalidate any pre-compiled statements.
+ */
+ pColl = sqlite3FindCollSeq(db, (u8)enc2, zName, strlen(zName), 0);
+ if( pColl && pColl->xCmp ){
+ if( db->activeVdbeCnt ){
+ sqlite3Error(db, SQLITE_BUSY,
+ "Unable to delete/modify collation sequence due to active statements");
+ return SQLITE_BUSY;
+ }
+ sqlite3ExpirePreparedStatements(db);
+ }
+
+ pColl = sqlite3FindCollSeq(db, (u8)enc2, zName, strlen(zName), 1);
+ if( pColl ){
+ pColl->xCmp = xCompare;
+ pColl->pUser = pCtx;
+ pColl->enc = enc2 | (enc & SQLITE_UTF16_ALIGNED);
+ }
+ sqlite3Error(db, SQLITE_OK, 0);
+ return SQLITE_OK;
+}
+
+
+/*
+** This routine does the work of opening a database on behalf of
+** sqlite3_open() and sqlite3_open16(). The database filename "zFilename"
+** is UTF-8 encoded.
+*/
+static int openDatabase(
+ const char *zFilename, /* Database filename UTF-8 encoded */
+ sqlite3 **ppDb /* OUT: Returned database handle */
+){
+ sqlite3 *db;
+ int rc;
+ CollSeq *pColl;
+
+ assert( !sqlite3MallocFailed() );
+
+ /* Allocate the sqlite data structure */
+ db = sqliteMalloc( sizeof(sqlite3) );
+ if( db==0 ) goto opendb_out;
+ db->errMask = 0xff;
+ db->priorNewRowid = 0;
+ db->magic = SQLITE_MAGIC_BUSY;
+ db->nDb = 2;
+ db->aDb = db->aDbStatic;
+ db->autoCommit = 1;
+ db->flags |= SQLITE_ShortColNames
+#if SQLITE_DEFAULT_FILE_FORMAT<4
+ | SQLITE_LegacyFileFmt
+#endif
+ ;
+ sqlite3HashInit(&db->aFunc, SQLITE_HASH_STRING, 0);
+ sqlite3HashInit(&db->aCollSeq, SQLITE_HASH_STRING, 0);
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ sqlite3HashInit(&db->aModule, SQLITE_HASH_STRING, 0);
+#endif
+
+ /* Add the default collation sequence BINARY. BINARY works for both UTF-8
+ ** and UTF-16, so add a version for each to avoid any unnecessary
+ ** conversions. The only error that can occur here is a malloc() failure.
+ */
+ if( createCollation(db, "BINARY", SQLITE_UTF8, 0, binCollFunc) ||
+ createCollation(db, "BINARY", SQLITE_UTF16BE, 0, binCollFunc) ||
+ createCollation(db, "BINARY", SQLITE_UTF16LE, 0, binCollFunc) ||
+ (db->pDfltColl = sqlite3FindCollSeq(db, SQLITE_UTF8, "BINARY", 6, 0))==0
+ ){
+ assert( sqlite3MallocFailed() );
+ db->magic = SQLITE_MAGIC_CLOSED;
+ goto opendb_out;
+ }
+
+ /* Also add a UTF-8 case-insensitive collation sequence. */
+ createCollation(db, "NOCASE", SQLITE_UTF8, 0, nocaseCollatingFunc);
+
+ /* Set flags on the built-in collating sequences */
+ db->pDfltColl->type = SQLITE_COLL_BINARY;
+ pColl = sqlite3FindCollSeq(db, SQLITE_UTF8, "NOCASE", 6, 0);
+ if( pColl ){
+ pColl->type = SQLITE_COLL_NOCASE;
+ }
+
+ /* Open the backend database driver */
+ rc = sqlite3BtreeFactory(db, zFilename, 0, MAX_PAGES, &db->aDb[0].pBt);
+ if( rc!=SQLITE_OK ){
+ sqlite3Error(db, rc, 0);
+ db->magic = SQLITE_MAGIC_CLOSED;
+ goto opendb_out;
+ }
+ db->aDb[0].pSchema = sqlite3SchemaGet(db->aDb[0].pBt);
+ db->aDb[1].pSchema = sqlite3SchemaGet(0);
+
+
+ /* The default safety_level for the main database is 'full'; for the temp
+ ** database it is 'NONE'. This matches the pager layer defaults.
+ */
+ db->aDb[0].zName = "main";
+ db->aDb[0].safety_level = 3;
+#ifndef SQLITE_OMIT_TEMPDB
+ db->aDb[1].zName = "temp";
+ db->aDb[1].safety_level = 1;
+#endif
+
+ /* Register all built-in functions, but do not attempt to read the
+ ** database schema yet. This is delayed until the first time the database
+ ** is accessed.
+ */
+ if( !sqlite3MallocFailed() ){
+ sqlite3Error(db, SQLITE_OK, 0);
+ sqlite3RegisterBuiltinFunctions(db);
+ }
+ db->magic = SQLITE_MAGIC_OPEN;
+
+ /* Load automatic extensions - extensions that have been registered
+ ** using the sqlite3_automatic_extension() API.
+ */
+ sqlite3AutoLoadExtensions(db);
+
+#ifdef SQLITE_ENABLE_FTS1
+ {
+ extern int sqlite3Fts1Init(sqlite3*);
+ sqlite3Fts1Init(db);
+ }
+#endif
+
+opendb_out:
+ if( SQLITE_NOMEM==(rc = sqlite3_errcode(db)) ){
+ sqlite3_close(db);
+ db = 0;
+ }
+ *ppDb = db;
+ return sqlite3ApiExit(0, rc);
+}
+
+/*
+** Open a new database handle.
+*/
+int sqlite3_open(
+ const char *zFilename,
+ sqlite3 **ppDb
+){
+ return openDatabase(zFilename, ppDb);
+}
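
A minimal open/close sketch for the handle API above; open_and_init() is a hypothetical helper, and note that *ppDb may still be set (or NULL after an out-of-memory failure) when the open does not succeed.

    #include <stdio.h>
    #include <sqlite3.h>

    int open_and_init(const char *zFile, sqlite3 **ppDb){
      int rc = sqlite3_open(zFile, ppDb);
      if( rc!=SQLITE_OK ){
        fprintf(stderr, "cannot open %s: %s\n", zFile, sqlite3_errmsg(*ppDb));
        if( *ppDb ) sqlite3_close(*ppDb);
        return rc;
      }
      return sqlite3_exec(*ppDb, "CREATE TABLE IF NOT EXISTS t(x)", 0, 0, 0);
    }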
+
+#ifndef SQLITE_OMIT_UTF16
+/*
+** Open a new database handle.
+*/
+int sqlite3_open16(
+ const void *zFilename,
+ sqlite3 **ppDb
+){
+ char const *zFilename8; /* zFilename encoded in UTF-8 instead of UTF-16 */
+ int rc = SQLITE_OK;
+ sqlite3_value *pVal;
+
+ assert( zFilename );
+ assert( ppDb );
+ *ppDb = 0;
+ pVal = sqlite3ValueNew();
+ sqlite3ValueSetStr(pVal, -1, zFilename, SQLITE_UTF16NATIVE, SQLITE_STATIC);
+ zFilename8 = sqlite3ValueText(pVal, SQLITE_UTF8);
+ if( zFilename8 ){
+ rc = openDatabase(zFilename8, ppDb);
+ if( rc==SQLITE_OK && *ppDb ){
+ rc = sqlite3_exec(*ppDb, "PRAGMA encoding = 'UTF-16'", 0, 0, 0);
+ if( rc!=SQLITE_OK ){
+ sqlite3_close(*ppDb);
+ *ppDb = 0;
+ }
+ }
+ }
+ sqlite3ValueFree(pVal);
+
+ return sqlite3ApiExit(0, rc);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+
+/*
+** The following routine destroys a virtual machine that is created by
+** the sqlite3_compile() routine. The integer returned is an SQLITE_
+** success/failure code that describes the result of executing the virtual
+** machine.
+**
+** This routine sets the error code and string returned by
+** sqlite3_errcode(), sqlite3_errmsg() and sqlite3_errmsg16().
+*/
+int sqlite3_finalize(sqlite3_stmt *pStmt){
+ int rc;
+ if( pStmt==0 ){
+ rc = SQLITE_OK;
+ }else{
+ rc = sqlite3VdbeFinalize((Vdbe*)pStmt);
+ }
+ return rc;
+}
+
+/*
+** Terminate the current execution of an SQL statement and reset it
+** back to its starting state so that it can be reused. A success code from
+** the prior execution is returned.
+**
+** This routine sets the error code and string returned by
+** sqlite3_errcode(), sqlite3_errmsg() and sqlite3_errmsg16().
+*/
+int sqlite3_reset(sqlite3_stmt *pStmt){
+ int rc;
+ if( pStmt==0 ){
+ rc = SQLITE_OK;
+ }else{
+ rc = sqlite3VdbeReset((Vdbe*)pStmt);
+ sqlite3VdbeMakeReady((Vdbe*)pStmt, -1, 0, 0, 0);
+ assert( (rc & (sqlite3_db_handle(pStmt)->errMask))==rc );
+ }
+ return rc;
+}
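
The statement lifecycle around sqlite3_reset() and sqlite3_finalize() in practice; this sketch assumes a table t(x) already exists and insert_two() is a hypothetical helper.

    #include <sqlite3.h>

    /* Insert two rows by re-using one prepared statement via sqlite3_reset(). */
    int insert_two(sqlite3 *db){
      sqlite3_stmt *pStmt = 0;
      int rc = sqlite3_prepare(db, "INSERT INTO t(x) VALUES(?)", -1, &pStmt, 0);
      if( rc!=SQLITE_OK ) return rc;

      sqlite3_bind_int(pStmt, 1, 1);
      sqlite3_step(pStmt);             /* first execution */
      sqlite3_reset(pStmt);            /* back to the starting state */

      sqlite3_bind_int(pStmt, 1, 2);
      sqlite3_step(pStmt);             /* second execution */

      return sqlite3_finalize(pStmt);  /* destroy the statement */
    }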
+
+/*
+** Register a new collation sequence with the database handle db.
+*/
+int sqlite3_create_collation(
+ sqlite3* db,
+ const char *zName,
+ int enc,
+ void* pCtx,
+ int(*xCompare)(void*,int,const void*,int,const void*)
+){
+ int rc;
+ assert( !sqlite3MallocFailed() );
+ rc = createCollation(db, zName, enc, pCtx, xCompare);
+ return sqlite3ApiExit(db, rc);
+}
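
A sketch of registering a custom collation through sqlite3_create_collation(); the ordering (length first, then bytewise) and the name LENFIRST are purely illustrative.

    #include <string.h>
    #include <sqlite3.h>

    static int len_first_cmp(void *pCtx, int n1, const void *p1,
                                         int n2, const void *p2){
      (void)pCtx;
      if( n1!=n2 ) return n1 - n2;     /* shorter strings sort first */
      return memcmp(p1, p2, n1);       /* same length: compare bytes */
    }

    int register_collation(sqlite3 *db){
      return sqlite3_create_collation(db, "LENFIRST", SQLITE_UTF8, 0, len_first_cmp);
    }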
+
+#ifndef SQLITE_OMIT_UTF16
+/*
+** Register a new collation sequence with the database handle db.
+*/
+int sqlite3_create_collation16(
+ sqlite3* db,
+ const char *zName,
+ int enc,
+ void* pCtx,
+ int(*xCompare)(void*,int,const void*,int,const void*)
+){
+ int rc = SQLITE_OK;
+ char *zName8;
+ assert( !sqlite3MallocFailed() );
+ zName8 = sqlite3utf16to8(zName, -1);
+ if( zName8 ){
+ rc = createCollation(db, zName8, enc, pCtx, xCompare);
+ sqliteFree(zName8);
+ }
+ return sqlite3ApiExit(db, rc);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+
+/*
+** Register a collation sequence factory callback with the database handle
+** db. Replace any previously installed collation sequence factory.
+*/
+int sqlite3_collation_needed(
+ sqlite3 *db,
+ void *pCollNeededArg,
+ void(*xCollNeeded)(void*,sqlite3*,int eTextRep,const char*)
+){
+ if( sqlite3SafetyCheck(db) ){
+ return SQLITE_MISUSE;
+ }
+ db->xCollNeeded = xCollNeeded;
+ db->xCollNeeded16 = 0;
+ db->pCollNeededArg = pCollNeededArg;
+ return SQLITE_OK;
+}
+
+#ifndef SQLITE_OMIT_UTF16
+/*
+** Register a collation sequence factory callback with the database handle
+** db. Replace any previously installed collation sequence factory.
+*/
+int sqlite3_collation_needed16(
+ sqlite3 *db,
+ void *pCollNeededArg,
+ void(*xCollNeeded16)(void*,sqlite3*,int eTextRep,const void*)
+){
+ if( sqlite3SafetyCheck(db) ){
+ return SQLITE_MISUSE;
+ }
+ db->xCollNeeded = 0;
+ db->xCollNeeded16 = xCollNeeded16;
+ db->pCollNeededArg = pCollNeededArg;
+ return SQLITE_OK;
+}
+#endif /* SQLITE_OMIT_UTF16 */
+
+#ifndef SQLITE_OMIT_GLOBALRECOVER
+/*
+** This function is now an anachronism. It used to be used to recover from a
+** malloc() failure, but SQLite now does this automatically.
+*/
+int sqlite3_global_recover(){
+ return SQLITE_OK;
+}
+#endif
+
+/*
+** Test to see whether or not the database connection is in autocommit
+** mode. Return TRUE if it is and FALSE if not. Autocommit mode is on
+** by default. Autocommit is disabled by a BEGIN statement and reenabled
+** by the next COMMIT or ROLLBACK.
+**
+******* THIS IS AN EXPERIMENTAL API AND IS SUBJECT TO CHANGE ******
+*/
+int sqlite3_get_autocommit(sqlite3 *db){
+ return db->autoCommit;
+}
+
+#ifdef SQLITE_DEBUG
+/*
+** The following routine is substituted for the constant SQLITE_CORRUPT in
+** debugging builds. This provides a way to set a breakpoint for when
+** corruption is first detected.
+*/
+int sqlite3Corrupt(void){
+ return SQLITE_CORRUPT;
+}
+#endif
+
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+/*
+** Enable or disable the shared pager and schema features for the
+** current thread.
+**
+** This routine should only be called when there are no open
+** database connections.
+*/
+int sqlite3_enable_shared_cache(int enable){
+ ThreadData *pTd = sqlite3ThreadData();
+ if( pTd ){
+ /* It is only legal to call sqlite3_enable_shared_cache() when there
+ ** are no currently open b-trees that were opened by the calling thread.
+ ** This condition is only easy to detect if the shared-cache were
+ ** previously enabled (and is being disabled).
+ */
+ if( pTd->pBtree && !enable ){
+ assert( pTd->useSharedData );
+ return SQLITE_MISUSE;
+ }
+
+ pTd->useSharedData = enable;
+ sqlite3ReleaseThreadData();
+ }
+ return sqlite3ApiExit(0, SQLITE_OK);
+}
+#endif
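
As the comment notes, the switch must happen before the calling thread opens any b-trees; a minimal sketch (open_with_shared_cache() is a hypothetical helper):

    #include <sqlite3.h>

    int open_with_shared_cache(const char *zFile, sqlite3 **ppDb){
      int rc = sqlite3_enable_shared_cache(1);
      if( rc!=SQLITE_OK ) return rc;
      return sqlite3_open(zFile, ppDb);
    }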
+
+/*
+** This is a convenience routine that makes sure that all thread-specific
+** data for this thread has been deallocated.
+*/
+void sqlite3_thread_cleanup(void){
+ ThreadData *pTd = sqlite3OsThreadSpecificData(0);
+ if( pTd ){
+ memset(pTd, 0, sizeof(*pTd));
+ sqlite3OsThreadSpecificData(-1);
+ }
+}
+
+/*
+** Return meta information about a specific column of a database table.
+** See comment in sqlite3.h (sqlite.h.in) for details.
+*/
+#ifdef SQLITE_ENABLE_COLUMN_METADATA
+int sqlite3_table_column_metadata(
+ sqlite3 *db, /* Connection handle */
+ const char *zDbName, /* Database name or NULL */
+ const char *zTableName, /* Table name */
+ const char *zColumnName, /* Column name */
+ char const **pzDataType, /* OUTPUT: Declared data type */
+ char const **pzCollSeq, /* OUTPUT: Collation sequence name */
+ int *pNotNull, /* OUTPUT: True if NOT NULL constraint exists */
+ int *pPrimaryKey, /* OUTPUT: True if column part of PK */
+ int *pAutoinc /* OUTPUT: True if column is auto-increment */
+){
+ int rc;
+ char *zErrMsg = 0;
+ Table *pTab = 0;
+ Column *pCol = 0;
+ int iCol;
+
+ char const *zDataType = 0;
+ char const *zCollSeq = 0;
+ int notnull = 0;
+ int primarykey = 0;
+ int autoinc = 0;
+
+ /* Ensure the database schema has been loaded */
+ if( sqlite3SafetyOn(db) ){
+ return SQLITE_MISUSE;
+ }
+ rc = sqlite3Init(db, &zErrMsg);
+ if( SQLITE_OK!=rc ){
+ goto error_out;
+ }
+
+ /* Locate the table in question */
+ pTab = sqlite3FindTable(db, zTableName, zDbName);
+ if( !pTab || pTab->pSelect ){
+ pTab = 0;
+ goto error_out;
+ }
+
+ /* Find the column for which info is requested */
+ if( sqlite3IsRowid(zColumnName) ){
+ iCol = pTab->iPKey;
+ if( iCol>=0 ){
+ pCol = &pTab->aCol[iCol];
+ }
+ }else{
+ for(iCol=0; iCol<pTab->nCol; iCol++){
+ pCol = &pTab->aCol[iCol];
+ if( 0==sqlite3StrICmp(pCol->zName, zColumnName) ){
+ break;
+ }
+ }
+ if( iCol==pTab->nCol ){
+ pTab = 0;
+ goto error_out;
+ }
+ }
+
+ /* The following block stores the meta information that will be returned
+ ** to the caller in local variables zDataType, zCollSeq, notnull, primarykey
+ ** and autoinc. At this point there are two possibilities:
+ **
+ ** 1. The specified column name was "rowid", "oid" or "_rowid_"
+ ** and there is no explicitly declared IPK column.
+ **
+ ** 2. The table is not a view and the column name identified an
+ ** explicitly declared column. Copy meta information from *pCol.
+ */
+ if( pCol ){
+ zDataType = pCol->zType;
+ zCollSeq = pCol->zColl;
+ notnull = (pCol->notNull?1:0);
+ primarykey = (pCol->isPrimKey?1:0);
+ autoinc = ((pTab->iPKey==iCol && pTab->autoInc)?1:0);
+ }else{
+ zDataType = "INTEGER";
+ primarykey = 1;
+ }
+ if( !zCollSeq ){
+ zCollSeq = "BINARY";
+ }
+
+error_out:
+ if( sqlite3SafetyOff(db) ){
+ rc = SQLITE_MISUSE;
+ }
+
+ /* Whether the function call succeeded or failed, set the output parameters
+ ** to whatever their local counterparts contain. If an error did occur,
+ ** this has the effect of zeroing all output parameters.
+ */
+ if( pzDataType ) *pzDataType = zDataType;
+ if( pzCollSeq ) *pzCollSeq = zCollSeq;
+ if( pNotNull ) *pNotNull = notnull;
+ if( pPrimaryKey ) *pPrimaryKey = primarykey;
+ if( pAutoinc ) *pAutoinc = autoinc;
+
+ if( SQLITE_OK==rc && !pTab ){
+ sqlite3SetString(&zErrMsg, "no such table column: ", zTableName, ".",
+ zColumnName, 0);
+ rc = SQLITE_ERROR;
+ }
+ sqlite3Error(db, rc, (zErrMsg?"%s":0), zErrMsg);
+ sqliteFree(zErrMsg);
+ return sqlite3ApiExit(db, rc);
+}
+#endif
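
A caller-side sketch for the metadata routine above; it is only available when the library is compiled with SQLITE_ENABLE_COLUMN_METADATA, and describe_column() is a hypothetical helper.

    #include <stdio.h>
    #include <sqlite3.h>

    int describe_column(sqlite3 *db, const char *zTbl, const char *zCol){
      const char *zType = 0, *zColl = 0;
      int notnull = 0, pk = 0, autoinc = 0;
      int rc = sqlite3_table_column_metadata(db, 0, zTbl, zCol, &zType, &zColl,
                                             &notnull, &pk, &autoinc);
      if( rc==SQLITE_OK ){
        printf("%s.%s: type=%s coll=%s notnull=%d pk=%d autoinc=%d\n",
               zTbl, zCol, zType, zColl, notnull, pk, autoinc);
      }
      return rc;
    }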
+
+/*
+** Set all the parameters in the compiled SQL statement to NULL.
+*/
+int sqlite3_clear_bindings(sqlite3_stmt *pStmt){
+ int i;
+ int rc = SQLITE_OK;
+ for(i=1; rc==SQLITE_OK && i<=sqlite3_bind_parameter_count(pStmt); i++){
+ rc = sqlite3_bind_null(pStmt, i);
+ }
+ return rc;
+}
+
+/*
+** Sleep for a little while. Return the amount of time slept.
+*/
+int sqlite3_sleep(int ms){
+ return sqlite3OsSleep(ms);
+}
+
+/*
+** Enable or disable the extended result codes.
+*/
+int sqlite3_extended_result_codes(sqlite3 *db, int onoff){
+ db->errMask = onoff ? 0xffffffff : 0xff;
+ return SQLITE_OK;
+}
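
Because the error mask becomes 0xffffffff when extended codes are on, callers can still reduce any result to its primary code with a bitwise AND; a small sketch:

    #include <sqlite3.h>

    int exec_with_extended_codes(sqlite3 *db, const char *zSql){
      int rc;
      sqlite3_extended_result_codes(db, 1);
      rc = sqlite3_exec(db, zSql, 0, 0, 0);
      return rc & 0xff;                /* primary result code */
    }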
Added: freeswitch/trunk/libs/sqlite/src/os.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/os.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,92 @@
+/*
+** 2005 November 29
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+******************************************************************************
+**
+** This file contains OS interface code that is common to all
+** architectures.
+*/
+#define _SQLITE_OS_C_ 1
+#include "sqliteInt.h"
+#include "os.h"
+
+/*
+** The following routines are convenience wrappers around methods
+** of the OsFile object. This is mostly just syntactic sugar. All
+** of this would be completely automatic if SQLite were coded using
+** C++ instead of plain old C.
+*/
+int sqlite3OsClose(OsFile **pId){
+ OsFile *id;
+ if( pId!=0 && (id = *pId)!=0 ){
+ return id->pMethod->xClose(pId);
+ }else{
+ return SQLITE_OK;
+ }
+}
+int sqlite3OsOpenDirectory(OsFile *id, const char *zName){
+ return id->pMethod->xOpenDirectory(id, zName);
+}
+int sqlite3OsRead(OsFile *id, void *pBuf, int amt){
+ return id->pMethod->xRead(id, pBuf, amt);
+}
+int sqlite3OsWrite(OsFile *id, const void *pBuf, int amt){
+ return id->pMethod->xWrite(id, pBuf, amt);
+}
+int sqlite3OsSeek(OsFile *id, i64 offset){
+ return id->pMethod->xSeek(id, offset);
+}
+int sqlite3OsTruncate(OsFile *id, i64 size){
+ return id->pMethod->xTruncate(id, size);
+}
+int sqlite3OsSync(OsFile *id, int fullsync){
+ return id->pMethod->xSync(id, fullsync);
+}
+void sqlite3OsSetFullSync(OsFile *id, int value){
+ id->pMethod->xSetFullSync(id, value);
+}
+#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG)
+/* This method is currently only used while interactively debugging the
+** pager. More specifically, it can only be used when sqlite3DebugPrintf() is
+** included in the build. */
+int sqlite3OsFileHandle(OsFile *id){
+ return id->pMethod->xFileHandle(id);
+}
+#endif
+int sqlite3OsFileSize(OsFile *id, i64 *pSize){
+ return id->pMethod->xFileSize(id, pSize);
+}
+int sqlite3OsLock(OsFile *id, int lockType){
+ return id->pMethod->xLock(id, lockType);
+}
+int sqlite3OsUnlock(OsFile *id, int lockType){
+ return id->pMethod->xUnlock(id, lockType);
+}
+int sqlite3OsLockState(OsFile *id){
+ return id->pMethod->xLockState(id);
+}
+int sqlite3OsCheckReservedLock(OsFile *id){
+ return id->pMethod->xCheckReservedLock(id);
+}
+
+#ifdef SQLITE_ENABLE_REDEF_IO
+/*
+** A function to return a pointer to the virtual function table.
+** This routine really does not accomplish very much since the
+** virtual function table is a global variable and anybody who
+** can call this function can just as easily access the variable
+** for themselves. Nevertheless, we include this routine for
+** backwards compatibility with an earlier redefinable I/O
+** interface design.
+*/
+struct sqlite3OsVtbl *sqlite3_os_switch(void){
+ return &sqlite3Os;
+}
+#endif
Added: freeswitch/trunk/libs/sqlite/src/os.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/os.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,480 @@
+/*
+** 2001 September 16
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+******************************************************************************
+**
+** This header file (together with its companion C source-code file
+** "os.c") attempts to abstract the underlying operating system so that
+** the SQLite library will work on both POSIX and Windows systems.
+*/
+#ifndef _SQLITE_OS_H_
+#define _SQLITE_OS_H_
+
+/*
+** Figure out if we are dealing with Unix, Windows, or some other
+** operating system.
+*/
+#if !defined(OS_UNIX) && !defined(OS_OTHER)
+# define OS_OTHER 0
+# ifndef OS_WIN
+# if defined(_WIN32) || defined(WIN32) || defined(__CYGWIN__) || defined(__MINGW32__) || defined(__BORLANDC__)
+# define OS_WIN 1
+# define OS_UNIX 0
+# define OS_OS2 0
+# elif defined(_EMX_) || defined(_OS2) || defined(OS2) || defined(_OS2_) || defined(__OS2__)
+# define OS_WIN 0
+# define OS_UNIX 0
+# define OS_OS2 1
+# else
+# define OS_WIN 0
+# define OS_UNIX 1
+# define OS_OS2 0
+# endif
+# else
+# define OS_UNIX 0
+# define OS_OS2 0
+# endif
+#else
+# ifndef OS_WIN
+# define OS_WIN 0
+# endif
+#endif
+
+
+/*
+** Define the maximum size of a temporary filename
+*/
+#if OS_WIN
+# include <windows.h>
+# define SQLITE_TEMPNAME_SIZE (MAX_PATH+50)
+#elif OS_OS2
+# define INCL_DOSDATETIME
+# define INCL_DOSFILEMGR
+# define INCL_DOSERRORS
+# define INCL_DOSMISC
+# define INCL_DOSPROCESS
+# include <os2.h>
+# define SQLITE_TEMPNAME_SIZE (CCHMAXPATHCOMP)
+#else
+# define SQLITE_TEMPNAME_SIZE 200
+#endif
+
+/* If the SET_FULLSYNC macro is not defined above, then make it
+** a no-op
+*/
+#ifndef SET_FULLSYNC
+# define SET_FULLSYNC(x,y)
+#endif
+
+/*
+** Temporary files are named starting with this prefix followed by 16 random
+** alphanumeric characters, and no file extension. They are stored in the
+** OS's standard temporary file directory, and are deleted prior to exit.
+** If sqlite is being embedded in another program, you may wish to change the
+** prefix to reflect your program's name, so that if your program exits
+** prematurely, old temporary files can be easily identified. This can be done
+** using -DTEMP_FILE_PREFIX=myprefix_ on the compiler command line.
+*/
+#ifndef TEMP_FILE_PREFIX
+# define TEMP_FILE_PREFIX "sqlite_"
+#endif
+
+/*
+** Define the interfaces for Unix, Windows, and OS/2.
+*/
+#if OS_UNIX
+#define sqlite3OsOpenReadWrite sqlite3UnixOpenReadWrite
+#define sqlite3OsOpenExclusive sqlite3UnixOpenExclusive
+#define sqlite3OsOpenReadOnly sqlite3UnixOpenReadOnly
+#define sqlite3OsDelete sqlite3UnixDelete
+#define sqlite3OsFileExists sqlite3UnixFileExists
+#define sqlite3OsFullPathname sqlite3UnixFullPathname
+#define sqlite3OsIsDirWritable sqlite3UnixIsDirWritable
+#define sqlite3OsSyncDirectory sqlite3UnixSyncDirectory
+#define sqlite3OsTempFileName sqlite3UnixTempFileName
+#define sqlite3OsRandomSeed sqlite3UnixRandomSeed
+#define sqlite3OsSleep sqlite3UnixSleep
+#define sqlite3OsCurrentTime sqlite3UnixCurrentTime
+#define sqlite3OsEnterMutex sqlite3UnixEnterMutex
+#define sqlite3OsLeaveMutex sqlite3UnixLeaveMutex
+#define sqlite3OsInMutex sqlite3UnixInMutex
+#define sqlite3OsThreadSpecificData sqlite3UnixThreadSpecificData
+#define sqlite3OsMalloc sqlite3GenericMalloc
+#define sqlite3OsRealloc sqlite3GenericRealloc
+#define sqlite3OsFree sqlite3GenericFree
+#define sqlite3OsAllocationSize sqlite3GenericAllocationSize
+#endif
+#if OS_WIN
+#define sqlite3OsOpenReadWrite sqlite3WinOpenReadWrite
+#define sqlite3OsOpenExclusive sqlite3WinOpenExclusive
+#define sqlite3OsOpenReadOnly sqlite3WinOpenReadOnly
+#define sqlite3OsDelete sqlite3WinDelete
+#define sqlite3OsFileExists sqlite3WinFileExists
+#define sqlite3OsFullPathname sqlite3WinFullPathname
+#define sqlite3OsIsDirWritable sqlite3WinIsDirWritable
+#define sqlite3OsSyncDirectory sqlite3WinSyncDirectory
+#define sqlite3OsTempFileName sqlite3WinTempFileName
+#define sqlite3OsRandomSeed sqlite3WinRandomSeed
+#define sqlite3OsSleep sqlite3WinSleep
+#define sqlite3OsCurrentTime sqlite3WinCurrentTime
+#define sqlite3OsEnterMutex sqlite3WinEnterMutex
+#define sqlite3OsLeaveMutex sqlite3WinLeaveMutex
+#define sqlite3OsInMutex sqlite3WinInMutex
+#define sqlite3OsThreadSpecificData sqlite3WinThreadSpecificData
+#define sqlite3OsMalloc sqlite3GenericMalloc
+#define sqlite3OsRealloc sqlite3GenericRealloc
+#define sqlite3OsFree sqlite3GenericFree
+#define sqlite3OsAllocationSize sqlite3GenericAllocationSize
+#endif
+#if OS_OS2
+#define sqlite3OsOpenReadWrite sqlite3Os2OpenReadWrite
+#define sqlite3OsOpenExclusive sqlite3Os2OpenExclusive
+#define sqlite3OsOpenReadOnly sqlite3Os2OpenReadOnly
+#define sqlite3OsDelete sqlite3Os2Delete
+#define sqlite3OsFileExists sqlite3Os2FileExists
+#define sqlite3OsFullPathname sqlite3Os2FullPathname
+#define sqlite3OsIsDirWritable sqlite3Os2IsDirWritable
+#define sqlite3OsSyncDirectory sqlite3Os2SyncDirectory
+#define sqlite3OsTempFileName sqlite3Os2TempFileName
+#define sqlite3OsRandomSeed sqlite3Os2RandomSeed
+#define sqlite3OsSleep sqlite3Os2Sleep
+#define sqlite3OsCurrentTime sqlite3Os2CurrentTime
+#define sqlite3OsEnterMutex sqlite3Os2EnterMutex
+#define sqlite3OsLeaveMutex sqlite3Os2LeaveMutex
+#define sqlite3OsInMutex sqlite3Os2InMutex
+#define sqlite3OsThreadSpecificData sqlite3Os2ThreadSpecificData
+#define sqlite3OsMalloc sqlite3GenericMalloc
+#define sqlite3OsRealloc sqlite3GenericRealloc
+#define sqlite3OsFree sqlite3GenericFree
+#define sqlite3OsAllocationSize sqlite3GenericAllocationSize
+#endif
+
+
+
+
+/*
+** If using an alternative OS interface, then we must have an "os_other.h"
+** header file available for that interface. Presumably the "os_other.h"
+** header file contains #defines similar to those above.
+*/
+#if OS_OTHER
+# include "os_other.h"
+#endif
+
+
+
+/*
+** Forward declarations
+*/
+typedef struct OsFile OsFile;
+typedef struct IoMethod IoMethod;
+
+/*
+** An instance of the following structure contains pointers to all
+** methods on an OsFile object.
+*/
+struct IoMethod {
+ int (*xClose)(OsFile**);
+ int (*xOpenDirectory)(OsFile*, const char*);
+ int (*xRead)(OsFile*, void*, int amt);
+ int (*xWrite)(OsFile*, const void*, int amt);
+ int (*xSeek)(OsFile*, i64 offset);
+ int (*xTruncate)(OsFile*, i64 size);
+ int (*xSync)(OsFile*, int);
+ void (*xSetFullSync)(OsFile *id, int setting);
+ int (*xFileHandle)(OsFile *id);
+ int (*xFileSize)(OsFile*, i64 *pSize);
+ int (*xLock)(OsFile*, int);
+ int (*xUnlock)(OsFile*, int);
+ int (*xLockState)(OsFile *id);
+ int (*xCheckReservedLock)(OsFile *id);
+};
+
+/*
+** The OsFile object describes an open disk file in an OS-dependent way.
+** The version of OsFile defined here is a generic version. Each OS
+** implementation defines its own subclass of this structure that contains
+** additional information needed to handle file I/O. But the pMethod
+** entry (pointing to the virtual function table) always occurs first
+** so that we can always find the appropriate methods.
+*/
+struct OsFile {
+ IoMethod const *pMethod;
+};
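
A hypothetical port would subclass OsFile exactly as os_unix.c, os_win.c and os_os2.c do, keeping pMethod as the first member so that a generic OsFile* always finds the virtual function table; a sketch:

    /* Hypothetical OS-specific subclass; the handle field is illustrative. */
    typedef struct myFile myFile;
    struct myFile {
      IoMethod const *pMethod;   /* Always first, like unixFile/winFile/os2File */
      int fd;                    /* Whatever file handle this port needs */
    };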
+
+/*
+** The following values may be passed as the second argument to
+** sqlite3OsLock(). The various locks exhibit the following semantics:
+**
+** SHARED: Any number of processes may hold a SHARED lock simultaneously.
+** RESERVED: A single process may hold a RESERVED lock on a file at
+** any time. Other processes may hold and obtain new SHARED locks.
+** PENDING: A single process may hold a PENDING lock on a file at
+** any one time. Existing SHARED locks may persist, but no new
+** SHARED locks may be obtained by other processes.
+** EXCLUSIVE: An EXCLUSIVE lock precludes all other locks.
+**
+** PENDING_LOCK may not be passed directly to sqlite3OsLock(). Instead, a
+** process that requests an EXCLUSIVE lock may actually obtain a PENDING
+** lock. This can be upgraded to an EXCLUSIVE lock by a subsequent call to
+** sqlite3OsLock().
+*/
+#define NO_LOCK 0
+#define SHARED_LOCK 1
+#define RESERVED_LOCK 2
+#define PENDING_LOCK 3
+#define EXCLUSIVE_LOCK 4
+
+/*
+** File Locking Notes: (Mostly about windows but also some info for Unix)
+**
+** We cannot use LockFileEx() or UnlockFileEx() on Win95/98/ME because
+** those functions are not available. So we use only LockFile() and
+** UnlockFile().
+**
+** LockFile() prevents not just writing but also reading by other processes.
+** A SHARED_LOCK is obtained by locking a single randomly-chosen
+** byte out of a specific range of bytes. The lock byte is obtained at
+** random so two separate readers can probably access the file at the
+** same time, unless they are unlucky and choose the same lock byte.
+** An EXCLUSIVE_LOCK is obtained by locking all bytes in the range.
+** There can only be one writer. A RESERVED_LOCK is obtained by locking
+** a single byte of the file that is designated as the reserved lock byte.
+** A PENDING_LOCK is obtained by locking a designated byte different from
+** the RESERVED_LOCK byte.
+**
+** On WinNT/2K/XP systems, LockFileEx() and UnlockFileEx() are available,
+** which means we can use reader/writer locks. When reader/writer locks
+** are used, the lock is placed on the same range of bytes that is used
+** for probabilistic locking in Win95/98/ME. Hence, the locking scheme
+** will support two or more Win95 readers or two or more WinNT readers.
+** But a single Win95 reader will lock out all WinNT readers and a single
+** WinNT reader will lock out all other Win95 readers.
+**
+** The following #defines specify the range of bytes used for locking.
+** SHARED_SIZE is the number of bytes available in the pool from which
+** a random byte is selected for a shared lock. The pool of bytes for
+** shared locks begins at SHARED_FIRST.
+**
+** These #defines are available in sqlite_aux.h so that adaptors for
+** connecting SQLite to other operating systems can use the same byte
+** ranges for locking. In particular, the same locking strategy and
+** byte ranges are used for Unix. This leaves open the possiblity of having
+** clients on win95, winNT, and unix all talking to the same shared file
+** and all locking correctly. To do so would require that samba (or whatever
+** tool is being used for file sharing) implements locks correctly between
+** windows and unix. I'm guessing that isn't likely to happen, but by
+** using the same locking range we are at least open to the possibility.
+**
+** Locking in Windows is mandatory. For this reason, we cannot store
+** actual data in the bytes used for locking. The pager therefore never
+** allocates the pages involved in locking. SHARED_SIZE is selected so
+** that all locks will fit on a single page even at the minimum page size.
+** PENDING_BYTE defines the beginning of the locks. By default PENDING_BYTE
+** is set high so that we don't have to allocate an unused page except
+** for very large databases. But one should test the page skipping logic
+** by setting PENDING_BYTE low and running the entire regression suite.
+**
+** Changing the value of PENDING_BYTE results in a subtly incompatible
+** file format. Depending on how it is changed, you might not notice
+** the incompatibility right away, even running a full regression test.
+** The default location of PENDING_BYTE is the first byte past the
+** 1GB boundary.
+**
+*/
+#ifndef SQLITE_TEST
+#define PENDING_BYTE 0x40000000 /* First byte past the 1GB boundary */
+#else
+extern unsigned int sqlite3_pending_byte;
+#define PENDING_BYTE sqlite3_pending_byte
+#endif
+
+#define RESERVED_BYTE (PENDING_BYTE+1)
+#define SHARED_FIRST (PENDING_BYTE+2)
+#define SHARED_SIZE 510
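
With the default PENDING_BYTE of 0x40000000, these definitions place RESERVED_BYTE at 0x40000001, SHARED_FIRST at 0x40000002, and the 510-byte shared-lock pool at bytes 0x40000002 through 0x400001FF, so the entire locking range spans exactly 512 bytes and fits on a single page even at the minimum page size.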
+
+/*
+** Prototypes for operating system interface routines.
+*/
+int sqlite3OsClose(OsFile**);
+int sqlite3OsOpenDirectory(OsFile*, const char*);
+int sqlite3OsRead(OsFile*, void*, int amt);
+int sqlite3OsWrite(OsFile*, const void*, int amt);
+int sqlite3OsSeek(OsFile*, i64 offset);
+int sqlite3OsTruncate(OsFile*, i64 size);
+int sqlite3OsSync(OsFile*, int);
+void sqlite3OsSetFullSync(OsFile *id, int setting);
+int sqlite3OsFileHandle(OsFile *id);
+int sqlite3OsFileSize(OsFile*, i64 *pSize);
+int sqlite3OsLock(OsFile*, int);
+int sqlite3OsUnlock(OsFile*, int);
+int sqlite3OsLockState(OsFile *id);
+int sqlite3OsCheckReservedLock(OsFile *id);
+int sqlite3OsOpenReadWrite(const char*, OsFile**, int*);
+int sqlite3OsOpenExclusive(const char*, OsFile**, int);
+int sqlite3OsOpenReadOnly(const char*, OsFile**);
+int sqlite3OsDelete(const char*);
+int sqlite3OsFileExists(const char*);
+char *sqlite3OsFullPathname(const char*);
+int sqlite3OsIsDirWritable(char*);
+int sqlite3OsSyncDirectory(const char*);
+int sqlite3OsTempFileName(char*);
+int sqlite3OsRandomSeed(char*);
+int sqlite3OsSleep(int ms);
+int sqlite3OsCurrentTime(double*);
+void sqlite3OsEnterMutex(void);
+void sqlite3OsLeaveMutex(void);
+int sqlite3OsInMutex(int);
+ThreadData *sqlite3OsThreadSpecificData(int);
+void *sqlite3OsMalloc(int);
+void *sqlite3OsRealloc(void *, int);
+void sqlite3OsFree(void *);
+int sqlite3OsAllocationSize(void *);
+
+/*
+** If the SQLITE_ENABLE_REDEF_IO macro is defined, then the OS-layer
+** interface routines are not called directly but are invoked using
+** pointers to functions. This allows the implementation of various
+** OS-layer interface routines to be modified at run-time. There are
+** obscure but legitimate reasons for wanting to do this. But for
+** most users, a direct call to the underlying interface is preferable
+** so the redefinable I/O interface is turned off by default.
+*/
+#ifdef SQLITE_ENABLE_REDEF_IO
+
+/*
+** When redefinable I/O is enabled, a single global instance of the
+** following structure holds pointers to the routines that SQLite
+** uses to talk with the underlying operating system. Modify this
+** structure (before using any SQLite API!) to accommodate peculiar
+** operating system interfaces or behaviors.
+*/
+struct sqlite3OsVtbl {
+ int (*xOpenReadWrite)(const char*, OsFile**, int*);
+ int (*xOpenExclusive)(const char*, OsFile**, int);
+ int (*xOpenReadOnly)(const char*, OsFile**);
+
+ int (*xDelete)(const char*);
+ int (*xFileExists)(const char*);
+ char *(*xFullPathname)(const char*);
+ int (*xIsDirWritable)(char*);
+ int (*xSyncDirectory)(const char*);
+ int (*xTempFileName)(char*);
+
+ int (*xRandomSeed)(char*);
+ int (*xSleep)(int ms);
+ int (*xCurrentTime)(double*);
+
+ void (*xEnterMutex)(void);
+ void (*xLeaveMutex)(void);
+ int (*xInMutex)(int);
+ ThreadData *(*xThreadSpecificData)(int);
+
+ void *(*xMalloc)(int);
+ void *(*xRealloc)(void *, int);
+ void (*xFree)(void *);
+ int (*xAllocationSize)(void *);
+};
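
Under SQLITE_ENABLE_REDEF_IO, an embedder may patch individual entries of this global vtable before any SQLite API is used; a sketch (my_sleep() and install_custom_sleep() are hypothetical):

    extern struct sqlite3OsVtbl sqlite3Os;

    /* Replace the sleep primitive, e.g. to delegate to an RTOS delay call. */
    static int my_sleep(int ms){
      return ms;                       /* pretend we slept for the full interval */
    }

    void install_custom_sleep(void){
      sqlite3Os.xSleep = my_sleep;
    }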
+
+/* Macro used to comment out routines that do not exist when there is
+** no disk I/O
+*/
+#ifdef SQLITE_OMIT_DISKIO
+# define IF_DISKIO(X) 0
+#else
+# define IF_DISKIO(X) X
+#endif
+
+#ifdef _SQLITE_OS_C_
+ /*
+ ** The os.c file implements the global virtual function table.
+ */
+ struct sqlite3OsVtbl sqlite3Os = {
+ IF_DISKIO( sqlite3OsOpenReadWrite ),
+ IF_DISKIO( sqlite3OsOpenExclusive ),
+ IF_DISKIO( sqlite3OsOpenReadOnly ),
+ IF_DISKIO( sqlite3OsDelete ),
+ IF_DISKIO( sqlite3OsFileExists ),
+ IF_DISKIO( sqlite3OsFullPathname ),
+ IF_DISKIO( sqlite3OsIsDirWritable ),
+ IF_DISKIO( sqlite3OsSyncDirectory ),
+ IF_DISKIO( sqlite3OsTempFileName ),
+ sqlite3OsRandomSeed,
+ sqlite3OsSleep,
+ sqlite3OsCurrentTime,
+ sqlite3OsEnterMutex,
+ sqlite3OsLeaveMutex,
+ sqlite3OsInMutex,
+ sqlite3OsThreadSpecificData,
+ sqlite3OsMalloc,
+ sqlite3OsRealloc,
+ sqlite3OsFree,
+ sqlite3OsAllocationSize
+ };
+#else
+ /*
+ ** Files other than os.c just reference the global virtual function table.
+ */
+ extern struct sqlite3OsVtbl sqlite3Os;
+#endif /* _SQLITE_OS_C_ */
+
+
+/* This additional API routine is available with redefinable I/O */
+struct sqlite3OsVtbl *sqlite3_os_switch(void);
+
+
+/*
+** Redefine the OS interface to go through the virtual function table
+** rather than calling routines directly.
+*/
+#undef sqlite3OsOpenReadWrite
+#undef sqlite3OsOpenExclusive
+#undef sqlite3OsOpenReadOnly
+#undef sqlite3OsDelete
+#undef sqlite3OsFileExists
+#undef sqlite3OsFullPathname
+#undef sqlite3OsIsDirWritable
+#undef sqlite3OsSyncDirectory
+#undef sqlite3OsTempFileName
+#undef sqlite3OsRandomSeed
+#undef sqlite3OsSleep
+#undef sqlite3OsCurrentTime
+#undef sqlite3OsEnterMutex
+#undef sqlite3OsLeaveMutex
+#undef sqlite3OsInMutex
+#undef sqlite3OsThreadSpecificData
+#undef sqlite3OsMalloc
+#undef sqlite3OsRealloc
+#undef sqlite3OsFree
+#undef sqlite3OsAllocationSize
+#define sqlite3OsOpenReadWrite sqlite3Os.xOpenReadWrite
+#define sqlite3OsOpenExclusive sqlite3Os.xOpenExclusive
+#define sqlite3OsOpenReadOnly sqlite3Os.xOpenReadOnly
+#define sqlite3OsDelete sqlite3Os.xDelete
+#define sqlite3OsFileExists sqlite3Os.xFileExists
+#define sqlite3OsFullPathname sqlite3Os.xFullPathname
+#define sqlite3OsIsDirWritable sqlite3Os.xIsDirWritable
+#define sqlite3OsSyncDirectory sqlite3Os.xSyncDirectory
+#define sqlite3OsTempFileName sqlite3Os.xTempFileName
+#define sqlite3OsRandomSeed sqlite3Os.xRandomSeed
+#define sqlite3OsSleep sqlite3Os.xSleep
+#define sqlite3OsCurrentTime sqlite3Os.xCurrentTime
+#define sqlite3OsEnterMutex sqlite3Os.xEnterMutex
+#define sqlite3OsLeaveMutex sqlite3Os.xLeaveMutex
+#define sqlite3OsInMutex sqlite3Os.xInMutex
+#define sqlite3OsThreadSpecificData sqlite3Os.xThreadSpecificData
+#define sqlite3OsMalloc sqlite3Os.xMalloc
+#define sqlite3OsRealloc sqlite3Os.xRealloc
+#define sqlite3OsFree sqlite3Os.xFree
+#define sqlite3OsAllocationSize sqlite3Os.xAllocationSize
+
+#endif /* SQLITE_ENABLE_REDEF_IO */
+
+#endif /* _SQLITE_OS_H_ */
Added: freeswitch/trunk/libs/sqlite/src/os_common.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/os_common.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,188 @@
+/*
+** 2004 May 22
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+******************************************************************************
+**
+** This file contains macros and a little bit of code that is common to
+** all of the platform-specific files (os_*.c) and is #included into those
+** files.
+**
+** This file should be #included by the os_*.c files only. It is not a
+** general purpose header file.
+*/
+
+/*
+** At least two bugs have slipped in because we changed the MEMORY_DEBUG
+** macro to SQLITE_DEBUG and some older makefiles have not yet made the
+** switch. The following code should catch this problem at compile-time.
+*/
+#ifdef MEMORY_DEBUG
+# error "The MEMORY_DEBUG macro is obsolete. Use SQLITE_DEBUG instead."
+#endif
+
+
+/*
+ * When testing, this global variable stores the location of the
+ * pending-byte in the database file.
+ */
+#ifdef SQLITE_TEST
+unsigned int sqlite3_pending_byte = 0x40000000;
+#endif
+
+int sqlite3_os_trace = 0;
+#ifdef SQLITE_DEBUG
+static int last_page = 0;
+#define SEEK(X) last_page=(X)
+#define TRACE1(X) if( sqlite3_os_trace ) sqlite3DebugPrintf(X)
+#define TRACE2(X,Y) if( sqlite3_os_trace ) sqlite3DebugPrintf(X,Y)
+#define TRACE3(X,Y,Z) if( sqlite3_os_trace ) sqlite3DebugPrintf(X,Y,Z)
+#define TRACE4(X,Y,Z,A) if( sqlite3_os_trace ) sqlite3DebugPrintf(X,Y,Z,A)
+#define TRACE5(X,Y,Z,A,B) if( sqlite3_os_trace ) sqlite3DebugPrintf(X,Y,Z,A,B)
+#define TRACE6(X,Y,Z,A,B,C) if(sqlite3_os_trace) sqlite3DebugPrintf(X,Y,Z,A,B,C)
+#define TRACE7(X,Y,Z,A,B,C,D) \
+ if(sqlite3_os_trace) sqlite3DebugPrintf(X,Y,Z,A,B,C,D)
+#else
+#define SEEK(X)
+#define TRACE1(X)
+#define TRACE2(X,Y)
+#define TRACE3(X,Y,Z)
+#define TRACE4(X,Y,Z,A)
+#define TRACE5(X,Y,Z,A,B)
+#define TRACE6(X,Y,Z,A,B,C)
+#define TRACE7(X,Y,Z,A,B,C,D)
+#endif
+
+/*
+** Macros for performance tracing. Normally turned off. Only works
+** on i486 hardware.
+*/
+#ifdef SQLITE_PERFORMANCE_TRACE
+__inline__ unsigned long long int hwtime(void){
+ unsigned long long int x;
+ __asm__("rdtsc\n\t"
+ "mov %%edx, %%ecx\n\t"
+ :"=A" (x));
+ return x;
+}
+static unsigned long long int g_start;
+static unsigned int elapse;
+#define TIMER_START g_start=hwtime()
+#define TIMER_END elapse=hwtime()-g_start
+#define TIMER_ELAPSED elapse
+#else
+#define TIMER_START
+#define TIMER_END
+#define TIMER_ELAPSED 0
+#endif
+
+/*
+** If we compile with the SQLITE_TEST macro set, then the following block
+** of code will give us the ability to simulate a disk I/O error. This
+** is used for testing the I/O recovery logic.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_io_error_hit = 0;
+int sqlite3_io_error_pending = 0;
+int sqlite3_diskfull_pending = 0;
+int sqlite3_diskfull = 0;
+#define SimulateIOError(CODE) \
+ if( sqlite3_io_error_pending ) \
+ if( sqlite3_io_error_pending-- == 1 ){ local_ioerr(); CODE; }
+static void local_ioerr(){
+ sqlite3_io_error_hit = 1; /* Really just a place to set a breakpoint */
+}
+#define SimulateDiskfullError(CODE) \
+ if( sqlite3_diskfull_pending ){ \
+ if( sqlite3_diskfull_pending == 1 ){ \
+ local_ioerr(); \
+ sqlite3_diskfull = 1; \
+ CODE; \
+ }else{ \
+ sqlite3_diskfull_pending--; \
+ } \
+ }
+#else
+#define SimulateIOError(A)
+#define SimulateDiskfullError(A)
+#endif
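
In a test build (SQLITE_TEST), the harness arms these simulations by setting the counters directly; a sketch with a hypothetical helper:

    extern int sqlite3_io_error_pending;

    /* The N-th subsequent OS-level I/O call will report a simulated failure. */
    void arm_ioerr(int nCallsFromNow){
      sqlite3_io_error_pending = nCallsFromNow;
    }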
+
+/*
+** When testing, keep a count of the number of open files.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_open_file_count = 0;
+#define OpenCounter(X) sqlite3_open_file_count+=(X)
+#else
+#define OpenCounter(X)
+#endif
+
+/*
+** sqlite3GenericMalloc
+** sqlite3GenericRealloc
+** sqlite3GenericFree
+** sqlite3GenericAllocationSize
+**
+** Implementation of the os level dynamic memory allocation interface in terms
+** of the standard malloc(), realloc() and free() found in many operating
+** systems. No rocket science here.
+**
+** There are two versions of these four functions here. The version
+** implemented here is only used if memory-management or memory-debugging is
+** enabled. This version allocates an extra 8-bytes at the beginning of each
+** block and stores the size of the allocation there.
+**
+** If neither memory-management nor debugging is enabled, the second
+** set of implementations is used instead.
+*/
+#if defined(SQLITE_ENABLE_MEMORY_MANAGEMENT) || defined (SQLITE_MEMDEBUG)
+void *sqlite3GenericMalloc(int n){
+ char *p = (char *)malloc(n+8);
+ assert(n>0);
+ assert(sizeof(int)<=8);
+ if( p ){
+ *(int *)p = n;
+ p += 8;
+ }
+ return (void *)p;
+}
+void *sqlite3GenericRealloc(void *p, int n){
+ char *p2 = ((char *)p - 8);
+ assert(n>0);
+ p2 = (char*)realloc(p2, n+8);
+ if( p2 ){
+ *(int *)p2 = n;
+ p2 += 8;
+ }
+ return (void *)p2;
+}
+void sqlite3GenericFree(void *p){
+ assert(p);
+ free((void *)((char *)p - 8));
+}
+int sqlite3GenericAllocationSize(void *p){
+ return p ? *(int *)((char *)p - 8) : 0;
+}
+#else
+void *sqlite3GenericMalloc(int n){
+ char *p = (char *)malloc(n);
+ return (void *)p;
+}
+void *sqlite3GenericRealloc(void *p, int n){
+ assert(n>0);
+ p = realloc(p, n);
+ return p;
+}
+void sqlite3GenericFree(void *p){
+ assert(p);
+ free(p);
+}
+/* Never actually used, but needed for the linker */
+int sqlite3GenericAllocationSize(void *p){ return 0; }
+#endif
Added: freeswitch/trunk/libs/sqlite/src/os_os2.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/os_os2.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,968 @@
+/*
+** 2006 Feb 14
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+******************************************************************************
+**
+** This file contains code that is specific to OS/2.
+*/
+#include "sqliteInt.h"
+#include "os.h"
+
+#if OS_OS2
+
+/*
+** Macros used to determine whether or not to use threads.
+*/
+#if defined(THREADSAFE) && THREADSAFE
+# define SQLITE_OS2_THREADS 1
+#endif
+
+/*
+** Include code that is common to all os_*.c files
+*/
+#include "os_common.h"
+
+/*
+** The os2File structure is subclass of OsFile specific for the OS/2
+** protability layer.
+*/
+typedef struct os2File os2File;
+struct os2File {
+ IoMethod const *pMethod; /* Always the first entry */
+ HFILE h; /* Handle for accessing the file */
+ int delOnClose; /* True if file is to be deleted on close */
+ char* pathToDel; /* Name of file to delete on close */
+ unsigned char locktype; /* Type of lock currently held on this file */
+};
+
+/*
+** Do not include any of the File I/O interface procedures if the
+** SQLITE_OMIT_DISKIO macro is defined (indicating that the database
+** will be in-memory only).
+*/
+#ifndef SQLITE_OMIT_DISKIO
+
+/*
+** Delete the named file
+*/
+int sqlite3Os2Delete( const char *zFilename ){
+ APIRET rc = NO_ERROR;
+
+ rc = DosDelete( (PSZ)zFilename );
+ TRACE2( "DELETE \"%s\"\n", zFilename );
+ return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR;
+}
+
+/*
+** Return TRUE if the named file exists.
+*/
+int sqlite3Os2FileExists( const char *zFilename ){
+ FILESTATUS3 fsts3ConfigInfo;
+ memset(&fsts3ConfigInfo, 0, sizeof(fsts3ConfigInfo));
+ return DosQueryPathInfo( (PSZ)zFilename, FIL_STANDARD,
+ &fsts3ConfigInfo, sizeof(FILESTATUS3) ) == NO_ERROR;
+}
+
+/* Forward declaration */
+int allocateOs2File( os2File *pInit, OsFile **pld );
+
+/*
+** Attempt to open a file for both reading and writing. If that
+** fails, try opening it read-only. If the file does not exist,
+** try to create it.
+**
+** On success, a handle for the open file is written to *id
+** and *pReadonly is set to 0 if the file was opened for reading and
+** writing or 1 if the file was opened read-only. The function returns
+** SQLITE_OK.
+**
+** On failure, the function returns SQLITE_CANTOPEN and leaves
+** *id and *pReadonly unchanged.
+*/
+int sqlite3Os2OpenReadWrite(
+ const char *zFilename,
+ OsFile **pld,
+ int *pReadonly
+){
+ os2File f;
+ HFILE hf;
+ ULONG ulAction;
+ APIRET rc = NO_ERROR;
+
+ assert( *pld == 0 );
+ rc = DosOpen( (PSZ)zFilename, &hf, &ulAction, 0L,
+ FILE_ARCHIVED | FILE_NORMAL,
+ OPEN_ACTION_CREATE_IF_NEW | OPEN_ACTION_OPEN_IF_EXISTS,
+ OPEN_FLAGS_FAIL_ON_ERROR | OPEN_FLAGS_RANDOM |
+ OPEN_SHARE_DENYNONE | OPEN_ACCESS_READWRITE, (PEAOP2)NULL );
+ if( rc != NO_ERROR ){
+ rc = DosOpen( (PSZ)zFilename, &hf, &ulAction, 0L,
+ FILE_ARCHIVED | FILE_NORMAL,
+ OPEN_ACTION_CREATE_IF_NEW | OPEN_ACTION_OPEN_IF_EXISTS,
+ OPEN_FLAGS_FAIL_ON_ERROR | OPEN_FLAGS_RANDOM |
+ OPEN_SHARE_DENYWRITE | OPEN_ACCESS_READONLY, (PEAOP2)NULL );
+ if( rc != NO_ERROR ){
+ return SQLITE_CANTOPEN;
+ }
+ *pReadonly = 1;
+ }
+ else{
+ *pReadonly = 0;
+ }
+ f.h = hf;
+ f.locktype = NO_LOCK;
+ f.delOnClose = 0;
+ f.pathToDel = NULL;
+ OpenCounter(+1);
+ TRACE3( "OPEN R/W %d \"%s\"\n", hf, zFilename );
+ return allocateOs2File( &f, pld );
+}
+
+
+/*
+** Attempt to open a new file for exclusive access by this process.
+** The file will be opened for both reading and writing. To avoid
+** a potential security problem, we do not allow the file to have
+** previously existed. Nor do we allow the file to be a symbolic
+** link.
+**
+** If delFlag is true, then make arrangements to automatically delete
+** the file when it is closed.
+**
+** On success, write the file handle into *id and return SQLITE_OK.
+**
+** On failure, return SQLITE_CANTOPEN.
+*/
+int sqlite3Os2OpenExclusive( const char *zFilename, OsFile **pld, int delFlag ){
+ os2File f;
+ HFILE hf;
+ ULONG ulAction;
+ APIRET rc = NO_ERROR;
+
+ assert( *pld == 0 );
+ rc = DosOpen( (PSZ)zFilename, &hf, &ulAction, 0L, FILE_NORMAL,
+ OPEN_ACTION_CREATE_IF_NEW | OPEN_ACTION_REPLACE_IF_EXISTS,
+ OPEN_FLAGS_FAIL_ON_ERROR | OPEN_FLAGS_RANDOM |
+ OPEN_SHARE_DENYREADWRITE | OPEN_ACCESS_READWRITE, (PEAOP2)NULL );
+ if( rc != NO_ERROR ){
+ return SQLITE_CANTOPEN;
+ }
+
+ f.h = hf;
+ f.locktype = NO_LOCK;
+ f.delOnClose = delFlag ? 1 : 0;
+ f.pathToDel = delFlag ? sqlite3OsFullPathname( zFilename ) : NULL;
+ OpenCounter( +1 );
+ if( delFlag ) DosForceDelete( sqlite3OsFullPathname( zFilename ) );
+ TRACE3( "OPEN EX %d \"%s\"\n", hf, sqlite3OsFullPathname ( zFilename ) );
+ return allocateOs2File( &f, pld );
+}
+
+/*
+** Attempt to open a new file for read-only access.
+**
+** On success, write the file handle into *id and return SQLITE_OK.
+**
+** On failure, return SQLITE_CANTOPEN.
+*/
+int sqlite3Os2OpenReadOnly( const char *zFilename, OsFile **pld ){
+ os2File f;
+ HFILE hf;
+ ULONG ulAction;
+ APIRET rc = NO_ERROR;
+
+ assert( *pld == 0 );
+ rc = DosOpen( (PSZ)zFilename, &hf, &ulAction, 0L,
+ FILE_NORMAL, OPEN_ACTION_OPEN_IF_EXISTS,
+ OPEN_FLAGS_FAIL_ON_ERROR | OPEN_FLAGS_RANDOM |
+ OPEN_SHARE_DENYWRITE | OPEN_ACCESS_READONLY, (PEAOP2)NULL );
+ if( rc != NO_ERROR ){
+ return SQLITE_CANTOPEN;
+ }
+ f.h = hf;
+ f.locktype = NO_LOCK;
+ f.delOnClose = 0;
+ f.pathToDel = NULL;
+ OpenCounter( +1 );
+ TRACE3( "OPEN RO %d \"%s\"\n", hf, zFilename );
+ return allocateOs2File( &f, pld );
+}
+
+/*
+** Attempt to open a file descriptor for the directory that contains a
+** file. This file descriptor can be used to fsync() the directory
+** in order to make sure the creation of a new file is actually written
+** to disk.
+**
+** This routine is only meaningful for Unix. It is a no-op under
+** OS/2 since OS/2 does not support hard links.
+**
+** On success, the handle for a previously opened file at *id is
+** updated with the new directory file descriptor and SQLITE_OK is
+** returned.
+**
+** On failure, the function returns SQLITE_CANTOPEN and leaves
+** *id unchanged.
+*/
+int os2OpenDirectory(
+ OsFile *id,
+ const char *zDirname
+){
+ return SQLITE_OK;
+}
+
+/*
+** If the following global variable points to a string which is the
+** name of a directory, then that directory will be used to store
+** temporary files.
+*/
+char *sqlite3_temp_directory = 0;
+
+/*
+** Create a temporary file name in zBuf. zBuf must be big enough to
+** hold at least SQLITE_TEMPNAME_SIZE characters.
+*/
+int sqlite3Os2TempFileName( char *zBuf ){
+ static const unsigned char zChars[] =
+ "abcdefghijklmnopqrstuvwxyz"
+ "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+ "0123456789";
+ int i, j;
+ PSZ zTempPath = 0;
+ if( DosScanEnv( "TEMP", &zTempPath ) ){
+ if( DosScanEnv( "TMP", &zTempPath ) ){
+ if( DosScanEnv( "TMPDIR", &zTempPath ) ){
+ ULONG ulDriveNum = 0, ulDriveMap = 0;
+ DosQueryCurrentDisk( &ulDriveNum, &ulDriveMap );
+ sprintf( zTempPath, "%c:", (char)( 'A' + ulDriveNum - 1 ) );
+ }
+ }
+ }
+ for(;;){
+ sprintf( zBuf, "%s\\"TEMP_FILE_PREFIX, zTempPath );
+ j = strlen( zBuf );
+ sqlite3Randomness( 15, &zBuf[j] );
+ for( i = 0; i < 15; i++, j++ ){
+ zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ];
+ }
+ zBuf[j] = 0;
+ if( !sqlite3OsFileExists( zBuf ) ) break;
+ }
+ TRACE2( "TEMP FILENAME: %s\n", zBuf );
+ return SQLITE_OK;
+}
+
+/*
+** Close a file.
+*/
+int os2Close( OsFile **pld ){
+ os2File *pFile;
+ APIRET rc = NO_ERROR;
+ if( pld && (pFile = (os2File*)*pld) != 0 ){
+ TRACE2( "CLOSE %d\n", pFile->h );
+ rc = DosClose( pFile->h );
+ pFile->locktype = NO_LOCK;
+ if( pFile->delOnClose != 0 ){
+ rc = DosForceDelete( pFile->pathToDel );
+ }
+ *pld = 0;
+ OpenCounter( -1 );
+ }
+
+ return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR;
+}
+
+/*
+** Read data from a file into a buffer. Return SQLITE_OK if all
+** bytes were read successfully and SQLITE_IOERR if anything goes
+** wrong.
+*/
+int os2Read( OsFile *id, void *pBuf, int amt ){
+ ULONG got;
+ assert( id!=0 );
+ SimulateIOError( return SQLITE_IOERR );
+ TRACE3( "READ %d lock=%d\n", ((os2File*)id)->h, ((os2File*)id)->locktype );
+ DosRead( ((os2File*)id)->h, pBuf, amt, &got );
+ return (got == (ULONG)amt) ? SQLITE_OK : SQLITE_IOERR;
+}
+
+/*
+** Write data from a buffer into a file. Return SQLITE_OK on success
+** or some other error code on failure.
+*/
+int os2Write( OsFile *id, const void *pBuf, int amt ){
+ APIRET rc = NO_ERROR;
+ ULONG wrote;
+ assert( id!=0 );
+ SimulateIOError( return SQLITE_IOERR );
+ SimulateDiskfullError( return SQLITE_FULL );
+ TRACE3( "WRITE %d lock=%d\n", ((os2File*)id)->h, ((os2File*)id)->locktype );
+ while( amt > 0 &&
+ (rc = DosWrite( ((os2File*)id)->h, (PVOID)pBuf, amt, &wrote )) && wrote > 0 ){
+ amt -= wrote;
+ pBuf = &((char*)pBuf)[wrote];
+ }
+
+ return ( rc != NO_ERROR || amt > (int)wrote ) ? SQLITE_FULL : SQLITE_OK;
+}
+
+/*
+** Move the read/write pointer in a file.
+*/
+int os2Seek( OsFile *id, i64 offset ){
+ APIRET rc = NO_ERROR;
+ ULONG filePointer = 0L;
+ assert( id!=0 );
+ rc = DosSetFilePtr( ((os2File*)id)->h, offset, FILE_BEGIN, &filePointer );
+ TRACE3( "SEEK %d %lld\n", ((os2File*)id)->h, offset );
+ return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR;
+}
+
+/*
+** Make sure all writes to a particular file are committed to disk.
+*/
+int os2Sync( OsFile *id, int dataOnly ){
+ assert( id!=0 );
+ TRACE3( "SYNC %d lock=%d\n", ((os2File*)id)->h, ((os2File*)id)->locktype );
+ return DosResetBuffer( ((os2File*)id)->h ) == NO_ERROR ? SQLITE_OK : SQLITE_IOERR;
+}
+
+/*
+** Sync the directory zDirname. This is a no-op on operating systems other
+** than UNIX.
+*/
+int sqlite3Os2SyncDirectory( const char *zDirname ){
+ SimulateIOError( return SQLITE_IOERR );
+ return SQLITE_OK;
+}
+
+/*
+** Truncate an open file to a specified size
+*/
+int os2Truncate( OsFile *id, i64 nByte ){
+ APIRET rc = NO_ERROR;
+ ULONG upperBits = nByte>>32;
+ assert( id!=0 );
+ TRACE3( "TRUNCATE %d %lld\n", ((os2File*)id)->h, nByte );
+ SimulateIOError( return SQLITE_IOERR );
+ rc = DosSetFilePtr( ((os2File*)id)->h, nByte, FILE_BEGIN, &upperBits );
+ if( rc != NO_ERROR ){
+ return SQLITE_IOERR;
+ }
+ rc = DosSetFilePtr( ((os2File*)id)->h, 0L, FILE_END, &upperBits );
+ return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR;
+}
+
+/*
+** Determine the current size of a file in bytes
+*/
+int os2FileSize( OsFile *id, i64 *pSize ){
+ APIRET rc = NO_ERROR;
+ FILESTATUS3 fsts3FileInfo;
+ memset(&fsts3FileInfo, 0, sizeof(fsts3FileInfo));
+ assert( id!=0 );
+ SimulateIOError( return SQLITE_IOERR );
+ rc = DosQueryFileInfo( ((os2File*)id)->h, FIL_STANDARD, &fsts3FileInfo, sizeof(FILESTATUS3) );
+ if( rc == NO_ERROR ){
+ *pSize = fsts3FileInfo.cbFile;
+ return SQLITE_OK;
+ }
+ else{
+ return SQLITE_IOERR;
+ }
+}
+
+/*
+** Acquire a reader lock.
+*/
+static int getReadLock( os2File *id ){
+ FILELOCK LockArea,
+ UnlockArea;
+ memset(&LockArea, 0, sizeof(LockArea));
+ memset(&UnlockArea, 0, sizeof(UnlockArea));
+ LockArea.lOffset = SHARED_FIRST;
+ LockArea.lRange = SHARED_SIZE;
+ UnlockArea.lOffset = 0L;
+ UnlockArea.lRange = 0L;
+ return DosSetFileLocks( id->h, &UnlockArea, &LockArea, 2000L, 1L );
+}
+
+/*
+** Undo a readlock
+*/
+static int unlockReadLock( os2File *id ){
+ FILELOCK LockArea,
+ UnlockArea;
+ memset(&LockArea, 0, sizeof(LockArea));
+ memset(&UnlockArea, 0, sizeof(UnlockArea));
+ LockArea.lOffset = 0L;
+ LockArea.lRange = 0L;
+ UnlockArea.lOffset = SHARED_FIRST;
+ UnlockArea.lRange = SHARED_SIZE;
+ return DosSetFileLocks( id->h, &UnlockArea, &LockArea, 2000L, 1L );
+}
+
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+/*
+** Check that a given pathname is a directory and is writable
+**
+*/
+int sqlite3Os2IsDirWritable( char *zDirname ){
+ FILESTATUS3 fsts3ConfigInfo;
+ APIRET rc = NO_ERROR;
+ memset(&fsts3ConfigInfo, 0, sizeof(fsts3ConfigInfo));
+ if( zDirname==0 ) return 0;
+ if( strlen(zDirname)>CCHMAXPATH ) return 0;
+ rc = DosQueryPathInfo( (PSZ)zDirname, FIL_STANDARD, &fsts3ConfigInfo, sizeof(FILESTATUS3) );
+ if( rc != NO_ERROR ) return 0;
+ if( (fsts3ConfigInfo.attrFile & FILE_DIRECTORY) != FILE_DIRECTORY ) return 0;
+
+ return 1;
+}
+#endif /* SQLITE_OMIT_PAGER_PRAGMAS */
+
+/*
+** Lock the file with the lock specified by parameter locktype - one
+** of the following:
+**
+** (1) SHARED_LOCK
+** (2) RESERVED_LOCK
+** (3) PENDING_LOCK
+** (4) EXCLUSIVE_LOCK
+**
+** Sometimes when requesting one lock state, additional lock states
+** are inserted in between. The locking might fail on one of the later
+** transitions leaving the lock state different from what it started but
+** still short of its goal. The following chart shows the allowed
+** transitions and the inserted intermediate states:
+**
+** UNLOCKED -> SHARED
+** SHARED -> RESERVED
+** SHARED -> (PENDING) -> EXCLUSIVE
+** RESERVED -> (PENDING) -> EXCLUSIVE
+** PENDING -> EXCLUSIVE
+**
+** This routine will only increase a lock. The os2Unlock() routine
+** erases all locks at once and returns us immediately to locking level 0.
+** It is not possible to lower the locking level one step at a time. You
+** must go straight to locking level 0.
+*/
+int os2Lock( OsFile *id, int locktype ){
+ APIRET rc = SQLITE_OK; /* Return code from subroutines */
+ APIRET res = NO_ERROR; /* Result of an OS/2 lock call */
+ int newLocktype; /* Set id->locktype to this value before exiting */
+ int gotPendingLock = 0;/* True if we acquired a PENDING lock this time */
+ FILELOCK LockArea,
+ UnlockArea;
+ os2File *pFile = (os2File*)id;
+ memset(&LockArea, 0, sizeof(LockArea));
+ memset(&UnlockArea, 0, sizeof(UnlockArea));
+ assert( pFile!=0 );
+ TRACE4( "LOCK %d %d was %d\n", pFile->h, locktype, pFile->locktype );
+
+ /* If there is already a lock of this type or more restrictive on the
+ ** OsFile, do nothing. Don't use the end_lock: exit path, as
+ ** sqlite3OsEnterMutex() hasn't been called yet.
+ */
+ if( pFile->locktype>=locktype ){
+ return SQLITE_OK;
+ }
+
+ /* Make sure the locking sequence is correct
+ */
+ assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK );
+ assert( locktype!=PENDING_LOCK );
+ assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK );
+
+ /* Lock the PENDING_LOCK byte if we need to acquire a PENDING lock or
+ ** a SHARED lock. If we are acquiring a SHARED lock, the acquisition of
+ ** the PENDING_LOCK byte is temporary.
+ */
+ newLocktype = pFile->locktype;
+ if( pFile->locktype==NO_LOCK
+ || (locktype==EXCLUSIVE_LOCK && pFile->locktype==RESERVED_LOCK)
+ ){
+ int cnt = 3;
+
+ LockArea.lOffset = PENDING_BYTE;
+ LockArea.lRange = 1L;
+ UnlockArea.lOffset = 0L;
+ UnlockArea.lRange = 0L;
+
+ while( cnt-->0 && (res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L) )!=NO_ERROR ){
+ /* Try 3 times to get the pending lock. The pending lock might be
+ ** held by another reader process who will release it momentarily.
+ */
+ TRACE2( "could not get a PENDING lock. cnt=%d\n", cnt );
+ DosSleep(1);
+ }
+ gotPendingLock = res;
+ }
+
+ /* Acquire a shared lock
+ */
+ if( locktype==SHARED_LOCK && res ){
+ assert( pFile->locktype==NO_LOCK );
+ res = getReadLock(pFile);
+ if( res == NO_ERROR ){
+ newLocktype = SHARED_LOCK;
+ }
+ }
+
+ /* Acquire a RESERVED lock
+ */
+ if( locktype==RESERVED_LOCK && res ){
+ assert( pFile->locktype==SHARED_LOCK );
+ LockArea.lOffset = RESERVED_BYTE;
+ LockArea.lRange = 1L;
+ UnlockArea.lOffset = 0L;
+ UnlockArea.lRange = 0L;
+ res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L );
+ if( res == NO_ERROR ){
+ newLocktype = RESERVED_LOCK;
+ }
+ }
+
+ /* Acquire a PENDING lock
+ */
+ if( locktype==EXCLUSIVE_LOCK && res ){
+ newLocktype = PENDING_LOCK;
+ gotPendingLock = 0;
+ }
+
+ /* Acquire an EXCLUSIVE lock
+ */
+ if( locktype==EXCLUSIVE_LOCK && res ){
+ assert( pFile->locktype>=SHARED_LOCK );
+ res = unlockReadLock(pFile);
+ TRACE2( "unreadlock = %d\n", res );
+ LockArea.lOffset = SHARED_FIRST;
+ LockArea.lRange = SHARED_SIZE;
+ UnlockArea.lOffset = 0L;
+ UnlockArea.lRange = 0L;
+ res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L );
+ if( res == NO_ERROR ){
+ newLocktype = EXCLUSIVE_LOCK;
+ }else{
+ TRACE2( "error-code = %d\n", res );
+ }
+ }
+
+ /* If we are holding a PENDING lock that ought to be released, then
+ ** release it now.
+ */
+ if( gotPendingLock && locktype==SHARED_LOCK ){
+ LockArea.lOffset = 0L;
+ LockArea.lRange = 0L;
+ UnlockArea.lOffset = PENDING_BYTE;
+ UnlockArea.lRange = 1L;
+ DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L );
+ }
+
+ /* Update the lock state held in the file descriptor, then
+ ** return the appropriate result code.
+ */
+ if( res == NO_ERROR ){
+ rc = SQLITE_OK;
+ }else{
+ TRACE4( "LOCK FAILED %d trying for %d but got %d\n", pFile->h,
+ locktype, newLocktype );
+ rc = SQLITE_BUSY;
+ }
+ pFile->locktype = newLocktype;
+ return rc;
+}
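+
+/* Added usage sketch (an assumption, not part of the original source):
+** a write transaction typically drives the state machine above through
+** SHARED -> RESERVED -> (PENDING) -> EXCLUSIVE and then back down via
+** os2Unlock(), roughly:
+**
+**   sqlite3OsLock(pDb, SHARED_LOCK);      read the database
+**   sqlite3OsLock(pDb, RESERVED_LOCK);    signal intent to write
+**   sqlite3OsLock(pDb, EXCLUSIVE_LOCK);   write pages (PENDING inserted)
+**   sqlite3OsUnlock(pDb, SHARED_LOCK);    after commit
+**   sqlite3OsUnlock(pDb, NO_LOCK);        end of transaction
+**
+** where pDb is a hypothetical OsFile* obtained from one of the open
+** routines.
+*/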
+
+/*
+** This routine checks if there is a RESERVED lock held on the specified
+** file by this or any other process. If such a lock is held, return
+** non-zero, otherwise zero.
+*/
+int os2CheckReservedLock( OsFile *id ){
+ APIRET rc = NO_ERROR;
+ os2File *pFile = (os2File*)id;
+ assert( pFile!=0 );
+ if( pFile->locktype>=RESERVED_LOCK ){
+ rc = 1;
+ TRACE3( "TEST WR-LOCK %d %d (local)\n", pFile->h, rc );
+ }else{
+ FILELOCK LockArea,
+ UnlockArea;
+ memset(&LockArea, 0, sizeof(LockArea));
+ memset(&UnlockArea, 0, sizeof(UnlockArea));
+ LockArea.lOffset = RESERVED_BYTE;
+ LockArea.lRange = 1L;
+ UnlockArea.lOffset = 0L;
+ UnlockArea.lRange = 0L;
+ rc = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L );
+ if( rc == NO_ERROR ){
+ LockArea.lOffset = 0L;
+ LockArea.lRange = 0L;
+ UnlockArea.lOffset = RESERVED_BYTE;
+ UnlockArea.lRange = 1L;
+ rc = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L );
+ }
+ TRACE3( "TEST WR-LOCK %d %d (remote)\n", pFile->h, rc );
+ }
+ return rc;
+}
+
+/*
+** Lower the locking level on file descriptor id to locktype. locktype
+** must be either NO_LOCK or SHARED_LOCK.
+**
+** If the locking level of the file descriptor is already at or below
+** the requested locking level, this routine is a no-op.
+**
+** It is not possible for this routine to fail if the second argument
+** is NO_LOCK. If the second argument is SHARED_LOCK then this routine
+** might return SQLITE_IOERR.
+*/
+int os2Unlock( OsFile *id, int locktype ){
+ int type;
+ APIRET rc = SQLITE_OK;
+ os2File *pFile = (os2File*)id;
+ FILELOCK LockArea,
+ UnlockArea;
+ memset(&LockArea, 0, sizeof(LockArea));
+ memset(&UnlockArea, 0, sizeof(UnlockArea));
+ assert( pFile!=0 );
+ assert( locktype<=SHARED_LOCK );
+ TRACE4( "UNLOCK %d to %d was %d\n", pFile->h, locktype, pFile->locktype );
+ type = pFile->locktype;
+ if( type>=EXCLUSIVE_LOCK ){
+ LockArea.lOffset = 0L;
+ LockArea.lRange = 0L;
+ UnlockArea.lOffset = SHARED_FIRST;
+ UnlockArea.lRange = SHARED_SIZE;
+ DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L );
+ if( locktype==SHARED_LOCK && getReadLock(pFile) != NO_ERROR ){
+ /* This should never happen. We should always be able to
+ ** reacquire the read lock */
+ rc = SQLITE_IOERR;
+ }
+ }
+ if( type>=RESERVED_LOCK ){
+ LockArea.lOffset = 0L;
+ LockArea.lRange = 0L;
+ UnlockArea.lOffset = RESERVED_BYTE;
+ UnlockArea.lRange = 1L;
+ DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L );
+ }
+ if( locktype==NO_LOCK && type>=SHARED_LOCK ){
+ unlockReadLock(pFile);
+ }
+ if( type>=PENDING_LOCK ){
+ LockArea.lOffset = 0L;
+ LockArea.lRange = 0L;
+ UnlockArea.lOffset = PENDING_BYTE;
+ UnlockArea.lRange = 1L;
+ DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 2000L, 1L );
+ }
+ pFile->locktype = locktype;
+ return rc;
+}
+
+/*
+** Turn a relative pathname into a full pathname. Return a pointer
+** to the full pathname stored in space obtained from sqliteMalloc().
+** The calling function is responsible for freeing this space once it
+** is no longer needed.
+*/
+char *sqlite3Os2FullPathname( const char *zRelative ){
+ char *zFull = 0;
+ if( strchr(zRelative, ':') ){
+ sqlite3SetString( &zFull, zRelative, (char*)0 );
+ }else{
+ char zBuff[SQLITE_TEMPNAME_SIZE - 2] = {0};
+ char zDrive[2] = {0}; /* drive letter plus terminating NUL */
+ ULONG cbzFullLen = SQLITE_TEMPNAME_SIZE;
+ ULONG ulDriveNum = 0;
+ ULONG ulDriveMap = 0;
+ DosQueryCurrentDisk( &ulDriveNum, &ulDriveMap );
+ DosQueryCurrentDir( 0L, zBuff, &cbzFullLen );
+ zFull = sqliteMalloc( cbzFullLen );
+ sprintf( zDrive, "%c", (char)('A' + ulDriveNum - 1) );
+ sqlite3SetString( &zFull, zDrive, ":\\", zBuff, "\\", zRelative, (char*)0 );
+ }
+ return zFull;
+}
+
+/*
+** Change the value of the fullsync flag in the given file descriptor
+** (as done in os_unix.c). The fullSync option has no effect on OS/2,
+** so this routine is a no-op.
+*/
+static void os2SetFullSync( OsFile *id, int v ){
+ return;
+}
+
+/*
+** Return the underlying file handle for an OsFile
+*/
+static int os2FileHandle( OsFile *id ){
+ return (int)((os2File*)id)->h;
+}
+
+/*
+** Return an integer that indicates the type of lock currently held
+** by this handle. (Used for testing and analysis only.)
+*/
+static int os2LockState( OsFile *id ){
+ return ((os2File*)id)->locktype;
+}
+
+/*
+** This vector defines all the methods that can operate on an OsFile
+** for os2.
+*/
+static const IoMethod sqlite3Os2IoMethod = {
+ os2Close,
+ os2OpenDirectory,
+ os2Read,
+ os2Write,
+ os2Seek,
+ os2Truncate,
+ os2Sync,
+ os2SetFullSync,
+ os2FileHandle,
+ os2FileSize,
+ os2Lock,
+ os2Unlock,
+ os2LockState,
+ os2CheckReservedLock,
+};
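+
+/* Added note (an assumption about os.h, not part of the original source):
+** the core never calls the routines above directly; it goes through the
+** IoMethod pointer stored in each open file, e.g. roughly
+**
+**   rc = ((os2File*)id)->pMethod->xRead(id, pBuf, amt);
+**
+** which is what selects os2Read() for files opened by this backend.
+*/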
+
+/*
+** Allocate memory for an OsFile. Initialize the new OsFile
+** to the value given in pInit and return a pointer to the new
+** OsFile. If we run out of memory, close the file and return NULL.
+*/
+int allocateOs2File( os2File *pInit, OsFile **pld ){
+ os2File *pNew;
+ pNew = sqliteMalloc( sizeof(*pNew) );
+ if( pNew==0 ){
+ DosClose( pInit->h );
+ *pld = 0;
+ return SQLITE_NOMEM;
+ }else{
+ *pNew = *pInit;
+ pNew->pMethod = &sqlite3Os2IoMethod;
+ pNew->locktype = NO_LOCK;
+ *pld = (OsFile*)pNew;
+ OpenCounter(+1);
+ return SQLITE_OK;
+ }
+}
+
+#endif /* SQLITE_OMIT_DISKIO */
+/***************************************************************************
+** Everything above deals with file I/O. Everything that follows deals
+** with other miscellaneous aspects of the operating system interface
+****************************************************************************/
+
+/*
+** Get information to seed the random number generator. The seed
+** is written into the buffer zBuf[256]. The calling function must
+** supply a sufficiently large buffer.
+*/
+int sqlite3Os2RandomSeed( char *zBuf ){
+ /* We have to initialize zBuf to prevent valgrind from reporting
+ ** errors. The reports issued by valgrind are incorrect - we would
+ ** prefer that the randomness be increased by making use of the
+ ** uninitialized space in zBuf - but valgrind errors tend to worry
+ ** some users. Rather than argue, it seems easier just to initialize
+ ** the whole array and silence valgrind, even if that means less randomness
+ ** in the random seed.
+ **
+ ** When testing, initializing zBuf[] to zero is all we do. That means
+ ** that we always use the same random number sequence. This makes the
+ ** tests repeatable.
+ */
+ memset( zBuf, 0, 256 );
+ DosGetDateTime( (PDATETIME)zBuf );
+ return SQLITE_OK;
+}
+
+/*
+** Sleep for a little while. Return the amount of time slept.
+*/
+int sqlite3Os2Sleep( int ms ){
+ DosSleep( ms );
+ return ms;
+}
+
+/*
+** Static variables used for thread synchronization
+*/
+static int inMutex = 0;
+#ifdef SQLITE_OS2_THREADS
+static ULONG mutexOwner;
+#endif
+
+/*
+** The following pair of routines implement mutual exclusion for
+** multi-threaded processes. Only a single thread is allowed to
+** execute code that is surrounded by EnterMutex() and LeaveMutex().
+**
+** SQLite uses only a single Mutex. There is not much critical
+** code and what little there is executes quickly and without blocking.
+*/
+void sqlite3Os2EnterMutex(){
+ PTIB ptib;
+#ifdef SQLITE_OS2_THREADS
+ DosEnterCritSec();
+ DosGetInfoBlocks( &ptib, NULL );
+ mutexOwner = ptib->tib_ptib2->tib2_ultid;
+#endif
+ assert( !inMutex );
+ inMutex = 1;
+}
+void sqlite3Os2LeaveMutex(){
+ PTIB ptib;
+ assert( inMutex );
+ inMutex = 0;
+#ifdef SQLITE_OS2_THREADS
+ DosGetInfoBlocks( &ptib, NULL );
+ assert( mutexOwner == ptib->tib_ptib2->tib2_ultid );
+ DosExitCritSec();
+#endif
+}
+
+/*
+** Return TRUE if the mutex is currently held.
+**
+** If the thisThreadOnly parameter is true, return true if and only if the
+** calling thread holds the mutex. If the parameter is false, return
+** true if any thread holds the mutex.
+*/
+int sqlite3Os2InMutex( int thisThreadOnly ){
+#ifdef SQLITE_OS2_THREADS
+ PTIB ptib;
+ DosGetInfoBlocks( &ptib, NULL );
+ return inMutex>0 && (thisThreadOnly==0 || mutexOwner==ptib->tib_ptib2->tib2_ultid);
+#else
+ return inMutex>0;
+#endif
+}
+
+/*
+** The following variable, if set to a non-zero value, becomes the result
+** returned from sqlite3OsCurrentTime(). This is used for testing.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_current_time = 0;
+#endif
+
+/*
+** Find the current time (in Universal Coordinated Time). Write the
+** current time and date as a Julian Day number into *prNow and
+** return 0. Return 1 if the time and date cannot be found.
+*/
+int sqlite3Os2CurrentTime( double *prNow ){
+ double now;
+ USHORT second, minute, hour,
+ day, month, year;
+ DATETIME dt;
+ DosGetDateTime( &dt );
+ second = (USHORT)dt.seconds;
+ minute = (USHORT)dt.minutes + dt.timezone;
+ hour = (USHORT)dt.hours;
+ day = (USHORT)dt.day;
+ month = (USHORT)dt.month;
+ year = (USHORT)dt.year;
+
+ /* Calculations from http://www.astro.keele.ac.uk/~rno/Astronomy/hjd.html
+ http://www.astro.keele.ac.uk/~rno/Astronomy/hjd-0.1.c */
+ /* Calculate the Julian days */
+ now = day - 32076 +
+ 1461*(year + 4800 + (month - 14)/12)/4 +
+ 367*(month - 2 - (month - 14)/12*12)/12 -
+ 3*((year + 4900 + (month - 14)/12)/100)/4;
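+ /* Added worked example: for 2006-12-19 (day=19, month=12, year=2006)
+ ** the integer expression above evaluates to 2454088 with the usual
+ ** truncate-toward-zero integer division; the fractional time terms
+ ** added below then give 2454088.5 at midnight UT, which matches the
+ ** published Julian Day number for that date. */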
+
+ /* Add the fractional hours, mins and seconds */
+ now += (hour + 12.0)/24.0;
+ now += minute/1440.0;
+ now += second/86400.0;
+ *prNow = now;
+#ifdef SQLITE_TEST
+ if( sqlite3_current_time ){
+ *prNow = sqlite3_current_time/86400.0 + 2440587.5;
+ }
+#endif
+ return 0;
+}
+
+/*
+** Remember the number of thread-specific-data blocks allocated.
+** Use this to verify that we are not leaking thread-specific-data.
+** Ticket #1601
+*/
+#ifdef SQLITE_TEST
+int sqlite3_tsd_count = 0;
+# define TSD_COUNTER_INCR InterlockedIncrement( &sqlite3_tsd_count )
+# define TSD_COUNTER_DECR InterlockedDecrement( &sqlite3_tsd_count )
+#else
+# define TSD_COUNTER_INCR /* no-op */
+# define TSD_COUNTER_DECR /* no-op */
+#endif
+
+/*
+** If called with allocateFlag>0, then return a pointer to thread
+** specific data for the current thread. Allocate and zero the
+** thread-specific data if it does not already exist.
+**
+** If called with allocateFlag==0, then check the current thread
+** specific data. Return it if it exists. If it does not exist,
+** then return NULL.
+**
+** If called with allocateFlag<0, check to see if the thread specific
+** data is allocated and is all zero. If it is then deallocate it.
+** Return a pointer to the thread specific data or NULL if it is
+** unallocated or gets deallocated.
+*/
+ThreadData *sqlite3Os2ThreadSpecificData( int allocateFlag ){
+ static ThreadData **s_ppTsd = NULL;
+ static const ThreadData zeroData = {0, 0, 0};
+ ThreadData *pTsd;
+
+ if( !s_ppTsd ){
+ sqlite3OsEnterMutex();
+ if( !s_ppTsd ){
+ PULONG pul;
+ APIRET rc = DosAllocThreadLocalMemory(1, &pul);
+ if( rc != NO_ERROR ){
+ sqlite3OsLeaveMutex();
+ return 0;
+ }
+ s_ppTsd = (ThreadData **)pul;
+ }
+ sqlite3OsLeaveMutex();
+ }
+ pTsd = *s_ppTsd;
+ if( allocateFlag>0 ){
+ if( !pTsd ){
+ pTsd = sqlite3OsMalloc( sizeof(zeroData) );
+ if( pTsd ){
+ *pTsd = zeroData;
+ *s_ppTsd = pTsd;
+ TSD_COUNTER_INCR;
+ }
+ }
+ }else if( pTsd!=0 && allocateFlag<0
+ && memcmp( pTsd, &zeroData, sizeof(ThreadData) )==0 ){
+ sqlite3OsFree(pTsd);
+ *s_ppTsd = NULL;
+ TSD_COUNTER_DECR;
+ pTsd = 0;
+ }
+ return pTsd;
+}
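+
+/* Added usage sketch (an assumption, not part of the original source):
+** callers obtain the per-thread block with a positive flag and hand it
+** back with a negative flag once every field has been reset to zero:
+**
+**   ThreadData *pTd = sqlite3Os2ThreadSpecificData(1);    allocate if needed
+**   ... use pTd, eventually zeroing its fields ...
+**   sqlite3Os2ThreadSpecificData(-1);                      freed once all zero
+*/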
+#endif /* OS_OS2 */
Added: freeswitch/trunk/libs/sqlite/src/os_os2.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/os_os2.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,73 @@
+/*
+** 2004 May 22
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+******************************************************************************
+**
+** This header file defines OS-specific features for OS/2.
+*/
+#ifndef _SQLITE_OS_OS2_H_
+#define _SQLITE_OS_OS2_H_
+
+/*
+** standard include files.
+*/
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+/*
+** Macros used to determine whether or not to use threads. The
+** SQLITE_OS2_THREADS macro is defined if we are synchronizing using
+** the EMX mutex primitives (sys/smutex.h), which is the only thread
+** support available to this port.
+*/
+/* this mutex implementation only available with EMX */
+#if defined(THREADSAFE) && THREADSAFE
+# include <sys/builtin.h>
+# include <sys/smutex.h>
+# define SQLITE_OS2_THREADS 1
+#endif
+
+/*
+** The OsFile structure is an operating-system independent representation
+** of an open file handle. It is defined differently for each architecture.
+**
+** This is the definition for OS/2.
+**
+** OsFile.locktype takes one of the values SHARED_LOCK, RESERVED_LOCK,
+** PENDING_LOCK or EXCLUSIVE_LOCK.
+*/
+typedef struct OsFile OsFile;
+struct OsFile {
+ int h; /* The file descriptor (LHANDLE) */
+ int locked; /* True if this user holds the lock */
+ int delOnClose; /* True if file is to be deleted on close */
+ char *pathToDel; /* Name of file to delete on close */
+ unsigned char locktype; /* The type of lock held on this fd */
+ unsigned char isOpen; /* True if needs to be closed */
+ unsigned char fullSync;
+};
+
+/*
+** Maximum number of characters in a temporary file name
+*/
+#define SQLITE_TEMPNAME_SIZE 200
+
+/*
+** Minimum interval supported by sqlite3OsSleep().
+*/
+#define SQLITE_MIN_SLEEP_MS 1
+
+#ifndef SQLITE_DEFAULT_FILE_PERMISSIONS
+# define SQLITE_DEFAULT_FILE_PERMISSIONS 0600
+#endif
+
+#endif /* _SQLITE_OS_OS2_H_ */
Added: freeswitch/trunk/libs/sqlite/src/os_unix.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/os_unix.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,2875 @@
+/*
+** 2004 May 22
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+******************************************************************************
+**
+** This file contains code that is specific to Unix systems.
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#if OS_UNIX /* This file is used on unix only */
+
+/* #define SQLITE_ENABLE_LOCKING_STYLE 0 */
+
+/*
+** These #defines should enable >2GB file support on Posix if the
+** underlying operating system supports it. If the OS lacks
+** large file support, these should be no-ops.
+**
+** Large file support can be disabled using the -DSQLITE_DISABLE_LFS switch
+** on the compiler command line. This is necessary if you are compiling
+** on a recent machine (ex: RedHat 7.2) but you want your code to work
+** on an older machine (ex: RedHat 6.0). If you compile on RedHat 7.2
+** without this option, LFS is enabled. But LFS does not exist in the kernel
+** in RedHat 6.0, so the code won't work. Hence, for maximum binary
+** portability you should omit LFS.
+*/
+#ifndef SQLITE_DISABLE_LFS
+# define _LARGE_FILE 1
+# ifndef _FILE_OFFSET_BITS
+# define _FILE_OFFSET_BITS 64
+# endif
+# define _LARGEFILE_SOURCE 1
+#endif
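+
+/* Added example (an assumption about the build invocation, not part of
+** the original source): disabling large file support as described above
+** is done on the compiler command line, e.g.
+**
+**   cc -DSQLITE_DISABLE_LFS -c os_unix.c
+*/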
+
+/*
+** standard include files.
+*/
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <time.h>
+#include <sys/time.h>
+#include <errno.h>
+#ifdef SQLITE_ENABLE_LOCKING_STYLE
+#include <sys/ioctl.h>
+#include <sys/param.h>
+#include <sys/mount.h>
+#endif /* SQLITE_ENABLE_LOCKING_STYLE */
+
+/*
+** If we are to be thread-safe, include the pthreads header and define
+** the SQLITE_UNIX_THREADS macro.
+*/
+#if defined(THREADSAFE) && THREADSAFE
+# include <pthread.h>
+# define SQLITE_UNIX_THREADS 1
+#endif
+
+/*
+** Default permissions when creating a new file
+*/
+#ifndef SQLITE_DEFAULT_FILE_PERMISSIONS
+# define SQLITE_DEFAULT_FILE_PERMISSIONS 0644
+#endif
+
+
+
+/*
+** The unixFile structure is a subclass of OsFile specific to the unix
+** portability layer.
+*/
+typedef struct unixFile unixFile;
+struct unixFile {
+ IoMethod const *pMethod; /* Always the first entry */
+ struct openCnt *pOpen; /* Info about all open fd's on this inode */
+ struct lockInfo *pLock; /* Info about locks on this inode */
+#ifdef SQLITE_ENABLE_LOCKING_STYLE
+ void *lockingContext; /* Locking style specific state */
+#endif /* SQLITE_ENABLE_LOCKING_STYLE */
+ int h; /* The file descriptor */
+ unsigned char locktype; /* The type of lock held on this fd */
+ unsigned char isOpen; /* True if needs to be closed */
+ unsigned char fullSync; /* Use F_FULLSYNC if available */
+ int dirfd; /* File descriptor for the directory */
+ i64 offset; /* Seek offset */
+#ifdef SQLITE_UNIX_THREADS
+ pthread_t tid; /* The thread that "owns" this OsFile */
+#endif
+};
+
+/*
+** Provide the ability to override some OS-layer functions during
+** testing. This is used to simulate OS crashes to verify that
+** commits are atomic even in the event of an OS crash.
+*/
+#ifdef SQLITE_CRASH_TEST
+ extern int sqlite3CrashTestEnable;
+ extern int sqlite3CrashOpenReadWrite(const char*, OsFile**, int*);
+ extern int sqlite3CrashOpenExclusive(const char*, OsFile**, int);
+ extern int sqlite3CrashOpenReadOnly(const char*, OsFile**, int);
+# define CRASH_TEST_OVERRIDE(X,A,B,C) \
+ if(sqlite3CrashTestEnable){ return X(A,B,C); }
+#else
+# define CRASH_TEST_OVERRIDE(X,A,B,C) /* no-op */
+#endif
+
+
+/*
+** Include code that is common to all os_*.c files
+*/
+#include "os_common.h"
+
+/*
+** Do not include any of the File I/O interface procedures if the
+** SQLITE_OMIT_DISKIO macro is defined (indicating that the database
+** will be in-memory only)
+*/
+#ifndef SQLITE_OMIT_DISKIO
+
+
+/*
+** Define various macros that are missing from some systems.
+*/
+#ifndef O_LARGEFILE
+# define O_LARGEFILE 0
+#endif
+#ifdef SQLITE_DISABLE_LFS
+# undef O_LARGEFILE
+# define O_LARGEFILE 0
+#endif
+#ifndef O_NOFOLLOW
+# define O_NOFOLLOW 0
+#endif
+#ifndef O_BINARY
+# define O_BINARY 0
+#endif
+
+/*
+** The DJGPP compiler environment looks mostly like Unix, but it
+** lacks the fcntl() system call. So redefine fcntl() to be something
+** that always succeeds. This means that locking does not occur under
+** DJGPP. But it's DOS - what did you expect?
+*/
+#ifdef __DJGPP__
+# define fcntl(A,B,C) 0
+#endif
+
+/*
+** The threadid macro resolves to the thread-id or to 0. Used for
+** testing and debugging only.
+*/
+#ifdef SQLITE_UNIX_THREADS
+#define threadid pthread_self()
+#else
+#define threadid 0
+#endif
+
+/*
+** Set or check the OsFile.tid field. This field is set when an OsFile
+** is first opened. All subsequent uses of the OsFile verify that the
+** same thread is operating on the OsFile. Some operating systems do
+** not allow locks to be overridden by other threads and that restriction
+** means that sqlite3* database handles cannot be moved from one thread
+** to another. This logic makes sure a user does not try to do that
+** by mistake.
+**
+** Version 3.3.1 (2006-01-15): OsFiles can be moved from one thread to
+** another as long as we are running on a system that supports threads
+** overriding each others locks (which is now the most common behavior)
+** or if no locks are held. But the OsFile.pLock field needs to be
+** recomputed because its key includes the thread-id. See the
+** transferOwnership() function below for additional information
+*/
+#if defined(SQLITE_UNIX_THREADS)
+# define SET_THREADID(X) (X)->tid = pthread_self()
+# define CHECK_THREADID(X) (threadsOverrideEachOthersLocks==0 && \
+ !pthread_equal((X)->tid, pthread_self()))
+#else
+# define SET_THREADID(X)
+# define CHECK_THREADID(X) 0
+#endif
+
+/*
+** Here is the dirt on POSIX advisory locks: ANSI STD 1003.1 (1996)
+** section 6.5.2.2 lines 483 through 490 specify that when a process
+** sets or clears a lock, that operation overrides any prior locks set
+** by the same process. It does not explicitly say so, but this implies
+** that it overrides locks set by the same process using a different
+** file descriptor. Consider this test case:
+**
+** int fd1 = open("./file1", O_RDWR|O_CREAT, 0644);
+** int fd2 = open("./file2", O_RDWR|O_CREAT, 0644);
+**
+** Suppose ./file1 and ./file2 are really the same file (because
+** one is a hard or symbolic link to the other) then if you set
+** an exclusive lock on fd1, then try to get an exclusive lock
+** on fd2, it works. I would have expected the second lock to
+** fail since there was already a lock on the file due to fd1.
+** But not so. Since both locks came from the same process, the
+** second overrides the first, even though they were on different
+** file descriptors opened on different file names.
+**
+** Bummer. If you ask me, this is broken. Badly broken. It means
+** that we cannot use POSIX locks to synchronize file access among
+** competing threads of the same process. POSIX locks will work fine
+** to synchronize access for threads in separate processes, but not
+** threads within the same process.
+**
+** To work around the problem, SQLite has to manage file locks internally
+** on its own. Whenever a new database is opened, we have to find the
+** specific inode of the database file (the inode is determined by the
+** st_dev and st_ino fields of the stat structure that fstat() fills in)
+** and check for locks already existing on that inode. When locks are
+** created or removed, we have to look at our own internal record of the
+** locks to see if another thread has previously set a lock on that same
+** inode.
+**
+** The OsFile structure for POSIX is no longer just an integer file
+** descriptor. It is now a structure that holds the integer file
+** descriptor and a pointer to a structure that describes the internal
+** locks on the corresponding inode. There is one locking structure
+** per inode, so if the same inode is opened twice, both OsFile structures
+** point to the same locking structure. The locking structure keeps
+** a reference count (so we will know when to delete it) and a "cnt"
+** field that tells us its internal lock status. cnt==0 means the
+** file is unlocked. cnt==-1 means the file has an exclusive lock.
+** cnt>0 means there are cnt shared locks on the file.
+**
+** Any attempt to lock or unlock a file first checks the locking
+** structure. The fcntl() system call is only invoked to set a
+** POSIX lock if the internal lock structure transitions between
+** a locked and an unlocked state.
+**
+** 2004-Jan-11:
+** More recent discoveries about POSIX advisory locks. (The more
+** I discover, the more I realize that POSIX advisory locks are
+** an abomination.)
+**
+** If you close a file descriptor that points to a file that has locks,
+** all locks on that file that are owned by the current process are
+** released. To work around this problem, each OsFile structure contains
+** a pointer to an openCnt structure. There is one openCnt structure
+** per open inode, which means that multiple OsFiles can point to a single
+** openCnt. When an attempt is made to close an OsFile, if there are
+** other OsFiles open on the same inode that are holding locks, the call
+** to close() the file descriptor is deferred until all of the locks clear.
+** The openCnt structure keeps a list of file descriptors that need to
+** be closed and that list is walked (and cleared) when the last lock
+** clears.
+**
+** First, under Linux threads, because each thread has a separate
+** process ID, lock operations in one thread do not override locks
+** to the same file in other threads. Linux threads behave like
+** separate processes in this respect. But, if you close a file
+** descriptor in linux threads, all locks are cleared, even locks
+** on other threads and even though the other threads have different
+** process IDs. Linux threads is inconsistent in this respect.
+** (I'm beginning to think that linux threads is an abomination too.)
+** The consequence of this all is that the hash table for the lockInfo
+** structure has to include the process id as part of its key because
+** locks in different threads are treated as distinct. But the
+** openCnt structure should not include the process id in its
+** key because close() clears lock on all threads, not just the current
+** thread. Were it not for this goofiness in linux threads, we could
+** combine the lockInfo and openCnt structures into a single structure.
+**
+** 2004-Jun-28:
+** On some versions of linux, threads can override each others locks.
+** On others not. Sometimes you can change the behavior on the same
+** system by setting the LD_ASSUME_KERNEL environment variable. The
+** POSIX standard is silent as to which behavior is correct, as far
+** as I can tell, so other versions of unix might show the same
+** inconsistency. There is little doubt in my mind that posix
+** advisory locks and linux threads are profoundly broken.
+**
+** To work around the inconsistencies, we have to test at runtime
+** whether or not threads can override each others locks. This test
+** is run once, the first time any lock is attempted. A static
+** variable is set to record the results of this test for future
+** use.
+*/
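+
+/* Added illustrative sketch (an assumption, not part of the original
+** source): the override behavior described above can be reproduced with
+** two descriptors open on the same inode within a single process:
+**
+**   struct flock l;
+**   memset(&l, 0, sizeof(l));
+**   l.l_type = F_WRLCK;  l.l_whence = SEEK_SET;
+**   l.l_start = 0;       l.l_len = 1;
+**   fcntl(fd1, F_SETLK, &l);    first exclusive lock succeeds
+**   fcntl(fd2, F_SETLK, &l);    also succeeds - same process, so it
+**                               silently replaces the first lock
+**   close(fd2);                 and this drops the lock taken on fd1 too
+*/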
+
+/*
+** An instance of the following structure serves as the key used
+** to locate a particular lockInfo structure given its inode.
+**
+** If threads cannot override each others locks, then we set the
+** lockKey.tid field to the thread ID. If threads can override
+** each others locks then tid is always set to zero. tid is omitted
+** if we compile without threading support.
+*/
+struct lockKey {
+ dev_t dev; /* Device number */
+ ino_t ino; /* Inode number */
+#ifdef SQLITE_UNIX_THREADS
+ pthread_t tid; /* Thread ID or zero if threads can override each other */
+#endif
+};
+
+/*
+** An instance of the following structure is allocated for each open
+** inode on each thread with a different process ID. (Threads have
+** different process IDs on linux, but not on most other unixes.)
+**
+** A single inode can have multiple file descriptors, so each OsFile
+** structure contains a pointer to an instance of this object and this
+** object keeps a count of the number of OsFiles pointing to it.
+*/
+struct lockInfo {
+ struct lockKey key; /* The lookup key */
+ int cnt; /* Number of SHARED locks held */
+ int locktype; /* One of SHARED_LOCK, RESERVED_LOCK etc. */
+ int nRef; /* Number of pointers to this structure */
+};
+
+/*
+** An instance of the following structure serves as the key used
+** to locate a particular openCnt structure given its inode. This
+** is the same as the lockKey except that the thread ID is omitted.
+*/
+struct openKey {
+ dev_t dev; /* Device number */
+ ino_t ino; /* Inode number */
+};
+
+/*
+** An instance of the following structure is allocated for each open
+** inode. This structure keeps track of the number of locks on that
+** inode. If a close is attempted against an inode that is holding
+** locks, the close is deferred until all locks clear by adding the
+** file descriptor to be closed to the pending list.
+*/
+struct openCnt {
+ struct openKey key; /* The lookup key */
+ int nRef; /* Number of pointers to this structure */
+ int nLock; /* Number of outstanding locks */
+ int nPending; /* Number of pending close() operations */
+ int *aPending; /* Malloced space holding fd's awaiting a close() */
+};
+
+/*
+** These hash tables map inodes and file descriptors (really, lockKey and
+** openKey structures) into lockInfo and openCnt structures. Access to
+** these hash tables must be protected by a mutex.
+*/
+static Hash lockHash = {SQLITE_HASH_BINARY, 0, 0, 0,
+ sqlite3ThreadSafeMalloc, sqlite3ThreadSafeFree, 0, 0};
+static Hash openHash = {SQLITE_HASH_BINARY, 0, 0, 0,
+ sqlite3ThreadSafeMalloc, sqlite3ThreadSafeFree, 0, 0};
+
+#ifdef SQLITE_ENABLE_LOCKING_STYLE
+/*
+** The locking styles are associated with the different file locking
+** capabilities supported by different file systems.
+**
+** POSIX locking style fully supports shared and exclusive byte-range locks
+** AFP locking only supports exclusive byte-range locks
+** FLOCK only supports a single file-global exclusive lock
+** DOTLOCK isn't a true locking style; it refers to the use of a special
+** file named the same as the database file with a '.lock' extension. This
+** can be used on file systems that do not offer any reliable file locking.
+** NO locking means that no locking will be attempted; this is only used for
+** read-only file systems currently.
+** UNSUPPORTED means that no locking will be attempted; this is only used for
+** file systems that are known to be unsupported.
+*/
+typedef enum {
+ posixLockingStyle = 0, /* standard posix-advisory locks */
+ afpLockingStyle, /* use afp locks */
+ flockLockingStyle, /* use flock() */
+ dotlockLockingStyle, /* use <file>.lock files */
+ noLockingStyle, /* useful for read-only file system */
+ unsupportedLockingStyle /* indicates unsupported file system */
+} sqlite3LockingStyle;
+#endif /* SQLITE_ENABLE_LOCKING_STYLE */
+
+#ifdef SQLITE_UNIX_THREADS
+/*
+** This variable records whether or not threads can override each others
+** locks.
+**
+** 0: No. Threads cannot override each others locks.
+** 1: Yes. Threads can override each others locks.
+** -1: We don't know yet.
+**
+** On some systems, we know at compile-time if threads can override each
+** others locks. On those systems, the SQLITE_THREAD_OVERRIDE_LOCK macro
+** will be set appropriately. On other systems, we have to check at
+** runtime. On these latter systems, SQLITE_THREAD_OVERRIDE_LOCK is
+** undefined.
+**
+** This variable normally has file scope only. But during testing, we make
+** it a global so that the test code can change its value in order to verify
+** that the right stuff happens in either case.
+*/
+#ifndef SQLITE_THREAD_OVERRIDE_LOCK
+# define SQLITE_THREAD_OVERRIDE_LOCK -1
+#endif
+#ifdef SQLITE_TEST
+int threadsOverrideEachOthersLocks = SQLITE_THREAD_OVERRIDE_LOCK;
+#else
+static int threadsOverrideEachOthersLocks = SQLITE_THREAD_OVERRIDE_LOCK;
+#endif
+
+/*
+** This structure holds information passed into individual test
+** threads by the testThreadLockingBehavior() routine.
+*/
+struct threadTestData {
+ int fd; /* File to be locked */
+ struct flock lock; /* The locking operation */
+ int result; /* Result of the locking operation */
+};
+
+#ifdef SQLITE_LOCK_TRACE
+/*
+** Print out information about all locking operations.
+**
+** This routine is used for troubleshooting locks on multithreaded
+** platforms. Enable by compiling with the -DSQLITE_LOCK_TRACE
+** command-line option on the compiler. This code is normally
+** turned off.
+*/
+static int lockTrace(int fd, int op, struct flock *p){
+ char *zOpName, *zType;
+ int s;
+ int savedErrno;
+ if( op==F_GETLK ){
+ zOpName = "GETLK";
+ }else if( op==F_SETLK ){
+ zOpName = "SETLK";
+ }else{
+ s = fcntl(fd, op, p);
+ sqlite3DebugPrintf("fcntl unknown %d %d %d\n", fd, op, s);
+ return s;
+ }
+ if( p->l_type==F_RDLCK ){
+ zType = "RDLCK";
+ }else if( p->l_type==F_WRLCK ){
+ zType = "WRLCK";
+ }else if( p->l_type==F_UNLCK ){
+ zType = "UNLCK";
+ }else{
+ assert( 0 );
+ }
+ assert( p->l_whence==SEEK_SET );
+ s = fcntl(fd, op, p);
+ savedErrno = errno;
+ sqlite3DebugPrintf("fcntl %d %d %s %s %d %d %d %d\n",
+ threadid, fd, zOpName, zType, (int)p->l_start, (int)p->l_len,
+ (int)p->l_pid, s);
+ if( s && op==F_SETLK && (p->l_type==F_RDLCK || p->l_type==F_WRLCK) ){
+ struct flock l2;
+ l2 = *p;
+ fcntl(fd, F_GETLK, &l2);
+ if( l2.l_type==F_RDLCK ){
+ zType = "RDLCK";
+ }else if( l2.l_type==F_WRLCK ){
+ zType = "WRLCK";
+ }else if( l2.l_type==F_UNLCK ){
+ zType = "UNLCK";
+ }else{
+ assert( 0 );
+ }
+ sqlite3DebugPrintf("fcntl-failure-reason: %s %d %d %d\n",
+ zType, (int)l2.l_start, (int)l2.l_len, (int)l2.l_pid);
+ }
+ errno = savedErrno;
+ return s;
+}
+#define fcntl lockTrace
+#endif /* SQLITE_LOCK_TRACE */
+
+/*
+** The testThreadLockingBehavior() routine launches two separate
+** threads on this routine. This routine attempts to lock a file
+** descriptor then returns. The success or failure of that attempt
+** allows the testThreadLockingBehavior() procedure to determine
+** whether or not threads can override each others locks.
+*/
+static void *threadLockingTest(void *pArg){
+ struct threadTestData *pData = (struct threadTestData*)pArg;
+ pData->result = fcntl(pData->fd, F_SETLK, &pData->lock);
+ return pArg;
+}
+
+/*
+** This procedure attempts to determine whether or not threads
+** can override each others locks then sets the
+** threadsOverrideEachOthersLocks variable appropriately.
+*/
+static void testThreadLockingBehavior(int fd_orig){
+ int fd;
+ struct threadTestData d[2];
+ pthread_t t[2];
+
+ fd = dup(fd_orig);
+ if( fd<0 ) return;
+ memset(d, 0, sizeof(d));
+ d[0].fd = fd;
+ d[0].lock.l_type = F_RDLCK;
+ d[0].lock.l_len = 1;
+ d[0].lock.l_start = 0;
+ d[0].lock.l_whence = SEEK_SET;
+ d[1] = d[0];
+ d[1].lock.l_type = F_WRLCK;
+ pthread_create(&t[0], 0, threadLockingTest, &d[0]);
+ pthread_create(&t[1], 0, threadLockingTest, &d[1]);
+ pthread_join(t[0], 0);
+ pthread_join(t[1], 0);
+ close(fd);
+ threadsOverrideEachOthersLocks = d[0].result==0 && d[1].result==0;
+}
+#endif /* SQLITE_UNIX_THREADS */
+
+/*
+** Release a lockInfo structure previously allocated by findLockInfo().
+*/
+static void releaseLockInfo(struct lockInfo *pLock){
+ assert( sqlite3OsInMutex(1) );
+ if (pLock == NULL)
+ return;
+ pLock->nRef--;
+ if( pLock->nRef==0 ){
+ sqlite3HashInsert(&lockHash, &pLock->key, sizeof(pLock->key), 0);
+ sqlite3ThreadSafeFree(pLock);
+ }
+}
+
+/*
+** Release a openCnt structure previously allocated by findLockInfo().
+*/
+static void releaseOpenCnt(struct openCnt *pOpen){
+ assert( sqlite3OsInMutex(1) );
+ if (pOpen == NULL)
+ return;
+ pOpen->nRef--;
+ if( pOpen->nRef==0 ){
+ sqlite3HashInsert(&openHash, &pOpen->key, sizeof(pOpen->key), 0);
+ free(pOpen->aPending);
+ sqlite3ThreadSafeFree(pOpen);
+ }
+}
+
+#ifdef SQLITE_ENABLE_LOCKING_STYLE
+/*
+** Tests a byte-range locking query to see if byte range locks are
+** supported; if not, we fall back to dotlockLockingStyle.
+*/
+static sqlite3LockingStyle sqlite3TestLockingStyle(const char *filePath,
+ int fd) {
+ /* test byte-range lock using fcntl */
+ struct flock lockInfo;
+
+ lockInfo.l_len = 1;
+ lockInfo.l_start = 0;
+ lockInfo.l_whence = SEEK_SET;
+ lockInfo.l_type = F_RDLCK;
+
+ if (fcntl(fd, F_GETLK, &lockInfo) != -1) {
+ return posixLockingStyle;
+ }
+
+ /* testing for flock can give false positives. So if the above test
+ ** fails, then we fall back to using dot-lock style locking.
+ */
+ return dotlockLockingStyle;
+}
+
+/*
+** Examines the f_fstypename entry in the statfs structure as returned by
+** statfs() for the file system hosting the database file, and assigns the
+** appropriate locking style based on its value. These values and
+** assignments are based on Darwin/OSX behavior and have not been tested on
+** other systems.
+*/
+static sqlite3LockingStyle sqlite3DetectLockingStyle(const char *filePath,
+ int fd) {
+
+#ifdef SQLITE_FIXED_LOCKING_STYLE
+ return (sqlite3LockingStyle)SQLITE_FIXED_LOCKING_STYLE;
+#else
+ struct statfs fsInfo;
+
+ if (statfs(filePath, &fsInfo) == -1)
+ return sqlite3TestLockingStyle(filePath, fd);
+
+ if (fsInfo.f_flags & MNT_RDONLY)
+ return noLockingStyle;
+
+ if( (!strcmp(fsInfo.f_fstypename, "hfs")) ||
+ (!strcmp(fsInfo.f_fstypename, "ufs")) )
+ return posixLockingStyle;
+
+ if(!strcmp(fsInfo.f_fstypename, "afpfs"))
+ return afpLockingStyle;
+
+ if(!strcmp(fsInfo.f_fstypename, "nfs"))
+ return sqlite3TestLockingStyle(filePath, fd);
+
+ if(!strcmp(fsInfo.f_fstypename, "smbfs"))
+ return flockLockingStyle;
+
+ if(!strcmp(fsInfo.f_fstypename, "msdos"))
+ return dotlockLockingStyle;
+
+ if(!strcmp(fsInfo.f_fstypename, "webdav"))
+ return unsupportedLockingStyle;
+
+ return sqlite3TestLockingStyle(filePath, fd);
+#endif /* SQLITE_FIXED_LOCKING_STYLE */
+}
+
+#endif /* SQLITE_ENABLE_LOCKING_STYLE */
+
+/*
+** Given a file descriptor, locate lockInfo and openCnt structures that
+** describe that file descriptor. Create new ones if necessary. The
+** return values might be uninitialized if an error occurs.
+**
+** Return the number of errors.
+*/
+static int findLockInfo(
+ int fd, /* The file descriptor used in the key */
+ struct lockInfo **ppLock, /* Return the lockInfo structure here */
+ struct openCnt **ppOpen /* Return the openCnt structure here */
+){
+ int rc;
+ struct lockKey key1;
+ struct openKey key2;
+ struct stat statbuf;
+ struct lockInfo *pLock;
+ struct openCnt *pOpen;
+ rc = fstat(fd, &statbuf);
+ if( rc!=0 ) return 1;
+
+ assert( sqlite3OsInMutex(1) );
+ memset(&key1, 0, sizeof(key1));
+ key1.dev = statbuf.st_dev;
+ key1.ino = statbuf.st_ino;
+#ifdef SQLITE_UNIX_THREADS
+ if( threadsOverrideEachOthersLocks<0 ){
+ testThreadLockingBehavior(fd);
+ }
+ key1.tid = threadsOverrideEachOthersLocks ? 0 : pthread_self();
+#endif
+ memset(&key2, 0, sizeof(key2));
+ key2.dev = statbuf.st_dev;
+ key2.ino = statbuf.st_ino;
+ pLock = (struct lockInfo*)sqlite3HashFind(&lockHash, &key1, sizeof(key1));
+ if( pLock==0 ){
+ struct lockInfo *pOld;
+ pLock = sqlite3ThreadSafeMalloc( sizeof(*pLock) );
+ if( pLock==0 ){
+ rc = 1;
+ goto exit_findlockinfo;
+ }
+ pLock->key = key1;
+ pLock->nRef = 1;
+ pLock->cnt = 0;
+ pLock->locktype = 0;
+ pOld = sqlite3HashInsert(&lockHash, &pLock->key, sizeof(key1), pLock);
+ if( pOld!=0 ){
+ assert( pOld==pLock );
+ sqlite3ThreadSafeFree(pLock);
+ rc = 1;
+ goto exit_findlockinfo;
+ }
+ }else{
+ pLock->nRef++;
+ }
+ *ppLock = pLock;
+ if( ppOpen!=0 ){
+ pOpen = (struct openCnt*)sqlite3HashFind(&openHash, &key2, sizeof(key2));
+ if( pOpen==0 ){
+ struct openCnt *pOld;
+ pOpen = sqlite3ThreadSafeMalloc( sizeof(*pOpen) );
+ if( pOpen==0 ){
+ releaseLockInfo(pLock);
+ rc = 1;
+ goto exit_findlockinfo;
+ }
+ pOpen->key = key2;
+ pOpen->nRef = 1;
+ pOpen->nLock = 0;
+ pOpen->nPending = 0;
+ pOpen->aPending = 0;
+ pOld = sqlite3HashInsert(&openHash, &pOpen->key, sizeof(key2), pOpen);
+ if( pOld!=0 ){
+ assert( pOld==pOpen );
+ sqlite3ThreadSafeFree(pOpen);
+ releaseLockInfo(pLock);
+ rc = 1;
+ goto exit_findlockinfo;
+ }
+ }else{
+ pOpen->nRef++;
+ }
+ *ppOpen = pOpen;
+ }
+
+exit_findlockinfo:
+ return rc;
+}
+
+#ifdef SQLITE_DEBUG
+/*
+** Helper function for printing out trace information from debugging
+** binaries. This returns the string representation of the supplied
+** integer lock-type.
+*/
+static const char *locktypeName(int locktype){
+ switch( locktype ){
+ case NO_LOCK: return "NONE";
+ case SHARED_LOCK: return "SHARED";
+ case RESERVED_LOCK: return "RESERVED";
+ case PENDING_LOCK: return "PENDING";
+ case EXCLUSIVE_LOCK: return "EXCLUSIVE";
+ }
+ return "ERROR";
+}
+#endif
+
+/*
+** If we are currently in a different thread than the thread that the
+** unixFile argument belongs to, then transfer ownership of the unixFile
+** over to the current thread.
+**
+** A unixFile is only owned by a thread on systems where one thread is
+** unable to override locks created by a different thread. RedHat9 is
+** an example of such a system.
+**
+** Ownership transfer is only allowed if the unixFile is currently unlocked.
+** If the unixFile is locked and the ownership is wrong, then return
+** SQLITE_MISUSE. SQLITE_OK is returned if everything works.
+*/
+#ifdef SQLITE_UNIX_THREADS
+static int transferOwnership(unixFile *pFile){
+ int rc;
+ pthread_t hSelf;
+ if( threadsOverrideEachOthersLocks ){
+ /* Ownership transfers not needed on this system */
+ return SQLITE_OK;
+ }
+ hSelf = pthread_self();
+ if( pthread_equal(pFile->tid, hSelf) ){
+ /* We are still in the same thread */
+ TRACE1("No-transfer, same thread\n");
+ return SQLITE_OK;
+ }
+ if( pFile->locktype!=NO_LOCK ){
+ /* We cannot change ownership while we are holding a lock! */
+ return SQLITE_MISUSE;
+ }
+ TRACE4("Transfer ownership of %d from %d to %d\n", pFile->h,pFile->tid,hSelf);
+ pFile->tid = hSelf;
+ if (pFile->pLock != NULL) {
+ releaseLockInfo(pFile->pLock);
+ rc = findLockInfo(pFile->h, &pFile->pLock, 0);
+ TRACE5("LOCK %d is now %s(%s,%d)\n", pFile->h,
+ locktypeName(pFile->locktype),
+ locktypeName(pFile->pLock->locktype), pFile->pLock->cnt);
+ return rc;
+ } else {
+ return SQLITE_OK;
+ }
+}
+#else
+ /* On single-threaded builds, ownership transfer is a no-op */
+# define transferOwnership(X) SQLITE_OK
+#endif
+
+/*
+** Delete the named file
+*/
+int sqlite3UnixDelete(const char *zFilename){
+ unlink(zFilename);
+ return SQLITE_OK;
+}
+
+/*
+** Return TRUE if the named file exists.
+*/
+int sqlite3UnixFileExists(const char *zFilename){
+ return access(zFilename, 0)==0;
+}
+
+/* Forward declaration */
+static int allocateUnixFile(
+ int h, /* File descriptor of the open file */
+ OsFile **pId, /* Write the real file descriptor here */
+ const char *zFilename, /* Name of the file being opened */
+ int delFlag /* If true, make sure the file deletes on close */
+);
+
+/*
+** Attempt to open a file for both reading and writing. If that
+** fails, try opening it read-only. If the file does not exist,
+** try to create it.
+**
+** On success, a handle for the open file is written to *id
+** and *pReadonly is set to 0 if the file was opened for reading and
+** writing or 1 if the file was opened read-only. The function returns
+** SQLITE_OK.
+**
+** On failure, the function returns SQLITE_CANTOPEN and leaves
+** *id and *pReadonly unchanged.
+*/
+int sqlite3UnixOpenReadWrite(
+ const char *zFilename,
+ OsFile **pId,
+ int *pReadonly
+){
+ int h;
+
+ CRASH_TEST_OVERRIDE(sqlite3CrashOpenReadWrite, zFilename, pId, pReadonly);
+ assert( 0==*pId );
+ h = open(zFilename, O_RDWR|O_CREAT|O_LARGEFILE|O_BINARY,
+ SQLITE_DEFAULT_FILE_PERMISSIONS);
+ if( h<0 ){
+#ifdef EISDIR
+ if( errno==EISDIR ){
+ return SQLITE_CANTOPEN;
+ }
+#endif
+ h = open(zFilename, O_RDONLY|O_LARGEFILE|O_BINARY);
+ if( h<0 ){
+ return SQLITE_CANTOPEN;
+ }
+ *pReadonly = 1;
+ }else{
+ *pReadonly = 0;
+ }
+ return allocateUnixFile(h, pId, zFilename, 0);
+}
+
+
+/*
+** Attempt to open a new file for exclusive access by this process.
+** The file will be opened for both reading and writing. To avoid
+** a potential security problem, we do not allow the file to have
+** previously existed. Nor do we allow the file to be a symbolic
+** link.
+**
+** If delFlag is true, then make arrangements to automatically delete
+** the file when it is closed.
+**
+** On success, write the file handle into *id and return SQLITE_OK.
+**
+** On failure, return SQLITE_CANTOPEN.
+*/
+int sqlite3UnixOpenExclusive(const char *zFilename, OsFile **pId, int delFlag){
+ int h;
+
+ CRASH_TEST_OVERRIDE(sqlite3CrashOpenExclusive, zFilename, pId, delFlag);
+ assert( 0==*pId );
+ h = open(zFilename,
+ O_RDWR|O_CREAT|O_EXCL|O_NOFOLLOW|O_LARGEFILE|O_BINARY,
+ SQLITE_DEFAULT_FILE_PERMISSIONS);
+ if( h<0 ){
+ return SQLITE_CANTOPEN;
+ }
+ return allocateUnixFile(h, pId, zFilename, delFlag);
+}
+
+/*
+** Attempt to open a new file for read-only access.
+**
+** On success, write the file handle into *id and return SQLITE_OK.
+**
+** On failure, return SQLITE_CANTOPEN.
+*/
+int sqlite3UnixOpenReadOnly(const char *zFilename, OsFile **pId){
+ int h;
+
+ CRASH_TEST_OVERRIDE(sqlite3CrashOpenReadOnly, zFilename, pId, 0);
+ assert( 0==*pId );
+ h = open(zFilename, O_RDONLY|O_LARGEFILE|O_BINARY);
+ if( h<0 ){
+ return SQLITE_CANTOPEN;
+ }
+ return allocateUnixFile(h, pId, zFilename, 0);
+}
+
+/*
+** Attempt to open a file descriptor for the directory that contains a
+** file. This file descriptor can be used to fsync() the directory
+** in order to make sure the creation of a new file is actually written
+** to disk.
+**
+** This routine is only meaningful for Unix. It is a no-op under
+** windows since windows does not support hard links.
+**
+** If FULL_FSYNC is enabled, this function is no longer useful, since
+** a FULL_FSYNC sync applies to all pending disk operations.
+**
+** On success, a handle for a previously open file at *id is
+** updated with the new directory file descriptor and SQLITE_OK is
+** returned.
+**
+** On failure, the function returns SQLITE_CANTOPEN and leaves
+** *id unchanged.
+*/
+static int unixOpenDirectory(
+ OsFile *id,
+ const char *zDirname
+){
+ unixFile *pFile = (unixFile*)id;
+ if( pFile==0 ){
+ /* Do not open the directory if the corresponding file is not already
+ ** open. */
+ return SQLITE_CANTOPEN;
+ }
+ SET_THREADID(pFile);
+ assert( pFile->dirfd<0 );
+ pFile->dirfd = open(zDirname, O_RDONLY|O_BINARY, 0);
+ if( pFile->dirfd<0 ){
+ return SQLITE_CANTOPEN;
+ }
+ TRACE3("OPENDIR %-3d %s\n", pFile->dirfd, zDirname);
+ return SQLITE_OK;
+}
+
+/*
+** If the following global variable points to a string which is the
+** name of a directory, then that directory will be used to store
+** temporary files.
+**
+** See also the "PRAGMA temp_store_directory" SQL command.
+*/
+char *sqlite3_temp_directory = 0;
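+
+/* Added usage sketch (an assumption, not part of the original source):
+** an embedding application can point temporary files at a directory of
+** its choosing, either from C or with the pragma mentioned above:
+**
+**   sqlite3_temp_directory = "/var/tmp/myapp";        from C code
+**   PRAGMA temp_store_directory = '/var/tmp/myapp';   from SQL
+**
+** where "/var/tmp/myapp" is a hypothetical directory.
+*/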
+
+/*
+** Create a temporary file name in zBuf. zBuf must be big enough to
+** hold at least SQLITE_TEMPNAME_SIZE characters.
+*/
+int sqlite3UnixTempFileName(char *zBuf){
+ static const char *azDirs[] = {
+ 0,
+ "/var/tmp",
+ "/usr/tmp",
+ "/tmp",
+ ".",
+ };
+ static const unsigned char zChars[] =
+ "abcdefghijklmnopqrstuvwxyz"
+ "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+ "0123456789";
+ int i, j;
+ struct stat buf;
+ const char *zDir = ".";
+ azDirs[0] = sqlite3_temp_directory;
+ for(i=0; i<sizeof(azDirs)/sizeof(azDirs[0]); i++){
+ if( azDirs[i]==0 ) continue;
+ if( stat(azDirs[i], &buf) ) continue;
+ if( !S_ISDIR(buf.st_mode) ) continue;
+ if( access(azDirs[i], 07) ) continue;
+ zDir = azDirs[i];
+ break;
+ }
+ do{
+ sprintf(zBuf, "%s/"TEMP_FILE_PREFIX, zDir);
+ j = strlen(zBuf);
+ sqlite3Randomness(15, &zBuf[j]);
+ for(i=0; i<15; i++, j++){
+ zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ];
+ }
+ zBuf[j] = 0;
+ }while( access(zBuf,0)==0 );
+ return SQLITE_OK;
+}
+
+/*
+** Check that a given pathname is a directory and is writable
+**
+*/
+int sqlite3UnixIsDirWritable(char *zBuf){
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+ struct stat buf;
+ if( zBuf==0 ) return 0;
+ if( zBuf[0]==0 ) return 0;
+ if( stat(zBuf, &buf) ) return 0;
+ if( !S_ISDIR(buf.st_mode) ) return 0;
+ if( access(zBuf, 07) ) return 0;
+#endif /* SQLITE_OMIT_PAGER_PRAGMAS */
+ return 1;
+}
+
+/*
+** Seek to the offset in id->offset then read cnt bytes into pBuf.
+** Return the number of bytes actually read. Update the offset.
+*/
+static int seekAndRead(unixFile *id, void *pBuf, int cnt){
+ int got;
+#ifdef USE_PREAD
+ got = pread(id->h, pBuf, cnt, id->offset);
+#else
+ lseek(id->h, id->offset, SEEK_SET);
+ got = read(id->h, pBuf, cnt);
+#endif
+ if( got>0 ){
+ id->offset += got;
+ }
+ return got;
+}
+
+/*
+** Read data from a file into a buffer. Return SQLITE_OK if all
+** bytes were read successfully and SQLITE_IOERR if anything goes
+** wrong.
+*/
+static int unixRead(OsFile *id, void *pBuf, int amt){
+ int got;
+ assert( id );
+ TIMER_START;
+ got = seekAndRead((unixFile*)id, pBuf, amt);
+ TIMER_END;
+ TRACE5("READ %-3d %5d %7d %d\n", ((unixFile*)id)->h, got,
+ last_page, TIMER_ELAPSED);
+ SEEK(0);
+ SimulateIOError( got=0 );
+ if( got==amt ){
+ return SQLITE_OK;
+ }else if( got<0 ){
+ return SQLITE_IOERR_READ;
+ }else{
+ return SQLITE_IOERR_SHORT_READ;
+ }
+}
+
+/*
+** Seek to the offset in id->offset then write cnt bytes from pBuf.
+** Return the number of bytes actually written. Update the offset.
+*/
+static int seekAndWrite(unixFile *id, const void *pBuf, int cnt){
+ int got;
+#ifdef USE_PREAD
+ got = pwrite(id->h, pBuf, cnt, id->offset);
+#else
+ lseek(id->h, id->offset, SEEK_SET);
+ got = write(id->h, pBuf, cnt);
+#endif
+ if( got>0 ){
+ id->offset += got;
+ }
+ return got;
+}
+
+
+/*
+** Write data from a buffer into a file. Return SQLITE_OK on success
+** or some other error code on failure.
+*/
+static int unixWrite(OsFile *id, const void *pBuf, int amt){
+ int wrote = 0;
+ assert( id );
+ assert( amt>0 );
+ TIMER_START;
+ while( amt>0 && (wrote = seekAndWrite((unixFile*)id, pBuf, amt))>0 ){
+ amt -= wrote;
+ pBuf = &((char*)pBuf)[wrote];
+ }
+ TIMER_END;
+ TRACE5("WRITE %-3d %5d %7d %d\n", ((unixFile*)id)->h, wrote,
+ last_page, TIMER_ELAPSED);
+ SEEK(0);
+ SimulateIOError(( wrote=(-1), amt=1 ));
+ SimulateDiskfullError(( wrote=0, amt=1 ));
+ if( amt>0 ){
+ if( wrote<0 ){
+ return SQLITE_IOERR_WRITE;
+ }else{
+ return SQLITE_FULL;
+ }
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Move the read/write pointer in a file.
+*/
+static int unixSeek(OsFile *id, i64 offset){
+ assert( id );
+ SEEK(offset/1024 + 1);
+#ifdef SQLITE_TEST
+ if( offset ) SimulateDiskfullError(return SQLITE_FULL);
+#endif
+ ((unixFile*)id)->offset = offset;
+ return SQLITE_OK;
+}
+
+#ifdef SQLITE_TEST
+/*
+** Count the number of fullsyncs and normal syncs. This is used to test
+** that syncs and fullsyncs are occurring at the right times.
+*/
+int sqlite3_sync_count = 0;
+int sqlite3_fullsync_count = 0;
+#endif
+
+/*
+** Use the fdatasync() API only if the HAVE_FDATASYNC macro is defined.
+** Otherwise use fsync() in its place.
+*/
+#ifndef HAVE_FDATASYNC
+# define fdatasync fsync
+#endif
+
+/*
+** Define HAVE_FULLFSYNC to 0 or 1 depending on whether or not
+** the F_FULLFSYNC macro is defined. F_FULLFSYNC is currently
+** only available on Mac OS X. But that could change.
+*/
+#ifdef F_FULLFSYNC
+# define HAVE_FULLFSYNC 1
+#else
+# define HAVE_FULLFSYNC 0
+#endif
+
+
+/*
+** The fsync() system call does not work as advertised on many
+** unix systems. The following procedure is an attempt to make
+** it work better.
+**
+** The SQLITE_NO_SYNC macro disables all fsync()s. This is useful
+** for testing when we want to run through the test suite quickly.
+** You are strongly advised *not* to deploy with SQLITE_NO_SYNC
+** enabled, however, since with SQLITE_NO_SYNC enabled, an OS crash
+** or power failure will likely corrupt the database file.
+*/
+static int full_fsync(int fd, int fullSync, int dataOnly){
+ int rc;
+
+ /* Record the number of times that we do a normal fsync() and
+ ** FULLSYNC. This is used during testing to verify that this procedure
+ ** gets called with the correct arguments.
+ */
+#ifdef SQLITE_TEST
+ if( fullSync ) sqlite3_fullsync_count++;
+ sqlite3_sync_count++;
+#endif
+
+ /* If we compiled with the SQLITE_NO_SYNC flag, then syncing is a
+ ** no-op
+ */
+#ifdef SQLITE_NO_SYNC
+ rc = SQLITE_OK;
+#else
+
+#if HAVE_FULLFSYNC
+ if( fullSync ){
+ rc = fcntl(fd, F_FULLFSYNC, 0);
+ }else
+#endif /* HAVE_FULLFSYNC */
+ if( dataOnly ){
+ rc = fdatasync(fd);
+ }else{
+ rc = fsync(fd);
+ }
+#endif /* defined(SQLITE_NO_SYNC) */
+
+ return rc;
+}
+
+/*
+** Make sure all writes to a particular file are committed to disk.
+**
+** If dataOnly==0 then both the file itself and its metadata (file
+** size, access time, etc) are synced. If dataOnly!=0 then only the
+** file data is synced.
+**
+** Under Unix, also make sure that the directory entry for the file
+** has been created by fsync-ing the directory that contains the file.
+** If we do not do this and we encounter a power failure, the directory
+** entry for the journal might not exist after we reboot. The next
+** SQLite to access the file will not know that the journal exists (because
+** the directory entry for the journal was never created) and the transaction
+** will not roll back - possibly leading to database corruption.
+*/
+static int unixSync(OsFile *id, int dataOnly){
+ int rc;
+ unixFile *pFile = (unixFile*)id;
+ assert( pFile );
+ TRACE2("SYNC %-3d\n", pFile->h);
+ rc = full_fsync(pFile->h, pFile->fullSync, dataOnly);
+ SimulateIOError( rc=1 );
+ if( rc ){
+ return SQLITE_IOERR_FSYNC;
+ }
+ if( pFile->dirfd>=0 ){
+ TRACE4("DIRSYNC %-3d (have_fullfsync=%d fullsync=%d)\n", pFile->dirfd,
+ HAVE_FULLFSYNC, pFile->fullSync);
+#ifndef SQLITE_DISABLE_DIRSYNC
+ /* The directory sync is only attempted if full_fsync is
+ ** turned off or unavailable. If a full_fsync occurred above,
+ ** then the directory sync is superfluous.
+ */
+ if( (!HAVE_FULLFSYNC || !pFile->fullSync) && full_fsync(pFile->dirfd,0,0) ){
+ /*
+ ** We have received multiple reports of fsync() returning
+ ** errors when applied to directories on certain file systems.
+ ** A failed directory sync is not a big deal. So it seems
+ ** better to ignore the error. Ticket #1657
+ */
+ /* return SQLITE_IOERR; */
+ }
+#endif
+ close(pFile->dirfd); /* Only need to sync once, so close the directory */
+ pFile->dirfd = -1; /* when we are done. */
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Sync the directory zDirname. This is a no-op on operating systems other
+** than UNIX.
+**
+** This is used to make sure the master journal file has truly been deleted
+** before making changes to individual journals on a multi-database commit.
+** The F_FULLFSYNC option is not needed here.
+*/
+int sqlite3UnixSyncDirectory(const char *zDirname){
+#ifdef SQLITE_DISABLE_DIRSYNC
+ return SQLITE_OK;
+#else
+ int fd;
+ int r;
+ fd = open(zDirname, O_RDONLY|O_BINARY, 0);
+ TRACE3("DIRSYNC %-3d (%s)\n", fd, zDirname);
+ if( fd<0 ){
+ return SQLITE_CANTOPEN;
+ }
+ r = fsync(fd);
+ close(fd);
+ SimulateIOError( r=1 );
+ if( r ){
+ return SQLITE_IOERR_DIR_FSYNC;
+ }else{
+ return SQLITE_OK;
+ }
+#endif
+}
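+
+/* Illustrative sketch (not part of the SQLite sources): the "durable create"
+** pattern that the directory-sync logic above exists to support.  To make a
+** newly created file survive a power failure, the file itself is synced and
+** then the directory holding its name is synced as well.  The helper name
+** below is hypothetical and the error handling is simplified.
+*/
+#if 0
+static int exampleDurableCreate(const char *zPath, const char *zDir){
+  int fd, dirfd;
+  fd = open(zPath, O_RDWR|O_CREAT, 0644);     /* create the file */
+  if( fd<0 ) return SQLITE_CANTOPEN;
+  if( fsync(fd) ){ close(fd); return SQLITE_IOERR_FSYNC; }
+  close(fd);
+  dirfd = open(zDir, O_RDONLY, 0);            /* now sync the directory */
+  if( dirfd<0 ) return SQLITE_CANTOPEN;
+  if( fsync(dirfd) ){ close(dirfd); return SQLITE_IOERR_DIR_FSYNC; }
+  close(dirfd);
+  return SQLITE_OK;
+}
+#endif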
+
+/*
+** Truncate an open file to a specified size
+*/
+static int unixTruncate(OsFile *id, i64 nByte){
+ int rc;
+ assert( id );
+ rc = ftruncate(((unixFile*)id)->h, nByte);
+ SimulateIOError( rc=1 );
+ if( rc ){
+ return SQLITE_IOERR_TRUNCATE;
+ }else{
+ return SQLITE_OK;
+ }
+}
+
+/*
+** Determine the current size of a file in bytes
+*/
+static int unixFileSize(OsFile *id, i64 *pSize){
+ int rc;
+ struct stat buf;
+ assert( id );
+ rc = fstat(((unixFile*)id)->h, &buf);
+ SimulateIOError( rc=1 );
+ if( rc!=0 ){
+ return SQLITE_IOERR_FSTAT;
+ }
+ *pSize = buf.st_size;
+ return SQLITE_OK;
+}
+
+/*
+** This routine checks if there is a RESERVED lock held on the specified
+** file by this or any other process. If such a lock is held, return
+** non-zero. If the file is unlocked or holds only SHARED locks, then
+** return zero.
+*/
+static int unixCheckReservedLock(OsFile *id){
+ int r = 0;
+ unixFile *pFile = (unixFile*)id;
+
+ assert( pFile );
+ sqlite3OsEnterMutex(); /* Because pFile->pLock is shared across threads */
+
+ /* Check if a thread in this process holds such a lock */
+ if( pFile->pLock->locktype>SHARED_LOCK ){
+ r = 1;
+ }
+
+ /* Otherwise see if some other process holds it.
+ */
+ if( !r ){
+ struct flock lock;
+ lock.l_whence = SEEK_SET;
+ lock.l_start = RESERVED_BYTE;
+ lock.l_len = 1;
+ lock.l_type = F_WRLCK;
+ fcntl(pFile->h, F_GETLK, &lock);
+ if( lock.l_type!=F_UNLCK ){
+ r = 1;
+ }
+ }
+
+ sqlite3OsLeaveMutex();
+ TRACE3("TEST WR-LOCK %d %d\n", pFile->h, r);
+
+ return r;
+}
+
+/*
+** Lock the file with the lock specified by parameter locktype - one
+** of the following:
+**
+** (1) SHARED_LOCK
+** (2) RESERVED_LOCK
+** (3) PENDING_LOCK
+** (4) EXCLUSIVE_LOCK
+**
+** Sometimes when requesting one lock state, additional lock states
+** are inserted in between. The locking might fail on one of the later
+** transitions leaving the lock state different from what it started but
+** still short of its goal. The following chart shows the allowed
+** transitions and the inserted intermediate states:
+**
+** UNLOCKED -> SHARED
+** SHARED -> RESERVED
+** SHARED -> (PENDING) -> EXCLUSIVE
+** RESERVED -> (PENDING) -> EXCLUSIVE
+** PENDING -> EXCLUSIVE
+**
+** This routine will only increase a lock. Use the sqlite3OsUnlock()
+** routine to lower a locking level.
+*/
+static int unixLock(OsFile *id, int locktype){
+ /* The following describes the implementation of the various locks and
+ ** lock transitions in terms of the POSIX advisory shared and exclusive
+ ** lock primitives (called read-locks and write-locks below, to avoid
+ ** confusion with SQLite lock names). The algorithms are complicated
+ ** slightly in order to be compatible with windows systems simultaneously
+ ** accessing the same database file, in case that is ever required.
+ **
+ ** Symbols defined in os.h identify the 'pending byte' and the 'reserved
+ ** byte', each single bytes at well known offsets, and the 'shared byte
+ ** range', a range of 510 bytes at a well known offset.
+ **
+ ** To obtain a SHARED lock, a read-lock is obtained on the 'pending
+ ** byte'. If this is successful, a random byte from the 'shared byte
+ ** range' is read-locked and the lock on the 'pending byte' released.
+ **
+ ** A process may only obtain a RESERVED lock after it has a SHARED lock.
+ ** A RESERVED lock is implemented by grabbing a write-lock on the
+ ** 'reserved byte'.
+ **
+ ** A process may only obtain a PENDING lock after it has obtained a
+ ** SHARED lock. A PENDING lock is implemented by obtaining a write-lock
+ ** on the 'pending byte'. This ensures that no new SHARED locks can be
+ ** obtained, but existing SHARED locks are allowed to persist. A process
+ ** does not have to obtain a RESERVED lock on the way to a PENDING lock.
+ ** This property is used by the algorithm for rolling back a journal file
+ ** after a crash.
+ **
+ ** An EXCLUSIVE lock, obtained after a PENDING lock is held, is
+ ** implemented by obtaining a write-lock on the entire 'shared byte
+ ** range'. Since all other locks require a read-lock on one of the bytes
+ ** within this range, this ensures that no other locks are held on the
+ ** database.
+ **
+ ** The reason a single byte cannot be used instead of the 'shared byte
+ ** range' is that some versions of windows do not support read-locks. By
+ ** locking a random byte from a range, concurrent SHARED locks may exist
+ ** even if the locking primitive used is always a write-lock.
+ */
+ int rc = SQLITE_OK;
+ unixFile *pFile = (unixFile*)id;
+ struct lockInfo *pLock = pFile->pLock;
+ struct flock lock;
+ int s;
+
+ assert( pFile );
+ TRACE7("LOCK %d %s was %s(%s,%d) pid=%d\n", pFile->h,
+ locktypeName(locktype), locktypeName(pFile->locktype),
+ locktypeName(pLock->locktype), pLock->cnt , getpid());
+
+ /* If there is already a lock of this type or more restrictive on the
+ ** OsFile, do nothing. Don't use the end_lock: exit path, as
+ ** sqlite3OsEnterMutex() hasn't been called yet.
+ */
+ if( pFile->locktype>=locktype ){
+ TRACE3("LOCK %d %s ok (already held)\n", pFile->h,
+ locktypeName(locktype));
+ return SQLITE_OK;
+ }
+
+ /* Make sure the locking sequence is correct
+ */
+ assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK );
+ assert( locktype!=PENDING_LOCK );
+ assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK );
+
+ /* This mutex is needed because pFile->pLock is shared across threads
+ */
+ sqlite3OsEnterMutex();
+
+ /* Make sure the current thread owns the pFile.
+ */
+ rc = transferOwnership(pFile);
+ if( rc!=SQLITE_OK ){
+ sqlite3OsLeaveMutex();
+ return rc;
+ }
+ pLock = pFile->pLock;
+
+ /* If some thread using this PID has a lock via a different OsFile*
+ ** handle that precludes the requested lock, return BUSY.
+ */
+ if( (pFile->locktype!=pLock->locktype &&
+ (pLock->locktype>=PENDING_LOCK || locktype>SHARED_LOCK))
+ ){
+ rc = SQLITE_BUSY;
+ goto end_lock;
+ }
+
+ /* If a SHARED lock is requested, and some thread using this PID already
+ ** has a SHARED or RESERVED lock, then increment reference counts and
+ ** return SQLITE_OK.
+ */
+ if( locktype==SHARED_LOCK &&
+ (pLock->locktype==SHARED_LOCK || pLock->locktype==RESERVED_LOCK) ){
+ assert( locktype==SHARED_LOCK );
+ assert( pFile->locktype==0 );
+ assert( pLock->cnt>0 );
+ pFile->locktype = SHARED_LOCK;
+ pLock->cnt++;
+ pFile->pOpen->nLock++;
+ goto end_lock;
+ }
+
+ lock.l_len = 1L;
+
+ lock.l_whence = SEEK_SET;
+
+ /* A PENDING lock is needed before acquiring a SHARED lock and before
+ ** acquiring an EXCLUSIVE lock. For the SHARED lock, the PENDING will
+ ** be released.
+ */
+ if( locktype==SHARED_LOCK
+ || (locktype==EXCLUSIVE_LOCK && pFile->locktype<PENDING_LOCK)
+ ){
+ lock.l_type = (locktype==SHARED_LOCK?F_RDLCK:F_WRLCK);
+ lock.l_start = PENDING_BYTE;
+ s = fcntl(pFile->h, F_SETLK, &lock);
+ if( s ){
+ rc = (errno==EINVAL) ? SQLITE_NOLFS : SQLITE_BUSY;
+ goto end_lock;
+ }
+ }
+
+
+ /* If control gets to this point, then actually go ahead and make
+ ** operating system calls for the specified lock.
+ */
+ if( locktype==SHARED_LOCK ){
+ assert( pLock->cnt==0 );
+ assert( pLock->locktype==0 );
+
+ /* Now get the read-lock */
+ lock.l_start = SHARED_FIRST;
+ lock.l_len = SHARED_SIZE;
+ s = fcntl(pFile->h, F_SETLK, &lock);
+
+ /* Drop the temporary PENDING lock */
+ lock.l_start = PENDING_BYTE;
+ lock.l_len = 1L;
+ lock.l_type = F_UNLCK;
+ if( fcntl(pFile->h, F_SETLK, &lock)!=0 ){
+ rc = SQLITE_IOERR_UNLOCK; /* This should never happen */
+ goto end_lock;
+ }
+ if( s ){
+ rc = (errno==EINVAL) ? SQLITE_NOLFS : SQLITE_BUSY;
+ }else{
+ pFile->locktype = SHARED_LOCK;
+ pFile->pOpen->nLock++;
+ pLock->cnt = 1;
+ }
+ }else if( locktype==EXCLUSIVE_LOCK && pLock->cnt>1 ){
+ /* We are trying for an exclusive lock but another thread in this
+ ** same process is still holding a shared lock. */
+ rc = SQLITE_BUSY;
+ }else{
+ /* The request was for a RESERVED or EXCLUSIVE lock. It is
+ ** assumed that there is a SHARED or greater lock on the file
+ ** already.
+ */
+ assert( 0!=pFile->locktype );
+ lock.l_type = F_WRLCK;
+ switch( locktype ){
+ case RESERVED_LOCK:
+ lock.l_start = RESERVED_BYTE;
+ break;
+ case EXCLUSIVE_LOCK:
+ lock.l_start = SHARED_FIRST;
+ lock.l_len = SHARED_SIZE;
+ break;
+ default:
+ assert(0);
+ }
+ s = fcntl(pFile->h, F_SETLK, &lock);
+ if( s ){
+ rc = (errno==EINVAL) ? SQLITE_NOLFS : SQLITE_BUSY;
+ }
+ }
+
+ if( rc==SQLITE_OK ){
+ pFile->locktype = locktype;
+ pLock->locktype = locktype;
+ }else if( locktype==EXCLUSIVE_LOCK ){
+ pFile->locktype = PENDING_LOCK;
+ pLock->locktype = PENDING_LOCK;
+ }
+
+end_lock:
+ sqlite3OsLeaveMutex();
+ TRACE4("LOCK %d %s %s\n", pFile->h, locktypeName(locktype),
+ rc==SQLITE_OK ? "ok" : "failed");
+ return rc;
+}
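+
+/* Illustrative sketch (not part of the SQLite sources): the SHARED-lock
+** acquisition performed above, written out as a standalone sequence of
+** fcntl() calls.  PENDING_BYTE, SHARED_FIRST and SHARED_SIZE are the os.h
+** symbols mentioned in the comment at the top of unixLock(); the helper
+** name is hypothetical and error handling is simplified.
+*/
+#if 0
+static int exampleGetSharedLock(int fd){
+  struct flock lk;
+  int rc = SQLITE_OK;
+
+  /* Step 1: read-lock the pending byte */
+  lk.l_whence = SEEK_SET;
+  lk.l_type = F_RDLCK;
+  lk.l_start = PENDING_BYTE;
+  lk.l_len = 1;
+  if( fcntl(fd, F_SETLK, &lk) ) return SQLITE_BUSY;
+
+  /* Step 2: read-lock the shared byte range */
+  lk.l_start = SHARED_FIRST;
+  lk.l_len = SHARED_SIZE;
+  if( fcntl(fd, F_SETLK, &lk) ) rc = SQLITE_BUSY;
+
+  /* Step 3: drop the pending byte whether or not step 2 succeeded */
+  lk.l_type = F_UNLCK;
+  lk.l_start = PENDING_BYTE;
+  lk.l_len = 1;
+  fcntl(fd, F_SETLK, &lk);
+  return rc;
+}
+#endif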
+
+/*
+** Lower the locking level on file descriptor pFile to locktype. locktype
+** must be either NO_LOCK or SHARED_LOCK.
+**
+** If the locking level of the file descriptor is already at or below
+** the requested locking level, this routine is a no-op.
+*/
+static int unixUnlock(OsFile *id, int locktype){
+ struct lockInfo *pLock;
+ struct flock lock;
+ int rc = SQLITE_OK;
+ unixFile *pFile = (unixFile*)id;
+
+ assert( pFile );
+ TRACE7("UNLOCK %d %d was %d(%d,%d) pid=%d\n", pFile->h, locktype,
+ pFile->locktype, pFile->pLock->locktype, pFile->pLock->cnt, getpid());
+
+ assert( locktype<=SHARED_LOCK );
+ if( pFile->locktype<=locktype ){
+ return SQLITE_OK;
+ }
+ if( CHECK_THREADID(pFile) ){
+ return SQLITE_MISUSE;
+ }
+ sqlite3OsEnterMutex();
+ pLock = pFile->pLock;
+ assert( pLock->cnt!=0 );
+ if( pFile->locktype>SHARED_LOCK ){
+ assert( pLock->locktype==pFile->locktype );
+ if( locktype==SHARED_LOCK ){
+ lock.l_type = F_RDLCK;
+ lock.l_whence = SEEK_SET;
+ lock.l_start = SHARED_FIRST;
+ lock.l_len = SHARED_SIZE;
+ if( fcntl(pFile->h, F_SETLK, &lock)!=0 ){
+ /* This should never happen */
+ rc = SQLITE_IOERR_RDLOCK;
+ }
+ }
+ lock.l_type = F_UNLCK;
+ lock.l_whence = SEEK_SET;
+ lock.l_start = PENDING_BYTE;
+ lock.l_len = 2L; assert( PENDING_BYTE+1==RESERVED_BYTE );
+ if( fcntl(pFile->h, F_SETLK, &lock)==0 ){
+ pLock->locktype = SHARED_LOCK;
+ }else{
+ rc = SQLITE_IOERR_UNLOCK; /* This should never happen */
+ }
+ }
+ if( locktype==NO_LOCK ){
+ struct openCnt *pOpen;
+
+ /* Decrement the shared lock counter. Release the lock using an
+ ** OS call only when all threads in this same process have released
+ ** the lock.
+ */
+ pLock->cnt--;
+ if( pLock->cnt==0 ){
+ lock.l_type = F_UNLCK;
+ lock.l_whence = SEEK_SET;
+ lock.l_start = lock.l_len = 0L;
+ if( fcntl(pFile->h, F_SETLK, &lock)==0 ){
+ pLock->locktype = NO_LOCK;
+ }else{
+ rc = SQLITE_IOERR_UNLOCK; /* This should never happen */
+ }
+ }
+
+ /* Decrement the count of locks against this same file. When the
+ ** count reaches zero, close any other file descriptors whose close
+ ** was deferred because of outstanding locks.
+ */
+ pOpen = pFile->pOpen;
+ pOpen->nLock--;
+ assert( pOpen->nLock>=0 );
+ if( pOpen->nLock==0 && pOpen->nPending>0 ){
+ int i;
+ for(i=0; i<pOpen->nPending; i++){
+ close(pOpen->aPending[i]);
+ }
+ free(pOpen->aPending);
+ pOpen->nPending = 0;
+ pOpen->aPending = 0;
+ }
+ }
+ sqlite3OsLeaveMutex();
+ pFile->locktype = locktype;
+ return rc;
+}
+
+/*
+** Close a file.
+*/
+static int unixClose(OsFile **pId){
+ unixFile *id = (unixFile*)*pId;
+
+ if( !id ) return SQLITE_OK;
+ unixUnlock(*pId, NO_LOCK);
+ if( id->dirfd>=0 ) close(id->dirfd);
+ id->dirfd = -1;
+ sqlite3OsEnterMutex();
+
+ if( id->pOpen->nLock ){
+ /* If there are outstanding locks, do not actually close the file just
+ ** yet because that would clear those locks. Instead, add the file
+ ** descriptor to pOpen->aPending. It will be automatically closed when
+ ** the last lock is cleared.
+ */
+ int *aNew;
+ struct openCnt *pOpen = id->pOpen;
+ aNew = realloc( pOpen->aPending, (pOpen->nPending+1)*sizeof(int) );
+ if( aNew==0 ){
+ /* If a malloc fails, just leak the file descriptor */
+ }else{
+ pOpen->aPending = aNew;
+ pOpen->aPending[pOpen->nPending] = id->h;
+ pOpen->nPending++;
+ }
+ }else{
+ /* There are no outstanding locks so we can close the file immediately */
+ close(id->h);
+ }
+ releaseLockInfo(id->pLock);
+ releaseOpenCnt(id->pOpen);
+
+ sqlite3OsLeaveMutex();
+ id->isOpen = 0;
+ TRACE2("CLOSE %-3d\n", id->h);
+ OpenCounter(-1);
+ sqlite3ThreadSafeFree(id);
+ *pId = 0;
+ return SQLITE_OK;
+}
+
+
+#ifdef SQLITE_ENABLE_LOCKING_STYLE
+#pragma mark AFP Support
+
+/*
+ ** The afpLockingContext structure contains all afp lock specific state
+ */
+typedef struct afpLockingContext afpLockingContext;
+struct afpLockingContext {
+ unsigned long long sharedLockByte;
+ char *filePath;
+};
+
+struct ByteRangeLockPB2
+{
+ unsigned long long offset; /* offset to first byte to lock */
+ unsigned long long length; /* nbr of bytes to lock */
+ unsigned long long retRangeStart; /* nbr of 1st byte locked if successful */
+ unsigned char unLockFlag; /* 1 = unlock, 0 = lock */
+ unsigned char startEndFlag; /* 1=rel to end of fork, 0=rel to start */
+ int fd; /* file desc to assoc this lock with */
+};
+
+#define afpfsByteRangeLock2FSCTL _IOWR('z', 23, struct ByteRangeLockPB2)
+
+/* return 0 on success, 1 on failure. To match the behavior of the
+ normal posix file locking (used in unixLock for example), we should
+ provide 'richer' return codes - specifically to differentiate between
+ 'file busy' and 'file system error' results */
+static int _AFPFSSetLock(const char *path, int fd, unsigned long long offset,
+ unsigned long long length, int setLockFlag)
+{
+ struct ByteRangeLockPB2 pb;
+ int err;
+
+ pb.unLockFlag = setLockFlag ? 0 : 1;
+ pb.startEndFlag = 0;
+ pb.offset = offset;
+ pb.length = length;
+ pb.fd = fd;
+ TRACE5("AFPLOCK setting lock %s for %d in range %llx:%llx\n",
+ (setLockFlag?"ON":"OFF"), fd, offset, length);
+ err = fsctl(path, afpfsByteRangeLock2FSCTL, &pb, 0);
+ if ( err==-1 ) {
+ TRACE4("AFPLOCK failed to fsctl() '%s' %d %s\n", path, errno,
+ strerror(errno));
+ return 1; // error
+ } else {
+ return 0;
+ }
+}
+
+/*
+ ** This routine checks if there is a RESERVED lock held on the specified
+ ** file by this or any other process. If such a lock is held, return
+ ** non-zero. If the file is unlocked or holds only SHARED locks, then
+ ** return zero.
+ */
+static int afpUnixCheckReservedLock(OsFile *id){
+ int r = 0;
+ unixFile *pFile = (unixFile*)id;
+
+ assert( pFile );
+ afpLockingContext *context = (afpLockingContext *) pFile->lockingContext;
+
+ /* Check if a thread in this process holds such a lock */
+ if( pFile->locktype>SHARED_LOCK ){
+ r = 1;
+ }
+
+ /* Otherwise see if some other process holds it.
+ */
+ if ( !r ) {
+ // lock the byte
+ int failed = _AFPFSSetLock(context->filePath, pFile->h, RESERVED_BYTE, 1,1);
+ if (failed) {
+ /* if we failed to get the lock then someone else must have it */
+ r = 1;
+ } else {
+ /* if we succeeded in taking the reserved lock, unlock it to restore
+ ** the original state */
+ _AFPFSSetLock(context->filePath, pFile->h, RESERVED_BYTE, 1, 0);
+ }
+ }
+ TRACE3("TEST WR-LOCK %d %d\n", pFile->h, r);
+
+ return r;
+}
+
+/* AFP-style locking following the behavior of unixLock, see the unixLock
+** function comments for details of lock management. */
+static int afpUnixLock(OsFile *id, int locktype)
+{
+ int rc = SQLITE_OK;
+ unixFile *pFile = (unixFile*)id;
+ afpLockingContext *context = (afpLockingContext *) pFile->lockingContext;
+ int gotPendingLock = 0;
+
+ assert( pFile );
+ TRACE5("LOCK %d %s was %s pid=%d\n", pFile->h,
+ locktypeName(locktype), locktypeName(pFile->locktype), getpid());
+ /* If there is already a lock of this type or more restrictive on the
+ ** OsFile, do nothing. Don't use the afp_end_lock: exit path, as
+ ** sqlite3OsEnterMutex() hasn't been called yet.
+ */
+ if( pFile->locktype>=locktype ){
+ TRACE3("LOCK %d %s ok (already held)\n", pFile->h,
+ locktypeName(locktype));
+ return SQLITE_OK;
+ }
+
+ /* Make sure the locking sequence is correct
+ */
+ assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK );
+ assert( locktype!=PENDING_LOCK );
+ assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK );
+
+ /* This mutex is needed because pFile->pLock is shared across threads
+ */
+ sqlite3OsEnterMutex();
+
+ /* Make sure the current thread owns the pFile.
+ */
+ rc = transferOwnership(pFile);
+ if( rc!=SQLITE_OK ){
+ sqlite3OsLeaveMutex();
+ return rc;
+ }
+
+ /* A PENDING lock is needed before acquiring a SHARED lock and before
+ ** acquiring an EXCLUSIVE lock. For the SHARED lock, the PENDING will
+ ** be released.
+ */
+ if( locktype==SHARED_LOCK
+ || (locktype==EXCLUSIVE_LOCK && pFile->locktype<PENDING_LOCK)
+ ){
+ int failed = _AFPFSSetLock(context->filePath, pFile->h,
+ PENDING_BYTE, 1, 1);
+ if (failed) {
+ rc = SQLITE_BUSY;
+ goto afp_end_lock;
+ }
+ }
+
+ /* If control gets to this point, then actually go ahead and make
+ ** operating system calls for the specified lock.
+ */
+ if( locktype==SHARED_LOCK ){
+ int lk, failed;
+ int tries = 0;
+
+ /* Now get the read-lock */
+ /* note that the quality of the randomness doesn't matter that much */
+ lk = random();
+ context->sharedLockByte = (lk & 0x7fffffff)%(SHARED_SIZE - 1);
+ failed = _AFPFSSetLock(context->filePath, pFile->h,
+ SHARED_FIRST+context->sharedLockByte, 1, 1);
+
+ /* Drop the temporary PENDING lock */
+ if (_AFPFSSetLock(context->filePath, pFile->h, PENDING_BYTE, 1, 0)) {
+ rc = SQLITE_IOERR_UNLOCK; /* This should never happen */
+ goto afp_end_lock;
+ }
+
+ if( failed ){
+ rc = SQLITE_BUSY;
+ } else {
+ pFile->locktype = SHARED_LOCK;
+ }
+ }else{
+ /* The request was for a RESERVED or EXCLUSIVE lock. It is
+ ** assumed that there is a SHARED or greater lock on the file
+ ** already.
+ */
+ int failed = 0;
+ assert( 0!=pFile->locktype );
+ if (locktype >= RESERVED_LOCK && pFile->locktype < RESERVED_LOCK) {
+ /* Acquire a RESERVED lock */
+ failed = _AFPFSSetLock(context->filePath, pFile->h, RESERVED_BYTE, 1,1);
+ }
+ if (!failed && locktype == EXCLUSIVE_LOCK) {
+ /* Acquire an EXCLUSIVE lock */
+
+ /* Remove the shared lock before trying the range. We'll need to
+ ** reestablish the shared lock if we can't get the exclusive range.
+ */
+ if (!_AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST +
+ context->sharedLockByte, 1, 0)) {
+ /* now attempt to get the exclusive lock range */
+ failed = _AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST,
+ SHARED_SIZE, 1);
+ if (failed && _AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST +
+ context->sharedLockByte, 1, 1)) {
+ rc = SQLITE_IOERR_RDLOCK; /* this should never happen */
+ }
+ } else {
+ /* failed to remove the shared lock before trying the range */
+ rc = SQLITE_IOERR_UNLOCK; /* this should never happen */
+ }
+ }
+ if( failed && rc == SQLITE_OK){
+ rc = SQLITE_BUSY;
+ }
+ }
+
+ if( rc==SQLITE_OK ){
+ pFile->locktype = locktype;
+ }else if( locktype==EXCLUSIVE_LOCK ){
+ pFile->locktype = PENDING_LOCK;
+ }
+
+afp_end_lock:
+ sqlite3OsLeaveMutex();
+ TRACE4("LOCK %d %s %s\n", pFile->h, locktypeName(locktype),
+ rc==SQLITE_OK ? "ok" : "failed");
+ return rc;
+}
+
+/*
+ ** Lower the locking level on file descriptor pFile to locktype. locktype
+ ** must be either NO_LOCK or SHARED_LOCK.
+ **
+ ** If the locking level of the file descriptor is already at or below
+ ** the requested locking level, this routine is a no-op.
+ */
+static int afpUnixUnlock(OsFile *id, int locktype) {
+ struct flock lock;
+ int rc = SQLITE_OK;
+ unixFile *pFile = (unixFile*)id;
+ afpLockingContext *context = (afpLockingContext *) pFile->lockingContext;
+
+ assert( pFile );
+ TRACE5("UNLOCK %d %d was %d pid=%d\n", pFile->h, locktype,
+ pFile->locktype, getpid());
+
+ assert( locktype<=SHARED_LOCK );
+ if( pFile->locktype<=locktype ){
+ return SQLITE_OK;
+ }
+ if( CHECK_THREADID(pFile) ){
+ return SQLITE_MISUSE;
+ }
+ sqlite3OsEnterMutex();
+ if( pFile->locktype>SHARED_LOCK ){
+ if( locktype==SHARED_LOCK ){
+ int failed = 0;
+
+ /* unlock the exclusive range - then re-establish the shared lock */
+ if (pFile->locktype==EXCLUSIVE_LOCK) {
+ failed = _AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST,
+ SHARED_SIZE, 0);
+ if (!failed) {
+ /* successfully removed the exclusive lock */
+ if (_AFPFSSetLock(context->filePath, pFile->h, SHARED_FIRST+
+ context->sharedLockByte, 1, 1)) {
+ /* failed to re-establish our shared lock */
+ rc = SQLITE_IOERR_RDLOCK; /* This should never happen */
+ }
+ } else {
+ /* This should never happen - failed to unlock the exclusive range */
+ rc = SQLITE_IOERR_UNLOCK;
+ }
+ }
+ }
+ if (rc == SQLITE_OK && pFile->locktype>=PENDING_LOCK) {
+ if (_AFPFSSetLock(context->filePath, pFile->h, PENDING_BYTE, 1, 0)){
+ /* failed to release the pending lock */
+ rc = SQLITE_IOERR_UNLOCK; /* This should never happen */
+ }
+ }
+ if (rc == SQLITE_OK && pFile->locktype>=RESERVED_LOCK) {
+ if (_AFPFSSetLock(context->filePath, pFile->h, RESERVED_BYTE, 1, 0)) {
+ /* failed to release the reserved lock */
+ rc = SQLITE_IOERR_UNLOCK; /* This should never happen */
+ }
+ }
+ }
+ if( locktype==NO_LOCK ){
+ int failed = _AFPFSSetLock(context->filePath, pFile->h,
+ SHARED_FIRST + context->sharedLockByte, 1, 0);
+ if (failed) {
+ rc = SQLITE_IOERR_UNLOCK; /* This should never happen */
+ }
+ }
+ if (rc == SQLITE_OK)
+ pFile->locktype = locktype;
+ sqlite3OsLeaveMutex();
+ return rc;
+}
+
+/*
+ ** Close a file & clean up the AFP-specific locking context
+ */
+static int afpUnixClose(OsFile **pId) {
+ unixFile *id = (unixFile*)*pId;
+
+ if( !id ) return SQLITE_OK;
+ afpUnixUnlock(*pId, NO_LOCK);
+ /* free the AFP locking structure */
+ if (id->lockingContext != NULL) {
+ if (((afpLockingContext *)id->lockingContext)->filePath != NULL)
+ sqlite3ThreadSafeFree(((afpLockingContext*)id->lockingContext)->filePath);
+ sqlite3ThreadSafeFree(id->lockingContext);
+ }
+
+ if( id->dirfd>=0 ) close(id->dirfd);
+ id->dirfd = -1;
+ close(id->h);
+ id->isOpen = 0;
+ TRACE2("CLOSE %-3d\n", id->h);
+ OpenCounter(-1);
+ sqlite3ThreadSafeFree(id);
+ *pId = 0;
+ return SQLITE_OK;
+}
+
+
+#pragma mark flock() style locking
+
+/*
+ ** The flockLockingContext is not used
+ */
+typedef void flockLockingContext;
+
+static int flockUnixCheckReservedLock(OsFile *id) {
+ unixFile *pFile = (unixFile*)id;
+
+ if (pFile->locktype == RESERVED_LOCK) {
+ return 1; // already have a reserved lock
+ } else {
+ // attempt to get the lock
+ int rc = flock(pFile->h, LOCK_EX | LOCK_NB);
+ if (!rc) {
+ // got the lock, unlock it
+ flock(pFile->h, LOCK_UN);
+ return 0; // no one has it reserved
+ }
+ return 1; // someone else might have it reserved
+ }
+}
+
+static int flockUnixLock(OsFile *id, int locktype) {
+ unixFile *pFile = (unixFile*)id;
+
+ // if we already have a lock, it is exclusive.
+ // Just adjust level and punt on outta here.
+ if (pFile->locktype > NO_LOCK) {
+ pFile->locktype = locktype;
+ return SQLITE_OK;
+ }
+
+ // grab an exclusive lock
+ int rc = flock(pFile->h, LOCK_EX | LOCK_NB);
+ if (rc) {
+ // didn't get, must be busy
+ return SQLITE_BUSY;
+ } else {
+ // got it, set the type and return ok
+ pFile->locktype = locktype;
+ return SQLITE_OK;
+ }
+}
+
+static int flockUnixUnlock(OsFile *id, int locktype) {
+ unixFile *pFile = (unixFile*)id;
+
+ assert( locktype<=SHARED_LOCK );
+
+ // no-op if possible
+ if( pFile->locktype==locktype ){
+ return SQLITE_OK;
+ }
+
+ // shared can just be set because we always have an exclusive
+ if (locktype==SHARED_LOCK) {
+ pFile->locktype = locktype;
+ return SQLITE_OK;
+ }
+
+ // no, really, unlock.
+ int rc = flock(pFile->h, LOCK_UN);
+ if (rc)
+ return SQLITE_IOERR_UNLOCK;
+ else {
+ pFile->locktype = NO_LOCK;
+ return SQLITE_OK;
+ }
+}
+
+/*
+ ** Close a file.
+ */
+static int flockUnixClose(OsFile **pId) {
+ unixFile *id = (unixFile*)*pId;
+
+ if( !id ) return SQLITE_OK;
+ flockUnixUnlock(*pId, NO_LOCK);
+
+ if( id->dirfd>=0 ) close(id->dirfd);
+ id->dirfd = -1;
+ sqlite3OsEnterMutex();
+
+ close(id->h);
+ sqlite3OsLeaveMutex();
+ id->isOpen = 0;
+ TRACE2("CLOSE %-3d\n", id->h);
+ OpenCounter(-1);
+ sqlite3ThreadSafeFree(id);
+ *pId = 0;
+ return SQLITE_OK;
+}
+
+#pragma mark Old-School .lock file based locking
+
+/*
+ ** The dotlockLockingContext structure contains all dotlock (.lock) lock
+ ** specific state
+ */
+typedef struct dotlockLockingContext dotlockLockingContext;
+struct dotlockLockingContext {
+ char *lockPath;
+};
+
+
+static int dotlockUnixCheckReservedLock(OsFile *id) {
+ unixFile *pFile = (unixFile*)id;
+ dotlockLockingContext *context =
+ (dotlockLockingContext *) pFile->lockingContext;
+
+ if (pFile->locktype == RESERVED_LOCK) {
+ return 1; // already have a reserved lock
+ } else {
+ struct stat statBuf;
+ if (lstat(context->lockPath,&statBuf) == 0)
+ // file exists, someone else has the lock
+ return 1;
+ else
+ // file does not exist, we could have it if we want it
+ return 0;
+ }
+}
+
+static int dotlockUnixLock(OsFile *id, int locktype) {
+ unixFile *pFile = (unixFile*)id;
+ dotlockLockingContext *context =
+ (dotlockLockingContext *) pFile->lockingContext;
+
+ // if we already have a lock, it is exclusive.
+ // Just adjust level and punt on outta here.
+ if (pFile->locktype > NO_LOCK) {
+ pFile->locktype = locktype;
+
+ /* Always update the timestamp on the old file */
+ utimes(context->lockPath,NULL);
+ return SQLITE_OK;
+ }
+
+ // check to see if lock file already exists
+ struct stat statBuf;
+ if (lstat(context->lockPath,&statBuf) == 0){
+ return SQLITE_BUSY; // it does, busy
+ }
+
+ // grab an exclusive lock
+ int fd = open(context->lockPath,O_RDONLY|O_CREAT|O_EXCL,0600);
+ if (fd < 0) {
+ // failed to open/create the file, someone else may have stolen the lock
+ return SQLITE_BUSY;
+ }
+ close(fd);
+
+ // got it, set the type and return ok
+ pFile->locktype = locktype;
+ return SQLITE_OK;
+}
+
+static int dotlockUnixUnlock(OsFile *id, int locktype) {
+ unixFile *pFile = (unixFile*)id;
+ dotlockLockingContext *context =
+ (dotlockLockingContext *) pFile->lockingContext;
+
+ assert( locktype<=SHARED_LOCK );
+
+ // no-op if possible
+ if( pFile->locktype==locktype ){
+ return SQLITE_OK;
+ }
+
+ // shared can just be set because we always have an exclusive
+ if (locktype==SHARED_LOCK) {
+ pFile->locktype = locktype;
+ return SQLITE_OK;
+ }
+
+ // no, really, unlock.
+ unlink(context->lockPath);
+ pFile->locktype = NO_LOCK;
+ return SQLITE_OK;
+}
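+
+/* Illustrative sketch (not part of the SQLite sources): the essence of the
+** dotlock scheme used above.  An open() with O_CREAT|O_EXCL fails if the
+** lock file already exists, so only one process can create it; unlinking
+** the file releases the lock.  The helper names are hypothetical.
+*/
+#if 0
+static int exampleDotlockAcquire(const char *zLockPath){
+  int fd = open(zLockPath, O_RDONLY|O_CREAT|O_EXCL, 0600);
+  if( fd<0 ) return SQLITE_BUSY;   /* some other process holds the lock */
+  close(fd);                       /* the file's existence is the lock */
+  return SQLITE_OK;
+}
+static void exampleDotlockRelease(const char *zLockPath){
+  unlink(zLockPath);
+}
+#endif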
+
+/*
+ ** Close a file.
+ */
+static int dotlockUnixClose(OsFile **pId) {
+ unixFile *id = (unixFile*)*pId;
+
+ if( !id ) return SQLITE_OK;
+ dotlockUnixUnlock(*pId, NO_LOCK);
+ /* free the dotlock locking structure */
+ if (id->lockingContext != NULL) {
+ if (((dotlockLockingContext *)id->lockingContext)->lockPath != NULL)
+ sqlite3ThreadSafeFree( ( (dotlockLockingContext *)
+ id->lockingContext)->lockPath);
+ sqlite3ThreadSafeFree(id->lockingContext);
+ }
+
+ if( id->dirfd>=0 ) close(id->dirfd);
+ id->dirfd = -1;
+ sqlite3OsEnterMutex();
+
+ close(id->h);
+
+ sqlite3OsLeaveMutex();
+ id->isOpen = 0;
+ TRACE2("CLOSE %-3d\n", id->h);
+ OpenCounter(-1);
+ sqlite3ThreadSafeFree(id);
+ *pId = 0;
+ return SQLITE_OK;
+}
+
+
+#pragma mark No locking
+
+/*
+ ** The nolockLockingContext is void
+ */
+typedef void nolockLockingContext;
+
+static int nolockUnixCheckReservedLock(OsFile *id) {
+ return 0;
+}
+
+static int nolockUnixLock(OsFile *id, int locktype) {
+ return SQLITE_OK;
+}
+
+static int nolockUnixUnlock(OsFile *id, int locktype) {
+ return SQLITE_OK;
+}
+
+/*
+ ** Close a file.
+ */
+static int nolockUnixClose(OsFile **pId) {
+ unixFile *id = (unixFile*)*pId;
+
+ if( !id ) return SQLITE_OK;
+ if( id->dirfd>=0 ) close(id->dirfd);
+ id->dirfd = -1;
+ sqlite3OsEnterMutex();
+
+ close(id->h);
+
+ sqlite3OsLeaveMutex();
+ id->isOpen = 0;
+ TRACE2("CLOSE %-3d\n", id->h);
+ OpenCounter(-1);
+ sqlite3ThreadSafeFree(id);
+ *pId = 0;
+ return SQLITE_OK;
+}
+
+#endif /* SQLITE_ENABLE_LOCKING_STYLE */
+
+/*
+** Turn a relative pathname into a full pathname. Return a pointer
+** to the full pathname stored in space obtained from sqliteMalloc().
+** The calling function is responsible for freeing this space once it
+** is no longer needed.
+*/
+char *sqlite3UnixFullPathname(const char *zRelative){
+ char *zFull = 0;
+ if( zRelative[0]=='/' ){
+ sqlite3SetString(&zFull, zRelative, (char*)0);
+ }else{
+ char *zBuf = sqliteMalloc(5000);
+ if( zBuf==0 ){
+ return 0;
+ }
+ zBuf[0] = 0;
+ sqlite3SetString(&zFull, getcwd(zBuf, 5000), "/", zRelative,
+ (char*)0);
+ sqliteFree(zBuf);
+ }
+
+#if 0
+ /*
+ ** Remove "/./" path elements and convert "/A/./" path elements
+ ** to just "/".
+ */
+ if( zFull ){
+ int i, j;
+ for(i=j=0; zFull[i]; i++){
+ if( zFull[i]=='/' ){
+ if( zFull[i+1]=='/' ) continue;
+ if( zFull[i+1]=='.' && zFull[i+2]=='/' ){
+ i += 1;
+ continue;
+ }
+ if( zFull[i+1]=='.' && zFull[i+2]=='.' && zFull[i+3]=='/' ){
+ while( j>0 && zFull[j-1]!='/' ){ j--; }
+ i += 3;
+ continue;
+ }
+ }
+ zFull[j++] = zFull[i];
+ }
+ zFull[j] = 0;
+ }
+#endif
+
+ return zFull;
+}
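+
+/* Illustrative usage sketch (not part of the SQLite sources): the returned
+** string comes from sqliteMalloc(), so the caller releases it with
+** sqliteFree() as noted above.  The relative path here is hypothetical.
+*/
+#if 0
+  {
+    char *zFull = sqlite3UnixFullPathname("test.db");
+    if( zFull ){
+      /* ... use the absolute path ... */
+      sqliteFree(zFull);
+    }
+  }
+#endif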
+
+/*
+** Change the value of the fullsync flag in the given file descriptor.
+*/
+static void unixSetFullSync(OsFile *id, int v){
+ ((unixFile*)id)->fullSync = v;
+}
+
+/*
+** Return the underlying file handle for an OsFile
+*/
+static int unixFileHandle(OsFile *id){
+ return ((unixFile*)id)->h;
+}
+
+/*
+** Return an integer that indicates the type of lock currently held
+** by this handle. (Used for testing and analysis only.)
+*/
+static int unixLockState(OsFile *id){
+ return ((unixFile*)id)->locktype;
+}
+
+/*
+** This vector defines all the methods that can operate on an OsFile
+** for unix.
+*/
+static const IoMethod sqlite3UnixIoMethod = {
+ unixClose,
+ unixOpenDirectory,
+ unixRead,
+ unixWrite,
+ unixSeek,
+ unixTruncate,
+ unixSync,
+ unixSetFullSync,
+ unixFileHandle,
+ unixFileSize,
+ unixLock,
+ unixUnlock,
+ unixLockState,
+ unixCheckReservedLock,
+};
+
+#ifdef SQLITE_ENABLE_LOCKING_STYLE
+/*
+ ** This vector defines all the methods that can operate on an OsFile
+ ** for unix with AFP style file locking.
+ */
+static const IoMethod sqlite3AFPLockingUnixIoMethod = {
+ afpUnixClose,
+ unixOpenDirectory,
+ unixRead,
+ unixWrite,
+ unixSeek,
+ unixTruncate,
+ unixSync,
+ unixSetFullSync,
+ unixFileHandle,
+ unixFileSize,
+ afpUnixLock,
+ afpUnixUnlock,
+ unixLockState,
+ afpUnixCheckReservedLock,
+};
+
+/*
+ ** This vector defines all the methods that can operate on an OsFile
+ ** for unix with flock() style file locking.
+ */
+static const IoMethod sqlite3FlockLockingUnixIoMethod = {
+ flockUnixClose,
+ unixOpenDirectory,
+ unixRead,
+ unixWrite,
+ unixSeek,
+ unixTruncate,
+ unixSync,
+ unixSetFullSync,
+ unixFileHandle,
+ unixFileSize,
+ flockUnixLock,
+ flockUnixUnlock,
+ unixLockState,
+ flockUnixCheckReservedLock,
+};
+
+/*
+ ** This vector defines all the methods that can operate on an OsFile
+ ** for unix with dotlock style file locking.
+ */
+static const IoMethod sqlite3DotlockLockingUnixIoMethod = {
+ dotlockUnixClose,
+ unixOpenDirectory,
+ unixRead,
+ unixWrite,
+ unixSeek,
+ unixTruncate,
+ unixSync,
+ unixSetFullSync,
+ unixFileHandle,
+ unixFileSize,
+ dotlockUnixLock,
+ dotlockUnixUnlock,
+ unixLockState,
+ dotlockUnixCheckReservedLock,
+};
+
+/*
+ ** This vector defines all the methods that can operate on an OsFile
+ ** for unix with no file locking.
+ */
+static const IoMethod sqlite3NolockLockingUnixIoMethod = {
+ nolockUnixClose,
+ unixOpenDirectory,
+ unixRead,
+ unixWrite,
+ unixSeek,
+ unixTruncate,
+ unixSync,
+ unixSetFullSync,
+ unixFileHandle,
+ unixFileSize,
+ nolockUnixLock,
+ nolockUnixUnlock,
+ unixLockState,
+ nolockUnixCheckReservedLock,
+};
+
+#endif /* SQLITE_ENABLE_LOCKING_STYLE */
+
+/*
+** Allocate memory for a new unixFile and initialize that unixFile.
+** Write a pointer to the new unixFile into *pId.
+** If we run out of memory, close the file and return an error.
+*/
+#ifdef SQLITE_ENABLE_LOCKING_STYLE
+/*
+ ** When locking extensions are enabled, the filepath and locking style
+ ** are needed to determine the unixFile pMethod to use for locking operations.
+ ** The locking-style specific lockingContext data structure is created
+ ** and assigned here also.
+ */
+static int allocateUnixFile(
+ int h, /* Open file descriptor of file being opened */
+ OsFile **pId, /* Write completed initialization here */
+ const char *zFilename, /* Name of the file being opened */
+ int delFlag /* Delete-on-or-before-close flag */
+){
+ sqlite3LockingStyle lockingStyle;
+ unixFile *pNew;
+ unixFile f;
+ int rc;
+
+ lockingStyle = sqlite3DetectLockingStyle(zFilename, h);
+ if ( lockingStyle == posixLockingStyle ) {
+ sqlite3OsEnterMutex();
+ rc = findLockInfo(h, &f.pLock, &f.pOpen);
+ sqlite3OsLeaveMutex();
+ if( rc ){
+ close(h);
+ unlink(zFilename);
+ return SQLITE_NOMEM;
+ }
+ } else {
+ // pLock and pOpen are only used for posix advisory locking
+ f.pLock = NULL;
+ f.pOpen = NULL;
+ }
+ if( delFlag ){
+ unlink(zFilename);
+ }
+ f.dirfd = -1;
+ f.fullSync = 0;
+ f.locktype = 0;
+ f.offset = 0;
+ f.h = h;
+ SET_THREADID(&f);
+ pNew = sqlite3ThreadSafeMalloc( sizeof(unixFile) );
+ if( pNew==0 ){
+ close(h);
+ sqlite3OsEnterMutex();
+ releaseLockInfo(f.pLock);
+ releaseOpenCnt(f.pOpen);
+ sqlite3OsLeaveMutex();
+ *pId = 0;
+ return SQLITE_NOMEM;
+ }else{
+ *pNew = f;
+ switch(lockingStyle) {
+ case afpLockingStyle:
+ /* afp locking uses the file path so it needs to be included in
+ ** the afpLockingContext */
+ pNew->pMethod = &sqlite3AFPLockingUnixIoMethod;
+ pNew->lockingContext =
+ sqlite3ThreadSafeMalloc(sizeof(afpLockingContext));
+ ((afpLockingContext *)pNew->lockingContext)->filePath =
+ sqlite3ThreadSafeMalloc(strlen(zFilename) + 1);
+ strcpy(((afpLockingContext *)pNew->lockingContext)->filePath,
+ zFilename);
+ srandomdev();
+ break;
+ case flockLockingStyle:
+ /* flock locking doesn't need additional lockingContext information */
+ pNew->pMethod = &sqlite3FlockLockingUnixIoMethod;
+ break;
+ case dotlockLockingStyle:
+ /* dotlock locking uses the file path so it needs to be included in
+ ** the dotlockLockingContext */
+ pNew->pMethod = &sqlite3DotlockLockingUnixIoMethod;
+ pNew->lockingContext = sqlite3ThreadSafeMalloc(
+ sizeof(dotlockLockingContext));
+ ((dotlockLockingContext *)pNew->lockingContext)->lockPath =
+ sqlite3ThreadSafeMalloc(strlen(zFilename) + strlen(".lock") + 1);
+ sprintf(((dotlockLockingContext *)pNew->lockingContext)->lockPath,
+ "%s.lock", zFilename);
+ break;
+ case posixLockingStyle:
+ /* posix locking doesn't need additional lockingContext information */
+ pNew->pMethod = &sqlite3UnixIoMethod;
+ break;
+ case noLockingStyle:
+ case unsupportedLockingStyle:
+ default:
+ pNew->pMethod = &sqlite3NolockLockingUnixIoMethod;
+ }
+ *pId = (OsFile*)pNew;
+ OpenCounter(+1);
+ return SQLITE_OK;
+ }
+}
+#else /* SQLITE_ENABLE_LOCKING_STYLE */
+static int allocateUnixFile(
+ int h, /* Open file descriptor on file being opened */
+ OsFile **pId, /* Write the resulting unixFile structure here */
+ const char *zFilename, /* Name of the file being opened */
+ int delFlag /* If true, delete the file on or before closing */
+){
+ unixFile *pNew;
+ unixFile f;
+ int rc;
+
+ sqlite3OsEnterMutex();
+ rc = findLockInfo(h, &f.pLock, &f.pOpen);
+ sqlite3OsLeaveMutex();
+ if( delFlag ){
+ unlink(zFilename);
+ }
+ if( rc ){
+ close(h);
+ return SQLITE_NOMEM;
+ }
+ TRACE3("OPEN %-3d %s\n", h, zFilename);
+ f.dirfd = -1;
+ f.fullSync = 0;
+ f.locktype = 0;
+ f.offset = 0;
+ f.h = h;
+ SET_THREADID(&f);
+ pNew = sqlite3ThreadSafeMalloc( sizeof(unixFile) );
+ if( pNew==0 ){
+ close(h);
+ sqlite3OsEnterMutex();
+ releaseLockInfo(f.pLock);
+ releaseOpenCnt(f.pOpen);
+ sqlite3OsLeaveMutex();
+ *pId = 0;
+ return SQLITE_NOMEM;
+ }else{
+ *pNew = f;
+ pNew->pMethod = &sqlite3UnixIoMethod;
+ *pId = (OsFile*)pNew;
+ OpenCounter(+1);
+ return SQLITE_OK;
+ }
+}
+#endif /* SQLITE_ENABLE_LOCKING_STYLE */
+
+#endif /* SQLITE_OMIT_DISKIO */
+/***************************************************************************
+** Everything above deals with file I/O. Everything that follows deals
+** with other miscellaneous aspects of the operating system interface
+****************************************************************************/
+
+
+/*
+** Get information to seed the random number generator. The seed
+** is written into the buffer zBuf[256]. The calling function must
+** supply a sufficiently large buffer.
+*/
+int sqlite3UnixRandomSeed(char *zBuf){
+ /* We have to initialize zBuf to prevent valgrind from reporting
+ ** errors. The reports issued by valgrind are incorrect - we would
+ ** prefer that the randomness be increased by making use of the
+ ** uninitialized space in zBuf - but valgrind errors tend to worry
+ ** some users. Rather than argue, it seems easier just to initialize
+ ** the whole array and silence valgrind, even if that means less randomness
+ ** in the random seed.
+ **
+ ** When testing, initializing zBuf[] to zero is all we do. That means
+ ** that we always use the same random number sequence. This makes the
+ ** tests repeatable.
+ */
+ memset(zBuf, 0, 256);
+#if !defined(SQLITE_TEST)
+ {
+ int pid, fd;
+ fd = open("/dev/urandom", O_RDONLY);
+ if( fd<0 ){
+ time_t t;
+ time(&t);
+ memcpy(zBuf, &t, sizeof(t));
+ pid = getpid();
+ memcpy(&zBuf[sizeof(time_t)], &pid, sizeof(pid));
+ }else{
+ read(fd, zBuf, 256);
+ close(fd);
+ }
+ }
+#endif
+ return SQLITE_OK;
+}
+
+/*
+** Sleep for a little while. Return the amount of time slept.
+** The argument is the number of milliseconds we want to sleep.
+*/
+int sqlite3UnixSleep(int ms){
+#if defined(HAVE_USLEEP) && HAVE_USLEEP
+ usleep(ms*1000);
+ return ms;
+#else
+ sleep((ms+999)/1000);
+ return 1000*((ms+999)/1000);
+#endif
+}
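+
+/* Worked example (illustrative only): without usleep(), sqlite3UnixSleep(1500)
+** rounds up to (1500+999)/1000 = 2 whole seconds and reports 2000 ms back to
+** the caller; with HAVE_USLEEP it sleeps and reports exactly 1500 ms.
+*/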
+
+/*
+** Static variables used for thread synchronization.
+**
+** inMutex the nesting depth of the recursive mutex. The thread
+** holding mutexMain can read this variable at any time.
+** But it must hold mutexAux to change this variable. Other
+** threads must hold mutexAux to read the variable and can
+** never write.
+**
+** mutexOwner The thread id of the thread holding mutexMain. Same
+** access rules as for inMutex.
+**
+** mutexOwnerValid True if the value in mutexOwner is valid. The same
+** access rules apply as for inMutex.
+**
+** mutexMain The main mutex. Hold this mutex in order to get exclusive
+** access to SQLite data structures.
+**
+** mutexAux An auxiliary mutex needed to access variables defined above.
+**
+** Mutexes are always acquired in this order: mutexMain mutexAux. It
+** is not necessary to acquire mutexMain in order to get mutexAux - just
+** do not attempt to acquire them in the reverse order: mutexAux mutexMain.
+** Either get the mutexes with mutexMain first or get mutexAux only.
+**
+** When running on a platform where the three variables inMutex, mutexOwner,
+** and mutexOwnerValid can be set atomically, the mutexAux is not required.
+** On many systems, all three are 32-bit integers and writing to a 32-bit
+** integer is atomic. I think. But there are no guarantees. So it seems
+** safer to protect them using mutexAux.
+*/
+static int inMutex = 0;
+#ifdef SQLITE_UNIX_THREADS
+static pthread_t mutexOwner; /* Thread holding mutexMain */
+static int mutexOwnerValid = 0; /* True if mutexOwner is valid */
+static pthread_mutex_t mutexMain = PTHREAD_MUTEX_INITIALIZER; /* The mutex */
+static pthread_mutex_t mutexAux = PTHREAD_MUTEX_INITIALIZER; /* Aux mutex */
+#endif
+
+/*
+** The following pair of routines implements mutual exclusion for
+** multi-threaded processes. Only a single thread is allowed to
+** execute code that is surrounded by EnterMutex() and LeaveMutex().
+**
+** SQLite uses only a single Mutex. There is not much critical
+** code and what little there is executes quickly and without blocking.
+**
+** As of version 3.3.2, this mutex must be recursive.
+*/
+void sqlite3UnixEnterMutex(){
+#ifdef SQLITE_UNIX_THREADS
+ pthread_mutex_lock(&mutexAux);
+ if( !mutexOwnerValid || !pthread_equal(mutexOwner, pthread_self()) ){
+ pthread_mutex_unlock(&mutexAux);
+ pthread_mutex_lock(&mutexMain);
+ assert( inMutex==0 );
+ assert( !mutexOwnerValid );
+ pthread_mutex_lock(&mutexAux);
+ mutexOwner = pthread_self();
+ mutexOwnerValid = 1;
+ }
+ inMutex++;
+ pthread_mutex_unlock(&mutexAux);
+#else
+ inMutex++;
+#endif
+}
+void sqlite3UnixLeaveMutex(){
+ assert( inMutex>0 );
+#ifdef SQLITE_UNIX_THREADS
+ pthread_mutex_lock(&mutexAux);
+ inMutex--;
+ assert( pthread_equal(mutexOwner, pthread_self()) );
+ if( inMutex==0 ){
+ assert( mutexOwnerValid );
+ mutexOwnerValid = 0;
+ pthread_mutex_unlock(&mutexMain);
+ }
+ pthread_mutex_unlock(&mutexAux);
+#else
+ inMutex--;
+#endif
+}
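+
+/* Illustrative sketch (not part of the SQLite sources): because the mutex is
+** recursive, nested enter/leave pairs from the same thread are legal as long
+** as they balance.  The function below is hypothetical.
+*/
+#if 0
+static void exampleNestedMutexUse(void){
+  sqlite3UnixEnterMutex();    /* inMutex: 0 -> 1, mutexMain acquired     */
+  sqlite3UnixEnterMutex();    /* inMutex: 1 -> 2, no additional blocking */
+  /* ... touch shared SQLite data structures ... */
+  sqlite3UnixLeaveMutex();    /* inMutex: 2 -> 1, mutexMain still held   */
+  sqlite3UnixLeaveMutex();    /* inMutex: 1 -> 0, mutexMain released     */
+}
+#endif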
+
+/*
+** Return TRUE if the mutex is currently held.
+**
+** If the thisThrd parameter is true, return true only if the
+** calling thread holds the mutex. If the parameter is false, return
+** true if any thread holds the mutex.
+*/
+int sqlite3UnixInMutex(int thisThrd){
+#ifdef SQLITE_UNIX_THREADS
+ int rc;
+ pthread_mutex_lock(&mutexAux);
+ rc = inMutex>0 && (thisThrd==0 || pthread_equal(mutexOwner,pthread_self()));
+ pthread_mutex_unlock(&mutexAux);
+ return rc;
+#else
+ return inMutex>0;
+#endif
+}
+
+/*
+** Remember the number of thread-specific-data blocks allocated.
+** Use this to verify that we are not leaking thread-specific-data.
+** Ticket #1601
+*/
+#ifdef SQLITE_TEST
+int sqlite3_tsd_count = 0;
+# ifdef SQLITE_UNIX_THREADS
+ static pthread_mutex_t tsd_counter_mutex = PTHREAD_MUTEX_INITIALIZER;
+# define TSD_COUNTER(N) \
+ pthread_mutex_lock(&tsd_counter_mutex); \
+ sqlite3_tsd_count += N; \
+ pthread_mutex_unlock(&tsd_counter_mutex);
+# else
+# define TSD_COUNTER(N) sqlite3_tsd_count += N
+# endif
+#else
+# define TSD_COUNTER(N) /* no-op */
+#endif
+
+/*
+** If called with allocateFlag>0, then return a pointer to thread
+** specific data for the current thread. Allocate and zero the
+** thread-specific data if it does not already exist.
+**
+** If called with allocateFlag==0, then check the current thread
+** specific data. Return it if it exists. If it does not exist,
+** then return NULL.
+**
+** If called with allocateFlag<0, check to see if the thread specific
+** data is allocated and is all zero. If it is then deallocate it.
+** Return a pointer to the thread specific data or NULL if it is
+** unallocated or gets deallocated.
+*/
+ThreadData *sqlite3UnixThreadSpecificData(int allocateFlag){
+ static const ThreadData zeroData = {0}; /* Initializer to silence warnings
+ ** from broken compilers */
+#ifdef SQLITE_UNIX_THREADS
+ static pthread_key_t key;
+ static int keyInit = 0;
+ ThreadData *pTsd;
+
+ if( !keyInit ){
+ sqlite3OsEnterMutex();
+ if( !keyInit ){
+ int rc;
+ rc = pthread_key_create(&key, 0);
+ if( rc ){
+ sqlite3OsLeaveMutex();
+ return 0;
+ }
+ keyInit = 1;
+ }
+ sqlite3OsLeaveMutex();
+ }
+
+ pTsd = pthread_getspecific(key);
+ if( allocateFlag>0 ){
+ if( pTsd==0 ){
+ if( !sqlite3TestMallocFail() ){
+ pTsd = sqlite3OsMalloc(sizeof(zeroData));
+ }
+#ifdef SQLITE_MEMDEBUG
+ sqlite3_isFail = 0;
+#endif
+ if( pTsd ){
+ *pTsd = zeroData;
+ pthread_setspecific(key, pTsd);
+ TSD_COUNTER(+1);
+ }
+ }
+ }else if( pTsd!=0 && allocateFlag<0
+ && memcmp(pTsd, &zeroData, sizeof(ThreadData))==0 ){
+ sqlite3OsFree(pTsd);
+ pthread_setspecific(key, 0);
+ TSD_COUNTER(-1);
+ pTsd = 0;
+ }
+ return pTsd;
+#else
+ static ThreadData *pTsd = 0;
+ if( allocateFlag>0 ){
+ if( pTsd==0 ){
+ if( !sqlite3TestMallocFail() ){
+ pTsd = sqlite3OsMalloc( sizeof(zeroData) );
+ }
+#ifdef SQLITE_MEMDEBUG
+ sqlite3_isFail = 0;
+#endif
+ if( pTsd ){
+ *pTsd = zeroData;
+ TSD_COUNTER(+1);
+ }
+ }
+ }else if( pTsd!=0 && allocateFlag<0
+ && memcmp(pTsd, &zeroData, sizeof(ThreadData))==0 ){
+ sqlite3OsFree(pTsd);
+ TSD_COUNTER(-1);
+ pTsd = 0;
+ }
+ return pTsd;
+#endif
+}
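+
+/* Illustrative usage sketch (not part of the SQLite sources) of the
+** allocateFlag convention documented above: >0 allocates on demand, 0 only
+** peeks, and <0 releases the block once it has been zeroed again.
+*/
+#if 0
+  {
+    ThreadData *pTsd = sqlite3UnixThreadSpecificData(1); /* allocate if needed */
+    if( pTsd ){
+      /* ... use the thread-specific data ... */
+    }
+    pTsd = sqlite3UnixThreadSpecificData(0);   /* look, but never allocate */
+    sqlite3UnixThreadSpecificData(-1);         /* free it once it is all zero */
+  }
+#endif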
+
+/*
+** The following variable, if set to a non-zero value, becomes the result
+** returned from sqlite3OsCurrentTime(). This is used for testing.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_current_time = 0;
+#endif
+
+/*
+** Find the current time (in Universal Coordinated Time). Write the
+** current time and date as a Julian Day number into *prNow and
+** return 0. Return 1 if the time and date cannot be found.
+*/
+int sqlite3UnixCurrentTime(double *prNow){
+#ifdef NO_GETTOD
+ time_t t;
+ time(&t);
+ *prNow = t/86400.0 + 2440587.5;
+#else
+ struct timeval sNow;
+ struct timezone sTz; /* Not used */
+ gettimeofday(&sNow, &sTz);
+ *prNow = 2440587.5 + sNow.tv_sec/86400.0 + sNow.tv_usec/86400000000.0;
+#endif
+#ifdef SQLITE_TEST
+ if( sqlite3_current_time ){
+ *prNow = sqlite3_current_time/86400.0 + 2440587.5;
+ }
+#endif
+ return 0;
+}
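+
+/* Worked example (illustrative only): 2440587.5 is the Julian Day number of
+** the unix epoch, 1970-01-01 00:00:00 UTC.  A unix time of t = 86400 seconds
+** (1970-01-02 00:00:00 UTC) therefore maps to 86400/86400.0 + 2440587.5 =
+** 2440588.5, exactly one day later.
+*/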
+
+#endif /* OS_UNIX */
Added: freeswitch/trunk/libs/sqlite/src/os_win.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/os_win.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1558 @@
+/*
+** 2004 May 22
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+******************************************************************************
+**
+** This file contains code that is specific to windows.
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#if OS_WIN /* This file is used for windows only */
+
+#include <winbase.h>
+
+#ifdef __CYGWIN__
+# include <sys/cygwin.h>
+#endif
+
+/*
+** Macros used to determine whether or not to use threads.
+*/
+#if defined(THREADSAFE) && THREADSAFE
+# define SQLITE_W32_THREADS 1
+#endif
+
+/*
+** Include code that is common to all os_*.c files
+*/
+#include "os_common.h"
+
+/*
+** Determine if we are dealing with WindowsCE - which has a much
+** reduced API.
+*/
+#if defined(_WIN32_WCE)
+# define OS_WINCE 1
+#else
+# define OS_WINCE 0
+#endif
+
+/*
+** WinCE lacks native support for file locking so we have to fake it
+** with some code of our own.
+*/
+#if OS_WINCE
+typedef struct winceLock {
+ int nReaders; /* Number of reader locks obtained */
+ BOOL bPending; /* Indicates a pending lock has been obtained */
+ BOOL bReserved; /* Indicates a reserved lock has been obtained */
+ BOOL bExclusive; /* Indicates an exclusive lock has been obtained */
+} winceLock;
+#endif
+
+/*
+** The winFile structure is a subclass of OsFile specific to the win32
+** portability layer.
+*/
+typedef struct winFile winFile;
+struct winFile {
+ IoMethod const *pMethod;/* Must be first */
+ HANDLE h; /* Handle for accessing the file */
+ unsigned char locktype; /* Type of lock currently held on this file */
+ short sharedLockByte; /* Randomly chosen byte used as a shared lock */
+#if OS_WINCE
+ WCHAR *zDeleteOnClose; /* Name of file to delete when closing */
+ HANDLE hMutex; /* Mutex used to control access to shared lock */
+ HANDLE hShared; /* Shared memory segment used for locking */
+ winceLock local; /* Locks obtained by this instance of winFile */
+ winceLock *shared; /* Global shared lock memory for the file */
+#endif
+};
+
+
+/*
+** Do not include any of the File I/O interface procedures if the
+** SQLITE_OMIT_DISKIO macro is defined (indicating that the database
+** will be in-memory only)
+*/
+#ifndef SQLITE_OMIT_DISKIO
+
+/*
+** The following variable is (normally) set once and never changes
+** thereafter. It records whether the operating system is Win95
+** or WinNT.
+**
+** 0: Operating system unknown.
+** 1: Operating system is Win95.
+** 2: Operating system is WinNT.
+**
+** In order to facilitate testing on a WinNT system, the test fixture
+** can manually set this value to 1 to emulate Win98 behavior.
+*/
+int sqlite3_os_type = 0;
+
+/*
+** Return true (non-zero) if we are running under WinNT, Win2K, WinXP,
+** or WinCE. Return false (zero) for Win95, Win98, or WinME.
+**
+** Here is an interesting observation: Win95, Win98, and WinME lack
+** the LockFileEx() API. But we can still statically link against that
+** API as long as we don't call it when running Win95/98/ME. A call to
+** this routine is used to determine if the host is Win95/98/ME or
+** WinNT/2K/XP so that we will know whether or not we can safely call
+** the LockFileEx() API.
+*/
+#if OS_WINCE
+# define isNT() (1)
+#else
+ static int isNT(void){
+ if( sqlite3_os_type==0 ){
+ OSVERSIONINFO sInfo;
+ sInfo.dwOSVersionInfoSize = sizeof(sInfo);
+ GetVersionEx(&sInfo);
+ sqlite3_os_type = sInfo.dwPlatformId==VER_PLATFORM_WIN32_NT ? 2 : 1;
+ }
+ return sqlite3_os_type==2;
+ }
+#endif /* OS_WINCE */
+
+/*
+** Convert a UTF-8 string to UTF-16 (the native Windows WCHAR encoding).
+** Space to hold the returned string is obtained from sqliteMalloc.
+*/
+static WCHAR *utf8ToUnicode(const char *zFilename){
+ int nChar;
+ WCHAR *zWideFilename;
+
+ if( !isNT() ){
+ return 0;
+ }
+ nChar = MultiByteToWideChar(CP_UTF8, 0, zFilename, -1, NULL, 0);
+ zWideFilename = sqliteMalloc( nChar*sizeof(zWideFilename[0]) );
+ if( zWideFilename==0 ){
+ return 0;
+ }
+ nChar = MultiByteToWideChar(CP_UTF8, 0, zFilename, -1, zWideFilename, nChar);
+ if( nChar==0 ){
+ sqliteFree(zWideFilename);
+ zWideFilename = 0;
+ }
+ return zWideFilename;
+}
+
+/*
+** Convert UTF-16 (Windows WCHAR) to UTF-8. Space to hold the returned string is
+** obtained from sqliteMalloc().
+*/
+static char *unicodeToUtf8(const WCHAR *zWideFilename){
+ int nByte;
+ char *zFilename;
+
+ nByte = WideCharToMultiByte(CP_UTF8, 0, zWideFilename, -1, 0, 0, 0, 0);
+ zFilename = sqliteMalloc( nByte );
+ if( zFilename==0 ){
+ return 0;
+ }
+ nByte = WideCharToMultiByte(CP_UTF8, 0, zWideFilename, -1, zFilename, nByte,
+ 0, 0);
+ if( nByte == 0 ){
+ sqliteFree(zFilename);
+ zFilename = 0;
+ }
+ return zFilename;
+}
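+
+/* Illustrative usage sketch (not part of the SQLite sources): round-tripping
+** a filename through the two converters above.  Both results come from
+** sqliteMalloc() and are released with sqliteFree().  The literal filename is
+** hypothetical, and utf8ToUnicode() returns 0 on Win95/98/ME.
+*/
+#if 0
+  {
+    WCHAR *zWide = utf8ToUnicode("test.db");
+    if( zWide ){
+      char *zUtf8 = unicodeToUtf8(zWide);
+      sqliteFree(zWide);
+      sqliteFree(zUtf8);
+    }
+  }
+#endif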
+
+#if OS_WINCE
+/*************************************************************************
+** This section contains code for WinCE only.
+*/
+/*
+** WindowsCE does not have a localtime() function. So create a
+** substitute.
+*/
+#include <time.h>
+struct tm *__cdecl localtime(const time_t *t)
+{
+ static struct tm y;
+ FILETIME uTm, lTm;
+ SYSTEMTIME pTm;
+ i64 t64;
+ t64 = *t;
+ /* 11644473600 = seconds between the FILETIME epoch (1601-01-01) and the
+ ** unix epoch (1970-01-01); *10000000 converts to 100-ns FILETIME units */
+ t64 = (t64 + 11644473600)*10000000;
+ uTm.dwLowDateTime = t64 & 0xFFFFFFFF;
+ uTm.dwHighDateTime= t64 >> 32;
+ FileTimeToLocalFileTime(&uTm,&lTm);
+ FileTimeToSystemTime(&lTm,&pTm);
+ y.tm_year = pTm.wYear - 1900;
+ y.tm_mon = pTm.wMonth - 1;
+ y.tm_wday = pTm.wDayOfWeek;
+ y.tm_mday = pTm.wDay;
+ y.tm_hour = pTm.wHour;
+ y.tm_min = pTm.wMinute;
+ y.tm_sec = pTm.wSecond;
+ return &y;
+}
+
+/* This will never be called, but defined to make the code compile */
+#define GetTempPathA(a,b)
+
+#define LockFile(a,b,c,d,e) winceLockFile(&a, b, c, d, e)
+#define UnlockFile(a,b,c,d,e) winceUnlockFile(&a, b, c, d, e)
+#define LockFileEx(a,b,c,d,e,f) winceLockFileEx(&a, b, c, d, e, f)
+
+#define HANDLE_TO_WINFILE(a) (winFile*)&((char*)a)[-offsetof(winFile,h)]
+
+/*
+** Acquire a lock on the handle h
+*/
+static void winceMutexAcquire(HANDLE h){
+ DWORD dwErr;
+ do {
+ dwErr = WaitForSingleObject(h, INFINITE);
+ } while (dwErr != WAIT_OBJECT_0 && dwErr != WAIT_ABANDONED);
+}
+/*
+** Release a lock acquired by winceMutexAcquire()
+*/
+#define winceMutexRelease(h) ReleaseMutex(h)
+
+/*
+** Create the mutex and shared memory used for locking in the file
+** descriptor pFile
+*/
+static BOOL winceCreateLock(const char *zFilename, winFile *pFile){
+ WCHAR *zTok;
+ WCHAR *zName = utf8ToUnicode(zFilename);
+ BOOL bInit = TRUE;
+
+ /* Initialize the local lockdata */
+ ZeroMemory(&pFile->local, sizeof(pFile->local));
+
+ /* Replace the backslashes in the filename and lowercase it
+ ** to derive a mutex name. */
+ zTok = CharLowerW(zName);
+ for (;*zTok;zTok++){
+ if (*zTok == '\\') *zTok = '_';
+ }
+
+ /* Create/open the named mutex */
+ pFile->hMutex = CreateMutexW(NULL, FALSE, zName);
+ if (!pFile->hMutex){
+ sqliteFree(zName);
+ return FALSE;
+ }
+
+ /* Acquire the mutex before continuing */
+ winceMutexAcquire(pFile->hMutex);
+
+ /* Since the names of named mutexes, semaphores, file mappings etc are
+ ** case-sensitive, take advantage of that by uppercasing the mutex name
+ ** and using that as the shared filemapping name.
+ */
+ CharUpperW(zName);
+ pFile->hShared = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
+ PAGE_READWRITE, 0, sizeof(winceLock),
+ zName);
+
+ /* Set a flag that indicates we're the first to create the memory so it
+ ** must be zero-initialized */
+ if (GetLastError() == ERROR_ALREADY_EXISTS){
+ bInit = FALSE;
+ }
+
+ sqliteFree(zName);
+
+ /* If we succeeded in making the shared memory handle, map it. */
+ if (pFile->hShared){
+ pFile->shared = (winceLock*)MapViewOfFile(pFile->hShared,
+ FILE_MAP_READ|FILE_MAP_WRITE, 0, 0, sizeof(winceLock));
+ /* If mapping failed, close the shared memory handle and erase it */
+ if (!pFile->shared){
+ CloseHandle(pFile->hShared);
+ pFile->hShared = NULL;
+ }
+ }
+
+ /* If shared memory could not be created, then close the mutex and fail */
+ if (pFile->hShared == NULL){
+ winceMutexRelease(pFile->hMutex);
+ CloseHandle(pFile->hMutex);
+ pFile->hMutex = NULL;
+ return FALSE;
+ }
+
+ /* Initialize the shared memory if we're supposed to */
+ if (bInit) {
+ ZeroMemory(pFile->shared, sizeof(winceLock));
+ }
+
+ winceMutexRelease(pFile->hMutex);
+ return TRUE;
+}
+
+/*
+** Destroy the part of winFile that deals with wince locks
+*/
+static void winceDestroyLock(winFile *pFile){
+ if (pFile->hMutex){
+ /* Acquire the mutex */
+ winceMutexAcquire(pFile->hMutex);
+
+ /* The following blocks should probably assert in debug mode, but they
+ are here to clean up in case any locks remained open */
+ if (pFile->local.nReaders){
+ pFile->shared->nReaders --;
+ }
+ if (pFile->local.bReserved){
+ pFile->shared->bReserved = FALSE;
+ }
+ if (pFile->local.bPending){
+ pFile->shared->bPending = FALSE;
+ }
+ if (pFile->local.bExclusive){
+ pFile->shared->bExclusive = FALSE;
+ }
+
+ /* De-reference and close our copy of the shared memory handle */
+ UnmapViewOfFile(pFile->shared);
+ CloseHandle(pFile->hShared);
+
+ /* Done with the mutex */
+ winceMutexRelease(pFile->hMutex);
+ CloseHandle(pFile->hMutex);
+ pFile->hMutex = NULL;
+ }
+}
+
+/*
+** An implementation of the LockFile() API of windows for wince
+*/
+static BOOL winceLockFile(
+ HANDLE *phFile,
+ DWORD dwFileOffsetLow,
+ DWORD dwFileOffsetHigh,
+ DWORD nNumberOfBytesToLockLow,
+ DWORD nNumberOfBytesToLockHigh
+){
+ winFile *pFile = HANDLE_TO_WINFILE(phFile);
+ BOOL bReturn = FALSE;
+
+ if (!pFile->hMutex) return TRUE;
+ winceMutexAcquire(pFile->hMutex);
+
+ /* Wanting an exclusive lock? */
+ if (dwFileOffsetLow == SHARED_FIRST
+ && nNumberOfBytesToLockLow == SHARED_SIZE){
+ if (pFile->shared->nReaders == 0 && pFile->shared->bExclusive == 0){
+ pFile->shared->bExclusive = TRUE;
+ pFile->local.bExclusive = TRUE;
+ bReturn = TRUE;
+ }
+ }
+
+ /* Want a read-only lock? */
+ else if ((dwFileOffsetLow >= SHARED_FIRST &&
+ dwFileOffsetLow < SHARED_FIRST + SHARED_SIZE) &&
+ nNumberOfBytesToLockLow == 1){
+ if (pFile->shared->bExclusive == 0){
+ pFile->local.nReaders ++;
+ if (pFile->local.nReaders == 1){
+ pFile->shared->nReaders ++;
+ }
+ bReturn = TRUE;
+ }
+ }
+
+ /* Want a pending lock? */
+ else if (dwFileOffsetLow == PENDING_BYTE && nNumberOfBytesToLockLow == 1){
+ /* If no pending lock has been acquired, then acquire it */
+ if (pFile->shared->bPending == 0) {
+ pFile->shared->bPending = TRUE;
+ pFile->local.bPending = TRUE;
+ bReturn = TRUE;
+ }
+ }
+ /* Want a reserved lock? */
+ else if (dwFileOffsetLow == RESERVED_BYTE && nNumberOfBytesToLockLow == 1){
+ if (pFile->shared->bReserved == 0) {
+ pFile->shared->bReserved = TRUE;
+ pFile->local.bReserved = TRUE;
+ bReturn = TRUE;
+ }
+ }
+
+ winceMutexRelease(pFile->hMutex);
+ return bReturn;
+}
+
+/*
+** An implementation of the UnlockFile API of windows for wince
+*/
+static BOOL winceUnlockFile(
+ HANDLE *phFile,
+ DWORD dwFileOffsetLow,
+ DWORD dwFileOffsetHigh,
+ DWORD nNumberOfBytesToUnlockLow,
+ DWORD nNumberOfBytesToUnlockHigh
+){
+ winFile *pFile = HANDLE_TO_WINFILE(phFile);
+ BOOL bReturn = FALSE;
+
+ if (!pFile->hMutex) return TRUE;
+ winceMutexAcquire(pFile->hMutex);
+
+ /* Releasing a reader lock or an exclusive lock */
+ if (dwFileOffsetLow >= SHARED_FIRST &&
+ dwFileOffsetLow < SHARED_FIRST + SHARED_SIZE){
+ /* Did we have an exclusive lock? */
+ if (pFile->local.bExclusive){
+ pFile->local.bExclusive = FALSE;
+ pFile->shared->bExclusive = FALSE;
+ bReturn = TRUE;
+ }
+
+ /* Did we just have a reader lock? */
+ else if (pFile->local.nReaders){
+ pFile->local.nReaders --;
+ if (pFile->local.nReaders == 0)
+ {
+ pFile->shared->nReaders --;
+ }
+ bReturn = TRUE;
+ }
+ }
+
+ /* Releasing a pending lock */
+ else if (dwFileOffsetLow == PENDING_BYTE && nNumberOfBytesToUnlockLow == 1){
+ if (pFile->local.bPending){
+ pFile->local.bPending = FALSE;
+ pFile->shared->bPending = FALSE;
+ bReturn = TRUE;
+ }
+ }
+ /* Releasing a reserved lock */
+ else if (dwFileOffsetLow == RESERVED_BYTE && nNumberOfBytesToUnlockLow == 1){
+ if (pFile->local.bReserved) {
+ pFile->local.bReserved = FALSE;
+ pFile->shared->bReserved = FALSE;
+ bReturn = TRUE;
+ }
+ }
+
+ winceMutexRelease(pFile->hMutex);
+ return bReturn;
+}
+
+/*
+** An implementation of the LockFileEx() API of windows for wince
+*/
+static BOOL winceLockFileEx(
+ HANDLE *phFile,
+ DWORD dwFlags,
+ DWORD dwReserved,
+ DWORD nNumberOfBytesToLockLow,
+ DWORD nNumberOfBytesToLockHigh,
+ LPOVERLAPPED lpOverlapped
+){
+ /* If the caller wants a shared read lock, forward this call
+ ** to winceLockFile */
+ if (lpOverlapped->Offset == SHARED_FIRST &&
+ dwFlags == 1 &&
+ nNumberOfBytesToLockLow == SHARED_SIZE){
+ return winceLockFile(phFile, SHARED_FIRST, 0, 1, 0);
+ }
+ return FALSE;
+}
+/*
+** End of the special code for wince
+*****************************************************************************/
+#endif /* OS_WINCE */
+
+/*
+** Delete the named file.
+**
+** Note that windows does not allow a file to be deleted if some other
+** process has it open. Sometimes a virus scanner or indexing program
+** will open a journal file shortly after it is created in order to do
+** whatever it does.  While this other process is holding the
+** file open, we will be unable to delete it.  To work around this
+** problem, we delay 100 milliseconds and try to delete again.  Up
+** to MX_DELETION_ATTEMPTS deletion attempts are made before giving
+** up and returning an error.
+*/
+#define MX_DELETION_ATTEMPTS 3
+int sqlite3WinDelete(const char *zFilename){
+ WCHAR *zWide = utf8ToUnicode(zFilename);
+ int cnt = 0;
+ int rc;
+ if( zWide ){
+ do{
+ rc = DeleteFileW(zWide);
+ }while( rc==0 && GetFileAttributesW(zWide)!=0xffffffff
+ && cnt++ < MX_DELETION_ATTEMPTS && (Sleep(100), 1) );
+ sqliteFree(zWide);
+ }else{
+#if OS_WINCE
+ return SQLITE_NOMEM;
+#else
+ do{
+ rc = DeleteFileA(zFilename);
+ }while( rc==0 && GetFileAttributesA(zFilename)!=0xffffffff
+ && cnt++ < MX_DELETION_ATTEMPTS && (Sleep(100), 1) );
+#endif
+ }
+ TRACE2("DELETE \"%s\"\n", zFilename);
+ return rc!=0 ? SQLITE_OK : SQLITE_IOERR;
+}
+
+/*
+** Return TRUE if the named file exists.
+*/
+int sqlite3WinFileExists(const char *zFilename){
+ int exists = 0;
+ WCHAR *zWide = utf8ToUnicode(zFilename);
+ if( zWide ){
+ exists = GetFileAttributesW(zWide) != 0xffffffff;
+ sqliteFree(zWide);
+ }else{
+#if OS_WINCE
+ return SQLITE_NOMEM;
+#else
+ exists = GetFileAttributesA(zFilename) != 0xffffffff;
+#endif
+ }
+ return exists;
+}
+
+/* Forward declaration */
+static int allocateWinFile(winFile *pInit, OsFile **pId);
+
+/*
+** Attempt to open a file for both reading and writing. If that
+** fails, try opening it read-only. If the file does not exist,
+** try to create it.
+**
+** On success, a handle for the open file is written to *id
+** and *pReadonly is set to 0 if the file was opened for reading and
+** writing or 1 if the file was opened read-only. The function returns
+** SQLITE_OK.
+**
+** On failure, the function returns SQLITE_CANTOPEN and leaves
+** *id and *pReadonly unchanged.
+*/
+int sqlite3WinOpenReadWrite(
+ const char *zFilename,
+ OsFile **pId,
+ int *pReadonly
+){
+ winFile f;
+ HANDLE h;
+ WCHAR *zWide = utf8ToUnicode(zFilename);
+ assert( *pId==0 );
+ if( zWide ){
+ h = CreateFileW(zWide,
+ GENERIC_READ | GENERIC_WRITE,
+ FILE_SHARE_READ | FILE_SHARE_WRITE,
+ NULL,
+ OPEN_ALWAYS,
+ FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS,
+ NULL
+ );
+ if( h==INVALID_HANDLE_VALUE ){
+ h = CreateFileW(zWide,
+ GENERIC_READ,
+ FILE_SHARE_READ | FILE_SHARE_WRITE,
+ NULL,
+ OPEN_ALWAYS,
+ FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS,
+ NULL
+ );
+ if( h==INVALID_HANDLE_VALUE ){
+ sqliteFree(zWide);
+ return SQLITE_CANTOPEN;
+ }
+ *pReadonly = 1;
+ }else{
+ *pReadonly = 0;
+ }
+#if OS_WINCE
+ if (!winceCreateLock(zFilename, &f)){
+ CloseHandle(h);
+ sqliteFree(zWide);
+ return SQLITE_CANTOPEN;
+ }
+#endif
+ sqliteFree(zWide);
+ }else{
+#if OS_WINCE
+ return SQLITE_NOMEM;
+#else
+ h = CreateFileA(zFilename,
+ GENERIC_READ | GENERIC_WRITE,
+ FILE_SHARE_READ | FILE_SHARE_WRITE,
+ NULL,
+ OPEN_ALWAYS,
+ FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS,
+ NULL
+ );
+ if( h==INVALID_HANDLE_VALUE ){
+ h = CreateFileA(zFilename,
+ GENERIC_READ,
+ FILE_SHARE_READ | FILE_SHARE_WRITE,
+ NULL,
+ OPEN_ALWAYS,
+ FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS,
+ NULL
+ );
+ if( h==INVALID_HANDLE_VALUE ){
+ return SQLITE_CANTOPEN;
+ }
+ *pReadonly = 1;
+ }else{
+ *pReadonly = 0;
+ }
+#endif /* OS_WINCE */
+ }
+ f.h = h;
+#if OS_WINCE
+ f.zDeleteOnClose = 0;
+#endif
+ TRACE3("OPEN R/W %d \"%s\"\n", h, zFilename);
+ return allocateWinFile(&f, pId);
+}
+
+
+/*
+** Attempt to open a new file for exclusive access by this process.
+** The file will be opened for both reading and writing. To avoid
+** a potential security problem, we do not allow the file to have
+** previously existed. Nor do we allow the file to be a symbolic
+** link.
+**
+** If delFlag is true, then make arrangements to automatically delete
+** the file when it is closed.
+**
+** On success, write the file handle into *id and return SQLITE_OK.
+**
+** On failure, return SQLITE_CANTOPEN.
+**
+** Sometimes if we have just deleted a prior journal file, windows
+** will fail to open a new one because there is a "pending delete".
+** To work around this bug, we pause for 100 milliseconds and retry
+** the open, making up to two additional attempts.  The whole operation
+** fails only if every attempt is unsuccessful.
+*/
+int sqlite3WinOpenExclusive(const char *zFilename, OsFile **pId, int delFlag){
+ winFile f;
+ HANDLE h;
+ int fileflags;
+ WCHAR *zWide = utf8ToUnicode(zFilename);
+ assert( *pId == 0 );
+ fileflags = FILE_FLAG_RANDOM_ACCESS;
+#if !OS_WINCE
+ if( delFlag ){
+ fileflags |= FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE;
+ }
+#endif
+ if( zWide ){
+ int cnt = 0;
+ do{
+ h = CreateFileW(zWide,
+ GENERIC_READ | GENERIC_WRITE,
+ 0,
+ NULL,
+ CREATE_ALWAYS,
+ fileflags,
+ NULL
+ );
+ }while( h==INVALID_HANDLE_VALUE && cnt++ < 2 && (Sleep(100), 1) );
+ sqliteFree(zWide);
+ }else{
+#if OS_WINCE
+ return SQLITE_NOMEM;
+#else
+ int cnt = 0;
+ do{
+ h = CreateFileA(zFilename,
+ GENERIC_READ | GENERIC_WRITE,
+ 0,
+ NULL,
+ CREATE_ALWAYS,
+ fileflags,
+ NULL
+ );
+ }while( h==INVALID_HANDLE_VALUE && cnt++ < 2 && (Sleep(100), 1) );
+#endif /* OS_WINCE */
+ }
+ if( h==INVALID_HANDLE_VALUE ){
+ return SQLITE_CANTOPEN;
+ }
+ f.h = h;
+#if OS_WINCE
+ f.zDeleteOnClose = delFlag ? utf8ToUnicode(zFilename) : 0;
+ f.hMutex = NULL;
+#endif
+ TRACE3("OPEN EX %d \"%s\"\n", h, zFilename);
+ return allocateWinFile(&f, pId);
+}
+
+/*
+** Attempt to open a new file for read-only access.
+**
+** On success, write the file handle into *id and return SQLITE_OK.
+**
+** On failure, return SQLITE_CANTOPEN.
+*/
+int sqlite3WinOpenReadOnly(const char *zFilename, OsFile **pId){
+ winFile f;
+ HANDLE h;
+ WCHAR *zWide = utf8ToUnicode(zFilename);
+ assert( *pId==0 );
+ if( zWide ){
+ h = CreateFileW(zWide,
+ GENERIC_READ,
+ 0,
+ NULL,
+ OPEN_EXISTING,
+ FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS,
+ NULL
+ );
+ sqliteFree(zWide);
+ }else{
+#if OS_WINCE
+ return SQLITE_NOMEM;
+#else
+ h = CreateFileA(zFilename,
+ GENERIC_READ,
+ 0,
+ NULL,
+ OPEN_EXISTING,
+ FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS,
+ NULL
+ );
+#endif
+ }
+ if( h==INVALID_HANDLE_VALUE ){
+ return SQLITE_CANTOPEN;
+ }
+ f.h = h;
+#if OS_WINCE
+ f.zDeleteOnClose = 0;
+ f.hMutex = NULL;
+#endif
+ TRACE3("OPEN RO %d \"%s\"\n", h, zFilename);
+ return allocateWinFile(&f, pId);
+}
+
+/*
+** Attempt to open a file descriptor for the directory that contains a
+** file. This file descriptor can be used to fsync() the directory
+** in order to make sure the creation of a new file is actually written
+** to disk.
+**
+** This routine is only meaningful for Unix. It is a no-op under
+** windows since windows does not support hard links.
+**
+** On success, the handle for a previously opened file at *id is
+** updated with the new directory file descriptor and SQLITE_OK is
+** returned.
+**
+** On failure, the function returns SQLITE_CANTOPEN and leaves
+** *id unchanged.
+*/
+static int winOpenDirectory(
+ OsFile *id,
+ const char *zDirname
+){
+ return SQLITE_OK;
+}
+
+/*
+** If the following global variable points to a string which is the
+** name of a directory, then that directory will be used to store
+** temporary files.
+*/
+char *sqlite3_temp_directory = 0;
+
+/*
+** Create a temporary file name in zBuf. zBuf must be big enough to
+** hold at least SQLITE_TEMPNAME_SIZE characters.
+*/
+int sqlite3WinTempFileName(char *zBuf){
+ static char zChars[] =
+ "abcdefghijklmnopqrstuvwxyz"
+ "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+ "0123456789";
+ int i, j;
+ char zTempPath[SQLITE_TEMPNAME_SIZE];
+ if( sqlite3_temp_directory ){
+ strncpy(zTempPath, sqlite3_temp_directory, SQLITE_TEMPNAME_SIZE-30);
+ zTempPath[SQLITE_TEMPNAME_SIZE-30] = 0;
+ }else if( isNT() ){
+ char *zMulti;
+ WCHAR zWidePath[SQLITE_TEMPNAME_SIZE];
+ GetTempPathW(SQLITE_TEMPNAME_SIZE-30, zWidePath);
+ zMulti = unicodeToUtf8(zWidePath);
+ if( zMulti ){
+ strncpy(zTempPath, zMulti, SQLITE_TEMPNAME_SIZE-30);
+ zTempPath[SQLITE_TEMPNAME_SIZE-30] = 0;
+ sqliteFree(zMulti);
+ }
+ }else{
+ GetTempPathA(SQLITE_TEMPNAME_SIZE-30, zTempPath);
+ }
+ for(i=strlen(zTempPath); i>0 && zTempPath[i-1]=='\\'; i--){}
+ zTempPath[i] = 0;
+ for(;;){
+ sprintf(zBuf, "%s\\"TEMP_FILE_PREFIX, zTempPath);
+ j = strlen(zBuf);
+ sqlite3Randomness(15, &zBuf[j]);
+ for(i=0; i<15; i++, j++){
+ zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ];
+ }
+ zBuf[j] = 0;
+ if( !sqlite3OsFileExists(zBuf) ) break;
+ }
+ TRACE2("TEMP FILENAME: %s\n", zBuf);
+ return SQLITE_OK;
+}
+
+/*
+** Close a file.
+**
+** It is reported that an attempt to close a handle might sometimes
+** fail. This is a very unreasonable result, but windows is notorious
+** for being unreasonable so I do not doubt that it might happen. If
+** the close fails, we pause for 100 milliseconds and try again. As
+** many as MX_CLOSE_ATTEMPT attempts to close the handle are made before
+** giving up and returning an error.
+*/
+#define MX_CLOSE_ATTEMPT 3
+static int winClose(OsFile **pId){
+ winFile *pFile;
+ int rc = 1;
+ if( pId && (pFile = (winFile*)*pId)!=0 ){
+    int cnt = 0;
+ TRACE2("CLOSE %d\n", pFile->h);
+ do{
+ rc = CloseHandle(pFile->h);
+ }while( rc==0 && cnt++ < MX_CLOSE_ATTEMPT && (Sleep(100), 1) );
+#if OS_WINCE
+ winceDestroyLock(pFile);
+ if( pFile->zDeleteOnClose ){
+ DeleteFileW(pFile->zDeleteOnClose);
+ sqliteFree(pFile->zDeleteOnClose);
+ }
+#endif
+ OpenCounter(-1);
+ sqliteFree(pFile);
+ *pId = 0;
+ }
+ return rc ? SQLITE_OK : SQLITE_IOERR;
+}
+
+/*
+** Read data from a file into a buffer. Return SQLITE_OK if all
+** bytes were read successfully and SQLITE_IOERR if anything goes
+** wrong.
+*/
+static int winRead(OsFile *id, void *pBuf, int amt){
+ DWORD got;
+ assert( id!=0 );
+ SimulateIOError(return SQLITE_IOERR);
+ TRACE3("READ %d lock=%d\n", ((winFile*)id)->h, ((winFile*)id)->locktype);
+ if( !ReadFile(((winFile*)id)->h, pBuf, amt, &got, 0) ){
+ got = 0;
+ }
+ if( got==(DWORD)amt ){
+ return SQLITE_OK;
+ }else{
+ return SQLITE_IOERR;
+ }
+}
+
+/*
+** Write data from a buffer into a file. Return SQLITE_OK on success
+** or some other error code on failure.
+*/
+static int winWrite(OsFile *id, const void *pBuf, int amt){
+ int rc = 0;
+ DWORD wrote;
+ assert( id!=0 );
+ SimulateIOError(return SQLITE_IOERR);
+ SimulateDiskfullError(return SQLITE_FULL);
+ TRACE3("WRITE %d lock=%d\n", ((winFile*)id)->h, ((winFile*)id)->locktype);
+ assert( amt>0 );
+ while( amt>0 && (rc = WriteFile(((winFile*)id)->h, pBuf, amt, &wrote, 0))!=0
+ && wrote>0 ){
+ amt -= wrote;
+ pBuf = &((char*)pBuf)[wrote];
+ }
+ if( !rc || amt>(int)wrote ){
+ return SQLITE_FULL;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Some microsoft compilers lack this definition.
+*/
+#ifndef INVALID_SET_FILE_POINTER
+# define INVALID_SET_FILE_POINTER ((DWORD)-1)
+#endif
+
+/*
+** Move the read/write pointer in a file.
+*/
+static int winSeek(OsFile *id, i64 offset){
+ LONG upperBits = offset>>32;
+ LONG lowerBits = offset & 0xffffffff;
+ DWORD rc;
+ assert( id!=0 );
+#ifdef SQLITE_TEST
+ if( offset ) SimulateDiskfullError(return SQLITE_FULL);
+#endif
+ SEEK(offset/1024 + 1);
+ rc = SetFilePointer(((winFile*)id)->h, lowerBits, &upperBits, FILE_BEGIN);
+ TRACE3("SEEK %d %lld\n", ((winFile*)id)->h, offset);
+ if( rc==INVALID_SET_FILE_POINTER && GetLastError()!=NO_ERROR ){
+ return SQLITE_FULL;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Make sure all writes to a particular file are committed to disk.
+*/
+static int winSync(OsFile *id, int dataOnly){
+ assert( id!=0 );
+ TRACE3("SYNC %d lock=%d\n", ((winFile*)id)->h, ((winFile*)id)->locktype);
+ if( FlushFileBuffers(((winFile*)id)->h) ){
+ return SQLITE_OK;
+ }else{
+ return SQLITE_IOERR;
+ }
+}
+
+/*
+** Sync the directory zDirname. This is a no-op on operating systems other
+** than UNIX.
+*/
+int sqlite3WinSyncDirectory(const char *zDirname){
+ SimulateIOError(return SQLITE_IOERR);
+ return SQLITE_OK;
+}
+
+/*
+** Truncate an open file to a specified size
+*/
+static int winTruncate(OsFile *id, i64 nByte){
+ LONG upperBits = nByte>>32;
+ assert( id!=0 );
+ TRACE3("TRUNCATE %d %lld\n", ((winFile*)id)->h, nByte);
+ SimulateIOError(return SQLITE_IOERR);
+ SetFilePointer(((winFile*)id)->h, nByte, &upperBits, FILE_BEGIN);
+ SetEndOfFile(((winFile*)id)->h);
+ return SQLITE_OK;
+}
+
+/*
+** Determine the current size of a file in bytes
+*/
+static int winFileSize(OsFile *id, i64 *pSize){
+ DWORD upperBits, lowerBits;
+ assert( id!=0 );
+ SimulateIOError(return SQLITE_IOERR);
+ lowerBits = GetFileSize(((winFile*)id)->h, &upperBits);
+ *pSize = (((i64)upperBits)<<32) + lowerBits;
+ return SQLITE_OK;
+}
+
+/*
+** LOCKFILE_FAIL_IMMEDIATELY is undefined on some Windows systems.
+*/
+#ifndef LOCKFILE_FAIL_IMMEDIATELY
+# define LOCKFILE_FAIL_IMMEDIATELY 1
+#endif
+
+/*
+** Acquire a reader lock.
+** Different API routines are called depending on whether we are
+** running on Win95 or WinNT.
+*/
+static int getReadLock(winFile *id){
+ int res;
+ if( isNT() ){
+ OVERLAPPED ovlp;
+ ovlp.Offset = SHARED_FIRST;
+ ovlp.OffsetHigh = 0;
+ ovlp.hEvent = 0;
+ res = LockFileEx(id->h, LOCKFILE_FAIL_IMMEDIATELY, 0, SHARED_SIZE,0,&ovlp);
+ }else{
+ int lk;
+ sqlite3Randomness(sizeof(lk), &lk);
+ id->sharedLockByte = (lk & 0x7fffffff)%(SHARED_SIZE - 1);
+ res = LockFile(id->h, SHARED_FIRST+id->sharedLockByte, 0, 1, 0);
+ }
+ return res;
+}
+
+/*
+** Undo a readlock
+*/
+static int unlockReadLock(winFile *pFile){
+ int res;
+ if( isNT() ){
+ res = UnlockFile(pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0);
+ }else{
+ res = UnlockFile(pFile->h, SHARED_FIRST + pFile->sharedLockByte, 0, 1, 0);
+ }
+ return res;
+}
+
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+/*
+** Check that a given pathname is a directory and is writable
+**
+*/
+int sqlite3WinIsDirWritable(char *zDirname){
+ int fileAttr;
+ WCHAR *zWide;
+ if( zDirname==0 ) return 0;
+ if( !isNT() && strlen(zDirname)>MAX_PATH ) return 0;
+ zWide = utf8ToUnicode(zDirname);
+ if( zWide ){
+ fileAttr = GetFileAttributesW(zWide);
+ sqliteFree(zWide);
+ }else{
+#if OS_WINCE
+ return 0;
+#else
+ fileAttr = GetFileAttributesA(zDirname);
+#endif
+ }
+ if( fileAttr == 0xffffffff ) return 0;
+ if( (fileAttr & FILE_ATTRIBUTE_DIRECTORY) != FILE_ATTRIBUTE_DIRECTORY ){
+ return 0;
+ }
+ return 1;
+}
+#endif /* SQLITE_OMIT_PAGER_PRAGMAS */
+
+/*
+** Lock the file with the lock specified by parameter locktype - one
+** of the following:
+**
+** (1) SHARED_LOCK
+** (2) RESERVED_LOCK
+** (3) PENDING_LOCK
+** (4) EXCLUSIVE_LOCK
+**
+** Sometimes when requesting one lock state, additional lock states
+** are inserted in between. The locking might fail on one of the later
+** transitions, leaving the lock state different from where it started but
+** still short of its goal. The following chart shows the allowed
+** transitions and the inserted intermediate states:
+**
+** UNLOCKED -> SHARED
+** SHARED -> RESERVED
+** SHARED -> (PENDING) -> EXCLUSIVE
+** RESERVED -> (PENDING) -> EXCLUSIVE
+** PENDING -> EXCLUSIVE
+**
+** This routine will only increase a lock. The winUnlock() routine
+** erases all locks at once and returns us immediately to locking level 0.
+** It is not possible to lower the locking level one step at a time. You
+** must go straight to locking level 0.
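+**
+** Editorial note (not part of the original comment): the lock states above
+** are realized with byte-range locks near the 1GB mark of the database
+** file.  Assuming the stock os.h definitions (PENDING_BYTE = 0x40000000,
+** RESERVED_BYTE = PENDING_BYTE+1, SHARED_FIRST = PENDING_BYTE+2 and
+** SHARED_SIZE = 510), a SHARED lock holds one byte in the shared range
+** (a random byte on Win95, the whole range via LockFileEx on WinNT),
+** RESERVED and PENDING each hold their single byte, and EXCLUSIVE holds
+** the entire SHARED_FIRST..SHARED_FIRST+SHARED_SIZE-1 range.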
+*/
+static int winLock(OsFile *id, int locktype){
+ int rc = SQLITE_OK; /* Return code from subroutines */
+ int res = 1; /* Result of a windows lock call */
+ int newLocktype; /* Set id->locktype to this value before exiting */
+ int gotPendingLock = 0;/* True if we acquired a PENDING lock this time */
+ winFile *pFile = (winFile*)id;
+
+ assert( pFile!=0 );
+ TRACE5("LOCK %d %d was %d(%d)\n",
+ pFile->h, locktype, pFile->locktype, pFile->sharedLockByte);
+
+ /* If there is already a lock of this type or more restrictive on the
+ ** OsFile, do nothing. Don't use the end_lock: exit path, as
+ ** sqlite3OsEnterMutex() hasn't been called yet.
+ */
+ if( pFile->locktype>=locktype ){
+ return SQLITE_OK;
+ }
+
+ /* Make sure the locking sequence is correct
+ */
+ assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK );
+ assert( locktype!=PENDING_LOCK );
+ assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK );
+
+ /* Lock the PENDING_LOCK byte if we need to acquire a PENDING lock or
+ ** a SHARED lock. If we are acquiring a SHARED lock, the acquisition of
+ ** the PENDING_LOCK byte is temporary.
+ */
+ newLocktype = pFile->locktype;
+ if( pFile->locktype==NO_LOCK
+ || (locktype==EXCLUSIVE_LOCK && pFile->locktype==RESERVED_LOCK)
+ ){
+ int cnt = 3;
+ while( cnt-->0 && (res = LockFile(pFile->h, PENDING_BYTE, 0, 1, 0))==0 ){
+ /* Try 3 times to get the pending lock. The pending lock might be
+ ** held by another reader process who will release it momentarily.
+ */
+ TRACE2("could not get a PENDING lock. cnt=%d\n", cnt);
+ Sleep(1);
+ }
+ gotPendingLock = res;
+ }
+
+ /* Acquire a shared lock
+ */
+ if( locktype==SHARED_LOCK && res ){
+ assert( pFile->locktype==NO_LOCK );
+ res = getReadLock(pFile);
+ if( res ){
+ newLocktype = SHARED_LOCK;
+ }
+ }
+
+ /* Acquire a RESERVED lock
+ */
+ if( locktype==RESERVED_LOCK && res ){
+ assert( pFile->locktype==SHARED_LOCK );
+ res = LockFile(pFile->h, RESERVED_BYTE, 0, 1, 0);
+ if( res ){
+ newLocktype = RESERVED_LOCK;
+ }
+ }
+
+ /* Acquire a PENDING lock
+ */
+ if( locktype==EXCLUSIVE_LOCK && res ){
+ newLocktype = PENDING_LOCK;
+ gotPendingLock = 0;
+ }
+
+ /* Acquire an EXCLUSIVE lock
+ */
+ if( locktype==EXCLUSIVE_LOCK && res ){
+ assert( pFile->locktype>=SHARED_LOCK );
+ res = unlockReadLock(pFile);
+ TRACE2("unreadlock = %d\n", res);
+ res = LockFile(pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0);
+ if( res ){
+ newLocktype = EXCLUSIVE_LOCK;
+ }else{
+ TRACE2("error-code = %d\n", GetLastError());
+ }
+ }
+
+ /* If we are holding a PENDING lock that ought to be released, then
+ ** release it now.
+ */
+ if( gotPendingLock && locktype==SHARED_LOCK ){
+ UnlockFile(pFile->h, PENDING_BYTE, 0, 1, 0);
+ }
+
+  /* Update the lock state held in the file descriptor, then
+ ** return the appropriate result code.
+ */
+ if( res ){
+ rc = SQLITE_OK;
+ }else{
+ TRACE4("LOCK FAILED %d trying for %d but got %d\n", pFile->h,
+ locktype, newLocktype);
+ rc = SQLITE_BUSY;
+ }
+ pFile->locktype = newLocktype;
+ return rc;
+}
+
+/*
+** This routine checks if there is a RESERVED lock held on the specified
+** file by this or any other process. If such a lock is held, return
+** non-zero, otherwise zero.
+*/
+static int winCheckReservedLock(OsFile *id){
+ int rc;
+ winFile *pFile = (winFile*)id;
+ assert( pFile!=0 );
+ if( pFile->locktype>=RESERVED_LOCK ){
+ rc = 1;
+ TRACE3("TEST WR-LOCK %d %d (local)\n", pFile->h, rc);
+ }else{
+ rc = LockFile(pFile->h, RESERVED_BYTE, 0, 1, 0);
+ if( rc ){
+ UnlockFile(pFile->h, RESERVED_BYTE, 0, 1, 0);
+ }
+ rc = !rc;
+ TRACE3("TEST WR-LOCK %d %d (remote)\n", pFile->h, rc);
+ }
+ return rc;
+}
+
+/*
+** Lower the locking level on file descriptor id to locktype. locktype
+** must be either NO_LOCK or SHARED_LOCK.
+**
+** If the locking level of the file descriptor is already at or below
+** the requested locking level, this routine is a no-op.
+**
+** It is not possible for this routine to fail if the second argument
+** is NO_LOCK. If the second argument is SHARED_LOCK then this routine
+** might return SQLITE_IOERR.
+*/
+static int winUnlock(OsFile *id, int locktype){
+ int type;
+ int rc = SQLITE_OK;
+ winFile *pFile = (winFile*)id;
+ assert( pFile!=0 );
+ assert( locktype<=SHARED_LOCK );
+ TRACE5("UNLOCK %d to %d was %d(%d)\n", pFile->h, locktype,
+ pFile->locktype, pFile->sharedLockByte);
+ type = pFile->locktype;
+ if( type>=EXCLUSIVE_LOCK ){
+ UnlockFile(pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0);
+ if( locktype==SHARED_LOCK && !getReadLock(pFile) ){
+ /* This should never happen. We should always be able to
+ ** reacquire the read lock */
+ rc = SQLITE_IOERR;
+ }
+ }
+ if( type>=RESERVED_LOCK ){
+ UnlockFile(pFile->h, RESERVED_BYTE, 0, 1, 0);
+ }
+ if( locktype==NO_LOCK && type>=SHARED_LOCK ){
+ unlockReadLock(pFile);
+ }
+ if( type>=PENDING_LOCK ){
+ UnlockFile(pFile->h, PENDING_BYTE, 0, 1, 0);
+ }
+ pFile->locktype = locktype;
+ return rc;
+}
+
+/*
+** Turn a relative pathname into a full pathname. Return a pointer
+** to the full pathname stored in space obtained from sqliteMalloc().
+** The calling function is responsible for freeing this space once it
+** is no longer needed.
+*/
+char *sqlite3WinFullPathname(const char *zRelative){
+ char *zFull;
+#if defined(__CYGWIN__)
+ int nByte;
+ nByte = strlen(zRelative) + MAX_PATH + 1001;
+ zFull = sqliteMalloc( nByte );
+ if( zFull==0 ) return 0;
+ if( cygwin_conv_to_full_win32_path(zRelative, zFull) ) return 0;
+#elif OS_WINCE
+ /* WinCE has no concept of a relative pathname, or so I am told. */
+ zFull = sqliteStrDup(zRelative);
+#else
+ char *zNotUsed;
+ WCHAR *zWide;
+ int nByte;
+ zWide = utf8ToUnicode(zRelative);
+ if( zWide ){
+ WCHAR *zTemp, *zNotUsedW;
+ nByte = GetFullPathNameW(zWide, 0, 0, &zNotUsedW) + 1;
+ zTemp = sqliteMalloc( nByte*sizeof(zTemp[0]) );
+ if( zTemp==0 ) return 0;
+ GetFullPathNameW(zWide, nByte, zTemp, &zNotUsedW);
+ sqliteFree(zWide);
+ zFull = unicodeToUtf8(zTemp);
+ sqliteFree(zTemp);
+ }else{
+ nByte = GetFullPathNameA(zRelative, 0, 0, &zNotUsed) + 1;
+ zFull = sqliteMalloc( nByte*sizeof(zFull[0]) );
+ if( zFull==0 ) return 0;
+ GetFullPathNameA(zRelative, nByte, zFull, &zNotUsed);
+ }
+#endif
+ return zFull;
+}
+
+/*
+** The fullSync option is meaningless on windows. This is a no-op.
+*/
+static void winSetFullSync(OsFile *id, int v){
+ return;
+}
+
+/*
+** Return the underlying file handle for an OsFile
+*/
+static int winFileHandle(OsFile *id){
+ return (int)((winFile*)id)->h;
+}
+
+/*
+** Return an integer that indicates the type of lock currently held
+** by this handle. (Used for testing and analysis only.)
+*/
+static int winLockState(OsFile *id){
+ return ((winFile*)id)->locktype;
+}
+
+/*
+** This vector defines all the methods that can operate on an OsFile
+** for win32.
+*/
+static const IoMethod sqlite3WinIoMethod = {
+ winClose,
+ winOpenDirectory,
+ winRead,
+ winWrite,
+ winSeek,
+ winTruncate,
+ winSync,
+ winSetFullSync,
+ winFileHandle,
+ winFileSize,
+ winLock,
+ winUnlock,
+ winLockState,
+ winCheckReservedLock,
+};
+
+/*
+** Allocate memory for an OsFile.  Initialize the new OsFile
+** to the value given in pInit, store a pointer to the new OsFile
+** in *pId, and return SQLITE_OK.  If we run out of memory, close the
+** file, set *pId to NULL, and return SQLITE_NOMEM.
+*/
+static int allocateWinFile(winFile *pInit, OsFile **pId){
+ winFile *pNew;
+ pNew = sqliteMalloc( sizeof(*pNew) );
+ if( pNew==0 ){
+ CloseHandle(pInit->h);
+#if OS_WINCE
+ sqliteFree(pInit->zDeleteOnClose);
+#endif
+ *pId = 0;
+ return SQLITE_NOMEM;
+ }else{
+ *pNew = *pInit;
+ pNew->pMethod = &sqlite3WinIoMethod;
+ pNew->locktype = NO_LOCK;
+ pNew->sharedLockByte = 0;
+ *pId = (OsFile*)pNew;
+ OpenCounter(+1);
+ return SQLITE_OK;
+ }
+}
+
+
+#endif /* SQLITE_OMIT_DISKIO */
+/***************************************************************************
+** Everything above deals with file I/O. Everything that follows deals
+** with other miscellaneous aspects of the operating system interface.
+****************************************************************************/
+
+/*
+** Get information to seed the random number generator. The seed
+** is written into the buffer zBuf[256]. The calling function must
+** supply a sufficiently large buffer.
+*/
+int sqlite3WinRandomSeed(char *zBuf){
+ /* We have to initialize zBuf to prevent valgrind from reporting
+ ** errors. The reports issued by valgrind are incorrect - we would
+ ** prefer that the randomness be increased by making use of the
+ ** uninitialized space in zBuf - but valgrind errors tend to worry
+ ** some users. Rather than argue, it seems easier just to initialize
+ ** the whole array and silence valgrind, even if that means less randomness
+ ** in the random seed.
+ **
+ ** When testing, initializing zBuf[] to zero is all we do. That means
+  ** that we always use the same random number sequence.  This makes the
+ ** tests repeatable.
+ */
+ memset(zBuf, 0, 256);
+ GetSystemTime((LPSYSTEMTIME)zBuf);
+ return SQLITE_OK;
+}
+
+/*
+** Sleep for a little while. Return the amount of time slept.
+*/
+int sqlite3WinSleep(int ms){
+ Sleep(ms);
+ return ms;
+}
+
+/*
+** Static variables used for thread synchronization
+*/
+static int inMutex = 0;
+#ifdef SQLITE_W32_THREADS
+ static DWORD mutexOwner;
+ static CRITICAL_SECTION cs;
+#endif
+
+/*
+** The following pair of routines implement mutual exclusion for
+** multi-threaded processes. Only a single thread is allowed to
+** execute code that is surrounded by EnterMutex() and LeaveMutex().
+**
+** SQLite uses only a single Mutex. There is not much critical
+** code and what little there is executes quickly and without blocking.
+**
+** Version 3.3.1 and earlier used a simple mutex. Beginning with
+** version 3.3.2, a recursive mutex is required.
+*/
+void sqlite3WinEnterMutex(){
+#ifdef SQLITE_W32_THREADS
+ static int isInit = 0;
+ while( !isInit ){
+ static long lock = 0;
+ if( InterlockedIncrement(&lock)==1 ){
+ InitializeCriticalSection(&cs);
+ isInit = 1;
+ }else{
+ Sleep(1);
+ }
+ }
+ EnterCriticalSection(&cs);
+ mutexOwner = GetCurrentThreadId();
+#endif
+ inMutex++;
+}
+void sqlite3WinLeaveMutex(){
+ assert( inMutex );
+ inMutex--;
+#ifdef SQLITE_W32_THREADS
+ assert( mutexOwner==GetCurrentThreadId() );
+ LeaveCriticalSection(&cs);
+#endif
+}
+
+/*
+** Return TRUE if the mutex is currently held.
+**
+** If the thisThreadOnly parameter is true, return true if and only if the
+** calling thread holds the mutex. If the parameter is false, return
+** true if any thread holds the mutex.
+*/
+int sqlite3WinInMutex(int thisThreadOnly){
+#ifdef SQLITE_W32_THREADS
+ return inMutex>0 && (thisThreadOnly==0 || mutexOwner==GetCurrentThreadId());
+#else
+ return inMutex>0;
+#endif
+}
+
+
+/*
+** The following variable, if set to a non-zero value, becomes the result
+** returned from sqlite3OsCurrentTime(). This is used for testing.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_current_time = 0;
+#endif
+
+/*
+** Find the current time (in Universal Coordinated Time). Write the
+** current time and date as a Julian Day number into *prNow and
+** return 0. Return 1 if the time and date cannot be found.
+*/
+int sqlite3WinCurrentTime(double *prNow){
+ FILETIME ft;
+ /* FILETIME structure is a 64-bit value representing the number of
+ 100-nanosecond intervals since January 1, 1601 (= JD 2305813.5).
+ */
+ double now;
+#if OS_WINCE
+ SYSTEMTIME time;
+ GetSystemTime(&time);
+ SystemTimeToFileTime(&time,&ft);
+#else
+ GetSystemTimeAsFileTime( &ft );
+#endif
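+  /* Editorial note: the conversion below turns the 64-bit FILETIME
+  ** (100-nanosecond ticks since 1601-01-01, i.e. JD 2305813.5) into a
+  ** Julian Day number.  There are 86400 seconds per day and 10^7 ticks
+  ** per second, hence the divisor of 864000000000.0 ticks per day.
+  ** As a quick check, 864000000000 ticks maps to 2305814.5, which is
+  ** 1601-01-02 at 00:00 UTC. */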
+ now = ((double)ft.dwHighDateTime) * 4294967296.0;
+ *prNow = (now + ft.dwLowDateTime)/864000000000.0 + 2305813.5;
+#ifdef SQLITE_TEST
+ if( sqlite3_current_time ){
+ *prNow = sqlite3_current_time/86400.0 + 2440587.5;
+ }
+#endif
+ return 0;
+}
+
+/*
+** Remember the number of thread-specific-data blocks allocated.
+** Use this to verify that we are not leaking thread-specific-data.
+** Ticket #1601
+*/
+#ifdef SQLITE_TEST
+int sqlite3_tsd_count = 0;
+# define TSD_COUNTER_INCR InterlockedIncrement(&sqlite3_tsd_count)
+# define TSD_COUNTER_DECR InterlockedDecrement(&sqlite3_tsd_count)
+#else
+# define TSD_COUNTER_INCR /* no-op */
+# define TSD_COUNTER_DECR /* no-op */
+#endif
+
+
+
+/*
+** If called with allocateFlag>0, then return a pointer to thread
+** specific data for the current thread.  Allocate and zero the
+** thread-specific data if it does not already exist.
+**
+** If called with allocateFlag==0, then check the current thread
+** specific data. Return it if it exists. If it does not exist,
+** then return NULL.
+**
+** If called with allocateFlag<0, check to see if the thread specific
+** data is allocated and is all zero. If it is then deallocate it.
+** Return a pointer to the thread specific data or NULL if it is
+** unallocated or gets deallocated.
+*/
+ThreadData *sqlite3WinThreadSpecificData(int allocateFlag){
+ static int key;
+ static int keyInit = 0;
+ static const ThreadData zeroData = {0};
+ ThreadData *pTsd;
+
+ if( !keyInit ){
+ sqlite3OsEnterMutex();
+ if( !keyInit ){
+ key = TlsAlloc();
+ if( key==0xffffffff ){
+ sqlite3OsLeaveMutex();
+ return 0;
+ }
+ keyInit = 1;
+ }
+ sqlite3OsLeaveMutex();
+ }
+ pTsd = TlsGetValue(key);
+ if( allocateFlag>0 ){
+ if( !pTsd ){
+ pTsd = sqlite3OsMalloc( sizeof(zeroData) );
+ if( pTsd ){
+ *pTsd = zeroData;
+ TlsSetValue(key, pTsd);
+ TSD_COUNTER_INCR;
+ }
+ }
+ }else if( pTsd!=0 && allocateFlag<0
+ && memcmp(pTsd, &zeroData, sizeof(ThreadData))==0 ){
+ sqlite3OsFree(pTsd);
+ TlsSetValue(key, 0);
+ TSD_COUNTER_DECR;
+ pTsd = 0;
+ }
+ return pTsd;
+}
+#endif /* OS_WIN */
Added: freeswitch/trunk/libs/sqlite/src/pager.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/pager.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,3939 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This is the implementation of the page cache subsystem or "pager".
+**
+** The pager is used to access a database disk file. It implements
+** atomic commit and rollback through the use of a journal file that
+** is separate from the database file. The pager also implements file
+** locking to prevent two processes from writing the same database
+** file simultaneously, or one process from reading the database while
+** another is writing.
+**
+** @(#) $Id: pager.c,v 1.274 2006/10/03 19:05:19 drh Exp $
+*/
+#ifndef SQLITE_OMIT_DISKIO
+#include "sqliteInt.h"
+#include "os.h"
+#include "pager.h"
+#include <assert.h>
+#include <string.h>
+
+/*
+** Macros for troubleshooting. Normally turned off
+*/
+#if 0
+#define TRACE1(X) sqlite3DebugPrintf(X)
+#define TRACE2(X,Y) sqlite3DebugPrintf(X,Y)
+#define TRACE3(X,Y,Z) sqlite3DebugPrintf(X,Y,Z)
+#define TRACE4(X,Y,Z,W) sqlite3DebugPrintf(X,Y,Z,W)
+#define TRACE5(X,Y,Z,W,V) sqlite3DebugPrintf(X,Y,Z,W,V)
+#else
+#define TRACE1(X)
+#define TRACE2(X,Y)
+#define TRACE3(X,Y,Z)
+#define TRACE4(X,Y,Z,W)
+#define TRACE5(X,Y,Z,W,V)
+#endif
+
+/*
+** The following two macros are used within the TRACEX() macros above
+** to print out file-descriptors.
+**
+** PAGERID() takes a pointer to a Pager struct as its argument.  The
+** associated file-descriptor is returned.  FILEHANDLEID() takes an OsFile
+** struct as its argument.
+*/
+#define PAGERID(p) ((int)(p->fd))
+#define FILEHANDLEID(fd) ((int)fd)
+
+/*
+** The page cache as a whole is always in one of the following
+** states:
+**
+** PAGER_UNLOCK The page cache is not currently reading or
+** writing the database file. There is no
+** data held in memory. This is the initial
+** state.
+**
+** PAGER_SHARED The page cache is reading the database.
+** Writing is not permitted. There can be
+** multiple readers accessing the same database
+** file at the same time.
+**
+** PAGER_RESERVED This process has reserved the database for writing
+** but has not yet made any changes. Only one process
+** at a time can reserve the database. The original
+** database file has not been modified so other
+** processes may still be reading the on-disk
+** database file.
+**
+** PAGER_EXCLUSIVE The page cache is writing the database.
+** Access is exclusive. No other processes or
+** threads can be reading or writing while one
+** process is writing.
+**
+** PAGER_SYNCED The pager moves to this state from PAGER_EXCLUSIVE
+** after all dirty pages have been written to the
+** database file and the file has been synced to
+** disk. All that remains to do is to remove the
+** journal file and the transaction will be
+** committed.
+**
+** The page cache comes up in PAGER_UNLOCK. The first time a
+** sqlite3pager_get() occurs, the state transitions to PAGER_SHARED.
+** After all pages have been released using sqlite_page_unref(),
+** the state transitions back to PAGER_UNLOCK. The first time
+** that sqlite3pager_write() is called, the state transitions to
+** PAGER_RESERVED. (Note that sqlite_page_write() can only be
+** called on an outstanding page which means that the pager must
+** be in PAGER_SHARED before it transitions to PAGER_RESERVED.)
+** The transition to PAGER_EXCLUSIVE occurs before any changes
+** are made to the database file. After an sqlite3pager_rollback()
+** or sqlite_pager_commit(), the state goes back to PAGER_SHARED.
+*/
+#define PAGER_UNLOCK 0
+#define PAGER_SHARED 1 /* same as SHARED_LOCK */
+#define PAGER_RESERVED 2 /* same as RESERVED_LOCK */
+#define PAGER_EXCLUSIVE 4 /* same as EXCLUSIVE_LOCK */
+#define PAGER_SYNCED 5
+
+/*
+** If the SQLITE_BUSY_RESERVED_LOCK macro is set to true at compile-time,
+** then failed attempts to get a reserved lock will invoke the busy callback.
+** This is off by default. To see why, consider the following scenario:
+**
+** Suppose thread A already has a shared lock and wants a reserved lock.
+** Thread B already has a reserved lock and wants an exclusive lock. If
+** both threads are using their busy callbacks, it might be a long time
+** before one of the threads gives up and allows the other to proceed.
+** But if the thread trying to get the reserved lock gives up quickly
+** (if it never invokes its busy callback) then the contention will be
+** resolved quickly.
+*/
+#ifndef SQLITE_BUSY_RESERVED_LOCK
+# define SQLITE_BUSY_RESERVED_LOCK 0
+#endif
+
+/*
+** This macro rounds values up so that if the value is an address it
+** is guaranteed to be an address that is aligned to an 8-byte boundary.
+*/
+#define FORCE_ALIGNMENT(X) (((X)+7)&~7)
+
+/*
+** Each in-memory image of a page begins with the following header.
+** This header is only visible to this pager module. The client
+** code that calls pager sees only the data that follows the header.
+**
+** Client code should call sqlite3pager_write() on a page prior to making
+** any modifications to that page. The first time sqlite3pager_write()
+** is called, the original page contents are written into the rollback
+** journal and PgHdr.inJournal and PgHdr.needSync are set. Later, once
+** the journal page has made it onto the disk surface, PgHdr.needSync
+** is cleared. The modified page cannot be written back into the original
+** database file until the journal page has been synced to disk and the
+** PgHdr.needSync has been cleared.
+**
+** The PgHdr.dirty flag is set when sqlite3pager_write() is called and
+** is cleared again when the page content is written back to the original
+** database file.
+*/
+typedef struct PgHdr PgHdr;
+struct PgHdr {
+ Pager *pPager; /* The pager to which this page belongs */
+ Pgno pgno; /* The page number for this page */
+ PgHdr *pNextHash, *pPrevHash; /* Hash collision chain for PgHdr.pgno */
+ PgHdr *pNextFree, *pPrevFree; /* Freelist of pages where nRef==0 */
+ PgHdr *pNextAll; /* A list of all pages */
+ PgHdr *pNextStmt, *pPrevStmt; /* List of pages in the statement journal */
+ u8 inJournal; /* TRUE if has been written to journal */
+ u8 inStmt; /* TRUE if in the statement subjournal */
+ u8 dirty; /* TRUE if we need to write back changes */
+ u8 needSync; /* Sync journal before writing this page */
+ u8 alwaysRollback; /* Disable dont_rollback() for this page */
+ short int nRef; /* Number of users of this page */
+ PgHdr *pDirty, *pPrevDirty; /* Dirty pages */
+ u32 notUsed; /* Buffer space */
+#ifdef SQLITE_CHECK_PAGES
+ u32 pageHash;
+#endif
+ /* pPager->pageSize bytes of page data follow this header */
+ /* Pager.nExtra bytes of local data follow the page data */
+};
+
+/*
+** For an in-memory only database, some extra information is recorded about
+** each page so that changes can be rolled back. (Journal files are not
+** used for in-memory databases.) The following information is added to
+** the end of every EXTRA block for in-memory databases.
+**
+** This information could have been added directly to the PgHdr structure.
+** But then it would take up an extra 8 bytes of storage on every PgHdr
+** even for disk-based databases. Splitting it out saves 8 bytes. This
+** is only a savings of 0.8% but those percentages add up.
+*/
+typedef struct PgHistory PgHistory;
+struct PgHistory {
+ u8 *pOrig; /* Original page text. Restore to this on a full rollback */
+ u8 *pStmt; /* Text as it was at the beginning of the current statement */
+};
+
+/*
+** A macro used for invoking the codec if there is one
+*/
+#ifdef SQLITE_HAS_CODEC
+# define CODEC1(P,D,N,X) if( P->xCodec!=0 ){ P->xCodec(P->pCodecArg,D,N,X); }
+# define CODEC2(P,D,N,X) ((char*)(P->xCodec!=0?P->xCodec(P->pCodecArg,D,N,X):D))
+#else
+# define CODEC1(P,D,N,X) /* NO-OP */
+# define CODEC2(P,D,N,X) ((char*)D)
+#endif
+
+/*
+** Convert a pointer to a PgHdr into a pointer to its data
+** and back again.
+*/
+#define PGHDR_TO_DATA(P) ((void*)(&(P)[1]))
+#define DATA_TO_PGHDR(D) (&((PgHdr*)(D))[-1])
+#define PGHDR_TO_EXTRA(G,P) ((void*)&((char*)(&(G)[1]))[(P)->pageSize])
+#define PGHDR_TO_HIST(P,PGR) \
+ ((PgHistory*)&((char*)(&(P)[1]))[(PGR)->pageSize+(PGR)->nExtra])
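+
+/* Editorial note (illustration, not from the original source): the macros
+** above encode the in-memory layout of a cached page, which is roughly:
+**
+**   [ PgHdr | pageSize bytes of page data | nExtra bytes | PgHistory ]
+**
+** PGHDR_TO_DATA() steps just past the header, PGHDR_TO_EXTRA() a further
+** pageSize bytes, and PGHDR_TO_HIST() a further nExtra bytes, landing on
+** the PgHistory record that is only meaningful for in-memory databases. */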
+
+/*
+** An open page cache is an instance of the following structure.
+**
+** Pager.errCode may be set to SQLITE_IOERR, SQLITE_CORRUPT, SQLITE_PROTOCOL
+** or SQLITE_FULL. Once one of the first three errors occurs, it persists
+** and is returned as the result of every major pager API call. The
+** SQLITE_FULL return code is slightly different. It persists only until the
+** next successful rollback is performed on the pager cache. Also,
+** SQLITE_FULL does not affect the sqlite3pager_get() and sqlite3pager_lookup()
+** APIs, they may still be used successfully.
+*/
+struct Pager {
+  u8 journalOpen;              /* True if the journal file descriptor is valid */
+ u8 journalStarted; /* True if header of journal is synced */
+ u8 useJournal; /* Use a rollback journal on this file */
+ u8 noReadlock; /* Do not bother to obtain readlocks */
+ u8 stmtOpen; /* True if the statement subjournal is open */
+  u8 stmtInUse;                /* True if we are in a statement subtransaction */
+ u8 stmtAutoopen; /* Open stmt journal when main journal is opened*/
+ u8 noSync; /* Do not sync the journal if true */
+ u8 fullSync; /* Do extra syncs of the journal for robustness */
+ u8 full_fsync; /* Use F_FULLFSYNC when available */
+ u8 state; /* PAGER_UNLOCK, _SHARED, _RESERVED, etc. */
+ u8 tempFile; /* zFilename is a temporary file */
+ u8 readOnly; /* True for a read-only database */
+ u8 needSync; /* True if an fsync() is needed on the journal */
+ u8 dirtyCache; /* True if cached pages have changed */
+ u8 alwaysRollback; /* Disable dont_rollback() for all pages */
+ u8 memDb; /* True to inhibit all file I/O */
+ u8 setMaster; /* True if a m-j name has been written to jrnl */
+ int errCode; /* One of several kinds of errors */
+ int dbSize; /* Number of pages in the file */
+ int origDbSize; /* dbSize before the current change */
+ int stmtSize; /* Size of database (in pages) at stmt_begin() */
+ int nRec; /* Number of pages written to the journal */
+ u32 cksumInit; /* Quasi-random value added to every checksum */
+ int stmtNRec; /* Number of records in stmt subjournal */
+ int nExtra; /* Add this many bytes to each in-memory page */
+ int pageSize; /* Number of bytes in a page */
+ int nPage; /* Total number of in-memory pages */
+ int nMaxPage; /* High water mark of nPage */
+ int nRef; /* Number of in-memory pages with PgHdr.nRef>0 */
+ int mxPage; /* Maximum number of pages to hold in cache */
+ u8 *aInJournal; /* One bit for each page in the database file */
+ u8 *aInStmt; /* One bit for each page in the database */
+ char *zFilename; /* Name of the database file */
+ char *zJournal; /* Name of the journal file */
+  char *zDirectory;           /* Directory holding database and journal files */
+ OsFile *fd, *jfd; /* File descriptors for database and journal */
+ OsFile *stfd; /* File descriptor for the statement subjournal*/
+ BusyHandler *pBusyHandler; /* Pointer to sqlite.busyHandler */
+ PgHdr *pFirst, *pLast; /* List of free pages */
+ PgHdr *pFirstSynced; /* First free page with PgHdr.needSync==0 */
+ PgHdr *pAll; /* List of all pages */
+ PgHdr *pStmt; /* List of pages in the statement subjournal */
+ PgHdr *pDirty; /* List of all dirty pages */
+ i64 journalOff; /* Current byte offset in the journal file */
+ i64 journalHdr; /* Byte offset to previous journal header */
+ i64 stmtHdrOff; /* First journal header written this statement */
+ i64 stmtCksum; /* cksumInit when statement was started */
+ i64 stmtJSize; /* Size of journal at stmt_begin() */
+ int sectorSize; /* Assumed sector size during rollback */
+#ifdef SQLITE_TEST
+  int nHit, nMiss, nOvfl;     /* Cache hits, misses, and LRU overflows */
+ int nRead,nWrite; /* Database pages read/written */
+#endif
+ void (*xDestructor)(void*,int); /* Call this routine when freeing pages */
+ void (*xReiniter)(void*,int); /* Call this routine when reloading pages */
+ void *(*xCodec)(void*,void*,Pgno,int); /* Routine for en/decoding data */
+ void *pCodecArg; /* First argument to xCodec() */
+ int nHash; /* Size of the pager hash table */
+ PgHdr **aHash; /* Hash table to map page number to PgHdr */
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ Pager *pNext; /* Linked list of pagers in this thread */
+#endif
+};
+
+/*
+** If SQLITE_TEST is defined then increment the variable given in
+** the argument
+*/
+#ifdef SQLITE_TEST
+# define TEST_INCR(x) x++
+#else
+# define TEST_INCR(x)
+#endif
+
+/*
+** Journal files begin with the following magic string. The data
+** was obtained from /dev/random. It is used only as a sanity check.
+**
+** Since version 2.8.0, the journal format contains additional sanity
+** checking information.  If the power fails while the journal is being
+** written, semi-random garbage data might appear in the journal
+** file after power is restored. If an attempt is then made
+** to roll the journal back, the database could be corrupted. The additional
+** sanity checking data is an attempt to discover the garbage in the
+** journal and ignore it.
+**
+** The sanity checking information for the new journal format consists
+** of a 32-bit checksum on each page of data. The checksum covers both
+** the page number and the pPager->pageSize bytes of data for the page.
+** This cksum is initialized to a 32-bit random value that appears in the
+** journal file right after the header. The random initializer is important,
+** because garbage data that appears at the end of a journal is likely
+** data that was once in other files that have now been deleted. If the
+** garbage data came from an obsolete journal file, the checksums might
+** be correct.  But by initializing the checksum to a random value which
+** is different for every journal, we minimize that risk.
+*/
+static const unsigned char aJournalMagic[] = {
+ 0xd9, 0xd5, 0x05, 0xf9, 0x20, 0xa1, 0x63, 0xd7,
+};
+
+/*
+** The size of the header and of each page in the journal is determined
+** by the following macros.
+*/
+#define JOURNAL_PG_SZ(pPager) ((pPager->pageSize) + 8)
+
+/*
+** The journal header size for this pager. In the future, this could be
+** set to some value read from the disk controller. The important
+** characteristic is that it is the same size as a disk sector.
+*/
+#define JOURNAL_HDR_SZ(pPager) (pPager->sectorSize)
+
+/*
+** The macro MEMDB is true if we are dealing with an in-memory database.
+** We do this as a macro so that if the SQLITE_OMIT_MEMORYDB macro is set,
+** the value of MEMDB will be a constant and the compiler will optimize
+** out code that would never execute.
+*/
+#ifdef SQLITE_OMIT_MEMORYDB
+# define MEMDB 0
+#else
+# define MEMDB pPager->memDb
+#endif
+
+/*
+** The default size of a disk sector
+*/
+#define PAGER_SECTOR_SIZE 512
+
+/*
+** Page number PAGER_MJ_PGNO is never used in an SQLite database (it is
+** reserved for working around a windows/posix incompatibility). It is
+** used in the journal to signify that the remainder of the journal file
+** is devoted to storing a master journal name - there are no more pages to
+** roll back. See comments for function writeMasterJournal() for details.
+*/
+/* #define PAGER_MJ_PGNO(x) (PENDING_BYTE/((x)->pageSize)) */
+#define PAGER_MJ_PGNO(x) ((PENDING_BYTE/((x)->pageSize))+1)
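+
+/* Editorial note: a rough worked example, assuming the stock definition of
+** PENDING_BYTE as 0x40000000 (1,073,741,824).  With a 1024-byte page size,
+** PAGER_MJ_PGNO evaluates to 1073741824/1024 + 1 == 1048577: the page that
+** overlaps the locking region, and which is therefore never used to store
+** real data. */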
+
+/*
+** The maximum legal page number is (2^31 - 1).
+*/
+#define PAGER_MAX_PGNO 2147483647
+
+/*
+** Enable reference count tracking (for debugging) here:
+*/
+#ifdef SQLITE_TEST
+ int pager3_refinfo_enable = 0;
+ static void pager_refinfo(PgHdr *p){
+ static int cnt = 0;
+ if( !pager3_refinfo_enable ) return;
+ sqlite3DebugPrintf(
+ "REFCNT: %4d addr=%p nRef=%d\n",
+ p->pgno, PGHDR_TO_DATA(p), p->nRef
+ );
+ cnt++; /* Something to set a breakpoint on */
+ }
+# define REFINFO(X) pager_refinfo(X)
+#else
+# define REFINFO(X)
+#endif
+
+
+/*
+** Change the size of the pager hash table to N. N must be a power
+** of two.
+*/
+static void pager_resize_hash_table(Pager *pPager, int N){
+ PgHdr **aHash, *pPg;
+ assert( N>0 && (N&(N-1))==0 );
+ aHash = sqliteMalloc( sizeof(aHash[0])*N );
+ if( aHash==0 ){
+ /* Failure to rehash is not an error. It is only a performance hit. */
+ return;
+ }
+ sqliteFree(pPager->aHash);
+ pPager->nHash = N;
+ pPager->aHash = aHash;
+ for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){
+ int h;
+ if( pPg->pgno==0 ){
+ assert( pPg->pNextHash==0 && pPg->pPrevHash==0 );
+ continue;
+ }
+ h = pPg->pgno & (N-1);
+ pPg->pNextHash = aHash[h];
+ if( aHash[h] ){
+ aHash[h]->pPrevHash = pPg;
+ }
+ aHash[h] = pPg;
+ pPg->pPrevHash = 0;
+ }
+}
+
+/*
+** Read a 32-bit integer from the given file descriptor. Store the integer
+** that is read in *pRes. Return SQLITE_OK if everything worked, or an
+** error code if something goes wrong.
+**
+** All values are stored on disk as big-endian.
+*/
+static int read32bits(OsFile *fd, u32 *pRes){
+ unsigned char ac[4];
+ int rc = sqlite3OsRead(fd, ac, sizeof(ac));
+ if( rc==SQLITE_OK ){
+ *pRes = (ac[0]<<24) | (ac[1]<<16) | (ac[2]<<8) | ac[3];
+ }
+ return rc;
+}
+
+/*
+** Write a 32-bit integer into a string buffer in big-endian byte order.
+*/
+static void put32bits(char *ac, u32 val){
+ ac[0] = (val>>24) & 0xff;
+ ac[1] = (val>>16) & 0xff;
+ ac[2] = (val>>8) & 0xff;
+ ac[3] = val & 0xff;
+}
+
+/*
+** Write a 32-bit integer into the given file descriptor. Return SQLITE_OK
+** on success or an error code if something goes wrong.
+*/
+static int write32bits(OsFile *fd, u32 val){
+ char ac[4];
+ put32bits(ac, val);
+ return sqlite3OsWrite(fd, ac, 4);
+}
+
+/*
+** Read a 32-bit integer at offset 'offset' from the page identified by
+** page header 'p'.
+*/
+static u32 retrieve32bits(PgHdr *p, int offset){
+ unsigned char *ac;
+ ac = &((unsigned char*)PGHDR_TO_DATA(p))[offset];
+ return (ac[0]<<24) | (ac[1]<<16) | (ac[2]<<8) | ac[3];
+}
+
+
+/*
+** This function should be called when an error occurs within the pager
+** code. The first argument is a pointer to the pager structure, the
+** second the error-code about to be returned by a pager API function.
+** The value returned is a copy of the second argument to this function.
+**
+** If the second argument is SQLITE_IOERR, SQLITE_CORRUPT or SQLITE_PROTOCOL,
+** the error becomes persistent. All subsequent API calls on this Pager
+** will immediately return the same error code.
+*/
+static int pager_error(Pager *pPager, int rc){
+ int rc2 = rc & 0xff;
+ assert( pPager->errCode==SQLITE_FULL || pPager->errCode==SQLITE_OK );
+ if(
+ rc2==SQLITE_FULL ||
+ rc2==SQLITE_IOERR ||
+ rc2==SQLITE_CORRUPT ||
+ rc2==SQLITE_PROTOCOL
+ ){
+ pPager->errCode = rc;
+ }
+ return rc;
+}
+
+#ifdef SQLITE_CHECK_PAGES
+/*
+** Return a 32-bit hash of the page data for pPage.
+*/
+static u32 pager_pagehash(PgHdr *pPage){
+ u32 hash = 0;
+ int i;
+ unsigned char *pData = (unsigned char *)PGHDR_TO_DATA(pPage);
+ for(i=0; i<pPage->pPager->pageSize; i++){
+ hash = (hash+i)^pData[i];
+ }
+ return hash;
+}
+
+/*
+** The CHECK_PAGE macro takes a PgHdr* as an argument. If SQLITE_CHECK_PAGES
+** is defined, and NDEBUG is not defined, an assert() statement checks
+** that the page is either dirty or still matches the calculated page-hash.
+*/
+#define CHECK_PAGE(x) checkPage(x)
+static void checkPage(PgHdr *pPg){
+ Pager *pPager = pPg->pPager;
+ assert( !pPg->pageHash || pPager->errCode || MEMDB || pPg->dirty ||
+ pPg->pageHash==pager_pagehash(pPg) );
+}
+
+#else
+#define CHECK_PAGE(x)
+#endif
+
+/*
+** When this is called the journal file for pager pPager must be open.
+** The master journal file name is read from the end of the file and
+** written into memory obtained from sqliteMalloc(). *pzMaster is
+** set to point at the memory and SQLITE_OK returned. The caller must
+** sqliteFree() *pzMaster.
+**
+** If no master journal file name is present *pzMaster is set to 0 and
+** SQLITE_OK returned.
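+**
+** Editorial note, derived from the code below: the master journal name is
+** stored as a trailer at the very end of the journal file.  The final 16
+** bytes hold a 4-byte name length N, a 4-byte checksum and the 8-byte
+** aJournalMagic[] signature; the N bytes of the name itself immediately
+** precede that trailer, at offset szJ-16-N.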
+*/
+static int readMasterJournal(OsFile *pJrnl, char **pzMaster){
+ int rc;
+ u32 len;
+ i64 szJ;
+ u32 cksum;
+ int i;
+ unsigned char aMagic[8]; /* A buffer to hold the magic header */
+
+ *pzMaster = 0;
+
+ rc = sqlite3OsFileSize(pJrnl, &szJ);
+ if( rc!=SQLITE_OK || szJ<16 ) return rc;
+
+ rc = sqlite3OsSeek(pJrnl, szJ-16);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = read32bits(pJrnl, &len);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = read32bits(pJrnl, &cksum);
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3OsRead(pJrnl, aMagic, 8);
+ if( rc!=SQLITE_OK || memcmp(aMagic, aJournalMagic, 8) ) return rc;
+
+ rc = sqlite3OsSeek(pJrnl, szJ-16-len);
+ if( rc!=SQLITE_OK ) return rc;
+
+ *pzMaster = (char *)sqliteMalloc(len+1);
+ if( !*pzMaster ){
+ return SQLITE_NOMEM;
+ }
+ rc = sqlite3OsRead(pJrnl, *pzMaster, len);
+ if( rc!=SQLITE_OK ){
+ sqliteFree(*pzMaster);
+ *pzMaster = 0;
+ return rc;
+ }
+
+ /* See if the checksum matches the master journal name */
+ for(i=0; i<len; i++){
+ cksum -= (*pzMaster)[i];
+ }
+ if( cksum ){
+ /* If the checksum doesn't add up, then one or more of the disk sectors
+    ** containing the master journal filename is corrupted.  In that case we
+    ** must roll back unconditionally, so just return SQLITE_OK and report
+    ** an empty (nul) master-journal filename.
+ */
+ sqliteFree(*pzMaster);
+ *pzMaster = 0;
+ }else{
+ (*pzMaster)[len] = '\0';
+ }
+
+ return SQLITE_OK;
+}
+
+/*
+** Seek the journal file descriptor to the next sector boundary where a
+** journal header may be read or written. Pager.journalOff is updated with
+** the new seek offset.
+**
+** i.e for a sector size of 512:
+**
+** Input Offset Output Offset
+** ---------------------------------------
+** 0 0
+** 512 512
+** 100 512
+** 2000 2048
+**
+*/
+static int seekJournalHdr(Pager *pPager){
+ i64 offset = 0;
+ i64 c = pPager->journalOff;
+ if( c ){
+ offset = ((c-1)/JOURNAL_HDR_SZ(pPager) + 1) * JOURNAL_HDR_SZ(pPager);
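+    /* Editorial note: this rounds c up to the next header-size boundary
+    ** unless it is already on one.  With a 512-byte sector, c==100 gives
+    ** ((99/512)+1)*512 == 512 and c==2000 gives ((1999/512)+1)*512 == 2048,
+    ** matching the table above. */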
+ }
+ assert( offset%JOURNAL_HDR_SZ(pPager)==0 );
+ assert( offset>=c );
+ assert( (offset-c)<JOURNAL_HDR_SZ(pPager) );
+ pPager->journalOff = offset;
+ return sqlite3OsSeek(pPager->jfd, pPager->journalOff);
+}
+
+/*
+** The journal file must be open when this routine is called. A journal
+** header (JOURNAL_HDR_SZ bytes) is written into the journal file at the
+** current location.
+**
+** The format for the journal header is as follows:
+** - 8 bytes: Magic identifying journal format.
+** - 4 bytes: Number of records in journal, or -1 if no-sync mode is on.
+** - 4 bytes: Random number used for page hash.
+** - 4 bytes: Initial database page count.
+** - 4 bytes: Sector size used by the process that wrote this journal.
+**
+** Followed by (JOURNAL_HDR_SZ - 24) bytes of unused space.
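+**
+** Editorial note: expressed as byte offsets within the header sector, the
+** magic occupies bytes 0..7, the record count 8..11, the checksum
+** initialiser 12..15, the initial page count 16..19 and the sector size
+** 20..23, for 24 meaningful bytes in total.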
+*/
+static int writeJournalHdr(Pager *pPager){
+ char zHeader[sizeof(aJournalMagic)+16];
+
+ int rc = seekJournalHdr(pPager);
+ if( rc ) return rc;
+
+ pPager->journalHdr = pPager->journalOff;
+ if( pPager->stmtHdrOff==0 ){
+ pPager->stmtHdrOff = pPager->journalHdr;
+ }
+ pPager->journalOff += JOURNAL_HDR_SZ(pPager);
+
+ /* FIX ME:
+ **
+ ** Possibly for a pager not in no-sync mode, the journal magic should not
+ ** be written until nRec is filled in as part of next syncJournal().
+ **
+ ** Actually maybe the whole journal header should be delayed until that
+ ** point. Think about this.
+ */
+ memcpy(zHeader, aJournalMagic, sizeof(aJournalMagic));
+ /* The nRec Field. 0xFFFFFFFF for no-sync journals. */
+ put32bits(&zHeader[sizeof(aJournalMagic)], pPager->noSync ? 0xffffffff : 0);
+ /* The random check-hash initialiser */
+ sqlite3Randomness(sizeof(pPager->cksumInit), &pPager->cksumInit);
+ put32bits(&zHeader[sizeof(aJournalMagic)+4], pPager->cksumInit);
+ /* The initial database size */
+ put32bits(&zHeader[sizeof(aJournalMagic)+8], pPager->dbSize);
+ /* The assumed sector size for this process */
+ put32bits(&zHeader[sizeof(aJournalMagic)+12], pPager->sectorSize);
+ rc = sqlite3OsWrite(pPager->jfd, zHeader, sizeof(zHeader));
+
+ /* The journal header has been written successfully. Seek the journal
+ ** file descriptor to the end of the journal header sector.
+ */
+ if( rc==SQLITE_OK ){
+ rc = sqlite3OsSeek(pPager->jfd, pPager->journalOff-1);
+ if( rc==SQLITE_OK ){
+ rc = sqlite3OsWrite(pPager->jfd, "\000", 1);
+ }
+ }
+ return rc;
+}
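+
+/* Illustrative sketch, not part of the pager build (hence the "#if 0"
+** guard): laying out the 24 documented journal-header bytes into a local
+** buffer with big-endian 32-bit encodes. The field values are example
+** assumptions, not values taken from a real journal.
+*/
+#if 0
+#include <string.h>
+static void examplePut32be(unsigned char *p, unsigned int v){
+  p[0] = (unsigned char)(v>>24);
+  p[1] = (unsigned char)(v>>16);
+  p[2] = (unsigned char)(v>>8);
+  p[3] = (unsigned char)(v);
+}
+static void exampleJournalHdr(unsigned char *aOut24,
+                              const unsigned char *aMagic8){
+  memcpy(aOut24, aMagic8, 8);              /* 8 bytes: journal magic    */
+  examplePut32be(&aOut24[8],  0xffffffff); /* nRec (no-sync sentinel)   */
+  examplePut32be(&aOut24[12], 0x12345678); /* random checksum seed      */
+  examplePut32be(&aOut24[16], 100);        /* initial database pages    */
+  examplePut32be(&aOut24[20], 512);        /* writer's sector size      */
+}
+#endif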
+
+/*
+** The journal file must be open when this is called. A journal header
+** (JOURNAL_HDR_SZ bytes) is read from the current location in the journal
+** file. See comments above function writeJournalHdr() for a description of
+** the journal header format.
+**
+** If the header is read successfully, *nRec is set to the number of
+** page records following this header and *dbSize is set to the size of the
+** database before the transaction began, in pages. Also, pPager->cksumInit
+** is set to the value read from the journal header. SQLITE_OK is returned
+** in this case.
+**
+** If the journal header appears to be corrupted, SQLITE_DONE is
+** returned and *nRec and *dbSize are not set. If JOURNAL_HDR_SZ bytes
+** cannot be read from the journal file an error code is returned.
+*/
+static int readJournalHdr(
+ Pager *pPager,
+ i64 journalSize,
+ u32 *pNRec,
+ u32 *pDbSize
+){
+ int rc;
+ unsigned char aMagic[8]; /* A buffer to hold the magic header */
+
+ rc = seekJournalHdr(pPager);
+ if( rc ) return rc;
+
+ if( pPager->journalOff+JOURNAL_HDR_SZ(pPager) > journalSize ){
+ return SQLITE_DONE;
+ }
+
+ rc = sqlite3OsRead(pPager->jfd, aMagic, sizeof(aMagic));
+ if( rc ) return rc;
+
+ if( memcmp(aMagic, aJournalMagic, sizeof(aMagic))!=0 ){
+ return SQLITE_DONE;
+ }
+
+ rc = read32bits(pPager->jfd, pNRec);
+ if( rc ) return rc;
+
+ rc = read32bits(pPager->jfd, &pPager->cksumInit);
+ if( rc ) return rc;
+
+ rc = read32bits(pPager->jfd, pDbSize);
+ if( rc ) return rc;
+
+ /* Update the assumed sector-size to match the value used by
+ ** the process that created this journal. If this journal was
+ ** created by a process other than this one, then this routine
+ ** is being called from within pager_playback(). The local value
+ ** of Pager.sectorSize is restored at the end of that routine.
+ */
+ rc = read32bits(pPager->jfd, (u32 *)&pPager->sectorSize);
+ if( rc ) return rc;
+
+ pPager->journalOff += JOURNAL_HDR_SZ(pPager);
+ rc = sqlite3OsSeek(pPager->jfd, pPager->journalOff);
+ return rc;
+}
+
+
+/*
+** Write the supplied master journal name into the journal file for pager
+** pPager at the current location. The master journal name must be the last
+** thing written to a journal file. If the pager is in full-sync mode, the
+** journal file descriptor is advanced to the next sector boundary before
+** anything is written. The format is:
+**
+** + 4 bytes: PAGER_MJ_PGNO.
+** + N bytes: Master journal name.
+** + 4 bytes: N, the length of the master journal name.
+** + 4 bytes: Master journal name checksum.
+** + 8 bytes: aJournalMagic[].
+**
+** The master journal page checksum is the sum of the bytes in the master
+** journal name.
+**
+** If zMaster is a NULL pointer (occurs for a single database transaction),
+** this call is a no-op.
+*/
+static int writeMasterJournal(Pager *pPager, const char *zMaster){
+ int rc;
+ int len;
+ int i;
+ u32 cksum = 0;
+ char zBuf[sizeof(aJournalMagic)+2*4];
+
+ if( !zMaster || pPager->setMaster) return SQLITE_OK;
+ pPager->setMaster = 1;
+
+ len = strlen(zMaster);
+ for(i=0; i<len; i++){
+ cksum += zMaster[i];
+ }
+
+ /* If in full-sync mode, advance to the next disk sector before writing
+ ** the master journal name. This is in case the previous page written to
+ ** the journal has already been synced.
+ */
+ if( pPager->fullSync ){
+ rc = seekJournalHdr(pPager);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+ pPager->journalOff += (len+20);
+
+ rc = write32bits(pPager->jfd, PAGER_MJ_PGNO(pPager));
+ if( rc!=SQLITE_OK ) return rc;
+
+ rc = sqlite3OsWrite(pPager->jfd, zMaster, len);
+ if( rc!=SQLITE_OK ) return rc;
+
+ put32bits(zBuf, len);
+ put32bits(&zBuf[4], cksum);
+ memcpy(&zBuf[8], aJournalMagic, sizeof(aJournalMagic));
+ rc = sqlite3OsWrite(pPager->jfd, zBuf, 8+sizeof(aJournalMagic));
+ pPager->needSync = !pPager->noSync;
+ return rc;
+}
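+
+/* Illustrative sketch, not part of the pager build (hence the "#if 0"
+** guard): the master-journal name "checksum" is a plain byte sum. The
+** writer adds up the bytes of the name; readMasterJournal() subtracts
+** each byte from the stored value and expects to end at zero.
+*/
+#if 0
+#include <string.h>
+static unsigned int exampleMasterCksum(const char *zName){
+  unsigned int cksum = 0;
+  size_t i, n = strlen(zName);
+  for(i=0; i<n; i++) cksum += (unsigned char)zName[i];
+  return cksum;
+}
+static int exampleMasterCksumOk(const char *zName, unsigned int stored){
+  size_t i, n = strlen(zName);
+  for(i=0; i<n; i++) stored -= (unsigned char)zName[i];
+  return stored==0;       /* non-zero remainder means a corrupted name */
+}
+#endif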
+
+/*
+** Add or remove a page from the list of all pages that are in the
+** statement journal.
+**
+** The Pager keeps a separate list of pages that are currently in
+** the statement journal. This helps the sqlite3pager_stmt_commit()
+** routine run MUCH faster for the common case where there are many
+** pages in memory but only a few are in the statement journal.
+*/
+static void page_add_to_stmt_list(PgHdr *pPg){
+ Pager *pPager = pPg->pPager;
+ if( pPg->inStmt ) return;
+ assert( pPg->pPrevStmt==0 && pPg->pNextStmt==0 );
+ pPg->pPrevStmt = 0;
+ if( pPager->pStmt ){
+ pPager->pStmt->pPrevStmt = pPg;
+ }
+ pPg->pNextStmt = pPager->pStmt;
+ pPager->pStmt = pPg;
+ pPg->inStmt = 1;
+}
+static void page_remove_from_stmt_list(PgHdr *pPg){
+ if( !pPg->inStmt ) return;
+ if( pPg->pPrevStmt ){
+ assert( pPg->pPrevStmt->pNextStmt==pPg );
+ pPg->pPrevStmt->pNextStmt = pPg->pNextStmt;
+ }else{
+ assert( pPg->pPager->pStmt==pPg );
+ pPg->pPager->pStmt = pPg->pNextStmt;
+ }
+ if( pPg->pNextStmt ){
+ assert( pPg->pNextStmt->pPrevStmt==pPg );
+ pPg->pNextStmt->pPrevStmt = pPg->pPrevStmt;
+ }
+ pPg->pNextStmt = 0;
+ pPg->pPrevStmt = 0;
+ pPg->inStmt = 0;
+}
+
+/*
+** Find a page in the hash table given its page number. Return
+** a pointer to the page or NULL if not found.
+*/
+static PgHdr *pager_lookup(Pager *pPager, Pgno pgno){
+ PgHdr *p;
+ if( pPager->aHash==0 ) return 0;
+ p = pPager->aHash[pgno & (pPager->nHash-1)];
+ while( p && p->pgno!=pgno ){
+ p = p->pNextHash;
+ }
+ return p;
+}
+
+/*
+** Unlock the database and clear the in-memory cache. This routine
+** sets the state of the pager back to what it was when it was first
+** opened. Any outstanding pages are invalidated and subsequent attempts
+** to access those pages will likely result in a coredump.
+*/
+static void pager_reset(Pager *pPager){
+ PgHdr *pPg, *pNext;
+ if( pPager->errCode ) return;
+ for(pPg=pPager->pAll; pPg; pPg=pNext){
+ pNext = pPg->pNextAll;
+ sqliteFree(pPg);
+ }
+ pPager->pFirst = 0;
+ pPager->pFirstSynced = 0;
+ pPager->pLast = 0;
+ pPager->pAll = 0;
+ pPager->nHash = 0;
+ sqliteFree(pPager->aHash);
+ pPager->nPage = 0;
+ pPager->aHash = 0;
+ if( pPager->state>=PAGER_RESERVED ){
+ sqlite3pager_rollback(pPager);
+ }
+ sqlite3OsUnlock(pPager->fd, NO_LOCK);
+ pPager->state = PAGER_UNLOCK;
+ pPager->dbSize = -1;
+ pPager->nRef = 0;
+ assert( pPager->journalOpen==0 );
+}
+
+/*
+** When this routine is called, the pager has the journal file open and
+** a RESERVED or EXCLUSIVE lock on the database. This routine releases
+** the database lock and acquires a SHARED lock in its place. The journal
+** file is deleted and closed.
+**
+** TODO: Consider keeping the journal file open for temporary databases.
+** This might give a performance improvement on windows where opening
+** a file is an expensive operation.
+*/
+static int pager_unwritelock(Pager *pPager){
+ PgHdr *pPg;
+ int rc;
+ assert( !MEMDB );
+ if( pPager->state<PAGER_RESERVED ){
+ return SQLITE_OK;
+ }
+ sqlite3pager_stmt_commit(pPager);
+ if( pPager->stmtOpen ){
+ sqlite3OsClose(&pPager->stfd);
+ pPager->stmtOpen = 0;
+ }
+ if( pPager->journalOpen ){
+ sqlite3OsClose(&pPager->jfd);
+ pPager->journalOpen = 0;
+ sqlite3OsDelete(pPager->zJournal);
+ sqliteFree( pPager->aInJournal );
+ pPager->aInJournal = 0;
+ for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){
+ pPg->inJournal = 0;
+ pPg->dirty = 0;
+ pPg->needSync = 0;
+#ifdef SQLITE_CHECK_PAGES
+ pPg->pageHash = pager_pagehash(pPg);
+#endif
+ }
+ pPager->pDirty = 0;
+ pPager->dirtyCache = 0;
+ pPager->nRec = 0;
+ }else{
+ assert( pPager->aInJournal==0 );
+ assert( pPager->dirtyCache==0 || pPager->useJournal==0 );
+ }
+ rc = sqlite3OsUnlock(pPager->fd, SHARED_LOCK);
+ pPager->state = PAGER_SHARED;
+ pPager->origDbSize = 0;
+ pPager->setMaster = 0;
+ pPager->needSync = 0;
+ pPager->pFirstSynced = pPager->pFirst;
+ return rc;
+}
+
+/*
+** Compute and return a checksum for the page of data.
+**
+** This is not a real checksum. It is really just the sum of the
+** random initial value and the page number. We experimented with
+** a checksum of the entire data, but that was found to be too slow.
+**
+** Note that the page number is stored at the beginning of data and
+** the checksum is stored at the end. This is important. If journal
+** corruption occurs due to a power failure, the most likely scenario
+** is that one end or the other of the record will be changed. It is
+** much less likely that the two ends of the journal record will be
+** correct and the middle be corrupt. Thus, this "checksum" scheme,
+** though fast and simple, catches the most likely kind of corruption.
+**
+** FIX ME: Consider adding every 200th (or so) byte of the data to the
+** checksum. That way if a single page spans 3 or more disk sectors and
+** only the middle sector is corrupt, we will still have a reasonable
+** chance of failing the checksum and thus detecting the problem.
+*/
+static u32 pager_cksum(Pager *pPager, const u8 *aData){
+ u32 cksum = pPager->cksumInit;
+ int i = pPager->pageSize-200;
+ while( i>0 ){
+ cksum += aData[i];
+ i -= 200;
+ }
+ return cksum;
+}
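+
+/* Illustrative sketch, not part of the pager build (hence the "#if 0"
+** guard): the same sampling pattern as pager_cksum() above, applied to a
+** caller-supplied buffer. With a 1024-byte page (an example value) the
+** bytes added are those at offsets 824, 624, 424, 224 and 24, plus the
+** random seed.
+*/
+#if 0
+static unsigned int exampleSampleCksum(unsigned int seed,
+                                       const unsigned char *aData,
+                                       int pageSize){
+  unsigned int cksum = seed;
+  int i = pageSize - 200;
+  while( i>0 ){
+    cksum += aData[i];   /* add one byte from every 200-byte stride */
+    i -= 200;
+  }
+  return cksum;
+}
+#endif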
+
+/* Forward declaration */
+static void makeClean(PgHdr*);
+
+/*
+** Read a single page from the journal file opened on file descriptor
+** jfd. Playback this one page.
+**
+** If useCksum==0 it means this journal does not use checksums. Checksums
+** are not used in statement journals because statement journals do not
+** need to survive power failures.
+*/
+static int pager_playback_one_page(Pager *pPager, OsFile *jfd, int useCksum){
+ int rc;
+ PgHdr *pPg; /* An existing page in the cache */
+ Pgno pgno; /* The page number of a page in journal */
+ u32 cksum; /* Checksum used for sanity checking */
+ u8 aData[SQLITE_MAX_PAGE_SIZE]; /* Temp storage for a page */
+
+ /* useCksum should be true for the main journal and false for
+ ** statement journals. Verify that this is always the case
+ */
+ assert( jfd == (useCksum ? pPager->jfd : pPager->stfd) );
+
+
+ rc = read32bits(jfd, &pgno);
+ if( rc!=SQLITE_OK ) return rc;
+ rc = sqlite3OsRead(jfd, &aData, pPager->pageSize);
+ if( rc!=SQLITE_OK ) return rc;
+ pPager->journalOff += pPager->pageSize + 4;
+
+ /* Sanity checking on the page. This is more important than I originally
+ ** thought. If a power failure occurs while the journal is being written,
+ ** it could cause invalid data to be written into the journal. We need to
+ ** detect this invalid data (with high probability) and ignore it.
+ */
+ if( pgno==0 || pgno==PAGER_MJ_PGNO(pPager) ){
+ return SQLITE_DONE;
+ }
+ if( pgno>(unsigned)pPager->dbSize ){
+ return SQLITE_OK;
+ }
+ if( useCksum ){
+ rc = read32bits(jfd, &cksum);
+ if( rc ) return rc;
+ pPager->journalOff += 4;
+ if( pager_cksum(pPager, aData)!=cksum ){
+ return SQLITE_DONE;
+ }
+ }
+
+ assert( pPager->state==PAGER_RESERVED || pPager->state>=PAGER_EXCLUSIVE );
+
+ /* If the pager is in RESERVED state, then there must be a copy of this
+ ** page in the pager cache. In this case just update the pager cache,
+ ** not the database file. The page is left marked dirty in this case.
+ **
+ ** If in EXCLUSIVE state, then we update the pager cache if it exists
+ ** and the main file. The page is then marked not dirty.
+ **
+ ** Ticket #1171: The statement journal might contain page content that is
+ ** different from the page content at the start of the transaction.
+ ** This occurs when a page is changed prior to the start of a statement
+ ** then changed again within the statement. When rolling back such a
+ ** statement we must not write to the original database unless we know
+ ** for certain that original page contents are in the main rollback
+ ** journal. Otherwise, if a full ROLLBACK occurs after the statement
+ ** rollback the full ROLLBACK will not restore the page to its original
+ ** content. Two conditions must be met before writing to the database
+ ** files. (1) the database must be locked. (2) we know that the original
+ ** page content is in the main journal either because the page is not in
+ ** cache or else it is marked as needSync==0.
+ */
+ pPg = pager_lookup(pPager, pgno);
+ assert( pPager->state>=PAGER_EXCLUSIVE || pPg!=0 );
+ TRACE3("PLAYBACK %d page %d\n", PAGERID(pPager), pgno);
+ if( pPager->state>=PAGER_EXCLUSIVE && (pPg==0 || pPg->needSync==0) ){
+ rc = sqlite3OsSeek(pPager->fd, (pgno-1)*(i64)pPager->pageSize);
+ if( rc==SQLITE_OK ){
+ rc = sqlite3OsWrite(pPager->fd, aData, pPager->pageSize);
+ }
+ if( pPg ){
+ makeClean(pPg);
+ }
+ }
+ if( pPg ){
+ /* No page should ever be explicitly rolled back that is in use, except
+ ** for page 1 which is held in use in order to keep the lock on the
+ ** database active. However such a page may be rolled back as a result
+ ** of an internal error resulting in an automatic call to
+ ** sqlite3pager_rollback().
+ */
+ void *pData;
+ /* assert( pPg->nRef==0 || pPg->pgno==1 ); */
+ pData = PGHDR_TO_DATA(pPg);
+ memcpy(pData, aData, pPager->pageSize);
+ if( pPager->xDestructor ){ /*** FIX ME: Should this be xReinit? ***/
+ pPager->xDestructor(pData, pPager->pageSize);
+ }
+#ifdef SQLITE_CHECK_PAGES
+ pPg->pageHash = pager_pagehash(pPg);
+#endif
+ CODEC1(pPager, pData, pPg->pgno, 3);
+ }
+ return rc;
+}
+
+/*
+** Parameter zMaster is the name of a master journal file. A single journal
+** file that referred to the master journal file has just been rolled back.
+** This routine checks if it is possible to delete the master journal file,
+** and does so if it is.
+**
+** The master journal file contains the names of all child journals.
+** To tell if a master journal can be deleted, check each of its
+** children. If every child is either missing or refers to a master
+** journal other than this one, then this master journal can be deleted.
+*/
+static int pager_delmaster(const char *zMaster){
+ int rc;
+ int master_open = 0;
+ OsFile *master = 0;
+ char *zMasterJournal = 0; /* Contents of master journal file */
+ i64 nMasterJournal; /* Size of master journal file */
+
+ /* Open the master journal file exclusively in case some other process
+ ** is running this routine also. Not that it makes too much difference.
+ */
+ rc = sqlite3OsOpenReadOnly(zMaster, &master);
+ if( rc!=SQLITE_OK ) goto delmaster_out;
+ master_open = 1;
+ rc = sqlite3OsFileSize(master, &nMasterJournal);
+ if( rc!=SQLITE_OK ) goto delmaster_out;
+
+ if( nMasterJournal>0 ){
+ char *zJournal;
+ char *zMasterPtr = 0;
+
+ /* Load the entire master journal file into space obtained from
+ ** sqliteMalloc() and pointed to by zMasterJournal.
+ */
+ zMasterJournal = (char *)sqliteMalloc(nMasterJournal);
+ if( !zMasterJournal ){
+ rc = SQLITE_NOMEM;
+ goto delmaster_out;
+ }
+ rc = sqlite3OsRead(master, zMasterJournal, nMasterJournal);
+ if( rc!=SQLITE_OK ) goto delmaster_out;
+
+ zJournal = zMasterJournal;
+ while( (zJournal-zMasterJournal)<nMasterJournal ){
+ if( sqlite3OsFileExists(zJournal) ){
+ /* One of the journals pointed to by the master journal exists.
+ ** Open it and check if it points at the master journal. If
+ ** so, return without deleting the master journal file.
+ */
+ OsFile *journal = 0;
+ int c;
+
+ rc = sqlite3OsOpenReadOnly(zJournal, &journal);
+ if( rc!=SQLITE_OK ){
+ goto delmaster_out;
+ }
+
+ rc = readMasterJournal(journal, &zMasterPtr);
+ sqlite3OsClose(&journal);
+ if( rc!=SQLITE_OK ){
+ goto delmaster_out;
+ }
+
+ c = zMasterPtr!=0 && strcmp(zMasterPtr, zMaster)==0;
+ sqliteFree(zMasterPtr);
+ if( c ){
+ /* We have a match. Do not delete the master journal file. */
+ goto delmaster_out;
+ }
+ }
+ zJournal += (strlen(zJournal)+1);
+ }
+ }
+
+ sqlite3OsDelete(zMaster);
+
+delmaster_out:
+ if( zMasterJournal ){
+ sqliteFree(zMasterJournal);
+ }
+ if( master_open ){
+ sqlite3OsClose(&master);
+ }
+ return rc;
+}
+
+/*
+** Make every page in the cache agree with what is on disk. In other words,
+** reread the disk to reset the state of the cache.
+**
+** This routine is called after a rollback in which some of the dirty cache
+** pages had never been written out to disk. We need to roll back the
+** cache content and the easiest way to do that is to reread the old content
+** back from the disk.
+*/
+static int pager_reload_cache(Pager *pPager){
+ PgHdr *pPg;
+ int rc = SQLITE_OK;
+ for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){
+ char zBuf[SQLITE_MAX_PAGE_SIZE];
+ if( !pPg->dirty ) continue;
+ if( (int)pPg->pgno <= pPager->origDbSize ){
+ rc = sqlite3OsSeek(pPager->fd, pPager->pageSize*(i64)(pPg->pgno-1));
+ if( rc==SQLITE_OK ){
+ rc = sqlite3OsRead(pPager->fd, zBuf, pPager->pageSize);
+ }
+ TRACE3("REFETCH %d page %d\n", PAGERID(pPager), pPg->pgno);
+ if( rc ) break;
+ CODEC1(pPager, zBuf, pPg->pgno, 2);
+ }else{
+ memset(zBuf, 0, pPager->pageSize);
+ }
+ if( pPg->nRef==0 || memcmp(zBuf, PGHDR_TO_DATA(pPg), pPager->pageSize) ){
+ memcpy(PGHDR_TO_DATA(pPg), zBuf, pPager->pageSize);
+ if( pPager->xReiniter ){
+ pPager->xReiniter(PGHDR_TO_DATA(pPg), pPager->pageSize);
+ }else{
+ memset(PGHDR_TO_EXTRA(pPg, pPager), 0, pPager->nExtra);
+ }
+ }
+ pPg->needSync = 0;
+ pPg->dirty = 0;
+#ifdef SQLITE_CHECK_PAGES
+ pPg->pageHash = pager_pagehash(pPg);
+#endif
+ }
+ pPager->pDirty = 0;
+ return rc;
+}
+
+/*
+** Truncate the main file of the given pager to the number of pages
+** indicated.
+*/
+static int pager_truncate(Pager *pPager, int nPage){
+ assert( pPager->state>=PAGER_EXCLUSIVE );
+ return sqlite3OsTruncate(pPager->fd, pPager->pageSize*(i64)nPage);
+}
+
+/*
+** Playback the journal and thus restore the database file to
+** the state it was in before we started making changes.
+**
+** The journal file format is as follows:
+**
+** (1) 8 byte prefix. A copy of aJournalMagic[].
+** (2) 4 byte big-endian integer which is the number of valid page records
+** in the journal. If this value is 0xffffffff, then compute the
+** number of page records from the journal size.
+** (3) 4 byte big-endian integer which is the initial value for the
+** sanity checksum.
+** (4) 4 byte integer which is the number of pages to truncate the
+** database to during a rollback.
+** (5) 4 byte integer which is the number of bytes in the master journal
+** name. The value may be zero (indicating that there is no master
+** journal).
+** (6) N bytes of the master journal name. The name will be nul-terminated
+** and might be shorter than the value read from (5). If the first byte
+** of the name is \000 then there is no master journal. The master
+** journal name is stored in UTF-8.
+** (7) Zero or more page instances, each as follows:
+** + 4 byte page number.
+** + pPager->pageSize bytes of data.
+** + 4 byte checksum
+**
+** When we speak of the journal header, we mean the first 6 items above.
+** Each entry in the journal is an instance of the 7th item.
+**
+** Call the value from the second bullet "nRec". nRec is the number of
+** valid page entries in the journal. In most cases, you can compute the
+** value of nRec from the size of the journal file. But if a power
+** failure occurred while the journal was being written, it could be the
+** case that the size of the journal file had already been increased but
+** the extra entries had not yet made it safely to disk. In such a case,
+** the value of nRec computed from the file size would be too large. For
+** that reason, we always use the nRec value in the header.
+**
+** If the nRec value is 0xffffffff it means that nRec should be computed
+** from the file size. This value is used when the user selects the
+** no-sync option for the journal. A power failure could lead to corruption
+** in this case. But for things like temporary tables (which will be
+** deleted when the power is restored) we don't care.
+**
+** If the file opened as the journal file is not a well-formed
+** journal file then all pages up to the first corrupted page are rolled
+** back (or no pages if the journal header is corrupted). The journal file
+** is then deleted and SQLITE_OK returned, just as if no corruption had
+** been encountered.
+**
+** If an I/O or malloc() error occurs, the journal-file is not deleted
+** and an error code is returned.
+*/
+static int pager_playback(Pager *pPager){
+ i64 szJ; /* Size of the journal file in bytes */
+ u32 nRec; /* Number of Records in the journal */
+ int i; /* Loop counter */
+ Pgno mxPg = 0; /* Size of the original file in pages */
+ int rc; /* Result code of a subroutine */
+ char *zMaster = 0; /* Name of master journal file if any */
+
+ /* Figure out how many records are in the journal. Abort early if
+ ** the journal is empty.
+ */
+ assert( pPager->journalOpen );
+ rc = sqlite3OsFileSize(pPager->jfd, &szJ);
+ if( rc!=SQLITE_OK ){
+ goto end_playback;
+ }
+
+ /* Read the master journal name from the journal, if it is present.
+ ** If a master journal file name is specified, but the file is not
+ ** present on disk, then the journal is not hot and does not need to be
+ ** played back.
+ */
+ rc = readMasterJournal(pPager->jfd, &zMaster);
+ assert( rc!=SQLITE_DONE );
+ if( rc!=SQLITE_OK || (zMaster && !sqlite3OsFileExists(zMaster)) ){
+ sqliteFree(zMaster);
+ zMaster = 0;
+ if( rc==SQLITE_DONE ) rc = SQLITE_OK;
+ goto end_playback;
+ }
+ sqlite3OsSeek(pPager->jfd, 0);
+ pPager->journalOff = 0;
+
+ /* This loop terminates either when the readJournalHdr() call returns
+ ** SQLITE_DONE or an IO error occurs. */
+ while( 1 ){
+
+ /* Read the next journal header from the journal file. If there are
+ ** not enough bytes left in the journal file for a complete header, or
+ ** it is corrupted, then a process must have failed while writing it.
+ ** This indicates nothing more needs to be rolled back.
+ */
+ rc = readJournalHdr(pPager, szJ, &nRec, &mxPg);
+ if( rc!=SQLITE_OK ){
+ if( rc==SQLITE_DONE ){
+ rc = SQLITE_OK;
+ }
+ goto end_playback;
+ }
+
+ /* If nRec is 0xffffffff, then this journal was created by a process
+ ** working in no-sync mode. This means that the rest of the journal
+ ** file consists of pages, there are no more journal headers. Compute
+ ** the value of nRec based on this assumption.
+ */
+ if( nRec==0xffffffff ){
+ assert( pPager->journalOff==JOURNAL_HDR_SZ(pPager) );
+ nRec = (szJ - JOURNAL_HDR_SZ(pPager))/JOURNAL_PG_SZ(pPager);
+ }
+
+ /* If this is the first header read from the journal, truncate the
+ ** database file back to its original size.
+ */
+ if( pPager->state>=PAGER_EXCLUSIVE &&
+ pPager->journalOff==JOURNAL_HDR_SZ(pPager) ){
+ assert( pPager->origDbSize==0 || pPager->origDbSize==mxPg );
+ rc = pager_truncate(pPager, mxPg);
+ if( rc!=SQLITE_OK ){
+ goto end_playback;
+ }
+ pPager->dbSize = mxPg;
+ }
+
+ /* Copy original pages out of the journal and back into the database file.
+ */
+ for(i=0; i<nRec; i++){
+ rc = pager_playback_one_page(pPager, pPager->jfd, 1);
+ if( rc!=SQLITE_OK ){
+ if( rc==SQLITE_DONE ){
+ rc = SQLITE_OK;
+ pPager->journalOff = szJ;
+ break;
+ }else{
+ /* If we are unable to rollback a hot journal, then the database
+ ** is probably not recoverable. Return CORRUPT.
+ */
+ rc = SQLITE_CORRUPT;
+ goto end_playback;
+ }
+ }
+ }
+ }
+ /*NOTREACHED*/
+ assert( 0 );
+
+end_playback:
+ if( rc==SQLITE_OK ){
+ rc = pager_unwritelock(pPager);
+ }
+ if( zMaster ){
+ /* If there was a master journal and this routine will return success,
+ ** see if it is possible to delete the master journal.
+ */
+ if( rc==SQLITE_OK ){
+ rc = pager_delmaster(zMaster);
+ }
+ sqliteFree(zMaster);
+ }
+
+ /* The Pager.sectorSize variable may have been updated while rolling
+ ** back a journal created by a process with a different PAGER_SECTOR_SIZE
+ ** value. Reset it to the correct value for this process.
+ */
+ pPager->sectorSize = PAGER_SECTOR_SIZE;
+ return rc;
+}
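+
+/* Illustrative sketch, not part of the pager build (hence the "#if 0"
+** guard): when a journal header stores nRec==0xffffffff, the record
+** count is derived from the file size instead. The 512-byte header,
+** 1024-byte page and 8 bytes of per-record overhead (page number plus
+** checksum) used in the example below are assumed values.
+*/
+#if 0
+static int exampleNRecFromSize(long long szJ, int hdrSz, int pageSize){
+  /* e.g. exampleNRecFromSize(4640, 512, 1024) == (4640-512)/1032 == 4 */
+  return (int)((szJ - hdrSz) / (pageSize + 8));
+}
+#endif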
+
+/*
+** Playback the statement journal.
+**
+** This is similar to playing back the transaction journal but with
+** a few extra twists.
+**
+** (1) The number of pages in the database file at the start of
+** the statement is stored in pPager->stmtSize, not in the
+** journal file itself.
+**
+** (2) In addition to playing back the statement journal, also
+** playback all pages of the transaction journal beginning
+** at offset pPager->stmtJSize.
+*/
+static int pager_stmt_playback(Pager *pPager){
+ i64 szJ; /* Size of the full journal */
+ i64 hdrOff;
+ int nRec; /* Number of Records */
+ int i; /* Loop counter */
+ int rc;
+
+ szJ = pPager->journalOff;
+#ifndef NDEBUG
+ {
+ i64 os_szJ;
+ rc = sqlite3OsFileSize(pPager->jfd, &os_szJ);
+ if( rc!=SQLITE_OK ) return rc;
+ assert( szJ==os_szJ );
+ }
+#endif
+
+ /* Set hdrOff to be the offset of the first journal header written during
+ ** this statement transaction, or the end of the file if no journal
+ ** header was written.
+ */
+ hdrOff = pPager->stmtHdrOff;
+ assert( pPager->fullSync || !hdrOff );
+ if( !hdrOff ){
+ hdrOff = szJ;
+ }
+
+ /* Truncate the database back to its original size.
+ */
+ if( pPager->state>=PAGER_EXCLUSIVE ){
+ rc = pager_truncate(pPager, pPager->stmtSize);
+ }
+ pPager->dbSize = pPager->stmtSize;
+
+ /* Figure out how many records are in the statement journal.
+ */
+ assert( pPager->stmtInUse && pPager->journalOpen );
+ sqlite3OsSeek(pPager->stfd, 0);
+ nRec = pPager->stmtNRec;
+
+ /* Copy original pages out of the statement journal and back into the
+ ** database file. Note that the statement journal omits checksums from
+ ** each record since power-failure recovery is not important to statement
+ ** journals.
+ */
+ for(i=nRec-1; i>=0; i--){
+ rc = pager_playback_one_page(pPager, pPager->stfd, 0);
+ assert( rc!=SQLITE_DONE );
+ if( rc!=SQLITE_OK ) goto end_stmt_playback;
+ }
+
+ /* Now roll some pages back from the transaction journal. Pager.stmtJSize
+ ** was the size of the journal file when this statement was started, so
+ ** everything after that needs to be rolled back, either into the
+ ** database, the memory cache, or both.
+ **
+ ** If it is not zero, then Pager.stmtHdrOff is the offset to the start
+ ** of the first journal header written during this statement transaction.
+ */
+ rc = sqlite3OsSeek(pPager->jfd, pPager->stmtJSize);
+ if( rc!=SQLITE_OK ){
+ goto end_stmt_playback;
+ }
+ pPager->journalOff = pPager->stmtJSize;
+ pPager->cksumInit = pPager->stmtCksum;
+ assert( JOURNAL_HDR_SZ(pPager)<(pPager->pageSize+8) );
+ while( pPager->journalOff <= (hdrOff-(pPager->pageSize+8)) ){
+ rc = pager_playback_one_page(pPager, pPager->jfd, 1);
+ assert( rc!=SQLITE_DONE );
+ if( rc!=SQLITE_OK ) goto end_stmt_playback;
+ }
+
+ while( pPager->journalOff < szJ ){
+ u32 nJRec; /* Number of Journal Records */
+ u32 dummy;
+ rc = readJournalHdr(pPager, szJ, &nJRec, &dummy);
+ if( rc!=SQLITE_OK ){
+ assert( rc!=SQLITE_DONE );
+ goto end_stmt_playback;
+ }
+ if( nJRec==0 ){
+ nJRec = (szJ - pPager->journalOff) / (pPager->pageSize+8);
+ }
+ for(i=nJRec-1; i>=0 && pPager->journalOff < szJ; i--){
+ rc = pager_playback_one_page(pPager, pPager->jfd, 1);
+ assert( rc!=SQLITE_DONE );
+ if( rc!=SQLITE_OK ) goto end_stmt_playback;
+ }
+ }
+
+ pPager->journalOff = szJ;
+
+end_stmt_playback:
+ if( rc==SQLITE_OK) {
+ pPager->journalOff = szJ;
+ /* pager_reload_cache(pPager); */
+ }
+ return rc;
+}
+
+/*
+** Change the maximum number of in-memory pages that are allowed.
+*/
+void sqlite3pager_set_cachesize(Pager *pPager, int mxPage){
+ if( mxPage>10 ){
+ pPager->mxPage = mxPage;
+ }else{
+ pPager->mxPage = 10;
+ }
+}
+
+/*
+** Adjust the robustness of the database to damage due to OS crashes
+** or power failures by changing the number of sync() calls made when writing
+** the rollback journal. There are three levels:
+**
+** OFF sqlite3OsSync() is never called. This is the default
+** for temporary and transient files.
+**
+** NORMAL The journal is synced once before writes begin on the
+** database. This is normally adequate protection, but
+** it is theoretically possible, though very unlikely,
+** that an inopportune power failure could leave the journal
+** in a state which would cause damage to the database
+** when it is rolled back.
+**
+** FULL The journal is synced twice before writes begin on the
+** database (with some additional information - the nRec field
+** of the journal header - being written in between the two
+** syncs). If we assume that writing a
+** single disk sector is atomic, then this mode provides
+** assurance that the journal will not be corrupted to the
+** point of causing damage to the database during rollback.
+**
+** Numeric values associated with these states are OFF==1, NORMAL==2,
+** and FULL==3.
+*/
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+void sqlite3pager_set_safety_level(Pager *pPager, int level, int full_fsync){
+ pPager->noSync = level==1 || pPager->tempFile;
+ pPager->fullSync = level==3 && !pPager->tempFile;
+ pPager->full_fsync = full_fsync;
+ if( pPager->noSync ) pPager->needSync = 0;
+}
+#endif
+
+/*
+** The following global variable is incremented whenever the library
+** attempts to open a temporary file. This information is used for
+** testing and analysis only.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_opentemp_count = 0;
+#endif
+
+/*
+** Open a temporary file. Write the name of the file into zFile
+** (zFile must be at least SQLITE_TEMPNAME_SIZE bytes long.) Write
+** the file descriptor into *pFd. Return SQLITE_OK on success or some
+** other error code if we fail.
+**
+** The OS will automatically delete the temporary file when it is
+** closed.
+*/
+static int sqlite3pager_opentemp(char *zFile, OsFile **pFd){
+ int cnt = 8;
+ int rc;
+#ifdef SQLITE_TEST
+ sqlite3_opentemp_count++; /* Used for testing and analysis only */
+#endif
+ do{
+ cnt--;
+ sqlite3OsTempFileName(zFile);
+ rc = sqlite3OsOpenExclusive(zFile, pFd, 1);
+ }while( cnt>0 && rc!=SQLITE_OK && rc!=SQLITE_NOMEM );
+ return rc;
+}
+
+/*
+** Create a new page cache and put a pointer to the page cache in *ppPager.
+** The file to be cached need not exist. The file is not locked until
+** the first call to sqlite3pager_get() and is only held open until the
+** last page is released using sqlite3pager_unref().
+**
+** If zFilename is NULL then a randomly-named temporary file is created
+** and used as the file to be cached. The file will be deleted
+** automatically when it is closed.
+**
+** If zFilename is ":memory:" then all information is held in cache.
+** It is never written to disk. This can be used to implement an
+** in-memory database.
+*/
+int sqlite3pager_open(
+ Pager **ppPager, /* Return the Pager structure here */
+ const char *zFilename, /* Name of the database file to open */
+ int nExtra, /* Extra bytes appended to each in-memory page */
+ int flags /* flags controlling this file */
+){
+ Pager *pPager = 0;
+ char *zFullPathname = 0;
+ int nameLen; /* Compiler is wrong. This is always initialized before use */
+ OsFile *fd;
+ int rc = SQLITE_OK;
+ int i;
+ int tempFile = 0;
+ int memDb = 0;
+ int readOnly = 0;
+ int useJournal = (flags & PAGER_OMIT_JOURNAL)==0;
+ int noReadlock = (flags & PAGER_NO_READLOCK)!=0;
+ char zTemp[SQLITE_TEMPNAME_SIZE];
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ /* A malloc() cannot fail in sqlite3ThreadData() as one or more calls to
+ ** malloc() must have already been made by this thread before it gets
+ ** to this point. This means the ThreadData must have been allocated already
+ ** so that ThreadData.nAlloc can be set. It would be nice to assert
+ ** that ThreadData.nAlloc is non-zero, but alas this breaks test cases
+ ** written to invoke the pager directly.
+ */
+ ThreadData *pTsd = sqlite3ThreadData();
+ assert( pTsd );
+#endif
+
+ /* If malloc() has already failed return SQLITE_NOMEM. Before even
+ ** testing for this, set *ppPager to NULL so the caller knows the pager
+ ** structure was never allocated.
+ */
+ *ppPager = 0;
+ if( sqlite3MallocFailed() ){
+ return SQLITE_NOMEM;
+ }
+ memset(&fd, 0, sizeof(fd));
+
+ /* Open the pager file and set zFullPathname to point at malloc()ed
+ ** memory containing the complete filename (i.e. including the directory).
+ */
+ if( zFilename && zFilename[0] ){
+#ifndef SQLITE_OMIT_MEMORYDB
+ if( strcmp(zFilename,":memory:")==0 ){
+ memDb = 1;
+ zFullPathname = sqliteStrDup("");
+ }else
+#endif
+ {
+ zFullPathname = sqlite3OsFullPathname(zFilename);
+ if( zFullPathname ){
+ rc = sqlite3OsOpenReadWrite(zFullPathname, &fd, &readOnly);
+ }
+ }
+ }else{
+ rc = sqlite3pager_opentemp(zTemp, &fd);
+ zFilename = zTemp;
+ zFullPathname = sqlite3OsFullPathname(zFilename);
+ if( rc==SQLITE_OK ){
+ tempFile = 1;
+ }
+ }
+
+ /* Allocate the Pager structure. As part of the same allocation, allocate
+ ** space for the full paths of the file, directory and journal
+ ** (Pager.zFilename, Pager.zDirectory and Pager.zJournal).
+ */
+ if( zFullPathname ){
+ nameLen = strlen(zFullPathname);
+ pPager = sqliteMalloc( sizeof(*pPager) + nameLen*3 + 30 );
+ }
+
+ /* If an error occurred in either of the blocks above, free the memory
+ ** pointed to by zFullPathname, free the Pager structure and close the
+ ** file. Since the pager is not allocated there is no need to set
+ ** the Pager.errCode variable.
+ */
+ if( !pPager || !zFullPathname || rc!=SQLITE_OK ){
+ sqlite3OsClose(&fd);
+ sqliteFree(zFullPathname);
+ sqliteFree(pPager);
+ return ((rc==SQLITE_OK)?SQLITE_NOMEM:rc);
+ }
+
+ TRACE3("OPEN %d %s\n", FILEHANDLEID(fd), zFullPathname);
+ pPager->zFilename = (char*)&pPager[1];
+ pPager->zDirectory = &pPager->zFilename[nameLen+1];
+ pPager->zJournal = &pPager->zDirectory[nameLen+1];
+ strcpy(pPager->zFilename, zFullPathname);
+ strcpy(pPager->zDirectory, zFullPathname);
+
+ for(i=nameLen; i>0 && pPager->zDirectory[i-1]!='/'; i--){}
+ if( i>0 ) pPager->zDirectory[i-1] = 0;
+ strcpy(pPager->zJournal, zFullPathname);
+ sqliteFree(zFullPathname);
+ strcpy(&pPager->zJournal[nameLen], "-journal");
+ pPager->fd = fd;
+ /* pPager->journalOpen = 0; */
+ pPager->useJournal = useJournal && !memDb;
+ pPager->noReadlock = noReadlock && readOnly;
+ /* pPager->stmtOpen = 0; */
+ /* pPager->stmtInUse = 0; */
+ /* pPager->nRef = 0; */
+ pPager->dbSize = memDb-1;
+ pPager->pageSize = SQLITE_DEFAULT_PAGE_SIZE;
+ /* pPager->stmtSize = 0; */
+ /* pPager->stmtJSize = 0; */
+ /* pPager->nPage = 0; */
+ /* pPager->nMaxPage = 0; */
+ pPager->mxPage = 100;
+ assert( PAGER_UNLOCK==0 );
+ /* pPager->state = PAGER_UNLOCK; */
+ /* pPager->errMask = 0; */
+ pPager->tempFile = tempFile;
+ pPager->memDb = memDb;
+ pPager->readOnly = readOnly;
+ /* pPager->needSync = 0; */
+ pPager->noSync = pPager->tempFile || !useJournal;
+ pPager->fullSync = (pPager->noSync?0:1);
+ /* pPager->pFirst = 0; */
+ /* pPager->pFirstSynced = 0; */
+ /* pPager->pLast = 0; */
+ pPager->nExtra = FORCE_ALIGNMENT(nExtra);
+ pPager->sectorSize = PAGER_SECTOR_SIZE;
+ /* pPager->pBusyHandler = 0; */
+ /* memset(pPager->aHash, 0, sizeof(pPager->aHash)); */
+ *ppPager = pPager;
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ pPager->pNext = pTsd->pPager;
+ pTsd->pPager = pPager;
+#endif
+ return SQLITE_OK;
+}
+
+/*
+** Set the busy handler function.
+*/
+void sqlite3pager_set_busyhandler(Pager *pPager, BusyHandler *pBusyHandler){
+ pPager->pBusyHandler = pBusyHandler;
+}
+
+/*
+** Set the destructor for this pager. If not NULL, the destructor is called
+** when the reference count on each page reaches zero. The destructor can
+** be used to clean up information in the extra segment appended to each page.
+**
+** The destructor is not called as a result sqlite3pager_close().
+** Destructors are only called by sqlite3pager_unref().
+*/
+void sqlite3pager_set_destructor(Pager *pPager, void (*xDesc)(void*,int)){
+ pPager->xDestructor = xDesc;
+}
+
+/*
+** Set the reinitializer for this pager. If not NULL, the reinitializer
+** is called when the content of a page in cache is restored to its original
+** value as a result of a rollback. The callback gives higher-level code
+** an opportunity to restore the EXTRA section to agree with the restored
+** page data.
+*/
+void sqlite3pager_set_reiniter(Pager *pPager, void (*xReinit)(void*,int)){
+ pPager->xReiniter = xReinit;
+}
+
+/*
+** Set the page size. Return the new size. If the suggested new page
+** size is inappropriate, then an alternative page size is selected
+** and returned.
+*/
+int sqlite3pager_set_pagesize(Pager *pPager, int pageSize){
+ assert( pageSize>=512 && pageSize<=SQLITE_MAX_PAGE_SIZE );
+ if( !pPager->memDb ){
+ pPager->pageSize = pageSize;
+ }
+ return pPager->pageSize;
+}
+
+/*
+** The following set of routines are used to disable the simulated
+** I/O error mechanism. These routines are used to avoid simulated
+** errors in places where we do not care about errors.
+**
+** Unless -DSQLITE_TEST=1 is used, these routines are all no-ops
+** and generate no code.
+*/
+#ifdef SQLITE_TEST
+extern int sqlite3_io_error_pending;
+extern int sqlite3_io_error_hit;
+static int saved_cnt;
+void clear_simulated_io_error(){
+ sqlite3_io_error_hit = 0;
+}
+void disable_simulated_io_errors(void){
+ saved_cnt = sqlite3_io_error_pending;
+ sqlite3_io_error_pending = -1;
+}
+void enable_simulated_io_errors(void){
+ sqlite3_io_error_pending = saved_cnt;
+}
+#else
+# define clear_simulated_io_error()
+# define disable_simulated_io_errors()
+# define enable_simulated_io_errors()
+#endif
+
+/*
+** Read the first N bytes from the beginning of the file into memory
+** that pDest points to.
+**
+** No error checking is done. The rationale for this is that this function
+** may be called even if the file does not exist or contain a header. In
+** these cases sqlite3OsRead() will return an error, to which the correct
+** response is to zero the memory at pDest and continue. A real IO error
+** will presumably recur and be picked up later (Todo: Think about this).
+*/
+void sqlite3pager_read_fileheader(Pager *pPager, int N, unsigned char *pDest){
+ memset(pDest, 0, N);
+ if( MEMDB==0 ){
+ disable_simulated_io_errors();
+ sqlite3OsSeek(pPager->fd, 0);
+ sqlite3OsRead(pPager->fd, pDest, N);
+ enable_simulated_io_errors();
+ }
+}
+
+/*
+** Return the total number of pages in the disk file associated with
+** pPager.
+**
+** If the PENDING_BYTE lies on the page directly after the end of the
+** file, then consider this page part of the file too. For example, if
+** PENDING_BYTE is byte 4096 (the first byte of page 5) and the size of the
+** file is 4096 bytes, 5 is returned instead of 4.
+*/
+int sqlite3pager_pagecount(Pager *pPager){
+ i64 n;
+ int rc;
+ assert( pPager!=0 );
+ if( pPager->dbSize>=0 ){
+ n = pPager->dbSize;
+ } else {
+ if( (rc = sqlite3OsFileSize(pPager->fd, &n))!=SQLITE_OK ){
+ pager_error(pPager, rc);
+ return 0;
+ }
+ if( n>0 && n<pPager->pageSize ){
+ n = 1;
+ }else{
+ n /= pPager->pageSize;
+ }
+ if( pPager->state!=PAGER_UNLOCK ){
+ pPager->dbSize = n;
+ }
+ }
+ if( n==(PENDING_BYTE/pPager->pageSize) ){
+ n++;
+ }
+ return n;
+}
+
+
+#ifndef SQLITE_OMIT_MEMORYDB
+/*
+** Clear a PgHistory block
+*/
+static void clearHistory(PgHistory *pHist){
+ sqliteFree(pHist->pOrig);
+ sqliteFree(pHist->pStmt);
+ pHist->pOrig = 0;
+ pHist->pStmt = 0;
+}
+#else
+#define clearHistory(x)
+#endif
+
+/*
+** Forward declaration
+*/
+static int syncJournal(Pager*);
+
+/*
+** Unlink pPg from its hash chain. Also set the page number to 0 to indicate
+** that the page is not part of any hash chain. This is required because the
+** sqlite3pager_movepage() routine can leave a page in the
+** pNextFree/pPrevFree list that is not a part of any hash-chain.
+*/
+static void unlinkHashChain(Pager *pPager, PgHdr *pPg){
+ if( pPg->pgno==0 ){
+ assert( pPg->pNextHash==0 && pPg->pPrevHash==0 );
+ return;
+ }
+ if( pPg->pNextHash ){
+ pPg->pNextHash->pPrevHash = pPg->pPrevHash;
+ }
+ if( pPg->pPrevHash ){
+ assert( pPager->aHash[pPg->pgno & (pPager->nHash-1)]!=pPg );
+ pPg->pPrevHash->pNextHash = pPg->pNextHash;
+ }else{
+ int h = pPg->pgno & (pPager->nHash-1);
+ pPager->aHash[h] = pPg->pNextHash;
+ }
+ if( MEMDB ){
+ clearHistory(PGHDR_TO_HIST(pPg, pPager));
+ }
+ pPg->pgno = 0;
+ pPg->pNextHash = pPg->pPrevHash = 0;
+}
+
+/*
+** Unlink a page from the free list (the list of all pages where nRef==0)
+** and from its hash collision chain.
+*/
+static void unlinkPage(PgHdr *pPg){
+ Pager *pPager = pPg->pPager;
+
+ /* Keep the pFirstSynced pointer pointing at the first synchronized page */
+ if( pPg==pPager->pFirstSynced ){
+ PgHdr *p = pPg->pNextFree;
+ while( p && p->needSync ){ p = p->pNextFree; }
+ pPager->pFirstSynced = p;
+ }
+
+ /* Unlink from the freelist */
+ if( pPg->pPrevFree ){
+ pPg->pPrevFree->pNextFree = pPg->pNextFree;
+ }else{
+ assert( pPager->pFirst==pPg );
+ pPager->pFirst = pPg->pNextFree;
+ }
+ if( pPg->pNextFree ){
+ pPg->pNextFree->pPrevFree = pPg->pPrevFree;
+ }else{
+ assert( pPager->pLast==pPg );
+ pPager->pLast = pPg->pPrevFree;
+ }
+ pPg->pNextFree = pPg->pPrevFree = 0;
+
+ /* Unlink from the pgno hash table */
+ unlinkHashChain(pPager, pPg);
+}
+
+#ifndef SQLITE_OMIT_MEMORYDB
+/*
+** This routine is used to truncate an in-memory database. Delete
+** all unreferenced pages whose pgno is larger than pPager->dbSize.
+** Referenced pages larger than pPager->dbSize are zeroed.
+*/
+static void memoryTruncate(Pager *pPager){
+ PgHdr *pPg;
+ PgHdr **ppPg;
+ int dbSize = pPager->dbSize;
+
+ ppPg = &pPager->pAll;
+ while( (pPg = *ppPg)!=0 ){
+ if( pPg->pgno<=dbSize ){
+ ppPg = &pPg->pNextAll;
+ }else if( pPg->nRef>0 ){
+ memset(PGHDR_TO_DATA(pPg), 0, pPager->pageSize);
+ ppPg = &pPg->pNextAll;
+ }else{
+ *ppPg = pPg->pNextAll;
+ unlinkPage(pPg);
+ makeClean(pPg);
+ sqliteFree(pPg);
+ pPager->nPage--;
+ }
+ }
+}
+#else
+#define memoryTruncate(p)
+#endif
+
+/*
+** Try to obtain a lock on a file. Invoke the busy callback if the lock
+** is currently not available. Repeat until the busy callback returns
+** false or until the lock succeeds.
+**
+** Return SQLITE_OK on success and an error code if we cannot obtain
+** the lock.
+*/
+static int pager_wait_on_lock(Pager *pPager, int locktype){
+ int rc;
+ assert( PAGER_SHARED==SHARED_LOCK );
+ assert( PAGER_RESERVED==RESERVED_LOCK );
+ assert( PAGER_EXCLUSIVE==EXCLUSIVE_LOCK );
+ if( pPager->state>=locktype ){
+ rc = SQLITE_OK;
+ }else{
+ do {
+ rc = sqlite3OsLock(pPager->fd, locktype);
+ }while( rc==SQLITE_BUSY && sqlite3InvokeBusyHandler(pPager->pBusyHandler) );
+ if( rc==SQLITE_OK ){
+ pPager->state = locktype;
+ }
+ }
+ return rc;
+}
+
+/*
+** Truncate the file to the number of pages specified.
+*/
+int sqlite3pager_truncate(Pager *pPager, Pgno nPage){
+ int rc;
+ sqlite3pager_pagecount(pPager);
+ if( pPager->errCode ){
+ rc = pPager->errCode;
+ return rc;
+ }
+ if( nPage>=(unsigned)pPager->dbSize ){
+ return SQLITE_OK;
+ }
+ if( MEMDB ){
+ pPager->dbSize = nPage;
+ memoryTruncate(pPager);
+ return SQLITE_OK;
+ }
+ rc = syncJournal(pPager);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ /* Get an exclusive lock on the database before truncating. */
+ rc = pager_wait_on_lock(pPager, EXCLUSIVE_LOCK);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ rc = pager_truncate(pPager, nPage);
+ if( rc==SQLITE_OK ){
+ pPager->dbSize = nPage;
+ }
+ return rc;
+}
+
+/*
+** Shutdown the page cache. Free all memory and close all files.
+**
+** If a transaction was in progress when this routine is called, that
+** transaction is rolled back. All outstanding pages are invalidated
+** and their memory is freed. Any attempt to use a page associated
+** with this page cache after this function returns will likely
+** result in a coredump.
+**
+** This function always succeeds. If a transaction is active an attempt
+** is made to roll it back. If an error occurs during the rollback
+** a hot journal may be left in the filesystem but no error is returned
+** to the caller.
+*/
+int sqlite3pager_close(Pager *pPager){
+ PgHdr *pPg, *pNext;
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ /* A malloc() cannot fail in sqlite3ThreadData() as one or more calls to
+ ** malloc() must have already been made by this thread before it gets
+ ** to this point. This means the ThreadData must have been allocated already
+ ** so that ThreadData.nAlloc can be set.
+ */
+ ThreadData *pTsd = sqlite3ThreadData();
+ assert( pPager );
+ assert( pTsd && pTsd->nAlloc );
+#endif
+
+ switch( pPager->state ){
+ case PAGER_RESERVED:
+ case PAGER_SYNCED:
+ case PAGER_EXCLUSIVE: {
+ /* We ignore any IO errors that occur during the rollback
+ ** operation. So disable IO error simulation so that testing
+ ** works more easily.
+ */
+ disable_simulated_io_errors();
+ sqlite3pager_rollback(pPager);
+ enable_simulated_io_errors();
+ if( !MEMDB ){
+ sqlite3OsUnlock(pPager->fd, NO_LOCK);
+ }
+ assert( pPager->errCode || pPager->journalOpen==0 );
+ break;
+ }
+ case PAGER_SHARED: {
+ if( !MEMDB ){
+ sqlite3OsUnlock(pPager->fd, NO_LOCK);
+ }
+ break;
+ }
+ default: {
+ /* Do nothing */
+ break;
+ }
+ }
+ for(pPg=pPager->pAll; pPg; pPg=pNext){
+#ifndef NDEBUG
+ if( MEMDB ){
+ PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager);
+ assert( !pPg->alwaysRollback );
+ assert( !pHist->pOrig );
+ assert( !pHist->pStmt );
+ }
+#endif
+ pNext = pPg->pNextAll;
+ sqliteFree(pPg);
+ }
+ TRACE2("CLOSE %d\n", PAGERID(pPager));
+ assert( pPager->errCode || (pPager->journalOpen==0 && pPager->stmtOpen==0) );
+ if( pPager->journalOpen ){
+ sqlite3OsClose(&pPager->jfd);
+ }
+ sqliteFree(pPager->aInJournal);
+ if( pPager->stmtOpen ){
+ sqlite3OsClose(&pPager->stfd);
+ }
+ sqlite3OsClose(&pPager->fd);
+ /* Temp files are automatically deleted by the OS
+ ** if( pPager->tempFile ){
+ ** sqlite3OsDelete(pPager->zFilename);
+ ** }
+ */
+
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ /* Remove the pager from the linked list of pagers starting at
+ ** ThreadData.pPager if memory-management is enabled.
+ */
+ if( pPager==pTsd->pPager ){
+ pTsd->pPager = pPager->pNext;
+ }else{
+ Pager *pTmp;
+ for(pTmp = pTsd->pPager; pTmp->pNext!=pPager; pTmp=pTmp->pNext){}
+ pTmp->pNext = pPager->pNext;
+ }
+#endif
+ sqliteFree(pPager->aHash);
+ sqliteFree(pPager);
+ return SQLITE_OK;
+}
+
+/*
+** Return the page number for the given page data.
+*/
+Pgno sqlite3pager_pagenumber(void *pData){
+ PgHdr *p = DATA_TO_PGHDR(pData);
+ return p->pgno;
+}
+
+/*
+** The page_ref() function increments the reference count for a page.
+** If the page is currently on the freelist (the reference count is zero) then
+** remove it from the freelist.
+**
+** For non-test systems, page_ref() is a macro that calls _page_ref()
+** only if the reference count is zero. For test systems, page_ref()
+** is a real function so that we can set breakpoints and trace it.
+*/
+static void _page_ref(PgHdr *pPg){
+ if( pPg->nRef==0 ){
+ /* The page is currently on the freelist. Remove it. */
+ if( pPg==pPg->pPager->pFirstSynced ){
+ PgHdr *p = pPg->pNextFree;
+ while( p && p->needSync ){ p = p->pNextFree; }
+ pPg->pPager->pFirstSynced = p;
+ }
+ if( pPg->pPrevFree ){
+ pPg->pPrevFree->pNextFree = pPg->pNextFree;
+ }else{
+ pPg->pPager->pFirst = pPg->pNextFree;
+ }
+ if( pPg->pNextFree ){
+ pPg->pNextFree->pPrevFree = pPg->pPrevFree;
+ }else{
+ pPg->pPager->pLast = pPg->pPrevFree;
+ }
+ pPg->pPager->nRef++;
+ }
+ pPg->nRef++;
+ REFINFO(pPg);
+}
+#ifdef SQLITE_DEBUG
+ static void page_ref(PgHdr *pPg){
+ if( pPg->nRef==0 ){
+ _page_ref(pPg);
+ }else{
+ pPg->nRef++;
+ REFINFO(pPg);
+ }
+ }
+#else
+# define page_ref(P) ((P)->nRef==0?_page_ref(P):(void)(P)->nRef++)
+#endif
+
+/*
+** Increment the reference count for a page. The input pointer is
+** a reference to the page data.
+*/
+int sqlite3pager_ref(void *pData){
+ PgHdr *pPg = DATA_TO_PGHDR(pData);
+ page_ref(pPg);
+ return SQLITE_OK;
+}
+
+/*
+** Sync the journal. In other words, make sure all the pages that have
+** been written to the journal have actually reached the surface of the
+** disk. It is not safe to modify the original database file until after
+** the journal has been synced. If the original database is modified before
+** the journal is synced and a power failure occurs, the unsynced journal
+** data would be lost and we would be unable to completely rollback the
+** database changes. Database corruption would occur.
+**
+** This routine also updates the nRec field in the header of the journal.
+** (See comments on the pager_playback() routine for additional information.)
+** If the sync mode is FULL, two syncs will occur. First the whole journal
+** is synced, then the nRec field is updated, then a second sync occurs.
+**
+** For temporary databases, we do not care if we are able to rollback
+** after a power failure, so no sync occurs.
+**
+** This routine clears the needSync field of every page currently held in
+** memory.
+*/
+static int syncJournal(Pager *pPager){
+ PgHdr *pPg;
+ int rc = SQLITE_OK;
+
+ /* Sync the journal before modifying the main database
+ ** (assuming there is a journal and it needs to be synced.)
+ */
+ if( pPager->needSync ){
+ if( !pPager->tempFile ){
+ assert( pPager->journalOpen );
+ /* assert( !pPager->noSync ); // noSync might be set if synchronous
+ ** was turned off after the transaction was started. Ticket #615 */
+#ifndef NDEBUG
+ {
+ /* Make sure the pPager->nRec counter we are keeping agrees
+ ** with the nRec computed from the size of the journal file.
+ */
+ i64 jSz;
+ rc = sqlite3OsFileSize(pPager->jfd, &jSz);
+ if( rc!=0 ) return rc;
+ assert( pPager->journalOff==jSz );
+ }
+#endif
+ {
+ /* Write the nRec value into the journal file header. If in
+ ** full-synchronous mode, sync the journal first. This ensures that
+ ** all data has really hit the disk before nRec is updated to mark
+ ** it as a candidate for rollback.
+ */
+ if( pPager->fullSync ){
+ TRACE2("SYNC journal of %d\n", PAGERID(pPager));
+ rc = sqlite3OsSync(pPager->jfd, 0);
+ if( rc!=0 ) return rc;
+ }
+ rc = sqlite3OsSeek(pPager->jfd,
+ pPager->journalHdr + sizeof(aJournalMagic));
+ if( rc ) return rc;
+ rc = write32bits(pPager->jfd, pPager->nRec);
+ if( rc ) return rc;
+
+ rc = sqlite3OsSeek(pPager->jfd, pPager->journalOff);
+ if( rc ) return rc;
+ }
+ TRACE2("SYNC journal of %d\n", PAGERID(pPager));
+ rc = sqlite3OsSync(pPager->jfd, pPager->full_fsync);
+ if( rc!=0 ) return rc;
+ pPager->journalStarted = 1;
+ }
+ pPager->needSync = 0;
+
+ /* Erase the needSync flag from every page.
+ */
+ for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){
+ pPg->needSync = 0;
+ }
+ pPager->pFirstSynced = pPager->pFirst;
+ }
+
+#ifndef NDEBUG
+ /* If the Pager.needSync flag is clear then the PgHdr.needSync
+ ** flag must also be clear for all pages. Verify that this
+ ** invariant is true.
+ */
+ else{
+ for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){
+ assert( pPg->needSync==0 );
+ }
+ assert( pPager->pFirstSynced==pPager->pFirst );
+ }
+#endif
+
+ return rc;
+}
+
+/*
+** Merge two lists of pages connected by pDirty and in pgno order.
+** Do not bother fixing the pPrevDirty pointers.
+*/
+static PgHdr *merge_pagelist(PgHdr *pA, PgHdr *pB){
+ PgHdr result, *pTail;
+ pTail = &result;
+ while( pA && pB ){
+ if( pA->pgno<pB->pgno ){
+ pTail->pDirty = pA;
+ pTail = pA;
+ pA = pA->pDirty;
+ }else{
+ pTail->pDirty = pB;
+ pTail = pB;
+ pB = pB->pDirty;
+ }
+ }
+ if( pA ){
+ pTail->pDirty = pA;
+ }else if( pB ){
+ pTail->pDirty = pB;
+ }else{
+ pTail->pDirty = 0;
+ }
+ return result.pDirty;
+}
+
+/*
+** Sort the list of pages in ascending order by pgno. Pages are
+** connected by pDirty pointers. The pPrevDirty pointers are
+** corrupted by this sort.
+*/
+#define N_SORT_BUCKET 25
+static PgHdr *sort_pagelist(PgHdr *pIn){
+ PgHdr *a[N_SORT_BUCKET], *p;
+ int i;
+ memset(a, 0, sizeof(a));
+ while( pIn ){
+ p = pIn;
+ pIn = p->pDirty;
+ p->pDirty = 0;
+ for(i=0; i<N_SORT_BUCKET-1; i++){
+ if( a[i]==0 ){
+ a[i] = p;
+ break;
+ }else{
+ p = merge_pagelist(a[i], p);
+ a[i] = 0;
+ }
+ }
+ if( i==N_SORT_BUCKET-1 ){
+ a[i] = merge_pagelist(a[i], p);
+ }
+ }
+ p = a[0];
+ for(i=1; i<N_SORT_BUCKET; i++){
+ p = merge_pagelist(p, a[i]);
+ }
+ return p;
+}
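+
+/* Illustrative sketch, not part of the pager build (hence the "#if 0"
+** guard): the same bucket-based merge sort as sort_pagelist(), applied to
+** a minimal singly-linked list of integers. Bucket a[i] holds a sorted
+** run of roughly 2^i nodes; each incoming node merges its way upward
+** until it lands in an empty bucket, and a final pass merges every
+** bucket into one sorted list.
+*/
+#if 0
+#include <string.h>
+typedef struct ExampleNode ExampleNode;
+struct ExampleNode { int val; ExampleNode *pNext; };
+
+static ExampleNode *exampleMerge(ExampleNode *pA, ExampleNode *pB){
+  ExampleNode head, *pTail = &head;
+  while( pA && pB ){
+    if( pA->val<pB->val ){ pTail->pNext = pA; pTail = pA; pA = pA->pNext; }
+    else                 { pTail->pNext = pB; pTail = pB; pB = pB->pNext; }
+  }
+  pTail->pNext = pA ? pA : pB;
+  return head.pNext;
+}
+
+static ExampleNode *exampleSort(ExampleNode *pIn){
+  ExampleNode *a[25], *p;
+  int i;
+  memset(a, 0, sizeof(a));
+  while( pIn ){
+    p = pIn;
+    pIn = p->pNext;
+    p->pNext = 0;
+    for(i=0; i<25-1; i++){
+      if( a[i]==0 ){ a[i] = p; break; }
+      p = exampleMerge(a[i], p);
+      a[i] = 0;
+    }
+    if( i==25-1 ) a[i] = exampleMerge(a[i], p);  /* overflow bucket */
+  }
+  p = a[0];
+  for(i=1; i<25; i++) p = exampleMerge(p, a[i]);
+  return p;
+}
+#endif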
+
+/*
+** Given a list of pages (connected by the PgHdr.pDirty pointer) write
+** every one of those pages out to the database file and mark them all
+** as clean.
+*/
+static int pager_write_pagelist(PgHdr *pList){
+ Pager *pPager;
+ int rc;
+
+ if( pList==0 ) return SQLITE_OK;
+ pPager = pList->pPager;
+
+ /* At this point there may be either a RESERVED or EXCLUSIVE lock on the
+ ** database file. If there is already an EXCLUSIVE lock, the following
+ ** calls to sqlite3OsLock() are no-ops.
+ **
+ ** Moving the lock from RESERVED to EXCLUSIVE actually involves going
+ ** through an intermediate state PENDING. A PENDING lock prevents new
+ ** readers from attaching to the database but is insufficient for us to
+ ** write. The idea of a PENDING lock is to prevent new readers from
+ ** coming in while we wait for existing readers to clear.
+ **
+ ** While the pager is in the RESERVED state, the original database file
+ ** is unchanged and we can rollback without having to playback the
+ ** journal into the original database file. Once we transition to
+ ** EXCLUSIVE, it means the database file has been changed and any rollback
+ ** will require a journal playback.
+ */
+ rc = pager_wait_on_lock(pPager, EXCLUSIVE_LOCK);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ pList = sort_pagelist(pList);
+ while( pList ){
+ assert( pList->dirty );
+ rc = sqlite3OsSeek(pPager->fd, (pList->pgno-1)*(i64)pPager->pageSize);
+ if( rc ) return rc;
+ /* If there are dirty pages in the page cache with page numbers greater
+ ** than Pager.dbSize, this means sqlite3pager_truncate() was called to
+ ** make the file smaller (presumably by auto-vacuum code). Do not write
+ ** any such pages to the file.
+ */
+ if( pList->pgno<=pPager->dbSize ){
+ char *pData = CODEC2(pPager, PGHDR_TO_DATA(pList), pList->pgno, 6);
+ TRACE3("STORE %d page %d\n", PAGERID(pPager), pList->pgno);
+ rc = sqlite3OsWrite(pPager->fd, pData, pPager->pageSize);
+ TEST_INCR(pPager->nWrite);
+ }
+#ifndef NDEBUG
+ else{
+ TRACE3("NOSTORE %d page %d\n", PAGERID(pPager), pList->pgno);
+ }
+#endif
+ if( rc ) return rc;
+ pList->dirty = 0;
+#ifdef SQLITE_CHECK_PAGES
+ pList->pageHash = pager_pagehash(pList);
+#endif
+ pList = pList->pDirty;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Collect every dirty page into a dirty list and
+** return a pointer to the head of that list. All pages are
+** collected even if they are still in use.
+*/
+static PgHdr *pager_get_all_dirty_pages(Pager *pPager){
+ return pPager->pDirty;
+}
+
+/*
+** Return TRUE if there is a hot journal on the given pager.
+** A hot journal is one that needs to be played back.
+**
+** If the current size of the database file is 0 but a journal file
+** exists, that is probably an old journal left over from a prior
+** database with the same name. Just delete the journal.
+*/
+static int hasHotJournal(Pager *pPager){
+ if( !pPager->useJournal ) return 0;
+ if( !sqlite3OsFileExists(pPager->zJournal) ) return 0;
+ if( sqlite3OsCheckReservedLock(pPager->fd) ) return 0;
+ if( sqlite3pager_pagecount(pPager)==0 ){
+ sqlite3OsDelete(pPager->zJournal);
+ return 0;
+ }else{
+ return 1;
+ }
+}
+
+/*
+** Try to find a page in the cache that can be recycled.
+**
+** This routine may return SQLITE_IOERR, SQLITE_FULL or SQLITE_OK. It
+** does not set the pPager->errCode variable.
+*/
+static int pager_recycle(Pager *pPager, int syncOk, PgHdr **ppPg){
+ PgHdr *pPg;
+ *ppPg = 0;
+
+ /* Find a page to recycle. Try to locate a page that does not
+ ** require us to do an fsync() on the journal.
+ */
+ pPg = pPager->pFirstSynced;
+
+ /* If we could not find a page that does not require an fsync()
+ ** on the journal file then fsync the journal file. This is a
+ ** very slow operation, so we work hard to avoid it. But sometimes
+ ** it can't be helped.
+ */
+ if( pPg==0 && pPager->pFirst && syncOk && !MEMDB){
+ int rc = syncJournal(pPager);
+ if( rc!=0 ){
+ return rc;
+ }
+ if( pPager->fullSync ){
+ /* If in full-sync mode, write a new journal header into the
+ ** journal file. This is done to avoid ever modifying a journal
+ ** header that is involved in the rollback of pages that have
+ ** already been written to the database (in case the header is
+ ** trashed when the nRec field is updated).
+ */
+ pPager->nRec = 0;
+ assert( pPager->journalOff > 0 );
+ rc = writeJournalHdr(pPager);
+ if( rc!=0 ){
+ return rc;
+ }
+ }
+ pPg = pPager->pFirst;
+ }
+ if( pPg==0 ){
+ return SQLITE_OK;
+ }
+
+ assert( pPg->nRef==0 );
+
+ /* Write the page to the database file if it is dirty.
+ */
+ if( pPg->dirty ){
+ int rc;
+ assert( pPg->needSync==0 );
+ makeClean(pPg);
+ pPg->dirty = 1;
+ pPg->pDirty = 0;
+ rc = pager_write_pagelist( pPg );
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ }
+ assert( pPg->dirty==0 );
+
+ /* If the page we are recycling is marked as alwaysRollback, then
+ ** set the global alwaysRollback flag, thus disabling the
+  ** sqlite3pager_dont_rollback() optimization for the rest of this transaction.
+ ** It is necessary to do this because the page marked alwaysRollback
+ ** might be reloaded at a later time but at that point we won't remember
+  ** that it was marked alwaysRollback. This means that all pages must
+ ** be marked as alwaysRollback from here on out.
+ */
+ if( pPg->alwaysRollback ){
+ pPager->alwaysRollback = 1;
+ }
+
+ /* Unlink the old page from the free list and the hash table
+ */
+ unlinkPage(pPg);
+ TEST_INCR(pPager->nOvfl);
+
+ *ppPg = pPg;
+ return SQLITE_OK;
+}
+
+/*
+** This function is called to free superfluous dynamically allocated memory
+** held by the pager system. Memory in use by any SQLite pager allocated
+** by the current thread may be sqliteFree()ed.
+**
+** nReq is the number of bytes of memory required. Once this much has
+** been released, the function returns. A negative value for nReq means
+** free as much memory as possible. The return value is the total number
+** of bytes of memory released.
+*/
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+int sqlite3pager_release_memory(int nReq){
+ const ThreadData *pTsdro = sqlite3ThreadDataReadOnly();
+ Pager *p;
+ int nReleased = 0;
+ int i;
+
+  /* If the global mutex is held, this subroutine becomes a
+  ** no-op; zero bytes of memory are freed.  This is because
+ ** some of the code invoked by this function may also
+ ** try to obtain the mutex, resulting in a deadlock.
+ */
+ if( sqlite3OsInMutex(0) ){
+ return 0;
+ }
+
+  /* The outermost loop runs for at most two iterations. On the first iteration
+  ** we try to find memory that can be released without calling fsync(). The
+  ** second iteration (which only runs if the first failed to free nReq bytes
+  ** of memory) is permitted to call fsync(). This is of course much more
+ ** expensive.
+ */
+ for(i=0; i<=1; i++){
+
+ /* Loop through all the SQLite pagers opened by the current thread. */
+ for(p=pTsdro->pPager; p && (nReq<0 || nReleased<nReq); p=p->pNext){
+ PgHdr *pPg;
+ int rc;
+
+ /* For each pager, try to free as many pages as possible (without
+ ** calling fsync() if this is the first iteration of the outermost
+ ** loop).
+ */
+ while( SQLITE_OK==(rc = pager_recycle(p, i, &pPg)) && pPg) {
+ /* We've found a page to free. At this point the page has been
+ ** removed from the page hash-table, free-list and synced-list
+ ** (pFirstSynced). It is still in the all pages (pAll) list.
+ ** Remove it from this list before freeing.
+ **
+ ** Todo: Check the Pager.pStmt list to make sure this is Ok. It
+ ** probably is though.
+ */
+ PgHdr *pTmp;
+ assert( pPg );
+ page_remove_from_stmt_list(pPg);
+ if( pPg==p->pAll ){
+ p->pAll = pPg->pNextAll;
+ }else{
+ for( pTmp=p->pAll; pTmp->pNextAll!=pPg; pTmp=pTmp->pNextAll ){}
+ pTmp->pNextAll = pPg->pNextAll;
+ }
+ nReleased += sqliteAllocSize(pPg);
+ sqliteFree(pPg);
+ }
+
+ if( rc!=SQLITE_OK ){
+        /* An error occurred whilst writing to the database file or
+ ** journal in pager_recycle(). The error is not returned to the
+ ** caller of this function. Instead, set the Pager.errCode variable.
+ ** The error will be returned to the user (or users, in the case
+        ** of a shared pager cache) of the pager for which the error occurred.
+ */
+ assert( (rc&0xff)==SQLITE_IOERR || rc==SQLITE_FULL );
+ assert( p->state>=PAGER_RESERVED );
+ pager_error(p, rc);
+ }
+ }
+ }
+
+ return nReleased;
+}
+#endif /* SQLITE_ENABLE_MEMORY_MANAGEMENT */
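+
+/* A minimal usage sketch (illustrative only; not part of the imported source).
+** When the library is compiled with SQLITE_ENABLE_MEMORY_MANAGEMENT, a caller
+** under memory pressure can ask the pager layer to surrender as much memory
+** as possible by passing a negative byte count:
+**
+**   int nFreed = sqlite3pager_release_memory(-1);
+**
+** The return value reports how many bytes were actually released.
+*/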
+
+/*
+** Acquire a page.
+**
+** A read lock on the disk file is obtained when the first page is acquired.
+** This read lock is dropped when the last page is released.
+**
+** A _get works for any page number greater than 0. If the database
+** file is smaller than the requested page, then no actual disk
+** read occurs and the memory image of the page is initialized to
+** all zeros. The extra data appended to a page is always initialized
+** to zeros the first time a page is loaded into memory.
+**
+** The acquisition might fail for several reasons. In all cases,
+** an appropriate error code is returned and *ppPage is set to NULL.
+**
+** See also sqlite3pager_lookup(). Both this routine and _lookup() attempt
+** to find a page in the in-memory cache first. If the page is not already
+** in memory, this routine goes to disk to read it in whereas _lookup()
+** just returns 0. This routine acquires a read-lock the first time it
+** has to go to disk, and could also playback an old journal if necessary.
+** Since _lookup() never goes to disk, it never has to deal with locks
+** or journal files.
+*/
+int sqlite3pager_get(Pager *pPager, Pgno pgno, void **ppPage){
+ PgHdr *pPg;
+ int rc;
+
+ /* The maximum page number is 2^31. Return SQLITE_CORRUPT if a page
+ ** number greater than this, or zero, is requested.
+ */
+ if( pgno>PAGER_MAX_PGNO || pgno==0 || pgno==PAGER_MJ_PGNO(pPager) ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+
+ /* Make sure we have not hit any critical errors.
+ */
+ assert( pPager!=0 );
+ *ppPage = 0;
+ if( pPager->errCode && pPager->errCode!=SQLITE_FULL ){
+ return pPager->errCode;
+ }
+
+ /* If this is the first page accessed, then get a SHARED lock
+ ** on the database file.
+ */
+ if( pPager->nRef==0 && !MEMDB ){
+ if( !pPager->noReadlock ){
+ rc = pager_wait_on_lock(pPager, SHARED_LOCK);
+ if( rc!=SQLITE_OK ){
+ return pager_error(pPager, rc);
+ }
+ }
+
+ /* If a journal file exists, and there is no RESERVED lock on the
+ ** database file, then it either needs to be played back or deleted.
+ */
+ if( hasHotJournal(pPager) ){
+ /* Get an EXCLUSIVE lock on the database file. At this point it is
+ ** important that a RESERVED lock is not obtained on the way to the
+ ** EXCLUSIVE lock. If it were, another process might open the
+ ** database file, detect the RESERVED lock, and conclude that the
+ ** database is safe to read while this process is still rolling it
+ ** back.
+ **
+ ** Because the intermediate RESERVED lock is not requested, the
+ ** second process will get to this point in the code and fail to
+      ** obtain its own EXCLUSIVE lock on the database file.
+ */
+ rc = sqlite3OsLock(pPager->fd, EXCLUSIVE_LOCK);
+ if( rc!=SQLITE_OK ){
+ sqlite3OsUnlock(pPager->fd, NO_LOCK);
+ pPager->state = PAGER_UNLOCK;
+ return pager_error(pPager, rc);
+ }
+ pPager->state = PAGER_EXCLUSIVE;
+
+ /* Open the journal for reading only. Return SQLITE_BUSY if
+ ** we are unable to open the journal file.
+ **
+ ** The journal file does not need to be locked itself. The
+ ** journal file is never open unless the main database file holds
+ ** a write lock, so there is never any chance of two or more
+ ** processes opening the journal at the same time.
+ */
+ rc = sqlite3OsOpenReadOnly(pPager->zJournal, &pPager->jfd);
+ if( rc!=SQLITE_OK ){
+ sqlite3OsUnlock(pPager->fd, NO_LOCK);
+ pPager->state = PAGER_UNLOCK;
+ return SQLITE_BUSY;
+ }
+ pPager->journalOpen = 1;
+ pPager->journalStarted = 0;
+ pPager->journalOff = 0;
+ pPager->setMaster = 0;
+ pPager->journalHdr = 0;
+
+ /* Playback and delete the journal. Drop the database write
+ ** lock and reacquire the read lock.
+ */
+ rc = pager_playback(pPager);
+ if( rc!=SQLITE_OK ){
+ return pager_error(pPager, rc);
+ }
+ }
+ pPg = 0;
+ }else{
+ /* Search for page in cache */
+ pPg = pager_lookup(pPager, pgno);
+ if( MEMDB && pPager->state==PAGER_UNLOCK ){
+ pPager->state = PAGER_SHARED;
+ }
+ }
+ if( pPg==0 ){
+ /* The requested page is not in the page cache. */
+ int h;
+ TEST_INCR(pPager->nMiss);
+ if( pPager->nPage<pPager->mxPage || pPager->pFirst==0 || MEMDB ){
+ /* Create a new page */
+ if( pPager->nPage>=pPager->nHash ){
+ pager_resize_hash_table(pPager,
+ pPager->nHash<256 ? 256 : pPager->nHash*2);
+ if( pPager->nHash==0 ){
+ return SQLITE_NOMEM;
+ }
+ }
+ pPg = sqliteMallocRaw( sizeof(*pPg) + pPager->pageSize
+ + sizeof(u32) + pPager->nExtra
+ + MEMDB*sizeof(PgHistory) );
+ if( pPg==0 ){
+ return SQLITE_NOMEM;
+ }
+ memset(pPg, 0, sizeof(*pPg));
+ if( MEMDB ){
+ memset(PGHDR_TO_HIST(pPg, pPager), 0, sizeof(PgHistory));
+ }
+ pPg->pPager = pPager;
+ pPg->pNextAll = pPager->pAll;
+ pPager->pAll = pPg;
+ pPager->nPage++;
+ if( pPager->nPage>pPager->nMaxPage ){
+ assert( pPager->nMaxPage==(pPager->nPage-1) );
+ pPager->nMaxPage++;
+ }
+ }else{
+ rc = pager_recycle(pPager, 1, &pPg);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+      assert( pPg );
+ }
+ pPg->pgno = pgno;
+ if( pPager->aInJournal && (int)pgno<=pPager->origDbSize ){
+ sqlite3CheckMemory(pPager->aInJournal, pgno/8);
+ assert( pPager->journalOpen );
+ pPg->inJournal = (pPager->aInJournal[pgno/8] & (1<<(pgno&7)))!=0;
+ pPg->needSync = 0;
+ }else{
+ pPg->inJournal = 0;
+ pPg->needSync = 0;
+ }
+ if( pPager->aInStmt && (int)pgno<=pPager->stmtSize
+ && (pPager->aInStmt[pgno/8] & (1<<(pgno&7)))!=0 ){
+ page_add_to_stmt_list(pPg);
+ }else{
+ page_remove_from_stmt_list(pPg);
+ }
+ makeClean(pPg);
+ pPg->nRef = 1;
+ REFINFO(pPg);
+
+ pPager->nRef++;
+ if( pPager->nExtra>0 ){
+ memset(PGHDR_TO_EXTRA(pPg, pPager), 0, pPager->nExtra);
+ }
+ if( pPager->errCode ){
+ sqlite3pager_unref(PGHDR_TO_DATA(pPg));
+ rc = pPager->errCode;
+ return rc;
+ }
+
+ /* Populate the page with data, either by reading from the database
+ ** file, or by setting the entire page to zero.
+ */
+ if( sqlite3pager_pagecount(pPager)<(int)pgno || MEMDB ){
+ memset(PGHDR_TO_DATA(pPg), 0, pPager->pageSize);
+ }else{
+ assert( MEMDB==0 );
+ rc = sqlite3OsSeek(pPager->fd, (pgno-1)*(i64)pPager->pageSize);
+ if( rc==SQLITE_OK ){
+ rc = sqlite3OsRead(pPager->fd, PGHDR_TO_DATA(pPg),
+ pPager->pageSize);
+ }
+ TRACE3("FETCH %d page %d\n", PAGERID(pPager), pPg->pgno);
+ CODEC1(pPager, PGHDR_TO_DATA(pPg), pPg->pgno, 3);
+ if( rc!=SQLITE_OK ){
+ i64 fileSize;
+ int rc2 = sqlite3OsFileSize(pPager->fd, &fileSize);
+ if( rc2!=SQLITE_OK || fileSize>=pgno*pPager->pageSize ){
+          /* An IO error occurred in one of the sqlite3OsSeek() or
+ ** sqlite3OsRead() calls above. */
+ pPg->pgno = 0;
+ sqlite3pager_unref(PGHDR_TO_DATA(pPg));
+ return rc;
+ }else{
+ clear_simulated_io_error();
+ memset(PGHDR_TO_DATA(pPg), 0, pPager->pageSize);
+ }
+ }else{
+ TEST_INCR(pPager->nRead);
+ }
+ }
+
+ /* Link the page into the page hash table */
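+    /* Note: Pager.nHash is kept at a power of two (pager_resize_hash_table
+    ** grows it by doubling, starting at 256), so the AND below is a cheap
+    ** equivalent of pgno % nHash. */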
+ h = pgno & (pPager->nHash-1);
+ assert( pgno!=0 );
+ pPg->pNextHash = pPager->aHash[h];
+ pPager->aHash[h] = pPg;
+ if( pPg->pNextHash ){
+ assert( pPg->pNextHash->pPrevHash==0 );
+ pPg->pNextHash->pPrevHash = pPg;
+ }
+
+#ifdef SQLITE_CHECK_PAGES
+ pPg->pageHash = pager_pagehash(pPg);
+#endif
+ }else{
+ /* The requested page is in the page cache. */
+ TEST_INCR(pPager->nHit);
+ page_ref(pPg);
+ }
+ *ppPage = PGHDR_TO_DATA(pPg);
+ return SQLITE_OK;
+}
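+
+/* A minimal usage sketch (illustrative only; not part of the imported source):
+** acquire page 1, inspect its image, then release the reference.
+**
+**   void *pPage;
+**   int rc = sqlite3pager_get(pPager, 1, &pPage);
+**   if( rc==SQLITE_OK ){
+**     ...read up to pPager->pageSize bytes starting at pPage...
+**     sqlite3pager_unref(pPage);
+**   }
+*/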
+
+/*
+** Acquire a page if it is already in the in-memory cache. Do
+** not read the page from disk. Return a pointer to the page,
+** or 0 if the page is not in cache.
+**
+** See also sqlite3pager_get(). The difference between this routine
+** and sqlite3pager_get() is that _get() will go to the disk and read
+** in the page if the page is not already in cache. This routine
+** returns NULL if the page is not in cache or if a disk I/O error
+** has ever happened.
+*/
+void *sqlite3pager_lookup(Pager *pPager, Pgno pgno){
+ PgHdr *pPg;
+
+ assert( pPager!=0 );
+ assert( pgno!=0 );
+ if( pPager->errCode && pPager->errCode!=SQLITE_FULL ){
+ return 0;
+ }
+ pPg = pager_lookup(pPager, pgno);
+ if( pPg==0 ) return 0;
+ page_ref(pPg);
+ return PGHDR_TO_DATA(pPg);
+}
+
+/*
+** Release a page.
+**
+** If the number of references to the page drops to zero, then the
+** page is added to the LRU list. When all references to all pages
+** are released, a rollback occurs and the lock on the database is
+** removed.
+*/
+int sqlite3pager_unref(void *pData){
+ PgHdr *pPg;
+
+ /* Decrement the reference count for this page
+ */
+ pPg = DATA_TO_PGHDR(pData);
+ assert( pPg->nRef>0 );
+ pPg->nRef--;
+ REFINFO(pPg);
+
+ CHECK_PAGE(pPg);
+
+  /* When the number of references to a page reaches 0, call the
+ ** destructor and add the page to the freelist.
+ */
+ if( pPg->nRef==0 ){
+ Pager *pPager;
+ pPager = pPg->pPager;
+ pPg->pNextFree = 0;
+ pPg->pPrevFree = pPager->pLast;
+ pPager->pLast = pPg;
+ if( pPg->pPrevFree ){
+ pPg->pPrevFree->pNextFree = pPg;
+ }else{
+ pPager->pFirst = pPg;
+ }
+ if( pPg->needSync==0 && pPager->pFirstSynced==0 ){
+ pPager->pFirstSynced = pPg;
+ }
+ if( pPager->xDestructor ){
+ pPager->xDestructor(pData, pPager->pageSize);
+ }
+
+ /* When all pages reach the freelist, drop the read lock from
+ ** the database file.
+ */
+ pPager->nRef--;
+ assert( pPager->nRef>=0 );
+ if( pPager->nRef==0 && !MEMDB ){
+ pager_reset(pPager);
+ }
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Create a journal file for pPager. There should already be a RESERVED
+** or EXCLUSIVE lock on the database file when this routine is called.
+**
+** Return SQLITE_OK if everything goes well. Return an error code and release the
+** write lock if anything goes wrong.
+*/
+static int pager_open_journal(Pager *pPager){
+ int rc;
+ assert( !MEMDB );
+ assert( pPager->state>=PAGER_RESERVED );
+ assert( pPager->journalOpen==0 );
+ assert( pPager->useJournal );
+ assert( pPager->aInJournal==0 );
+ sqlite3pager_pagecount(pPager);
+ pPager->aInJournal = sqliteMalloc( pPager->dbSize/8 + 1 );
+ if( pPager->aInJournal==0 ){
+ rc = SQLITE_NOMEM;
+ goto failed_to_open_journal;
+ }
+ rc = sqlite3OsOpenExclusive(pPager->zJournal, &pPager->jfd,
+ pPager->tempFile);
+ pPager->journalOff = 0;
+ pPager->setMaster = 0;
+ pPager->journalHdr = 0;
+ if( rc!=SQLITE_OK ){
+ goto failed_to_open_journal;
+ }
+ sqlite3OsSetFullSync(pPager->jfd, pPager->full_fsync);
+ sqlite3OsSetFullSync(pPager->fd, pPager->full_fsync);
+ sqlite3OsOpenDirectory(pPager->jfd, pPager->zDirectory);
+ pPager->journalOpen = 1;
+ pPager->journalStarted = 0;
+ pPager->needSync = 0;
+ pPager->alwaysRollback = 0;
+ pPager->nRec = 0;
+ if( pPager->errCode ){
+ rc = pPager->errCode;
+ goto failed_to_open_journal;
+ }
+ pPager->origDbSize = pPager->dbSize;
+
+ rc = writeJournalHdr(pPager);
+
+ if( pPager->stmtAutoopen && rc==SQLITE_OK ){
+ rc = sqlite3pager_stmt_begin(pPager);
+ }
+ if( rc!=SQLITE_OK && rc!=SQLITE_NOMEM ){
+ rc = pager_unwritelock(pPager);
+ if( rc==SQLITE_OK ){
+ rc = SQLITE_FULL;
+ }
+ }
+ return rc;
+
+failed_to_open_journal:
+ sqliteFree(pPager->aInJournal);
+ pPager->aInJournal = 0;
+ if( rc==SQLITE_NOMEM ){
+ /* If this was a malloc() failure, then we will not be closing the pager
+ ** file. So delete any journal file we may have just created. Otherwise,
+    ** the system will get confused: we have a read-lock on the file and a
+ ** mysterious journal has appeared in the filesystem.
+ */
+ sqlite3OsDelete(pPager->zJournal);
+ }else{
+ sqlite3OsUnlock(pPager->fd, NO_LOCK);
+ pPager->state = PAGER_UNLOCK;
+ }
+ return rc;
+}
+
+/*
+** Acquire a write-lock on the database. The lock is removed when
+** any of the following happen:
+**
+** * sqlite3pager_commit() is called.
+** * sqlite3pager_rollback() is called.
+** * sqlite3pager_close() is called.
+**     * sqlite3pager_unref() is called on every outstanding page.
+**
+** The first parameter to this routine is a pointer to any open page of the
+** database file. Nothing changes about the page - it is used merely to
+** acquire a pointer to the Pager structure and as proof that there is
+** already a read-lock on the database.
+**
+** The second parameter indicates how much space in bytes to reserve for a
+** master journal file-name at the start of the journal when it is created.
+**
+** A journal file is opened if this is not a temporary file. For temporary
+** files, the opening of the journal file is deferred until there is an
+** actual need to write to the journal.
+**
+** If the database is already reserved for writing, this routine is a no-op.
+**
+** If exFlag is true, go ahead and get an EXCLUSIVE lock on the file
+** immediately instead of waiting until we try to flush the cache. The
+** exFlag is ignored if a transaction is already active.
+*/
+int sqlite3pager_begin(void *pData, int exFlag){
+ PgHdr *pPg = DATA_TO_PGHDR(pData);
+ Pager *pPager = pPg->pPager;
+ int rc = SQLITE_OK;
+ assert( pPg->nRef>0 );
+ assert( pPager->state!=PAGER_UNLOCK );
+ if( pPager->state==PAGER_SHARED ){
+ assert( pPager->aInJournal==0 );
+ if( MEMDB ){
+ pPager->state = PAGER_EXCLUSIVE;
+ pPager->origDbSize = pPager->dbSize;
+ }else{
+ rc = sqlite3OsLock(pPager->fd, RESERVED_LOCK);
+ if( rc==SQLITE_OK ){
+ pPager->state = PAGER_RESERVED;
+ if( exFlag ){
+ rc = pager_wait_on_lock(pPager, EXCLUSIVE_LOCK);
+ }
+ }
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ pPager->dirtyCache = 0;
+ TRACE2("TRANSACTION %d\n", PAGERID(pPager));
+ if( pPager->useJournal && !pPager->tempFile ){
+ rc = pager_open_journal(pPager);
+ }
+ }
+ }
+ return rc;
+}
+
+/*
+** Make a page dirty. Set its dirty flag and add it to the dirty
+** page list.
+*/
+static void makeDirty(PgHdr *pPg){
+ if( pPg->dirty==0 ){
+ Pager *pPager = pPg->pPager;
+ pPg->dirty = 1;
+ pPg->pDirty = pPager->pDirty;
+ if( pPager->pDirty ){
+ pPager->pDirty->pPrevDirty = pPg;
+ }
+ pPg->pPrevDirty = 0;
+ pPager->pDirty = pPg;
+ }
+}
+
+/*
+** Make a page clean. Clear its dirty bit and remove it from the
+** dirty page list.
+*/
+static void makeClean(PgHdr *pPg){
+ if( pPg->dirty ){
+ pPg->dirty = 0;
+ if( pPg->pDirty ){
+ pPg->pDirty->pPrevDirty = pPg->pPrevDirty;
+ }
+ if( pPg->pPrevDirty ){
+ pPg->pPrevDirty->pDirty = pPg->pDirty;
+ }else{
+ pPg->pPager->pDirty = pPg->pDirty;
+ }
+ }
+}
+
+
+/*
+** Mark a data page as writeable. The page is written into the journal
+** if it is not there already. This routine must be called before making
+** changes to a page.
+**
+** The first time this routine is called, the pager creates a new
+** journal and acquires a RESERVED lock on the database. If the RESERVED
+** lock could not be acquired, this routine returns SQLITE_BUSY. The
+** calling routine must check for that return value and be careful not to
+** change any page data until this routine returns SQLITE_OK.
+**
+** If the journal file could not be written because the disk is full,
+** then this routine returns SQLITE_FULL and does an immediate rollback.
+** All subsequent write attempts also return SQLITE_FULL until there
+** is a call to sqlite3pager_commit() or sqlite3pager_rollback() to
+** reset.
+*/
+int sqlite3pager_write(void *pData){
+ PgHdr *pPg = DATA_TO_PGHDR(pData);
+ Pager *pPager = pPg->pPager;
+ int rc = SQLITE_OK;
+
+ /* Check for errors
+ */
+ if( pPager->errCode ){
+ return pPager->errCode;
+ }
+ if( pPager->readOnly ){
+ return SQLITE_PERM;
+ }
+
+ assert( !pPager->setMaster );
+
+ CHECK_PAGE(pPg);
+
+ /* Mark the page as dirty. If the page has already been written
+ ** to the journal then we can return right away.
+ */
+ makeDirty(pPg);
+ if( pPg->inJournal && (pPg->inStmt || pPager->stmtInUse==0) ){
+ pPager->dirtyCache = 1;
+ }else{
+
+ /* If we get this far, it means that the page needs to be
+    ** written to the transaction journal or the checkpoint journal
+ ** or both.
+ **
+ ** First check to see that the transaction journal exists and
+ ** create it if it does not.
+ */
+ assert( pPager->state!=PAGER_UNLOCK );
+ rc = sqlite3pager_begin(pData, 0);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ assert( pPager->state>=PAGER_RESERVED );
+ if( !pPager->journalOpen && pPager->useJournal ){
+ rc = pager_open_journal(pPager);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+ assert( pPager->journalOpen || !pPager->useJournal );
+ pPager->dirtyCache = 1;
+
+ /* The transaction journal now exists and we have a RESERVED or an
+ ** EXCLUSIVE lock on the main database file. Write the current page to
+ ** the transaction journal if it is not there already.
+ */
+ if( !pPg->inJournal && (pPager->useJournal || MEMDB) ){
+ if( (int)pPg->pgno <= pPager->origDbSize ){
+ int szPg;
+ if( MEMDB ){
+ PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager);
+ TRACE3("JOURNAL %d page %d\n", PAGERID(pPager), pPg->pgno);
+ assert( pHist->pOrig==0 );
+ pHist->pOrig = sqliteMallocRaw( pPager->pageSize );
+ if( pHist->pOrig ){
+ memcpy(pHist->pOrig, PGHDR_TO_DATA(pPg), pPager->pageSize);
+ }
+ }else{
+ u32 cksum, saved;
+ char *pData2, *pEnd;
+ /* We should never write to the journal file the page that
+ ** contains the database locks. The following assert verifies
+ ** that we do not. */
+ assert( pPg->pgno!=PAGER_MJ_PGNO(pPager) );
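+        /* The journal record written below consists of a 4-byte page number,
+        ** followed by pPager->pageSize bytes of page data, followed by a
+        ** 4-byte checksum; hence szPg is set to pageSize+8 below. */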
+ pData2 = CODEC2(pPager, pData, pPg->pgno, 7);
+ cksum = pager_cksum(pPager, (u8*)pData2);
+ pEnd = pData2 + pPager->pageSize;
+ pData2 -= 4;
+ saved = *(u32*)pEnd;
+ put32bits(pEnd, cksum);
+ szPg = pPager->pageSize+8;
+ put32bits(pData2, pPg->pgno);
+ rc = sqlite3OsWrite(pPager->jfd, pData2, szPg);
+ pPager->journalOff += szPg;
+ TRACE4("JOURNAL %d page %d needSync=%d\n",
+ PAGERID(pPager), pPg->pgno, pPg->needSync);
+ *(u32*)pEnd = saved;
+
+          /* An error has occurred writing to the journal file. The
+ ** transaction will be rolled back by the layer above.
+ */
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ pPager->nRec++;
+ assert( pPager->aInJournal!=0 );
+ pPager->aInJournal[pPg->pgno/8] |= 1<<(pPg->pgno&7);
+ pPg->needSync = !pPager->noSync;
+ if( pPager->stmtInUse ){
+ pPager->aInStmt[pPg->pgno/8] |= 1<<(pPg->pgno&7);
+ page_add_to_stmt_list(pPg);
+ }
+ }
+ }else{
+ pPg->needSync = !pPager->journalStarted && !pPager->noSync;
+ TRACE4("APPEND %d page %d needSync=%d\n",
+ PAGERID(pPager), pPg->pgno, pPg->needSync);
+ }
+ if( pPg->needSync ){
+ pPager->needSync = 1;
+ }
+ pPg->inJournal = 1;
+ }
+
+ /* If the statement journal is open and the page is not in it,
+ ** then write the current page to the statement journal. Note that
+ ** the statement journal format differs from the standard journal format
+ ** in that it omits the checksums and the header.
+ */
+ if( pPager->stmtInUse && !pPg->inStmt && (int)pPg->pgno<=pPager->stmtSize ){
+ assert( pPg->inJournal || (int)pPg->pgno>pPager->origDbSize );
+ if( MEMDB ){
+ PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager);
+ assert( pHist->pStmt==0 );
+ pHist->pStmt = sqliteMallocRaw( pPager->pageSize );
+ if( pHist->pStmt ){
+ memcpy(pHist->pStmt, PGHDR_TO_DATA(pPg), pPager->pageSize);
+ }
+ TRACE3("STMT-JOURNAL %d page %d\n", PAGERID(pPager), pPg->pgno);
+ }else{
+ char *pData2 = CODEC2(pPager, pData, pPg->pgno, 7)-4;
+ put32bits(pData2, pPg->pgno);
+ rc = sqlite3OsWrite(pPager->stfd, pData2, pPager->pageSize+4);
+ TRACE3("STMT-JOURNAL %d page %d\n", PAGERID(pPager), pPg->pgno);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ pPager->stmtNRec++;
+ assert( pPager->aInStmt!=0 );
+ pPager->aInStmt[pPg->pgno/8] |= 1<<(pPg->pgno&7);
+ }
+ page_add_to_stmt_list(pPg);
+ }
+ }
+
+ /* Update the database size and return.
+ */
+ if( pPager->dbSize<(int)pPg->pgno ){
+ pPager->dbSize = pPg->pgno;
+ if( !MEMDB && pPager->dbSize==PENDING_BYTE/pPager->pageSize ){
+ pPager->dbSize++;
+ }
+ }
+ return rc;
+}
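+
+/* A minimal usage sketch (illustrative only; not part of the imported source):
+** a typical caller acquires a page, marks it writeable, modifies the page
+** image, releases its reference, and eventually ends the transaction with
+** sqlite3pager_commit() or sqlite3pager_rollback().
+**
+**   void *pPage;
+**   if( sqlite3pager_get(pPager, pgno, &pPage)==SQLITE_OK ){
+**     if( sqlite3pager_write(pPage)==SQLITE_OK ){
+**       memset(pPage, 0, 10);         ...modify the page image...
+**     }
+**     sqlite3pager_unref(pPage);
+**   }
+**   sqlite3pager_commit(pPager);
+*/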
+
+/*
+** Return TRUE if the page given in the argument was previously passed
+** to sqlite3pager_write(). In other words, return TRUE if it is ok
+** to change the content of the page.
+*/
+#ifndef NDEBUG
+int sqlite3pager_iswriteable(void *pData){
+ PgHdr *pPg = DATA_TO_PGHDR(pData);
+ return pPg->dirty;
+}
+#endif
+
+#ifndef SQLITE_OMIT_VACUUM
+/*
+** Replace the content of a single page with the information in the third
+** argument.
+*/
+int sqlite3pager_overwrite(Pager *pPager, Pgno pgno, void *pData){
+ void *pPage;
+ int rc;
+
+ rc = sqlite3pager_get(pPager, pgno, &pPage);
+ if( rc==SQLITE_OK ){
+ rc = sqlite3pager_write(pPage);
+ if( rc==SQLITE_OK ){
+ memcpy(pPage, pData, pPager->pageSize);
+ }
+ sqlite3pager_unref(pPage);
+ }
+ return rc;
+}
+#endif
+
+/*
+** A call to this routine tells the pager that it is not necessary to
+** write the information on page "pgno" back to the disk, even though
+** that page might be marked as dirty.
+**
+** The overlying software layer calls this routine when all of the data
+** on the given page is unused. The pager marks the page as clean so
+** that it does not get written to disk.
+**
+** Tests show that this optimization, together with the
+** sqlite3pager_dont_rollback() below, more than double the speed
+** of large INSERT operations and quadruple the speed of large DELETEs.
+**
+** When this routine is called, set the alwaysRollback flag to true.
+** Subsequent calls to sqlite3pager_dont_rollback() for the same page
+** will thereafter be ignored. This is necessary to avoid a problem
+** where a page with data is added to the freelist during one part of
+** a transaction then removed from the freelist during a later part
+** of the same transaction and reused for some other purpose. When it
+** is first added to the freelist, this routine is called. When reused,
+** the dont_rollback() routine is called. But because the page contains
+** critical data, we still need to be sure it gets rolled back in spite
+** of the dont_rollback() call.
+*/
+void sqlite3pager_dont_write(Pager *pPager, Pgno pgno){
+ PgHdr *pPg;
+
+ if( MEMDB ) return;
+
+ pPg = pager_lookup(pPager, pgno);
+ assert( pPg!=0 ); /* We never call _dont_write unless the page is in mem */
+ pPg->alwaysRollback = 1;
+ if( pPg->dirty && !pPager->stmtInUse ){
+ if( pPager->dbSize==(int)pPg->pgno && pPager->origDbSize<pPager->dbSize ){
+    /* If this page is the last page in the file and the file has grown
+ ** during the current transaction, then do NOT mark the page as clean.
+ ** When the database file grows, we must make sure that the last page
+ ** gets written at least once so that the disk file will be the correct
+ ** size. If you do not write this page and the size of the file
+ ** on the disk ends up being too small, that can lead to database
+ ** corruption during the next transaction.
+ */
+ }else{
+ TRACE3("DONT_WRITE page %d of %d\n", pgno, PAGERID(pPager));
+ makeClean(pPg);
+#ifdef SQLITE_CHECK_PAGES
+ pPg->pageHash = pager_pagehash(pPg);
+#endif
+ }
+ }
+}
+
+/*
+** A call to this routine tells the pager that if a rollback occurs,
+** it is not necessary to restore the data on the given page. This
+** means that the pager does not have to record the given page in the
+** rollback journal.
+*/
+void sqlite3pager_dont_rollback(void *pData){
+ PgHdr *pPg = DATA_TO_PGHDR(pData);
+ Pager *pPager = pPg->pPager;
+
+ if( pPager->state!=PAGER_EXCLUSIVE || pPager->journalOpen==0 ) return;
+ if( pPg->alwaysRollback || pPager->alwaysRollback || MEMDB ) return;
+ if( !pPg->inJournal && (int)pPg->pgno <= pPager->origDbSize ){
+ assert( pPager->aInJournal!=0 );
+ pPager->aInJournal[pPg->pgno/8] |= 1<<(pPg->pgno&7);
+ pPg->inJournal = 1;
+ if( pPager->stmtInUse ){
+ pPager->aInStmt[pPg->pgno/8] |= 1<<(pPg->pgno&7);
+ page_add_to_stmt_list(pPg);
+ }
+ TRACE3("DONT_ROLLBACK page %d of %d\n", pPg->pgno, PAGERID(pPager));
+ }
+ if( pPager->stmtInUse && !pPg->inStmt && (int)pPg->pgno<=pPager->stmtSize ){
+ assert( pPg->inJournal || (int)pPg->pgno>pPager->origDbSize );
+ assert( pPager->aInStmt!=0 );
+ pPager->aInStmt[pPg->pgno/8] |= 1<<(pPg->pgno&7);
+ page_add_to_stmt_list(pPg);
+ }
+}
+
+
+/*
+** Commit all changes to the database and release the write lock.
+**
+** If the commit fails for any reason, a rollback attempt is made
+** and an error code is returned. If the commit worked, SQLITE_OK
+** is returned.
+*/
+int sqlite3pager_commit(Pager *pPager){
+ int rc;
+ PgHdr *pPg;
+
+ if( pPager->errCode ){
+ return pPager->errCode;
+ }
+ if( pPager->state<PAGER_RESERVED ){
+ return SQLITE_ERROR;
+ }
+ TRACE2("COMMIT %d\n", PAGERID(pPager));
+ if( MEMDB ){
+ pPg = pager_get_all_dirty_pages(pPager);
+ while( pPg ){
+ clearHistory(PGHDR_TO_HIST(pPg, pPager));
+ pPg->dirty = 0;
+ pPg->inJournal = 0;
+ pPg->inStmt = 0;
+ pPg->needSync = 0;
+ pPg->pPrevStmt = pPg->pNextStmt = 0;
+ pPg = pPg->pDirty;
+ }
+ pPager->pDirty = 0;
+#ifndef NDEBUG
+ for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){
+ PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager);
+ assert( !pPg->alwaysRollback );
+ assert( !pHist->pOrig );
+ assert( !pHist->pStmt );
+ }
+#endif
+ pPager->pStmt = 0;
+ pPager->state = PAGER_SHARED;
+ return SQLITE_OK;
+ }
+ if( pPager->dirtyCache==0 ){
+ /* Exit early (without doing the time-consuming sqlite3OsSync() calls)
+ ** if there have been no changes to the database file. */
+ assert( pPager->needSync==0 );
+ rc = pager_unwritelock(pPager);
+ pPager->dbSize = -1;
+ return rc;
+ }
+ assert( pPager->journalOpen );
+ rc = sqlite3pager_sync(pPager, 0, 0);
+ if( rc==SQLITE_OK ){
+ rc = pager_unwritelock(pPager);
+ pPager->dbSize = -1;
+ }
+ return rc;
+}
+
+/*
+** Rollback all changes. The database falls back to PAGER_SHARED mode.
+** All in-memory cache pages revert to their original data contents.
+** The journal is deleted.
+**
+** This routine cannot fail unless some other process is not following
+** the correct locking protocol (SQLITE_PROTOCOL) or unless some other
+** process is writing trash into the journal file (SQLITE_CORRUPT) or
+** unless a prior malloc() failed (SQLITE_NOMEM). Appropriate error
+** codes are returned for all these occasions. Otherwise,
+** SQLITE_OK is returned.
+*/
+int sqlite3pager_rollback(Pager *pPager){
+ int rc;
+ TRACE2("ROLLBACK %d\n", PAGERID(pPager));
+ if( MEMDB ){
+ PgHdr *p;
+ for(p=pPager->pAll; p; p=p->pNextAll){
+ PgHistory *pHist;
+ assert( !p->alwaysRollback );
+ if( !p->dirty ){
+ assert( !((PgHistory *)PGHDR_TO_HIST(p, pPager))->pOrig );
+ assert( !((PgHistory *)PGHDR_TO_HIST(p, pPager))->pStmt );
+ continue;
+ }
+
+ pHist = PGHDR_TO_HIST(p, pPager);
+ if( pHist->pOrig ){
+ memcpy(PGHDR_TO_DATA(p), pHist->pOrig, pPager->pageSize);
+ TRACE3("ROLLBACK-PAGE %d of %d\n", p->pgno, PAGERID(pPager));
+ }else{
+ TRACE3("PAGE %d is clean on %d\n", p->pgno, PAGERID(pPager));
+ }
+ clearHistory(pHist);
+ p->dirty = 0;
+ p->inJournal = 0;
+ p->inStmt = 0;
+ p->pPrevStmt = p->pNextStmt = 0;
+ if( pPager->xReiniter ){
+ pPager->xReiniter(PGHDR_TO_DATA(p), pPager->pageSize);
+ }
+ }
+ pPager->pDirty = 0;
+ pPager->pStmt = 0;
+ pPager->dbSize = pPager->origDbSize;
+ memoryTruncate(pPager);
+ pPager->stmtInUse = 0;
+ pPager->state = PAGER_SHARED;
+ return SQLITE_OK;
+ }
+
+ if( !pPager->dirtyCache || !pPager->journalOpen ){
+ rc = pager_unwritelock(pPager);
+ pPager->dbSize = -1;
+ return rc;
+ }
+
+ if( pPager->errCode && pPager->errCode!=SQLITE_FULL ){
+ if( pPager->state>=PAGER_EXCLUSIVE ){
+ pager_playback(pPager);
+ }
+ return pPager->errCode;
+ }
+ if( pPager->state==PAGER_RESERVED ){
+ int rc2;
+ rc = pager_reload_cache(pPager);
+ rc2 = pager_unwritelock(pPager);
+ if( rc==SQLITE_OK ){
+ rc = rc2;
+ }
+ }else{
+ rc = pager_playback(pPager);
+ }
+ pPager->dbSize = -1;
+
+ /* If an error occurs during a ROLLBACK, we can no longer trust the pager
+ ** cache. So call pager_error() on the way out to make any error
+ ** persistent.
+ */
+ return pager_error(pPager, rc);
+}
+
+/*
+** Return TRUE if the database file is opened read-only. Return FALSE
+** if the database is (in theory) writable.
+*/
+int sqlite3pager_isreadonly(Pager *pPager){
+ return pPager->readOnly;
+}
+
+/*
+** Return the number of references to the pager.
+*/
+int sqlite3pager_refcount(Pager *pPager){
+ return pPager->nRef;
+}
+
+#ifdef SQLITE_TEST
+/*
+** This routine is used for testing and analysis only.
+*/
+int *sqlite3pager_stats(Pager *pPager){
+ static int a[11];
+ a[0] = pPager->nRef;
+ a[1] = pPager->nPage;
+ a[2] = pPager->mxPage;
+ a[3] = pPager->dbSize;
+ a[4] = pPager->state;
+ a[5] = pPager->errCode;
+ a[6] = pPager->nHit;
+ a[7] = pPager->nMiss;
+ a[8] = pPager->nOvfl;
+ a[9] = pPager->nRead;
+ a[10] = pPager->nWrite;
+ return a;
+}
+#endif
+
+/*
+** Set the statement rollback point.
+**
+** This routine should be called with the transaction journal already
+** open. A new statement journal is created that can be used to rollback
+** changes of a single SQL command within a larger transaction.
+*/
+int sqlite3pager_stmt_begin(Pager *pPager){
+ int rc;
+ char zTemp[SQLITE_TEMPNAME_SIZE];
+ assert( !pPager->stmtInUse );
+ assert( pPager->dbSize>=0 );
+ TRACE2("STMT-BEGIN %d\n", PAGERID(pPager));
+ if( MEMDB ){
+ pPager->stmtInUse = 1;
+ pPager->stmtSize = pPager->dbSize;
+ return SQLITE_OK;
+ }
+ if( !pPager->journalOpen ){
+ pPager->stmtAutoopen = 1;
+ return SQLITE_OK;
+ }
+ assert( pPager->journalOpen );
+ pPager->aInStmt = sqliteMalloc( pPager->dbSize/8 + 1 );
+ if( pPager->aInStmt==0 ){
+ /* sqlite3OsLock(pPager->fd, SHARED_LOCK); */
+ return SQLITE_NOMEM;
+ }
+#ifndef NDEBUG
+ rc = sqlite3OsFileSize(pPager->jfd, &pPager->stmtJSize);
+ if( rc ) goto stmt_begin_failed;
+ assert( pPager->stmtJSize == pPager->journalOff );
+#endif
+ pPager->stmtJSize = pPager->journalOff;
+ pPager->stmtSize = pPager->dbSize;
+ pPager->stmtHdrOff = 0;
+ pPager->stmtCksum = pPager->cksumInit;
+ if( !pPager->stmtOpen ){
+ rc = sqlite3pager_opentemp(zTemp, &pPager->stfd);
+ if( rc ) goto stmt_begin_failed;
+ pPager->stmtOpen = 1;
+ pPager->stmtNRec = 0;
+ }
+ pPager->stmtInUse = 1;
+ return SQLITE_OK;
+
+stmt_begin_failed:
+ if( pPager->aInStmt ){
+ sqliteFree(pPager->aInStmt);
+ pPager->aInStmt = 0;
+ }
+ return rc;
+}
+
+/*
+** Commit a statement.
+*/
+int sqlite3pager_stmt_commit(Pager *pPager){
+ if( pPager->stmtInUse ){
+ PgHdr *pPg, *pNext;
+ TRACE2("STMT-COMMIT %d\n", PAGERID(pPager));
+ if( !MEMDB ){
+ sqlite3OsSeek(pPager->stfd, 0);
+ /* sqlite3OsTruncate(pPager->stfd, 0); */
+ sqliteFree( pPager->aInStmt );
+ pPager->aInStmt = 0;
+ }
+ for(pPg=pPager->pStmt; pPg; pPg=pNext){
+ pNext = pPg->pNextStmt;
+ assert( pPg->inStmt );
+ pPg->inStmt = 0;
+ pPg->pPrevStmt = pPg->pNextStmt = 0;
+ if( MEMDB ){
+ PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager);
+ sqliteFree(pHist->pStmt);
+ pHist->pStmt = 0;
+ }
+ }
+ pPager->stmtNRec = 0;
+ pPager->stmtInUse = 0;
+ pPager->pStmt = 0;
+ }
+ pPager->stmtAutoopen = 0;
+ return SQLITE_OK;
+}
+
+/*
+** Rollback a statement.
+*/
+int sqlite3pager_stmt_rollback(Pager *pPager){
+ int rc;
+ if( pPager->stmtInUse ){
+ TRACE2("STMT-ROLLBACK %d\n", PAGERID(pPager));
+ if( MEMDB ){
+ PgHdr *pPg;
+ for(pPg=pPager->pStmt; pPg; pPg=pPg->pNextStmt){
+ PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager);
+ if( pHist->pStmt ){
+ memcpy(PGHDR_TO_DATA(pPg), pHist->pStmt, pPager->pageSize);
+ sqliteFree(pHist->pStmt);
+ pHist->pStmt = 0;
+ }
+ }
+ pPager->dbSize = pPager->stmtSize;
+ memoryTruncate(pPager);
+ rc = SQLITE_OK;
+ }else{
+ rc = pager_stmt_playback(pPager);
+ }
+ sqlite3pager_stmt_commit(pPager);
+ }else{
+ rc = SQLITE_OK;
+ }
+ pPager->stmtAutoopen = 0;
+ return rc;
+}
+
+/*
+** Return the full pathname of the database file.
+*/
+const char *sqlite3pager_filename(Pager *pPager){
+ return pPager->zFilename;
+}
+
+/*
+** Return the directory of the database file.
+*/
+const char *sqlite3pager_dirname(Pager *pPager){
+ return pPager->zDirectory;
+}
+
+/*
+** Return the full pathname of the journal file.
+*/
+const char *sqlite3pager_journalname(Pager *pPager){
+ return pPager->zJournal;
+}
+
+/*
+** Return TRUE if fsync() calls are disabled for this pager. Return FALSE
+** if fsync()s are executed normally.
+*/
+int sqlite3pager_nosync(Pager *pPager){
+ return pPager->noSync;
+}
+
+/*
+** Set the codec for this pager
+*/
+void sqlite3pager_set_codec(
+ Pager *pPager,
+ void *(*xCodec)(void*,void*,Pgno,int),
+ void *pCodecArg
+){
+ pPager->xCodec = xCodec;
+ pPager->pCodecArg = pCodecArg;
+}
+
+/*
+** This routine is called to increment the database file change-counter,
+** stored at byte 24 of the pager file.
+*/
+static int pager_incr_changecounter(Pager *pPager){
+ void *pPage;
+ PgHdr *pPgHdr;
+ u32 change_counter;
+ int rc;
+
+ /* Open page 1 of the file for writing. */
+ rc = sqlite3pager_get(pPager, 1, &pPage);
+ if( rc!=SQLITE_OK ) return rc;
+ rc = sqlite3pager_write(pPage);
+ if( rc!=SQLITE_OK ) return rc;
+
+ /* Read the current value at byte 24. */
+ pPgHdr = DATA_TO_PGHDR(pPage);
+ change_counter = retrieve32bits(pPgHdr, 24);
+
+ /* Increment the value just read and write it back to byte 24. */
+ change_counter++;
+ put32bits(((char*)PGHDR_TO_DATA(pPgHdr))+24, change_counter);
+
+ /* Release the page reference. */
+ sqlite3pager_unref(pPage);
+ return SQLITE_OK;
+}
+
+/*
+** Sync the database file for the pager pPager. zMaster points to the name
+** of a master journal file that should be written into the individual
+** journal file. zMaster may be NULL, which is interpreted as no master
+** journal (a single database transaction).
+**
+** This routine ensures that the journal is synced, all dirty pages written
+** to the database file and the database file synced. The only thing that
+** remains to commit the transaction is to delete the journal file (or
+** master journal file if specified).
+**
+** Note that if zMaster==NULL, this does not overwrite a previous value
+** passed to an sqlite3pager_sync() call.
+**
+** If parameter nTrunc is non-zero, then the pager file is truncated to
+** nTrunc pages (this is used by auto-vacuum databases).
+*/
+int sqlite3pager_sync(Pager *pPager, const char *zMaster, Pgno nTrunc){
+ int rc = SQLITE_OK;
+
+ TRACE4("DATABASE SYNC: File=%s zMaster=%s nTrunc=%d\n",
+ pPager->zFilename, zMaster, nTrunc);
+
+ /* If this is an in-memory db, or no pages have been written to, or this
+ ** function has already been called, it is a no-op.
+ */
+ if( pPager->state!=PAGER_SYNCED && !MEMDB && pPager->dirtyCache ){
+ PgHdr *pPg;
+ assert( pPager->journalOpen );
+
+ /* If a master journal file name has already been written to the
+ ** journal file, then no sync is required. This happens when it is
+ ** written, then the process fails to upgrade from a RESERVED to an
+ ** EXCLUSIVE lock. The next time the process tries to commit the
+ ** transaction the m-j name will have already been written.
+ */
+ if( !pPager->setMaster ){
+ rc = pager_incr_changecounter(pPager);
+ if( rc!=SQLITE_OK ) goto sync_exit;
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( nTrunc!=0 ){
+ /* If this transaction has made the database smaller, then all pages
+ ** being discarded by the truncation must be written to the journal
+ ** file.
+ */
+ Pgno i;
+ void *pPage;
+ int iSkip = PAGER_MJ_PGNO(pPager);
+ for( i=nTrunc+1; i<=pPager->origDbSize; i++ ){
+ if( !(pPager->aInJournal[i/8] & (1<<(i&7))) && i!=iSkip ){
+ rc = sqlite3pager_get(pPager, i, &pPage);
+ if( rc!=SQLITE_OK ) goto sync_exit;
+ rc = sqlite3pager_write(pPage);
+ sqlite3pager_unref(pPage);
+ if( rc!=SQLITE_OK ) goto sync_exit;
+ }
+ }
+ }
+#endif
+ rc = writeMasterJournal(pPager, zMaster);
+ if( rc!=SQLITE_OK ) goto sync_exit;
+ rc = syncJournal(pPager);
+ if( rc!=SQLITE_OK ) goto sync_exit;
+ }
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( nTrunc!=0 ){
+ rc = sqlite3pager_truncate(pPager, nTrunc);
+ if( rc!=SQLITE_OK ) goto sync_exit;
+ }
+#endif
+
+ /* Write all dirty pages to the database file */
+ pPg = pager_get_all_dirty_pages(pPager);
+ rc = pager_write_pagelist(pPg);
+ if( rc!=SQLITE_OK ) goto sync_exit;
+
+ /* Sync the database file. */
+ if( !pPager->noSync ){
+ rc = sqlite3OsSync(pPager->fd, 0);
+ }
+
+ pPager->state = PAGER_SYNCED;
+ }else if( MEMDB && nTrunc!=0 ){
+ rc = sqlite3pager_truncate(pPager, nTrunc);
+ }
+
+sync_exit:
+ return rc;
+}
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+/*
+** Move the page identified by pData to location pgno in the file.
+**
+** There must be no references to the current page pgno. If current page
+** pgno is not already in the rollback journal, it is not written there
+** by this routine. The same applies to the page pData refers to on entry to
+** this routine.
+**
+** References to the page referred to by pData remain valid. Updating any
+** meta-data associated with page pData (i.e. data stored in the nExtra bytes
+** allocated along with the page) is the responsibility of the caller.
+**
+** A transaction must be active when this routine is called. It used to be
+** required that a statement transaction was not active, but this restriction
+** has been removed (CREATE INDEX needs to move a page when a statement
+** transaction is active).
+*/
+int sqlite3pager_movepage(Pager *pPager, void *pData, Pgno pgno){
+ PgHdr *pPg = DATA_TO_PGHDR(pData);
+ PgHdr *pPgOld;
+ int h;
+ Pgno needSyncPgno = 0;
+
+ assert( pPg->nRef>0 );
+
+ TRACE5("MOVE %d page %d (needSync=%d) moves to %d\n",
+ PAGERID(pPager), pPg->pgno, pPg->needSync, pgno);
+
+ if( pPg->needSync ){
+ needSyncPgno = pPg->pgno;
+ assert( pPg->inJournal );
+ assert( pPg->dirty );
+ assert( pPager->needSync );
+ }
+
+  /* Unlink pPg from its hash-chain */
+ unlinkHashChain(pPager, pPg);
+
+ /* If the cache contains a page with page-number pgno, remove it
+  ** from its hash chain. Also, if the PgHdr.needSync was set for
+ ** page pgno before the 'move' operation, it needs to be retained
+ ** for the page moved there.
+ */
+ pPgOld = pager_lookup(pPager, pgno);
+ if( pPgOld ){
+ assert( pPgOld->nRef==0 );
+ unlinkHashChain(pPager, pPgOld);
+ makeClean(pPgOld);
+ if( pPgOld->needSync ){
+ assert( pPgOld->inJournal );
+ pPg->inJournal = 1;
+ pPg->needSync = 1;
+ assert( pPager->needSync );
+ }
+ }
+
+ /* Change the page number for pPg and insert it into the new hash-chain. */
+ assert( pgno!=0 );
+ pPg->pgno = pgno;
+ h = pgno & (pPager->nHash-1);
+ if( pPager->aHash[h] ){
+ assert( pPager->aHash[h]->pPrevHash==0 );
+ pPager->aHash[h]->pPrevHash = pPg;
+ }
+ pPg->pNextHash = pPager->aHash[h];
+ pPager->aHash[h] = pPg;
+ pPg->pPrevHash = 0;
+
+ makeDirty(pPg);
+ pPager->dirtyCache = 1;
+
+ if( needSyncPgno ){
+ /* If needSyncPgno is non-zero, then the journal file needs to be
+ ** sync()ed before any data is written to database file page needSyncPgno.
+ ** Currently, no such page exists in the page-cache and the
+ ** Pager.aInJournal bit has been set. This needs to be remedied by loading
+ ** the page into the pager-cache and setting the PgHdr.needSync flag.
+ **
+ ** The sqlite3pager_get() call may cause the journal to sync. So make
+ ** sure the Pager.needSync flag is set too.
+ */
+ int rc;
+ void *pNeedSync;
+ assert( pPager->needSync );
+ rc = sqlite3pager_get(pPager, needSyncPgno, &pNeedSync);
+ if( rc!=SQLITE_OK ) return rc;
+ pPager->needSync = 1;
+ DATA_TO_PGHDR(pNeedSync)->needSync = 1;
+ DATA_TO_PGHDR(pNeedSync)->inJournal = 1;
+ makeDirty(DATA_TO_PGHDR(pNeedSync));
+ sqlite3pager_unref(pNeedSync);
+ }
+
+ return SQLITE_OK;
+}
+#endif
+
+#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST)
+/*
+** Return the current state of the file lock for the given pager.
+** The return value is one of NO_LOCK, SHARED_LOCK, RESERVED_LOCK,
+** PENDING_LOCK, or EXCLUSIVE_LOCK.
+*/
+int sqlite3pager_lockstate(Pager *pPager){
+ return sqlite3OsLockState(pPager->fd);
+}
+#endif
+
+#ifdef SQLITE_DEBUG
+/*
+** Print a listing of all referenced pages and their ref count.
+*/
+void sqlite3pager_refdump(Pager *pPager){
+ PgHdr *pPg;
+ for(pPg=pPager->pAll; pPg; pPg=pPg->pNextAll){
+ if( pPg->nRef<=0 ) continue;
+ sqlite3DebugPrintf("PAGE %3d addr=%p nRef=%d\n",
+ pPg->pgno, PGHDR_TO_DATA(pPg), pPg->nRef);
+ }
+}
+#endif
+
+#endif /* SQLITE_OMIT_DISKIO */
Added: freeswitch/trunk/libs/sqlite/src/pager.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/pager.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,123 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This header file defines the interface to the sqlite page cache
+** subsystem. The page cache subsystem reads and writes a file a page
+** at a time and provides a journal for rollback.
+**
+** @(#) $Id: pager.h,v 1.51 2006/08/08 13:51:43 drh Exp $
+*/
+
+#ifndef _PAGER_H_
+#define _PAGER_H_
+
+/*
+** The default size of a database page.
+*/
+#ifndef SQLITE_DEFAULT_PAGE_SIZE
+# define SQLITE_DEFAULT_PAGE_SIZE 1024
+#endif
+
+/* Maximum page size. The upper bound on this value is 32768. This is a limit
+** imposed by necessity of storing the value in a 2-byte unsigned integer
+** and the fact that the page size must be a power of 2.
+**
+** This value is used to initialize certain arrays on the stack at
+** various places in the code. On embedded machines where stack space
+** is limited and the flexibility of having large pages is not needed,
+** it makes good sense to reduce the maximum page size to something more
+** reasonable, like 1024.
+*/
+#ifndef SQLITE_MAX_PAGE_SIZE
+# define SQLITE_MAX_PAGE_SIZE 32768
+#endif
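+
+/* Illustrative build-time override (an assumption about the build command,
+** not part of the imported source): because of the #ifndef guard above, the
+** limit can be lowered for an embedded target with, for example:
+**
+**   cc -DSQLITE_MAX_PAGE_SIZE=1024 -c pager.c
+*/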
+
+/*
+** Maximum number of pages in one database.
+*/
+#define SQLITE_MAX_PAGE 1073741823
+
+/*
+** The type used to represent a page number. The first page in a file
+** is called page 1. 0 is used to represent "not a page".
+*/
+typedef unsigned int Pgno;
+
+/*
+** Each open file is managed by a separate instance of the "Pager" structure.
+*/
+typedef struct Pager Pager;
+
+/*
+** Allowed values for the flags parameter to sqlite3pager_open().
+**
+** NOTE: These values must match the corresponding BTREE_ values in btree.h.
+*/
+#define PAGER_OMIT_JOURNAL 0x0001 /* Do not use a rollback journal */
+#define PAGER_NO_READLOCK 0x0002 /* Omit readlocks on readonly files */
+
+
+/*
+** See source code comments for a detailed description of the following
+** routines:
+*/
+int sqlite3pager_open(Pager **ppPager, const char *zFilename,
+ int nExtra, int flags);
+void sqlite3pager_set_busyhandler(Pager*, BusyHandler *pBusyHandler);
+void sqlite3pager_set_destructor(Pager*, void(*)(void*,int));
+void sqlite3pager_set_reiniter(Pager*, void(*)(void*,int));
+int sqlite3pager_set_pagesize(Pager*, int);
+void sqlite3pager_read_fileheader(Pager*, int, unsigned char*);
+void sqlite3pager_set_cachesize(Pager*, int);
+int sqlite3pager_close(Pager *pPager);
+int sqlite3pager_get(Pager *pPager, Pgno pgno, void **ppPage);
+void *sqlite3pager_lookup(Pager *pPager, Pgno pgno);
+int sqlite3pager_ref(void*);
+int sqlite3pager_unref(void*);
+Pgno sqlite3pager_pagenumber(void*);
+int sqlite3pager_write(void*);
+int sqlite3pager_iswriteable(void*);
+int sqlite3pager_overwrite(Pager *pPager, Pgno pgno, void*);
+int sqlite3pager_pagecount(Pager*);
+int sqlite3pager_truncate(Pager*,Pgno);
+int sqlite3pager_begin(void*, int exFlag);
+int sqlite3pager_commit(Pager*);
+int sqlite3pager_sync(Pager*,const char *zMaster, Pgno);
+int sqlite3pager_rollback(Pager*);
+int sqlite3pager_isreadonly(Pager*);
+int sqlite3pager_stmt_begin(Pager*);
+int sqlite3pager_stmt_commit(Pager*);
+int sqlite3pager_stmt_rollback(Pager*);
+void sqlite3pager_dont_rollback(void*);
+void sqlite3pager_dont_write(Pager*, Pgno);
+int sqlite3pager_refcount(Pager*);
+int *sqlite3pager_stats(Pager*);
+void sqlite3pager_set_safety_level(Pager*,int,int);
+const char *sqlite3pager_filename(Pager*);
+const char *sqlite3pager_dirname(Pager*);
+const char *sqlite3pager_journalname(Pager*);
+int sqlite3pager_nosync(Pager*);
+int sqlite3pager_rename(Pager*, const char *zNewName);
+void sqlite3pager_set_codec(Pager*,void*(*)(void*,void*,Pgno,int),void*);
+int sqlite3pager_movepage(Pager*,void*,Pgno);
+int sqlite3pager_reset(Pager*);
+int sqlite3pager_release_memory(int);
+
+#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST)
+int sqlite3pager_lockstate(Pager*);
+#endif
+
+#ifdef SQLITE_TEST
+void sqlite3pager_refdump(Pager*);
+int pager3_refinfo_enable;
+#endif
+
+#endif /* _PAGER_H_ */
Added: freeswitch/trunk/libs/sqlite/src/parse.y
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/parse.y Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1089 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains SQLite's grammar for SQL. Process this file
+** using the lemon parser generator to generate C code that runs
+** the parser. Lemon will also generate a header file containing
+** numeric codes for all of the tokens.
+**
+** @(#) $Id: parse.y,v 1.210 2006/09/21 11:02:17 drh Exp $
+*/
+
+// All token codes are small integers with #defines that begin with "TK_"
+%token_prefix TK_
+
+// The type of the data attached to each token is Token. This is also the
+// default type for non-terminals.
+//
+%token_type {Token}
+%default_type {Token}
+
+// The generated parser function takes a 4th argument as follows:
+%extra_argument {Parse *pParse}
+
+// This code runs whenever there is a syntax error
+//
+%syntax_error {
+ if( !pParse->parseError ){
+ if( TOKEN.z[0] ){
+ sqlite3ErrorMsg(pParse, "near \"%T\": syntax error", &TOKEN);
+ }else{
+ sqlite3ErrorMsg(pParse, "incomplete SQL statement");
+ }
+ pParse->parseError = 1;
+ }
+}
+%stack_overflow {
+ sqlite3ErrorMsg(pParse, "parser stack overflow");
+ pParse->parseError = 1;
+}
+
+// The name of the generated procedure that implements the parser
+// is as follows:
+%name sqlite3Parser
+
+// The following text is included near the beginning of the C source
+// code file that implements the parser.
+//
+%include {
+#include "sqliteInt.h"
+#include "parse.h"
+
+/*
+** An instance of this structure holds information about the
+** LIMIT clause of a SELECT statement.
+*/
+struct LimitVal {
+ Expr *pLimit; /* The LIMIT expression. NULL if there is no limit */
+ Expr *pOffset; /* The OFFSET expression. NULL if there is none */
+};
+
+/*
+** An instance of this structure is used to store the LIKE,
+** GLOB, NOT LIKE, and NOT GLOB operators.
+*/
+struct LikeOp {
+ Token eOperator; /* "like" or "glob" or "regexp" */
+ int not; /* True if the NOT keyword is present */
+};
+
+/*
+** An instance of the following structure describes the event of a
+** TRIGGER. "a" is the event type, one of TK_UPDATE, TK_INSERT,
+** TK_DELETE, or TK_INSTEAD. If the event is of the form
+**
+** UPDATE ON (a,b,c)
+**
+** Then the "b" IdList records the list "a,b,c".
+*/
+struct TrigEvent { int a; IdList * b; };
+
+/*
+** An instance of this structure holds the ATTACH key and the key type.
+*/
+struct AttachKey { int type; Token key; };
+
+} // end %include
+
+// Input is a single SQL command
+input ::= cmdlist.
+cmdlist ::= cmdlist ecmd.
+cmdlist ::= ecmd.
+cmdx ::= cmd. { sqlite3FinishCoding(pParse); }
+ecmd ::= SEMI.
+ecmd ::= explain cmdx SEMI.
+explain ::= . { sqlite3BeginParse(pParse, 0); }
+%ifndef SQLITE_OMIT_EXPLAIN
+explain ::= EXPLAIN. { sqlite3BeginParse(pParse, 1); }
+explain ::= EXPLAIN QUERY PLAN. { sqlite3BeginParse(pParse, 2); }
+%endif SQLITE_OMIT_EXPLAIN
+
+///////////////////// Begin and end transactions. ////////////////////////////
+//
+
+cmd ::= BEGIN transtype(Y) trans_opt. {sqlite3BeginTransaction(pParse, Y);}
+trans_opt ::= .
+trans_opt ::= TRANSACTION.
+trans_opt ::= TRANSACTION nm.
+%type transtype {int}
+transtype(A) ::= . {A = TK_DEFERRED;}
+transtype(A) ::= DEFERRED(X). {A = @X;}
+transtype(A) ::= IMMEDIATE(X). {A = @X;}
+transtype(A) ::= EXCLUSIVE(X). {A = @X;}
+cmd ::= COMMIT trans_opt. {sqlite3CommitTransaction(pParse);}
+cmd ::= END trans_opt. {sqlite3CommitTransaction(pParse);}
+cmd ::= ROLLBACK trans_opt. {sqlite3RollbackTransaction(pParse);}
+
+///////////////////// The CREATE TABLE statement ////////////////////////////
+//
+cmd ::= create_table create_table_args.
+create_table ::= CREATE temp(T) TABLE ifnotexists(E) nm(Y) dbnm(Z). {
+ sqlite3StartTable(pParse,&Y,&Z,T,0,0,E);
+}
+%type ifnotexists {int}
+ifnotexists(A) ::= . {A = 0;}
+ifnotexists(A) ::= IF NOT EXISTS. {A = 1;}
+%type temp {int}
+%ifndef SQLITE_OMIT_TEMPDB
+temp(A) ::= TEMP. {A = 1;}
+%endif SQLITE_OMIT_TEMPDB
+temp(A) ::= . {A = 0;}
+create_table_args ::= LP columnlist conslist_opt(X) RP(Y). {
+ sqlite3EndTable(pParse,&X,&Y,0);
+}
+create_table_args ::= AS select(S). {
+ sqlite3EndTable(pParse,0,0,S);
+ sqlite3SelectDelete(S);
+}
+columnlist ::= columnlist COMMA column.
+columnlist ::= column.
+
+// A "column" is a complete description of a single column in a
+// CREATE TABLE statement. This includes the column name, its
+// datatype, and other keywords such as PRIMARY KEY, UNIQUE, REFERENCES,
+// NOT NULL and so forth.
+//
+column(A) ::= columnid(X) type carglist. {
+ A.z = X.z;
+ A.n = (pParse->sLastToken.z-X.z) + pParse->sLastToken.n;
+}
+columnid(A) ::= nm(X). {
+ sqlite3AddColumn(pParse,&X);
+ A = X;
+}
+
+
+// An IDENTIFIER can be a generic identifier, or one of several
+// keywords. Any non-standard keyword can also be an identifier.
+//
+%type id {Token}
+id(A) ::= ID(X). {A = X;}
+
+// The following directive causes tokens ABORT, AFTER, ASC, etc. to
+// fallback to ID if they will not parse as their original value.
+// This obviates the need for the "id" nonterminal.
+//
+%fallback ID
+ ABORT AFTER ANALYZE ASC ATTACH BEFORE BEGIN CASCADE CAST CONFLICT
+ DATABASE DEFERRED DESC DETACH EACH END EXCLUSIVE EXPLAIN FAIL FOR
+ IGNORE IMMEDIATE INITIALLY INSTEAD LIKE_KW MATCH PLAN QUERY KEY
+ OF OFFSET PRAGMA RAISE REPLACE RESTRICT ROW STATEMENT
+ TEMP TRIGGER VACUUM VIEW VIRTUAL
+%ifdef SQLITE_OMIT_COMPOUND_SELECT
+ EXCEPT INTERSECT UNION
+%endif SQLITE_OMIT_COMPOUND_SELECT
+ REINDEX RENAME CTIME_KW IF
+ .
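+// (Illustrative note, not part of the imported grammar.)  Because ABORT is in
+// the fallback list above, a statement such as "CREATE TABLE abort(x)" still
+// parses: the keyword is treated as an ordinary identifier wherever the
+// keyword itself would not be accepted.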
+%wildcard ANY.
+
+// Define operator precedence early so that this is the first occurrence
+// of the operator tokens in the grammar.  Keeping the operators together
+// causes them to be assigned integer values that are close together,
+// which keeps parser tables smaller.
+//
+// The token values assigned to these symbols are determined by the order
+// in which lemon first sees them. It must be the case that ISNULL/NOTNULL,
+// NE/EQ, GT/LE, and GE/LT are separated by only a single value. See
+// the sqlite3ExprIfFalse() routine for additional information on this
+// constraint.
+//
+%left OR.
+%left AND.
+%right NOT.
+%left IS MATCH LIKE_KW BETWEEN IN ISNULL NOTNULL NE EQ.
+%left GT LE LT GE.
+%right ESCAPE.
+%left BITAND BITOR LSHIFT RSHIFT.
+%left PLUS MINUS.
+%left STAR SLASH REM.
+%left CONCAT.
+%right UMINUS UPLUS BITNOT.
+
+// And "ids" is an identifier-or-string.
+//
+%type ids {Token}
+ids(A) ::= ID|STRING(X). {A = X;}
+
+// The name of a column or table can be any of the following:
+//
+%type nm {Token}
+nm(A) ::= ID(X). {A = X;}
+nm(A) ::= STRING(X). {A = X;}
+nm(A) ::= JOIN_KW(X). {A = X;}
+
+// A typetoken is really one or more tokens that form a type name such
+// as can be found after the column name in a CREATE TABLE statement.
+// Multiple tokens are concatenated to form the value of the typetoken.
+//
+%type typetoken {Token}
+type ::= .
+type ::= typetoken(X). {sqlite3AddColumnType(pParse,&X);}
+typetoken(A) ::= typename(X). {A = X;}
+typetoken(A) ::= typename(X) LP signed RP(Y). {
+ A.z = X.z;
+ A.n = &Y.z[Y.n] - X.z;
+}
+typetoken(A) ::= typename(X) LP signed COMMA signed RP(Y). {
+ A.z = X.z;
+ A.n = &Y.z[Y.n] - X.z;
+}
+%type typename {Token}
+typename(A) ::= ids(X). {A = X;}
+typename(A) ::= typename(X) ids(Y). {A.z=X.z; A.n=Y.n+(Y.z-X.z);}
+%type signed {int}
+signed(A) ::= plus_num(X). { A = atoi((char*)X.z); }
+signed(A) ::= minus_num(X). { A = -atoi((char*)X.z); }
+
+// "carglist" is a list of additional constraints that come after the
+// column name and column type in a CREATE TABLE statement.
+//
+carglist ::= carglist carg.
+carglist ::= .
+carg ::= CONSTRAINT nm ccons.
+carg ::= ccons.
+carg ::= DEFAULT term(X). {sqlite3AddDefaultValue(pParse,X);}
+carg ::= DEFAULT LP expr(X) RP. {sqlite3AddDefaultValue(pParse,X);}
+carg ::= DEFAULT PLUS term(X). {sqlite3AddDefaultValue(pParse,X);}
+carg ::= DEFAULT MINUS term(X). {
+ Expr *p = sqlite3Expr(TK_UMINUS, X, 0, 0);
+ sqlite3AddDefaultValue(pParse,p);
+}
+carg ::= DEFAULT id(X). {
+ Expr *p = sqlite3Expr(TK_STRING, 0, 0, &X);
+ sqlite3AddDefaultValue(pParse,p);
+}
+
+// In addition to the type name, we also care about the primary key and
+// UNIQUE constraints.
+//
+ccons ::= NULL onconf.
+ccons ::= NOT NULL onconf(R). {sqlite3AddNotNull(pParse, R);}
+ccons ::= PRIMARY KEY sortorder(Z) onconf(R) autoinc(I).
+ {sqlite3AddPrimaryKey(pParse,0,R,I,Z);}
+ccons ::= UNIQUE onconf(R). {sqlite3CreateIndex(pParse,0,0,0,0,R,0,0,0,0);}
+ccons ::= CHECK LP expr(X) RP. {sqlite3AddCheckConstraint(pParse,X);}
+ccons ::= REFERENCES nm(T) idxlist_opt(TA) refargs(R).
+ {sqlite3CreateForeignKey(pParse,0,&T,TA,R);}
+ccons ::= defer_subclause(D). {sqlite3DeferForeignKey(pParse,D);}
+ccons ::= COLLATE id(C). {sqlite3AddCollateType(pParse, (char*)C.z, C.n);}
+
+// The optional AUTOINCREMENT keyword
+%type autoinc {int}
+autoinc(X) ::= . {X = 0;}
+autoinc(X) ::= AUTOINCR. {X = 1;}
+
+// The next group of rules parses the arguments to a REFERENCES clause
+// that determine whether the referential integrity checking is deferred
+// or immediate, and what action to take if a ref-integ check fails.
+//
+%type refargs {int}
+refargs(A) ::= . { A = OE_Restrict * 0x010101; }
+refargs(A) ::= refargs(X) refarg(Y). { A = (X & Y.mask) | Y.value; }
+%type refarg {struct {int value; int mask;}}
+refarg(A) ::= MATCH nm. { A.value = 0; A.mask = 0x000000; }
+refarg(A) ::= ON DELETE refact(X). { A.value = X; A.mask = 0x0000ff; }
+refarg(A) ::= ON UPDATE refact(X). { A.value = X<<8; A.mask = 0x00ff00; }
+refarg(A) ::= ON INSERT refact(X). { A.value = X<<16; A.mask = 0xff0000; }
+%type refact {int}
+refact(A) ::= SET NULL. { A = OE_SetNull; }
+refact(A) ::= SET DEFAULT. { A = OE_SetDflt; }
+refact(A) ::= CASCADE. { A = OE_Cascade; }
+refact(A) ::= RESTRICT. { A = OE_Restrict; }
+%type defer_subclause {int}
+defer_subclause(A) ::= NOT DEFERRABLE init_deferred_pred_opt(X). {A = X;}
+defer_subclause(A) ::= DEFERRABLE init_deferred_pred_opt(X). {A = X;}
+%type init_deferred_pred_opt {int}
+init_deferred_pred_opt(A) ::= . {A = 0;}
+init_deferred_pred_opt(A) ::= INITIALLY DEFERRED. {A = 1;}
+init_deferred_pred_opt(A) ::= INITIALLY IMMEDIATE. {A = 0;}
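
A minimal sketch, not part of the imported sources, of the shape of input these
rules accept: the full ON DELETE / ON UPDATE action vocabulary followed by a
DEFERRABLE subclause in a column definition. The table names are invented for
the example, and note that this SQLite version parses but does not enforce
foreign keys:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void){
      sqlite3 *db;
      char *zErr = 0;
      int rc;
      if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
      /* refargs/refact supply the conflict actions; defer_subclause follows
      ** as a further column constraint in the same carglist. */
      rc = sqlite3_exec(db,
        "CREATE TABLE parent(id INTEGER PRIMARY KEY);"
        "CREATE TABLE child("
        "  pid INTEGER REFERENCES parent(id)"
        "      ON DELETE CASCADE ON UPDATE SET NULL"
        "      DEFERRABLE INITIALLY DEFERRED"
        ");",
        0, 0, &zErr);
      printf("rc=%d %s\n", rc, zErr ? zErr : "(ok)");
      sqlite3_free(zErr);
      sqlite3_close(db);
      return 0;
    }
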
+
+// For the time being, the only constraints we care about are PRIMARY
+// KEY and UNIQUE. Both create indices.
+//
+conslist_opt(A) ::= . {A.n = 0; A.z = 0;}
+conslist_opt(A) ::= COMMA(X) conslist. {A = X;}
+conslist ::= conslist COMMA tcons.
+conslist ::= conslist tcons.
+conslist ::= tcons.
+tcons ::= CONSTRAINT nm.
+tcons ::= PRIMARY KEY LP idxlist(X) autoinc(I) RP onconf(R).
+ {sqlite3AddPrimaryKey(pParse,X,R,I,0);}
+tcons ::= UNIQUE LP idxlist(X) RP onconf(R).
+ {sqlite3CreateIndex(pParse,0,0,0,X,R,0,0,0,0);}
+tcons ::= CHECK LP expr(E) RP onconf. {sqlite3AddCheckConstraint(pParse,E);}
+tcons ::= FOREIGN KEY LP idxlist(FA) RP
+ REFERENCES nm(T) idxlist_opt(TA) refargs(R) defer_subclause_opt(D). {
+ sqlite3CreateForeignKey(pParse, FA, &T, TA, R);
+ sqlite3DeferForeignKey(pParse, D);
+}
+%type defer_subclause_opt {int}
+defer_subclause_opt(A) ::= . {A = 0;}
+defer_subclause_opt(A) ::= defer_subclause(X). {A = X;}
+
+// The following is a non-standard extension that allows us to declare the
+// default behavior when there is a constraint conflict.
+//
+%type onconf {int}
+%type orconf {int}
+%type resolvetype {int}
+onconf(A) ::= . {A = OE_Default;}
+onconf(A) ::= ON CONFLICT resolvetype(X). {A = X;}
+orconf(A) ::= . {A = OE_Default;}
+orconf(A) ::= OR resolvetype(X). {A = X;}
+resolvetype(A) ::= raisetype(X). {A = X;}
+resolvetype(A) ::= IGNORE. {A = OE_Ignore;}
+resolvetype(A) ::= REPLACE. {A = OE_Replace;}
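
A minimal sketch, not part of the imported sources, of how these rules surface
in SQL: onconf attaches a conflict-resolution strategy to an individual
constraint, while orconf supplies one for a whole INSERT or UPDATE. With the
invented table below, the value finally stored under k=1 is 'third':

    #include <stdio.h>
    #include <sqlite3.h>

    static int print_row(void *unused, int argc, char **argv, char **azCol){
      (void)unused; (void)azCol;
      if( argc>0 ) printf("v = %s\n", argv[0] ? argv[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db, "CREATE TABLE t(k INTEGER PRIMARY KEY, v TEXT)", 0, 0, 0);
      sqlite3_exec(db, "INSERT INTO t VALUES(1, 'first')", 0, 0, 0);
      /* orconf: OR IGNORE silently drops the conflicting new row */
      sqlite3_exec(db, "INSERT OR IGNORE INTO t VALUES(1, 'second')", 0, 0, 0);
      /* orconf: OR REPLACE removes the old row and keeps the new one */
      sqlite3_exec(db, "INSERT OR REPLACE INTO t VALUES(1, 'third')", 0, 0, 0);
      sqlite3_exec(db, "SELECT v FROM t WHERE k=1", print_row, 0, 0);
      /* onconf: a per-constraint default, here ON CONFLICT ROLLBACK */
      sqlite3_exec(db, "CREATE TABLE u(x UNIQUE ON CONFLICT ROLLBACK)", 0, 0, 0);
      sqlite3_close(db);
      return 0;
    }
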
+
+////////////////////////// The DROP TABLE /////////////////////////////////////
+//
+cmd ::= DROP TABLE ifexists(E) fullname(X). {
+ sqlite3DropTable(pParse, X, 0, E);
+}
+%type ifexists {int}
+ifexists(A) ::= IF EXISTS. {A = 1;}
+ifexists(A) ::= . {A = 0;}
+
+///////////////////// The CREATE VIEW statement /////////////////////////////
+//
+%ifndef SQLITE_OMIT_VIEW
+cmd ::= CREATE(X) temp(T) VIEW ifnotexists(E) nm(Y) dbnm(Z) AS select(S). {
+ sqlite3CreateView(pParse, &X, &Y, &Z, S, T, E);
+}
+cmd ::= DROP VIEW ifexists(E) fullname(X). {
+ sqlite3DropTable(pParse, X, 1, E);
+}
+%endif SQLITE_OMIT_VIEW
+
+//////////////////////// The SELECT statement /////////////////////////////////
+//
+cmd ::= select(X). {
+ sqlite3Select(pParse, X, SRT_Callback, 0, 0, 0, 0, 0);
+ sqlite3SelectDelete(X);
+}
+
+%type select {Select*}
+%destructor select {sqlite3SelectDelete($$);}
+%type oneselect {Select*}
+%destructor oneselect {sqlite3SelectDelete($$);}
+
+select(A) ::= oneselect(X). {A = X;}
+%ifndef SQLITE_OMIT_COMPOUND_SELECT
+select(A) ::= select(X) multiselect_op(Y) oneselect(Z). {
+ if( Z ){
+ Z->op = Y;
+ Z->pPrior = X;
+ }
+ A = Z;
+}
+%type multiselect_op {int}
+multiselect_op(A) ::= UNION(OP). {A = @OP;}
+multiselect_op(A) ::= UNION ALL. {A = TK_ALL;}
+multiselect_op(A) ::= EXCEPT|INTERSECT(OP). {A = @OP;}
+%endif SQLITE_OMIT_COMPOUND_SELECT
+oneselect(A) ::= SELECT distinct(D) selcollist(W) from(X) where_opt(Y)
+ groupby_opt(P) having_opt(Q) orderby_opt(Z) limit_opt(L). {
+ A = sqlite3SelectNew(W,X,Y,P,Q,Z,D,L.pLimit,L.pOffset);
+}
+
+// The "distinct" nonterminal is true (1) if the DISTINCT keyword is
+// present and false (0) if it is not.
+//
+%type distinct {int}
+distinct(A) ::= DISTINCT. {A = 1;}
+distinct(A) ::= ALL. {A = 0;}
+distinct(A) ::= . {A = 0;}
+
+// selcollist is a list of expressions that are to become the return
+// values of the SELECT statement. The "*" in statements like
+// "SELECT * FROM ..." is encoded as a special expression with an
+// opcode of TK_ALL.
+//
+%type selcollist {ExprList*}
+%destructor selcollist {sqlite3ExprListDelete($$);}
+%type sclp {ExprList*}
+%destructor sclp {sqlite3ExprListDelete($$);}
+sclp(A) ::= selcollist(X) COMMA. {A = X;}
+sclp(A) ::= . {A = 0;}
+selcollist(A) ::= sclp(P) expr(X) as(Y). {
+ A = sqlite3ExprListAppend(P,X,Y.n?&Y:0);
+}
+selcollist(A) ::= sclp(P) STAR. {
+ A = sqlite3ExprListAppend(P, sqlite3Expr(TK_ALL, 0, 0, 0), 0);
+}
+selcollist(A) ::= sclp(P) nm(X) DOT STAR. {
+ Expr *pRight = sqlite3Expr(TK_ALL, 0, 0, 0);
+ Expr *pLeft = sqlite3Expr(TK_ID, 0, 0, &X);
+ A = sqlite3ExprListAppend(P, sqlite3Expr(TK_DOT, pLeft, pRight, 0), 0);
+}
+
+// An optional "AS <id>" phrase that can follow one of the expressions that
+// define the result set, or one of the tables in the FROM clause.
+//
+%type as {Token}
+as(X) ::= AS nm(Y). {X = Y;}
+as(X) ::= ids(Y). {X = Y;}
+as(X) ::= . {X.n = 0;}
+
+
+%type seltablist {SrcList*}
+%destructor seltablist {sqlite3SrcListDelete($$);}
+%type stl_prefix {SrcList*}
+%destructor stl_prefix {sqlite3SrcListDelete($$);}
+%type from {SrcList*}
+%destructor from {sqlite3SrcListDelete($$);}
+
+// A complete FROM clause.
+//
+from(A) ::= . {A = sqliteMalloc(sizeof(*A));}
+from(A) ::= FROM seltablist(X). {A = X;}
+
+// "seltablist" is a "Select Table List" - the content of the FROM clause
+// in a SELECT statement. "stl_prefix" is a prefix of this list.
+//
+stl_prefix(A) ::= seltablist(X) joinop(Y). {
+ A = X;
+ if( A && A->nSrc>0 ) A->a[A->nSrc-1].jointype = Y;
+}
+stl_prefix(A) ::= . {A = 0;}
+seltablist(A) ::= stl_prefix(X) nm(Y) dbnm(D) as(Z) on_opt(N) using_opt(U). {
+ A = sqlite3SrcListAppend(X,&Y,&D);
+ if( Z.n ) sqlite3SrcListAddAlias(A,&Z);
+ if( N ){
+ if( A && A->nSrc>1 ){ A->a[A->nSrc-2].pOn = N; }
+ else { sqlite3ExprDelete(N); }
+ }
+ if( U ){
+ if( A && A->nSrc>1 ){ A->a[A->nSrc-2].pUsing = U; }
+ else { sqlite3IdListDelete(U); }
+ }
+}
+%ifndef SQLITE_OMIT_SUBQUERY
+ seltablist(A) ::= stl_prefix(X) LP seltablist_paren(S) RP
+ as(Z) on_opt(N) using_opt(U). {
+ A = sqlite3SrcListAppend(X,0,0);
+ if( A && A->nSrc>0 ) A->a[A->nSrc-1].pSelect = S;
+ if( Z.n ) sqlite3SrcListAddAlias(A,&Z);
+ if( N ){
+ if( A && A->nSrc>1 ){ A->a[A->nSrc-2].pOn = N; }
+ else { sqlite3ExprDelete(N); }
+ }
+ if( U ){
+ if( A && A->nSrc>1 ){ A->a[A->nSrc-2].pUsing = U; }
+ else { sqlite3IdListDelete(U); }
+ }
+ }
+
+ // A seltablist_paren nonterminal represents anything in a FROM that
+ // is contained inside parentheses. This can be either a subquery or
+  // a grouping of tables and subqueries.
+ //
+ %type seltablist_paren {Select*}
+ %destructor seltablist_paren {sqlite3SelectDelete($$);}
+ seltablist_paren(A) ::= select(S). {A = S;}
+ seltablist_paren(A) ::= seltablist(F). {
+ A = sqlite3SelectNew(0,F,0,0,0,0,0,0,0);
+ }
+%endif SQLITE_OMIT_SUBQUERY
+
+%type dbnm {Token}
+dbnm(A) ::= . {A.z=0; A.n=0;}
+dbnm(A) ::= DOT nm(X). {A = X;}
+
+%type fullname {SrcList*}
+%destructor fullname {sqlite3SrcListDelete($$);}
+fullname(A) ::= nm(X) dbnm(Y). {A = sqlite3SrcListAppend(0,&X,&Y);}
+
+%type joinop {int}
+%type joinop2 {int}
+joinop(X) ::= COMMA|JOIN. { X = JT_INNER; }
+joinop(X) ::= JOIN_KW(A) JOIN. { X = sqlite3JoinType(pParse,&A,0,0); }
+joinop(X) ::= JOIN_KW(A) nm(B) JOIN. { X = sqlite3JoinType(pParse,&A,&B,0); }
+joinop(X) ::= JOIN_KW(A) nm(B) nm(C) JOIN.
+ { X = sqlite3JoinType(pParse,&A,&B,&C); }
+
+%type on_opt {Expr*}
+%destructor on_opt {sqlite3ExprDelete($$);}
+on_opt(N) ::= ON expr(E). {N = E;}
+on_opt(N) ::= . {N = 0;}
+
+%type using_opt {IdList*}
+%destructor using_opt {sqlite3IdListDelete($$);}
+using_opt(U) ::= USING LP inscollist(L) RP. {U = L;}
+using_opt(U) ::= . {U = 0;}
+
+
+%type orderby_opt {ExprList*}
+%destructor orderby_opt {sqlite3ExprListDelete($$);}
+%type sortlist {ExprList*}
+%destructor sortlist {sqlite3ExprListDelete($$);}
+%type sortitem {Expr*}
+%destructor sortitem {sqlite3ExprDelete($$);}
+
+orderby_opt(A) ::= . {A = 0;}
+orderby_opt(A) ::= ORDER BY sortlist(X). {A = X;}
+sortlist(A) ::= sortlist(X) COMMA sortitem(Y) collate(C) sortorder(Z). {
+ A = sqlite3ExprListAppend(X,Y,C.n>0?&C:0);
+ if( A ) A->a[A->nExpr-1].sortOrder = Z;
+}
+sortlist(A) ::= sortitem(Y) collate(C) sortorder(Z). {
+ A = sqlite3ExprListAppend(0,Y,C.n>0?&C:0);
+ if( A && A->a ) A->a[0].sortOrder = Z;
+}
+sortitem(A) ::= expr(X). {A = X;}
+
+%type sortorder {int}
+%type collate {Token}
+
+sortorder(A) ::= ASC. {A = SQLITE_SO_ASC;}
+sortorder(A) ::= DESC. {A = SQLITE_SO_DESC;}
+sortorder(A) ::= . {A = SQLITE_SO_ASC;}
+collate(C) ::= . {C.z = 0; C.n = 0;}
+collate(C) ::= COLLATE id(X). {C = X;}
+
+%type groupby_opt {ExprList*}
+%destructor groupby_opt {sqlite3ExprListDelete($$);}
+groupby_opt(A) ::= . {A = 0;}
+groupby_opt(A) ::= GROUP BY exprlist(X). {A = X;}
+
+%type having_opt {Expr*}
+%destructor having_opt {sqlite3ExprDelete($$);}
+having_opt(A) ::= . {A = 0;}
+having_opt(A) ::= HAVING expr(X). {A = X;}
+
+%type limit_opt {struct LimitVal}
+%destructor limit_opt {
+ sqlite3ExprDelete($$.pLimit);
+ sqlite3ExprDelete($$.pOffset);
+}
+limit_opt(A) ::= . {A.pLimit = 0; A.pOffset = 0;}
+limit_opt(A) ::= LIMIT expr(X). {A.pLimit = X; A.pOffset = 0;}
+limit_opt(A) ::= LIMIT expr(X) OFFSET expr(Y).
+ {A.pLimit = X; A.pOffset = Y;}
+limit_opt(A) ::= LIMIT expr(X) COMMA expr(Y).
+ {A.pOffset = X; A.pLimit = Y;}
+
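A minimal sketch, not part of the imported sources, of the comma form handled
by the last rule above: the first expression becomes the OFFSET and the second
the LIMIT, so the two queries below (over an invented table) return the same
two rows:

    #include <stdio.h>
    #include <sqlite3.h>

    static int print_row(void *unused, int argc, char **argv, char **azCol){
      (void)unused; (void)azCol;
      if( argc>0 ) printf("%s\n", argv[0] ? argv[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db,
        "CREATE TABLE n(x);"
        "INSERT INTO n VALUES(1); INSERT INTO n VALUES(2);"
        "INSERT INTO n VALUES(3); INSERT INTO n VALUES(4);", 0, 0, 0);
      /* Both queries skip one row and print 2 and 3 */
      sqlite3_exec(db, "SELECT x FROM n ORDER BY x LIMIT 2 OFFSET 1", print_row, 0, 0);
      sqlite3_exec(db, "SELECT x FROM n ORDER BY x LIMIT 1, 2", print_row, 0, 0);
      sqlite3_close(db);
      return 0;
    }
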
+/////////////////////////// The DELETE statement /////////////////////////////
+//
+cmd ::= DELETE FROM fullname(X) where_opt(Y). {sqlite3DeleteFrom(pParse,X,Y);}
+
+%type where_opt {Expr*}
+%destructor where_opt {sqlite3ExprDelete($$);}
+
+where_opt(A) ::= . {A = 0;}
+where_opt(A) ::= WHERE expr(X). {A = X;}
+
+////////////////////////// The UPDATE command ////////////////////////////////
+//
+cmd ::= UPDATE orconf(R) fullname(X) SET setlist(Y) where_opt(Z).
+ {sqlite3Update(pParse,X,Y,Z,R);}
+
+%type setlist {ExprList*}
+%destructor setlist {sqlite3ExprListDelete($$);}
+
+setlist(A) ::= setlist(Z) COMMA nm(X) EQ expr(Y).
+ {A = sqlite3ExprListAppend(Z,Y,&X);}
+setlist(A) ::= nm(X) EQ expr(Y). {A = sqlite3ExprListAppend(0,Y,&X);}
+
+////////////////////////// The INSERT command /////////////////////////////////
+//
+cmd ::= insert_cmd(R) INTO fullname(X) inscollist_opt(F)
+ VALUES LP itemlist(Y) RP.
+ {sqlite3Insert(pParse, X, Y, 0, F, R);}
+cmd ::= insert_cmd(R) INTO fullname(X) inscollist_opt(F) select(S).
+ {sqlite3Insert(pParse, X, 0, S, F, R);}
+cmd ::= insert_cmd(R) INTO fullname(X) inscollist_opt(F) DEFAULT VALUES.
+ {sqlite3Insert(pParse, X, 0, 0, F, R);}
+
+%type insert_cmd {int}
+insert_cmd(A) ::= INSERT orconf(R). {A = R;}
+insert_cmd(A) ::= REPLACE. {A = OE_Replace;}
+
+
+%type itemlist {ExprList*}
+%destructor itemlist {sqlite3ExprListDelete($$);}
+
+itemlist(A) ::= itemlist(X) COMMA expr(Y). {A = sqlite3ExprListAppend(X,Y,0);}
+itemlist(A) ::= expr(X). {A = sqlite3ExprListAppend(0,X,0);}
+
+%type inscollist_opt {IdList*}
+%destructor inscollist_opt {sqlite3IdListDelete($$);}
+%type inscollist {IdList*}
+%destructor inscollist {sqlite3IdListDelete($$);}
+
+inscollist_opt(A) ::= . {A = 0;}
+inscollist_opt(A) ::= LP inscollist(X) RP. {A = X;}
+inscollist(A) ::= inscollist(X) COMMA nm(Y). {A = sqlite3IdListAppend(X,&Y);}
+inscollist(A) ::= nm(Y). {A = sqlite3IdListAppend(0,&Y);}
+
+/////////////////////////// Expression Processing /////////////////////////////
+//
+
+%type expr {Expr*}
+%destructor expr {sqlite3ExprDelete($$);}
+%type term {Expr*}
+%destructor term {sqlite3ExprDelete($$);}
+
+expr(A) ::= term(X). {A = X;}
+expr(A) ::= LP(B) expr(X) RP(E). {A = X; sqlite3ExprSpan(A,&B,&E); }
+term(A) ::= NULL(X). {A = sqlite3Expr(@X, 0, 0, &X);}
+expr(A) ::= ID(X). {A = sqlite3Expr(TK_ID, 0, 0, &X);}
+expr(A) ::= JOIN_KW(X). {A = sqlite3Expr(TK_ID, 0, 0, &X);}
+expr(A) ::= nm(X) DOT nm(Y). {
+ Expr *temp1 = sqlite3Expr(TK_ID, 0, 0, &X);
+ Expr *temp2 = sqlite3Expr(TK_ID, 0, 0, &Y);
+ A = sqlite3Expr(TK_DOT, temp1, temp2, 0);
+}
+expr(A) ::= nm(X) DOT nm(Y) DOT nm(Z). {
+ Expr *temp1 = sqlite3Expr(TK_ID, 0, 0, &X);
+ Expr *temp2 = sqlite3Expr(TK_ID, 0, 0, &Y);
+ Expr *temp3 = sqlite3Expr(TK_ID, 0, 0, &Z);
+ Expr *temp4 = sqlite3Expr(TK_DOT, temp2, temp3, 0);
+ A = sqlite3Expr(TK_DOT, temp1, temp4, 0);
+}
+term(A) ::= INTEGER|FLOAT|BLOB(X). {A = sqlite3Expr(@X, 0, 0, &X);}
+term(A) ::= STRING(X). {A = sqlite3Expr(@X, 0, 0, &X);}
+expr(A) ::= REGISTER(X). {A = sqlite3RegisterExpr(pParse, &X);}
+expr(A) ::= VARIABLE(X). {
+ Token *pToken = &X;
+ Expr *pExpr = A = sqlite3Expr(TK_VARIABLE, 0, 0, pToken);
+ sqlite3ExprAssignVarNumber(pParse, pExpr);
+}
+%ifndef SQLITE_OMIT_CAST
+expr(A) ::= CAST(X) LP expr(E) AS typetoken(T) RP(Y). {
+ A = sqlite3Expr(TK_CAST, E, 0, &T);
+ sqlite3ExprSpan(A,&X,&Y);
+}
+%endif SQLITE_OMIT_CAST
+expr(A) ::= ID(X) LP distinct(D) exprlist(Y) RP(E). {
+ A = sqlite3ExprFunction(Y, &X);
+ sqlite3ExprSpan(A,&X,&E);
+ if( D && A ){
+ A->flags |= EP_Distinct;
+ }
+}
+expr(A) ::= ID(X) LP STAR RP(E). {
+ A = sqlite3ExprFunction(0, &X);
+ sqlite3ExprSpan(A,&X,&E);
+}
+term(A) ::= CTIME_KW(OP). {
+ /* The CURRENT_TIME, CURRENT_DATE, and CURRENT_TIMESTAMP values are
+ ** treated as functions that return constants */
+ A = sqlite3ExprFunction(0,&OP);
+ if( A ){
+ A->op = TK_CONST_FUNC;
+ A->span = OP;
+ }
+}
+expr(A) ::= expr(X) AND(OP) expr(Y). {A = sqlite3Expr(@OP, X, Y, 0);}
+expr(A) ::= expr(X) OR(OP) expr(Y). {A = sqlite3Expr(@OP, X, Y, 0);}
+expr(A) ::= expr(X) LT|GT|GE|LE(OP) expr(Y). {A = sqlite3Expr(@OP, X, Y, 0);}
+expr(A) ::= expr(X) EQ|NE(OP) expr(Y). {A = sqlite3Expr(@OP, X, Y, 0);}
+expr(A) ::= expr(X) BITAND|BITOR|LSHIFT|RSHIFT(OP) expr(Y).
+ {A = sqlite3Expr(@OP, X, Y, 0);}
+expr(A) ::= expr(X) PLUS|MINUS(OP) expr(Y). {A = sqlite3Expr(@OP, X, Y, 0);}
+expr(A) ::= expr(X) STAR|SLASH|REM(OP) expr(Y). {A = sqlite3Expr(@OP, X, Y, 0);}
+expr(A) ::= expr(X) CONCAT(OP) expr(Y). {A = sqlite3Expr(@OP, X, Y, 0);}
+%type likeop {struct LikeOp}
+likeop(A) ::= LIKE_KW(X). {A.eOperator = X; A.not = 0;}
+likeop(A) ::= NOT LIKE_KW(X). {A.eOperator = X; A.not = 1;}
+likeop(A) ::= MATCH(X). {A.eOperator = X; A.not = 0;}
+likeop(A) ::= NOT MATCH(X). {A.eOperator = X; A.not = 1;}
+%type escape {Expr*}
+%destructor escape {sqlite3ExprDelete($$);}
+escape(X) ::= ESCAPE expr(A). [ESCAPE] {X = A;}
+escape(X) ::= . [ESCAPE] {X = 0;}
+expr(A) ::= expr(X) likeop(OP) expr(Y) escape(E). [LIKE_KW] {
+ ExprList *pList;
+ pList = sqlite3ExprListAppend(0, Y, 0);
+ pList = sqlite3ExprListAppend(pList, X, 0);
+ if( E ){
+ pList = sqlite3ExprListAppend(pList, E, 0);
+ }
+ A = sqlite3ExprFunction(pList, &OP.eOperator);
+ if( OP.not ) A = sqlite3Expr(TK_NOT, A, 0, 0);
+ sqlite3ExprSpan(A, &X->span, &Y->span);
+ if( A ) A->flags |= EP_InfixFunc;
+}
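
A minimal sketch, not part of the imported sources, of what the rule above
does: the LIKE operator is rewritten as a call to the like() SQL function with
the pattern passed first, so the operator form and the function form below
give the same answer (both print 1):

    #include <stdio.h>
    #include <sqlite3.h>

    static int print_row(void *unused, int argc, char **argv, char **azCol){
      int i;
      (void)unused;
      for(i=0; i<argc; i++) printf("%s=%s ", azCol[i], argv[i] ? argv[i] : "NULL");
      printf("\n");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      /* 'abcd' LIKE 'ab%' becomes like('ab%','abcd') internally */
      sqlite3_exec(db,
        "SELECT 'abcd' LIKE 'ab%' AS op, like('ab%','abcd') AS fn",
        print_row, 0, 0);
      sqlite3_close(db);
      return 0;
    }
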
+
+expr(A) ::= expr(X) ISNULL|NOTNULL(E). {
+ A = sqlite3Expr(@E, X, 0, 0);
+ sqlite3ExprSpan(A,&X->span,&E);
+}
+expr(A) ::= expr(X) IS NULL(E). {
+ A = sqlite3Expr(TK_ISNULL, X, 0, 0);
+ sqlite3ExprSpan(A,&X->span,&E);
+}
+expr(A) ::= expr(X) NOT NULL(E). {
+ A = sqlite3Expr(TK_NOTNULL, X, 0, 0);
+ sqlite3ExprSpan(A,&X->span,&E);
+}
+expr(A) ::= expr(X) IS NOT NULL(E). {
+ A = sqlite3Expr(TK_NOTNULL, X, 0, 0);
+ sqlite3ExprSpan(A,&X->span,&E);
+}
+expr(A) ::= NOT|BITNOT(B) expr(X). {
+ A = sqlite3Expr(@B, X, 0, 0);
+ sqlite3ExprSpan(A,&B,&X->span);
+}
+expr(A) ::= MINUS(B) expr(X). [UMINUS] {
+ A = sqlite3Expr(TK_UMINUS, X, 0, 0);
+ sqlite3ExprSpan(A,&B,&X->span);
+}
+expr(A) ::= PLUS(B) expr(X). [UPLUS] {
+ A = sqlite3Expr(TK_UPLUS, X, 0, 0);
+ sqlite3ExprSpan(A,&B,&X->span);
+}
+%type between_op {int}
+between_op(A) ::= BETWEEN. {A = 0;}
+between_op(A) ::= NOT BETWEEN. {A = 1;}
+expr(A) ::= expr(W) between_op(N) expr(X) AND expr(Y). [BETWEEN] {
+ ExprList *pList = sqlite3ExprListAppend(0, X, 0);
+ pList = sqlite3ExprListAppend(pList, Y, 0);
+ A = sqlite3Expr(TK_BETWEEN, W, 0, 0);
+ if( A ){
+ A->pList = pList;
+ }else{
+ sqlite3ExprListDelete(pList);
+ }
+ if( N ) A = sqlite3Expr(TK_NOT, A, 0, 0);
+ sqlite3ExprSpan(A,&W->span,&Y->span);
+}
+%ifndef SQLITE_OMIT_SUBQUERY
+ %type in_op {int}
+ in_op(A) ::= IN. {A = 0;}
+ in_op(A) ::= NOT IN. {A = 1;}
+ expr(A) ::= expr(X) in_op(N) LP exprlist(Y) RP(E). [IN] {
+ A = sqlite3Expr(TK_IN, X, 0, 0);
+ if( A ){
+ A->pList = Y;
+ }else{
+ sqlite3ExprListDelete(Y);
+ }
+ if( N ) A = sqlite3Expr(TK_NOT, A, 0, 0);
+ sqlite3ExprSpan(A,&X->span,&E);
+ }
+ expr(A) ::= LP(B) select(X) RP(E). {
+ A = sqlite3Expr(TK_SELECT, 0, 0, 0);
+ if( A ){
+ A->pSelect = X;
+ }else{
+ sqlite3SelectDelete(X);
+ }
+ sqlite3ExprSpan(A,&B,&E);
+ }
+ expr(A) ::= expr(X) in_op(N) LP select(Y) RP(E). [IN] {
+ A = sqlite3Expr(TK_IN, X, 0, 0);
+ if( A ){
+ A->pSelect = Y;
+ }else{
+ sqlite3SelectDelete(Y);
+ }
+ if( N ) A = sqlite3Expr(TK_NOT, A, 0, 0);
+ sqlite3ExprSpan(A,&X->span,&E);
+ }
+ expr(A) ::= expr(X) in_op(N) nm(Y) dbnm(Z). [IN] {
+ SrcList *pSrc = sqlite3SrcListAppend(0,&Y,&Z);
+ A = sqlite3Expr(TK_IN, X, 0, 0);
+ if( A ){
+ A->pSelect = sqlite3SelectNew(0,pSrc,0,0,0,0,0,0,0);
+ }else{
+ sqlite3SrcListDelete(pSrc);
+ }
+ if( N ) A = sqlite3Expr(TK_NOT, A, 0, 0);
+ sqlite3ExprSpan(A,&X->span,Z.z?&Z:&Y);
+ }
+ expr(A) ::= EXISTS(B) LP select(Y) RP(E). {
+ Expr *p = A = sqlite3Expr(TK_EXISTS, 0, 0, 0);
+ if( p ){
+ p->pSelect = Y;
+ sqlite3ExprSpan(p,&B,&E);
+ }else{
+ sqlite3SelectDelete(Y);
+ }
+ }
+%endif SQLITE_OMIT_SUBQUERY
+
+/* CASE expressions */
+expr(A) ::= CASE(C) case_operand(X) case_exprlist(Y) case_else(Z) END(E). {
+ A = sqlite3Expr(TK_CASE, X, Z, 0);
+ if( A ){
+ A->pList = Y;
+ }else{
+ sqlite3ExprListDelete(Y);
+ }
+ sqlite3ExprSpan(A, &C, &E);
+}
+%type case_exprlist {ExprList*}
+%destructor case_exprlist {sqlite3ExprListDelete($$);}
+case_exprlist(A) ::= case_exprlist(X) WHEN expr(Y) THEN expr(Z). {
+ A = sqlite3ExprListAppend(X, Y, 0);
+ A = sqlite3ExprListAppend(A, Z, 0);
+}
+case_exprlist(A) ::= WHEN expr(Y) THEN expr(Z). {
+ A = sqlite3ExprListAppend(0, Y, 0);
+ A = sqlite3ExprListAppend(A, Z, 0);
+}
+%type case_else {Expr*}
+%destructor case_else {sqlite3ExprDelete($$);}
+case_else(A) ::= ELSE expr(X). {A = X;}
+case_else(A) ::= . {A = 0;}
+%type case_operand {Expr*}
+%destructor case_operand {sqlite3ExprDelete($$);}
+case_operand(A) ::= expr(X). {A = X;}
+case_operand(A) ::= . {A = 0;}
+
+%type exprlist {ExprList*}
+%destructor exprlist {sqlite3ExprListDelete($$);}
+%type expritem {Expr*}
+%destructor expritem {sqlite3ExprDelete($$);}
+
+exprlist(A) ::= exprlist(X) COMMA expritem(Y).
+ {A = sqlite3ExprListAppend(X,Y,0);}
+exprlist(A) ::= expritem(X). {A = sqlite3ExprListAppend(0,X,0);}
+expritem(A) ::= expr(X). {A = X;}
+expritem(A) ::= . {A = 0;}
+
+///////////////////////////// The CREATE INDEX command ///////////////////////
+//
+cmd ::= CREATE(S) uniqueflag(U) INDEX ifnotexists(NE) nm(X) dbnm(D)
+ ON nm(Y) LP idxlist(Z) RP(E). {
+ sqlite3CreateIndex(pParse, &X, &D, sqlite3SrcListAppend(0,&Y,0), Z, U,
+ &S, &E, SQLITE_SO_ASC, NE);
+}
+
+%type uniqueflag {int}
+uniqueflag(A) ::= UNIQUE. {A = OE_Abort;}
+uniqueflag(A) ::= . {A = OE_None;}
+
+%type idxlist {ExprList*}
+%destructor idxlist {sqlite3ExprListDelete($$);}
+%type idxlist_opt {ExprList*}
+%destructor idxlist_opt {sqlite3ExprListDelete($$);}
+%type idxitem {Token}
+
+idxlist_opt(A) ::= . {A = 0;}
+idxlist_opt(A) ::= LP idxlist(X) RP. {A = X;}
+idxlist(A) ::= idxlist(X) COMMA idxitem(Y) collate(C) sortorder(Z). {
+ Expr *p = 0;
+ if( C.n>0 ){
+ p = sqlite3Expr(TK_COLUMN, 0, 0, 0);
+ if( p ) p->pColl = sqlite3LocateCollSeq(pParse, (char*)C.z, C.n);
+ }
+ A = sqlite3ExprListAppend(X, p, &Y);
+ if( A ) A->a[A->nExpr-1].sortOrder = Z;
+}
+idxlist(A) ::= idxitem(Y) collate(C) sortorder(Z). {
+ Expr *p = 0;
+ if( C.n>0 ){
+ p = sqlite3Expr(TK_COLUMN, 0, 0, 0);
+ if( p ) p->pColl = sqlite3LocateCollSeq(pParse, (char*)C.z, C.n);
+ }
+ A = sqlite3ExprListAppend(0, p, &Y);
+ if( A ) A->a[A->nExpr-1].sortOrder = Z;
+}
+idxitem(A) ::= nm(X). {A = X;}
+
+
+///////////////////////////// The DROP INDEX command /////////////////////////
+//
+cmd ::= DROP INDEX ifexists(E) fullname(X). {sqlite3DropIndex(pParse, X, E);}
+
+///////////////////////////// The VACUUM command /////////////////////////////
+//
+%ifndef SQLITE_OMIT_VACUUM
+cmd ::= VACUUM. {sqlite3Vacuum(pParse);}
+cmd ::= VACUUM nm. {sqlite3Vacuum(pParse);}
+%endif SQLITE_OMIT_VACUUM
+
+///////////////////////////// The PRAGMA command /////////////////////////////
+//
+%ifndef SQLITE_OMIT_PRAGMA
+cmd ::= PRAGMA nm(X) dbnm(Z) EQ nm(Y). {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
+cmd ::= PRAGMA nm(X) dbnm(Z) EQ ON(Y). {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
+cmd ::= PRAGMA nm(X) dbnm(Z) EQ plus_num(Y). {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
+cmd ::= PRAGMA nm(X) dbnm(Z) EQ minus_num(Y). {
+ sqlite3Pragma(pParse,&X,&Z,&Y,1);
+}
+cmd ::= PRAGMA nm(X) dbnm(Z) LP nm(Y) RP. {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
+cmd ::= PRAGMA nm(X) dbnm(Z). {sqlite3Pragma(pParse,&X,&Z,0,0);}
+%endif SQLITE_OMIT_PRAGMA
+plus_num(A) ::= plus_opt number(X). {A = X;}
+minus_num(A) ::= MINUS number(X). {A = X;}
+number(A) ::= INTEGER|FLOAT(X). {A = X;}
+plus_opt ::= PLUS.
+plus_opt ::= .
+
+//////////////////////////// The CREATE TRIGGER command /////////////////////
+
+%ifndef SQLITE_OMIT_TRIGGER
+
+cmd ::= CREATE trigger_decl(A) BEGIN trigger_cmd_list(S) END(Z). {
+ Token all;
+ all.z = A.z;
+ all.n = (Z.z - A.z) + Z.n;
+ sqlite3FinishTrigger(pParse, S, &all);
+}
+
+trigger_decl(A) ::= temp(T) TRIGGER ifnotexists(NOERR) nm(B) dbnm(Z)
+ trigger_time(C) trigger_event(D)
+ ON fullname(E) foreach_clause(F) when_clause(G). {
+ sqlite3BeginTrigger(pParse, &B, &Z, C, D.a, D.b, E, F, G, T, NOERR);
+ A = (Z.n==0?B:Z);
+}
+
+%type trigger_time {int}
+trigger_time(A) ::= BEFORE. { A = TK_BEFORE; }
+trigger_time(A) ::= AFTER. { A = TK_AFTER; }
+trigger_time(A) ::= INSTEAD OF. { A = TK_INSTEAD;}
+trigger_time(A) ::= . { A = TK_BEFORE; }
+
+%type trigger_event {struct TrigEvent}
+%destructor trigger_event {sqlite3IdListDelete($$.b);}
+trigger_event(A) ::= DELETE|INSERT(OP). {A.a = @OP; A.b = 0;}
+trigger_event(A) ::= UPDATE(OP). {A.a = @OP; A.b = 0;}
+trigger_event(A) ::= UPDATE OF inscollist(X). {A.a = TK_UPDATE; A.b = X;}
+
+%type foreach_clause {int}
+foreach_clause(A) ::= . { A = TK_ROW; }
+foreach_clause(A) ::= FOR EACH ROW. { A = TK_ROW; }
+foreach_clause(A) ::= FOR EACH STATEMENT. { A = TK_STATEMENT; }
+
+%type when_clause {Expr*}
+%destructor when_clause {sqlite3ExprDelete($$);}
+when_clause(A) ::= . { A = 0; }
+when_clause(A) ::= WHEN expr(X). { A = X; }
+
+%type trigger_cmd_list {TriggerStep*}
+%destructor trigger_cmd_list {sqlite3DeleteTriggerStep($$);}
+trigger_cmd_list(A) ::= trigger_cmd_list(Y) trigger_cmd(X) SEMI. {
+ if( Y ){
+ Y->pLast->pNext = X;
+ }else{
+ Y = X;
+ }
+ Y->pLast = X;
+ A = Y;
+}
+trigger_cmd_list(A) ::= . { A = 0; }
+
+%type trigger_cmd {TriggerStep*}
+%destructor trigger_cmd {sqlite3DeleteTriggerStep($$);}
+// UPDATE
+trigger_cmd(A) ::= UPDATE orconf(R) nm(X) SET setlist(Y) where_opt(Z).
+ { A = sqlite3TriggerUpdateStep(&X, Y, Z, R); }
+
+// INSERT
+trigger_cmd(A) ::= insert_cmd(R) INTO nm(X) inscollist_opt(F)
+ VALUES LP itemlist(Y) RP.
+ {A = sqlite3TriggerInsertStep(&X, F, Y, 0, R);}
+
+trigger_cmd(A) ::= insert_cmd(R) INTO nm(X) inscollist_opt(F) select(S).
+ {A = sqlite3TriggerInsertStep(&X, F, 0, S, R);}
+
+// DELETE
+trigger_cmd(A) ::= DELETE FROM nm(X) where_opt(Y).
+ {A = sqlite3TriggerDeleteStep(&X, Y);}
+
+// SELECT
+trigger_cmd(A) ::= select(X). {A = sqlite3TriggerSelectStep(X); }
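
A minimal sketch, not part of the imported sources, of a trigger built from
these pieces (a trigger_decl followed by a BEGIN ... END list of trigger_cmd
steps), using invented table names:

    #include <stdio.h>
    #include <sqlite3.h>

    static int print_row(void *unused, int argc, char **argv, char **azCol){
      (void)unused; (void)azCol;
      if( argc>0 ) printf("%s\n", argv[0] ? argv[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db,
        "CREATE TABLE items(name TEXT);"
        "CREATE TABLE audit(msg TEXT);"
        "CREATE TRIGGER items_ai AFTER INSERT ON items BEGIN"
        "  INSERT INTO audit VALUES('added ' || new.name);"
        " END;", 0, 0, 0);
      sqlite3_exec(db, "INSERT INTO items VALUES('widget')", 0, 0, 0);
      sqlite3_exec(db, "SELECT msg FROM audit", print_row, 0, 0);  /* added widget */
      sqlite3_close(db);
      return 0;
    }
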
+
+// The special RAISE expression that may occur in trigger programs
+expr(A) ::= RAISE(X) LP IGNORE RP(Y). {
+ A = sqlite3Expr(TK_RAISE, 0, 0, 0);
+ if( A ){
+ A->iColumn = OE_Ignore;
+ sqlite3ExprSpan(A, &X, &Y);
+ }
+}
+expr(A) ::= RAISE(X) LP raisetype(T) COMMA nm(Z) RP(Y). {
+ A = sqlite3Expr(TK_RAISE, 0, 0, &Z);
+ if( A ) {
+ A->iColumn = T;
+ sqlite3ExprSpan(A, &X, &Y);
+ }
+}
+%endif !SQLITE_OMIT_TRIGGER
+
+%type raisetype {int}
+raisetype(A) ::= ROLLBACK. {A = OE_Rollback;}
+raisetype(A) ::= ABORT. {A = OE_Abort;}
+raisetype(A) ::= FAIL. {A = OE_Fail;}
+
+
+//////////////////////// DROP TRIGGER statement //////////////////////////////
+%ifndef SQLITE_OMIT_TRIGGER
+cmd ::= DROP TRIGGER ifexists(NOERR) fullname(X). {
+ sqlite3DropTrigger(pParse,X,NOERR);
+}
+%endif !SQLITE_OMIT_TRIGGER
+
+//////////////////////// ATTACH DATABASE file AS name /////////////////////////
+cmd ::= ATTACH database_kw_opt expr(F) AS expr(D) key_opt(K). {
+ sqlite3Attach(pParse, F, D, K);
+}
+%type key_opt {Expr *}
+%destructor key_opt {sqlite3ExprDelete($$);}
+key_opt(A) ::= . { A = 0; }
+key_opt(A) ::= KEY expr(X). { A = X; }
+
+database_kw_opt ::= DATABASE.
+database_kw_opt ::= .
+
+//////////////////////// DETACH DATABASE name /////////////////////////////////
+cmd ::= DETACH database_kw_opt expr(D). {
+ sqlite3Detach(pParse, D);
+}
+
+////////////////////////// REINDEX collation //////////////////////////////////
+%ifndef SQLITE_OMIT_REINDEX
+cmd ::= REINDEX. {sqlite3Reindex(pParse, 0, 0);}
+cmd ::= REINDEX nm(X) dbnm(Y). {sqlite3Reindex(pParse, &X, &Y);}
+%endif SQLITE_OMIT_REINDEX
+
+/////////////////////////////////// ANALYZE ///////////////////////////////////
+%ifndef SQLITE_OMIT_ANALYZE
+cmd ::= ANALYZE. {sqlite3Analyze(pParse, 0, 0);}
+cmd ::= ANALYZE nm(X) dbnm(Y). {sqlite3Analyze(pParse, &X, &Y);}
+%endif
+
+//////////////////////// ALTER TABLE table ... ////////////////////////////////
+%ifndef SQLITE_OMIT_ALTERTABLE
+cmd ::= ALTER TABLE fullname(X) RENAME TO nm(Z). {
+ sqlite3AlterRenameTable(pParse,X,&Z);
+}
+cmd ::= ALTER TABLE add_column_fullname ADD kwcolumn_opt column(Y). {
+ sqlite3AlterFinishAddColumn(pParse, &Y);
+}
+add_column_fullname ::= fullname(X). {
+ sqlite3AlterBeginAddColumn(pParse, X);
+}
+kwcolumn_opt ::= .
+kwcolumn_opt ::= COLUMNKW.
+%endif SQLITE_OMIT_ALTERTABLE
+
+//////////////////////// CREATE VIRTUAL TABLE ... /////////////////////////////
+%ifndef SQLITE_OMIT_VIRTUALTABLE
+cmd ::= create_vtab. {sqlite3VtabFinishParse(pParse,0);}
+cmd ::= create_vtab LP vtabarglist RP(X). {sqlite3VtabFinishParse(pParse,&X);}
+create_vtab ::= CREATE VIRTUAL TABLE nm(X) dbnm(Y) USING nm(Z). {
+ sqlite3VtabBeginParse(pParse, &X, &Y, &Z);
+}
+vtabarglist ::= vtabarg.
+vtabarglist ::= vtabarglist COMMA vtabarg.
+vtabarg ::= . {sqlite3VtabArgInit(pParse);}
+vtabarg ::= vtabarg vtabargtoken.
+vtabargtoken ::= ANY(X). {sqlite3VtabArgExtend(pParse,&X);}
+vtabargtoken ::= lp anylist RP(X). {sqlite3VtabArgExtend(pParse,&X);}
+lp ::= LP(X). {sqlite3VtabArgExtend(pParse,&X);}
+anylist ::= .
+anylist ::= anylist ANY(X). {sqlite3VtabArgExtend(pParse,&X);}
+%endif SQLITE_OMIT_VIRTUALTABLE
Added: freeswitch/trunk/libs/sqlite/src/pragma.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/pragma.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,992 @@
+/*
+** 2003 April 6
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code used to implement the PRAGMA command.
+**
+** $Id: pragma.c,v 1.124 2006/09/25 18:01:57 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#include <ctype.h>
+
+/* Ignore this whole file if pragmas are disabled
+*/
+#if !defined(SQLITE_OMIT_PRAGMA) && !defined(SQLITE_OMIT_PARSER)
+
+#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST)
+# include "pager.h"
+# include "btree.h"
+#endif
+
+/*
+** Interpret the given string as a safety level. Return 0 for OFF,
+** 1 for ON or NORMAL and 2 for FULL. Return 1 for an empty or
+** unrecognized string argument.
+**
+** Note that the values returned are one less than the values that
+** should be passed into sqlite3BtreeSetSafetyLevel(). This is done
+** to support legacy SQL code. The safety level used to be boolean
+** and older scripts may have used numbers 0 for OFF and 1 for ON.
+*/
+static int getSafetyLevel(const char *z){
+ /* 123456789 123456789 */
+ static const char zText[] = "onoffalseyestruefull";
+ static const u8 iOffset[] = {0, 1, 2, 4, 9, 12, 16};
+ static const u8 iLength[] = {2, 2, 3, 5, 3, 4, 4};
+ static const u8 iValue[] = {1, 0, 0, 0, 1, 1, 2};
+ int i, n;
+ if( isdigit(*z) ){
+ return atoi(z);
+ }
+ n = strlen(z);
+ for(i=0; i<sizeof(iLength); i++){
+ if( iLength[i]==n && sqlite3StrNICmp(&zText[iOffset[i]],z,n)==0 ){
+ return iValue[i];
+ }
+ }
+ return 1;
+}
+
+/*
+** Interpret the given string as a boolean value.
+*/
+static int getBoolean(const char *z){
+ return getSafetyLevel(z)&1;
+}
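
A minimal sketch, not part of the imported sources, of the mappings performed
by getSafetyLevel() and getBoolean(), as seen through the pragmas that use
them:

    #include <stdio.h>
    #include <sqlite3.h>

    static int print_val(void *unused, int argc, char **argv, char **azCol){
      (void)unused; (void)azCol;
      if( argc>0 ) printf("%s\n", argv[0] ? argv[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      /* getSafetyLevel(): OFF -> 0, ON/NORMAL -> 1, FULL -> 2 */
      sqlite3_exec(db, "PRAGMA synchronous = FULL", 0, 0, 0);
      sqlite3_exec(db, "PRAGMA synchronous", print_val, 0, 0);    /* prints 2 */
      sqlite3_exec(db, "PRAGMA synchronous = OFF", 0, 0, 0);
      sqlite3_exec(db, "PRAGMA synchronous", print_val, 0, 0);    /* prints 0 */
      /* getBoolean(): yes/true/on/1 -> 1, no/false/off/0 -> 0 */
      sqlite3_exec(db, "PRAGMA count_changes = yes", 0, 0, 0);
      sqlite3_exec(db, "PRAGMA count_changes", print_val, 0, 0);  /* prints 1 */
      sqlite3_close(db);
      return 0;
    }
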
+
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+/*
+** Interpret the given string as a temp db location. Return 1 for file
+** backed temporary databases, 2 for the Red-Black tree in memory database
+** and 0 to use the compile-time default.
+*/
+static int getTempStore(const char *z){
+ if( z[0]>='0' && z[0]<='2' ){
+ return z[0] - '0';
+ }else if( sqlite3StrICmp(z, "file")==0 ){
+ return 1;
+ }else if( sqlite3StrICmp(z, "memory")==0 ){
+ return 2;
+ }else{
+ return 0;
+ }
+}
+#endif /* SQLITE_OMIT_PAGER_PRAGMAS */
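
A minimal sketch, not part of the imported sources: the words and digits
accepted by getTempStore() map onto the same codes, so reading the pragma back
reports 0, 1 or 2 regardless of which spelling was used to set it (whether
temporary data actually goes to a file or to memory also depends on the
compile-time TEMP_STORE setting):

    #include <stdio.h>
    #include <sqlite3.h>

    static int print_val(void *unused, int argc, char **argv, char **azCol){
      (void)unused; (void)azCol;
      if( argc>0 ) printf("%s\n", argv[0] ? argv[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db, "PRAGMA temp_store = memory", 0, 0, 0);
      sqlite3_exec(db, "PRAGMA temp_store", print_val, 0, 0);  /* prints 2 */
      sqlite3_exec(db, "PRAGMA temp_store = 1", 0, 0, 0);
      sqlite3_exec(db, "PRAGMA temp_store", print_val, 0, 0);  /* prints 1 */
      sqlite3_close(db);
      return 0;
    }
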
+
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+/*
+** Invalidate temp storage, either when the temp storage is changed
+** from the default, or when it is 'file' and the temp_store_directory
+** has changed.
+*/
+static int invalidateTempStorage(Parse *pParse){
+ sqlite3 *db = pParse->db;
+ if( db->aDb[1].pBt!=0 ){
+ if( db->flags & SQLITE_InTrans ){
+ sqlite3ErrorMsg(pParse, "temporary storage cannot be changed "
+ "from within a transaction");
+ return SQLITE_ERROR;
+ }
+ sqlite3BtreeClose(db->aDb[1].pBt);
+ db->aDb[1].pBt = 0;
+ sqlite3ResetInternalSchema(db, 0);
+ }
+ return SQLITE_OK;
+}
+#endif /* SQLITE_OMIT_PAGER_PRAGMAS */
+
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+/*
+** If the TEMP database is open, close it and mark the database schema
+** as needing reloading. This must be done when using the TEMP_STORE
+** or DEFAULT_TEMP_STORE pragmas.
+*/
+static int changeTempStorage(Parse *pParse, const char *zStorageType){
+ int ts = getTempStore(zStorageType);
+ sqlite3 *db = pParse->db;
+ if( db->temp_store==ts ) return SQLITE_OK;
+ if( invalidateTempStorage( pParse ) != SQLITE_OK ){
+ return SQLITE_ERROR;
+ }
+ db->temp_store = ts;
+ return SQLITE_OK;
+}
+#endif /* SQLITE_OMIT_PAGER_PRAGMAS */
+
+/*
+** Generate code to return a single integer value.
+*/
+static void returnSingleInt(Parse *pParse, const char *zLabel, int value){
+ Vdbe *v = sqlite3GetVdbe(pParse);
+ sqlite3VdbeAddOp(v, OP_Integer, value, 0);
+ if( pParse->explain==0 ){
+ sqlite3VdbeSetNumCols(v, 1);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, zLabel, P3_STATIC);
+ }
+ sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
+}
+
+#ifndef SQLITE_OMIT_FLAG_PRAGMAS
+/*
+** Check to see if zRight and zLeft refer to a pragma that queries
+** or changes one of the flags in db->flags. Return 1 if so and 0 if not.
+** Also, implement the pragma.
+*/
+static int flagPragma(Parse *pParse, const char *zLeft, const char *zRight){
+ static const struct sPragmaType {
+ const char *zName; /* Name of the pragma */
+ int mask; /* Mask for the db->flags value */
+ } aPragma[] = {
+ { "vdbe_trace", SQLITE_VdbeTrace },
+ { "sql_trace", SQLITE_SqlTrace },
+ { "vdbe_listing", SQLITE_VdbeListing },
+ { "full_column_names", SQLITE_FullColNames },
+ { "short_column_names", SQLITE_ShortColNames },
+ { "count_changes", SQLITE_CountRows },
+ { "empty_result_callbacks", SQLITE_NullCallback },
+ { "legacy_file_format", SQLITE_LegacyFileFmt },
+ { "fullfsync", SQLITE_FullFSync },
+#ifndef SQLITE_OMIT_CHECK
+ { "ignore_check_constraints", SQLITE_IgnoreChecks },
+#endif
+ /* The following is VERY experimental */
+ { "writable_schema", SQLITE_WriteSchema },
+ { "omit_readlock", SQLITE_NoReadlock },
+
+ /* TODO: Maybe it shouldn't be possible to change the ReadUncommitted
+ ** flag if there are any active statements. */
+ { "read_uncommitted", SQLITE_ReadUncommitted },
+ };
+ int i;
+ const struct sPragmaType *p;
+ for(i=0, p=aPragma; i<sizeof(aPragma)/sizeof(aPragma[0]); i++, p++){
+ if( sqlite3StrICmp(zLeft, p->zName)==0 ){
+ sqlite3 *db = pParse->db;
+ Vdbe *v;
+ v = sqlite3GetVdbe(pParse);
+ if( v ){
+ if( zRight==0 ){
+ returnSingleInt(pParse, p->zName, (db->flags & p->mask)!=0 );
+ }else{
+ if( getBoolean(zRight) ){
+ db->flags |= p->mask;
+ }else{
+ db->flags &= ~p->mask;
+ }
+ }
+ }
+ return 1;
+ }
+ }
+ return 0;
+}
+#endif /* SQLITE_OMIT_FLAG_PRAGMAS */
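
A minimal sketch, not part of the imported sources: every entry in aPragma[]
behaves the same way, so with default compile-time options a flag such as
full_column_names can be read and then set like this:

    #include <stdio.h>
    #include <sqlite3.h>

    static int print_flag(void *unused, int argc, char **argv, char **azCol){
      (void)unused;
      if( argc>0 ) printf("%s = %s\n", azCol[0], argv[0] ? argv[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      /* No argument: report the current state of the flag (0 by default) */
      sqlite3_exec(db, "PRAGMA full_column_names", print_flag, 0, 0);
      /* With an argument: set or clear the corresponding bit in db->flags */
      sqlite3_exec(db, "PRAGMA full_column_names = ON", 0, 0, 0);
      sqlite3_exec(db, "PRAGMA full_column_names", print_flag, 0, 0);  /* 1 */
      sqlite3_close(db);
      return 0;
    }
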
+
+/*
+** Process a pragma statement.
+**
+** Pragmas are of this form:
+**
+** PRAGMA [database.]id [= value]
+**
+** The identifier might also be a string. The value is a string, an
+** identifier, or a number. If minusFlag is true, then the value is
+** a number that was preceded by a minus sign.
+**
+** If the left side is "database.id" then pId1 is the database name
+** and pId2 is the id. If the left side is just "id" then pId1 is the
+** id and pId2 is an empty string.
+*/
+void sqlite3Pragma(
+ Parse *pParse,
+ Token *pId1, /* First part of [database.]id field */
+ Token *pId2, /* Second part of [database.]id field, or NULL */
+ Token *pValue, /* Token for <value>, or NULL */
+ int minusFlag /* True if a '-' sign preceded <value> */
+){
+ char *zLeft = 0; /* Nul-terminated UTF-8 string <id> */
+ char *zRight = 0; /* Nul-terminated UTF-8 string <value>, or NULL */
+ const char *zDb = 0; /* The database name */
+ Token *pId; /* Pointer to <id> token */
+ int iDb; /* Database index for <database> */
+ sqlite3 *db = pParse->db;
+ Db *pDb;
+ Vdbe *v = sqlite3GetVdbe(pParse);
+ if( v==0 ) return;
+
+ /* Interpret the [database.] part of the pragma statement. iDb is the
+ ** index of the database this pragma is being applied to in db.aDb[]. */
+ iDb = sqlite3TwoPartName(pParse, pId1, pId2, &pId);
+ if( iDb<0 ) return;
+ pDb = &db->aDb[iDb];
+
+ /* If the temp database has been explicitly named as part of the
+ ** pragma, make sure it is open.
+ */
+ if( iDb==1 && sqlite3OpenTempDatabase(pParse) ){
+ return;
+ }
+
+ zLeft = sqlite3NameFromToken(pId);
+ if( !zLeft ) return;
+ if( minusFlag ){
+ zRight = sqlite3MPrintf("-%T", pValue);
+ }else{
+ zRight = sqlite3NameFromToken(pValue);
+ }
+
+ zDb = ((iDb>0)?pDb->zName:0);
+ if( sqlite3AuthCheck(pParse, SQLITE_PRAGMA, zLeft, zRight, zDb) ){
+ goto pragma_out;
+ }
+
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+ /*
+ ** PRAGMA [database.]default_cache_size
+ ** PRAGMA [database.]default_cache_size=N
+ **
+ ** The first form reports the current persistent setting for the
+ ** page cache size. The value returned is the maximum number of
+ ** pages in the page cache. The second form sets both the current
+ ** page cache size value and the persistent page cache size value
+ ** stored in the database file.
+ **
+ ** The default cache size is stored in meta-value 2 of page 1 of the
+ ** database file. The cache size is actually the absolute value of
+ ** this memory location. The sign of meta-value 2 determines the
+ ** synchronous setting. A negative value means synchronous is off
+ ** and a positive value means synchronous is on.
+ */
+ if( sqlite3StrICmp(zLeft,"default_cache_size")==0 ){
+ static const VdbeOpList getCacheSize[] = {
+ { OP_ReadCookie, 0, 2, 0}, /* 0 */
+ { OP_AbsValue, 0, 0, 0},
+ { OP_Dup, 0, 0, 0},
+ { OP_Integer, 0, 0, 0},
+ { OP_Ne, 0, 6, 0},
+ { OP_Integer, 0, 0, 0}, /* 5 */
+ { OP_Callback, 1, 0, 0},
+ };
+ int addr;
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ if( !zRight ){
+ sqlite3VdbeSetNumCols(v, 1);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "cache_size", P3_STATIC);
+ addr = sqlite3VdbeAddOpList(v, ArraySize(getCacheSize), getCacheSize);
+ sqlite3VdbeChangeP1(v, addr, iDb);
+ sqlite3VdbeChangeP1(v, addr+5, MAX_PAGES);
+ }else{
+ int size = atoi(zRight);
+ if( size<0 ) size = -size;
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+ sqlite3VdbeAddOp(v, OP_Integer, size, 0);
+ sqlite3VdbeAddOp(v, OP_ReadCookie, iDb, 2);
+ addr = sqlite3VdbeAddOp(v, OP_Integer, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Ge, 0, addr+3);
+ sqlite3VdbeAddOp(v, OP_Negative, 0, 0);
+ sqlite3VdbeAddOp(v, OP_SetCookie, iDb, 2);
+ pDb->pSchema->cache_size = size;
+ sqlite3BtreeSetCacheSize(pDb->pBt, pDb->pSchema->cache_size);
+ }
+ }else
+
+ /*
+ ** PRAGMA [database.]page_size
+ ** PRAGMA [database.]page_size=N
+ **
+ ** The first form reports the current setting for the
+ ** database page size in bytes. The second form sets the
+ ** database page size value. The value can only be set if
+ ** the database has not yet been created.
+ */
+ if( sqlite3StrICmp(zLeft,"page_size")==0 ){
+ Btree *pBt = pDb->pBt;
+ if( !zRight ){
+ int size = pBt ? sqlite3BtreeGetPageSize(pBt) : 0;
+ returnSingleInt(pParse, "page_size", size);
+ }else{
+ sqlite3BtreeSetPageSize(pBt, atoi(zRight), -1);
+ }
+ }else
+#endif /* SQLITE_OMIT_PAGER_PRAGMAS */
+
+ /*
+ ** PRAGMA [database.]auto_vacuum
+ ** PRAGMA [database.]auto_vacuum=N
+ **
+ ** Get or set the (boolean) value of the database 'auto-vacuum' parameter.
+ */
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( sqlite3StrICmp(zLeft,"auto_vacuum")==0 ){
+ Btree *pBt = pDb->pBt;
+ if( !zRight ){
+ int auto_vacuum =
+ pBt ? sqlite3BtreeGetAutoVacuum(pBt) : SQLITE_DEFAULT_AUTOVACUUM;
+ returnSingleInt(pParse, "auto_vacuum", auto_vacuum);
+ }else{
+ sqlite3BtreeSetAutoVacuum(pBt, getBoolean(zRight));
+ }
+ }else
+#endif
+
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+ /*
+ ** PRAGMA [database.]cache_size
+ ** PRAGMA [database.]cache_size=N
+ **
+ ** The first form reports the current local setting for the
+ ** page cache size. The local setting can be different from
+ ** the persistent cache size value that is stored in the database
+ ** file itself. The value returned is the maximum number of
+ ** pages in the page cache. The second form sets the local
+ ** page cache size value. It does not change the persistent
+ ** cache size stored on the disk so the cache size will revert
+ ** to its default value when the database is closed and reopened.
+ ** N should be a positive integer.
+ */
+ if( sqlite3StrICmp(zLeft,"cache_size")==0 ){
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ if( !zRight ){
+ returnSingleInt(pParse, "cache_size", pDb->pSchema->cache_size);
+ }else{
+ int size = atoi(zRight);
+ if( size<0 ) size = -size;
+ pDb->pSchema->cache_size = size;
+ sqlite3BtreeSetCacheSize(pDb->pBt, pDb->pSchema->cache_size);
+ }
+ }else
+
+ /*
+ ** PRAGMA temp_store
+ ** PRAGMA temp_store = "default"|"memory"|"file"
+ **
+ ** Return or set the local value of the temp_store flag. Changing
+ ** the local value does not make changes to the disk file and the default
+ ** value will be restored the next time the database is opened.
+ **
+ ** Note that it is possible for the library compile-time options to
+  ** override this setting.
+ */
+ if( sqlite3StrICmp(zLeft, "temp_store")==0 ){
+ if( !zRight ){
+ returnSingleInt(pParse, "temp_store", db->temp_store);
+ }else{
+ changeTempStorage(pParse, zRight);
+ }
+ }else
+
+ /*
+ ** PRAGMA temp_store_directory
+ ** PRAGMA temp_store_directory = ""|"directory_name"
+ **
+ ** Return or set the local value of the temp_store_directory flag. Changing
+ ** the value sets a specific directory to be used for temporary files.
+ ** Setting to a null string reverts to the default temporary directory search.
+  ** If the temporary directory is changed, invalidateTempStorage() is called.
+ **
+ */
+ if( sqlite3StrICmp(zLeft, "temp_store_directory")==0 ){
+ if( !zRight ){
+ if( sqlite3_temp_directory ){
+ sqlite3VdbeSetNumCols(v, 1);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME,
+ "temp_store_directory", P3_STATIC);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, sqlite3_temp_directory, 0);
+ sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
+ }
+ }else{
+ if( zRight[0] && !sqlite3OsIsDirWritable(zRight) ){
+ sqlite3ErrorMsg(pParse, "not a writable directory");
+ goto pragma_out;
+ }
+ if( TEMP_STORE==0
+ || (TEMP_STORE==1 && db->temp_store<=1)
+ || (TEMP_STORE==2 && db->temp_store==1)
+ ){
+ invalidateTempStorage(pParse);
+ }
+ sqliteFree(sqlite3_temp_directory);
+ if( zRight[0] ){
+ sqlite3_temp_directory = zRight;
+ zRight = 0;
+ }else{
+ sqlite3_temp_directory = 0;
+ }
+ }
+ }else
+
+ /*
+ ** PRAGMA [database.]synchronous
+ ** PRAGMA [database.]synchronous=OFF|ON|NORMAL|FULL
+ **
+ ** Return or set the local value of the synchronous flag. Changing
+ ** the local value does not make changes to the disk file and the
+ ** default value will be restored the next time the database is
+ ** opened.
+ */
+ if( sqlite3StrICmp(zLeft,"synchronous")==0 ){
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ if( !zRight ){
+ returnSingleInt(pParse, "synchronous", pDb->safety_level-1);
+ }else{
+ if( !db->autoCommit ){
+ sqlite3ErrorMsg(pParse,
+ "Safety level may not be changed inside a transaction");
+ }else{
+ pDb->safety_level = getSafetyLevel(zRight)+1;
+ }
+ }
+ }else
+#endif /* SQLITE_OMIT_PAGER_PRAGMAS */
+
+#ifndef SQLITE_OMIT_FLAG_PRAGMAS
+ if( flagPragma(pParse, zLeft, zRight) ){
+    /* The flagPragma() subroutine also generates any necessary code;
+    ** there is nothing more to do here. */
+ }else
+#endif /* SQLITE_OMIT_FLAG_PRAGMAS */
+
+#ifndef SQLITE_OMIT_SCHEMA_PRAGMAS
+ /*
+ ** PRAGMA table_info(<table>)
+ **
+ ** Return a single row for each column of the named table. The columns of
+ ** the returned data set are:
+ **
+ ** cid: Column id (numbered from left to right, starting at 0)
+ ** name: Column name
+ ** type: Column declaration type.
+ ** notnull: True if 'NOT NULL' is part of column declaration
+ ** dflt_value: The default value for the column, if any.
+ */
+ if( sqlite3StrICmp(zLeft, "table_info")==0 && zRight ){
+ Table *pTab;
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ pTab = sqlite3FindTable(db, zRight, zDb);
+ if( pTab ){
+ int i;
+ Column *pCol;
+ sqlite3VdbeSetNumCols(v, 6);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "cid", P3_STATIC);
+ sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", P3_STATIC);
+ sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "type", P3_STATIC);
+ sqlite3VdbeSetColName(v, 3, COLNAME_NAME, "notnull", P3_STATIC);
+ sqlite3VdbeSetColName(v, 4, COLNAME_NAME, "dflt_value", P3_STATIC);
+ sqlite3VdbeSetColName(v, 5, COLNAME_NAME, "pk", P3_STATIC);
+ sqlite3ViewGetColumnNames(pParse, pTab);
+ for(i=0, pCol=pTab->aCol; i<pTab->nCol; i++, pCol++){
+ const Token *pDflt;
+ static const Token noDflt = { (unsigned char*)"", 0, 0 };
+ sqlite3VdbeAddOp(v, OP_Integer, i, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, pCol->zName, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0,
+ pCol->zType ? pCol->zType : "", 0);
+ sqlite3VdbeAddOp(v, OP_Integer, pCol->notNull, 0);
+ pDflt = pCol->pDflt ? &pCol->pDflt->span : &noDflt;
+ if( pDflt->z ){
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, (char*)pDflt->z, pDflt->n);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_Integer, pCol->isPrimKey, 0);
+ sqlite3VdbeAddOp(v, OP_Callback, 6, 0);
+ }
+ }
+ }else
+
+ if( sqlite3StrICmp(zLeft, "index_info")==0 && zRight ){
+ Index *pIdx;
+ Table *pTab;
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ pIdx = sqlite3FindIndex(db, zRight, zDb);
+ if( pIdx ){
+ int i;
+ pTab = pIdx->pTable;
+ sqlite3VdbeSetNumCols(v, 3);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seqno", P3_STATIC);
+ sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "cid", P3_STATIC);
+ sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "name", P3_STATIC);
+ for(i=0; i<pIdx->nColumn; i++){
+ int cnum = pIdx->aiColumn[i];
+ sqlite3VdbeAddOp(v, OP_Integer, i, 0);
+ sqlite3VdbeAddOp(v, OP_Integer, cnum, 0);
+ assert( pTab->nCol>cnum );
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, pTab->aCol[cnum].zName, 0);
+ sqlite3VdbeAddOp(v, OP_Callback, 3, 0);
+ }
+ }
+ }else
+
+ if( sqlite3StrICmp(zLeft, "index_list")==0 && zRight ){
+ Index *pIdx;
+ Table *pTab;
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ pTab = sqlite3FindTable(db, zRight, zDb);
+ if( pTab ){
+ v = sqlite3GetVdbe(pParse);
+ pIdx = pTab->pIndex;
+ if( pIdx ){
+ int i = 0;
+ sqlite3VdbeSetNumCols(v, 3);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", P3_STATIC);
+ sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", P3_STATIC);
+ sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "unique", P3_STATIC);
+ while(pIdx){
+ sqlite3VdbeAddOp(v, OP_Integer, i, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, pIdx->zName, 0);
+ sqlite3VdbeAddOp(v, OP_Integer, pIdx->onError!=OE_None, 0);
+ sqlite3VdbeAddOp(v, OP_Callback, 3, 0);
+ ++i;
+ pIdx = pIdx->pNext;
+ }
+ }
+ }
+ }else
+
+ if( sqlite3StrICmp(zLeft, "database_list")==0 ){
+ int i;
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ sqlite3VdbeSetNumCols(v, 3);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", P3_STATIC);
+ sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", P3_STATIC);
+ sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "file", P3_STATIC);
+ for(i=0; i<db->nDb; i++){
+ if( db->aDb[i].pBt==0 ) continue;
+ assert( db->aDb[i].zName!=0 );
+ sqlite3VdbeAddOp(v, OP_Integer, i, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, db->aDb[i].zName, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0,
+ sqlite3BtreeGetFilename(db->aDb[i].pBt), 0);
+ sqlite3VdbeAddOp(v, OP_Callback, 3, 0);
+ }
+ }else
+
+ if( sqlite3StrICmp(zLeft, "collation_list")==0 ){
+ int i = 0;
+ HashElem *p;
+ sqlite3VdbeSetNumCols(v, 2);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", P3_STATIC);
+ sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", P3_STATIC);
+ for(p=sqliteHashFirst(&db->aCollSeq); p; p=sqliteHashNext(p)){
+ CollSeq *pColl = (CollSeq *)sqliteHashData(p);
+ sqlite3VdbeAddOp(v, OP_Integer, i++, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, pColl->zName, 0);
+ sqlite3VdbeAddOp(v, OP_Callback, 2, 0);
+ }
+ }else
+#endif /* SQLITE_OMIT_SCHEMA_PRAGMAS */
+
+#ifndef SQLITE_OMIT_FOREIGN_KEY
+ if( sqlite3StrICmp(zLeft, "foreign_key_list")==0 && zRight ){
+ FKey *pFK;
+ Table *pTab;
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ pTab = sqlite3FindTable(db, zRight, zDb);
+ if( pTab ){
+ v = sqlite3GetVdbe(pParse);
+ pFK = pTab->pFKey;
+ if( pFK ){
+ int i = 0;
+ sqlite3VdbeSetNumCols(v, 5);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "id", P3_STATIC);
+ sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "seq", P3_STATIC);
+ sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "table", P3_STATIC);
+ sqlite3VdbeSetColName(v, 3, COLNAME_NAME, "from", P3_STATIC);
+ sqlite3VdbeSetColName(v, 4, COLNAME_NAME, "to", P3_STATIC);
+ while(pFK){
+ int j;
+ for(j=0; j<pFK->nCol; j++){
+ char *zCol = pFK->aCol[j].zCol;
+ sqlite3VdbeAddOp(v, OP_Integer, i, 0);
+ sqlite3VdbeAddOp(v, OP_Integer, j, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, pFK->zTo, 0);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0,
+ pTab->aCol[pFK->aCol[j].iFrom].zName, 0);
+ sqlite3VdbeOp3(v, zCol ? OP_String8 : OP_Null, 0, 0, zCol, 0);
+ sqlite3VdbeAddOp(v, OP_Callback, 5, 0);
+ }
+ ++i;
+ pFK = pFK->pNextFrom;
+ }
+ }
+ }
+ }else
+#endif /* !defined(SQLITE_OMIT_FOREIGN_KEY) */
+
+#ifndef NDEBUG
+ if( sqlite3StrICmp(zLeft, "parser_trace")==0 ){
+ extern void sqlite3ParserTrace(FILE*, char *);
+ if( zRight ){
+ if( getBoolean(zRight) ){
+ sqlite3ParserTrace(stderr, "parser: ");
+ }else{
+ sqlite3ParserTrace(0, 0);
+ }
+ }
+ }else
+#endif
+
+ /* Reinstall the LIKE and GLOB functions. The variant of LIKE
+ ** used will be case sensitive or not depending on the RHS.
+ */
+ if( sqlite3StrICmp(zLeft, "case_sensitive_like")==0 ){
+ if( zRight ){
+ sqlite3RegisterLikeFunctions(db, getBoolean(zRight));
+ }
+ }else
+
+#ifndef SQLITE_OMIT_INTEGRITY_CHECK
+ if( sqlite3StrICmp(zLeft, "integrity_check")==0 ){
+ int i, j, addr;
+
+ /* Code that appears at the end of the integrity check. If no error
+ ** messages have been generated, output OK. Otherwise output the
+ ** error message
+ */
+ static const VdbeOpList endCode[] = {
+ { OP_MemLoad, 0, 0, 0},
+ { OP_Integer, 0, 0, 0},
+ { OP_Ne, 0, 0, 0}, /* 2 */
+ { OP_String8, 0, 0, "ok"},
+ { OP_Callback, 1, 0, 0},
+ };
+
+ /* Initialize the VDBE program */
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ sqlite3VdbeSetNumCols(v, 1);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "integrity_check", P3_STATIC);
+ sqlite3VdbeAddOp(v, OP_MemInt, 0, 0); /* Initialize error count to 0 */
+
+ /* Do an integrity check on each database file */
+ for(i=0; i<db->nDb; i++){
+ HashElem *x;
+ Hash *pTbls;
+ int cnt = 0;
+
+ if( OMIT_TEMPDB && i==1 ) continue;
+
+ sqlite3CodeVerifySchema(pParse, i);
+
+ /* Do an integrity check of the B-Tree
+ */
+ pTbls = &db->aDb[i].pSchema->tblHash;
+ for(x=sqliteHashFirst(pTbls); x; x=sqliteHashNext(x)){
+ Table *pTab = sqliteHashData(x);
+ Index *pIdx;
+ sqlite3VdbeAddOp(v, OP_Integer, pTab->tnum, 0);
+ cnt++;
+ for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
+ sqlite3VdbeAddOp(v, OP_Integer, pIdx->tnum, 0);
+ cnt++;
+ }
+ }
+ assert( cnt>0 );
+ sqlite3VdbeAddOp(v, OP_IntegrityCk, cnt, i);
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 1);
+ addr = sqlite3VdbeOp3(v, OP_String8, 0, 0, "ok", P3_STATIC);
+ sqlite3VdbeAddOp(v, OP_Eq, 0, addr+7);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0,
+ sqlite3MPrintf("*** in database %s ***\n", db->aDb[i].zName),
+ P3_DYNAMIC);
+ sqlite3VdbeAddOp(v, OP_Pull, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Concat, 0, 1);
+ sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
+ sqlite3VdbeAddOp(v, OP_MemIncr, 1, 0);
+
+ /* Make sure all the indices are constructed correctly.
+ */
+ sqlite3CodeVerifySchema(pParse, i);
+ for(x=sqliteHashFirst(pTbls); x; x=sqliteHashNext(x)){
+ Table *pTab = sqliteHashData(x);
+ Index *pIdx;
+ int loopTop;
+
+ if( pTab->pIndex==0 ) continue;
+ sqlite3OpenTableAndIndices(pParse, pTab, 1, OP_OpenRead);
+ sqlite3VdbeAddOp(v, OP_MemInt, 0, 1);
+ loopTop = sqlite3VdbeAddOp(v, OP_Rewind, 1, 0);
+ sqlite3VdbeAddOp(v, OP_MemIncr, 1, 1);
+ for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){
+ int jmp2;
+ static const VdbeOpList idxErr[] = {
+ { OP_MemIncr, 1, 0, 0},
+ { OP_String8, 0, 0, "rowid "},
+ { OP_Rowid, 1, 0, 0},
+ { OP_String8, 0, 0, " missing from index "},
+ { OP_String8, 0, 0, 0}, /* 4 */
+ { OP_Concat, 2, 0, 0},
+ { OP_Callback, 1, 0, 0},
+ };
+ sqlite3GenerateIndexKey(v, pIdx, 1);
+ jmp2 = sqlite3VdbeAddOp(v, OP_Found, j+2, 0);
+ addr = sqlite3VdbeAddOpList(v, ArraySize(idxErr), idxErr);
+ sqlite3VdbeChangeP3(v, addr+4, pIdx->zName, P3_STATIC);
+ sqlite3VdbeJumpHere(v, jmp2);
+ }
+ sqlite3VdbeAddOp(v, OP_Next, 1, loopTop+1);
+ sqlite3VdbeJumpHere(v, loopTop);
+ for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){
+ static const VdbeOpList cntIdx[] = {
+ { OP_MemInt, 0, 2, 0},
+ { OP_Rewind, 0, 0, 0}, /* 1 */
+ { OP_MemIncr, 1, 2, 0},
+ { OP_Next, 0, 0, 0}, /* 3 */
+ { OP_MemLoad, 1, 0, 0},
+ { OP_MemLoad, 2, 0, 0},
+ { OP_Eq, 0, 0, 0}, /* 6 */
+ { OP_MemIncr, 1, 0, 0},
+ { OP_String8, 0, 0, "wrong # of entries in index "},
+ { OP_String8, 0, 0, 0}, /* 9 */
+ { OP_Concat, 0, 0, 0},
+ { OP_Callback, 1, 0, 0},
+ };
+ if( pIdx->tnum==0 ) continue;
+ addr = sqlite3VdbeAddOpList(v, ArraySize(cntIdx), cntIdx);
+ sqlite3VdbeChangeP1(v, addr+1, j+2);
+ sqlite3VdbeChangeP2(v, addr+1, addr+4);
+ sqlite3VdbeChangeP1(v, addr+3, j+2);
+ sqlite3VdbeChangeP2(v, addr+3, addr+2);
+ sqlite3VdbeJumpHere(v, addr+6);
+ sqlite3VdbeChangeP3(v, addr+9, pIdx->zName, P3_STATIC);
+ }
+ }
+ }
+ addr = sqlite3VdbeAddOpList(v, ArraySize(endCode), endCode);
+ sqlite3VdbeJumpHere(v, addr+2);
+ }else
+#endif /* SQLITE_OMIT_INTEGRITY_CHECK */
+
+#ifndef SQLITE_OMIT_UTF16
+ /*
+ ** PRAGMA encoding
+ ** PRAGMA encoding = "utf-8"|"utf-16"|"utf-16le"|"utf-16be"
+ **
+  ** In its first form, this pragma returns the encoding of the main
+ ** database. If the database is not initialized, it is initialized now.
+ **
+ ** The second form of this pragma is a no-op if the main database file
+ ** has not already been initialized. In this case it sets the default
+ ** encoding that will be used for the main database file if a new file
+ ** is created. If an existing main database file is opened, then the
+ ** default text encoding for the existing database is used.
+ **
+ ** In all cases new databases created using the ATTACH command are
+ ** created to use the same default text encoding as the main database. If
+ ** the main database has not been initialized and/or created when ATTACH
+ ** is executed, this is done before the ATTACH operation.
+ **
+ ** In the second form this pragma sets the text encoding to be used in
+ ** new database files created using this database handle. It is only
+  ** useful if invoked immediately after the main database is created.
+ */
+ if( sqlite3StrICmp(zLeft, "encoding")==0 ){
+ static const struct EncName {
+ char *zName;
+ u8 enc;
+ } encnames[] = {
+ { "UTF-8", SQLITE_UTF8 },
+ { "UTF8", SQLITE_UTF8 },
+ { "UTF-16le", SQLITE_UTF16LE },
+ { "UTF16le", SQLITE_UTF16LE },
+ { "UTF-16be", SQLITE_UTF16BE },
+ { "UTF16be", SQLITE_UTF16BE },
+ { "UTF-16", 0 }, /* SQLITE_UTF16NATIVE */
+ { "UTF16", 0 }, /* SQLITE_UTF16NATIVE */
+ { 0, 0 }
+ };
+ const struct EncName *pEnc;
+ if( !zRight ){ /* "PRAGMA encoding" */
+ if( sqlite3ReadSchema(pParse) ) goto pragma_out;
+ sqlite3VdbeSetNumCols(v, 1);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "encoding", P3_STATIC);
+ sqlite3VdbeAddOp(v, OP_String8, 0, 0);
+ for(pEnc=&encnames[0]; pEnc->zName; pEnc++){
+ if( pEnc->enc==ENC(pParse->db) ){
+ sqlite3VdbeChangeP3(v, -1, pEnc->zName, P3_STATIC);
+ break;
+ }
+ }
+ sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
+ }else{ /* "PRAGMA encoding = XXX" */
+ /* Only change the value of sqlite.enc if the database handle is not
+ ** initialized. If the main database exists, the new sqlite.enc value
+ ** will be overwritten when the schema is next loaded. If it does not
+      ** already exist, it will be created to use the new encoding value.
+ */
+ if(
+ !(DbHasProperty(db, 0, DB_SchemaLoaded)) ||
+ DbHasProperty(db, 0, DB_Empty)
+ ){
+ for(pEnc=&encnames[0]; pEnc->zName; pEnc++){
+ if( 0==sqlite3StrICmp(zRight, pEnc->zName) ){
+ ENC(pParse->db) = pEnc->enc ? pEnc->enc : SQLITE_UTF16NATIVE;
+ break;
+ }
+ }
+ if( !pEnc->zName ){
+ sqlite3ErrorMsg(pParse, "unsupported encoding: %s", zRight);
+ }
+ }
+ }
+ }else
+#endif /* SQLITE_OMIT_UTF16 */
+
+#ifndef SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS
+ /*
+ ** PRAGMA [database.]schema_version
+ ** PRAGMA [database.]schema_version = <integer>
+ **
+ ** PRAGMA [database.]user_version
+ ** PRAGMA [database.]user_version = <integer>
+ **
+  ** The pragmas schema_version and user_version are used to set or get
+ ** the value of the schema-version and user-version, respectively. Both
+ ** the schema-version and the user-version are 32-bit signed integers
+ ** stored in the database header.
+ **
+ ** The schema-cookie is usually only manipulated internally by SQLite. It
+ ** is incremented by SQLite whenever the database schema is modified (by
+ ** creating or dropping a table or index). The schema version is used by
+ ** SQLite each time a query is executed to ensure that the internal cache
+ ** of the schema used when compiling the SQL query matches the schema of
+ ** the database against which the compiled query is actually executed.
+ ** Subverting this mechanism by using "PRAGMA schema_version" to modify
+ ** the schema-version is potentially dangerous and may lead to program
+ ** crashes or database corruption. Use with caution!
+ **
+ ** The user-version is not used internally by SQLite. It may be used by
+ ** applications for any purpose.
+ */
+ if( sqlite3StrICmp(zLeft, "schema_version")==0 ||
+ sqlite3StrICmp(zLeft, "user_version")==0 ){
+
+ int iCookie; /* Cookie index. 0 for schema-cookie, 6 for user-cookie. */
+ if( zLeft[0]=='s' || zLeft[0]=='S' ){
+ iCookie = 0;
+ }else{
+ iCookie = 5;
+ }
+
+ if( zRight ){
+ /* Write the specified cookie value */
+ static const VdbeOpList setCookie[] = {
+ { OP_Transaction, 0, 1, 0}, /* 0 */
+ { OP_Integer, 0, 0, 0}, /* 1 */
+ { OP_SetCookie, 0, 0, 0}, /* 2 */
+ };
+ int addr = sqlite3VdbeAddOpList(v, ArraySize(setCookie), setCookie);
+ sqlite3VdbeChangeP1(v, addr, iDb);
+ sqlite3VdbeChangeP1(v, addr+1, atoi(zRight));
+ sqlite3VdbeChangeP1(v, addr+2, iDb);
+ sqlite3VdbeChangeP2(v, addr+2, iCookie);
+ }else{
+ /* Read the specified cookie value */
+ static const VdbeOpList readCookie[] = {
+ { OP_ReadCookie, 0, 0, 0}, /* 0 */
+ { OP_Callback, 1, 0, 0}
+ };
+ int addr = sqlite3VdbeAddOpList(v, ArraySize(readCookie), readCookie);
+ sqlite3VdbeChangeP1(v, addr, iDb);
+ sqlite3VdbeChangeP2(v, addr, iCookie);
+ sqlite3VdbeSetNumCols(v, 1);
+ }
+ }
+#endif /* SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS */
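
A small hedged example of how an application might use the user-version cookie described above for its own schema migrations; the file name and version numbers are hypothetical:

    #include <stdio.h>
    #include "sqlite3.h"

    /* Read PRAGMA user_version with the prepare/step/finalize cycle. */
    static int get_user_version(sqlite3 *db){
      sqlite3_stmt *pStmt;
      int ver = -1;
      if( sqlite3_prepare(db, "PRAGMA user_version", -1, &pStmt, 0)==SQLITE_OK ){
        if( sqlite3_step(pStmt)==SQLITE_ROW ) ver = sqlite3_column_int(pStmt, 0);
        sqlite3_finalize(pStmt);
      }
      return ver;
    }

    int main(void){
      sqlite3 *db;
      if( sqlite3_open("app.db", &db)!=SQLITE_OK ) return 1;
      printf("current user_version = %d\n", get_user_version(db));
      /* The value is application-defined; SQLite itself never interprets it. */
      sqlite3_exec(db, "PRAGMA user_version = 2", 0, 0, 0);
      sqlite3_close(db);
      return 0;
    }
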
+
+#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST)
+ /*
+ ** Report the current state of file locks for all databases
+ */
+ if( sqlite3StrICmp(zLeft, "lock_status")==0 ){
+ static const char *const azLockName[] = {
+ "unlocked", "shared", "reserved", "pending", "exclusive"
+ };
+ int i;
+ Vdbe *v = sqlite3GetVdbe(pParse);
+ sqlite3VdbeSetNumCols(v, 2);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "database", P3_STATIC);
+ sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "status", P3_STATIC);
+ for(i=0; i<db->nDb; i++){
+ Btree *pBt;
+ Pager *pPager;
+ if( db->aDb[i].zName==0 ) continue;
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, db->aDb[i].zName, P3_STATIC);
+ pBt = db->aDb[i].pBt;
+ if( pBt==0 || (pPager = sqlite3BtreePager(pBt))==0 ){
+ sqlite3VdbeOp3(v, OP_String8, 0, 0, "closed", P3_STATIC);
+ }else{
+ int j = sqlite3pager_lockstate(pPager);
+ sqlite3VdbeOp3(v, OP_String8, 0, 0,
+ (j>=0 && j<=4) ? azLockName[j] : "unknown", P3_STATIC);
+ }
+ sqlite3VdbeAddOp(v, OP_Callback, 2, 0);
+ }
+ }else
+#endif
+
+#ifdef SQLITE_SSE
+ /*
+ ** Check to see if the sqlite_statements table exists. Create it
+ ** if it does not.
+ */
+ if( sqlite3StrICmp(zLeft, "create_sqlite_statement_table")==0 ){
+ extern int sqlite3CreateStatementsTable(Parse*);
+ sqlite3CreateStatementsTable(pParse);
+ }else
+#endif
+
+#if SQLITE_HAS_CODEC
+ if( sqlite3StrICmp(zLeft, "key")==0 ){
+ sqlite3_key(db, zRight, strlen(zRight));
+ }else
+#endif
+#if SQLITE_HAS_CODEC || defined(SQLITE_ENABLE_CEROD)
+ if( sqlite3StrICmp(zLeft, "activate_extensions")==0 ){
+#if SQLITE_HAS_CODEC
+ if( sqlite3StrNICmp(zRight, "see-", 4)==0 ){
+ extern void sqlite3_activate_see(const char*);
+ sqlite3_activate_see(&zRight[4]);
+ }
+#endif
+#ifdef SQLITE_ENABLE_CEROD
+ if( sqlite3StrNICmp(zRight, "cerod-", 6)==0 ){
+ extern void sqlite3_activate_cerod(const char*);
+ sqlite3_activate_cerod(&zRight[6]);
+ }
+#endif
+ }
+#endif
+
+ {}
+
+ if( v ){
+ /* Code an OP_Expire at the end of each PRAGMA program to cause
+ ** the VDBE implementing the pragma to expire. Most (all?) pragmas
+ ** are only valid for a single execution.
+ */
+ sqlite3VdbeAddOp(v, OP_Expire, 1, 0);
+
+ /*
+ ** Reset the safety level, in case the fullfsync flag or synchronous
+ ** setting changed.
+ */
+#ifndef SQLITE_OMIT_PAGER_PRAGMAS
+ if( db->autoCommit ){
+ sqlite3BtreeSetSafetyLevel(pDb->pBt, pDb->safety_level,
+ (db->flags&SQLITE_FullFSync)!=0);
+ }
+#endif
+ }
+pragma_out:
+ sqliteFree(zLeft);
+ sqliteFree(zRight);
+}
+
+#endif /* SQLITE_OMIT_PRAGMA || SQLITE_OMIT_PARSER */
Added: freeswitch/trunk/libs/sqlite/src/prepare.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/prepare.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,588 @@
+/*
+** 2005 May 25
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains the implementation of the sqlite3_prepare()
+** interface, and routines that contribute to loading the database schema
+** from disk.
+**
+** $Id: prepare.c,v 1.40 2006/09/23 20:36:02 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#include <ctype.h>
+
+/*
+** Fill the InitData structure with an error message that indicates
+** that the database is corrupt.
+*/
+static void corruptSchema(InitData *pData, const char *zExtra){
+ if( !sqlite3MallocFailed() ){
+ sqlite3SetString(pData->pzErrMsg, "malformed database schema",
+ zExtra!=0 && zExtra[0]!=0 ? " - " : (char*)0, zExtra, (char*)0);
+ }
+ pData->rc = SQLITE_CORRUPT;
+}
+
+/*
+** This is the callback routine for the code that initializes the
+** database. See sqlite3Init() below for additional information.
+** This routine is also called from the OP_ParseSchema opcode of the VDBE.
+**
+** Each callback contains the following information:
+**
+** argv[0] = name of thing being created
+** argv[1] = root page number for table or index. 0 for trigger or view.
+** argv[2] = SQL text for the CREATE statement.
+**
+*/
+int sqlite3InitCallback(void *pInit, int argc, char **argv, char **azColName){
+ InitData *pData = (InitData*)pInit;
+ sqlite3 *db = pData->db;
+ int iDb = pData->iDb;
+
+ pData->rc = SQLITE_OK;
+ DbClearProperty(db, iDb, DB_Empty);
+ if( sqlite3MallocFailed() ){
+ corruptSchema(pData, 0);
+ return SQLITE_NOMEM;
+ }
+
+ assert( argc==3 );
+ if( argv==0 ) return 0; /* Might happen if EMPTY_RESULT_CALLBACKS are on */
+ if( argv[1]==0 ){
+ corruptSchema(pData, 0);
+ return 1;
+ }
+ assert( iDb>=0 && iDb<db->nDb );
+ if( argv[2] && argv[2][0] ){
+ /* Call the parser to process a CREATE TABLE, INDEX or VIEW.
+ ** But because db->init.busy is set to 1, no VDBE code is generated
+ ** or executed. All the parser does is build the internal data
+ ** structures that describe the table, index, or view.
+ */
+ char *zErr;
+ int rc;
+ assert( db->init.busy );
+ db->init.iDb = iDb;
+ db->init.newTnum = atoi(argv[1]);
+ rc = sqlite3_exec(db, argv[2], 0, 0, &zErr);
+ db->init.iDb = 0;
+ assert( rc!=SQLITE_OK || zErr==0 );
+ if( SQLITE_OK!=rc ){
+ pData->rc = rc;
+ if( rc==SQLITE_NOMEM ){
+ sqlite3FailedMalloc();
+ }else if( rc!=SQLITE_INTERRUPT ){
+ corruptSchema(pData, zErr);
+ }
+ sqlite3_free(zErr);
+ return 1;
+ }
+ }else{
+ /* If the SQL column is blank it means this is an index that
+ ** was created to be the PRIMARY KEY or to fulfill a UNIQUE
+ ** constraint for a CREATE TABLE. The index should have already
+ ** been created when we processed the CREATE TABLE. All we have
+ ** to do here is record the root page number for that index.
+ */
+ Index *pIndex;
+ pIndex = sqlite3FindIndex(db, argv[0], db->aDb[iDb].zName);
+ if( pIndex==0 || pIndex->tnum!=0 ){
+ /* This can occur if there exists an index on a TEMP table which
+ ** has the same name as another index on a permanent table. Since
+ ** the permanent table is hidden by the TEMP table, we can also
+ ** safely ignore the index on the permanent table.
+ */
+ /* Do Nothing */;
+ }else{
+ pIndex->tnum = atoi(argv[1]);
+ }
+ }
+ return 0;
+}
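
The three argv[] values described above mirror the name, rootpage and sql columns of the schema table. The same triple can be inspected from the public API with a query like the one sqlite3InitOne() issues below; the database file name here is hypothetical:

    #include <stdio.h>
    #include "sqlite3.h"

    /* Each row printed carries the same three values that
    ** sqlite3InitCallback() receives in argv[0], argv[1] and argv[2]. */
    static int dump_row(void *notUsed, int nCol, char **azVal, char **azCol){
      printf("name=%s rootpage=%s sql=%s\n",
             azVal[0], azVal[1], azVal[2] ? azVal[2] : "(null)");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      if( sqlite3_open("app.db", &db)!=SQLITE_OK ) return 1;
      sqlite3_exec(db, "SELECT name, rootpage, sql FROM sqlite_master",
                   dump_row, 0, 0);
      sqlite3_close(db);
      return 0;
    }
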
+
+/*
+** Attempt to read the database schema and initialize internal
+** data structures for a single database file. The index of the
+** database file is given by iDb. iDb==0 is used for the main
+** database. iDb==1 should never be used. iDb>=2 is used for
+** auxiliary databases. Return one of the SQLITE_ error codes to
+** indicate success or failure.
+*/
+static int sqlite3InitOne(sqlite3 *db, int iDb, char **pzErrMsg){
+ int rc;
+ BtCursor *curMain;
+ int size;
+ Table *pTab;
+ Db *pDb;
+ char const *azArg[4];
+ int meta[10];
+ InitData initData;
+ char const *zMasterSchema;
+ char const *zMasterName = SCHEMA_TABLE(iDb);
+
+ /*
+ ** The master database table has a structure like this
+ */
+ static const char master_schema[] =
+ "CREATE TABLE sqlite_master(\n"
+ " type text,\n"
+ " name text,\n"
+ " tbl_name text,\n"
+ " rootpage integer,\n"
+ " sql text\n"
+ ")"
+ ;
+#ifndef SQLITE_OMIT_TEMPDB
+ static const char temp_master_schema[] =
+ "CREATE TEMP TABLE sqlite_temp_master(\n"
+ " type text,\n"
+ " name text,\n"
+ " tbl_name text,\n"
+ " rootpage integer,\n"
+ " sql text\n"
+ ")"
+ ;
+#else
+ #define temp_master_schema 0
+#endif
+
+ assert( iDb>=0 && iDb<db->nDb );
+ assert( db->aDb[iDb].pSchema );
+
+ /* zMasterSchema is set to point at the master schema appropriate for
+ ** the database being initialised. zMasterName is the name of the
+ ** master table.
+ */
+ if( !OMIT_TEMPDB && iDb==1 ){
+ zMasterSchema = temp_master_schema;
+ }else{
+ zMasterSchema = master_schema;
+ }
+ zMasterName = SCHEMA_TABLE(iDb);
+
+ /* Construct the schema tables. */
+ sqlite3SafetyOff(db);
+ azArg[0] = zMasterName;
+ azArg[1] = "1";
+ azArg[2] = zMasterSchema;
+ azArg[3] = 0;
+ initData.db = db;
+ initData.iDb = iDb;
+ initData.pzErrMsg = pzErrMsg;
+ rc = sqlite3InitCallback(&initData, 3, (char **)azArg, 0);
+ if( rc ){
+ sqlite3SafetyOn(db);
+ return initData.rc;
+ }
+ pTab = sqlite3FindTable(db, zMasterName, db->aDb[iDb].zName);
+ if( pTab ){
+ pTab->readOnly = 1;
+ }
+ sqlite3SafetyOn(db);
+
+ /* Create a cursor to hold the database open
+ */
+ pDb = &db->aDb[iDb];
+ if( pDb->pBt==0 ){
+ if( !OMIT_TEMPDB && iDb==1 ){
+ DbSetProperty(db, 1, DB_SchemaLoaded);
+ }
+ return SQLITE_OK;
+ }
+ rc = sqlite3BtreeCursor(pDb->pBt, MASTER_ROOT, 0, 0, 0, &curMain);
+ if( rc!=SQLITE_OK && rc!=SQLITE_EMPTY ){
+ sqlite3SetString(pzErrMsg, sqlite3ErrStr(rc), (char*)0);
+ return rc;
+ }
+
+ /* Get the database meta information.
+ **
+ ** Meta values are as follows:
+ ** meta[0] Schema cookie. Changes with each schema change.
+ ** meta[1] File format of schema layer.
+ ** meta[2] Size of the page cache.
+ ** meta[3] Use freelist if 0. Autovacuum if greater than zero.
+ ** meta[4] Db text encoding. 1:UTF-8 2:UTF-16LE 3:UTF-16BE
+ ** meta[5] The user cookie. Used by the application.
+ ** meta[6]
+ ** meta[7]
+ ** meta[8]
+ ** meta[9]
+ **
+ ** Note: The #defined SQLITE_UTF* symbols in sqliteInt.h correspond to
+ ** the possible values of meta[4].
+ */
+ if( rc==SQLITE_OK ){
+ int i;
+ for(i=0; rc==SQLITE_OK && i<sizeof(meta)/sizeof(meta[0]); i++){
+ rc = sqlite3BtreeGetMeta(pDb->pBt, i+1, (u32 *)&meta[i]);
+ }
+ if( rc ){
+ sqlite3SetString(pzErrMsg, sqlite3ErrStr(rc), (char*)0);
+ sqlite3BtreeCloseCursor(curMain);
+ return rc;
+ }
+ }else{
+ memset(meta, 0, sizeof(meta));
+ }
+ pDb->pSchema->schema_cookie = meta[0];
+
+ /* If opening a non-empty database, check the text encoding. For the
+ ** main database, set sqlite3.enc to the encoding of the main database.
+ ** For an attached db, it is an error if the encoding is not the same
+ ** as sqlite3.enc.
+ */
+ if( meta[4] ){ /* text encoding */
+ if( iDb==0 ){
+ /* If opening the main database, set ENC(db). */
+ ENC(db) = (u8)meta[4];
+ db->pDfltColl = sqlite3FindCollSeq(db, SQLITE_UTF8, "BINARY", 6, 0);
+ }else{
+ /* If opening an attached database, the encoding must match ENC(db) */
+ if( meta[4]!=ENC(db) ){
+ sqlite3BtreeCloseCursor(curMain);
+ sqlite3SetString(pzErrMsg, "attached databases must use the same"
+ " text encoding as main database", (char*)0);
+ return SQLITE_ERROR;
+ }
+ }
+ }else{
+ DbSetProperty(db, iDb, DB_Empty);
+ }
+ pDb->pSchema->enc = ENC(db);
+
+ size = meta[2];
+ if( size==0 ){ size = MAX_PAGES; }
+ pDb->pSchema->cache_size = size;
+ sqlite3BtreeSetCacheSize(pDb->pBt, pDb->pSchema->cache_size);
+
+ /*
+ ** file_format==1 Version 3.0.0.
+ ** file_format==2 Version 3.1.3. // ALTER TABLE ADD COLUMN
+ ** file_format==3 Version 3.1.4. // ditto but with non-NULL defaults
+ ** file_format==4 Version 3.3.0. // DESC indices. Boolean constants
+ */
+ pDb->pSchema->file_format = meta[1];
+ if( pDb->pSchema->file_format==0 ){
+ pDb->pSchema->file_format = 1;
+ }
+ if( pDb->pSchema->file_format>SQLITE_MAX_FILE_FORMAT ){
+ sqlite3BtreeCloseCursor(curMain);
+ sqlite3SetString(pzErrMsg, "unsupported file format", (char*)0);
+ return SQLITE_ERROR;
+ }
+
+
+ /* Read the schema information out of the schema tables
+ */
+ assert( db->init.busy );
+ if( rc==SQLITE_EMPTY ){
+ /* For an empty database, there is nothing to read */
+ rc = SQLITE_OK;
+ }else{
+ char *zSql;
+ zSql = sqlite3MPrintf(
+ "SELECT name, rootpage, sql FROM '%q'.%s",
+ db->aDb[iDb].zName, zMasterName);
+ sqlite3SafetyOff(db);
+ rc = sqlite3_exec(db, zSql, sqlite3InitCallback, &initData, 0);
+ if( rc==SQLITE_ABORT ) rc = initData.rc;
+ sqlite3SafetyOn(db);
+ sqliteFree(zSql);
+#ifndef SQLITE_OMIT_ANALYZE
+ if( rc==SQLITE_OK ){
+ sqlite3AnalysisLoad(db, iDb);
+ }
+#endif
+ sqlite3BtreeCloseCursor(curMain);
+ }
+ if( sqlite3MallocFailed() ){
+ /* sqlite3SetString(pzErrMsg, "out of memory", (char*)0); */
+ rc = SQLITE_NOMEM;
+ sqlite3ResetInternalSchema(db, 0);
+ }
+ if( rc==SQLITE_OK ){
+ DbSetProperty(db, iDb, DB_SchemaLoaded);
+ }else{
+ sqlite3ResetInternalSchema(db, iDb);
+ }
+ return rc;
+}
+
+/*
+** Initialize all database files - the main database file, the file
+** used to store temporary tables, and any additional database files
+** created using ATTACH statements. Return a success code. If an
+** error occurs, write an error message into *pzErrMsg.
+**
+ ** After a database is initialized, the DB_SchemaLoaded bit is set
+ ** in the flags field of the Db structure. If the database
+** file was of zero-length, then the DB_Empty flag is also set.
+*/
+int sqlite3Init(sqlite3 *db, char **pzErrMsg){
+ int i, rc;
+ int called_initone = 0;
+
+ if( db->init.busy ) return SQLITE_OK;
+ rc = SQLITE_OK;
+ db->init.busy = 1;
+ for(i=0; rc==SQLITE_OK && i<db->nDb; i++){
+ if( DbHasProperty(db, i, DB_SchemaLoaded) || i==1 ) continue;
+ rc = sqlite3InitOne(db, i, pzErrMsg);
+ if( rc ){
+ sqlite3ResetInternalSchema(db, i);
+ }
+ called_initone = 1;
+ }
+
+ /* Once all the other databases have been initialised, load the schema
+ ** for the TEMP database. This is loaded last, as the TEMP database
+ ** schema may contain references to objects in other databases.
+ */
+#ifndef SQLITE_OMIT_TEMPDB
+ if( rc==SQLITE_OK && db->nDb>1 && !DbHasProperty(db, 1, DB_SchemaLoaded) ){
+ rc = sqlite3InitOne(db, 1, pzErrMsg);
+ if( rc ){
+ sqlite3ResetInternalSchema(db, 1);
+ }
+ called_initone = 1;
+ }
+#endif
+
+ db->init.busy = 0;
+ if( rc==SQLITE_OK && called_initone ){
+ sqlite3CommitInternalChanges(db);
+ }
+
+ return rc;
+}
+
+/*
+** This routine is a no-op if the database schema is already initialised.
+** Otherwise, the schema is loaded. An error code is returned.
+*/
+int sqlite3ReadSchema(Parse *pParse){
+ int rc = SQLITE_OK;
+ sqlite3 *db = pParse->db;
+ if( !db->init.busy ){
+ rc = sqlite3Init(db, &pParse->zErrMsg);
+ }
+ if( rc!=SQLITE_OK ){
+ pParse->rc = rc;
+ pParse->nErr++;
+ }
+ return rc;
+}
+
+
+/*
+** Check schema cookies in all databases. If any cookie is out
+** of date, return 0. If all schema cookies are current, return 1.
+*/
+static int schemaIsValid(sqlite3 *db){
+ int iDb;
+ int rc;
+ BtCursor *curTemp;
+ int cookie;
+ int allOk = 1;
+
+ for(iDb=0; allOk && iDb<db->nDb; iDb++){
+ Btree *pBt;
+ pBt = db->aDb[iDb].pBt;
+ if( pBt==0 ) continue;
+ rc = sqlite3BtreeCursor(pBt, MASTER_ROOT, 0, 0, 0, &curTemp);
+ if( rc==SQLITE_OK ){
+ rc = sqlite3BtreeGetMeta(pBt, 1, (u32 *)&cookie);
+ if( rc==SQLITE_OK && cookie!=db->aDb[iDb].pSchema->schema_cookie ){
+ allOk = 0;
+ }
+ sqlite3BtreeCloseCursor(curTemp);
+ }
+ }
+ return allOk;
+}
+
+/*
+** Convert a schema pointer into the iDb index that indicates
+** which database file in db->aDb[] the schema refers to.
+**
+** If the same database is attached more than once, the first
+** attached database is returned.
+*/
+int sqlite3SchemaToIndex(sqlite3 *db, Schema *pSchema){
+ int i = -1000000;
+
+ /* If pSchema is NULL, then return -1000000. This happens when code in
+ ** expr.c is trying to resolve a reference to a transient table (i.e. one
+ ** created by a sub-select). In this case the return value of this
+ ** function should never be used.
+ **
+ ** We return -1000000 instead of the more usual -1 simply because an
+ ** incorrect use of -1000000 as an index into db->aDb[] is much
+ ** more likely to cause a segfault than -1 (of course there are assert()
+ ** statements too, but it never hurts to play the odds).
+ */
+ if( pSchema ){
+ for(i=0; i<db->nDb; i++){
+ if( db->aDb[i].pSchema==pSchema ){
+ break;
+ }
+ }
+ assert( i>=0 && i<db->nDb );
+ }
+ return i;
+}
+
+/*
+** Compile the UTF-8 encoded SQL statement zSql into a statement handle.
+*/
+int sqlite3_prepare(
+ sqlite3 *db, /* Database handle. */
+ const char *zSql, /* UTF-8 encoded SQL statement. */
+ int nBytes, /* Length of zSql in bytes. */
+ sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */
+ const char** pzTail /* OUT: End of parsed string */
+){
+ Parse sParse;
+ char *zErrMsg = 0;
+ int rc = SQLITE_OK;
+ int i;
+
+ /* Assert that malloc() has not failed */
+ assert( !sqlite3MallocFailed() );
+
+ assert( ppStmt );
+ *ppStmt = 0;
+ if( sqlite3SafetyOn(db) ){
+ return SQLITE_MISUSE;
+ }
+
+ /* If any attached database schemas are locked, do not proceed with
+ ** compilation. Instead return SQLITE_LOCKED immediately.
+ */
+ for(i=0; i<db->nDb; i++) {
+ Btree *pBt = db->aDb[i].pBt;
+ if( pBt && sqlite3BtreeSchemaLocked(pBt) ){
+ const char *zDb = db->aDb[i].zName;
+ sqlite3Error(db, SQLITE_LOCKED, "database schema is locked: %s", zDb);
+ sqlite3SafetyOff(db);
+ return SQLITE_LOCKED;
+ }
+ }
+
+ memset(&sParse, 0, sizeof(sParse));
+ sParse.db = db;
+ if( nBytes>=0 && zSql[nBytes]!=0 ){
+ char *zSqlCopy = sqlite3StrNDup(zSql, nBytes);
+ sqlite3RunParser(&sParse, zSqlCopy, &zErrMsg);
+ sParse.zTail += zSql - zSqlCopy;
+ sqliteFree(zSqlCopy);
+ }else{
+ sqlite3RunParser(&sParse, zSql, &zErrMsg);
+ }
+
+ if( sqlite3MallocFailed() ){
+ sParse.rc = SQLITE_NOMEM;
+ }
+ if( sParse.rc==SQLITE_DONE ) sParse.rc = SQLITE_OK;
+ if( sParse.checkSchema && !schemaIsValid(db) ){
+ sParse.rc = SQLITE_SCHEMA;
+ }
+ if( sParse.rc==SQLITE_SCHEMA ){
+ sqlite3ResetInternalSchema(db, 0);
+ }
+ if( sqlite3MallocFailed() ){
+ sParse.rc = SQLITE_NOMEM;
+ }
+ if( pzTail ) *pzTail = sParse.zTail;
+ rc = sParse.rc;
+
+#ifndef SQLITE_OMIT_EXPLAIN
+ if( rc==SQLITE_OK && sParse.pVdbe && sParse.explain ){
+ if( sParse.explain==2 ){
+ sqlite3VdbeSetNumCols(sParse.pVdbe, 3);
+ sqlite3VdbeSetColName(sParse.pVdbe, 0, COLNAME_NAME, "order", P3_STATIC);
+ sqlite3VdbeSetColName(sParse.pVdbe, 1, COLNAME_NAME, "from", P3_STATIC);
+ sqlite3VdbeSetColName(sParse.pVdbe, 2, COLNAME_NAME, "detail", P3_STATIC);
+ }else{
+ sqlite3VdbeSetNumCols(sParse.pVdbe, 5);
+ sqlite3VdbeSetColName(sParse.pVdbe, 0, COLNAME_NAME, "addr", P3_STATIC);
+ sqlite3VdbeSetColName(sParse.pVdbe, 1, COLNAME_NAME, "opcode", P3_STATIC);
+ sqlite3VdbeSetColName(sParse.pVdbe, 2, COLNAME_NAME, "p1", P3_STATIC);
+ sqlite3VdbeSetColName(sParse.pVdbe, 3, COLNAME_NAME, "p2", P3_STATIC);
+ sqlite3VdbeSetColName(sParse.pVdbe, 4, COLNAME_NAME, "p3", P3_STATIC);
+ }
+ }
+#endif
+
+ if( sqlite3SafetyOff(db) ){
+ rc = SQLITE_MISUSE;
+ }
+ if( rc==SQLITE_OK ){
+ *ppStmt = (sqlite3_stmt*)sParse.pVdbe;
+ }else if( sParse.pVdbe ){
+ sqlite3_finalize((sqlite3_stmt*)sParse.pVdbe);
+ }
+
+ if( zErrMsg ){
+ sqlite3Error(db, rc, "%s", zErrMsg);
+ sqliteFree(zErrMsg);
+ }else{
+ sqlite3Error(db, rc, 0);
+ }
+
+ rc = sqlite3ApiExit(db, rc);
+ sqlite3ReleaseThreadData();
+ assert( (rc&db->errMask)==rc );
+ return rc;
+}
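
A minimal usage sketch for the interface implemented above, compiling several statements from one string by following the pzTail output; the SQL text is purely illustrative:

    #include <stdio.h>
    #include "sqlite3.h"

    int main(void){
      sqlite3 *db;
      const char *zSql =
        "CREATE TABLE t(x); INSERT INTO t VALUES(1); SELECT x FROM t;";
      const char *zTail = zSql;

      if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
      while( zTail && zTail[0] ){
        sqlite3_stmt *pStmt = 0;
        if( sqlite3_prepare(db, zTail, -1, &pStmt, &zTail)!=SQLITE_OK ) break;
        if( pStmt==0 ) continue;   /* nothing but whitespace or a comment */
        while( sqlite3_step(pStmt)==SQLITE_ROW ){
          printf("x = %d\n", sqlite3_column_int(pStmt, 0));
        }
        sqlite3_finalize(pStmt);
      }
      sqlite3_close(db);
      return 0;
    }
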
+
+#ifndef SQLITE_OMIT_UTF16
+/*
+** Compile the UTF-16 encoded SQL statement zSql into a statement handle.
+*/
+int sqlite3_prepare16(
+ sqlite3 *db, /* Database handle. */
+ const void *zSql, /* UTF-16 encoded SQL statement. */
+ int nBytes, /* Length of zSql in bytes. */
+ sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */
+ const void **pzTail /* OUT: End of parsed string */
+){
+ /* This function currently works by first transforming the UTF-16
+ ** encoded string to UTF-8, then invoking sqlite3_prepare(). The
+ ** tricky bit is figuring out the pointer to return in *pzTail.
+ */
+ char *zSql8;
+ const char *zTail8 = 0;
+ int rc = SQLITE_OK;
+
+ if( sqlite3SafetyCheck(db) ){
+ return SQLITE_MISUSE;
+ }
+ zSql8 = sqlite3utf16to8(zSql, nBytes);
+ if( zSql8 ){
+ rc = sqlite3_prepare(db, zSql8, -1, ppStmt, &zTail8);
+ }
+
+ if( zTail8 && pzTail ){
+ /* If sqlite3_prepare returns a tail pointer, we calculate the
+ ** equivalent pointer into the UTF-16 string by counting the unicode
+ ** characters between zSql8 and zTail8, and then returning a pointer
+ ** that many characters into the UTF-16 string.
+ */
+ int chars_parsed = sqlite3utf8CharLen(zSql8, zTail8-zSql8);
+ *pzTail = (u8 *)zSql + sqlite3utf16ByteLen(zSql, chars_parsed);
+ }
+ sqliteFree(zSql8);
+ return sqlite3ApiExit(db, rc);
+}
+#endif /* SQLITE_OMIT_UTF16 */
Added: freeswitch/trunk/libs/sqlite/src/printf.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/printf.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,863 @@
+/*
+** The "printf" code that follows dates from the 1980's. It is in
+** the public domain. The original comments are included here for
+** completeness. They are very out-of-date but might be useful as
+** an historical reference. Most of the "enhancements" have been backed
+** out so that the functionality is now the same as standard printf().
+**
+**************************************************************************
+**
+** The following module is an enhanced replacement for the "printf" subroutines
+** found in the standard C library. The following enhancements are
+** supported:
+**
+** + Additional functions. The standard set of "printf" functions
+** includes printf, fprintf, sprintf, vprintf, vfprintf, and
+** vsprintf. This module adds the following:
+**
+** * snprintf -- Works like sprintf, but has an extra argument
+** which is the size of the buffer written to.
+**
+** * mprintf -- Similar to sprintf. Writes output to memory
+** obtained from malloc.
+**
+** * xprintf -- Calls a function to dispose of output.
+**
+** * nprintf -- No output, but returns the number of characters
+** that would have been output by printf.
+**
+** * A v- version (ex: vsnprintf) of every function is also
+** supplied.
+**
+** + A few extensions to the formatting notation are supported:
+**
+** * The "=" flag (similar to "-") causes the output to be
+** centered in the appropriately sized field.
+**
+** * The %b field outputs an integer in binary notation.
+**
+** * The %c field now accepts a precision. The character output
+** is repeated by the number of times the precision specifies.
+**
+** * The %' field works like %c, but takes as its character the
+** next character of the format string, instead of the next
+** argument. For example, printf("%.78'-") prints 78 minus
+** signs, the same as printf("%.78c",'-').
+**
+** + When compiled using GCC on a SPARC, this version of printf is
+** faster than the library printf for SUN OS 4.1.
+**
+** + All functions are fully reentrant.
+**
+*/
+#include "sqliteInt.h"
+
+/*
+** Conversion types fall into various categories as defined by the
+** following enumeration.
+*/
+#define etRADIX 1 /* Integer types. %d, %x, %o, and so forth */
+#define etFLOAT 2 /* Floating point. %f */
+#define etEXP 3 /* Exponential notation. %e and %E */
+#define etGENERIC 4 /* Floating or exponential, depending on exponent. %g */
+#define etSIZE 5 /* Return number of characters processed so far. %n */
+#define etSTRING 6 /* Strings. %s */
+#define etDYNSTRING 7 /* Dynamically allocated strings. %z */
+#define etPERCENT 8 /* Percent symbol. %% */
+#define etCHARX 9 /* Characters. %c */
+/* The rest are extensions, not normally found in printf() */
+#define etCHARLIT 10 /* Literal characters. %' */
+#define etSQLESCAPE 11 /* Strings with '\'' doubled. %q */
+#define etSQLESCAPE2 12 /* Strings with '\'' doubled and enclosed in '',
+ NULL pointers replaced by SQL NULL. %Q */
+#define etTOKEN 13 /* a pointer to a Token structure */
+#define etSRCLIST 14 /* a pointer to a SrcList */
+#define etPOINTER 15 /* The %p conversion */
+
+
+/*
+** An "etByte" is an 8-bit unsigned value.
+*/
+typedef unsigned char etByte;
+
+/*
+** Each builtin conversion character (ex: the 'd' in "%d") is described
+** by an instance of the following structure
+*/
+typedef struct et_info { /* Information about each format field */
+ char fmttype; /* The format field code letter */
+ etByte base; /* The base for radix conversion */
+ etByte flags; /* One or more of FLAG_ constants below */
+ etByte type; /* Conversion paradigm */
+ etByte charset; /* Offset into aDigits[] of the digits string */
+ etByte prefix; /* Offset into aPrefix[] of the prefix string */
+} et_info;
+
+/*
+** Allowed values for et_info.flags
+*/
+#define FLAG_SIGNED 1 /* True if the value to convert is signed */
+#define FLAG_INTERN 2 /* True if for internal use only */
+#define FLAG_STRING 4 /* Allow infinite precision */
+
+
+/*
+** The following table is searched linearly, so it is good to put the
+** most frequently used conversion types first.
+*/
+static const char aDigits[] = "0123456789ABCDEF0123456789abcdef";
+static const char aPrefix[] = "-x0\000X0";
+static const et_info fmtinfo[] = {
+ { 'd', 10, 1, etRADIX, 0, 0 },
+ { 's', 0, 4, etSTRING, 0, 0 },
+ { 'g', 0, 1, etGENERIC, 30, 0 },
+ { 'z', 0, 6, etDYNSTRING, 0, 0 },
+ { 'q', 0, 4, etSQLESCAPE, 0, 0 },
+ { 'Q', 0, 4, etSQLESCAPE2, 0, 0 },
+ { 'c', 0, 0, etCHARX, 0, 0 },
+ { 'o', 8, 0, etRADIX, 0, 2 },
+ { 'u', 10, 0, etRADIX, 0, 0 },
+ { 'x', 16, 0, etRADIX, 16, 1 },
+ { 'X', 16, 0, etRADIX, 0, 4 },
+#ifndef SQLITE_OMIT_FLOATING_POINT
+ { 'f', 0, 1, etFLOAT, 0, 0 },
+ { 'e', 0, 1, etEXP, 30, 0 },
+ { 'E', 0, 1, etEXP, 14, 0 },
+ { 'G', 0, 1, etGENERIC, 14, 0 },
+#endif
+ { 'i', 10, 1, etRADIX, 0, 0 },
+ { 'n', 0, 0, etSIZE, 0, 0 },
+ { '%', 0, 0, etPERCENT, 0, 0 },
+ { 'p', 16, 0, etPOINTER, 0, 1 },
+ { 'T', 0, 2, etTOKEN, 0, 0 },
+ { 'S', 0, 2, etSRCLIST, 0, 0 },
+};
+#define etNINFO (sizeof(fmtinfo)/sizeof(fmtinfo[0]))
+
+/*
+** If SQLITE_OMIT_FLOATING_POINT is defined, then none of the floating point
+** conversions will work.
+*/
+#ifndef SQLITE_OMIT_FLOATING_POINT
+/*
+** "*val" is a double such that 0.1 <= *val < 10.0
+** Return the ascii code for the leading digit of *val, then
+** multiply "*val" by 10.0 to renormalize.
+**
+** Example:
+** input: *val = 3.14159
+** output: *val = 1.4159 function return = '3'
+**
+** The counter *cnt is incremented each time. After the counter exceeds
+** 16 (the number of significant digits in a 64-bit float), '0' is
+** always returned.
+*/
+static int et_getdigit(LONGDOUBLE_TYPE *val, int *cnt){
+ int digit;
+ LONGDOUBLE_TYPE d;
+ if( (*cnt)++ >= 16 ) return '0';
+ digit = (int)*val;
+ d = digit;
+ digit += '0';
+ *val = (*val - d)*10.0;
+ return digit;
+}
+#endif /* SQLITE_OMIT_FLOATING_POINT */
+
+/*
+** On machines with a small stack size, you can redefine the
+** SQLITE_PRINT_BUF_SIZE to be less than 350. But beware - for
+** smaller values some %f conversions may go into an infinite loop.
+*/
+#ifndef SQLITE_PRINT_BUF_SIZE
+# define SQLITE_PRINT_BUF_SIZE 350
+#endif
+#define etBUFSIZE SQLITE_PRINT_BUF_SIZE /* Size of the output buffer */
+
+/*
+** The root program. All variations call this core.
+**
+** INPUTS:
+** func This is a pointer to a function taking three arguments
+** 1. A pointer to anything. Same as the "arg" parameter.
+** 2. A pointer to the list of characters to be output
+** (Note, this list is NOT null terminated.)
+** 3. An integer number of characters to be output.
+** (Note: This number might be zero.)
+**
+** arg This is the pointer to anything which will be passed as the
+** first argument to "func". Use it for whatever you like.
+**
+** fmt This is the format string, as in the usual printf().
+**
+** ap This is a pointer to a list of arguments. Same as in
+** vfprintf().
+**
+** OUTPUTS:
+** The return value is the total number of characters sent to
+** the function "func". Returns -1 on an error.
+**
+** Note that the order in which automatic variables are declared below
+** seems to make a big difference in determining how fast this beast
+** will run.
+*/
+static int vxprintf(
+ void (*func)(void*,const char*,int), /* Consumer of text */
+ void *arg, /* First argument to the consumer */
+ int useExtended, /* Allow extended %-conversions */
+ const char *fmt, /* Format string */
+ va_list ap /* arguments */
+){
+ int c; /* Next character in the format string */
+ char *bufpt; /* Pointer to the conversion buffer */
+ int precision; /* Precision of the current field */
+ int length; /* Length of the field */
+ int idx; /* A general purpose loop counter */
+ int count; /* Total number of characters output */
+ int width; /* Width of the current field */
+ etByte flag_leftjustify; /* True if "-" flag is present */
+ etByte flag_plussign; /* True if "+" flag is present */
+ etByte flag_blanksign; /* True if " " flag is present */
+ etByte flag_alternateform; /* True if "#" flag is present */
+ etByte flag_altform2; /* True if "!" flag is present */
+ etByte flag_zeropad; /* True if field width constant starts with zero */
+ etByte flag_long; /* True if "l" flag is present */
+ etByte flag_longlong; /* True if the "ll" flag is present */
+ etByte done; /* Loop termination flag */
+ sqlite_uint64 longvalue; /* Value for integer types */
+ LONGDOUBLE_TYPE realvalue; /* Value for real types */
+ const et_info *infop; /* Pointer to the appropriate info structure */
+ char buf[etBUFSIZE]; /* Conversion buffer */
+ char prefix; /* Prefix character. "+" or "-" or " " or '\0'. */
+ etByte errorflag = 0; /* True if an error is encountered */
+ etByte xtype; /* Conversion paradigm */
+ char *zExtra; /* Extra memory used for etSQLESCAPE conversions */
+ static const char spaces[] =
+ " ";
+#define etSPACESIZE (sizeof(spaces)-1)
+#ifndef SQLITE_OMIT_FLOATING_POINT
+ int exp, e2; /* exponent of real numbers */
+ double rounder; /* Used for rounding floating point values */
+ etByte flag_dp; /* True if decimal point should be shown */
+ etByte flag_rtz; /* True if trailing zeros should be removed */
+ etByte flag_exp; /* True to force display of the exponent */
+ int nsd; /* Number of significant digits returned */
+#endif
+
+ func(arg,"",0);
+ count = length = 0;
+ bufpt = 0;
+ for(; (c=(*fmt))!=0; ++fmt){
+ if( c!='%' ){
+ int amt;
+ bufpt = (char *)fmt;
+ amt = 1;
+ while( (c=(*++fmt))!='%' && c!=0 ) amt++;
+ (*func)(arg,bufpt,amt);
+ count += amt;
+ if( c==0 ) break;
+ }
+ if( (c=(*++fmt))==0 ){
+ errorflag = 1;
+ (*func)(arg,"%",1);
+ count++;
+ break;
+ }
+ /* Find out what flags are present */
+ flag_leftjustify = flag_plussign = flag_blanksign =
+ flag_alternateform = flag_altform2 = flag_zeropad = 0;
+ done = 0;
+ do{
+ switch( c ){
+ case '-': flag_leftjustify = 1; break;
+ case '+': flag_plussign = 1; break;
+ case ' ': flag_blanksign = 1; break;
+ case '#': flag_alternateform = 1; break;
+ case '!': flag_altform2 = 1; break;
+ case '0': flag_zeropad = 1; break;
+ default: done = 1; break;
+ }
+ }while( !done && (c=(*++fmt))!=0 );
+ /* Get the field width */
+ width = 0;
+ if( c=='*' ){
+ width = va_arg(ap,int);
+ if( width<0 ){
+ flag_leftjustify = 1;
+ width = -width;
+ }
+ c = *++fmt;
+ }else{
+ while( c>='0' && c<='9' ){
+ width = width*10 + c - '0';
+ c = *++fmt;
+ }
+ }
+ if( width > etBUFSIZE-10 ){
+ width = etBUFSIZE-10;
+ }
+ /* Get the precision */
+ if( c=='.' ){
+ precision = 0;
+ c = *++fmt;
+ if( c=='*' ){
+ precision = va_arg(ap,int);
+ if( precision<0 ) precision = -precision;
+ c = *++fmt;
+ }else{
+ while( c>='0' && c<='9' ){
+ precision = precision*10 + c - '0';
+ c = *++fmt;
+ }
+ }
+ }else{
+ precision = -1;
+ }
+ /* Get the conversion type modifier */
+ if( c=='l' ){
+ flag_long = 1;
+ c = *++fmt;
+ if( c=='l' ){
+ flag_longlong = 1;
+ c = *++fmt;
+ }else{
+ flag_longlong = 0;
+ }
+ }else{
+ flag_long = flag_longlong = 0;
+ }
+ /* Fetch the info entry for the field */
+ infop = 0;
+ for(idx=0; idx<etNINFO; idx++){
+ if( c==fmtinfo[idx].fmttype ){
+ infop = &fmtinfo[idx];
+ if( useExtended || (infop->flags & FLAG_INTERN)==0 ){
+ xtype = infop->type;
+ }else{
+ return -1;
+ }
+ break;
+ }
+ }
+ zExtra = 0;
+ if( infop==0 ){
+ return -1;
+ }
+
+
+ /* Limit the precision to prevent overflowing buf[] during conversion */
+ if( precision>etBUFSIZE-40 && (infop->flags & FLAG_STRING)==0 ){
+ precision = etBUFSIZE-40;
+ }
+
+ /*
+ ** At this point, variables are initialized as follows:
+ **
+ ** flag_alternateform TRUE if a '#' is present.
+ ** flag_altform2 TRUE if a '!' is present.
+ ** flag_plussign TRUE if a '+' is present.
+ ** flag_leftjustify TRUE if a '-' is present or if the
+ ** field width was negative.
+ ** flag_zeropad TRUE if the width began with 0.
+ ** flag_long TRUE if the letter 'l' (ell) prefixed
+ ** the conversion character.
+ ** flag_longlong TRUE if the letter 'll' (ell ell) prefixed
+ ** the conversion character.
+ ** flag_blanksign TRUE if a ' ' is present.
+ ** width The specified field width. This is
+ ** always non-negative. Zero is the default.
+ ** precision The specified precision. The default
+ ** is -1.
+ ** xtype The class of the conversion.
+ ** infop Pointer to the appropriate info struct.
+ */
+ switch( xtype ){
+ case etPOINTER:
+ flag_longlong = sizeof(char*)==sizeof(i64);
+ flag_long = sizeof(char*)==sizeof(long int);
+ /* Fall through into the next case */
+ case etRADIX:
+ if( infop->flags & FLAG_SIGNED ){
+ i64 v;
+ if( flag_longlong ) v = va_arg(ap,i64);
+ else if( flag_long ) v = va_arg(ap,long int);
+ else v = va_arg(ap,int);
+ if( v<0 ){
+ longvalue = -v;
+ prefix = '-';
+ }else{
+ longvalue = v;
+ if( flag_plussign ) prefix = '+';
+ else if( flag_blanksign ) prefix = ' ';
+ else prefix = 0;
+ }
+ }else{
+ if( flag_longlong ) longvalue = va_arg(ap,u64);
+ else if( flag_long ) longvalue = va_arg(ap,unsigned long int);
+ else longvalue = va_arg(ap,unsigned int);
+ prefix = 0;
+ }
+ if( longvalue==0 ) flag_alternateform = 0;
+ if( flag_zeropad && precision<width-(prefix!=0) ){
+ precision = width-(prefix!=0);
+ }
+ bufpt = &buf[etBUFSIZE-1];
+ {
+ register const char *cset; /* Use registers for speed */
+ register int base;
+ cset = &aDigits[infop->charset];
+ base = infop->base;
+ do{ /* Convert to ascii */
+ *(--bufpt) = cset[longvalue%base];
+ longvalue = longvalue/base;
+ }while( longvalue>0 );
+ }
+ length = &buf[etBUFSIZE-1]-bufpt;
+ for(idx=precision-length; idx>0; idx--){
+ *(--bufpt) = '0'; /* Zero pad */
+ }
+ if( prefix ) *(--bufpt) = prefix; /* Add sign */
+ if( flag_alternateform && infop->prefix ){ /* Add "0" or "0x" */
+ const char *pre;
+ char x;
+ pre = &aPrefix[infop->prefix];
+ if( *bufpt!=pre[0] ){
+ for(; (x=(*pre))!=0; pre++) *(--bufpt) = x;
+ }
+ }
+ length = &buf[etBUFSIZE-1]-bufpt;
+ break;
+ case etFLOAT:
+ case etEXP:
+ case etGENERIC:
+ realvalue = va_arg(ap,double);
+#ifndef SQLITE_OMIT_FLOATING_POINT
+ if( precision<0 ) precision = 6; /* Set default precision */
+ if( precision>etBUFSIZE/2-10 ) precision = etBUFSIZE/2-10;
+ if( realvalue<0.0 ){
+ realvalue = -realvalue;
+ prefix = '-';
+ }else{
+ if( flag_plussign ) prefix = '+';
+ else if( flag_blanksign ) prefix = ' ';
+ else prefix = 0;
+ }
+ if( xtype==etGENERIC && precision>0 ) precision--;
+#if 0
+ /* Rounding works like BSD when the constant 0.4999 is used. Weird! */
+ for(idx=precision, rounder=0.4999; idx>0; idx--, rounder*=0.1);
+#else
+ /* It makes more sense to use 0.5 */
+ for(idx=precision, rounder=0.5; idx>0; idx--, rounder*=0.1){}
+#endif
+ if( xtype==etFLOAT ) realvalue += rounder;
+ /* Normalize realvalue to within 10.0 > realvalue >= 1.0 */
+ exp = 0;
+ if( realvalue>0.0 ){
+ while( realvalue>=1e32 && exp<=350 ){ realvalue *= 1e-32; exp+=32; }
+ while( realvalue>=1e8 && exp<=350 ){ realvalue *= 1e-8; exp+=8; }
+ while( realvalue>=10.0 && exp<=350 ){ realvalue *= 0.1; exp++; }
+ while( realvalue<1e-8 && exp>=-350 ){ realvalue *= 1e8; exp-=8; }
+ while( realvalue<1.0 && exp>=-350 ){ realvalue *= 10.0; exp--; }
+ if( exp>350 || exp<-350 ){
+ bufpt = "NaN";
+ length = 3;
+ break;
+ }
+ }
+ bufpt = buf;
+ /*
+ ** If the field type is etGENERIC, then convert to either etEXP
+ ** or etFLOAT, as appropriate.
+ */
+ flag_exp = xtype==etEXP;
+ if( xtype!=etFLOAT ){
+ realvalue += rounder;
+ if( realvalue>=10.0 ){ realvalue *= 0.1; exp++; }
+ }
+ if( xtype==etGENERIC ){
+ flag_rtz = !flag_alternateform;
+ if( exp<-4 || exp>precision ){
+ xtype = etEXP;
+ }else{
+ precision = precision - exp;
+ xtype = etFLOAT;
+ }
+ }else{
+ flag_rtz = 0;
+ }
+ if( xtype==etEXP ){
+ e2 = 0;
+ }else{
+ e2 = exp;
+ }
+ nsd = 0;
+ flag_dp = (precision>0) | flag_alternateform | flag_altform2;
+ /* The sign in front of the number */
+ if( prefix ){
+ *(bufpt++) = prefix;
+ }
+ /* Digits prior to the decimal point */
+ if( e2<0 ){
+ *(bufpt++) = '0';
+ }else{
+ for(; e2>=0; e2--){
+ *(bufpt++) = et_getdigit(&realvalue,&nsd);
+ }
+ }
+ /* The decimal point */
+ if( flag_dp ){
+ *(bufpt++) = '.';
+ }
+ /* "0" digits after the decimal point but before the first
+ ** significant digit of the number */
+ for(e2++; e2<0 && precision>0; precision--, e2++){
+ *(bufpt++) = '0';
+ }
+ /* Significant digits after the decimal point */
+ while( (precision--)>0 ){
+ *(bufpt++) = et_getdigit(&realvalue,&nsd);
+ }
+ /* Remove trailing zeros and the "." if no digits follow the "." */
+ if( flag_rtz && flag_dp ){
+ while( bufpt[-1]=='0' ) *(--bufpt) = 0;
+ assert( bufpt>buf );
+ if( bufpt[-1]=='.' ){
+ if( flag_altform2 ){
+ *(bufpt++) = '0';
+ }else{
+ *(--bufpt) = 0;
+ }
+ }
+ }
+ /* Add the "eNNN" suffix */
+ if( flag_exp || (xtype==etEXP && exp) ){
+ *(bufpt++) = aDigits[infop->charset];
+ if( exp<0 ){
+ *(bufpt++) = '-'; exp = -exp;
+ }else{
+ *(bufpt++) = '+';
+ }
+ if( exp>=100 ){
+ *(bufpt++) = (exp/100)+'0'; /* 100's digit */
+ exp %= 100;
+ }
+ *(bufpt++) = exp/10+'0'; /* 10's digit */
+ *(bufpt++) = exp%10+'0'; /* 1's digit */
+ }
+ *bufpt = 0;
+
+ /* The converted number is in buf[] and zero terminated. Output it.
+ ** Note that the number is in the usual order, not reversed as with
+ ** integer conversions. */
+ length = bufpt-buf;
+ bufpt = buf;
+
+ /* Special case: Add leading zeros if the flag_zeropad flag is
+ ** set and we are not left justified */
+ if( flag_zeropad && !flag_leftjustify && length < width){
+ int i;
+ int nPad = width - length;
+ for(i=width; i>=nPad; i--){
+ bufpt[i] = bufpt[i-nPad];
+ }
+ i = prefix!=0;
+ while( nPad-- ) bufpt[i++] = '0';
+ length = width;
+ }
+#endif
+ break;
+ case etSIZE:
+ *(va_arg(ap,int*)) = count;
+ length = width = 0;
+ break;
+ case etPERCENT:
+ buf[0] = '%';
+ bufpt = buf;
+ length = 1;
+ break;
+ case etCHARLIT:
+ case etCHARX:
+ c = buf[0] = (xtype==etCHARX ? va_arg(ap,int) : *++fmt);
+ if( precision>=0 ){
+ for(idx=1; idx<precision; idx++) buf[idx] = c;
+ length = precision;
+ }else{
+ length = 1;
+ }
+ bufpt = buf;
+ break;
+ case etSTRING:
+ case etDYNSTRING:
+ bufpt = va_arg(ap,char*);
+ if( bufpt==0 ){
+ bufpt = "";
+ }else if( xtype==etDYNSTRING ){
+ zExtra = bufpt;
+ }
+ length = strlen(bufpt);
+ if( precision>=0 && precision<length ) length = precision;
+ break;
+ case etSQLESCAPE:
+ case etSQLESCAPE2: {
+ int i, j, n, ch, isnull;
+ int needQuote;
+ char *escarg = va_arg(ap,char*);
+ isnull = escarg==0;
+ if( isnull ) escarg = (xtype==etSQLESCAPE2 ? "NULL" : "(NULL)");
+ for(i=n=0; (ch=escarg[i])!=0; i++){
+ if( ch=='\'' ) n++;
+ }
+ needQuote = !isnull && xtype==etSQLESCAPE2;
+ n += i + 1 + needQuote*2;
+ if( n>etBUFSIZE ){
+ bufpt = zExtra = sqliteMalloc( n );
+ if( bufpt==0 ) return -1;
+ }else{
+ bufpt = buf;
+ }
+ j = 0;
+ if( needQuote ) bufpt[j++] = '\'';
+ for(i=0; (ch=escarg[i])!=0; i++){
+ bufpt[j++] = ch;
+ if( ch=='\'' ) bufpt[j++] = ch;
+ }
+ if( needQuote ) bufpt[j++] = '\'';
+ bufpt[j] = 0;
+ length = j;
+ /* The precision is ignored on %q and %Q */
+ /* if( precision>=0 && precision<length ) length = precision; */
+ break;
+ }
+ case etTOKEN: {
+ Token *pToken = va_arg(ap, Token*);
+ if( pToken && pToken->z ){
+ (*func)(arg, (char*)pToken->z, pToken->n);
+ }
+ length = width = 0;
+ break;
+ }
+ case etSRCLIST: {
+ SrcList *pSrc = va_arg(ap, SrcList*);
+ int k = va_arg(ap, int);
+ struct SrcList_item *pItem = &pSrc->a[k];
+ assert( k>=0 && k<pSrc->nSrc );
+ if( pItem->zDatabase && pItem->zDatabase[0] ){
+ (*func)(arg, pItem->zDatabase, strlen(pItem->zDatabase));
+ (*func)(arg, ".", 1);
+ }
+ (*func)(arg, pItem->zName, strlen(pItem->zName));
+ length = width = 0;
+ break;
+ }
+ }/* End switch over the format type */
+ /*
+ ** The text of the conversion is pointed to by "bufpt" and is
+ ** "length" characters long. The field width is "width". Do
+ ** the output.
+ */
+ if( !flag_leftjustify ){
+ register int nspace;
+ nspace = width-length;
+ if( nspace>0 ){
+ count += nspace;
+ while( nspace>=etSPACESIZE ){
+ (*func)(arg,spaces,etSPACESIZE);
+ nspace -= etSPACESIZE;
+ }
+ if( nspace>0 ) (*func)(arg,spaces,nspace);
+ }
+ }
+ if( length>0 ){
+ (*func)(arg,bufpt,length);
+ count += length;
+ }
+ if( flag_leftjustify ){
+ register int nspace;
+ nspace = width-length;
+ if( nspace>0 ){
+ count += nspace;
+ while( nspace>=etSPACESIZE ){
+ (*func)(arg,spaces,etSPACESIZE);
+ nspace -= etSPACESIZE;
+ }
+ if( nspace>0 ) (*func)(arg,spaces,nspace);
+ }
+ }
+ if( zExtra ){
+ sqliteFree(zExtra);
+ }
+ }/* End for loop over the format string */
+ return errorflag ? -1 : count;
+} /* End of function */
+
+
+/* This structure is used to store state information about the
+** write to memory that is currently in progress.
+*/
+struct sgMprintf {
+ char *zBase; /* A base allocation */
+ char *zText; /* The string collected so far */
+ int nChar; /* Length of the string so far */
+ int nTotal; /* Output size if unconstrained */
+ int nAlloc; /* Amount of space allocated in zText */
+ void *(*xRealloc)(void*,int); /* Function used to realloc memory */
+};
+
+/*
+** This function implements the callback from vxprintf.
+**
+** This routine adds nNewChar characters of text in zNewText to
+** the sgMprintf structure pointed to by "arg".
+*/
+static void mout(void *arg, const char *zNewText, int nNewChar){
+ struct sgMprintf *pM = (struct sgMprintf*)arg;
+ pM->nTotal += nNewChar;
+ if( pM->nChar + nNewChar + 1 > pM->nAlloc ){
+ if( pM->xRealloc==0 ){
+ nNewChar = pM->nAlloc - pM->nChar - 1;
+ }else{
+ pM->nAlloc = pM->nChar + nNewChar*2 + 1;
+ if( pM->zText==pM->zBase ){
+ pM->zText = pM->xRealloc(0, pM->nAlloc);
+ if( pM->zText && pM->nChar ){
+ memcpy(pM->zText, pM->zBase, pM->nChar);
+ }
+ }else{
+ char *zNew;
+ zNew = pM->xRealloc(pM->zText, pM->nAlloc);
+ if( zNew ){
+ pM->zText = zNew;
+ }
+ }
+ }
+ }
+ if( pM->zText ){
+ if( nNewChar>0 ){
+ memcpy(&pM->zText[pM->nChar], zNewText, nNewChar);
+ pM->nChar += nNewChar;
+ }
+ pM->zText[pM->nChar] = 0;
+ }
+}
+
+/*
+** This routine is a wrapper around xprintf() that invokes mout() as
+** the consumer.
+*/
+static char *base_vprintf(
+ void *(*xRealloc)(void*,int), /* Routine to realloc memory. May be NULL */
+ int useInternal, /* Use internal %-conversions if true */
+ char *zInitBuf, /* Initially write here, before mallocing */
+ int nInitBuf, /* Size of zInitBuf[] */
+ const char *zFormat, /* format string */
+ va_list ap /* arguments */
+){
+ struct sgMprintf sM;
+ sM.zBase = sM.zText = zInitBuf;
+ sM.nChar = sM.nTotal = 0;
+ sM.nAlloc = nInitBuf;
+ sM.xRealloc = xRealloc;
+ vxprintf(mout, &sM, useInternal, zFormat, ap);
+ if( xRealloc ){
+ if( sM.zText==sM.zBase ){
+ sM.zText = xRealloc(0, sM.nChar+1);
+ if( sM.zText ){
+ memcpy(sM.zText, sM.zBase, sM.nChar+1);
+ }
+ }else if( sM.nAlloc>sM.nChar+10 ){
+ char *zNew = xRealloc(sM.zText, sM.nChar+1);
+ if( zNew ){
+ sM.zText = zNew;
+ }
+ }
+ }
+ return sM.zText;
+}
+
+/*
+** Realloc that is a real function, not a macro.
+*/
+static void *printf_realloc(void *old, int size){
+ return sqliteRealloc(old,size);
+}
+
+/*
+** Print into memory obtained from sqliteMalloc(). Use the internal
+** %-conversion extensions.
+*/
+char *sqlite3VMPrintf(const char *zFormat, va_list ap){
+ char zBase[SQLITE_PRINT_BUF_SIZE];
+ return base_vprintf(printf_realloc, 1, zBase, sizeof(zBase), zFormat, ap);
+}
+
+/*
+** Print into memory obtained from sqliteMalloc(). Use the internal
+** %-conversion extensions.
+*/
+char *sqlite3MPrintf(const char *zFormat, ...){
+ va_list ap;
+ char *z;
+ char zBase[SQLITE_PRINT_BUF_SIZE];
+ va_start(ap, zFormat);
+ z = base_vprintf(printf_realloc, 1, zBase, sizeof(zBase), zFormat, ap);
+ va_end(ap);
+ return z;
+}
+
+/*
+** Print into memory obtained from sqlite3_malloc(). Omit the internal
+** %-conversion extensions.
+*/
+char *sqlite3_vmprintf(const char *zFormat, va_list ap){
+ char zBase[SQLITE_PRINT_BUF_SIZE];
+ return base_vprintf(sqlite3_realloc, 0, zBase, sizeof(zBase), zFormat, ap);
+}
+
+/*
+** Print into memory obtained from sqlite3_malloc(). Omit the internal
+** %-conversion extensions.
+*/
+char *sqlite3_mprintf(const char *zFormat, ...){
+ va_list ap;
+ char *z;
+ char zBase[SQLITE_PRINT_BUF_SIZE];
+ va_start(ap, zFormat);
+ z = base_vprintf(sqlite3_realloc, 0, zBase, sizeof(zBase), zFormat, ap);
+ va_end(ap);
+ return z;
+}
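
The %q and %Q extensions listed in the conversion table exist mainly to splice strings into SQL text safely; a short sketch of the public entry point above, with a hypothetical table and value:

    #include <stdio.h>
    #include "sqlite3.h"

    int main(void){
      const char *zName = "O'Reilly";   /* value containing a single quote */
      char *zSql;

      /* %q doubles embedded ' characters; %Q also adds the outer quotes
      ** and renders a NULL pointer as the SQL keyword NULL. */
      zSql = sqlite3_mprintf("INSERT INTO authors(name) VALUES(%Q)", zName);
      printf("%s\n", zSql);   /* INSERT INTO authors(name) VALUES('O''Reilly') */
      sqlite3_free(zSql);
      return 0;
    }
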
+
+/*
+** sqlite3_snprintf() works like snprintf() except that it ignores the
+** current locale settings. This is important for SQLite because it
+** must always use "." as the decimal point, never the "," that some
+** locales specify.
+*/
+char *sqlite3_snprintf(int n, char *zBuf, const char *zFormat, ...){
+ char *z;
+ va_list ap;
+
+ va_start(ap,zFormat);
+ z = base_vprintf(0, 0, zBuf, n, zFormat, ap);
+ va_end(ap);
+ return z;
+}
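
Note that the argument order differs from the C library snprintf(): the buffer size comes first, then the buffer. A short sketch with arbitrary example values:

    #include <stdio.h>
    #include "sqlite3.h"

    int main(void){
      char zBuf[32];
      double price = 1234.5;

      /* Size first, then the buffer; the "." decimal point is used
      ** regardless of the current locale. */
      sqlite3_snprintf(sizeof(zBuf), zBuf, "price=%.2f", price);
      printf("%s\n", zBuf);   /* prints: price=1234.50 */
      return 0;
    }
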
+
+#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG)
+/*
+** A version of printf() that understands %lld. Used for debugging.
+** The printf() built into some versions of Windows does not understand %lld
+** and segfaults if you give it a long long int.
+*/
+void sqlite3DebugPrintf(const char *zFormat, ...){
+ extern int getpid(void);
+ va_list ap;
+ char zBuf[500];
+ va_start(ap, zFormat);
+ base_vprintf(0, 0, zBuf, sizeof(zBuf), zFormat, ap);
+ va_end(ap);
+ fprintf(stdout,"%d: %s", getpid(), zBuf);
+ fflush(stdout);
+}
+#endif
Added: freeswitch/trunk/libs/sqlite/src/random.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/random.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,100 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code to implement a pseudo-random number
+** generator (PRNG) for SQLite.
+**
+** Random numbers are used by some of the database backends in order
+** to generate random integer keys for tables or random filenames.
+**
+** $Id: random.c,v 1.15 2006/01/06 14:32:20 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "os.h"
+
+
+/*
+** Get a single 8-bit random value from the RC4 PRNG. The Mutex
+** must be held while executing this routine.
+**
+** Why not just use a library random generator like lrand48() for this?
+** Because the OP_NewRowid opcode in the VDBE depends on having a very
+** good source of random numbers. The lrand48() library function may
+** well be good enough. But maybe not. Or maybe lrand48() has subtle
+** problems on some systems. It is hard to know. To minimize the risk
+** of problems due to bad lrand48()
+** implementations, SQLite uses this random number generator based
+** on RC4, which we know works very well.
+**
+** (Later): Actually, OP_NewRowid does not depend on a good source of
+** randomness any more. But we will leave this code in all the same.
+*/
+static int randomByte(){
+ unsigned char t;
+
+ /* All threads share a single random number generator.
+ ** This structure is the current state of the generator.
+ */
+ static struct {
+ unsigned char isInit; /* True if initialized */
+ unsigned char i, j; /* State variables */
+ unsigned char s[256]; /* State variables */
+ } prng;
+
+ /* Initialize the state of the random number generator once,
+ ** the first time this routine is called. The seed value does
+ ** not need to contain a lot of randomness since we are not
+ ** trying to do secure encryption or anything like that...
+ **
+ ** Nothing in this file or anywhere else in SQLite does any kind of
+ ** encryption. The RC4 algorithm is being used as a PRNG (pseudo-random
+ ** number generator) not as an encryption device.
+ */
+ if( !prng.isInit ){
+ int i;
+ char k[256];
+ prng.j = 0;
+ prng.i = 0;
+ sqlite3OsRandomSeed(k);
+ for(i=0; i<256; i++){
+ prng.s[i] = i;
+ }
+ for(i=0; i<256; i++){
+ prng.j += prng.s[i] + k[i];
+ t = prng.s[prng.j];
+ prng.s[prng.j] = prng.s[i];
+ prng.s[i] = t;
+ }
+ prng.isInit = 1;
+ }
+
+ /* Generate and return single random byte
+ */
+ prng.i++;
+ t = prng.s[prng.i];
+ prng.j += t;
+ prng.s[prng.i] = prng.s[prng.j];
+ prng.s[prng.j] = t;
+ t += prng.s[prng.i];
+ return prng.s[t];
+}
+
+/*
+** Return N random bytes.
+*/
+void sqlite3Randomness(int N, void *pBuf){
+ unsigned char *zBuf = pBuf;
+ sqlite3OsEnterMutex();
+ while( N-- ){
+ *(zBuf++) = randomByte();
+ }
+ sqlite3OsLeaveMutex();
+}
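
sqlite3Randomness() is internal (declared in sqliteInt.h), so the sketch below only builds inside the library tree; it shows the kind of call the VDBE makes when it needs a random rowid or a temporary file name:

    #include <stdio.h>
    #include "sqliteInt.h"   /* internal header; not part of the public API */

    /* Fill a small buffer from the shared RC4-based PRNG and print it. */
    static void demo_random_bytes(void){
      unsigned char aBuf[8];
      int i;
      sqlite3Randomness(sizeof(aBuf), aBuf);
      for(i=0; i<8; i++) printf("%02x", aBuf[i]);
      printf("\n");
    }
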
Added: freeswitch/trunk/libs/sqlite/src/select.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/select.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,3295 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains C code routines that are called by the parser
+** to handle SELECT statements in SQLite.
+**
+** $Id: select.c,v 1.321 2006/09/29 14:01:05 drh Exp $
+*/
+#include "sqliteInt.h"
+
+
+/*
+** Delete all the content of a Select structure but do not deallocate
+** the select structure itself.
+*/
+static void clearSelect(Select *p){
+ sqlite3ExprListDelete(p->pEList);
+ sqlite3SrcListDelete(p->pSrc);
+ sqlite3ExprDelete(p->pWhere);
+ sqlite3ExprListDelete(p->pGroupBy);
+ sqlite3ExprDelete(p->pHaving);
+ sqlite3ExprListDelete(p->pOrderBy);
+ sqlite3SelectDelete(p->pPrior);
+ sqlite3ExprDelete(p->pLimit);
+ sqlite3ExprDelete(p->pOffset);
+}
+
+
+/*
+** Allocate a new Select structure and return a pointer to that
+** structure.
+*/
+Select *sqlite3SelectNew(
+ ExprList *pEList, /* which columns to include in the result */
+ SrcList *pSrc, /* the FROM clause -- which tables to scan */
+ Expr *pWhere, /* the WHERE clause */
+ ExprList *pGroupBy, /* the GROUP BY clause */
+ Expr *pHaving, /* the HAVING clause */
+ ExprList *pOrderBy, /* the ORDER BY clause */
+ int isDistinct, /* true if the DISTINCT keyword is present */
+ Expr *pLimit, /* LIMIT value. NULL means not used */
+ Expr *pOffset /* OFFSET value. NULL means no offset */
+){
+ Select *pNew;
+ Select standin;
+ pNew = sqliteMalloc( sizeof(*pNew) );
+ assert( !pOffset || pLimit ); /* Can't have OFFSET without LIMIT. */
+ if( pNew==0 ){
+ pNew = &standin;
+ memset(pNew, 0, sizeof(*pNew));
+ }
+ if( pEList==0 ){
+ pEList = sqlite3ExprListAppend(0, sqlite3Expr(TK_ALL,0,0,0), 0);
+ }
+ pNew->pEList = pEList;
+ pNew->pSrc = pSrc;
+ pNew->pWhere = pWhere;
+ pNew->pGroupBy = pGroupBy;
+ pNew->pHaving = pHaving;
+ pNew->pOrderBy = pOrderBy;
+ pNew->isDistinct = isDistinct;
+ pNew->op = TK_SELECT;
+ pNew->pLimit = pLimit;
+ pNew->pOffset = pOffset;
+ pNew->iLimit = -1;
+ pNew->iOffset = -1;
+ pNew->addrOpenEphm[0] = -1;
+ pNew->addrOpenEphm[1] = -1;
+ pNew->addrOpenEphm[2] = -1;
+ if( pNew==&standin ){
+ clearSelect(pNew);
+ pNew = 0;
+ }
+ return pNew;
+}
+
+/*
+** Delete the given Select structure and all of its substructures.
+*/
+void sqlite3SelectDelete(Select *p){
+ if( p ){
+ clearSelect(p);
+ sqliteFree(p);
+ }
+}
+
+/*
+** Given 1 to 3 identifiers preceding the JOIN keyword, determine the
+** type of join. Return an integer constant that expresses that type
+** in terms of the following bit values:
+**
+** JT_INNER
+** JT_CROSS
+** JT_OUTER
+** JT_NATURAL
+** JT_LEFT
+** JT_RIGHT
+**
+** A full outer join is the combination of JT_LEFT and JT_RIGHT.
+**
+** If an illegal or unsupported join type is seen, then still return
+** a join type, but put an error in the pParse structure.
+*/
+int sqlite3JoinType(Parse *pParse, Token *pA, Token *pB, Token *pC){
+ int jointype = 0;
+ Token *apAll[3];
+ Token *p;
+ static const struct {
+ const char zKeyword[8];
+ u8 nChar;
+ u8 code;
+ } keywords[] = {
+ { "natural", 7, JT_NATURAL },
+ { "left", 4, JT_LEFT|JT_OUTER },
+ { "right", 5, JT_RIGHT|JT_OUTER },
+ { "full", 4, JT_LEFT|JT_RIGHT|JT_OUTER },
+ { "outer", 5, JT_OUTER },
+ { "inner", 5, JT_INNER },
+ { "cross", 5, JT_INNER|JT_CROSS },
+ };
+ int i, j;
+ apAll[0] = pA;
+ apAll[1] = pB;
+ apAll[2] = pC;
+ for(i=0; i<3 && apAll[i]; i++){
+ p = apAll[i];
+ for(j=0; j<sizeof(keywords)/sizeof(keywords[0]); j++){
+ if( p->n==keywords[j].nChar
+ && sqlite3StrNICmp((char*)p->z, keywords[j].zKeyword, p->n)==0 ){
+ jointype |= keywords[j].code;
+ break;
+ }
+ }
+ if( j>=sizeof(keywords)/sizeof(keywords[0]) ){
+ jointype |= JT_ERROR;
+ break;
+ }
+ }
+ if(
+ (jointype & (JT_INNER|JT_OUTER))==(JT_INNER|JT_OUTER) ||
+ (jointype & JT_ERROR)!=0
+ ){
+ const char *zSp1 = " ";
+ const char *zSp2 = " ";
+ if( pB==0 ){ zSp1++; }
+ if( pC==0 ){ zSp2++; }
+ sqlite3ErrorMsg(pParse, "unknown or unsupported join type: "
+ "%T%s%T%s%T", pA, zSp1, pB, zSp2, pC);
+ jointype = JT_INNER;
+ }else if( jointype & JT_RIGHT ){
+ sqlite3ErrorMsg(pParse,
+ "RIGHT and FULL OUTER JOINs are not currently supported");
+ jointype = JT_INNER;
+ }
+ return jointype;
+}
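
A hedged end-to-end illustration of the mapping above: the join keywords in the SQL below resolve to JT_NATURAL|JT_LEFT|JT_OUTER and JT_RIGHT|JT_OUTER respectively, and the second case triggers the error message coded in this routine. Table names are hypothetical:

    #include <stdio.h>
    #include "sqlite3.h"

    int main(void){
      sqlite3 *db;
      char *zErr = 0;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db, "CREATE TABLE a(x); CREATE TABLE b(x);", 0, 0, 0);
      /* Accepted: NATURAL LEFT OUTER JOIN */
      sqlite3_exec(db, "SELECT * FROM a NATURAL LEFT OUTER JOIN b", 0, 0, 0);
      /* Rejected: RIGHT joins are not supported by this version */
      sqlite3_exec(db, "SELECT * FROM a RIGHT JOIN b ON a.x=b.x", 0, 0, &zErr);
      printf("%s\n", zErr ? zErr : "(no error)");
      sqlite3_free(zErr);
      sqlite3_close(db);
      return 0;
    }
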
+
+/*
+** Return the index of a column in a table. Return -1 if the column
+** is not contained in the table.
+*/
+static int columnIndex(Table *pTab, const char *zCol){
+ int i;
+ for(i=0; i<pTab->nCol; i++){
+ if( sqlite3StrICmp(pTab->aCol[i].zName, zCol)==0 ) return i;
+ }
+ return -1;
+}
+
+/*
+** Set the value of a token to a '\000'-terminated string.
+*/
+static void setToken(Token *p, const char *z){
+ p->z = (u8*)z;
+ p->n = z ? strlen(z) : 0;
+ p->dyn = 0;
+}
+
+/*
+** Create an expression node for an identifier with the name of zName
+*/
+Expr *sqlite3CreateIdExpr(const char *zName){
+ Token dummy;
+ setToken(&dummy, zName);
+ return sqlite3Expr(TK_ID, 0, 0, &dummy);
+}
+
+
+/*
+** Add a term to the WHERE expression in *ppExpr that requires the
+** zCol column to be equal in the two tables pTab1 and pTab2.
+*/
+static void addWhereTerm(
+ const char *zCol, /* Name of the column */
+ const Table *pTab1, /* First table */
+ const char *zAlias1, /* Alias for first table. May be NULL */
+ const Table *pTab2, /* Second table */
+ const char *zAlias2, /* Alias for second table. May be NULL */
+ int iRightJoinTable, /* VDBE cursor for the right table */
+ Expr **ppExpr /* Add the equality term to this expression */
+){
+ Expr *pE1a, *pE1b, *pE1c;
+ Expr *pE2a, *pE2b, *pE2c;
+ Expr *pE;
+
+ pE1a = sqlite3CreateIdExpr(zCol);
+ pE2a = sqlite3CreateIdExpr(zCol);
+ if( zAlias1==0 ){
+ zAlias1 = pTab1->zName;
+ }
+ pE1b = sqlite3CreateIdExpr(zAlias1);
+ if( zAlias2==0 ){
+ zAlias2 = pTab2->zName;
+ }
+ pE2b = sqlite3CreateIdExpr(zAlias2);
+ pE1c = sqlite3ExprOrFree(TK_DOT, pE1b, pE1a, 0);
+ pE2c = sqlite3ExprOrFree(TK_DOT, pE2b, pE2a, 0);
+ pE = sqlite3ExprOrFree(TK_EQ, pE1c, pE2c, 0);
+ if( pE ){
+ ExprSetProperty(pE, EP_FromJoin);
+ pE->iRightJoinTable = iRightJoinTable;
+ }
+ pE = sqlite3ExprAnd(*ppExpr, pE);
+ if( pE ){
+ *ppExpr = pE;
+ }
+}
+
+/*
+** Set the EP_FromJoin property on all terms of the given expression.
+** And set the Expr.iRightJoinTable to iTable for every term in the
+** expression.
+**
+** The EP_FromJoin property is used on terms of an expression to tell
+** the LEFT OUTER JOIN processing logic that this term is part of the
+** join restriction specified in the ON or USING clause and not a part
+** of the more general WHERE clause. These terms are moved over to the
+** WHERE clause during join processing but we need to remember that they
+** originated in the ON or USING clause.
+**
+** The Expr.iRightJoinTable tells the WHERE clause processing that the
+** expression depends on table iRightJoinTable even if that table is not
+** explicitly mentioned in the expression. That information is needed
+** for cases like this:
+**
+** SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.b AND t1.x=5
+**
+** The where clause needs to defer the handling of the t1.x=5
+** term until after the t2 loop of the join. In that way, a
+** NULL t2 row will be inserted whenever t1.x!=5. If we do not
+** defer the handling of t1.x=5, it will be processed immediately
+** after the t1 loop and rows with t1.x!=5 will never appear in
+** the output, which is incorrect.
+*/
+static void setJoinExpr(Expr *p, int iTable){
+ while( p ){
+ ExprSetProperty(p, EP_FromJoin);
+ p->iRightJoinTable = iTable;
+ setJoinExpr(p->pLeft, iTable);
+ p = p->pRight;
+ }
+}
+
+/*
+** This routine processes the join information for a SELECT statement.
+** ON and USING clauses are converted into extra terms of the WHERE clause.
+** NATURAL joins also create extra WHERE clause terms.
+**
+** The terms of a FROM clause are contained in the Select.pSrc structure.
+** The left-most table is the first entry in Select.pSrc. The right-most
+** table is the last entry. The join operator is held in the entry to
+** the left. Thus entry 0 contains the join operator for the join between
+** entries 0 and 1. Any ON or USING clauses associated with the join are
+** also attached to the left entry.
+**
+** This routine returns the number of errors encountered.
+*/
+static int sqliteProcessJoin(Parse *pParse, Select *p){
+ SrcList *pSrc; /* All tables in the FROM clause */
+ int i, j; /* Loop counters */
+ struct SrcList_item *pLeft; /* Left table being joined */
+ struct SrcList_item *pRight; /* Right table being joined */
+
+ pSrc = p->pSrc;
+ pLeft = &pSrc->a[0];
+ pRight = &pLeft[1];
+ for(i=0; i<pSrc->nSrc-1; i++, pRight++, pLeft++){
+ Table *pLeftTab = pLeft->pTab;
+ Table *pRightTab = pRight->pTab;
+
+ if( pLeftTab==0 || pRightTab==0 ) continue;
+
+ /* When the NATURAL keyword is present, add WHERE clause terms for
+ ** every column that the two tables have in common.
+ */
+ if( pLeft->jointype & JT_NATURAL ){
+ if( pLeft->pOn || pLeft->pUsing ){
+ sqlite3ErrorMsg(pParse, "a NATURAL join may not have "
+ "an ON or USING clause", 0);
+ return 1;
+ }
+ for(j=0; j<pLeftTab->nCol; j++){
+ char *zName = pLeftTab->aCol[j].zName;
+ if( columnIndex(pRightTab, zName)>=0 ){
+ addWhereTerm(zName, pLeftTab, pLeft->zAlias,
+ pRightTab, pRight->zAlias,
+ pRight->iCursor, &p->pWhere);
+
+ }
+ }
+ }
+
+ /* Disallow both ON and USING clauses in the same join
+ */
+ if( pLeft->pOn && pLeft->pUsing ){
+ sqlite3ErrorMsg(pParse, "cannot have both ON and USING "
+ "clauses in the same join");
+ return 1;
+ }
+
+ /* Add the ON clause to the end of the WHERE clause, connected by
+ ** an AND operator.
+ */
+ if( pLeft->pOn ){
+ setJoinExpr(pLeft->pOn, pRight->iCursor);
+ p->pWhere = sqlite3ExprAnd(p->pWhere, pLeft->pOn);
+ pLeft->pOn = 0;
+ }
+
+ /* Create extra terms on the WHERE clause for each column named
+ ** in the USING clause. Example: If the two tables to be joined are
+ ** A and B and the USING clause names X, Y, and Z, then add this
+ ** to the WHERE clause: A.X=B.X AND A.Y=B.Y AND A.Z=B.Z
+ ** Report an error if any column mentioned in the USING clause is
+ ** not contained in both tables to be joined.
+ */
+ if( pLeft->pUsing ){
+ IdList *pList = pLeft->pUsing;
+ for(j=0; j<pList->nId; j++){
+ char *zName = pList->a[j].zName;
+ if( columnIndex(pLeftTab, zName)<0 || columnIndex(pRightTab, zName)<0 ){
+ sqlite3ErrorMsg(pParse, "cannot join using column %s - column "
+ "not present in both tables", zName);
+ return 1;
+ }
+ addWhereTerm(zName, pLeftTab, pLeft->zAlias,
+ pRightTab, pRight->zAlias,
+ pRight->iCursor, &p->pWhere);
+ }
+ }
+ }
+ return 0;
+}
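+
+/*
+** A worked example of the rewrite above, using hypothetical tables
+** t1(a,b) and t2(b,c):
+**
+**     SELECT * FROM t1 NATURAL JOIN t2;
+**
+** Column "b" is common to both tables, so sqliteProcessJoin() adds the
+** term t1.b=t2.b to the WHERE clause, just as a USING(b) clause would.
+*/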
+
+/*
+** Insert code into "v" that will push the record on the top of the
+** stack into the sorter.
+*/
+static void pushOntoSorter(
+ Parse *pParse, /* Parser context */
+ ExprList *pOrderBy, /* The ORDER BY clause */
+ Select *pSelect /* The whole SELECT statement */
+){
+ Vdbe *v = pParse->pVdbe;
+ sqlite3ExprCodeExprList(pParse, pOrderBy);
+ sqlite3VdbeAddOp(v, OP_Sequence, pOrderBy->iECursor, 0);
+ sqlite3VdbeAddOp(v, OP_Pull, pOrderBy->nExpr + 1, 0);
+ sqlite3VdbeAddOp(v, OP_MakeRecord, pOrderBy->nExpr + 2, 0);
+ sqlite3VdbeAddOp(v, OP_IdxInsert, pOrderBy->iECursor, 0);
+ if( pSelect->iLimit>=0 ){
+ int addr1, addr2;
+ addr1 = sqlite3VdbeAddOp(v, OP_IfMemZero, pSelect->iLimit+1, 0);
+ sqlite3VdbeAddOp(v, OP_MemIncr, -1, pSelect->iLimit+1);
+ addr2 = sqlite3VdbeAddOp(v, OP_Goto, 0, 0);
+ sqlite3VdbeJumpHere(v, addr1);
+ sqlite3VdbeAddOp(v, OP_Last, pOrderBy->iECursor, 0);
+ sqlite3VdbeAddOp(v, OP_Delete, pOrderBy->iECursor, 0);
+ sqlite3VdbeJumpHere(v, addr2);
+ pSelect->iLimit = -1;
+ }
+}
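+
+/*
+** A rough reading of the stack discipline above: when pushOntoSorter()
+** is called the data record is on top of the stack. The ORDER BY
+** expressions are coded on top of it, followed by a sequence number,
+** then OP_Pull brings the data record back to the top so that
+** OP_MakeRecord builds a sorter entry of the form
+** (key1, ..., keyN, sequence, data-record), which generateSortTail()
+** later unpacks with "OP_Column iTab, N+1".
+*/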
+
+/*
+** Add code to implement the OFFSET clause
+*/
+static void codeOffset(
+ Vdbe *v, /* Generate code into this VM */
+ Select *p, /* The SELECT statement being coded */
+ int iContinue, /* Jump here to skip the current record */
+ int nPop /* Number of times to pop stack when jumping */
+){
+ if( p->iOffset>=0 && iContinue!=0 ){
+ int addr;
+ sqlite3VdbeAddOp(v, OP_MemIncr, -1, p->iOffset);
+ addr = sqlite3VdbeAddOp(v, OP_IfMemNeg, p->iOffset, 0);
+ if( nPop>0 ){
+ sqlite3VdbeAddOp(v, OP_Pop, nPop, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_Goto, 0, iContinue);
+ VdbeComment((v, "# skip OFFSET records"));
+ sqlite3VdbeJumpHere(v, addr);
+ }
+}
+
+/*
+** Add code that will check to make sure the top N elements of the
+** stack are distinct. iTab is a sorting index that holds previously
+** seen combinations of the N values. A new entry is made in iTab
+** if the current N values are new.
+**
+** A jump to addrRepeat is made and the N+1 values are popped from the
+** stack if the top N elements are not distinct.
+*/
+static void codeDistinct(
+ Vdbe *v, /* Generate code into this VM */
+ int iTab, /* A sorting index used to test for distinctness */
+ int addrRepeat, /* Jump to here if not distinct */
+ int N /* The top N elements of the stack must be distinct */
+){
+ sqlite3VdbeAddOp(v, OP_MakeRecord, -N, 0);
+ sqlite3VdbeAddOp(v, OP_Distinct, iTab, sqlite3VdbeCurrentAddr(v)+3);
+ sqlite3VdbeAddOp(v, OP_Pop, N+1, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, addrRepeat);
+ VdbeComment((v, "# skip indistinct records"));
+ sqlite3VdbeAddOp(v, OP_IdxInsert, iTab, 0);
+}
+
+
+/*
+** This routine generates the code for the inside of the inner loop
+** of a SELECT.
+**
+** If srcTab and nColumn are both zero, then the pEList expressions
+** are evaluated in order to get the data for this row. If nColumn>0
+** then data is pulled from srcTab and pEList is used only to get the
+** datatypes for each column.
+*/
+static int selectInnerLoop(
+ Parse *pParse, /* The parser context */
+ Select *p, /* The complete select statement being coded */
+ ExprList *pEList, /* List of values being extracted */
+ int srcTab, /* Pull data from this table */
+ int nColumn, /* Number of columns in the source table */
+ ExprList *pOrderBy, /* If not NULL, sort results using this key */
+ int distinct, /* If >=0, make sure results are distinct */
+ int eDest, /* How to dispose of the results */
+ int iParm, /* An argument to the disposal method */
+ int iContinue, /* Jump here to continue with next row */
+ int iBreak, /* Jump here to break out of the inner loop */
+ char *aff /* affinity string if eDest is SRT_Union */
+){
+ Vdbe *v = pParse->pVdbe;
+ int i;
+ int hasDistinct; /* True if the DISTINCT keyword is present */
+
+ if( v==0 ) return 0;
+ assert( pEList!=0 );
+
+ /* If there was a LIMIT clause on the SELECT statement, then do the check
+ ** to see if this row should be output.
+ */
+ hasDistinct = distinct>=0 && pEList->nExpr>0;
+ if( pOrderBy==0 && !hasDistinct ){
+ codeOffset(v, p, iContinue, 0);
+ }
+
+ /* Pull the requested columns.
+ */
+ if( nColumn>0 ){
+ for(i=0; i<nColumn; i++){
+ sqlite3VdbeAddOp(v, OP_Column, srcTab, i);
+ }
+ }else{
+ nColumn = pEList->nExpr;
+ sqlite3ExprCodeExprList(pParse, pEList);
+ }
+
+ /* If the DISTINCT keyword was present on the SELECT statement
+ ** and this row has been seen before, then do not make this row
+ ** part of the result.
+ */
+ if( hasDistinct ){
+ assert( pEList!=0 );
+ assert( pEList->nExpr==nColumn );
+ codeDistinct(v, distinct, iContinue, nColumn);
+ if( pOrderBy==0 ){
+ codeOffset(v, p, iContinue, nColumn);
+ }
+ }
+
+ switch( eDest ){
+ /* In this mode, write each query result to the key of the temporary
+ ** table iParm.
+ */
+#ifndef SQLITE_OMIT_COMPOUND_SELECT
+ case SRT_Union: {
+ sqlite3VdbeAddOp(v, OP_MakeRecord, nColumn, 0);
+ if( aff ){
+ sqlite3VdbeChangeP3(v, -1, aff, P3_STATIC);
+ }
+ sqlite3VdbeAddOp(v, OP_IdxInsert, iParm, 0);
+ break;
+ }
+
+ /* Construct a record from the query result, but instead of
+ ** saving that record, use it as a key to delete elements from
+ ** the temporary table iParm.
+ */
+ case SRT_Except: {
+ int addr;
+ addr = sqlite3VdbeAddOp(v, OP_MakeRecord, nColumn, 0);
+ sqlite3VdbeChangeP3(v, -1, aff, P3_STATIC);
+ sqlite3VdbeAddOp(v, OP_NotFound, iParm, addr+3);
+ sqlite3VdbeAddOp(v, OP_Delete, iParm, 0);
+ break;
+ }
+#endif
+
+ /* Store the result as data using a unique key.
+ */
+ case SRT_Table:
+ case SRT_EphemTab: {
+ sqlite3VdbeAddOp(v, OP_MakeRecord, nColumn, 0);
+ if( pOrderBy ){
+ pushOntoSorter(pParse, pOrderBy, p);
+ }else{
+ sqlite3VdbeAddOp(v, OP_NewRowid, iParm, 0);
+ sqlite3VdbeAddOp(v, OP_Pull, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Insert, iParm, 0);
+ }
+ break;
+ }
+
+#ifndef SQLITE_OMIT_SUBQUERY
+ /* If we are creating a set for an "expr IN (SELECT ...)" construct,
+ ** then there should be a single item on the stack. Write this
+ ** item into the set table with bogus data.
+ */
+ case SRT_Set: {
+ int addr1 = sqlite3VdbeCurrentAddr(v);
+ int addr2;
+
+ assert( nColumn==1 );
+ sqlite3VdbeAddOp(v, OP_NotNull, -1, addr1+3);
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ addr2 = sqlite3VdbeAddOp(v, OP_Goto, 0, 0);
+ if( pOrderBy ){
+ /* At first glance you would think we could optimize out the
+ ** ORDER BY in this case since the order of entries in the set
+ ** does not matter. But there might be a LIMIT clause, in which
+ ** case the order does matter */
+ pushOntoSorter(pParse, pOrderBy, p);
+ }else{
+ char affinity = (iParm>>16)&0xFF;
+ affinity = sqlite3CompareAffinity(pEList->a[0].pExpr, affinity);
+ sqlite3VdbeOp3(v, OP_MakeRecord, 1, 0, &affinity, 1);
+ sqlite3VdbeAddOp(v, OP_IdxInsert, (iParm&0x0000FFFF), 0);
+ }
+ sqlite3VdbeJumpHere(v, addr2);
+ break;
+ }
+
+ /* If any rows exist in the result set, record that fact and abort.
+ */
+ case SRT_Exists: {
+ sqlite3VdbeAddOp(v, OP_MemInt, 1, iParm);
+ sqlite3VdbeAddOp(v, OP_Pop, nColumn, 0);
+ /* The LIMIT clause will terminate the loop for us */
+ break;
+ }
+
+ /* If this is a scalar select that is part of an expression, then
+ ** store the results in the appropriate memory cell and break out
+ ** of the scan loop.
+ */
+ case SRT_Mem: {
+ assert( nColumn==1 );
+ if( pOrderBy ){
+ pushOntoSorter(pParse, pOrderBy, p);
+ }else{
+ sqlite3VdbeAddOp(v, OP_MemStore, iParm, 1);
+ /* The LIMIT clause will jump out of the loop for us */
+ }
+ break;
+ }
+#endif /* #ifndef SQLITE_OMIT_SUBQUERY */
+
+ /* Send the data to the callback function or to a subroutine. In the
+ ** case of a subroutine, the subroutine itself is responsible for
+ ** popping the data from the stack.
+ */
+ case SRT_Subroutine:
+ case SRT_Callback: {
+ if( pOrderBy ){
+ sqlite3VdbeAddOp(v, OP_MakeRecord, nColumn, 0);
+ pushOntoSorter(pParse, pOrderBy, p);
+ }else if( eDest==SRT_Subroutine ){
+ sqlite3VdbeAddOp(v, OP_Gosub, 0, iParm);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Callback, nColumn, 0);
+ }
+ break;
+ }
+
+#if !defined(SQLITE_OMIT_TRIGGER)
+ /* Discard the results. This is used for SELECT statements inside
+ ** the body of a TRIGGER. The purpose of such selects is to call
+ ** user-defined functions that have side effects. We do not care
+ ** about the actual results of the select.
+ */
+ default: {
+ assert( eDest==SRT_Discard );
+ sqlite3VdbeAddOp(v, OP_Pop, nColumn, 0);
+ break;
+ }
+#endif
+ }
+
+ /* Jump to the end of the loop if the LIMIT is reached.
+ */
+ if( p->iLimit>=0 && pOrderBy==0 ){
+ sqlite3VdbeAddOp(v, OP_MemIncr, -1, p->iLimit);
+ sqlite3VdbeAddOp(v, OP_IfMemZero, p->iLimit, iBreak);
+ }
+ return 0;
+}
+
+/*
+** Given an expression list, generate a KeyInfo structure that records
+** the collating sequence for each expression in that expression list.
+**
+** If the ExprList is an ORDER BY or GROUP BY clause then the resulting
+** KeyInfo structure is appropriate for initializing a virtual index to
+** implement that clause. If the ExprList is the result set of a SELECT
+** then the KeyInfo structure is appropriate for initializing a virtual
+** index to implement a DISTINCT test.
+**
+** Space to hold the KeyInfo structure is obtained from malloc. The calling
+** function is responsible for seeing that this structure is eventually
+** freed. Adding the KeyInfo structure to the P3 field of an opcode using
+** P3_KEYINFO_HANDOFF is the usual way of dealing with this.
+*/
+static KeyInfo *keyInfoFromExprList(Parse *pParse, ExprList *pList){
+ sqlite3 *db = pParse->db;
+ int nExpr;
+ KeyInfo *pInfo;
+ struct ExprList_item *pItem;
+ int i;
+
+ nExpr = pList->nExpr;
+ pInfo = sqliteMalloc( sizeof(*pInfo) + nExpr*(sizeof(CollSeq*)+1) );
+ if( pInfo ){
+ pInfo->aSortOrder = (u8*)&pInfo->aColl[nExpr];
+ pInfo->nField = nExpr;
+ pInfo->enc = ENC(db);
+ for(i=0, pItem=pList->a; i<nExpr; i++, pItem++){
+ CollSeq *pColl;
+ pColl = sqlite3ExprCollSeq(pParse, pItem->pExpr);
+ if( !pColl ){
+ pColl = db->pDfltColl;
+ }
+ pInfo->aColl[i] = pColl;
+ pInfo->aSortOrder[i] = pItem->sortOrder;
+ }
+ }
+ return pInfo;
+}
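+
+/*
+** A usage sketch of the P3_KEYINFO_HANDOFF idiom described above, with
+** hypothetical cursor number iTab and column count nCol:
+**
+**     KeyInfo *pKey = keyInfoFromExprList(pParse, pOrderBy);
+**     int addr = sqlite3VdbeAddOp(v, OP_OpenEphemeral, iTab, nCol);
+**     sqlite3VdbeChangeP3(v, addr, (char*)pKey, P3_KEYINFO_HANDOFF);
+**
+** The HANDOFF form transfers ownership of pKey to the VDBE, which frees
+** it when the instruction is destroyed.
+*/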
+
+
+/*
+** If the inner loop was generated using a non-null pOrderBy argument,
+** then the results were placed in a sorter. After the loop is terminated
+** we need to run the sorter and output the results. The following
+** routine generates the code needed to do that.
+*/
+static void generateSortTail(
+ Parse *pParse, /* Parsing context */
+ Select *p, /* The SELECT statement */
+ Vdbe *v, /* Generate code into this VDBE */
+ int nColumn, /* Number of columns of data */
+ int eDest, /* Write the sorted results here */
+ int iParm /* Optional parameter associated with eDest */
+){
+ int brk = sqlite3VdbeMakeLabel(v);
+ int cont = sqlite3VdbeMakeLabel(v);
+ int addr;
+ int iTab;
+ int pseudoTab;
+ ExprList *pOrderBy = p->pOrderBy;
+
+ iTab = pOrderBy->iECursor;
+ if( eDest==SRT_Callback || eDest==SRT_Subroutine ){
+ pseudoTab = pParse->nTab++;
+ sqlite3VdbeAddOp(v, OP_OpenPseudo, pseudoTab, 0);
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, pseudoTab, nColumn);
+ }
+ addr = 1 + sqlite3VdbeAddOp(v, OP_Sort, iTab, brk);
+ codeOffset(v, p, cont, 0);
+ if( eDest==SRT_Callback || eDest==SRT_Subroutine ){
+ sqlite3VdbeAddOp(v, OP_Integer, 1, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_Column, iTab, pOrderBy->nExpr + 1);
+ switch( eDest ){
+ case SRT_Table:
+ case SRT_EphemTab: {
+ sqlite3VdbeAddOp(v, OP_NewRowid, iParm, 0);
+ sqlite3VdbeAddOp(v, OP_Pull, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Insert, iParm, 0);
+ break;
+ }
+#ifndef SQLITE_OMIT_SUBQUERY
+ case SRT_Set: {
+ assert( nColumn==1 );
+ sqlite3VdbeAddOp(v, OP_NotNull, -1, sqlite3VdbeCurrentAddr(v)+3);
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, sqlite3VdbeCurrentAddr(v)+3);
+ sqlite3VdbeOp3(v, OP_MakeRecord, 1, 0, "c", P3_STATIC);
+ sqlite3VdbeAddOp(v, OP_IdxInsert, (iParm&0x0000FFFF), 0);
+ break;
+ }
+ case SRT_Mem: {
+ assert( nColumn==1 );
+ sqlite3VdbeAddOp(v, OP_MemStore, iParm, 1);
+ /* The LIMIT clause will terminate the loop for us */
+ break;
+ }
+#endif
+ case SRT_Callback:
+ case SRT_Subroutine: {
+ int i;
+ sqlite3VdbeAddOp(v, OP_Insert, pseudoTab, 0);
+ for(i=0; i<nColumn; i++){
+ sqlite3VdbeAddOp(v, OP_Column, pseudoTab, i);
+ }
+ if( eDest==SRT_Callback ){
+ sqlite3VdbeAddOp(v, OP_Callback, nColumn, 0);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Gosub, 0, iParm);
+ }
+ break;
+ }
+ default: {
+ /* Do nothing */
+ break;
+ }
+ }
+
+ /* Jump to the end of the loop when the LIMIT is reached
+ */
+ if( p->iLimit>=0 ){
+ sqlite3VdbeAddOp(v, OP_MemIncr, -1, p->iLimit);
+ sqlite3VdbeAddOp(v, OP_IfMemZero, p->iLimit, brk);
+ }
+
+ /* The bottom of the loop
+ */
+ sqlite3VdbeResolveLabel(v, cont);
+ sqlite3VdbeAddOp(v, OP_Next, iTab, addr);
+ sqlite3VdbeResolveLabel(v, brk);
+ if( eDest==SRT_Callback || eDest==SRT_Subroutine ){
+ sqlite3VdbeAddOp(v, OP_Close, pseudoTab, 0);
+ }
+
+}
+
+/*
+** Return a pointer to a string containing the 'declaration type' of the
+** expression pExpr. The string may be treated as static by the caller.
+**
+** The declaration type is the exact datatype definition extracted from the
+** original CREATE TABLE statement if the expression is a column. The
+** declaration type for a ROWID field is INTEGER. Exactly when an expression
+** is considered a column can be complex in the presence of subqueries. The
+** result-set expression in all of the following SELECT statements is
+** considered a column by this function.
+**
+** SELECT col FROM tbl;
+** SELECT (SELECT col FROM tbl;
+** SELECT (SELECT col FROM tbl);
+** SELECT abc FROM (SELECT col AS abc FROM tbl);
+**
+** The declaration type for any expression other than a column is NULL.
+*/
+static const char *columnType(
+ NameContext *pNC,
+ Expr *pExpr,
+ const char **pzOriginDb,
+ const char **pzOriginTab,
+ const char **pzOriginCol
+){
+ char const *zType = 0;
+ char const *zOriginDb = 0;
+ char const *zOriginTab = 0;
+ char const *zOriginCol = 0;
+ int j;
+ if( pExpr==0 || pNC->pSrcList==0 ) return 0;
+
+ /* The TK_AS operator can only occur in ORDER BY, GROUP BY, HAVING,
+ ** and LIMIT clauses. But pExpr originates in the result set of a
+ ** SELECT. So pExpr can never contain an AS operator.
+ */
+ assert( pExpr->op!=TK_AS );
+
+ switch( pExpr->op ){
+ case TK_AGG_COLUMN:
+ case TK_COLUMN: {
+ /* The expression is a column. Locate the table the column is being
+ ** extracted from in NameContext.pSrcList. This table may be real
+ ** database table or a subquery.
+ */
+ Table *pTab = 0; /* Table structure column is extracted from */
+ Select *pS = 0; /* Select the column is extracted from */
+ int iCol = pExpr->iColumn; /* Index of column in pTab */
+ while( pNC && !pTab ){
+ SrcList *pTabList = pNC->pSrcList;
+ for(j=0;j<pTabList->nSrc && pTabList->a[j].iCursor!=pExpr->iTable;j++);
+ if( j<pTabList->nSrc ){
+ pTab = pTabList->a[j].pTab;
+ pS = pTabList->a[j].pSelect;
+ }else{
+ pNC = pNC->pNext;
+ }
+ }
+
+ if( pTab==0 ){
+ /* FIX ME:
+ ** This can occur if you have something like "SELECT new.x;" inside
+ ** a trigger. In other words, if you reference the special "new"
+ ** table in the result set of a select. We do not have a good way
+ ** to find the actual table type, so call it "TEXT". This is really
+ ** something of a bug, but I do not know how to fix it.
+ **
+ ** This code does not produce the correct answer - it just prevents
+ ** a segfault. See ticket #1229.
+ */
+ zType = "TEXT";
+ break;
+ }
+
+ assert( pTab );
+ if( pS ){
+ /* The "table" is actually a sub-select or a view in the FROM clause
+ ** of the SELECT statement. Return the declaration type and origin
+ ** data for the result-set column of the sub-select.
+ */
+ if( iCol>=0 && iCol<pS->pEList->nExpr ){
+ /* If iCol is less than zero, then the expression requests the
+ ** rowid of the sub-select or view. This expression is legal (see
+ ** test case misc2.2.2) - it always evaluates to NULL.
+ */
+ NameContext sNC;
+ Expr *p = pS->pEList->a[iCol].pExpr;
+ sNC.pSrcList = pS->pSrc;
+ sNC.pNext = 0;
+ sNC.pParse = pNC->pParse;
+ zType = columnType(&sNC, p, &zOriginDb, &zOriginTab, &zOriginCol);
+ }
+ }else if( pTab->pSchema ){
+ /* A real table */
+ assert( !pS );
+ if( iCol<0 ) iCol = pTab->iPKey;
+ assert( iCol==-1 || (iCol>=0 && iCol<pTab->nCol) );
+ if( iCol<0 ){
+ zType = "INTEGER";
+ zOriginCol = "rowid";
+ }else{
+ zType = pTab->aCol[iCol].zType;
+ zOriginCol = pTab->aCol[iCol].zName;
+ }
+ zOriginTab = pTab->zName;
+ if( pNC->pParse ){
+ int iDb = sqlite3SchemaToIndex(pNC->pParse->db, pTab->pSchema);
+ zOriginDb = pNC->pParse->db->aDb[iDb].zName;
+ }
+ }
+ break;
+ }
+#ifndef SQLITE_OMIT_SUBQUERY
+ case TK_SELECT: {
+ /* The expression is a sub-select. Return the declaration type and
+ ** origin info for the single column in the result set of the SELECT
+ ** statement.
+ */
+ NameContext sNC;
+ Select *pS = pExpr->pSelect;
+ Expr *p = pS->pEList->a[0].pExpr;
+ sNC.pSrcList = pS->pSrc;
+ sNC.pNext = pNC;
+ sNC.pParse = pNC->pParse;
+ zType = columnType(&sNC, p, &zOriginDb, &zOriginTab, &zOriginCol);
+ break;
+ }
+#endif
+ }
+
+ if( pzOriginDb ){
+ assert( pzOriginTab && pzOriginCol );
+ *pzOriginDb = zOriginDb;
+ *pzOriginTab = zOriginTab;
+ *pzOriginCol = zOriginCol;
+ }
+ return zType;
+}
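+
+/*
+** Illustration of the rules above, assuming a hypothetical schema
+** CREATE TABLE t1(x VARCHAR(10)):
+**
+**     SELECT x FROM t1;       -- declaration type "VARCHAR(10)",
+**                             -- origin t1.x
+**     SELECT x+1 FROM t1;     -- not a column, declaration type NULL
+**     SELECT rowid FROM t1;   -- declaration type "INTEGER"
+*/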
+
+/*
+** Generate code that will tell the VDBE the declaration types of columns
+** in the result set.
+*/
+static void generateColumnTypes(
+ Parse *pParse, /* Parser context */
+ SrcList *pTabList, /* List of tables */
+ ExprList *pEList /* Expressions defining the result set */
+){
+ Vdbe *v = pParse->pVdbe;
+ int i;
+ NameContext sNC;
+ sNC.pSrcList = pTabList;
+ sNC.pParse = pParse;
+ for(i=0; i<pEList->nExpr; i++){
+ Expr *p = pEList->a[i].pExpr;
+ const char *zOrigDb = 0;
+ const char *zOrigTab = 0;
+ const char *zOrigCol = 0;
+ const char *zType = columnType(&sNC, p, &zOrigDb, &zOrigTab, &zOrigCol);
+
+ /* The vdbe must make its own copy of the column-type and other
+ ** column-specific strings, in case the schema is reset before this
+ ** virtual machine is deleted.
+ */
+ sqlite3VdbeSetColName(v, i, COLNAME_DECLTYPE, zType, P3_TRANSIENT);
+ sqlite3VdbeSetColName(v, i, COLNAME_DATABASE, zOrigDb, P3_TRANSIENT);
+ sqlite3VdbeSetColName(v, i, COLNAME_TABLE, zOrigTab, P3_TRANSIENT);
+ sqlite3VdbeSetColName(v, i, COLNAME_COLUMN, zOrigCol, P3_TRANSIENT);
+ }
+}
+
+/*
+** Generate code that will tell the VDBE the names of columns
+** in the result set. This information is used to provide the
+** azCol[] values in the callback.
+*/
+static void generateColumnNames(
+ Parse *pParse, /* Parser context */
+ SrcList *pTabList, /* List of tables */
+ ExprList *pEList /* Expressions defining the result set */
+){
+ Vdbe *v = pParse->pVdbe;
+ int i, j;
+ sqlite3 *db = pParse->db;
+ int fullNames, shortNames;
+
+#ifndef SQLITE_OMIT_EXPLAIN
+ /* If this is an EXPLAIN, skip this step */
+ if( pParse->explain ){
+ return;
+ }
+#endif
+
+ assert( v!=0 );
+ if( pParse->colNamesSet || v==0 || sqlite3MallocFailed() ) return;
+ pParse->colNamesSet = 1;
+ fullNames = (db->flags & SQLITE_FullColNames)!=0;
+ shortNames = (db->flags & SQLITE_ShortColNames)!=0;
+ sqlite3VdbeSetNumCols(v, pEList->nExpr);
+ for(i=0; i<pEList->nExpr; i++){
+ Expr *p;
+ p = pEList->a[i].pExpr;
+ if( p==0 ) continue;
+ if( pEList->a[i].zName ){
+ char *zName = pEList->a[i].zName;
+ sqlite3VdbeSetColName(v, i, COLNAME_NAME, zName, strlen(zName));
+ continue;
+ }
+ if( p->op==TK_COLUMN && pTabList ){
+ Table *pTab;
+ char *zCol;
+ int iCol = p->iColumn;
+ for(j=0; j<pTabList->nSrc && pTabList->a[j].iCursor!=p->iTable; j++){}
+ assert( j<pTabList->nSrc );
+ pTab = pTabList->a[j].pTab;
+ if( iCol<0 ) iCol = pTab->iPKey;
+ assert( iCol==-1 || (iCol>=0 && iCol<pTab->nCol) );
+ if( iCol<0 ){
+ zCol = "rowid";
+ }else{
+ zCol = pTab->aCol[iCol].zName;
+ }
+ if( !shortNames && !fullNames && p->span.z && p->span.z[0] ){
+ sqlite3VdbeSetColName(v, i, COLNAME_NAME, (char*)p->span.z, p->span.n);
+ }else if( fullNames || (!shortNames && pTabList->nSrc>1) ){
+ char *zName = 0;
+ char *zTab;
+
+ zTab = pTabList->a[j].zAlias;
+ if( fullNames || zTab==0 ) zTab = pTab->zName;
+ sqlite3SetString(&zName, zTab, ".", zCol, (char*)0);
+ sqlite3VdbeSetColName(v, i, COLNAME_NAME, zName, P3_DYNAMIC);
+ }else{
+ sqlite3VdbeSetColName(v, i, COLNAME_NAME, zCol, strlen(zCol));
+ }
+ }else if( p->span.z && p->span.z[0] ){
+ sqlite3VdbeSetColName(v, i, COLNAME_NAME, (char*)p->span.z, p->span.n);
+ /* sqlite3VdbeCompressSpace(v, addr); */
+ }else{
+ char zName[30];
+ assert( p->op!=TK_COLUMN || pTabList==0 );
+ sprintf(zName, "column%d", i+1);
+ sqlite3VdbeSetColName(v, i, COLNAME_NAME, zName, 0);
+ }
+ }
+ generateColumnTypes(pParse, pTabList, pEList);
+}
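+
+/*
+** A sketch of how the naming rules above play out, assuming the default
+** flag settings (short_column_names on, full_column_names off) and a
+** hypothetical table t1(a,b):
+**
+**     SELECT a, b AS bee, a+b FROM t1;
+**
+** produces the column names "a", "bee" and "a+b": the AS name wins when
+** present, a bare column keeps its own name, and any other expression
+** falls back to its original text (or to "columnN" if no text exists).
+*/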
+
+#ifndef SQLITE_OMIT_COMPOUND_SELECT
+/*
+** Name of the connection operator, used for error messages.
+*/
+static const char *selectOpName(int id){
+ char *z;
+ switch( id ){
+ case TK_ALL: z = "UNION ALL"; break;
+ case TK_INTERSECT: z = "INTERSECT"; break;
+ case TK_EXCEPT: z = "EXCEPT"; break;
+ default: z = "UNION"; break;
+ }
+ return z;
+}
+#endif /* SQLITE_OMIT_COMPOUND_SELECT */
+
+/*
+** Forward declaration
+*/
+static int prepSelectStmt(Parse*, Select*);
+
+/*
+** Given a SELECT statement, generate a Table structure that describes
+** the result set of that SELECT.
+*/
+Table *sqlite3ResultSetOfSelect(Parse *pParse, char *zTabName, Select *pSelect){
+ Table *pTab;
+ int i, j;
+ ExprList *pEList;
+ Column *aCol, *pCol;
+
+ while( pSelect->pPrior ) pSelect = pSelect->pPrior;
+ if( prepSelectStmt(pParse, pSelect) ){
+ return 0;
+ }
+ if( sqlite3SelectResolve(pParse, pSelect, 0) ){
+ return 0;
+ }
+ pTab = sqliteMalloc( sizeof(Table) );
+ if( pTab==0 ){
+ return 0;
+ }
+ pTab->nRef = 1;
+ pTab->zName = zTabName ? sqliteStrDup(zTabName) : 0;
+ pEList = pSelect->pEList;
+ pTab->nCol = pEList->nExpr;
+ assert( pTab->nCol>0 );
+ pTab->aCol = aCol = sqliteMalloc( sizeof(pTab->aCol[0])*pTab->nCol );
+ for(i=0, pCol=aCol; i<pTab->nCol; i++, pCol++){
+ Expr *p, *pR;
+ char *zType;
+ char *zName;
+ int nName;
+ CollSeq *pColl;
+ int cnt;
+ NameContext sNC;
+
+ /* Get an appropriate name for the column
+ */
+ p = pEList->a[i].pExpr;
+ assert( p->pRight==0 || p->pRight->token.z==0 || p->pRight->token.z[0]!=0 );
+ if( (zName = pEList->a[i].zName)!=0 ){
+ /* If the column contains an "AS <name>" phrase, use <name> as the name */
+ zName = sqliteStrDup(zName);
+ }else if( p->op==TK_DOT
+ && (pR=p->pRight)!=0 && pR->token.z && pR->token.z[0] ){
+ /* For columns of the form A.B use B as the name */
+ zName = sqlite3MPrintf("%T", &pR->token);
+ }else if( p->span.z && p->span.z[0] ){
+ /* Use the original text of the column expression as its name */
+ zName = sqlite3MPrintf("%T", &p->span);
+ }else{
+ /* If all else fails, make up a name */
+ zName = sqlite3MPrintf("column%d", i+1);
+ }
+ sqlite3Dequote(zName);
+ if( sqlite3MallocFailed() ){
+ sqliteFree(zName);
+ sqlite3DeleteTable(0, pTab);
+ return 0;
+ }
+
+ /* Make sure the column name is unique. If the name is not unique,
+ ** append an integer to the name so that it becomes unique.
+ */
+ nName = strlen(zName);
+ for(j=cnt=0; j<i; j++){
+ if( sqlite3StrICmp(aCol[j].zName, zName)==0 ){
+ zName[nName] = 0;
+ zName = sqlite3MPrintf("%z:%d", zName, ++cnt);
+ j = -1;
+ if( zName==0 ) break;
+ }
+ }
+ pCol->zName = zName;
+
+ /* Get the typename, type affinity, and collating sequence for the
+ ** column.
+ */
+ memset(&sNC, 0, sizeof(sNC));
+ sNC.pSrcList = pSelect->pSrc;
+ zType = sqliteStrDup(columnType(&sNC, p, 0, 0, 0));
+ pCol->zType = zType;
+ pCol->affinity = sqlite3ExprAffinity(p);
+ pColl = sqlite3ExprCollSeq(pParse, p);
+ if( pColl ){
+ pCol->zColl = sqliteStrDup(pColl->zName);
+ }
+ }
+ pTab->iPKey = -1;
+ return pTab;
+}
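+
+/*
+** For example (hypothetical schema): if pSelect is the statement
+**
+**     SELECT a, b+c AS d FROM t1;
+**
+** the Table returned by the routine above has two columns named "a"
+** and "d". The declaration type of "a" is copied from t1; "d" is an
+** expression, so its declaration type is NULL. This is how views and
+** FROM-clause subqueries get their column definitions.
+*/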
+
+/*
+** Prepare a SELECT statement for processing by doing the following
+** things:
+**
+** (1) Make sure VDBE cursor numbers have been assigned to every
+** element of the FROM clause.
+**
+** (2) Fill in the pTabList->a[].pTab fields in the SrcList that
+** defines the FROM clause. When views appear in the FROM clause,
+** fill pTabList->a[].pSelect with a copy of the SELECT statement
+** that implements the view. A copy is made of the view's SELECT
+** statement so that we can freely modify or delete that statement
+** without worrying about messing up the persistent representation
+** of the view.
+**
+** (3) Add terms to the WHERE clause to accommodate the NATURAL keyword
+** on joins and the ON and USING clause of joins.
+**
+** (4) Scan the list of columns in the result set (pEList) looking
+** for instances of the "*" operator or the TABLE.* operator.
+** If found, expand each "*" to be every column in every table
+** and TABLE.* to be every column in TABLE.
+**
+** Return 0 on success. If there are problems, leave an error message
+** in pParse and return non-zero.
+*/
+static int prepSelectStmt(Parse *pParse, Select *p){
+ int i, j, k, rc;
+ SrcList *pTabList;
+ ExprList *pEList;
+ struct SrcList_item *pFrom;
+
+ if( p==0 || p->pSrc==0 || sqlite3MallocFailed() ){
+ return 1;
+ }
+ pTabList = p->pSrc;
+ pEList = p->pEList;
+
+ /* Make sure cursor numbers have been assigned to all entries in
+ ** the FROM clause of the SELECT statement.
+ */
+ sqlite3SrcListAssignCursors(pParse, p->pSrc);
+
+ /* Look up every table named in the FROM clause of the select. If
+ ** an entry of the FROM clause is a subquery instead of a table or view,
+ ** then create a transient table structure to describe the subquery.
+ */
+ for(i=0, pFrom=pTabList->a; i<pTabList->nSrc; i++, pFrom++){
+ Table *pTab;
+ if( pFrom->pTab!=0 ){
+ /* This statement has already been prepared. There is no need
+ ** to go further. */
+ assert( i==0 );
+ return 0;
+ }
+ if( pFrom->zName==0 ){
+#ifndef SQLITE_OMIT_SUBQUERY
+ /* A sub-query in the FROM clause of a SELECT */
+ assert( pFrom->pSelect!=0 );
+ if( pFrom->zAlias==0 ){
+ pFrom->zAlias =
+ sqlite3MPrintf("sqlite_subquery_%p_", (void*)pFrom->pSelect);
+ }
+ assert( pFrom->pTab==0 );
+ pFrom->pTab = pTab =
+ sqlite3ResultSetOfSelect(pParse, pFrom->zAlias, pFrom->pSelect);
+ if( pTab==0 ){
+ return 1;
+ }
+ /* The isEphem flag indicates that the Table structure has been
+ ** dynamically allocated and may be freed at any time. In other words,
+ ** pTab is not pointing to a persistent table structure that defines
+ ** part of the schema. */
+ pTab->isEphem = 1;
+#endif
+ }else{
+ /* An ordinary table or view name in the FROM clause */
+ assert( pFrom->pTab==0 );
+ pFrom->pTab = pTab =
+ sqlite3LocateTable(pParse,pFrom->zName,pFrom->zDatabase);
+ if( pTab==0 ){
+ return 1;
+ }
+ pTab->nRef++;
+#if !defined(SQLITE_OMIT_VIEW) || !defined (SQLITE_OMIT_VIRTUALTABLE)
+ if( pTab->pSelect || IsVirtual(pTab) ){
+ /* We reach here if the named table is really a view */
+ if( sqlite3ViewGetColumnNames(pParse, pTab) ){
+ return 1;
+ }
+ /* If pFrom->pSelect!=0 it means we are dealing with a
+ ** view within a view. The SELECT structure has already been
+ ** copied by the outer view so we can skip the copy step here
+ ** in the inner view.
+ */
+ if( pFrom->pSelect==0 ){
+ pFrom->pSelect = sqlite3SelectDup(pTab->pSelect);
+ }
+ }
+#endif
+ }
+ }
+
+ /* Process NATURAL keywords, and ON and USING clauses of joins.
+ */
+ if( sqliteProcessJoin(pParse, p) ) return 1;
+
+ /* For every "*" that occurs in the column list, insert the names of
+ ** all columns in all tables. And for every TABLE.* insert the names
+ ** of all columns in TABLE. The parser inserted a special expression
+ ** with the TK_ALL operator for each "*" that it found in the column list.
+ ** The following code just has to locate the TK_ALL expressions and expand
+ ** each one to the list of all columns in all tables.
+ **
+ ** The first loop just checks to see if there are any "*" operators
+ ** that need expanding.
+ */
+ for(k=0; k<pEList->nExpr; k++){
+ Expr *pE = pEList->a[k].pExpr;
+ if( pE->op==TK_ALL ) break;
+ if( pE->op==TK_DOT && pE->pRight && pE->pRight->op==TK_ALL
+ && pE->pLeft && pE->pLeft->op==TK_ID ) break;
+ }
+ rc = 0;
+ if( k<pEList->nExpr ){
+ /*
+ ** If we get here it means the result set contains one or more "*"
+ ** operators that need to be expanded. Loop through each expression
+ ** in the result set and expand them one by one.
+ */
+ struct ExprList_item *a = pEList->a;
+ ExprList *pNew = 0;
+ int flags = pParse->db->flags;
+ int longNames = (flags & SQLITE_FullColNames)!=0 &&
+ (flags & SQLITE_ShortColNames)==0;
+
+ for(k=0; k<pEList->nExpr; k++){
+ Expr *pE = a[k].pExpr;
+ if( pE->op!=TK_ALL &&
+ (pE->op!=TK_DOT || pE->pRight==0 || pE->pRight->op!=TK_ALL) ){
+ /* This particular expression does not need to be expanded.
+ */
+ pNew = sqlite3ExprListAppend(pNew, a[k].pExpr, 0);
+ if( pNew ){
+ pNew->a[pNew->nExpr-1].zName = a[k].zName;
+ }else{
+ rc = 1;
+ }
+ a[k].pExpr = 0;
+ a[k].zName = 0;
+ }else{
+ /* This expression is a "*" or a "TABLE.*" and needs to be
+ ** expanded. */
+ int tableSeen = 0; /* Set to 1 when TABLE matches */
+ char *zTName; /* text of name of TABLE */
+ if( pE->op==TK_DOT && pE->pLeft ){
+ zTName = sqlite3NameFromToken(&pE->pLeft->token);
+ }else{
+ zTName = 0;
+ }
+ for(i=0, pFrom=pTabList->a; i<pTabList->nSrc; i++, pFrom++){
+ Table *pTab = pFrom->pTab;
+ char *zTabName = pFrom->zAlias;
+ if( zTabName==0 || zTabName[0]==0 ){
+ zTabName = pTab->zName;
+ }
+ if( zTName && (zTabName==0 || zTabName[0]==0 ||
+ sqlite3StrICmp(zTName, zTabName)!=0) ){
+ continue;
+ }
+ tableSeen = 1;
+ for(j=0; j<pTab->nCol; j++){
+ Expr *pExpr, *pRight;
+ char *zName = pTab->aCol[j].zName;
+
+ if( i>0 ){
+ struct SrcList_item *pLeft = &pTabList->a[i-1];
+ if( (pLeft->jointype & JT_NATURAL)!=0 &&
+ columnIndex(pLeft->pTab, zName)>=0 ){
+ /* In a NATURAL join, omit the join columns from the
+ ** table on the right */
+ continue;
+ }
+ if( sqlite3IdListIndex(pLeft->pUsing, zName)>=0 ){
+ /* In a join with a USING clause, omit columns in the
+ ** using clause from the table on the right. */
+ continue;
+ }
+ }
+ pRight = sqlite3Expr(TK_ID, 0, 0, 0);
+ if( pRight==0 ) break;
+ setToken(&pRight->token, zName);
+ if( zTabName && (longNames || pTabList->nSrc>1) ){
+ Expr *pLeft = sqlite3Expr(TK_ID, 0, 0, 0);
+ pExpr = sqlite3Expr(TK_DOT, pLeft, pRight, 0);
+ if( pExpr==0 ) break;
+ setToken(&pLeft->token, zTabName);
+ setToken(&pExpr->span, sqlite3MPrintf("%s.%s", zTabName, zName));
+ pExpr->span.dyn = 1;
+ pExpr->token.z = 0;
+ pExpr->token.n = 0;
+ pExpr->token.dyn = 0;
+ }else{
+ pExpr = pRight;
+ pExpr->span = pExpr->token;
+ }
+ if( longNames ){
+ pNew = sqlite3ExprListAppend(pNew, pExpr, &pExpr->span);
+ }else{
+ pNew = sqlite3ExprListAppend(pNew, pExpr, &pRight->token);
+ }
+ }
+ }
+ if( !tableSeen ){
+ if( zTName ){
+ sqlite3ErrorMsg(pParse, "no such table: %s", zTName);
+ }else{
+ sqlite3ErrorMsg(pParse, "no tables specified");
+ }
+ rc = 1;
+ }
+ sqliteFree(zTName);
+ }
+ }
+ sqlite3ExprListDelete(pEList);
+ p->pEList = pNew;
+ }
+ return rc;
+}
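+
+/*
+** Example of step (4) above, with hypothetical tables t1(a,b) and t2(c):
+**
+**     SELECT * FROM t1, t2;
+**
+** is expanded by prepSelectStmt() into
+**
+**     SELECT t1.a, t1.b, t2.c FROM t1, t2;
+**
+** Because more than one table appears in the FROM clause, the column
+** references are qualified with their table names.
+*/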
+
+#ifndef SQLITE_OMIT_COMPOUND_SELECT
+/*
+** This routine associates entries in an ORDER BY expression list with
+** columns in a result. For each ORDER BY expression, the opcode of
+** the top-level node is changed to TK_COLUMN and the iColumn value of
+** the top-level node is filled in with the column number and the iTable
+** value of the top-level node is filled in with the iTable parameter.
+**
+** If there are prior SELECT clauses, they are processed first. A match
+** in an earlier SELECT takes precedence over a later SELECT.
+**
+** Any entry that does not match is flagged as an error. The number
+** of errors is returned.
+*/
+static int matchOrderbyToColumn(
+ Parse *pParse, /* A place to leave error messages */
+ Select *pSelect, /* Match to result columns of this SELECT */
+ ExprList *pOrderBy, /* The ORDER BY values to match against columns */
+ int iTable, /* Insert this value in iTable */
+ int mustComplete /* If TRUE all ORDER BYs must match */
+){
+ int nErr = 0;
+ int i, j;
+ ExprList *pEList;
+
+ if( pSelect==0 || pOrderBy==0 ) return 1;
+ if( mustComplete ){
+ for(i=0; i<pOrderBy->nExpr; i++){ pOrderBy->a[i].done = 0; }
+ }
+ if( prepSelectStmt(pParse, pSelect) ){
+ return 1;
+ }
+ if( pSelect->pPrior ){
+ if( matchOrderbyToColumn(pParse, pSelect->pPrior, pOrderBy, iTable, 0) ){
+ return 1;
+ }
+ }
+ pEList = pSelect->pEList;
+ for(i=0; i<pOrderBy->nExpr; i++){
+ Expr *pE = pOrderBy->a[i].pExpr;
+ int iCol = -1;
+ if( pOrderBy->a[i].done ) continue;
+ if( sqlite3ExprIsInteger(pE, &iCol) ){
+ if( iCol<=0 || iCol>pEList->nExpr ){
+ sqlite3ErrorMsg(pParse,
+ "ORDER BY position %d should be between 1 and %d",
+ iCol, pEList->nExpr);
+ nErr++;
+ break;
+ }
+ if( !mustComplete ) continue;
+ iCol--;
+ }
+ for(j=0; iCol<0 && j<pEList->nExpr; j++){
+ if( pEList->a[j].zName && (pE->op==TK_ID || pE->op==TK_STRING) ){
+ char *zName, *zLabel;
+ zName = pEList->a[j].zName;
+ zLabel = sqlite3NameFromToken(&pE->token);
+ assert( zLabel!=0 );
+ if( sqlite3StrICmp(zName, zLabel)==0 ){
+ iCol = j;
+ }
+ sqliteFree(zLabel);
+ }
+ if( iCol<0 && sqlite3ExprCompare(pE, pEList->a[j].pExpr) ){
+ iCol = j;
+ }
+ }
+ if( iCol>=0 ){
+ pE->op = TK_COLUMN;
+ pE->iColumn = iCol;
+ pE->iTable = iTable;
+ pE->iAgg = -1;
+ pOrderBy->a[i].done = 1;
+ }
+ if( iCol<0 && mustComplete ){
+ sqlite3ErrorMsg(pParse,
+ "ORDER BY term number %d does not match any result column", i+1);
+ nErr++;
+ break;
+ }
+ }
+ return nErr;
+}
+#endif /* #ifndef SQLITE_OMIT_COMPOUND_SELECT */
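+
+/*
+** For instance (hypothetical tables), in
+**
+**     SELECT a FROM t1 UNION SELECT b FROM t2 ORDER BY 1;
+**
+** the ORDER BY term "1" is matched to the first result column, so the
+** routine above rewrites it as a TK_COLUMN reference with iColumn==0
+** and iTable set to the iTable parameter.
+*/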
+
+/*
+** Get a VDBE for the given parser context. Create a new one if necessary.
+** If an error occurs, return NULL and leave a message in pParse.
+*/
+Vdbe *sqlite3GetVdbe(Parse *pParse){
+ Vdbe *v = pParse->pVdbe;
+ if( v==0 ){
+ v = pParse->pVdbe = sqlite3VdbeCreate(pParse->db);
+ }
+ return v;
+}
+
+
+/*
+** Compute the iLimit and iOffset fields of the SELECT based on the
+** pLimit and pOffset expressions. pLimit and pOffset hold the expressions
+** that appear in the original SQL statement after the LIMIT and OFFSET
+** keywords. Or NULL if those keywords are omitted. iLimit and iOffset
+** are the integer memory register numbers for counters used to compute
+** the limit and offset. If there is no limit and/or offset, then
+** iLimit and iOffset are negative.
+**
+** This routine changes the values of iLimit and iOffset only if
+** a limit or offset is defined by pLimit and pOffset. iLimit and
+** iOffset should have been preset to appropriate default values
+** (usually but not always -1) prior to calling this routine.
+** Only if pLimit!=0 or pOffset!=0 do the limit registers get
+** redefined. The UNION ALL operator uses this property to force
+** the reuse of the same limit and offset registers across multiple
+** SELECT statements.
+*/
+static void computeLimitRegisters(Parse *pParse, Select *p, int iBreak){
+ Vdbe *v = 0;
+ int iLimit = 0;
+ int iOffset;
+ int addr1, addr2;
+
+ /*
+ ** "LIMIT -1" always shows all rows. There is some
+ ** controversy about what the correct behavior should be.
+ ** The current implementation interprets "LIMIT 0" to mean
+ ** no rows.
+ */
+ if( p->pLimit ){
+ p->iLimit = iLimit = pParse->nMem;
+ pParse->nMem += 2;
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) return;
+ sqlite3ExprCode(pParse, p->pLimit);
+ sqlite3VdbeAddOp(v, OP_MustBeInt, 0, 0);
+ sqlite3VdbeAddOp(v, OP_MemStore, iLimit, 0);
+ VdbeComment((v, "# LIMIT counter"));
+ sqlite3VdbeAddOp(v, OP_IfMemZero, iLimit, iBreak);
+ }
+ if( p->pOffset ){
+ p->iOffset = iOffset = pParse->nMem++;
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) return;
+ sqlite3ExprCode(pParse, p->pOffset);
+ sqlite3VdbeAddOp(v, OP_MustBeInt, 0, 0);
+ sqlite3VdbeAddOp(v, OP_MemStore, iOffset, p->pLimit==0);
+ VdbeComment((v, "# OFFSET counter"));
+ addr1 = sqlite3VdbeAddOp(v, OP_IfMemPos, iOffset, 0);
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ sqlite3VdbeAddOp(v, OP_Integer, 0, 0);
+ sqlite3VdbeJumpHere(v, addr1);
+ if( p->pLimit ){
+ sqlite3VdbeAddOp(v, OP_Add, 0, 0);
+ }
+ }
+ if( p->pLimit ){
+ addr1 = sqlite3VdbeAddOp(v, OP_IfMemPos, iLimit, 0);
+ sqlite3VdbeAddOp(v, OP_Pop, 1, 0);
+ sqlite3VdbeAddOp(v, OP_MemInt, -1, iLimit+1);
+ addr2 = sqlite3VdbeAddOp(v, OP_Goto, 0, 0);
+ sqlite3VdbeJumpHere(v, addr1);
+ sqlite3VdbeAddOp(v, OP_MemStore, iLimit+1, 1);
+ VdbeComment((v, "# LIMIT+OFFSET"));
+ sqlite3VdbeJumpHere(v, addr2);
+ }
+}
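+
+/*
+** For a statement such as (hypothetical values)
+**
+**     SELECT x FROM t1 LIMIT 10 OFFSET 5;
+**
+** the routine above loads 10 into the LIMIT counter (memory cell
+** iLimit), 5 into the OFFSET counter (iOffset), and 15 into iLimit+1,
+** the combined LIMIT+OFFSET value that the sorter uses.
+*/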
+
+/*
+** Allocate a virtual index to use for sorting.
+*/
+static void createSortingIndex(Parse *pParse, Select *p, ExprList *pOrderBy){
+ if( pOrderBy ){
+ int addr;
+ assert( pOrderBy->iECursor==0 );
+ pOrderBy->iECursor = pParse->nTab++;
+ addr = sqlite3VdbeAddOp(pParse->pVdbe, OP_OpenEphemeral,
+ pOrderBy->iECursor, pOrderBy->nExpr+1);
+ assert( p->addrOpenEphm[2] == -1 );
+ p->addrOpenEphm[2] = addr;
+ }
+}
+
+#ifndef SQLITE_OMIT_COMPOUND_SELECT
+/*
+** Return the appropriate collating sequence for the iCol-th column of
+** the result set for the compound-select statement "p". Return NULL if
+** the column has no default collating sequence.
+**
+** The collating sequence for the compound select is taken from the
+** left-most term of the select that has a collating sequence.
+*/
+static CollSeq *multiSelectCollSeq(Parse *pParse, Select *p, int iCol){
+ CollSeq *pRet;
+ if( p->pPrior ){
+ pRet = multiSelectCollSeq(pParse, p->pPrior, iCol);
+ }else{
+ pRet = 0;
+ }
+ if( pRet==0 ){
+ pRet = sqlite3ExprCollSeq(pParse, p->pEList->a[iCol].pExpr);
+ }
+ return pRet;
+}
+#endif /* SQLITE_OMIT_COMPOUND_SELECT */
+
+#ifndef SQLITE_OMIT_COMPOUND_SELECT
+/*
+** This routine is called to process a query that is really the union
+** or intersection of two or more separate queries.
+**
+** "p" points to the right-most of the two queries. the query on the
+** left is p->pPrior. The left query could also be a compound query
+** in which case this routine will be called recursively.
+**
+** The results of the total query are to be written into a destination
+** of type eDest with parameter iParm.
+**
+** Example 1: Consider a three-way compound SQL statement.
+**
+** SELECT a FROM t1 UNION SELECT b FROM t2 UNION SELECT c FROM t3
+**
+** This statement is parsed as follows:
+**
+** SELECT c FROM t3
+** |
+** `-----> SELECT b FROM t2
+** |
+** `------> SELECT a FROM t1
+**
+** The arrows in the diagram above represent the Select.pPrior pointer.
+** So if this routine is called with p equal to the t3 query, then
+** pPrior will be the t2 query. p->op will be TK_UNION in this case.
+**
+** Notice that because of the way SQLite parses compound SELECTs, the
+** individual selects always group from left to right.
+*/
+static int multiSelect(
+ Parse *pParse, /* Parsing context */
+ Select *p, /* The right-most of SELECTs to be coded */
+ int eDest, /* \___ Store query results as specified */
+ int iParm, /* / by these two parameters. */
+ char *aff /* If eDest is SRT_Union, the affinity string */
+){
+ int rc = SQLITE_OK; /* Success code from a subroutine */
+ Select *pPrior; /* Another SELECT immediately to our left */
+ Vdbe *v; /* Generate code to this VDBE */
+ int nCol; /* Number of columns in the result set */
+ ExprList *pOrderBy; /* The ORDER BY clause on p */
+ int aSetP2[2]; /* Set the P2 value of these ops to the number of columns */
+ int nSetP2 = 0; /* Number of slots in aSetP2[] used */
+
+ /* Make sure there is no ORDER BY or LIMIT clause on prior SELECTs. Only
+ ** the last (right-most) SELECT in the series may have an ORDER BY or LIMIT.
+ */
+ if( p==0 || p->pPrior==0 ){
+ rc = 1;
+ goto multi_select_end;
+ }
+ pPrior = p->pPrior;
+ assert( pPrior->pRightmost!=pPrior );
+ assert( pPrior->pRightmost==p->pRightmost );
+ if( pPrior->pOrderBy ){
+ sqlite3ErrorMsg(pParse,"ORDER BY clause should come after %s not before",
+ selectOpName(p->op));
+ rc = 1;
+ goto multi_select_end;
+ }
+ if( pPrior->pLimit ){
+ sqlite3ErrorMsg(pParse,"LIMIT clause should come after %s not before",
+ selectOpName(p->op));
+ rc = 1;
+ goto multi_select_end;
+ }
+
+ /* Make sure we have a valid query engine. If not, create a new one.
+ */
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ){
+ rc = 1;
+ goto multi_select_end;
+ }
+
+ /* Create the destination temporary table if necessary
+ */
+ if( eDest==SRT_EphemTab ){
+ assert( p->pEList );
+ assert( nSetP2<sizeof(aSetP2)/sizeof(aSetP2[0]) );
+ aSetP2[nSetP2++] = sqlite3VdbeAddOp(v, OP_OpenEphemeral, iParm, 0);
+ eDest = SRT_Table;
+ }
+
+ /* Generate code for the left and right SELECT statements.
+ */
+ pOrderBy = p->pOrderBy;
+ switch( p->op ){
+ case TK_ALL: {
+ if( pOrderBy==0 ){
+ int addr = 0;
+ assert( !pPrior->pLimit );
+ pPrior->pLimit = p->pLimit;
+ pPrior->pOffset = p->pOffset;
+ rc = sqlite3Select(pParse, pPrior, eDest, iParm, 0, 0, 0, aff);
+ p->pLimit = 0;
+ p->pOffset = 0;
+ if( rc ){
+ goto multi_select_end;
+ }
+ p->pPrior = 0;
+ p->iLimit = pPrior->iLimit;
+ p->iOffset = pPrior->iOffset;
+ if( p->iLimit>=0 ){
+ addr = sqlite3VdbeAddOp(v, OP_IfMemZero, p->iLimit, 0);
+ VdbeComment((v, "# Jump ahead if LIMIT reached"));
+ }
+ rc = sqlite3Select(pParse, p, eDest, iParm, 0, 0, 0, aff);
+ p->pPrior = pPrior;
+ if( rc ){
+ goto multi_select_end;
+ }
+ if( addr ){
+ sqlite3VdbeJumpHere(v, addr);
+ }
+ break;
+ }
+ /* For UNION ALL ... ORDER BY fall through to the next case */
+ }
+ case TK_EXCEPT:
+ case TK_UNION: {
+ int unionTab; /* Cursor number of the temporary table holding result */
+ int op = 0; /* One of the SRT_ operations to apply to self */
+ int priorOp; /* The SRT_ operation to apply to prior selects */
+ Expr *pLimit, *pOffset; /* Saved values of p->pLimit and p->pOffset */
+ int addr;
+
+ priorOp = p->op==TK_ALL ? SRT_Table : SRT_Union;
+ if( eDest==priorOp && pOrderBy==0 && !p->pLimit && !p->pOffset ){
+ /* We can reuse a temporary table generated by a SELECT to our
+ ** right.
+ */
+ unionTab = iParm;
+ }else{
+ /* We will need to create our own temporary table to hold the
+ ** intermediate results.
+ */
+ unionTab = pParse->nTab++;
+ if( pOrderBy && matchOrderbyToColumn(pParse, p, pOrderBy, unionTab,1) ){
+ rc = 1;
+ goto multi_select_end;
+ }
+ addr = sqlite3VdbeAddOp(v, OP_OpenEphemeral, unionTab, 0);
+ if( priorOp==SRT_Table ){
+ assert( nSetP2<sizeof(aSetP2)/sizeof(aSetP2[0]) );
+ aSetP2[nSetP2++] = addr;
+ }else{
+ assert( p->addrOpenEphm[0] == -1 );
+ p->addrOpenEphm[0] = addr;
+ p->pRightmost->usesEphm = 1;
+ }
+ createSortingIndex(pParse, p, pOrderBy);
+ assert( p->pEList );
+ }
+
+ /* Code the SELECT statements to our left
+ */
+ assert( !pPrior->pOrderBy );
+ rc = sqlite3Select(pParse, pPrior, priorOp, unionTab, 0, 0, 0, aff);
+ if( rc ){
+ goto multi_select_end;
+ }
+
+ /* Code the current SELECT statement
+ */
+ switch( p->op ){
+ case TK_EXCEPT: op = SRT_Except; break;
+ case TK_UNION: op = SRT_Union; break;
+ case TK_ALL: op = SRT_Table; break;
+ }
+ p->pPrior = 0;
+ p->pOrderBy = 0;
+ p->disallowOrderBy = pOrderBy!=0;
+ pLimit = p->pLimit;
+ p->pLimit = 0;
+ pOffset = p->pOffset;
+ p->pOffset = 0;
+ rc = sqlite3Select(pParse, p, op, unionTab, 0, 0, 0, aff);
+ p->pPrior = pPrior;
+ p->pOrderBy = pOrderBy;
+ sqlite3ExprDelete(p->pLimit);
+ p->pLimit = pLimit;
+ p->pOffset = pOffset;
+ p->iLimit = -1;
+ p->iOffset = -1;
+ if( rc ){
+ goto multi_select_end;
+ }
+
+
+ /* Convert the data in the temporary table into whatever form
+ ** it is that we currently need.
+ */
+ if( eDest!=priorOp || unionTab!=iParm ){
+ int iCont, iBreak, iStart;
+ assert( p->pEList );
+ if( eDest==SRT_Callback ){
+ Select *pFirst = p;
+ while( pFirst->pPrior ) pFirst = pFirst->pPrior;
+ generateColumnNames(pParse, 0, pFirst->pEList);
+ }
+ iBreak = sqlite3VdbeMakeLabel(v);
+ iCont = sqlite3VdbeMakeLabel(v);
+ computeLimitRegisters(pParse, p, iBreak);
+ sqlite3VdbeAddOp(v, OP_Rewind, unionTab, iBreak);
+ iStart = sqlite3VdbeCurrentAddr(v);
+ rc = selectInnerLoop(pParse, p, p->pEList, unionTab, p->pEList->nExpr,
+ pOrderBy, -1, eDest, iParm,
+ iCont, iBreak, 0);
+ if( rc ){
+ rc = 1;
+ goto multi_select_end;
+ }
+ sqlite3VdbeResolveLabel(v, iCont);
+ sqlite3VdbeAddOp(v, OP_Next, unionTab, iStart);
+ sqlite3VdbeResolveLabel(v, iBreak);
+ sqlite3VdbeAddOp(v, OP_Close, unionTab, 0);
+ }
+ break;
+ }
+ case TK_INTERSECT: {
+ int tab1, tab2;
+ int iCont, iBreak, iStart;
+ Expr *pLimit, *pOffset;
+ int addr;
+
+ /* INTERSECT is different from the others since it requires
+ ** two temporary tables. Hence it has its own case. Begin
+ ** by allocating the tables we will need.
+ */
+ tab1 = pParse->nTab++;
+ tab2 = pParse->nTab++;
+ if( pOrderBy && matchOrderbyToColumn(pParse,p,pOrderBy,tab1,1) ){
+ rc = 1;
+ goto multi_select_end;
+ }
+ createSortingIndex(pParse, p, pOrderBy);
+
+ addr = sqlite3VdbeAddOp(v, OP_OpenEphemeral, tab1, 0);
+ assert( p->addrOpenEphm[0] == -1 );
+ p->addrOpenEphm[0] = addr;
+ p->pRightmost->usesEphm = 1;
+ assert( p->pEList );
+
+ /* Code the SELECTs to our left into temporary table "tab1".
+ */
+ rc = sqlite3Select(pParse, pPrior, SRT_Union, tab1, 0, 0, 0, aff);
+ if( rc ){
+ goto multi_select_end;
+ }
+
+ /* Code the current SELECT into temporary table "tab2"
+ */
+ addr = sqlite3VdbeAddOp(v, OP_OpenEphemeral, tab2, 0);
+ assert( p->addrOpenEphm[1] == -1 );
+ p->addrOpenEphm[1] = addr;
+ p->pPrior = 0;
+ pLimit = p->pLimit;
+ p->pLimit = 0;
+ pOffset = p->pOffset;
+ p->pOffset = 0;
+ rc = sqlite3Select(pParse, p, SRT_Union, tab2, 0, 0, 0, aff);
+ p->pPrior = pPrior;
+ sqlite3ExprDelete(p->pLimit);
+ p->pLimit = pLimit;
+ p->pOffset = pOffset;
+ if( rc ){
+ goto multi_select_end;
+ }
+
+ /* Generate code to take the intersection of the two temporary
+ ** tables.
+ */
+ assert( p->pEList );
+ if( eDest==SRT_Callback ){
+ Select *pFirst = p;
+ while( pFirst->pPrior ) pFirst = pFirst->pPrior;
+ generateColumnNames(pParse, 0, pFirst->pEList);
+ }
+ iBreak = sqlite3VdbeMakeLabel(v);
+ iCont = sqlite3VdbeMakeLabel(v);
+ computeLimitRegisters(pParse, p, iBreak);
+ sqlite3VdbeAddOp(v, OP_Rewind, tab1, iBreak);
+ iStart = sqlite3VdbeAddOp(v, OP_RowKey, tab1, 0);
+ sqlite3VdbeAddOp(v, OP_NotFound, tab2, iCont);
+ rc = selectInnerLoop(pParse, p, p->pEList, tab1, p->pEList->nExpr,
+ pOrderBy, -1, eDest, iParm,
+ iCont, iBreak, 0);
+ if( rc ){
+ rc = 1;
+ goto multi_select_end;
+ }
+ sqlite3VdbeResolveLabel(v, iCont);
+ sqlite3VdbeAddOp(v, OP_Next, tab1, iStart);
+ sqlite3VdbeResolveLabel(v, iBreak);
+ sqlite3VdbeAddOp(v, OP_Close, tab2, 0);
+ sqlite3VdbeAddOp(v, OP_Close, tab1, 0);
+ break;
+ }
+ }
+
+ /* Make sure all SELECTs in the statement have the same number of elements
+ ** in their result sets.
+ */
+ assert( p->pEList && pPrior->pEList );
+ if( p->pEList->nExpr!=pPrior->pEList->nExpr ){
+ sqlite3ErrorMsg(pParse, "SELECTs to the left and right of %s"
+ " do not have the same number of result columns", selectOpName(p->op));
+ rc = 1;
+ goto multi_select_end;
+ }
+
+ /* Set the number of columns in temporary tables
+ */
+ nCol = p->pEList->nExpr;
+ while( nSetP2 ){
+ sqlite3VdbeChangeP2(v, aSetP2[--nSetP2], nCol);
+ }
+
+ /* Compute collating sequences used by either the ORDER BY clause or
+ ** by any temporary tables needed to implement the compound select.
+ ** Attach the KeyInfo structure to all temporary tables. Invoke the
+ ** ORDER BY processing if there is an ORDER BY clause.
+ **
+ ** This section is run by the right-most SELECT statement only.
+ ** SELECT statements to the left always skip this part. The right-most
+ ** SELECT might also skip this part if it has no ORDER BY clause and
+ ** no temp tables are required.
+ */
+ if( pOrderBy || p->usesEphm ){
+ int i; /* Loop counter */
+ KeyInfo *pKeyInfo; /* Collating sequence for the result set */
+ Select *pLoop; /* For looping through SELECT statements */
+ int nKeyCol; /* Number of entries in pKeyInfo->aColl[] */
+ CollSeq **apColl;
+ CollSeq **aCopy;
+
+ assert( p->pRightmost==p );
+ nKeyCol = nCol + (pOrderBy ? pOrderBy->nExpr : 0);
+ pKeyInfo = sqliteMalloc(sizeof(*pKeyInfo)+nKeyCol*(sizeof(CollSeq*) + 1));
+ if( !pKeyInfo ){
+ rc = SQLITE_NOMEM;
+ goto multi_select_end;
+ }
+
+ pKeyInfo->enc = ENC(pParse->db);
+ pKeyInfo->nField = nCol;
+
+ for(i=0, apColl=pKeyInfo->aColl; i<nCol; i++, apColl++){
+ *apColl = multiSelectCollSeq(pParse, p, i);
+ if( 0==*apColl ){
+ *apColl = pParse->db->pDfltColl;
+ }
+ }
+
+ for(pLoop=p; pLoop; pLoop=pLoop->pPrior){
+ for(i=0; i<2; i++){
+ int addr = pLoop->addrOpenEphm[i];
+ if( addr<0 ){
+ /* If [0] is unused then [1] is also unused. So we can
+ ** always safely abort as soon as the first unused slot is found */
+ assert( pLoop->addrOpenEphm[1]<0 );
+ break;
+ }
+ sqlite3VdbeChangeP2(v, addr, nCol);
+ sqlite3VdbeChangeP3(v, addr, (char*)pKeyInfo, P3_KEYINFO);
+ }
+ }
+
+ if( pOrderBy ){
+ struct ExprList_item *pOTerm = pOrderBy->a;
+ int nOrderByExpr = pOrderBy->nExpr;
+ int addr;
+ u8 *pSortOrder;
+
+ aCopy = &pKeyInfo->aColl[nOrderByExpr];
+ pSortOrder = pKeyInfo->aSortOrder = (u8*)&aCopy[nCol];
+ memcpy(aCopy, pKeyInfo->aColl, nCol*sizeof(CollSeq*));
+ apColl = pKeyInfo->aColl;
+ for(i=0; i<nOrderByExpr; i++, pOTerm++, apColl++, pSortOrder++){
+ Expr *pExpr = pOTerm->pExpr;
+ char *zName = pOTerm->zName;
+ assert( pExpr->op==TK_COLUMN && pExpr->iColumn<nCol );
+ if( zName ){
+ *apColl = sqlite3LocateCollSeq(pParse, zName, -1);
+ }else{
+ *apColl = aCopy[pExpr->iColumn];
+ }
+ *pSortOrder = pOTerm->sortOrder;
+ }
+ assert( p->pRightmost==p );
+ assert( p->addrOpenEphm[2]>=0 );
+ addr = p->addrOpenEphm[2];
+ sqlite3VdbeChangeP2(v, addr, p->pEList->nExpr+2);
+ pKeyInfo->nField = nOrderByExpr;
+ sqlite3VdbeChangeP3(v, addr, (char*)pKeyInfo, P3_KEYINFO_HANDOFF);
+ pKeyInfo = 0;
+ generateSortTail(pParse, p, v, p->pEList->nExpr, eDest, iParm);
+ }
+
+ sqliteFree(pKeyInfo);
+ }
+
+multi_select_end:
+ return rc;
+}
+#endif /* SQLITE_OMIT_COMPOUND_SELECT */
+
+#ifndef SQLITE_OMIT_VIEW
+/*
+** Scan through the expression pExpr. Replace every reference to
+** a column in table number iTable with a copy of the iColumn-th
+** entry in pEList. (References to the ROWID column are converted to
+** NULL, since the rowid of a subquery always evaluates to NULL.)
+**
+** This routine is part of the flattening procedure. A subquery
+** whose result set is defined by pEList appears as an entry in the
+** FROM clause of a SELECT such that the VDBE cursor assigned to that
+** FROM clause entry is iTable. This routine makes the necessary
+** changes to pExpr so that it refers directly to the source table
+** of the subquery rather than to the result set of the subquery.
+*/
+static void substExprList(ExprList*,int,ExprList*); /* Forward Decl */
+static void substSelect(Select *, int, ExprList *); /* Forward Decl */
+static void substExpr(Expr *pExpr, int iTable, ExprList *pEList){
+ if( pExpr==0 ) return;
+ if( pExpr->op==TK_COLUMN && pExpr->iTable==iTable ){
+ if( pExpr->iColumn<0 ){
+ pExpr->op = TK_NULL;
+ }else{
+ Expr *pNew;
+ assert( pEList!=0 && pExpr->iColumn<pEList->nExpr );
+ assert( pExpr->pLeft==0 && pExpr->pRight==0 && pExpr->pList==0 );
+ pNew = pEList->a[pExpr->iColumn].pExpr;
+ assert( pNew!=0 );
+ pExpr->op = pNew->op;
+ assert( pExpr->pLeft==0 );
+ pExpr->pLeft = sqlite3ExprDup(pNew->pLeft);
+ assert( pExpr->pRight==0 );
+ pExpr->pRight = sqlite3ExprDup(pNew->pRight);
+ assert( pExpr->pList==0 );
+ pExpr->pList = sqlite3ExprListDup(pNew->pList);
+ pExpr->iTable = pNew->iTable;
+ pExpr->pTab = pNew->pTab;
+ pExpr->iColumn = pNew->iColumn;
+ pExpr->iAgg = pNew->iAgg;
+ sqlite3TokenCopy(&pExpr->token, &pNew->token);
+ sqlite3TokenCopy(&pExpr->span, &pNew->span);
+ pExpr->pSelect = sqlite3SelectDup(pNew->pSelect);
+ pExpr->flags = pNew->flags;
+ }
+ }else{
+ substExpr(pExpr->pLeft, iTable, pEList);
+ substExpr(pExpr->pRight, iTable, pEList);
+ substSelect(pExpr->pSelect, iTable, pEList);
+ substExprList(pExpr->pList, iTable, pEList);
+ }
+}
+static void substExprList(ExprList *pList, int iTable, ExprList *pEList){
+ int i;
+ if( pList==0 ) return;
+ for(i=0; i<pList->nExpr; i++){
+ substExpr(pList->a[i].pExpr, iTable, pEList);
+ }
+}
+static void substSelect(Select *p, int iTable, ExprList *pEList){
+ if( !p ) return;
+ substExprList(p->pEList, iTable, pEList);
+ substExprList(p->pGroupBy, iTable, pEList);
+ substExprList(p->pOrderBy, iTable, pEList);
+ substExpr(p->pHaving, iTable, pEList);
+ substExpr(p->pWhere, iTable, pEList);
+}
+#endif /* !defined(SQLITE_OMIT_VIEW) */
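+
+/*
+** Example of the substitution performed above, using the flattening
+** example from the comment that follows: in
+**
+**     SELECT a FROM (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5
+**
+** every reference to result column "a" of the subquery (iColumn==0 of
+** the subquery's cursor) is replaced by a copy of the expression x+y,
+** which is what allows the query to be rewritten against t1 directly.
+*/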
+
+#ifndef SQLITE_OMIT_VIEW
+/*
+** This routine attempts to flatten subqueries in order to speed
+** execution. It returns 1 if it makes changes and 0 if no flattening
+** occurs.
+**
+** To understand the concept of flattening, consider the following
+** query:
+**
+** SELECT a FROM (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5
+**
+** The default way of implementing this query is to execute the
+** subquery first and store the results in a temporary table, then
+** run the outer query on that temporary table. This requires two
+** passes over the data. Furthermore, because the temporary table
+** has no indices, the WHERE clause on the outer query cannot be
+** optimized.
+**
+** This routine attempts to rewrite queries such as the above into
+** a single flat select, like this:
+**
+** SELECT x+y AS a FROM t1 WHERE z<100 AND a>5
+**
+** The code generated for this simplification gives the same result
+** but only has to scan the data once. And because indices might
+** exist on the table t1, a complete scan of the data might be
+** avoided.
+**
+** Flattening is only attempted if all of the following are true:
+**
+** (1) The subquery and the outer query do not both use aggregates.
+**
+** (2) The subquery is not an aggregate or the outer query is not a join.
+**
+** (3) The subquery is not the right operand of a left outer join, or
+** the subquery is not itself a join. (Ticket #306)
+**
+** (4) The subquery is not DISTINCT or the outer query is not a join.
+**
+** (5) The subquery is not DISTINCT or the outer query does not use
+** aggregates.
+**
+** (6) The subquery does not use aggregates or the outer query is not
+** DISTINCT.
+**
+** (7) The subquery has a FROM clause.
+**
+** (8) The subquery does not use LIMIT or the outer query is not a join.
+**
+** (9) The subquery does not use LIMIT or the outer query does not use
+** aggregates.
+**
+** (10) The subquery does not use aggregates or the outer query does not
+** use LIMIT.
+**
+** (11) The subquery and the outer query do not both have ORDER BY clauses.
+**
+** (12) The subquery is not the right term of a LEFT OUTER JOIN or the
+** subquery has no WHERE clause. (added by ticket #350)
+**
+** (13) The subquery and outer query do not both use LIMIT
+**
+** (14) The subquery does not use OFFSET
+**
+** In this routine, the "p" parameter is a pointer to the outer query.
+** The subquery is p->pSrc->a[iFrom]. isAgg is true if the outer query
+** uses aggregates and subqueryIsAgg is true if the subquery uses aggregates.
+**
+** If flattening is not attempted, this routine is a no-op and returns 0.
+** If flattening is attempted this routine returns 1.
+**
+** All of the expression analysis must occur on both the outer query and
+** the subquery before this routine runs.
+*/
+static int flattenSubquery(
+ Select *p, /* The parent or outer SELECT statement */
+ int iFrom, /* Index in p->pSrc->a[] of the inner subquery */
+ int isAgg, /* True if outer SELECT uses aggregate functions */
+ int subqueryIsAgg /* True if the subquery uses aggregate functions */
+){
+ Select *pSub; /* The inner query or "subquery" */
+ SrcList *pSrc; /* The FROM clause of the outer query */
+ SrcList *pSubSrc; /* The FROM clause of the subquery */
+ ExprList *pList; /* The result set of the outer query */
+ int iParent; /* VDBE cursor number of the pSub result set temp table */
+ int i; /* Loop counter */
+ Expr *pWhere; /* The WHERE clause */
+ struct SrcList_item *pSubitem; /* The subquery */
+
+ /* Check to see if flattening is permitted. Return 0 if not.
+ */
+ if( p==0 ) return 0;
+ pSrc = p->pSrc;
+ assert( pSrc && iFrom>=0 && iFrom<pSrc->nSrc );
+ pSubitem = &pSrc->a[iFrom];
+ pSub = pSubitem->pSelect;
+ assert( pSub!=0 );
+ if( isAgg && subqueryIsAgg ) return 0; /* Restriction (1) */
+ if( subqueryIsAgg && pSrc->nSrc>1 ) return 0; /* Restriction (2) */
+ pSubSrc = pSub->pSrc;
+ assert( pSubSrc );
+ /* Prior to version 3.1.2, when LIMIT and OFFSET had to be simple constants,
+ ** not arbitrary expressions, we allowed some combining of LIMIT and OFFSET
+ ** because they could be computed at compile-time. But when LIMIT and OFFSET
+ ** became arbitrary expressions, we were forced to add restrictions (13)
+ ** and (14). */
+ if( pSub->pLimit && p->pLimit ) return 0; /* Restriction (13) */
+ if( pSub->pOffset ) return 0; /* Restriction (14) */
+ if( pSubSrc->nSrc==0 ) return 0; /* Restriction (7) */
+ if( (pSub->isDistinct || pSub->pLimit)
+ && (pSrc->nSrc>1 || isAgg) ){ /* Restrictions (4)(5)(8)(9) */
+ return 0;
+ }
+ if( p->isDistinct && subqueryIsAgg ) return 0; /* Restriction (6) */
+ if( (p->disallowOrderBy || p->pOrderBy) && pSub->pOrderBy ){
+ return 0; /* Restriction (11) */
+ }
+
+ /* Restriction 3: If the subquery is a join, make sure the subquery is
+ ** not used as the right operand of an outer join. Examples of why this
+ ** is not allowed:
+ **
+ ** t1 LEFT OUTER JOIN (t2 JOIN t3)
+ **
+ ** If we flatten the above, we would get
+ **
+ ** (t1 LEFT OUTER JOIN t2) JOIN t3
+ **
+ ** which is not at all the same thing.
+ */
+ if( pSubSrc->nSrc>1 && iFrom>0 && (pSrc->a[iFrom-1].jointype & JT_OUTER)!=0 ){
+ return 0;
+ }
+
+ /* Restriction 12: If the subquery is the right operand of a left outer
+ ** join, make sure the subquery has no WHERE clause.
+ ** An example of why this is not allowed:
+ **
+ ** t1 LEFT OUTER JOIN (SELECT * FROM t2 WHERE t2.x>0)
+ **
+ ** If we flatten the above, we would get
+ **
+ ** (t1 LEFT OUTER JOIN t2) WHERE t2.x>0
+ **
+ ** But the t2.x>0 test will always fail on a NULL row of t2, which
+ ** effectively converts the OUTER JOIN into an INNER JOIN.
+ */
+ if( iFrom>0 && (pSrc->a[iFrom-1].jointype & JT_OUTER)!=0
+ && pSub->pWhere!=0 ){
+ return 0;
+ }
+
+ /* If we reach this point, it means flattening is permitted for the
+ ** iFrom-th entry of the FROM clause in the outer query.
+ */
+
+ /* Move all of the FROM elements of the subquery into the
+ ** FROM clause of the outer query. Before doing this, remember
+ ** the cursor number for the original outer query FROM element in
+ ** iParent. The iParent cursor will never be used. Subsequent code
+ ** will scan expressions looking for iParent references and replace
+ ** those references with expressions that resolve to the subquery FROM
+ ** elements we are now copying in.
+ */
+ iParent = pSubitem->iCursor;
+ {
+ int nSubSrc = pSubSrc->nSrc;
+ int jointype = pSubitem->jointype;
+
+ sqlite3DeleteTable(0, pSubitem->pTab);
+ sqliteFree(pSubitem->zDatabase);
+ sqliteFree(pSubitem->zName);
+ sqliteFree(pSubitem->zAlias);
+ if( nSubSrc>1 ){
+ int extra = nSubSrc - 1;
+ for(i=1; i<nSubSrc; i++){
+ pSrc = sqlite3SrcListAppend(pSrc, 0, 0);
+ }
+ p->pSrc = pSrc;
+ for(i=pSrc->nSrc-1; i-extra>=iFrom; i--){
+ pSrc->a[i] = pSrc->a[i-extra];
+ }
+ }
+ for(i=0; i<nSubSrc; i++){
+ pSrc->a[i+iFrom] = pSubSrc->a[i];
+ memset(&pSubSrc->a[i], 0, sizeof(pSubSrc->a[i]));
+ }
+ pSrc->a[iFrom+nSubSrc-1].jointype = jointype;
+ }
+
+ /* Now begin substituting subquery result set expressions for
+ ** references to the iParent in the outer query.
+ **
+ ** Example:
+ **
+ ** SELECT a+5, b*10 FROM (SELECT x*3 AS a, y+10 AS b FROM t1) WHERE a>b;
+ ** \ \_____________ subquery __________/ /
+ ** \_____________________ outer query ______________________________/
+ **
+ ** We look at every expression in the outer query and every place we see
+ ** "a" we substitute "x*3" and every place we see "b" we substitute "y+10".
+ */
+ pList = p->pEList;
+ for(i=0; i<pList->nExpr; i++){
+ Expr *pExpr;
+ if( pList->a[i].zName==0 && (pExpr = pList->a[i].pExpr)->span.z!=0 ){
+ pList->a[i].zName = sqliteStrNDup((char*)pExpr->span.z, pExpr->span.n);
+ }
+ }
+ substExprList(p->pEList, iParent, pSub->pEList);
+ if( isAgg ){
+ substExprList(p->pGroupBy, iParent, pSub->pEList);
+ substExpr(p->pHaving, iParent, pSub->pEList);
+ }
+ if( pSub->pOrderBy ){
+ assert( p->pOrderBy==0 );
+ p->pOrderBy = pSub->pOrderBy;
+ pSub->pOrderBy = 0;
+ }else if( p->pOrderBy ){
+ substExprList(p->pOrderBy, iParent, pSub->pEList);
+ }
+ if( pSub->pWhere ){
+ pWhere = sqlite3ExprDup(pSub->pWhere);
+ }else{
+ pWhere = 0;
+ }
+ if( subqueryIsAgg ){
+ assert( p->pHaving==0 );
+ p->pHaving = p->pWhere;
+ p->pWhere = pWhere;
+ substExpr(p->pHaving, iParent, pSub->pEList);
+ p->pHaving = sqlite3ExprAnd(p->pHaving, sqlite3ExprDup(pSub->pHaving));
+ assert( p->pGroupBy==0 );
+ p->pGroupBy = sqlite3ExprListDup(pSub->pGroupBy);
+ }else{
+ substExpr(p->pWhere, iParent, pSub->pEList);
+ p->pWhere = sqlite3ExprAnd(p->pWhere, pWhere);
+ }
+
+ /* The flattened query is distinct if either the inner or the
+ ** outer query is distinct.
+ */
+ p->isDistinct = p->isDistinct || pSub->isDistinct;
+
+ /*
+ ** SELECT ... FROM (SELECT ... LIMIT a OFFSET b) LIMIT x OFFSET y;
+ **
+ ** One is tempted to try to add a and b to combine the limits. But this
+ ** does not work if either limit is negative.
+ */
+ if( pSub->pLimit ){
+ p->pLimit = pSub->pLimit;
+ pSub->pLimit = 0;
+ }
+
+ /* Finally, delete what is left of the subquery and return
+ ** success.
+ */
+ sqlite3SelectDelete(pSub);
+ return 1;
+}
+#endif /* SQLITE_OMIT_VIEW */
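+
+/* A worked illustration of the flattening and substitution above (a
+** sketch only; the table and column names are hypothetical):
+**
+**     SELECT a FROM (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5
+**
+** The subquery FROM element t1 is hoisted into the outer FROM clause,
+** substExpr() rewrites the outer reference to "a" as a copy of "x+y",
+** and sqlite3ExprAnd() merges the two WHERE clauses, giving:
+**
+**     SELECT x+y AS a FROM t1 WHERE z<100 AND x+y>5
+*/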
+
+/*
+** Analyze the SELECT statement passed in as an argument to see if it
+** is a simple min() or max() query. If it is and this query can be
+** satisfied using a single seek to the beginning or end of an index,
+** then generate the code for this SELECT and return 1. If this is not a
+** simple min() or max() query, then return 0.
+**
+** A simple min() or max() query looks like this:
+**
+** SELECT min(a) FROM table;
+** SELECT max(a) FROM table;
+**
+** The query may have only a single table in its FROM argument. There
+** can be no GROUP BY or HAVING or WHERE clauses. The result set must
+** be the min() or max() of a single column of the table. The column
+** in the min() or max() function must be indexed.
+**
+** The parameters to this routine are the same as for sqlite3Select().
+** See the header comment on that routine for additional information.
+*/
+static int simpleMinMaxQuery(Parse *pParse, Select *p, int eDest, int iParm){
+ Expr *pExpr;
+ int iCol;
+ Table *pTab;
+ Index *pIdx;
+ int base;
+ Vdbe *v;
+ int seekOp;
+ ExprList *pEList, *pList, eList;
+ struct ExprList_item eListItem;
+ SrcList *pSrc;
+ int brk;
+ int iDb;
+
+ /* Check to see if this query is a simple min() or max() query. Return
+ ** zero if it is not.
+ */
+ if( p->pGroupBy || p->pHaving || p->pWhere ) return 0;
+ pSrc = p->pSrc;
+ if( pSrc->nSrc!=1 ) return 0;
+ pEList = p->pEList;
+ if( pEList->nExpr!=1 ) return 0;
+ pExpr = pEList->a[0].pExpr;
+ if( pExpr->op!=TK_AGG_FUNCTION ) return 0;
+ pList = pExpr->pList;
+ if( pList==0 || pList->nExpr!=1 ) return 0;
+ if( pExpr->token.n!=3 ) return 0;
+ if( sqlite3StrNICmp((char*)pExpr->token.z,"min",3)==0 ){
+ seekOp = OP_Rewind;
+ }else if( sqlite3StrNICmp((char*)pExpr->token.z,"max",3)==0 ){
+ seekOp = OP_Last;
+ }else{
+ return 0;
+ }
+ pExpr = pList->a[0].pExpr;
+ if( pExpr->op!=TK_COLUMN ) return 0;
+ iCol = pExpr->iColumn;
+ pTab = pSrc->a[0].pTab;
+
+
+ /* If we get to here, it means the query is of the correct form.
+ ** Check to make sure we have an index and make pIdx point to the
+ ** appropriate index. If the min() or max() is on an INTEGER PRIMARY
+ ** key column, no index is necessary so set pIdx to NULL. If no
+ ** usable index is found, return 0.
+ */
+ if( iCol<0 ){
+ pIdx = 0;
+ }else{
+ CollSeq *pColl = sqlite3ExprCollSeq(pParse, pExpr);
+ if( pColl==0 ) return 0;
+ for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
+ assert( pIdx->nColumn>=1 );
+ if( pIdx->aiColumn[0]==iCol &&
+ 0==sqlite3StrICmp(pIdx->azColl[0], pColl->zName) ){
+ break;
+ }
+ }
+ if( pIdx==0 ) return 0;
+ }
+
+ /* Identify column types if we will be using the callback. This
+ ** step is skipped if the output is going to a table or a memory cell.
+ ** The column names have already been generated in the calling function.
+ */
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) return 0;
+
+ /* If the output is destined for a temporary table, open that table.
+ */
+ if( eDest==SRT_EphemTab ){
+ sqlite3VdbeAddOp(v, OP_OpenEphemeral, iParm, 1);
+ }
+
+ /* Generating code to find the min or the max. Basically all we have
+ ** to do is find the first or the last entry in the chosen index. If
+ ** the min() or max() is on the INTEGER PRIMARY KEY, then find the first
+ ** or last entry in the main table.
+ */
+ iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ assert( iDb>=0 || pTab->isEphem );
+ sqlite3CodeVerifySchema(pParse, iDb);
+ sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName);
+ base = pSrc->a[0].iCursor;
+ brk = sqlite3VdbeMakeLabel(v);
+ computeLimitRegisters(pParse, p, brk);
+ if( pSrc->a[0].pSelect==0 ){
+ sqlite3OpenTable(pParse, base, iDb, pTab, OP_OpenRead);
+ }
+ if( pIdx==0 ){
+ sqlite3VdbeAddOp(v, seekOp, base, 0);
+ }else{
+ /* Even though the cursor used to open the index here is closed
+ ** as soon as a single value has been read from it, allocate it
+ ** using (pParse->nTab++) to prevent the cursor id from being
+ ** reused. This is important for statements of the form
+ ** "INSERT INTO x SELECT max() FROM x".
+ */
+ int iIdx;
+ KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx);
+ iIdx = pParse->nTab++;
+ assert( pIdx->pSchema==pTab->pSchema );
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ sqlite3VdbeOp3(v, OP_OpenRead, iIdx, pIdx->tnum,
+ (char*)pKey, P3_KEYINFO_HANDOFF);
+ if( seekOp==OP_Rewind ){
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ sqlite3VdbeAddOp(v, OP_MakeRecord, 1, 0);
+ seekOp = OP_MoveGt;
+ }
+ sqlite3VdbeAddOp(v, seekOp, iIdx, 0);
+ sqlite3VdbeAddOp(v, OP_IdxRowid, iIdx, 0);
+ sqlite3VdbeAddOp(v, OP_Close, iIdx, 0);
+ sqlite3VdbeAddOp(v, OP_MoveGe, base, 0);
+ }
+ eList.nExpr = 1;
+ memset(&eListItem, 0, sizeof(eListItem));
+ eList.a = &eListItem;
+ eList.a[0].pExpr = pExpr;
+ selectInnerLoop(pParse, p, &eList, 0, 0, 0, -1, eDest, iParm, brk, brk, 0);
+ sqlite3VdbeResolveLabel(v, brk);
+ sqlite3VdbeAddOp(v, OP_Close, base, 0);
+
+ return 1;
+}
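+
+/* A brief worked example of the optimization above (a sketch; the table
+** and index names are hypothetical):
+**
+**     CREATE TABLE t1(a, b);
+**     CREATE INDEX i1 ON t1(a);
+**     SELECT min(a) FROM t1;
+**
+** The query passes every test above, so instead of scanning t1 the code
+** opens index i1, seeks past any leading NULL entries using the
+** OP_Null/OP_MakeRecord/OP_MoveGt sequence (min() ignores NULLs), and
+** reads exactly one row.  "SELECT max(a) FROM t1" simply does OP_Last.
+*/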
+
+/*
+** Analyze an ORDER BY or GROUP BY clause in a SELECT statement. Return
+** the number of errors seen.
+**
+** An ORDER BY or GROUP BY is a list of expressions. If any expression
+** is an integer constant, then that expression is replaced by the
+** corresponding entry in the result set.
+*/
+static int processOrderGroupBy(
+ NameContext *pNC, /* Name context of the SELECT statement. */
+ ExprList *pOrderBy, /* The ORDER BY or GROUP BY clause to be processed */
+ const char *zType /* Either "ORDER" or "GROUP", as appropriate */
+){
+ int i;
+ ExprList *pEList = pNC->pEList; /* The result set of the SELECT */
+ Parse *pParse = pNC->pParse; /* The parsing context */
+ assert( pEList );
+
+ if( pOrderBy==0 ) return 0;
+ for(i=0; i<pOrderBy->nExpr; i++){
+ int iCol;
+ Expr *pE = pOrderBy->a[i].pExpr;
+ if( sqlite3ExprIsInteger(pE, &iCol) ){
+ if( iCol>0 && iCol<=pEList->nExpr ){
+ sqlite3ExprDelete(pE);
+ pE = pOrderBy->a[i].pExpr = sqlite3ExprDup(pEList->a[iCol-1].pExpr);
+ }else{
+ sqlite3ErrorMsg(pParse,
+ "%s BY column number %d out of range - should be "
+ "between 1 and %d", zType, iCol, pEList->nExpr);
+ return 1;
+ }
+ }
+ if( sqlite3ExprResolveNames(pNC, pE) ){
+ return 1;
+ }
+ }
+ return 0;
+}
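+
+/* Example of the substitution performed above (illustrative only):
+**
+**     SELECT x, y+1 AS z FROM t ORDER BY 2;
+**
+** The constant 2 is replaced by a duplicate of the second result-set
+** expression, so the clause behaves like "ORDER BY y+1".  "ORDER BY 3"
+** would trigger the "out of range" error message generated above.
+*/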
+
+/*
+** This routine resolves any names used in the result set of the
+** supplied SELECT statement. If the SELECT statement being resolved
+** is a sub-select, then pOuterNC is a pointer to the NameContext
+** of the parent SELECT.
+*/
+int sqlite3SelectResolve(
+ Parse *pParse, /* The parser context */
+ Select *p, /* The SELECT statement being coded. */
+ NameContext *pOuterNC /* The outer name context. May be NULL. */
+){
+ ExprList *pEList; /* Result set. */
+ int i; /* For-loop variable used in multiple places */
+ NameContext sNC; /* Local name-context */
+ ExprList *pGroupBy; /* The group by clause */
+
+ /* If this routine has run before, return immediately. */
+ if( p->isResolved ){
+ assert( !pOuterNC );
+ return SQLITE_OK;
+ }
+ p->isResolved = 1;
+
+ /* If there have already been errors, do nothing. */
+ if( pParse->nErr>0 ){
+ return SQLITE_ERROR;
+ }
+
+ /* Prepare the select statement. This call will allocate all cursors
+ ** required to handle the tables and subqueries in the FROM clause.
+ */
+ if( prepSelectStmt(pParse, p) ){
+ return SQLITE_ERROR;
+ }
+
+ /* Resolve the expressions in the LIMIT and OFFSET clauses. These
+ ** are not allowed to refer to any names, so pass an empty NameContext.
+ */
+ memset(&sNC, 0, sizeof(sNC));
+ sNC.pParse = pParse;
+ if( sqlite3ExprResolveNames(&sNC, p->pLimit) ||
+ sqlite3ExprResolveNames(&sNC, p->pOffset) ){
+ return SQLITE_ERROR;
+ }
+
+ /* Set up the local name-context to pass to ExprResolveNames() to
+ ** resolve the expression-list.
+ */
+ sNC.allowAgg = 1;
+ sNC.pSrcList = p->pSrc;
+ sNC.pNext = pOuterNC;
+
+ /* Resolve names in the result set. */
+ pEList = p->pEList;
+ if( !pEList ) return SQLITE_ERROR;
+ for(i=0; i<pEList->nExpr; i++){
+ Expr *pX = pEList->a[i].pExpr;
+ if( sqlite3ExprResolveNames(&sNC, pX) ){
+ return SQLITE_ERROR;
+ }
+ }
+
+ /* If there are no aggregate functions in the result-set, and no GROUP BY
+ ** expression, do not allow aggregates in any of the other expressions.
+ */
+ assert( !p->isAgg );
+ pGroupBy = p->pGroupBy;
+ if( pGroupBy || sNC.hasAgg ){
+ p->isAgg = 1;
+ }else{
+ sNC.allowAgg = 0;
+ }
+
+ /* If a HAVING clause is present, then there must be a GROUP BY clause.
+ */
+ if( p->pHaving && !pGroupBy ){
+ sqlite3ErrorMsg(pParse, "a GROUP BY clause is required before HAVING");
+ return SQLITE_ERROR;
+ }
+
+ /* Add the expression list to the name-context before parsing the
+ ** other expressions in the SELECT statement. This is so that
+ ** expressions in the WHERE clause (etc.) can refer to expressions by
+ ** aliases in the result set.
+ **
+ ** Minor point: If this is the case, then the expression will be
+ ** re-evaluated for each reference to it.
+ */
+ sNC.pEList = p->pEList;
+ if( sqlite3ExprResolveNames(&sNC, p->pWhere) ||
+ sqlite3ExprResolveNames(&sNC, p->pHaving) ||
+ processOrderGroupBy(&sNC, p->pOrderBy, "ORDER") ||
+ processOrderGroupBy(&sNC, pGroupBy, "GROUP")
+ ){
+ return SQLITE_ERROR;
+ }
+
+ /* Make sure the GROUP BY clause does not contain aggregate functions.
+ */
+ if( pGroupBy ){
+ struct ExprList_item *pItem;
+
+ for(i=0, pItem=pGroupBy->a; i<pGroupBy->nExpr; i++, pItem++){
+ if( ExprHasProperty(pItem->pExpr, EP_Agg) ){
+ sqlite3ErrorMsg(pParse, "aggregate functions are not allowed in "
+ "the GROUP BY clause");
+ return SQLITE_ERROR;
+ }
+ }
+ }
+
+ return SQLITE_OK;
+}
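+
+/* Example of the alias resolution described above (illustrative only):
+**
+**     SELECT x+y AS s FROM t WHERE s>10 ORDER BY s;
+**
+** Because sNC.pEList is attached before the WHERE and ORDER BY clauses
+** are resolved, both references to "s" resolve to copies of "x+y", and
+** that expression is re-evaluated once for each reference.
+*/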
+
+/*
+** Reset the aggregate accumulator.
+**
+** The aggregate accumulator is a set of memory cells that hold
+** intermediate results while calculating an aggregate. This
+** routine simply stores NULLs in all of those memory cells.
+*/
+static void resetAccumulator(Parse *pParse, AggInfo *pAggInfo){
+ Vdbe *v = pParse->pVdbe;
+ int i;
+ struct AggInfo_func *pFunc;
+ if( pAggInfo->nFunc+pAggInfo->nColumn==0 ){
+ return;
+ }
+ for(i=0; i<pAggInfo->nColumn; i++){
+ sqlite3VdbeAddOp(v, OP_MemNull, pAggInfo->aCol[i].iMem, 0);
+ }
+ for(pFunc=pAggInfo->aFunc, i=0; i<pAggInfo->nFunc; i++, pFunc++){
+ sqlite3VdbeAddOp(v, OP_MemNull, pFunc->iMem, 0);
+ if( pFunc->iDistinct>=0 ){
+ Expr *pE = pFunc->pExpr;
+ if( pE->pList==0 || pE->pList->nExpr!=1 ){
+ sqlite3ErrorMsg(pParse, "DISTINCT in aggregate must be followed "
+ "by an expression");
+ pFunc->iDistinct = -1;
+ }else{
+ KeyInfo *pKeyInfo = keyInfoFromExprList(pParse, pE->pList);
+ sqlite3VdbeOp3(v, OP_OpenEphemeral, pFunc->iDistinct, 0,
+ (char*)pKeyInfo, P3_KEYINFO_HANDOFF);
+ }
+ }
+ }
+}
+
+/*
+** Invoke the OP_AggFinalize opcode for every aggregate function
+** in the AggInfo structure.
+*/
+static void finalizeAggFunctions(Parse *pParse, AggInfo *pAggInfo){
+ Vdbe *v = pParse->pVdbe;
+ int i;
+ struct AggInfo_func *pF;
+ for(i=0, pF=pAggInfo->aFunc; i<pAggInfo->nFunc; i++, pF++){
+ ExprList *pList = pF->pExpr->pList;
+ sqlite3VdbeOp3(v, OP_AggFinal, pF->iMem, pList ? pList->nExpr : 0,
+ (void*)pF->pFunc, P3_FUNCDEF);
+ }
+}
+
+/*
+** Update the accumulator memory cells for an aggregate based on
+** the current cursor position.
+*/
+static void updateAccumulator(Parse *pParse, AggInfo *pAggInfo){
+ Vdbe *v = pParse->pVdbe;
+ int i;
+ struct AggInfo_func *pF;
+ struct AggInfo_col *pC;
+
+ pAggInfo->directMode = 1;
+ for(i=0, pF=pAggInfo->aFunc; i<pAggInfo->nFunc; i++, pF++){
+ int nArg;
+ int addrNext = 0;
+ ExprList *pList = pF->pExpr->pList;
+ if( pList ){
+ nArg = pList->nExpr;
+ sqlite3ExprCodeExprList(pParse, pList);
+ }else{
+ nArg = 0;
+ }
+ if( pF->iDistinct>=0 ){
+ addrNext = sqlite3VdbeMakeLabel(v);
+ assert( nArg==1 );
+ codeDistinct(v, pF->iDistinct, addrNext, 1);
+ }
+ if( pF->pFunc->needCollSeq ){
+ CollSeq *pColl = 0;
+ struct ExprList_item *pItem;
+ int j;
+ assert( pList!=0 ); /* pList!=0 if pF->pFunc->needCollSeq is true */
+ for(j=0, pItem=pList->a; !pColl && j<nArg; j++, pItem++){
+ pColl = sqlite3ExprCollSeq(pParse, pItem->pExpr);
+ }
+ if( !pColl ){
+ pColl = pParse->db->pDfltColl;
+ }
+ sqlite3VdbeOp3(v, OP_CollSeq, 0, 0, (char *)pColl, P3_COLLSEQ);
+ }
+ sqlite3VdbeOp3(v, OP_AggStep, pF->iMem, nArg, (void*)pF->pFunc, P3_FUNCDEF);
+ if( addrNext ){
+ sqlite3VdbeResolveLabel(v, addrNext);
+ }
+ }
+ for(i=0, pC=pAggInfo->aCol; i<pAggInfo->nAccumulator; i++, pC++){
+ sqlite3ExprCode(pParse, pC->pExpr);
+ sqlite3VdbeAddOp(v, OP_MemStore, pC->iMem, 1);
+ }
+ pAggInfo->directMode = 0;
+}
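+
+/* A sketch of how the three routines above cooperate for the query
+** "SELECT count(DISTINCT x) FROM t" (names are illustrative):
+** resetAccumulator() stores NULL in the accumulator cell and opens the
+** ephemeral table used to filter duplicate values of x,
+** updateAccumulator() codes one OP_AggStep per input row (skipping rows
+** whose x has already been seen, via codeDistinct()), and
+** finalizeAggFunctions() emits the OP_AggFinal that produces the count.
+*/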
+
+
+/*
+** Generate code for the given SELECT statement.
+**
+** The results are distributed in various ways depending on the
+** value of eDest and iParm.
+**
+** eDest Value Result
+** ------------ -------------------------------------------
+** SRT_Callback Invoke the callback for each row of the result.
+**
+** SRT_Mem Store first result in memory cell iParm
+**
+** SRT_Set Store results as keys of table iParm.
+**
+** SRT_Union Store results as a key in a temporary table iParm
+**
+** SRT_Except Remove results from the temporary table iParm.
+**
+** SRT_Table Store results in temporary table iParm
+**
+** The table above is incomplete. Additional eDest values have been added
+** since this comment was written. See the selectInnerLoop() function for
+** a complete listing of the allowed values of eDest and their meanings.
+**
+** This routine returns the number of errors. If any errors are
+** encountered, then an appropriate error message is left in
+** pParse->zErrMsg.
+**
+** This routine does NOT free the Select structure passed in. The
+** calling function needs to do that.
+**
+** The pParent, parentTab, and *pParentAgg fields are filled in if this
+** SELECT is a subquery. This routine may try to combine this SELECT
+** with its parent to form a single flat query. In so doing, it might
+** change the parent query from a non-aggregate to an aggregate query.
+** For that reason, the pParentAgg flag is passed as a pointer, so it
+** can be changed.
+**
+** Example 1: The meaning of the pParent parameter.
+**
+** SELECT * FROM t1 JOIN (SELECT x, count(*) FROM t2) JOIN t3;
+** \ \_______ subquery _______/ /
+** \ /
+** \____________________ outer query ___________________/
+**
+** This routine is called for the outer query first. For that call,
+** pParent will be NULL. During the processing of the outer query, this
+** routine is called recursively to handle the subquery. For the recursive
+** call, pParent will point to the outer query. Because the subquery is
+** the second element in a three-way join, the parentTab parameter will
+** be 1 (the 2nd value of a 0-indexed array.)
+*/
+int sqlite3Select(
+ Parse *pParse, /* The parser context */
+ Select *p, /* The SELECT statement being coded. */
+ int eDest, /* How to dispose of the results */
+ int iParm, /* A parameter used by the eDest disposal method */
+ Select *pParent, /* Another SELECT for which this is a sub-query */
+ int parentTab, /* Index in pParent->pSrc of this query */
+ int *pParentAgg, /* True if pParent uses aggregate functions */
+ char *aff /* If eDest is SRT_Union, the affinity string */
+){
+ int i, j; /* Loop counters */
+ WhereInfo *pWInfo; /* Return from sqlite3WhereBegin() */
+ Vdbe *v; /* The virtual machine under construction */
+ int isAgg; /* True for select lists like "count(*)" */
+ ExprList *pEList; /* List of columns to extract. */
+ SrcList *pTabList; /* List of tables to select from */
+ Expr *pWhere; /* The WHERE clause. May be NULL */
+ ExprList *pOrderBy; /* The ORDER BY clause. May be NULL */
+ ExprList *pGroupBy; /* The GROUP BY clause. May be NULL */
+ Expr *pHaving; /* The HAVING clause. May be NULL */
+ int isDistinct; /* True if the DISTINCT keyword is present */
+ int distinct; /* Table to use for the distinct set */
+ int rc = 1; /* Value to return from this function */
+ int addrSortIndex; /* Address of an OP_OpenEphemeral instruction */
+ AggInfo sAggInfo; /* Information used by aggregate queries */
+ int iEnd; /* Address of the end of the query */
+
+ if( p==0 || sqlite3MallocFailed() || pParse->nErr ){
+ return 1;
+ }
+ if( sqlite3AuthCheck(pParse, SQLITE_SELECT, 0, 0, 0) ) return 1;
+ memset(&sAggInfo, 0, sizeof(sAggInfo));
+
+#ifndef SQLITE_OMIT_COMPOUND_SELECT
+ /* If there is a sequence of queries, do the earlier ones first.
+ */
+ if( p->pPrior ){
+ if( p->pRightmost==0 ){
+ Select *pLoop;
+ for(pLoop=p; pLoop; pLoop=pLoop->pPrior){
+ pLoop->pRightmost = p;
+ }
+ }
+ return multiSelect(pParse, p, eDest, iParm, aff);
+ }
+#endif
+
+ pOrderBy = p->pOrderBy;
+ if( IgnorableOrderby(eDest) ){
+ p->pOrderBy = 0;
+ }
+ if( sqlite3SelectResolve(pParse, p, 0) ){
+ goto select_end;
+ }
+ p->pOrderBy = pOrderBy;
+
+ /* Make local copies of the parameters for this query.
+ */
+ pTabList = p->pSrc;
+ pWhere = p->pWhere;
+ pGroupBy = p->pGroupBy;
+ pHaving = p->pHaving;
+ isAgg = p->isAgg;
+ isDistinct = p->isDistinct;
+ pEList = p->pEList;
+ if( pEList==0 ) goto select_end;
+
+ /*
+ ** Do not even attempt to generate any code if we have already seen
+ ** errors before this routine starts.
+ */
+ if( pParse->nErr>0 ) goto select_end;
+
+ /* If writing to memory or generating a set
+ ** only a single column may be output.
+ */
+#ifndef SQLITE_OMIT_SUBQUERY
+ if( (eDest==SRT_Mem || eDest==SRT_Set) && pEList->nExpr>1 ){
+ sqlite3ErrorMsg(pParse, "only a single result allowed for "
+ "a SELECT that is part of an expression");
+ goto select_end;
+ }
+#endif
+
+ /* ORDER BY is ignored for some destinations.
+ */
+ if( IgnorableOrderby(eDest) ){
+ pOrderBy = 0;
+ }
+
+ /* Begin generating code.
+ */
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) goto select_end;
+
+ /* Generate code for all sub-queries in the FROM clause
+ */
+#if !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW)
+ for(i=0; i<pTabList->nSrc; i++){
+ const char *zSavedAuthContext = 0;
+ int needRestoreContext;
+ struct SrcList_item *pItem = &pTabList->a[i];
+
+ if( pItem->pSelect==0 || pItem->isPopulated ) continue;
+ if( pItem->zName!=0 ){
+ zSavedAuthContext = pParse->zAuthContext;
+ pParse->zAuthContext = pItem->zName;
+ needRestoreContext = 1;
+ }else{
+ needRestoreContext = 0;
+ }
+ sqlite3Select(pParse, pItem->pSelect, SRT_EphemTab,
+ pItem->iCursor, p, i, &isAgg, 0);
+ if( needRestoreContext ){
+ pParse->zAuthContext = zSavedAuthContext;
+ }
+ pTabList = p->pSrc;
+ pWhere = p->pWhere;
+ if( !IgnorableOrderby(eDest) ){
+ pOrderBy = p->pOrderBy;
+ }
+ pGroupBy = p->pGroupBy;
+ pHaving = p->pHaving;
+ isDistinct = p->isDistinct;
+ }
+#endif
+
+ /* Check for the special case of a min() or max() function by itself
+ ** in the result set.
+ */
+ if( simpleMinMaxQuery(pParse, p, eDest, iParm) ){
+ rc = 0;
+ goto select_end;
+ }
+
+ /* Check to see if this is a subquery that can be "flattened" into its parent.
+ ** If flattening is a possibility, do so and return immediately.
+ */
+#ifndef SQLITE_OMIT_VIEW
+ if( pParent && pParentAgg &&
+ flattenSubquery(pParent, parentTab, *pParentAgg, isAgg) ){
+ if( isAgg ) *pParentAgg = 1;
+ goto select_end;
+ }
+#endif
+
+ /* If there is an ORDER BY clause, resolve any collation sequence
+ ** names that have been explicitly specified and create a sorting index.
+ **
+ ** This sorting index might end up being unused if the data can be
+ ** extracted in pre-sorted order. If that is the case, then the
+ ** OP_OpenEphemeral instruction will be changed to an OP_Noop once
+ ** we figure out that the sorting index is not needed. The addrSortIndex
+ ** variable is used to facilitate that change.
+ */
+ if( pOrderBy ){
+ struct ExprList_item *pTerm;
+ KeyInfo *pKeyInfo;
+ for(i=0, pTerm=pOrderBy->a; i<pOrderBy->nExpr; i++, pTerm++){
+ if( pTerm->zName ){
+ pTerm->pExpr->pColl = sqlite3LocateCollSeq(pParse, pTerm->zName, -1);
+ }
+ }
+ if( pParse->nErr ){
+ goto select_end;
+ }
+ pKeyInfo = keyInfoFromExprList(pParse, pOrderBy);
+ pOrderBy->iECursor = pParse->nTab++;
+ p->addrOpenEphm[2] = addrSortIndex =
+ sqlite3VdbeOp3(v, OP_OpenEphemeral, pOrderBy->iECursor, pOrderBy->nExpr+2,
+ (char*)pKeyInfo, P3_KEYINFO_HANDOFF);
+ }else{
+ addrSortIndex = -1;
+ }
+
+ /* If the output is destined for a temporary table, open that table.
+ */
+ if( eDest==SRT_EphemTab ){
+ sqlite3VdbeAddOp(v, OP_OpenEphemeral, iParm, pEList->nExpr);
+ }
+
+ /* Set the limiter.
+ */
+ iEnd = sqlite3VdbeMakeLabel(v);
+ computeLimitRegisters(pParse, p, iEnd);
+
+ /* Open a virtual index to use for the distinct set.
+ */
+ if( isDistinct ){
+ KeyInfo *pKeyInfo;
+ distinct = pParse->nTab++;
+ pKeyInfo = keyInfoFromExprList(pParse, p->pEList);
+ sqlite3VdbeOp3(v, OP_OpenEphemeral, distinct, 0,
+ (char*)pKeyInfo, P3_KEYINFO_HANDOFF);
+ }else{
+ distinct = -1;
+ }
+
+ /* Aggregate and non-aggregate queries are handled differently */
+ if( !isAgg && pGroupBy==0 ){
+ /* This case is for non-aggregate queries
+ ** Begin the database scan
+ */
+ pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, &pOrderBy);
+ if( pWInfo==0 ) goto select_end;
+
+ /* If the sorting index that was created by a prior OP_OpenEphemeral
+ ** instruction ended up not being needed, then change the OP_OpenEphemeral
+ ** into an OP_Noop.
+ */
+ if( addrSortIndex>=0 && pOrderBy==0 ){
+ sqlite3VdbeChangeToNoop(v, addrSortIndex, 1);
+ p->addrOpenEphm[2] = -1;
+ }
+
+ /* Use the standard inner loop
+ */
+ if( selectInnerLoop(pParse, p, pEList, 0, 0, pOrderBy, distinct, eDest,
+ iParm, pWInfo->iContinue, pWInfo->iBreak, aff) ){
+ goto select_end;
+ }
+
+ /* End the database scan loop.
+ */
+ sqlite3WhereEnd(pWInfo);
+ }else{
+ /* This is the processing for aggregate queries */
+ NameContext sNC; /* Name context for processing aggregate information */
+ int iAMem; /* First Mem address for storing current GROUP BY */
+ int iBMem; /* First Mem address for previous GROUP BY */
+ int iUseFlag; /* Mem address holding flag indicating that at least
+ ** one row of the input to the aggregator has been
+ ** processed */
+ int iAbortFlag; /* Mem address which causes query abort if positive */
+ int groupBySort; /* Rows come from source in GROUP BY order */
+
+
+ /* The following variables hold addresses or labels for parts of the
+ ** virtual machine program we are putting together */
+ int addrOutputRow; /* Start of subroutine that outputs a result row */
+ int addrSetAbort; /* Set the abort flag and return */
+ int addrInitializeLoop; /* Start of code that initializes the input loop */
+ int addrTopOfLoop; /* Top of the input loop */
+ int addrGroupByChange; /* Code that runs when any GROUP BY term changes */
+ int addrProcessRow; /* Code to process a single input row */
+ int addrEnd; /* End of all processing */
+ int addrSortingIdx; /* The OP_OpenEphemeral for the sorting index */
+ int addrReset; /* Subroutine for resetting the accumulator */
+
+ addrEnd = sqlite3VdbeMakeLabel(v);
+
+ /* Convert TK_COLUMN nodes into TK_AGG_COLUMN and make entries in
+ ** sAggInfo for all TK_AGG_FUNCTION nodes in expressions of the
+ ** SELECT statement.
+ */
+ memset(&sNC, 0, sizeof(sNC));
+ sNC.pParse = pParse;
+ sNC.pSrcList = pTabList;
+ sNC.pAggInfo = &sAggInfo;
+ sAggInfo.nSortingColumn = pGroupBy ? pGroupBy->nExpr+1 : 0;
+ sAggInfo.pGroupBy = pGroupBy;
+ if( sqlite3ExprAnalyzeAggList(&sNC, pEList) ){
+ goto select_end;
+ }
+ if( sqlite3ExprAnalyzeAggList(&sNC, pOrderBy) ){
+ goto select_end;
+ }
+ if( pHaving && sqlite3ExprAnalyzeAggregates(&sNC, pHaving) ){
+ goto select_end;
+ }
+ sAggInfo.nAccumulator = sAggInfo.nColumn;
+ for(i=0; i<sAggInfo.nFunc; i++){
+ if( sqlite3ExprAnalyzeAggList(&sNC, sAggInfo.aFunc[i].pExpr->pList) ){
+ goto select_end;
+ }
+ }
+ if( sqlite3MallocFailed() ) goto select_end;
+
+ /* Processing for aggregates with GROUP BY is very different and
+ ** much more complex than aggregates without a GROUP BY.
+ */
+ if( pGroupBy ){
+ KeyInfo *pKeyInfo; /* Keying information for the group by clause */
+
+ /* Create labels that we will be needing
+ */
+
+ addrInitializeLoop = sqlite3VdbeMakeLabel(v);
+ addrGroupByChange = sqlite3VdbeMakeLabel(v);
+ addrProcessRow = sqlite3VdbeMakeLabel(v);
+
+ /* If there is a GROUP BY clause we might need a sorting index to
+ ** implement it. Allocate that sorting index now. If it turns out
+ ** that we do not need it after all, the OpenEphemeral instruction
+ ** will be converted into a Noop.
+ */
+ sAggInfo.sortingIdx = pParse->nTab++;
+ pKeyInfo = keyInfoFromExprList(pParse, pGroupBy);
+ addrSortingIdx =
+ sqlite3VdbeOp3(v, OP_OpenEphemeral, sAggInfo.sortingIdx,
+ sAggInfo.nSortingColumn,
+ (char*)pKeyInfo, P3_KEYINFO_HANDOFF);
+
+ /* Initialize memory locations used by GROUP BY aggregate processing
+ */
+ iUseFlag = pParse->nMem++;
+ iAbortFlag = pParse->nMem++;
+ iAMem = pParse->nMem;
+ pParse->nMem += pGroupBy->nExpr;
+ iBMem = pParse->nMem;
+ pParse->nMem += pGroupBy->nExpr;
+ sqlite3VdbeAddOp(v, OP_MemInt, 0, iAbortFlag);
+ VdbeComment((v, "# clear abort flag"));
+ sqlite3VdbeAddOp(v, OP_MemInt, 0, iUseFlag);
+ VdbeComment((v, "# indicate accumulator empty"));
+ sqlite3VdbeAddOp(v, OP_Goto, 0, addrInitializeLoop);
+
+ /* Generate a subroutine that outputs a single row of the result
+ ** set. This subroutine first looks at the iUseFlag. If iUseFlag
+ ** is less than or equal to zero, the subroutine is a no-op. If
+ ** the processing calls for the query to abort, this subroutine
+ ** increments the iAbortFlag memory location before returning in
+ ** order to signal the caller to abort.
+ */
+ addrSetAbort = sqlite3VdbeCurrentAddr(v);
+ sqlite3VdbeAddOp(v, OP_MemInt, 1, iAbortFlag);
+ VdbeComment((v, "# set abort flag"));
+ sqlite3VdbeAddOp(v, OP_Return, 0, 0);
+ addrOutputRow = sqlite3VdbeCurrentAddr(v);
+ sqlite3VdbeAddOp(v, OP_IfMemPos, iUseFlag, addrOutputRow+2);
+ VdbeComment((v, "# Groupby result generator entry point"));
+ sqlite3VdbeAddOp(v, OP_Return, 0, 0);
+ finalizeAggFunctions(pParse, &sAggInfo);
+ if( pHaving ){
+ sqlite3ExprIfFalse(pParse, pHaving, addrOutputRow+1, 1);
+ }
+ rc = selectInnerLoop(pParse, p, p->pEList, 0, 0, pOrderBy,
+ distinct, eDest, iParm,
+ addrOutputRow+1, addrSetAbort, aff);
+ if( rc ){
+ goto select_end;
+ }
+ sqlite3VdbeAddOp(v, OP_Return, 0, 0);
+ VdbeComment((v, "# end groupby result generator"));
+
+ /* Generate a subroutine that will reset the group-by accumulator
+ */
+ addrReset = sqlite3VdbeCurrentAddr(v);
+ resetAccumulator(pParse, &sAggInfo);
+ sqlite3VdbeAddOp(v, OP_Return, 0, 0);
+
+ /* Begin a loop that will extract all source rows in GROUP BY order.
+ ** This might involve two separate loops with an OP_Sort in between, or
+ ** it might be a single loop that uses an index to extract information
+ ** in the right order to begin with.
+ */
+ sqlite3VdbeResolveLabel(v, addrInitializeLoop);
+ sqlite3VdbeAddOp(v, OP_Gosub, 0, addrReset);
+ pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, &pGroupBy);
+ if( pWInfo==0 ) goto select_end;
+ if( pGroupBy==0 ){
+ /* The optimizer is able to deliver rows in group by order so
+ ** we do not have to sort. The OP_OpenEphemeral table will be
+ ** cancelled later because we still need to use the pKeyInfo
+ */
+ pGroupBy = p->pGroupBy;
+ groupBySort = 0;
+ }else{
+ /* Rows are coming out in undetermined order. We have to push
+ ** each row into a sorting index, terminate the first loop,
+ ** then loop over the sorting index in order to get the output
+ ** in sorted order
+ */
+ groupBySort = 1;
+ sqlite3ExprCodeExprList(pParse, pGroupBy);
+ sqlite3VdbeAddOp(v, OP_Sequence, sAggInfo.sortingIdx, 0);
+ j = pGroupBy->nExpr+1;
+ for(i=0; i<sAggInfo.nColumn; i++){
+ struct AggInfo_col *pCol = &sAggInfo.aCol[i];
+ if( pCol->iSorterColumn<j ) continue;
+ if( pCol->iColumn<0 ){
+ sqlite3VdbeAddOp(v, OP_Rowid, pCol->iTable, 0);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Column, pCol->iTable, pCol->iColumn);
+ }
+ j++;
+ }
+ sqlite3VdbeAddOp(v, OP_MakeRecord, j, 0);
+ sqlite3VdbeAddOp(v, OP_IdxInsert, sAggInfo.sortingIdx, 0);
+ sqlite3WhereEnd(pWInfo);
+ sqlite3VdbeAddOp(v, OP_Sort, sAggInfo.sortingIdx, addrEnd);
+ VdbeComment((v, "# GROUP BY sort"));
+ sAggInfo.useSortingIdx = 1;
+ }
+
+ /* Evaluate the current GROUP BY terms and store in b0, b1, b2...
+ ** (b0 is memory location iBMem+0, b1 is iBMem+1, and so forth)
+ ** Then compare the current GROUP BY terms against the GROUP BY terms
+ ** from the previous row currently stored in a0, a1, a2...
+ */
+ addrTopOfLoop = sqlite3VdbeCurrentAddr(v);
+ for(j=0; j<pGroupBy->nExpr; j++){
+ if( groupBySort ){
+ sqlite3VdbeAddOp(v, OP_Column, sAggInfo.sortingIdx, j);
+ }else{
+ sAggInfo.directMode = 1;
+ sqlite3ExprCode(pParse, pGroupBy->a[j].pExpr);
+ }
+ sqlite3VdbeAddOp(v, OP_MemStore, iBMem+j, j<pGroupBy->nExpr-1);
+ }
+ for(j=pGroupBy->nExpr-1; j>=0; j--){
+ if( j<pGroupBy->nExpr-1 ){
+ sqlite3VdbeAddOp(v, OP_MemLoad, iBMem+j, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_MemLoad, iAMem+j, 0);
+ if( j==0 ){
+ sqlite3VdbeAddOp(v, OP_Eq, 0x200, addrProcessRow);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Ne, 0x200, addrGroupByChange);
+ }
+ sqlite3VdbeChangeP3(v, -1, (void*)pKeyInfo->aColl[j], P3_COLLSEQ);
+ }
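+
+ /* Illustrative trace (hypothetical data): for
+ ** "SELECT a, sum(b) FROM t GROUP BY a" the current row's value of
+ ** "a" is held in b0 and compared against the previous row's value
+ ** in a0. While the two compare equal, control jumps to addrProcessRow
+ ** and the row is merely accumulated; when they differ, control falls
+ ** into the addrGroupByChange block below, which outputs the finished
+ ** group and resets the accumulator.
+ */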
+
+ /* Generate code that runs whenever the GROUP BY changes.
+ ** Changes in the GROUP BY terms are detected by the previous code
+ ** block. If there were no changes, this block is skipped.
+ **
+ ** This code copies the current GROUP BY terms in b0,b1,b2,...
+ ** over to a0,a1,a2. It then calls the output subroutine
+ ** and resets the aggregate accumulator registers in preparation
+ ** for the next GROUP BY batch.
+ */
+ sqlite3VdbeResolveLabel(v, addrGroupByChange);
+ for(j=0; j<pGroupBy->nExpr; j++){
+ sqlite3VdbeAddOp(v, OP_MemMove, iAMem+j, iBMem+j);
+ }
+ sqlite3VdbeAddOp(v, OP_Gosub, 0, addrOutputRow);
+ VdbeComment((v, "# output one row"));
+ sqlite3VdbeAddOp(v, OP_IfMemPos, iAbortFlag, addrEnd);
+ VdbeComment((v, "# check abort flag"));
+ sqlite3VdbeAddOp(v, OP_Gosub, 0, addrReset);
+ VdbeComment((v, "# reset accumulator"));
+
+ /* Update the aggregate accumulators based on the content of
+ ** the current row
+ */
+ sqlite3VdbeResolveLabel(v, addrProcessRow);
+ updateAccumulator(pParse, &sAggInfo);
+ sqlite3VdbeAddOp(v, OP_MemInt, 1, iUseFlag);
+ VdbeComment((v, "# indicate data in accumulator"));
+
+ /* End of the loop
+ */
+ if( groupBySort ){
+ sqlite3VdbeAddOp(v, OP_Next, sAggInfo.sortingIdx, addrTopOfLoop);
+ }else{
+ sqlite3WhereEnd(pWInfo);
+ sqlite3VdbeChangeToNoop(v, addrSortingIdx, 1);
+ }
+
+ /* Output the final row of result
+ */
+ sqlite3VdbeAddOp(v, OP_Gosub, 0, addrOutputRow);
+ VdbeComment((v, "# output final row"));
+
+ } /* endif pGroupBy */
+ else {
+ /* This case runs if the aggregate has no GROUP BY clause. The
+ ** processing is much simpler since there is only a single row
+ ** of output.
+ */
+ resetAccumulator(pParse, &sAggInfo);
+ pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, 0);
+ if( pWInfo==0 ) goto select_end;
+ updateAccumulator(pParse, &sAggInfo);
+ sqlite3WhereEnd(pWInfo);
+ finalizeAggFunctions(pParse, &sAggInfo);
+ pOrderBy = 0;
+ if( pHaving ){
+ sqlite3ExprIfFalse(pParse, pHaving, addrEnd, 1);
+ }
+ selectInnerLoop(pParse, p, p->pEList, 0, 0, 0, -1,
+ eDest, iParm, addrEnd, addrEnd, aff);
+ }
+ sqlite3VdbeResolveLabel(v, addrEnd);
+
+ } /* endif aggregate query */
+
+ /* If there is an ORDER BY clause, then we need to sort the results
+ ** and send them to the callback one by one.
+ */
+ if( pOrderBy ){
+ generateSortTail(pParse, p, v, pEList->nExpr, eDest, iParm);
+ }
+
+#ifndef SQLITE_OMIT_SUBQUERY
+ /* If this was a subquery, we have now converted the subquery into a
+ ** temporary table. So set the SrcList_item.isPopulated flag to prevent
+ ** this subquery from being evaluated again and to force the use of
+ ** the temporary table.
+ */
+ if( pParent ){
+ assert( pParent->pSrc->nSrc>parentTab );
+ assert( pParent->pSrc->a[parentTab].pSelect==p );
+ pParent->pSrc->a[parentTab].isPopulated = 1;
+ }
+#endif
+
+ /* Jump here to skip this query
+ */
+ sqlite3VdbeResolveLabel(v, iEnd);
+
+ /* The SELECT was successfully coded. Set the return code to 0
+ ** to indicate no errors.
+ */
+ rc = 0;
+
+ /* Control jumps to here if an error is encountered above, or upon
+ ** successful coding of the SELECT.
+ */
+select_end:
+
+ /* Identify column names if we will be using them in a callback. This
+ ** step is skipped if the output is going to some other destination.
+ */
+ if( rc==SQLITE_OK && eDest==SRT_Callback ){
+ generateColumnNames(pParse, pTabList, pEList);
+ }
+
+ sqliteFree(sAggInfo.aCol);
+ sqliteFree(sAggInfo.aFunc);
+ return rc;
+}
Added: freeswitch/trunk/libs/sqlite/src/shell.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/shell.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1857 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code to implement the "sqlite" command line
+** utility for accessing SQLite databases.
+**
+** $Id: shell.c,v 1.150 2006/09/25 13:09:23 drh Exp $
+*/
+#include <stdlib.h>
+#include <string.h>
+#include <stdio.h>
+#include <assert.h>
+#include "sqlite3.h"
+#include <ctype.h>
+
+#if !defined(_WIN32) && !defined(WIN32) && !defined(__MACOS__) && !defined(__OS2__)
+# include <signal.h>
+# include <pwd.h>
+# include <unistd.h>
+# include <sys/types.h>
+#endif
+
+#ifdef __MACOS__
+# include <console.h>
+# include <signal.h>
+# include <unistd.h>
+# include <extras.h>
+# include <Files.h>
+# include <Folders.h>
+#endif
+
+#ifdef __OS2__
+# include <unistd.h>
+#endif
+
+#if defined(HAVE_READLINE) && HAVE_READLINE==1
+# include <readline/readline.h>
+# include <readline/history.h>
+#else
+# define readline(p) local_getline(p,stdin)
+# define add_history(X)
+# define read_history(X)
+# define write_history(X)
+# define stifle_history(X)
+#endif
+
+#if defined(_WIN32) || defined(WIN32)
+# include <io.h>
+#else
+/* Make sure isatty() has a prototype.
+*/
+extern int isatty();
+#endif
+
+/*
+** The following is the open SQLite database. We make a pointer
+** to this database a static variable so that it can be accessed
+** by the SIGINT handler to interrupt database processing.
+*/
+static sqlite3 *db = 0;
+
+/*
+** True if an interrupt (Control-C) has been received.
+*/
+static volatile int seenInterrupt = 0;
+
+/*
+** This is the name of our program. It is set in main(), used
+** in a number of other places, mostly for error messages.
+*/
+static char *Argv0;
+
+/*
+** Prompt strings. Initialized in main. Settable with
+** .prompt main continue
+*/
+static char mainPrompt[20]; /* First line prompt. default: "sqlite> "*/
+static char continuePrompt[20]; /* Continuation prompt. default: " ...> " */
+
+
+/*
+** Determines if a string is a number or not.
+*/
+static int isNumber(const char *z, int *realnum){
+ if( *z=='-' || *z=='+' ) z++;
+ if( !isdigit(*z) ){
+ return 0;
+ }
+ z++;
+ if( realnum ) *realnum = 0;
+ while( isdigit(*z) ){ z++; }
+ if( *z=='.' ){
+ z++;
+ if( !isdigit(*z) ) return 0;
+ while( isdigit(*z) ){ z++; }
+ if( realnum ) *realnum = 1;
+ }
+ if( *z=='e' || *z=='E' ){
+ z++;
+ if( *z=='+' || *z=='-' ) z++;
+ if( !isdigit(*z) ) return 0;
+ while( isdigit(*z) ){ z++; }
+ if( realnum ) *realnum = 1;
+ }
+ return *z==0;
+}
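+
+/* Examples (illustrative):  isNumber("42", &r) returns 1 with r==0,
+** isNumber("-3.5e2", &r) returns 1 with r==1, and isNumber("12abc", &r)
+** returns 0 because the scan does not stop on the terminating NUL.
+*/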
+
+/*
+** A global char* and an SQL function to access its current value
+** from within an SQL statement. This program used to use the
+** sqlite_exec_printf() API to substitute a string into an SQL statement.
+** The correct way to do this with sqlite3 is to use the bind API, but
+** since the shell is built around the callback paradigm it would be a lot
+** of work. Instead just use this hack, which is quite harmless.
+*/
+static const char *zShellStatic = 0;
+static void shellstaticFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ assert( 0==argc );
+ assert( zShellStatic );
+ sqlite3_result_text(context, zShellStatic, -1, SQLITE_STATIC);
+}
+
+
+/*
+** This routine reads a line of text from FILE in, stores
+** the text in memory obtained from malloc() and returns a pointer
+** to the text. NULL is returned at end of file, or if malloc()
+** fails.
+**
+** The interface is like "readline" but no command-line editing
+** is done.
+*/
+static char *local_getline(char *zPrompt, FILE *in){
+ char *zLine;
+ int nLine;
+ int n;
+ int eol;
+
+ if( zPrompt && *zPrompt ){
+ printf("%s",zPrompt);
+ fflush(stdout);
+ }
+ nLine = 100;
+ zLine = malloc( nLine );
+ if( zLine==0 ) return 0;
+ n = 0;
+ eol = 0;
+ while( !eol ){
+ if( n+100>nLine ){
+ nLine = nLine*2 + 100;
+ zLine = realloc(zLine, nLine);
+ if( zLine==0 ) return 0;
+ }
+ if( fgets(&zLine[n], nLine - n, in)==0 ){
+ if( n==0 ){
+ free(zLine);
+ return 0;
+ }
+ zLine[n] = 0;
+ eol = 1;
+ break;
+ }
+ while( zLine[n] ){ n++; }
+ if( n>0 && zLine[n-1]=='\n' ){
+ n--;
+ zLine[n] = 0;
+ eol = 1;
+ }
+ }
+ zLine = realloc( zLine, n+1 );
+ return zLine;
+}
+
+/*
+** Retrieve a single line of input text. If "in" is not NULL, read the
+** next line from that file using "local_getline" and issue no prompt.
+** If "in" is NULL, the input is coming from an interactive terminal, so
+** issue a prompt and attempt to use "readline" for command-line editing.
+**
+** zPrior is a string of prior text retrieved. If not the empty
+** string, then issue a continuation prompt.
+*/
+static char *one_input_line(const char *zPrior, FILE *in){
+ char *zPrompt;
+ char *zResult;
+ if( in!=0 ){
+ return local_getline(0, in);
+ }
+ if( zPrior && zPrior[0] ){
+ zPrompt = continuePrompt;
+ }else{
+ zPrompt = mainPrompt;
+ }
+ zResult = readline(zPrompt);
+#if defined(HAVE_READLINE) && HAVE_READLINE==1
+ if( zResult && *zResult ) add_history(zResult);
+#endif
+ return zResult;
+}
+
+struct previous_mode_data {
+ int valid; /* Is there legit data in here? */
+ int mode;
+ int showHeader;
+ int colWidth[100];
+};
+/*
+** A pointer to an instance of this structure is passed from
+** the main program to the callback. This is used to communicate
+** state and mode information.
+*/
+struct callback_data {
+ sqlite3 *db; /* The database */
+ int echoOn; /* True to echo input commands */
+ int cnt; /* Number of records displayed so far */
+ FILE *out; /* Write results here */
+ int mode; /* An output mode setting */
+ int showHeader; /* True to show column names in List or Column mode */
+ char *zDestTable; /* Name of destination table when MODE_Insert */
+ char separator[20]; /* Separator character for MODE_List */
+ int colWidth[100]; /* Requested width of each column when in column mode*/
+ int actualWidth[100]; /* Actual width of each column */
+ char nullvalue[20]; /* The text to print when a NULL comes back from
+ ** the database */
+ struct previous_mode_data explainPrev;
+ /* Holds the mode information just before
+ ** .explain ON */
+ char outfile[FILENAME_MAX]; /* Filename for *out */
+ const char *zDbFilename; /* name of the database file */
+};
+
+/*
+** These are the allowed modes.
+*/
+#define MODE_Line 0 /* One column per line. Blank line between records */
+#define MODE_Column 1 /* One record per line in neat columns */
+#define MODE_List 2 /* One record per line with a separator */
+#define MODE_Semi 3 /* Same as MODE_List but append ";" to each line */
+#define MODE_Html 4 /* Generate an XHTML table */
+#define MODE_Insert 5 /* Generate SQL "insert" statements */
+#define MODE_Tcl 6 /* Generate ANSI-C or TCL quoted elements */
+#define MODE_Csv 7 /* Quote strings, numbers are plain */
+#define MODE_NUM_OF 8 /* The number of modes (not a mode itself) */
+
+static const char *modeDescr[MODE_NUM_OF] = {
+ "line",
+ "column",
+ "list",
+ "semi",
+ "html",
+ "insert",
+ "tcl",
+ "csv",
+};
+
+/*
+** Number of elements in an array
+*/
+#define ArraySize(X) (sizeof(X)/sizeof(X[0]))
+
+/*
+** Output the given string as a quoted string using SQL quoting conventions.
+*/
+static void output_quoted_string(FILE *out, const char *z){
+ int i;
+ int nSingle = 0;
+ for(i=0; z[i]; i++){
+ if( z[i]=='\'' ) nSingle++;
+ }
+ if( nSingle==0 ){
+ fprintf(out,"'%s'",z);
+ }else{
+ fprintf(out,"'");
+ while( *z ){
+ for(i=0; z[i] && z[i]!='\''; i++){}
+ if( i==0 ){
+ fprintf(out,"''");
+ z++;
+ }else if( z[i]=='\'' ){
+ fprintf(out,"%.*s''",i,z);
+ z += i+1;
+ }else{
+ fprintf(out,"%s",z);
+ break;
+ }
+ }
+ fprintf(out,"'");
+ }
+}
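+
+/* Example (illustrative):  output_quoted_string(out, "it's") writes
+** 'it''s' -- the embedded single quote is doubled, per SQL quoting rules.
+*/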
+
+/*
+** Output the given string quoted according to C or TCL quoting rules.
+*/
+static void output_c_string(FILE *out, const char *z){
+ unsigned int c;
+ fputc('"', out);
+ while( (c = *(z++))!=0 ){
+ if( c=='\\' ){
+ fputc(c, out);
+ fputc(c, out);
+ }else if( c=='\t' ){
+ fputc('\\', out);
+ fputc('t', out);
+ }else if( c=='\n' ){
+ fputc('\\', out);
+ fputc('n', out);
+ }else if( c=='\r' ){
+ fputc('\\', out);
+ fputc('r', out);
+ }else if( !isprint(c) ){
+ fprintf(out, "\\%03o", c&0xff);
+ }else{
+ fputc(c, out);
+ }
+ }
+ fputc('"', out);
+}
+
+/*
+** Output the given string with characters that are special to
+** HTML escaped.
+*/
+static void output_html_string(FILE *out, const char *z){
+ int i;
+ while( *z ){
+ for(i=0; z[i] && z[i]!='<' && z[i]!='&'; i++){}
+ if( i>0 ){
+ fprintf(out,"%.*s",i,z);
+ }
+ if( z[i]=='<' ){
+ fprintf(out,"<");
+ }else if( z[i]=='&' ){
+ fprintf(out,"&");
+ }else{
+ break;
+ }
+ z += i + 1;
+ }
+}
+
+/*
+** Output a single term of CSV. Actually, p->separator is used for
+** the separator, which may or may not be a comma. p->nullvalue is
+** the null value. Strings are quoted using ANSI-C rules. Numbers
+** appear outside of quotes.
+*/
+static void output_csv(struct callback_data *p, const char *z, int bSep){
+ if( z==0 ){
+ fprintf(p->out,"%s",p->nullvalue);
+ }else if( isNumber(z, 0) ){
+ fprintf(p->out,"%s",z);
+ }else{
+ output_c_string(p->out, z);
+ }
+ if( bSep ){
+ fprintf(p->out, p->separator);
+ }
+}
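+
+/* Example (illustrative, assuming the separator has been set to "," and
+** nullvalue is the empty string):  the row ('hello, world', 42, NULL) is
+** emitted as
+**
+**     "hello, world",42,
+**
+** The text value is C-quoted because it is not a number, 42 prints bare,
+** and the trailing NULL prints as the (empty) nullvalue string.
+*/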
+
+#ifdef SIGINT
+/*
+** This routine runs when the user presses Ctrl-C
+*/
+static void interrupt_handler(int NotUsed){
+ seenInterrupt = 1;
+ if( db ) sqlite3_interrupt(db);
+}
+#endif
+
+/*
+** This is the callback routine that the SQLite library
+** invokes for each row of a query result.
+*/
+static int callback(void *pArg, int nArg, char **azArg, char **azCol){
+ int i;
+ struct callback_data *p = (struct callback_data*)pArg;
+ switch( p->mode ){
+ case MODE_Line: {
+ int w = 5;
+ if( azArg==0 ) break;
+ for(i=0; i<nArg; i++){
+ int len = strlen(azCol[i] ? azCol[i] : "");
+ if( len>w ) w = len;
+ }
+ if( p->cnt++>0 ) fprintf(p->out,"\n");
+ for(i=0; i<nArg; i++){
+ fprintf(p->out,"%*s = %s\n", w, azCol[i],
+ azArg[i] ? azArg[i] : p->nullvalue);
+ }
+ break;
+ }
+ case MODE_Column: {
+ if( p->cnt++==0 ){
+ for(i=0; i<nArg; i++){
+ int w, n;
+ if( i<ArraySize(p->colWidth) ){
+ w = p->colWidth[i];
+ }else{
+ w = 0;
+ }
+ if( w<=0 ){
+ w = strlen(azCol[i] ? azCol[i] : "");
+ if( w<10 ) w = 10;
+ n = strlen(azArg && azArg[i] ? azArg[i] : p->nullvalue);
+ if( w<n ) w = n;
+ }
+ if( i<ArraySize(p->actualWidth) ){
+ p->actualWidth[i] = w;
+ }
+ if( p->showHeader ){
+ fprintf(p->out,"%-*.*s%s",w,w,azCol[i], i==nArg-1 ? "\n": " ");
+ }
+ }
+ if( p->showHeader ){
+ for(i=0; i<nArg; i++){
+ int w;
+ if( i<ArraySize(p->actualWidth) ){
+ w = p->actualWidth[i];
+ }else{
+ w = 10;
+ }
+ fprintf(p->out,"%-*.*s%s",w,w,"-----------------------------------"
+ "----------------------------------------------------------",
+ i==nArg-1 ? "\n": " ");
+ }
+ }
+ }
+ if( azArg==0 ) break;
+ for(i=0; i<nArg; i++){
+ int w;
+ if( i<ArraySize(p->actualWidth) ){
+ w = p->actualWidth[i];
+ }else{
+ w = 10;
+ }
+ fprintf(p->out,"%-*.*s%s",w,w,
+ azArg[i] ? azArg[i] : p->nullvalue, i==nArg-1 ? "\n": " ");
+ }
+ break;
+ }
+ case MODE_Semi:
+ case MODE_List: {
+ if( p->cnt++==0 && p->showHeader ){
+ for(i=0; i<nArg; i++){
+ fprintf(p->out,"%s%s",azCol[i], i==nArg-1 ? "\n" : p->separator);
+ }
+ }
+ if( azArg==0 ) break;
+ for(i=0; i<nArg; i++){
+ char *z = azArg[i];
+ if( z==0 ) z = p->nullvalue;
+ fprintf(p->out, "%s", z);
+ if( i<nArg-1 ){
+ fprintf(p->out, "%s", p->separator);
+ }else if( p->mode==MODE_Semi ){
+ fprintf(p->out, ";\n");
+ }else{
+ fprintf(p->out, "\n");
+ }
+ }
+ break;
+ }
+ case MODE_Html: {
+ if( p->cnt++==0 && p->showHeader ){
+ fprintf(p->out,"<TR>");
+ for(i=0; i<nArg; i++){
+ fprintf(p->out,"<TH>%s</TH>",azCol[i]);
+ }
+ fprintf(p->out,"</TR>\n");
+ }
+ if( azArg==0 ) break;
+ fprintf(p->out,"<TR>");
+ for(i=0; i<nArg; i++){
+ fprintf(p->out,"<TD>");
+ output_html_string(p->out, azArg[i] ? azArg[i] : p->nullvalue);
+ fprintf(p->out,"</TD>\n");
+ }
+ fprintf(p->out,"</TR>\n");
+ break;
+ }
+ case MODE_Tcl: {
+ if( p->cnt++==0 && p->showHeader ){
+ for(i=0; i<nArg; i++){
+ output_c_string(p->out,azCol[i] ? azCol[i] : "");
+ fprintf(p->out, "%s", p->separator);
+ }
+ fprintf(p->out,"\n");
+ }
+ if( azArg==0 ) break;
+ for(i=0; i<nArg; i++){
+ output_c_string(p->out, azArg[i] ? azArg[i] : p->nullvalue);
+ fprintf(p->out, "%s", p->separator);
+ }
+ fprintf(p->out,"\n");
+ break;
+ }
+ case MODE_Csv: {
+ if( p->cnt++==0 && p->showHeader ){
+ for(i=0; i<nArg; i++){
+ output_csv(p, azCol[i] ? azCol[i] : "", i<nArg-1);
+ }
+ fprintf(p->out,"\n");
+ }
+ if( azArg==0 ) break;
+ for(i=0; i<nArg; i++){
+ output_csv(p, azArg[i], i<nArg-1);
+ }
+ fprintf(p->out,"\n");
+ break;
+ }
+ case MODE_Insert: {
+ if( azArg==0 ) break;
+ fprintf(p->out,"INSERT INTO %s VALUES(",p->zDestTable);
+ for(i=0; i<nArg; i++){
+ char *zSep = i>0 ? ",": "";
+ if( azArg[i]==0 ){
+ fprintf(p->out,"%sNULL",zSep);
+ }else if( isNumber(azArg[i], 0) ){
+ fprintf(p->out,"%s%s",zSep, azArg[i]);
+ }else{
+ if( zSep[0] ) fprintf(p->out,"%s",zSep);
+ output_quoted_string(p->out, azArg[i]);
+ }
+ }
+ fprintf(p->out,");\n");
+ break;
+ }
+ }
+ return 0;
+}
+
+/*
+** Set the destination table field of the callback_data structure to
+** the name of the table given. Escape any quote characters in the
+** table name.
+*/
+static void set_table_name(struct callback_data *p, const char *zName){
+ int i, n;
+ int needQuote;
+ char *z;
+
+ if( p->zDestTable ){
+ free(p->zDestTable);
+ p->zDestTable = 0;
+ }
+ if( zName==0 ) return;
+ needQuote = !isalpha((unsigned char)*zName) && *zName!='_';
+ for(i=n=0; zName[i]; i++, n++){
+ if( !isalnum((unsigned char)zName[i]) && zName[i]!='_' ){
+ needQuote = 1;
+ if( zName[i]=='\'' ) n++;
+ }
+ }
+ if( needQuote ) n += 2;
+ z = p->zDestTable = malloc( n+1 );
+ if( z==0 ){
+ fprintf(stderr,"Out of memory!\n");
+ exit(1);
+ }
+ n = 0;
+ if( needQuote ) z[n++] = '\'';
+ for(i=0; zName[i]; i++){
+ z[n++] = zName[i];
+ if( zName[i]=='\'' ) z[n++] = '\'';
+ }
+ if( needQuote ) z[n++] = '\'';
+ z[n] = 0;
+}
+
+/* zIn is either a pointer to a NULL-terminated string in memory obtained
+** from malloc(), or a NULL pointer. The string pointed to by zAppend is
+** added to zIn, and the result returned in memory obtained from malloc().
+** zIn, if it was not NULL, is freed.
+**
+** If the third argument, quote, is not '\0', then it is used as a
+** quote character for zAppend.
+*/
+static char * appendText(char *zIn, char const *zAppend, char quote){
+ int len;
+ int i;
+ int nAppend = strlen(zAppend);
+ int nIn = (zIn?strlen(zIn):0);
+
+ len = nAppend+nIn+1;
+ if( quote ){
+ len += 2;
+ for(i=0; i<nAppend; i++){
+ if( zAppend[i]==quote ) len++;
+ }
+ }
+
+ zIn = (char *)realloc(zIn, len);
+ if( !zIn ){
+ return 0;
+ }
+
+ if( quote ){
+ char *zCsr = &zIn[nIn];
+ *zCsr++ = quote;
+ for(i=0; i<nAppend; i++){
+ *zCsr++ = zAppend[i];
+ if( zAppend[i]==quote ) *zCsr++ = quote;
+ }
+ *zCsr++ = quote;
+ *zCsr++ = '\0';
+ assert( (zCsr-zIn)==len );
+ }else{
+ memcpy(&zIn[nIn], zAppend, nAppend);
+ zIn[len-1] = '\0';
+ }
+
+ return zIn;
+}
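+
+/* A usage sketch for appendText(), mirroring how dump_callback() below
+** builds its PRAGMA (the table name is hypothetical):
+**
+**     char *z = 0;
+**     z = appendText(z, "PRAGMA table_info(", 0);
+**     z = appendText(z, "my\"table", '"');
+**     z = appendText(z, ");", 0);
+**
+** z now holds:  PRAGMA table_info("my""table");
+** The embedded double-quote is doubled and the whole name is wrapped in
+** the quote character.  The caller releases z with free().
+*/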
+
+
+/*
+** Execute a query statement that has a single result column. Print
+** that result column on a line by itself with a semicolon terminator.
+*/
+static int run_table_dump_query(FILE *out, sqlite3 *db, const char *zSelect){
+ sqlite3_stmt *pSelect;
+ int rc;
+ rc = sqlite3_prepare(db, zSelect, -1, &pSelect, 0);
+ if( rc!=SQLITE_OK || !pSelect ){
+ return rc;
+ }
+ rc = sqlite3_step(pSelect);
+ while( rc==SQLITE_ROW ){
+ fprintf(out, "%s;\n", sqlite3_column_text(pSelect, 0));
+ rc = sqlite3_step(pSelect);
+ }
+ return sqlite3_finalize(pSelect);
+}
+
+
+/*
+** This is a different callback routine used for dumping the database.
+** Each row received by this callback consists of a table name,
+** the table type ("index" or "table") and SQL to create the table.
+** This routine should print text sufficient to recreate the table.
+*/
+static int dump_callback(void *pArg, int nArg, char **azArg, char **azCol){
+ int rc;
+ const char *zTable;
+ const char *zType;
+ const char *zSql;
+ struct callback_data *p = (struct callback_data *)pArg;
+
+ if( nArg!=3 ) return 1;
+ zTable = azArg[0];
+ zType = azArg[1];
+ zSql = azArg[2];
+
+ if( strcmp(zTable, "sqlite_sequence")==0 ){
+ fprintf(p->out, "DELETE FROM sqlite_sequence;\n");
+ }else if( strcmp(zTable, "sqlite_stat1")==0 ){
+ fprintf(p->out, "ANALYZE sqlite_master;\n");
+ }else if( strncmp(zTable, "sqlite_", 7)==0 ){
+ return 0;
+ }else{
+ fprintf(p->out, "%s;\n", zSql);
+ }
+
+ if( strcmp(zType, "table")==0 ){
+ sqlite3_stmt *pTableInfo = 0;
+ char *zSelect = 0;
+ char *zTableInfo = 0;
+ char *zTmp = 0;
+
+ zTableInfo = appendText(zTableInfo, "PRAGMA table_info(", 0);
+ zTableInfo = appendText(zTableInfo, zTable, '"');
+ zTableInfo = appendText(zTableInfo, ");", 0);
+
+ rc = sqlite3_prepare(p->db, zTableInfo, -1, &pTableInfo, 0);
+ if( zTableInfo ) free(zTableInfo);
+ if( rc!=SQLITE_OK || !pTableInfo ){
+ return 1;
+ }
+
+ zSelect = appendText(zSelect, "SELECT 'INSERT INTO ' || ", 0);
+ zTmp = appendText(zTmp, zTable, '"');
+ if( zTmp ){
+ zSelect = appendText(zSelect, zTmp, '\'');
+ }
+ zSelect = appendText(zSelect, " || ' VALUES(' || ", 0);
+ rc = sqlite3_step(pTableInfo);
+ while( rc==SQLITE_ROW ){
+ const char *zText = (const char *)sqlite3_column_text(pTableInfo, 1);
+ zSelect = appendText(zSelect, "quote(", 0);
+ zSelect = appendText(zSelect, zText, '"');
+ rc = sqlite3_step(pTableInfo);
+ if( rc==SQLITE_ROW ){
+ zSelect = appendText(zSelect, ") || ', ' || ", 0);
+ }else{
+ zSelect = appendText(zSelect, ") ", 0);
+ }
+ }
+ rc = sqlite3_finalize(pTableInfo);
+ if( rc!=SQLITE_OK ){
+ if( zSelect ) free(zSelect);
+ return 1;
+ }
+ zSelect = appendText(zSelect, "|| ')' FROM ", 0);
+ zSelect = appendText(zSelect, zTable, '"');
+
+ rc = run_table_dump_query(p->out, p->db, zSelect);
+ if( rc==SQLITE_CORRUPT ){
+ zSelect = appendText(zSelect, " ORDER BY rowid DESC", 0);
+ rc = run_table_dump_query(p->out, p->db, zSelect);
+ }
+ if( zSelect ) free(zSelect);
+ if( rc!=SQLITE_OK ){
+ return 1;
+ }
+ }
+ return 0;
+}
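+
+/* As an illustration of the logic above, for a hypothetical table created
+** with  CREATE TABLE t1(a,b)  the zSelect string is built up roughly as:
+**
+**   SELECT 'INSERT INTO ' || '"t1"' || ' VALUES(' ||
+**     quote("a") || ', ' || quote("b") || ')' FROM "t1"
+**
+** so that run_table_dump_query() prints one INSERT statement per row, e.g.:
+**
+**   INSERT INTO "t1" VALUES(1, 'hello');
+*/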
+
+/*
+** Run zQuery. Use dump_callback() as the callback routine.
+** If we get a SQLITE_CORRUPT error, rerun the query after appending
+** "ORDER BY rowid DESC" to the end.
+*/
+static int run_schema_dump_query(
+ struct callback_data *p,
+ const char *zQuery,
+ char **pzErrMsg
+){
+ int rc;
+ rc = sqlite3_exec(p->db, zQuery, dump_callback, p, pzErrMsg);
+ if( rc==SQLITE_CORRUPT ){
+ char *zQ2;
+ int len = strlen(zQuery);
+ if( pzErrMsg ) sqlite3_free(*pzErrMsg);
+ zQ2 = malloc( len+100 );
+ if( zQ2==0 ) return rc;
+ sprintf(zQ2, "%s ORDER BY rowid DESC", zQuery);
+ rc = sqlite3_exec(p->db, zQ2, dump_callback, p, pzErrMsg);
+ free(zQ2);
+ }
+ return rc;
+}
+
+/*
+** Text of a help message
+*/
+static char zHelp[] =
+ ".databases List names and files of attached databases\n"
+ ".dump ?TABLE? ... Dump the database in an SQL text format\n"
+ ".echo ON|OFF Turn command echo on or off\n"
+ ".exit Exit this program\n"
+ ".explain ON|OFF Turn output mode suitable for EXPLAIN on or off.\n"
+ ".header(s) ON|OFF Turn display of headers on or off\n"
+ ".help Show this message\n"
+ ".import FILE TABLE Import data from FILE into TABLE\n"
+ ".indices TABLE Show names of all indices on TABLE\n"
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+ ".load FILE ?ENTRY? Load an extension library\n"
+#endif
+ ".mode MODE ?TABLE? Set output mode where MODE is one of:\n"
+ " csv Comma-separated values\n"
+ " column Left-aligned columns. (See .width)\n"
+ " html HTML <table> code\n"
+ " insert SQL insert statements for TABLE\n"
+ " line One value per line\n"
+ " list Values delimited by .separator string\n"
+ " tabs Tab-separated values\n"
+ " tcl TCL list elements\n"
+ ".nullvalue STRING Print STRING in place of NULL values\n"
+ ".output FILENAME Send output to FILENAME\n"
+ ".output stdout Send output to the screen\n"
+ ".prompt MAIN CONTINUE Replace the standard prompts\n"
+ ".quit Exit this program\n"
+ ".read FILENAME Execute SQL in FILENAME\n"
+ ".schema ?TABLE? Show the CREATE statements\n"
+ ".separator STRING Change separator used by output mode and .import\n"
+ ".show Show the current values for various settings\n"
+ ".tables ?PATTERN? List names of tables matching a LIKE pattern\n"
+ ".timeout MS Try opening locked tables for MS milliseconds\n"
+ ".width NUM NUM ... Set column widths for \"column\" mode\n"
+;
+
+/* Forward reference */
+static void process_input(struct callback_data *p, FILE *in);
+
+/*
+** Make sure the database is open. If it is not, then open it. If
+** the database fails to open, print an error message and exit.
+*/
+static void open_db(struct callback_data *p){
+ if( p->db==0 ){
+ sqlite3_open(p->zDbFilename, &p->db);
+ db = p->db;
+ sqlite3_create_function(db, "shellstatic", 0, SQLITE_UTF8, 0,
+ shellstaticFunc, 0, 0);
+ if( SQLITE_OK!=sqlite3_errcode(db) ){
+ fprintf(stderr,"Unable to open database \"%s\": %s\n",
+ p->zDbFilename, sqlite3_errmsg(db));
+ exit(1);
+ }
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+ sqlite3_enable_load_extension(p->db, 1);
+#endif
+ }
+}
+
+/*
+** Do C-language style dequoting.
+**
+** \t -> tab
+** \n -> newline
+** \r -> carriage return
+** \NNN -> ascii character NNN in octal
+** \\ -> backslash
+*/
+static void resolve_backslashes(char *z){
+ int i, j, c;
+ for(i=j=0; (c = z[i])!=0; i++, j++){
+ if( c=='\\' ){
+ c = z[++i];
+ if( c=='n' ){
+ c = '\n';
+ }else if( c=='t' ){
+ c = '\t';
+ }else if( c=='r' ){
+ c = '\r';
+ }else if( c>='0' && c<='7' ){
+ c -= '0';
+ if( z[i+1]>='0' && z[i+1]<='7' ){
+ i++;
+ c = (c<<3) + z[i] - '0';
+ if( z[i+1]>='0' && z[i+1]<='7' ){
+ i++;
+ c = (c<<3) + z[i] - '0';
+ }
+ }
+ }
+ }
+ z[j] = c;
+ }
+ z[j] = 0;
+}
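+
+/* For example, if z arrives holding the eight characters  a \ t b \ 1 0 1
+** (the C literal "a\\tb\\101"), the loop above rewrites it in place to
+** "a<tab>bA": \t becomes a tab and \101 is octal 101 = decimal 65 = 'A'.
+*/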
+
+/*
+** If an input line begins with "." then invoke this routine to
+** process that line.
+**
+** Return 1 to exit and 0 to continue.
+*/
+static int do_meta_command(char *zLine, struct callback_data *p){
+ int i = 1;
+ int nArg = 0;
+ int n, c;
+ int rc = 0;
+ char *azArg[50];
+
+ /* Parse the input line into tokens.
+ */
+ while( zLine[i] && nArg<ArraySize(azArg) ){
+ while( isspace((unsigned char)zLine[i]) ){ i++; }
+ if( zLine[i]==0 ) break;
+ if( zLine[i]=='\'' || zLine[i]=='"' ){
+ int delim = zLine[i++];
+ azArg[nArg++] = &zLine[i];
+ while( zLine[i] && zLine[i]!=delim ){ i++; }
+ if( zLine[i]==delim ){
+ zLine[i++] = 0;
+ }
+ if( delim=='"' ) resolve_backslashes(azArg[nArg-1]);
+ }else{
+ azArg[nArg++] = &zLine[i];
+ while( zLine[i] && !isspace((unsigned char)zLine[i]) ){ i++; }
+ if( zLine[i] ) zLine[i++] = 0;
+ resolve_backslashes(azArg[nArg-1]);
+ }
+ }
+
+ /* Process the input line.
+ */
+ if( nArg==0 ) return rc;
+ n = strlen(azArg[0]);
+ c = azArg[0][0];
+ if( c=='d' && n>1 && strncmp(azArg[0], "databases", n)==0 ){
+ struct callback_data data;
+ char *zErrMsg = 0;
+ open_db(p);
+ memcpy(&data, p, sizeof(data));
+ data.showHeader = 1;
+ data.mode = MODE_Column;
+ data.colWidth[0] = 3;
+ data.colWidth[1] = 15;
+ data.colWidth[2] = 58;
+ data.cnt = 0;
+ sqlite3_exec(p->db, "PRAGMA database_list; ", callback, &data, &zErrMsg);
+ if( zErrMsg ){
+ fprintf(stderr,"Error: %s\n", zErrMsg);
+ sqlite3_free(zErrMsg);
+ }
+ }else
+
+ if( c=='d' && strncmp(azArg[0], "dump", n)==0 ){
+ char *zErrMsg = 0;
+ open_db(p);
+ fprintf(p->out, "BEGIN TRANSACTION;\n");
+ if( nArg==1 ){
+ run_schema_dump_query(p,
+ "SELECT name, type, sql FROM sqlite_master "
+ "WHERE sql NOT NULL AND type=='table' AND rootpage!=0", 0
+ );
+ run_schema_dump_query(p,
+ "SELECT name, type, sql FROM sqlite_master "
+ "WHERE sql NOT NULL AND "
+ " AND type!='table' AND type!='meta'", 0
+ );
+ run_table_dump_query(p->out, p->db,
+ "SELECT sql FROM sqlite_master "
+ "WHERE sql NOT NULL AND rootpage==0 AND type='table'"
+ );
+ }else{
+ int i;
+ for(i=1; i<nArg; i++){
+ zShellStatic = azArg[i];
+ run_schema_dump_query(p,
+ "SELECT name, type, sql FROM sqlite_master "
+ "WHERE tbl_name LIKE shellstatic() AND type=='table'"
+ " AND rootpage!=0 AND sql NOT NULL", 0);
+ run_schema_dump_query(p,
+ "SELECT name, type, sql FROM sqlite_master "
+ "WHERE tbl_name LIKE shellstatic() AND type!='table'"
+ " AND type!='meta' AND sql NOT NULL", 0);
+ run_table_dump_query(p->out, p->db,
+ "SELECT sql FROM sqlite_master "
+ "WHERE sql NOT NULL AND rootpage==0 AND type='table'"
+ " AND tbl_name LIKE shellstatic()"
+ );
+ zShellStatic = 0;
+ }
+ }
+ if( zErrMsg ){
+ fprintf(stderr,"Error: %s\n", zErrMsg);
+ sqlite3_free(zErrMsg);
+ }else{
+ fprintf(p->out, "COMMIT;\n");
+ }
+ }else
+
+ if( c=='e' && strncmp(azArg[0], "echo", n)==0 && nArg>1 ){
+ int j;
+ char *z = azArg[1];
+ int val = atoi(azArg[1]);
+ for(j=0; z[j]; j++){
+ z[j] = tolower((unsigned char)z[j]);
+ }
+ if( strcmp(z,"on")==0 ){
+ val = 1;
+ }else if( strcmp(z,"yes")==0 ){
+ val = 1;
+ }
+ p->echoOn = val;
+ }else
+
+ if( c=='e' && strncmp(azArg[0], "exit", n)==0 ){
+ rc = 1;
+ }else
+
+ if( c=='e' && strncmp(azArg[0], "explain", n)==0 ){
+ int j;
+ static char zOne[] = "1";
+ char *z = nArg>=2 ? azArg[1] : zOne;
+ int val = atoi(z);
+ for(j=0; z[j]; j++){
+ z[j] = tolower((unsigned char)z[j]);
+ }
+ if( strcmp(z,"on")==0 ){
+ val = 1;
+ }else if( strcmp(z,"yes")==0 ){
+ val = 1;
+ }
+ if(val == 1) {
+ if(!p->explainPrev.valid) {
+ p->explainPrev.valid = 1;
+ p->explainPrev.mode = p->mode;
+ p->explainPrev.showHeader = p->showHeader;
+ memcpy(p->explainPrev.colWidth,p->colWidth,sizeof(p->colWidth));
+ }
+ /* We could put this code under the !p->explainPrev.valid
+ ** condition so that it does not execute if we are already in
+ ** explain mode. However, always executing it gives us an easy
+ ** way to reset to explain mode in case the user previously
+ ** did an .explain followed by a .width, .mode or .header
+ ** command.
+ */
+ p->mode = MODE_Column;
+ p->showHeader = 1;
+ memset(p->colWidth,0,sizeof(p->colWidth));
+ p->colWidth[0] = 4;
+ p->colWidth[1] = 14;
+ p->colWidth[2] = 10;
+ p->colWidth[3] = 10;
+ p->colWidth[4] = 33;
+ }else if (p->explainPrev.valid) {
+ p->explainPrev.valid = 0;
+ p->mode = p->explainPrev.mode;
+ p->showHeader = p->explainPrev.showHeader;
+ memcpy(p->colWidth,p->explainPrev.colWidth,sizeof(p->colWidth));
+ }
+ }else
+
+ if( c=='h' && (strncmp(azArg[0], "header", n)==0
+ ||
+ strncmp(azArg[0], "headers", n)==0 )&& nArg>1 ){
+ int j;
+ char *z = azArg[1];
+ int val = atoi(azArg[1]);
+ for(j=0; z[j]; j++){
+ z[j] = tolower((unsigned char)z[j]);
+ }
+ if( strcmp(z,"on")==0 ){
+ val = 1;
+ }else if( strcmp(z,"yes")==0 ){
+ val = 1;
+ }
+ p->showHeader = val;
+ }else
+
+ if( c=='h' && strncmp(azArg[0], "help", n)==0 ){
+ fprintf(stderr,zHelp);
+ }else
+
+ if( c=='i' && strncmp(azArg[0], "import", n)==0 && nArg>=3 ){
+ char *zTable = azArg[2]; /* Insert data into this table */
+ char *zFile = azArg[1]; /* The file from which to extract data */
+ sqlite3_stmt *pStmt; /* A statement */
+ int rc; /* Result code */
+ int nCol; /* Number of columns in the table */
+ int nByte; /* Number of bytes in an SQL string */
+ int i, j; /* Loop counters */
+ int nSep; /* Number of bytes in p->separator[] */
+ char *zSql; /* An SQL statement */
+ char *zLine; /* A single line of input from the file */
+ char **azCol; /* zLine[] broken up into columns */
+ char *zCommit; /* How to commit changes */
+ FILE *in; /* The input file */
+ int lineno = 0; /* Line number of input file */
+
+ open_db(p);
+ nSep = strlen(p->separator);
+ if( nSep==0 ){
+ fprintf(stderr, "non-null separator required for import\n");
+ return 0;
+ }
+ zSql = sqlite3_mprintf("SELECT * FROM '%q'", zTable);
+ if( zSql==0 ) return 0;
+ nByte = strlen(zSql);
+ rc = sqlite3_prepare(p->db, zSql, -1, &pStmt, 0);
+ sqlite3_free(zSql);
+ if( rc ){
+ fprintf(stderr,"Error: %s\n", sqlite3_errmsg(db));
+ nCol = 0;
+ }else{
+ nCol = sqlite3_column_count(pStmt);
+ }
+ sqlite3_finalize(pStmt);
+ if( nCol==0 ) return 0;
+ zSql = malloc( nByte + 20 + nCol*2 );
+ if( zSql==0 ) return 0;
+ sqlite3_snprintf(nByte+20, zSql, "INSERT INTO '%q' VALUES(?", zTable);
+ j = strlen(zSql);
+ for(i=1; i<nCol; i++){
+ zSql[j++] = ',';
+ zSql[j++] = '?';
+ }
+ zSql[j++] = ')';
+ zSql[j] = 0;
+ rc = sqlite3_prepare(p->db, zSql, -1, &pStmt, 0);
+ free(zSql);
+ if( rc ){
+ fprintf(stderr, "Error: %s\n", sqlite3_errmsg(db));
+ sqlite3_finalize(pStmt);
+ return 0;
+ }
+ in = fopen(zFile, "rb");
+ if( in==0 ){
+ fprintf(stderr, "cannot open file: %s\n", zFile);
+ sqlite3_finalize(pStmt);
+ return 0;
+ }
+ azCol = malloc( sizeof(azCol[0])*(nCol+1) );
+ if( azCol==0 ){
+ fclose(in);
+ return 0;
+ }
+ sqlite3_exec(p->db, "BEGIN", 0, 0, 0);
+ zCommit = "COMMIT";
+ while( (zLine = local_getline(0, in))!=0 ){
+ char *z;
+ i = 0;
+ lineno++;
+ azCol[0] = zLine;
+ for(i=0, z=zLine; *z && *z!='\n' && *z!='\r'; z++){
+ if( *z==p->separator[0] && strncmp(z, p->separator, nSep)==0 ){
+ *z = 0;
+ i++;
+ if( i<nCol ){
+ azCol[i] = &z[nSep];
+ z += nSep-1;
+ }
+ }
+ }
+ *z = 0;
+ if( i+1!=nCol ){
+ fprintf(stderr,"%s line %d: expected %d columns of data but found %d\n",
+ zFile, lineno, nCol, i+1);
+ zCommit = "ROLLBACK";
+ break;
+ }
+ for(i=0; i<nCol; i++){
+ sqlite3_bind_text(pStmt, i+1, azCol[i], -1, SQLITE_STATIC);
+ }
+ sqlite3_step(pStmt);
+ rc = sqlite3_reset(pStmt);
+ free(zLine);
+ if( rc!=SQLITE_OK ){
+ fprintf(stderr,"Error: %s\n", sqlite3_errmsg(db));
+ zCommit = "ROLLBACK";
+ break;
+ }
+ }
+ free(azCol);
+ fclose(in);
+ sqlite3_finalize(pStmt);
+ sqlite3_exec(p->db, zCommit, 0, 0, 0);
+ }else
+
+ if( c=='i' && strncmp(azArg[0], "indices", n)==0 && nArg>1 ){
+ struct callback_data data;
+ char *zErrMsg = 0;
+ open_db(p);
+ memcpy(&data, p, sizeof(data));
+ data.showHeader = 0;
+ data.mode = MODE_List;
+ zShellStatic = azArg[1];
+ sqlite3_exec(p->db,
+ "SELECT name FROM sqlite_master "
+ "WHERE type='index' AND tbl_name LIKE shellstatic() "
+ "UNION ALL "
+ "SELECT name FROM sqlite_temp_master "
+ "WHERE type='index' AND tbl_name LIKE shellstatic() "
+ "ORDER BY 1",
+ callback, &data, &zErrMsg
+ );
+ zShellStatic = 0;
+ if( zErrMsg ){
+ fprintf(stderr,"Error: %s\n", zErrMsg);
+ sqlite3_free(zErrMsg);
+ }
+ }else
+
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+ if( c=='l' && strncmp(azArg[0], "load", n)==0 && nArg>=2 ){
+ const char *zFile, *zProc;
+ char *zErrMsg = 0;
+ int rc;
+ zFile = azArg[1];
+ zProc = nArg>=3 ? azArg[2] : 0;
+ open_db(p);
+ rc = sqlite3_load_extension(p->db, zFile, zProc, &zErrMsg);
+ if( rc!=SQLITE_OK ){
+ fprintf(stderr, "%s\n", zErrMsg);
+ sqlite3_free(zErrMsg);
+ }
+ }else
+#endif
+
+ if( c=='m' && strncmp(azArg[0], "mode", n)==0 && nArg>=2 ){
+ int n2 = strlen(azArg[1]);
+ if( strncmp(azArg[1],"line",n2)==0
+ ||
+ strncmp(azArg[1],"lines",n2)==0 ){
+ p->mode = MODE_Line;
+ }else if( strncmp(azArg[1],"column",n2)==0
+ ||
+ strncmp(azArg[1],"columns",n2)==0 ){
+ p->mode = MODE_Column;
+ }else if( strncmp(azArg[1],"list",n2)==0 ){
+ p->mode = MODE_List;
+ }else if( strncmp(azArg[1],"html",n2)==0 ){
+ p->mode = MODE_Html;
+ }else if( strncmp(azArg[1],"tcl",n2)==0 ){
+ p->mode = MODE_Tcl;
+ }else if( strncmp(azArg[1],"csv",n2)==0 ){
+ p->mode = MODE_Csv;
+ strcpy(p->separator, ",");
+ }else if( strncmp(azArg[1],"tabs",n2)==0 ){
+ p->mode = MODE_List;
+ strcpy(p->separator, "\t");
+ }else if( strncmp(azArg[1],"insert",n2)==0 ){
+ p->mode = MODE_Insert;
+ if( nArg>=3 ){
+ set_table_name(p, azArg[2]);
+ }else{
+ set_table_name(p, "table");
+ }
+ }else {
+ fprintf(stderr,"mode should be on of: "
+ "column csv html insert line list tabs tcl\n");
+ }
+ }else
+
+ if( c=='n' && strncmp(azArg[0], "nullvalue", n)==0 && nArg==2 ) {
+ sprintf(p->nullvalue, "%.*s", (int)ArraySize(p->nullvalue)-1, azArg[1]);
+ }else
+
+ if( c=='o' && strncmp(azArg[0], "output", n)==0 && nArg==2 ){
+ if( p->out!=stdout ){
+ fclose(p->out);
+ }
+ if( strcmp(azArg[1],"stdout")==0 ){
+ p->out = stdout;
+ strcpy(p->outfile,"stdout");
+ }else{
+ p->out = fopen(azArg[1], "wb");
+ if( p->out==0 ){
+ fprintf(stderr,"can't write to \"%s\"\n", azArg[1]);
+ p->out = stdout;
+ } else {
+ strcpy(p->outfile,azArg[1]);
+ }
+ }
+ }else
+
+ if( c=='p' && strncmp(azArg[0], "prompt", n)==0 && (nArg==2 || nArg==3)){
+ if( nArg >= 2) {
+ strncpy(mainPrompt,azArg[1],(int)ArraySize(mainPrompt)-1);
+ }
+ if( nArg >= 3) {
+ strncpy(continuePrompt,azArg[2],(int)ArraySize(continuePrompt)-1);
+ }
+ }else
+
+ if( c=='q' && strncmp(azArg[0], "quit", n)==0 ){
+ rc = 1;
+ }else
+
+ if( c=='r' && strncmp(azArg[0], "read", n)==0 && nArg==2 ){
+ FILE *alt = fopen(azArg[1], "rb");
+ if( alt==0 ){
+ fprintf(stderr,"can't open \"%s\"\n", azArg[1]);
+ }else{
+ process_input(p, alt);
+ fclose(alt);
+ }
+ }else
+
+ if( c=='s' && strncmp(azArg[0], "schema", n)==0 ){
+ struct callback_data data;
+ char *zErrMsg = 0;
+ open_db(p);
+ memcpy(&data, p, sizeof(data));
+ data.showHeader = 0;
+ data.mode = MODE_Semi;
+ if( nArg>1 ){
+ int i;
+ for(i=0; azArg[1][i]; i++) azArg[1][i] = tolower(azArg[1][i]);
+ if( strcmp(azArg[1],"sqlite_master")==0 ){
+ char *new_argv[2], *new_colv[2];
+ new_argv[0] = "CREATE TABLE sqlite_master (\n"
+ " type text,\n"
+ " name text,\n"
+ " tbl_name text,\n"
+ " rootpage integer,\n"
+ " sql text\n"
+ ")";
+ new_argv[1] = 0;
+ new_colv[0] = "sql";
+ new_colv[1] = 0;
+ callback(&data, 1, new_argv, new_colv);
+ }else if( strcmp(azArg[1],"sqlite_temp_master")==0 ){
+ char *new_argv[2], *new_colv[2];
+ new_argv[0] = "CREATE TEMP TABLE sqlite_temp_master (\n"
+ " type text,\n"
+ " name text,\n"
+ " tbl_name text,\n"
+ " rootpage integer,\n"
+ " sql text\n"
+ ")";
+ new_argv[1] = 0;
+ new_colv[0] = "sql";
+ new_colv[1] = 0;
+ callback(&data, 1, new_argv, new_colv);
+ }else{
+ zShellStatic = azArg[1];
+ sqlite3_exec(p->db,
+ "SELECT sql FROM "
+ " (SELECT * FROM sqlite_master UNION ALL"
+ " SELECT * FROM sqlite_temp_master) "
+ "WHERE tbl_name LIKE shellstatic() AND type!='meta' AND sql NOTNULL "
+ "ORDER BY substr(type,2,1), name",
+ callback, &data, &zErrMsg);
+ zShellStatic = 0;
+ }
+ }else{
+ sqlite3_exec(p->db,
+ "SELECT sql FROM "
+ " (SELECT * FROM sqlite_master UNION ALL"
+ " SELECT * FROM sqlite_temp_master) "
+ "WHERE type!='meta' AND sql NOTNULL AND name NOT LIKE 'sqlite_%'"
+ "ORDER BY substr(type,2,1), name",
+ callback, &data, &zErrMsg
+ );
+ }
+ if( zErrMsg ){
+ fprintf(stderr,"Error: %s\n", zErrMsg);
+ sqlite3_free(zErrMsg);
+ }
+ }else
+
+ if( c=='s' && strncmp(azArg[0], "separator", n)==0 && nArg==2 ){
+ sprintf(p->separator, "%.*s", (int)ArraySize(p->separator)-1, azArg[1]);
+ }else
+
+ if( c=='s' && strncmp(azArg[0], "show", n)==0){
+ int i;
+ fprintf(p->out,"%9.9s: %s\n","echo", p->echoOn ? "on" : "off");
+ fprintf(p->out,"%9.9s: %s\n","explain", p->explainPrev.valid ? "on" :"off");
+ fprintf(p->out,"%9.9s: %s\n","headers", p->showHeader ? "on" : "off");
+ fprintf(p->out,"%9.9s: %s\n","mode", modeDescr[p->mode]);
+ fprintf(p->out,"%9.9s: ", "nullvalue");
+ output_c_string(p->out, p->nullvalue);
+ fprintf(p->out, "\n");
+ fprintf(p->out,"%9.9s: %s\n","output",
+ strlen(p->outfile) ? p->outfile : "stdout");
+ fprintf(p->out,"%9.9s: ", "separator");
+ output_c_string(p->out, p->separator);
+ fprintf(p->out, "\n");
+ fprintf(p->out,"%9.9s: ","width");
+ for (i=0;i<(int)ArraySize(p->colWidth) && p->colWidth[i] != 0;i++) {
+ fprintf(p->out,"%d ",p->colWidth[i]);
+ }
+ fprintf(p->out,"\n");
+ }else
+
+ if( c=='t' && n>1 && strncmp(azArg[0], "tables", n)==0 ){
+ char **azResult;
+ int nRow, rc;
+ char *zErrMsg;
+ open_db(p);
+ if( nArg==1 ){
+ rc = sqlite3_get_table(p->db,
+ "SELECT name FROM sqlite_master "
+ "WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%'"
+ "UNION ALL "
+ "SELECT name FROM sqlite_temp_master "
+ "WHERE type IN ('table','view') "
+ "ORDER BY 1",
+ &azResult, &nRow, 0, &zErrMsg
+ );
+ }else{
+ zShellStatic = azArg[1];
+ rc = sqlite3_get_table(p->db,
+ "SELECT name FROM sqlite_master "
+ "WHERE type IN ('table','view') AND name LIKE '%'||shellstatic()||'%' "
+ "UNION ALL "
+ "SELECT name FROM sqlite_temp_master "
+ "WHERE type IN ('table','view') AND name LIKE '%'||shellstatic()||'%' "
+ "ORDER BY 1",
+ &azResult, &nRow, 0, &zErrMsg
+ );
+ zShellStatic = 0;
+ }
+ if( zErrMsg ){
+ fprintf(stderr,"Error: %s\n", zErrMsg);
+ sqlite3_free(zErrMsg);
+ }
+ if( rc==SQLITE_OK ){
+ int len, maxlen = 0;
+ int i, j;
+ int nPrintCol, nPrintRow;
+ for(i=1; i<=nRow; i++){
+ if( azResult[i]==0 ) continue;
+ len = strlen(azResult[i]);
+ if( len>maxlen ) maxlen = len;
+ }
+ nPrintCol = 80/(maxlen+2);
+ if( nPrintCol<1 ) nPrintCol = 1;
+ nPrintRow = (nRow + nPrintCol - 1)/nPrintCol;
+ for(i=0; i<nPrintRow; i++){
+ for(j=i+1; j<=nRow; j+=nPrintRow){
+ char *zSp = j<=nPrintRow ? "" : " ";
+ printf("%s%-*s", zSp, maxlen, azResult[j] ? azResult[j] : "");
+ }
+ printf("\n");
+ }
+ }
+ sqlite3_free_table(azResult);
+ }else
+
+ if( c=='t' && n>1 && strncmp(azArg[0], "timeout", n)==0 && nArg>=2 ){
+ open_db(p);
+ sqlite3_busy_timeout(p->db, atoi(azArg[1]));
+ }else
+
+ if( c=='w' && strncmp(azArg[0], "width", n)==0 ){
+ int j;
+ assert( nArg<=ArraySize(azArg) );
+ for(j=1; j<nArg && j<ArraySize(p->colWidth); j++){
+ p->colWidth[j-1] = atoi(azArg[j]);
+ }
+ }else
+
+ {
+ fprintf(stderr, "unknown command or invalid arguments: "
+ " \"%s\". Enter \".help\" for help\n", azArg[0]);
+ }
+
+ return rc;
+}
+
+/*
+** Return TRUE if the last non-whitespace character in z[] is a semicolon.
+** z[] is N characters long.
+*/
+static int _ends_with_semicolon(const char *z, int N){
+ while( N>0 && isspace((unsigned char)z[N-1]) ){ N--; }
+ return N>0 && z[N-1]==';';
+}
+
+/*
+** Test to see if a line consists entirely of whitespace.
+*/
+static int _all_whitespace(const char *z){
+ for(; *z; z++){
+ if( isspace(*(unsigned char*)z) ) continue;
+ if( *z=='/' && z[1]=='*' ){
+ z += 2;
+ while( *z && (*z!='*' || z[1]!='/') ){ z++; }
+ if( *z==0 ) return 0;
+ z++;
+ continue;
+ }
+ if( *z=='-' && z[1]=='-' ){
+ z += 2;
+ while( *z && *z!='\n' ){ z++; }
+ if( *z==0 ) return 1;
+ continue;
+ }
+ return 0;
+ }
+ return 1;
+}
+
+/*
+** Return TRUE if the line typed in is an SQL command terminator other
+** than a semi-colon. The SQL Server style "go" command is understood
+** as is the Oracle "/".
+*/
+static int _is_command_terminator(const char *zLine){
+ while( isspace(*(unsigned char*)zLine) ){ zLine++; };
+ if( zLine[0]=='/' && _all_whitespace(&zLine[1]) ) return 1; /* Oracle */
+ if( tolower(zLine[0])=='g' && tolower(zLine[1])=='o'
+ && _all_whitespace(&zLine[2]) ){
+ return 1; /* SQL Server */
+ }
+ return 0;
+}
+
+/*
+** Read input from *in and process it. If *in==0 then input
+** is interactive - the user is typing it. Otherwise, input
+** is coming from a file or device. A prompt is issued and history
+** is saved only if input is interactive. An interrupt signal will
+** cause this routine to exit immediately, unless input is interactive.
+*/
+static void process_input(struct callback_data *p, FILE *in){
+ char *zLine;
+ char *zSql = 0;
+ int nSql = 0;
+ char *zErrMsg;
+ int rc;
+ while( fflush(p->out), (zLine = one_input_line(zSql, in))!=0 ){
+ if( seenInterrupt ){
+ if( in!=0 ) break;
+ seenInterrupt = 0;
+ }
+ if( p->echoOn ) printf("%s\n", zLine);
+ if( (zSql==0 || zSql[0]==0) && _all_whitespace(zLine) ) continue;
+ if( zLine && zLine[0]=='.' && nSql==0 ){
+ int rc = do_meta_command(zLine, p);
+ free(zLine);
+ if( rc ) break;
+ continue;
+ }
+ if( _is_command_terminator(zLine) ){
+ strcpy(zLine,";");
+ }
+ if( zSql==0 ){
+ int i;
+ for(i=0; zLine[i] && isspace((unsigned char)zLine[i]); i++){}
+ if( zLine[i]!=0 ){
+ nSql = strlen(zLine);
+ zSql = malloc( nSql+1 );
+ if( zSql==0 ){
+ fprintf(stderr, "out of memory\n");
+ exit(1);
+ }
+ strcpy(zSql, zLine);
+ }
+ }else{
+ int len = strlen(zLine);
+ zSql = realloc( zSql, nSql + len + 2 );
+ if( zSql==0 ){
+ fprintf(stderr,"%s: out of memory!\n", Argv0);
+ exit(1);
+ }
+ strcpy(&zSql[nSql++], "\n");
+ strcpy(&zSql[nSql], zLine);
+ nSql += len;
+ }
+ free(zLine);
+ if( zSql && _ends_with_semicolon(zSql, nSql) && sqlite3_complete(zSql) ){
+ p->cnt = 0;
+ open_db(p);
+ rc = sqlite3_exec(p->db, zSql, callback, p, &zErrMsg);
+ if( rc || zErrMsg ){
+ /* if( in!=0 && !p->echoOn ) printf("%s\n",zSql); */
+ if( zErrMsg!=0 ){
+ printf("SQL error: %s\n", zErrMsg);
+ sqlite3_free(zErrMsg);
+ zErrMsg = 0;
+ }else{
+ printf("SQL error: %s\n", sqlite3_errmsg(p->db));
+ }
+ }
+ free(zSql);
+ zSql = 0;
+ nSql = 0;
+ }
+ }
+ if( zSql ){
+ if( !_all_whitespace(zSql) ) printf("Incomplete SQL: %s\n", zSql);
+ free(zSql);
+ }
+}
+
+/*
+** Return a pathname which is the user's home directory. A
+** 0 return indicates an error of some kind. Space to hold the
+** resulting string is obtained from malloc(). The calling
+** function should free the result.
+*/
+static char *find_home_dir(void){
+ char *home_dir = NULL;
+
+#if !defined(_WIN32) && !defined(WIN32) && !defined(__MACOS__) && !defined(__OS2__)
+ struct passwd *pwent;
+ uid_t uid = getuid();
+ if( (pwent=getpwuid(uid)) != NULL) {
+ home_dir = pwent->pw_dir;
+ }
+#endif
+
+#ifdef __MACOS__
+ char home_path[_MAX_PATH+1];
+ home_dir = getcwd(home_path, _MAX_PATH);
+#endif
+
+#if defined(_WIN32) || defined(WIN32) || defined(__OS2__)
+ if (!home_dir) {
+ home_dir = getenv("USERPROFILE");
+ }
+#endif
+
+ if (!home_dir) {
+ home_dir = getenv("HOME");
+ }
+
+#if defined(_WIN32) || defined(WIN32) || defined(__OS2__)
+ if (!home_dir) {
+ char *zDrive, *zPath;
+ int n;
+ zDrive = getenv("HOMEDRIVE");
+ zPath = getenv("HOMEPATH");
+ if( zDrive && zPath ){
+ n = strlen(zDrive) + strlen(zPath) + 1;
+ home_dir = malloc( n );
+ if( home_dir==0 ) return 0;
+ sqlite3_snprintf(n, home_dir, "%s%s", zDrive, zPath);
+ return home_dir;
+ }
+ home_dir = "c:\\";
+ }
+#endif
+
+ if( home_dir ){
+ char *z = malloc( strlen(home_dir)+1 );
+ if( z ) strcpy(z, home_dir);
+ home_dir = z;
+ }
+
+ return home_dir;
+}
+
+/*
+** Read input from the file given by sqliterc_override. Or if that
+** parameter is NULL, take input from ~/.sqliterc
+*/
+static void process_sqliterc(
+ struct callback_data *p, /* Configuration data */
+ const char *sqliterc_override /* Name of config file. NULL to use default */
+){
+ char *home_dir = NULL;
+ const char *sqliterc = sqliterc_override;
+ char *zBuf = 0;
+ FILE *in = NULL;
+
+ if (sqliterc == NULL) {
+ home_dir = find_home_dir();
+ if( home_dir==0 ){
+ fprintf(stderr,"%s: cannot locate your home directory!\n", Argv0);
+ return;
+ }
+ zBuf = malloc(strlen(home_dir) + 15);
+ if( zBuf==0 ){
+ fprintf(stderr,"%s: out of memory!\n", Argv0);
+ exit(1);
+ }
+ sprintf(zBuf,"%s/.sqliterc",home_dir);
+ free(home_dir);
+ sqliterc = (const char*)zBuf;
+ }
+ in = fopen(sqliterc,"rb");
+ if( in ){
+ if( isatty(fileno(stdout)) ){
+ printf("Loading resources from %s\n",sqliterc);
+ }
+ process_input(p,in);
+ fclose(in);
+ }
+ free(zBuf);
+ return;
+}
+
+/*
+** Show available command line options
+*/
+static const char zOptions[] =
+ " -init filename read/process named file\n"
+ " -echo print commands before execution\n"
+ " -[no]header turn headers on or off\n"
+ " -column set output mode to 'column'\n"
+ " -html set output mode to HTML\n"
+ " -line set output mode to 'line'\n"
+ " -list set output mode to 'list'\n"
+ " -separator 'x' set output field separator (|)\n"
+ " -nullvalue 'text' set text string for NULL values\n"
+ " -version show SQLite version\n"
+;
+static void usage(int showDetail){
+ fprintf(stderr,
+ "Usage: %s [OPTIONS] FILENAME [SQL]\n"
+ "FILENAME is the name of an SQLite database. A new database is created\n"
+ "if the file does not previously exist.\n", Argv0);
+ if( showDetail ){
+ fprintf(stderr, "OPTIONS include:\n%s", zOptions);
+ }else{
+ fprintf(stderr, "Use the -help option for additional information\n");
+ }
+ exit(1);
+}
+
+/*
+** Initialize the state information in data
+*/
+static void main_init(struct callback_data *data) {
+ memset(data, 0, sizeof(*data));
+ data->mode = MODE_List;
+ strcpy(data->separator,"|");
+ data->showHeader = 0;
+ strcpy(mainPrompt,"sqlite> ");
+ strcpy(continuePrompt," ...> ");
+}
+
+int main(int argc, char **argv){
+ char *zErrMsg = 0;
+ struct callback_data data;
+ const char *zInitFile = 0;
+ char *zFirstCmd = 0;
+ int i;
+
+#ifdef __MACOS__
+ argc = ccommand(&argv);
+#endif
+
+ Argv0 = argv[0];
+ main_init(&data);
+
+ /* Make sure we have a valid signal handler early, before anything
+ ** else is done.
+ */
+#ifdef SIGINT
+ signal(SIGINT, interrupt_handler);
+#endif
+
+ /* Do an initial pass through the command-line arguments to locate
+ ** the name of the database file, the name of the initialization file,
+ ** and the first command to execute.
+ */
+ for(i=1; i<argc-1; i++){
+ if( argv[i][0]!='-' ) break;
+ if( strcmp(argv[i],"-separator")==0 || strcmp(argv[i],"-nullvalue")==0 ){
+ i++;
+ }else if( strcmp(argv[i],"-init")==0 ){
+ i++;
+ zInitFile = argv[i];
+ }
+ }
+ if( i<argc ){
+ data.zDbFilename = argv[i++];
+ }else{
+#ifndef SQLITE_OMIT_MEMORYDB
+ data.zDbFilename = ":memory:";
+#else
+ data.zDbFilename = 0;
+#endif
+ }
+ if( i<argc ){
+ zFirstCmd = argv[i++];
+ }
+ data.out = stdout;
+
+#ifdef SQLITE_OMIT_MEMORYDB
+ if( data.zDbFilename==0 ){
+ fprintf(stderr,"%s: no database filename specified\n", argv[0]);
+ exit(1);
+ }
+#endif
+
+ /* Go ahead and open the database file if it already exists. If the
+ ** file does not exist, delay opening it. This prevents empty database
+ ** files from being created if a user mistypes the database name argument
+ ** to the sqlite command-line tool.
+ */
+ if( access(data.zDbFilename, 0)==0 ){
+ open_db(&data);
+ }
+
+ /* Process the initialization file if there is one. If no -init option
+ ** is given on the command line, look for a file named ~/.sqliterc and
+ ** try to process it.
+ */
+ process_sqliterc(&data,zInitFile);
+
+ /* Make a second pass through the command-line arguments and set
+ ** options. This second pass is delayed until after the initialization
+ ** file is processed so that the command-line arguments will override
+ ** settings in the initialization file.
+ */
+ for(i=1; i<argc && argv[i][0]=='-'; i++){
+ char *z = argv[i];
+ if( strcmp(z,"-init")==0 ){
+ i++;
+ }else if( strcmp(z,"-html")==0 ){
+ data.mode = MODE_Html;
+ }else if( strcmp(z,"-list")==0 ){
+ data.mode = MODE_List;
+ }else if( strcmp(z,"-line")==0 ){
+ data.mode = MODE_Line;
+ }else if( strcmp(z,"-column")==0 ){
+ data.mode = MODE_Column;
+ }else if( strcmp(z,"-separator")==0 ){
+ i++;
+ sprintf(data.separator,"%.*s",(int)sizeof(data.separator)-1,argv[i]);
+ }else if( strcmp(z,"-nullvalue")==0 ){
+ i++;
+ sprintf(data.nullvalue,"%.*s",(int)sizeof(data.nullvalue)-1,argv[i]);
+ }else if( strcmp(z,"-header")==0 ){
+ data.showHeader = 1;
+ }else if( strcmp(z,"-noheader")==0 ){
+ data.showHeader = 0;
+ }else if( strcmp(z,"-echo")==0 ){
+ data.echoOn = 1;
+ }else if( strcmp(z,"-version")==0 ){
+ printf("%s\n", sqlite3_libversion());
+ return 0;
+ }else if( strcmp(z,"-help")==0 || strcmp(z, "--help")==0 ){
+ usage(1);
+ }else{
+ fprintf(stderr,"%s: unknown option: %s\n", Argv0, z);
+ fprintf(stderr,"Use -help for a list of options.\n");
+ return 1;
+ }
+ }
+
+ if( zFirstCmd ){
+ /* Run just the command that follows the database name
+ */
+ if( zFirstCmd[0]=='.' ){
+ do_meta_command(zFirstCmd, &data);
+ exit(0);
+ }else{
+ int rc;
+ open_db(&data);
+ rc = sqlite3_exec(data.db, zFirstCmd, callback, &data, &zErrMsg);
+ if( rc!=0 && zErrMsg!=0 ){
+ fprintf(stderr,"SQL error: %s\n", zErrMsg);
+ exit(1);
+ }
+ }
+ }else{
+ /* Run commands received from standard input
+ */
+ if( isatty(fileno(stdout)) && isatty(fileno(stdin)) ){
+ char *zHome;
+ char *zHistory = 0;
+ printf(
+ "SQLite version %s\n"
+ "Enter \".help\" for instructions\n",
+ sqlite3_libversion()
+ );
+ zHome = find_home_dir();
+ if( zHome && (zHistory = malloc(strlen(zHome)+20))!=0 ){
+ sprintf(zHistory,"%s/.sqlite_history", zHome);
+ }
+#if defined(HAVE_READLINE) && HAVE_READLINE==1
+ if( zHistory ) read_history(zHistory);
+#endif
+ process_input(&data, 0);
+ if( zHistory ){
+ stifle_history(100);
+ write_history(zHistory);
+ free(zHistory);
+ }
+ free(zHome);
+ }else{
+ process_input(&data, stdin);
+ }
+ }
+ set_table_name(&data, 0);
+ if( db ){
+ if( sqlite3_close(db)!=SQLITE_OK ){
+ fprintf(stderr,"error closing database: %s\n", sqlite3_errmsg(db));
+ }
+ }
+ return 0;
+}
Added: freeswitch/trunk/libs/sqlite/src/sqlite.h.in
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/sqlite.h.in Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1826 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This header file defines the interface that the SQLite library
+** presents to client programs.
+**
+** @(#) $Id: sqlite.h.in,v 1.194 2006/09/16 21:45:14 drh Exp $
+*/
+#ifndef _SQLITE3_H_
+#define _SQLITE3_H_
+#include <stdarg.h> /* Needed for the definition of va_list */
+
+/*
+** Make sure we can call this stuff from C++.
+*/
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+** The version of the SQLite library.
+*/
+#ifdef SQLITE_VERSION
+# undef SQLITE_VERSION
+#endif
+#define SQLITE_VERSION "--VERS--"
+
+/*
+** The format of the version string is "X.Y.Z<trailing string>", where
+** X is the major version number, Y is the minor version number and Z
+** is the release number. The trailing string is often "alpha" or "beta".
+** For example "3.1.1beta".
+**
+** The SQLITE_VERSION_NUMBER is an integer with the value
+** (X*100000 + Y*1000 + Z). For example, for version "3.1.1beta",
+** SQLITE_VERSION_NUMBER is set to 3001001. To detect if they are using
+** version 3.1.1 or greater at compile time, programs may use the test
+** (SQLITE_VERSION_NUMBER>=3001001).
+*/
+#ifdef SQLITE_VERSION_NUMBER
+# undef SQLITE_VERSION_NUMBER
+#endif
+#define SQLITE_VERSION_NUMBER --VERSION-NUMBER--
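+
+/* For instance, an application can select alternative code paths when it is
+** compiled against older or newer library headers (the 3.3.0 cutoff here is
+** only an illustration, not a statement about any particular feature):
+**
+**   #if SQLITE_VERSION_NUMBER >= 3003000
+**     ...code that depends on a newer interface...
+**   #else
+**     ...fallback...
+**   #endif
+*/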
+
+/*
+** The version string is also compiled into the library so that a program
+** can check to make sure that the lib*.a file and the *.h file are from
+** the same version. The sqlite3_libversion() function returns a pointer
+** to the sqlite3_version variable - useful in DLLs which cannot access
+** global variables.
+*/
+extern const char sqlite3_version[];
+const char *sqlite3_libversion(void);
+
+/*
+** Return the value of the SQLITE_VERSION_NUMBER macro when the
+** library was compiled.
+*/
+int sqlite3_libversion_number(void);
+
+/*
+** Each open sqlite database is represented by an instance of the
+** following opaque structure.
+*/
+typedef struct sqlite3 sqlite3;
+
+
+/*
+** Some compilers do not support the "long long" datatype. So we have
+** to define a typedef for 64-bit integers that depends on what compiler
+** is being used.
+*/
+#ifdef SQLITE_INT64_TYPE
+ typedef SQLITE_INT64_TYPE sqlite_int64;
+ typedef unsigned SQLITE_INT64_TYPE sqlite_uint64;
+#elif defined(_MSC_VER) || defined(__BORLANDC__)
+ typedef __int64 sqlite_int64;
+ typedef unsigned __int64 sqlite_uint64;
+#else
+ typedef long long int sqlite_int64;
+ typedef unsigned long long int sqlite_uint64;
+#endif
+
+/*
+** If compiling for a processor that lacks floating point support,
+** substitute integer for floating-point
+*/
+#ifdef SQLITE_OMIT_FLOATING_POINT
+# define double sqlite_int64
+#endif
+
+/*
+** A function to close the database.
+**
+** Call this function with a pointer to a structure that was previously
+** returned from sqlite3_open() and the corresponding database will be closed.
+**
+** All SQL statements prepared using sqlite3_prepare() or
+** sqlite3_prepare16() must be deallocated using sqlite3_finalize() before
+** this routine is called. Otherwise, SQLITE_BUSY is returned and the
+** database connection remains open.
+*/
+int sqlite3_close(sqlite3 *);
+
+/*
+** The type for a callback function.
+*/
+typedef int (*sqlite3_callback)(void*,int,char**, char**);
+
+/*
+** A function to execute one or more statements of SQL.
+**
+** If one or more of the SQL statements are queries, then
+** the callback function specified by the 3rd parameter is
+** invoked once for each row of the query result. This callback
+** should normally return 0. If the callback returns a non-zero
+** value then the query is aborted, all subsequent SQL statements
+** are skipped and the sqlite3_exec() function returns SQLITE_ABORT.
+**
+** The 4th parameter is an arbitrary pointer that is passed
+** to the callback function as its first parameter.
+**
+** The 2nd parameter to the callback function is the number of
+** columns in the query result. The 3rd parameter to the callback
+** is an array of strings holding the values for each column.
+** The 4th parameter to the callback is an array of strings holding
+** the names of each column.
+**
+** The callback function may be NULL, even for queries. A NULL
+** callback is not an error. It just means that no callback
+** will be invoked.
+**
+** If an error occurs while parsing or evaluating the SQL (but
+** not while executing the callback) then an appropriate error
+** message is written into memory obtained from malloc() and
+** *errmsg is made to point to that message. The calling function
+** is responsible for freeing the memory that holds the error
+** message. Use sqlite3_free() for this. If errmsg==NULL,
+** then no error message is ever written.
+**
+** The return value is SQLITE_OK if there are no errors and
+** some other return code if there is an error. The particular
+** return value depends on the type of error.
+**
+** If the query could not be executed because a database file is
+** locked or busy, then this function returns SQLITE_BUSY. (This
+** behavior can be modified somewhat using the sqlite3_busy_handler()
+** and sqlite3_busy_timeout() functions below.)
+*/
+int sqlite3_exec(
+ sqlite3*, /* An open database */
+ const char *sql, /* SQL to be executed */
+ sqlite3_callback, /* Callback function */
+ void *, /* 1st argument to callback function */
+ char **errmsg /* Error msg written here */
+);
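+
+/* A minimal calling sketch (the table name "t1" and the callback name are
+** hypothetical, shown only to make the parameter roles described above
+** concrete).  The callback returns 0 so that every row is delivered;
+** returning non-zero would abort the query with SQLITE_ABORT:
+**
+**   static int print_row(void *pArg, int nCol, char **azVal, char **azName){
+**     int i;
+**     for(i=0; i<nCol; i++){
+**       printf("%s = %s\n", azName[i], azVal[i] ? azVal[i] : "NULL");
+**     }
+**     return 0;
+**   }
+**
+**   char *zErr = 0;
+**   if( sqlite3_exec(db, "SELECT * FROM t1", print_row, 0, &zErr)!=SQLITE_OK ){
+**     fprintf(stderr, "SQL error: %s\n", zErr);
+**     sqlite3_free(zErr);
+**   }
+*/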
+
+/*
+** Return values for sqlite3_exec() and sqlite3_step()
+*/
+#define SQLITE_OK 0 /* Successful result */
+/* beginning-of-error-codes */
+#define SQLITE_ERROR 1 /* SQL error or missing database */
+#define SQLITE_INTERNAL 2 /* NOT USED. Internal logic error in SQLite */
+#define SQLITE_PERM 3 /* Access permission denied */
+#define SQLITE_ABORT 4 /* Callback routine requested an abort */
+#define SQLITE_BUSY 5 /* The database file is locked */
+#define SQLITE_LOCKED 6 /* A table in the database is locked */
+#define SQLITE_NOMEM 7 /* A malloc() failed */
+#define SQLITE_READONLY 8 /* Attempt to write a readonly database */
+#define SQLITE_INTERRUPT 9 /* Operation terminated by sqlite3_interrupt()*/
+#define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */
+#define SQLITE_CORRUPT 11 /* The database disk image is malformed */
+#define SQLITE_NOTFOUND 12 /* NOT USED. Table or record not found */
+#define SQLITE_FULL 13 /* Insertion failed because database is full */
+#define SQLITE_CANTOPEN 14 /* Unable to open the database file */
+#define SQLITE_PROTOCOL 15 /* Database lock protocol error */
+#define SQLITE_EMPTY 16 /* Database is empty */
+#define SQLITE_SCHEMA 17 /* The database schema changed */
+#define SQLITE_TOOBIG 18 /* NOT USED. Too much data for one row */
+#define SQLITE_CONSTRAINT 19 /* Abort due to constraint violation */
+#define SQLITE_MISMATCH 20 /* Data type mismatch */
+#define SQLITE_MISUSE 21 /* Library used incorrectly */
+#define SQLITE_NOLFS 22 /* Uses OS features not supported on host */
+#define SQLITE_AUTH 23 /* Authorization denied */
+#define SQLITE_FORMAT 24 /* Auxiliary database format error */
+#define SQLITE_RANGE 25 /* 2nd parameter to sqlite3_bind out of range */
+#define SQLITE_NOTADB 26 /* File opened that is not a database file */
+#define SQLITE_ROW 100 /* sqlite3_step() has another row ready */
+#define SQLITE_DONE 101 /* sqlite3_step() has finished executing */
+/* end-of-error-codes */
+
+/*
+** Using the sqlite3_extended_result_codes() API, you can cause
+** SQLite to return result codes with additional information in
+** their upper bits. The lower 8 bits will be the same as the
+** primary result codes above. But the upper bits might contain
+** more specific error information.
+**
+** To extract the primary result code from an extended result code,
+** simply mask off the lower 8 bits.
+**
+** primary = extended & 0xff;
+**
+** New result error codes may be added from time to time. Software
+** that uses the extended result codes should plan accordingly and be
+** sure to always handle new unknown codes gracefully.
+**
+** The SQLITE_OK result code will never be extended. It will always
+** be exactly zero.
+**
+** The extended result codes always have the primary result code
+** as a prefix. Primary result codes only contain a single "_"
+** character. Extended result codes contain two or more "_" characters.
+*/
+#define SQLITE_IOERR_READ (SQLITE_IOERR | (1<<8))
+#define SQLITE_IOERR_SHORT_READ (SQLITE_IOERR | (2<<8))
+#define SQLITE_IOERR_WRITE (SQLITE_IOERR | (3<<8))
+#define SQLITE_IOERR_FSYNC (SQLITE_IOERR | (4<<8))
+#define SQLITE_IOERR_DIR_FSYNC (SQLITE_IOERR | (5<<8))
+#define SQLITE_IOERR_TRUNCATE (SQLITE_IOERR | (6<<8))
+#define SQLITE_IOERR_FSTAT (SQLITE_IOERR | (7<<8))
+#define SQLITE_IOERR_UNLOCK (SQLITE_IOERR | (8<<8))
+#define SQLITE_IOERR_RDLOCK (SQLITE_IOERR | (9<<8))
+
+/*
+** Enable or disable the extended result codes.
+*/
+int sqlite3_extended_result_codes(sqlite3*, int onoff);
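+
+/* For example, once extended codes are enabled on a handle, an I/O failure
+** can be narrowed down while still being handled generically (a sketch):
+**
+**   sqlite3_extended_result_codes(db, 1);
+**   rc = sqlite3_exec(db, zSql, 0, 0, 0);
+**   if( (rc & 0xff)==SQLITE_IOERR ){
+**     ...rc may now be SQLITE_IOERR_READ, SQLITE_IOERR_FSYNC, etc...
+**   }
+*/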
+
+/*
+** Each entry in an SQLite table has a unique integer key. (The key is
+** the value of the INTEGER PRIMARY KEY column if there is such a column,
+** otherwise the key is generated at random. The unique key is always
+** available as the ROWID, OID, or _ROWID_ column.) The following routine
+** returns the integer key of the most recent insert in the database.
+**
+** This function is similar to the mysql_insert_id() function from MySQL.
+*/
+sqlite_int64 sqlite3_last_insert_rowid(sqlite3*);
+
+/*
+** This function returns the number of database rows that were changed
+** (or inserted or deleted) by the most recent call to sqlite3_exec().
+**
+** All changes are counted, even if they were later undone by a
+** ROLLBACK or ABORT. Except, changes associated with creating and
+** dropping tables are not counted.
+**
+** If a callback invokes sqlite3_exec() recursively, then the changes
+** in the inner, recursive call are counted together with the changes
+** in the outer call.
+**
+** SQLite implements the command "DELETE FROM table" without a WHERE clause
+** by dropping and recreating the table. (This is much faster than going
+** through and deleting individual elements from the table.) Because of
+** this optimization, the change count for "DELETE FROM table" will be
+** zero regardless of the number of elements that were originally in the
+** table. To get an accurate count of the number of rows deleted, use
+** "DELETE FROM table WHERE 1" instead.
+*/
+int sqlite3_changes(sqlite3*);
+
+/*
+** This function returns the number of database rows that have been
+** modified by INSERT, UPDATE or DELETE statements since the database handle
+** was opened. This includes UPDATE, INSERT and DELETE statements executed
+** as part of trigger programs. All changes are counted as soon as the
+** statement that makes them is completed (when the statement handle is
+** passed to sqlite3_reset() or sqlite3_finalize()).
+**
+** SQLite implements the command "DELETE FROM table" without a WHERE clause
+** by dropping and recreating the table. (This is much faster than going
+** through and deleting individual elements from the table.) Because of
+** this optimization, the change count for "DELETE FROM table" will be
+** zero regardless of the number of elements that were originally in the
+** table. To get an accurate count of the number of rows deleted, use
+** "DELETE FROM table WHERE 1" instead.
+*/
+int sqlite3_total_changes(sqlite3*);
+
+/* This function causes any pending database operation to abort and
+** return at its earliest opportunity. This routine is typically
+** called in response to a user action such as pressing "Cancel"
+** or Ctrl-C where the user wants a long query operation to halt
+** immediately.
+*/
+void sqlite3_interrupt(sqlite3*);
+
+
+/* These functions return true if the given input string comprises
+** one or more complete SQL statements. For the sqlite3_complete() call,
+** the parameter must be a nul-terminated UTF-8 string. For
+** sqlite3_complete16(), a nul-terminated machine byte order UTF-16 string
+** is required.
+**
+** The algorithm is simple. If the last token other than spaces
+** and comments is a semicolon, then return true. Otherwise return
+** false.
+*/
+int sqlite3_complete(const char *sql);
+int sqlite3_complete16(const void *sql);
+
+/*
+** This routine identifies a callback function that is invoked
+** whenever an attempt is made to open a database table that is
+** currently locked by another process or thread. If the busy callback
+** is NULL, then sqlite3_exec() returns SQLITE_BUSY immediately if
+** it finds a locked table. If the busy callback is not NULL, then
+** sqlite3_exec() invokes the callback with three arguments. The
+** second argument is the name of the locked table and the third
+** argument is the number of times the table has been busy. If the
+** busy callback returns 0, then sqlite3_exec() immediately returns
+** SQLITE_BUSY. If the callback returns non-zero, then sqlite3_exec()
+** tries to open the table again and the cycle repeats.
+**
+** The default busy callback is NULL.
+**
+** Sqlite is re-entrant, so the busy handler may start a new query.
+** (It is not clear why anyone would ever want to do this, but it
+** is allowed, in theory.) But the busy handler may not close the
+** database. Closing the database from a busy handler will delete
+** data structures out from under the executing query and will
+** probably result in a coredump.
+*/
+int sqlite3_busy_handler(sqlite3*, int(*)(void*,int), void*);
+
+/*
+** This routine sets a busy handler that sleeps for a while when a
+** table is locked. The handler will sleep multiple times until
+** at least "ms" milleseconds of sleeping have been done. After
+** "ms" milleseconds of sleeping, the handler returns 0 which
+** causes sqlite3_exec() to return SQLITE_BUSY.
+**
+** Calling this routine with an argument less than or equal to zero
+** turns off all busy handlers.
+*/
+int sqlite3_busy_timeout(sqlite3*, int ms);
+
+/*
+** This next routine is really just a wrapper around sqlite3_exec().
+** Instead of invoking a user-supplied callback for each row of the
+** result, this routine remembers each row of the result in memory
+** obtained from malloc(), then returns all of the result after the
+** query has finished.
+**
+** As an example, suppose the query result were this table:
+**
+** Name | Age
+** -----------------------
+** Alice | 43
+** Bob | 28
+** Cindy | 21
+**
+** If the 3rd argument were &azResult then after the function returns
+** azResult will contain the following data:
+**
+** azResult[0] = "Name";
+** azResult[1] = "Age";
+** azResult[2] = "Alice";
+** azResult[3] = "43";
+** azResult[4] = "Bob";
+** azResult[5] = "28";
+** azResult[6] = "Cindy";
+** azResult[7] = "21";
+**
+** Notice that there is an extra row of data containing the column
+** headers. But the *nrow return value is still 3. *ncolumn is
+** set to 2. In general, the number of values inserted into azResult
+** will be ((*nrow) + 1)*(*ncolumn).
+**
+** After the calling function has finished using the result, it should
+** pass the result data pointer to sqlite3_free_table() in order to
+** release the memory that was malloc-ed. Because of the way the
+** malloc() happens, the calling function must not try to call
+** free() directly. Only sqlite3_free_table() is able to release
+** the memory properly and safely.
+**
+** The return value of this routine is the same as from sqlite3_exec().
+*/
+int sqlite3_get_table(
+ sqlite3*, /* An open database */
+ const char *sql, /* SQL to be executed */
+ char ***resultp, /* Result written to a char *[] that this points to */
+ int *nrow, /* Number of result rows written here */
+ int *ncolumn, /* Number of result columns written here */
+ char **errmsg /* Error msg written here */
+);
+
+/*
+** Call this routine to free the memory that sqlite3_get_table() allocated.
+*/
+void sqlite3_free_table(char **result);
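+
+/* A sketch of the calling pattern for the Name/Age example above (error
+** handling abbreviated; the query text is hypothetical).  Row i of the data
+** starts at index i*(*ncolumn) because row 0 holds the column headers:
+**
+**   char **azResult; int nRow, nCol; char *zErr = 0;
+**   int rc = sqlite3_get_table(db, "SELECT name, age FROM people",
+**                              &azResult, &nRow, &nCol, &zErr);
+**   if( rc==SQLITE_OK ){
+**     int i;
+**     for(i=1; i<=nRow; i++){
+**       printf("%s is %s\n", azResult[i*nCol], azResult[i*nCol+1]);
+**     }
+**     sqlite3_free_table(azResult);
+**   }else{
+**     sqlite3_free(zErr);
+**   }
+*/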
+
+/*
+** The following routines are variants of the "sprintf()" from the
+** standard C library. The resulting string is written into memory
+** obtained from malloc() so that there is never a possibility of buffer
+** overflow. These routines also implement some additional formatting
+** options that are useful for constructing SQL statements.
+**
+** The strings returned by these routines should be freed by calling
+** sqlite3_free().
+**
+** All of the usual printf formatting options apply. In addition, there
+** is a "%q" option. %q works like %s in that it substitutes a null-terminated
+** string from the argument list. But %q also doubles every '\'' character.
+** %q is designed for use inside a string literal. By doubling each '\''
+** character it escapes that character and allows it to be inserted into
+** the string.
+**
+** For example, suppose some string variable contains text as follows:
+**
+** char *zText = "It's a happy day!";
+**
+** We can use this text in an SQL statement as follows:
+**
+** char *z = sqlite3_mprintf("INSERT INTO TABLES('%q')", zText);
+** sqlite3_exec(db, z, callback1, 0, 0);
+** sqlite3_free(z);
+**
+** Because the %q format string is used, the '\'' character in zText
+** is escaped and the SQL generated is as follows:
+**
+** INSERT INTO table1 VALUES('It''s a happy day!')
+**
+** This is correct. Had we used %s instead of %q, the generated SQL
+** would have looked like this:
+**
+** INSERT INTO table1 VALUES('It's a happy day!');
+**
+** This second example is an SQL syntax error. As a general rule you
+** should always use %q instead of %s when inserting text into a string
+** literal.
+*/
+char *sqlite3_mprintf(const char*,...);
+char *sqlite3_vmprintf(const char*, va_list);
+char *sqlite3_snprintf(int,char*,const char*, ...);
+
+/*
+** SQLite uses its own memory allocator. On many installations, this
+** memory allocator is identical to the standard malloc()/realloc()/free()
+** and can be used interchangeably. On others, the implementations are
+** different. For maximum portability, it is best not to mix calls
+** to the standard malloc/realloc/free with the sqlite versions.
+*/
+void *sqlite3_malloc(int);
+void *sqlite3_realloc(void*, int);
+void sqlite3_free(void*);
+
+#ifndef SQLITE_OMIT_AUTHORIZATION
+/*
+** This routine registers a callback with the SQLite library. The
+** callback is invoked (at compile-time, not at run-time) for each
+** attempt to access a column of a table in the database. The callback
+** returns SQLITE_OK if access is allowed, SQLITE_DENY if the entire
+** SQL statement should be aborted with an error and SQLITE_IGNORE
+** if the column should be treated as a NULL value.
+*/
+int sqlite3_set_authorizer(
+ sqlite3*,
+ int (*xAuth)(void*,int,const char*,const char*,const char*,const char*),
+ void *pUserData
+);
+#endif
+
+/*
+** The second parameter to the access authorization function above will
+** be one of the values below. These values signify what kind of operation
+** is to be authorized. The 3rd and 4th parameters to the authorization
+** function will be parameters or NULL depending on which of the following
+** codes is used as the second parameter. The 5th parameter is the name
+** of the database ("main", "temp", etc.) if applicable. The 6th parameter
+** is the name of the inner-most trigger or view that is responsible for
+** the access attempt or NULL if this access attempt is directly from
+** input SQL code.
+**
+** Arg-3 Arg-4
+*/
+#define SQLITE_COPY 0 /* Table Name File Name */
+#define SQLITE_CREATE_INDEX 1 /* Index Name Table Name */
+#define SQLITE_CREATE_TABLE 2 /* Table Name NULL */
+#define SQLITE_CREATE_TEMP_INDEX 3 /* Index Name Table Name */
+#define SQLITE_CREATE_TEMP_TABLE 4 /* Table Name NULL */
+#define SQLITE_CREATE_TEMP_TRIGGER 5 /* Trigger Name Table Name */
+#define SQLITE_CREATE_TEMP_VIEW 6 /* View Name NULL */
+#define SQLITE_CREATE_TRIGGER 7 /* Trigger Name Table Name */
+#define SQLITE_CREATE_VIEW 8 /* View Name NULL */
+#define SQLITE_DELETE 9 /* Table Name NULL */
+#define SQLITE_DROP_INDEX 10 /* Index Name Table Name */
+#define SQLITE_DROP_TABLE 11 /* Table Name NULL */
+#define SQLITE_DROP_TEMP_INDEX 12 /* Index Name Table Name */
+#define SQLITE_DROP_TEMP_TABLE 13 /* Table Name NULL */
+#define SQLITE_DROP_TEMP_TRIGGER 14 /* Trigger Name Table Name */
+#define SQLITE_DROP_TEMP_VIEW 15 /* View Name NULL */
+#define SQLITE_DROP_TRIGGER 16 /* Trigger Name Table Name */
+#define SQLITE_DROP_VIEW 17 /* View Name NULL */
+#define SQLITE_INSERT 18 /* Table Name NULL */
+#define SQLITE_PRAGMA 19 /* Pragma Name 1st arg or NULL */
+#define SQLITE_READ 20 /* Table Name Column Name */
+#define SQLITE_SELECT 21 /* NULL NULL */
+#define SQLITE_TRANSACTION 22 /* NULL NULL */
+#define SQLITE_UPDATE 23 /* Table Name Column Name */
+#define SQLITE_ATTACH 24 /* Filename NULL */
+#define SQLITE_DETACH 25 /* Database Name NULL */
+#define SQLITE_ALTER_TABLE 26 /* Database Name Table Name */
+#define SQLITE_REINDEX 27 /* Index Name NULL */
+#define SQLITE_ANALYZE 28 /* Table Name NULL */
+#define SQLITE_CREATE_VTABLE 29 /* Table Name Module Name */
+#define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */
+#define SQLITE_FUNCTION 31 /* Function Name NULL */
+
+/*
+** The return value of the authorization function should be one of the
+** following constants:
+*/
+/* #define SQLITE_OK 0 // Allow access (This is actually defined above) */
+#define SQLITE_DENY 1 /* Abort the SQL statement with an error */
+#define SQLITE_IGNORE 2 /* Don't allow access, but don't generate an error */
+
+/*
+** Register a function for tracing SQL command evaluation. The function
+** registered by sqlite3_trace() is invoked at the first sqlite3_step()
+** for the evaluation of an SQL statement. The function registered by
+** sqlite3_profile() runs at the end of each SQL statement and includes
+** information on how long that statement ran.
+**
+** The sqlite3_profile() API is currently considered experimental and
+** is subject to change.
+*/
+void *sqlite3_trace(sqlite3*, void(*xTrace)(void*,const char*), void*);
+void *sqlite3_profile(sqlite3*,
+ void(*xProfile)(void*,const char*,sqlite_uint64), void*);
+
+/*
+** This routine configures a callback function - the progress callback - that
+** is invoked periodically during long running calls to sqlite3_exec(),
+** sqlite3_step() and sqlite3_get_table(). An example use for this API is to
+** keep a GUI updated during a large query.
+**
+** The progress callback is invoked once for every N virtual machine opcodes,
+** where N is the second argument to this function. The progress callback
+** itself is identified by the third argument to this function. The fourth
+** argument to this function is a void pointer passed to the progress callback
+** function each time it is invoked.
+**
+** If a call to sqlite3_exec(), sqlite3_step() or sqlite3_get_table() results
+** in less than N opcodes being executed, then the progress callback is not
+** invoked.
+**
+** To remove the progress callback altogether, pass NULL as the third
+** argument to this function.
+**
+** If the progress callback returns a result other than 0, then the current
+** query is immediately terminated and any database changes rolled back. If the
+** query was part of a larger transaction, then the transaction is not rolled
+** back and remains active. The sqlite3_exec() call returns SQLITE_ABORT.
+**
+******* THIS IS AN EXPERIMENTAL API AND IS SUBJECT TO CHANGE ******
+*/
+void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*);
+
+/*
+** Register a callback function to be invoked whenever a new transaction
+** is committed. The pArg argument is passed through to the callback.
+** If the callback function returns non-zero, then the commit
+** is converted into a rollback.
+**
+** If another function was previously registered, its pArg value is returned.
+** Otherwise NULL is returned.
+**
+** Registering a NULL function disables the callback.
+**
+******* THIS IS AN EXPERIMENTAL API AND IS SUBJECT TO CHANGE ******
+*/
+void *sqlite3_commit_hook(sqlite3*, int(*)(void*), void*);
+
+/*
+** Open the sqlite database file "filename". The "filename" is UTF-8
+** encoded for sqlite3_open() and UTF-16 encoded in the native byte order
+** for sqlite3_open16(). An sqlite3* handle is returned in *ppDb, even
+** if an error occurs. If the database is opened (or created) successfully,
+** then SQLITE_OK is returned. Otherwise an error code is returned. The
+** sqlite3_errmsg() or sqlite3_errmsg16() routines can be used to obtain
+** an English language description of the error.
+**
+** If the database file does not exist, then a new database is created.
+** The encoding for the database is UTF-8 if sqlite3_open() is called and
+** UTF-16 if sqlite3_open16 is used.
+**
+** Whether or not an error occurs when it is opened, resources associated
+** with the sqlite3* handle should be released by passing it to
+** sqlite3_close() when it is no longer required.
+*/
+int sqlite3_open(
+ const char *filename, /* Database filename (UTF-8) */
+ sqlite3 **ppDb /* OUT: SQLite db handle */
+);
+int sqlite3_open16(
+ const void *filename, /* Database filename (UTF-16) */
+ sqlite3 **ppDb /* OUT: SQLite db handle */
+);
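+
+/* A minimal open/close sketch (the filename is hypothetical; sqlite3_errmsg()
+** is declared a little further below).  Note that sqlite3_close() is called
+** even when the open fails, as described above:
+**
+**   sqlite3 *db = 0;
+**   if( sqlite3_open("test.db", &db)!=SQLITE_OK ){
+**     fprintf(stderr, "cannot open database: %s\n", sqlite3_errmsg(db));
+**   }else{
+**     ...use the handle...
+**   }
+**   sqlite3_close(db);
+*/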
+
+/*
+** Return the error code for the most recent sqlite3_* API call associated
+** with sqlite3 handle 'db'. SQLITE_OK is returned if the most recent
+** API call was successful.
+**
+** Calls to many sqlite3_* functions set the error code and string returned
+** by sqlite3_errcode(), sqlite3_errmsg() and sqlite3_errmsg16()
+** (overwriting the previous values). Note that calls to sqlite3_errcode(),
+** sqlite3_errmsg() and sqlite3_errmsg16() themselves do not affect the
+** results of future invocations.
+**
+** Assuming no other intervening sqlite3_* API calls are made, the error
+** code returned by this function is associated with the same error as
+** the strings returned by sqlite3_errmsg() and sqlite3_errmsg16().
+*/
+int sqlite3_errcode(sqlite3 *db);
+
+/*
+** Return a pointer to a UTF-8 encoded string describing in English the
+** error condition for the most recent sqlite3_* API call. The returned
+** string is always terminated by an 0x00 byte.
+**
+** The string "not an error" is returned when the most recent API call was
+** successful.
+*/
+const char *sqlite3_errmsg(sqlite3*);
+
+/*
+** Return a pointer to a UTF-16 native byte order encoded string describing
+** in English the error condition for the most recent sqlite3_* API call.
+** The returned string is always terminated by a pair of 0x00 bytes.
+**
+** The string "not an error" is returned when the most recent API call was
+** successful.
+*/
+const void *sqlite3_errmsg16(sqlite3*);
+
+/*
+** An instance of the following opaque structure is used to represent
+** a compiled SQL statement.
+*/
+typedef struct sqlite3_stmt sqlite3_stmt;
+
+/*
+** To execute an SQL query, it must first be compiled into a byte-code
+** program using one of the following routines. The only difference between
+** them is that the second argument, specifying the SQL statement to
+** compile, is assumed to be encoded in UTF-8 for the sqlite3_prepare()
+** function and UTF-16 for sqlite3_prepare16().
+**
+** The first parameter "db" is an SQLite database handle. The second
+** parameter "zSql" is the statement to be compiled, encoded as either
+** UTF-8 or UTF-16 (see above). If the next parameter, "nBytes", is less
+** than zero, then zSql is read up to the first nul terminator. If
+** "nBytes" is not less than zero, then it is the length of the string zSql
+** in bytes (not characters).
+**
+** *pzTail is made to point to the first byte past the end of the first
+** SQL statement in zSql. This routine only compiles the first statement
+** in zSql, so *pzTail is left pointing to what remains uncompiled.
+**
+** *ppStmt is left pointing to a compiled SQL statement that can be
+** executed using sqlite3_step(). Or if there is an error, *ppStmt may be
+** set to NULL. If the input text contains no SQL (if the input is an
+** empty string or a comment) then *ppStmt is set to NULL.
+**
+** On success, SQLITE_OK is returned. Otherwise an error code is returned.
+*/
+int sqlite3_prepare(
+ sqlite3 *db, /* Database handle */
+ const char *zSql, /* SQL statement, UTF-8 encoded */
+ int nBytes, /* Length of zSql in bytes. */
+ sqlite3_stmt **ppStmt, /* OUT: Statement handle */
+ const char **pzTail /* OUT: Pointer to unused portion of zSql */
+);
+int sqlite3_prepare16(
+ sqlite3 *db, /* Database handle */
+ const void *zSql, /* SQL statement, UTF-16 encoded */
+ int nBytes, /* Length of zSql in bytes. */
+ sqlite3_stmt **ppStmt, /* OUT: Statement handle */
+ const void **pzTail /* OUT: Pointer to unused portion of zSql */
+);
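+
+/*
+** Illustrative sketch (not part of the original header): compile a single
+** statement, step through its rows, then finalize it. The table and column
+** names are placeholders.
+*/
+#if 0
+sqlite3_stmt *pStmt = 0;
+if( sqlite3_prepare(db, "SELECT name FROM t1", -1, &pStmt, 0)==SQLITE_OK ){
+  while( sqlite3_step(pStmt)==SQLITE_ROW ){
+    printf("%s\n", sqlite3_column_text(pStmt, 0));
+  }
+}
+sqlite3_finalize(pStmt);    /* release the statement when done */
+#endif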
+
+/*
+** Pointers to the following two opaque structures are used to communicate
+** with the implementations of user-defined functions.
+*/
+typedef struct sqlite3_context sqlite3_context;
+typedef struct Mem sqlite3_value;
+
+/*
+** In the SQL strings input to sqlite3_prepare() and sqlite3_prepare16(),
+** one or more literals can be replaced by parameters "?" or ":AAA" or
+** "$VVV" where AAA is an identifier and VVV is a variable name according
+** to the syntax rules of the TCL programming language.
+** The value of these parameters (also called "host parameter names") can
+** be set using the routines listed below.
+**
+** In every case, the first parameter is a pointer to the sqlite3_stmt
+** structure returned from sqlite3_prepare(). The second parameter is the
+** index of the parameter. The first parameter has an index of 1. For
+** named parameters (":AAA" or "$VVV") you can use
+** sqlite3_bind_parameter_index() to get the correct index value given
+** the parameter's name. If the same named parameter occurs more than
+** once, it is assigned the same index each time.
+**
+** The fifth parameter to sqlite3_bind_blob(), sqlite3_bind_text(), and
+** sqlite3_bind_text16() is a destructor used to dispose of the BLOB or
+** text after SQLite has finished with it. If the fifth argument is the
+** special value SQLITE_STATIC, then the library assumes that the information
+** is in static, unmanaged space and does not need to be freed. If the
+** fifth argument has the value SQLITE_TRANSIENT, then SQLite makes its
+** own private copy of the data.
+**
+** The sqlite3_bind_* routines must be called after sqlite3_prepare() or
+** sqlite3_reset() and before sqlite3_step(). Unbound parameters are
+** interpreted as NULL.
+*/
+int sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*));
+int sqlite3_bind_double(sqlite3_stmt*, int, double);
+int sqlite3_bind_int(sqlite3_stmt*, int, int);
+int sqlite3_bind_int64(sqlite3_stmt*, int, sqlite_int64);
+int sqlite3_bind_null(sqlite3_stmt*, int);
+int sqlite3_bind_text(sqlite3_stmt*, int, const char*, int n, void(*)(void*));
+int sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int, void(*)(void*));
+int sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*);
+
+/*
+** Return the number of parameters in a compiled SQL statement. This
+** routine was added to support DBD::SQLite.
+*/
+int sqlite3_bind_parameter_count(sqlite3_stmt*);
+
+/*
+** Return the name of the i-th parameter. Ordinary parameters "?" are
+** nameless and a NULL is returned. For parameters of the form :AAA or
+** $VVV the complete text of the parameter name is returned, including
+** the initial ":" or "$". NULL is returned if the index is out of range.
+*/
+const char *sqlite3_bind_parameter_name(sqlite3_stmt*, int);
+
+/*
+** Return the index of a parameter with the given name. The name
+** must match exactly. If no parameter with the given name is found,
+** return 0.
+*/
+int sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName);
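+
+/*
+** Illustrative sketch (not part of the original header): bind a positional
+** and a named parameter before stepping a prepared INSERT. The table and
+** parameter names are placeholders.
+*/
+#if 0
+sqlite3_stmt *pStmt = 0;
+sqlite3_prepare(db, "INSERT INTO t1 VALUES(?, :name)", -1, &pStmt, 0);
+sqlite3_bind_int(pStmt, 1, 42);
+sqlite3_bind_text(pStmt, sqlite3_bind_parameter_index(pStmt, ":name"),
+                  "alice", -1, SQLITE_TRANSIENT);
+sqlite3_step(pStmt);
+sqlite3_finalize(pStmt);
+#endif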
+
+/*
+** Set all the parameters in the compiled SQL statement to NULL.
+*/
+int sqlite3_clear_bindings(sqlite3_stmt*);
+
+/*
+** Return the number of columns in the result set returned by the compiled
+** SQL statement. This routine returns 0 if pStmt is an SQL statement
+** that does not return data (for example an UPDATE).
+*/
+int sqlite3_column_count(sqlite3_stmt *pStmt);
+
+/*
+** The first parameter is a compiled SQL statement. This function returns
+** the column heading for the Nth column of that statement, where N is the
+** second function parameter. The string returned is UTF-8 for
+** sqlite3_column_name() and UTF-16 for sqlite3_column_name16().
+*/
+const char *sqlite3_column_name(sqlite3_stmt*,int);
+const void *sqlite3_column_name16(sqlite3_stmt*,int);
+
+/*
+** The first parameter to the following calls is a compiled SQL statement.
+** These functions return information about the Nth column returned by
+** the statement, where N is the second function argument.
+**
+** If the Nth column returned by the statement is not a column value,
+** then all of the functions return NULL. Otherwise, they return the
+** name of the attached database, table and column that the expression
+** extracts a value from.
+**
+** As with all other SQLite APIs, those postfixed with "16" return UTF-16
+** encoded strings, the other functions return UTF-8. The memory containing
+** the returned strings is valid until the statement handle is finalized.
+**
+** These APIs are only available if the library was compiled with the
+** SQLITE_ENABLE_COLUMN_METADATA preprocessor symbol defined.
+*/
+const char *sqlite3_column_database_name(sqlite3_stmt*,int);
+const void *sqlite3_column_database_name16(sqlite3_stmt*,int);
+const char *sqlite3_column_table_name(sqlite3_stmt*,int);
+const void *sqlite3_column_table_name16(sqlite3_stmt*,int);
+const char *sqlite3_column_origin_name(sqlite3_stmt*,int);
+const void *sqlite3_column_origin_name16(sqlite3_stmt*,int);
+
+/*
+** The first parameter is a compiled SQL statement. If this statement
+** is a SELECT statement and the Nth column of the returned result set
+** of the SELECT is a table column, then the declared type of the table
+** column is returned. If the Nth column of the result set is not a table
+** column, then a NULL pointer is returned. The returned string is always
+** UTF-8 encoded. For example, in the database schema:
+**
+** CREATE TABLE t1(c1 VARIANT);
+**
+** And the following statement compiled:
+**
+** SELECT c1 + 1, c1 FROM t1;
+**
+** Then this routine would return the string "VARIANT" for the second
+** result column (i==1), and a NULL pointer for the first result column
+** (i==0).
+*/
+const char *sqlite3_column_decltype(sqlite3_stmt *, int i);
+
+/*
+** The first parameter is a compiled SQL statement. If this statement
+** is a SELECT statement and the Nth column of the returned result set
+** of the SELECT is a table column, then the declared type of the table
+** column is returned. If the Nth column of the result set is not a table
+** column, then a NULL pointer is returned. The returned string is always
+** UTF-16 encoded. For example, in the database schema:
+**
+** CREATE TABLE t1(c1 INTEGER);
+**
+** And the following statement compiled:
+**
+** SELECT c1 + 1, c1 FROM t1;
+**
+** Then this routine would return the string "INTEGER" for the second
+** result column (i==1), and a NULL pointer for the first result column
+** (i==0).
+*/
+const void *sqlite3_column_decltype16(sqlite3_stmt*,int);
+
+/*
+** After an SQL query has been compiled with a call to either
+** sqlite3_prepare() or sqlite3_prepare16(), then this function must be
+** called one or more times to execute the statement.
+**
+** The return value will be either SQLITE_BUSY, SQLITE_DONE,
+** SQLITE_ROW, SQLITE_ERROR, or SQLITE_MISUSE.
+**
+** SQLITE_BUSY means that the database engine attempted to open
+** a locked database and there is no busy callback registered.
+** Call sqlite3_step() again to retry the open.
+**
+** SQLITE_DONE means that the statement has finished executing
+** successfully. sqlite3_step() should not be called again on this virtual
+** machine.
+**
+** If the SQL statement being executed returns any data, then
+** SQLITE_ROW is returned each time a new row of data is ready
+** for processing by the caller. The values may be accessed using
+** the sqlite3_column_*() functions described below. sqlite3_step()
+** is called again to retrieve the next row of data.
+**
+** SQLITE_ERROR means that a run-time error (such as a constraint
+** violation) has occurred. sqlite3_step() should not be called again on
+** the VM. More information may be found by calling sqlite3_errmsg().
+**
+** SQLITE_MISUSE means that this routine was called inappropriately.
+** Perhaps it was called on a virtual machine that had already been
+** finalized or on one that had previously returned SQLITE_ERROR or
+** SQLITE_DONE. Or it could be the case that the same database connection
+** is being used simultaneously by two or more threads.
+*/
+int sqlite3_step(sqlite3_stmt*);
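+
+/*
+** Illustrative sketch (not part of the original header): a stepping loop
+** that consumes rows and backs off briefly on SQLITE_BUSY before retrying.
+*/
+#if 0
+int rc;
+do{
+  rc = sqlite3_step(pStmt);
+  if( rc==SQLITE_ROW ){
+    /* read the row using the sqlite3_column_*() routines */
+  }else if( rc==SQLITE_BUSY ){
+    sqlite3_sleep(10);      /* wait roughly 10 milliseconds, then retry */
+  }
+}while( rc==SQLITE_ROW || rc==SQLITE_BUSY );
+if( rc!=SQLITE_DONE ){
+  fprintf(stderr, "error: %s\n", sqlite3_errmsg(db));
+}
+#endif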
+
+/*
+** Return the number of values in the current row of the result set.
+**
+** After a call to sqlite3_step() that returns SQLITE_ROW, this routine
+** will return the same value as the sqlite3_column_count() function.
+** After sqlite3_step() has returned an SQLITE_DONE, SQLITE_BUSY or
+** error code, or before sqlite3_step() has been called on a
+** compiled SQL statement, this routine returns zero.
+*/
+int sqlite3_data_count(sqlite3_stmt *pStmt);
+
+/*
+** Values are stored in the database in one of the following fundamental
+** types.
+*/
+#define SQLITE_INTEGER 1
+#define SQLITE_FLOAT 2
+/* #define SQLITE_TEXT 3 // See below */
+#define SQLITE_BLOB 4
+#define SQLITE_NULL 5
+
+/*
+** SQLite version 2 defines SQLITE_TEXT differently. To allow both
+** version 2 and version 3 to be included, undefine them both if a
+** conflict is seen. Define SQLITE3_TEXT to be the version 3 value.
+*/
+#ifdef SQLITE_TEXT
+# undef SQLITE_TEXT
+#else
+# define SQLITE_TEXT 3
+#endif
+#define SQLITE3_TEXT 3
+
+/*
+** The next group of routines returns information about a single column
+** of the current result row of a query. In every
+** case the first parameter is a pointer to the SQL statement that is being
+** executed (the sqlite_stmt* that was returned from sqlite3_prepare()) and
+** the second argument is the index of the column for which information
+** should be returned. iCol is zero-indexed. The left-most column has an
+** index of 0.
+**
+** If the SQL statement does not currently point to a valid row, or if the
+** column index is out of range, the result is undefined.
+**
+** These routines attempt to convert the value where appropriate. For
+** example, if the internal representation is FLOAT and a text result
+** is requested, sprintf() is used internally to do the conversion
+** automatically. The following table details the conversions that
+** are applied:
+**
+** Internal Type Requested Type Conversion
+** ------------- -------------- --------------------------
+** NULL INTEGER Result is 0
+** NULL FLOAT Result is 0.0
+** NULL TEXT Result is an empty string
+** NULL BLOB Result is a zero-length BLOB
+** INTEGER FLOAT Convert from integer to float
+** INTEGER TEXT ASCII rendering of the integer
+** INTEGER BLOB Same as for INTEGER->TEXT
+** FLOAT INTEGER Convert from float to integer
+** FLOAT TEXT ASCII rendering of the float
+** FLOAT BLOB Same as FLOAT->TEXT
+** TEXT INTEGER Use atoi()
+** TEXT FLOAT Use atof()
+** TEXT BLOB No change
+** BLOB INTEGER Convert to TEXT then use atoi()
+** BLOB FLOAT Convert to TEXT then use atof()
+** BLOB TEXT Add a \000 terminator if needed
+**
+** The following access routines are provided:
+**
+** _type() Return the datatype of the result. This is one of
+** SQLITE_INTEGER, SQLITE_FLOAT, SQLITE_TEXT, SQLITE_BLOB,
+** or SQLITE_NULL.
+** _blob() Return the value of a BLOB.
+** _bytes() Return the number of bytes in a BLOB value or the number
+** of bytes in a TEXT value represented as UTF-8. The \000
+** terminator is included in the byte count for TEXT values.
+** _bytes16() Return the number of bytes in a BLOB value or the number
+** of bytes in a TEXT value represented as UTF-16. The \u0000
+** terminator is included in the byte count for TEXT values.
+** _double() Return a FLOAT value.
+** _int() Return an INTEGER value in the host computer's native
+** integer representation. This might be either a 32- or 64-bit
+** integer depending on the host.
+** _int64() Return an INTEGER value as a 64-bit signed integer.
+** _text() Return the value as UTF-8 text.
+** _text16() Return the value as UTF-16 text.
+*/
+const void *sqlite3_column_blob(sqlite3_stmt*, int iCol);
+int sqlite3_column_bytes(sqlite3_stmt*, int iCol);
+int sqlite3_column_bytes16(sqlite3_stmt*, int iCol);
+double sqlite3_column_double(sqlite3_stmt*, int iCol);
+int sqlite3_column_int(sqlite3_stmt*, int iCol);
+sqlite_int64 sqlite3_column_int64(sqlite3_stmt*, int iCol);
+const unsigned char *sqlite3_column_text(sqlite3_stmt*, int iCol);
+const void *sqlite3_column_text16(sqlite3_stmt*, int iCol);
+int sqlite3_column_type(sqlite3_stmt*, int iCol);
+int sqlite3_column_numeric_type(sqlite3_stmt*, int iCol);
+sqlite3_value *sqlite3_column_value(sqlite3_stmt*, int iCol);
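+
+/*
+** Illustrative sketch (not part of the original header): print each value
+** of the current row using the accessor that matches its datatype.
+*/
+#if 0
+int i;
+for(i=0; i<sqlite3_data_count(pStmt); i++){
+  switch( sqlite3_column_type(pStmt, i) ){
+    case SQLITE_INTEGER:
+      printf("%lld", (long long)sqlite3_column_int64(pStmt, i));
+      break;
+    case SQLITE_FLOAT:
+      printf("%g", sqlite3_column_double(pStmt, i));
+      break;
+    case SQLITE_NULL:
+      printf("NULL");
+      break;
+    default:  /* SQLITE_TEXT and SQLITE_BLOB */
+      printf("%s", sqlite3_column_text(pStmt, i));
+      break;
+  }
+}
+#endif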
+
+/*
+** The sqlite3_finalize() function is called to delete a compiled
+** SQL statement obtained by a previous call to sqlite3_prepare()
+** or sqlite3_prepare16(). If the statement was executed successfully, or
+** not executed at all, then SQLITE_OK is returned. If execution of the
+** statement failed then an error code is returned.
+**
+** This routine can be called at any point during the execution of the
+** virtual machine. If the virtual machine has not completed execution
+** when this routine is called, that is like encountering an error or
+** an interrupt. (See sqlite3_interrupt().) Incomplete updates may be
+** rolled back and transactions cancelled, depending on the circumstances,
+** and the result code returned will be SQLITE_ABORT.
+*/
+int sqlite3_finalize(sqlite3_stmt *pStmt);
+
+/*
+** The sqlite3_reset() function is called to reset a compiled SQL
+** statement obtained by a previous call to sqlite3_prepare() or
+** sqlite3_prepare16() back to its initial state, ready to be re-executed.
+** Any SQL statement variables that had values bound to them using
+** the sqlite3_bind_*() API retain their values.
+*/
+int sqlite3_reset(sqlite3_stmt *pStmt);
+
+/*
+** The following two functions are used to add user functions or aggregates
+** implemented in C to the SQL language interpreted by SQLite. The
+** only difference between the two is that the second parameter, the
+** name of the (scalar) function or aggregate, is encoded in UTF-8 for
+** sqlite3_create_function() and UTF-16 for sqlite3_create_function16().
+**
+** The first argument is the database handle that the new function or
+** aggregate is to be added to. If a single program uses more than one
+** database handle internally, then user functions or aggregates must
+** be added individually to each database handle with which they will be
+** used.
+**
+** The third parameter is the number of arguments that the function or
+** aggregate takes. If this parameter is negative, then the function or
+** aggregate may take any number of arguments.
+**
+** The fourth parameter is one of SQLITE_UTF* values defined below,
+** indicating the encoding that the function is most likely to handle
+** values in. This does not change the behaviour of the programming
+** interface. However, if two versions of the same function are registered
+** with different encoding values, SQLite invokes the version likely to
+** minimize conversions between text encodings.
+**
+** The seventh, eighth and ninth parameters, xFunc, xStep and xFinal, are
+** pointers to user implemented C functions that implement the user
+** function or aggregate. A scalar function requires an implementation of
+** the xFunc callback only, NULL pointers should be passed as the xStep
+** and xFinal parameters. An aggregate function requires an implementation
+** of xStep and xFinal, but NULL should be passed for xFunc. To delete an
+** existing user function or aggregate, pass NULL for all three function
+** callbacks. If an inconsistent set of callback values is specified,
+** such as an xFunc and an xFinal, or an xStep but no xFinal, then
+** SQLITE_ERROR is returned.
+*/
+int sqlite3_create_function(
+ sqlite3 *,
+ const char *zFunctionName,
+ int nArg,
+ int eTextRep,
+ void*,
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
+ void (*xStep)(sqlite3_context*,int,sqlite3_value**),
+ void (*xFinal)(sqlite3_context*)
+);
+int sqlite3_create_function16(
+ sqlite3*,
+ const void *zFunctionName,
+ int nArg,
+ int eTextRep,
+ void*,
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
+ void (*xStep)(sqlite3_context*,int,sqlite3_value**),
+ void (*xFinal)(sqlite3_context*)
+);
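+
+/*
+** Illustrative sketch (not part of the original header): register a scalar
+** SQL function half(X) that returns X divided by two.
+*/
+#if 0
+static void half_func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+  sqlite3_result_double(ctx, 0.5*sqlite3_value_double(argv[0]));
+}
+sqlite3_create_function(db, "half", 1, SQLITE_UTF8, 0, half_func, 0, 0);
+#endif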
+
+/*
+** This function is deprecated. Do not use it. It continues to exist
+** so as not to break legacy code. But new code should avoid using it.
+*/
+int sqlite3_aggregate_count(sqlite3_context*);
+
+/*
+** The next group of routines returns information about parameters to
+** a user-defined function. Function implementations use these routines
+** to access their parameters. These routines are the same as the
+** sqlite3_column_* routines except that these routines take a single
+** sqlite3_value* pointer instead of an sqlite3_stmt* and an integer
+** column number.
+*/
+const void *sqlite3_value_blob(sqlite3_value*);
+int sqlite3_value_bytes(sqlite3_value*);
+int sqlite3_value_bytes16(sqlite3_value*);
+double sqlite3_value_double(sqlite3_value*);
+int sqlite3_value_int(sqlite3_value*);
+sqlite_int64 sqlite3_value_int64(sqlite3_value*);
+const unsigned char *sqlite3_value_text(sqlite3_value*);
+const void *sqlite3_value_text16(sqlite3_value*);
+const void *sqlite3_value_text16le(sqlite3_value*);
+const void *sqlite3_value_text16be(sqlite3_value*);
+int sqlite3_value_type(sqlite3_value*);
+int sqlite3_value_numeric_type(sqlite3_value*);
+
+/*
+** Aggregate functions use the following routine to allocate
+** a structure for storing their state. The first time this routine
+** is called for a particular aggregate, a new structure of size nBytes
+** is allocated, zeroed, and returned. On subsequent calls (for the
+** same aggregate instance) the same buffer is returned. The implementation
+** of the aggregate can use the returned buffer to accumulate data.
+**
+** The buffer allocated is freed automatically by SQLite.
+*/
+void *sqlite3_aggregate_context(sqlite3_context*, int nBytes);
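+
+/*
+** Illustrative sketch (not part of the original header): an average
+** aggregate that accumulates its state in sqlite3_aggregate_context().
+*/
+#if 0
+typedef struct AvgCtx { double sum; int cnt; } AvgCtx;
+static void avg_step(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+  AvgCtx *p = (AvgCtx*)sqlite3_aggregate_context(ctx, sizeof(*p));
+  if( p ){
+    p->sum += sqlite3_value_double(argv[0]);
+    p->cnt++;
+  }
+}
+static void avg_final(sqlite3_context *ctx){
+  AvgCtx *p = (AvgCtx*)sqlite3_aggregate_context(ctx, sizeof(*p));
+  if( p && p->cnt>0 ){
+    sqlite3_result_double(ctx, p->sum/p->cnt);
+  }else{
+    sqlite3_result_null(ctx);
+  }
+}
+sqlite3_create_function(db, "my_avg", 1, SQLITE_UTF8, 0, 0, avg_step, avg_final);
+#endif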
+
+/*
+** The pUserData parameter to the sqlite3_create_function()
+** routine used to register user functions is available to
+** the implementation of the function using this call.
+*/
+void *sqlite3_user_data(sqlite3_context*);
+
+/*
+** The following two functions may be used by scalar user functions to
+** associate meta-data with argument values. If the same value is passed to
+** multiple invocations of the user-function during query execution, under
+** some circumstances the associated meta-data may be preserved. This may
+** be used, for example, to add a regular-expression matching scalar
+** function. The compiled version of the regular expression is stored as
+** meta-data associated with the SQL value passed as the regular expression
+** pattern.
+**
+** Calling sqlite3_get_auxdata() returns a pointer to the meta data
+** associated with the Nth argument value to the current user function
+** call, where N is the second parameter. If no meta-data has been set for
+** that value, then a NULL pointer is returned.
+**
+** The sqlite3_set_auxdata() is used to associate meta data with a user
+** function argument. The third parameter is a pointer to the meta data
+** to be associated with the Nth user function argument value. The fourth
+** parameter specifies a 'delete function' that will be called on the meta
+** data pointer to release it when it is no longer required. If the delete
+** function pointer is NULL, it is not invoked.
+**
+** In practice, meta-data is preserved between function calls for
+** expressions that are constant at compile time. This includes literal
+** values and SQL variables.
+*/
+void *sqlite3_get_auxdata(sqlite3_context*, int);
+void sqlite3_set_auxdata(sqlite3_context*, int, void*, void (*)(void*));
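+
+/*
+** Illustrative sketch (not part of the original header): cache a compiled
+** pattern across calls to a matching function. The compile, match and free
+** helpers named here are hypothetical.
+*/
+#if 0
+static void match_func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+  void *pPat = sqlite3_get_auxdata(ctx, 0);
+  if( pPat==0 ){
+    pPat = my_compile(sqlite3_value_text(argv[0]));       /* hypothetical */
+    sqlite3_set_auxdata(ctx, 0, pPat, my_free_pattern);   /* hypothetical */
+  }
+  sqlite3_result_int(ctx, my_match(pPat, sqlite3_value_text(argv[1])));
+}
+#endif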
+
+
+/*
+** These are special values for the destructor that is passed in as the
+** final argument to routines like sqlite3_result_blob(). If the destructor
+** argument is SQLITE_STATIC, it means that the content pointer is constant
+** and will never change. It does not need to be destroyed. The
+** SQLITE_TRANSIENT value means that the content will likely change in
+** the near future and that SQLite should make its own private copy of
+** the content before returning.
+*/
+#define SQLITE_STATIC ((void(*)(void *))0)
+#define SQLITE_TRANSIENT ((void(*)(void *))-1)
+
+/*
+** User-defined functions invoke the following routines in order to
+** set their return value.
+*/
+void sqlite3_result_blob(sqlite3_context*, const void*, int, void(*)(void*));
+void sqlite3_result_double(sqlite3_context*, double);
+void sqlite3_result_error(sqlite3_context*, const char*, int);
+void sqlite3_result_error16(sqlite3_context*, const void*, int);
+void sqlite3_result_int(sqlite3_context*, int);
+void sqlite3_result_int64(sqlite3_context*, sqlite_int64);
+void sqlite3_result_null(sqlite3_context*);
+void sqlite3_result_text(sqlite3_context*, const char*, int, void(*)(void*));
+void sqlite3_result_text16(sqlite3_context*, const void*, int, void(*)(void*));
+void sqlite3_result_text16le(sqlite3_context*, const void*, int,void(*)(void*));
+void sqlite3_result_text16be(sqlite3_context*, const void*, int,void(*)(void*));
+void sqlite3_result_value(sqlite3_context*, sqlite3_value*);
+
+/*
+** These are the allowed values for the eTextRep argument to
+** sqlite3_create_collation and sqlite3_create_function.
+*/
+#define SQLITE_UTF8 1
+#define SQLITE_UTF16LE 2
+#define SQLITE_UTF16BE 3
+#define SQLITE_UTF16 4 /* Use native byte order */
+#define SQLITE_ANY 5 /* sqlite3_create_function only */
+#define SQLITE_UTF16_ALIGNED 8 /* sqlite3_create_collation only */
+
+/*
+** These two functions are used to add new collation sequences to the
+** sqlite3 handle specified as the first argument.
+**
+** The name of the new collation sequence is specified as a UTF-8 string
+** for sqlite3_create_collation() and a UTF-16 string for
+** sqlite3_create_collation16(). In both cases the name is passed as the
+** second function argument.
+**
+** The third argument must be one of the constants SQLITE_UTF8,
+** SQLITE_UTF16LE or SQLITE_UTF16BE, indicating that the user-supplied
+** routine expects to be passed pointers to strings encoded using UTF-8,
+** UTF-16 little-endian or UTF-16 big-endian respectively.
+**
+** A pointer to the user supplied routine must be passed as the fifth
+** argument. If it is NULL, this is the same as deleting the collation
+** sequence (so that SQLite cannot call it anymore). Each time the user
+** supplied function is invoked, it is passed a copy of the void* passed as
+** the fourth argument to sqlite3_create_collation() or
+** sqlite3_create_collation16() as its first parameter.
+**
+** The remaining arguments to the user-supplied routine are two strings,
+** each represented by a [length, data] pair and encoded in the encoding
+** that was passed as the third argument when the collation sequence was
+** registered. The user routine should return negative, zero or positive if
+** the first string is less than, equal to, or greater than the second
+** string. i.e. (STRING1 - STRING2).
+*/
+int sqlite3_create_collation(
+ sqlite3*,
+ const char *zName,
+ int eTextRep,
+ void*,
+ int(*xCompare)(void*,int,const void*,int,const void*)
+);
+int sqlite3_create_collation16(
+ sqlite3*,
+ const char *zName,
+ int eTextRep,
+ void*,
+ int(*xCompare)(void*,int,const void*,int,const void*)
+);
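+
+/*
+** Illustrative sketch (not part of the original header): a collation that
+** sorts in the reverse of memcmp() order (assumes <string.h>).
+*/
+#if 0
+static int rev_collate(void *pNotUsed, int n1, const void *p1,
+                                       int n2, const void *p2){
+  int n = n1<n2 ? n1 : n2;
+  int c = memcmp(p1, p2, n);
+  if( c==0 ) c = n1 - n2;
+  return -c;                /* negate the result to reverse the ordering */
+}
+sqlite3_create_collation(db, "REVERSE", SQLITE_UTF8, 0, rev_collate);
+#endif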
+
+/*
+** To avoid having to register all collation sequences before a database
+** can be used, a single callback function may be registered with the
+** database handle to be called whenever an undefined collation sequence is
+** required.
+**
+** If the function is registered using the sqlite3_collation_needed() API,
+** then it is passed the names of undefined collation sequences as strings
+** encoded in UTF-8. If sqlite3_collation_needed16() is used, the names
+** are passed as UTF-16 in machine native byte order. A call to either
+** function replaces any existing callback.
+**
+** When the user-function is invoked, the first argument passed is a copy
+** of the second argument to sqlite3_collation_needed() or
+** sqlite3_collation_needed16(). The second argument is the database
+** handle. The third argument is one of SQLITE_UTF8, SQLITE_UTF16BE or
+** SQLITE_UTF16LE, indicating the most desirable form of the collation
+** sequence function required. The fourth parameter is the name of the
+** required collation sequence.
+**
+** The collation sequence is returned to SQLite by a collation-needed
+** callback using the sqlite3_create_collation() or
+** sqlite3_create_collation16() APIs, described above.
+*/
+int sqlite3_collation_needed(
+ sqlite3*,
+ void*,
+ void(*)(void*,sqlite3*,int eTextRep,const char*)
+);
+int sqlite3_collation_needed16(
+ sqlite3*,
+ void*,
+ void(*)(void*,sqlite3*,int eTextRep,const void*)
+);
+
+/*
+** Specify the key for an encrypted database. This routine should be
+** called right after sqlite3_open().
+**
+** The code to implement this API is not available in the public release
+** of SQLite.
+*/
+int sqlite3_key(
+ sqlite3 *db, /* Database to be rekeyed */
+ const void *pKey, int nKey /* The key */
+);
+
+/*
+** Change the key on an open database. If the current database is not
+** encrypted, this routine will encrypt it. If pNew==0 or nNew==0, the
+** database is decrypted.
+**
+** The code to implement this API is not available in the public release
+** of SQLite.
+*/
+int sqlite3_rekey(
+ sqlite3 *db, /* Database to be rekeyed */
+ const void *pKey, int nKey /* The new key */
+);
+
+/*
+** Sleep for a little while. The second parameter is the number of
+** milliseconds to sleep for.
+**
+** If the operating system does not support sleep requests with
+** millisecond time resolution, then the time will be rounded up to
+** the nearest second. The number of milliseconds of sleep actually
+** requested from the operating system is returned.
+*/
+int sqlite3_sleep(int);
+
+/*
+** Return TRUE (non-zero) if the statement supplied as an argument needs
+** to be recompiled. A statement needs to be recompiled whenever the
+** execution environment changes in a way that would alter the program
+** that sqlite3_prepare() generates. For example, if new functions or
+** collating sequences are registered or if an authorizer function is
+** added or changed.
+**
+*/
+int sqlite3_expired(sqlite3_stmt*);
+
+/*
+** Move all bindings from the first prepared statement over to the second.
+** This routine is useful, for example, if the first prepared statement
+** fails with an SQLITE_SCHEMA error. The same SQL can be prepared into
+** the second prepared statement, then all of the bindings transferred over
+** to the second statement before the first statement is finalized.
+*/
+int sqlite3_transfer_bindings(sqlite3_stmt*, sqlite3_stmt*);
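+
+/*
+** Illustrative sketch (not part of the original header): recover from an
+** SQLITE_SCHEMA failure by re-preparing the same SQL text and carrying the
+** existing bindings over to the new statement. zSql and pOld are assumed
+** to already hold the SQL text and the failed statement.
+*/
+#if 0
+sqlite3_stmt *pNew = 0;
+if( sqlite3_prepare(db, zSql, -1, &pNew, 0)==SQLITE_OK ){
+  sqlite3_transfer_bindings(pOld, pNew);
+  sqlite3_finalize(pOld);
+  pOld = pNew;
+}
+#endif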
+
+/*
+** If the following global variable is made to point to a
+** string which is the name of a directory, then all temporary files
+** created by SQLite will be placed in that directory. If this variable
+** is a NULL pointer, then SQLite does a search for an appropriate temporary
+** file directory.
+**
+** Once sqlite3_open() has been called, changing this variable will invalidate
+** the current temporary database, if any.
+*/
+extern char *sqlite3_temp_directory;
+
+/*
+** This function is called to recover from a malloc() failure that occurred
+** within the SQLite library. Normally, after a single malloc() fails the
+** library refuses to function (all major calls return SQLITE_NOMEM).
+** This function restores the library state so that it can be used again.
+**
+** All existing statements (sqlite3_stmt pointers) must be finalized or
+** reset before this call is made. Otherwise, SQLITE_BUSY is returned.
+** If any in-memory databases are in use, either as a main or TEMP
+** database, SQLITE_ERROR is returned. In either of these cases, the
+** library is not reset and remains unusable.
+**
+** This function is *not* threadsafe. Calling this from within a threaded
+** application when threads other than the caller have used SQLite is
+** dangerous and will almost certainly result in malfunctions.
+**
+** This functionality can be omitted from a build by defining the
+** SQLITE_OMIT_GLOBALRECOVER macro at compile time.
+*/
+int sqlite3_global_recover(void);
+
+/*
+** Test to see whether or not the database connection is in autocommit
+** mode. Return TRUE if it is and FALSE if not. Autocommit mode is on
+** by default. Autocommit is disabled by a BEGIN statement and reenabled
+** by the next COMMIT or ROLLBACK.
+*/
+int sqlite3_get_autocommit(sqlite3*);
+
+/*
+** Return the sqlite3* database handle to which the prepared statement given
+** in the argument belongs. This is the same database handle that was
+** the first argument to the sqlite3_prepare() that was used to create
+** the statement in the first place.
+*/
+sqlite3 *sqlite3_db_handle(sqlite3_stmt*);
+
+/*
+** Register a callback function with the database connection identified by the
+** first argument to be invoked whenever a row is updated, inserted or deleted.
+** Any callback set by a previous call to this function for the same
+** database connection is overridden.
+**
+** The second argument is a pointer to the function to invoke when a
+** row is updated, inserted or deleted. The first argument to the callback is
+** a copy of the third argument to sqlite3_update_hook. The second callback
+** argument is one of SQLITE_INSERT, SQLITE_DELETE or SQLITE_UPDATE, depending
+** on the operation that caused the callback to be invoked. The third and
+** fourth arguments to the callback contain pointers to the database and
+** table name containing the affected row. The final callback parameter is
+** the rowid of the row. In the case of an update, this is the rowid after
+** the update takes place.
+**
+** The update hook is not invoked when internal system tables are
+** modified (i.e. sqlite_master and sqlite_sequence).
+**
+** If another function was previously registered, its pArg value is returned.
+** Otherwise NULL is returned.
+*/
+void *sqlite3_update_hook(
+ sqlite3*,
+ void(*)(void *,int ,char const *,char const *,sqlite_int64),
+ void*
+);
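+
+/*
+** Illustrative sketch (not part of the original header): an update hook that
+** logs every row change to stderr.
+*/
+#if 0
+static void log_change(void *pArg, int op, const char *zDb,
+                       const char *zTbl, sqlite_int64 rowid){
+  const char *zOp = op==SQLITE_INSERT ? "INSERT" :
+                    op==SQLITE_DELETE ? "DELETE" : "UPDATE";
+  fprintf(stderr, "%s %s.%s rowid=%lld\n", zOp, zDb, zTbl, (long long)rowid);
+}
+sqlite3_update_hook(db, log_change, 0);
+#endif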
+
+/*
+** Register a callback to be invoked whenever a transaction is rolled
+** back.
+**
+** The new callback function overrides any existing rollback-hook
+** callback. If there was an existing callback, then its pArg value
+** (the third argument to sqlite3_rollback_hook() when it was registered)
+** is returned. Otherwise, NULL is returned.
+**
+** For the purposes of this API, a transaction is said to have been
+** rolled back if an explicit "ROLLBACK" statement is executed, or
+** an error or constraint causes an implicit rollback to occur. The
+** callback is not invoked if a transaction is automatically rolled
+** back because the database connection is closed.
+*/
+void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*);
+
+/*
+** This function is only available if the library is compiled without
+** the SQLITE_OMIT_SHARED_CACHE macro defined. It is used to enable or
+** disable (if the argument is true or false, respectively) the
+** "shared pager" feature.
+*/
+int sqlite3_enable_shared_cache(int);
+
+/*
+** Attempt to free N bytes of heap memory by deallocating non-essential
+** memory allocations held by the database library (example: memory
+** used to cache database pages to improve performance).
+**
+** This function is not a part of standard builds. It is only created
+** if SQLite is compiled with the SQLITE_ENABLE_MEMORY_MANAGEMENT macro.
+*/
+int sqlite3_release_memory(int);
+
+/*
+** Place a "soft" limit on the amount of heap memory that may be allocated by
+** SQLite within the current thread. If an internal allocation is requested
+** that would exceed the specified limit, sqlite3_release_memory() is invoked
+** one or more times to free up some space before the allocation is made.
+**
+** The limit is called "soft", because if sqlite3_release_memory() cannot free
+** sufficient memory to prevent the limit from being exceeded, the memory is
+** allocated anyway and the current operation proceeds.
+**
+** This function is only available if the library was compiled with the
+** SQLITE_ENABLE_MEMORY_MANAGEMENT option set.
+*/
+void sqlite3_soft_heap_limit(int);
+
+/*
+** This routine makes sure that all thread-local storage has been
+** deallocated for the current thread.
+**
+** This routine is not technically necessary. All thread-local storage
+** will be automatically deallocated once memory-management and
+** shared-cache are disabled and the soft heap limit has been set
+** to zero. This routine is provided as a convenience for users who
+** want to make absolutely sure they have not forgotten something
+** prior to killing off a thread.
+*/
+void sqlite3_thread_cleanup(void);
+
+/*
+** Return meta information about a specific column of a specific database
+** table accessible using the connection handle passed as the first function
+** argument.
+**
+** The column is identified by the second, third and fourth parameters to
+** this function. The second parameter is either the name of the database
+** (i.e. "main", "temp" or an attached database) containing the specified
+** table or NULL. If it is NULL, then all attached databases are searched
+** for the table using the same algorithm as the database engine uses to
+** resolve unqualified table references.
+**
+** The third and fourth parameters to this function are the table and column
+** name of the desired column, respectively. Neither of these parameters
+** may be NULL.
+**
+** Meta information is returned by writing to the memory locations passed as
+** the 5th and subsequent parameters to this function. Any of these
+** arguments may be NULL, in which case the corresponding element of meta
+** information is omitted.
+**
+** Parameter Output Type Description
+** -----------------------------------
+**
+** 5th const char* Data type
+** 6th const char* Name of the default collation sequence
+** 7th int True if the column has a NOT NULL constraint
+** 8th int True if the column is part of the PRIMARY KEY
+** 9th int True if the column is AUTOINCREMENT
+**
+**
+** The memory pointed to by the character pointers returned for the
+** declaration type and collation sequence is valid only until the next
+** call to any sqlite API function.
+**
+** If the specified table is actually a view, then an error is returned.
+**
+** If the specified column is "rowid", "oid" or "_rowid_" and an
+** INTEGER PRIMARY KEY column has been explicitly declared, then the output
+** parameters are set for the explicitly declared column. If there is no
+** explicitly declared IPK column, then the output parameters are set as
+** follows:
+**
+** data type: "INTEGER"
+** collation sequence: "BINARY"
+** not null: 0
+** primary key: 1
+** auto increment: 0
+**
+** This function may load one or more schemas from database files. If an
+** error occurs during this process, or if the requested table or column
+** cannot be found, an SQLITE error code is returned and an error message
+** left in the database handle (to be retrieved using sqlite3_errmsg()).
+**
+** This API is only available if the library was compiled with the
+** SQLITE_ENABLE_COLUMN_METADATA preprocessor symbol defined.
+*/
+int sqlite3_table_column_metadata(
+ sqlite3 *db, /* Connection handle */
+ const char *zDbName, /* Database name or NULL */
+ const char *zTableName, /* Table name */
+ const char *zColumnName, /* Column name */
+ char const **pzDataType, /* OUTPUT: Declared data type */
+ char const **pzCollSeq, /* OUTPUT: Collation sequence name */
+ int *pNotNull, /* OUTPUT: True if NOT NULL constraint exists */
+ int *pPrimaryKey, /* OUTPUT: True if column part of PK */
+ int *pAutoinc /* OUTPUT: True if column is auto-increment */
+);
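+
+/*
+** Illustrative sketch (not part of the original header): query the metadata
+** of column "c1" in table "t1" (placeholder names), searching all attached
+** databases.
+*/
+#if 0
+const char *zType = 0, *zColl = 0;
+int notNull = 0, primaryKey = 0, autoInc = 0;
+if( sqlite3_table_column_metadata(db, 0, "t1", "c1",
+        &zType, &zColl, &notNull, &primaryKey, &autoInc)==SQLITE_OK ){
+  printf("c1: type=%s collation=%s pk=%d\n", zType, zColl, primaryKey);
+}
+#endif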
+
+/*
+****** EXPERIMENTAL - subject to change without notice **************
+**
+** Attempt to load an SQLite extension library contained in the file
+** zFile. The entry point is zProc. zProc may be 0 in which case the
+** name of the entry point defaults to "sqlite3_extension_init".
+**
+** Return SQLITE_OK on success and SQLITE_ERROR if something goes wrong.
+**
+** If an error occurs and pzErrMsg is not 0, then fill *pzErrMsg with
+** error message text. The calling function should free this memory
+** by calling sqlite3_free().
+**
+** Extension loading must be enabled using sqlite3_enable_load_extension()
+** prior to calling this API or an error will be returned.
+**
+****** EXPERIMENTAL - subject to change without notice **************
+*/
+int sqlite3_load_extension(
+ sqlite3 *db, /* Load the extension into this database connection */
+ const char *zFile, /* Name of the shared library containing extension */
+ const char *zProc, /* Entry point. Derived from zFile if 0 */
+ char **pzErrMsg /* Put error message here if not 0 */
+);
+
+/*
+** So as not to open security holes in older applications that are
+** unprepared to deal with extension loading, and as a means of disabling
+** extension loading while executing user-entered SQL, the following
+** API is provided to turn the extension loading mechanism on and
+** off. It is off by default. See ticket #1863.
+**
+** Call this routine with onoff==1 to turn extension loading on
+** and call it with onoff==0 to turn it back off again.
+*/
+int sqlite3_enable_load_extension(sqlite3 *db, int onoff);
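+
+/*
+** Illustrative sketch (not part of the original header): temporarily enable
+** extension loading, load a shared library (placeholder file name) using its
+** default entry point, then disable loading again.
+*/
+#if 0
+char *zErr = 0;
+sqlite3_enable_load_extension(db, 1);
+if( sqlite3_load_extension(db, "./myext.so", 0, &zErr)!=SQLITE_OK ){
+  fprintf(stderr, "extension load failed: %s\n", zErr);
+  sqlite3_free(zErr);
+}
+sqlite3_enable_load_extension(db, 0);
+#endif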
+
+/*
+****** EXPERIMENTAL - subject to change without notice **************
+**
+** Register an extension entry point that is automatically invoked
+** whenever a new database connection is opened.
+**
+** This API can be invoked at program startup in order to register
+** one or more statically linked extensions that will be available
+** to all new database connections.
+**
+** Duplicate extensions are detected so calling this routine multiple
+** times with the same extension is harmless.
+**
+** This routine stores a pointer to the extension in an array
+** that is obtained from malloc(). If you run a memory leak
+** checker on your program and it reports a leak because of this
+** array, then invoke sqlite3_reset_auto_extension() prior
+** to shutdown to free the memory.
+**
+** Automatic extensions apply across all threads.
+*/
+int sqlite3_auto_extension(void *xEntryPoint);
+
+
+/*
+****** EXPERIMENTAL - subject to change without notice **************
+**
+** Disable all previously registered automatic extensions. This
+** routine undoes the effect of all prior sqlite3_auto_extension()
+** calls.
+**
+** This call disables automatic extensions in all threads.
+*/
+void sqlite3_reset_auto_extension(void);
+
+
+/*
+****** EXPERIMENTAL - subject to change without notice **************
+**
+** The interface to the virtual-table mechanism is currently considered
+** to be experimental. The interface might change in incompatible ways.
+** If this is a problem for you, do not use the interface at this time.
+**
+** When the virtual-table mechanism stabilizes, we will declare the
+** interface fixed, support it indefinitely, and remove this comment.
+*/
+
+/*
+** Structures used by the virtual table interface
+*/
+typedef struct sqlite3_vtab sqlite3_vtab;
+typedef struct sqlite3_index_info sqlite3_index_info;
+typedef struct sqlite3_vtab_cursor sqlite3_vtab_cursor;
+typedef struct sqlite3_module sqlite3_module;
+
+/*
+** A module is a class of virtual tables. Each module is defined
+** by an instance of the following structure. This structure consists
+** mostly of methods for the module.
+*/
+struct sqlite3_module {
+ int iVersion;
+ int (*xCreate)(sqlite3*, void *pAux,
+ int argc, const char *const*argv,
+ sqlite3_vtab **ppVTab, char**);
+ int (*xConnect)(sqlite3*, void *pAux,
+ int argc, const char *const*argv,
+ sqlite3_vtab **ppVTab, char**);
+ int (*xBestIndex)(sqlite3_vtab *pVTab, sqlite3_index_info*);
+ int (*xDisconnect)(sqlite3_vtab *pVTab);
+ int (*xDestroy)(sqlite3_vtab *pVTab);
+ int (*xOpen)(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor);
+ int (*xClose)(sqlite3_vtab_cursor*);
+ int (*xFilter)(sqlite3_vtab_cursor*, int idxNum, const char *idxStr,
+ int argc, sqlite3_value **argv);
+ int (*xNext)(sqlite3_vtab_cursor*);
+ int (*xEof)(sqlite3_vtab_cursor*);
+ int (*xColumn)(sqlite3_vtab_cursor*, sqlite3_context*, int);
+ int (*xRowid)(sqlite3_vtab_cursor*, sqlite_int64 *pRowid);
+ int (*xUpdate)(sqlite3_vtab *, int, sqlite3_value **, sqlite_int64 *);
+ int (*xBegin)(sqlite3_vtab *pVTab);
+ int (*xSync)(sqlite3_vtab *pVTab);
+ int (*xCommit)(sqlite3_vtab *pVTab);
+ int (*xRollback)(sqlite3_vtab *pVTab);
+ int (*xFindFunction)(sqlite3_vtab *pVtab, int nArg, const char *zName,
+ void (**pxFunc)(sqlite3_context*,int,sqlite3_value**),
+ void **ppArg);
+};
+
+/*
+** The sqlite3_index_info structure and its substructures are used to
+** pass information into and receive the reply from the xBestIndex
+** method of an sqlite3_module. The fields under **Inputs** are the
+** inputs to xBestIndex and are read-only. xBestIndex inserts its
+** results into the **Outputs** fields.
+**
+** The aConstraint[] array records WHERE clause constraints of the
+** form:
+**
+** column OP expr
+**
+** Where OP is =, <, <=, >, or >=. The particular operator is stored
+** in aConstraint[].op. The index of the column is stored in
+** aConstraint[].iColumn. aConstraint[].usable is TRUE if the
+** expr on the right-hand side can be evaluated (and thus the constraint
+** is usable) and false if it cannot.
+**
+** The optimizer automatically inverts terms of the form "expr OP column"
+** and makes other simplifications to the WHERE clause in an attempt to
+** get as many WHERE clause terms into the form shown above as possible.
+** The aConstraint[] array only reports WHERE clause terms in the correct
+** form that refer to the particular virtual table being queried.
+**
+** Information about the ORDER BY clause is stored in aOrderBy[].
+** Each term of aOrderBy records a column of the ORDER BY clause.
+**
+** The xBestIndex method must fill aConstraintUsage[] with information
+** about what parameters to pass to xFilter. If argvIndex>0 then
+** the right-hand side of the corresponding aConstraint[] is evaluated
+** and becomes the argvIndex-th entry in argv. If aConstraintUsage[].omit
+** is true, then the constraint is assumed to be fully handled by the
+** virtual table and is not checked again by SQLite.
+**
+** The idxNum and idxStr values are recorded and passed into xFilter.
+** sqlite3_free() is used to free idxStr if needToFreeIdxStr is true.
+**
+** The orderByConsumed flag means that output from xFilter will occur in
+** the correct order to satisfy the ORDER BY clause so that no separate
+** sorting step is required.
+**
+** The estimatedCost value is an estimate of the cost of doing the
+** particular lookup. A full scan of a table with N entries should have
+** a cost of N. A binary search of a table of N entries should have a
+** cost of approximately log(N).
+*/
+struct sqlite3_index_info {
+ /* Inputs */
+ const int nConstraint; /* Number of entries in aConstraint */
+ const struct sqlite3_index_constraint {
+ int iColumn; /* Column on left-hand side of constraint */
+ unsigned char op; /* Constraint operator */
+ unsigned char usable; /* True if this constraint is usable */
+ int iTermOffset; /* Used internally - xBestIndex should ignore */
+ } *const aConstraint; /* Table of WHERE clause constraints */
+ const int nOrderBy; /* Number of terms in the ORDER BY clause */
+ const struct sqlite3_index_orderby {
+ int iColumn; /* Column number */
+ unsigned char desc; /* True for DESC. False for ASC. */
+ } *const aOrderBy; /* The ORDER BY clause */
+
+ /* Outputs */
+ struct sqlite3_index_constraint_usage {
+ int argvIndex; /* if >0, constraint is part of argv to xFilter */
+ unsigned char omit; /* Do not code a test for this constraint */
+ } *const aConstraintUsage;
+ int idxNum; /* Number used to identify the index */
+ char *idxStr; /* String, possibly obtained from sqlite3_malloc */
+ int needToFreeIdxStr; /* Free idxStr using sqlite3_free() if true */
+ int orderByConsumed; /* True if output is already ordered */
+ double estimatedCost; /* Estimated cost of using this index */
+};
+#define SQLITE_INDEX_CONSTRAINT_EQ 2
+#define SQLITE_INDEX_CONSTRAINT_GT 4
+#define SQLITE_INDEX_CONSTRAINT_LE 8
+#define SQLITE_INDEX_CONSTRAINT_LT 16
+#define SQLITE_INDEX_CONSTRAINT_GE 32
+#define SQLITE_INDEX_CONSTRAINT_MATCH 64
+
+/*
+** This routine is used to register a new module name with an SQLite
+** connection. Module names must be registered before creating new
+** virtual tables on the module, or before using preexisting virtual
+** tables of the module.
+*/
+int sqlite3_create_module(
+ sqlite3 *db, /* SQLite connection to register module with */
+ const char *zName, /* Name of the module */
+ const sqlite3_module *, /* Methods for the module */
+ void * /* Client data for xCreate/xConnect */
+);
+
+/*
+** Every module implementation uses a subclass of the following structure
+** to describe a particular instance of the module. Each subclass will
+** be tailored to the specific needs of the module implementation. The
+** purpose of this superclass is to define certain fields that are common
+** to all module implementations.
+**
+** Virtual tables methods can set an error message by assigning a
+** string obtained from sqlite3_mprintf() to zErrMsg. The method should
+** take care that any prior string is freed by a call to sqlite3_free()
+** prior to assigning a new string to zErrMsg. After the error message
+** is delivered up to the client application, the string will be automatically
+** freed by sqlite3_free() and the zErrMsg field will be zeroed. Note
+** that sqlite3_mprintf() and sqlite3_free() are used on the zErrMsg field
+** since virtual tables are commonly implemented in loadable extensions which
+** do not have access to sqlite3MPrintf() or sqlite3Free().
+*/
+struct sqlite3_vtab {
+ const sqlite3_module *pModule; /* The module for this virtual table */
+ int nRef; /* Used internally */
+ char *zErrMsg; /* Error message from sqlite3_mprintf() */
+ /* Virtual table implementations will typically add additional fields */
+};
+
+/* Every module implementation uses a subclass of the following structure
+** to describe cursors that point into the virtual table and are used
+** to loop through the virtual table. Cursors are created using the
+** xOpen method of the module. Each module implementation will define
+** the content of a cursor structure to suit its own needs.
+**
+** This superclass exists in order to define fields of the cursor that
+** are common to all implementations.
+*/
+struct sqlite3_vtab_cursor {
+ sqlite3_vtab *pVtab; /* Virtual table of this cursor */
+ /* Virtual table implementations will typically add additional fields */
+};
+
+/*
+** The xCreate and xConnect methods of a module use the following API
+** to declare the format (the names and datatypes of the columns) of
+** the virtual tables they implement.
+*/
+int sqlite3_declare_vtab(sqlite3*, const char *zCreateTable);
+
+/*
+** Virtual tables can provide alternative implementations of functions
+** using the xFindFunction method. But global versions of those functions
+** must exist in order to be overloaded.
+**
+** This API makes sure a global version of a function with a particular
+** name and number of parameters exists. If no such function exists
+** before this API is called, a new function is created. The implementation
+** of the new function always causes an exception to be thrown. So
+** the new function is not good for anything by itself. Its only
+** purpose is to be a place-holder function that can be overloaded
+** by virtual tables.
+**
+** This API should be considered part of the virtual table interface,
+** which is experimental and subject to change.
+*/
+int sqlite3_overload_function(sqlite3*, const char *zFuncName, int nArg);
+
+/*
+** The interface to the virtual-table mechanism defined above (back up
+** to a comment remarkably similar to this one) is currently considered
+** to be experimental. The interface might change in incompatible ways.
+** If this is a problem for you, do not use the interface at this time.
+**
+** When the virtual-table mechanism stabilizes, we will declare the
+** interface fixed, support it indefinitely, and remove this comment.
+**
+****** EXPERIMENTAL - subject to change without notice **************
+*/
+
+/*
+** Undo the hack that converts floating point types to integer for
+** builds on processors without floating point support.
+*/
+#ifdef SQLITE_OMIT_FLOATING_POINT
+# undef double
+#endif
+
+#ifdef __cplusplus
+} /* End of the 'extern "C"' block */
+#endif
+#endif
Added: freeswitch/trunk/libs/sqlite/src/sqlite3ext.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/sqlite3ext.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,282 @@
+/*
+** 2006 June 7
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This header file defines the SQLite interface for use by
+** shared libraries that want to be imported as extensions into
+** an SQLite instance. Shared libraries that intend to be loaded
+** as extensions by SQLite should #include this file instead of
+** sqlite3.h.
+**
+** @(#) $Id: sqlite3ext.h,v 1.7 2006/09/22 23:38:21 shess Exp $
+*/
+#ifndef _SQLITE3EXT_H_
+#define _SQLITE3EXT_H_
+#include "sqlite3.h"
+
+typedef struct sqlite3_api_routines sqlite3_api_routines;
+
+/*
+** The following structure holds pointers to all of the SQLite API
+** routines.
+*/
+struct sqlite3_api_routines {
+ void * (*aggregate_context)(sqlite3_context*,int nBytes);
+ int (*aggregate_count)(sqlite3_context*);
+ int (*bind_blob)(sqlite3_stmt*,int,const void*,int n,void(*)(void*));
+ int (*bind_double)(sqlite3_stmt*,int,double);
+ int (*bind_int)(sqlite3_stmt*,int,int);
+ int (*bind_int64)(sqlite3_stmt*,int,sqlite_int64);
+ int (*bind_null)(sqlite3_stmt*,int);
+ int (*bind_parameter_count)(sqlite3_stmt*);
+ int (*bind_parameter_index)(sqlite3_stmt*,const char*zName);
+ const char * (*bind_parameter_name)(sqlite3_stmt*,int);
+ int (*bind_text)(sqlite3_stmt*,int,const char*,int n,void(*)(void*));
+ int (*bind_text16)(sqlite3_stmt*,int,const void*,int,void(*)(void*));
+ int (*bind_value)(sqlite3_stmt*,int,const sqlite3_value*);
+ int (*busy_handler)(sqlite3*,int(*)(void*,int),void*);
+ int (*busy_timeout)(sqlite3*,int ms);
+ int (*changes)(sqlite3*);
+ int (*close)(sqlite3*);
+ int (*collation_needed)(sqlite3*,void*,void(*)(void*,sqlite3*,int eTextRep,const char*));
+ int (*collation_needed16)(sqlite3*,void*,void(*)(void*,sqlite3*,int eTextRep,const void*));
+ const void * (*column_blob)(sqlite3_stmt*,int iCol);
+ int (*column_bytes)(sqlite3_stmt*,int iCol);
+ int (*column_bytes16)(sqlite3_stmt*,int iCol);
+ int (*column_count)(sqlite3_stmt*pStmt);
+ const char * (*column_database_name)(sqlite3_stmt*,int);
+ const void * (*column_database_name16)(sqlite3_stmt*,int);
+ const char * (*column_decltype)(sqlite3_stmt*,int i);
+ const void * (*column_decltype16)(sqlite3_stmt*,int);
+ double (*column_double)(sqlite3_stmt*,int iCol);
+ int (*column_int)(sqlite3_stmt*,int iCol);
+ sqlite_int64 (*column_int64)(sqlite3_stmt*,int iCol);
+ const char * (*column_name)(sqlite3_stmt*,int);
+ const void * (*column_name16)(sqlite3_stmt*,int);
+ const char * (*column_origin_name)(sqlite3_stmt*,int);
+ const void * (*column_origin_name16)(sqlite3_stmt*,int);
+ const char * (*column_table_name)(sqlite3_stmt*,int);
+ const void * (*column_table_name16)(sqlite3_stmt*,int);
+ const unsigned char * (*column_text)(sqlite3_stmt*,int iCol);
+ const void * (*column_text16)(sqlite3_stmt*,int iCol);
+ int (*column_type)(sqlite3_stmt*,int iCol);
+ sqlite3_value* (*column_value)(sqlite3_stmt*,int iCol);
+ void * (*commit_hook)(sqlite3*,int(*)(void*),void*);
+ int (*complete)(const char*sql);
+ int (*complete16)(const void*sql);
+ int (*create_collation)(sqlite3*,const char*,int,void*,int(*)(void*,int,const void*,int,const void*));
+ int (*create_collation16)(sqlite3*,const char*,int,void*,int(*)(void*,int,const void*,int,const void*));
+ int (*create_function)(sqlite3*,const char*,int,int,void*,void (*xFunc)(sqlite3_context*,int,sqlite3_value**),void (*xStep)(sqlite3_context*,int,sqlite3_value**),void (*xFinal)(sqlite3_context*));
+ int (*create_function16)(sqlite3*,const void*,int,int,void*,void (*xFunc)(sqlite3_context*,int,sqlite3_value**),void (*xStep)(sqlite3_context*,int,sqlite3_value**),void (*xFinal)(sqlite3_context*));
+ int (*create_module)(sqlite3*,const char*,const sqlite3_module*,void*);
+ int (*data_count)(sqlite3_stmt*pStmt);
+ sqlite3 * (*db_handle)(sqlite3_stmt*);
+ int (*declare_vtab)(sqlite3*,const char*);
+ int (*enable_shared_cache)(int);
+ int (*errcode)(sqlite3*db);
+ const char * (*errmsg)(sqlite3*);
+ const void * (*errmsg16)(sqlite3*);
+ int (*exec)(sqlite3*,const char*,sqlite3_callback,void*,char**);
+ int (*expired)(sqlite3_stmt*);
+ int (*finalize)(sqlite3_stmt*pStmt);
+ void (*free)(void*);
+ void (*free_table)(char**result);
+ int (*get_autocommit)(sqlite3*);
+ void * (*get_auxdata)(sqlite3_context*,int);
+ int (*get_table)(sqlite3*,const char*,char***,int*,int*,char**);
+ int (*global_recover)(void);
+ void (*interrupt)(sqlite3*);
+ sqlite_int64 (*last_insert_rowid)(sqlite3*);
+ const char * (*libversion)(void);
+ int (*libversion_number)(void);
+ void *(*malloc)(int);
+ char * (*mprintf)(const char*,...);
+ int (*open)(const char*,sqlite3**);
+ int (*open16)(const void*,sqlite3**);
+ int (*prepare)(sqlite3*,const char*,int,sqlite3_stmt**,const char**);
+ int (*prepare16)(sqlite3*,const void*,int,sqlite3_stmt**,const void**);
+ void * (*profile)(sqlite3*,void(*)(void*,const char*,sqlite_uint64),void*);
+ void (*progress_handler)(sqlite3*,int,int(*)(void*),void*);
+ void *(*realloc)(void*,int);
+ int (*reset)(sqlite3_stmt*pStmt);
+ void (*result_blob)(sqlite3_context*,const void*,int,void(*)(void*));
+ void (*result_double)(sqlite3_context*,double);
+ void (*result_error)(sqlite3_context*,const char*,int);
+ void (*result_error16)(sqlite3_context*,const void*,int);
+ void (*result_int)(sqlite3_context*,int);
+ void (*result_int64)(sqlite3_context*,sqlite_int64);
+ void (*result_null)(sqlite3_context*);
+ void (*result_text)(sqlite3_context*,const char*,int,void(*)(void*));
+ void (*result_text16)(sqlite3_context*,const void*,int,void(*)(void*));
+ void (*result_text16be)(sqlite3_context*,const void*,int,void(*)(void*));
+ void (*result_text16le)(sqlite3_context*,const void*,int,void(*)(void*));
+ void (*result_value)(sqlite3_context*,sqlite3_value*);
+ void * (*rollback_hook)(sqlite3*,void(*)(void*),void*);
+ int (*set_authorizer)(sqlite3*,int(*)(void*,int,const char*,const char*,const char*,const char*),void*);
+ void (*set_auxdata)(sqlite3_context*,int,void*,void (*)(void*));
+ char * (*snprintf)(int,char*,const char*,...);
+ int (*step)(sqlite3_stmt*);
+ int (*table_column_metadata)(sqlite3*,const char*,const char*,const char*,char const**,char const**,int*,int*,int*);
+ void (*thread_cleanup)(void);
+ int (*total_changes)(sqlite3*);
+ void * (*trace)(sqlite3*,void(*xTrace)(void*,const char*),void*);
+ int (*transfer_bindings)(sqlite3_stmt*,sqlite3_stmt*);
+ void * (*update_hook)(sqlite3*,void(*)(void*,int ,char const*,char const*,sqlite_int64),void*);
+ void * (*user_data)(sqlite3_context*);
+ const void * (*value_blob)(sqlite3_value*);
+ int (*value_bytes)(sqlite3_value*);
+ int (*value_bytes16)(sqlite3_value*);
+ double (*value_double)(sqlite3_value*);
+ int (*value_int)(sqlite3_value*);
+ sqlite_int64 (*value_int64)(sqlite3_value*);
+ int (*value_numeric_type)(sqlite3_value*);
+ const unsigned char * (*value_text)(sqlite3_value*);
+ const void * (*value_text16)(sqlite3_value*);
+ const void * (*value_text16be)(sqlite3_value*);
+ const void * (*value_text16le)(sqlite3_value*);
+ int (*value_type)(sqlite3_value*);
+ char * (*vmprintf)(const char*,va_list);
+ int (*overload_function)(sqlite3*, const char *zFuncName, int nArg);
+};
+
+/*
+** The following macros redefine the API routines so that they are
+** redirected through the global sqlite3_api structure.
+**
+** This header file is also used by the loadext.c source file
+** (part of the main SQLite library - not an extension) so that
+** it can get access to the sqlite3_api_routines structure
+** definition. But the main library does not want to redefine
+** the API. So the redefinition macros are only valid if the
+** SQLITE_CORE macro is undefined.
+*/
+#ifndef SQLITE_CORE
+#define sqlite3_aggregate_context sqlite3_api->aggregate_context
+#define sqlite3_aggregate_count sqlite3_api->aggregate_count
+#define sqlite3_bind_blob sqlite3_api->bind_blob
+#define sqlite3_bind_double sqlite3_api->bind_double
+#define sqlite3_bind_int sqlite3_api->bind_int
+#define sqlite3_bind_int64 sqlite3_api->bind_int64
+#define sqlite3_bind_null sqlite3_api->bind_null
+#define sqlite3_bind_parameter_count sqlite3_api->bind_parameter_count
+#define sqlite3_bind_parameter_index sqlite3_api->bind_parameter_index
+#define sqlite3_bind_parameter_name sqlite3_api->bind_parameter_name
+#define sqlite3_bind_text sqlite3_api->bind_text
+#define sqlite3_bind_text16 sqlite3_api->bind_text16
+#define sqlite3_bind_value sqlite3_api->bind_value
+#define sqlite3_busy_handler sqlite3_api->busy_handler
+#define sqlite3_busy_timeout sqlite3_api->busy_timeout
+#define sqlite3_changes sqlite3_api->changes
+#define sqlite3_close sqlite3_api->close
+#define sqlite3_collation_needed sqlite3_api->collation_needed
+#define sqlite3_collation_needed16 sqlite3_api->collation_needed16
+#define sqlite3_column_blob sqlite3_api->column_blob
+#define sqlite3_column_bytes sqlite3_api->column_bytes
+#define sqlite3_column_bytes16 sqlite3_api->column_bytes16
+#define sqlite3_column_count sqlite3_api->column_count
+#define sqlite3_column_database_name sqlite3_api->column_database_name
+#define sqlite3_column_database_name16 sqlite3_api->column_database_name16
+#define sqlite3_column_decltype sqlite3_api->column_decltype
+#define sqlite3_column_decltype16 sqlite3_api->column_decltype16
+#define sqlite3_column_double sqlite3_api->column_double
+#define sqlite3_column_int sqlite3_api->column_int
+#define sqlite3_column_int64 sqlite3_api->column_int64
+#define sqlite3_column_name sqlite3_api->column_name
+#define sqlite3_column_name16 sqlite3_api->column_name16
+#define sqlite3_column_origin_name sqlite3_api->column_origin_name
+#define sqlite3_column_origin_name16 sqlite3_api->column_origin_name16
+#define sqlite3_column_table_name sqlite3_api->column_table_name
+#define sqlite3_column_table_name16 sqlite3_api->column_table_name16
+#define sqlite3_column_text sqlite3_api->column_text
+#define sqlite3_column_text16 sqlite3_api->column_text16
+#define sqlite3_column_type sqlite3_api->column_type
+#define sqlite3_column_value sqlite3_api->column_value
+#define sqlite3_commit_hook sqlite3_api->commit_hook
+#define sqlite3_complete sqlite3_api->complete
+#define sqlite3_complete16 sqlite3_api->complete16
+#define sqlite3_create_collation sqlite3_api->create_collation
+#define sqlite3_create_collation16 sqlite3_api->create_collation16
+#define sqlite3_create_function sqlite3_api->create_function
+#define sqlite3_create_function16 sqlite3_api->create_function16
+#define sqlite3_create_module sqlite3_api->create_module
+#define sqlite3_data_count sqlite3_api->data_count
+#define sqlite3_db_handle sqlite3_api->db_handle
+#define sqlite3_declare_vtab sqlite3_api->declare_vtab
+#define sqlite3_enable_shared_cache sqlite3_api->enable_shared_cache
+#define sqlite3_errcode sqlite3_api->errcode
+#define sqlite3_errmsg sqlite3_api->errmsg
+#define sqlite3_errmsg16 sqlite3_api->errmsg16
+#define sqlite3_exec sqlite3_api->exec
+#define sqlite3_expired sqlite3_api->expired
+#define sqlite3_finalize sqlite3_api->finalize
+#define sqlite3_free sqlite3_api->free
+#define sqlite3_free_table sqlite3_api->free_table
+#define sqlite3_get_autocommit sqlite3_api->get_autocommit
+#define sqlite3_get_auxdata sqlite3_api->get_auxdata
+#define sqlite3_get_table sqlite3_api->get_table
+#define sqlite3_global_recover sqlite3_api->global_recover
+#define sqlite3_interrupt sqlite3_api->interrupt
+#define sqlite3_last_insert_rowid sqlite3_api->last_insert_rowid
+#define sqlite3_libversion sqlite3_api->libversion
+#define sqlite3_libversion_number sqlite3_api->libversion_number
+#define sqlite3_malloc sqlite3_api->malloc
+#define sqlite3_mprintf sqlite3_api->mprintf
+#define sqlite3_open sqlite3_api->open
+#define sqlite3_open16 sqlite3_api->open16
+#define sqlite3_prepare sqlite3_api->prepare
+#define sqlite3_prepare16 sqlite3_api->prepare16
+#define sqlite3_profile sqlite3_api->profile
+#define sqlite3_progress_handler sqlite3_api->progress_handler
+#define sqlite3_realloc sqlite3_api->realloc
+#define sqlite3_reset sqlite3_api->reset
+#define sqlite3_result_blob sqlite3_api->result_blob
+#define sqlite3_result_double sqlite3_api->result_double
+#define sqlite3_result_error sqlite3_api->result_error
+#define sqlite3_result_error16 sqlite3_api->result_error16
+#define sqlite3_result_int sqlite3_api->result_int
+#define sqlite3_result_int64 sqlite3_api->result_int64
+#define sqlite3_result_null sqlite3_api->result_null
+#define sqlite3_result_text sqlite3_api->result_text
+#define sqlite3_result_text16 sqlite3_api->result_text16
+#define sqlite3_result_text16be sqlite3_api->result_text16be
+#define sqlite3_result_text16le sqlite3_api->result_text16le
+#define sqlite3_result_value sqlite3_api->result_value
+#define sqlite3_rollback_hook sqlite3_api->rollback_hook
+#define sqlite3_set_authorizer sqlite3_api->set_authorizer
+#define sqlite3_set_auxdata sqlite3_api->set_auxdata
+#define sqlite3_snprintf sqlite3_api->snprintf
+#define sqlite3_step sqlite3_api->step
+#define sqlite3_table_column_metadata sqlite3_api->table_column_metadata
+#define sqlite3_thread_cleanup sqlite3_api->thread_cleanup
+#define sqlite3_total_changes sqlite3_api->total_changes
+#define sqlite3_trace sqlite3_api->trace
+#define sqlite3_transfer_bindings sqlite3_api->transfer_bindings
+#define sqlite3_update_hook sqlite3_api->update_hook
+#define sqlite3_user_data sqlite3_api->user_data
+#define sqlite3_value_blob sqlite3_api->value_blob
+#define sqlite3_value_bytes sqlite3_api->value_bytes
+#define sqlite3_value_bytes16 sqlite3_api->value_bytes16
+#define sqlite3_value_double sqlite3_api->value_double
+#define sqlite3_value_int sqlite3_api->value_int
+#define sqlite3_value_int64 sqlite3_api->value_int64
+#define sqlite3_value_numeric_type sqlite3_api->value_numeric_type
+#define sqlite3_value_text sqlite3_api->value_text
+#define sqlite3_value_text16 sqlite3_api->value_text16
+#define sqlite3_value_text16be sqlite3_api->value_text16be
+#define sqlite3_value_text16le sqlite3_api->value_text16le
+#define sqlite3_value_type sqlite3_api->value_type
+#define sqlite3_vmprintf sqlite3_api->vmprintf
+#define sqlite3_overload_function sqlite3_api->overload_function
+#endif /* SQLITE_CORE */
+
+#define SQLITE_EXTENSION_INIT1 const sqlite3_api_routines *sqlite3_api;
+#define SQLITE_EXTENSION_INIT2(v) sqlite3_api = v;
+
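+/*
+** A minimal usage sketch (illustrative only; the entry point name and the
+** body shown here are assumptions, not part of this header): a loadable
+** extension includes this file, declares the api pointer with
+** SQLITE_EXTENSION_INIT1 at file scope, and fills it in with
+** SQLITE_EXTENSION_INIT2 inside its entry point before making any
+** sqlite3_* calls.
+**
+**     #include "sqlite3ext.h"
+**     SQLITE_EXTENSION_INIT1
+**
+**     int sqlite3_extension_init(
+**       sqlite3 *db,
+**       char **pzErrMsg,
+**       const sqlite3_api_routines *pApi
+**     ){
+**       SQLITE_EXTENSION_INIT2(pApi)
+**       .. register functions with sqlite3_create_function() etc.; those
+**       .. calls are now redirected through the sqlite3_api pointer ..
+**       return SQLITE_OK;
+**     }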
+#endif /* _SQLITE3EXT_H_ */
Added: freeswitch/trunk/libs/sqlite/src/sqliteInt.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/sqliteInt.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1884 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Internal interface definitions for SQLite.
+**
+** @(#) $Id: sqliteInt.h,v 1.529 2006/09/23 20:36:02 drh Exp $
+*/
+#ifndef _SQLITEINT_H_
+#define _SQLITEINT_H_
+
+/*
+** Extra interface definitions for those who need them
+*/
+#ifdef SQLITE_EXTRA
+# include "sqliteExtra.h"
+#endif
+
+/*
+** Many people are failing to set -DNDEBUG=1 when compiling SQLite.
+** Setting NDEBUG makes the code smaller and run faster. So the following
+** lines are added to automatically set NDEBUG unless the -DSQLITE_DEBUG=1
+** option is set. Thus NDEBUG becomes an opt-in rather than an opt-out
+** feature.
+*/
+#if !defined(NDEBUG) && !defined(SQLITE_DEBUG)
+# define NDEBUG 1
+#endif
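+/*
+** For example (illustrative command lines; the file name is a placeholder):
+**
+**     cc -c src/main.c                    -- NDEBUG defined automatically
+**     cc -DSQLITE_DEBUG=1 -c src/main.c   -- keeps assert()s and debug code
+*/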
+
+/*
+** These #defines should enable >2GB file support on Posix if the
+** underlying operating system supports it. If the OS lacks
+** large file support, or if the OS is windows, these should be no-ops.
+**
+** Large file support can be disabled using the -DSQLITE_DISABLE_LFS switch
+** on the compiler command line. This is necessary if you are compiling
+** on a recent machine (ex: RedHat 7.2) but you want your code to work
+** on an older machine (ex: RedHat 6.0). If you compile on RedHat 7.2
+** without this option, LFS is enabled. But LFS does not exist in the kernel
+** in RedHat 6.0, so the code won't work. Hence, for maximum binary
+** portability you should omit LFS.
+**
+** The same is true for MacOS. LFS is only supported on MacOS 9 and later.
+*/
+#ifndef SQLITE_DISABLE_LFS
+# define _LARGE_FILE 1
+# ifndef _FILE_OFFSET_BITS
+# define _FILE_OFFSET_BITS 64
+# endif
+# define _LARGEFILE_SOURCE 1
+#endif
+
+#include "sqlite3.h"
+#include "hash.h"
+#include "parse.h"
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+#include <stddef.h>
+
+/*
+** If compiling for a processor that lacks floating point support,
+** substitute integer for floating-point
+*/
+#ifdef SQLITE_OMIT_FLOATING_POINT
+# define double sqlite_int64
+# define LONGDOUBLE_TYPE sqlite_int64
+# ifndef SQLITE_BIG_DBL
+# define SQLITE_BIG_DBL (0x7fffffffffffffff)
+# endif
+# define SQLITE_OMIT_DATETIME_FUNCS 1
+# define SQLITE_OMIT_TRACE 1
+#endif
+#ifndef SQLITE_BIG_DBL
+# define SQLITE_BIG_DBL (1e99)
+#endif
+
+/*
+** The maximum number of in-memory pages to use for the main database
+** table and for temporary tables. Internally, the MAX_PAGES and
+** TEMP_PAGES macros are used. To override the default values at
+** compilation time, the SQLITE_DEFAULT_CACHE_SIZE and
+** SQLITE_DEFAULT_TEMP_CACHE_SIZE macros should be set.
+*/
+#ifdef SQLITE_DEFAULT_CACHE_SIZE
+# define MAX_PAGES SQLITE_DEFAULT_CACHE_SIZE
+#else
+# define MAX_PAGES 2000
+#endif
+#ifdef SQLITE_DEFAULT_TEMP_CACHE_SIZE
+# define TEMP_PAGES SQLITE_DEFAULT_TEMP_CACHE_SIZE
+#else
+# define TEMP_PAGES 500
+#endif
+
+/*
+** OMIT_TEMPDB is set to 1 if SQLITE_OMIT_TEMPDB is defined, or 0
+** otherwise. Having this macro allows us to cause the C compiler
+** to omit code used by TEMP tables without messy #ifndef statements.
+*/
+#ifdef SQLITE_OMIT_TEMPDB
+#define OMIT_TEMPDB 1
+#else
+#define OMIT_TEMPDB 0
+#endif
+
+/*
+** If the following macro is set to 1, then NULL values are considered
+** distinct when determining whether or not two entries are the same
+** in a UNIQUE index. This is the way PostgreSQL, Oracle, DB2, MySQL,
+** OCELOT, and Firebird all work. The SQL92 spec explicitly says this
+** is the way things are supposed to work.
+**
+** If the following macro is set to 0, then NULLs are indistinct for
+** a UNIQUE index. In this mode, you can only have a single NULL entry
+** for a column declared UNIQUE. This is the way Informix and SQL Server
+** work.
+*/
+#define NULL_DISTINCT_FOR_UNIQUE 1
+
+/*
+** The maximum number of attached databases. This must be at least 2
+** in order to support the main database file (0) and the file used to
+** hold temporary tables (1). And it must be less than 32 because
+** we use a bitmask of databases with a u32 in places (for example
+** the Parse.cookieMask field).
+*/
+#define MAX_ATTACHED 10
+
+/*
+** The maximum value of a ?nnn wildcard that the parser will accept.
+*/
+#define SQLITE_MAX_VARIABLE_NUMBER 999
+
+/*
+** The "file format" number is an integer that is incremented whenever
+** the VDBE-level file format changes. The following macros define the
+** default file format for new databases and the maximum file format
+** that the library can read.
+*/
+#define SQLITE_MAX_FILE_FORMAT 4
+#ifndef SQLITE_DEFAULT_FILE_FORMAT
+# define SQLITE_DEFAULT_FILE_FORMAT 1
+#endif
+
+/*
+** Provide a default value for TEMP_STORE in case it is not specified
+** on the command-line
+*/
+#ifndef TEMP_STORE
+# define TEMP_STORE 1
+#endif
+
+/*
+** GCC does not define the offsetof() macro so we'll have to do it
+** ourselves.
+*/
+#ifndef offsetof
+#define offsetof(STRUCTURE,FIELD) ((int)((char*)&((STRUCTURE*)0)->FIELD))
+#endif
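+/*
+** Sketch of how the fallback above works (the struct is illustrative):
+** the FIELD address is computed within a STRUCTURE imagined at address 0,
+** so the resulting pointer value is the byte offset of the field.
+**
+**     struct Example { char a; int b; };
+**     offsetof(struct Example, b)   -- byte offset of b, typically 4
+*/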
+
+/*
+** Check to see if this machine uses EBCDIC. (Yes, believe it or
+** not, there are still machines out there that use EBCDIC.)
+*/
+#if 'A' == '\301'
+# define SQLITE_EBCDIC 1
+#else
+# define SQLITE_ASCII 1
+#endif
+
+/*
+** Integers of known sizes. These typedefs might change for architectures
+** where the sizes vary. Preprocessor macros are available so that the
+** types can be conveniently redefined at compile-time. Like this:
+**
+** cc '-DUINTPTR_TYPE=long long int' ...
+*/
+#ifndef UINT32_TYPE
+# define UINT32_TYPE unsigned int
+#endif
+#ifndef UINT16_TYPE
+# define UINT16_TYPE unsigned short int
+#endif
+#ifndef INT16_TYPE
+# define INT16_TYPE short int
+#endif
+#ifndef UINT8_TYPE
+# define UINT8_TYPE unsigned char
+#endif
+#ifndef INT8_TYPE
+# define INT8_TYPE signed char
+#endif
+#ifndef LONGDOUBLE_TYPE
+# define LONGDOUBLE_TYPE long double
+#endif
+typedef sqlite_int64 i64; /* 8-byte signed integer */
+typedef sqlite_uint64 u64; /* 8-byte unsigned integer */
+typedef UINT32_TYPE u32; /* 4-byte unsigned integer */
+typedef UINT16_TYPE u16; /* 2-byte unsigned integer */
+typedef INT16_TYPE i16; /* 2-byte signed integer */
+typedef UINT8_TYPE u8; /* 1-byte unsigned integer */
+typedef INT8_TYPE i8; /* 1-byte signed integer */
+
+/*
+** Macros to determine whether the machine is big or little endian,
+** evaluated at runtime.
+*/
+extern const int sqlite3one;
+#define SQLITE_BIGENDIAN (*(char *)(&sqlite3one)==0)
+#define SQLITE_LITTLEENDIAN (*(char *)(&sqlite3one)==1)
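+/*
+** How the runtime test above works: sqlite3one is defined elsewhere in
+** the library as the integer constant 1. On a little-endian machine the
+** low-order byte is stored first, so the first byte of sqlite3one is 1;
+** on a big-endian machine it is 0. Illustrative use:
+**
+**     u8 enc = SQLITE_BIGENDIAN ? SQLITE_UTF16BE : SQLITE_UTF16LE;
+*/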
+
+/*
+** An instance of the following structure is used to store the busy-handler
+** callback for a given sqlite handle.
+**
+** The sqlite.busyHandler member of the sqlite struct contains the busy
+** callback for the database handle. Each pager opened via the sqlite
+** handle is passed a pointer to sqlite.busyHandler. The busy-handler
+** callback is currently invoked only from within pager.c.
+*/
+typedef struct BusyHandler BusyHandler;
+struct BusyHandler {
+ int (*xFunc)(void *,int); /* The busy callback */
+ void *pArg; /* First arg to busy callback */
+ int nBusy; /* Incremented with each busy call */
+};
+
+/*
+** Defer sourcing vdbe.h and btree.h until after the "u8" and
+** "BusyHandler typedefs.
+*/
+#include "vdbe.h"
+#include "btree.h"
+#include "pager.h"
+
+#ifdef SQLITE_MEMDEBUG
+/*
+** The following global variables are used for testing and debugging
+** only. They only work if SQLITE_MEMDEBUG is defined.
+*/
+extern int sqlite3_nMalloc; /* Number of sqliteMalloc() calls */
+extern int sqlite3_nFree; /* Number of sqliteFree() calls */
+extern int sqlite3_iMallocFail; /* Fail sqliteMalloc() after this many calls */
+extern int sqlite3_iMallocReset; /* Set iMallocFail to this when it reaches 0 */
+
+extern void *sqlite3_pFirst; /* Pointer to linked list of allocations */
+extern int sqlite3_nMaxAlloc; /* High water mark of ThreadData.nAlloc */
+extern int sqlite3_mallocDisallowed; /* assert() in sqlite3Malloc() if set */
+extern int sqlite3_isFail; /* True if all malloc calls should fail */
+extern const char *sqlite3_zFile; /* Filename to associate debug info with */
+extern int sqlite3_iLine; /* Line number for debug info */
+
+#define ENTER_MALLOC (sqlite3_zFile = __FILE__, sqlite3_iLine = __LINE__)
+#define sqliteMalloc(x) (ENTER_MALLOC, sqlite3Malloc(x,1))
+#define sqliteMallocRaw(x) (ENTER_MALLOC, sqlite3MallocRaw(x,1))
+#define sqliteRealloc(x,y) (ENTER_MALLOC, sqlite3Realloc(x,y))
+#define sqliteStrDup(x) (ENTER_MALLOC, sqlite3StrDup(x))
+#define sqliteStrNDup(x,y) (ENTER_MALLOC, sqlite3StrNDup(x,y))
+#define sqliteReallocOrFree(x,y) (ENTER_MALLOC, sqlite3ReallocOrFree(x,y))
+
+#else
+
+#define ENTER_MALLOC 0
+#define sqliteMalloc(x) sqlite3Malloc(x,1)
+#define sqliteMallocRaw(x) sqlite3MallocRaw(x,1)
+#define sqliteRealloc(x,y) sqlite3Realloc(x,y)
+#define sqliteStrDup(x) sqlite3StrDup(x)
+#define sqliteStrNDup(x,y) sqlite3StrNDup(x,y)
+#define sqliteReallocOrFree(x,y) sqlite3ReallocOrFree(x,y)
+
+#endif
+
+#define sqliteFree(x) sqlite3FreeX(x)
+#define sqliteAllocSize(x) sqlite3AllocSize(x)
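+/*
+** Illustrative note on the wrappers above: under SQLITE_MEMDEBUG each
+** wrapper first evaluates ENTER_MALLOC, which records the caller's
+** __FILE__ and __LINE__ in sqlite3_zFile/sqlite3_iLine before the real
+** allocator runs. For example:
+**
+**     p = sqliteMalloc(100);
+**     -- effectively (sqlite3_zFile=__FILE__, sqlite3_iLine=__LINE__,
+**     --              sqlite3Malloc(100,1))
+*/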
+
+
+/*
+** An instance of this structure might be allocated to store information
+** specific to a single thread.
+*/
+struct ThreadData {
+ int dummy; /* So that this structure is never empty */
+
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ int nSoftHeapLimit; /* Suggested max mem allocation. No limit if <0 */
+ int nAlloc; /* Number of bytes currently allocated */
+ Pager *pPager; /* Linked list of all pagers in this thread */
+#endif
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ u8 useSharedData; /* True if shared pagers and schemas are enabled */
+ BtShared *pBtree; /* Linked list of all currently open BTrees */
+#endif
+};
+
+/*
+** Name of the master database table. The master database table
+** is a special table that holds the names and attributes of all
+** user tables and indices.
+*/
+#define MASTER_NAME "sqlite_master"
+#define TEMP_MASTER_NAME "sqlite_temp_master"
+
+/*
+** The root-page of the master database table.
+*/
+#define MASTER_ROOT 1
+
+/*
+** The name of the schema table.
+*/
+#define SCHEMA_TABLE(x) ((!OMIT_TEMPDB)&&(x==1)?TEMP_MASTER_NAME:MASTER_NAME)
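+/*
+** Illustrative expansion of the macro above: SCHEMA_TABLE(0) is always
+** MASTER_NAME ("sqlite_master"); SCHEMA_TABLE(1) is TEMP_MASTER_NAME
+** ("sqlite_temp_master") unless TEMP tables were omitted from the build,
+** in which case it too resolves to MASTER_NAME.
+*/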
+
+/*
+** A convenience macro that returns the number of elements in
+** an array.
+*/
+#define ArraySize(X) (sizeof(X)/sizeof(X[0]))
+
+/*
+** Forward references to structures
+*/
+typedef struct AggInfo AggInfo;
+typedef struct AuthContext AuthContext;
+typedef struct CollSeq CollSeq;
+typedef struct Column Column;
+typedef struct Db Db;
+typedef struct Schema Schema;
+typedef struct Expr Expr;
+typedef struct ExprList ExprList;
+typedef struct FKey FKey;
+typedef struct FuncDef FuncDef;
+typedef struct IdList IdList;
+typedef struct Index Index;
+typedef struct KeyClass KeyClass;
+typedef struct KeyInfo KeyInfo;
+typedef struct Module Module;
+typedef struct NameContext NameContext;
+typedef struct Parse Parse;
+typedef struct Select Select;
+typedef struct SrcList SrcList;
+typedef struct ThreadData ThreadData;
+typedef struct Table Table;
+typedef struct TableLock TableLock;
+typedef struct Token Token;
+typedef struct TriggerStack TriggerStack;
+typedef struct TriggerStep TriggerStep;
+typedef struct Trigger Trigger;
+typedef struct WhereInfo WhereInfo;
+typedef struct WhereLevel WhereLevel;
+
+/*
+** Each database file to be accessed by the system is an instance
+** of the following structure. There are normally two of these structures
+** in the sqlite.aDb[] array. aDb[0] is the main database file and
+** aDb[1] is the database file used to hold temporary tables. Additional
+** databases may be attached.
+*/
+struct Db {
+ char *zName; /* Name of this database */
+ Btree *pBt; /* The B*Tree structure for this database file */
+ u8 inTrans; /* 0: not writable. 1: Transaction. 2: Checkpoint */
+ u8 safety_level; /* How aggressive at synching data to disk */
+ void *pAux; /* Auxiliary data. Usually NULL */
+ void (*xFreeAux)(void*); /* Routine to free pAux */
+ Schema *pSchema; /* Pointer to database schema (possibly shared) */
+};
+
+/*
+** An instance of the following structure stores a database schema.
+*/
+struct Schema {
+ int schema_cookie; /* Database schema version number for this file */
+ Hash tblHash; /* All tables indexed by name */
+ Hash idxHash; /* All (named) indices indexed by name */
+ Hash trigHash; /* All triggers indexed by name */
+ Hash aFKey; /* Foreign keys indexed by to-table */
+ Table *pSeqTab; /* The sqlite_sequence table used by AUTOINCREMENT */
+ u8 file_format; /* Schema format version for this file */
+ u8 enc; /* Text encoding used by this database */
+ u16 flags; /* Flags associated with this schema */
+ int cache_size; /* Number of pages to use in the cache */
+};
+
+/*
+** These macros can be used to test, set, or clear bits in the
+** Db.flags field.
+*/
+#define DbHasProperty(D,I,P) (((D)->aDb[I].pSchema->flags&(P))==(P))
+#define DbHasAnyProperty(D,I,P) (((D)->aDb[I].pSchema->flags&(P))!=0)
+#define DbSetProperty(D,I,P) (D)->aDb[I].pSchema->flags|=(P)
+#define DbClearProperty(D,I,P) (D)->aDb[I].pSchema->flags&=~(P)
+
+/*
+** Allowed values for the DB.flags field.
+**
+** The DB_SchemaLoaded flag is set after the database schema has been
+** read into internal hash tables.
+**
+** DB_UnresetViews means that one or more views have column names that
+** have been filled out. If the schema changes, these column names might
+** change and so the view will need to be reset.
+*/
+#define DB_SchemaLoaded 0x0001 /* The schema has been loaded */
+#define DB_UnresetViews 0x0002 /* Some views have defined column names */
+#define DB_Empty 0x0004 /* The file is empty (length 0 bytes) */
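+/*
+** Typical usage of the property macros above (an illustrative sketch;
+** "db" and the database index 1 are hypothetical):
+**
+**     if( !DbHasProperty(db, 1, DB_SchemaLoaded) ){
+**       .. (re)read the schema for database 1, then ..
+**       DbSetProperty(db, 1, DB_SchemaLoaded);
+**     }
+*/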
+
+#define SQLITE_UTF16NATIVE (SQLITE_BIGENDIAN?SQLITE_UTF16BE:SQLITE_UTF16LE)
+
+/*
+** Each database is an instance of the following structure.
+**
+** The sqlite.lastRowid records the last insert rowid generated by an
+** insert statement. Inserts on views do not affect its value. Each
+** trigger has its own context, so that lastRowid can be updated inside
+** triggers as usual. The previous value will be restored once the trigger
+** exits. Upon entering a BEFORE or INSTEAD OF trigger, lastRowid is no
+** longer (as of version 2.8.12) reset to -1.
+**
+** The sqlite.nChange does not count changes within triggers and keeps no
+** context. It is reset at start of sqlite3_exec.
+** The sqlite.lsChange represents the number of changes made by the last
+** insert, update, or delete statement. It remains constant throughout the
+** length of a statement and is then updated by OP_SetCounts. It keeps a
+** context stack just like lastRowid so that the count of changes
+** within a trigger is not seen outside the trigger. Changes to views do not
+** affect the value of lsChange.
+** The sqlite.csChange keeps track of the number of current changes (since
+** the last statement) and is used to update sqlite_lsChange.
+**
+** The member variables sqlite.errCode, sqlite.zErrMsg and sqlite.zErrMsg16
+** store the most recent error code and, if applicable, string. The
+** internal function sqlite3Error() is used to set these variables
+** consistently.
+*/
+struct sqlite3 {
+ int nDb; /* Number of backends currently in use */
+ Db *aDb; /* All backends */
+ int flags; /* Miscellaneous flags. See below */
+ int errCode; /* Most recent error code (SQLITE_*) */
+ int errMask; /* & result codes with this before returning */
+ u8 autoCommit; /* The auto-commit flag. */
+ u8 temp_store; /* 1: file 2: memory 0: default */
+ int nTable; /* Number of tables in the database */
+ CollSeq *pDfltColl; /* The default collating sequence (BINARY) */
+ i64 lastRowid; /* ROWID of most recent insert (see above) */
+ i64 priorNewRowid; /* Last randomly generated ROWID */
+ int magic; /* Magic number to detect library misuse */
+ int nChange; /* Value returned by sqlite3_changes() */
+ int nTotalChange; /* Value returned by sqlite3_total_changes() */
+ struct sqlite3InitInfo { /* Information used during initialization */
+ int iDb; /* Which db file is being initialized */
+ int newTnum; /* Rootpage of table being initialized */
+ u8 busy; /* TRUE if currently initializing */
+ } init;
+ int nExtension; /* Number of loaded extensions */
+ void *aExtension; /* Array of shared library handles */
+ struct Vdbe *pVdbe; /* List of active virtual machines */
+ int activeVdbeCnt; /* Number of vdbes currently executing */
+ void (*xTrace)(void*,const char*); /* Trace function */
+ void *pTraceArg; /* Argument to the trace function */
+ void (*xProfile)(void*,const char*,u64); /* Profiling function */
+ void *pProfileArg; /* Argument to profile function */
+ void *pCommitArg; /* Argument to xCommitCallback() */
+ int (*xCommitCallback)(void*); /* Invoked at every commit. */
+ void *pRollbackArg; /* Argument to xRollbackCallback() */
+ void (*xRollbackCallback)(void*); /* Invoked at every rollback. */
+ void *pUpdateArg;
+ void (*xUpdateCallback)(void*,int, const char*,const char*,sqlite_int64);
+ void(*xCollNeeded)(void*,sqlite3*,int eTextRep,const char*);
+ void(*xCollNeeded16)(void*,sqlite3*,int eTextRep,const void*);
+ void *pCollNeededArg;
+ sqlite3_value *pErr; /* Most recent error message */
+ char *zErrMsg; /* Most recent error message (UTF-8 encoded) */
+ char *zErrMsg16; /* Most recent error message (UTF-16 encoded) */
+ union {
+ int isInterrupted; /* True if sqlite3_interrupt has been called */
+ double notUsed1; /* Spacer */
+ } u1;
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ int (*xAuth)(void*,int,const char*,const char*,const char*,const char*);
+ /* Access authorization function */
+ void *pAuthArg; /* 1st argument to the access auth function */
+#endif
+#ifndef SQLITE_OMIT_PROGRESS_CALLBACK
+ int (*xProgress)(void *); /* The progress callback */
+ void *pProgressArg; /* Argument to the progress callback */
+ int nProgressOps; /* Number of opcodes for progress callback */
+#endif
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ Hash aModule; /* populated by sqlite3_create_module() */
+ Table *pVTab; /* vtab with active Connect/Create method */
+ sqlite3_vtab **aVTrans; /* Virtual tables with open transactions */
+ int nVTrans; /* Allocated size of aVTrans */
+#endif
+ Hash aFunc; /* All functions that can be in SQL exprs */
+ Hash aCollSeq; /* All collating sequences */
+ BusyHandler busyHandler; /* Busy callback */
+ int busyTimeout; /* Busy handler timeout, in msec */
+ Db aDbStatic[2]; /* Static space for the 2 default backends */
+#ifdef SQLITE_SSE
+ sqlite3_stmt *pFetch; /* Used by SSE to fetch stored statements */
+#endif
+};
+
+/*
+** A macro to discover the encoding of a database.
+*/
+#define ENC(db) ((db)->aDb[0].pSchema->enc)
+
+/*
+** Possible values for the sqlite.flags and/or Db.flags fields.
+**
+** On sqlite.flags, the SQLITE_InTrans value means that we have
+** executed a BEGIN. On Db.flags, SQLITE_InTrans means a statement
+** transaction is active on that particular database file.
+*/
+#define SQLITE_VdbeTrace 0x00000001 /* True to trace VDBE execution */
+#define SQLITE_InTrans 0x00000008 /* True if in a transaction */
+#define SQLITE_InternChanges 0x00000010 /* Uncommitted Hash table changes */
+#define SQLITE_FullColNames 0x00000020 /* Show full column names on SELECT */
+#define SQLITE_ShortColNames 0x00000040 /* Show short column names */
+#define SQLITE_CountRows 0x00000080 /* Count rows changed by INSERT, */
+ /* DELETE, or UPDATE and return */
+ /* the count using a callback. */
+#define SQLITE_NullCallback 0x00000100 /* Invoke the callback once if the */
+ /* result set is empty */
+#define SQLITE_SqlTrace 0x00000200 /* Debug print SQL as it executes */
+#define SQLITE_VdbeListing 0x00000400 /* Debug listings of VDBE programs */
+#define SQLITE_WriteSchema 0x00000800 /* OK to update SQLITE_MASTER */
+#define SQLITE_NoReadlock 0x00001000 /* Readlocks are omitted when
+ ** accessing read-only databases */
+#define SQLITE_IgnoreChecks 0x00002000 /* Do not enforce check constraints */
+#define SQLITE_ReadUncommitted 0x00004000 /* For shared-cache mode */
+#define SQLITE_LegacyFileFmt 0x00008000 /* Create new databases in format 1 */
+#define SQLITE_FullFSync 0x00010000 /* Use full fsync on the backend */
+#define SQLITE_LoadExtension 0x00020000 /* Enable load_extension */
+
+/*
+** Possible values for the sqlite.magic field.
+** The numbers are obtained at random and have no special meaning, other
+** than being distinct from one another.
+*/
+#define SQLITE_MAGIC_OPEN 0xa029a697 /* Database is open */
+#define SQLITE_MAGIC_CLOSED 0x9f3c2d33 /* Database is closed */
+#define SQLITE_MAGIC_BUSY 0xf03b7906 /* Database currently in use */
+#define SQLITE_MAGIC_ERROR 0xb5357930 /* An SQLITE_MISUSE error occurred */
+
+/*
+** Each SQL function is defined by an instance of the following
+** structure. A pointer to this structure is stored in the sqlite.aFunc
+** hash table. When multiple functions have the same name, the hash table
+** points to a linked list of these structures.
+*/
+struct FuncDef {
+ i16 nArg; /* Number of arguments. -1 means unlimited */
+ u8 iPrefEnc; /* Preferred text encoding (SQLITE_UTF8, 16LE, 16BE) */
+ u8 needCollSeq; /* True if sqlite3GetFuncCollSeq() might be called */
+ u8 flags; /* Some combination of SQLITE_FUNC_* */
+ void *pUserData; /* User data parameter */
+ FuncDef *pNext; /* Next function with same name */
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**); /* Regular function */
+ void (*xStep)(sqlite3_context*,int,sqlite3_value**); /* Aggregate step */
+ void (*xFinalize)(sqlite3_context*); /* Aggregate finalizer */
+ char zName[1]; /* SQL name of the function. MUST BE LAST */
+};
+
+/*
+** Each SQLite module (virtual table definition) is defined by an
+** instance of the following structure, stored in the sqlite3.aModule
+** hash table.
+*/
+struct Module {
+ const sqlite3_module *pModule; /* Callback pointers */
+ const char *zName; /* Name passed to create_module() */
+ void *pAux; /* pAux passed to create_module() */
+};
+
+/*
+** Possible values for FuncDef.flags
+*/
+#define SQLITE_FUNC_LIKE 0x01 /* Candidate for the LIKE optimization */
+#define SQLITE_FUNC_CASE 0x02 /* Case-sensitive LIKE-type function */
+#define SQLITE_FUNC_EPHEM 0x04 /* Ephemeral. Delete with VDBE */
+
+/*
+** Information about each column of an SQL table is held in an instance
+** of this structure.
+*/
+struct Column {
+ char *zName; /* Name of this column */
+ Expr *pDflt; /* Default value of this column */
+ char *zType; /* Data type for this column */
+ char *zColl; /* Collating sequence. If NULL, use the default */
+ u8 notNull; /* True if there is a NOT NULL constraint */
+ u8 isPrimKey; /* True if this column is part of the PRIMARY KEY */
+ char affinity; /* One of the SQLITE_AFF_... values */
+};
+
+/*
+** A "Collating Sequence" is defined by an instance of the following
+** structure. Conceptually, a collating sequence consists of a name and
+** a comparison routine that defines the order of that sequence.
+**
+** There may be two separate implementations of the collation function, one
+** that processes text in UTF-8 encoding (CollSeq.xCmp) and another that
+** processes text encoded in UTF-16 (CollSeq.xCmp16), using the machine
+** native byte order. When a collation sequence is invoked, SQLite selects
+** the version that will require the least expensive encoding
+** translations, if any.
+**
+** The CollSeq.pUser member variable is an extra parameter that is passed in
+** as the first argument to the UTF-8 comparison function, xCmp.
+** CollSeq.pUser16 is the equivalent for the UTF-16 comparison function,
+** xCmp16.
+**
+** If both CollSeq.xCmp and CollSeq.xCmp16 are NULL, it means that the
+** collating sequence is undefined. Indices built on an undefined
+** collating sequence may not be read or written.
+*/
+struct CollSeq {
+ char *zName; /* Name of the collating sequence, UTF-8 encoded */
+ u8 enc; /* Text encoding handled by xCmp() */
+ u8 type; /* One of the SQLITE_COLL_... values below */
+ void *pUser; /* First argument to xCmp() */
+ int (*xCmp)(void*,int, const void*, int, const void*);
+};
+
+/*
+** Allowed values of CollSeq flags:
+*/
+#define SQLITE_COLL_BINARY 1 /* The default memcmp() collating sequence */
+#define SQLITE_COLL_NOCASE 2 /* The built-in NOCASE collating sequence */
+#define SQLITE_COLL_REVERSE 3 /* The built-in REVERSE collating sequence */
+#define SQLITE_COLL_USER 0 /* Any other user-defined collating sequence */
+
+/*
+** A sort order can be either ASC or DESC.
+*/
+#define SQLITE_SO_ASC 0 /* Sort in ascending order */
+#define SQLITE_SO_DESC 1 /* Sort in descending order */
+
+/*
+** Column affinity types.
+**
+** These used to have mnemonic names like 'i' for SQLITE_AFF_INTEGER and
+** 't' for SQLITE_AFF_TEXT. But we can save a little space and improve
+** the speed a little by numbering the values consecutively.
+**
+** But rather than start with 0 or 1, we begin with 'a'. That way,
+** when multiple affinity types are concatenated into a string and
+** used as the P3 operand, they will be more readable.
+**
+** Note also that the numeric types are grouped together so that testing
+** for a numeric type is a single comparison.
+*/
+#define SQLITE_AFF_TEXT 'a'
+#define SQLITE_AFF_NONE 'b'
+#define SQLITE_AFF_NUMERIC 'c'
+#define SQLITE_AFF_INTEGER 'd'
+#define SQLITE_AFF_REAL 'e'
+
+#define sqlite3IsNumericAffinity(X) ((X)>=SQLITE_AFF_NUMERIC)
+
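+/*
+** Illustration of the single-comparison test above: because the numeric
+** affinities NUMERIC, INTEGER, and REAL are assigned the consecutive
+** codes 'c', 'd', and 'e', a value is numeric exactly when its affinity
+** code is >= SQLITE_AFF_NUMERIC:
+**
+**     sqlite3IsNumericAffinity(SQLITE_AFF_REAL)   -- 'e'>='c' -> true
+**     sqlite3IsNumericAffinity(SQLITE_AFF_TEXT)   -- 'a'>='c' -> false
+*/
+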
+/*
+** Each SQL table is represented in memory by an instance of the
+** following structure.
+**
+** Table.zName is the name of the table. The case of the original
+** CREATE TABLE statement is stored, but case is not significant for
+** comparisons.
+**
+** Table.nCol is the number of columns in this table. Table.aCol is a
+** pointer to an array of Column structures, one for each column.
+**
+** If the table has an INTEGER PRIMARY KEY, then Table.iPKey is the index of
+** the column that is that key. Otherwise Table.iPKey is negative. Note
+** that the datatype of the PRIMARY KEY must be INTEGER for this field to
+** be set. An INTEGER PRIMARY KEY is used as the rowid for each row of
+** the table. If a table has no INTEGER PRIMARY KEY, then a random rowid
+** is generated for each row of the table. Table.hasPrimKey is true if
+** the table has any PRIMARY KEY, INTEGER or otherwise.
+**
+** Table.tnum is the page number for the root BTree page of the table in the
+** database file. Table.iDb is the index of the database table backend
+** in sqlite.aDb[]. 0 is for the main database and 1 is for the file that
+** holds temporary tables and indices. If Table.isEphem
+** is true, then the table is stored in a file that is automatically deleted
+** when the VDBE cursor to the table is closed. In this case Table.tnum
+** refers to the VDBE cursor number that holds the table open, not to the root
+** page number. Transient tables are used to hold the results of a
+** sub-query that appears instead of a real table name in the FROM clause
+** of a SELECT statement.
+*/
+struct Table {
+ char *zName; /* Name of the table */
+ int nCol; /* Number of columns in this table */
+ Column *aCol; /* Information about each column */
+ int iPKey; /* If not less than 0, use aCol[iPKey] as the primary key */
+ Index *pIndex; /* List of SQL indexes on this table. */
+ int tnum; /* Root BTree node for this table (see note above) */
+ Select *pSelect; /* NULL for tables. Points to definition if a view. */
+ int nRef; /* Number of pointers to this Table */
+ Trigger *pTrigger; /* List of SQL triggers on this table */
+ FKey *pFKey; /* Linked list of all foreign keys in this table */
+ char *zColAff; /* String defining the affinity of each column */
+#ifndef SQLITE_OMIT_CHECK
+ Expr *pCheck; /* The AND of all CHECK constraints */
+#endif
+#ifndef SQLITE_OMIT_ALTERTABLE
+ int addColOffset; /* Offset in CREATE TABLE statement to add a new column */
+#endif
+ u8 readOnly; /* True if this table should not be written by the user */
+ u8 isEphem; /* True if created using OP_OpenEphemeral */
+ u8 hasPrimKey; /* True if there exists a primary key */
+ u8 keyConf; /* What to do in case of uniqueness conflict on iPKey */
+ u8 autoInc; /* True if the integer primary key is autoincrement */
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ u8 isVirtual; /* True if this is a virtual table */
+ u8 isCommit; /* True once the CREATE TABLE has been committed */
+ Module *pMod; /* Pointer to the implementation of the module */
+ sqlite3_vtab *pVtab; /* Pointer to the module instance */
+ int nModuleArg; /* Number of arguments to the module */
+ char **azModuleArg; /* Text of all module args. [0] is module name */
+#endif
+ Schema *pSchema;
+};
+
+/*
+** Test to see whether or not a table is a virtual table. This is
+** done as a macro so that it will be optimized out when virtual
+** table support is omitted from the build.
+*/
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+# define IsVirtual(X) ((X)->isVirtual)
+#else
+# define IsVirtual(X) 0
+#endif
+
+/*
+** Each foreign key constraint is an instance of the following structure.
+**
+** A foreign key is associated with two tables. The "from" table is
+** the table that contains the REFERENCES clause that creates the foreign
+** key. The "to" table is the table that is named in the REFERENCES clause.
+** Consider this example:
+**
+** CREATE TABLE ex1(
+** a INTEGER PRIMARY KEY,
+** b INTEGER CONSTRAINT fk1 REFERENCES ex2(x)
+** );
+**
+** For foreign key "fk1", the from-table is "ex1" and the to-table is "ex2".
+**
+** Each REFERENCES clause generates an instance of the following structure
+** which is attached to the from-table. The to-table need not exist when
+** the from-table is created. The existence of the to-table is not checked
+** until an attempt is made to insert data into the from-table.
+**
+** The sqlite.aFKey hash table stores pointers to this structure
+** given the name of a to-table. For each to-table, all foreign keys
+** associated with that table are on a linked list using the FKey.pNextTo
+** field.
+*/
+struct FKey {
+ Table *pFrom; /* The table that contains the REFERENCES clause */
+ FKey *pNextFrom; /* Next foreign key in pFrom */
+ char *zTo; /* Name of table that the key points to */
+ FKey *pNextTo; /* Next foreign key that points to zTo */
+ int nCol; /* Number of columns in this key */
+ struct sColMap { /* Mapping of columns in pFrom to columns in zTo */
+ int iFrom; /* Index of column in pFrom */
+ char *zCol; /* Name of column in zTo. If 0 use PRIMARY KEY */
+ } *aCol; /* One entry for each of nCol columns */
+ u8 isDeferred; /* True if constraint checking is deferred till COMMIT */
+ u8 updateConf; /* How to resolve conflicts that occur on UPDATE */
+ u8 deleteConf; /* How to resolve conflicts that occur on DELETE */
+ u8 insertConf; /* How to resolve conflicts that occur on INSERT */
+};
+
+/*
+** SQLite supports many different ways to resolve a constraint
+** error. ROLLBACK processing means that a constraint violation
+** causes the operation in process to fail and for the current transaction
+** to be rolled back. ABORT processing means the operation in process
+** fails and any prior changes from that one operation are backed out,
+** but the transaction is not rolled back. FAIL processing means that
+** the operation in progress stops and returns an error code. But prior
+** changes due to the same operation are not backed out and no rollback
+** occurs. IGNORE means that the particular row that caused the constraint
+** error is not inserted or updated. Processing continues and no error
+** is returned. REPLACE means that preexisting database rows that caused
+** a UNIQUE constraint violation are removed so that the new insert or
+** update can proceed. Processing continues and no error is reported.
+**
+** RESTRICT, SETNULL, and CASCADE actions apply only to foreign keys.
+** RESTRICT is the same as ABORT for IMMEDIATE foreign keys and the
+** same as ROLLBACK for DEFERRED keys. SETNULL means that the foreign
+** key is set to NULL. CASCADE means that a DELETE or UPDATE of the
+** referenced table row is propagated into the row that holds the
+** foreign key.
+**
+** The following symbolic values are used to record which type
+** of action to take.
+*/
+#define OE_None 0 /* There is no constraint to check */
+#define OE_Rollback 1 /* Fail the operation and rollback the transaction */
+#define OE_Abort 2 /* Back out changes but do not roll back the transaction */
+#define OE_Fail 3 /* Stop the operation but leave all prior changes */
+#define OE_Ignore 4 /* Ignore the error. Do not do the INSERT or UPDATE */
+#define OE_Replace 5 /* Delete existing record, then do INSERT or UPDATE */
+
+#define OE_Restrict 6 /* OE_Abort for IMMEDIATE, OE_Rollback for DEFERRED */
+#define OE_SetNull 7 /* Set the foreign key value to NULL */
+#define OE_SetDflt 8 /* Set the foreign key value to its default */
+#define OE_Cascade 9 /* Cascade the changes */
+
+#define OE_Default 99 /* Do whatever the default action is */
+
+
+/*
+** An instance of the following structure is passed as the first
+** argument to sqlite3VdbeKeyCompare and is used to control the
+** comparison of the two index keys.
+**
+** If the KeyInfo.incrKey value is true and the comparison would
+** otherwise be equal, then return a result as if the second key
+** were larger.
+*/
+struct KeyInfo {
+ u8 enc; /* Text encoding - one of the TEXT_Utf* values */
+ u8 incrKey; /* Increase 2nd key by epsilon before comparison */
+ int nField; /* Number of entries in aColl[] */
+ u8 *aSortOrder; /* If defined and aSortOrder[i] is true, sort DESC */
+ CollSeq *aColl[1]; /* Collating sequence for each term of the key */
+};
+
+/*
+** Each SQL index is represented in memory by an
+** instance of the following structure.
+**
+** The columns of the table that are to be indexed are described
+** by the aiColumn[] field of this structure. For example, suppose
+** we have the following table and index:
+**
+** CREATE TABLE Ex1(c1 int, c2 int, c3 text);
+** CREATE INDEX Ex2 ON Ex1(c3,c1);
+**
+** In the Table structure describing Ex1, nCol==3 because there are
+** three columns in the table. In the Index structure describing
+** Ex2, nColumn==2 since 2 of the 3 columns of Ex1 are indexed.
+** The value of aiColumn is {2, 0}. aiColumn[0]==2 because the
+** first column to be indexed (c3) has an index of 2 in Ex1.aCol[].
+** The second column to be indexed (c1) has an index of 0 in
+** Ex1.aCol[], hence Ex2.aiColumn[1]==0.
+**
+** The Index.onError field determines whether or not the indexed columns
+** must be unique and what to do if they are not. When Index.onError=OE_None,
+** it means this is not a unique index. Otherwise it is a unique index
+** and the value of Index.onError indicates which conflict resolution
+** algorithm to employ whenever an attempt is made to insert a non-unique
+** element.
+*/
+struct Index {
+ char *zName; /* Name of this index */
+ int nColumn; /* Number of columns in the table used by this index */
+ int *aiColumn; /* Which columns are used by this index. 1st is 0 */
+ unsigned *aiRowEst; /* Result of ANALYZE: Est. rows selected by each column */
+ Table *pTable; /* The SQL table being indexed */
+ int tnum; /* Page containing root of this index in database file */
+ u8 onError; /* OE_Abort, OE_Ignore, OE_Replace, or OE_None */
+ u8 autoIndex; /* True if it is automatically created (ex: by UNIQUE) */
+ char *zColAff; /* String defining the affinity of each column */
+ Index *pNext; /* The next index associated with the same table */
+ Schema *pSchema; /* Schema containing this index */
+ u8 *aSortOrder; /* Array of size Index.nColumn. True==DESC, False==ASC */
+ char **azColl; /* Array of collation sequence names for index */
+};
+
+/*
+** Each token coming out of the lexer is an instance of
+** this structure. Tokens are also used as part of an expression.
+**
+** Note if Token.z==0 then Token.dyn and Token.n are undefined and
+** may contain random values. Do not make any assumptions about Token.dyn
+** and Token.n when Token.z==0.
+*/
+struct Token {
+ const unsigned char *z; /* Text of the token. Not NULL-terminated! */
+ unsigned dyn : 1; /* True for malloced memory, false for static */
+ unsigned n : 31; /* Number of characters in this token */
+};
+
+/*
+** An instance of this structure contains information needed to generate
+** code for a SELECT that contains aggregate functions.
+**
+** If Expr.op==TK_AGG_COLUMN or TK_AGG_FUNCTION then Expr.pAggInfo is a
+** pointer to this structure. The Expr.iColumn field is the index in
+** AggInfo.aCol[] or AggInfo.aFunc[] of information needed to generate
+** code for that node.
+**
+** AggInfo.pGroupBy and AggInfo.aFunc.pExpr point to fields within the
+** original Select structure that describes the SELECT statement. These
+** fields do not need to be freed when deallocating the AggInfo structure.
+*/
+struct AggInfo {
+ u8 directMode; /* Direct rendering mode means take data directly
+ ** from source tables rather than from accumulators */
+ u8 useSortingIdx; /* In direct mode, reference the sorting index rather
+ ** than the source table */
+ int sortingIdx; /* Cursor number of the sorting index */
+ ExprList *pGroupBy; /* The group by clause */
+ int nSortingColumn; /* Number of columns in the sorting index */
+ struct AggInfo_col { /* For each column used in source tables */
+ int iTable; /* Cursor number of the source table */
+ int iColumn; /* Column number within the source table */
+ int iSorterColumn; /* Column number in the sorting index */
+ int iMem; /* Memory location that acts as accumulator */
+ Expr *pExpr; /* The original expression */
+ } *aCol;
+ int nColumn; /* Number of used entries in aCol[] */
+ int nColumnAlloc; /* Number of slots allocated for aCol[] */
+ int nAccumulator; /* Number of columns that show through to the output.
+ ** Additional columns are used only as parameters to
+ ** aggregate functions */
+ struct AggInfo_func { /* For each aggregate function */
+ Expr *pExpr; /* Expression encoding the function */
+ FuncDef *pFunc; /* The aggregate function implementation */
+ int iMem; /* Memory location that acts as accumulator */
+ int iDistinct; /* Ephemeral table used to enforce DISTINCT */
+ } *aFunc;
+ int nFunc; /* Number of entries in aFunc[] */
+ int nFuncAlloc; /* Number of slots allocated for aFunc[] */
+};
+
+/*
+** Each node of an expression in the parse tree is an instance
+** of this structure.
+**
+** Expr.op is the opcode. The integer parser token codes are reused
+** as opcodes here. For example, the parser defines TK_GE to be an integer
+** code representing the ">=" operator. This same integer code is reused
+** to represent the greater-than-or-equal-to operator in the expression
+** tree.
+**
+** Expr.pRight and Expr.pLeft are subexpressions. Expr.pList is a list
+** of arguments if the expression is a function.
+**
+** Expr.token is the operator token for this node. For some expressions
+** that have subexpressions, Expr.token can be the complete text that gave
+** rise to the Expr. In the latter case, the token is marked as being
+** a compound token.
+**
+** An expression of the form ID or ID.ID refers to a column in a table.
+** For such expressions, Expr.op is set to TK_COLUMN and Expr.iTable is
+** the integer cursor number of a VDBE cursor pointing to that table and
+** Expr.iColumn is the column number for the specific column. If the
+** expression is used as a result in an aggregate SELECT, then the
+** value is also stored in the Expr.iAgg column in the aggregate so that
+** it can be accessed after all aggregates are computed.
+**
+** If the expression is a function, the Expr.iTable is an integer code
+** representing which function. If the expression is an unbound variable
+** marker (a question mark character '?' in the original SQL) then the
+** Expr.iTable holds the index number for that variable.
+**
+** If the expression is a subquery then Expr.iColumn holds an integer
+** register number containing the result of the subquery. If the
+** subquery gives a constant result, then iTable is -1. If the subquery
+** gives a different answer at different times during statement processing
+** then iTable is the address of a subroutine that computes the subquery.
+**
+** The Expr.pSelect field points to a SELECT statement. The SELECT might
+** be the right operand of an IN operator. Or, if a scalar SELECT appears
+** in an expression the opcode is TK_SELECT and Expr.pSelect is the only
+** operand.
+**
+** If the Expr is of type OP_Column, and the table it is selecting from
+** is a disk table or the "old.*" pseudo-table, then pTab points to the
+** corresponding table definition.
+*/
+struct Expr {
+ u8 op; /* Operation performed by this node */
+ char affinity; /* The affinity of the column or 0 if not a column */
+ u16 flags; /* Various flags. See below */
+ CollSeq *pColl; /* The collation type of the column or 0 */
+ Expr *pLeft, *pRight; /* Left and right subnodes */
+ ExprList *pList; /* A list of expressions used as function arguments
+ ** or in "<expr> IN (<expr-list>)" */
+ Token token; /* An operand token */
+ Token span; /* Complete text of the expression */
+ int iTable, iColumn; /* When op==TK_COLUMN, then this expr node means the
+ ** iColumn-th field of the iTable-th table. */
+ AggInfo *pAggInfo; /* Used by TK_AGG_COLUMN and TK_AGG_FUNCTION */
+ int iAgg; /* Which entry in pAggInfo->aCol[] or ->aFunc[] */
+ int iRightJoinTable; /* If EP_FromJoin, the right table of the join */
+ Select *pSelect; /* When the expression is a sub-select. Also the
+ ** right side of "<expr> IN (<select>)" */
+ Table *pTab; /* Table for OP_Column expressions. */
+ Schema *pSchema;
+};
+
+/*
+** The following are the meanings of bits in the Expr.flags field.
+*/
+#define EP_FromJoin 0x01 /* Originated in ON or USING clause of a join */
+#define EP_Agg 0x02 /* Contains one or more aggregate functions */
+#define EP_Resolved 0x04 /* IDs have been resolved to COLUMNs */
+#define EP_Error 0x08 /* Expression contains one or more errors */
+#define EP_Distinct 0x10 /* Aggregate function with DISTINCT keyword */
+#define EP_VarSelect 0x20 /* pSelect is correlated, not constant */
+#define EP_Dequoted 0x40 /* True if the string has been dequoted */
+#define EP_InfixFunc 0x80 /* True for an infix function: LIKE, GLOB, etc */
+
+/*
+** These macros can be used to test, set, or clear bits in the
+** Expr.flags field.
+*/
+#define ExprHasProperty(E,P) (((E)->flags&(P))==(P))
+#define ExprHasAnyProperty(E,P) (((E)->flags&(P))!=0)
+#define ExprSetProperty(E,P) (E)->flags|=(P)
+#define ExprClearProperty(E,P) (E)->flags&=~(P)
+
+/*
+** A list of expressions. Each expression may optionally have a
+** name. An expr/name combination can be used in several ways, such
+** as the list of "expr AS ID" fields following a "SELECT" or in the
+** list of "ID = expr" items in an UPDATE. A list of expressions can
+** also be used as the argument to a function, in which case the a.zName
+** field is not used.
+*/
+struct ExprList {
+ int nExpr; /* Number of expressions on the list */
+ int nAlloc; /* Number of entries allocated below */
+ int iECursor; /* VDBE Cursor associated with this ExprList */
+ struct ExprList_item {
+ Expr *pExpr; /* The list of expressions */
+ char *zName; /* Token associated with this expression */
+ u8 sortOrder; /* 1 for DESC or 0 for ASC */
+ u8 isAgg; /* True if this is an aggregate like count(*) */
+ u8 done; /* A flag to indicate when processing is finished */
+ } *a; /* One entry for each expression */
+};
+
+/*
+** An instance of this structure can hold a simple list of identifiers,
+** such as the list "a,b,c" in the following statements:
+**
+** INSERT INTO t(a,b,c) VALUES ...;
+** CREATE INDEX idx ON t(a,b,c);
+** CREATE TRIGGER trig BEFORE UPDATE ON t(a,b,c) ...;
+**
+** The IdList.a.idx field is used when the IdList represents the list of
+** column names after a table name in an INSERT statement. In the statement
+**
+** INSERT INTO t(a,b,c) ...
+**
+** If "a" is the k-th column of table "t", then IdList.a[0].idx==k.
+*/
+struct IdList {
+ struct IdList_item {
+ char *zName; /* Name of the identifier */
+ int idx; /* Index in some Table.aCol[] of a column named zName */
+ } *a;
+ int nId; /* Number of identifiers on the list */
+ int nAlloc; /* Number of entries allocated for a[] below */
+};
+
+/*
+** The bitmask datatype defined below is used for various optimizations.
+*/
+typedef unsigned int Bitmask;
+
+/*
+** The following structure describes the FROM clause of a SELECT statement.
+** Each table or subquery in the FROM clause is a separate element of
+** the SrcList.a[] array.
+**
+** With the addition of multiple database support, the following structure
+** can also be used to describe a particular table such as the table that
+** is modified by an INSERT, DELETE, or UPDATE statement. In standard SQL,
+** such a table must be a simple name: ID. But in SQLite, the table can
+** now be identified by a database name, a dot, then the table name: ID.ID.
+*/
+struct SrcList {
+ i16 nSrc; /* Number of tables or subqueries in the FROM clause */
+ i16 nAlloc; /* Number of entries allocated in a[] below */
+ struct SrcList_item {
+ char *zDatabase; /* Name of database holding this table */
+ char *zName; /* Name of the table */
+ char *zAlias; /* The "B" part of a "A AS B" phrase. zName is the "A" */
+ Table *pTab; /* An SQL table corresponding to zName */
+ Select *pSelect; /* A SELECT statement used in place of a table name */
+ u8 isPopulated; /* Temporary table associated with SELECT is populated */
+ u8 jointype; /* Type of join between this table and the next */
+ i16 iCursor; /* The VDBE cursor number used to access this table */
+ Expr *pOn; /* The ON clause of a join */
+ IdList *pUsing; /* The USING clause of a join */
+ Bitmask colUsed; /* Bit N (1<<N) set if column N of pTab is used */
+ } a[1]; /* One entry for each identifier on the list */
+};
+
+/*
+** Permitted values of the SrcList.a.jointype field
+*/
+#define JT_INNER 0x0001 /* Any kind of inner or cross join */
+#define JT_CROSS 0x0002 /* Explicit use of the CROSS keyword */
+#define JT_NATURAL 0x0004 /* True for a "natural" join */
+#define JT_LEFT 0x0008 /* Left outer join */
+#define JT_RIGHT 0x0010 /* Right outer join */
+#define JT_OUTER 0x0020 /* The "OUTER" keyword is present */
+#define JT_ERROR 0x0040 /* unknown or unsupported join type */
+
+/*
+** For each nested loop in a WHERE clause implementation, the WhereInfo
+** structure contains a single instance of this structure. This structure
+** is intended to be private to the where.c module and should not be
+** accessed or modified by other modules.
+**
+** The pIdxInfo and pBestIdx fields are used to help pick the best
+** index on a virtual table. The pIdxInfo pointer contains indexing
+** information for the i-th table in the FROM clause before reordering.
+** All the pIdxInfo pointers are freed by whereInfoFree() in where.c.
+** The pBestIdx pointer is a copy of pIdxInfo for the i-th table after
+** FROM clause ordering. This is a little confusing so I will repeat
+** it in different words. WhereInfo.a[i].pIdxInfo is index information
+** for WhereInfo.pTabList.a[i]. WhereInfo.a[i].pBestIdx is the
+** index information for the i-th loop of the join. pBestIdx is always
+** either NULL or a copy of some pIdxInfo. So for cleanup it is
+** sufficient to free all of the pIdxInfo pointers.
+**
+*/
+struct WhereLevel {
+ int iFrom; /* Which entry in the FROM clause */
+ int flags; /* Flags associated with this level */
+ int iMem; /* First memory cell used by this level */
+ int iLeftJoin; /* Memory cell used to implement LEFT OUTER JOIN */
+ Index *pIdx; /* Index used. NULL if no index */
+ int iTabCur; /* The VDBE cursor used to access the table */
+ int iIdxCur; /* The VDBE cursor used to access pIdx */
+ int brk; /* Jump here to break out of the loop */
+ int cont; /* Jump here to continue with the next loop cycle */
+ int top; /* First instruction of interior of the loop */
+ int op, p1, p2; /* Opcode used to terminate the loop */
+ int nEq; /* Number of == or IN constraints on this loop */
+ int nIn; /* Number of IN operators constraining this loop */
+ int *aInLoop; /* Loop terminators for IN operators */
+ sqlite3_index_info *pBestIdx; /* Index information for this level */
+
+ /* The following field is really not part of the current level. But
+ ** we need a place to cache index information for each table in the
+ ** FROM clause and the WhereLevel structure is a convenient place.
+ */
+ sqlite3_index_info *pIdxInfo; /* Index info for n-th source table */
+};
+
+/*
+** The WHERE clause processing routine has two halves. The
+** first part does the start of the WHERE loop and the second
+** half does the tail of the WHERE loop. An instance of
+** this structure is returned by the first half and passed
+** into the second half to give some continuity.
+*/
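+
+/*
+** A minimal sketch of how the two halves fit together, using the
+** sqlite3WhereBegin()/sqlite3WhereEnd() prototypes declared later in this
+** file. The surrounding variables (pParse, pTabList, pWhere) are
+** hypothetical and stand in for whatever the caller has on hand:
+**
+**    WhereInfo *pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, 0);
+**    if( pWInfo ){
+**      ... generate code for the body of the loop ...
+**      sqlite3WhereEnd(pWInfo);
+**    }
+*/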
+struct WhereInfo {
+ Parse *pParse;
+ SrcList *pTabList; /* List of tables in the join */
+ int iTop; /* The very beginning of the WHERE loop */
+ int iContinue; /* Jump here to continue with next record */
+ int iBreak; /* Jump here to break out of the loop */
+ int nLevel; /* Number of nested loops */
+ sqlite3_index_info **apInfo; /* Array of pointers to index info structures */
+ WhereLevel a[1]; /* Information about each nested loop in the WHERE clause */
+};
+
+/*
+** A NameContext defines a context in which to resolve table and column
+** names. The context consists of a list of tables (the pSrcList field) and
+** a list of named expressions (pEList). The named expression list may
+** be NULL. The pSrcList corresponds to the FROM clause of a SELECT or
+** to the table being operated on by INSERT, UPDATE, or DELETE. The
+** pEList corresponds to the result set of a SELECT and is NULL for
+** other statements.
+**
+** NameContexts can be nested. When resolving names, the inner-most
+** context is searched first. If no match is found, the next outer
+** context is checked. If there is still no match, the next context
+** is checked. This process continues until either a match is found
+** or all contexts are checked. When a match is found, the nRef member of
+** the context containing the match is incremented.
+**
+** Each subquery gets a new NameContext. The pNext field points to the
+** NameContext in the parent query. Thus the process of scanning the
+** NameContext list corresponds to searching through successively outer
+** subqueries looking for a match.
+*/
+struct NameContext {
+ Parse *pParse; /* The parser */
+ SrcList *pSrcList; /* One or more tables used to resolve names */
+ ExprList *pEList; /* Optional list of named expressions */
+ int nRef; /* Number of names resolved by this context */
+ int nErr; /* Number of errors encountered while resolving names */
+ u8 allowAgg; /* Aggregate functions allowed here */
+ u8 hasAgg; /* True if aggregates are seen */
+ u8 isCheck; /* True if resolving names in a CHECK constraint */
+ int nDepth; /* Depth of subquery recursion. 1 for no recursion */
+ AggInfo *pAggInfo; /* Information about aggregates at this level */
+ NameContext *pNext; /* Next outer name context. NULL for outermost */
+};
+
+/*
+** An instance of the following structure contains all information
+** needed to generate code for a single SELECT statement.
+**
+** nLimit is set to -1 if there is no LIMIT clause. nOffset is set to 0.
+** If there is a LIMIT clause, the parser sets nLimit to the value of the
+** limit and nOffset to the value of the offset (or 0 if there is no
+** offset). But later on, nLimit and nOffset become the memory locations
+** in the VDBE that record the limit and offset counters.
+**
+** addrOpenEphm[] entries contain the address of OP_OpenEphemeral opcodes.
+** These addresses must be stored so that we can go back and fill in
+** the P3_KEYINFO and P2 parameters later. Neither the KeyInfo nor
+** the number of columns in P2 can be computed at the same time
+** as the OP_OpenEphm instruction is coded because not
+** enough information about the compound query is known at that point.
+** The KeyInfo for addrOpenEphm[0] and [1] contains collating sequences
+** for the result set. The KeyInfo for addrOpenEphm[2] contains collating
+** sequences for the ORDER BY clause.
+*/
+struct Select {
+ ExprList *pEList; /* The fields of the result */
+ u8 op; /* One of: TK_UNION TK_ALL TK_INTERSECT TK_EXCEPT */
+ u8 isDistinct; /* True if the DISTINCT keyword is present */
+ u8 isResolved; /* True once sqlite3SelectResolve() has run. */
+ u8 isAgg; /* True if this is an aggregate query */
+ u8 usesEphm; /* True if uses an OpenEphemeral opcode */
+ u8 disallowOrderBy; /* Do not allow an ORDER BY to be attached if TRUE */
+ SrcList *pSrc; /* The FROM clause */
+ Expr *pWhere; /* The WHERE clause */
+ ExprList *pGroupBy; /* The GROUP BY clause */
+ Expr *pHaving; /* The HAVING clause */
+ ExprList *pOrderBy; /* The ORDER BY clause */
+ Select *pPrior; /* Prior select in a compound select statement */
+ Select *pRightmost; /* Right-most select in a compound select statement */
+ Expr *pLimit; /* LIMIT expression. NULL means not used. */
+ Expr *pOffset; /* OFFSET expression. NULL means not used. */
+ int iLimit, iOffset; /* Memory registers holding LIMIT & OFFSET counters */
+ int addrOpenEphm[3]; /* OP_OpenEphem opcodes related to this select */
+};
+
+/*
+** The results of a select can be distributed in several ways.
+*/
+#define SRT_Union 1 /* Store result as keys in an index */
+#define SRT_Except 2 /* Remove result from a UNION index */
+#define SRT_Discard 3 /* Do not save the results anywhere */
+
+/* The ORDER BY clause is ignored for all of the above */
+#define IgnorableOrderby(X) (X<=SRT_Discard)
+
+#define SRT_Callback 4 /* Invoke a callback with each row of result */
+#define SRT_Mem 5 /* Store result in a memory cell */
+#define SRT_Set 6 /* Store non-null results as keys in an index */
+#define SRT_Table 7 /* Store result as data with an automatic rowid */
+#define SRT_EphemTab 8 /* Create transient tab and store like SRT_Table */
+#define SRT_Subroutine 9 /* Call a subroutine to handle results */
+#define SRT_Exists 10 /* Store 1 if the result is not empty */
+
+/*
+** An SQL parser context. A copy of this structure is passed through
+** the parser and down into all the parser action routines in order to
+** carry around information that is global to the entire parse.
+**
+** The structure is divided into two parts. When the parser and code
+** generator call themselves recursively, the first part of the structure
+** is constant but the second part is reset at the beginning and end of
+** each recursion.
+**
+** The nTableLock and aTableLock variables are only used if the shared-cache
+** feature is enabled (if sqlite3Tsd()->useSharedData is true). They are
+** used to store the set of table-locks required by the statement being
+** compiled. Function sqlite3TableLock() is used to add entries to the
+** list.
+*/
+struct Parse {
+ sqlite3 *db; /* The main database structure */
+ int rc; /* Return code from execution */
+ char *zErrMsg; /* An error message */
+ Vdbe *pVdbe; /* An engine for executing database bytecode */
+ u8 colNamesSet; /* TRUE after OP_ColumnName has been issued to pVdbe */
+ u8 nameClash; /* A permanent table name clashes with temp table name */
+ u8 checkSchema; /* Causes schema cookie check after an error */
+ u8 nested; /* Number of nested calls to the parser/code generator */
+ u8 parseError; /* True if a parsing error has been seen */
+ int nErr; /* Number of errors seen */
+ int nTab; /* Number of previously allocated VDBE cursors */
+ int nMem; /* Number of memory cells used so far */
+ int nSet; /* Number of sets used so far */
+ int ckOffset; /* Stack offset to data used by CHECK constraints */
+ u32 writeMask; /* Start a write transaction on these databases */
+ u32 cookieMask; /* Bitmask of schema verified databases */
+ int cookieGoto; /* Address of OP_Goto to cookie verifier subroutine */
+ int cookieValue[MAX_ATTACHED+2]; /* Values of cookies to verify */
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ int nTableLock; /* Number of locks in aTableLock */
+ TableLock *aTableLock; /* Required table locks for shared-cache mode */
+#endif
+
+ /* Above is constant between recursions. Below is reset before and after
+ ** each recursion */
+
+ int nVar; /* Number of '?' variables seen in the SQL so far */
+ int nVarExpr; /* Number of used slots in apVarExpr[] */
+ int nVarExprAlloc; /* Number of allocated slots in apVarExpr[] */
+ Expr **apVarExpr; /* Pointers to :aaa and $aaaa wildcard expressions */
+ u8 explain; /* True if the EXPLAIN flag is found on the query */
+ Token sErrToken; /* The token at which the error occurred */
+ Token sNameToken; /* Token with unqualified schema object name */
+ Token sLastToken; /* The last token parsed */
+ const char *zSql; /* All SQL text */
+ const char *zTail; /* All SQL text past the last semicolon parsed */
+ Table *pNewTable; /* A table being constructed by CREATE TABLE */
+ Trigger *pNewTrigger; /* Trigger under construct by a CREATE TRIGGER */
+ TriggerStack *trigStack; /* Trigger actions being coded */
+ const char *zAuthContext; /* The 6th parameter to db->xAuth callbacks */
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ Token sArg; /* Complete text of a module argument */
+ u8 declareVtab; /* True if inside sqlite3_declare_vtab() */
+ Table *pVirtualLock; /* Require virtual table lock on this table */
+#endif
+};
+
+#ifdef SQLITE_OMIT_VIRTUALTABLE
+ #define IN_DECLARE_VTAB 0
+#else
+ #define IN_DECLARE_VTAB (pParse->declareVtab)
+#endif
+
+/*
+** An instance of the following structure can be declared on a stack and used
+** to save the Parse.zAuthContext value so that it can be restored later.
+*/
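+
+/*
+** For example (a sketch only; the context string here is hypothetical),
+** the sqlite3AuthContextPush()/sqlite3AuthContextPop() routines declared
+** later in this file are used like so:
+**
+**    AuthContext sContext;
+**    sqlite3AuthContextPush(pParse, &sContext, "some-auth-context");
+**    ... generate code that runs under the new context ...
+**    sqlite3AuthContextPop(&sContext);
+*/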
+struct AuthContext {
+ const char *zAuthContext; /* Put saved Parse.zAuthContext here */
+ Parse *pParse; /* The Parse structure */
+};
+
+/*
+** Bitfield flags for P2 value in OP_Insert and OP_Delete
+*/
+#define OPFLAG_NCHANGE 1 /* Set to update db->nChange */
+#define OPFLAG_LASTROWID 2 /* Set to update db->lastRowid */
+#define OPFLAG_ISUPDATE 4 /* This OP_Insert is an sql UPDATE */
+
+/*
+ * Each trigger present in the database schema is stored as an instance of
+ * struct Trigger.
+ *
+ * Pointers to instances of struct Trigger are stored in two ways.
+ * 1. In the "trigHash" hash table (part of the sqlite3* that represents the
+ * database). This allows Trigger structures to be retrieved by name.
+ * 2. All triggers associated with a single table form a linked list, using the
+ * pNext member of struct Trigger. A pointer to the first element of the
+ * linked list is stored as the "pTrigger" member of the associated
+ * struct Table.
+ *
+ * The "step_list" member points to the first element of a linked list
+ * containing the SQL statements specified as the trigger program.
+ */
+struct Trigger {
+ char *name; /* The name of the trigger */
+ char *table; /* The table or view to which the trigger applies */
+ u8 op; /* One of TK_DELETE, TK_UPDATE, TK_INSERT */
+ u8 tr_tm; /* One of TRIGGER_BEFORE, TRIGGER_AFTER */
+ Expr *pWhen; /* The WHEN clause of the expression (may be NULL) */
+ IdList *pColumns; /* If this is an UPDATE OF <column-list> trigger,
+ the <column-list> is stored here */
+ int foreach; /* One of TK_ROW or TK_STATEMENT */
+ Token nameToken; /* Token containing zName. Use during parsing only */
+ Schema *pSchema; /* Schema containing the trigger */
+ Schema *pTabSchema; /* Schema containing the table */
+ TriggerStep *step_list; /* Link list of trigger program steps */
+ Trigger *pNext; /* Next trigger associated with the table */
+};
+
+/*
+** A trigger is either a BEFORE or an AFTER trigger. The following constants
+** determine which.
+**
+** If there are multiple triggers, you might have some BEFORE and some AFTER.
+** In that case, the constants below can be ORed together.
+*/
+#define TRIGGER_BEFORE 1
+#define TRIGGER_AFTER 2
+
+/*
+ * An instance of struct TriggerStep is used to store a single SQL statement
+ * that is a part of a trigger-program.
+ *
+ * Instances of struct TriggerStep are stored in a singly linked list (linked
+ * using the "pNext" member) referenced by the "step_list" member of the
+ * associated struct Trigger instance. The first element of the linked list is
+ * the first step of the trigger-program.
+ *
+ * The "op" member indicates whether this is a "DELETE", "INSERT", "UPDATE" or
+ * "SELECT" statement. The meanings of the other members is determined by the
+ * value of "op" as follows:
+ *
+ * (op == TK_INSERT)
+ * orconf -> stores the ON CONFLICT algorithm
+ * pSelect -> If this is an INSERT INTO ... SELECT ... statement, then
+ * this stores a pointer to the SELECT statement. Otherwise NULL.
+ * target -> A token holding the name of the table to insert into.
+ * pExprList -> If this is an INSERT INTO ... VALUES ... statement, then
+ * this stores values to be inserted. Otherwise NULL.
+ * pIdList -> If this is an INSERT INTO ... (<column-names>) VALUES ...
+ * statement, then this stores the column-names to be
+ * inserted into.
+ *
+ * (op == TK_DELETE)
+ * target -> A token holding the name of the table to delete from.
+ * pWhere -> The WHERE clause of the DELETE statement if one is specified.
+ * Otherwise NULL.
+ *
+ * (op == TK_UPDATE)
+ * target -> A token holding the name of the table to update rows of.
+ * pWhere -> The WHERE clause of the UPDATE statement if one is specified.
+ * Otherwise NULL.
+ * pExprList -> A list of the columns to update and the expressions to update
+ * them to. See sqlite3Update() documentation of "pChanges"
+ * argument.
+ *
+ */
+struct TriggerStep {
+ int op; /* One of TK_DELETE, TK_UPDATE, TK_INSERT, TK_SELECT */
+ int orconf; /* OE_Rollback etc. */
+ Trigger *pTrig; /* The trigger that this step is a part of */
+
+ Select *pSelect; /* Valid for SELECT and sometimes
+ INSERT steps (when pExprList == 0) */
+ Token target; /* Valid for DELETE, UPDATE, INSERT steps */
+ Expr *pWhere; /* Valid for DELETE, UPDATE steps */
+ ExprList *pExprList; /* Valid for UPDATE statements and sometimes
+ INSERT steps (when pSelect == 0) */
+ IdList *pIdList; /* Valid for INSERT statements only */
+ TriggerStep *pNext; /* Next in the link-list */
+ TriggerStep *pLast; /* Last element in link-list. Valid for 1st elem only */
+};
+
+/*
+ * An instance of struct TriggerStack stores information required during code
+ * generation of a single trigger program. While the trigger program is being
+ * coded, its associated TriggerStack instance is pointed to by the
+ * "pTriggerStack" member of the Parse structure.
+ *
+ * The pTab member points to the table that triggers are being coded on. The
+ * newIdx member contains the index of the vdbe cursor that points at the temp
+ * table that stores the new.* references. If new.* references are not valid
+ * for the trigger being coded (for example an ON DELETE trigger), then newIdx
+ * is set to -1. The oldIdx member is analogous to newIdx, for old.* references.
+ *
+ * The ON CONFLICT policy to be used for the trigger program steps is stored
+ * as the orconf member. If this is OE_Default, then the ON CONFLICT clause
+ * specified for individual trigger steps is used.
+ *
+ * struct TriggerStack has a "pNext" member, to allow linked lists to be
+ * constructed. When coding nested triggers (triggers fired by other triggers)
+ * each nested trigger stores its parent trigger's TriggerStack as the "pNext"
+ * pointer. Once the nested trigger has been coded, the pNext value is restored
+ * to the pTriggerStack member of the Parse structure and coding of the parent
+ * trigger continues.
+ *
+ * Before a nested trigger is coded, the linked list pointed to by the
+ * pTriggerStack is scanned to ensure that the trigger is not about to be coded
+ * recursively. If this condition is detected, the nested trigger is not coded.
+ */
+struct TriggerStack {
+ Table *pTab; /* Table that triggers are currently being coded on */
+ int newIdx; /* Index of vdbe cursor to "new" temp table */
+ int oldIdx; /* Index of vdbe cursor to "old" temp table */
+ int orconf; /* Current orconf policy */
+ int ignoreJump; /* where to jump to for a RAISE(IGNORE) */
+ Trigger *pTrigger; /* The trigger currently being coded */
+ TriggerStack *pNext; /* Next trigger down on the trigger stack */
+};
+
+/*
+** The following structure contains information used by the sqliteFix...
+** routines as they walk the parse tree to make database references
+** explicit.
+*/
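+
+/*
+** Typical use (a sketch, not taken from any one caller): initialize the
+** fixer with sqlite3FixInit() and then apply it to a parse-tree fragment
+** with one of the sqlite3Fix...() routines declared later in this file.
+** The iDb, pName and pSrcList variables below are illustrative:
+**
+**    DbFixer sFix;
+**    if( sqlite3FixInit(&sFix, pParse, iDb, "trigger", pName)
+**     && sqlite3FixSrcList(&sFix, pSrcList) ){
+**      ... an error message has been left in pParse ...
+**    }
+*/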
+typedef struct DbFixer DbFixer;
+struct DbFixer {
+ Parse *pParse; /* The parsing context. Error messages written here */
+ const char *zDb; /* Make sure all objects are contained in this database */
+ const char *zType; /* Type of the container - used for error messages */
+ const Token *pName; /* Name of the container - used for error messages */
+};
+
+/*
+** A pointer to this structure is used to communicate information
+** from sqlite3Init and OP_ParseSchema into the sqlite3InitCallback.
+*/
+typedef struct {
+ sqlite3 *db; /* The database being initialized */
+ int iDb; /* 0 for main database. 1 for TEMP, 2.. for ATTACHed */
+ char **pzErrMsg; /* Error message stored here */
+ int rc; /* Result code stored here */
+} InitData;
+
+/*
+ * This global flag is set for performance testing of triggers. When it is set
+ * SQLite will perform the overhead of building new and old trigger references
+ * even when no triggers exist
+ */
+extern int sqlite3_always_code_trigger_setup;
+
+/*
+** The SQLITE_CORRUPT_BKPT macro can be either a constant (for production
+** builds) or a function call (for debugging). If it is a function call,
+** it allows the operator to set a breakpoint at the spot where database
+** corruption is first detected.
+*/
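+
+/*
+** A typical use inside the core (illustrative only; the condition name is
+** hypothetical): when a routine detects an inconsistent database image it
+** returns the macro so that debug builds stop inside sqlite3Corrupt():
+**
+**    if( pagesDontMatch ){
+**      return SQLITE_CORRUPT_BKPT;
+**    }
+*/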
+#ifdef SQLITE_DEBUG
+ extern int sqlite3Corrupt(void);
+# define SQLITE_CORRUPT_BKPT sqlite3Corrupt()
+#else
+# define SQLITE_CORRUPT_BKPT SQLITE_CORRUPT
+#endif
+
+/*
+** Internal function prototypes
+*/
+int sqlite3StrICmp(const char *, const char *);
+int sqlite3StrNICmp(const char *, const char *, int);
+int sqlite3HashNoCase(const char *, int);
+int sqlite3IsNumber(const char*, int*, u8);
+int sqlite3Compare(const char *, const char *);
+int sqlite3SortCompare(const char *, const char *);
+void sqlite3RealToSortable(double r, char *);
+
+void *sqlite3Malloc(int,int);
+void *sqlite3MallocRaw(int,int);
+void sqlite3Free(void*);
+void *sqlite3Realloc(void*,int);
+char *sqlite3StrDup(const char*);
+char *sqlite3StrNDup(const char*, int);
+# define sqlite3CheckMemory(a,b)
+void sqlite3ReallocOrFree(void**,int);
+void sqlite3FreeX(void*);
+void *sqlite3MallocX(int);
+int sqlite3AllocSize(void *);
+
+char *sqlite3MPrintf(const char*, ...);
+char *sqlite3VMPrintf(const char*, va_list);
+void sqlite3DebugPrintf(const char*, ...);
+void *sqlite3TextToPtr(const char*);
+void sqlite3SetString(char **, ...);
+void sqlite3ErrorMsg(Parse*, const char*, ...);
+void sqlite3ErrorClear(Parse*);
+void sqlite3Dequote(char*);
+void sqlite3DequoteExpr(Expr*);
+int sqlite3KeywordCode(const unsigned char*, int);
+int sqlite3RunParser(Parse*, const char*, char **);
+void sqlite3FinishCoding(Parse*);
+Expr *sqlite3Expr(int, Expr*, Expr*, const Token*);
+Expr *sqlite3ExprOrFree(int, Expr*, Expr*, const Token*);
+Expr *sqlite3RegisterExpr(Parse*,Token*);
+Expr *sqlite3ExprAnd(Expr*, Expr*);
+void sqlite3ExprSpan(Expr*,Token*,Token*);
+Expr *sqlite3ExprFunction(ExprList*, Token*);
+void sqlite3ExprAssignVarNumber(Parse*, Expr*);
+void sqlite3ExprDelete(Expr*);
+ExprList *sqlite3ExprListAppend(ExprList*,Expr*,Token*);
+void sqlite3ExprListDelete(ExprList*);
+int sqlite3Init(sqlite3*, char**);
+int sqlite3InitCallback(void*, int, char**, char**);
+void sqlite3Pragma(Parse*,Token*,Token*,Token*,int);
+void sqlite3ResetInternalSchema(sqlite3*, int);
+void sqlite3BeginParse(Parse*,int);
+void sqlite3RollbackInternalChanges(sqlite3*);
+void sqlite3CommitInternalChanges(sqlite3*);
+Table *sqlite3ResultSetOfSelect(Parse*,char*,Select*);
+void sqlite3OpenMasterTable(Parse *, int);
+void sqlite3StartTable(Parse*,Token*,Token*,int,int,int,int);
+void sqlite3AddColumn(Parse*,Token*);
+void sqlite3AddNotNull(Parse*, int);
+void sqlite3AddPrimaryKey(Parse*, ExprList*, int, int, int);
+void sqlite3AddCheckConstraint(Parse*, Expr*);
+void sqlite3AddColumnType(Parse*,Token*);
+void sqlite3AddDefaultValue(Parse*,Expr*);
+void sqlite3AddCollateType(Parse*, const char*, int);
+void sqlite3EndTable(Parse*,Token*,Token*,Select*);
+
+void sqlite3CreateView(Parse*,Token*,Token*,Token*,Select*,int,int);
+
+#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE)
+ int sqlite3ViewGetColumnNames(Parse*,Table*);
+#else
+# define sqlite3ViewGetColumnNames(A,B) 0
+#endif
+
+void sqlite3DropTable(Parse*, SrcList*, int, int);
+void sqlite3DeleteTable(sqlite3*, Table*);
+void sqlite3Insert(Parse*, SrcList*, ExprList*, Select*, IdList*, int);
+int sqlite3ArrayAllocate(void**,int,int);
+IdList *sqlite3IdListAppend(IdList*, Token*);
+int sqlite3IdListIndex(IdList*,const char*);
+SrcList *sqlite3SrcListAppend(SrcList*, Token*, Token*);
+void sqlite3SrcListAddAlias(SrcList*, Token*);
+void sqlite3SrcListAssignCursors(Parse*, SrcList*);
+void sqlite3IdListDelete(IdList*);
+void sqlite3SrcListDelete(SrcList*);
+void sqlite3CreateIndex(Parse*,Token*,Token*,SrcList*,ExprList*,int,Token*,
+ Token*, int, int);
+void sqlite3DropIndex(Parse*, SrcList*, int);
+void sqlite3AddKeyType(Vdbe*, ExprList*);
+void sqlite3AddIdxKeyType(Vdbe*, Index*);
+int sqlite3Select(Parse*, Select*, int, int, Select*, int, int*, char *aff);
+Select *sqlite3SelectNew(ExprList*,SrcList*,Expr*,ExprList*,Expr*,ExprList*,
+ int,Expr*,Expr*);
+void sqlite3SelectDelete(Select*);
+void sqlite3SelectUnbind(Select*);
+Table *sqlite3SrcListLookup(Parse*, SrcList*);
+int sqlite3IsReadOnly(Parse*, Table*, int);
+void sqlite3OpenTable(Parse*, int iCur, int iDb, Table*, int);
+void sqlite3DeleteFrom(Parse*, SrcList*, Expr*);
+void sqlite3Update(Parse*, SrcList*, ExprList*, Expr*, int);
+WhereInfo *sqlite3WhereBegin(Parse*, SrcList*, Expr*, ExprList**);
+void sqlite3WhereEnd(WhereInfo*);
+void sqlite3ExprCode(Parse*, Expr*);
+void sqlite3ExprCodeAndCache(Parse*, Expr*);
+int sqlite3ExprCodeExprList(Parse*, ExprList*);
+void sqlite3ExprIfTrue(Parse*, Expr*, int, int);
+void sqlite3ExprIfFalse(Parse*, Expr*, int, int);
+void sqlite3NextedParse(Parse*, const char*, ...);
+Table *sqlite3FindTable(sqlite3*,const char*, const char*);
+Table *sqlite3LocateTable(Parse*,const char*, const char*);
+Index *sqlite3FindIndex(sqlite3*,const char*, const char*);
+void sqlite3UnlinkAndDeleteTable(sqlite3*,int,const char*);
+void sqlite3UnlinkAndDeleteIndex(sqlite3*,int,const char*);
+void sqlite3Vacuum(Parse*);
+int sqlite3RunVacuum(char**, sqlite3*);
+char *sqlite3NameFromToken(Token*);
+int sqlite3ExprCheck(Parse*, Expr*, int, int*);
+int sqlite3ExprCompare(Expr*, Expr*);
+int sqliteFuncId(Token*);
+int sqlite3ExprResolveNames(NameContext *, Expr *);
+int sqlite3ExprAnalyzeAggregates(NameContext*, Expr*);
+int sqlite3ExprAnalyzeAggList(NameContext*,ExprList*);
+Vdbe *sqlite3GetVdbe(Parse*);
+Expr *sqlite3CreateIdExpr(const char*);
+void sqlite3Randomness(int, void*);
+void sqlite3RollbackAll(sqlite3*);
+void sqlite3CodeVerifySchema(Parse*, int);
+void sqlite3BeginTransaction(Parse*, int);
+void sqlite3CommitTransaction(Parse*);
+void sqlite3RollbackTransaction(Parse*);
+int sqlite3ExprIsConstant(Expr*);
+int sqlite3ExprIsConstantOrFunction(Expr*);
+int sqlite3ExprIsInteger(Expr*, int*);
+int sqlite3IsRowid(const char*);
+void sqlite3GenerateRowDelete(sqlite3*, Vdbe*, Table*, int, int);
+void sqlite3GenerateRowIndexDelete(Vdbe*, Table*, int, char*);
+void sqlite3GenerateIndexKey(Vdbe*, Index*, int);
+void sqlite3GenerateConstraintChecks(Parse*,Table*,int,char*,int,int,int,int);
+void sqlite3CompleteInsertion(Parse*, Table*, int, char*, int, int, int);
+void sqlite3OpenTableAndIndices(Parse*, Table*, int, int);
+void sqlite3BeginWriteOperation(Parse*, int, int);
+Expr *sqlite3ExprDup(Expr*);
+void sqlite3TokenCopy(Token*, Token*);
+ExprList *sqlite3ExprListDup(ExprList*);
+SrcList *sqlite3SrcListDup(SrcList*);
+IdList *sqlite3IdListDup(IdList*);
+Select *sqlite3SelectDup(Select*);
+FuncDef *sqlite3FindFunction(sqlite3*,const char*,int,int,u8,int);
+void sqlite3RegisterBuiltinFunctions(sqlite3*);
+void sqlite3RegisterDateTimeFunctions(sqlite3*);
+int sqlite3SafetyOn(sqlite3*);
+int sqlite3SafetyOff(sqlite3*);
+int sqlite3SafetyCheck(sqlite3*);
+void sqlite3ChangeCookie(sqlite3*, Vdbe*, int);
+
+#ifndef SQLITE_OMIT_TRIGGER
+ void sqlite3BeginTrigger(Parse*, Token*,Token*,int,int,IdList*,SrcList*,
+ int,Expr*,int, int);
+ void sqlite3FinishTrigger(Parse*, TriggerStep*, Token*);
+ void sqlite3DropTrigger(Parse*, SrcList*, int);
+ void sqlite3DropTriggerPtr(Parse*, Trigger*);
+ int sqlite3TriggersExist(Parse*, Table*, int, ExprList*);
+ int sqlite3CodeRowTrigger(Parse*, int, ExprList*, int, Table *, int, int,
+ int, int);
+ void sqliteViewTriggers(Parse*, Table*, Expr*, int, ExprList*);
+ void sqlite3DeleteTriggerStep(TriggerStep*);
+ TriggerStep *sqlite3TriggerSelectStep(Select*);
+ TriggerStep *sqlite3TriggerInsertStep(Token*, IdList*, ExprList*,Select*,int);
+ TriggerStep *sqlite3TriggerUpdateStep(Token*, ExprList*, Expr*, int);
+ TriggerStep *sqlite3TriggerDeleteStep(Token*, Expr*);
+ void sqlite3DeleteTrigger(Trigger*);
+ void sqlite3UnlinkAndDeleteTrigger(sqlite3*,int,const char*);
+#else
+# define sqlite3TriggersExist(A,B,C,D,E,F) 0
+# define sqlite3DeleteTrigger(A)
+# define sqlite3DropTriggerPtr(A,B)
+# define sqlite3UnlinkAndDeleteTrigger(A,B,C)
+# define sqlite3CodeRowTrigger(A,B,C,D,E,F,G,H,I) 0
+#endif
+
+int sqlite3JoinType(Parse*, Token*, Token*, Token*);
+void sqlite3CreateForeignKey(Parse*, ExprList*, Token*, ExprList*, int);
+void sqlite3DeferForeignKey(Parse*, int);
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ void sqlite3AuthRead(Parse*,Expr*,SrcList*);
+ int sqlite3AuthCheck(Parse*,int, const char*, const char*, const char*);
+ void sqlite3AuthContextPush(Parse*, AuthContext*, const char*);
+ void sqlite3AuthContextPop(AuthContext*);
+#else
+# define sqlite3AuthRead(a,b,c)
+# define sqlite3AuthCheck(a,b,c,d,e) SQLITE_OK
+# define sqlite3AuthContextPush(a,b,c)
+# define sqlite3AuthContextPop(a) ((void)(a))
+#endif
+void sqlite3Attach(Parse*, Expr*, Expr*, Expr*);
+void sqlite3Detach(Parse*, Expr*);
+int sqlite3BtreeFactory(const sqlite3 *db, const char *zFilename,
+ int omitJournal, int nCache, Btree **ppBtree);
+int sqlite3FixInit(DbFixer*, Parse*, int, const char*, const Token*);
+int sqlite3FixSrcList(DbFixer*, SrcList*);
+int sqlite3FixSelect(DbFixer*, Select*);
+int sqlite3FixExpr(DbFixer*, Expr*);
+int sqlite3FixExprList(DbFixer*, ExprList*);
+int sqlite3FixTriggerStep(DbFixer*, TriggerStep*);
+int sqlite3AtoF(const char *z, double*);
+char *sqlite3_snprintf(int,char*,const char*,...);
+int sqlite3GetInt32(const char *, int*);
+int sqlite3FitsIn64Bits(const char *);
+int sqlite3utf16ByteLen(const void *pData, int nChar);
+int sqlite3utf8CharLen(const char *pData, int nByte);
+int sqlite3ReadUtf8(const unsigned char *);
+int sqlite3PutVarint(unsigned char *, u64);
+int sqlite3GetVarint(const unsigned char *, u64 *);
+int sqlite3GetVarint32(const unsigned char *, u32 *);
+int sqlite3VarintLen(u64 v);
+void sqlite3IndexAffinityStr(Vdbe *, Index *);
+void sqlite3TableAffinityStr(Vdbe *, Table *);
+char sqlite3CompareAffinity(Expr *pExpr, char aff2);
+int sqlite3IndexAffinityOk(Expr *pExpr, char idx_affinity);
+char sqlite3ExprAffinity(Expr *pExpr);
+int sqlite3atoi64(const char*, i64*);
+void sqlite3Error(sqlite3*, int, const char*,...);
+void *sqlite3HexToBlob(const char *z);
+int sqlite3TwoPartName(Parse *, Token *, Token *, Token **);
+const char *sqlite3ErrStr(int);
+int sqlite3ReadUniChar(const char *zStr, int *pOffset, u8 *pEnc, int fold);
+int sqlite3ReadSchema(Parse *pParse);
+CollSeq *sqlite3FindCollSeq(sqlite3*,u8 enc, const char *,int,int);
+CollSeq *sqlite3LocateCollSeq(Parse *pParse, const char *zName, int nName);
+CollSeq *sqlite3ExprCollSeq(Parse *pParse, Expr *pExpr);
+int sqlite3CheckCollSeq(Parse *, CollSeq *);
+int sqlite3CheckIndexCollSeq(Parse *, Index *);
+int sqlite3CheckObjectName(Parse *, const char *);
+void sqlite3VdbeSetChanges(sqlite3 *, int);
+void sqlite3utf16Substr(sqlite3_context *,int,sqlite3_value **);
+
+const void *sqlite3ValueText(sqlite3_value*, u8);
+int sqlite3ValueBytes(sqlite3_value*, u8);
+void sqlite3ValueSetStr(sqlite3_value*, int, const void *,u8, void(*)(void*));
+void sqlite3ValueFree(sqlite3_value*);
+sqlite3_value *sqlite3ValueNew(void);
+char *sqlite3utf16to8(const void*, int);
+int sqlite3ValueFromExpr(Expr *, u8, u8, sqlite3_value **);
+void sqlite3ValueApplyAffinity(sqlite3_value *, u8, u8);
+extern const unsigned char sqlite3UpperToLower[];
+void sqlite3RootPageMoved(Db*, int, int);
+void sqlite3Reindex(Parse*, Token*, Token*);
+void sqlite3AlterFunctions(sqlite3*);
+void sqlite3AlterRenameTable(Parse*, SrcList*, Token*);
+int sqlite3GetToken(const unsigned char *, int *);
+void sqlite3NestedParse(Parse*, const char*, ...);
+void sqlite3ExpirePreparedStatements(sqlite3*);
+void sqlite3CodeSubselect(Parse *, Expr *);
+int sqlite3SelectResolve(Parse *, Select *, NameContext *);
+void sqlite3ColumnDefault(Vdbe *, Table *, int);
+void sqlite3AlterFinishAddColumn(Parse *, Token *);
+void sqlite3AlterBeginAddColumn(Parse *, SrcList *);
+const char *sqlite3TestErrorName(int);
+CollSeq *sqlite3GetCollSeq(sqlite3*, CollSeq *, const char *, int);
+char sqlite3AffinityType(const Token*);
+void sqlite3Analyze(Parse*, Token*, Token*);
+int sqlite3InvokeBusyHandler(BusyHandler*);
+int sqlite3FindDb(sqlite3*, Token*);
+void sqlite3AnalysisLoad(sqlite3*,int iDB);
+void sqlite3DefaultRowEst(Index*);
+void sqlite3RegisterLikeFunctions(sqlite3*, int);
+int sqlite3IsLikeFunction(sqlite3*,Expr*,int*,char*);
+ThreadData *sqlite3ThreadData(void);
+const ThreadData *sqlite3ThreadDataReadOnly(void);
+void sqlite3ReleaseThreadData(void);
+void sqlite3AttachFunctions(sqlite3 *);
+void sqlite3MinimumFileFormat(Parse*, int, int);
+void sqlite3SchemaFree(void *);
+Schema *sqlite3SchemaGet(Btree *);
+int sqlite3SchemaToIndex(sqlite3 *db, Schema *);
+KeyInfo *sqlite3IndexKeyinfo(Parse *, Index *);
+int sqlite3CreateFunc(sqlite3 *, const char *, int, int, void *,
+ void (*)(sqlite3_context*,int,sqlite3_value **),
+ void (*)(sqlite3_context*,int,sqlite3_value **), void (*)(sqlite3_context*));
+int sqlite3ApiExit(sqlite3 *db, int);
+int sqlite3MallocFailed(void);
+void sqlite3FailedMalloc(void);
+void sqlite3AbortOtherActiveVdbes(sqlite3 *, Vdbe *);
+int sqlite3OpenTempDatabase(Parse *);
+
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+ void sqlite3CloseExtensions(sqlite3*);
+ int sqlite3AutoLoadExtensions(sqlite3*);
+#else
+# define sqlite3CloseExtensions(X)
+# define sqlite3AutoLoadExtensions(X) SQLITE_OK
+#endif
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ void sqlite3TableLock(Parse *, int, int, u8, const char *);
+#else
+ #define sqlite3TableLock(v,w,x,y,z)
+#endif
+
+#ifdef SQLITE_MEMDEBUG
+ void sqlite3MallocDisallow(void);
+ void sqlite3MallocAllow(void);
+ int sqlite3TestMallocFail(void);
+#else
+ #define sqlite3TestMallocFail() 0
+ #define sqlite3MallocDisallow()
+ #define sqlite3MallocAllow()
+#endif
+
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ void *sqlite3ThreadSafeMalloc(int);
+ void sqlite3ThreadSafeFree(void *);
+#else
+ #define sqlite3ThreadSafeMalloc sqlite3MallocX
+ #define sqlite3ThreadSafeFree sqlite3FreeX
+#endif
+
+#ifdef SQLITE_OMIT_VIRTUALTABLE
+# define sqlite3VtabClear(X)
+# define sqlite3VtabSync(X,Y) (Y)
+# define sqlite3VtabRollback(X)
+# define sqlite3VtabCommit(X)
+#else
+ void sqlite3VtabClear(Table*);
+ int sqlite3VtabSync(sqlite3 *db, int rc);
+ int sqlite3VtabRollback(sqlite3 *db);
+ int sqlite3VtabCommit(sqlite3 *db);
+#endif
+void sqlite3VtabLock(sqlite3_vtab*);
+void sqlite3VtabUnlock(sqlite3_vtab*);
+void sqlite3VtabBeginParse(Parse*, Token*, Token*, Token*);
+void sqlite3VtabFinishParse(Parse*, Token*);
+void sqlite3VtabArgInit(Parse*);
+void sqlite3VtabArgExtend(Parse*, Token*);
+int sqlite3VtabCallCreate(sqlite3*, int, const char *, char **);
+int sqlite3VtabCallConnect(Parse*, Table*);
+int sqlite3VtabCallDestroy(sqlite3*, int, const char *);
+int sqlite3VtabBegin(sqlite3 *, sqlite3_vtab *);
+FuncDef *sqlite3VtabOverloadFunction(FuncDef*, int nArg, Expr*);
+void sqlite3InvalidFunction(sqlite3_context*,int,sqlite3_value**);
+
+#ifdef SQLITE_SSE
+#include "sseInt.h"
+#endif
+
+#endif
Added: freeswitch/trunk/libs/sqlite/src/table.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/table.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,198 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains the sqlite3_get_table() and sqlite3_free_table()
+** interface routines. These are just wrappers around the main
+** interface routine of sqlite3_exec().
+**
+** These routines are in a separate file so that they will not be linked
+** if they are not used.
+*/
+#include "sqliteInt.h"
+#include <stdlib.h>
+#include <string.h>
+
+#ifndef SQLITE_OMIT_GET_TABLE
+
+/*
+** This structure is used to pass data from sqlite3_get_table() through
+** to the callback function it uses to build the result.
+*/
+typedef struct TabResult {
+ char **azResult; /* Accumulated output table */
+ char *zErrMsg; /* Error message text, if an error occurs */
+ int nResult; /* Number of results (initialized but otherwise unused here) */
+ int nAlloc; /* Slots allocated for azResult[] */
+ int nRow; /* Number of rows in the result */
+ int nColumn; /* Number of columns in the result */
+ int nData; /* Slots used in azResult[] */
+ int rc; /* Return code from sqlite3_exec() */
+} TabResult;
+
+/*
+** This routine is called once for each row in the result table. Its job
+** is to fill in the TabResult structure appropriately, allocating new
+** memory as necessary.
+*/
+static int sqlite3_get_table_cb(void *pArg, int nCol, char **argv, char **colv){
+ TabResult *p = (TabResult*)pArg;
+ int need;
+ int i;
+ char *z;
+
+ /* Make sure there is enough space in p->azResult to hold everything
+ ** we need to remember from this invocation of the callback.
+ */
+ if( p->nRow==0 && argv!=0 ){
+ need = nCol*2;
+ }else{
+ need = nCol;
+ }
+ if( p->nData + need >= p->nAlloc ){
+ char **azNew;
+ p->nAlloc = p->nAlloc*2 + need + 1;
+ azNew = sqlite3_realloc( p->azResult, sizeof(char*)*p->nAlloc );
+ if( azNew==0 ) goto malloc_failed;
+ p->azResult = azNew;
+ }
+
+ /* If this is the first row, then generate an extra row containing
+ ** the names of all columns.
+ */
+ if( p->nRow==0 ){
+ p->nColumn = nCol;
+ for(i=0; i<nCol; i++){
+ if( colv[i]==0 ){
+ z = sqlite3_mprintf("");
+ }else{
+ z = sqlite3_mprintf("%s", colv[i]);
+ }
+ p->azResult[p->nData++] = z;
+ }
+ }else if( p->nColumn!=nCol ){
+ sqlite3SetString(&p->zErrMsg,
+ "sqlite3_get_table() called with two or more incompatible queries",
+ (char*)0);
+ p->rc = SQLITE_ERROR;
+ return 1;
+ }
+
+ /* Copy over the row data
+ */
+ if( argv!=0 ){
+ for(i=0; i<nCol; i++){
+ if( argv[i]==0 ){
+ z = 0;
+ }else{
+ z = sqlite3_malloc( strlen(argv[i])+1 );
+ if( z==0 ) goto malloc_failed;
+ strcpy(z, argv[i]);
+ }
+ p->azResult[p->nData++] = z;
+ }
+ p->nRow++;
+ }
+ return 0;
+
+malloc_failed:
+ p->rc = SQLITE_NOMEM;
+ return 1;
+}
+
+/*
+** Query the database. But instead of invoking a callback for each row,
+** malloc() space to hold the result and return the entire result set
+** at the conclusion of the call.
+**
+** The result that is written to ***pazResult is held in memory obtained
+** from malloc(). But the caller cannot free this memory directly.
+** Instead, the entire table should be passed to sqlite3_free_table() when
+** the calling procedure is finished using it.
+*/
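+
+/*
+** A caller-side usage sketch (the table name "t1" is hypothetical; db is an
+** open database handle):
+**
+**    char **azResult;
+**    char *zErrMsg = 0;
+**    int nRow, nColumn;
+**    int rc = sqlite3_get_table(db, "SELECT * FROM t1",
+**                               &azResult, &nRow, &nColumn, &zErrMsg);
+**    if( rc==SQLITE_OK ){
+**      ... azResult[0..nColumn-1] hold the column names and the row data
+**      ... follows, nColumn entries per row ...
+**      sqlite3_free_table(azResult);
+**    }else{
+**      sqlite3_free(zErrMsg);
+**    }
+*/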
+int sqlite3_get_table(
+ sqlite3 *db, /* The database on which the SQL executes */
+ const char *zSql, /* The SQL to be executed */
+ char ***pazResult, /* Write the result table here */
+ int *pnRow, /* Write the number of rows in the result here */
+ int *pnColumn, /* Write the number of columns of result here */
+ char **pzErrMsg /* Write error messages here */
+){
+ int rc;
+ TabResult res;
+ if( pazResult==0 ){ return SQLITE_ERROR; }
+ *pazResult = 0;
+ if( pnColumn ) *pnColumn = 0;
+ if( pnRow ) *pnRow = 0;
+ res.zErrMsg = 0;
+ res.nResult = 0;
+ res.nRow = 0;
+ res.nColumn = 0;
+ res.nData = 1;
+ res.nAlloc = 20;
+ res.rc = SQLITE_OK;
+ res.azResult = sqlite3_malloc( sizeof(char*)*res.nAlloc );
+ if( res.azResult==0 ) return SQLITE_NOMEM;
+ res.azResult[0] = 0;
+ rc = sqlite3_exec(db, zSql, sqlite3_get_table_cb, &res, pzErrMsg);
+ if( res.azResult ){
+ assert( sizeof(res.azResult[0])>= sizeof(res.nData) );
+ res.azResult[0] = (char*)res.nData;
+ }
+ if( (rc&0xff)==SQLITE_ABORT ){
+ sqlite3_free_table(&res.azResult[1]);
+ if( res.zErrMsg ){
+ if( pzErrMsg ){
+ sqlite3_free(*pzErrMsg);
+ *pzErrMsg = sqlite3_mprintf("%s",res.zErrMsg);
+ }
+ sqliteFree(res.zErrMsg);
+ }
+ db->errCode = res.rc;
+ return res.rc & db->errMask;
+ }
+ sqliteFree(res.zErrMsg);
+ if( rc!=SQLITE_OK ){
+ sqlite3_free_table(&res.azResult[1]);
+ return rc & db->errMask;
+ }
+ if( res.nAlloc>res.nData ){
+ char **azNew;
+ azNew = sqlite3_realloc( res.azResult, sizeof(char*)*(res.nData+1) );
+ if( azNew==0 ){
+ sqlite3_free_table(&res.azResult[1]);
+ return SQLITE_NOMEM;
+ }
+ res.nAlloc = res.nData+1;
+ res.azResult = azNew;
+ }
+ *pazResult = &res.azResult[1];
+ if( pnColumn ) *pnColumn = res.nColumn;
+ if( pnRow ) *pnRow = res.nRow;
+ return rc & db->errMask;
+}
+
+/*
+** This routine frees the space the sqlite3_get_table() malloced.
+*/
+void sqlite3_free_table(
+ char **azResult /* Result returned from from sqlite3_get_table() */
+){
+ if( azResult ){
+ int i, n;
+ azResult--;
+ if( azResult==0 ) return;
+ n = (int)azResult[0];
+ for(i=1; i<n; i++){ if( azResult[i] ) sqlite3_free(azResult[i]); }
+ sqlite3_free(azResult);
+ }
+}
+
+#endif /* SQLITE_OMIT_GET_TABLE */
Added: freeswitch/trunk/libs/sqlite/src/tclsqlite.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/tclsqlite.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,2252 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** A TCL Interface to SQLite
+**
+** $Id: tclsqlite.c,v 1.173 2006/09/02 14:17:00 drh Exp $
+*/
+#ifndef NO_TCL /* Omit this whole file if TCL is unavailable */
+
+#include "sqliteInt.h"
+#include "hash.h"
+#include "tcl.h"
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+#include <ctype.h>
+
+/*
+ * Windows needs to know which symbols to export. Unix does not.
+ * BUILD_sqlite should be undefined for Unix.
+ */
+#ifdef BUILD_sqlite
+#undef TCL_STORAGE_CLASS
+#define TCL_STORAGE_CLASS DLLEXPORT
+#endif /* BUILD_sqlite */
+
+#define NUM_PREPARED_STMTS 10
+#define MAX_PREPARED_STMTS 100
+
+/*
+** If TCL uses UTF-8 and SQLite is configured to use iso8859, then we
+** have to do a translation when going between the two. Set the
+** UTF_TRANSLATION_NEEDED macro to indicate that we need to do
+** this translation.
+*/
+#if defined(TCL_UTF_MAX) && !defined(SQLITE_UTF8)
+# define UTF_TRANSLATION_NEEDED 1
+#endif
+
+/*
+** New SQL functions can be created as TCL scripts. Each such function
+** is described by an instance of the following structure.
+*/
+typedef struct SqlFunc SqlFunc;
+struct SqlFunc {
+ Tcl_Interp *interp; /* The TCL interpreter to execute the function */
+ Tcl_Obj *pScript; /* The Tcl_Obj representation of the script */
+ int useEvalObjv; /* True if it is safe to use Tcl_EvalObjv */
+ char *zName; /* Name of this function */
+ SqlFunc *pNext; /* Next function on the list of them all */
+};
+
+/*
+** New collation sequence functions can be created as TCL scripts. Each such
+** function is described by an instance of the following structure.
+*/
+typedef struct SqlCollate SqlCollate;
+struct SqlCollate {
+ Tcl_Interp *interp; /* The TCL interpreter to execute the function */
+ char *zScript; /* The script to be run */
+ SqlCollate *pNext; /* Next function on the list of them all */
+};
+
+/*
+** Prepared statements are cached for faster execution. Each prepared
+** statement is described by an instance of the following structure.
+*/
+typedef struct SqlPreparedStmt SqlPreparedStmt;
+struct SqlPreparedStmt {
+ SqlPreparedStmt *pNext; /* Next in linked list */
+ SqlPreparedStmt *pPrev; /* Previous on the list */
+ sqlite3_stmt *pStmt; /* The prepared statement */
+ int nSql; /* chars in zSql[] */
+ char zSql[1]; /* Text of the SQL statement */
+};
+
+/*
+** There is one instance of this structure for each SQLite database
+** that has been opened by the SQLite TCL interface.
+*/
+typedef struct SqliteDb SqliteDb;
+struct SqliteDb {
+ sqlite3 *db; /* The "real" database structure. MUST BE FIRST */
+ Tcl_Interp *interp; /* The interpreter used for this database */
+ char *zBusy; /* The busy callback routine */
+ char *zCommit; /* The commit hook callback routine */
+ char *zTrace; /* The trace callback routine */
+ char *zProfile; /* The profile callback routine */
+ char *zProgress; /* The progress callback routine */
+ char *zAuth; /* The authorization callback routine */
+ char *zNull; /* Text to substitute for an SQL NULL value */
+ SqlFunc *pFunc; /* List of SQL functions */
+ Tcl_Obj *pUpdateHook; /* Update hook script (if any) */
+ Tcl_Obj *pRollbackHook; /* Rollback hook script (if any) */
+ SqlCollate *pCollate; /* List of SQL collation functions */
+ int rc; /* Return code of most recent sqlite3_exec() */
+ Tcl_Obj *pCollateNeeded; /* Collation needed script */
+ SqlPreparedStmt *stmtList; /* List of prepared statements*/
+ SqlPreparedStmt *stmtLast; /* Last statement in the list */
+ int maxStmt; /* Maximum number of statements kept in stmtList */
+ int nStmt; /* Number of statements in stmtList */
+};
+
+/*
+** Look at the script prefix in pCmd. We will be executing this script
+** after first appending one or more arguments. This routine analyzes
+** the script to see if it is safe to use Tcl_EvalObjv() on the script
+** rather than the more general Tcl_EvalEx(). Tcl_EvalObjv() is much
+** faster.
+**
+** Scripts that are safe to use with Tcl_EvalObjv() consist of a
+** command name followed by zero or more arguments with no [...] or $
+** or {...} or ; to be seen anywhere. Most callback scripts consist
+** of just a single procedure name and they meet this requirement.
+*/
+static int safeToUseEvalObjv(Tcl_Interp *interp, Tcl_Obj *pCmd){
+ /* We could try to do something with Tcl_Parse(). But we will instead
+ ** just do a search for forbidden characters. If any of the forbidden
+ ** characters appear in pCmd, we will report the string as unsafe.
+ */
+ const char *z;
+ int n;
+ z = Tcl_GetStringFromObj(pCmd, &n);
+ while( n-- > 0 ){
+ int c = *(z++);
+ if( c=='$' || c=='[' || c==';' ) return 0;
+ }
+ return 1;
+}
+
+/*
+** Find an SqlFunc structure with the given name. Or create a new
+** one if an existing one cannot be found. Return a pointer to the
+** structure.
+*/
+static SqlFunc *findSqlFunc(SqliteDb *pDb, const char *zName){
+ SqlFunc *p, *pNew;
+ int i;
+ pNew = (SqlFunc*)Tcl_Alloc( sizeof(*pNew) + strlen(zName) + 1 );
+ pNew->zName = (char*)&pNew[1];
+ for(i=0; zName[i]; i++){ pNew->zName[i] = tolower(zName[i]); }
+ pNew->zName[i] = 0;
+ for(p=pDb->pFunc; p; p=p->pNext){
+ if( strcmp(p->zName, pNew->zName)==0 ){
+ Tcl_Free((char*)pNew);
+ return p;
+ }
+ }
+ pNew->interp = pDb->interp;
+ pNew->pScript = 0;
+ pNew->pNext = pDb->pFunc;
+ pDb->pFunc = pNew;
+ return pNew;
+}
+
+/*
+** Finalize and free a list of prepared statements
+*/
+static void flushStmtCache( SqliteDb *pDb ){
+ SqlPreparedStmt *pPreStmt;
+
+ while( pDb->stmtList ){
+ sqlite3_finalize( pDb->stmtList->pStmt );
+ pPreStmt = pDb->stmtList;
+ pDb->stmtList = pDb->stmtList->pNext;
+ Tcl_Free( (char*)pPreStmt );
+ }
+ pDb->nStmt = 0;
+ pDb->stmtLast = 0;
+}
+
+/*
+** TCL calls this procedure when an sqlite3 database command is
+** deleted.
+*/
+static void DbDeleteCmd(void *db){
+ SqliteDb *pDb = (SqliteDb*)db;
+ flushStmtCache(pDb);
+ sqlite3_close(pDb->db);
+ while( pDb->pFunc ){
+ SqlFunc *pFunc = pDb->pFunc;
+ pDb->pFunc = pFunc->pNext;
+ Tcl_DecrRefCount(pFunc->pScript);
+ Tcl_Free((char*)pFunc);
+ }
+ while( pDb->pCollate ){
+ SqlCollate *pCollate = pDb->pCollate;
+ pDb->pCollate = pCollate->pNext;
+ Tcl_Free((char*)pCollate);
+ }
+ if( pDb->zBusy ){
+ Tcl_Free(pDb->zBusy);
+ }
+ if( pDb->zTrace ){
+ Tcl_Free(pDb->zTrace);
+ }
+ if( pDb->zProfile ){
+ Tcl_Free(pDb->zProfile);
+ }
+ if( pDb->zAuth ){
+ Tcl_Free(pDb->zAuth);
+ }
+ if( pDb->zNull ){
+ Tcl_Free(pDb->zNull);
+ }
+ if( pDb->pUpdateHook ){
+ Tcl_DecrRefCount(pDb->pUpdateHook);
+ }
+ if( pDb->pRollbackHook ){
+ Tcl_DecrRefCount(pDb->pRollbackHook);
+ }
+ if( pDb->pCollateNeeded ){
+ Tcl_DecrRefCount(pDb->pCollateNeeded);
+ }
+ Tcl_Free((char*)pDb);
+}
+
+/*
+** This routine is called when a database file is locked while trying
+** to execute SQL.
+*/
+static int DbBusyHandler(void *cd, int nTries){
+ SqliteDb *pDb = (SqliteDb*)cd;
+ int rc;
+ char zVal[30];
+
+ sprintf(zVal, "%d", nTries);
+ rc = Tcl_VarEval(pDb->interp, pDb->zBusy, " ", zVal, (char*)0);
+ if( rc!=TCL_OK || atoi(Tcl_GetStringResult(pDb->interp)) ){
+ return 0;
+ }
+ return 1;
+}
+
+/*
+** This routine is invoked as the 'progress callback' for the database.
+*/
+static int DbProgressHandler(void *cd){
+ SqliteDb *pDb = (SqliteDb*)cd;
+ int rc;
+
+ assert( pDb->zProgress );
+ rc = Tcl_Eval(pDb->interp, pDb->zProgress);
+ if( rc!=TCL_OK || atoi(Tcl_GetStringResult(pDb->interp)) ){
+ return 1;
+ }
+ return 0;
+}
+
+#ifndef SQLITE_OMIT_TRACE
+/*
+** This routine is called by the SQLite trace handler whenever a new
+** block of SQL is executed. The TCL script in pDb->zTrace is executed.
+*/
+static void DbTraceHandler(void *cd, const char *zSql){
+ SqliteDb *pDb = (SqliteDb*)cd;
+ Tcl_DString str;
+
+ Tcl_DStringInit(&str);
+ Tcl_DStringAppend(&str, pDb->zTrace, -1);
+ Tcl_DStringAppendElement(&str, zSql);
+ Tcl_Eval(pDb->interp, Tcl_DStringValue(&str));
+ Tcl_DStringFree(&str);
+ Tcl_ResetResult(pDb->interp);
+}
+#endif
+
+#ifndef SQLITE_OMIT_TRACE
+/*
+** This routine is called by the SQLite profile handler after an SQL
+** statement has executed. The TCL script in pDb->zProfile is evaluated.
+*/
+static void DbProfileHandler(void *cd, const char *zSql, sqlite_uint64 tm){
+ SqliteDb *pDb = (SqliteDb*)cd;
+ Tcl_DString str;
+ char zTm[100];
+
+ sqlite3_snprintf(sizeof(zTm)-1, zTm, "%lld", tm);
+ Tcl_DStringInit(&str);
+ Tcl_DStringAppend(&str, pDb->zProfile, -1);
+ Tcl_DStringAppendElement(&str, zSql);
+ Tcl_DStringAppendElement(&str, zTm);
+ Tcl_Eval(pDb->interp, Tcl_DStringValue(&str));
+ Tcl_DStringFree(&str);
+ Tcl_ResetResult(pDb->interp);
+}
+#endif
+
+/*
+** This routine is called when a transaction is committed. The
+** TCL script in pDb->zCommit is executed. If it returns non-zero or
+** if it throws an exception, the transaction is rolled back instead
+** of being committed.
+*/
+static int DbCommitHandler(void *cd){
+ SqliteDb *pDb = (SqliteDb*)cd;
+ int rc;
+
+ rc = Tcl_Eval(pDb->interp, pDb->zCommit);
+ if( rc!=TCL_OK || atoi(Tcl_GetStringResult(pDb->interp)) ){
+ return 1;
+ }
+ return 0;
+}
+
+static void DbRollbackHandler(void *clientData){
+ SqliteDb *pDb = (SqliteDb*)clientData;
+ assert(pDb->pRollbackHook);
+ if( TCL_OK!=Tcl_EvalObjEx(pDb->interp, pDb->pRollbackHook, 0) ){
+ Tcl_BackgroundError(pDb->interp);
+ }
+}
+
+static void DbUpdateHandler(
+ void *p,
+ int op,
+ const char *zDb,
+ const char *zTbl,
+ sqlite_int64 rowid
+){
+ SqliteDb *pDb = (SqliteDb *)p;
+ Tcl_Obj *pCmd;
+
+ assert( pDb->pUpdateHook );
+ assert( op==SQLITE_INSERT || op==SQLITE_UPDATE || op==SQLITE_DELETE );
+
+ pCmd = Tcl_DuplicateObj(pDb->pUpdateHook);
+ Tcl_IncrRefCount(pCmd);
+ Tcl_ListObjAppendElement(0, pCmd, Tcl_NewStringObj(
+ ( (op==SQLITE_INSERT)?"INSERT":(op==SQLITE_UPDATE)?"UPDATE":"DELETE"), -1));
+ Tcl_ListObjAppendElement(0, pCmd, Tcl_NewStringObj(zDb, -1));
+ Tcl_ListObjAppendElement(0, pCmd, Tcl_NewStringObj(zTbl, -1));
+ Tcl_ListObjAppendElement(0, pCmd, Tcl_NewWideIntObj(rowid));
+ Tcl_EvalObjEx(pDb->interp, pCmd, TCL_EVAL_DIRECT);
+}
+
+static void tclCollateNeeded(
+ void *pCtx,
+ sqlite3 *db,
+ int enc,
+ const char *zName
+){
+ SqliteDb *pDb = (SqliteDb *)pCtx;
+ Tcl_Obj *pScript = Tcl_DuplicateObj(pDb->pCollateNeeded);
+ Tcl_IncrRefCount(pScript);
+ Tcl_ListObjAppendElement(0, pScript, Tcl_NewStringObj(zName, -1));
+ Tcl_EvalObjEx(pDb->interp, pScript, 0);
+ Tcl_DecrRefCount(pScript);
+}
+
+/*
+** This routine is called to evaluate an SQL collation function implemented
+** using TCL script.
+*/
+static int tclSqlCollate(
+ void *pCtx,
+ int nA,
+ const void *zA,
+ int nB,
+ const void *zB
+){
+ SqlCollate *p = (SqlCollate *)pCtx;
+ Tcl_Obj *pCmd;
+
+ pCmd = Tcl_NewStringObj(p->zScript, -1);
+ Tcl_IncrRefCount(pCmd);
+ Tcl_ListObjAppendElement(p->interp, pCmd, Tcl_NewStringObj(zA, nA));
+ Tcl_ListObjAppendElement(p->interp, pCmd, Tcl_NewStringObj(zB, nB));
+ Tcl_EvalObjEx(p->interp, pCmd, TCL_EVAL_DIRECT);
+ Tcl_DecrRefCount(pCmd);
+ return (atoi(Tcl_GetStringResult(p->interp)));
+}
+
+/*
+** This routine is called to evaluate an SQL function implemented
+** using TCL script.
+*/
+static void tclSqlFunc(sqlite3_context *context, int argc, sqlite3_value**argv){
+ SqlFunc *p = sqlite3_user_data(context);
+ Tcl_Obj *pCmd;
+ int i;
+ int rc;
+
+ if( argc==0 ){
+ /* If there are no arguments to the function, call Tcl_EvalObjEx on the
+ ** script object directly. This allows the TCL compiler to generate
+ ** bytecode for the command on the first invocation and thus make
+ ** subsequent invocations much faster. */
+ pCmd = p->pScript;
+ Tcl_IncrRefCount(pCmd);
+ rc = Tcl_EvalObjEx(p->interp, pCmd, 0);
+ Tcl_DecrRefCount(pCmd);
+ }else{
+ /* If there are arguments to the function, make a shallow copy of the
+ ** script object, lappend the arguments, then evaluate the copy.
+ **
+ ** By "shallow" copy, we mean a only the outer list Tcl_Obj is duplicated.
+ ** The new Tcl_Obj contains pointers to the original list elements.
+ ** That way, when Tcl_EvalObjv() is run and shimmers the first element
+ ** of the list to tclCmdNameType, that alternate representation will
+ ** be preserved and reused on the next invocation.
+ */
+ Tcl_Obj **aArg;
+ int nArg;
+ if( Tcl_ListObjGetElements(p->interp, p->pScript, &nArg, &aArg) ){
+ sqlite3_result_error(context, Tcl_GetStringResult(p->interp), -1);
+ return;
+ }
+ pCmd = Tcl_NewListObj(nArg, aArg);
+ Tcl_IncrRefCount(pCmd);
+ for(i=0; i<argc; i++){
+ sqlite3_value *pIn = argv[i];
+ Tcl_Obj *pVal;
+
+ /* Set pVal to contain the i'th column of this row. */
+ switch( sqlite3_value_type(pIn) ){
+ case SQLITE_BLOB: {
+ int bytes = sqlite3_value_bytes(pIn);
+ pVal = Tcl_NewByteArrayObj(sqlite3_value_blob(pIn), bytes);
+ break;
+ }
+ case SQLITE_INTEGER: {
+ sqlite_int64 v = sqlite3_value_int64(pIn);
+ if( v>=-2147483647 && v<=2147483647 ){
+ pVal = Tcl_NewIntObj(v);
+ }else{
+ pVal = Tcl_NewWideIntObj(v);
+ }
+ break;
+ }
+ case SQLITE_FLOAT: {
+ double r = sqlite3_value_double(pIn);
+ pVal = Tcl_NewDoubleObj(r);
+ break;
+ }
+ case SQLITE_NULL: {
+ pVal = Tcl_NewStringObj("", 0);
+ break;
+ }
+ default: {
+ int bytes = sqlite3_value_bytes(pIn);
+ pVal = Tcl_NewStringObj((char *)sqlite3_value_text(pIn), bytes);
+ break;
+ }
+ }
+ rc = Tcl_ListObjAppendElement(p->interp, pCmd, pVal);
+ if( rc ){
+ Tcl_DecrRefCount(pCmd);
+ sqlite3_result_error(context, Tcl_GetStringResult(p->interp), -1);
+ return;
+ }
+ }
+ if( !p->useEvalObjv ){
+ /* Tcl_EvalObjEx() will automatically call Tcl_EvalObjv() if pCmd
+ ** is a list without a string representation. To prevent this from
+ ** happening, make sure pCmd has a valid string representation */
+ Tcl_GetString(pCmd);
+ }
+ rc = Tcl_EvalObjEx(p->interp, pCmd, TCL_EVAL_DIRECT);
+ Tcl_DecrRefCount(pCmd);
+ }
+
+ if( rc && rc!=TCL_RETURN ){
+ sqlite3_result_error(context, Tcl_GetStringResult(p->interp), -1);
+ }else{
+ Tcl_Obj *pVar = Tcl_GetObjResult(p->interp);
+ int n;
+ u8 *data;
+ char *zType = pVar->typePtr ? pVar->typePtr->name : "";
+ char c = zType[0];
+ if( c=='b' && strcmp(zType,"bytearray")==0 && pVar->bytes==0 ){
+ /* Only return a BLOB type if the Tcl variable is a bytearray and
+ ** has no string representation. */
+ data = Tcl_GetByteArrayFromObj(pVar, &n);
+ sqlite3_result_blob(context, data, n, SQLITE_TRANSIENT);
+ }else if( (c=='b' && strcmp(zType,"boolean")==0) ||
+ (c=='i' && strcmp(zType,"int")==0) ){
+ Tcl_GetIntFromObj(0, pVar, &n);
+ sqlite3_result_int(context, n);
+ }else if( c=='d' && strcmp(zType,"double")==0 ){
+ double r;
+ Tcl_GetDoubleFromObj(0, pVar, &r);
+ sqlite3_result_double(context, r);
+ }else if( c=='w' && strcmp(zType,"wideInt")==0 ){
+ Tcl_WideInt v;
+ Tcl_GetWideIntFromObj(0, pVar, &v);
+ sqlite3_result_int64(context, v);
+ }else{
+ data = (unsigned char *)Tcl_GetStringFromObj(pVar, &n);
+ sqlite3_result_text(context, (char *)data, n, SQLITE_TRANSIENT);
+ }
+ }
+}
+
+#ifndef SQLITE_OMIT_AUTHORIZATION
+/*
+** This is the authorizer callback. It appends the authorization
+** type code and its arguments to the script in pDb->zAuth and then
+** evaluates the result in the interpreter. The reply is examined to
+** determine if authorization fails or succeeds.
+*/
+static int auth_callback(
+ void *pArg,
+ int code,
+ const char *zArg1,
+ const char *zArg2,
+ const char *zArg3,
+ const char *zArg4
+){
+ char *zCode;
+ Tcl_DString str;
+ int rc;
+ const char *zReply;
+ SqliteDb *pDb = (SqliteDb*)pArg;
+
+ switch( code ){
+ case SQLITE_COPY : zCode="SQLITE_COPY"; break;
+ case SQLITE_CREATE_INDEX : zCode="SQLITE_CREATE_INDEX"; break;
+ case SQLITE_CREATE_TABLE : zCode="SQLITE_CREATE_TABLE"; break;
+ case SQLITE_CREATE_TEMP_INDEX : zCode="SQLITE_CREATE_TEMP_INDEX"; break;
+ case SQLITE_CREATE_TEMP_TABLE : zCode="SQLITE_CREATE_TEMP_TABLE"; break;
+ case SQLITE_CREATE_TEMP_TRIGGER: zCode="SQLITE_CREATE_TEMP_TRIGGER"; break;
+ case SQLITE_CREATE_TEMP_VIEW : zCode="SQLITE_CREATE_TEMP_VIEW"; break;
+ case SQLITE_CREATE_TRIGGER : zCode="SQLITE_CREATE_TRIGGER"; break;
+ case SQLITE_CREATE_VIEW : zCode="SQLITE_CREATE_VIEW"; break;
+ case SQLITE_DELETE : zCode="SQLITE_DELETE"; break;
+ case SQLITE_DROP_INDEX : zCode="SQLITE_DROP_INDEX"; break;
+ case SQLITE_DROP_TABLE : zCode="SQLITE_DROP_TABLE"; break;
+ case SQLITE_DROP_TEMP_INDEX : zCode="SQLITE_DROP_TEMP_INDEX"; break;
+ case SQLITE_DROP_TEMP_TABLE : zCode="SQLITE_DROP_TEMP_TABLE"; break;
+ case SQLITE_DROP_TEMP_TRIGGER : zCode="SQLITE_DROP_TEMP_TRIGGER"; break;
+ case SQLITE_DROP_TEMP_VIEW : zCode="SQLITE_DROP_TEMP_VIEW"; break;
+ case SQLITE_DROP_TRIGGER : zCode="SQLITE_DROP_TRIGGER"; break;
+ case SQLITE_DROP_VIEW : zCode="SQLITE_DROP_VIEW"; break;
+ case SQLITE_INSERT : zCode="SQLITE_INSERT"; break;
+ case SQLITE_PRAGMA : zCode="SQLITE_PRAGMA"; break;
+ case SQLITE_READ : zCode="SQLITE_READ"; break;
+ case SQLITE_SELECT : zCode="SQLITE_SELECT"; break;
+ case SQLITE_TRANSACTION : zCode="SQLITE_TRANSACTION"; break;
+ case SQLITE_UPDATE : zCode="SQLITE_UPDATE"; break;
+ case SQLITE_ATTACH : zCode="SQLITE_ATTACH"; break;
+ case SQLITE_DETACH : zCode="SQLITE_DETACH"; break;
+ case SQLITE_ALTER_TABLE : zCode="SQLITE_ALTER_TABLE"; break;
+ case SQLITE_REINDEX : zCode="SQLITE_REINDEX"; break;
+ case SQLITE_ANALYZE : zCode="SQLITE_ANALYZE"; break;
+ case SQLITE_CREATE_VTABLE : zCode="SQLITE_CREATE_VTABLE"; break;
+ case SQLITE_DROP_VTABLE : zCode="SQLITE_DROP_VTABLE"; break;
+ case SQLITE_FUNCTION : zCode="SQLITE_FUNCTION"; break;
+ default : zCode="????"; break;
+ }
+ Tcl_DStringInit(&str);
+ Tcl_DStringAppend(&str, pDb->zAuth, -1);
+ Tcl_DStringAppendElement(&str, zCode);
+ Tcl_DStringAppendElement(&str, zArg1 ? zArg1 : "");
+ Tcl_DStringAppendElement(&str, zArg2 ? zArg2 : "");
+ Tcl_DStringAppendElement(&str, zArg3 ? zArg3 : "");
+ Tcl_DStringAppendElement(&str, zArg4 ? zArg4 : "");
+ rc = Tcl_GlobalEval(pDb->interp, Tcl_DStringValue(&str));
+ Tcl_DStringFree(&str);
+ zReply = Tcl_GetStringResult(pDb->interp);
+ if( strcmp(zReply,"SQLITE_OK")==0 ){
+ rc = SQLITE_OK;
+ }else if( strcmp(zReply,"SQLITE_DENY")==0 ){
+ rc = SQLITE_DENY;
+ }else if( strcmp(zReply,"SQLITE_IGNORE")==0 ){
+ rc = SQLITE_IGNORE;
+ }else{
+ rc = 999;
+ }
+ return rc;
+}
+#endif /* SQLITE_OMIT_AUTHORIZATION */
+
+/*
+** zText is a pointer to text obtained via an sqlite3_result_text()
+** or similar interface. This routine returns a Tcl string object,
+** reference count set to 0, containing the text. If a translation
+** between iso8859 and UTF-8 is required, it is performed.
+*/
+static Tcl_Obj *dbTextToObj(char const *zText){
+ Tcl_Obj *pVal;
+#ifdef UTF_TRANSLATION_NEEDED
+ Tcl_DString dCol;
+ Tcl_DStringInit(&dCol);
+ Tcl_ExternalToUtfDString(NULL, zText, -1, &dCol);
+ pVal = Tcl_NewStringObj(Tcl_DStringValue(&dCol), -1);
+ Tcl_DStringFree(&dCol);
+#else
+ pVal = Tcl_NewStringObj(zText, -1);
+#endif
+ return pVal;
+}
+
+/*
+** This routine reads a line of text from FILE in, stores
+** the text in memory obtained from malloc() and returns a pointer
+** to the text. NULL is returned at end of file, or if malloc()
+** fails.
+**
+** The interface is like "readline" but no command-line editing
+** is done.
+**
+** copied from shell.c from '.import' command
+*/
+static char *local_getline(char *zPrompt, FILE *in){
+ char *zLine;
+ int nLine;
+ int n;
+ int eol;
+
+ nLine = 100;
+ zLine = malloc( nLine );
+ if( zLine==0 ) return 0;
+ n = 0;
+ eol = 0;
+ while( !eol ){
+ if( n+100>nLine ){
+ nLine = nLine*2 + 100;
+ zLine = realloc(zLine, nLine);
+ if( zLine==0 ) return 0;
+ }
+ if( fgets(&zLine[n], nLine - n, in)==0 ){
+ if( n==0 ){
+ free(zLine);
+ return 0;
+ }
+ zLine[n] = 0;
+ eol = 1;
+ break;
+ }
+ while( zLine[n] ){ n++; }
+ if( n>0 && zLine[n-1]=='\n' ){
+ n--;
+ zLine[n] = 0;
+ eol = 1;
+ }
+ }
+ zLine = realloc( zLine, n+1 );
+ return zLine;
+}
+
+/*
+** The "sqlite" command below creates a new Tcl command for each
+** connection it opens to an SQLite database. This routine is invoked
+** whenever one of those connection-specific commands is executed
+** in Tcl. For example, if you run Tcl code like this:
+**
+** sqlite3 db1 "my_database"
+** db1 close
+**
+** The first command opens a connection to the "my_database" database
+** and calls that connection "db1". The second command causes this
+** subroutine to be invoked.
+*/
+static int DbObjCmd(void *cd, Tcl_Interp *interp, int objc,Tcl_Obj *const*objv){
+ SqliteDb *pDb = (SqliteDb*)cd;
+ int choice;
+ int rc = TCL_OK;
+ static const char *DB_strs[] = {
+ "authorizer", "busy", "cache",
+ "changes", "close", "collate",
+ "collation_needed", "commit_hook", "complete",
+ "copy", "enable_load_extension","errorcode",
+ "eval", "exists", "function",
+ "interrupt", "last_insert_rowid", "nullvalue",
+ "onecolumn", "profile", "progress",
+ "rekey", "rollback_hook", "timeout",
+ "total_changes", "trace", "transaction",
+ "update_hook", "version", 0
+ };
+ enum DB_enum {
+ DB_AUTHORIZER, DB_BUSY, DB_CACHE,
+ DB_CHANGES, DB_CLOSE, DB_COLLATE,
+ DB_COLLATION_NEEDED, DB_COMMIT_HOOK, DB_COMPLETE,
+ DB_COPY, DB_ENABLE_LOAD_EXTENSION,DB_ERRORCODE,
+ DB_EVAL, DB_EXISTS, DB_FUNCTION,
+ DB_INTERRUPT, DB_LAST_INSERT_ROWID,DB_NULLVALUE,
+ DB_ONECOLUMN, DB_PROFILE, DB_PROGRESS,
+ DB_REKEY, DB_ROLLBACK_HOOK, DB_TIMEOUT,
+ DB_TOTAL_CHANGES, DB_TRACE, DB_TRANSACTION,
+ DB_UPDATE_HOOK, DB_VERSION,
+ };
+ /* don't leave trailing commas on DB_enum, it confuses the AIX xlc compiler */
+
+ if( objc<2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "SUBCOMMAND ...");
+ return TCL_ERROR;
+ }
+ if( Tcl_GetIndexFromObj(interp, objv[1], DB_strs, "option", 0, &choice) ){
+ return TCL_ERROR;
+ }
+
+ switch( (enum DB_enum)choice ){
+
+ /* $db authorizer ?CALLBACK?
+ **
+ ** Invoke the given callback to authorize each SQL operation as it is
+ ** compiled. 5 arguments are appended to the callback before it is
+ ** invoked:
+ **
+ ** (1) The authorization type (ex: SQLITE_CREATE_TABLE, SQLITE_INSERT, ...)
+ ** (2) First descriptive name (depends on authorization type)
+ ** (3) Second descriptive name
+ ** (4) Name of the database (ex: "main", "temp")
+ ** (5) Name of trigger that is doing the access
+ **
+ ** The callback should return one of the following strings: SQLITE_OK,
+ ** SQLITE_IGNORE, or SQLITE_DENY. Any other return value is an error.
+ **
+ ** If this method is invoked with no arguments, the current authorization
+ ** callback string is returned.
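+ **
+ ** A minimal illustrative authorizer script (the proc name "deny_deletes"
+ ** and the handle name "db" are hypothetical) might look like:
+ **
+ **     proc deny_deletes {op arg1 arg2 dbname trigger} {
+ **       if {$op eq "SQLITE_DELETE"} {return SQLITE_DENY}
+ **       return SQLITE_OK
+ **     }
+ **     db authorizer deny_deletes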
+ */
+ case DB_AUTHORIZER: {
+#ifdef SQLITE_OMIT_AUTHORIZATION
+ Tcl_AppendResult(interp, "authorization not available in this build", 0);
+ return TCL_ERROR;
+#else
+ if( objc>3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "?CALLBACK?");
+ return TCL_ERROR;
+ }else if( objc==2 ){
+ if( pDb->zAuth ){
+ Tcl_AppendResult(interp, pDb->zAuth, 0);
+ }
+ }else{
+ char *zAuth;
+ int len;
+ if( pDb->zAuth ){
+ Tcl_Free(pDb->zAuth);
+ }
+ zAuth = Tcl_GetStringFromObj(objv[2], &len);
+ if( zAuth && len>0 ){
+ pDb->zAuth = Tcl_Alloc( len + 1 );
+ strcpy(pDb->zAuth, zAuth);
+ }else{
+ pDb->zAuth = 0;
+ }
+ if( pDb->zAuth ){
+ pDb->interp = interp;
+ sqlite3_set_authorizer(pDb->db, auth_callback, pDb);
+ }else{
+ sqlite3_set_authorizer(pDb->db, 0, 0);
+ }
+ }
+#endif
+ break;
+ }
+
+ /* $db busy ?CALLBACK?
+ **
+ ** Invoke the given callback if an SQL statement attempts to open
+ ** a locked database file.
+ */
+ case DB_BUSY: {
+ if( objc>3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "CALLBACK");
+ return TCL_ERROR;
+ }else if( objc==2 ){
+ if( pDb->zBusy ){
+ Tcl_AppendResult(interp, pDb->zBusy, 0);
+ }
+ }else{
+ char *zBusy;
+ int len;
+ if( pDb->zBusy ){
+ Tcl_Free(pDb->zBusy);
+ }
+ zBusy = Tcl_GetStringFromObj(objv[2], &len);
+ if( zBusy && len>0 ){
+ pDb->zBusy = Tcl_Alloc( len + 1 );
+ strcpy(pDb->zBusy, zBusy);
+ }else{
+ pDb->zBusy = 0;
+ }
+ if( pDb->zBusy ){
+ pDb->interp = interp;
+ sqlite3_busy_handler(pDb->db, DbBusyHandler, pDb);
+ }else{
+ sqlite3_busy_handler(pDb->db, 0, 0);
+ }
+ }
+ break;
+ }
+
+ /* $db cache flush
+ ** $db cache size n
+ **
+ ** Flush the prepared statement cache, or set the maximum number of
+ ** cached statements.
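+ **
+ ** For example, assuming a connection handle named "db":
+ **
+ **     db cache size 20   ;# retain up to 20 prepared statements
+ **     db cache flush     ;# discard all cached statements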
+ */
+ case DB_CACHE: {
+ char *subCmd;
+ int n;
+
+ if( objc<=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "cache option ?arg?");
+ return TCL_ERROR;
+ }
+ subCmd = Tcl_GetStringFromObj( objv[2], 0 );
+ if( *subCmd=='f' && strcmp(subCmd,"flush")==0 ){
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "flush");
+ return TCL_ERROR;
+ }else{
+ flushStmtCache( pDb );
+ }
+ }else if( *subCmd=='s' && strcmp(subCmd,"size")==0 ){
+ if( objc!=4 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "size n");
+ return TCL_ERROR;
+ }else{
+ if( TCL_ERROR==Tcl_GetIntFromObj(interp, objv[3], &n) ){
+ Tcl_AppendResult( interp, "cannot convert \"",
+ Tcl_GetStringFromObj(objv[3],0), "\" to integer", 0);
+ return TCL_ERROR;
+ }else{
+ if( n<0 ){
+ flushStmtCache( pDb );
+ n = 0;
+ }else if( n>MAX_PREPARED_STMTS ){
+ n = MAX_PREPARED_STMTS;
+ }
+ pDb->maxStmt = n;
+ }
+ }
+ }else{
+ Tcl_AppendResult( interp, "bad option \"",
+ Tcl_GetStringFromObj(objv[0],0), "\": must be flush or size", 0);
+ return TCL_ERROR;
+ }
+ break;
+ }
+
+ /* $db changes
+ **
+ ** Return the number of rows that were modified, inserted, or deleted by
+ ** the most recent INSERT, UPDATE or DELETE statement, not including
+ ** any changes made by trigger programs.
+ */
+ case DB_CHANGES: {
+ Tcl_Obj *pResult;
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "");
+ return TCL_ERROR;
+ }
+ pResult = Tcl_GetObjResult(interp);
+ Tcl_SetIntObj(pResult, sqlite3_changes(pDb->db));
+ break;
+ }
+
+ /* $db close
+ **
+ ** Shutdown the database
+ */
+ case DB_CLOSE: {
+ Tcl_DeleteCommand(interp, Tcl_GetStringFromObj(objv[0], 0));
+ break;
+ }
+
+ /*
+ ** $db collate NAME SCRIPT
+ **
+ ** Create a new SQL collation function called NAME. Whenever
+ ** that function is called, invoke SCRIPT to evaluate the function.
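+ **
+ ** SCRIPT receives the two values being compared and should return an
+ ** integer, in the style of [string compare]. An illustrative
+ ** case-insensitive collation (proc and handle names are hypothetical):
+ **
+ **     proc nocase_cmp {a b} {
+ **       string compare [string tolower $a] [string tolower $b]
+ **     }
+ **     db collate NOCASEX nocase_cmp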
+ */
+ case DB_COLLATE: {
+ SqlCollate *pCollate;
+ char *zName;
+ char *zScript;
+ int nScript;
+ if( objc!=4 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "NAME SCRIPT");
+ return TCL_ERROR;
+ }
+ zName = Tcl_GetStringFromObj(objv[2], 0);
+ zScript = Tcl_GetStringFromObj(objv[3], &nScript);
+ pCollate = (SqlCollate*)Tcl_Alloc( sizeof(*pCollate) + nScript + 1 );
+ if( pCollate==0 ) return TCL_ERROR;
+ pCollate->interp = interp;
+ pCollate->pNext = pDb->pCollate;
+ pCollate->zScript = (char*)&pCollate[1];
+ pDb->pCollate = pCollate;
+ strcpy(pCollate->zScript, zScript);
+ if( sqlite3_create_collation(pDb->db, zName, SQLITE_UTF8,
+ pCollate, tclSqlCollate) ){
+ Tcl_SetResult(interp, (char *)sqlite3_errmsg(pDb->db), TCL_VOLATILE);
+ return TCL_ERROR;
+ }
+ break;
+ }
+
+ /*
+ ** $db collation_needed SCRIPT
+ **
+ ** Register a callback script to invoke whenever the database engine
+ ** requires a collating sequence that has not yet been registered. The
+ ** name of the needed collation is appended to SCRIPT before it is run.
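+ **
+ ** A sketch (proc and handle names are hypothetical, and nocase_cmp is
+ ** the illustrative comparison proc shown above):
+ **
+ **     proc need_collation {name} {
+ **       if {$name eq "NOCASEX"} {db collate NOCASEX nocase_cmp}
+ **     }
+ **     db collation_needed need_collation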
+ */
+ case DB_COLLATION_NEEDED: {
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "SCRIPT");
+ return TCL_ERROR;
+ }
+ if( pDb->pCollateNeeded ){
+ Tcl_DecrRefCount(pDb->pCollateNeeded);
+ }
+ pDb->pCollateNeeded = Tcl_DuplicateObj(objv[2]);
+ Tcl_IncrRefCount(pDb->pCollateNeeded);
+ sqlite3_collation_needed(pDb->db, pDb, tclCollateNeeded);
+ break;
+ }
+
+ /* $db commit_hook ?CALLBACK?
+ **
+ ** Invoke the given callback just before committing every SQL transaction.
+ ** If the callback throws an exception or returns non-zero, then the
+ ** transaction is aborted. If CALLBACK is an empty string, the callback
+ ** is disabled.
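+ **
+ ** Illustrative usage (proc and handle names are hypothetical):
+ **
+ **     proc log_commit {} {puts "about to commit"; return 0}
+ **     db commit_hook log_commit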
+ */
+ case DB_COMMIT_HOOK: {
+ if( objc>3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "?CALLBACK?");
+ return TCL_ERROR;
+ }else if( objc==2 ){
+ if( pDb->zCommit ){
+ Tcl_AppendResult(interp, pDb->zCommit, 0);
+ }
+ }else{
+ char *zCommit;
+ int len;
+ if( pDb->zCommit ){
+ Tcl_Free(pDb->zCommit);
+ }
+ zCommit = Tcl_GetStringFromObj(objv[2], &len);
+ if( zCommit && len>0 ){
+ pDb->zCommit = Tcl_Alloc( len + 1 );
+ strcpy(pDb->zCommit, zCommit);
+ }else{
+ pDb->zCommit = 0;
+ }
+ if( pDb->zCommit ){
+ pDb->interp = interp;
+ sqlite3_commit_hook(pDb->db, DbCommitHandler, pDb);
+ }else{
+ sqlite3_commit_hook(pDb->db, 0, 0);
+ }
+ }
+ break;
+ }
+
+ /* $db complete SQL
+ **
+ ** Return TRUE if SQL is a complete SQL statement. Return FALSE if
+ ** additional lines of input are needed. This is similar to the
+ ** built-in "info complete" command of Tcl.
+ */
+ case DB_COMPLETE: {
+#ifndef SQLITE_OMIT_COMPLETE
+ Tcl_Obj *pResult;
+ int isComplete;
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "SQL");
+ return TCL_ERROR;
+ }
+ isComplete = sqlite3_complete( Tcl_GetStringFromObj(objv[2], 0) );
+ pResult = Tcl_GetObjResult(interp);
+ Tcl_SetBooleanObj(pResult, isComplete);
+#endif
+ break;
+ }
+
+ /* $db copy conflict-algorithm table filename ?SEPARATOR? ?NULLINDICATOR?
+ **
+ ** Copy data into table from filename, optionally using SEPARATOR
+ ** as column separators. If a column contains a null string, or the
+ ** value of NULLINDICATOR, a NULL is inserted for the column.
+ ** conflict-algorithm is one of the sqlite conflict algorithms:
+ ** rollback, abort, fail, ignore, replace
+ ** On success, return the number of lines processed, which is not
+ ** necessarily the same as 'db changes', depending on the
+ ** conflict-algorithm selected.
+ **
+ ** This code is basically an implementation/enhancement of
+ ** the sqlite3 shell.c ".import" command.
+ **
+ ** This command usage is equivalent to the sqlite2.x COPY statement,
+ ** which imports file data into a table using the PostgreSQL COPY file format:
+ ** $db copy $conflict_algo $table_name $filename \t \\N
+ */
+ case DB_COPY: {
+ char *zTable; /* Insert data into this table */
+ char *zFile; /* The file from which to extract data */
+ char *zConflict; /* The conflict algorithm to use */
+ sqlite3_stmt *pStmt; /* A statement */
+ int rc; /* Result code */
+ int nCol; /* Number of columns in the table */
+ int nByte; /* Number of bytes in an SQL string */
+ int i, j; /* Loop counters */
+ int nSep; /* Number of bytes in zSep[] */
+ int nNull; /* Number of bytes in zNull[] */
+ char *zSql; /* An SQL statement */
+ char *zLine; /* A single line of input from the file */
+ char **azCol; /* zLine[] broken up into columns */
+ char *zCommit; /* How to commit changes */
+ FILE *in; /* The input file */
+ int lineno = 0; /* Line number of input file */
+ char zLineNum[80]; /* Line number print buffer */
+ Tcl_Obj *pResult; /* interp result */
+
+ char *zSep;
+ char *zNull;
+ if( objc<5 || objc>7 ){
+ Tcl_WrongNumArgs(interp, 2, objv,
+ "CONFLICT-ALGORITHM TABLE FILENAME ?SEPARATOR? ?NULLINDICATOR?");
+ return TCL_ERROR;
+ }
+ if( objc>=6 ){
+ zSep = Tcl_GetStringFromObj(objv[5], 0);
+ }else{
+ zSep = "\t";
+ }
+ if( objc>=7 ){
+ zNull = Tcl_GetStringFromObj(objv[6], 0);
+ }else{
+ zNull = "";
+ }
+ zConflict = Tcl_GetStringFromObj(objv[2], 0);
+ zTable = Tcl_GetStringFromObj(objv[3], 0);
+ zFile = Tcl_GetStringFromObj(objv[4], 0);
+ nSep = strlen(zSep);
+ nNull = strlen(zNull);
+ if( nSep==0 ){
+ Tcl_AppendResult(interp,"Error: non-null separator required for copy",0);
+ return TCL_ERROR;
+ }
+ if(sqlite3StrICmp(zConflict, "rollback") != 0 &&
+ sqlite3StrICmp(zConflict, "abort" ) != 0 &&
+ sqlite3StrICmp(zConflict, "fail" ) != 0 &&
+ sqlite3StrICmp(zConflict, "ignore" ) != 0 &&
+ sqlite3StrICmp(zConflict, "replace" ) != 0 ) {
+ Tcl_AppendResult(interp, "Error: \"", zConflict,
+ "\", conflict-algorithm must be one of: rollback, "
+ "abort, fail, ignore, or replace", 0);
+ return TCL_ERROR;
+ }
+ zSql = sqlite3_mprintf("SELECT * FROM '%q'", zTable);
+ if( zSql==0 ){
+ Tcl_AppendResult(interp, "Error: no such table: ", zTable, 0);
+ return TCL_ERROR;
+ }
+ nByte = strlen(zSql);
+ rc = sqlite3_prepare(pDb->db, zSql, 0, &pStmt, 0);
+ sqlite3_free(zSql);
+ if( rc ){
+ Tcl_AppendResult(interp, "Error: ", sqlite3_errmsg(pDb->db), 0);
+ nCol = 0;
+ }else{
+ nCol = sqlite3_column_count(pStmt);
+ }
+ sqlite3_finalize(pStmt);
+ if( nCol==0 ) {
+ return TCL_ERROR;
+ }
+ zSql = malloc( nByte + 50 + nCol*2 );
+ if( zSql==0 ) {
+ Tcl_AppendResult(interp, "Error: can't malloc()", 0);
+ return TCL_ERROR;
+ }
+ sqlite3_snprintf(nByte+50, zSql, "INSERT OR %q INTO '%q' VALUES(?",
+ zConflict, zTable);
+ j = strlen(zSql);
+ for(i=1; i<nCol; i++){
+ zSql[j++] = ',';
+ zSql[j++] = '?';
+ }
+ zSql[j++] = ')';
+ zSql[j] = 0;
+ rc = sqlite3_prepare(pDb->db, zSql, 0, &pStmt, 0);
+ free(zSql);
+ if( rc ){
+ Tcl_AppendResult(interp, "Error: ", sqlite3_errmsg(pDb->db), 0);
+ sqlite3_finalize(pStmt);
+ return TCL_ERROR;
+ }
+ in = fopen(zFile, "rb");
+ if( in==0 ){
+ Tcl_AppendResult(interp, "Error: cannot open file: ", zFile, NULL);
+ sqlite3_finalize(pStmt);
+ return TCL_ERROR;
+ }
+ azCol = malloc( sizeof(azCol[0])*(nCol+1) );
+ if( azCol==0 ) {
+ Tcl_AppendResult(interp, "Error: can't malloc()", 0);
+ fclose(in);
+ return TCL_ERROR;
+ }
+ (void)sqlite3_exec(pDb->db, "BEGIN", 0, 0, 0);
+ zCommit = "COMMIT";
+ while( (zLine = local_getline(0, in))!=0 ){
+ char *z;
+ i = 0;
+ lineno++;
+ azCol[0] = zLine;
+ for(i=0, z=zLine; *z; z++){
+ if( *z==zSep[0] && strncmp(z, zSep, nSep)==0 ){
+ *z = 0;
+ i++;
+ if( i<nCol ){
+ azCol[i] = &z[nSep];
+ z += nSep-1;
+ }
+ }
+ }
+ if( i+1!=nCol ){
+ char *zErr;
+ zErr = malloc(200 + strlen(zFile));
+ if( zErr ){
+ sprintf(zErr,
+ "Error: %s line %d: expected %d columns of data but found %d",
+ zFile, lineno, nCol, i+1);
+ Tcl_AppendResult(interp, zErr, 0);
+ free(zErr);
+ }
+ zCommit = "ROLLBACK";
+ break;
+ }
+ for(i=0; i<nCol; i++){
+ /* check for null data, if so, bind as null */
+ if ((nNull>0 && strcmp(azCol[i], zNull)==0) || strlen(azCol[i])==0) {
+ sqlite3_bind_null(pStmt, i+1);
+ }else{
+ sqlite3_bind_text(pStmt, i+1, azCol[i], -1, SQLITE_STATIC);
+ }
+ }
+ sqlite3_step(pStmt);
+ rc = sqlite3_reset(pStmt);
+ free(zLine);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp,"Error: ", sqlite3_errmsg(pDb->db), 0);
+ zCommit = "ROLLBACK";
+ break;
+ }
+ }
+ free(azCol);
+ fclose(in);
+ sqlite3_finalize(pStmt);
+ (void)sqlite3_exec(pDb->db, zCommit, 0, 0, 0);
+
+ if( zCommit[0] == 'C' ){
+ /* success, set result as number of lines processed */
+ pResult = Tcl_GetObjResult(interp);
+ Tcl_SetIntObj(pResult, lineno);
+ rc = TCL_OK;
+ }else{
+ /* failure, append lineno where failed */
+ sprintf(zLineNum,"%d",lineno);
+ Tcl_AppendResult(interp,", failed while processing line: ",zLineNum,0);
+ rc = TCL_ERROR;
+ }
+ break;
+ }
+
+ /*
+ ** $db enable_load_extension BOOLEAN
+ **
+ ** Turn the extension loading feature on or off. It is off by
+ ** default.
+ */
+ case DB_ENABLE_LOAD_EXTENSION: {
+ int onoff;
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "BOOLEAN");
+ return TCL_ERROR;
+ }
+ if( Tcl_GetBooleanFromObj(interp, objv[2], &onoff) ){
+ return TCL_ERROR;
+ }
+ sqlite3_enable_load_extension(pDb->db, onoff);
+ break;
+ }
+
+ /*
+ ** $db errorcode
+ **
+ ** Return the numeric error code that was returned by the most recent
+ ** call to sqlite3_exec().
+ */
+ case DB_ERRORCODE: {
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(sqlite3_errcode(pDb->db)));
+ break;
+ }
+
+ /*
+ ** $db eval $sql ?array? ?{ ...code... }?
+ ** $db onecolumn $sql
+ **
+ ** The SQL statement in $sql is evaluated. For each row, the values are
+ ** placed in elements of the array named "array" and ...code... is executed.
+ ** If "array" and "code" are omitted, then no callback is every invoked.
+ ** If "array" is an empty string, then the values are placed in variables
+ ** that have the same name as the fields extracted by the query.
+ **
+ ** The onecolumn method is the equivalent of:
+ ** lindex [$db eval $sql] 0
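+ **
+ ** An illustrative example, assuming a handle "db" and a table t1(a,b):
+ **
+ **     db eval {SELECT a, b FROM t1} row {
+ **       puts "a=$row(a) b=$row(b)"
+ **     }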
+ */
+ case DB_ONECOLUMN:
+ case DB_EVAL:
+ case DB_EXISTS: {
+ char const *zSql; /* Next SQL statement to execute */
+ char const *zLeft; /* What is left after first stmt in zSql */
+ sqlite3_stmt *pStmt; /* Compiled SQL statment */
+ Tcl_Obj *pArray; /* Name of array into which results are written */
+ Tcl_Obj *pScript; /* Script to run for each result set */
+ Tcl_Obj **apParm; /* Parameters that need a Tcl_DecrRefCount() */
+ int nParm; /* Number of entries used in apParm[] */
+ Tcl_Obj *aParm[10]; /* Static space for apParm[] in the common case */
+ Tcl_Obj *pRet; /* Value to be returned */
+ SqlPreparedStmt *pPreStmt; /* Pointer to a prepared statement */
+ int rc2;
+
+ if( choice==DB_EVAL ){
+ if( objc<3 || objc>5 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "SQL ?ARRAY-NAME? ?SCRIPT?");
+ return TCL_ERROR;
+ }
+ pRet = Tcl_NewObj();
+ Tcl_IncrRefCount(pRet);
+ }else{
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "SQL");
+ return TCL_ERROR;
+ }
+ if( choice==DB_EXISTS ){
+ pRet = Tcl_NewBooleanObj(0);
+ Tcl_IncrRefCount(pRet);
+ }else{
+ pRet = 0;
+ }
+ }
+ if( objc==3 ){
+ pArray = pScript = 0;
+ }else if( objc==4 ){
+ pArray = 0;
+ pScript = objv[3];
+ }else{
+ pArray = objv[3];
+ if( Tcl_GetString(pArray)[0]==0 ) pArray = 0;
+ pScript = objv[4];
+ }
+
+ Tcl_IncrRefCount(objv[2]);
+ zSql = Tcl_GetStringFromObj(objv[2], 0);
+ while( rc==TCL_OK && zSql[0] ){
+ int i; /* Loop counter */
+ int nVar; /* Number of bind parameters in the pStmt */
+ int nCol; /* Number of columns in the result set */
+ Tcl_Obj **apColName = 0; /* Array of column names */
+ int len; /* String length of zSql */
+
+ /* Try to find a SQL statement that has already been compiled and
+ ** which matches the next sequence of SQL.
+ */
+ pStmt = 0;
+ pPreStmt = pDb->stmtList;
+ len = strlen(zSql);
+ if( pPreStmt && sqlite3_expired(pPreStmt->pStmt) ){
+ flushStmtCache(pDb);
+ pPreStmt = 0;
+ }
+ for(; pPreStmt; pPreStmt=pPreStmt->pNext){
+ int n = pPreStmt->nSql;
+ if( len>=n
+ && memcmp(pPreStmt->zSql, zSql, n)==0
+ && (zSql[n]==0 || zSql[n-1]==';')
+ ){
+ pStmt = pPreStmt->pStmt;
+ zLeft = &zSql[pPreStmt->nSql];
+
+ /* When a prepared statement is found, unlink it from the
+ ** cache list. It will later be added back to the beginning
+ ** of the cache list in order to implement LRU replacement.
+ */
+ if( pPreStmt->pPrev ){
+ pPreStmt->pPrev->pNext = pPreStmt->pNext;
+ }else{
+ pDb->stmtList = pPreStmt->pNext;
+ }
+ if( pPreStmt->pNext ){
+ pPreStmt->pNext->pPrev = pPreStmt->pPrev;
+ }else{
+ pDb->stmtLast = pPreStmt->pPrev;
+ }
+ pDb->nStmt--;
+ break;
+ }
+ }
+
+ /* If no prepared statement was found. Compile the SQL text
+ */
+ if( pStmt==0 ){
+ if( SQLITE_OK!=sqlite3_prepare(pDb->db, zSql, -1, &pStmt, &zLeft) ){
+ Tcl_SetObjResult(interp, dbTextToObj(sqlite3_errmsg(pDb->db)));
+ rc = TCL_ERROR;
+ break;
+ }
+ if( pStmt==0 ){
+ if( SQLITE_OK!=sqlite3_errcode(pDb->db) ){
+ /* A compile-time error in the statement
+ */
+ Tcl_SetObjResult(interp, dbTextToObj(sqlite3_errmsg(pDb->db)));
+ rc = TCL_ERROR;
+ break;
+ }else{
+ /* The statement was a no-op. Continue to the next statement
+ ** in the SQL string.
+ */
+ zSql = zLeft;
+ continue;
+ }
+ }
+ assert( pPreStmt==0 );
+ }
+
+ /* Bind values to parameters that begin with $ or :
+ */
+ nVar = sqlite3_bind_parameter_count(pStmt);
+ nParm = 0;
+ if( nVar>sizeof(aParm)/sizeof(aParm[0]) ){
+ apParm = (Tcl_Obj**)Tcl_Alloc(nVar*sizeof(apParm[0]));
+ }else{
+ apParm = aParm;
+ }
+ for(i=1; i<=nVar; i++){
+ const char *zVar = sqlite3_bind_parameter_name(pStmt, i);
+ if( zVar!=0 && (zVar[0]=='$' || zVar[0]==':') ){
+ Tcl_Obj *pVar = Tcl_GetVar2Ex(interp, &zVar[1], 0, 0);
+ if( pVar ){
+ int n;
+ u8 *data;
+ char *zType = pVar->typePtr ? pVar->typePtr->name : "";
+ char c = zType[0];
+ if( c=='b' && strcmp(zType,"bytearray")==0 && pVar->bytes==0 ){
+ /* Only load a BLOB type if the Tcl variable is a bytearray and
+ ** has no string representation. */
+ data = Tcl_GetByteArrayFromObj(pVar, &n);
+ sqlite3_bind_blob(pStmt, i, data, n, SQLITE_STATIC);
+ Tcl_IncrRefCount(pVar);
+ apParm[nParm++] = pVar;
+ }else if( (c=='b' && strcmp(zType,"boolean")==0) ||
+ (c=='i' && strcmp(zType,"int")==0) ){
+ Tcl_GetIntFromObj(interp, pVar, &n);
+ sqlite3_bind_int(pStmt, i, n);
+ }else if( c=='d' && strcmp(zType,"double")==0 ){
+ double r;
+ Tcl_GetDoubleFromObj(interp, pVar, &r);
+ sqlite3_bind_double(pStmt, i, r);
+ }else if( c=='w' && strcmp(zType,"wideInt")==0 ){
+ Tcl_WideInt v;
+ Tcl_GetWideIntFromObj(interp, pVar, &v);
+ sqlite3_bind_int64(pStmt, i, v);
+ }else{
+ data = (unsigned char *)Tcl_GetStringFromObj(pVar, &n);
+ sqlite3_bind_text(pStmt, i, (char *)data, n, SQLITE_STATIC);
+ Tcl_IncrRefCount(pVar);
+ apParm[nParm++] = pVar;
+ }
+ }else{
+ sqlite3_bind_null( pStmt, i );
+ }
+ }
+ }
+
+ /* Compute column names */
+ nCol = sqlite3_column_count(pStmt);
+ if( pScript ){
+ apColName = (Tcl_Obj**)Tcl_Alloc( sizeof(Tcl_Obj*)*nCol );
+ if( apColName==0 ) break;
+ for(i=0; i<nCol; i++){
+ apColName[i] = dbTextToObj(sqlite3_column_name(pStmt,i));
+ Tcl_IncrRefCount(apColName[i]);
+ }
+ }
+
+ /* If results are being stored in an array variable, then create
+ ** the array(*) entry for that array
+ */
+ if( pArray ){
+ Tcl_Obj *pColList = Tcl_NewObj();
+ Tcl_Obj *pStar = Tcl_NewStringObj("*", -1);
+ Tcl_IncrRefCount(pColList);
+ for(i=0; i<nCol; i++){
+ Tcl_ListObjAppendElement(interp, pColList, apColName[i]);
+ }
+ Tcl_ObjSetVar2(interp, pArray, pStar, pColList,0);
+ Tcl_DecrRefCount(pColList);
+ Tcl_DecrRefCount(pStar);
+ }
+
+ /* Execute the SQL
+ */
+ while( rc==TCL_OK && pStmt && SQLITE_ROW==sqlite3_step(pStmt) ){
+ for(i=0; i<nCol; i++){
+ Tcl_Obj *pVal;
+
+ /* Set pVal to contain the i'th column of this row. */
+ switch( sqlite3_column_type(pStmt, i) ){
+ case SQLITE_BLOB: {
+ int bytes = sqlite3_column_bytes(pStmt, i);
+ pVal = Tcl_NewByteArrayObj(sqlite3_column_blob(pStmt, i), bytes);
+ break;
+ }
+ case SQLITE_INTEGER: {
+ sqlite_int64 v = sqlite3_column_int64(pStmt, i);
+ if( v>=-2147483647 && v<=2147483647 ){
+ pVal = Tcl_NewIntObj(v);
+ }else{
+ pVal = Tcl_NewWideIntObj(v);
+ }
+ break;
+ }
+ case SQLITE_FLOAT: {
+ double r = sqlite3_column_double(pStmt, i);
+ pVal = Tcl_NewDoubleObj(r);
+ break;
+ }
+ case SQLITE_NULL: {
+ pVal = dbTextToObj(pDb->zNull);
+ break;
+ }
+ default: {
+ pVal = dbTextToObj((char *)sqlite3_column_text(pStmt, i));
+ break;
+ }
+ }
+
+ if( pScript ){
+ if( pArray==0 ){
+ Tcl_ObjSetVar2(interp, apColName[i], 0, pVal, 0);
+ }else{
+ Tcl_ObjSetVar2(interp, pArray, apColName[i], pVal, 0);
+ }
+ }else if( choice==DB_ONECOLUMN ){
+ assert( pRet==0 );
+ if( pRet==0 ){
+ pRet = pVal;
+ Tcl_IncrRefCount(pRet);
+ }
+ rc = TCL_BREAK;
+ i = nCol;
+ }else if( choice==DB_EXISTS ){
+ Tcl_DecrRefCount(pRet);
+ pRet = Tcl_NewBooleanObj(1);
+ Tcl_IncrRefCount(pRet);
+ rc = TCL_BREAK;
+ i = nCol;
+ }else{
+ Tcl_ListObjAppendElement(interp, pRet, pVal);
+ }
+ }
+
+ if( pScript ){
+ rc = Tcl_EvalObjEx(interp, pScript, 0);
+ if( rc==TCL_CONTINUE ){
+ rc = TCL_OK;
+ }
+ }
+ }
+ if( rc==TCL_BREAK ){
+ rc = TCL_OK;
+ }
+
+ /* Free the column name objects */
+ if( pScript ){
+ for(i=0; i<nCol; i++){
+ Tcl_DecrRefCount(apColName[i]);
+ }
+ Tcl_Free((char*)apColName);
+ }
+
+ /* Free the bound string and blob parameters */
+ for(i=0; i<nParm; i++){
+ Tcl_DecrRefCount(apParm[i]);
+ }
+ if( apParm!=aParm ){
+ Tcl_Free((char*)apParm);
+ }
+
+ /* Reset the statement. If the result code is SQLITE_SCHEMA, then
+ ** flush the statement cache and try the statement again.
+ */
+ rc2 = sqlite3_reset(pStmt);
+ if( SQLITE_SCHEMA==rc2 ){
+ /* After a schema change, flush the cache and try to run the
+ ** statement again
+ */
+ flushStmtCache( pDb );
+ sqlite3_finalize(pStmt);
+ if( pPreStmt ) Tcl_Free((char*)pPreStmt);
+ continue;
+ }else if( SQLITE_OK!=rc2 ){
+ /* If a run-time error occurs, report the error and stop reading
+ ** the SQL
+ */
+ Tcl_SetObjResult(interp, dbTextToObj(sqlite3_errmsg(pDb->db)));
+ sqlite3_finalize(pStmt);
+ rc = TCL_ERROR;
+ if( pPreStmt ) Tcl_Free((char*)pPreStmt);
+ break;
+ }else if( pDb->maxStmt<=0 ){
+ /* If the cache is turned off, deallocate the statement */
+ if( pPreStmt ) Tcl_Free((char*)pPreStmt);
+ sqlite3_finalize(pStmt);
+ }else{
+ /* Everything worked and the cache is operational.
+ ** Create a new SqlPreparedStmt structure if we need one.
+ ** (If we already have one we can just reuse it.)
+ */
+ if( pPreStmt==0 ){
+ len = zLeft - zSql;
+ pPreStmt = (SqlPreparedStmt*)Tcl_Alloc( sizeof(*pPreStmt) + len );
+ if( pPreStmt==0 ) return TCL_ERROR;
+ pPreStmt->pStmt = pStmt;
+ pPreStmt->nSql = len;
+ memcpy(pPreStmt->zSql, zSql, len);
+ pPreStmt->zSql[len] = 0;
+ }
+
+ /* Add the prepared statement to the beginning of the cache list
+ */
+ pPreStmt->pNext = pDb->stmtList;
+ pPreStmt->pPrev = 0;
+ if( pDb->stmtList ){
+ pDb->stmtList->pPrev = pPreStmt;
+ }
+ pDb->stmtList = pPreStmt;
+ if( pDb->stmtLast==0 ){
+ assert( pDb->nStmt==0 );
+ pDb->stmtLast = pPreStmt;
+ }else{
+ assert( pDb->nStmt>0 );
+ }
+ pDb->nStmt++;
+
+ /* If we have too many statements in the cache, remove the surplus
+ ** from the end of the cache list.
+ */
+ while( pDb->nStmt>pDb->maxStmt ){
+ sqlite3_finalize(pDb->stmtLast->pStmt);
+ pDb->stmtLast = pDb->stmtLast->pPrev;
+ Tcl_Free((char*)pDb->stmtLast->pNext);
+ pDb->stmtLast->pNext = 0;
+ pDb->nStmt--;
+ }
+ }
+
+ /* Proceed to the next statement */
+ zSql = zLeft;
+ }
+ Tcl_DecrRefCount(objv[2]);
+
+ if( pRet ){
+ if( rc==TCL_OK ){
+ Tcl_SetObjResult(interp, pRet);
+ }
+ Tcl_DecrRefCount(pRet);
+ }else if( rc==TCL_OK ){
+ Tcl_ResetResult(interp);
+ }
+ break;
+ }
+
+ /*
+ ** $db function NAME SCRIPT
+ **
+ ** Create a new SQL function called NAME. Whenever that function is
+ ** called, invoke SCRIPT to evaluate the function.
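+ **
+ ** SCRIPT is invoked with the SQL function arguments appended. A sketch
+ ** (the proc name and handle name are hypothetical):
+ **
+ **     proc tcl_hypot {x y} {expr {sqrt($x*$x + $y*$y)}}
+ **     db function hypot tcl_hypot
+ **     db eval {SELECT hypot(3, 4)}   ;# returns 5.0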
+ */
+ case DB_FUNCTION: {
+ SqlFunc *pFunc;
+ Tcl_Obj *pScript;
+ char *zName;
+ if( objc!=4 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "NAME SCRIPT");
+ return TCL_ERROR;
+ }
+ zName = Tcl_GetStringFromObj(objv[2], 0);
+ pScript = objv[3];
+ pFunc = findSqlFunc(pDb, zName);
+ if( pFunc==0 ) return TCL_ERROR;
+ if( pFunc->pScript ){
+ Tcl_DecrRefCount(pFunc->pScript);
+ }
+ pFunc->pScript = pScript;
+ Tcl_IncrRefCount(pScript);
+ pFunc->useEvalObjv = safeToUseEvalObjv(interp, pScript);
+ rc = sqlite3_create_function(pDb->db, zName, -1, SQLITE_UTF8,
+ pFunc, tclSqlFunc, 0, 0);
+ if( rc!=SQLITE_OK ){
+ rc = TCL_ERROR;
+ Tcl_SetResult(interp, (char *)sqlite3_errmsg(pDb->db), TCL_VOLATILE);
+ }else{
+ /* Must flush any cached statements */
+ flushStmtCache( pDb );
+ }
+ break;
+ }
+
+ /*
+ ** $db interrupt
+ **
+ ** Interrupt the execution of the inner-most SQL interpreter. This
+ ** causes the SQL statement to return an error of SQLITE_INTERRUPT.
+ */
+ case DB_INTERRUPT: {
+ sqlite3_interrupt(pDb->db);
+ break;
+ }
+
+ /*
+ ** $db nullvalue ?STRING?
+ **
+ ** Change text used when a NULL comes back from the database. If ?STRING?
+ ** is not present, then the current string used for NULL is returned.
+ ** If STRING is present, then STRING is returned.
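+ **
+ ** For example, "db nullvalue NULL" causes subsequent queries to report
+ ** SQL NULL values as the literal string "NULL".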
+ **
+ */
+ case DB_NULLVALUE: {
+ if( objc!=2 && objc!=3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "NULLVALUE");
+ return TCL_ERROR;
+ }
+ if( objc==3 ){
+ int len;
+ char *zNull = Tcl_GetStringFromObj(objv[2], &len);
+ if( pDb->zNull ){
+ Tcl_Free(pDb->zNull);
+ }
+ if( zNull && len>0 ){
+ pDb->zNull = Tcl_Alloc( len + 1 );
+ strncpy(pDb->zNull, zNull, len);
+ pDb->zNull[len] = '\0';
+ }else{
+ pDb->zNull = 0;
+ }
+ }
+ Tcl_SetObjResult(interp, dbTextToObj(pDb->zNull));
+ break;
+ }
+
+ /*
+ ** $db last_insert_rowid
+ **
+ ** Return an integer which is the ROWID for the most recent insert.
+ */
+ case DB_LAST_INSERT_ROWID: {
+ Tcl_Obj *pResult;
+ Tcl_WideInt rowid;
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "");
+ return TCL_ERROR;
+ }
+ rowid = sqlite3_last_insert_rowid(pDb->db);
+ pResult = Tcl_GetObjResult(interp);
+ Tcl_SetWideIntObj(pResult, rowid);
+ break;
+ }
+
+ /*
+ ** The DB_ONECOLUMN method is implemented together with DB_EVAL.
+ */
+
+ /* $db progress ?N CALLBACK?
+ **
+ ** Invoke the given callback every N virtual machine opcodes while executing
+ ** queries.
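+ **
+ ** A common illustrative use is servicing pending Tcl/Tk events so a GUI
+ ** stays responsive during long-running queries (the handle name "db" is
+ ** hypothetical):
+ **
+ **     db progress 2000 update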
+ */
+ case DB_PROGRESS: {
+ if( objc==2 ){
+ if( pDb->zProgress ){
+ Tcl_AppendResult(interp, pDb->zProgress, 0);
+ }
+ }else if( objc==4 ){
+ char *zProgress;
+ int len;
+ int N;
+ if( TCL_OK!=Tcl_GetIntFromObj(interp, objv[2], &N) ){
+ return TCL_ERROR;
+ };
+ if( pDb->zProgress ){
+ Tcl_Free(pDb->zProgress);
+ }
+ zProgress = Tcl_GetStringFromObj(objv[3], &len);
+ if( zProgress && len>0 ){
+ pDb->zProgress = Tcl_Alloc( len + 1 );
+ strcpy(pDb->zProgress, zProgress);
+ }else{
+ pDb->zProgress = 0;
+ }
+#ifndef SQLITE_OMIT_PROGRESS_CALLBACK
+ if( pDb->zProgress ){
+ pDb->interp = interp;
+ sqlite3_progress_handler(pDb->db, N, DbProgressHandler, pDb);
+ }else{
+ sqlite3_progress_handler(pDb->db, 0, 0, 0);
+ }
+#endif
+ }else{
+ Tcl_WrongNumArgs(interp, 2, objv, "N CALLBACK");
+ return TCL_ERROR;
+ }
+ break;
+ }
+
+ /* $db profile ?CALLBACK?
+ **
+ ** Make arrangements to invoke the CALLBACK routine after each SQL statement
+ ** that has run. The text of the SQL and the amount of elapsed time are
+ ** appended to CALLBACK before the script is run.
+ */
+ case DB_PROFILE: {
+ if( objc>3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "?CALLBACK?");
+ return TCL_ERROR;
+ }else if( objc==2 ){
+ if( pDb->zProfile ){
+ Tcl_AppendResult(interp, pDb->zProfile, 0);
+ }
+ }else{
+ char *zProfile;
+ int len;
+ if( pDb->zProfile ){
+ Tcl_Free(pDb->zProfile);
+ }
+ zProfile = Tcl_GetStringFromObj(objv[2], &len);
+ if( zProfile && len>0 ){
+ pDb->zProfile = Tcl_Alloc( len + 1 );
+ strcpy(pDb->zProfile, zProfile);
+ }else{
+ pDb->zProfile = 0;
+ }
+#ifndef SQLITE_OMIT_TRACE
+ if( pDb->zProfile ){
+ pDb->interp = interp;
+ sqlite3_profile(pDb->db, DbProfileHandler, pDb);
+ }else{
+ sqlite3_profile(pDb->db, 0, 0);
+ }
+#endif
+ }
+ break;
+ }
+
+ /*
+ ** $db rekey KEY
+ **
+ ** Change the encryption key on the currently open database.
+ */
+ case DB_REKEY: {
+ int nKey;
+ void *pKey;
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "KEY");
+ return TCL_ERROR;
+ }
+ pKey = Tcl_GetByteArrayFromObj(objv[2], &nKey);
+#ifdef SQLITE_HAS_CODEC
+ rc = sqlite3_rekey(pDb->db, pKey, nKey);
+ if( rc ){
+ Tcl_AppendResult(interp, sqlite3ErrStr(rc), 0);
+ rc = TCL_ERROR;
+ }
+#endif
+ break;
+ }
+
+ /*
+ ** $db timeout MILLISECONDS
+ **
+ ** Delay for the number of milliseconds specified when a file is locked.
+ */
+ case DB_TIMEOUT: {
+ int ms;
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "MILLISECONDS");
+ return TCL_ERROR;
+ }
+ if( Tcl_GetIntFromObj(interp, objv[2], &ms) ) return TCL_ERROR;
+ sqlite3_busy_timeout(pDb->db, ms);
+ break;
+ }
+
+ /*
+ ** $db total_changes
+ **
+ ** Return the number of rows that were modified, inserted, or deleted
+ ** since the database handle was created.
+ */
+ case DB_TOTAL_CHANGES: {
+ Tcl_Obj *pResult;
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "");
+ return TCL_ERROR;
+ }
+ pResult = Tcl_GetObjResult(interp);
+ Tcl_SetIntObj(pResult, sqlite3_total_changes(pDb->db));
+ break;
+ }
+
+ /* $db trace ?CALLBACK?
+ **
+ ** Make arrangements to invoke the CALLBACK routine for each SQL statement
+ ** that is executed. The text of the SQL is appended to CALLBACK before
+ ** it is executed.
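+ **
+ ** For example, to echo every statement to stderr (proc and handle names
+ ** are hypothetical):
+ **
+ **     proc sql_trace {sql} {puts stderr "SQL: $sql"}
+ **     db trace sql_trace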
+ */
+ case DB_TRACE: {
+ if( objc>3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "?CALLBACK?");
+ return TCL_ERROR;
+ }else if( objc==2 ){
+ if( pDb->zTrace ){
+ Tcl_AppendResult(interp, pDb->zTrace, 0);
+ }
+ }else{
+ char *zTrace;
+ int len;
+ if( pDb->zTrace ){
+ Tcl_Free(pDb->zTrace);
+ }
+ zTrace = Tcl_GetStringFromObj(objv[2], &len);
+ if( zTrace && len>0 ){
+ pDb->zTrace = Tcl_Alloc( len + 1 );
+ strcpy(pDb->zTrace, zTrace);
+ }else{
+ pDb->zTrace = 0;
+ }
+#ifndef SQLITE_OMIT_TRACE
+ if( pDb->zTrace ){
+ pDb->interp = interp;
+ sqlite3_trace(pDb->db, DbTraceHandler, pDb);
+ }else{
+ sqlite3_trace(pDb->db, 0, 0);
+ }
+#endif
+ }
+ break;
+ }
+
+ /* $db transaction [-deferred|-immediate|-exclusive] SCRIPT
+ **
+ ** Start a new transaction (if we are not already in the midst of a
+ ** transaction) and execute the TCL script SCRIPT. After SCRIPT
+ ** completes, either commit the transaction, or roll it back if SCRIPT
+ ** throws an exception. If no new transaction was started, do nothing
+ ** and pass any exception on up the stack.
+ **
+ ** This command was inspired by Dave Thomas's talk on Ruby at the
+ ** 2005 O'Reilly Open Source Convention (OSCON).
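+ **
+ ** Illustrative usage (assumes a handle "db" and a table t1):
+ **
+ **     db transaction {
+ **       db eval {INSERT INTO t1 VALUES(1)}
+ **       db eval {INSERT INTO t1 VALUES(2)}
+ **     }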
+ */
+ case DB_TRANSACTION: {
+ int inTrans;
+ Tcl_Obj *pScript;
+ const char *zBegin = "BEGIN";
+ if( objc!=3 && objc!=4 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "[TYPE] SCRIPT");
+ return TCL_ERROR;
+ }
+ if( objc==3 ){
+ pScript = objv[2];
+ } else {
+ static const char *TTYPE_strs[] = {
+ "deferred", "exclusive", "immediate", 0
+ };
+ enum TTYPE_enum {
+ TTYPE_DEFERRED, TTYPE_EXCLUSIVE, TTYPE_IMMEDIATE
+ };
+ int ttype;
+ if( Tcl_GetIndexFromObj(interp, objv[2], TTYPE_strs, "transaction type",
+ 0, &ttype) ){
+ return TCL_ERROR;
+ }
+ switch( (enum TTYPE_enum)ttype ){
+ case TTYPE_DEFERRED: /* no-op */; break;
+ case TTYPE_EXCLUSIVE: zBegin = "BEGIN EXCLUSIVE"; break;
+ case TTYPE_IMMEDIATE: zBegin = "BEGIN IMMEDIATE"; break;
+ }
+ pScript = objv[3];
+ }
+ inTrans = !sqlite3_get_autocommit(pDb->db);
+ if( !inTrans ){
+ (void)sqlite3_exec(pDb->db, zBegin, 0, 0, 0);
+ }
+ rc = Tcl_EvalObjEx(interp, pScript, 0);
+ if( !inTrans ){
+ const char *zEnd;
+ if( rc==TCL_ERROR ){
+ zEnd = "ROLLBACK";
+ } else {
+ zEnd = "COMMIT";
+ }
+ (void)sqlite3_exec(pDb->db, zEnd, 0, 0, 0);
+ }
+ break;
+ }
+
+ /*
+ ** $db update_hook ?script?
+ ** $db rollback_hook ?script?
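+ **
+ ** The update_hook script is invoked with the operation (INSERT, UPDATE,
+ ** or DELETE), the database name, the table name, and the rowid appended.
+ ** A sketch (proc and handle names are hypothetical):
+ **
+ **     proc on_change {op dbname tbl rowid} {puts "$op $dbname.$tbl $rowid"}
+ **     db update_hook on_change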
+ */
+ case DB_UPDATE_HOOK:
+ case DB_ROLLBACK_HOOK: {
+
+ /* set ppHook to point at pUpdateHook or pRollbackHook, depending on
+ ** whether [$db update_hook] or [$db rollback_hook] was invoked.
+ */
+ Tcl_Obj **ppHook;
+ if( choice==DB_UPDATE_HOOK ){
+ ppHook = &pDb->pUpdateHook;
+ }else{
+ ppHook = &pDb->pRollbackHook;
+ }
+
+ if( objc!=2 && objc!=3 ){
+ Tcl_WrongNumArgs(interp, 2, objv, "?SCRIPT?");
+ return TCL_ERROR;
+ }
+ if( *ppHook ){
+ Tcl_SetObjResult(interp, *ppHook);
+ if( objc==3 ){
+ Tcl_DecrRefCount(*ppHook);
+ *ppHook = 0;
+ }
+ }
+ if( objc==3 ){
+ assert( !(*ppHook) );
+ if( Tcl_GetCharLength(objv[2])>0 ){
+ *ppHook = objv[2];
+ Tcl_IncrRefCount(*ppHook);
+ }
+ }
+
+ sqlite3_update_hook(pDb->db, (pDb->pUpdateHook?DbUpdateHandler:0), pDb);
+ sqlite3_rollback_hook(pDb->db,(pDb->pRollbackHook?DbRollbackHandler:0),pDb);
+
+ break;
+ }
+
+ /* $db version
+ **
+ ** Return the version string for this database.
+ */
+ case DB_VERSION: {
+ Tcl_SetResult(interp, (char *)sqlite3_libversion(), TCL_STATIC);
+ break;
+ }
+
+
+ } /* End of the SWITCH statement */
+ return rc;
+}
+
+/*
+** sqlite3 DBNAME FILENAME ?MODE? ?-key KEY?
+**
+** This is the main Tcl command. When the "sqlite" Tcl command is
+** invoked, this routine runs to process that command.
+**
+** The first argument, DBNAME, is an arbitrary name for a new
+** database connection. This command creates a new command named
+** DBNAME that is used to control that connection. The database
+** connection is deleted when the DBNAME command is deleted.
+**
+** The second argument is the name of the database file that is to
+** be accessed.
+**
+** For testing purposes, we also support the following:
+**
+** sqlite3 -encoding
+**
+** Return the encoding used by LIKE and GLOB operators. Choices
+** are UTF-8 and iso8859.
+**
+** sqlite3 -version
+**
+** Return the version number of the SQLite library.
+**
+** sqlite3 -tcl-uses-utf
+**
+** Return "1" if compiled with a Tcl uses UTF-8. Return "0" if
+** not. Used by tests to make sure the library was compiled
+** correctly.
+*/
+static int DbMain(void *cd, Tcl_Interp *interp, int objc,Tcl_Obj *const*objv){
+ SqliteDb *p;
+ void *pKey = 0;
+ int nKey = 0;
+ const char *zArg;
+ char *zErrMsg;
+ const char *zFile;
+ Tcl_DString translatedFilename;
+ if( objc==2 ){
+ zArg = Tcl_GetStringFromObj(objv[1], 0);
+ if( strcmp(zArg,"-version")==0 ){
+ Tcl_AppendResult(interp,sqlite3_version,0);
+ return TCL_OK;
+ }
+ if( strcmp(zArg,"-has-codec")==0 ){
+#ifdef SQLITE_HAS_CODEC
+ Tcl_AppendResult(interp,"1",0);
+#else
+ Tcl_AppendResult(interp,"0",0);
+#endif
+ return TCL_OK;
+ }
+ if( strcmp(zArg,"-tcl-uses-utf")==0 ){
+#ifdef TCL_UTF_MAX
+ Tcl_AppendResult(interp,"1",0);
+#else
+ Tcl_AppendResult(interp,"0",0);
+#endif
+ return TCL_OK;
+ }
+ }
+ if( objc==5 || objc==6 ){
+ zArg = Tcl_GetStringFromObj(objv[objc-2], 0);
+ if( strcmp(zArg,"-key")==0 ){
+ pKey = Tcl_GetByteArrayFromObj(objv[objc-1], &nKey);
+ objc -= 2;
+ }
+ }
+ if( objc!=3 && objc!=4 ){
+ Tcl_WrongNumArgs(interp, 1, objv,
+#ifdef SQLITE_HAS_CODEC
+ "HANDLE FILENAME ?-key CODEC-KEY?"
+#else
+ "HANDLE FILENAME ?MODE?"
+#endif
+ );
+ return TCL_ERROR;
+ }
+ zErrMsg = 0;
+ p = (SqliteDb*)Tcl_Alloc( sizeof(*p) );
+ if( p==0 ){
+ Tcl_SetResult(interp, "malloc failed", TCL_STATIC);
+ return TCL_ERROR;
+ }
+ memset(p, 0, sizeof(*p));
+ zFile = Tcl_GetStringFromObj(objv[2], 0);
+ zFile = Tcl_TranslateFileName(interp, zFile, &translatedFilename);
+ sqlite3_open(zFile, &p->db);
+ Tcl_DStringFree(&translatedFilename);
+ if( SQLITE_OK!=sqlite3_errcode(p->db) ){
+ zErrMsg = strdup(sqlite3_errmsg(p->db));
+ sqlite3_close(p->db);
+ p->db = 0;
+ }
+#ifdef SQLITE_HAS_CODEC
+ sqlite3_key(p->db, pKey, nKey);
+#endif
+ if( p->db==0 ){
+ Tcl_SetResult(interp, zErrMsg, TCL_VOLATILE);
+ Tcl_Free((char*)p);
+ free(zErrMsg);
+ return TCL_ERROR;
+ }
+ p->maxStmt = NUM_PREPARED_STMTS;
+ p->interp = interp;
+ zArg = Tcl_GetStringFromObj(objv[1], 0);
+ Tcl_CreateObjCommand(interp, zArg, DbObjCmd, (char*)p, DbDeleteCmd);
+
+ /* If compiled with SQLITE_TEST turned on, then register the "md5sum"
+ ** SQL function.
+ */
+#ifdef SQLITE_TEST
+ {
+ extern void Md5_Register(sqlite3*);
+#ifdef SQLITE_MEMDEBUG
+ int mallocfail = sqlite3_iMallocFail;
+ sqlite3_iMallocFail = 0;
+#endif
+ Md5_Register(p->db);
+#ifdef SQLITE_MEMDEBUG
+ sqlite3_iMallocFail = mallocfail;
+#endif
+ }
+#endif
+ return TCL_OK;
+}
+
+/*
+** Provide a dummy Tcl_InitStubs if we are using this as a static
+** library.
+*/
+#ifndef USE_TCL_STUBS
+# undef Tcl_InitStubs
+# define Tcl_InitStubs(a,b,c)
+#endif
+
+/*
+** Make sure we have a PACKAGE_VERSION macro defined. This will be
+** defined automatically by the TEA makefile. But other makefiles
+** do not define it.
+*/
+#ifndef PACKAGE_VERSION
+# define PACKAGE_VERSION SQLITE_VERSION
+#endif
+
+/*
+** Initialize this module.
+**
+** This Tcl module contains only a single new Tcl command named "sqlite".
+** (Hence there is no namespace. There is no point in using a namespace
+** if the extension only supplies one new name!) The "sqlite" command is
+** used to open a new SQLite database. See the DbMain() routine above
+** for additional information.
+*/
+EXTERN int Sqlite3_Init(Tcl_Interp *interp){
+ Tcl_InitStubs(interp, "8.4", 0);
+ Tcl_CreateObjCommand(interp, "sqlite3", (Tcl_ObjCmdProc*)DbMain, 0, 0);
+ Tcl_PkgProvide(interp, "sqlite3", PACKAGE_VERSION);
+ Tcl_CreateObjCommand(interp, "sqlite", (Tcl_ObjCmdProc*)DbMain, 0, 0);
+ Tcl_PkgProvide(interp, "sqlite", PACKAGE_VERSION);
+ return TCL_OK;
+}
+EXTERN int Tclsqlite3_Init(Tcl_Interp *interp){ return Sqlite3_Init(interp); }
+EXTERN int Sqlite3_SafeInit(Tcl_Interp *interp){ return TCL_OK; }
+EXTERN int Tclsqlite3_SafeInit(Tcl_Interp *interp){ return TCL_OK; }
+
+#ifndef SQLITE_3_SUFFIX_ONLY
+EXTERN int Sqlite_Init(Tcl_Interp *interp){ return Sqlite3_Init(interp); }
+EXTERN int Tclsqlite_Init(Tcl_Interp *interp){ return Sqlite3_Init(interp); }
+EXTERN int Sqlite_SafeInit(Tcl_Interp *interp){ return TCL_OK; }
+EXTERN int Tclsqlite_SafeInit(Tcl_Interp *interp){ return TCL_OK; }
+#endif
+
+#ifdef TCLSH
+/*****************************************************************************
+** The code that follows is used to build standalone TCL interpreters
+*/
+
+/*
+** If the macro TCLSH is one, then include this code for the
+** "main" routine that will initialize Tcl and take input from
+** standard input.
+*/
+#if TCLSH==1
+static char zMainloop[] =
+ "set line {}\n"
+ "while {![eof stdin]} {\n"
+ "if {$line!=\"\"} {\n"
+ "puts -nonewline \"> \"\n"
+ "} else {\n"
+ "puts -nonewline \"% \"\n"
+ "}\n"
+ "flush stdout\n"
+ "append line [gets stdin]\n"
+ "if {[info complete $line]} {\n"
+ "if {[catch {uplevel #0 $line} result]} {\n"
+ "puts stderr \"Error: $result\"\n"
+ "} elseif {$result!=\"\"} {\n"
+ "puts $result\n"
+ "}\n"
+ "set line {}\n"
+ "} else {\n"
+ "append line \\n\n"
+ "}\n"
+ "}\n"
+;
+#endif
+
+/*
+** If the macro TCLSH is two, then get the main loop code out of
+** the separate file "spaceanal_tcl.h".
+*/
+#if TCLSH==2
+static char zMainloop[] =
+#include "spaceanal_tcl.h"
+;
+#endif
+
+#define TCLSH_MAIN main /* Needed to fake out mktclapp */
+int TCLSH_MAIN(int argc, char **argv){
+ Tcl_Interp *interp;
+ Tcl_FindExecutable(argv[0]);
+ interp = Tcl_CreateInterp();
+ Sqlite3_Init(interp);
+#ifdef SQLITE_TEST
+ {
+ extern int Sqlitetest1_Init(Tcl_Interp*);
+ extern int Sqlitetest2_Init(Tcl_Interp*);
+ extern int Sqlitetest3_Init(Tcl_Interp*);
+ extern int Sqlitetest4_Init(Tcl_Interp*);
+ extern int Sqlitetest5_Init(Tcl_Interp*);
+ extern int Sqlitetest6_Init(Tcl_Interp*);
+ extern int Sqlitetest7_Init(Tcl_Interp*);
+ extern int Sqlitetest8_Init(Tcl_Interp*);
+ extern int Md5_Init(Tcl_Interp*);
+ extern int Sqlitetestsse_Init(Tcl_Interp*);
+ extern int Sqlitetestasync_Init(Tcl_Interp*);
+ extern int Sqlitetesttclvar_Init(Tcl_Interp*);
+ extern int Sqlitetestschema_Init(Tcl_Interp*);
+ extern int Sqlitetest_autoext_Init(Tcl_Interp*);
+
+ Sqlitetest1_Init(interp);
+ Sqlitetest2_Init(interp);
+ Sqlitetest3_Init(interp);
+ Sqlitetest4_Init(interp);
+ Sqlitetest5_Init(interp);
+ Sqlitetest6_Init(interp);
+ Sqlitetest7_Init(interp);
+ Sqlitetest8_Init(interp);
+ Sqlitetestasync_Init(interp);
+ Sqlitetesttclvar_Init(interp);
+ Sqlitetestschema_Init(interp);
+ Sqlitetest_autoext_Init(interp);
+ Md5_Init(interp);
+#ifdef SQLITE_SSE
+ Sqlitetestsse_Init(interp);
+#endif
+ }
+#endif
+ if( argc>=2 || TCLSH==2 ){
+ int i;
+ char zArgc[32];
+ sqlite3_snprintf(sizeof(zArgc), zArgc, "%d", argc-(3-TCLSH));
+ Tcl_SetVar(interp,"argc", zArgc, TCL_GLOBAL_ONLY);
+ Tcl_SetVar(interp,"argv0",argv[1],TCL_GLOBAL_ONLY);
+ Tcl_SetVar(interp,"argv", "", TCL_GLOBAL_ONLY);
+ for(i=3-TCLSH; i<argc; i++){
+ Tcl_SetVar(interp, "argv", argv[i],
+ TCL_GLOBAL_ONLY | TCL_LIST_ELEMENT | TCL_APPEND_VALUE);
+ }
+ if( TCLSH==1 && Tcl_EvalFile(interp, argv[1])!=TCL_OK ){
+ const char *zInfo = Tcl_GetVar(interp, "errorInfo", TCL_GLOBAL_ONLY);
+ if( zInfo==0 ) zInfo = interp->result;
+ fprintf(stderr,"%s: %s\n", *argv, zInfo);
+ return 1;
+ }
+ }
+ if( argc<=1 || TCLSH==2 ){
+ Tcl_GlobalEval(interp, zMainloop);
+ }
+ return 0;
+}
+#endif /* TCLSH */
+
+#endif /* !defined(NO_TCL) */
Added: freeswitch/trunk/libs/sqlite/src/test1.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test1.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,4072 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Code for testing all sorts of SQLite interfaces. This code
+** is not included in the SQLite library. It is used for automated
+** testing of the SQLite library.
+**
+** $Id: test1.c,v 1.222 2006/09/15 07:28:51 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "tcl.h"
+#include "os.h"
+#include <stdlib.h>
+#include <string.h>
+
+/*
+** This is a copy of the first part of the SqliteDb structure in
+** tclsqlite.c. We need it here so that the get_sqlite_pointer routine
+** can extract the sqlite3* pointer from an existing Tcl SQLite
+** connection.
+*/
+struct SqliteDb {
+ sqlite3 *db;
+};
+
+/*
+** A TCL command that returns the address of the sqlite* pointer
+** for an sqlite connection instance. Bad things happen if the
+** input is not an sqlite connection.
+*/
+static int get_sqlite_pointer(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ struct SqliteDb *p;
+ Tcl_CmdInfo cmdInfo;
+ char zBuf[100];
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "SQLITE-CONNECTION");
+ return TCL_ERROR;
+ }
+ if( !Tcl_GetCommandInfo(interp, Tcl_GetString(objv[1]), &cmdInfo) ){
+ Tcl_AppendResult(interp, "command not found: ",
+ Tcl_GetString(objv[1]), (char*)0);
+ return TCL_ERROR;
+ }
+ p = (struct SqliteDb*)cmdInfo.objClientData;
+ sprintf(zBuf, "%p", p->db);
+ if( strncmp(zBuf,"0x",2) ){
+ sprintf(zBuf, "0x%p", p->db);
+ }
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+const char *sqlite3TestErrorName(int rc){
+ const char *zName = 0;
+ switch( rc & 0xff ){
+ case SQLITE_OK: zName = "SQLITE_OK"; break;
+ case SQLITE_ERROR: zName = "SQLITE_ERROR"; break;
+ case SQLITE_PERM: zName = "SQLITE_PERM"; break;
+ case SQLITE_ABORT: zName = "SQLITE_ABORT"; break;
+ case SQLITE_BUSY: zName = "SQLITE_BUSY"; break;
+ case SQLITE_LOCKED: zName = "SQLITE_LOCKED"; break;
+ case SQLITE_NOMEM: zName = "SQLITE_NOMEM"; break;
+ case SQLITE_READONLY: zName = "SQLITE_READONLY"; break;
+ case SQLITE_INTERRUPT: zName = "SQLITE_INTERRUPT"; break;
+ case SQLITE_IOERR: zName = "SQLITE_IOERR"; break;
+ case SQLITE_CORRUPT: zName = "SQLITE_CORRUPT"; break;
+ case SQLITE_FULL: zName = "SQLITE_FULL"; break;
+ case SQLITE_CANTOPEN: zName = "SQLITE_CANTOPEN"; break;
+ case SQLITE_PROTOCOL: zName = "SQLITE_PROTOCOL"; break;
+ case SQLITE_EMPTY: zName = "SQLITE_EMPTY"; break;
+ case SQLITE_SCHEMA: zName = "SQLITE_SCHEMA"; break;
+ case SQLITE_CONSTRAINT: zName = "SQLITE_CONSTRAINT"; break;
+ case SQLITE_MISMATCH: zName = "SQLITE_MISMATCH"; break;
+ case SQLITE_MISUSE: zName = "SQLITE_MISUSE"; break;
+ case SQLITE_NOLFS: zName = "SQLITE_NOLFS"; break;
+ case SQLITE_AUTH: zName = "SQLITE_AUTH"; break;
+ case SQLITE_FORMAT: zName = "SQLITE_FORMAT"; break;
+ case SQLITE_RANGE: zName = "SQLITE_RANGE"; break;
+ case SQLITE_ROW: zName = "SQLITE_ROW"; break;
+ case SQLITE_DONE: zName = "SQLITE_DONE"; break;
+ case SQLITE_NOTADB: zName = "SQLITE_NOTADB"; break;
+ default: zName = "SQLITE_Unknown"; break;
+ }
+ return zName;
+}
+#define errorName sqlite3TestErrorName
+
+/*
+** Convert an sqlite3_stmt* into an sqlite3*. This depends on the
+** fact that the sqlite3* is the first field in the Vdbe structure.
+*/
+#define StmtToDb(X) sqlite3_db_handle(X)
+
+/*
+** Check a return value to make sure it agrees with the results
+** from sqlite3_errcode.
+*/
+int sqlite3TestErrCode(Tcl_Interp *interp, sqlite3 *db, int rc){
+ if( rc!=SQLITE_MISUSE && rc!=SQLITE_OK && sqlite3_errcode(db)!=rc ){
+ char zBuf[200];
+ int r2 = sqlite3_errcode(db);
+ sprintf(zBuf, "error code %s (%d) does not match sqlite3_errcode %s (%d)",
+ errorName(rc), rc, errorName(r2), r2);
+ Tcl_ResetResult(interp);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return 1;
+ }
+ return 0;
+}
+
+/*
+** Decode a pointer to an sqlite3 object.
+*/
+static int getDbPointer(Tcl_Interp *interp, const char *zA, sqlite3 **ppDb){
+ *ppDb = (sqlite3*)sqlite3TextToPtr(zA);
+ return TCL_OK;
+}
+
+/*
+** Decode a pointer to an sqlite3_stmt object.
+*/
+static int getStmtPointer(
+ Tcl_Interp *interp,
+ const char *zArg,
+ sqlite3_stmt **ppStmt
+){
+ *ppStmt = (sqlite3_stmt*)sqlite3TextToPtr(zArg);
+ return TCL_OK;
+}
+
+/*
+** Decode a pointer to an OsFile object.
+*/
+static int getFilePointer(
+ Tcl_Interp *interp,
+ const char *zArg,
+ OsFile **ppFile
+){
+ *ppFile = (OsFile*)sqlite3TextToPtr(zArg);
+ return TCL_OK;
+}
+
+/*
+** Generate a text representation of a pointer that can be understood
+** by the getDbPointer and getVmPointer routines above.
+**
+** The problem is, on some machines (Solaris) if you do a printf with
+** "%p" you cannot turn around and do a scanf with the same "%p" and
+** get your pointer back. You have to prepend a "0x" before it will
+** work. Or at least that is what is reported to me (drh). But this
+** behavior varies from machine to machine. The solution used here is
+** to test the string right after it is generated to see if it can be
+** understood by scanf, and if not, try prepending an "0x" to see if
+** that helps. If nothing works, a fatal error is generated.
+*/
+int sqlite3TestMakePointerStr(Tcl_Interp *interp, char *zPtr, void *p){
+ sqlite3_snprintf(100, zPtr, "%p", p);
+ return TCL_OK;
+}
+
+/*
+** The callback routine for sqlite3_exec_printf().
+*/
+static int exec_printf_cb(void *pArg, int argc, char **argv, char **name){
+ Tcl_DString *str = (Tcl_DString*)pArg;
+ int i;
+
+ if( Tcl_DStringLength(str)==0 ){
+ for(i=0; i<argc; i++){
+ Tcl_DStringAppendElement(str, name[i] ? name[i] : "NULL");
+ }
+ }
+ for(i=0; i<argc; i++){
+ Tcl_DStringAppendElement(str, argv[i] ? argv[i] : "NULL");
+ }
+ return 0;
+}
+
+/*
+** Usage: sqlite3_exec_printf DB FORMAT STRING
+**
+** Invoke the sqlite3_exec_printf() interface using the open database
+** DB. The SQL is the string FORMAT. The format string should contain
+** one %s or %q. STRING is the value inserted into %s or %q.
+*/
+static int test_exec_printf(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ sqlite3 *db;
+ Tcl_DString str;
+ int rc;
+ char *zErr = 0;
+ char *zSql;
+ char zBuf[30];
+ if( argc!=4 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB FORMAT STRING", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ Tcl_DStringInit(&str);
+ zSql = sqlite3_mprintf(argv[2], argv[3]);
+ rc = sqlite3_exec(db, zSql, exec_printf_cb, &str, &zErr);
+ sqlite3_free(zSql);
+ sprintf(zBuf, "%d", rc);
+ Tcl_AppendElement(interp, zBuf);
+ Tcl_AppendElement(interp, rc==SQLITE_OK ? Tcl_DStringValue(&str) : zErr);
+ Tcl_DStringFree(&str);
+ if( zErr ) sqlite3_free(zErr);
+ if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_mprintf_z_test SEPARATOR ARG0 ARG1 ...
+**
+** Test the %z format of sqliteMPrintf(). Use multiple mprintf() calls to
+** concatenate arg0 through argn using separator as the separator.
+** Return the result.
+*/
+static int test_mprintf_z(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ char *zResult = 0;
+ int i;
+
+ for(i=2; i<argc; i++){
+ zResult = sqlite3MPrintf("%z%s%s", zResult, argv[1], argv[i]);
+ }
+ Tcl_AppendResult(interp, zResult, 0);
+ sqliteFree(zResult);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_mprintf_n_test STRING
+**
+** Test the %n format of sqliteMPrintf(). Return the length of the
+** input string.
+*/
+static int test_mprintf_n(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ char *zStr;
+ int n = 0;
+ zStr = sqlite3MPrintf("%s%n", argv[1], &n);
+ sqliteFree(zStr);
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(n));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_get_table_printf DB FORMAT STRING
+**
+** Invoke the sqlite3_get_table_printf() interface using the open database
+** DB. The SQL is the string FORMAT. The format string should contain
+** one %s or %q. STRING is the value inserted into %s or %q.
+*/
+static int test_get_table_printf(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ sqlite3 *db;
+ Tcl_DString str;
+ int rc;
+ char *zErr = 0;
+ int nRow, nCol;
+ char **aResult;
+ int i;
+ char zBuf[30];
+ char *zSql;
+ if( argc!=4 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB FORMAT STRING", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ Tcl_DStringInit(&str);
+ zSql = sqlite3_mprintf(argv[2],argv[3]);
+ rc = sqlite3_get_table(db, zSql, &aResult, &nRow, &nCol, &zErr);
+ sqlite3_free(zSql);
+ sprintf(zBuf, "%d", rc);
+ Tcl_AppendElement(interp, zBuf);
+ if( rc==SQLITE_OK ){
+ sprintf(zBuf, "%d", nRow);
+ Tcl_AppendElement(interp, zBuf);
+ sprintf(zBuf, "%d", nCol);
+ Tcl_AppendElement(interp, zBuf);
+ for(i=0; i<(nRow+1)*nCol; i++){
+ Tcl_AppendElement(interp, aResult[i] ? aResult[i] : "NULL");
+ }
+ }else{
+ Tcl_AppendElement(interp, zErr);
+ }
+ sqlite3_free_table(aResult);
+ if( zErr ) sqlite3_free(zErr);
+ if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+ return TCL_OK;
+}
+
+
+/*
+** Usage: sqlite3_last_insert_rowid DB
+**
+** Returns the integer ROWID of the most recent insert.
+*/
+static int test_last_rowid(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ sqlite3 *db;
+ char zBuf[30];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0], " DB\"", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ sprintf(zBuf, "%lld", sqlite3_last_insert_rowid(db));
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: sqlite3_key DB KEY
+**
+** Set the codec key.
+*/
+static int test_key(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ sqlite3 *db;
+ const char *zKey;
+ int nKey;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FILENAME\"", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ zKey = argv[2];
+ nKey = strlen(zKey);
+#ifdef SQLITE_HAS_CODEC
+ sqlite3_key(db, zKey, nKey);
+#endif
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_rekey DB KEY
+**
+** Change the codec key.
+*/
+static int test_rekey(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ sqlite3 *db;
+ const char *zKey;
+ int nKey;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FILENAME\"", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ zKey = argv[2];
+ nKey = strlen(zKey);
+#ifdef SQLITE_HAS_CODEC
+ sqlite3_rekey(db, zKey, nKey);
+#endif
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_close DB
+**
+** Closes the database opened by sqlite3_open.
+*/
+static int sqlite_test_close(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ sqlite3 *db;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+       " DB\"", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ rc = sqlite3_close(db);
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+ return TCL_OK;
+}
+
+/*
+** Implementation of the x_coalesce() function.
+** Return the first non-NULL argument.
+*/
+static void ifnullFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+ int i;
+ for(i=0; i<argc; i++){
+ if( SQLITE_NULL!=sqlite3_value_type(argv[i]) ){
+ sqlite3_result_text(context, (char*)sqlite3_value_text(argv[i]),
+ sqlite3_value_bytes(argv[i]), SQLITE_TRANSIENT);
+ break;
+ }
+ }
+}
+
+/*
+** These are test functions. hex8() interprets its argument as
+** UTF8 and returns a hex encoding. hex16() interprets its argument
+** as UTF16 and returns a hex encoding.
+*/
+static void hex8Func(sqlite3_context *p, int argc, sqlite3_value **argv){
+ const unsigned char *z;
+ int i;
+ char zBuf[200];
+ z = sqlite3_value_text(argv[0]);
+ for(i=0; i<sizeof(zBuf)/2 - 2 && z[i]; i++){
+ sprintf(&zBuf[i*2], "%02x", z[i]&0xff);
+ }
+ zBuf[i*2] = 0;
+ sqlite3_result_text(p, (char*)zBuf, -1, SQLITE_TRANSIENT);
+}
+static void hex16Func(sqlite3_context *p, int argc, sqlite3_value **argv){
+ const unsigned short int *z;
+ int i;
+ char zBuf[400];
+ z = sqlite3_value_text16(argv[0]);
+ for(i=0; i<sizeof(zBuf)/4 - 4 && z[i]; i++){
+ sprintf(&zBuf[i*4], "%04x", z[i]&0xff);
+ }
+ zBuf[i*4] = 0;
+ sqlite3_result_text(p, (char*)zBuf, -1, SQLITE_TRANSIENT);
+}
+
+/*
+** A structure into which to accumulate text.
+*/
+struct dstr {
+ int nAlloc; /* Space allocated */
+ int nUsed; /* Space used */
+ char *z; /* The space */
+};
+
+/*
+** Append text to a dstr
+*/
+static void dstrAppend(struct dstr *p, const char *z, int divider){
+ int n = strlen(z);
+ if( p->nUsed + n + 2 > p->nAlloc ){
+ char *zNew;
+ p->nAlloc = p->nAlloc*2 + n + 200;
+ zNew = sqliteRealloc(p->z, p->nAlloc);
+ if( zNew==0 ){
+ sqliteFree(p->z);
+ memset(p, 0, sizeof(*p));
+ return;
+ }
+ p->z = zNew;
+ }
+ if( divider && p->nUsed>0 ){
+ p->z[p->nUsed++] = divider;
+ }
+ memcpy(&p->z[p->nUsed], z, n+1);
+ p->nUsed += n;
+}
+
+/*
+** Invoked for each callback from sqlite3ExecFunc
+*/
+static int execFuncCallback(void *pData, int argc, char **argv, char **NotUsed){
+ struct dstr *p = (struct dstr*)pData;
+ int i;
+ for(i=0; i<argc; i++){
+ if( argv[i]==0 ){
+ dstrAppend(p, "NULL", ' ');
+ }else{
+ dstrAppend(p, argv[i], ' ');
+ }
+ }
+ return 0;
+}
+
+/*
+** Implementation of the x_sqlite_exec() function. This function takes
+** a single argument and attempts to execute that argument as SQL code.
+** This is illegal and should set the SQLITE_MISUSE flag on the database.
+**
+** 2004-Jan-07: We have changed this to make it legal to call sqlite3_exec()
+** from within a function call.
+**
+** This routine simulates the effect of having two threads attempt to
+** use the same database at the same time.
+*/
+static void sqlite3ExecFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ struct dstr x;
+ memset(&x, 0, sizeof(x));
+ (void)sqlite3_exec((sqlite3*)sqlite3_user_data(context),
+ (char*)sqlite3_value_text(argv[0]),
+ execFuncCallback, &x, 0);
+ sqlite3_result_text(context, x.z, x.nUsed, SQLITE_TRANSIENT);
+ sqliteFree(x.z);
+}
+
+/*
+** Usage: sqlite_test_create_function DB
+**
+** Call the sqlite3_create_function API on the given database in order
+** to create a function named "x_coalesce". This function does the same thing
+** as the "coalesce" function. This function also registers an SQL function
+** named "x_sqlite_exec" that invokes sqlite3_exec(). Invoking sqlite3_exec()
+** in this way is illegal recursion and should raise an SQLITE_MISUSE error.
+** The effect is similar to trying to use the same database connection from
+** two threads at the same time.
+**
+** The original motivation for this routine was to be able to call the
+** sqlite3_create_function function while a query is in progress in order
+** to test the SQLITE_MISUSE detection logic.
+*/
+static int test_create_function(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ int rc;
+ sqlite3 *db;
+ extern void Md5_Register(sqlite3*);
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB\"", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ rc = sqlite3_create_function(db, "x_coalesce", -1, SQLITE_ANY, 0,
+ ifnullFunc, 0, 0);
+ if( rc==SQLITE_OK ){
+ rc = sqlite3_create_function(db, "hex8", 1, SQLITE_ANY, 0,
+ hex8Func, 0, 0);
+ }
+ if( rc==SQLITE_OK ){
+ rc = sqlite3_create_function(db, "hex16", 1, SQLITE_ANY, 0,
+ hex16Func, 0, 0);
+ }
+
+#ifndef SQLITE_OMIT_UTF16
+ /* Use the sqlite3_create_function16() API here. Mainly for fun, but also
+ ** because it is not tested anywhere else. */
+ if( rc==SQLITE_OK ){
+ sqlite3_value *pVal;
+#ifdef SQLITE_MEMDEBUG
+ if( sqlite3_iMallocFail>0 ){
+ sqlite3_iMallocFail++;
+ }
+#endif
+ pVal = sqlite3ValueNew();
+ sqlite3ValueSetStr(pVal, -1, "x_sqlite_exec", SQLITE_UTF8, SQLITE_STATIC);
+ rc = sqlite3_create_function16(db,
+ sqlite3ValueText(pVal, SQLITE_UTF16NATIVE),
+ 1, SQLITE_UTF16, db, sqlite3ExecFunc, 0, 0);
+ sqlite3ValueFree(pVal);
+ }
+#endif
+
+ if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+ Tcl_SetResult(interp, (char *)errorName(rc), 0);
+ return TCL_OK;
+}
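+
+/*
+** Illustrative sketch (hedged): once registered, the test functions above
+** can be called from SQL.  $DB is an assumed database-handle string and
+** [db] an assumed tclsqlite command; neither is defined in this file.
+**
+**   sqlite_test_create_function $DB
+**   db eval {SELECT x_coalesce(NULL, 'first non-null'), hex8('abc')}
+*/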
+
+/*
+** Routines to implement the x_count() aggregate function.
+**
+** x_count() counts the number of non-null arguments. But there are
+** some twists for testing purposes.
+**
+** If the argument to x_count() is 40 then a UTF-8 error is reported
+** on the step function. If x_count(41) is seen, then a UTF-16 error
+** is reported on the step function. If the total count is 42, then
+** a UTF-8 error is reported on the finalize function.
+*/
+typedef struct CountCtx CountCtx;
+struct CountCtx {
+ int n;
+};
+static void countStep(sqlite3_context *context, int argc, sqlite3_value **argv){
+ CountCtx *p;
+ p = sqlite3_aggregate_context(context, sizeof(*p));
+ if( (argc==0 || SQLITE_NULL!=sqlite3_value_type(argv[0]) ) && p ){
+ p->n++;
+ }
+ if( argc>0 ){
+ int v = sqlite3_value_int(argv[0]);
+ if( v==40 ){
+ sqlite3_result_error(context, "value of 40 handed to x_count", -1);
+#ifndef SQLITE_OMIT_UTF16
+ }else if( v==41 ){
+ const char zUtf16ErrMsg[] = { 0, 0x61, 0, 0x62, 0, 0x63, 0, 0, 0};
+ sqlite3_result_error16(context, &zUtf16ErrMsg[1-SQLITE_BIGENDIAN], -1);
+#endif
+ }
+ }
+}
+static void countFinalize(sqlite3_context *context){
+ CountCtx *p;
+ p = sqlite3_aggregate_context(context, sizeof(*p));
+ if( p ){
+ if( p->n==42 ){
+ sqlite3_result_error(context, "x_count totals to 42", -1);
+ }else{
+ sqlite3_result_int(context, p ? p->n : 0);
+ }
+ }
+}
+
+/*
+** Usage: sqlite_test_create_aggregate DB
+**
+** Call the sqlite3_create_function API on the given database in order
+** to create an aggregate function named "x_count".  This aggregate counts
+** its non-NULL arguments, with the error-injection twists described above.
+**
+** The original motivation for this routine was to be able to call the
+** sqlite3_create_aggregate function while a query is in progress in order
+** to test the SQLITE_MISUSE detection logic. See misuse.test.
+**
+** This routine was later extended to test the use of sqlite3_result_error()
+** within aggregate functions.
+*/
+static int test_create_aggregate(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ sqlite3 *db;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+       " DB\"", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ rc = sqlite3_create_function(db, "x_count", 0, SQLITE_UTF8, 0, 0,
+ countStep,countFinalize);
+ if( rc==SQLITE_OK ){
+ sqlite3_create_function(db, "x_count", 1, SQLITE_UTF8, 0, 0,
+ countStep,countFinalize);
+ }
+ if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+ return TCL_OK;
+}
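+
+/*
+** Illustrative sketch (hedged; $DB, [db] and table t1 are assumptions):
+**
+**   sqlite_test_create_aggregate $DB
+**   db eval {SELECT x_count(a) FROM t1}    ;# counts the non-NULL values of a
+**   db eval {SELECT x_count(40) FROM t1}   ;# triggers the UTF-8 step error
+*/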
+
+
+
+/*
+** Usage: sqlite3_mprintf_int FORMAT INTEGER INTEGER INTEGER
+**
+** Call mprintf with three integer arguments
+*/
+static int sqlite3_mprintf_int(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ int a[3], i;
+ char *z;
+ if( argc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FORMAT INT INT INT\"", 0);
+ return TCL_ERROR;
+ }
+ for(i=2; i<5; i++){
+ if( Tcl_GetInt(interp, argv[i], &a[i-2]) ) return TCL_ERROR;
+ }
+ z = sqlite3_mprintf(argv[1], a[0], a[1], a[2]);
+ Tcl_AppendResult(interp, z, 0);
+ sqlite3_free(z);
+ return TCL_OK;
+}
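+
+/*
+** Illustrative sketch (hedged): exercising the command above from TCL.
+**
+**   sqlite3_mprintf_int {%d %d %d} 1 2 3    ;# returns "1 2 3"
+*/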
+
+/*
+** If zNum represents an integer that will fit in 64-bits, then set
+** *pValue to that integer and return true. Otherwise return false.
+*/
+static int sqlite3GetInt64(const char *zNum, i64 *pValue){
+ if( sqlite3FitsIn64Bits(zNum) ){
+ sqlite3atoi64(zNum, pValue);
+ return 1;
+ }
+ return 0;
+}
+
+/*
+** Usage: sqlite3_mprintf_int64 FORMAT INTEGER INTEGER INTEGER
+**
+** Call mprintf with three 64-bit integer arguments
+*/
+static int sqlite3_mprintf_int64(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ int i;
+ sqlite_int64 a[3];
+ char *z;
+ if( argc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FORMAT INT INT INT\"", 0);
+ return TCL_ERROR;
+ }
+ for(i=2; i<5; i++){
+ if( !sqlite3GetInt64(argv[i], &a[i-2]) ){
+ Tcl_AppendResult(interp, "argument is not a valid 64-bit integer", 0);
+ return TCL_ERROR;
+ }
+ }
+ z = sqlite3_mprintf(argv[1], a[0], a[1], a[2]);
+ Tcl_AppendResult(interp, z, 0);
+ sqlite3_free(z);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_mprintf_str FORMAT INTEGER INTEGER STRING
+**
+** Call mprintf with two integer arguments and one string argument
+*/
+static int sqlite3_mprintf_str(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ int a[3], i;
+ char *z;
+ if( argc<4 || argc>5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FORMAT INT INT ?STRING?\"", 0);
+ return TCL_ERROR;
+ }
+ for(i=2; i<4; i++){
+ if( Tcl_GetInt(interp, argv[i], &a[i-2]) ) return TCL_ERROR;
+ }
+ z = sqlite3_mprintf(argv[1], a[0], a[1], argc>4 ? argv[4] : NULL);
+ Tcl_AppendResult(interp, z, 0);
+ sqlite3_free(z);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_mprintf_double FORMAT INTEGER INTEGER DOUBLE
+**
+** Call mprintf with two integer arguments and one double argument
+*/
+static int sqlite3_mprintf_double(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ int a[3], i;
+ double r;
+ char *z;
+ if( argc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FORMAT INT INT DOUBLE\"", 0);
+ return TCL_ERROR;
+ }
+ for(i=2; i<4; i++){
+ if( Tcl_GetInt(interp, argv[i], &a[i-2]) ) return TCL_ERROR;
+ }
+ if( Tcl_GetDouble(interp, argv[4], &r) ) return TCL_ERROR;
+ z = sqlite3_mprintf(argv[1], a[0], a[1], r);
+ Tcl_AppendResult(interp, z, 0);
+ sqlite3_free(z);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_mprintf_scaled FORMAT DOUBLE DOUBLE
+**
+** Call mprintf with a single double argument which is the product of the
+** two arguments given above. This is used to generate overflow and underflow
+** doubles to test that they are converted properly.
+*/
+static int sqlite3_mprintf_scaled(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ int i;
+ double r[2];
+ char *z;
+ if( argc!=4 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FORMAT DOUBLE DOUBLE\"", 0);
+ return TCL_ERROR;
+ }
+ for(i=2; i<4; i++){
+ if( Tcl_GetDouble(interp, argv[i], &r[i-2]) ) return TCL_ERROR;
+ }
+ z = sqlite3_mprintf(argv[1], r[0]*r[1]);
+ Tcl_AppendResult(interp, z, 0);
+ sqlite3_free(z);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_mprintf_stronly FORMAT STRING
+**
+** Call mprintf with a single string argument.
+*/
+static int sqlite3_mprintf_stronly(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ char *z;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FORMAT STRING\"", 0);
+ return TCL_ERROR;
+ }
+ z = sqlite3_mprintf(argv[1], argv[2]);
+ Tcl_AppendResult(interp, z, 0);
+ sqlite3_free(z);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_mprintf_hexdouble FORMAT HEX
+**
+** Call mprintf with a single double argument which is derived from the
+** hexadecimal encoding of an IEEE double.
+*/
+static int sqlite3_mprintf_hexdouble(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ char *z;
+ double r;
+ unsigned x1, x2;
+ long long unsigned d;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FORMAT STRING\"", 0);
+ return TCL_ERROR;
+ }
+ if( sscanf(argv[2], "%08x%08x", &x2, &x1)!=2 ){
+    Tcl_AppendResult(interp, "2nd argument should be 16 characters of hex", 0);
+ return TCL_ERROR;
+ }
+ d = x2;
+ d = (d<<32) + x1;
+ memcpy(&r, &d, sizeof(r));
+ z = sqlite3_mprintf(argv[1], r);
+ Tcl_AppendResult(interp, z, 0);
+ sqlite3_free(z);
+ return TCL_OK;
+}
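+
+/*
+** Illustrative sketch (hedged): 0x3ff0000000000000 is the IEEE-754 encoding
+** of 1.0, so the call below should render "1.0".
+**
+**   sqlite3_mprintf_hexdouble %.1f 3ff0000000000000
+*/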
+
+/*
+** Usage: sqlite_malloc_fail N ?REPEAT-INTERVAL?
+**
+** Rig sqliteMalloc() to fail on the N-th call and every REPEAT-INTERVAL call
+** after that. If REPEAT-INTERVAL is 0 or is omitted, then only a single
+** malloc will fail. If REPEAT-INTERVAL is 1 then all mallocs after the
+** first failure will continue to fail on every call. If REPEAT-INTERVAL is
+** 2 then every other malloc will fail. And so forth.
+**
+** Turn off this mechanism and reset the sqlite3ThreadData()->mallocFailed
+** variable if N==0.
+*/
+#ifdef SQLITE_MEMDEBUG
+static int sqlite_malloc_fail(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ int n;
+ int rep;
+ if( argc!=2 && argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0], " N\"", 0);
+ return TCL_ERROR;
+ }
+ if( Tcl_GetInt(interp, argv[1], &n) ) return TCL_ERROR;
+ if( argc==3 ){
+ if( Tcl_GetInt(interp, argv[2], &rep) ) return TCL_ERROR;
+ }else{
+ rep = 0;
+ }
+ sqlite3_iMallocFail = n;
+ sqlite3_iMallocReset = rep;
+ return TCL_OK;
+}
+#endif
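+
+/*
+** Illustrative sketch (hedged) of the forms described above:
+**
+**   sqlite_malloc_fail 5        ;# only the 5th malloc fails
+**   sqlite_malloc_fail 5 1      ;# the 5th and every subsequent malloc fail
+**   sqlite_malloc_fail 0        ;# disable the mechanism again
+*/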
+
+/*
+** Usage: sqlite_malloc_stat
+**
+** Return the number of prior calls to sqliteMalloc() and sqliteFree(),
+** followed by the current value of sqlite3_iMallocFail.
+*/
+#ifdef SQLITE_MEMDEBUG
+static int sqlite_malloc_stat(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ char zBuf[200];
+ sprintf(zBuf, "%d %d %d", sqlite3_nMalloc,sqlite3_nFree,sqlite3_iMallocFail);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** This function implements a Tcl command that may be invoked using any of
+** the four forms enumerated below.
+**
+** sqlite_malloc_outstanding
+** Return a summary of all unfreed blocks of memory allocated by the
+** current thread. See comments above function sqlite3OutstandingMallocs()
+** in util.c for a description of the returned value.
+**
+** sqlite_malloc_outstanding -bytes
+** Return the total amount of unfreed memory (in bytes) allocated by
+** this thread.
+**
+** sqlite_malloc_outstanding -maxbytes
+** Return the maximum amount of dynamic memory in use at one time
+** by this thread.
+**
+** sqlite_malloc_outstanding -clearmaxbytes
+** Set the value returned by [sqlite_malloc_outstanding -maxbytes]
+** to the current value of [sqlite_malloc_outstanding -bytes].
+*/
+static int sqlite_malloc_outstanding(
+ ClientData clientData,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ extern int sqlite3OutstandingMallocs(Tcl_Interp *interp);
+
+#if defined(SQLITE_DEBUG) && defined(SQLITE_MEMDEBUG) && SQLITE_MEMDEBUG>1
+ if( objc==2 ){
+ const char *zArg = Tcl_GetString(objv[1]);
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ ThreadData const *pTd = sqlite3ThreadDataReadOnly();
+ if( 0==strcmp(zArg, "-bytes") ){
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(pTd->nAlloc));
+ }else if( 0==strcmp(zArg, "-clearmaxbytes") ){
+ sqlite3_nMaxAlloc = pTd->nAlloc;
+ }else
+#endif
+ if( 0==strcmp(zArg, "-maxbytes") ){
+ Tcl_SetObjResult(interp, Tcl_NewWideIntObj(sqlite3_nMaxAlloc));
+ }else{
+ Tcl_AppendResult(interp, "bad option \"", zArg,
+ "\": must be -bytes, -maxbytes or -clearmaxbytes", 0
+ );
+ return TCL_ERROR;
+ }
+
+ return TCL_OK;
+ }
+
+ if( objc!=1 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "?-bytes?");
+ return TCL_ERROR;
+ }
+
+ return sqlite3OutstandingMallocs(interp);
+#else
+ return TCL_OK;
+#endif
+}
+#endif
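+
+/*
+** Illustrative sketch (hedged) of the four forms enumerated above:
+**
+**   sqlite_malloc_outstanding                 ;# summary of unfreed blocks
+**   sqlite_malloc_outstanding -bytes          ;# unfreed bytes, this thread
+**   sqlite_malloc_outstanding -maxbytes       ;# high-water mark in bytes
+**   sqlite_malloc_outstanding -clearmaxbytes  ;# reset the high-water mark
+*/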
+
+/*
+** Usage: sqlite3_enable_shared_cache BOOLEAN
+**
+*/
+#if !defined(SQLITE_OMIT_SHARED_CACHE)
+static int test_enable_shared(
+ ClientData clientData, /* Pointer to sqlite3_enable_XXX function */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ int rc;
+ int enable;
+ int ret = 0;
+
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "BOOLEAN");
+ return TCL_ERROR;
+ }
+ if( Tcl_GetBooleanFromObj(interp, objv[1], &enable) ){
+ return TCL_ERROR;
+ }
+ ret = sqlite3ThreadDataReadOnly()->useSharedData;
+ rc = sqlite3_enable_shared_cache(enable);
+ if( rc!=SQLITE_OK ){
+ Tcl_SetResult(interp, (char *)sqlite3ErrStr(rc), TCL_STATIC);
+ return TCL_ERROR;
+ }
+ Tcl_SetObjResult(interp, Tcl_NewBooleanObj(ret));
+ return TCL_OK;
+}
+#endif
+
+/*
+** Usage: sqlite3_extended_result_codes DB BOOLEAN
+**
+*/
+static int test_extended_result_codes(
+ ClientData clientData, /* Pointer to sqlite3_enable_XXX function */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ int enable;
+ sqlite3 *db;
+
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "DB BOOLEAN");
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ if( Tcl_GetBooleanFromObj(interp, objv[2], &enable) ) return TCL_ERROR;
+ sqlite3_extended_result_codes(db, enable);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_libversion_number
+**
+*/
+static int test_libversion_number(
+ ClientData clientData, /* Pointer to sqlite3_enable_XXX function */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(sqlite3_libversion_number()));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_table_column_metadata DB dbname tblname colname
+**
+*/
+#ifdef SQLITE_ENABLE_COLUMN_METADATA
+static int test_table_column_metadata(
+ ClientData clientData, /* Pointer to sqlite3_enable_XXX function */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ sqlite3 *db;
+ const char *zDb;
+ const char *zTbl;
+ const char *zCol;
+ int rc;
+ Tcl_Obj *pRet;
+
+ const char *zDatatype;
+ const char *zCollseq;
+ int notnull;
+ int primarykey;
+ int autoincrement;
+
+ if( objc!=5 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "DB dbname tblname colname");
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ zDb = Tcl_GetString(objv[2]);
+ zTbl = Tcl_GetString(objv[3]);
+ zCol = Tcl_GetString(objv[4]);
+
+ if( strlen(zDb)==0 ) zDb = 0;
+
+ rc = sqlite3_table_column_metadata(db, zDb, zTbl, zCol,
+      &zDatatype, &zCollseq, &notnull, &primarykey, &autoincrement);
+
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, sqlite3_errmsg(db), 0);
+ return TCL_ERROR;
+ }
+
+ pRet = Tcl_NewObj();
+ Tcl_ListObjAppendElement(0, pRet, Tcl_NewStringObj(zDatatype, -1));
+ Tcl_ListObjAppendElement(0, pRet, Tcl_NewStringObj(zCollseq, -1));
+ Tcl_ListObjAppendElement(0, pRet, Tcl_NewIntObj(notnull));
+ Tcl_ListObjAppendElement(0, pRet, Tcl_NewIntObj(primarykey));
+ Tcl_ListObjAppendElement(0, pRet, Tcl_NewIntObj(autoincrement));
+ Tcl_SetObjResult(interp, pRet);
+
+ return TCL_OK;
+}
+#endif
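+
+/*
+** Illustrative sketch (hedged; $DB, table t1 and column a are assumptions):
+**
+**   # Result is a 5-element list: {datatype collseq notnull primarykey autoinc}
+**   sqlite3_table_column_metadata $DB main t1 a
+*/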
+
+
+/*
+** Usage: sqlite3_load_extension DB-HANDLE FILE ?PROC?
+*/
+static int test_load_extension(
+ ClientData clientData, /* Not used */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ Tcl_CmdInfo cmdInfo;
+ sqlite3 *db;
+ int rc;
+ char *zDb;
+ char *zFile;
+ char *zProc = 0;
+ char *zErr = 0;
+
+ if( objc!=4 && objc!=3 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "DB-HANDLE FILE ?PROC?");
+ return TCL_ERROR;
+ }
+ zDb = Tcl_GetString(objv[1]);
+ zFile = Tcl_GetString(objv[2]);
+ if( objc==4 ){
+ zProc = Tcl_GetString(objv[3]);
+ }
+
+ /* Extract the C database handle from the Tcl command name */
+ if( !Tcl_GetCommandInfo(interp, zDb, &cmdInfo) ){
+ Tcl_AppendResult(interp, "command not found: ", zDb, (char*)0);
+ return TCL_ERROR;
+ }
+ db = ((struct SqliteDb*)cmdInfo.objClientData)->db;
+ assert(db);
+
+ /* Call the underlying C function. If an error occurs, set rc to
+ ** TCL_ERROR and load any error string into the interpreter. If no
+ ** error occurs, set rc to TCL_OK.
+ */
+#ifdef SQLITE_OMIT_LOAD_EXTENSION
+ rc = SQLITE_ERROR;
+ zErr = sqlite3_mprintf("this build omits sqlite3_load_extension()");
+#else
+ rc = sqlite3_load_extension(db, zFile, zProc, &zErr);
+#endif
+ if( rc!=SQLITE_OK ){
+ Tcl_SetResult(interp, zErr ? zErr : "", TCL_VOLATILE);
+ rc = TCL_ERROR;
+ }else{
+ rc = TCL_OK;
+ }
+ sqlite3_free(zErr);
+
+ return rc;
+}
+
+/*
+** Usage: sqlite3_enable_load_extension DB-HANDLE ONOFF
+*/
+static int test_enable_load(
+ ClientData clientData, /* Not used */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ Tcl_CmdInfo cmdInfo;
+ sqlite3 *db;
+ char *zDb;
+ int onoff;
+
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "DB-HANDLE ONOFF");
+ return TCL_ERROR;
+ }
+ zDb = Tcl_GetString(objv[1]);
+
+ /* Extract the C database handle from the Tcl command name */
+ if( !Tcl_GetCommandInfo(interp, zDb, &cmdInfo) ){
+ Tcl_AppendResult(interp, "command not found: ", zDb, (char*)0);
+ return TCL_ERROR;
+ }
+ db = ((struct SqliteDb*)cmdInfo.objClientData)->db;
+ assert(db);
+
+ /* Get the onoff parameter */
+ if( Tcl_GetBooleanFromObj(interp, objv[2], &onoff) ){
+ return TCL_ERROR;
+ }
+
+#ifdef SQLITE_OMIT_LOAD_EXTENSION
+ Tcl_AppendResult(interp, "this build omits sqlite3_load_extension()");
+ return TCL_ERROR;
+#else
+ sqlite3_enable_load_extension(db, onoff);
+ return TCL_OK;
+#endif
+}
+
+/*
+** Usage: sqlite_abort
+**
+** Shut down the process immediately. This is not a clean shutdown.
+** This command is used to test the recoverability of a database in
+** the event of a program crash.
+*/
+static int sqlite_abort(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ assert( interp==0 ); /* This will always fail */
+ return TCL_OK;
+}
+
+/*
+** The following routine is a user-defined SQL function whose purpose
+** is to test the sqlite_set_result() API.
+*/
+static void testFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+ while( argc>=2 ){
+ const char *zArg0 = (char*)sqlite3_value_text(argv[0]);
+ if( zArg0 ){
+ if( 0==sqlite3StrICmp(zArg0, "int") ){
+ sqlite3_result_int(context, sqlite3_value_int(argv[1]));
+ }else if( sqlite3StrICmp(zArg0,"int64")==0 ){
+ sqlite3_result_int64(context, sqlite3_value_int64(argv[1]));
+ }else if( sqlite3StrICmp(zArg0,"string")==0 ){
+ sqlite3_result_text(context, (char*)sqlite3_value_text(argv[1]), -1,
+ SQLITE_TRANSIENT);
+ }else if( sqlite3StrICmp(zArg0,"double")==0 ){
+ sqlite3_result_double(context, sqlite3_value_double(argv[1]));
+ }else if( sqlite3StrICmp(zArg0,"null")==0 ){
+ sqlite3_result_null(context);
+ }else if( sqlite3StrICmp(zArg0,"value")==0 ){
+ sqlite3_result_value(context, argv[sqlite3_value_int(argv[1])]);
+ }else{
+ goto error_out;
+ }
+ }else{
+ goto error_out;
+ }
+ argc -= 2;
+ argv += 2;
+ }
+ return;
+
+error_out:
+ sqlite3_result_error(context,"first argument should be one of: "
+ "int int64 string double null value", -1);
+}
+
+/*
+** Usage: sqlite_register_test_function DB NAME
+**
+** Register the test SQL function on the database DB under the name NAME.
+*/
+static int test_register_func(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ sqlite3 *db;
+ int rc;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+      " DB FUNCTION-NAME\"", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ rc = sqlite3_create_function(db, argv[2], -1, SQLITE_UTF8, 0,
+ testFunc, 0, 0);
+ if( rc!=0 ){
+ Tcl_AppendResult(interp, sqlite3ErrStr(rc), 0);
+ return TCL_ERROR;
+ }
+ if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+ return TCL_OK;
+}
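+
+/*
+** Illustrative sketch (hedged; $DB and [db] are assumed handles):
+**
+**   sqlite_register_test_function $DB testfunc
+**   db eval {SELECT testfunc('int', 42)}    ;# returns the integer 42
+*/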
+
+/*
+** Usage: sqlite3_finalize STMT
+**
+** Finalize a statement handle.
+*/
+static int test_finalize(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int rc;
+ sqlite3 *db;
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " <STMT>", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+
+ if( pStmt ){
+ db = StmtToDb(pStmt);
+ }
+ rc = sqlite3_finalize(pStmt);
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+ if( db && sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_reset STMT
+**
+** Reset a statement handle.
+*/
+static int test_reset(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int rc;
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " <STMT>", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+
+ rc = sqlite3_reset(pStmt);
+ if( pStmt && sqlite3TestErrCode(interp, StmtToDb(pStmt), rc) ){
+ return TCL_ERROR;
+ }
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+/*
+ if( rc ){
+ return TCL_ERROR;
+ }
+*/
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_expired STMT
+**
+** Return TRUE if a recompilation of the statement is recommended.
+*/
+static int test_expired(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " <STMT>", 0);
+ return TCL_ERROR;
+ }
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ Tcl_SetObjResult(interp, Tcl_NewBooleanObj(sqlite3_expired(pStmt)));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_transfer_bindings FROMSTMT TOSTMT
+**
+** Transfer all bindings from FROMSTMT over to TOSTMT
+*/
+static int test_transfer_bind(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt1, *pStmt2;
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " FROM-STMT TO-STMT", 0);
+ return TCL_ERROR;
+ }
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt1)) return TCL_ERROR;
+ if( getStmtPointer(interp, Tcl_GetString(objv[2]), &pStmt2)) return TCL_ERROR;
+ Tcl_SetObjResult(interp,
+ Tcl_NewIntObj(sqlite3_transfer_bindings(pStmt1,pStmt2)));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_changes DB
+**
+** Return the number of changes made to the database by the last SQL
+** execution.
+*/
+static int test_changes(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3 *db;
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " DB", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(sqlite3_changes(db)));
+ return TCL_OK;
+}
+
+/*
+** This is the "static_bind_value" that variables are bound to when
+** the FLAG option of sqlite3_bind is "static"
+*/
+static char *sqlite_static_bind_value = 0;
+static int sqlite_static_bind_nbyte = 0;
+
+/*
+** Usage: sqlite3_bind VM IDX VALUE FLAGS
+**
+** Sets the value of the IDX-th occurrence of "?" in the original SQL
+** string. VALUE is the new value. If FLAGS=="null" then VALUE is
+** ignored and the value is set to NULL. If FLAGS=="static" then
+** the value is set to the value of a static variable named
+** "sqlite_static_bind_value". If FLAGS=="normal" then a copy
+** of the VALUE is made. If FLAGS=="blob10" then VALUE is ignored
+** and a 10-byte blob "abc\000xyz\000pq" is inserted.
+*/
+static int test_bind(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ sqlite3_stmt *pStmt;
+ int rc;
+ int idx;
+ if( argc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+       " VM IDX VALUE (null|static|static-nbytes|normal|blob10)\"", 0);
+ return TCL_ERROR;
+ }
+ if( getStmtPointer(interp, argv[1], &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetInt(interp, argv[2], &idx) ) return TCL_ERROR;
+ if( strcmp(argv[4],"null")==0 ){
+ rc = sqlite3_bind_null(pStmt, idx);
+ }else if( strcmp(argv[4],"static")==0 ){
+ rc = sqlite3_bind_text(pStmt, idx, sqlite_static_bind_value, -1, 0);
+ }else if( strcmp(argv[4],"static-nbytes")==0 ){
+ rc = sqlite3_bind_text(pStmt, idx, sqlite_static_bind_value,
+ sqlite_static_bind_nbyte, 0);
+ }else if( strcmp(argv[4],"normal")==0 ){
+ rc = sqlite3_bind_text(pStmt, idx, argv[3], -1, SQLITE_TRANSIENT);
+ }else if( strcmp(argv[4],"blob10")==0 ){
+ rc = sqlite3_bind_text(pStmt, idx, "abc\000xyz\000pq", 10, SQLITE_STATIC);
+ }else{
+    Tcl_AppendResult(interp, "4th argument should be \"null\", \"static\", "
+        "\"static-nbytes\", \"normal\" or \"blob10\"", 0);
+ return TCL_ERROR;
+ }
+ if( sqlite3TestErrCode(interp, StmtToDb(pStmt), rc) ) return TCL_ERROR;
+ if( rc ){
+ char zBuf[50];
+ sprintf(zBuf, "(%d) ", rc);
+ Tcl_AppendResult(interp, zBuf, sqlite3ErrStr(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
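+
+/*
+** Illustrative sketch (hedged): binding a value through the command above.
+** $DB, table t1 and the global TAIL variable are assumptions for this
+** example; sqlite3_prepare and sqlite3_finalize are the test commands
+** defined elsewhere in this file.
+**
+**   set STMT [sqlite3_prepare $DB {INSERT INTO t1 VALUES(?)} -1 TAIL]
+**   sqlite3_bind $STMT 1 {some text} normal
+**   sqlite3_finalize $STMT
+*/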
+
+#ifndef SQLITE_OMIT_UTF16
+/*
+** Usage: add_test_collate <db ptr> <utf8> <utf16le> <utf16be>
+**
+** This function is used to test that SQLite selects the correct collation
+** sequence callback when multiple versions (for different text encodings)
+** are available.
+**
+** Calling this routine registers the collation sequence "test_collate"
+** with database handle <db>. The second argument must be a list of three
+** boolean values. If the first is true, then a version of test_collate is
+** registered for UTF-8, if the second is true, a version is registered for
+** UTF-16le, if the third is true, a UTF-16be version is available.
+** Previous versions of test_collate are deleted.
+**
+** The collation sequence test_collate is implemented by calling the
+** following TCL script:
+**
+** "test_collate <enc> <lhs> <rhs>"
+**
+** The <lhs> and <rhs> are the two values being compared, encoded in UTF-8.
+** The <enc> parameter is the encoding of the collation function that
+** SQLite selected to call. The TCL test script implements the
+** "test_collate" proc.
+**
+** Note that this will only work with one interpreter at a time, as the
+** interp pointer to use when evaluating the TCL script is stored in
+** pTestCollateInterp.
+*/
+static Tcl_Interp* pTestCollateInterp;
+static int test_collate_func(
+ void *pCtx,
+ int nA, const void *zA,
+ int nB, const void *zB
+){
+ Tcl_Interp *i = pTestCollateInterp;
+ int encin = (int)pCtx;
+ int res;
+ int n;
+
+ sqlite3_value *pVal;
+ Tcl_Obj *pX;
+
+ pX = Tcl_NewStringObj("test_collate", -1);
+ Tcl_IncrRefCount(pX);
+
+ switch( encin ){
+ case SQLITE_UTF8:
+ Tcl_ListObjAppendElement(i,pX,Tcl_NewStringObj("UTF-8",-1));
+ break;
+ case SQLITE_UTF16LE:
+ Tcl_ListObjAppendElement(i,pX,Tcl_NewStringObj("UTF-16LE",-1));
+ break;
+ case SQLITE_UTF16BE:
+ Tcl_ListObjAppendElement(i,pX,Tcl_NewStringObj("UTF-16BE",-1));
+ break;
+ default:
+ assert(0);
+ }
+
+ pVal = sqlite3ValueNew();
+ sqlite3ValueSetStr(pVal, nA, zA, encin, SQLITE_STATIC);
+ n = sqlite3_value_bytes(pVal);
+ Tcl_ListObjAppendElement(i,pX,
+ Tcl_NewStringObj((char*)sqlite3_value_text(pVal),n));
+ sqlite3ValueSetStr(pVal, nB, zB, encin, SQLITE_STATIC);
+ n = sqlite3_value_bytes(pVal);
+ Tcl_ListObjAppendElement(i,pX,
+ Tcl_NewStringObj((char*)sqlite3_value_text(pVal),n));
+ sqlite3ValueFree(pVal);
+
+ Tcl_EvalObjEx(i, pX, 0);
+ Tcl_DecrRefCount(pX);
+ Tcl_GetIntFromObj(i, Tcl_GetObjResult(i), &res);
+ return res;
+}
+static int test_collate(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3 *db;
+ int val;
+ sqlite3_value *pVal;
+ int rc;
+
+ if( objc!=5 ) goto bad_args;
+ pTestCollateInterp = interp;
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+
+ if( TCL_OK!=Tcl_GetBooleanFromObj(interp, objv[2], &val) ) return TCL_ERROR;
+ rc = sqlite3_create_collation(db, "test_collate", SQLITE_UTF8,
+ (void *)SQLITE_UTF8, val?test_collate_func:0);
+ if( rc==SQLITE_OK ){
+ if( TCL_OK!=Tcl_GetBooleanFromObj(interp, objv[3], &val) ) return TCL_ERROR;
+ rc = sqlite3_create_collation(db, "test_collate", SQLITE_UTF16LE,
+ (void *)SQLITE_UTF16LE, val?test_collate_func:0);
+ if( TCL_OK!=Tcl_GetBooleanFromObj(interp, objv[4], &val) ) return TCL_ERROR;
+
+#ifdef SQLITE_MEMDEBUG
+ if( sqlite3_iMallocFail>0 ){
+ sqlite3_iMallocFail++;
+ }
+#endif
+ pVal = sqlite3ValueNew();
+ sqlite3ValueSetStr(pVal, -1, "test_collate", SQLITE_UTF8, SQLITE_STATIC);
+ rc = sqlite3_create_collation16(db,
+ sqlite3ValueText(pVal, SQLITE_UTF16NATIVE), SQLITE_UTF16BE,
+ (void *)SQLITE_UTF16BE, val?test_collate_func:0);
+ sqlite3ValueFree(pVal);
+ }
+ if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, sqlite3TestErrorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+
+bad_args:
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " <DB> <utf8> <utf16le> <utf16be>", 0);
+ return TCL_ERROR;
+}
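+
+/*
+** Illustrative sketch (hedged): the TCL side of the protocol described
+** above.  $DB, [db] and table t1 are assumptions; the proc simply sorts
+** by byte value.
+**
+**   proc test_collate {enc lhs rhs} { return [string compare $lhs $rhs] }
+**   add_test_collate $DB 1 1 1
+**   db eval {SELECT a FROM t1 ORDER BY a COLLATE test_collate}
+*/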
+
+/*
+** When the collation needed callback is invoked, record the name of
+** the requested collating function here. The recorded name is linked
+** to a TCL variable and used to make sure that the requested collation
+** name is correct.
+*/
+static char zNeededCollation[200];
+static char *pzNeededCollation = zNeededCollation;
+
+
+/*
+** Called when a collating sequence is needed. Registered using
+** sqlite3_collation_needed16().
+*/
+static void test_collate_needed_cb(
+ void *pCtx,
+ sqlite3 *db,
+ int eTextRep,
+ const void *pName
+){
+ int enc = ENC(db);
+ int i;
+ char *z;
+ for(z = (char*)pName, i=0; *z || z[1]; z++){
+ if( *z ) zNeededCollation[i++] = *z;
+ }
+ zNeededCollation[i] = 0;
+ sqlite3_create_collation(
+ db, "test_collate", ENC(db), (void *)enc, test_collate_func);
+}
+
+/*
+** Usage: add_test_collate_needed DB
+*/
+static int test_collate_needed(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3 *db;
+ int rc;
+
+ if( objc!=2 ) goto bad_args;
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ rc = sqlite3_collation_needed16(db, 0, test_collate_needed_cb);
+ zNeededCollation[0] = 0;
+ if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+ return TCL_OK;
+
+bad_args:
+ Tcl_WrongNumArgs(interp, 1, objv, "DB");
+ return TCL_ERROR;
+}
+
+/*
+** tclcmd: add_alignment_test_collations DB
+**
+** Add two new collating sequences to the database DB
+**
+** utf16_aligned
+** utf16_unaligned
+**
+** Both collating sequences use the same sort order as BINARY.
+** The only difference is that the utf16_aligned collating
+** sequence is declared with the SQLITE_UTF16_ALIGNED flag.
+** Both collating functions increment the unaligned utf16 counter
+** whenever they see a string that begins on an odd byte boundary.
+*/
+static int unaligned_string_counter = 0;
+static int alignmentCollFunc(
+ void *NotUsed,
+ int nKey1, const void *pKey1,
+ int nKey2, const void *pKey2
+){
+ int rc, n;
+ n = nKey1<nKey2 ? nKey1 : nKey2;
+ if( nKey1>0 && 1==(1&(int)pKey1) ) unaligned_string_counter++;
+ if( nKey2>0 && 1==(1&(int)pKey2) ) unaligned_string_counter++;
+ rc = memcmp(pKey1, pKey2, n);
+ if( rc==0 ){
+ rc = nKey1 - nKey2;
+ }
+ return rc;
+}
+static int add_alignment_test_collations(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3 *db;
+ if( objc>=2 ){
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ sqlite3_create_collation(db, "utf16_unaligned",
+ SQLITE_UTF16,
+ 0, alignmentCollFunc);
+ sqlite3_create_collation(db, "utf16_aligned",
+ SQLITE_UTF16 | SQLITE_UTF16_ALIGNED,
+ 0, alignmentCollFunc);
+ }
+  return TCL_OK;
+}
+#endif /* !defined(SQLITE_OMIT_UTF16) */
+
+/*
+** Usage: add_test_function <db ptr> <utf8> <utf16le> <utf16be>
+**
+** This function is used to test that SQLite selects the correct user
+** function callback when multiple versions (for different text encodings)
+** are available.
+**
+** Calling this routine registers up to three versions of the user function
+** "test_function" with database handle <db>. If the second argument is
+** true, then a version of test_function is registered for UTF-8, if the
+** third is true, a version is registered for UTF-16le, if the fourth is
+** true, a UTF-16be version is available. Previous versions of
+** test_function are deleted.
+**
+** The user function is implemented by calling the following TCL script:
+**
+** "test_function <enc> <arg>"
+**
+** Where <enc> is one of UTF-8, UTF-16LE or UTF-16BE, and <arg> is the
+** single argument passed to the SQL function. The value returned by
+** the TCL script is used as the return value of the SQL function. It
+** is passed to SQLite using UTF-16BE for a UTF-8 test_function(), UTF-8
+** for a UTF-16LE test_function(), and UTF-16LE for an implementation that
+** prefers UTF-16BE.
+*/
+#ifndef SQLITE_OMIT_UTF16
+static void test_function_utf8(
+ sqlite3_context *pCtx,
+ int nArg,
+ sqlite3_value **argv
+){
+ Tcl_Interp *interp;
+ Tcl_Obj *pX;
+ sqlite3_value *pVal;
+ interp = (Tcl_Interp *)sqlite3_user_data(pCtx);
+ pX = Tcl_NewStringObj("test_function", -1);
+ Tcl_IncrRefCount(pX);
+ Tcl_ListObjAppendElement(interp, pX, Tcl_NewStringObj("UTF-8", -1));
+ Tcl_ListObjAppendElement(interp, pX,
+ Tcl_NewStringObj((char*)sqlite3_value_text(argv[0]), -1));
+ Tcl_EvalObjEx(interp, pX, 0);
+ Tcl_DecrRefCount(pX);
+ sqlite3_result_text(pCtx, Tcl_GetStringResult(interp), -1, SQLITE_TRANSIENT);
+ pVal = sqlite3ValueNew();
+ sqlite3ValueSetStr(pVal, -1, Tcl_GetStringResult(interp),
+ SQLITE_UTF8, SQLITE_STATIC);
+ sqlite3_result_text16be(pCtx, sqlite3_value_text16be(pVal),
+ -1, SQLITE_TRANSIENT);
+ sqlite3ValueFree(pVal);
+}
+static void test_function_utf16le(
+ sqlite3_context *pCtx,
+ int nArg,
+ sqlite3_value **argv
+){
+ Tcl_Interp *interp;
+ Tcl_Obj *pX;
+ sqlite3_value *pVal;
+ interp = (Tcl_Interp *)sqlite3_user_data(pCtx);
+ pX = Tcl_NewStringObj("test_function", -1);
+ Tcl_IncrRefCount(pX);
+ Tcl_ListObjAppendElement(interp, pX, Tcl_NewStringObj("UTF-16LE", -1));
+ Tcl_ListObjAppendElement(interp, pX,
+ Tcl_NewStringObj((char*)sqlite3_value_text(argv[0]), -1));
+ Tcl_EvalObjEx(interp, pX, 0);
+ Tcl_DecrRefCount(pX);
+ pVal = sqlite3ValueNew();
+ sqlite3ValueSetStr(pVal, -1, Tcl_GetStringResult(interp),
+ SQLITE_UTF8, SQLITE_STATIC);
+ sqlite3_result_text(pCtx,(char*)sqlite3_value_text(pVal),-1,SQLITE_TRANSIENT);
+ sqlite3ValueFree(pVal);
+}
+static void test_function_utf16be(
+ sqlite3_context *pCtx,
+ int nArg,
+ sqlite3_value **argv
+){
+ Tcl_Interp *interp;
+ Tcl_Obj *pX;
+ sqlite3_value *pVal;
+ interp = (Tcl_Interp *)sqlite3_user_data(pCtx);
+ pX = Tcl_NewStringObj("test_function", -1);
+ Tcl_IncrRefCount(pX);
+ Tcl_ListObjAppendElement(interp, pX, Tcl_NewStringObj("UTF-16BE", -1));
+ Tcl_ListObjAppendElement(interp, pX,
+ Tcl_NewStringObj((char*)sqlite3_value_text(argv[0]), -1));
+ Tcl_EvalObjEx(interp, pX, 0);
+ Tcl_DecrRefCount(pX);
+ pVal = sqlite3ValueNew();
+ sqlite3ValueSetStr(pVal, -1, Tcl_GetStringResult(interp),
+ SQLITE_UTF8, SQLITE_STATIC);
+ sqlite3_result_text16le(pCtx, sqlite3_value_text16le(pVal),
+ -1, SQLITE_TRANSIENT);
+ sqlite3ValueFree(pVal);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+static int test_function(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_UTF16
+ sqlite3 *db;
+ int val;
+
+ if( objc!=5 ) goto bad_args;
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+
+ if( TCL_OK!=Tcl_GetBooleanFromObj(interp, objv[2], &val) ) return TCL_ERROR;
+ if( val ){
+ sqlite3_create_function(db, "test_function", 1, SQLITE_UTF8,
+ interp, test_function_utf8, 0, 0);
+ }
+ if( TCL_OK!=Tcl_GetBooleanFromObj(interp, objv[3], &val) ) return TCL_ERROR;
+ if( val ){
+ sqlite3_create_function(db, "test_function", 1, SQLITE_UTF16LE,
+ interp, test_function_utf16le, 0, 0);
+ }
+ if( TCL_OK!=Tcl_GetBooleanFromObj(interp, objv[4], &val) ) return TCL_ERROR;
+ if( val ){
+ sqlite3_create_function(db, "test_function", 1, SQLITE_UTF16BE,
+ interp, test_function_utf16be, 0, 0);
+ }
+
+ return TCL_OK;
+bad_args:
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " <DB> <utf8> <utf16le> <utf16be>", 0);
+#endif /* SQLITE_OMIT_UTF16 */
+ return TCL_ERROR;
+}
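+
+/*
+** Illustrative sketch (hedged): registering only the UTF-8 version and the
+** TCL proc that implements it.  $DB and [db] are assumed handles.
+**
+**   proc test_function {enc arg} { return "$enc: $arg" }
+**   add_test_function $DB 1 0 0
+**   db eval {SELECT test_function('hello')}   ;# returns "UTF-8: hello"
+*/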
+
+/*
+** Usage: test_errstr <err code>
+**
+** Test that the English-language string equivalents for SQLite error codes
+** are sane.  The parameter is the symbolic name of an SQLite error code
+** (e.g. "SQLITE_BUSY").  The result is the English-language explanation of
+** that error code.
+*/
+static int test_errstr(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ char *zCode;
+ int i;
+  if( objc!=2 ){
+    Tcl_WrongNumArgs(interp, 1, objv, "<error code>");
+    return TCL_ERROR;
+  }
+
+ zCode = Tcl_GetString(objv[1]);
+ for(i=0; i<200; i++){
+ if( 0==strcmp(errorName(i), zCode) ) break;
+ }
+ Tcl_SetResult(interp, (char *)sqlite3ErrStr(i), 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: breakpoint
+**
+** This routine exists for one purpose - to provide a place to put a
+** breakpoint with GDB that can be triggered using TCL code. The use
+** for this is when a particular test fails on (say) the 1485th iteration.
+** In the TCL test script, we can add code like this:
+**
+** if {$i==1485} breakpoint
+**
+** Then run testfixture in the debugger and wait for the breakpoint to
+** fire. Then additional breakpoints can be set to trace down the bug.
+*/
+static int test_breakpoint(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ char **argv /* Text of each argument */
+){
+ return TCL_OK; /* Do nothing */
+}
+
+/*
+** Usage: sqlite3_bind_int STMT N VALUE
+**
+** Test the sqlite3_bind_int interface. STMT is a prepared statement.
+** N is the index of a wildcard in the prepared statement. This command
+** binds a 32-bit integer VALUE to that wildcard.
+*/
+static int test_bind_int(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int idx;
+ int value;
+ int rc;
+
+ if( objc!=4 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " STMT N VALUE", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &idx) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[3], &value) ) return TCL_ERROR;
+
+ rc = sqlite3_bind_int(pStmt, idx, value);
+ if( sqlite3TestErrCode(interp, StmtToDb(pStmt), rc) ) return TCL_ERROR;
+ if( rc!=SQLITE_OK ){
+ return TCL_ERROR;
+ }
+
+ return TCL_OK;
+}
+
+
+/*
+** Usage: sqlite3_bind_int64 STMT N VALUE
+**
+** Test the sqlite3_bind_int64 interface. STMT is a prepared statement.
+** N is the index of a wildcard in the prepared statement. This command
+** binds a 64-bit integer VALUE to that wildcard.
+*/
+static int test_bind_int64(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int idx;
+ i64 value;
+ int rc;
+
+ if( objc!=4 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " STMT N VALUE", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &idx) ) return TCL_ERROR;
+ if( Tcl_GetWideIntFromObj(interp, objv[3], &value) ) return TCL_ERROR;
+
+ rc = sqlite3_bind_int64(pStmt, idx, value);
+ if( sqlite3TestErrCode(interp, StmtToDb(pStmt), rc) ) return TCL_ERROR;
+ if( rc!=SQLITE_OK ){
+ return TCL_ERROR;
+ }
+
+ return TCL_OK;
+}
+
+
+/*
+** Usage: sqlite3_bind_double STMT N VALUE
+**
+** Test the sqlite3_bind_double interface. STMT is a prepared statement.
+** N is the index of a wildcard in the prepared statement. This command
+** binds a 64-bit floating point VALUE to that wildcard.
+*/
+static int test_bind_double(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int idx;
+ double value;
+ int rc;
+
+ if( objc!=4 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " STMT N VALUE", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &idx) ) return TCL_ERROR;
+ if( Tcl_GetDoubleFromObj(interp, objv[3], &value) ) return TCL_ERROR;
+
+ rc = sqlite3_bind_double(pStmt, idx, value);
+ if( sqlite3TestErrCode(interp, StmtToDb(pStmt), rc) ) return TCL_ERROR;
+ if( rc!=SQLITE_OK ){
+ return TCL_ERROR;
+ }
+
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_bind_null STMT N
+**
+** Test the sqlite3_bind_null interface. STMT is a prepared statement.
+** N is the index of a wildcard in the prepared statement. This command
+** binds a NULL to the wildcard.
+*/
+static int test_bind_null(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int idx;
+ int rc;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " STMT N", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &idx) ) return TCL_ERROR;
+
+ rc = sqlite3_bind_null(pStmt, idx);
+ if( sqlite3TestErrCode(interp, StmtToDb(pStmt), rc) ) return TCL_ERROR;
+ if( rc!=SQLITE_OK ){
+ return TCL_ERROR;
+ }
+
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_bind_text STMT N STRING BYTES
+**
+** Test the sqlite3_bind_text interface. STMT is a prepared statement.
+** N is the index of a wildcard in the prepared statement. This command
+** binds a UTF-8 string STRING to the wildcard. The string is BYTES bytes
+** long.
+*/
+static int test_bind_text(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int idx;
+ int bytes;
+ char *value;
+ int rc;
+
+ if( objc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " STMT N VALUE BYTES", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &idx) ) return TCL_ERROR;
+ value = Tcl_GetString(objv[3]);
+ if( Tcl_GetIntFromObj(interp, objv[4], &bytes) ) return TCL_ERROR;
+
+ rc = sqlite3_bind_text(pStmt, idx, value, bytes, SQLITE_TRANSIENT);
+ if( sqlite3TestErrCode(interp, StmtToDb(pStmt), rc) ) return TCL_ERROR;
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, sqlite3TestErrorName(rc), 0);
+ return TCL_ERROR;
+ }
+
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_bind_text16 ?-static? STMT N STRING BYTES
+**
+** Test the sqlite3_bind_text16 interface. STMT is a prepared statement.
+** N is the index of a wildcard in the prepared statement. This command
+** binds a UTF-16 string STRING to the wildcard. The string is BYTES bytes
+** long.
+*/
+static int test_bind_text16(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_UTF16
+ sqlite3_stmt *pStmt;
+ int idx;
+ int bytes;
+ char *value;
+ int rc;
+
+ void (*xDel)() = (objc==6?SQLITE_STATIC:SQLITE_TRANSIENT);
+ Tcl_Obj *oStmt = objv[objc-4];
+ Tcl_Obj *oN = objv[objc-3];
+ Tcl_Obj *oString = objv[objc-2];
+ Tcl_Obj *oBytes = objv[objc-1];
+
+ if( objc!=5 && objc!=6){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " STMT N VALUE BYTES", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(oStmt), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, oN, &idx) ) return TCL_ERROR;
+ value = (char*)Tcl_GetByteArrayFromObj(oString, 0);
+ if( Tcl_GetIntFromObj(interp, oBytes, &bytes) ) return TCL_ERROR;
+
+ rc = sqlite3_bind_text16(pStmt, idx, (void *)value, bytes, xDel);
+ if( sqlite3TestErrCode(interp, StmtToDb(pStmt), rc) ) return TCL_ERROR;
+ if( rc!=SQLITE_OK ){
+ return TCL_ERROR;
+ }
+
+#endif /* SQLITE_OMIT_UTF16 */
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_bind_blob STMT N DATA BYTES
+**
+** Test the sqlite3_bind_blob interface. STMT is a prepared statement.
+** N is the index of a wildcard in the prepared statement. This command
+** binds a BLOB to the wildcard. The BLOB is BYTES bytes in size.
+*/
+static int test_bind_blob(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int idx;
+ int bytes;
+ char *value;
+ int rc;
+
+ if( objc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " STMT N DATA BYTES", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &idx) ) return TCL_ERROR;
+ value = Tcl_GetString(objv[3]);
+ if( Tcl_GetIntFromObj(interp, objv[4], &bytes) ) return TCL_ERROR;
+
+ rc = sqlite3_bind_blob(pStmt, idx, value, bytes, SQLITE_TRANSIENT);
+ if( sqlite3TestErrCode(interp, StmtToDb(pStmt), rc) ) return TCL_ERROR;
+ if( rc!=SQLITE_OK ){
+ return TCL_ERROR;
+ }
+
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_bind_parameter_count STMT
+**
+** Return the number of wildcards in the given statement.
+*/
+static int test_bind_parameter_count(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "STMT");
+ return TCL_ERROR;
+ }
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(sqlite3_bind_parameter_count(pStmt)));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_bind_parameter_name STMT N
+**
+** Return the name of the Nth wildcard. The first wildcard is 1.
+** An empty string is returned if N is out of range or if the wildcard
+** is nameless.
+*/
+static int test_bind_parameter_name(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int i;
+
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "STMT N");
+ return TCL_ERROR;
+ }
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &i) ) return TCL_ERROR;
+ Tcl_SetObjResult(interp,
+ Tcl_NewStringObj(sqlite3_bind_parameter_name(pStmt,i),-1)
+ );
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_bind_parameter_index STMT NAME
+**
+** Return the index of the wildcard called NAME. Return 0 if there is
+** no such wildcard.
+*/
+static int test_bind_parameter_index(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "STMT NAME");
+ return TCL_ERROR;
+ }
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ Tcl_SetObjResult(interp,
+ Tcl_NewIntObj(
+ sqlite3_bind_parameter_index(pStmt,Tcl_GetString(objv[2]))
+ )
+ );
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_clear_bindings STMT
+**
+*/
+static int test_clear_bindings(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "STMT");
+ return TCL_ERROR;
+ }
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(sqlite3_clear_bindings(pStmt)));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_sleep MILLISECONDS
+*/
+static int test_sleep(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ int ms;
+
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "MILLISECONDS");
+ return TCL_ERROR;
+ }
+ if( Tcl_GetIntFromObj(interp, objv[1], &ms) ){
+ return TCL_ERROR;
+ }
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(sqlite3_sleep(ms)));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_errcode DB
+**
+** Return the string representation of the most recent sqlite3_* API
+** error code. e.g. "SQLITE_ERROR".
+*/
+static int test_errcode(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3 *db;
+ int rc;
+ char zBuf[30];
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " DB", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ rc = sqlite3_errcode(db);
+ if( (rc&0xff)==rc ){
+ zBuf[0] = 0;
+ }else{
+ sprintf(zBuf,"+%d", rc>>8);
+ }
+ Tcl_AppendResult(interp, (char *)errorName(rc), zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: test_errmsg DB
+**
+** Returns the UTF-8 representation of the error message string for the
+** most recent sqlite3_* API call.
+*/
+static int test_errmsg(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3 *db;
+ const char *zErr;
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " DB", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+
+ zErr = sqlite3_errmsg(db);
+ Tcl_SetObjResult(interp, Tcl_NewStringObj(zErr, -1));
+ return TCL_OK;
+}
+
+/*
+** Usage: test_errmsg16 DB
+**
+** Returns the UTF-16 representation of the error message string for the
+** most recent sqlite3_* API call. This is a byte array object at the TCL
+** level, and it includes the 0x00 0x00 terminator bytes at the end of the
+** UTF-16 string.
+*/
+static int test_errmsg16(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_UTF16
+ sqlite3 *db;
+ const void *zErr;
+ int bytes = 0;
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " DB", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+
+ zErr = sqlite3_errmsg16(db);
+ if( zErr ){
+ bytes = sqlite3utf16ByteLen(zErr, -1);
+ }
+ Tcl_SetObjResult(interp, Tcl_NewByteArrayObj(zErr, bytes));
+#endif /* SQLITE_OMIT_UTF16 */
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_prepare DB sql bytes tailvar
+**
+** Compile up to <bytes> bytes of the supplied SQL string <sql> using
+** database handle <DB>. The parameter <tailvar> is the name of a global
+** variable that is set to the unused portion of <sql> (if any). A
+** STMT handle is returned.
+*/
+static int test_prepare(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3 *db;
+ const char *zSql;
+ int bytes;
+ const char *zTail = 0;
+ sqlite3_stmt *pStmt = 0;
+ char zBuf[50];
+ int rc;
+
+ if( objc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " DB sql bytes tailvar", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ zSql = Tcl_GetString(objv[2]);
+ if( Tcl_GetIntFromObj(interp, objv[3], &bytes) ) return TCL_ERROR;
+
+ rc = sqlite3_prepare(db, zSql, bytes, &pStmt, &zTail);
+ if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+ if( zTail ){
+ if( bytes>=0 ){
+ bytes = bytes - (zTail-zSql);
+ }
+ Tcl_ObjSetVar2(interp, objv[4], 0, Tcl_NewStringObj(zTail, bytes), 0);
+ }
+ if( rc!=SQLITE_OK ){
+ assert( pStmt==0 );
+ sprintf(zBuf, "(%d) ", rc);
+ Tcl_AppendResult(interp, zBuf, sqlite3_errmsg(db), 0);
+ return TCL_ERROR;
+ }
+
+ if( pStmt ){
+ if( sqlite3TestMakePointerStr(interp, zBuf, pStmt) ) return TCL_ERROR;
+ Tcl_AppendResult(interp, zBuf, 0);
+ }
+ return TCL_OK;
+}
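+
+/*
+** Illustrative Tcl usage (handle and variable names are placeholders):
+** assuming $DB came from the sqlite3_open test command,
+**
+**     set STMT [sqlite3_prepare $DB {SELECT 1; SELECT 2} -1 TAIL]
+**
+** compiles the first statement, leaves the unparsed remainder of the SQL
+** in the global variable TAIL, and returns a STMT handle suitable for
+** sqlite3_step and sqlite3_finalize.
+*/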
+
+/*
+** Usage: sqlite3_prepare16 DB sql bytes tailvar
+**
+** Compile up to <bytes> bytes of the supplied UTF-16 SQL string <sql> using
+** database handle <DB>. The parameter <tailvar> is the name of a global
+** variable that is set to the unused portion of <sql> (if any). A
+** STMT handle is returned.
+*/
+static int test_prepare16(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_UTF16
+ sqlite3 *db;
+ const void *zSql;
+ const void *zTail = 0;
+ Tcl_Obj *pTail = 0;
+ sqlite3_stmt *pStmt = 0;
+ char zBuf[50];
+ int rc;
+ int bytes; /* The integer specified as arg 3 */
+ int objlen; /* The byte-array length of arg 2 */
+
+ if( objc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " DB sql bytes tailvar", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ zSql = Tcl_GetByteArrayFromObj(objv[2], &objlen);
+ if( Tcl_GetIntFromObj(interp, objv[3], &bytes) ) return TCL_ERROR;
+
+ rc = sqlite3_prepare16(db, zSql, bytes, &pStmt, &zTail);
+ if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+ if( rc ){
+ return TCL_ERROR;
+ }
+
+ if( zTail ){
+ objlen = objlen - ((u8 *)zTail-(u8 *)zSql);
+ }else{
+ objlen = 0;
+ }
+ pTail = Tcl_NewByteArrayObj((u8 *)zTail, objlen);
+ Tcl_IncrRefCount(pTail);
+ Tcl_ObjSetVar2(interp, objv[4], 0, pTail, 0);
+ Tcl_DecrRefCount(pTail);
+
+ if( pStmt ){
+ if( sqlite3TestMakePointerStr(interp, zBuf, pStmt) ) return TCL_ERROR;
+ Tcl_AppendResult(interp, zBuf, 0);
+ }
+#endif /* SQLITE_OMIT_UTF16 */
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_open filename ?options-list?
+*/
+static int test_open(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ const char *zFilename;
+ sqlite3 *db;
+ int rc;
+ char zBuf[100];
+
+ if( objc!=3 && objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " filename options-list", 0);
+ return TCL_ERROR;
+ }
+
+ zFilename = Tcl_GetString(objv[1]);
+ rc = sqlite3_open(zFilename, &db);
+
+ if( sqlite3TestMakePointerStr(interp, zBuf, db) ) return TCL_ERROR;
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
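+
+/*
+** Illustrative Tcl usage (the filename is a placeholder):
+**
+**     set DB [sqlite3_open test.db]
+**
+** returns a pointer-string handle for the new connection.  Note that the
+** optional options-list argument is accepted but not currently used.
+*/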
+
+/*
+** Usage: sqlite3_open16 filename options
+*/
+static int test_open16(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_UTF16
+ const void *zFilename;
+ sqlite3 *db;
+ int rc;
+ char zBuf[100];
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " filename options-list", 0);
+ return TCL_ERROR;
+ }
+
+ zFilename = Tcl_GetByteArrayFromObj(objv[1], 0);
+ rc = sqlite3_open16(zFilename, &db);
+
+ if( sqlite3TestMakePointerStr(interp, zBuf, db) ) return TCL_ERROR;
+ Tcl_AppendResult(interp, zBuf, 0);
+#endif /* SQLITE_OMIT_UTF16 */
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_complete16 <UTF-16 string>
+**
+** Return 1 if the supplied argument is a complete SQL statement, or zero
+** otherwise.
+*/
+static int test_complete16(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#if !defined(SQLITE_OMIT_COMPLETE) && !defined(SQLITE_OMIT_UTF16)
+ char *zBuf;
+
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "<utf-16 sql>");
+ return TCL_ERROR;
+ }
+
+ zBuf = (char*)Tcl_GetByteArrayFromObj(objv[1], 0);
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(sqlite3_complete16(zBuf)));
+#endif /* SQLITE_OMIT_COMPLETE && SQLITE_OMIT_UTF16 */
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_step STMT
+**
+** Advance the statement to the next row.
+*/
+static int test_step(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int rc;
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ rc = sqlite3_step(pStmt);
+
+ /* if( rc!=SQLITE_DONE && rc!=SQLITE_ROW ) return TCL_ERROR; */
+ Tcl_SetResult(interp, (char *)errorName(rc), 0);
+ return TCL_OK;
+}
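+
+/*
+** Illustrative Tcl usage (handle name is a placeholder): since the command
+** returns the symbolic result code as a string, a typical loop in a test
+** script looks roughly like
+**
+**     while {[sqlite3_step $STMT]=="SQLITE_ROW"} {
+**       lappend res [sqlite3_column_text $STMT 0]
+**     }
+*/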
+
+/*
+** Usage: sqlite3_column_count STMT
+**
+** Return the number of columns returned by the sql statement STMT.
+*/
+static int test_column_count(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(sqlite3_column_count(pStmt)));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_column_type STMT column
+**
+** Return the type of the data in column 'column' of the current row.
+*/
+static int test_column_type(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int col;
+ int tp;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT column", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &col) ) return TCL_ERROR;
+
+ tp = sqlite3_column_type(pStmt, col);
+ switch( tp ){
+ case SQLITE_INTEGER:
+ Tcl_SetResult(interp, "INTEGER", TCL_STATIC);
+ break;
+ case SQLITE_NULL:
+ Tcl_SetResult(interp, "NULL", TCL_STATIC);
+ break;
+ case SQLITE_FLOAT:
+ Tcl_SetResult(interp, "FLOAT", TCL_STATIC);
+ break;
+ case SQLITE_TEXT:
+ Tcl_SetResult(interp, "TEXT", TCL_STATIC);
+ break;
+ case SQLITE_BLOB:
+ Tcl_SetResult(interp, "BLOB", TCL_STATIC);
+ break;
+ default:
+ assert(0);
+ }
+
+ return TCL_OK;
+}
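+
+/*
+** Illustrative: after stepping a statement such as "SELECT 1, 'abc', NULL",
+** [sqlite3_column_type $STMT 0] would report "INTEGER", column 1 "TEXT"
+** and column 2 "NULL" (the handle name is a placeholder).
+*/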
+
+/*
+** Usage: sqlite3_column_int64 STMT column
+**
+** Return the data in column 'column' of the current row cast as a
+** wide (64-bit) integer.
+*/
+static int test_column_int64(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int col;
+ i64 iVal;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT column", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &col) ) return TCL_ERROR;
+
+ iVal = sqlite3_column_int64(pStmt, col);
+ Tcl_SetObjResult(interp, Tcl_NewWideIntObj(iVal));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_column_blob STMT column
+*/
+static int test_column_blob(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int col;
+
+ int len;
+ const void *pBlob;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT column", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &col) ) return TCL_ERROR;
+
+ pBlob = sqlite3_column_blob(pStmt, col);
+ len = sqlite3_column_bytes(pStmt, col);
+ Tcl_SetObjResult(interp, Tcl_NewByteArrayObj(pBlob, len));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_column_double STMT column
+**
+** Return the data in column 'column' of the current row cast as a double.
+*/
+static int test_column_double(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int col;
+ double rVal;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT column", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &col) ) return TCL_ERROR;
+
+ rVal = sqlite3_column_double(pStmt, col);
+ Tcl_SetObjResult(interp, Tcl_NewDoubleObj(rVal));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_data_count STMT
+**
+** Return the number of columns of data in the current row of results
+** for the SQL statement STMT.
+*/
+static int test_data_count(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(sqlite3_data_count(pStmt)));
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_column_text STMT column
+**
+** Usage: sqlite3_column_decltype STMT column
+**
+** Usage: sqlite3_column_name STMT column
+*/
+static int test_stmt_utf8(
+ void * clientData, /* Pointer to SQLite API function to be invoked */
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int col;
+ const char *(*xFunc)(sqlite3_stmt*, int) = clientData;
+ const char *zRet;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT column", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &col) ) return TCL_ERROR;
+ zRet = xFunc(pStmt, col);
+ if( zRet ){
+ Tcl_SetResult(interp, (char *)zRet, 0);
+ }
+ return TCL_OK;
+}
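+
+/*
+** Note on the dispatch pattern used above: one handler serves several Tcl
+** commands because the registration table in Sqlitetest1_Init() passes the
+** underlying C API routine (sqlite3_column_text, sqlite3_column_decltype or
+** sqlite3_column_name) through the clientData pointer, and the handler just
+** calls whichever function it was handed.
+*/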
+
+static int test_global_recover(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_GLOBALRECOVER
+ int rc;
+ if( objc!=1 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "");
+ return TCL_ERROR;
+ }
+ rc = sqlite3_global_recover();
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+#endif
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_column_text STMT column
+**
+** Usage: sqlite3_column_decltype STMT column
+**
+** Usage: sqlite3_column_name STMT column
+*/
+static int test_stmt_utf16(
+ void * clientData, /* Pointer to SQLite API function to be invoked */
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_UTF16
+ sqlite3_stmt *pStmt;
+ int col;
+ Tcl_Obj *pRet;
+ const void *zName16;
+ const void *(*xFunc)(sqlite3_stmt*, int) = clientData;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT column", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &col) ) return TCL_ERROR;
+
+ zName16 = xFunc(pStmt, col);
+ if( zName16 ){
+ pRet = Tcl_NewByteArrayObj(zName16, sqlite3utf16ByteLen(zName16, -1)+2);
+ Tcl_SetObjResult(interp, pRet);
+ }
+#endif /* SQLITE_OMIT_UTF16 */
+
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_column_int STMT column
+**
+** Usage: sqlite3_column_bytes STMT column
+**
+** Usage: sqlite3_column_bytes16 STMT column
+**
+*/
+static int test_stmt_int(
+ void * clientData, /* Pointer to SQLite API function to be invoked */
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_stmt *pStmt;
+ int col;
+ int (*xFunc)(sqlite3_stmt*, int) = clientData;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " STMT column", 0);
+ return TCL_ERROR;
+ }
+
+ if( getStmtPointer(interp, Tcl_GetString(objv[1]), &pStmt) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &col) ) return TCL_ERROR;
+
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(xFunc(pStmt, col)));
+ return TCL_OK;
+}
+
+#ifndef SQLITE_OMIT_DISKIO
+/*
+** Usage: sqlite3OsOpenReadWrite <filename>
+*/
+static int test_sqlite3OsOpenReadWrite(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ OsFile *pFile;
+ int rc;
+ int dummy;
+ char zBuf[100];
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " filename", 0);
+ return TCL_ERROR;
+ }
+
+ rc = sqlite3OsOpenReadWrite(Tcl_GetString(objv[1]), &pFile, &dummy);
+ if( rc!=SQLITE_OK ){
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+ return TCL_ERROR;
+ }
+ sqlite3TestMakePointerStr(interp, zBuf, pFile);
+ Tcl_SetResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3OsClose <file handle>
+*/
+static int test_sqlite3OsClose(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ OsFile *pFile;
+ int rc;
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " filehandle", 0);
+ return TCL_ERROR;
+ }
+
+ if( getFilePointer(interp, Tcl_GetString(objv[1]), &pFile) ){
+ return TCL_ERROR;
+ }
+ rc = sqlite3OsClose(&pFile);
+ if( rc!=SQLITE_OK ){
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3OsLock <file handle> <locktype>
+*/
+static int test_sqlite3OsLock(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ OsFile * pFile;
+ int rc;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]),
+ " filehandle (SHARED|RESERVED|PENDING|EXCLUSIVE)", 0);
+ return TCL_ERROR;
+ }
+
+ if( getFilePointer(interp, Tcl_GetString(objv[1]), &pFile) ){
+ return TCL_ERROR;
+ }
+
+ if( 0==strcmp("SHARED", Tcl_GetString(objv[2])) ){
+ rc = sqlite3OsLock(pFile, SHARED_LOCK);
+ }
+ else if( 0==strcmp("RESERVED", Tcl_GetString(objv[2])) ){
+ rc = sqlite3OsLock(pFile, RESERVED_LOCK);
+ }
+ else if( 0==strcmp("PENDING", Tcl_GetString(objv[2])) ){
+ rc = sqlite3OsLock(pFile, PENDING_LOCK);
+ }
+ else if( 0==strcmp("EXCLUSIVE", Tcl_GetString(objv[2])) ){
+ rc = sqlite3OsLock(pFile, EXCLUSIVE_LOCK);
+ }else{
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]),
+ " filehandle (SHARED|RESERVED|PENDING|EXCLUSIVE)", 0);
+ return TCL_ERROR;
+ }
+
+ if( rc!=SQLITE_OK ){
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
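+
+/*
+** Illustrative Tcl usage (the handle name is a placeholder): this call only
+** escalates the lock, so a script acquires levels in order and then drops
+** everything with sqlite3OsUnlock, e.g.
+**
+**     sqlite3OsLock $FD SHARED
+**     sqlite3OsLock $FD RESERVED
+**     sqlite3OsLock $FD EXCLUSIVE
+**     sqlite3OsUnlock $FD
+*/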
+
+/*
+** Usage: sqlite3OsUnlock <file handle>
+*/
+static int test_sqlite3OsUnlock(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ OsFile * pFile;
+ int rc;
+
+ if( objc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetString(objv[0]), " filehandle", 0);
+ return TCL_ERROR;
+ }
+
+ if( getFilePointer(interp, Tcl_GetString(objv[1]), &pFile) ){
+ return TCL_ERROR;
+ }
+ rc = sqlite3OsUnlock(pFile, NO_LOCK);
+ if( rc!=SQLITE_OK ){
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3OsTempFileName
+*/
+static int test_sqlite3OsTempFileName(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ char zFile[SQLITE_TEMPNAME_SIZE];
+ int rc;
+
+ rc = sqlite3OsTempFileName(zFile);
+ if( rc!=SQLITE_OK ){
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+ return TCL_ERROR;
+ }
+ Tcl_AppendResult(interp, zFile, 0);
+ return TCL_OK;
+}
+#endif
+
+/*
+** Usage: sqlite_set_magic DB MAGIC-NUMBER
+**
+** Set the db->magic value. This is used to test error recovery logic.
+*/
+static int sqlite_set_magic(
+ void * clientData,
+ Tcl_Interp *interp,
+ int argc,
+ char **argv
+){
+ sqlite3 *db;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB MAGIC", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ if( strcmp(argv[2], "SQLITE_MAGIC_OPEN")==0 ){
+ db->magic = SQLITE_MAGIC_OPEN;
+ }else if( strcmp(argv[2], "SQLITE_MAGIC_CLOSED")==0 ){
+ db->magic = SQLITE_MAGIC_CLOSED;
+ }else if( strcmp(argv[2], "SQLITE_MAGIC_BUSY")==0 ){
+ db->magic = SQLITE_MAGIC_BUSY;
+ }else if( strcmp(argv[2], "SQLITE_MAGIC_ERROR")==0 ){
+ db->magic = SQLITE_MAGIC_ERROR;
+ }else if( Tcl_GetInt(interp, argv[2], &db->magic) ){
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_interrupt DB
+**
+** Trigger an interrupt on DB
+*/
+static int test_interrupt(
+ void * clientData,
+ Tcl_Interp *interp,
+ int argc,
+ char **argv
+){
+ sqlite3 *db;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0], " DB", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ sqlite3_interrupt(db);
+ return TCL_OK;
+}
+
+static u8 *sqlite3_stack_baseline = 0;
+
+/*
+** Fill the stack with a known bitpattern.
+*/
+static void prepStack(void){
+ int i;
+ u32 bigBuf[65536];
+ for(i=0; i<sizeof(bigBuf)/sizeof(bigBuf[0]); i++) bigBuf[i] = 0xdeadbeef;
+ sqlite3_stack_baseline = (u8*)&bigBuf[65536];
+}
+
+/*
+** Get the current stack depth. Used for debugging only.
+*/
+u64 sqlite3StackDepth(void){
+ u8 x;
+ return (u64)(sqlite3_stack_baseline - &x);
+}
+
+/*
+** Usage: sqlite3_stack_used DB SQL
+**
+** Try to measure the amount of stack space used by a call to sqlite3_exec
+*/
+static int test_stack_used(
+ void * clientData,
+ Tcl_Interp *interp,
+ int argc,
+ char **argv
+){
+ sqlite3 *db;
+ int i;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB SQL", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ prepStack();
+ (void)sqlite3_exec(db, argv[2], 0, 0, 0);
+ for(i=65535; i>=0 && ((u32*)sqlite3_stack_baseline)[-i]==0xdeadbeef; i--){}
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(i*4));
+ return TCL_OK;
+}
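+
+/*
+** How the measurement above works: prepStack() paints a large region of
+** stack with the 0xdeadbeef pattern, sqlite3_exec() then overwrites however
+** much of that region it really uses, and the loop counts the 32-bit words
+** below the baseline that no longer hold the pattern.  The result (i*4) is
+** reported in bytes and is only a rough estimate, since anything else that
+** touches the stack can also disturb the pattern.
+*/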
+
+/*
+** Usage: sqlite_delete_function DB function-name
+**
+** Delete the user function 'function-name' from database handle DB. It
+** is assumed that the user function was created as UTF8 with any number
+** of arguments (which is how the TCL interface creates functions).
+*/
+static int delete_function(
+ void * clientData,
+ Tcl_Interp *interp,
+ int argc,
+ char **argv
+){
+ int rc;
+ sqlite3 *db;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB function-name", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ rc = sqlite3_create_function(db, argv[2], -1, SQLITE_UTF8, 0, 0, 0, 0);
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite_delete_collation DB collation-name
+**
+** Delete the collation sequence 'collation-name' from database handle
+** DB. It is assumed that the collation sequence was created as UTF8 (the
+** way the TCL interface does it).
+*/
+static int delete_collation(
+ void * clientData,
+ Tcl_Interp *interp,
+ int argc,
+ char **argv
+){
+ int rc;
+ sqlite3 *db;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB function-name", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ rc = sqlite3_create_collation(db, argv[2], SQLITE_UTF8, 0, 0);
+ Tcl_SetResult(interp, (char *)errorName(rc), TCL_STATIC);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_get_autocommit DB
+**
+** Return true if the database DB is currently in auto-commit mode.
+** Return false if not.
+*/
+static int get_autocommit(
+ void * clientData,
+ Tcl_Interp *interp,
+ int argc,
+ char **argv
+){
+ char zBuf[30];
+ sqlite3 *db;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ sprintf(zBuf, "%d", sqlite3_get_autocommit(db));
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_busy_timeout DB MS
+**
+** Set the busy timeout. This is more easily done using the timeout
+** method of the TCL interface. But we need a way to test the case
+** where it returns SQLITE_MISUSE.
+*/
+static int test_busy_timeout(
+ void * clientData,
+ Tcl_Interp *interp,
+ int argc,
+ char **argv
+){
+ int rc, ms;
+ sqlite3 *db;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB", 0);
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+ if( Tcl_GetInt(interp, argv[2], &ms) ) return TCL_ERROR;
+ rc = sqlite3_busy_timeout(db, ms);
+ Tcl_AppendResult(interp, sqlite3TestErrorName(rc), 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: tcl_variable_type VARIABLENAME
+**
+** Return the name of the internal representation for the
+** value of the given variable.
+*/
+static int tcl_variable_type(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ Tcl_Obj *pVar;
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "VARIABLE");
+ return TCL_ERROR;
+ }
+ pVar = Tcl_GetVar2Ex(interp, Tcl_GetString(objv[1]), 0, TCL_LEAVE_ERR_MSG);
+ if( pVar==0 ) return TCL_ERROR;
+ if( pVar->typePtr ){
+ Tcl_SetObjResult(interp, Tcl_NewStringObj(pVar->typePtr->name, -1));
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_release_memory ?N?
+**
+** Attempt to release memory currently held but not actually required.
+** The integer N is the number of bytes we are trying to release. The
+** return value is the amount of memory actually released.
+*/
+static int test_release_memory(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#if defined(SQLITE_ENABLE_MEMORY_MANAGEMENT) && !defined(SQLITE_OMIT_DISKIO)
+ int N;
+ int amt;
+ if( objc!=1 && objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "?N?");
+ return TCL_ERROR;
+ }
+ if( objc==2 ){
+ if( Tcl_GetIntFromObj(interp, objv[1], &N) ) return TCL_ERROR;
+ }else{
+ N = -1;
+ }
+ amt = sqlite3_release_memory(N);
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(amt));
+#endif
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_soft_heap_limit ?N?
+**
+** Query or set the soft heap limit for the current thread. The
+** limit is only changed if N is present. The previous limit is
+** returned.
+*/
+static int test_soft_heap_limit(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#if defined(SQLITE_ENABLE_MEMORY_MANAGEMENT) && !defined(SQLITE_OMIT_DISKIO)
+ int amt;
+ if( objc!=1 && objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "?N?");
+ return TCL_ERROR;
+ }
+ amt = sqlite3ThreadDataReadOnly()->nSoftHeapLimit;
+ if( objc==2 ){
+ int N;
+ if( Tcl_GetIntFromObj(interp, objv[1], &N) ) return TCL_ERROR;
+ sqlite3_soft_heap_limit(N);
+ }
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(amt));
+#endif
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_clear_tsd_memdebug
+**
+** Clear all of the MEMDEBUG information out of thread-specific data.
+** This will allow it to be deallocated.
+*/
+static int test_clear_tsd_memdebug(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_tsd_release
+**
+** Call sqlite3ReleaseThreadData.
+*/
+static int test_tsd_release(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#if defined(SQLITE_MEMDEBUG)
+ sqlite3ReleaseThreadData();
+#endif
+ return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_thread_cleanup
+**
+** Call the sqlite3_thread_cleanup API.
+*/
+static int test_thread_cleanup(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_thread_cleanup();
+ return TCL_OK;
+}
+
+
+/*
+** This routine sets entries in the global ::sqlite_options() array variable
+** according to the compile-time configuration of the database. Test
+** procedures use this to determine when tests should be omitted.
+*/
+static void set_options(Tcl_Interp *interp){
+#ifdef SQLITE_32BIT_ROWID
+ Tcl_SetVar2(interp, "sqlite_options", "rowid32", "1", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "rowid32", "0", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_CASE_SENSITIVE_LIKE
+ Tcl_SetVar2(interp, "sqlite_options","casesensitivelike","1",TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options","casesensitivelike","0",TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_DISABLE_DIRSYNC
+ Tcl_SetVar2(interp, "sqlite_options", "dirsync", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "dirsync", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_DISABLE_LFS
+ Tcl_SetVar2(interp, "sqlite_options", "lfs", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "lfs", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_ALTERTABLE
+ Tcl_SetVar2(interp, "sqlite_options", "altertable", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "altertable", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_ANALYZE
+ Tcl_SetVar2(interp, "sqlite_options", "analyze", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "analyze", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_AUTHORIZATION
+ Tcl_SetVar2(interp, "sqlite_options", "auth", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "auth", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_AUTOINCREMENT
+ Tcl_SetVar2(interp, "sqlite_options", "autoinc", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "autoinc", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_AUTOVACUUM
+ Tcl_SetVar2(interp, "sqlite_options", "autovacuum", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "autovacuum", "1", TCL_GLOBAL_ONLY);
+#endif /* SQLITE_OMIT_AUTOVACUUM */
+#if !defined(SQLITE_DEFAULT_AUTOVACUUM) || SQLITE_DEFAULT_AUTOVACUUM==0
+ Tcl_SetVar2(interp,"sqlite_options","default_autovacuum","0",TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp,"sqlite_options","default_autovacuum","1",TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_BETWEEN_OPTIMIZATION
+ Tcl_SetVar2(interp, "sqlite_options", "between_opt", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "between_opt", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_BLOB_LITERAL
+ Tcl_SetVar2(interp, "sqlite_options", "bloblit", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "bloblit", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_CAST
+ Tcl_SetVar2(interp, "sqlite_options", "cast", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "cast", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_CHECK
+ Tcl_SetVar2(interp, "sqlite_options", "check", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "check", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_ENABLE_COLUMN_METADATA
+ Tcl_SetVar2(interp, "sqlite_options", "columnmetadata", "1", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "columnmetadata", "0", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_COMPLETE
+ Tcl_SetVar2(interp, "sqlite_options", "complete", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "complete", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_COMPOUND_SELECT
+ Tcl_SetVar2(interp, "sqlite_options", "compound", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "compound", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_CONFLICT_CLAUSE
+ Tcl_SetVar2(interp, "sqlite_options", "conflict", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "conflict", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#if OS_UNIX
+ Tcl_SetVar2(interp, "sqlite_options", "crashtest", "1", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "crashtest", "0", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_DATETIME_FUNCS
+ Tcl_SetVar2(interp, "sqlite_options", "datetime", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "datetime", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_DISKIO
+ Tcl_SetVar2(interp, "sqlite_options", "diskio", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "diskio", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_EXPLAIN
+ Tcl_SetVar2(interp, "sqlite_options", "explain", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "explain", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_FLOATING_POINT
+ Tcl_SetVar2(interp, "sqlite_options", "floatingpoint", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "floatingpoint", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_FOREIGN_KEY
+ Tcl_SetVar2(interp, "sqlite_options", "foreignkey", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "foreignkey", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_ENABLE_FTS1
+ Tcl_SetVar2(interp, "sqlite_options", "fts1", "1", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "fts1", "0", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_GLOBALRECOVER
+ Tcl_SetVar2(interp, "sqlite_options", "globalrecover", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "globalrecover", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_INTEGRITY_CHECK
+ Tcl_SetVar2(interp, "sqlite_options", "integrityck", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "integrityck", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#if defined(SQLITE_DEFAULT_FILE_FORMAT) && SQLITE_DEFAULT_FILE_FORMAT==1
+ Tcl_SetVar2(interp, "sqlite_options", "legacyformat", "1", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "legacyformat", "0", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_LIKE_OPTIMIZATION
+ Tcl_SetVar2(interp, "sqlite_options", "like_opt", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "like_opt", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_MEMORYDB
+ Tcl_SetVar2(interp, "sqlite_options", "memorydb", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "memorydb", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ Tcl_SetVar2(interp, "sqlite_options", "memorymanage", "1", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "memorymanage", "0", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_OR_OPTIMIZATION
+ Tcl_SetVar2(interp, "sqlite_options", "or_opt", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "or_opt", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_PAGER_PRAGMAS
+ Tcl_SetVar2(interp, "sqlite_options", "pager_pragmas", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "pager_pragmas", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_PARSER
+ Tcl_SetVar2(interp, "sqlite_options", "parser", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "parser", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#if defined(SQLITE_OMIT_PRAGMA) || defined(SQLITE_OMIT_FLAG_PRAGMAS)
+ Tcl_SetVar2(interp, "sqlite_options", "pragma", "0", TCL_GLOBAL_ONLY);
+ Tcl_SetVar2(interp, "sqlite_options", "integrityck", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "pragma", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_PROGRESS_CALLBACK
+ Tcl_SetVar2(interp, "sqlite_options", "progress", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "progress", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_ENABLE_REDEF_IO
+ Tcl_SetVar2(interp, "sqlite_options", "redefio", "1", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "redefio", "0", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_REINDEX
+ Tcl_SetVar2(interp, "sqlite_options", "reindex", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "reindex", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_SCHEMA_PRAGMAS
+ Tcl_SetVar2(interp, "sqlite_options", "schema_pragmas", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "schema_pragmas", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS
+ Tcl_SetVar2(interp, "sqlite_options", "schema_version", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "schema_version", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_SHARED_CACHE
+ Tcl_SetVar2(interp, "sqlite_options", "shared_cache", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "shared_cache", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_SUBQUERY
+ Tcl_SetVar2(interp, "sqlite_options", "subquery", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "subquery", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_TCL_VARIABLE
+ Tcl_SetVar2(interp, "sqlite_options", "tclvar", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "tclvar", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#if defined(THREADSAFE) && THREADSAFE
+ Tcl_SetVar2(interp, "sqlite_options", "threadsafe", "1", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "threadsafe", "0", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_TRACE
+ Tcl_SetVar2(interp, "sqlite_options", "trace", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "trace", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_TRIGGER
+ Tcl_SetVar2(interp, "sqlite_options", "trigger", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "trigger", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_TEMPDB
+ Tcl_SetVar2(interp, "sqlite_options", "tempdb", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "tempdb", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_UTF16
+ Tcl_SetVar2(interp, "sqlite_options", "utf16", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "utf16", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_VACUUM
+ Tcl_SetVar2(interp, "sqlite_options", "vacuum", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "vacuum", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_VIEW
+ Tcl_SetVar2(interp, "sqlite_options", "view", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "view", "1", TCL_GLOBAL_ONLY);
+#endif
+
+#ifdef SQLITE_OMIT_VIRTUALTABLE
+ Tcl_SetVar2(interp, "sqlite_options", "vtab", "0", TCL_GLOBAL_ONLY);
+#else
+ Tcl_SetVar2(interp, "sqlite_options", "vtab", "1", TCL_GLOBAL_ONLY);
+#endif
+}
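+
+/*
+** Illustrative: because Tcl_SetVar2() above fills in array elements such as
+** sqlite_options(autovacuum) with "0" or "1", a test script can guard
+** feature-specific tests with something like
+**
+**     if {$::sqlite_options(autovacuum)} { ...autovacuum tests... }
+*/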
+
+/*
+** tclcmd: working_64bit_int
+**
+** Some TCL builds (ex: cygwin) do not support 64-bit integers. This
+** leads to a number of test failures. The present command checks the
+** TCL build to see whether or not it supports 64-bit integers. It
+** returns TRUE if it does and FALSE if not.
+**
+** This command is used to warn users that their TCL build is defective
+** and that the errors they are seeing in the test scripts might be
+** a result of their defective TCL rather than problems in SQLite.
+*/
+static int working_64bit_int(
+ ClientData clientData, /* Pointer to sqlite3_enable_XXX function */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ Tcl_Obj *pTestObj;
+ int working = 0;
+
+ pTestObj = Tcl_NewWideIntObj(1000000*(i64)1234567890);
+ working = strcmp(Tcl_GetString(pTestObj), "1234567890000000")==0;
+ Tcl_DecrRefCount(pTestObj);
+ Tcl_SetObjResult(interp, Tcl_NewBooleanObj(working));
+ return TCL_OK;
+}
+
+
+/*
+** Register commands with the TCL interpreter.
+*/
+int Sqlitetest1_Init(Tcl_Interp *interp){
+ extern int sqlite3_search_count;
+ extern int sqlite3_interrupt_count;
+ extern int sqlite3_open_file_count;
+ extern int sqlite3_sort_count;
+ extern int sqlite3_current_time;
+ static struct {
+ char *zName;
+ Tcl_CmdProc *xProc;
+ } aCmd[] = {
+ { "sqlite3_mprintf_int", (Tcl_CmdProc*)sqlite3_mprintf_int },
+ { "sqlite3_mprintf_int64", (Tcl_CmdProc*)sqlite3_mprintf_int64 },
+ { "sqlite3_mprintf_str", (Tcl_CmdProc*)sqlite3_mprintf_str },
+ { "sqlite3_mprintf_stronly", (Tcl_CmdProc*)sqlite3_mprintf_stronly},
+ { "sqlite3_mprintf_double", (Tcl_CmdProc*)sqlite3_mprintf_double },
+ { "sqlite3_mprintf_scaled", (Tcl_CmdProc*)sqlite3_mprintf_scaled },
+ { "sqlite3_mprintf_hexdouble", (Tcl_CmdProc*)sqlite3_mprintf_hexdouble},
+ { "sqlite3_mprintf_z_test", (Tcl_CmdProc*)test_mprintf_z },
+ { "sqlite3_mprintf_n_test", (Tcl_CmdProc*)test_mprintf_n },
+ { "sqlite3_last_insert_rowid", (Tcl_CmdProc*)test_last_rowid },
+ { "sqlite3_exec_printf", (Tcl_CmdProc*)test_exec_printf },
+ { "sqlite3_get_table_printf", (Tcl_CmdProc*)test_get_table_printf },
+ { "sqlite3_close", (Tcl_CmdProc*)sqlite_test_close },
+ { "sqlite3_create_function", (Tcl_CmdProc*)test_create_function },
+ { "sqlite3_create_aggregate", (Tcl_CmdProc*)test_create_aggregate },
+ { "sqlite_register_test_function", (Tcl_CmdProc*)test_register_func },
+ { "sqlite_abort", (Tcl_CmdProc*)sqlite_abort },
+#ifdef SQLITE_MEMDEBUG
+ { "sqlite_malloc_fail", (Tcl_CmdProc*)sqlite_malloc_fail },
+ { "sqlite_malloc_stat", (Tcl_CmdProc*)sqlite_malloc_stat },
+#endif
+ { "sqlite_bind", (Tcl_CmdProc*)test_bind },
+ { "breakpoint", (Tcl_CmdProc*)test_breakpoint },
+ { "sqlite3_key", (Tcl_CmdProc*)test_key },
+ { "sqlite3_rekey", (Tcl_CmdProc*)test_rekey },
+ { "sqlite_set_magic", (Tcl_CmdProc*)sqlite_set_magic },
+ { "sqlite3_interrupt", (Tcl_CmdProc*)test_interrupt },
+ { "sqlite_delete_function", (Tcl_CmdProc*)delete_function },
+ { "sqlite_delete_collation", (Tcl_CmdProc*)delete_collation },
+ { "sqlite3_get_autocommit", (Tcl_CmdProc*)get_autocommit },
+ { "sqlite3_stack_used", (Tcl_CmdProc*)test_stack_used },
+ { "sqlite3_busy_timeout", (Tcl_CmdProc*)test_busy_timeout },
+ };
+ static struct {
+ char *zName;
+ Tcl_ObjCmdProc *xProc;
+ void *clientData;
+ } aObjCmd[] = {
+ { "sqlite3_connection_pointer", get_sqlite_pointer, 0 },
+ { "sqlite3_bind_int", test_bind_int, 0 },
+ { "sqlite3_bind_int64", test_bind_int64, 0 },
+ { "sqlite3_bind_double", test_bind_double, 0 },
+ { "sqlite3_bind_null", test_bind_null ,0 },
+ { "sqlite3_bind_text", test_bind_text ,0 },
+ { "sqlite3_bind_text16", test_bind_text16 ,0 },
+ { "sqlite3_bind_blob", test_bind_blob ,0 },
+ { "sqlite3_bind_parameter_count", test_bind_parameter_count, 0},
+ { "sqlite3_bind_parameter_name", test_bind_parameter_name, 0},
+ { "sqlite3_bind_parameter_index", test_bind_parameter_index, 0},
+ { "sqlite3_clear_bindings", test_clear_bindings, 0},
+ { "sqlite3_sleep", test_sleep, 0},
+ { "sqlite3_errcode", test_errcode ,0 },
+ { "sqlite3_errmsg", test_errmsg ,0 },
+ { "sqlite3_errmsg16", test_errmsg16 ,0 },
+ { "sqlite3_open", test_open ,0 },
+ { "sqlite3_open16", test_open16 ,0 },
+ { "sqlite3_complete16", test_complete16 ,0 },
+
+ { "sqlite3_prepare", test_prepare ,0 },
+ { "sqlite3_prepare16", test_prepare16 ,0 },
+ { "sqlite3_finalize", test_finalize ,0 },
+ { "sqlite3_reset", test_reset ,0 },
+ { "sqlite3_expired", test_expired ,0 },
+ { "sqlite3_transfer_bindings", test_transfer_bind ,0 },
+ { "sqlite3_changes", test_changes ,0 },
+ { "sqlite3_step", test_step ,0 },
+
+ { "sqlite3_release_memory", test_release_memory, 0},
+ { "sqlite3_soft_heap_limit", test_soft_heap_limit, 0},
+ { "sqlite3_clear_tsd_memdebug", test_clear_tsd_memdebug, 0},
+ { "sqlite3_tsd_release", test_tsd_release, 0},
+ { "sqlite3_thread_cleanup", test_thread_cleanup, 0},
+
+ { "sqlite3_load_extension", test_load_extension, 0},
+ { "sqlite3_enable_load_extension", test_enable_load, 0},
+ { "sqlite3_extended_result_codes", test_extended_result_codes, 0},
+
+ /* sqlite3_column_*() API */
+ { "sqlite3_column_count", test_column_count ,0 },
+ { "sqlite3_data_count", test_data_count ,0 },
+ { "sqlite3_column_type", test_column_type ,0 },
+ { "sqlite3_column_blob", test_column_blob ,0 },
+ { "sqlite3_column_double", test_column_double ,0 },
+ { "sqlite3_column_int64", test_column_int64 ,0 },
+ { "sqlite3_column_text", test_stmt_utf8, sqlite3_column_text },
+ { "sqlite3_column_decltype", test_stmt_utf8, sqlite3_column_decltype },
+ { "sqlite3_column_name", test_stmt_utf8, sqlite3_column_name },
+ { "sqlite3_column_int", test_stmt_int, sqlite3_column_int },
+ { "sqlite3_column_bytes", test_stmt_int, sqlite3_column_bytes },
+#ifdef SQLITE_ENABLE_COLUMN_METADATA
+{ "sqlite3_column_database_name", test_stmt_utf8, sqlite3_column_database_name},
+{ "sqlite3_column_table_name", test_stmt_utf8, sqlite3_column_table_name},
+{ "sqlite3_column_origin_name", test_stmt_utf8, sqlite3_column_origin_name},
+#endif
+
+#ifndef SQLITE_OMIT_UTF16
+ { "sqlite3_column_bytes16", test_stmt_int, sqlite3_column_bytes16 },
+ { "sqlite3_column_text16", test_stmt_utf16, sqlite3_column_text16 },
+ { "sqlite3_column_decltype16", test_stmt_utf16, sqlite3_column_decltype16},
+ { "sqlite3_column_name16", test_stmt_utf16, sqlite3_column_name16 },
+ { "add_alignment_test_collations", add_alignment_test_collations, 0 },
+#ifdef SQLITE_ENABLE_COLUMN_METADATA
+{"sqlite3_column_database_name16",
+ test_stmt_utf16, sqlite3_column_database_name16},
+{"sqlite3_column_table_name16", test_stmt_utf16, sqlite3_column_table_name16},
+{"sqlite3_column_origin_name16", test_stmt_utf16, sqlite3_column_origin_name16},
+#endif
+#endif
+ { "sqlite3_global_recover", test_global_recover, 0 },
+ { "working_64bit_int", working_64bit_int, 0 },
+
+ /* Functions from os.h */
+#ifndef SQLITE_OMIT_DISKIO
+ { "sqlite3OsOpenReadWrite",test_sqlite3OsOpenReadWrite, 0 },
+ { "sqlite3OsClose", test_sqlite3OsClose, 0 },
+ { "sqlite3OsLock", test_sqlite3OsLock, 0 },
+ { "sqlite3OsTempFileName", test_sqlite3OsTempFileName, 0 },
+
+ /* Custom test interfaces */
+ { "sqlite3OsUnlock", test_sqlite3OsUnlock, 0 },
+#endif
+#ifndef SQLITE_OMIT_UTF16
+ { "add_test_collate", test_collate, 0 },
+ { "add_test_collate_needed", test_collate_needed, 0 },
+ { "add_test_function", test_function, 0 },
+#endif
+#ifdef SQLITE_MEMDEBUG
+ { "sqlite_malloc_outstanding", sqlite_malloc_outstanding, 0},
+#endif
+ { "sqlite3_test_errstr", test_errstr, 0 },
+ { "tcl_variable_type", tcl_variable_type, 0 },
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ { "sqlite3_enable_shared_cache", test_enable_shared, 0 },
+#endif
+ { "sqlite3_libversion_number", test_libversion_number, 0 },
+#ifdef SQLITE_ENABLE_COLUMN_METADATA
+ { "sqlite3_table_column_metadata", test_table_column_metadata, 0 },
+#endif
+ };
+ static int bitmask_size = sizeof(Bitmask)*8;
+ int i;
+ extern int sqlite3_os_trace;
+ extern int sqlite3_where_trace;
+ extern int sqlite3_sync_count, sqlite3_fullsync_count;
+ extern int sqlite3_opentemp_count;
+ extern int sqlite3_memUsed;
+ extern int sqlite3_malloc_id;
+ extern int sqlite3_memMax;
+ extern int sqlite3_like_count;
+ extern int sqlite3_tsd_count;
+#if OS_UNIX && defined(SQLITE_TEST) && defined(THREADSAFE) && THREADSAFE
+ extern int threadsOverrideEachOthersLocks;
+#endif
+#if OS_WIN
+ extern int sqlite3_os_type;
+#endif
+#ifdef SQLITE_DEBUG
+ extern int sqlite3_vdbe_addop_trace;
+#endif
+#ifdef SQLITE_TEST
+ extern char sqlite3_query_plan[];
+ static char *query_plan = sqlite3_query_plan;
+#endif
+
+ for(i=0; i<sizeof(aCmd)/sizeof(aCmd[0]); i++){
+ Tcl_CreateCommand(interp, aCmd[i].zName, aCmd[i].xProc, 0, 0);
+ }
+ for(i=0; i<sizeof(aObjCmd)/sizeof(aObjCmd[0]); i++){
+ Tcl_CreateObjCommand(interp, aObjCmd[i].zName,
+ aObjCmd[i].xProc, aObjCmd[i].clientData, 0);
+ }
+ Tcl_LinkVar(interp, "sqlite_search_count",
+ (char*)&sqlite3_search_count, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_sort_count",
+ (char*)&sqlite3_sort_count, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_like_count",
+ (char*)&sqlite3_like_count, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_interrupt_count",
+ (char*)&sqlite3_interrupt_count, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_open_file_count",
+ (char*)&sqlite3_open_file_count, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_current_time",
+ (char*)&sqlite3_current_time, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_os_trace",
+ (char*)&sqlite3_os_trace, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite3_tsd_count",
+ (char*)&sqlite3_tsd_count, TCL_LINK_INT);
+#ifndef SQLITE_OMIT_UTF16
+ Tcl_LinkVar(interp, "unaligned_string_counter",
+ (char*)&unaligned_string_counter, TCL_LINK_INT);
+#endif
+#if OS_UNIX && defined(SQLITE_TEST) && defined(THREADSAFE) && THREADSAFE
+ Tcl_LinkVar(interp, "threadsOverrideEachOthersLocks",
+ (char*)&threadsOverrideEachOthersLocks, TCL_LINK_INT);
+#endif
+#ifndef SQLITE_OMIT_UTF16
+ Tcl_LinkVar(interp, "sqlite_last_needed_collation",
+ (char*)&pzNeededCollation, TCL_LINK_STRING|TCL_LINK_READ_ONLY);
+#endif
+#ifdef SQLITE_MEMDEBUG
+ Tcl_LinkVar(interp, "sqlite_malloc_id",
+ (char*)&sqlite3_malloc_id, TCL_LINK_STRING);
+#endif
+#if OS_WIN
+ Tcl_LinkVar(interp, "sqlite_os_type",
+ (char*)&sqlite3_os_type, TCL_LINK_INT);
+#endif
+#ifdef SQLITE_TEST
+ Tcl_LinkVar(interp, "sqlite_query_plan",
+ (char*)&query_plan, TCL_LINK_STRING|TCL_LINK_READ_ONLY);
+#endif
+#ifdef SQLITE_DEBUG
+ Tcl_LinkVar(interp, "sqlite_addop_trace",
+ (char*)&sqlite3_vdbe_addop_trace, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_where_trace",
+ (char*)&sqlite3_where_trace, TCL_LINK_INT);
+#endif
+#ifdef SQLITE_MEMDEBUG
+ Tcl_LinkVar(interp, "sqlite_memused",
+ (char*)&sqlite3_memUsed, TCL_LINK_INT | TCL_LINK_READ_ONLY);
+ Tcl_LinkVar(interp, "sqlite_memmax",
+ (char*)&sqlite3_memMax, TCL_LINK_INT | TCL_LINK_READ_ONLY);
+#endif
+#ifndef SQLITE_OMIT_DISKIO
+ Tcl_LinkVar(interp, "sqlite_opentemp_count",
+ (char*)&sqlite3_opentemp_count, TCL_LINK_INT);
+#endif
+ Tcl_LinkVar(interp, "sqlite_static_bind_value",
+ (char*)&sqlite_static_bind_value, TCL_LINK_STRING);
+ Tcl_LinkVar(interp, "sqlite_static_bind_nbyte",
+ (char*)&sqlite_static_bind_nbyte, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_temp_directory",
+ (char*)&sqlite3_temp_directory, TCL_LINK_STRING);
+ Tcl_LinkVar(interp, "bitmask_size",
+ (char*)&bitmask_size, TCL_LINK_INT|TCL_LINK_READ_ONLY);
+#if OS_UNIX
+ Tcl_LinkVar(interp, "sqlite_sync_count",
+ (char*)&sqlite3_sync_count, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_fullsync_count",
+ (char*)&sqlite3_fullsync_count, TCL_LINK_INT);
+#endif /* OS_UNIX */
+ set_options(interp);
+
+ {
+#ifdef SQLITE_DEBUG
+ extern int sqlite3_shared_cache_report(void *, Tcl_Interp *,
+ int, Tcl_Obj *CONST[]);
+ Tcl_CreateObjCommand(interp, "sqlite_shared_cache_report",
+ sqlite3_shared_cache_report, 0, 0);
+#endif
+ }
+ return TCL_OK;
+}
Added: freeswitch/trunk/libs/sqlite/src/test2.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test2.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,606 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Code for testing the pager.c module in SQLite. This code
+** is not included in the SQLite library. It is used for automated
+** testing of the SQLite library.
+**
+** $Id: test2.c,v 1.39 2006/01/06 14:32:20 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#include "pager.h"
+#include "tcl.h"
+#include <stdlib.h>
+#include <string.h>
+
+/*
+** Interpret an SQLite error number
+*/
+static char *errorName(int rc){
+ char *zName;
+ switch( rc ){
+ case SQLITE_OK: zName = "SQLITE_OK"; break;
+ case SQLITE_ERROR: zName = "SQLITE_ERROR"; break;
+ case SQLITE_PERM: zName = "SQLITE_PERM"; break;
+ case SQLITE_ABORT: zName = "SQLITE_ABORT"; break;
+ case SQLITE_BUSY: zName = "SQLITE_BUSY"; break;
+ case SQLITE_NOMEM: zName = "SQLITE_NOMEM"; break;
+ case SQLITE_READONLY: zName = "SQLITE_READONLY"; break;
+ case SQLITE_INTERRUPT: zName = "SQLITE_INTERRUPT"; break;
+ case SQLITE_IOERR: zName = "SQLITE_IOERR"; break;
+ case SQLITE_CORRUPT: zName = "SQLITE_CORRUPT"; break;
+ case SQLITE_FULL: zName = "SQLITE_FULL"; break;
+ case SQLITE_CANTOPEN: zName = "SQLITE_CANTOPEN"; break;
+ case SQLITE_PROTOCOL: zName = "SQLITE_PROTOCOL"; break;
+ case SQLITE_EMPTY: zName = "SQLITE_EMPTY"; break;
+ case SQLITE_SCHEMA: zName = "SQLITE_SCHEMA"; break;
+ case SQLITE_CONSTRAINT: zName = "SQLITE_CONSTRAINT"; break;
+ case SQLITE_MISMATCH: zName = "SQLITE_MISMATCH"; break;
+ case SQLITE_MISUSE: zName = "SQLITE_MISUSE"; break;
+ case SQLITE_NOLFS: zName = "SQLITE_NOLFS"; break;
+ default: zName = "SQLITE_Unknown"; break;
+ }
+ return zName;
+}
+
+/*
+** Page size and reserved size used for testing.
+*/
+static int test_pagesize = 1024;
+
+/*
+** Usage: pager_open FILENAME N-PAGE
+**
+** Open a new pager
+*/
+static int pager_open(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ int nPage;
+ int rc;
+ char zBuf[100];
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FILENAME N-PAGE\"", 0);
+ return TCL_ERROR;
+ }
+ if( Tcl_GetInt(interp, argv[2], &nPage) ) return TCL_ERROR;
+ rc = sqlite3pager_open(&pPager, argv[1], 0, 0);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3pager_set_cachesize(pPager, nPage);
+ sqlite3pager_set_pagesize(pPager, test_pagesize);
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%p",pPager);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
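+
+/*
+** Illustrative Tcl usage of the pager test commands in this file (file and
+** handle names are placeholders, and the exact sequence is only a sketch):
+**
+**     set P  [pager_open ptf1.db 10]
+**     set G1 [page_get $P 1]
+**     page_write $G1 hello
+**     pager_commit $P
+**     page_unref $G1
+**     pager_close $P
+*/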
+
+/*
+** Usage: pager_close ID
+**
+** Close the given pager.
+*/
+static int pager_close(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3pager_close(pPager);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: pager_rollback ID
+**
+** Rollback changes
+*/
+static int pager_rollback(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3pager_rollback(pPager);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: pager_commit ID
+**
+** Commit all changes
+*/
+static int pager_commit(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3pager_commit(pPager);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: pager_stmt_begin ID
+**
+** Start a new checkpoint.
+*/
+static int pager_stmt_begin(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3pager_stmt_begin(pPager);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: pager_stmt_rollback ID
+**
+** Rollback changes to a checkpoint
+*/
+static int pager_stmt_rollback(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3pager_stmt_rollback(pPager);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: pager_stmt_commit ID
+**
+** Commit changes to a checkpoint
+*/
+static int pager_stmt_commit(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3pager_stmt_commit(pPager);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: pager_stats ID
+**
+** Return pager statistics.
+*/
+static int pager_stats(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ int i, *a;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ a = sqlite3pager_stats(pPager);
+ for(i=0; i<9; i++){
+ static char *zName[] = {
+ "ref", "page", "max", "size", "state", "err",
+ "hit", "miss", "ovfl",
+ };
+ char zBuf[100];
+ Tcl_AppendElement(interp, zName[i]);
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%d",a[i]);
+ Tcl_AppendElement(interp, zBuf);
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: pager_pagecount ID
+**
+** Return the size of the database file, in pages.
+*/
+static int pager_pagecount(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ char zBuf[100];
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%d",sqlite3pager_pagecount(pPager));
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: page_get ID PGNO
+**
+** Return a pointer to a page from the database.
+*/
+static int page_get(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ char zBuf[100];
+ void *pPage;
+ int pgno;
+ int rc;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID PGNO\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &pgno) ) return TCL_ERROR;
+ rc = sqlite3pager_get(pPager, pgno, &pPage);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%p",pPage);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: page_lookup ID PGNO
+**
+** Return a pointer to a page if the page is already in cache.
+** If not in cache, return an empty string.
+*/
+static int page_lookup(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ char zBuf[100];
+ void *pPage;
+ int pgno;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID PGNO\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &pgno) ) return TCL_ERROR;
+ pPage = sqlite3pager_lookup(pPager, pgno);
+ if( pPage ){
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%p",pPage);
+ Tcl_AppendResult(interp, zBuf, 0);
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: pager_truncate ID PGNO
+*/
+static int pager_truncate(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Pager *pPager;
+ int rc;
+ int pgno;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID PGNO\"", 0);
+ return TCL_ERROR;
+ }
+ pPager = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &pgno) ) return TCL_ERROR;
+ rc = sqlite3pager_truncate(pPager, pgno);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+
+/*
+** Usage: page_unref PAGE
+**
+** Drop a pointer to a page.
+*/
+static int page_unref(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ void *pPage;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " PAGE\"", 0);
+ return TCL_ERROR;
+ }
+ pPage = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3pager_unref(pPage);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: page_read PAGE
+**
+** Return the content of a page
+*/
+static int page_read(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ char zBuf[100];
+ void *pPage;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " PAGE\"", 0);
+ return TCL_ERROR;
+ }
+ pPage = sqlite3TextToPtr(argv[1]);
+ memcpy(zBuf, pPage, sizeof(zBuf));
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: page_number PAGE
+**
+** Return the page number for a page.
+*/
+static int page_number(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ char zBuf[100];
+ void *pPage;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " PAGE\"", 0);
+ return TCL_ERROR;
+ }
+ pPage = sqlite3TextToPtr(argv[1]);
+ sqlite3_snprintf(sizeof(zBuf), zBuf, "%d", sqlite3pager_pagenumber(pPage));
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: page_write PAGE DATA
+**
+** Write something into a page.
+*/
+static int page_write(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ void *pPage;
+ int rc;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " PAGE DATA\"", 0);
+ return TCL_ERROR;
+ }
+ pPage = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3pager_write(pPage);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ strncpy((char*)pPage, argv[2], test_pagesize-1);
+ ((char*)pPage)[test_pagesize-1] = 0;
+ return TCL_OK;
+}
+
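+/*
+** Illustrative sketch only: one plausible sequence for writing a page and
+** reading it back.  The file name and the pager_open arguments are assumed
+** for the example and are not taken from the real test scripts.
+**
+**   set p  [pager_open ptest.db 10]
+**   set pg [page_get $p 1]
+**   page_write $pg "hello"
+**   page_read $pg                  ;# expected to return "hello"
+**   pager_commit $p
+**   page_unref $pg
+**   pager_close $p
+*/
+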
+#ifndef SQLITE_OMIT_DISKIO
+/*
+** Usage: fake_big_file N FILENAME
+**
+** Write a few bytes at the N megabyte point of FILENAME. This will
+** create a large file. If the file was a valid SQLite database, then
+** the next time the database is opened, SQLite will begin allocating
+** new pages after N. If N is 2096 or bigger, this will test the
+** ability of SQLite to write to large files.
+*/
+static int fake_big_file(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int rc;
+ int n;
+ i64 offset;
+ OsFile *fd = 0;
+ int readOnly = 0;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " N-MEGABYTES FILE\"", 0);
+ return TCL_ERROR;
+ }
+ if( Tcl_GetInt(interp, argv[1], &n) ) return TCL_ERROR;
+ rc = sqlite3OsOpenReadWrite(argv[2], &fd, &readOnly);
+ if( rc ){
+ Tcl_AppendResult(interp, "open failed: ", errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ offset = n;
+ offset *= 1024*1024;
+ rc = sqlite3OsSeek(fd, offset);
+ if( rc ){
+ Tcl_AppendResult(interp, "seek failed: ", errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ rc = sqlite3OsWrite(fd, "Hello, World!", 14);
+ sqlite3OsClose(&fd);
+ if( rc ){
+ Tcl_AppendResult(interp, "write failed: ", errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+#endif
+
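+/*
+** Example sketch (values are arbitrary): forcing a file past the 32-bit
+** offset range before it is used as a database.
+**
+**   fake_big_file 4096 big.db   ;# place a few bytes at the 4096MB point
+*/
+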
+/*
+** Register commands with the TCL interpreter.
+*/
+int Sqlitetest2_Init(Tcl_Interp *interp){
+ extern int sqlite3_io_error_pending;
+ extern int sqlite3_io_error_hit;
+ extern int sqlite3_diskfull_pending;
+ extern int sqlite3_diskfull;
+ static struct {
+ char *zName;
+ Tcl_CmdProc *xProc;
+ } aCmd[] = {
+ { "pager_open", (Tcl_CmdProc*)pager_open },
+ { "pager_close", (Tcl_CmdProc*)pager_close },
+ { "pager_commit", (Tcl_CmdProc*)pager_commit },
+ { "pager_rollback", (Tcl_CmdProc*)pager_rollback },
+ { "pager_stmt_begin", (Tcl_CmdProc*)pager_stmt_begin },
+ { "pager_stmt_commit", (Tcl_CmdProc*)pager_stmt_commit },
+ { "pager_stmt_rollback", (Tcl_CmdProc*)pager_stmt_rollback },
+ { "pager_stats", (Tcl_CmdProc*)pager_stats },
+ { "pager_pagecount", (Tcl_CmdProc*)pager_pagecount },
+ { "page_get", (Tcl_CmdProc*)page_get },
+ { "page_lookup", (Tcl_CmdProc*)page_lookup },
+ { "page_unref", (Tcl_CmdProc*)page_unref },
+ { "page_read", (Tcl_CmdProc*)page_read },
+ { "page_write", (Tcl_CmdProc*)page_write },
+ { "page_number", (Tcl_CmdProc*)page_number },
+ { "pager_truncate", (Tcl_CmdProc*)pager_truncate },
+#ifndef SQLITE_OMIT_DISKIO
+ { "fake_big_file", (Tcl_CmdProc*)fake_big_file },
+#endif
+ };
+ int i;
+ for(i=0; i<sizeof(aCmd)/sizeof(aCmd[0]); i++){
+ Tcl_CreateCommand(interp, aCmd[i].zName, aCmd[i].xProc, 0, 0);
+ }
+ Tcl_LinkVar(interp, "sqlite_io_error_pending",
+ (char*)&sqlite3_io_error_pending, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_io_error_hit",
+ (char*)&sqlite3_io_error_hit, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_diskfull_pending",
+ (char*)&sqlite3_diskfull_pending, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_diskfull",
+ (char*)&sqlite3_diskfull, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "sqlite_pending_byte",
+ (char*)&sqlite3_pending_byte, TCL_LINK_INT);
+ Tcl_LinkVar(interp, "pager_pagesize",
+ (char*)&test_pagesize, TCL_LINK_INT);
+ return TCL_OK;
+}
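+
+/*
+** Hypothetical sketch: once the commands above are registered, a test
+** script can also drive the linked C variables directly.  The exact
+** semantics of the counters (e.g. that sqlite_io_error_pending counts down
+** to the I/O operation that should fail) are assumed here for illustration.
+**
+**   set sqlite_io_error_pending 5   ;# simulate a failure on a later I/O call
+**   set pager_pagesize 1024         ;# page size assumed by the page_* commands
+*/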
Added: freeswitch/trunk/libs/sqlite/src/test3.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test3.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1459 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Code for testing the btree.c module in SQLite. This code
+** is not included in the SQLite library. It is used for automated
+** testing of the SQLite library.
+**
+** $Id: test3.c,v 1.67 2006/08/13 18:39:26 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "pager.h"
+#include "btree.h"
+#include "tcl.h"
+#include <stdlib.h>
+#include <string.h>
+
+/*
+** Interpret an SQLite error number
+*/
+static char *errorName(int rc){
+ char *zName;
+ switch( rc ){
+ case SQLITE_OK: zName = "SQLITE_OK"; break;
+ case SQLITE_ERROR: zName = "SQLITE_ERROR"; break;
+ case SQLITE_PERM: zName = "SQLITE_PERM"; break;
+ case SQLITE_ABORT: zName = "SQLITE_ABORT"; break;
+ case SQLITE_BUSY: zName = "SQLITE_BUSY"; break;
+ case SQLITE_NOMEM: zName = "SQLITE_NOMEM"; break;
+ case SQLITE_READONLY: zName = "SQLITE_READONLY"; break;
+ case SQLITE_INTERRUPT: zName = "SQLITE_INTERRUPT"; break;
+ case SQLITE_IOERR: zName = "SQLITE_IOERR"; break;
+ case SQLITE_CORRUPT: zName = "SQLITE_CORRUPT"; break;
+ case SQLITE_FULL: zName = "SQLITE_FULL"; break;
+ case SQLITE_CANTOPEN: zName = "SQLITE_CANTOPEN"; break;
+ case SQLITE_PROTOCOL: zName = "SQLITE_PROTOCOL"; break;
+ case SQLITE_EMPTY: zName = "SQLITE_EMPTY"; break;
+ case SQLITE_LOCKED: zName = "SQLITE_LOCKED"; break;
+ default: zName = "SQLITE_Unknown"; break;
+ }
+ return zName;
+}
+
+/*
+** Usage: btree_open FILENAME NCACHE FLAGS
+**
+** Open a new database
+*/
+static int btree_open(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc, nCache, flags;
+ char zBuf[100];
+ if( argc!=4 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " FILENAME NCACHE FLAGS\"", 0);
+ return TCL_ERROR;
+ }
+ if( Tcl_GetInt(interp, argv[2], &nCache) ) return TCL_ERROR;
+ if( Tcl_GetInt(interp, argv[3], &flags) ) return TCL_ERROR;
+ rc = sqlite3BtreeOpen(argv[1], 0, &pBt, flags);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3BtreeSetCacheSize(pBt, nCache);
+ sqlite3_snprintf(sizeof(zBuf), zBuf,"%p", pBt);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
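+/*
+** Illustrative sketch of a minimal open/close cycle.  The file name, cache
+** size, and the FLAGS values (0 for the open, 1 assumed to request an
+** integer-key table) are examples, not taken from the real test scripts.
+**
+**   set bt [btree_open test1.bt 100 0]
+**   btree_begin_transaction $bt
+**   set t [btree_create_table $bt 1]
+**   btree_commit $bt
+**   btree_close $bt
+*/
+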
+/*
+** Usage: btree_close ID
+**
+** Close the given database.
+*/
+static int btree_close(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeClose(pBt);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_begin_transaction ID
+**
+** Start a new transaction
+*/
+static int btree_begin_transaction(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeBeginTrans(pBt, 1);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_rollback ID
+**
+** Rollback changes
+*/
+static int btree_rollback(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeRollback(pBt);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_commit ID
+**
+** Commit all changes
+*/
+static int btree_commit(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeCommit(pBt);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_begin_statement ID
+**
+** Start a new statement transaction
+*/
+static int btree_begin_statement(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeBeginStmt(pBt);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_rollback_statement ID
+**
+** Rollback changes
+*/
+static int btree_rollback_statement(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeRollbackStmt(pBt);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_commit_statement ID
+**
+** Commit all changes
+*/
+static int btree_commit_statement(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeCommitStmt(pBt);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_create_table ID FLAGS
+**
+** Create a new table in the database
+*/
+static int btree_create_table(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc, iTable, flags;
+ char zBuf[30];
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID FLAGS\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &flags) ) return TCL_ERROR;
+ rc = sqlite3BtreeCreateTable(pBt, &iTable, flags);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3_snprintf(sizeof(zBuf), zBuf, "%d", iTable);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_drop_table ID TABLENUM
+**
+** Delete an entire table from the database
+*/
+static int btree_drop_table(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int iTable;
+ int rc;
+ int notUsed1;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID TABLENUM\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &iTable) ) return TCL_ERROR;
+ rc = sqlite3BtreeDropTable(pBt, iTable, &notUsed1);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_clear_table ID TABLENUM
+**
+** Remove all entries from the given table but keep the table around.
+*/
+static int btree_clear_table(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int iTable;
+ int rc;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID TABLENUM\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &iTable) ) return TCL_ERROR;
+ rc = sqlite3BtreeClearTable(pBt, iTable);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_get_meta ID
+**
+** Return meta data
+*/
+static int btree_get_meta(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc;
+ int i;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ for(i=0; i<SQLITE_N_BTREE_META; i++){
+ char zBuf[30];
+ unsigned int v;
+ rc = sqlite3BtreeGetMeta(pBt, i, &v);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3_snprintf(sizeof(zBuf), zBuf,"%d",v);
+ Tcl_AppendElement(interp, zBuf);
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_update_meta ID METADATA...
+**
+** Update meta data
+*/
+static int btree_update_meta(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int rc;
+ int i;
+ int aMeta[SQLITE_N_BTREE_META];
+
+ if( argc!=2+SQLITE_N_BTREE_META ){
+ char zBuf[30];
+ sqlite3_snprintf(sizeof(zBuf), zBuf,"%d",SQLITE_N_BTREE_META);
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID METADATA...\" (METADATA is ", zBuf, " integers)", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ for(i=1; i<SQLITE_N_BTREE_META; i++){
+ if( Tcl_GetInt(interp, argv[i+2], &aMeta[i]) ) return TCL_ERROR;
+ }
+ for(i=1; i<SQLITE_N_BTREE_META; i++){
+ rc = sqlite3BtreeUpdateMeta(pBt, i, aMeta[i]);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ }
+ return TCL_OK;
+}
+
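+/*
+** Example sketch: read the meta values, change one slot, and write the
+** whole vector back.  Which slot is safe to modify is an assumption of the
+** example, as is the $bt handle from an earlier btree_open.
+**
+**   set m [btree_get_meta $bt]
+**   eval btree_update_meta $bt [lreplace $m 3 3 42]
+*/
+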
+/*
+** Usage: btree_page_dump ID PAGENUM
+**
+** Print a disassembly of a page on standard output
+*/
+static int btree_page_dump(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int iPage;
+ int rc;
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID PAGENUM\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &iPage) ) return TCL_ERROR;
+ rc = sqlite3BtreePageDump(pBt, iPage, 0);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_tree_dump ID PAGENUM
+**
+** Print a disassembly of a page and all its child pages on standard output
+*/
+static int btree_tree_dump(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int iPage;
+ int rc;
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID PAGENUM\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &iPage) ) return TCL_ERROR;
+ rc = sqlite3BtreePageDump(pBt, iPage, 1);
+ if( rc!=SQLITE_OK ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_pager_stats ID
+**
+** Returns pager statistics
+*/
+static int btree_pager_stats(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int i;
+ int *a;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ a = sqlite3pager_stats(sqlite3BtreePager(pBt));
+ for(i=0; i<11; i++){
+ static char *zName[] = {
+ "ref", "page", "max", "size", "state", "err",
+ "hit", "miss", "ovfl", "read", "write"
+ };
+ char zBuf[100];
+ Tcl_AppendElement(interp, zName[i]);
+ sqlite3_snprintf(sizeof(zBuf), zBuf,"%d",a[i]);
+ Tcl_AppendElement(interp, zBuf);
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_pager_ref_dump ID
+**
+** Print out all outstanding pages.
+*/
+static int btree_pager_ref_dump(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+#ifdef SQLITE_DEBUG
+ sqlite3pager_refdump(sqlite3BtreePager(pBt));
+#endif
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_integrity_check ID ROOT ...
+**
+** Look through every page of the given BTree file to verify correct
+** formatting and linkage. Return a line of text for each problem found.
+** Return an empty string if everything worked.
+*/
+static int btree_integrity_check(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int nRoot;
+ int *aRoot;
+ int i;
+ char *zResult;
+
+ if( argc<3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID ROOT ...\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ nRoot = argc-2;
+ aRoot = malloc( sizeof(int)*(argc-2) );
+ for(i=0; i<argc-2; i++){
+ if( Tcl_GetInt(interp, argv[i+2], &aRoot[i]) ) return TCL_ERROR;
+ }
+#ifndef SQLITE_OMIT_INTEGRITY_CHECK
+ zResult = sqlite3BtreeIntegrityCheck(pBt, aRoot, nRoot);
+#else
+ zResult = 0;
+#endif
+ free(aRoot);
+ if( zResult ){
+ Tcl_AppendResult(interp, zResult, 0);
+ sqliteFree(zResult);
+ }
+ return TCL_OK;
+}
+
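+/*
+** Example sketch: page 1 is always a root page, and additional root pages
+** would normally come from btree_create_table.  The variable names are
+** illustrative only.
+**
+**   set err [btree_integrity_check $bt 1 $t]
+**   if {$err != ""} { puts "corruption detected: $err" }
+*/
+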
+/*
+** Usage: btree_cursor_list ID
+**
+** Print information about all cursors to standard output for debugging.
+*/
+static int btree_cursor_list(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ sqlite3BtreeCursorList(pBt);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_cursor ID TABLENUM WRITEABLE
+**
+** Create a new cursor. Return the ID for the cursor.
+*/
+static int btree_cursor(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ Btree *pBt;
+ int iTable;
+ BtCursor *pCur;
+ int rc;
+ int wrFlag;
+ char zBuf[30];
+
+ if( argc!=4 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID TABLENUM WRITEABLE\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &iTable) ) return TCL_ERROR;
+ if( Tcl_GetBoolean(interp, argv[3], &wrFlag) ) return TCL_ERROR;
+ rc = sqlite3BtreeCursor(pBt, iTable, wrFlag, 0, 0, &pCur);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3_snprintf(sizeof(zBuf), zBuf,"%p", pCur);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
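+/*
+** Illustrative sketch: open a read-only cursor on table $t (created earlier
+** with btree_create_table) and walk it from the first entry to the end.
+** The sketch assumes a transaction is already open on $bt; variable names
+** are examples only.
+**
+**   set csr [btree_cursor $bt $t 0]
+**   for {set rc [btree_first $csr]} {$rc==0} {set rc [btree_next $csr]} {
+**     puts "[btree_key $csr] -> [btree_data $csr]"
+**   }
+**   btree_close_cursor $csr
+*/
+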
+/*
+** Usage: btree_close_cursor ID
+**
+** Close a cursor opened using btree_cursor.
+*/
+static int btree_close_cursor(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeCloseCursor(pCur);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_move_to ID KEY
+**
+** Move the cursor to the entry with the given key.
+*/
+static int btree_move_to(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+ int res;
+ char zBuf[20];
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID KEY\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ if( sqlite3BtreeFlags(pCur) & BTREE_INTKEY ){
+ int iKey;
+ if( Tcl_GetInt(interp, argv[2], &iKey) ) return TCL_ERROR;
+ rc = sqlite3BtreeMoveto(pCur, 0, iKey, &res);
+ }else{
+ rc = sqlite3BtreeMoveto(pCur, argv[2], strlen(argv[2]), &res);
+ }
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ if( res<0 ) res = -1;
+ if( res>0 ) res = 1;
+ sqlite3_snprintf(sizeof(zBuf), zBuf,"%d",res);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_delete ID
+**
+** Delete the entry that the cursor is pointing to
+*/
+static int btree_delete(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeDelete(pCur);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_insert ID KEY DATA
+**
+** Create a new entry with the given key and data. If an entry already
+** exists with the same key the old entry is overwritten.
+*/
+static int btree_insert(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ BtCursor *pCur;
+ int rc;
+
+ if( objc!=4 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "ID KEY DATA");
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(Tcl_GetString(objv[1]));
+ if( sqlite3BtreeFlags(pCur) & BTREE_INTKEY ){
+ i64 iKey;
+ int len;
+ unsigned char *pBuf;
+ if( Tcl_GetWideIntFromObj(interp, objv[2], &iKey) ) return TCL_ERROR;
+ pBuf = Tcl_GetByteArrayFromObj(objv[3], &len);
+ rc = sqlite3BtreeInsert(pCur, 0, iKey, pBuf, len);
+ }else{
+ int keylen;
+ int dlen;
+ unsigned char *pKBuf;
+ unsigned char *pDBuf;
+ pKBuf = Tcl_GetByteArrayFromObj(objv[2], &keylen);
+ pDBuf = Tcl_GetByteArrayFromObj(objv[3], &dlen);
+ rc = sqlite3BtreeInsert(pCur, pKBuf, keylen, pDBuf, dlen);
+ }
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ return SQLITE_OK;
+}
+
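+/*
+** Example sketch: insert a row through a writable cursor.  It assumes $t
+** names an integer-key table and that a write transaction is already open
+** on $bt; the key and data values are arbitrary.
+**
+**   set csr [btree_cursor $bt $t 1]
+**   btree_insert $csr 5 "hello"
+**   btree_move_to $csr 5            ;# expected to return 0 (exact match)
+**   btree_close_cursor $csr
+*/
+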
+/*
+** Usage: btree_next ID
+**
+** Move the cursor to the next entry in the table. Return 0 on success
+** or 1 if the cursor was already on the last entry in the table or if
+** the table is empty.
+*/
+static int btree_next(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+ int res = 0;
+ char zBuf[100];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeNext(pCur, &res);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%d",res);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_prev ID
+**
+** Move the cursor to the previous entry in the table. Return 0 on
+** success and 1 if the cursor was already on the first entry in
+** the table or if the table was empty.
+*/
+static int btree_prev(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+ int res = 0;
+ char zBuf[100];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreePrevious(pCur, &res);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%d",res);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_first ID
+**
+** Move the cursor to the first entry in the table. Return 0 if the
+** cursor was left pointing to something and 1 if the table is empty.
+*/
+static int btree_first(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+ int res = 0;
+ char zBuf[100];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeFirst(pCur, &res);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%d",res);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_last ID
+**
+** Move the cursor to the last entry in the table. Return 0 if the
+** cursor was left pointing to something and 1 if the table is empty.
+*/
+static int btree_last(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+ int res = 0;
+ char zBuf[100];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ rc = sqlite3BtreeLast(pCur, &res);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ sqlite3_snprintf(sizeof(zBuf),zBuf,"%d",res);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_eof ID
+**
+** Return TRUE if the given cursor is not pointing at a valid entry.
+** Return FALSE if the cursor does point to a valid entry.
+*/
+static int btree_eof(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ char zBuf[50];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ sqlite3_snprintf(sizeof(zBuf),zBuf, "%d", sqlite3BtreeEof(pCur));
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_keysize ID
+**
+** Return the number of bytes of key. For an INTKEY table, this
+** returns the key itself.
+*/
+static int btree_keysize(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ u64 n;
+ char zBuf[50];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ sqlite3BtreeKeySize(pCur, (i64*)&n);
+ sqlite3_snprintf(sizeof(zBuf),zBuf, "%llu", n);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_key ID
+**
+** Return the key for the entry at which the cursor is pointing.
+*/
+static int btree_key(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+ u64 n;
+ char *zBuf;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ sqlite3BtreeKeySize(pCur, (i64*)&n);
+ if( sqlite3BtreeFlags(pCur) & BTREE_INTKEY ){
+ char zBuf2[60];
+ sqlite3_snprintf(sizeof(zBuf2),zBuf2, "%llu", n);
+ Tcl_AppendResult(interp, zBuf2, 0);
+ }else{
+ zBuf = malloc( n+1 );
+ rc = sqlite3BtreeKey(pCur, 0, n, zBuf);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ zBuf[n] = 0;
+ Tcl_AppendResult(interp, zBuf, 0);
+ free(zBuf);
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_data ID ?N?
+**
+** Return the data for the entry at which the cursor is pointing.
+*/
+static int btree_data(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+ u32 n;
+ char *zBuf;
+
+ if( argc!=2 && argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID ?N?\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ if( argc==2 ){
+ sqlite3BtreeDataSize(pCur, &n);
+ }else{
+ n = atoi(argv[2]);
+ }
+ zBuf = malloc( n+1 );
+ rc = sqlite3BtreeData(pCur, 0, n, zBuf);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ zBuf[n] = 0;
+ Tcl_AppendResult(interp, zBuf, 0);
+ free(zBuf);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_fetch_key ID AMT
+**
+** Use the sqlite3BtreeKeyFetch() routine to get AMT bytes of the key.
+** If sqlite3BtreeKeyFetch() fails, return an empty string.
+*/
+static int btree_fetch_key(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int n;
+ int amt;
+ u64 nKey;
+ const char *zBuf;
+ char zStatic[1000];
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID AMT\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &n) ) return TCL_ERROR;
+ sqlite3BtreeKeySize(pCur, (i64*)&nKey);
+ zBuf = sqlite3BtreeKeyFetch(pCur, &amt);
+ if( zBuf && amt>=n ){
+ assert( nKey<sizeof(zStatic) );
+ if( n>0 ) nKey = n;
+ memcpy(zStatic, zBuf, (int)nKey);
+ zStatic[nKey] = 0;
+ Tcl_AppendResult(interp, zStatic, 0);
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_fetch_data ID AMT
+**
+** Use the sqlite3BtreeDataFetch() routine to get AMT bytes of the data.
+** If sqlite3BtreeDataFetch() fails, return an empty string.
+*/
+static int btree_fetch_data(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int n;
+ int amt;
+ u32 nData;
+ const char *zBuf;
+ char zStatic[1000];
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID AMT\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &n) ) return TCL_ERROR;
+ sqlite3BtreeDataSize(pCur, &nData);
+ zBuf = sqlite3BtreeDataFetch(pCur, &amt);
+ if( zBuf && amt>=n ){
+ assert( nData<sizeof(zStatic) );
+ if( n>0 ) nData = n;
+ memcpy(zStatic, zBuf, (int)nData);
+ zStatic[nData] = 0;
+ Tcl_AppendResult(interp, zStatic, 0);
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: btree_payload_size ID
+**
+** Return the number of bytes of payload
+*/
+static int btree_payload_size(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int n2;
+ u64 n1;
+ char zBuf[50];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ if( sqlite3BtreeFlags(pCur) & BTREE_INTKEY ){
+ n1 = 0;
+ }else{
+ sqlite3BtreeKeySize(pCur, (i64*)&n1);
+ }
+ sqlite3BtreeDataSize(pCur, (u32*)&n2);
+ sqlite3_snprintf(sizeof(zBuf),zBuf, "%d", (int)(n1+n2));
+ Tcl_AppendResult(interp, zBuf, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Usage: btree_cursor_info ID ?UP-CNT?
+**
+** Return integers containing information about the entry the
+** cursor is pointing to:
+**
+** aResult[0] = The page number
+** aResult[1] = The entry number
+** aResult[2] = Total number of entries on this page
+** aResult[3] = Cell size (local payload + header)
+** aResult[4] = Number of free bytes on this page
+** aResult[5] = Number of free blocks on the page
+** aResult[6] = Total payload size (local + overflow)
+** aResult[7] = Header size in bytes
+** aResult[8] = Local payload size
+** aResult[9] = Parent page number
+*/
+static int btree_cursor_info(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ BtCursor *pCur;
+ int rc;
+ int i, j;
+ int up;
+ int aResult[10];
+ char zBuf[400];
+
+ if( argc!=2 && argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID ?UP-CNT?\"", 0);
+ return TCL_ERROR;
+ }
+ pCur = sqlite3TextToPtr(argv[1]);
+ if( argc==3 ){
+ if( Tcl_GetInt(interp, argv[2], &up) ) return TCL_ERROR;
+ }else{
+ up = 0;
+ }
+ rc = sqlite3BtreeCursorInfo(pCur, aResult, up);
+ if( rc ){
+ Tcl_AppendResult(interp, errorName(rc), 0);
+ return TCL_ERROR;
+ }
+ j = 0;
+ for(i=0; i<sizeof(aResult)/sizeof(aResult[0]); i++){
+ sqlite3_snprintf(40,&zBuf[j]," %d", aResult[i]);
+ j += strlen(&zBuf[j]);
+ }
+ Tcl_AppendResult(interp, &zBuf[1], 0);
+ return SQLITE_OK;
+}
+
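+/*
+** Example sketch: the ten integers can be picked apart with a Tcl
+** multi-assignment.  The variable names simply mirror the list above.
+**
+**   foreach {pgno cell ncell csize nfree nfrag payload hdr local parent} \
+**           [btree_cursor_info $csr] break
+**   puts "entry $cell of $ncell on page $pgno"
+*/
+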
+/*
+** This command is provided for the purpose of setting breakpoints
+** in regression test scripts.
+**
+** By setting a GDB breakpoint on this procedure and executing the
+** btree_breakpoint command in a test script, we can stop GDB at
+** the point in the script where the btree_breakpoint command is
+** inserted. This is useful for debugging.
+*/
+static int btree_breakpoint(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ return TCL_OK;
+}
+
+/*
+** usage: varint_test START MULTIPLIER COUNT INCREMENT
+**
+** This command tests the sqlite3PutVarint() and sqlite3GetVarint()
+** routines, both for accuracy and for speed.
+**
+** An integer is written using PutVarint() and read back with
+** GetVarint() and verified to be unchanged. This repeats COUNT
+** times. The first integer is START*MULTIPLIER. Each iteration
+** increases the integer by INCREMENT.
+**
+** This command returns nothing if it works. It returns an error message
+** if something goes wrong.
+*/
+static int btree_varint_test(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ u32 start, mult, count, incr;
+ u64 in, out;
+ int n1, n2, i, j;
+ unsigned char zBuf[100];
+ if( argc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " START MULTIPLIER COUNT INCREMENT\"", 0);
+ return TCL_ERROR;
+ }
+ if( Tcl_GetInt(interp, argv[1], (int*)&start) ) return TCL_ERROR;
+ if( Tcl_GetInt(interp, argv[2], (int*)&mult) ) return TCL_ERROR;
+ if( Tcl_GetInt(interp, argv[3], (int*)&count) ) return TCL_ERROR;
+ if( Tcl_GetInt(interp, argv[4], (int*)&incr) ) return TCL_ERROR;
+ in = start;
+ in *= mult;
+ for(i=0; i<count; i++){
+ char zErr[200];
+ n1 = sqlite3PutVarint(zBuf, in);
+ if( n1>9 || n1<1 ){
+ sprintf(zErr, "PutVarint returned %d - should be between 1 and 9", n1);
+ Tcl_AppendResult(interp, zErr, 0);
+ return TCL_ERROR;
+ }
+ n2 = sqlite3GetVarint(zBuf, &out);
+ if( n1!=n2 ){
+ sprintf(zErr, "PutVarint returned %d and GetVarint returned %d", n1, n2);
+ Tcl_AppendResult(interp, zErr, 0);
+ return TCL_ERROR;
+ }
+ if( in!=out ){
+ sprintf(zErr, "Wrote 0x%016llx and got back 0x%016llx", in, out);
+ Tcl_AppendResult(interp, zErr, 0);
+ return TCL_ERROR;
+ }
+ if( (in & 0xffffffff)==in ){
+ u32 out32;
+ n2 = sqlite3GetVarint32(zBuf, &out32);
+ out = out32;
+ if( n1!=n2 ){
+ sprintf(zErr, "PutVarint returned %d and GetVarint32 returned %d",
+ n1, n2);
+ Tcl_AppendResult(interp, zErr, 0);
+ return TCL_ERROR;
+ }
+ if( in!=out ){
+ sprintf(zErr, "Wrote 0x%016llx and got back 0x%016llx from GetVarint32",
+ in, out);
+ Tcl_AppendResult(interp, zErr, 0);
+ return TCL_ERROR;
+ }
+ }
+
+ /* In order to get realistic timings, run getVarint 19 more times.
+ ** This is because getVarint is called about 20 times more often
+ ** than putVarint.
+ */
+ for(j=0; j<19; j++){
+ sqlite3GetVarint(zBuf, &out);
+ }
+ in += incr;
+ }
+ return TCL_OK;
+}
+
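+/*
+** Example sketch: the argument values below are arbitrary.  The first call
+** covers small values (which also take the GetVarint32() path); the second
+** starts at exactly 2^32 (1073741824*4) to cover varints larger than 32
+** bits.
+**
+**   btree_varint_test 0 1 500 1
+**   btree_varint_test 1073741824 4 500 1
+*/
+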
+/*
+** usage: btree_from_db DB-HANDLE
+**
+** This command returns the btree handle for the main database associated
+** with the database-handle passed as the argument. Example usage:
+**
+** sqlite3 db test.db
+** set bt [btree_from_db db]
+*/
+static int btree_from_db(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ char zBuf[100];
+ Tcl_CmdInfo info;
+ sqlite3 *db;
+ Btree *pBt;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " DB-HANDLE\"", 0);
+ return TCL_ERROR;
+ }
+
+ if( 1!=Tcl_GetCommandInfo(interp, argv[1], &info) ){
+ Tcl_AppendResult(interp, "No such db-handle: \"", argv[1], "\"", 0);
+ return TCL_ERROR;
+ }
+ db = *((sqlite3 **)info.objClientData);
+ assert( db );
+
+ pBt = db->aDb[0].pBt;
+ sqlite3_snprintf(sizeof(zBuf), zBuf, "%p", pBt);
+ Tcl_SetResult(interp, zBuf, TCL_VOLATILE);
+ return TCL_OK;
+}
+
+
+/*
+** usage: btree_set_cache_size ID NCACHE
+**
+** Set the size of the cache used by btree $ID.
+*/
+static int btree_set_cache_size(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int nCache;
+ Btree *pBt;
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " BT NCACHE\"", 0);
+ return TCL_ERROR;
+ }
+ pBt = sqlite3TextToPtr(argv[1]);
+ if( Tcl_GetInt(interp, argv[2], &nCache) ) return TCL_ERROR;
+ sqlite3BtreeSetCacheSize(pBt, nCache);
+ return TCL_OK;
+}
+
+
+/*
+** Register commands with the TCL interpreter.
+*/
+int Sqlitetest3_Init(Tcl_Interp *interp){
+ extern int sqlite3_btree_trace;
+ static struct {
+ char *zName;
+ Tcl_CmdProc *xProc;
+ } aCmd[] = {
+ { "btree_open", (Tcl_CmdProc*)btree_open },
+ { "btree_close", (Tcl_CmdProc*)btree_close },
+ { "btree_begin_transaction", (Tcl_CmdProc*)btree_begin_transaction },
+ { "btree_commit", (Tcl_CmdProc*)btree_commit },
+ { "btree_rollback", (Tcl_CmdProc*)btree_rollback },
+ { "btree_create_table", (Tcl_CmdProc*)btree_create_table },
+ { "btree_drop_table", (Tcl_CmdProc*)btree_drop_table },
+ { "btree_clear_table", (Tcl_CmdProc*)btree_clear_table },
+ { "btree_get_meta", (Tcl_CmdProc*)btree_get_meta },
+ { "btree_update_meta", (Tcl_CmdProc*)btree_update_meta },
+ { "btree_page_dump", (Tcl_CmdProc*)btree_page_dump },
+ { "btree_tree_dump", (Tcl_CmdProc*)btree_tree_dump },
+ { "btree_pager_stats", (Tcl_CmdProc*)btree_pager_stats },
+ { "btree_pager_ref_dump", (Tcl_CmdProc*)btree_pager_ref_dump },
+ { "btree_cursor", (Tcl_CmdProc*)btree_cursor },
+ { "btree_close_cursor", (Tcl_CmdProc*)btree_close_cursor },
+ { "btree_move_to", (Tcl_CmdProc*)btree_move_to },
+ { "btree_delete", (Tcl_CmdProc*)btree_delete },
+ { "btree_next", (Tcl_CmdProc*)btree_next },
+ { "btree_prev", (Tcl_CmdProc*)btree_prev },
+ { "btree_eof", (Tcl_CmdProc*)btree_eof },
+ { "btree_keysize", (Tcl_CmdProc*)btree_keysize },
+ { "btree_key", (Tcl_CmdProc*)btree_key },
+ { "btree_data", (Tcl_CmdProc*)btree_data },
+ { "btree_fetch_key", (Tcl_CmdProc*)btree_fetch_key },
+ { "btree_fetch_data", (Tcl_CmdProc*)btree_fetch_data },
+ { "btree_payload_size", (Tcl_CmdProc*)btree_payload_size },
+ { "btree_first", (Tcl_CmdProc*)btree_first },
+ { "btree_last", (Tcl_CmdProc*)btree_last },
+ { "btree_integrity_check", (Tcl_CmdProc*)btree_integrity_check },
+ { "btree_breakpoint", (Tcl_CmdProc*)btree_breakpoint },
+ { "btree_varint_test", (Tcl_CmdProc*)btree_varint_test },
+ { "btree_begin_statement", (Tcl_CmdProc*)btree_begin_statement },
+ { "btree_commit_statement", (Tcl_CmdProc*)btree_commit_statement },
+ { "btree_rollback_statement", (Tcl_CmdProc*)btree_rollback_statement },
+ { "btree_from_db", (Tcl_CmdProc*)btree_from_db },
+ { "btree_set_cache_size", (Tcl_CmdProc*)btree_set_cache_size },
+ { "btree_cursor_info", (Tcl_CmdProc*)btree_cursor_info },
+ { "btree_cursor_list", (Tcl_CmdProc*)btree_cursor_list },
+ };
+ int i;
+
+ for(i=0; i<sizeof(aCmd)/sizeof(aCmd[0]); i++){
+ Tcl_CreateCommand(interp, aCmd[i].zName, aCmd[i].xProc, 0, 0);
+ }
+ Tcl_LinkVar(interp, "pager_refinfo_enable", (char*)&pager3_refinfo_enable,
+ TCL_LINK_INT);
+ Tcl_LinkVar(interp, "btree_trace", (char*)&sqlite3_btree_trace,
+ TCL_LINK_INT);
+
+ /* The btree_insert command is implemented using the tcl 'object'
+ ** interface, not the string interface like the other commands in this
+ ** file. This is so binary data can be inserted into btree tables.
+ */
+ Tcl_CreateObjCommand(interp, "btree_insert", btree_insert, 0, 0);
+ return TCL_OK;
+}
Added: freeswitch/trunk/libs/sqlite/src/test4.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test4.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,717 @@
+/*
+** 2003 December 18
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Code for testing the SQLite library in a multithreaded environment.
+**
+** $Id: test4.c,v 1.17 2006/02/23 21:43:56 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "tcl.h"
+#include "os.h"
+#if defined(OS_UNIX) && OS_UNIX==1 && defined(THREADSAFE) && THREADSAFE==1
+#include <stdlib.h>
+#include <string.h>
+#include <pthread.h>
+#include <sched.h>
+#include <ctype.h>
+
+/*
+** Each thread is controlled by an instance of the following
+** structure.
+*/
+typedef struct Thread Thread;
+struct Thread {
+ /* The first group of fields are writable by the master and read-only
+ ** to the thread. */
+ char *zFilename; /* Name of database file */
+ void (*xOp)(Thread*); /* next operation to do */
+ char *zArg; /* argument usable by xOp */
+ int opnum; /* Operation number */
+ int busy; /* True if this thread is in use */
+
+ /* The next group of fields are writable by the thread but read-only to the
+ ** master. */
+ int completed; /* Number of operations completed */
+ sqlite3 *db; /* Open database */
+ sqlite3_stmt *pStmt; /* Pending operation */
+ char *zErr; /* operation error */
+ char *zStaticErr; /* Static error message */
+ int rc; /* operation return code */
+ int argc; /* number of columns in result */
+ const char *argv[100]; /* result columns */
+ const char *colv[100]; /* result column names */
+};
+
+/*
+** There can be as many as 26 threads running at once. Each is named
+** by a capital letter: A, B, C, ..., Y, Z.
+*/
+#define N_THREAD 26
+static Thread threadset[N_THREAD];
+
+
+/*
+** The main loop for a thread. Threads use busy waiting.
+*/
+static void *thread_main(void *pArg){
+ Thread *p = (Thread*)pArg;
+ if( p->db ){
+ sqlite3_close(p->db);
+ }
+ sqlite3_open(p->zFilename, &p->db);
+ if( SQLITE_OK!=sqlite3_errcode(p->db) ){
+ p->zErr = strdup(sqlite3_errmsg(p->db));
+ sqlite3_close(p->db);
+ p->db = 0;
+ }
+ p->pStmt = 0;
+ p->completed = 1;
+ while( p->opnum<=p->completed ) sched_yield();
+ while( p->xOp ){
+ if( p->zErr && p->zErr!=p->zStaticErr ){
+ sqlite3_free(p->zErr);
+ p->zErr = 0;
+ }
+ (*p->xOp)(p);
+ p->completed++;
+ while( p->opnum<=p->completed ) sched_yield();
+ }
+ if( p->pStmt ){
+ sqlite3_finalize(p->pStmt);
+ p->pStmt = 0;
+ }
+ if( p->db ){
+ sqlite3_close(p->db);
+ p->db = 0;
+ }
+ if( p->zErr && p->zErr!=p->zStaticErr ){
+ sqlite3_free(p->zErr);
+ p->zErr = 0;
+ }
+ p->completed++;
+ sqlite3_thread_cleanup();
+ return 0;
+}
+
+/*
+** Get a thread ID which is an upper case letter. Return the index.
+** If the argument is not a valid thread ID put an error message in
+** the interpreter and return -1.
+*/
+static int parse_thread_id(Tcl_Interp *interp, const char *zArg){
+ if( zArg==0 || zArg[0]==0 || zArg[1]!=0 || !isupper((unsigned char)zArg[0]) ){
+ Tcl_AppendResult(interp, "thread ID must be an upper case letter", 0);
+ return -1;
+ }
+ return zArg[0] - 'A';
+}
+
+/*
+** Usage: thread_create NAME FILENAME
+**
+** NAME should be an upper case letter. Start the thread running with
+** an open connection to the given database.
+*/
+static int tcl_thread_create(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ pthread_t x;
+ int rc;
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID FILENAME", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( threadset[i].busy ){
+ Tcl_AppendResult(interp, "thread ", argv[1], " is already running", 0);
+ return TCL_ERROR;
+ }
+ threadset[i].busy = 1;
+ sqliteFree(threadset[i].zFilename);
+ threadset[i].zFilename = sqliteStrDup(argv[2]);
+ threadset[i].opnum = 1;
+ threadset[i].completed = 0;
+ rc = pthread_create(&x, 0, thread_main, &threadset[i]);
+ if( rc ){
+ Tcl_AppendResult(interp, "failed to create the thread", 0);
+ sqliteFree(threadset[i].zFilename);
+ threadset[i].busy = 0;
+ return TCL_ERROR;
+ }
+ pthread_detach(x);
+ return TCL_OK;
+}
+
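+/*
+** Example sketch of a typical exchange with one worker thread.  The
+** database name and SQL are arbitrary, and the expected results are
+** assumptions based on the command descriptions below.
+**
+**   thread_create A test.db
+**   thread_compile A {SELECT 1}
+**   thread_step A
+**   thread_result A        ;# expected: SQLITE_ROW
+**   thread_argv A 0        ;# expected: 1
+**   thread_halt A
+*/
+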
+/*
+** Wait for a thread to reach its idle state.
+*/
+static void thread_wait(Thread *p){
+ while( p->opnum>p->completed ) sched_yield();
+}
+
+/*
+** Usage: thread_wait ID
+**
+** Wait on thread ID to reach its idle state.
+*/
+static int tcl_thread_wait(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ return TCL_OK;
+}
+
+/*
+** Stop a thread.
+*/
+static void stop_thread(Thread *p){
+ thread_wait(p);
+ p->xOp = 0;
+ p->opnum++;
+ thread_wait(p);
+ sqliteFree(p->zArg);
+ p->zArg = 0;
+ sqliteFree(p->zFilename);
+ p->zFilename = 0;
+ p->busy = 0;
+}
+
+/*
+** Usage: thread_halt ID
+**
+** Cause a thread to shut itself down. Wait for the shutdown to be
+** completed. If ID is "*" then stop all threads.
+*/
+static int tcl_thread_halt(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ if( argv[1][0]=='*' && argv[1][1]==0 ){
+ for(i=0; i<N_THREAD; i++){
+ if( threadset[i].busy ) stop_thread(&threadset[i]);
+ }
+ }else{
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ stop_thread(&threadset[i]);
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: thread_argc ID
+**
+** Wait on the most recent thread_step to complete, then return the
+** number of columns in the result set.
+*/
+static int tcl_thread_argc(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ char zBuf[100];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ sprintf(zBuf, "%d", threadset[i].argc);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: thread_argv ID N
+**
+** Wait on the most recent thread_step to complete, then return the
+** value of the N-th column in the result set.
+*/
+static int tcl_thread_argv(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ int n;
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID N", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ if( Tcl_GetInt(interp, argv[2], &n) ) return TCL_ERROR;
+ thread_wait(&threadset[i]);
+ if( n<0 || n>=threadset[i].argc ){
+ Tcl_AppendResult(interp, "column number out of range", 0);
+ return TCL_ERROR;
+ }
+ Tcl_AppendResult(interp, threadset[i].argv[n], 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: thread_colname ID N
+**
+** Wait on the most recent thread_step to complete, then return the
+** name of the N-th column in the result set.
+*/
+static int tcl_thread_colname(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ int n;
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID N", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ if( Tcl_GetInt(interp, argv[2], &n) ) return TCL_ERROR;
+ thread_wait(&threadset[i]);
+ if( n<0 || n>=threadset[i].argc ){
+ Tcl_AppendResult(interp, "column number out of range", 0);
+ return TCL_ERROR;
+ }
+ Tcl_AppendResult(interp, threadset[i].colv[n], 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: thread_result ID
+**
+** Wait on the most recent operation to complete, then return the
+** result code from that operation.
+*/
+static int tcl_thread_result(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ const char *zName;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ switch( threadset[i].rc ){
+ case SQLITE_OK: zName = "SQLITE_OK"; break;
+ case SQLITE_ERROR: zName = "SQLITE_ERROR"; break;
+ case SQLITE_PERM: zName = "SQLITE_PERM"; break;
+ case SQLITE_ABORT: zName = "SQLITE_ABORT"; break;
+ case SQLITE_BUSY: zName = "SQLITE_BUSY"; break;
+ case SQLITE_LOCKED: zName = "SQLITE_LOCKED"; break;
+ case SQLITE_NOMEM: zName = "SQLITE_NOMEM"; break;
+ case SQLITE_READONLY: zName = "SQLITE_READONLY"; break;
+ case SQLITE_INTERRUPT: zName = "SQLITE_INTERRUPT"; break;
+ case SQLITE_IOERR: zName = "SQLITE_IOERR"; break;
+ case SQLITE_CORRUPT: zName = "SQLITE_CORRUPT"; break;
+ case SQLITE_FULL: zName = "SQLITE_FULL"; break;
+ case SQLITE_CANTOPEN: zName = "SQLITE_CANTOPEN"; break;
+ case SQLITE_PROTOCOL: zName = "SQLITE_PROTOCOL"; break;
+ case SQLITE_EMPTY: zName = "SQLITE_EMPTY"; break;
+ case SQLITE_SCHEMA: zName = "SQLITE_SCHEMA"; break;
+ case SQLITE_CONSTRAINT: zName = "SQLITE_CONSTRAINT"; break;
+ case SQLITE_MISMATCH: zName = "SQLITE_MISMATCH"; break;
+ case SQLITE_MISUSE: zName = "SQLITE_MISUSE"; break;
+ case SQLITE_NOLFS: zName = "SQLITE_NOLFS"; break;
+ case SQLITE_AUTH: zName = "SQLITE_AUTH"; break;
+ case SQLITE_FORMAT: zName = "SQLITE_FORMAT"; break;
+ case SQLITE_RANGE: zName = "SQLITE_RANGE"; break;
+ case SQLITE_ROW: zName = "SQLITE_ROW"; break;
+ case SQLITE_DONE: zName = "SQLITE_DONE"; break;
+ default: zName = "SQLITE_Unknown"; break;
+ }
+ Tcl_AppendResult(interp, zName, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: thread_error ID
+**
+** Wait on the most recent operation to complete, then return the
+** error string.
+*/
+static int tcl_thread_error(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ Tcl_AppendResult(interp, threadset[i].zErr, 0);
+ return TCL_OK;
+}
+
+/*
+** This procedure runs in the thread to compile an SQL statement.
+*/
+static void do_compile(Thread *p){
+ if( p->db==0 ){
+ p->zErr = p->zStaticErr = "no database is open";
+ p->rc = SQLITE_ERROR;
+ return;
+ }
+ if( p->pStmt ){
+ sqlite3_finalize(p->pStmt);
+ p->pStmt = 0;
+ }
+ p->rc = sqlite3_prepare(p->db, p->zArg, -1, &p->pStmt, 0);
+}
+
+/*
+** Usage: thread_compile ID SQL
+**
+** Compile a new virtual machine.
+*/
+static int tcl_thread_compile(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID SQL", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ threadset[i].xOp = do_compile;
+ sqliteFree(threadset[i].zArg);
+ threadset[i].zArg = sqliteStrDup(argv[2]);
+ threadset[i].opnum++;
+ return TCL_OK;
+}
+
+/*
+** This procedure runs in the thread to step the virtual machine.
+*/
+static void do_step(Thread *p){
+ int i;
+ if( p->pStmt==0 ){
+ p->zErr = p->zStaticErr = "no virtual machine available";
+ p->rc = SQLITE_ERROR;
+ return;
+ }
+ p->rc = sqlite3_step(p->pStmt);
+ if( p->rc==SQLITE_ROW ){
+ p->argc = sqlite3_column_count(p->pStmt);
+ for(i=0; i<sqlite3_data_count(p->pStmt); i++){
+ p->argv[i] = (char*)sqlite3_column_text(p->pStmt, i);
+ }
+ for(i=0; i<p->argc; i++){
+ p->colv[i] = sqlite3_column_name(p->pStmt, i);
+ }
+ }
+}
+
+/*
+** Usage: thread_step ID
+**
+** Advance the virtual machine by one step
+*/
+static int tcl_thread_step(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+       " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ threadset[i].xOp = do_step;
+ threadset[i].opnum++;
+ return TCL_OK;
+}
+
+/*
+** This procedure runs in the thread to finalize a virtual machine.
+*/
+static void do_finalize(Thread *p){
+ if( p->pStmt==0 ){
+ p->zErr = p->zStaticErr = "no virtual machine available";
+ p->rc = SQLITE_ERROR;
+ return;
+ }
+ p->rc = sqlite3_finalize(p->pStmt);
+ p->pStmt = 0;
+}
+
+/*
+** Usage: thread_finalize ID
+**
+** Finalize the virtual machine.
+*/
+static int tcl_thread_finalize(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+       " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ threadset[i].xOp = do_finalize;
+ sqliteFree(threadset[i].zArg);
+ threadset[i].zArg = 0;
+ threadset[i].opnum++;
+ return TCL_OK;
+}
+
+/*
+** Usage: thread_swap ID ID
+**
+** Interchange the sqlite3* pointer between two threads.
+*/
+static int tcl_thread_swap(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i, j;
+ sqlite3 *temp;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID1 ID2", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ j = parse_thread_id(interp, argv[2]);
+ if( j<0 ) return TCL_ERROR;
+ if( !threadset[j].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[j]);
+ temp = threadset[i].db;
+ threadset[i].db = threadset[j].db;
+ threadset[j].db = temp;
+ return TCL_OK;
+}
+
+/*
+** Usage: thread_db_get ID
+**
+** Return the database connection pointer for the given thread. Then
+** remove the pointer from the thread itself. Afterwards, the thread
+** can be stopped and the connection can be used by the main thread.
+*/
+static int tcl_thread_db_get(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ char zBuf[100];
+ extern int sqlite3TestMakePointerStr(Tcl_Interp*, char*, void*);
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ sqlite3TestMakePointerStr(interp, zBuf, threadset[i].db);
+ threadset[i].db = 0;
+ Tcl_AppendResult(interp, zBuf, (char*)0);
+ return TCL_OK;
+}
+
+/*
+** Usage: thread_stmt_get ID
+**
+** Return the database stmt pointer for the given thread. Then
+** remove the pointer from the thread itself.
+*/
+static int tcl_thread_stmt_get(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ char zBuf[100];
+ extern int sqlite3TestMakePointerStr(Tcl_Interp*, char*, void*);
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_thread_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ thread_wait(&threadset[i]);
+ sqlite3TestMakePointerStr(interp, zBuf, threadset[i].pStmt);
+ threadset[i].pStmt = 0;
+ Tcl_AppendResult(interp, zBuf, (char*)0);
+ return TCL_OK;
+}
+
+/*
+** Register commands with the TCL interpreter.
+*/
+int Sqlitetest4_Init(Tcl_Interp *interp){
+ static struct {
+ char *zName;
+ Tcl_CmdProc *xProc;
+ } aCmd[] = {
+ { "thread_create", (Tcl_CmdProc*)tcl_thread_create },
+ { "thread_wait", (Tcl_CmdProc*)tcl_thread_wait },
+ { "thread_halt", (Tcl_CmdProc*)tcl_thread_halt },
+ { "thread_argc", (Tcl_CmdProc*)tcl_thread_argc },
+ { "thread_argv", (Tcl_CmdProc*)tcl_thread_argv },
+ { "thread_colname", (Tcl_CmdProc*)tcl_thread_colname },
+ { "thread_result", (Tcl_CmdProc*)tcl_thread_result },
+ { "thread_error", (Tcl_CmdProc*)tcl_thread_error },
+ { "thread_compile", (Tcl_CmdProc*)tcl_thread_compile },
+ { "thread_step", (Tcl_CmdProc*)tcl_thread_step },
+ { "thread_finalize", (Tcl_CmdProc*)tcl_thread_finalize },
+ { "thread_swap", (Tcl_CmdProc*)tcl_thread_swap },
+ { "thread_db_get", (Tcl_CmdProc*)tcl_thread_db_get },
+ { "thread_stmt_get", (Tcl_CmdProc*)tcl_thread_stmt_get },
+ };
+ int i;
+
+ for(i=0; i<sizeof(aCmd)/sizeof(aCmd[0]); i++){
+ Tcl_CreateCommand(interp, aCmd[i].zName, aCmd[i].xProc, 0, 0);
+ }
+ return TCL_OK;
+}
+#else
+int Sqlitetest4_Init(Tcl_Interp *interp){ return TCL_OK; }
+#endif /* OS_UNIX */
Added: freeswitch/trunk/libs/sqlite/src/test5.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test5.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,218 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Code for testing the utf.c module in SQLite. This code
+** is not included in the SQLite library. It is used for automated
+** testing of the SQLite library. Specifically, the code in this file
+** is used for testing the SQLite routines for converting between
+** the various supported unicode encodings.
+**
+** $Id: test5.c,v 1.15 2005/12/09 20:21:59 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "vdbeInt.h"
+#include "os.h" /* to get SQLITE_BIGENDIAN */
+#include "tcl.h"
+#include <stdlib.h>
+#include <string.h>
+
+/*
+** The first argument is a TCL UTF-8 string. Return the byte array
+** object with the encoded representation of the string, including
+** the NULL terminator.
+*/
+static int binarize(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ int len;
+ char *bytes;
+ Tcl_Obj *pRet;
+ assert(objc==2);
+
+ bytes = Tcl_GetStringFromObj(objv[1], &len);
+ pRet = Tcl_NewByteArrayObj((u8*)bytes, len+1);
+ Tcl_SetObjResult(interp, pRet);
+ return TCL_OK;
+}
+
+/*
+** Usage: test_value_overhead <repeat-count> <do-calls>.
+**
+** This routine is used to test the overhead of calls to
+** sqlite3_value_text(), on a value that contains a UTF-8 string. The idea
+** is to figure out whether or not it is a problem to use sqlite3_value
+** structures with collation sequence functions.
+**
+** If <do-calls> is 0, then the calls to sqlite3_value_text() are not
+** actually made.
+*/
+static int test_value_overhead(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ int do_calls;
+ int repeat_count;
+ int i;
+ Mem val;
+ const char *zVal;
+
+ if( objc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0), " <repeat-count> <do-calls>", 0);
+ return TCL_ERROR;
+ }
+
+ if( Tcl_GetIntFromObj(interp, objv[1], &repeat_count) ) return TCL_ERROR;
+ if( Tcl_GetIntFromObj(interp, objv[2], &do_calls) ) return TCL_ERROR;
+
+ val.flags = MEM_Str|MEM_Term|MEM_Static;
+ val.z = "hello world";
+ val.type = SQLITE_TEXT;
+ val.enc = SQLITE_UTF8;
+
+ for(i=0; i<repeat_count; i++){
+ if( do_calls ){
+ zVal = (char*)sqlite3_value_text(&val);
+ }
+ }
+
+ return TCL_OK;
+}
+
+static u8 name_to_enc(Tcl_Interp *interp, Tcl_Obj *pObj){
+ struct EncName {
+ char *zName;
+ u8 enc;
+ } encnames[] = {
+ { "UTF8", SQLITE_UTF8 },
+ { "UTF16LE", SQLITE_UTF16LE },
+ { "UTF16BE", SQLITE_UTF16BE },
+ { "UTF16", SQLITE_UTF16NATIVE },
+ { 0, 0 }
+ };
+ struct EncName *pEnc;
+ char *z = Tcl_GetString(pObj);
+ for(pEnc=&encnames[0]; pEnc->zName; pEnc++){
+ if( 0==sqlite3StrICmp(z, pEnc->zName) ){
+ break;
+ }
+ }
+ if( !pEnc->enc ){
+ Tcl_AppendResult(interp, "No such encoding: ", z, 0);
+ }
+ return pEnc->enc;
+}
+
+/*
+** Usage: test_translate <string/blob> <from enc> <to enc> ?<transient>?
+**
+*/
+static int test_translate(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ u8 enc_from;
+ u8 enc_to;
+ sqlite3_value *pVal;
+
+ char *z;
+ int len;
+ void (*xDel)(void *p) = SQLITE_STATIC;
+
+ if( objc!=4 && objc!=5 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"",
+ Tcl_GetStringFromObj(objv[0], 0),
+ " <string/blob> <from enc> <to enc>", 0
+ );
+ return TCL_ERROR;
+ }
+ if( objc==5 ){
+ xDel = sqlite3FreeX;
+ }
+
+ enc_from = name_to_enc(interp, objv[2]);
+ if( !enc_from ) return TCL_ERROR;
+ enc_to = name_to_enc(interp, objv[3]);
+ if( !enc_to ) return TCL_ERROR;
+
+ pVal = sqlite3ValueNew();
+
+ if( enc_from==SQLITE_UTF8 ){
+ z = Tcl_GetString(objv[1]);
+ if( objc==5 ){
+ z = sqliteStrDup(z);
+ }
+ sqlite3ValueSetStr(pVal, -1, z, enc_from, xDel);
+ }else{
+ z = (char*)Tcl_GetByteArrayFromObj(objv[1], &len);
+ if( objc==5 ){
+ char *zTmp = z;
+ z = sqliteMalloc(len);
+ memcpy(z, zTmp, len);
+ }
+ sqlite3ValueSetStr(pVal, -1, z, enc_from, xDel);
+ }
+
+ z = (char *)sqlite3ValueText(pVal, enc_to);
+ len = sqlite3ValueBytes(pVal, enc_to) + (enc_to==SQLITE_UTF8?1:2);
+ Tcl_SetObjResult(interp, Tcl_NewByteArrayObj((u8*)z, len));
+
+ sqlite3ValueFree(pVal);
+
+ return TCL_OK;
+}
+
+/*
+** Usage: translate_selftest
+**
+** Call sqlite3utfSelfTest() to run the internal tests for unicode
+** translation. If there is a problem an assert() will fail.
+**/
+void sqlite3utfSelfTest();
+static int test_translate_selftest(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_UTF16
+ sqlite3utfSelfTest();
+#endif
+ return SQLITE_OK;
+}
+
+
+/*
+** Register commands with the TCL interpreter.
+*/
+int Sqlitetest5_Init(Tcl_Interp *interp){
+ static struct {
+ char *zName;
+ Tcl_ObjCmdProc *xProc;
+ } aCmd[] = {
+ { "binarize", (Tcl_ObjCmdProc*)binarize },
+ { "test_value_overhead", (Tcl_ObjCmdProc*)test_value_overhead },
+ { "test_translate", (Tcl_ObjCmdProc*)test_translate },
+ { "translate_selftest", (Tcl_ObjCmdProc*)test_translate_selftest},
+ };
+ int i;
+ for(i=0; i<sizeof(aCmd)/sizeof(aCmd[0]); i++){
+ Tcl_CreateObjCommand(interp, aCmd[i].zName, aCmd[i].xProc, 0, 0);
+ }
+ return SQLITE_OK;
+}
Added: freeswitch/trunk/libs/sqlite/src/test6.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test6.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,554 @@
+/*
+** 2004 May 22
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+******************************************************************************
+**
+** This file contains code that modifies the OS layer in order to simulate
+** the effect on the database file of an OS crash or power failure. This
+** is used to test the ability of SQLite to recover from those situations.
+*/
+#if SQLITE_TEST /* This file is used for the testing only */
+#include "sqliteInt.h"
+#include "os.h"
+#include "tcl.h"
+
+#ifndef SQLITE_OMIT_DISKIO /* This file is a no-op if disk I/O is disabled */
+
+/*
+** crashFile is a subclass of OsFile that is tailored for the
+** crash test module.
+*/
+typedef struct crashFile crashFile;
+struct crashFile {
+ IoMethod const *pMethod; /* Must be first */
+ u8 **apBlk; /* Array of blocks that have been written to. */
+ int nBlk; /* Number of entries in the apBlk[] array. */
+ i64 offset; /* Next character to be read from the file */
+ int nMaxWrite; /* Largest offset written to. */
+ char *zName; /* File name */
+ OsFile *pBase; /* The real file */
+ crashFile *pNext; /* Next in a list of them all */
+};
+
+/*
+** Size of a simulated disk block
+*/
+#define BLOCKSIZE 512
+#define BLOCK_OFFSET(x) ((x) * BLOCKSIZE)
+
+
+/*
+** The following variables control when a simulated crash occurs.
+**
+** If iCrashDelay is non-zero, then zCrashFile contains the (full path) name of
+** a file that SQLite will call sqlite3OsSync() on. Each time this happens
+** iCrashDelay is decremented. If iCrashDelay is zero after being
+** decremented, a "crash" occurs during the sync() operation.
+**
+** In other words, a crash occurs the iCrashDelay'th time zCrashFile is
+** synced.
+*/
+static int iCrashDelay = 0;
+static char zCrashFile[500];
+
+/*
+** Set the value of the two crash parameters.
+*/
+static void setCrashParams(int iDelay, char const *zFile){
+ sqlite3OsEnterMutex();
+ assert( strlen(zFile)<sizeof(zCrashFile) );
+ strcpy(zCrashFile, zFile);
+ iCrashDelay = iDelay;
+ sqlite3OsLeaveMutex();
+}
+
+/*
+** File zPath is being sync()ed. Return non-zero if this should
+** cause a crash.
+*/
+static int crashRequired(char const *zPath){
+ int r;
+ int n;
+ sqlite3OsEnterMutex();
+ n = strlen(zCrashFile);
+ if( zCrashFile[n-1]=='*' ){
+ n--;
+ }else if( strlen(zPath)>n ){
+ n = strlen(zPath);
+ }
+ r = 0;
+ if( iCrashDelay>0 && strncmp(zPath, zCrashFile, n)==0 ){
+ iCrashDelay--;
+ if( iCrashDelay<=0 ){
+ r = 1;
+ }
+ }
+ sqlite3OsLeaveMutex();
+ return r;
+}
+
+/*
+** A list of all open files.
+*/
+static crashFile *pAllFiles = 0;
+
+/* Forward reference */
+static void initFile(OsFile **pId, char const *zName, OsFile *pBase);
+
+/*
+** Undo the work done by initFile. Delete the OsFile structure
+** and unlink the structure from the pAllFiles list.
+*/
+static void closeFile(crashFile **pId){
+ crashFile *pFile = *pId;
+ if( pFile==pAllFiles ){
+ pAllFiles = pFile->pNext;
+ }else{
+ crashFile *p;
+ for(p=pAllFiles; p->pNext!=pFile; p=p->pNext ){
+ assert( p );
+ }
+ p->pNext = pFile->pNext;
+ }
+ sqliteFree(*pId);
+ *pId = 0;
+}
+
+/*
+** Read block 'blk' off of the real disk file and into the cache of pFile.
+*/
+static int readBlockIntoCache(crashFile *pFile, int blk){
+ if( blk>=pFile->nBlk ){
+ int n = ((pFile->nBlk * 2) + 100 + blk);
+ /* if( pFile->nBlk==0 ){ printf("DIRTY %s\n", pFile->zName); } */
+ pFile->apBlk = (u8 **)sqliteRealloc(pFile->apBlk, n * sizeof(u8*));
+ if( !pFile->apBlk ) return SQLITE_NOMEM;
+ memset(&pFile->apBlk[pFile->nBlk], 0, (n - pFile->nBlk)*sizeof(u8*));
+ pFile->nBlk = n;
+ }
+
+ if( !pFile->apBlk[blk] ){
+ i64 filesize;
+ int rc;
+
+ u8 *p = sqliteMalloc(BLOCKSIZE);
+ if( !p ) return SQLITE_NOMEM;
+ pFile->apBlk[blk] = p;
+
+ rc = sqlite3OsFileSize(pFile->pBase, &filesize);
+ if( rc!=SQLITE_OK ) return rc;
+
+ if( BLOCK_OFFSET(blk)<filesize ){
+ int len = BLOCKSIZE;
+ rc = sqlite3OsSeek(pFile->pBase, blk*BLOCKSIZE);
+ if( BLOCK_OFFSET(blk+1)>filesize ){
+ len = filesize - BLOCK_OFFSET(blk);
+ }
+ if( rc!=SQLITE_OK ) return rc;
+ rc = sqlite3OsRead(pFile->pBase, p, len);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+ }
+
+ return SQLITE_OK;
+}
+
+/*
+** Write the cache of pFile to disk. If crash is non-zero, randomly
+** skip blocks when writing. The cache is deleted before returning.
+*/
+static int writeCache2(crashFile *pFile, int crash){
+ int i;
+ int nMax = pFile->nMaxWrite;
+ int rc = SQLITE_OK;
+
+ for(i=0; i<pFile->nBlk; i++){
+ u8 *p = pFile->apBlk[i];
+ if( p ){
+ int skip = 0;
+ int trash = 0;
+ if( crash ){
+ char random;
+ sqlite3Randomness(1, &random);
+ if( random & 0x01 ){
+ if( random & 0x02 ){
+ trash = 1;
+#ifdef TRACE_WRITECACHE
+printf("Trashing block %d of %s\n", i, pFile->zName);
+#endif
+ }else{
+ skip = 1;
+#ifdef TRACE_WRITECACHE
+printf("Skiping block %d of %s\n", i, pFile->zName);
+#endif
+ }
+ }else{
+#ifdef TRACE_WRITECACHE
+printf("Writing block %d of %s\n", i, pFile->zName);
+#endif
+ }
+ }
+ if( rc==SQLITE_OK ){
+ rc = sqlite3OsSeek(pFile->pBase, BLOCK_OFFSET(i));
+ }
+ if( rc==SQLITE_OK && !skip ){
+ int len = BLOCKSIZE;
+ if( BLOCK_OFFSET(i+1)>nMax ){
+ len = nMax-BLOCK_OFFSET(i);
+ }
+ if( len>0 ){
+ if( trash ){
+ sqlite3Randomness(len, p);
+ }
+ rc = sqlite3OsWrite(pFile->pBase, p, len);
+ }
+ }
+ sqliteFree(p);
+ }
+ }
+ sqliteFree(pFile->apBlk);
+ pFile->nBlk = 0;
+ pFile->apBlk = 0;
+ pFile->nMaxWrite = 0;
+ return rc;
+}
+
+/*
+** Write the cache to disk.
+*/
+static int writeCache(crashFile *pFile){
+ if( pFile->apBlk ){
+ int c = crashRequired(pFile->zName);
+ if( c ){
+ crashFile *p;
+#ifdef TRACE_WRITECACHE
+ printf("\nCrash during sync of %s\n", pFile->zName);
+#endif
+ for(p=pAllFiles; p; p=p->pNext){
+ writeCache2(p, 1);
+ }
+ exit(-1);
+ }else{
+ return writeCache2(pFile, 0);
+ }
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Close the file.
+*/
+static int crashClose(OsFile **pId){
+ crashFile *pFile = (crashFile*)*pId;
+ if( pFile ){
+ /* printf("CLOSE %s (%d blocks)\n", pFile->zName, pFile->nBlk); */
+ writeCache(pFile);
+ sqlite3OsClose(&pFile->pBase);
+ }
+ closeFile(&pFile);
+ *pId = 0;
+ return SQLITE_OK;
+}
+
+static int crashSeek(OsFile *id, i64 offset){
+ ((crashFile*)id)->offset = offset;
+ return SQLITE_OK;
+}
+
+static int crashRead(OsFile *id, void *pBuf, int amt){
+ i64 offset; /* The current offset from the start of the file */
+ i64 end; /* The byte just past the last byte read */
+ int blk; /* Block number the read starts on */
+ int i;
+ u8 *zCsr;
+ int rc = SQLITE_OK;
+ crashFile *pFile = (crashFile*)id;
+
+ offset = pFile->offset;
+ end = offset+amt;
+ blk = (offset/BLOCKSIZE);
+
+ zCsr = (u8 *)pBuf;
+ for(i=blk; i*BLOCKSIZE<end; i++){
+ int off = 0;
+ int len = 0;
+
+
+ if( BLOCK_OFFSET(i) < offset ){
+ off = offset-BLOCK_OFFSET(i);
+ }
+ len = BLOCKSIZE - off;
+ if( BLOCK_OFFSET(i+1) > end ){
+ len = len - (BLOCK_OFFSET(i+1)-end);
+ }
+
+ if( i<pFile->nBlk && pFile->apBlk[i]){
+ u8 *pBlk = pFile->apBlk[i];
+ memcpy(zCsr, &pBlk[off], len);
+ }else{
+ rc = sqlite3OsSeek(pFile->pBase, BLOCK_OFFSET(i) + off);
+ if( rc!=SQLITE_OK ) return rc;
+ rc = sqlite3OsRead(pFile->pBase, zCsr, len);
+ if( rc!=SQLITE_OK ) return rc;
+ }
+
+ zCsr += len;
+ }
+ assert( zCsr==&((u8 *)pBuf)[amt] );
+
+ pFile->offset = end;
+ return rc;
+}
+
+static int crashWrite(OsFile *id, const void *pBuf, int amt){
+ i64 offset; /* The current offset from the start of the file */
+ i64 end; /* The byte just past the last byte written */
+ int blk; /* Block number the write starts on */
+ int i;
+ const u8 *zCsr;
+ int rc = SQLITE_OK;
+ crashFile *pFile = (crashFile*)id;
+
+ offset = pFile->offset;
+ end = offset+amt;
+ blk = (offset/BLOCKSIZE);
+
+ zCsr = (u8 *)pBuf;
+ for(i=blk; i*BLOCKSIZE<end; i++){
+ u8 *pBlk;
+ int off = 0;
+ int len = 0;
+
+ /* Make sure the block is in the cache */
+ rc = readBlockIntoCache(pFile, i);
+ if( rc!=SQLITE_OK ) return rc;
+
+ /* Write into the cache */
+ pBlk = pFile->apBlk[i];
+ assert( pBlk );
+
+ if( BLOCK_OFFSET(i) < offset ){
+ off = offset-BLOCK_OFFSET(i);
+ }
+ len = BLOCKSIZE - off;
+ if( BLOCK_OFFSET(i+1) > end ){
+ len = len - (BLOCK_OFFSET(i+1)-end);
+ }
+ memcpy(&pBlk[off], zCsr, len);
+ zCsr += len;
+ }
+ if( pFile->nMaxWrite<end ){
+ pFile->nMaxWrite = end;
+ }
+ assert( zCsr==&((u8 *)pBuf)[amt] );
+ pFile->offset = end;
+ return rc;
+}
+
+/*
+** Sync the file. First flush the write-cache to disk, then call the
+** real sync() function.
+*/
+static int crashSync(OsFile *id, int dataOnly){
+ return writeCache((crashFile*)id);
+}
+
+/*
+** Truncate the file. Set the internal crashFile.nMaxWrite variable to the new
+** file size to ensure that nothing in the write-cache past this point
+** is written to disk.
+*/
+static int crashTruncate(OsFile *id, i64 nByte){
+ crashFile *pFile = (crashFile*)id;
+ pFile->nMaxWrite = nByte;
+ return sqlite3OsTruncate(pFile->pBase, nByte);
+}
+
+/*
+** Return the size of the file. If the cache contains a write that extended
+** the file, then return this size instead of the on-disk size.
+*/
+static int crashFileSize(OsFile *id, i64 *pSize){
+ crashFile *pFile = (crashFile*)id;
+ int rc = sqlite3OsFileSize(pFile->pBase, pSize);
+ if( rc==SQLITE_OK && pSize && *pSize<pFile->nMaxWrite ){
+ *pSize = pFile->nMaxWrite;
+ }
+ return rc;
+}
+
+/*
+** Set this global variable to 1 to enable crash testing.
+*/
+int sqlite3CrashTestEnable = 0;
+
+/*
+** These are the three functions used to open files. All that is required is to
+** initialise the os_test.c specific fields and then call the corresponding
+** os_unix.c function to really open the file.
+*/
+int sqlite3CrashOpenReadWrite(const char *zFilename, OsFile **pId,int *pRdonly){
+ OsFile *pBase = 0;
+ int rc;
+
+ sqlite3CrashTestEnable = 0;
+ rc = sqlite3OsOpenReadWrite(zFilename, &pBase, pRdonly);
+ sqlite3CrashTestEnable = 1;
+ if( !rc ){
+ initFile(pId, zFilename, pBase);
+ }
+ return rc;
+}
+int sqlite3CrashOpenExclusive(const char *zFilename, OsFile **pId, int delFlag){
+ OsFile *pBase = 0;
+ int rc;
+
+ sqlite3CrashTestEnable = 0;
+ rc = sqlite3OsOpenExclusive(zFilename, &pBase, delFlag);
+ sqlite3CrashTestEnable = 1;
+ if( !rc ){
+ initFile(pId, zFilename, pBase);
+ }
+ return rc;
+}
+int sqlite3CrashOpenReadOnly(const char *zFilename, OsFile **pId, int NotUsed){
+ OsFile *pBase = 0;
+ int rc;
+
+ sqlite3CrashTestEnable = 0;
+ rc = sqlite3OsOpenReadOnly(zFilename, &pBase);
+ sqlite3CrashTestEnable = 1;
+ if( !rc ){
+ initFile(pId, zFilename, pBase);
+ }
+ return rc;
+}
+
+/*
+** OpenDirectory is a no-op
+*/
+static int crashOpenDir(OsFile *id, const char *zName){
+ return SQLITE_OK;
+}
+
+/*
+** Locking primitives are passed through into the underlying
+** file descriptor.
+*/
+int crashLock(OsFile *id, int lockType){
+ return sqlite3OsLock(((crashFile*)id)->pBase, lockType);
+}
+int crashUnlock(OsFile *id, int lockType){
+ return sqlite3OsUnlock(((crashFile*)id)->pBase, lockType);
+}
+int crashCheckReservedLock(OsFile *id){
+ return sqlite3OsCheckReservedLock(((crashFile*)id)->pBase);
+}
+void crashSetFullSync(OsFile *id, int setting){
+ return; /* This is a no-op */
+}
+int crashLockState(OsFile *id){
+ return sqlite3OsLockState(((crashFile*)id)->pBase);
+}
+
+/*
+** Return the underlying file handle.
+*/
+int crashFileHandle(OsFile *id){
+#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG)
+ return sqlite3OsFileHandle(((crashFile*)id)->pBase);
+#endif
+ return 0;
+}
+
+/*
+** This vector defines all the methods that can operate on an OsFile
+** for the crash tester.
+*/
+static const IoMethod crashIoMethod = {
+ crashClose,
+ crashOpenDir,
+ crashRead,
+ crashWrite,
+ crashSeek,
+ crashTruncate,
+ crashSync,
+ crashSetFullSync,
+ crashFileHandle,
+ crashFileSize,
+ crashLock,
+ crashUnlock,
+ crashLockState,
+ crashCheckReservedLock,
+};
+
+
+/*
+** Initialise the os_test.c specific fields of pFile.
+*/
+static void initFile(OsFile **pId, char const *zName, OsFile *pBase){
+ crashFile *pFile = sqliteMalloc(sizeof(crashFile) + strlen(zName)+1);
+ pFile->pMethod = &crashIoMethod;
+ pFile->nMaxWrite = 0;
+ pFile->offset = 0;
+ pFile->nBlk = 0;
+ pFile->apBlk = 0;
+ pFile->zName = (char *)(&pFile[1]);
+ strcpy(pFile->zName, zName);
+ pFile->pBase = pBase;
+ pFile->pNext = pAllFiles;
+ pAllFiles = pFile;
+ *pId = (OsFile*)pFile;
+}
+
+
+/*
+** tclcmd: sqlite3_crashparams DELAY CRASHFILE
+**
+** This procedure implements a TCL command that enables crash testing
+** in testfixture. Once enabled, crash testing cannot be disabled.
+*/
+static int crashParamsObjCmd(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ int delay;
+ const char *zFile;
+ int nFile;
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "DELAY CRASHFILE");
+ return TCL_ERROR;
+ }
+ if( Tcl_GetIntFromObj(interp, objv[1], &delay) ) return TCL_ERROR;
+ zFile = Tcl_GetStringFromObj(objv[2], &nFile);
+ if( nFile>=sizeof(zCrashFile)-1 ){
+ Tcl_AppendResult(interp, "crash file name too big", 0);
+ return TCL_ERROR;
+ }
+ setCrashParams(delay, zFile);
+ sqlite3CrashTestEnable = 1;
+ return TCL_OK;
+}
+
+#endif /* SQLITE_OMIT_DISKIO */
+
+/*
+** This procedure registers the TCL procedures defined in this file.
+*/
+int Sqlitetest6_Init(Tcl_Interp *interp){
+#ifndef SQLITE_OMIT_DISKIO
+ Tcl_CreateObjCommand(interp, "sqlite3_crashparams", crashParamsObjCmd, 0, 0);
+#endif
+ return TCL_OK;
+}
+
+#endif /* SQLITE_TEST */
Added: freeswitch/trunk/libs/sqlite/src/test7.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test7.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,724 @@
+/*
+** 2006 January 09
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Code for testing the client/server version of the SQLite library.
+** Derived from test4.c.
+**
+** $Id: test7.c,v 1.4 2006/03/22 22:10:08 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "tcl.h"
+#include "os.h"
+
+/*
+** This test only works on UNIX with a THREADSAFE build that includes
+** the SQLITE_SERVER option.
+*/
+#if OS_UNIX && defined(THREADSAFE) && THREADSAFE==1 && \
+ defined(SQLITE_SERVER) && !defined(SQLITE_OMIT_SHARED_CACHE)
+
+#include <stdlib.h>
+#include <string.h>
+#include <pthread.h>
+#include <sched.h>
+#include <ctype.h>
+
+/*
+** Interfaces defined in server.c
+*/
+int sqlite3_client_open(const char*, sqlite3**);
+int sqlite3_client_prepare(sqlite3*,const char*,int,
+ sqlite3_stmt**,const char**);
+int sqlite3_client_step(sqlite3_stmt*);
+int sqlite3_client_reset(sqlite3_stmt*);
+int sqlite3_client_finalize(sqlite3_stmt*);
+int sqlite3_client_close(sqlite3*);
+int sqlite3_server_start(void);
+int sqlite3_server_stop(void);
+
+/*
+** Each thread is controlled by an instance of the following
+** structure.
+*/
+typedef struct Thread Thread;
+struct Thread {
+ /* The first group of fields are writable by the supervisor thread
+ ** and read-only to the client threads
+ */
+ char *zFilename; /* Name of database file */
+ void (*xOp)(Thread*); /* next operation to do */
+ char *zArg; /* argument usable by xOp */
+ volatile int opnum; /* Operation number */
+ volatile int busy; /* True if this thread is in use */
+
+ /* The next group of fields are writable by the client threads
+ ** but read-only to the supervisor thread.
+ */
+ volatile int completed; /* Number of operations completed */
+ sqlite3 *db; /* Open database */
+ sqlite3_stmt *pStmt; /* Pending operation */
+ char *zErr; /* operation error */
+ char *zStaticErr; /* Static error message */
+ int rc; /* operation return code */
+ int argc; /* number of columns in result */
+ const char *argv[100]; /* result columns */
+ const char *colv[100]; /* result column names */
+};
+
+/*
+** There can be as many as 26 threads running at once. Each is named
+** by a capital letter: A, B, C, ..., Y, Z.
+*/
+#define N_THREAD 26
+static Thread threadset[N_THREAD];
+
+/*
+** The main loop for a thread. Threads use busy waiting.
+*/
+static void *client_main(void *pArg){
+ Thread *p = (Thread*)pArg;
+ if( p->db ){
+ sqlite3_client_close(p->db);
+ }
+ sqlite3_client_open(p->zFilename, &p->db);
+ if( SQLITE_OK!=sqlite3_errcode(p->db) ){
+ p->zErr = strdup(sqlite3_errmsg(p->db));
+ sqlite3_client_close(p->db);
+ p->db = 0;
+ }
+ p->pStmt = 0;
+ p->completed = 1;
+ while( p->opnum<=p->completed ) sched_yield();
+ while( p->xOp ){
+ if( p->zErr && p->zErr!=p->zStaticErr ){
+ sqlite3_free(p->zErr);
+ p->zErr = 0;
+ }
+ (*p->xOp)(p);
+ p->completed++;
+ while( p->opnum<=p->completed ) sched_yield();
+ }
+ if( p->pStmt ){
+ sqlite3_client_finalize(p->pStmt);
+ p->pStmt = 0;
+ }
+ if( p->db ){
+ sqlite3_client_close(p->db);
+ p->db = 0;
+ }
+ if( p->zErr && p->zErr!=p->zStaticErr ){
+ sqlite3_free(p->zErr);
+ p->zErr = 0;
+ }
+ p->completed++;
+ sqlite3_thread_cleanup();
+ return 0;
+}
+
+/*
+** Get a thread ID which is an upper case letter. Return the index.
+** If the argument is not a valid thread ID put an error message in
+** the interpreter and return -1.
+*/
+static int parse_client_id(Tcl_Interp *interp, const char *zArg){
+ if( zArg==0 || zArg[0]==0 || zArg[1]!=0 || !isupper((unsigned char)zArg[0]) ){
+ Tcl_AppendResult(interp, "thread ID must be an upper case letter", 0);
+ return -1;
+ }
+ return zArg[0] - 'A';
+}
+
+/*
+** Usage: client_create NAME FILENAME
+**
+** NAME should be an upper case letter. Start the thread running with
+** an open connection to the given database.
+*/
+static int tcl_client_create(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ pthread_t x;
+ int rc;
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID FILENAME", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( threadset[i].busy ){
+ Tcl_AppendResult(interp, "thread ", argv[1], " is already running", 0);
+ return TCL_ERROR;
+ }
+ threadset[i].busy = 1;
+ sqliteFree(threadset[i].zFilename);
+ threadset[i].zFilename = sqliteStrDup(argv[2]);
+ threadset[i].opnum = 1;
+ threadset[i].completed = 0;
+ rc = pthread_create(&x, 0, client_main, &threadset[i]);
+ if( rc ){
+ Tcl_AppendResult(interp, "failed to create the thread", 0);
+ sqliteFree(threadset[i].zFilename);
+ threadset[i].busy = 0;
+ return TCL_ERROR;
+ }
+ pthread_detach(x);
+ sqlite3_server_start();
+ return TCL_OK;
+}
+
+/*
+** Wait for a thread to reach its idle state.
+*/
+static void client_wait(Thread *p){
+ while( p->opnum>p->completed ) sched_yield();
+}
+
+/*
+** Usage: client_wait ID
+**
+** Wait on thread ID to reach its idle state.
+*/
+static int tcl_client_wait(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[i]);
+ return TCL_OK;
+}
+
+/*
+** Stop a thread.
+*/
+static void stop_thread(Thread *p){
+ client_wait(p);
+ p->xOp = 0;
+ p->opnum++;
+ client_wait(p);
+ sqliteFree(p->zArg);
+ p->zArg = 0;
+ sqliteFree(p->zFilename);
+ p->zFilename = 0;
+ p->busy = 0;
+}
+
+/*
+** Usage: client_halt ID
+**
+** Cause a client thread to shut itself down. Wait for the shutdown to be
+** completed. If ID is "*" then stop all client threads.
+*/
+static int tcl_client_halt(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ if( argv[1][0]=='*' && argv[1][1]==0 ){
+ for(i=0; i<N_THREAD; i++){
+ if( threadset[i].busy ){
+ stop_thread(&threadset[i]);
+ }
+ }
+ }else{
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ stop_thread(&threadset[i]);
+ }
+
+ /* If no client threads are still running, also stop the server */
+ for(i=0; i<N_THREAD && threadset[i].busy==0; i++){}
+ if( i>=N_THREAD ){
+ sqlite3_server_stop();
+ }
+ return TCL_OK;
+}
+
+/*
+** Usage: client_argc ID
+**
+** Wait on the most recent client_step to complete, then return the
+** number of columns in the result set.
+*/
+static int tcl_client_argc(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ char zBuf[100];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[i]);
+ sprintf(zBuf, "%d", threadset[i].argc);
+ Tcl_AppendResult(interp, zBuf, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: client_argv ID N
+**
+** Wait on the most recent client_step to complete, then return the
+** value of the N-th column in the result set.
+*/
+static int tcl_client_argv(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ int n;
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID N", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ if( Tcl_GetInt(interp, argv[2], &n) ) return TCL_ERROR;
+ client_wait(&threadset[i]);
+ if( n<0 || n>=threadset[i].argc ){
+ Tcl_AppendResult(interp, "column number out of range", 0);
+ return TCL_ERROR;
+ }
+ Tcl_AppendResult(interp, threadset[i].argv[n], 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: client_colname ID N
+**
+** Wait on the most recent client_step to complete, then return the
+** name of the N-th column in the result set.
+*/
+static int tcl_client_colname(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ int n;
+
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID N", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ if( Tcl_GetInt(interp, argv[2], &n) ) return TCL_ERROR;
+ client_wait(&threadset[i]);
+ if( n<0 || n>=threadset[i].argc ){
+ Tcl_AppendResult(interp, "column number out of range", 0);
+ return TCL_ERROR;
+ }
+ Tcl_AppendResult(interp, threadset[i].colv[n], 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: client_result ID
+**
+** Wait on the most recent operation to complete, then return the
+** result code from that operation.
+*/
+static int tcl_client_result(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ const char *zName;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[i]);
+ switch( threadset[i].rc ){
+ case SQLITE_OK: zName = "SQLITE_OK"; break;
+ case SQLITE_ERROR: zName = "SQLITE_ERROR"; break;
+ case SQLITE_PERM: zName = "SQLITE_PERM"; break;
+ case SQLITE_ABORT: zName = "SQLITE_ABORT"; break;
+ case SQLITE_BUSY: zName = "SQLITE_BUSY"; break;
+ case SQLITE_LOCKED: zName = "SQLITE_LOCKED"; break;
+ case SQLITE_NOMEM: zName = "SQLITE_NOMEM"; break;
+ case SQLITE_READONLY: zName = "SQLITE_READONLY"; break;
+ case SQLITE_INTERRUPT: zName = "SQLITE_INTERRUPT"; break;
+ case SQLITE_IOERR: zName = "SQLITE_IOERR"; break;
+ case SQLITE_CORRUPT: zName = "SQLITE_CORRUPT"; break;
+ case SQLITE_FULL: zName = "SQLITE_FULL"; break;
+ case SQLITE_CANTOPEN: zName = "SQLITE_CANTOPEN"; break;
+ case SQLITE_PROTOCOL: zName = "SQLITE_PROTOCOL"; break;
+ case SQLITE_EMPTY: zName = "SQLITE_EMPTY"; break;
+ case SQLITE_SCHEMA: zName = "SQLITE_SCHEMA"; break;
+ case SQLITE_CONSTRAINT: zName = "SQLITE_CONSTRAINT"; break;
+ case SQLITE_MISMATCH: zName = "SQLITE_MISMATCH"; break;
+ case SQLITE_MISUSE: zName = "SQLITE_MISUSE"; break;
+ case SQLITE_NOLFS: zName = "SQLITE_NOLFS"; break;
+ case SQLITE_AUTH: zName = "SQLITE_AUTH"; break;
+ case SQLITE_FORMAT: zName = "SQLITE_FORMAT"; break;
+ case SQLITE_RANGE: zName = "SQLITE_RANGE"; break;
+ case SQLITE_ROW: zName = "SQLITE_ROW"; break;
+ case SQLITE_DONE: zName = "SQLITE_DONE"; break;
+ default: zName = "SQLITE_Unknown"; break;
+ }
+ Tcl_AppendResult(interp, zName, 0);
+ return TCL_OK;
+}
+
+/*
+** Usage: client_error ID
+**
+** Wait on the most recent operation to complete, then return the
+** error string.
+*/
+static int tcl_client_error(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[i]);
+ Tcl_AppendResult(interp, threadset[i].zErr, 0);
+ return TCL_OK;
+}
+
+/*
+** This procedure runs in the thread to compile an SQL statement.
+*/
+static void do_compile(Thread *p){
+ if( p->db==0 ){
+ p->zErr = p->zStaticErr = "no database is open";
+ p->rc = SQLITE_ERROR;
+ return;
+ }
+ if( p->pStmt ){
+ sqlite3_client_finalize(p->pStmt);
+ p->pStmt = 0;
+ }
+ p->rc = sqlite3_client_prepare(p->db, p->zArg, -1, &p->pStmt, 0);
+}
+
+/*
+** Usage: client_compile ID SQL
+**
+** Compile a new virtual machine.
+*/
+static int tcl_client_compile(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID SQL", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[i]);
+ threadset[i].xOp = do_compile;
+ sqliteFree(threadset[i].zArg);
+ threadset[i].zArg = sqliteStrDup(argv[2]);
+ threadset[i].opnum++;
+ return TCL_OK;
+}
+
+/*
+** This procedure runs in the thread to step the virtual machine.
+*/
+static void do_step(Thread *p){
+ int i;
+ if( p->pStmt==0 ){
+ p->zErr = p->zStaticErr = "no virtual machine available";
+ p->rc = SQLITE_ERROR;
+ return;
+ }
+ p->rc = sqlite3_client_step(p->pStmt);
+ if( p->rc==SQLITE_ROW ){
+ p->argc = sqlite3_column_count(p->pStmt);
+ for(i=0; i<sqlite3_data_count(p->pStmt); i++){
+ p->argv[i] = (char*)sqlite3_column_text(p->pStmt, i);
+ }
+ for(i=0; i<p->argc; i++){
+ p->colv[i] = sqlite3_column_name(p->pStmt, i);
+ }
+ }
+}
+
+/*
+** Usage: client_step ID
+**
+** Advance the virtual machine by one step
+*/
+static int tcl_client_step(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+       " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[i]);
+ threadset[i].xOp = do_step;
+ threadset[i].opnum++;
+ return TCL_OK;
+}
+
+/*
+** This procedure runs in the thread to finalize a virtual machine.
+*/
+static void do_finalize(Thread *p){
+ if( p->pStmt==0 ){
+ p->zErr = p->zStaticErr = "no virtual machine available";
+ p->rc = SQLITE_ERROR;
+ return;
+ }
+ p->rc = sqlite3_client_finalize(p->pStmt);
+ p->pStmt = 0;
+}
+
+/*
+** Usage: client_finalize ID
+**
+** Finalize the virtual machine.
+*/
+static int tcl_client_finalize(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+       " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[i]);
+ threadset[i].xOp = do_finalize;
+ sqliteFree(threadset[i].zArg);
+ threadset[i].zArg = 0;
+ threadset[i].opnum++;
+ return TCL_OK;
+}
+
+/*
+** This procedure runs in the thread to reset a virtual machine.
+*/
+static void do_reset(Thread *p){
+ if( p->pStmt==0 ){
+ p->zErr = p->zStaticErr = "no virtual machine available";
+ p->rc = SQLITE_ERROR;
+ return;
+ }
+ p->rc = sqlite3_client_reset(p->pStmt);
+ p->pStmt = 0;
+}
+
+/*
+** Usage: client_reset ID
+**
+** Reset the virtual machine.
+*/
+static int tcl_client_reset(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i;
+ if( argc!=2 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+       " ID", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[i]);
+ threadset[i].xOp = do_reset;
+ sqliteFree(threadset[i].zArg);
+ threadset[i].zArg = 0;
+ threadset[i].opnum++;
+ return TCL_OK;
+}
+
+/*
+** Usage: client_swap ID ID
+**
+** Interchange the sqlite3* pointer between two threads.
+*/
+static int tcl_client_swap(
+ void *NotUsed,
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int argc, /* Number of arguments */
+ const char **argv /* Text of each argument */
+){
+ int i, j;
+ sqlite3 *temp;
+ if( argc!=3 ){
+ Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+ " ID1 ID2", 0);
+ return TCL_ERROR;
+ }
+ i = parse_client_id(interp, argv[1]);
+ if( i<0 ) return TCL_ERROR;
+ if( !threadset[i].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[i]);
+ j = parse_client_id(interp, argv[2]);
+ if( j<0 ) return TCL_ERROR;
+ if( !threadset[j].busy ){
+ Tcl_AppendResult(interp, "no such thread", 0);
+ return TCL_ERROR;
+ }
+ client_wait(&threadset[j]);
+ temp = threadset[i].db;
+ threadset[i].db = threadset[j].db;
+ threadset[j].db = temp;
+ return TCL_OK;
+}
+
+/*
+** Register commands with the TCL interpreter.
+*/
+int Sqlitetest7_Init(Tcl_Interp *interp){
+ static struct {
+ char *zName;
+ Tcl_CmdProc *xProc;
+ } aCmd[] = {
+ { "client_create", (Tcl_CmdProc*)tcl_client_create },
+ { "client_wait", (Tcl_CmdProc*)tcl_client_wait },
+ { "client_halt", (Tcl_CmdProc*)tcl_client_halt },
+ { "client_argc", (Tcl_CmdProc*)tcl_client_argc },
+ { "client_argv", (Tcl_CmdProc*)tcl_client_argv },
+ { "client_colname", (Tcl_CmdProc*)tcl_client_colname },
+ { "client_result", (Tcl_CmdProc*)tcl_client_result },
+ { "client_error", (Tcl_CmdProc*)tcl_client_error },
+ { "client_compile", (Tcl_CmdProc*)tcl_client_compile },
+ { "client_step", (Tcl_CmdProc*)tcl_client_step },
+ { "client_reset", (Tcl_CmdProc*)tcl_client_reset },
+ { "client_finalize", (Tcl_CmdProc*)tcl_client_finalize },
+ { "client_swap", (Tcl_CmdProc*)tcl_client_swap },
+ };
+ int i;
+
+ for(i=0; i<sizeof(aCmd)/sizeof(aCmd[0]); i++){
+ Tcl_CreateCommand(interp, aCmd[i].zName, aCmd[i].xProc, 0, 0);
+ }
+ return TCL_OK;
+}
+#else
+int Sqlitetest7_Init(Tcl_Interp *interp){ return TCL_OK; }
+#endif /* OS_UNIX */
Added: freeswitch/trunk/libs/sqlite/src/test8.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test8.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1063 @@
+/*
+** 2006 June 10
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Code for testing the virtual table interfaces. This code
+** is not included in the SQLite library. It is used for automated
+** testing of the SQLite library.
+**
+** $Id: test8.c,v 1.43 2006/10/08 18:56:57 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "tcl.h"
+#include "os.h"
+#include <stdlib.h>
+#include <string.h>
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+
+typedef struct echo_vtab echo_vtab;
+typedef struct echo_cursor echo_cursor;
+
+/*
+** The test module defined in this file uses two global Tcl variables to
+** communicate with test-scripts:
+**
+** $::echo_module
+** $::echo_module_sync_fail
+** $::echo_module_begin_fail
+**
+** The variable ::echo_module is a list. Each time one of the following
+** methods is called, one or more elements are appended to the list.
+** This is used for automated testing of virtual table modules.
+**
+** The ::echo_module_sync_fail variable is set by test scripts and read
+** by code in this file. If it is set to the name of a real table in the
+** database, then all xSync operations on echo virtual tables that
+** use the named table as a backing store will fail.
+*/
+
+/*
+** An echo virtual-table object.
+**
+** echo.vtab.aIndex is an array of booleans. The nth entry is true if
+** the nth column of the real table is the left-most column of an index
+** (implicit or otherwise). In other words, it is true if SQLite can optimize
+** a query like "SELECT * FROM real_table WHERE col = ?".
+**
+** Member variable aCol[] contains copies of the column names of the real
+** table.
+*/
+struct echo_vtab {
+ sqlite3_vtab base;
+ Tcl_Interp *interp; /* Tcl interpreter containing debug variables */
+ sqlite3 *db; /* Database connection */
+
+ char *zTableName; /* Name of the real table */
+ char *zLogName; /* Name of the log table */
+ int nCol; /* Number of columns in the real table */
+ int *aIndex; /* Array of size nCol. True if column has an index */
+ char **aCol; /* Array of size nCol. Column names */
+};
+
+/* An echo cursor object */
+struct echo_cursor {
+ sqlite3_vtab_cursor base;
+ sqlite3_stmt *pStmt;
+};
+
+/*
+** Retrieve the column names for the table named zTab via database
+** connection db. SQLITE_OK is returned on success, or an sqlite error
+** code otherwise.
+**
+** If successful, the number of columns is written to *pnCol. *paCol is
+** set to point at sqliteMalloc()'d space containing the array of
+** nCol column names. The caller is responsible for calling sqliteFree
+** on *paCol.
+*/
+static int getColumnNames(
+ sqlite3 *db,
+ const char *zTab,
+ char ***paCol,
+ int *pnCol
+){
+ char **aCol = 0;
+ char *zSql;
+ sqlite3_stmt *pStmt = 0;
+ int rc = SQLITE_OK;
+ int nCol = 0;
+
+ /* Prepare the statement "SELECT * FROM <tbl>". The column names
+ ** of the result set of the compiled SELECT will be the same as
+ ** the column names of table <tbl>.
+ */
+ zSql = sqlite3MPrintf("SELECT * FROM %Q", zTab);
+ if( !zSql ){
+ rc = SQLITE_NOMEM;
+ goto out;
+ }
+ rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0);
+ sqliteFree(zSql);
+
+ if( rc==SQLITE_OK ){
+ int ii;
+ int nBytes;
+ char *zSpace;
+ nCol = sqlite3_column_count(pStmt);
+
+ /* Figure out how much space to allocate for the array of column names
+ ** (including space for the strings themselves). Then allocate it.
+ */
+ nBytes = sizeof(char *) * nCol;
+ for(ii=0; ii<nCol; ii++){
+ nBytes += (strlen(sqlite3_column_name(pStmt, ii)) + 1);
+ }
+ aCol = (char **)sqliteMalloc(nBytes);
+ if( !aCol ){
+ rc = SQLITE_NOMEM;
+ goto out;
+ }
+
+ /* Copy the column names into the allocated space and set up the
+ ** pointers in the aCol[] array.
+ */
+ zSpace = (char *)(&aCol[nCol]);
+ for(ii=0; ii<nCol; ii++){
+ aCol[ii] = zSpace;
+ zSpace += sprintf(zSpace, "%s", sqlite3_column_name(pStmt, ii));
+ zSpace++;
+ }
+ assert( (zSpace-nBytes)==(char *)aCol );
+ }
+
+ *paCol = aCol;
+ *pnCol = nCol;
+
+out:
+ sqlite3_finalize(pStmt);
+ return rc;
+}
+
+/*
+** Parameter zTab is the name of a table in database db with nCol
+** columns. This function allocates an array of integers nCol in
+** size and populates it according to any implicit or explicit
+** indices on table zTab.
+**
+** If successful, SQLITE_OK is returned and *paIndex set to point
+** at the allocated array. Otherwise, an error code is returned.
+**
+** See comments associated with the member variable aIndex above
+** "struct echo_vtab" for details of the contents of the array.
+*/
+static int getIndexArray(
+ sqlite3 *db, /* Database connection */
+ const char *zTab, /* Name of table in database db */
+ int nCol,
+ int **paIndex
+){
+ sqlite3_stmt *pStmt = 0;
+ int *aIndex = 0;
+ int rc;
+ char *zSql;
+
+ /* Allocate space for the index array */
+ aIndex = (int *)sqliteMalloc(sizeof(int) * nCol);
+ if( !aIndex ){
+ rc = SQLITE_NOMEM;
+ goto get_index_array_out;
+ }
+
+ /* Compile an sqlite pragma to loop through all indices on table zTab */
+ zSql = sqlite3MPrintf("PRAGMA index_list(%s)", zTab);
+ if( !zSql ){
+ rc = SQLITE_NOMEM;
+ goto get_index_array_out;
+ }
+ rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0);
+ sqliteFree(zSql);
+
+ /* For each index, figure out the left-most column and set the
+ ** corresponding entry in aIndex[] to 1.
+ */
+ while( pStmt && sqlite3_step(pStmt)==SQLITE_ROW ){
+ const char *zIdx = (const char *)sqlite3_column_text(pStmt, 1);
+ sqlite3_stmt *pStmt2 = 0;
+ zSql = sqlite3MPrintf("PRAGMA index_info(%s)", zIdx);
+ if( !zSql ){
+ rc = SQLITE_NOMEM;
+ goto get_index_array_out;
+ }
+ rc = sqlite3_prepare(db, zSql, -1, &pStmt2, 0);
+ sqliteFree(zSql);
+ if( pStmt2 && sqlite3_step(pStmt2)==SQLITE_ROW ){
+ int cid = sqlite3_column_int(pStmt2, 1);
+ assert( cid>=0 && cid<nCol );
+ aIndex[cid] = 1;
+ }
+ if( pStmt2 ){
+ rc = sqlite3_finalize(pStmt2);
+ }
+ if( rc!=SQLITE_OK ){
+ goto get_index_array_out;
+ }
+ }
+
+
+get_index_array_out:
+ if( pStmt ){
+ int rc2 = sqlite3_finalize(pStmt);
+ if( rc==SQLITE_OK ){
+ rc = rc2;
+ }
+ }
+ if( rc!=SQLITE_OK ){
+ sqliteFree(aIndex);
+ aIndex = 0;
+ }
+ *paIndex = aIndex;
+ return rc;
+}
+
+/*
+** Global Tcl variable $echo_module is a list. This routine appends
+** the string element zArg to that list in interpreter interp.
+*/
+static void appendToEchoModule(Tcl_Interp *interp, const char *zArg){
+ int flags = (TCL_APPEND_VALUE | TCL_LIST_ELEMENT | TCL_GLOBAL_ONLY);
+ Tcl_SetVar(interp, "echo_module", (zArg?zArg:""), flags);
+}
+
+/*
+** This function is called from within the echo-modules xCreate and
+** xConnect methods. The argc and argv arguments are copies of those
+** passed to the calling method. This function is responsible for
+** calling sqlite3_declare_vtab() to declare the schema of the virtual
+** table being created or connected.
+**
+** If the constructor was passed just one argument, i.e.:
+**
+** CREATE VIRTUAL TABLE t1 USING echo(t2);
+**
+** Then t2 is assumed to be the name of a *real* database table. The
+** schema of the virtual table is declared by passing a copy of the
+** CREATE TABLE statement for the real table to sqlite3_declare_vtab().
+** Hence, the virtual table should have exactly the same column names and
+** types as the real table.
+*/
+static int echoDeclareVtab(
+ echo_vtab *pVtab,
+ sqlite3 *db,
+ int argc,
+ const char *const*argv
+){
+ int rc = SQLITE_OK;
+
+ if( argc>=4 ){
+ sqlite3_stmt *pStmt = 0;
+ sqlite3_prepare(db,
+ "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = ?",
+ -1, &pStmt, 0);
+ sqlite3_bind_text(pStmt, 1, argv[3], -1, 0);
+ if( sqlite3_step(pStmt)==SQLITE_ROW ){
+ const char *zCreateTable = (const char *)sqlite3_column_text(pStmt, 0);
+ sqlite3_declare_vtab(db, zCreateTable);
+ rc = sqlite3_finalize(pStmt);
+ } else {
+ rc = sqlite3_finalize(pStmt);
+ if( rc==SQLITE_OK ){
+ rc = SQLITE_ERROR;
+ }
+ }
+
+ if( rc==SQLITE_OK ){
+ rc = getColumnNames(db, argv[3], &pVtab->aCol, &pVtab->nCol);
+ }
+ if( rc==SQLITE_OK ){
+ rc = getIndexArray(db, argv[3], pVtab->nCol, &pVtab->aIndex);
+ }
+ }
+
+ return rc;
+}
+
+/*
+** This function frees all runtime structures associated with the virtual
+** table pVtab.
+*/
+static int echoDestructor(sqlite3_vtab *pVtab){
+ echo_vtab *p = (echo_vtab*)pVtab;
+ sqliteFree(p->aIndex);
+ sqliteFree(p->aCol);
+ sqliteFree(p->zTableName);
+ sqliteFree(p->zLogName);
+ sqliteFree(p);
+ return 0;
+}
+
+/*
+** This function is called to do the work of the xCreate() and xConnect()
+** methods - to allocate the required in-memory structures for a newly
+** created or connected virtual table.
+*/
+static int echoConstructor(
+ sqlite3 *db,
+ void *pAux,
+ int argc, const char *const*argv,
+ sqlite3_vtab **ppVtab,
+ char **pzErr
+){
+ int i;
+ echo_vtab *pVtab;
+
+ /* Allocate the sqlite3_vtab/echo_vtab structure itself */
+ pVtab = sqliteMalloc( sizeof(*pVtab) );
+ if( !pVtab ){
+ return SQLITE_NOMEM;
+ }
+ pVtab->interp = (Tcl_Interp *)pAux;
+ pVtab->db = db;
+
+ /* Allocate echo_vtab.zTableName */
+ pVtab->zTableName = sqlite3MPrintf("%s", argv[3]);
+ if( !pVtab->zTableName ){
+ echoDestructor((sqlite3_vtab *)pVtab);
+ return SQLITE_NOMEM;
+ }
+
+ /* Log the arguments to this function to Tcl var ::echo_module */
+ for(i=0; i<argc; i++){
+ appendToEchoModule(pVtab->interp, argv[i]);
+ }
+
+ /* Invoke sqlite3_declare_vtab and set up other members of the echo_vtab
+ ** structure. If an error occurs, delete the sqlite3_vtab structure and
+ ** return an error code.
+ */
+ if( echoDeclareVtab(pVtab, db, argc, argv) ){
+ echoDestructor((sqlite3_vtab *)pVtab);
+ return SQLITE_ERROR;
+ }
+
+ /* Success. Set *ppVtab and return */
+ *ppVtab = &pVtab->base;
+ return SQLITE_OK;
+}
+
+/*
+** Echo virtual table module xCreate method.
+*/
+static int echoCreate(
+ sqlite3 *db,
+ void *pAux,
+ int argc, const char *const*argv,
+ sqlite3_vtab **ppVtab,
+ char **pzErr
+){
+ int rc = SQLITE_OK;
+ appendToEchoModule((Tcl_Interp *)(pAux), "xCreate");
+ rc = echoConstructor(db, pAux, argc, argv, ppVtab, pzErr);
+
+ /* If there were two arguments passed to the module at the SQL level
+ ** (i.e. "CREATE VIRTUAL TABLE tbl USING echo(arg1, arg2)"), then
+ ** the second argument is used as a table name. Attempt to create
+ ** such a table with a single column, "logmsg". This table will
+ ** be used to log calls to the xUpdate method. It will be deleted
+ ** when the virtual table is DROPed.
+ **
+ ** Note: The main point of this is to test that we can drop tables
+ ** from within an xDestroy method call.
+ */
+ if( rc==SQLITE_OK && argc==5 ){
+ char *zSql;
+ echo_vtab *pVtab = *(echo_vtab **)ppVtab;
+ pVtab->zLogName = sqlite3MPrintf("%s", argv[4]);
+ zSql = sqlite3MPrintf("CREATE TABLE %Q(logmsg)", pVtab->zLogName);
+ rc = sqlite3_exec(db, zSql, 0, 0, 0);
+ sqliteFree(zSql);
+ }
+
+ return rc;
+}
+
+/*
+** Echo virtual table module xConnect method.
+*/
+static int echoConnect(
+ sqlite3 *db,
+ void *pAux,
+ int argc, const char *const*argv,
+ sqlite3_vtab **ppVtab,
+ char **pzErr
+){
+ appendToEchoModule((Tcl_Interp *)(pAux), "xConnect");
+ return echoConstructor(db, pAux, argc, argv, ppVtab, pzErr);
+}
+
+/*
+** Echo virtual table module xDisconnect method.
+*/
+static int echoDisconnect(sqlite3_vtab *pVtab){
+ appendToEchoModule(((echo_vtab *)pVtab)->interp, "xDisconnect");
+ return echoDestructor(pVtab);
+}
+
+/*
+** Echo virtual table module xDestroy method.
+*/
+static int echoDestroy(sqlite3_vtab *pVtab){
+ int rc = SQLITE_OK;
+ echo_vtab *p = (echo_vtab *)pVtab;
+ appendToEchoModule(((echo_vtab *)pVtab)->interp, "xDestroy");
+
+ /* Drop the "log" table, if one exists (see echoCreate() for details) */
+ if( p && p->zLogName ){
+ char *zSql;
+ zSql = sqlite3MPrintf("DROP TABLE %Q", p->zLogName);
+ rc = sqlite3_exec(p->db, zSql, 0, 0, 0);
+ sqliteFree(zSql);
+ }
+
+ if( rc==SQLITE_OK ){
+ rc = echoDestructor(pVtab);
+ }
+ return rc;
+}
+
+/*
+** Echo virtual table module xOpen method.
+*/
+static int echoOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor){
+ echo_cursor *pCur;
+ pCur = sqliteMalloc(sizeof(echo_cursor));
+ *ppCursor = (sqlite3_vtab_cursor *)pCur;
+ return (pCur ? SQLITE_OK : SQLITE_NOMEM);
+}
+
+/*
+** Echo virtual table module xClose method.
+*/
+static int echoClose(sqlite3_vtab_cursor *cur){
+ int rc;
+ echo_cursor *pCur = (echo_cursor *)cur;
+ sqlite3_stmt *pStmt = pCur->pStmt;
+ pCur->pStmt = 0;
+ sqliteFree(pCur);
+ rc = sqlite3_finalize(pStmt);
+ return rc;
+}
+
+/*
+** Return non-zero if the cursor does not currently point to a valid record
+** (i.e. if the scan has finished), or zero otherwise.
+*/
+static int echoEof(sqlite3_vtab_cursor *cur){
+ return (((echo_cursor *)cur)->pStmt ? 0 : 1);
+}
+
+/*
+** Echo virtual table module xNext method.
+*/
+static int echoNext(sqlite3_vtab_cursor *cur){
+ int rc;
+ echo_cursor *pCur = (echo_cursor *)cur;
+ rc = sqlite3_step(pCur->pStmt);
+
+ if( rc==SQLITE_ROW ){
+ rc = SQLITE_OK;
+ }else{
+ rc = sqlite3_finalize(pCur->pStmt);
+ pCur->pStmt = 0;
+ }
+
+ return rc;
+}
+
+/*
+** Echo virtual table module xColumn method.
+*/
+static int echoColumn(sqlite3_vtab_cursor *cur, sqlite3_context *ctx, int i){
+ int iCol = i + 1;
+ sqlite3_stmt *pStmt = ((echo_cursor *)cur)->pStmt;
+ if( !pStmt ){
+ sqlite3_result_null(ctx);
+ }else{
+ assert( sqlite3_data_count(pStmt)>iCol );
+ sqlite3_result_value(ctx, sqlite3_column_value(pStmt, iCol));
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Echo virtual table module xRowid method.
+*/
+static int echoRowid(sqlite3_vtab_cursor *cur, sqlite_int64 *pRowid){
+ sqlite3_stmt *pStmt = ((echo_cursor *)cur)->pStmt;
+ *pRowid = sqlite3_column_int64(pStmt, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Compute a simple hash of the null terminated string zString.
+**
+** This module uses only sqlite3_index_info.idxStr, not
+** sqlite3_index_info.idxNum. So, to exercise idxNum as well, whenever
+** idxStr is set in echoBestIndex(), idxNum is set to the hash of that
+** string. In echoFilter(), code assert()s that the supplied idxNum value
+** is indeed the hash of the supplied idxStr.
+*/
+static int hashString(const char *zString){
+ int val = 0;
+ int ii;
+ for(ii=0; zString[ii]; ii++){
+ val = (val << 3) + (int)zString[ii];
+ }
+ return val;
+}
+
+/*
+** Echo virtual table module xFilter method.
+*/
+static int echoFilter(
+ sqlite3_vtab_cursor *pVtabCursor,
+ int idxNum, const char *idxStr,
+ int argc, sqlite3_value **argv
+){
+ int rc;
+ int i;
+
+ echo_cursor *pCur = (echo_cursor *)pVtabCursor;
+ echo_vtab *pVtab = (echo_vtab *)pVtabCursor->pVtab;
+ sqlite3 *db = pVtab->db;
+
+ /* Check that idxNum matches idxStr */
+ assert( idxNum==hashString(idxStr) );
+
+ /* Log arguments to the ::echo_module Tcl variable */
+ appendToEchoModule(pVtab->interp, "xFilter");
+ appendToEchoModule(pVtab->interp, idxStr);
+ for(i=0; i<argc; i++){
+ appendToEchoModule(pVtab->interp, (const char*)sqlite3_value_text(argv[i]));
+ }
+
+ sqlite3_finalize(pCur->pStmt);
+ pCur->pStmt = 0;
+
+ /* Prepare the SQL statement created by echoBestIndex and bind the
+ ** runtime parameters passed to this function to it.
+ */
+ rc = sqlite3_prepare(db, idxStr, -1, &pCur->pStmt, 0);
+ assert( pCur->pStmt || rc!=SQLITE_OK );
+ for(i=0; rc==SQLITE_OK && i<argc; i++){
+ sqlite3_bind_value(pCur->pStmt, i+1, argv[i]);
+ }
+
+ /* If everything was successful, advance to the first row of the scan */
+ if( rc==SQLITE_OK ){
+ rc = echoNext(pVtabCursor);
+ }
+
+ return rc;
+}
+
+
+/*
+** A helper function used by echoUpdate() and echoBestIndex() for
+** manipulating strings in concert with the sqlite3_mprintf() function.
+**
+** Parameter pzStr points to a pointer to a string allocated with
+** sqlite3_mprintf. The second parameter, zAppend, points to another
+** string. The two strings are concatenated together and *pzStr is
+** set to point at the result. The initial buffer pointed to by *pzStr
+** is deallocated via sqlite3_free().
+**
+** If the third argument, doFree, is true, then sqlite3_free() is
+** also called to free the buffer pointed to by zAppend.
+*/
+static void string_concat(char **pzStr, char *zAppend, int doFree){
+ char *zIn = *pzStr;
+ if( zIn ){
+ char *zTemp = zIn;
+ zIn = sqlite3_mprintf("%s%s", zIn, zAppend);
+ sqlite3_free(zTemp);
+ }else{
+ zIn = sqlite3_mprintf("%s", zAppend);
+ }
+ *pzStr = zIn;
+ if( doFree ){
+ sqlite3_free(zAppend);
+ }
+}
+
+/*
+** The echo module implements the subset of query constraints and sort
+** orders that may take advantage of SQLite indices on the underlying
+** real table. For example, if the real table is declared as:
+**
+** CREATE TABLE real(a, b, c);
+** CREATE INDEX real_index ON real(b);
+**
+** then the echo module handles WHERE or ORDER BY clauses that refer
+** to the column "b", but not "a" or "c". If a multi-column index is
+** present, only its left-most column is considered.
+**
+** This xBestIndex method encodes the proposed search strategy as
+** an SQL query on the real table underlying the virtual echo module
+** table and stores the query in sqlite3_index_info.idxStr. The SQL
+** statement is of the form:
+**
+** SELECT rowid, * FROM <real-table> ?<where-clause>? ?<order-by-clause>?
+**
+** where the <where-clause> and <order-by-clause> are determined
+** by the contents of the structure pointed to by the pIdxInfo argument.
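+**
+** As an illustration (the echo-table name "t_echo" is an assumption used
+** only for this example): given the "real" table and index shown above, a
+** query against the echo table such as
+**
+**     SELECT * FROM t_echo WHERE b = 5 ORDER BY b;
+**
+** would cause this method to store an idxStr along the lines of
+**
+**     SELECT rowid, * FROM 'real' WHERE b = ? ORDER BY b ASC
+**
+** with echoFilter() later binding the constraint value 5 to the "?" before
+** running the statement.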
+*/
+static int echoBestIndex(sqlite3_vtab *tab, sqlite3_index_info *pIdxInfo){
+ int ii;
+ char *zQuery = 0;
+ char *zNew;
+ int nArg = 0;
+ const char *zSep = "WHERE";
+ echo_vtab *pVtab = (echo_vtab *)tab;
+ sqlite3_stmt *pStmt = 0;
+
+ int nRow;
+ int useIdx = 0;
+ int rc = SQLITE_OK;
+
+ /* Determine the number of rows in the table and store this value in local
+ ** variable nRow. The 'estimated-cost' of the scan will be the number of
+ ** rows in the table for a linear scan, or the log (base 2) of the
+ ** number of rows if the proposed scan uses an index.
+ */
+ zQuery = sqlite3_mprintf("SELECT count(*) FROM %Q", pVtab->zTableName);
+ rc = sqlite3_prepare(pVtab->db, zQuery, -1, &pStmt, 0);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+ sqlite3_step(pStmt);
+ nRow = sqlite3_column_int(pStmt, 0);
+ rc = sqlite3_finalize(pStmt);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ zQuery = sqlite3_mprintf("SELECT rowid, * FROM %Q", pVtab->zTableName);
+ for(ii=0; ii<pIdxInfo->nConstraint; ii++){
+ const struct sqlite3_index_constraint *pConstraint;
+ struct sqlite3_index_constraint_usage *pUsage;
+ int iCol;
+
+ pConstraint = &pIdxInfo->aConstraint[ii];
+ pUsage = &pIdxInfo->aConstraintUsage[ii];
+
+ iCol = pConstraint->iColumn;
+ if( pVtab->aIndex[iCol] ){
+ char *zCol = pVtab->aCol[iCol];
+ char *zOp = 0;
+ useIdx = 1;
+ if( iCol<0 ){
+ zCol = "rowid";
+ }
+ switch( pConstraint->op ){
+ case SQLITE_INDEX_CONSTRAINT_EQ:
+ zOp = "="; break;
+ case SQLITE_INDEX_CONSTRAINT_LT:
+ zOp = "<"; break;
+ case SQLITE_INDEX_CONSTRAINT_GT:
+ zOp = ">"; break;
+ case SQLITE_INDEX_CONSTRAINT_LE:
+ zOp = "<="; break;
+ case SQLITE_INDEX_CONSTRAINT_GE:
+ zOp = ">="; break;
+ case SQLITE_INDEX_CONSTRAINT_MATCH:
+ zOp = "LIKE"; break;
+ }
+ if( zOp[0]=='L' ){
+ zNew = sqlite3_mprintf(" %s %s LIKE (SELECT '%%'||?||'%%')",
+ zSep, zCol);
+ } else {
+ zNew = sqlite3_mprintf(" %s %s %s ?", zSep, zCol, zOp);
+ }
+ string_concat(&zQuery, zNew, 1);
+
+ zSep = "AND";
+ pUsage->argvIndex = ++nArg;
+ pUsage->omit = 1;
+ }
+ }
+
+ /* If there is only one term in the ORDER BY clause, and it is
+ ** on a column that this virtual table has an index for, then consume
+ ** the ORDER BY clause.
+ */
+ if( pIdxInfo->nOrderBy==1 && pVtab->aIndex[pIdxInfo->aOrderBy->iColumn] ){
+ int iCol = pIdxInfo->aOrderBy->iColumn;
+ char *zCol = pVtab->aCol[iCol];
+ char *zDir = pIdxInfo->aOrderBy->desc?"DESC":"ASC";
+ if( iCol<0 ){
+ zCol = "rowid";
+ }
+ zNew = sqlite3_mprintf(" ORDER BY %s %s", zCol, zDir);
+ string_concat(&zQuery, zNew, 1);
+ pIdxInfo->orderByConsumed = 1;
+ }
+
+ appendToEchoModule(pVtab->interp, "xBestIndex");
+ appendToEchoModule(pVtab->interp, zQuery);
+
+ pIdxInfo->idxNum = hashString(zQuery);
+ pIdxInfo->idxStr = zQuery;
+ pIdxInfo->needToFreeIdxStr = 1;
+ if( useIdx ){
+ /* Approximation of log2(nRow). */
+ for( ii=0; ii<(sizeof(int)*8); ii++ ){
+ if( nRow & (1<<ii) ){
+ pIdxInfo->estimatedCost = (double)ii;
+ }
+ }
+ } else {
+ pIdxInfo->estimatedCost = (double)nRow;
+ }
+ return rc;
+}
+
+/*
+** The xUpdate method for echo module virtual tables.
+**
+** apData[0] apData[1] apData[2..]
+**
+** INTEGER DELETE
+**
+** INTEGER NULL (nCol args) UPDATE (do not set rowid)
+** INTEGER INTEGER (nCol args) UPDATE (with SET rowid = <arg1>)
+**
+** NULL NULL (nCol args) INSERT INTO (automatic rowid value)
+** NULL INTEGER (nCol args) INSERT (incl. rowid value)
+**
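+** As a hedged illustration (the table and column names are assumptions,
+** not taken from this file): for a three-column table t(a, b, c), an
+** UPDATE that leaves the rowid unchanged (apData[1] is NULL) causes this
+** method to build and execute SQL roughly of the form:
+**
+**     UPDATE 't' SET 'a'=?2, 'b'=?3, 'c'=?4 WHERE rowid=?5
+**
+** where apData[2..4] are bound to ?2..?4 and apData[0] (the original
+** rowid) is bound to ?5, i.e. to the variable numbered nData.
+**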
+*/
+int echoUpdate(
+ sqlite3_vtab *tab,
+ int nData,
+ sqlite3_value **apData,
+ sqlite_int64 *pRowid
+){
+ echo_vtab *pVtab = (echo_vtab *)tab;
+ sqlite3 *db = pVtab->db;
+ int rc = SQLITE_OK;
+
+ sqlite3_stmt *pStmt;
+ char *z = 0; /* SQL statement to execute */
+ int bindArgZero = 0; /* True to bind apData[0] to sql var no. nData */
+ int bindArgOne = 0; /* True to bind apData[1] to sql var no. 1 */
+ int i; /* Counter variable used by for loops */
+
+ assert( nData==pVtab->nCol+2 || nData==1 );
+
+ /* If apData[0] is an integer and nData>1 then do an UPDATE */
+ if( nData>1 && sqlite3_value_type(apData[0])==SQLITE_INTEGER ){
+ char *zSep = " SET";
+ z = sqlite3_mprintf("UPDATE %Q", pVtab->zTableName);
+
+ bindArgOne = (apData[1] && sqlite3_value_type(apData[1])==SQLITE_INTEGER);
+ bindArgZero = 1;
+
+ if( bindArgOne ){
+ string_concat(&z, " SET rowid=?1 ", 0);
+ zSep = ",";
+ }
+ for(i=2; i<nData; i++){
+ if( apData[i]==0 ) continue;
+ string_concat(&z, sqlite3_mprintf(
+ "%s %Q=?%d", zSep, pVtab->aCol[i-2], i), 1);
+ zSep = ",";
+ }
+ string_concat(&z, sqlite3_mprintf(" WHERE rowid=?%d", nData), 0);
+ }
+
+ /* If apData[0] is an integer and nData==1 then do a DELETE */
+ else if( nData==1 && sqlite3_value_type(apData[0])==SQLITE_INTEGER ){
+ z = sqlite3_mprintf("DELETE FROM %Q WHERE rowid = ?1", pVtab->zTableName);
+ bindArgZero = 1;
+ }
+
+ /* If the first argument is NULL and there are more than two args, INSERT */
+ else if( nData>2 && sqlite3_value_type(apData[0])==SQLITE_NULL ){
+ int ii;
+ char *zInsert = 0;
+ char *zValues = 0;
+
+ zInsert = sqlite3_mprintf("INSERT INTO %Q (", pVtab->zTableName);
+ if( sqlite3_value_type(apData[1])==SQLITE_INTEGER ){
+ bindArgOne = 1;
+ zValues = sqlite3_mprintf("?");
+ string_concat(&zInsert, "rowid", 0);
+ }
+
+ assert((pVtab->nCol+2)==nData);
+ for(ii=2; ii<nData; ii++){
+ string_concat(&zInsert,
+ sqlite3_mprintf("%s%Q", zValues?", ":"", pVtab->aCol[ii-2]), 1);
+ string_concat(&zValues,
+ sqlite3_mprintf("%s?%d", zValues?", ":"", ii), 1);
+ }
+
+ string_concat(&z, zInsert, 1);
+ string_concat(&z, ") VALUES(", 0);
+ string_concat(&z, zValues, 1);
+ string_concat(&z, ")", 0);
+ }
+
+ /* Anything else is an error */
+ else{
+ assert(0);
+ return SQLITE_ERROR;
+ }
+
+ rc = sqlite3_prepare(db, z, -1, &pStmt, 0);
+ assert( rc!=SQLITE_OK || pStmt );
+ sqlite3_free(z);
+ if( rc==SQLITE_OK ) {
+ if( bindArgZero ){
+ sqlite3_bind_value(pStmt, nData, apData[0]);
+ }
+ if( bindArgOne ){
+ sqlite3_bind_value(pStmt, 1, apData[1]);
+ }
+ for(i=2; i<nData; i++){
+ if( apData[i] ) sqlite3_bind_value(pStmt, i, apData[i]);
+ }
+ sqlite3_step(pStmt);
+ rc = sqlite3_finalize(pStmt);
+ }
+
+ if( pRowid && rc==SQLITE_OK ){
+ *pRowid = sqlite3_last_insert_rowid(db);
+ }
+
+ return rc;
+}
+
+/*
+** xBegin, xSync, xCommit and xRollback callbacks for echo module
+** virtual tables. Do nothing other than add the name of the callback
+** to the $::echo_module Tcl variable.
+*/
+static int echoTransactionCall(sqlite3_vtab *tab, const char *zCall){
+ char *z;
+ echo_vtab *pVtab = (echo_vtab *)tab;
+ z = sqlite3_mprintf("echo(%s)", pVtab->zTableName);
+ appendToEchoModule(pVtab->interp, zCall);
+ appendToEchoModule(pVtab->interp, z);
+ sqlite3_free(z);
+ return SQLITE_OK;
+}
+static int echoBegin(sqlite3_vtab *tab){
+ echo_vtab *pVtab = (echo_vtab *)tab;
+ Tcl_Interp *interp = pVtab->interp;
+ const char *zVal;
+
+ echoTransactionCall(tab, "xBegin");
+
+ /* Check if the $::echo_module_begin_fail variable is defined. If it is,
+ ** and it is set to the name of the real table underlying this virtual
+ ** echo module table, then cause this xBegin operation to fail.
+ */
+ zVal = Tcl_GetVar(interp, "echo_module_begin_fail", TCL_GLOBAL_ONLY);
+ if( zVal && 0==strcmp(zVal, pVtab->zTableName) ){
+ return SQLITE_ERROR;
+ }
+ return SQLITE_OK;
+}
+static int echoSync(sqlite3_vtab *tab){
+ echo_vtab *pVtab = (echo_vtab *)tab;
+ Tcl_Interp *interp = pVtab->interp;
+ const char *zVal;
+
+ echoTransactionCall(tab, "xSync");
+
+ /* Check if the $::echo_module_sync_fail variable is defined. If it is,
+ ** and it is set to the name of the real table underlying this virtual
+ ** echo module table, then cause this xSync operation to fail.
+ */
+ zVal = Tcl_GetVar(interp, "echo_module_sync_fail", TCL_GLOBAL_ONLY);
+ if( zVal && 0==strcmp(zVal, pVtab->zTableName) ){
+ return -1;
+ }
+ return SQLITE_OK;
+}
+static int echoCommit(sqlite3_vtab *tab){
+ return echoTransactionCall(tab, "xCommit");
+}
+static int echoRollback(sqlite3_vtab *tab){
+ return echoTransactionCall(tab, "xRollback");
+}
+
+/*
+** Implementation of "GLOB" function on the echo module. Pass
+** all arguments to the ::echo_glob_overload procedure of TCL
+** and return the result of that procedure as a string.
+*/
+static void overloadedGlobFunction(
+ sqlite3_context *pContext,
+ int nArg,
+ sqlite3_value **apArg
+){
+ Tcl_Interp *interp = sqlite3_user_data(pContext);
+ Tcl_DString str;
+ int i;
+ int rc;
+ Tcl_DStringInit(&str);
+ Tcl_DStringAppendElement(&str, "::echo_glob_overload");
+ for(i=0; i<nArg; i++){
+ Tcl_DStringAppendElement(&str, (char*)sqlite3_value_text(apArg[i]));
+ }
+ rc = Tcl_Eval(interp, Tcl_DStringValue(&str));
+ Tcl_DStringFree(&str);
+ if( rc ){
+ sqlite3_result_error(pContext, Tcl_GetStringResult(interp), -1);
+ }else{
+ sqlite3_result_text(pContext, Tcl_GetStringResult(interp),
+ -1, SQLITE_TRANSIENT);
+ }
+ Tcl_ResetResult(interp);
+}
+
+/*
+** This is the xFindFunction implementation for the echo module.
+** SQLite calls this routine when the first argument of a function
+** is a column of an echo virtual table. This routine can optionally
+** override the implementation of that function. It will choose to
+** do so if the function is named "glob", and a TCL command named
+** ::echo_glob_overload exists.
+*/
+static int echoFindFunction(
+ sqlite3_vtab *vtab,
+ int nArg,
+ const char *zFuncName,
+ void (**pxFunc)(sqlite3_context*,int,sqlite3_value**),
+ void **ppArg
+){
+ echo_vtab *pVtab = (echo_vtab *)vtab;
+ Tcl_Interp *interp = pVtab->interp;
+ Tcl_CmdInfo info;
+ if( strcmp(zFuncName,"glob")!=0 ){
+ return 0;
+ }
+ if( Tcl_GetCommandInfo(interp, "::echo_glob_overload", &info)==0 ){
+ return 0;
+ }
+ *pxFunc = overloadedGlobFunction;
+ *ppArg = interp;
+ return 1;
+}
+
+/*
+** A virtual table module that merely "echoes" the contents of another
+** table (like an SQL VIEW).
+*/
+static sqlite3_module echoModule = {
+ 0, /* iVersion */
+ echoCreate,
+ echoConnect,
+ echoBestIndex,
+ echoDisconnect,
+ echoDestroy,
+ echoOpen, /* xOpen - open a cursor */
+ echoClose, /* xClose - close a cursor */
+ echoFilter, /* xFilter - configure scan constraints */
+ echoNext, /* xNext - advance a cursor */
+ echoEof, /* xEof */
+ echoColumn, /* xColumn - read data */
+ echoRowid, /* xRowid - read data */
+ echoUpdate, /* xUpdate - write data */
+ echoBegin, /* xBegin - begin transaction */
+ echoSync, /* xSync - sync transaction */
+ echoCommit, /* xCommit - commit transaction */
+ echoRollback, /* xRollback - rollback transaction */
+ echoFindFunction, /* xFindFunction - function overloading */
+};
+
+/*
+** Decode a pointer to an sqlite3 object.
+*/
+static int getDbPointer(Tcl_Interp *interp, const char *zA, sqlite3 **ppDb){
+ *ppDb = (sqlite3*)sqlite3TextToPtr(zA);
+ return TCL_OK;
+}
+
+/*
+** Register the echo virtual table module.
+*/
+static int register_echo_module(
+ ClientData clientData, /* Pointer to sqlite3_enable_XXX function */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ sqlite3 *db;
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "DB");
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ sqlite3_create_module(db, "echo", &echoModule, (void *)interp);
+ return TCL_OK;
+}
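+
+/*
+** Usage sketch (the table names below are assumptions for illustration and
+** do not appear in this file): once the command above has been registered
+** for a database handle created with the usual [sqlite3 db ...] Tcl
+** command, a test script might exercise the echo module roughly like so:
+**
+**   register_echo_module db
+**   db eval { CREATE VIRTUAL TABLE t_echo USING echo(t_real) }
+**   db eval { SELECT * FROM t_echo }
+*/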
+
+/*
+** Tcl interface to sqlite3_declare_vtab, invoked as follows from Tcl:
+**
+** sqlite3_declare_vtab DB SQL
+*/
+static int declare_vtab(
+ ClientData clientData, /* Pointer to sqlite3_enable_XXX function */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ sqlite3 *db;
+ int rc;
+ if( objc!=3 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "DB SQL");
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+ rc = sqlite3_declare_vtab(db, Tcl_GetString(objv[2]));
+ if( rc!=SQLITE_OK ){
+ Tcl_SetResult(interp, (char *)sqlite3_errmsg(db), TCL_VOLATILE);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+#endif /* ifndef SQLITE_OMIT_VIRTUALTABLE */
+
+/*
+** Register commands with the TCL interpreter.
+*/
+int Sqlitetest8_Init(Tcl_Interp *interp){
+ static struct {
+ char *zName;
+ Tcl_ObjCmdProc *xProc;
+ void *clientData;
+ } aObjCmd[] = {
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ { "register_echo_module", register_echo_module, 0 },
+ { "sqlite3_declare_vtab", declare_vtab, 0 },
+#endif
+ };
+ int i;
+ for(i=0; i<sizeof(aObjCmd)/sizeof(aObjCmd[0]); i++){
+ Tcl_CreateObjCommand(interp, aObjCmd[i].zName,
+ aObjCmd[i].xProc, aObjCmd[i].clientData, 0);
+ }
+ return TCL_OK;
+}
Added: freeswitch/trunk/libs/sqlite/src/test_async.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test_async.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1262 @@
+/*
+** 2005 December 14
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+**
+** This file contains an example implementation of an asynchronous IO
+** backend for SQLite.
+**
+** WHAT IS ASYNCHRONOUS I/O?
+**
+** With asynchronous I/O, write requests are handled by a separate thread
+** running in the background. This means that the thread that initiates
+** a database write does not have to wait for (sometimes slow) disk I/O
+** to occur. The write seems to happen very quickly, though in reality
+** it is happening at its usual slow pace in the background.
+**
+** Asynchronous I/O appears to give better responsiveness, but at a price.
+** You lose the Durable property. With the default I/O backend of SQLite,
+** once a write completes, you know that the information you wrote is
+** safely on disk. With asynchronous I/O, this is not the case. If
+** your program crashes or if there is a power loss after the database
+** write but before the asynchronous write thread has completed, then the
+** database change might never make it to disk and the next user of the
+** database might not see your change.
+**
+** You lose Durability with asynchronous I/O, but you still retain the
+** other parts of ACID: Atomic, Consistent, and Isolated. Many
+** applications get along fine without Durability.
+**
+** HOW IT WORKS
+**
+** Asynchronous I/O works by overloading the OS-layer disk I/O routines
+** with modified versions that store the data to be written in a queue of
+** pending write operations. Look at the asyncEnable() subroutine to see
+** how overloading works. Six os-layer routines are overloaded:
+**
+** sqlite3OsOpenReadWrite;
+** sqlite3OsOpenReadOnly;
+** sqlite3OsOpenExclusive;
+** sqlite3OsDelete;
+** sqlite3OsFileExists;
+** sqlite3OsSyncDirectory;
+**
+** The original implementations of these routines are saved and are
+** used by the writer thread to do the real I/O. The substitute
+** implementations typically put the I/O operation on a queue
+** to be handled later by the writer thread, though read operations
+** must be handled right away, obviously.
+**
+** Asynchronous I/O is disabled by setting the os-layer interface routines
+** back to their original values.
+**
+** LIMITATIONS
+**
+** This demonstration code is deliberately kept simple in order to keep
+** the main ideas clear and easy to understand. Real applications that
+** want to do asynchronous I/O might want to add additional capabilities.
+** For example, in this demonstration if writes are happening at a steady
+** stream that exceeds the I/O capability of the background writer thread,
+** the queue of pending write operations will grow without bound until we
+** run out of memory. Users of this technique may want to keep track of
+** the quantity of pending writes and stop accepting new write requests
+** when the buffer gets to be too big.
+*/
+
+#include "sqliteInt.h"
+#include "os.h"
+#include <tcl.h>
+
+/* If the THREADSAFE macro is not set, assume that it is turned off. */
+#ifndef THREADSAFE
+# define THREADSAFE 0
+#endif
+
+/*
+** This test uses pthreads and hence only works on unix and with
+** a threadsafe build of SQLite. It also requires that the redefinable
+** I/O feature of SQLite be turned on. This feature is turned off by
+** default. If a required element is missing, almost all of the code
+** in this file is commented out.
+*/
+#if OS_UNIX && THREADSAFE && defined(SQLITE_ENABLE_REDEF_IO)
+
+/*
+** This demo uses pthreads. If you do not have a pthreads implementation
+** for your operating system, you will need to recode the threading
+** logic.
+*/
+#include <pthread.h>
+#include <sched.h>
+
+/* Useful macros used in several places */
+#define MIN(x,y) ((x)<(y)?(x):(y))
+#define MAX(x,y) ((x)>(y)?(x):(y))
+
+/* Forward references */
+typedef struct AsyncWrite AsyncWrite;
+typedef struct AsyncFile AsyncFile;
+
+/* Enable for debugging */
+static int sqlite3async_trace = 0;
+# define TRACE(X) if( sqlite3async_trace ) asyncTrace X
+static void asyncTrace(const char *zFormat, ...){
+ char *z;
+ va_list ap;
+ va_start(ap, zFormat);
+ z = sqlite3_vmprintf(zFormat, ap);
+ va_end(ap);
+ fprintf(stderr, "[%d] %s", (int)pthread_self(), z);
+ free(z);
+}
+
+/*
+** THREAD SAFETY NOTES
+**
+** Basic rules:
+**
+** * Both read and write access to the global write-op queue must be
+** protected by the async.queueMutex.
+**
+** * The file handles from the underlying system are assumed not to
+** be thread safe.
+**
+** * See the last two paragraphs under "The Writer Thread" for
+** an assumption to do with file-handle synchronization by the Os.
+**
+** File system operations (invoked by SQLite thread):
+**
+** xOpenXXX (three versions)
+** xDelete
+** xFileExists
+** xSyncDirectory
+**
+** File handle operations (invoked by SQLite thread):
+**
+** asyncWrite, asyncClose, asyncTruncate, asyncSync,
+** asyncSetFullSync, asyncOpenDirectory.
+**
+** The operations above add an entry to the global write-op list. They
+** prepare the entry, acquire the async.queueMutex momentarily while
+** list pointers are manipulated to insert the new entry, then release
+** the mutex and signal the writer thread to wake up in case it happens
+** to be asleep.
+**
+**
+** asyncRead, asyncFileSize.
+**
+** Read operations. Both of these read from the underlying file
+** first, then adjust their result based on pending writes in the
+** write-op queue. So async.queueMutex is held for the duration
+** of these operations to prevent other threads from changing the
+** queue in mid operation.
+**
+**
+** asyncLock, asyncUnlock, asyncLockState, asyncCheckReservedLock
+**
+** These primitives implement in-process locking using a hash table
+** on the file name. Files are locked correctly for connections coming
+** from the same process. But other processes cannot see these locks
+** and will therefore not honor them.
+**
+**
+** asyncFileHandle.
+**
+** The sqlite3OsFileHandle() function is currently only used when
+** debugging the pager module. Unless sqlite3OsClose() is called on the
+** file (shouldn't be possible for other reasons), the underlying
+** implementations are safe to call without grabbing any mutex. So we just
+** go ahead and call it no matter what any other threads are doing.
+**
+**
+** asyncSeek.
+**
+** Calling this method just manipulates the AsyncFile.iOffset variable.
+** Since this variable is never accessed by writer thread, this
+** function does not require the mutex. Actual calls to OsSeek() take
+** place just before OsWrite() or OsRead(), which are always protected by
+** the mutex.
+**
+** The writer thread:
+**
+** The async.writerMutex is used to make sure there is only
+** a single writer thread running at a time.
+**
+** Inside the writer thread is a loop that works like this:
+**
+** WHILE (write-op list is not empty)
+** Do IO operation at head of write-op list
+** Remove entry from head of write-op list
+** END WHILE
+**
+** The async.queueMutex is always held during the <write-op list is
+** not empty> test, and when the entry is removed from the head
+** of the write-op list. Sometimes it is held for the interim
+** period (while the IO is performed), and sometimes it is
+** relinquished. It is relinquished if (a) the IO op is an
+** ASYNC_CLOSE or (b) when the file handle was opened, two of
+** the underlying systems handles were opened on the same
+** file-system entry.
+**
+** If condition (b) above is true, then one file-handle
+** (AsyncFile.pBaseRead) is used exclusively by sqlite threads to read the
+** file, the other (AsyncFile.pBaseWrite) by sqlite3_async_flush()
+** threads to perform write() operations. This means that read
+** operations are not blocked by asynchronous writes (although
+** asynchronous writes may still be blocked by reads).
+**
+** This assumes that the OS keeps two handles open on the same file
+** properly in sync. That is, any read operation that starts after a
+** write operation on the same file system entry has completed returns
+** data consistent with the write. We also assume that if one thread
+** reads a file while another is writing it, all bytes other than the
+** ones actually being written contain valid data.
+**
+** If the above assumptions are not true, set the preprocessor symbol
+** SQLITE_ASYNC_TWO_FILEHANDLES to 0.
+*/
+
+#ifndef SQLITE_ASYNC_TWO_FILEHANDLES
+/* #define SQLITE_ASYNC_TWO_FILEHANDLES 0 */
+#define SQLITE_ASYNC_TWO_FILEHANDLES 1
+#endif
+
+/*
+** State information is held in the static variable "async" defined
+** as follows:
+*/
+static struct TestAsyncStaticData {
+ pthread_mutex_t queueMutex; /* Mutex for access to write operation queue */
+ pthread_mutex_t writerMutex; /* Prevents multiple writer threads */
+ pthread_mutex_t lockMutex; /* For access to aLock hash table */
+ pthread_cond_t queueSignal; /* For waking up sleeping writer thread */
+ pthread_cond_t emptySignal; /* Notify when the write queue is empty */
+ AsyncWrite *pQueueFirst; /* Next write operation to be processed */
+ AsyncWrite *pQueueLast; /* Last write operation on the list */
+ Hash aLock; /* Files locked */
+ volatile int ioDelay; /* Extra delay between write operations */
+ volatile int writerHaltWhenIdle; /* Writer thread halts when queue empty */
+ volatile int writerHaltNow; /* Writer thread halts after next op */
+ int ioError; /* True if an IO error has occurred */
+ int nFile; /* Number of open files (from sqlite pov) */
+} async = {
+ PTHREAD_MUTEX_INITIALIZER,
+ PTHREAD_MUTEX_INITIALIZER,
+ PTHREAD_MUTEX_INITIALIZER,
+ PTHREAD_COND_INITIALIZER,
+ PTHREAD_COND_INITIALIZER,
+};
+
+/* Possible values of AsyncWrite.op */
+#define ASYNC_NOOP 0
+#define ASYNC_WRITE 1
+#define ASYNC_SYNC 2
+#define ASYNC_TRUNCATE 3
+#define ASYNC_CLOSE 4
+#define ASYNC_OPENDIRECTORY 5
+#define ASYNC_SETFULLSYNC 6
+#define ASYNC_DELETE 7
+#define ASYNC_OPENEXCLUSIVE 8
+#define ASYNC_SYNCDIRECTORY 9
+
+/* Names of opcodes. Used for debugging only.
+** Make sure these stay in sync with the macros above!
+*/
+static const char *azOpcodeName[] = {
+ "NOOP", "WRITE", "SYNC", "TRUNCATE", "CLOSE",
+ "OPENDIR", "SETFULLSYNC", "DELETE", "OPENEX", "SYNCDIR",
+};
+
+/*
+** Entries on the write-op queue are instances of the AsyncWrite
+** structure, defined here.
+**
+** The interpretation of the iOffset and nByte variables varies depending
+** on the value of AsyncWrite.op:
+**
+** ASYNC_WRITE:
+** iOffset -> Offset in file to write to.
+** nByte -> Number of bytes of data to write (pointed to by zBuf).
+**
+** ASYNC_SYNC:
+** iOffset -> Unused.
+** nByte -> Value of "fullsync" flag to pass to sqlite3OsSync().
+**
+** ASYNC_TRUNCATE:
+** iOffset -> Size to truncate file to.
+** nByte -> Unused.
+**
+** ASYNC_CLOSE:
+** iOffset -> Unused.
+** nByte -> Unused.
+**
+** ASYNC_OPENDIRECTORY:
+** iOffset -> Unused.
+** nByte -> Number of bytes that zBuf points to (directory name).
+**
+** ASYNC_SETFULLSYNC:
+** iOffset -> Unused.
+** nByte -> New value for the full-sync flag.
+**
+**
+** ASYNC_DELETE:
+** iOffset -> Unused.
+** nByte -> Number of bytes that zBuf points to (file name).
+**
+** ASYNC_OPENEXCLUSIVE:
+** iOffset -> Value of "delflag".
+** nByte -> Number of bytes that zBuf points to (file name).
+**
+**
+** For an ASYNC_WRITE operation, zBuf points to the data to write to the file.
+** This space is sqliteMalloc()d along with the AsyncWrite structure in a
+** single blob, so is deleted when sqliteFree() is called on the parent
+** structure.
+*/
+struct AsyncWrite {
+ AsyncFile *pFile; /* File to write data to or sync */
+ int op; /* One of ASYNC_xxx etc. */
+ i64 iOffset; /* See above */
+ int nByte; /* See above */
+ char *zBuf; /* Data to write to file (or NULL if op!=ASYNC_WRITE) */
+ AsyncWrite *pNext; /* Next write operation (to any file) */
+};
+
+/*
+** The AsyncFile structure is a subclass of OsFile used for asynchronous IO.
+*/
+struct AsyncFile {
+ IoMethod *pMethod; /* Must be first */
+ i64 iOffset; /* Current seek() offset in file */
+ char *zName; /* Underlying OS filename - used for debugging */
+ int nName; /* Number of characters in zName */
+ OsFile *pBaseRead; /* Read handle to the underlying Os file */
+ OsFile *pBaseWrite; /* Write handle to the underlying Os file */
+};
+
+/*
+** Add an entry to the end of the global write-op list. pWrite should point
+** to an AsyncWrite structure allocated using sqlite3OsMalloc(). The writer
+** thread will call sqlite3OsFree() to free the structure after the specified
+** operation has been completed.
+**
+** Once an AsyncWrite structure has been added to the list, it becomes the
+** property of the writer thread and must not be read or modified by the
+** caller.
+*/
+static void addAsyncWrite(AsyncWrite *pWrite){
+ /* We must hold the queue mutex in order to modify the queue pointers */
+ pthread_mutex_lock(&async.queueMutex);
+
+ /* Add the record to the end of the write-op queue */
+ assert( !pWrite->pNext );
+ if( async.pQueueLast ){
+ assert( async.pQueueFirst );
+ async.pQueueLast->pNext = pWrite;
+ }else{
+ async.pQueueFirst = pWrite;
+ }
+ async.pQueueLast = pWrite;
+ TRACE(("PUSH %p (%s %s %d)\n", pWrite, azOpcodeName[pWrite->op],
+ pWrite->pFile ? pWrite->pFile->zName : "-", pWrite->iOffset));
+
+ if( pWrite->op==ASYNC_CLOSE ){
+ async.nFile--;
+ if( async.nFile==0 ){
+ async.ioError = SQLITE_OK;
+ }
+ }
+
+ /* Drop the queue mutex */
+ pthread_mutex_unlock(&async.queueMutex);
+
+ /* The writer thread might have been idle because there was nothing
+ ** on the write-op queue for it to do. So wake it up. */
+ pthread_cond_signal(&async.queueSignal);
+}
+
+/*
+** Increment async.nFile in a thread-safe manner.
+*/
+static void incrOpenFileCount(){
+ /* We must hold the queue mutex in order to modify async.nFile */
+ pthread_mutex_lock(&async.queueMutex);
+ if( async.nFile==0 ){
+ async.ioError = SQLITE_OK;
+ }
+ async.nFile++;
+ pthread_mutex_unlock(&async.queueMutex);
+}
+
+/*
+** This is a utility function to allocate and populate a new AsyncWrite
+** structure and insert it (via addAsyncWrite() ) into the global list.
+*/
+static int addNewAsyncWrite(
+ AsyncFile *pFile,
+ int op,
+ i64 iOffset,
+ int nByte,
+ const char *zByte
+){
+ AsyncWrite *p;
+ if( op!=ASYNC_CLOSE && async.ioError ){
+ return async.ioError;
+ }
+ p = sqlite3OsMalloc(sizeof(AsyncWrite) + (zByte?nByte:0));
+ if( !p ){
+ return SQLITE_NOMEM;
+ }
+ p->op = op;
+ p->iOffset = iOffset;
+ p->nByte = nByte;
+ p->pFile = pFile;
+ p->pNext = 0;
+ if( zByte ){
+ p->zBuf = (char *)&p[1];
+ memcpy(p->zBuf, zByte, nByte);
+ }else{
+ p->zBuf = 0;
+ }
+ addAsyncWrite(p);
+ return SQLITE_OK;
+}
+
+/*
+** Close the file. This just adds an entry to the write-op list, the file is
+** not actually closed.
+*/
+static int asyncClose(OsFile **pId){
+ return addNewAsyncWrite((AsyncFile *)*pId, ASYNC_CLOSE, 0, 0, 0);
+}
+
+/*
+** Implementation of sqlite3OsWrite() for asynchronous files. Instead of
+** writing to the underlying file, this function adds an entry to the end of
+** the global AsyncWrite list. Either SQLITE_OK or SQLITE_NOMEM may be
+** returned.
+*/
+static int asyncWrite(OsFile *id, const void *pBuf, int amt){
+ AsyncFile *pFile = (AsyncFile *)id;
+ int rc = addNewAsyncWrite(pFile, ASYNC_WRITE, pFile->iOffset, amt, pBuf);
+ pFile->iOffset += (i64)amt;
+ return rc;
+}
+
+/*
+** Truncate the file to nByte bytes in length. This just adds an entry to
+** the write-op list, no IO actually takes place.
+*/
+static int asyncTruncate(OsFile *id, i64 nByte){
+ return addNewAsyncWrite((AsyncFile *)id, ASYNC_TRUNCATE, nByte, 0, 0);
+}
+
+/*
+** Open the directory identified by zName and associate it with the
+** specified file. This just adds an entry to the write-op list, the
+** directory is opened later by sqlite3_async_flush().
+*/
+static int asyncOpenDirectory(OsFile *id, const char *zName){
+ AsyncFile *pFile = (AsyncFile *)id;
+ return addNewAsyncWrite(pFile, ASYNC_OPENDIRECTORY, 0, strlen(zName)+1,zName);
+}
+
+/*
+** Sync the file. This just adds an entry to the write-op list, the
+** sync() is done later by sqlite3_async_flush().
+*/
+static int asyncSync(OsFile *id, int fullsync){
+ return addNewAsyncWrite((AsyncFile *)id, ASYNC_SYNC, 0, fullsync, 0);
+}
+
+/*
+** Set (or clear) the full-sync flag on the underlying file. This operation
+** is queued and performed later by sqlite3_async_flush().
+*/
+static void asyncSetFullSync(OsFile *id, int value){
+ addNewAsyncWrite((AsyncFile *)id, ASYNC_SETFULLSYNC, 0, value, 0);
+}
+
+/*
+** Read data from the file. First we read from the filesystem, then adjust
+** the contents of the buffer based on ASYNC_WRITE operations in the
+** write-op queue.
+**
+** This method holds the mutex from start to finish.
+*/
+static int asyncRead(OsFile *id, void *obuf, int amt){
+ int rc = SQLITE_OK;
+ i64 filesize;
+ int nRead;
+ AsyncFile *pFile = (AsyncFile *)id;
+ OsFile *pBase = pFile->pBaseRead;
+
+ /* If an I/O error has previously occurred on this file, then all
+ ** subsequent operations fail.
+ */
+ if( async.ioError!=SQLITE_OK ){
+ return async.ioError;
+ }
+
+ /* Grab the write queue mutex for the duration of the call */
+ pthread_mutex_lock(&async.queueMutex);
+
+ if( pBase ){
+ rc = sqlite3OsFileSize(pBase, &filesize);
+ if( rc!=SQLITE_OK ){
+ goto asyncread_out;
+ }
+ rc = sqlite3OsSeek(pBase, pFile->iOffset);
+ if( rc!=SQLITE_OK ){
+ goto asyncread_out;
+ }
+ nRead = MIN(filesize - pFile->iOffset, amt);
+ if( nRead>0 ){
+ rc = sqlite3OsRead(pBase, obuf, nRead);
+ TRACE(("READ %s %d bytes at %d\n", pFile->zName, nRead, pFile->iOffset));
+ }
+ }
+
+ if( rc==SQLITE_OK ){
+ AsyncWrite *p;
+ i64 iOffset = pFile->iOffset; /* Current seek offset */
+
+ for(p=async.pQueueFirst; p; p = p->pNext){
+ if( p->pFile==pFile && p->op==ASYNC_WRITE ){
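+ /* Compute the overlap, if any, between this queued write and the
+ ** region being read: iBeginOut is the offset within the output
+ ** buffer at which overlapping data starts, and iBeginIn is the
+ ** corresponding offset within the queued write's buffer. */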
+ int iBeginOut = (p->iOffset - iOffset);
+ int iBeginIn = -iBeginOut;
+ int nCopy;
+
+ if( iBeginIn<0 ) iBeginIn = 0;
+ if( iBeginOut<0 ) iBeginOut = 0;
+ nCopy = MIN(p->nByte-iBeginIn, amt-iBeginOut);
+
+ if( nCopy>0 ){
+ memcpy(&((char *)obuf)[iBeginOut], &p->zBuf[iBeginIn], nCopy);
+ TRACE(("OVERREAD %d bytes at %d\n", nCopy, iBeginOut+iOffset));
+ }
+ }
+ }
+
+ pFile->iOffset += (i64)amt;
+ }
+
+asyncread_out:
+ pthread_mutex_unlock(&async.queueMutex);
+ return rc;
+}
+
+/*
+** Seek to the specified offset. This just adjusts the AsyncFile.iOffset
+** variable - calling seek() on the underlying file is deferred until the
+** next read() or write() operation.
+*/
+static int asyncSeek(OsFile *id, i64 offset){
+ AsyncFile *pFile = (AsyncFile *)id;
+ pFile->iOffset = offset;
+ return SQLITE_OK;
+}
+
+/*
+** Read the size of the file. First we read the size of the file system
+** entry, then adjust for any ASYNC_WRITE or ASYNC_TRUNCATE operations
+** currently in the write-op list.
+**
+** This method holds the mutex from start to finish.
+*/
+int asyncFileSize(OsFile *id, i64 *pSize){
+ int rc = SQLITE_OK;
+ i64 s = 0;
+ OsFile *pBase;
+
+ pthread_mutex_lock(&async.queueMutex);
+
+ /* Read the filesystem size from the base file. If pBaseRead is NULL, this
+ ** means the file hasn't been opened yet. In this case all relevant data
+ ** must be in the write-op queue anyway, so we can omit reading from the
+ ** file-system.
+ */
+ pBase = ((AsyncFile *)id)->pBaseRead;
+ if( pBase ){
+ rc = sqlite3OsFileSize(pBase, &s);
+ }
+
+ if( rc==SQLITE_OK ){
+ AsyncWrite *p;
+ for(p=async.pQueueFirst; p; p = p->pNext){
+ if( p->pFile==(AsyncFile *)id ){
+ switch( p->op ){
+ case ASYNC_WRITE:
+ s = MAX(p->iOffset + (i64)(p->nByte), s);
+ break;
+ case ASYNC_TRUNCATE:
+ s = MIN(s, p->iOffset);
+ break;
+ }
+ }
+ }
+ *pSize = s;
+ }
+ pthread_mutex_unlock(&async.queueMutex);
+ return rc;
+}
+
+/*
+** Return the operating system file handle. This is only used for debugging
+** at the moment anyway.
+*/
+static int asyncFileHandle(OsFile *id){
+ return sqlite3OsFileHandle(((AsyncFile *)id)->pBaseRead);
+}
+
+/*
+** No disk locking is performed. We keep track of locks locally in
+** the async.aLock hash table. Locking should appear to work the same
+** as with standard (unmodified) SQLite as long as all connections
+** come from this one process. Connections from external processes
+** cannot see our internal hash table (obviously) and will thus not
+** honor our locks.
+*/
+static int asyncLock(OsFile *id, int lockType){
+ AsyncFile *pFile = (AsyncFile*)id;
+ TRACE(("LOCK %d (%s)\n", lockType, pFile->zName));
+ pthread_mutex_lock(&async.lockMutex);
+ sqlite3HashInsert(&async.aLock, pFile->zName, pFile->nName, (void*)lockType);
+ pthread_mutex_unlock(&async.lockMutex);
+ return SQLITE_OK;
+}
+static int asyncUnlock(OsFile *id, int lockType){
+ return asyncLock(id, lockType);
+}
+
+/*
+** This function is called when the pager layer first opens a database file
+** and is checking for a hot-journal.
+*/
+static int asyncCheckReservedLock(OsFile *id){
+ AsyncFile *pFile = (AsyncFile*)id;
+ int rc;
+ pthread_mutex_lock(&async.lockMutex);
+ rc = (int)sqlite3HashFind(&async.aLock, pFile->zName, pFile->nName);
+ pthread_mutex_unlock(&async.lockMutex);
+ TRACE(("CHECK-LOCK %d (%s)\n", rc, pFile->zName));
+ return rc>SHARED_LOCK;
+}
+
+/*
+** This is broken. But sqlite3OsLockState() is only used for testing anyway.
+*/
+static int asyncLockState(OsFile *id){
+ return SQLITE_OK;
+}
+
+/*
+** The following variables hold pointers to the original versions of
+** OS-layer interface routines that are overloaded in order to create
+** the asynchronous I/O backend.
+*/
+static int (*xOrigOpenReadWrite)(const char*, OsFile**, int*) = 0;
+static int (*xOrigOpenExclusive)(const char*, OsFile**, int) = 0;
+static int (*xOrigOpenReadOnly)(const char*, OsFile**) = 0;
+static int (*xOrigDelete)(const char*) = 0;
+static int (*xOrigFileExists)(const char*) = 0;
+static int (*xOrigSyncDirectory)(const char*) = 0;
+
+/*
+** This routine does most of the work of opening a file and building
+** the OsFile structure.
+*/
+static int asyncOpenFile(
+ const char *zName, /* The name of the file to be opened */
+ OsFile **pFile, /* Put the OsFile structure here */
+ OsFile *pBaseRead, /* The real OsFile from the real I/O routine */
+ int openForWriting /* Open a second file handle for writing if true */
+){
+ int rc, i, n;
+ AsyncFile *p;
+ OsFile *pBaseWrite = 0;
+
+ static IoMethod iomethod = {
+ asyncClose,
+ asyncOpenDirectory,
+ asyncRead,
+ asyncWrite,
+ asyncSeek,
+ asyncTruncate,
+ asyncSync,
+ asyncSetFullSync,
+ asyncFileHandle,
+ asyncFileSize,
+ asyncLock,
+ asyncUnlock,
+ asyncLockState,
+ asyncCheckReservedLock
+ };
+
+ if( openForWriting && SQLITE_ASYNC_TWO_FILEHANDLES ){
+ int dummy;
+ rc = xOrigOpenReadWrite(zName, &pBaseWrite, &dummy);
+ if( rc!=SQLITE_OK ){
+ goto error_out;
+ }
+ }
+
+ n = strlen(zName);
+ for(i=n-1; i>=0 && zName[i]!='/'; i--){}
+ p = (AsyncFile *)sqlite3OsMalloc(sizeof(AsyncFile) + n - i);
+ if( !p ){
+ rc = SQLITE_NOMEM;
+ goto error_out;
+ }
+ memset(p, 0, sizeof(AsyncFile));
+ p->zName = (char*)&p[1];
+ strcpy(p->zName, &zName[i+1]);
+ p->nName = n - i;
+ p->pMethod = &iomethod;
+ p->pBaseRead = pBaseRead;
+ p->pBaseWrite = pBaseWrite;
+
+ *pFile = (OsFile *)p;
+ return SQLITE_OK;
+
+error_out:
+ assert(!p);
+ sqlite3OsClose(&pBaseRead);
+ sqlite3OsClose(&pBaseWrite);
+ *pFile = 0;
+ return rc;
+}
+
+/*
+** The async-IO backend's implementation of the three functions used to open
+** a file (xOpenExclusive, xOpenReadWrite and xOpenReadOnly). Most of the
+** work is done in function asyncOpenFile() - see above.
+*/
+static int asyncOpenExclusive(const char *z, OsFile **ppFile, int delFlag){
+ int rc = asyncOpenFile(z, ppFile, 0, 0);
+ if( rc==SQLITE_OK ){
+ AsyncFile *pFile = (AsyncFile *)(*ppFile);
+ int nByte = strlen(z)+1;
+ i64 i = (i64)(delFlag);
+ rc = addNewAsyncWrite(pFile, ASYNC_OPENEXCLUSIVE, i, nByte, z);
+ if( rc!=SQLITE_OK ){
+ sqlite3OsFree(pFile);
+ *ppFile = 0;
+ }
+ }
+ if( rc==SQLITE_OK ){
+ incrOpenFileCount();
+ }
+ return rc;
+}
+static int asyncOpenReadOnly(const char *z, OsFile **ppFile){
+ OsFile *pBase = 0;
+ int rc = xOrigOpenReadOnly(z, &pBase);
+ if( rc==SQLITE_OK ){
+ rc = asyncOpenFile(z, ppFile, pBase, 0);
+ }
+ if( rc==SQLITE_OK ){
+ incrOpenFileCount();
+ }
+ return rc;
+}
+static int asyncOpenReadWrite(const char *z, OsFile **ppFile, int *pReadOnly){
+ OsFile *pBase = 0;
+ int rc = xOrigOpenReadWrite(z, &pBase, pReadOnly);
+ if( rc==SQLITE_OK ){
+ rc = asyncOpenFile(z, ppFile, pBase, (*pReadOnly ? 0 : 1));
+ }
+ if( rc==SQLITE_OK ){
+ incrOpenFileCount();
+ }
+ return rc;
+}
+
+/*
+** Implementation of sqlite3OsDelete. Add an entry to the end of the
+** write-op queue to perform the delete.
+*/
+static int asyncDelete(const char *z){
+ return addNewAsyncWrite(0, ASYNC_DELETE, 0, strlen(z)+1, z);
+}
+
+/*
+** Implementation of sqlite3OsSyncDirectory. Add an entry to the end of the
+** write-op queue to perform the directory sync.
+*/
+static int asyncSyncDirectory(const char *z){
+ return addNewAsyncWrite(0, ASYNC_SYNCDIRECTORY, 0, strlen(z)+1, z);
+}
+
+/*
+** Implementation of sqlite3OsFileExists. Return true if file 'z' exists
+** in the file system.
+**
+** This method holds the mutex from start to finish.
+*/
+static int asyncFileExists(const char *z){
+ int ret;
+ AsyncWrite *p;
+
+ pthread_mutex_lock(&async.queueMutex);
+
+ /* See if the real file system contains the specified file. */
+ ret = xOrigFileExists(z);
+
+ for(p=async.pQueueFirst; p; p = p->pNext){
+ if( p->op==ASYNC_DELETE && 0==strcmp(p->zBuf, z) ){
+ ret = 0;
+ }else if( p->op==ASYNC_OPENEXCLUSIVE && 0==strcmp(p->zBuf, z) ){
+ ret = 1;
+ }
+ }
+
+ TRACE(("EXISTS: %s = %d\n", z, ret));
+ pthread_mutex_unlock(&async.queueMutex);
+ return ret;
+}
+
+/*
+** Call this routine to enable or disable the
+** asynchronous IO features implemented in this file.
+**
+** This routine is not even remotely threadsafe. Do not call
+** this routine while any SQLite database connections are open.
+*/
+static void asyncEnable(int enable){
+ if( enable && xOrigOpenReadWrite==0 ){
+ assert(sqlite3Os.xOpenReadWrite);
+ sqlite3HashInit(&async.aLock, SQLITE_HASH_BINARY, 1);
+ xOrigOpenReadWrite = sqlite3Os.xOpenReadWrite;
+ xOrigOpenReadOnly = sqlite3Os.xOpenReadOnly;
+ xOrigOpenExclusive = sqlite3Os.xOpenExclusive;
+ xOrigDelete = sqlite3Os.xDelete;
+ xOrigFileExists = sqlite3Os.xFileExists;
+ xOrigSyncDirectory = sqlite3Os.xSyncDirectory;
+
+ sqlite3Os.xOpenReadWrite = asyncOpenReadWrite;
+ sqlite3Os.xOpenReadOnly = asyncOpenReadOnly;
+ sqlite3Os.xOpenExclusive = asyncOpenExclusive;
+ sqlite3Os.xDelete = asyncDelete;
+ sqlite3Os.xFileExists = asyncFileExists;
+ sqlite3Os.xSyncDirectory = asyncSyncDirectory;
+ assert(sqlite3Os.xOpenReadWrite);
+ }
+ if( !enable && xOrigOpenReadWrite!=0 ){
+ assert(sqlite3Os.xOpenReadWrite);
+ sqlite3HashClear(&async.aLock);
+ sqlite3Os.xOpenReadWrite = xOrigOpenReadWrite;
+ sqlite3Os.xOpenReadOnly = xOrigOpenReadOnly;
+ sqlite3Os.xOpenExclusive = xOrigOpenExclusive;
+ sqlite3Os.xDelete = xOrigDelete;
+ sqlite3Os.xFileExists = xOrigFileExists;
+ sqlite3Os.xSyncDirectory = xOrigSyncDirectory;
+
+ xOrigOpenReadWrite = 0;
+ xOrigOpenReadOnly = 0;
+ xOrigOpenExclusive = 0;
+ xOrigDelete = 0;
+ xOrigFileExists = 0;
+ xOrigSyncDirectory = 0;
+ assert(sqlite3Os.xOpenReadWrite);
+ }
+}
+
+/*
+** This procedure runs in a separate thread, reading messages off of the
+** write queue and processing them one by one.
+**
+** If async.writerHaltNow is true, then this procedure exits
+** after processing a single message.
+**
+** If async.writerHaltWhenIdle is true, then this procedure exits when
+** the write queue is empty.
+**
+** If both of the above variables are false, this procedure runs
+** indefinitely, waiting for operations to be added to the write queue
+** and processing them in the order in which they arrive.
+**
+** An artificial delay of async.ioDelay milliseconds is inserted before
+** each write operation in order to simulate the effect of a slow disk.
+**
+** Only one instance of this procedure may be running at a time.
+*/
+static void *asyncWriterThread(void *NotUsed){
+ AsyncWrite *p = 0;
+ int rc = SQLITE_OK;
+ int holdingMutex = 0;
+
+ if( pthread_mutex_trylock(&async.writerMutex) ){
+ return 0;
+ }
+ while( async.writerHaltNow==0 ){
+ OsFile *pBase = 0;
+
+ if( !holdingMutex ){
+ pthread_mutex_lock(&async.queueMutex);
+ }
+ while( (p = async.pQueueFirst)==0 ){
+ pthread_cond_broadcast(&async.emptySignal);
+ if( async.writerHaltWhenIdle ){
+ pthread_mutex_unlock(&async.queueMutex);
+ break;
+ }else{
+ TRACE(("IDLE\n"));
+ pthread_cond_wait(&async.queueSignal, &async.queueMutex);
+ TRACE(("WAKEUP\n"));
+ }
+ }
+ if( p==0 ) break;
+ holdingMutex = 1;
+
+ /* Right now this thread is holding the mutex on the write-op queue.
+ ** Variable 'p' points to the first entry in the write-op queue. In
+ ** the general case, we hold on to the mutex for the entire body of
+ ** the loop.
+ **
+ ** However in the cases enumerated below, we relinquish the mutex,
+ ** perform the IO, and then re-request the mutex before removing 'p' from
+ ** the head of the write-op queue. The idea is to increase concurrency with
+ ** sqlite threads.
+ **
+ ** * An ASYNC_CLOSE operation.
+ ** * An ASYNC_OPENEXCLUSIVE operation. For this one, we relinquish
+ ** the mutex, call the underlying xOpenExclusive() function, then
+ ** re-acquire the mutex before setting the AsyncFile.pBaseRead
+ ** variable.
+ ** * ASYNC_SYNC and ASYNC_WRITE operations, if
+ ** SQLITE_ASYNC_TWO_FILEHANDLES was set at compile time and two
+ ** file-handles are open for the particular file being "synced".
+ */
+ if( async.ioError!=SQLITE_OK && p->op!=ASYNC_CLOSE ){
+ p->op = ASYNC_NOOP;
+ }
+ if( p->pFile ){
+ pBase = p->pFile->pBaseWrite;
+ if(
+ p->op==ASYNC_CLOSE ||
+ p->op==ASYNC_OPENEXCLUSIVE ||
+ (pBase && (p->op==ASYNC_SYNC || p->op==ASYNC_WRITE) )
+ ){
+ pthread_mutex_unlock(&async.queueMutex);
+ holdingMutex = 0;
+ }
+ if( !pBase ){
+ pBase = p->pFile->pBaseRead;
+ }
+ }
+
+ switch( p->op ){
+ case ASYNC_NOOP:
+ break;
+
+ case ASYNC_WRITE:
+ assert( pBase );
+ TRACE(("WRITE %s %d bytes at %d\n",
+ p->pFile->zName, p->nByte, p->iOffset));
+ rc = sqlite3OsSeek(pBase, p->iOffset);
+ if( rc==SQLITE_OK ){
+ rc = sqlite3OsWrite(pBase, (const void *)(p->zBuf), p->nByte);
+ }
+ break;
+
+ case ASYNC_SYNC:
+ assert( pBase );
+ TRACE(("SYNC %s\n", p->pFile->zName));
+ rc = sqlite3OsSync(pBase, p->nByte);
+ break;
+
+ case ASYNC_TRUNCATE:
+ assert( pBase );
+ TRACE(("TRUNCATE %s to %d bytes\n", p->pFile->zName, p->iOffset));
+ rc = sqlite3OsTruncate(pBase, p->iOffset);
+ break;
+
+ case ASYNC_CLOSE:
+ TRACE(("CLOSE %s\n", p->pFile->zName));
+ sqlite3OsClose(&p->pFile->pBaseWrite);
+ sqlite3OsClose(&p->pFile->pBaseRead);
+ sqlite3OsFree(p->pFile);
+ break;
+
+ case ASYNC_OPENDIRECTORY:
+ assert( pBase );
+ TRACE(("OPENDIR %s\n", p->zBuf));
+ sqlite3OsOpenDirectory(pBase, p->zBuf);
+ break;
+
+ case ASYNC_SETFULLSYNC:
+ assert( pBase );
+ TRACE(("SETFULLSYNC %s %d\n", p->pFile->zName, p->nByte));
+ sqlite3OsSetFullSync(pBase, p->nByte);
+ break;
+
+ case ASYNC_DELETE:
+ TRACE(("DELETE %s\n", p->zBuf));
+ rc = xOrigDelete(p->zBuf);
+ break;
+
+ case ASYNC_SYNCDIRECTORY:
+ TRACE(("SYNCDIR %s\n", p->zBuf));
+ rc = xOrigSyncDirectory(p->zBuf);
+ break;
+
+ case ASYNC_OPENEXCLUSIVE: {
+ AsyncFile *pFile = p->pFile;
+ int delFlag = ((p->iOffset)?1:0);
+ OsFile *pBase = 0;
+ TRACE(("OPEN %s delFlag=%d\n", p->zBuf, delFlag));
+ assert(pFile->pBaseRead==0 && pFile->pBaseWrite==0);
+ rc = xOrigOpenExclusive(p->zBuf, &pBase, delFlag);
+ assert( holdingMutex==0 );
+ pthread_mutex_lock(&async.queueMutex);
+ holdingMutex = 1;
+ if( rc==SQLITE_OK ){
+ pFile->pBaseRead = pBase;
+ }
+ break;
+ }
+
+ default: assert(!"Illegal value for AsyncWrite.op");
+ }
+
+ /* If we didn't hang on to the mutex during the IO op, obtain it now
+ ** so that the AsyncWrite structure can be safely removed from the
+ ** global write-op queue.
+ */
+ if( !holdingMutex ){
+ pthread_mutex_lock(&async.queueMutex);
+ holdingMutex = 1;
+ }
+ /* TRACE(("UNLINK %p\n", p)); */
+ if( p==async.pQueueLast ){
+ async.pQueueLast = 0;
+ }
+ async.pQueueFirst = p->pNext;
+ sqlite3OsFree(p);
+ assert( holdingMutex );
+
+ /* If an IO error has occurred, we cannot report the error back to the
+ ** connection that requested the I/O, since the error happened
+ ** asynchronously. The connection has already moved on. There
+ ** really is nobody to report the error to.
+ **
+ ** The file for which the error occurred may have been a database or
+ ** journal file. Regardless, none of the currently queued operations
+ ** associated with the same database should now be performed. Nor should
+ ** any subsequently requested IO on either a database or journal file
+ ** handle for the same database be accepted until the main database
+ ** file handle has been closed and reopened.
+ **
+ ** Furthermore, no further IO should be queued or performed on any file
+ ** handle associated with a database that may have been part of a
+ ** multi-file transaction that included the database associated with
+ ** the IO error (i.e. a database ATTACHed to the same handle at some
+ ** point in time).
+ */
+ if( rc!=SQLITE_OK ){
+ async.ioError = rc;
+ }
+
+ /* Drop the queue mutex before continuing to the next write operation
+ ** in order to give other threads a chance to work with the write queue.
+ */
+ if( !async.pQueueFirst || !async.ioError ){
+ sqlite3ApiExit(0, 0);
+ pthread_mutex_unlock(&async.queueMutex);
+ holdingMutex = 0;
+ if( async.ioDelay>0 ){
+ sqlite3OsSleep(async.ioDelay);
+ }else{
+ sched_yield();
+ }
+ }
+ }
+
+ pthread_mutex_unlock(&async.writerMutex);
+ return 0;
+}
+
+/**************************************************************************
+** The remaining code defines a Tcl interface for testing the asynchronous
+** IO implementation in this file.
+**
+** To adapt the code to a non-TCL environment, delete or comment out
+** the code that follows.
+*/
+
+/*
+** sqlite3async_enable ?YES/NO?
+**
+** Enable or disable the asynchronous I/O backend. This command is
+** not thread-safe. Do not call it while any database connections
+** are open.
+*/
+static int testAsyncEnable(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ if( objc!=1 && objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "?YES/NO?");
+ return TCL_ERROR;
+ }
+ if( objc==1 ){
+ Tcl_SetObjResult(interp, Tcl_NewBooleanObj(xOrigOpenReadWrite!=0));
+ }else{
+ int en;
+ if( Tcl_GetBooleanFromObj(interp, objv[1], &en) ) return TCL_ERROR;
+ asyncEnable(en);
+ }
+ return TCL_OK;
+}
+
+/*
+** sqlite3async_halt "now"|"idle"|"never"
+**
+** Set the conditions at which the writer thread will halt.
+*/
+static int testAsyncHalt(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ const char *zCond;
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "\"now\"|\"idle\"|\"never\"");
+ return TCL_ERROR;
+ }
+ zCond = Tcl_GetString(objv[1]);
+ if( strcmp(zCond, "now")==0 ){
+ async.writerHaltNow = 1;
+ pthread_cond_broadcast(&async.queueSignal);
+ }else if( strcmp(zCond, "idle")==0 ){
+ async.writerHaltWhenIdle = 1;
+ async.writerHaltNow = 0;
+ pthread_cond_broadcast(&async.queueSignal);
+ }else if( strcmp(zCond, "never")==0 ){
+ async.writerHaltWhenIdle = 0;
+ async.writerHaltNow = 0;
+ }else{
+ Tcl_AppendResult(interp,
+ "should be one of: \"now\", \"idle\", or \"never\"", (char*)0);
+ return TCL_ERROR;
+ }
+ return TCL_OK;
+}
+
+/*
+** sqlite3async_delay ?MS?
+**
+** Query or set the number of milliseconds of delay in the writer
+ ** thread after each write operation. The default is 0. By increasing
+ ** this delay we can simulate the effect of slow disk I/O.
+*/
+static int testAsyncDelay(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ if( objc!=1 && objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "?MS?");
+ return TCL_ERROR;
+ }
+ if( objc==1 ){
+ Tcl_SetObjResult(interp, Tcl_NewIntObj(async.ioDelay));
+ }else{
+ int ioDelay;
+ if( Tcl_GetIntFromObj(interp, objv[1], &ioDelay) ) return TCL_ERROR;
+ async.ioDelay = ioDelay;
+ }
+ return TCL_OK;
+}
+
+/*
+** sqlite3async_start
+**
+** Start a new writer thread.
+*/
+static int testAsyncStart(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ pthread_t x;
+ int rc;
+ rc = pthread_create(&x, 0, asyncWriterThread, 0);
+ if( rc ){
+ Tcl_AppendResult(interp, "failed to create the thread", 0);
+ return TCL_ERROR;
+ }
+ pthread_detach(x);
+ return TCL_OK;
+}
+
+/*
+** sqlite3async_wait
+**
+** Wait for the current writer thread to terminate.
+**
+** If the current writer thread is set to run forever then this
+** command would block forever. To prevent that, an error is returned.
+*/
+static int testAsyncWait(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ int cnt = 10;
+ if( async.writerHaltNow==0 && async.writerHaltWhenIdle==0 ){
+ Tcl_AppendResult(interp, "would block forever", (char*)0);
+ return TCL_ERROR;
+ }
+
+ while( cnt-- && !pthread_mutex_trylock(&async.writerMutex) ){
+ pthread_mutex_unlock(&async.writerMutex);
+ sched_yield();
+ }
+ if( cnt>=0 ){
+ TRACE(("WAIT\n"));
+ pthread_mutex_lock(&async.queueMutex);
+ pthread_cond_broadcast(&async.queueSignal);
+ pthread_mutex_unlock(&async.queueMutex);
+ pthread_mutex_lock(&async.writerMutex);
+ pthread_mutex_unlock(&async.writerMutex);
+ }else{
+ TRACE(("NO-WAIT\n"));
+ }
+ return TCL_OK;
+}
+
+
+#endif /* OS_UNIX and THREADSAFE and defined(SQLITE_ENABLE_REDEF_IO) */
+
+/*
+** This routine registers the custom TCL commands defined in this
+** module. This should be the only procedure visible from outside
+** of this module.
+*/
+int Sqlitetestasync_Init(Tcl_Interp *interp){
+#if OS_UNIX && THREADSAFE && defined(SQLITE_ENABLE_REDEF_IO)
+ Tcl_CreateObjCommand(interp,"sqlite3async_enable",testAsyncEnable,0,0);
+ Tcl_CreateObjCommand(interp,"sqlite3async_halt",testAsyncHalt,0,0);
+ Tcl_CreateObjCommand(interp,"sqlite3async_delay",testAsyncDelay,0,0);
+ Tcl_CreateObjCommand(interp,"sqlite3async_start",testAsyncStart,0,0);
+ Tcl_CreateObjCommand(interp,"sqlite3async_wait",testAsyncWait,0,0);
+ Tcl_LinkVar(interp, "sqlite3async_trace",
+ (char*)&sqlite3async_trace, TCL_LINK_INT);
+#endif /* OS_UNIX and THREADSAFE and defined(SQLITE_ENABLE_REDEF_IO) */
+ return TCL_OK;
+}
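
For orientation, here is a minimal sketch of how the commands registered above might be driven from a small C harness rather than from the testfixture scripts. The harness, the enable/start/halt/wait sequence, and the name driveAsyncOnce are illustrative assumptions, not part of this commit; the commands are only registered on a threadsafe UNIX build with SQLITE_ENABLE_REDEF_IO.

#include <tcl.h>

/* Hypothetical driver: create an interpreter, register the commands
** defined above, and run the async writer through one
** enable/start/halt/wait cycle.  Purely illustrative. */
static int driveAsyncOnce(void){
  Tcl_Interp *interp = Tcl_CreateInterp();
  if( Sqlitetestasync_Init(interp)!=TCL_OK ) return 1;
  Tcl_Eval(interp, "sqlite3async_enable 1");   /* install the async backend */
  Tcl_Eval(interp, "sqlite3async_start");      /* spawn the writer thread */
  /* ... open databases and perform writes here ... */
  Tcl_Eval(interp, "sqlite3async_halt idle");  /* stop once the queue drains */
  Tcl_Eval(interp, "sqlite3async_wait");       /* block until the writer exits */
  Tcl_Eval(interp, "sqlite3async_enable 0");   /* restore the original I/O routines */
  Tcl_DeleteInterp(interp);
  return 0;
}
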
Added: freeswitch/trunk/libs/sqlite/src/test_autoext.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test_autoext.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,164 @@
+/*
+** 2006 August 23
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Test extension for testing the sqlite3_auto_extension() function.
+**
+** $Id: test_autoext.c,v 1.1 2006/08/23 20:07:22 drh Exp $
+*/
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+#include "tcl.h"
+#include "sqlite3ext.h"
+static SQLITE_EXTENSION_INIT1
+
+/*
+** The sqr() SQL function returns the square of its input value.
+*/
+static void sqrFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ double r = sqlite3_value_double(argv[0]);
+ sqlite3_result_double(context, r*r);
+}
+
+/*
+** This is the entry point to register the extension for the sqr() function.
+*/
+static int sqr_init(
+ sqlite3 *db,
+ char **pzErrMsg,
+ const sqlite3_api_routines *pApi
+){
+ SQLITE_EXTENSION_INIT2(pApi);
+ sqlite3_create_function(db, "sqr", 1, SQLITE_ANY, 0, sqrFunc, 0, 0);
+ return 0;
+}
+
+/*
+** The cube() SQL function returns the cube of its input value.
+*/
+static void cubeFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ double r = sqlite3_value_double(argv[0]);
+ sqlite3_result_double(context, r*r*r);
+}
+
+/*
+** This is the entry point to register the extension for the cube() function.
+*/
+static int cube_init(
+ sqlite3 *db,
+ char **pzErrMsg,
+ const sqlite3_api_routines *pApi
+){
+ SQLITE_EXTENSION_INIT2(pApi);
+ sqlite3_create_function(db, "cube", 1, SQLITE_ANY, 0, cubeFunc, 0, 0);
+ return 0;
+}
+
+/*
+** This is a broken extension entry point
+*/
+static int broken_init(
+ sqlite3 *db,
+ char **pzErrMsg,
+ const sqlite3_api_routines *pApi
+){
+ char *zErr;
+ SQLITE_EXTENSION_INIT2(pApi);
+ zErr = sqlite3_mprintf("broken autoext!");
+ *pzErrMsg = zErr;
+ return 1;
+}
+
+/*
+** tclcmd: sqlite3_auto_extension_sqr
+**
+** Register the "sqr" extension to be loaded automatically.
+*/
+static int autoExtSqrObjCmd(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_auto_extension((void*)sqr_init);
+ return SQLITE_OK;
+}
+
+/*
+** tclcmd: sqlite3_auto_extension_cube
+**
+** Register the "cube" extension to be loaded automatically.
+*/
+static int autoExtCubeObjCmd(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_auto_extension((void*)cube_init);
+ return SQLITE_OK;
+}
+
+/*
+** tclcmd: sqlite3_auto_extension_broken
+**
+** Register the broken extension to be loaded automatically.
+*/
+static int autoExtBrokenObjCmd(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_auto_extension((void*)broken_init);
+ return SQLITE_OK;
+}
+
+/*
+** tclcmd: sqlite3_reset_auto_extension
+**
+** Reset all auto-extensions
+*/
+static int resetAutoExtObjCmd(
+ void * clientData,
+ Tcl_Interp *interp,
+ int objc,
+ Tcl_Obj *CONST objv[]
+){
+ sqlite3_reset_auto_extension();
+ return SQLITE_OK;
+}
+
+
+#endif /* SQLITE_OMIT_LOAD_EXTENSION */
+
+/*
+** This procedure registers the TCL procs defined in this file.
+*/
+int Sqlitetest_autoext_Init(Tcl_Interp *interp){
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+ Tcl_CreateObjCommand(interp, "sqlite3_auto_extension_sqr",
+ autoExtSqrObjCmd, 0, 0);
+ Tcl_CreateObjCommand(interp, "sqlite3_auto_extension_cube",
+ autoExtCubeObjCmd, 0, 0);
+ Tcl_CreateObjCommand(interp, "sqlite3_auto_extension_broken",
+ autoExtBrokenObjCmd, 0, 0);
+ Tcl_CreateObjCommand(interp, "sqlite3_reset_auto_extension",
+ resetAutoExtObjCmd, 0, 0);
+#endif
+ return TCL_OK;
+}
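
As a rough illustration of the API this file tests, the hedged sketch below registers sqr() as an auto-extension directly from C and evaluates it once. It assumes sqr_init is visible to the caller (above it is static) and that the program links directly against the SQLite core; the name demoAutoExt is invented.

#include "sqlite3.h"

/* Illustrative only: register sqr() to auto-load into every new
** connection, then open a connection and evaluate it once. */
static int demoAutoExt(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  sqlite3_auto_extension((void*)sqr_init);      /* runs on every sqlite3_open() */
  if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
  sqlite3_prepare(db, "SELECT sqr(3)", -1, &pStmt, 0);
  if( sqlite3_step(pStmt)==SQLITE_ROW ){
    /* sqlite3_column_double(pStmt, 0) is expected to be 9.0 */
  }
  sqlite3_finalize(pStmt);
  sqlite3_close(db);
  sqlite3_reset_auto_extension();               /* undo the registration */
  return 0;
}
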
Added: freeswitch/trunk/libs/sqlite/src/test_loadext.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test_loadext.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,57 @@
+/*
+** 2006 June 14
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Test extension for testing the sqlite3_load_extension() function.
+**
+** $Id: test_loadext.c,v 1.1 2006/06/14 10:38:03 danielk1977 Exp $
+*/
+
+#include "sqlite3ext.h"
+SQLITE_EXTENSION_INIT1
+
+/*
+** The half() SQL function returns half of its input value.
+*/
+static void halfFunc(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ sqlite3_result_double(context, 0.5*sqlite3_value_double(argv[0]));
+}
+
+/*
+** Extension load function.
+*/
+int testloadext_init(
+ sqlite3 *db,
+ char **pzErrMsg,
+ const sqlite3_api_routines *pApi
+){
+ SQLITE_EXTENSION_INIT2(pApi);
+ sqlite3_create_function(db, "half", 1, SQLITE_ANY, 0, halfFunc, 0, 0);
+ return 0;
+}
+
+/*
+** Another extension entry point. This one always fails.
+*/
+int testbrokenext_init(
+ sqlite3 *db,
+ char **pzErrMsg,
+ const sqlite3_api_routines *pApi
+){
+ char *zErr;
+ SQLITE_EXTENSION_INIT2(pApi);
+ zErr = sqlite3_mprintf("broken!");
+ *pzErrMsg = zErr;
+ return 1;
+}
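
For context, a hedged sketch of the loading side these entry points are written for. The shared-library name "./testloadext.so" and the helper name demoLoadExt are assumptions; the makefiles in this tree do not necessarily build such a library.

#include "sqlite3.h"

/* Sketch: pull the extension above into a running connection and
** call half(). */
static int demoLoadExt(sqlite3 *db){
  char *zErr = 0;
  sqlite3_enable_load_extension(db, 1);
  if( sqlite3_load_extension(db, "./testloadext.so",
                             "testloadext_init", &zErr)!=SQLITE_OK ){
    sqlite3_free(zErr);             /* error text is allocated by SQLite */
    return 1;
  }
  return sqlite3_exec(db, "SELECT half(7)", 0, 0, 0);
}
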
Added: freeswitch/trunk/libs/sqlite/src/test_md5.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test_md5.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,388 @@
+/*
+** SQLite uses this code for testing only. It is not a part of
+** the SQLite library. This file implements two new TCL commands
+** "md5" and "md5file" that compute md5 checksums on arbitrary text
+** and on complete files. These commands are used by the "testfixture"
+** program to help verify the correct operation of the SQLite library.
+**
+** The original use of these TCL commands was to test the ROLLBACK
+** feature of SQLite. First compute the MD5-checksum of the database.
+** Then make some changes but rollback the changes rather than commit
+** them. Compute a second MD5-checksum of the file and verify that the
+** two checksums are the same. Such is the original use of this code.
+** New uses may have been added since this comment was written.
+*/
+/*
+ * This code implements the MD5 message-digest algorithm.
+ * The algorithm is due to Ron Rivest. This code was
+ * written by Colin Plumb in 1993, no copyright is claimed.
+ * This code is in the public domain; do with it what you wish.
+ *
+ * Equivalent code is available from RSA Data Security, Inc.
+ * This code has been tested against that, and is equivalent,
+ * except that you don't need to include two pages of legalese
+ * with every copy.
+ *
+ * To compute the message digest of a chunk of bytes, declare an
+ * MD5Context structure, pass it to MD5Init, call MD5Update as
+ * needed on buffers full of bytes, and then call MD5Final, which
+ * will fill a supplied 16-byte array with the digest.
+ */
+#include <tcl.h>
+#include <string.h>
+#include "sqlite3.h"
+
+/*
+ * If compiled on a machine that doesn't have a 32-bit integer,
+ * you just set "uint32" to the appropriate datatype for an
+ * unsigned 32-bit integer. For example:
+ *
+ * cc -Duint32='unsigned long' md5.c
+ *
+ */
+#ifndef uint32
+# define uint32 unsigned int
+#endif
+
+struct Context {
+ int isInit;
+ uint32 buf[4];
+ uint32 bits[2];
+ unsigned char in[64];
+};
+typedef struct Context MD5Context;
+
+/*
+ * Note: this code is harmless on little-endian machines.
+ */
+static void byteReverse (unsigned char *buf, unsigned longs){
+ uint32 t;
+ do {
+ t = (uint32)((unsigned)buf[3]<<8 | buf[2]) << 16 |
+ ((unsigned)buf[1]<<8 | buf[0]);
+ *(uint32 *)buf = t;
+ buf += 4;
+ } while (--longs);
+}
+/* The four core functions - F1 is optimized somewhat */
+
+/* #define F1(x, y, z) (x & y | ~x & z) */
+#define F1(x, y, z) (z ^ (x & (y ^ z)))
+#define F2(x, y, z) F1(z, x, y)
+#define F3(x, y, z) (x ^ y ^ z)
+#define F4(x, y, z) (y ^ (x | ~z))
+
+/* This is the central step in the MD5 algorithm. */
+#define MD5STEP(f, w, x, y, z, data, s) \
+ ( w += f(x, y, z) + data, w = w<<s | w>>(32-s), w += x )
+
+/*
+ * The core of the MD5 algorithm, this alters an existing MD5 hash to
+ * reflect the addition of 16 longwords of new data. MD5Update blocks
+ * the data and converts bytes into longwords for this routine.
+ */
+static void MD5Transform(uint32 buf[4], const uint32 in[16]){
+ register uint32 a, b, c, d;
+
+ a = buf[0];
+ b = buf[1];
+ c = buf[2];
+ d = buf[3];
+
+ MD5STEP(F1, a, b, c, d, in[ 0]+0xd76aa478, 7);
+ MD5STEP(F1, d, a, b, c, in[ 1]+0xe8c7b756, 12);
+ MD5STEP(F1, c, d, a, b, in[ 2]+0x242070db, 17);
+ MD5STEP(F1, b, c, d, a, in[ 3]+0xc1bdceee, 22);
+ MD5STEP(F1, a, b, c, d, in[ 4]+0xf57c0faf, 7);
+ MD5STEP(F1, d, a, b, c, in[ 5]+0x4787c62a, 12);
+ MD5STEP(F1, c, d, a, b, in[ 6]+0xa8304613, 17);
+ MD5STEP(F1, b, c, d, a, in[ 7]+0xfd469501, 22);
+ MD5STEP(F1, a, b, c, d, in[ 8]+0x698098d8, 7);
+ MD5STEP(F1, d, a, b, c, in[ 9]+0x8b44f7af, 12);
+ MD5STEP(F1, c, d, a, b, in[10]+0xffff5bb1, 17);
+ MD5STEP(F1, b, c, d, a, in[11]+0x895cd7be, 22);
+ MD5STEP(F1, a, b, c, d, in[12]+0x6b901122, 7);
+ MD5STEP(F1, d, a, b, c, in[13]+0xfd987193, 12);
+ MD5STEP(F1, c, d, a, b, in[14]+0xa679438e, 17);
+ MD5STEP(F1, b, c, d, a, in[15]+0x49b40821, 22);
+
+ MD5STEP(F2, a, b, c, d, in[ 1]+0xf61e2562, 5);
+ MD5STEP(F2, d, a, b, c, in[ 6]+0xc040b340, 9);
+ MD5STEP(F2, c, d, a, b, in[11]+0x265e5a51, 14);
+ MD5STEP(F2, b, c, d, a, in[ 0]+0xe9b6c7aa, 20);
+ MD5STEP(F2, a, b, c, d, in[ 5]+0xd62f105d, 5);
+ MD5STEP(F2, d, a, b, c, in[10]+0x02441453, 9);
+ MD5STEP(F2, c, d, a, b, in[15]+0xd8a1e681, 14);
+ MD5STEP(F2, b, c, d, a, in[ 4]+0xe7d3fbc8, 20);
+ MD5STEP(F2, a, b, c, d, in[ 9]+0x21e1cde6, 5);
+ MD5STEP(F2, d, a, b, c, in[14]+0xc33707d6, 9);
+ MD5STEP(F2, c, d, a, b, in[ 3]+0xf4d50d87, 14);
+ MD5STEP(F2, b, c, d, a, in[ 8]+0x455a14ed, 20);
+ MD5STEP(F2, a, b, c, d, in[13]+0xa9e3e905, 5);
+ MD5STEP(F2, d, a, b, c, in[ 2]+0xfcefa3f8, 9);
+ MD5STEP(F2, c, d, a, b, in[ 7]+0x676f02d9, 14);
+ MD5STEP(F2, b, c, d, a, in[12]+0x8d2a4c8a, 20);
+
+ MD5STEP(F3, a, b, c, d, in[ 5]+0xfffa3942, 4);
+ MD5STEP(F3, d, a, b, c, in[ 8]+0x8771f681, 11);
+ MD5STEP(F3, c, d, a, b, in[11]+0x6d9d6122, 16);
+ MD5STEP(F3, b, c, d, a, in[14]+0xfde5380c, 23);
+ MD5STEP(F3, a, b, c, d, in[ 1]+0xa4beea44, 4);
+ MD5STEP(F3, d, a, b, c, in[ 4]+0x4bdecfa9, 11);
+ MD5STEP(F3, c, d, a, b, in[ 7]+0xf6bb4b60, 16);
+ MD5STEP(F3, b, c, d, a, in[10]+0xbebfbc70, 23);
+ MD5STEP(F3, a, b, c, d, in[13]+0x289b7ec6, 4);
+ MD5STEP(F3, d, a, b, c, in[ 0]+0xeaa127fa, 11);
+ MD5STEP(F3, c, d, a, b, in[ 3]+0xd4ef3085, 16);
+ MD5STEP(F3, b, c, d, a, in[ 6]+0x04881d05, 23);
+ MD5STEP(F3, a, b, c, d, in[ 9]+0xd9d4d039, 4);
+ MD5STEP(F3, d, a, b, c, in[12]+0xe6db99e5, 11);
+ MD5STEP(F3, c, d, a, b, in[15]+0x1fa27cf8, 16);
+ MD5STEP(F3, b, c, d, a, in[ 2]+0xc4ac5665, 23);
+
+ MD5STEP(F4, a, b, c, d, in[ 0]+0xf4292244, 6);
+ MD5STEP(F4, d, a, b, c, in[ 7]+0x432aff97, 10);
+ MD5STEP(F4, c, d, a, b, in[14]+0xab9423a7, 15);
+ MD5STEP(F4, b, c, d, a, in[ 5]+0xfc93a039, 21);
+ MD5STEP(F4, a, b, c, d, in[12]+0x655b59c3, 6);
+ MD5STEP(F4, d, a, b, c, in[ 3]+0x8f0ccc92, 10);
+ MD5STEP(F4, c, d, a, b, in[10]+0xffeff47d, 15);
+ MD5STEP(F4, b, c, d, a, in[ 1]+0x85845dd1, 21);
+ MD5STEP(F4, a, b, c, d, in[ 8]+0x6fa87e4f, 6);
+ MD5STEP(F4, d, a, b, c, in[15]+0xfe2ce6e0, 10);
+ MD5STEP(F4, c, d, a, b, in[ 6]+0xa3014314, 15);
+ MD5STEP(F4, b, c, d, a, in[13]+0x4e0811a1, 21);
+ MD5STEP(F4, a, b, c, d, in[ 4]+0xf7537e82, 6);
+ MD5STEP(F4, d, a, b, c, in[11]+0xbd3af235, 10);
+ MD5STEP(F4, c, d, a, b, in[ 2]+0x2ad7d2bb, 15);
+ MD5STEP(F4, b, c, d, a, in[ 9]+0xeb86d391, 21);
+
+ buf[0] += a;
+ buf[1] += b;
+ buf[2] += c;
+ buf[3] += d;
+}
+
+/*
+ * Start MD5 accumulation. Set bit count to 0 and buffer to mysterious
+ * initialization constants.
+ */
+static void MD5Init(MD5Context *ctx){
+ ctx->isInit = 1;
+ ctx->buf[0] = 0x67452301;
+ ctx->buf[1] = 0xefcdab89;
+ ctx->buf[2] = 0x98badcfe;
+ ctx->buf[3] = 0x10325476;
+ ctx->bits[0] = 0;
+ ctx->bits[1] = 0;
+}
+
+/*
+ * Update context to reflect the concatenation of another buffer full
+ * of bytes.
+ */
+static
+void MD5Update(MD5Context *pCtx, const unsigned char *buf, unsigned int len){
+ struct Context *ctx = (struct Context *)pCtx;
+ uint32 t;
+
+ /* Update bitcount */
+
+ t = ctx->bits[0];
+ if ((ctx->bits[0] = t + ((uint32)len << 3)) < t)
+ ctx->bits[1]++; /* Carry from low to high */
+ ctx->bits[1] += len >> 29;
+
+ t = (t >> 3) & 0x3f; /* Bytes already in ctx->in */
+
+ /* Handle any leading odd-sized chunks */
+
+ if ( t ) {
+ unsigned char *p = (unsigned char *)ctx->in + t;
+
+ t = 64-t;
+ if (len < t) {
+ memcpy(p, buf, len);
+ return;
+ }
+ memcpy(p, buf, t);
+ byteReverse(ctx->in, 16);
+ MD5Transform(ctx->buf, (uint32 *)ctx->in);
+ buf += t;
+ len -= t;
+ }
+
+ /* Process data in 64-byte chunks */
+
+ while (len >= 64) {
+ memcpy(ctx->in, buf, 64);
+ byteReverse(ctx->in, 16);
+ MD5Transform(ctx->buf, (uint32 *)ctx->in);
+ buf += 64;
+ len -= 64;
+ }
+
+ /* Handle any remaining bytes of data. */
+
+ memcpy(ctx->in, buf, len);
+}
+
+/*
+ * Final wrapup - pad to 64-byte boundary with the bit pattern
+ * 1 0* (64-bit count of bits processed, MSB-first)
+ */
+static void MD5Final(unsigned char digest[16], MD5Context *pCtx){
+ struct Context *ctx = (struct Context *)pCtx;
+ unsigned count;
+ unsigned char *p;
+
+ /* Compute number of bytes mod 64 */
+ count = (ctx->bits[0] >> 3) & 0x3F;
+
+ /* Set the first char of padding to 0x80. This is safe since there is
+ always at least one byte free */
+ p = ctx->in + count;
+ *p++ = 0x80;
+
+ /* Bytes of padding needed to make 64 bytes */
+ count = 64 - 1 - count;
+
+ /* Pad out to 56 mod 64 */
+ if (count < 8) {
+ /* Two lots of padding: Pad the first block to 64 bytes */
+ memset(p, 0, count);
+ byteReverse(ctx->in, 16);
+ MD5Transform(ctx->buf, (uint32 *)ctx->in);
+
+ /* Now fill the next block with 56 bytes */
+ memset(ctx->in, 0, 56);
+ } else {
+ /* Pad block to 56 bytes */
+ memset(p, 0, count-8);
+ }
+ byteReverse(ctx->in, 14);
+
+ /* Append length in bits and transform */
+ ((uint32 *)ctx->in)[ 14 ] = ctx->bits[0];
+ ((uint32 *)ctx->in)[ 15 ] = ctx->bits[1];
+
+ MD5Transform(ctx->buf, (uint32 *)ctx->in);
+ byteReverse((unsigned char *)ctx->buf, 4);
+ memcpy(digest, ctx->buf, 16);
+ memset(ctx, 0, sizeof(*ctx)); /* Zero the whole context, in case it's sensitive */
+}
+
+/*
+** Convert a digest into base-16. digest should be declared as
+** "unsigned char digest[16]" in the calling function. The MD5
+** digest is stored in the first 16 bytes. zBuf should
+** be "char zBuf[33]".
+*/
+static void DigestToBase16(unsigned char *digest, char *zBuf){
+ static char const zEncode[] = "0123456789abcdef";
+ int i, j;
+
+ for(j=i=0; i<16; i++){
+ int a = digest[i];
+ zBuf[j++] = zEncode[(a>>4)&0xf];
+ zBuf[j++] = zEncode[a & 0xf];
+ }
+ zBuf[j] = 0;
+}
+
+/*
+** A TCL command for md5. The argument is the text to be hashed. The
+** result is the hash in base-16 (hexadecimal).
+*/
+static int md5_cmd(void*cd, Tcl_Interp *interp, int argc, const char **argv){
+ MD5Context ctx;
+ unsigned char digest[16];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp,"wrong # args: should be \"", argv[0],
+ " TEXT\"", 0);
+ return TCL_ERROR;
+ }
+ MD5Init(&ctx);
+ MD5Update(&ctx, (unsigned char*)argv[1], (unsigned)strlen(argv[1]));
+ MD5Final(digest, &ctx);
+ DigestToBase16(digest, interp->result);
+ return TCL_OK;
+}
+
+/*
+** A TCL command to take the md5 hash of a file. The argument is the
+** name of the file.
+*/
+static int md5file_cmd(void*cd, Tcl_Interp*interp, int argc, const char **argv){
+ FILE *in;
+ MD5Context ctx;
+ unsigned char digest[16];
+ char zBuf[10240];
+
+ if( argc!=2 ){
+ Tcl_AppendResult(interp,"wrong # args: should be \"", argv[0],
+ " FILENAME\"", 0);
+ return TCL_ERROR;
+ }
+ in = fopen(argv[1],"rb");
+ if( in==0 ){
+ Tcl_AppendResult(interp,"unable to open file \"", argv[1],
+ "\" for reading", 0);
+ return TCL_ERROR;
+ }
+ MD5Init(&ctx);
+ for(;;){
+ int n;
+ n = fread(zBuf, 1, sizeof(zBuf), in);
+ if( n<=0 ) break;
+ MD5Update(&ctx, (unsigned char*)zBuf, (unsigned)n);
+ }
+ fclose(in);
+ MD5Final(digest, &ctx);
+ DigestToBase16(digest, interp->result);
+ return TCL_OK;
+}
+
+/*
+** Register the two TCL commands above with the TCL interpreter.
+*/
+int Md5_Init(Tcl_Interp *interp){
+ Tcl_CreateCommand(interp, "md5", (Tcl_CmdProc*)md5_cmd, 0, 0);
+ Tcl_CreateCommand(interp, "md5file", (Tcl_CmdProc*)md5file_cmd, 0, 0);
+ return TCL_OK;
+}
+
+/*
+** During testing, the special md5sum() aggregate function is available
+** inside SQLite. The following routines implement that function.
+*/
+static void md5step(sqlite3_context *context, int argc, sqlite3_value **argv){
+ MD5Context *p;
+ int i;
+ if( argc<1 ) return;
+ p = sqlite3_aggregate_context(context, sizeof(*p));
+ if( p==0 ) return;
+ if( !p->isInit ){
+ MD5Init(p);
+ }
+ for(i=0; i<argc; i++){
+ const char *zData = (char*)sqlite3_value_text(argv[i]);
+ if( zData ){
+ MD5Update(p, (unsigned char*)zData, strlen(zData));
+ }
+ }
+}
+static void md5finalize(sqlite3_context *context){
+ MD5Context *p;
+ unsigned char digest[16];
+ char zBuf[33];
+ p = sqlite3_aggregate_context(context, sizeof(*p));
+ MD5Final(digest,p);
+ DigestToBase16(digest, zBuf);
+ sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT);
+}
+void Md5_Register(sqlite3 *db){
+ sqlite3_create_function(db, "md5sum", -1, SQLITE_UTF8, 0, 0,
+ md5step, md5finalize);
+}
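
A short, hedged sketch of how the md5sum() aggregate registered by Md5_Register() might be used from C. The table t1 and column x are invented for illustration.

#include "sqlite3.h"

/* Sketch: register md5sum() on a connection and hash a column. */
static int demoMd5sum(sqlite3 *db){
  sqlite3_stmt *pStmt;
  int rc;
  Md5_Register(db);                                     /* defined above */
  rc = sqlite3_prepare(db, "SELECT md5sum(x) FROM t1", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  if( sqlite3_step(pStmt)==SQLITE_ROW ){
    /* a 32-character lower-case hex digest, as produced by
    ** DigestToBase16() above */
    const unsigned char *zDigest = sqlite3_column_text(pStmt, 0);
    (void)zDigest;
  }
  return sqlite3_finalize(pStmt);
}
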
Added: freeswitch/trunk/libs/sqlite/src/test_schema.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test_schema.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,361 @@
+/*
+** 2006 June 10
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Code for testing the virtual table interfaces. This code
+** is not included in the SQLite library. It is used for automated
+** testing of the SQLite library.
+**
+** $Id: test_schema.c,v 1.11 2006/09/11 00:34:22 drh Exp $
+*/
+
+/* The code in this file defines a sqlite3 virtual-table module that
+** provides a read-only view of the current database schema. There is one
+** row in the schema table for each column in the database schema.
+*/
+#define SCHEMA \
+"CREATE TABLE x(" \
+ "database," /* Name of database (i.e. main, temp etc.) */ \
+ "tablename," /* Name of table */ \
+ "cid," /* Column number (from left-to-right, 0 upward) */ \
+ "name," /* Column name */ \
+ "type," /* Specified type (i.e. VARCHAR(32)) */ \
+ "not_null," /* Boolean. True if NOT NULL was specified */ \
+ "dflt_value," /* Default value for this column */ \
+ "pk" /* True if this column is part of the primary key */ \
+")"
+
+/* If SQLITE_TEST is defined this code is preprocessed for use as part
+** of the sqlite test binary "testfixture". Otherwise it is preprocessed
+** to be compiled into an sqlite dynamic extension.
+*/
+#ifdef SQLITE_TEST
+ #include "sqliteInt.h"
+ #include "tcl.h"
+ #define MALLOC(x) sqliteMallocRaw(x)
+ #define FREE(x) sqliteFree(x)
+#else
+ #include "sqlite3ext.h"
+ SQLITE_EXTENSION_INIT1
+ #define MALLOC(x) malloc(x)
+ #define FREE(x) free(x)
+#endif
+
+#include <stdlib.h>
+#include <string.h>
+#include <assert.h>
+
+typedef struct schema_vtab schema_vtab;
+typedef struct schema_cursor schema_cursor;
+
+/* A schema table object */
+struct schema_vtab {
+ sqlite3_vtab base;
+ sqlite3 *db;
+};
+
+/* A schema table cursor object */
+struct schema_cursor {
+ sqlite3_vtab_cursor base;
+ sqlite3_stmt *pDbList;
+ sqlite3_stmt *pTableList;
+ sqlite3_stmt *pColumnList;
+ int rowid;
+};
+
+/*
+** Table destructor for the schema module.
+*/
+static int schemaDestroy(sqlite3_vtab *pVtab){
+ FREE(pVtab);
+ return 0;
+}
+
+/*
+** Table constructor for the schema module.
+*/
+static int schemaCreate(
+ sqlite3 *db,
+ void *pAux,
+ int argc, const char *const*argv,
+ sqlite3_vtab **ppVtab,
+ char **pzErr
+){
+ int rc = SQLITE_NOMEM;
+ schema_vtab *pVtab = MALLOC(sizeof(schema_vtab));
+ if( pVtab ){
+ memset(pVtab, 0, sizeof(schema_vtab));
+ pVtab->db = db;
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ rc = sqlite3_declare_vtab(db, SCHEMA);
+#endif
+ }
+ *ppVtab = (sqlite3_vtab *)pVtab;
+ return rc;
+}
+
+/*
+** Open a new cursor on the schema table.
+*/
+static int schemaOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor){
+ int rc = SQLITE_NOMEM;
+ schema_cursor *pCur;
+ pCur = MALLOC(sizeof(schema_cursor));
+ if( pCur ){
+ memset(pCur, 0, sizeof(schema_cursor));
+ *ppCursor = (sqlite3_vtab_cursor *)pCur;
+ rc = SQLITE_OK;
+ }
+ return rc;
+}
+
+/*
+** Close a schema table cursor.
+*/
+static int schemaClose(sqlite3_vtab_cursor *cur){
+ schema_cursor *pCur = (schema_cursor *)cur;
+ sqlite3_finalize(pCur->pDbList);
+ sqlite3_finalize(pCur->pTableList);
+ sqlite3_finalize(pCur->pColumnList);
+ FREE(pCur);
+ return SQLITE_OK;
+}
+
+/*
+** Retrieve a column of data.
+*/
+static int schemaColumn(sqlite3_vtab_cursor *cur, sqlite3_context *ctx, int i){
+ schema_cursor *pCur = (schema_cursor *)cur;
+ switch( i ){
+ case 0:
+ sqlite3_result_value(ctx, sqlite3_column_value(pCur->pDbList, 1));
+ break;
+ case 1:
+ sqlite3_result_value(ctx, sqlite3_column_value(pCur->pTableList, 0));
+ break;
+ default:
+ sqlite3_result_value(ctx, sqlite3_column_value(pCur->pColumnList, i-2));
+ break;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Retrieve the current rowid.
+*/
+static int schemaRowid(sqlite3_vtab_cursor *cur, sqlite_int64 *pRowid){
+ schema_cursor *pCur = (schema_cursor *)cur;
+ *pRowid = pCur->rowid;
+ return SQLITE_OK;
+}
+
+static int finalize(sqlite3_stmt **ppStmt){
+ int rc = sqlite3_finalize(*ppStmt);
+ *ppStmt = 0;
+ return rc;
+}
+
+static int schemaEof(sqlite3_vtab_cursor *cur){
+ schema_cursor *pCur = (schema_cursor *)cur;
+ return (pCur->pDbList ? 0 : 1);
+}
+
+/*
+** Advance the cursor to the next row.
+*/
+static int schemaNext(sqlite3_vtab_cursor *cur){
+ int rc = SQLITE_OK;
+ schema_cursor *pCur = (schema_cursor *)cur;
+ schema_vtab *pVtab = (schema_vtab *)(cur->pVtab);
+ char *zSql = 0;
+
+ while( !pCur->pColumnList || SQLITE_ROW!=sqlite3_step(pCur->pColumnList) ){
+ if( SQLITE_OK!=(rc = finalize(&pCur->pColumnList)) ) goto next_exit;
+
+ while( !pCur->pTableList || SQLITE_ROW!=sqlite3_step(pCur->pTableList) ){
+ if( SQLITE_OK!=(rc = finalize(&pCur->pTableList)) ) goto next_exit;
+
+ assert(pCur->pDbList);
+ while( SQLITE_ROW!=sqlite3_step(pCur->pDbList) ){
+ rc = finalize(&pCur->pDbList);
+ goto next_exit;
+ }
+
+ /* Set zSql to the SQL to pull the list of tables from the
+ ** sqlite_master (or sqlite_temp_master) table of the database
+ ** identified by the row pointed to by the SQL statement pCur->pDbList
+ ** (iterating through a "PRAGMA database_list;" statement).
+ */
+ if( sqlite3_column_int(pCur->pDbList, 0)==1 ){
+ zSql = sqlite3_mprintf(
+ "SELECT name FROM sqlite_temp_master WHERE type='table'"
+ );
+ }else{
+ sqlite3_stmt *pDbList = pCur->pDbList;
+ zSql = sqlite3_mprintf(
+ "SELECT name FROM %Q.sqlite_master WHERE type='table'",
+ sqlite3_column_text(pDbList, 1)
+ );
+ }
+ if( !zSql ){
+ rc = SQLITE_NOMEM;
+ goto next_exit;
+ }
+
+ rc = sqlite3_prepare(pVtab->db, zSql, -1, &pCur->pTableList, 0);
+ sqlite3_free(zSql);
+ if( rc!=SQLITE_OK ) goto next_exit;
+ }
+
+ /* Set zSql to the SQL for the table_info pragma on the table currently
+ ** identified by the rows pointed to by statements pCur->pDbList and
+ ** pCur->pTableList.
+ */
+ zSql = sqlite3_mprintf("PRAGMA %Q.table_info(%Q)",
+ sqlite3_column_text(pCur->pDbList, 1),
+ sqlite3_column_text(pCur->pTableList, 0)
+ );
+
+ if( !zSql ){
+ rc = SQLITE_NOMEM;
+ goto next_exit;
+ }
+ rc = sqlite3_prepare(pVtab->db, zSql, -1, &pCur->pColumnList, 0);
+ sqlite3_free(zSql);
+ if( rc!=SQLITE_OK ) goto next_exit;
+ }
+ pCur->rowid++;
+
+next_exit:
+ /* TODO: Handle rc */
+ return rc;
+}
+
+/*
+** Reset a schema table cursor.
+*/
+static int schemaFilter(
+ sqlite3_vtab_cursor *pVtabCursor,
+ int idxNum, const char *idxStr,
+ int argc, sqlite3_value **argv
+){
+ int rc;
+ schema_vtab *pVtab = (schema_vtab *)(pVtabCursor->pVtab);
+ schema_cursor *pCur = (schema_cursor *)pVtabCursor;
+ pCur->rowid = 0;
+ finalize(&pCur->pTableList);
+ finalize(&pCur->pColumnList);
+ finalize(&pCur->pDbList);
+ rc = sqlite3_prepare(pVtab->db,"PRAGMA database_list", -1, &pCur->pDbList, 0);
+ return (rc==SQLITE_OK ? schemaNext(pVtabCursor) : rc);
+}
+
+/*
+** Analyse the WHERE condition.
+*/
+static int schemaBestIndex(sqlite3_vtab *tab, sqlite3_index_info *pIdxInfo){
+ return SQLITE_OK;
+}
+
+/*
+** A virtual table module that provides a read-only view of the
+** current database schema.
+*/
+static sqlite3_module schemaModule = {
+ 0, /* iVersion */
+ schemaCreate,
+ schemaCreate,
+ schemaBestIndex,
+ schemaDestroy,
+ schemaDestroy,
+ schemaOpen, /* xOpen - open a cursor */
+ schemaClose, /* xClose - close a cursor */
+ schemaFilter, /* xFilter - configure scan constraints */
+ schemaNext, /* xNext - advance a cursor */
+ schemaEof, /* xEof */
+ schemaColumn, /* xColumn - read data */
+ schemaRowid, /* xRowid - read data */
+ 0, /* xUpdate */
+ 0, /* xBegin */
+ 0, /* xSync */
+ 0, /* xCommit */
+ 0, /* xRollback */
+ 0, /* xFindMethod */
+};
+
+
+#ifdef SQLITE_TEST
+
+/*
+** Decode a pointer to an sqlite3 object.
+*/
+static int getDbPointer(Tcl_Interp *interp, const char *zA, sqlite3 **ppDb){
+ *ppDb = (sqlite3*)sqlite3TextToPtr(zA);
+ return TCL_OK;
+}
+
+/*
+** Register the schema virtual table module.
+*/
+static int register_schema_module(
+ ClientData clientData, /* Not used */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ sqlite3 *db;
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "DB");
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ sqlite3_create_module(db, "schema", &schemaModule, 0);
+#endif
+ return TCL_OK;
+}
+
+/*
+** Register commands with the TCL interpreter.
+*/
+int Sqlitetestschema_Init(Tcl_Interp *interp){
+ static struct {
+ char *zName;
+ Tcl_ObjCmdProc *xProc;
+ void *clientData;
+ } aObjCmd[] = {
+ { "register_schema_module", register_schema_module, 0 },
+ };
+ int i;
+ for(i=0; i<sizeof(aObjCmd)/sizeof(aObjCmd[0]); i++){
+ Tcl_CreateObjCommand(interp, aObjCmd[i].zName,
+ aObjCmd[i].xProc, aObjCmd[i].clientData, 0);
+ }
+ return TCL_OK;
+}
+
+#else
+
+/*
+** Extension load function.
+*/
+int sqlite3_extension_init(
+ sqlite3 *db,
+ char **pzErrMsg,
+ const sqlite3_api_routines *pApi
+){
+ SQLITE_EXTENSION_INIT2(pApi);
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ sqlite3_create_module(db, "schema", &schemaModule, 0);
+#endif
+ return 0;
+}
+
+#endif
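
To make the module's intent concrete, here is a hedged sketch of registering and querying it from C. It assumes code in the same translation unit as the static schemaModule above; the virtual-table name allcols is arbitrary.

#include "sqlite3.h"

/* Sketch: register the schema module and list every column in the
** database. */
static int demoSchemaVtab(sqlite3 *db){
  int rc = sqlite3_create_module(db, "schema", &schemaModule, 0);
  if( rc!=SQLITE_OK ) return rc;
  return sqlite3_exec(db,
      "CREATE VIRTUAL TABLE allcols USING schema;"
      "SELECT tablename, cid, name FROM allcols;",
      0, 0, 0);
}
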
Added: freeswitch/trunk/libs/sqlite/src/test_server.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test_server.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,485 @@
+/*
+** 2006 January 07
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+******************************************************************************
+**
+** This file contains demonstration code. Nothing in this file gets compiled
+** or linked into the SQLite library unless you use a non-standard option:
+**
+** -DSQLITE_SERVER=1
+**
+** The configure script will never generate a Makefile with the option
+** above. You will need to manually modify the Makefile if you want to
+** include any of the code from this file in your project. Or, at your
+** option, you may copy and paste the code from this file and
+** thereby avoid a recompile of SQLite.
+**
+**
+** This source file demonstrates how to use SQLite to create an SQL database
+** server thread in a multi-threaded program. One or more client threads
+** send messages to the server thread and the server thread processes those
+** messages in the order received and returns the results to the client.
+**
+** One might ask: "Why bother? Why not just let each thread connect
+** to the database directly?" There are several reasons to
+** prefer the client/server approach.
+**
+** (1) Some systems (ex: Redhat9) have broken threading implementations
+** that prevent SQLite database connections from being used in
+** a thread different from the one where they were created. With
+** the client/server approach, all database connections are created
+** and used within the server thread. Client calls to the database
+** can be made from multiple threads (though not at the same time!)
+**
+** (2) Beginning with SQLite version 3.3.0, when two or more
+** connections to the same database occur within the same thread,
+** they can optionally share their database cache. This reduces
+** I/O and memory requirements. Cache sharing is controlled using
+** the sqlite3_enable_shared_cache() API.
+**
+** (3) Database connections on a shared cache use table-level locking
+** instead of file-level locking for improved concurrency.
+**
+** (4) Database connections on a shared cache can optionally be
+** set to READ UNCOMMITTED isolation. (The default isolation for
+** SQLite is SERIALIZABLE.) When this occurs, readers will
+** never be blocked by a writer and writers will not be
+** blocked by readers. There can still only be a single writer
+** at a time, but multiple readers can simultaneously exist with
+** that writer. This is a huge increase in concurrency.
+**
+** To summarize the rationale for using a client/server approach: prior
+** to SQLite version 3.3.0 it probably was not worth the trouble. But
+** with SQLite version 3.3.0 and beyond you can get significant performance
+** and concurrency improvements and memory usage reductions by going
+** client/server.
+**
+** Note: The extra features of version 3.3.0 described by points (2)
+** through (4) above are only available if you compile without the
+** option -DSQLITE_OMIT_SHARED_CACHE.
+**
+** Here is how the client/server approach works: The database server
+** thread is started on this procedure:
+**
+** void *sqlite3_server(void *NotUsed);
+**
+** The sqlite3_server procedure runs as long as the g.serverHalt variable
+** is false. A mutex is used to make sure no more than one server runs
+** at a time. The server waits for messages to arrive on a message
+** queue and processes the messages in order.
+**
+** Two convenience routines are provided for starting and stopping the
+** server thread:
+**
+** void sqlite3_server_start(void);
+** void sqlite3_server_stop(void);
+**
+** Both of the convenience routines return immediately. Neither will
+** ever give an error. If a server is already started or already halted,
+** then the routines are effectively no-ops.
+**
+** Clients use the following interfaces:
+**
+** sqlite3_client_open
+** sqlite3_client_prepare
+** sqlite3_client_step
+** sqlite3_client_reset
+** sqlite3_client_finalize
+** sqlite3_client_close
+**
+** These interfaces work exactly like the standard core SQLite interfaces
+** having the same names without the "_client_" infix. Many other SQLite
+** interfaces can be used directly without having to send messages to the
+** server as long as SQLITE_ENABLE_MEMORY_MANAGEMENT is not defined.
+** The following interfaces fall into this second category:
+**
+** sqlite3_bind_*
+** sqlite3_changes
+** sqlite3_clear_bindings
+** sqlite3_column_*
+** sqlite3_complete
+** sqlite3_create_collation
+** sqlite3_create_function
+** sqlite3_data_count
+** sqlite3_db_handle
+** sqlite3_errcode
+** sqlite3_errmsg
+** sqlite3_last_insert_rowid
+** sqlite3_total_changes
+** sqlite3_transfer_bindings
+**
+** A single SQLite connection (an sqlite3* object) or an SQLite statement
+** (an sqlite3_stmt* object) should only be passed to a single interface
+** function at a time. The connections and statements can be passed from
+** any thread to any of the functions listed in the second group above as
+** long as the same connection is not in use by two threads at once and
+** as long as SQLITE_ENABLE_MEMORY_MANAGEMENT is not defined. Additional
+** information about the SQLITE_ENABLE_MEMORY_MANAGEMENT constraint is
+** below.
+**
+** The busy handler for all database connections should remain turned
+** off. That means that any lock contention will cause the associated
+** sqlite3_client_step() call to return immediately with an SQLITE_BUSY
+** error code. If a busy handler is enabled and lock contention occurs,
+** then the entire server thread will block. This will cause not only
+** the requesting client to block but every other database client as
+** well. It is possible to enhance the code below so that lock
+** contention will cause the message to be placed back on the top of
+** the queue to be tried again later. But such enhanced processing is
+** not included here, in order to keep the example simple.
+**
+** This example code assumes the use of pthreads. Pthreads
+** implementations are available for windows. (See, for example
+** http://sourceware.org/pthreads-win32/announcement.html.) Or, you
+** can translate the locking and thread synchronization code to use
+** windows primitives easily enough. The details are left as an
+** exercise to the reader.
+**
+**** Restrictions Associated With SQLITE_ENABLE_MEMORY_MANAGEMENT ****
+**
+** If you compile with SQLITE_ENABLE_MEMORY_MANAGEMENT defined, then
+** SQLite includes code that tracks how much memory is being used by
+** each thread. These memory counts can become confused if memory
+** is allocated by one thread and then freed by another. For that
+** reason, when SQLITE_ENABLE_MEMORY_MANAGEMENT is used, all operations
+** that might allocate or free memory should be performed in the same
+** thread that originally created the database connection. In that case,
+** many of the operations that are listed above as safe to be performed
+** in separate threads would need to be sent over to the server to be
+** done there. If SQLITE_ENABLE_MEMORY_MANAGEMENT is defined, then
+** the following functions can be used safely from different threads
+** without messing up the allocation counts:
+**
+** sqlite3_bind_parameter_name
+** sqlite3_bind_parameter_index
+** sqlite3_changes
+** sqlite3_column_blob
+** sqlite3_column_count
+** sqlite3_complete
+** sqlite3_data_count
+** sqlite3_db_handle
+** sqlite3_errcode
+** sqlite3_errmsg
+** sqlite3_last_insert_rowid
+** sqlite3_total_changes
+**
+** The remaining functions are not thread-safe when memory management
+** is enabled. So one would have to define some new interface routines
+** along the following lines:
+**
+** sqlite3_client_bind_*
+** sqlite3_client_clear_bindings
+** sqlite3_client_column_*
+** sqlite3_client_create_collation
+** sqlite3_client_create_function
+** sqlite3_client_transfer_bindings
+**
+** The example code in this file is intended for use with memory
+** management turned off. So the implementation of these additional
+** client interfaces is left as an exercise to the reader.
+**
+** It may seem surprising to the reader that the list of safe functions
+** above does not include things like sqlite3_bind_int() or
+** sqlite3_column_int(). But those routines might, in fact, allocate
+** or deallocate memory. In the case of sqlite3_bind_int(), if the
+** parameter was previously bound to a string that string might need
+** to be deallocated before the new integer value is inserted. In
+** the case of sqlite3_column_int(), the value of the column might be
+** a UTF-16 string which will need to be converted to UTF-8 then into
+** an integer.
+*/
+
+/*
+** Only compile the code in this file on UNIX with a THREADSAFE build
+** and only if the SQLITE_SERVER macro is defined.
+*/
+#if defined(SQLITE_SERVER) && !defined(SQLITE_OMIT_SHARED_CACHE)
+#if defined(OS_UNIX) && OS_UNIX && defined(THREADSAFE) && THREADSAFE
+
+/*
+** We require only pthreads and the public interface of SQLite.
+*/
+#include <pthread.h>
+#include "sqlite3.h"
+
+/*
+** Messages are passed from client to server and back again as
+** instances of the following structure.
+*/
+typedef struct SqlMessage SqlMessage;
+struct SqlMessage {
+ int op; /* Opcode for the message */
+ sqlite3 *pDb; /* The SQLite connection */
+ sqlite3_stmt *pStmt; /* A specific statement */
+ int errCode; /* Error code returned */
+ const char *zIn; /* Input filename or SQL statement */
+ int nByte; /* Size of the zIn parameter for prepare() */
+ const char *zOut; /* Tail of the SQL statement */
+ SqlMessage *pNext; /* Next message in the queue */
+ SqlMessage *pPrev; /* Previous message in the queue */
+ pthread_mutex_t clientMutex; /* Hold this mutex to access the message */
+ pthread_cond_t clientWakeup; /* Signal to wake up the client */
+};
+
+/*
+** Legal values for SqlMessage.op
+*/
+#define MSG_Open 1 /* sqlite3_open(zIn, &pDb) */
+#define MSG_Prepare 2 /* sqlite3_prepare(pDb, zIn, nByte, &pStmt, &zOut) */
+#define MSG_Step 3 /* sqlite3_step(pStmt) */
+#define MSG_Reset 4 /* sqlite3_reset(pStmt) */
+#define MSG_Finalize 5 /* sqlite3_finalize(pStmt) */
+#define MSG_Close 6 /* sqlite3_close(pDb) */
+#define MSG_Done 7 /* Server has finished with this message */
+
+
+/*
+** State information about the server is stored in a static variable
+** named "g" as follows:
+*/
+static struct ServerState {
+ pthread_mutex_t queueMutex; /* Hold this mutex to access the msg queue */
+ pthread_mutex_t serverMutex; /* Held by the server while it is running */
+ pthread_cond_t serverWakeup; /* Signal this condvar to wake up the server */
+ volatile int serverHalt; /* Server halts itself when true */
+ SqlMessage *pQueueHead; /* Head of the message queue */
+ SqlMessage *pQueueTail; /* Tail of the message queue */
+} g = {
+ PTHREAD_MUTEX_INITIALIZER,
+ PTHREAD_MUTEX_INITIALIZER,
+ PTHREAD_COND_INITIALIZER,
+};
+
+/*
+** Send a message to the server. Block until we get a reply.
+**
+** The mutex and condition variable in the message are uninitialized
+** when this routine is called. This routine takes care of
+** initializing them and destroying them when it has finished.
+*/
+static void sendToServer(SqlMessage *pMsg){
+ /* Initialize the mutex and condition variable on the message
+ */
+ pthread_mutex_init(&pMsg->clientMutex, 0);
+ pthread_cond_init(&pMsg->clientWakeup, 0);
+
+ /* Add the message to the head of the server's message queue.
+ */
+ pthread_mutex_lock(&g.queueMutex);
+ pMsg->pNext = g.pQueueHead;
+ if( g.pQueueHead==0 ){
+ g.pQueueTail = pMsg;
+ }else{
+ g.pQueueHead->pPrev = pMsg;
+ }
+ pMsg->pPrev = 0;
+ g.pQueueHead = pMsg;
+ pthread_mutex_unlock(&g.queueMutex);
+
+ /* Signal the server that the new message has been queued, then
+ ** block waiting for the server to process the message.
+ */
+ pthread_mutex_lock(&pMsg->clientMutex);
+ pthread_cond_signal(&g.serverWakeup);
+ while( pMsg->op!=MSG_Done ){
+ pthread_cond_wait(&pMsg->clientWakeup, &pMsg->clientMutex);
+ }
+ pthread_mutex_unlock(&pMsg->clientMutex);
+
+ /* Destroy the mutex and condition variable of the message.
+ */
+ pthread_mutex_destroy(&pMsg->clientMutex);
+ pthread_cond_destroy(&pMsg->clientWakeup);
+}
+
+/*
+** The following 6 routines are client-side implementations of the
+** core SQLite interfaces:
+**
+** sqlite3_open
+** sqlite3_prepare
+** sqlite3_step
+** sqlite3_reset
+** sqlite3_finalize
+** sqlite3_close
+**
+** Clients should use the following client-side routines instead of
+** the core routines above.
+**
+** sqlite3_client_open
+** sqlite3_client_prepare
+** sqlite3_client_step
+** sqlite3_client_reset
+** sqlite3_client_finalize
+** sqlite3_client_close
+**
+** Each of these routines creates a message for the desired operation,
+** sends that message to the server, waits for the server to process
+** then message and return a response.
+*/
+int sqlite3_client_open(const char *zDatabaseName, sqlite3 **ppDb){
+ SqlMessage msg;
+ msg.op = MSG_Open;
+ msg.zIn = zDatabaseName;
+ sendToServer(&msg);
+ *ppDb = msg.pDb;
+ return msg.errCode;
+}
+int sqlite3_client_prepare(
+ sqlite3 *pDb,
+ const char *zSql,
+ int nByte,
+ sqlite3_stmt **ppStmt,
+ const char **pzTail
+){
+ SqlMessage msg;
+ msg.op = MSG_Prepare;
+ msg.pDb = pDb;
+ msg.zIn = zSql;
+ msg.nByte = nByte;
+ sendToServer(&msg);
+ *ppStmt = msg.pStmt;
+ if( pzTail ) *pzTail = msg.zOut;
+ return msg.errCode;
+}
+int sqlite3_client_step(sqlite3_stmt *pStmt){
+ SqlMessage msg;
+ msg.op = MSG_Step;
+ msg.pStmt = pStmt;
+ sendToServer(&msg);
+ return msg.errCode;
+}
+int sqlite3_client_reset(sqlite3_stmt *pStmt){
+ SqlMessage msg;
+ msg.op = MSG_Reset;
+ msg.pStmt = pStmt;
+ sendToServer(&msg);
+ return msg.errCode;
+}
+int sqlite3_client_finalize(sqlite3_stmt *pStmt){
+ SqlMessage msg;
+ msg.op = MSG_Finalize;
+ msg.pStmt = pStmt;
+ sendToServer(&msg);
+ return msg.errCode;
+}
+int sqlite3_client_close(sqlite3 *pDb){
+ SqlMessage msg;
+ msg.op = MSG_Close;
+ msg.pDb = pDb;
+ sendToServer(&msg);
+ return msg.errCode;
+}
+
+/*
+** This routine implements the server. To start the server, first
+** make sure g.serverHalt is false, then create a new detached thread
+** on this procedure. See the sqlite3_server_start() routine below
+** for an example. This procedure loops until g.serverHalt becomes
+** true.
+*/
+void *sqlite3_server(void *NotUsed){
+ sqlite3_enable_shared_cache(1);
+ if( pthread_mutex_trylock(&g.serverMutex) ){
+ sqlite3_enable_shared_cache(0);
+ return 0; /* Another server is already running */
+ }
+ while( !g.serverHalt ){
+ SqlMessage *pMsg;
+
+ /* Remove the last message from the message queue.
+ */
+ pthread_mutex_lock(&g.queueMutex);
+ while( g.pQueueTail==0 && g.serverHalt==0 ){
+ pthread_cond_wait(&g.serverWakeup, &g.queueMutex);
+ }
+ pMsg = g.pQueueTail;
+ if( pMsg ){
+ if( pMsg->pPrev ){
+ pMsg->pPrev->pNext = 0;
+ }else{
+ g.pQueueHead = 0;
+ }
+ g.pQueueTail = pMsg->pPrev;
+ }
+ pthread_mutex_unlock(&g.queueMutex);
+ if( pMsg==0 ) break;
+
+ /* Process the message just removed
+ */
+ pthread_mutex_lock(&pMsg->clientMutex);
+ switch( pMsg->op ){
+ case MSG_Open: {
+ pMsg->errCode = sqlite3_open(pMsg->zIn, &pMsg->pDb);
+ break;
+ }
+ case MSG_Prepare: {
+ pMsg->errCode = sqlite3_prepare(pMsg->pDb, pMsg->zIn, pMsg->nByte,
+ &pMsg->pStmt, &pMsg->zOut);
+ break;
+ }
+ case MSG_Step: {
+ pMsg->errCode = sqlite3_step(pMsg->pStmt);
+ break;
+ }
+ case MSG_Reset: {
+ pMsg->errCode = sqlite3_reset(pMsg->pStmt);
+ break;
+ }
+ case MSG_Finalize: {
+ pMsg->errCode = sqlite3_finalize(pMsg->pStmt);
+ break;
+ }
+ case MSG_Close: {
+ pMsg->errCode = sqlite3_close(pMsg->pDb);
+ break;
+ }
+ }
+
+ /* Signal the client that the message has been processed.
+ */
+ pMsg->op = MSG_Done;
+ pthread_mutex_unlock(&pMsg->clientMutex);
+ pthread_cond_signal(&pMsg->clientWakeup);
+ }
+ pthread_mutex_unlock(&g.serverMutex);
+ sqlite3_thread_cleanup();
+ return 0;
+}
+
+/*
+** Start a server thread if one is not already running. If there
+** is already a server thread running, the new thread will quickly
+** die and this routine is effectively a no-op.
+*/
+void sqlite3_server_start(void){
+ pthread_t x;
+ int rc;
+ g.serverHalt = 0;
+ rc = pthread_create(&x, 0, sqlite3_server, 0);
+ if( rc==0 ){
+ pthread_detach(x);
+ }
+}
+
+/*
+** If a server thread is running, then stop it. If no server is
+** running, this routine is effectively a no-op.
+**
+** This routine returns immediately without waiting for the server
+** thread to stop. But be assured that the server will eventually stop.
+*/
+void sqlite3_server_stop(void){
+ g.serverHalt = 1;
+ pthread_cond_broadcast(&g.serverWakeup);
+}
+
+#endif /* defined(OS_UNIX) && OS_UNIX && defined(THREADSAFE) && THREADSAFE */
+#endif /* defined(SQLITE_SERVER) */
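
Tying the pieces together, a hedged sketch of what a client thread's call sequence might look like. The database name test.db, the query, and the name demoClient are illustrative only; error handling is abbreviated.

#include "sqlite3.h"

/* Sketch of a client thread using the server defined above. */
static int demoClient(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  int rc;
  sqlite3_server_start();                  /* no-op if a server is running */
  rc = sqlite3_client_open("test.db", &db);
  if( rc!=SQLITE_OK ) return rc;
  rc = sqlite3_client_prepare(db, "SELECT count(*) FROM sqlite_master",
                              -1, &pStmt, 0);
  if( rc==SQLITE_OK ){
    while( sqlite3_client_step(pStmt)==SQLITE_ROW ){
      /* per the notes above, sqlite3_column_*() may be called directly
      ** here when memory management is not enabled */
    }
    sqlite3_client_finalize(pStmt);
  }
  sqlite3_client_close(db);
  sqlite3_server_stop();                   /* ask the server to halt */
  return rc;
}
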
Added: freeswitch/trunk/libs/sqlite/src/test_tclvar.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/test_tclvar.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,327 @@
+/*
+** 2006 June 13
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Code for testing the virtual table interfaces. This code
+** is not included in the SQLite library. It is used for automated
+** testing of the SQLite library.
+**
+** The emphasis of this file is a virtual table that provides
+** access to TCL variables.
+**
+** $Id: test_tclvar.c,v 1.10 2006/09/11 00:34:22 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "tcl.h"
+#include "os.h"
+#include <stdlib.h>
+#include <string.h>
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+
+typedef struct tclvar_vtab tclvar_vtab;
+typedef struct tclvar_cursor tclvar_cursor;
+
+/*
+** A tclvar virtual-table object
+*/
+struct tclvar_vtab {
+ sqlite3_vtab base;
+ Tcl_Interp *interp;
+};
+
+/* A tclvar cursor object */
+struct tclvar_cursor {
+ sqlite3_vtab_cursor base;
+
+ Tcl_Obj *pList1; /* Result of [info vars ?pattern?] */
+ Tcl_Obj *pList2; /* Result of [array names [lindex $pList1 $i1]] */
+ int i1; /* Current item in pList1 */
+ int i2; /* Current item (if any) in pList2 */
+};
+
+/* Methods for the tclvar module */
+static int tclvarConnect(
+ sqlite3 *db,
+ void *pAux,
+ int argc, const char *const*argv,
+ sqlite3_vtab **ppVtab,
+ char **pzErr
+){
+ tclvar_vtab *pVtab;
+ static const char zSchema[] =
+ "CREATE TABLE whatever(name TEXT, arrayname TEXT, value TEXT)";
+ pVtab = sqliteMalloc( sizeof(*pVtab) );
+ if( pVtab==0 ) return SQLITE_NOMEM;
+ *ppVtab = &pVtab->base;
+ pVtab->interp = (Tcl_Interp *)pAux;
+ sqlite3_declare_vtab(db, zSchema);
+ return SQLITE_OK;
+}
+/* Note that for this virtual table, the xCreate and xConnect
+** methods are identical. */
+
+static int tclvarDisconnect(sqlite3_vtab *pVtab){
+ sqliteFree(pVtab);
+ return SQLITE_OK;
+}
+/* The xDisconnect and xDestroy methods are also the same */
+
+/*
+** Open a new tclvar cursor.
+*/
+static int tclvarOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor){
+ tclvar_cursor *pCur;
+ pCur = sqliteMalloc(sizeof(tclvar_cursor));
+ *ppCursor = &pCur->base;
+ return SQLITE_OK;
+}
+
+/*
+** Close a tclvar cursor.
+*/
+static int tclvarClose(sqlite3_vtab_cursor *cur){
+ tclvar_cursor *pCur = (tclvar_cursor *)cur;
+ if( pCur->pList1 ){
+ Tcl_DecrRefCount(pCur->pList1);
+ }
+ if( pCur->pList2 ){
+ Tcl_DecrRefCount(pCur->pList2);
+ }
+ sqliteFree(pCur);
+ return SQLITE_OK;
+}
+
+/*
+** Returns 1 if data is ready, or 0 if not.
+*/
+static int next2(Tcl_Interp *interp, tclvar_cursor *pCur, Tcl_Obj *pObj){
+ Tcl_Obj *p;
+
+ if( pObj ){
+ if( !pCur->pList2 ){
+ p = Tcl_NewStringObj("array names", -1);
+ Tcl_IncrRefCount(p);
+ Tcl_ListObjAppendElement(0, p, pObj);
+ Tcl_EvalObjEx(interp, p, TCL_EVAL_GLOBAL);
+ Tcl_DecrRefCount(p);
+ pCur->pList2 = Tcl_GetObjResult(interp);
+ Tcl_IncrRefCount(pCur->pList2);
+ assert( pCur->i2==0 );
+ }else{
+ int n = 0;
+ pCur->i2++;
+ Tcl_ListObjLength(0, pCur->pList2, &n);
+ if( pCur->i2>=n ){
+ Tcl_DecrRefCount(pCur->pList2);
+ pCur->pList2 = 0;
+ pCur->i2 = 0;
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int tclvarNext(sqlite3_vtab_cursor *cur){
+ Tcl_Obj *pObj;
+ int n = 0;
+ int ok = 0;
+
+ tclvar_cursor *pCur = (tclvar_cursor *)cur;
+ Tcl_Interp *interp = ((tclvar_vtab *)(cur->pVtab))->interp;
+
+ Tcl_ListObjLength(0, pCur->pList1, &n);
+ while( !ok && pCur->i1<n ){
+ Tcl_ListObjIndex(0, pCur->pList1, pCur->i1, &pObj);
+ ok = next2(interp, pCur, pObj);
+ if( !ok ){
+ pCur->i1++;
+ }
+ }
+
+ return 0;
+}
+
+static int tclvarFilter(
+ sqlite3_vtab_cursor *pVtabCursor,
+ int idxNum, const char *idxStr,
+ int argc, sqlite3_value **argv
+){
+ tclvar_cursor *pCur = (tclvar_cursor *)pVtabCursor;
+ Tcl_Interp *interp = ((tclvar_vtab *)(pVtabCursor->pVtab))->interp;
+
+ Tcl_Obj *p = Tcl_NewStringObj("info vars", -1);
+ Tcl_IncrRefCount(p);
+
+ assert( argc==0 || argc==1 );
+ if( argc==1 ){
+ Tcl_Obj *pArg = Tcl_NewStringObj((char*)sqlite3_value_text(argv[0]), -1);
+ Tcl_ListObjAppendElement(0, p, pArg);
+ }
+ Tcl_EvalObjEx(interp, p, TCL_EVAL_GLOBAL);
+ pCur->pList1 = Tcl_GetObjResult(interp);
+ Tcl_IncrRefCount(pCur->pList1);
+ assert( pCur->i1==0 && pCur->i2==0 && pCur->pList2==0 );
+
+ Tcl_DecrRefCount(p);
+ return tclvarNext(pVtabCursor);
+}
+
+static int tclvarColumn(sqlite3_vtab_cursor *cur, sqlite3_context *ctx, int i){
+ Tcl_Obj *p1;
+ Tcl_Obj *p2;
+ const char *z1;
+ const char *z2 = "";
+ tclvar_cursor *pCur = (tclvar_cursor*)cur;
+ Tcl_Interp *interp = ((tclvar_vtab *)cur->pVtab)->interp;
+
+ Tcl_ListObjIndex(interp, pCur->pList1, pCur->i1, &p1);
+ Tcl_ListObjIndex(interp, pCur->pList2, pCur->i2, &p2);
+ z1 = Tcl_GetString(p1);
+ if( p2 ){
+ z2 = Tcl_GetString(p2);
+ }
+ switch (i) {
+ case 0: {
+ sqlite3_result_text(ctx, z1, -1, SQLITE_TRANSIENT);
+ break;
+ }
+ case 1: {
+ sqlite3_result_text(ctx, z2, -1, SQLITE_TRANSIENT);
+ break;
+ }
+ case 2: {
+ Tcl_Obj *pVal = Tcl_GetVar2Ex(interp, z1, *z2?z2:0, TCL_GLOBAL_ONLY);
+ sqlite3_result_text(ctx, Tcl_GetString(pVal), -1, SQLITE_TRANSIENT);
+ break;
+ }
+ }
+ return SQLITE_OK;
+}
+
+static int tclvarRowid(sqlite3_vtab_cursor *cur, sqlite_int64 *pRowid){
+ *pRowid = 0;
+ return SQLITE_OK;
+}
+
+static int tclvarEof(sqlite3_vtab_cursor *cur){
+ tclvar_cursor *pCur = (tclvar_cursor*)cur;
+ return (pCur->pList2?0:1);
+}
+
+static int tclvarBestIndex(sqlite3_vtab *tab, sqlite3_index_info *pIdxInfo){
+ int ii;
+
+ for(ii=0; ii<pIdxInfo->nConstraint; ii++){
+ struct sqlite3_index_constraint const *pCons = &pIdxInfo->aConstraint[ii];
+ if( pCons->iColumn==0 && pCons->op==SQLITE_INDEX_CONSTRAINT_EQ ){
+ struct sqlite3_index_constraint_usage *pUsage;
+ pUsage = &pIdxInfo->aConstraintUsage[ii];
+ pUsage->omit = 0;
+ pUsage->argvIndex = 1;
+ return SQLITE_OK;
+ }
+ }
+
+ for(ii=0; ii<pIdxInfo->nConstraint; ii++){
+ struct sqlite3_index_constraint const *pCons = &pIdxInfo->aConstraint[ii];
+ if( pCons->iColumn==0 && pCons->op==SQLITE_INDEX_CONSTRAINT_MATCH ){
+ struct sqlite3_index_constraint_usage *pUsage;
+ pUsage = &pIdxInfo->aConstraintUsage[ii];
+ pUsage->omit = 1;
+ pUsage->argvIndex = 1;
+ return SQLITE_OK;
+ }
+ }
+
+ return SQLITE_OK;
+}
+
+/*
+** A virtual table module that provides read-only access to a
+** Tcl global variable namespace.
+*/
+static sqlite3_module tclvarModule = {
+ 0, /* iVersion */
+ tclvarConnect,
+ tclvarConnect,
+ tclvarBestIndex,
+ tclvarDisconnect,
+ tclvarDisconnect,
+ tclvarOpen, /* xOpen - open a cursor */
+ tclvarClose, /* xClose - close a cursor */
+ tclvarFilter, /* xFilter - configure scan constraints */
+ tclvarNext, /* xNext - advance a cursor */
+ tclvarEof, /* xEof - check for end of scan */
+ tclvarColumn, /* xColumn - read data */
+ tclvarRowid, /* xRowid - read data */
+ 0, /* xUpdate */
+ 0, /* xBegin */
+ 0, /* xSync */
+ 0, /* xCommit */
+ 0, /* xRollback */
+ 0, /* xFindMethod */
+};
+
+/*
+** Decode a pointer to an sqlite3 object.
+*/
+static int getDbPointer(Tcl_Interp *interp, const char *zA, sqlite3 **ppDb){
+ *ppDb = (sqlite3*)sqlite3TextToPtr(zA);
+ return TCL_OK;
+}
+
+
+/*
+** Register the echo virtual table module.
+*/
+static int register_tclvar_module(
+ ClientData clientData, /* Pointer to sqlite3_enable_XXX function */
+ Tcl_Interp *interp, /* The TCL interpreter that invoked this command */
+ int objc, /* Number of arguments */
+ Tcl_Obj *CONST objv[] /* Command arguments */
+){
+ sqlite3 *db;
+ if( objc!=2 ){
+ Tcl_WrongNumArgs(interp, 1, objv, "DB");
+ return TCL_ERROR;
+ }
+ if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ sqlite3_create_module(db, "tclvar", &tclvarModule, (void *)interp);
+#endif
+ return TCL_OK;
+}
+
+#endif
+
+
+/*
+** Register commands with the TCL interpreter.
+*/
+int Sqlitetesttclvar_Init(Tcl_Interp *interp){
+ static struct {
+ char *zName;
+ Tcl_ObjCmdProc *xProc;
+ void *clientData;
+ } aObjCmd[] = {
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ { "register_tclvar_module", register_tclvar_module, 0 },
+#endif
+ };
+ int i;
+ for(i=0; i<sizeof(aObjCmd)/sizeof(aObjCmd[0]); i++){
+ Tcl_CreateObjCommand(interp, aObjCmd[i].zName,
+ aObjCmd[i].xProc, aObjCmd[i].clientData, 0);
+ }
+ return TCL_OK;
+}
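
For illustration, a hedged sketch of registering the module from C and reading a Tcl variable through SQL. It assumes the code sits in the same file as the static tclvarModule above; the table name vars and the tcl_version lookup are arbitrary choices.

#include "sqlite3.h"

/* Sketch: expose the variables of a Tcl interpreter as the virtual
** table vars and read one back through SQL. */
static int demoTclvar(sqlite3 *db, Tcl_Interp *interp){
  int rc = sqlite3_create_module(db, "tclvar", &tclvarModule, (void*)interp);
  if( rc!=SQLITE_OK ) return rc;
  return sqlite3_exec(db,
      "CREATE VIRTUAL TABLE vars USING tclvar;"
      "SELECT name, arrayname, value FROM vars WHERE name = 'tcl_version';",
      0, 0, 0);
}
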
Added: freeswitch/trunk/libs/sqlite/src/tokenize.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/tokenize.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,507 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** A tokenizer for SQL
+**
+** This file contains C code that splits an SQL input string up into
+** individual tokens and sends those tokens one-by-one over to the
+** parser for analysis.
+**
+** $Id: tokenize.c,v 1.124 2006/08/12 12:33:14 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#include <ctype.h>
+#include <stdlib.h>
+
+/*
+** The charMap() macro maps alphabetic characters into their
+** lower-case ASCII equivalent. On ASCII machines, this is just
+** an upper-to-lower case map. On EBCDIC machines we also need
+** to adjust the encoding. Only alphabetic characters and underscores
+** need to be translated.
+*/
+#ifdef SQLITE_ASCII
+# define charMap(X) sqlite3UpperToLower[(unsigned char)X]
+#endif
+#ifdef SQLITE_EBCDIC
+# define charMap(X) ebcdicToAscii[(unsigned char)X]
+const unsigned char ebcdicToAscii[] = {
+/* 0 1 2 3 4 5 6 7 8 9 A B C D E F */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 1x */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 2x */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 3x */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 4x */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 5x */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 95, 0, 0, /* 6x */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 7x */
+ 0, 97, 98, 99,100,101,102,103,104,105, 0, 0, 0, 0, 0, 0, /* 8x */
+ 0,106,107,108,109,110,111,112,113,114, 0, 0, 0, 0, 0, 0, /* 9x */
+ 0, 0,115,116,117,118,119,120,121,122, 0, 0, 0, 0, 0, 0, /* Ax */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* Bx */
+ 0, 97, 98, 99,100,101,102,103,104,105, 0, 0, 0, 0, 0, 0, /* Cx */
+ 0,106,107,108,109,110,111,112,113,114, 0, 0, 0, 0, 0, 0, /* Dx */
+ 0, 0,115,116,117,118,119,120,121,122, 0, 0, 0, 0, 0, 0, /* Ex */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* Fx */
+};
+#endif
+
+/*
+** The sqlite3KeywordCode function looks up an identifier to determine if
+** it is a keyword. If it is a keyword, the token code of that keyword is
+** returned. If the input is not a keyword, TK_ID is returned.
+**
+** The implementation of this routine was generated by a program,
+** mkkeywordhash.c, located in the tool subdirectory of the distribution.
+** The output of the mkkeywordhash.c program is written into a file
+** named keywordhash.h and then included into this source file by
+** the #include below.
+*/
+#include "keywordhash.h"
+
+
+/*
+** If X is a character that can be used in an identifier then
+** IdChar(X) will be true. Otherwise it is false.
+**
+** For ASCII, any character with the high-order bit set is
+** allowed in an identifier. For 7-bit characters,
+** sqlite3IsIdChar[X] must be 1.
+**
+** For EBCDIC, the rules are more complex but have the same
+** end result.
+**
+** Ticket #1066.  The SQL standard does not allow '$' in the
+** middle of identifiers.  But many SQL implementations do.
+** SQLite will allow '$' in identifiers for compatibility.
+** But the feature is undocumented.
+*/
+#ifdef SQLITE_ASCII
+const char sqlite3IsIdChar[] = {
+/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */
+ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 2x */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 3x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 4x */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, /* 5x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 6x */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 7x */
+};
+#define IdChar(C) (((c=C)&0x80)!=0 || (c>0x1f && sqlite3IsIdChar[c-0x20]))
+#endif
+#ifdef SQLITE_EBCDIC
+const char sqlite3IsIdChar[] = {
+/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */
+ 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 4x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, /* 5x */
+ 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, /* 6x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, /* 7x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, /* 8x */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, /* 9x */
+ 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, /* Ax */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* Bx */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, /* Cx */
+ 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, /* Dx */
+ 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, /* Ex */
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, /* Fx */
+};
+#define IdChar(C) (((c=C)>=0x42 && sqlite3IsIdChar[c-0x40]))
+#endif
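+
+/*
+** For example (illustration added for exposition): with the ASCII table
+** above, IdChar() is true for 'a'..'z', 'A'..'Z', '0'..'9', '_', '$' and any
+** byte with the high-order bit set, and false for punctuation such as '-',
+** '.' or '('.
+*/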
+
+
+/*
+** Return the length of the token that begins at z[0].
+** Store the token type in *tokenType before returning.
+*/
+static int getToken(const unsigned char *z, int *tokenType){
+ int i, c;
+ switch( *z ){
+ case ' ': case '\t': case '\n': case '\f': case '\r': {
+ for(i=1; isspace(z[i]); i++){}
+ *tokenType = TK_SPACE;
+ return i;
+ }
+ case '-': {
+ if( z[1]=='-' ){
+ for(i=2; (c=z[i])!=0 && c!='\n'; i++){}
+ *tokenType = TK_COMMENT;
+ return i;
+ }
+ *tokenType = TK_MINUS;
+ return 1;
+ }
+ case '(': {
+ *tokenType = TK_LP;
+ return 1;
+ }
+ case ')': {
+ *tokenType = TK_RP;
+ return 1;
+ }
+ case ';': {
+ *tokenType = TK_SEMI;
+ return 1;
+ }
+ case '+': {
+ *tokenType = TK_PLUS;
+ return 1;
+ }
+ case '*': {
+ *tokenType = TK_STAR;
+ return 1;
+ }
+ case '/': {
+ if( z[1]!='*' || z[2]==0 ){
+ *tokenType = TK_SLASH;
+ return 1;
+ }
+ for(i=3, c=z[2]; (c!='*' || z[i]!='/') && (c=z[i])!=0; i++){}
+ if( c ) i++;
+ *tokenType = TK_COMMENT;
+ return i;
+ }
+ case '%': {
+ *tokenType = TK_REM;
+ return 1;
+ }
+ case '=': {
+ *tokenType = TK_EQ;
+ return 1 + (z[1]=='=');
+ }
+ case '<': {
+ if( (c=z[1])=='=' ){
+ *tokenType = TK_LE;
+ return 2;
+ }else if( c=='>' ){
+ *tokenType = TK_NE;
+ return 2;
+ }else if( c=='<' ){
+ *tokenType = TK_LSHIFT;
+ return 2;
+ }else{
+ *tokenType = TK_LT;
+ return 1;
+ }
+ }
+ case '>': {
+ if( (c=z[1])=='=' ){
+ *tokenType = TK_GE;
+ return 2;
+ }else if( c=='>' ){
+ *tokenType = TK_RSHIFT;
+ return 2;
+ }else{
+ *tokenType = TK_GT;
+ return 1;
+ }
+ }
+ case '!': {
+ if( z[1]!='=' ){
+ *tokenType = TK_ILLEGAL;
+ return 2;
+ }else{
+ *tokenType = TK_NE;
+ return 2;
+ }
+ }
+ case '|': {
+ if( z[1]!='|' ){
+ *tokenType = TK_BITOR;
+ return 1;
+ }else{
+ *tokenType = TK_CONCAT;
+ return 2;
+ }
+ }
+ case ',': {
+ *tokenType = TK_COMMA;
+ return 1;
+ }
+ case '&': {
+ *tokenType = TK_BITAND;
+ return 1;
+ }
+ case '~': {
+ *tokenType = TK_BITNOT;
+ return 1;
+ }
+ case '`':
+ case '\'':
+ case '"': {
+ int delim = z[0];
+ for(i=1; (c=z[i])!=0; i++){
+ if( c==delim ){
+ if( z[i+1]==delim ){
+ i++;
+ }else{
+ break;
+ }
+ }
+ }
+ if( c ){
+ *tokenType = TK_STRING;
+ return i+1;
+ }else{
+ *tokenType = TK_ILLEGAL;
+ return i;
+ }
+ }
+ case '.': {
+#ifndef SQLITE_OMIT_FLOATING_POINT
+ if( !isdigit(z[1]) )
+#endif
+ {
+ *tokenType = TK_DOT;
+ return 1;
+ }
+ /* If the next character is a digit, this is a floating point
+ ** number that begins with ".". Fall thru into the next case */
+ }
+ case '0': case '1': case '2': case '3': case '4':
+ case '5': case '6': case '7': case '8': case '9': {
+ *tokenType = TK_INTEGER;
+ for(i=0; isdigit(z[i]); i++){}
+#ifndef SQLITE_OMIT_FLOATING_POINT
+ if( z[i]=='.' ){
+ i++;
+ while( isdigit(z[i]) ){ i++; }
+ *tokenType = TK_FLOAT;
+ }
+ if( (z[i]=='e' || z[i]=='E') &&
+ ( isdigit(z[i+1])
+ || ((z[i+1]=='+' || z[i+1]=='-') && isdigit(z[i+2]))
+ )
+ ){
+ i += 2;
+ while( isdigit(z[i]) ){ i++; }
+ *tokenType = TK_FLOAT;
+ }
+#endif
+ while( IdChar(z[i]) ){
+ *tokenType = TK_ILLEGAL;
+ i++;
+ }
+ return i;
+ }
+ case '[': {
+ for(i=1, c=z[0]; c!=']' && (c=z[i])!=0; i++){}
+ *tokenType = TK_ID;
+ return i;
+ }
+ case '?': {
+ *tokenType = TK_VARIABLE;
+ for(i=1; isdigit(z[i]); i++){}
+ return i;
+ }
+ case '#': {
+ for(i=1; isdigit(z[i]); i++){}
+ if( i>1 ){
+ /* Parameters of the form #NNN (where NNN is a number) are used
+ ** internally by sqlite3NestedParse. */
+ *tokenType = TK_REGISTER;
+ return i;
+ }
+ /* Fall through into the next case if the '#' is not followed by
+ ** a digit. Try to match #AAAA where AAAA is a parameter name. */
+ }
+#ifndef SQLITE_OMIT_TCL_VARIABLE
+ case '$':
+#endif
+ case '@': /* For compatibility with MS SQL Server */
+ case ':': {
+ int n = 0;
+ *tokenType = TK_VARIABLE;
+ for(i=1; (c=z[i])!=0; i++){
+ if( IdChar(c) ){
+ n++;
+#ifndef SQLITE_OMIT_TCL_VARIABLE
+ }else if( c=='(' && n>0 ){
+ do{
+ i++;
+ }while( (c=z[i])!=0 && !isspace(c) && c!=')' );
+ if( c==')' ){
+ i++;
+ }else{
+ *tokenType = TK_ILLEGAL;
+ }
+ break;
+ }else if( c==':' && z[i+1]==':' ){
+ i++;
+#endif
+ }else{
+ break;
+ }
+ }
+ if( n==0 ) *tokenType = TK_ILLEGAL;
+ return i;
+ }
+#ifndef SQLITE_OMIT_BLOB_LITERAL
+ case 'x': case 'X': {
+ if( (c=z[1])=='\'' || c=='"' ){
+ int delim = c;
+ *tokenType = TK_BLOB;
+ for(i=2; (c=z[i])!=0; i++){
+ if( c==delim ){
+ if( i%2 ) *tokenType = TK_ILLEGAL;
+ break;
+ }
+ if( !isxdigit(c) ){
+ *tokenType = TK_ILLEGAL;
+ return i;
+ }
+ }
+ if( c ) i++;
+ return i;
+ }
+ /* Otherwise fall through to the next case */
+ }
+#endif
+ default: {
+ if( !IdChar(*z) ){
+ break;
+ }
+ for(i=1; IdChar(z[i]); i++){}
+ *tokenType = keywordCode((char*)z, i);
+ return i;
+ }
+ }
+ *tokenType = TK_ILLEGAL;
+ return 1;
+}
+int sqlite3GetToken(const unsigned char *z, int *tokenType){
+ return getToken(z, tokenType);
+}
+
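+#if 0
+/* Illustrative sketch (added for exposition; not part of the upstream
+** source): walking a SQL string with sqlite3GetToken().  Only the routine
+** defined above and the TK_SPACE / TK_COMMENT codes from the generated
+** parser header are assumed.
+*/
+static int countSqlTokens(const char *zSql){
+  int i = 0;            /* Byte offset of the next token */
+  int nTok = 0;          /* Number of non-whitespace tokens seen */
+  while( zSql[i] ){
+    int tokenType;
+    i += sqlite3GetToken((const unsigned char*)&zSql[i], &tokenType);
+    if( tokenType!=TK_SPACE && tokenType!=TK_COMMENT ) nTok++;
+  }
+  return nTok;
+}
+#endif
+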
+/*
+** Run the parser on the given SQL string. The parser structure is
+** passed in. An SQLITE_ status code is returned. If an error occurs
+** and pzErrMsg!=NULL then an error message might be written into
+** memory obtained from malloc() and *pzErrMsg made to point to that
+** error message. Or maybe not.
+*/
+int sqlite3RunParser(Parse *pParse, const char *zSql, char **pzErrMsg){
+ int nErr = 0;
+ int i;
+ void *pEngine;
+ int tokenType;
+ int lastTokenParsed = -1;
+ sqlite3 *db = pParse->db;
+ extern void *sqlite3ParserAlloc(void*(*)(int));
+ extern void sqlite3ParserFree(void*, void(*)(void*));
+ extern int sqlite3Parser(void*, int, Token, Parse*);
+
+ if( db->activeVdbeCnt==0 ){
+ db->u1.isInterrupted = 0;
+ }
+ pParse->rc = SQLITE_OK;
+ i = 0;
+ pEngine = sqlite3ParserAlloc((void*(*)(int))sqlite3MallocX);
+ if( pEngine==0 ){
+ return SQLITE_NOMEM;
+ }
+ assert( pParse->sLastToken.dyn==0 );
+ assert( pParse->pNewTable==0 );
+ assert( pParse->pNewTrigger==0 );
+ assert( pParse->nVar==0 );
+ assert( pParse->nVarExpr==0 );
+ assert( pParse->nVarExprAlloc==0 );
+ assert( pParse->apVarExpr==0 );
+ pParse->zTail = pParse->zSql = zSql;
+ while( !sqlite3MallocFailed() && zSql[i]!=0 ){
+ assert( i>=0 );
+ pParse->sLastToken.z = (u8*)&zSql[i];
+ assert( pParse->sLastToken.dyn==0 );
+ pParse->sLastToken.n = getToken((unsigned char*)&zSql[i],&tokenType);
+ i += pParse->sLastToken.n;
+ switch( tokenType ){
+ case TK_SPACE:
+ case TK_COMMENT: {
+ if( db->u1.isInterrupted ){
+ pParse->rc = SQLITE_INTERRUPT;
+ sqlite3SetString(pzErrMsg, "interrupt", (char*)0);
+ goto abort_parse;
+ }
+ break;
+ }
+ case TK_ILLEGAL: {
+ if( pzErrMsg ){
+ sqliteFree(*pzErrMsg);
+ *pzErrMsg = sqlite3MPrintf("unrecognized token: \"%T\"",
+ &pParse->sLastToken);
+ }
+ nErr++;
+ goto abort_parse;
+ }
+ case TK_SEMI: {
+ pParse->zTail = &zSql[i];
+ /* Fall thru into the default case */
+ }
+ default: {
+ sqlite3Parser(pEngine, tokenType, pParse->sLastToken, pParse);
+ lastTokenParsed = tokenType;
+ if( pParse->rc!=SQLITE_OK ){
+ goto abort_parse;
+ }
+ break;
+ }
+ }
+ }
+abort_parse:
+ if( zSql[i]==0 && nErr==0 && pParse->rc==SQLITE_OK ){
+ if( lastTokenParsed!=TK_SEMI ){
+ sqlite3Parser(pEngine, TK_SEMI, pParse->sLastToken, pParse);
+ pParse->zTail = &zSql[i];
+ }
+ sqlite3Parser(pEngine, 0, pParse->sLastToken, pParse);
+ }
+ sqlite3ParserFree(pEngine, sqlite3FreeX);
+ if( sqlite3MallocFailed() ){
+ pParse->rc = SQLITE_NOMEM;
+ }
+ if( pParse->rc!=SQLITE_OK && pParse->rc!=SQLITE_DONE && pParse->zErrMsg==0 ){
+ sqlite3SetString(&pParse->zErrMsg, sqlite3ErrStr(pParse->rc), (char*)0);
+ }
+ if( pParse->zErrMsg ){
+ if( pzErrMsg && *pzErrMsg==0 ){
+ *pzErrMsg = pParse->zErrMsg;
+ }else{
+ sqliteFree(pParse->zErrMsg);
+ }
+ pParse->zErrMsg = 0;
+ if( !nErr ) nErr++;
+ }
+ if( pParse->pVdbe && pParse->nErr>0 && pParse->nested==0 ){
+ sqlite3VdbeDelete(pParse->pVdbe);
+ pParse->pVdbe = 0;
+ }
+#ifndef SQLITE_OMIT_SHARED_CACHE
+ if( pParse->nested==0 ){
+ sqliteFree(pParse->aTableLock);
+ pParse->aTableLock = 0;
+ pParse->nTableLock = 0;
+ }
+#endif
+
+ if( !IN_DECLARE_VTAB ){
+ /* If the pParse->declareVtab flag is set, do not delete any table
+ ** structure built up in pParse->pNewTable. The calling code (see vtab.c)
+ ** will take responsibility for freeing the Table structure.
+ */
+ sqlite3DeleteTable(pParse->db, pParse->pNewTable);
+ }
+
+ sqlite3DeleteTrigger(pParse->pNewTrigger);
+ sqliteFree(pParse->apVarExpr);
+ if( nErr>0 && (pParse->rc==SQLITE_OK || pParse->rc==SQLITE_DONE) ){
+ pParse->rc = SQLITE_ERROR;
+ }
+ return nErr;
+}
Added: freeswitch/trunk/libs/sqlite/src/trigger.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/trigger.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,823 @@
+/*
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+*
+*/
+#include "sqliteInt.h"
+
+#ifndef SQLITE_OMIT_TRIGGER
+/*
+** Delete a linked list of TriggerStep structures.
+*/
+void sqlite3DeleteTriggerStep(TriggerStep *pTriggerStep){
+ while( pTriggerStep ){
+ TriggerStep * pTmp = pTriggerStep;
+ pTriggerStep = pTriggerStep->pNext;
+
+ if( pTmp->target.dyn ) sqliteFree((char*)pTmp->target.z);
+ sqlite3ExprDelete(pTmp->pWhere);
+ sqlite3ExprListDelete(pTmp->pExprList);
+ sqlite3SelectDelete(pTmp->pSelect);
+ sqlite3IdListDelete(pTmp->pIdList);
+
+ sqliteFree(pTmp);
+ }
+}
+
+/*
+** This is called by the parser when it sees a CREATE TRIGGER statement
+** up to the point of the BEGIN before the trigger actions. A Trigger
+** structure is generated based on the information available and stored
+** in pParse->pNewTrigger. After the trigger actions have been parsed, the
+** sqlite3FinishTrigger() function is called to complete the trigger
+** construction process.
+*/
+void sqlite3BeginTrigger(
+ Parse *pParse, /* The parse context of the CREATE TRIGGER statement */
+ Token *pName1, /* The name of the trigger */
+ Token *pName2, /* The name of the trigger */
+ int tr_tm, /* One of TK_BEFORE, TK_AFTER, TK_INSTEAD */
+ int op, /* One of TK_INSERT, TK_UPDATE, TK_DELETE */
+ IdList *pColumns, /* column list if this is an UPDATE OF trigger */
+ SrcList *pTableName,/* The name of the table/view the trigger applies to */
+ int foreach, /* One of TK_ROW or TK_STATEMENT */
+ Expr *pWhen, /* WHEN clause */
+ int isTemp, /* True if the TEMPORARY keyword is present */
+ int noErr /* Suppress errors if the trigger already exists */
+){
+ Trigger *pTrigger = 0;
+ Table *pTab;
+ char *zName = 0; /* Name of the trigger */
+ sqlite3 *db = pParse->db;
+ int iDb; /* The database to store the trigger in */
+ Token *pName; /* The unqualified db name */
+ DbFixer sFix;
+ int iTabDb;
+
+ assert( pName1!=0 ); /* pName1->z might be NULL, but not pName1 itself */
+ assert( pName2!=0 );
+ if( isTemp ){
+ /* If TEMP was specified, then the trigger name may not be qualified. */
+ if( pName2->n>0 ){
+ sqlite3ErrorMsg(pParse, "temporary trigger may not have qualified name");
+ goto trigger_cleanup;
+ }
+ iDb = 1;
+ pName = pName1;
+ }else{
+ /* Figure out the db that the trigger will be created in */
+ iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName);
+ if( iDb<0 ){
+ goto trigger_cleanup;
+ }
+ }
+
+ /* If the trigger name was unqualified, and the table is a temp table,
+ ** then set iDb to 1 to create the trigger in the temporary database.
+ ** If sqlite3SrcListLookup() returns 0, indicating the table does not
+ ** exist, the error is caught by the block below.
+ */
+ if( !pTableName || sqlite3MallocFailed() ){
+ goto trigger_cleanup;
+ }
+ pTab = sqlite3SrcListLookup(pParse, pTableName);
+ if( pName2->n==0 && pTab && pTab->pSchema==db->aDb[1].pSchema ){
+ iDb = 1;
+ }
+
+ /* Ensure the table name matches database name and that the table exists */
+ if( sqlite3MallocFailed() ) goto trigger_cleanup;
+ assert( pTableName->nSrc==1 );
+ if( sqlite3FixInit(&sFix, pParse, iDb, "trigger", pName) &&
+ sqlite3FixSrcList(&sFix, pTableName) ){
+ goto trigger_cleanup;
+ }
+ pTab = sqlite3SrcListLookup(pParse, pTableName);
+ if( !pTab ){
+ /* The table does not exist. */
+ goto trigger_cleanup;
+ }
+ if( IsVirtual(pTab) ){
+ sqlite3ErrorMsg(pParse, "cannot create triggers on virtual tables");
+ goto trigger_cleanup;
+ }
+
+ /* Check that the trigger name is not reserved and that no trigger of the
+ ** specified name exists */
+ zName = sqlite3NameFromToken(pName);
+ if( !zName || SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){
+ goto trigger_cleanup;
+ }
+ if( sqlite3HashFind(&(db->aDb[iDb].pSchema->trigHash), zName,strlen(zName)) ){
+ if( !noErr ){
+ sqlite3ErrorMsg(pParse, "trigger %T already exists", pName);
+ }
+ goto trigger_cleanup;
+ }
+
+ /* Do not create a trigger on a system table */
+ if( sqlite3StrNICmp(pTab->zName, "sqlite_", 7)==0 ){
+ sqlite3ErrorMsg(pParse, "cannot create trigger on system table");
+ pParse->nErr++;
+ goto trigger_cleanup;
+ }
+
+ /* INSTEAD OF triggers are only for views and views only support INSTEAD
+ ** OF triggers.
+ */
+ if( pTab->pSelect && tr_tm!=TK_INSTEAD ){
+ sqlite3ErrorMsg(pParse, "cannot create %s trigger on view: %S",
+ (tr_tm == TK_BEFORE)?"BEFORE":"AFTER", pTableName, 0);
+ goto trigger_cleanup;
+ }
+ if( !pTab->pSelect && tr_tm==TK_INSTEAD ){
+ sqlite3ErrorMsg(pParse, "cannot create INSTEAD OF"
+ " trigger on table: %S", pTableName, 0);
+ goto trigger_cleanup;
+ }
+ iTabDb = sqlite3SchemaToIndex(db, pTab->pSchema);
+
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ {
+ int code = SQLITE_CREATE_TRIGGER;
+ const char *zDb = db->aDb[iTabDb].zName;
+ const char *zDbTrig = isTemp ? db->aDb[1].zName : zDb;
+ if( iTabDb==1 || isTemp ) code = SQLITE_CREATE_TEMP_TRIGGER;
+ if( sqlite3AuthCheck(pParse, code, zName, pTab->zName, zDbTrig) ){
+ goto trigger_cleanup;
+ }
+ if( sqlite3AuthCheck(pParse, SQLITE_INSERT, SCHEMA_TABLE(iTabDb),0,zDb)){
+ goto trigger_cleanup;
+ }
+ }
+#endif
+
+ /* INSTEAD OF triggers can only appear on views and BEFORE triggers
+ ** cannot appear on views. So we might as well translate every
+ ** INSTEAD OF trigger into a BEFORE trigger. It simplifies code
+ ** elsewhere.
+ */
+ if (tr_tm == TK_INSTEAD){
+ tr_tm = TK_BEFORE;
+ }
+
+ /* Build the Trigger object */
+ pTrigger = (Trigger*)sqliteMalloc(sizeof(Trigger));
+ if( pTrigger==0 ) goto trigger_cleanup;
+ pTrigger->name = zName;
+ zName = 0;
+ pTrigger->table = sqliteStrDup(pTableName->a[0].zName);
+ pTrigger->pSchema = db->aDb[iDb].pSchema;
+ pTrigger->pTabSchema = pTab->pSchema;
+ pTrigger->op = op;
+ pTrigger->tr_tm = tr_tm==TK_BEFORE ? TRIGGER_BEFORE : TRIGGER_AFTER;
+ pTrigger->pWhen = sqlite3ExprDup(pWhen);
+ pTrigger->pColumns = sqlite3IdListDup(pColumns);
+ pTrigger->foreach = foreach;
+ sqlite3TokenCopy(&pTrigger->nameToken,pName);
+ assert( pParse->pNewTrigger==0 );
+ pParse->pNewTrigger = pTrigger;
+
+trigger_cleanup:
+ sqliteFree(zName);
+ sqlite3SrcListDelete(pTableName);
+ sqlite3IdListDelete(pColumns);
+ sqlite3ExprDelete(pWhen);
+ if( !pParse->pNewTrigger ){
+ sqlite3DeleteTrigger(pTrigger);
+ }else{
+ assert( pParse->pNewTrigger==pTrigger );
+ }
+}
+
+/*
+** This routine is called after all of the trigger actions have been parsed
+** in order to complete the process of building the trigger.
+*/
+void sqlite3FinishTrigger(
+ Parse *pParse, /* Parser context */
+ TriggerStep *pStepList, /* The triggered program */
+ Token *pAll /* Token that describes the complete CREATE TRIGGER */
+){
+ Trigger *pTrig = 0; /* The trigger whose construction is finishing up */
+ sqlite3 *db = pParse->db; /* The database */
+ DbFixer sFix;
+ int iDb; /* Database containing the trigger */
+
+ pTrig = pParse->pNewTrigger;
+ pParse->pNewTrigger = 0;
+ if( pParse->nErr || !pTrig ) goto triggerfinish_cleanup;
+ iDb = sqlite3SchemaToIndex(pParse->db, pTrig->pSchema);
+ pTrig->step_list = pStepList;
+ while( pStepList ){
+ pStepList->pTrig = pTrig;
+ pStepList = pStepList->pNext;
+ }
+ if( sqlite3FixInit(&sFix, pParse, iDb, "trigger", &pTrig->nameToken)
+ && sqlite3FixTriggerStep(&sFix, pTrig->step_list) ){
+ goto triggerfinish_cleanup;
+ }
+
+ /* if we are not initializing, and this trigger is not on a TEMP table,
+ ** build the sqlite_master entry
+ */
+ if( !db->init.busy ){
+ static const VdbeOpList insertTrig[] = {
+ { OP_NewRowid, 0, 0, 0 },
+ { OP_String8, 0, 0, "trigger" },
+ { OP_String8, 0, 0, 0 }, /* 2: trigger name */
+ { OP_String8, 0, 0, 0 }, /* 3: table name */
+ { OP_Integer, 0, 0, 0 },
+ { OP_String8, 0, 0, "CREATE TRIGGER "},
+ { OP_String8, 0, 0, 0 }, /* 6: SQL */
+ { OP_Concat, 0, 0, 0 },
+ { OP_MakeRecord, 5, 0, "aaada" },
+ { OP_Insert, 0, 0, 0 },
+ };
+ int addr;
+ Vdbe *v;
+
+ /* Make an entry in the sqlite_master table */
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) goto triggerfinish_cleanup;
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+ sqlite3OpenMasterTable(pParse, iDb);
+ addr = sqlite3VdbeAddOpList(v, ArraySize(insertTrig), insertTrig);
+ sqlite3VdbeChangeP3(v, addr+2, pTrig->name, 0);
+ sqlite3VdbeChangeP3(v, addr+3, pTrig->table, 0);
+ sqlite3VdbeChangeP3(v, addr+6, (char*)pAll->z, pAll->n);
+ sqlite3ChangeCookie(db, v, iDb);
+ sqlite3VdbeAddOp(v, OP_Close, 0, 0);
+ sqlite3VdbeOp3(v, OP_ParseSchema, iDb, 0,
+ sqlite3MPrintf("type='trigger' AND name='%q'", pTrig->name), P3_DYNAMIC);
+ }
+
+ if( db->init.busy ){
+ int n;
+ Table *pTab;
+ Trigger *pDel;
+ pDel = sqlite3HashInsert(&db->aDb[iDb].pSchema->trigHash,
+ pTrig->name, strlen(pTrig->name), pTrig);
+ if( pDel ){
+ assert( sqlite3MallocFailed() && pDel==pTrig );
+ goto triggerfinish_cleanup;
+ }
+ n = strlen(pTrig->table) + 1;
+ pTab = sqlite3HashFind(&pTrig->pTabSchema->tblHash, pTrig->table, n);
+ assert( pTab!=0 );
+ pTrig->pNext = pTab->pTrigger;
+ pTab->pTrigger = pTrig;
+ pTrig = 0;
+ }
+
+triggerfinish_cleanup:
+ sqlite3DeleteTrigger(pTrig);
+ assert( !pParse->pNewTrigger );
+ sqlite3DeleteTriggerStep(pStepList);
+}
+
+/*
+** Make a copy of all components of the given trigger step. This has
+** the effect of copying all Expr.token.z values into memory obtained
+** from sqliteMalloc(). As initially created, the Expr.token.z values
+** all point to the input string that was fed to the parser. But that
+** string is ephemeral - it will go away as soon as the sqlite3_exec()
+** call that started the parser exits. This routine makes a persistent
+** copy of all the Expr.token.z strings so that the TriggerStep structure
+** will be valid even after the sqlite3_exec() call returns.
+*/
+static void sqlitePersistTriggerStep(TriggerStep *p){
+ if( p->target.z ){
+ p->target.z = (u8*)sqliteStrNDup((char*)p->target.z, p->target.n);
+ p->target.dyn = 1;
+ }
+ if( p->pSelect ){
+ Select *pNew = sqlite3SelectDup(p->pSelect);
+ sqlite3SelectDelete(p->pSelect);
+ p->pSelect = pNew;
+ }
+ if( p->pWhere ){
+ Expr *pNew = sqlite3ExprDup(p->pWhere);
+ sqlite3ExprDelete(p->pWhere);
+ p->pWhere = pNew;
+ }
+ if( p->pExprList ){
+ ExprList *pNew = sqlite3ExprListDup(p->pExprList);
+ sqlite3ExprListDelete(p->pExprList);
+ p->pExprList = pNew;
+ }
+ if( p->pIdList ){
+ IdList *pNew = sqlite3IdListDup(p->pIdList);
+ sqlite3IdListDelete(p->pIdList);
+ p->pIdList = pNew;
+ }
+}
+
+/*
+** Turn a SELECT statement (that the pSelect parameter points to) into
+** a trigger step. Return a pointer to a TriggerStep structure.
+**
+** The parser calls this routine when it finds a SELECT statement in
+** the body of a TRIGGER.
+*/
+TriggerStep *sqlite3TriggerSelectStep(Select *pSelect){
+ TriggerStep *pTriggerStep = sqliteMalloc(sizeof(TriggerStep));
+ if( pTriggerStep==0 ) {
+ sqlite3SelectDelete(pSelect);
+ return 0;
+ }
+
+ pTriggerStep->op = TK_SELECT;
+ pTriggerStep->pSelect = pSelect;
+ pTriggerStep->orconf = OE_Default;
+ sqlitePersistTriggerStep(pTriggerStep);
+
+ return pTriggerStep;
+}
+
+/*
+** Build a trigger step out of an INSERT statement. Return a pointer
+** to the new trigger step.
+**
+** The parser calls this routine when it sees an INSERT inside the
+** body of a trigger.
+*/
+TriggerStep *sqlite3TriggerInsertStep(
+ Token *pTableName, /* Name of the table into which we insert */
+ IdList *pColumn, /* List of columns in pTableName to insert into */
+ ExprList *pEList, /* The VALUE clause: a list of values to be inserted */
+ Select *pSelect, /* A SELECT statement that supplies values */
+ int orconf /* The conflict algorithm (OE_Abort, OE_Replace, etc.) */
+){
+ TriggerStep *pTriggerStep = sqliteMalloc(sizeof(TriggerStep));
+
+ assert(pEList == 0 || pSelect == 0);
+ assert(pEList != 0 || pSelect != 0);
+
+ if( pTriggerStep ){
+ pTriggerStep->op = TK_INSERT;
+ pTriggerStep->pSelect = pSelect;
+ pTriggerStep->target = *pTableName;
+ pTriggerStep->pIdList = pColumn;
+ pTriggerStep->pExprList = pEList;
+ pTriggerStep->orconf = orconf;
+ sqlitePersistTriggerStep(pTriggerStep);
+ }else{
+ sqlite3IdListDelete(pColumn);
+ sqlite3ExprListDelete(pEList);
+ sqlite3SelectDelete(pSelect);
+ }
+
+ return pTriggerStep;
+}
+
+/*
+** Construct a trigger step that implements an UPDATE statement and return
+** a pointer to that trigger step. The parser calls this routine when it
+** sees an UPDATE statement inside the body of a CREATE TRIGGER.
+*/
+TriggerStep *sqlite3TriggerUpdateStep(
+ Token *pTableName, /* Name of the table to be updated */
+ ExprList *pEList, /* The SET clause: list of column and new values */
+ Expr *pWhere, /* The WHERE clause */
+ int orconf /* The conflict algorithm. (OE_Abort, OE_Ignore, etc) */
+){
+ TriggerStep *pTriggerStep = sqliteMalloc(sizeof(TriggerStep));
+ if( pTriggerStep==0 ) return 0;
+
+ pTriggerStep->op = TK_UPDATE;
+ pTriggerStep->target = *pTableName;
+ pTriggerStep->pExprList = pEList;
+ pTriggerStep->pWhere = pWhere;
+ pTriggerStep->orconf = orconf;
+ sqlitePersistTriggerStep(pTriggerStep);
+
+ return pTriggerStep;
+}
+
+/*
+** Construct a trigger step that implements a DELETE statement and return
+** a pointer to that trigger step. The parser calls this routine when it
+** sees a DELETE statement inside the body of a CREATE TRIGGER.
+*/
+TriggerStep *sqlite3TriggerDeleteStep(Token *pTableName, Expr *pWhere){
+ TriggerStep *pTriggerStep = sqliteMalloc(sizeof(TriggerStep));
+ if( pTriggerStep==0 ) return 0;
+
+ pTriggerStep->op = TK_DELETE;
+ pTriggerStep->target = *pTableName;
+ pTriggerStep->pWhere = pWhere;
+ pTriggerStep->orconf = OE_Default;
+ sqlitePersistTriggerStep(pTriggerStep);
+
+ return pTriggerStep;
+}
+
+/*
+** Recursively delete a Trigger structure
+*/
+void sqlite3DeleteTrigger(Trigger *pTrigger){
+ if( pTrigger==0 ) return;
+ sqlite3DeleteTriggerStep(pTrigger->step_list);
+ sqliteFree(pTrigger->name);
+ sqliteFree(pTrigger->table);
+ sqlite3ExprDelete(pTrigger->pWhen);
+ sqlite3IdListDelete(pTrigger->pColumns);
+ if( pTrigger->nameToken.dyn ) sqliteFree((char*)pTrigger->nameToken.z);
+ sqliteFree(pTrigger);
+}
+
+/*
+** This function is called to drop a trigger from the database schema.
+**
+** This may be called directly from the parser and therefore identifies
+** the trigger by name. The sqlite3DropTriggerPtr() routine does the
+** same job as this routine except it takes a pointer to the trigger
+** instead of the trigger name.
+**/
+void sqlite3DropTrigger(Parse *pParse, SrcList *pName, int noErr){
+ Trigger *pTrigger = 0;
+ int i;
+ const char *zDb;
+ const char *zName;
+ int nName;
+ sqlite3 *db = pParse->db;
+
+ if( sqlite3MallocFailed() ) goto drop_trigger_cleanup;
+ if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
+ goto drop_trigger_cleanup;
+ }
+
+ assert( pName->nSrc==1 );
+ zDb = pName->a[0].zDatabase;
+ zName = pName->a[0].zName;
+ nName = strlen(zName);
+ for(i=OMIT_TEMPDB; i<db->nDb; i++){
+ int j = (i<2) ? i^1 : i; /* Search TEMP before MAIN */
+ if( zDb && sqlite3StrICmp(db->aDb[j].zName, zDb) ) continue;
+ pTrigger = sqlite3HashFind(&(db->aDb[j].pSchema->trigHash), zName, nName);
+ if( pTrigger ) break;
+ }
+ if( !pTrigger ){
+ if( !noErr ){
+ sqlite3ErrorMsg(pParse, "no such trigger: %S", pName, 0);
+ }
+ goto drop_trigger_cleanup;
+ }
+ sqlite3DropTriggerPtr(pParse, pTrigger);
+
+drop_trigger_cleanup:
+ sqlite3SrcListDelete(pName);
+}
+
+/*
+** Return a pointer to the Table structure for the table that a trigger
+** is set on.
+*/
+static Table *tableOfTrigger(Trigger *pTrigger){
+ int n = strlen(pTrigger->table) + 1;
+ return sqlite3HashFind(&pTrigger->pTabSchema->tblHash, pTrigger->table, n);
+}
+
+
+/*
+** Drop a trigger given a pointer to that trigger.
+*/
+void sqlite3DropTriggerPtr(Parse *pParse, Trigger *pTrigger){
+ Table *pTable;
+ Vdbe *v;
+ sqlite3 *db = pParse->db;
+ int iDb;
+
+ iDb = sqlite3SchemaToIndex(pParse->db, pTrigger->pSchema);
+ assert( iDb>=0 && iDb<db->nDb );
+ pTable = tableOfTrigger(pTrigger);
+ assert( pTable );
+ assert( pTable->pSchema==pTrigger->pSchema || iDb==1 );
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ {
+ int code = SQLITE_DROP_TRIGGER;
+ const char *zDb = db->aDb[iDb].zName;
+ const char *zTab = SCHEMA_TABLE(iDb);
+ if( iDb==1 ) code = SQLITE_DROP_TEMP_TRIGGER;
+ if( sqlite3AuthCheck(pParse, code, pTrigger->name, pTable->zName, zDb) ||
+ sqlite3AuthCheck(pParse, SQLITE_DELETE, zTab, 0, zDb) ){
+ return;
+ }
+ }
+#endif
+
+ /* Generate code to destroy the database record of the trigger.
+ */
+ assert( pTable!=0 );
+ if( (v = sqlite3GetVdbe(pParse))!=0 ){
+ int base;
+ static const VdbeOpList dropTrigger[] = {
+ { OP_Rewind, 0, ADDR(9), 0},
+ { OP_String8, 0, 0, 0}, /* 1 */
+ { OP_Column, 0, 1, 0},
+ { OP_Ne, 0, ADDR(8), 0},
+ { OP_String8, 0, 0, "trigger"},
+ { OP_Column, 0, 0, 0},
+ { OP_Ne, 0, ADDR(8), 0},
+ { OP_Delete, 0, 0, 0},
+ { OP_Next, 0, ADDR(1), 0}, /* 8 */
+ };
+
+ sqlite3BeginWriteOperation(pParse, 0, iDb);
+ sqlite3OpenMasterTable(pParse, iDb);
+ base = sqlite3VdbeAddOpList(v, ArraySize(dropTrigger), dropTrigger);
+ sqlite3VdbeChangeP3(v, base+1, pTrigger->name, 0);
+ sqlite3ChangeCookie(db, v, iDb);
+ sqlite3VdbeAddOp(v, OP_Close, 0, 0);
+ sqlite3VdbeOp3(v, OP_DropTrigger, iDb, 0, pTrigger->name, 0);
+ }
+}
+
+/*
+** Remove a trigger from the hash tables of the sqlite* pointer.
+*/
+void sqlite3UnlinkAndDeleteTrigger(sqlite3 *db, int iDb, const char *zName){
+ Trigger *pTrigger;
+ int nName = strlen(zName);
+ pTrigger = sqlite3HashInsert(&(db->aDb[iDb].pSchema->trigHash),
+ zName, nName, 0);
+ if( pTrigger ){
+ Table *pTable = tableOfTrigger(pTrigger);
+ assert( pTable!=0 );
+ if( pTable->pTrigger == pTrigger ){
+ pTable->pTrigger = pTrigger->pNext;
+ }else{
+ Trigger *cc = pTable->pTrigger;
+ while( cc ){
+ if( cc->pNext == pTrigger ){
+ cc->pNext = cc->pNext->pNext;
+ break;
+ }
+ cc = cc->pNext;
+ }
+ assert(cc);
+ }
+ sqlite3DeleteTrigger(pTrigger);
+ db->flags |= SQLITE_InternChanges;
+ }
+}
+
+/*
+** pEList is the SET clause of an UPDATE statement. Each entry
+** in pEList is of the format <id>=<expr>. If any of the entries
+** in pEList have an <id> which matches an identifier in pIdList,
+** then return TRUE. If pIdList==NULL, then it is considered a
+** wildcard that matches anything. Likewise if pEList==NULL then
+** it matches anything so always return true. Return false only
+** if there is no match.
+*/
+static int checkColumnOverLap(IdList *pIdList, ExprList *pEList){
+ int e;
+ if( !pIdList || !pEList ) return 1;
+ for(e=0; e<pEList->nExpr; e++){
+ if( sqlite3IdListIndex(pIdList, pEList->a[e].zName)>=0 ) return 1;
+ }
+ return 0;
+}
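+
+/*
+** Example (hypothetical names, added for exposition): for a trigger declared
+** "AFTER UPDATE OF b ON t1", pIdList contains only "b".  The statement
+** "UPDATE t1 SET a=1" yields a pEList of {a}, so checkColumnOverLap()
+** returns 0 and the trigger does not fire; "UPDATE t1 SET a=1, b=2"
+** returns 1 and the trigger fires.
+*/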
+
+/*
+** Return a bit vector to indicate what kind of triggers exist for operation
+** "op" on table pTab. If pChanges is not NULL then it is a list of columns
+** that are being updated. Triggers only match if the ON clause of the
+** trigger definition overlaps the set of columns being updated.
+**
+** The returned bit vector is some combination of TRIGGER_BEFORE and
+** TRIGGER_AFTER.
+*/
+int sqlite3TriggersExist(
+ Parse *pParse, /* Used to check for recursive triggers */
+ Table *pTab, /* The table that contains the triggers */
+ int op, /* one of TK_DELETE, TK_INSERT, TK_UPDATE */
+ ExprList *pChanges /* Columns that change in an UPDATE statement */
+){
+ Trigger *pTrigger;
+ int mask = 0;
+
+ pTrigger = IsVirtual(pTab) ? 0 : pTab->pTrigger;
+ while( pTrigger ){
+ if( pTrigger->op==op && checkColumnOverLap(pTrigger->pColumns, pChanges) ){
+ mask |= pTrigger->tr_tm;
+ }
+ pTrigger = pTrigger->pNext;
+ }
+ return mask;
+}
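+
+/*
+** Note (added for exposition): callers test the returned mask against
+** TRIGGER_BEFORE and TRIGGER_AFTER, e.g.
+**
+**     if( sqlite3TriggersExist(pParse, pTab, TK_UPDATE, pChanges)
+**          & TRIGGER_BEFORE ){ ... }
+**
+** sqlite3Update() in update.c (also added by this commit) stores the result
+** in triggers_exist and uses it to decide whether to open the OLD and NEW
+** pseudo-tables.
+*/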
+
+/*
+** Convert the pStep->target token into a SrcList and return a pointer
+** to that SrcList.
+**
+** This routine adds a specific database name, if needed, to the target when
+** forming the SrcList. This prevents a trigger in one database from
+** referring to a target in another database. An exception is when the
+** trigger is in TEMP in which case it can refer to any other database it
+** wants.
+*/
+static SrcList *targetSrcList(
+ Parse *pParse, /* The parsing context */
+ TriggerStep *pStep /* The trigger containing the target token */
+){
+ Token sDb; /* Dummy database name token */
+ int iDb; /* Index of the database to use */
+ SrcList *pSrc; /* SrcList to be returned */
+
+ iDb = sqlite3SchemaToIndex(pParse->db, pStep->pTrig->pSchema);
+ if( iDb==0 || iDb>=2 ){
+ assert( iDb<pParse->db->nDb );
+ sDb.z = (u8*)pParse->db->aDb[iDb].zName;
+ sDb.n = strlen((char*)sDb.z);
+ pSrc = sqlite3SrcListAppend(0, &sDb, &pStep->target);
+ } else {
+ pSrc = sqlite3SrcListAppend(0, &pStep->target, 0);
+ }
+ return pSrc;
+}
+
+/*
+** Generate VDBE code for zero or more statements inside the body of a
+** trigger.
+*/
+static int codeTriggerProgram(
+ Parse *pParse, /* The parser context */
+ TriggerStep *pStepList, /* List of statements inside the trigger body */
+ int orconfin /* Conflict algorithm. (OE_Abort, etc) */
+){
+ TriggerStep * pTriggerStep = pStepList;
+ int orconf;
+ Vdbe *v = pParse->pVdbe;
+
+ assert( pTriggerStep!=0 );
+ assert( v!=0 );
+ sqlite3VdbeAddOp(v, OP_ContextPush, 0, 0);
+ VdbeComment((v, "# begin trigger %s", pStepList->pTrig->name));
+ while( pTriggerStep ){
+ orconf = (orconfin == OE_Default)?pTriggerStep->orconf:orconfin;
+ pParse->trigStack->orconf = orconf;
+ switch( pTriggerStep->op ){
+ case TK_SELECT: {
+ Select * ss = sqlite3SelectDup(pTriggerStep->pSelect);
+ assert(ss);
+ assert(ss->pSrc);
+ sqlite3SelectResolve(pParse, ss, 0);
+ sqlite3Select(pParse, ss, SRT_Discard, 0, 0, 0, 0, 0);
+ sqlite3SelectDelete(ss);
+ break;
+ }
+ case TK_UPDATE: {
+ SrcList *pSrc;
+ pSrc = targetSrcList(pParse, pTriggerStep);
+ sqlite3VdbeAddOp(v, OP_ResetCount, 0, 0);
+ sqlite3Update(pParse, pSrc,
+ sqlite3ExprListDup(pTriggerStep->pExprList),
+ sqlite3ExprDup(pTriggerStep->pWhere), orconf);
+ sqlite3VdbeAddOp(v, OP_ResetCount, 1, 0);
+ break;
+ }
+ case TK_INSERT: {
+ SrcList *pSrc;
+ pSrc = targetSrcList(pParse, pTriggerStep);
+ sqlite3VdbeAddOp(v, OP_ResetCount, 0, 0);
+ sqlite3Insert(pParse, pSrc,
+ sqlite3ExprListDup(pTriggerStep->pExprList),
+ sqlite3SelectDup(pTriggerStep->pSelect),
+ sqlite3IdListDup(pTriggerStep->pIdList), orconf);
+ sqlite3VdbeAddOp(v, OP_ResetCount, 1, 0);
+ break;
+ }
+ case TK_DELETE: {
+ SrcList *pSrc;
+ sqlite3VdbeAddOp(v, OP_ResetCount, 0, 0);
+ pSrc = targetSrcList(pParse, pTriggerStep);
+ sqlite3DeleteFrom(pParse, pSrc, sqlite3ExprDup(pTriggerStep->pWhere));
+ sqlite3VdbeAddOp(v, OP_ResetCount, 1, 0);
+ break;
+ }
+ default:
+ assert(0);
+ }
+ pTriggerStep = pTriggerStep->pNext;
+ }
+ sqlite3VdbeAddOp(v, OP_ContextPop, 0, 0);
+ VdbeComment((v, "# end trigger %s", pStepList->pTrig->name));
+
+ return 0;
+}
+
+/*
+** This is called to code FOR EACH ROW triggers.
+**
+** When the code that this function generates is executed, the following
+** must be true:
+**
+** 1. No cursors may be open in the main database. (But newIdx and oldIdx
+** can be indices of cursors in temporary tables. See below.)
+**
+** 2. If the triggers being coded are ON INSERT or ON UPDATE triggers, then
+** a temporary vdbe cursor (index newIdx) must be open and pointing at
+** a row containing values to be substituted for new.* expressions in the
+** trigger program(s).
+**
+** 3. If the triggers being coded are ON DELETE or ON UPDATE triggers, then
+** a temporary vdbe cursor (index oldIdx) must be open and pointing at
+** a row containing values to be substituted for old.* expressions in the
+** trigger program(s).
+**
+*/
+int sqlite3CodeRowTrigger(
+ Parse *pParse, /* Parse context */
+ int op, /* One of TK_UPDATE, TK_INSERT, TK_DELETE */
+ ExprList *pChanges, /* Changes list for any UPDATE OF triggers */
+ int tr_tm, /* One of TRIGGER_BEFORE, TRIGGER_AFTER */
+ Table *pTab, /* The table to code triggers from */
+ int newIdx, /* The index of the "new" row to access */
+ int oldIdx, /* The index of the "old" row to access */
+ int orconf, /* ON CONFLICT policy */
+ int ignoreJump /* Instruction to jump to for RAISE(IGNORE) */
+){
+ Trigger *p;
+ TriggerStack trigStackEntry;
+
+ assert(op == TK_UPDATE || op == TK_INSERT || op == TK_DELETE);
+ assert(tr_tm == TRIGGER_BEFORE || tr_tm == TRIGGER_AFTER );
+
+ assert(newIdx != -1 || oldIdx != -1);
+
+ for(p=pTab->pTrigger; p; p=p->pNext){
+ int fire_this = 0;
+
+ /* Determine whether we should code this trigger */
+ if(
+ p->op==op &&
+ p->tr_tm==tr_tm &&
+ (p->pSchema==p->pTabSchema || p->pSchema==pParse->db->aDb[1].pSchema) &&
+ (op!=TK_UPDATE||!p->pColumns||checkColumnOverLap(p->pColumns,pChanges))
+ ){
+ TriggerStack *pS; /* Pointer to trigger-stack entry */
+ for(pS=pParse->trigStack; pS && p!=pS->pTrigger; pS=pS->pNext){}
+ if( !pS ){
+ fire_this = 1;
+ }
+#if 0 /* Give no warning for recursive triggers. Just do not do them */
+ else{
+ sqlite3ErrorMsg(pParse, "recursive triggers not supported (%s)",
+ p->name);
+ return SQLITE_ERROR;
+ }
+#endif
+ }
+
+ if( fire_this ){
+ int endTrigger;
+ Expr * whenExpr;
+ AuthContext sContext;
+ NameContext sNC;
+
+ memset(&sNC, 0, sizeof(sNC));
+ sNC.pParse = pParse;
+
+ /* Push an entry on to the trigger stack */
+ trigStackEntry.pTrigger = p;
+ trigStackEntry.newIdx = newIdx;
+ trigStackEntry.oldIdx = oldIdx;
+ trigStackEntry.pTab = pTab;
+ trigStackEntry.pNext = pParse->trigStack;
+ trigStackEntry.ignoreJump = ignoreJump;
+ pParse->trigStack = &trigStackEntry;
+ sqlite3AuthContextPush(pParse, &sContext, p->name);
+
+ /* code the WHEN clause */
+ endTrigger = sqlite3VdbeMakeLabel(pParse->pVdbe);
+ whenExpr = sqlite3ExprDup(p->pWhen);
+ if( sqlite3ExprResolveNames(&sNC, whenExpr) ){
+ pParse->trigStack = trigStackEntry.pNext;
+ sqlite3ExprDelete(whenExpr);
+ return 1;
+ }
+ sqlite3ExprIfFalse(pParse, whenExpr, endTrigger, 1);
+ sqlite3ExprDelete(whenExpr);
+
+ codeTriggerProgram(pParse, p->step_list, orconf);
+
+ /* Pop the entry off the trigger stack */
+ pParse->trigStack = trigStackEntry.pNext;
+ sqlite3AuthContextPop(&sContext);
+
+ sqlite3VdbeResolveLabel(pParse->pVdbe, endTrigger);
+ }
+ }
+ return 0;
+}
+#endif /* !defined(SQLITE_OMIT_TRIGGER) */
Added: freeswitch/trunk/libs/sqlite/src/update.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/update.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,622 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains C code routines that are called by the parser
+** to handle UPDATE statements.
+**
+** $Id: update.c,v 1.133 2006/06/27 13:20:21 drh Exp $
+*/
+#include "sqliteInt.h"
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Forward declaration */
+static void updateVirtualTable(
+ Parse *pParse, /* The parsing context */
+ SrcList *pSrc, /* The virtual table to be modified */
+ Table *pTab, /* The virtual table */
+ ExprList *pChanges, /* The columns to change in the UPDATE statement */
+ Expr *pRowidExpr, /* Expression used to recompute the rowid */
+ int *aXRef, /* Mapping from columns of pTab to entries in pChanges */
+ Expr *pWhere /* WHERE clause of the UPDATE statement */
+);
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+/*
+** The most recently coded instruction was an OP_Column to retrieve the
+** i-th column of table pTab. This routine sets the P3 parameter of the
+** OP_Column to the default value, if any.
+**
+** The default value of a column is specified by a DEFAULT clause in the
+** column definition. This was either supplied by the user when the table
+** was created, or added later to the table definition by an ALTER TABLE
+** command. If the latter, then the row-records in the table btree on disk
+** may not contain a value for the column and the default value, taken
+** from the P3 parameter of the OP_Column instruction, is returned instead.
+** If the former, then all row-records are guaranteed to include a value
+** for the column and the P3 value is not required.
+**
+** Column definitions created by an ALTER TABLE command may only have
+** literal default values specified: a number, null or a string. (If a more
+** complicated default expression value was provided, it is evaluated
+** when the ALTER TABLE is executed and one of the literal values written
+** into the sqlite_master table.)
+**
+** Therefore, the P3 parameter is only required if the default value for
+** the column is a literal number, string or null. The sqlite3ValueFromExpr()
+** function is capable of transforming these types of expressions into
+** sqlite3_value objects.
+*/
+void sqlite3ColumnDefault(Vdbe *v, Table *pTab, int i){
+ if( pTab && !pTab->pSelect ){
+ sqlite3_value *pValue;
+ u8 enc = ENC(sqlite3VdbeDb(v));
+ Column *pCol = &pTab->aCol[i];
+ sqlite3ValueFromExpr(pCol->pDflt, enc, pCol->affinity, &pValue);
+ if( pValue ){
+ sqlite3VdbeChangeP3(v, -1, (const char *)pValue, P3_MEM);
+ }else{
+ VdbeComment((v, "# %s.%s", pTab->zName, pCol->zName));
+ }
+ }
+}
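+
+/*
+** Example (hypothetical schema, added for exposition): after
+**
+**     CREATE TABLE t1(a);
+**     INSERT INTO t1 VALUES(1);
+**     ALTER TABLE t1 ADD COLUMN b DEFAULT 'x';
+**
+** the row inserted before the ALTER TABLE has no stored value for column b,
+** so an OP_Column that reads t1.b on that row falls back to the literal 'x'
+** attached here as the P3 parameter.
+*/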
+
+/*
+** Process an UPDATE statement.
+**
+** UPDATE OR IGNORE table_wxyz SET a=b, c=d WHERE e<5 AND f NOT NULL;
+** \_______/ \________/ \______/ \________________/
+**            onError      pTabList        pChanges          pWhere
+*/
+void sqlite3Update(
+ Parse *pParse, /* The parser context */
+ SrcList *pTabList, /* The table in which we should change things */
+ ExprList *pChanges, /* Things to be changed */
+ Expr *pWhere, /* The WHERE clause. May be null */
+ int onError /* How to handle constraint errors */
+){
+ int i, j; /* Loop counters */
+ Table *pTab; /* The table to be updated */
+ int addr = 0; /* VDBE instruction address of the start of the loop */
+ WhereInfo *pWInfo; /* Information about the WHERE clause */
+ Vdbe *v; /* The virtual database engine */
+ Index *pIdx; /* For looping over indices */
+ int nIdx; /* Number of indices that need updating */
+ int nIdxTotal; /* Total number of indices */
+ int iCur; /* VDBE Cursor number of pTab */
+ sqlite3 *db; /* The database structure */
+ Index **apIdx = 0; /* An array of indices that need updating too */
+ char *aIdxUsed = 0; /* aIdxUsed[i]==1 if the i-th index is used */
+ int *aXRef = 0; /* aXRef[i] is the index in pChanges->a[] of the
+ ** expression for the i-th column of the table.
+ ** aXRef[i]==-1 if the i-th column is not changed. */
+ int chngRowid; /* True if the record number is being changed */
+ Expr *pRowidExpr = 0; /* Expression defining the new record number */
+ int openAll = 0; /* True if all indices need to be opened */
+ AuthContext sContext; /* The authorization context */
+ NameContext sNC; /* The name-context to resolve expressions in */
+ int iDb; /* Database containing the table being updated */
+
+#ifndef SQLITE_OMIT_TRIGGER
+ int isView; /* Trying to update a view */
+ int triggers_exist = 0; /* True if any row triggers exist */
+#endif
+
+ int newIdx = -1; /* index of trigger "new" temp table */
+ int oldIdx = -1; /* index of trigger "old" temp table */
+
+ sContext.pParse = 0;
+ if( pParse->nErr || sqlite3MallocFailed() ){
+ goto update_cleanup;
+ }
+ db = pParse->db;
+ assert( pTabList->nSrc==1 );
+
+ /* Locate the table which we want to update.
+ */
+ pTab = sqlite3SrcListLookup(pParse, pTabList);
+ if( pTab==0 ) goto update_cleanup;
+ iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+
+ /* Figure out if we have any triggers and if the table being
+ ** updated is a view
+ */
+#ifndef SQLITE_OMIT_TRIGGER
+ triggers_exist = sqlite3TriggersExist(pParse, pTab, TK_UPDATE, pChanges);
+ isView = pTab->pSelect!=0;
+#else
+# define triggers_exist 0
+# define isView 0
+#endif
+#ifdef SQLITE_OMIT_VIEW
+# undef isView
+# define isView 0
+#endif
+
+ if( sqlite3IsReadOnly(pParse, pTab, triggers_exist) ){
+ goto update_cleanup;
+ }
+ if( sqlite3ViewGetColumnNames(pParse, pTab) ){
+ goto update_cleanup;
+ }
+ aXRef = sqliteMallocRaw( sizeof(int) * pTab->nCol );
+ if( aXRef==0 ) goto update_cleanup;
+ for(i=0; i<pTab->nCol; i++) aXRef[i] = -1;
+
+ /* If there are FOR EACH ROW triggers, allocate cursors for the
+ ** special OLD and NEW tables
+ */
+ if( triggers_exist ){
+ newIdx = pParse->nTab++;
+ oldIdx = pParse->nTab++;
+ }
+
+ /* Allocate cursors for the main database table and for all indices.
+ ** The index cursors might not be used, but if they are used they
+ ** need to occur right after the database cursor. So go ahead and
+ ** allocate enough space, just in case.
+ */
+ pTabList->a[0].iCursor = iCur = pParse->nTab++;
+ for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
+ pParse->nTab++;
+ }
+
+ /* Initialize the name-context */
+ memset(&sNC, 0, sizeof(sNC));
+ sNC.pParse = pParse;
+ sNC.pSrcList = pTabList;
+
+ /* Resolve the column names in all the expressions of the
+ ** UPDATE statement. Also find the column index
+ ** for each column to be updated in the pChanges array. For each
+ ** column to be updated, make sure we have authorization to change
+ ** that column.
+ */
+ chngRowid = 0;
+ for(i=0; i<pChanges->nExpr; i++){
+ if( sqlite3ExprResolveNames(&sNC, pChanges->a[i].pExpr) ){
+ goto update_cleanup;
+ }
+ for(j=0; j<pTab->nCol; j++){
+ if( sqlite3StrICmp(pTab->aCol[j].zName, pChanges->a[i].zName)==0 ){
+ if( j==pTab->iPKey ){
+ chngRowid = 1;
+ pRowidExpr = pChanges->a[i].pExpr;
+ }
+ aXRef[j] = i;
+ break;
+ }
+ }
+ if( j>=pTab->nCol ){
+ if( sqlite3IsRowid(pChanges->a[i].zName) ){
+ chngRowid = 1;
+ pRowidExpr = pChanges->a[i].pExpr;
+ }else{
+ sqlite3ErrorMsg(pParse, "no such column: %s", pChanges->a[i].zName);
+ goto update_cleanup;
+ }
+ }
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ {
+ int rc;
+ rc = sqlite3AuthCheck(pParse, SQLITE_UPDATE, pTab->zName,
+ pTab->aCol[j].zName, db->aDb[iDb].zName);
+ if( rc==SQLITE_DENY ){
+ goto update_cleanup;
+ }else if( rc==SQLITE_IGNORE ){
+ aXRef[j] = -1;
+ }
+ }
+#endif
+ }
+
+ /* Allocate memory for the array apIdx[] and fill it with pointers to every
+ ** index that needs to be updated. Indices only need updating if their
+ ** key includes one of the columns named in pChanges or if the record
+ ** number of the original table entry is changing.
+ */
+ for(nIdx=nIdxTotal=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, nIdxTotal++){
+ if( chngRowid ){
+ i = 0;
+ }else {
+ for(i=0; i<pIdx->nColumn; i++){
+ if( aXRef[pIdx->aiColumn[i]]>=0 ) break;
+ }
+ }
+ if( i<pIdx->nColumn ) nIdx++;
+ }
+ if( nIdxTotal>0 ){
+ apIdx = sqliteMallocRaw( sizeof(Index*) * nIdx + nIdxTotal );
+ if( apIdx==0 ) goto update_cleanup;
+ aIdxUsed = (char*)&apIdx[nIdx];
+ }
+ for(nIdx=j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){
+ if( chngRowid ){
+ i = 0;
+ }else{
+ for(i=0; i<pIdx->nColumn; i++){
+ if( aXRef[pIdx->aiColumn[i]]>=0 ) break;
+ }
+ }
+ if( i<pIdx->nColumn ){
+ apIdx[nIdx++] = pIdx;
+ aIdxUsed[j] = 1;
+ }else{
+ aIdxUsed[j] = 0;
+ }
+ }
+
+ /* Begin generating code.
+ */
+ v = sqlite3GetVdbe(pParse);
+ if( v==0 ) goto update_cleanup;
+ if( pParse->nested==0 ) sqlite3VdbeCountChanges(v);
+ sqlite3BeginWriteOperation(pParse, 1, iDb);
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ /* Virtual tables must be handled separately */
+ if( IsVirtual(pTab) ){
+ updateVirtualTable(pParse, pTabList, pTab, pChanges, pRowidExpr, aXRef,
+ pWhere);
+ pWhere = 0;
+ pTabList = 0;
+ goto update_cleanup;
+ }
+#endif
+
+ /* Resolve the column names in all the expressions in the
+ ** WHERE clause.
+ */
+ if( sqlite3ExprResolveNames(&sNC, pWhere) ){
+ goto update_cleanup;
+ }
+
+ /* Start the view context
+ */
+ if( isView ){
+ sqlite3AuthContextPush(pParse, &sContext, pTab->zName);
+ }
+
+ /* If we are trying to update a view, realize that view into
+ ** an ephemeral table.
+ */
+ if( isView ){
+ Select *pView;
+ pView = sqlite3SelectDup(pTab->pSelect);
+ sqlite3Select(pParse, pView, SRT_EphemTab, iCur, 0, 0, 0, 0);
+ sqlite3SelectDelete(pView);
+ }
+
+ /* Begin the database scan
+ */
+ pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, 0);
+ if( pWInfo==0 ) goto update_cleanup;
+
+ /* Remember the rowid of every item to be updated.
+ */
+ sqlite3VdbeAddOp(v, IsVirtual(pTab) ? OP_VRowid : OP_Rowid, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_FifoWrite, 0, 0);
+
+ /* End the database scan loop.
+ */
+ sqlite3WhereEnd(pWInfo);
+
+ /* Initialize the count of updated rows
+ */
+ if( db->flags & SQLITE_CountRows && !pParse->trigStack ){
+ sqlite3VdbeAddOp(v, OP_Integer, 0, 0);
+ }
+
+ if( triggers_exist ){
+ /* Create pseudo-tables for NEW and OLD
+ */
+ sqlite3VdbeAddOp(v, OP_OpenPseudo, oldIdx, 0);
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, oldIdx, pTab->nCol);
+ sqlite3VdbeAddOp(v, OP_OpenPseudo, newIdx, 0);
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, newIdx, pTab->nCol);
+
+ /* The top of the update loop for when there are triggers.
+ */
+ addr = sqlite3VdbeAddOp(v, OP_FifoRead, 0, 0);
+
+ if( !isView ){
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ /* Open a cursor and make it point to the record that is
+ ** being updated.
+ */
+ sqlite3OpenTable(pParse, iCur, iDb, pTab, OP_OpenRead);
+ }
+ sqlite3VdbeAddOp(v, OP_MoveGe, iCur, 0);
+
+ /* Generate the OLD table
+ */
+ sqlite3VdbeAddOp(v, OP_Rowid, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_RowData, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_Insert, oldIdx, 0);
+
+ /* Generate the NEW table
+ */
+ if( chngRowid ){
+ sqlite3ExprCodeAndCache(pParse, pRowidExpr);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Rowid, iCur, 0);
+ }
+ for(i=0; i<pTab->nCol; i++){
+ if( i==pTab->iPKey ){
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ continue;
+ }
+ j = aXRef[i];
+ if( j<0 ){
+ sqlite3VdbeAddOp(v, OP_Column, iCur, i);
+ sqlite3ColumnDefault(v, pTab, i);
+ }else{
+ sqlite3ExprCodeAndCache(pParse, pChanges->a[j].pExpr);
+ }
+ }
+ sqlite3VdbeAddOp(v, OP_MakeRecord, pTab->nCol, 0);
+ if( !isView ){
+ sqlite3TableAffinityStr(v, pTab);
+ }
+ if( pParse->nErr ) goto update_cleanup;
+ sqlite3VdbeAddOp(v, OP_Insert, newIdx, 0);
+ if( !isView ){
+ sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
+ }
+
+ /* Fire the BEFORE and INSTEAD OF triggers
+ */
+ if( sqlite3CodeRowTrigger(pParse, TK_UPDATE, pChanges, TRIGGER_BEFORE, pTab,
+ newIdx, oldIdx, onError, addr) ){
+ goto update_cleanup;
+ }
+ }
+
+ if( !isView && !IsVirtual(pTab) ){
+ /*
+ ** Open every index that needs updating. Note that if any
+ ** index could potentially invoke a REPLACE conflict resolution
+ ** action, then we need to open all indices because we might need
+ ** to be deleting some records.
+ */
+ sqlite3OpenTable(pParse, iCur, iDb, pTab, OP_OpenWrite);
+ if( onError==OE_Replace ){
+ openAll = 1;
+ }else{
+ openAll = 0;
+ for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
+ if( pIdx->onError==OE_Replace ){
+ openAll = 1;
+ break;
+ }
+ }
+ }
+ for(i=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){
+ if( openAll || aIdxUsed[i] ){
+ KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx);
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ sqlite3VdbeOp3(v, OP_OpenWrite, iCur+i+1, pIdx->tnum,
+ (char*)pKey, P3_KEYINFO_HANDOFF);
+ assert( pParse->nTab>iCur+i+1 );
+ }
+ }
+
+ /* Loop over every record that needs updating. We have to load
+ ** the old data for each record to be updated because some columns
+ ** might not change and we will need to copy the old value.
+ ** Also, the old data is needed to delete the old index entries.
+ ** So make the cursor point at the old record.
+ */
+ if( !triggers_exist ){
+ addr = sqlite3VdbeAddOp(v, OP_FifoRead, 0, 0);
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_NotExists, iCur, addr);
+
+ /* If the record number will change, push the record number as it
+ ** will be after the update. (The old record number is currently
+ ** on top of the stack.)
+ */
+ if( chngRowid ){
+ sqlite3ExprCode(pParse, pRowidExpr);
+ sqlite3VdbeAddOp(v, OP_MustBeInt, 0, 0);
+ }
+
+ /* Compute new data for this record.
+ */
+ for(i=0; i<pTab->nCol; i++){
+ if( i==pTab->iPKey ){
+ sqlite3VdbeAddOp(v, OP_Null, 0, 0);
+ continue;
+ }
+ j = aXRef[i];
+ if( j<0 ){
+ sqlite3VdbeAddOp(v, OP_Column, iCur, i);
+ sqlite3ColumnDefault(v, pTab, i);
+ }else{
+ sqlite3ExprCode(pParse, pChanges->a[j].pExpr);
+ }
+ }
+
+ /* Do constraint checks
+ */
+ sqlite3GenerateConstraintChecks(pParse, pTab, iCur, aIdxUsed, chngRowid, 1,
+ onError, addr);
+
+ /* Delete the old indices for the current record.
+ */
+ sqlite3GenerateRowIndexDelete(v, pTab, iCur, aIdxUsed);
+
+ /* If changing the record number, delete the old record.
+ */
+ if( chngRowid ){
+ sqlite3VdbeAddOp(v, OP_Delete, iCur, 0);
+ }
+
+ /* Create the new index entries and the new record.
+ */
+ sqlite3CompleteInsertion(pParse, pTab, iCur, aIdxUsed, chngRowid, 1, -1);
+ }
+
+ /* Increment the row counter
+ */
+ if( db->flags & SQLITE_CountRows && !pParse->trigStack){
+ sqlite3VdbeAddOp(v, OP_AddImm, 1, 0);
+ }
+
+ /* If there are triggers, close all the cursors after each iteration
+ ** through the loop. Then fire the AFTER triggers.
+ */
+ if( triggers_exist ){
+ if( !isView ){
+ for(i=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){
+ if( openAll || aIdxUsed[i] )
+ sqlite3VdbeAddOp(v, OP_Close, iCur+i+1, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
+ }
+ if( sqlite3CodeRowTrigger(pParse, TK_UPDATE, pChanges, TRIGGER_AFTER, pTab,
+ newIdx, oldIdx, onError, addr) ){
+ goto update_cleanup;
+ }
+ }
+
+ /* Repeat the above with the next record to be updated, until
+ ** all records selected by the WHERE clause have been updated.
+ */
+ sqlite3VdbeAddOp(v, OP_Goto, 0, addr);
+ sqlite3VdbeJumpHere(v, addr);
+
+ /* Close all tables if there were no FOR EACH ROW triggers */
+ if( !triggers_exist ){
+ for(i=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){
+ if( openAll || aIdxUsed[i] ){
+ sqlite3VdbeAddOp(v, OP_Close, iCur+i+1, 0);
+ }
+ }
+ sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Close, newIdx, 0);
+ sqlite3VdbeAddOp(v, OP_Close, oldIdx, 0);
+ }
+
+ /*
+ ** Return the number of rows that were changed. If this routine is
+ ** generating code because of a call to sqlite3NestedParse(), do not
+ ** invoke the callback function.
+ */
+ if( db->flags & SQLITE_CountRows && !pParse->trigStack && pParse->nested==0 ){
+ sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
+ sqlite3VdbeSetNumCols(v, 1);
+ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "rows updated", P3_STATIC);
+ }
+
+update_cleanup:
+ sqlite3AuthContextPop(&sContext);
+ sqliteFree(apIdx);
+ sqliteFree(aXRef);
+ sqlite3SrcListDelete(pTabList);
+ sqlite3ExprListDelete(pChanges);
+ sqlite3ExprDelete(pWhere);
+ return;
+}
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/*
+** Generate code for an UPDATE of a virtual table.
+**
+** The strategy is that we create an ephemeral table that contains
+** for each row to be changed:
+**
+** (A) The original rowid of that row.
+** (B) The revised rowid for the row. (note1)
+** (C) The content of every column in the row.
+**
+** Then we loop over this ephemeral table and for each row in
+** the ephemeral table call VUpdate.
+**
+** When finished, drop the ephemeral table.
+**
+** (note1) Actually, if we know in advance that (A) is always the same
+** as (B) we only store (A), then duplicate (A) when pulling
+** it out of the ephemeral table before calling VUpdate.
+*/
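+
+/*
+** Example (hypothetical virtual table vt(a,b,c), added for exposition): for
+**
+**     UPDATE vt SET b = b+1 WHERE a > 10;
+**
+** the SELECT constructed below is conceptually
+**
+**     SELECT _rowid_, a, b+1, c FROM vt WHERE a > 10;
+**
+** with an extra result column for the new rowid inserted right after
+** _rowid_ whenever the statement also changes the rowid.
+*/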
+static void updateVirtualTable(
+ Parse *pParse, /* The parsing context */
+ SrcList *pSrc, /* The virtual table to be modified */
+ Table *pTab, /* The virtual table */
+ ExprList *pChanges, /* The columns to change in the UPDATE statement */
+ Expr *pRowid, /* Expression used to recompute the rowid */
+ int *aXRef, /* Mapping from columns of pTab to entries in pChanges */
+ Expr *pWhere /* WHERE clause of the UPDATE statement */
+){
+ Vdbe *v = pParse->pVdbe; /* Virtual machine under construction */
+ ExprList *pEList = 0; /* The result set of the SELECT statement */
+ Select *pSelect = 0; /* The SELECT statement */
+ Expr *pExpr; /* Temporary expression */
+ int ephemTab; /* Table holding the result of the SELECT */
+ int i; /* Loop counter */
+ int addr; /* Address of top of loop */
+
+ /* Construct the SELECT statement that will find the new values for
+ ** all updated rows.
+ */
+ pEList = sqlite3ExprListAppend(0, sqlite3CreateIdExpr("_rowid_"), 0);
+ if( pRowid ){
+ pEList = sqlite3ExprListAppend(pEList, sqlite3ExprDup(pRowid), 0);
+ }
+ assert( pTab->iPKey<0 );
+ for(i=0; i<pTab->nCol; i++){
+ if( aXRef[i]>=0 ){
+ pExpr = sqlite3ExprDup(pChanges->a[aXRef[i]].pExpr);
+ }else{
+ pExpr = sqlite3CreateIdExpr(pTab->aCol[i].zName);
+ }
+ pEList = sqlite3ExprListAppend(pEList, pExpr, 0);
+ }
+ pSelect = sqlite3SelectNew(pEList, pSrc, pWhere, 0, 0, 0, 0, 0, 0);
+
+ /* Create the ephemeral table into which the update results will
+ ** be stored.
+ */
+ assert( v );
+ ephemTab = pParse->nTab++;
+ sqlite3VdbeAddOp(v, OP_OpenEphemeral, ephemTab, pTab->nCol+1+(pRowid!=0));
+
+ /* fill the ephemeral table
+ */
+ sqlite3Select(pParse, pSelect, SRT_Table, ephemTab, 0, 0, 0, 0);
+
+ /*
+ ** Generate code to scan the ephemeral table and call VUpdate
+ ** for each row.
+ */
+ sqlite3VdbeAddOp(v, OP_Rewind, ephemTab, 0);
+ addr = sqlite3VdbeCurrentAddr(v);
+ sqlite3VdbeAddOp(v, OP_Column, ephemTab, 0);
+ if( pRowid ){
+ sqlite3VdbeAddOp(v, OP_Column, ephemTab, 1);
+ }else{
+ sqlite3VdbeAddOp(v, OP_Dup, 0, 0);
+ }
+ for(i=0; i<pTab->nCol; i++){
+ sqlite3VdbeAddOp(v, OP_Column, ephemTab, i+1+(pRowid!=0));
+ }
+ pParse->pVirtualLock = pTab;
+ sqlite3VdbeOp3(v, OP_VUpdate, 0, pTab->nCol+2,
+ (const char*)pTab->pVtab, P3_VTAB);
+ sqlite3VdbeAddOp(v, OP_Next, ephemTab, addr);
+ sqlite3VdbeAddOp(v, OP_Close, ephemTab, 0);
+
+ /* Cleanup */
+ sqlite3SelectDelete(pSelect);
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
Added: freeswitch/trunk/libs/sqlite/src/utf.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/utf.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,596 @@
+/*
+** 2004 April 13
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains routines used to translate between UTF-8,
+** UTF-16, UTF-16BE, and UTF-16LE.
+**
+** $Id: utf.c,v 1.42 2006/10/05 11:43:53 drh Exp $
+**
+** Notes on UTF-8:
+**
+** Byte-0 Byte-1 Byte-2 Byte-3 Value
+** 0xxxxxxx 00000000 00000000 0xxxxxxx
+** 110yyyyy 10xxxxxx 00000000 00000yyy yyxxxxxx
+** 1110zzzz 10yyyyyy 10xxxxxx 00000000 zzzzyyyy yyxxxxxx
+** 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx 000uuuuu zzzzyyyy yyxxxxxx
+**
+**
+** Notes on UTF-16: (with wwww+1==uuuuu)
+**
+** Word-0 Word-1 Value
+** 110110ww wwzzzzyy 110111yy yyxxxxxx 000uuuuu zzzzyyyy yyxxxxxx
+** zzzzyyyy yyxxxxxx 00000000 zzzzyyyy yyxxxxxx
+**
+**
+** BOM or Byte Order Mark:
+** 0xff 0xfe little-endian utf-16 follows
+** 0xfe 0xff big-endian utf-16 follows
+**
+**
+** Handling of malformed strings:
+**
+** SQLite accepts and processes malformed strings without an error wherever
+** possible. However this is not possible when converting between UTF-8 and
+** UTF-16.
+**
+** When converting malformed UTF-8 strings to UTF-16, one instance of the
+** replacement character U+FFFD is substituted for each byte that cannot be
+** interpreted as part of a valid unicode character.
+**
+** When converting malformed UTF-16 strings to UTF-8, one instance of the
+** replacement character U+FFFD is substituted for each pair of bytes that
+** cannot be interpreted as part of a valid unicode character.
+**
+** This file contains the following public routines:
+**
+** sqlite3VdbeMemTranslate() - Translate the encoding used by a Mem* string.
+** sqlite3VdbeMemHandleBom() - Handle byte-order-marks in UTF16 Mem* strings.
+** sqlite3utf16ByteLen() - Calculate byte-length of a void* UTF16 string.
+** sqlite3utf8CharLen() - Calculate char-length of a char* UTF8 string.
+** sqlite3utf8LikeCompare() - Do a LIKE match given two UTF8 char* strings.
+**
+*/
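As a quick illustration of the UTF-8 byte layouts in the notes above, the following minimal standalone sketch (not part of this commit; the function name and example code point are arbitrary) produces exactly the byte patterns from the table:

#include <stdio.h>

/* Illustrative only: encode one code point (c < 0x110000, not a surrogate)
** into buf[] using the byte layouts from the table above. Returns the
** number of bytes written. */
static int encode_utf8(unsigned int c, unsigned char *buf){
  if( c<0x80 ){                      /* 0xxxxxxx */
    buf[0] = (unsigned char)c;
    return 1;
  }else if( c<0x800 ){               /* 110yyyyy 10xxxxxx */
    buf[0] = (unsigned char)(0xC0 | (c>>6));
    buf[1] = (unsigned char)(0x80 | (c & 0x3F));
    return 2;
  }else if( c<0x10000 ){             /* 1110zzzz 10yyyyyy 10xxxxxx */
    buf[0] = (unsigned char)(0xE0 | (c>>12));
    buf[1] = (unsigned char)(0x80 | ((c>>6) & 0x3F));
    buf[2] = (unsigned char)(0x80 | (c & 0x3F));
    return 3;
  }else{                             /* 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx */
    buf[0] = (unsigned char)(0xF0 | (c>>18));
    buf[1] = (unsigned char)(0x80 | ((c>>12) & 0x3F));
    buf[2] = (unsigned char)(0x80 | ((c>>6) & 0x3F));
    buf[3] = (unsigned char)(0x80 | (c & 0x3F));
    return 4;
  }
}

int main(void){
  unsigned char buf[4];
  int i, n = encode_utf8(0x20AC, buf);   /* U+20AC (euro sign) -> E2 82 AC */
  for(i=0; i<n; i++) printf("%02X ", buf[i]);
  printf("\n");
  return 0;
}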
+#include "sqliteInt.h"
+#include <assert.h>
+#include "vdbeInt.h"
+
+/*
+** This table maps from the first byte of a UTF-8 character to the number
+** of trailing bytes expected. A value '255' indicates that the table key
+** is not a legal first byte for a UTF-8 character.
+*/
+static const u8 xtra_utf8_bytes[256] = {
+/* 0xxxxxxx */
+0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+
+/* 10wwwwww */
+255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+
+/* 110yyyyy */
+1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
+1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
+
+/* 1110zzzz */
+2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
+
+/* 11110yyy */
+3, 3, 3, 3, 3, 3, 3, 3, 255, 255, 255, 255, 255, 255, 255, 255,
+};
+
+/*
+** This table maps from the number of trailing bytes in a UTF-8 character
+** to an integer constant that is effectively calculated for each character
+** read by a naive implementation of a UTF-8 character reader. The code
+** in the READ_UTF8 macro explains things best.
+*/
+static const int xtra_utf8_bits[4] = {
+0,
+12416, /* (0xC0 << 6) + (0x80) */
+925824, /* (0xE0 << 12) + (0x80 << 6) + (0x80) */
+63447168 /* (0xF0 << 18) + (0x80 << 12) + (0x80 << 6) + 0x80 */
+};
+
+#define READ_UTF8(zIn, c) { \
+ int xtra; \
+ c = *(zIn)++; \
+ xtra = xtra_utf8_bytes[c]; \
+ switch( xtra ){ \
+ case 255: c = (int)0xFFFD; break; \
+ case 3: c = (c<<6) + *(zIn)++; \
+ case 2: c = (c<<6) + *(zIn)++; \
+ case 1: c = (c<<6) + *(zIn)++; \
+ c -= xtra_utf8_bits[xtra]; \
+ } \
+}
+int sqlite3ReadUtf8(const unsigned char *z){
+ int c;
+ READ_UTF8(z, c);
+ return c;
+}
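To see why the xtra_utf8_bits[] constants work, consider the two-byte sequence 0xC3 0xA9 (U+00E9): the naive accumulation gives (0xC3<<6) + 0xA9 = 12649, and subtracting xtra_utf8_bits[1] = (0xC0<<6) + 0x80 = 12416 strips the tag bits, leaving 0xE9. A small self-contained check of that arithmetic (illustrative only, separate from the macro above):

#include <assert.h>
#include <stdio.h>

int main(void){
  /* Decode 0xC3 0xA9 the way READ_UTF8 does for a 2-byte sequence:
  ** accumulate ((lead<<6) + trail), then subtract the pre-computed
  ** tag-bit constant (0xC0<<6) + 0x80 == 12416. */
  unsigned int lead = 0xC3, trail = 0xA9;
  unsigned int c = (lead<<6) + trail;       /* 12649 */
  c -= (0xC0<<6) + 0x80;                    /* xtra_utf8_bits[1] == 12416 */
  assert( c==0xE9 );                        /* U+00E9, LATIN SMALL LETTER E WITH ACUTE */
  printf("U+%04X\n", c);
  return 0;
}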
+
+#define SKIP_UTF8(zIn) { \
+ zIn += (xtra_utf8_bytes[*(u8 *)zIn] + 1); \
+}
+
+#define WRITE_UTF8(zOut, c) { \
+ if( c<0x00080 ){ \
+ *zOut++ = (c&0xFF); \
+ } \
+ else if( c<0x00800 ){ \
+ *zOut++ = 0xC0 + ((c>>6)&0x1F); \
+ *zOut++ = 0x80 + (c & 0x3F); \
+ } \
+ else if( c<0x10000 ){ \
+ *zOut++ = 0xE0 + ((c>>12)&0x0F); \
+ *zOut++ = 0x80 + ((c>>6) & 0x3F); \
+ *zOut++ = 0x80 + (c & 0x3F); \
+ }else{ \
+ *zOut++ = 0xF0 + ((c>>18) & 0x07); \
+ *zOut++ = 0x80 + ((c>>12) & 0x3F); \
+ *zOut++ = 0x80 + ((c>>6) & 0x3F); \
+ *zOut++ = 0x80 + (c & 0x3F); \
+ } \
+}
+
+#define WRITE_UTF16LE(zOut, c) { \
+ if( c<=0xFFFF ){ \
+ *zOut++ = (c&0x00FF); \
+ *zOut++ = ((c>>8)&0x00FF); \
+ }else{ \
+ *zOut++ = (((c>>10)&0x003F) + (((c-0x10000)>>10)&0x00C0)); \
+ *zOut++ = (0x00D8 + (((c-0x10000)>>18)&0x03)); \
+ *zOut++ = (c&0x00FF); \
+ *zOut++ = (0x00DC + ((c>>8)&0x03)); \
+ } \
+}
+
+#define WRITE_UTF16BE(zOut, c) { \
+ if( c<=0xFFFF ){ \
+ *zOut++ = ((c>>8)&0x00FF); \
+ *zOut++ = (c&0x00FF); \
+ }else{ \
+ *zOut++ = (0x00D8 + (((c-0x10000)>>18)&0x03)); \
+ *zOut++ = (((c>>10)&0x003F) + (((c-0x10000)>>10)&0x00C0)); \
+ *zOut++ = (0x00DC + ((c>>8)&0x03)); \
+ *zOut++ = (c&0x00FF); \
+ } \
+}
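The surrogate arithmetic inside the two WRITE_UTF16 macros is easier to follow against the textbook formulation: subtract 0x10000, put the high ten bits into a 0xD800-based lead word and the low ten bits into a 0xDC00-based trail word. A standalone sketch (illustrative only; the macros above emit the same two 16-bit units, just one byte at a time):

#include <assert.h>
#include <stdio.h>

int main(void){
  /* Split U+1D11E (musical symbol G clef) into a UTF-16 surrogate pair. */
  unsigned int c  = 0x1D11E;
  unsigned int v  = c - 0x10000;            /* 20-bit value 0xD11E */
  unsigned int hi = 0xD800 + (v >> 10);     /* high (lead) surrogate */
  unsigned int lo = 0xDC00 + (v & 0x3FF);   /* low (trail) surrogate */
  assert( hi==0xD834 && lo==0xDD1E );
  printf("U+%05X -> %04X %04X\n", c, hi, lo);
  return 0;
}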
+
+#define READ_UTF16LE(zIn, c){ \
+ c = (*zIn++); \
+ c += ((*zIn++)<<8); \
+ if( c>=0xD800 && c<=0xE000 ){ \
+ int c2 = (*zIn++); \
+ c2 += ((*zIn++)<<8); \
+ c = (c2&0x03FF) + ((c&0x003F)<<10) + (((c&0x03C0)+0x0040)<<10); \
+ } \
+}
+
+#define READ_UTF16BE(zIn, c){ \
+ c = ((*zIn++)<<8); \
+ c += (*zIn++); \
+ if( c>=0xD800 && c<=0xE000 ){ \
+ int c2 = ((*zIn++)<<8); \
+ c2 += (*zIn++); \
+ c = (c2&0x03FF) + ((c&0x003F)<<10) + (((c&0x03C0)+0x0040)<<10); \
+ } \
+}
+
+#define SKIP_UTF16BE(zIn){ \
+ if( *zIn>=0xD8 && (*zIn<0xE0 || (*zIn==0xE0 && *(zIn+1)==0x00)) ){ \
+ zIn += 4; \
+ }else{ \
+ zIn += 2; \
+ } \
+}
+#define SKIP_UTF16LE(zIn){ \
+ zIn++; \
+ if( *zIn>=0xD8 && (*zIn<0xE0 || (*zIn==0xE0 && *(zIn-1)==0x00)) ){ \
+ zIn += 3; \
+ }else{ \
+ zIn += 1; \
+ } \
+}
+
+#define RSKIP_UTF16LE(zIn){ \
+ if( *zIn>=0xD8 && (*zIn<0xE0 || (*zIn==0xE0 && *(zIn-1)==0x00)) ){ \
+ zIn -= 4; \
+ }else{ \
+ zIn -= 2; \
+ } \
+}
+#define RSKIP_UTF16BE(zIn){ \
+ zIn--; \
+ if( *zIn>=0xD8 && (*zIn<0xE0 || (*zIn==0xE0 && *(zIn+1)==0x00)) ){ \
+ zIn -= 3; \
+ }else{ \
+ zIn -= 1; \
+ } \
+}
+
+/*
+** If the TRANSLATE_TRACE macro is defined, the value of each Mem is
+** printed on stderr on the way into and out of sqlite3VdbeMemTranslate().
+*/
+/* #define TRANSLATE_TRACE 1 */
+
+#ifndef SQLITE_OMIT_UTF16
+/*
+** This routine transforms the internal text encoding used by pMem to
+** desiredEnc. It is an error if the string is already of the desired
+** encoding, or if *pMem does not contain a string value.
+*/
+int sqlite3VdbeMemTranslate(Mem *pMem, u8 desiredEnc){
+ unsigned char zShort[NBFS]; /* Temporary short output buffer */
+ int len; /* Maximum length of output string in bytes */
+ unsigned char *zOut; /* Output buffer */
+ unsigned char *zIn; /* Input iterator */
+ unsigned char *zTerm; /* End of input */
+ unsigned char *z; /* Output iterator */
+ unsigned int c;
+
+ assert( pMem->flags&MEM_Str );
+ assert( pMem->enc!=desiredEnc );
+ assert( pMem->enc!=0 );
+ assert( pMem->n>=0 );
+
+#if defined(TRANSLATE_TRACE) && defined(SQLITE_DEBUG)
+ {
+ char zBuf[100];
+ sqlite3VdbeMemPrettyPrint(pMem, zBuf);
+ fprintf(stderr, "INPUT: %s\n", zBuf);
+ }
+#endif
+
+ /* If the translation is between UTF-16 little and big endian, then
+ ** all that is required is to swap the byte order. This case is handled
+ ** differently from the others.
+ */
+ if( pMem->enc!=SQLITE_UTF8 && desiredEnc!=SQLITE_UTF8 ){
+ u8 temp;
+ int rc;
+ rc = sqlite3VdbeMemMakeWriteable(pMem);
+ if( rc!=SQLITE_OK ){
+ assert( rc==SQLITE_NOMEM );
+ return SQLITE_NOMEM;
+ }
+ zIn = (u8*)pMem->z;
+ zTerm = &zIn[pMem->n];
+ while( zIn<zTerm ){
+ temp = *zIn;
+ *zIn = *(zIn+1);
+ zIn++;
+ *zIn++ = temp;
+ }
+ pMem->enc = desiredEnc;
+ goto translate_out;
+ }
+
+ /* Set len to the maximum number of bytes required in the output buffer. */
+ if( desiredEnc==SQLITE_UTF8 ){
+ /* When converting from UTF-16, the maximum growth results from
+ ** translating a 2-byte character to a 4-byte UTF-8 character.
+ ** A single byte is required for the output string
+ ** nul-terminator.
+ */
+ len = pMem->n * 2 + 1;
+ }else{
+ /* When converting from UTF-8 to UTF-16 the maximum growth is caused
+ ** when a 1-byte UTF-8 character is translated into a 2-byte UTF-16
+ ** character. Two bytes are required in the output buffer for the
+ ** nul-terminator.
+ */
+ len = pMem->n * 2 + 2;
+ }
+
+ /* Set zIn to point at the start of the input buffer and zTerm to point 1
+ ** byte past the end.
+ **
+ ** Variable zOut is set to point at the output buffer. This may be space
+ ** obtained from malloc(), or Mem.zShort, if it is large enough and not in
+ ** use, or the zShort array on the stack (see above).
+ */
+ zIn = (u8*)pMem->z;
+ zTerm = &zIn[pMem->n];
+ if( len>NBFS ){
+ zOut = sqliteMallocRaw(len);
+ if( !zOut ) return SQLITE_NOMEM;
+ }else{
+ zOut = zShort;
+ }
+ z = zOut;
+
+ if( pMem->enc==SQLITE_UTF8 ){
+ if( desiredEnc==SQLITE_UTF16LE ){
+ /* UTF-8 -> UTF-16 Little-endian */
+ while( zIn<zTerm ){
+ READ_UTF8(zIn, c);
+ WRITE_UTF16LE(z, c);
+ }
+ }else{
+ assert( desiredEnc==SQLITE_UTF16BE );
+ /* UTF-8 -> UTF-16 Big-endian */
+ while( zIn<zTerm ){
+ READ_UTF8(zIn, c);
+ WRITE_UTF16BE(z, c);
+ }
+ }
+ pMem->n = z - zOut;
+ *z++ = 0;
+ }else{
+ assert( desiredEnc==SQLITE_UTF8 );
+ if( pMem->enc==SQLITE_UTF16LE ){
+ /* UTF-16 Little-endian -> UTF-8 */
+ while( zIn<zTerm ){
+ READ_UTF16LE(zIn, c);
+ WRITE_UTF8(z, c);
+ }
+ }else{
+ /* UTF-16 Big-endian -> UTF-8 */
+ while( zIn<zTerm ){
+ READ_UTF16BE(zIn, c);
+ WRITE_UTF8(z, c);
+ }
+ }
+ pMem->n = z - zOut;
+ }
+ *z = 0;
+ assert( (pMem->n+(desiredEnc==SQLITE_UTF8?1:2))<=len );
+
+ sqlite3VdbeMemRelease(pMem);
+ pMem->flags &= ~(MEM_Static|MEM_Dyn|MEM_Ephem|MEM_Short);
+ pMem->enc = desiredEnc;
+ if( zOut==zShort ){
+ memcpy(pMem->zShort, zOut, len);
+ zOut = (u8*)pMem->zShort;
+ pMem->flags |= (MEM_Term|MEM_Short);
+ }else{
+ pMem->flags |= (MEM_Term|MEM_Dyn);
+ }
+ pMem->z = (char*)zOut;
+
+translate_out:
+#if defined(TRANSLATE_TRACE) && defined(SQLITE_DEBUG)
+ {
+ char zBuf[100];
+ sqlite3VdbeMemPrettyPrint(pMem, zBuf);
+ fprintf(stderr, "OUTPUT: %s\n", zBuf);
+ }
+#endif
+ return SQLITE_OK;
+}
+
+/*
+** This routine checks for a byte-order mark at the beginning of the
+** UTF-16 string stored in *pMem. If one is present, it is removed and
+** the encoding of the Mem adjusted. This routine does not do any
+** byte-swapping, it just sets Mem.enc appropriately.
+**
+** The allocation (static, dynamic etc.) and encoding of the Mem may be
+** changed by this function.
+*/
+int sqlite3VdbeMemHandleBom(Mem *pMem){
+ int rc = SQLITE_OK;
+ u8 bom = 0;
+
+ if( pMem->n<0 || pMem->n>1 ){
+ u8 b1 = *(u8 *)pMem->z;
+ u8 b2 = *(((u8 *)pMem->z) + 1);
+ if( b1==0xFE && b2==0xFF ){
+ bom = SQLITE_UTF16BE;
+ }
+ if( b1==0xFF && b2==0xFE ){
+ bom = SQLITE_UTF16LE;
+ }
+ }
+
+ if( bom ){
+ /* This function is called as soon as a string is stored in a Mem*,
+ ** from within sqlite3VdbeMemSetStr(). At that point it is not possible
+ ** for the string to be stored in Mem.zShort, or for it to be stored
+ ** in dynamic memory with no destructor.
+ */
+ assert( !(pMem->flags&MEM_Short) );
+ assert( !(pMem->flags&MEM_Dyn) || pMem->xDel );
+ if( pMem->flags & MEM_Dyn ){
+ void (*xDel)(void*) = pMem->xDel;
+ char *z = pMem->z;
+ pMem->z = 0;
+ pMem->xDel = 0;
+ rc = sqlite3VdbeMemSetStr(pMem, &z[2], pMem->n-2, bom, SQLITE_TRANSIENT);
+ xDel(z);
+ }else{
+ rc = sqlite3VdbeMemSetStr(pMem, &pMem->z[2], pMem->n-2, bom,
+ SQLITE_TRANSIENT);
+ }
+ }
+ return rc;
+}
+#endif /* SQLITE_OMIT_UTF16 */
+
+/*
+** pZ is a UTF-8 encoded unicode string. If nByte is less than zero,
+** return the number of unicode characters in pZ up to (but not including)
+** the first 0x00 byte. If nByte is not less than zero, return the
+** number of unicode characters in the first nByte of pZ (or up to
+** the first 0x00, whichever comes first).
+*/
+int sqlite3utf8CharLen(const char *z, int nByte){
+ int r = 0;
+ const char *zTerm;
+ if( nByte>=0 ){
+ zTerm = &z[nByte];
+ }else{
+ zTerm = (const char *)(-1);
+ }
+ assert( z<=zTerm );
+ while( *z!=0 && z<zTerm ){
+ SKIP_UTF8(z);
+ r++;
+ }
+ return r;
+}
+
+#ifndef SQLITE_OMIT_UTF16
+/*
+** Convert a UTF-16 string in the native encoding into a UTF-8 string.
+** Memory to hold the UTF-8 string is obtained from malloc and must be
+** freed by the calling function.
+**
+** NULL is returned if there is an allocation error.
+*/
+char *sqlite3utf16to8(const void *z, int nByte){
+ Mem m;
+ memset(&m, 0, sizeof(m));
+ sqlite3VdbeMemSetStr(&m, z, nByte, SQLITE_UTF16NATIVE, SQLITE_STATIC);
+ sqlite3VdbeChangeEncoding(&m, SQLITE_UTF8);
+ assert( (m.flags & MEM_Term)!=0 || sqlite3MallocFailed() );
+ assert( (m.flags & MEM_Str)!=0 || sqlite3MallocFailed() );
+ return (m.flags & MEM_Dyn)!=0 ? m.z : sqliteStrDup(m.z);
+}
+
+/*
+** pZ is a UTF-16 encoded unicode string. If nChar is less than zero,
+** return the number of bytes up to (but not including) the first pair
+** of consecutive 0x00 bytes in pZ. If nChar is not less than zero,
+** then return the number of bytes in the first nChar unicode characters
+** in pZ (or up until the first pair of 0x00 bytes, whichever comes first).
+*/
+int sqlite3utf16ByteLen(const void *zIn, int nChar){
+ unsigned int c = 1;
+ char const *z = zIn;
+ int n = 0;
+ if( SQLITE_UTF16NATIVE==SQLITE_UTF16BE ){
+ /* Using an "if (SQLITE_UTF16NATIVE==SQLITE_UTF16BE)" construct here
+ ** and in other parts of this file means that one branch will
+ ** not be covered by coverage testing on any single host. But coverage
+ ** will be complete if the tests are run on both a little-endian and
+ ** big-endian host. Because both the SQLITE_UTF16NATIVE and SQLITE_UTF16BE
+ ** macros are constant at compile time the compiler can determine
+ ** which branch will be followed. It is therefore assumed that no runtime
+ ** penalty is paid for this "if" statement.
+ */
+ while( c && ((nChar<0) || n<nChar) ){
+ READ_UTF16BE(z, c);
+ n++;
+ }
+ }else{
+ while( c && ((nChar<0) || n<nChar) ){
+ READ_UTF16LE(z, c);
+ n++;
+ }
+ }
+ return (z-(char const *)zIn)-((c==0)?2:0);
+}
+
+/*
+** UTF-16 implementation of the substr() function.
+*/
+void sqlite3utf16Substr(
+ sqlite3_context *context,
+ int argc,
+ sqlite3_value **argv
+){
+ int y, z;
+ unsigned char const *zStr;
+ unsigned char const *zStrEnd;
+ unsigned char const *zStart;
+ unsigned char const *zEnd;
+ int i;
+
+ zStr = (unsigned char const *)sqlite3_value_text16(argv[0]);
+ zStrEnd = &zStr[sqlite3_value_bytes16(argv[0])];
+ y = sqlite3_value_int(argv[1]);
+ z = sqlite3_value_int(argv[2]);
+
+ if( y>0 ){
+ y = y-1;
+ zStart = zStr;
+ if( SQLITE_UTF16BE==SQLITE_UTF16NATIVE ){
+ for(i=0; i<y && zStart<zStrEnd; i++) SKIP_UTF16BE(zStart);
+ }else{
+ for(i=0; i<y && zStart<zStrEnd; i++) SKIP_UTF16LE(zStart);
+ }
+ }else{
+ zStart = zStrEnd;
+ if( SQLITE_UTF16BE==SQLITE_UTF16NATIVE ){
+ for(i=y; i<0 && zStart>zStr; i++) RSKIP_UTF16BE(zStart);
+ }else{
+ for(i=y; i<0 && zStart>zStr; i++) RSKIP_UTF16LE(zStart);
+ }
+ for(; i<0; i++) z -= 1;
+ }
+
+ zEnd = zStart;
+ if( SQLITE_UTF16BE==SQLITE_UTF16NATIVE ){
+ for(i=0; i<z && zEnd<zStrEnd; i++) SKIP_UTF16BE(zEnd);
+ }else{
+ for(i=0; i<z && zEnd<zStrEnd; i++) SKIP_UTF16LE(zEnd);
+ }
+
+ sqlite3_result_text16(context, zStart, zEnd-zStart, SQLITE_TRANSIENT);
+}
+
+#if defined(SQLITE_TEST)
+/*
+** This routine is called from the TCL test function "translate_selftest".
+** It checks that the primitives for serializing and deserializing
+** characters in each encoding are inverses of each other.
+*/
+void sqlite3utfSelfTest(){
+ unsigned int i;
+ unsigned char zBuf[20];
+ unsigned char *z;
+ int n;
+ unsigned int c;
+
+ for(i=0; i<0x00110000; i++){
+ z = zBuf;
+ WRITE_UTF8(z, i);
+ n = z-zBuf;
+ z = zBuf;
+ READ_UTF8(z, c);
+ assert( c==i );
+ assert( (z-zBuf)==n );
+ }
+ for(i=0; i<0x00110000; i++){
+ if( i>=0xD800 && i<=0xE000 ) continue;
+ z = zBuf;
+ WRITE_UTF16LE(z, i);
+ n = z-zBuf;
+ z = zBuf;
+ READ_UTF16LE(z, c);
+ assert( c==i );
+ assert( (z-zBuf)==n );
+ }
+ for(i=0; i<0x00110000; i++){
+ if( i>=0xD800 && i<=0xE000 ) continue;
+ z = zBuf;
+ WRITE_UTF16BE(z, i);
+ n = z-zBuf;
+ z = zBuf;
+ READ_UTF16BE(z, c);
+ assert( c==i );
+ assert( (z-zBuf)==n );
+ }
+}
+#endif /* SQLITE_TEST */
+#endif /* SQLITE_OMIT_UTF16 */
Added: freeswitch/trunk/libs/sqlite/src/util.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/util.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1487 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Utility functions used throughout sqlite.
+**
+** This file contains functions for allocating memory, comparing
+** strings, and stuff like that.
+**
+** $Id: util.c,v 1.193 2006/09/15 07:28:51 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#include <stdarg.h>
+#include <ctype.h>
+
+/*
+** MALLOC WRAPPER ARCHITECTURE
+**
+** The sqlite code accesses dynamic memory allocation/deallocation by invoking
+** the following six APIs (which may be implemented as macros).
+**
+** sqlite3Malloc()
+** sqlite3MallocRaw()
+** sqlite3Realloc()
+** sqlite3ReallocOrFree()
+** sqlite3Free()
+** sqlite3AllocSize()
+**
+** The function sqlite3FreeX performs the same task as sqlite3Free and is
+** guaranteed to be a real function. The same holds for sqlite3MallocX
+**
+** The above APIs are implemented in terms of the functions provided in the
+** operating-system interface. The OS interface is never accessed directly
+** by code outside of this file.
+**
+** sqlite3OsMalloc()
+** sqlite3OsRealloc()
+** sqlite3OsFree()
+** sqlite3OsAllocationSize()
+**
+** Functions sqlite3MallocRaw() and sqlite3Realloc() may invoke
+** sqlite3_release_memory() if a call to sqlite3OsMalloc() or
+** sqlite3OsRealloc() fails (or if the soft-heap-limit for the thread is
+** exceeded). Function sqlite3Malloc() usually invokes
+** sqlite3MallocRaw().
+**
+** MALLOC TEST WRAPPER ARCHITECTURE
+**
+** The test wrapper provides extra test facilities to ensure the library
+** does not leak memory and handles the failure of the underlying OS level
+** allocation system correctly. It is only present if the library is
+** compiled with the SQLITE_MEMDEBUG macro set.
+**
+** * Guardposts to detect overwrites.
+** * Ability to cause a specific Malloc() or Realloc() to fail.
+** * Audit outstanding memory allocations (i.e. check for leaks).
+*/
+
+#define MAX(x,y) ((x)>(y)?(x):(y))
+
+#if defined(SQLITE_ENABLE_MEMORY_MANAGEMENT) && !defined(SQLITE_OMIT_DISKIO)
+/*
+** Set the soft heap-size limit for the current thread. Passing a negative
+** value indicates no limit.
+*/
+void sqlite3_soft_heap_limit(int n){
+ ThreadData *pTd = sqlite3ThreadData();
+ if( pTd ){
+ pTd->nSoftHeapLimit = n;
+ }
+ sqlite3ReleaseThreadData();
+}
+
+/*
+** Release memory held by SQLite instances created by the current thread.
+*/
+int sqlite3_release_memory(int n){
+ return sqlite3pager_release_memory(n);
+}
+#else
+/* If SQLITE_ENABLE_MEMORY_MANAGEMENT is not defined, then define a version
+** of sqlite3_release_memory() to be used by other code in this file.
+** This is done for no better reason than to reduce the number of
+** pre-processor #ifndef statements.
+*/
+#define sqlite3_release_memory(x) 0 /* 0 == no memory freed */
+#endif
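For applications built with SQLITE_ENABLE_MEMORY_MANAGEMENT, the two routines above are reachable through the public API as sqlite3_soft_heap_limit() and sqlite3_release_memory(). A hedged usage sketch (the database name and byte limits are arbitrary):

#include <sqlite3.h>
#include <stdio.h>

int main(void){
  sqlite3 *db;

  /* Ask the current thread to keep heap usage under roughly 1 MB;
  ** allocations past that point trigger sqlite3_release_memory()
  ** internally, as described in the comments above. */
  sqlite3_soft_heap_limit(1024*1024);

  if( sqlite3_open("test.db", &db)!=SQLITE_OK ){
    fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
    return 1;
  }

  /* ... run statements ... */

  /* Explicitly ask SQLite to give back up to 64 KB of cached memory. */
  printf("released %d bytes\n", sqlite3_release_memory(64*1024));

  sqlite3_close(db);
  return 0;
}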
+
+#ifdef SQLITE_MEMDEBUG
+/*--------------------------------------------------------------------------
+** Begin code for memory allocation system test layer.
+**
+** Memory debugging is turned on by defining the SQLITE_MEMDEBUG macro.
+**
+** SQLITE_MEMDEBUG==1 -> Fence-posting only (thread safe)
+** SQLITE_MEMDEBUG==2 -> Fence-posting + linked list of allocations (not ts)
+** SQLITE_MEMDEBUG==3 -> Above + backtraces (not thread safe, req. glibc)
+*/
+
+/* Figure out whether or not to store backtrace() information for each malloc.
+** The backtrace() function is only used if SQLITE_MEMDEBUG is greater than 2
+** and glibc is in use. If we don't want to use backtrace(), then just
+** define it as an empty macro and set the amount of space reserved to 0.
+*/
+#if defined(__GLIBC__) && SQLITE_MEMDEBUG>2
+ extern int backtrace(void **, int);
+ #define TESTALLOC_STACKSIZE 128
+ #define TESTALLOC_STACKFRAMES ((TESTALLOC_STACKSIZE-8)/sizeof(void*))
+#else
+ #define backtrace(x, y)
+ #define TESTALLOC_STACKSIZE 0
+ #define TESTALLOC_STACKFRAMES 0
+#endif
+
+/*
+** Number of 32-bit guard words. This should probably be a multiple of
+** 2 since on 64-bit machines we want the value returned by sqliteMalloc()
+** to be 8-byte aligned.
+*/
+#ifndef TESTALLOC_NGUARD
+# define TESTALLOC_NGUARD 2
+#endif
+
+/*
+** Size reserved for storing file-name along with each malloc()ed blob.
+*/
+#define TESTALLOC_FILESIZE 64
+
+/*
+** Size reserved for storing the user string. Each time a Malloc() or Realloc()
+** call succeeds, up to TESTALLOC_USERSIZE bytes of the string pointed to by
+** sqlite3_malloc_id are stored along with the other test system metadata.
+*/
+#define TESTALLOC_USERSIZE 64
+const char *sqlite3_malloc_id = 0;
+
+/*
+** Blocks used by the test layer have the following format:
+**
+** <sizeof(void *) pNext pointer>
+** <sizeof(void *) pPrev pointer>
+** <TESTALLOC_NGUARD 32-bit guard words>
+** <The application level allocation>
+** <TESTALLOC_NGUARD 32-bit guard words>
+** <32-bit line number>
+** <TESTALLOC_FILESIZE bytes containing null-terminated file name>
+** <TESTALLOC_STACKSIZE bytes of backtrace() output>
+*/
+
+#define TESTALLOC_OFFSET_GUARD1(p) (sizeof(void *) * 2)
+#define TESTALLOC_OFFSET_DATA(p) ( \
+ TESTALLOC_OFFSET_GUARD1(p) + sizeof(u32) * TESTALLOC_NGUARD \
+)
+#define TESTALLOC_OFFSET_GUARD2(p) ( \
+ TESTALLOC_OFFSET_DATA(p) + sqlite3OsAllocationSize(p) - TESTALLOC_OVERHEAD \
+)
+#define TESTALLOC_OFFSET_LINENUMBER(p) ( \
+ TESTALLOC_OFFSET_GUARD2(p) + sizeof(u32) * TESTALLOC_NGUARD \
+)
+#define TESTALLOC_OFFSET_FILENAME(p) ( \
+ TESTALLOC_OFFSET_LINENUMBER(p) + sizeof(u32) \
+)
+#define TESTALLOC_OFFSET_USER(p) ( \
+ TESTALLOC_OFFSET_FILENAME(p) + TESTALLOC_FILESIZE \
+)
+#define TESTALLOC_OFFSET_STACK(p) ( \
+ TESTALLOC_OFFSET_USER(p) + TESTALLOC_USERSIZE + 8 - \
+ (TESTALLOC_OFFSET_USER(p) % 8) \
+)
+
+#define TESTALLOC_OVERHEAD ( \
+ sizeof(void *)*2 + /* pPrev and pNext pointers */ \
+ TESTALLOC_NGUARD*sizeof(u32)*2 + /* Guard words */ \
+ sizeof(u32) + TESTALLOC_FILESIZE + /* File and line number */ \
+ TESTALLOC_USERSIZE + /* User string */ \
+ TESTALLOC_STACKSIZE /* backtrace() stack */ \
+)
+
+
+/*
+** For keeping track of the number of mallocs and frees. This
+** is used to check for memory leaks. The iMallocFail and iMallocReset
+** values are used to simulate malloc() failures during testing in
+** order to verify that the library correctly handles an out-of-memory
+** condition.
+*/
+int sqlite3_nMalloc; /* Number of sqliteMalloc() calls */
+int sqlite3_nFree; /* Number of sqliteFree() calls */
+int sqlite3_memUsed; /* TODO Total memory obtained from malloc */
+int sqlite3_memMax; /* TODO Mem usage high-water mark */
+int sqlite3_iMallocFail; /* Fail sqliteMalloc() after this many calls */
+int sqlite3_iMallocReset = -1; /* When iMallocFail reaches 0, set to this */
+
+void *sqlite3_pFirst = 0; /* Pointer to linked list of allocations */
+int sqlite3_nMaxAlloc = 0; /* High water mark of ThreadData.nAlloc */
+int sqlite3_mallocDisallowed = 0; /* assert() in sqlite3Malloc() if set */
+int sqlite3_isFail = 0; /* True if all malloc calls should fail */
+const char *sqlite3_zFile = 0; /* Filename to associate debug info with */
+int sqlite3_iLine = 0; /* Line number for debug info */
+
+/*
+** Check for a simulated memory allocation failure. Return true if
+** the failure should be simulated. Return false to proceed as normal.
+*/
+int sqlite3TestMallocFail(){
+ if( sqlite3_isFail ){
+ return 1;
+ }
+ if( sqlite3_iMallocFail>=0 ){
+ sqlite3_iMallocFail--;
+ if( sqlite3_iMallocFail==0 ){
+ sqlite3_iMallocFail = sqlite3_iMallocReset;
+ sqlite3_isFail = 1;
+ return 1;
+ }
+ }
+ return 0;
+}
+
+/*
+** The argument is a pointer returned by sqlite3OsMalloc() or xRealloc().
+** assert() that the first and last (TESTALLOC_NGUARD*4) bytes are set to the
+** values set by the applyGuards() function.
+*/
+static void checkGuards(u32 *p)
+{
+ int i;
+ char *zAlloc = (char *)p;
+ char *z;
+
+ /* First set of guard words */
+ z = &zAlloc[TESTALLOC_OFFSET_GUARD1(p)];
+ for(i=0; i<TESTALLOC_NGUARD; i++){
+ assert(((u32 *)z)[i]==0xdead1122);
+ }
+
+ /* Second set of guard words */
+ z = &zAlloc[TESTALLOC_OFFSET_GUARD2(p)];
+ for(i=0; i<TESTALLOC_NGUARD; i++){
+ u32 guard = 0;
+ memcpy(&guard, &z[i*sizeof(u32)], sizeof(u32));
+ assert(guard==0xdead3344);
+ }
+}
+
+/*
+** The argument is a pointer returned by sqlite3OsMalloc() or Realloc(). The
+** first and last (TESTALLOC_NGUARD*4) bytes are set to known values for use as
+** guard-posts.
+*/
+static void applyGuards(u32 *p)
+{
+ int i;
+ char *z;
+ char *zAlloc = (char *)p;
+
+ /* First set of guard words */
+ z = &zAlloc[TESTALLOC_OFFSET_GUARD1(p)];
+ for(i=0; i<TESTALLOC_NGUARD; i++){
+ ((u32 *)z)[i] = 0xdead1122;
+ }
+
+ /* Second set of guard words */
+ z = &zAlloc[TESTALLOC_OFFSET_GUARD2(p)];
+ for(i=0; i<TESTALLOC_NGUARD; i++){
+ static const int guard = 0xdead3344;
+ memcpy(&z[i*sizeof(u32)], &guard, sizeof(u32));
+ }
+
+ /* Line number */
+ z = &((char *)z)[TESTALLOC_NGUARD*sizeof(u32)]; /* Guard words */
+ z = &zAlloc[TESTALLOC_OFFSET_LINENUMBER(p)];
+ memcpy(z, &sqlite3_iLine, sizeof(u32));
+
+ /* File name */
+ z = &zAlloc[TESTALLOC_OFFSET_FILENAME(p)];
+ strncpy(z, sqlite3_zFile, TESTALLOC_FILESIZE);
+ z[TESTALLOC_FILESIZE - 1] = '\0';
+
+ /* User string */
+ z = &zAlloc[TESTALLOC_OFFSET_USER(p)];
+ z[0] = 0;
+ if( sqlite3_malloc_id ){
+ strncpy(z, sqlite3_malloc_id, TESTALLOC_USERSIZE);
+ z[TESTALLOC_USERSIZE-1] = 0;
+ }
+
+ /* backtrace() stack */
+ z = &zAlloc[TESTALLOC_OFFSET_STACK(p)];
+ backtrace((void **)z, TESTALLOC_STACKFRAMES);
+
+ /* Sanity check to make sure checkGuards() is working */
+ checkGuards(p);
+}
+
+/*
+** The argument is a malloc()ed pointer as returned by the test-wrapper.
+** Return a pointer to the Os level allocation.
+*/
+static void *getOsPointer(void *p)
+{
+ char *z = (char *)p;
+ return (void *)(&z[-1 * TESTALLOC_OFFSET_DATA(p)]);
+}
+
+
+#if SQLITE_MEMDEBUG>1
+/*
+** The argument points to an Os level allocation. Link it into the thread's list
+** of allocations.
+*/
+static void linkAlloc(void *p){
+ void **pp = (void **)p;
+ pp[0] = 0;
+ pp[1] = sqlite3_pFirst;
+ if( sqlite3_pFirst ){
+ ((void **)sqlite3_pFirst)[0] = p;
+ }
+ sqlite3_pFirst = p;
+}
+
+/*
+** The argument points to an Os level allocation. Unlink it from the thread's
+** list of allocations.
+*/
+static void unlinkAlloc(void *p)
+{
+ void **pp = (void **)p;
+ if( p==sqlite3_pFirst ){
+ assert(!pp[0]);
+ assert(!pp[1] || ((void **)(pp[1]))[0]==p);
+ sqlite3_pFirst = pp[1];
+ if( sqlite3_pFirst ){
+ ((void **)sqlite3_pFirst)[0] = 0;
+ }
+ }else{
+ void **pprev = pp[0];
+ void **pnext = pp[1];
+ assert(pprev);
+ assert(pprev[1]==p);
+ pprev[1] = (void *)pnext;
+ if( pnext ){
+ assert(pnext[0]==p);
+ pnext[0] = (void *)pprev;
+ }
+ }
+}
+
+/*
+** Pointer p is a pointer to an OS level allocation that has just been
+** realloc()ed. Set the list pointers that point to this entry to its new
+** location.
+*/
+static void relinkAlloc(void *p)
+{
+ void **pp = (void **)p;
+ if( pp[0] ){
+ ((void **)(pp[0]))[1] = p;
+ }else{
+ sqlite3_pFirst = p;
+ }
+ if( pp[1] ){
+ ((void **)(pp[1]))[0] = p;
+ }
+}
+#else
+#define linkAlloc(x)
+#define relinkAlloc(x)
+#define unlinkAlloc(x)
+#endif
+
+/*
+** This function sets the result of the Tcl interpreter passed as an argument
+** to a list containing an entry for each currently outstanding call made to
+** sqliteMalloc and friends by the current thread. Each list entry is itself a
+** list, consisting of the following (in order):
+**
+** * The number of bytes allocated
+** * The __FILE__ macro at the time of the sqliteMalloc() call.
+** * The __LINE__ macro ...
+** * The value of the sqlite3_malloc_id variable ...
+** * The output of backtrace() (if available) ...
+**
+** Todo: We could have a version of this function that outputs to stdout,
+** to debug memory leaks when Tcl is not available.
+*/
+#if defined(TCLSH) && defined(SQLITE_DEBUG) && SQLITE_MEMDEBUG>1
+#include <tcl.h>
+int sqlite3OutstandingMallocs(Tcl_Interp *interp){
+ void *p;
+ Tcl_Obj *pRes = Tcl_NewObj();
+ Tcl_IncrRefCount(pRes);
+
+
+ for(p=sqlite3_pFirst; p; p=((void **)p)[1]){
+ Tcl_Obj *pEntry = Tcl_NewObj();
+ Tcl_Obj *pStack = Tcl_NewObj();
+ char *z;
+ u32 iLine;
+ int nBytes = sqlite3OsAllocationSize(p) - TESTALLOC_OVERHEAD;
+ char *zAlloc = (char *)p;
+ int i;
+
+ Tcl_ListObjAppendElement(0, pEntry, Tcl_NewIntObj(nBytes));
+
+ z = &zAlloc[TESTALLOC_OFFSET_FILENAME(p)];
+ Tcl_ListObjAppendElement(0, pEntry, Tcl_NewStringObj(z, -1));
+
+ z = &zAlloc[TESTALLOC_OFFSET_LINENUMBER(p)];
+ memcpy(&iLine, z, sizeof(u32));
+ Tcl_ListObjAppendElement(0, pEntry, Tcl_NewIntObj(iLine));
+
+ z = &zAlloc[TESTALLOC_OFFSET_USER(p)];
+ Tcl_ListObjAppendElement(0, pEntry, Tcl_NewStringObj(z, -1));
+
+ z = &zAlloc[TESTALLOC_OFFSET_STACK(p)];
+ for(i=0; i<TESTALLOC_STACKFRAMES; i++){
+ char zHex[128];
+ sprintf(zHex, "%p", ((void **)z)[i]);
+ Tcl_ListObjAppendElement(0, pStack, Tcl_NewStringObj(zHex, -1));
+ }
+
+ Tcl_ListObjAppendElement(0, pEntry, pStack);
+ Tcl_ListObjAppendElement(0, pRes, pEntry);
+ }
+
+ Tcl_ResetResult(interp);
+ Tcl_SetObjResult(interp, pRes);
+ Tcl_DecrRefCount(pRes);
+ return TCL_OK;
+}
+#endif
+
+/*
+** This is the test layer's wrapper around sqlite3OsMalloc().
+*/
+static void * OSMALLOC(int n){
+ sqlite3OsEnterMutex();
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ sqlite3_nMaxAlloc =
+ MAX(sqlite3_nMaxAlloc, sqlite3ThreadDataReadOnly()->nAlloc);
+#endif
+ assert( !sqlite3_mallocDisallowed );
+ if( !sqlite3TestMallocFail() ){
+ u32 *p;
+ p = (u32 *)sqlite3OsMalloc(n + TESTALLOC_OVERHEAD);
+ assert(p);
+ sqlite3_nMalloc++;
+ applyGuards(p);
+ linkAlloc(p);
+ sqlite3OsLeaveMutex();
+ return (void *)(&p[TESTALLOC_NGUARD + 2*sizeof(void *)/sizeof(u32)]);
+ }
+ sqlite3OsLeaveMutex();
+ return 0;
+}
+
+static int OSSIZEOF(void *p){
+ if( p ){
+ u32 *pOs = (u32 *)getOsPointer(p);
+ return sqlite3OsAllocationSize(pOs) - TESTALLOC_OVERHEAD;
+ }
+ return 0;
+}
+
+/*
+** This is the test layer's wrapper around sqlite3OsFree(). The argument is a
+** pointer to the space allocated for the application to use.
+*/
+static void OSFREE(void *pFree){
+ u32 *p; /* Pointer to the OS-layer allocation */
+ sqlite3OsEnterMutex();
+ p = (u32 *)getOsPointer(pFree);
+ checkGuards(p);
+ unlinkAlloc(p);
+ memset(pFree, 0x55, OSSIZEOF(pFree));
+ sqlite3OsFree(p);
+ sqlite3_nFree++;
+ sqlite3OsLeaveMutex();
+}
+
+/*
+** This is the test layer's wrapper around sqlite3OsRealloc().
+*/
+static void * OSREALLOC(void *pRealloc, int n){
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ sqlite3_nMaxAlloc =
+ MAX(sqlite3_nMaxAlloc, sqlite3ThreadDataReadOnly()->nAlloc);
+#endif
+ assert( !sqlite3_mallocDisallowed );
+ if( !sqlite3TestMallocFail() ){
+ u32 *p = (u32 *)getOsPointer(pRealloc);
+ checkGuards(p);
+ p = sqlite3OsRealloc(p, n + TESTALLOC_OVERHEAD);
+ applyGuards(p);
+ relinkAlloc(p);
+ return (void *)(&p[TESTALLOC_NGUARD + 2*sizeof(void *)/sizeof(u32)]);
+ }
+ return 0;
+}
+
+static void OSMALLOC_FAILED(){
+ sqlite3_isFail = 0;
+}
+
+#else
+/* Define macros to call the sqlite3OsXXX interface directly if
+** the SQLITE_MEMDEBUG macro is not defined.
+*/
+#define OSMALLOC(x) sqlite3OsMalloc(x)
+#define OSREALLOC(x,y) sqlite3OsRealloc(x,y)
+#define OSFREE(x) sqlite3OsFree(x)
+#define OSSIZEOF(x) sqlite3OsAllocationSize(x)
+#define OSMALLOC_FAILED()
+
+#endif /* SQLITE_MEMDEBUG */
+/*
+** End code for memory allocation system test layer.
+**--------------------------------------------------------------------------*/
+
+/*
+** This routine is called when we are about to allocate n additional bytes
+** of memory. If the new allocation will put us over the soft allocation
+** limit, then invoke sqlite3_release_memory() to try to release some
+** memory before continuing with the allocation.
+**
+** This routine also makes sure that the thread-specific-data (TSD) has
+** been allocated. If it has not been and cannot be allocated, then return
+** false. The updateMemoryUsedCount() routine below will deallocate
+** the TSD if it ought to be.
+**
+** If SQLITE_ENABLE_MEMORY_MANAGEMENT is not defined, this routine is
+** a no-op
+*/
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+static int enforceSoftLimit(int n){
+ ThreadData *pTsd = sqlite3ThreadData();
+ if( pTsd==0 ){
+ return 0;
+ }
+ assert( pTsd->nAlloc>=0 );
+ if( n>0 && pTsd->nSoftHeapLimit>0 ){
+ while( pTsd->nAlloc+n>pTsd->nSoftHeapLimit && sqlite3_release_memory(n) ){}
+ }
+ return 1;
+}
+#else
+# define enforceSoftLimit(X) 1
+#endif
+
+/*
+** Update the count of total outstanding memory that is held in
+** thread-specific-data (TSD). If after this update the TSD is
+** no longer being used, then deallocate it.
+**
+** If SQLITE_ENABLE_MEMORY_MANAGEMENT is not defined, this routine is
+** a no-op
+*/
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+static void updateMemoryUsedCount(int n){
+ ThreadData *pTsd = sqlite3ThreadData();
+ if( pTsd ){
+ pTsd->nAlloc += n;
+ assert( pTsd->nAlloc>=0 );
+ if( pTsd->nAlloc==0 && pTsd->nSoftHeapLimit==0 ){
+ sqlite3ReleaseThreadData();
+ }
+ }
+}
+#else
+#define updateMemoryUsedCount(x) /* no-op */
+#endif
+
+/*
+** Allocate and return N bytes of uninitialised memory by calling
+** sqlite3OsMalloc(). If the Malloc() call fails, attempt to free memory
+** by calling sqlite3_release_memory().
+*/
+void *sqlite3MallocRaw(int n, int doMemManage){
+ void *p = 0;
+ if( n>0 && !sqlite3MallocFailed() && (!doMemManage || enforceSoftLimit(n)) ){
+ while( (p = OSMALLOC(n))==0 && sqlite3_release_memory(n) ){}
+ if( !p ){
+ sqlite3FailedMalloc();
+ OSMALLOC_FAILED();
+ }else if( doMemManage ){
+ updateMemoryUsedCount(OSSIZEOF(p));
+ }
+ }
+ return p;
+}
+
+/*
+** Resize the allocation at p to n bytes by calling sqlite3OsRealloc(). The
+** pointer to the new allocation is returned. If the Realloc() call fails,
+** attempt to free memory by calling sqlite3_release_memory().
+*/
+void *sqlite3Realloc(void *p, int n){
+ if( sqlite3MallocFailed() ){
+ return 0;
+ }
+
+ if( !p ){
+ return sqlite3Malloc(n, 1);
+ }else{
+ void *np = 0;
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+ int origSize = OSSIZEOF(p);
+#endif
+ if( enforceSoftLimit(n - origSize) ){
+ while( (np = OSREALLOC(p, n))==0 && sqlite3_release_memory(n) ){}
+ if( !np ){
+ sqlite3FailedMalloc();
+ OSMALLOC_FAILED();
+ }else{
+ updateMemoryUsedCount(OSSIZEOF(np) - origSize);
+ }
+ }
+ return np;
+ }
+}
+
+/*
+** Free the memory pointed to by p. p must be either a NULL pointer or a
+** value returned by a previous call to sqlite3Malloc() or sqlite3Realloc().
+*/
+void sqlite3FreeX(void *p){
+ if( p ){
+ updateMemoryUsedCount(0 - OSSIZEOF(p));
+ OSFREE(p);
+ }
+}
+
+/*
+** A version of sqliteMalloc() that is always a function, not a macro.
+** Currently, this is used only to allocate the parser engine.
+*/
+void *sqlite3MallocX(int n){
+ return sqliteMalloc(n);
+}
+
+/*
+** sqlite3Malloc
+** sqlite3ReallocOrFree
+**
+** These two are implemented as wrappers around sqlite3MallocRaw(),
+** sqlite3Realloc() and sqlite3Free().
+*/
+void *sqlite3Malloc(int n, int doMemManage){
+ void *p = sqlite3MallocRaw(n, doMemManage);
+ if( p ){
+ memset(p, 0, n);
+ }
+ return p;
+}
+void sqlite3ReallocOrFree(void **pp, int n){
+ void *p = sqlite3Realloc(*pp, n);
+ if( !p ){
+ sqlite3FreeX(*pp);
+ }
+ *pp = p;
+}
+
+/*
+** sqlite3ThreadSafeMalloc() and sqlite3ThreadSafeFree() are used in those
+** rare scenarios where sqlite may allocate memory in one thread and free
+** it in another. They are exactly the same as sqlite3Malloc() and
+** sqlite3Free() except that:
+**
+** * The allocated memory is not included in any calculations with
+** respect to the soft-heap-limit, and
+**
+** * sqlite3ThreadSafeMalloc() must be matched with ThreadSafeFree(),
+** not sqlite3Free(). Calling sqlite3Free() on memory obtained from
+** ThreadSafeMalloc() will cause an error somewhere down the line.
+*/
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+void *sqlite3ThreadSafeMalloc(int n){
+ (void)ENTER_MALLOC;
+ return sqlite3Malloc(n, 0);
+}
+void sqlite3ThreadSafeFree(void *p){
+ (void)ENTER_MALLOC;
+ if( p ){
+ OSFREE(p);
+ }
+}
+#endif
+
+
+/*
+** Return the number of bytes allocated at location p. p must be either
+** a NULL pointer (in which case 0 is returned) or a pointer returned by
+** sqlite3Malloc(), sqlite3Realloc() or sqlite3ReallocOrFree().
+**
+** The number of bytes allocated does not include any overhead inserted by
+** any malloc() wrapper functions that may be called. So the value returned
+** is the number of bytes that were available to SQLite using pointer p,
+** regardless of how much memory was actually allocated.
+*/
+#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
+int sqlite3AllocSize(void *p){
+ return OSSIZEOF(p);
+}
+#endif
+
+/*
+** Make a copy of a string in memory obtained from sqliteMalloc(). These
+** functions call sqlite3MallocRaw() directly instead of sqliteMalloc(). This
+** is because when memory debugging is turned on, these two functions are
+** called via macros that record the current file and line number in the
+** ThreadData structure.
+*/
+char *sqlite3StrDup(const char *z){
+ char *zNew;
+ if( z==0 ) return 0;
+ zNew = sqlite3MallocRaw(strlen(z)+1, 1);
+ if( zNew ) strcpy(zNew, z);
+ return zNew;
+}
+char *sqlite3StrNDup(const char *z, int n){
+ char *zNew;
+ if( z==0 ) return 0;
+ zNew = sqlite3MallocRaw(n+1, 1);
+ if( zNew ){
+ memcpy(zNew, z, n);
+ zNew[n] = 0;
+ }
+ return zNew;
+}
+
+/*
+** Create a string from the 2nd and subsequent arguments (up to the
+** first NULL argument), store the string in memory obtained from
+** sqliteMalloc() and make the pointer indicated by the 1st argument
+** point to that string. The 1st argument must either be NULL or
+** point to memory obtained from sqliteMalloc().
+*/
+void sqlite3SetString(char **pz, ...){
+ va_list ap;
+ int nByte;
+ const char *z;
+ char *zResult;
+
+ if( pz==0 ) return;
+ nByte = 1;
+ va_start(ap, pz);
+ while( (z = va_arg(ap, const char*))!=0 ){
+ nByte += strlen(z);
+ }
+ va_end(ap);
+ sqliteFree(*pz);
+ *pz = zResult = sqliteMallocRaw( nByte );
+ if( zResult==0 ){
+ return;
+ }
+ *zResult = 0;
+ va_start(ap, pz);
+ while( (z = va_arg(ap, const char*))!=0 ){
+ strcpy(zResult, z);
+ zResult += strlen(zResult);
+ }
+ va_end(ap);
+}
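The two-pass pattern above (measure every vararg up to the terminating NULL, allocate once, then copy) is easy to reuse outside the library. A minimal standalone sketch of the same idea, using plain malloc() instead of sqliteMalloc() (illustrative only; function name is arbitrary):

#include <stdarg.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* Concatenate a NULL-terminated list of strings into one freshly
** allocated buffer, following the two-pass pattern of sqlite3SetString(). */
static char *concat_strings(const char *zFirst, ...){
  va_list ap;
  size_t nByte = strlen(zFirst) + 1;
  const char *z;
  char *zResult, *zTail;

  /* Pass 1: measure */
  va_start(ap, zFirst);
  while( (z = va_arg(ap, const char*))!=0 ){
    nByte += strlen(z);
  }
  va_end(ap);

  zResult = malloc(nByte);
  if( zResult==0 ) return 0;

  /* Pass 2: copy */
  strcpy(zResult, zFirst);
  zTail = zResult + strlen(zResult);
  va_start(ap, zFirst);
  while( (z = va_arg(ap, const char*))!=0 ){
    strcpy(zTail, z);
    zTail += strlen(z);
  }
  va_end(ap);
  return zResult;
}

int main(void){
  char *z = concat_strings("no such table: ", "t1", (char*)0);
  printf("%s\n", z);   /* prints: no such table: t1 */
  free(z);
  return 0;
}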
+
+/*
+** Set the most recent error code and error string for the sqlite
+** handle "db". The error code is set to "err_code".
+**
+** If it is not NULL, string zFormat specifies the format of the
+** error string in the style of the printf functions: The following
+** format characters are allowed:
+**
+** %s Insert a string
+** %z A string that should be freed after use
+** %d Insert an integer
+** %T Insert a token
+** %S Insert the first element of a SrcList
+**
+** zFormat and any string tokens that follow it are assumed to be
+** encoded in UTF-8.
+**
+** To clear the most recent error for sqlite handle "db", sqlite3Error
+** should be called with err_code set to SQLITE_OK and zFormat set
+** to NULL.
+*/
+void sqlite3Error(sqlite3 *db, int err_code, const char *zFormat, ...){
+ if( db && (db->pErr || (db->pErr = sqlite3ValueNew())!=0) ){
+ db->errCode = err_code;
+ if( zFormat ){
+ char *z;
+ va_list ap;
+ va_start(ap, zFormat);
+ z = sqlite3VMPrintf(zFormat, ap);
+ va_end(ap);
+ sqlite3ValueSetStr(db->pErr, -1, z, SQLITE_UTF8, sqlite3FreeX);
+ }else{
+ sqlite3ValueSetStr(db->pErr, 0, 0, SQLITE_UTF8, SQLITE_STATIC);
+ }
+ }
+}
+
+/*
+** Add an error message to pParse->zErrMsg and increment pParse->nErr.
+** The following formatting characters are allowed:
+**
+** %s Insert a string
+** %z A string that should be freed after use
+** %d Insert an integer
+** %T Insert a token
+** %S Insert the first element of a SrcList
+**
+** This function should be used to report any error that occurs whilst
+** compiling an SQL statement (i.e. within sqlite3_prepare()). The
+** last thing the sqlite3_prepare() function does is copy the error
+** stored by this function into the database handle using sqlite3Error().
+** Function sqlite3Error() should be used during statement execution
+** (sqlite3_step() etc.).
+*/
+void sqlite3ErrorMsg(Parse *pParse, const char *zFormat, ...){
+ va_list ap;
+ pParse->nErr++;
+ sqliteFree(pParse->zErrMsg);
+ va_start(ap, zFormat);
+ pParse->zErrMsg = sqlite3VMPrintf(zFormat, ap);
+ va_end(ap);
+}
+
+/*
+** Clear the error message in pParse, if any
+*/
+void sqlite3ErrorClear(Parse *pParse){
+ sqliteFree(pParse->zErrMsg);
+ pParse->zErrMsg = 0;
+ pParse->nErr = 0;
+}
+
+/*
+** Convert an SQL-style quoted string into a normal string by removing
+** the quote characters. The conversion is done in-place. If the
+** input does not begin with a quote character, then this routine
+** is a no-op.
+**
+** 2002-Feb-14: This routine is extended to remove MS-Access style
+** brackets from around identifiers. For example: "[a-b-c]" becomes
+** "a-b-c".
+*/
+void sqlite3Dequote(char *z){
+ int quote;
+ int i, j;
+ if( z==0 ) return;
+ quote = z[0];
+ switch( quote ){
+ case '\'': break;
+ case '"': break;
+ case '`': break; /* For MySQL compatibility */
+ case '[': quote = ']'; break; /* For MS SqlServer compatibility */
+ default: return;
+ }
+ for(i=1, j=0; z[i]; i++){
+ if( z[i]==quote ){
+ if( z[i+1]==quote ){
+ z[j++] = quote;
+ i++;
+ }else{
+ z[j++] = 0;
+ break;
+ }
+ }else{
+ z[j++] = z[i];
+ }
+ }
+}
+
+/* An array to map all upper-case characters into their corresponding
+** lower-case character.
+*/
+const unsigned char sqlite3UpperToLower[] = {
+#ifdef SQLITE_ASCII
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+ 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
+ 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
+ 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 97, 98, 99,100,101,102,103,
+ 104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,
+ 122, 91, 92, 93, 94, 95, 96, 97, 98, 99,100,101,102,103,104,105,106,107,
+ 108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,
+ 126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,
+ 144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,
+ 162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,
+ 180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,
+ 198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,
+ 216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,
+ 234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,
+ 252,253,254,255
+#endif
+#ifdef SQLITE_EBCDIC
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, /* 0x */
+ 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, /* 1x */
+ 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, /* 2x */
+ 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, /* 3x */
+ 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, /* 4x */
+ 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, /* 5x */
+ 96, 97, 66, 67, 68, 69, 70, 71, 72, 73,106,107,108,109,110,111, /* 6x */
+ 112, 81, 82, 83, 84, 85, 86, 87, 88, 89,122,123,124,125,126,127, /* 7x */
+ 128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143, /* 8x */
+ 144,145,146,147,148,149,150,151,152,153,154,155,156,157,156,159, /* 9x */
+ 160,161,162,163,164,165,166,167,168,169,170,171,140,141,142,175, /* Ax */
+ 176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191, /* Bx */
+ 192,129,130,131,132,133,134,135,136,137,202,203,204,205,206,207, /* Cx */
+ 208,145,146,147,148,149,150,151,152,153,218,219,220,221,222,223, /* Dx */
+ 224,225,162,163,164,165,166,167,168,169,232,203,204,205,206,207, /* Ex */
+ 239,240,241,242,243,244,245,246,247,248,249,219,220,221,222,255, /* Fx */
+#endif
+};
+#define UpperToLower sqlite3UpperToLower
+
+/*
+** Some systems have stricmp(). Others have strcasecmp(). Because
+** there is no consistency, we will define our own.
+*/
+int sqlite3StrICmp(const char *zLeft, const char *zRight){
+ register unsigned char *a, *b;
+ a = (unsigned char *)zLeft;
+ b = (unsigned char *)zRight;
+ while( *a!=0 && UpperToLower[*a]==UpperToLower[*b]){ a++; b++; }
+ return UpperToLower[*a] - UpperToLower[*b];
+}
+int sqlite3StrNICmp(const char *zLeft, const char *zRight, int N){
+ register unsigned char *a, *b;
+ a = (unsigned char *)zLeft;
+ b = (unsigned char *)zRight;
+ while( N-- > 0 && *a!=0 && UpperToLower[*a]==UpperToLower[*b]){ a++; b++; }
+ return N<0 ? 0 : UpperToLower[*a] - UpperToLower[*b];
+}
+
+/*
+** Return TRUE if z is a pure numeric string. Return FALSE if the
+** string contains any character which is not part of a number. If
+** the string is numeric and contains the '.' character, set *realnum
+** to TRUE (otherwise FALSE).
+**
+** An empty string is considered non-numeric.
+*/
+int sqlite3IsNumber(const char *z, int *realnum, u8 enc){
+ int incr = (enc==SQLITE_UTF8?1:2);
+ if( enc==SQLITE_UTF16BE ) z++;
+ if( *z=='-' || *z=='+' ) z += incr;
+ if( !isdigit(*(u8*)z) ){
+ return 0;
+ }
+ z += incr;
+ if( realnum ) *realnum = 0;
+ while( isdigit(*(u8*)z) ){ z += incr; }
+ if( *z=='.' ){
+ z += incr;
+ if( !isdigit(*(u8*)z) ) return 0;
+ while( isdigit(*(u8*)z) ){ z += incr; }
+ if( realnum ) *realnum = 1;
+ }
+ if( *z=='e' || *z=='E' ){
+ z += incr;
+ if( *z=='+' || *z=='-' ) z += incr;
+ if( !isdigit(*(u8*)z) ) return 0;
+ while( isdigit(*(u8*)z) ){ z += incr; }
+ if( realnum ) *realnum = 1;
+ }
+ return *z==0;
+}
+
+/*
+** The string z[] is an ascii representation of a real number.
+** Convert this string to a double.
+**
+** This routine assumes that z[] really is a valid number. If it
+** is not, the result is undefined.
+**
+** This routine is used instead of the library atof() function because
+** the library atof() might want to use "," as the decimal point instead
+** of "." depending on how locale is set. But that would cause problems
+** for SQL. So this routine always uses "." regardless of locale.
+*/
+int sqlite3AtoF(const char *z, double *pResult){
+#ifndef SQLITE_OMIT_FLOATING_POINT
+ int sign = 1;
+ const char *zBegin = z;
+ LONGDOUBLE_TYPE v1 = 0.0;
+ while( isspace(*z) ) z++;
+ if( *z=='-' ){
+ sign = -1;
+ z++;
+ }else if( *z=='+' ){
+ z++;
+ }
+ while( isdigit(*(u8*)z) ){
+ v1 = v1*10.0 + (*z - '0');
+ z++;
+ }
+ if( *z=='.' ){
+ LONGDOUBLE_TYPE divisor = 1.0;
+ z++;
+ while( isdigit(*(u8*)z) ){
+ v1 = v1*10.0 + (*z - '0');
+ divisor *= 10.0;
+ z++;
+ }
+ v1 /= divisor;
+ }
+ if( *z=='e' || *z=='E' ){
+ int esign = 1;
+ int eval = 0;
+ LONGDOUBLE_TYPE scale = 1.0;
+ z++;
+ if( *z=='-' ){
+ esign = -1;
+ z++;
+ }else if( *z=='+' ){
+ z++;
+ }
+ while( isdigit(*(u8*)z) ){
+ eval = eval*10 + *z - '0';
+ z++;
+ }
+ while( eval>=64 ){ scale *= 1.0e+64; eval -= 64; }
+ while( eval>=16 ){ scale *= 1.0e+16; eval -= 16; }
+ while( eval>=4 ){ scale *= 1.0e+4; eval -= 4; }
+ while( eval>=1 ){ scale *= 1.0e+1; eval -= 1; }
+ if( esign<0 ){
+ v1 /= scale;
+ }else{
+ v1 *= scale;
+ }
+ }
+ *pResult = sign<0 ? -v1 : v1;
+ return z - zBegin;
+#else
+ return sqlite3atoi64(z, pResult);
+#endif /* SQLITE_OMIT_FLOATING_POINT */
+}
+
+/*
+** Return TRUE if zNum is a 64-bit signed integer and write
+** the value of the integer into *pNum. If zNum is not an integer
+** or is an integer that is too large to be expressed with 64 bits,
+** then return false. If n>0 and the integer string is not
+** exactly n bytes long, return false.
+**
+** When this routine was originally written it dealt with only
+** 32-bit numbers. At that time, it was much faster than the
+** atoi() library routine in RedHat 7.2.
+*/
+int sqlite3atoi64(const char *zNum, i64 *pNum){
+ i64 v = 0;
+ int neg;
+ int i, c;
+ while( isspace(*zNum) ) zNum++;
+ if( *zNum=='-' ){
+ neg = 1;
+ zNum++;
+ }else if( *zNum=='+' ){
+ neg = 0;
+ zNum++;
+ }else{
+ neg = 0;
+ }
+ for(i=0; (c=zNum[i])>='0' && c<='9'; i++){
+ v = v*10 + c - '0';
+ }
+ *pNum = neg ? -v : v;
+ return c==0 && i>0 &&
+ (i<19 || (i==19 && memcmp(zNum,"9223372036854775807",19)<=0));
+}
+
+/*
+** The string zNum represents an integer. There might be some other
+** information following the integer too, but that part is ignored.
+** If the integer that the prefix of zNum represents will fit in a
+** 32-bit signed integer, return TRUE. Otherwise return FALSE.
+**
+** This routine returns FALSE for the string -2147483648 even though
+** that number will in fact fit in a 32-bit integer. But positive
+** 2147483648 will not fit in 32 bits. So it seems safer to return
+** false.
+*/
+static int sqlite3FitsIn32Bits(const char *zNum){
+ int i, c;
+ if( *zNum=='-' || *zNum=='+' ) zNum++;
+ for(i=0; (c=zNum[i])>='0' && c<='9'; i++){}
+ return i<10 || (i==10 && memcmp(zNum,"2147483647",10)<=0);
+}
+
+/*
+** If zNum represents an integer that will fit in 32-bits, then set
+** *pValue to that integer and return true. Otherwise return false.
+*/
+int sqlite3GetInt32(const char *zNum, int *pValue){
+ if( sqlite3FitsIn32Bits(zNum) ){
+ *pValue = atoi(zNum);
+ return 1;
+ }
+ return 0;
+}
+
+/*
+** The string zNum represents an integer. There might be some other
+** information following the integer too, but that part is ignored.
+** If the integer that the prefix of zNum represents will fit in a
+** 64-bit signed integer, return TRUE. Otherwise return FALSE.
+**
+** This routine returns FALSE for the string -9223372036854775808 even though
+** that number will, in theory, fit in a 64-bit integer. Positive
+** 9223372036854775808 will not fit in 64 bits. So it seems safer to return
+** false.
+*/
+int sqlite3FitsIn64Bits(const char *zNum){
+ int i, c;
+ if( *zNum=='-' || *zNum=='+' ) zNum++;
+ for(i=0; (c=zNum[i])>='0' && c<='9'; i++){}
+ return i<19 || (i==19 && memcmp(zNum,"9223372036854775807",19)<=0);
+}
+
+
+/*
+** Change the sqlite.magic from SQLITE_MAGIC_OPEN to SQLITE_MAGIC_BUSY.
+** Return an error (non-zero) if the magic was not SQLITE_MAGIC_OPEN
+** when this routine is called.
+**
+** This routine is an attempt to detect if two threads use the
+** same sqlite* pointer at the same time. There is a race
+** condition so it is possible that the error is not detected.
+** But usually the problem will be seen. The result will be an
+** error which can be used to debug the application that is
+** using SQLite incorrectly.
+**
+** Ticket #202: If db->magic is not a valid open value, take care not
+** to modify the db structure at all. It could be that db is a stale
+** pointer. In other words, it could be that there has been a prior
+** call to sqlite3_close(db) and db has been deallocated. And we do
+** not want to write into deallocated memory.
+*/
+int sqlite3SafetyOn(sqlite3 *db){
+ if( db->magic==SQLITE_MAGIC_OPEN ){
+ db->magic = SQLITE_MAGIC_BUSY;
+ return 0;
+ }else if( db->magic==SQLITE_MAGIC_BUSY ){
+ db->magic = SQLITE_MAGIC_ERROR;
+ db->u1.isInterrupted = 1;
+ }
+ return 1;
+}
+
+/*
+** Change the magic from SQLITE_MAGIC_BUSY to SQLITE_MAGIC_OPEN.
+** Return an error (non-zero) if the magic was not SQLITE_MAGIC_BUSY
+** when this routine is called.
+*/
+int sqlite3SafetyOff(sqlite3 *db){
+ if( db->magic==SQLITE_MAGIC_BUSY ){
+ db->magic = SQLITE_MAGIC_OPEN;
+ return 0;
+ }else if( db->magic==SQLITE_MAGIC_OPEN ){
+ db->magic = SQLITE_MAGIC_ERROR;
+ db->u1.isInterrupted = 1;
+ }
+ return 1;
+}
+
+/*
+** Check to make sure we have a valid db pointer. This test is not
+** foolproof but it does provide some measure of protection against
+** misuse of the interface such as passing in db pointers that are
+** NULL or which have been previously closed. If this routine returns
+** TRUE it means that the db pointer is invalid and should not be
+** dereferenced for any reason. The calling function should return
+** SQLITE_MISUSE immediately.
+*/
+int sqlite3SafetyCheck(sqlite3 *db){
+ int magic;
+ if( db==0 ) return 1;
+ magic = db->magic;
+ if( magic!=SQLITE_MAGIC_CLOSED &&
+ magic!=SQLITE_MAGIC_OPEN &&
+ magic!=SQLITE_MAGIC_BUSY ) return 1;
+ return 0;
+}
+
+/*
+** The variable-length integer encoding is as follows:
+**
+** KEY:
+** A = 0xxxxxxx 7 bits of data and one flag bit
+** B = 1xxxxxxx 7 bits of data and one flag bit
+** C = xxxxxxxx 8 bits of data
+**
+** 7 bits - A
+** 14 bits - BA
+** 21 bits - BBA
+** 28 bits - BBBA
+** 35 bits - BBBBA
+** 42 bits - BBBBBA
+** 49 bits - BBBBBBA
+** 56 bits - BBBBBBBA
+** 64 bits - BBBBBBBBC
+*/
+
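Worked example of the key above: the value 300 (binary 1 0010 1100) needs two 7-bit groups, so it is stored as the "BA" form 0x82 0x2C, most-significant group first, with the continuation bit set on every byte except the last. A standalone round-trip check for the one- and two-byte forms (illustrative only; the full nine-byte form is handled by sqlite3PutVarint()/sqlite3GetVarint() below):

#include <assert.h>
#include <stdio.h>

/* Minimal varint encode/decode for values below 2^14, following the
** "A" and "BA" layouts described above. */
static int put_varint14(unsigned char *p, unsigned int v){
  if( v<0x80 ){
    p[0] = (unsigned char)v;              /* A: one byte, high bit clear */
    return 1;
  }
  p[0] = (unsigned char)(0x80 | (v>>7));  /* B: high 7 bits, continuation set */
  p[1] = (unsigned char)(v & 0x7F);       /* A: low 7 bits, continuation clear */
  return 2;
}

static int get_varint14(const unsigned char *p, unsigned int *pv){
  if( (p[0] & 0x80)==0 ){ *pv = p[0]; return 1; }
  *pv = ((unsigned int)(p[0] & 0x7F)<<7) | (p[1] & 0x7F);
  return 2;
}

int main(void){
  unsigned char buf[2];
  unsigned int v;
  int n = put_varint14(buf, 300);
  assert( n==2 && buf[0]==0x82 && buf[1]==0x2C );
  n = get_varint14(buf, &v);
  assert( n==2 && v==300 );
  printf("300 -> %02X %02X\n", buf[0], buf[1]);
  return 0;
}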
+/*
+** Write a 64-bit variable-length integer to memory starting at p[0].
+** The length of the data written will be between 1 and 9 bytes. The number
+** of bytes written is returned.
+**
+** A variable-length integer consists of the lower 7 bits of each byte
+** for all bytes that have the 8th bit set and one byte with the 8th
+** bit clear. Except, if we get to the 9th byte, it stores the full
+** 8 bits and is the last byte.
+*/
+int sqlite3PutVarint(unsigned char *p, u64 v){
+ int i, j, n;
+ u8 buf[10];
+ if( v & (((u64)0xff000000)<<32) ){
+ p[8] = v;
+ v >>= 8;
+ for(i=7; i>=0; i--){
+ p[i] = (v & 0x7f) | 0x80;
+ v >>= 7;
+ }
+ return 9;
+ }
+ n = 0;
+ do{
+ buf[n++] = (v & 0x7f) | 0x80;
+ v >>= 7;
+ }while( v!=0 );
+ buf[0] &= 0x7f;
+ assert( n<=9 );
+ for(i=0, j=n-1; j>=0; j--, i++){
+ p[i] = buf[j];
+ }
+ return n;
+}
+
+/*
+** Read a 64-bit variable-length integer from memory starting at p[0].
+** Return the number of bytes read. The value is stored in *v.
+*/
+int sqlite3GetVarint(const unsigned char *p, u64 *v){
+ u32 x;
+ u64 x64;
+ int n;
+ unsigned char c;
+ if( ((c = p[0]) & 0x80)==0 ){
+ *v = c;
+ return 1;
+ }
+ x = c & 0x7f;
+ if( ((c = p[1]) & 0x80)==0 ){
+ *v = (x<<7) | c;
+ return 2;
+ }
+ x = (x<<7) | (c&0x7f);
+ if( ((c = p[2]) & 0x80)==0 ){
+ *v = (x<<7) | c;
+ return 3;
+ }
+ x = (x<<7) | (c&0x7f);
+ if( ((c = p[3]) & 0x80)==0 ){
+ *v = (x<<7) | c;
+ return 4;
+ }
+ x64 = (x<<7) | (c&0x7f);
+ n = 4;
+ do{
+ c = p[n++];
+ if( n==9 ){
+ x64 = (x64<<8) | c;
+ break;
+ }
+ x64 = (x64<<7) | (c&0x7f);
+ }while( (c & 0x80)!=0 );
+ *v = x64;
+ return n;
+}
+
+/*
+** Read a 32-bit variable-length integer from memory starting at p[0].
+** Return the number of bytes read. The value is stored in *v.
+*/
+int sqlite3GetVarint32(const unsigned char *p, u32 *v){
+ u32 x;
+ int n;
+ unsigned char c;
+ if( ((signed char*)p)[0]>=0 ){
+ *v = p[0];
+ return 1;
+ }
+ x = p[0] & 0x7f;
+ if( ((signed char*)p)[1]>=0 ){
+ *v = (x<<7) | p[1];
+ return 2;
+ }
+ x = (x<<7) | (p[1] & 0x7f);
+ n = 2;
+ do{
+ x = (x<<7) | ((c = p[n++])&0x7f);
+ }while( (c & 0x80)!=0 && n<9 );
+ *v = x;
+ return n;
+}
+
+/*
+** Return the number of bytes that will be needed to store the given
+** 64-bit integer.
+*/
+int sqlite3VarintLen(u64 v){
+ int i = 0;
+ do{
+ i++;
+ v >>= 7;
+ }while( v!=0 && i<9 );
+ return i;
+}
+
+#if !defined(SQLITE_OMIT_BLOB_LITERAL) || defined(SQLITE_HAS_CODEC) \
+ || defined(SQLITE_TEST)
+/*
+** Translate a single byte of Hex into an integer.
+*/
+static int hexToInt(int h){
+ if( h>='0' && h<='9' ){
+ return h - '0';
+ }else if( h>='a' && h<='f' ){
+ return h - 'a' + 10;
+ }else{
+ assert( h>='A' && h<='F' );
+ return h - 'A' + 10;
+ }
+}
+#endif /* !SQLITE_OMIT_BLOB_LITERAL || SQLITE_HAS_CODEC || SQLITE_TEST */
+
+#if !defined(SQLITE_OMIT_BLOB_LITERAL) || defined(SQLITE_HAS_CODEC)
+/*
+** Convert the hexadecimal digits of a BLOB literal of the form "x'hhhhhh'"
+** (only the digits between the quotes are passed in) into the corresponding
+** binary value. Return a pointer to the binary value, or NULL if the input
+** contains an odd number of digits. Space to hold the binary value has
+** been obtained from malloc and must be freed by the calling routine.
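+**
+** For example, the six digits "53514c" decode to the three bytes
+** 0x53 0x51 0x4c.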
+*/
+void *sqlite3HexToBlob(const char *z){
+ char *zBlob;
+ int i;
+ int n = strlen(z);
+ if( n%2 ) return 0;
+
+ zBlob = (char *)sqliteMalloc(n/2);
+ if( zBlob ){
+ for(i=0; i<n; i+=2){
+ zBlob[i/2] = (hexToInt(z[i])<<4) | hexToInt(z[i+1]);
+ }
+ }
+ return zBlob;
+}
+#endif /* !SQLITE_OMIT_BLOB_LITERAL || SQLITE_HAS_CODEC */
+
+#if defined(SQLITE_TEST)
+/*
+** Convert text generated by the "%p" conversion format back into
+** a pointer.
+*/
+void *sqlite3TextToPtr(const char *z){
+ void *p;
+ u64 v;
+ u32 v2;
+ if( z[0]=='0' && z[1]=='x' ){
+ z += 2;
+ }
+ v = 0;
+ while( *z ){
+ v = (v<<4) + hexToInt(*z);
+ z++;
+ }
+ if( sizeof(p)==sizeof(v) ){
+ p = *(void**)&v;
+ }else{
+ assert( sizeof(p)==sizeof(v2) );
+ v2 = (u32)v;
+ p = *(void**)&v2;
+ }
+ return p;
+}
+#endif
+
+/*
+** Return a pointer to the ThreadData associated with the calling thread.
+*/
+ThreadData *sqlite3ThreadData(){
+ ThreadData *p = (ThreadData*)sqlite3OsThreadSpecificData(1);
+ if( !p ){
+ sqlite3FailedMalloc();
+ }
+ return p;
+}
+
+/*
+** Return a pointer to the ThreadData associated with the calling thread.
+** If no ThreadData has been allocated to this thread yet, return a pointer
+** to a substitute ThreadData structure that is all zeros.
+*/
+const ThreadData *sqlite3ThreadDataReadOnly(){
+ static const ThreadData zeroData = {0}; /* Initializer to silence warnings
+ ** from broken compilers */
+ const ThreadData *pTd = sqlite3OsThreadSpecificData(0);
+ return pTd ? pTd : &zeroData;
+}
+
+/*
+** Check to see if the ThreadData for this thread is all zero. If it
+** is, then deallocate it.
+*/
+void sqlite3ReleaseThreadData(){
+ sqlite3OsThreadSpecificData(-1);
+}
+
+/*
+** This function must be called before exiting any API function (i.e.
+** returning control to the user) that has called sqlite3Malloc or
+** sqlite3Realloc.
+**
+** The returned value is normally a copy of the second argument to this
+** function. However, if a malloc() failure has occurred since the previous
+** invocation SQLITE_NOMEM is returned instead.
+**
+** If the first argument, db, is not NULL and a malloc() error has occurred,
+** then the connection error-code (the value returned by sqlite3_errcode())
+** is set to SQLITE_NOMEM.
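+**
+** The return value is masked with db->errMask so that, for example, with
+** a mask of 0xff (the usual value when extended result codes are not
+** enabled) an extended code such as SQLITE_IOERR_READ is reduced to its
+** primary code SQLITE_IOERR.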
+*/
+static int mallocHasFailed = 0;
+int sqlite3ApiExit(sqlite3* db, int rc){
+ if( sqlite3MallocFailed() ){
+ mallocHasFailed = 0;
+ sqlite3OsLeaveMutex();
+ sqlite3Error(db, SQLITE_NOMEM, 0);
+ rc = SQLITE_NOMEM;
+ }
+ return rc & (db ? db->errMask : 0xff);
+}
+
+/*
+** Return true if a malloc has failed in this thread since the last call
+** to sqlite3ApiExit(), or false otherwise.
+*/
+int sqlite3MallocFailed(){
+ return (mallocHasFailed && sqlite3OsInMutex(1));
+}
+
+/*
+** Set the "malloc has failed" condition to true for this thread.
+*/
+void sqlite3FailedMalloc(){
+ sqlite3OsEnterMutex();
+ assert( mallocHasFailed==0 );
+ mallocHasFailed = 1;
+}
+
+#ifdef SQLITE_MEMDEBUG
+/*
+** This function sets a flag in the thread-specific-data structure that will
+** cause an assert to fail if sqliteMalloc() or sqliteRealloc() is called.
+*/
+void sqlite3MallocDisallow(){
+ assert( sqlite3_mallocDisallowed>=0 );
+ sqlite3_mallocDisallowed++;
+}
+
+/*
+** This function clears the flag in the thread-specific-data structure that
+** was set by sqlite3MallocDisallow().
+*/
+void sqlite3MallocAllow(){
+ assert( sqlite3_mallocDisallowed>0 );
+ sqlite3_mallocDisallowed--;
+}
+#endif
Added: freeswitch/trunk/libs/sqlite/src/vacuum.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/vacuum.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,319 @@
+/*
+** 2003 April 6
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code used to implement the VACUUM command.
+**
+** Most of the code in this file may be omitted by defining the
+** SQLITE_OMIT_VACUUM macro.
+**
+** $Id: vacuum.c,v 1.63 2006/09/21 11:02:18 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "vdbeInt.h"
+#include "os.h"
+
+#ifndef SQLITE_OMIT_VACUUM
+/*
+** Generate a random name that is 20 characters in length.
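+** The name is drawn from an alphabet of 36 characters (26 letters and 10
+** digits), so there are 36^20, or roughly 1.3e+31, possible names.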
+*/
+static void randomName(unsigned char *zBuf){
+ static const unsigned char zChars[] =
+ "abcdefghijklmnopqrstuvwxyz"
+ "0123456789";
+ int i;
+ sqlite3Randomness(20, zBuf);
+ for(i=0; i<20; i++){
+ zBuf[i] = zChars[ zBuf[i]%(sizeof(zChars)-1) ];
+ }
+}
+
+/*
+** Execute zSql on database db. Return an error code.
+*/
+static int execSql(sqlite3 *db, const char *zSql){
+ sqlite3_stmt *pStmt;
+ if( SQLITE_OK!=sqlite3_prepare(db, zSql, -1, &pStmt, 0) ){
+ return sqlite3_errcode(db);
+ }
+ while( SQLITE_ROW==sqlite3_step(pStmt) ){}
+ return sqlite3_finalize(pStmt);
+}
+
+/*
+** Execute zSql on database db. The statement returns exactly
+** one column. Execute the text of each row it returns as SQL
+** against the same database.
+*/
+static int execExecSql(sqlite3 *db, const char *zSql){
+ sqlite3_stmt *pStmt;
+ int rc;
+
+ rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0);
+ if( rc!=SQLITE_OK ) return rc;
+
+ while( SQLITE_ROW==sqlite3_step(pStmt) ){
+ rc = execSql(db, (char*)sqlite3_column_text(pStmt, 0));
+ if( rc!=SQLITE_OK ){
+ sqlite3_finalize(pStmt);
+ return rc;
+ }
+ }
+
+ return sqlite3_finalize(pStmt);
+}
+
+/*
+** The non-standard VACUUM command is used to clean up the database,
+** collapse free space, etc. It is modelled after the VACUUM command
+** in PostgreSQL.
+**
+** In version 1.0.x of SQLite, the VACUUM command would call
+** gdbm_reorganize() on all the database tables. But beginning
+** with 2.0.0, SQLite no longer uses GDBM so this command has
+** become a no-op.
+*/
+void sqlite3Vacuum(Parse *pParse){
+ Vdbe *v = sqlite3GetVdbe(pParse);
+ if( v ){
+ sqlite3VdbeAddOp(v, OP_Vacuum, 0, 0);
+ }
+ return;
+}
+
+/*
+** This routine implements the OP_Vacuum opcode of the VDBE.
+*/
+int sqlite3RunVacuum(char **pzErrMsg, sqlite3 *db){
+ int rc = SQLITE_OK; /* Return code from service routines */
+ const char *zFilename; /* full pathname of the database file */
+ int nFilename; /* number of characters in zFilename[] */
+ char *zTemp = 0; /* a temporary file in same directory as zFilename */
+ Btree *pMain; /* The database being vacuumed */
+ Btree *pTemp;
+ char *zSql = 0;
+ int saved_flags; /* Saved value of the db->flags */
+ Db *pDb = 0; /* Database to detach at end of vacuum */
+
+ /* Save the current value of the write-schema flag before setting it. */
+ saved_flags = db->flags;
+ db->flags |= SQLITE_WriteSchema | SQLITE_IgnoreChecks;
+
+ if( !db->autoCommit ){
+ sqlite3SetString(pzErrMsg, "cannot VACUUM from within a transaction",
+ (char*)0);
+ rc = SQLITE_ERROR;
+ goto end_of_vacuum;
+ }
+
+ /* Get the full pathname of the database file and create a
+ ** temporary filename in the same directory as the original file.
+ */
+ pMain = db->aDb[0].pBt;
+ zFilename = sqlite3BtreeGetFilename(pMain);
+ assert( zFilename );
+ if( zFilename[0]=='\0' ){
+ /* The in-memory database. Do nothing. Return directly to avoid causing
+ ** an error trying to DETACH the vacuum_db (which never got attached)
+ ** in the exit-handler.
+ */
+ return SQLITE_OK;
+ }
+ nFilename = strlen(zFilename);
+ zTemp = sqliteMalloc( nFilename+100 );
+ if( zTemp==0 ){
+ rc = SQLITE_NOMEM;
+ goto end_of_vacuum;
+ }
+ strcpy(zTemp, zFilename);
+
+ /* The randomName() procedure in the following loop uses an excellent
+ ** source of randomness to generate a name from a space of 1.3e+31
+ ** possibilities. So unless the directory already contains on the order
+ ** of 1.3e+31 files, the probability that the following loop will
+ ** run more than once or twice is vanishingly small. We are certain
+ ** enough that this loop will always terminate (and terminate quickly)
+ ** that we don't even bother to set a maximum loop count.
+ */
+ do {
+ zTemp[nFilename] = '-';
+ randomName((unsigned char*)&zTemp[nFilename+1]);
+ } while( sqlite3OsFileExists(zTemp) );
+
+ /* Attach the temporary database as 'vacuum_db'. The synchronous pragma
+ ** can be set to 'off' for this file, as it is not recovered if a crash
+ ** occurs anyway. The integrity of the database is maintained by a
+ ** (possibly synchronous) transaction opened on the main database before
+ ** sqlite3BtreeCopyFile() is called.
+ **
+ ** An optimisation would be to use a non-journaled pager.
+ */
+ zSql = sqlite3MPrintf("ATTACH '%q' AS vacuum_db;", zTemp);
+ if( !zSql ){
+ rc = SQLITE_NOMEM;
+ goto end_of_vacuum;
+ }
+ rc = execSql(db, zSql);
+ sqliteFree(zSql);
+ zSql = 0;
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+ pDb = &db->aDb[db->nDb-1];
+ assert( strcmp(db->aDb[db->nDb-1].zName,"vacuum_db")==0 );
+ pTemp = db->aDb[db->nDb-1].pBt;
+ sqlite3BtreeSetPageSize(pTemp, sqlite3BtreeGetPageSize(pMain),
+ sqlite3BtreeGetReserve(pMain));
+ assert( sqlite3BtreeGetPageSize(pTemp)==sqlite3BtreeGetPageSize(pMain) );
+ rc = execSql(db, "PRAGMA vacuum_db.synchronous=OFF");
+ if( rc!=SQLITE_OK ){
+ goto end_of_vacuum;
+ }
+
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ sqlite3BtreeSetAutoVacuum(pTemp, sqlite3BtreeGetAutoVacuum(pMain));
+#endif
+
+ /* Begin a transaction */
+ rc = execSql(db, "BEGIN EXCLUSIVE;");
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+
+ /* Query the schema of the main database. Create a mirror schema
+ ** in the temporary database.
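+ ** For example, if sqlite_master contains the entry "CREATE TABLE t1(a,b)",
+ ** the first query below generates, and then executes, the statement
+ ** "CREATE TABLE vacuum_db.t1(a,b)": substr(sql,14,...) drops the first
+ ** 13 characters, i.e. the leading "CREATE TABLE " text.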
+ */
+ rc = execExecSql(db,
+ "SELECT 'CREATE TABLE vacuum_db.' || substr(sql,14,100000000) "
+ " FROM sqlite_master WHERE type='table' AND name!='sqlite_sequence'"
+ " AND rootpage>0"
+ );
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+ rc = execExecSql(db,
+ "SELECT 'CREATE INDEX vacuum_db.' || substr(sql,14,100000000)"
+ " FROM sqlite_master WHERE sql LIKE 'CREATE INDEX %' ");
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+ rc = execExecSql(db,
+ "SELECT 'CREATE UNIQUE INDEX vacuum_db.' || substr(sql,21,100000000) "
+ " FROM sqlite_master WHERE sql LIKE 'CREATE UNIQUE INDEX %'");
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+
+ /* Loop through the tables in the main database. For each, do
+ ** an "INSERT INTO vacuum_db.xxx SELECT * FROM xxx;" to copy
+ ** the contents to the temporary database.
+ */
+ rc = execExecSql(db,
+ "SELECT 'INSERT INTO vacuum_db.' || quote(name) "
+ "|| ' SELECT * FROM ' || quote(name) || ';'"
+ "FROM sqlite_master "
+ "WHERE type = 'table' AND name!='sqlite_sequence' "
+ " AND rootpage>0"
+
+ );
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+
+ /* Copy over the sequence table
+ */
+ rc = execExecSql(db,
+ "SELECT 'DELETE FROM vacuum_db.' || quote(name) || ';' "
+ "FROM vacuum_db.sqlite_master WHERE name='sqlite_sequence' "
+ );
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+ rc = execExecSql(db,
+ "SELECT 'INSERT INTO vacuum_db.' || quote(name) "
+ "|| ' SELECT * FROM ' || quote(name) || ';' "
+ "FROM vacuum_db.sqlite_master WHERE name=='sqlite_sequence';"
+ );
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+
+
+ /* Copy the triggers, views, and virtual tables from the main database
+ ** over to the temporary database. None of these objects has any
+ ** associated storage, so all we have to do is copy their entries
+ ** from the SQLITE_MASTER table.
+ */
+ rc = execSql(db,
+ "INSERT INTO vacuum_db.sqlite_master "
+ " SELECT type, name, tbl_name, rootpage, sql"
+ " FROM sqlite_master"
+ " WHERE type='view' OR type='trigger'"
+ " OR (type='table' AND rootpage=0)"
+ );
+ if( rc ) goto end_of_vacuum;
+
+ /* At this point, unless the main db was completely empty, there is now a
+ ** transaction open on the vacuum database, but not on the main database.
+ ** Open a btree level transaction on the main database. This allows a
+ ** call to sqlite3BtreeCopyFile(). The main database btree level
+ ** transaction is then committed, so the SQL level never knows it was
+ ** opened for writing. This way, the SQL transaction used to create the
+ ** temporary database never needs to be committed.
+ */
+ if( rc==SQLITE_OK ){
+ u32 meta;
+ int i;
+
+ /* This array determines which meta values are preserved in the
+ ** vacuum. Even entries are the meta value number and odd entries
+ ** are an increment to apply to the meta value after the vacuum.
+ ** The increment is used to increase the schema cookie so that other
+ ** connections to the same database will know to reread the schema.
+ */
+ static const unsigned char aCopy[] = {
+ 1, 1, /* Add one to the old schema cookie */
+ 3, 0, /* Preserve the default page cache size */
+ 5, 0, /* Preserve the default text encoding */
+ 6, 0, /* Preserve the user version */
+ };
+
+ assert( 1==sqlite3BtreeIsInTrans(pTemp) );
+ assert( 1==sqlite3BtreeIsInTrans(pMain) );
+
+ /* Copy Btree meta values */
+ for(i=0; i<sizeof(aCopy)/sizeof(aCopy[0]); i+=2){
+ rc = sqlite3BtreeGetMeta(pMain, aCopy[i], &meta);
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+ rc = sqlite3BtreeUpdateMeta(pTemp, aCopy[i], meta+aCopy[i+1]);
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+ }
+
+ rc = sqlite3BtreeCopyFile(pMain, pTemp);
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+ rc = sqlite3BtreeCommit(pTemp);
+ if( rc!=SQLITE_OK ) goto end_of_vacuum;
+ rc = sqlite3BtreeCommit(pMain);
+ }
+
+end_of_vacuum:
+ /* Restore the original value of db->flags */
+ db->flags = saved_flags;
+
+ /* Currently there is an SQL level transaction open on the vacuum
+ ** database. No locks are held on any other files (since the main file
+ ** was committed at the btree level). So it is safe to end the transaction
+ ** by manually setting the autoCommit flag to true and detaching the
+ ** vacuum database. The vacuum_db journal file is deleted when the pager
+ ** is closed by the DETACH.
+ */
+ db->autoCommit = 1;
+
+ if( pDb ){
+ sqlite3MallocDisallow();
+ sqlite3BtreeClose(pDb->pBt);
+ sqlite3MallocAllow();
+ pDb->pBt = 0;
+ pDb->pSchema = 0;
+ }
+
+ if( zTemp ){
+ sqlite3OsDelete(zTemp);
+ sqliteFree(zTemp);
+ }
+ sqliteFree( zSql );
+ sqlite3ResetInternalSchema(db, 0);
+
+ return rc;
+}
+#endif /* SQLITE_OMIT_VACUUM */
Added: freeswitch/trunk/libs/sqlite/src/vdbe.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/vdbe.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,5000 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** The code in this file implements the execution method of the
+** Virtual Database Engine (VDBE). A separate file ("vdbeaux.c")
+** handles housekeeping details such as creating and deleting
+** VDBE instances. This file is solely interested in executing
+** the VDBE program.
+**
+** In the external interface, an "sqlite3_stmt*" is an opaque pointer
+** to a VDBE.
+**
+** The SQL parser generates a program which is then executed by
+** the VDBE to do the work of the SQL statement. VDBE programs are
+** similar in form to assembly language. The program consists of
+** a linear sequence of operations. Each operation has an opcode
+** and 3 operands. Operands P1 and P2 are integers. Operand P3
+** is a null-terminated string. The P2 operand must be non-negative.
+** Opcodes will typically ignore one or more operands. Many opcodes
+** ignore all three operands.
+**
+** Computation results are stored on a stack. Each entry on the
+** stack is either an integer, a null-terminated string, a floating point
+** number, or the SQL "NULL" value. An implicit conversion from one
+** type to the other occurs as necessary.
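+**
+** As a rough illustration (not the exact output of the code generator),
+** a statement such as "SELECT 1" compiles into a program along the
+** lines of:
+**
+**        Integer   1  0        push the constant 1 onto the stack
+**        Callback  1  0        return the top of the stack as a result row
+**        Halt      0  0        end of the program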
+**
+** Most of the code in this file is taken up by the sqlite3VdbeExec()
+** function which does the work of interpreting a VDBE program.
+** But other routines are also provided to help in building up
+** a program instruction by instruction.
+**
+** Various scripts scan this source file in order to generate HTML
+** documentation, header files, or other derived files. The formatting
+** of the code in this file is, therefore, important. See other comments
+** in this file for details. If in doubt, do not deviate from existing
+** commenting and indentation practices when changing or adding code.
+**
+** $Id: vdbe.c,v 1.577 2006/09/23 20:36:02 drh Exp $
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#include <ctype.h>
+#include "vdbeInt.h"
+
+/*
+** The following global variable is incremented every time a cursor
+** moves, either by the OP_MoveXX, OP_Next, or OP_Prev opcodes. The test
+** procedures use this information to make sure that indices are
+** working correctly. This variable has no function other than to
+** help verify the correct operation of the library.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_search_count = 0;
+#endif
+
+/*
+** When this global variable is positive, it gets decremented once before
+** each instruction in the VDBE. When it reaches zero, the u1.isInterrupted
+** field of the sqlite3 structure is set in order to simulate an interrupt.
+**
+** This facility is used for testing purposes only. It does not function
+** in an ordinary build.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_interrupt_count = 0;
+#endif
+
+/*
+** The next global variable is incremented each time the OP_Sort opcode
+** is executed. The test procedures use this information to make sure that
+** sorting is occurring or not occurring at appropriate times. This variable
+** has no function other than to help verify the correct operation of the
+** library.
+*/
+#ifdef SQLITE_TEST
+int sqlite3_sort_count = 0;
+#endif
+
+/*
+** Release the memory associated with the given stack level. This
+** leaves the Mem.flags field in an inconsistent state.
+*/
+#define Release(P) if((P)->flags&MEM_Dyn){ sqlite3VdbeMemRelease(P); }
+
+/*
+** Convert the given stack entity into a string if it isn't one
+** already. Return non-zero if a malloc() fails.
+*/
+#define Stringify(P, enc) \
+ if(((P)->flags&(MEM_Str|MEM_Blob))==0 && sqlite3VdbeMemStringify(P,enc)) \
+ { goto no_mem; }
+
+/*
+** Convert the given stack entity into a string that has been obtained
+** from sqliteMalloc(). This is different from Stringify() above in that
+** Stringify() will use the NBFS bytes of static string space if the string
+** will fit but this routine always mallocs for space.
+** Return non-zero if we run out of memory.
+*/
+#define Dynamicify(P,enc) sqlite3VdbeMemDynamicify(P)
+
+/*
+** The header of a record consists of a sequence of variable-length integers.
+** These integers are almost always small and are encoded as a single byte.
+** The following macro takes advantage of this fact to provide a fast decode
+** of the integers in a record header. It is faster for the common case
+** where the integer is a single byte. It is a little slower when the
+** integer is two or more bytes. But overall it is faster.
+**
+** The following expressions are equivalent:
+**
+** x = sqlite3GetVarint32( A, &B );
+**
+** x = GetVarint( A, B );
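+**
+** For instance, if the header byte *A is 0x03, GetVarint(A,B) sets B to 3
+** and evaluates to 1 without calling sqlite3GetVarint32().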
+**
+*/
+#define GetVarint(A,B) ((B = *(A))<=0x7f ? 1 : sqlite3GetVarint32(A, &B))
+
+/*
+** An ephemeral string value (signified by the MEM_Ephem flag) contains
+** a pointer to a dynamically allocated string where some other entity
+** is responsible for deallocating that string. Because the stack entry
+** does not control the string, it might be deleted without the stack
+** entry knowing it.
+**
+** This routine converts an ephemeral string into a dynamically allocated
+** string that the stack entry itself controls. In other words, it
+** converts an MEM_Ephem string into an MEM_Dyn string.
+*/
+#define Deephemeralize(P) \
+ if( ((P)->flags&MEM_Ephem)!=0 \
+ && sqlite3VdbeMemMakeWriteable(P) ){ goto no_mem;}
+
+/*
+** Argument pMem points at a memory cell that will be passed to a
+** user-defined function or returned to the user as the result of a query.
+** The second argument, 'db_enc' is the text encoding used by the vdbe for
+** stack variables. This routine sets the pMem->enc and pMem->type
+** variables used by the sqlite3_value_*() routines.
+*/
+#define storeTypeInfo(A,B) _storeTypeInfo(A)
+static void _storeTypeInfo(Mem *pMem){
+ int flags = pMem->flags;
+ if( flags & MEM_Null ){
+ pMem->type = SQLITE_NULL;
+ }
+ else if( flags & MEM_Int ){
+ pMem->type = SQLITE_INTEGER;
+ }
+ else if( flags & MEM_Real ){
+ pMem->type = SQLITE_FLOAT;
+ }
+ else if( flags & MEM_Str ){
+ pMem->type = SQLITE_TEXT;
+ }else{
+ pMem->type = SQLITE_BLOB;
+ }
+}
+
+/*
+** Pop the stack N times.
+*/
+static void popStack(Mem **ppTos, int N){
+ Mem *pTos = *ppTos;
+ while( N>0 ){
+ N--;
+ Release(pTos);
+ pTos--;
+ }
+ *ppTos = pTos;
+}
+
+/*
+** Allocate cursor number iCur. Return a pointer to it. Return NULL
+** if we run out of memory.
+*/
+static Cursor *allocateCursor(Vdbe *p, int iCur, int iDb){
+ Cursor *pCx;
+ assert( iCur<p->nCursor );
+ if( p->apCsr[iCur] ){
+ sqlite3VdbeFreeCursor(p, p->apCsr[iCur]);
+ }
+ p->apCsr[iCur] = pCx = sqliteMalloc( sizeof(Cursor) );
+ if( pCx ){
+ pCx->iDb = iDb;
+ }
+ return pCx;
+}
+
+/*
+** Try to convert a value into a numeric representation if we can
+** do so without loss of information. In other words, if the string
+** looks like a number, convert it into a number. If it does not
+** look like a number, leave it alone.
+*/
+static void applyNumericAffinity(Mem *pRec){
+ if( (pRec->flags & (MEM_Real|MEM_Int))==0 ){
+ int realnum;
+ sqlite3VdbeMemNulTerminate(pRec);
+ if( (pRec->flags&MEM_Str)
+ && sqlite3IsNumber(pRec->z, &realnum, pRec->enc) ){
+ i64 value;
+ sqlite3VdbeChangeEncoding(pRec, SQLITE_UTF8);
+ if( !realnum && sqlite3atoi64(pRec->z, &value) ){
+ sqlite3VdbeMemRelease(pRec);
+ pRec->i = value;
+ pRec->flags = MEM_Int;
+ }else{
+ sqlite3VdbeMemRealify(pRec);
+ }
+ }
+ }
+}
+
+/*
+** Processing is determined by the affinity parameter:
+**
+** SQLITE_AFF_INTEGER:
+** SQLITE_AFF_REAL:
+** SQLITE_AFF_NUMERIC:
+** Try to convert pRec to an integer representation or a
+** floating-point representation if an integer representation
+** is not possible. Note that the integer representation is
+** always preferred, even if the affinity is REAL, because
+** an integer representation is more space efficient on disk.
+**
+** SQLITE_AFF_TEXT:
+** Convert pRec to a text representation.
+**
+** SQLITE_AFF_NONE:
+** No-op. pRec is unchanged.
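+**
+** For example, under SQLITE_AFF_NUMERIC the text value '42' is converted
+** to the integer 42 and the text value '3.5' to the real value 3.5,
+** while a text value such as 'xyz' that does not look like a number is
+** left unchanged.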
+*/
+static void applyAffinity(Mem *pRec, char affinity, u8 enc){
+ if( affinity==SQLITE_AFF_TEXT ){
+ /* Only attempt the conversion to TEXT if there is an integer or real
+ ** representation (blob and NULL do not get converted) but no string
+ ** representation.
+ */
+ if( 0==(pRec->flags&MEM_Str) && (pRec->flags&(MEM_Real|MEM_Int)) ){
+ sqlite3VdbeMemStringify(pRec, enc);
+ }
+ pRec->flags &= ~(MEM_Real|MEM_Int);
+ }else if( affinity!=SQLITE_AFF_NONE ){
+ assert( affinity==SQLITE_AFF_INTEGER || affinity==SQLITE_AFF_REAL
+ || affinity==SQLITE_AFF_NUMERIC );
+ applyNumericAffinity(pRec);
+ if( pRec->flags & MEM_Real ){
+ sqlite3VdbeIntegerAffinity(pRec);
+ }
+ }
+}
+
+/*
+** Try to convert the type of a function argument or a result column
+** into a numeric representation. Use either INTEGER or REAL whichever
+** is appropriate. But only do the conversion if it is possible without
+** loss of information and return the revised type of the argument.
+**
+** This is an EXPERIMENTAL api and is subject to change or removal.
+*/
+int sqlite3_value_numeric_type(sqlite3_value *pVal){
+ Mem *pMem = (Mem*)pVal;
+ applyNumericAffinity(pMem);
+ storeTypeInfo(pMem, 0);
+ return pMem->type;
+}
+
+/*
+** Exported version of applyAffinity(). This one works on sqlite3_value*,
+** not the internal Mem* type.
+*/
+void sqlite3ValueApplyAffinity(sqlite3_value *pVal, u8 affinity, u8 enc){
+ applyAffinity((Mem *)pVal, affinity, enc);
+}
+
+#ifdef SQLITE_DEBUG
+/*
+** Write a nice string representation of the contents of cell pMem
+** into buffer zBuf, length nBuf.
+*/
+void sqlite3VdbeMemPrettyPrint(Mem *pMem, char *zBuf){
+ char *zCsr = zBuf;
+ int f = pMem->flags;
+
+ static const char *const encnames[] = {"(X)", "(8)", "(16LE)", "(16BE)"};
+
+ if( f&MEM_Blob ){
+ int i;
+ char c;
+ if( f & MEM_Dyn ){
+ c = 'z';
+ assert( (f & (MEM_Static|MEM_Ephem))==0 );
+ }else if( f & MEM_Static ){
+ c = 't';
+ assert( (f & (MEM_Dyn|MEM_Ephem))==0 );
+ }else if( f & MEM_Ephem ){
+ c = 'e';
+ assert( (f & (MEM_Static|MEM_Dyn))==0 );
+ }else{
+ c = 's';
+ }
+
+ zCsr += sprintf(zCsr, "%c", c);
+ zCsr += sprintf(zCsr, "%d[", pMem->n);
+ for(i=0; i<16 && i<pMem->n; i++){
+ zCsr += sprintf(zCsr, "%02X ", ((int)pMem->z[i] & 0xFF));
+ }
+ for(i=0; i<16 && i<pMem->n; i++){
+ char z = pMem->z[i];
+ if( z<32 || z>126 ) *zCsr++ = '.';
+ else *zCsr++ = z;
+ }
+
+ zCsr += sprintf(zCsr, "]");
+ *zCsr = '\0';
+ }else if( f & MEM_Str ){
+ int j, k;
+ zBuf[0] = ' ';
+ if( f & MEM_Dyn ){
+ zBuf[1] = 'z';
+ assert( (f & (MEM_Static|MEM_Ephem))==0 );
+ }else if( f & MEM_Static ){
+ zBuf[1] = 't';
+ assert( (f & (MEM_Dyn|MEM_Ephem))==0 );
+ }else if( f & MEM_Ephem ){
+ zBuf[1] = 'e';
+ assert( (f & (MEM_Static|MEM_Dyn))==0 );
+ }else{
+ zBuf[1] = 's';
+ }
+ k = 2;
+ k += sprintf(&zBuf[k], "%d", pMem->n);
+ zBuf[k++] = '[';
+ for(j=0; j<15 && j<pMem->n; j++){
+ u8 c = pMem->z[j];
+ if( c>=0x20 && c<0x7f ){
+ zBuf[k++] = c;
+ }else{
+ zBuf[k++] = '.';
+ }
+ }
+ zBuf[k++] = ']';
+ k += sprintf(&zBuf[k], encnames[pMem->enc]);
+ zBuf[k++] = 0;
+ }
+}
+#endif
+
+
+#ifdef VDBE_PROFILE
+/*
+** The following routine only works on pentium-class processors.
+** It uses the RDTSC opcode to read the cycle count value out of the
+** processor and returns that value. This can be used for high-res
+** profiling.
+*/
+__inline__ unsigned long long int hwtime(void){
+ unsigned long long int x;
+ __asm__("rdtsc\n\t"
+ "mov %%edx, %%ecx\n\t"
+ :"=A" (x));
+ return x;
+}
+#endif
+
+/*
+** The CHECK_FOR_INTERRUPT macro defined here looks to see if the
+** sqlite3_interrupt() routine has been called. If it has been, then
+** processing of the VDBE program is interrupted.
+**
+** This macro is added to every instruction that does a jump in order to
+** implement a loop. This test used to be on every single instruction,
+** but that meant we were doing more testing than we needed. By only testing the
+** flag on jump instructions, we get a (small) speed improvement.
+*/
+#define CHECK_FOR_INTERRUPT \
+ if( db->u1.isInterrupted ) goto abort_due_to_interrupt;
+
+
+/*
+** Execute as much of a VDBE program as we can then return.
+**
+** sqlite3VdbeMakeReady() must be called before this routine in order to
+** close the program with a final OP_Halt and to set up the callbacks
+** and the error message pointer.
+**
+** Whenever a row or result data is available, this routine will either
+** invoke the result callback (if there is one) or return with
+** SQLITE_ROW.
+**
+** If an attempt is made to open a locked database, then this routine
+** will either invoke the busy callback (if there is one) or it will
+** return SQLITE_BUSY.
+**
+** If an error occurs, an error message is written to memory obtained
+** from sqliteMalloc() and p->zErrMsg is made to point to that memory.
+** The error code is stored in p->rc and this routine returns SQLITE_ERROR.
+**
+** If the callback ever returns non-zero, then the program exits
+** immediately. There will be no error message but the p->rc field is
+** set to SQLITE_ABORT and this routine will return SQLITE_ERROR.
+**
+** A memory allocation error causes p->rc to be set to SQLITE_NOMEM and this
+** routine to return SQLITE_ERROR.
+**
+** Other fatal errors return SQLITE_ERROR.
+**
+** After this routine has finished, sqlite3VdbeFinalize() should be
+** used to clean up the mess that was left behind.
+*/
+int sqlite3VdbeExec(
+ Vdbe *p /* The VDBE */
+){
+ int pc; /* The program counter */
+ Op *pOp; /* Current operation */
+ int rc = SQLITE_OK; /* Value to return */
+ sqlite3 *db = p->db; /* The database */
+ u8 encoding = ENC(db); /* The database encoding */
+ Mem *pTos; /* Top entry in the operand stack */
+#ifdef VDBE_PROFILE
+ unsigned long long start; /* CPU clock count at start of opcode */
+ int origPc; /* Program counter at start of opcode */
+#endif
+#ifndef SQLITE_OMIT_PROGRESS_CALLBACK
+ int nProgressOps = 0; /* Opcodes executed since progress callback. */
+#endif
+#ifndef NDEBUG
+ Mem *pStackLimit;
+#endif
+
+ if( p->magic!=VDBE_MAGIC_RUN ) return SQLITE_MISUSE;
+ assert( db->magic==SQLITE_MAGIC_BUSY );
+ pTos = p->pTos;
+ if( p->rc==SQLITE_NOMEM ){
+ /* This happens if a malloc() inside a call to sqlite3_column_text() or
+ ** sqlite3_column_text16() failed. */
+ goto no_mem;
+ }
+ assert( p->rc==SQLITE_OK || p->rc==SQLITE_BUSY );
+ p->rc = SQLITE_OK;
+ assert( p->explain==0 );
+ if( p->popStack ){
+ popStack(&pTos, p->popStack);
+ p->popStack = 0;
+ }
+ p->resOnStack = 0;
+ db->busyHandler.nBusy = 0;
+ CHECK_FOR_INTERRUPT;
+ for(pc=p->pc; rc==SQLITE_OK; pc++){
+ assert( pc>=0 && pc<p->nOp );
+ assert( pTos<=&p->aStack[pc] );
+ if( sqlite3MallocFailed() ) goto no_mem;
+#ifdef VDBE_PROFILE
+ origPc = pc;
+ start = hwtime();
+#endif
+ pOp = &p->aOp[pc];
+
+ /* Only allow tracing if SQLITE_DEBUG is defined.
+ */
+#ifdef SQLITE_DEBUG
+ if( p->trace ){
+ if( pc==0 ){
+ printf("VDBE Execution Trace:\n");
+ sqlite3VdbePrintSql(p);
+ }
+ sqlite3VdbePrintOp(p->trace, pc, pOp);
+ }
+ if( p->trace==0 && pc==0 && sqlite3OsFileExists("vdbe_sqltrace") ){
+ sqlite3VdbePrintSql(p);
+ }
+#endif
+
+
+ /* Check to see if we need to simulate an interrupt. This only happens
+ ** if we have a special test build.
+ */
+#ifdef SQLITE_TEST
+ if( sqlite3_interrupt_count>0 ){
+ sqlite3_interrupt_count--;
+ if( sqlite3_interrupt_count==0 ){
+ sqlite3_interrupt(db);
+ }
+ }
+#endif
+
+#ifndef SQLITE_OMIT_PROGRESS_CALLBACK
+ /* Call the progress callback if it is configured and the required number
+ ** of VDBE ops have been executed (either since this invocation of
+ ** sqlite3VdbeExec() or since last time the progress callback was called).
+ ** If the progress callback returns non-zero, exit the virtual machine with
+ ** a return code SQLITE_ABORT.
+ */
+ if( db->xProgress ){
+ if( db->nProgressOps==nProgressOps ){
+ if( sqlite3SafetyOff(db) ) goto abort_due_to_misuse;
+ if( db->xProgress(db->pProgressArg)!=0 ){
+ sqlite3SafetyOn(db);
+ rc = SQLITE_ABORT;
+ continue; /* skip to the next iteration of the for loop */
+ }
+ nProgressOps = 0;
+ if( sqlite3SafetyOn(db) ) goto abort_due_to_misuse;
+ }
+ nProgressOps++;
+ }
+#endif
+
+#ifndef NDEBUG
+ /* This is to check that the return value of the static function
+ ** opcodeNoPush() (see vdbeaux.c) matches the behavior of the
+ ** virtual machine implementation in this file. If
+ ** opcodeNoPush() returns non-zero, then the stack is guaranteed
+ ** not to grow when the opcode is executed. If it returns zero, then
+ ** the stack may grow by at most 1.
+ **
+ ** The global wrapper function sqlite3VdbeOpcodeUsesStack() is not
+ ** available if NDEBUG is defined at build time.
+ */
+ pStackLimit = pTos;
+ if( !sqlite3VdbeOpcodeNoPush(pOp->opcode) ){
+ pStackLimit++;
+ }
+#endif
+
+ switch( pOp->opcode ){
+
+/*****************************************************************************
+** What follows is a massive switch statement where each case implements a
+** separate instruction in the virtual machine. If we follow the usual
+** indentation conventions, each case should be indented by 6 spaces. But
+** that is a lot of wasted space on the left margin. So the code within
+** the switch statement will break with convention and be flush-left. Another
+** big comment (similar to this one) will mark the point in the code where
+** we transition back to normal indentation.
+**
+** The formatting of each case is important. The makefile for SQLite
+** generates two C files "opcodes.h" and "opcodes.c" by scanning this
+** file looking for lines that begin with "case OP_". The opcodes.h files
+** will be filled with #defines that give unique integer values to each
+** opcode and the opcodes.c file is filled with an array of strings where
+** each string is the symbolic name for the corresponding opcode. If the
+** case statement is followed by a comment of the form "/# same as ... #/"
+** that comment is used to determine the particular value of the opcode.
+**
+** If a comment on the same line as the "case OP_" construction contains
+** the word "no-push", then the opcode is guarenteed not to grow the
+** vdbe stack when it is executed. See function opcode() in
+** vdbeaux.c for details.
+**
+** Documentation about VDBE opcodes is generated by scanning this file
+** for lines that contain "Opcode:". That line and all subsequent
+** comment lines are used in the generation of the opcode.html documentation
+** file.
+**
+** SUMMARY:
+**
+** Formatting is important to scripts that scan this file.
+** Do not deviate from the formatting style currently in use.
+**
+*****************************************************************************/
+
+/* Opcode: Goto * P2 *
+**
+** An unconditional jump to address P2.
+** The next instruction executed will be
+** the one at index P2 from the beginning of
+** the program.
+*/
+case OP_Goto: { /* no-push */
+ CHECK_FOR_INTERRUPT;
+ pc = pOp->p2 - 1;
+ break;
+}
+
+/* Opcode: Gosub * P2 *
+**
+** Push the current address plus 1 onto the return address stack
+** and then jump to address P2.
+**
+** The return address stack is of limited depth. If too many
+** OP_Gosub operations occur without intervening OP_Returns, then
+** the return address stack will fill up and processing will abort
+** with a fatal error.
+*/
+case OP_Gosub: { /* no-push */
+ assert( p->returnDepth<sizeof(p->returnStack)/sizeof(p->returnStack[0]) );
+ p->returnStack[p->returnDepth++] = pc+1;
+ pc = pOp->p2 - 1;
+ break;
+}
+
+/* Opcode: Return * * *
+**
+** Jump immediately to the next instruction after the last unreturned
+** OP_Gosub. If an OP_Return has occurred for all OP_Gosubs, then
+** processing aborts with a fatal error.
+*/
+case OP_Return: { /* no-push */
+ assert( p->returnDepth>0 );
+ p->returnDepth--;
+ pc = p->returnStack[p->returnDepth] - 1;
+ break;
+}
+
+/* Opcode: Halt P1 P2 P3
+**
+** Exit immediately. All open cursors, Fifos, etc are closed
+** automatically.
+**
+** P1 is the result code returned by sqlite3_exec(), sqlite3_reset(),
+** or sqlite3_finalize(). For a normal halt, this should be SQLITE_OK (0).
+** For errors, it can be some other value. If P1!=0 then P2 will determine
+** whether or not to rollback the current transaction. Do not rollback
+** if P2==OE_Fail. Do the rollback if P2==OE_Rollback. If P2==OE_Abort,
+** then back out all changes that have occurred during this execution of the
+** VDBE, but do not rollback the transaction.
+**
+** If P3 is not null then it is an error message string.
+**
+** There is an implied "Halt 0 0 0" instruction inserted at the very end of
+** every program. So a jump past the last instruction of the program
+** is the same as executing Halt.
+*/
+case OP_Halt: { /* no-push */
+ p->pTos = pTos;
+ p->rc = pOp->p1;
+ p->pc = pc;
+ p->errorAction = pOp->p2;
+ if( pOp->p3 ){
+ sqlite3SetString(&p->zErrMsg, pOp->p3, (char*)0);
+ }
+ rc = sqlite3VdbeHalt(p);
+ assert( rc==SQLITE_BUSY || rc==SQLITE_OK );
+ if( rc==SQLITE_BUSY ){
+ p->rc = SQLITE_BUSY;
+ return SQLITE_BUSY;
+ }
+ return p->rc ? SQLITE_ERROR : SQLITE_DONE;
+}
+
+/* Opcode: Integer P1 * *
+**
+** The 32-bit integer value P1 is pushed onto the stack.
+*/
+case OP_Integer: {
+ pTos++;
+ pTos->flags = MEM_Int;
+ pTos->i = pOp->p1;
+ break;
+}
+
+/* Opcode: Int64 * * P3
+**
+** P3 is a string representation of an integer. Convert that integer
+** to a 64-bit value and push it onto the stack.
+*/
+case OP_Int64: {
+ pTos++;
+ assert( pOp->p3!=0 );
+ pTos->flags = MEM_Str|MEM_Static|MEM_Term;
+ pTos->z = pOp->p3;
+ pTos->n = strlen(pTos->z);
+ pTos->enc = SQLITE_UTF8;
+ pTos->i = sqlite3VdbeIntValue(pTos);
+ pTos->flags |= MEM_Int;
+ break;
+}
+
+/* Opcode: Real * * P3
+**
+** The string value P3 is converted to a real and pushed on to the stack.
+*/
+case OP_Real: { /* same as TK_FLOAT, */
+ pTos++;
+ pTos->flags = MEM_Str|MEM_Static|MEM_Term;
+ pTos->z = pOp->p3;
+ pTos->n = strlen(pTos->z);
+ pTos->enc = SQLITE_UTF8;
+ pTos->r = sqlite3VdbeRealValue(pTos);
+ pTos->flags |= MEM_Real;
+ sqlite3VdbeChangeEncoding(pTos, encoding);
+ break;
+}
+
+/* Opcode: String8 * * P3
+**
+** P3 points to a nul terminated UTF-8 string. This opcode is transformed
+** into an OP_String before it is executed for the first time.
+*/
+case OP_String8: { /* same as TK_STRING */
+ assert( pOp->p3!=0 );
+ pOp->opcode = OP_String;
+ pOp->p1 = strlen(pOp->p3);
+
+#ifndef SQLITE_OMIT_UTF16
+ if( encoding!=SQLITE_UTF8 ){
+ pTos++;
+ sqlite3VdbeMemSetStr(pTos, pOp->p3, -1, SQLITE_UTF8, SQLITE_STATIC);
+ if( SQLITE_OK!=sqlite3VdbeChangeEncoding(pTos, encoding) ) goto no_mem;
+ if( SQLITE_OK!=sqlite3VdbeMemDynamicify(pTos) ) goto no_mem;
+ pTos->flags &= ~(MEM_Dyn);
+ pTos->flags |= MEM_Static;
+ if( pOp->p3type==P3_DYNAMIC ){
+ sqliteFree(pOp->p3);
+ }
+ pOp->p3type = P3_DYNAMIC;
+ pOp->p3 = pTos->z;
+ pOp->p1 = pTos->n;
+ break;
+ }
+#endif
+ /* Otherwise fall through to the next case, OP_String */
+}
+
+/* Opcode: String P1 * P3
+**
+** The string value P3 of length P1 (bytes) is pushed onto the stack.
+*/
+case OP_String: {
+ pTos++;
+ assert( pOp->p3!=0 );
+ pTos->flags = MEM_Str|MEM_Static|MEM_Term;
+ pTos->z = pOp->p3;
+ pTos->n = pOp->p1;
+ pTos->enc = encoding;
+ break;
+}
+
+/* Opcode: Null * * *
+**
+** Push a NULL onto the stack.
+*/
+case OP_Null: {
+ pTos++;
+ pTos->flags = MEM_Null;
+ pTos->n = 0;
+ break;
+}
+
+
+#ifndef SQLITE_OMIT_BLOB_LITERAL
+/* Opcode: HexBlob * * P3
+**
+** P3 is a UTF-8 SQL hex encoding of a blob. The blob is pushed onto the
+** vdbe stack.
+**
+** The first time this instruction executes, it transforms itself into a
+** 'Blob' opcode with a binary blob as P3.
+*/
+case OP_HexBlob: { /* same as TK_BLOB */
+ pOp->opcode = OP_Blob;
+ pOp->p1 = strlen(pOp->p3)/2;
+ if( pOp->p1 ){
+ char *zBlob = sqlite3HexToBlob(pOp->p3);
+ if( !zBlob ) goto no_mem;
+ if( pOp->p3type==P3_DYNAMIC ){
+ sqliteFree(pOp->p3);
+ }
+ pOp->p3 = zBlob;
+ pOp->p3type = P3_DYNAMIC;
+ }else{
+ if( pOp->p3type==P3_DYNAMIC ){
+ sqliteFree(pOp->p3);
+ }
+ pOp->p3type = P3_STATIC;
+ pOp->p3 = "";
+ }
+
+ /* Fall through to the next case, OP_Blob. */
+}
+
+/* Opcode: Blob P1 * P3
+**
+** P3 points to a blob of data P1 bytes long. Push this
+** value onto the stack. This instruction is not coded directly
+** by the compiler. Instead, the compiler layer specifies
+** an OP_HexBlob opcode, with the hex string representation of
+** the blob as P3. This opcode is transformed to an OP_Blob
+** the first time it is executed.
+*/
+case OP_Blob: {
+ pTos++;
+ sqlite3VdbeMemSetStr(pTos, pOp->p3, pOp->p1, 0, 0);
+ break;
+}
+#endif /* SQLITE_OMIT_BLOB_LITERAL */
+
+/* Opcode: Variable P1 * *
+**
+** Push the value of variable P1 onto the stack. A variable is
+** an unknown in the original SQL string as handed to sqlite3_compile().
+** Any occurance of the '?' character in the original SQL is considered
+** a variable. Variables in the SQL string are number from left to
+** right beginning with 1. The values of variables are set using the
+** sqlite3_bind() API.
+*/
+case OP_Variable: {
+ int j = pOp->p1 - 1;
+ assert( j>=0 && j<p->nVar );
+
+ pTos++;
+ sqlite3VdbeMemShallowCopy(pTos, &p->aVar[j], MEM_Static);
+ break;
+}
+
+/* Opcode: Pop P1 * *
+**
+** P1 elements are popped off of the top of stack and discarded.
+*/
+case OP_Pop: { /* no-push */
+ assert( pOp->p1>=0 );
+ popStack(&pTos, pOp->p1);
+ assert( pTos>=&p->aStack[-1] );
+ break;
+}
+
+/* Opcode: Dup P1 P2 *
+**
+** A copy of the P1-th element of the stack
+** is made and pushed onto the top of the stack.
+** The top of the stack is element 0. So the
+** instruction "Dup 0 0 0" will make a copy of the
+** top of the stack.
+**
+** If the content of the P1-th element is a dynamically
+** allocated string, then a new copy of that string
+** is made if P2==0. If P2!=0, then just a pointer
+** to the string is copied.
+**
+** Also see the Pull instruction.
+*/
+case OP_Dup: {
+ Mem *pFrom = &pTos[-pOp->p1];
+ assert( pFrom<=pTos && pFrom>=p->aStack );
+ pTos++;
+ sqlite3VdbeMemShallowCopy(pTos, pFrom, MEM_Ephem);
+ if( pOp->p2 ){
+ Deephemeralize(pTos);
+ }
+ break;
+}
+
+/* Opcode: Pull P1 * *
+**
+** The P1-th element is removed from its current location on
+** the stack and pushed back on top of the stack. The
+** top of the stack is element 0, so "Pull 0 0 0" is
+** a no-op. "Pull 1 0 0" swaps the top two elements of
+** the stack.
+**
+** See also the Dup instruction.
+*/
+case OP_Pull: { /* no-push */
+ Mem *pFrom = &pTos[-pOp->p1];
+ int i;
+ Mem ts;
+
+ ts = *pFrom;
+ Deephemeralize(pTos);
+ for(i=0; i<pOp->p1; i++, pFrom++){
+ Deephemeralize(&pFrom[1]);
+ assert( (pFrom->flags & MEM_Ephem)==0 );
+ *pFrom = pFrom[1];
+ if( pFrom->flags & MEM_Short ){
+ assert( pFrom->flags & (MEM_Str|MEM_Blob) );
+ assert( pFrom->z==pFrom[1].zShort );
+ pFrom->z = pFrom->zShort;
+ }
+ }
+ *pTos = ts;
+ if( pTos->flags & MEM_Short ){
+ assert( pTos->flags & (MEM_Str|MEM_Blob) );
+ assert( pTos->z==pTos[-pOp->p1].zShort );
+ pTos->z = pTos->zShort;
+ }
+ break;
+}
+
+/* Opcode: Push P1 * *
+**
+** Overwrite the value of the P1-th element down on the
+** stack (P1==0 is the top of the stack) with the value
+** of the top of the stack. Then pop the top of the stack.
+*/
+case OP_Push: { /* no-push */
+ Mem *pTo = &pTos[-pOp->p1];
+
+ assert( pTo>=p->aStack );
+ sqlite3VdbeMemMove(pTo, pTos);
+ pTos--;
+ break;
+}
+
+/* Opcode: Callback P1 * *
+**
+** The top P1 values on the stack represent a single result row from
+** a query. This opcode causes the sqlite3_step() call to terminate
+** with an SQLITE_ROW return code and it sets up the sqlite3_stmt
+** structure to provide access to the top P1 values as the result
+** row. When the sqlite3_step() function is run again, the top P1
+** values will be automatically popped from the stack before the next
+** instruction executes.
+*/
+case OP_Callback: { /* no-push */
+ Mem *pMem;
+ Mem *pFirstColumn;
+ assert( p->nResColumn==pOp->p1 );
+
+ /* Data in the pager might be moved or changed out from under us
+ ** in between the return from this sqlite3_step() call and the
+ ** next call to sqlite3_step(). So deephemeralize everything on
+ ** the stack. Note that ephemeral data is never stored in memory
+ ** cells so we do not have to worry about them.
+ */
+ pFirstColumn = &pTos[0-pOp->p1];
+ for(pMem = p->aStack; pMem<pFirstColumn; pMem++){
+ Deephemeralize(pMem);
+ }
+
+ /* Invalidate all ephemeral cursor row caches */
+ p->cacheCtr = (p->cacheCtr + 2)|1;
+
+ /* Make sure the results of the current row are \000 terminated
+ ** and have an assigned type. The results are deephemeralized as
+ ** a side effect.
+ */
+ for(; pMem<=pTos; pMem++ ){
+ sqlite3VdbeMemNulTerminate(pMem);
+ storeTypeInfo(pMem, encoding);
+ }
+
+ /* Set up the statement structure so that it will pop the current
+ ** results from the stack when the statement returns.
+ */
+ p->resOnStack = 1;
+ p->nCallback++;
+ p->popStack = pOp->p1;
+ p->pc = pc + 1;
+ p->pTos = pTos;
+ return SQLITE_ROW;
+}
+
+/* Opcode: Concat P1 P2 *
+**
+** Look at the first P1+2 elements of the stack. Append them all
+** together with the lowest element first. The original P1+2 elements
+** are popped from the stack if P2==0 and retained if P2==1. If
+** any element of the stack is NULL, then the result is NULL.
+**
+** When P1==1, this routine makes a copy of the top stack element
+** into memory obtained from sqliteMalloc().
+*/
+case OP_Concat: { /* same as TK_CONCAT */
+ char *zNew;
+ int nByte;
+ int nField;
+ int i, j;
+ Mem *pTerm;
+
+ /* Loop through the stack elements to see how long the result will be. */
+ nField = pOp->p1 + 2;
+ pTerm = &pTos[1-nField];
+ nByte = 0;
+ for(i=0; i<nField; i++, pTerm++){
+ assert( pOp->p2==0 || (pTerm->flags&MEM_Str) );
+ if( pTerm->flags&MEM_Null ){
+ nByte = -1;
+ break;
+ }
+ Stringify(pTerm, encoding);
+ nByte += pTerm->n;
+ }
+
+ if( nByte<0 ){
+ /* If nByte is less than zero, then there is a NULL value on the stack.
+ ** In this case just pop the values off the stack (if required) and
+ ** push on a NULL.
+ */
+ if( pOp->p2==0 ){
+ popStack(&pTos, nField);
+ }
+ pTos++;
+ pTos->flags = MEM_Null;
+ }else{
+ /* Otherwise malloc() space for the result and concatenate all the
+ ** stack values.
+ */
+ zNew = sqliteMallocRaw( nByte+2 );
+ if( zNew==0 ) goto no_mem;
+ j = 0;
+ pTerm = &pTos[1-nField];
+ for(i=j=0; i<nField; i++, pTerm++){
+ int n = pTerm->n;
+ assert( pTerm->flags & (MEM_Str|MEM_Blob) );
+ memcpy(&zNew[j], pTerm->z, n);
+ j += n;
+ }
+ zNew[j] = 0;
+ zNew[j+1] = 0;
+ assert( j==nByte );
+
+ if( pOp->p2==0 ){
+ popStack(&pTos, nField);
+ }
+ pTos++;
+ pTos->n = j;
+ pTos->flags = MEM_Str|MEM_Dyn|MEM_Term;
+ pTos->xDel = 0;
+ pTos->enc = encoding;
+ pTos->z = zNew;
+ }
+ break;
+}
+
+/* Opcode: Add * * *
+**
+** Pop the top two elements from the stack, add them together,
+** and push the result back onto the stack. If either element
+** is a string then it is converted to a double using the atof()
+** function before the addition.
+** If either operand is NULL, the result is NULL.
+*/
+/* Opcode: Multiply * * *
+**
+** Pop the top two elements from the stack, multiply them together,
+** and push the result back onto the stack. If either element
+** is a string then it is converted to a double using the atof()
+** function before the multiplication.
+** If either operand is NULL, the result is NULL.
+*/
+/* Opcode: Subtract * * *
+**
+** Pop the top two elements from the stack, subtract the
+** first (what was on top of the stack) from the second (the
+** next on stack)
+** and push the result back onto the stack. If either element
+** is a string then it is converted to a double using the atof()
+** function before the subtraction.
+** If either operand is NULL, the result is NULL.
+*/
+/* Opcode: Divide * * *
+**
+** Pop the top two elements from the stack, divide the
+** second (the next on stack) by the first (what was on top
+** of the stack),
+** and push the result back onto the stack. If either element
+** is a string then it is converted to a double using the atof()
+** function before the division. Division by zero returns NULL.
+** If either operand is NULL, the result is NULL.
+*/
+/* Opcode: Remainder * * *
+**
+** Pop the top two elements from the stack, divide the
+** second (the next on stack) by the first (what was on top
+** of the stack),
+** and push the remainder after division onto the stack. If either element
+** is a string then it is converted to a double using the atof()
+** function before the division. Division by zero returns NULL.
+** If either operand is NULL, the result is NULL.
+*/
+case OP_Add: /* same as TK_PLUS, no-push */
+case OP_Subtract: /* same as TK_MINUS, no-push */
+case OP_Multiply: /* same as TK_STAR, no-push */
+case OP_Divide: /* same as TK_SLASH, no-push */
+case OP_Remainder: { /* same as TK_REM, no-push */
+ Mem *pNos = &pTos[-1];
+ int flags;
+ assert( pNos>=p->aStack );
+ flags = pTos->flags | pNos->flags;
+ if( (flags & MEM_Null)!=0 ){
+ Release(pTos);
+ pTos--;
+ Release(pTos);
+ pTos->flags = MEM_Null;
+ }else if( (pTos->flags & pNos->flags & MEM_Int)==MEM_Int ){
+ i64 a, b;
+ a = pTos->i;
+ b = pNos->i;
+ switch( pOp->opcode ){
+ case OP_Add: b += a; break;
+ case OP_Subtract: b -= a; break;
+ case OP_Multiply: b *= a; break;
+ case OP_Divide: {
+ if( a==0 ) goto divide_by_zero;
+ b /= a;
+ break;
+ }
+ default: {
+ if( a==0 ) goto divide_by_zero;
+ b %= a;
+ break;
+ }
+ }
+ Release(pTos);
+ pTos--;
+ Release(pTos);
+ pTos->i = b;
+ pTos->flags = MEM_Int;
+ }else{
+ double a, b;
+ a = sqlite3VdbeRealValue(pTos);
+ b = sqlite3VdbeRealValue(pNos);
+ switch( pOp->opcode ){
+ case OP_Add: b += a; break;
+ case OP_Subtract: b -= a; break;
+ case OP_Multiply: b *= a; break;
+ case OP_Divide: {
+ if( a==0.0 ) goto divide_by_zero;
+ b /= a;
+ break;
+ }
+ default: {
+ int ia = (int)a;
+ int ib = (int)b;
+ if( ia==0.0 ) goto divide_by_zero;
+ b = ib % ia;
+ break;
+ }
+ }
+ Release(pTos);
+ pTos--;
+ Release(pTos);
+ pTos->r = b;
+ pTos->flags = MEM_Real;
+ if( (flags & MEM_Real)==0 ){
+ sqlite3VdbeIntegerAffinity(pTos);
+ }
+ }
+ break;
+
+divide_by_zero:
+ Release(pTos);
+ pTos--;
+ Release(pTos);
+ pTos->flags = MEM_Null;
+ break;
+}
+
+/* Opcode: CollSeq * * P3
+**
+** P3 is a pointer to a CollSeq struct. If the next call to a user function
+** or aggregate calls sqlite3GetFuncCollSeq(), this collation sequence will
+** be returned. This is used by the built-in min(), max() and nullif()
+** functions.
+**
+** The interface used by the implementation of the aforementioned functions
+** to retrieve the collation sequence set by this opcode is not available
+** publicly, only to user functions defined in func.c.
+*/
+case OP_CollSeq: { /* no-push */
+ assert( pOp->p3type==P3_COLLSEQ );
+ break;
+}
+
+/* Opcode: Function P1 P2 P3
+**
+** Invoke a user function (P3 is a pointer to a Function structure that
+** defines the function) with P2 arguments taken from the stack. Pop all
+** arguments from the stack and push back the result.
+**
+** P1 is a 32-bit bitmask indicating whether or not each argument to the
+** function was determined to be constant at compile time. If the first
+** argument was constant then bit 0 of P1 is set. This is used to determine
+** whether meta data associated with a user function argument using the
+** sqlite3_set_auxdata() API may be safely retained until the next
+** invocation of this opcode.
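+**
+** For example, in a call like f(?,'abc',?) only the second argument is
+** constant, so P1 would have just bit 1 set (P1==0x02).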
+**
+** See also: AggStep and AggFinal
+*/
+case OP_Function: {
+ int i;
+ Mem *pArg;
+ sqlite3_context ctx;
+ sqlite3_value **apVal;
+ int n = pOp->p2;
+
+ apVal = p->apArg;
+ assert( apVal || n==0 );
+
+ pArg = &pTos[1-n];
+ for(i=0; i<n; i++, pArg++){
+ apVal[i] = pArg;
+ storeTypeInfo(pArg, encoding);
+ }
+
+ assert( pOp->p3type==P3_FUNCDEF || pOp->p3type==P3_VDBEFUNC );
+ if( pOp->p3type==P3_FUNCDEF ){
+ ctx.pFunc = (FuncDef*)pOp->p3;
+ ctx.pVdbeFunc = 0;
+ }else{
+ ctx.pVdbeFunc = (VdbeFunc*)pOp->p3;
+ ctx.pFunc = ctx.pVdbeFunc->pFunc;
+ }
+
+ ctx.s.flags = MEM_Null;
+ ctx.s.z = 0;
+ ctx.s.xDel = 0;
+ ctx.isError = 0;
+ if( ctx.pFunc->needCollSeq ){
+ assert( pOp>p->aOp );
+ assert( pOp[-1].p3type==P3_COLLSEQ );
+ assert( pOp[-1].opcode==OP_CollSeq );
+ ctx.pColl = (CollSeq *)pOp[-1].p3;
+ }
+ if( sqlite3SafetyOff(db) ) goto abort_due_to_misuse;
+ (*ctx.pFunc->xFunc)(&ctx, n, apVal);
+ if( sqlite3SafetyOn(db) ) goto abort_due_to_misuse;
+ if( sqlite3MallocFailed() ) goto no_mem;
+ popStack(&pTos, n);
+
+ /* If any auxiliary data functions have been called by this user function,
+ ** immediately call the destructor for any non-static values.
+ */
+ if( ctx.pVdbeFunc ){
+ sqlite3VdbeDeleteAuxData(ctx.pVdbeFunc, pOp->p1);
+ pOp->p3 = (char *)ctx.pVdbeFunc;
+ pOp->p3type = P3_VDBEFUNC;
+ }
+
+ /* If the function returned an error, throw an exception */
+ if( ctx.isError ){
+ sqlite3SetString(&p->zErrMsg, sqlite3_value_text(&ctx.s), (char*)0);
+ rc = SQLITE_ERROR;
+ }
+
+ /* Copy the result of the function to the top of the stack */
+ sqlite3VdbeChangeEncoding(&ctx.s, encoding);
+ pTos++;
+ pTos->flags = 0;
+ sqlite3VdbeMemMove(pTos, &ctx.s);
+ break;
+}
+
+/* Opcode: BitAnd * * *
+**
+** Pop the top two elements from the stack. Convert both elements
+** to integers. Push back onto the stack the bit-wise AND of the
+** two elements.
+** If either operand is NULL, the result is NULL.
+*/
+/* Opcode: BitOr * * *
+**
+** Pop the top two elements from the stack. Convert both elements
+** to integers. Push back onto the stack the bit-wise OR of the
+** two elements.
+** If either operand is NULL, the result is NULL.
+*/
+/* Opcode: ShiftLeft * * *
+**
+** Pop the top two elements from the stack. Convert both elements
+** to integers. Push back onto the stack the second element shifted
+** left by N bits where N is the top element on the stack.
+** If either operand is NULL, the result is NULL.
+*/
+/* Opcode: ShiftRight * * *
+**
+** Pop the top two elements from the stack. Convert both elements
+** to integers. Push back onto the stack the second element shifted
+** right by N bits where N is the top element on the stack.
+** If either operand is NULL, the result is NULL.
+*/
+case OP_BitAnd: /* same as TK_BITAND, no-push */
+case OP_BitOr: /* same as TK_BITOR, no-push */
+case OP_ShiftLeft: /* same as TK_LSHIFT, no-push */
+case OP_ShiftRight: { /* same as TK_RSHIFT, no-push */
+ Mem *pNos = &pTos[-1];
+ i64 a, b;
+
+ assert( pNos>=p->aStack );
+ if( (pTos->flags | pNos->flags) & MEM_Null ){
+ popStack(&pTos, 2);
+ pTos++;
+ pTos->flags = MEM_Null;
+ break;
+ }
+ a = sqlite3VdbeIntValue(pNos);
+ b = sqlite3VdbeIntValue(pTos);
+ switch( pOp->opcode ){
+ case OP_BitAnd: a &= b; break;
+ case OP_BitOr: a |= b; break;
+ case OP_ShiftLeft: a <<= b; break;
+ case OP_ShiftRight: a >>= b; break;
+ default: /* CANT HAPPEN */ break;
+ }
+ Release(pTos);
+ pTos--;
+ Release(pTos);
+ pTos->i = a;
+ pTos->flags = MEM_Int;
+ break;
+}
+
+/* Opcode: AddImm P1 * *
+**
+** Add the value P1 to whatever is on top of the stack. The result
+** is always an integer.
+**
+** To force the top of the stack to be an integer, just add 0.
+*/
+case OP_AddImm: { /* no-push */
+ assert( pTos>=p->aStack );
+ sqlite3VdbeMemIntegerify(pTos);
+ pTos->i += pOp->p1;
+ break;
+}
+
+/* Opcode: ForceInt P1 P2 *
+**
+** Convert the top of the stack into an integer. If the current top of
+** the stack is not numeric (meaning that it is a NULL or a string that
+** does not look like an integer or floating point number) then pop the
+** stack and jump to P2. If the top of the stack is numeric then
+** convert it into the least integer that is greater than or equal to its
+** current value if P1==0, or to the least integer that is strictly
+** greater than its current value if P1==1.
+*/
+case OP_ForceInt: { /* no-push */
+ i64 v;
+ assert( pTos>=p->aStack );
+ applyAffinity(pTos, SQLITE_AFF_NUMERIC, encoding);
+ if( (pTos->flags & (MEM_Int|MEM_Real))==0 ){
+ Release(pTos);
+ pTos--;
+ pc = pOp->p2 - 1;
+ break;
+ }
+ if( pTos->flags & MEM_Int ){
+ v = pTos->i + (pOp->p1!=0);
+ }else{
+ /* FIX ME: should this not be assert( pTos->flags & MEM_Real ) ??? */
+ sqlite3VdbeMemRealify(pTos);
+ v = (int)pTos->r;
+ if( pTos->r>(double)v ) v++;
+ if( pOp->p1 && pTos->r==(double)v ) v++;
+ }
+ Release(pTos);
+ pTos->i = v;
+ pTos->flags = MEM_Int;
+ break;
+}
+
+/* Opcode: MustBeInt P1 P2 *
+**
+** Force the top of the stack to be an integer. If the top of the
+** stack is not an integer and cannot be converted into an integer
+** without data loss, then jump immediately to P2, or if P2==0
+** raise an SQLITE_MISMATCH exception.
+**
+** If the top of the stack is not an integer and P2 is not zero and
+** P1 is 1, then the stack is popped. In all other cases, the depth
+** of the stack is unchanged.
+*/
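+/* For example, the text value '123' converts to the integer 123 without
+** loss, so execution falls through; the text 'abc' or the real value 3.7
+** cannot be converted losslessly, so the P2 jump is taken (or, when P2==0,
+** SQLITE_MISMATCH is raised).
+*/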
+case OP_MustBeInt: { /* no-push */
+ assert( pTos>=p->aStack );
+ applyAffinity(pTos, SQLITE_AFF_NUMERIC, encoding);
+ if( (pTos->flags & MEM_Int)==0 ){
+ if( pOp->p2==0 ){
+ rc = SQLITE_MISMATCH;
+ goto abort_due_to_error;
+ }else{
+ if( pOp->p1 ) popStack(&pTos, 1);
+ pc = pOp->p2 - 1;
+ }
+ }else{
+ Release(pTos);
+ pTos->flags = MEM_Int;
+ }
+ break;
+}
+
+/* Opcode: RealAffinity * * *
+**
+** If the top of the stack is an integer, convert it to a real value.
+**
+** This opcode is used when extracting information from a column that
+** has REAL affinity. Such column values may still be stored as
+** integers, for space efficiency, but after extraction we want them
+** to have only a real value.
+*/
+case OP_RealAffinity: { /* no-push */
+ assert( pTos>=p->aStack );
+ if( pTos->flags & MEM_Int ){
+ sqlite3VdbeMemRealify(pTos);
+ }
+ break;
+}
+
+#ifndef SQLITE_OMIT_CAST
+/* Opcode: ToText * * *
+**
+** Force the value on the top of the stack to be text.
+** If the value is numeric, convert it to a string using the
+** equivalent of printf(). Blob values are unchanged and
+** are afterwards simply interpreted as text.
+**
+** A NULL value is not changed by this routine. It remains NULL.
+*/
+case OP_ToText: { /* same as TK_TO_TEXT, no-push */
+ assert( pTos>=p->aStack );
+ if( pTos->flags & MEM_Null ) break;
+ assert( MEM_Str==(MEM_Blob>>3) );
+ pTos->flags |= (pTos->flags&MEM_Blob)>>3;
+ applyAffinity(pTos, SQLITE_AFF_TEXT, encoding);
+ assert( pTos->flags & MEM_Str );
+ pTos->flags &= ~(MEM_Int|MEM_Real|MEM_Blob);
+ break;
+}
+
+/* Opcode: ToBlob * * *
+**
+** Force the value on the top of the stack to be a BLOB.
+** If the value is numeric, convert it to a string first.
+** Strings are simply reinterpreted as blobs with no change
+** to the underlying data.
+**
+** A NULL value is not changed by this routine. It remains NULL.
+*/
+case OP_ToBlob: { /* same as TK_TO_BLOB, no-push */
+ assert( pTos>=p->aStack );
+ if( pTos->flags & MEM_Null ) break;
+ if( (pTos->flags & MEM_Blob)==0 ){
+ applyAffinity(pTos, SQLITE_AFF_TEXT, encoding);
+ assert( pTos->flags & MEM_Str );
+ pTos->flags |= MEM_Blob;
+ }
+ pTos->flags &= ~(MEM_Int|MEM_Real|MEM_Str);
+ break;
+}
+
+/* Opcode: ToNumeric * * *
+**
+** Force the value on the top of the stack to be numeric (either an
+** integer or a floating-point number.)
+** If the value is text or blob, try to convert it to a number using the
+** equivalent of atoi() or atof() and store 0 if no such conversion
+** is possible.
+**
+** A NULL value is not changed by this routine. It remains NULL.
+*/
+case OP_ToNumeric: { /* same as TK_TO_NUMERIC, no-push */
+ assert( pTos>=p->aStack );
+ if( (pTos->flags & MEM_Null)==0 ){
+ sqlite3VdbeMemNumerify(pTos);
+ }
+ break;
+}
+#endif /* SQLITE_OMIT_CAST */
+
+/* Opcode: ToInt * * *
+**
+** Force the value on the top of the stack to be an integer. If
+** the value is currently a real number, drop its fractional part.
+** If the value is text or blob, try to convert it to an integer using the
+** equivalent of atoi() and store 0 if no such conversion is possible.
+**
+** A NULL value is not changed by this routine. It remains NULL.
+*/
+case OP_ToInt: { /* same as TK_TO_INT, no-push */
+ assert( pTos>=p->aStack );
+ if( (pTos->flags & MEM_Null)==0 ){
+ sqlite3VdbeMemIntegerify(pTos);
+ }
+ break;
+}
+
+#ifndef SQLITE_OMIT_CAST
+/* Opcode: ToReal * * *
+**
+** Force the value on the top of the stack to be a floating point number.
+** If the value is currently an integer, convert it.
+** If the value is text or blob, try to convert it to a real number using the
+** equivalent of atof() and store 0.0 if no such conversion is possible.
+**
+** A NULL value is not changed by this routine. It remains NULL.
+*/
+case OP_ToReal: { /* same as TK_TO_REAL, no-push */
+ assert( pTos>=p->aStack );
+ if( (pTos->flags & MEM_Null)==0 ){
+ sqlite3VdbeMemRealify(pTos);
+ }
+ break;
+}
+#endif /* SQLITE_OMIT_CAST */
+
+/* Opcode: Eq P1 P2 P3
+**
+** Pop the top two elements from the stack. If they are equal, then
+** jump to instruction P2. Otherwise, continue to the next instruction.
+**
+** If the 0x100 bit of P1 is true and either operand is NULL then take the
+** jump. If the 0x100 bit of P1 is clear then fall thru if either operand
+** is NULL.
+**
+** If the 0x200 bit of P1 is set and either operand is NULL then
+** both operands are converted to integers prior to comparison.
+** NULL operands are converted to zero and non-NULL operands are
+** converted to 1. Thus, for example, with 0x200 set, NULL==NULL is true
+** whereas it would normally be NULL. Similarly, NULL==123 is false when
+** 0x200 is set but is NULL when the 0x200 bit of P1 is clear.
+**
+** The least significant byte of P1 (mask 0xff) must be an affinity character -
+** SQLITE_AFF_TEXT, SQLITE_AFF_INTEGER, and so forth. An attempt is made
+** to coerce both values
+** according to the affinity before the comparison is made. If the byte is
+** 0x00, then numeric affinity is used.
+**
+** Once any conversions have taken place, and neither value is NULL,
+** the values are compared. If both values are blobs, or both are text,
+** then memcmp() is used to determine the results of the comparison. If
+** both values are numeric, then a numeric comparison is used. If the
+** two values are of different types, then they are unequal.
+**
+** If P2 is zero, do not jump. Instead, push an integer 1 onto the
+** stack if the jump would have been taken, or a 0 if not. Push a
+** NULL if either operand was NULL.
+**
+** If P3 is not NULL it is a pointer to a collating sequence (a CollSeq
+** structure) that defines how to compare text.
+*/
+/* Opcode: Ne P1 P2 P3
+**
+** This works just like the Eq opcode except that the jump is taken if
+** the operands from the stack are not equal. See the Eq opcode for
+** additional information.
+*/
+/* Opcode: Lt P1 P2 P3
+**
+** This works just like the Eq opcode except that the jump is taken if
+** the 2nd element down on the stack is less than the top of the stack.
+** See the Eq opcode for additional information.
+*/
+/* Opcode: Le P1 P2 P3
+**
+** This works just like the Eq opcode except that the jump is taken if
+** the 2nd element down on the stack is less than or equal to the
+** top of the stack. See the Eq opcode for additional information.
+*/
+/* Opcode: Gt P1 P2 P3
+**
+** This works just like the Eq opcode except that the jump is taken if
+** the 2nd element down on the stack is greater than the top of the stack.
+** See the Eq opcode for additional information.
+*/
+/* Opcode: Ge P1 P2 P3
+**
+** This works just like the Eq opcode except that the jump is taken if
+** the 2nd element down on the stack is greater than or equal to the
+** top of the stack. See the Eq opcode for additional information.
+*/
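+/* As an example of the P1 encoding described above: a P1 value of
+** (0x100 | SQLITE_AFF_TEXT) asks for both operands to be coerced using
+** TEXT affinity and for the jump to be taken whenever either operand
+** is NULL.
+*/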
+case OP_Eq: /* same as TK_EQ, no-push */
+case OP_Ne: /* same as TK_NE, no-push */
+case OP_Lt: /* same as TK_LT, no-push */
+case OP_Le: /* same as TK_LE, no-push */
+case OP_Gt: /* same as TK_GT, no-push */
+case OP_Ge: { /* same as TK_GE, no-push */
+ Mem *pNos;
+ int flags;
+ int res;
+ char affinity;
+
+ pNos = &pTos[-1];
+ flags = pTos->flags|pNos->flags;
+
+  /* If either value is NULL and P2 is not zero, take the jump if the 0x100
+  ** bit of P1 is set. If P2 is zero, then push a NULL onto
+ ** the stack.
+ */
+ if( flags&MEM_Null ){
+ if( (pOp->p1 & 0x200)!=0 ){
+ /* The 0x200 bit of P1 means, roughly "do not treat NULL as the
+ ** magic SQL value it normally is - treat it as if it were another
+ ** integer".
+ **
+ ** With 0x200 set, if either operand is NULL then both operands
+ ** are converted to integers prior to being passed down into the
+ ** normal comparison logic below. NULL operands are converted to
+ ** zero and non-NULL operands are converted to 1. Thus, for example,
+ ** with 0x200 set, NULL==NULL is true whereas it would normally
+ ** be NULL. Similarly, NULL!=123 is true.
+ */
+ sqlite3VdbeMemSetInt64(pTos, (pTos->flags & MEM_Null)==0);
+ sqlite3VdbeMemSetInt64(pNos, (pNos->flags & MEM_Null)==0);
+ }else{
+ /* If the 0x200 bit of P1 is clear and either operand is NULL then
+ ** the result is always NULL. The jump is taken if the 0x100 bit
+ ** of P1 is set.
+ */
+ popStack(&pTos, 2);
+ if( pOp->p2 ){
+ if( pOp->p1 & 0x100 ){
+ pc = pOp->p2-1;
+ }
+ }else{
+ pTos++;
+ pTos->flags = MEM_Null;
+ }
+ break;
+ }
+ }
+
+ affinity = pOp->p1 & 0xFF;
+ if( affinity ){
+ applyAffinity(pNos, affinity, encoding);
+ applyAffinity(pTos, affinity, encoding);
+ }
+
+ assert( pOp->p3type==P3_COLLSEQ || pOp->p3==0 );
+ res = sqlite3MemCompare(pNos, pTos, (CollSeq*)pOp->p3);
+ switch( pOp->opcode ){
+ case OP_Eq: res = res==0; break;
+ case OP_Ne: res = res!=0; break;
+ case OP_Lt: res = res<0; break;
+ case OP_Le: res = res<=0; break;
+ case OP_Gt: res = res>0; break;
+ default: res = res>=0; break;
+ }
+
+ popStack(&pTos, 2);
+ if( pOp->p2 ){
+ if( res ){
+ pc = pOp->p2-1;
+ }
+ }else{
+ pTos++;
+ pTos->flags = MEM_Int;
+ pTos->i = res;
+ }
+ break;
+}
+
+/* Opcode: And * * *
+**
+** Pop two values off the stack. Take the logical AND of the
+** two values and push the resulting boolean value back onto the
+** stack.
+*/
+/* Opcode: Or * * *
+**
+** Pop two values off the stack. Take the logical OR of the
+** two values and push the resulting boolean value back onto the
+** stack.
+*/
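+/* These opcodes implement SQL's three-valued logic.  For example,
+** NULL AND 0 yields 0 (false), NULL AND 1 yields NULL, NULL OR 1 yields
+** 1 (true), and NULL OR 0 yields NULL.  The and_logic[] and or_logic[]
+** lookup tables in the implementation below encode exactly these cases.
+*/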
+case OP_And: /* same as TK_AND, no-push */
+case OP_Or: { /* same as TK_OR, no-push */
+ Mem *pNos = &pTos[-1];
+ int v1, v2; /* 0==TRUE, 1==FALSE, 2==UNKNOWN or NULL */
+
+ assert( pNos>=p->aStack );
+ if( pTos->flags & MEM_Null ){
+ v1 = 2;
+ }else{
+ sqlite3VdbeMemIntegerify(pTos);
+ v1 = pTos->i==0;
+ }
+ if( pNos->flags & MEM_Null ){
+ v2 = 2;
+ }else{
+ sqlite3VdbeMemIntegerify(pNos);
+ v2 = pNos->i==0;
+ }
+ if( pOp->opcode==OP_And ){
+ static const unsigned char and_logic[] = { 0, 1, 2, 1, 1, 1, 2, 1, 2 };
+ v1 = and_logic[v1*3+v2];
+ }else{
+ static const unsigned char or_logic[] = { 0, 0, 0, 0, 1, 2, 0, 2, 2 };
+ v1 = or_logic[v1*3+v2];
+ }
+ popStack(&pTos, 2);
+ pTos++;
+ if( v1==2 ){
+ pTos->flags = MEM_Null;
+ }else{
+ pTos->i = v1==0;
+ pTos->flags = MEM_Int;
+ }
+ break;
+}
+
+/* Opcode: Negative * * *
+**
+** Treat the top of the stack as a numeric quantity. Replace it
+** with its additive inverse. If the top of the stack is NULL
+** its value is unchanged.
+*/
+/* Opcode: AbsValue * * *
+**
+** Treat the top of the stack as a numeric quantity. Replace it
+** with its absolute value. If the top of the stack is NULL
+** its value is unchanged.
+*/
+case OP_Negative: /* same as TK_UMINUS, no-push */
+case OP_AbsValue: {
+ assert( pTos>=p->aStack );
+ if( pTos->flags & MEM_Real ){
+ neg_abs_real_case:
+ Release(pTos);
+ if( pOp->opcode==OP_Negative || pTos->r<0.0 ){
+ pTos->r = -pTos->r;
+ }
+ pTos->flags = MEM_Real;
+ }else if( pTos->flags & MEM_Int ){
+ Release(pTos);
+ if( pOp->opcode==OP_Negative || pTos->i<0 ){
+ pTos->i = -pTos->i;
+ }
+ pTos->flags = MEM_Int;
+ }else if( pTos->flags & MEM_Null ){
+ /* Do nothing */
+ }else{
+ sqlite3VdbeMemNumerify(pTos);
+ goto neg_abs_real_case;
+ }
+ break;
+}
+
+/* Opcode: Not * * *
+**
+** Interpret the top of the stack as a boolean value. Replace it
+** with its complement. If the top of the stack is NULL its value
+** is unchanged.
+*/
+case OP_Not: { /* same as TK_NOT, no-push */
+ assert( pTos>=p->aStack );
+ if( pTos->flags & MEM_Null ) break; /* Do nothing to NULLs */
+ sqlite3VdbeMemIntegerify(pTos);
+ assert( (pTos->flags & MEM_Dyn)==0 );
+ pTos->i = !pTos->i;
+ pTos->flags = MEM_Int;
+ break;
+}
+
+/* Opcode: BitNot * * *
+**
+** Interpret the top of the stack as an integer value. Replace it
+** with its ones-complement. If the top of the stack is NULL its
+** value is unchanged.
+*/
+case OP_BitNot: { /* same as TK_BITNOT, no-push */
+ assert( pTos>=p->aStack );
+ if( pTos->flags & MEM_Null ) break; /* Do nothing to NULLs */
+ sqlite3VdbeMemIntegerify(pTos);
+ assert( (pTos->flags & MEM_Dyn)==0 );
+ pTos->i = ~pTos->i;
+ pTos->flags = MEM_Int;
+ break;
+}
+
+/* Opcode: Noop * * *
+**
+** Do nothing. This instruction is often useful as a jump
+** destination.
+*/
+/*
+** The magic Explain opcode is only inserted when explain==2 (which
+** is to say when the EXPLAIN QUERY PLAN syntax is used.)
+** This opcode records information from the optimizer. It is the
+** same as a no-op. This opcode never appears in a real VM program.
+*/
+case OP_Explain:
+case OP_Noop: { /* no-push */
+ break;
+}
+
+/* Opcode: If P1 P2 *
+**
+** Pop a single boolean from the stack. If the boolean popped is
+** true, then jump to p2. Otherwise continue to the next instruction.
+** An integer is false if zero and true otherwise. A string is
+** false if it has zero length and true otherwise.
+**
+** If the value popped off the stack is NULL, then take the jump if P1
+** is true and fall through if P1 is false.
+*/
+/* Opcode: IfNot P1 P2 *
+**
+** Pop a single boolean from the stack. If the boolean popped is
+** false, then jump to p2. Otherwise continue to the next instruction.
+** An integer is false if zero and true otherwise. A string is
+** false if it has zero length and true otherwise.
+**
+** If the value popped off the stack is NULL, then take the jump if P1
+** is true and fall through if P1 is false.
+*/
+case OP_If: /* no-push */
+case OP_IfNot: { /* no-push */
+ int c;
+ assert( pTos>=p->aStack );
+ if( pTos->flags & MEM_Null ){
+ c = pOp->p1;
+ }else{
+#ifdef SQLITE_OMIT_FLOATING_POINT
+ c = sqlite3VdbeIntValue(pTos);
+#else
+ c = sqlite3VdbeRealValue(pTos)!=0.0;
+#endif
+ if( pOp->opcode==OP_IfNot ) c = !c;
+ }
+ Release(pTos);
+ pTos--;
+ if( c ) pc = pOp->p2-1;
+ break;
+}
+
+/* Opcode: IsNull P1 P2 *
+**
+** If any of the top abs(P1) values on the stack are NULL, then jump
+** to P2. Pop the stack P1 times if P1>0. If P1<0 leave the stack
+** unchanged.
+*/
+case OP_IsNull: { /* same as TK_ISNULL, no-push */
+ int i, cnt;
+ Mem *pTerm;
+ cnt = pOp->p1;
+ if( cnt<0 ) cnt = -cnt;
+ pTerm = &pTos[1-cnt];
+ assert( pTerm>=p->aStack );
+ for(i=0; i<cnt; i++, pTerm++){
+ if( pTerm->flags & MEM_Null ){
+ pc = pOp->p2-1;
+ break;
+ }
+ }
+ if( pOp->p1>0 ) popStack(&pTos, cnt);
+ break;
+}
+
+/* Opcode: NotNull P1 P2 *
+**
+** Jump to P2 if the top abs(P1) values on the stack are all not NULL. Pop the
+** stack P1 times if P1 is greater than zero. If P1 is less than
+** zero then leave the stack unchanged.
+*/
+case OP_NotNull: { /* same as TK_NOTNULL, no-push */
+ int i, cnt;
+ cnt = pOp->p1;
+ if( cnt<0 ) cnt = -cnt;
+ assert( &pTos[1-cnt] >= p->aStack );
+ for(i=0; i<cnt && (pTos[1+i-cnt].flags & MEM_Null)==0; i++){}
+ if( i>=cnt ) pc = pOp->p2-1;
+ if( pOp->p1>0 ) popStack(&pTos, cnt);
+ break;
+}
+
+/* Opcode: SetNumColumns P1 P2 *
+**
+** Before the OP_Column opcode can be executed on a cursor, this
+** opcode must be called to set the number of fields in the table.
+**
+** This opcode sets the number of columns for cursor P1 to P2.
+**
+** If OP_KeyAsData is to be applied to cursor P1, it must be executed
+** before this op-code.
+*/
+case OP_SetNumColumns: { /* no-push */
+ Cursor *pC;
+ assert( (pOp->p1)<p->nCursor );
+ assert( p->apCsr[pOp->p1]!=0 );
+ pC = p->apCsr[pOp->p1];
+ pC->nField = pOp->p2;
+ break;
+}
+
+/* Opcode: Column P1 P2 P3
+**
+** Interpret the data that cursor P1 points to as a structure built using
+** the MakeRecord instruction. (See the MakeRecord opcode for additional
+** information about the format of the data.) Push onto the stack the value
+** of the P2-th column contained in the data. If there are less than (P2+1)
+** values in the record, push a NULL onto the stack.
+**
+** If the KeyAsData opcode has previously executed on this cursor, then the
+** field might be extracted from the key rather than the data.
+**
+** If the record contains fewer than (P2+1) fields, then push a NULL. Or
+** if P3 is of type P3_MEM, then push the P3 value. The P3 value will
+** be default value for a column that has been added using the ALTER TABLE
+** ADD COLUMN command. If P3 is an ordinary string, just push a NULL.
+** When P3 is a string it is really just a comment describing the value
+** to be pushed, not a default value.
+*/
+case OP_Column: {
+ u32 payloadSize; /* Number of bytes in the record */
+ int p1 = pOp->p1; /* P1 value of the opcode */
+ int p2 = pOp->p2; /* column number to retrieve */
+ Cursor *pC = 0; /* The VDBE cursor */
+ char *zRec; /* Pointer to complete record-data */
+ BtCursor *pCrsr; /* The BTree cursor */
+ u32 *aType; /* aType[i] holds the numeric type of the i-th column */
+ u32 *aOffset; /* aOffset[i] is offset to start of data for i-th column */
+ u32 nField; /* number of fields in the record */
+ int len; /* The length of the serialized data for the column */
+ int i; /* Loop counter */
+ char *zData; /* Part of the record being decoded */
+ Mem sMem; /* For storing the record being decoded */
+
+ sMem.flags = 0;
+ assert( p1<p->nCursor );
+ pTos++;
+ pTos->flags = MEM_Null;
+
+ /* This block sets the variable payloadSize to be the total number of
+ ** bytes in the record.
+ **
+ ** zRec is set to be the complete text of the record if it is available.
+  ** The complete record text is always available for pseudo-tables.
+ ** If the record is stored in a cursor, the complete record text
+ ** might be available in the pC->aRow cache. Or it might not be.
+ ** If the data is unavailable, zRec is set to NULL.
+ **
+ ** We also compute the number of columns in the record. For cursors,
+ ** the number of columns is stored in the Cursor.nField element. For
+ ** records on the stack, the next entry down on the stack is an integer
+  ** which is the number of fields.
+ */
+ pC = p->apCsr[p1];
+ assert( pC!=0 );
+ if( pC->pCursor!=0 ){
+ /* The record is stored in a B-Tree */
+ rc = sqlite3VdbeCursorMoveto(pC);
+ if( rc ) goto abort_due_to_error;
+ zRec = 0;
+ pCrsr = pC->pCursor;
+ if( pC->nullRow ){
+ payloadSize = 0;
+ }else if( pC->cacheStatus==p->cacheCtr ){
+ payloadSize = pC->payloadSize;
+ zRec = (char*)pC->aRow;
+ }else if( pC->isIndex ){
+ i64 payloadSize64;
+ sqlite3BtreeKeySize(pCrsr, &payloadSize64);
+ payloadSize = payloadSize64;
+ }else{
+ sqlite3BtreeDataSize(pCrsr, &payloadSize);
+ }
+ nField = pC->nField;
+ }else if( pC->pseudoTable ){
+ /* The record is the sole entry of a pseudo-table */
+ payloadSize = pC->nData;
+ zRec = pC->pData;
+ pC->cacheStatus = CACHE_STALE;
+ assert( payloadSize==0 || zRec!=0 );
+ nField = pC->nField;
+ pCrsr = 0;
+ }else{
+ zRec = 0;
+ payloadSize = 0;
+ pCrsr = 0;
+ nField = 0;
+ }
+
+ /* If payloadSize is 0, then just push a NULL onto the stack. */
+ if( payloadSize==0 ){
+ assert( pTos->flags==MEM_Null );
+ break;
+ }
+
+ assert( p2<nField );
+
+ /* Read and parse the table header. Store the results of the parse
+ ** into the record header cache fields of the cursor.
+ */
+ if( pC && pC->cacheStatus==p->cacheCtr ){
+ aType = pC->aType;
+ aOffset = pC->aOffset;
+ }else{
+ u8 *zIdx; /* Index into header */
+ u8 *zEndHdr; /* Pointer to first byte after the header */
+ u32 offset; /* Offset into the data */
+ int szHdrSz; /* Size of the header size field at start of record */
+ int avail; /* Number of bytes of available data */
+
+ aType = pC->aType;
+ if( aType==0 ){
+ pC->aType = aType = sqliteMallocRaw( 2*nField*sizeof(aType) );
+ }
+ if( aType==0 ){
+ goto no_mem;
+ }
+ pC->aOffset = aOffset = &aType[nField];
+ pC->payloadSize = payloadSize;
+ pC->cacheStatus = p->cacheCtr;
+
+ /* Figure out how many bytes are in the header */
+ if( zRec ){
+ zData = zRec;
+ }else{
+ if( pC->isIndex ){
+ zData = (char*)sqlite3BtreeKeyFetch(pCrsr, &avail);
+ }else{
+ zData = (char*)sqlite3BtreeDataFetch(pCrsr, &avail);
+ }
+ /* If KeyFetch()/DataFetch() managed to get the entire payload,
+ ** save the payload in the pC->aRow cache. That will save us from
+ ** having to make additional calls to fetch the content portion of
+ ** the record.
+ */
+ if( avail>=payloadSize ){
+ zRec = zData;
+ pC->aRow = (u8*)zData;
+ }else{
+ pC->aRow = 0;
+ }
+ }
+ assert( zRec!=0 || avail>=payloadSize || avail>=9 );
+ szHdrSz = GetVarint((u8*)zData, offset);
+
+ /* The KeyFetch() or DataFetch() above are fast and will get the entire
+ ** record header in most cases. But they will fail to get the complete
+ ** record header if the record header does not fit on a single page
+ ** in the B-Tree. When that happens, use sqlite3VdbeMemFromBtree() to
+ ** acquire the complete header text.
+ */
+ if( !zRec && avail<offset ){
+ rc = sqlite3VdbeMemFromBtree(pCrsr, 0, offset, pC->isIndex, &sMem);
+ if( rc!=SQLITE_OK ){
+ goto op_column_out;
+ }
+ zData = sMem.z;
+ }
+ zEndHdr = (u8 *)&zData[offset];
+ zIdx = (u8 *)&zData[szHdrSz];
+
+ /* Scan the header and use it to fill in the aType[] and aOffset[]
+ ** arrays. aType[i] will contain the type integer for the i-th
+ ** column and aOffset[i] will contain the offset from the beginning
+ ** of the record to the start of the data for the i-th column
+ */
+ for(i=0; i<nField; i++){
+ if( zIdx<zEndHdr ){
+ aOffset[i] = offset;
+ zIdx += GetVarint(zIdx, aType[i]);
+ offset += sqlite3VdbeSerialTypeLen(aType[i]);
+ }else{
+        /* If i is less than nField, then there are fewer fields in this
+ ** record than SetNumColumns indicated there are columns in the
+ ** table. Set the offset for any extra columns not present in
+ ** the record to 0. This tells code below to push a NULL onto the
+ ** stack instead of deserializing a value from the record.
+ */
+ aOffset[i] = 0;
+ }
+ }
+ Release(&sMem);
+ sMem.flags = MEM_Null;
+
+ /* If we have read more header data than was contained in the header,
+ ** or if the end of the last field appears to be past the end of the
+ ** record, then we must be dealing with a corrupt database.
+ */
+ if( zIdx>zEndHdr || offset>payloadSize ){
+ rc = SQLITE_CORRUPT_BKPT;
+ goto op_column_out;
+ }
+ }
+
+ /* Get the column information. If aOffset[p2] is non-zero, then
+ ** deserialize the value from the record. If aOffset[p2] is zero,
+ ** then there are not enough fields in the record to satisfy the
+  ** request. In this case, set the value to NULL or to P3 if P3 is
+ ** a pointer to a Mem object.
+ */
+ if( aOffset[p2] ){
+ assert( rc==SQLITE_OK );
+ if( zRec ){
+ zData = &zRec[aOffset[p2]];
+ }else{
+ len = sqlite3VdbeSerialTypeLen(aType[p2]);
+ rc = sqlite3VdbeMemFromBtree(pCrsr, aOffset[p2], len, pC->isIndex,&sMem);
+ if( rc!=SQLITE_OK ){
+ goto op_column_out;
+ }
+ zData = sMem.z;
+ }
+ sqlite3VdbeSerialGet((u8*)zData, aType[p2], pTos);
+ pTos->enc = encoding;
+ }else{
+ if( pOp->p3type==P3_MEM ){
+ sqlite3VdbeMemShallowCopy(pTos, (Mem *)(pOp->p3), MEM_Static);
+ }else{
+ pTos->flags = MEM_Null;
+ }
+ }
+
+ /* If we dynamically allocated space to hold the data (in the
+ ** sqlite3VdbeMemFromBtree() call above) then transfer control of that
+ ** dynamically allocated space over to the pTos structure.
+ ** This prevents a memory copy.
+ */
+ if( (sMem.flags & MEM_Dyn)!=0 ){
+ assert( pTos->flags & MEM_Ephem );
+ assert( pTos->flags & (MEM_Str|MEM_Blob) );
+ assert( pTos->z==sMem.z );
+ assert( sMem.flags & MEM_Term );
+ pTos->flags &= ~MEM_Ephem;
+ pTos->flags |= MEM_Dyn|MEM_Term;
+ }
+
+ /* pTos->z might be pointing to sMem.zShort[]. Fix that so that we
+ ** can abandon sMem */
+ rc = sqlite3VdbeMemMakeWriteable(pTos);
+
+op_column_out:
+ break;
+}
+
+/* Opcode: MakeRecord P1 P2 P3
+**
+** Convert the top abs(P1) entries of the stack into a single entry
+** suitable for use as a data record in a database table or as a key
+** in an index. The details of the format are irrelevant as long as
+** the OP_Column opcode can decode the record later and as long as the
+** sqlite3VdbeRecordCompare function will correctly compare two encoded
+** records. Refer to source code comments for the details of the record
+** format.
+**
+** The original stack entries are popped from the stack if P1>0 but
+** remain on the stack if P1<0.
+**
+** If P2 is not zero and one or more of the entries are NULL, then jump
+** to the address given by P2. This feature can be used to skip a
+** uniqueness test on indices.
+**
+** P3 may be a string that is P1 characters long. The nth character of the
+** string indicates the column affinity that should be used for the nth
+** field of the index key (i.e. the first character of P3 corresponds to the
+** lowest element on the stack).
+**
+** The mapping from character to affinity is given by the SQLITE_AFF_
+** macros defined in sqliteInt.h.
+**
+** If P3 is NULL then all index fields have the affinity NONE.
+**
+** See also OP_MakeIdxRec
+*/
+/* Opcode: MakeIdxRec P1 P2 P3
+**
+** This opcode works just like OP_MakeRecord except that it reads an extra
+** integer from the stack (thus reading a total of abs(P1+1) entries)
+** and appends that extra integer to the end of the record as a varint.
+** This results in an index key.
+*/
+case OP_MakeIdxRec:
+case OP_MakeRecord: {
+ /* Assuming the record contains N fields, the record format looks
+ ** like this:
+ **
+ ** ------------------------------------------------------------------------
+ ** | hdr-size | type 0 | type 1 | ... | type N-1 | data0 | ... | data N-1 |
+ ** ------------------------------------------------------------------------
+ **
+ ** Data(0) is taken from the lowest element of the stack and data(N-1) is
+ ** the top of the stack.
+ **
+ ** Each type field is a varint representing the serial type of the
+ ** corresponding data element (see sqlite3VdbeSerialType()). The
+ ** hdr-size field is also a varint which is the offset from the beginning
+ ** of the record to data0.
+ */
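+  /* As a concrete illustration (serial types as computed by
+  ** sqlite3VdbeSerialType()): a three-field record holding NULL, the
+  ** integer 2 and the UTF-8 text "hi" is encoded as the seven bytes
+  **
+  **    04 00 01 11 02 68 69
+  **
+  ** i.e. a header of size 4 (hdr-size varint 0x04, then serial types
+  ** 0 for NULL, 1 for an 8-bit integer, and 17 for 2 bytes of text),
+  ** followed by the data bytes 0x02, 'h', 'i'.
+  */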
+ unsigned char *zNewRecord;
+ unsigned char *zCsr;
+ Mem *pRec;
+ Mem *pRowid = 0;
+ int nData = 0; /* Number of bytes of data space */
+ int nHdr = 0; /* Number of bytes of header space */
+ int nByte = 0; /* Space required for this record */
+ int nVarint; /* Number of bytes in a varint */
+ u32 serial_type; /* Type field */
+ int containsNull = 0; /* True if any of the data fields are NULL */
+ char zTemp[NBFS]; /* Space to hold small records */
+ Mem *pData0;
+
+ int leaveOnStack; /* If true, leave the entries on the stack */
+ int nField; /* Number of fields in the record */
+ int jumpIfNull; /* Jump here if non-zero and any entries are NULL. */
+ int addRowid; /* True to append a rowid column at the end */
+ char *zAffinity; /* The affinity string for the record */
+ int file_format; /* File format to use for encoding */
+
+ leaveOnStack = ((pOp->p1<0)?1:0);
+ nField = pOp->p1 * (leaveOnStack?-1:1);
+ jumpIfNull = pOp->p2;
+ addRowid = pOp->opcode==OP_MakeIdxRec;
+ zAffinity = pOp->p3;
+
+ pData0 = &pTos[1-nField];
+ assert( pData0>=p->aStack );
+ containsNull = 0;
+ file_format = p->minWriteFileFormat;
+
+ /* Loop through the elements that will make up the record to figure
+ ** out how much space is required for the new record.
+ */
+ for(pRec=pData0; pRec<=pTos; pRec++){
+ if( zAffinity ){
+ applyAffinity(pRec, zAffinity[pRec-pData0], encoding);
+ }
+ if( pRec->flags&MEM_Null ){
+ containsNull = 1;
+ }
+ serial_type = sqlite3VdbeSerialType(pRec, file_format);
+ nData += sqlite3VdbeSerialTypeLen(serial_type);
+ nHdr += sqlite3VarintLen(serial_type);
+ }
+
+ /* If we have to append a varint rowid to this record, set 'rowid'
+ ** to the value of the rowid and increase nByte by the amount of space
+  ** required to store it and the 0x00 separator byte.
+ */
+ if( addRowid ){
+ pRowid = &pTos[0-nField];
+ assert( pRowid>=p->aStack );
+ sqlite3VdbeMemIntegerify(pRowid);
+ serial_type = sqlite3VdbeSerialType(pRowid, 0);
+ nData += sqlite3VdbeSerialTypeLen(serial_type);
+ nHdr += sqlite3VarintLen(serial_type);
+ }
+
+ /* Add the initial header varint and total the size */
+ nHdr += nVarint = sqlite3VarintLen(nHdr);
+ if( nVarint<sqlite3VarintLen(nHdr) ){
+ nHdr++;
+ }
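+  /* Note that adding the header-size varint can itself push nHdr across a
+  ** varint length boundary (for example from 127 to 128 bytes), in which
+  ** case the test above reserves one extra byte for the header-size field.
+  */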
+ nByte = nHdr+nData;
+
+ /* Allocate space for the new record. */
+ if( nByte>sizeof(zTemp) ){
+ zNewRecord = sqliteMallocRaw(nByte);
+ if( !zNewRecord ){
+ goto no_mem;
+ }
+ }else{
+ zNewRecord = (u8*)zTemp;
+ }
+
+ /* Write the record */
+ zCsr = zNewRecord;
+ zCsr += sqlite3PutVarint(zCsr, nHdr);
+ for(pRec=pData0; pRec<=pTos; pRec++){
+ serial_type = sqlite3VdbeSerialType(pRec, file_format);
+ zCsr += sqlite3PutVarint(zCsr, serial_type); /* serial type */
+ }
+ if( addRowid ){
+ zCsr += sqlite3PutVarint(zCsr, sqlite3VdbeSerialType(pRowid, 0));
+ }
+ for(pRec=pData0; pRec<=pTos; pRec++){
+ zCsr += sqlite3VdbeSerialPut(zCsr, pRec, file_format); /* serial data */
+ }
+ if( addRowid ){
+ zCsr += sqlite3VdbeSerialPut(zCsr, pRowid, 0);
+ }
+ assert( zCsr==(zNewRecord+nByte) );
+
+ /* Pop entries off the stack if required. Push the new record on. */
+ if( !leaveOnStack ){
+ popStack(&pTos, nField+addRowid);
+ }
+ pTos++;
+ pTos->n = nByte;
+ if( nByte<=sizeof(zTemp) ){
+ assert( zNewRecord==(unsigned char *)zTemp );
+ pTos->z = pTos->zShort;
+ memcpy(pTos->zShort, zTemp, nByte);
+ pTos->flags = MEM_Blob | MEM_Short;
+ }else{
+ assert( zNewRecord!=(unsigned char *)zTemp );
+ pTos->z = (char*)zNewRecord;
+ pTos->flags = MEM_Blob | MEM_Dyn;
+ pTos->xDel = 0;
+ }
+ pTos->enc = SQLITE_UTF8; /* In case the blob is ever converted to text */
+
+ /* If a NULL was encountered and jumpIfNull is non-zero, take the jump. */
+ if( jumpIfNull && containsNull ){
+ pc = jumpIfNull - 1;
+ }
+ break;
+}
+
+/* Opcode: Statement P1 * *
+**
+** Begin an individual statement transaction which is part of a larger
+** BEGIN..COMMIT transaction. This is needed so that the statement
+** can be rolled back after an error without having to roll back the
+** entire transaction. The statement transaction will automatically
+** commit when the VDBE halts.
+**
+** The statement is begun on the database file with index P1. The main
+** database file has an index of 0 and the file used for temporary tables
+** has an index of 1.
+*/
+case OP_Statement: { /* no-push */
+ int i = pOp->p1;
+ Btree *pBt;
+ if( i>=0 && i<db->nDb && (pBt = db->aDb[i].pBt)!=0 && !(db->autoCommit) ){
+ assert( sqlite3BtreeIsInTrans(pBt) );
+ if( !sqlite3BtreeIsInStmt(pBt) ){
+ rc = sqlite3BtreeBeginStmt(pBt);
+ }
+ }
+ break;
+}
+
+/* Opcode: AutoCommit P1 P2 *
+**
+** Set the database auto-commit flag to P1 (1 or 0). If P2 is true, roll
+** back any currently active btree transactions. If there are any active
+** VMs (apart from this one), then the COMMIT or ROLLBACK statement fails.
+**
+** This instruction causes the VM to halt.
+*/
+case OP_AutoCommit: { /* no-push */
+ u8 i = pOp->p1;
+ u8 rollback = pOp->p2;
+
+ assert( i==1 || i==0 );
+ assert( i==1 || rollback==0 );
+
+ assert( db->activeVdbeCnt>0 ); /* At least this one VM is active */
+
+ if( db->activeVdbeCnt>1 && i && !db->autoCommit ){
+ /* If this instruction implements a COMMIT or ROLLBACK, other VMs are
+ ** still running, and a transaction is active, return an error indicating
+ ** that the other VMs must complete first.
+ */
+ sqlite3SetString(&p->zErrMsg, "cannot ", rollback?"rollback":"commit",
+ " transaction - SQL statements in progress", (char*)0);
+ rc = SQLITE_ERROR;
+ }else if( i!=db->autoCommit ){
+ if( pOp->p2 ){
+ assert( i==1 );
+ sqlite3RollbackAll(db);
+ db->autoCommit = 1;
+ }else{
+ db->autoCommit = i;
+ if( sqlite3VdbeHalt(p)==SQLITE_BUSY ){
+ p->pTos = pTos;
+ p->pc = pc;
+ db->autoCommit = 1-i;
+ p->rc = SQLITE_BUSY;
+ return SQLITE_BUSY;
+ }
+ }
+ return SQLITE_DONE;
+ }else{
+ sqlite3SetString(&p->zErrMsg,
+ (!i)?"cannot start a transaction within a transaction":(
+ (rollback)?"cannot rollback - no transaction is active":
+ "cannot commit - no transaction is active"), (char*)0);
+
+ rc = SQLITE_ERROR;
+ }
+ break;
+}
+
+/* Opcode: Transaction P1 P2 *
+**
+** Begin a transaction. The transaction ends when a Commit or Rollback
+** opcode is encountered. Depending on the ON CONFLICT setting, the
+** transaction might also be rolled back if an error is encountered.
+**
+** P1 is the index of the database file on which the transaction is
+** started. Index 0 is the main database file and index 1 is the
+** file used for temporary tables.
+**
+** If P2 is non-zero, then a write-transaction is started. A RESERVED lock is
+** obtained on the database file when a write-transaction is started. No
+** other process can start another write transaction while this transaction is
+** underway. Starting a write transaction also creates a rollback journal. A
+** write transaction must be started before any changes can be made to the
+** database. If P2 is 2 or greater then an EXCLUSIVE lock is also obtained
+** on the file.
+**
+** If P2 is zero, then a read-lock is obtained on the database file.
+*/
+case OP_Transaction: { /* no-push */
+ int i = pOp->p1;
+ Btree *pBt;
+
+ assert( i>=0 && i<db->nDb );
+ pBt = db->aDb[i].pBt;
+
+ if( pBt ){
+ rc = sqlite3BtreeBeginTrans(pBt, pOp->p2);
+ if( rc==SQLITE_BUSY ){
+ p->pc = pc;
+ p->rc = SQLITE_BUSY;
+ p->pTos = pTos;
+ return SQLITE_BUSY;
+ }
+ if( rc!=SQLITE_OK && rc!=SQLITE_READONLY /* && rc!=SQLITE_BUSY */ ){
+ goto abort_due_to_error;
+ }
+ }
+ break;
+}
+
+/* Opcode: ReadCookie P1 P2 *
+**
+** Read cookie number P2 from database P1 and push it onto the stack.
+** P2==0 is the schema version. P2==1 is the database format.
+** P2==2 is the recommended pager cache size, and so forth. P1==0 is
+** the main database file and P1==1 is the database file used to store
+** temporary tables.
+**
+** There must be a read-lock on the database (either a transaction
+** must be started or there must be an open cursor) before
+** executing this instruction.
+*/
+case OP_ReadCookie: {
+ int iMeta;
+ assert( pOp->p2<SQLITE_N_BTREE_META );
+ assert( pOp->p1>=0 && pOp->p1<db->nDb );
+ assert( db->aDb[pOp->p1].pBt!=0 );
+ /* The indexing of meta values at the schema layer is off by one from
+ ** the indexing in the btree layer. The btree considers meta[0] to
+ ** be the number of free pages in the database (a read-only value)
+ ** and meta[1] to be the schema cookie. The schema layer considers
+ ** meta[1] to be the schema cookie. So we have to shift the index
+ ** by one in the following statement.
+ */
+ rc = sqlite3BtreeGetMeta(db->aDb[pOp->p1].pBt, 1 + pOp->p2, (u32 *)&iMeta);
+ pTos++;
+ pTos->i = iMeta;
+ pTos->flags = MEM_Int;
+ break;
+}
+
+/* Opcode: SetCookie P1 P2 *
+**
+** Write the top of the stack into cookie number P2 of database P1.
+** P2==0 is the schema version. P2==1 is the database format.
+** P2==2 is the recommended pager cache size, and so forth. P1==0 is
+** the main database file and P1==1 is the database file used to store
+** temporary tables.
+**
+** A transaction must be started before executing this opcode.
+*/
+case OP_SetCookie: { /* no-push */
+ Db *pDb;
+ assert( pOp->p2<SQLITE_N_BTREE_META );
+ assert( pOp->p1>=0 && pOp->p1<db->nDb );
+ pDb = &db->aDb[pOp->p1];
+ assert( pDb->pBt!=0 );
+ assert( pTos>=p->aStack );
+ sqlite3VdbeMemIntegerify(pTos);
+ /* See note about index shifting on OP_ReadCookie */
+ rc = sqlite3BtreeUpdateMeta(pDb->pBt, 1+pOp->p2, (int)pTos->i);
+ if( pOp->p2==0 ){
+ /* When the schema cookie changes, record the new cookie internally */
+ pDb->pSchema->schema_cookie = pTos->i;
+ db->flags |= SQLITE_InternChanges;
+ }else if( pOp->p2==1 ){
+ /* Record changes in the file format */
+ pDb->pSchema->file_format = pTos->i;
+ }
+ assert( (pTos->flags & MEM_Dyn)==0 );
+ pTos--;
+ if( pOp->p1==1 ){
+ /* Invalidate all prepared statements whenever the TEMP database
+ ** schema is changed. Ticket #1644 */
+ sqlite3ExpirePreparedStatements(db);
+ }
+ break;
+}
+
+/* Opcode: VerifyCookie P1 P2 *
+**
+** Check the value of global database parameter number 0 (the
+** schema version) and make sure it is equal to P2.
+** P1 is the database number which is 0 for the main database file
+** and 1 for the file holding temporary tables and some higher number
+** for auxiliary databases.
+**
+** The cookie changes its value whenever the database schema changes.
+** This operation is used to detect when the cookie has changed
+** and that the current process needs to reread the schema.
+**
+** Either a transaction needs to have been started or an OP_Open needs
+** to be executed (to establish a read lock) before this opcode is
+** invoked.
+*/
+case OP_VerifyCookie: { /* no-push */
+ int iMeta;
+ Btree *pBt;
+ assert( pOp->p1>=0 && pOp->p1<db->nDb );
+ pBt = db->aDb[pOp->p1].pBt;
+ if( pBt ){
+ rc = sqlite3BtreeGetMeta(pBt, 1, (u32 *)&iMeta);
+ }else{
+ rc = SQLITE_OK;
+ iMeta = 0;
+ }
+ if( rc==SQLITE_OK && iMeta!=pOp->p2 ){
+ sqlite3SetString(&p->zErrMsg, "database schema has changed", (char*)0);
+ rc = SQLITE_SCHEMA;
+ }
+ break;
+}
+
+/* Opcode: OpenRead P1 P2 P3
+**
+** Open a read-only cursor for the database table whose root page is
+** P2 in a database file. The database file is determined by an
+** integer from the top of the stack. 0 means the main database and
+** 1 means the database used for temporary tables. Give the new
+** cursor an identifier of P1. The P1 values need not be contiguous
+** but all P1 values should be small integers. It is an error for
+** P1 to be negative.
+**
+** If P2==0 then take the root page number from the next entry down on the stack.
+**
+** There will be a read lock on the database whenever there is an
+** open cursor. If the database was unlocked prior to this instruction
+** then a read lock is acquired as part of this instruction. A read
+** lock allows other processes to read the database but prohibits
+** any other process from modifying the database. The read lock is
+** released when all cursors are closed. If this instruction attempts
+** to get a read lock but fails, the script terminates with an
+** SQLITE_BUSY error code.
+**
+** The P3 value is a pointer to a KeyInfo structure that defines the
+** content and collating sequence of indices. P3 is NULL for cursors
+** that are not pointing to indices.
+**
+** See also OpenWrite.
+*/
+/* Opcode: OpenWrite P1 P2 P3
+**
+** Open a read/write cursor named P1 on the table or index whose root
+** page is P2. If P2==0 then take the root page number from the stack.
+**
+** The P3 value is a pointer to a KeyInfo structure that defines the
+** content and collating sequence of indices. P3 is NULL for cursors
+** that are not pointing to indices.
+**
+** This instruction works just like OpenRead except that it opens the cursor
+** in read/write mode. For a given table, there can be one or more read-only
+** cursors or a single read/write cursor but not both.
+**
+** See also OpenRead.
+*/
+case OP_OpenRead: /* no-push */
+case OP_OpenWrite: { /* no-push */
+ int i = pOp->p1;
+ int p2 = pOp->p2;
+ int wrFlag;
+ Btree *pX;
+ int iDb;
+ Cursor *pCur;
+ Db *pDb;
+
+ assert( pTos>=p->aStack );
+ sqlite3VdbeMemIntegerify(pTos);
+ iDb = pTos->i;
+ assert( (pTos->flags & MEM_Dyn)==0 );
+ pTos--;
+ assert( iDb>=0 && iDb<db->nDb );
+ pDb = &db->aDb[iDb];
+ pX = pDb->pBt;
+ assert( pX!=0 );
+ if( pOp->opcode==OP_OpenWrite ){
+ wrFlag = 1;
+ if( pDb->pSchema->file_format < p->minWriteFileFormat ){
+ p->minWriteFileFormat = pDb->pSchema->file_format;
+ }
+ }else{
+ wrFlag = 0;
+ }
+ if( p2<=0 ){
+ assert( pTos>=p->aStack );
+ sqlite3VdbeMemIntegerify(pTos);
+ p2 = pTos->i;
+ assert( (pTos->flags & MEM_Dyn)==0 );
+ pTos--;
+ assert( p2>=2 );
+ }
+ assert( i>=0 );
+ pCur = allocateCursor(p, i, iDb);
+ if( pCur==0 ) goto no_mem;
+ pCur->nullRow = 1;
+ if( pX==0 ) break;
+ /* We always provide a key comparison function. If the table being
+  ** opened is of type INTKEY, the comparison function will be ignored. */
+ rc = sqlite3BtreeCursor(pX, p2, wrFlag,
+ sqlite3VdbeRecordCompare, pOp->p3,
+ &pCur->pCursor);
+ if( pOp->p3type==P3_KEYINFO ){
+ pCur->pKeyInfo = (KeyInfo*)pOp->p3;
+ pCur->pIncrKey = &pCur->pKeyInfo->incrKey;
+ pCur->pKeyInfo->enc = ENC(p->db);
+ }else{
+ pCur->pKeyInfo = 0;
+ pCur->pIncrKey = &pCur->bogusIncrKey;
+ }
+ switch( rc ){
+ case SQLITE_BUSY: {
+ p->pc = pc;
+ p->rc = SQLITE_BUSY;
+ p->pTos = &pTos[1 + (pOp->p2<=0)]; /* Operands must remain on stack */
+ return SQLITE_BUSY;
+ }
+ case SQLITE_OK: {
+ int flags = sqlite3BtreeFlags(pCur->pCursor);
+ /* Sanity checking. Only the lower four bits of the flags byte should
+      ** be used. Bit 3 (mask 0x08) is unpredictable. The lower 3 bits
+ ** (mask 0x07) should be either 5 (intkey+leafdata for tables) or
+ ** 2 (zerodata for indices). If these conditions are not met it can
+ ** only mean that we are dealing with a corrupt database file
+ */
+ if( (flags & 0xf0)!=0 || ((flags & 0x07)!=5 && (flags & 0x07)!=2) ){
+ rc = SQLITE_CORRUPT_BKPT;
+ goto abort_due_to_error;
+ }
+ pCur->isTable = (flags & BTREE_INTKEY)!=0;
+ pCur->isIndex = (flags & BTREE_ZERODATA)!=0;
+ /* If P3==0 it means we are expected to open a table. If P3!=0 then
+ ** we expect to be opening an index. If this is not what happened,
+ ** then the database is corrupt
+ */
+ if( (pCur->isTable && pOp->p3type==P3_KEYINFO)
+ || (pCur->isIndex && pOp->p3type!=P3_KEYINFO) ){
+ rc = SQLITE_CORRUPT_BKPT;
+ goto abort_due_to_error;
+ }
+ break;
+ }
+ case SQLITE_EMPTY: {
+ pCur->isTable = pOp->p3type!=P3_KEYINFO;
+ pCur->isIndex = !pCur->isTable;
+ rc = SQLITE_OK;
+ break;
+ }
+ default: {
+ goto abort_due_to_error;
+ }
+ }
+ break;
+}
+
+/* Opcode: OpenEphemeral P1 P2 P3
+**
+** Open a new cursor P1 to a transient table.
+** The cursor is always opened read/write even if
+** the main database is read-only. The transient or virtual
+** table is deleted automatically when the cursor is closed.
+**
+** P2 is the number of columns in the virtual table.
+** The cursor points to a BTree table if P3==0 and to a BTree index
+** if P3 is not 0. If P3 is not NULL, it points to a KeyInfo structure
+** that defines the format of keys in the index.
+**
+** This opcode was once called OpenTemp. But that created
+** confusion because the term "temp table", might refer either
+** to a TEMP table at the SQL level, or to a table opened by
+** this opcode. Then this opcode was called OpenVirtual. But
+** that created confusion with the whole virtual-table idea.
+*/
+case OP_OpenEphemeral: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pCx;
+ assert( i>=0 );
+ pCx = allocateCursor(p, i, -1);
+ if( pCx==0 ) goto no_mem;
+ pCx->nullRow = 1;
+ rc = sqlite3BtreeFactory(db, 0, 1, TEMP_PAGES, &pCx->pBt);
+ if( rc==SQLITE_OK ){
+ rc = sqlite3BtreeBeginTrans(pCx->pBt, 1);
+ }
+ if( rc==SQLITE_OK ){
+ /* If a transient index is required, create it by calling
+ ** sqlite3BtreeCreateTable() with the BTREE_ZERODATA flag before
+ ** opening it. If a transient table is required, just use the
+ ** automatically created table with root-page 1 (an INTKEY table).
+ */
+ if( pOp->p3 ){
+ int pgno;
+ assert( pOp->p3type==P3_KEYINFO );
+ rc = sqlite3BtreeCreateTable(pCx->pBt, &pgno, BTREE_ZERODATA);
+ if( rc==SQLITE_OK ){
+ assert( pgno==MASTER_ROOT+1 );
+ rc = sqlite3BtreeCursor(pCx->pBt, pgno, 1, sqlite3VdbeRecordCompare,
+ pOp->p3, &pCx->pCursor);
+ pCx->pKeyInfo = (KeyInfo*)pOp->p3;
+ pCx->pKeyInfo->enc = ENC(p->db);
+ pCx->pIncrKey = &pCx->pKeyInfo->incrKey;
+ }
+ pCx->isTable = 0;
+ }else{
+ rc = sqlite3BtreeCursor(pCx->pBt, MASTER_ROOT, 1, 0, 0, &pCx->pCursor);
+ pCx->isTable = 1;
+ pCx->pIncrKey = &pCx->bogusIncrKey;
+ }
+ }
+ pCx->nField = pOp->p2;
+ pCx->isIndex = !pCx->isTable;
+ break;
+}
+
+/* Opcode: OpenPseudo P1 * *
+**
+** Open a new cursor that points to a fake table that contains a single
+** row of data. Any attempt to write a second row of data causes the
+** first row to be deleted. All data is deleted when the cursor is
+** closed.
+**
+** A pseudo-table created by this opcode is useful for holding the
+** NEW or OLD tables in a trigger. Also used to hold the single
+** row output from the sorter so that the row can be decomposed into
+** individual columns using the OP_Column opcode.
+*/
+case OP_OpenPseudo: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pCx;
+ assert( i>=0 );
+ pCx = allocateCursor(p, i, -1);
+ if( pCx==0 ) goto no_mem;
+ pCx->nullRow = 1;
+ pCx->pseudoTable = 1;
+ pCx->pIncrKey = &pCx->bogusIncrKey;
+ pCx->isTable = 1;
+ pCx->isIndex = 0;
+ break;
+}
+
+/* Opcode: Close P1 * *
+**
+** Close a cursor previously opened as P1. If P1 is not
+** currently open, this instruction is a no-op.
+*/
+case OP_Close: { /* no-push */
+ int i = pOp->p1;
+ if( i>=0 && i<p->nCursor ){
+ sqlite3VdbeFreeCursor(p, p->apCsr[i]);
+ p->apCsr[i] = 0;
+ }
+ break;
+}
+
+/* Opcode: MoveGe P1 P2 *
+**
+** Pop the top of the stack and use its value as a key. Reposition
+** cursor P1 so that it points to the smallest entry that is greater
+** than or equal to the key that was popped from the stack.
+** If there are no records greater than or equal to the key and P2
+** is not zero, then jump to P2.
+**
+** See also: Found, NotFound, Distinct, MoveLt, MoveGt, MoveLe
+*/
+/* Opcode: MoveGt P1 P2 *
+**
+** Pop the top of the stack and use its value as a key. Reposition
+** cursor P1 so that it points to the smallest entry that is greater
+** than the key from the stack.
+** If there are no records greater than the key and P2 is not zero,
+** then jump to P2.
+**
+** See also: Found, NotFound, Distinct, MoveLt, MoveGe, MoveLe
+*/
+/* Opcode: MoveLt P1 P2 *
+**
+** Pop the top of the stack and use its value as a key. Reposition
+** cursor P1 so that it points to the largest entry that is less
+** than the key from the stack.
+** If there are no records less than the key and P2 is not zero,
+** then jump to P2.
+**
+** See also: Found, NotFound, Distinct, MoveGt, MoveGe, MoveLe
+*/
+/* Opcode: MoveLe P1 P2 *
+**
+** Pop the top of the stack and use its value as a key. Reposition
+** cursor P1 so that it points to the largest entry that is less than
+** or equal to the key that was popped from the stack.
+** If there are no records less than or equal to the key and P2 is not zero,
+** then jump to P2.
+**
+** See also: Found, NotFound, Distinct, MoveGt, MoveGe, MoveLt
+*/
+case OP_MoveLt: /* no-push */
+case OP_MoveLe: /* no-push */
+case OP_MoveGe: /* no-push */
+case OP_MoveGt: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pC;
+
+ assert( pTos>=p->aStack );
+ assert( i>=0 && i<p->nCursor );
+ pC = p->apCsr[i];
+ assert( pC!=0 );
+ if( pC->pCursor!=0 ){
+ int res, oc;
+ oc = pOp->opcode;
+ pC->nullRow = 0;
+ *pC->pIncrKey = oc==OP_MoveGt || oc==OP_MoveLe;
+ if( pC->isTable ){
+ i64 iKey;
+ sqlite3VdbeMemIntegerify(pTos);
+ iKey = intToKey(pTos->i);
+ if( pOp->p2==0 && pOp->opcode==OP_MoveGe ){
+ pC->movetoTarget = iKey;
+ pC->deferredMoveto = 1;
+ assert( (pTos->flags & MEM_Dyn)==0 );
+ pTos--;
+ break;
+ }
+ rc = sqlite3BtreeMoveto(pC->pCursor, 0, (u64)iKey, &res);
+ if( rc!=SQLITE_OK ){
+ goto abort_due_to_error;
+ }
+ pC->lastRowid = pTos->i;
+ pC->rowidIsValid = res==0;
+ }else{
+ assert( pTos->flags & MEM_Blob );
+ /* Stringify(pTos, encoding); */
+ rc = sqlite3BtreeMoveto(pC->pCursor, pTos->z, pTos->n, &res);
+ if( rc!=SQLITE_OK ){
+ goto abort_due_to_error;
+ }
+ pC->rowidIsValid = 0;
+ }
+ pC->deferredMoveto = 0;
+ pC->cacheStatus = CACHE_STALE;
+ *pC->pIncrKey = 0;
+#ifdef SQLITE_TEST
+ sqlite3_search_count++;
+#endif
+ if( oc==OP_MoveGe || oc==OP_MoveGt ){
+ if( res<0 ){
+ rc = sqlite3BtreeNext(pC->pCursor, &res);
+ if( rc!=SQLITE_OK ) goto abort_due_to_error;
+ pC->rowidIsValid = 0;
+ }else{
+ res = 0;
+ }
+ }else{
+ assert( oc==OP_MoveLt || oc==OP_MoveLe );
+ if( res>=0 ){
+ rc = sqlite3BtreePrevious(pC->pCursor, &res);
+ if( rc!=SQLITE_OK ) goto abort_due_to_error;
+ pC->rowidIsValid = 0;
+ }else{
+ /* res might be negative because the table is empty. Check to
+ ** see if this is the case.
+ */
+ res = sqlite3BtreeEof(pC->pCursor);
+ }
+ }
+ if( res ){
+ if( pOp->p2>0 ){
+ pc = pOp->p2 - 1;
+ }else{
+ pC->nullRow = 1;
+ }
+ }
+ }
+ Release(pTos);
+ pTos--;
+ break;
+}
+
+/* Opcode: Distinct P1 P2 *
+**
+** Use the top of the stack as a record created using MakeRecord. P1 is a
+** cursor on a table that declared as an index. If that table contains an
+** entry that matches the top of the stack fall thru. If the top of the stack
+** matches no entry in P1 then jump to P2.
+**
+** The cursor is left pointing at the matching entry if it exists. The
+** record on the top of the stack is not popped.
+**
+** This instruction is similar to NotFound except that this operation
+** does not pop the key from the stack.
+**
+** The instruction is used to implement the DISTINCT operator on SELECT
+** statements. The P1 table is not a true index but rather a record of
+** all results that have been produced so far.
+**
+** See also: Found, NotFound, MoveTo, IsUnique, NotExists
+*/
+/* Opcode: Found P1 P2 *
+**
+** Top of the stack holds a blob constructed by MakeRecord. P1 is an index.
+** If an entry that matches the top of the stack exists in P1 then
+** jump to P2. If the top of the stack does not match any entry in P1
+** then fall thru. The P1 cursor is left pointing at the matching entry
+** if it exists. The blob is popped off the top of the stack.
+**
+** This instruction is used to implement the IN operator where the
+** left-hand side is a SELECT statement. P1 is not a true index but
+** is instead a temporary index that holds the results of the SELECT
+** statement. This instruction just checks to see if the left-hand side
+** of the IN operator (stored on the top of the stack) exists in the
+** result of the SELECT statement.
+**
+** See also: Distinct, NotFound, MoveTo, IsUnique, NotExists
+*/
+/* Opcode: NotFound P1 P2 *
+**
+** The top of the stack holds a blob constructed by MakeRecord. P1 is
+** an index. If no entry exists in P1 that matches the blob then jump
+** to P1. If an entry does existing, fall through. The cursor is left
+** pointing to the entry that matches. The blob is popped from the stack.
+**
+** The difference between this operation and Distinct is that
+** Distinct does not pop the key from the stack.
+**
+** See also: Distinct, Found, MoveTo, NotExists, IsUnique
+*/
+case OP_Distinct: /* no-push */
+case OP_NotFound: /* no-push */
+case OP_Found: { /* no-push */
+ int i = pOp->p1;
+ int alreadyExists = 0;
+ Cursor *pC;
+ assert( pTos>=p->aStack );
+ assert( i>=0 && i<p->nCursor );
+ assert( p->apCsr[i]!=0 );
+ if( (pC = p->apCsr[i])->pCursor!=0 ){
+ int res, rx;
+ assert( pC->isTable==0 );
+ Stringify(pTos, encoding);
+ rx = sqlite3BtreeMoveto(pC->pCursor, pTos->z, pTos->n, &res);
+ alreadyExists = rx==SQLITE_OK && res==0;
+ pC->deferredMoveto = 0;
+ pC->cacheStatus = CACHE_STALE;
+ }
+ if( pOp->opcode==OP_Found ){
+ if( alreadyExists ) pc = pOp->p2 - 1;
+ }else{
+ if( !alreadyExists ) pc = pOp->p2 - 1;
+ }
+ if( pOp->opcode!=OP_Distinct ){
+ Release(pTos);
+ pTos--;
+ }
+ break;
+}
+
+/* Opcode: IsUnique P1 P2 *
+**
+** The top of the stack is an integer record number. Call this
+** record number R. The next on the stack is an index key created
+** using MakeIdxRec. Call it K. This instruction pops R from the
+** stack but it leaves K unchanged.
+**
+** P1 is an index. So it has no data and its key consists of a
+** record generated by OP_MakeRecord where the last field is the
+** rowid of the entry that the index refers to.
+**
+** This instruction asks if there is an entry in P1 where the
+** fields match K but the rowid is different from R.
+** If there is no such entry, then there is an immediate
+** jump to P2. If any entry does exist where the index string
+** matches K but the record number is not R, then the record
+** number for that entry is pushed onto the stack and control
+** falls through to the next instruction.
+**
+** See also: Distinct, NotFound, NotExists, Found
+*/
+case OP_IsUnique: { /* no-push */
+ int i = pOp->p1;
+ Mem *pNos = &pTos[-1];
+ Cursor *pCx;
+ BtCursor *pCrsr;
+ i64 R;
+
+ /* Pop the value R off the top of the stack
+ */
+ assert( pNos>=p->aStack );
+ sqlite3VdbeMemIntegerify(pTos);
+ R = pTos->i;
+ assert( (pTos->flags & MEM_Dyn)==0 );
+ pTos--;
+ assert( i>=0 && i<p->nCursor );
+ pCx = p->apCsr[i];
+ assert( pCx!=0 );
+ pCrsr = pCx->pCursor;
+ if( pCrsr!=0 ){
+ int res;
+ i64 v; /* The record number on the P1 entry that matches K */
+ char *zKey; /* The value of K */
+ int nKey; /* Number of bytes in K */
+ int len; /* Number of bytes in K without the rowid at the end */
+ int szRowid; /* Size of the rowid column at the end of zKey */
+
+ /* Make sure K is a string and make zKey point to K
+ */
+ Stringify(pNos, encoding);
+ zKey = pNos->z;
+ nKey = pNos->n;
+
+ szRowid = sqlite3VdbeIdxRowidLen((u8*)zKey);
+ len = nKey-szRowid;
+
+    /* Search for an entry in P1 where all but the rowid at the end match K.
+ ** If there is no such entry, jump immediately to P2.
+ */
+ assert( pCx->deferredMoveto==0 );
+ pCx->cacheStatus = CACHE_STALE;
+ rc = sqlite3BtreeMoveto(pCrsr, zKey, len, &res);
+ if( rc!=SQLITE_OK ){
+ goto abort_due_to_error;
+ }
+ if( res<0 ){
+ rc = sqlite3BtreeNext(pCrsr, &res);
+ if( res ){
+ pc = pOp->p2 - 1;
+ break;
+ }
+ }
+ rc = sqlite3VdbeIdxKeyCompare(pCx, len, (u8*)zKey, &res);
+ if( rc!=SQLITE_OK ) goto abort_due_to_error;
+ if( res>0 ){
+ pc = pOp->p2 - 1;
+ break;
+ }
+
+ /* At this point, pCrsr is pointing to an entry in P1 where all but
+ ** the final entry (the rowid) matches K. Check to see if the
+ ** final rowid column is different from R. If it equals R then jump
+ ** immediately to P2.
+ */
+ rc = sqlite3VdbeIdxRowid(pCrsr, &v);
+ if( rc!=SQLITE_OK ){
+ goto abort_due_to_error;
+ }
+ if( v==R ){
+ pc = pOp->p2 - 1;
+ break;
+ }
+
+ /* The final varint of the key is different from R. Push it onto
+ ** the stack. (The record number of an entry that violates a UNIQUE
+ ** constraint.)
+ */
+ pTos++;
+ pTos->i = v;
+ pTos->flags = MEM_Int;
+ }
+ break;
+}
+
+/* Opcode: NotExists P1 P2 *
+**
+** Use the top of the stack as an integer key. If a record with that key
+** does not exist in table of P1, then jump to P2. If the record
+** does exist, then fall thru. The cursor is left pointing to the
+** record if it exists. The integer key is popped from the stack.
+**
+** The difference between this operation and NotFound is that this
+** operation assumes the key is an integer and that P1 is a table whereas
+** NotFound assumes key is a blob constructed from MakeRecord and
+** P1 is an index.
+**
+** See also: Distinct, Found, MoveTo, NotFound, IsUnique
+*/
+case OP_NotExists: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pC;
+ BtCursor *pCrsr;
+ assert( pTos>=p->aStack );
+ assert( i>=0 && i<p->nCursor );
+ assert( p->apCsr[i]!=0 );
+ if( (pCrsr = (pC = p->apCsr[i])->pCursor)!=0 ){
+ int res;
+ u64 iKey;
+ assert( pTos->flags & MEM_Int );
+ assert( p->apCsr[i]->isTable );
+ iKey = intToKey(pTos->i);
+ rc = sqlite3BtreeMoveto(pCrsr, 0, iKey, &res);
+ pC->lastRowid = pTos->i;
+ pC->rowidIsValid = res==0;
+ pC->nullRow = 0;
+ pC->cacheStatus = CACHE_STALE;
+ if( res!=0 ){
+ pc = pOp->p2 - 1;
+ pC->rowidIsValid = 0;
+ }
+ }
+ Release(pTos);
+ pTos--;
+ break;
+}
+
+/* Opcode: Sequence P1 * *
+**
+** Push an integer onto the stack which is the next available
+** sequence number for cursor P1. The sequence number on the
+** cursor is incremented after the push.
+*/
+case OP_Sequence: {
+ int i = pOp->p1;
+ assert( pTos>=p->aStack );
+ assert( i>=0 && i<p->nCursor );
+ assert( p->apCsr[i]!=0 );
+ pTos++;
+ pTos->i = p->apCsr[i]->seqCount++;
+ pTos->flags = MEM_Int;
+ break;
+}
+
+
+/* Opcode: NewRowid P1 P2 *
+**
+** Get a new integer record number (a.k.a "rowid") used as the key to a table.
+** This record number has not previously been used as a key in the database
+** table that cursor P1 points to. The new record number is pushed
+** onto the stack.
+**
+** If P2>0 then P2 is a memory cell that holds the largest previously
+** generated record number. No new record numbers are allowed to be less
+** than this value. When this value reaches its maximum, a SQLITE_FULL
+** error is generated. The P2 memory cell is updated with the generated
+** record number. This P2 mechanism is used to help implement the
+** AUTOINCREMENT feature.
+*/
+case OP_NewRowid: {
+ int i = pOp->p1;
+ i64 v = 0;
+ Cursor *pC;
+ assert( i>=0 && i<p->nCursor );
+ assert( p->apCsr[i]!=0 );
+ if( (pC = p->apCsr[i])->pCursor==0 ){
+ /* The zero initialization above is all that is needed */
+ }else{
+ /* The next rowid or record number (different terms for the same
+ ** thing) is obtained in a two-step algorithm.
+ **
+ ** First we attempt to find the largest existing rowid and add one
+ ** to that. But if the largest existing rowid is already the maximum
+ ** positive integer, we have to fall through to the second
+ ** probabilistic algorithm
+ **
+ ** The second algorithm is to select a rowid at random and see if
+ ** it already exists in the table. If it does not exist, we have
+ ** succeeded. If the random rowid does exist, we select a new one
+ ** and try again, up to 1000 times.
+ **
+ ** For a table with fewer than 2 billion entries, the probability
+ ** of not finding an unused rowid is about 1.0e-300. This is a
+ ** non-zero probability, but it is still vanishingly small and should
+ ** never cause a problem. You are much, much more likely to have a
+ ** hardware failure than for this algorithm to fail.
+ **
+ ** The analysis in the previous paragraph assumes that you have a good
+ ** source of random numbers. Is a library function like lrand48()
+ ** good enough? Maybe. Maybe not. It's hard to know whether there
+ ** might be subtle bugs in some implementations of lrand48() that
+ ** could cause problems. To avoid uncertainty, SQLite uses its own
+ ** random number generator based on the RC4 algorithm.
+ **
+ ** To promote locality of reference for repetitive inserts, the
+ ** first few attempts at choosing a random rowid pick values just a little
+ ** larger than the previous rowid. This has been shown experimentally
+ ** to double the speed of the COPY operation.
+ */
+ int res, rx=SQLITE_OK, cnt;
+ i64 x;
+ cnt = 0;
+ if( (sqlite3BtreeFlags(pC->pCursor)&(BTREE_INTKEY|BTREE_ZERODATA)) !=
+ BTREE_INTKEY ){
+ rc = SQLITE_CORRUPT_BKPT;
+ goto abort_due_to_error;
+ }
+ assert( (sqlite3BtreeFlags(pC->pCursor) & BTREE_INTKEY)!=0 );
+ assert( (sqlite3BtreeFlags(pC->pCursor) & BTREE_ZERODATA)==0 );
+
+#ifdef SQLITE_32BIT_ROWID
+# define MAX_ROWID 0x7fffffff
+#else
+ /* Some compilers complain about constants of the form 0x7fffffffffffffff.
+ ** Others complain about 0x7ffffffffffffffffLL. The following macro seems
+ ** to provide the constant while making all compilers happy.
+ */
+# define MAX_ROWID ( (((u64)0x7fffffff)<<32) | (u64)0xffffffff )
+#endif
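+
+ /* For reference, the 64-bit form above works out as follows:
+ **
+ **     ((u64)0x7fffffff)<<32              ==  0x7fffffff00000000
+ **     0x7fffffff00000000 | 0xffffffff    ==  0x7fffffffffffffff
+ **
+ ** so MAX_ROWID is 2^63 - 1, the largest signed 64-bit rowid, whichever
+ ** spelling of the constant the compiler accepts.
+ */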
+
+ if( !pC->useRandomRowid ){
+ if( pC->nextRowidValid ){
+ v = pC->nextRowid;
+ }else{
+ rc = sqlite3BtreeLast(pC->pCursor, &res);
+ if( rc!=SQLITE_OK ){
+ goto abort_due_to_error;
+ }
+ if( res ){
+ v = 1;
+ }else{
+ sqlite3BtreeKeySize(pC->pCursor, &v);
+ v = keyToInt(v);
+ if( v==MAX_ROWID ){
+ pC->useRandomRowid = 1;
+ }else{
+ v++;
+ }
+ }
+ }
+
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+ if( pOp->p2 ){
+ Mem *pMem;
+ assert( pOp->p2>0 && pOp->p2<p->nMem ); /* P2 is a valid memory cell */
+ pMem = &p->aMem[pOp->p2];
+ sqlite3VdbeMemIntegerify(pMem);
+ assert( (pMem->flags & MEM_Int)!=0 ); /* mem(P2) holds an integer */
+ if( pMem->i==MAX_ROWID || pC->useRandomRowid ){
+ rc = SQLITE_FULL;
+ goto abort_due_to_error;
+ }
+ if( v<pMem->i+1 ){
+ v = pMem->i + 1;
+ }
+ pMem->i = v;
+ }
+#endif
+
+ if( v<MAX_ROWID ){
+ pC->nextRowidValid = 1;
+ pC->nextRowid = v+1;
+ }else{
+ pC->nextRowidValid = 0;
+ }
+ }
+ if( pC->useRandomRowid ){
+ assert( pOp->p2==0 ); /* SQLITE_FULL must have occurred prior to this */
+ v = db->priorNewRowid;
+ cnt = 0;
+ do{
+ if( v==0 || cnt>2 ){
+ sqlite3Randomness(sizeof(v), &v);
+ if( cnt<5 ) v &= 0xffffff;
+ }else{
+ unsigned char r;
+ sqlite3Randomness(1, &r);
+ v += r + 1;
+ }
+ if( v==0 ) continue;
+ x = intToKey(v);
+ rx = sqlite3BtreeMoveto(pC->pCursor, 0, (u64)x, &res);
+ cnt++;
+ }while( cnt<1000 && rx==SQLITE_OK && res==0 );
+ db->priorNewRowid = v;
+ if( rx==SQLITE_OK && res==0 ){
+ rc = SQLITE_FULL;
+ goto abort_due_to_error;
+ }
+ }
+ pC->rowidIsValid = 0;
+ pC->deferredMoveto = 0;
+ pC->cacheStatus = CACHE_STALE;
+ }
+ pTos++;
+ pTos->i = v;
+ pTos->flags = MEM_Int;
+ break;
+}
+
+/* Opcode: Insert P1 P2 P3
+**
+** Write an entry into the table of cursor P1. A new entry is
+** created if it doesn't already exist or the data for an existing
+** entry is overwritten. The data is the value on the top of the
+** stack. The key is the next value down on the stack. The key must
+** be an integer. The stack is popped twice by this instruction.
+**
+** If the OPFLAG_NCHANGE flag of P2 is set, then the row change count is
+** incremented (otherwise not). If the OPFLAG_LASTROWID flag of P2 is set,
+** then rowid is stored for subsequent return by the
+** sqlite3_last_insert_rowid() function (otherwise it's unmodified).
+**
+** Parameter P3 may point to a string containing the table-name, or
+** may be NULL. If it is not NULL, then the update-hook
+** (sqlite3.xUpdateCallback) is invoked following a successful insert.
+**
+** This instruction only works on tables. The equivalent instruction
+** for indices is OP_IdxInsert.
+*/
+case OP_Insert: { /* no-push */
+ Mem *pNos = &pTos[-1];
+ int i = pOp->p1;
+ Cursor *pC;
+ assert( pNos>=p->aStack );
+ assert( i>=0 && i<p->nCursor );
+ assert( p->apCsr[i]!=0 );
+ if( ((pC = p->apCsr[i])->pCursor!=0 || pC->pseudoTable) ){
+ i64 iKey; /* The integer ROWID or key for the record to be inserted */
+
+ assert( pNos->flags & MEM_Int );
+ assert( pC->isTable );
+ iKey = intToKey(pNos->i);
+
+ if( pOp->p2 & OPFLAG_NCHANGE ) p->nChange++;
+ if( pOp->p2 & OPFLAG_LASTROWID ) db->lastRowid = pNos->i;
+ if( pC->nextRowidValid && pNos->i>=pC->nextRowid ){
+ pC->nextRowidValid = 0;
+ }
+ if( pTos->flags & MEM_Null ){
+ pTos->z = 0;
+ pTos->n = 0;
+ }else{
+ assert( pTos->flags & (MEM_Blob|MEM_Str) );
+ }
+ if( pC->pseudoTable ){
+ sqliteFree(pC->pData);
+ pC->iKey = iKey;
+ pC->nData = pTos->n;
+ if( pTos->flags & MEM_Dyn ){
+ pC->pData = pTos->z;
+ pTos->flags = MEM_Null;
+ }else{
+ pC->pData = sqliteMallocRaw( pC->nData+2 );
+ if( !pC->pData ) goto no_mem;
+ memcpy(pC->pData, pTos->z, pC->nData);
+ pC->pData[pC->nData] = 0;
+ pC->pData[pC->nData+1] = 0;
+ }
+ pC->nullRow = 0;
+ }else{
+ rc = sqlite3BtreeInsert(pC->pCursor, 0, iKey, pTos->z, pTos->n);
+ }
+
+ pC->rowidIsValid = 0;
+ pC->deferredMoveto = 0;
+ pC->cacheStatus = CACHE_STALE;
+
+ /* Invoke the update-hook if required. */
+ if( rc==SQLITE_OK && db->xUpdateCallback && pOp->p3 ){
+ const char *zDb = db->aDb[pC->iDb].zName;
+ const char *zTbl = pOp->p3;
+ int op = ((pOp->p2 & OPFLAG_ISUPDATE) ? SQLITE_UPDATE : SQLITE_INSERT);
+ assert( pC->isTable );
+ db->xUpdateCallback(db->pUpdateArg, op, zDb, zTbl, iKey);
+ assert( pC->iDb>=0 );
+ }
+ }
+ popStack(&pTos, 2);
+
+ break;
+}
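+
+/* Illustrative only: a plain top-level INSERT would typically run this
+** opcode with P2 carrying both flags, e.g.
+**
+**     P2 = OPFLAG_NCHANGE|OPFLAG_LASTROWID
+**
+** so that the change counter is bumped and sqlite3_last_insert_rowid()
+** is updated, while the insert half of an UPDATE would add OPFLAG_ISUPDATE
+** so that the update-hook above reports SQLITE_UPDATE instead of
+** SQLITE_INSERT. The exact combinations are chosen by the code generator.
+*/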
+
+/* Opcode: Delete P1 P2 P3
+**
+** Delete the record at which the P1 cursor is currently pointing.
+**
+** The cursor will be left pointing at either the next or the previous
+** record in the table. If it is left pointing at the next record, then
+** the next Next instruction will be a no-op. Hence it is OK to delete
+** a record from within a Next loop.
+**
+** If the OPFLAG_NCHANGE flag of P2 is set, then the row change count is
+** incremented (otherwise not).
+**
+** If P1 is a pseudo-table, then this instruction is a no-op.
+*/
+case OP_Delete: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pC;
+ assert( i>=0 && i<p->nCursor );
+ pC = p->apCsr[i];
+ assert( pC!=0 );
+ if( pC->pCursor!=0 ){
+ i64 iKey;
+
+ /* If the update-hook will be invoked, set iKey to the rowid of the
+ ** row being deleted.
+ */
+ if( db->xUpdateCallback && pOp->p3 ){
+ assert( pC->isTable );
+ if( pC->rowidIsValid ){
+ iKey = pC->lastRowid;
+ }else{
+ rc = sqlite3BtreeKeySize(pC->pCursor, &iKey);
+ if( rc ){
+ goto abort_due_to_error;
+ }
+ iKey = keyToInt(iKey);
+ }
+ }
+
+ rc = sqlite3VdbeCursorMoveto(pC);
+ if( rc ) goto abort_due_to_error;
+ rc = sqlite3BtreeDelete(pC->pCursor);
+ pC->nextRowidValid = 0;
+ pC->cacheStatus = CACHE_STALE;
+
+ /* Invoke the update-hook if required. */
+ if( rc==SQLITE_OK && db->xUpdateCallback && pOp->p3 ){
+ const char *zDb = db->aDb[pC->iDb].zName;
+ const char *zTbl = pOp->p3;
+ db->xUpdateCallback(db->pUpdateArg, SQLITE_DELETE, zDb, zTbl, iKey);
+ assert( pC->iDb>=0 );
+ }
+ }
+ if( pOp->p2 & OPFLAG_NCHANGE ) p->nChange++;
+ break;
+}
+
+/* Opcode: ResetCount P1 * *
+**
+** This opcode resets the VM's internal change counter to 0. If P1 is true,
+** then the value of the change counter is copied to the database handle
+** change counter (returned by subsequent calls to sqlite3_changes())
+** before it is reset. This is used by trigger programs.
+*/
+case OP_ResetCount: { /* no-push */
+ if( pOp->p1 ){
+ sqlite3VdbeSetChanges(db, p->nChange);
+ }
+ p->nChange = 0;
+ break;
+}
+
+/* Opcode: RowData P1 * *
+**
+** Push onto the stack the complete row data for cursor P1.
+** There is no interpretation of the data. It is just copied
+** onto the stack exactly as it is found in the database file.
+**
+** If the cursor is not pointing to a valid row, a NULL is pushed
+** onto the stack.
+*/
+/* Opcode: RowKey P1 * *
+**
+** Push onto the stack the complete row key for cursor P1.
+** There is no interpretation of the key. It is just copied
+** onto the stack exactly as it is found in the database file.
+**
+** If the cursor is not pointing to a valid row, a NULL is pushed
+** onto the stack.
+*/
+case OP_RowKey:
+case OP_RowData: {
+ int i = pOp->p1;
+ Cursor *pC;
+ u32 n;
+
+ /* Note that RowKey and RowData are really exactly the same instruction */
+ pTos++;
+ assert( i>=0 && i<p->nCursor );
+ pC = p->apCsr[i];
+ assert( pC->isTable || pOp->opcode==OP_RowKey );
+ assert( pC->isIndex || pOp->opcode==OP_RowData );
+ assert( pC!=0 );
+ if( pC->nullRow ){
+ pTos->flags = MEM_Null;
+ }else if( pC->pCursor!=0 ){
+ BtCursor *pCrsr = pC->pCursor;
+ rc = sqlite3VdbeCursorMoveto(pC);
+ if( rc ) goto abort_due_to_error;
+ if( pC->nullRow ){
+ pTos->flags = MEM_Null;
+ break;
+ }else if( pC->isIndex ){
+ i64 n64;
+ assert( !pC->isTable );
+ sqlite3BtreeKeySize(pCrsr, &n64);
+ n = n64;
+ }else{
+ sqlite3BtreeDataSize(pCrsr, &n);
+ }
+ pTos->n = n;
+ if( n<=NBFS ){
+ pTos->flags = MEM_Blob | MEM_Short;
+ pTos->z = pTos->zShort;
+ }else{
+ char *z = sqliteMallocRaw( n );
+ if( z==0 ) goto no_mem;
+ pTos->flags = MEM_Blob | MEM_Dyn;
+ pTos->xDel = 0;
+ pTos->z = z;
+ }
+ if( pC->isIndex ){
+ sqlite3BtreeKey(pCrsr, 0, n, pTos->z);
+ }else{
+ sqlite3BtreeData(pCrsr, 0, n, pTos->z);
+ }
+ }else if( pC->pseudoTable ){
+ pTos->n = pC->nData;
+ pTos->z = pC->pData;
+ pTos->flags = MEM_Blob|MEM_Ephem;
+ }else{
+ pTos->flags = MEM_Null;
+ }
+ pTos->enc = SQLITE_UTF8; /* In case the blob is ever cast to text */
+ break;
+}
+
+/* Opcode: Rowid P1 * *
+**
+** Push onto the stack an integer which is the key of the table entry that
+** P1 is currently pointing to.
+*/
+case OP_Rowid: {
+ int i = pOp->p1;
+ Cursor *pC;
+ i64 v;
+
+ assert( i>=0 && i<p->nCursor );
+ pC = p->apCsr[i];
+ assert( pC!=0 );
+ rc = sqlite3VdbeCursorMoveto(pC);
+ if( rc ) goto abort_due_to_error;
+ pTos++;
+ if( pC->rowidIsValid ){
+ v = pC->lastRowid;
+ }else if( pC->pseudoTable ){
+ v = keyToInt(pC->iKey);
+ }else if( pC->nullRow || pC->pCursor==0 ){
+ pTos->flags = MEM_Null;
+ break;
+ }else{
+ assert( pC->pCursor!=0 );
+ sqlite3BtreeKeySize(pC->pCursor, &v);
+ v = keyToInt(v);
+ }
+ pTos->i = v;
+ pTos->flags = MEM_Int;
+ break;
+}
+
+/* Opcode: NullRow P1 * *
+**
+** Move the cursor P1 to a null row. Any OP_Column operations
+** that occur while the cursor is on the null row will always push
+** a NULL onto the stack.
+*/
+case OP_NullRow: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pC;
+
+ assert( i>=0 && i<p->nCursor );
+ pC = p->apCsr[i];
+ assert( pC!=0 );
+ pC->nullRow = 1;
+ pC->rowidIsValid = 0;
+ break;
+}
+
+/* Opcode: Last P1 P2 *
+**
+** The next use of the Rowid or Column or Next instruction for P1
+** will refer to the last entry in the database table or index.
+** If the table or index is empty and P2>0, then jump immediately to P2.
+** If P2 is 0 or if the table or index is not empty, fall through
+** to the following instruction.
+*/
+case OP_Last: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pC;
+ BtCursor *pCrsr;
+
+ assert( i>=0 && i<p->nCursor );
+ pC = p->apCsr[i];
+ assert( pC!=0 );
+ if( (pCrsr = pC->pCursor)!=0 ){
+ int res;
+ rc = sqlite3BtreeLast(pCrsr, &res);
+ pC->nullRow = res;
+ pC->deferredMoveto = 0;
+ pC->cacheStatus = CACHE_STALE;
+ if( res && pOp->p2>0 ){
+ pc = pOp->p2 - 1;
+ }
+ }else{
+ pC->nullRow = 0;
+ }
+ break;
+}
+
+
+/* Opcode: Sort P1 P2 *
+**
+** This opcode does exactly the same thing as OP_Rewind except that
+** it increments an undocumented global variable used for testing.
+**
+** Sorting is accomplished by writing records into a sorting index,
+** then rewinding that index and playing it back from beginning to
+** end. We use the OP_Sort opcode instead of OP_Rewind to do the
+** rewinding so that the global variable will be incremented and
+** regression tests can determine whether or not the optimizer is
+** correctly optimizing out sorts.
+*/
+case OP_Sort: { /* no-push */
+#ifdef SQLITE_TEST
+ sqlite3_sort_count++;
+ sqlite3_search_count--;
+#endif
+ /* Fall through into OP_Rewind */
+}
+/* Opcode: Rewind P1 P2 *
+**
+** The next use of the Rowid or Column or Next instruction for P1
+** will refer to the first entry in the database table or index.
+** If the table or index is empty and P2>0, then jump immediately to P2.
+** If P2 is 0 or if the table or index is not empty, fall through
+** to the following instruction.
+*/
+case OP_Rewind: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pC;
+ BtCursor *pCrsr;
+ int res;
+
+ assert( i>=0 && i<p->nCursor );
+ pC = p->apCsr[i];
+ assert( pC!=0 );
+ if( (pCrsr = pC->pCursor)!=0 ){
+ rc = sqlite3BtreeFirst(pCrsr, &res);
+ pC->atFirst = res==0;
+ pC->deferredMoveto = 0;
+ pC->cacheStatus = CACHE_STALE;
+ }else{
+ res = 1;
+ }
+ pC->nullRow = res;
+ if( res && pOp->p2>0 ){
+ pc = pOp->p2 - 1;
+ }
+ break;
+}
+
+/* Opcode: Next P1 P2 *
+**
+** Advance cursor P1 so that it points to the next key/data pair in its
+** table or index. If there are no more key/value pairs then fall through
+** to the following instruction. But if the cursor advance was successful,
+** jump immediately to P2.
+**
+** See also: Prev
+*/
+/* Opcode: Prev P1 P2 *
+**
+** Back up cursor P1 so that it points to the previous key/data pair in its
+** table or index. If there are no previous key/value pairs then fall through
+** to the following instruction. But if the cursor backup was successful,
+** jump immediately to P2.
+*/
+case OP_Prev: /* no-push */
+case OP_Next: { /* no-push */
+ Cursor *pC;
+ BtCursor *pCrsr;
+
+ CHECK_FOR_INTERRUPT;
+ assert( pOp->p1>=0 && pOp->p1<p->nCursor );
+ pC = p->apCsr[pOp->p1];
+ assert( pC!=0 );
+ if( (pCrsr = pC->pCursor)!=0 ){
+ int res;
+ if( pC->nullRow ){
+ res = 1;
+ }else{
+ assert( pC->deferredMoveto==0 );
+ rc = pOp->opcode==OP_Next ? sqlite3BtreeNext(pCrsr, &res) :
+ sqlite3BtreePrevious(pCrsr, &res);
+ pC->nullRow = res;
+ pC->cacheStatus = CACHE_STALE;
+ }
+ if( res==0 ){
+ pc = pOp->p2 - 1;
+#ifdef SQLITE_TEST
+ sqlite3_search_count++;
+#endif
+ }
+ }else{
+ pC->nullRow = 1;
+ }
+ pC->rowidIsValid = 0;
+ break;
+}
+
+/* Opcode: IdxInsert P1 * *
+**
+** The top of the stack holds a SQL index key made using either the
+** MakeIdxRec or MakeRecord instructions. This opcode writes that key
+** into the index P1. Data for the entry is nil.
+**
+** This instruction only works for indices. The equivalent instruction
+** for tables is OP_Insert.
+*/
+case OP_IdxInsert: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pC;
+ BtCursor *pCrsr;
+ assert( pTos>=p->aStack );
+ assert( i>=0 && i<p->nCursor );
+ assert( p->apCsr[i]!=0 );
+ assert( pTos->flags & MEM_Blob );
+ assert( pOp->p2==0 );
+ if( (pCrsr = (pC = p->apCsr[i])->pCursor)!=0 ){
+ int nKey = pTos->n;
+ const char *zKey = pTos->z;
+ assert( pC->isTable==0 );
+ rc = sqlite3BtreeInsert(pCrsr, zKey, nKey, "", 0);
+ assert( pC->deferredMoveto==0 );
+ pC->cacheStatus = CACHE_STALE;
+ }
+ Release(pTos);
+ pTos--;
+ break;
+}
+
+/* Opcode: IdxDelete P1 * *
+**
+** The top of the stack is an index key built using either the
+** MakeIdxRec or MakeRecord opcodes.
+** This opcode removes that entry from the index.
+*/
+case OP_IdxDelete: { /* no-push */
+ int i = pOp->p1;
+ Cursor *pC;
+ BtCursor *pCrsr;
+ assert( pTos>=p->aStack );
+ assert( pTos->flags & MEM_Blob );
+ assert( i>=0 && i<p->nCursor );
+ assert( p->apCsr[i]!=0 );
+ if( (pCrsr = (pC = p->apCsr[i])->pCursor)!=0 ){
+ int res;
+ rc = sqlite3BtreeMoveto(pCrsr, pTos->z, pTos->n, &res);
+ if( rc==SQLITE_OK && res==0 ){
+ rc = sqlite3BtreeDelete(pCrsr);
+ }
+ assert( pC->deferredMoveto==0 );
+ pC->cacheStatus = CACHE_STALE;
+ }
+ Release(pTos);
+ pTos--;
+ break;
+}
+
+/* Opcode: IdxRowid P1 * *
+**
+** Push onto the stack an integer which is the last entry in the record at
+** the end of the index key pointed to by cursor P1. This integer should be
+** the rowid of the table entry to which this index entry points.
+**
+** See also: Rowid, MakeIdxRec.
+*/
+case OP_IdxRowid: {
+ int i = pOp->p1;
+ BtCursor *pCrsr;
+ Cursor *pC;
+
+ assert( i>=0 && i<p->nCursor );
+ assert( p->apCsr[i]!=0 );
+ pTos++;
+ pTos->flags = MEM_Null;
+ if( (pCrsr = (pC = p->apCsr[i])->pCursor)!=0 ){
+ i64 rowid;
+
+ assert( pC->deferredMoveto==0 );
+ assert( pC->isTable==0 );
+ if( pC->nullRow ){
+ pTos->flags = MEM_Null;
+ }else{
+ rc = sqlite3VdbeIdxRowid(pCrsr, &rowid);
+ if( rc!=SQLITE_OK ){
+ goto abort_due_to_error;
+ }
+ pTos->flags = MEM_Int;
+ pTos->i = rowid;
+ }
+ }
+ break;
+}
+
+/* Opcode: IdxGT P1 P2 *
+**
+** The top of the stack is an index entry that omits the ROWID. Compare
+** the top of stack against the index that P1 is currently pointing to.
+** Ignore the ROWID on the P1 index.
+**
+** The top of the stack might have fewer columns than P1.
+**
+** If the P1 index entry is greater than the top of the stack
+** then jump to P2. Otherwise fall through to the next instruction.
+** In either case, the stack is popped once.
+*/
+/* Opcode: IdxGE P1 P2 P3
+**
+** The top of the stack is an index entry that omits the ROWID. Compare
+** the top of stack against the index that P1 is currently pointing to.
+** Ignore the ROWID on the P1 index.
+**
+** If the P1 index entry is greater than or equal to the top of the stack
+** then jump to P2. Otherwise fall through to the next instruction.
+** In either case, the stack is popped once.
+**
+** If P3 is the "+" string (or any other non-NULL string) then the
+** index taken from the top of the stack is temporarily increased by
+** an epsilon prior to the comparison. This makes the opcode work
+** like IdxGT except that if the key from the stack is a prefix of
+** the key in the cursor, the result is false whereas it would be
+** true with IdxGT.
+*/
+/* Opcode: IdxLT P1 P2 P3
+**
+** The top of the stack is an index entry that omits the ROWID. Compare
+** the top of stack against the index that P1 is currently pointing to.
+** Ignore the ROWID on the P1 index.
+**
+** If the P1 index entry is less than the top of the stack
+** then jump to P2. Otherwise fall through to the next instruction.
+** In either case, the stack is popped once.
+**
+** If P3 is the "+" string (or any other non-NULL string) then the
+** index taken from the top of the stack is temporarily increased by
+** an epsilon prior to the comparison. This makes the opcode work
+** like IdxLE.
+*/
+case OP_IdxLT: /* no-push */
+case OP_IdxGT: /* no-push */
+case OP_IdxGE: { /* no-push */
+ int i= pOp->p1;
+ Cursor *pC;
+
+ assert( i>=0 && i<p->nCursor );
+ assert( p->apCsr[i]!=0 );
+ assert( pTos>=p->aStack );
+ if( (pC = p->apCsr[i])->pCursor!=0 ){
+ int res;
+
+ assert( pTos->flags & MEM_Blob ); /* Created using OP_Make*Key */
+ Stringify(pTos, encoding);
+ assert( pC->deferredMoveto==0 );
+ *pC->pIncrKey = pOp->p3!=0;
+ assert( pOp->p3==0 || pOp->opcode!=OP_IdxGT );
+ rc = sqlite3VdbeIdxKeyCompare(pC, pTos->n, (u8*)pTos->z, &res);
+ *pC->pIncrKey = 0;
+ if( rc!=SQLITE_OK ){
+ break;
+ }
+ if( pOp->opcode==OP_IdxLT ){
+ res = -res;
+ }else if( pOp->opcode==OP_IdxGE ){
+ res++;
+ }
+ if( res>0 ){
+ pc = pOp->p2 - 1 ;
+ }
+ }
+ Release(pTos);
+ pTos--;
+ break;
+}
+
+/* Opcode: IdxIsNull P1 P2 *
+**
+** The top of the stack contains an index entry such as might be generated
+** by the MakeIdxRec opcode. This routine looks at the first P1 fields of
+** that key. If any of the first P1 fields are NULL, then a jump is made
+** to address P2. Otherwise we fall straight through.
+**
+** The index entry is always popped from the stack.
+*/
+case OP_IdxIsNull: { /* no-push */
+ int i = pOp->p1;
+ int k, n;
+ const char *z;
+ u32 serial_type;
+
+ assert( pTos>=p->aStack );
+ assert( pTos->flags & MEM_Blob );
+ z = pTos->z;
+ n = pTos->n;
+ k = sqlite3GetVarint32((u8*)z, &serial_type);
+ for(; k<n && i>0; i--){
+ k += sqlite3GetVarint32((u8*)&z[k], &serial_type);
+ if( serial_type==0 ){ /* Serial type 0 is a NULL */
+ pc = pOp->p2-1;
+ break;
+ }
+ }
+ Release(pTos);
+ pTos--;
+ break;
+}
+
+/* Opcode: Destroy P1 P2 *
+**
+** Delete an entire database table or index whose root page in the database
+** file is given by P1.
+**
+** The table being destroyed is in the main database file if P2==0. If
+** P2==1 then the table being destroyed is in the auxiliary database file
+** that is used to store tables created using CREATE TEMPORARY TABLE.
+**
+** If AUTOVACUUM is enabled then it is possible that another root page
+** might be moved into the newly deleted root page in order to keep all
+** root pages contiguous at the beginning of the database. The former
+** value of the root page that moved - its value before the move occurred -
+** is pushed onto the stack. If no page movement was required (because
+** the table being dropped was already the last one in the database) then
+** a zero is pushed onto the stack. If AUTOVACUUM is disabled
+** then a zero is pushed onto the stack.
+**
+** See also: Clear
+*/
+case OP_Destroy: {
+ int iMoved;
+ Vdbe *pVdbe;
+ int iCnt;
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ iCnt = 0;
+ for(pVdbe=db->pVdbe; pVdbe; pVdbe=pVdbe->pNext){
+ if( pVdbe->magic==VDBE_MAGIC_RUN && pVdbe->inVtabMethod<2 && pVdbe->pc>=0 ){
+ iCnt++;
+ }
+ }
+#else
+ iCnt = db->activeVdbeCnt;
+#endif
+ if( iCnt>1 ){
+ rc = SQLITE_LOCKED;
+ }else{
+ assert( iCnt==1 );
+ rc = sqlite3BtreeDropTable(db->aDb[pOp->p2].pBt, pOp->p1, &iMoved);
+ pTos++;
+ pTos->flags = MEM_Int;
+ pTos->i = iMoved;
+#ifndef SQLITE_OMIT_AUTOVACUUM
+ if( rc==SQLITE_OK && iMoved!=0 ){
+ sqlite3RootPageMoved(&db->aDb[pOp->p2], iMoved, pOp->p1);
+ }
+#endif
+ }
+ break;
+}
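+
+/* A worked example of the auto-vacuum case described above: suppose the
+** table being dropped has root page 5 and another table's root page 9 is
+** the last root page in the file. The btree layer moves the content of
+** page 9 into page 5 so that root pages stay contiguous, and iMoved is set
+** to 9 (the former page number of the moved page), which is pushed onto
+** the stack and later handed to sqlite3RootPageMoved(). If nothing had to
+** move, or if auto-vacuum is disabled, a 0 is pushed instead.
+*/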
+
+/* Opcode: Clear P1 P2 *
+**
+** Delete all contents of the database table or index whose root page
+** in the database file is given by P1. But, unlike Destroy, do not
+** remove the table or index from the database file.
+**
+** The table being cleared is in the main database file if P2==0. If
+** P2==1 then the table to be cleared is in the auxiliary database file
+** that is used to store tables created using CREATE TEMPORARY TABLE.
+**
+** See also: Destroy
+*/
+case OP_Clear: { /* no-push */
+
+ /* For consistency with the way other features of SQLite operate
+ ** with a truncate, we will also skip the update callback.
+ */
+#if 0
+ Btree *pBt = db->aDb[pOp->p2].pBt;
+ if( db->xUpdateCallback && pOp->p3 ){
+ const char *zDb = db->aDb[pOp->p2].zName;
+ const char *zTbl = pOp->p3;
+ BtCursor *pCur = 0;
+ int fin = 0;
+
+ rc = sqlite3BtreeCursor(pBt, pOp->p1, 0, 0, 0, &pCur);
+ if( rc!=SQLITE_OK ){
+ goto abort_due_to_error;
+ }
+ for(
+ rc=sqlite3BtreeFirst(pCur, &fin);
+ rc==SQLITE_OK && !fin;
+ rc=sqlite3BtreeNext(pCur, &fin)
+ ){
+ i64 iKey;
+ rc = sqlite3BtreeKeySize(pCur, &iKey);
+ if( rc ){
+ break;
+ }
+ iKey = keyToInt(iKey);
+ db->xUpdateCallback(db->pUpdateArg, SQLITE_DELETE, zDb, zTbl, iKey);
+ }
+ sqlite3BtreeCloseCursor(pCur);
+ if( rc!=SQLITE_OK ){
+ goto abort_due_to_error;
+ }
+ }
+#endif
+ rc = sqlite3BtreeClearTable(db->aDb[pOp->p2].pBt, pOp->p1);
+ break;
+}
+
+/* Opcode: CreateTable P1 * *
+**
+** Allocate a new table in the main database file if P1==0 or in the
+** auxiliary database file if P1==1. Push the page number
+** for the root page of the new table onto the stack.
+**
+** The difference between a table and an index is this: A table must
+** have an integer key and can have arbitrary data. An index
+** has an arbitrary key but no data.
+**
+** See also: CreateIndex
+*/
+/* Opcode: CreateIndex P1 * *
+**
+** Allocate a new index in the main database file if P1==0 or in the
+** auxiliary database file if P1==1. Push the page number of the
+** root page of the new index onto the stack.
+**
+** See documentation on OP_CreateTable for additional information.
+*/
+case OP_CreateIndex:
+case OP_CreateTable: {
+ int pgno;
+ int flags;
+ Db *pDb;
+ assert( pOp->p1>=0 && pOp->p1<db->nDb );
+ pDb = &db->aDb[pOp->p1];
+ assert( pDb->pBt!=0 );
+ if( pOp->opcode==OP_CreateTable ){
+ /* flags = BTREE_INTKEY; */
+ flags = BTREE_LEAFDATA|BTREE_INTKEY;
+ }else{
+ flags = BTREE_ZERODATA;
+ }
+ rc = sqlite3BtreeCreateTable(pDb->pBt, &pgno, flags);
+ pTos++;
+ if( rc==SQLITE_OK ){
+ pTos->i = pgno;
+ pTos->flags = MEM_Int;
+ }else{
+ pTos->flags = MEM_Null;
+ }
+ break;
+}
+
+/* Opcode: ParseSchema P1 * P3
+**
+** Read and parse all entries from the SQLITE_MASTER table of database P1
+** that match the WHERE clause P3.
+**
+** This opcode invokes the parser to create a new virtual machine,
+** then runs the new virtual machine. It is thus a reentrant opcode.
+*/
+case OP_ParseSchema: { /* no-push */
+ char *zSql;
+ int iDb = pOp->p1;
+ const char *zMaster;
+ InitData initData;
+
+ assert( iDb>=0 && iDb<db->nDb );
+ if( !DbHasProperty(db, iDb, DB_SchemaLoaded) ) break;
+ zMaster = SCHEMA_TABLE(iDb);
+ initData.db = db;
+ initData.iDb = pOp->p1;
+ initData.pzErrMsg = &p->zErrMsg;
+ zSql = sqlite3MPrintf(
+ "SELECT name, rootpage, sql FROM '%q'.%s WHERE %s",
+ db->aDb[iDb].zName, zMaster, pOp->p3);
+ if( zSql==0 ) goto no_mem;
+ sqlite3SafetyOff(db);
+ assert( db->init.busy==0 );
+ db->init.busy = 1;
+ assert( !sqlite3MallocFailed() );
+ rc = sqlite3_exec(db, zSql, sqlite3InitCallback, &initData, 0);
+ if( rc==SQLITE_ABORT ) rc = initData.rc;
+ sqliteFree(zSql);
+ db->init.busy = 0;
+ sqlite3SafetyOn(db);
+ if( rc==SQLITE_NOMEM ){
+ sqlite3FailedMalloc();
+ goto no_mem;
+ }
+ break;
+}
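+
+/* As a rough, illustrative example: when reloading the schema of the main
+** database with a WHERE clause such as tbl_name='t1', the statement built
+** above reads approximately
+**
+**     SELECT name, rootpage, sql FROM 'main'.sqlite_master
+**      WHERE tbl_name='t1'
+**
+** and each row it returns is handed to sqlite3InitCallback() by the nested
+** virtual machine that sqlite3_exec() creates.
+*/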
+
+#if !defined(SQLITE_OMIT_ANALYZE) && !defined(SQLITE_OMIT_PARSER)
+/* Opcode: LoadAnalysis P1 * *
+**
+** Read the sqlite_stat1 table for database P1 and load the content
+** of that table into the internal index hash table. This will cause
+** the analysis to be used when preparing all subsequent queries.
+*/
+case OP_LoadAnalysis: { /* no-push */
+ int iDb = pOp->p1;
+ assert( iDb>=0 && iDb<db->nDb );
+ sqlite3AnalysisLoad(db, iDb);
+ break;
+}
+#endif /* !defined(SQLITE_OMIT_ANALYZE) && !defined(SQLITE_OMIT_PARSER) */
+
+/* Opcode: DropTable P1 * P3
+**
+** Remove the internal (in-memory) data structures that describe
+** the table named P3 in database P1. This is called after a table
+** is dropped in order to keep the internal representation of the
+** schema consistent with what is on disk.
+*/
+case OP_DropTable: { /* no-push */
+ sqlite3UnlinkAndDeleteTable(db, pOp->p1, pOp->p3);
+ break;
+}
+
+/* Opcode: DropIndex P1 * P3
+**
+** Remove the internal (in-memory) data structures that describe
+** the index named P3 in database P1. This is called after an index
+** is dropped in order to keep the internal representation of the
+** schema consistent with what is on disk.
+*/
+case OP_DropIndex: { /* no-push */
+ sqlite3UnlinkAndDeleteIndex(db, pOp->p1, pOp->p3);
+ break;
+}
+
+/* Opcode: DropTrigger P1 * P3
+**
+** Remove the internal (in-memory) data structures that describe
+** the trigger named P3 in database P1. This is called after a trigger
+** is dropped in order to keep the internal representation of the
+** schema consistent with what is on disk.
+*/
+case OP_DropTrigger: { /* no-push */
+ sqlite3UnlinkAndDeleteTrigger(db, pOp->p1, pOp->p3);
+ break;
+}
+
+
+#ifndef SQLITE_OMIT_INTEGRITY_CHECK
+/* Opcode: IntegrityCk * P2 *
+**
+** Do an analysis of the currently open database. Push onto the
+** stack the text of an error message describing any problems.
+** If there are no errors, push an "ok" onto the stack.
+**
+** The root page numbers of all tables in the database are integer
+** values on the stack. This opcode pulls as many integers as it
+** can off of the stack and uses those numbers as the root pages.
+**
+** If P2 is not zero, the check is done on the auxiliary database
+** file, not the main database file.
+**
+** This opcode is used for testing purposes only.
+*/
+case OP_IntegrityCk: {
+ int nRoot;
+ int *aRoot;
+ int j;
+ char *z;
+
+ for(nRoot=0; &pTos[-nRoot]>=p->aStack; nRoot++){
+ if( (pTos[-nRoot].flags & MEM_Int)==0 ) break;
+ }
+ assert( nRoot>0 );
+ aRoot = sqliteMallocRaw( sizeof(int)*(nRoot+1) );
+ if( aRoot==0 ) goto no_mem;
+ for(j=0; j<nRoot; j++){
+ Mem *pMem = &pTos[-j];
+ aRoot[j] = pMem->i;
+ }
+ aRoot[j] = 0;
+ popStack(&pTos, nRoot);
+ pTos++;
+ z = sqlite3BtreeIntegrityCheck(db->aDb[pOp->p2].pBt, aRoot, nRoot);
+ if( z==0 || z[0]==0 ){
+ if( z ) sqliteFree(z);
+ pTos->z = "ok";
+ pTos->n = 2;
+ pTos->flags = MEM_Str | MEM_Static | MEM_Term;
+ }else{
+ pTos->z = z;
+ pTos->n = strlen(z);
+ pTos->flags = MEM_Str | MEM_Dyn | MEM_Term;
+ pTos->xDel = 0;
+ }
+ pTos->enc = SQLITE_UTF8;
+ sqlite3VdbeChangeEncoding(pTos, encoding);
+ sqliteFree(aRoot);
+ break;
+}
+#endif /* SQLITE_OMIT_INTEGRITY_CHECK */
+
+/* Opcode: FifoWrite * * *
+**
+** Write the integer on the top of the stack
+** into the Fifo.
+*/
+case OP_FifoWrite: { /* no-push */
+ assert( pTos>=p->aStack );
+ sqlite3VdbeMemIntegerify(pTos);
+ sqlite3VdbeFifoPush(&p->sFifo, pTos->i);
+ assert( (pTos->flags & MEM_Dyn)==0 );
+ pTos--;
+ break;
+}
+
+/* Opcode: FifoRead * P2 *
+**
+** Attempt to read a single integer from the Fifo
+** and push it onto the stack. If the Fifo is empty
+** push nothing but instead jump to P2.
+*/
+case OP_FifoRead: {
+ i64 v;
+ CHECK_FOR_INTERRUPT;
+ if( sqlite3VdbeFifoPop(&p->sFifo, &v)==SQLITE_DONE ){
+ pc = pOp->p2 - 1;
+ }else{
+ pTos++;
+ pTos->i = v;
+ pTos->flags = MEM_Int;
+ }
+ break;
+}
+
+#ifndef SQLITE_OMIT_TRIGGER
+/* Opcode: ContextPush * * *
+**
+** Save the current Vdbe context such that it can be restored by a ContextPop
+** opcode. The context stores the last insert row id, the last statement change
+** count, and the current statement change count.
+*/
+case OP_ContextPush: { /* no-push */
+ int i = p->contextStackTop++;
+ Context *pContext;
+
+ assert( i>=0 );
+ /* FIX ME: This should be allocated as part of the vdbe at compile-time */
+ if( i>=p->contextStackDepth ){
+ p->contextStackDepth = i+1;
+ sqliteReallocOrFree((void**)&p->contextStack, sizeof(Context)*(i+1));
+ if( p->contextStack==0 ) goto no_mem;
+ }
+ pContext = &p->contextStack[i];
+ pContext->lastRowid = db->lastRowid;
+ pContext->nChange = p->nChange;
+ pContext->sFifo = p->sFifo;
+ sqlite3VdbeFifoInit(&p->sFifo);
+ break;
+}
+
+/* Opcode: ContextPop * * *
+**
+** Restore the Vdbe context to the state it was in when contextPush was last
+** executed. The context stores the last insert row id, the last statement
+** change count, and the current statement change count.
+*/
+case OP_ContextPop: { /* no-push */
+ Context *pContext = &p->contextStack[--p->contextStackTop];
+ assert( p->contextStackTop>=0 );
+ db->lastRowid = pContext->lastRowid;
+ p->nChange = pContext->nChange;
+ sqlite3VdbeFifoClear(&p->sFifo);
+ p->sFifo = pContext->sFifo;
+ break;
+}
+#endif /* #ifndef SQLITE_OMIT_TRIGGER */
+
+/* Opcode: MemStore P1 P2 *
+**
+** Write the top of the stack into memory location P1.
+** P1 should be a small integer since space is allocated
+** for all memory locations between 0 and P1 inclusive.
+**
+** After the data is stored in the memory location, the
+** stack is popped once if P2 is 1. If P2 is zero, then
+** the original data remains on the stack.
+*/
+case OP_MemStore: { /* no-push */
+ assert( pTos>=p->aStack );
+ assert( pOp->p1>=0 && pOp->p1<p->nMem );
+ rc = sqlite3VdbeMemMove(&p->aMem[pOp->p1], pTos);
+ pTos--;
+
+ /* If P2 is 0 then fall through to the next opcode, OP_MemLoad, which will
+ ** restore the top of the stack to its original value.
+ */
+ if( pOp->p2 ){
+ break;
+ }
+}
+/* Opcode: MemLoad P1 * *
+**
+** Push a copy of the value in memory location P1 onto the stack.
+**
+** If the value is a string, then the value pushed is a pointer to
+** the string that is stored in the memory location. If the memory
+** location is subsequently changed (using OP_MemStore) then the
+** value pushed onto the stack will change too.
+*/
+case OP_MemLoad: {
+ int i = pOp->p1;
+ assert( i>=0 && i<p->nMem );
+ pTos++;
+ sqlite3VdbeMemShallowCopy(pTos, &p->aMem[i], MEM_Ephem);
+ break;
+}
+
+#ifndef SQLITE_OMIT_AUTOINCREMENT
+/* Opcode: MemMax P1 * *
+**
+** Set the value of memory cell P1 to the maximum of its current value
+** and the value on the top of the stack. The stack is unchanged.
+**
+** This instruction throws an error if the memory cell is not initially
+** an integer.
+*/
+case OP_MemMax: { /* no-push */
+ int i = pOp->p1;
+ Mem *pMem;
+ assert( pTos>=p->aStack );
+ assert( i>=0 && i<p->nMem );
+ pMem = &p->aMem[i];
+ sqlite3VdbeMemIntegerify(pMem);
+ sqlite3VdbeMemIntegerify(pTos);
+ if( pMem->i<pTos->i){
+ pMem->i = pTos->i;
+ }
+ break;
+}
+#endif /* SQLITE_OMIT_AUTOINCREMENT */
+
+/* Opcode: MemIncr P1 P2 *
+**
+** Increment the integer valued memory cell P2 by the value in P1.
+**
+** It is illegal to use this instruction on a memory cell that does
+** not contain an integer. An assertion fault will result if you try.
+*/
+case OP_MemIncr: { /* no-push */
+ int i = pOp->p2;
+ Mem *pMem;
+ assert( i>=0 && i<p->nMem );
+ pMem = &p->aMem[i];
+ assert( pMem->flags==MEM_Int );
+ pMem->i += pOp->p1;
+ break;
+}
+
+/* Opcode: IfMemPos P1 P2 *
+**
+** If the value of memory cell P1 is 1 or greater, jump to P2.
+**
+** It is illegal to use this instruction on a memory cell that does
+** not contain an integer. An assertion fault will result if you try.
+*/
+case OP_IfMemPos: { /* no-push */
+ int i = pOp->p1;
+ Mem *pMem;
+ assert( i>=0 && i<p->nMem );
+ pMem = &p->aMem[i];
+ assert( pMem->flags==MEM_Int );
+ if( pMem->i>0 ){
+ pc = pOp->p2 - 1;
+ }
+ break;
+}
+
+/* Opcode: IfMemNeg P1 P2 *
+**
+** If the value of memory cell P1 is less than zero, jump to P2.
+**
+** It is illegal to use this instruction on a memory cell that does
+** not contain an integer. An assertion fault will result if you try.
+*/
+case OP_IfMemNeg: { /* no-push */
+ int i = pOp->p1;
+ Mem *pMem;
+ assert( i>=0 && i<p->nMem );
+ pMem = &p->aMem[i];
+ assert( pMem->flags==MEM_Int );
+ if( pMem->i<0 ){
+ pc = pOp->p2 - 1;
+ }
+ break;
+}
+
+/* Opcode: IfMemZero P1 P2 *
+**
+** If the value of memory cell P1 is exactly 0, jump to P2.
+**
+** It is illegal to use this instruction on a memory cell that does
+** not contain an integer. An assertion fault will result if you try.
+*/
+case OP_IfMemZero: { /* no-push */
+ int i = pOp->p1;
+ Mem *pMem;
+ assert( i>=0 && i<p->nMem );
+ pMem = &p->aMem[i];
+ assert( pMem->flags==MEM_Int );
+ if( pMem->i==0 ){
+ pc = pOp->p2 - 1;
+ }
+ break;
+}
+
+/* Opcode: MemNull P1 * *
+**
+** Store a NULL in memory cell P1
+*/
+case OP_MemNull: {
+ assert( pOp->p1>=0 && pOp->p1<p->nMem );
+ sqlite3VdbeMemSetNull(&p->aMem[pOp->p1]);
+ break;
+}
+
+/* Opcode: MemInt P1 P2 *
+**
+** Store the integer value P1 in memory cell P2.
+*/
+case OP_MemInt: {
+ assert( pOp->p2>=0 && pOp->p2<p->nMem );
+ sqlite3VdbeMemSetInt64(&p->aMem[pOp->p2], pOp->p1);
+ break;
+}
+
+/* Opcode: MemMove P1 P2 *
+**
+** Move the content of memory cell P2 over to memory cell P1.
+** Any prior content of P1 is erased. Memory cell P2 is left
+** containing a NULL.
+*/
+case OP_MemMove: {
+ assert( pOp->p1>=0 && pOp->p1<p->nMem );
+ assert( pOp->p2>=0 && pOp->p2<p->nMem );
+ rc = sqlite3VdbeMemMove(&p->aMem[pOp->p1], &p->aMem[pOp->p2]);
+ break;
+}
+
+/* Opcode: AggStep P1 P2 P3
+**
+** Execute the step function for an aggregate. The
+** function has P2 arguments. P3 is a pointer to the FuncDef
+** structure that specifies the function. Use memory location
+** P1 as the accumulator.
+**
+** The P2 arguments are popped from the stack.
+*/
+case OP_AggStep: { /* no-push */
+ int n = pOp->p2;
+ int i;
+ Mem *pMem, *pRec;
+ sqlite3_context ctx;
+ sqlite3_value **apVal;
+
+ assert( n>=0 );
+ pRec = &pTos[1-n];
+ assert( pRec>=p->aStack );
+ apVal = p->apArg;
+ assert( apVal || n==0 );
+ for(i=0; i<n; i++, pRec++){
+ apVal[i] = pRec;
+ storeTypeInfo(pRec, encoding);
+ }
+ ctx.pFunc = (FuncDef*)pOp->p3;
+ assert( pOp->p1>=0 && pOp->p1<p->nMem );
+ ctx.pMem = pMem = &p->aMem[pOp->p1];
+ pMem->n++;
+ ctx.s.flags = MEM_Null;
+ ctx.s.z = 0;
+ ctx.s.xDel = 0;
+ ctx.isError = 0;
+ ctx.pColl = 0;
+ if( ctx.pFunc->needCollSeq ){
+ assert( pOp>p->aOp );
+ assert( pOp[-1].p3type==P3_COLLSEQ );
+ assert( pOp[-1].opcode==OP_CollSeq );
+ ctx.pColl = (CollSeq *)pOp[-1].p3;
+ }
+ (ctx.pFunc->xStep)(&ctx, n, apVal);
+ popStack(&pTos, n);
+ if( ctx.isError ){
+ sqlite3SetString(&p->zErrMsg, sqlite3_value_text(&ctx.s), (char*)0);
+ rc = SQLITE_ERROR;
+ }
+ sqlite3VdbeMemRelease(&ctx.s);
+ break;
+}
+
+/* Opcode: AggFinal P1 P2 P3
+**
+** Execute the finalizer function for an aggregate. P1 is
+** the memory location that is the accumulator for the aggregate.
+**
+** P2 is the number of arguments that the step function takes and
+** P3 is a pointer to the FuncDef for this function. The P2
+** argument is not used by this opcode. It is only there to disambiguate
+** functions that can take varying numbers of arguments. The
+** P3 argument is only needed for the degenerate case where
+** the step function was not previously called.
+*/
+case OP_AggFinal: { /* no-push */
+ Mem *pMem;
+ assert( pOp->p1>=0 && pOp->p1<p->nMem );
+ pMem = &p->aMem[pOp->p1];
+ assert( (pMem->flags & ~(MEM_Null|MEM_Agg))==0 );
+ rc = sqlite3VdbeMemFinalize(pMem, (FuncDef*)pOp->p3);
+ if( rc==SQLITE_ERROR ){
+ sqlite3SetString(&p->zErrMsg, sqlite3_value_text(pMem), (char*)0);
+ }
+ break;
+}
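+
+/* A sketch of how AggStep and AggFinal cooperate for a one-argument
+** aggregate such as sum(x); the addresses, cursor and cell numbers are
+** hypothetical:
+**
+**     <loop>: Column   0 2           -- push x for the current row
+**             AggStep  1 1 (sum)     -- fold it into accumulator cell 1
+**             Next     0 <loop>      -- advance the scan and repeat
+**             AggFinal 1 1 (sum)     -- finalize accumulator cell 1
+**             MemLoad  1             -- push the result
+*/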
+
+
+#ifndef SQLITE_OMIT_VACUUM
+/* Opcode: Vacuum * * *
+**
+** Vacuum the entire database. This opcode will cause other virtual
+** machines to be created and run. It may not be called from within
+** a transaction.
+*/
+case OP_Vacuum: { /* no-push */
+ if( sqlite3SafetyOff(db) ) goto abort_due_to_misuse;
+ rc = sqlite3RunVacuum(&p->zErrMsg, db);
+ if( sqlite3SafetyOn(db) ) goto abort_due_to_misuse;
+ break;
+}
+#endif
+
+/* Opcode: Expire P1 * *
+**
+** Cause precompiled statements to become expired. An expired statement
+** fails with an error code of SQLITE_SCHEMA if it is ever executed
+** (via sqlite3_step()).
+**
+** If P1 is 0, then all SQL statements become expired. If P1 is non-zero,
+** then only the currently executing statement is affected.
+*/
+case OP_Expire: { /* no-push */
+ if( !pOp->p1 ){
+ sqlite3ExpirePreparedStatements(db);
+ }else{
+ p->expired = 1;
+ }
+ break;
+}
+
+#ifndef SQLITE_OMIT_SHARED_CACHE
+/* Opcode: TableLock P1 P2 P3
+**
+** Obtain a lock on a particular table. This instruction is only used when
+** the shared-cache feature is enabled.
+**
+** If P1 is not negative, then it is the index of the database
+** in sqlite3.aDb[] and a read-lock is required. If P1 is negative, then a
+** write-lock is required and the index of the database is the absolute
+** value of P1 minus one (iDb = abs(P1) - 1;).
+**
+** P2 contains the root-page of the table to lock.
+**
+** P3 contains a pointer to the name of the table being locked. This is only
+** used to generate an error message if the lock cannot be obtained.
+*/
+case OP_TableLock: { /* no-push */
+ int p1 = pOp->p1;
+ u8 isWriteLock = (p1<0);
+ if( isWriteLock ){
+ p1 = (-1*p1)-1;
+ }
+ rc = sqlite3BtreeLockTable(db->aDb[p1].pBt, pOp->p2, isWriteLock);
+ if( rc==SQLITE_LOCKED ){
+ const char *z = (const char *)pOp->p3;
+ sqlite3SetString(&p->zErrMsg, "database table is locked: ", z, (char*)0);
+ }
+ break;
+}
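+
+/* Worked example of the P1 encoding described above: a write-lock on the
+** database with index 2 in sqlite3.aDb[] is expressed as P1 = -(2+1) = -3.
+** The branch above then recovers p1 = (-1 * -3) - 1 = 2 with isWriteLock
+** set to 1, while a non-negative P1 is used directly as the database index
+** and requests only a read-lock.
+*/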
+#endif /* SQLITE_OMIT_SHARED_CACHE */
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Opcode: VBegin * * P3
+**
+** P3 is a pointer to an sqlite3_vtab structure. Call the xBegin method
+** for that table.
+*/
+case OP_VBegin: { /* no-push */
+ rc = sqlite3VtabBegin(db, (sqlite3_vtab *)pOp->p3);
+ break;
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Opcode: VCreate P1 * P3
+**
+** P3 is the name of a virtual table in database P1. Call the xCreate method
+** for that table.
+*/
+case OP_VCreate: { /* no-push */
+ rc = sqlite3VtabCallCreate(db, pOp->p1, pOp->p3, &p->zErrMsg);
+ break;
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Opcode: VDestroy P1 * P3
+**
+** P3 is the name of a virtual table in database P1. Call the xDestroy method
+** of that table.
+*/
+case OP_VDestroy: { /* no-push */
+ p->inVtabMethod = 2;
+ rc = sqlite3VtabCallDestroy(db, pOp->p1, pOp->p3);
+ p->inVtabMethod = 0;
+ break;
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Opcode: VOpen P1 * P3
+**
+** P3 is a pointer to a virtual table object, an sqlite3_vtab structure.
+** P1 is a cursor number. This opcode opens a cursor to the virtual
+** table and stores that cursor in P1.
+*/
+case OP_VOpen: { /* no-push */
+ Cursor *pCur = 0;
+ sqlite3_vtab_cursor *pVtabCursor = 0;
+
+ sqlite3_vtab *pVtab = (sqlite3_vtab *)(pOp->p3);
+ sqlite3_module *pModule = (sqlite3_module *)pVtab->pModule;
+
+ assert(pVtab && pModule);
+ if( sqlite3SafetyOff(db) ) goto abort_due_to_misuse;
+ rc = pModule->xOpen(pVtab, &pVtabCursor);
+ if( sqlite3SafetyOn(db) ) goto abort_due_to_misuse;
+ if( SQLITE_OK==rc ){
+ /* Initialise sqlite3_vtab_cursor base class */
+ pVtabCursor->pVtab = pVtab;
+
+ /* Initialise vdbe cursor object */
+ pCur = allocateCursor(p, pOp->p1, -1);
+ if( pCur ){
+ pCur->pVtabCursor = pVtabCursor;
+ pCur->pModule = pVtabCursor->pVtab->pModule;
+ }else{
+ pModule->xClose(pVtabCursor);
+ }
+ }
+ break;
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Opcode: VFilter P1 P2 P3
+**
+** P1 is a cursor opened using VOpen. P2 is an address to jump to if
+** the filtered result set is empty.
+**
+** P3 is either NULL or a string that was generated by the xBestIndex
+** method of the module. The interpretation of the P3 string is left
+** to the module implementation.
+**
+** This opcode invokes the xFilter method on the virtual table specified
+** by P1. The integer query plan parameter to xFilter is the top of the
+** stack. Next down on the stack is the argc parameter. Beneath the
+** next of stack are argc additional parameters which are passed to
+** xFilter as argv. The topmost parameter (i.e. 3rd element popped from
+** the stack) becomes argv[argc-1] when passed to xFilter.
+**
+** The integer query plan parameter, argc, and all argv stack values
+** are popped from the stack before this instruction completes.
+**
+** A jump is made to P2 if the result set after filtering would be
+** empty.
+*/
+case OP_VFilter: { /* no-push */
+ int nArg;
+
+ const sqlite3_module *pModule;
+
+ Cursor *pCur = p->apCsr[pOp->p1];
+ assert( pCur->pVtabCursor );
+ pModule = pCur->pVtabCursor->pVtab->pModule;
+
+ /* Grab the index number and argc parameters off the top of the stack. */
+ assert( (&pTos[-1])>=p->aStack );
+ assert( (pTos[0].flags&MEM_Int)!=0 && pTos[-1].flags==MEM_Int );
+ nArg = pTos[-1].i;
+
+ /* Invoke the xFilter method if one is defined. */
+ if( pModule->xFilter ){
+ int res;
+ int i;
+ Mem **apArg = p->apArg;
+ for(i = 0; i<nArg; i++){
+ apArg[i] = &pTos[i+1-2-nArg];
+ storeTypeInfo(apArg[i], 0);
+ }
+
+ if( sqlite3SafetyOff(db) ) goto abort_due_to_misuse;
+ p->inVtabMethod = 1;
+ rc = pModule->xFilter(pCur->pVtabCursor, pTos->i, pOp->p3, nArg, apArg);
+ p->inVtabMethod = 0;
+ if( rc==SQLITE_OK ){
+ res = pModule->xEof(pCur->pVtabCursor);
+ }
+ if( sqlite3SafetyOn(db) ) goto abort_due_to_misuse;
+
+ if( res ){
+ pc = pOp->p2 - 1;
+ }
+ }
+
+ /* Pop the index number, argc value and parameters off the stack */
+ popStack(&pTos, 2+nArg);
+ break;
+}
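+
+/* Illustration of the stack layout consumed above for nArg==2 (the values
+** themselves are hypothetical). Reading downward from the top of the stack:
+**
+**     pTos[0]    integer query plan (the idxNum argument to xFilter)
+**     pTos[-1]   argc == 2
+**     pTos[-2]   argv[1]   (the "3rd element popped" mentioned above)
+**     pTos[-3]   argv[0]
+**
+** This matches apArg[i] = &pTos[i+1-2-nArg] in the loop: i==0 maps to
+** pTos[-3] and i==1 maps to pTos[-2].
+*/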
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Opcode: VRowid P1 * *
+**
+** Push an integer onto the stack which is the rowid of
+** the virtual-table that the P1 cursor is pointing to.
+*/
+case OP_VRowid: {
+ const sqlite3_module *pModule;
+
+ Cursor *pCur = p->apCsr[pOp->p1];
+ assert( pCur->pVtabCursor );
+ pModule = pCur->pVtabCursor->pVtab->pModule;
+ if( pModule->xRowid==0 ){
+ sqlite3SetString(&p->zErrMsg, "Unsupported module operation: xRowid", 0);
+ rc = SQLITE_ERROR;
+ } else {
+ sqlite_int64 iRow;
+
+ if( sqlite3SafetyOff(db) ) goto abort_due_to_misuse;
+ rc = pModule->xRowid(pCur->pVtabCursor, &iRow);
+ if( sqlite3SafetyOn(db) ) goto abort_due_to_misuse;
+
+ pTos++;
+ pTos->flags = MEM_Int;
+ pTos->i = iRow;
+ }
+
+ break;
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Opcode: VColumn P1 P2 *
+**
+** Push onto the stack the value of the P2-th column of
+** the row of the virtual-table that the P1 cursor is pointing to.
+*/
+case OP_VColumn: {
+ const sqlite3_module *pModule;
+
+ Cursor *pCur = p->apCsr[pOp->p1];
+ assert( pCur->pVtabCursor );
+ pModule = pCur->pVtabCursor->pVtab->pModule;
+ if( pModule->xColumn==0 ){
+ sqlite3SetString(&p->zErrMsg, "Unsupported module operation: xColumn", 0);
+ rc = SQLITE_ERROR;
+ } else {
+ sqlite3_context sContext;
+ memset(&sContext, 0, sizeof(sContext));
+ sContext.s.flags = MEM_Null;
+ if( sqlite3SafetyOff(db) ) goto abort_due_to_misuse;
+ rc = pModule->xColumn(pCur->pVtabCursor, &sContext, pOp->p2);
+
+ /* Copy the result of the function to the top of the stack. We
+ ** do this regardless of whether or not an error occurred to ensure any
+ ** dynamic allocation in sContext.s (a Mem struct) is released.
+ */
+ sqlite3VdbeChangeEncoding(&sContext.s, encoding);
+ pTos++;
+ pTos->flags = 0;
+ sqlite3VdbeMemMove(pTos, &sContext.s);
+
+ if( sqlite3SafetyOn(db) ) goto abort_due_to_misuse;
+ }
+
+ break;
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Opcode: VNext P1 P2 *
+**
+** Advance virtual table P1 to the next row in its result set and
+** jump to instruction P2. Or, if the virtual table has reached
+** the end of its result set, then fall through to the next instruction.
+*/
+case OP_VNext: { /* no-push */
+ const sqlite3_module *pModule;
+ int res = 0;
+
+ Cursor *pCur = p->apCsr[pOp->p1];
+ assert( pCur->pVtabCursor );
+ pModule = pCur->pVtabCursor->pVtab->pModule;
+ if( pModule->xNext==0 ){
+ sqlite3SetString(&p->zErrMsg, "Unsupported module operation: xNext", 0);
+ rc = SQLITE_ERROR;
+ } else {
+ /* Invoke the xNext() method of the module. There is no way for the
+ ** underlying implementation to return an error if one occurs during
+ ** xNext(). Instead, if an error occurs, true is returned (indicating that
+ ** data is available) and the error code is returned when xColumn or
+ ** some other method is next invoked on the same virtual table cursor.
+ */
+ if( sqlite3SafetyOff(db) ) goto abort_due_to_misuse;
+ p->inVtabMethod = 1;
+ rc = pModule->xNext(pCur->pVtabCursor);
+ p->inVtabMethod = 0;
+ if( rc==SQLITE_OK ){
+ res = pModule->xEof(pCur->pVtabCursor);
+ }
+ if( sqlite3SafetyOn(db) ) goto abort_due_to_misuse;
+
+ if( !res ){
+ /* If there is data, jump to P2 */
+ pc = pOp->p2 - 1;
+ }
+ }
+
+ break;
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/* Opcode: VUpdate P1 P2 P3
+**
+** P3 is a pointer to a virtual table object, an sqlite3_vtab structure.
+** This opcode invokes the corresponding xUpdate method. P2 values
+** are taken from the stack to pass to the xUpdate invocation. The
+** value on the top of the stack corresponds to the P2-th element
+** of the argv array passed to xUpdate.
+**
+** The xUpdate method will do a DELETE or an INSERT or both.
+** The argv[0] element (which corresponds to the P2-th element down
+** on the stack) is the rowid of a row to delete. If argv[0] is
+** NULL then no deletion occurs. The argv[1] element is the rowid
+** of the new row. This can be NULL to have the virtual table
+** select the new rowid for itself. The higher elements in the
+** stack are the values of columns in the new row.
+**
+** If P2==1 then no insert is performed. argv[0] is the rowid of
+** a row to delete.
+**
+** P1 is a boolean flag. If it is set to true and the xUpdate call
+** is successful, then the value returned by sqlite3_last_insert_rowid()
+** is set to the value of the rowid for the row just inserted.
+*/
+case OP_VUpdate: { /* no-push */
+ sqlite3_vtab *pVtab = (sqlite3_vtab *)(pOp->p3);
+ sqlite3_module *pModule = (sqlite3_module *)pVtab->pModule;
+ int nArg = pOp->p2;
+ assert( pOp->p3type==P3_VTAB );
+ if( pModule->xUpdate==0 ){
+ sqlite3SetString(&p->zErrMsg, "read-only table", 0);
+ rc = SQLITE_ERROR;
+ }else{
+ int i;
+ sqlite_int64 rowid;
+ Mem **apArg = p->apArg;
+ Mem *pX = &pTos[1-nArg];
+ for(i = 0; i<nArg; i++, pX++){
+ storeTypeInfo(pX, 0);
+ apArg[i] = pX;
+ }
+ if( sqlite3SafetyOff(db) ) goto abort_due_to_misuse;
+ sqlite3VtabLock(pVtab);
+ rc = pModule->xUpdate(pVtab, nArg, apArg, &rowid);
+ sqlite3VtabUnlock(pVtab);
+ if( sqlite3SafetyOn(db) ) goto abort_due_to_misuse;
+ if( pOp->p1 && rc==SQLITE_OK ){
+ assert( nArg>1 && apArg[0] && (apArg[0]->flags&MEM_Null) );
+ db->lastRowid = rowid;
+ }
+ }
+ popStack(&pTos, nArg);
+ break;
+}
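+
+/* Example of the argv layout handed to xUpdate above, for an UPDATE of a
+** two-column virtual table (so nArg==4; purely illustrative):
+**
+**     apArg[0]   rowid of the row being replaced (NULL means pure INSERT)
+**     apArg[1]   rowid for the new row (NULL lets the vtab choose one)
+**     apArg[2]   new value for the first column
+**     apArg[3]   new value for the second column
+**
+** A pure DELETE is the degenerate case P2==1, where apArg[0] is the rowid
+** to delete and no new row data follows.
+*/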
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+/* Any other opcode is illegal...
+*/
+default: {
+ assert( 0 );
+ break;
+}
+
+/*****************************************************************************
+** The cases of the switch statement above this line should all be indented
+** by 6 spaces. But the left-most 6 spaces have been removed to improve the
+** readability. From this point on down, the normal indentation rules are
+** restored.
+*****************************************************************************/
+ }
+
+ /* Make sure the stack limit was not exceeded */
+ assert( pTos<=pStackLimit );
+
+#ifdef VDBE_PROFILE
+ {
+ long long elapse = hwtime() - start;
+ pOp->cycles += elapse;
+ pOp->cnt++;
+#if 0
+ fprintf(stdout, "%10lld ", elapse);
+ sqlite3VdbePrintOp(stdout, origPc, &p->aOp[origPc]);
+#endif
+ }
+#endif
+
+ /* The following code adds nothing to the actual functionality
+ ** of the program. It is only here for testing and debugging.
+ ** On the other hand, it does burn CPU cycles every time through
+ ** the evaluator loop. So we can leave it out when NDEBUG is defined.
+ */
+#ifndef NDEBUG
+ /* Sanity checking on the top element of the stack. If the previous
+ ** instruction was VNoChange, then the flags field of the top
+ ** of the stack is set to 0. This is technically invalid for a memory
+ ** cell, so avoid calling MemSanity() in this case.
+ */
+ if( pTos>=p->aStack && pTos->flags ){
+ sqlite3VdbeMemSanity(pTos);
+ }
+ assert( pc>=-1 && pc<p->nOp );
+#ifdef SQLITE_DEBUG
+ /* Code for tracing the vdbe stack. */
+ if( p->trace && pTos>=p->aStack ){
+ int i;
+ fprintf(p->trace, "Stack:");
+ for(i=0; i>-5 && &pTos[i]>=p->aStack; i--){
+ if( pTos[i].flags & MEM_Null ){
+ fprintf(p->trace, " NULL");
+ }else if( (pTos[i].flags & (MEM_Int|MEM_Str))==(MEM_Int|MEM_Str) ){
+ fprintf(p->trace, " si:%lld", pTos[i].i);
+ }else if( pTos[i].flags & MEM_Int ){
+ fprintf(p->trace, " i:%lld", pTos[i].i);
+ }else if( pTos[i].flags & MEM_Real ){
+ fprintf(p->trace, " r:%g", pTos[i].r);
+ }else{
+ char zBuf[100];
+ sqlite3VdbeMemPrettyPrint(&pTos[i], zBuf);
+ fprintf(p->trace, " ");
+ fprintf(p->trace, "%s", zBuf);
+ }
+ }
+ if( rc!=0 ) fprintf(p->trace," rc=%d",rc);
+ fprintf(p->trace,"\n");
+ }
+#endif /* SQLITE_DEBUG */
+#endif /* NDEBUG */
+ } /* The end of the for(;;) loop that loops through opcodes */
+
+ /* If we reach this point, it means that execution is finished.
+ */
+vdbe_halt:
+ if( rc ){
+ p->rc = rc;
+ rc = SQLITE_ERROR;
+ }else{
+ rc = SQLITE_DONE;
+ }
+ sqlite3VdbeHalt(p);
+ p->pTos = pTos;
+ return rc;
+
+ /* Jump to here if a malloc() fails. It's hard to get a malloc()
+ ** to fail on a modern VM computer, so this code is untested.
+ */
+no_mem:
+ sqlite3SetString(&p->zErrMsg, "out of memory", (char*)0);
+ rc = SQLITE_NOMEM;
+ goto vdbe_halt;
+
+ /* Jump to here for an SQLITE_MISUSE error.
+ */
+abort_due_to_misuse:
+ rc = SQLITE_MISUSE;
+ /* Fall through into abort_due_to_error */
+
+ /* Jump to here for any other kind of fatal error. The "rc" variable
+ ** should hold the error number.
+ */
+abort_due_to_error:
+ if( p->zErrMsg==0 ){
+ if( sqlite3MallocFailed() ) rc = SQLITE_NOMEM;
+ sqlite3SetString(&p->zErrMsg, sqlite3ErrStr(rc), (char*)0);
+ }
+ goto vdbe_halt;
+
+ /* Jump to here if the sqlite3_interrupt() API sets the interrupt
+ ** flag.
+ */
+abort_due_to_interrupt:
+ assert( db->u1.isInterrupted );
+ if( db->magic!=SQLITE_MAGIC_BUSY ){
+ rc = SQLITE_MISUSE;
+ }else{
+ rc = SQLITE_INTERRUPT;
+ }
+ p->rc = rc;
+ sqlite3SetString(&p->zErrMsg, sqlite3ErrStr(rc), (char*)0);
+ goto vdbe_halt;
+}
Added: freeswitch/trunk/libs/sqlite/src/vdbe.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/vdbe.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,146 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Header file for the Virtual DataBase Engine (VDBE)
+**
+** This header defines the interface to the virtual database engine
+** or VDBE. The VDBE implements an abstract machine that runs a
+** simple program to access and modify the underlying database.
+**
+** $Id: vdbe.h,v 1.105 2006/06/13 23:51:35 drh Exp $
+*/
+#ifndef _SQLITE_VDBE_H_
+#define _SQLITE_VDBE_H_
+#include <stdio.h>
+
+/*
+** A single VDBE is an opaque structure named "Vdbe". Only routines
+** in the source file sqliteVdbe.c are allowed to see the insides
+** of this structure.
+*/
+typedef struct Vdbe Vdbe;
+
+/*
+** A single instruction of the virtual machine has an opcode
+** and as many as three operands. The instruction is recorded
+** as an instance of the following structure:
+*/
+struct VdbeOp {
+ u8 opcode; /* What operation to perform */
+ int p1; /* First operand */
+ int p2; /* Second parameter (often the jump destination) */
+ char *p3; /* Third parameter */
+ int p3type; /* One of the P3_xxx constants defined below */
+#ifdef VDBE_PROFILE
+ int cnt; /* Number of times this instruction was executed */
+ long long cycles; /* Total time spent executing this instruction */
+#endif
+};
+typedef struct VdbeOp VdbeOp;
+
+/*
+** A smaller version of VdbeOp used for the VdbeAddOpList() function because
+** it takes up less space.
+*/
+struct VdbeOpList {
+ u8 opcode; /* What operation to perform */
+ signed char p1; /* First operand */
+ short int p2; /* Second parameter (often the jump destination) */
+ char *p3; /* Third parameter */
+};
+typedef struct VdbeOpList VdbeOpList;
+
+/*
+** Allowed values of VdbeOp.p3type
+*/
+#define P3_NOTUSED 0 /* The P3 parameter is not used */
+#define P3_DYNAMIC (-1) /* Pointer to a string obtained from sqliteMalloc() */
+#define P3_STATIC (-2) /* Pointer to a static string */
+#define P3_COLLSEQ (-4) /* P3 is a pointer to a CollSeq structure */
+#define P3_FUNCDEF (-5) /* P3 is a pointer to a FuncDef structure */
+#define P3_KEYINFO (-6) /* P3 is a pointer to a KeyInfo structure */
+#define P3_VDBEFUNC (-7) /* P3 is a pointer to a VdbeFunc structure */
+#define P3_MEM (-8) /* P3 is a pointer to a Mem* structure */
+#define P3_TRANSIENT (-9) /* P3 is a pointer to a transient string */
+#define P3_VTAB (-10) /* P3 is a pointer to an sqlite3_vtab structure */
+#define P3_MPRINTF (-11) /* P3 is a string obtained from sqlite3_mprintf() */
+
+/* When adding a P3 argument using P3_KEYINFO, a copy of the KeyInfo structure
+** is made. That copy is freed when the Vdbe is finalized. But if the
+** argument is P3_KEYINFO_HANDOFF, the passed in pointer is used. It still
+** gets freed when the Vdbe is finalized so it still should be obtained
+** from a single sqliteMalloc(). But no copy is made and the calling
+** function should *not* try to free the KeyInfo.
+*/
+#define P3_KEYINFO_HANDOFF (-9)
+
+/*
+** The Vdbe.aColName array contains 5n Mem structures, where n is the
+** number of columns of data returned by the statement.
+*/
+#define COLNAME_NAME 0
+#define COLNAME_DECLTYPE 1
+#define COLNAME_DATABASE 2
+#define COLNAME_TABLE 3
+#define COLNAME_COLUMN 4
+#define COLNAME_N 5 /* Number of COLNAME_xxx symbols */
+
+/*
+** The following macro converts a relative address in the p2 field
+** of a VdbeOp structure into a negative number so that
+** sqlite3VdbeAddOpList() knows that the address is relative. Calling
+** the macro again restores the address.
+*/
+#define ADDR(X) (-1-(X))
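+
+/* For example, ADDR(3)==-4 and ADDR(-4)==3, so applying the macro a second
+** time restores the original address; the same macro therefore serves to
+** both encode and decode relative jump targets.
+*/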
+
+/*
+** The makefile scans the vdbe.c source file and creates the "opcodes.h"
+** header file that defines a number for each opcode used by the VDBE.
+*/
+#include "opcodes.h"
+
+/*
+** Prototypes for the VDBE interface. See comments on the implementation
+** for a description of what each of these routines does.
+*/
+Vdbe *sqlite3VdbeCreate(sqlite3*);
+void sqlite3VdbeCreateCallback(Vdbe*, int*);
+int sqlite3VdbeAddOp(Vdbe*,int,int,int);
+int sqlite3VdbeOp3(Vdbe*,int,int,int,const char *zP3,int);
+int sqlite3VdbeAddOpList(Vdbe*, int nOp, VdbeOpList const *aOp);
+void sqlite3VdbeChangeP1(Vdbe*, int addr, int P1);
+void sqlite3VdbeChangeP2(Vdbe*, int addr, int P2);
+void sqlite3VdbeJumpHere(Vdbe*, int addr);
+void sqlite3VdbeChangeToNoop(Vdbe*, int addr, int N);
+void sqlite3VdbeChangeP3(Vdbe*, int addr, const char *zP1, int N);
+VdbeOp *sqlite3VdbeGetOp(Vdbe*, int);
+int sqlite3VdbeMakeLabel(Vdbe*);
+void sqlite3VdbeDelete(Vdbe*);
+void sqlite3VdbeMakeReady(Vdbe*,int,int,int,int);
+int sqlite3VdbeFinalize(Vdbe*);
+void sqlite3VdbeResolveLabel(Vdbe*, int);
+int sqlite3VdbeCurrentAddr(Vdbe*);
+void sqlite3VdbeTrace(Vdbe*,FILE*);
+int sqlite3VdbeReset(Vdbe*);
+int sqliteVdbeSetVariables(Vdbe*,int,const char**);
+void sqlite3VdbeSetNumCols(Vdbe*,int);
+int sqlite3VdbeSetColName(Vdbe*, int, int, const char *, int);
+void sqlite3VdbeCountChanges(Vdbe*);
+sqlite3 *sqlite3VdbeDb(Vdbe*);
+
+#ifndef NDEBUG
+ void sqlite3VdbeComment(Vdbe*, const char*, ...);
+# define VdbeComment(X) sqlite3VdbeComment X
+#else
+# define VdbeComment(X)
+#endif
+
+#endif
Added: freeswitch/trunk/libs/sqlite/src/vdbeInt.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/vdbeInt.h Tue Dec 19 15:11:50 2006
@@ -0,0 +1,403 @@
+/*
+** 2003 September 6
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This is the header file for information that is private to the
+** VDBE. This information used to all be at the top of the single
+** source code file "vdbe.c". When that file became too big (over
+** 6000 lines long) it was split up into several smaller files and
+** this header information was factored out.
+*/
+
+/*
+** intToKey() and keyToInt() used to transform the rowid. But with
+** the latest versions of the design they are no-ops.
+*/
+#define keyToInt(X) (X)
+#define intToKey(X) (X)
+
+/*
+** The makefile scans the vdbe.c source file and creates the following
+** array of string constants which are the names of all VDBE opcodes. This
+** array is defined in a separate source code file named opcode.c which is
+** automatically generated by the makefile.
+*/
+extern char *sqlite3OpcodeNames[];
+
+/*
+** SQL is translated into a sequence of instructions to be
+** executed by a virtual machine. Each instruction is an instance
+** of the following structure.
+*/
+typedef struct VdbeOp Op;
+
+/*
+** Boolean values
+*/
+typedef unsigned char Bool;
+
+/*
+** A cursor is a pointer into a single BTree within a database file.
+** The cursor can seek to a BTree entry with a particular key, or
+** loop over all entries of the Btree. You can also insert new BTree
+** entries or retrieve the key or data from the entry that the cursor
+** is currently pointing to.
+**
+** Every cursor that the virtual machine has open is represented by an
+** instance of the following structure.
+**
+** If the Cursor.isTriggerRow flag is set it means that this cursor is
+** really a single row that represents the NEW or OLD pseudo-table of
+** a row trigger. The data for the row is stored in Cursor.pData and
+** the rowid is in Cursor.iKey.
+*/
+struct Cursor {
+ BtCursor *pCursor; /* The cursor structure of the backend */
+ int iDb; /* Index of cursor database in db->aDb[] (or -1) */
+ i64 lastRowid; /* Last rowid from a Next or NextIdx operation */
+ i64 nextRowid; /* Next rowid returned by OP_NewRowid */
+ Bool zeroed; /* True if zeroed out and ready for reuse */
+ Bool rowidIsValid; /* True if lastRowid is valid */
+ Bool atFirst; /* True if pointing to first entry */
+ Bool useRandomRowid; /* Generate new record numbers semi-randomly */
+ Bool nullRow; /* True if pointing to a row with no data */
+ Bool nextRowidValid; /* True if the nextRowid field is valid */
+ Bool pseudoTable; /* This is a NEW or OLD pseudo-table of a trigger */
+ Bool deferredMoveto; /* A call to sqlite3BtreeMoveto() is needed */
+ Bool isTable; /* True if a table requiring integer keys */
+ Bool isIndex; /* True if an index containing keys only - no data */
+ u8 bogusIncrKey; /* Something for pIncrKey to point to if pKeyInfo==0 */
+ i64 movetoTarget; /* Argument to the deferred sqlite3BtreeMoveto() */
+ Btree *pBt; /* Separate file holding temporary table */
+ int nData; /* Number of bytes in pData */
+ char *pData; /* Data for a NEW or OLD pseudo-table */
+ i64 iKey; /* Key for the NEW or OLD pseudo-table row */
+ u8 *pIncrKey; /* Pointer to pKeyInfo->incrKey */
+ KeyInfo *pKeyInfo; /* Info about index keys needed by index cursors */
+ int nField; /* Number of fields in the header */
+ i64 seqCount; /* Sequence counter */
+ sqlite3_vtab_cursor *pVtabCursor; /* The cursor for a virtual table */
+ const sqlite3_module *pModule; /* Module for cursor pVtabCursor */
+
+ /* Cached information about the header for the data record that the
+ ** cursor is currently pointing to. Only valid if cacheValid is true.
+ ** aRow might point to (ephemeral) data for the current row, or it might
+ ** be NULL.
+ */
+ int cacheStatus; /* Cache is valid if this matches Vdbe.cacheCtr */
+ int payloadSize; /* Total number of bytes in the record */
+ u32 *aType; /* Type values for all entries in the record */
+ u32 *aOffset; /* Cached offsets to the start of each column's data */
+ u8 *aRow; /* Data for the current row, if all on one page */
+};
+typedef struct Cursor Cursor;
+
+/*
+** Number of bytes of string storage space available to each stack
+** layer without having to malloc. NBFS is short for Number of Bytes
+** For Strings.
+*/
+#define NBFS 32
+
+/*
+** A value for Cursor.cacheStatus that means the cache is always invalid.
+*/
+#define CACHE_STALE 0
+
+/*
+** Internally, the vdbe manipulates nearly all SQL values as Mem
+** structures. Each Mem struct may cache multiple representations (string,
+** integer etc.) of the same value. A value (and therefore Mem structure)
+** has the following properties:
+**
+** Each value has a manifest type. The manifest type of the value stored
+** in a Mem struct is returned by the MemType(Mem*) macro. The type is
+** one of SQLITE_NULL, SQLITE_INTEGER, SQLITE_REAL, SQLITE_TEXT or
+** SQLITE_BLOB.
+*/
+struct Mem {
+ i64 i; /* Integer value. Or FuncDef* when flags==MEM_Agg */
+ double r; /* Real value */
+ char *z; /* String or BLOB value */
+ int n; /* Number of characters in string value, including '\0' */
+ u16 flags; /* Some combination of MEM_Null, MEM_Str, MEM_Dyn, etc. */
+ u8 type; /* One of MEM_Null, MEM_Str, etc. */
+ u8 enc; /* TEXT_Utf8, TEXT_Utf16le, or TEXT_Utf16be */
+ void (*xDel)(void *); /* If not null, call this function to delete Mem.z */
+ char zShort[NBFS]; /* Space for short strings */
+};
+typedef struct Mem Mem;
+
+/* One or more of the following flags are set to indicate the validOK
+** representations of the value stored in the Mem struct.
+**
+** If the MEM_Null flag is set, then the value is an SQL NULL value.
+** No other flags may be set in this case.
+**
+** If the MEM_Str flag is set then Mem.z points at a string representation.
+** Usually this is encoded in the same unicode encoding as the main
+** database (see below for exceptions). If the MEM_Term flag is also
+** set, then the string is nul terminated. The MEM_Int and MEM_Real
+** flags may coexist with the MEM_Str flag.
+**
+** Multiple of these values can appear in Mem.flags. But only one
+** at a time can appear in Mem.type.
+*/
+#define MEM_Null 0x0001 /* Value is NULL */
+#define MEM_Str 0x0002 /* Value is a string */
+#define MEM_Int 0x0004 /* Value is an integer */
+#define MEM_Real 0x0008 /* Value is a real number */
+#define MEM_Blob 0x0010 /* Value is a BLOB */
+
+/* Whenever Mem contains a valid string or blob representation, one of
+** the following flags must be set to determine the memory management
+** policy for Mem.z. The MEM_Term flag tells us whether or not the
+** string is \000 or \u0000 terminated
+*/
+#define MEM_Term 0x0020 /* String rep is nul terminated */
+#define MEM_Dyn 0x0040 /* Need to call sqliteFree() on Mem.z */
+#define MEM_Static 0x0080 /* Mem.z points to a static string */
+#define MEM_Ephem 0x0100 /* Mem.z points to an ephemeral string */
+#define MEM_Short 0x0200 /* Mem.z points to Mem.zShort */
+#define MEM_Agg 0x0400 /* Mem.z points to an agg function context */
+
+
+/* A VdbeFunc is just a FuncDef (defined in sqliteInt.h) that contains
+** additional information about auxiliary data bound to arguments
+** of the function. This is used to implement the sqlite3_get_auxdata()
+** and sqlite3_set_auxdata() APIs. The "auxdata" is some auxiliary data
+** that can be associated with a constant argument to a function. This
+** allows functions such as "regexp" to compile their constant regular
+** expression argument once and reuse the compiled code for multiple
+** invocations.
+*/
+struct VdbeFunc {
+ FuncDef *pFunc; /* The definition of the function */
+ int nAux; /* Number of entries allocated for apAux[] */
+ struct AuxData {
+ void *pAux; /* Aux data for the i-th argument */
+ void (*xDelete)(void *); /* Destructor for the aux data */
+ } apAux[1]; /* One slot for each function argument */
+};
+typedef struct VdbeFunc VdbeFunc;
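+
+/*
+** Illustrative sketch of the auxdata pattern (hypothetical user function,
+** not part of this file): a REGEXP implementation that compiles its
+** constant pattern argument once with POSIX regcomp() and reuses the
+** compiled form on every row.  Assumes <regex.h>, <stdlib.h>, and
+** registration via
+** sqlite3_create_function(db, "regexp", 2, SQLITE_UTF8, 0, regexpFunc, 0, 0).
+**
+**   static void freeRegex(void *p){ regfree((regex_t*)p); free(p); }
+**
+**   static void regexpFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+**     regex_t *pRe = (regex_t*)sqlite3_get_auxdata(ctx, 0);
+**     if( pRe==0 ){
+**       pRe = (regex_t*)malloc(sizeof(*pRe));
+**       if( pRe==0
+**        || regcomp(pRe, (const char*)sqlite3_value_text(argv[0]), REG_EXTENDED) ){
+**         free(pRe);
+**         sqlite3_result_error(ctx, "bad regexp", -1);
+**         return;
+**       }
+**       sqlite3_set_auxdata(ctx, 0, pRe, freeRegex);  // kept while arg 0 is constant
+**     }
+**     sqlite3_result_int(ctx,
+**         regexec(pRe, (const char*)sqlite3_value_text(argv[1]), 0, 0, 0)==0 );
+**   }
+*/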
+
+/*
+** The "context" argument for a installable function. A pointer to an
+** instance of this structure is the first argument to the routines used
+** implement the SQL functions.
+**
+** There is a typedef for this structure in sqlite.h. So all routines,
+** even the public interface to SQLite, can use a pointer to this structure.
+** But this file is the only place where the internal details of this
+** structure are known.
+**
+** This structure is defined inside of vdbeInt.h because it uses substructures
+** (Mem) which are only defined there.
+*/
+struct sqlite3_context {
+ FuncDef *pFunc; /* Pointer to function information. MUST BE FIRST */
+ VdbeFunc *pVdbeFunc; /* Auxiliary data, if created. */
+ Mem s; /* The return value is stored here */
+ Mem *pMem; /* Memory cell used to store aggregate context */
+ u8 isError; /* Set to true for an error */
+ CollSeq *pColl; /* Collating sequence */
+};
+
+/*
+** A Set structure is used for quick testing to see if a value
+** is part of a small set. Sets are used to implement code like
+** this:
+** x.y IN ('hi','hoo','hum')
+*/
+typedef struct Set Set;
+struct Set {
+ Hash hash; /* A set is just a hash table */
+ HashElem *prev; /* Previously accessed hash element */
+};
+
+/*
+** A FifoPage structure holds a single page of values. Pages are arranged
+** in a list.
+*/
+typedef struct FifoPage FifoPage;
+struct FifoPage {
+ int nSlot; /* Number of entries in aSlot[] */
+ int iWrite; /* Push the next value into this entry in aSlot[] */
+ int iRead; /* Read the next value from this entry in aSlot[] */
+ FifoPage *pNext; /* Next page in the fifo */
+ i64 aSlot[1]; /* One or more slots for rowid values */
+};
+
+/*
+** The Fifo structure is typedef-ed in vdbeInt.h. But the implementation
+** of that structure is private to this file.
+**
+** The Fifo structure describes the entire fifo.
+*/
+typedef struct Fifo Fifo;
+struct Fifo {
+ int nEntry; /* Total number of entries */
+ FifoPage *pFirst; /* First page on the list */
+ FifoPage *pLast; /* Last page on the list */
+};
+
+/*
+** A Context stores the last insert rowid, the last statement change count,
+** and the current statement change count (i.e. changes since last statement).
+** The current Fifo of rowids is also stored in the context.
+** Elements of Context structure type make up the ContextStack, which is
+** updated by the ContextPush and ContextPop opcodes (used by triggers).
+** The context is pushed before executing a trigger and popped when the
+** trigger finishes.
+*/
+typedef struct Context Context;
+struct Context {
+ i64 lastRowid; /* Last insert rowid (sqlite3.lastRowid) */
+ int nChange; /* Statement changes (Vdbe.nChanges) */
+ Fifo sFifo; /* Records that will participate in a DELETE or UPDATE */
+};
+
+/*
+** An instance of the virtual machine. This structure contains the complete
+** state of the virtual machine.
+**
+** The "sqlite3_stmt" structure pointer that is returned by sqlite3_compile()
+** is really a pointer to an instance of this structure.
+**
+** The Vdbe.inVtabMethod variable is set to non-zero for the duration of
+** any virtual table method invocations made by the vdbe program. It is
+** set to 2 for xDestroy method calls and 1 for all other methods. This
+** variable is used for two purposes: to allow xDestroy methods to execute
+** "DROP TABLE" statements and to prevent some nasty side effects of
+** malloc failure when SQLite is invoked recursively by a virtual table
+** method function.
+*/
+struct Vdbe {
+ sqlite3 *db; /* The whole database */
+ Vdbe *pPrev,*pNext; /* Linked list of VDBEs with the same Vdbe.db */
+ FILE *trace; /* Write an execution trace here, if not NULL */
+ int nOp; /* Number of instructions in the program */
+ int nOpAlloc; /* Number of slots allocated for aOp[] */
+ Op *aOp; /* Space to hold the virtual machine's program */
+ int nLabel; /* Number of labels used */
+ int nLabelAlloc; /* Number of slots allocated in aLabel[] */
+ int *aLabel; /* Space to hold the labels */
+ Mem *aStack; /* The operand stack, except string values */
+ Mem *pTos; /* Top entry in the operand stack */
+ Mem **apArg; /* Arguments to currently executing user function */
+ Mem *aColName; /* Column names to return */
+ int nCursor; /* Number of slots in apCsr[] */
+ Cursor **apCsr; /* One element of this array for each open cursor */
+ int nVar; /* Number of entries in aVar[] */
+ Mem *aVar; /* Values for the OP_Variable opcode. */
+ char **azVar; /* Name of variables */
+ int okVar; /* True if azVar[] has been initialized */
+ int magic; /* Magic number for sanity checking */
+ int nMem; /* Number of memory locations currently allocated */
+ Mem *aMem; /* The memory locations */
+ int nCallback; /* Number of callbacks invoked so far */
+ int cacheCtr; /* Cursor row cache generation counter */
+ Fifo sFifo; /* A list of ROWIDs */
+ int contextStackTop; /* Index of top element in the context stack */
+ int contextStackDepth; /* The size of the "context" stack */
+ Context *contextStack; /* Stack used by opcodes ContextPush & ContextPop*/
+ int pc; /* The program counter */
+ int rc; /* Value to return */
+ unsigned uniqueCnt; /* Used by OP_MakeRecord when P2!=0 */
+ int errorAction; /* Recovery action to do in case of an error */
+ int inTempTrans; /* True if temp database is transactioned */
+ int returnStack[100]; /* Return address stack for OP_Gosub & OP_Return */
+ int returnDepth; /* Next unused element in returnStack[] */
+ int nResColumn; /* Number of columns in one row of the result set */
+ char **azResColumn; /* Values for one row of result */
+ int popStack; /* Pop the stack this much on entry to VdbeExec() */
+ char *zErrMsg; /* Error message written here */
+ u8 resOnStack; /* True if there are result values on the stack */
+ u8 explain; /* True if EXPLAIN present on SQL command */
+ u8 changeCntOn; /* True to update the change-counter */
+ u8 aborted; /* True if ROLLBACK in another VM causes an abort */
+ u8 expired; /* True if the VM needs to be recompiled */
+ u8 minWriteFileFormat; /* Minimum file format for writable database files */
+ u8 inVtabMethod; /* See comments above */
+ int nChange; /* Number of db changes made since last reset */
+ i64 startTime; /* Time when query started - used for profiling */
+#ifdef SQLITE_SSE
+ int fetchId; /* Statement number used by sqlite3_fetch_statement */
+ int lru; /* Counter used for LRU cache replacement */
+#endif
+};
+
+/*
+** The following are allowed values for Vdbe.magic
+*/
+#define VDBE_MAGIC_INIT 0x26bceaa5 /* Building a VDBE program */
+#define VDBE_MAGIC_RUN 0xbdf20da3 /* VDBE is ready to execute */
+#define VDBE_MAGIC_HALT 0x519c2973 /* VDBE has completed execution */
+#define VDBE_MAGIC_DEAD 0xb606c3c8 /* The VDBE has been deallocated */
+
+/*
+** Function prototypes
+*/
+void sqlite3VdbeFreeCursor(Vdbe *, Cursor*);
+void sqliteVdbePopStack(Vdbe*,int);
+int sqlite3VdbeCursorMoveto(Cursor*);
+#if defined(SQLITE_DEBUG) || defined(VDBE_PROFILE)
+void sqlite3VdbePrintOp(FILE*, int, Op*);
+#endif
+#ifdef SQLITE_DEBUG
+void sqlite3VdbePrintSql(Vdbe*);
+#endif
+int sqlite3VdbeSerialTypeLen(u32);
+u32 sqlite3VdbeSerialType(Mem*, int);
+int sqlite3VdbeSerialPut(unsigned char*, Mem*, int);
+int sqlite3VdbeSerialGet(const unsigned char*, u32, Mem*);
+void sqlite3VdbeDeleteAuxData(VdbeFunc*, int);
+
+int sqlite2BtreeKeyCompare(BtCursor *, const void *, int, int, int *);
+int sqlite3VdbeIdxKeyCompare(Cursor*, int , const unsigned char*, int*);
+int sqlite3VdbeIdxRowid(BtCursor *, i64 *);
+int sqlite3MemCompare(const Mem*, const Mem*, const CollSeq*);
+int sqlite3VdbeRecordCompare(void*,int,const void*,int, const void*);
+int sqlite3VdbeIdxRowidLen(const u8*);
+int sqlite3VdbeExec(Vdbe*);
+int sqlite3VdbeList(Vdbe*);
+int sqlite3VdbeHalt(Vdbe*);
+int sqlite3VdbeChangeEncoding(Mem *, int);
+int sqlite3VdbeMemCopy(Mem*, const Mem*);
+void sqlite3VdbeMemShallowCopy(Mem*, const Mem*, int);
+int sqlite3VdbeMemMove(Mem*, Mem*);
+int sqlite3VdbeMemNulTerminate(Mem*);
+int sqlite3VdbeMemSetStr(Mem*, const char*, int, u8, void(*)(void*));
+void sqlite3VdbeMemSetInt64(Mem*, i64);
+void sqlite3VdbeMemSetDouble(Mem*, double);
+void sqlite3VdbeMemSetNull(Mem*);
+int sqlite3VdbeMemMakeWriteable(Mem*);
+int sqlite3VdbeMemDynamicify(Mem*);
+int sqlite3VdbeMemStringify(Mem*, int);
+i64 sqlite3VdbeIntValue(Mem*);
+int sqlite3VdbeMemIntegerify(Mem*);
+double sqlite3VdbeRealValue(Mem*);
+void sqlite3VdbeIntegerAffinity(Mem*);
+int sqlite3VdbeMemRealify(Mem*);
+int sqlite3VdbeMemNumerify(Mem*);
+int sqlite3VdbeMemFromBtree(BtCursor*,int,int,int,Mem*);
+void sqlite3VdbeMemRelease(Mem *p);
+int sqlite3VdbeMemFinalize(Mem*, FuncDef*);
+#ifndef NDEBUG
+void sqlite3VdbeMemSanity(Mem*);
+int sqlite3VdbeOpcodeNoPush(u8);
+#endif
+int sqlite3VdbeMemTranslate(Mem*, u8);
+void sqlite3VdbeMemPrettyPrint(Mem *pMem, char *zBuf);
+int sqlite3VdbeMemHandleBom(Mem *pMem);
+void sqlite3VdbeFifoInit(Fifo*);
+int sqlite3VdbeFifoPush(Fifo*, i64);
+int sqlite3VdbeFifoPop(Fifo*, i64*);
+void sqlite3VdbeFifoClear(Fifo*);
Added: freeswitch/trunk/libs/sqlite/src/vdbeapi.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/vdbeapi.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,852 @@
+/*
+** 2004 May 26
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+**
+** This file contains code used to implement APIs that are part of the
+** VDBE.
+*/
+#include "sqliteInt.h"
+#include "vdbeInt.h"
+#include "os.h"
+
+/*
+** Return TRUE (non-zero) if the statement supplied as an argument needs
+** to be recompiled. A statement needs to be recompiled whenever the
+** execution environment changes in a way that would alter the program
+** that sqlite3_prepare() generates. For example, if new functions or
+** collating sequences are registered or if an authorizer function is
+** added or changed.
+*/
+int sqlite3_expired(sqlite3_stmt *pStmt){
+ Vdbe *p = (Vdbe*)pStmt;
+ return p==0 || p->expired;
+}
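+
+/*
+** Usage sketch (hypothetical caller): a long-lived statement can be
+** re-prepared once it has expired.  Assumes "zSql" still holds the
+** original SQL text and "db" is the owning database handle.
+**
+**   if( sqlite3_expired(pStmt) ){
+**     sqlite3_finalize(pStmt);
+**     sqlite3_prepare(db, zSql, -1, &pStmt, 0);   // error handling omitted
+**   }
+*/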
+
+/**************************** sqlite3_value_ *******************************
+** The following routines extract information from a Mem or sqlite3_value
+** structure.
+*/
+const void *sqlite3_value_blob(sqlite3_value *pVal){
+ Mem *p = (Mem*)pVal;
+ if( p->flags & (MEM_Blob|MEM_Str) ){
+ return p->z;
+ }else{
+ return sqlite3_value_text(pVal);
+ }
+}
+int sqlite3_value_bytes(sqlite3_value *pVal){
+ return sqlite3ValueBytes(pVal, SQLITE_UTF8);
+}
+int sqlite3_value_bytes16(sqlite3_value *pVal){
+ return sqlite3ValueBytes(pVal, SQLITE_UTF16NATIVE);
+}
+double sqlite3_value_double(sqlite3_value *pVal){
+ return sqlite3VdbeRealValue((Mem*)pVal);
+}
+int sqlite3_value_int(sqlite3_value *pVal){
+ return sqlite3VdbeIntValue((Mem*)pVal);
+}
+sqlite_int64 sqlite3_value_int64(sqlite3_value *pVal){
+ return sqlite3VdbeIntValue((Mem*)pVal);
+}
+const unsigned char *sqlite3_value_text(sqlite3_value *pVal){
+ return (const unsigned char *)sqlite3ValueText(pVal, SQLITE_UTF8);
+}
+#ifndef SQLITE_OMIT_UTF16
+const void *sqlite3_value_text16(sqlite3_value* pVal){
+ return sqlite3ValueText(pVal, SQLITE_UTF16NATIVE);
+}
+const void *sqlite3_value_text16be(sqlite3_value *pVal){
+ return sqlite3ValueText(pVal, SQLITE_UTF16BE);
+}
+const void *sqlite3_value_text16le(sqlite3_value *pVal){
+ return sqlite3ValueText(pVal, SQLITE_UTF16LE);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+int sqlite3_value_type(sqlite3_value* pVal){
+ return pVal->type;
+}
+/* sqlite3_value_numeric_type() defined in vdbe.c */
+
+/**************************** sqlite3_result_ *******************************
+** The following routines are used by user-defined functions to specify
+** the function result.
+*/
+void sqlite3_result_blob(
+ sqlite3_context *pCtx,
+ const void *z,
+ int n,
+ void (*xDel)(void *)
+){
+ assert( n>=0 );
+ sqlite3VdbeMemSetStr(&pCtx->s, z, n, 0, xDel);
+}
+void sqlite3_result_double(sqlite3_context *pCtx, double rVal){
+ sqlite3VdbeMemSetDouble(&pCtx->s, rVal);
+}
+void sqlite3_result_error(sqlite3_context *pCtx, const char *z, int n){
+ pCtx->isError = 1;
+ sqlite3VdbeMemSetStr(&pCtx->s, z, n, SQLITE_UTF8, SQLITE_TRANSIENT);
+}
+#ifndef SQLITE_OMIT_UTF16
+void sqlite3_result_error16(sqlite3_context *pCtx, const void *z, int n){
+ pCtx->isError = 1;
+ sqlite3VdbeMemSetStr(&pCtx->s, z, n, SQLITE_UTF16NATIVE, SQLITE_TRANSIENT);
+}
+#endif
+void sqlite3_result_int(sqlite3_context *pCtx, int iVal){
+ sqlite3VdbeMemSetInt64(&pCtx->s, (i64)iVal);
+}
+void sqlite3_result_int64(sqlite3_context *pCtx, i64 iVal){
+ sqlite3VdbeMemSetInt64(&pCtx->s, iVal);
+}
+void sqlite3_result_null(sqlite3_context *pCtx){
+ sqlite3VdbeMemSetNull(&pCtx->s);
+}
+void sqlite3_result_text(
+ sqlite3_context *pCtx,
+ const char *z,
+ int n,
+ void (*xDel)(void *)
+){
+ sqlite3VdbeMemSetStr(&pCtx->s, z, n, SQLITE_UTF8, xDel);
+}
+#ifndef SQLITE_OMIT_UTF16
+void sqlite3_result_text16(
+ sqlite3_context *pCtx,
+ const void *z,
+ int n,
+ void (*xDel)(void *)
+){
+ sqlite3VdbeMemSetStr(&pCtx->s, z, n, SQLITE_UTF16NATIVE, xDel);
+}
+void sqlite3_result_text16be(
+ sqlite3_context *pCtx,
+ const void *z,
+ int n,
+ void (*xDel)(void *)
+){
+ sqlite3VdbeMemSetStr(&pCtx->s, z, n, SQLITE_UTF16BE, xDel);
+}
+void sqlite3_result_text16le(
+ sqlite3_context *pCtx,
+ const void *z,
+ int n,
+ void (*xDel)(void *)
+){
+ sqlite3VdbeMemSetStr(&pCtx->s, z, n, SQLITE_UTF16LE, xDel);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+void sqlite3_result_value(sqlite3_context *pCtx, sqlite3_value *pValue){
+ sqlite3VdbeMemCopy(&pCtx->s, pValue);
+}
+
+
+/*
+** Execute the statement pStmt, either until a row of data is ready, the
+** statement is completely executed or an error occurs.
+*/
+int sqlite3_step(sqlite3_stmt *pStmt){
+ Vdbe *p = (Vdbe*)pStmt;
+ sqlite3 *db;
+ int rc;
+
+ /* Assert that malloc() has not failed */
+ assert( !sqlite3MallocFailed() );
+
+ if( p==0 || p->magic!=VDBE_MAGIC_RUN ){
+ return SQLITE_MISUSE;
+ }
+ if( p->aborted ){
+ return SQLITE_ABORT;
+ }
+ if( p->pc<=0 && p->expired ){
+ if( p->rc==SQLITE_OK ){
+ p->rc = SQLITE_SCHEMA;
+ }
+ return SQLITE_ERROR;
+ }
+ db = p->db;
+ if( sqlite3SafetyOn(db) ){
+ p->rc = SQLITE_MISUSE;
+ return SQLITE_MISUSE;
+ }
+ if( p->pc<0 ){
+ /* If there are no other statements currently running, then
+ ** reset the interrupt flag. This prevents a call to sqlite3_interrupt
+ ** from interrupting a statement that has not yet started.
+ */
+ if( db->activeVdbeCnt==0 ){
+ db->u1.isInterrupted = 0;
+ }
+
+#ifndef SQLITE_OMIT_TRACE
+ /* Invoke the trace callback if there is one
+ */
+ if( db->xTrace && !db->init.busy ){
+ assert( p->nOp>0 );
+ assert( p->aOp[p->nOp-1].opcode==OP_Noop );
+ assert( p->aOp[p->nOp-1].p3!=0 );
+ assert( p->aOp[p->nOp-1].p3type==P3_DYNAMIC );
+ sqlite3SafetyOff(db);
+ db->xTrace(db->pTraceArg, p->aOp[p->nOp-1].p3);
+ if( sqlite3SafetyOn(db) ){
+ p->rc = SQLITE_MISUSE;
+ return SQLITE_MISUSE;
+ }
+ }
+ if( db->xProfile && !db->init.busy ){
+ double rNow;
+ sqlite3OsCurrentTime(&rNow);
+ p->startTime = (rNow - (int)rNow)*3600.0*24.0*1000000000.0;
+ }
+#endif
+
+ /* Print a copy of SQL as it is executed if the SQL_TRACE pragma is turned
+ ** on in debugging mode.
+ */
+#ifdef SQLITE_DEBUG
+ if( (db->flags & SQLITE_SqlTrace)!=0 ){
+ sqlite3DebugPrintf("SQL-trace: %s\n", p->aOp[p->nOp-1].p3);
+ }
+#endif /* SQLITE_DEBUG */
+
+ db->activeVdbeCnt++;
+ p->pc = 0;
+ }
+#ifndef SQLITE_OMIT_EXPLAIN
+ if( p->explain ){
+ rc = sqlite3VdbeList(p);
+ }else
+#endif /* SQLITE_OMIT_EXPLAIN */
+ {
+ rc = sqlite3VdbeExec(p);
+ }
+
+ if( sqlite3SafetyOff(db) ){
+ rc = SQLITE_MISUSE;
+ }
+
+#ifndef SQLITE_OMIT_TRACE
+ /* Invoke the profile callback if there is one
+ */
+ if( rc!=SQLITE_ROW && db->xProfile && !db->init.busy ){
+ double rNow;
+ u64 elapseTime;
+
+ sqlite3OsCurrentTime(&rNow);
+ elapseTime = (rNow - (int)rNow)*3600.0*24.0*1000000000.0 - p->startTime;
+ assert( p->nOp>0 );
+ assert( p->aOp[p->nOp-1].opcode==OP_Noop );
+ assert( p->aOp[p->nOp-1].p3!=0 );
+ assert( p->aOp[p->nOp-1].p3type==P3_DYNAMIC );
+ db->xProfile(db->pProfileArg, p->aOp[p->nOp-1].p3, elapseTime);
+ }
+#endif
+
+ sqlite3Error(p->db, rc, 0);
+ p->rc = sqlite3ApiExit(p->db, p->rc);
+ assert( (rc&0xff)==rc );
+ return rc;
+}
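+
+/*
+** Usage sketch (hypothetical caller, assuming an open handle "db" and a
+** table t1(a,b)): the usual prepare/step/column/finalize cycle around
+** sqlite3_step().
+**
+**   sqlite3_stmt *pStmt = 0;
+**   if( sqlite3_prepare(db, "SELECT a, b FROM t1", -1, &pStmt, 0)==SQLITE_OK ){
+**     while( sqlite3_step(pStmt)==SQLITE_ROW ){
+**       printf("%d %s\n", sqlite3_column_int(pStmt, 0),
+**              (const char*)sqlite3_column_text(pStmt, 1));
+**     }
+**   }
+**   sqlite3_finalize(pStmt);
+*/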
+
+/*
+** Extract the user data from a sqlite3_context structure and return a
+** pointer to it.
+*/
+void *sqlite3_user_data(sqlite3_context *p){
+ assert( p && p->pFunc );
+ return p->pFunc->pUserData;
+}
+
+/*
+** The following is the implementation of an SQL function that always
+** fails with an error message stating that the function is used in the
+** wrong context. The sqlite3_overload_function() API might construct
+** SQL functions that use this routine so that the functions will exist
+** for name resolution but are actually overloaded by the xFindFunction
+** method of virtual tables.
+*/
+void sqlite3InvalidFunction(
+ sqlite3_context *context, /* The function calling context */
+ int argc, /* Number of arguments to the function */
+ sqlite3_value **argv /* Value of each argument */
+){
+ const char *zName = context->pFunc->zName;
+ char *zErr;
+ zErr = sqlite3MPrintf(
+ "unable to use function %s in the requested context", zName);
+ sqlite3_result_error(context, zErr, -1);
+ sqliteFree(zErr);
+}
+
+/*
+** Allocate or return the aggregate context for a user function. A new
+** context is allocated on the first call. Subsequent calls return the
+** same context that was returned on prior calls.
+*/
+void *sqlite3_aggregate_context(sqlite3_context *p, int nByte){
+ Mem *pMem = p->pMem;
+ assert( p && p->pFunc && p->pFunc->xStep );
+ if( (pMem->flags & MEM_Agg)==0 ){
+ if( nByte==0 ){
+ assert( pMem->flags==MEM_Null );
+ pMem->z = 0;
+ }else{
+ pMem->flags = MEM_Agg;
+ pMem->xDel = sqlite3FreeX;
+ *(FuncDef**)&pMem->i = p->pFunc;
+ if( nByte<=NBFS ){
+ pMem->z = pMem->zShort;
+ memset(pMem->z, 0, nByte);
+ }else{
+ pMem->z = sqliteMalloc( nByte );
+ }
+ }
+ }
+ return (void*)pMem->z;
+}
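+
+/*
+** Usage sketch (hypothetical aggregate, not part of this file): an
+** average-like aggregate that keeps its running state in the buffer
+** returned by sqlite3_aggregate_context().  The buffer is zeroed on the
+** first call.  Register with
+** sqlite3_create_function(db, "myavg", 1, SQLITE_UTF8, 0, 0, avgStep, avgFinal).
+**
+**   typedef struct AvgCtx { double sum; int n; } AvgCtx;
+**
+**   static void avgStep(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+**     AvgCtx *p = (AvgCtx*)sqlite3_aggregate_context(ctx, sizeof(AvgCtx));
+**     if( p && sqlite3_value_type(argv[0])!=SQLITE_NULL ){
+**       p->sum += sqlite3_value_double(argv[0]);
+**       p->n++;
+**     }
+**   }
+**   static void avgFinal(sqlite3_context *ctx){
+**     AvgCtx *p = (AvgCtx*)sqlite3_aggregate_context(ctx, 0);
+**     if( p && p->n>0 ) sqlite3_result_double(ctx, p->sum/p->n);   // else NULL
+**   }
+*/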
+
+/*
+** Return the auxiliary data pointer, if any, for the iArg'th argument to
+** the user-function defined by pCtx.
+*/
+void *sqlite3_get_auxdata(sqlite3_context *pCtx, int iArg){
+ VdbeFunc *pVdbeFunc = pCtx->pVdbeFunc;
+ if( !pVdbeFunc || iArg>=pVdbeFunc->nAux || iArg<0 ){
+ return 0;
+ }
+ return pVdbeFunc->apAux[iArg].pAux;
+}
+
+/*
+** Set the auxiliary data pointer and delete function for the iArg'th
+** argument to the user-function defined by pCtx. Any previous value is
+** deleted by calling the delete function specified when it was set.
+*/
+void sqlite3_set_auxdata(
+ sqlite3_context *pCtx,
+ int iArg,
+ void *pAux,
+ void (*xDelete)(void*)
+){
+ struct AuxData *pAuxData;
+ VdbeFunc *pVdbeFunc;
+ if( iArg<0 ) return;
+
+ pVdbeFunc = pCtx->pVdbeFunc;
+ if( !pVdbeFunc || pVdbeFunc->nAux<=iArg ){
+ int nMalloc = sizeof(VdbeFunc) + sizeof(struct AuxData)*iArg;
+ pVdbeFunc = sqliteRealloc(pVdbeFunc, nMalloc);
+ if( !pVdbeFunc ) return;
+ pCtx->pVdbeFunc = pVdbeFunc;
+ memset(&pVdbeFunc->apAux[pVdbeFunc->nAux], 0,
+ sizeof(struct AuxData)*(iArg+1-pVdbeFunc->nAux));
+ pVdbeFunc->nAux = iArg+1;
+ pVdbeFunc->pFunc = pCtx->pFunc;
+ }
+
+ pAuxData = &pVdbeFunc->apAux[iArg];
+ if( pAuxData->pAux && pAuxData->xDelete ){
+ pAuxData->xDelete(pAuxData->pAux);
+ }
+ pAuxData->pAux = pAux;
+ pAuxData->xDelete = xDelete;
+}
+
+/*
+** Return the number of times the Step function of an aggregate has been
+** called.
+**
+** This function is deprecated. Do not use it for new code. It is
+** provided only to avoid breaking legacy code. New aggregate function
+** implementations should keep their own counts within their aggregate
+** context.
+*/
+int sqlite3_aggregate_count(sqlite3_context *p){
+ assert( p && p->pFunc && p->pFunc->xStep );
+ return p->pMem->n;
+}
+
+/*
+** Return the number of columns in the result set for the statement pStmt.
+*/
+int sqlite3_column_count(sqlite3_stmt *pStmt){
+ Vdbe *pVm = (Vdbe *)pStmt;
+ return pVm ? pVm->nResColumn : 0;
+}
+
+/*
+** Return the number of values available from the current row of the
+** currently executing statement pStmt.
+*/
+int sqlite3_data_count(sqlite3_stmt *pStmt){
+ Vdbe *pVm = (Vdbe *)pStmt;
+ if( pVm==0 || !pVm->resOnStack ) return 0;
+ return pVm->nResColumn;
+}
+
+
+/*
+** Check to see if column iCol of the given statement is valid. If
+** it is, return a pointer to the Mem for the value of that column.
+** If iCol is not valid, return a pointer to a Mem which has a value
+** of NULL.
+*/
+static Mem *columnMem(sqlite3_stmt *pStmt, int i){
+ Vdbe *pVm = (Vdbe *)pStmt;
+ int vals = sqlite3_data_count(pStmt);
+ if( i>=vals || i<0 ){
+ static const Mem nullMem = {0, 0.0, "", 0, MEM_Null, MEM_Null };
+ sqlite3Error(pVm->db, SQLITE_RANGE, 0);
+ return (Mem*)&nullMem;
+ }
+ return &pVm->pTos[(1-vals)+i];
+}
+
+/*
+** This function is called after invoking an sqlite3_value_XXX function on a
+** column value (i.e. a value returned by evaluating an SQL expression in the
+** select list of a SELECT statement) that may cause a malloc() failure. If
+** malloc() has failed, the thread's mallocFailed flag is cleared and the result
+** code of statement pStmt set to SQLITE_NOMEM.
+**
+** Specifically, this is called from within:
+**
+** sqlite3_column_int()
+** sqlite3_column_int64()
+** sqlite3_column_text()
+** sqlite3_column_text16()
+** sqlite3_column_real()
+** sqlite3_column_bytes()
+** sqlite3_column_bytes16()
+**
+** But not for sqlite3_column_blob(), which never calls malloc().
+*/
+static void columnMallocFailure(sqlite3_stmt *pStmt)
+{
+ /* If malloc() failed during an encoding conversion within an
+ ** sqlite3_column_XXX API, then set the return code of the statement to
+ ** SQLITE_NOMEM. The next call to _step() (if any) will return SQLITE_ERROR
+ ** and _finalize() will return NOMEM.
+ */
+ Vdbe *p = (Vdbe *)pStmt;
+ p->rc = sqlite3ApiExit(0, p->rc);
+}
+
+/**************************** sqlite3_column_ *******************************
+** The following routines are used to access elements of the current row
+** in the result set.
+*/
+const void *sqlite3_column_blob(sqlite3_stmt *pStmt, int i){
+ const void *val;
+ sqlite3MallocDisallow();
+ val = sqlite3_value_blob( columnMem(pStmt,i) );
+ sqlite3MallocAllow();
+ return val;
+}
+int sqlite3_column_bytes(sqlite3_stmt *pStmt, int i){
+ int val = sqlite3_value_bytes( columnMem(pStmt,i) );
+ columnMallocFailure(pStmt);
+ return val;
+}
+int sqlite3_column_bytes16(sqlite3_stmt *pStmt, int i){
+ int val = sqlite3_value_bytes16( columnMem(pStmt,i) );
+ columnMallocFailure(pStmt);
+ return val;
+}
+double sqlite3_column_double(sqlite3_stmt *pStmt, int i){
+ double val = sqlite3_value_double( columnMem(pStmt,i) );
+ columnMallocFailure(pStmt);
+ return val;
+}
+int sqlite3_column_int(sqlite3_stmt *pStmt, int i){
+ int val = sqlite3_value_int( columnMem(pStmt,i) );
+ columnMallocFailure(pStmt);
+ return val;
+}
+sqlite_int64 sqlite3_column_int64(sqlite3_stmt *pStmt, int i){
+ sqlite_int64 val = sqlite3_value_int64( columnMem(pStmt,i) );
+ columnMallocFailure(pStmt);
+ return val;
+}
+const unsigned char *sqlite3_column_text(sqlite3_stmt *pStmt, int i){
+ const unsigned char *val = sqlite3_value_text( columnMem(pStmt,i) );
+ columnMallocFailure(pStmt);
+ return val;
+}
+sqlite3_value *sqlite3_column_value(sqlite3_stmt *pStmt, int i){
+ return columnMem(pStmt, i);
+}
+#ifndef SQLITE_OMIT_UTF16
+const void *sqlite3_column_text16(sqlite3_stmt *pStmt, int i){
+ const void *val = sqlite3_value_text16( columnMem(pStmt,i) );
+ columnMallocFailure(pStmt);
+ return val;
+}
+#endif /* SQLITE_OMIT_UTF16 */
+int sqlite3_column_type(sqlite3_stmt *pStmt, int i){
+ return sqlite3_value_type( columnMem(pStmt,i) );
+}
+
+/* The following function is experimental and subject to change or
+** removal */
+/*int sqlite3_column_numeric_type(sqlite3_stmt *pStmt, int i){
+** return sqlite3_value_numeric_type( columnMem(pStmt,i) );
+**}
+*/
+
+/*
+** Convert the N-th element of pStmt->aColName[] into a string using
+** xFunc() then return that string. If N is out of range, return 0.
+**
+** There are up to 5 names for each column. useType determines which
+** name is returned. Here are the names:
+**
+** 0 The column name as it should be displayed for output
+** 1 The datatype name for the column
+** 2 The name of the database that the column derives from
+** 3 The name of the table that the column derives from
+** 4 The name of the table column that the result column derives from
+**
+** If the result is not a simple column reference (if it is an expression
+** or a constant) then useTypes 2, 3, and 4 return NULL.
+*/
+static const void *columnName(
+ sqlite3_stmt *pStmt,
+ int N,
+ const void *(*xFunc)(Mem*),
+ int useType
+){
+ const void *ret;
+ Vdbe *p = (Vdbe *)pStmt;
+ int n = sqlite3_column_count(pStmt);
+
+ if( p==0 || N>=n || N<0 ){
+ return 0;
+ }
+ N += useType*n;
+ ret = xFunc(&p->aColName[N]);
+
+ /* A malloc may have failed inside of the xFunc() call. If this is the case,
+ ** clear the mallocFailed flag and return NULL.
+ */
+ sqlite3ApiExit(0, 0);
+ return ret;
+}
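+
+/*
+** Usage sketch (hypothetical caller, assuming a table t1(id INTEGER, name TEXT)):
+**
+**   sqlite3_stmt *pStmt = 0;
+**   sqlite3_prepare(db, "SELECT id, name FROM t1", -1, &pStmt, 0);
+**   printf("%s : %s\n",
+**          sqlite3_column_name(pStmt, 1),        // "name"
+**          sqlite3_column_decltype(pStmt, 1));   // "TEXT"
+**   sqlite3_finalize(pStmt);
+*/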
+
+/*
+** Return the name of the Nth column of the result set returned by SQL
+** statement pStmt.
+*/
+const char *sqlite3_column_name(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_NAME);
+}
+#ifndef SQLITE_OMIT_UTF16
+const void *sqlite3_column_name16(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_NAME);
+}
+#endif
+
+/*
+** Return the column declaration type (if applicable) of the 'i'th column
+** of the result set of SQL statement pStmt.
+*/
+const char *sqlite3_column_decltype(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_DECLTYPE);
+}
+#ifndef SQLITE_OMIT_UTF16
+const void *sqlite3_column_decltype16(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_DECLTYPE);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+
+#ifdef SQLITE_ENABLE_COLUMN_METADATA
+/*
+** Return the name of the database from which a result column derives.
+** NULL is returned if the result column is an expression or constant or
+** anything else which is not an unambiguous reference to a database column.
+*/
+const char *sqlite3_column_database_name(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_DATABASE);
+}
+#ifndef SQLITE_OMIT_UTF16
+const void *sqlite3_column_database_name16(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_DATABASE);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+
+/*
+** Return the name of the table from which a result column derives.
+** NULL is returned if the result column is an expression or constant or
+** anything else which is not an unambiguous reference to a database column.
+*/
+const char *sqlite3_column_table_name(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_TABLE);
+}
+#ifndef SQLITE_OMIT_UTF16
+const void *sqlite3_column_table_name16(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_TABLE);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+
+/*
+** Return the name of the table column from which a result column derives.
+** NULL is returned if the result column is an expression or constant or
+** anything else which is not an unambiguous reference to a database column.
+*/
+const char *sqlite3_column_origin_name(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_COLUMN);
+}
+#ifndef SQLITE_OMIT_UTF16
+const void *sqlite3_column_origin_name16(sqlite3_stmt *pStmt, int N){
+ return columnName(
+ pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_COLUMN);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+#endif /* SQLITE_ENABLE_COLUMN_METADATA */
+
+
+/******************************* sqlite3_bind_ ***************************
+**
+** Routines used to attach values to wildcards in a compiled SQL statement.
+*/
+/*
+** Unbind the value bound to variable i in virtual machine p. This is
+** the same as binding a NULL value to the parameter. If the "i" parameter
+** is out of range, then SQLITE_RANGE is returned. Otherwise SQLITE_OK.
+**
+** The error code stored in database p->db is overwritten with the return
+** value in any case.
+*/
+static int vdbeUnbind(Vdbe *p, int i){
+ Mem *pVar;
+ if( p==0 || p->magic!=VDBE_MAGIC_RUN || p->pc>=0 ){
+ if( p ) sqlite3Error(p->db, SQLITE_MISUSE, 0);
+ return SQLITE_MISUSE;
+ }
+ if( i<1 || i>p->nVar ){
+ sqlite3Error(p->db, SQLITE_RANGE, 0);
+ return SQLITE_RANGE;
+ }
+ i--;
+ pVar = &p->aVar[i];
+ sqlite3VdbeMemRelease(pVar);
+ pVar->flags = MEM_Null;
+ sqlite3Error(p->db, SQLITE_OK, 0);
+ return SQLITE_OK;
+}
+
+/*
+** Bind a text or BLOB value.
+*/
+static int bindText(
+ sqlite3_stmt *pStmt,
+ int i,
+ const void *zData,
+ int nData,
+ void (*xDel)(void*),
+ int encoding
+){
+ Vdbe *p = (Vdbe *)pStmt;
+ Mem *pVar;
+ int rc;
+
+ rc = vdbeUnbind(p, i);
+ if( rc || zData==0 ){
+ return rc;
+ }
+ pVar = &p->aVar[i-1];
+ rc = sqlite3VdbeMemSetStr(pVar, zData, nData, encoding, xDel);
+ if( rc==SQLITE_OK && encoding!=0 ){
+ rc = sqlite3VdbeChangeEncoding(pVar, ENC(p->db));
+ }
+
+ sqlite3Error(((Vdbe *)pStmt)->db, rc, 0);
+ return sqlite3ApiExit(((Vdbe *)pStmt)->db, rc);
+}
+
+
+/*
+** Bind a blob value to an SQL statement variable.
+*/
+int sqlite3_bind_blob(
+ sqlite3_stmt *pStmt,
+ int i,
+ const void *zData,
+ int nData,
+ void (*xDel)(void*)
+){
+ return bindText(pStmt, i, zData, nData, xDel, 0);
+}
+int sqlite3_bind_double(sqlite3_stmt *pStmt, int i, double rValue){
+ int rc;
+ Vdbe *p = (Vdbe *)pStmt;
+ rc = vdbeUnbind(p, i);
+ if( rc==SQLITE_OK ){
+ sqlite3VdbeMemSetDouble(&p->aVar[i-1], rValue);
+ }
+ return rc;
+}
+int sqlite3_bind_int(sqlite3_stmt *p, int i, int iValue){
+ return sqlite3_bind_int64(p, i, (i64)iValue);
+}
+int sqlite3_bind_int64(sqlite3_stmt *pStmt, int i, sqlite_int64 iValue){
+ int rc;
+ Vdbe *p = (Vdbe *)pStmt;
+ rc = vdbeUnbind(p, i);
+ if( rc==SQLITE_OK ){
+ sqlite3VdbeMemSetInt64(&p->aVar[i-1], iValue);
+ }
+ return rc;
+}
+int sqlite3_bind_null(sqlite3_stmt* p, int i){
+ return vdbeUnbind((Vdbe *)p, i);
+}
+int sqlite3_bind_text(
+ sqlite3_stmt *pStmt,
+ int i,
+ const char *zData,
+ int nData,
+ void (*xDel)(void*)
+){
+ return bindText(pStmt, i, zData, nData, xDel, SQLITE_UTF8);
+}
+#ifndef SQLITE_OMIT_UTF16
+int sqlite3_bind_text16(
+ sqlite3_stmt *pStmt,
+ int i,
+ const void *zData,
+ int nData,
+ void (*xDel)(void*)
+){
+ return bindText(pStmt, i, zData, nData, xDel, SQLITE_UTF16NATIVE);
+}
+#endif /* SQLITE_OMIT_UTF16 */
+int sqlite3_bind_value(sqlite3_stmt *pStmt, int i, const sqlite3_value *pValue){
+ int rc;
+ Vdbe *p = (Vdbe *)pStmt;
+ rc = vdbeUnbind(p, i);
+ if( rc==SQLITE_OK ){
+ sqlite3VdbeMemCopy(&p->aVar[i-1], pValue);
+ }
+ return rc;
+}
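+
+/*
+** Usage sketch (hypothetical caller, assuming a table t1(a,b)): "?" wildcards
+** are numbered starting from 1, and a statement can be reused by resetting
+** and re-binding it.
+**
+**   sqlite3_stmt *pStmt = 0;
+**   sqlite3_prepare(db, "INSERT INTO t1 VALUES(?, ?)", -1, &pStmt, 0);
+**   sqlite3_bind_int(pStmt, 1, 42);
+**   sqlite3_bind_text(pStmt, 2, "hello", -1, SQLITE_TRANSIENT);
+**   sqlite3_step(pStmt);
+**   sqlite3_reset(pStmt);        // bindings survive the reset
+**   sqlite3_finalize(pStmt);
+*/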
+
+/*
+** Return the number of wildcards that can be potentially bound to.
+** This routine is added to support DBD::SQLite.
+*/
+int sqlite3_bind_parameter_count(sqlite3_stmt *pStmt){
+ Vdbe *p = (Vdbe*)pStmt;
+ return p ? p->nVar : 0;
+}
+
+/*
+** Create a mapping from variable numbers to variable names
+** in the Vdbe.azVar[] array, if such a mapping does not already
+** exist.
+*/
+static void createVarMap(Vdbe *p){
+ if( !p->okVar ){
+ int j;
+ Op *pOp;
+ for(j=0, pOp=p->aOp; j<p->nOp; j++, pOp++){
+ if( pOp->opcode==OP_Variable ){
+ assert( pOp->p1>0 && pOp->p1<=p->nVar );
+ p->azVar[pOp->p1-1] = pOp->p3;
+ }
+ }
+ p->okVar = 1;
+ }
+}
+
+/*
+** Return the name of a wildcard parameter. Return NULL if the index
+** is out of range or if the wildcard is unnamed.
+**
+** The result is always UTF-8.
+*/
+const char *sqlite3_bind_parameter_name(sqlite3_stmt *pStmt, int i){
+ Vdbe *p = (Vdbe*)pStmt;
+ if( p==0 || i<1 || i>p->nVar ){
+ return 0;
+ }
+ createVarMap(p);
+ return p->azVar[i-1];
+}
+
+/*
+** Given a wildcard parameter name, return the index of the variable
+** with that name. If there is no variable with the given name,
+** return 0.
+*/
+int sqlite3_bind_parameter_index(sqlite3_stmt *pStmt, const char *zName){
+ Vdbe *p = (Vdbe*)pStmt;
+ int i;
+ if( p==0 ){
+ return 0;
+ }
+ createVarMap(p);
+ if( zName ){
+ for(i=0; i<p->nVar; i++){
+ const char *z = p->azVar[i];
+ if( z && strcmp(z,zName)==0 ){
+ return i+1;
+ }
+ }
+ }
+ return 0;
+}
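+
+/*
+** Usage sketch (hypothetical caller): named wildcards are looked up by the
+** exact text used in the SQL, including the leading ":".
+**
+**   sqlite3_stmt *pStmt = 0;
+**   sqlite3_prepare(db, "SELECT * FROM t1 WHERE a=:val", -1, &pStmt, 0);
+**   int i = sqlite3_bind_parameter_index(pStmt, ":val");   // i==1
+**   sqlite3_bind_int(pStmt, i, 7);
+*/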
+
+/*
+** Transfer all bindings from the first statement over to the second.
+** If the two statements contain a different number of bindings, then
+** an SQLITE_ERROR is returned.
+*/
+int sqlite3_transfer_bindings(sqlite3_stmt *pFromStmt, sqlite3_stmt *pToStmt){
+ Vdbe *pFrom = (Vdbe*)pFromStmt;
+ Vdbe *pTo = (Vdbe*)pToStmt;
+ int i, rc = SQLITE_OK;
+ if( (pFrom->magic!=VDBE_MAGIC_RUN && pFrom->magic!=VDBE_MAGIC_HALT)
+ || (pTo->magic!=VDBE_MAGIC_RUN && pTo->magic!=VDBE_MAGIC_HALT) ){
+ return SQLITE_MISUSE;
+ }
+ if( pFrom->nVar!=pTo->nVar ){
+ return SQLITE_ERROR;
+ }
+ for(i=0; rc==SQLITE_OK && i<pFrom->nVar; i++){
+ sqlite3MallocDisallow();
+ rc = sqlite3VdbeMemMove(&pTo->aVar[i], &pFrom->aVar[i]);
+ sqlite3MallocAllow();
+ }
+ assert( rc==SQLITE_OK || rc==SQLITE_NOMEM );
+ return rc;
+}
+
+/*
+** Return the sqlite3* database handle to which the prepared statement given
+** in the argument belongs. This is the same database handle that was
+** the first argument to the sqlite3_prepare() that was used to create
+** the statement in the first place.
+*/
+sqlite3 *sqlite3_db_handle(sqlite3_stmt *pStmt){
+ return pStmt ? ((Vdbe*)pStmt)->db : 0;
+}
Added: freeswitch/trunk/libs/sqlite/src/vdbeaux.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/vdbeaux.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,2053 @@
+/*
+** 2003 September 6
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code used for creating, destroying, and populating
+** a VDBE (or an "sqlite3_stmt" as it is known to the outside world.) Prior
+** to version 2.8.7, all this code was combined into the vdbe.c source file.
+** But that file was getting too big so these subroutines were split out.
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#include <ctype.h>
+#include "vdbeInt.h"
+
+
+/*
+** When debugging the code generator in a symbolic debugger, one can
+** set the sqlite3_vdbe_addop_trace to 1 and all opcodes will be printed
+** as they are added to the instruction stream.
+*/
+#ifdef SQLITE_DEBUG
+int sqlite3_vdbe_addop_trace = 0;
+#endif
+
+
+/*
+** Create a new virtual database engine.
+*/
+Vdbe *sqlite3VdbeCreate(sqlite3 *db){
+ Vdbe *p;
+ p = sqliteMalloc( sizeof(Vdbe) );
+ if( p==0 ) return 0;
+ p->db = db;
+ if( db->pVdbe ){
+ db->pVdbe->pPrev = p;
+ }
+ p->pNext = db->pVdbe;
+ p->pPrev = 0;
+ db->pVdbe = p;
+ p->magic = VDBE_MAGIC_INIT;
+ return p;
+}
+
+/*
+** Turn tracing on or off
+*/
+void sqlite3VdbeTrace(Vdbe *p, FILE *trace){
+ p->trace = trace;
+}
+
+/*
+** Resize the Vdbe.aOp array so that it contains at least N
+** elements. If the Vdbe is in VDBE_MAGIC_RUN state, then
+** the Vdbe.aOp array will be sized to contain exactly N
+** elements. Vdbe.nOpAlloc is set to reflect the new size of
+** the array.
+**
+** If an out-of-memory error occurs while resizing the array,
+** Vdbe.aOp and Vdbe.nOpAlloc remain unchanged (this is so that
+** any opcodes already allocated can be correctly deallocated
+** along with the rest of the Vdbe).
+*/
+static void resizeOpArray(Vdbe *p, int N){
+ int runMode = p->magic==VDBE_MAGIC_RUN;
+ if( runMode || p->nOpAlloc<N ){
+ VdbeOp *pNew;
+ int nNew = N + 100*(!runMode);
+ int oldSize = p->nOpAlloc;
+ pNew = sqliteRealloc(p->aOp, nNew*sizeof(Op));
+ if( pNew ){
+ p->nOpAlloc = nNew;
+ p->aOp = pNew;
+ if( nNew>oldSize ){
+ memset(&p->aOp[oldSize], 0, (nNew-oldSize)*sizeof(Op));
+ }
+ }
+ }
+}
+
+/*
+** Add a new instruction to the list of instructions current in the
+** VDBE. Return the address of the new instruction.
+**
+** Parameters:
+**
+** p Pointer to the VDBE
+**
+** op The opcode for this instruction
+**
+** p1, p2 First two of the three possible operands.
+**
+** Use the sqlite3VdbeResolveLabel() function to fix an address and
+** the sqlite3VdbeChangeP3() function to change the value of the P3
+** operand.
+*/
+int sqlite3VdbeAddOp(Vdbe *p, int op, int p1, int p2){
+ int i;
+ VdbeOp *pOp;
+
+ i = p->nOp;
+ p->nOp++;
+ assert( p->magic==VDBE_MAGIC_INIT );
+ if( p->nOpAlloc<=i ){
+ resizeOpArray(p, i+1);
+ if( sqlite3MallocFailed() ){
+ return 0;
+ }
+ }
+ pOp = &p->aOp[i];
+ pOp->opcode = op;
+ pOp->p1 = p1;
+ pOp->p2 = p2;
+ pOp->p3 = 0;
+ pOp->p3type = P3_NOTUSED;
+ p->expired = 0;
+#ifdef SQLITE_DEBUG
+ if( sqlite3_vdbe_addop_trace ) sqlite3VdbePrintOp(0, i, &p->aOp[i]);
+#endif
+ return i;
+}
+
+/*
+** Add an opcode that includes the p3 value.
+*/
+int sqlite3VdbeOp3(Vdbe *p, int op, int p1, int p2, const char *zP3,int p3type){
+ int addr = sqlite3VdbeAddOp(p, op, p1, p2);
+ sqlite3VdbeChangeP3(p, addr, zP3, p3type);
+ return addr;
+}
+
+/*
+** Create a new symbolic label for an instruction that has yet to be
+** coded. The symbolic label is really just a negative number. The
+** label can be used as the P2 value of an operation. Later, when
+** the label is resolved to a specific address, the VDBE will scan
+** through its operation list and change all values of P2 which match
+** the label into the resolved address.
+**
+** The VDBE knows that a P2 value is a label because labels are
+** always negative and P2 values are supposed to be non-negative.
+** Hence, a negative P2 value is a label that has yet to be resolved.
+**
+** Zero is returned if a malloc() fails.
+*/
+int sqlite3VdbeMakeLabel(Vdbe *p){
+ int i;
+ i = p->nLabel++;
+ assert( p->magic==VDBE_MAGIC_INIT );
+ if( i>=p->nLabelAlloc ){
+ p->nLabelAlloc = p->nLabelAlloc*2 + 10;
+ sqliteReallocOrFree((void**)&p->aLabel,
+ p->nLabelAlloc*sizeof(p->aLabel[0]));
+ }
+ if( p->aLabel ){
+ p->aLabel[i] = -1;
+ }
+ return -1-i;
+}
+
+/*
+** Resolve label "x" to be the address of the next instruction to
+** be inserted. The parameter "x" must have been obtained from
+** a prior call to sqlite3VdbeMakeLabel().
+*/
+void sqlite3VdbeResolveLabel(Vdbe *p, int x){
+ int j = -1-x;
+ assert( p->magic==VDBE_MAGIC_INIT );
+ assert( j>=0 && j<p->nLabel );
+ if( p->aLabel ){
+ p->aLabel[j] = p->nOp;
+ }
+}
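+
+/*
+** Code-generator sketch (hypothetical fragment): a label is handed out
+** before its target exists, used as a P2 value, and resolved once the
+** target address is known.  The actual patching of P2 values happens
+** later, in resolveP2Values(), when the program is made ready to run.
+**
+**   int lbl = sqlite3VdbeMakeLabel(v);        // lbl is negative
+**   sqlite3VdbeAddOp(v, OP_Goto, 0, lbl);     // target not yet known
+**   ...                                       // more opcodes
+**   sqlite3VdbeResolveLabel(v, lbl);          // lbl now means "this address"
+*/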
+
+/*
+** Return non-zero if opcode 'op' is guaranteed not to push more values
+** onto the VDBE stack than it pops off.
+*/
+static int opcodeNoPush(u8 op){
+ /* The 10 NOPUSH_MASK_n constants are defined in the automatically
+ ** generated header file opcodes.h. Each is a 16-bit bitmask, one
+ ** bit corresponding to each opcode implemented by the virtual
+ ** machine in vdbe.c. The bit is true if the word "no-push" appears
+ ** in a comment on the same line as the "case OP_XXX:" in
+ ** sqlite3VdbeExec() in vdbe.c.
+ **
+ ** If the bit is true, then the corresponding opcode is guaranteed not
+ ** to grow the stack when it is executed. Otherwise, it may grow the
+ ** stack by at most one entry.
+ **
+ ** NOPUSH_MASK_0 corresponds to opcodes 0 to 15. NOPUSH_MASK_1 contains
+ ** one bit for opcodes 16 to 31, and so on.
+ **
+ ** 16-bit bitmasks (rather than 32-bit) are specified in opcodes.h
+ ** because the file is generated by an awk program. Awk manipulates
+ ** all numbers as floating-point and we don't want to risk a rounding
+ ** error if someone builds with an awk that uses (for example) 32-bit
+ ** IEEE floats.
+ */
+ static const u32 masks[5] = {
+ NOPUSH_MASK_0 + (((unsigned)NOPUSH_MASK_1)<<16),
+ NOPUSH_MASK_2 + (((unsigned)NOPUSH_MASK_3)<<16),
+ NOPUSH_MASK_4 + (((unsigned)NOPUSH_MASK_5)<<16),
+ NOPUSH_MASK_6 + (((unsigned)NOPUSH_MASK_7)<<16),
+ NOPUSH_MASK_8 + (((unsigned)NOPUSH_MASK_9)<<16)
+ };
+ assert( op<32*5 );
+ return (masks[op>>5] & (1<<(op&0x1F)));
+}
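+
+/*
+** Worked example: for op==37, masks[37>>5] is masks[1], whose low 16 bits
+** hold NOPUSH_MASK_2 (opcodes 32..47), and the bit tested is
+** 1<<(37&0x1F) == 1<<5, i.e. bit 5 of NOPUSH_MASK_2, which is the bit the
+** awk script sets when the "case OP_xxx:" line for opcode 37 carries
+** "no-push".
+*/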
+
+#ifndef NDEBUG
+int sqlite3VdbeOpcodeNoPush(u8 op){
+ return opcodeNoPush(op);
+}
+#endif
+
+/*
+** Loop through the program looking for P2 values that are negative.
+** Each such value is a label. Resolve the label by setting the P2
+** value to its correct non-zero value.
+**
+** This routine is called once after all opcodes have been inserted.
+**
+** Variable *pMaxFuncArgs is set to the maximum value of any P2 argument
+** to an OP_Function, OP_AggStep or OP_VFilter opcode. This is used by
+** sqlite3VdbeMakeReady() to size the Vdbe.apArg[] array.
+**
+** The integer *pMaxStack is set to the maximum number of vdbe stack
+** entries that static analysis reveals this program might need.
+**
+** This routine also does the following optimization: It scans for
+** Halt instructions where P1==SQLITE_CONSTRAINT or P2==OE_Abort or for
+** IdxInsert instructions where P2!=0. If no such instruction is
+** found, then every Statement instruction is changed to a Noop. In
+** this way, we avoid creating the statement journal file unnecessarily.
+*/
+static void resolveP2Values(Vdbe *p, int *pMaxFuncArgs, int *pMaxStack){
+ int i;
+ int nMaxArgs = 0;
+ int nMaxStack = p->nOp;
+ Op *pOp;
+ int *aLabel = p->aLabel;
+ int doesStatementRollback = 0;
+ int hasStatementBegin = 0;
+ for(pOp=p->aOp, i=p->nOp-1; i>=0; i--, pOp++){
+ u8 opcode = pOp->opcode;
+
+ if( opcode==OP_Function || opcode==OP_AggStep
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ || opcode==OP_VUpdate
+#endif
+ ){
+ if( pOp->p2>nMaxArgs ) nMaxArgs = pOp->p2;
+ }else if( opcode==OP_Halt ){
+ if( pOp->p1==SQLITE_CONSTRAINT && pOp->p2==OE_Abort ){
+ doesStatementRollback = 1;
+ }
+ }else if( opcode==OP_Statement ){
+ hasStatementBegin = 1;
+ }else if( opcode==OP_VFilter ){
+ int n;
+ assert( p->nOp - i >= 3 );
+ assert( pOp[-2].opcode==OP_Integer );
+ n = pOp[-2].p1;
+ if( n>nMaxArgs ) nMaxArgs = n;
+ }
+ if( opcodeNoPush(opcode) ){
+ nMaxStack--;
+ }
+
+ if( pOp->p2>=0 ) continue;
+ assert( -1-pOp->p2<p->nLabel );
+ pOp->p2 = aLabel[-1-pOp->p2];
+ }
+ sqliteFree(p->aLabel);
+ p->aLabel = 0;
+
+ *pMaxFuncArgs = nMaxArgs;
+ *pMaxStack = nMaxStack;
+
+ /* If we never rollback a statement transaction, then statement
+ ** transactions are not needed. So change every OP_Statement
+** opcode into an OP_Noop. This avoids a call to sqlite3OsOpenExclusive()
+ ** which can be expensive on some platforms.
+ */
+ if( hasStatementBegin && !doesStatementRollback ){
+ for(pOp=p->aOp, i=p->nOp-1; i>=0; i--, pOp++){
+ if( pOp->opcode==OP_Statement ){
+ pOp->opcode = OP_Noop;
+ }
+ }
+ }
+}
+
+/*
+** Return the address of the next instruction to be inserted.
+*/
+int sqlite3VdbeCurrentAddr(Vdbe *p){
+ assert( p->magic==VDBE_MAGIC_INIT );
+ return p->nOp;
+}
+
+/*
+** Add a whole list of operations to the operation stack. Return the
+** address of the first operation added.
+*/
+int sqlite3VdbeAddOpList(Vdbe *p, int nOp, VdbeOpList const *aOp){
+ int addr;
+ assert( p->magic==VDBE_MAGIC_INIT );
+ resizeOpArray(p, p->nOp + nOp);
+ if( sqlite3MallocFailed() ){
+ return 0;
+ }
+ addr = p->nOp;
+ if( nOp>0 ){
+ int i;
+ VdbeOpList const *pIn = aOp;
+ for(i=0; i<nOp; i++, pIn++){
+ int p2 = pIn->p2;
+ VdbeOp *pOut = &p->aOp[i+addr];
+ pOut->opcode = pIn->opcode;
+ pOut->p1 = pIn->p1;
+ pOut->p2 = p2<0 ? addr + ADDR(p2) : p2;
+ pOut->p3 = pIn->p3;
+ pOut->p3type = pIn->p3 ? P3_STATIC : P3_NOTUSED;
+#ifdef SQLITE_DEBUG
+ if( sqlite3_vdbe_addop_trace ){
+ sqlite3VdbePrintOp(0, i+addr, &p->aOp[i+addr]);
+ }
+#endif
+ }
+ p->nOp += nOp;
+ }
+ return addr;
+}
+
+/*
+** Change the value of the P1 operand for a specific instruction.
+** This routine is useful when a large program is loaded from a
+** static array using sqlite3VdbeAddOpList but we want to make a
+** few minor changes to the program.
+*/
+void sqlite3VdbeChangeP1(Vdbe *p, int addr, int val){
+ assert( p==0 || p->magic==VDBE_MAGIC_INIT );
+ if( p && addr>=0 && p->nOp>addr && p->aOp ){
+ p->aOp[addr].p1 = val;
+ }
+}
+
+/*
+** Change the value of the P2 operand for a specific instruction.
+** This routine is useful for setting a jump destination.
+*/
+void sqlite3VdbeChangeP2(Vdbe *p, int addr, int val){
+ assert( val>=0 );
+ assert( p==0 || p->magic==VDBE_MAGIC_INIT );
+ if( p && addr>=0 && p->nOp>addr && p->aOp ){
+ p->aOp[addr].p2 = val;
+ }
+}
+
+/*
+** Change the P2 operand of instruction addr so that it points to
+** the address of the next instruction to be coded.
+*/
+void sqlite3VdbeJumpHere(Vdbe *p, int addr){
+ sqlite3VdbeChangeP2(p, addr, p->nOp);
+}
+
+
+/*
+** If the input FuncDef structure is ephemeral, then free it. If
+** the FuncDef is not ephemeral, then do nothing.
+*/
+static void freeEphemeralFunction(FuncDef *pDef){
+ if( pDef && (pDef->flags & SQLITE_FUNC_EPHEM)!=0 ){
+ sqliteFree(pDef);
+ }
+}
+
+/*
+** Delete a P3 value if necessary.
+*/
+static void freeP3(int p3type, void *p3){
+ if( p3 ){
+ switch( p3type ){
+ case P3_DYNAMIC:
+ case P3_KEYINFO:
+ case P3_KEYINFO_HANDOFF: {
+ sqliteFree(p3);
+ break;
+ }
+ case P3_MPRINTF: {
+ sqlite3_free(p3);
+ break;
+ }
+ case P3_VDBEFUNC: {
+ VdbeFunc *pVdbeFunc = (VdbeFunc *)p3;
+ freeEphemeralFunction(pVdbeFunc->pFunc);
+ sqlite3VdbeDeleteAuxData(pVdbeFunc, 0);
+ sqliteFree(pVdbeFunc);
+ break;
+ }
+ case P3_FUNCDEF: {
+ freeEphemeralFunction((FuncDef*)p3);
+ break;
+ }
+ case P3_MEM: {
+ sqlite3ValueFree((sqlite3_value*)p3);
+ break;
+ }
+ }
+ }
+}
+
+
+/*
+** Change N opcodes starting at addr to No-ops.
+*/
+void sqlite3VdbeChangeToNoop(Vdbe *p, int addr, int N){
+ VdbeOp *pOp = &p->aOp[addr];
+ while( N-- ){
+ freeP3(pOp->p3type, pOp->p3);
+ memset(pOp, 0, sizeof(pOp[0]));
+ pOp->opcode = OP_Noop;
+ pOp++;
+ }
+}
+
+/*
+** Change the value of the P3 operand for a specific instruction.
+** This routine is useful when a large program is loaded from a
+** static array using sqlite3VdbeAddOpList but we want to make a
+** few minor changes to the program.
+**
+** If n>=0 then the P3 operand is dynamic, meaning that a copy of
+** the string is made into memory obtained from sqliteMalloc().
+** A value of n==0 means copy bytes of zP3 up to and including the
+** first null byte. If n>0 then copy n+1 bytes of zP3.
+**
+** If n==P3_KEYINFO it means that zP3 is a pointer to a KeyInfo structure.
+** A copy is made of the KeyInfo structure into memory obtained from
+** sqliteMalloc, to be freed when the Vdbe is finalized.
+** n==P3_KEYINFO_HANDOFF indicates that zP3 points to a KeyInfo structure
+** stored in memory that the caller has obtained from sqliteMalloc. The
+** caller should not free the allocation, it will be freed when the Vdbe is
+** finalized.
+**
+** Other values of n (P3_STATIC, P3_COLLSEQ etc.) indicate that zP3 points
+** to a string or structure that is guaranteed to exist for the lifetime of
+** the Vdbe. In these cases we can just copy the pointer.
+**
+** If addr<0 then change P3 on the most recently inserted instruction.
+*/
+void sqlite3VdbeChangeP3(Vdbe *p, int addr, const char *zP3, int n){
+ Op *pOp;
+ assert( p==0 || p->magic==VDBE_MAGIC_INIT );
+ if( p==0 || p->aOp==0 || sqlite3MallocFailed() ){
+ if (n != P3_KEYINFO) {
+ freeP3(n, (void*)*(char**)&zP3);
+ }
+ return;
+ }
+ if( addr<0 || addr>=p->nOp ){
+ addr = p->nOp - 1;
+ if( addr<0 ) return;
+ }
+ pOp = &p->aOp[addr];
+ freeP3(pOp->p3type, pOp->p3);
+ pOp->p3 = 0;
+ if( zP3==0 ){
+ pOp->p3 = 0;
+ pOp->p3type = P3_NOTUSED;
+ }else if( n==P3_KEYINFO ){
+ KeyInfo *pKeyInfo;
+ int nField, nByte;
+
+ nField = ((KeyInfo*)zP3)->nField;
+ nByte = sizeof(*pKeyInfo) + (nField-1)*sizeof(pKeyInfo->aColl[0]) + nField;
+ pKeyInfo = sqliteMallocRaw( nByte );
+ pOp->p3 = (char*)pKeyInfo;
+ if( pKeyInfo ){
+ unsigned char *aSortOrder;
+ memcpy(pKeyInfo, zP3, nByte);
+ aSortOrder = pKeyInfo->aSortOrder;
+ if( aSortOrder ){
+ pKeyInfo->aSortOrder = (unsigned char*)&pKeyInfo->aColl[nField];
+ memcpy(pKeyInfo->aSortOrder, aSortOrder, nField);
+ }
+ pOp->p3type = P3_KEYINFO;
+ }else{
+ pOp->p3type = P3_NOTUSED;
+ }
+ }else if( n==P3_KEYINFO_HANDOFF ){
+ pOp->p3 = (char*)zP3;
+ pOp->p3type = P3_KEYINFO;
+ }else if( n<0 ){
+ pOp->p3 = (char*)zP3;
+ pOp->p3type = n;
+ }else{
+ if( n==0 ) n = strlen(zP3);
+ pOp->p3 = sqliteStrNDup(zP3, n);
+ pOp->p3type = P3_DYNAMIC;
+ }
+}
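+
+/*
+** A brief sketch (editorial illustration, not part of this commit) of the
+** P3 conventions documented above, assuming "v" is a Vdbe under construction
+** and "addr" is the address of an instruction that has just been coded.
+** The helper name is hypothetical.
+*/
+#if 0 /* illustration only */
+static void exampleChangeP3Usage(Vdbe *v, int addr, KeyInfo *pKey){
+ /* n==0: copy the string up to its NUL into sqliteMalloc()ed memory */
+ sqlite3VdbeChangeP3(v, addr, "main", 0);
+
+ /* n==P3_STATIC (n<0): the string outlives the Vdbe, keep the pointer */
+ sqlite3VdbeChangeP3(v, addr, "rowid", P3_STATIC);
+
+ /* n==P3_KEYINFO: a private copy of the KeyInfo is made and freed later */
+ sqlite3VdbeChangeP3(v, addr, (const char*)pKey, P3_KEYINFO);
+
+ /* addr<0 targets the most recently coded instruction */
+ sqlite3VdbeChangeP3(v, -1, "a comment", P3_STATIC);
+}
+#endif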
+
+#ifndef NDEBUG
+/*
+** Replace the P3 field of the most recently coded instruction with
+** comment text.
+*/
+void sqlite3VdbeComment(Vdbe *p, const char *zFormat, ...){
+ va_list ap;
+ assert( p->nOp>0 );
+ assert( p->aOp==0 || p->aOp[p->nOp-1].p3==0
+ || sqlite3MallocFailed() );
+ va_start(ap, zFormat);
+ sqlite3VdbeChangeP3(p, -1, sqlite3VMPrintf(zFormat, ap), P3_DYNAMIC);
+ va_end(ap);
+}
+#endif
+
+/*
+** Return the opcode for a given address.
+*/
+VdbeOp *sqlite3VdbeGetOp(Vdbe *p, int addr){
+ assert( p->magic==VDBE_MAGIC_INIT );
+ assert( addr>=0 && addr<p->nOp );
+ return &p->aOp[addr];
+}
+
+#if !defined(SQLITE_OMIT_EXPLAIN) || !defined(NDEBUG) \
+ || defined(VDBE_PROFILE) || defined(SQLITE_DEBUG)
+/*
+** Compute a string that describes the P3 parameter for an opcode.
+** Use zTemp for any required temporary buffer space.
+*/
+static char *displayP3(Op *pOp, char *zTemp, int nTemp){
+ char *zP3;
+ assert( nTemp>=20 );
+ switch( pOp->p3type ){
+ case P3_KEYINFO: {
+ int i, j;
+ KeyInfo *pKeyInfo = (KeyInfo*)pOp->p3;
+ sprintf(zTemp, "keyinfo(%d", pKeyInfo->nField);
+ i = strlen(zTemp);
+ for(j=0; j<pKeyInfo->nField; j++){
+ CollSeq *pColl = pKeyInfo->aColl[j];
+ if( pColl ){
+ int n = strlen(pColl->zName);
+ if( i+n>nTemp-6 ){
+ strcpy(&zTemp[i],",...");
+ break;
+ }
+ zTemp[i++] = ',';
+ if( pKeyInfo->aSortOrder && pKeyInfo->aSortOrder[j] ){
+ zTemp[i++] = '-';
+ }
+ strcpy(&zTemp[i], pColl->zName);
+ i += n;
+ }else if( i+4<nTemp-6 ){
+ strcpy(&zTemp[i],",nil");
+ i += 4;
+ }
+ }
+ zTemp[i++] = ')';
+ zTemp[i] = 0;
+ assert( i<nTemp );
+ zP3 = zTemp;
+ break;
+ }
+ case P3_COLLSEQ: {
+ CollSeq *pColl = (CollSeq*)pOp->p3;
+ sprintf(zTemp, "collseq(%.20s)", pColl->zName);
+ zP3 = zTemp;
+ break;
+ }
+ case P3_FUNCDEF: {
+ FuncDef *pDef = (FuncDef*)pOp->p3;
+ sqlite3_snprintf(nTemp, zTemp, "%s(%d)", pDef->zName, pDef->nArg);
+ zP3 = zTemp;
+ break;
+ }
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ case P3_VTAB: {
+ sqlite3_vtab *pVtab = (sqlite3_vtab*)pOp->p3;
+ sqlite3_snprintf(nTemp, zTemp, "vtab:%p:%p", pVtab, pVtab->pModule);
+ zP3 = zTemp;
+ break;
+ }
+#endif
+ default: {
+ zP3 = pOp->p3;
+ if( zP3==0 || pOp->opcode==OP_Noop ){
+ zP3 = "";
+ }
+ }
+ }
+ assert( zP3!=0 );
+ return zP3;
+}
+#endif
+
+
+#if defined(VDBE_PROFILE) || defined(SQLITE_DEBUG)
+/*
+** Print a single opcode. This routine is used for debugging only.
+*/
+void sqlite3VdbePrintOp(FILE *pOut, int pc, Op *pOp){
+ char *zP3;
+ char zPtr[50];
+ static const char *zFormat1 = "%4d %-13s %4d %4d %s\n";
+ if( pOut==0 ) pOut = stdout;
+ zP3 = displayP3(pOp, zPtr, sizeof(zPtr));
+ fprintf(pOut, zFormat1,
+ pc, sqlite3OpcodeNames[pOp->opcode], pOp->p1, pOp->p2, zP3);
+ fflush(pOut);
+}
+#endif
+
+/*
+** Release an array of N Mem elements
+*/
+static void releaseMemArray(Mem *p, int N){
+ if( p ){
+ while( N-->0 ){
+ sqlite3VdbeMemRelease(p++);
+ }
+ }
+}
+
+#ifndef SQLITE_OMIT_EXPLAIN
+/*
+** Give a listing of the program in the virtual machine.
+**
+** The interface is the same as sqlite3VdbeExec(). But instead of
+** running the code, it invokes the callback once for each instruction.
+** This feature is used to implement "EXPLAIN".
+*/
+int sqlite3VdbeList(
+ Vdbe *p /* The VDBE */
+){
+ sqlite3 *db = p->db;
+ int i;
+ int rc = SQLITE_OK;
+
+ assert( p->explain );
+ if( p->magic!=VDBE_MAGIC_RUN ) return SQLITE_MISUSE;
+ assert( db->magic==SQLITE_MAGIC_BUSY );
+ assert( p->rc==SQLITE_OK || p->rc==SQLITE_BUSY );
+
+ /* Even though this opcode does not put dynamic strings onto the
+ ** stack, they may become dynamic if the user calls
+ ** sqlite3_column_text16(), causing a translation to UTF-16 encoding.
+ */
+ if( p->pTos==&p->aStack[4] ){
+ releaseMemArray(p->aStack, 5);
+ }
+ p->resOnStack = 0;
+
+ do{
+ i = p->pc++;
+ }while( i<p->nOp && p->explain==2 && p->aOp[i].opcode!=OP_Explain );
+ if( i>=p->nOp ){
+ p->rc = SQLITE_OK;
+ rc = SQLITE_DONE;
+ }else if( db->u1.isInterrupted ){
+ p->rc = SQLITE_INTERRUPT;
+ rc = SQLITE_ERROR;
+ sqlite3SetString(&p->zErrMsg, sqlite3ErrStr(p->rc), (char*)0);
+ }else{
+ Op *pOp = &p->aOp[i];
+ Mem *pMem = p->aStack;
+ pMem->flags = MEM_Int;
+ pMem->type = SQLITE_INTEGER;
+ pMem->i = i; /* Program counter */
+ pMem++;
+
+ pMem->flags = MEM_Static|MEM_Str|MEM_Term;
+ pMem->z = sqlite3OpcodeNames[pOp->opcode]; /* Opcode */
+ assert( pMem->z!=0 );
+ pMem->n = strlen(pMem->z);
+ pMem->type = SQLITE_TEXT;
+ pMem->enc = SQLITE_UTF8;
+ pMem++;
+
+ pMem->flags = MEM_Int;
+ pMem->i = pOp->p1; /* P1 */
+ pMem->type = SQLITE_INTEGER;
+ pMem++;
+
+ pMem->flags = MEM_Int;
+ pMem->i = pOp->p2; /* P2 */
+ pMem->type = SQLITE_INTEGER;
+ pMem++;
+
+ pMem->flags = MEM_Ephem|MEM_Str|MEM_Term; /* P3 */
+ pMem->z = displayP3(pOp, pMem->zShort, sizeof(pMem->zShort));
+ assert( pMem->z!=0 );
+ pMem->n = strlen(pMem->z);
+ pMem->type = SQLITE_TEXT;
+ pMem->enc = SQLITE_UTF8;
+
+ p->nResColumn = 5 - 2*(p->explain-1);
+ p->pTos = pMem;
+ p->rc = SQLITE_OK;
+ p->resOnStack = 1;
+ rc = SQLITE_ROW;
+ }
+ return rc;
+}
+#endif /* SQLITE_OMIT_EXPLAIN */
+
+/*
+** Print the SQL that was used to generate a VDBE program.
+*/
+void sqlite3VdbePrintSql(Vdbe *p){
+#ifdef SQLITE_DEBUG
+ int nOp = p->nOp;
+ VdbeOp *pOp;
+ if( nOp<1 ) return;
+ pOp = &p->aOp[nOp-1];
+ if( pOp->opcode==OP_Noop && pOp->p3!=0 ){
+ const char *z = pOp->p3;
+ while( isspace(*(u8*)z) ) z++;
+ printf("SQL: [%s]\n", z);
+ }
+#endif
+}
+
+/*
+** Prepare a virtual machine for execution. This involves things such
+** as allocating stack space and initializing the program counter.
+** After the VDBE has been prepped, it can be executed by one or more
+** calls to sqlite3VdbeExec().
+**
+** This is the only way to move a VDBE from VDBE_MAGIC_INIT to
+** VDBE_MAGIC_RUN.
+*/
+void sqlite3VdbeMakeReady(
+ Vdbe *p, /* The VDBE */
+ int nVar, /* Number of '?' seen in the SQL statement */
+ int nMem, /* Number of memory cells to allocate */
+ int nCursor, /* Number of cursors to allocate */
+ int isExplain /* True if the EXPLAIN keyword is present */
+){
+ int n;
+
+ assert( p!=0 );
+ assert( p->magic==VDBE_MAGIC_INIT );
+
+ /* There should be at least one opcode.
+ */
+ assert( p->nOp>0 );
+
+ /* Set the magic to VDBE_MAGIC_RUN sooner rather than later. This
+ * is because the call to resizeOpArray() below may shrink the
+ * p->aOp[] array to save memory if called when in VDBE_MAGIC_RUN
+ * state.
+ */
+ p->magic = VDBE_MAGIC_RUN;
+
+ /* No instruction ever pushes more than a single element onto the
+ ** stack. And the stack never grows on successive executions of the
+ ** same loop. So the total number of instructions is an upper bound
+ ** on the maximum stack depth required. (Added later:) The
+ ** resolveP2Values() call computes a tighter upper bound on the
+ ** stack size.
+ **
+ ** Allocate all the stack space we will ever need.
+ */
+ if( p->aStack==0 ){
+ int nArg; /* Maximum number of args passed to a user function. */
+ int nStack; /* Maximum number of stack entries required */
+ resolveP2Values(p, &nArg, &nStack);
+ resizeOpArray(p, p->nOp);
+ assert( nVar>=0 );
+ assert( nStack<p->nOp );
+ if( isExplain ){
+ nStack = 10;
+ }
+ p->aStack = sqliteMalloc(
+ nStack*sizeof(p->aStack[0]) /* aStack */
+ + nArg*sizeof(Mem*) /* apArg */
+ + nVar*sizeof(Mem) /* aVar */
+ + nVar*sizeof(char*) /* azVar */
+ + nMem*sizeof(Mem) /* aMem */
+ + nCursor*sizeof(Cursor*) /* apCsr */
+ );
+ if( !sqlite3MallocFailed() ){
+ p->aMem = &p->aStack[nStack];
+ p->nMem = nMem;
+ p->aVar = &p->aMem[nMem];
+ p->nVar = nVar;
+ p->okVar = 0;
+ p->apArg = (Mem**)&p->aVar[nVar];
+ p->azVar = (char**)&p->apArg[nArg];
+ p->apCsr = (Cursor**)&p->azVar[nVar];
+ p->nCursor = nCursor;
+ for(n=0; n<nVar; n++){
+ p->aVar[n].flags = MEM_Null;
+ }
+ }
+ }
+ for(n=0; n<p->nMem; n++){
+ p->aMem[n].flags = MEM_Null;
+ }
+
+#ifdef SQLITE_DEBUG
+ if( (p->db->flags & SQLITE_VdbeListing)!=0
+ || sqlite3OsFileExists("vdbe_explain")
+ ){
+ int i;
+ printf("VDBE Program Listing:\n");
+ sqlite3VdbePrintSql(p);
+ for(i=0; i<p->nOp; i++){
+ sqlite3VdbePrintOp(stdout, i, &p->aOp[i]);
+ }
+ }
+ if( sqlite3OsFileExists("vdbe_trace") ){
+ p->trace = stdout;
+ }
+#endif
+ p->pTos = &p->aStack[-1];
+ p->pc = -1;
+ p->rc = SQLITE_OK;
+ p->uniqueCnt = 0;
+ p->returnDepth = 0;
+ p->errorAction = OE_Abort;
+ p->popStack = 0;
+ p->explain |= isExplain;
+ p->magic = VDBE_MAGIC_RUN;
+ p->nChange = 0;
+ p->cacheCtr = 1;
+ p->minWriteFileFormat = 255;
+#ifdef VDBE_PROFILE
+ {
+ int i;
+ for(i=0; i<p->nOp; i++){
+ p->aOp[i].cnt = 0;
+ p->aOp[i].cycles = 0;
+ }
+ }
+#endif
+}
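+
+/*
+** A standalone sketch (editorial illustration, not part of this commit) of
+** the single-allocation layout used above: one sqliteMalloc() call is carved
+** into the stack, variable and memory-cell arrays by pointer arithmetic.
+** Plain malloc() and a stand-in Cell type are assumptions used only to keep
+** the fragment self-contained.
+*/
+#if 0 /* illustration only */
+#include <stdlib.h>
+static int exampleSingleAllocation(int nStack, int nVar, int nMem){
+ typedef struct Cell { int dummy; } Cell; /* stand-in for Mem */
+ Cell *aStack, *aVar, *aMem;
+ char *buf = malloc( (nStack+nVar+nMem)*sizeof(Cell) );
+ if( buf==0 ) return 1; /* analogue of SQLITE_NOMEM */
+ aStack = (Cell*)buf; /* first region */
+ aVar = &aStack[nStack]; /* starts right after the stack */
+ aMem = &aVar[nVar]; /* and after the variables */
+ (void)aMem;
+ free(buf); /* one free releases every region */
+ return 0;
+}
+#endif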
+
+/*
+** Close a cursor and release all the resources that cursor happens
+** to hold.
+*/
+void sqlite3VdbeFreeCursor(Vdbe *p, Cursor *pCx){
+ if( pCx==0 ){
+ return;
+ }
+ if( pCx->pCursor ){
+ sqlite3BtreeCloseCursor(pCx->pCursor);
+ }
+ if( pCx->pBt ){
+ sqlite3BtreeClose(pCx->pBt);
+ }
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( pCx->pVtabCursor ){
+ sqlite3_vtab_cursor *pVtabCursor = pCx->pVtabCursor;
+ const sqlite3_module *pModule = pCx->pModule;
+ p->inVtabMethod = 1;
+ sqlite3SafetyOff(p->db);
+ pModule->xClose(pVtabCursor);
+ sqlite3SafetyOn(p->db);
+ p->inVtabMethod = 0;
+ }
+#endif
+ sqliteFree(pCx->pData);
+ sqliteFree(pCx->aType);
+ sqliteFree(pCx);
+}
+
+/*
+** Close all cursors
+*/
+static void closeAllCursors(Vdbe *p){
+ int i;
+ if( p->apCsr==0 ) return;
+ for(i=0; i<p->nCursor; i++){
+ if( !p->inVtabMethod || (p->apCsr[i] && !p->apCsr[i]->pVtabCursor) ){
+ sqlite3VdbeFreeCursor(p, p->apCsr[i]);
+ p->apCsr[i] = 0;
+ }
+ }
+}
+
+/*
+** Clean up the VM after execution.
+**
+** This routine will automatically close any cursors, lists, and/or
+** sorters that were left open. It also deletes the values of
+** variables in the aVar[] array.
+*/
+static void Cleanup(Vdbe *p){
+ int i;
+ if( p->aStack ){
+ releaseMemArray(p->aStack, 1 + (p->pTos - p->aStack));
+ p->pTos = &p->aStack[-1];
+ }
+ closeAllCursors(p);
+ releaseMemArray(p->aMem, p->nMem);
+ sqlite3VdbeFifoClear(&p->sFifo);
+ if( p->contextStack ){
+ for(i=0; i<p->contextStackTop; i++){
+ sqlite3VdbeFifoClear(&p->contextStack[i].sFifo);
+ }
+ sqliteFree(p->contextStack);
+ }
+ p->contextStack = 0;
+ p->contextStackDepth = 0;
+ p->contextStackTop = 0;
+ sqliteFree(p->zErrMsg);
+ p->zErrMsg = 0;
+}
+
+/*
+** Set the number of result columns that will be returned by this SQL
+** statement. This is now set at compile time, rather than during
+** execution of the vdbe program so that sqlite3_column_count() can
+** be called on an SQL statement before sqlite3_step().
+*/
+void sqlite3VdbeSetNumCols(Vdbe *p, int nResColumn){
+ Mem *pColName;
+ int n;
+ releaseMemArray(p->aColName, p->nResColumn*COLNAME_N);
+ sqliteFree(p->aColName);
+ n = nResColumn*COLNAME_N;
+ p->nResColumn = nResColumn;
+ p->aColName = pColName = (Mem*)sqliteMalloc( sizeof(Mem)*n );
+ if( p->aColName==0 ) return;
+ while( n-- > 0 ){
+ (pColName++)->flags = MEM_Null;
+ }
+}
+
+/*
+** Set the name of the idx'th column to be returned by the SQL statement.
+** zName must be a pointer to a nul terminated string.
+**
+** This call must be made after a call to sqlite3VdbeSetNumCols().
+**
+** If N==P3_STATIC it means that zName is a pointer to a constant static
+** string and we can just copy the pointer. If it is P3_DYNAMIC, then
+** the string is freed using sqliteFree() when the vdbe is finished with
+** it. Otherwise, N bytes of zName are copied.
+*/
+int sqlite3VdbeSetColName(Vdbe *p, int idx, int var, const char *zName, int N){
+ int rc;
+ Mem *pColName;
+ assert( idx<p->nResColumn );
+ assert( var<COLNAME_N );
+ if( sqlite3MallocFailed() ) return SQLITE_NOMEM;
+ assert( p->aColName!=0 );
+ pColName = &(p->aColName[idx+var*p->nResColumn]);
+ if( N==P3_DYNAMIC || N==P3_STATIC ){
+ rc = sqlite3VdbeMemSetStr(pColName, zName, -1, SQLITE_UTF8, SQLITE_STATIC);
+ }else{
+ rc = sqlite3VdbeMemSetStr(pColName, zName, N, SQLITE_UTF8,SQLITE_TRANSIENT);
+ }
+ if( rc==SQLITE_OK && N==P3_DYNAMIC ){
+ pColName->flags = (pColName->flags&(~MEM_Static))|MEM_Dyn;
+ pColName->xDel = 0;
+ }
+ return rc;
+}
+
+/*
+** A read or write transaction may or may not be active on database handle
+** db. If a transaction is active, commit it. If there is a
+** write-transaction spanning more than one database file, this routine
+** takes care of the master journal trickery.
+*/
+static int vdbeCommit(sqlite3 *db){
+ int i;
+ int nTrans = 0; /* Number of databases with an active write-transaction */
+ int rc = SQLITE_OK;
+ int needXcommit = 0;
+
+ /* Before doing anything else, call the xSync() callback for any
+ ** virtual module tables written in this transaction. This has to
+ ** be done before determining whether a master journal file is
+ ** required, as an xSync() callback may add an attached database
+ ** to the transaction.
+ */
+ rc = sqlite3VtabSync(db, rc);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ /* This loop determines (a) if the commit hook should be invoked and
+ ** (b) how many database files have open write transactions, not
+ ** including the temp database. (b) is important because if more than
+ ** one database file has an open write transaction, a master journal
+ ** file is required for an atomic commit.
+ */
+ for(i=0; i<db->nDb; i++){
+ Btree *pBt = db->aDb[i].pBt;
+ if( pBt && sqlite3BtreeIsInTrans(pBt) ){
+ needXcommit = 1;
+ if( i!=1 ) nTrans++;
+ }
+ }
+
+ /* If there are any write-transactions at all, invoke the commit hook */
+ if( needXcommit && db->xCommitCallback ){
+ sqlite3SafetyOff(db);
+ rc = db->xCommitCallback(db->pCommitArg);
+ sqlite3SafetyOn(db);
+ if( rc ){
+ return SQLITE_CONSTRAINT;
+ }
+ }
+
+ /* The simple case - no more than one database file (not counting the
+ ** TEMP database) has a transaction active. There is no need for the
+ ** master-journal.
+ **
+ ** If the return value of sqlite3BtreeGetFilename() is a zero length
+ ** string, it means the main database is :memory:. In that case we do
+ ** not support atomic multi-file commits, so use the simple case then
+ ** too.
+ */
+ if( 0==strlen(sqlite3BtreeGetFilename(db->aDb[0].pBt)) || nTrans<=1 ){
+ for(i=0; rc==SQLITE_OK && i<db->nDb; i++){
+ Btree *pBt = db->aDb[i].pBt;
+ if( pBt ){
+ rc = sqlite3BtreeSync(pBt, 0);
+ }
+ }
+
+ /* Do the commit only if all databases successfully synced */
+ if( rc==SQLITE_OK ){
+ for(i=0; i<db->nDb; i++){
+ Btree *pBt = db->aDb[i].pBt;
+ if( pBt ){
+ sqlite3BtreeCommit(pBt);
+ }
+ }
+ sqlite3VtabCommit(db);
+ }
+ }
+
+ /* The complex case - There is a multi-file write-transaction active.
+ ** This requires a master journal file to ensure the transaction is
+** committed atomically.
+ */
+#ifndef SQLITE_OMIT_DISKIO
+ else{
+ int needSync = 0;
+ char *zMaster = 0; /* File-name for the master journal */
+ char const *zMainFile = sqlite3BtreeGetFilename(db->aDb[0].pBt);
+ OsFile *master = 0;
+
+ /* Select a master journal file name */
+ do {
+ u32 random;
+ sqliteFree(zMaster);
+ sqlite3Randomness(sizeof(random), &random);
+ zMaster = sqlite3MPrintf("%s-mj%08X", zMainFile, random&0x7fffffff);
+ if( !zMaster ){
+ return SQLITE_NOMEM;
+ }
+ }while( sqlite3OsFileExists(zMaster) );
+
+ /* Open the master journal. */
+ rc = sqlite3OsOpenExclusive(zMaster, &master, 0);
+ if( rc!=SQLITE_OK ){
+ sqliteFree(zMaster);
+ return rc;
+ }
+
+ /* Write the name of each database file in the transaction into the new
+ ** master journal file. If an error occurs at this point, close
+ ** and delete the master journal file. All the individual journal files
+ ** still have 'null' as the master journal pointer, so they will roll
+ ** back independently if a failure occurs.
+ */
+ for(i=0; i<db->nDb; i++){
+ Btree *pBt = db->aDb[i].pBt;
+ if( i==1 ) continue; /* Ignore the TEMP database */
+ if( pBt && sqlite3BtreeIsInTrans(pBt) ){
+ char const *zFile = sqlite3BtreeGetJournalname(pBt);
+ if( zFile[0]==0 ) continue; /* Ignore :memory: databases */
+ if( !needSync && !sqlite3BtreeSyncDisabled(pBt) ){
+ needSync = 1;
+ }
+ rc = sqlite3OsWrite(master, zFile, strlen(zFile)+1);
+ if( rc!=SQLITE_OK ){
+ sqlite3OsClose(&master);
+ sqlite3OsDelete(zMaster);
+ sqliteFree(zMaster);
+ return rc;
+ }
+ }
+ }
+
+
+ /* Sync the master journal file. Before doing this, open the directory
+ ** the master journal file is stored in so that it gets synced too.
+ */
+ zMainFile = sqlite3BtreeGetDirname(db->aDb[0].pBt);
+ rc = sqlite3OsOpenDirectory(master, zMainFile);
+ if( rc!=SQLITE_OK ||
+ (needSync && (rc=sqlite3OsSync(master,0))!=SQLITE_OK) ){
+ sqlite3OsClose(&master);
+ sqlite3OsDelete(zMaster);
+ sqliteFree(zMaster);
+ return rc;
+ }
+
+ /* Sync all the db files involved in the transaction. The same call
+ ** sets the master journal pointer in each individual journal. If
+ ** an error occurs here, do not delete the master journal file.
+ **
+ ** If the error occurs during the first call to sqlite3BtreeSync(),
+ ** then there is a chance that the master journal file will be
+ ** orphaned. But we cannot delete it, in case the master journal
+ ** file name was written into the journal file before the failure
+ ** occurred.
+ */
+ for(i=0; rc==SQLITE_OK && i<db->nDb; i++){
+ Btree *pBt = db->aDb[i].pBt;
+ if( pBt && sqlite3BtreeIsInTrans(pBt) ){
+ rc = sqlite3BtreeSync(pBt, zMaster);
+ }
+ }
+ sqlite3OsClose(&master);
+ if( rc!=SQLITE_OK ){
+ sqliteFree(zMaster);
+ return rc;
+ }
+
+ /* Delete the master journal file. This commits the transaction. After
+ ** doing this the directory is synced again before any individual
+ ** transaction files are deleted.
+ */
+ rc = sqlite3OsDelete(zMaster);
+ if( rc ){
+ return rc;
+ }
+ sqliteFree(zMaster);
+ zMaster = 0;
+ rc = sqlite3OsSyncDirectory(zMainFile);
+ if( rc!=SQLITE_OK ){
+ /* This is not good. The master journal file has been deleted, but
+ ** the directory sync failed. There is no completely safe course of
+ ** action from here. The individual journals contain the name of the
+ ** master journal file, but there is no way of knowing if that
+ ** master journal exists now or if it will exist after the operating
+ ** system crash that may follow the fsync() failure.
+ */
+ return rc;
+ }
+
+ /* All files and directories have already been synced, so the following
+ ** calls to sqlite3BtreeCommit() are only closing files and deleting
+ ** journals. If something goes wrong while this is happening we don't
+ ** really care. The integrity of the transaction is already guaranteed,
+ ** but some stray 'cold' journals may be lying around. Returning an
+ ** error code won't help matters.
+ */
+ for(i=0; i<db->nDb; i++){
+ Btree *pBt = db->aDb[i].pBt;
+ if( pBt ){
+ sqlite3BtreeCommit(pBt);
+ }
+ }
+ sqlite3VtabCommit(db);
+ }
+#endif
+
+ return rc;
+}
+
+/*
+** This routine checks that the sqlite3.activeVdbeCnt count variable
+** matches the number of vdbe's in the list sqlite3.pVdbe that are
+** currently active. An assertion fails if the two counts do not match.
+** This is an internal self-check only - it is not an essential processing
+** step.
+**
+** This is a no-op if NDEBUG is defined.
+*/
+#ifndef NDEBUG
+static void checkActiveVdbeCnt(sqlite3 *db){
+ Vdbe *p;
+ int cnt = 0;
+ p = db->pVdbe;
+ while( p ){
+ if( p->magic==VDBE_MAGIC_RUN && p->pc>=0 ){
+ cnt++;
+ }
+ p = p->pNext;
+ }
+ assert( cnt==db->activeVdbeCnt );
+}
+#else
+#define checkActiveVdbeCnt(x)
+#endif
+
+/*
+** Find every active VM other than pVdbe and change its status to
+** aborted. This happens when one VM causes a rollback due to an
+** ON CONFLICT ROLLBACK clause (for example). The other VMs must be
+** aborted so that they do not have data rolled out from underneath
+** them leading to a segfault.
+*/
+void sqlite3AbortOtherActiveVdbes(sqlite3 *db, Vdbe *pExcept){
+ Vdbe *pOther;
+ for(pOther=db->pVdbe; pOther; pOther=pOther->pNext){
+ if( pOther==pExcept ) continue;
+ if( pOther->magic!=VDBE_MAGIC_RUN || pOther->pc<0 ) continue;
+ checkActiveVdbeCnt(db);
+ closeAllCursors(pOther);
+ checkActiveVdbeCnt(db);
+ pOther->aborted = 1;
+ }
+}
+
+/*
+** This routine is called when a VDBE tries to halt. If the VDBE
+** has made changes and is in autocommit mode, then commit those
+** changes. If a rollback is needed, then do the rollback.
+**
+** This routine is the only way to move the state of a VM from
+** SQLITE_MAGIC_RUN to SQLITE_MAGIC_HALT.
+**
+** Return an error code. If the commit could not complete because of
+** lock contention, return SQLITE_BUSY. If SQLITE_BUSY is returned, it
+** means the close did not happen and needs to be repeated.
+*/
+int sqlite3VdbeHalt(Vdbe *p){
+ sqlite3 *db = p->db;
+ int i;
+ int (*xFunc)(Btree *pBt) = 0; /* Function to call on each btree backend */
+ int isSpecialError; /* Set to true if SQLITE_NOMEM or IOERR */
+
+ /* This function contains the logic that determines if a statement or
+ ** transaction will be committed or rolled back as a result of the
+ ** execution of this virtual machine.
+ **
+ ** Special errors:
+ **
+ ** If an SQLITE_NOMEM error has occurred in a statement that writes to
+ ** the database, then either a statement or transaction must be rolled
+ ** back to ensure the tree-structures are in a consistent state. A
+ ** statement transaction is rolled back if one is open, otherwise the
+ ** entire transaction must be rolled back.
+ **
+ ** If an SQLITE_IOERR error has occurred in a statement that writes to
+ ** the database, then the entire transaction must be rolled back. The
+ ** I/O error may have caused garbage to be written to the journal
+ ** file. Were the transaction to continue and eventually be rolled
+ ** back that garbage might end up in the database file.
+ **
+ ** In both of the above cases, the Vdbe.errorAction variable is
+ ** ignored. If the sqlite3.autoCommit flag is false and a transaction
+ ** is rolled back, it will be set to true.
+ **
+ ** Other errors:
+ **
+ ** No error:
+ **
+ */
+
+ if( sqlite3MallocFailed() ){
+ p->rc = SQLITE_NOMEM;
+ }
+ if( p->magic!=VDBE_MAGIC_RUN ){
+ /* Already halted. Nothing to do. */
+ assert( p->magic==VDBE_MAGIC_HALT );
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ closeAllCursors(p);
+#endif
+ return SQLITE_OK;
+ }
+ closeAllCursors(p);
+ checkActiveVdbeCnt(db);
+
+ /* No commit or rollback needed if the program never started */
+ if( p->pc>=0 ){
+ int mrc; /* Primary error code from p->rc */
+ /* Check for one of the special errors - SQLITE_NOMEM or SQLITE_IOERR */
+ mrc = p->rc & 0xff;
+ isSpecialError = ((mrc==SQLITE_NOMEM || mrc==SQLITE_IOERR)?1:0);
+ if( isSpecialError ){
+ /* This loop does static analysis of the query to see which of the
+ ** following three categories it falls into:
+ **
+ ** Read-only
+ ** Query with statement journal
+ ** Query without statement journal
+ **
+ ** We could do something more elegant than this static analysis (i.e.
+ ** store the type of query as part of the compilation phase), but
+ ** handling malloc() or IO failure is a fairly obscure edge case so
+ ** this is probably easier. Todo: Might be an opportunity to reduce
+ ** code size a very small amount though...
+ */
+ int isReadOnly = 1;
+ int isStatement = 0;
+ assert(p->aOp || p->nOp==0);
+ for(i=0; i<p->nOp; i++){
+ switch( p->aOp[i].opcode ){
+ case OP_Transaction:
+ isReadOnly = 0;
+ break;
+ case OP_Statement:
+ isStatement = 1;
+ break;
+ }
+ }
+
+ /* If the query was read-only, we need do no rollback at all. Otherwise,
+ ** proceed with the special handling.
+ */
+ if( !isReadOnly ){
+ if( p->rc==SQLITE_NOMEM && isStatement ){
+ xFunc = sqlite3BtreeRollbackStmt;
+ }else{
+ /* We are forced to roll back the active transaction. Before doing
+ ** so, abort any other statements this handle currently has active.
+ */
+ sqlite3AbortOtherActiveVdbes(db, p);
+ sqlite3RollbackAll(db);
+ db->autoCommit = 1;
+ }
+ }
+ }
+
+ /* If the auto-commit flag is set and this is the only active vdbe, then
+ ** we do either a commit or rollback of the current transaction.
+ **
+ ** Note: This block also runs if one of the special errors handled
+ ** above has occurred.
+ */
+ if( db->autoCommit && db->activeVdbeCnt==1 ){
+ if( p->rc==SQLITE_OK || (p->errorAction==OE_Fail && !isSpecialError) ){
+ /* The auto-commit flag is true, and the vdbe program was
+ ** successful or hit an 'OR FAIL' constraint. This means a commit
+ ** is required.
+ */
+ int rc = vdbeCommit(db);
+ if( rc==SQLITE_BUSY ){
+ return SQLITE_BUSY;
+ }else if( rc!=SQLITE_OK ){
+ p->rc = rc;
+ sqlite3RollbackAll(db);
+ }else{
+ sqlite3CommitInternalChanges(db);
+ }
+ }else{
+ sqlite3RollbackAll(db);
+ }
+ }else if( !xFunc ){
+ if( p->rc==SQLITE_OK || p->errorAction==OE_Fail ){
+ xFunc = sqlite3BtreeCommitStmt;
+ }else if( p->errorAction==OE_Abort ){
+ xFunc = sqlite3BtreeRollbackStmt;
+ }else{
+ sqlite3AbortOtherActiveVdbes(db, p);
+ sqlite3RollbackAll(db);
+ db->autoCommit = 1;
+ }
+ }
+
+ /* If xFunc is not NULL, then it is one of sqlite3BtreeRollbackStmt or
+ ** sqlite3BtreeCommitStmt. Call it once on each backend. If an error occurs
+ ** and the return code is still SQLITE_OK, set the return code to the new
+ ** error value.
+ */
+ assert(!xFunc ||
+ xFunc==sqlite3BtreeCommitStmt ||
+ xFunc==sqlite3BtreeRollbackStmt
+ );
+ for(i=0; xFunc && i<db->nDb; i++){
+ int rc;
+ Btree *pBt = db->aDb[i].pBt;
+ if( pBt ){
+ rc = xFunc(pBt);
+ if( rc && (p->rc==SQLITE_OK || p->rc==SQLITE_CONSTRAINT) ){
+ p->rc = rc;
+ sqlite3SetString(&p->zErrMsg, 0);
+ }
+ }
+ }
+
+ /* If this was an INSERT, UPDATE or DELETE and the statement was committed,
+ ** set the change counter.
+ */
+ if( p->changeCntOn && p->pc>=0 ){
+ if( !xFunc || xFunc==sqlite3BtreeCommitStmt ){
+ sqlite3VdbeSetChanges(db, p->nChange);
+ }else{
+ sqlite3VdbeSetChanges(db, 0);
+ }
+ p->nChange = 0;
+ }
+
+ /* Rollback or commit any schema changes that occurred. */
+ if( p->rc!=SQLITE_OK && db->flags&SQLITE_InternChanges ){
+ sqlite3ResetInternalSchema(db, 0);
+ db->flags = (db->flags | SQLITE_InternChanges);
+ }
+ }
+
+ /* We have successfully halted and closed the VM. Record this fact. */
+ if( p->pc>=0 ){
+ db->activeVdbeCnt--;
+ }
+ p->magic = VDBE_MAGIC_HALT;
+ checkActiveVdbeCnt(db);
+
+ return SQLITE_OK;
+}
+
+/*
+** Clean up a VDBE after execution but do not delete the VDBE just yet.
+** Write any error messages into *pzErrMsg. Return the result code.
+**
+** After this routine is run, the VDBE should be ready to be executed
+** again.
+**
+** To look at it another way, this routine resets the state of the
+** virtual machine from VDBE_MAGIC_RUN or VDBE_MAGIC_HALT back to
+** VDBE_MAGIC_INIT.
+*/
+int sqlite3VdbeReset(Vdbe *p){
+ sqlite3 *db;
+ if( p->magic!=VDBE_MAGIC_RUN && p->magic!=VDBE_MAGIC_HALT ){
+ sqlite3Error(p->db, SQLITE_MISUSE, 0);
+ return SQLITE_MISUSE;
+ }
+ db = p->db;
+
+ /* If the VM did not run to completion or if it encountered an
+ ** error, then it might not have been halted properly. So halt
+ ** it now.
+ */
+ sqlite3SafetyOn(db);
+ sqlite3VdbeHalt(p);
+ sqlite3SafetyOff(db);
+
+ /* If the VDBE has been run even partially, then transfer the error code
+ ** and error message from the VDBE into the main database structure. But
+ ** if the VDBE has just been set to run but has not actually executed any
+ ** instructions yet, leave the main database error information unchanged.
+ */
+ if( p->pc>=0 ){
+ if( p->zErrMsg ){
+ sqlite3ValueSetStr(db->pErr, -1, p->zErrMsg, SQLITE_UTF8, sqlite3FreeX);
+ db->errCode = p->rc;
+ p->zErrMsg = 0;
+ }else if( p->rc ){
+ sqlite3Error(db, p->rc, 0);
+ }else{
+ sqlite3Error(db, SQLITE_OK, 0);
+ }
+ }else if( p->rc && p->expired ){
+ /* The expired flag was set on the VDBE before the first call
+ ** to sqlite3_step(). For consistency (since sqlite3_step() was
+ ** called), set the database error in this case as well.
+ */
+ sqlite3Error(db, p->rc, 0);
+ }
+
+ /* Reclaim all memory used by the VDBE
+ */
+ Cleanup(p);
+
+ /* Save profiling information from this VDBE run.
+ */
+ assert( p->pTos<&p->aStack[p->pc<0?0:p->pc] || !p->aStack );
+#ifdef VDBE_PROFILE
+ {
+ FILE *out = fopen("vdbe_profile.out", "a");
+ if( out ){
+ int i;
+ fprintf(out, "---- ");
+ for(i=0; i<p->nOp; i++){
+ fprintf(out, "%02x", p->aOp[i].opcode);
+ }
+ fprintf(out, "\n");
+ for(i=0; i<p->nOp; i++){
+ fprintf(out, "%6d %10lld %8lld ",
+ p->aOp[i].cnt,
+ p->aOp[i].cycles,
+ p->aOp[i].cnt>0 ? p->aOp[i].cycles/p->aOp[i].cnt : 0
+ );
+ sqlite3VdbePrintOp(out, i, &p->aOp[i]);
+ }
+ fclose(out);
+ }
+ }
+#endif
+ p->magic = VDBE_MAGIC_INIT;
+ p->aborted = 0;
+ if( p->rc==SQLITE_SCHEMA ){
+ sqlite3ResetInternalSchema(db, 0);
+ }
+ return p->rc & db->errMask;
+}
+
+/*
+** Clean up and delete a VDBE after execution. Return an integer which is
+** the result code. Write any error message text into *pzErrMsg.
+*/
+int sqlite3VdbeFinalize(Vdbe *p){
+ int rc = SQLITE_OK;
+ if( p->magic==VDBE_MAGIC_RUN || p->magic==VDBE_MAGIC_HALT ){
+ rc = sqlite3VdbeReset(p);
+ assert( (rc & p->db->errMask)==rc );
+ }else if( p->magic!=VDBE_MAGIC_INIT ){
+ return SQLITE_MISUSE;
+ }
+ sqlite3VdbeDelete(p);
+ return rc;
+}
+
+/*
+** Call the destructor for each auxdata entry in pVdbeFunc for which
+** the corresponding bit in mask is clear. Auxdata entries beyond 31
+** are always destroyed. To destroy all auxdata entries, call this
+** routine with mask==0.
+*/
+void sqlite3VdbeDeleteAuxData(VdbeFunc *pVdbeFunc, int mask){
+ int i;
+ for(i=0; i<pVdbeFunc->nAux; i++){
+ struct AuxData *pAux = &pVdbeFunc->apAux[i];
+ if( (i>31 || !(mask&(1<<i))) && pAux->pAux ){
+ if( pAux->xDelete ){
+ pAux->xDelete(pAux->pAux);
+ }
+ pAux->pAux = 0;
+ }
+ }
+}
+
+/*
+** Delete an entire VDBE.
+*/
+void sqlite3VdbeDelete(Vdbe *p){
+ int i;
+ if( p==0 ) return;
+ Cleanup(p);
+ if( p->pPrev ){
+ p->pPrev->pNext = p->pNext;
+ }else{
+ assert( p->db->pVdbe==p );
+ p->db->pVdbe = p->pNext;
+ }
+ if( p->pNext ){
+ p->pNext->pPrev = p->pPrev;
+ }
+ if( p->aOp ){
+ for(i=0; i<p->nOp; i++){
+ Op *pOp = &p->aOp[i];
+ freeP3(pOp->p3type, pOp->p3);
+ }
+ sqliteFree(p->aOp);
+ }
+ releaseMemArray(p->aVar, p->nVar);
+ sqliteFree(p->aLabel);
+ sqliteFree(p->aStack);
+ releaseMemArray(p->aColName, p->nResColumn*COLNAME_N);
+ sqliteFree(p->aColName);
+ p->magic = VDBE_MAGIC_DEAD;
+ sqliteFree(p);
+}
+
+/*
+** If a MoveTo operation is pending on the given cursor, then do that
+** MoveTo now. Return an error code. If no MoveTo is pending, this
+** routine does nothing and returns SQLITE_OK.
+*/
+int sqlite3VdbeCursorMoveto(Cursor *p){
+ if( p->deferredMoveto ){
+ int res, rc;
+#ifdef SQLITE_TEST
+ extern int sqlite3_search_count;
+#endif
+ assert( p->isTable );
+ if( p->isTable ){
+ rc = sqlite3BtreeMoveto(p->pCursor, 0, p->movetoTarget, &res);
+ }else{
+ rc = sqlite3BtreeMoveto(p->pCursor,(char*)&p->movetoTarget,
+ sizeof(i64),&res);
+ }
+ if( rc ) return rc;
+ *p->pIncrKey = 0;
+ p->lastRowid = keyToInt(p->movetoTarget);
+ p->rowidIsValid = res==0;
+ if( res<0 ){
+ rc = sqlite3BtreeNext(p->pCursor, &res);
+ if( rc ) return rc;
+ }
+#ifdef SQLITE_TEST
+ sqlite3_search_count++;
+#endif
+ p->deferredMoveto = 0;
+ p->cacheStatus = CACHE_STALE;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** The following functions:
+**
+** sqlite3VdbeSerialType()
+** sqlite3VdbeSerialTypeLen()
+** sqlite3VdbeSerialRead()
+** sqlite3VdbeSerialLen()
+** sqlite3VdbeSerialWrite()
+**
+** encapsulate the code that serializes values for storage in SQLite
+** data and index records. Each serialized value consists of a
+** 'serial-type' and a blob of data. The serial type is an 8-byte unsigned
+** integer, stored as a varint.
+**
+** In an SQLite index record, the serial type is stored directly before
+** the blob of data that it corresponds to. In a table record, all serial
+** types are stored at the start of the record, and the blobs of data at
+** the end. Hence these functions allow the caller to handle the
+** serial-type and data blob separately.
+**
+** The following table describes the various storage classes for data:
+**
+** serial type bytes of data type
+** -------------- --------------- ---------------
+** 0 0 NULL
+** 1 1 signed integer
+** 2 2 signed integer
+** 3 3 signed integer
+** 4 4 signed integer
+** 5 6 signed integer
+** 6 8 signed integer
+** 7 8 IEEE float
+** 8 0 Integer constant 0
+** 9 0 Integer constant 1
+** 10,11 reserved for expansion
+** N>=12 and even (N-12)/2 BLOB
+** N>=13 and odd (N-13)/2 text
+**
+** The 8 and 9 types were added in 3.3.0, file format 4. Prior versions
+** of SQLite will not understand those serial types.
+*/
+
+/*
+** Return the serial-type for the value stored in pMem.
+*/
+u32 sqlite3VdbeSerialType(Mem *pMem, int file_format){
+ int flags = pMem->flags;
+
+ if( flags&MEM_Null ){
+ return 0;
+ }
+ if( flags&MEM_Int ){
+ /* Figure out whether to use 1, 2, 4, 6 or 8 bytes. */
+# define MAX_6BYTE ((((i64)0x00001000)<<32)-1)
+ i64 i = pMem->i;
+ u64 u;
+ if( file_format>=4 && (i&1)==i ){
+ return 8+i;
+ }
+ u = i<0 ? -i : i;
+ if( u<=127 ) return 1;
+ if( u<=32767 ) return 2;
+ if( u<=8388607 ) return 3;
+ if( u<=2147483647 ) return 4;
+ if( u<=MAX_6BYTE ) return 5;
+ return 6;
+ }
+ if( flags&MEM_Real ){
+ return 7;
+ }
+ if( flags&MEM_Str ){
+ int n = pMem->n;
+ assert( n>=0 );
+ return ((n*2) + 13);
+ }
+ if( flags&MEM_Blob ){
+ return (pMem->n*2 + 12);
+ }
+ return 0;
+}
+
+/*
+** Return the length of the data corresponding to the supplied serial-type.
+*/
+int sqlite3VdbeSerialTypeLen(u32 serial_type){
+ if( serial_type>=12 ){
+ return (serial_type-12)/2;
+ }else{
+ static const u8 aSize[] = { 0, 1, 2, 3, 4, 6, 8, 8, 0, 0, 0, 0 };
+ return aSize[serial_type];
+ }
+}
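+
+/*
+** A worked example (editorial illustration, not part of this commit) of the
+** serial-type arithmetic described in the table above: a 5-byte text value
+** gets serial type 2*5+13 = 23 and its length is recovered as (23-13)/2 = 5;
+** a 4-byte BLOB gets 2*4+12 = 20, recovered as (20-12)/2 = 4.
+*/
+#if 0 /* illustration only */
+static void exampleSerialTypeArithmetic(void){
+ assert( sqlite3VdbeSerialTypeLen(23)==5 ); /* 5-byte text */
+ assert( sqlite3VdbeSerialTypeLen(20)==4 ); /* 4-byte BLOB */
+ assert( sqlite3VdbeSerialTypeLen(7)==8 ); /* IEEE float */
+ assert( sqlite3VdbeSerialTypeLen(8)==0 ); /* integer constant 0 */
+}
+#endif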
+
+/*
+** Write the serialized data blob for the value stored in pMem into
+** buf. It is assumed that the caller has allocated sufficient space.
+** Return the number of bytes written.
+*/
+int sqlite3VdbeSerialPut(unsigned char *buf, Mem *pMem, int file_format){
+ u32 serial_type = sqlite3VdbeSerialType(pMem, file_format);
+ int len;
+
+ /* Integer and Real */
+ if( serial_type<=7 && serial_type>0 ){
+ u64 v;
+ int i;
+ if( serial_type==7 ){
+ v = *(u64*)&pMem->r;
+ }else{
+ v = *(u64*)&pMem->i;
+ }
+ len = i = sqlite3VdbeSerialTypeLen(serial_type);
+ while( i-- ){
+ buf[i] = (v&0xFF);
+ v >>= 8;
+ }
+ return len;
+ }
+
+ /* String or blob */
+ if( serial_type>=12 ){
+ len = sqlite3VdbeSerialTypeLen(serial_type);
+ memcpy(buf, pMem->z, len);
+ return len;
+ }
+
+ /* NULL or constants 0 or 1 */
+ return 0;
+}
+
+/*
+** Deserialize the data blob pointed to by buf as serial type serial_type
+** and store the result in pMem. Return the number of bytes read.
+*/
+int sqlite3VdbeSerialGet(
+ const unsigned char *buf, /* Buffer to deserialize from */
+ u32 serial_type, /* Serial type to deserialize */
+ Mem *pMem /* Memory cell to write value into */
+){
+ switch( serial_type ){
+ case 10: /* Reserved for future use */
+ case 11: /* Reserved for future use */
+ case 0: { /* NULL */
+ pMem->flags = MEM_Null;
+ break;
+ }
+ case 1: { /* 1-byte signed integer */
+ pMem->i = (signed char)buf[0];
+ pMem->flags = MEM_Int;
+ return 1;
+ }
+ case 2: { /* 2-byte signed integer */
+ pMem->i = (((signed char)buf[0])<<8) | buf[1];
+ pMem->flags = MEM_Int;
+ return 2;
+ }
+ case 3: { /* 3-byte signed integer */
+ pMem->i = (((signed char)buf[0])<<16) | (buf[1]<<8) | buf[2];
+ pMem->flags = MEM_Int;
+ return 3;
+ }
+ case 4: { /* 4-byte signed integer */
+ pMem->i = (buf[0]<<24) | (buf[1]<<16) | (buf[2]<<8) | buf[3];
+ pMem->flags = MEM_Int;
+ return 4;
+ }
+ case 5: { /* 6-byte signed integer */
+ u64 x = (((signed char)buf[0])<<8) | buf[1];
+ u32 y = (buf[2]<<24) | (buf[3]<<16) | (buf[4]<<8) | buf[5];
+ x = (x<<32) | y;
+ pMem->i = *(i64*)&x;
+ pMem->flags = MEM_Int;
+ return 6;
+ }
+ case 6: /* 8-byte signed integer */
+ case 7: { /* IEEE floating point */
+ u64 x;
+ u32 y;
+#if !defined(NDEBUG) && !defined(SQLITE_OMIT_FLOATING_POINT)
+ /* Verify that integers and floating point values use the same
+ ** byte order. The byte order differs on some (broken) architectures.
+ */
+ static const u64 t1 = ((u64)0x3ff00000)<<32;
+ assert( 1.0==*(double*)&t1 );
+#endif
+
+ x = (buf[0]<<24) | (buf[1]<<16) | (buf[2]<<8) | buf[3];
+ y = (buf[4]<<24) | (buf[5]<<16) | (buf[6]<<8) | buf[7];
+ x = (x<<32) | y;
+ if( serial_type==6 ){
+ pMem->i = *(i64*)&x;
+ pMem->flags = MEM_Int;
+ }else{
+ pMem->r = *(double*)&x;
+ pMem->flags = MEM_Real;
+ }
+ return 8;
+ }
+ case 8: /* Integer 0 */
+ case 9: { /* Integer 1 */
+ pMem->i = serial_type-8;
+ pMem->flags = MEM_Int;
+ return 0;
+ }
+ default: {
+ int len = (serial_type-12)/2;
+ pMem->z = (char *)buf;
+ pMem->n = len;
+ pMem->xDel = 0;
+ if( serial_type&0x01 ){
+ pMem->flags = MEM_Str | MEM_Ephem;
+ }else{
+ pMem->flags = MEM_Blob | MEM_Ephem;
+ }
+ return len;
+ }
+ }
+ return 0;
+}
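+
+/*
+** A round-trip sketch (editorial illustration, not part of this commit)
+** through the two routines above: a Mem holding the integer 300 serializes
+** as serial type 2 (two big-endian bytes) and deserializes back to 300.
+** file_format 1 is assumed so the 0/1 constant types are not used, and the
+** helper name is hypothetical.
+*/
+#if 0 /* illustration only */
+static void exampleSerialRoundTrip(void){
+ unsigned char buf[8];
+ Mem in, out;
+ u32 t;
+ in.flags = MEM_Int;
+ in.i = 300;
+ t = sqlite3VdbeSerialType(&in, 1); /* 300 needs 2 bytes -> type 2 */
+ assert( t==2 );
+ assert( sqlite3VdbeSerialPut(buf, &in, 1)==2 );
+ assert( sqlite3VdbeSerialGet(buf, t, &out)==2 );
+ assert( (out.flags & MEM_Int)!=0 && out.i==300 );
+}
+#endif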
+
+/*
+** The header of a record consists of a sequence of variable-length integers.
+** These integers are almost always small and are encoded as a single byte.
+** The following macro takes advantage of this fact to provide a fast decode
+** of the integers in a record header. It is faster for the common case
+** where the integer is a single byte. It is a little slower when the
+** integer is two or more bytes. But overall it is faster.
+**
+** The following expressions are equivalent:
+**
+** x = sqlite3GetVarint32( A, &B );
+**
+** x = GetVarint( A, B );
+**
+*/
+#define GetVarint(A,B) ((B = *(A))<=0x7f ? 1 : sqlite3GetVarint32(A, &B))
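+
+/*
+** A small sketch (editorial illustration, not part of this commit) of the
+** fast path taken by the macro above: when the first header byte is no
+** greater than 0x7f it is assigned directly and 1 is returned, without a
+** call to sqlite3GetVarint32(). The sample bytes are hypothetical.
+*/
+#if 0 /* illustration only */
+static void exampleGetVarintFastPath(void){
+ static const unsigned char aKey[] = { 0x03, 0x01, 0x02, 0x09 };
+ u32 szHdr;
+ int nRead = GetVarint(aKey, szHdr); /* 0x03 <= 0x7f: single-byte case */
+ assert( nRead==1 && szHdr==3 );
+}
+#endif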
+
+/*
+** This function compares the two table rows or index records specified by
+** {nKey1, pKey1} and {nKey2, pKey2}, returning a negative, zero
+** or positive integer if {nKey1, pKey1} is less than, equal to or
+** greater than {nKey2, pKey2}. Both Key1 and Key2 must be byte strings
+** composed by the OP_MakeRecord opcode of the VDBE.
+*/
+int sqlite3VdbeRecordCompare(
+ void *userData,
+ int nKey1, const void *pKey1,
+ int nKey2, const void *pKey2
+){
+ KeyInfo *pKeyInfo = (KeyInfo*)userData;
+ u32 d1, d2; /* Offset into aKey[] of next data element */
+ u32 idx1, idx2; /* Offset into aKey[] of next header element */
+ u32 szHdr1, szHdr2; /* Number of bytes in header */
+ int i = 0;
+ int nField;
+ int rc = 0;
+ const unsigned char *aKey1 = (const unsigned char *)pKey1;
+ const unsigned char *aKey2 = (const unsigned char *)pKey2;
+
+ Mem mem1;
+ Mem mem2;
+ mem1.enc = pKeyInfo->enc;
+ mem2.enc = pKeyInfo->enc;
+
+ idx1 = GetVarint(aKey1, szHdr1);
+ d1 = szHdr1;
+ idx2 = GetVarint(aKey2, szHdr2);
+ d2 = szHdr2;
+ nField = pKeyInfo->nField;
+ while( idx1<szHdr1 && idx2<szHdr2 ){
+ u32 serial_type1;
+ u32 serial_type2;
+
+ /* Read the serial types for the next element in each key. */
+ idx1 += GetVarint( aKey1+idx1, serial_type1 );
+ if( d1>=nKey1 && sqlite3VdbeSerialTypeLen(serial_type1)>0 ) break;
+ idx2 += GetVarint( aKey2+idx2, serial_type2 );
+ if( d2>=nKey2 && sqlite3VdbeSerialTypeLen(serial_type2)>0 ) break;
+
+ /* Assert that there is enough space left in each key for the blob of
+ ** data to go with the serial type just read. This assert may fail if
+ ** the file is corrupted. Then read the value from each key into mem1
+ ** and mem2 respectively.
+ */
+ d1 += sqlite3VdbeSerialGet(&aKey1[d1], serial_type1, &mem1);
+ d2 += sqlite3VdbeSerialGet(&aKey2[d2], serial_type2, &mem2);
+
+ rc = sqlite3MemCompare(&mem1, &mem2, i<nField ? pKeyInfo->aColl[i] : 0);
+ if( mem1.flags & MEM_Dyn ) sqlite3VdbeMemRelease(&mem1);
+ if( mem2.flags & MEM_Dyn ) sqlite3VdbeMemRelease(&mem2);
+ if( rc!=0 ){
+ break;
+ }
+ i++;
+ }
+
+ /* One of the keys ran out of fields, but all the fields up to that point
+ ** were equal. If the incrKey flag is true, then the second key is
+ ** treated as larger.
+ */
+ if( rc==0 ){
+ if( pKeyInfo->incrKey ){
+ rc = -1;
+ }else if( d1<nKey1 ){
+ rc = 1;
+ }else if( d2<nKey2 ){
+ rc = -1;
+ }
+ }else if( pKeyInfo->aSortOrder && i<pKeyInfo->nField
+ && pKeyInfo->aSortOrder[i] ){
+ rc = -rc;
+ }
+
+ return rc;
+}
+
+/*
+** The argument is an index entry composed using the OP_MakeRecord opcode.
+** The last entry in this record should be an integer (specifically
+** an integer rowid). This routine returns the number of bytes in
+** that integer.
+*/
+int sqlite3VdbeIdxRowidLen(const u8 *aKey){
+ u32 szHdr; /* Size of the header */
+ u32 typeRowid; /* Serial type of the rowid */
+
+ sqlite3GetVarint32(aKey, &szHdr);
+ sqlite3GetVarint32(&aKey[szHdr-1], &typeRowid);
+ return sqlite3VdbeSerialTypeLen(typeRowid);
+}
+
+
+/*
+** pCur points at an index entry created using the OP_MakeRecord opcode.
+** Read the rowid (the last field in the record) and store it in *rowid.
+** Return SQLITE_OK if everything works, or an error code otherwise.
+*/
+int sqlite3VdbeIdxRowid(BtCursor *pCur, i64 *rowid){
+ i64 nCellKey;
+ int rc;
+ u32 szHdr; /* Size of the header */
+ u32 typeRowid; /* Serial type of the rowid */
+ u32 lenRowid; /* Size of the rowid */
+ Mem m, v;
+
+ sqlite3BtreeKeySize(pCur, &nCellKey);
+ if( nCellKey<=0 ){
+ return SQLITE_CORRUPT_BKPT;
+ }
+ rc = sqlite3VdbeMemFromBtree(pCur, 0, nCellKey, 1, &m);
+ if( rc ){
+ return rc;
+ }
+ sqlite3GetVarint32((u8*)m.z, &szHdr);
+ sqlite3GetVarint32((u8*)&m.z[szHdr-1], &typeRowid);
+ lenRowid = sqlite3VdbeSerialTypeLen(typeRowid);
+ sqlite3VdbeSerialGet((u8*)&m.z[m.n-lenRowid], typeRowid, &v);
+ *rowid = v.i;
+ sqlite3VdbeMemRelease(&m);
+ return SQLITE_OK;
+}
+
+/*
+** Compare the key of the index entry that cursor pC is pointing to against
+** the key string in pKey (of length nKey). Write into *pRes a number
+** that is negative, zero, or positive if pC is less than, equal to,
+** or greater than pKey. Return SQLITE_OK on success.
+**
+** pKey is either created without a rowid or is truncated so that it
+** omits the rowid at the end. The rowid at the end of the index entry
+** is ignored as well.
+*/
+int sqlite3VdbeIdxKeyCompare(
+ Cursor *pC, /* The cursor to compare against */
+ int nKey, const u8 *pKey, /* The key to compare */
+ int *res /* Write the comparison result here */
+){
+ i64 nCellKey;
+ int rc;
+ BtCursor *pCur = pC->pCursor;
+ int lenRowid;
+ Mem m;
+
+ sqlite3BtreeKeySize(pCur, &nCellKey);
+ if( nCellKey<=0 ){
+ *res = 0;
+ return SQLITE_OK;
+ }
+ rc = sqlite3VdbeMemFromBtree(pC->pCursor, 0, nCellKey, 1, &m);
+ if( rc ){
+ return rc;
+ }
+ lenRowid = sqlite3VdbeIdxRowidLen((u8*)m.z);
+ *res = sqlite3VdbeRecordCompare(pC->pKeyInfo, m.n-lenRowid, m.z, nKey, pKey);
+ sqlite3VdbeMemRelease(&m);
+ return SQLITE_OK;
+}
+
+/*
+** This routine sets the value to be returned by subsequent calls to
+** sqlite3_changes() on the database handle 'db'.
+*/
+void sqlite3VdbeSetChanges(sqlite3 *db, int nChange){
+ db->nChange = nChange;
+ db->nTotalChange += nChange;
+}
+
+/*
+** Set a flag in the vdbe to update the change counter when it is finalised
+** or reset.
+*/
+void sqlite3VdbeCountChanges(Vdbe *v){
+ v->changeCntOn = 1;
+}
+
+/*
+** Mark every prepared statement associated with a database connection
+** as expired.
+**
+** An expired statement means that recompilation of the statement is
+** recommended. Statements expire when things happen that make their
+** programs obsolete. Removing user-defined functions or collating
+** sequences, or changing an authorization function are the types of
+** things that make prepared statements obsolete.
+*/
+void sqlite3ExpirePreparedStatements(sqlite3 *db){
+ Vdbe *p;
+ for(p = db->pVdbe; p; p=p->pNext){
+ p->expired = 1;
+ }
+}
+
+/*
+** Return the database associated with the Vdbe.
+*/
+sqlite3 *sqlite3VdbeDb(Vdbe *v){
+ return v->db;
+}
Added: freeswitch/trunk/libs/sqlite/src/vdbefifo.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/vdbefifo.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,114 @@
+/*
+** 2005 June 16
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file implements a FIFO queue of rowids used for processing
+** UPDATE and DELETE statements.
+*/
+#include "sqliteInt.h"
+#include "vdbeInt.h"
+
+/*
+** Allocate a new FifoPage and return a pointer to it. Return NULL if
+** we run out of memory. Leave space on the page for nEntry entries.
+*/
+static FifoPage *allocatePage(int nEntry){
+ FifoPage *pPage;
+ if( nEntry>32767 ){
+ nEntry = 32767;
+ }
+ pPage = sqliteMallocRaw( sizeof(FifoPage) + sizeof(i64)*(nEntry-1) );
+ if( pPage ){
+ pPage->nSlot = nEntry;
+ pPage->iWrite = 0;
+ pPage->iRead = 0;
+ pPage->pNext = 0;
+ }
+ return pPage;
+}
+
+/*
+** Initialize a Fifo structure.
+*/
+void sqlite3VdbeFifoInit(Fifo *pFifo){
+ memset(pFifo, 0, sizeof(*pFifo));
+}
+
+/*
+** Push a single 64-bit integer value into the Fifo. Return SQLITE_OK
+** normally. SQLITE_NOMEM is returned if we are unable to allocate
+** memory.
+*/
+int sqlite3VdbeFifoPush(Fifo *pFifo, i64 val){
+ FifoPage *pPage;
+ pPage = pFifo->pLast;
+ if( pPage==0 ){
+ pPage = pFifo->pLast = pFifo->pFirst = allocatePage(20);
+ if( pPage==0 ){
+ return SQLITE_NOMEM;
+ }
+ }else if( pPage->iWrite>=pPage->nSlot ){
+ pPage->pNext = allocatePage(pFifo->nEntry);
+ if( pPage->pNext==0 ){
+ return SQLITE_NOMEM;
+ }
+ pPage = pFifo->pLast = pPage->pNext;
+ }
+ pPage->aSlot[pPage->iWrite++] = val;
+ pFifo->nEntry++;
+ return SQLITE_OK;
+}
+
+/*
+** Extract a single 64-bit integer value from the Fifo. The integer
+** extracted is the one least recently inserted. If the Fifo is empty
+** return SQLITE_DONE.
+*/
+int sqlite3VdbeFifoPop(Fifo *pFifo, i64 *pVal){
+ FifoPage *pPage;
+ if( pFifo->nEntry==0 ){
+ return SQLITE_DONE;
+ }
+ assert( pFifo->nEntry>0 );
+ pPage = pFifo->pFirst;
+ assert( pPage!=0 );
+ assert( pPage->iWrite>pPage->iRead );
+ assert( pPage->iWrite<=pPage->nSlot );
+ assert( pPage->iRead<pPage->nSlot );
+ assert( pPage->iRead>=0 );
+ *pVal = pPage->aSlot[pPage->iRead++];
+ pFifo->nEntry--;
+ if( pPage->iRead>=pPage->iWrite ){
+ pFifo->pFirst = pPage->pNext;
+ sqliteFree(pPage);
+ if( pFifo->nEntry==0 ){
+ assert( pFifo->pLast==pPage );
+ pFifo->pLast = 0;
+ }else{
+ assert( pFifo->pFirst!=0 );
+ }
+ }else{
+ assert( pFifo->nEntry>0 );
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Delete all information from a Fifo object. Free all memory held
+** by the Fifo.
+*/
+void sqlite3VdbeFifoClear(Fifo *pFifo){
+ FifoPage *pPage, *pNextPage;
+ for(pPage=pFifo->pFirst; pPage; pPage=pNextPage){
+ pNextPage = pPage->pNext;
+ sqliteFree(pPage);
+ }
+ sqlite3VdbeFifoInit(pFifo);
+}
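+
+/*
+** A short usage sketch (editorial illustration, not part of this commit) of
+** the Fifo routines defined in this file: values come back out in the order
+** they were pushed, and sqlite3VdbeFifoClear() releases any pages still
+** held. The helper name is hypothetical.
+*/
+#if 0 /* illustration only */
+static void exampleFifoUsage(void){
+ Fifo fifo;
+ i64 v;
+ sqlite3VdbeFifoInit(&fifo);
+ sqlite3VdbeFifoPush(&fifo, 11);
+ sqlite3VdbeFifoPush(&fifo, 22);
+ sqlite3VdbeFifoPush(&fifo, 33);
+ sqlite3VdbeFifoPop(&fifo, &v); /* v==11, least recently inserted */
+ sqlite3VdbeFifoPop(&fifo, &v); /* v==22 */
+ sqlite3VdbeFifoClear(&fifo); /* frees the page still holding 33 */
+}
+#endif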
Added: freeswitch/trunk/libs/sqlite/src/vdbemem.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/vdbemem.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,892 @@
+/*
+** 2004 May 26
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+**
+** This file contains code used to manipulate the "Mem" structure. A "Mem"
+** stores a single value in the VDBE. Mem is an opaque structure visible
+** only within the VDBE. Interface routines refer to a Mem using the
+** name sqlite_value
+*/
+#include "sqliteInt.h"
+#include "os.h"
+#include <ctype.h>
+#include "vdbeInt.h"
+
+/*
+** If pMem is an object with a valid string representation, this routine
+** ensures the internal encoding for the string representation is
+** 'desiredEnc', one of SQLITE_UTF8, SQLITE_UTF16LE or SQLITE_UTF16BE.
+**
+** If pMem is not a string object, or the encoding of the string
+** representation is already stored using the requested encoding, then this
+** routine is a no-op.
+**
+** SQLITE_OK is returned if the conversion is successful (or not required).
+** SQLITE_NOMEM may be returned if a malloc() fails during conversion
+** between formats.
+*/
+int sqlite3VdbeChangeEncoding(Mem *pMem, int desiredEnc){
+ int rc;
+ if( !(pMem->flags&MEM_Str) || pMem->enc==desiredEnc ){
+ return SQLITE_OK;
+ }
+#ifdef SQLITE_OMIT_UTF16
+ return SQLITE_ERROR;
+#else
+
+
+ /* MemTranslate() may return SQLITE_OK or SQLITE_NOMEM. If NOMEM is returned,
+ ** then the encoding of the value may not have changed.
+ */
+ rc = sqlite3VdbeMemTranslate(pMem, desiredEnc);
+ assert(rc==SQLITE_OK || rc==SQLITE_NOMEM);
+ assert(rc==SQLITE_OK || pMem->enc!=desiredEnc);
+ assert(rc==SQLITE_NOMEM || pMem->enc==desiredEnc);
+ return rc;
+#endif
+}
+
+/*
+** Make the given Mem object MEM_Dyn.
+**
+** Return SQLITE_OK on success or SQLITE_NOMEM if malloc fails.
+*/
+int sqlite3VdbeMemDynamicify(Mem *pMem){
+ int n = pMem->n;
+ u8 *z;
+ if( (pMem->flags & (MEM_Ephem|MEM_Static|MEM_Short))==0 ){
+ return SQLITE_OK;
+ }
+ assert( (pMem->flags & MEM_Dyn)==0 );
+ assert( pMem->flags & (MEM_Str|MEM_Blob) );
+ z = sqliteMallocRaw( n+2 );
+ if( z==0 ){
+ return SQLITE_NOMEM;
+ }
+ pMem->flags |= MEM_Dyn|MEM_Term;
+ pMem->xDel = 0;
+ memcpy(z, pMem->z, n );
+ z[n] = 0;
+ z[n+1] = 0;
+ pMem->z = (char*)z;
+ pMem->flags &= ~(MEM_Ephem|MEM_Static|MEM_Short);
+ return SQLITE_OK;
+}
+
+/*
+** Make the given Mem object either MEM_Short or MEM_Dyn so that bytes
+** of the Mem.z[] array can be modified.
+**
+** Return SQLITE_OK on success or SQLITE_NOMEM if malloc fails.
+*/
+int sqlite3VdbeMemMakeWriteable(Mem *pMem){
+ int n;
+ u8 *z;
+ if( (pMem->flags & (MEM_Ephem|MEM_Static))==0 ){
+ return SQLITE_OK;
+ }
+ assert( (pMem->flags & MEM_Dyn)==0 );
+ assert( pMem->flags & (MEM_Str|MEM_Blob) );
+ if( (n = pMem->n)+2<sizeof(pMem->zShort) ){
+ z = (u8*)pMem->zShort;
+ pMem->flags |= MEM_Short|MEM_Term;
+ }else{
+ z = sqliteMallocRaw( n+2 );
+ if( z==0 ){
+ return SQLITE_NOMEM;
+ }
+ pMem->flags |= MEM_Dyn|MEM_Term;
+ pMem->xDel = 0;
+ }
+ memcpy(z, pMem->z, n );
+ z[n] = 0;
+ z[n+1] = 0;
+ pMem->z = (char*)z;
+ pMem->flags &= ~(MEM_Ephem|MEM_Static);
+ assert(0==(1&(int)pMem->z));
+ return SQLITE_OK;
+}
+
+/*
+** Make sure the given Mem is \u0000 terminated.
+*/
+int sqlite3VdbeMemNulTerminate(Mem *pMem){
+ if( (pMem->flags & MEM_Term)!=0 || (pMem->flags & MEM_Str)==0 ){
+ return SQLITE_OK; /* Nothing to do */
+ }
+ if( pMem->flags & (MEM_Static|MEM_Ephem) ){
+ return sqlite3VdbeMemMakeWriteable(pMem);
+ }else{
+ char *z = sqliteMalloc(pMem->n+2);
+ if( !z ) return SQLITE_NOMEM;
+ memcpy(z, pMem->z, pMem->n);
+ z[pMem->n] = 0;
+ z[pMem->n+1] = 0;
+ if( pMem->xDel ){
+ pMem->xDel(pMem->z);
+ }else{
+ sqliteFree(pMem->z);
+ }
+ pMem->xDel = 0;
+ pMem->z = z;
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Add MEM_Str to the set of representations for the given Mem. Numbers
+** are converted using sqlite3_snprintf(). Converting a BLOB to a string
+** is a no-op.
+**
+** Existing representations MEM_Int and MEM_Real are *not* invalidated.
+**
+** A MEM_Null value will never be passed to this function. This function is
+** used for converting values to text for returning to the user (i.e. via
+** sqlite3_value_text()), or for ensuring that values to be used as btree
+** keys are strings. In the former case a NULL pointer is returned to the
+** user and the latter is an internal programming error.
+*/
+int sqlite3VdbeMemStringify(Mem *pMem, int enc){
+ int rc = SQLITE_OK;
+ int fg = pMem->flags;
+ char *z = pMem->zShort;
+
+ assert( !(fg&(MEM_Str|MEM_Blob)) );
+ assert( fg&(MEM_Int|MEM_Real) );
+
+ /* For a Real or Integer, use sqlite3_snprintf() to produce the UTF-8
+ ** string representation of the value. Then, if the required encoding
+ ** is UTF-16le or UTF-16be do a translation.
+ **
+ ** FIX ME: It would be better if sqlite3_snprintf() could do UTF-16.
+ */
+ if( fg & MEM_Int ){
+ sqlite3_snprintf(NBFS, z, "%lld", pMem->i);
+ }else{
+ assert( fg & MEM_Real );
+ sqlite3_snprintf(NBFS, z, "%!.15g", pMem->r);
+ }
+ pMem->n = strlen(z);
+ pMem->z = z;
+ pMem->enc = SQLITE_UTF8;
+ pMem->flags |= MEM_Str | MEM_Short | MEM_Term;
+ sqlite3VdbeChangeEncoding(pMem, enc);
+ return rc;
+}
+
+/*
+** Memory cell pMem contains the context of an aggregate function.
+** This routine calls the finalize method for that function. The
+** result of the aggregate is stored back into pMem.
+**
+** Return SQLITE_ERROR if the finalizer reports an error. SQLITE_OK
+** otherwise.
+*/
+int sqlite3VdbeMemFinalize(Mem *pMem, FuncDef *pFunc){
+ int rc = SQLITE_OK;
+ if( pFunc && pFunc->xFinalize ){
+ sqlite3_context ctx;
+ assert( (pMem->flags & MEM_Null)!=0 || pFunc==*(FuncDef**)&pMem->i );
+ ctx.s.flags = MEM_Null;
+ ctx.s.z = pMem->zShort;
+ ctx.pMem = pMem;
+ ctx.pFunc = pFunc;
+ ctx.isError = 0;
+ pFunc->xFinalize(&ctx);
+ if( pMem->z && pMem->z!=pMem->zShort ){
+ sqliteFree( pMem->z );
+ }
+ *pMem = ctx.s;
+ if( pMem->flags & MEM_Short ){
+ pMem->z = pMem->zShort;
+ }
+ if( ctx.isError ){
+ rc = SQLITE_ERROR;
+ }
+ }
+ return rc;
+}
+
+/*
+** Release any memory held by the Mem. This may leave the Mem in an
+** inconsistent state, for example with (Mem.z==0) and
+** (Mem.type==SQLITE_TEXT).
+*/
+void sqlite3VdbeMemRelease(Mem *p){
+ if( p->flags & (MEM_Dyn|MEM_Agg) ){
+ if( p->xDel ){
+ if( p->flags & MEM_Agg ){
+ sqlite3VdbeMemFinalize(p, *(FuncDef**)&p->i);
+ assert( (p->flags & MEM_Agg)==0 );
+ sqlite3VdbeMemRelease(p);
+ }else{
+ p->xDel((void *)p->z);
+ }
+ }else{
+ sqliteFree(p->z);
+ }
+ p->z = 0;
+ p->xDel = 0;
+ }
+}
+
+/*
+** Return some kind of integer value which is the best we can do
+** at representing the value that *pMem describes as an integer.
+** If pMem is an integer, then the value is exact. If pMem is
+** a floating-point then the value returned is the integer part.
+** If pMem is a string or blob, then we make an attempt to convert
+** it into an integer and return that. If pMem is NULL, return 0.
+**
+** If pMem is a string, its encoding might be changed.
+*/
+i64 sqlite3VdbeIntValue(Mem *pMem){
+ int flags = pMem->flags;
+ if( flags & MEM_Int ){
+ return pMem->i;
+ }else if( flags & MEM_Real ){
+ return (i64)pMem->r;
+ }else if( flags & (MEM_Str|MEM_Blob) ){
+ i64 value;
+ if( sqlite3VdbeChangeEncoding(pMem, SQLITE_UTF8)
+ || sqlite3VdbeMemNulTerminate(pMem) ){
+ return 0;
+ }
+ assert( pMem->z );
+ sqlite3atoi64(pMem->z, &value);
+ return value;
+ }else{
+ return 0;
+ }
+}
+
+/*
+** Return the best representation of pMem that we can get into a
+** double. If pMem is already a double or an integer, return its
+** value. If it is a string or blob, try to convert it to a double.
+** If it is a NULL, return 0.0.
+*/
+double sqlite3VdbeRealValue(Mem *pMem){
+ if( pMem->flags & MEM_Real ){
+ return pMem->r;
+ }else if( pMem->flags & MEM_Int ){
+ return (double)pMem->i;
+ }else if( pMem->flags & (MEM_Str|MEM_Blob) ){
+ double val = 0.0;
+ if( sqlite3VdbeChangeEncoding(pMem, SQLITE_UTF8)
+ || sqlite3VdbeMemNulTerminate(pMem) ){
+ return 0.0;
+ }
+ assert( pMem->z );
+ sqlite3AtoF(pMem->z, &val);
+ return val;
+ }else{
+ return 0.0;
+ }
+}
+
+/*
+** The MEM structure is already a MEM_Real. Try to also make it a
+** MEM_Int if we can.
+*/
+void sqlite3VdbeIntegerAffinity(Mem *pMem){
+ assert( pMem->flags & MEM_Real );
+ pMem->i = pMem->r;
+ if( ((double)pMem->i)==pMem->r ){
+ pMem->flags |= MEM_Int;
+ }
+}
+
+/*
+** Convert pMem to type integer. Invalidate any prior representations.
+*/
+int sqlite3VdbeMemIntegerify(Mem *pMem){
+ pMem->i = sqlite3VdbeIntValue(pMem);
+ sqlite3VdbeMemRelease(pMem);
+ pMem->flags = MEM_Int;
+ return SQLITE_OK;
+}
+
+/*
+** Convert pMem so that it is of type MEM_Real.
+** Invalidate any prior representations.
+*/
+int sqlite3VdbeMemRealify(Mem *pMem){
+ pMem->r = sqlite3VdbeRealValue(pMem);
+ sqlite3VdbeMemRelease(pMem);
+ pMem->flags = MEM_Real;
+ return SQLITE_OK;
+}
+
+/*
+** Convert pMem so that it has types MEM_Real or MEM_Int or both.
+** Invalidate any prior representations.
+*/
+int sqlite3VdbeMemNumerify(Mem *pMem){
+ sqlite3VdbeMemRealify(pMem);
+ sqlite3VdbeIntegerAffinity(pMem);
+ return SQLITE_OK;
+}
+
+/*
+** Delete any previous value and set the value stored in *pMem to NULL.
+*/
+void sqlite3VdbeMemSetNull(Mem *pMem){
+ sqlite3VdbeMemRelease(pMem);
+ pMem->flags = MEM_Null;
+ pMem->type = SQLITE_NULL;
+ pMem->n = 0;
+}
+
+/*
+** Delete any previous value and set the value stored in *pMem to val,
+** manifest type INTEGER.
+*/
+void sqlite3VdbeMemSetInt64(Mem *pMem, i64 val){
+ sqlite3VdbeMemRelease(pMem);
+ pMem->i = val;
+ pMem->flags = MEM_Int;
+ pMem->type = SQLITE_INTEGER;
+}
+
+/*
+** Delete any previous value and set the value stored in *pMem to val,
+** manifest type REAL.
+*/
+void sqlite3VdbeMemSetDouble(Mem *pMem, double val){
+ sqlite3VdbeMemRelease(pMem);
+ pMem->r = val;
+ pMem->flags = MEM_Real;
+ pMem->type = SQLITE_FLOAT;
+}
+
+/*
+** Make a shallow copy of pFrom into pTo. Prior contents of
+** pTo are overwritten. The pFrom->z field is not duplicated. If
+** pFrom->z is used, then pTo->z points to the same thing as pFrom->z
+** and flags gets srcType (either MEM_Ephem or MEM_Static).
+*/
+void sqlite3VdbeMemShallowCopy(Mem *pTo, const Mem *pFrom, int srcType){
+ memcpy(pTo, pFrom, sizeof(*pFrom)-sizeof(pFrom->zShort));
+ pTo->xDel = 0;
+ if( pTo->flags & (MEM_Str|MEM_Blob) ){
+ pTo->flags &= ~(MEM_Dyn|MEM_Static|MEM_Short|MEM_Ephem);
+ assert( srcType==MEM_Ephem || srcType==MEM_Static );
+ pTo->flags |= srcType;
+ }
+}
+
+/*
+** Make a full copy of pFrom into pTo. Prior contents of pTo are
+** freed before the copy is made.
+*/
+int sqlite3VdbeMemCopy(Mem *pTo, const Mem *pFrom){
+ int rc;
+ if( pTo->flags & MEM_Dyn ){
+ sqlite3VdbeMemRelease(pTo);
+ }
+ sqlite3VdbeMemShallowCopy(pTo, pFrom, MEM_Ephem);
+ if( pTo->flags & MEM_Ephem ){
+ rc = sqlite3VdbeMemMakeWriteable(pTo);
+ }else{
+ rc = SQLITE_OK;
+ }
+ return rc;
+}
+
+/*
+** Transfer the contents of pFrom to pTo. Any existing value in pTo is
+** freed. If pFrom contains ephemeral data, a copy is made.
+**
+** pFrom contains an SQL NULL when this routine returns. SQLITE_NOMEM
+** might be returned if pFrom held ephemeral data and we were unable
+** to allocate enough space to make a copy.
+*/
+int sqlite3VdbeMemMove(Mem *pTo, Mem *pFrom){
+ int rc;
+ if( pTo->flags & MEM_Dyn ){
+ sqlite3VdbeMemRelease(pTo);
+ }
+ memcpy(pTo, pFrom, sizeof(Mem));
+ if( pFrom->flags & MEM_Short ){
+ pTo->z = pTo->zShort;
+ }
+ pFrom->flags = MEM_Null;
+ pFrom->xDel = 0;
+ if( pTo->flags & MEM_Ephem ){
+ rc = sqlite3VdbeMemMakeWriteable(pTo);
+ }else{
+ rc = SQLITE_OK;
+ }
+ return rc;
+}
+
+/*
+** Change the value of a Mem to be a string or a BLOB.
+*/
+int sqlite3VdbeMemSetStr(
+ Mem *pMem, /* Memory cell to set to string value */
+ const char *z, /* String pointer */
+ int n, /* Bytes in string, or negative */
+ u8 enc, /* Encoding of z. 0 for BLOBs */
+ void (*xDel)(void*) /* Destructor function */
+){
+ sqlite3VdbeMemRelease(pMem);
+ if( !z ){
+ pMem->flags = MEM_Null;
+ pMem->type = SQLITE_NULL;
+ return SQLITE_OK;
+ }
+
+ pMem->z = (char *)z;
+ if( xDel==SQLITE_STATIC ){
+ pMem->flags = MEM_Static;
+ }else if( xDel==SQLITE_TRANSIENT ){
+ pMem->flags = MEM_Ephem;
+ }else{
+ pMem->flags = MEM_Dyn;
+ pMem->xDel = xDel;
+ }
+
+ pMem->enc = enc;
+ pMem->type = enc==0 ? SQLITE_BLOB : SQLITE_TEXT;
+ pMem->n = n;
+
+ assert( enc==0 || enc==SQLITE_UTF8 || enc==SQLITE_UTF16LE
+ || enc==SQLITE_UTF16BE );
+ switch( enc ){
+ case 0:
+ pMem->flags |= MEM_Blob;
+ pMem->enc = SQLITE_UTF8;
+ break;
+
+ case SQLITE_UTF8:
+ pMem->flags |= MEM_Str;
+ if( n<0 ){
+ pMem->n = strlen(z);
+ pMem->flags |= MEM_Term;
+ }
+ break;
+
+#ifndef SQLITE_OMIT_UTF16
+ case SQLITE_UTF16LE:
+ case SQLITE_UTF16BE:
+ pMem->flags |= MEM_Str;
+ if( pMem->n<0 ){
+ pMem->n = sqlite3utf16ByteLen(pMem->z,-1);
+ pMem->flags |= MEM_Term;
+ }
+ if( sqlite3VdbeMemHandleBom(pMem) ){
+ return SQLITE_NOMEM;
+ }
+#endif /* SQLITE_OMIT_UTF16 */
+ }
+ if( pMem->flags&MEM_Ephem ){
+ return sqlite3VdbeMemMakeWriteable(pMem);
+ }
+ return SQLITE_OK;
+}
+
+/*
+** Compare the values contained by the two memory cells, returning
+** negative, zero or positive if pMem1 is less than, equal to, or greater
+** than pMem2. Sorting order is NULLs first, followed by numbers (integers
+** and reals) sorted numerically, followed by text ordered by the collating
+** sequence pColl and finally blobs ordered by memcmp().
+**
+** Two NULL values are considered equal by this function.
+*/
+int sqlite3MemCompare(const Mem *pMem1, const Mem *pMem2, const CollSeq *pColl){
+ int rc;
+ int f1, f2;
+ int combined_flags;
+
+ /* Interchange pMem1 and pMem2 if the collating sequence specifies
+ ** DESC order.
+ */
+ f1 = pMem1->flags;
+ f2 = pMem2->flags;
+ combined_flags = f1|f2;
+
+ /* If one value is NULL, it is less than the other. If both values
+ ** are NULL, return 0.
+ */
+ if( combined_flags&MEM_Null ){
+ return (f2&MEM_Null) - (f1&MEM_Null);
+ }
+
+ /* If one value is a number and the other is not, the number is less.
+ ** If both are numbers, compare as reals if one is a real, or as integers
+ ** if both values are integers.
+ */
+ if( combined_flags&(MEM_Int|MEM_Real) ){
+ if( !(f1&(MEM_Int|MEM_Real)) ){
+ return 1;
+ }
+ if( !(f2&(MEM_Int|MEM_Real)) ){
+ return -1;
+ }
+ if( (f1 & f2 & MEM_Int)==0 ){
+ double r1, r2;
+ if( (f1&MEM_Real)==0 ){
+ r1 = pMem1->i;
+ }else{
+ r1 = pMem1->r;
+ }
+ if( (f2&MEM_Real)==0 ){
+ r2 = pMem2->i;
+ }else{
+ r2 = pMem2->r;
+ }
+ if( r1<r2 ) return -1;
+ if( r1>r2 ) return 1;
+ return 0;
+ }else{
+ assert( f1&MEM_Int );
+ assert( f2&MEM_Int );
+ if( pMem1->i < pMem2->i ) return -1;
+ if( pMem1->i > pMem2->i ) return 1;
+ return 0;
+ }
+ }
+
+ /* If one value is a string and the other is a blob, the string is less.
+ ** If both are strings, compare using the collating functions.
+ */
+ if( combined_flags&MEM_Str ){
+ if( (f1 & MEM_Str)==0 ){
+ return 1;
+ }
+ if( (f2 & MEM_Str)==0 ){
+ return -1;
+ }
+
+ assert( pMem1->enc==pMem2->enc );
+ assert( pMem1->enc==SQLITE_UTF8 ||
+ pMem1->enc==SQLITE_UTF16LE || pMem1->enc==SQLITE_UTF16BE );
+
+ /* The collation sequence must be defined at this point, even if
+ ** the user deletes the collation sequence after the vdbe program is
+ ** compiled (this was not always the case).
+ */
+ assert( !pColl || pColl->xCmp );
+
+ if( pColl ){
+ if( pMem1->enc==pColl->enc ){
+ /* The strings are already in the correct encoding. Call the
+ ** comparison function directly */
+ return pColl->xCmp(pColl->pUser,pMem1->n,pMem1->z,pMem2->n,pMem2->z);
+ }else{
+ u8 origEnc = pMem1->enc;
+ const void *v1, *v2;
+ int n1, n2;
+ /* Convert the strings into the encoding that the comparison
+ ** function expects */
+ v1 = sqlite3ValueText((sqlite3_value*)pMem1, pColl->enc);
+ n1 = v1==0 ? 0 : pMem1->n;
+ assert( n1==sqlite3ValueBytes((sqlite3_value*)pMem1, pColl->enc) );
+ v2 = sqlite3ValueText((sqlite3_value*)pMem2, pColl->enc);
+ n2 = v2==0 ? 0 : pMem2->n;
+ assert( n2==sqlite3ValueBytes((sqlite3_value*)pMem2, pColl->enc) );
+ /* Do the comparison */
+ rc = pColl->xCmp(pColl->pUser, n1, v1, n2, v2);
+ /* Convert the strings back into the database encoding */
+ sqlite3ValueText((sqlite3_value*)pMem1, origEnc);
+ sqlite3ValueText((sqlite3_value*)pMem2, origEnc);
+ return rc;
+ }
+ }
+ /* If a NULL pointer was passed as the collate function, fall through
+ ** to the blob case and use memcmp(). */
+ }
+
+ /* Both values must be blobs. Compare using memcmp(). */
+ rc = memcmp(pMem1->z, pMem2->z, (pMem1->n>pMem2->n)?pMem2->n:pMem1->n);
+ if( rc==0 ){
+ rc = pMem1->n - pMem2->n;
+ }
+ return rc;
+}
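For illustration only (not part of this commit), a minimal sketch of the NULLs-first ordering described above, using the Mem setters defined earlier in this file. The function name is hypothetical and the sketch assumes the VDBE internal headers are in scope:

    /* NULL compares less than an integer, so the result is negative. */
    static int exampleNullSortsFirst(void){
      Mem a, b;
      memset(&a, 0, sizeof(a));
      memset(&b, 0, sizeof(b));
      a.flags = MEM_Null;                  /* an SQL NULL */
      sqlite3VdbeMemSetInt64(&b, 42);      /* the integer 42 */
      return sqlite3MemCompare(&a, &b, 0); /* no collation needed */
    }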
+
+/*
+** Move data out of a btree key or data field and into a Mem structure.
+** The data or key is taken from the entry that pCur is currently pointing
+** to. offset and amt determine what portion of the data or key to retrieve.
+** key is true to get the key or false to get data. The result is written
+** into the pMem element.
+**
+** The pMem structure is assumed to be uninitialized. Any prior content
+** is overwritten without being freed.
+**
+** If this routine fails for any reason (malloc returns NULL or unable
+** to read from the disk) then the pMem is left in an inconsistent state.
+*/
+int sqlite3VdbeMemFromBtree(
+ BtCursor *pCur, /* Cursor pointing at record to retrieve. */
+ int offset, /* Offset from the start of data to return bytes from. */
+ int amt, /* Number of bytes to return. */
+ int key, /* If true, retrieve from the btree key, not data. */
+ Mem *pMem /* OUT: Return data in this Mem structure. */
+){
+ char *zData; /* Data from the btree layer */
+ int available; /* Number of bytes available on the local btree page */
+
+ if( key ){
+ zData = (char *)sqlite3BtreeKeyFetch(pCur, &available);
+ }else{
+ zData = (char *)sqlite3BtreeDataFetch(pCur, &available);
+ }
+
+ pMem->n = amt;
+ if( offset+amt<=available ){
+ pMem->z = &zData[offset];
+ pMem->flags = MEM_Blob|MEM_Ephem;
+ }else{
+ int rc;
+ if( amt>NBFS-2 ){
+ zData = (char *)sqliteMallocRaw(amt+2);
+ if( !zData ){
+ return SQLITE_NOMEM;
+ }
+ pMem->flags = MEM_Blob|MEM_Dyn|MEM_Term;
+ pMem->xDel = 0;
+ }else{
+ zData = &(pMem->zShort[0]);
+ pMem->flags = MEM_Blob|MEM_Short|MEM_Term;
+ }
+ pMem->z = zData;
+ pMem->enc = 0;
+ pMem->type = SQLITE_BLOB;
+
+ if( key ){
+ rc = sqlite3BtreeKey(pCur, offset, amt, zData);
+ }else{
+ rc = sqlite3BtreeData(pCur, offset, amt, zData);
+ }
+ zData[amt] = 0;
+ zData[amt+1] = 0;
+ if( rc!=SQLITE_OK ){
+ if( amt>NBFS-2 ){
+ assert( zData!=pMem->zShort );
+ assert( pMem->flags & MEM_Dyn );
+ sqliteFree(zData);
+ } else {
+ assert( zData==pMem->zShort );
+ assert( pMem->flags & MEM_Short );
+ }
+ return rc;
+ }
+ }
+
+ return SQLITE_OK;
+}
+
+#ifndef NDEBUG
+/*
+** Perform various checks on the memory cell pMem. An assert() will
+** fail if pMem is internally inconsistent.
+*/
+void sqlite3VdbeMemSanity(Mem *pMem){
+ int flags = pMem->flags;
+ assert( flags!=0 ); /* Must define some type */
+ if( pMem->flags & (MEM_Str|MEM_Blob) ){
+ int x = pMem->flags & (MEM_Static|MEM_Dyn|MEM_Ephem|MEM_Short);
+ assert( x!=0 ); /* Strings must define a string subtype */
+ assert( (x & (x-1))==0 ); /* Only one string subtype can be defined */
+ assert( pMem->z!=0 ); /* Strings must have a value */
+ /* Mem.z points to Mem.zShort iff the subtype is MEM_Short */
+ assert( (pMem->flags & MEM_Short)==0 || pMem->z==pMem->zShort );
+ assert( (pMem->flags & MEM_Short)!=0 || pMem->z!=pMem->zShort );
+ /* No destructor unless there is MEM_Dyn */
+ assert( pMem->xDel==0 || (pMem->flags & MEM_Dyn)!=0 );
+
+ if( (flags & MEM_Str) ){
+ assert( pMem->enc==SQLITE_UTF8 ||
+ pMem->enc==SQLITE_UTF16BE ||
+ pMem->enc==SQLITE_UTF16LE
+ );
+ /* If the string is UTF-8 encoded and nul terminated, then pMem->n
+ ** must be the length of the string. (Later:) If the database file
+ ** has been corrupted, '\000' characters might have been inserted
+ ** into the middle of the string. In that case, the strlen() might
+ ** be less.
+ */
+ if( pMem->enc==SQLITE_UTF8 && (flags & MEM_Term) ){
+ assert( strlen(pMem->z)<=pMem->n );
+ assert( pMem->z[pMem->n]==0 );
+ }
+ }
+ }else{
+ /* Cannot define a string subtype for non-string objects */
+ assert( (pMem->flags & (MEM_Static|MEM_Dyn|MEM_Ephem|MEM_Short))==0 );
+ assert( pMem->xDel==0 );
+ }
+ /* MEM_Null excludes all other types */
+ assert( (pMem->flags&(MEM_Str|MEM_Int|MEM_Real|MEM_Blob))==0
+ || (pMem->flags&MEM_Null)==0 );
+ /* If the MEM is both real and integer, the values are equal */
+ assert( (pMem->flags & (MEM_Int|MEM_Real))!=(MEM_Int|MEM_Real)
+ || pMem->r==pMem->i );
+}
+#endif
+
+/* This function is only available internally; it is not part of the
+** external API. It works in a similar way to sqlite3_value_text(),
+** except the data returned is in the encoding specified by the second
+** parameter, which must be one of SQLITE_UTF16BE, SQLITE_UTF16LE or
+** SQLITE_UTF8.
+**
+** (2006-02-16:) The enc value can be or-ed with SQLITE_UTF16_ALIGNED.
+** If that is the case, then the result must be aligned on an even byte
+** boundary.
+*/
+const void *sqlite3ValueText(sqlite3_value* pVal, u8 enc){
+ if( !pVal ) return 0;
+ assert( (enc&3)==(enc&~SQLITE_UTF16_ALIGNED) );
+
+ if( pVal->flags&MEM_Null ){
+ return 0;
+ }
+ assert( (MEM_Blob>>3) == MEM_Str );
+ pVal->flags |= (pVal->flags & MEM_Blob)>>3;
+ if( pVal->flags&MEM_Str ){
+ sqlite3VdbeChangeEncoding(pVal, enc & ~SQLITE_UTF16_ALIGNED);
+ if( (enc & SQLITE_UTF16_ALIGNED)!=0 && 1==(1&(int)pVal->z) ){
+ assert( (pVal->flags & (MEM_Ephem|MEM_Static))!=0 );
+ if( sqlite3VdbeMemMakeWriteable(pVal)!=SQLITE_OK ){
+ return 0;
+ }
+ }
+ sqlite3VdbeMemNulTerminate(pVal);
+ }else{
+ assert( (pVal->flags&MEM_Blob)==0 );
+ sqlite3VdbeMemStringify(pVal, enc);
+ assert( 0==(1&(int)pVal->z) );
+ }
+ assert(pVal->enc==(enc & ~SQLITE_UTF16_ALIGNED) || sqlite3MallocFailed() );
+ if( pVal->enc==(enc & ~SQLITE_UTF16_ALIGNED) ){
+ return pVal->z;
+ }else{
+ return 0;
+ }
+}
+
+/*
+** Create a new sqlite3_value object.
+*/
+sqlite3_value* sqlite3ValueNew(void){
+ Mem *p = sqliteMalloc(sizeof(*p));
+ if( p ){
+ p->flags = MEM_Null;
+ p->type = SQLITE_NULL;
+ }
+ return p;
+}
+
+/*
+** Create a new sqlite3_value object, containing the value of pExpr.
+**
+** This only works for very simple expressions that consist of one constant
+** token (i.e. "5", "5.1", "NULL", "'a string'"). If the expression can
+** be converted directly into a value, then the value is allocated and
+** a pointer written to *ppVal. The caller is responsible for deallocating
+** the value by passing it to sqlite3ValueFree() later on. If the expression
+** cannot be converted to a value, then *ppVal is set to NULL.
+*/
+int sqlite3ValueFromExpr(
+ Expr *pExpr,
+ u8 enc,
+ u8 affinity,
+ sqlite3_value **ppVal
+){
+ int op;
+ char *zVal = 0;
+ sqlite3_value *pVal = 0;
+
+ if( !pExpr ){
+ *ppVal = 0;
+ return SQLITE_OK;
+ }
+ op = pExpr->op;
+
+ if( op==TK_STRING || op==TK_FLOAT || op==TK_INTEGER ){
+ zVal = sqliteStrNDup((char*)pExpr->token.z, pExpr->token.n);
+ pVal = sqlite3ValueNew();
+ if( !zVal || !pVal ) goto no_mem;
+ sqlite3Dequote(zVal);
+ sqlite3ValueSetStr(pVal, -1, zVal, SQLITE_UTF8, sqlite3FreeX);
+ if( (op==TK_INTEGER || op==TK_FLOAT ) && affinity==SQLITE_AFF_NONE ){
+ sqlite3ValueApplyAffinity(pVal, SQLITE_AFF_NUMERIC, enc);
+ }else{
+ sqlite3ValueApplyAffinity(pVal, affinity, enc);
+ }
+ }else if( op==TK_UMINUS ) {
+ if( SQLITE_OK==sqlite3ValueFromExpr(pExpr->pLeft, enc, affinity, &pVal) ){
+ pVal->i = -1 * pVal->i;
+ pVal->r = -1.0 * pVal->r;
+ }
+ }
+#ifndef SQLITE_OMIT_BLOB_LITERAL
+ else if( op==TK_BLOB ){
+ int nVal;
+ pVal = sqlite3ValueNew();
+ zVal = sqliteStrNDup((char*)pExpr->token.z+1, pExpr->token.n-1);
+ if( !zVal || !pVal ) goto no_mem;
+ sqlite3Dequote(zVal);
+ nVal = strlen(zVal)/2;
+ sqlite3VdbeMemSetStr(pVal, sqlite3HexToBlob(zVal), nVal, 0, sqlite3FreeX);
+ sqliteFree(zVal);
+ }
+#endif
+
+ *ppVal = pVal;
+ return SQLITE_OK;
+
+no_mem:
+ sqliteFree(zVal);
+ sqlite3ValueFree(pVal);
+ *ppVal = 0;
+ return SQLITE_NOMEM;
+}
+
+/*
+** Change the string value of an sqlite3_value object
+*/
+void sqlite3ValueSetStr(
+ sqlite3_value *v,
+ int n,
+ const void *z,
+ u8 enc,
+ void (*xDel)(void*)
+){
+ if( v ) sqlite3VdbeMemSetStr((Mem *)v, z, n, enc, xDel);
+}
+
+/*
+** Free an sqlite3_value object
+*/
+void sqlite3ValueFree(sqlite3_value *v){
+ if( !v ) return;
+ sqlite3ValueSetStr(v, 0, 0, SQLITE_UTF8, SQLITE_STATIC);
+ sqliteFree(v);
+}
+
+/*
+** Return the number of bytes in the sqlite3_value object assuming
+** that it uses the encoding "enc"
+*/
+int sqlite3ValueBytes(sqlite3_value *pVal, u8 enc){
+ Mem *p = (Mem*)pVal;
+ if( (p->flags & MEM_Blob)!=0 || sqlite3ValueText(pVal, enc) ){
+ return p->n;
+ }
+ return 0;
+}
Added: freeswitch/trunk/libs/sqlite/src/vtab.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/vtab.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,695 @@
+/*
+** 2006 June 10
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file contains code used to help implement virtual tables.
+**
+** $Id: vtab.c,v 1.37 2006/09/18 20:24:03 drh Exp $
+*/
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+#include "sqliteInt.h"
+
+/*
+** External API function used to create a new virtual-table module.
+*/
+int sqlite3_create_module(
+ sqlite3 *db, /* Database in which module is registered */
+ const char *zName, /* Name assigned to this module */
+ const sqlite3_module *pModule, /* The definition of the module */
+ void *pAux /* Context pointer for xCreate/xConnect */
+){
+ int nName = strlen(zName);
+ Module *pMod = (Module *)sqliteMallocRaw(sizeof(Module) + nName + 1);
+ if( pMod ){
+ char *zCopy = (char *)(&pMod[1]);
+ strcpy(zCopy, zName);
+ pMod->zName = zCopy;
+ pMod->pModule = pModule;
+ pMod->pAux = pAux;
+ pMod = (Module *)sqlite3HashInsert(&db->aModule, zCopy, nName, (void*)pMod);
+ sqliteFree(pMod);
+ sqlite3ResetInternalSchema(db, 0);
+ }
+ return sqlite3ApiExit(db, SQLITE_OK);
+}
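As a usage sketch only (not part of this commit), application code registers a module through the routine above roughly as follows. The module name "echo" and the zero-filled sqlite3_module are placeholders; a working module must also supply xCreate/xConnect, xBestIndex, xDisconnect and the cursor methods:

    #include "sqlite3.h"

    /* Placeholder module object; real method pointers must be filled in
    ** before the module can back a CREATE VIRTUAL TABLE statement. */
    static sqlite3_module echoModule;

    int register_echo_module(sqlite3 *db){
      /* The last argument is the pAux pointer later handed to xCreate/xConnect. */
      return sqlite3_create_module(db, "echo", &echoModule, 0);
    }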
+
+/*
+** Lock the virtual table so that it cannot be disconnected.
+** Locks nest. Every lock should have a corresponding unlock.
+** If an unlock is omitted, resource leaks will occur.
+**
+** If a disconnect is attempted while a virtual table is locked,
+** the disconnect is deferred until all locks have been removed.
+*/
+void sqlite3VtabLock(sqlite3_vtab *pVtab){
+ pVtab->nRef++;
+}
+
+/*
+** Unlock a virtual table. When the last lock is removed,
+** disconnect the virtual table.
+*/
+void sqlite3VtabUnlock(sqlite3_vtab *pVtab){
+ pVtab->nRef--;
+ if( pVtab->nRef==0 ){
+ pVtab->pModule->xDisconnect(pVtab);
+ }
+}
+
+/*
+** Clear any and all virtual-table information from the Table record.
+** This routine is called, for example, just before deleting the Table
+** record.
+*/
+void sqlite3VtabClear(Table *p){
+ sqlite3_vtab *pVtab = p->pVtab;
+ if( pVtab ){
+ assert( p->pMod && p->pMod->pModule );
+ sqlite3VtabUnlock(pVtab);
+ p->pVtab = 0;
+ }
+ if( p->azModuleArg ){
+ int i;
+ for(i=0; i<p->nModuleArg; i++){
+ sqliteFree(p->azModuleArg[i]);
+ }
+ sqliteFree(p->azModuleArg);
+ }
+}
+
+/*
+** Add a new module argument to pTable->azModuleArg[].
+** The string is not copied - the pointer is stored. The
+** string will be freed automatically when the table is
+** deleted.
+*/
+static void addModuleArgument(Table *pTable, char *zArg){
+ int i = pTable->nModuleArg++;
+ int nBytes = sizeof(char *)*(1+pTable->nModuleArg);
+ char **azModuleArg;
+ azModuleArg = sqliteRealloc(pTable->azModuleArg, nBytes);
+ if( azModuleArg==0 ){
+ int j;
+ for(j=0; j<i; j++){
+ sqliteFree(pTable->azModuleArg[j]);
+ }
+ sqliteFree(zArg);
+ sqliteFree(pTable->azModuleArg);
+ pTable->nModuleArg = 0;
+ }else{
+ azModuleArg[i] = zArg;
+ azModuleArg[i+1] = 0;
+ }
+ pTable->azModuleArg = azModuleArg;
+}
+
+/*
+** The parser calls this routine when it first sees a CREATE VIRTUAL TABLE
+** statement. The module name has been parsed, but the optional list
+** of parameters that follow the module name are still pending.
+*/
+void sqlite3VtabBeginParse(
+ Parse *pParse, /* Parsing context */
+ Token *pName1, /* Name of new table, or database name */
+ Token *pName2, /* Name of new table or NULL */
+ Token *pModuleName /* Name of the module for the virtual table */
+){
+ int iDb; /* The database the table is being created in */
+ Table *pTable; /* The new virtual table */
+
+ sqlite3StartTable(pParse, pName1, pName2, 0, 0, 1, 0);
+ pTable = pParse->pNewTable;
+ if( pTable==0 || pParse->nErr ) return;
+ assert( 0==pTable->pIndex );
+
+ iDb = sqlite3SchemaToIndex(pParse->db, pTable->pSchema);
+ assert( iDb>=0 );
+
+ pTable->isVirtual = 1;
+ pTable->nModuleArg = 0;
+ addModuleArgument(pTable, sqlite3NameFromToken(pModuleName));
+ addModuleArgument(pTable, sqlite3StrDup(pParse->db->aDb[iDb].zName));
+ addModuleArgument(pTable, sqlite3StrDup(pTable->zName));
+ pParse->sNameToken.n = pModuleName->z + pModuleName->n - pName1->z;
+
+#ifndef SQLITE_OMIT_AUTHORIZATION
+ /* Creating a virtual table invokes the authorization callback twice.
+ ** The first invocation, to obtain permission to INSERT a row into the
+ ** sqlite_master table, has already been made by sqlite3StartTable().
+ ** The second call, to obtain permission to create the table, is made now.
+ */
+ if( pTable->azModuleArg ){
+ sqlite3AuthCheck(pParse, SQLITE_CREATE_VTABLE, pTable->zName,
+ pTable->azModuleArg[0], pParse->db->aDb[iDb].zName);
+ }
+#endif
+}
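For reference (an editor's illustration; the module and table names are hypothetical), a statement such as

    CREATE VIRTUAL TABLE mail USING fulltext(subject, body);

reaches this routine with pModuleName set to "fulltext". After the three addModuleArgument() calls above, azModuleArg[] holds the module name, the database name and the table name; the parenthesized arguments are appended later as they are parsed (see addArgumentToVtab() below).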
+
+/*
+** This routine takes the module argument that has been accumulating
+** in pParse->sArg and appends it to the list of arguments on the
+** virtual table currently under construction in pParse->pNewTable.
+*/
+static void addArgumentToVtab(Parse *pParse){
+ if( pParse->sArg.z && pParse->pNewTable ){
+ const char *z = (const char*)pParse->sArg.z;
+ int n = pParse->sArg.n;
+ addModuleArgument(pParse->pNewTable, sqliteStrNDup(z, n));
+ }
+}
+
+/*
+** The parser calls this routine after the CREATE VIRTUAL TABLE statement
+** has been completely parsed.
+*/
+void sqlite3VtabFinishParse(Parse *pParse, Token *pEnd){
+ Table *pTab; /* The table being constructed */
+ sqlite3 *db; /* The database connection */
+ char *zModule; /* The module name of the table: USING modulename */
+ Module *pMod = 0;
+
+ addArgumentToVtab(pParse);
+ pParse->sArg.z = 0;
+
+ /* Lookup the module name. */
+ pTab = pParse->pNewTable;
+ if( pTab==0 ) return;
+ db = pParse->db;
+ if( pTab->nModuleArg<1 ) return;
+ zModule = pTab->azModuleArg[0];
+ pMod = (Module *)sqlite3HashFind(&db->aModule, zModule, strlen(zModule));
+ pTab->pMod = pMod;
+
+ /* If the CREATE VIRTUAL TABLE statement is being entered for the
+ ** first time (in other words if the virtual table is actually being
+ ** created now instead of just being read out of sqlite_master) then
+ ** do additional initialization work and store the statement text
+ ** in the sqlite_master table.
+ */
+ if( !db->init.busy ){
+ char *zStmt;
+ char *zWhere;
+ int iDb;
+ Vdbe *v;
+
+ /* Compute the complete text of the CREATE VIRTUAL TABLE statement */
+ if( pEnd ){
+ pParse->sNameToken.n = pEnd->z - pParse->sNameToken.z + pEnd->n;
+ }
+ zStmt = sqlite3MPrintf("CREATE VIRTUAL TABLE %T", &pParse->sNameToken);
+
+ /* A slot for the record has already been allocated in the
+ ** SQLITE_MASTER table. We just need to update that slot with all
+ ** the information we've collected.
+ **
+ ** The top of the stack is the rootpage allocated by sqlite3StartTable().
+ ** This value is always 0 and is ignored; a virtual table does not have a
+ ** rootpage. The next entry on the stack is the rowid of the record
+ ** in the sqlite_master table.
+ */
+ iDb = sqlite3SchemaToIndex(db, pTab->pSchema);
+ sqlite3NestedParse(pParse,
+ "UPDATE %Q.%s "
+ "SET type='table', name=%Q, tbl_name=%Q, rootpage=0, sql=%Q "
+ "WHERE rowid=#1",
+ db->aDb[iDb].zName, SCHEMA_TABLE(iDb),
+ pTab->zName,
+ pTab->zName,
+ zStmt
+ );
+ sqliteFree(zStmt);
+ v = sqlite3GetVdbe(pParse);
+ sqlite3ChangeCookie(db, v, iDb);
+
+ sqlite3VdbeAddOp(v, OP_Expire, 0, 0);
+ zWhere = sqlite3MPrintf("name='%q'", pTab->zName);
+ sqlite3VdbeOp3(v, OP_ParseSchema, iDb, 0, zWhere, P3_DYNAMIC);
+ sqlite3VdbeOp3(v, OP_VCreate, iDb, 0, pTab->zName, strlen(pTab->zName) + 1);
+ }
+
+ /* If we are rereading the sqlite_master table, create the in-memory
+ ** record of the table. If the module has already been registered,
+ ** also call the xConnect method here.
+ */
+ else {
+ Table *pOld;
+ Schema *pSchema = pTab->pSchema;
+ const char *zName = pTab->zName;
+ int nName = strlen(zName) + 1;
+ pOld = sqlite3HashInsert(&pSchema->tblHash, zName, nName, pTab);
+ if( pOld ){
+ assert( pTab==pOld ); /* Malloc must have failed inside HashInsert() */
+ return;
+ }
+ pParse->pNewTable = 0;
+ }
+}
+
+/*
+** The parser calls this routine when it sees the first token
+** of an argument to the module name in a CREATE VIRTUAL TABLE statement.
+*/
+void sqlite3VtabArgInit(Parse *pParse){
+ addArgumentToVtab(pParse);
+ pParse->sArg.z = 0;
+ pParse->sArg.n = 0;
+}
+
+/*
+** The parser calls this routine for each token after the first token
+** in an argument to the module name in a CREATE VIRTUAL TABLE statement.
+*/
+void sqlite3VtabArgExtend(Parse *pParse, Token *p){
+ Token *pArg = &pParse->sArg;
+ if( pArg->z==0 ){
+ pArg->z = p->z;
+ pArg->n = p->n;
+ }else{
+ assert(pArg->z < p->z);
+ pArg->n = (p->z + p->n - pArg->z);
+ }
+}
+
+/*
+** Invoke a virtual table constructor (either xCreate or xConnect). The
+** pointer to the function to invoke is passed as the fourth parameter
+** to this procedure.
+*/
+static int vtabCallConstructor(
+ sqlite3 *db,
+ Table *pTab,
+ Module *pMod,
+ int (*xConstruct)(sqlite3*,void*,int,const char*const*,sqlite3_vtab**,char**),
+ char **pzErr
+){
+ int rc;
+ int rc2;
+ sqlite3_vtab *pVtab;
+ const char *const*azArg = (const char *const*)pTab->azModuleArg;
+ int nArg = pTab->nModuleArg;
+ char *zErr = 0;
+ char *zModuleName = sqlite3MPrintf("%s", pTab->zName);
+
+ assert( !db->pVTab );
+ assert( xConstruct );
+
+ db->pVTab = pTab;
+ rc = sqlite3SafetyOff(db);
+ assert( rc==SQLITE_OK );
+ rc = xConstruct(db, pMod->pAux, nArg, azArg, &pTab->pVtab, &zErr);
+ rc2 = sqlite3SafetyOn(db);
+ pVtab = pTab->pVtab;
+ if( rc==SQLITE_OK && pVtab ){
+ pVtab->pModule = pMod->pModule;
+ pVtab->nRef = 1;
+ }
+
+ if( SQLITE_OK!=rc ){
+ if( zErr==0 ){
+ *pzErr = sqlite3MPrintf("vtable constructor failed: %s", zModuleName);
+ }else {
+ *pzErr = sqlite3MPrintf("%s", zErr);
+ sqlite3_free(zErr);
+ }
+ }else if( db->pVTab ){
+ const char *zFormat = "vtable constructor did not declare schema: %s";
+ *pzErr = sqlite3MPrintf(zFormat, pTab->zName);
+ rc = SQLITE_ERROR;
+ }
+ if( rc==SQLITE_OK ){
+ rc = rc2;
+ }
+ db->pVTab = 0;
+ sqliteFree(zModuleName);
+ return rc;
+}
+
+/*
+** This function is invoked by the parser to call the xConnect() method
+** of the virtual table pTab. If an error occurs, an error code is returned
+** and an error message is left in pParse.
+**
+** This call is a no-op if table pTab is not a virtual table.
+*/
+int sqlite3VtabCallConnect(Parse *pParse, Table *pTab){
+ Module *pMod;
+ const char *zModule;
+ int rc = SQLITE_OK;
+
+ if( !pTab || !pTab->isVirtual || pTab->pVtab ){
+ return SQLITE_OK;
+ }
+
+ pMod = pTab->pMod;
+ zModule = pTab->azModuleArg[0];
+ if( !pMod ){
+ const char *zModule = pTab->azModuleArg[0];
+ sqlite3ErrorMsg(pParse, "no such module: %s", zModule);
+ rc = SQLITE_ERROR;
+ } else {
+ char *zErr = 0;
+ sqlite3 *db = pParse->db;
+ rc = vtabCallConstructor(db, pTab, pMod, pMod->pModule->xConnect, &zErr);
+ if( rc!=SQLITE_OK ){
+ sqlite3ErrorMsg(pParse, "%s", zErr);
+ }
+ sqliteFree(zErr);
+ }
+
+ return rc;
+}
+
+/*
+** Add the virtual table pVtab to the array sqlite3.aVTrans[].
+*/
+static int addToVTrans(sqlite3 *db, sqlite3_vtab *pVtab){
+ const int ARRAY_INCR = 5;
+
+ /* Grow the sqlite3.aVTrans array if required */
+ if( (db->nVTrans%ARRAY_INCR)==0 ){
+ sqlite3_vtab **aVTrans;
+ int nBytes = sizeof(sqlite3_vtab *) * (db->nVTrans + ARRAY_INCR);
+ aVTrans = sqliteRealloc((void *)db->aVTrans, nBytes);
+ if( !aVTrans ){
+ return SQLITE_NOMEM;
+ }
+ memset(&aVTrans[db->nVTrans], 0, sizeof(sqlite3_vtab *)*ARRAY_INCR);
+ db->aVTrans = aVTrans;
+ }
+
+ /* Add pVtab to the end of sqlite3.aVTrans */
+ db->aVTrans[db->nVTrans++] = pVtab;
+ sqlite3VtabLock(pVtab);
+ return SQLITE_OK;
+}
+
+/*
+** This function is invoked by the vdbe to call the xCreate method
+** of the virtual table named zTab in database iDb.
+**
+** If an error occurs, *pzErr is set to point to an English language
+** description of the error and an SQLITE_XXX error code is returned.
+** In this case the caller must call sqliteFree() on *pzErr.
+*/
+int sqlite3VtabCallCreate(sqlite3 *db, int iDb, const char *zTab, char **pzErr){
+ int rc = SQLITE_OK;
+ Table *pTab;
+ Module *pMod;
+ const char *zModule;
+
+ pTab = sqlite3FindTable(db, zTab, db->aDb[iDb].zName);
+ assert(pTab && pTab->isVirtual && !pTab->pVtab);
+ pMod = pTab->pMod;
+ zModule = pTab->azModuleArg[0];
+
+ /* If the module has been registered and includes a Create method,
+ ** invoke it now. If the module has not been registered, return an
+ ** error. Otherwise, do nothing.
+ */
+ if( !pMod ){
+ *pzErr = sqlite3MPrintf("no such module: %s", zModule);
+ rc = SQLITE_ERROR;
+ }else{
+ rc = vtabCallConstructor(db, pTab, pMod, pMod->pModule->xCreate, pzErr);
+ }
+
+ if( rc==SQLITE_OK && pTab->pVtab ){
+ rc = addToVTrans(db, pTab->pVtab);
+ }
+
+ return rc;
+}
+
+/*
+** This function is used to set the schema of a virtual table. It is only
+** valid to call this function from within the xCreate() or xConnect() of a
+** virtual table module.
+*/
+int sqlite3_declare_vtab(sqlite3 *db, const char *zCreateTable){
+ Parse sParse;
+
+ int rc = SQLITE_OK;
+ Table *pTab = db->pVTab;
+ char *zErr = 0;
+
+ if( !pTab ){
+ sqlite3Error(db, SQLITE_MISUSE, 0);
+ return SQLITE_MISUSE;
+ }
+ assert(pTab->isVirtual && pTab->nCol==0 && pTab->aCol==0);
+
+ memset(&sParse, 0, sizeof(Parse));
+ sParse.declareVtab = 1;
+ sParse.db = db;
+
+ if(
+ SQLITE_OK == sqlite3RunParser(&sParse, zCreateTable, &zErr) &&
+ sParse.pNewTable &&
+ !sParse.pNewTable->pSelect &&
+ !sParse.pNewTable->isVirtual
+ ){
+ pTab->aCol = sParse.pNewTable->aCol;
+ pTab->nCol = sParse.pNewTable->nCol;
+ sParse.pNewTable->nCol = 0;
+ sParse.pNewTable->aCol = 0;
+ } else {
+ sqlite3Error(db, SQLITE_ERROR, zErr);
+ sqliteFree(zErr);
+ rc = SQLITE_ERROR;
+ }
+ sParse.declareVtab = 0;
+
+ sqlite3_finalize((sqlite3_stmt*)sParse.pVdbe);
+ sqlite3DeleteTable(0, sParse.pNewTable);
+ sParse.pNewTable = 0;
+ db->pVTab = 0;
+
+ assert( (rc&0xff)==rc );
+ return rc;
+}
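A minimal sketch (not part of this commit) of how a module's xConnect method would use the routine above. The function name and schema string are assumptions for illustration; the signature matches the constructor type used by vtabCallConstructor():

    #include <stdlib.h>   /* for calloc() */

    /* Hypothetical xConnect: declare the columns, then hand back a vtab object. */
    static int exampleConnect(
      sqlite3 *db, void *pAux,
      int argc, const char *const *argv,
      sqlite3_vtab **ppVtab, char **pzErr
    ){
      int rc = sqlite3_declare_vtab(db, "CREATE TABLE x(value, weight)");
      if( rc!=SQLITE_OK ) return rc;
      *ppVtab = (sqlite3_vtab *)calloc(1, sizeof(sqlite3_vtab));
      return *ppVtab ? SQLITE_OK : SQLITE_NOMEM;
    }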
+
+/*
+** This function is invoked by the vdbe to call the xDestroy method
+** of the virtual table named zTab in database iDb. This occurs
+** when the virtual table is dropped with a DROP TABLE statement.
+**
+** This call is a no-op if zTab is not a virtual table.
+*/
+int sqlite3VtabCallDestroy(sqlite3 *db, int iDb, const char *zTab)
+{
+ int rc = SQLITE_OK;
+ Table *pTab;
+
+ pTab = sqlite3FindTable(db, zTab, db->aDb[iDb].zName);
+ assert(pTab);
+ if( pTab->pVtab ){
+ int (*xDestroy)(sqlite3_vtab *pVTab) = pTab->pMod->pModule->xDestroy;
+ rc = sqlite3SafetyOff(db);
+ assert( rc==SQLITE_OK );
+ if( xDestroy ){
+ rc = xDestroy(pTab->pVtab);
+ }
+ sqlite3SafetyOn(db);
+ if( rc==SQLITE_OK ){
+ pTab->pVtab = 0;
+ }
+ }
+
+ return rc;
+}
+
+/*
+** This function invokes either the xRollback or xCommit method
+** of each of the virtual tables in the sqlite3.aVTrans array. The method
+** called is identified by the second argument, "offset", which is
+** the offset of the method to call in the sqlite3_module structure.
+**
+** The array is cleared after invoking the callbacks.
+*/
+static void callFinaliser(sqlite3 *db, int offset){
+ int i;
+ for(i=0; i<db->nVTrans && db->aVTrans[i]; i++){
+ sqlite3_vtab *pVtab = db->aVTrans[i];
+ int (*x)(sqlite3_vtab *);
+ x = *(int (**)(sqlite3_vtab *))((char *)pVtab->pModule + offset);
+ if( x ) x(pVtab);
+ sqlite3VtabUnlock(pVtab);
+ }
+ sqliteFree(db->aVTrans);
+ db->nVTrans = 0;
+ db->aVTrans = 0;
+}
+
+/*
+** If argument rc2 is not SQLITE_OK, then return it and do nothing.
+** Otherwise, invoke the xSync method of all virtual tables in the
+** sqlite3.aVTrans array. Return the error code for the first error
+** that occurs, or SQLITE_OK if all xSync operations are successful.
+*/
+int sqlite3VtabSync(sqlite3 *db, int rc2){
+ int i;
+ int rc = SQLITE_OK;
+ int rcsafety;
+ sqlite3_vtab **aVTrans = db->aVTrans;
+ if( rc2!=SQLITE_OK ) return rc2;
+
+ rc = sqlite3SafetyOff(db);
+ db->aVTrans = 0;
+ for(i=0; rc==SQLITE_OK && i<db->nVTrans && aVTrans[i]; i++){
+ sqlite3_vtab *pVtab = aVTrans[i];
+ int (*x)(sqlite3_vtab *);
+ x = pVtab->pModule->xSync;
+ if( x ){
+ rc = x(pVtab);
+ }
+ }
+ db->aVTrans = aVTrans;
+ rcsafety = sqlite3SafetyOn(db);
+
+ if( rc==SQLITE_OK ){
+ rc = rcsafety;
+ }
+ return rc;
+}
+
+/*
+** Invoke the xRollback method of all virtual tables in the
+** sqlite3.aVTrans array. Then clear the array itself.
+*/
+int sqlite3VtabRollback(sqlite3 *db){
+ callFinaliser(db, (int)(&((sqlite3_module *)0)->xRollback));
+ return SQLITE_OK;
+}
+
+/*
+** Invoke the xCommit method of all virtual tables in the
+** sqlite3.aVTrans array. Then clear the array itself.
+*/
+int sqlite3VtabCommit(sqlite3 *db){
+ callFinaliser(db, (int)(&((sqlite3_module *)0)->xCommit));
+ return SQLITE_OK;
+}
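Aside (not part of this commit): the (int)(&((sqlite3_module *)0)->xCommit) expressions above are a hand-rolled member-offset computation. With <stddef.h> the same call could be written with the standard macro, sketched here as a hypothetical alternative inside this file:

    #include <stddef.h>

    /* Equivalent to sqlite3VtabCommit(), using offsetof() for the method offset. */
    int sqlite3VtabCommitAlt(sqlite3 *db){
      callFinaliser(db, offsetof(sqlite3_module, xCommit));
      return SQLITE_OK;
    }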
+
+/*
+** If the virtual table pVtab supports the transaction interface
+** (xBegin/xRollback/xCommit and optionally xSync) and a transaction is
+** not currently open, invoke the xBegin method now.
+**
+** If the xBegin call is successful, place the sqlite3_vtab pointer
+** in the sqlite3.aVTrans array.
+*/
+int sqlite3VtabBegin(sqlite3 *db, sqlite3_vtab *pVtab){
+ int rc = SQLITE_OK;
+ const sqlite3_module *pModule;
+
+ /* Special case: If db->aVTrans is NULL and db->nVTrans is greater
+ ** than zero, then this function is being called from within a
+ ** virtual module xSync() callback. It is illegal to write to
+ ** virtual module tables in this case, so return SQLITE_LOCKED.
+ */
+ if( 0==db->aVTrans && db->nVTrans>0 ){
+ return SQLITE_LOCKED;
+ }
+ if( !pVtab ){
+ return SQLITE_OK;
+ }
+ pModule = pVtab->pModule;
+
+ if( pModule->xBegin ){
+ int i;
+
+
+ /* If pVtab is already in the aVTrans array, return early */
+ for(i=0; (i<db->nVTrans) && 0!=db->aVTrans[i]; i++){
+ if( db->aVTrans[i]==pVtab ){
+ return SQLITE_OK;
+ }
+ }
+
+ /* Invoke the xBegin method */
+ rc = pModule->xBegin(pVtab);
+ if( rc!=SQLITE_OK ){
+ return rc;
+ }
+
+ rc = addToVTrans(db, pVtab);
+ }
+ return rc;
+}
+
+/*
+** The first parameter (pDef) is a function implementation. The
+** second parameter (pExpr) is the first argument to this function.
+** If pExpr is a column in a virtual table, then let the virtual
+** table implementation have an opportunity to overload the function.
+**
+** This routine is used to allow virtual table implementations to
+** overload MATCH, LIKE, GLOB, and REGEXP operators.
+**
+** Return either the pDef argument (indicating no change) or a
+** new FuncDef structure that is marked as ephemeral using the
+** SQLITE_FUNC_EPHEM flag.
+*/
+FuncDef *sqlite3VtabOverloadFunction(
+ FuncDef *pDef, /* Function to possibly overload */
+ int nArg, /* Number of arguments to the function */
+ Expr *pExpr /* First argument to the function */
+){
+ Table *pTab;
+ sqlite3_vtab *pVtab;
+ sqlite3_module *pMod;
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**);
+ void *pArg;
+ FuncDef *pNew;
+ int rc;
+ char *zLowerName;
+ unsigned char *z;
+
+
+ /* Check to see if the left operand is a column in a virtual table */
+ if( pExpr==0 ) return pDef;
+ if( pExpr->op!=TK_COLUMN ) return pDef;
+ pTab = pExpr->pTab;
+ if( pTab==0 ) return pDef;
+ if( !pTab->isVirtual ) return pDef;
+ pVtab = pTab->pVtab;
+ assert( pVtab!=0 );
+ assert( pVtab->pModule!=0 );
+ pMod = (sqlite3_module *)pVtab->pModule;
+ if( pMod->xFindFunction==0 ) return pDef;
+
+ /* Call the xFindFunction method on the virtual table implementation
+ ** to see if the implementation wants to overload this function
+ */
+ zLowerName = sqlite3StrDup(pDef->zName);
+ for(z=(unsigned char*)zLowerName; *z; z++){
+ *z = sqlite3UpperToLower[*z];
+ }
+ rc = pMod->xFindFunction(pVtab, nArg, zLowerName, &xFunc, &pArg);
+ sqliteFree(zLowerName);
+ if( rc==0 ){
+ return pDef;
+ }
+
+ /* Create a new ephemeral function definition for the overloaded
+ ** function */
+ pNew = sqliteMalloc( sizeof(*pNew) + strlen(pDef->zName) );
+ if( pNew==0 ){
+ return pDef;
+ }
+ *pNew = *pDef;
+ strcpy(pNew->zName, pDef->zName);
+ pNew->xFunc = xFunc;
+ pNew->pUserData = pArg;
+ pNew->flags |= SQLITE_FUNC_EPHEM;
+ return pNew;
+}
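For context only (not part of this commit), a sketch of the module-side half of this mechanism: an xFindFunction implementation that overloads the two-argument MATCH operator. All names are hypothetical; the signature matches the call made above:

    #include <string.h>

    /* Hypothetical overloaded implementation of MATCH for this module. */
    static void exampleMatchFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      /* A real module would evaluate the MATCH expression here. */
      sqlite3_result_int(ctx, 0);
    }

    static int exampleFindFunction(
      sqlite3_vtab *pVtab,                 /* The virtual table */
      int nArg,                            /* Number of arguments to the function */
      const char *zName,                   /* Lower-cased function name */
      void (**pxFunc)(sqlite3_context*,int,sqlite3_value**),
      void **ppArg                         /* OUT: user data for the function */
    ){
      if( nArg==2 && strcmp(zName, "match")==0 ){
        *pxFunc = exampleMatchFunc;
        *ppArg = 0;
        return 1;    /* non-zero: the overload is accepted */
      }
      return 0;      /* zero: keep the built-in implementation */
    }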
+
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
Added: freeswitch/trunk/libs/sqlite/src/where.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/src/where.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,2543 @@
+/*
+** 2001 September 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This module contains C code that generates VDBE code used to process
+** the WHERE clause of SQL statements. This module is responsible for
+** generating the code that loops through a table looking for applicable
+** rows. Indices are selected and used to speed the search when doing
+** so is applicable. Because this module is responsible for selecting
+** indices, you might also think of this module as the "query optimizer".
+**
+** $Id: where.c,v 1.228 2006/06/27 13:20:22 drh Exp $
+*/
+#include "sqliteInt.h"
+
+/*
+** The number of bits in a Bitmask. "BMS" means "BitMask Size".
+*/
+#define BMS (sizeof(Bitmask)*8)
+
+/*
+** Determine the number of elements in an array.
+*/
+#define ARRAYSIZE(X) (sizeof(X)/sizeof(X[0]))
+
+/*
+** Trace output macros
+*/
+#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG)
+int sqlite3_where_trace = 0;
+# define TRACE(X) if(sqlite3_where_trace) sqlite3DebugPrintf X
+#else
+# define TRACE(X)
+#endif
+
+/* Forward reference
+*/
+typedef struct WhereClause WhereClause;
+
+/*
+** The query generator uses an array of instances of this structure to
+** help it analyze the subexpressions of the WHERE clause. Each WHERE
+** clause subexpression is separated from the others by an AND operator.
+**
+** All WhereTerms are collected into a single WhereClause structure.
+** The following identity holds:
+**
+** WhereTerm.pWC->a[WhereTerm.idx] == WhereTerm
+**
+** When a term is of the form:
+**
+** X <op> <expr>
+**
+** where X is a column name and <op> is one of certain operators,
+** then WhereTerm.leftCursor and WhereTerm.leftColumn record the
+** cursor number and column number for X. WhereTerm.eOperator records
+** the <op> using a bitmask encoding defined by WO_xxx below. The
+** use of a bitmask encoding for the operator allows us to search
+** quickly for terms that match any of several different operators.
+**
+** prereqRight and prereqAll record sets of cursor numbers,
+** but they do so indirectly. A single ExprMaskSet structure translates
+** cursor number into bits and the translated bit is stored in the prereq
+** fields. The translation is used in order to maximize the number of
+** bits that will fit in a Bitmask. The VDBE cursor numbers might be
+** spread out over the non-negative integers. For example, the cursor
+** numbers might be 3, 8, 9, 10, 20, 23, 41, and 45. The ExprMaskSet
+** translates these sparse cursor numbers into consecutive integers
+** beginning with 0 in order to make the best possible use of the available
+** bits in the Bitmask. So, in the example above, the cursor numbers
+** would be mapped into integers 0 through 7.
+*/
+typedef struct WhereTerm WhereTerm;
+struct WhereTerm {
+ Expr *pExpr; /* Pointer to the subexpression */
+ i16 iParent; /* Disable pWC->a[iParent] when this term disabled */
+ i16 leftCursor; /* Cursor number of X in "X <op> <expr>" */
+ i16 leftColumn; /* Column number of X in "X <op> <expr>" */
+ u16 eOperator; /* A WO_xx value describing <op> */
+ u8 flags; /* Bit flags. See below */
+ u8 nChild; /* Number of children that must disable us */
+ WhereClause *pWC; /* The clause this term is part of */
+ Bitmask prereqRight; /* Bitmask of tables used by pRight */
+ Bitmask prereqAll; /* Bitmask of tables referenced by p */
+};
+
+/*
+** Allowed values of WhereTerm.flags
+*/
+#define TERM_DYNAMIC 0x01 /* Need to call sqlite3ExprDelete(pExpr) */
+#define TERM_VIRTUAL 0x02 /* Added by the optimizer. Do not code */
+#define TERM_CODED 0x04 /* This term is already coded */
+#define TERM_COPIED 0x08 /* Has a child */
+#define TERM_OR_OK 0x10 /* Used during OR-clause processing */
+
+/*
+** An instance of the following structure holds all information about a
+** WHERE clause. Mostly this is a container for one or more WhereTerms.
+*/
+struct WhereClause {
+ Parse *pParse; /* The parser context */
+ int nTerm; /* Number of terms */
+ int nSlot; /* Number of entries in a[] */
+ WhereTerm *a; /* Each a[] describes a term of the WHERE clause */
+ WhereTerm aStatic[10]; /* Initial static space for a[] */
+};
+
+/*
+** An instance of the following structure keeps track of a mapping
+** between VDBE cursor numbers and bits of the bitmasks in WhereTerm.
+**
+** The VDBE cursor numbers are small integers contained in
+** SrcList_item.iCursor and Expr.iTable fields. For any given WHERE
+** clause, the cursor numbers might not begin with 0 and they might
+** contain gaps in the numbering sequence. But we want to make maximum
+** use of the bits in our bitmasks. This structure provides a mapping
+** from the sparse cursor numbers into consecutive integers beginning
+** with 0.
+**
+** If ExprMaskSet.ix[A]==B it means that the A-th bit of a Bitmask
+** corresponds to VDBE cursor number B. The A-th bit of a bitmask is 1<<A.
+**
+** For example, if the WHERE clause expression used VDBE cursors
+** 4, 5, 8, 29, 57, and 73, then the ExprMaskSet structure
+** would map those cursor numbers into bits 0 through 5.
+**
+** Note that the mapping is not necessarily ordered. In the example
+** above, the mapping might go like this: 4->3, 5->1, 8->2, 29->0,
+** 57->5, 73->4. Or one of 719 other combinations might be used. It
+** does not really matter. What is important is that sparse cursor
+** numbers all get mapped into bit numbers that begin with 0 and contain
+** no gaps.
+*/
+typedef struct ExprMaskSet ExprMaskSet;
+struct ExprMaskSet {
+ int n; /* Number of assigned cursor values */
+ int ix[sizeof(Bitmask)*8]; /* Cursor assigned to each bit */
+};
+
+
+/*
+** Bitmasks for the operators that indices are able to exploit. An
+** OR-ed combination of these values can be used when searching for
+** terms in the where clause.
+*/
+#define WO_IN 1
+#define WO_EQ 2
+#define WO_LT (WO_EQ<<(TK_LT-TK_EQ))
+#define WO_LE (WO_EQ<<(TK_LE-TK_EQ))
+#define WO_GT (WO_EQ<<(TK_GT-TK_EQ))
+#define WO_GE (WO_EQ<<(TK_GE-TK_EQ))
+#define WO_MATCH 64
+
+/*
+** Value for flags returned by bestIndex()
+*/
+#define WHERE_ROWID_EQ 0x0001 /* rowid=EXPR or rowid IN (...) */
+#define WHERE_ROWID_RANGE 0x0002 /* rowid<EXPR and/or rowid>EXPR */
+#define WHERE_COLUMN_EQ 0x0010 /* x=EXPR or x IN (...) */
+#define WHERE_COLUMN_RANGE 0x0020 /* x<EXPR and/or x>EXPR */
+#define WHERE_COLUMN_IN 0x0040 /* x IN (...) */
+#define WHERE_TOP_LIMIT 0x0100 /* x<EXPR or x<=EXPR constraint */
+#define WHERE_BTM_LIMIT 0x0200 /* x>EXPR or x>=EXPR constraint */
+#define WHERE_IDX_ONLY 0x0800 /* Use index only - omit table */
+#define WHERE_ORDERBY 0x1000 /* Output will appear in correct order */
+#define WHERE_REVERSE 0x2000 /* Scan in reverse order */
+#define WHERE_UNIQUE 0x4000 /* Selects no more than one row */
+#define WHERE_VIRTUALTABLE 0x8000 /* Use virtual-table processing */
+
+/*
+** Initialize a preallocated WhereClause structure.
+*/
+static void whereClauseInit(WhereClause *pWC, Parse *pParse){
+ pWC->pParse = pParse;
+ pWC->nTerm = 0;
+ pWC->nSlot = ARRAYSIZE(pWC->aStatic);
+ pWC->a = pWC->aStatic;
+}
+
+/*
+** Deallocate a WhereClause structure. The WhereClause structure
+** itself is not freed. This routine is the inverse of whereClauseInit().
+*/
+static void whereClauseClear(WhereClause *pWC){
+ int i;
+ WhereTerm *a;
+ for(i=pWC->nTerm-1, a=pWC->a; i>=0; i--, a++){
+ if( a->flags & TERM_DYNAMIC ){
+ sqlite3ExprDelete(a->pExpr);
+ }
+ }
+ if( pWC->a!=pWC->aStatic ){
+ sqliteFree(pWC->a);
+ }
+}
+
+/*
+** Add a new entry to the WhereClause structure. Increase the allocated
+** space as necessary.
+**
+** WARNING: This routine might reallocate the space used to store
+** WhereTerms. All pointers to WhereTerms should be invalidated after
+** calling this routine. Such pointers may be reinitialized by referencing
+** the pWC->a[] array.
+*/
+static int whereClauseInsert(WhereClause *pWC, Expr *p, int flags){
+ WhereTerm *pTerm;
+ int idx;
+ if( pWC->nTerm>=pWC->nSlot ){
+ WhereTerm *pOld = pWC->a;
+ pWC->a = sqliteMalloc( sizeof(pWC->a[0])*pWC->nSlot*2 );
+ if( pWC->a==0 ) return 0;
+ memcpy(pWC->a, pOld, sizeof(pWC->a[0])*pWC->nTerm);
+ if( pOld!=pWC->aStatic ){
+ sqliteFree(pOld);
+ }
+ pWC->nSlot *= 2;
+ }
+ pTerm = &pWC->a[idx = pWC->nTerm];
+ pWC->nTerm++;
+ pTerm->pExpr = p;
+ pTerm->flags = flags;
+ pTerm->pWC = pWC;
+ pTerm->iParent = -1;
+ return idx;
+}
+
+/*
+** This routine identifies subexpressions in the WHERE clause where
+** each subexpression is separated by the AND operator or some other
+** operator specified in the op parameter. The WhereClause structure
+** is filled with pointers to subexpressions. For example:
+**
+** WHERE a=='hello' AND coalesce(b,11)<10 AND (c+12!=d OR c==22)
+** \________/ \_______________/ \________________/
+** slot[0] slot[1] slot[2]
+**
+** The original WHERE clause in pExpr is unaltered. All this routine
+** does is make slot[] entries point to substructure within pExpr.
+**
+** In the previous sentence and in the diagram, "slot[]" refers to
+** the WhereClause.a[] array. This array grows as needed to contain
+** all terms of the WHERE clause.
+*/
+static void whereSplit(WhereClause *pWC, Expr *pExpr, int op){
+ if( pExpr==0 ) return;
+ if( pExpr->op!=op ){
+ whereClauseInsert(pWC, pExpr, 0);
+ }else{
+ whereSplit(pWC, pExpr->pLeft, op);
+ whereSplit(pWC, pExpr->pRight, op);
+ }
+}
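An editor's sketch (not part of this commit) of the typical call sequence for the helpers above; the function name is hypothetical. Splitting on TK_AND leaves one AND-connected subterm in each slot:

    /* Hypothetical example: split a WHERE expression into its AND terms. */
    static void exampleSplit(Parse *pParse, Expr *pWhere){
      WhereClause wc;
      int i;
      whereClauseInit(&wc, pParse);
      whereSplit(&wc, pWhere, TK_AND);
      for(i=0; i<wc.nTerm; i++){
        /* wc.a[i].pExpr now points at the i-th AND-connected subexpression */
      }
      whereClauseClear(&wc);
    }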
+
+/*
+** Initialize an expression mask set
+*/
+#define initMaskSet(P) memset(P, 0, sizeof(*P))
+
+/*
+** Return the bitmask for the given cursor number. Return 0 if
+** iCursor is not in the set.
+*/
+static Bitmask getMask(ExprMaskSet *pMaskSet, int iCursor){
+ int i;
+ for(i=0; i<pMaskSet->n; i++){
+ if( pMaskSet->ix[i]==iCursor ){
+ return ((Bitmask)1)<<i;
+ }
+ }
+ return 0;
+}
+
+/*
+** Create a new mask for cursor iCursor.
+**
+** There is one cursor per table in the FROM clause. The number of
+** tables in the FROM clause is limited by a test early in the
+** sqlite3WhereBegin() routine. So we know that the pMaskSet->ix[]
+** array will never overflow.
+*/
+static void createMask(ExprMaskSet *pMaskSet, int iCursor){
+ assert( pMaskSet->n < ARRAYSIZE(pMaskSet->ix) );
+ pMaskSet->ix[pMaskSet->n++] = iCursor;
+}
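A small sketch (not part of this commit) of how the two helpers above pack sparse cursor numbers into consecutive bits; the cursor values are arbitrary examples and the function name is hypothetical:

    static void exampleMaskSet(void){
      ExprMaskSet ms;
      initMaskSet(&ms);
      createMask(&ms, 4);                            /* cursor 4  -> bit 0 */
      createMask(&ms, 29);                           /* cursor 29 -> bit 1 */
      assert( getMask(&ms, 4)==(((Bitmask)1)<<0) );
      assert( getMask(&ms, 29)==(((Bitmask)1)<<1) );
      assert( getMask(&ms, 7)==0 );                  /* unmapped cursor -> 0 */
    }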
+
+/*
+** This routine walks (recursively) an expression tree and generates
+** a bitmask indicating which tables are used in that expression
+** tree.
+**
+** In order for this routine to work, the calling function must have
+** previously invoked sqlite3ExprResolveNames() on the expression. See
+** the header comment on that routine for additional information.
+** The sqlite3ExprResolveNames() routine looks for column names and
+** sets their opcodes to TK_COLUMN and their Expr.iTable fields to
+** the VDBE cursor number of the table. This routine just has to
+** translate the cursor numbers into bitmask values and OR all
+** the bitmasks together.
+*/
+static Bitmask exprListTableUsage(ExprMaskSet*, ExprList*);
+static Bitmask exprSelectTableUsage(ExprMaskSet*, Select*);
+static Bitmask exprTableUsage(ExprMaskSet *pMaskSet, Expr *p){
+ Bitmask mask = 0;
+ if( p==0 ) return 0;
+ if( p->op==TK_COLUMN ){
+ mask = getMask(pMaskSet, p->iTable);
+ return mask;
+ }
+ mask = exprTableUsage(pMaskSet, p->pRight);
+ mask |= exprTableUsage(pMaskSet, p->pLeft);
+ mask |= exprListTableUsage(pMaskSet, p->pList);
+ mask |= exprSelectTableUsage(pMaskSet, p->pSelect);
+ return mask;
+}
+static Bitmask exprListTableUsage(ExprMaskSet *pMaskSet, ExprList *pList){
+ int i;
+ Bitmask mask = 0;
+ if( pList ){
+ for(i=0; i<pList->nExpr; i++){
+ mask |= exprTableUsage(pMaskSet, pList->a[i].pExpr);
+ }
+ }
+ return mask;
+}
+static Bitmask exprSelectTableUsage(ExprMaskSet *pMaskSet, Select *pS){
+ Bitmask mask;
+ if( pS==0 ){
+ mask = 0;
+ }else{
+ mask = exprListTableUsage(pMaskSet, pS->pEList);
+ mask |= exprListTableUsage(pMaskSet, pS->pGroupBy);
+ mask |= exprListTableUsage(pMaskSet, pS->pOrderBy);
+ mask |= exprTableUsage(pMaskSet, pS->pWhere);
+ mask |= exprTableUsage(pMaskSet, pS->pHaving);
+ }
+ return mask;
+}
+
+/*
+** Return TRUE if the given operator is one of the operators that is
+** allowed for an indexable WHERE clause term. The allowed operators are
+** "=", "<", ">", "<=", ">=", and "IN".
+*/
+static int allowedOp(int op){
+ assert( TK_GT>TK_EQ && TK_GT<TK_GE );
+ assert( TK_LT>TK_EQ && TK_LT<TK_GE );
+ assert( TK_LE>TK_EQ && TK_LE<TK_GE );
+ assert( TK_GE==TK_EQ+4 );
+ return op==TK_IN || (op>=TK_EQ && op<=TK_GE);
+}
+
+/*
+** Swap two objects of type T.
+*/
+#define SWAP(TYPE,A,B) {TYPE t=A; A=B; B=t;}
+
+/*
+** Commute a comparison operator. Expressions of the form "X op Y"
+** are converted into "Y op X".
+*/
+static void exprCommute(Expr *pExpr){
+ assert( allowedOp(pExpr->op) && pExpr->op!=TK_IN );
+ SWAP(CollSeq*,pExpr->pRight->pColl,pExpr->pLeft->pColl);
+ SWAP(Expr*,pExpr->pRight,pExpr->pLeft);
+ if( pExpr->op>=TK_GT ){
+ assert( TK_LT==TK_GT+2 );
+ assert( TK_GE==TK_LE+2 );
+ assert( TK_GT>TK_EQ );
+ assert( TK_GT<TK_LE );
+ assert( pExpr->op>=TK_GT && pExpr->op<=TK_GE );
+ pExpr->op = ((pExpr->op-TK_GT)^2)+TK_GT;
+ }
+}
+
+/*
+** Translate from TK_xx operator to WO_xx bitmask.
+*/
+static int operatorMask(int op){
+ int c;
+ assert( allowedOp(op) );
+ if( op==TK_IN ){
+ c = WO_IN;
+ }else{
+ c = WO_EQ<<(op-TK_EQ);
+ }
+ assert( op!=TK_IN || c==WO_IN );
+ assert( op!=TK_EQ || c==WO_EQ );
+ assert( op!=TK_LT || c==WO_LT );
+ assert( op!=TK_LE || c==WO_LE );
+ assert( op!=TK_GT || c==WO_GT );
+ assert( op!=TK_GE || c==WO_GE );
+ return c;
+}
+
+/*
+** Search for a term in the WHERE clause that is of the form "X <op> <expr>"
+** where X is a reference to the iColumn of table iCur and <op> is one of
+** the WO_xx operator codes specified by the op parameter.
+** Return a pointer to the term. Return 0 if not found.
+*/
+static WhereTerm *findTerm(
+ WhereClause *pWC, /* The WHERE clause to be searched */
+ int iCur, /* Cursor number of LHS */
+ int iColumn, /* Column number of LHS */
+ Bitmask notReady, /* RHS must not overlap with this mask */
+ u16 op, /* Mask of WO_xx values describing operator */
+ Index *pIdx /* Must be compatible with this index, if not NULL */
+){
+ WhereTerm *pTerm;
+ int k;
+ for(pTerm=pWC->a, k=pWC->nTerm; k; k--, pTerm++){
+ if( pTerm->leftCursor==iCur
+ && (pTerm->prereqRight & notReady)==0
+ && pTerm->leftColumn==iColumn
+ && (pTerm->eOperator & op)!=0
+ ){
+ if( iCur>=0 && pIdx ){
+ Expr *pX = pTerm->pExpr;
+ CollSeq *pColl;
+ char idxaff;
+ int j;
+ Parse *pParse = pWC->pParse;
+
+ idxaff = pIdx->pTable->aCol[iColumn].affinity;
+ if( !sqlite3IndexAffinityOk(pX, idxaff) ) continue;
+ pColl = sqlite3ExprCollSeq(pParse, pX->pLeft);
+ if( !pColl ){
+ if( pX->pRight ){
+ pColl = sqlite3ExprCollSeq(pParse, pX->pRight);
+ }
+ if( !pColl ){
+ pColl = pParse->db->pDfltColl;
+ }
+ }
+ for(j=0; j<pIdx->nColumn && pIdx->aiColumn[j]!=iColumn; j++){}
+ assert( j<pIdx->nColumn );
+ if( sqlite3StrICmp(pColl->zName, pIdx->azColl[j]) ) continue;
+ }
+ return pTerm;
+ }
+ }
+ return 0;
+}
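Because operators are encoded as an OR-able bitmask, a single findTerm() call can search for several operators at once. A hypothetical sketch (not part of this commit):

    /* Find a usable upper-bound term ("x<expr" or "x<=expr") on a column. */
    static WhereTerm *exampleFindUpperBound(
      WhereClause *pWC, int iCur, int iCol, Bitmask notReady, Index *pIdx
    ){
      return findTerm(pWC, iCur, iCol, notReady, WO_LT|WO_LE, pIdx);
    }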
+
+/* Forward reference */
+static void exprAnalyze(SrcList*, ExprMaskSet*, WhereClause*, int);
+
+/*
+** Call exprAnalyze on all terms in a WHERE clause.
+*/
+static void exprAnalyzeAll(
+ SrcList *pTabList, /* the FROM clause */
+ ExprMaskSet *pMaskSet, /* table masks */
+ WhereClause *pWC /* the WHERE clause to be analyzed */
+){
+ int i;
+ for(i=pWC->nTerm-1; i>=0; i--){
+ exprAnalyze(pTabList, pMaskSet, pWC, i);
+ }
+}
+
+#ifndef SQLITE_OMIT_LIKE_OPTIMIZATION
+/*
+** Check to see if the given expression is a LIKE or GLOB operator that
+** can be optimized using inequality constraints. Return TRUE if it is
+** so and false if not.
+**
+** In order for the operator to be optimizable, the RHS must be a string
+** literal that does not begin with a wildcard.
+*/
+static int isLikeOrGlob(
+ sqlite3 *db, /* The database */
+ Expr *pExpr, /* Test this expression */
+ int *pnPattern, /* Number of non-wildcard prefix characters */
+ int *pisComplete /* True if the only wildcard is % in the last character */
+){
+ const char *z;
+ Expr *pRight, *pLeft;
+ ExprList *pList;
+ int c, cnt;
+ int noCase;
+ char wc[3];
+ CollSeq *pColl;
+
+ if( !sqlite3IsLikeFunction(db, pExpr, &noCase, wc) ){
+ return 0;
+ }
+ pList = pExpr->pList;
+ pRight = pList->a[0].pExpr;
+ if( pRight->op!=TK_STRING ){
+ return 0;
+ }
+ pLeft = pList->a[1].pExpr;
+ if( pLeft->op!=TK_COLUMN ){
+ return 0;
+ }
+ pColl = pLeft->pColl;
+ if( pColl==0 ){
+ pColl = db->pDfltColl;
+ }
+ if( (pColl->type!=SQLITE_COLL_BINARY || noCase) &&
+ (pColl->type!=SQLITE_COLL_NOCASE || !noCase) ){
+ return 0;
+ }
+ sqlite3DequoteExpr(pRight);
+ z = (char *)pRight->token.z;
+ for(cnt=0; (c=z[cnt])!=0 && c!=wc[0] && c!=wc[1] && c!=wc[2]; cnt++){}
+ if( cnt==0 || 255==(u8)z[cnt] ){
+ return 0;
+ }
+ *pisComplete = z[cnt]==wc[0] && z[cnt+1]==0;
+ *pnPattern = cnt;
+ return 1;
+}
+#endif /* SQLITE_OMIT_LIKE_OPTIMIZATION */
+
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/*
+** Check to see if the given expression is of the form
+**
+** column MATCH expr
+**
+** If it is then return TRUE. If not, return FALSE.
+*/
+static int isMatchOfColumn(
+ Expr *pExpr /* Test this expression */
+){
+ ExprList *pList;
+
+ if( pExpr->op!=TK_FUNCTION ){
+ return 0;
+ }
+ if( pExpr->token.n!=5 ||
+ sqlite3StrNICmp((const char*)pExpr->token.z,"match",5)!=0 ){
+ return 0;
+ }
+ pList = pExpr->pList;
+ if( pList->nExpr!=2 ){
+ return 0;
+ }
+ if( pList->a[1].pExpr->op != TK_COLUMN ){
+ return 0;
+ }
+ return 1;
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+/*
+** If the pBase expression originated in the ON or USING clause of
+** a join, then transfer the appropriate markings over to derived.
+*/
+static void transferJoinMarkings(Expr *pDerived, Expr *pBase){
+ pDerived->flags |= pBase->flags & EP_FromJoin;
+ pDerived->iRightJoinTable = pBase->iRightJoinTable;
+}
+
+
+/*
+** The input to this routine is a WhereTerm structure with only the
+** "pExpr" field filled in. The job of this routine is to analyze the
+** subexpression and populate all the other fields of the WhereTerm
+** structure.
+**
+** If the expression is of the form "<expr> <op> X" it gets commuted
+** to the standard form of "X <op> <expr>". If the expression is of
+** the form "X <op> Y" where both X and Y are columns, then the original
+** expression is unchanged and a new virtual expression of the form
+** "Y <op> X" is added to the WHERE clause and analyzed separately.
+*/
+static void exprAnalyze(
+ SrcList *pSrc, /* the FROM clause */
+ ExprMaskSet *pMaskSet, /* table masks */
+ WhereClause *pWC, /* the WHERE clause */
+ int idxTerm /* Index of the term to be analyzed */
+){
+ WhereTerm *pTerm = &pWC->a[idxTerm];
+ Expr *pExpr = pTerm->pExpr;
+ Bitmask prereqLeft;
+ Bitmask prereqAll;
+ int nPattern;
+ int isComplete;
+
+ if( sqlite3MallocFailed() ) return;
+ prereqLeft = exprTableUsage(pMaskSet, pExpr->pLeft);
+ if( pExpr->op==TK_IN ){
+ assert( pExpr->pRight==0 );
+ pTerm->prereqRight = exprListTableUsage(pMaskSet, pExpr->pList)
+ | exprSelectTableUsage(pMaskSet, pExpr->pSelect);
+ }else{
+ pTerm->prereqRight = exprTableUsage(pMaskSet, pExpr->pRight);
+ }
+ prereqAll = exprTableUsage(pMaskSet, pExpr);
+ if( ExprHasProperty(pExpr, EP_FromJoin) ){
+ prereqAll |= getMask(pMaskSet, pExpr->iRightJoinTable);
+ }
+ pTerm->prereqAll = prereqAll;
+ pTerm->leftCursor = -1;
+ pTerm->iParent = -1;
+ pTerm->eOperator = 0;
+ if( allowedOp(pExpr->op) && (pTerm->prereqRight & prereqLeft)==0 ){
+ Expr *pLeft = pExpr->pLeft;
+ Expr *pRight = pExpr->pRight;
+ if( pLeft->op==TK_COLUMN ){
+ pTerm->leftCursor = pLeft->iTable;
+ pTerm->leftColumn = pLeft->iColumn;
+ pTerm->eOperator = operatorMask(pExpr->op);
+ }
+ if( pRight && pRight->op==TK_COLUMN ){
+ WhereTerm *pNew;
+ Expr *pDup;
+ if( pTerm->leftCursor>=0 ){
+ int idxNew;
+ pDup = sqlite3ExprDup(pExpr);
+ idxNew = whereClauseInsert(pWC, pDup, TERM_VIRTUAL|TERM_DYNAMIC);
+ if( idxNew==0 ) return;
+ pNew = &pWC->a[idxNew];
+ pNew->iParent = idxTerm;
+ pTerm = &pWC->a[idxTerm];
+ pTerm->nChild = 1;
+ pTerm->flags |= TERM_COPIED;
+ }else{
+ pDup = pExpr;
+ pNew = pTerm;
+ }
+ exprCommute(pDup);
+ pLeft = pDup->pLeft;
+ pNew->leftCursor = pLeft->iTable;
+ pNew->leftColumn = pLeft->iColumn;
+ pNew->prereqRight = prereqLeft;
+ pNew->prereqAll = prereqAll;
+ pNew->eOperator = operatorMask(pDup->op);
+ }
+ }
+
+#ifndef SQLITE_OMIT_BETWEEN_OPTIMIZATION
+ /* If a term is the BETWEEN operator, create two new virtual terms
+ ** that define the range that the BETWEEN implements.
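+  ** For example, "x BETWEEN 10 AND 20" gives rise to the virtual terms
+  ** "x>=10" and "x<=20", each of which can be analyzed and used with an
+  ** index on x.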
+ */
+ else if( pExpr->op==TK_BETWEEN ){
+ ExprList *pList = pExpr->pList;
+ int i;
+ static const u8 ops[] = {TK_GE, TK_LE};
+ assert( pList!=0 );
+ assert( pList->nExpr==2 );
+ for(i=0; i<2; i++){
+ Expr *pNewExpr;
+ int idxNew;
+ pNewExpr = sqlite3Expr(ops[i], sqlite3ExprDup(pExpr->pLeft),
+ sqlite3ExprDup(pList->a[i].pExpr), 0);
+ idxNew = whereClauseInsert(pWC, pNewExpr, TERM_VIRTUAL|TERM_DYNAMIC);
+ exprAnalyze(pSrc, pMaskSet, pWC, idxNew);
+ pTerm = &pWC->a[idxTerm];
+ pWC->a[idxNew].iParent = idxTerm;
+ }
+ pTerm->nChild = 2;
+ }
+#endif /* SQLITE_OMIT_BETWEEN_OPTIMIZATION */
+
+#if !defined(SQLITE_OMIT_OR_OPTIMIZATION) && !defined(SQLITE_OMIT_SUBQUERY)
+ /* Attempt to convert OR-connected terms into an IN operator so that
+ ** they can make use of indices. Example:
+ **
+ ** x = expr1 OR expr2 = x OR x = expr3
+ **
+ ** is converted into
+ **
+ ** x IN (expr1,expr2,expr3)
+ **
+ ** This optimization must be omitted if OMIT_SUBQUERY is defined because
+  ** the compiler for the IN operator is part of sub-queries.
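+  **
+  ** The conversion is only attempted when every OR-connected term is an
+  ** equality test against the same column of the same table; otherwise
+  ** the terms are left unchanged and no virtual IN term is added.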
+ */
+ else if( pExpr->op==TK_OR ){
+ int ok;
+ int i, j;
+ int iColumn, iCursor;
+ WhereClause sOr;
+ WhereTerm *pOrTerm;
+
+ assert( (pTerm->flags & TERM_DYNAMIC)==0 );
+ whereClauseInit(&sOr, pWC->pParse);
+ whereSplit(&sOr, pExpr, TK_OR);
+ exprAnalyzeAll(pSrc, pMaskSet, &sOr);
+ assert( sOr.nTerm>0 );
+ j = 0;
+ do{
+ iColumn = sOr.a[j].leftColumn;
+ iCursor = sOr.a[j].leftCursor;
+ ok = iCursor>=0;
+ for(i=sOr.nTerm-1, pOrTerm=sOr.a; i>=0 && ok; i--, pOrTerm++){
+ if( pOrTerm->eOperator!=WO_EQ ){
+ goto or_not_possible;
+ }
+ if( pOrTerm->leftCursor==iCursor && pOrTerm->leftColumn==iColumn ){
+ pOrTerm->flags |= TERM_OR_OK;
+ }else if( (pOrTerm->flags & TERM_COPIED)!=0 ||
+ ((pOrTerm->flags & TERM_VIRTUAL)!=0 &&
+ (sOr.a[pOrTerm->iParent].flags & TERM_OR_OK)!=0) ){
+ pOrTerm->flags &= ~TERM_OR_OK;
+ }else{
+ ok = 0;
+ }
+ }
+ }while( !ok && (sOr.a[j++].flags & TERM_COPIED)!=0 && j<sOr.nTerm );
+ if( ok ){
+ ExprList *pList = 0;
+ Expr *pNew, *pDup;
+ for(i=sOr.nTerm-1, pOrTerm=sOr.a; i>=0 && ok; i--, pOrTerm++){
+ if( (pOrTerm->flags & TERM_OR_OK)==0 ) continue;
+ pDup = sqlite3ExprDup(pOrTerm->pExpr->pRight);
+ pList = sqlite3ExprListAppend(pList, pDup, 0);
+ }
+ pDup = sqlite3Expr(TK_COLUMN, 0, 0, 0);
+ if( pDup ){
+ pDup->iTable = iCursor;
+ pDup->iColumn = iColumn;
+ }
+ pNew = sqlite3Expr(TK_IN, pDup, 0, 0);
+ if( pNew ){
+ int idxNew;
+ transferJoinMarkings(pNew, pExpr);
+ pNew->pList = pList;
+ idxNew = whereClauseInsert(pWC, pNew, TERM_VIRTUAL|TERM_DYNAMIC);
+ exprAnalyze(pSrc, pMaskSet, pWC, idxNew);
+ pTerm = &pWC->a[idxTerm];
+ pWC->a[idxNew].iParent = idxTerm;
+ pTerm->nChild = 1;
+ }else{
+ sqlite3ExprListDelete(pList);
+ }
+ }
+or_not_possible:
+ whereClauseClear(&sOr);
+ }
+#endif /* !SQLITE_OMIT_OR_OPTIMIZATION && !SQLITE_OMIT_SUBQUERY */
+
+#ifndef SQLITE_OMIT_LIKE_OPTIMIZATION
+ /* Add constraints to reduce the search space on a LIKE or GLOB
+ ** operator.
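+  **
+  ** When the conditions tested by isLikeOrGlob() are met, a term such as
+  ** "x LIKE 'abc%'" is supplemented with the virtual terms "x>='abc'" and
+  ** "x<'abd'", which allow an index on x to narrow the search before the
+  ** LIKE operator itself is evaluated.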
+ */
+ if( isLikeOrGlob(pWC->pParse->db, pExpr, &nPattern, &isComplete) ){
+ Expr *pLeft, *pRight;
+ Expr *pStr1, *pStr2;
+ Expr *pNewExpr1, *pNewExpr2;
+ int idxNew1, idxNew2;
+
+ pLeft = pExpr->pList->a[1].pExpr;
+ pRight = pExpr->pList->a[0].pExpr;
+ pStr1 = sqlite3Expr(TK_STRING, 0, 0, 0);
+ if( pStr1 ){
+ sqlite3TokenCopy(&pStr1->token, &pRight->token);
+ pStr1->token.n = nPattern;
+ }
+ pStr2 = sqlite3ExprDup(pStr1);
+ if( pStr2 ){
+ assert( pStr2->token.dyn );
+ ++*(u8*)&pStr2->token.z[nPattern-1];
+ }
+ pNewExpr1 = sqlite3Expr(TK_GE, sqlite3ExprDup(pLeft), pStr1, 0);
+ idxNew1 = whereClauseInsert(pWC, pNewExpr1, TERM_VIRTUAL|TERM_DYNAMIC);
+ exprAnalyze(pSrc, pMaskSet, pWC, idxNew1);
+ pNewExpr2 = sqlite3Expr(TK_LT, sqlite3ExprDup(pLeft), pStr2, 0);
+ idxNew2 = whereClauseInsert(pWC, pNewExpr2, TERM_VIRTUAL|TERM_DYNAMIC);
+ exprAnalyze(pSrc, pMaskSet, pWC, idxNew2);
+ pTerm = &pWC->a[idxTerm];
+ if( isComplete ){
+ pWC->a[idxNew1].iParent = idxTerm;
+ pWC->a[idxNew2].iParent = idxTerm;
+ pTerm->nChild = 2;
+ }
+ }
+#endif /* SQLITE_OMIT_LIKE_OPTIMIZATION */
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ /* Add a WO_MATCH auxiliary term to the constraint set if the
+ ** current expression is of the form: column MATCH expr.
+ ** This information is used by the xBestIndex methods of
+ ** virtual tables. The native query optimizer does not attempt
+ ** to do anything with MATCH functions.
+ */
+ if( isMatchOfColumn(pExpr) ){
+ int idxNew;
+ Expr *pRight, *pLeft;
+ WhereTerm *pNewTerm;
+ Bitmask prereqColumn, prereqExpr;
+
+ pRight = pExpr->pList->a[0].pExpr;
+ pLeft = pExpr->pList->a[1].pExpr;
+ prereqExpr = exprTableUsage(pMaskSet, pRight);
+ prereqColumn = exprTableUsage(pMaskSet, pLeft);
+ if( (prereqExpr & prereqColumn)==0 ){
+ Expr *pNewExpr;
+ pNewExpr = sqlite3Expr(TK_MATCH, 0, sqlite3ExprDup(pRight), 0);
+ idxNew = whereClauseInsert(pWC, pNewExpr, TERM_VIRTUAL|TERM_DYNAMIC);
+ pNewTerm = &pWC->a[idxNew];
+ pNewTerm->prereqRight = prereqExpr;
+ pNewTerm->leftCursor = pLeft->iTable;
+ pNewTerm->leftColumn = pLeft->iColumn;
+ pNewTerm->eOperator = WO_MATCH;
+ pNewTerm->iParent = idxTerm;
+ pTerm = &pWC->a[idxTerm];
+ pTerm->nChild = 1;
+ pTerm->flags |= TERM_COPIED;
+ pNewTerm->prereqAll = pTerm->prereqAll;
+ }
+ }
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+}
+
+
+/*
+** This routine decides if pIdx can be used to satisfy the ORDER BY
+** clause. If it can, it returns 1. If pIdx cannot satisfy the
+** ORDER BY clause, this routine returns 0.
+**
+** pOrderBy is an ORDER BY clause from a SELECT statement. pTab is the
+** left-most table in the FROM clause of that same SELECT statement and
+** the table has a cursor number of "base". pIdx is an index on pTab.
+**
+** nEqCol is the number of columns of pIdx that are used as equality
+** constraints. Any of these columns may be missing from the ORDER BY
+** clause and the match can still be a success.
+**
+** All terms of the ORDER BY that match against the index must be either
+** ASC or DESC. (Terms of the ORDER BY clause past the end of a UNIQUE
+** index do not need to satisfy this constraint.) The *pbRev value is
+** set to 1 if the ORDER BY clause is all DESC and it is set to 0 if
+** the ORDER BY clause is all ASC.
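+**
+** For example, if pIdx is an ascending index on (a,b,c) and nEqCol==1
+** (the "a" column is fixed by an == constraint), then "ORDER BY b,c"
+** matches with *pbRev set to 0 and "ORDER BY b DESC, c DESC" matches
+** with *pbRev set to 1, but the mixed "ORDER BY b, c DESC" does not.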
+*/
+static int isSortingIndex(
+ Parse *pParse, /* Parsing context */
+ Index *pIdx, /* The index we are testing */
+ int base, /* Cursor number for the table to be sorted */
+ ExprList *pOrderBy, /* The ORDER BY clause */
+ int nEqCol, /* Number of index columns with == constraints */
+ int *pbRev /* Set to 1 if ORDER BY is DESC */
+){
+ int i, j; /* Loop counters */
+ int sortOrder = 0; /* XOR of index and ORDER BY sort direction */
+ int nTerm; /* Number of ORDER BY terms */
+ struct ExprList_item *pTerm; /* A term of the ORDER BY clause */
+ sqlite3 *db = pParse->db;
+
+ assert( pOrderBy!=0 );
+ nTerm = pOrderBy->nExpr;
+ assert( nTerm>0 );
+
+ /* Match terms of the ORDER BY clause against columns of
+ ** the index.
+ */
+ for(i=j=0, pTerm=pOrderBy->a; j<nTerm && i<pIdx->nColumn; i++){
+ Expr *pExpr; /* The expression of the ORDER BY pTerm */
+ CollSeq *pColl; /* The collating sequence of pExpr */
+ int termSortOrder; /* Sort order for this term */
+
+ pExpr = pTerm->pExpr;
+ if( pExpr->op!=TK_COLUMN || pExpr->iTable!=base ){
+ /* Can not use an index sort on anything that is not a column in the
+ ** left-most table of the FROM clause */
+ return 0;
+ }
+ pColl = sqlite3ExprCollSeq(pParse, pExpr);
+ if( !pColl ) pColl = db->pDfltColl;
+ if( pExpr->iColumn!=pIdx->aiColumn[i] ||
+ sqlite3StrICmp(pColl->zName, pIdx->azColl[i]) ){
+ /* Term j of the ORDER BY clause does not match column i of the index */
+ if( i<nEqCol ){
+ /* If an index column that is constrained by == fails to match an
+ ** ORDER BY term, that is OK. Just ignore that column of the index
+ */
+ continue;
+ }else{
+ /* If an index column fails to match and is not constrained by ==
+ ** then the index cannot satisfy the ORDER BY constraint.
+ */
+ return 0;
+ }
+ }
+ assert( pIdx->aSortOrder!=0 );
+ assert( pTerm->sortOrder==0 || pTerm->sortOrder==1 );
+ assert( pIdx->aSortOrder[i]==0 || pIdx->aSortOrder[i]==1 );
+ termSortOrder = pIdx->aSortOrder[i] ^ pTerm->sortOrder;
+ if( i>nEqCol ){
+ if( termSortOrder!=sortOrder ){
+ /* Indices can only be used if all ORDER BY terms past the
+ ** equality constraints are all either DESC or ASC. */
+ return 0;
+ }
+ }else{
+ sortOrder = termSortOrder;
+ }
+ j++;
+ pTerm++;
+ }
+
+ /* The index can be used for sorting if all terms of the ORDER BY clause
+ ** are covered.
+ */
+ if( j>=nTerm ){
+ *pbRev = sortOrder!=0;
+ return 1;
+ }
+ return 0;
+}
+
+/*
+** Check to see if the ORDER BY clause in pOrderBy can be satisfied
+** by sorting in order of ROWID. Return true if so and set *pbRev to be
+** true for reverse ROWID and false for forward ROWID order.
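+**
+** This is the case only when the ORDER BY consists of a single term that
+** refers to the ROWID (iColumn of -1) of the table whose cursor number is
+** "base", e.g. "ORDER BY rowid" or "ORDER BY rowid DESC".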
+*/
+static int sortableByRowid(
+ int base, /* Cursor number for table to be sorted */
+ ExprList *pOrderBy, /* The ORDER BY clause */
+ int *pbRev /* Set to 1 if ORDER BY is DESC */
+){
+ Expr *p;
+
+ assert( pOrderBy!=0 );
+ assert( pOrderBy->nExpr>0 );
+ p = pOrderBy->a[0].pExpr;
+ if( pOrderBy->nExpr==1 && p->op==TK_COLUMN && p->iTable==base
+ && p->iColumn==-1 ){
+ *pbRev = pOrderBy->a[0].sortOrder;
+ return 1;
+ }
+ return 0;
+}
+
+/*
+** Prepare a crude estimate of the logarithm of the input value.
+** The results need not be exact. This is only used for estimating
+** the total cost of performing operations with O(logN) or O(NlogN)
+** complexity. Because N is just a guess, it is no great tragedy if
+** logN is a little off.
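+**
+** For example, estLog(3) is 1, estLog(1000) is 3 and estLog(1000000)
+** is 6; the result is roughly ceil(log10(N)), but never less than 1.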
+*/
+static double estLog(double N){
+ double logN = 1;
+ double x = 10;
+ while( N>x ){
+ logN += 1;
+ x *= 10;
+ }
+ return logN;
+}
+
+/*
+** Two routines for printing the content of an sqlite3_index_info
+** structure. Used for testing and debugging only. If neither
+** SQLITE_TEST nor SQLITE_DEBUG is defined, then these routines
+** are no-ops.
+*/
+#if !defined(SQLITE_OMIT_VIRTUALTABLE) && \
+ (defined(SQLITE_TEST) || defined(SQLITE_DEBUG))
+static void TRACE_IDX_INPUTS(sqlite3_index_info *p){
+ int i;
+ if( !sqlite3_where_trace ) return;
+ for(i=0; i<p->nConstraint; i++){
+    sqlite3DebugPrintf("  constraint[%d]: col=%d termid=%d op=%d usable=%d\n",
+ i,
+ p->aConstraint[i].iColumn,
+ p->aConstraint[i].iTermOffset,
+ p->aConstraint[i].op,
+ p->aConstraint[i].usable);
+ }
+ for(i=0; i<p->nOrderBy; i++){
+ sqlite3DebugPrintf(" orderby[%d]: col=%d desc=%d\n",
+ i,
+ p->aOrderBy[i].iColumn,
+ p->aOrderBy[i].desc);
+ }
+}
+static void TRACE_IDX_OUTPUTS(sqlite3_index_info *p){
+ int i;
+ if( !sqlite3_where_trace ) return;
+ for(i=0; i<p->nConstraint; i++){
+ sqlite3DebugPrintf(" usage[%d]: argvIdx=%d omit=%d\n",
+ i,
+ p->aConstraintUsage[i].argvIndex,
+ p->aConstraintUsage[i].omit);
+ }
+ sqlite3DebugPrintf(" idxNum=%d\n", p->idxNum);
+ sqlite3DebugPrintf(" idxStr=%s\n", p->idxStr);
+ sqlite3DebugPrintf(" orderByConsumed=%d\n", p->orderByConsumed);
+ sqlite3DebugPrintf(" estimatedCost=%g\n", p->estimatedCost);
+}
+#else
+#define TRACE_IDX_INPUTS(A)
+#define TRACE_IDX_OUTPUTS(A)
+#endif
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+/*
+** Compute the best index for a virtual table.
+**
+** The best index is computed by the xBestIndex method of the virtual
+** table module. This routine is really just a wrapper that sets up
+** the sqlite3_index_info structure that is used to communicate with
+** xBestIndex.
+**
+** In a join, this routine might be called multiple times for the
+** same virtual table. The sqlite3_index_info structure is created
+** and initialized on the first invocation and reused on all subsequent
+** invocations. The sqlite3_index_info structure is also used when
+** code is generated to access the virtual table. The whereInfoFree()
+** routine takes care of freeing the sqlite3_index_info structure after
+** everybody has finished with it.
+*/
+static double bestVirtualIndex(
+ Parse *pParse, /* The parsing context */
+ WhereClause *pWC, /* The WHERE clause */
+ struct SrcList_item *pSrc, /* The FROM clause term to search */
+ Bitmask notReady, /* Mask of cursors that are not available */
+ ExprList *pOrderBy, /* The order by clause */
+  int orderByUsable,          /* True if we can potentially sort */
+ sqlite3_index_info **ppIdxInfo /* Index information passed to xBestIndex */
+){
+ Table *pTab = pSrc->pTab;
+ sqlite3_index_info *pIdxInfo;
+ struct sqlite3_index_constraint *pIdxCons;
+ struct sqlite3_index_orderby *pIdxOrderBy;
+ struct sqlite3_index_constraint_usage *pUsage;
+ WhereTerm *pTerm;
+ int i, j;
+ int nOrderBy;
+ int rc;
+
+ /* If the sqlite3_index_info structure has not been previously
+ ** allocated and initialized for this virtual table, then allocate
+ ** and initialize it now
+ */
+ pIdxInfo = *ppIdxInfo;
+ if( pIdxInfo==0 ){
+ WhereTerm *pTerm;
+ int nTerm;
+ TRACE(("Recomputing index info for %s...\n", pTab->zName));
+
+ /* Count the number of possible WHERE clause constraints referring
+ ** to this virtual table */
+ for(i=nTerm=0, pTerm=pWC->a; i<pWC->nTerm; i++, pTerm++){
+ if( pTerm->leftCursor != pSrc->iCursor ) continue;
+ if( pTerm->eOperator==WO_IN ) continue;
+ nTerm++;
+ }
+
+ /* If the ORDER BY clause contains only columns in the current
+ ** virtual table then allocate space for the aOrderBy part of
+ ** the sqlite3_index_info structure.
+ */
+ nOrderBy = 0;
+ if( pOrderBy ){
+ for(i=0; i<pOrderBy->nExpr; i++){
+ Expr *pExpr = pOrderBy->a[i].pExpr;
+ if( pExpr->op!=TK_COLUMN || pExpr->iTable!=pSrc->iCursor ) break;
+ }
+ if( i==pOrderBy->nExpr ){
+ nOrderBy = pOrderBy->nExpr;
+ }
+ }
+
+ /* Allocate the sqlite3_index_info structure
+ */
+ pIdxInfo = sqliteMalloc( sizeof(*pIdxInfo)
+ + (sizeof(*pIdxCons) + sizeof(*pUsage))*nTerm
+ + sizeof(*pIdxOrderBy)*nOrderBy );
+ if( pIdxInfo==0 ){
+ sqlite3ErrorMsg(pParse, "out of memory");
+ return 0.0;
+ }
+ *ppIdxInfo = pIdxInfo;
+
+ /* Initialize the structure. The sqlite3_index_info structure contains
+ ** many fields that are declared "const" to prevent xBestIndex from
+ ** changing them. We have to do some funky casting in order to
+ ** initialize those fields.
+ */
+ pIdxCons = (struct sqlite3_index_constraint*)&pIdxInfo[1];
+ pIdxOrderBy = (struct sqlite3_index_orderby*)&pIdxCons[nTerm];
+ pUsage = (struct sqlite3_index_constraint_usage*)&pIdxOrderBy[nOrderBy];
+ *(int*)&pIdxInfo->nConstraint = nTerm;
+ *(int*)&pIdxInfo->nOrderBy = nOrderBy;
+ *(struct sqlite3_index_constraint**)&pIdxInfo->aConstraint = pIdxCons;
+ *(struct sqlite3_index_orderby**)&pIdxInfo->aOrderBy = pIdxOrderBy;
+ *(struct sqlite3_index_constraint_usage**)&pIdxInfo->aConstraintUsage =
+ pUsage;
+
+ for(i=j=0, pTerm=pWC->a; i<pWC->nTerm; i++, pTerm++){
+ if( pTerm->leftCursor != pSrc->iCursor ) continue;
+ if( pTerm->eOperator==WO_IN ) continue;
+ pIdxCons[j].iColumn = pTerm->leftColumn;
+ pIdxCons[j].iTermOffset = i;
+ pIdxCons[j].op = pTerm->eOperator;
+ /* The direct assignment in the previous line is possible only because
+ ** the WO_ and SQLITE_INDEX_CONSTRAINT_ codes are identical. The
+ ** following asserts verify this fact. */
+ assert( WO_EQ==SQLITE_INDEX_CONSTRAINT_EQ );
+ assert( WO_LT==SQLITE_INDEX_CONSTRAINT_LT );
+ assert( WO_LE==SQLITE_INDEX_CONSTRAINT_LE );
+ assert( WO_GT==SQLITE_INDEX_CONSTRAINT_GT );
+ assert( WO_GE==SQLITE_INDEX_CONSTRAINT_GE );
+ assert( WO_MATCH==SQLITE_INDEX_CONSTRAINT_MATCH );
+ assert( pTerm->eOperator & (WO_EQ|WO_LT|WO_LE|WO_GT|WO_GE|WO_MATCH) );
+ j++;
+ }
+ for(i=0; i<nOrderBy; i++){
+ Expr *pExpr = pOrderBy->a[i].pExpr;
+ pIdxOrderBy[i].iColumn = pExpr->iColumn;
+ pIdxOrderBy[i].desc = pOrderBy->a[i].sortOrder;
+ }
+ }
+
+ /* At this point, the sqlite3_index_info structure that pIdxInfo points
+ ** to will have been initialized, either during the current invocation or
+ ** during some prior invocation. Now we just have to customize the
+ ** details of pIdxInfo for the current invocation and pass it to
+ ** xBestIndex.
+ */
+
+ /* The module name must be defined */
+ assert( pTab->azModuleArg && pTab->azModuleArg[0] );
+ if( pTab->pVtab==0 ){
+ sqlite3ErrorMsg(pParse, "undefined module %s for table %s",
+ pTab->azModuleArg[0], pTab->zName);
+ return 0.0;
+ }
+
+ /* Set the aConstraint[].usable fields and initialize all
+ ** output variables to zero.
+ **
+ ** aConstraint[].usable is true for constraints where the right-hand
+ ** side contains only references to tables to the left of the current
+ ** table. In other words, if the constraint is of the form:
+ **
+ ** column = expr
+ **
+ ** and we are evaluating a join, then the constraint on column is
+ ** only valid if all tables referenced in expr occur to the left
+ ** of the table containing column.
+ **
+  ** The aConstraint[] array contains entries for all constraints
+ ** on the current table. That way we only have to compute it once
+ ** even though we might try to pick the best index multiple times.
+ ** For each attempt at picking an index, the order of tables in the
+ ** join might be different so we have to recompute the usable flag
+ ** each time.
+ */
+ pIdxCons = *(struct sqlite3_index_constraint**)&pIdxInfo->aConstraint;
+ pUsage = pIdxInfo->aConstraintUsage;
+ for(i=0; i<pIdxInfo->nConstraint; i++, pIdxCons++){
+ j = pIdxCons->iTermOffset;
+ pTerm = &pWC->a[j];
+ pIdxCons->usable = (pTerm->prereqRight & notReady)==0;
+ }
+ memset(pUsage, 0, sizeof(pUsage[0])*pIdxInfo->nConstraint);
+ if( pIdxInfo->needToFreeIdxStr ){
+ sqlite3_free(pIdxInfo->idxStr);
+ }
+ pIdxInfo->idxStr = 0;
+ pIdxInfo->idxNum = 0;
+ pIdxInfo->needToFreeIdxStr = 0;
+ pIdxInfo->orderByConsumed = 0;
+ pIdxInfo->estimatedCost = SQLITE_BIG_DBL / 2.0;
+ nOrderBy = pIdxInfo->nOrderBy;
+ if( pIdxInfo->nOrderBy && !orderByUsable ){
+ *(int*)&pIdxInfo->nOrderBy = 0;
+ }
+
+ sqlite3SafetyOff(pParse->db);
+ TRACE(("xBestIndex for %s\n", pTab->zName));
+ TRACE_IDX_INPUTS(pIdxInfo);
+ rc = pTab->pVtab->pModule->xBestIndex(pTab->pVtab, pIdxInfo);
+ TRACE_IDX_OUTPUTS(pIdxInfo);
+ if( rc!=SQLITE_OK ){
+ if( rc==SQLITE_NOMEM ){
+ sqlite3FailedMalloc();
+ }else {
+ sqlite3ErrorMsg(pParse, "%s", sqlite3ErrStr(rc));
+ }
+ sqlite3SafetyOn(pParse->db);
+ }else{
+ rc = sqlite3SafetyOn(pParse->db);
+ }
+ *(int*)&pIdxInfo->nOrderBy = nOrderBy;
+ return pIdxInfo->estimatedCost;
+}
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+/*
+** Find the best index for accessing a particular table. Return a pointer
+** to the index, flags that describe how the index should be used, the
+** number of equality constraints, and the "cost" for this index.
+**
+** The lowest cost index wins. The cost is an estimate of the amount of
+** CPU and disk I/O needed to process the request using the selected index.
+** Factors that influence cost include:
+**
+** * The estimated number of rows that will be retrieved. (The
+** fewer the better.)
+**
+** * Whether or not sorting must occur.
+**
+** * Whether or not there must be separate lookups in the
+** index and in the main table.
+**
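+** The cost is measured roughly as the expected number of rows visited,
+** inflated by estLog() factors when IN constraints multiply the number
+** of index probes or when a separate sorting pass would be required.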
+*/
+static double bestIndex(
+ Parse *pParse, /* The parsing context */
+ WhereClause *pWC, /* The WHERE clause */
+ struct SrcList_item *pSrc, /* The FROM clause term to search */
+ Bitmask notReady, /* Mask of cursors that are not available */
+ ExprList *pOrderBy, /* The order by clause */
+ Index **ppIndex, /* Make *ppIndex point to the best index */
+ int *pFlags, /* Put flags describing this choice in *pFlags */
+ int *pnEq /* Put the number of == or IN constraints here */
+){
+ WhereTerm *pTerm;
+ Index *bestIdx = 0; /* Index that gives the lowest cost */
+ double lowestCost; /* The cost of using bestIdx */
+ int bestFlags = 0; /* Flags associated with bestIdx */
+ int bestNEq = 0; /* Best value for nEq */
+ int iCur = pSrc->iCursor; /* The cursor of the table to be accessed */
+ Index *pProbe; /* An index we are evaluating */
+ int rev; /* True to scan in reverse order */
+ int flags; /* Flags associated with pProbe */
+ int nEq; /* Number of == or IN constraints */
+ double cost; /* Cost of using pProbe */
+
+ TRACE(("bestIndex: tbl=%s notReady=%x\n", pSrc->pTab->zName, notReady));
+ lowestCost = SQLITE_BIG_DBL;
+ pProbe = pSrc->pTab->pIndex;
+
+ /* If the table has no indices and there are no terms in the where
+ ** clause that refer to the ROWID, then we will never be able to do
+ ** anything other than a full table scan on this table. We might as
+ ** well put it first in the join order. That way, perhaps it can be
+ ** referenced by other tables in the join.
+ */
+ if( pProbe==0 &&
+ findTerm(pWC, iCur, -1, 0, WO_EQ|WO_IN|WO_LT|WO_LE|WO_GT|WO_GE,0)==0 &&
+ (pOrderBy==0 || !sortableByRowid(iCur, pOrderBy, &rev)) ){
+ *pFlags = 0;
+ *ppIndex = 0;
+ *pnEq = 0;
+ return 0.0;
+ }
+
+  /* Check for rowid=EXPR or rowid IN (...) constraints
+ */
+ pTerm = findTerm(pWC, iCur, -1, notReady, WO_EQ|WO_IN, 0);
+ if( pTerm ){
+ Expr *pExpr;
+ *ppIndex = 0;
+ bestFlags = WHERE_ROWID_EQ;
+ if( pTerm->eOperator & WO_EQ ){
+ /* Rowid== is always the best pick. Look no further. Because only
+ ** a single row is generated, output is always in sorted order */
+ *pFlags = WHERE_ROWID_EQ | WHERE_UNIQUE;
+ *pnEq = 1;
+ TRACE(("... best is rowid\n"));
+ return 0.0;
+ }else if( (pExpr = pTerm->pExpr)->pList!=0 ){
+ /* Rowid IN (LIST): cost is NlogN where N is the number of list
+ ** elements. */
+ lowestCost = pExpr->pList->nExpr;
+ lowestCost *= estLog(lowestCost);
+ }else{
+ /* Rowid IN (SELECT): cost is NlogN where N is the number of rows
+ ** in the result of the inner select. We have no way to estimate
+ ** that value so make a wild guess. */
+ lowestCost = 200;
+ }
+ TRACE(("... rowid IN cost: %.9g\n", lowestCost));
+ }
+
+ /* Estimate the cost of a table scan. If we do not know how many
+ ** entries are in the table, use 1 million as a guess.
+ */
+ cost = pProbe ? pProbe->aiRowEst[0] : 1000000;
+ TRACE(("... table scan base cost: %.9g\n", cost));
+ flags = WHERE_ROWID_RANGE;
+
+ /* Check for constraints on a range of rowids in a table scan.
+ */
+ pTerm = findTerm(pWC, iCur, -1, notReady, WO_LT|WO_LE|WO_GT|WO_GE, 0);
+ if( pTerm ){
+ if( findTerm(pWC, iCur, -1, notReady, WO_LT|WO_LE, 0) ){
+ flags |= WHERE_TOP_LIMIT;
+      cost /= 3;  /* Guess that rowid<EXPR eliminates two-thirds of rows */
+ }
+ if( findTerm(pWC, iCur, -1, notReady, WO_GT|WO_GE, 0) ){
+ flags |= WHERE_BTM_LIMIT;
+ cost /= 3; /* Guess that rowid>EXPR eliminates two-thirds of rows */
+ }
+ TRACE(("... rowid range reduces cost to %.9g\n", cost));
+ }else{
+ flags = 0;
+ }
+
+ /* If the table scan does not satisfy the ORDER BY clause, increase
+ ** the cost by NlogN to cover the expense of sorting. */
+ if( pOrderBy ){
+ if( sortableByRowid(iCur, pOrderBy, &rev) ){
+ flags |= WHERE_ORDERBY|WHERE_ROWID_RANGE;
+ if( rev ){
+ flags |= WHERE_REVERSE;
+ }
+ }else{
+ cost += cost*estLog(cost);
+ TRACE(("... sorting increases cost to %.9g\n", cost));
+ }
+ }
+ if( cost<lowestCost ){
+ lowestCost = cost;
+ bestFlags = flags;
+ }
+
+ /* Look at each index.
+ */
+ for(; pProbe; pProbe=pProbe->pNext){
+ int i; /* Loop counter */
+ double inMultiplier = 1;
+
+ TRACE(("... index %s:\n", pProbe->zName));
+
+ /* Count the number of columns in the index that are satisfied
+ ** by x=EXPR constraints or x IN (...) constraints.
+ */
+ flags = 0;
+ for(i=0; i<pProbe->nColumn; i++){
+ int j = pProbe->aiColumn[i];
+ pTerm = findTerm(pWC, iCur, j, notReady, WO_EQ|WO_IN, pProbe);
+ if( pTerm==0 ) break;
+ flags |= WHERE_COLUMN_EQ;
+ if( pTerm->eOperator & WO_IN ){
+ Expr *pExpr = pTerm->pExpr;
+ flags |= WHERE_COLUMN_IN;
+ if( pExpr->pSelect!=0 ){
+ inMultiplier *= 25;
+ }else if( pExpr->pList!=0 ){
+ inMultiplier *= pExpr->pList->nExpr + 1;
+ }
+ }
+ }
+ cost = pProbe->aiRowEst[i] * inMultiplier * estLog(inMultiplier);
+ nEq = i;
+ if( pProbe->onError!=OE_None && (flags & WHERE_COLUMN_IN)==0
+ && nEq==pProbe->nColumn ){
+ flags |= WHERE_UNIQUE;
+ }
+ TRACE(("...... nEq=%d inMult=%.9g cost=%.9g\n", nEq, inMultiplier, cost));
+
+ /* Look for range constraints
+ */
+ if( nEq<pProbe->nColumn ){
+ int j = pProbe->aiColumn[nEq];
+ pTerm = findTerm(pWC, iCur, j, notReady, WO_LT|WO_LE|WO_GT|WO_GE, pProbe);
+ if( pTerm ){
+ flags |= WHERE_COLUMN_RANGE;
+ if( findTerm(pWC, iCur, j, notReady, WO_LT|WO_LE, pProbe) ){
+ flags |= WHERE_TOP_LIMIT;
+ cost /= 3;
+ }
+ if( findTerm(pWC, iCur, j, notReady, WO_GT|WO_GE, pProbe) ){
+ flags |= WHERE_BTM_LIMIT;
+ cost /= 3;
+ }
+ TRACE(("...... range reduces cost to %.9g\n", cost));
+ }
+ }
+
+ /* Add the additional cost of sorting if that is a factor.
+ */
+ if( pOrderBy ){
+ if( (flags & WHERE_COLUMN_IN)==0 &&
+ isSortingIndex(pParse,pProbe,iCur,pOrderBy,nEq,&rev) ){
+ if( flags==0 ){
+ flags = WHERE_COLUMN_RANGE;
+ }
+ flags |= WHERE_ORDERBY;
+ if( rev ){
+ flags |= WHERE_REVERSE;
+ }
+ }else{
+ cost += cost*estLog(cost);
+ TRACE(("...... orderby increases cost to %.9g\n", cost));
+ }
+ }
+
+ /* Check to see if we can get away with using just the index without
+ ** ever reading the table. If that is the case, then halve the
+ ** cost of this index.
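+    **
+    ** pSrc->colUsed has a bit set for each column of this table that the
+    ** query references (the high-order bit stands in for all columns past
+    ** the first BMS-1). If clearing the bits for every column in the index
+    ** leaves no bits set, the index covers the query and the table itself
+    ** never has to be opened.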
+ */
+ if( flags && pSrc->colUsed < (((Bitmask)1)<<(BMS-1)) ){
+ Bitmask m = pSrc->colUsed;
+ int j;
+ for(j=0; j<pProbe->nColumn; j++){
+ int x = pProbe->aiColumn[j];
+ if( x<BMS-1 ){
+ m &= ~(((Bitmask)1)<<x);
+ }
+ }
+ if( m==0 ){
+ flags |= WHERE_IDX_ONLY;
+ cost /= 2;
+ TRACE(("...... idx-only reduces cost to %.9g\n", cost));
+ }
+ }
+
+ /* If this index has achieved the lowest cost so far, then use it.
+ */
+ if( cost < lowestCost ){
+ bestIdx = pProbe;
+ lowestCost = cost;
+ assert( flags!=0 );
+ bestFlags = flags;
+ bestNEq = nEq;
+ }
+ }
+
+ /* Report the best result
+ */
+ *ppIndex = bestIdx;
+ TRACE(("best index is %s, cost=%.9g, flags=%x, nEq=%d\n",
+ bestIdx ? bestIdx->zName : "(none)", lowestCost, bestFlags, bestNEq));
+ *pFlags = bestFlags;
+ *pnEq = bestNEq;
+ return lowestCost;
+}
+
+
+/*
+** Disable a term in the WHERE clause. Except, do not disable the term
+** if it controls a LEFT OUTER JOIN and it did not originate in the ON
+** or USING clause of that join.
+**
+** Consider the term t2.z='ok' in the following queries:
+**
+** (1) SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.x WHERE t2.z='ok'
+** (2) SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.x AND t2.z='ok'
+** (3) SELECT * FROM t1, t2 WHERE t1.a=t2.x AND t2.z='ok'
+**
+** The term t2.z='ok' is disabled in (2) because it originates
+** in the ON clause. The term is disabled in (3) because it is not part
+** of a LEFT OUTER JOIN. In (1), the term is not disabled.
+**
+** Disabling a term causes that term to not be tested in the inner loop
+** of the join. Disabling is an optimization. When terms are satisfied
+** by indices, we disable them to prevent redundant tests in the inner
+** loop. We would get the correct results if nothing were ever disabled,
+** but joins might run a little slower. The trick is to disable as much
+** as we can without disabling too much. If we disabled in (1), we'd get
+** the wrong answer. See ticket #813.
+*/
+static void disableTerm(WhereLevel *pLevel, WhereTerm *pTerm){
+ if( pTerm
+ && (pTerm->flags & TERM_CODED)==0
+ && (pLevel->iLeftJoin==0 || ExprHasProperty(pTerm->pExpr, EP_FromJoin))
+ ){
+ pTerm->flags |= TERM_CODED;
+ if( pTerm->iParent>=0 ){
+ WhereTerm *pOther = &pTerm->pWC->a[pTerm->iParent];
+ if( (--pOther->nChild)==0 ){
+ disableTerm(pLevel, pOther);
+ }
+ }
+ }
+}
+
+/*
+** Generate code that builds a probe for an index. Details:
+**
+** * Check the top nColumn entries on the stack. If any
+** of those entries are NULL, jump immediately to brk,
+** which is the loop exit, since no index entry will match
+** if any part of the key is NULL. Pop (nColumn+nExtra)
+** elements from the stack.
+**
+** * Construct a probe entry from the top nColumn entries in
+** the stack with affinities appropriate for index pIdx.
+** Only nColumn elements are popped from the stack in this case
+** (by OP_MakeRecord).
+**
+*/
+static void buildIndexProbe(
+ Vdbe *v,
+ int nColumn,
+ int nExtra,
+ int brk,
+ Index *pIdx
+){
+ sqlite3VdbeAddOp(v, OP_NotNull, -nColumn, sqlite3VdbeCurrentAddr(v)+3);
+ sqlite3VdbeAddOp(v, OP_Pop, nColumn+nExtra, 0);
+ sqlite3VdbeAddOp(v, OP_Goto, 0, brk);
+ sqlite3VdbeAddOp(v, OP_MakeRecord, nColumn, 0);
+ sqlite3IndexAffinityStr(v, pIdx);
+}
+
+
+/*
+** Generate code for a single equality term of the WHERE clause. An equality
+** term can be either X=expr or X IN (...). pTerm is the term to be
+** coded.
+**
+** The current value for the constraint is left on the top of the stack.
+**
+** For a constraint of the form X=expr, the expression is evaluated and its
+** result is left on the stack. For constraints of the form X IN (...)
+** this routine sets up a loop that will iterate over all values of X.
+*/
+static void codeEqualityTerm(
+ Parse *pParse, /* The parsing context */
+ WhereTerm *pTerm, /* The term of the WHERE clause to be coded */
+ int brk, /* Jump here to abandon the loop */
+  WhereLevel *pLevel   /* Which level of the FROM clause we are working on */
+){
+ Expr *pX = pTerm->pExpr;
+ if( pX->op!=TK_IN ){
+ assert( pX->op==TK_EQ );
+ sqlite3ExprCode(pParse, pX->pRight);
+#ifndef SQLITE_OMIT_SUBQUERY
+ }else{
+ int iTab;
+ int *aIn;
+ Vdbe *v = pParse->pVdbe;
+
+ sqlite3CodeSubselect(pParse, pX);
+ iTab = pX->iTable;
+ sqlite3VdbeAddOp(v, OP_Rewind, iTab, 0);
+ VdbeComment((v, "# %.*s", pX->span.n, pX->span.z));
+ pLevel->nIn++;
+ sqliteReallocOrFree((void**)&pLevel->aInLoop,
+ sizeof(pLevel->aInLoop[0])*2*pLevel->nIn);
+ aIn = pLevel->aInLoop;
+ if( aIn ){
+ aIn += pLevel->nIn*2 - 2;
+ aIn[0] = iTab;
+ aIn[1] = sqlite3VdbeAddOp(v, OP_Column, iTab, 0);
+ }else{
+ pLevel->nIn = 0;
+ }
+#endif
+ }
+ disableTerm(pLevel, pTerm);
+}
+
+/*
+** Generate code that will evaluate all == and IN constraints for an
+** index. The values for all constraints are left on the stack.
+**
+** For example, consider table t1(a,b,c,d,e,f) with index i1(a,b,c).
+** Suppose the WHERE clause is this: a==5 AND b IN (1,2,3) AND c>5 AND c<10
+** The index has as many as three equality constraints, but in this
+** example, the third "c" value is an inequality. So only two
+** constraints are coded. This routine will generate code to evaluate
+** a==5 and b IN (1,2,3). The current values for a and b will be left
+** on the stack - a is the deepest and b the shallowest.
+**
+** In the example above nEq==2. But this subroutine works for any value
+** of nEq including 0. If nEq==0, this routine is nearly a no-op.
+** The only thing it does is allocate the pLevel->iMem memory cell.
+**
+** This routine always allocates at least one memory cell and puts
+** the address of that memory cell in pLevel->iMem. The code that
+** calls this routine will use pLevel->iMem to store the termination
+** key value of the loop. If one or more IN operators appear, then
+** this routine allocates an additional nEq memory cells for internal
+** use.
+*/
+static void codeAllEqualityTerms(
+ Parse *pParse, /* Parsing context */
+ WhereLevel *pLevel, /* Which nested loop of the FROM we are coding */
+ WhereClause *pWC, /* The WHERE clause */
+ Bitmask notReady, /* Which parts of FROM have not yet been coded */
+ int brk /* Jump here to end the loop */
+){
+ int nEq = pLevel->nEq; /* The number of == or IN constraints to code */
+ int termsInMem = 0; /* If true, store value in mem[] cells */
+ Vdbe *v = pParse->pVdbe; /* The virtual machine under construction */
+ Index *pIdx = pLevel->pIdx; /* The index being used for this loop */
+ int iCur = pLevel->iTabCur; /* The cursor of the table */
+ WhereTerm *pTerm; /* A single constraint term */
+ int j; /* Loop counter */
+
+ /* Figure out how many memory cells we will need then allocate them.
+  ** We always need at least one cell, used to store the loop terminator
+ ** value. If there are IN operators we'll need one for each == or
+ ** IN constraint.
+ */
+ pLevel->iMem = pParse->nMem++;
+ if( pLevel->flags & WHERE_COLUMN_IN ){
+ pParse->nMem += pLevel->nEq;
+ termsInMem = 1;
+ }
+
+ /* Evaluate the equality constraints
+ */
+ for(j=0; j<pIdx->nColumn; j++){
+ int k = pIdx->aiColumn[j];
+ pTerm = findTerm(pWC, iCur, k, notReady, WO_EQ|WO_IN, pIdx);
+ if( pTerm==0 ) break;
+ assert( (pTerm->flags & TERM_CODED)==0 );
+ codeEqualityTerm(pParse, pTerm, brk, pLevel);
+ if( termsInMem ){
+ sqlite3VdbeAddOp(v, OP_MemStore, pLevel->iMem+j+1, 1);
+ }
+ }
+ assert( j==nEq );
+
+ /* Make sure all the constraint values are on the top of the stack
+ */
+ if( termsInMem ){
+ for(j=0; j<nEq; j++){
+ sqlite3VdbeAddOp(v, OP_MemLoad, pLevel->iMem+j+1, 0);
+ }
+ }
+}
+
+#if defined(SQLITE_TEST)
+/*
+** The following variable holds a text description of the query plan generated
+** by the most recent call to sqlite3WhereBegin(). Each call to WhereBegin
+** overwrites the previous. This information is used for testing and
+** analysis only.
+*/
+char sqlite3_query_plan[BMS*2*40]; /* Text of the join */
+static int nQPlan = 0;          /* Next free slot in _query_plan[] */
+
+#endif /* SQLITE_TEST */
+
+
+/*
+** Free a WhereInfo structure
+*/
+static void whereInfoFree(WhereInfo *pWInfo){
+ if( pWInfo ){
+ int i;
+ for(i=0; i<pWInfo->nLevel; i++){
+ sqlite3_index_info *pInfo = pWInfo->a[i].pIdxInfo;
+ if( pInfo ){
+ if( pInfo->needToFreeIdxStr ){
+ sqlite3_free(pInfo->idxStr);
+ }
+ sqliteFree(pInfo);
+ }
+ }
+ sqliteFree(pWInfo);
+ }
+}
+
+
+/*
+** Generate the beginning of the loop used for WHERE clause processing.
+** The return value is a pointer to an opaque structure that contains
+** information needed to terminate the loop. Later, the calling routine
+** should invoke sqlite3WhereEnd() with the return value of this function
+** in order to complete the WHERE clause processing.
+**
+** If an error occurs, this routine returns NULL.
+**
+** The basic idea is to do a nested loop, one loop for each table in
+** the FROM clause of a select. (INSERT and UPDATE statements are the
+** same as a SELECT with only a single table in the FROM clause.) For
+** example, if the SQL is this:
+**
+** SELECT * FROM t1, t2, t3 WHERE ...;
+**
+** Then the code generated is conceptually like the following:
+**
+** foreach row1 in t1 do \ Code generated
+** foreach row2 in t2 do |-- by sqlite3WhereBegin()
+** foreach row3 in t3 do /
+** ...
+** end \ Code generated
+** end |-- by sqlite3WhereEnd()
+** end /
+**
+** Note that the loops might not be nested in the order in which they
+** appear in the FROM clause if a different order is better able to make
+** use of indices. Note also that when the IN operator appears in
+** the WHERE clause, it might result in additional nested loops for
+** scanning through all values on the right-hand side of the IN.
+**
+** There are Btree cursors associated with each table. t1 uses cursor
+** number pTabList->a[0].iCursor. t2 uses the cursor pTabList->a[1].iCursor.
+** And so forth. This routine generates code to open those VDBE cursors
+** and sqlite3WhereEnd() generates the code to close them.
+**
+** The code that sqlite3WhereBegin() generates leaves the cursors named
+** in pTabList pointing at their appropriate entries. The [...] code
+** can use OP_Column and OP_Rowid opcodes on these cursors to extract
+** data from the various tables of the loop.
+**
+** If the WHERE clause is empty, the foreach loops must each scan their
+** entire tables. Thus a three-way join is an O(N^3) operation. But if
+** the tables have indices and there are terms in the WHERE clause that
+** refer to those indices, a complete table scan can be avoided and the
+** code will run much faster. Most of the work of this routine is checking
+** to see if there are indices that can be used to speed up the loop.
+**
+** Terms of the WHERE clause are also used to limit which rows actually
+** make it to the "..." in the middle of the loop. After each "foreach",
+** terms of the WHERE clause that use only terms in that loop and outer
+** loops are evaluated and if false a jump is made around all subsequent
+** inner loops (or around the "..." if the test occurs within the inner-
+** most loop)
+**
+** OUTER JOINS
+**
+** An outer join of tables t1 and t2 is conceptually coded as follows:
+**
+** foreach row1 in t1 do
+** flag = 0
+** foreach row2 in t2 do
+** start:
+** ...
+** flag = 1
+** end
+** if flag==0 then
+** move the row2 cursor to a null row
+** goto start
+** fi
+** end
+**
+** ORDER BY CLAUSE PROCESSING
+**
+** *ppOrderBy is a pointer to the ORDER BY clause of a SELECT statement,
+** if there is one. If there is no ORDER BY clause or if this routine
+** is called from an UPDATE or DELETE statement, then ppOrderBy is NULL.
+**
+** If an index can be used so that the natural output order of the table
+** scan is correct for the ORDER BY clause, then that index is used and
+** *ppOrderBy is set to NULL. This is an optimization that prevents an
+** unnecessary sort of the result set if an index appropriate for the
+** ORDER BY clause already exists.
+**
+** If the WHERE clause loops cannot be arranged to provide the correct
+** output order, then the *ppOrderBy is unchanged.
+*/
+WhereInfo *sqlite3WhereBegin(
+ Parse *pParse, /* The parser context */
+ SrcList *pTabList, /* A list of all tables to be scanned */
+ Expr *pWhere, /* The WHERE clause */
+ ExprList **ppOrderBy /* An ORDER BY clause, or NULL */
+){
+ int i; /* Loop counter */
+ WhereInfo *pWInfo; /* Will become the return value of this function */
+ Vdbe *v = pParse->pVdbe; /* The virtual database engine */
+ int brk, cont = 0; /* Addresses used during code generation */
+ Bitmask notReady; /* Cursors that are not yet positioned */
+ WhereTerm *pTerm; /* A single term in the WHERE clause */
+ ExprMaskSet maskSet; /* The expression mask set */
+ WhereClause wc; /* The WHERE clause is divided into these terms */
+ struct SrcList_item *pTabItem; /* A single entry from pTabList */
+ WhereLevel *pLevel; /* A single level in the pWInfo list */
+ int iFrom; /* First unused FROM clause element */
+ int andFlags; /* AND-ed combination of all wc.a[].flags */
+
+ /* The number of tables in the FROM clause is limited by the number of
+ ** bits in a Bitmask
+ */
+ if( pTabList->nSrc>BMS ){
+ sqlite3ErrorMsg(pParse, "at most %d tables in a join", BMS);
+ return 0;
+ }
+
+ /* Split the WHERE clause into separate subexpressions where each
+ ** subexpression is separated by an AND operator.
+ */
+ initMaskSet(&maskSet);
+ whereClauseInit(&wc, pParse);
+ whereSplit(&wc, pWhere, TK_AND);
+
+ /* Allocate and initialize the WhereInfo structure that will become the
+ ** return value.
+ */
+ pWInfo = sqliteMalloc( sizeof(WhereInfo) + pTabList->nSrc*sizeof(WhereLevel));
+ if( sqlite3MallocFailed() ){
+ goto whereBeginNoMem;
+ }
+ pWInfo->nLevel = pTabList->nSrc;
+ pWInfo->pParse = pParse;
+ pWInfo->pTabList = pTabList;
+ pWInfo->iBreak = sqlite3VdbeMakeLabel(v);
+
+ /* Special case: a WHERE clause that is constant. Evaluate the
+ ** expression and either jump over all of the code or fall thru.
+ */
+ if( pWhere && (pTabList->nSrc==0 || sqlite3ExprIsConstant(pWhere)) ){
+ sqlite3ExprIfFalse(pParse, pWhere, pWInfo->iBreak, 1);
+ pWhere = 0;
+ }
+
+ /* Analyze all of the subexpressions. Note that exprAnalyze() might
+ ** add new virtual terms onto the end of the WHERE clause. We do not
+ ** want to analyze these virtual terms, so start analyzing at the end
+ ** and work forward so that the added virtual terms are never processed.
+ */
+ for(i=0; i<pTabList->nSrc; i++){
+ createMask(&maskSet, pTabList->a[i].iCursor);
+ }
+ exprAnalyzeAll(pTabList, &maskSet, &wc);
+ if( sqlite3MallocFailed() ){
+ goto whereBeginNoMem;
+ }
+
+  /* Choose the best index to use for each table in the FROM clause.
+ **
+ ** This loop fills in the following fields:
+ **
+ ** pWInfo->a[].pIdx The index to use for this level of the loop.
+ ** pWInfo->a[].flags WHERE_xxx flags associated with pIdx
+ ** pWInfo->a[].nEq The number of == and IN constraints
+  **   pWInfo->a[].iFrom     Which term of the FROM clause is being coded
+ ** pWInfo->a[].iTabCur The VDBE cursor for the database table
+ ** pWInfo->a[].iIdxCur The VDBE cursor for the index
+ **
+ ** This loop also figures out the nesting order of tables in the FROM
+ ** clause.
+ */
+ notReady = ~(Bitmask)0;
+ pTabItem = pTabList->a;
+ pLevel = pWInfo->a;
+ andFlags = ~0;
+ TRACE(("*** Optimizer Start ***\n"));
+ for(i=iFrom=0, pLevel=pWInfo->a; i<pTabList->nSrc; i++, pLevel++){
+ Index *pIdx; /* Index for FROM table at pTabItem */
+    int flags;            /* Flags associated with pIdx */
+ int nEq; /* Number of == or IN constraints */
+ double cost; /* The cost for pIdx */
+ int j; /* For looping over FROM tables */
+ Index *pBest = 0; /* The best index seen so far */
+ int bestFlags = 0; /* Flags associated with pBest */
+ int bestNEq = 0; /* nEq associated with pBest */
+ double lowestCost; /* Cost of the pBest */
+ int bestJ = 0; /* The value of j */
+ Bitmask m; /* Bitmask value for j or bestJ */
+ int once = 0; /* True when first table is seen */
+ sqlite3_index_info *pIndex; /* Current virtual index */
+
+ lowestCost = SQLITE_BIG_DBL;
+ for(j=iFrom, pTabItem=&pTabList->a[j]; j<pTabList->nSrc; j++, pTabItem++){
+ int doNotReorder; /* True if this table should not be reordered */
+
+ doNotReorder = (pTabItem->jointype & (JT_LEFT|JT_CROSS))!=0
+ || (j>0 && (pTabItem[-1].jointype & (JT_LEFT|JT_CROSS))!=0);
+ if( once && doNotReorder ) break;
+ m = getMask(&maskSet, pTabItem->iCursor);
+ if( (m & notReady)==0 ){
+ if( j==iFrom ) iFrom++;
+ continue;
+ }
+ assert( pTabItem->pTab );
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( IsVirtual(pTabItem->pTab) ){
+ sqlite3_index_info **ppIdxInfo = &pWInfo->a[j].pIdxInfo;
+ cost = bestVirtualIndex(pParse, &wc, pTabItem, notReady,
+ ppOrderBy ? *ppOrderBy : 0, i==0,
+ ppIdxInfo);
+ flags = WHERE_VIRTUALTABLE;
+ pIndex = *ppIdxInfo;
+ if( pIndex && pIndex->orderByConsumed ){
+ flags = WHERE_VIRTUALTABLE | WHERE_ORDERBY;
+ }
+ pIdx = 0;
+ nEq = 0;
+ }else
+#endif
+ {
+ cost = bestIndex(pParse, &wc, pTabItem, notReady,
+ (i==0 && ppOrderBy) ? *ppOrderBy : 0,
+ &pIdx, &flags, &nEq);
+ pIndex = 0;
+ }
+ if( cost<lowestCost ){
+ once = 1;
+ lowestCost = cost;
+ pBest = pIdx;
+ bestFlags = flags;
+ bestNEq = nEq;
+ bestJ = j;
+ pLevel->pBestIdx = pIndex;
+ }
+ if( doNotReorder ) break;
+ }
+ TRACE(("*** Optimizer choose table %d for loop %d\n", bestJ,
+ pLevel-pWInfo->a));
+ if( (bestFlags & WHERE_ORDERBY)!=0 ){
+ *ppOrderBy = 0;
+ }
+ andFlags &= bestFlags;
+ pLevel->flags = bestFlags;
+ pLevel->pIdx = pBest;
+ pLevel->nEq = bestNEq;
+ pLevel->aInLoop = 0;
+ pLevel->nIn = 0;
+ if( pBest ){
+ pLevel->iIdxCur = pParse->nTab++;
+ }else{
+ pLevel->iIdxCur = -1;
+ }
+ notReady &= ~getMask(&maskSet, pTabList->a[bestJ].iCursor);
+ pLevel->iFrom = bestJ;
+ }
+ TRACE(("*** Optimizer Finished ***\n"));
+
+ /* If the total query only selects a single row, then the ORDER BY
+ ** clause is irrelevant.
+ */
+ if( (andFlags & WHERE_UNIQUE)!=0 && ppOrderBy ){
+ *ppOrderBy = 0;
+ }
+
+ /* Open all tables in the pTabList and any indices selected for
+ ** searching those tables.
+ */
+ sqlite3CodeVerifySchema(pParse, -1); /* Insert the cookie verifier Goto */
+ for(i=0, pLevel=pWInfo->a; i<pTabList->nSrc; i++, pLevel++){
+ Table *pTab; /* Table to open */
+ Index *pIx; /* Index used to access pTab (if any) */
+ int iDb; /* Index of database containing table/index */
+ int iIdxCur = pLevel->iIdxCur;
+
+#ifndef SQLITE_OMIT_EXPLAIN
+ if( pParse->explain==2 ){
+ char *zMsg;
+ struct SrcList_item *pItem = &pTabList->a[pLevel->iFrom];
+ zMsg = sqlite3MPrintf("TABLE %s", pItem->zName);
+ if( pItem->zAlias ){
+ zMsg = sqlite3MPrintf("%z AS %s", zMsg, pItem->zAlias);
+ }
+ if( (pIx = pLevel->pIdx)!=0 ){
+ zMsg = sqlite3MPrintf("%z WITH INDEX %s", zMsg, pIx->zName);
+ }else if( pLevel->flags & (WHERE_ROWID_EQ|WHERE_ROWID_RANGE) ){
+ zMsg = sqlite3MPrintf("%z USING PRIMARY KEY", zMsg);
+ }
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ else if( pLevel->pBestIdx ){
+ sqlite3_index_info *pBestIdx = pLevel->pBestIdx;
+ zMsg = sqlite3MPrintf("%z VIRTUAL TABLE INDEX %d:%s", zMsg,
+ pBestIdx->idxNum, pBestIdx->idxStr);
+ }
+#endif
+ if( pLevel->flags & WHERE_ORDERBY ){
+ zMsg = sqlite3MPrintf("%z ORDER BY", zMsg);
+ }
+ sqlite3VdbeOp3(v, OP_Explain, i, pLevel->iFrom, zMsg, P3_DYNAMIC);
+ }
+#endif /* SQLITE_OMIT_EXPLAIN */
+ pTabItem = &pTabList->a[pLevel->iFrom];
+ pTab = pTabItem->pTab;
+ iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
+ if( pTab->isEphem || pTab->pSelect ) continue;
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( pLevel->pBestIdx ){
+ int iCur = pTabItem->iCursor;
+ sqlite3VdbeOp3(v, OP_VOpen, iCur, 0, (const char*)pTab->pVtab, P3_VTAB);
+ }else
+#endif
+ if( (pLevel->flags & WHERE_IDX_ONLY)==0 ){
+ sqlite3OpenTable(pParse, pTabItem->iCursor, iDb, pTab, OP_OpenRead);
+ if( pTab->nCol<(sizeof(Bitmask)*8) ){
+ Bitmask b = pTabItem->colUsed;
+ int n = 0;
+ for(; b; b=b>>1, n++){}
+ sqlite3VdbeChangeP2(v, sqlite3VdbeCurrentAddr(v)-1, n);
+ assert( n<=pTab->nCol );
+ }
+ }else{
+ sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName);
+ }
+ pLevel->iTabCur = pTabItem->iCursor;
+ if( (pIx = pLevel->pIdx)!=0 ){
+ KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIx);
+ assert( pIx->pSchema==pTab->pSchema );
+ sqlite3VdbeAddOp(v, OP_Integer, iDb, 0);
+ VdbeComment((v, "# %s", pIx->zName));
+ sqlite3VdbeOp3(v, OP_OpenRead, iIdxCur, pIx->tnum,
+ (char*)pKey, P3_KEYINFO_HANDOFF);
+ }
+ if( (pLevel->flags & WHERE_IDX_ONLY)!=0 ){
+ sqlite3VdbeAddOp(v, OP_SetNumColumns, iIdxCur, pIx->nColumn+1);
+ }
+ sqlite3CodeVerifySchema(pParse, iDb);
+ }
+ pWInfo->iTop = sqlite3VdbeCurrentAddr(v);
+
+ /* Generate the code to do the search. Each iteration of the for
+ ** loop below generates code for a single nested loop of the VM
+ ** program.
+ */
+ notReady = ~(Bitmask)0;
+ for(i=0, pLevel=pWInfo->a; i<pTabList->nSrc; i++, pLevel++){
+ int j;
+ int iCur = pTabItem->iCursor; /* The VDBE cursor for the table */
+ Index *pIdx; /* The index we will be using */
+ int iIdxCur; /* The VDBE cursor for the index */
+ int omitTable; /* True if we use the index only */
+ int bRev; /* True if we need to scan in reverse order */
+
+ pTabItem = &pTabList->a[pLevel->iFrom];
+ iCur = pTabItem->iCursor;
+ pIdx = pLevel->pIdx;
+ iIdxCur = pLevel->iIdxCur;
+ bRev = (pLevel->flags & WHERE_REVERSE)!=0;
+ omitTable = (pLevel->flags & WHERE_IDX_ONLY)!=0;
+
+ /* Create labels for the "break" and "continue" instructions
+ ** for the current loop. Jump to brk to break out of a loop.
+ ** Jump to cont to go immediately to the next iteration of the
+ ** loop.
+ */
+ brk = pLevel->brk = sqlite3VdbeMakeLabel(v);
+ cont = pLevel->cont = sqlite3VdbeMakeLabel(v);
+
+ /* If this is the right table of a LEFT OUTER JOIN, allocate and
+ ** initialize a memory cell that records if this table matches any
+ ** row of the left table of the join.
+ */
+ if( pLevel->iFrom>0 && (pTabItem[-1].jointype & JT_LEFT)!=0 ){
+ if( !pParse->nMem ) pParse->nMem++;
+ pLevel->iLeftJoin = pParse->nMem++;
+ sqlite3VdbeAddOp(v, OP_MemInt, 0, pLevel->iLeftJoin);
+ VdbeComment((v, "# init LEFT JOIN no-match flag"));
+ }
+
+#ifndef SQLITE_OMIT_VIRTUALTABLE
+ if( pLevel->pBestIdx ){
+ /* Case 0: The table is a virtual-table. Use the VFilter and VNext
+      ** opcodes to access the data.
+ */
+ int j;
+ sqlite3_index_info *pBestIdx = pLevel->pBestIdx;
+ int nConstraint = pBestIdx->nConstraint;
+ struct sqlite3_index_constraint_usage *aUsage =
+ pBestIdx->aConstraintUsage;
+ const struct sqlite3_index_constraint *aConstraint =
+ pBestIdx->aConstraint;
+
+ for(j=1; j<=nConstraint; j++){
+ int k;
+ for(k=0; k<nConstraint; k++){
+ if( aUsage[k].argvIndex==j ){
+ int iTerm = aConstraint[k].iTermOffset;
+ sqlite3ExprCode(pParse, wc.a[iTerm].pExpr->pRight);
+ break;
+ }
+ }
+ if( k==nConstraint ) break;
+ }
+ sqlite3VdbeAddOp(v, OP_Integer, j-1, 0);
+ sqlite3VdbeAddOp(v, OP_Integer, pBestIdx->idxNum, 0);
+ sqlite3VdbeOp3(v, OP_VFilter, iCur, brk, pBestIdx->idxStr,
+ pBestIdx->needToFreeIdxStr ? P3_MPRINTF : P3_STATIC);
+ pBestIdx->needToFreeIdxStr = 0;
+ for(j=0; j<pBestIdx->nConstraint; j++){
+ if( aUsage[j].omit ){
+ int iTerm = aConstraint[j].iTermOffset;
+ disableTerm(pLevel, &wc.a[iTerm]);
+ }
+ }
+ pLevel->op = OP_VNext;
+ pLevel->p1 = iCur;
+ pLevel->p2 = sqlite3VdbeCurrentAddr(v);
+ }else
+#endif /* SQLITE_OMIT_VIRTUALTABLE */
+
+ if( pLevel->flags & WHERE_ROWID_EQ ){
+ /* Case 1: We can directly reference a single row using an
+ ** equality comparison against the ROWID field. Or
+ ** we reference multiple rows using a "rowid IN (...)"
+ ** construct.
+ */
+ pTerm = findTerm(&wc, iCur, -1, notReady, WO_EQ|WO_IN, 0);
+ assert( pTerm!=0 );
+ assert( pTerm->pExpr!=0 );
+ assert( pTerm->leftCursor==iCur );
+ assert( omitTable==0 );
+ codeEqualityTerm(pParse, pTerm, brk, pLevel);
+ sqlite3VdbeAddOp(v, OP_MustBeInt, 1, brk);
+ sqlite3VdbeAddOp(v, OP_NotExists, iCur, brk);
+ VdbeComment((v, "pk"));
+ pLevel->op = OP_Noop;
+ }else if( pLevel->flags & WHERE_ROWID_RANGE ){
+ /* Case 2: We have an inequality comparison against the ROWID field.
+ */
+ int testOp = OP_Noop;
+ int start;
+ WhereTerm *pStart, *pEnd;
+
+ assert( omitTable==0 );
+ pStart = findTerm(&wc, iCur, -1, notReady, WO_GT|WO_GE, 0);
+ pEnd = findTerm(&wc, iCur, -1, notReady, WO_LT|WO_LE, 0);
+ if( bRev ){
+ pTerm = pStart;
+ pStart = pEnd;
+ pEnd = pTerm;
+ }
+ if( pStart ){
+ Expr *pX;
+ pX = pStart->pExpr;
+ assert( pX!=0 );
+ assert( pStart->leftCursor==iCur );
+ sqlite3ExprCode(pParse, pX->pRight);
+ sqlite3VdbeAddOp(v, OP_ForceInt, pX->op==TK_LE || pX->op==TK_GT, brk);
+ sqlite3VdbeAddOp(v, bRev ? OP_MoveLt : OP_MoveGe, iCur, brk);
+ VdbeComment((v, "pk"));
+ disableTerm(pLevel, pStart);
+ }else{
+ sqlite3VdbeAddOp(v, bRev ? OP_Last : OP_Rewind, iCur, brk);
+ }
+ if( pEnd ){
+ Expr *pX;
+ pX = pEnd->pExpr;
+ assert( pX!=0 );
+ assert( pEnd->leftCursor==iCur );
+ sqlite3ExprCode(pParse, pX->pRight);
+ pLevel->iMem = pParse->nMem++;
+ sqlite3VdbeAddOp(v, OP_MemStore, pLevel->iMem, 1);
+ if( pX->op==TK_LT || pX->op==TK_GT ){
+ testOp = bRev ? OP_Le : OP_Ge;
+ }else{
+ testOp = bRev ? OP_Lt : OP_Gt;
+ }
+ disableTerm(pLevel, pEnd);
+ }
+ start = sqlite3VdbeCurrentAddr(v);
+ pLevel->op = bRev ? OP_Prev : OP_Next;
+ pLevel->p1 = iCur;
+ pLevel->p2 = start;
+ if( testOp!=OP_Noop ){
+ sqlite3VdbeAddOp(v, OP_Rowid, iCur, 0);
+ sqlite3VdbeAddOp(v, OP_MemLoad, pLevel->iMem, 0);
+ sqlite3VdbeAddOp(v, testOp, SQLITE_AFF_NUMERIC, brk);
+ }
+ }else if( pLevel->flags & WHERE_COLUMN_RANGE ){
+ /* Case 3: The WHERE clause term that refers to the right-most
+ ** column of the index is an inequality. For example, if
+ ** the index is on (x,y,z) and the WHERE clause is of the
+ ** form "x=5 AND y<10" then this case is used. Only the
+ ** right-most column can be an inequality - the rest must
+ ** use the "==" and "IN" operators.
+ **
+ ** This case is also used when there are no WHERE clause
+ ** constraints but an index is selected anyway, in order
+ ** to force the output order to conform to an ORDER BY.
+ */
+ int start;
+ int nEq = pLevel->nEq;
+      int topEq=0;        /* True if top limit uses ==. False if strictly < */
+ int btmEq=0; /* True if btm limit uses ==. False if strictly > */
+ int topOp, btmOp; /* Operators for the top and bottom search bounds */
+ int testOp;
+      int nNotNull;       /* Number of index columns that must be non-NULL */
+ int topLimit = (pLevel->flags & WHERE_TOP_LIMIT)!=0;
+ int btmLimit = (pLevel->flags & WHERE_BTM_LIMIT)!=0;
+
+ /* Generate code to evaluate all constraint terms using == or IN
+      ** and leave the values of those terms on the stack.
+ */
+ codeAllEqualityTerms(pParse, pLevel, &wc, notReady, brk);
+
+ /* Duplicate the equality term values because they will all be
+ ** used twice: once to make the termination key and once to make the
+ ** start key.
+ */
+ for(j=0; j<nEq; j++){
+ sqlite3VdbeAddOp(v, OP_Dup, nEq-1, 0);
+ }
+
+ /* Figure out what comparison operators to use for top and bottom
+ ** search bounds. For an ascending index, the bottom bound is a > or >=
+ ** operator and the top bound is a < or <= operator. For a descending
+ ** index the operators are reversed.
+ */
+ nNotNull = nEq + topLimit;
+ if( pIdx->aSortOrder[nEq]==SQLITE_SO_ASC ){
+ topOp = WO_LT|WO_LE;
+ btmOp = WO_GT|WO_GE;
+ }else{
+ topOp = WO_GT|WO_GE;
+ btmOp = WO_LT|WO_LE;
+ SWAP(int, topLimit, btmLimit);
+ }
+
+ /* Generate the termination key. This is the key value that
+ ** will end the search. There is no termination key if there
+ ** are no equality terms and no "X<..." term.
+ **
+ ** 2002-Dec-04: On a reverse-order scan, the so-called "termination"
+ ** key computed here really ends up being the start key.
+ */
+ if( topLimit ){
+ Expr *pX;
+ int k = pIdx->aiColumn[j];
+ pTerm = findTerm(&wc, iCur, k, notReady, topOp, pIdx);
+ assert( pTerm!=0 );
+ pX = pTerm->pExpr;
+ assert( (pTerm->flags & TERM_CODED)==0 );
+ sqlite3ExprCode(pParse, pX->pRight);
+ topEq = pTerm->eOperator & (WO_LE|WO_GE);
+ disableTerm(pLevel, pTerm);
+ testOp = OP_IdxGE;
+ }else{
+ testOp = nEq>0 ? OP_IdxGE : OP_Noop;
+ topEq = 1;
+ }
+ if( testOp!=OP_Noop ){
+ int nCol = nEq + topLimit;
+ pLevel->iMem = pParse->nMem++;
+ buildIndexProbe(v, nCol, nEq, brk, pIdx);
+ if( bRev ){
+ int op = topEq ? OP_MoveLe : OP_MoveLt;
+ sqlite3VdbeAddOp(v, op, iIdxCur, brk);
+ }else{
+ sqlite3VdbeAddOp(v, OP_MemStore, pLevel->iMem, 1);
+ }
+ }else if( bRev ){
+ sqlite3VdbeAddOp(v, OP_Last, iIdxCur, brk);
+ }
+
+ /* Generate the start key. This is the key that defines the lower
+ ** bound on the search. There is no start key if there are no
+ ** equality terms and if there is no "X>..." term. In
+ ** that case, generate a "Rewind" instruction in place of the
+ ** start key search.
+ **
+ ** 2002-Dec-04: In the case of a reverse-order search, the so-called
+ ** "start" key really ends up being used as the termination key.
+ */
+ if( btmLimit ){
+ Expr *pX;
+ int k = pIdx->aiColumn[j];
+ pTerm = findTerm(&wc, iCur, k, notReady, btmOp, pIdx);
+ assert( pTerm!=0 );
+ pX = pTerm->pExpr;
+ assert( (pTerm->flags & TERM_CODED)==0 );
+ sqlite3ExprCode(pParse, pX->pRight);
+ btmEq = pTerm->eOperator & (WO_LE|WO_GE);
+ disableTerm(pLevel, pTerm);
+ }else{
+ btmEq = 1;
+ }
+ if( nEq>0 || btmLimit ){
+ int nCol = nEq + btmLimit;
+ buildIndexProbe(v, nCol, 0, brk, pIdx);
+ if( bRev ){
+ pLevel->iMem = pParse->nMem++;
+ sqlite3VdbeAddOp(v, OP_MemStore, pLevel->iMem, 1);
+ testOp = OP_IdxLT;
+ }else{
+ int op = btmEq ? OP_MoveGe : OP_MoveGt;
+ sqlite3VdbeAddOp(v, op, iIdxCur, brk);
+ }
+ }else if( bRev ){
+ testOp = OP_Noop;
+ }else{
+ sqlite3VdbeAddOp(v, OP_Rewind, iIdxCur, brk);
+ }
+
+      /* Generate the top of the loop. If there is a termination
+ ** key we have to test for that key and abort at the top of the
+ ** loop.
+ */
+ start = sqlite3VdbeCurrentAddr(v);
+ if( testOp!=OP_Noop ){
+ sqlite3VdbeAddOp(v, OP_MemLoad, pLevel->iMem, 0);
+ sqlite3VdbeAddOp(v, testOp, iIdxCur, brk);
+ if( (topEq && !bRev) || (!btmEq && bRev) ){
+ sqlite3VdbeChangeP3(v, -1, "+", P3_STATIC);
+ }
+ }
+ sqlite3VdbeAddOp(v, OP_RowKey, iIdxCur, 0);
+ sqlite3VdbeAddOp(v, OP_IdxIsNull, nNotNull, cont);
+ if( !omitTable ){
+ sqlite3VdbeAddOp(v, OP_IdxRowid, iIdxCur, 0);
+ sqlite3VdbeAddOp(v, OP_MoveGe, iCur, 0);
+ }
+
+ /* Record the instruction used to terminate the loop.
+ */
+ pLevel->op = bRev ? OP_Prev : OP_Next;
+ pLevel->p1 = iIdxCur;
+ pLevel->p2 = start;
+ }else if( pLevel->flags & WHERE_COLUMN_EQ ){
+ /* Case 4: There is an index and all terms of the WHERE clause that
+ ** refer to the index using the "==" or "IN" operators.
+      ** refer to the index use the "==" or "IN" operators.
+ int start;
+ int nEq = pLevel->nEq;
+
+ /* Generate code to evaluate all constraint terms using == or IN
+ ** and leave the values of those terms on the stack.
+ */
+ codeAllEqualityTerms(pParse, pLevel, &wc, notReady, brk);
+
+ /* Generate a single key that will be used to both start and terminate
+ ** the search
+ */
+ buildIndexProbe(v, nEq, 0, brk, pIdx);
+ sqlite3VdbeAddOp(v, OP_MemStore, pLevel->iMem, 0);
+
+ /* Generate code (1) to move to the first matching element of the table.
+ ** Then generate code (2) that jumps to "brk" after the cursor is past
+ ** the last matching element of the table. The code (1) is executed
+ ** once to initialize the search, the code (2) is executed before each
+ ** iteration of the scan to see if the scan has finished. */
+ if( bRev ){
+ /* Scan in reverse order */
+ sqlite3VdbeAddOp(v, OP_MoveLe, iIdxCur, brk);
+ start = sqlite3VdbeAddOp(v, OP_MemLoad, pLevel->iMem, 0);
+ sqlite3VdbeAddOp(v, OP_IdxLT, iIdxCur, brk);
+ pLevel->op = OP_Prev;
+ }else{
+ /* Scan in the forward order */
+ sqlite3VdbeAddOp(v, OP_MoveGe, iIdxCur, brk);
+ start = sqlite3VdbeAddOp(v, OP_MemLoad, pLevel->iMem, 0);
+ sqlite3VdbeOp3(v, OP_IdxGE, iIdxCur, brk, "+", P3_STATIC);
+ pLevel->op = OP_Next;
+ }
+ sqlite3VdbeAddOp(v, OP_RowKey, iIdxCur, 0);
+ sqlite3VdbeAddOp(v, OP_IdxIsNull, nEq, cont);
+ if( !omitTable ){
+ sqlite3VdbeAddOp(v, OP_IdxRowid, iIdxCur, 0);
+ sqlite3VdbeAddOp(v, OP_MoveGe, iCur, 0);
+ }
+ pLevel->p1 = iIdxCur;
+ pLevel->p2 = start;
+ }else{
+ /* Case 5: There is no usable index. We must do a complete
+ ** scan of the entire table.
+ */
+ assert( omitTable==0 );
+ assert( bRev==0 );
+ pLevel->op = OP_Next;
+ pLevel->p1 = iCur;
+ pLevel->p2 = 1 + sqlite3VdbeAddOp(v, OP_Rewind, iCur, brk);
+ }
+ notReady &= ~getMask(&maskSet, iCur);
+
+ /* Insert code to test every subexpression that can be completely
+ ** computed using the current set of tables.
+ */
+ for(pTerm=wc.a, j=wc.nTerm; j>0; j--, pTerm++){
+ Expr *pE;
+ if( pTerm->flags & (TERM_VIRTUAL|TERM_CODED) ) continue;
+ if( (pTerm->prereqAll & notReady)!=0 ) continue;
+ pE = pTerm->pExpr;
+ assert( pE!=0 );
+ if( pLevel->iLeftJoin && !ExprHasProperty(pE, EP_FromJoin) ){
+ continue;
+ }
+ sqlite3ExprIfFalse(pParse, pE, cont, 1);
+ pTerm->flags |= TERM_CODED;
+ }
+
+ /* For a LEFT OUTER JOIN, generate code that will record the fact that
+ ** at least one row of the right table has matched the left table.
+ */
+ if( pLevel->iLeftJoin ){
+ pLevel->top = sqlite3VdbeCurrentAddr(v);
+ sqlite3VdbeAddOp(v, OP_MemInt, 1, pLevel->iLeftJoin);
+ VdbeComment((v, "# record LEFT JOIN hit"));
+ for(pTerm=wc.a, j=0; j<wc.nTerm; j++, pTerm++){
+ if( pTerm->flags & (TERM_VIRTUAL|TERM_CODED) ) continue;
+ if( (pTerm->prereqAll & notReady)!=0 ) continue;
+ assert( pTerm->pExpr );
+ sqlite3ExprIfFalse(pParse, pTerm->pExpr, cont, 1);
+ pTerm->flags |= TERM_CODED;
+ }
+ }
+ }
+
+#ifdef SQLITE_TEST /* For testing and debugging use only */
+ /* Record in the query plan information about the current table
+ ** and the index used to access it (if any). If the table itself
+ ** is not used, its name is just '{}'. If no index is used
+ ** the index is listed as '{}'. If the primary key is used the
+ ** index name is '*'.
+ */
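+ /* As an illustration (with hypothetical names): a join that scans
+ ** table t1 through index i1 and then looks up table t2 by rowid
+ ** would leave sqlite3_query_plan holding the string "t1 i1 t2 *".
+ */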
+ for(i=0; i<pTabList->nSrc; i++){
+ char *z;
+ int n;
+ pLevel = &pWInfo->a[i];
+ pTabItem = &pTabList->a[pLevel->iFrom];
+ z = pTabItem->zAlias;
+ if( z==0 ) z = pTabItem->pTab->zName;
+ n = strlen(z);
+ if( n+nQPlan < sizeof(sqlite3_query_plan)-10 ){
+ if( pLevel->flags & WHERE_IDX_ONLY ){
+ strcpy(&sqlite3_query_plan[nQPlan], "{}");
+ nQPlan += 2;
+ }else{
+ strcpy(&sqlite3_query_plan[nQPlan], z);
+ nQPlan += n;
+ }
+ sqlite3_query_plan[nQPlan++] = ' ';
+ }
+ if( pLevel->flags & (WHERE_ROWID_EQ|WHERE_ROWID_RANGE) ){
+ strcpy(&sqlite3_query_plan[nQPlan], "* ");
+ nQPlan += 2;
+ }else if( pLevel->pIdx==0 ){
+ strcpy(&sqlite3_query_plan[nQPlan], "{} ");
+ nQPlan += 3;
+ }else{
+ n = strlen(pLevel->pIdx->zName);
+ if( n+nQPlan < sizeof(sqlite3_query_plan)-2 ){
+ strcpy(&sqlite3_query_plan[nQPlan], pLevel->pIdx->zName);
+ nQPlan += n;
+ sqlite3_query_plan[nQPlan++] = ' ';
+ }
+ }
+ }
+ while( nQPlan>0 && sqlite3_query_plan[nQPlan-1]==' ' ){
+ sqlite3_query_plan[--nQPlan] = 0;
+ }
+ sqlite3_query_plan[nQPlan] = 0;
+ nQPlan = 0;
+#endif /* SQLITE_TEST // Testing and debugging use only */
+
+ /* Record the continuation address in the WhereInfo structure. Then
+ ** clean up and return.
+ */
+ pWInfo->iContinue = cont;
+ whereClauseClear(&wc);
+ return pWInfo;
+
+ /* Jump here if malloc fails */
+whereBeginNoMem:
+ whereClauseClear(&wc);
+ whereInfoFree(pWInfo);
+ return 0;
+}
+
+/*
+** Generate the end of the WHERE loop. See comments on
+** sqlite3WhereBegin() for additional information.
+*/
+void sqlite3WhereEnd(WhereInfo *pWInfo){
+ Vdbe *v = pWInfo->pParse->pVdbe;
+ int i;
+ WhereLevel *pLevel;
+ SrcList *pTabList = pWInfo->pTabList;
+
+ /* Generate loop termination code.
+ */
+ for(i=pTabList->nSrc-1; i>=0; i--){
+ pLevel = &pWInfo->a[i];
+ sqlite3VdbeResolveLabel(v, pLevel->cont);
+ if( pLevel->op!=OP_Noop ){
+ sqlite3VdbeAddOp(v, pLevel->op, pLevel->p1, pLevel->p2);
+ }
+ sqlite3VdbeResolveLabel(v, pLevel->brk);
+ if( pLevel->nIn ){
+ int *a;
+ int j;
+ for(j=pLevel->nIn, a=&pLevel->aInLoop[j*2-2]; j>0; j--, a-=2){
+ sqlite3VdbeAddOp(v, OP_Next, a[0], a[1]);
+ sqlite3VdbeJumpHere(v, a[1]-1);
+ }
+ sqliteFree(pLevel->aInLoop);
+ }
+ if( pLevel->iLeftJoin ){
+ int addr;
+ addr = sqlite3VdbeAddOp(v, OP_IfMemPos, pLevel->iLeftJoin, 0);
+ sqlite3VdbeAddOp(v, OP_NullRow, pTabList->a[i].iCursor, 0);
+ if( pLevel->iIdxCur>=0 ){
+ sqlite3VdbeAddOp(v, OP_NullRow, pLevel->iIdxCur, 0);
+ }
+ sqlite3VdbeAddOp(v, OP_Goto, 0, pLevel->top);
+ sqlite3VdbeJumpHere(v, addr);
+ }
+ }
+
+ /* The "break" point is here, just past the end of the outer loop.
+ ** Set it.
+ */
+ sqlite3VdbeResolveLabel(v, pWInfo->iBreak);
+
+ /* Close all of the cursors that were opened by sqlite3WhereBegin.
+ */
+ for(i=0, pLevel=pWInfo->a; i<pTabList->nSrc; i++, pLevel++){
+ struct SrcList_item *pTabItem = &pTabList->a[pLevel->iFrom];
+ Table *pTab = pTabItem->pTab;
+ assert( pTab!=0 );
+ if( pTab->isEphem || pTab->pSelect ) continue;
+ if( (pLevel->flags & WHERE_IDX_ONLY)==0 ){
+ sqlite3VdbeAddOp(v, OP_Close, pTabItem->iCursor, 0);
+ }
+ if( pLevel->pIdx!=0 ){
+ sqlite3VdbeAddOp(v, OP_Close, pLevel->iIdxCur, 0);
+ }
+
+ /* Make cursor substitutions for cases where we want to use
+ ** just the index and never reference the table.
+ **
+ ** Calls to the code generator in between sqlite3WhereBegin and
+ ** sqlite3WhereEnd will have created code that references the table
+ ** directly. This loop scans all that code looking for opcodes
+ ** that reference the table and converts them into opcodes that
+ ** reference the index.
+ */
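+ /* As an illustration (with hypothetical names): if index i1 covers
+ ** columns (b,c) of table t1(a,b,c), then aiColumn[] is {1,2}, and an
+ ** OP_Column that read column c of t1 (p2==2) is rewritten below to
+ ** read column 1 of the index cursor instead.
+ */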
+ if( pLevel->flags & WHERE_IDX_ONLY ){
+ int k, j, last;
+ VdbeOp *pOp;
+ Index *pIdx = pLevel->pIdx;
+
+ assert( pIdx!=0 );
+ pOp = sqlite3VdbeGetOp(v, pWInfo->iTop);
+ last = sqlite3VdbeCurrentAddr(v);
+ for(k=pWInfo->iTop; k<last; k++, pOp++){
+ if( pOp->p1!=pLevel->iTabCur ) continue;
+ if( pOp->opcode==OP_Column ){
+ pOp->p1 = pLevel->iIdxCur;
+ for(j=0; j<pIdx->nColumn; j++){
+ if( pOp->p2==pIdx->aiColumn[j] ){
+ pOp->p2 = j;
+ break;
+ }
+ }
+ }else if( pOp->opcode==OP_Rowid ){
+ pOp->p1 = pLevel->iIdxCur;
+ pOp->opcode = OP_IdxRowid;
+ }else if( pOp->opcode==OP_NullRow ){
+ pOp->opcode = OP_Noop;
+ }
+ }
+ }
+ }
+
+ /* Final cleanup
+ */
+ whereInfoFree(pWInfo);
+ return;
+}
Added: freeswitch/trunk/libs/sqlite/tclinstaller.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tclinstaller.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,29 @@
+# This script attempts to install SQLite3 so that it can be used
+# by TCL. Invoke this script with a single argument, which is the
+# version number of SQLite. Example:
+#
+# tclsh tclinstaller.tcl 3.0
+#
+set VERSION [lindex $argv 0]
+set LIBFILE .libs/libtclsqlite3[info sharedlibextension]
+if { ![info exists env(DESTDIR)] } { set env(DESTDIR) "" }
+set LIBDIR $env(DESTDIR)[lindex $auto_path 0]
+set LIBNAME [file tail $LIBFILE]
+set LIB $LIBDIR/sqlite3/$LIBNAME
+
+file delete -force $LIBDIR/sqlite3
+file mkdir $LIBDIR/sqlite3
+set fd [open $LIBDIR/sqlite3/pkgIndex.tcl w]
+puts $fd "package ifneeded sqlite3 $VERSION \[list load $LIB sqlite3\]"
+close $fd
+
+# We cannot use [file copy] because that will just make a copy of
+# a symbolic link. We have to open and copy the file for ourselves.
+#
+set in [open $LIBFILE]
+fconfigure $in -translation binary
+set out [open $LIB w]
+fconfigure $out -translation binary
+puts -nonewline $out [read $in]
+close $in
+close $out
Added: freeswitch/trunk/libs/sqlite/test/aggerror.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/aggerror.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,78 @@
+# 2006 January 20
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for calling sqlite3_result_error()
+# from within an aggregate function implementation.
+#
+# $Id: aggerror.test,v 1.3 2006/05/03 23:34:06 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+# Add the x_count aggregate function to the database handle.
+# x_count will error out if its input is 40 or 41 or if its
+# final results is 42. Make sure that such errors are handled
+# appropriately.
+#
+do_test aggerror-1.1 {
+ set DB [sqlite3_connection_pointer db]
+ sqlite3_create_aggregate $DB
+ execsql {
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 SELECT a+2 FROM t1;
+ INSERT INTO t1 SELECT a+4 FROM t1;
+ INSERT INTO t1 SELECT a+8 FROM t1;
+ INSERT INTO t1 SELECT a+16 FROM t1;
+ INSERT INTO t1 SELECT a+32 FROM t1 ORDER BY a LIMIT 7;
+ SELECT x_count(*) FROM t1;
+ }
+} {39}
+do_test aggerror-1.2 {
+ execsql {
+ INSERT INTO t1 VALUES(40);
+ SELECT x_count(*) FROM t1;
+ }
+} {40}
+do_test aggerror-1.3 {
+ catchsql {
+ SELECT x_count(a) FROM t1;
+ }
+} {1 {value of 40 handed to x_count}}
+ifcapable utf16 {
+ do_test aggerror-1.4 {
+ execsql {
+ UPDATE t1 SET a=41 WHERE a=40
+ }
+ catchsql {
+ SELECT x_count(a) FROM t1;
+ }
+ } {1 abc}
+}
+do_test aggerror-1.5 {
+ execsql {
+ SELECT x_count(*) FROM t1
+ }
+} 40
+do_test aggerror-1.6 {
+ execsql {
+ INSERT INTO t1 VALUES(40);
+ INSERT INTO t1 VALUES(42);
+ }
+ catchsql {
+ SELECT x_count(*) FROM t1;
+ }
+} {1 {x_count totals to 42}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/all.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/all.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,140 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file runs all tests.
+#
+# $Id: all.test,v 1.35 2006/01/17 15:36:33 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+rename finish_test really_finish_test
+proc finish_test {} {memleak_check}
+
+if {[file exists ./sqlite_test_count]} {
+ set COUNT [exec cat ./sqlite_test_count]
+} else {
+ set COUNT 3
+}
+
+if {[llength $argv]>0} {
+ foreach {name value} $argv {
+ switch -- $name {
+ -count {
+ set COUNT $value
+ }
+ -quick {
+ set ISQUICK $value
+ }
+ default {
+ puts stderr "Unknown option: $name"
+ exit
+ }
+ }
+ }
+}
+set argv {}
+
+# LeakList will hold a list of the number of unfreed mallocs after
+# each round of the test. This number should be constant. If it
+# grows, it may mean there is a memory leak in the library.
+#
+set LeakList {}
+
+set EXCLUDE {
+ all.test
+ async.test
+ crash.test
+ autovacuum_crash.test
+ quick.test
+ malloc.test
+ misuse.test
+ memleak.test
+}
+
+# Files to include in the test. If this list is empty then everything
+# that is not in the EXCLUDE list is run.
+#
+set INCLUDE {
+}
+
+# Test files btree2.test and btree4.test don't work if the
+# SQLITE_DEFAULT_AUTOVACUUM macro is defined to true (because they depend
+# on tables being allocated starting at page 2).
+#
+ifcapable default_autovacuum {
+ lappend EXCLUDE btree2.test
+ lappend EXCLUDE btree4.test
+}
+
+for {set Counter 0} {$Counter<$COUNT && $nErr==0} {incr Counter} {
+ if {$Counter%2} {
+ set ::SETUP_SQL {PRAGMA default_synchronous=off;}
+ } else {
+ catch {unset ::SETUP_SQL}
+ }
+ foreach testfile [lsort -dictionary [glob $testdir/*.test]] {
+ set tail [file tail $testfile]
+ if {[lsearch -exact $EXCLUDE $tail]>=0} continue
+ if {[llength $INCLUDE]>0 && [lsearch -exact $INCLUDE $tail]<0} continue
+ source $testfile
+ catch {db close}
+ if {$sqlite_open_file_count>0} {
+ puts "$tail did not close all files: $sqlite_open_file_count"
+ incr nErr
+ lappend ::failList $tail
+ }
+ }
+ if {[info exists Leak]} {
+ lappend LeakList $Leak
+ }
+}
+
+# Do one last test to look for a memory leak in the library. This will
+# only work if SQLite is compiled with the -DSQLITE_DEBUG=1 flag.
+#
+if {$LeakList!=""} {
+ puts -nonewline memory-leak-test...
+ incr ::nTest
+ foreach x $LeakList {
+ if {$x!=[lindex $LeakList 0]} {
+ puts " failed!"
+ puts "Expected: all values to be the same"
+ puts " Got: $LeakList"
+ incr ::nErr
+ lappend ::failList memory-leak-test
+ break
+ }
+ }
+ puts " Ok"
+}
+
+# Run the crashtest only on unix and only once. If the library does not
+# always create auto-vacuum databases, also run autovacuum_crash.test.
+#
+if {$::tcl_platform(platform)=="unix"} {
+ source $testdir/crash.test
+ ifcapable !default_autovacuum {
+ source $testdir/autovacuum_crash.test
+ }
+}
+
+# Run the malloc tests and the misuse test after memory leak detection.
+# Both tests leak memory. Currently, misuse.test also leaks a handful of
+# file descriptors. This is not considered a problem, but can cause tests
+# in malloc.test to fail. So set the open-file count to zero before running
+# malloc.test to get around this.
+#
+catch {source $testdir/misuse.test}
+set sqlite_open_file_count 0
+catch {source $testdir/malloc.test}
+
+catch {db close}
+set sqlite_open_file_count 0
+really_finish_test
Added: freeswitch/trunk/libs/sqlite/test/alter.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/alter.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,634 @@
+# 2004 November 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing the ALTER TABLE statement.
+#
+# $Id: alter.test,v 1.17 2006/02/09 02:56:03 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_OMIT_ALTERTABLE is defined, omit this file.
+ifcapable !altertable {
+ finish_test
+ return
+}
+
+#----------------------------------------------------------------------
+# Test organization:
+#
+# alter-1.1.* - alter-1.7.*: Basic tests of ALTER TABLE, including tables
+# with implicit and explicit indices. These tests came from an earlier
+# fork of SQLite that also supported ALTER TABLE.
+# alter-1.8.*: Tests for ALTER TABLE when the table resides in an
+# attached database.
+# alter-1.9.*: Tests for ALTER TABLE when there is whitespace between the
+# table name and left parenthesis token, i.e.:
+# "CREATE TABLE abc (a, b, c);"
+# alter-2.*: Test error conditions and messages.
+# alter-3.*: Test ALTER TABLE on tables that have TRIGGERs attached to them.
+# alter-4.*: Test ALTER TABLE on tables that have AUTOINCREMENT fields.
+#
+
+# Create some tables to rename. Be sure to include some TEMP tables
+# and some tables with odd names.
+#
+do_test alter-1.1 {
+ ifcapable tempdb {
+ set ::temp TEMP
+ } else {
+ set ::temp {}
+ }
+ execsql [subst -nocommands {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ CREATE TABLE [t1'x1](c UNIQUE, b PRIMARY KEY);
+ INSERT INTO [t1'x1] VALUES(3,4);
+ CREATE INDEX t1i1 ON T1(B);
+ CREATE INDEX t1i2 ON t1(a,b);
+ CREATE INDEX i3 ON [t1'x1](b,c);
+ CREATE $::temp TABLE "temp table"(e,f,g UNIQUE);
+ CREATE INDEX i2 ON [temp table](f);
+ INSERT INTO [temp table] VALUES(5,6,7);
+ }]
+ execsql {
+ SELECT 't1', * FROM t1;
+ SELECT 't1''x1', * FROM "t1'x1";
+ SELECT * FROM [temp table];
+ }
+} {t1 1 2 t1'x1 3 4 5 6 7}
+do_test alter-1.2 {
+ execsql [subst {
+ CREATE $::temp TABLE objlist(type, name, tbl_name);
+ INSERT INTO objlist SELECT type, name, tbl_name
+ FROM sqlite_master WHERE NAME!='objlist';
+ }]
+ ifcapable tempdb {
+ execsql {
+ INSERT INTO objlist SELECT type, name, tbl_name
+ FROM sqlite_temp_master WHERE NAME!='objlist';
+ }
+ }
+
+ execsql {
+ SELECT type, name, tbl_name FROM objlist ORDER BY tbl_name, type desc, name;
+ }
+} [list \
+ table t1 t1 \
+ index t1i1 t1 \
+ index t1i2 t1 \
+ table t1'x1 t1'x1 \
+ index i3 t1'x1 \
+ index {sqlite_autoindex_t1'x1_1} t1'x1 \
+ index {sqlite_autoindex_t1'x1_2} t1'x1 \
+ table {temp table} {temp table} \
+ index i2 {temp table} \
+ index {sqlite_autoindex_temp table_1} {temp table} \
+ ]
+
+# Make some changes
+#
+do_test alter-1.3 {
+ execsql {
+ ALTER TABLE [T1] RENAME to [-t1-];
+ ALTER TABLE "t1'x1" RENAME TO T2;
+ ALTER TABLE [temp table] RENAME to TempTab;
+ }
+} {}
+integrity_check alter-1.3.1
+do_test alter-1.4 {
+ execsql {
+ SELECT 't1', * FROM [-t1-];
+ SELECT 't2', * FROM t2;
+ SELECT * FROM temptab;
+ }
+} {t1 1 2 t2 3 4 5 6 7}
+do_test alter-1.5 {
+ execsql {
+ DELETE FROM objlist;
+ INSERT INTO objlist SELECT type, name, tbl_name
+ FROM sqlite_master WHERE NAME!='objlist';
+ }
+ catchsql {
+ INSERT INTO objlist SELECT type, name, tbl_name
+ FROM sqlite_temp_master WHERE NAME!='objlist';
+ }
+ execsql {
+ SELECT type, name, tbl_name FROM objlist ORDER BY tbl_name, type desc, name;
+ }
+} [list \
+ table -t1- -t1- \
+ index t1i1 -t1- \
+ index t1i2 -t1- \
+ table T2 T2 \
+ index i3 T2 \
+ index {sqlite_autoindex_T2_1} T2 \
+ index {sqlite_autoindex_T2_2} T2 \
+ table {TempTab} {TempTab} \
+ index i2 {TempTab} \
+ index {sqlite_autoindex_TempTab_1} {TempTab} \
+ ]
+
+# Make sure the changes persist after restarting the database.
+# (The TEMP table will not persist, of course.)
+#
+ifcapable tempdb {
+ do_test alter-1.6 {
+ db close
+ sqlite3 db test.db
+ set DB [sqlite3_connection_pointer db]
+ execsql {
+ CREATE TEMP TABLE objlist(type, name, tbl_name);
+ INSERT INTO objlist SELECT type, name, tbl_name FROM sqlite_master;
+ INSERT INTO objlist
+ SELECT type, name, tbl_name FROM sqlite_temp_master
+ WHERE NAME!='objlist';
+ SELECT type, name, tbl_name FROM objlist
+ ORDER BY tbl_name, type desc, name;
+ }
+ } [list \
+ table -t1- -t1- \
+ index t1i1 -t1- \
+ index t1i2 -t1- \
+ table T2 T2 \
+ index i3 T2 \
+ index {sqlite_autoindex_T2_1} T2 \
+ index {sqlite_autoindex_T2_2} T2 \
+ ]
+} else {
+ execsql {
+ DROP TABLE TempTab;
+ }
+}
+
+# Make sure the ALTER TABLE statements work with the
+# non-callback API
+#
+do_test alter-1.7 {
+ stepsql $DB {
+ ALTER TABLE [-t1-] RENAME to [*t1*];
+ ALTER TABLE T2 RENAME TO [<t2>];
+ }
+ execsql {
+ DELETE FROM objlist;
+ INSERT INTO objlist SELECT type, name, tbl_name
+ FROM sqlite_master WHERE NAME!='objlist';
+ }
+ catchsql {
+ INSERT INTO objlist SELECT type, name, tbl_name
+ FROM sqlite_temp_master WHERE NAME!='objlist';
+ }
+ execsql {
+ SELECT type, name, tbl_name FROM objlist ORDER BY tbl_name, type desc, name;
+ }
+} [list \
+ table *t1* *t1* \
+ index t1i1 *t1* \
+ index t1i2 *t1* \
+ table <t2> <t2> \
+ index i3 <t2> \
+ index {sqlite_autoindex_<t2>_1} <t2> \
+ index {sqlite_autoindex_<t2>_2} <t2> \
+ ]
+
+# Check that ALTER TABLE works on attached databases.
+#
+do_test alter-1.8.1 {
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ }
+} {}
+do_test alter-1.8.2 {
+ execsql {
+ CREATE TABLE t4(a PRIMARY KEY, b, c);
+ CREATE TABLE aux.t4(a PRIMARY KEY, b, c);
+ CREATE INDEX i4 ON t4(b);
+ CREATE INDEX aux.i4 ON t4(b);
+ }
+} {}
+do_test alter-1.8.3 {
+ execsql {
+ INSERT INTO t4 VALUES('main', 'main', 'main');
+ INSERT INTO aux.t4 VALUES('aux', 'aux', 'aux');
+ SELECT * FROM t4 WHERE a = 'main';
+ }
+} {main main main}
+do_test alter-1.8.4 {
+ execsql {
+ ALTER TABLE t4 RENAME TO t5;
+ SELECT * FROM t4 WHERE a = 'aux';
+ }
+} {aux aux aux}
+do_test alter-1.8.5 {
+ execsql {
+ SELECT * FROM t5;
+ }
+} {main main main}
+do_test alter-1.8.6 {
+ execsql {
+ SELECT * FROM t5 WHERE b = 'main';
+ }
+} {main main main}
+do_test alter-1.8.7 {
+ execsql {
+ ALTER TABLE aux.t4 RENAME TO t5;
+ SELECT * FROM aux.t5 WHERE b = 'aux';
+ }
+} {aux aux aux}
+
+do_test alter-1.9.1 {
+ execsql {
+ CREATE TABLE tbl1 (a, b, c);
+ INSERT INTO tbl1 VALUES(1, 2, 3);
+ }
+} {}
+do_test alter-1.9.2 {
+ execsql {
+ SELECT * FROM tbl1;
+ }
+} {1 2 3}
+do_test alter-1.9.3 {
+ execsql {
+ ALTER TABLE tbl1 RENAME TO tbl2;
+ SELECT * FROM tbl2;
+ }
+} {1 2 3}
+do_test alter-1.9.4 {
+ execsql {
+ DROP TABLE tbl2;
+ }
+} {}
+
+# Test error messages
+#
+do_test alter-2.1 {
+ catchsql {
+ ALTER TABLE none RENAME TO hi;
+ }
+} {1 {no such table: none}}
+do_test alter-2.2 {
+ execsql {
+ CREATE TABLE t3(p,q,r);
+ }
+ catchsql {
+ ALTER TABLE [<t2>] RENAME TO t3;
+ }
+} {1 {there is already another table or index with this name: t3}}
+do_test alter-2.3 {
+ catchsql {
+ ALTER TABLE [<t2>] RENAME TO i3;
+ }
+} {1 {there is already another table or index with this name: i3}}
+do_test alter-2.4 {
+ catchsql {
+ ALTER TABLE SqLiTe_master RENAME TO master;
+ }
+} {1 {table sqlite_master may not be altered}}
+do_test alter-2.5 {
+ catchsql {
+ ALTER TABLE t3 RENAME TO sqlite_t3;
+ }
+} {1 {object name reserved for internal use: sqlite_t3}}
+
+# If this compilation does not include triggers, omit the alter-3.* tests.
+ifcapable trigger {
+
+#-----------------------------------------------------------------------
+# Tests alter-3.* test ALTER TABLE on tables that have triggers.
+#
+# alter-3.1.*: ALTER TABLE with triggers.
+# alter-3.2.*: Test that the ON keyword cannot be used as a database,
+# table or column name unquoted. This is done because part of the
+# ALTER TABLE code (specifically the implementation of SQL function
+# "sqlite_alter_trigger") will break in this case.
+# alter-3.3.*: ALTER TABLE with TEMP triggers (todo).
+#
+
+# An SQL user-function for triggers to fire, so that we know they
+# are working.
+proc trigfunc {args} {
+ set ::TRIGGER $args
+}
+db func trigfunc trigfunc
+
+do_test alter-3.1.0 {
+ execsql {
+ CREATE TABLE t6(a, b, c);
+ CREATE TRIGGER trig1 AFTER INSERT ON t6 BEGIN
+ SELECT trigfunc('trig1', new.a, new.b, new.c);
+ END;
+ }
+} {}
+do_test alter-3.1.1 {
+ execsql {
+ INSERT INTO t6 VALUES(1, 2, 3);
+ }
+ set ::TRIGGER
+} {trig1 1 2 3}
+do_test alter-3.1.2 {
+ execsql {
+ ALTER TABLE t6 RENAME TO t7;
+ INSERT INTO t7 VALUES(4, 5, 6);
+ }
+ set ::TRIGGER
+} {trig1 4 5 6}
+do_test alter-3.1.3 {
+ execsql {
+ DROP TRIGGER trig1;
+ }
+} {}
+do_test alter-3.1.4 {
+ execsql {
+ CREATE TRIGGER trig2 AFTER INSERT ON main.t7 BEGIN
+ SELECT trigfunc('trig2', new.a, new.b, new.c);
+ END;
+ INSERT INTO t7 VALUES(1, 2, 3);
+ }
+ set ::TRIGGER
+} {trig2 1 2 3}
+do_test alter-3.1.5 {
+ execsql {
+ ALTER TABLE t7 RENAME TO t8;
+ INSERT INTO t8 VALUES(4, 5, 6);
+ }
+ set ::TRIGGER
+} {trig2 4 5 6}
+do_test alter-3.1.6 {
+ execsql {
+ DROP TRIGGER trig2;
+ }
+} {}
+do_test alter-3.1.7 {
+ execsql {
+ CREATE TRIGGER trig3 AFTER INSERT ON main.'t8'BEGIN
+ SELECT trigfunc('trig3', new.a, new.b, new.c);
+ END;
+ INSERT INTO t8 VALUES(1, 2, 3);
+ }
+ set ::TRIGGER
+} {trig3 1 2 3}
+do_test alter-3.1.8 {
+ execsql {
+ ALTER TABLE t8 RENAME TO t9;
+ INSERT INTO t9 VALUES(4, 5, 6);
+ }
+ set ::TRIGGER
+} {trig3 4 5 6}
+
+# Make sure "ON" cannot be used as a database, table or column name without
+# quoting. Otherwise the sqlite_alter_trigger() function might not work.
+file delete -force test3.db
+file delete -force test3.db-journal
+do_test alter-3.2.1 {
+ catchsql {
+ ATTACH 'test3.db' AS ON;
+ }
+} {1 {near "ON": syntax error}}
+do_test alter-3.2.2 {
+ catchsql {
+ ATTACH 'test3.db' AS 'ON';
+ }
+} {0 {}}
+do_test alter-3.2.3 {
+ catchsql {
+ CREATE TABLE ON.t1(a, b, c);
+ }
+} {1 {near "ON": syntax error}}
+do_test alter-3.2.4 {
+ catchsql {
+ CREATE TABLE 'ON'.t1(a, b, c);
+ }
+} {0 {}}
+do_test alter-3.2.4 {
+ catchsql {
+ CREATE TABLE 'ON'.ON(a, b, c);
+ }
+} {1 {near "ON": syntax error}}
+do_test alter-3.2.5 {
+ catchsql {
+ CREATE TABLE 'ON'.'ON'(a, b, c);
+ }
+} {0 {}}
+do_test alter-3.2.6 {
+ catchsql {
+ CREATE TABLE t10(a, ON, c);
+ }
+} {1 {near "ON": syntax error}}
+do_test alter-3.2.7 {
+ catchsql {
+ CREATE TABLE t10(a, 'ON', c);
+ }
+} {0 {}}
+do_test alter-3.2.8 {
+ catchsql {
+ CREATE TRIGGER trig4 AFTER INSERT ON ON BEGIN SELECT 1; END;
+ }
+} {1 {near "ON": syntax error}}
+do_test alter-3.2.9 {
+ catchsql {
+ CREATE TRIGGER 'on'.trig4 AFTER INSERT ON 'ON' BEGIN SELECT 1; END;
+ }
+} {0 {}}
+do_test alter-3.2.10 {
+ execsql {
+ DROP TABLE t10;
+ }
+} {}
+
+do_test alter-3.3.1 {
+ execsql [subst {
+ CREATE TABLE tbl1(a, b, c);
+ CREATE $::temp TRIGGER trig1 AFTER INSERT ON tbl1 BEGIN
+ SELECT trigfunc('trig1', new.a, new.b, new.c);
+ END;
+ }]
+} {}
+do_test alter-3.3.2 {
+ execsql {
+ INSERT INTO tbl1 VALUES('a', 'b', 'c');
+ }
+ set ::TRIGGER
+} {trig1 a b c}
+do_test alter-3.3.3 {
+ execsql {
+ ALTER TABLE tbl1 RENAME TO tbl2;
+ INSERT INTO tbl2 VALUES('d', 'e', 'f');
+ }
+ set ::TRIGGER
+} {trig1 d e f}
+do_test alter-3.3.4 {
+ execsql [subst {
+ CREATE $::temp TRIGGER trig2 AFTER UPDATE ON tbl2 BEGIN
+ SELECT trigfunc('trig2', new.a, new.b, new.c);
+ END;
+ }]
+} {}
+do_test alter-3.3.5 {
+ execsql {
+ ALTER TABLE tbl2 RENAME TO tbl3;
+ INSERT INTO tbl3 VALUES('g', 'h', 'i');
+ }
+ set ::TRIGGER
+} {trig1 g h i}
+do_test alter-3.3.6 {
+ execsql {
+ UPDATE tbl3 SET a = 'G' where a = 'g';
+ }
+ set ::TRIGGER
+} {trig2 G h i}
+do_test alter-3.3.7 {
+ execsql {
+ DROP TABLE tbl3;
+ }
+} {}
+ifcapable tempdb {
+ do_test alter-3.3.8 {
+ execsql {
+ SELECT * FROM sqlite_temp_master WHERE type = 'trigger';
+ }
+ } {}
+}
+
+} ;# ifcapable trigger
+
+# If the build does not include AUTOINCREMENT fields, omit alter-4.*.
+ifcapable autoinc {
+
+do_test alter-4.1 {
+ execsql {
+ CREATE TABLE tbl1(a INTEGER PRIMARY KEY AUTOINCREMENT);
+ INSERT INTO tbl1 VALUES(10);
+ }
+} {}
+do_test alter-4.2 {
+ execsql {
+ INSERT INTO tbl1 VALUES(NULL);
+ SELECT a FROM tbl1;
+ }
+} {10 11}
+do_test alter-4.3 {
+ execsql {
+ ALTER TABLE tbl1 RENAME TO tbl2;
+ DELETE FROM tbl2;
+ INSERT INTO tbl2 VALUES(NULL);
+ SELECT a FROM tbl2;
+ }
+} {12}
+do_test alter-4.4 {
+ execsql {
+ DROP TABLE tbl2;
+ }
+} {}
+
+} ;# ifcapable autoinc
+
+# Test that it is Ok to execute an ALTER TABLE immediately after
+# opening a database.
+do_test alter-5.1 {
+ execsql {
+ CREATE TABLE tbl1(a, b, c);
+ INSERT INTO tbl1 VALUES('x', 'y', 'z');
+ }
+} {}
+do_test alter-5.2 {
+ sqlite3 db2 test.db
+ execsql {
+ ALTER TABLE tbl1 RENAME TO tbl2;
+ SELECT * FROM tbl2;
+ } db2
+} {x y z}
+do_test alter-5.3 {
+ db2 close
+} {}
+
+foreach tblname [execsql {
+ SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite%'
+}] {
+ execsql "DROP TABLE \"$tblname\""
+}
+
+set ::tbl_name "abc\uABCDdef"
+do_test alter-6.1 {
+ string length $::tbl_name
+} {7}
+do_test alter-6.2 {
+ execsql "
+ CREATE TABLE ${tbl_name}(a, b, c);
+ "
+ set ::oid [execsql {SELECT max(oid) FROM sqlite_master}]
+ execsql "
+ SELECT sql FROM sqlite_master WHERE oid = $::oid;
+ "
+} "{CREATE TABLE ${::tbl_name}(a, b, c)}"
+execsql "
+ SELECT * FROM ${::tbl_name}
+"
+set ::tbl_name2 "abcXdef"
+do_test alter-6.3 {
+ execsql "
+ ALTER TABLE $::tbl_name RENAME TO $::tbl_name2
+ "
+ execsql "
+ SELECT sql FROM sqlite_master WHERE oid = $::oid
+ "
+} "{CREATE TABLE '${::tbl_name2}'(a, b, c)}"
+do_test alter-6.4 {
+ execsql "
+ ALTER TABLE $::tbl_name2 RENAME TO $::tbl_name
+ "
+ execsql "
+ SELECT sql FROM sqlite_master WHERE oid = $::oid
+ "
+} "{CREATE TABLE '${::tbl_name}'(a, b, c)}"
+set ::col_name ghi\1234\jkl
+do_test alter-6.5 {
+ execsql "
+ ALTER TABLE $::tbl_name ADD COLUMN $::col_name VARCHAR
+ "
+ execsql "
+ SELECT sql FROM sqlite_master WHERE oid = $::oid
+ "
+} "{CREATE TABLE '${::tbl_name}'(a, b, c, $::col_name VARCHAR)}"
+set ::col_name2 B\3421\A
+do_test alter-6.6 {
+ db close
+ sqlite3 db test.db
+ execsql "
+ ALTER TABLE $::tbl_name ADD COLUMN $::col_name2
+ "
+ execsql "
+ SELECT sql FROM sqlite_master WHERE oid = $::oid
+ "
+} "{CREATE TABLE '${::tbl_name}'(a, b, c, $::col_name VARCHAR, $::col_name2)}"
+do_test alter-6.7 {
+ execsql "
+ INSERT INTO ${::tbl_name} VALUES(1, 2, 3, 4, 5);
+ SELECT $::col_name, $::col_name2 FROM $::tbl_name;
+ "
+} {4 5}
+
+# Ticket #1665: Make sure ALTER TABLE ADD COLUMN works on a table
+# that includes a COLLATE clause.
+#
+do_test alter-7.1 {
+ execsql {
+ CREATE TABLE t1(a TEXT COLLATE BINARY);
+ ALTER TABLE t1 ADD COLUMN b INTEGER COLLATE NOCASE;
+ INSERT INTO t1 VALUES(1,'2');
+ SELECT typeof(a), a, typeof(b), b FROM t1;
+ }
+} {text 1 integer 2}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/alter2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/alter2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,438 @@
+# 2005 February 18
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing that SQLite can handle a subtle
+# file format change that may be used in the future to implement
+# "ALTER TABLE ... ADD COLUMN".
+#
+# $Id: alter2.test,v 1.5 2006/01/03 00:33:50 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# We have to have pragmas in order to do this test
+ifcapable {!pragma} return
+
+# These tests do not work if there is a codec. The
+# btree_open command does not know how to handle codecs.
+#
+if {[catch {sqlite3 -has_codec} r] || $r} return
+
+# The file format change affects the way row-records stored in tables (but
+# not indices) are interpreted. Before version 3.1.3, a row-record for a
+# table with N columns was guaranteed to contain exactly N fields. As
+# of version 3.1.3, the record may contain fewer than N fields. In this case
+# the M fields that are present are the values for the left-most M
+# columns. The (N-M) rightmost columns contain NULL.
+#
+# If any records in the database contain fewer fields than their table
+# has columns, then the file-format meta value should be set to (at least) 2.
+#
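+# As a concrete illustration of what the tests below exercise: if a row is
+# inserted while a table has two columns and the stored CREATE TABLE text is
+# later rewritten to declare three columns (without rewriting the row), that
+# row keeps its two-field record and the third column simply reads back as
+# NULL.
+#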
+
+# This procedure sets the value of the file-format in file 'test.db'
+# to $newval. Also, the schema cookie is incremented.
+#
+proc set_file_format {newval} {
+ set bt [btree_open test.db 10 0]
+ btree_begin_transaction $bt
+ set meta [btree_get_meta $bt]
+ lset meta 2 $newval ;# File format
+ lset meta 1 [expr [lindex $meta 1]+1] ;# Schema cookie
+ eval "btree_update_meta $bt $meta"
+ btree_commit $bt
+ btree_close $bt
+}
+
+# This procedure returns the value of the file-format in file 'test.db'.
+#
+proc get_file_format {{fname test.db}} {
+ set bt [btree_open $fname 10 0]
+ set meta [btree_get_meta $bt]
+ btree_close $bt
+ lindex $meta 2
+}
+
+# This procedure sets the SQL statement stored for table $tbl in the
+# sqlite_master table of file 'test.db' to $sql. Also set the file format
+# to the supplied value. This is 2 if the added column has a default that is
+# NULL, or 3 otherwise.
+#
+proc alter_table {tbl sql {file_format 2}} {
+ sqlite3 dbat test.db
+ dbat eval {
+ PRAGMA writable_schema = 1;
+ UPDATE sqlite_master SET sql = $sql WHERE name = $tbl AND type = 'table';
+ PRAGMA writable_schema = 0;
+ }
+ dbat close
+ set_file_format $file_format
+}
+
+#-----------------------------------------------------------------------
+# Some basic tests to make sure short rows are handled.
+#
+do_test alter2-1.1 {
+ execsql {
+ CREATE TABLE abc(a, b);
+ INSERT INTO abc VALUES(1, 2);
+ INSERT INTO abc VALUES(3, 4);
+ INSERT INTO abc VALUES(5, 6);
+ }
+} {}
+do_test alter2-1.2 {
+ # ALTER TABLE abc ADD COLUMN c;
+ alter_table abc {CREATE TABLE abc(a, b, c);}
+} {}
+do_test alter2-1.3 {
+ execsql {
+ SELECT * FROM abc;
+ }
+} {1 2 {} 3 4 {} 5 6 {}}
+do_test alter2-1.4 {
+ execsql {
+ UPDATE abc SET c = 10 WHERE a = 1;
+ SELECT * FROM abc;
+ }
+} {1 2 10 3 4 {} 5 6 {}}
+do_test alter2-1.5 {
+ execsql {
+ CREATE INDEX abc_i ON abc(c);
+ }
+} {}
+do_test alter2-1.6 {
+ execsql {
+ SELECT c FROM abc ORDER BY c;
+ }
+} {{} {} 10}
+do_test alter2-1.7 {
+ execsql {
+ SELECT * FROM abc WHERE c = 10;
+ }
+} {1 2 10}
+do_test alter2-1.8 {
+ execsql {
+ SELECT sum(a), c FROM abc GROUP BY c;
+ }
+} {8.0 {} 1.0 10}
+do_test alter2-1.9 {
+ # ALTER TABLE abc ADD COLUMN d;
+ alter_table abc {CREATE TABLE abc(a, b, c, d);}
+ execsql { SELECT * FROM abc; }
+ execsql {
+ UPDATE abc SET d = 11 WHERE c IS NULL AND a<4;
+ SELECT * FROM abc;
+ }
+} {1 2 10 {} 3 4 {} 11 5 6 {} {}}
+do_test alter2-1.10 {
+ execsql {
+ SELECT typeof(d) FROM abc;
+ }
+} {null integer null}
+do_test alter2-1.99 {
+ execsql {
+ DROP TABLE abc;
+ }
+} {}
+
+#-----------------------------------------------------------------------
+# Test that views work when the underlying table structure is changed.
+#
+ifcapable view {
+ do_test alter2-2.1 {
+ execsql {
+ CREATE TABLE abc2(a, b, c);
+ INSERT INTO abc2 VALUES(1, 2, 10);
+ INSERT INTO abc2 VALUES(3, 4, NULL);
+ INSERT INTO abc2 VALUES(5, 6, NULL);
+ CREATE VIEW abc2_v AS SELECT * FROM abc2;
+ SELECT * FROM abc2_v;
+ }
+ } {1 2 10 3 4 {} 5 6 {}}
+ do_test alter2-2.2 {
+ # ALTER TABLE abc ADD COLUMN d;
+ alter_table abc2 {CREATE TABLE abc2(a, b, c, d);}
+ execsql {
+ SELECT * FROM abc2_v;
+ }
+ } {1 2 10 {} 3 4 {} {} 5 6 {} {}}
+ do_test alter2-2.3 {
+ execsql {
+ DROP TABLE abc2;
+ DROP VIEW abc2_v;
+ }
+ } {}
+}
+
+#-----------------------------------------------------------------------
+# Test that triggers work when a short row is copied to the old.*
+# trigger pseudo-table.
+#
+ifcapable trigger {
+ do_test alter2-3.1 {
+ execsql {
+ CREATE TABLE abc3(a, b);
+ CREATE TABLE blog(o, n);
+ CREATE TRIGGER abc3_t AFTER UPDATE OF b ON abc3 BEGIN
+ INSERT INTO blog VALUES(old.b, new.b);
+ END;
+ }
+ } {}
+ do_test alter2-3.2 {
+ execsql {
+ INSERT INTO abc3 VALUES(1, 4);
+ UPDATE abc3 SET b = 2 WHERE b = 4;
+ SELECT * FROM blog;
+ }
+ } {4 2}
+ do_test alter2-3.3 {
+ execsql {
+ INSERT INTO abc3 VALUES(3, 4);
+ INSERT INTO abc3 VALUES(5, 6);
+ }
+ alter_table abc3 {CREATE TABLE abc3(a, b, c);}
+ execsql {
+ SELECT * FROM abc3;
+ }
+ } {1 2 {} 3 4 {} 5 6 {}}
+ do_test alter2-3.4 {
+ execsql {
+ UPDATE abc3 SET b = b*2 WHERE a<4;
+ SELECT * FROM abc3;
+ }
+ } {1 4 {} 3 8 {} 5 6 {}}
+ do_test alter2-3.5 {
+ execsql {
+ SELECT * FROM blog;
+ }
+ } {4 2 2 4 4 8}
+
+ do_test alter2-3.6 {
+ execsql {
+ CREATE TABLE clog(o, n);
+ CREATE TRIGGER abc3_t2 AFTER UPDATE OF c ON abc3 BEGIN
+ INSERT INTO clog VALUES(old.c, new.c);
+ END;
+ UPDATE abc3 SET c = a*2;
+ SELECT * FROM clog;
+ }
+ } {{} 2 {} 6 {} 10}
+}
+
+#---------------------------------------------------------------------
+# Check that an error occurs if the database is upgraded to a file
+# format that SQLite does not support (in this case 4). Note: The
+# file format is checked each time the schema is read, so changing the
+# file format requires incrementing the schema cookie.
+#
+do_test alter2-4.1 {
+ set_file_format 4
+} {}
+do_test alter2-4.2 {
+ catchsql {
+ SELECT * FROM sqlite_master;
+ }
+} {1 {unsupported file format}}
+do_test alter2-4.3 {
+ sqlite3_errcode $::DB
+} {SQLITE_ERROR}
+do_test alter2-4.4 {
+ set ::DB [sqlite3_connection_pointer db]
+ catchsql {
+ SELECT * FROM sqlite_master;
+ }
+} {1 {unsupported file format}}
+do_test alter2-4.5 {
+ sqlite3_errcode $::DB
+} {SQLITE_ERROR}
+
+#---------------------------------------------------------------------
+# Check that executing VACUUM on a file with file-format version 2
+# resets the file format to 1.
+#
+do_test alter2-5.1 {
+ set_file_format 2
+ get_file_format
+} {2}
+do_test alter2-5.2 {
+ execsql {
+ VACUUM;
+ }
+} {}
+do_test alter2-5.3 {
+ get_file_format
+} {1}
+
+#---------------------------------------------------------------------
+# Test that when a database with file-format 2 is opened, new
+# databases are still created with file-format 1.
+#
+do_test alter2-6.1 {
+ db close
+ set_file_format 2
+ sqlite3 db test.db
+ set ::DB [sqlite3_connection_pointer db]
+ get_file_format
+} {2}
+do_test alter2-6.2 {
+ file delete -force test2.db-journal
+ file delete -force test2.db
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ CREATE TABLE aux.t1(a, b);
+ }
+ get_file_format test2.db
+} {1}
+do_test alter2-6.3 {
+ execsql {
+ CREATE TABLE t1(a, b);
+ }
+ get_file_format
+} {2}
+
+#---------------------------------------------------------------------
+# Test that types and values for columns added with default values
+# other than NULL work with SELECT statements.
+#
+do_test alter2-7.1 {
+ execsql {
+ DROP TABLE t1;
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 VALUES(3);
+ INSERT INTO t1 VALUES(4);
+ SELECT * FROM t1;
+ }
+} {1 2 3 4}
+do_test alter2-7.2 {
+ set sql {CREATE TABLE t1(a, b DEFAULT '123', c INTEGER DEFAULT '123')}
+ alter_table t1 $sql 3
+ execsql {
+ SELECT * FROM t1 LIMIT 1;
+ }
+} {1 123 123}
+do_test alter2-7.3 {
+ execsql {
+ SELECT a, typeof(a), b, typeof(b), c, typeof(c) FROM t1 LIMIT 1;
+ }
+} {1 integer 123 text 123 integer}
+do_test alter2-7.4 {
+ execsql {
+ SELECT a, typeof(a), b, typeof(b), c, typeof(c) FROM t1 LIMIT 1;
+ }
+} {1 integer 123 text 123 integer}
+do_test alter2-7.5 {
+ set sql {CREATE TABLE t1(a, b DEFAULT -123.0, c VARCHAR(10) default 5)}
+ alter_table t1 $sql 3
+ execsql {
+ SELECT a, typeof(a), b, typeof(b), c, typeof(c) FROM t1 LIMIT 1;
+ }
+} {1 integer -123.0 real 5 text}
+
+#-----------------------------------------------------------------------
+# Test that UPDATE trigger tables work with default values, and that when
+# a row is updated the default values are correctly transferred to the
+# new row.
+#
+ifcapable trigger {
+db function set_val {set ::val}
+ do_test alter2-8.1 {
+ execsql {
+ CREATE TRIGGER trig1 BEFORE UPDATE ON t1 BEGIN
+ SELECT set_val(
+ old.b||' '||typeof(old.b)||' '||old.c||' '||typeof(old.c)||' '||
+ new.b||' '||typeof(new.b)||' '||new.c||' '||typeof(new.c)
+ );
+ END;
+ }
+ list
+ } {}
+}
+do_test alter2-8.2 {
+ execsql {
+ UPDATE t1 SET c = 10 WHERE a = 1;
+ SELECT a, typeof(a), b, typeof(b), c, typeof(c) FROM t1 LIMIT 1;
+ }
+} {1 integer -123.0 real 10 text}
+ifcapable trigger {
+ do_test alter2-8.3 {
+ set ::val
+ } {-123 real 5 text -123 real 10 text}
+}
+
+#-----------------------------------------------------------------------
+# Test that DELETE trigger tables work with default values, and that when
+# a row is deleted the default values are correctly transferred to the
+# old.* pseudo-table.
+#
+ifcapable trigger {
+ do_test alter2-9.1 {
+ execsql {
+ CREATE TRIGGER trig2 BEFORE DELETE ON t1 BEGIN
+ SELECT set_val(
+ old.b||' '||typeof(old.b)||' '||old.c||' '||typeof(old.c)
+ );
+ END;
+ }
+ list
+ } {}
+ do_test alter2-9.2 {
+ execsql {
+ DELETE FROM t1 WHERE a = 2;
+ }
+ set ::val
+ } {-123 real 5 text}
+}
+
+#-----------------------------------------------------------------------
+# Test creating an index on a column added with a default value.
+#
+do_test alter2-10.1 {
+ execsql {
+ CREATE TABLE t2(a);
+ INSERT INTO t2 VALUES('a');
+ INSERT INTO t2 VALUES('b');
+ INSERT INTO t2 VALUES('c');
+ INSERT INTO t2 VALUES('d');
+ }
+ alter_table t2 {CREATE TABLE t2(a, b DEFAULT X'ABCD', c DEFAULT NULL);} 3
+ catchsql {
+ SELECT * FROM sqlite_master;
+ }
+ execsql {
+ SELECT quote(a), quote(b), quote(c) FROM t2 LIMIT 1;
+ }
+} {'a' X'ABCD' NULL}
+do_test alter2-10.2 {
+ execsql {
+ CREATE INDEX i1 ON t2(b);
+ SELECT a FROM t2 WHERE b = X'ABCD';
+ }
+} {a b c d}
+do_test alter2-10.3 {
+ execsql {
+ DELETE FROM t2 WHERE a = 'c';
+ SELECT a FROM t2 WHERE b = X'ABCD';
+ }
+} {a b d}
+do_test alter2-10.4 {
+ execsql {
+ SELECT count(b) FROM t2 WHERE b = X'ABCD';
+ }
+} {3}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/alter3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/alter3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,396 @@
+# 2005 February 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing that SQLite can handle a subtle
+# file format change that may be used in the future to implement
+# "ALTER TABLE ... ADD COLUMN".
+#
+# $Id: alter3.test,v 1.9 2006/01/17 09:35:02 danielk1977 Exp $
+#
+
+set testdir [file dirname $argv0]
+
+source $testdir/tester.tcl
+
+# If SQLITE_OMIT_ALTERTABLE is defined, omit this file.
+ifcapable !altertable {
+ finish_test
+ return
+}
+
+# Determine if there is a codec available on this test.
+#
+if {[catch {sqlite3 -has_codec} r] || $r} {
+ set has_codec 1
+} else {
+ set has_codec 0
+}
+
+
+# Test Organisation:
+# ------------------
+#
+# alter3-1.*: Test that ALTER TABLE correctly modifies the CREATE TABLE sql.
+# alter3-2.*: Test error messages.
+# alter3-3.*: Test adding columns with default value NULL.
+# alter3-4.*: Test adding columns with default values other than NULL.
+# alter3-5.*: Test adding columns to tables in ATTACHed databases.
+# alter3-6.*: Test that temp triggers are not accidentally dropped.
+# alter3-7.*: Test that VACUUM resets the file-format.
+#
+
+# This procedure returns the value of the file-format in file 'test.db'.
+#
+proc get_file_format {{fname test.db}} {
+ set bt [btree_open $fname 10 0]
+ set meta [btree_get_meta $bt]
+ btree_close $bt
+ lindex $meta 2
+}
+
+do_test alter3-1.1 {
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ SELECT sql FROM sqlite_master;
+ }
+} {{CREATE TABLE abc(a, b, c)}}
+do_test alter3-1.2 {
+ execsql {ALTER TABLE abc ADD d INTEGER;}
+ execsql {
+ SELECT sql FROM sqlite_master;
+ }
+} {{CREATE TABLE abc(a, b, c, d INTEGER)}}
+do_test alter3-1.3 {
+ execsql {ALTER TABLE abc ADD e}
+ execsql {
+ SELECT sql FROM sqlite_master;
+ }
+} {{CREATE TABLE abc(a, b, c, d INTEGER, e)}}
+do_test alter3-1.4 {
+ execsql {
+ CREATE TABLE main.t1(a, b);
+ ALTER TABLE t1 ADD c;
+ SELECT sql FROM sqlite_master WHERE tbl_name = 't1';
+ }
+} {{CREATE TABLE t1(a, b, c)}}
+do_test alter3-1.5 {
+ execsql {
+ ALTER TABLE t1 ADD d CHECK (a>d);
+ SELECT sql FROM sqlite_master WHERE tbl_name = 't1';
+ }
+} {{CREATE TABLE t1(a, b, c, d CHECK (a>d))}}
+ifcapable foreignkey {
+ do_test alter3-1.6 {
+ execsql {
+ CREATE TABLE t2(a, b, UNIQUE(a, b));
+ ALTER TABLE t2 ADD c REFERENCES t1(c) ;
+ SELECT sql FROM sqlite_master WHERE tbl_name = 't2' AND type = 'table';
+ }
+ } {{CREATE TABLE t2(a, b, c REFERENCES t1(c), UNIQUE(a, b))}}
+}
+do_test alter3-1.7 {
+ execsql {
+ CREATE TABLE t3(a, b, UNIQUE(a, b));
+ ALTER TABLE t3 ADD COLUMN c VARCHAR(10, 20);
+ SELECT sql FROM sqlite_master WHERE tbl_name = 't3' AND type = 'table';
+ }
+} {{CREATE TABLE t3(a, b, c VARCHAR(10, 20), UNIQUE(a, b))}}
+do_test alter3-1.99 {
+ catchsql {
+ # May not exist if foreign-keys are omitted at compile time.
+ DROP TABLE t2;
+ }
+ execsql {
+ DROP TABLE abc;
+ DROP TABLE t1;
+ DROP TABLE t3;
+ }
+} {}
+
+do_test alter3-2.1 {
+ execsql {
+ CREATE TABLE t1(a, b);
+ }
+ catchsql {
+ ALTER TABLE t1 ADD c PRIMARY KEY;
+ }
+} {1 {Cannot add a PRIMARY KEY column}}
+do_test alter3-2.2 {
+ catchsql {
+ ALTER TABLE t1 ADD c UNIQUE
+ }
+} {1 {Cannot add a UNIQUE column}}
+do_test alter3-2.3 {
+ catchsql {
+ ALTER TABLE t1 ADD b VARCHAR(10)
+ }
+} {1 {duplicate column name: b}}
+do_test alter3-2.3 {
+ catchsql {
+ ALTER TABLE t1 ADD c NOT NULL;
+ }
+} {1 {Cannot add a NOT NULL column with default value NULL}}
+do_test alter3-2.4 {
+ catchsql {
+ ALTER TABLE t1 ADD c NOT NULL DEFAULT 10;
+ }
+} {0 {}}
+ifcapable view {
+ do_test alter3-2.5 {
+ execsql {
+ CREATE VIEW v1 AS SELECT * FROM t1;
+ }
+ catchsql {
+ alter table v1 add column d;
+ }
+ } {1 {Cannot add a column to a view}}
+}
+do_test alter3-2.6 {
+ catchsql {
+ alter table t1 add column d DEFAULT CURRENT_TIME;
+ }
+} {1 {Cannot add a column with non-constant default}}
+do_test alter3-2.99 {
+ execsql {
+ DROP TABLE t1;
+ }
+} {}
+
+do_test alter3-3.1 {
+ execsql {
+ CREATE TABLE t1(a, b);
+ INSERT INTO t1 VALUES(1, 100);
+ INSERT INTO t1 VALUES(2, 300);
+ SELECT * FROM t1;
+ }
+} {1 100 2 300}
+do_test alter3-3.1 {
+ execsql {
+ PRAGMA schema_version = 10;
+ }
+} {}
+do_test alter3-3.2 {
+ execsql {
+ ALTER TABLE t1 ADD c;
+ SELECT * FROM t1;
+ }
+} {1 100 {} 2 300 {}}
+if {!$has_codec} {
+ do_test alter3-3.3 {
+ get_file_format
+ } {3}
+}
+ifcapable schema_version {
+ do_test alter3-3.4 {
+ execsql {
+ PRAGMA schema_version;
+ }
+ } {11}
+}
+
+do_test alter3-4.1 {
+ db close
+ file delete -force test.db
+ set ::DB [sqlite3 db test.db]
+ execsql {
+ CREATE TABLE t1(a, b);
+ INSERT INTO t1 VALUES(1, 100);
+ INSERT INTO t1 VALUES(2, 300);
+ SELECT * FROM t1;
+ }
+} {1 100 2 300}
+do_test alter3-4.1 {
+ execsql {
+ PRAGMA schema_version = 20;
+ }
+} {}
+do_test alter3-4.2 {
+ execsql {
+ ALTER TABLE t1 ADD c DEFAULT 'hello world';
+ SELECT * FROM t1;
+ }
+} {1 100 {hello world} 2 300 {hello world}}
+if {!$has_codec} {
+ do_test alter3-4.3 {
+ get_file_format
+ } {3}
+}
+ifcapable schema_version {
+ do_test alter3-4.4 {
+ execsql {
+ PRAGMA schema_version;
+ }
+ } {21}
+}
+do_test alter3-4.99 {
+ execsql {
+ DROP TABLE t1;
+ }
+} {}
+
+do_test alter3-5.1 {
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ execsql {
+ CREATE TABLE t1(a, b);
+ INSERT INTO t1 VALUES(1, 'one');
+ INSERT INTO t1 VALUES(2, 'two');
+ ATTACH 'test2.db' AS aux;
+ CREATE TABLE aux.t1 AS SELECT * FROM t1;
+ PRAGMA aux.schema_version = 30;
+ SELECT sql FROM aux.sqlite_master;
+ }
+} {{CREATE TABLE t1(a,b)}}
+do_test alter3-5.2 {
+ execsql {
+ ALTER TABLE aux.t1 ADD COLUMN c VARCHAR(128);
+ SELECT sql FROM aux.sqlite_master;
+ }
+} {{CREATE TABLE t1(a,b, c VARCHAR(128))}}
+do_test alter3-5.3 {
+ execsql {
+ SELECT * FROM aux.t1;
+ }
+} {1 one {} 2 two {}}
+ifcapable schema_version {
+ do_test alter3-5.4 {
+ execsql {
+ PRAGMA aux.schema_version;
+ }
+ } {31}
+}
+if {!$has_codec} {
+ do_test alter3-5.5 {
+ list [get_file_format test2.db] [get_file_format]
+ } {2 3}
+}
+do_test alter3-5.6 {
+ execsql {
+ ALTER TABLE aux.t1 ADD COLUMN d DEFAULT 1000;
+ SELECT sql FROM aux.sqlite_master;
+ }
+} {{CREATE TABLE t1(a,b, c VARCHAR(128), d DEFAULT 1000)}}
+do_test alter3-5.7 {
+ execsql {
+ SELECT * FROM aux.t1;
+ }
+} {1 one {} 1000 2 two {} 1000}
+ifcapable schema_version {
+ do_test alter3-5.8 {
+ execsql {
+ PRAGMA aux.schema_version;
+ }
+ } {32}
+}
+do_test alter3-5.9 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {1 one 2 two}
+do_test alter3-5.99 {
+ execsql {
+ DROP TABLE aux.t1;
+ DROP TABLE t1;
+ }
+} {}
+
+#----------------------------------------------------------------
+# Test that the table schema is correctly reloaded when a column
+# is added to a table.
+#
+ifcapable trigger&&tempdb {
+ do_test alter3-6.1 {
+ execsql {
+ CREATE TABLE t1(a, b);
+ CREATE TABLE log(trig, a, b);
+
+ CREATE TRIGGER t1_a AFTER INSERT ON t1 BEGIN
+ INSERT INTO log VALUES('a', new.a, new.b);
+ END;
+ CREATE TEMP TRIGGER t1_b AFTER INSERT ON t1 BEGIN
+ INSERT INTO log VALUES('b', new.a, new.b);
+ END;
+
+ INSERT INTO t1 VALUES(1, 2);
+ SELECT * FROM log;
+ }
+ } {b 1 2 a 1 2}
+ do_test alter3-6.2 {
+ execsql {
+ ALTER TABLE t1 ADD COLUMN c DEFAULT 'c';
+ INSERT INTO t1(a, b) VALUES(3, 4);
+ SELECT * FROM log;
+ }
+ } {b 1 2 a 1 2 b 3 4 a 3 4}
+}
+
+if {!$has_codec} {
+ ifcapable vacuum {
+ do_test alter3-7.1 {
+ execsql {
+ VACUUM;
+ }
+ get_file_format
+ } {1}
+ do_test alter3-7.2 {
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ ALTER TABLE abc ADD d DEFAULT NULL;
+ }
+ get_file_format
+ } {2}
+ do_test alter3-7.3 {
+ execsql {
+ ALTER TABLE abc ADD e DEFAULT 10;
+ }
+ get_file_format
+ } {3}
+ do_test alter3-7.4 {
+ execsql {
+ ALTER TABLE abc ADD f DEFAULT NULL;
+ }
+ get_file_format
+ } {3}
+ do_test alter3-7.5 {
+ execsql {
+ VACUUM;
+ }
+ get_file_format
+ } {1}
+ }
+}
+
+# Ticket #1183 - Make sure adding columns to large tables does not cause
+# memory corruption (as was the case before this bug was fixed).
+do_test alter3-8.1 {
+ execsql {
+ CREATE TABLE t4(c1);
+ }
+} {}
+set ::sql ""
+do_test alter3-8.2 {
+ set cols c1
+ for {set i 2} {$i < 100} {incr i} {
+ execsql "
+ ALTER TABLE t4 ADD c$i
+ "
+ lappend cols c$i
+ }
+ set ::sql "CREATE TABLE t4([join $cols {, }])"
+ list
+} {}
+do_test alter3-8.2 {
+ execsql {
+ SELECT sql FROM sqlite_master WHERE name = 't4';
+ }
+} [list $::sql]
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/altermalloc.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/altermalloc.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,126 @@
+# 2005 September 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing the ALTER TABLE statement and
+# specifically out-of-memory conditions within that command.
+#
+# $Id: altermalloc.test,v 1.3 2006/09/04 18:54:14 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if memory debugging is turned on.
+#
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG=1"
+ finish_test
+ return
+}
+
+# If SQLITE_OMIT_ALTERTABLE is defined, omit this file.
+ifcapable !altertable {
+ finish_test
+ return
+}
+
+# Usage: do_malloc_test <test name> <options...>
+#
+# The first argument, <test name>, is used to name the
+# tests executed by this proc. Options are as follows:
+#
+# -tclprep TCL script to run to prepare test.
+# -sqlprep SQL script to run to prepare test.
+# -tclbody TCL script to run with malloc failure simulation.
+# -sqlbody SQL script to run with malloc failure simulation.
+# -cleanup TCL script to run after the test.
+#
+# This command runs a series of tests to verify SQLite's ability
+# to handle an out-of-memory condition gracefully. It is assumed
+# that if this condition occurs a malloc() call will return a
+# NULL pointer. Linux, for example, doesn't do that by default. See
+# the "BUGS" section of malloc(3).
+#
+# On each iteration of the loop, the TCL commands in any argument passed
+# to the -tclbody switch are executed, followed by the SQL commands in any
+# argument passed to the -sqlbody switch. On each iteration the
+# Nth call to sqliteMalloc() is made to fail, where N starts at 1 and is
+# increased each time the loop runs. When all commands execute
+# successfully, the loop ends.
+#
+proc do_malloc_test {tn args} {
+ array set ::mallocopts $args
+
+ set ::go 1
+ for {set ::n 1} {$::go} {incr ::n} {
+
+ do_test $tn.$::n {
+
+ sqlite_malloc_fail 0
+ catch {db close}
+ catch {file delete -force test.db}
+ catch {file delete -force test.db-journal}
+ catch {file delete -force test2.db}
+ catch {file delete -force test2.db-journal}
+ set ::DB [sqlite3 db test.db]
+
+ if {[info exists ::mallocopts(-tclprep)]} {
+ eval $::mallocopts(-tclprep)
+ }
+ if {[info exists ::mallocopts(-sqlprep)]} {
+ execsql $::mallocopts(-sqlprep)
+ }
+
+ sqlite_malloc_fail $::n
+ set ::mallocbody {}
+ if {[info exists ::mallocopts(-tclbody)]} {
+ append ::mallocbody "$::mallocopts(-tclbody)\n"
+ }
+ if {[info exists ::mallocopts(-sqlbody)]} {
+ append ::mallocbody "db eval {$::mallocopts(-sqlbody)}"
+ }
+
+ set v [catch $::mallocbody msg]
+
+ set leftover [lindex [sqlite_malloc_stat] 2]
+ if {$leftover>0} {
+ if {$leftover>1} {puts "\nLeftover: $leftover\nReturn=$v Message=$msg"}
+ set ::go 0
+ set v {1 1}
+ } else {
+ set v2 [expr {$msg=="" || $msg=="out of memory"}]
+ if {!$v2} {puts "\nError message returned: $msg"}
+ lappend v $v2
+ }
+ } {1 1}
+ sqlite_malloc_fail 0
+
+ if {[info exists ::mallocopts(-cleanup)]} {
+ catch $::mallocopts(-cleanup)
+ }
+ }
+ unset ::mallocopts
+}
+
+do_malloc_test altermalloc-1 -tclprep {
+ db close
+} -tclbody {
+ if {[catch {sqlite3 db test.db}]} {
+ error "out of memory"
+ }
+} -sqlbody {
+ CREATE TABLE t1(a int);
+ ALTER TABLE t1 ADD COLUMN b INTEGER DEFAULT NULL;
+ ALTER TABLE t1 ADD COLUMN c TEXT DEFAULT 'default-text';
+ ALTER TABLE t1 RENAME TO t2;
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/analyze.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/analyze.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,257 @@
+# 2005 July 22
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+# This file implements tests for the ANALYZE command.
+#
+# $Id: analyze.test,v 1.5 2005/09/10 22:40:54 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# There is nothing to test if ANALYZE is disabled for this build.
+#
+ifcapable {!analyze} {
+ finish_test
+ return
+}
+
+# Basic sanity checks.
+#
+do_test analyze-1.1 {
+ catchsql {
+ ANALYZE no_such_table
+ }
+} {1 {no such table: no_such_table}}
+do_test analyze-1.2 {
+ execsql {
+ SELECT count(*) FROM sqlite_master WHERE name='sqlite_stat1'
+ }
+} {0}
+do_test analyze-1.3 {
+ catchsql {
+ ANALYZE no_such_db.no_such_table
+ }
+} {1 {unknown database no_such_db}}
+do_test analyze-1.4 {
+ execsql {
+ SELECT count(*) FROM sqlite_master WHERE name='sqlite_stat1'
+ }
+} {0}
+do_test analyze-1.5.1 {
+ catchsql {
+ ANALYZE
+ }
+} {0 {}}
+do_test analyze-1.5.2 {
+ catchsql {
+ PRAGMA empty_result_callbacks=1;
+ ANALYZE
+ }
+} {0 {}}
+do_test analyze-1.6 {
+ execsql {
+ SELECT count(*) FROM sqlite_master WHERE name='sqlite_stat1'
+ }
+} {1}
+do_test analyze-1.7 {
+ execsql {
+ SELECT * FROM sqlite_stat1
+ }
+} {}
+do_test analyze-1.8 {
+ catchsql {
+ ANALYZE main
+ }
+} {0 {}}
+do_test analyze-1.9 {
+ execsql {
+ SELECT * FROM sqlite_stat1
+ }
+} {}
+do_test analyze-1.10 {
+ catchsql {
+ CREATE TABLE t1(a,b);
+ ANALYZE main.t1;
+ }
+} {0 {}}
+do_test analyze-1.11 {
+ execsql {
+ SELECT * FROM sqlite_stat1
+ }
+} {}
+do_test analyze-1.12 {
+ catchsql {
+ ANALYZE t1;
+ }
+} {0 {}}
+do_test analyze-1.13 {
+ execsql {
+ SELECT * FROM sqlite_stat1
+ }
+} {}
+
+# Create some indices that can be analyzed. But do not yet add
+# data. Without data in the tables, no analysis is done.
+#
+do_test analyze-2.1 {
+ execsql {
+ CREATE INDEX t1i1 ON t1(a);
+ ANALYZE main.t1;
+ SELECT * FROM sqlite_stat1 ORDER BY idx;
+ }
+} {}
+do_test analyze-2.2 {
+ execsql {
+ CREATE INDEX t1i2 ON t1(b);
+ ANALYZE t1;
+ SELECT * FROM sqlite_stat1 ORDER BY idx;
+ }
+} {}
+do_test analyze-2.3 {
+ execsql {
+ CREATE INDEX t1i3 ON t1(a,b);
+ ANALYZE main;
+ SELECT * FROM sqlite_stat1 ORDER BY idx;
+ }
+} {}
+
+# Start adding data to the table. Verify that the analysis
+# is done correctly.
+#
+do_test analyze-3.1 {
+ execsql {
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(1,3);
+ ANALYZE main.t1;
+ SELECT idx, stat FROM sqlite_stat1 ORDER BY idx;
+ }
+} {t1i1 {2 2} t1i2 {2 1} t1i3 {2 2 1}}
+do_test analyze-3.2 {
+ execsql {
+ INSERT INTO t1 VALUES(1,4);
+ INSERT INTO t1 VALUES(1,5);
+ ANALYZE t1;
+ SELECT idx, stat FROM sqlite_stat1 ORDER BY idx;
+ }
+} {t1i1 {4 4} t1i2 {4 1} t1i3 {4 4 1}}
+do_test analyze-3.3 {
+ execsql {
+ INSERT INTO t1 VALUES(2,5);
+ ANALYZE main;
+ SELECT idx, stat FROM sqlite_stat1 ORDER BY idx;
+ }
+} {t1i1 {5 3} t1i2 {5 2} t1i3 {5 3 1}}
+do_test analyze-3.4 {
+ execsql {
+ CREATE TABLE t2 AS SELECT * FROM t1;
+ CREATE INDEX t2i1 ON t2(a);
+ CREATE INDEX t2i2 ON t2(b);
+ CREATE INDEX t2i3 ON t2(a,b);
+ ANALYZE;
+ SELECT idx, stat FROM sqlite_stat1 ORDER BY idx;
+ }
+} {t1i1 {5 3} t1i2 {5 2} t1i3 {5 3 1} t2i1 {5 3} t2i2 {5 2} t2i3 {5 3 1}}
+do_test analyze-3.5 {
+ execsql {
+ DROP INDEX t2i3;
+ ANALYZE t1;
+ SELECT idx, stat FROM sqlite_stat1 ORDER BY idx;
+ }
+} {t1i1 {5 3} t1i2 {5 2} t1i3 {5 3 1} t2i1 {5 3} t2i2 {5 2} t2i3 {5 3 1}}
+do_test analyze-3.6 {
+ execsql {
+ ANALYZE t2;
+ SELECT idx, stat FROM sqlite_stat1 ORDER BY idx;
+ }
+} {t1i1 {5 3} t1i2 {5 2} t1i3 {5 3 1} t2i1 {5 3} t2i2 {5 2}}
+do_test analyze-3.7 {
+ execsql {
+ DROP INDEX t2i2;
+ ANALYZE t2;
+ SELECT idx, stat FROM sqlite_stat1 ORDER BY idx;
+ }
+} {t1i1 {5 3} t1i2 {5 2} t1i3 {5 3 1} t2i1 {5 3}}
+do_test analyze-3.8 {
+ execsql {
+ CREATE TABLE t3 AS SELECT a, b, rowid AS c, 'hi' AS d FROM t1;
+ CREATE INDEX t3i1 ON t3(a);
+ CREATE INDEX t3i2 ON t3(a,b,c,d);
+ CREATE INDEX t3i3 ON t3(d,b,c,a);
+ DROP TABLE t1;
+ DROP TABLE t2;
+ ANALYZE;
+ SELECT idx, stat FROM sqlite_stat1 ORDER BY idx;
+ }
+} {t3i1 {5 3} t3i2 {5 3 1 1 1} t3i3 {5 5 2 1 1}}
+
+# Try corrupting the sqlite_stat1 table and make sure the
+# database is still able to function.
+#
+do_test analyze-4.0 {
+ sqlite3 db2 test.db
+ db2 eval {
+ CREATE TABLE t4(x,y,z);
+ CREATE INDEX t4i1 ON t4(x);
+ CREATE INDEX t4i2 ON t4(y);
+ INSERT INTO t4 SELECT a,b,c FROM t3;
+ }
+ db2 close
+ db close
+ sqlite3 db test.db
+ execsql {
+ ANALYZE;
+ SELECT idx, stat FROM sqlite_stat1 ORDER BY idx;
+ }
+} {t3i1 {5 3} t3i2 {5 3 1 1 1} t3i3 {5 5 2 1 1} t4i1 {5 3} t4i2 {5 2}}
+do_test analyze-4.1 {
+ execsql {
+ PRAGMA writable_schema=on;
+ INSERT INTO sqlite_stat1 VALUES(null,null,null);
+ PRAGMA writable_schema=off;
+ }
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM t4 WHERE x=1234;
+ }
+} {}
+do_test analyze-4.2 {
+ execsql {
+ PRAGMA writable_schema=on;
+ DELETE FROM sqlite_stat1;
+ INSERT INTO sqlite_stat1 VALUES('t4','t4i1','nonsense');
+ INSERT INTO sqlite_stat1 VALUES('t4','t4i2','120897349817238741092873198273409187234918720394817209384710928374109827172901827349871928741910');
+ PRAGMA writable_schema=off;
+ }
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM t4 WHERE x=1234;
+ }
+} {}
+
+# This test corrupts the database file so it must be the last test
+# in the series.
+#
+do_test analyze-99.1 {
+ execsql {
+ PRAGMA writable_schema=on;
+ UPDATE sqlite_master SET sql='nonsense';
+ }
+ db close
+ sqlite3 db test.db
+ catchsql {
+ ANALYZE
+ }
+} {1 {malformed database schema - near "nonsense": syntax error}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/async.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/async.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,65 @@
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file runs a selection of the other test scripts with asynchronous IO enabled.
+#
+# $Id: async.test,v 1.7 2006/03/19 13:00:25 drh Exp $
+
+
+if {[catch {sqlite3async_enable}]} {
+ # The async logic is not built into this system
+ return
+}
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+rename finish_test really_finish_test
+proc finish_test {} {}
+set ISQUICK 1
+
+set INCLUDE {
+ select1.test
+ select2.test
+ select3.test
+ select4.test
+ insert.test
+ insert2.test
+ insert3.test
+ trans.test
+}
+# set INCLUDE {select4.test}
+
+# Enable asynchronous IO.
+sqlite3async_enable 1
+
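+# Wrap do_test so that the asynchronous write-queue is flushed after every
+# test: the background writer is told to halt once it goes idle, started,
+# and then waited for, so each test's writes reach the database file before
+# the next test runs.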
+rename do_test really_do_test
+proc do_test {name args} {
+ uplevel really_do_test async_io-$name $args
+ sqlite3async_halt idle
+ sqlite3async_start
+ sqlite3async_wait
+}
+
+foreach testfile [lsort -dictionary [glob $testdir/*.test]] {
+ set tail [file tail $testfile]
+ if {[lsearch -exact $INCLUDE $tail]<0} continue
+ source $testfile
+ catch {db close}
+}
+
+# Flush the write-queue and disable asynchronous IO. This should ensure
+# all allocated memory is cleaned up.
+set sqlite3async_trace 1
+sqlite3async_halt idle
+sqlite3async_start
+sqlite3async_wait
+sqlite3async_enable 0
+set sqlite3async_trace 0
+
+really_finish_test
+rename really_do_test do_test
+rename really_finish_test finish_test
Added: freeswitch/trunk/libs/sqlite/test/async2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/async2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,119 @@
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# $Id: async2.test,v 1.3 2006/02/14 14:02:08 danielk1977 Exp $
+
+
+if {[info commands sqlite3async_enable]==""} {
+ # The async logic is not built into this system
+ return
+}
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Enable asynchronous IO.
+
+set setup_script {
+ CREATE TABLE counter(c);
+ INSERT INTO counter(c) VALUES (1);
+}
+
+set sql_script {
+ BEGIN;
+ UPDATE counter SET c = 2;
+ CREATE TABLE t1(a PRIMARY KEY, b, c);
+ CREATE TABLE t2(a PRIMARY KEY, b, c);
+ COMMIT;
+
+ BEGIN;
+ UPDATE counter SET c = 3;
+ INSERT INTO t1 VALUES('abcdefghij', 'four', 'score');
+ INSERT INTO t2 VALUES('klmnopqrst', 'and', 'seven');
+ COMMIT;
+
+ UPDATE counter SET c = 'FIN';
+}
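+
+# The counter table records how far the script above got: c is set to 2
+# and then 3 at the start of the two transactions, and to 'FIN' at the
+# end.  After an injected failure interrupts the asynchronous writer,
+# re-reading c from a fresh connection shows which transactions were
+# durably committed, and the switch below checks that the rest of the
+# database is consistent with that point.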
+
+db close
+
+
+foreach err [list ioerr malloc] {
+ set ::go 1
+ for {set n 1} {$::go} {incr n} {
+ set ::sqlite_io_error_pending 0
+ sqlite_malloc_fail 0
+ file delete -force test.db test.db-journal
+ sqlite3 db test.db
+ execsql $::setup_script
+ db close
+
+ sqlite3async_enable 1
+ sqlite3 db test.db
+ execsql $::sql_script
+ db close
+
+ switch -- $err {
+ ioerr { set ::sqlite_io_error_pending $n }
+ malloc { sqlite_malloc_fail $n }
+ }
+ sqlite3async_halt idle
+ sqlite3async_start
+ sqlite3async_wait
+
+ set ::sqlite_io_error_pending 0
+ sqlite_malloc_fail 0
+
+ sqlite3 db test.db
+ set c [db eval {SELECT c FROM counter LIMIT 1}]
+ switch -- $c {
+ 1 {
+ do_test async-$err-1.1.$n {
+ execsql {
+ SELECT name FROM sqlite_master;
+ }
+ } {counter}
+ }
+ 2 {
+ do_test async-$err-1.2.$n.1 {
+ execsql {
+ SELECT * FROM t1;
+ }
+ } {}
+ do_test async-$err-1.2.$n.2 {
+ execsql {
+ SELECT * FROM t2;
+ }
+ } {}
+ }
+ 3 {
+ do_test async-$err-1.3.$n.1 {
+ execsql {
+ SELECT * FROM t1;
+ }
+ } {abcdefghij four score}
+ do_test async-$err-1.3.$n.2 {
+ execsql {
+ SELECT * FROM t2;
+ }
+ } {klmnopqrst and seven}
+ }
+ FIN {
+ set ::go 0
+ }
+ }
+
+ sqlite3async_enable 0
+ }
+}
+
+catch {db close}
+sqlite3async_halt idle
+sqlite3async_start
+sqlite3async_wait
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/attach.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/attach.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,738 @@
+# 2003 April 4
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the ATTACH and DETACH commands
+# and related functionality.
+#
+# $Id: attach.test,v 1.43 2006/05/25 12:17:32 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+for {set i 2} {$i<=15} {incr i} {
+ file delete -force test$i.db
+ file delete -force test$i.db-journal
+}
+
+set btree_trace 0
+do_test attach-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(3,4);
+ SELECT * FROM t1;
+ }
+} {1 2 3 4}
+do_test attach-1.2 {
+ sqlite3 db2 test2.db
+ execsql {
+ CREATE TABLE t2(x,y);
+ INSERT INTO t2 VALUES(1,'x');
+ INSERT INTO t2 VALUES(2,'y');
+ SELECT * FROM t2;
+ } db2
+} {1 x 2 y}
+do_test attach-1.3 {
+ execsql {
+ ATTACH DATABASE 'test2.db' AS two;
+ SELECT * FROM two.t2;
+ }
+} {1 x 2 y}
+do_test attach-1.4 {
+ execsql {
+ SELECT * FROM t2;
+ }
+} {1 x 2 y}
+do_test attach-1.5 {
+ execsql {
+ DETACH DATABASE two;
+ SELECT * FROM t1;
+ }
+} {1 2 3 4}
+do_test attach-1.6 {
+ catchsql {
+ SELECT * FROM t2;
+ }
+} {1 {no such table: t2}}
+do_test attach-1.7 {
+ catchsql {
+ SELECT * FROM two.t2;
+ }
+} {1 {no such table: two.t2}}
+do_test attach-1.8 {
+ catchsql {
+ ATTACH DATABASE 'test3.db' AS three;
+ }
+} {0 {}}
+do_test attach-1.9 {
+ catchsql {
+ SELECT * FROM three.sqlite_master;
+ }
+} {0 {}}
+do_test attach-1.10 {
+ catchsql {
+ DETACH DATABASE [three];
+ }
+} {0 {}}
+do_test attach-1.11 {
+ execsql {
+ ATTACH 'test.db' AS db2;
+ ATTACH 'test.db' AS db3;
+ ATTACH 'test.db' AS db4;
+ ATTACH 'test.db' AS db5;
+ ATTACH 'test.db' AS db6;
+ ATTACH 'test.db' AS db7;
+ ATTACH 'test.db' AS db8;
+ ATTACH 'test.db' AS db9;
+ }
+} {}
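+# db_list returns the attached databases of a connection as a flat list
+# of seq-number/name pairs, taken from PRAGMA database_list (which reports
+# seq, name and file for each attachment); the file name is dropped so the
+# expected results below stay stable.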
+proc db_list {db} {
+ set list {}
+ foreach {idx name file} [execsql {PRAGMA database_list} $db] {
+ lappend list $idx $name
+ }
+ return $list
+}
+ifcapable schema_pragmas {
+do_test attach-1.11b {
+ db_list db
+} {0 main 2 db2 3 db3 4 db4 5 db5 6 db6 7 db7 8 db8 9 db9}
+} ;# ifcapable schema_pragmas
+do_test attach-1.12 {
+ catchsql {
+ ATTACH 'test.db' as db2;
+ }
+} {1 {database db2 is already in use}}
+do_test attach-1.13 {
+ catchsql {
+ ATTACH 'test.db' as db5;
+ }
+} {1 {database db5 is already in use}}
+do_test attach-1.14 {
+ catchsql {
+ ATTACH 'test.db' as db9;
+ }
+} {1 {database db9 is already in use}}
+do_test attach-1.15 {
+ catchsql {
+ ATTACH 'test.db' as main;
+ }
+} {1 {database main is already in use}}
+ifcapable tempdb {
+ do_test attach-1.16 {
+ catchsql {
+ ATTACH 'test.db' as temp;
+ }
+ } {1 {database temp is already in use}}
+}
+do_test attach-1.17 {
+ catchsql {
+ ATTACH 'test.db' as MAIN;
+ }
+} {1 {database MAIN is already in use}}
+do_test attach-1.18 {
+ catchsql {
+ ATTACH 'test.db' as db10;
+ ATTACH 'test.db' as db11;
+ }
+} {0 {}}
+do_test attach-1.19 {
+ catchsql {
+ ATTACH 'test.db' as db12;
+ }
+} {1 {too many attached databases - max 10}}
+do_test attach-1.20.1 {
+ execsql {
+ DETACH db5;
+ }
+} {}
+ifcapable schema_pragmas {
+do_test attach-1.20.2 {
+ db_list db
+} {0 main 2 db2 3 db3 4 db4 5 db6 6 db7 7 db8 8 db9 9 db10 10 db11}
+} ;# ifcapable schema_pragmas
+integrity_check attach-1.20.3
+ifcapable tempdb {
+ execsql {select * from sqlite_temp_master}
+}
+do_test attach-1.21 {
+ catchsql {
+ ATTACH 'test.db' as db12;
+ }
+} {0 {}}
+do_test attach-1.22 {
+ catchsql {
+ ATTACH 'test.db' as db13;
+ }
+} {1 {too many attached databases - max 10}}
+do_test attach-1.23 {
+ catchsql {
+ DETACH "db14";
+ }
+} {1 {no such database: db14}}
+do_test attach-1.24 {
+ catchsql {
+ DETACH db12;
+ }
+} {0 {}}
+do_test attach-1.25 {
+ catchsql {
+ DETACH db12;
+ }
+} {1 {no such database: db12}}
+do_test attach-1.26 {
+ catchsql {
+ DETACH main;
+ }
+} {1 {cannot detach database main}}
+
+ifcapable tempdb {
+ do_test attach-1.27 {
+ catchsql {
+ DETACH Temp;
+ }
+ } {1 {cannot detach database Temp}}
+} else {
+ do_test attach-1.27 {
+ catchsql {
+ DETACH Temp;
+ }
+ } {1 {no such database: Temp}}
+}
+
+do_test attach-1.28 {
+ catchsql {
+ DETACH db11;
+ DETACH db10;
+ DETACH db9;
+ DETACH db8;
+ DETACH db7;
+ DETACH db6;
+ DETACH db4;
+ DETACH db3;
+ DETACH db2;
+ }
+} {0 {}}
+ifcapable schema_pragmas {
+ ifcapable tempdb {
+ do_test attach-1.29 {
+ db_list db
+ } {0 main 1 temp}
+ } else {
+ do_test attach-1.29 {
+ db_list db
+ } {0 main}
+ }
+} ;# ifcapable schema_pragmas
+
+ifcapable {trigger} { # Only do the following tests if triggers are enabled
+do_test attach-2.1 {
+ execsql {
+ CREATE TABLE tx(x1,x2,y1,y2);
+ CREATE TRIGGER r1 AFTER UPDATE ON t2 FOR EACH ROW BEGIN
+ INSERT INTO tx(x1,x2,y1,y2) VALUES(OLD.x,NEW.x,OLD.y,NEW.y);
+ END;
+ SELECT * FROM tx;
+ } db2;
+} {}
+do_test attach-2.2 {
+ execsql {
+ UPDATE t2 SET x=x+10;
+ SELECT * FROM tx;
+ } db2;
+} {1 11 x x 2 12 y y}
+do_test attach-2.3 {
+ execsql {
+ CREATE TABLE tx(x1,x2,y1,y2);
+ SELECT * FROM tx;
+ }
+} {}
+do_test attach-2.4 {
+ execsql {
+ ATTACH 'test2.db' AS db2;
+ }
+} {}
+do_test attach-2.5 {
+ execsql {
+ UPDATE db2.t2 SET x=x+10;
+ SELECT * FROM db2.tx;
+ }
+} {1 11 x x 2 12 y y 11 21 x x 12 22 y y}
+do_test attach-2.6 {
+ execsql {
+ SELECT * FROM main.tx;
+ }
+} {}
+do_test attach-2.7 {
+ execsql {
+ SELECT type, name, tbl_name FROM db2.sqlite_master;
+ }
+} {table t2 t2 table tx tx trigger r1 t2}
+
+ifcapable schema_pragmas&&tempdb {
+ do_test attach-2.8 {
+ db_list db
+ } {0 main 1 temp 2 db2}
+} ;# ifcapable schema_pragmas&&tempdb
+ifcapable schema_pragmas&&!tempdb {
+ do_test attach-2.8 {
+ db_list db
+ } {0 main 2 db2}
+} ;# ifcapable schema_pragmas&&!tempdb
+
+do_test attach-2.9 {
+ execsql {
+ CREATE INDEX i2 ON t2(x);
+ SELECT * FROM t2 WHERE x>5;
+ } db2
+} {21 x 22 y}
+do_test attach-2.10 {
+ execsql {
+ SELECT type, name, tbl_name FROM sqlite_master;
+ } db2
+} {table t2 t2 table tx tx trigger r1 t2 index i2 t2}
+#do_test attach-2.11 {
+# catchsql {
+# SELECT * FROM t2 WHERE x>5;
+# }
+#} {1 {database schema has changed}}
+ifcapable schema_pragmas {
+ ifcapable tempdb {
+ do_test attach-2.12 {
+ db_list db
+ } {0 main 1 temp 2 db2}
+ } else {
+ do_test attach-2.12 {
+ db_list db
+ } {0 main 2 db2}
+ }
+} ;# ifcapable schema_pragmas
+do_test attach-2.13 {
+ catchsql {
+ SELECT * FROM t2 WHERE x>5;
+ }
+} {0 {21 x 22 y}}
+do_test attach-2.14 {
+ execsql {
+ SELECT type, name, tbl_name FROM sqlite_master;
+ }
+} {table t1 t1 table tx tx}
+do_test attach-2.15 {
+ execsql {
+ SELECT type, name, tbl_name FROM db2.sqlite_master;
+ }
+} {table t2 t2 table tx tx trigger r1 t2 index i2 t2}
+do_test attach-2.16 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ ATTACH 'test2.db' AS db2;
+ SELECT type, name, tbl_name FROM db2.sqlite_master;
+ }
+} {table t2 t2 table tx tx trigger r1 t2 index i2 t2}
+} ;# End of ifcapable {trigger}
+
+do_test attach-3.1 {
+ db close
+ db2 close
+ sqlite3 db test.db
+ sqlite3 db2 test2.db
+ execsql {
+ SELECT * FROM t1
+ }
+} {1 2 3 4}
+
+# If we are testing a version of the code that lacks trigger support,
+# adjust the database contents so that they are the same as if triggers
+# had been enabled.
+ifcapable {!trigger} {
+ db2 eval {
+ DELETE FROM t2;
+ INSERT INTO t2 VALUES(21, 'x');
+ INSERT INTO t2 VALUES(22, 'y');
+ CREATE TABLE tx(x1,x2,y1,y2);
+ INSERT INTO tx VALUES(1, 11, 'x', 'x');
+ INSERT INTO tx VALUES(2, 12, 'y', 'y');
+ INSERT INTO tx VALUES(11, 21, 'x', 'x');
+ INSERT INTO tx VALUES(12, 22, 'y', 'y');
+ CREATE INDEX i2 ON t2(x);
+ }
+}
+
+do_test attach-3.2 {
+ catchsql {
+ SELECT * FROM t2
+ }
+} {1 {no such table: t2}}
+do_test attach-3.3 {
+ catchsql {
+ ATTACH DATABASE 'test2.db' AS db2;
+ SELECT * FROM t2
+ }
+} {0 {21 x 22 y}}
+
+# Even though 'db' has started a transaction, it should not yet have
+# a lock on test2.db, so test2.db should still be readable through 'db2'.
+do_test attach-3.4 {
+ execsql BEGIN
+ catchsql {
+ SELECT * FROM t2;
+ } db2;
+} {0 {21 x 22 y}}
+
+# Reading test2.db through db within a transaction should not
+# prevent test2.db from being read by db2.
+do_test attach-3.5 {
+ execsql {SELECT * FROM t2}
+btree_breakpoint
+ catchsql {
+ SELECT * FROM t2;
+ } db2;
+} {0 {21 x 22 y}}
+
+# Making a change to test2.db through db causes test2.db to get
+# a reserved lock. It should still be accessible through db2.
+do_test attach-3.6 {
+ execsql {
+ UPDATE t2 SET x=x+1 WHERE x=50;
+ }
+ catchsql {
+ SELECT * FROM t2;
+ } db2;
+} {0 {21 x 22 y}}
+
+do_test attach-3.7 {
+ execsql ROLLBACK
+ execsql {SELECT * FROM t2} db2
+} {21 x 22 y}
+
+# Start transactions on both db and db2. Once again, even though
+# we make a change to test2.db using db2, only a RESERVED lock is
+# obtained, so test2.db should still be readable using db.
+#
+do_test attach-3.8 {
+ execsql BEGIN
+ execsql BEGIN db2
+ execsql {UPDATE t2 SET x=0 WHERE 0} db2
+ catchsql {SELECT * FROM t2}
+} {0 {21 x 22 y}}
+
+# It is also still accessible from db2.
+do_test attach-3.9 {
+ catchsql {SELECT * FROM t2} db2
+} {0 {21 x 22 y}}
+
+do_test attach-3.10 {
+ execsql {SELECT * FROM t1}
+} {1 2 3 4}
+
+do_test attach-3.11 {
+ catchsql {UPDATE t1 SET a=a+1}
+} {0 {}}
+do_test attach-3.12 {
+ execsql {SELECT * FROM t1}
+} {2 2 4 4}
+
+# db2 has a RESERVED lock on test2.db, so db cannot write to any tables
+# in test2.db.
+do_test attach-3.13 {
+ catchsql {UPDATE t2 SET x=x+1 WHERE x=50}
+} {1 {database is locked}}
+
+# Change for version 3. Transaction is no longer rolled back
+# for a locked database.
+execsql {ROLLBACK}
+
+# db is able to reread its schema because db2 still only holds a
+# reserved lock.
+do_test attach-3.14 {
+ catchsql {SELECT * FROM t1}
+} {0 {1 2 3 4}}
+do_test attach-3.15 {
+ execsql COMMIT db2
+ execsql {SELECT * FROM t1}
+} {1 2 3 4}
+
+#set btree_trace 1
+
+# Ticket #323
+do_test attach-4.1 {
+ execsql {DETACH db2}
+ db2 close
+ sqlite3 db2 test2.db
+ execsql {
+ CREATE TABLE t3(x,y);
+ CREATE UNIQUE INDEX t3i1 ON t3(x);
+ INSERT INTO t3 VALUES(1,2);
+ SELECT * FROM t3;
+ } db2;
+} {1 2}
+do_test attach-4.2 {
+ execsql {
+ CREATE TABLE t3(a,b);
+ CREATE UNIQUE INDEX t3i1b ON t3(a);
+ INSERT INTO t3 VALUES(9,10);
+ SELECT * FROM t3;
+ }
+} {9 10}
+do_test attach-4.3 {
+ execsql {
+ ATTACH DATABASE 'test2.db' AS db2;
+ SELECT * FROM db2.t3;
+ }
+} {1 2}
+do_test attach-4.4 {
+ execsql {
+ SELECT * FROM main.t3;
+ }
+} {9 10}
+do_test attach-4.5 {
+ execsql {
+ INSERT INTO db2.t3 VALUES(9,10);
+ SELECT * FROM db2.t3;
+ }
+} {1 2 9 10}
+execsql {
+ DETACH db2;
+}
+ifcapable {trigger} {
+ do_test attach-4.6 {
+ execsql {
+ CREATE TABLE t4(x);
+ CREATE TRIGGER t3r3 AFTER INSERT ON t3 BEGIN
+ INSERT INTO t4 VALUES('db2.' || NEW.x);
+ END;
+ INSERT INTO t3 VALUES(6,7);
+ SELECT * FROM t4;
+ } db2
+ } {db2.6}
+ do_test attach-4.7 {
+ execsql {
+ CREATE TABLE t4(y);
+ CREATE TRIGGER t3r3 AFTER INSERT ON t3 BEGIN
+ INSERT INTO t4 VALUES('main.' || NEW.a);
+ END;
+ INSERT INTO main.t3 VALUES(11,12);
+ SELECT * FROM main.t4;
+ }
+ } {main.11}
+}
+ifcapable {!trigger} {
+  # When we do not have trigger support, set up the tables as they
+ # would have been had triggers been there. The tests that follow need
+ # this setup.
+ execsql {
+ CREATE TABLE t4(x);
+ INSERT INTO t3 VALUES(6,7);
+ INSERT INTO t4 VALUES('db2.6');
+ INSERT INTO t4 VALUES('db2.13');
+ } db2
+ execsql {
+ CREATE TABLE t4(y);
+ INSERT INTO main.t3 VALUES(11,12);
+ INSERT INTO t4 VALUES('main.11');
+ }
+}
+
+
+# This one is tricky. On the UNION ALL select, we have to make sure
+# the schema for both main and db2 is valid before starting to execute
+# the first query of the UNION ALL. If we wait to test the validity of
+# the schema for main until after the first query has run, that test will
+# fail and the query will abort but we will have already output some
+# results. When the query is retried, the results will be repeated.
+#
+ifcapable compound {
+do_test attach-4.8 {
+ execsql {
+ ATTACH DATABASE 'test2.db' AS db2;
+ INSERT INTO db2.t3 VALUES(13,14);
+ SELECT * FROM db2.t4 UNION ALL SELECT * FROM main.t4;
+ }
+} {db2.6 db2.13 main.11}
+
+do_test attach-4.9 {
+ ifcapable {!trigger} {execsql {INSERT INTO main.t4 VALUES('main.15')}}
+ execsql {
+ INSERT INTO main.t3 VALUES(15,16);
+ SELECT * FROM db2.t4 UNION ALL SELECT * FROM main.t4;
+ }
+} {db2.6 db2.13 main.11 main.15}
+} ;# ifcapable compound
+
+ifcapable !compound {
+ ifcapable {!trigger} {execsql {INSERT INTO main.t4 VALUES('main.15')}}
+ execsql {
+ ATTACH DATABASE 'test2.db' AS db2;
+ INSERT INTO db2.t3 VALUES(13,14);
+ INSERT INTO main.t3 VALUES(15,16);
+ }
+} ;# ifcapable !compound
+
+ifcapable view {
+do_test attach-4.10 {
+ execsql {
+ DETACH DATABASE db2;
+ }
+ execsql {
+ CREATE VIEW v3 AS SELECT x*100+y FROM t3;
+ SELECT * FROM v3;
+ } db2
+} {102 910 607 1314}
+do_test attach-4.11 {
+ execsql {
+ CREATE VIEW v3 AS SELECT a*100+b FROM t3;
+ SELECT * FROM v3;
+ }
+} {910 1112 1516}
+do_test attach-4.12 {
+ execsql {
+ ATTACH DATABASE 'test2.db' AS db2;
+ SELECT * FROM db2.v3;
+ }
+} {102 910 607 1314}
+do_test attach-4.13 {
+ execsql {
+ SELECT * FROM main.v3;
+ }
+} {910 1112 1516}
+} ;# ifcapable view
+
+# Tests for the sqliteFix...() routines in attach.c
+#
+ifcapable {trigger} {
+do_test attach-5.1 {
+ db close
+ sqlite3 db test.db
+ db2 close
+ file delete -force test2.db
+ sqlite3 db2 test2.db
+ catchsql {
+ ATTACH DATABASE 'test.db' AS orig;
+ CREATE TRIGGER r1 AFTER INSERT ON orig.t1 BEGIN
+ SELECT 'no-op';
+ END;
+ } db2
+} {1 {trigger r1 cannot reference objects in database orig}}
+do_test attach-5.2 {
+ catchsql {
+ CREATE TABLE t5(x,y);
+ CREATE TRIGGER r5 AFTER INSERT ON t5 BEGIN
+ SELECT 'no-op';
+ END;
+ } db2
+} {0 {}}
+do_test attach-5.3 {
+ catchsql {
+ DROP TRIGGER r5;
+ CREATE TRIGGER r5 AFTER INSERT ON t5 BEGIN
+ SELECT 'no-op' FROM orig.t1;
+ END;
+ } db2
+} {1 {trigger r5 cannot reference objects in database orig}}
+ifcapable tempdb {
+ do_test attach-5.4 {
+ catchsql {
+ CREATE TEMP TABLE t6(p,q,r);
+ CREATE TRIGGER r5 AFTER INSERT ON t5 BEGIN
+ SELECT 'no-op' FROM temp.t6;
+ END;
+ } db2
+ } {1 {trigger r5 cannot reference objects in database temp}}
+}
+ifcapable subquery {
+ do_test attach-5.5 {
+ catchsql {
+ CREATE TRIGGER r5 AFTER INSERT ON t5 BEGIN
+ SELECT 'no-op' || (SELECT * FROM temp.t6);
+ END;
+ } db2
+ } {1 {trigger r5 cannot reference objects in database temp}}
+ do_test attach-5.6 {
+ catchsql {
+ CREATE TRIGGER r5 AFTER INSERT ON t5 BEGIN
+ SELECT 'no-op' FROM t1 WHERE x<(SELECT min(x) FROM temp.t6);
+ END;
+ } db2
+ } {1 {trigger r5 cannot reference objects in database temp}}
+ do_test attach-5.7 {
+ catchsql {
+ CREATE TRIGGER r5 AFTER INSERT ON t5 BEGIN
+ SELECT 'no-op' FROM t1 GROUP BY 1 HAVING x<(SELECT min(x) FROM temp.t6);
+ END;
+ } db2
+ } {1 {trigger r5 cannot reference objects in database temp}}
+ do_test attach-5.7 {
+ catchsql {
+ CREATE TRIGGER r5 AFTER INSERT ON t5 BEGIN
+ SELECT max(1,x,(SELECT min(x) FROM temp.t6)) FROM t1;
+ END;
+ } db2
+ } {1 {trigger r5 cannot reference objects in database temp}}
+ do_test attach-5.8 {
+ catchsql {
+ CREATE TRIGGER r5 AFTER INSERT ON t5 BEGIN
+ INSERT INTO t1 VALUES((SELECT min(x) FROM temp.t6),5);
+ END;
+ } db2
+ } {1 {trigger r5 cannot reference objects in database temp}}
+ do_test attach-5.9 {
+ catchsql {
+ CREATE TRIGGER r5 AFTER INSERT ON t5 BEGIN
+ DELETE FROM t1 WHERE x<(SELECT min(x) FROM temp.t6);
+ END;
+ } db2
+ } {1 {trigger r5 cannot reference objects in database temp}}
+} ;# endif subquery
+} ;# endif trigger
+
+# Check to make sure we get a sensible error if unable to open
+# the file that we are trying to attach.
+#
+do_test attach-6.1 {
+ catchsql {
+ ATTACH DATABASE 'no-such-file' AS nosuch;
+ }
+} {0 {}}
+if {$tcl_platform(platform)=="unix"} {
+ do_test attach-6.2 {
+ sqlite3 dbx cannot-read
+ dbx eval {CREATE TABLE t1(a,b,c)}
+ dbx close
+ file attributes cannot-read -permission 0000
+ if {[file writable cannot-read]} {
+ puts "\n**** Tests do not work when run as root ****"
+ file delete -force cannot-read
+ exit 1
+ }
+ catchsql {
+ ATTACH DATABASE 'cannot-read' AS noread;
+ }
+ } {1 {unable to open database: cannot-read}}
+ file delete -force cannot-read
+}
+
+# Check the error message if we try to access a database that has
+# not been attached.
+do_test attach-6.3 {
+ catchsql {
+ CREATE TABLE no_such_db.t1(a, b, c);
+ }
+} {1 {unknown database no_such_db}}
+for {set i 2} {$i<=15} {incr i} {
+ catch {db$i close}
+}
+db close
+file delete -force test2.db
+file delete -force no-such-file
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/attach2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/attach2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,379 @@
+# 2003 July 1
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the ATTACH and DETACH commands
+# and related functionality.
+#
+# $Id: attach2.test,v 1.35 2006/01/03 00:33:50 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Ticket #354
+#
+# Databases test.db and test2.db contain identical schemas. Make
+# sure we can attach test2.db from test.db.
+#
+do_test attach2-1.1 {
+ db eval {
+ CREATE TABLE t1(a,b);
+ CREATE INDEX x1 ON t1(a);
+ }
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ sqlite3 db2 test2.db
+ db2 eval {
+ CREATE TABLE t1(a,b);
+ CREATE INDEX x1 ON t1(a);
+ }
+ catchsql {
+ ATTACH 'test2.db' AS t2;
+ }
+} {0 {}}
+
+# Ticket #514
+#
+proc db_list {db} {
+ set list {}
+ foreach {idx name file} [execsql {PRAGMA database_list} $db] {
+ lappend list $idx $name
+ }
+ return $list
+}
+db eval {DETACH t2}
+do_test attach2-2.1 {
+ # lock test2.db then try to attach it. This is no longer an error because
+ # db2 just RESERVES the database. It does not obtain a write-lock until
+ # we COMMIT.
+ db2 eval {BEGIN}
+ db2 eval {UPDATE t1 SET a = 0 WHERE 0}
+ catchsql {
+ ATTACH 'test2.db' AS t2;
+ }
+} {0 {}}
+ifcapable schema_pragmas {
+do_test attach2-2.2 {
+ # make sure test2.db did get attached.
+ db_list db
+} {0 main 2 t2}
+} ;# ifcapable schema_pragmas
+db2 eval {COMMIT}
+
+do_test attach2-2.5 {
+ # Make sure we can read test2.db from db
+ catchsql {
+ SELECT name FROM t2.sqlite_master;
+ }
+} {0 {t1 x1}}
+do_test attach2-2.6 {
+ # lock test2.db and try to read from it. This should still work because
+ # the lock is only a RESERVED lock which does not prevent reading.
+ #
+ db2 eval BEGIN
+ db2 eval {UPDATE t1 SET a = 0 WHERE 0}
+ catchsql {
+ SELECT name FROM t2.sqlite_master;
+ }
+} {0 {t1 x1}}
+do_test attach2-2.7 {
+  # but we can still read from test.db even though test2.db is locked.
+ catchsql {
+ SELECT name FROM main.sqlite_master;
+ }
+} {0 {t1 x1}}
+do_test attach2-2.8 {
+ # start a transaction on test.db even though test2.db is locked.
+ catchsql {
+ BEGIN;
+ INSERT INTO t1 VALUES(8,9);
+ }
+} {0 {}}
+do_test attach2-2.9 {
+ execsql {
+ SELECT * FROM t1
+ }
+} {8 9}
+do_test attach2-2.10 {
+ # now try to write to test2.db. the write should fail
+ catchsql {
+ INSERT INTO t2.t1 VALUES(1,2);
+ }
+} {1 {database is locked}}
+do_test attach2-2.11 {
+ # when the write failed in the previous test, the transaction should
+ # have rolled back.
+ #
+ # Update for version 3: A transaction is no longer rolled back if a
+ # database is found to be busy.
+ execsql {rollback}
+ db2 eval ROLLBACK
+ execsql {
+ SELECT * FROM t1
+ }
+} {}
+do_test attach2-2.12 {
+ catchsql {
+ COMMIT
+ }
+} {1 {cannot commit - no transaction is active}}
+
+# Ticket #574: Make sure it works using the non-callback API
+#
+do_test attach2-3.1 {
+ set DB [sqlite3_connection_pointer db]
+ set rc [catch {sqlite3_prepare $DB "ATTACH 'test2.db' AS t2" -1 TAIL} VM]
+ if {$rc} {lappend rc $VM}
+ sqlite3_step $VM
+ sqlite3_finalize $VM
+ set rc
+} {0}
+do_test attach2-3.2 {
+ set rc [catch {sqlite3_prepare $DB "DETACH t2" -1 TAIL} VM]
+ if {$rc} {lappend rc $VM}
+ sqlite3_step $VM
+ sqlite3_finalize $VM
+ set rc
+} {0}
+
+db close
+for {set i 2} {$i<=15} {incr i} {
+ catch {db$i close}
+}
+
+# A procedure to verify the status of locks on a database.
+#
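+# In builds that support it, PRAGMA lock_status reports a name/state pair
+# for each attached database (for example unlocked, shared, reserved or
+# pending, with "closed" for a temp database that was never opened); that
+# is the format of the expected_result lists passed to this procedure.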
+proc lock_status {testnum db expected_result} {
+ # If the database was compiled with OMIT_TEMPDB set, then
+ # the lock_status list will not contain an entry for the temp
+ # db. But the test code doesn't know this, so it's easiest
+ # to filter it out of the $expected_result list here.
+ ifcapable !tempdb {
+ set expected_result [concat \
+ [lrange $expected_result 0 1] \
+ [lrange $expected_result 4 end] \
+ ]
+ }
+ do_test attach2-$testnum [subst {
+ $db cache flush ;# The lock_status pragma should not be cached
+ execsql {PRAGMA lock_status} $db
+ }] $expected_result
+}
+set sqlite_os_trace 0
+
+# Tests attach2-4.* test that read-locks work correctly with attached
+# databases.
+do_test attach2-4.1 {
+ sqlite3 db test.db
+ sqlite3 db2 test.db
+ execsql {ATTACH 'test2.db' as file2}
+ execsql {ATTACH 'test2.db' as file2} db2
+} {}
+
+lock_status 4.1.1 db {main unlocked temp closed file2 unlocked}
+lock_status 4.1.2 db2 {main unlocked temp closed file2 unlocked}
+
+do_test attach2-4.2 {
+ # Handle 'db' read-locks test.db
+ execsql {BEGIN}
+ execsql {SELECT * FROM t1}
+ # Lock status:
+ # db - shared(main)
+ # db2 -
+} {}
+
+lock_status 4.2.1 db {main shared temp closed file2 unlocked}
+lock_status 4.2.2 db2 {main unlocked temp closed file2 unlocked}
+
+do_test attach2-4.3 {
+ # The read lock held by db does not prevent db2 from reading test.db
+ execsql {SELECT * FROM t1} db2
+} {}
+
+lock_status 4.3.1 db {main shared temp closed file2 unlocked}
+lock_status 4.3.2 db2 {main unlocked temp closed file2 unlocked}
+
+do_test attach2-4.4 {
+ # db is holding a read lock on test.db, so we should not be able
+ # to commit a write to test.db from db2
+ catchsql {
+ INSERT INTO t1 VALUES(1, 2)
+ } db2
+} {1 {database is locked}}
+
+lock_status 4.4.1 db {main shared temp closed file2 unlocked}
+lock_status 4.4.2 db2 {main unlocked temp closed file2 unlocked}
+
+do_test attach2-4.5 {
+ # Handle 'db2' reserves file2.
+ execsql {BEGIN} db2
+ execsql {INSERT INTO file2.t1 VALUES(1, 2)} db2
+ # Lock status:
+ # db - shared(main)
+ # db2 - reserved(file2)
+} {}
+
+lock_status 4.5.1 db {main shared temp closed file2 unlocked}
+lock_status 4.5.2 db2 {main unlocked temp closed file2 reserved}
+
+do_test attach2-4.6.1 {
+ # Reads are allowed against a reserved database.
+ catchsql {
+ SELECT * FROM file2.t1;
+ }
+ # Lock status:
+ # db - shared(main), shared(file2)
+ # db2 - reserved(file2)
+} {0 {}}
+
+lock_status 4.6.1.1 db {main shared temp closed file2 shared}
+lock_status 4.6.1.2 db2 {main unlocked temp closed file2 reserved}
+
+do_test attach2-4.6.2 {
+ # Writes against a reserved database are not allowed.
+ catchsql {
+ UPDATE file2.t1 SET a=0;
+ }
+} {1 {database is locked}}
+
+lock_status 4.6.2.1 db {main shared temp closed file2 shared}
+lock_status 4.6.2.2 db2 {main unlocked temp closed file2 reserved}
+
+do_test attach2-4.7 {
+ # Ensure handle 'db' retains the lock on the main file after
+ # failing to obtain a write-lock on file2.
+ catchsql {
+ INSERT INTO t1 VALUES(1, 2)
+ } db2
+} {0 {}}
+
+lock_status 4.7.1 db {main shared temp closed file2 shared}
+lock_status 4.7.2 db2 {main reserved temp closed file2 reserved}
+
+do_test attach2-4.8 {
+ # We should still be able to read test.db from db2
+ execsql {SELECT * FROM t1} db2
+} {1 2}
+
+lock_status 4.8.1 db {main shared temp closed file2 shared}
+lock_status 4.8.2 db2 {main reserved temp closed file2 reserved}
+
+do_test attach2-4.9 {
+ # Try to upgrade the handle 'db' lock.
+ catchsql {
+ INSERT INTO t1 VALUES(1, 2)
+ }
+} {1 {database is locked}}
+
+lock_status 4.9.1 db {main shared temp closed file2 shared}
+lock_status 4.9.2 db2 {main reserved temp closed file2 reserved}
+
+do_test attach2-4.10 {
+ # We cannot commit db2 while db is holding a read-lock
+ catchsql {COMMIT} db2
+} {1 {database is locked}}
+
+lock_status 4.10.1 db {main shared temp closed file2 shared}
+lock_status 4.10.2 db2 {main pending temp closed file2 reserved}
+
+set sqlite_os_trace 0
+do_test attach2-4.11 {
+ # db is able to commit.
+ catchsql {COMMIT}
+} {0 {}}
+
+lock_status 4.11.1 db {main unlocked temp closed file2 unlocked}
+lock_status 4.11.2 db2 {main pending temp closed file2 reserved}
+
+do_test attach2-4.12 {
+ # Now we can commit db2
+ catchsql {COMMIT} db2
+} {0 {}}
+
+lock_status 4.12.1 db {main unlocked temp closed file2 unlocked}
+lock_status 4.12.2 db2 {main unlocked temp closed file2 unlocked}
+
+do_test attach2-4.13 {
+ execsql {SELECT * FROM file2.t1}
+} {1 2}
+do_test attach2-4.14 {
+ execsql {INSERT INTO t1 VALUES(1, 2)}
+} {}
+do_test attach2-4.15 {
+ execsql {SELECT * FROM t1} db2
+} {1 2 1 2}
+
+db close
+db2 close
+file delete -force test2.db
+
+# These tests - attach2-5.* - check that the master journal file is deleted
+# correctly when a multi-file transaction is committed or rolled back.
+#
+# Update: It's not actually created if a rollback occurs, so that test
+# doesn't really prove too much.
+foreach f [glob test.db*] {file delete -force $f}
+do_test attach2-5.1 {
+ sqlite3 db test.db
+ execsql {
+ ATTACH 'test.db2' AS aux;
+ }
+} {}
+do_test attach2-5.2 {
+ execsql {
+ BEGIN;
+ CREATE TABLE tbl(a, b, c);
+ CREATE TABLE aux.tbl(a, b, c);
+ COMMIT;
+ }
+} {}
+do_test attach2-5.3 {
+ lsort [glob test.db*]
+} {test.db test.db2}
+do_test attach2-5.4 {
+ execsql {
+ BEGIN;
+ DROP TABLE aux.tbl;
+ DROP TABLE tbl;
+ ROLLBACK;
+ }
+} {}
+do_test attach2-5.5 {
+ lsort [glob test.db*]
+} {test.db test.db2}
+
+# Check that a database cannot be ATTACHed or DETACHed during a transaction.
+do_test attach2-6.1 {
+ execsql {
+ BEGIN;
+ }
+} {}
+do_test attach2-6.2 {
+ catchsql {
+ ATTACH 'test3.db' as aux2;
+ }
+} {1 {cannot ATTACH database within transaction}}
+
+do_test attach2-6.3 {
+ catchsql {
+ DETACH aux;
+ }
+} {1 {cannot DETACH database within transaction}}
+do_test attach2-6.4 {
+ execsql {
+ COMMIT;
+ DETACH aux;
+ }
+} {}
+
+db close
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/attach3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/attach3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,344 @@
+# 2003 July 1
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the ATTACH and DETACH commands
+# and schema changes to attached databases.
+#
+# $Id: attach3.test,v 1.17 2006/06/20 11:01:09 danielk1977 Exp $
+#
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create tables t1 and t2 in the main database
+execsql {
+ CREATE TABLE t1(a, b);
+ CREATE TABLE t2(c, d);
+}
+
+# Create tables t1 and t2 in database file test2.db
+file delete -force test2.db
+file delete -force test2.db-journal
+sqlite3 db2 test2.db
+execsql {
+ CREATE TABLE t1(a, b);
+ CREATE TABLE t2(c, d);
+} db2
+db2 close
+
+# Create a table in the auxiliary database.
+do_test attach3-1.1 {
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ }
+} {}
+do_test attach3-1.2 {
+ execsql {
+ CREATE TABLE aux.t3(e, f);
+ }
+} {}
+do_test attach3-1.3 {
+ execsql {
+ SELECT * FROM sqlite_master WHERE name = 't3';
+ }
+} {}
+do_test attach3-1.4 {
+ execsql {
+ SELECT * FROM aux.sqlite_master WHERE name = 't3';
+ }
+} "table t3 t3 [expr $AUTOVACUUM?5:4] {CREATE TABLE t3(e, f)}"
+do_test attach3-1.5 {
+ execsql {
+ INSERT INTO t3 VALUES(1, 2);
+ SELECT * FROM t3;
+ }
+} {1 2}
+
+# Create an index on the auxiliary database table.
+do_test attach3-2.1 {
+ execsql {
+ CREATE INDEX aux.i1 on t3(e);
+ }
+} {}
+do_test attach3-2.2 {
+ execsql {
+ SELECT * FROM sqlite_master WHERE name = 'i1';
+ }
+} {}
+do_test attach3-2.3 {
+ execsql {
+ SELECT * FROM aux.sqlite_master WHERE name = 'i1';
+ }
+} "index i1 t3 [expr $AUTOVACUUM?6:5] {CREATE INDEX i1 on t3(e)}"
+
+# Drop the index on the aux database table.
+do_test attach3-3.1 {
+ execsql {
+ DROP INDEX aux.i1;
+ SELECT * FROM aux.sqlite_master WHERE name = 'i1';
+ }
+} {}
+do_test attach3-3.2 {
+ execsql {
+ CREATE INDEX aux.i1 on t3(e);
+ SELECT * FROM aux.sqlite_master WHERE name = 'i1';
+ }
+} "index i1 t3 [expr $AUTOVACUUM?6:5] {CREATE INDEX i1 on t3(e)}"
+do_test attach3-3.3 {
+ execsql {
+ DROP INDEX i1;
+ SELECT * FROM aux.sqlite_master WHERE name = 'i1';
+ }
+} {}
+
+# Drop tables t1 and t2 in the auxiliary database.
+do_test attach3-4.1 {
+ execsql {
+ DROP TABLE aux.t1;
+ SELECT name FROM aux.sqlite_master;
+ }
+} {t2 t3}
+do_test attach3-4.2 {
+ # This will drop main.t2
+ execsql {
+ DROP TABLE t2;
+ SELECT name FROM aux.sqlite_master;
+ }
+} {t2 t3}
+do_test attach3-4.3 {
+ execsql {
+ DROP TABLE t2;
+ SELECT name FROM aux.sqlite_master;
+ }
+} {t3}
+
+# Create a view in the auxiliary database.
+ifcapable view {
+do_test attach3-5.1 {
+ execsql {
+ CREATE VIEW aux.v1 AS SELECT * FROM t3;
+ }
+} {}
+do_test attach3-5.2 {
+ execsql {
+ SELECT * FROM aux.sqlite_master WHERE name = 'v1';
+ }
+} {view v1 v1 0 {CREATE VIEW v1 AS SELECT * FROM t3}}
+do_test attach3-5.3 {
+ execsql {
+ INSERT INTO aux.t3 VALUES('hello', 'world');
+ SELECT * FROM v1;
+ }
+} {1 2 hello world}
+
+# Drop the view
+do_test attach3-6.1 {
+ execsql {
+ DROP VIEW aux.v1;
+ }
+} {}
+do_test attach3-6.2 {
+ execsql {
+ SELECT * FROM aux.sqlite_master WHERE name = 'v1';
+ }
+} {}
+} ;# ifcapable view
+
+ifcapable {trigger} {
+# Create a trigger in the auxiliary database.
+do_test attach3-7.1 {
+ execsql {
+ CREATE TRIGGER aux.tr1 AFTER INSERT ON t3 BEGIN
+ INSERT INTO t3 VALUES(new.e*2, new.f*2);
+ END;
+ }
+} {}
+do_test attach3-7.2 {
+ execsql {
+ DELETE FROM t3;
+ INSERT INTO t3 VALUES(10, 20);
+ SELECT * FROM t3;
+ }
+} {10 20 20 40}
+do_test attach3-7.3 {
+ execsql {
+ SELECT * FROM aux.sqlite_master WHERE name = 'tr1';
+ }
+} {trigger tr1 t3 0 {CREATE TRIGGER tr1 AFTER INSERT ON t3 BEGIN
+ INSERT INTO t3 VALUES(new.e*2, new.f*2);
+ END}}
+
+# Drop the trigger
+do_test attach3-8.1 {
+ execsql {
+ DROP TRIGGER aux.tr1;
+ }
+} {}
+do_test attach3-8.2 {
+ execsql {
+ SELECT * FROM aux.sqlite_master WHERE name = 'tr1';
+ }
+} {}
+
+ifcapable tempdb {
+ # Try to trick SQLite into dropping the wrong temp trigger.
+ do_test attach3-9.0 {
+ execsql {
+ CREATE TABLE main.t4(a, b, c);
+ CREATE TABLE aux.t4(a, b, c);
+ CREATE TEMP TRIGGER tst_trigger BEFORE INSERT ON aux.t4 BEGIN
+ SELECT 'hello world';
+ END;
+ SELECT count(*) FROM sqlite_temp_master;
+ }
+ } {1}
+ do_test attach3-9.1 {
+ execsql {
+ DROP TABLE main.t4;
+ SELECT count(*) FROM sqlite_temp_master;
+ }
+ } {1}
+ do_test attach3-9.2 {
+ execsql {
+ DROP TABLE aux.t4;
+ SELECT count(*) FROM sqlite_temp_master;
+ }
+ } {0}
+}
+} ;# endif trigger
+
+# Make sure the aux.sqlite_master table is read-only
+do_test attach3-10.0 {
+ catchsql {
+ INSERT INTO aux.sqlite_master VALUES(1, 2, 3, 4, 5);
+ }
+} {1 {table sqlite_master may not be modified}}
+
+# Failure to attach leaves us in a workable state.
+# Ticket #811
+#
+do_test attach3-11.0 {
+ catchsql {
+ ATTACH DATABASE '/nodir/nofile.x' AS notadb;
+ }
+} {1 {unable to open database: /nodir/nofile.x}}
+do_test attach3-11.1 {
+ catchsql {
+ ATTACH DATABASE ':memory:' AS notadb;
+ }
+} {0 {}}
+do_test attach3-11.2 {
+ catchsql {
+ DETACH DATABASE notadb;
+ }
+} {0 {}}
+
+# Return a list of attached databases
+#
+proc db_list {} {
+ set x [execsql {
+ PRAGMA database_list;
+ }]
+ set y {}
+ foreach {n id file} $x {lappend y $id}
+ return $y
+}
+
+ifcapable schema_pragmas&&tempdb {
+
+ifcapable !trigger {
+ execsql {create temp table dummy(dummy)}
+}
+
+# Ticket #1825
+#
+do_test attach3-12.1 {
+ db_list
+} {main temp aux}
+do_test attach3-12.2 {
+ execsql {
+ ATTACH DATABASE ? AS ?
+ }
+ db_list
+} {main temp aux {}}
+do_test attach3-12.3 {
+ execsql {
+ DETACH aux
+ }
+ db_list
+} {main temp {}}
+do_test attach3-12.4 {
+ execsql {
+ DETACH ?
+ }
+ db_list
+} {main temp}
+do_test attach3-12.5 {
+ execsql {
+ ATTACH DATABASE '' AS ''
+ }
+ db_list
+} {main temp {}}
+do_test attach3-12.6 {
+ execsql {
+ DETACH ''
+ }
+ db_list
+} {main temp}
+do_test attach3-12.7 {
+ execsql {
+ ATTACH DATABASE '' AS ?
+ }
+ db_list
+} {main temp {}}
+do_test attach3-12.8 {
+ execsql {
+ DETACH ''
+ }
+ db_list
+} {main temp}
+do_test attach3-12.9 {
+ execsql {
+ ATTACH DATABASE '' AS NULL
+ }
+ db_list
+} {main temp {}}
+do_test attach3-12.10 {
+ execsql {
+ DETACH ?
+ }
+ db_list
+} {main temp}
+do_test attach3-12.11 {
+ catchsql {
+ DETACH NULL
+ }
+} {1 {no such database: }}
+do_test attach3-12.12 {
+ catchsql {
+ ATTACH null AS null;
+ ATTACH '' AS '';
+ }
+} {1 {database is already in use}}
+do_test attach3-12.13 {
+ db_list
+} {main temp {}}
+do_test attach3-12.14 {
+ execsql {
+ DETACH '';
+ }
+ db_list
+} {main temp}
+
+} ;# ifcapable schema_pragmas&&tempdb
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/attachmalloc.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/attachmalloc.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,127 @@
+# 2005 September 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the ATTACH statement and
+# specifically out-of-memory conditions within that command.
+#
+# $Id: attachmalloc.test,v 1.3 2006/09/04 18:54:14 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if memory debugging is turned on.
+#
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG=1"
+ finish_test
+ return
+}
+
+
+# Usage: do_malloc_test <test name> <options...>
+#
+# The first argument, <test name>, is a prefix used to name the
+# tests executed by this proc. Options are as follows:
+#
+# -tclprep TCL script to run to prepare test.
+# -sqlprep SQL script to run to prepare test.
+# -tclbody TCL script to run with malloc failure simulation.
+#     -sqlbody          SQL script to run with malloc failure simulation.
+# -cleanup TCL script to run after the test.
+#
+# This command runs a series of tests to verify SQLite's ability
+# to handle an out-of-memory condition gracefully. It is assumed
+# that if this condition occurs a malloc() call will return a
+# NULL pointer. Linux, for example, doesn't do that by default. See
+# the "BUGS" section of malloc(3).
+#
+# On each iteration of a loop, the TCL commands in any argument passed
+# to the -tclbody switch, followed by the SQL commands in any argument
+# passed to the -sqlbody switch, are executed. On each iteration the
+# Nth call to sqliteMalloc() is made to fail, where N starts at 1 and
+# increases each time the loop runs. When all commands execute
+# successfully, the loop ends.
+#
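+# For example (hypothetical test name; a minimal sketch only):
+#
+#   do_malloc_test mytest-1 -sqlprep {
+#     CREATE TABLE t1(x);
+#   } -sqlbody {
+#     INSERT INTO t1 VALUES(1);
+#   }
+#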
+proc do_malloc_test {tn args} {
+ array set ::mallocopts $args
+
+ set ::go 1
+ for {set ::n 1} {$::go} {incr ::n} {
+
+ do_test $tn.$::n {
+
+ sqlite_malloc_fail 0
+ catch {db close}
+ catch {file delete -force test.db}
+ catch {file delete -force test.db-journal}
+ catch {file delete -force test2.db}
+ catch {file delete -force test2.db-journal}
+ set ::DB [sqlite3 db test.db]
+
+ if {[info exists ::mallocopts(-tclprep)]} {
+ eval $::mallocopts(-tclprep)
+ }
+ if {[info exists ::mallocopts(-sqlprep)]} {
+ execsql $::mallocopts(-sqlprep)
+ }
+
+ sqlite_malloc_fail $::n
+ set ::mallocbody {}
+ if {[info exists ::mallocopts(-tclbody)]} {
+ append ::mallocbody "$::mallocopts(-tclbody)\n"
+ }
+ if {[info exists ::mallocopts(-sqlbody)]} {
+ append ::mallocbody "db eval {$::mallocopts(-sqlbody)}"
+ }
+
+ set v [catch $::mallocbody msg]
+
+ set leftover [lindex [sqlite_malloc_stat] 2]
+ if {$leftover>0} {
+ if {$leftover>1} {puts "\nLeftover: $leftover\nReturn=$v Message=$msg"}
+ set ::go 0
+ set v {1 1}
+ } else {
+ set v2 [expr {$msg=="" || $msg=="out of memory"}]
+ if {!$v2} {puts "\nError message returned: $msg"}
+ lappend v $v2
+ }
+ } {1 1}
+ sqlite_malloc_fail 0
+
+ if {[info exists ::mallocopts(-cleanup)]} {
+ catch $::mallocopts(-cleanup)
+ }
+ }
+ unset ::mallocopts
+}
+
+do_malloc_test attachmalloc-1 -tclprep {
+ db close
+ for {set i 2} {$i<=4} {incr i} {
+ file delete -force test$i.db
+ file delete -force test$i.db-journal
+ }
+} -tclbody {
+ if {[catch {sqlite3 db test.db}]} {
+ error "out of memory"
+ }
+} -sqlbody {
+ ATTACH 'test2.db' AS two;
+ CREATE TABLE two.t1(x);
+ ATTACH 'test3.db' AS three;
+ CREATE TABLE three.t1(x);
+ ATTACH 'test4.db' AS four;
+ CREATE TABLE four.t1(x);
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/auth.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/auth.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,2306 @@
+# 2003 April 4
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the sqlite3_set_authorizer() API
+# and related functionality.
+#
+# $Id: auth.test,v 1.37 2006/08/24 14:59:46 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# disable this test if the SQLITE_OMIT_AUTHORIZATION macro is
+# defined during compilation.
+if {[catch {db auth {}} msg]} {
+ finish_test
+ return
+}
+
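+# Tcl's built-in [proc] is renamed and wrapped below so that each time a
+# procedure named "auth" is (re)defined, it is immediately re-installed
+# as the authorizer callback on the db handle; the tests that follow
+# redefine auth many times and rely on this.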
+rename proc proc_real
+proc_real proc {name arguments script} {
+ proc_real $name $arguments $script
+ if {$name=="auth"} {
+ db authorizer ::auth
+ }
+}
+
+do_test auth-1.1.1 {
+ db close
+ set ::DB [sqlite3 db test.db]
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ db authorizer ::auth
+ catchsql {CREATE TABLE t1(a,b,c)}
+} {1 {not authorized}}
+do_test auth-1.1.2 {
+ db errorcode
+} {23}
+do_test auth-1.1.3 {
+ db authorizer
+} {::auth}
+do_test auth-1.1.4 {
+ # Ticket #896.
+ catchsql {
+ SELECT x;
+ }
+} {1 {no such column: x}}
+do_test auth-1.2 {
+ execsql {SELECT name FROM sqlite_master}
+} {}
+do_test auth-1.3.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TABLE t1(a,b,c)}
+} {1 {not authorized}}
+do_test auth-1.3.2 {
+ db errorcode
+} {23}
+do_test auth-1.3.3 {
+ set ::authargs
+} {t1 {} main {}}
+do_test auth-1.4 {
+ execsql {SELECT name FROM sqlite_master}
+} {}
+
+ifcapable tempdb {
+ do_test auth-1.5 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TEMP TABLE t1(a,b,c)}
+ } {1 {not authorized}}
+ do_test auth-1.6 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {}
+ do_test auth-1.7.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TEMP TABLE t1(a,b,c)}
+ } {1 {not authorized}}
+ do_test auth-1.7.2 {
+ set ::authargs
+ } {t1 {} temp {}}
+ do_test auth-1.8 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {}
+}
+
+do_test auth-1.9 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TABLE t1(a,b,c)}
+} {0 {}}
+do_test auth-1.10 {
+ execsql {SELECT name FROM sqlite_master}
+} {}
+do_test auth-1.11 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TABLE t1(a,b,c)}
+} {0 {}}
+do_test auth-1.12 {
+ execsql {SELECT name FROM sqlite_master}
+} {}
+
+ifcapable tempdb {
+ do_test auth-1.13 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TEMP TABLE t1(a,b,c)}
+ } {0 {}}
+ do_test auth-1.14 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {}
+ do_test auth-1.15 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TEMP TABLE t1(a,b,c)}
+ } {0 {}}
+ do_test auth-1.16 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {}
+
+ do_test auth-1.17 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TEMP TABLE t1(a,b,c)}
+ } {0 {}}
+ do_test auth-1.18 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+}
+
+do_test auth-1.19.1 {
+ set ::authargs {}
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TABLE t2(a,b,c)}
+} {0 {}}
+do_test auth-1.19.2 {
+ set ::authargs
+} {}
+do_test auth-1.20 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+
+do_test auth-1.21.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t2}
+} {1 {not authorized}}
+do_test auth-1.21.2 {
+ set ::authargs
+} {t2 {} main {}}
+do_test auth-1.22 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.23.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t2}
+} {0 {}}
+do_test auth-1.23.2 {
+ set ::authargs
+} {t2 {} main {}}
+do_test auth-1.24 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+
+ifcapable tempdb {
+ do_test auth-1.25 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t1}
+ } {1 {not authorized}}
+ do_test auth-1.26 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+ do_test auth-1.27 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t1}
+ } {0 {}}
+ do_test auth-1.28 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+}
+
+do_test auth-1.29 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="t2"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {INSERT INTO t2 VALUES(1,2,3)}
+} {1 {not authorized}}
+do_test auth-1.30 {
+ execsql {SELECT * FROM t2}
+} {}
+do_test auth-1.31 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="t2"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {INSERT INTO t2 VALUES(1,2,3)}
+} {0 {}}
+do_test auth-1.32 {
+ execsql {SELECT * FROM t2}
+} {}
+do_test auth-1.33 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="t1"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {INSERT INTO t2 VALUES(1,2,3)}
+} {0 {}}
+do_test auth-1.34 {
+ execsql {SELECT * FROM t2}
+} {1 2 3}
+
+do_test auth-1.35.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t2" && $arg2=="b"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2}
+} {1 {access to t2.b is prohibited}}
+do_test auth-1.35.2 {
+ execsql {ATTACH DATABASE 'test.db' AS two}
+ catchsql {SELECT * FROM two.t2}
+} {1 {access to two.t2.b is prohibited}}
+execsql {DETACH DATABASE two}
+do_test auth-1.36 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t2" && $arg2=="b"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2}
+} {0 {1 {} 3}}
+do_test auth-1.37 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t2" && $arg2=="b"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2 WHERE b=2}
+} {0 {}}
+do_test auth-1.38 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t2" && $arg2=="a"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2 WHERE b=2}
+} {0 {{} 2 3}}
+do_test auth-1.39 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t2" && $arg2=="b"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2 WHERE b IS NULL}
+} {0 {1 {} 3}}
+do_test auth-1.40 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t2" && $arg2=="b"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT a,c FROM t2 WHERE b IS NULL}
+} {1 {access to t2.b is prohibited}}
+
+do_test auth-1.41 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_UPDATE" && $arg1=="t2" && $arg2=="b"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {UPDATE t2 SET a=11}
+} {0 {}}
+do_test auth-1.42 {
+ execsql {SELECT * FROM t2}
+} {11 2 3}
+do_test auth-1.43 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_UPDATE" && $arg1=="t2" && $arg2=="b"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {UPDATE t2 SET b=22, c=33}
+} {1 {not authorized}}
+do_test auth-1.44 {
+ execsql {SELECT * FROM t2}
+} {11 2 3}
+do_test auth-1.45 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_UPDATE" && $arg1=="t2" && $arg2=="b"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {UPDATE t2 SET b=22, c=33}
+} {0 {}}
+do_test auth-1.46 {
+ execsql {SELECT * FROM t2}
+} {11 2 33}
+
+do_test auth-1.47 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="t2"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DELETE FROM t2 WHERE a=11}
+} {1 {not authorized}}
+do_test auth-1.48 {
+ execsql {SELECT * FROM t2}
+} {11 2 33}
+do_test auth-1.49 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="t2"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DELETE FROM t2 WHERE a=11}
+} {0 {}}
+do_test auth-1.50 {
+ execsql {SELECT * FROM t2}
+} {11 2 33}
+
+do_test auth-1.51 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_SELECT"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2}
+} {1 {not authorized}}
+do_test auth-1.52 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_SELECT"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2}
+} {0 {}}
+do_test auth-1.53 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_SELECT"} {
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2}
+} {0 {11 2 33}}
+
+# Update for version 3: There used to be a handful of tests here that
+# tested the authorisation callback with the COPY command. The following
+# test makes the same database modifications as they used to.
+do_test auth-1.54 {
+ execsql {INSERT INTO t2 VALUES(7, 8, 9);}
+} {}
+do_test auth-1.55 {
+ execsql {SELECT * FROM t2}
+} {11 2 33 7 8 9}
+
+do_test auth-1.63 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t2}
+} {1 {not authorized}}
+do_test auth-1.64 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.65 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="t2"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t2}
+} {1 {not authorized}}
+do_test auth-1.66 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+
+ifcapable tempdb {
+ do_test auth-1.67 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t1}
+ } {1 {not authorized}}
+ do_test auth-1.68 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+ do_test auth-1.69 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="t1"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t1}
+ } {1 {not authorized}}
+ do_test auth-1.70 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+}
+
+do_test auth-1.71 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t2}
+} {0 {}}
+do_test auth-1.72 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.73 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="t2"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t2}
+} {0 {}}
+do_test auth-1.74 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+
+ifcapable tempdb {
+ do_test auth-1.75 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t1}
+ } {0 {}}
+ do_test auth-1.76 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+ do_test auth-1.77 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="t1"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TABLE t1}
+ } {0 {}}
+ do_test auth-1.78 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+}
+
+# Test cases auth-1.79 to auth-1.124 test creating and dropping views.
+# Omit these if the library was compiled with views omitted.
+ifcapable view {
+do_test auth-1.79 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE VIEW v1 AS SELECT a+1,b+1 FROM t2}
+} {1 {not authorized}}
+do_test auth-1.80 {
+ set ::authargs
+} {v1 {} main {}}
+do_test auth-1.81 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.82 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE VIEW v1 AS SELECT a+1,b+1 FROM t2}
+} {0 {}}
+do_test auth-1.83 {
+ set ::authargs
+} {v1 {} main {}}
+do_test auth-1.84 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+
+ifcapable tempdb {
+ do_test auth-1.85 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TEMPORARY VIEW v1 AS SELECT a+1,b+1 FROM t2}
+ } {1 {not authorized}}
+ do_test auth-1.86 {
+ set ::authargs
+ } {v1 {} temp {}}
+ do_test auth-1.87 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+ do_test auth-1.88 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TEMPORARY VIEW v1 AS SELECT a+1,b+1 FROM t2}
+ } {0 {}}
+ do_test auth-1.89 {
+ set ::authargs
+ } {v1 {} temp {}}
+ do_test auth-1.90 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+}
+
+do_test auth-1.91 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE VIEW v1 AS SELECT a+1,b+1 FROM t2}
+} {1 {not authorized}}
+do_test auth-1.92 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.93 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE VIEW v1 AS SELECT a+1,b+1 FROM t2}
+} {0 {}}
+do_test auth-1.94 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+
+ifcapable tempdb {
+ do_test auth-1.95 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TEMPORARY VIEW v1 AS SELECT a+1,b+1 FROM t2}
+ } {1 {not authorized}}
+ do_test auth-1.96 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+ do_test auth-1.97 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE TEMPORARY VIEW v1 AS SELECT a+1,b+1 FROM t2}
+ } {0 {}}
+ do_test auth-1.98 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+}
+
+do_test auth-1.99 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE VIEW v2 AS SELECT a+1,b+1 FROM t2;
+ DROP VIEW v2
+ }
+} {1 {not authorized}}
+do_test auth-1.100 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 v2}
+do_test auth-1.101 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP VIEW v2}
+} {1 {not authorized}}
+do_test auth-1.102 {
+ set ::authargs
+} {v2 {} main {}}
+do_test auth-1.103 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 v2}
+do_test auth-1.104 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP VIEW v2}
+} {0 {}}
+do_test auth-1.105 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 v2}
+do_test auth-1.106 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP VIEW v2}
+} {0 {}}
+do_test auth-1.107 {
+ set ::authargs
+} {v2 {} main {}}
+do_test auth-1.108 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 v2}
+do_test auth-1.109 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP VIEW v2}
+} {0 {}}
+do_test auth-1.110 {
+ set ::authargs
+} {v2 {} main {}}
+do_test auth-1.111 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+
+
+ifcapable tempdb {
+ do_test auth-1.112 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TEMP VIEW v1 AS SELECT a+1,b+1 FROM t1;
+ DROP VIEW v1
+ }
+ } {1 {not authorized}}
+ do_test auth-1.113 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1 v1}
+ do_test auth-1.114 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP VIEW v1}
+ } {1 {not authorized}}
+ do_test auth-1.115 {
+ set ::authargs
+ } {v1 {} temp {}}
+ do_test auth-1.116 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1 v1}
+ do_test auth-1.117 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP VIEW v1}
+ } {0 {}}
+ do_test auth-1.118 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1 v1}
+ do_test auth-1.119 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP VIEW v1}
+ } {0 {}}
+ do_test auth-1.120 {
+ set ::authargs
+ } {v1 {} temp {}}
+ do_test auth-1.121 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1 v1}
+ do_test auth-1.122 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_VIEW"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP VIEW v1}
+ } {0 {}}
+ do_test auth-1.123 {
+ set ::authargs
+ } {v1 {} temp {}}
+ do_test auth-1.124 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+}
+} ;# ifcapable view
+
+# Test cases auth-1.125 to auth-1.176 test creating and dropping triggers.
+# Omit these if the library was compiled with triggers omitted.
+#
+ifcapable trigger&&tempdb {
+do_test auth-1.125 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TRIGGER r2 DELETE on t2 BEGIN
+ SELECT NULL;
+ END;
+ }
+} {1 {not authorized}}
+do_test auth-1.126 {
+ set ::authargs
+} {r2 t2 main {}}
+do_test auth-1.127 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.128 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TRIGGER r2 DELETE on t2 BEGIN
+ SELECT NULL;
+ END;
+ }
+} {1 {not authorized}}
+do_test auth-1.129 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.130 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TRIGGER r2 DELETE on t2 BEGIN
+ SELECT NULL;
+ END;
+ }
+} {0 {}}
+do_test auth-1.131 {
+ set ::authargs
+} {r2 t2 main {}}
+do_test auth-1.132 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.133 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TRIGGER r2 DELETE on t2 BEGIN
+ SELECT NULL;
+ END;
+ }
+} {0 {}}
+do_test auth-1.134 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.135 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TABLE tx(id);
+ CREATE TRIGGER r2 AFTER INSERT ON t2 BEGIN
+ INSERT INTO tx VALUES(NEW.rowid);
+ END;
+ }
+} {0 {}}
+do_test auth-1.136.1 {
+ set ::authargs
+} {r2 t2 main {}}
+do_test auth-1.136.2 {
+ execsql {
+ SELECT name FROM sqlite_master WHERE type='trigger'
+ }
+} {r2}
+do_test auth-1.136.3 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ lappend ::authargs $code $arg1 $arg2 $arg3 $arg4
+ return SQLITE_OK
+ }
+ set ::authargs {}
+ execsql {
+ INSERT INTO t2 VALUES(1,2,3);
+ }
+ set ::authargs
+} {SQLITE_INSERT t2 {} main {} SQLITE_INSERT tx {} main r2 SQLITE_READ t2 ROWID main r2}
+do_test auth-1.136.4 {
+ execsql {
+ SELECT * FROM tx;
+ }
+} {3}
+do_test auth-1.137 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 tx r2}
+do_test auth-1.138 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TRIGGER r1 DELETE on t1 BEGIN
+ SELECT NULL;
+ END;
+ }
+} {1 {not authorized}}
+do_test auth-1.139 {
+ set ::authargs
+} {r1 t1 temp {}}
+do_test auth-1.140 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1}
+do_test auth-1.141 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TRIGGER r1 DELETE on t1 BEGIN
+ SELECT NULL;
+ END;
+ }
+} {1 {not authorized}}
+do_test auth-1.142 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1}
+do_test auth-1.143 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TRIGGER r1 DELETE on t1 BEGIN
+ SELECT NULL;
+ END;
+ }
+} {0 {}}
+do_test auth-1.144 {
+ set ::authargs
+} {r1 t1 temp {}}
+do_test auth-1.145 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1}
+do_test auth-1.146 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TRIGGER r1 DELETE on t1 BEGIN
+ SELECT NULL;
+ END;
+ }
+} {0 {}}
+do_test auth-1.147 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1}
+do_test auth-1.148 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ CREATE TRIGGER r1 DELETE on t1 BEGIN
+ SELECT NULL;
+ END;
+ }
+} {0 {}}
+do_test auth-1.149 {
+ set ::authargs
+} {r1 t1 temp {}}
+do_test auth-1.150 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1 r1}
+
+do_test auth-1.151 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r2}
+} {1 {not authorized}}
+do_test auth-1.152 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 tx r2}
+do_test auth-1.153 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r2}
+} {1 {not authorized}}
+do_test auth-1.154 {
+ set ::authargs
+} {r2 t2 main {}}
+do_test auth-1.155 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 tx r2}
+do_test auth-1.156 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r2}
+} {0 {}}
+do_test auth-1.157 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 tx r2}
+do_test auth-1.158 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r2}
+} {0 {}}
+do_test auth-1.159 {
+ set ::authargs
+} {r2 t2 main {}}
+do_test auth-1.160 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 tx r2}
+do_test auth-1.161 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r2}
+} {0 {}}
+do_test auth-1.162 {
+ set ::authargs
+} {r2 t2 main {}}
+do_test auth-1.163 {
+ execsql {
+ DROP TABLE tx;
+ DELETE FROM t2 WHERE a=1 AND b=2 AND c=3;
+ SELECT name FROM sqlite_master;
+ }
+} {t2}
+
+do_test auth-1.164 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r1}
+} {1 {not authorized}}
+do_test auth-1.165 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1 r1}
+do_test auth-1.166 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r1}
+} {1 {not authorized}}
+do_test auth-1.167 {
+ set ::authargs
+} {r1 t1 temp {}}
+do_test auth-1.168 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1 r1}
+do_test auth-1.169 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r1}
+} {0 {}}
+do_test auth-1.170 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1 r1}
+do_test auth-1.171 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r1}
+} {0 {}}
+do_test auth-1.172 {
+ set ::authargs
+} {r1 t1 temp {}}
+do_test auth-1.173 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1 r1}
+do_test auth-1.174 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_TRIGGER"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP TRIGGER r1}
+} {0 {}}
+do_test auth-1.175 {
+ set ::authargs
+} {r1 t1 temp {}}
+do_test auth-1.176 {
+ execsql {SELECT name FROM sqlite_temp_master}
+} {t1}
+} ;# ifcapable trigger
+
+do_test auth-1.177 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i2 ON t2(a)}
+} {1 {not authorized}}
+do_test auth-1.178 {
+ set ::authargs
+} {i2 t2 main {}}
+do_test auth-1.179 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.180 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i2 ON t2(a)}
+} {1 {not authorized}}
+do_test auth-1.181 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.182 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i2 ON t2(b)}
+} {0 {}}
+do_test auth-1.183 {
+ set ::authargs
+} {i2 t2 main {}}
+do_test auth-1.184 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.185 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i2 ON t2(b)}
+} {0 {}}
+do_test auth-1.186 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+do_test auth-1.187 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i2 ON t2(a)}
+} {0 {}}
+do_test auth-1.188 {
+ set ::authargs
+} {i2 t2 main {}}
+do_test auth-1.189 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 i2}
+
+ifcapable tempdb {
+ do_test auth-1.190 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i1 ON t1(a)}
+ } {1 {not authorized}}
+ do_test auth-1.191 {
+ set ::authargs
+ } {i1 t1 temp {}}
+ do_test auth-1.192 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+ do_test auth-1.193 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i1 ON t1(b)}
+ } {1 {not authorized}}
+ do_test auth-1.194 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+ do_test auth-1.195 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i1 ON t1(b)}
+ } {0 {}}
+ do_test auth-1.196 {
+ set ::authargs
+ } {i1 t1 temp {}}
+ do_test auth-1.197 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+ do_test auth-1.198 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_INSERT" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i1 ON t1(c)}
+ } {0 {}}
+ do_test auth-1.199 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+ do_test auth-1.200 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_CREATE_TEMP_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {CREATE INDEX i1 ON t1(a)}
+ } {0 {}}
+ do_test auth-1.201 {
+ set ::authargs
+ } {i1 t1 temp {}}
+ do_test auth-1.202 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1 i1}
+}
+
+do_test auth-1.203 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i2}
+} {1 {not authorized}}
+do_test auth-1.204 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 i2}
+do_test auth-1.205 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i2}
+} {1 {not authorized}}
+do_test auth-1.206 {
+ set ::authargs
+} {i2 t2 main {}}
+do_test auth-1.207 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 i2}
+do_test auth-1.208 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i2}
+} {0 {}}
+do_test auth-1.209 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 i2}
+do_test auth-1.210 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i2}
+} {0 {}}
+do_test auth-1.211 {
+ set ::authargs
+} {i2 t2 main {}}
+do_test auth-1.212 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2 i2}
+do_test auth-1.213 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i2}
+} {0 {}}
+do_test auth-1.214 {
+ set ::authargs
+} {i2 t2 main {}}
+do_test auth-1.215 {
+ execsql {SELECT name FROM sqlite_master}
+} {t2}
+
+ifcapable tempdb {
+ do_test auth-1.216 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i1}
+ } {1 {not authorized}}
+ do_test auth-1.217 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1 i1}
+ do_test auth-1.218 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i1}
+ } {1 {not authorized}}
+ do_test auth-1.219 {
+ set ::authargs
+ } {i1 t1 temp {}}
+ do_test auth-1.220 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1 i1}
+ do_test auth-1.221 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DELETE" && $arg1=="sqlite_temp_master"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i1}
+ } {0 {}}
+ do_test auth-1.222 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1 i1}
+ do_test auth-1.223 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i1}
+ } {0 {}}
+ do_test auth-1.224 {
+ set ::authargs
+ } {i1 t1 temp {}}
+ do_test auth-1.225 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1 i1}
+ do_test auth-1.226 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DROP_TEMP_INDEX"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {DROP INDEX i1}
+ } {0 {}}
+ do_test auth-1.227 {
+ set ::authargs
+ } {i1 t1 temp {}}
+ do_test auth-1.228 {
+ execsql {SELECT name FROM sqlite_temp_master}
+ } {t1}
+}
+
+do_test auth-1.229 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_PRAGMA"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {PRAGMA full_column_names=on}
+} {1 {not authorized}}
+do_test auth-1.230 {
+ set ::authargs
+} {full_column_names on {} {}}
+do_test auth-1.231 {
+ execsql2 {SELECT a FROM t2}
+} {a 11 a 7}
+do_test auth-1.232 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_PRAGMA"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {PRAGMA full_column_names=on}
+} {0 {}}
+do_test auth-1.233 {
+ set ::authargs
+} {full_column_names on {} {}}
+do_test auth-1.234 {
+ execsql2 {SELECT a FROM t2}
+} {a 11 a 7}
+do_test auth-1.235 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_PRAGMA"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {PRAGMA full_column_names=on}
+} {0 {}}
+do_test auth-1.236 {
+ execsql2 {SELECT a FROM t2}
+} {t2.a 11 t2.a 7}
+do_test auth-1.237 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_PRAGMA"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {PRAGMA full_column_names=OFF}
+} {0 {}}
+do_test auth-1.238 {
+ set ::authargs
+} {full_column_names OFF {} {}}
+do_test auth-1.239 {
+ execsql2 {SELECT a FROM t2}
+} {a 11 a 7}
+
+do_test auth-1.240 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_TRANSACTION"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {BEGIN}
+} {1 {not authorized}}
+do_test auth-1.241 {
+ set ::authargs
+} {BEGIN {} {} {}}
+do_test auth-1.242 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_TRANSACTION" && $arg1!="BEGIN"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {BEGIN; INSERT INTO t2 VALUES(44,55,66); COMMIT}
+} {1 {not authorized}}
+do_test auth-1.243 {
+ set ::authargs
+} {COMMIT {} {} {}}
+do_test auth-1.244 {
+ execsql {SELECT * FROM t2}
+} {11 2 33 7 8 9 44 55 66}
+do_test auth-1.245 {
+ catchsql {ROLLBACK}
+} {1 {not authorized}}
+do_test auth-1.246 {
+ set ::authargs
+} {ROLLBACK {} {} {}}
+do_test auth-1.247 {
+ catchsql {END TRANSACTION}
+} {1 {not authorized}}
+do_test auth-1.248 {
+ set ::authargs
+} {COMMIT {} {} {}}
+do_test auth-1.249 {
+ db authorizer {}
+ catchsql {ROLLBACK}
+} {0 {}}
+do_test auth-1.250 {
+ execsql {SELECT * FROM t2}
+} {11 2 33 7 8 9}
+
+# Ticket #340 - authorization for ATTACH and DETACH.
+#
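+# For SQLITE_ATTACH the authorizer receives the filename of the database
+# being attached as its first argument (auth-1.252 below records
+# ":memory:"). Returning SQLITE_DENY blocks the ATTACH with "not
+# authorized", while SQLITE_IGNORE lets the statement complete without
+# actually attaching the database (auth-1.255/1.256).
+#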
+do_test auth-1.251 {
+ db authorizer ::auth
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ATTACH"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ATTACH DATABASE ':memory:' AS test1
+ }
+} {0 {}}
+do_test auth-1.252 {
+ set ::authargs
+} {:memory: {} {} {}}
+do_test auth-1.253 {
+ catchsql {DETACH DATABASE test1}
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ATTACH"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ATTACH DATABASE ':memory:' AS test1;
+ }
+} {1 {not authorized}}
+do_test auth-1.254 {
+ lindex [execsql {PRAGMA database_list}] 7
+} {}
+do_test auth-1.255 {
+ catchsql {DETACH DATABASE test1}
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ATTACH"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ATTACH DATABASE ':memory:' AS test1;
+ }
+} {0 {}}
+do_test auth-1.256 {
+ lindex [execsql {PRAGMA database_list}] 7
+} {}
+do_test auth-1.257 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DETACH"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ execsql {ATTACH DATABASE ':memory:' AS test1}
+ catchsql {
+ DETACH DATABASE test1;
+ }
+} {0 {}}
+do_test auth-1.258 {
+ lindex [execsql {PRAGMA database_list}] 7
+} {}
+do_test auth-1.259 {
+ execsql {ATTACH DATABASE ':memory:' AS test1}
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DETACH"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ DETACH DATABASE test1;
+ }
+} {0 {}}
+ifcapable tempdb {
+ ifcapable schema_pragmas {
+ do_test auth-1.260 {
+ lindex [execsql {PRAGMA database_list}] 7
+ } {test1}
+ } ;# ifcapable schema_pragmas
+ do_test auth-1.261 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_DETACH"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ DETACH DATABASE test1;
+ }
+ } {1 {not authorized}}
+ ifcapable schema_pragmas {
+ do_test auth-1.262 {
+ lindex [execsql {PRAGMA database_list}] 7
+ } {test1}
+ } ;# ifcapable schema_pragmas
+ db authorizer {}
+ execsql {DETACH DATABASE test1}
+ db authorizer ::auth
+
+ # Authorization for ALTER TABLE. These tests are omitted if the library
+ # was built without ALTER TABLE support.
+ ifcapable altertable {
+
+ do_test auth-1.263 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ALTER_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ALTER TABLE t1 RENAME TO t1x
+ }
+ } {0 {}}
+ do_test auth-1.264 {
+ execsql {SELECT name FROM sqlite_temp_master WHERE type='table'}
+ } {t1x}
+ do_test auth-1.265 {
+ set authargs
+ } {temp t1 {} {}}
+ do_test auth-1.266 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ALTER_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ALTER TABLE t1x RENAME TO t1
+ }
+ } {0 {}}
+ do_test auth-1.267 {
+ execsql {SELECT name FROM sqlite_temp_master WHERE type='table'}
+ } {t1x}
+ do_test auth-1.268 {
+ set authargs
+ } {temp t1x {} {}}
+ do_test auth-1.269 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ALTER_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ALTER TABLE t1x RENAME TO t1
+ }
+ } {1 {not authorized}}
+ do_test auth-1.270 {
+ execsql {SELECT name FROM sqlite_temp_master WHERE type='table'}
+ } {t1x}
+
+ do_test auth-1.271 {
+ set authargs
+ } {temp t1x {} {}}
+ } ;# ifcapable altertable
+
+} else {
+ db authorizer {}
+ db eval {
+ DETACH DATABASE test1;
+ }
+}
+
+ifcapable altertable {
+db authorizer {}
+catchsql {ALTER TABLE t1x RENAME TO t1}
+db authorizer ::auth
+do_test auth-1.272 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ALTER_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ALTER TABLE t2 RENAME TO t2x
+ }
+} {0 {}}
+do_test auth-1.273 {
+ execsql {SELECT name FROM sqlite_master WHERE type='table'}
+} {t2x}
+do_test auth-1.274 {
+ set authargs
+} {main t2 {} {}}
+do_test auth-1.275 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ALTER_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ALTER TABLE t2x RENAME TO t2
+ }
+} {0 {}}
+do_test auth-1.276 {
+ execsql {SELECT name FROM sqlite_master WHERE type='table'}
+} {t2x}
+do_test auth-1.277 {
+ set authargs
+} {main t2x {} {}}
+do_test auth-1.278 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ALTER_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ALTER TABLE t2x RENAME TO t2
+ }
+} {1 {not authorized}}
+do_test auth-1.279 {
+ execsql {SELECT name FROM sqlite_master WHERE type='table'}
+} {t2x}
+do_test auth-1.280 {
+ set authargs
+} {main t2x {} {}}
+db authorizer {}
+catchsql {ALTER TABLE t2x RENAME TO t2}
+
+} ;# ifcapable altertable
+
+# Test the authorization callbacks for the REINDEX command.
+ifcapable reindex {
+
+proc auth {code args} {
+ if {$code=="SQLITE_REINDEX"} {
+ set ::authargs [concat $::authargs $args]
+ }
+ return SQLITE_OK
+}
+db authorizer auth
+do_test auth-1.281 {
+ execsql {
+ CREATE TABLE t3(a PRIMARY KEY, b, c);
+ CREATE INDEX t3_idx1 ON t3(c COLLATE BINARY);
+ CREATE INDEX t3_idx2 ON t3(b COLLATE NOCASE);
+ }
+} {}
+do_test auth-1.282 {
+ set ::authargs {}
+ execsql {
+ REINDEX t3_idx1;
+ }
+ set ::authargs
+} {t3_idx1 {} main {}}
+do_test auth-1.283 {
+ set ::authargs {}
+ execsql {
+ REINDEX BINARY;
+ }
+ set ::authargs
+} {t3_idx1 {} main {} sqlite_autoindex_t3_1 {} main {}}
+do_test auth-1.284 {
+ set ::authargs {}
+ execsql {
+ REINDEX NOCASE;
+ }
+ set ::authargs
+} {t3_idx2 {} main {}}
+do_test auth-1.285 {
+ set ::authargs {}
+ execsql {
+ REINDEX t3;
+ }
+ set ::authargs
+} {t3_idx2 {} main {} t3_idx1 {} main {} sqlite_autoindex_t3_1 {} main {}}
+do_test auth-1.286 {
+ execsql {
+ DROP TABLE t3;
+ }
+} {}
+ifcapable tempdb {
+ do_test auth-1.287 {
+ execsql {
+ CREATE TEMP TABLE t3(a PRIMARY KEY, b, c);
+ CREATE INDEX t3_idx1 ON t3(c COLLATE BINARY);
+ CREATE INDEX t3_idx2 ON t3(b COLLATE NOCASE);
+ }
+ } {}
+ do_test auth-1.288 {
+ set ::authargs {}
+ execsql {
+ REINDEX temp.t3_idx1;
+ }
+ set ::authargs
+ } {t3_idx1 {} temp {}}
+ do_test auth-1.289 {
+ set ::authargs {}
+ execsql {
+ REINDEX BINARY;
+ }
+ set ::authargs
+ } {t3_idx1 {} temp {} sqlite_autoindex_t3_1 {} temp {}}
+ do_test auth-1.290 {
+ set ::authargs {}
+ execsql {
+ REINDEX NOCASE;
+ }
+ set ::authargs
+ } {t3_idx2 {} temp {}}
+ do_test auth-1.291 {
+ set ::authargs {}
+ execsql {
+ REINDEX temp.t3;
+ }
+ set ::authargs
+ } {t3_idx2 {} temp {} t3_idx1 {} temp {} sqlite_autoindex_t3_1 {} temp {}}
+ proc auth {code args} {
+ if {$code=="SQLITE_REINDEX"} {
+ set ::authargs [concat $::authargs $args]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ do_test auth-1.292 {
+ set ::authargs {}
+ catchsql {
+ REINDEX temp.t3;
+ }
+ } {1 {not authorized}}
+ do_test auth-1.293 {
+ execsql {
+ DROP TABLE t3;
+ }
+ } {}
+}
+
+} ;# ifcapable reindex
+
+ifcapable analyze {
+ proc auth {code args} {
+ if {$code=="SQLITE_ANALYZE"} {
+ set ::authargs [concat $::authargs $args]
+ }
+ return SQLITE_OK
+ }
+ do_test auth-1.294 {
+ set ::authargs {}
+ execsql {
+ CREATE TABLE t4(a,b,c);
+ CREATE INDEX t4i1 ON t4(a);
+ CREATE INDEX t4i2 ON t4(b,a,c);
+ INSERT INTO t4 VALUES(1,2,3);
+ ANALYZE;
+ }
+ set ::authargs
+ } {t4 {} main {}}
+ do_test auth-1.295 {
+ execsql {
+ SELECT count(*) FROM sqlite_stat1;
+ }
+ } 2
+ proc auth {code args} {
+ if {$code=="SQLITE_ANALYZE"} {
+ set ::authargs [concat $::authargs $args]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ do_test auth-1.296 {
+ set ::authargs {}
+ catchsql {
+ ANALYZE;
+ }
+ } {1 {not authorized}}
+ do_test auth-1.297 {
+ execsql {
+ SELECT count(*) FROM sqlite_stat1;
+ }
+ } 2
+} ;# ifcapable analyze
+
+
+# Authorization for ALTER TABLE ADD COLUMN.
+# These tests are omitted if the library
+# was built without ALTER TABLE support.
+ifcapable {altertable} {
+ do_test auth-1.300 {
+ execsql {CREATE TABLE t5(x)}
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ALTER_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_OK
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ALTER TABLE t5 ADD COLUMN new_col_1;
+ }
+ } {0 {}}
+ do_test auth-1.301 {
+ set x [execsql {SELECT sql FROM sqlite_master WHERE name='t5'}]
+ regexp new_col_1 $x
+ } {1}
+ do_test auth-1.302 {
+ set authargs
+ } {main t5 {} {}}
+ do_test auth-1.303 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ALTER_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ALTER TABLE t5 ADD COLUMN new_col_2;
+ }
+ } {0 {}}
+ do_test auth-1.304 {
+ set x [execsql {SELECT sql FROM sqlite_master WHERE name='t5'}]
+ regexp new_col_2 $x
+ } {0}
+ do_test auth-1.305 {
+ set authargs
+ } {main t5 {} {}}
+ do_test auth-1.306 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_ALTER_TABLE"} {
+ set ::authargs [list $arg1 $arg2 $arg3 $arg4]
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ catchsql {
+ ALTER TABLE t5 ADD COLUMN new_col_3
+ }
+ } {1 {not authorized}}
+ do_test auth-1.307 {
+ set x [execsql {SELECT sql FROM sqlite_master WHERE name='t5'}]
+ regexp new_col_3 $x
+ } {0}
+
+ do_test auth-1.308 {
+ set authargs
+ } {main t5 {} {}}
+ execsql {DROP TABLE t5}
+} ;# ifcapable altertable
+
+do_test auth-2.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t3" && $arg2=="x"} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+ }
+ db authorizer ::auth
+ execsql {CREATE TABLE t3(x INTEGER PRIMARY KEY, y, z)}
+ catchsql {SELECT * FROM t3}
+} {1 {access to t3.x is prohibited}}
+do_test auth-2.1.2 {
+ catchsql {SELECT y,z FROM t3}
+} {0 {}}
+do_test auth-2.2 {
+ catchsql {SELECT ROWID,y,z FROM t3}
+} {1 {access to t3.x is prohibited}}
+do_test auth-2.3 {
+ catchsql {SELECT OID,y,z FROM t3}
+} {1 {access to t3.x is prohibited}}
+do_test auth-2.4 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t3" && $arg2=="x"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ execsql {INSERT INTO t3 VALUES(44,55,66)}
+ catchsql {SELECT * FROM t3}
+} {0 {{} 55 66}}
+do_test auth-2.5 {
+ catchsql {SELECT rowid,y,z FROM t3}
+} {0 {{} 55 66}}
+do_test auth-2.6 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t3" && $arg2=="ROWID"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t3}
+} {0 {44 55 66}}
+do_test auth-2.7 {
+ catchsql {SELECT ROWID,y,z FROM t3}
+} {0 {44 55 66}}
+do_test auth-2.8 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t2" && $arg2=="ROWID"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT ROWID,b,c FROM t2}
+} {0 {{} 2 33 {} 8 9}}
+do_test auth-2.9.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t2" && $arg2=="ROWID"} {
+ return bogus
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT ROWID,b,c FROM t2}
+} {1 {illegal return value (999) from the authorization function - should be SQLITE_OK, SQLITE_IGNORE, or SQLITE_DENY}}
+do_test auth-2.9.2 {
+ db errorcode
+} {1}
+do_test auth-2.10 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_SELECT"} {
+ return bogus
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT ROWID,b,c FROM t2}
+} {1 {illegal return value (1) from the authorization function - should be SQLITE_OK, SQLITE_IGNORE, or SQLITE_DENY}}
+do_test auth-2.11.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg2=="a"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2, t3}
+} {0 {{} 2 33 44 55 66 {} 8 9 44 55 66}}
+do_test auth-2.11.2 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg2=="x"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ catchsql {SELECT * FROM t2, t3}
+} {0 {11 2 33 {} 55 66 7 8 9 {} 55 66}}
+
+# Make sure the OLD and NEW pseudo-tables of a trigger get authorized.
+#
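+# When a trigger body refers to OLD.x or NEW.x, the authorizer is asked for
+# SQLITE_READ permission on the underlying table and column, with the name
+# of the trigger passed as the last callback argument. As with ordinary
+# column reads, returning SQLITE_IGNORE makes the value read as NULL, which
+# is what auth-3.2 below relies on.
+#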
+ifcapable trigger {
+ do_test auth-3.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ return SQLITE_OK
+ }
+ execsql {
+ CREATE TABLE tx(a1,a2,b1,b2,c1,c2);
+ CREATE TRIGGER r1 AFTER UPDATE ON t2 FOR EACH ROW BEGIN
+ INSERT INTO tx VALUES(OLD.a,NEW.a,OLD.b,NEW.b,OLD.c,NEW.c);
+ END;
+ UPDATE t2 SET a=a+1;
+ SELECT * FROM tx;
+ }
+ } {11 12 2 2 33 33 7 8 8 8 9 9}
+ do_test auth-3.2 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_READ" && $arg1=="t2" && $arg2=="c"} {
+ return SQLITE_IGNORE
+ }
+ return SQLITE_OK
+ }
+ execsql {
+ DELETE FROM tx;
+ UPDATE t2 SET a=a+100;
+ SELECT * FROM tx;
+ }
+ } {12 112 2 2 {} {} 8 108 8 8 {} {}}
+} ;# ifcapable trigger
+
+# Make sure the names of views and triggers are passed on as arg4.
+#
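+# The last argument is empty for accesses made directly by the top-level
+# statement and holds the name of the responsible view or trigger for
+# accesses made on their behalf, as the expected argument lists below
+# illustrate.
+#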
+ifcapable trigger {
+do_test auth-4.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ lappend ::authargs $code $arg1 $arg2 $arg3 $arg4
+ return SQLITE_OK
+ }
+ set authargs {}
+ execsql {
+ UPDATE t2 SET a=a+1;
+ }
+ set authargs
+} [list \
+ SQLITE_READ t2 a main {} \
+ SQLITE_UPDATE t2 a main {} \
+ SQLITE_INSERT tx {} main r1 \
+ SQLITE_READ t2 a main r1 \
+ SQLITE_READ t2 a main r1 \
+ SQLITE_READ t2 b main r1 \
+ SQLITE_READ t2 b main r1 \
+ SQLITE_READ t2 c main r1 \
+ SQLITE_READ t2 c main r1]
+}
+
+ifcapable {view && trigger} {
+do_test auth-4.2 {
+ execsql {
+ CREATE VIEW v1 AS SELECT a+b AS x FROM t2;
+ CREATE TABLE v1chng(x1,x2);
+ CREATE TRIGGER r2 INSTEAD OF UPDATE ON v1 BEGIN
+ INSERT INTO v1chng VALUES(OLD.x,NEW.x);
+ END;
+ SELECT * FROM v1;
+ }
+} {115 117}
+do_test auth-4.3 {
+ set authargs {}
+ execsql {
+ UPDATE v1 SET x=1 WHERE x=117
+ }
+ set authargs
+} [list \
+ SQLITE_UPDATE v1 x main {} \
+ SQLITE_READ v1 x main {} \
+ SQLITE_SELECT {} {} {} v1 \
+ SQLITE_READ t2 a main v1 \
+ SQLITE_READ t2 b main v1 \
+ SQLITE_INSERT v1chng {} main r2 \
+ SQLITE_READ v1 x main r2 \
+ SQLITE_READ v1 x main r2]
+do_test auth-4.4 {
+ execsql {
+ CREATE TRIGGER r3 INSTEAD OF DELETE ON v1 BEGIN
+ INSERT INTO v1chng VALUES(OLD.x,NULL);
+ END;
+ SELECT * FROM v1;
+ }
+} {115 117}
+do_test auth-4.5 {
+ set authargs {}
+ execsql {
+ DELETE FROM v1 WHERE x=117
+ }
+ set authargs
+} [list \
+ SQLITE_DELETE v1 {} main {} \
+ SQLITE_READ v1 x main {} \
+ SQLITE_SELECT {} {} {} v1 \
+ SQLITE_READ t2 a main v1 \
+ SQLITE_READ t2 b main v1 \
+ SQLITE_INSERT v1chng {} main r3 \
+ SQLITE_READ v1 x main r3]
+
+} ;# ifcapable view && trigger
+
+# Ticket #1338: Make sure authorization works in the presence of an AS
+# clause.
+#
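+# The cnt alias names an aggregate expression rather than a table column;
+# the point here is simply that the query still runs correctly when an
+# authorizer is installed.
+#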
+do_test auth-5.1 {
+ proc auth {code arg1 arg2 arg3 arg4} {
+ return SQLITE_OK
+ }
+ execsql {
+ SELECT count(a) AS cnt FROM t4 ORDER BY cnt
+ }
+} {1}
+
+# Ticket #1607
+#
+ifcapable compound&&subquery {
+ ifcapable trigger {
+ execsql {
+ DROP TABLE tx;
+ }
+ ifcapable view {
+ execsql {
+ DROP TABLE v1chng;
+ }
+ }
+ }
+ do_test auth-5.2 {
+ execsql {
+ SELECT name FROM (
+ SELECT * FROM sqlite_master UNION ALL SELECT * FROM sqlite_temp_master)
+ WHERE type='table'
+ ORDER BY name
+ }
+ } {sqlite_stat1 t1 t2 t3 t4}
+}
+
+
+rename proc {}
+rename proc_real proc
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/auth2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/auth2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,75 @@
+# 2006 Aug 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing the sqlite3_set_authorizer() API
+# and related functionality.
+#
+# $Id: auth2.test,v 1.1 2006/08/24 14:59:46 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# disable this test if the SQLITE_OMIT_AUTHORIZATION macro is
+# defined during compilation.
+if {[catch {db auth {}} msg]} {
+ finish_test
+ return
+}
+
+do_test auth2-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(1,2,3);
+ }
+ set ::flist {}
+ proc auth {code arg1 arg2 arg3 arg4} {
+ if {$code=="SQLITE_FUNCTION"} {
+ lappend ::flist $arg2
+ if {$arg2=="max"} {
+ return SQLITE_DENY
+ } elseif {$arg2=="min"} {
+ return SQLITE_IGNORE
+ } else {
+ return SQLITE_OK
+ }
+ }
+ return SQLITE_OK
+ }
+ db authorizer ::auth
+ catchsql {SELECT max(a,b,c) FROM t1}
+} {1 {not authorized to use function: max}}
+do_test auth2-1.2 {
+ set ::flist
+} max
+do_test auth2-1.3 {
+ set ::flist {}
+ catchsql {SELECT min(a,b,c) FROM t1}
+} {0 {{}}}
+do_test auth2-1.4 {
+ set ::flist
+} min
+do_test auth2-1.5 {
+ set ::flist {}
+ catchsql {SELECT coalesce(min(a,b,c),999) FROM t1}
+} {0 999}
+do_test auth2-1.6 {
+ set ::flist
+} {coalesce min}
+do_test auth2-1.7 {
+ set ::flist {}
+ catchsql {SELECT coalesce(a,b,c) FROM t1}
+} {0 1}
+do_test auth2-1.8 {
+ set ::flist
+} coalesce
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/autoinc.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/autoinc.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,536 @@
+# 2004 November 12
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing the AUTOINCREMENT features.
+#
+# $Id: autoinc.test,v 1.9 2006/01/03 00:33:50 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If the library is not compiled with autoincrement support then
+# skip all tests in this file.
+#
+ifcapable {!autoinc} {
+ finish_test
+ return
+}
+
+# The database is initially empty.
+#
+do_test autoinc-1.1 {
+ execsql {
+ SELECT name FROM sqlite_master WHERE type='table';
+ }
+} {}
+
+# Add a table with the AUTOINCREMENT feature. Verify that the
+# SQLITE_SEQUENCE table gets created.
+#
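+# Creating the first AUTOINCREMENT table in a database also creates the
+# internal sqlite_sequence table as a side effect. It holds one (name, seq)
+# row per AUTOINCREMENT table, recording the largest ROWID that has ever
+# appeared in that table.
+#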
+do_test autoinc-1.2 {
+ execsql {
+ CREATE TABLE t1(x INTEGER PRIMARY KEY AUTOINCREMENT, y);
+ SELECT name FROM sqlite_master WHERE type='table';
+ }
+} {t1 sqlite_sequence}
+
+# The SQLITE_SEQUENCE table is initially empty
+#
+do_test autoinc-1.3 {
+ execsql {
+ SELECT * FROM sqlite_sequence;
+ }
+} {}
+
+# Close and reopen the database. Verify that everything is still there.
+#
+do_test autoinc-1.4 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM sqlite_sequence;
+ }
+} {}
+
+# We are not allowed to drop the sqlite_sequence table.
+#
+do_test autoinc-1.5 {
+ catchsql {DROP TABLE sqlite_sequence}
+} {1 {table sqlite_sequence may not be dropped}}
+do_test autoinc-1.6 {
+ execsql {SELECT name FROM sqlite_master WHERE type='table'}
+} {t1 sqlite_sequence}
+
+# Insert entries into the t1 table and make sure the largest key
+# is always recorded in the sqlite_sequence table.
+#
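+# The recorded value only ever moves forward: deleting rows (autoinc-2.6,
+# 2.8) or inserting keys smaller than the current maximum (autoinc-2.3,
+# 2.9) leaves it unchanged, while an explicit key larger than the current
+# value raises it (autoinc-2.4, 2.12).
+#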
+do_test autoinc-2.1 {
+ execsql {
+ SELECT * FROM sqlite_sequence
+ }
+} {}
+do_test autoinc-2.2 {
+ execsql {
+ INSERT INTO t1 VALUES(12,34);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 12}
+do_test autoinc-2.3 {
+ execsql {
+ INSERT INTO t1 VALUES(1,23);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 12}
+do_test autoinc-2.4 {
+ execsql {
+ INSERT INTO t1 VALUES(123,456);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 123}
+do_test autoinc-2.5 {
+ execsql {
+ INSERT INTO t1 VALUES(NULL,567);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 124}
+do_test autoinc-2.6 {
+ execsql {
+ DELETE FROM t1 WHERE y=567;
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 124}
+do_test autoinc-2.7 {
+ execsql {
+ INSERT INTO t1 VALUES(NULL,567);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 125}
+do_test autoinc-2.8 {
+ execsql {
+ DELETE FROM t1;
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 125}
+do_test autoinc-2.9 {
+ execsql {
+ INSERT INTO t1 VALUES(12,34);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 125}
+do_test autoinc-2.10 {
+ execsql {
+ INSERT INTO t1 VALUES(125,456);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 125}
+do_test autoinc-2.11 {
+ execsql {
+ INSERT INTO t1 VALUES(-1234567,-1);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 125}
+do_test autoinc-2.12 {
+ execsql {
+ INSERT INTO t1 VALUES(234,5678);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 234}
+do_test autoinc-2.13 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(NULL,1);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 235}
+do_test autoinc-2.14 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {235 1}
+
+# Manually change the autoincrement values in sqlite_sequence.
+#
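+# The seq column is an ordinary value and may be updated, set to NULL or a
+# non-numeric string, or have its row deleted outright. As the tests below
+# show, the next automatically chosen key is one larger than the greater of
+# the stored seq value and the largest ROWID currently in the table; a
+# NULL, non-numeric, or missing seq is simply ignored.
+#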
+do_test autoinc-2.20 {
+ execsql {
+ UPDATE sqlite_sequence SET seq=1234 WHERE name='t1';
+ INSERT INTO t1 VALUES(NULL,2);
+ SELECT * FROM t1;
+ }
+} {235 1 1235 2}
+do_test autoinc-2.21 {
+ execsql {
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 1235}
+do_test autoinc-2.22 {
+ execsql {
+ UPDATE sqlite_sequence SET seq=NULL WHERE name='t1';
+ INSERT INTO t1 VALUES(NULL,3);
+ SELECT * FROM t1;
+ }
+} {235 1 1235 2 1236 3}
+do_test autoinc-2.23 {
+ execsql {
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 1236}
+do_test autoinc-2.24 {
+ execsql {
+ UPDATE sqlite_sequence SET seq='a-string' WHERE name='t1';
+ INSERT INTO t1 VALUES(NULL,4);
+ SELECT * FROM t1;
+ }
+} {235 1 1235 2 1236 3 1237 4}
+do_test autoinc-2.25 {
+ execsql {
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 1237}
+do_test autoinc-2.26 {
+ execsql {
+ DELETE FROM sqlite_sequence WHERE name='t1';
+ INSERT INTO t1 VALUES(NULL,5);
+ SELECT * FROM t1;
+ }
+} {235 1 1235 2 1236 3 1237 4 1238 5}
+do_test autoinc-2.27 {
+ execsql {
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 1238}
+do_test autoinc-2.28 {
+ execsql {
+ UPDATE sqlite_sequence SET seq='12345678901234567890'
+ WHERE name='t1';
+ INSERT INTO t1 VALUES(NULL,6);
+ SELECT * FROM t1;
+ }
+} {235 1 1235 2 1236 3 1237 4 1238 5 1239 6}
+do_test autoinc-2.29 {
+ execsql {
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 1239}
+
+# Test multi-row inserts
+#
+do_test autoinc-2.50 {
+ execsql {
+ DELETE FROM t1 WHERE y>=3;
+ INSERT INTO t1 SELECT NULL, y+2 FROM t1;
+ SELECT * FROM t1;
+ }
+} {235 1 1235 2 1240 3 1241 4}
+do_test autoinc-2.51 {
+ execsql {
+ SELECT * FROM sqlite_sequence
+ }
+} {t1 1241}
+
+ifcapable tempdb {
+ do_test autoinc-2.52 {
+ execsql {
+ CREATE TEMP TABLE t2 AS SELECT y FROM t1;
+ INSERT INTO t1 SELECT NULL, y+4 FROM t2;
+ SELECT * FROM t1;
+ }
+ } {235 1 1235 2 1240 3 1241 4 1242 5 1243 6 1244 7 1245 8}
+ do_test autoinc-2.53 {
+ execsql {
+ SELECT * FROM sqlite_sequence
+ }
+ } {t1 1245}
+ do_test autoinc-2.54 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 SELECT NULL, y FROM t2;
+ SELECT * FROM t1;
+ }
+ } {1246 1 1247 2 1248 3 1249 4}
+ do_test autoinc-2.55 {
+ execsql {
+ SELECT * FROM sqlite_sequence
+ }
+ } {t1 1249}
+}
+
+# Create multiple AUTOINCREMENT tables. Make sure all sequences are
+# tracked separately and do not interfere with one another.
+#
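+# Each AUTOINCREMENT table gets its own (name, seq) row in sqlite_sequence,
+# keyed by table name, so advancing one counter (autoinc-2.72 pushes t1 to
+# 10000) leaves the others untouched.
+#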
+do_test autoinc-2.70 {
+ catchsql {
+ DROP TABLE t2;
+ }
+ execsql {
+ CREATE TABLE t2(d, e INTEGER PRIMARY KEY AUTOINCREMENT, f);
+ INSERT INTO t2(d) VALUES(1);
+ SELECT * FROM sqlite_sequence;
+ }
+} [ifcapable tempdb {list t1 1249 t2 1} else {list t1 1241 t2 1}]
+do_test autoinc-2.71 {
+ execsql {
+ INSERT INTO t2(d) VALUES(2);
+ SELECT * FROM sqlite_sequence;
+ }
+} [ifcapable tempdb {list t1 1249 t2 2} else {list t1 1241 t2 2}]
+do_test autoinc-2.72 {
+ execsql {
+ INSERT INTO t1(x) VALUES(10000);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 10000 t2 2}
+do_test autoinc-2.73 {
+ execsql {
+ CREATE TABLE t3(g INTEGER PRIMARY KEY AUTOINCREMENT, h);
+ INSERT INTO t3(h) VALUES(1);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 10000 t2 2 t3 1}
+do_test autoinc-2.74 {
+ execsql {
+ INSERT INTO t2(d,e) VALUES(3,100);
+ SELECT * FROM sqlite_sequence;
+ }
+} {t1 10000 t2 100 t3 1}
+
+
+# When a table with an AUTOINCREMENT column is dropped, the corresponding entry
+# in the SQLITE_SEQUENCE table should also be deleted. But the SQLITE_SEQUENCE
+# table itself should remain behind.
+#
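+# Note that sqlite_sequence itself may not be dropped (autoinc-1.5); it
+# stays in the schema even after the last AUTOINCREMENT table is gone
+# (autoinc-3.4), merely becoming empty.
+#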
+do_test autoinc-3.1 {
+ execsql {SELECT name FROM sqlite_sequence}
+} {t1 t2 t3}
+do_test autoinc-3.2 {
+ execsql {
+ DROP TABLE t1;
+ SELECT name FROM sqlite_sequence;
+ }
+} {t2 t3}
+do_test autoinc-3.3 {
+ execsql {
+ DROP TABLE t3;
+ SELECT name FROM sqlite_sequence;
+ }
+} {t2}
+do_test autoinc-3.4 {
+ execsql {
+ DROP TABLE t2;
+ SELECT name FROM sqlite_sequence;
+ }
+} {}
+
+# AUTOINCREMENT on TEMP tables.
+#
+ifcapable tempdb {
+ do_test autoinc-4.1 {
+ execsql {
+ SELECT 1, name FROM sqlite_master WHERE type='table';
+ SELECT 2, name FROM sqlite_temp_master WHERE type='table';
+ }
+ } {1 sqlite_sequence}
+ do_test autoinc-4.2 {
+ execsql {
+ CREATE TABLE t1(x INTEGER PRIMARY KEY AUTOINCREMENT, y);
+ CREATE TEMP TABLE t3(a INTEGER PRIMARY KEY AUTOINCREMENT, b);
+ SELECT 1, name FROM sqlite_master WHERE type='table';
+ SELECT 2, name FROM sqlite_temp_master WHERE type='table';
+ }
+ } {1 sqlite_sequence 1 t1 2 t3 2 sqlite_sequence}
+ do_test autoinc-4.3 {
+ execsql {
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ }
+ } {}
+ do_test autoinc-4.4 {
+ execsql {
+ INSERT INTO t1 VALUES(10,1);
+ INSERT INTO t3 VALUES(20,2);
+ INSERT INTO t1 VALUES(NULL,3);
+ INSERT INTO t3 VALUES(NULL,4);
+ }
+ } {}
+
+ ifcapable compound {
+ do_test autoinc-4.4.1 {
+ execsql {
+ SELECT * FROM t1 UNION ALL SELECT * FROM t3;
+ }
+ } {10 1 11 3 20 2 21 4}
+ } ;# ifcapable compound
+
+ do_test autoinc-4.5 {
+ execsql {
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ }
+ } {1 t1 11 2 t3 21}
+ do_test autoinc-4.6 {
+ execsql {
+ INSERT INTO t1 SELECT * FROM t3;
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ }
+ } {1 t1 21 2 t3 21}
+ do_test autoinc-4.7 {
+ execsql {
+ INSERT INTO t3 SELECT x+100, y FROM t1;
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ }
+ } {1 t1 21 2 t3 121}
+ do_test autoinc-4.8 {
+ execsql {
+ DROP TABLE t3;
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ }
+ } {1 t1 21}
+ do_test autoinc-4.9 {
+ execsql {
+ CREATE TEMP TABLE t2(p INTEGER PRIMARY KEY AUTOINCREMENT, q);
+ INSERT INTO t2 SELECT * FROM t1;
+ DROP TABLE t1;
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ }
+ } {2 t2 21}
+ do_test autoinc-4.10 {
+ execsql {
+ DROP TABLE t2;
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ }
+ } {}
+}
+
+# Make sure AUTOINCREMENT works on ATTACH-ed tables.
+#
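+# The sqlite_sequence table that gets updated is the one living in the same
+# database as the AUTOINCREMENT table being written, so inserts into the
+# attached tables below show up only in aux.sqlite_sequence.
+#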
+ifcapable tempdb {
+ do_test autoinc-5.1 {
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ sqlite3 db2 test2.db
+ execsql {
+ CREATE TABLE t4(m INTEGER PRIMARY KEY AUTOINCREMENT, n);
+ CREATE TABLE t5(o, p INTEGER PRIMARY KEY AUTOINCREMENT);
+ } db2;
+ execsql {
+ ATTACH 'test2.db' as aux;
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ SELECT 3, * FROM aux.sqlite_sequence;
+ }
+ } {}
+ do_test autoinc-5.2 {
+ execsql {
+ INSERT INTO t4 VALUES(NULL,1);
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ SELECT 3, * FROM aux.sqlite_sequence;
+ }
+ } {3 t4 1}
+ do_test autoinc-5.3 {
+ execsql {
+ INSERT INTO t5 VALUES(100,200);
+ SELECT * FROM sqlite_sequence
+ } db2
+ } {t4 1 t5 200}
+ do_test autoinc-5.4 {
+ execsql {
+ SELECT 1, * FROM main.sqlite_sequence;
+ SELECT 2, * FROM temp.sqlite_sequence;
+ SELECT 3, * FROM aux.sqlite_sequence;
+ }
+ } {3 t4 1 3 t5 200}
+}
+
+# Requirement REQ00310: Make sure an insert fails if the sequence is
+# already at its maximum value.
+#
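+# The largest possible ROWID is 2147483647 (2^31-1) in builds compiled for
+# 32-bit ROWIDs and 9223372036854775807 (2^63-1) otherwise. Once the
+# sequence has reached that ceiling, an insert that asks for an automatic
+# key fails with an SQLITE_FULL "database or disk is full" error, as
+# autoinc-6.2 verifies.
+#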
+ifcapable {rowid32} {
+ do_test autoinc-6.1 {
+ execsql {
+ CREATE TABLE t6(v INTEGER PRIMARY KEY AUTOINCREMENT, w);
+ INSERT INTO t6 VALUES(2147483647,1);
+ SELECT seq FROM main.sqlite_sequence WHERE name='t6';
+ }
+ } 2147483647
+}
+ifcapable {!rowid32} {
+ do_test autoinc-6.1 {
+ execsql {
+ CREATE TABLE t6(v INTEGER PRIMARY KEY AUTOINCREMENT, w);
+ INSERT INTO t6 VALUES(9223372036854775807,1);
+ SELECT seq FROM main.sqlite_sequence WHERE name='t6';
+ }
+ } 9223372036854775807
+}
+do_test autoinc-6.2 {
+ catchsql {
+ INSERT INTO t6 VALUES(NULL,1);
+ }
+} {1 {database or disk is full}}
+
+# Allow the AUTOINCREMENT keyword inside the parentheses
+# on a separate PRIMARY KEY designation.
+#
+do_test autoinc-7.1 {
+ execsql {
+ CREATE TABLE t7(x INTEGER, y REAL, PRIMARY KEY(x AUTOINCREMENT));
+ INSERT INTO t7(y) VALUES(123);
+ INSERT INTO t7(y) VALUES(234);
+ DELETE FROM t7;
+ INSERT INTO t7(y) VALUES(345);
+ SELECT * FROM t7;
+ }
+} {3 345.0}
+
+# Test that if AUTOINCREMENT is applied to a non-integer primary key the
+# error message is sensible.
+do_test autoinc-7.2 {
+ catchsql {
+ CREATE TABLE t8(x TEXT PRIMARY KEY AUTOINCREMENT);
+ }
+} {1 {AUTOINCREMENT is only allowed on an INTEGER PRIMARY KEY}}
+
+
+# Ticket #1283. Make sure that preparing but never running a statement
+# that creates the sqlite_sequence table does not mess up the database.
+#
+do_test autoinc-8.1 {
+ catch {db2 close}
+ catch {db close}
+ file delete -force test.db
+ sqlite3 db test.db
+ set DB [sqlite3_connection_pointer db]
+ set STMT [sqlite3_prepare $DB {
+ CREATE TABLE t1(
+ x INTEGER PRIMARY KEY AUTOINCREMENT
+ )
+ } -1 TAIL]
+ sqlite3_finalize $STMT
+ set STMT [sqlite3_prepare $DB {
+ CREATE TABLE t1(
+ x INTEGER PRIMARY KEY AUTOINCREMENT
+ )
+ } -1 TAIL]
+ sqlite3_step $STMT
+ sqlite3_finalize $STMT
+ execsql {
+ INSERT INTO t1 VALUES(NULL);
+ SELECT * FROM t1;
+ }
+} {1}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/autovacuum.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/autovacuum.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,582 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the auto-vacuum feature.
+#
+# $Id: autovacuum.test,v 1.24 2006/08/12 12:33:15 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If this build of the library does not support auto-vacuum, omit this
+# whole file.
+ifcapable {!autovacuum || !pragma} {
+ finish_test
+ return
+}
+
+# Return a string $len characters long. The returned string is "$char."
+# repeated over and over. For example, [make_str abc 8] returns "abc.abc.".
+proc make_str {char len} {
+ set str [string repeat $char. $len]
+ return [string range $str 0 [expr $len-1]]
+}
+
+# Return the number of pages in the file test.db by looking at the file system.
+proc file_pages {} {
+ return [expr [file size test.db] / 1024]
+}
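+# file_pages assumes the 1024-byte page size used throughout this file. As a
+# hedged aside, a standalone equivalent (plain tclsh with the sqlite3 Tcl
+# package, not this fixture; "demo" is an illustrative handle) can derive the
+# same count from PRAGMA page_size. Wrapped in "if 0" so it never runs here:
+if 0 {
+ package require sqlite3
+ sqlite3 demo test.db
+ set pgsz [demo onecolumn {PRAGMA page_size}]
+ puts [expr {[file size test.db] / $pgsz}]
+ demo close
+}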
+
+#-------------------------------------------------------------------------
+# Test cases autovacuum-1.* work as follows:
+#
+# 1. A table with a single indexed field is created.
+# 2. Approximately 20 rows are inserted into the table. Each row is long
+# enough such that it uses at least 2 overflow pages for both the table
+# and index entry.
+# 3. The rows are deleted in a pseudo-random order. Sometimes only one row
+# is deleted per transaction, sometimes more than one.
+# 4. After each transaction the table data is checked to ensure it is correct
+# and a "PRAGMA integrity_check" is executed.
+# 5. Once all the rows are deleted the file is checked to make sure it
+# consists of exactly 4 pages.
+#
+# Steps 2-5 are repeated for a few different pseudo-random delete patterns
+# (defined by the $delete_orders list).
+set delete_orders [list]
+lappend delete_orders {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20}
+lappend delete_orders {20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1}
+lappend delete_orders {8 18 2 4 14 11 13 3 10 7 9 5 12 17 19 15 20 6 16 1}
+lappend delete_orders {10 3 11 17 19 20 7 4 13 6 1 14 16 12 9 18 8 15 5 2}
+lappend delete_orders {{1 2 3 4 5 6 7 8 9 10} {11 12 13 14 15 16 17 18 19 20}}
+lappend delete_orders {{19 8 17 15} {16 11 9 14} {18 5 3 1} {13 20 7 2} {6 12}}
+
+# The length of each table entry.
+# set ENTRY_LEN 3500
+set ENTRY_LEN 3500
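+# Rough arithmetic behind ENTRY_LEN (a sketch only; the exact split between
+# the in-page portion of a cell and its overflow chain depends on the page
+# size and cell overhead of the build): with 1024-byte pages an overflow page
+# carries a 4-byte next-page pointer, leaving about 1020 payload bytes, so a
+# 3500-byte value spills onto several overflow pages for both the table row
+# and its index entry, which is what step 2 above relies on. Wrapped in
+# "if 0" so it never runs here:
+if 0 {
+ puts [expr {int(ceil(3500.0 / 1020))}] ;# ~4 pages if nothing stayed in-page
+}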
+
+do_test autovacuum-1.1 {
+ execsql {
+ PRAGMA auto_vacuum = 1;
+ CREATE TABLE av1(a);
+ CREATE INDEX av1_idx ON av1(a);
+ }
+} {}
+
+set tn 0
+foreach delete_order $delete_orders {
+ incr tn
+
+ # Set up the table.
+ set ::tbl_data [list]
+ foreach i [lsort -integer [eval concat $delete_order]] {
+ execsql "INSERT INTO av1 (oid, a) VALUES($i, '[make_str $i $ENTRY_LEN]')"
+ lappend ::tbl_data [make_str $i $ENTRY_LEN]
+ }
+
+ # Make sure the integrity check passes with the initial data.
+ ifcapable {integrityck} {
+ do_test autovacuum-1.$tn.1 {
+ execsql {
+ pragma integrity_check
+ }
+ } {ok}
+ }
+
+ foreach delete $delete_order {
+ # Delete one set of rows from the table.
+ do_test autovacuum-1.$tn.($delete).1 {
+ execsql "
+ DELETE FROM av1 WHERE oid = [join $delete " OR oid = "]
+ "
+ } {}
+
+ # Do the integrity check.
+ ifcapable {integrityck} {
+ do_test autovacuum-1.$tn.($delete).2 {
+ execsql {
+ pragma integrity_check
+ }
+ } {ok}
+ }
+ # Ensure the data remaining in the table is what was expected.
+ foreach d $delete {
+ set idx [lsearch $::tbl_data [make_str $d $ENTRY_LEN]]
+ set ::tbl_data [lreplace $::tbl_data $idx $idx]
+ }
+ do_test autovacuum-1.$tn.($delete).3 {
+ execsql {
+ select a from av1
+ }
+ } $::tbl_data
+ }
+
+ # All rows have been deleted. Ensure the file has shrunk to 4 pages.
+ do_test autovacuum-1.$tn.3 {
+ file_pages
+ } {4}
+}
+
+#---------------------------------------------------------------------------
+# Tests cases autovacuum-2.* test that root pages are allocated
+# and deallocated correctly at the start of the file. Operation is roughly as
+# follows:
+#
+# autovacuum-2.1.*: Drop the tables that currently exist in the database.
+# autovacuum-2.2.*: Create some tables. Ensure that data pages can be
+# moved correctly to make space for new root-pages.
+# autovacuum-2.3.*: Drop one of the tables just created (not the last one),
+# and check that one of the other tables is moved to
+# the free root-page location.
+# autovacuum-2.4.*: Check that a table can be created correctly when the
+# root-page it requires is on the free-list.
+# autovacuum-2.5.*: Check that a table with indices can be dropped. This
+# is slightly tricky because dropping one of the
+# indices/table btrees could move the root-page of another.
+# The code-generation layer of SQLite overcomes this problem
+# by dropping the btrees in descending order of root-pages.
+# This test ensures that this actually happens.
+#
+do_test autovacuum-2.1.1 {
+ execsql {
+ DROP TABLE av1;
+ }
+} {}
+do_test autovacuum-2.1.2 {
+ file_pages
+} {1}
+
+# Create a table and put some data in it.
+do_test autovacuum-2.2.1 {
+ execsql {
+ CREATE TABLE av1(x);
+ SELECT rootpage FROM sqlite_master ORDER BY rootpage;
+ }
+} {3}
+do_test autovacuum-2.2.2 {
+ execsql "
+ INSERT INTO av1 VALUES('[make_str abc 3000]');
+ INSERT INTO av1 VALUES('[make_str def 3000]');
+ INSERT INTO av1 VALUES('[make_str ghi 3000]');
+ INSERT INTO av1 VALUES('[make_str jkl 3000]');
+ "
+ set ::av1_data [db eval {select * from av1}]
+ file_pages
+} {15}
+
+# Create another table. Check it is located immediately after the first.
+# This test case moves the second page in an over-flow chain.
+do_test autovacuum-2.2.3 {
+ execsql {
+ CREATE TABLE av2(x);
+ SELECT rootpage FROM sqlite_master ORDER BY rootpage;
+ }
+} {3 4}
+do_test autovacuum-2.2.4 {
+ file_pages
+} {16}
+
+# Create another table. Check it is located immediately after the second.
+# This test case moves the first page in an over-flow chain.
+do_test autovacuum-2.2.5 {
+ execsql {
+ CREATE TABLE av3(x);
+ SELECT rootpage FROM sqlite_master ORDER BY rootpage;
+ }
+} {3 4 5}
+do_test autovacuum-2.2.6 {
+ file_pages
+} {17}
+
+# Create another table. Check it is located immediately after the third.
+# This test case moves a btree leaf page.
+do_test autovacuum-2.2.7 {
+ execsql {
+ CREATE TABLE av4(x);
+ SELECT rootpage FROM sqlite_master ORDER BY rootpage;
+ }
+} {3 4 5 6}
+do_test autovacuum-2.2.8 {
+ file_pages
+} {18}
+do_test autovacuum-2.2.9 {
+ execsql {
+ select * from av1
+ }
+} $av1_data
+
+do_test autovacuum-2.3.1 {
+ execsql {
+ INSERT INTO av2 SELECT 'av1' || x FROM av1;
+ INSERT INTO av3 SELECT 'av2' || x FROM av1;
+ INSERT INTO av4 SELECT 'av3' || x FROM av1;
+ }
+ set ::av2_data [execsql {select x from av2}]
+ set ::av3_data [execsql {select x from av3}]
+ set ::av4_data [execsql {select x from av4}]
+ file_pages
+} {54}
+do_test autovacuum-2.3.2 {
+ execsql {
+ DROP TABLE av2;
+ SELECT rootpage FROM sqlite_master ORDER BY rootpage;
+ }
+} {3 4 5}
+do_test autovacuum-2.3.3 {
+ file_pages
+} {41}
+do_test autovacuum-2.3.4 {
+ execsql {
+ SELECT x FROM av3;
+ }
+} $::av3_data
+do_test autovacuum-2.3.5 {
+ execsql {
+ SELECT x FROM av4;
+ }
+} $::av4_data
+
+# Drop all the tables in the file. This puts all pages except the first 2
+# (the sqlite_master root-page and the first pointer map page) on the
+# free-list.
+do_test autovacuum-2.4.1 {
+ execsql {
+ DROP TABLE av1;
+ DROP TABLE av3;
+ BEGIN;
+ DROP TABLE av4;
+ }
+ file_pages
+} {15}
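+# The free-list mentioned above is not directly visible through SQL in this
+# 2006 snapshot. As a hedged aside, later SQLite releases added a
+# freelist_count pragma that reports its size; a standalone sketch (plain
+# tclsh with such a build, illustrative "demo" handle), wrapped in "if 0" so
+# it never runs here:
+if 0 {
+ package require sqlite3
+ sqlite3 demo test.db
+ puts [demo onecolumn {PRAGMA freelist_count}]
+ demo close
+}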
+do_test autovacuum-2.4.2 {
+ for {set i 3} {$i<=10} {incr i} {
+ execsql "CREATE TABLE av$i (x)"
+ }
+ file_pages
+} {15}
+do_test autovacuum-2.4.3 {
+ execsql {
+ SELECT rootpage FROM sqlite_master ORDER by rootpage
+ }
+} {3 4 5 6 7 8 9 10}
+
+# Right now there are 5 free pages in the database. Consume and then free
+# 520 pages. Then create 520 tables. This ensures that at least some of the
+# desired root-pages reside on the second free-list trunk page, and that the
+# trunk itself is required at some point.
+do_test autovacuum-2.4.4 {
+ execsql "
+ INSERT INTO av3 VALUES ('[make_str abcde [expr 1020*520 + 500]]');
+ DELETE FROM av3;
+ "
+} {}
+set root_page_list [list]
+set pending_byte_page [expr ($::sqlite_pending_byte / 1024) + 1]
+for {set i 3} {$i<=532} {incr i} {
+ # 207 and 412 are pointer-map pages.
+ if { $i!=207 && $i!=412 && $i != $pending_byte_page} {
+ lappend root_page_list $i
+ }
+}
+if {$i >= $pending_byte_page} {
+ lappend root_page_list $i
+}
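+# The pending-byte arithmetic above in a nutshell (a sketch; it assumes the
+# default pending byte of 0x40000000 and this file's 1024-byte pages): the
+# page containing the pending byte is the database's locking page, is never
+# used for content, and therefore can never become a root page. Wrapped in
+# "if 0" so it never runs here:
+if 0 {
+ puts [expr {(0x40000000 / 1024) + 1}] ;# 1048577 with the defaults
+}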
+do_test autovacuum-2.4.5 {
+ for {set i 11} {$i<=530} {incr i} {
+ execsql "CREATE TABLE av$i (x)"
+ }
+ execsql {
+ SELECT rootpage FROM sqlite_master ORDER by rootpage
+ }
+} $root_page_list
+
+# Just for fun, delete all those tables and see if the database is 1 page.
+do_test autovacuum-2.4.6 {
+ execsql COMMIT;
+ file_pages
+} [expr 561 + (($i >= $pending_byte_page)?1:0)]
+integrity_check autovacuum-2.4.6
+do_test autovacuum-2.4.7 {
+ execsql BEGIN
+ for {set i 3} {$i<=530} {incr i} {
+ execsql "DROP TABLE av$i"
+ }
+ execsql COMMIT
+ file_pages
+} 1
+
+# Create some tables with indices to drop.
+do_test autovacuum-2.5.1 {
+ execsql {
+ CREATE TABLE av1(a PRIMARY KEY, b, c);
+ INSERT INTO av1 VALUES('av1 a', 'av1 b', 'av1 c');
+
+ CREATE TABLE av2(a PRIMARY KEY, b, c);
+ CREATE INDEX av2_i1 ON av2(b);
+ CREATE INDEX av2_i2 ON av2(c);
+ INSERT INTO av2 VALUES('av2 a', 'av2 b', 'av2 c');
+
+ CREATE TABLE av3(a PRIMARY KEY, b, c);
+ CREATE INDEX av3_i1 ON av3(b);
+ INSERT INTO av3 VALUES('av3 a', 'av3 b', 'av3 c');
+
+ CREATE TABLE av4(a, b, c);
+ CREATE INDEX av4_i1 ON av4(a);
+ CREATE INDEX av4_i2 ON av4(b);
+ CREATE INDEX av4_i3 ON av4(c);
+ CREATE INDEX av4_i4 ON av4(a, b, c);
+ INSERT INTO av4 VALUES('av4 a', 'av4 b', 'av4 c');
+ }
+} {}
+
+do_test autovacuum-2.5.2 {
+ execsql {
+ SELECT name, rootpage FROM sqlite_master;
+ }
+} [list av1 3 sqlite_autoindex_av1_1 4 \
+ av2 5 sqlite_autoindex_av2_1 6 av2_i1 7 av2_i2 8 \
+ av3 9 sqlite_autoindex_av3_1 10 av3_i1 11 \
+ av4 12 av4_i1 13 av4_i2 14 av4_i3 15 av4_i4 16 \
+]
+
+# The following 4 tests are SELECT queries that use the indices created.
+# If the root-pages in the internal schema are not updated correctly when
+# a table or index is moved, these queries will fail. They are repeated
+# after each table is dropped (i.e. as test cases 2.5.*.[1..4]).
+do_test autovacuum-2.5.2.1 {
+ execsql {
+ SELECT * FROM av1 WHERE a = 'av1 a';
+ }
+} {{av1 a} {av1 b} {av1 c}}
+do_test autovacuum-2.5.2.2 {
+ execsql {
+ SELECT * FROM av2 WHERE a = 'av2 a' AND b = 'av2 b' AND c = 'av2 c'
+ }
+} {{av2 a} {av2 b} {av2 c}}
+do_test autovacuum-2.5.2.3 {
+ execsql {
+ SELECT * FROM av3 WHERE a = 'av3 a' AND b = 'av3 b';
+ }
+} {{av3 a} {av3 b} {av3 c}}
+do_test autovacuum-2.5.2.4 {
+ execsql {
+ SELECT * FROM av4 WHERE a = 'av4 a' AND b = 'av4 b' AND c = 'av4 c';
+ }
+} {{av4 a} {av4 b} {av4 c}}
+
+# Drop table av3. Indices av4_i2, av4_i3 and av4_i4 are moved to fill the
+# three root pages vacated. The operation proceeds as:
+# Step 1: Delete av3_i1 (root-page 11). Move root-page of av4_i4 to page 11.
+# Step 2: Delete sqlite_autoindex_av3_1 (root-page 10). Move av4_i3 to page 10.
+# Step 3: Delete av3 (root-page 9). Move root-page of av4_i2 to page 9.
+do_test autovacuum-2.5.3 {
+ execsql {
+ DROP TABLE av3;
+ SELECT name, rootpage FROM sqlite_master;
+ }
+} [list av1 3 sqlite_autoindex_av1_1 4 \
+ av2 5 sqlite_autoindex_av2_1 6 av2_i1 7 av2_i2 8 \
+ av4 12 av4_i1 13 av4_i2 9 av4_i3 10 av4_i4 11 \
+]
+do_test autovacuum-2.5.3.1 {
+ execsql {
+ SELECT * FROM av1 WHERE a = 'av1 a';
+ }
+} {{av1 a} {av1 b} {av1 c}}
+do_test autovacuum-2.5.3.2 {
+ execsql {
+ SELECT * FROM av2 WHERE a = 'av2 a' AND b = 'av2 b' AND c = 'av2 c'
+ }
+} {{av2 a} {av2 b} {av2 c}}
+do_test autovacuum-2.5.3.3 {
+ execsql {
+ SELECT * FROM av4 WHERE a = 'av4 a' AND b = 'av4 b' AND c = 'av4 c';
+ }
+} {{av4 a} {av4 b} {av4 c}}
+
+# Drop table av1:
+# Step 1: Delete sqlite_autoindex_av1_1 (root page 4). av4_i1 fills the gap.
+# Step 2: Delete av1 (root page 3). Move av4 to the gap.
+do_test autovacuum-2.5.4 {
+ execsql {
+ DROP TABLE av1;
+ SELECT name, rootpage FROM sqlite_master;
+ }
+} [list av2 5 sqlite_autoindex_av2_1 6 av2_i1 7 av2_i2 8 \
+ av4 3 av4_i1 4 av4_i2 9 av4_i3 10 av4_i4 11 \
+]
+do_test autovacuum-2.5.4.2 {
+ execsql {
+ SELECT * FROM av2 WHERE a = 'av2 a' AND b = 'av2 b' AND c = 'av2 c'
+ }
+} {{av2 a} {av2 b} {av2 c}}
+do_test autovacuum-2.5.4.4 {
+ execsql {
+ SELECT * FROM av4 WHERE a = 'av4 a' AND b = 'av4 b' AND c = 'av4 c';
+ }
+} {{av4 a} {av4 b} {av4 c}}
+
+# Drop table av4:
+# Step 1: Delete av4_i4.
+# Step 2: Delete av4_i3.
+# Step 3: Delete av4_i2.
+# Step 4: Delete av4_i1. av2_i2 replaces it.
+# Step 5: Delete av4. av2_i1 replaces it.
+do_test autovacuum-2.5.5 {
+ execsql {
+ DROP TABLE av4;
+ SELECT name, rootpage FROM sqlite_master;
+ }
+} [list av2 5 sqlite_autoindex_av2_1 6 av2_i1 3 av2_i2 4]
+do_test autovacuum-2.5.5.2 {
+ execsql {
+ SELECT * FROM av2 WHERE a = 'av2 a' AND b = 'av2 b' AND c = 'av2 c'
+ }
+} {{av2 a} {av2 b} {av2 c}}
+
+#--------------------------------------------------------------------------
+# Test cases autovacuum-3.* test the operation of the "PRAGMA auto_vacuum"
+# command.
+#
+do_test autovacuum-3.1 {
+ execsql {
+ PRAGMA auto_vacuum;
+ }
+} {1}
+do_test autovacuum-3.2 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ PRAGMA auto_vacuum;
+ }
+} {1}
+do_test autovacuum-3.3 {
+ execsql {
+ PRAGMA auto_vacuum = 0;
+ PRAGMA auto_vacuum;
+ }
+} {1}
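+# Note for test 3.3: in this snapshot the auto_vacuum flag is fixed in the
+# database header once the file contains data, so setting the pragma to 0 on
+# an existing database is a no-op and the query still reports 1. A hedged
+# standalone sketch of the same behaviour (plain tclsh plus the sqlite3 Tcl
+# package, illustrative "demo" handle), wrapped in "if 0" so it never runs
+# here:
+if 0 {
+ package require sqlite3
+ file delete -force av-demo.db
+ sqlite3 demo av-demo.db
+ demo eval {PRAGMA auto_vacuum = 1; CREATE TABLE t(x)}
+ demo eval {PRAGMA auto_vacuum = 0}
+ puts [demo onecolumn {PRAGMA auto_vacuum}] ;# expected to remain 1
+ demo close
+}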
+
+do_test autovacuum-3.4 {
+ db close
+ file delete -force test.db
+ sqlite3 db test.db
+ execsql {
+ PRAGMA auto_vacuum;
+ }
+} $AUTOVACUUM
+do_test autovacuum-3.5 {
+ execsql {
+ CREATE TABLE av1(x);
+ PRAGMA auto_vacuum;
+ }
+} $AUTOVACUUM
+do_test autovacuum-3.6 {
+ execsql {
+ PRAGMA auto_vacuum = 1;
+ PRAGMA auto_vacuum;
+ }
+} $AUTOVACUUM
+do_test autovacuum-3.7 {
+ execsql {
+ DROP TABLE av1;
+ }
+ file_pages
+} [expr $AUTOVACUUM?1:2]
+
+#-----------------------------------------------------------------------
+# Test that if a statement transaction around a CREATE INDEX statement is
+# rolled back no corruption occurs.
+#
+do_test autovacuum-4.1 {
+ execsql {
+ CREATE TABLE av1(a, b);
+ BEGIN;
+ }
+ for {set i 0} {$i<100} {incr i} {
+ execsql "INSERT INTO av1 VALUES($i, '[string repeat X 200]');"
+ }
+ execsql "INSERT INTO av1 VALUES(99, '[string repeat X 200]');"
+ execsql {
+ SELECT sum(a) FROM av1;
+ }
+} {5049}
+do_test autovacuum-4.2 {
+ catchsql {
+ CREATE UNIQUE INDEX av1_i ON av1(a);
+ }
+} {1 {indexed columns are not unique}}
+do_test autovacuum-4.3 {
+ execsql {
+ SELECT sum(a) FROM av1;
+ }
+} {5049}
+do_test autovacuum-4.4 {
+ execsql {
+ COMMIT;
+ }
+} {}
+
+ifcapable integrityck {
+
+# Ticket #1727
+do_test autovacuum-5.1 {
+ db close
+ sqlite3 db :memory:
+ db eval {
+ PRAGMA auto_vacuum=1;
+ CREATE TABLE t1(a);
+ CREATE TABLE t2(a);
+ DROP TABLE t1;
+ PRAGMA integrity_check;
+ }
+} ok
+
+}
+
+# Ticket #1728.
+#
+# In autovacuum mode, when tables or indices are deleted, the rootpage
+# values in the symbol table have to be updated. There was a bug in this
+# logic so that if an index/table was moved twice, the second move might
+# not occur. This would leave the internal symbol table in an inconsistent
+# state causing subsequent statements to fail.
+#
+# The problem is difficult to reproduce. The sequence of statements in
+# the following test is carefully designed to make it occur and thus to
+# verify that this very obscure bug has been resolved.
+#
+ifcapable integrityck&&memorydb {
+
+do_test autovacuum-6.1 {
+ db close
+ sqlite3 db :memory:
+ db eval {
+ PRAGMA auto_vacuum=1;
+ CREATE TABLE t1(a, b);
+ CREATE INDEX i1 ON t1(a);
+ CREATE TABLE t2(a);
+ CREATE INDEX i2 ON t2(a);
+ CREATE TABLE t3(a);
+ CREATE INDEX i3 ON t2(a);
+ CREATE INDEX x ON t1(b);
+ DROP TABLE t3;
+ PRAGMA integrity_check;
+ DROP TABLE t2;
+ PRAGMA integrity_check;
+ DROP TABLE t1;
+ PRAGMA integrity_check;
+ }
+} {ok ok ok}
+
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/autovacuum_crash.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/autovacuum_crash.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,58 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file runs the tests in the file crash.test with auto-vacuum enabled
+# databases.
+#
+# $Id: autovacuum_crash.test,v 1.2 2005/01/16 09:06:34 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If this build of the library does not support auto-vacuum, omit this
+# whole file.
+ifcapable {!autovacuum} {
+ finish_test
+ return
+}
+
+rename finish_test really_finish_test2
+proc finish_test {} {}
+set ISQUICK 1
+
+rename sqlite3 real_sqlite3
+proc sqlite3 {args} {
+ set r [eval "real_sqlite3 $args"]
+ if { [llength $args] == 2 } {
+ [lindex $args 0] eval {pragma auto_vacuum = 1}
+ }
+ set r
+}
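+# The rename-and-wrap idiom above (intercept a command, layer extra behaviour
+# on top, then delegate to the original) is plain Tcl. A self-contained
+# sketch using a made-up "greet" command, wrapped in "if 0" so it never runs
+# here:
+if 0 {
+ proc greet {name} { return "hello $name" }
+ rename greet real_greet
+ proc greet {args} {
+ # extra behaviour layered on the original command
+ set r [eval real_greet $args]
+ return [string toupper $r]
+ }
+ puts [greet world] ;# prints HELLO WORLD
+}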
+
+rename do_test really_do_test
+proc do_test {args} {
+ set sc [concat really_do_test "autovacuum-[lindex $args 0]" \
+ [lrange $args 1 end]]
+ eval $sc
+}
+
+source $testdir/crash.test
+
+rename sqlite3 ""
+rename real_sqlite3 sqlite3
+rename finish_test ""
+rename really_finish_test2 finish_test
+rename do_test ""
+rename really_do_test do_test
+finish_test
+
+
+
Added: freeswitch/trunk/libs/sqlite/test/autovacuum_ioerr.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/autovacuum_ioerr.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,58 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file runs the tests in the file ioerr.test with auto-vacuum enabled
+# databases.
+#
+# $Id: autovacuum_ioerr.test,v 1.3 2006/01/16 12:46:41 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If this build of the library does not support auto-vacuum, omit this
+# whole file.
+ifcapable {!autovacuum} {
+ finish_test
+ return
+}
+
+rename finish_test really_finish_test2
+proc finish_test {} {}
+set ISQUICK 1
+
+rename sqlite3 real_sqlite3
+proc sqlite3 {args} {
+ set r [eval "real_sqlite3 $args"]
+ if { [llength $args] == 2 } {
+ [lindex $args 0] eval {pragma auto_vacuum = 1}
+ }
+ set r
+}
+
+rename do_test really_do_test
+proc do_test {args} {
+ set sc [concat really_do_test "autovacuum-[lindex $args 0]" \
+ [lrange $args 1 end]]
+ eval $sc
+}
+
+source $testdir/ioerr.test
+
+rename sqlite3 ""
+rename real_sqlite3 sqlite3
+rename finish_test ""
+rename really_finish_test2 finish_test
+rename do_test ""
+rename really_do_test do_test
+finish_test
+
+
+
Added: freeswitch/trunk/libs/sqlite/test/autovacuum_ioerr2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/autovacuum_ioerr2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,120 @@
+# 2001 October 12
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing for correct handling of I/O errors
+# such as writes failing because the disk is full.
+#
+# The tests in this file use special facilities that are only
+# available in the SQLite test fixture.
+#
+# $Id: autovacuum_ioerr2.test,v 1.5 2005/01/29 09:14:05 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If this build of the library does not support auto-vacuum, omit this
+# whole file.
+ifcapable {!autovacuum} {
+ finish_test
+ return
+}
+
+do_ioerr_test autovacuum-ioerr2-1 -sqlprep {
+ PRAGMA auto_vacuum = 1;
+ CREATE TABLE abc(a);
+ INSERT INTO abc VALUES(randstr(1500,1500));
+} -sqlbody {
+ CREATE TABLE abc2(a);
+ BEGIN;
+ DELETE FROM abc;
+ INSERT INTO abc VALUES(randstr(1500,1500));
+ CREATE TABLE abc3(a);
+ COMMIT;
+}
+
+do_ioerr_test autovacuum-ioerr2-2 -tclprep {
+ execsql {
+ PRAGMA auto_vacuum = 1;
+ PRAGMA cache_size = 10;
+ BEGIN;
+ CREATE TABLE abc(a);
+ INSERT INTO abc VALUES(randstr(1100,1100)); -- Page 4 is overflow
+ INSERT INTO abc VALUES(randstr(1100,1100)); -- Page 5 is overflow
+ }
+ for {set i 0} {$i<150} {incr i} {
+ execsql {
+ INSERT INTO abc VALUES(randstr(100,100));
+ }
+ }
+ execsql COMMIT
+} -sqlbody {
+ BEGIN;
+ DELETE FROM abc WHERE length(a)>100;
+ UPDATE abc SET a = randstr(90,90);
+ CREATE TABLE abc3(a);
+ COMMIT;
+}
+
+do_ioerr_test autovacuum-ioerr2-3 -sqlprep {
+ PRAGMA auto_vacuum = 1;
+ CREATE TABLE abc(a);
+ CREATE TABLE abc2(b);
+} -sqlbody {
+ BEGIN;
+ INSERT INTO abc2 VALUES(10);
+ DROP TABLE abc;
+ COMMIT;
+ DROP TABLE abc2;
+}
+
+file delete -force backup.db
+ifcapable subquery {
+ do_ioerr_test autovacuum-ioerr2-4 -tclprep {
+ if {![file exists backup.db]} {
+ sqlite3 dbb backup.db
+ execsql {
+ PRAGMA auto_vacuum = 1;
+ BEGIN;
+ CREATE TABLE abc(a);
+ INSERT INTO abc VALUES(randstr(1100,1100)); -- Page 4 is overflow
+ INSERT INTO abc VALUES(randstr(1100,1100)); -- Page 5 is overflow
+ } dbb
+ for {set i 0} {$i<2500} {incr i} {
+ execsql {
+ INSERT INTO abc VALUES(randstr(100,100));
+ } dbb
+ }
+ execsql {
+ COMMIT;
+ PRAGMA cache_size = 10;
+ } dbb
+ dbb close
+ }
+ db close
+ file delete -force test.db
+ file delete -force test.db-journal
+ copy_file backup.db test.db
+ set ::DB [sqlite3 db test.db]
+ execsql {
+ PRAGMA cache_size = 10;
+ }
+ } -sqlbody {
+ BEGIN;
+ DELETE FROM abc WHERE oid < 3;
+ UPDATE abc SET a = randstr(100,100) WHERE oid > 2300;
+ UPDATE abc SET a = randstr(1100,1100) WHERE oid =
+ (select max(oid) from abc);
+ COMMIT;
+ }
+}
+
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/avtrans.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/avtrans.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,919 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. This
+# file is a copy of "trans.test" modified to run under autovacuum mode.
+# The point is to stress the autovacuum logic and try to get it to fail.
+#
+# $Id: avtrans.test,v 1.4 2006/02/11 01:25:51 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+# Create several tables to work with.
+#
+do_test avtrans-1.0 {
+ execsql {
+ PRAGMA auto_vacuum=ON;
+ CREATE TABLE one(a int PRIMARY KEY, b text);
+ INSERT INTO one VALUES(1,'one');
+ INSERT INTO one VALUES(2,'two');
+ INSERT INTO one VALUES(3,'three');
+ SELECT b FROM one ORDER BY a;
+ }
+} {one two three}
+do_test avtrans-1.1 {
+ execsql {
+ CREATE TABLE two(a int PRIMARY KEY, b text);
+ INSERT INTO two VALUES(1,'I');
+ INSERT INTO two VALUES(5,'V');
+ INSERT INTO two VALUES(10,'X');
+ SELECT b FROM two ORDER BY a;
+ }
+} {I V X}
+do_test avtrans-1.9 {
+ sqlite3 altdb test.db
+ execsql {SELECT b FROM one ORDER BY a} altdb
+} {one two three}
+do_test avtrans-1.10 {
+ execsql {SELECT b FROM two ORDER BY a} altdb
+} {I V X}
+integrity_check avtrans-1.11
+
+# Basic transactions
+#
+do_test avtrans-2.1 {
+ set v [catch {execsql {BEGIN}} msg]
+ lappend v $msg
+} {0 {}}
+do_test avtrans-2.2 {
+ set v [catch {execsql {END}} msg]
+ lappend v $msg
+} {0 {}}
+do_test avtrans-2.3 {
+ set v [catch {execsql {BEGIN TRANSACTION}} msg]
+ lappend v $msg
+} {0 {}}
+do_test avtrans-2.4 {
+ set v [catch {execsql {COMMIT TRANSACTION}} msg]
+ lappend v $msg
+} {0 {}}
+do_test avtrans-2.5 {
+ set v [catch {execsql {BEGIN TRANSACTION 'foo'}} msg]
+ lappend v $msg
+} {0 {}}
+do_test avtrans-2.6 {
+ set v [catch {execsql {ROLLBACK TRANSACTION 'foo'}} msg]
+ lappend v $msg
+} {0 {}}
+do_test avtrans-2.10 {
+ execsql {
+ BEGIN;
+ SELECT a FROM one ORDER BY a;
+ SELECT a FROM two ORDER BY a;
+ END;
+ }
+} {1 2 3 1 5 10}
+integrity_check avtrans-2.11
+
+# Check the locking behavior
+#
+do_test avtrans-3.1 {
+ execsql {
+ BEGIN;
+ UPDATE one SET a = 0 WHERE 0;
+ SELECT a FROM one ORDER BY a;
+ }
+} {1 2 3}
+do_test avtrans-3.2 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 5 10}}
+do_test avtrans-3.3 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3}}
+do_test avtrans-3.4 {
+ catchsql {
+ INSERT INTO one VALUES(4,'four');
+ }
+} {0 {}}
+do_test avtrans-3.5 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 5 10}}
+do_test avtrans-3.6 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3}}
+do_test avtrans-3.7 {
+ catchsql {
+ INSERT INTO two VALUES(4,'IV');
+ }
+} {0 {}}
+do_test avtrans-3.8 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 5 10}}
+do_test avtrans-3.9 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3}}
+do_test avtrans-3.10 {
+ execsql {END TRANSACTION}
+} {}
+do_test avtrans-3.11 {
+ set v [catch {execsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb} msg]
+ lappend v $msg
+} {0 {1 4 5 10}}
+do_test avtrans-3.12 {
+ set v [catch {execsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb} msg]
+ lappend v $msg
+} {0 {1 2 3 4}}
+do_test avtrans-3.13 {
+ set v [catch {execsql {
+ SELECT a FROM two ORDER BY a;
+ } db} msg]
+ lappend v $msg
+} {0 {1 4 5 10}}
+do_test avtrans-3.14 {
+ set v [catch {execsql {
+ SELECT a FROM one ORDER BY a;
+ } db} msg]
+ lappend v $msg
+} {0 {1 2 3 4}}
+integrity_check avtrans-3.15
+
+do_test avtrans-4.1 {
+ set v [catch {execsql {
+ COMMIT;
+ } db} msg]
+ lappend v $msg
+} {1 {cannot commit - no transaction is active}}
+do_test avtrans-4.2 {
+ set v [catch {execsql {
+ ROLLBACK;
+ } db} msg]
+ lappend v $msg
+} {1 {cannot rollback - no transaction is active}}
+do_test avtrans-4.3 {
+ catchsql {
+ BEGIN TRANSACTION;
+ UPDATE two SET a = 0 WHERE 0;
+ SELECT a FROM two ORDER BY a;
+ } db
+} {0 {1 4 5 10}}
+do_test avtrans-4.4 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 4 5 10}}
+do_test avtrans-4.5 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3 4}}
+do_test avtrans-4.6 {
+ catchsql {
+ BEGIN TRANSACTION;
+ SELECT a FROM one ORDER BY a;
+ } db
+} {1 {cannot start a transaction within a transaction}}
+do_test avtrans-4.7 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 4 5 10}}
+do_test avtrans-4.8 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3 4}}
+do_test avtrans-4.9 {
+ set v [catch {execsql {
+ END TRANSACTION;
+ SELECT a FROM two ORDER BY a;
+ } db} msg]
+ lappend v $msg
+} {0 {1 4 5 10}}
+do_test avtrans-4.10 {
+ set v [catch {execsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb} msg]
+ lappend v $msg
+} {0 {1 4 5 10}}
+do_test avtrans-4.11 {
+ set v [catch {execsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb} msg]
+ lappend v $msg
+} {0 {1 2 3 4}}
+integrity_check avtrans-4.12
+do_test avtrans-4.98 {
+ altdb close
+ execsql {
+ DROP TABLE one;
+ DROP TABLE two;
+ }
+} {}
+integrity_check avtrans-4.99
+
+# Check out the commit/rollback behavior of the database
+#
+do_test avtrans-5.1 {
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {}
+do_test avtrans-5.2 {
+ execsql {BEGIN TRANSACTION}
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {}
+do_test avtrans-5.3 {
+ execsql {CREATE TABLE one(a text, b int)}
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {one}
+do_test avtrans-5.4 {
+ execsql {SELECT a,b FROM one ORDER BY b}
+} {}
+do_test avtrans-5.5 {
+ execsql {INSERT INTO one(a,b) VALUES('hello', 1)}
+ execsql {SELECT a,b FROM one ORDER BY b}
+} {hello 1}
+do_test avtrans-5.6 {
+ execsql {ROLLBACK}
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {}
+do_test avtrans-5.7 {
+ set v [catch {
+ execsql {SELECT a,b FROM one ORDER BY b}
+ } msg]
+ lappend v $msg
+} {1 {no such table: one}}
+
+# Test commits and rollbacks of table CREATE TABLEs, CREATE INDEXs
+# DROP TABLEs and DROP INDEXs
+#
+do_test avtrans-5.8 {
+ execsql {
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name
+ }
+} {}
+do_test avtrans-5.9 {
+ execsql {
+ BEGIN TRANSACTION;
+ CREATE TABLE t1(a int, b int, c int);
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {t1}
+do_test avtrans-5.10 {
+ execsql {
+ CREATE INDEX i1 ON t1(a);
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i1 t1}
+do_test avtrans-5.11 {
+ execsql {
+ COMMIT;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i1 t1}
+do_test avtrans-5.12 {
+ execsql {
+ BEGIN TRANSACTION;
+ CREATE TABLE t2(a int, b int, c int);
+ CREATE INDEX i2a ON t2(a);
+ CREATE INDEX i2b ON t2(b);
+ DROP TABLE t1;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i2a i2b t2}
+do_test avtrans-5.13 {
+ execsql {
+ ROLLBACK;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i1 t1}
+do_test avtrans-5.14 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {t1}
+do_test avtrans-5.15 {
+ execsql {
+ ROLLBACK;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i1 t1}
+do_test avtrans-5.16 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ CREATE TABLE t2(x int, y int, z int);
+ CREATE INDEX i2x ON t2(x);
+ CREATE INDEX i2y ON t2(y);
+ INSERT INTO t2 VALUES(1,2,3);
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i2x i2y t1 t2}
+do_test avtrans-5.17 {
+ execsql {
+ COMMIT;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i2x i2y t1 t2}
+do_test avtrans-5.18 {
+ execsql {
+ SELECT * FROM t2;
+ }
+} {1 2 3}
+do_test avtrans-5.19 {
+ execsql {
+ SELECT x FROM t2 WHERE y=2;
+ }
+} {1}
+do_test avtrans-5.20 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ DROP TABLE t2;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {}
+do_test avtrans-5.21 {
+ set r [catch {execsql {
+ SELECT * FROM t2
+ }} msg]
+ lappend r $msg
+} {1 {no such table: t2}}
+do_test avtrans-5.22 {
+ execsql {
+ ROLLBACK;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i2x i2y t1 t2}
+do_test avtrans-5.23 {
+ execsql {
+ SELECT * FROM t2;
+ }
+} {1 2 3}
+integrity_check avtrans-5.23
+
+
+# Try to DROP and CREATE tables and indices with the same name
+# within a transaction. Make sure ROLLBACK works.
+#
+do_test avtrans-6.1 {
+ execsql2 {
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(p,q,r);
+ ROLLBACK;
+ SELECT * FROM t1;
+ }
+} {a 1 b 2 c 3}
+do_test avtrans-6.2 {
+ execsql2 {
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(p,q,r);
+ COMMIT;
+ SELECT * FROM t1;
+ }
+} {}
+do_test avtrans-6.3 {
+ execsql2 {
+ INSERT INTO t1 VALUES(1,2,3);
+ SELECT * FROM t1;
+ }
+} {p 1 q 2 r 3}
+do_test avtrans-6.4 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(4,5,6);
+ SELECT * FROM t1;
+ DROP TABLE t1;
+ }
+} {a 4 b 5 c 6}
+do_test avtrans-6.5 {
+ execsql2 {
+ ROLLBACK;
+ SELECT * FROM t1;
+ }
+} {p 1 q 2 r 3}
+do_test avtrans-6.6 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(4,5,6);
+ SELECT * FROM t1;
+ DROP TABLE t1;
+ }
+} {a 4 b 5 c 6}
+do_test avtrans-6.7 {
+ catchsql {
+ COMMIT;
+ SELECT * FROM t1;
+ }
+} {1 {no such table: t1}}
+
+# Repeat on a table with an automatically generated index.
+#
+do_test avtrans-6.10 {
+ execsql2 {
+ CREATE TABLE t1(a unique,b,c);
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(p unique,q,r);
+ ROLLBACK;
+ SELECT * FROM t1;
+ }
+} {a 1 b 2 c 3}
+do_test avtrans-6.11 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(p unique,q,r);
+ COMMIT;
+ SELECT * FROM t1;
+ }
+} {}
+do_test avtrans-6.12 {
+ execsql2 {
+ INSERT INTO t1 VALUES(1,2,3);
+ SELECT * FROM t1;
+ }
+} {p 1 q 2 r 3}
+do_test avtrans-6.13 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a unique,b,c);
+ INSERT INTO t1 VALUES(4,5,6);
+ SELECT * FROM t1;
+ DROP TABLE t1;
+ }
+} {a 4 b 5 c 6}
+do_test avtrans-6.14 {
+ execsql2 {
+ ROLLBACK;
+ SELECT * FROM t1;
+ }
+} {p 1 q 2 r 3}
+do_test avtrans-6.15 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a unique,b,c);
+ INSERT INTO t1 VALUES(4,5,6);
+ SELECT * FROM t1;
+ DROP TABLE t1;
+ }
+} {a 4 b 5 c 6}
+do_test avtrans-6.16 {
+ catchsql {
+ COMMIT;
+ SELECT * FROM t1;
+ }
+} {1 {no such table: t1}}
+
+do_test avtrans-6.20 {
+ execsql {
+ CREATE TABLE t1(a integer primary key,b,c);
+ INSERT INTO t1 VALUES(1,-2,-3);
+ INSERT INTO t1 VALUES(4,-5,-6);
+ SELECT * FROM t1;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test avtrans-6.21 {
+ execsql {
+ CREATE INDEX i1 ON t1(b);
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test avtrans-6.22 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ SELECT * FROM t1 WHERE b<1;
+ ROLLBACK;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test avtrans-6.23 {
+ execsql {
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test avtrans-6.24 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ ROLLBACK;
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+
+do_test avtrans-6.25 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ CREATE INDEX i1 ON t1(c);
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test avtrans-6.26 {
+ execsql {
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test avtrans-6.27 {
+ execsql {
+ ROLLBACK;
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test avtrans-6.28 {
+ execsql {
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {1 -2 -3 4 -5 -6}
+
+# The following repeats steps 6.20 through 6.28, but puts a "unique"
+# constraint the first field of the table in order to generate an
+# automatic index.
+#
+do_test avtrans-6.30 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a int unique,b,c);
+ COMMIT;
+ INSERT INTO t1 VALUES(1,-2,-3);
+ INSERT INTO t1 VALUES(4,-5,-6);
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test avtrans-6.31 {
+ execsql {
+ CREATE INDEX i1 ON t1(b);
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test avtrans-6.32 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ SELECT * FROM t1 WHERE b<1;
+ ROLLBACK;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test avtrans-6.33 {
+ execsql {
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test avtrans-6.34 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ ROLLBACK;
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+
+do_test avtrans-6.35 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ CREATE INDEX i1 ON t1(c);
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test avtrans-6.36 {
+ execsql {
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test avtrans-6.37 {
+ execsql {
+ DROP INDEX i1;
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test avtrans-6.38 {
+ execsql {
+ ROLLBACK;
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test avtrans-6.39 {
+ execsql {
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {1 -2 -3 4 -5 -6}
+integrity_check avtrans-6.40
+
+ifcapable !floatingpoint {
+ finish_test
+ return
+}
+
+# Test to make sure rollback restores the database back to its original
+# state.
+#
+do_test avtrans-7.1 {
+ execsql {BEGIN}
+ for {set i 0} {$i<1000} {incr i} {
+ set r1 [expr {rand()}]
+ set r2 [expr {rand()}]
+ set r3 [expr {rand()}]
+ execsql "INSERT INTO t2 VALUES($r1,$r2,$r3)"
+ }
+ execsql {COMMIT}
+ set ::checksum [execsql {SELECT md5sum(x,y,z) FROM t2}]
+ set ::checksum2 [
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+ ]
+ execsql {SELECT count(*) FROM t2}
+} {1001}
+do_test avtrans-7.2 {
+ execsql {SELECT md5sum(x,y,z) FROM t2}
+} $checksum
+do_test avtrans-7.2.1 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+do_test avtrans-7.3 {
+ execsql {
+ BEGIN;
+ DELETE FROM t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test avtrans-7.4 {
+ execsql {
+ BEGIN;
+ INSERT INTO t2 SELECT * FROM t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test avtrans-7.5 {
+ execsql {
+ BEGIN;
+ DELETE FROM t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test avtrans-7.6 {
+ execsql {
+ BEGIN;
+ INSERT INTO t2 SELECT * FROM t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test avtrans-7.7 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t3 AS SELECT * FROM t2;
+ INSERT INTO t2 SELECT * FROM t3;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test avtrans-7.8 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+ifcapable tempdb {
+ do_test avtrans-7.9 {
+ execsql {
+ BEGIN;
+ CREATE TEMP TABLE t3 AS SELECT * FROM t2;
+ INSERT INTO t2 SELECT * FROM t3;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+ } $checksum
+}
+do_test avtrans-7.10 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+ifcapable tempdb {
+ do_test avtrans-7.11 {
+ execsql {
+ BEGIN;
+ CREATE TEMP TABLE t3 AS SELECT * FROM t2;
+ INSERT INTO t2 SELECT * FROM t3;
+ DROP INDEX i2x;
+ DROP INDEX i2y;
+ CREATE INDEX i3a ON t3(x);
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+ } $checksum
+}
+do_test avtrans-7.12 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+ifcapable tempdb {
+ do_test avtrans-7.13 {
+ execsql {
+ BEGIN;
+ DROP TABLE t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+ } $checksum
+}
+do_test avtrans-7.14 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+integrity_check avtrans-7.15
+
+# Arrange for another process to begin modifying the database but abort
+# and die in the middle of the modification. Then have this process read
+# the database. This process should detect the journal file and roll it
+# back. Verify that this happens correctly.
+#
+set fd [open test.tcl w]
+puts $fd {
+ sqlite3 db test.db
+ db eval {
+ PRAGMA default_cache_size=20;
+ BEGIN;
+ CREATE TABLE t3 AS SELECT * FROM t2;
+ DELETE FROM t2;
+ }
+ sqlite_abort
+}
+close $fd
+do_test avtrans-8.1 {
+ catch {exec [info nameofexec] test.tcl}
+ execsql {SELECT md5sum(x,y,z) FROM t2}
+} $checksum
+do_test avtrans-8.2 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+integrity_check avtrans-8.3
+
+# In the following sequence of tests, compute the MD5 sum of the content
+# of a table, make lots of modifications to that table, then do a rollback.
+# Verify that after the rollback, the MD5 checksum is unchanged.
+#
+do_test avtrans-9.1 {
+ execsql {
+ PRAGMA default_cache_size=10;
+ }
+ db close
+ sqlite3 db test.db
+ execsql {
+ BEGIN;
+ CREATE TABLE t3(x TEXT);
+ INSERT INTO t3 VALUES(randstr(10,400));
+ INSERT INTO t3 VALUES(randstr(10,400));
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ COMMIT;
+ SELECT count(*) FROM t3;
+ }
+} {1024}
+
+# The following procedure computes a "signature" for table "t3". If
+# T3 changes in any way, the signature should change.
+#
+# This is used to test ROLLBACK. We gather a signature for t3, then
+# make lots of changes to t3, then rollback and take another signature.
+# The two signatures should be the same.
+#
+proc signature {} {
+ return [db eval {SELECT count(*), md5sum(x) FROM t3}]
+}
+
+# Repeat the following group of tests 20 times for quick testing and
+# 40 times for full testing. Each iteration of the test makes table
+# t3 a little larger, and thus takes a little longer, so running 40
+# iterations takes considerably more than twice as long as running 20.
+#
+if {[info exists ISQUICK]} {
+ set limit 20
+} else {
+ set limit 40
+}
+
+# Do rollbacks. Make sure the signature does not change.
+#
+for {set i 2} {$i<=$limit} {incr i} {
+ set ::sig [signature]
+ set cnt [lindex $::sig 0]
+ if {$i%2==0} {
+ execsql {PRAGMA fullfsync=ON}
+ } else {
+ execsql {PRAGMA fullfsync=OFF}
+ }
+ set sqlite_sync_count 0
+ set sqlite_fullsync_count 0
+ do_test avtrans-9.$i.1-$cnt {
+ execsql {
+ BEGIN;
+ DELETE FROM t3 WHERE random()%10!=0;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ ROLLBACK;
+ }
+ signature
+ } $sig
+ do_test avtrans-9.$i.2-$cnt {
+ execsql {
+ BEGIN;
+ DELETE FROM t3 WHERE random()%10!=0;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ DELETE FROM t3 WHERE random()%10!=0;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ ROLLBACK;
+ }
+ signature
+ } $sig
+ if {$i<$limit} {
+ do_test avtrans-9.$i.3-$cnt {
+ execsql {
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3 WHERE random()%10==0;
+ }
+ } {}
+ if {$tcl_platform(platform)=="unix"} {
+ do_test avtrans-9.$i.4-$cnt {
+ expr {$sqlite_sync_count>0}
+ } 1
+ ifcapable pager_pragmas {
+ do_test avtrans-9.$i.5-$cnt {
+ expr {$sqlite_fullsync_count>0}
+ } [expr {$i%2==0}]
+ } else {
+ do_test avtrans-9.$i.5-$cnt {
+ expr {$sqlite_fullsync_count>0}
+ } {1}
+ }
+ }
+ }
+ set ::pager_old_format 0
+}
+integrity_check avtrans-10.1
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/between.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/between.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,113 @@
+# 2005 July 28
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the use of indices in WHERE clauses
+# when the WHERE clause contains the BETWEEN operator.
+#
+# $Id: between.test,v 1.2 2006/01/17 09:35:02 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Build some test data
+#
+do_test between-1.0 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(w int, x int, y int, z int);
+ }
+ for {set i 1} {$i<=100} {incr i} {
+ set w $i
+ set x [expr {int(log($i)/log(2))}]
+ set y [expr {$i*$i + 2*$i + 1}]
+ set z [expr {$x+$y}]
+ ifcapable tclvar {
+ # Random unplanned test of the $varname variable syntax.
+ execsql {INSERT INTO t1 VALUES($::w,$::x,$::y,$::z)}
+ } else {
+ # If the $varname syntax is not available, use the :varname named
+ # parameter syntax instead.
+ execsql {INSERT INTO t1 VALUES(:w,:x,:y,:z)}
+ }
+ }
+ execsql {
+ CREATE UNIQUE INDEX i1w ON t1(w);
+ CREATE INDEX i1xy ON t1(x,y);
+ CREATE INDEX i1zyx ON t1(z,y,x);
+ COMMIT;
+ }
+} {}
+
+# This procedure executes the SQL. Then it appends to the result the
+# "sort" or "nosort" keyword depending on whether or not any sorting
+# is done. Then it appends the ::sqlite_query_plan variable.
+#
+proc queryplan {sql} {
+ set ::sqlite_sort_count 0
+ set data [execsql $sql]
+ if {$::sqlite_sort_count} {set x sort} {set x nosort}
+ lappend data $x
+ return [concat $data $::sqlite_query_plan]
+}
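+# ::sqlite_sort_count and ::sqlite_query_plan are instrumentation variables
+# provided only by the test fixture. Outside the harness, roughly the same
+# information about index usage can be read with EXPLAIN QUERY PLAN, where
+# supported. A hedged standalone sketch (plain tclsh, illustrative "demo"
+# handle), wrapped in "if 0" so it never runs here:
+if 0 {
+ package require sqlite3
+ sqlite3 demo test.db
+ demo eval {EXPLAIN QUERY PLAN SELECT * FROM t1 WHERE w BETWEEN 5 AND 6} row {
+ puts $row(detail)
+ }
+ demo close
+}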
+
+do_test between-1.1.1 {
+ queryplan {
+ SELECT * FROM t1 WHERE w BETWEEN 5 AND 6 ORDER BY +w
+ }
+} {5 2 36 38 6 2 49 51 sort t1 i1w}
+do_test between-1.1.2 {
+ queryplan {
+ SELECT * FROM t1 WHERE +w BETWEEN 5 AND 6 ORDER BY +w
+ }
+} {5 2 36 38 6 2 49 51 sort t1 {}}
+do_test between-1.2.1 {
+ queryplan {
+ SELECT * FROM t1 WHERE w BETWEEN 5 AND 65-y ORDER BY +w
+ }
+} {5 2 36 38 6 2 49 51 sort t1 i1w}
+do_test between-1.2.2 {
+ queryplan {
+ SELECT * FROM t1 WHERE +w BETWEEN 5 AND 65-y ORDER BY +w
+ }
+} {5 2 36 38 6 2 49 51 sort t1 {}}
+do_test between-1.3.1 {
+ queryplan {
+ SELECT * FROM t1 WHERE w BETWEEN 41-y AND 6 ORDER BY +w
+ }
+} {5 2 36 38 6 2 49 51 sort t1 i1w}
+do_test between-1.3.2 {
+ queryplan {
+ SELECT * FROM t1 WHERE +w BETWEEN 41-y AND 6 ORDER BY +w
+ }
+} {5 2 36 38 6 2 49 51 sort t1 {}}
+do_test between-1.4 {
+ queryplan {
+ SELECT * FROM t1 WHERE w BETWEEN 41-y AND 65-y ORDER BY +w
+ }
+} {5 2 36 38 6 2 49 51 sort t1 {}}
+do_test between-1.5.1 {
+ queryplan {
+ SELECT * FROM t1 WHERE 26 BETWEEN y AND z ORDER BY +w
+ }
+} {4 2 25 27 sort t1 i1zyx}
+do_test between-1.5.2 {
+ queryplan {
+ SELECT * FROM t1 WHERE 26 BETWEEN +y AND z ORDER BY +w
+ }
+} {4 2 25 27 sort t1 i1zyx}
+do_test between-1.5.3 {
+ queryplan {
+ SELECT * FROM t1 WHERE 26 BETWEEN y AND +z ORDER BY +w
+ }
+} {4 2 25 27 sort t1 {}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/bigfile.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/bigfile.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,192 @@
+# 2002 November 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing the ability of SQLite to handle database
+# files larger than 4GB.
+#
+# $Id: bigfile.test,v 1.9 2005/11/25 09:01:24 danielk1977 Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_DISABLE_LFS is defined, omit this file.
+ifcapable !lfs {
+ finish_test
+ return
+}
+
+# These tests only work for Tcl version 8.4 and later. Prior to 8.4,
+# Tcl was unable to handle large files.
+#
+scan $::tcl_version %f vx
+if {$vx<8.4} return
+
+# Mac OS X does not handle large files efficiently. So skip this test
+# on that platform.
+if {$tcl_platform(os)=="Darwin"} return
+
+# This is the md5 checksum of all the data in table t1 as created
+# by the first test. We will use this number to make sure that data
+# never changes.
+#
+set MAGIC_SUM {593f1efcfdbe698c28b4b1b693f7e4cf}
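+# md5sum() is a SQL function added by the test fixture; stock builds do not
+# have it. A rough standalone analogue (a sketch only -- it assumes tcllib's
+# md5 package and will not reproduce the fixture's exact digest; "demo" is an
+# illustrative handle), wrapped in "if 0" so it never runs here:
+if 0 {
+ package require sqlite3
+ package require md5
+ sqlite3 demo test.db
+ set data ""
+ demo eval {SELECT x FROM t1 ORDER BY rowid} { append data $x }
+ puts [md5::md5 -hex $data]
+ demo close
+}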
+
+do_test bigfile-1.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(x);
+ INSERT INTO t1 VALUES('abcdefghijklmnopqrstuvwxyz');
+ INSERT INTO t1 SELECT rowid || ' ' || x FROM t1;
+ INSERT INTO t1 SELECT rowid || ' ' || x FROM t1;
+ INSERT INTO t1 SELECT rowid || ' ' || x FROM t1;
+ INSERT INTO t1 SELECT rowid || ' ' || x FROM t1;
+ INSERT INTO t1 SELECT rowid || ' ' || x FROM t1;
+ INSERT INTO t1 SELECT rowid || ' ' || x FROM t1;
+ INSERT INTO t1 SELECT rowid || ' ' || x FROM t1;
+ COMMIT;
+ }
+ execsql {
+ SELECT md5sum(x) FROM t1;
+ }
+} $::MAGIC_SUM
+
+# Try to create a large file - a file that is larger than 2^32 bytes.
+# If this fails, it means that the system being tested does not support
+# large files. So skip all of the remaining tests in this file.
+#
+db close
+if {[catch {fake_big_file 4096 test.db}]} {
+ puts "**** Unable to create a file larger than 4096 MB. *****"
+ finish_test
+ return
+}
+
+do_test bigfile-1.2 {
+ sqlite3 db test.db
+ execsql {
+ SELECT md5sum(x) FROM t1;
+ }
+} $::MAGIC_SUM
+
+# The previous test may fail on some systems because they are unable
+# to handle large files. If that is so, then skip all of the following
+# tests. We will know the above test failed because the "db" command
+# does not exist.
+#
+if {[llength [info command db]]>0} {
+
+do_test bigfile-1.3 {
+ execsql {
+ CREATE TABLE t2 AS SELECT * FROM t1;
+ SELECT md5sum(x) FROM t2;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.4 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT md5sum(x) FROM t1;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.5 {
+ execsql {
+ SELECT md5sum(x) FROM t2;
+ }
+} $::MAGIC_SUM
+
+db close
+if {[catch {fake_big_file 8192 test.db}]} {
+ puts "**** Unable to create a file larger than 8192 MB. *****"
+ finish_test
+ return
+}
+
+do_test bigfile-1.6 {
+ sqlite3 db test.db
+ execsql {
+ SELECT md5sum(x) FROM t1;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.7 {
+ execsql {
+ CREATE TABLE t3 AS SELECT * FROM t1;
+ SELECT md5sum(x) FROM t3;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.8 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT md5sum(x) FROM t1;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.9 {
+ execsql {
+ SELECT md5sum(x) FROM t2;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.10 {
+ execsql {
+ SELECT md5sum(x) FROM t3;
+ }
+} $::MAGIC_SUM
+
+db close
+if {[catch {fake_big_file 16384 test.db}]} {
+ puts "**** Unable to create a file larger than 16384 MB. *****"
+ finish_test
+ return
+}
+
+do_test bigfile-1.11 {
+ sqlite3 db test.db
+ execsql {
+ SELECT md5sum(x) FROM t1;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.12 {
+ execsql {
+ CREATE TABLE t4 AS SELECT * FROM t1;
+ SELECT md5sum(x) FROM t4;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.13 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT md5sum(x) FROM t1;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.14 {
+ execsql {
+ SELECT md5sum(x) FROM t2;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.15 {
+ execsql {
+ SELECT md5sum(x) FROM t3;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.16 {
+ execsql {
+ SELECT md5sum(x) FROM t3;
+ }
+} $::MAGIC_SUM
+do_test bigfile-1.17 {
+ execsql {
+ SELECT md5sum(x) FROM t4;
+ }
+} $::MAGIC_SUM
+
+} ;# End of the "if( db command exists )"
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/bigrow.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/bigrow.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,223 @@
+# 2001 September 23
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is stressing the library by putting large amounts
+# of data in a single row of a table.
+#
+# $Id: bigrow.test,v 1.5 2004/08/07 23:54:48 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Make a big string that we can use for test data
+#
+do_test bigrow-1.0 {
+ set ::bigstr {}
+ for {set i 1} {$i<=9999} {incr i} {
+ set sep [string index "abcdefghijklmnopqrstuvwxyz" [expr {$i%26}]]
+ append ::bigstr "$sep [format %04d $i] "
+ }
+ string length $::bigstr
+} {69993}
+
+# Make a table into which we can insert some big records.
+#
+do_test bigrow-1.1 {
+ execsql {
+ CREATE TABLE t1(a text, b text, c text);
+ SELECT name FROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name
+ }
+} {t1}
+
+do_test bigrow-1.2 {
+ set ::big1 [string range $::bigstr 0 65519]
+ set sql "INSERT INTO t1 VALUES('abc',"
+ append sql "'$::big1', 'xyz');"
+ execsql $sql
+ execsql {SELECT a, c FROM t1}
+} {abc xyz}
+do_test bigrow-1.3 {
+ execsql {SELECT b FROM t1}
+} [list $::big1]
+do_test bigrow-1.4 {
+ set ::big2 [string range $::bigstr 0 65520]
+ set sql "INSERT INTO t1 VALUES('abc2',"
+ append sql "'$::big2', 'xyz2');"
+ set r [catch {execsql $sql} msg]
+ lappend r $msg
+} {0 {}}
+do_test bigrow-1.4.1 {
+ execsql {SELECT b FROM t1 ORDER BY c}
+} [list $::big1 $::big2]
+do_test bigrow-1.4.2 {
+ execsql {SELECT c FROM t1 ORDER BY c}
+} {xyz xyz2}
+do_test bigrow-1.4.3 {
+ execsql {DELETE FROM t1 WHERE a='abc2'}
+ execsql {SELECT c FROM t1}
+} {xyz}
+
+do_test bigrow-1.5 {
+ execsql {
+ UPDATE t1 SET a=b, b=a;
+ SELECT b,c FROM t1
+ }
+} {abc xyz}
+do_test bigrow-1.6 {
+ execsql {
+ SELECT * FROM t1
+ }
+} [list $::big1 abc xyz]
+do_test bigrow-1.7 {
+ execsql {
+ INSERT INTO t1 VALUES('1','2','3');
+ INSERT INTO t1 VALUES('A','B','C');
+ SELECT b FROM t1 WHERE a=='1';
+ }
+} {2}
+do_test bigrow-1.8 {
+ execsql "SELECT b FROM t1 WHERE a=='$::big1'"
+} {abc}
+do_test bigrow-1.9 {
+ execsql "SELECT b FROM t1 WHERE a!='$::big1' ORDER BY a"
+} {2 B}
+
+# Try doing some indexing on big columns
+#
+do_test bigrow-2.1 {
+ execsql {
+ CREATE INDEX i1 ON t1(a)
+ }
+ execsql "SELECT b FROM t1 WHERE a=='$::big1'"
+} {abc}
+do_test bigrow-2.2 {
+ execsql {
+ UPDATE t1 SET a=b, b=a
+ }
+ execsql "SELECT b FROM t1 WHERE a=='abc'"
+} [list $::big1]
+do_test bigrow-2.3 {
+ execsql {
+ UPDATE t1 SET a=b, b=a
+ }
+ execsql "SELECT b FROM t1 WHERE a=='$::big1'"
+} {abc}
+catch {unset ::bigstr}
+catch {unset ::big1}
+catch {unset ::big2}
+
+# Most of the tests above were created back when rows were limited in
+# size to 64K. Now rows can be much bigger. Test that logic. Also
+# make sure things work correctly at the transition boundaries between
+# row sizes of 256 to 257 bytes and from 65536 to 65537 bytes.
+#
+# We begin by testing the 256..257 transition.
+#
+do_test bigrow-3.1 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c) VALUES('one','abcdefghijklmnopqrstuvwxyz0123','hi');
+ }
+ execsql {SELECT a,length(b),c FROM t1}
+} {one 30 hi}
+do_test bigrow-3.2 {
+ execsql {
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ }
+ execsql {SELECT a,length(b),c FROM t1}
+} {one 240 hi}
+for {set i 1} {$i<10} {incr i} {
+ do_test bigrow-3.3.$i {
+ execsql "UPDATE t1 SET b=b||'$i'"
+ execsql {SELECT a,length(b),c FROM t1}
+ } "one [expr {240+$i}] hi"
+}
+
+# Now test the 65536..65537 row-size transition.
+#
+do_test bigrow-4.1 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c) VALUES('one','abcdefghijklmnopqrstuvwxyz0123','hi');
+ }
+ execsql {SELECT a,length(b),c FROM t1}
+} {one 30 hi}
+do_test bigrow-4.2 {
+ execsql {
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ }
+ execsql {SELECT a,length(b),c FROM t1}
+} {one 122880 hi}
+do_test bigrow-4.3 {
+ execsql {
+ UPDATE t1 SET b=substr(b,1,65515)
+ }
+ execsql {SELECT a,length(b),c FROM t1}
+} {one 65515 hi}
+for {set i 1} {$i<10} {incr i} {
+ do_test bigrow-4.4.$i {
+ execsql "UPDATE t1 SET b=b||'$i'"
+ execsql {SELECT a,length(b),c FROM t1}
+ } "one [expr {65515+$i}] hi"
+}
+
+# Check to make sure the library recovers safely if a row contains
+# too much data.
+#
+do_test bigrow-5.1 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c) VALUES('one','abcdefghijklmnopqrstuvwxyz0123','hi');
+ }
+ execsql {SELECT a,length(b),c FROM t1}
+} {one 30 hi}
+set i 1
+for {set sz 60} {$sz<1048560} {incr sz $sz} {
+ do_test bigrow-5.2.$i {
+ execsql {
+ UPDATE t1 SET b=b||b;
+ SELECT a,length(b),c FROM t1;
+ }
+ } "one $sz hi"
+ incr i
+}
+do_test bigrow-5.3 {
+ catchsql {UPDATE t1 SET b=b||b}
+} {0 {}}
+do_test bigrow-5.4 {
+ execsql {SELECT length(b) FROM t1}
+} 1966080
+do_test bigrow-5.5 {
+ catchsql {UPDATE t1 SET b=b||b}
+} {0 {}}
+do_test bigrow-5.6 {
+ execsql {SELECT length(b) FROM t1}
+} 3932160
+do_test bigrow-5.99 {
+ execsql {DROP TABLE t1}
+} {}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/bind.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/bind.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,559 @@
+# 2003 September 6
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing the sqlite_bind API.
+#
+# $Id: bind.test,v 1.38 2006/06/27 20:06:45 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
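+# Helper used throughout this file: run one sqlite3_step() on a prepared
+# statement, collect the statement's column names and current row values,
+# and return the step result code such as SQLITE_ROW or SQLITE_DONE.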
+proc sqlite_step {stmt N VALS COLS} {
+ upvar VALS vals
+ upvar COLS cols
+ set vals [list]
+ set cols [list]
+
+ set rc [sqlite3_step $stmt]
+ for {set i 0} {$i < [sqlite3_column_count $stmt]} {incr i} {
+ lappend cols [sqlite3_column_name $stmt $i]
+ }
+ for {set i 0} {$i < [sqlite3_data_count $stmt]} {incr i} {
+ lappend vals [sqlite3_column_text $stmt $i]
+ }
+
+ return $rc
+}
+
+do_test bind-1.1 {
+ set DB [sqlite3_connection_pointer db]
+ execsql {CREATE TABLE t1(a,b,c);}
+ set VM [sqlite3_prepare $DB {INSERT INTO t1 VALUES(:1,?,:abc)} -1 TAIL]
+ set TAIL
+} {}
+do_test bind-1.1.1 {
+ sqlite3_bind_parameter_count $VM
+} 3
+do_test bind-1.1.2 {
+ sqlite3_bind_parameter_name $VM 1
+} {:1}
+do_test bind-1.1.3 {
+ sqlite3_bind_parameter_name $VM 2
+} {}
+do_test bind-1.1.4 {
+ sqlite3_bind_parameter_name $VM 3
+} {:abc}
+do_test bind-1.2 {
+ sqlite_step $VM N VALUES COLNAMES
+} {SQLITE_DONE}
+do_test bind-1.3 {
+ execsql {SELECT rowid, * FROM t1}
+} {1 {} {} {}}
+do_test bind-1.4 {
+ sqlite3_reset $VM
+ sqlite_bind $VM 1 {test value 1} normal
+ sqlite_step $VM N VALUES COLNAMES
+} SQLITE_DONE
+do_test bind-1.5 {
+ execsql {SELECT rowid, * FROM t1}
+} {1 {} {} {} 2 {test value 1} {} {}}
+do_test bind-1.6 {
+ sqlite3_reset $VM
+ sqlite_bind $VM 3 {'test value 2'} normal
+ sqlite_step $VM N VALUES COLNAMES
+} SQLITE_DONE
+do_test bind-1.7 {
+ execsql {SELECT rowid, * FROM t1}
+} {1 {} {} {} 2 {test value 1} {} {} 3 {test value 1} {} {'test value 2'}}
+do_test bind-1.8 {
+ sqlite3_reset $VM
+ set sqlite_static_bind_value 123
+ sqlite_bind $VM 1 {} static
+ sqlite_bind $VM 2 {abcdefg} normal
+ sqlite_bind $VM 3 {} null
+ execsql {DELETE FROM t1}
+ sqlite_step $VM N VALUES COLNAMES
+ execsql {SELECT rowid, * FROM t1}
+} {1 123 abcdefg {}}
+do_test bind-1.9 {
+ sqlite3_reset $VM
+ sqlite_bind $VM 1 {456} normal
+ sqlite_step $VM N VALUES COLNAMES
+ execsql {SELECT rowid, * FROM t1}
+} {1 123 abcdefg {} 2 456 abcdefg {}}
+
+do_test bind-1.99 {
+ sqlite3_finalize $VM
+} SQLITE_OK
+
+# Prepare the statement in different ways depending on whether or not
+# the $var processing is compiled into the library.
+#
+ifcapable {tclvar} {
+ do_test bind-2.1 {
+ execsql {
+ DELETE FROM t1;
+ }
+ set VM [sqlite3_prepare $DB {INSERT INTO t1 VALUES($one,$::two,$x(-z-))}\
+ -1 TX]
+ set TX
+ } {}
+ set v1 {$one}
+ set v2 {$::two}
+ set v3 {$x(-z-)}
+}
+ifcapable {!tclvar} {
+ do_test bind-2.1 {
+ execsql {
+ DELETE FROM t1;
+ }
+ set VM [sqlite3_prepare $DB {INSERT INTO t1 VALUES(:one,:two,:_)} -1 TX]
+ set TX
+ } {}
+ set v1 {:one}
+ set v2 {:two}
+ set v3 {:_}
+}
+
+do_test bind-2.1.1 {
+ sqlite3_bind_parameter_count $VM
+} 3
+do_test bind-2.1.2 {
+ sqlite3_bind_parameter_name $VM 1
+} $v1
+do_test bind-2.1.3 {
+ sqlite3_bind_parameter_name $VM 2
+} $v2
+do_test bind-2.1.4 {
+ sqlite3_bind_parameter_name $VM 3
+} $v3
+do_test bind-2.1.5 {
+ sqlite3_bind_parameter_index $VM $v1
+} 1
+do_test bind-2.1.6 {
+ sqlite3_bind_parameter_index $VM $v2
+} 2
+do_test bind-2.1.7 {
+ sqlite3_bind_parameter_index $VM $v3
+} 3
+do_test bind-2.1.8 {
+ sqlite3_bind_parameter_index $VM {:hi}
+} 0
+
+# 32 bit Integers
+do_test bind-2.2 {
+ sqlite3_bind_int $VM 1 123
+ sqlite3_bind_int $VM 2 456
+ sqlite3_bind_int $VM 3 789
+ sqlite_step $VM N VALUES COLNAMES
+ sqlite3_reset $VM
+ execsql {SELECT rowid, * FROM t1}
+} {1 123 456 789}
+do_test bind-2.3 {
+ sqlite3_bind_int $VM 2 -2000000000
+ sqlite3_bind_int $VM 3 2000000000
+ sqlite_step $VM N VALUES COLNAMES
+ sqlite3_reset $VM
+ execsql {SELECT rowid, * FROM t1}
+} {1 123 456 789 2 123 -2000000000 2000000000}
+do_test bind-2.4 {
+ execsql {SELECT typeof(a), typeof(b), typeof(c) FROM t1}
+} {integer integer integer integer integer integer}
+do_test bind-2.5 {
+ execsql {
+ DELETE FROM t1;
+ }
+} {}
+
+# 64 bit Integers
+do_test bind-3.1 {
+ sqlite3_bind_int64 $VM 1 32
+ sqlite3_bind_int64 $VM 2 -2000000000000
+ sqlite3_bind_int64 $VM 3 2000000000000
+ sqlite_step $VM N VALUES COLNAMES
+ sqlite3_reset $VM
+ execsql {SELECT rowid, * FROM t1}
+} {1 32 -2000000000000 2000000000000}
+do_test bind-3.2 {
+ execsql {SELECT typeof(a), typeof(b), typeof(c) FROM t1}
+} {integer integer integer}
+do_test bind-3.3 {
+ execsql {
+ DELETE FROM t1;
+ }
+} {}
+
+# Doubles
+do_test bind-4.1 {
+ sqlite3_bind_double $VM 1 1234.1234
+ sqlite3_bind_double $VM 2 0.00001
+ sqlite3_bind_double $VM 3 123456789
+ sqlite_step $VM N VALUES COLNAMES
+ sqlite3_reset $VM
+ set x [execsql {SELECT rowid, * FROM t1}]
+ regsub {1e-005} $x {1e-05} y
+ set y
+} {1 1234.1234 1e-05 123456789.0}
+do_test bind-4.2 {
+ execsql {SELECT typeof(a), typeof(b), typeof(c) FROM t1}
+} {real real real}
+do_test bind-4.3 {
+ execsql {
+ DELETE FROM t1;
+ }
+} {}
+
+# NULL
+do_test bind-5.1 {
+ sqlite3_bind_null $VM 1
+ sqlite3_bind_null $VM 2
+ sqlite3_bind_null $VM 3
+ sqlite_step $VM N VALUES COLNAMES
+ sqlite3_reset $VM
+ execsql {SELECT rowid, * FROM t1}
+} {1 {} {} {}}
+do_test bind-5.2 {
+ execsql {SELECT typeof(a), typeof(b), typeof(c) FROM t1}
+} {null null null}
+do_test bind-5.3 {
+ execsql {
+ DELETE FROM t1;
+ }
+} {}
+
+# UTF-8 text
+do_test bind-6.1 {
+ sqlite3_bind_text $VM 1 hellothere 5
+ sqlite3_bind_text $VM 2 ".." 1
+ sqlite3_bind_text $VM 3 world -1
+ sqlite_step $VM N VALUES COLNAMES
+ sqlite3_reset $VM
+ execsql {SELECT rowid, * FROM t1}
+} {1 hello . world}
+do_test bind-6.2 {
+ execsql {SELECT typeof(a), typeof(b), typeof(c) FROM t1}
+} {text text text}
+do_test bind-6.3 {
+ execsql {
+ DELETE FROM t1;
+ }
+} {}
+
+# UTF-16 text
+ifcapable {utf16} {
+ do_test bind-7.1 {
+ sqlite3_bind_text16 $VM 1 [encoding convertto unicode hellothere] 10
+ sqlite3_bind_text16 $VM 2 [encoding convertto unicode ""] 0
+ sqlite3_bind_text16 $VM 3 [encoding convertto unicode world] 10
+ sqlite_step $VM N VALUES COLNAMES
+ sqlite3_reset $VM
+ execsql {SELECT rowid, * FROM t1}
+ } {1 hello {} world}
+ do_test bind-7.2 {
+ execsql {SELECT typeof(a), typeof(b), typeof(c) FROM t1}
+ } {text text text}
+}
+do_test bind-7.3 {
+ execsql {
+ DELETE FROM t1;
+ }
+} {}
+
+# Test that the 'out of range' error works.
+do_test bind-8.1 {
+ catch { sqlite3_bind_null $VM 0 }
+} {1}
+do_test bind-8.2 {
+ sqlite3_errmsg $DB
+} {bind or column index out of range}
+ifcapable {utf16} {
+ do_test bind-8.3 {
+ encoding convertfrom unicode [sqlite3_errmsg16 $DB]
+ } {bind or column index out of range}
+}
+do_test bind-8.4 {
+ sqlite3_bind_null $VM 1
+ sqlite3_errmsg $DB
+} {not an error}
+do_test bind-8.5 {
+ catch { sqlite3_bind_null $VM 4 }
+} {1}
+do_test bind-8.6 {
+ sqlite3_errmsg $DB
+} {bind or column index out of range}
+ifcapable {utf16} {
+ do_test bind-8.7 {
+ encoding convertfrom unicode [sqlite3_errmsg16 $DB]
+ } {bind or column index out of range}
+}
+
+do_test bind-8.8 {
+ catch { sqlite3_bind_blob $VM 0 "abc" 3 }
+} {1}
+do_test bind-8.9 {
+ catch { sqlite3_bind_blob $VM 4 "abc" 3 }
+} {1}
+do_test bind-8.10 {
+ catch { sqlite3_bind_text $VM 0 "abc" 3 }
+} {1}
+ifcapable {utf16} {
+ do_test bind-8.11 {
+ catch { sqlite3_bind_text16 $VM 4 "abc" 2 }
+ } {1}
+}
+do_test bind-8.12 {
+ catch { sqlite3_bind_int $VM 0 5 }
+} {1}
+do_test bind-8.13 {
+ catch { sqlite3_bind_int $VM 4 5 }
+} {1}
+do_test bind-8.14 {
+ catch { sqlite3_bind_double $VM 0 5.0 }
+} {1}
+do_test bind-8.15 {
+ catch { sqlite3_bind_double $VM 4 6.0 }
+} {1}
+
+do_test bind-8.99 {
+ sqlite3_finalize $VM
+} SQLITE_OK
+
+do_test bind-9.1 {
+ execsql {
+ CREATE TABLE t2(a,b,c,d,e,f);
+ }
+ set rc [catch {
+ sqlite3_prepare $DB {
+ INSERT INTO t2(a) VALUES(?0)
+ } -1 TAIL
+ } msg]
+ lappend rc $msg
+} {1 {(1) variable number must be between ?1 and ?999}}
+do_test bind-9.2 {
+ set rc [catch {
+ sqlite3_prepare $DB {
+ INSERT INTO t2(a) VALUES(?1000)
+ } -1 TAIL
+ } msg]
+ lappend rc $msg
+} {1 {(1) variable number must be between ?1 and ?999}}
+do_test bind-9.3 {
+ set VM [
+ sqlite3_prepare $DB {
+ INSERT INTO t2(a,b) VALUES(?1,?999)
+ } -1 TAIL
+ ]
+ sqlite3_bind_parameter_count $VM
+} {999}
+catch {sqlite3_finalize $VM}
+do_test bind-9.4 {
+ set VM [
+ sqlite3_prepare $DB {
+ INSERT INTO t2(a,b,c,d) VALUES(?1,?999,?,?)
+ } -1 TAIL
+ ]
+ sqlite3_bind_parameter_count $VM
+} {1001}
+do_test bind-9.5 {
+ sqlite3_bind_int $VM 1 1
+ sqlite3_bind_int $VM 999 999
+ sqlite3_bind_int $VM 1000 1000
+ sqlite3_bind_int $VM 1001 1001
+ sqlite3_step $VM
+} SQLITE_DONE
+do_test bind-9.6 {
+ sqlite3_finalize $VM
+} SQLITE_OK
+do_test bind-9.7 {
+ execsql {SELECT * FROM t2}
+} {1 999 1000 1001 {} {}}
+
+ifcapable {tclvar} {
+ do_test bind-10.1 {
+ set VM [
+ sqlite3_prepare $DB {
+ INSERT INTO t2(a,b,c,d,e,f) VALUES(:abc,$abc,:abc,$ab,$abc,:abc)
+ } -1 TAIL
+ ]
+ sqlite3_bind_parameter_count $VM
+ } 3
+ set v1 {$abc}
+ set v2 {$ab}
+}
+ifcapable {!tclvar} {
+ do_test bind-10.1 {
+ set VM [
+ sqlite3_prepare $DB {
+ INSERT INTO t2(a,b,c,d,e,f) VALUES(:abc,:xyz,:abc,:xy,:xyz,:abc)
+ } -1 TAIL
+ ]
+ sqlite3_bind_parameter_count $VM
+ } 3
+ set v1 {:xyz}
+ set v2 {:xy}
+}
+do_test bind-10.2 {
+ sqlite3_bind_parameter_index $VM :abc
+} 1
+do_test bind-10.3 {
+ sqlite3_bind_parameter_index $VM $v1
+} 2
+do_test bind-10.4 {
+ sqlite3_bind_parameter_index $VM $v2
+} 3
+do_test bind-10.5 {
+ sqlite3_bind_parameter_name $VM 1
+} :abc
+do_test bind-10.6 {
+ sqlite3_bind_parameter_name $VM 2
+} $v1
+do_test bind-10.7 {
+ sqlite3_bind_parameter_name $VM 3
+} $v2
+do_test bind-10.7.1 {
+ sqlite3_bind_parameter_name 0 1 ;# Ignore if VM is NULL
+} {}
+do_test bind-10.7.2 {
+ sqlite3_bind_parameter_name $VM 0 ;# Ignore if index too small
+} {}
+do_test bind-10.7.3 {
+ sqlite3_bind_parameter_name $VM 4 ;# Ignore if index is too big
+} {}
+do_test bind-10.8 {
+ sqlite3_bind_int $VM 1 1
+ sqlite3_bind_int $VM 2 2
+ sqlite3_bind_int $VM 3 3
+ sqlite3_step $VM
+} SQLITE_DONE
+do_test bind-10.8.1 {
+ # Binding attempts after program start should fail
+ set rc [catch {
+ sqlite3_bind_int $VM 1 1
+ } msg]
+ lappend rc $msg
+} {1 {}}
+do_test bind-10.9 {
+ sqlite3_finalize $VM
+} SQLITE_OK
+do_test bind-10.10 {
+ execsql {SELECT * FROM t2}
+} {1 999 1000 1001 {} {} 1 2 1 3 2 1}
+
+# Ticket #918
+#
+do_test bind-10.11 {
+ # catch {sqlite3_finalize $VM}
+ set VM [
+ sqlite3_prepare $DB {
+ INSERT INTO t2(a,b,c,d,e,f) VALUES(:abc,?,?4,:pqr,:abc,?4)
+ } -1 TAIL
+ ]
+ sqlite3_bind_parameter_count $VM
+} 5
+do_test bind-10.11.1 {
+ sqlite3_bind_parameter_index 0 :xyz ;# ignore NULL VM arguments
+} 0
+do_test bind-10.12 {
+ sqlite3_bind_parameter_index $VM :xyz
+} 0
+do_test bind-10.13 {
+ sqlite3_bind_parameter_index $VM {}
+} 0
+do_test bind-10.14 {
+ sqlite3_bind_parameter_index $VM :pqr
+} 5
+do_test bind-10.15 {
+ sqlite3_bind_parameter_index $VM ?4
+} 4
+do_test bind-10.16 {
+ sqlite3_bind_parameter_name $VM 1
+} :abc
+do_test bind-10.17 {
+ sqlite3_bind_parameter_name $VM 2
+} {}
+do_test bind-10.18 {
+ sqlite3_bind_parameter_name $VM 3
+} {}
+do_test bind-10.19 {
+ sqlite3_bind_parameter_name $VM 4
+} {?4}
+do_test bind-10.20 {
+ sqlite3_bind_parameter_name $VM 5
+} :pqr
+catch {sqlite3_finalize $VM}
+
+# Make sure we catch an unterminated "(" in a Tcl-style variable name
+#
+ifcapable tclvar {
+ do_test bind-11.1 {
+ catchsql {SELECT * FROM sqlite_master WHERE name=$abc(123 and sql NOT NULL;}
+ } {1 {unrecognized token: "$abc(123"}}
+}
+
+if {[execsql {pragma encoding}]=="UTF-8"} {
+ # Test the ability to bind text that contains embedded '\000' characters.
+ # Make sure we can recover the entire input string.
+ #
+ do_test bind-12.1 {
+ execsql {
+ CREATE TABLE t3(x BLOB);
+ }
+ set VM [sqlite3_prepare $DB {INSERT INTO t3 VALUES(?)} -1 TAIL]
+ sqlite_bind $VM 1 not-used blob10
+ sqlite3_step $VM
+ sqlite3_finalize $VM
+ execsql {
+ SELECT typeof(x), length(x), quote(x),
+ length(cast(x AS BLOB)), quote(cast(x AS BLOB)) FROM t3
+ }
+ } {text 3 'abc' 10 X'6162630078797A007071'}
+ do_test bind-12.2 {
+ sqlite3_create_function $DB
+ execsql {
+ SELECT quote(cast(x_coalesce(x) AS blob)) FROM t3
+ }
+ } {X'6162630078797A007071'}
+}
+
+# Test the operation of sqlite3_clear_bindings
+#
+do_test bind-13.1 {
+ set VM [sqlite3_prepare $DB {SELECT ?,?,?} -1 TAIL]
+ sqlite3_step $VM
+ list [sqlite3_column_type $VM 0] [sqlite3_column_type $VM 1] \
+ [sqlite3_column_type $VM 2]
+} {NULL NULL NULL}
+do_test bind-13.2 {
+ sqlite3_reset $VM
+ sqlite3_bind_int $VM 1 1
+ sqlite3_bind_int $VM 2 2
+ sqlite3_bind_int $VM 3 3
+ sqlite3_step $VM
+ list [sqlite3_column_type $VM 0] [sqlite3_column_type $VM 1] \
+ [sqlite3_column_type $VM 2]
+} {INTEGER INTEGER INTEGER}
+do_test bind-13.3 {
+ sqlite3_reset $VM
+ sqlite3_step $VM
+ list [sqlite3_column_type $VM 0] [sqlite3_column_type $VM 1] \
+ [sqlite3_column_type $VM 2]
+} {INTEGER INTEGER INTEGER}
+do_test bind-13.4 {
+ sqlite3_reset $VM
+ sqlite3_clear_bindings $VM
+ sqlite3_step $VM
+ list [sqlite3_column_type $VM 0] [sqlite3_column_type $VM 1] \
+ [sqlite3_column_type $VM 2]
+} {NULL NULL NULL}
+sqlite3_finalize $VM
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/bindxfer.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/bindxfer.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,73 @@
+# 2005 April 21
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing the sqlite_transfer_bindings() API.
+#
+# $Id: bindxfer.test,v 1.2 2006/01/03 00:33:50 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+proc sqlite_step {stmt VALS COLS} {
+ upvar #0 $VALS vals
+ upvar #0 $COLS cols
+ set vals [list]
+ set cols [list]
+
+ set rc [sqlite3_step $stmt]
+ for {set i 0} {$i < [sqlite3_column_count $stmt]} {incr i} {
+ lappend cols [sqlite3_column_name $stmt $i]
+ }
+ for {set i 0} {$i < [sqlite3_data_count $stmt]} {incr i} {
+ lappend vals [sqlite3_column_text $stmt $i]
+ }
+
+ return $rc
+}
+
+do_test bindxfer-1.1 {
+ set DB [sqlite3_connection_pointer db]
+ execsql {CREATE TABLE t1(a,b,c);}
+ set VM1 [sqlite3_prepare $DB {SELECT ?, ?, ?} -1 TAIL]
+ set TAIL
+} {}
+do_test bindxfer-1.2 {
+ sqlite3_bind_parameter_count $VM1
+} 3
+do_test bindxfer-1.3 {
+ set VM2 [sqlite3_prepare $DB {SELECT ?, ?, ?} -1 TAIL]
+ set TAIL
+} {}
+do_test bindxfer-1.4 {
+ sqlite3_bind_parameter_count $VM2
+} 3
+do_test bindxfer-1.5 {
+ sqlite_bind $VM1 1 one normal
+ set sqlite_static_bind_value two
+ sqlite_bind $VM1 2 {} static
+ sqlite_bind $VM1 3 {} null
+ sqlite3_transfer_bindings $VM1 $VM2
+ sqlite_step $VM1 VALUES COLNAMES
+} SQLITE_ROW
+do_test bindxfer-1.6 {
+ set VALUES
+} {{} {} {}}
+do_test bindxfer-1.7 {
+ sqlite_step $VM2 VALUES COLNAMES
+} SQLITE_ROW
+do_test bindxfer-1.8 {
+ set VALUES
+} {one two {}}
+catch {sqlite3_finalize $VM1}
+catch {sqlite3_finalize $VM2}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/blob.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/blob.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,124 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# $Id: blob.test,v 1.5 2006/01/03 00:33:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable {!bloblit} {
+ finish_test
+ return
+}
+
+proc bin_to_hex {blob} {
+ set bytes {}
+ binary scan $blob \c* bytes
+ set bytes2 [list]
+ foreach b $bytes {lappend bytes2 [format %02X [expr $b & 0xFF]]}
+ join $bytes2 {}
+}
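+# Quick illustration of the helper above (a sketch, not one of the numbered
+# test cases): two arbitrary bytes come back as their upper-case hex digits.
+if {[bin_to_hex "\x01\xab"] ne "01AB"} {
+  puts "bin_to_hex sketch: unexpected result"
+}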
+
+# Simplest possible case. Specify a blob literal
+do_test blob-1.0 {
+ set blob [execsql {SELECT X'01020304';}]
+ bin_to_hex [lindex $blob 0]
+} {01020304}
+do_test blob-1.1 {
+ set blob [execsql {SELECT x'ABCDEF';}]
+ bin_to_hex [lindex $blob 0]
+} {ABCDEF}
+do_test blob-1.2 {
+ set blob [execsql {SELECT x'';}]
+ bin_to_hex [lindex $blob 0]
+} {}
+do_test blob-1.3 {
+ set blob [execsql {SELECT x'abcdEF12';}]
+ bin_to_hex [lindex $blob 0]
+} {ABCDEF12}
+
+# Try some syntax errors in blob literals.
+do_test blob-1.4 {
+ catchsql {SELECT X'01020k304', 100}
+} {1 {unrecognized token: "X'01020"}}
+do_test blob-1.5 {
+ catchsql {SELECT X'01020, 100}
+} {1 {unrecognized token: "X'01020"}}
+do_test blob-1.6 {
+ catchsql {SELECT X'01020 100'}
+} {1 {unrecognized token: "X'01020"}}
+do_test blob-1.7 {
+ catchsql {SELECT X'01001'}
+} {1 {unrecognized token: "X'01001'"}}
+
+# Insert a blob into a table and retrieve it.
+do_test blob-2.0 {
+ execsql {
+ CREATE TABLE t1(a BLOB, b BLOB);
+ INSERT INTO t1 VALUES(X'123456', x'7890ab');
+ INSERT INTO t1 VALUES(X'CDEF12', x'345678');
+ }
+ set blobs [execsql {SELECT * FROM t1}]
+ set blobs2 [list]
+ foreach b $blobs {lappend blobs2 [bin_to_hex $b]}
+ set blobs2
+} {123456 7890AB CDEF12 345678}
+
+# An index on a blob column
+do_test blob-2.1 {
+ execsql {
+ CREATE INDEX i1 ON t1(a);
+ }
+ set blobs [execsql {SELECT * FROM t1}]
+ set blobs2 [list]
+ foreach b $blobs {lappend blobs2 [bin_to_hex $b]}
+ set blobs2
+} {123456 7890AB CDEF12 345678}
+do_test blob-2.2 {
+ set blobs [execsql {SELECT * FROM t1 where a = X'123456'}]
+ set blobs2 [list]
+ foreach b $blobs {lappend blobs2 [bin_to_hex $b]}
+ set blobs2
+} {123456 7890AB}
+do_test blob-2.3 {
+ set blobs [execsql {SELECT * FROM t1 where a = X'CDEF12'}]
+ set blobs2 [list]
+ foreach b $blobs {lappend blobs2 [bin_to_hex $b]}
+ set blobs2
+} {CDEF12 345678}
+do_test blob-2.4 {
+ set blobs [execsql {SELECT * FROM t1 where a = X'CD12'}]
+ set blobs2 [list]
+ foreach b $blobs {lappend blobs2 [bin_to_hex $b]}
+ set blobs2
+} {}
+
+# Try to bind a blob value to a prepared statement.
+do_test blob-3.0 {
+ sqlite3 db2 test.db
+ set DB [sqlite3_connection_pointer db2]
+ set STMT [sqlite3_prepare $DB "DELETE FROM t1 WHERE a = ?" -1 DUMMY]
+ sqlite3_bind_blob $STMT 1 "\x12\x34\x56" 3
+ sqlite3_step $STMT
+} {SQLITE_DONE}
+do_test blob-3.1 {
+ sqlite3_finalize $STMT
+ db2 close
+} {}
+do_test blob-3.2 {
+ set blobs [execsql {SELECT * FROM t1}]
+ set blobs2 [list]
+ foreach b $blobs {lappend blobs2 [bin_to_hex $b]}
+ set blobs2
+} {CDEF12 345678}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/btree.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/btree.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1069 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is the btree database backend.
+#
+# $Id: btree.test,v 1.37 2006/08/16 16:42:48 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable default_autovacuum {
+ finish_test
+ return
+}
+
+# Basic functionality. Open and close a database.
+#
+do_test btree-1.1 {
+ file delete -force test1.bt
+ file delete -force test1.bt-journal
+ set rc [catch {btree_open test1.bt 2000 0} ::b1]
+} {0}
+
+# The second element of the list returned by btree_pager_stats is the
+# number of pages currently checked out. We'll be checking this value
+# frequently during this test script, to make sure the btree library
+# is properly releasing the pages it checks out, and thus avoiding
+# page leaks.
+#
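+# A hypothetical convenience wrapper for that lookup (illustrative only; the
+# tests below call btree_pager_stats directly):
+proc btree_refcount {bt} {
+  # Second list element = pages currently checked out by the pager.
+  lindex [btree_pager_stats $bt] 1
+}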
+do_test btree-1.1.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {0}
+do_test btree-1.2 {
+ set rc [catch {btree_open test1.bt 2000 0} ::b2]
+} {0}
+do_test btree-1.3 {
+ set rc [catch {btree_close $::b2} msg]
+ lappend rc $msg
+} {0 {}}
+
+# Do an insert and verify that the database file grows in size.
+#
+do_test btree-1.4 {
+ set rc [catch {btree_begin_transaction $::b1} msg]
+ lappend rc $msg
+} {0 {}}
+do_test btree-1.4.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-1.5 {
+ set rc [catch {btree_cursor $::b1 1 1} ::c1]
+ if {$rc} {lappend rc $::c1}
+ set rc
+} {0}
+do_test btree-1.6 {
+ set rc [catch {btree_insert $::c1 100 1.00} msg]
+ lappend rc $msg
+} {0 {}}
+do_test btree-1.7 {
+ btree_move_to $::c1 100
+ btree_key $::c1
+} {100}
+do_test btree-1.8 {
+ btree_data $::c1
+} {1.00}
+do_test btree-1.9 {
+ set rc [catch {btree_close_cursor $::c1} msg]
+ lappend rc $msg
+} {0 {}}
+do_test btree-1.10 {
+ set rc [catch {btree_commit $::b1} msg]
+ lappend rc $msg
+} {0 {}}
+do_test btree-1.11 {
+ file size test1.bt
+} {1024}
+do_test btree-1.12 {
+ lindex [btree_pager_stats $::b1] 1
+} {0}
+
+# Reopen the database and attempt to read the record that we wrote.
+#
+do_test btree-2.1 {
+ set rc [catch {btree_cursor $::b1 1 1} ::c1]
+ if {$rc} {lappend rc $::c1}
+ set rc
+} {0}
+do_test btree-2.2 {
+ btree_move_to $::c1 99
+} {1}
+do_test btree-2.3 {
+ btree_move_to $::c1 101
+} {-1}
+do_test btree-2.4 {
+ btree_move_to $::c1 100
+} {0}
+do_test btree-2.5 {
+ btree_key $::c1
+} {100}
+do_test btree-2.6 {
+ btree_data $::c1
+} {1.00}
+do_test btree-2.7 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+
+# Do some additional inserts
+#
+do_test btree-3.1 {
+ btree_begin_transaction $::b1
+ btree_insert $::c1 200 2.00
+ btree_move_to $::c1 200
+ btree_key $::c1
+} {200}
+do_test btree-3.1.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-3.2 {
+ btree_insert $::c1 300 3.00
+ btree_move_to $::c1 300
+ btree_key $::c1
+} {300}
+do_test btree-3.4 {
+ btree_insert $::c1 400 4.00
+ btree_move_to $::c1 400
+ btree_key $::c1
+} {400}
+do_test btree-3.5 {
+ btree_insert $::c1 500 5.00
+ btree_move_to $::c1 500
+ btree_key $::c1
+} {500}
+do_test btree-3.6 {
+ btree_insert $::c1 600 6.00
+ btree_move_to $::c1 600
+ btree_key $::c1
+} {600}
+#btree_page_dump $::b1 2
+do_test btree-3.7 {
+ set rc [btree_move_to $::c1 0]
+ expr {$rc>0}
+} {1}
+do_test btree-3.8 {
+ btree_key $::c1
+} {100}
+do_test btree-3.9 {
+ btree_data $::c1
+} {1.00}
+do_test btree-3.10 {
+ btree_next $::c1
+ btree_key $::c1
+} {200}
+do_test btree-3.11 {
+ btree_data $::c1
+} {2.00}
+do_test btree-3.12 {
+ btree_next $::c1
+ btree_key $::c1
+} {300}
+do_test btree-3.13 {
+ btree_data $::c1
+} {3.00}
+do_test btree-3.14 {
+ btree_next $::c1
+ btree_key $::c1
+} {400}
+do_test btree-3.15 {
+ btree_data $::c1
+} {4.00}
+do_test btree-3.16 {
+ btree_next $::c1
+ btree_key $::c1
+} {500}
+do_test btree-3.17 {
+ btree_data $::c1
+} {5.00}
+do_test btree-3.18 {
+ btree_next $::c1
+ btree_key $::c1
+} {600}
+do_test btree-3.19 {
+ btree_data $::c1
+} {6.00}
+do_test btree-3.20.1 {
+ btree_next $::c1
+ btree_key $::c1
+} {0}
+do_test btree-3.20.2 {
+ btree_eof $::c1
+} {1}
+# This test case used to test that one couldn't request data from an
+# invalid cursor. That is now an assert()ed condition.
+#
+# do_test btree-3.21 {
+# set rc [catch {btree_data $::c1} res]
+# lappend rc $res
+# } {1 SQLITE_INTERNAL}
+
+# Commit the changes, reopen and reread the data
+#
+do_test btree-3.22 {
+ set rc [catch {btree_close_cursor $::c1} msg]
+ lappend rc $msg
+} {0 {}}
+do_test btree-3.22.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-3.23 {
+ set rc [catch {btree_commit $::b1} msg]
+ lappend rc $msg
+} {0 {}}
+do_test btree-3.23.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {0}
+do_test btree-3.24 {
+ file size test1.bt
+} {1024}
+do_test btree-3.25 {
+ set rc [catch {btree_cursor $::b1 1 1} ::c1]
+ if {$rc} {lappend rc $::c1}
+ set rc
+} {0}
+do_test btree-3.25.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-3.26 {
+ set rc [btree_move_to $::c1 0]
+ expr {$rc>0}
+} {1}
+do_test btree-3.27 {
+ btree_key $::c1
+} {100}
+do_test btree-3.28 {
+ btree_data $::c1
+} {1.00}
+do_test btree-3.29 {
+ btree_next $::c1
+ btree_key $::c1
+} {200}
+do_test btree-3.30 {
+ btree_data $::c1
+} {2.00}
+do_test btree-3.31 {
+ btree_next $::c1
+ btree_key $::c1
+} {300}
+do_test btree-3.32 {
+ btree_data $::c1
+} {3.00}
+do_test btree-3.33 {
+ btree_next $::c1
+ btree_key $::c1
+} {400}
+do_test btree-3.34 {
+ btree_data $::c1
+} {4.00}
+do_test btree-3.35 {
+ btree_next $::c1
+ btree_key $::c1
+} {500}
+do_test btree-3.36 {
+ btree_data $::c1
+} {5.00}
+do_test btree-3.37 {
+ btree_next $::c1
+ btree_key $::c1
+} {600}
+do_test btree-3.38 {
+ btree_data $::c1
+} {6.00}
+do_test btree-3.39 {
+ btree_next $::c1
+ btree_key $::c1
+} {0}
+# This test case used to test that requesting data from an invalid cursor
+# returned SQLITE_INTERNAL. That is now an assert()ed condition.
+#
+# do_test btree-3.40 {
+# set rc [catch {btree_data $::c1} res]
+# lappend rc $res
+# } {1 SQLITE_INTERNAL}
+do_test btree-3.41 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+
+
+# Now try a delete
+#
+do_test btree-4.1 {
+ btree_begin_transaction $::b1
+ btree_move_to $::c1 100
+ btree_key $::c1
+} {100}
+do_test btree-4.1.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-4.2 {
+ btree_delete $::c1
+} {}
+do_test btree-4.3 {
+ btree_move_to $::c1 100
+ btree_key $::c1
+} {200}
+do_test btree-4.4 {
+ btree_next $::c1
+ btree_key $::c1
+} {300}
+do_test btree-4.5 {
+ btree_next $::c1
+ btree_key $::c1
+} {400}
+do_test btree-4.4 {
+ btree_move_to $::c1 0
+ set r {}
+ while 1 {
+ set key [btree_key $::c1]
+ if {[btree_eof $::c1]} break
+ lappend r $key
+ lappend r [btree_data $::c1]
+ btree_next $::c1
+ }
+ set r
+} {200 2.00 300 3.00 400 4.00 500 5.00 600 6.00}
+
+# Commit and make sure the delete is still there.
+#
+do_test btree-4.5 {
+ btree_commit $::b1
+ btree_move_to $::c1 0
+ set r {}
+ while 1 {
+ set key [btree_key $::c1]
+ if {[btree_eof $::c1]} break
+ lappend r $key
+ lappend r [btree_data $::c1]
+ btree_next $::c1
+ }
+ set r
+} {200 2.00 300 3.00 400 4.00 500 5.00 600 6.00}
+
+# Completely close the database and reopen it. Then check
+# the data again.
+#
+do_test btree-4.6 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-4.7 {
+ btree_close_cursor $::c1
+ lindex [btree_pager_stats $::b1] 1
+} {0}
+do_test btree-4.8 {
+ btree_close $::b1
+ set ::b1 [btree_open test1.bt 2000 0]
+ set ::c1 [btree_cursor $::b1 1 1]
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-4.9 {
+ set r {}
+ btree_first $::c1
+ while 1 {
+ set key [btree_key $::c1]
+ if {[btree_eof $::c1]} break
+ lappend r $key
+ lappend r [btree_data $::c1]
+ btree_next $::c1
+ }
+ set r
+} {200 2.00 300 3.00 400 4.00 500 5.00 600 6.00}
+
+# Try to read and write meta data
+#
+do_test btree-5.1 {
+ btree_get_meta $::b1
+} {0 0 0 0 0 0 0 0 0 0}
+do_test btree-5.2 {
+ set rc [catch {
+ btree_update_meta $::b1 0 1 2 3 4 5 6 7 8 9
+ } msg]
+ lappend rc $msg
+} {1 SQLITE_ERROR}
+do_test btree-5.3 {
+ btree_begin_transaction $::b1
+ set rc [catch {
+ btree_update_meta $::b1 0 1 2 3 0 5 6 7 8 9
+ } msg]
+ lappend rc $msg
+} {0 {}}
+do_test btree-5.4 {
+ btree_get_meta $::b1
+} {0 1 2 3 0 5 6 7 8 9}
+do_test btree-5.5 {
+ btree_close_cursor $::c1
+ btree_rollback $::b1
+ btree_get_meta $::b1
+} {0 0 0 0 0 0 0 0 0 0}
+do_test btree-5.6 {
+ btree_begin_transaction $::b1
+ btree_update_meta $::b1 0 10 20 30 0 50 60 70 80 90
+ btree_commit $::b1
+ btree_get_meta $::b1
+} {0 10 20 30 0 50 60 70 80 90}
+
+proc select_all {cursor} {
+ set r {}
+ btree_first $cursor
+ while {![btree_eof $cursor]} {
+ set key [btree_key $cursor]
+ lappend r $key
+ lappend r [btree_data $cursor]
+ btree_next $cursor
+ }
+ return $r
+}
+proc select_keys {cursor} {
+ set r {}
+ btree_first $cursor
+ while {![btree_eof $cursor]} {
+ set key [btree_key $cursor]
+ lappend r $key
+ btree_next $cursor
+ }
+ return $r
+}
+
+# Try to create a new table in the database file
+#
+do_test btree-6.1 {
+ set rc [catch {btree_create_table $::b1 0} msg]
+ lappend rc $msg
+} {1 SQLITE_ERROR}
+do_test btree-6.2 {
+ btree_begin_transaction $::b1
+ set ::t2 [btree_create_table $::b1 0]
+} {2}
+do_test btree-6.2.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-6.2.2 {
+ set ::c2 [btree_cursor $::b1 $::t2 1]
+ lindex [btree_pager_stats $::b1] 1
+} {2}
+do_test btree-6.2.3 {
+ btree_insert $::c2 ten 10
+ btree_move_to $::c2 ten
+ btree_key $::c2
+} {ten}
+do_test btree-6.3 {
+ btree_commit $::b1
+ set ::c1 [btree_cursor $::b1 1 1]
+ lindex [btree_pager_stats $::b1] 1
+} {2}
+do_test btree-6.3.1 {
+ select_all $::c1
+} {200 2.00 300 3.00 400 4.00 500 5.00 600 6.00}
+#btree_page_dump $::b1 3
+do_test btree-6.4 {
+ select_all $::c2
+} {ten 10}
+
+# Drop the new table, then create it anew.
+#
+do_test btree-6.5 {
+ btree_begin_transaction $::b1
+} {}
+do_test btree-6.6 {
+ btree_close_cursor $::c2
+} {}
+do_test btree-6.6.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-6.7 {
+ btree_close_cursor $::c1
+ btree_drop_table $::b1 $::t2
+} {}
+do_test btree-6.7.1 {
+ lindex [btree_get_meta $::b1] 0
+} {1}
+do_test btree-6.8 {
+ set ::t2 [btree_create_table $::b1 0]
+} {2}
+do_test btree-6.8.1 {
+ lindex [btree_get_meta $::b1] 0
+} {0}
+do_test btree-6.9 {
+ set ::c2 [btree_cursor $::b1 $::t2 1]
+ lindex [btree_pager_stats $::b1] 1
+} {2}
+
+# This test case used to test that requesting the key from an invalid cursor
+# returned an empty string. But that is now an assert()ed condition.
+#
+# do_test btree-6.9.1 {
+# btree_move_to $::c2 {}
+# btree_key $::c2
+# } {}
+
+# If we drop table 1 it just clears the table. Table 1 always exists.
+#
+do_test btree-6.10 {
+ btree_close_cursor $::c2
+ btree_drop_table $::b1 1
+ set ::c2 [btree_cursor $::b1 $::t2 1]
+ set ::c1 [btree_cursor $::b1 1 1]
+ btree_first $::c1
+ btree_eof $::c1
+} {1}
+do_test btree-6.11 {
+ btree_commit $::b1
+ select_all $::c1
+} {}
+do_test btree-6.12 {
+ select_all $::c2
+} {}
+do_test btree-6.13 {
+ btree_close_cursor $::c2
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+
+# Check to see that pages defragment properly. To do this test we will
+#
+# 1. Fill the first page of table 1 with data.
+# 2. Delete every other entry of table 1.
+# 3. Insert a single entry that requires more contiguous
+# space than is available.
+#
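+# In outline, those three steps would look something like the following
+# (a commented-out, illustrative sketch only; the key and payload sizes are
+# arbitrary and running it here would disturb the page layout that the
+# later tests depend on):
+#
+#   for {set i 1} {$i<=30} {incr i} {
+#     btree_insert $::c1 [format %03d $i] [string repeat x 30]
+#   }
+#   for {set i 1} {$i<=30} {incr i 2} {
+#     btree_move_to $::c1 [format %03d $i]
+#     btree_delete $::c1
+#   }
+#   btree_insert $::c1 100 [string repeat y 400]  ;# needs contiguous space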
+do_test btree-7.1 {
+ btree_begin_transaction $::b1
+} {}
+catch {unset key}
+catch {unset data}
+
+# Check to see that data on overflow pages works correctly.
+#
+do_test btree-8.1 {
+ set data "*** This is a very long key "
+ while {[string length $data]<1234} {append data $data}
+ set ::data $data
+ btree_insert $::c1 2020 $data
+} {}
+btree_page_dump $::b1 1
+btree_page_dump $::b1 2
+btree_page_dump $::b1 3
+do_test btree-8.1.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+#btree_pager_ref_dump $::b1
+do_test btree-8.2 {
+ btree_move_to $::c1 2020
+ string length [btree_data $::c1]
+} [string length $::data]
+do_test btree-8.3 {
+ btree_data $::c1
+} $::data
+do_test btree-8.4 {
+ btree_delete $::c1
+} {}
+do_test btree-8.4.1 {
+ lindex [btree_get_meta $::b1] 0
+} [expr {int(([string length $::data]-238+1019)/1020)}]
+do_test btree-8.4.2 {
+ btree_integrity_check $::b1 1 2
+} {}
+do_test btree-8.5 {
+ set data "*** This is an even longer key "
+ while {[string length $data]<2000} {append data $data}
+ append data END
+ set ::data $data
+ btree_insert $::c1 2030 $data
+} {}
+do_test btree-8.6 {
+ btree_move_to $::c1 2030
+ string length [btree_data $::c1]
+} [string length $::data]
+do_test btree-8.7 {
+ btree_data $::c1
+} $::data
+do_test btree-8.8 {
+ btree_commit $::b1
+ btree_data $::c1
+} $::data
+do_test btree-8.9.1 {
+ btree_close_cursor $::c1
+ btree_close $::b1
+ set ::b1 [btree_open test1.bt 2000 0]
+ set ::c1 [btree_cursor $::b1 1 1]
+ btree_move_to $::c1 2030
+ btree_data $::c1
+} $::data
+do_test btree-8.9.2 {
+ btree_integrity_check $::b1 1 2
+} {}
+do_test btree-8.10 {
+ btree_begin_transaction $::b1
+ btree_delete $::c1
+} {}
+do_test btree-8.11 {
+ lindex [btree_get_meta $::b1] 0
+} {4}
+
+# Now check out keys on overflow pages.
+#
+do_test btree-8.12.1 {
+ set ::keyprefix "This is a long prefix to a key "
+ while {[string length $::keyprefix]<256} {append ::keyprefix $::keyprefix}
+ btree_close_cursor $::c1
+ btree_clear_table $::b1 2
+ lindex [btree_get_meta $::b1] 0
+} {4}
+do_test btree-8.12.2 {
+ btree_integrity_check $::b1 1 2
+} {}
+do_test btree-8.12.3 {
+ set ::c1 [btree_cursor $::b1 2 1]
+ btree_insert $::c1 ${::keyprefix}1 1
+ btree_first $::c1
+ btree_data $::c1
+} {1}
+do_test btree-8.13 {
+ btree_key $::c1
+} ${keyprefix}1
+do_test btree-8.14 {
+ btree_insert $::c1 ${::keyprefix}2 2
+ btree_insert $::c1 ${::keyprefix}3 3
+ btree_last $::c1
+ btree_key $::c1
+} ${keyprefix}3
+do_test btree-8.15 {
+ btree_move_to $::c1 ${::keyprefix}2
+ btree_data $::c1
+} {2}
+do_test btree-8.16 {
+ btree_move_to $::c1 ${::keyprefix}1
+ btree_data $::c1
+} {1}
+do_test btree-8.17 {
+ btree_move_to $::c1 ${::keyprefix}3
+ btree_data $::c1
+} {3}
+do_test btree-8.18 {
+ lindex [btree_get_meta $::b1] 0
+} {1}
+do_test btree-8.19 {
+ btree_move_to $::c1 ${::keyprefix}2
+ btree_key $::c1
+} ${::keyprefix}2
+#btree_page_dump $::b1 2
+do_test btree-8.20 {
+ btree_delete $::c1
+ btree_next $::c1
+ btree_key $::c1
+} ${::keyprefix}3
+#btree_page_dump $::b1 2
+do_test btree-8.21 {
+ lindex [btree_get_meta $::b1] 0
+} {2}
+do_test btree-8.22 {
+ lindex [btree_pager_stats $::b1] 1
+} {2}
+do_test btree-8.23.1 {
+ btree_close_cursor $::c1
+ btree_drop_table $::b1 2
+ btree_integrity_check $::b1 1
+} {}
+do_test btree-8.23.2 {
+ btree_create_table $::b1 0
+} {2}
+do_test btree-8.23.3 {
+ set ::c1 [btree_cursor $::b1 2 1]
+ lindex [btree_get_meta $::b1] 0
+} {4}
+do_test btree-8.24 {
+ lindex [btree_pager_stats $::b1] 1
+} {2}
+#btree_pager_ref_dump $::b1
+do_test btree-8.25 {
+ btree_integrity_check $::b1 1 2
+} {}
+
+# Check page splitting logic
+#
+do_test btree-9.1 {
+ for {set i 1} {$i<=19} {incr i} {
+ set key [format %03d $i]
+ set data "*** $key *** $key *** $key *** $key ***"
+ btree_insert $::c1 $key $data
+ }
+} {}
+#btree_tree_dump $::b1 2
+#btree_pager_ref_dump $::b1
+#set pager_refinfo_enable 1
+do_test btree-9.2 {
+ btree_insert $::c1 020 {*** 020 *** 020 *** 020 *** 020 ***}
+ select_keys $::c1
+} {001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020}
+#btree_page_dump $::b1 2
+#btree_pager_ref_dump $::b1
+#set pager_refinfo_enable 0
+
+# The previous "select_keys" command left the cursor pointing at the root
+# page. So there should be only two pages checked out: page 2 (the root)
+# and page 1.
+do_test btree-9.2.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {2}
+for {set i 1} {$i<=20} {incr i} {
+ do_test btree-9.3.$i.1 [subst {
+ btree_move_to $::c1 [format %03d $i]
+ btree_key $::c1
+ }] [format %03d $i]
+ do_test btree-9.3.$i.2 [subst {
+ btree_move_to $::c1 [format %03d $i]
+ string range \[btree_data $::c1\] 0 10
+ }] "*** [format %03d $i] ***"
+}
+do_test btree-9.4.1 {
+ lindex [btree_pager_stats $::b1] 1
+} {2}
+
+# Check the page joining logic.
+#
+#btree_page_dump $::b1 2
+#btree_pager_ref_dump $::b1
+do_test btree-9.4.2 {
+ btree_move_to $::c1 005
+ btree_delete $::c1
+} {}
+#btree_page_dump $::b1 2
+for {set i 1} {$i<=19} {incr i} {
+ if {$i==5} continue
+ do_test btree-9.5.$i.1 [subst {
+ btree_move_to $::c1 [format %03d $i]
+ btree_key $::c1
+ }] [format %03d $i]
+ do_test btree-9.5.$i.2 [subst {
+ btree_move_to $::c1 [format %03d $i]
+ string range \[btree_data $::c1\] 0 10
+ }] "*** [format %03d $i] ***"
+}
+#btree_pager_ref_dump $::b1
+do_test btree-9.6 {
+ btree_close_cursor $::c1
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-9.7 {
+ btree_integrity_check $::b1 1 2
+} {}
+do_test btree-9.8 {
+ btree_rollback $::b1
+ lindex [btree_pager_stats $::b1] 1
+} {0}
+do_test btree-9.9 {
+ btree_integrity_check $::b1 1 2
+} {}
+do_test btree-9.10 {
+ btree_close $::b1
+ set ::b1 [btree_open test1.bt 2000 0]
+ btree_integrity_check $::b1 1 2
+} {}
+
+# Create a tree of depth two. That is, there is a single divider entry
+# on the root page and two leaf pages. Then delete the divider entry and
+# see what happens.
+#
+do_test btree-10.1 {
+ btree_begin_transaction $::b1
+ btree_clear_table $::b1 2
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-10.2 {
+ set ::c1 [btree_cursor $::b1 2 1]
+ lindex [btree_pager_stats $::b1] 1
+} {2}
+do_test btree-10.3 {
+ for {set i 1} {$i<=30} {incr i} {
+ set key [format %03d $i]
+ set data "*** $key *** $key *** $key *** $key ***"
+ btree_insert $::c1 $key $data
+ }
+ select_keys $::c1
+} {001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030}
+#btree_tree_dump $::b1 2
+do_test btree-10.4 {
+ # The divider entry is 012. This is found by uncommenting the
+ # btree_tree_dump call above and looking at the tree. If the page size
+ # changes, this test will no longer work.
+ btree_move_to $::c1 012
+ btree_delete $::c1
+ select_keys $::c1
+} {001 002 003 004 005 006 007 008 009 010 011 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030}
+#btree_pager_ref_dump $::b1
+#btree_tree_dump $::b1 2
+for {set i 1} {$i<=30} {incr i} {
+ # Check the number of referenced pages. This should be 3 in most cases,
+ # but 2 when the cursor is pointing to the divider entry, which is now 013.
+ do_test btree-10.5.$i {
+ btree_move_to $::c1 [format %03d $i]
+ lindex [btree_pager_stats $::b1] 1
+ } [expr {$i==13?2:3}]
+ #btree_pager_ref_dump $::b1
+ #btree_tree_dump $::b1 2
+}
+
+# Create a tree with lots more pages
+#
+catch {unset ::data}
+catch {unset ::key}
+for {set i 31} {$i<=2000} {incr i} {
+ do_test btree-11.1.$i.1 {
+ set key [format %03d $i]
+ set ::data "*** $key *** $key *** $key *** $key ***"
+ btree_insert $::c1 $key $data
+ btree_move_to $::c1 $key
+ btree_key $::c1
+ } [format %03d $i]
+ do_test btree-11.1.$i.2 {
+ btree_data $::c1
+ } $::data
+ set ::key [format %03d [expr {$i/2}]]
+ if {$::key=="012"} {set ::key 013}
+ do_test btree-11.1.$i.3 {
+ btree_move_to $::c1 $::key
+ btree_key $::c1
+ } $::key
+}
+catch {unset ::data}
+catch {unset ::key}
+
+# Make sure our reference count is still correct.
+#
+do_test btree-11.2 {
+ btree_close_cursor $::c1
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-11.3 {
+ set ::c1 [btree_cursor $::b1 2 1]
+ lindex [btree_pager_stats $::b1] 1
+} {2}
+
+# Delete the dividers on the root page
+#
+#btree_page_dump $::b1 2
+do_test btree-11.4 {
+ btree_move_to $::c1 1667
+ btree_delete $::c1
+ btree_move_to $::c1 1667
+ set k [btree_key $::c1]
+ if {$k==1666} {
+ set k [btree_next $::c1]
+ }
+ btree_key $::c1
+} {1668}
+#btree_page_dump $::b1 2
+
+# Change the data on an intermediate node such that the node becomes overfull
+# and has to split. We happen to know that intermediate nodes exist at keys
+# 337, 401 and 465, as shown by the btree_page_dumps above.
+#
+catch {unset ::data}
+set ::data {This is going to be a very long data segment}
+append ::data $::data
+append ::data $::data
+do_test btree-12.1 {
+ btree_insert $::c1 337 $::data
+ btree_move_to $::c1 337
+ btree_data $::c1
+} $::data
+do_test btree-12.2 {
+ btree_insert $::c1 401 $::data
+ btree_move_to $::c1 401
+ btree_data $::c1
+} $::data
+do_test btree-12.3 {
+ btree_insert $::c1 465 $::data
+ btree_move_to $::c1 465
+ btree_data $::c1
+} $::data
+do_test btree-12.4 {
+ btree_move_to $::c1 337
+ btree_key $::c1
+} {337}
+do_test btree-12.5 {
+ btree_data $::c1
+} $::data
+do_test btree-12.6 {
+ btree_next $::c1
+ btree_key $::c1
+} {338}
+do_test btree-12.7 {
+ btree_move_to $::c1 464
+ btree_key $::c1
+} {464}
+do_test btree-12.8 {
+ btree_next $::c1
+ btree_data $::c1
+} $::data
+do_test btree-12.9 {
+ btree_next $::c1
+ btree_key $::c1
+} {466}
+do_test btree-12.10 {
+ btree_move_to $::c1 400
+ btree_key $::c1
+} {400}
+do_test btree-12.11 {
+ btree_next $::c1
+ btree_data $::c1
+} $::data
+do_test btree-12.12 {
+ btree_next $::c1
+ btree_key $::c1
+} {402}
+# btree_commit $::b1
+# btree_tree_dump $::b1 1
+do_test btree-13.1 {
+ btree_integrity_check $::b1 1 2
+} {}
+
+# To Do:
+#
+# 1. Do some deletes from the 3-layer tree
+# 2. Commit and reopen the database
+# 3. Read every 15th entry and make sure it works
+# 4. Implement btree_sanity and put it throughout this script
+#
+
+do_test btree-15.98 {
+ btree_close_cursor $::c1
+ lindex [btree_pager_stats $::b1] 1
+} {1}
+do_test btree-15.99 {
+ btree_rollback $::b1
+ lindex [btree_pager_stats $::b1] 1
+} {0}
+btree_pager_ref_dump $::b1
+
+# Miscellaneous tests.
+#
+# btree-16.1 - Check that a statement cannot be started if a transaction
+# is not active.
+# btree-16.2 - Check that it is an error to request more payload from a
+# btree entry than the entry contains.
+do_test btree-16.1 {
+ catch {btree_begin_statement $::b1} msg
+ set msg
+} SQLITE_ERROR
+
+do_test btree-16.2 {
+ btree_begin_transaction $::b1
+ set ::c1 [btree_cursor $::b1 2 1]
+ btree_insert $::c1 1 helloworld
+ btree_close_cursor $::c1
+ btree_commit $::b1
+} {}
+do_test btree-16.3 {
+ set ::c1 [btree_cursor $::b1 2 1]
+ btree_first $::c1
+} 0
+do_test btree-16.4 {
+ catch {btree_data $::c1 [expr [btree_payload_size $::c1] + 10]} msg
+ set msg
+} SQLITE_ERROR
+
+if {$tcl_platform(platform)=="unix"} {
+ do_test btree-16.5 {
+ btree_close $::b1
+ set ::origperm [file attributes test1.bt -permissions]
+ file attributes test1.bt -permissions o-w,g-w,a-w
+ set ::b1 [btree_open test1.bt 2000 0]
+ catch {btree_cursor $::b1 2 1} msg
+ file attributes test1.bt -permissions $::origperm
+ btree_close $::b1
+ set ::b1 [btree_open test1.bt 2000 0]
+ set msg
+ } {SQLITE_READONLY}
+}
+
+do_test btree-16.6 {
+ set ::c1 [btree_cursor $::b1 2 1]
+ set ::c2 [btree_cursor $::b1 2 1]
+ btree_begin_transaction $::b1
+ for {set i 0} {$i<100} {incr i} {
+ btree_insert $::c1 $i [string repeat helloworld 10]
+ }
+ btree_last $::c2
+ btree_insert $::c1 100 [string repeat helloworld 10]
+} {}
+
+do_test btree-16.7 {
+ btree_close_cursor $::c1
+ btree_close_cursor $::c2
+ btree_commit $::b1
+ set ::c1 [btree_cursor $::b1 2 1]
+ catch {btree_insert $::c1 101 helloworld} msg
+ set msg
+} {SQLITE_ERROR}
+do_test btree-16.8 {
+ btree_first $::c1
+ catch {btree_delete $::c1} msg
+ set msg
+} {SQLITE_ERROR}
+do_test btree-16.9 {
+ btree_close_cursor $::c1
+ btree_begin_transaction $::b1
+ set ::c1 [btree_cursor $::b1 2 0]
+ catch {btree_insert $::c1 101 helloworld} msg
+ set msg
+} {SQLITE_PERM}
+do_test btree-16.10 {
+ catch {btree_delete $::c1} msg
+ set msg
+} {SQLITE_PERM}
+
+# As of 2006-08-16 (version 3.3.7+) a read cursor will no
+# longer block a write cursor from the same database
+# connection. The following three tests used to return
+# the SQLITE_LOCKED error, but no longer do.
+#
+do_test btree-16.11 {
+ btree_close_cursor $::c1
+ set ::c2 [btree_cursor $::b1 2 1]
+ set ::c1 [btree_cursor $::b1 2 0]
+ catch {btree_insert $::c2 101 helloworld} msg
+ set msg
+} {}
+do_test btree-16.12 {
+ btree_first $::c2
+ catch {btree_delete $::c2} msg
+ set msg
+} {}
+do_test btree-16.13 {
+ catch {btree_clear_table $::b1 2} msg
+ set msg
+} {}
+
+
+do_test btree-16.14 {
+ btree_close_cursor $::c1
+ btree_close_cursor $::c2
+ btree_commit $::b1
+ catch {btree_clear_table $::b1 2} msg
+ set msg
+} {SQLITE_ERROR}
+do_test btree-16.15 {
+ catch {btree_drop_table $::b1 2} msg
+ set msg
+} {SQLITE_ERROR}
+do_test btree-16.16 {
+ btree_begin_transaction $::b1
+ set ::c1 [btree_cursor $::b1 2 0]
+ catch {btree_drop_table $::b1 2} msg
+ set msg
+} {SQLITE_LOCKED}
+
+do_test btree-99.1 {
+ btree_close $::b1
+} {}
+catch {unset data}
+catch {unset key}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/btree2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/btree2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,502 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is the btree database backend.
+#
+# $Id: btree2.test,v 1.15 2006/03/19 13:00:25 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+if {[info commands btree_open]!=""} {
+
+# Create a new database file containing no entries. The database should
+# contain 5 tables:
+#
+# 2 The descriptor table
+# 3 The foreground table
+# 4 The background table
+# 5 The long key table
+# 6 The long data table
+#
+# An explanation for what all these tables are used for is provided below.
+#
+do_test btree2-1.1 {
+ expr srand(1)
+ file delete -force test2.bt
+ file delete -force test2.bt-journal
+ set ::b [btree_open test2.bt 2000 0]
+ btree_begin_transaction $::b
+ btree_create_table $::b 0
+} {2}
+do_test btree2-1.2 {
+ btree_create_table $::b 0
+} {3}
+do_test btree2-1.3 {
+ btree_create_table $::b 0
+} {4}
+do_test btree2-1.4 {
+ btree_create_table $::b 0
+} {5}
+do_test btree2-1.5 {
+ btree_create_table $::b 0
+} {6}
+do_test btree2-1.6 {
+ set ::c2 [btree_cursor $::b 2 1]
+ btree_insert $::c2 {one} {1}
+ btree_move_to $::c2 {one}
+ btree_delete $::c2
+ btree_close_cursor $::c2
+ btree_commit $::b
+ btree_integrity_check $::b 1 2 3 4 5 6
+} {}
+
+# This test module works by making lots of pseudo-random changes to a
+# database while simultaneously maintaining an invariant on that database.
+# Periodically, the script does a sanity check on the database and verifies
+# that the invariant is satisfied.
+#
+# The invariant is as follows:
+#
+# 1. The descriptor table always contains 2 entries. An entry keyed by
+# "N" is the number of elements in the foreground and background tables
+# combined. The entry keyed by "L" is the number of digits in the keys
+# for foreground and background tables.
+#
+# 2. The union of the foreground and background tables consists of N entries
+# where each entry has an L-digit key. (Actually, some keys can be longer
+# than L characters, but they always start with L digits.) The keys
+# cover all integers between 1 and N. Whenever an entry is added to
+# the foreground it is removed from the background and vice versa.
+#
+# 3. Some entries in the foreground and background tables have keys that
+# begin with an L-digit number but are followed by additional characters.
+# For each such entry there is a corresponding entry in the long key
+# table. The long key table entry has a key which is just the L-digit
+# number and data which is the length of the key in the foreground and
+# background tables.
+#
+# 4. The data for both foreground and background entries is usually a
+# short string. But some entries have long data strings. For each
+# such entry there is an entry in the long data table. The key to the
+# long data table is an L-digit number. (The extension on long keys
+# is omitted.) The data is the number of characters in the data of the
+# foreground or background entry.
+#
+# The following function builds a database that satisfies all of the above
+# invariants.
+#
+proc build_db {N L} {
+ for {set i 2} {$i<=6} {incr i} {
+ catch {btree_close_cursor [set ::c$i]}
+ btree_clear_table $::b $i
+ set ::c$i [btree_cursor $::b $i 1]
+ }
+ btree_insert $::c2 N $N
+ btree_insert $::c2 L $L
+ set format %0${L}d
+ for {set i 1} {$i<=$N} {incr i} {
+ set key [format $format $i]
+ set data $key
+ btree_insert $::c3 $key $data
+ }
+}
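+# The descriptor entries written above are read back later with the pattern
+# sketched below (illustrative only; read_descriptor is a hypothetical helper,
+# and check_invariants/random_changes inline this lookup rather than call it):
+proc read_descriptor {c2} {
+  btree_move_to $c2 N
+  set N [btree_data $c2]
+  btree_move_to $c2 L
+  set L [btree_data $c2]
+  return [list $N $L]
+}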
+
+# Given a base key number and a length, construct the full text of the key
+# or data.
+#
+proc make_payload {keynum L len} {
+ set key [format %0${L}d $keynum]
+ set r $key
+ set i 1
+ while {[string length $r]<$len} {
+ append r " ($i) $key"
+ incr i
+ }
+ return [string range $r 0 [expr {$len-1}]]
+}
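+# Quick illustration of make_payload (a sketch, not a numbered test case):
+# whatever padding is generated, the result is exactly the requested length
+# and begins with the zero-padded key number.
+if {[string length [make_payload 7 3 40]] != 40} {
+  puts "make_payload sketch: unexpected length"
+}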
+
+# Verify the invariants on the database. Return an empty string on
+# success or an error message if something is amiss.
+#
+proc check_invariants {} {
+ set ck [btree_integrity_check $::b 1 2 3 4 5 6]
+ if {$ck!=""} {
+ puts "\n*** SANITY:\n$ck"
+ exit
+ return $ck
+ }
+ btree_move_to $::c3 {}
+ btree_move_to $::c4 {}
+ btree_move_to $::c2 N
+ set N [btree_data $::c2]
+ btree_move_to $::c2 L
+ set L [btree_data $::c2]
+ set LM1 [expr {$L-1}]
+ for {set i 1} {$i<=$N} {incr i} {
+ set key {}
+ if {![btree_eof $::c3]} {
+ set key [btree_key $::c3]
+ }
+ if {[scan $key %d k]<1} {set k 0}
+ if {$k!=$i} {
+ set key {}
+ if {![btree_eof $::c4]} {
+ set key [btree_key $::c4]
+ }
+ if {[scan $key %d k]<1} {set k 0}
+ if {$k!=$i} {
+ return "Key $i is missing from both foreground and background"
+ }
+ set data [btree_data $::c4]
+ btree_next $::c4
+ } else {
+ set data [btree_data $::c3]
+ btree_next $::c3
+ }
+ set skey [string range $key 0 $LM1]
+ if {[btree_move_to $::c5 $skey]==0} {
+ set keylen [btree_data $::c5]
+ } else {
+ set keylen $L
+ }
+ if {[string length $key]!=$keylen} {
+ return "Key $i is the wrong size.\
+ Is \"$key\" but should be \"[make_payload $k $L $keylen]\""
+ }
+ if {[make_payload $k $L $keylen]!=$key} {
+ return "Key $i has an invalid extension"
+ }
+ if {[btree_move_to $::c6 $skey]==0} {
+ set datalen [btree_data $::c6]
+ } else {
+ set datalen $L
+ }
+ if {[string length $data]!=$datalen} {
+ return "Data for $i is the wrong size.\
+ Is [string length $data] but should be $datalen"
+ }
+ if {[make_payload $k $L $datalen]!=$data} {
+ return "Entry $i has an incorrect data"
+ }
+ }
+}
+
+# Look at all elements in both the foreground and background tables.
+# Make sure the key is always the same as the prefix of the data.
+#
+# This routine was used for hunting bugs. It is not a part of standard
+# tests.
+#
+proc check_data {n key} {
+ global c3 c4
+ incr n -1
+ foreach c [list $c3 $c4] {
+ btree_first $c ;# move_to $c $key
+ set cnt 0
+ while {![btree_eof $c]} {
+ set key [btree_key $c]
+ set data [btree_data $c]
+ if {[string range $key 0 $n] ne [string range $data 0 $n]} {
+ puts "key=[list $key] data=[list $data] n=$n"
+ puts "cursor info = [btree_cursor_info $c]"
+ btree_page_dump $::b [lindex [btree_cursor_info $c] 0]
+ exit
+ }
+ btree_next $c
+ }
+ }
+}
+
+# Make random changes to the database such that each change preserves
+# the invariants. The number of changes is $n*N where N is the parameter
+# from the descriptor table. Each change begins with a random key.
+# The entry with that key is put in the foreground table with probability
+# $I and it is put in the background with probability (1.0-$I). It gets
+# a long key with probability $K and long data with probability $D.
+#
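+# (Sketch only; with_probability is a hypothetical helper that is not used
+# below.  Each probability parameter is applied inside random_changes as a
+# plain rand() comparison, i.e. an event that fires with probability p:)
+proc with_probability {p} {
+  expr {rand() <= $p}
+}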
+set chngcnt 0
+proc random_changes {n I K D} {
+ global chngcnt
+ btree_move_to $::c2 N
+ set N [btree_data $::c2]
+ btree_move_to $::c2 L
+ set L [btree_data $::c2]
+ set LM1 [expr {$L-1}]
+ set total [expr {int($N*$n)}]
+ set format %0${L}d
+ for {set i 0} {$i<$total} {incr i} {
+ set k [expr {int(rand()*$N)+1}]
+ set insert [expr {rand()<=$I}]
+ set longkey [expr {rand()<=$K}]
+ set longdata [expr {rand()<=$D}]
+ if {$longkey} {
+ set x [expr {rand()}]
+ set keylen [expr {int($x*$x*$x*$x*3000)+10}]
+ } else {
+ set keylen $L
+ }
+ set key [make_payload $k $L $keylen]
+ if {$longdata} {
+ set x [expr {rand()}]
+ set datalen [expr {int($x*$x*$x*$x*3000)+10}]
+ } else {
+ set datalen $L
+ }
+ set data [make_payload $k $L $datalen]
+ set basekey [format $format $k]
+ if {[set c [btree_move_to $::c3 $basekey]]==0} {
+ btree_delete $::c3
+ } else {
+ if {$c<0} {btree_next $::c3}
+ if {![btree_eof $::c3]} {
+ if {[string match $basekey* [btree_key $::c3]]} {
+ btree_delete $::c3
+ }
+ }
+ }
+ if {[set c [btree_move_to $::c4 $basekey]]==0} {
+ btree_delete $::c4
+ } else {
+ if {$c<0} {btree_next $::c4}
+ if {![btree_eof $::c4]} {
+ if {[string match $basekey* [btree_key $::c4]]} {
+ btree_delete $::c4
+ }
+ }
+ }
+ set kx -1
+ if {![btree_eof $::c4]} {
+ if {[scan [btree_key $::c4] %d kx]<1} {set kx -1}
+ }
+ if {$kx==$k} {
+ btree_delete $::c4
+ }
+ # For debugging - change the "0" to "1" to integrity check after
+ # every change.
+ if 0 {
+ incr chngcnt
+ puts check----$chngcnt
+ set ck [btree_integrity_check $::b 1 2 3 4 5 6]
+ if {$ck!=""} {
+ puts "\nSANITY CHECK FAILED!\n$ck"
+ exit
+ }
+ }
+ if {$insert} {
+ btree_insert $::c3 $key $data
+ } else {
+ btree_insert $::c4 $key $data
+ }
+ if {$longkey} {
+ btree_insert $::c5 $basekey $keylen
+ } elseif {[btree_move_to $::c5 $basekey]==0} {
+ btree_delete $::c5
+ }
+ if {$longdata} {
+ btree_insert $::c6 $basekey $datalen
+ } elseif {[btree_move_to $::c6 $basekey]==0} {
+ btree_delete $::c6
+ }
+ # For debugging - change the "0" to "1" to integrity check after
+ # every change.
+ if 0 {
+ incr chngcnt
+ puts check----$chngcnt
+ set ck [btree_integrity_check $::b 1 2 3 4 5 6]
+ if {$ck!=""} {
+ puts "\nSANITY CHECK FAILED!\n$ck"
+ exit
+ }
+ }
+ }
+}
+set btree_trace 0
+
+# Repeat this test sequence on databases of various sizes
+#
+set testno 2
+foreach {N L} {
+ 10 2
+ 50 2
+ 200 3
+ 2000 5
+} {
+ puts "**** N=$N L=$L ****"
+ set hash [md5file test2.bt]
+ do_test btree2-$testno.1 [subst -nocommands {
+ set ::c2 [btree_cursor $::b 2 1]
+ set ::c3 [btree_cursor $::b 3 1]
+ set ::c4 [btree_cursor $::b 4 1]
+ set ::c5 [btree_cursor $::b 5 1]
+ set ::c6 [btree_cursor $::b 6 1]
+ btree_begin_transaction $::b
+ build_db $N $L
+ check_invariants
+ }] {}
+ do_test btree2-$testno.2 {
+ btree_close_cursor $::c2
+ btree_close_cursor $::c3
+ btree_close_cursor $::c4
+ btree_close_cursor $::c5
+ btree_close_cursor $::c6
+ btree_rollback $::b
+ md5file test2.bt
+ } $hash
+ do_test btree2-$testno.3 [subst -nocommands {
+ btree_begin_transaction $::b
+ set ::c2 [btree_cursor $::b 2 1]
+ set ::c3 [btree_cursor $::b 3 1]
+ set ::c4 [btree_cursor $::b 4 1]
+ set ::c5 [btree_cursor $::b 5 1]
+ set ::c6 [btree_cursor $::b 6 1]
+ build_db $N $L
+ check_invariants
+ }] {}
+ do_test btree2-$testno.4 {
+ btree_commit $::b
+ check_invariants
+ } {}
+ do_test btree2-$testno.5 {
+ lindex [btree_pager_stats $::b] 1
+ } {6}
+ do_test btree2-$testno.6 {
+ btree_cursor_info $::c2
+ btree_cursor_info $::c3
+ btree_cursor_info $::c4
+ btree_cursor_info $::c5
+ btree_cursor_info $::c6
+ btree_close_cursor $::c2
+ btree_close_cursor $::c3
+ btree_close_cursor $::c4
+ btree_close_cursor $::c5
+ btree_close_cursor $::c6
+ lindex [btree_pager_stats $::b] 1
+ } {0}
+ do_test btree2-$testno.7 {
+ btree_close $::b
+ } {}
+
+ # For each database size, run various changes tests.
+ #
+ set num2 1
+ foreach {n I K D} {
+ 0.5 0.5 0.1 0.1
+ 1.0 0.2 0.1 0.1
+ 1.0 0.8 0.1 0.1
+ 2.0 0.0 0.1 0.1
+ 2.0 1.0 0.1 0.1
+ 2.0 0.0 0.0 0.0
+ 2.0 1.0 0.0 0.0
+ } {
+ set testid btree2-$testno.8.$num2
+ set hash [md5file test2.bt]
+ do_test $testid.0 {
+ set ::b [btree_open test2.bt 2000 0]
+ set ::c2 [btree_cursor $::b 2 1]
+ set ::c3 [btree_cursor $::b 3 1]
+ set ::c4 [btree_cursor $::b 4 1]
+ set ::c5 [btree_cursor $::b 5 1]
+ set ::c6 [btree_cursor $::b 6 1]
+ check_invariants
+ } {}
+ set cnt 6
+ for {set i 2} {$i<=6} {incr i} {
+ if {[lindex [btree_cursor_info [set ::c$i]] 0]!=$i} {incr cnt}
+ }
+ do_test $testid.1 {
+ btree_begin_transaction $::b
+ lindex [btree_pager_stats $::b] 1
+ } $cnt
+ do_test $testid.2 [subst {
+ random_changes $n $I $K $D
+ }] {}
+ do_test $testid.3 {
+ check_invariants
+ } {}
+ do_test $testid.4 {
+ btree_close_cursor $::c2
+ btree_close_cursor $::c3
+ btree_close_cursor $::c4
+ btree_close_cursor $::c5
+ btree_close_cursor $::c6
+ btree_rollback $::b
+ md5file test2.bt
+ } $hash
+ btree_begin_transaction $::b
+ set ::c2 [btree_cursor $::b 2 1]
+ set ::c3 [btree_cursor $::b 3 1]
+ set ::c4 [btree_cursor $::b 4 1]
+ set ::c5 [btree_cursor $::b 5 1]
+ set ::c6 [btree_cursor $::b 6 1]
+ do_test $testid.5 [subst {
+ random_changes $n $I $K $D
+ }] {}
+ do_test $testid.6 {
+ check_invariants
+ } {}
+ do_test $testid.7 {
+ btree_commit $::b
+ check_invariants
+ } {}
+ set hash [md5file test2.bt]
+ do_test $testid.8 {
+ btree_close_cursor $::c2
+ btree_close_cursor $::c3
+ btree_close_cursor $::c4
+ btree_close_cursor $::c5
+ btree_close_cursor $::c6
+ lindex [btree_pager_stats $::b] 1
+ } {0}
+ do_test $testid.9 {
+ btree_close $::b
+ set ::b [btree_open test2.bt 2000 0]
+ set ::c2 [btree_cursor $::b 2 1]
+ set ::c3 [btree_cursor $::b 3 1]
+ set ::c4 [btree_cursor $::b 4 1]
+ set ::c5 [btree_cursor $::b 5 1]
+ set ::c6 [btree_cursor $::b 6 1]
+ check_invariants
+ } {}
+ do_test $testid.10 {
+ btree_close_cursor $::c2
+ btree_close_cursor $::c3
+ btree_close_cursor $::c4
+ btree_close_cursor $::c5
+ btree_close_cursor $::c6
+ lindex [btree_pager_stats $::b] 1
+ } {0}
+ do_test $testid.11 {
+ btree_close $::b
+ } {}
+ incr num2
+ }
+ incr testno
+ set ::b [btree_open test2.bt 2000 0]
+}
+
+# Testing is complete. Shut everything down.
+#
+do_test btree2-999.1 {
+ lindex [btree_pager_stats $::b] 1
+} {0}
+do_test btree2-999.2 {
+ btree_close $::b
+} {}
+do_test btree2-999.3 {
+ file delete -force test2.bt
+ file exists test2.bt-journal
+} {0}
+
+} ;# end if( not mem: and has pager_open command );
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/btree4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/btree4.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,101 @@
+# 2002 December 03
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is the btree database backend.
+#
+# This file focuses on testing the sqliteBtreeNext() and
+# sqliteBtreePrevious() procedures and making sure they are able
+# to step through an entire table from either direction.
+#
+# $Id: btree4.test,v 1.2 2004/05/09 20:40:12 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+if {[info commands btree_open]!=""} {
+
+# Open a test database.
+#
+file delete -force test1.bt
+file delete -force test1.bt-journal
+set b1 [btree_open test1.bt 2000 0]
+btree_begin_transaction $b1
+do_test btree4-0.1 {
+ btree_create_table $b1 0
+} 2
+
+set data {abcdefghijklmnopqrstuvwxyz0123456789}
+append data $data
+append data $data
+append data $data
+append data $data
+
+foreach N {10 100 1000} {
+ btree_clear_table $::b1 2
+ set ::c1 [btree_cursor $::b1 2 1]
+ do_test btree4-$N.1 {
+ for {set i 1} {$i<=$N} {incr i} {
+ btree_insert $::c1 [format k-%05d $i] $::data-$i
+ }
+ btree_first $::c1
+ btree_key $::c1
+ } {k-00001}
+ do_test btree4-$N.2 {
+ btree_data $::c1
+ } $::data-1
+ for {set i 2} {$i<=$N} {incr i} {
+ do_test btree4-$N.3.$i.1 {
+ btree_next $::c1
+ } 0
+ do_test btree4-$N.3.$i.2 {
+ btree_key $::c1
+ } [format k-%05d $i]
+ do_test btree4-$N.3.$i.3 {
+ btree_data $::c1
+ } $::data-$i
+ }
+ do_test btree4-$N.4 {
+ btree_next $::c1
+ } 1
+ do_test btree4-$N.5 {
+ btree_last $::c1
+ } 0
+ do_test btree4-$N.6 {
+ btree_key $::c1
+ } [format k-%05d $N]
+ do_test btree4-$N.7 {
+ btree_data $::c1
+ } $::data-$N
+ for {set i [expr {$N-1}]} {$i>=1} {incr i -1} {
+ do_test btree4-$N.8.$i.1 {
+ btree_prev $::c1
+ } 0
+ do_test btree4-$N.8.$i.2 {
+ btree_key $::c1
+ } [format k-%05d $i]
+ do_test btree4-$N.8.$i.3 {
+ btree_data $::c1
+ } $::data-$i
+ }
+ do_test btree4-$N.9 {
+ btree_prev $::c1
+ } 1
+ btree_close_cursor $::c1
+}
+
+btree_rollback $::b1
+btree_pager_ref_dump $::b1
+btree_close $::b1
+
+} ;# end if( has btree_open command )
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/btree5.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/btree5.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,292 @@
+# 2004 May 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is the btree database backend.
+#
+# $Id: btree5.test,v 1.5 2004/05/14 12:17:46 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Attempting to read table 1 of an empty file gives an SQLITE_EMPTY
+# error.
+#
+do_test btree5-1.1 {
+ file delete -force test1.bt
+ file delete -force test1.bt-journal
+ set rc [catch {btree_open test1.bt 2000 0} ::b1]
+} {0}
+do_test btree5-1.2 {
+ set rc [catch {btree_cursor $::b1 1 0} ::c1]
+} {1}
+do_test btree5-1.3 {
+ set ::c1
+} {SQLITE_EMPTY}
+do_test btree5-1.4 {
+ set rc [catch {btree_cursor $::b1 1 1} ::c1]
+} {1}
+do_test btree5-1.5 {
+ set ::c1
+} {SQLITE_EMPTY}
+
+# Starting a transaction initializes the first page of the database
+# and the error goes away.
+#
+do_test btree5-1.6 {
+ btree_begin_transaction $b1
+ set rc [catch {btree_cursor $b1 1 0} c1]
+} {0}
+do_test btree5-1.7 {
+ btree_first $c1
+} {1}
+do_test btree5-1.8 {
+ btree_close_cursor $c1
+ btree_rollback $b1
+ set rc [catch {btree_cursor $b1 1 0} c1]
+} {1}
+do_test btree5-1.9 {
+ set c1
+} {SQLITE_EMPTY}
+do_test btree5-1.10 {
+ btree_begin_transaction $b1
+ set rc [catch {btree_cursor $b1 1 0} c1]
+} {0}
+do_test btree5-1.11 {
+ btree_first $c1
+} {1}
+do_test btree5-1.12 {
+ btree_close_cursor $c1
+ btree_commit $b1
+ set rc [catch {btree_cursor $b1 1 0} c1]
+} {0}
+do_test btree5-1.13 {
+ btree_first $c1
+} {1}
+do_test btree5-1.14 {
+ btree_close_cursor $c1
+ btree_integrity_check $b1 1
+} {}
+
+# Insert many entries into table 1. This is designed to test the
+# virtual-root logic that comes into play for page one. It is also
+# a good test of INTKEY tables.
+#
+# Stagger the inserts. After the inserts complete, go back and do
+# deletes. Stagger the deletes too. Repeat this several times.
+#
+
+# Do N inserts into table 1 using random keys between 0 and 1000000
+#
+proc random_inserts {N} {
+ global c1
+ while {$N>0} {
+ set k [expr {int(rand()*1000000)}]
+ if {[btree_move_to $c1 $k]==0} continue; # entry already exists
+ btree_insert $c1 $k data-for-$k
+ incr N -1
+ }
+}
+
+# Do N deletes from table 1
+#
+proc random_deletes {N} {
+ global c1
+ while {$N>0} {
+ set k [expr {int(rand()*1000000)}]
+ btree_move_to $c1 $k
+ btree_delete $c1
+ incr N -1
+ }
+}
+
+# Make sure the table has exactly N entries. Make sure the data for
+# each entry agrees with its key.
+#
+proc check_table {N} {
+ global c1
+ btree_first $c1
+ set cnt 0
+ while {![btree_eof $c1]} {
+ if {[set data [btree_data $c1]] ne "data-for-[btree_key $c1]"} {
+ return "wrong data for entry $cnt"
+ }
+ set n [string length $data]
+ set fdata1 [btree_fetch_data $c1 $n]
+ set fdata2 [btree_fetch_data $c1 -1]
+ if {$fdata1 ne "" && $fdata1 ne $data} {
+ return "DataFetch returned the wrong value with amt=$n"
+ }
+ if {$fdata1 ne $fdata2} {
+ return "DataFetch returned the wrong value when amt=-1"
+ }
+ if {$n>10} {
+ set fdata3 [btree_fetch_data $c1 10]
+ if {$fdata3 ne [string range $data 0 9]} {
+ return "DataFetch returned the wrong value when amt=10"
+ }
+ }
+ incr cnt
+ btree_next $c1
+ }
+ if {$cnt!=$N} {
+ return "wrong number of entries"
+ }
+ return {}
+}
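+
+# Note on the btree_fetch_data checks above: an empty string from the
+# full-length fetch is tolerated (the assumption being that the fetch
+# interface may decline to return the data directly), but any data that
+# is returned must match the stored value, the amt=-1 fetch must agree
+# with the full-length fetch, and the amt=10 fetch must equal the first
+# ten characters of the data.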
+
+# Initialize the database
+#
+btree_begin_transaction $b1
+set c1 [btree_cursor $b1 1 1]
+set btree_trace 0
+
+# Do the tests.
+#
+set cnt 0
+for {set i 1} {$i<=100} {incr i} {
+ do_test btree5-2.$i.1 {
+ random_inserts 200
+ incr cnt 200
+ check_table $cnt
+ } {}
+ do_test btree5-2.$i.2 {
+ btree_integrity_check $b1 1
+ } {}
+ do_test btree5-2.$i.3 {
+ random_deletes 190
+ incr cnt -190
+ check_table $cnt
+ } {}
+ do_test btree5-2.$i.4 {
+ btree_integrity_check $b1 1
+ } {}
+}
+
+#btree_tree_dump $b1 1
+btree_close_cursor $c1
+btree_commit $b1
+btree_begin_transaction $b1
+
+# This procedure converts an integer into a variable-length text key.
+# The conversion is reversible.
+#
+# The first two characters of the string are alphabetics derived from
+# the least significant bits of the number. Because they are derived
+# from least significant bits, the sort order of the resulting string
+# is different from numeric order. After the alphabetic prefix comes
+# the original number. A variable-length suffix follows. The length
+# of the suffix is based on a hash of the original number.
+#
+proc num_to_key {n} {
+ global charset ncharset suffix
+ set c1 [string index $charset [expr {$n%$ncharset}]]
+ set c2 [string index $charset [expr {($n/$ncharset)%$ncharset}]]
+ set nsuf [expr {($n*211)%593}]
+ return $c1$c2-$n-[string range $suffix 0 $nsuf]
+}
+set charset {abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ}
+set ncharset [string length $charset]
+set suffix $charset$charset
+while {[string length $suffix]<1000} {append suffix $suffix}
+
+# This procedure extracts the original integer used by num_to_key to
+# create a key.
+#
+proc key_to_num {key} {
+ regexp {^..-([0-9]+)} $key all n
+ return $n
+}
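+
+# Round-trip sketch: key_to_num only looks at the decimal digits between
+# the two-character prefix and the suffix, so the mapping is reversible;
+# for example [key_to_num [num_to_key 1234]] gives back 1234, even though
+# the generated keys do not sort in numeric order.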
+
+# Insert into table $tab keys corresponding to all values between
+# $start and $end, inclusive.
+#
+proc insert_range {tab start end} {
+ for {set i $start} {$i<=$end} {incr i} {
+ btree_insert $tab [num_to_key $i] {}
+ }
+}
+
+# Delete from table $tab keys corresponding to all values between
+# $start and $end, inclusive.
+#
+proc delete_range {tab start end} {
+ for {set i $start} {$i<=$end} {incr i} {
+ if {[btree_move_to $tab [num_to_key $i]]==0} {
+ btree_delete $tab
+ }
+ }
+}
+
+# Make sure table $tab contains exactly those keys corresponding
+# to values between $start and $end
+#
+proc check_range {tab start end} {
+ btree_first $tab
+ while {![btree_eof $tab]} {
+ set key [btree_key $tab]
+ set i [key_to_num $key]
+ if {[num_to_key $i] ne $key} {
+ return "malformed key: $key"
+ }
+ set got($i) 1
+ btree_next $tab
+ }
+ set all [lsort -integer [array names got]]
+ if {[llength $all]!=$end+1-$start} {
+ return "table contains wrong number of values"
+ }
+ if {[lindex $all 0]!=$start} {
+ return "wrong starting value"
+ }
+ if {[lindex $all end]!=$end} {
+ return "wrong ending value"
+ }
+ return {}
+}
+
+# Create a zero-data table and test it out.
+#
+do_test btree5-3.1 {
+ set rc [catch {btree_create_table $b1 2} t2]
+} {0}
+do_test btree5-3.2 {
+ set rc [catch {btree_cursor $b1 $t2 1} c2]
+} {0}
+set start 1
+set end 100
+for {set i 1} {$i<=100} {incr i} {
+ do_test btree5-3.3.$i.1 {
+ insert_range $c2 $start $end
+ btree_integrity_check $b1 1 $t2
+ } {}
+ do_test btree5-3.3.$i.2 {
+ check_range $c2 $start $end
+ } {}
+ set nstart $start
+ incr nstart 89
+ do_test btree5-3.3.$i.3 {
+ delete_range $c2 $start $nstart
+ btree_integrity_check $b1 1 $t2
+ } {}
+ incr start 90
+ do_test btree5-3.3.$i.4 {
+ check_range $c2 $start $end
+ } {}
+ incr end 100
+}
+
+
+btree_close_cursor $c2
+btree_commit $b1
+btree_close $b1
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/btree6.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/btree6.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,128 @@
+# 2004 May 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is the btree database backend - specifically
+# the B+tree tables. B+trees store all data on the leaves rather
+# than storing data with keys on interior nodes.
+#
+# $Id: btree6.test,v 1.4 2004/05/20 22:16:31 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+# Insert many entries into the table that cursor $cur points to.
+# The table should be an INTKEY table.
+#
+# Stagger the inserts. After the inserts complete, go back and do
+# deletes. Stagger the deletes too. Repeat this several times.
+#
+
+# Do N inserts into the table that cursor $cur points to, using random
+# keys between 0 and 1000000
+#
+proc random_inserts {cur N} {
+ global inscnt
+ while {$N>0} {
+ set k [expr {int(rand()*1000000)}]
+ if {[btree_move_to $cur $k]==0} {
+ continue; # entry already exists
+ }
+ incr inscnt
+ btree_insert $cur $k data-for-$k
+ incr N -1
+ }
+}
+set inscnt 0
+
+# Do N deletes from the table that $cur points to.
+#
+proc random_deletes {cur N} {
+ while {$N>0} {
+ set k [expr {int(rand()*1000000)}]
+ btree_move_to $cur $k
+ btree_delete $cur
+ incr N -1
+ }
+}
+
+# Make sure the table that $cur points to has exactly N entries.
+# Make sure the data for each entry agrees with its key.
+#
+proc check_table {cur N} {
+ btree_first $cur
+ set cnt 0
+ while {![btree_eof $cur]} {
+ if {[set data [btree_data $cur]] ne "data-for-[btree_key $cur]"} {
+ return "wrong data for entry $cnt"
+ }
+ set n [string length $data]
+ set fdata1 [btree_fetch_data $cur $n]
+ set fdata2 [btree_fetch_data $cur -1]
+ if {$fdata1 ne "" && $fdata1 ne $data} {
+ return "DataFetch returned the wrong value with amt=$n"
+ }
+ if {$fdata1 ne $fdata2} {
+ return "DataFetch returned the wrong value when amt=-1"
+ }
+ if {$n>10} {
+ set fdata3 [btree_fetch_data $cur 10]
+ if {$fdata3 ne [string range $data 0 9]} {
+ return "DataFetch returned the wrong value when amt=10"
+ }
+ }
+ incr cnt
+ btree_next $cur
+ }
+ if {$cnt!=$N} {
+ return "wrong number of entries. Got $cnt. Looking for $N"
+ }
+ return {}
+}
+
+# Initialize the database
+#
+file delete -force test1.bt
+file delete -force test1.bt-journal
+set b1 [btree_open test1.bt 2000 0]
+btree_begin_transaction $b1
+set tab [btree_create_table $b1 5]
+set cur [btree_cursor $b1 $tab 1]
+set btree_trace 0
+expr srand(1)
+
+# Do the tests.
+#
+set cnt 0
+for {set i 1} {$i<=40} {incr i} {
+ do_test btree6-1.$i.1 {
+ random_inserts $cur 200
+ incr cnt 200
+ check_table $cur $cnt
+ } {}
+ do_test btree6-1.$i.2 {
+ btree_integrity_check $b1 1 $tab
+ } {}
+ do_test btree6-1.$i.3 {
+ random_deletes $cur 90
+ incr cnt -90
+ check_table $cur $cnt
+ } {}
+ do_test btree6-1.$i.4 {
+ btree_integrity_check $b1 1 $tab
+ } {}
+}
+
+btree_close_cursor $cur
+btree_commit $b1
+btree_close $b1
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/btree7.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/btree7.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,50 @@
+# 2004 Jun 4
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is the btree database backend.
+#
+# $Id: btree7.test,v 1.2 2004/11/04 14:47:13 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Stress the balance routine by trying to create situations where
+# 3 neighboring nodes split into 5.
+#
+set bigdata _123456789 ;# 10
+append bigdata $bigdata ;# 20
+append bigdata $bigdata ;# 40
+append bigdata $bigdata ;# 80
+append bigdata $bigdata ;# 160
+append bigdata $bigdata ;# 320
+append bigdata $bigdata ;# 640
+set data450 [string range $bigdata 0 449]
+do_test btree7-1.1 {
+ execsql "
+ CREATE TABLE t1(x INTEGER PRIMARY KEY, y TEXT);
+ INSERT INTO t1 VALUES(1, '$bigdata');
+ INSERT INTO t1 VALUES(2, '$bigdata');
+ INSERT INTO t1 VALUES(3, '$data450');
+ INSERT INTO t1 VALUES(5, '$data450');
+ INSERT INTO t1 VALUES(8, '$bigdata');
+ INSERT INTO t1 VALUES(9, '$bigdata');
+ "
+} {}
+integrity_check btree7-1.2
+do_test btree7-1.3 {
+ execsql "
+ INSERT INTO t1 VALUES(4, '$bigdata');
+ "
+} {}
+integrity_check btree7-1.4
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/btree8.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/btree8.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,43 @@
+# 2005 August 2
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is the btree database backend.
+#
+# $Id: btree8.test,v 1.6 2005/08/02 17:13:12 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Ticket #1346: If the table rooted on page 1 contains a single entry
+# and that single entry has to flow out into another page because
+# page 1 is 100 bytes smaller than most other pages, then when you delete
+# that one entry, everything should still work.
+#
+do_test btree8-1.1 {
+ execsql {
+CREATE TABLE t1(x
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+ ----------------------------------------------------------------------------
+);
+DROP table t1;
+ }
+} {}
+integrity_check btree8-1.2
Added: freeswitch/trunk/libs/sqlite/test/busy.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/busy.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,44 @@
+# 2005 July 8
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file tests the busy handler
+#
+# $Id: busy.test,v 1.2 2005/09/17 18:02:37 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test busy-1.1 {
+ sqlite3 db2 test.db
+ execsql {
+ CREATE TABLE t1(x);
+ INSERT INTO t1 VALUES(1);
+ SELECT * FROM t1
+ }
+} 1
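+# The busy callback below records each retry count in ::busyargs and
+# returns 0 to keep retrying; once it has been invoked with a count
+# greater than 2 it returns 1, which gives up and lets the pending
+# "begin immediate" fail with "database is locked" (see busy-1.2/1.3).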
+proc busy x {
+ lappend ::busyargs $x
+ if {$x>2} {return 1}
+ return 0
+}
+set busyargs {}
+do_test busy-1.2 {
+ db busy busy
+ db2 eval {begin exclusive}
+ catchsql {begin immediate}
+} {1 {database is locked}}
+do_test busy-1.3 {
+ set busyargs
+} {0 1 2 3}
+
+db2 close
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/capi2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/capi2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,793 @@
+# 2003 January 29
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the callback-free C/C++ API.
+#
+# $Id: capi2.test,v 1.32 2006/08/16 16:42:48 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Return the text values from the current row pointed at by STMT as a list.
+proc get_row_values {STMT} {
+ set VALUES [list]
+ for {set i 0} {$i < [sqlite3_data_count $STMT]} {incr i} {
+ lappend VALUES [sqlite3_column_text $STMT $i]
+ }
+ return $VALUES
+}
+
+# Return the column names followed by declaration types for the result set
+# of the SQL statement STMT.
+#
+# i.e. for:
+# CREATE TABLE abc(a text, b integer);
+# SELECT * FROM abc;
+#
+# The result is {a b text integer}
+proc get_column_names {STMT} {
+ set VALUES [list]
+ for {set i 0} {$i < [sqlite3_column_count $STMT]} {incr i} {
+ lappend VALUES [sqlite3_column_name $STMT $i]
+ }
+ for {set i 0} {$i < [sqlite3_column_count $STMT]} {incr i} {
+ lappend VALUES [sqlite3_column_decltype $STMT $i]
+ }
+ return $VALUES
+}
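+
+# Illustrative sketch (mirrors the capi2-1.* tests below): compile a
+# statement, step it once, and inspect the row with the helpers above.
+#
+# set DB [sqlite3_connection_pointer db]
+# set VM [sqlite3_prepare $DB {SELECT name, rowid FROM sqlite_master} -1 TAIL]
+# sqlite3_step $VM ;# SQLITE_ROW
+# get_row_values $VM ;# e.g. {t1 1}
+# get_column_names $VM ;# e.g. {name rowid text INTEGER}
+# sqlite3_finalize $VM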
+
+# Check basic functionality
+#
+do_test capi2-1.1 {
+ set DB [sqlite3_connection_pointer db]
+ execsql {CREATE TABLE t1(a,b,c)}
+ set VM [sqlite3_prepare $DB {SELECT name, rowid FROM sqlite_master} -1 TAIL]
+ set TAIL
+} {}
+do_test capi2-1.2 {
+ sqlite3_step $VM
+} {SQLITE_ROW}
+do_test capi2-1.3 {
+ sqlite3_data_count $VM
+} {2}
+do_test capi2-1.4 {
+ get_row_values $VM
+} {t1 1}
+do_test capi2-1.5 {
+ get_column_names $VM
+} {name rowid text INTEGER}
+do_test capi2-1.6 {
+ sqlite3_step $VM
+} {SQLITE_DONE}
+do_test capi2-1.7 {
+ list [sqlite3_column_count $VM] [get_row_values $VM] [get_column_names $VM]
+} {2 {} {name rowid text INTEGER}}
+do_test capi2-1.8 {
+ sqlite3_step $VM
+} {SQLITE_MISUSE}
+
+# Update: In v2, once SQLITE_MISUSE is returned the statement handle cannot
+# be interrogated for more information. However in v3, since the column
+# count, names and types are determined at compile time, these are still
+# accessible after an SQLITE_MISUSE error.
+do_test capi2-1.9 {
+ list [sqlite3_column_count $VM] [get_row_values $VM] [get_column_names $VM]
+} {2 {} {name rowid text INTEGER}}
+do_test capi2-1.10 {
+ sqlite3_data_count $VM
+} {0}
+
+do_test capi2-1.11 {
+ sqlite3_finalize $VM
+} {SQLITE_OK}
+
+# Check to make sure that the "tail" of a multi-statement SQL script
+# is returned by sqlite3_prepare.
+#
+do_test capi2-2.1 {
+ set SQL {
+ SELECT name, rowid FROM sqlite_master;
+ SELECT name, rowid FROM sqlite_master WHERE 0;
+ -- A comment at the end
+ }
+ set VM [sqlite3_prepare $DB $SQL -1 SQL]
+ set SQL
+} {
+ SELECT name, rowid FROM sqlite_master WHERE 0;
+ -- A comment at the end
+ }
+do_test capi2-2.2 {
+ set r [sqlite3_step $VM]
+ lappend r [sqlite3_column_count $VM] \
+ [get_row_values $VM] \
+ [get_column_names $VM]
+} {SQLITE_ROW 2 {t1 1} {name rowid text INTEGER}}
+do_test capi2-2.3 {
+ set r [sqlite3_step $VM]
+ lappend r [sqlite3_column_count $VM] \
+ [get_row_values $VM] \
+ [get_column_names $VM]
+} {SQLITE_DONE 2 {} {name rowid text INTEGER}}
+do_test capi2-2.4 {
+ sqlite3_finalize $VM
+} {SQLITE_OK}
+do_test capi2-2.5 {
+ set VM [sqlite3_prepare $DB $SQL -1 SQL]
+ set SQL
+} {
+ -- A comment at the end
+ }
+do_test capi2-2.6 {
+ set r [sqlite3_step $VM]
+ lappend r [sqlite3_column_count $VM] \
+ [get_row_values $VM] \
+ [get_column_names $VM]
+} {SQLITE_DONE 2 {} {name rowid text INTEGER}}
+do_test capi2-2.7 {
+ sqlite3_finalize $VM
+} {SQLITE_OK}
+do_test capi2-2.8 {
+ set VM [sqlite3_prepare $DB $SQL -1 SQL]
+ list $SQL $VM
+} {{} {}}
+
+# Check the error handling.
+#
+do_test capi2-3.1 {
+ set rc [catch {
+ sqlite3_prepare $DB {select bogus from sqlite_master} -1 TAIL
+ } msg]
+ lappend rc $msg $TAIL
+} {1 {(1) no such column: bogus} {}}
+do_test capi2-3.2 {
+ set rc [catch {
+ sqlite3_prepare $DB {select bogus from } -1 TAIL
+ } msg]
+ lappend rc $msg $TAIL
+} {1 {(1) near " ": syntax error} {}}
+do_test capi2-3.3 {
+ set rc [catch {
+ sqlite3_prepare $DB {;;;;select bogus from sqlite_master} -1 TAIL
+ } msg]
+ lappend rc $msg $TAIL
+} {1 {(1) no such column: bogus} {}}
+do_test capi2-3.4 {
+ set rc [catch {
+ sqlite3_prepare $DB {select bogus from sqlite_master;x;} -1 TAIL
+ } msg]
+ lappend rc $msg $TAIL
+} {1 {(1) no such column: bogus} {x;}}
+do_test capi2-3.5 {
+ set rc [catch {
+ sqlite3_prepare $DB {select bogus from sqlite_master;;;x;} -1 TAIL
+ } msg]
+ lappend rc $msg $TAIL
+} {1 {(1) no such column: bogus} {;;x;}}
+do_test capi2-3.6 {
+ set rc [catch {
+ sqlite3_prepare $DB {select 5/0} -1 TAIL
+ } VM]
+ lappend rc $TAIL
+} {0 {}}
+do_test capi2-3.7 {
+ list [sqlite3_step $VM] \
+ [sqlite3_column_count $VM] \
+ [get_row_values $VM] \
+ [get_column_names $VM]
+} {SQLITE_ROW 1 {{}} {5/0 {}}}
+do_test capi2-3.8 {
+ sqlite3_finalize $VM
+} {SQLITE_OK}
+do_test capi2-3.9 {
+ execsql {CREATE UNIQUE INDEX i1 ON t1(a)}
+ set VM [sqlite3_prepare $DB {INSERT INTO t1 VALUES(1,2,3)} -1 TAIL]
+ set TAIL
+} {}
+do_test capi2-3.9b {db changes} {0}
+do_test capi2-3.10 {
+ list [sqlite3_step $VM] \
+ [sqlite3_column_count $VM] \
+ [get_row_values $VM] \
+ [get_column_names $VM]
+} {SQLITE_DONE 0 {} {}}
+
+# Update for v3 - the change has not actually happened until the query is
+# finalized. Is this going to cause trouble for anyone? Lee Nelson maybe?
+# (Later:) The change now happens just before SQLITE_DONE is returned.
+do_test capi2-3.10b {db changes} {1}
+do_test capi2-3.11 {
+ sqlite3_finalize $VM
+} {SQLITE_OK}
+do_test capi2-3.11b {db changes} {1}
+do_test capi2-3.12 {
+ sqlite3_finalize $VM
+} {SQLITE_MISUSE}
+do_test capi2-3.13 {
+ set VM [sqlite3_prepare $DB {INSERT INTO t1 VALUES(1,3,4)} -1 TAIL]
+ list [sqlite3_step $VM] \
+ [sqlite3_column_count $VM] \
+ [get_row_values $VM] \
+ [get_column_names $VM]
+} {SQLITE_ERROR 0 {} {}}
+
+# Update for v3: Preparing a statement does not affect the change counter.
+# (Test result changes from 0 to 1). (Later:) change counter updates occur
+# when sqlite3_step returns, not at finalize time.
+do_test capi2-3.13b {db changes} {0}
+
+do_test capi2-3.14 {
+ list [sqlite3_finalize $VM] [sqlite3_errmsg $DB]
+} {SQLITE_CONSTRAINT {column a is not unique}}
+do_test capi2-3.15 {
+ set VM [sqlite3_prepare $DB {CREATE TABLE t2(a NOT NULL, b)} -1 TAIL]
+ set TAIL
+} {}
+do_test capi2-3.16 {
+ list [sqlite3_step $VM] \
+ [sqlite3_column_count $VM] \
+ [get_row_values $VM] \
+ [get_column_names $VM]
+} {SQLITE_DONE 0 {} {}}
+do_test capi2-3.17 {
+ list [sqlite3_finalize $VM] [sqlite3_errmsg $DB]
+} {SQLITE_OK {not an error}}
+do_test capi2-3.18 {
+ set VM [sqlite3_prepare $DB {INSERT INTO t2 VALUES(NULL,2)} -1 TAIL]
+ list [sqlite3_step $VM] \
+ [sqlite3_column_count $VM] \
+ [get_row_values $VM] \
+ [get_column_names $VM]
+} {SQLITE_ERROR 0 {} {}}
+do_test capi2-3.19 {
+ list [sqlite3_finalize $VM] [sqlite3_errmsg $DB]
+} {SQLITE_CONSTRAINT {t2.a may not be NULL}}
+
+do_test capi2-3.20 {
+ execsql {
+ CREATE TABLE a1(message_id, name , UNIQUE(message_id, name) );
+ INSERT INTO a1 VALUES(1, 1);
+ }
+} {}
+do_test capi2-3.21 {
+ set VM [sqlite3_prepare $DB {INSERT INTO a1 VALUES(1, 1)} -1 TAIL]
+ sqlite3_step $VM
+} {SQLITE_ERROR}
+do_test capi2-3.22 {
+ sqlite3_errcode $DB
+} {SQLITE_ERROR}
+do_test capi2-3.23 {
+ sqlite3_finalize $VM
+} {SQLITE_CONSTRAINT}
+do_test capi2-3.24 {
+ sqlite3_errcode $DB
+} {SQLITE_CONSTRAINT}
+
+# Two or more virtual machines exist at the same time.
+#
+do_test capi2-4.1 {
+ set VM1 [sqlite3_prepare $DB {INSERT INTO t2 VALUES(1,2)} -1 TAIL]
+ set TAIL
+} {}
+do_test capi2-4.2 {
+ set VM2 [sqlite3_prepare $DB {INSERT INTO t2 VALUES(2,3)} -1 TAIL]
+ set TAIL
+} {}
+do_test capi2-4.3 {
+ set VM3 [sqlite3_prepare $DB {INSERT INTO t2 VALUES(3,4)} -1 TAIL]
+ set TAIL
+} {}
+do_test capi2-4.4 {
+ list [sqlite3_step $VM2] \
+ [sqlite3_column_count $VM2] \
+ [get_row_values $VM2] \
+ [get_column_names $VM2]
+} {SQLITE_DONE 0 {} {}}
+do_test capi2-4.5 {
+ execsql {SELECT * FROM t2 ORDER BY a}
+} {2 3}
+do_test capi2-4.6 {
+ sqlite3_finalize $VM2
+} {SQLITE_OK}
+do_test capi2-4.7 {
+ list [sqlite3_step $VM3] \
+ [sqlite3_column_count $VM3] \
+ [get_row_values $VM3] \
+ [get_column_names $VM3]
+} {SQLITE_DONE 0 {} {}}
+do_test capi2-4.8 {
+ execsql {SELECT * FROM t2 ORDER BY a}
+} {2 3 3 4}
+do_test capi2-4.9 {
+ sqlite3_finalize $VM3
+} {SQLITE_OK}
+do_test capi2-4.10 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_DONE 0 {} {}}
+do_test capi2-4.11 {
+ execsql {SELECT * FROM t2 ORDER BY a}
+} {1 2 2 3 3 4}
+do_test capi2-4.12 {
+ sqlite3_finalize $VM1
+} {SQLITE_OK}
+
+# Interleaved SELECTs
+#
+do_test capi2-5.1 {
+ set VM1 [sqlite3_prepare $DB {SELECT * FROM t2} -1 TAIL]
+ set VM2 [sqlite3_prepare $DB {SELECT * FROM t2} -1 TAIL]
+ set VM3 [sqlite3_prepare $DB {SELECT * FROM t2} -1 TAIL]
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 2 {2 3} {a b {} {}}}
+do_test capi2-5.2 {
+ list [sqlite3_step $VM2] \
+ [sqlite3_column_count $VM2] \
+ [get_row_values $VM2] \
+ [get_column_names $VM2]
+} {SQLITE_ROW 2 {2 3} {a b {} {}}}
+do_test capi2-5.3 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 2 {3 4} {a b {} {}}}
+do_test capi2-5.4 {
+ list [sqlite3_step $VM3] \
+ [sqlite3_column_count $VM3] \
+ [get_row_values $VM3] \
+ [get_column_names $VM3]
+} {SQLITE_ROW 2 {2 3} {a b {} {}}}
+do_test capi2-5.5 {
+ list [sqlite3_step $VM3] \
+ [sqlite3_column_count $VM3] \
+ [get_row_values $VM3] \
+ [get_column_names $VM3]
+} {SQLITE_ROW 2 {3 4} {a b {} {}}}
+do_test capi2-5.6 {
+ list [sqlite3_step $VM3] \
+ [sqlite3_column_count $VM3] \
+ [get_row_values $VM3] \
+ [get_column_names $VM3]
+} {SQLITE_ROW 2 {1 2} {a b {} {}}}
+do_test capi2-5.7 {
+ list [sqlite3_step $VM3] \
+ [sqlite3_column_count $VM3] \
+ [get_row_values $VM3] \
+ [get_column_names $VM3]
+} {SQLITE_DONE 2 {} {a b {} {}}}
+do_test capi2-5.8 {
+ sqlite3_finalize $VM3
+} {SQLITE_OK}
+do_test capi2-5.9 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 2 {1 2} {a b {} {}}}
+do_test capi2-5.10 {
+ sqlite3_finalize $VM1
+} {SQLITE_OK}
+do_test capi2-5.11 {
+ list [sqlite3_step $VM2] \
+ [sqlite3_column_count $VM2] \
+ [get_row_values $VM2] \
+ [get_column_names $VM2]
+} {SQLITE_ROW 2 {3 4} {a b {} {}}}
+do_test capi2-5.12 {
+ list [sqlite3_step $VM2] \
+ [sqlite3_column_count $VM2] \
+ [get_row_values $VM2] \
+ [get_column_names $VM2]
+} {SQLITE_ROW 2 {1 2} {a b {} {}}}
+do_test capi2-5.13 {
+ sqlite3_finalize $VM2
+} {SQLITE_OK}
+
+# Check for proper SQLITE_BUSY returns.
+#
+do_test capi2-6.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t3(x counter);
+ INSERT INTO t3 VALUES(1);
+ INSERT INTO t3 VALUES(2);
+ INSERT INTO t3 SELECT x+2 FROM t3;
+ INSERT INTO t3 SELECT x+4 FROM t3;
+ INSERT INTO t3 SELECT x+8 FROM t3;
+ COMMIT;
+ }
+ set VM1 [sqlite3_prepare $DB {SELECT * FROM t3} -1 TAIL]
+ sqlite3 db2 test.db
+ execsql {BEGIN} db2
+} {}
+# Update for v3: BEGIN doesn't write-lock the database. It is quite
+# difficult to get v3 to write-lock the database, which causes a few
+# problems for test scripts.
+#
+# do_test capi2-6.2 {
+# list [sqlite3_step $VM1] \
+# [sqlite3_column_count $VM1] \
+# [get_row_values $VM1] \
+# [get_column_names $VM1]
+# } {SQLITE_BUSY 0 {} {}}
+do_test capi2-6.3 {
+ execsql {COMMIT} db2
+} {}
+do_test capi2-6.4 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 1 {x counter}}
+do_test capi2-6.5 {
+ catchsql {INSERT INTO t3 VALUES(10);} db2
+} {1 {database is locked}}
+do_test capi2-6.6 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 2 {x counter}}
+do_test capi2-6.7 {
+ execsql {SELECT * FROM t2} db2
+} {2 3 3 4 1 2}
+do_test capi2-6.8 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 3 {x counter}}
+do_test capi2-6.9 {
+ execsql {SELECT * FROM t2}
+} {2 3 3 4 1 2}
+do_test capi2-6.10 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 4 {x counter}}
+do_test capi2-6.11 {
+ execsql {BEGIN}
+} {}
+do_test capi2-6.12 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 5 {x counter}}
+
+# A read no longer blocks a write in the same connection.
+#do_test capi2-6.13 {
+# catchsql {UPDATE t3 SET x=x+1}
+#} {1 {database table is locked}}
+
+do_test capi2-6.14 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 6 {x counter}}
+do_test capi2-6.15 {
+ execsql {SELECT * FROM t1}
+} {1 2 3}
+do_test capi2-6.16 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 7 {x counter}}
+do_test capi2-6.17 {
+ catchsql {UPDATE t1 SET b=b+1}
+} {0 {}}
+do_test capi2-6.18 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 8 {x counter}}
+do_test capi2-6.19 {
+ execsql {SELECT * FROM t1}
+} {1 3 3}
+do_test capi2-6.20 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 9 {x counter}}
+#do_test capi2-6.21 {
+# execsql {ROLLBACK; SELECT * FROM t1}
+#} {1 2 3}
+do_test capi2-6.22 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 10 {x counter}}
+#do_test capi2-6.23 {
+# execsql {BEGIN TRANSACTION;}
+#} {}
+do_test capi2-6.24 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 11 {x counter}}
+do_test capi2-6.25 {
+ execsql {
+ INSERT INTO t1 VALUES(2,3,4);
+ SELECT * FROM t1;
+ }
+} {1 3 3 2 3 4}
+do_test capi2-6.26 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 12 {x counter}}
+do_test capi2-6.27 {
+ catchsql {
+ INSERT INTO t1 VALUES(2,4,5);
+ SELECT * FROM t1;
+ }
+} {1 {column a is not unique}}
+do_test capi2-6.28 {
+ list [sqlite3_step $VM1] \
+ [sqlite3_column_count $VM1] \
+ [get_row_values $VM1] \
+ [get_column_names $VM1]
+} {SQLITE_ROW 1 13 {x counter}}
+do_test capi2-6.99 {
+ sqlite3_finalize $VM1
+} {SQLITE_OK}
+catchsql {ROLLBACK}
+
+do_test capi2-7.1 {
+ stepsql $DB {
+ SELECT * FROM t1
+ }
+} {0 1 2 3}
+do_test capi2-7.2 {
+ stepsql $DB {
+ PRAGMA count_changes=on
+ }
+} {0}
+do_test capi2-7.3 {
+ stepsql $DB {
+ UPDATE t1 SET a=a+10;
+ }
+} {0 1}
+do_test capi2-7.4 {
+ stepsql $DB {
+ INSERT INTO t1 SELECT a+1,b+1,c+1 FROM t1;
+ }
+} {0 1}
+do_test capi2-7.4b {sqlite3_changes $DB} {1}
+do_test capi2-7.5 {
+ stepsql $DB {
+ UPDATE t1 SET a=a+10;
+ }
+} {0 2}
+do_test capi2-7.5b {sqlite3_changes $DB} {2}
+do_test capi2-7.6 {
+ stepsql $DB {
+ SELECT * FROM t1;
+ }
+} {0 21 2 3 22 3 4}
+do_test capi2-7.7 {
+ stepsql $DB {
+ INSERT INTO t1 SELECT a+2,b+2,c+2 FROM t1;
+ }
+} {0 2}
+do_test capi2-7.8 {
+ sqlite3_changes $DB
+} {2}
+do_test capi2-7.9 {
+ stepsql $DB {
+ SELECT * FROM t1;
+ }
+} {0 21 2 3 22 3 4 23 4 5 24 5 6}
+do_test capi2-7.10 {
+ stepsql $DB {
+ UPDATE t1 SET a=a-20;
+ SELECT * FROM t1;
+ }
+} {0 4 1 2 3 2 3 4 3 4 5 4 5 6}
+
+# Update for version 3: A SELECT statement no longer resets the change
+# counter (Test result changes from 0 to 4).
+do_test capi2-7.11 {
+ sqlite3_changes $DB
+} {4}
+do_test capi2-7.11a {
+ execsql {SELECT count(*) FROM t1}
+} {4}
+
+ifcapable {explain} {
+ do_test capi2-7.12 {
+btree_breakpoint
+ set x [stepsql $DB {EXPLAIN SELECT * FROM t1}]
+ lindex $x 0
+ } {0}
+}
+
+# Ticket #261 - make sure we can finalize before the end of a query.
+#
+do_test capi2-8.1 {
+ set VM1 [sqlite3_prepare $DB {SELECT * FROM t2} -1 TAIL]
+ sqlite3_finalize $VM1
+} {SQLITE_OK}
+
+# Tickets #384 and #385 - make sure the TAIL argument to sqlite3_prepare
+# and all of the return pointers in sqlite3_step can be null.
+#
+do_test capi2-9.1 {
+ set VM1 [sqlite3_prepare $DB {SELECT * FROM t2} -1 DUMMY]
+ sqlite3_step $VM1
+ sqlite3_finalize $VM1
+} {SQLITE_OK}
+
+# Test that passing a NULL pointer to sqlite3_finalize() or sqlite3_reset
+# does not cause an error.
+do_test capi2-10.1 {
+ sqlite3_finalize 0
+} {SQLITE_OK}
+do_test capi2-10.2 {
+ sqlite3_reset 0
+} {SQLITE_OK}
+
+#---------------------------------------------------------------------------
+# The following tests - capi2-11.* - test the "column origin" APIs.
+#
+# sqlite3_column_origin_name()
+# sqlite3_column_database_name()
+# sqlite3_column_table_name()
+#
+
+ifcapable columnmetadata {
+
+# This proc uses the database handle $::DB to compile the SQL statement passed
+# as a parameter. The return value of this procedure is a list with one
+# element for each column returned by the compiled statement. Each element of
+# this list is itself a list of length three, consisting of the origin
+# database, table and column for the corresponding returned column.
+proc check_origins {sql} {
+ set ret [list]
+ set ::STMT [sqlite3_prepare $::DB $sql -1 dummy]
+ for {set i 0} {$i < [sqlite3_column_count $::STMT]} {incr i} {
+ lappend ret [list \
+ [sqlite3_column_database_name $::STMT $i] \
+ [sqlite3_column_table_name $::STMT $i] \
+ [sqlite3_column_origin_name $::STMT $i] \
+ ]
+ }
+ sqlite3_finalize $::STMT
+ return $ret
+}
+do_test capi2-11.1 {
+ execsql {
+ CREATE TABLE tab1(col1, col2);
+ }
+} {}
+do_test capi2-11.2 {
+ check_origins {SELECT col2, col1 FROM tab1}
+} [list {main tab1 col2} {main tab1 col1}]
+do_test capi2-11.3 {
+ check_origins {SELECT col2 AS hello, col1 AS world FROM tab1}
+} [list {main tab1 col2} {main tab1 col1}]
+
+ifcapable subquery {
+ do_test capi2-11.4 {
+ check_origins {SELECT b, a FROM (SELECT col1 AS a, col2 AS b FROM tab1)}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-11.5 {
+ check_origins {SELECT (SELECT col2 FROM tab1), (SELECT col1 FROM tab1)}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-11.6 {
+ check_origins {SELECT (SELECT col2), (SELECT col1) FROM tab1}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-11.7 {
+ check_origins {SELECT * FROM tab1}
+ } [list {main tab1 col1} {main tab1 col2}]
+ do_test capi2-11.8 {
+ check_origins {SELECT * FROM (SELECT * FROM tab1)}
+ } [list {main tab1 col1} {main tab1 col2}]
+}
+
+ifcapable view&&subquery {
+ do_test capi2-12.1 {
+ execsql {
+ CREATE VIEW view1 AS SELECT * FROM tab1;
+ }
+ } {}
+ do_test capi2-12.2 {
+ check_origins {SELECT col2, col1 FROM view1}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-12.3 {
+ check_origins {SELECT col2 AS hello, col1 AS world FROM view1}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-12.4 {
+ check_origins {SELECT b, a FROM (SELECT col1 AS a, col2 AS b FROM view1)}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-12.5 {
+ check_origins {SELECT (SELECT col2 FROM view1), (SELECT col1 FROM view1)}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-12.6 {
+ check_origins {SELECT (SELECT col2), (SELECT col1) FROM view1}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-12.7 {
+ check_origins {SELECT * FROM view1}
+ } [list {main tab1 col1} {main tab1 col2}]
+ do_test capi2-12.8 {
+ check_origins {select * from (select * from view1)}
+ } [list {main tab1 col1} {main tab1 col2}]
+ do_test capi2-12.9 {
+ check_origins {select * from (select * from (select * from view1))}
+ } [list {main tab1 col1} {main tab1 col2}]
+ do_test capi2-12.10 {
+ db close
+ sqlite3 db test.db
+ set ::DB [sqlite3_connection_pointer db]
+ check_origins {select * from (select * from (select * from view1))}
+ } [list {main tab1 col1} {main tab1 col2}]
+
+ # This view will thwart the flattening optimization.
+ do_test capi2-13.1 {
+ execsql {
+ CREATE VIEW view2 AS SELECT * FROM tab1 limit 10 offset 10;
+ }
+ } {}
+ breakpoint
+ do_test capi2-13.2 {
+ check_origins {SELECT col2, col1 FROM view2}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-13.3 {
+ check_origins {SELECT col2 AS hello, col1 AS world FROM view2}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-13.4 {
+ check_origins {SELECT b, a FROM (SELECT col1 AS a, col2 AS b FROM view2)}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-13.5 {
+ check_origins {SELECT (SELECT col2 FROM view2), (SELECT col1 FROM view2)}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-13.6 {
+ check_origins {SELECT (SELECT col2), (SELECT col1) FROM view2}
+ } [list {main tab1 col2} {main tab1 col1}]
+ do_test capi2-13.7 {
+ check_origins {SELECT * FROM view2}
+ } [list {main tab1 col1} {main tab1 col2}]
+ do_test capi2-13.8 {
+ check_origins {select * from (select * from view2)}
+ } [list {main tab1 col1} {main tab1 col2}]
+ do_test capi2-13.9 {
+ check_origins {select * from (select * from (select * from view2))}
+ } [list {main tab1 col1} {main tab1 col2}]
+ do_test capi2-13.10 {
+ db close
+ sqlite3 db test.db
+ set ::DB [sqlite3_connection_pointer db]
+ check_origins {select * from (select * from (select * from view2))}
+ } [list {main tab1 col1} {main tab1 col2}]
+ do_test capi2-13.11 {
+ check_origins {select * from (select * from tab1 limit 10 offset 10)}
+ } [list {main tab1 col1} {main tab1 col2}]
+}
+
+
+} ;# ifcapable columnmetadata
+
+db2 close
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/capi3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/capi3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1050 @@
+# 2003 January 29
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the callback-free C/C++ API.
+#
+# $Id: capi3.test,v 1.46 2006/08/16 16:42:48 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Return the UTF-16 representation of the supplied UTF-8 string $str.
+# If $nt is true, append two 0x00 bytes as a nul terminator.
+proc utf16 {str {nt 1}} {
+ set r [encoding convertto unicode $str]
+ if {$nt} {
+ append r "\x00\x00"
+ }
+ return $r
+}
+
+# Return the UTF-8 representation of the supplied UTF-16 string $str.
+proc utf8 {str} {
+ # If $str ends in two 0x00 bytes, strip them off before
+ # converting to UTF-8 using Tcl.
+ binary scan $str \c* vals
+ if {[lindex $vals end]==0 && [lindex $vals end-1]==0} {
+ set str [binary format \c* [lrange $vals 0 end-2]]
+ }
+
+ set r [encoding convertfrom unicode $str]
+ return $r
+}
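+
+# Round-trip sketch: [utf8 [utf16 hello]] returns "hello" again; utf16
+# appends a two-byte nul terminator, which utf8 strips before converting
+# the string back.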
+
+# These tests complement those in capi2.test. They are organized
+# as follows:
+#
+# capi3-1.*: Test sqlite3_prepare
+# capi3-2.*: Test sqlite3_prepare16
+# capi3-3.*: Test sqlite3_open
+# capi3-4.*: Test sqlite3_open16
+# capi3-5.*: Test the various sqlite3_result_* APIs
+# capi3-6.*: Test that sqlite3_close fails if there are outstanding VMs.
+#
+
+set DB [sqlite3_connection_pointer db]
+
+do_test capi3-1.0 {
+ sqlite3_get_autocommit $DB
+} 1
+do_test capi3-1.1 {
+ set STMT [sqlite3_prepare $DB {SELECT name FROM sqlite_master} -1 TAIL]
+ sqlite3_finalize $STMT
+ set TAIL
+} {}
+do_test capi3-1.2 {
+ sqlite3_errcode $DB
+} {SQLITE_OK}
+do_test capi3-1.3 {
+ sqlite3_errmsg $DB
+} {not an error}
+do_test capi3-1.4 {
+ set sql {SELECT name FROM sqlite_master;SELECT 10}
+ set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+ sqlite3_finalize $STMT
+ set TAIL
+} {SELECT 10}
+do_test capi3-1.5 {
+ set sql {SELECT namex FROM sqlite_master}
+ catch {
+ set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+ }
+} {1}
+do_test capi3-1.6 {
+ sqlite3_errcode $DB
+} {SQLITE_ERROR}
+do_test capi3-1.7 {
+ sqlite3_errmsg $DB
+} {no such column: namex}
+
+ifcapable {utf16} {
+ do_test capi3-2.1 {
+ set sql16 [utf16 {SELECT name FROM sqlite_master}]
+ set STMT [sqlite3_prepare16 $DB $sql16 -1 ::TAIL]
+ sqlite3_finalize $STMT
+ utf8 $::TAIL
+ } {}
+ do_test capi3-2.2 {
+ set sql [utf16 {SELECT name FROM sqlite_master;SELECT 10}]
+ set STMT [sqlite3_prepare16 $DB $sql -1 TAIL]
+ sqlite3_finalize $STMT
+ utf8 $TAIL
+ } {SELECT 10}
+ do_test capi3-2.3 {
+ set sql [utf16 {SELECT namex FROM sqlite_master}]
+ catch {
+ set STMT [sqlite3_prepare16 $DB $sql -1 TAIL]
+ }
+ } {1}
+ do_test capi3-2.4 {
+ sqlite3_errcode $DB
+ } {SQLITE_ERROR}
+ do_test capi3-2.5 {
+ sqlite3_errmsg $DB
+ } {no such column: namex}
+
+ ifcapable schema_pragmas {
+ do_test capi3-2.6 {
+ execsql {CREATE TABLE tablename(x)}
+ set sql16 [utf16 {PRAGMA table_info("TableName")}]
+ set STMT [sqlite3_prepare16 $DB $sql16 -1 TAIL]
+ sqlite3_step $STMT
+ } SQLITE_ROW
+ do_test capi3-2.7 {
+ sqlite3_step $STMT
+ } SQLITE_DONE
+ do_test capi3-2.8 {
+ sqlite3_finalize $STMT
+ } SQLITE_OK
+ }
+
+} ;# endif utf16
+
+# rename sqlite3_open sqlite3_open_old
+# proc sqlite3_open {fname options} {sqlite3_open_new $fname $options}
+
+do_test capi3-3.1 {
+ set db2 [sqlite3_open test.db {}]
+ sqlite3_errcode $db2
+} {SQLITE_OK}
+# FIX ME: Should test that the db handle works.
+do_test capi3-3.2 {
+ sqlite3_close $db2
+} {SQLITE_OK}
+do_test capi3-3.3 {
+ catch {
+ set db2 [sqlite3_open /bogus/path/test.db {}]
+ }
+ sqlite3_errcode $db2
+} {SQLITE_CANTOPEN}
+do_test capi3-3.4 {
+ sqlite3_errmsg $db2
+} {unable to open database file}
+do_test capi3-3.5 {
+ sqlite3_close $db2
+} {SQLITE_OK}
+do_test capi3-3.6.1 {
+ sqlite3_close $db2
+} {SQLITE_MISUSE}
+do_test capi3-3.6.2 {
+ sqlite3_errmsg $db2
+} {library routine called out of sequence}
+ifcapable {utf16} {
+ do_test capi3-3.6.3 {
+ utf8 [sqlite3_errmsg16 $db2]
+ } {library routine called out of sequence}
+}
+
+# rename sqlite3_open ""
+# rename sqlite3_open_old sqlite3_open
+
+ifcapable {utf16} {
+do_test capi3-4.1 {
+ set db2 [sqlite3_open16 [utf16 test.db] {}]
+ sqlite3_errcode $db2
+} {SQLITE_OK}
+# FIX ME: Should test that the db handle works.
+do_test capi3-4.2 {
+ sqlite3_close $db2
+} {SQLITE_OK}
+do_test capi3-4.3 {
+ catch {
+ set db2 [sqlite3_open16 [utf16 /bogus/path/test.db] {}]
+ }
+ sqlite3_errcode $db2
+} {SQLITE_CANTOPEN}
+do_test capi3-4.4 {
+ utf8 [sqlite3_errmsg16 $db2]
+} {unable to open database file}
+do_test capi3-4.5 {
+ sqlite3_close $db2
+} {SQLITE_OK}
+} ;# utf16
+
+# This proc is used to test the following API calls:
+#
+# sqlite3_column_count
+# sqlite3_column_name
+# sqlite3_column_name16
+# sqlite3_column_decltype
+# sqlite3_column_decltype16
+#
+# $STMT is a compiled SQL statement. $test is a prefix
+# to use for test names within this proc. $names is a list
+# of the column names that should be returned by $STMT.
+# $decltypes is a list of column declaration types for $STMT.
+#
+# Example:
+#
+# set STMT [sqlite3_prepare $DB "SELECT 1, 2, 3;" -1 DUMMY]
+# check_header $STMT test1.1 {1 2 3} {"" "" ""}
+#
+proc check_header {STMT test names decltypes} {
+
+ # Use the return value of sqlite3_column_count() to build
+ # a list of column indexes. i.e. If sqlite3_column_count
+ # is 3, build the list {0 1 2}.
+ set ::idxlist [list]
+ set ::numcols [sqlite3_column_count $STMT]
+ for {set i 0} {$i < $::numcols} {incr i} {lappend ::idxlist $i}
+
+ # Column names in UTF-8
+ do_test $test.1 {
+ set cnamelist [list]
+ foreach i $idxlist {lappend cnamelist [sqlite3_column_name $STMT $i]}
+ set cnamelist
+ } $names
+
+ # Column names in UTF-16
+ ifcapable {utf16} {
+ do_test $test.2 {
+ set cnamelist [list]
+ foreach i $idxlist {
+ lappend cnamelist [utf8 [sqlite3_column_name16 $STMT $i]]
+ }
+ set cnamelist
+ } $names
+ }
+
+ # Column names in UTF-8
+ do_test $test.3 {
+ set cnamelist [list]
+ foreach i $idxlist {lappend cnamelist [sqlite3_column_name $STMT $i]}
+ set cnamelist
+ } $names
+
+ # Column names in UTF-16
+ ifcapable {utf16} {
+ do_test $test.4 {
+ set cnamelist [list]
+ foreach i $idxlist {
+ lappend cnamelist [utf8 [sqlite3_column_name16 $STMT $i]]
+ }
+ set cnamelist
+ } $names
+ }
+
+ # Column declaration types in UTF-8
+ do_test $test.5 {
+ set cnamelist [list]
+ foreach i $idxlist {lappend cnamelist [sqlite3_column_decltype $STMT $i]}
+ set cnamelist
+ } $decltypes
+
+ # Column declaration types in UTF-16
+ ifcapable {utf16} {
+ do_test $test.6 {
+ set cnamelist [list]
+ foreach i $idxlist {
+ lappend cnamelist [utf8 [sqlite3_column_decltype16 $STMT $i]]
+ }
+ set cnamelist
+ } $decltypes
+ }
+
+
+ # Test some out of range conditions:
+ ifcapable {utf16} {
+ do_test $test.7 {
+ list \
+ [sqlite3_column_name $STMT -1] \
+ [sqlite3_column_name16 $STMT -1] \
+ [sqlite3_column_decltype $STMT -1] \
+ [sqlite3_column_decltype16 $STMT -1] \
+ [sqlite3_column_name $STMT $numcols] \
+ [sqlite3_column_name16 $STMT $numcols] \
+ [sqlite3_column_decltype $STMT $numcols] \
+ [sqlite3_column_decltype16 $STMT $numcols]
+ } {{} {} {} {} {} {} {} {}}
+ }
+}
+
+# This proc is used to test the following API calls:
+#
+# sqlite3_column_origin_name
+# sqlite3_column_origin_name16
+# sqlite3_column_table_name
+# sqlite3_column_table_name16
+# sqlite3_column_database_name
+# sqlite3_column_database_name16
+#
+# $STMT is a compiled SQL statement. $test is a prefix
+# to use for test names within this proc. $dbs, $tables and $cols
+# are lists of the origin databases, tables and columns that should
+# be reported for the columns returned by $STMT.
+#
+# Example:
+#
+# set STMT [sqlite3_prepare $DB "SELECT a, b, c FROM t1;" -1 DUMMY]
+# check_origin_header $STMT test1.1 {main main main} {t1 t1 t1} {a b c}
+#
+proc check_origin_header {STMT test dbs tables cols} {
+ # If sqlite3_column_origin_name() and friends are not compiled into
+ # this build, this proc is a no-op.
+ifcapable columnmetadata {
+
+ # Use the return value of sqlite3_column_count() to build
+ # a list of column indexes. i.e. If sqlite3_column_count
+ # is 3, build the list {0 1 2}.
+ set ::idxlist [list]
+ set ::numcols [sqlite3_column_count $STMT]
+ for {set i 0} {$i < $::numcols} {incr i} {lappend ::idxlist $i}
+
+ # Database names in UTF-8
+ do_test $test.8 {
+ set cnamelist [list]
+ foreach i $idxlist {
+ lappend cnamelist [sqlite3_column_database_name $STMT $i]
+ }
+ set cnamelist
+ } $dbs
+
+ # Database names in UTF-16
+ ifcapable {utf16} {
+ do_test $test.9 {
+ set cnamelist [list]
+ foreach i $idxlist {
+ lappend cnamelist [utf8 [sqlite3_column_database_name16 $STMT $i]]
+ }
+ set cnamelist
+ } $dbs
+ }
+
+ # Table names in UTF-8
+ do_test $test.10 {
+ set cnamelist [list]
+ foreach i $idxlist {
+ lappend cnamelist [sqlite3_column_table_name $STMT $i]
+ }
+ set cnamelist
+ } $tables
+
+ # Table names in UTF-16
+ ifcapable {utf16} {
+ do_test $test.11 {
+ set cnamelist [list]
+ foreach i $idxlist {
+ lappend cnamelist [utf8 [sqlite3_column_table_name16 $STMT $i]]
+ }
+ set cnamelist
+ } $tables
+ }
+
+ # Origin names in UTF-8
+ do_test $test.12 {
+ set cnamelist [list]
+ foreach i $idxlist {
+ lappend cnamelist [sqlite3_column_origin_name $STMT $i]
+ }
+ set cnamelist
+ } $cols
+
+ # Origin names in UTF-16
+ ifcapable {utf16} {
+ do_test $test.13 {
+ set cnamelist [list]
+ foreach i $idxlist {
+ lappend cnamelist [utf8 [sqlite3_column_origin_name16 $STMT $i]]
+ }
+ set cnamelist
+ } $cols
+ }
+ }
+}
+
+# This proc is used to test the following APIs:
+#
+# sqlite3_data_count
+# sqlite3_column_type
+# sqlite3_column_int
+# sqlite3_column_text
+# sqlite3_column_text16
+# sqlite3_column_double
+#
+# $STMT is a compiled SQL statement for which the previous call
+# to sqlite3_step returned SQLITE_ROW. $test is a prefix to use
+# for test names within this proc. $types is a list of the
+# manifest types for the current row. $ints, $doubles and $strings
+# are lists of the integer, real and string representations of
+# the values in the current row.
+#
+# Example:
+#
+# set STMT [sqlite3_prepare $DB "SELECT 'hello', 1.1, NULL" -1 DUMMY]
+# sqlite3_step $STMT
+# check_data $STMT test1.2 {TEXT REAL NULL} {0 1 0} {0 1.1 0} {hello 1.1 {}}
+#
+proc check_data {STMT test types ints doubles strings} {
+
+ # Use the return value of sqlite3_column_count() to build
+ # a list of column indexes. i.e. If sqlite3_column_count
+ # is 3, build the list {0 1 2}.
+ set ::idxlist [list]
+ set numcols [sqlite3_data_count $STMT]
+ for {set i 0} {$i < $numcols} {incr i} {lappend ::idxlist $i}
+
+# types
+do_test $test.1 {
+ set types [list]
+ foreach i $idxlist {lappend types [sqlite3_column_type $STMT $i]}
+ set types
+} $types
+
+# Integers
+do_test $test.2 {
+ set ints [list]
+ foreach i $idxlist {lappend ints [sqlite3_column_int64 $STMT $i]}
+ set ints
+} $ints
+
+# bytes
+set lens [list]
+foreach i $::idxlist {
+ lappend lens [string length [lindex $strings $i]]
+}
+do_test $test.3 {
+ set bytes [list]
+ set lens [list]
+ foreach i $idxlist {
+ lappend bytes [sqlite3_column_bytes $STMT $i]
+ }
+ set bytes
+} $lens
+
+# bytes16
+ifcapable {utf16} {
+ set lens [list]
+ foreach i $::idxlist {
+ lappend lens [expr 2 * [string length [lindex $strings $i]]]
+ }
+ do_test $test.4 {
+ set bytes [list]
+ set lens [list]
+ foreach i $idxlist {
+ lappend bytes [sqlite3_column_bytes16 $STMT $i]
+ }
+ set bytes
+ } $lens
+}
+
+# Blob
+do_test $test.5 {
+ set utf8 [list]
+ foreach i $idxlist {lappend utf8 [sqlite3_column_blob $STMT $i]}
+ set utf8
+} $strings
+
+# UTF-8
+do_test $test.6 {
+ set utf8 [list]
+ foreach i $idxlist {lappend utf8 [sqlite3_column_text $STMT $i]}
+ set utf8
+} $strings
+
+# Floats
+do_test $test.7 {
+ set utf8 [list]
+ foreach i $idxlist {lappend utf8 [sqlite3_column_double $STMT $i]}
+ set utf8
+} $doubles
+
+# UTF-16
+ifcapable {utf16} {
+ do_test $test.8 {
+ set utf8 [list]
+ foreach i $idxlist {lappend utf8 [utf8 [sqlite3_column_text16 $STMT $i]]}
+ set utf8
+ } $strings
+}
+
+# Integers
+do_test $test.9 {
+ set ints [list]
+ foreach i $idxlist {lappend ints [sqlite3_column_int $STMT $i]}
+ set ints
+} $ints
+
+# Floats
+do_test $test.10 {
+ set utf8 [list]
+ foreach i $idxlist {lappend utf8 [sqlite3_column_double $STMT $i]}
+ set utf8
+} $doubles
+
+# UTF-8
+do_test $test.11 {
+ set utf8 [list]
+ foreach i $idxlist {lappend utf8 [sqlite3_column_text $STMT $i]}
+ set utf8
+} $strings
+
+# Types
+do_test $test.12 {
+ set types [list]
+ foreach i $idxlist {lappend types [sqlite3_column_type $STMT $i]}
+ set types
+} $types
+
+# Test that an out of range request returns the equivalent of NULL
+do_test $test.13 {
+ sqlite3_column_int $STMT -1
+} {0}
+do_test $test.14 {
+ sqlite3_column_text $STMT -1
+} {}
+
+}
+
+ifcapable !floatingpoint {
+ finish_test
+ return
+}
+
+do_test capi3-5.0 {
+ execsql {
+ CREATE TABLE t1(a VARINT, b BLOB, c VARCHAR(16));
+ INSERT INTO t1 VALUES(1, 2, 3);
+ INSERT INTO t1 VALUES('one', 'two', NULL);
+ INSERT INTO t1 VALUES(1.2, 1.3, 1.4);
+ }
+ set sql "SELECT * FROM t1"
+ set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+ sqlite3_column_count $STMT
+} 3
+
+check_header $STMT capi3-5.1 {a b c} {VARINT BLOB VARCHAR(16)}
+check_origin_header $STMT capi3-5.1 {main main main} {t1 t1 t1} {a b c}
+do_test capi3-5.2 {
+ sqlite3_step $STMT
+} SQLITE_ROW
+
+check_header $STMT capi3-5.3 {a b c} {VARINT BLOB VARCHAR(16)}
+check_origin_header $STMT capi3-5.3 {main main main} {t1 t1 t1} {a b c}
+check_data $STMT capi3-5.4 {INTEGER INTEGER TEXT} {1 2 3} {1.0 2.0 3.0} {1 2 3}
+
+do_test capi3-5.5 {
+ sqlite3_step $STMT
+} SQLITE_ROW
+
+check_header $STMT capi3-5.6 {a b c} {VARINT BLOB VARCHAR(16)}
+check_origin_header $STMT capi3-5.6 {main main main} {t1 t1 t1} {a b c}
+check_data $STMT capi3-5.7 {TEXT TEXT NULL} {0 0 0} {0.0 0.0 0.0} {one two {}}
+
+do_test capi3-5.8 {
+ sqlite3_step $STMT
+} SQLITE_ROW
+
+check_header $STMT capi3-5.9 {a b c} {VARINT BLOB VARCHAR(16)}
+check_origin_header $STMT capi3-5.9 {main main main} {t1 t1 t1} {a b c}
+check_data $STMT capi3-5.10 {FLOAT FLOAT TEXT} {1 1 1} {1.2 1.3 1.4} {1.2 1.3 1.4}
+
+do_test capi3-5.11 {
+ sqlite3_step $STMT
+} SQLITE_DONE
+
+do_test capi3-5.12 {
+ sqlite3_finalize $STMT
+} SQLITE_OK
+
+do_test capi3-5.20 {
+ set sql "SELECT a, sum(b), max(c) FROM t1 GROUP BY a"
+ set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+ sqlite3_column_count $STMT
+} 3
+
+check_header $STMT capi3-5.21 {a sum(b) max(c)} {VARINT {} {}}
+check_origin_header $STMT capi3-5.22 {main {} {}} {t1 {} {}} {a {} {}}
+do_test capi3-5.23 {
+ sqlite3_finalize $STMT
+} SQLITE_OK
+
+
+set ::ENC [execsql {pragma encoding}]
+db close
+
+do_test capi3-6.0 {
+btree_breakpoint
+ sqlite3 db test.db
+ set DB [sqlite3_connection_pointer db]
+btree_breakpoint
+ sqlite3_key $DB xyzzy
+ set sql {SELECT a FROM t1 order by rowid}
+ set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+ expr 0
+} {0}
+do_test capi3-6.1 {
+ db cache flush
+ sqlite3_close $DB
+} {SQLITE_BUSY}
+do_test capi3-6.2 {
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+check_data $STMT capi3-6.3 {INTEGER} {1} {1.0} {1}
+do_test capi3-6.3 {
+ sqlite3_finalize $STMT
+} {SQLITE_OK}
+do_test capi3-6.4 {
+ db cache flush
+ sqlite3_close $DB
+} {SQLITE_OK}
+db close
+
+if {![sqlite3 -has-codec]} {
+ # Test what happens when the library encounters a newer file format.
+ # Do this by updating the file format via the btree layer.
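+ # (Note: the list element updated below is assumed to be the schema-layer
+ # file-format number; setting it to 5, beyond what this build understands,
+ # should produce the "unsupported file format" error checked in capi3-7.2.)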
+ do_test capi3-7.1 {
+ set ::bt [btree_open test.db 10 0]
+ btree_begin_transaction $::bt
+ set meta [btree_get_meta $::bt]
+ lset meta 2 5
+ eval [concat btree_update_meta $::bt [lrange $meta 0 end]]
+ btree_commit $::bt
+ btree_close $::bt
+ } {}
+ do_test capi3-7.2 {
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM sqlite_master;
+ }
+ } {1 {unsupported file format}}
+ db close
+}
+
+if {![sqlite3 -has-codec]} {
+ # Now test that the library correctly handles bogus entries in the
+ # sqlite_master table (schema corruption).
+ do_test capi3-8.1 {
+ file delete -force test.db
+ file delete -force test.db-journal
+ sqlite3 db test.db
+ execsql {
+ CREATE TABLE t1(a);
+ }
+ db close
+ } {}
+ do_test capi3-8.2 {
+ set ::bt [btree_open test.db 10 0]
+ btree_begin_transaction $::bt
+ set ::bc [btree_cursor $::bt 1 1]
+
+ # Build a 5-field row record consisting of 5 NULL fields. This is
+ # officially black magic.
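+ # (In the record format, the first byte below is presumably the header
+ # length and each following byte a serial type, with 0 meaning NULL, so
+ # {6 0 0 0 0 0} describes a 6-byte header for five NULL fields.)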
+ catch {unset data}
+ set data [binary format c6 {6 0 0 0 0 0}]
+ btree_insert $::bc 5 $data
+
+ btree_close_cursor $::bc
+ btree_commit $::bt
+ btree_close $::bt
+ } {}
+ do_test capi3-8.3 {
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM sqlite_master;
+ }
+ } {1 {malformed database schema}}
+ do_test capi3-8.4 {
+ set ::bt [btree_open test.db 10 0]
+ btree_begin_transaction $::bt
+ set ::bc [btree_cursor $::bt 1 1]
+
+ # Build a 5-field row record. The first field is the string 'table' and
+ # the remaining fields are all NULL. Replace the broken record inserted
+ # above with this one and try to read the schema again. The new record
+ # is encoded in either UTF-8 or native UTF-16 (the latter if this file
+ # is being run by utf16.test).
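+ # (For reference: serial type 23 appears to denote a 5-byte UTF-8 text
+ # value and 33 a 10-byte UTF-16 text value, matching the length of
+ # 'table' in each encoding.)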
+ if { [string match UTF-16* $::ENC] } {
+ set data [binary format c6a10 {6 33 0 0 0 0} [utf16 table]]
+ } else {
+ set data [binary format c6a5 {6 23 0 0 0 0} table]
+ }
+ btree_insert $::bc 5 $data
+
+ btree_close_cursor $::bc
+ btree_commit $::bt
+ btree_close $::bt
+ } {};
+ do_test capi3-8.5 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM sqlite_master;
+ }
+ } {1 {malformed database schema}}
+ db close
+}
+file delete -force test.db
+file delete -force test.db-journal
+
+
+# Test the English-language string equivalents for SQLite error codes.
+set code2english [list \
+SQLITE_OK {not an error} \
+SQLITE_ERROR {SQL logic error or missing database} \
+SQLITE_PERM {access permission denied} \
+SQLITE_ABORT {callback requested query abort} \
+SQLITE_BUSY {database is locked} \
+SQLITE_LOCKED {database table is locked} \
+SQLITE_NOMEM {out of memory} \
+SQLITE_READONLY {attempt to write a readonly database} \
+SQLITE_INTERRUPT {interrupted} \
+SQLITE_IOERR {disk I/O error} \
+SQLITE_CORRUPT {database disk image is malformed} \
+SQLITE_FULL {database or disk is full} \
+SQLITE_CANTOPEN {unable to open database file} \
+SQLITE_PROTOCOL {database locking protocol failure} \
+SQLITE_EMPTY {table contains no data} \
+SQLITE_SCHEMA {database schema has changed} \
+SQLITE_CONSTRAINT {constraint failed} \
+SQLITE_MISMATCH {datatype mismatch} \
+SQLITE_MISUSE {library routine called out of sequence} \
+SQLITE_NOLFS {kernel lacks large file support} \
+SQLITE_AUTH {authorization denied} \
+SQLITE_FORMAT {auxiliary database format error} \
+SQLITE_RANGE {bind or column index out of range} \
+SQLITE_NOTADB {file is encrypted or is not a database} \
+unknownerror {unknown error} \
+]
+
+set test_number 1
+foreach {code english} $code2english {
+ do_test capi3-9.$test_number "sqlite3_test_errstr $code" $english
+ incr test_number
+}
+
+# Test the error message when a "real" out of memory occurs.
+if {[info command sqlite_malloc_stat]!=""} {
+set sqlite_malloc_fail 1
+do_test capi3-10-1 {
+ sqlite3 db test.db
+ set DB [sqlite3_connection_pointer db]
+ sqlite_malloc_fail 1
+ catchsql {
+ select * from sqlite_master;
+ }
+} {1 {out of memory}}
+do_test capi3-10-2 {
+ sqlite3_errmsg $::DB
+} {out of memory}
+ifcapable {utf16} {
+ do_test capi3-10-3 {
+ utf8 [sqlite3_errmsg16 $::DB]
+ } {out of memory}
+}
+db close
+sqlite_malloc_fail 0
+}
+
+# The following tests - capi3-11.* - check that a COMMIT or ROLLBACK
+# statement fails if it is issued while there are still outstanding VMs
+# that are part of the transaction.
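+# (A VM here is a prepared statement handle; the expectation is that the
+# COMMIT/ROLLBACK keeps failing with "SQL statements in progress" until
+# every such statement has been reset or finalized.)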
+sqlite3 db test.db
+set DB [sqlite3_connection_pointer db]
+sqlite_register_test_function $DB func
+do_test capi3-11.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(a, b);
+ INSERT INTO t1 VALUES(1, 'int');
+ INSERT INTO t1 VALUES(2, 'notatype');
+ }
+} {}
+do_test capi3-11.1.1 {
+ sqlite3_get_autocommit $DB
+} 0
+do_test capi3-11.2 {
+ set STMT [sqlite3_prepare $DB "SELECT func(b, a) FROM t1" -1 TAIL]
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3-11.3 {
+ catchsql {
+ COMMIT;
+ }
+} {1 {cannot commit transaction - SQL statements in progress}}
+do_test capi3-11.3.1 {
+ sqlite3_get_autocommit $DB
+} 0
+do_test capi3-11.4 {
+ sqlite3_step $STMT
+} {SQLITE_ERROR}
+do_test capi3-11.5 {
+ sqlite3_finalize $STMT
+} {SQLITE_ERROR}
+do_test capi3-11.6 {
+ catchsql {
+ SELECT * FROM t1;
+ }
+} {0 {1 int 2 notatype}}
+do_test capi3-11.6.1 {
+ sqlite3_get_autocommit $DB
+} 0
+do_test capi3-11.7 {
+ catchsql {
+ COMMIT;
+ }
+} {0 {}}
+do_test capi3-11.7.1 {
+ sqlite3_get_autocommit $DB
+} 1
+do_test capi3-11.8 {
+ execsql {
+ CREATE TABLE t2(a);
+ INSERT INTO t2 VALUES(1);
+ INSERT INTO t2 VALUES(2);
+ BEGIN;
+ INSERT INTO t2 VALUES(3);
+ }
+} {}
+do_test capi3-11.8.1 {
+ sqlite3_get_autocommit $DB
+} 0
+do_test capi3-11.9 {
+ set STMT [sqlite3_prepare $DB "SELECT a FROM t2" -1 TAIL]
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3-11.9.1 {
+ sqlite3_get_autocommit $DB
+} 0
+do_test capi3-11.9.2 {
+ catchsql {
+ ROLLBACK;
+ }
+} {1 {cannot rollback transaction - SQL statements in progress}}
+do_test capi3-11.9.3 {
+ sqlite3_get_autocommit $DB
+} 0
+do_test capi3-11.10 {
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3-11.11 {
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3-11.12 {
+ sqlite3_step $STMT
+} {SQLITE_DONE}
+do_test capi3-11.13 {
+ sqlite3_finalize $STMT
+} {SQLITE_OK}
+do_test capi3-11.14 {
+ execsql {
+ SELECT a FROM t2;
+ }
+} {1 2 3}
+do_test capi3-11.14.1 {
+ sqlite3_get_autocommit $DB
+} 0
+do_test capi3-11.15 {
+ catchsql {
+ ROLLBACK;
+ }
+} {0 {}}
+do_test capi3-11.15.1 {
+ sqlite3_get_autocommit $DB
+} 1
+do_test capi3-11.16 {
+ execsql {
+ SELECT a FROM t2;
+ }
+} {1 2}
+
+# Sanity check on the definition of 'outstanding VM'. This means any VM
+# that has had sqlite3_step() called more recently than sqlite3_finalize() or
+# sqlite3_reset(). So a VM that has just been prepared or reset does not
+# count as an active VM.
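+# For example, capi3-11.18 below commits successfully because its statement
+# has only been prepared, while capi3-11.20 fails because sqlite3_step()
+# has been called on that statement since it was last reset.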
+do_test capi3-11.17 {
+ execsql {
+ BEGIN;
+ }
+} {}
+do_test capi3-11.18 {
+ set STMT [sqlite3_prepare $DB "SELECT a FROM t1" -1 TAIL]
+ catchsql {
+ COMMIT;
+ }
+} {0 {}}
+do_test capi3-11.19 {
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3-11.20 {
+ catchsql {
+ BEGIN;
+ COMMIT;
+ }
+} {1 {cannot commit transaction - SQL statements in progress}}
+do_test capi3-11.20.1 {
+ sqlite3_reset $STMT
+ catchsql {
+ COMMIT;
+ }
+} {0 {}}
+do_test capi3-11.21 {
+ sqlite3_finalize $STMT
+} {SQLITE_OK}
+
+# The following tests - capi3-12.* - check that it's OK to start a
+# transaction while other VMs are active, and that it's OK to execute
+# atomic updates in the same situation.
+#
+do_test capi3-12.1 {
+ set STMT [sqlite3_prepare $DB "SELECT a FROM t2" -1 TAIL]
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3-12.2 {
+ catchsql {
+ INSERT INTO t1 VALUES(3, NULL);
+ }
+} {0 {}}
+do_test capi3-12.3 {
+ catchsql {
+ INSERT INTO t2 VALUES(4);
+ }
+} {0 {}}
+do_test capi3-12.4 {
+ catchsql {
+ BEGIN;
+ INSERT INTO t1 VALUES(4, NULL);
+ }
+} {0 {}}
+do_test capi3-12.5 {
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3-12.5.1 {
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3-12.6 {
+ sqlite3_step $STMT
+} {SQLITE_DONE}
+do_test capi3-12.7 {
+ sqlite3_finalize $STMT
+} {SQLITE_OK}
+do_test capi3-12.8 {
+ execsql {
+ COMMIT;
+ SELECT a FROM t1;
+ }
+} {1 2 3 4}
+
+# Test cases capi3-13.* test the sqlite3_clear_bindings() and
+# sqlite3_sleep() APIs.
+#
+if {[llength [info commands sqlite3_clear_bindings]]>0} {
+ do_test capi3-13.1 {
+ execsql {
+ DELETE FROM t1;
+ }
+ set STMT [sqlite3_prepare $DB "INSERT INTO t1 VALUES(?, ?)" -1 TAIL]
+ sqlite3_step $STMT
+ } {SQLITE_DONE}
+ do_test capi3-13.2 {
+ sqlite3_reset $STMT
+ sqlite3_bind_text $STMT 1 hello 5
+ sqlite3_bind_text $STMT 2 world 5
+ sqlite3_step $STMT
+ } {SQLITE_DONE}
+ do_test capi3-13.3 {
+ sqlite3_reset $STMT
+ sqlite3_clear_bindings $STMT
+ sqlite3_step $STMT
+ } {SQLITE_DONE}
+ do_test capi3-13-4 {
+ sqlite3_finalize $STMT
+ execsql {
+ SELECT * FROM t1;
+ }
+ } {{} {} hello world {} {}}
+}
+if {[llength [info commands sqlite3_sleep]]>0} {
+ do_test capi3-13-5 {
+ set ms [sqlite3_sleep 80]
+ expr {$ms==80 || $ms==1000}
+ } {1}
+}
+
+# Ticket #1219: Make sure binding APIs can handle a NULL pointer.
+#
+do_test capi3-14.1 {
+ set rc [catch {sqlite3_bind_text 0 1 hello 5} msg]
+ lappend rc $msg
+} {1 SQLITE_MISUSE}
+
+# Ticket #1650: Honor the nBytes parameter to sqlite3_prepare.
+#
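+# (Only the first $nbytes bytes of the SQL string should be compiled, so
+# the " WHERE a==1" text appended after measuring the length is expected
+# to be ignored; the second row value of 2 seen in capi3-15.2 confirms it.)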
+do_test capi3-15.1 {
+ set sql {SELECT * FROM t2}
+ set nbytes [string length $sql]
+ append sql { WHERE a==1}
+ set STMT [sqlite3_prepare $DB $sql $nbytes TAIL]
+ sqlite3_step $STMT
+ sqlite3_column_int $STMT 0
+} {1}
+do_test capi3-15.2 {
+ sqlite3_step $STMT
+ sqlite3_column_int $STMT 0
+} {2}
+do_test capi3-15.3 {
+ sqlite3_finalize $STMT
+} {SQLITE_OK}
+
+# Make sure code is always generated, even when a DROP TABLE IF EXISTS
+# names a table that does not exist or a CREATE TABLE IF NOT EXISTS
+# names a table that already does. That way we will always have a
+# prepared statement to expire when the schema changes.
+#
+do_test capi3-16.1 {
+ set sql {DROP TABLE IF EXISTS t3}
+ set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+ sqlite3_finalize $STMT
+ expr {$STMT!=""}
+} {1}
+do_test capi3-16.2 {
+ set sql {CREATE TABLE IF NOT EXISTS t1(x,y)}
+ set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+ sqlite3_finalize $STMT
+ expr {$STMT!=""}
+} {1}
+
+# But still we do not generate code if there is no SQL
+#
+do_test capi3-16.3 {
+ set STMT [sqlite3_prepare $DB {} -1 TAIL]
+ sqlite3_finalize $STMT
+ expr {$STMT==""}
+} {1}
+do_test capi3-16.4 {
+ set STMT [sqlite3_prepare $DB {;} -1 TAIL]
+ sqlite3_finalize $STMT
+ expr {$STMT==""}
+} {1}
+
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/capi3b.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/capi3b.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,135 @@
+# 2004 September 2
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing the callback-free C/C++ API and in
+# particular the behavior of sqlite3_step() when trying to commit
+# with lock contention.
+#
+# $Id: capi3b.test,v 1.3 2006/01/03 00:33:50 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+set DB [sqlite3_connection_pointer db]
+sqlite3 db2 test.db
+set DB2 [sqlite3_connection_pointer db2]
+
+# Create some data in the database
+#
+do_test capi3b-1.1 {
+ execsql {
+ CREATE TABLE t1(x);
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ SELECT * FROM t1
+ }
+} {1 2}
+
+# Make sure the second database connection can see the data
+#
+do_test capi3b-1.2 {
+ execsql {
+ SELECT * FROM t1
+ } db2
+} {1 2}
+
+# First database connection acquires a shared lock
+#
+do_test capi3b-1.3 {
+ execsql {
+ BEGIN;
+ SELECT * FROM t1;
+ }
+} {1 2}
+
+# Second database connection tries to write. The sqlite3_step()
+# function returns SQLITE_BUSY because it cannot commit.
+#
+do_test capi3b-1.4 {
+ set VM [sqlite3_prepare $DB2 {INSERT INTO t1 VALUES(3)} -1 TAIL]
+ sqlite3_step $VM
+} SQLITE_BUSY
+
+# The sqlite3_step call can be repeated multiple times.
+#
+do_test capi3b-1.5.1 {
+ sqlite3_step $VM
+} SQLITE_BUSY
+do_test capi3b-1.5.2 {
+ sqlite3_step $VM
+} SQLITE_BUSY
+
+# The first connection closes its transaction. This allows the second
+# connection's sqlite3_step() to succeed.
+#
+do_test capi3b-1.6 {
+ execsql COMMIT
+ sqlite3_step $VM
+} SQLITE_DONE
+do_test capi3b-1.7 {
+ sqlite3_finalize $VM
+} SQLITE_OK
+do_test capi3b-1.8 {
+ execsql {SELECT * FROM t1} db2
+} {1 2 3}
+do_test capi3b-1.9 {
+ execsql {SELECT * FROM t1}
+} {1 2 3}
+
+# Start doing a SELECT with one connection. This gets a SHARED lock.
+# Then do an INSERT with the other connection. The INSERT should
+# not be able to complete until the SELECT finishes.
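+# (The expectation is that each sqlite3_step() of the INSERT keeps
+# returning SQLITE_BUSY until the SELECT statement has returned
+# SQLITE_DONE and released its shared lock.)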
+#
+do_test capi3b-2.1 {
+ set VM1 [sqlite3_prepare $DB {SELECT * FROM t1} -1 TAIL]
+ sqlite3_step $VM1
+} SQLITE_ROW
+do_test capi3b-2.2 {
+ sqlite3_column_text $VM1 0
+} 1
+do_test capi3b-2.3 {
+ set VM2 [sqlite3_prepare $DB2 {INSERT INTO t1 VALUES(4)} -1 TAIL]
+ sqlite3_step $VM2
+} SQLITE_BUSY
+do_test capi3b-2.4 {
+ sqlite3_step $VM1
+} SQLITE_ROW
+do_test capi3b-2.5 {
+ sqlite3_column_text $VM1 0
+} 2
+do_test capi3b-2.6 {
+ sqlite3_step $VM2
+} SQLITE_BUSY
+do_test capi3b-2.7 {
+ sqlite3_step $VM1
+} SQLITE_ROW
+do_test capi3b-2.8 {
+ sqlite3_column_text $VM1 0
+} 3
+do_test capi3b-2.9 {
+ sqlite3_step $VM2
+} SQLITE_BUSY
+do_test capi3b-2.10 {
+ sqlite3_step $VM1
+} SQLITE_DONE
+do_test capi3b-2.11 {
+ sqlite3_step $VM2
+} SQLITE_DONE
+do_test capi3b-2.12 {
+ sqlite3_finalize $VM1
+ sqlite3_finalize $VM2
+ execsql {SELECT * FROM t1}
+} {1 2 3 4}
+
+catch {db2 close}
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/cast.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/cast.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,196 @@
+# 2005 June 25
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the CAST operator.
+#
+# $Id: cast.test,v 1.5 2006/03/03 19:12:30 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if the build includes the CAST operator
+ifcapable !cast {
+ finish_test
+ return
+}
+
+# Tests for the CAST( AS blob), CAST( AS text), CAST( AS numeric) and
+# CAST( AS integer) operations
+#
+ifcapable bloblit {
+ do_test cast-1.1 {
+ execsql {SELECT x'616263'}
+ } abc
+ do_test cast-1.2 {
+ execsql {SELECT typeof(x'616263')}
+ } blob
+ do_test cast-1.3 {
+ execsql {SELECT CAST(x'616263' AS text)}
+ } abc
+ do_test cast-1.4 {
+ execsql {SELECT typeof(CAST(x'616263' AS text))}
+ } text
+ do_test cast-1.5 {
+ execsql {SELECT CAST(x'616263' AS numeric)}
+ } 0
+ do_test cast-1.6 {
+ execsql {SELECT typeof(CAST(x'616263' AS numeric))}
+ } integer
+ do_test cast-1.7 {
+ execsql {SELECT CAST(x'616263' AS blob)}
+ } abc
+ do_test cast-1.8 {
+ execsql {SELECT typeof(CAST(x'616263' AS blob))}
+ } blob
+ do_test cast-1.9 {
+ execsql {SELECT CAST(x'616263' AS integer)}
+ } 0
+ do_test cast-1.10 {
+ execsql {SELECT typeof(CAST(x'616263' AS integer))}
+ } integer
+}
+do_test cast-1.11 {
+ execsql {SELECT null}
+} {{}}
+do_test cast-1.12 {
+ execsql {SELECT typeof(NULL)}
+} null
+do_test cast-1.13 {
+ execsql {SELECT CAST(NULL AS text)}
+} {{}}
+do_test cast-1.14 {
+ execsql {SELECT typeof(CAST(NULL AS text))}
+} null
+do_test cast-1.15 {
+ execsql {SELECT CAST(NULL AS numeric)}
+} {{}}
+do_test cast-1.16 {
+ execsql {SELECT typeof(CAST(NULL AS numeric))}
+} null
+do_test cast-1.17 {
+ execsql {SELECT CAST(NULL AS blob)}
+} {{}}
+do_test cast-1.18 {
+ execsql {SELECT typeof(CAST(NULL AS blob))}
+} null
+do_test cast-1.19 {
+ execsql {SELECT CAST(NULL AS integer)}
+} {{}}
+do_test cast-1.20 {
+ execsql {SELECT typeof(CAST(NULL AS integer))}
+} null
+do_test cast-1.21 {
+ execsql {SELECT 123}
+} {123}
+do_test cast-1.22 {
+ execsql {SELECT typeof(123)}
+} integer
+do_test cast-1.23 {
+ execsql {SELECT CAST(123 AS text)}
+} {123}
+do_test cast-1.24 {
+ execsql {SELECT typeof(CAST(123 AS text))}
+} text
+do_test cast-1.25 {
+ execsql {SELECT CAST(123 AS numeric)}
+} 123
+do_test cast-1.26 {
+ execsql {SELECT typeof(CAST(123 AS numeric))}
+} integer
+do_test cast-1.27 {
+ execsql {SELECT CAST(123 AS blob)}
+} {123}
+do_test cast-1.28 {
+ execsql {SELECT typeof(CAST(123 AS blob))}
+} blob
+do_test cast-1.29 {
+ execsql {SELECT CAST(123 AS integer)}
+} {123}
+do_test cast-1.30 {
+ execsql {SELECT typeof(CAST(123 AS integer))}
+} integer
+do_test cast-1.31 {
+ execsql {SELECT 123.456}
+} {123.456}
+do_test cast-1.32 {
+ execsql {SELECT typeof(123.456)}
+} real
+do_test cast-1.33 {
+ execsql {SELECT CAST(123.456 AS text)}
+} {123.456}
+do_test cast-1.34 {
+ execsql {SELECT typeof(CAST(123.456 AS text))}
+} text
+do_test cast-1.35 {
+ execsql {SELECT CAST(123.456 AS numeric)}
+} 123.456
+do_test cast-1.36 {
+ execsql {SELECT typeof(CAST(123.456 AS numeric))}
+} real
+do_test cast-1.37 {
+ execsql {SELECT CAST(123.456 AS blob)}
+} {123.456}
+do_test cast-1.38 {
+ execsql {SELECT typeof(CAST(123.456 AS blob))}
+} blob
+do_test cast-1.39 {
+ execsql {SELECT CAST(123.456 AS integer)}
+} {123}
+do_test cast-1.40 {
+ execsql {SELECT typeof(CAST(123.456 AS integer))}
+} integer
+do_test cast-1.41 {
+ execsql {SELECT '123abc'}
+} {123abc}
+do_test cast-1.42 {
+ execsql {SELECT typeof('123abc')}
+} text
+do_test cast-1.43 {
+ execsql {SELECT CAST('123abc' AS text)}
+} {123abc}
+do_test cast-1.44 {
+ execsql {SELECT typeof(CAST('123abc' AS text))}
+} text
+do_test cast-1.45 {
+ execsql {SELECT CAST('123abc' AS numeric)}
+} 123
+do_test cast-1.46 {
+ execsql {SELECT typeof(CAST('123abc' AS numeric))}
+} integer
+do_test cast-1.47 {
+ execsql {SELECT CAST('123abc' AS blob)}
+} {123abc}
+do_test cast-1.48 {
+ execsql {SELECT typeof(CAST('123abc' AS blob))}
+} blob
+do_test cast-1.49 {
+ execsql {SELECT CAST('123abc' AS integer)}
+} 123
+do_test cast-1.50 {
+ execsql {SELECT typeof(CAST('123abc' AS integer))}
+} integer
+do_test cast-1.51 {
+ execsql {SELECT CAST('123.5abc' AS numeric)}
+} 123.5
+do_test cast-1.53 {
+ execsql {SELECT CAST('123.5abc' AS integer)}
+} 123
+
+# Ticket #1662. Ignore leading spaces in numbers when casting.
+#
+do_test cast-2.1 {
+ execsql {SELECT CAST(' 123' AS integer)}
+} 123
+do_test cast-2.2 {
+ execsql {SELECT CAST(' -123.456' AS real)}
+} -123.456
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/check.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/check.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,351 @@
+# 2005 November 2
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing CHECK constraints
+#
+# $Id: check.test,v 1.10 2006/06/20 11:01:09 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if the build includes support for CHECK constraints
+ifcapable !check {
+ finish_test
+ return
+}
+
+do_test check-1.1 {
+ execsql {
+ CREATE TABLE t1(
+ x INTEGER CHECK( x<5 ),
+ y REAL CHECK( y>x )
+ );
+ }
+} {}
+do_test check-1.2 {
+ execsql {
+ INSERT INTO t1 VALUES(3,4);
+ SELECT * FROM t1;
+ }
+} {3 4.0}
+do_test check-1.3 {
+ catchsql {
+ INSERT INTO t1 VALUES(6,7);
+ }
+} {1 {constraint failed}}
+do_test check-1.4 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {3 4.0}
+do_test check-1.5 {
+ catchsql {
+ INSERT INTO t1 VALUES(4,3);
+ }
+} {1 {constraint failed}}
+do_test check-1.6 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {3 4.0}
+do_test check-1.7 {
+ catchsql {
+ INSERT INTO t1 VALUES(NULL,6);
+ }
+} {0 {}}
+do_test check-1.8 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {3 4.0 {} 6.0}
+do_test check-1.9 {
+ catchsql {
+ INSERT INTO t1 VALUES(2,NULL);
+ }
+} {0 {}}
+do_test check-1.10 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {3 4.0 {} 6.0 2 {}}
+do_test check-1.11 {
+ execsql {
+ DELETE FROM t1 WHERE x IS NULL OR x!=3;
+ UPDATE t1 SET x=2 WHERE x==3;
+ SELECT * FROM t1;
+ }
+} {2 4.0}
+do_test check-1.12 {
+ catchsql {
+ UPDATE t1 SET x=7 WHERE x==2
+ }
+} {1 {constraint failed}}
+do_test check-1.13 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {2 4.0}
+do_test check-1.14 {
+ catchsql {
+ UPDATE t1 SET x=5 WHERE x==2
+ }
+} {1 {constraint failed}}
+do_test check-1.15 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {2 4.0}
+do_test check-1.16 {
+ catchsql {
+ UPDATE t1 SET x=4, y=11 WHERE x==2
+ }
+} {0 {}}
+do_test check-1.17 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {4 11.0}
+
+do_test check-2.1 {
+ execsql {
+ CREATE TABLE t2(
+ x INTEGER CHECK( typeof(coalesce(x,0))=="integer" ),
+ y REAL CHECK( typeof(coalesce(y,0.1))=="real" ),
+ z TEXT CHECK( typeof(coalesce(z,''))=="text" )
+ );
+ }
+} {}
+do_test check-2.2 {
+ execsql {
+ INSERT INTO t2 VALUES(1,2.2,'three');
+ SELECT * FROM t2;
+ }
+} {1 2.2 three}
+do_test check-2.3 {
+ execsql {
+ INSERT INTO t2 VALUES(NULL, NULL, NULL);
+ SELECT * FROM t2;
+ }
+} {1 2.2 three {} {} {}}
+do_test check-2.4 {
+ catchsql {
+ INSERT INTO t2 VALUES(1.1, NULL, NULL);
+ }
+} {1 {constraint failed}}
+do_test check-2.5 {
+ catchsql {
+ INSERT INTO t2 VALUES(NULL, 5, NULL);
+ }
+} {1 {constraint failed}}
+do_test check-2.6 {
+ catchsql {
+ INSERT INTO t2 VALUES(NULL, NULL, 3.14159);
+ }
+} {1 {constraint failed}}
+
+ifcapable subquery {
+ do_test check-3.1 {
+ catchsql {
+ CREATE TABLE t3(
+ x, y, z,
+ CHECK( x<(SELECT min(x) FROM t1) )
+ );
+ }
+ } {1 {subqueries prohibited in CHECK constraints}}
+}
+
+do_test check-3.2 {
+ execsql {
+ SELECT name FROM sqlite_master ORDER BY name
+ }
+} {t1 t2}
+do_test check-3.3 {
+ catchsql {
+ CREATE TABLE t3(
+ x, y, z,
+ CHECK( q<x )
+ );
+ }
+} {1 {no such column: q}}
+do_test check-3.4 {
+ execsql {
+ SELECT name FROM sqlite_master ORDER BY name
+ }
+} {t1 t2}
+do_test check-3.5 {
+ catchsql {
+ CREATE TABLE t3(
+ x, y, z,
+ CHECK( t2.x<x )
+ );
+ }
+} {1 {no such column: t2.x}}
+do_test check-3.6 {
+ execsql {
+ SELECT name FROM sqlite_master ORDER BY name
+ }
+} {t1 t2}
+do_test check-3.7 {
+ catchsql {
+ CREATE TABLE t3(
+ x, y, z,
+ CHECK( t3.x<25 )
+ );
+ }
+} {0 {}}
+do_test check-3.8 {
+ execsql {
+ INSERT INTO t3 VALUES(1,2,3);
+ SELECT * FROM t3;
+ }
+} {1 2 3}
+do_test check-3.9 {
+ catchsql {
+ INSERT INTO t3 VALUES(111,222,333);
+ }
+} {1 {constraint failed}}
+
+do_test check-4.1 {
+ execsql {
+ CREATE TABLE t4(x, y,
+ CHECK (
+ x+y==11
+ OR x*y==12
+ OR x/y BETWEEN 5 AND 8
+ OR -x==y+10
+ )
+ );
+ }
+} {}
+do_test check-4.2 {
+ execsql {
+ INSERT INTO t4 VALUES(1,10);
+ SELECT * FROM t4
+ }
+} {1 10}
+do_test check-4.3 {
+ execsql {
+ UPDATE t4 SET x=4, y=3;
+ SELECT * FROM t4
+ }
+} {4 3}
+do_test check-4.3.1 {
+ execsql {
+ UPDATE t4 SET x=12, y=2;
+ SELECT * FROM t4
+ }
+} {12 2}
+do_test check-4.4 {
+ execsql {
+ UPDATE t4 SET x=12, y=-22;
+ SELECT * FROM t4
+ }
+} {12 -22}
+do_test check-4.5 {
+ catchsql {
+ UPDATE t4 SET x=0, y=1;
+ }
+} {1 {constraint failed}}
+do_test check-4.6 {
+ execsql {
+ SELECT * FROM t4;
+ }
+} {12 -22}
+do_test check-4.7 {
+ execsql {
+ PRAGMA ignore_check_constraints=ON;
+ UPDATE t4 SET x=0, y=1;
+ SELECT * FROM t4;
+ }
+} {0 1}
+do_test check-4.8 {
+ catchsql {
+ PRAGMA ignore_check_constraints=OFF;
+ UPDATE t4 SET x=0, y=2;
+ }
+} {1 {constraint failed}}
+ifcapable vacuum {
+ do_test check-4.9 {
+ catchsql {
+ VACUUM
+ }
+ } {0 {}}
+}
+
+do_test check-5.1 {
+ catchsql {
+ CREATE TABLE t5(x, y,
+ CHECK( x*y<:abc )
+ );
+ }
+} {1 {parameters prohibited in CHECK constraints}}
+do_test check-5.2 {
+ catchsql {
+ CREATE TABLE t5(x, y,
+ CHECK( x*y<? )
+ );
+ }
+} {1 {parameters prohibited in CHECK constraints}}
+
+ifcapable conflict {
+
+do_test check-6.1 {
+ execsql {SELECT * FROM t1}
+} {4 11.0}
+do_test check-6.2 {
+ execsql {
+ UPDATE OR IGNORE t1 SET x=5;
+ SELECT * FROM t1;
+ }
+} {4 11.0}
+do_test check-6.3 {
+ execsql {
+ INSERT OR IGNORE INTO t1 VALUES(5,4.0);
+ SELECT * FROM t1;
+ }
+} {4 11.0}
+do_test check-6.4 {
+ execsql {
+ INSERT OR IGNORE INTO t1 VALUES(2,20.0);
+ SELECT * FROM t1;
+ }
+} {4 11.0 2 20.0}
+do_test check-6.5 {
+ catchsql {
+ UPDATE OR FAIL t1 SET x=7-x, y=y+1;
+ }
+} {1 {constraint failed}}
+do_test check-6.6 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {3 12.0 2 20.0}
+do_test check-6.7 {
+ catchsql {
+ BEGIN;
+ INSERT INTO t1 VALUES(1,30.0);
+ INSERT OR ROLLBACK INTO t1 VALUES(8,40.0);
+ }
+} {1 {constraint failed}}
+do_test check-6.8 {
+ catchsql {
+ COMMIT;
+ }
+} {1 {cannot commit - no transaction is active}}
+do_test check-6.9 {
+ execsql {
+ SELECT * FROM t1
+ }
+} {3 12.0 2 20.0}
+
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/collate1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/collate1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,229 @@
+#
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing collation sequences.
+#
+# $Id: collate1.test,v 1.4 2005/11/01 15:48:25 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+#
+# Tests are roughly organised as follows:
+#
+# collate1-1.* - Single-field ORDER BY with an explicit COLLATE clause.
+# collate1-2.* - Multi-field ORDER BY with an explicit COLLATE clause.
+# collate1-3.* - ORDER BY using a default collation type. Also that an
+# explicit collate type overrides a default collate type.
+# collate1-4.* - ORDER BY using a data type.
+#
+
+#
+# Collation type 'HEX'. If an argument can be interpreted as a hexadecimal
+# number, then it is converted to one before the comparison is performed.
+# Numbers are less than other strings. If neither argument is a number,
+# [string compare] is used.
+#
+db collate HEX hex_collate
+proc hex_collate {lhs rhs} {
+ set lhs_ishex [regexp {^(0x|)[1234567890abcdefABCDEF]+$} $lhs]
+ set rhs_ishex [regexp {^(0x|)[1234567890abcdefABCDEF]+$} $rhs]
+ if {$lhs_ishex && $rhs_ishex} {
+ set lhsx [scan $lhs %x]
+ set rhsx [scan $rhs %x]
+ if {$lhs < $rhs} {return -1}
+ if {$lhs == $rhs} {return 0}
+ if {$lhs > $rhs} {return 1}
+ }
+ if {$lhs_ishex} {
+ return -1;
+ }
+ if {$rhs_ishex} {
+ return 1;
+ }
+ return [string compare $lhs $rhs]
+}
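+# (Note: the comparison above appears to rely on Tcl's expr treating the
+# 0x-prefixed operands as hexadecimal integers; the lhsx/rhsx values
+# obtained with [scan] are not actually used.)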
+db function hex {format 0x%X}
+
+# Mimic the SQLite 2 collation type NUMERIC.
+db collate numeric numeric_collate
+proc numeric_collate {lhs rhs} {
+ if {$lhs == $rhs} {return 0}
+ return [expr ($lhs>$rhs)?1:-1]
+}
+
+do_test collate1-1.0 {
+ execsql {
+ CREATE TABLE collate1t1(c1, c2);
+ INSERT INTO collate1t1 VALUES(45, hex(45));
+ INSERT INTO collate1t1 VALUES(NULL, NULL);
+ INSERT INTO collate1t1 VALUES(281, hex(281));
+ }
+} {}
+do_test collate1-1.1 {
+ execsql {
+ SELECT c2 FROM collate1t1 ORDER BY 1;
+ }
+} {{} 0x119 0x2D}
+do_test collate1-1.2 {
+ execsql {
+ SELECT c2 FROM collate1t1 ORDER BY 1 COLLATE hex;
+ }
+} {{} 0x2D 0x119}
+do_test collate1-1.3 {
+ execsql {
+ SELECT c2 FROM collate1t1 ORDER BY 1 COLLATE hex DESC;
+ }
+} {0x119 0x2D {}}
+do_test collate1-1.4 {
+ execsql {
+ SELECT c2 FROM collate1t1 ORDER BY 1 COLLATE hex ASC;
+ }
+} {{} 0x2D 0x119}
+do_test collate1-1.5 {
+ execsql {
+ DROP TABLE collate1t1;
+ }
+} {}
+
+do_test collate1-2.0 {
+ execsql {
+ CREATE TABLE collate1t1(c1, c2);
+ INSERT INTO collate1t1 VALUES('5', '0x11');
+ INSERT INTO collate1t1 VALUES('5', '0xA');
+ INSERT INTO collate1t1 VALUES(NULL, NULL);
+ INSERT INTO collate1t1 VALUES('7', '0xA');
+ INSERT INTO collate1t1 VALUES('11', '0x11');
+ INSERT INTO collate1t1 VALUES('11', '0x101');
+ }
+} {}
+do_test collate1-2.2 {
+ execsql {
+ SELECT c1, c2 FROM collate1t1 ORDER BY 1 COLLATE numeric, 2 COLLATE hex;
+ }
+} {{} {} 5 0xA 5 0x11 7 0xA 11 0x11 11 0x101}
+do_test collate1-2.3 {
+ execsql {
+ SELECT c1, c2 FROM collate1t1 ORDER BY 1 COLLATE binary, 2 COLLATE hex;
+ }
+} {{} {} 11 0x11 11 0x101 5 0xA 5 0x11 7 0xA}
+do_test collate1-2.4 {
+ execsql {
+ SELECT c1, c2 FROM collate1t1 ORDER BY 1 COLLATE binary DESC, 2 COLLATE hex;
+ }
+} {7 0xA 5 0xA 5 0x11 11 0x11 11 0x101 {} {}}
+do_test collate1-2.5 {
+ execsql {
+ SELECT c1, c2 FROM collate1t1
+ ORDER BY 1 COLLATE binary DESC, 2 COLLATE hex DESC;
+ }
+} {7 0xA 5 0x11 5 0xA 11 0x101 11 0x11 {} {}}
+do_test collate1-2.6 {
+ execsql {
+ SELECT c1, c2 FROM collate1t1
+ ORDER BY 1 COLLATE binary ASC, 2 COLLATE hex ASC;
+ }
+} {{} {} 11 0x11 11 0x101 5 0xA 5 0x11 7 0xA}
+do_test collate1-2.7 {
+ execsql {
+ DROP TABLE collate1t1;
+ }
+} {}
+
+#
+# These tests ensure that the default collation type for a column is used
+# by an ORDER BY clause correctly. The focus is all the different ways
+# the column can be referenced, i.e. a, collate1t1.a, main.collate1t1.a etc.
+#
+do_test collate1-3.0 {
+ execsql {
+ CREATE TABLE collate1t1(a COLLATE hex, b);
+ INSERT INTO collate1t1 VALUES( '0x5', 5 );
+ INSERT INTO collate1t1 VALUES( '1', 1 );
+ INSERT INTO collate1t1 VALUES( '0x45', 69 );
+ INSERT INTO collate1t1 VALUES( NULL, NULL );
+ SELECT * FROM collate1t1 ORDER BY a;
+ }
+} {{} {} 1 1 0x5 5 0x45 69}
+
+do_test collate1-3.1 {
+ execsql {
+ SELECT * FROM collate1t1 ORDER BY 1;
+ }
+} {{} {} 1 1 0x5 5 0x45 69}
+do_test collate1-3.2 {
+ execsql {
+ SELECT * FROM collate1t1 ORDER BY collate1t1.a;
+ }
+} {{} {} 1 1 0x5 5 0x45 69}
+do_test collate1-3.3 {
+ execsql {
+ SELECT * FROM collate1t1 ORDER BY main.collate1t1.a;
+ }
+} {{} {} 1 1 0x5 5 0x45 69}
+do_test collate1-3.4 {
+ execsql {
+ SELECT a as c1, b as c2 FROM collate1t1 ORDER BY c1;
+ }
+} {{} {} 1 1 0x5 5 0x45 69}
+do_test collate1-3.5 {
+ execsql {
+ SELECT a as c1, b as c2 FROM collate1t1 ORDER BY c1 COLLATE binary;
+ }
+} {{} {} 0x45 69 0x5 5 1 1}
+do_test collate1-3.6 {
+ execsql {
+ DROP TABLE collate1t1;
+ }
+} {}
+
+# Update for SQLite version 3. The collate1-4.* test cases were written
+# before manifest types were introduced. The following test cases still
+# work, due to the 'affinity' mechanism, but they don't prove anything
+# about collation sequences.
+#
+do_test collate1-4.0 {
+ execsql {
+ CREATE TABLE collate1t1(c1 numeric, c2 text);
+ INSERT INTO collate1t1 VALUES(1, 1);
+ INSERT INTO collate1t1 VALUES(12, 12);
+ INSERT INTO collate1t1 VALUES(NULL, NULL);
+ INSERT INTO collate1t1 VALUES(101, 101);
+ }
+} {}
+do_test collate1-4.1 {
+ execsql {
+ SELECT c1 FROM collate1t1 ORDER BY 1;
+ }
+} {{} 1 12 101}
+do_test collate1-4.2 {
+ execsql {
+ SELECT c2 FROM collate1t1 ORDER BY 1;
+ }
+} {{} 1 101 12}
+do_test collate1-4.3 {
+ execsql {
+ SELECT c2+0 FROM collate1t1 ORDER BY 1;
+ }
+} {{} 1 12 101}
+do_test collate1-4.4 {
+ execsql {
+ SELECT c1||'' FROM collate1t1 ORDER BY 1;
+ }
+} {{} 1 101 12}
+do_test collate1-4.5 {
+ execsql {
+ DROP TABLE collate1t1;
+ }
+} {}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/collate2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/collate2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,613 @@
+#
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is collation sequences in expressions and joins.
+#
+# $Id: collate2.test,v 1.4 2005/01/21 03:12:16 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+#
+# Tests are organised as follows:
+#
+# collate2-1.* WHERE <expr> expressions (sqliteExprIfTrue).
+# collate2-2.* WHERE NOT <expr> expressions (sqliteExprIfFalse).
+# collate2-3.* SELECT <expr> expressions (sqliteExprCode).
+# collate2-4.* Precedence of collation/data types in binary comparisons
+# collate2-5.* JOIN syntax.
+#
+
+# Create a collation type BACKWARDS for use in testing. This collation type
+# is similar to the built-in BINARY collation type except the order of
+# characters in each string is reversed before the comparison is performed.
+db collate BACKWARDS backwards_collate
+proc backwards_collate {a b} {
+ set ra {};
+ set rb {}
+ foreach c [split $a {}] { set ra $c$ra }
+ foreach c [split $b {}] { set rb $c$rb }
+ return [string compare $ra $rb]
+}
+
+# The following values are used in these tests:
+# NULL aa ab ba bb aA aB bA bB Aa Ab Ba Bb AA AB BA BB
+#
+# The collation orders for each of the tested collation types are:
+#
+# BINARY: NULL AA AB Aa Ab BA BB Ba Bb aA aB aa ab bA bB ba bb
+# NOCASE: NULL aa aA Aa AA ab aB Ab AB ba bA Ba BA bb bB Bb BB
+# BACKWARDS: NULL AA BA aA bA AB BB aB bB Aa Ba aa ba Ab Bb ab bb
+#
+# These tests verify that the default collation type for a column is used
+# for comparison operators (<, >, <=, >=, =) involving that column and
+# an expression that is not a column with a default collation type.
+#
+# The collation sequences BINARY and NOCASE are built-in, the BACKWARDS
+# collation sequence is implemented by the TCL proc backwards_collate
+# above.
+#
+do_test collate2-1.0 {
+ execsql {
+ CREATE TABLE collate2t1(
+ a COLLATE BINARY,
+ b COLLATE NOCASE,
+ c COLLATE BACKWARDS
+ );
+ INSERT INTO collate2t1 VALUES( NULL, NULL, NULL );
+
+ INSERT INTO collate2t1 VALUES( 'aa', 'aa', 'aa' );
+ INSERT INTO collate2t1 VALUES( 'ab', 'ab', 'ab' );
+ INSERT INTO collate2t1 VALUES( 'ba', 'ba', 'ba' );
+ INSERT INTO collate2t1 VALUES( 'bb', 'bb', 'bb' );
+
+ INSERT INTO collate2t1 VALUES( 'aA', 'aA', 'aA' );
+ INSERT INTO collate2t1 VALUES( 'aB', 'aB', 'aB' );
+ INSERT INTO collate2t1 VALUES( 'bA', 'bA', 'bA' );
+ INSERT INTO collate2t1 VALUES( 'bB', 'bB', 'bB' );
+
+ INSERT INTO collate2t1 VALUES( 'Aa', 'Aa', 'Aa' );
+ INSERT INTO collate2t1 VALUES( 'Ab', 'Ab', 'Ab' );
+ INSERT INTO collate2t1 VALUES( 'Ba', 'Ba', 'Ba' );
+ INSERT INTO collate2t1 VALUES( 'Bb', 'Bb', 'Bb' );
+
+ INSERT INTO collate2t1 VALUES( 'AA', 'AA', 'AA' );
+ INSERT INTO collate2t1 VALUES( 'AB', 'AB', 'AB' );
+ INSERT INTO collate2t1 VALUES( 'BA', 'BA', 'BA' );
+ INSERT INTO collate2t1 VALUES( 'BB', 'BB', 'BB' );
+ }
+ if {[info exists collate_test_use_index]} {
+ execsql {
+ CREATE INDEX collate2t1_i1 ON collate2t1(a);
+ CREATE INDEX collate2t1_i2 ON collate2t1(b);
+ CREATE INDEX collate2t1_i3 ON collate2t1(c);
+ }
+ }
+} {}
+do_test collate2-1.1 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE a > 'aa' ORDER BY 1;
+ }
+} {ab bA bB ba bb}
+do_test collate2-1.2 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE b > 'aa' ORDER BY 1, oid;
+ }
+} {ab aB Ab AB ba bA Ba BA bb bB Bb BB}
+do_test collate2-1.3 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE c > 'aa' ORDER BY 1;
+ }
+} {ba Ab Bb ab bb}
+do_test collate2-1.4 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE a < 'aa' ORDER BY 1;
+ }
+} {AA AB Aa Ab BA BB Ba Bb aA aB}
+do_test collate2-1.5 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE b < 'aa' ORDER BY 1, oid;
+ }
+} {}
+do_test collate2-1.6 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE c < 'aa' ORDER BY 1;
+ }
+} {AA BA aA bA AB BB aB bB Aa Ba}
+do_test collate2-1.7 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE a = 'aa';
+ }
+} {aa}
+do_test collate2-1.8 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE b = 'aa' ORDER BY oid;
+ }
+} {aa aA Aa AA}
+do_test collate2-1.9 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE c = 'aa';
+ }
+} {aa}
+do_test collate2-1.10 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE a >= 'aa' ORDER BY 1;
+ }
+} {aa ab bA bB ba bb}
+do_test collate2-1.11 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE b >= 'aa' ORDER BY 1, oid;
+ }
+} {aa aA Aa AA ab aB Ab AB ba bA Ba BA bb bB Bb BB}
+do_test collate2-1.12 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE c >= 'aa' ORDER BY 1;
+ }
+} {aa ba Ab Bb ab bb}
+do_test collate2-1.13 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE a <= 'aa' ORDER BY 1;
+ }
+} {AA AB Aa Ab BA BB Ba Bb aA aB aa}
+do_test collate2-1.14 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE b <= 'aa' ORDER BY 1, oid;
+ }
+} {aa aA Aa AA}
+do_test collate2-1.15 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE c <= 'aa' ORDER BY 1;
+ }
+} {AA BA aA bA AB BB aB bB Aa Ba aa}
+do_test collate2-1.16 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE a BETWEEN 'Aa' AND 'Bb' ORDER BY 1;
+ }
+} {Aa Ab BA BB Ba Bb}
+do_test collate2-1.17 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE b BETWEEN 'Aa' AND 'Bb' ORDER BY 1, oid;
+ }
+} {aa aA Aa AA ab aB Ab AB ba bA Ba BA bb bB Bb BB}
+do_test collate2-1.18 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE c BETWEEN 'Aa' AND 'Bb' ORDER BY 1;
+ }
+} {Aa Ba aa ba Ab Bb}
+do_test collate2-1.19 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE
+ CASE a WHEN 'aa' THEN 1 ELSE 0 END
+ ORDER BY 1, oid;
+ }
+} {aa}
+do_test collate2-1.20 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE
+ CASE b WHEN 'aa' THEN 1 ELSE 0 END
+ ORDER BY 1, oid;
+ }
+} {aa aA Aa AA}
+do_test collate2-1.21 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE
+ CASE c WHEN 'aa' THEN 1 ELSE 0 END
+ ORDER BY 1, oid;
+ }
+} {aa}
+
+ifcapable subquery {
+ do_test collate2-1.22 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb') ORDER BY 1, oid;
+ }
+ } {aa bb}
+ do_test collate2-1.23 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE b IN ('aa', 'bb') ORDER BY 1, oid;
+ }
+ } {aa aA Aa AA bb bB Bb BB}
+ do_test collate2-1.24 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE c IN ('aa', 'bb') ORDER BY 1, oid;
+ }
+ } {aa bb}
+ do_test collate2-1.25 {
+ execsql {
+ SELECT a FROM collate2t1
+ WHERE a IN (SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb'));
+ }
+ } {aa bb}
+ do_test collate2-1.26 {
+ execsql {
+ SELECT b FROM collate2t1
+ WHERE b IN (SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb'));
+ }
+ } {aa bb aA bB Aa Bb AA BB}
+ do_test collate2-1.27 {
+ execsql {
+ SELECT c FROM collate2t1
+ WHERE c IN (SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb'));
+ }
+ } {aa bb}
+} ;# ifcapable subquery
+
+do_test collate2-2.1 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE NOT a > 'aa' ORDER BY 1;
+ }
+} {AA AB Aa Ab BA BB Ba Bb aA aB aa}
+do_test collate2-2.2 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE NOT b > 'aa' ORDER BY 1, oid;
+ }
+} {aa aA Aa AA}
+do_test collate2-2.3 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE NOT c > 'aa' ORDER BY 1;
+ }
+} {AA BA aA bA AB BB aB bB Aa Ba aa}
+do_test collate2-2.4 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE NOT a < 'aa' ORDER BY 1;
+ }
+} {aa ab bA bB ba bb}
+do_test collate2-2.5 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE NOT b < 'aa' ORDER BY 1, oid;
+ }
+} {aa aA Aa AA ab aB Ab AB ba bA Ba BA bb bB Bb BB}
+do_test collate2-2.6 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE NOT c < 'aa' ORDER BY 1;
+ }
+} {aa ba Ab Bb ab bb}
+do_test collate2-2.7 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE NOT a = 'aa';
+ }
+} {ab ba bb aA aB bA bB Aa Ab Ba Bb AA AB BA BB}
+do_test collate2-2.8 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE NOT b = 'aa';
+ }
+} {ab ba bb aB bA bB Ab Ba Bb AB BA BB}
+do_test collate2-2.9 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE NOT c = 'aa';
+ }
+} {ab ba bb aA aB bA bB Aa Ab Ba Bb AA AB BA BB}
+do_test collate2-2.10 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE NOT a >= 'aa' ORDER BY 1;
+ }
+} {AA AB Aa Ab BA BB Ba Bb aA aB}
+do_test collate2-2.11 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE NOT b >= 'aa' ORDER BY 1, oid;
+ }
+} {}
+do_test collate2-2.12 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE NOT c >= 'aa' ORDER BY 1;
+ }
+} {AA BA aA bA AB BB aB bB Aa Ba}
+do_test collate2-2.13 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE NOT a <= 'aa' ORDER BY 1;
+ }
+} {ab bA bB ba bb}
+do_test collate2-2.14 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE NOT b <= 'aa' ORDER BY 1, oid;
+ }
+} {ab aB Ab AB ba bA Ba BA bb bB Bb BB}
+do_test collate2-2.15 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE NOT c <= 'aa' ORDER BY 1;
+ }
+} {ba Ab Bb ab bb}
+do_test collate2-2.16 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE a NOT BETWEEN 'Aa' AND 'Bb' ORDER BY 1;
+ }
+} {AA AB aA aB aa ab bA bB ba bb}
+do_test collate2-2.17 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE b NOT BETWEEN 'Aa' AND 'Bb' ORDER BY 1, oid;
+ }
+} {}
+do_test collate2-2.18 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE c NOT BETWEEN 'Aa' AND 'Bb' ORDER BY 1;
+ }
+} {AA BA aA bA AB BB aB bB ab bb}
+do_test collate2-2.19 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE NOT CASE a WHEN 'aa' THEN 1 ELSE 0 END;
+ }
+} {{} ab ba bb aA aB bA bB Aa Ab Ba Bb AA AB BA BB}
+do_test collate2-2.20 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE NOT CASE b WHEN 'aa' THEN 1 ELSE 0 END;
+ }
+} {{} ab ba bb aB bA bB Ab Ba Bb AB BA BB}
+do_test collate2-2.21 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE NOT CASE c WHEN 'aa' THEN 1 ELSE 0 END;
+ }
+} {{} ab ba bb aA aB bA bB Aa Ab Ba Bb AA AB BA BB}
+
+ifcapable subquery {
+ do_test collate2-2.22 {
+ execsql {
+ SELECT a FROM collate2t1 WHERE NOT a IN ('aa', 'bb');
+ }
+ } {ab ba aA aB bA bB Aa Ab Ba Bb AA AB BA BB}
+ do_test collate2-2.23 {
+ execsql {
+ SELECT b FROM collate2t1 WHERE NOT b IN ('aa', 'bb');
+ }
+ } {ab ba aB bA Ab Ba AB BA}
+ do_test collate2-2.24 {
+ execsql {
+ SELECT c FROM collate2t1 WHERE NOT c IN ('aa', 'bb');
+ }
+ } {ab ba aA aB bA bB Aa Ab Ba Bb AA AB BA BB}
+ do_test collate2-2.25 {
+ execsql {
+ SELECT a FROM collate2t1
+ WHERE NOT a IN (SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb'));
+ }
+ } {ab ba aA aB bA bB Aa Ab Ba Bb AA AB BA BB}
+ do_test collate2-2.26 {
+ execsql {
+ SELECT b FROM collate2t1
+ WHERE NOT b IN (SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb'));
+ }
+ } {ab ba aB bA Ab Ba AB BA}
+ do_test collate2-2.27 {
+ execsql {
+ SELECT c FROM collate2t1
+ WHERE NOT c IN (SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb'));
+ }
+ } {ab ba aA aB bA bB Aa Ab Ba Bb AA AB BA BB}
+}
+
+do_test collate2-3.1 {
+ execsql {
+ SELECT a > 'aa' FROM collate2t1;
+ }
+} {{} 0 1 1 1 0 0 1 1 0 0 0 0 0 0 0 0}
+do_test collate2-3.2 {
+ execsql {
+ SELECT b > 'aa' FROM collate2t1;
+ }
+} {{} 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1}
+do_test collate2-3.3 {
+ execsql {
+ SELECT c > 'aa' FROM collate2t1;
+ }
+} {{} 0 1 1 1 0 0 0 0 0 1 0 1 0 0 0 0}
+do_test collate2-3.4 {
+ execsql {
+ SELECT a < 'aa' FROM collate2t1;
+ }
+} {{} 0 0 0 0 1 1 0 0 1 1 1 1 1 1 1 1}
+do_test collate2-3.5 {
+ execsql {
+ SELECT b < 'aa' FROM collate2t1;
+ }
+} {{} 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0}
+do_test collate2-3.6 {
+ execsql {
+ SELECT c < 'aa' FROM collate2t1;
+ }
+} {{} 0 0 0 0 1 1 1 1 1 0 1 0 1 1 1 1}
+do_test collate2-3.7 {
+ execsql {
+ SELECT a = 'aa' FROM collate2t1;
+ }
+} {{} 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0}
+do_test collate2-3.8 {
+ execsql {
+ SELECT b = 'aa' FROM collate2t1;
+ }
+} {{} 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0}
+do_test collate2-3.9 {
+ execsql {
+ SELECT c = 'aa' FROM collate2t1;
+ }
+} {{} 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0}
+do_test collate2-3.10 {
+ execsql {
+ SELECT a <= 'aa' FROM collate2t1;
+ }
+} {{} 1 0 0 0 1 1 0 0 1 1 1 1 1 1 1 1}
+do_test collate2-3.11 {
+ execsql {
+ SELECT b <= 'aa' FROM collate2t1;
+ }
+} {{} 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0}
+do_test collate2-3.12 {
+ execsql {
+ SELECT c <= 'aa' FROM collate2t1;
+ }
+} {{} 1 0 0 0 1 1 1 1 1 0 1 0 1 1 1 1}
+do_test collate2-3.13 {
+ execsql {
+ SELECT a >= 'aa' FROM collate2t1;
+ }
+} {{} 1 1 1 1 0 0 1 1 0 0 0 0 0 0 0 0}
+do_test collate2-3.14 {
+ execsql {
+ SELECT b >= 'aa' FROM collate2t1;
+ }
+} {{} 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1}
+do_test collate2-3.15 {
+ execsql {
+ SELECT c >= 'aa' FROM collate2t1;
+ }
+} {{} 1 1 1 1 0 0 0 0 0 1 0 1 0 0 0 0}
+do_test collate2-3.16 {
+ execsql {
+ SELECT a BETWEEN 'Aa' AND 'Bb' FROM collate2t1;
+ }
+} {{} 0 0 0 0 0 0 0 0 1 1 1 1 0 0 1 1}
+do_test collate2-3.17 {
+ execsql {
+ SELECT b BETWEEN 'Aa' AND 'Bb' FROM collate2t1;
+ }
+} {{} 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1}
+do_test collate2-3.18 {
+ execsql {
+ SELECT c BETWEEN 'Aa' AND 'Bb' FROM collate2t1;
+ }
+} {{} 1 0 1 0 0 0 0 0 1 1 1 1 0 0 0 0}
+do_test collate2-3.19 {
+ execsql {
+ SELECT CASE a WHEN 'aa' THEN 1 ELSE 0 END FROM collate2t1;
+ }
+} {0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0}
+do_test collate2-3.20 {
+ execsql {
+ SELECT CASE b WHEN 'aa' THEN 1 ELSE 0 END FROM collate2t1;
+ }
+} {0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0}
+do_test collate2-3.21 {
+ execsql {
+ SELECT CASE c WHEN 'aa' THEN 1 ELSE 0 END FROM collate2t1;
+ }
+} {0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0}
+
+ifcapable subquery {
+ do_test collate2-3.22 {
+ execsql {
+ SELECT a IN ('aa', 'bb') FROM collate2t1;
+ }
+ } {{} 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0}
+ do_test collate2-3.23 {
+ execsql {
+ SELECT b IN ('aa', 'bb') FROM collate2t1;
+ }
+ } {{} 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1}
+ do_test collate2-3.24 {
+ execsql {
+ SELECT c IN ('aa', 'bb') FROM collate2t1;
+ }
+ } {{} 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0}
+ do_test collate2-3.25 {
+ execsql {
+ SELECT a IN (SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb'))
+ FROM collate2t1;
+ }
+ } {{} 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0}
+ do_test collate2-3.26 {
+ execsql {
+ SELECT b IN (SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb'))
+ FROM collate2t1;
+ }
+ } {{} 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1}
+ do_test collate2-3.27 {
+ execsql {
+ SELECT c IN (SELECT a FROM collate2t1 WHERE a IN ('aa', 'bb'))
+ FROM collate2t1;
+ }
+ } {{} 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0}
+}
+
+do_test collate2-4.0 {
+ execsql {
+ CREATE TABLE collate2t2(b COLLATE binary);
+ CREATE TABLE collate2t3(b text);
+ INSERT INTO collate2t2 VALUES('aa');
+ INSERT INTO collate2t3 VALUES('aa');
+ }
+} {}
+
+# Test that when both sides of a binary comparison operator have
+# default collation types, the collate type for the leftmost term
+# is used.
+do_test collate2-4.1 {
+ execsql {
+ SELECT collate2t1.a FROM collate2t1, collate2t2
+ WHERE collate2t1.b = collate2t2.b;
+ }
+} {aa aA Aa AA}
+do_test collate2-4.2 {
+ execsql {
+ SELECT collate2t1.a FROM collate2t1, collate2t2
+ WHERE collate2t2.b = collate2t1.b;
+ }
+} {aa}
+
+# Test that when one side has a default collation type and the other
+# does not, the default collation type is used.
+do_test collate2-4.3 {
+ execsql {
+ SELECT collate2t1.a FROM collate2t1, collate2t3
+ WHERE collate2t1.b = collate2t3.b||'';
+ }
+} {aa aA Aa AA}
+do_test collate2-4.4 {
+ execsql {
+ SELECT collate2t1.a FROM collate2t1, collate2t3
+ WHERE collate2t3.b||'' = collate2t1.b;
+ }
+} {aa aA Aa AA}
+
+do_test collate2-4.5 {
+ execsql {
+ DROP TABLE collate2t3;
+ }
+} {}
+
+#
+# Test that the default collation types are used when the JOIN syntax
+# is used in place of a WHERE clause.
+#
+# SQLite transforms the JOIN syntax into a WHERE clause internally, so
+# the focus of these tests is to ensure that the table on the left-hand-side
+# of the join determines the collation type used.
+#
+do_test collate2-5.0 {
+ execsql {
+ SELECT collate2t1.b FROM collate2t1 JOIN collate2t2 USING (b);
+ }
+} {aa aA Aa AA}
+do_test collate2-5.1 {
+ execsql {
+ SELECT collate2t1.b FROM collate2t2 JOIN collate2t1 USING (b);
+ }
+} {aa}
+do_test collate2-5.2 {
+ execsql {
+ SELECT collate2t1.b FROM collate2t1 NATURAL JOIN collate2t2;
+ }
+} {aa aA Aa AA}
+do_test collate2-5.3 {
+ execsql {
+ SELECT collate2t1.b FROM collate2t2 NATURAL JOIN collate2t1;
+ }
+} {aa}
+do_test collate2-5.4 {
+ execsql {
+ SELECT collate2t2.b FROM collate2t1 LEFT OUTER JOIN collate2t2 USING (b) order by collate2t1.oid;
+ }
+} {{} aa {} {} {} aa {} {} {} aa {} {} {} aa {} {} {}}
+do_test collate2-5.5 {
+ execsql {
+ SELECT collate2t1.b, collate2t2.b FROM collate2t2 LEFT OUTER JOIN collate2t1 USING (b);
+ }
+} {aa aa}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/collate3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/collate3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,429 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is missing collation sequences and the collation factory.
+#
+# $Id: collate3.test,v 1.11 2005/09/08 01:58:43 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+#
+# Tests are organised as follows:
+#
+# collate3.1.* - Errors related to unknown collation sequences.
+# collate3.2.* - Errors related to undefined collation sequences.
+# collate3.3.* - Writing to a table that has an index with an undefined c.s.
+# collate3.4.* - Misc errors.
+# collate3.5.* - Collation factory.
+#
+
+#
+# These tests ensure that when a user executes a statement with an
+# unknown collation sequence an error is returned.
+#
+do_test collate3-1.0 {
+ execsql {
+ CREATE TABLE collate3t1(c1);
+ }
+} {}
+do_test collate3-1.1 {
+ catchsql {
+ SELECT * FROM collate3t1 ORDER BY 1 collate garbage;
+ }
+} {1 {no such collation sequence: garbage}}
+do_test collate3-1.2 {
+ catchsql {
+ CREATE TABLE collate3t2(c1 collate garbage);
+ }
+} {1 {no such collation sequence: garbage}}
+do_test collate3-1.3 {
+ catchsql {
+ CREATE INDEX collate3i1 ON collate3t1(c1 COLLATE garbage);
+ }
+} {1 {no such collation sequence: garbage}}
+
+execsql {
+ DROP TABLE collate3t1;
+}
+
+#
+# Create a table with a default collation sequence, then close
+# and re-open the database without re-registering the collation
+# sequence. Then make sure the library stops us from using
+# the collation sequence in:
+# * an explicitly collated ORDER BY
+# * an ORDER BY that uses the default collation sequence
+# * an expression (=)
+# * a CREATE TABLE statement
+# * a CREATE INDEX statement that uses a default collation sequence
+# * a GROUP BY that uses the default collation sequence
+# * a SELECT DISTINCT that uses the default collation sequence
+# * Compound SELECTs that uses the default collation sequence
+# * An ORDER BY on a compound SELECT with an explicit ORDER BY.
+#
+do_test collate3-2.0 {
+ db collate string_compare {string compare}
+ execsql {
+ CREATE TABLE collate3t1(c1 COLLATE string_compare, c2);
+ }
+ db close
+ sqlite3 db test.db
+ expr 0
+} 0
+do_test collate3-2.1 {
+ catchsql {
+ SELECT * FROM collate3t1 ORDER BY 1 COLLATE string_compare;
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-2.2 {
+ catchsql {
+ SELECT * FROM collate3t1 ORDER BY c1;
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-2.3 {
+ catchsql {
+ SELECT * FROM collate3t1 WHERE c1 = 'xxx';
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-2.4 {
+ catchsql {
+ CREATE TABLE collate3t2(c1 COLLATE string_compare);
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-2.5 {
+ catchsql {
+ CREATE INDEX collate3t1_i1 ON collate3t1(c1);
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-2.6 {
+ catchsql {
+ SELECT * FROM collate3t1;
+ }
+} {0 {}}
+do_test collate3-2.7.1 {
+ catchsql {
+ SELECT count(*) FROM collate3t1 GROUP BY c1;
+ }
+} {1 {no such collation sequence: string_compare}}
+# do_test collate3-2.7.2 {
+# catchsql {
+# SELECT * FROM collate3t1 GROUP BY c1;
+# }
+# } {1 {GROUP BY may only be used on aggregate queries}}
+do_test collate3-2.7.2 {
+ catchsql {
+ SELECT * FROM collate3t1 GROUP BY c1;
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-2.8 {
+ catchsql {
+ SELECT DISTINCT c1 FROM collate3t1;
+ }
+} {1 {no such collation sequence: string_compare}}
+
+ifcapable compound {
+ do_test collate3-2.9 {
+ catchsql {
+ SELECT c1 FROM collate3t1 UNION SELECT c1 FROM collate3t1;
+ }
+ } {1 {no such collation sequence: string_compare}}
+ do_test collate3-2.10 {
+ catchsql {
+ SELECT c1 FROM collate3t1 EXCEPT SELECT c1 FROM collate3t1;
+ }
+ } {1 {no such collation sequence: string_compare}}
+ do_test collate3-2.11 {
+ catchsql {
+ SELECT c1 FROM collate3t1 INTERSECT SELECT c1 FROM collate3t1;
+ }
+ } {1 {no such collation sequence: string_compare}}
+ do_test collate3-2.12 {
+ catchsql {
+ SELECT c1 FROM collate3t1 UNION ALL SELECT c1 FROM collate3t1;
+ }
+ } {0 {}}
+ do_test collate3-2.13 {
+btree_breakpoint
+ catchsql {
+ SELECT 10 UNION ALL SELECT 20 ORDER BY 1 COLLATE string_compare;
+ }
+ } {1 {no such collation sequence: string_compare}}
+ do_test collate3-2.14 {
+ catchsql {
+ SELECT 10 INTERSECT SELECT 20 ORDER BY 1 COLLATE string_compare;
+ }
+ } {1 {no such collation sequence: string_compare}}
+ do_test collate3-2.15 {
+ catchsql {
+ SELECT 10 EXCEPT SELECT 20 ORDER BY 1 COLLATE string_compare;
+ }
+ } {1 {no such collation sequence: string_compare}}
+ do_test collate3-2.16 {
+ catchsql {
+ SELECT 10 UNION SELECT 20 ORDER BY 1 COLLATE string_compare;
+ }
+ } {1 {no such collation sequence: string_compare}}
+ do_test collate3-2.17 {
+ catchsql {
+ SELECT c1 FROM collate3t1 UNION ALL SELECT c1 FROM collate3t1 ORDER BY 1;
+ }
+ } {1 {no such collation sequence: string_compare}}
+} ;# ifcapable compound
+
+#
+# Create an index that uses a collation sequence then close and
+# re-open the database without re-registering the collation
+# sequence. Then check that for the table with the index
+# * An INSERT fails,
+# * An UPDATE on the column with the index fails,
+# * An UPDATE on a different column succeeds.
+# * A DELETE with a WHERE clause fails
+# * A DELETE without a WHERE clause succeeds
+#
+# Also, ensure that the restrictions tested by collate3-2.* still
+# apply after the index has been created.
+#
+do_test collate3-3.0 {
+ db collate string_compare {string compare}
+ execsql {
+ CREATE INDEX collate3t1_i1 ON collate3t1(c1);
+ INSERT INTO collate3t1 VALUES('xxx', 'yyy');
+ }
+ db close
+ sqlite3 db test.db
+ expr 0
+} 0
+db eval {select * from collate3t1}
+do_test collate3-3.1 {
+ catchsql {
+ INSERT INTO collate3t1 VALUES('xxx', 0);
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-3.2 {
+ catchsql {
+ UPDATE collate3t1 SET c1 = 'xxx';
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-3.3 {
+ catchsql {
+ UPDATE collate3t1 SET c2 = 'xxx';
+ }
+} {0 {}}
+do_test collate3-3.4 {
+ catchsql {
+ DELETE FROM collate3t1 WHERE 1;
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-3.5 {
+ catchsql {
+ SELECT * FROM collate3t1;
+ }
+} {0 {xxx xxx}}
+do_test collate3-3.6 {
+ catchsql {
+ DELETE FROM collate3t1;
+ }
+} {0 {}}
+ifcapable {integrityck} {
+ do_test collate3-3.8 {
+ catchsql {
+ PRAGMA integrity_check
+ }
+ } {1 {no such collation sequence: string_compare}}
+}
+do_test collate3-3.9 {
+ catchsql {
+ SELECT * FROM collate3t1;
+ }
+} {0 {}}
+do_test collate3-3.10 {
+ catchsql {
+ SELECT * FROM collate3t1 ORDER BY 1 COLLATE string_compare;
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-3.11 {
+ catchsql {
+ SELECT * FROM collate3t1 ORDER BY c1;
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-3.12 {
+ catchsql {
+ SELECT * FROM collate3t1 WHERE c1 = 'xxx';
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-3.13 {
+ catchsql {
+ CREATE TABLE collate3t2(c1 COLLATE string_compare);
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-3.14 {
+ catchsql {
+ CREATE INDEX collate3t1_i2 ON collate3t1(c1);
+ }
+} {1 {no such collation sequence: string_compare}}
+do_test collate3-3.15 {
+ execsql {
+ DROP TABLE collate3t1;
+ }
+} {}
+
+# Check we can create an index that uses an explicit collation
+# sequence and then close and re-open the database.
+do_test collate3-4.6 {
+ db collate user_defined "string compare"
+ execsql {
+ CREATE TABLE collate3t1(a, b);
+ INSERT INTO collate3t1 VALUES('hello', NULL);
+ CREATE INDEX collate3i1 ON collate3t1(a COLLATE user_defined);
+ }
+} {}
+do_test collate3-4.7 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM collate3t1 ORDER BY a COLLATE user_defined;
+ }
+} {1 {no such collation sequence: user_defined}}
+do_test collate3-4.8 {
+ db collate user_defined "string compare"
+ catchsql {
+ SELECT * FROM collate3t1 ORDER BY a COLLATE user_defined;
+ }
+} {0 {hello {}}}
+do_test collate3-4.8.1 {
+ db close
+ lindex [catch {
+ sqlite3 db test.db
+ }] 0
+} {0}
+do_test collate3-4.8.2 {
+ execsql {
+ DROP TABLE collate3t1;
+ }
+} {}
+
+# Compare strings as numbers.
+proc numeric_compare {lhs rhs} {
+ if {$rhs > $lhs} {
+ set res -1
+ } else {
+ set res [expr ($lhs > $rhs)?1:0]
+ }
+ return $res
+}
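+
+# For example (illustrative only), the comparator above orders '2' before
+# '12' and '101' (numeric order), whereas the default text ordering would
+# give '101' < '12' < '2':
+#
+#   numeric_compare 2 101    ;# -1
+#   numeric_compare 101 12   ;#  1
+#   numeric_compare 12 12    ;#  0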
+
+# Check we can create a view that uses an explicit collation
+# sequence and then close and re-open the database.
+ifcapable view {
+do_test collate3-4.9 {
+ db collate user_defined numeric_compare
+ execsql {
+ CREATE TABLE collate3t1(a, b);
+ INSERT INTO collate3t1 VALUES('2', NULL);
+ INSERT INTO collate3t1 VALUES('101', NULL);
+ INSERT INTO collate3t1 VALUES('12', NULL);
+ CREATE VIEW collate3v1 AS SELECT * FROM collate3t1
+ ORDER BY 1 COLLATE user_defined;
+ SELECT * FROM collate3v1;
+ }
+} {2 {} 12 {} 101 {}}
+do_test collate3-4.10 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM collate3v1;
+ }
+} {1 {no such collation sequence: user_defined}}
+do_test collate3-4.11 {
+ db collate user_defined numeric_compare
+ catchsql {
+ SELECT * FROM collate3v1;
+ }
+} {0 {2 {} 12 {} 101 {}}}
+do_test collate3-4.12 {
+ execsql {
+ DROP TABLE collate3t1;
+ }
+} {}
+} ;# ifcapable view
+
+#
+# Test the collation factory. In the code, the "no such collation sequence"
+# message is only generated in two places. So these tests just test that
+# the collation factory can be called once from each of those points.
+#
+do_test collate3-5.0 {
+ catchsql {
+ CREATE TABLE collate3t1(a);
+ INSERT INTO collate3t1 VALUES(10);
+ SELECT a FROM collate3t1 ORDER BY 1 COLLATE unk;
+ }
+} {1 {no such collation sequence: unk}}
+do_test collate3-5.1 {
+ set ::cfact_cnt 0
+ proc cfact {nm} {
+ db collate $nm {string compare}
+ incr ::cfact_cnt
+ }
+ db collation_needed cfact
+} {}
+do_test collate3-5.2 {
+ catchsql {
+ SELECT a FROM collate3t1 ORDER BY 1 COLLATE unk;
+ }
+} {0 10}
+do_test collate3-5.3 {
+ set ::cfact_cnt
+} {1}
+do_test collate3-5.4 {
+ catchsql {
+ SELECT a FROM collate3t1 ORDER BY 1 COLLATE unk;
+ }
+} {0 10}
+do_test collate3-5.5 {
+ set ::cfact_cnt
+} {1}
+do_test collate3-5.6 {
+ catchsql {
+ SELECT a FROM collate3t1 ORDER BY 1 COLLATE unk;
+ }
+} {0 10}
+do_test collate3-5.7 {
+ execsql {
+ DROP TABLE collate3t1;
+ CREATE TABLE collate3t1(a COLLATE unk);
+ }
+ db close
+ sqlite3 db test.db
+ catchsql {
+ SELECT a FROM collate3t1 ORDER BY 1;
+ }
+} {1 {no such collation sequence: unk}}
+do_test collate3-5.8 {
+ set ::cfact_cnt 0
+ proc cfact {nm} {
+ db collate $nm {string compare}
+ incr ::cfact_cnt
+ }
+ db collation_needed cfact
+ catchsql {
+ SELECT a FROM collate3t1 ORDER BY 1;
+ }
+} {0 {}}
+
+do_test collate3-5.9 {
+ execsql {
+ DROP TABLE collate3t1;
+ }
+} {}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/collate4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/collate4.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,700 @@
+#
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is the use of indices with user-defined
+# collation sequences.
+#
+# $Id: collate4.test,v 1.8 2005/04/01 10:47:40 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+db collate TEXT text_collate
+proc text_collate {a b} {
+ return [string compare $a $b]
+}
+
+# Do an SQL statement. Append the search count to the end of the result.
+#
+proc count sql {
+ set ::sqlite_search_count 0
+ return [concat [execsql $sql] $::sqlite_search_count]
+}
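+# (In the results below the trailing number is this search count; a
+# smaller count generally indicates that an index was used to satisfy
+# the query.)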
+
+# This procedure executes the SQL and then checks whether a separate
+# sorting pass was needed to produce the results (using the global
+# ::sqlite_sort_count counter). It appends "sort" to the result if a
+# sort was performed and "nosort" if the rows came back in order
+# without one.
+#
+proc cksort {sql} {
+ set ::sqlite_sort_count 0
+ set data [execsql $sql]
+ if {$::sqlite_sort_count} {set x sort} {set x nosort}
+ lappend data $x
+ return $data
+}
+
+#
+# Test cases are organized roughly as follows:
+#
+# collate4-1.* ORDER BY.
+# collate4-2.* WHERE clauses.
+# collate4-3.* constraints (primary key, unique).
+# collate4-4.* simple min() or max() queries.
+# collate4-5.* REINDEX command
+# collate4-6.* INTEGER PRIMARY KEY indices.
+#
+
+#
+# These tests - collate4-1.* - check that indices are correctly
+# selected or not selected to implement ORDER BY clauses when
+# user defined collation sequences are involved.
+#
+# Because these tests also exercise all the different ways indices
+# can be created, they also serve to verify that indices are correctly
+# initialised with user-defined collation sequences when they are
+# created.
+#
+# Tests named collate4-1.1.* use indices with a single column. Tests
+# collate4-1.2.* use indices with two columns.
+#
+do_test collate4-1.1.0 {
+ execsql {
+ CREATE TABLE collate4t1(a COLLATE NOCASE, b COLLATE TEXT);
+ INSERT INTO collate4t1 VALUES( 'a', 'a' );
+ INSERT INTO collate4t1 VALUES( 'b', 'b' );
+ INSERT INTO collate4t1 VALUES( NULL, NULL );
+ INSERT INTO collate4t1 VALUES( 'B', 'B' );
+ INSERT INTO collate4t1 VALUES( 'A', 'A' );
+ CREATE INDEX collate4i1 ON collate4t1(a);
+ CREATE INDEX collate4i2 ON collate4t1(b);
+ }
+} {}
+do_test collate4-1.1.1 {
+ cksort {SELECT a FROM collate4t1 ORDER BY a}
+} {{} a A b B nosort}
+do_test collate4-1.1.2 {
+ cksort {SELECT a FROM collate4t1 ORDER BY a COLLATE NOCASE}
+} {{} a A b B nosort}
+do_test collate4-1.1.3 {
+ cksort {SELECT a FROM collate4t1 ORDER BY a COLLATE TEXT}
+} {{} A B a b sort}
+do_test collate4-1.1.4 {
+ cksort {SELECT b FROM collate4t1 ORDER BY b}
+} {{} A B a b nosort}
+do_test collate4-1.1.5 {
+ cksort {SELECT b FROM collate4t1 ORDER BY b COLLATE TEXT}
+} {{} A B a b nosort}
+do_test collate4-1.1.6 {
+ cksort {SELECT b FROM collate4t1 ORDER BY b COLLATE NOCASE}
+} {{} a A b B sort}
+
+do_test collate4-1.1.7 {
+ execsql {
+ CREATE TABLE collate4t2(
+ a PRIMARY KEY COLLATE NOCASE,
+ b UNIQUE COLLATE TEXT
+ );
+ INSERT INTO collate4t2 VALUES( 'a', 'a' );
+ INSERT INTO collate4t2 VALUES( NULL, NULL );
+ INSERT INTO collate4t2 VALUES( 'B', 'B' );
+ }
+} {}
+do_test collate4-1.1.8 {
+ cksort {SELECT a FROM collate4t2 ORDER BY a}
+} {{} a B nosort}
+do_test collate4-1.1.9 {
+ cksort {SELECT a FROM collate4t2 ORDER BY a COLLATE NOCASE}
+} {{} a B nosort}
+do_test collate4-1.1.10 {
+ cksort {SELECT a FROM collate4t2 ORDER BY a COLLATE TEXT}
+} {{} B a sort}
+do_test collate4-1.1.11 {
+ cksort {SELECT b FROM collate4t2 ORDER BY b}
+} {{} B a nosort}
+do_test collate4-1.1.12 {
+ cksort {SELECT b FROM collate4t2 ORDER BY b COLLATE TEXT}
+} {{} B a nosort}
+do_test collate4-1.1.13 {
+ cksort {SELECT b FROM collate4t2 ORDER BY b COLLATE NOCASE}
+} {{} a B sort}
+
+do_test collate4-1.1.14 {
+ execsql {
+ CREATE TABLE collate4t3(
+ b COLLATE TEXT,
+ a COLLATE NOCASE,
+ UNIQUE(a), PRIMARY KEY(b)
+ );
+ INSERT INTO collate4t3 VALUES( 'a', 'a' );
+ INSERT INTO collate4t3 VALUES( NULL, NULL );
+ INSERT INTO collate4t3 VALUES( 'B', 'B' );
+ }
+} {}
+do_test collate4-1.1.15 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a}
+} {{} a B nosort}
+do_test collate4-1.1.16 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a COLLATE NOCASE}
+} {{} a B nosort}
+do_test collate4-1.1.17 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a COLLATE TEXT}
+} {{} B a sort}
+do_test collate4-1.1.18 {
+ cksort {SELECT b FROM collate4t3 ORDER BY b}
+} {{} B a nosort}
+do_test collate4-1.1.19 {
+ cksort {SELECT b FROM collate4t3 ORDER BY b COLLATE TEXT}
+} {{} B a nosort}
+do_test collate4-1.1.20 {
+ cksort {SELECT b FROM collate4t3 ORDER BY b COLLATE NOCASE}
+} {{} a B sort}
+
+do_test collate4-1.1.21 {
+ execsql {
+ CREATE TABLE collate4t4(a COLLATE NOCASE, b COLLATE TEXT);
+ INSERT INTO collate4t4 VALUES( 'a', 'a' );
+ INSERT INTO collate4t4 VALUES( 'b', 'b' );
+ INSERT INTO collate4t4 VALUES( NULL, NULL );
+ INSERT INTO collate4t4 VALUES( 'B', 'B' );
+ INSERT INTO collate4t4 VALUES( 'A', 'A' );
+ CREATE INDEX collate4i3 ON collate4t4(a COLLATE TEXT);
+ CREATE INDEX collate4i4 ON collate4t4(b COLLATE NOCASE);
+ }
+} {}
+do_test collate4-1.1.22 {
+ cksort {SELECT a FROM collate4t4 ORDER BY a}
+} {{} a A b B sort}
+do_test collate4-1.1.23 {
+ cksort {SELECT a FROM collate4t4 ORDER BY a COLLATE NOCASE}
+} {{} a A b B sort}
+do_test collate4-1.1.24 {
+ cksort {SELECT a FROM collate4t4 ORDER BY a COLLATE TEXT}
+} {{} A B a b nosort}
+do_test collate4-1.1.25 {
+ cksort {SELECT b FROM collate4t4 ORDER BY b}
+} {{} A B a b sort}
+do_test collate4-1.1.26 {
+ cksort {SELECT b FROM collate4t4 ORDER BY b COLLATE TEXT}
+} {{} A B a b sort}
+do_test collate4-1.1.27 {
+ cksort {SELECT b FROM collate4t4 ORDER BY b COLLATE NOCASE}
+} {{} a A b B nosort}
+
+do_test collate4-1.1.30 {
+ execsql {
+ DROP TABLE collate4t1;
+ DROP TABLE collate4t2;
+ DROP TABLE collate4t3;
+ DROP TABLE collate4t4;
+ }
+} {}
+
+do_test collate4-1.2.0 {
+ execsql {
+ CREATE TABLE collate4t1(a COLLATE NOCASE, b COLLATE TEXT);
+ INSERT INTO collate4t1 VALUES( 'a', 'a' );
+ INSERT INTO collate4t1 VALUES( 'b', 'b' );
+ INSERT INTO collate4t1 VALUES( NULL, NULL );
+ INSERT INTO collate4t1 VALUES( 'B', 'B' );
+ INSERT INTO collate4t1 VALUES( 'A', 'A' );
+ CREATE INDEX collate4i1 ON collate4t1(a, b);
+ }
+} {}
+do_test collate4-1.2.1 {
+ cksort {SELECT a FROM collate4t1 ORDER BY a}
+} {{} A a B b nosort}
+do_test collate4-1.2.2 {
+ cksort {SELECT a FROM collate4t1 ORDER BY a COLLATE nocase}
+} {{} A a B b nosort}
+do_test collate4-1.2.3 {
+ cksort {SELECT a FROM collate4t1 ORDER BY a COLLATE text}
+} {{} A B a b sort}
+do_test collate4-1.2.4 {
+ cksort {SELECT a FROM collate4t1 ORDER BY a, b}
+} {{} A a B b nosort}
+do_test collate4-1.2.5 {
+ cksort {SELECT a FROM collate4t1 ORDER BY a, b COLLATE nocase}
+} {{} a A b B sort}
+do_test collate4-1.2.6 {
+ cksort {SELECT a FROM collate4t1 ORDER BY a, b COLLATE text}
+} {{} A a B b nosort}
+
+do_test collate4-1.2.7 {
+ execsql {
+ CREATE TABLE collate4t2(
+ a COLLATE NOCASE,
+ b COLLATE TEXT,
+ PRIMARY KEY(a, b)
+ );
+ INSERT INTO collate4t2 VALUES( 'a', 'a' );
+ INSERT INTO collate4t2 VALUES( NULL, NULL );
+ INSERT INTO collate4t2 VALUES( 'B', 'B' );
+ }
+} {}
+do_test collate4-1.2.8 {
+ cksort {SELECT a FROM collate4t2 ORDER BY a}
+} {{} a B nosort}
+do_test collate4-1.2.9 {
+ cksort {SELECT a FROM collate4t2 ORDER BY a COLLATE nocase}
+} {{} a B nosort}
+do_test collate4-1.2.10 {
+ cksort {SELECT a FROM collate4t2 ORDER BY a COLLATE text}
+} {{} B a sort}
+do_test collate4-1.2.11 {
+ cksort {SELECT a FROM collate4t2 ORDER BY a, b}
+} {{} a B nosort}
+do_test collate4-1.2.12 {
+ cksort {SELECT a FROM collate4t2 ORDER BY a, b COLLATE nocase}
+} {{} a B sort}
+do_test collate4-1.2.13 {
+ cksort {SELECT a FROM collate4t2 ORDER BY a, b COLLATE text}
+} {{} a B nosort}
+
+do_test collate4-1.2.14 {
+ execsql {
+ CREATE TABLE collate4t3(a COLLATE NOCASE, b COLLATE TEXT);
+ INSERT INTO collate4t3 VALUES( 'a', 'a' );
+ INSERT INTO collate4t3 VALUES( 'b', 'b' );
+ INSERT INTO collate4t3 VALUES( NULL, NULL );
+ INSERT INTO collate4t3 VALUES( 'B', 'B' );
+ INSERT INTO collate4t3 VALUES( 'A', 'A' );
+ CREATE INDEX collate4i2 ON collate4t3(a COLLATE TEXT, b COLLATE NOCASE);
+ }
+} {}
+do_test collate4-1.2.15 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a}
+} {{} a A b B sort}
+do_test collate4-1.2.16 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a COLLATE nocase}
+} {{} a A b B sort}
+do_test collate4-1.2.17 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a COLLATE text}
+} {{} A B a b nosort}
+do_test collate4-1.2.18 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a COLLATE text, b}
+} {{} A B a b sort}
+do_test collate4-1.2.19 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a COLLATE text, b COLLATE nocase}
+} {{} A B a b nosort}
+do_test collate4-1.2.20 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a COLLATE text, b COLLATE text}
+} {{} A B a b sort}
+do_test collate4-1.2.21 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a COLLATE text DESC}
+} {b a B A {} nosort}
+do_test collate4-1.2.22 {
+ cksort {SELECT a FROM collate4t3 ORDER BY a COLLATE text DESC, b}
+} {b a B A {} sort}
+do_test collate4-1.2.23 {
+ cksort {SELECT a FROM collate4t3
+ ORDER BY a COLLATE text DESC, b COLLATE nocase}
+} {b a B A {} sort}
+do_test collate4-1.2.24 {
+ cksort {SELECT a FROM collate4t3
+ ORDER BY a COLLATE text DESC, b COLLATE nocase DESC}
+} {b a B A {} nosort}
+
+do_test collate4-1.2.25 {
+ execsql {
+ DROP TABLE collate4t1;
+ DROP TABLE collate4t2;
+ DROP TABLE collate4t3;
+ }
+} {}
+
+#
+# These tests - collate4-2.* - check that indices are correctly
+# selected or not selected to implement WHERE clauses when user
+# defined collation sequences are involved.
+#
+# Indices may optimise WHERE clauses using <, >, <=, >=, = or IN
+# operators.
+#
+do_test collate4-2.1.0 {
+ execsql {
+ CREATE TABLE collate4t1(a COLLATE NOCASE);
+ CREATE TABLE collate4t2(b COLLATE TEXT);
+
+ INSERT INTO collate4t1 VALUES('a');
+ INSERT INTO collate4t1 VALUES('A');
+ INSERT INTO collate4t1 VALUES('b');
+ INSERT INTO collate4t1 VALUES('B');
+ INSERT INTO collate4t1 VALUES('c');
+ INSERT INTO collate4t1 VALUES('C');
+ INSERT INTO collate4t1 VALUES('d');
+ INSERT INTO collate4t1 VALUES('D');
+ INSERT INTO collate4t1 VALUES('e');
+ INSERT INTO collate4t1 VALUES('D');
+
+ INSERT INTO collate4t2 VALUES('A');
+ INSERT INTO collate4t2 VALUES('Z');
+ }
+} {}
+do_test collate4-2.1.1 {
+ count {
+ SELECT * FROM collate4t2, collate4t1 WHERE a = b;
+ }
+} {A a A A 19}
+do_test collate4-2.1.2 {
+ execsql {
+ CREATE INDEX collate4i1 ON collate4t1(a);
+ }
+ count {
+ SELECT * FROM collate4t2, collate4t1 WHERE a = b;
+ }
+} {A a A A 5}
+do_test collate4-2.1.3 {
+ count {
+ SELECT * FROM collate4t2, collate4t1 WHERE b = a;
+ }
+} {A A 19}
+do_test collate4-2.1.4 {
+ execsql {
+ DROP INDEX collate4i1;
+ CREATE INDEX collate4i1 ON collate4t1(a COLLATE TEXT);
+ }
+ count {
+ SELECT * FROM collate4t2, collate4t1 WHERE a = b;
+ }
+} {A a A A 19}
+do_test collate4-2.1.5 {
+ count {
+ SELECT * FROM collate4t2, collate4t1 WHERE b = a;
+ }
+} {A A 4}
+ifcapable subquery {
+ do_test collate4-2.1.6 {
+ count {
+ SELECT a FROM collate4t1 WHERE a IN (SELECT * FROM collate4t2);
+ }
+ } {a A 10}
+ do_test collate4-2.1.7 {
+ execsql {
+ DROP INDEX collate4i1;
+ CREATE INDEX collate4i1 ON collate4t1(a);
+ }
+ count {
+ SELECT a FROM collate4t1 WHERE a IN (SELECT * FROM collate4t2);
+ }
+ } {a A 6}
+ do_test collate4-2.1.8 {
+ count {
+ SELECT a FROM collate4t1 WHERE a IN ('z', 'a');
+ }
+ } {a A 5}
+ do_test collate4-2.1.9 {
+ execsql {
+ DROP INDEX collate4i1;
+ CREATE INDEX collate4i1 ON collate4t1(a COLLATE TEXT);
+ }
+ count {
+ SELECT a FROM collate4t1 WHERE a IN ('z', 'a');
+ }
+ } {a A 9}
+}
+do_test collate4-2.1.10 {
+ execsql {
+ DROP TABLE collate4t1;
+ DROP TABLE collate4t2;
+ }
+} {}
+
+do_test collate4-2.2.0 {
+ execsql {
+ CREATE TABLE collate4t1(a COLLATE nocase, b COLLATE text, c);
+ CREATE TABLE collate4t2(a COLLATE nocase, b COLLATE text, c COLLATE TEXT);
+
+ INSERT INTO collate4t1 VALUES('0', '0', '0');
+ INSERT INTO collate4t1 VALUES('0', '0', '1');
+ INSERT INTO collate4t1 VALUES('0', '1', '0');
+ INSERT INTO collate4t1 VALUES('0', '1', '1');
+ INSERT INTO collate4t1 VALUES('1', '0', '0');
+ INSERT INTO collate4t1 VALUES('1', '0', '1');
+ INSERT INTO collate4t1 VALUES('1', '1', '0');
+ INSERT INTO collate4t1 VALUES('1', '1', '1');
+ insert into collate4t2 SELECT * FROM collate4t1;
+ }
+} {}
+do_test collate4-2.2.1 {
+ count {
+ SELECT * FROM collate4t2 NATURAL JOIN collate4t1;
+ }
+} {0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1 63}
+do_test collate4-2.2.1b {
+ execsql {
+ CREATE INDEX collate4i1 ON collate4t1(a, b, c);
+ }
+ count {
+ SELECT * FROM collate4t2 NATURAL JOIN collate4t1;
+ }
+} {0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1 29}
+do_test collate4-2.2.2 {
+ execsql {
+ DROP INDEX collate4i1;
+ CREATE INDEX collate4i1 ON collate4t1(a, b, c COLLATE text);
+ }
+ count {
+ SELECT * FROM collate4t2 NATURAL JOIN collate4t1;
+ }
+} {0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 1 1 1 22}
+
+do_test collate4-2.2.10 {
+ execsql {
+ DROP TABLE collate4t1;
+ DROP TABLE collate4t2;
+ }
+} {}
+
+#
+# These tests - collate4-3.* verify that indices that implement
+# UNIQUE and PRIMARY KEY constraints operate correctly with user
+# defined collation sequences.
+#
+do_test collate4-3.0 {
+ execsql {
+ CREATE TABLE collate4t1(a PRIMARY KEY COLLATE NOCASE);
+ }
+} {}
+do_test collate4-3.1 {
+ catchsql {
+ INSERT INTO collate4t1 VALUES('abc');
+ INSERT INTO collate4t1 VALUES('ABC');
+ }
+} {1 {column a is not unique}}
+do_test collate4-3.2 {
+ execsql {
+ SELECT * FROM collate4t1;
+ }
+} {abc}
+do_test collate4-3.3 {
+ catchsql {
+ INSERT INTO collate4t1 SELECT upper(a) FROM collate4t1;
+ }
+} {1 {column a is not unique}}
+do_test collate4-3.4 {
+ catchsql {
+ INSERT INTO collate4t1 VALUES(1);
+ UPDATE collate4t1 SET a = 'abc';
+ }
+} {1 {column a is not unique}}
+do_test collate4-3.5 {
+ execsql {
+ DROP TABLE collate4t1;
+ CREATE TABLE collate4t1(a COLLATE NOCASE UNIQUE);
+ }
+} {}
+do_test collate4-3.6 {
+ catchsql {
+ INSERT INTO collate4t1 VALUES('abc');
+ INSERT INTO collate4t1 VALUES('ABC');
+ }
+} {1 {column a is not unique}}
+do_test collate4-3.7 {
+ execsql {
+ SELECT * FROM collate4t1;
+ }
+} {abc}
+do_test collate4-3.8 {
+ catchsql {
+ INSERT INTO collate4t1 SELECT upper(a) FROM collate4t1;
+ }
+} {1 {column a is not unique}}
+do_test collate4-3.9 {
+ catchsql {
+ INSERT INTO collate4t1 VALUES(1);
+ UPDATE collate4t1 SET a = 'abc';
+ }
+} {1 {column a is not unique}}
+do_test collate4-3.10 {
+ execsql {
+ DROP TABLE collate4t1;
+ CREATE TABLE collate4t1(a);
+ CREATE UNIQUE INDEX collate4i1 ON collate4t1(a COLLATE NOCASE);
+ }
+} {}
+do_test collate4-3.11 {
+ catchsql {
+ INSERT INTO collate4t1 VALUES('abc');
+ INSERT INTO collate4t1 VALUES('ABC');
+ }
+} {1 {column a is not unique}}
+do_test collate4-3.12 {
+ execsql {
+ SELECT * FROM collate4t1;
+ }
+} {abc}
+do_test collate4-3.13 {
+ catchsql {
+ INSERT INTO collate4t1 SELECT upper(a) FROM collate4t1;
+ }
+} {1 {column a is not unique}}
+do_test collate4-3.14 {
+ catchsql {
+ INSERT INTO collate4t1 VALUES(1);
+ UPDATE collate4t1 SET a = 'abc';
+ }
+} {1 {column a is not unique}}
+
+do_test collate4-3.15 {
+ execsql {
+ DROP TABLE collate4t1;
+ }
+} {}
+
+# Mimic the SQLite 2 collation type NUMERIC.
+db collate numeric numeric_collate
+proc numeric_collate {lhs rhs} {
+ if {$lhs == $rhs} {return 0}
+ return [expr ($lhs>$rhs)?1:-1]
+}
+
+#
+# These tests - collate4-4.* - check that min() and max() only ever
+# use indices constructed with the built-in collation type numeric.
+#
+# CHANGED: min() and max() now use the collation type. If there
+# is an index that can be used, it is used.
+#
+do_test collate4-4.0 {
+ execsql {
+ CREATE TABLE collate4t1(a COLLATE TEXT);
+ INSERT INTO collate4t1 VALUES('2');
+ INSERT INTO collate4t1 VALUES('10');
+ INSERT INTO collate4t1 VALUES('20');
+ INSERT INTO collate4t1 VALUES('104');
+ }
+} {}
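+# (Under the TEXT collation the four values order as
+# '10' < '104' < '2' < '20', which is why min() is 10 and max() is 20
+# in the tests that follow.)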
+do_test collate4-4.1 {
+ count {
+ SELECT max(a) FROM collate4t1
+ }
+} {20 3}
+do_test collate4-4.2 {
+ count {
+ SELECT min(a) FROM collate4t1
+ }
+} {10 3}
+do_test collate4-4.3 {
+ # Test that the index with collation type TEXT is used.
+ execsql {
+ CREATE INDEX collate4i1 ON collate4t1(a);
+ }
+ count {
+ SELECT min(a) FROM collate4t1;
+ }
+} {10 2}
+do_test collate4-4.4 {
+ count {
+ SELECT max(a) FROM collate4t1;
+ }
+} {20 1}
+do_test collate4-4.5 {
+ # Test that the index with collation type NUMERIC is not used.
+ execsql {
+ DROP INDEX collate4i1;
+ CREATE INDEX collate4i1 ON collate4t1(a COLLATE NUMERIC);
+ }
+ count {
+ SELECT min(a) FROM collate4t1;
+ }
+} {10 3}
+do_test collate4-4.6 {
+ count {
+ SELECT max(a) FROM collate4t1;
+ }
+} {20 3}
+do_test collate4-4.7 {
+ execsql {
+ DROP TABLE collate4t1;
+ }
+} {}
+
+# Also test the scalar min() and max() functions.
+#
+do_test collate4-4.8 {
+ execsql {
+ CREATE TABLE collate4t1(a COLLATE TEXT, b COLLATE NUMERIC);
+ INSERT INTO collate4t1 VALUES('11', '101');
+ INSERT INTO collate4t1 VALUES('101', '11')
+ }
+} {}
+do_test collate4-4.9 {
+ execsql {
+ SELECT max(a, b) FROM collate4t1;
+ }
+} {11 11}
+do_test collate4-4.10 {
+ execsql {
+ SELECT max(b, a) FROM collate4t1;
+ }
+} {101 101}
+do_test collate4-4.11 {
+ execsql {
+ SELECT max(a, '101') FROM collate4t1;
+ }
+} {11 101}
+do_test collate4-4.12 {
+ execsql {
+ SELECT max('101', a) FROM collate4t1;
+ }
+} {11 101}
+do_test collate4-4.13 {
+ execsql {
+ SELECT max(b, '101') FROM collate4t1;
+ }
+} {101 101}
+do_test collate4-4.14 {
+ execsql {
+ SELECT max('101', b) FROM collate4t1;
+ }
+} {101 101}
+
+do_test collate4-4.15 {
+ execsql {
+ DROP TABLE collate4t1;
+ }
+} {}
+
+#
+# These tests - collate4-6.* - ensure that implicit INTEGER PRIMARY KEY
+# indices do not confuse collation sequences.
+#
+# These indices are never used for sorting in SQLite. And you can't
+# create another index on an INTEGER PRIMARY KEY column, so we don't have
+# to test that.
+# (Revised 2004-Nov-22): The ROWID can be used for sorting now.
+#
+do_test collate4-6.0 {
+ execsql {
+ CREATE TABLE collate4t1(a INTEGER PRIMARY KEY);
+ INSERT INTO collate4t1 VALUES(101);
+ INSERT INTO collate4t1 VALUES(10);
+ INSERT INTO collate4t1 VALUES(15);
+ }
+} {}
+do_test collate4-6.1 {
+ cksort {
+ SELECT * FROM collate4t1 ORDER BY 1;
+ }
+} {10 15 101 nosort}
+do_test collate4-6.2 {
+ cksort {
+ SELECT * FROM collate4t1 ORDER BY oid;
+ }
+} {10 15 101 nosort}
+do_test collate4-6.3 {
+ cksort {
+ SELECT * FROM collate4t1 ORDER BY oid||'' COLLATE TEXT;
+ }
+} {10 101 15 sort}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/collate5.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/collate5.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,270 @@
+#
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing DISTINCT, UNION, INTERSECT and EXCEPT
+# SELECT statements that use user-defined collation sequences. Also
+# GROUP BY clauses that use user-defined collation sequences.
+#
+# $Id: collate5.test,v 1.5 2005/09/07 22:48:16 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+#
+# Tests are organised as follows:
+# collate5-1.* - DISTINCT
+# collate5-2.* - Compound SELECT
+# collate5-3.* - ORDER BY on compound SELECT
+# collate5-4.* - GROUP BY
+
+# Create the collation sequence 'TEXT', purely for aesthetic reasons. The
+# test cases in this script could just as easily use BINARY.
+db collate TEXT [list string compare]
+
+# Mimic the SQLite 2 collation type NUMERIC.
+db collate numeric numeric_collate
+proc numeric_collate {lhs rhs} {
+ if {$lhs == $rhs} {return 0}
+ return [expr ($lhs>$rhs)?1:-1]
+}
+
+#
+# These tests - collate5-1.* - focus on the DISTINCT keyword.
+#
+do_test collate5-1.0 {
+ execsql {
+ CREATE TABLE collate5t1(a COLLATE nocase, b COLLATE text);
+
+ INSERT INTO collate5t1 VALUES('a', 'apple');
+ INSERT INTO collate5t1 VALUES('A', 'Apple');
+ INSERT INTO collate5t1 VALUES('b', 'banana');
+ INSERT INTO collate5t1 VALUES('B', 'banana');
+ INSERT INTO collate5t1 VALUES('n', NULL);
+ INSERT INTO collate5t1 VALUES('N', NULL);
+ }
+} {}
+do_test collate5-1.1 {
+ execsql {
+ SELECT DISTINCT a FROM collate5t1;
+ }
+} {a b n}
+do_test collate5-1.2 {
+ execsql {
+ SELECT DISTINCT b FROM collate5t1;
+ }
+} {apple Apple banana {}}
+do_test collate5-1.3 {
+ execsql {
+ SELECT DISTINCT a, b FROM collate5t1;
+ }
+} {a apple A Apple b banana n {}}
+
+# The remainder of this file tests compound SELECT statements.
+# Omit it if the library is compiled such that they are omitted.
+#
+ifcapable !compound {
+ finish_test
+ return
+}
+
+#
+# Tests named collate5-2.* focus on UNION, EXCEPT and INTERSECT
+# queries that use user-defined collation sequences.
+#
+# collate5-2.1.* - UNION
+# collate5-2.2.* - INTERSECT
+# collate5-2.3.* - EXCEPT
+#
+do_test collate5-2.0 {
+ execsql {
+ CREATE TABLE collate5t2(a COLLATE text, b COLLATE nocase);
+
+ INSERT INTO collate5t2 VALUES('a', 'apple');
+ INSERT INTO collate5t2 VALUES('A', 'apple');
+ INSERT INTO collate5t2 VALUES('b', 'banana');
+ INSERT INTO collate5t2 VALUES('B', 'Banana');
+ }
+} {}
+
+do_test collate5-2.1.1 {
+ execsql {
+ SELECT a FROM collate5t1 UNION select a FROM collate5t2;
+ }
+} {A B N}
+do_test collate5-2.1.2 {
+ execsql {
+ SELECT a FROM collate5t2 UNION select a FROM collate5t1;
+ }
+} {A B N a b n}
+do_test collate5-2.1.3 {
+ execsql {
+ SELECT a, b FROM collate5t1 UNION select a, b FROM collate5t2;
+ }
+} {A Apple A apple B Banana b banana N {}}
+do_test collate5-2.1.4 {
+ execsql {
+ SELECT a, b FROM collate5t2 UNION select a, b FROM collate5t1;
+ }
+} {A Apple B banana N {} a apple b banana n {}}
+
+do_test collate5-2.2.1 {
+ execsql {
+ SELECT a FROM collate5t1 EXCEPT select a FROM collate5t2;
+ }
+} {N}
+do_test collate5-2.2.2 {
+ execsql {
+ SELECT a FROM collate5t2 EXCEPT select a FROM collate5t1 WHERE a != 'a';
+ }
+} {A a}
+do_test collate5-2.2.3 {
+ execsql {
+ SELECT a, b FROM collate5t1 EXCEPT select a, b FROM collate5t2;
+ }
+} {A Apple N {}}
+do_test collate5-2.2.4 {
+ execsql {
+ SELECT a, b FROM collate5t2 EXCEPT select a, b FROM collate5t1
+ where a != 'a';
+ }
+} {A apple a apple}
+
+do_test collate5-2.3.1 {
+ execsql {
+ SELECT a FROM collate5t1 INTERSECT select a FROM collate5t2;
+ }
+} {A B}
+do_test collate5-2.3.2 {
+ execsql {
+ SELECT a FROM collate5t2 INTERSECT select a FROM collate5t1 WHERE a != 'a';
+ }
+} {B b}
+do_test collate5-2.3.3 {
+ execsql {
+ SELECT a, b FROM collate5t1 INTERSECT select a, b FROM collate5t2;
+ }
+} {a apple B banana}
+do_test collate5-2.3.4 {
+ execsql {
+ SELECT a, b FROM collate5t2 INTERSECT select a, b FROM collate5t1;
+ }
+} {A apple B Banana a apple b banana}
+
+#
+# This test performs a UNION operation over records of many different
+# lengths. The goal is to check that the logic that compares records
+# for the compound SELECT operators works with record lengths that lie
+# on either side of the troublesome 256 and 65536 byte marks.
+#
+set ::lens [list \
+ 0 1 2 3 4 5 6 7 8 9 \
+ 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 \
+ 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 \
+ 65520 65521 65522 65523 65524 65525 65526 65527 65528 65529 65530 \
+ 65531 65532 65533 65534 65535 65536 65537 65538 65539 65540 65541 \
+ 65542 65543 65544 65545 65546 65547 65548 65549 65550 65551 ]
+do_test collate5-2.4.0 {
+ execsql {
+ BEGIN;
+ CREATE TABLE collate5t3(a, b);
+ }
+ foreach ii $::lens {
+ execsql "INSERT INTO collate5t3 VALUES($ii, '[string repeat a $ii]');"
+ }
+ expr [llength [execsql {
+ COMMIT;
+ SELECT * FROM collate5t3 UNION SELECT * FROM collate5t3;
+ }]] / 2
+} [llength $::lens]
+do_test collate5-2.4.1 {
+ execsql {DROP TABLE collate5t3;}
+} {}
+unset ::lens
+
+#
+# These tests - collate5-3.* - focus on compound SELECT queries that
+# feature ORDER BY clauses.
+#
+do_test collate5-3.0 {
+ execsql {
+ SELECT a FROM collate5t1 UNION ALL SELECT a FROM collate5t2 ORDER BY 1;
+ }
+} {a A a A b B b B n N}
+do_test collate5-3.1 {
+ execsql {
+ SELECT a FROM collate5t2 UNION ALL SELECT a FROM collate5t1 ORDER BY 1;
+ }
+} {A A B B N a a b b n}
+do_test collate5-3.2 {
+ execsql {
+ SELECT a FROM collate5t1 UNION ALL SELECT a FROM collate5t2
+ ORDER BY 1 COLLATE TEXT;
+ }
+} {A A B B N a a b b n}
+
+do_test collate5-3.3 {
+ execsql {
+ CREATE TABLE collate5t_cn(a COLLATE NUMERIC);
+ CREATE TABLE collate5t_ct(a COLLATE TEXT);
+ INSERT INTO collate5t_cn VALUES('1');
+ INSERT INTO collate5t_cn VALUES('11');
+ INSERT INTO collate5t_cn VALUES('101');
+ INSERT INTO collate5t_ct SELECT * FROM collate5t_cn;
+ }
+} {}
+do_test collate5-3.4 {
+ execsql {
+ SELECT a FROM collate5t_cn INTERSECT SELECT a FROM collate5t_ct ORDER BY 1;
+ }
+} {1 11 101}
+do_test collate5-3.5 {
+ execsql {
+ SELECT a FROM collate5t_ct INTERSECT SELECT a FROM collate5t_cn ORDER BY 1;
+ }
+} {1 101 11}
+
+do_test collate5-3.20 {
+ execsql {
+ DROP TABLE collate5t_cn;
+ DROP TABLE collate5t_ct;
+ DROP TABLE collate5t1;
+ DROP TABLE collate5t2;
+ }
+} {}
+
+do_test collate5-4.0 {
+ execsql {
+ CREATE TABLE collate5t1(a COLLATE NOCASE, b COLLATE NUMERIC);
+ INSERT INTO collate5t1 VALUES('a', '1');
+ INSERT INTO collate5t1 VALUES('A', '1.0');
+ INSERT INTO collate5t1 VALUES('b', '2');
+ INSERT INTO collate5t1 VALUES('B', '3');
+ }
+} {}
+do_test collate5-4.1 {
+ string tolower [execsql {
+ SELECT a, count(*) FROM collate5t1 GROUP BY a;
+ }]
+} {a 2 b 2}
+do_test collate5-4.2 {
+ execsql {
+ SELECT a, b, count(*) FROM collate5t1 GROUP BY a, b ORDER BY a, b;
+ }
+} {A 1.0 2 b 2 1 B 3 1}
+do_test collate5-4.3 {
+ execsql {
+ DROP TABLE collate5t1;
+ }
+} {}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/collate6.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/collate6.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,111 @@
+#
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is collation sequences in concert with triggers.
+#
+# $Id: collate6.test,v 1.2 2004/11/04 04:42:28 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# There are no tests in this file that will work without
+# trigger support.
+#
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+# Create a case-insensitive collation type NOCASE for use in testing.
+# Normally, capital letters are less than their lower-case counterparts.
+db collate NOCASE nocase_collate
+proc nocase_collate {a b} {
+ return [string compare -nocase $a $b]
+}
+
+#
+# Tests are organized as follows:
+# collate6-1.* - triggers.
+#
+
+do_test collate6-1.0 {
+ execsql {
+ CREATE TABLE collate6log(a, b);
+ CREATE TABLE collate6tab(a COLLATE NOCASE, b COLLATE BINARY);
+ }
+} {}
+
+# Test that the default collation sequence applies to new.* references
+# in WHEN clauses.
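+# (Column a of collate6tab is declared COLLATE NOCASE, so the comparison
+# new.a = 'a' in the WHEN clause below matches both 'a' and 'A'.)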
+do_test collate6-1.1 {
+ execsql {
+ CREATE TRIGGER collate6trig BEFORE INSERT ON collate6tab
+ WHEN new.a = 'a' BEGIN
+ INSERT INTO collate6log VALUES(new.a, new.b);
+ END;
+ }
+} {}
+do_test collate6-1.2 {
+ execsql {
+ INSERT INTO collate6tab VALUES('a', 'b');
+ SELECT * FROM collate6log;
+ }
+} {a b}
+do_test collate6-1.3 {
+ execsql {
+ INSERT INTO collate6tab VALUES('A', 'B');
+ SELECT * FROM collate6log;
+ }
+} {a b A B}
+do_test collate6-1.4 {
+ execsql {
+ DROP TRIGGER collate6trig;
+ DELETE FROM collate6log;
+ }
+} {}
+
+# Test that the default collation sequence applies to new.* references
+# in the body of triggers.
+do_test collate6-1.5 {
+ execsql {
+ CREATE TRIGGER collate6trig BEFORE INSERT ON collate6tab BEGIN
+ INSERT INTO collate6log VALUES(new.a='a', new.b='b');
+ END;
+ }
+} {}
+do_test collate6-1.6 {
+ execsql {
+ INSERT INTO collate6tab VALUES('a', 'b');
+ SELECT * FROM collate6log;
+ }
+} {1 1}
+do_test collate6-1.7 {
+ execsql {
+ INSERT INTO collate6tab VALUES('A', 'B');
+ SELECT * FROM collate6log;
+ }
+} {1 1 1 0}
+do_test collate6-1.8 {
+ execsql {
+ DROP TRIGGER collate6trig;
+ DELETE FROM collate6log;
+ }
+} {}
+
+do_test collate6-1.9 {
+ execsql {
+ DROP TABLE collate6tab;
+ }
+} {}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/colmeta.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/colmeta.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,103 @@
+#
+# 2006 February 9
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is the sqlite3_table_column_metadata() API.
+#
+# $Id: colmeta.test,v 1.3 2006/02/10 13:33:31 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !columnmetadata {
+ finish_test
+ return
+}
+
+# Set up a schema in the main and temp test databases.
+do_test colmeta-0 {
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ CREATE TABLE abc2(a PRIMARY KEY COLLATE NOCASE, b VARCHAR(32), c);
+ CREATE TABLE abc3(a NOT NULL, b INTEGER PRIMARY KEY, c);
+ }
+ ifcapable autoinc {
+ execsql {
+ CREATE TABLE abc4(a, b INTEGER PRIMARY KEY AUTOINCREMENT, c);
+ }
+ }
+ ifcapable view {
+ execsql {
+ CREATE VIEW v1 AS SELECT * FROM abc2;
+ }
+ }
+} {}
+
+
+# Return values are of the form:
+#
+# {<decl-type> <collation> <not null> <primary key> <auto increment>}
+#
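+# For example, the expected result {0 {{} NOCASE 0 1 0}} for column "a"
+# of table "abc2" below means: the call succeeds (0), the column has no
+# declared type, uses the NOCASE collation, carries no NOT NULL
+# constraint, is part of the PRIMARY KEY, and is not AUTOINCREMENT.
+#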
+set tests {
+ 1 {main abc a} {0 {{} BINARY 0 0 0}}
+ 2 {{} abc a} {0 {{} BINARY 0 0 0}}
+ 3 {{} abc2 b} {0 {VARCHAR(32) BINARY 0 0 0}}
+ 4 {main abc2 b} {0 {VARCHAR(32) BINARY 0 0 0}}
+ 5 {{} abc2 a} {0 {{} NOCASE 0 1 0}}
+ 6 {{} abc3 a} {0 {{} BINARY 1 0 0}}
+ 7 {{} abc3 b} {0 {INTEGER BINARY 0 1 0}}
+ 13 {main abc rowid} {0 {INTEGER BINARY 0 1 0}}
+ 14 {main abc3 rowid} {0 {INTEGER BINARY 0 1 0}}
+ 16 {main abc d} {1 {no such table column: abc.d}}
+}
+ifcapable autoinc {
+ set tests [concat $tests {
+ 8 {{} abc4 b} {0 {INTEGER BINARY 0 1 1}}
+ 15 {main abc4 rowid} {0 {INTEGER BINARY 0 1 1}}
+ }]
+}
+ifcapable view {
+ set tests [concat $tests {
+ 9 {{} v1 a} {1 {no such table column: v1.a}}
+ 10 {main v1 b} {1 {no such table column: v1.b}}
+ 11 {main v1 badname} {1 {no such table column: v1.badname}}
+ 12 {main v1 rowid} {1 {no such table column: v1.rowid}}
+ }]
+}
+
+foreach {tn params results} $tests {
+ set ::DB [sqlite3_connection_pointer db]
+
+ set tstbody [concat sqlite3_table_column_metadata $::DB $params]
+ do_test colmeta-$tn.1 {
+ list [catch $tstbody msg] [set msg]
+ } $results
+
+ db close
+ sqlite3 db test.db
+
+ set ::DB [sqlite3_connection_pointer db]
+ set tstbody [concat sqlite3_table_column_metadata $::DB $params]
+ do_test colmeta-$tn.2 {
+ list [catch $tstbody msg] [set msg]
+ } $results
+}
+
+do_test colmeta-misuse.1 {
+ db close
+ set rc [catch {
+ sqlite3_table_column_metadata $::DB a b c
+ } msg]
+ list $rc $msg
+} {1 {library routine called out of sequence}}
+
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/conflict.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/conflict.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,754 @@
+# 2002 January 29
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for the conflict resolution extension
+# to SQLite.
+#
+# $Id: conflict.test,v 1.27 2006/01/17 09:35:02 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !conflict {
+ finish_test
+ return
+}
+
+# Create tables for the first group of tests.
+#
+do_test conflict-1.0 {
+ execsql {
+ CREATE TABLE t1(a, b, c, UNIQUE(a,b));
+ CREATE TABLE t2(x);
+ SELECT c FROM t1 ORDER BY c;
+ }
+} {}
+
+# Six columns of configuration data as follows:
+#
+# i The reference number of the test
+# cmd An INSERT or REPLACE command to execute against table t1
+# t0 True if there is an error from $cmd
+# t1 Content of "c" column of t1 assuming no error in $cmd
+# t2 Content of "x" column of t2
+# t3 Number of temporary files created by this test
+#
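+# For example, row 2 below reads: "INSERT OR IGNORE" raises no error,
+# column c of t1 still contains 3 (the conflicting insert was ignored),
+# t2 contains 1 (the enclosing transaction was not rolled back), and no
+# temporary files were opened.
+#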
+foreach {i cmd t0 t1 t2 t3} {
+ 1 INSERT 1 {} 1 0
+ 2 {INSERT OR IGNORE} 0 3 1 0
+ 3 {INSERT OR REPLACE} 0 4 1 0
+ 4 REPLACE 0 4 1 0
+ 5 {INSERT OR FAIL} 1 {} 1 0
+ 6 {INSERT OR ABORT} 1 {} 1 0
+ 7 {INSERT OR ROLLBACK} 1 {} {} 0
+} {
+ do_test conflict-1.$i {
+ set ::sqlite_opentemp_count 0
+ set r0 [catch {execsql [subst {
+ DELETE FROM t1;
+ DELETE FROM t2;
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN;
+ INSERT INTO t2 VALUES(1);
+ $cmd INTO t1 VALUES(1,2,4);
+ }]} r1]
+ catch {execsql {COMMIT}}
+ if {$r0} {set r1 {}} {set r1 [execsql {SELECT c FROM t1}]}
+ set r2 [execsql {SELECT x FROM t2}]
+ set r3 $::sqlite_opentemp_count
+ list $r0 $r1 $r2 $r3
+ } [list $t0 $t1 $t2 $t3]
+}
+
+# Create tables for the second group of tests.
+#
+do_test conflict-2.0 {
+ execsql {
+ DROP TABLE t1;
+ DROP TABLE t2;
+ CREATE TABLE t1(a INTEGER PRIMARY KEY, b, c, UNIQUE(a,b));
+ CREATE TABLE t2(x);
+ SELECT c FROM t1 ORDER BY c;
+ }
+} {}
+
+# Five columns of configuration data as follows:
+#
+# i The reference number of the test
+# cmd An INSERT or REPLACE command to execute against table t1
+# t0 True if there is an error from $cmd
+# t1 Content of "c" column of t1 assuming no error in $cmd
+# t2 Content of "x" column of t2
+#
+foreach {i cmd t0 t1 t2} {
+ 1 INSERT 1 {} 1
+ 2 {INSERT OR IGNORE} 0 3 1
+ 3 {INSERT OR REPLACE} 0 4 1
+ 4 REPLACE 0 4 1
+ 5 {INSERT OR FAIL} 1 {} 1
+ 6 {INSERT OR ABORT} 1 {} 1
+ 7 {INSERT OR ROLLBACK} 1 {} {}
+} {
+ do_test conflict-2.$i {
+ set r0 [catch {execsql [subst {
+ DELETE FROM t1;
+ DELETE FROM t2;
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN;
+ INSERT INTO t2 VALUES(1);
+ $cmd INTO t1 VALUES(1,2,4);
+ }]} r1]
+ catch {execsql {COMMIT}}
+ if {$r0} {set r1 {}} {set r1 [execsql {SELECT c FROM t1}]}
+ set r2 [execsql {SELECT x FROM t2}]
+ list $r0 $r1 $r2
+ } [list $t0 $t1 $t2]
+}
+
+# Create tables for the third group of tests.
+#
+do_test conflict-3.0 {
+ execsql {
+ DROP TABLE t1;
+ DROP TABLE t2;
+ CREATE TABLE t1(a, b, c INTEGER, PRIMARY KEY(c), UNIQUE(a,b));
+ CREATE TABLE t2(x);
+ SELECT c FROM t1 ORDER BY c;
+ }
+} {}
+
+# Five columns of configuration data as follows:
+#
+# i The reference number of the test
+# cmd An INSERT or REPLACE command to execute against table t1
+# t0 True if there is an error from $cmd
+# t1 Content of "c" column of t1 assuming no error in $cmd
+# t2 Content of "x" column of t2
+#
+foreach {i cmd t0 t1 t2} {
+ 1 INSERT 1 {} 1
+ 2 {INSERT OR IGNORE} 0 3 1
+ 3 {INSERT OR REPLACE} 0 4 1
+ 4 REPLACE 0 4 1
+ 5 {INSERT OR FAIL} 1 {} 1
+ 6 {INSERT OR ABORT} 1 {} 1
+ 7 {INSERT OR ROLLBACK} 1 {} {}
+} {
+ do_test conflict-3.$i {
+ set r0 [catch {execsql [subst {
+ DELETE FROM t1;
+ DELETE FROM t2;
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN;
+ INSERT INTO t2 VALUES(1);
+ $cmd INTO t1 VALUES(1,2,4);
+ }]} r1]
+ catch {execsql {COMMIT}}
+ if {$r0} {set r1 {}} {set r1 [execsql {SELECT c FROM t1}]}
+ set r2 [execsql {SELECT x FROM t2}]
+ list $r0 $r1 $r2
+ } [list $t0 $t1 $t2]
+}
+
+do_test conflict-4.0 {
+ execsql {
+ DROP TABLE t2;
+ CREATE TABLE t2(x);
+ SELECT x FROM t2;
+ }
+} {}
+
+# Six columns of configuration data as follows:
+#
+# i The reference number of the test
+# conf1 The conflict resolution algorithm on the UNIQUE constraint
+# cmd An INSERT or REPLACE command to execute against table t1
+# t0 True if there is an error from $cmd
+# t1 Content of "c" column of t1 assuming no error in $cmd
+# t2 Content of "x" column of t2
+#
+foreach {i conf1 cmd t0 t1 t2} {
+ 1 {} INSERT 1 {} 1
+ 2 REPLACE INSERT 0 4 1
+ 3 IGNORE INSERT 0 3 1
+ 4 FAIL INSERT 1 {} 1
+ 5 ABORT INSERT 1 {} 1
+ 6 ROLLBACK INSERT 1 {} {}
+ 7 REPLACE {INSERT OR IGNORE} 0 3 1
+ 8 IGNORE {INSERT OR REPLACE} 0 4 1
+ 9 FAIL {INSERT OR IGNORE} 0 3 1
+ 10 ABORT {INSERT OR REPLACE} 0 4 1
+ 11 ROLLBACK {INSERT OR IGNORE } 0 3 1
+} {
+ do_test conflict-4.$i {
+ if {$conf1!=""} {set conf1 "ON CONFLICT $conf1"}
+ set r0 [catch {execsql [subst {
+ DROP TABLE t1;
+ CREATE TABLE t1(a,b,c,UNIQUE(a,b) $conf1);
+ DELETE FROM t2;
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN;
+ INSERT INTO t2 VALUES(1);
+ $cmd INTO t1 VALUES(1,2,4);
+ }]} r1]
+ catch {execsql {COMMIT}}
+ if {$r0} {set r1 {}} {set r1 [execsql {SELECT c FROM t1}]}
+ set r2 [execsql {SELECT x FROM t2}]
+ list $r0 $r1 $r2
+ } [list $t0 $t1 $t2]
+}
+
+do_test conflict-5.0 {
+ execsql {
+ DROP TABLE t2;
+ CREATE TABLE t2(x);
+ SELECT x FROM t2;
+ }
+} {}
+
+# Six columns of configuration data as follows:
+#
+# i The reference number of the test
+# conf1 The conflict resolution algorithm on the NOT NULL constraint
+# cmd An INSERT or REPLACE command to execute against table t1
+# t0 True if there is an error from $cmd
+# t1 Content of "c" column of t1 assuming no error in $cmd
+# t2 Content of "x" column of t2
+#
+foreach {i conf1 cmd t0 t1 t2} {
+ 1 {} INSERT 1 {} 1
+ 2 REPLACE INSERT 0 5 1
+ 3 IGNORE INSERT 0 {} 1
+ 4 FAIL INSERT 1 {} 1
+ 5 ABORT INSERT 1 {} 1
+ 6 ROLLBACK INSERT 1 {} {}
+ 7 REPLACE {INSERT OR IGNORE} 0 {} 1
+ 8 IGNORE {INSERT OR REPLACE} 0 5 1
+ 9 FAIL {INSERT OR IGNORE} 0 {} 1
+ 10 ABORT {INSERT OR REPLACE} 0 5 1
+ 11 ROLLBACK {INSERT OR IGNORE} 0 {} 1
+ 12 {} {INSERT OR IGNORE} 0 {} 1
+ 13 {} {INSERT OR REPLACE} 0 5 1
+ 14 {} {INSERT OR FAIL} 1 {} 1
+ 15 {} {INSERT OR ABORT} 1 {} 1
+ 16 {} {INSERT OR ROLLBACK} 1 {} {}
+} {
+ if {$t0} {set t1 {t1.c may not be NULL}}
+ do_test conflict-5.$i {
+ if {$conf1!=""} {set conf1 "ON CONFLICT $conf1"}
+ set r0 [catch {execsql [subst {
+ DROP TABLE t1;
+ CREATE TABLE t1(a,b,c NOT NULL $conf1 DEFAULT 5);
+ DELETE FROM t2;
+ BEGIN;
+ INSERT INTO t2 VALUES(1);
+ $cmd INTO t1 VALUES(1,2,NULL);
+ }]} r1]
+ catch {execsql {COMMIT}}
+ if {!$r0} {set r1 [execsql {SELECT c FROM t1}]}
+ set r2 [execsql {SELECT x FROM t2}]
+ list $r0 $r1 $r2
+ } [list $t0 $t1 $t2]
+}
+
+do_test conflict-6.0 {
+ execsql {
+ DROP TABLE t2;
+ CREATE TABLE t2(a,b,c);
+ INSERT INTO t2 VALUES(1,2,1);
+ INSERT INTO t2 VALUES(2,3,2);
+ INSERT INTO t2 VALUES(3,4,1);
+ INSERT INTO t2 VALUES(4,5,4);
+ SELECT c FROM t2 ORDER BY b;
+ CREATE TABLE t3(x);
+ INSERT INTO t3 VALUES(1);
+ }
+} {1 2 1 4}
+
+# Seven columns of configuration data as follows:
+#
+# i The reference number of the test
+# conf1 The conflict resolution algorithm on the UNIQUE constraint
+# cmd An UPDATE command to execute against table t1
+# t0 True if there is an error from $cmd
+# t1 Content of "b" column of t1 assuming no error in $cmd
+# t2 Content of "x" column of t3
+# t3 Number of temporary files created
+#
+foreach {i conf1 cmd t0 t1 t2 t3} {
+ 1 {} UPDATE 1 {6 7 8 9} 1 1
+ 2 REPLACE UPDATE 0 {7 6 9} 1 1
+ 3 IGNORE UPDATE 0 {6 7 3 9} 1 1
+ 4 FAIL UPDATE 1 {6 7 3 4} 1 0
+ 5 ABORT UPDATE 1 {1 2 3 4} 1 1
+ 6 ROLLBACK UPDATE 1 {1 2 3 4} 0 0
+ 7 REPLACE {UPDATE OR IGNORE} 0 {6 7 3 9} 1 1
+ 8 IGNORE {UPDATE OR REPLACE} 0 {7 6 9} 1 1
+ 9 FAIL {UPDATE OR IGNORE} 0 {6 7 3 9} 1 1
+ 10 ABORT {UPDATE OR REPLACE} 0 {7 6 9} 1 1
+ 11 ROLLBACK {UPDATE OR IGNORE} 0 {6 7 3 9} 1 1
+ 12 {} {UPDATE OR IGNORE} 0 {6 7 3 9} 1 1
+ 13 {} {UPDATE OR REPLACE} 0 {7 6 9} 1 1
+ 14 {} {UPDATE OR FAIL} 1 {6 7 3 4} 1 0
+ 15 {} {UPDATE OR ABORT} 1 {1 2 3 4} 1 1
+ 16 {} {UPDATE OR ROLLBACK} 1 {1 2 3 4} 0 0
+} {
+ if {$t0} {set t1 {column a is not unique}}
+ do_test conflict-6.$i {
+ db close
+ sqlite3 db test.db
+ if {$conf1!=""} {set conf1 "ON CONFLICT $conf1"}
+ execsql {pragma temp_store=file}
+ set ::sqlite_opentemp_count 0
+ set r0 [catch {execsql [subst {
+ DROP TABLE t1;
+ CREATE TABLE t1(a,b,c, UNIQUE(a) $conf1);
+ INSERT INTO t1 SELECT * FROM t2;
+ UPDATE t3 SET x=0;
+ BEGIN;
+ $cmd t3 SET x=1;
+ $cmd t1 SET b=b*2;
+ $cmd t1 SET a=c+5;
+ }]} r1]
+ catch {execsql {COMMIT}}
+ if {!$r0} {set r1 [execsql {SELECT a FROM t1 ORDER BY b}]}
+ set r2 [execsql {SELECT x FROM t3}]
+ list $r0 $r1 $r2 $::sqlite_opentemp_count
+ } [list $t0 $t1 $t2 $t3]
+}
+
+# Test to make sure a lot of IGNOREs don't cause a stack overflow
+#
+do_test conflict-7.1 {
+ execsql {
+ DROP TABLE t1;
+ DROP TABLE t2;
+ DROP TABLE t3;
+ CREATE TABLE t1(a unique, b);
+ }
+ for {set i 1} {$i<=50} {incr i} {
+ execsql "INSERT into t1 values($i,[expr {$i+1}]);"
+ }
+ execsql {
+ SELECT count(*), min(a), max(b) FROM t1;
+ }
+} {50 1 51}
+do_test conflict-7.2 {
+ execsql {
+ PRAGMA count_changes=on;
+ UPDATE OR IGNORE t1 SET a=1000;
+ }
+} {1}
+do_test conflict-7.2.1 {
+ db changes
+} {1}
+do_test conflict-7.3 {
+ execsql {
+ SELECT b FROM t1 WHERE a=1000;
+ }
+} {2}
+do_test conflict-7.4 {
+ execsql {
+ SELECT count(*) FROM t1;
+ }
+} {50}
+do_test conflict-7.5 {
+ execsql {
+ PRAGMA count_changes=on;
+ UPDATE OR REPLACE t1 SET a=1001;
+ }
+} {50}
+do_test conflict-7.5.1 {
+ db changes
+} {50}
+do_test conflict-7.6 {
+ execsql {
+ SELECT b FROM t1 WHERE a=1001;
+ }
+} {51}
+do_test conflict-7.7 {
+ execsql {
+ SELECT count(*) FROM t1;
+ }
+} {1}
+
+# Update for version 3: A SELECT statement no longer resets the change
+# counter (Test result changes from 0 to 50).
+do_test conflict-7.7.1 {
+ db changes
+} {50}
+
+# Make sure the row count is right for rows that are ignored on
+# an insert.
+#
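+# (PRAGMA count_changes is still enabled from the conflict-7 tests, so
+# each statement reports the number of rows it actually changed; that
+# reported count is what these tests check.)
+#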
+do_test conflict-8.1 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2);
+ }
+ execsql {
+ INSERT OR IGNORE INTO t1 VALUES(2,3);
+ }
+} {1}
+do_test conflict-8.1.1 {
+ db changes
+} {1}
+do_test conflict-8.2 {
+ execsql {
+ INSERT OR IGNORE INTO t1 VALUES(2,4);
+ }
+} {0}
+do_test conflict-8.2.1 {
+ db changes
+} {0}
+do_test conflict-8.3 {
+ execsql {
+ INSERT OR REPLACE INTO t1 VALUES(2,4);
+ }
+} {1}
+do_test conflict-8.3.1 {
+ db changes
+} {1}
+do_test conflict-8.4 {
+ execsql {
+ INSERT OR IGNORE INTO t1 SELECT * FROM t1;
+ }
+} {0}
+do_test conflict-8.4.1 {
+ db changes
+} {0}
+do_test conflict-8.5 {
+ execsql {
+ INSERT OR IGNORE INTO t1 SELECT a+2,b+2 FROM t1;
+ }
+} {2}
+do_test conflict-8.5.1 {
+ db changes
+} {2}
+do_test conflict-8.6 {
+ execsql {
+ INSERT OR IGNORE INTO t1 SELECT a+3,b+3 FROM t1;
+ }
+} {3}
+do_test conflict-8.6.1 {
+ db changes
+} {3}
+
+integrity_check conflict-8.99
+
+do_test conflict-9.1 {
+ execsql {
+ PRAGMA count_changes=0;
+ CREATE TABLE t2(
+ a INTEGER UNIQUE ON CONFLICT IGNORE,
+ b INTEGER UNIQUE ON CONFLICT FAIL,
+ c INTEGER UNIQUE ON CONFLICT REPLACE,
+ d INTEGER UNIQUE ON CONFLICT ABORT,
+ e INTEGER UNIQUE ON CONFLICT ROLLBACK
+ );
+ CREATE TABLE t3(x);
+ INSERT INTO t3 VALUES(1);
+ SELECT * FROM t3;
+ }
+} {1}
+do_test conflict-9.2 {
+ catchsql {
+ INSERT INTO t2 VALUES(1,1,1,1,1);
+ INSERT INTO t2 VALUES(2,2,2,2,2);
+ SELECT * FROM t2;
+ }
+} {0 {1 1 1 1 1 2 2 2 2 2}}
+do_test conflict-9.3 {
+ catchsql {
+ INSERT INTO t2 VALUES(1,3,3,3,3);
+ SELECT * FROM t2;
+ }
+} {0 {1 1 1 1 1 2 2 2 2 2}}
+do_test conflict-9.4 {
+ catchsql {
+ UPDATE t2 SET a=a+1 WHERE a=1;
+ SELECT * FROM t2;
+ }
+} {0 {1 1 1 1 1 2 2 2 2 2}}
+do_test conflict-9.5 {
+ catchsql {
+ INSERT INTO t2 VALUES(3,1,3,3,3);
+ SELECT * FROM t2;
+ }
+} {1 {column b is not unique}}
+do_test conflict-9.6 {
+ catchsql {
+ UPDATE t2 SET b=b+1 WHERE b=1;
+ SELECT * FROM t2;
+ }
+} {1 {column b is not unique}}
+do_test conflict-9.7 {
+ catchsql {
+ BEGIN;
+ UPDATE t3 SET x=x+1;
+ INSERT INTO t2 VALUES(3,1,3,3,3);
+ SELECT * FROM t2;
+ }
+} {1 {column b is not unique}}
+do_test conflict-9.8 {
+ execsql {COMMIT}
+ execsql {SELECT * FROM t3}
+} {2}
+do_test conflict-9.9 {
+ catchsql {
+ BEGIN;
+ UPDATE t3 SET x=x+1;
+ UPDATE t2 SET b=b+1 WHERE b=1;
+ SELECT * FROM t2;
+ }
+} {1 {column b is not unique}}
+do_test conflict-9.10 {
+ execsql {COMMIT}
+ execsql {SELECT * FROM t3}
+} {3}
+do_test conflict-9.11 {
+ catchsql {
+ INSERT INTO t2 VALUES(3,3,3,1,3);
+ SELECT * FROM t2;
+ }
+} {1 {column d is not unique}}
+do_test conflict-9.12 {
+ catchsql {
+ UPDATE t2 SET d=d+1 WHERE d=1;
+ SELECT * FROM t2;
+ }
+} {1 {column d is not unique}}
+do_test conflict-9.13 {
+ catchsql {
+ BEGIN;
+ UPDATE t3 SET x=x+1;
+ INSERT INTO t2 VALUES(3,3,3,1,3);
+ SELECT * FROM t2;
+ }
+} {1 {column d is not unique}}
+do_test conflict-9.14 {
+ execsql {COMMIT}
+ execsql {SELECT * FROM t3}
+} {4}
+do_test conflict-9.15 {
+ catchsql {
+ BEGIN;
+ UPDATE t3 SET x=x+1;
+ UPDATE t2 SET d=d+1 WHERE d=1;
+ SELECT * FROM t2;
+ }
+} {1 {column d is not unique}}
+do_test conflict-9.16 {
+ execsql {COMMIT}
+ execsql {SELECT * FROM t3}
+} {5}
+do_test conflict-9.17 {
+ catchsql {
+ INSERT INTO t2 VALUES(3,3,3,3,1);
+ SELECT * FROM t2;
+ }
+} {1 {column e is not unique}}
+do_test conflict-9.18 {
+ catchsql {
+ UPDATE t2 SET e=e+1 WHERE e=1;
+ SELECT * FROM t2;
+ }
+} {1 {column e is not unique}}
+do_test conflict-9.19 {
+ catchsql {
+ BEGIN;
+ UPDATE t3 SET x=x+1;
+ INSERT INTO t2 VALUES(3,3,3,3,1);
+ SELECT * FROM t2;
+ }
+} {1 {column e is not unique}}
+do_test conflict-9.20 {
+ catch {execsql {COMMIT}}
+ execsql {SELECT * FROM t3}
+} {5}
+do_test conflict-9.21 {
+ catchsql {
+ BEGIN;
+ UPDATE t3 SET x=x+1;
+ UPDATE t2 SET e=e+1 WHERE e=1;
+ SELECT * FROM t2;
+ }
+} {1 {column e is not unique}}
+do_test conflict-9.22 {
+ catch {execsql {COMMIT}}
+ execsql {SELECT * FROM t3}
+} {5}
+do_test conflict-9.23 {
+ catchsql {
+ INSERT INTO t2 VALUES(3,3,1,3,3);
+ SELECT * FROM t2;
+ }
+} {0 {2 2 2 2 2 3 3 1 3 3}}
+do_test conflict-9.24 {
+ catchsql {
+ UPDATE t2 SET c=c-1 WHERE c=2;
+ SELECT * FROM t2;
+ }
+} {0 {2 2 1 2 2}}
+do_test conflict-9.25 {
+ catchsql {
+ BEGIN;
+ UPDATE t3 SET x=x+1;
+ INSERT INTO t2 VALUES(3,3,1,3,3);
+ SELECT * FROM t2;
+ }
+} {0 {3 3 1 3 3}}
+do_test conflict-9.26 {
+ catch {execsql {COMMIT}}
+ execsql {SELECT * FROM t3}
+} {6}
+
+do_test conflict-10.1 {
+ catchsql {
+ DELETE FROM t1;
+ BEGIN;
+ INSERT OR ROLLBACK INTO t1 VALUES(1,2);
+ INSERT OR ROLLBACK INTO t1 VALUES(1,3);
+ COMMIT;
+ }
+ execsql {SELECT * FROM t1}
+} {}
+do_test conflict-10.2 {
+ catchsql {
+ CREATE TABLE t4(x);
+ CREATE UNIQUE INDEX t4x ON t4(x);
+ BEGIN;
+ INSERT OR ROLLBACK INTO t4 VALUES(1);
+ INSERT OR ROLLBACK INTO t4 VALUES(1);
+ COMMIT;
+ }
+ execsql {SELECT * FROM t4}
+} {}
+
+# Ticket #1171. Make sure statement rollbacks do not
+# damage the database.
+#
+do_test conflict-11.1 {
+ execsql {
+ -- Create a database object (pages 2, 3 of the file)
+ BEGIN;
+ CREATE TABLE abc(a UNIQUE, b, c);
+ INSERT INTO abc VALUES(1, 2, 3);
+ INSERT INTO abc VALUES(4, 5, 6);
+ INSERT INTO abc VALUES(7, 8, 9);
+ COMMIT;
+ }
+
+
+ # Set a small cache size so that changes will spill into
+ # the database file.
+ execsql {
+ PRAGMA cache_size = 10;
+ }
+
+ # Make lots of changes. Because of the small cache, some
+ # (most?) of these changes will spill into the disk file.
+ # In other words, some of the changes will not be held in
+ # cache.
+ #
+ execsql {
+ BEGIN;
+ -- Make sure the pager is in EXCLUSIVE state.
+ CREATE TABLE def(d, e, f);
+ INSERT INTO def VALUES
+ ('xxxxxxxxxxxxxxx', 'yyyyyyyyyyyyyyyy', 'zzzzzzzzzzzzzzzz');
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ DELETE FROM abc WHERE a = 4;
+ }
+
+ # Execute a statement that does a statement rollback due to
+ # a constraint failure.
+ #
+ catchsql {
+ INSERT INTO abc SELECT 10, 20, 30 FROM def;
+ }
+
+ # Rollback the database. Verify that the state of the ABC table
+ # is unchanged from the beginning of the transaction. In other words,
+ # make sure the DELETE on table ABC that occurred within the transaction
+ # had no effect.
+ #
+ execsql {
+ ROLLBACK;
+ SELECT * FROM abc;
+ }
+} {1 2 3 4 5 6 7 8 9}
+integrity_check conflict-11.2
+
+# Repeat test conflict-11.1, but this time also update table abc inside
+# the transaction before rolling back.
+#
+do_test conflict-11.3 {
+ execsql {
+ BEGIN;
+ -- Make sure the pager is in EXCLUSIVE state.
+ UPDATE abc SET a=a+1;
+ CREATE TABLE def(d, e, f);
+ INSERT INTO def VALUES
+ ('xxxxxxxxxxxxxxx', 'yyyyyyyyyyyyyyyy', 'zzzzzzzzzzzzzzzz');
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ DELETE FROM abc WHERE a = 4;
+ }
+ catchsql {
+ INSERT INTO abc SELECT 10, 20, 30 FROM def;
+ }
+ execsql {
+ ROLLBACK;
+ SELECT * FROM abc;
+ }
+} {1 2 3 4 5 6 7 8 9}
+# Repeat test conflict-11.1 but this time commit.
+#
+do_test conflict-11.5 {
+ execsql {
+ BEGIN;
+ -- Make sure the pager is in EXCLUSIVE state.
+ CREATE TABLE def(d, e, f);
+ INSERT INTO def VALUES
+ ('xxxxxxxxxxxxxxx', 'yyyyyyyyyyyyyyyy', 'zzzzzzzzzzzzzzzz');
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ INSERT INTO def SELECT * FROM def;
+ DELETE FROM abc WHERE a = 4;
+ }
+ catchsql {
+ INSERT INTO abc SELECT 10, 20, 30 FROM def;
+ }
+ execsql {
+ COMMIT;
+ SELECT * FROM abc;
+ }
+} {1 2 3 7 8 9}
+integrity_check conflict-11.6
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/corrupt.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/corrupt.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,169 @@
+# 2004 August 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to make sure SQLite does not crash or
+# segfault if it sees a corrupt database file.
+#
+# $Id: corrupt.test,v 1.8 2005/02/19 08:18:06 danielk1977 Exp $
+
+catch {file delete -force test.db}
+catch {file delete -force test.db-journal}
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Construct a large database for testing.
+#
+do_test corrupt-1.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(x);
+ INSERT INTO t1 VALUES(randstr(100,100));
+ INSERT INTO t1 VALUES(randstr(90,90));
+ INSERT INTO t1 VALUES(randstr(80,80));
+ INSERT INTO t1 SELECT x || randstr(5,5) FROM t1;
+ INSERT INTO t1 SELECT x || randstr(6,6) FROM t1;
+ INSERT INTO t1 SELECT x || randstr(7,7) FROM t1;
+ INSERT INTO t1 SELECT x || randstr(8,8) FROM t1;
+ INSERT INTO t1 VALUES(randstr(3000,3000));
+ INSERT INTO t1 SELECT x || randstr(9,9) FROM t1;
+ INSERT INTO t1 SELECT x || randstr(10,10) FROM t1;
+ INSERT INTO t1 SELECT x || randstr(11,11) FROM t1;
+ INSERT INTO t1 SELECT x || randstr(12,12) FROM t1;
+ CREATE INDEX t1i1 ON t1(x);
+ CREATE TABLE t2 AS SELECT * FROM t1;
+ DELETE FROM t2 WHERE rowid%5!=0;
+ COMMIT;
+ }
+} {}
+integrity_check corrupt-1.2
+
+# Copy file $from into $to
+#
+proc copy_file {from to} {
+ set f [open $from]
+ fconfigure $f -translation binary
+ set t [open $to w]
+ fconfigure $t -translation binary
+ puts -nonewline $t [read $f [file size $from]]
+ close $t
+ close $f
+}
+
+# Setup for the tests. Make a backup copy of the good database in test.bu.
+# Create a string of garbage data that is 256 bytes long.
+#
+copy_file test.db test.bu
+set fsize [file size test.db]
+set junk "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+while {[string length $junk]<256} {append junk $junk}
+set junk [string range $junk 0 255]
+
+# Go through the database and write garbage data into each 256-byte segment
+# of the file. Then do various operations on the file to make sure that
+# the database engine can recover gracefully from the corruption.
+#
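+# Note that the loop below begins writing at byte offset 256, so the first
+# 256 bytes of the file (which include the database file header) are never
+# overwritten, and test.db is restored from the backup copy in test.bu
+# before each new round of corruption.
+#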
+for {set i [expr {1*256}]} {$i<$fsize-256} {incr i 256} {
+ set tn [expr {$i/256}]
+ db close
+ copy_file test.bu test.db
+ set fd [open test.db r+]
+ fconfigure $fd -translation binary
+ seek $fd $i
+ puts -nonewline $fd $junk
+ close $fd
+ do_test corrupt-2.$tn.1 {
+ sqlite3 db test.db
+ catchsql {SELECT count(*) FROM sqlite_master}
+ set x {}
+ } {}
+ do_test corrupt-2.$tn.2 {
+ catchsql {SELECT count(*) FROM t1}
+ set x {}
+ } {}
+ do_test corrupt-2.$tn.3 {
+ catchsql {SELECT count(*) FROM t1 WHERE x>'abcdef'}
+ set x {}
+ } {}
+ do_test corrupt-2.$tn.4 {
+ catchsql {SELECT count(*) FROM t2}
+ set x {}
+ } {}
+ do_test corrupt-2.$tn.5 {
+ catchsql {CREATE TABLE t3 AS SELECT * FROM t1}
+ set x {}
+ } {}
+ do_test corrupt-2.$tn.6 {
+ catchsql {DROP TABLE t1}
+ set x {}
+ } {}
+ do_test corrupt-2.$tn.7 {
+ catchsql {PRAGMA integrity_check}
+ set x {}
+ } {}
+}
+
+#------------------------------------------------------------------------
+# For these tests, swap the rootpage entries of t1 (a table) and t1i1 (an
+# index on t1) in sqlite_master. Then perform a few different queries
+# and make sure this is detected as corruption.
+#
+do_test corrupt-3.1 {
+ db close
+ copy_file test.bu test.db
+ sqlite3 db test.db
+ list
+} {}
+do_test corrupt-3.2 {
+ set t1_r [execsql {SELECT rootpage FROM sqlite_master WHERE name = 't1i1'}]
+ set t1i1_r [execsql {SELECT rootpage FROM sqlite_master WHERE name = 't1'}]
+ set cookie [expr [execsql {PRAGMA schema_version}] + 1]
+ execsql "
+ PRAGMA writable_schema = 1;
+ UPDATE sqlite_master SET rootpage = $t1_r WHERE name = 't1';
+ UPDATE sqlite_master SET rootpage = $t1i1_r WHERE name = 't1i1';
+ PRAGMA writable_schema = 0;
+ PRAGMA schema_version = $cookie;
+ "
+} {}
+
+# This one tests the case caught by code in checkin [2313].
+do_test corrupt-3.3 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ INSERT INTO t1 VALUES('abc');
+ }
+} {1 {database disk image is malformed}}
+do_test corrupt-3.4 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM t1;
+ }
+} {1 {database disk image is malformed}}
+do_test corrupt-3.5 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM t1 WHERE oid = 10;
+ }
+} {1 {database disk image is malformed}}
+do_test corrupt-3.6 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM t1 WHERE x = 'abcde';
+ }
+} {1 {database disk image is malformed}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/corrupt2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/corrupt2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,109 @@
+# 2004 August 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to make sure SQLite does not crash or
+# segfault if it sees a corrupt database file.
+#
+# $Id: corrupt2.test,v 1.2 2005/01/22 03:39:39 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# The following tests - corrupt2-1.* - create some databases corrupted in
+# specific ways and ensure that SQLite detects them as corrupt.
+#
+do_test corrupt2-1.1 {
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ }
+} {}
+
+do_test corrupt2-1.2 {
+
+  # Corrupt the 16-byte magic string at the start of the file.
+ file delete -force corrupt.db
+ file delete -force corrupt.db-journal
+ copy_file test.db corrupt.db
+ set f [open corrupt.db a]
+ seek $f 8 start
+ puts $f blah
+ close $f
+
+ sqlite3 db2 corrupt.db
+ catchsql {
+ SELECT * FROM sqlite_master;
+ } db2
+} {1 {file is encrypted or is not a database}}
+
+do_test corrupt2-1.3 {
+ db2 close
+
+ # Corrupt the page-size (bytes 16 and 17 of page 1).
+ file delete -force corrupt.db
+ file delete -force corrupt.db-journal
+ copy_file test.db corrupt.db
+ set f [open corrupt.db a]
+ fconfigure $f -encoding binary
+ seek $f 16 start
+ puts -nonewline $f "\x00\xFF"
+ close $f
+
+ sqlite3 db2 corrupt.db
+ catchsql {
+ SELECT * FROM sqlite_master;
+ } db2
+} {1 {file is encrypted or is not a database}}
+
+do_test corrupt2-1.4 {
+ db2 close
+
+ # Corrupt the free-block list on page 1.
+ file delete -force corrupt.db
+ file delete -force corrupt.db-journal
+ copy_file test.db corrupt.db
+ set f [open corrupt.db a]
+ fconfigure $f -encoding binary
+ seek $f 101 start
+ puts -nonewline $f "\xFF\xFF"
+ close $f
+
+ sqlite3 db2 corrupt.db
+ catchsql {
+ SELECT * FROM sqlite_master;
+ } db2
+} {1 {database disk image is malformed}}
+
+do_test corrupt2-1.5 {
+ db2 close
+
+ # Corrupt the free-block list on page 1.
+ file delete -force corrupt.db
+ file delete -force corrupt.db-journal
+ copy_file test.db corrupt.db
+ set f [open corrupt.db a]
+ fconfigure $f -encoding binary
+ seek $f 101 start
+ puts -nonewline $f "\x00\xC8"
+ seek $f 200 start
+ puts -nonewline $f "\x00\x00"
+ puts -nonewline $f "\x10\x00"
+ close $f
+
+ sqlite3 db2 corrupt.db
+ catchsql {
+ SELECT * FROM sqlite_master;
+ } db2
+} {1 {database disk image is malformed}}
+db2 close
+
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/crash.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/crash.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,435 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# The focus of this file is testing the ability of the database to
+# use its rollback journal to recover intact (no database corruption)
+# from a power failure during the middle of a COMMIT. The OS interface
+# modules are overloaded in a separate instance of testfixture using
+# the modified I/O routines found in test6.c. These routines allow us
+# to simulate the kind of file damage that occurs after a power failure.
+#
+# $Id: crash.test,v 1.21 2006/01/06 14:32:20 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !crashtest {
+ finish_test
+ return
+}
+
+# set repeats 100
+set repeats 10
+
+# This proc execs a separate process that crashes midway through executing
+# the SQL script $sql on database test.db.
+#
+# The crash occurs during a sync() of file $crashfile. When the crash
+# occurs a random subset of all unsynced writes made by the process are
+# written into the files on disk. Argument $crashdelay indicates the
+# number of file syncs to wait before crashing.
+#
+# The return value is a list of two elements. The first element is a
+# boolean, indicating whether or not the process actually crashed or
+# reported some other error. The second element in the returned list is the
+# error message. This is "child process exited abnormally" if the crash
+# occurred.
+proc crashsql {crashdelay crashfile sql} {
+ set cfile [file join [pwd] $crashfile]
+
+ set f [open crash.tcl w]
+ puts $f "sqlite3_crashparams $crashdelay $cfile"
+ puts $f "set sqlite_pending_byte $::sqlite_pending_byte"
+ puts $f {sqlite3 db test.db}
+
+ # This block sets the cache size of the main database to 10
+ # pages. This is done in case the build is configured to omit
+ # "PRAGMA cache_size".
+ puts $f {db eval {SELECT * FROM sqlite_master;}}
+ puts $f {set bt [btree_from_db db]}
+ puts $f {btree_set_cache_size $bt 10}
+
+ puts $f "db eval {"
+ puts $f "$sql"
+ puts $f "}"
+ close $f
+
+ set r [catch {
+ exec [info nameofexec] crash.tcl >@stdout
+ } msg]
+ lappend r $msg
+}
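+
+# For example, the call below in test crash-1.6:
+#
+#   crashsql 2 test.db-journal {
+#     DELETE FROM abc WHERE a = 1;
+#   }
+#
+# asks the child process to crash during the second sync() of the journal
+# file and is expected to return {1 {child process exited abnormally}}.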
+
+# The following procedure computes a "signature" for table "abc". If
+# abc changes in any way, the signature should change.
+proc signature {} {
+ return [db eval {SELECT count(*), md5sum(a), md5sum(b), md5sum(c) FROM abc}]
+}
+proc signature2 {} {
+ return [db eval {SELECT count(*), md5sum(a), md5sum(b), md5sum(c) FROM abc2}]
+}
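+
+# The tests below record a signature before each simulated crash and check
+# that it is unchanged after recovery, as in crash-1.1 and crash-1.3:
+#
+#   set ::sig [signature]
+#   ...
+#   do_test crash-1.3 {
+#     signature
+#   } $::sig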
+
+#--------------------------------------------------------------------------
+# Simple crash test:
+#
+# crash-1.1: Create a database with a table with two rows.
+# crash-1.2: Run a 'DELETE FROM abc WHERE a = 1' that crashes during
+# the first journal-sync.
+# crash-1.3: Ensure the database is in the same state as after crash-1.1.
+# crash-1.4: Run a 'DELETE FROM abc WHERE a = 1' that crashes during
+# the first database-sync.
+# crash-1.5: Ensure the database is in the same state as after crash-1.1.
+# crash-1.6: Run a 'DELETE FROM abc WHERE a = 1' that crashes during
+# the second journal-sync.
+# crash-1.7: Ensure the database is in the same state as after crash-1.1.
+#
+# Tests 1.8 through 1.11 test for crashes on the third journal sync and
+# second database sync. Neither of these is required in such a small test
+# case, so these tests are just to verify that the test infrastructure
+# operates as expected.
+#
+do_test crash-1.1 {
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ INSERT INTO abc VALUES(1, 2, 3);
+ INSERT INTO abc VALUES(4, 5, 6);
+ }
+ set ::sig [signature]
+ expr 0
+} {0}
+do_test crash-1.2 {
+ crashsql 1 test.db-journal {
+ DELETE FROM abc WHERE a = 1;
+ }
+} {1 {child process exited abnormally}}
+do_test crash-1.3 {
+ signature
+} $::sig
+do_test crash-1.4 {
+ crashsql 1 test.db {
+ DELETE FROM abc WHERE a = 1;
+ }
+} {1 {child process exited abnormally}}
+do_test crash-1.5 {
+ signature
+} $::sig
+do_test crash-1.6 {
+ crashsql 2 test.db-journal {
+ DELETE FROM abc WHERE a = 1;
+ }
+} {1 {child process exited abnormally}}
+do_test crash-1.7 {
+ catchsql {
+ SELECT * FROM abc;
+ }
+} {0 {1 2 3 4 5 6}}
+
+do_test crash-1.8 {
+ crashsql 3 test.db-journal {
+ DELETE FROM abc WHERE a = 1;
+ }
+} {0 {}}
+do_test crash-1.9 {
+ catchsql {
+ SELECT * FROM abc;
+ }
+} {0 {4 5 6}}
+do_test crash-1.10 {
+ crashsql 2 test.db {
+ DELETE FROM abc WHERE a = 4;
+ }
+} {0 {}}
+do_test crash-1.11 {
+ catchsql {
+ SELECT * FROM abc;
+ }
+} {0 {}}
+
+#--------------------------------------------------------------------------
+# The following tests test recovery when both the database file and the
+# journal file contain corrupt data. This can happen after pages are
+# written to the database file before a transaction is committed due to
+# cache-pressure.
+#
+# crash-2.1: Insert 18 pages of data into the database.
+# crash-2.2: Check the database file size looks ok.
+# crash-2.3: Delete 15 or so pages (with a 10 page page-cache), then crash.
+# crash-2.4: Ensure the database is in the same state as after crash-2.1.
+#
+# Test cases crash-2.5 and crash-2.6 check that the database is OK if the
+# crash occurs during the main database file sync. But this isn't really
+# different from the crash-1.* cases.
+#
+do_test crash-2.1 {
+ execsql { BEGIN }
+ for {set n 0} {$n < 1000} {incr n} {
+ execsql "INSERT INTO abc VALUES($n, [expr 2*$n], [expr 3*$n])"
+ }
+ execsql { COMMIT }
+ set ::sig [signature]
+ execsql { SELECT sum(a), sum(b), sum(c) from abc }
+} {499500 999000 1498500}
+do_test crash-2.2 {
+ expr ([file size test.db] / 1024)>16
+} {1}
+do_test crash-2.3 {
+ crashsql 2 test.db-journal {
+ DELETE FROM abc WHERE a < 800;
+ }
+} {1 {child process exited abnormally}}
+do_test crash-2.4 {
+ signature
+} $sig
+do_test crash-2.5 {
+ crashsql 1 test.db {
+ DELETE FROM abc WHERE a<800;
+ }
+} {1 {child process exited abnormally}}
+do_test crash-2.6 {
+ signature
+} $sig
+
+#--------------------------------------------------------------------------
+# The crash-3.* test cases are essentially the same test as test case
+# crash-2.*, but with a more complicated data set.
+#
+# The test is repeated a few times with different seeds for the random
+# number generator in the crashing executable. Because there is no way to
+# seed the random number generator directly, some SQL is added to the test
+# case to 'use up' a different quantity of random numbers before the test SQL
+# is executed.
+#
+
+# Make sure the file is much bigger than the pager-cache (10 pages). This
+# ensures that cache-spills happen regularly.
+do_test crash-3.0 {
+ execsql {
+ INSERT INTO abc SELECT * FROM abc;
+ INSERT INTO abc SELECT * FROM abc;
+ INSERT INTO abc SELECT * FROM abc;
+ INSERT INTO abc SELECT * FROM abc;
+ INSERT INTO abc SELECT * FROM abc;
+ }
+ expr ([file size test.db] / 1024) > 450
+} {1}
+for {set i 1} {$i < $repeats} {incr i} {
+ set sig [signature]
+ do_test crash-3.$i.1 {
+ crashsql [expr $i%5 + 1] test.db-journal "
+ BEGIN;
+ SELECT random() FROM abc LIMIT $i;
+ INSERT INTO abc VALUES(randstr(10,10), 0, 0);
+ DELETE FROM abc WHERE random()%10!=0;
+ COMMIT;
+ "
+ } {1 {child process exited abnormally}}
+ do_test crash-3.$i.2 {
+ signature
+ } $sig
+}
+
+#--------------------------------------------------------------------------
+# The following test cases - crash-4.* - test the correct recovery of the
+# database when a crash occurs during a multi-file transaction.
+#
+# crash-4.1.*: Test recovery when crash occurs during sync() of the
+# main database journal file.
+# crash-4.2.*: Test recovery when crash occurs during sync() of an
+# attached database journal file.
+# crash-4.3.*: Test recovery when crash occurs during sync() of the master
+# journal file.
+#
+do_test crash-4.0 {
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ PRAGMA aux.default_cache_size = 10;
+ CREATE TABLE aux.abc2 AS SELECT 2*a as a, 2*b as b, 2*c as c FROM abc;
+ }
+ expr ([file size test2.db] / 1024) > 450
+} {1}
+
+for {set i 1} {$i<$repeats} {incr i} {
+ set sig [signature]
+ set sig2 [signature2]
+ do_test crash-4.1.$i.1 {
+ set c [crashsql $i test.db-journal "
+ ATTACH 'test2.db' AS aux;
+ BEGIN;
+ SELECT random() FROM abc LIMIT $i;
+ INSERT INTO abc VALUES(randstr(10,10), 0, 0);
+ DELETE FROM abc WHERE random()%10!=0;
+ INSERT INTO abc2 VALUES(randstr(10,10), 0, 0);
+ DELETE FROM abc2 WHERE random()%10!=0;
+ COMMIT;
+ "]
+ set c
+ } {1 {child process exited abnormally}}
+ do_test crash-4.1.$i.2 {
+ signature
+ } $sig
+ do_test crash-4.1.$i.3 {
+ signature2
+ } $sig2
+}
+set i 0
+while {[incr i]} {
+ set sig [signature]
+ set sig2 [signature2]
+ set ::fin 0
+ do_test crash-4.2.$i.1 {
+ set c [crashsql $i test2.db-journal "
+ ATTACH 'test2.db' AS aux;
+ BEGIN;
+ SELECT random() FROM abc LIMIT $i;
+ INSERT INTO abc VALUES(randstr(10,10), 0, 0);
+ DELETE FROM abc WHERE random()%10!=0;
+ INSERT INTO abc2 VALUES(randstr(10,10), 0, 0);
+ DELETE FROM abc2 WHERE random()%10!=0;
+ COMMIT;
+ "]
+ if { $c == {0 {}} } {
+ set ::fin 1
+ set c {1 {child process exited abnormally}}
+ }
+ set c
+ } {1 {child process exited abnormally}}
+ if { $::fin } break
+ do_test crash-4.2.$i.2 {
+ signature
+ } $sig
+ do_test crash-4.2.$i.3 {
+ signature2
+ } $sig2
+}
+for {set i 1} {$i < 5} {incr i} {
+ set sig [signature]
+ set sig2 [signature2]
+ do_test crash-4.3.$i.1 {
+ crashsql 1 test.db-mj* "
+ ATTACH 'test2.db' AS aux;
+ BEGIN;
+ SELECT random() FROM abc LIMIT $i;
+ INSERT INTO abc VALUES(randstr(10,10), 0, 0);
+ DELETE FROM abc WHERE random()%10!=0;
+ INSERT INTO abc2 VALUES(randstr(10,10), 0, 0);
+ DELETE FROM abc2 WHERE random()%10!=0;
+ COMMIT;
+ "
+ } {1 {child process exited abnormally}}
+ do_test crash-4.3.$i.2 {
+ signature
+ } $sig
+ do_test crash-4.3.$i.3 {
+ signature2
+ } $sig2
+}
+
+#--------------------------------------------------------------------------
+# The following test cases - crash-5.* - expose a bug that existed in the
+# sqlite3pager_movepage() API used by auto-vacuum databases. See comments
+# in test crash-5.3 for details.
+#
+db close
+file delete -force test.db
+sqlite3 db test.db
+do_test crash-5.1 {
+ execsql {
+ CREATE TABLE abc(a, b, c); -- Root page 3
+ INSERT INTO abc VALUES(randstr(1500,1500), 0, 0); -- Overflow page 4
+ INSERT INTO abc SELECT * FROM abc;
+ INSERT INTO abc SELECT * FROM abc;
+ INSERT INTO abc SELECT * FROM abc;
+ }
+} {}
+do_test crash-5.2 {
+ expr [file size test.db] / 1024
+} [expr [string match [execsql {pragma auto_vacuum}] 1] ? 11 : 10]
+set sig [signature]
+do_test crash-5.3 {
+# The SQL below is used to expose a bug that existed in
+# sqlite3pager_movepage() during development of the auto-vacuum feature. It
+# functions as follows:
+#
+# 1: Begin a transaction.
+# 2: Put page 4 on the free-list (was the overflow page for the row deleted).
+# 3: Write data to page 4 (it becomes the overflow page for the row inserted).
+# The old page 4 data has been written to the journal file, but the
+# journal file has not been sync()hronized.
+# 4: Create a table, which calls sqlite3pager_movepage() to move page 4
+# to the end of the database (page 12) to make room for the new root-page.
+# 5: Put pressure on the pager-cache. This results in page 4 being written
+# to the database file to make space in the cache to load a new page. The
+# bug was that page 4 was written to the database file before the journal
+# is sync()hronized.
+# 6: Commit. A crash occurs during the sync of the journal file.
+#
+# End result: Before the bug was fixed, data has been written to page 4 of the
+# database file and the journal file does not contain trustworthy rollback
+# data for this page.
+#
+ crashsql 1 test.db-journal {
+ BEGIN; -- 1
+ DELETE FROM abc WHERE oid = 1; -- 2
+ INSERT INTO abc VALUES(randstr(1500,1500), 0, 0); -- 3
+ CREATE TABLE abc2(a, b, c); -- 4
+ SELECT * FROM abc; -- 5
+ COMMIT; -- 6
+ }
+} {1 {child process exited abnormally}}
+integrity_check crash-5.4
+do_test crash-5.5 {
+ signature
+} $sig
+
+#--------------------------------------------------------------------------
+# The following test cases - crash-6.* - test that a DROP TABLE operation
+# is correctly rolled back in the event of a crash while the database file
+# is being written. This is mainly to test that all pages are written to the
+# journal file before truncation in an auto-vacuum database.
+#
+do_test crash-6.1 {
+ crashsql 1 test.db {
+ DROP TABLE abc;
+ }
+} {1 {child process exited abnormally}}
+do_test crash-6.2 {
+ signature
+} $sig
+
+#--------------------------------------------------------------------------
+# These test cases test the case where the master journal file name is
+# corrupted slightly so that the corruption has to be detected by the
+# checksum.
+do_test crash-7.1 {
+ crashsql 1 test.db {
+ ATTACH 'test2.db' AS aux;
+ BEGIN;
+ INSERT INTO abc VALUES(randstr(1500,1500), 0, 0);
+ INSERT INTO abc2 VALUES(randstr(1500,1500), 0, 0);
+ COMMIT;
+ }
+
+ # Change the checksum value for the master journal name.
+ set f [open test.db-journal a]
+ fconfigure $f -encoding binary
+ seek $f [expr [file size test.db-journal] - 12]
+ puts -nonewline $f "\00\00\00\00"
+ close $f
+} {}
+do_test crash-7.2 {
+ signature
+} $sig
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/date.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/date.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,288 @@
+# 2003 October 31
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing date and time functions.
+#
+# $Id: date.test,v 1.17 2006/09/25 18:03:29 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Skip this whole file if date and time functions are omitted
+# at compile-time
+#
+ifcapable {!datetime} {
+ finish_test
+ return
+}
+
+proc datetest {tnum expr result} {
+ do_test date-$tnum [subst {
+ execsql "SELECT coalesce($expr,'NULL')"
+ }] [list $result]
+}
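+
+# So, for example, the first call below:
+#
+#   datetest 1.1 julianday('2000-01-01') 2451544.5
+#
+# expands into a do_test named date-1.1 that evaluates
+# "SELECT coalesce(julianday('2000-01-01'),'NULL')" and expects the single
+# value 2451544.5.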
+set tcl_precision 15
+datetest 1.1 julianday('2000-01-01') 2451544.5
+datetest 1.2 julianday('1970-01-01') 2440587.5
+datetest 1.3 julianday('1910-04-20') 2418781.5
+datetest 1.4 julianday('1986-02-09') 2446470.5
+datetest 1.5 julianday('12:00:00') 2451545.0
+datetest 1.6 {julianday('2000-01-01 12:00:00')} 2451545.0
+datetest 1.7 {julianday('2000-01-01 12:00')} 2451545.0
+datetest 1.8 julianday('bogus') NULL
+datetest 1.9 julianday('1999-12-31') 2451543.5
+datetest 1.10 julianday('1999-12-32') NULL
+datetest 1.11 julianday('1999-13-01') NULL
+datetest 1.12 julianday('2003-02-31') 2452701.5
+datetest 1.13 julianday('2003-03-03') 2452701.5
+datetest 1.14 julianday('+2000-01-01') NULL
+datetest 1.15 julianday('200-01-01') NULL
+datetest 1.16 julianday('2000-1-01') NULL
+datetest 1.17 julianday('2000-01-1') NULL
+datetest 1.18.1 {julianday('2000-01-01 12:00:00')} 2451545.0
+datetest 1.18.2 {julianday('2000-01-01T12:00:00')} 2451545.0
+datetest 1.18.3 {julianday('2000-01-01 T12:00:00')} 2451545.0
+datetest 1.18.4 {julianday('2000-01-01T 12:00:00')} 2451545.0
+datetest 1.18.5 {julianday('2000-01-01 T 12:00:00')} 2451545.0
+datetest 1.19 {julianday('2000-01-01 12:00:00.1')} 2451545.00000116
+datetest 1.20 {julianday('2000-01-01 12:00:00.01')} 2451545.00000012
+datetest 1.21 {julianday('2000-01-01 12:00:00.001')} 2451545.00000001
+datetest 1.22 {julianday('2000-01-01 12:00:00.')} NULL
+datetest 1.23 julianday(12345.6) 12345.6
+datetest 1.24 {julianday('2001-01-01 12:00:00 bogus')} NULL
+datetest 1.25 {julianday('2001-01-01 bogus')} NULL
+datetest 1.26 {julianday('2001-01-01 12:60:00')} NULL
+datetest 1.27 {julianday('2001-01-01 12:59:60')} NULL
+
+datetest 2.1 datetime(0,'unixepoch') {1970-01-01 00:00:00}
+datetest 2.2 datetime(946684800,'unixepoch') {2000-01-01 00:00:00}
+datetest 2.3 {date('2003-10-22','weekday 0')} 2003-10-26
+datetest 2.4 {date('2003-10-22','weekday 1')} 2003-10-27
+datetest 2.5 {date('2003-10-22','weekday 2')} 2003-10-28
+datetest 2.6 {date('2003-10-22','weekday 3')} 2003-10-22
+datetest 2.7 {date('2003-10-22','weekday 4')} 2003-10-23
+datetest 2.8 {date('2003-10-22','weekday 5')} 2003-10-24
+datetest 2.9 {date('2003-10-22','weekday 6')} 2003-10-25
+datetest 2.10 {date('2003-10-22','weekday 7')} NULL
+datetest 2.11 {date('2003-10-22','weekday 5.5')} NULL
+datetest 2.12 {datetime('2003-10-22 12:34','weekday 0')} {2003-10-26 12:34:00}
+datetest 2.13 {datetime('2003-10-22 12:34','start of month')} \
+ {2003-10-01 00:00:00}
+datetest 2.14 {datetime('2003-10-22 12:34','start of year')} \
+ {2003-01-01 00:00:00}
+datetest 2.15 {datetime('2003-10-22 12:34','start of day')} \
+ {2003-10-22 00:00:00}
+datetest 2.16 time('12:34:56.43') 12:34:56
+datetest 2.17 {datetime('2003-10-22 12:34','1 day')} {2003-10-23 12:34:00}
+datetest 2.18 {datetime('2003-10-22 12:34','+1 day')} {2003-10-23 12:34:00}
+datetest 2.19 {datetime('2003-10-22 12:34','+1.25 day')} {2003-10-23 18:34:00}
+datetest 2.20 {datetime('2003-10-22 12:34','-1.0 day')} {2003-10-21 12:34:00}
+datetest 2.21 {datetime('2003-10-22 12:34','1 month')} {2003-11-22 12:34:00}
+datetest 2.22 {datetime('2003-10-22 12:34','11 month')} {2004-09-22 12:34:00}
+datetest 2.23 {datetime('2003-10-22 12:34','-13 month')} {2002-09-22 12:34:00}
+datetest 2.24 {datetime('2003-10-22 12:34','1.5 months')} {2003-12-07 12:34:00}
+datetest 2.25 {datetime('2003-10-22 12:34','-5 years')} {1998-10-22 12:34:00}
+datetest 2.26 {datetime('2003-10-22 12:34','+10.5 minutes')} \
+ {2003-10-22 12:44:30}
+datetest 2.27 {datetime('2003-10-22 12:34','-1.25 hours')} \
+ {2003-10-22 11:19:00}
+datetest 2.28 {datetime('2003-10-22 12:34','11.25 seconds')} \
+ {2003-10-22 12:34:11}
+datetest 2.29 {datetime('2003-10-22 12:24','+5 bogus')} NULL
+
+
+datetest 3.1 {strftime('%d','2003-10-31 12:34:56.432')} 31
+datetest 3.2 {strftime('%f','2003-10-31 12:34:56.432')} 56.432
+datetest 3.3 {strftime('%H','2003-10-31 12:34:56.432')} 12
+datetest 3.4 {strftime('%j','2003-10-31 12:34:56.432')} 304
+datetest 3.5 {strftime('%J','2003-10-31 12:34:56.432')} 2452944.024264259
+datetest 3.6 {strftime('%m','2003-10-31 12:34:56.432')} 10
+datetest 3.7 {strftime('%M','2003-10-31 12:34:56.432')} 34
+datetest 3.8 {strftime('%s','2003-10-31 12:34:56.432')} 1067603696
+datetest 3.9 {strftime('%S','2003-10-31 12:34:56.432')} 56
+datetest 3.10 {strftime('%w','2003-10-31 12:34:56.432')} 5
+datetest 3.11.1 {strftime('%W','2003-10-31 12:34:56.432')} 43
+datetest 3.11.2 {strftime('%W','2004-01-01')} 00
+datetest 3.11.3 {strftime('%W','2004-01-02')} 00
+datetest 3.11.4 {strftime('%W','2004-01-03')} 00
+datetest 3.11.5 {strftime('%W','2004-01-04')} 00
+datetest 3.11.6 {strftime('%W','2004-01-05')} 01
+datetest 3.11.7 {strftime('%W','2004-01-06')} 01
+datetest 3.11.8 {strftime('%W','2004-01-07')} 01
+datetest 3.11.9 {strftime('%W','2004-01-08')} 01
+datetest 3.11.10 {strftime('%W','2004-01-09')} 01
+datetest 3.11.11 {strftime('%W','2004-07-18')} 28
+datetest 3.11.12 {strftime('%W','2004-12-31')} 52
+datetest 3.11.13 {strftime('%W','2007-12-31')} 53
+datetest 3.11.14 {strftime('%W','2007-01-01')} 01
+datetest 3.12 {strftime('%Y','2003-10-31 12:34:56.432')} 2003
+datetest 3.13 {strftime('%%','2003-10-31 12:34:56.432')} %
+datetest 3.14 {strftime('%_','2003-10-31 12:34:56.432')} NULL
+datetest 3.15 {strftime('%Y-%m-%d','2003-10-31')} 2003-10-31
+proc repeat {n txt} {
+ set x {}
+ while {$n>0} {
+ append x $txt
+ incr n -1
+ }
+ return $x
+}
+datetest 3.16 "strftime('[repeat 200 %Y]','2003-10-31')" [repeat 200 2003]
+datetest 3.17 "strftime('[repeat 200 abc%m123]','2003-10-31')" \
+ [repeat 200 abc10123]
+
+set sqlite_current_time 1157124367
+datetest 4.1 {date('now')} {2006-09-01}
+set sqlite_current_time 0
+
+datetest 5.1 {datetime('1994-04-16 14:00:00 +05:00')} {1994-04-16 09:00:00}
+datetest 5.2 {datetime('1994-04-16 14:00:00 -05:15')} {1994-04-16 19:15:00}
+datetest 5.3 {datetime('1994-04-16 05:00:00 +08:30')} {1994-04-15 20:30:00}
+datetest 5.4 {datetime('1994-04-16 14:00:00 -11:55')} {1994-04-17 01:55:00}
+datetest 5.5 {datetime('1994-04-16 14:00:00 -11:60')} NULL
+
+# localtime->utc and utc->localtime conversions. These tests only work
+# if the localtime is US Eastern Time (the time in Charlotte, NC and
+# in New York).
+#
+set tzoffset [db one {
+ SELECT CAST(24*(julianday('2006-09-01') -
+ julianday('2006-09-01','localtime'))+0.5
+ AS INT)
+}]
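+# (On 2006-09-01 US Eastern Time observes daylight saving and is 4 hours
+# behind UTC, which is why the tests below only run when $tzoffset is 4.)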
+if {$tzoffset==4} {
+ datetest 6.1 {datetime('2000-10-29 05:59:00','localtime')}\
+ {2000-10-29 01:59:00}
+ datetest 6.2 {datetime('2000-10-29 06:00:00','localtime')}\
+ {2000-10-29 01:00:00}
+ datetest 6.3 {datetime('2000-04-02 06:59:00','localtime')}\
+ {2000-04-02 01:59:00}
+ datetest 6.4 {datetime('2000-04-02 07:00:00','localtime')}\
+ {2000-04-02 03:00:00}
+ datetest 6.5 {datetime('2000-10-29 01:59:00','utc')} {2000-10-29 05:59:00}
+ datetest 6.6 {datetime('2000-10-29 02:00:00','utc')} {2000-10-29 07:00:00}
+ datetest 6.7 {datetime('2000-04-02 01:59:00','utc')} {2000-04-02 06:59:00}
+ datetest 6.8 {datetime('2000-04-02 02:00:00','utc')} {2000-04-02 06:00:00}
+
+ datetest 6.10 {datetime('2000-01-01 12:00:00','localtime')} \
+ {2000-01-01 07:00:00}
+ datetest 6.11 {datetime('1969-01-01 12:00:00','localtime')} \
+ {1969-01-01 07:00:00}
+ datetest 6.12 {datetime('2039-01-01 12:00:00','localtime')} \
+ {2039-01-01 07:00:00}
+ datetest 6.13 {datetime('2000-07-01 12:00:00','localtime')} \
+ {2000-07-01 08:00:00}
+ datetest 6.14 {datetime('1969-07-01 12:00:00','localtime')} \
+ {1969-07-01 07:00:00}
+ datetest 6.15 {datetime('2039-07-01 12:00:00','localtime')} \
+ {2039-07-01 07:00:00}
+ set sqlite_current_time \
+ [db eval {SELECT strftime('%s','2000-07-01 12:34:56')}]
+ datetest 6.16 {datetime('now','localtime')} {2000-07-01 08:34:56}
+ set sqlite_current_time 0
+}
+
+# Date-time functions that contain NULL arguments return a NULL
+# result.
+#
+datetest 7.1 {datetime(null)} NULL
+datetest 7.2 {datetime('now',null)} NULL
+datetest 7.3 {datetime('now','localtime',null)} NULL
+datetest 7.4 {time(null)} NULL
+datetest 7.5 {time('now',null)} NULL
+datetest 7.6 {time('now','localtime',null)} NULL
+datetest 7.7 {date(null)} NULL
+datetest 7.8 {date('now',null)} NULL
+datetest 7.9 {date('now','localtime',null)} NULL
+datetest 7.10 {julianday(null)} NULL
+datetest 7.11 {julianday('now',null)} NULL
+datetest 7.12 {julianday('now','localtime',null)} NULL
+datetest 7.13 {strftime(null,'now')} NULL
+datetest 7.14 {strftime('%s',null)} NULL
+datetest 7.15 {strftime('%s','now',null)} NULL
+datetest 7.16 {strftime('%s','now','localtime',null)} NULL
+
+# Test modifiers when the date begins as a julian day number - to
+# make sure the HH:MM:SS is preserved. Ticket #551.
+#
+set sqlite_current_time [db eval {SELECT strftime('%s','2003-10-22 12:34:00')}]
+datetest 8.1 {datetime('now','weekday 0')} {2003-10-26 12:34:00}
+datetest 8.2 {datetime('now','weekday 1')} {2003-10-27 12:34:00}
+datetest 8.3 {datetime('now','weekday 2')} {2003-10-28 12:34:00}
+datetest 8.4 {datetime('now','weekday 3')} {2003-10-22 12:34:00}
+datetest 8.5 {datetime('now','start of month')} {2003-10-01 00:00:00}
+datetest 8.6 {datetime('now','start of year')} {2003-01-01 00:00:00}
+datetest 8.7 {datetime('now','start of day')} {2003-10-22 00:00:00}
+datetest 8.8 {datetime('now','1 day')} {2003-10-23 12:34:00}
+datetest 8.9 {datetime('now','+1 day')} {2003-10-23 12:34:00}
+datetest 8.10 {datetime('now','+1.25 day')} {2003-10-23 18:34:00}
+datetest 8.11 {datetime('now','-1.0 day')} {2003-10-21 12:34:00}
+datetest 8.12 {datetime('now','1 month')} {2003-11-22 12:34:00}
+datetest 8.13 {datetime('now','11 month')} {2004-09-22 12:34:00}
+datetest 8.14 {datetime('now','-13 month')} {2002-09-22 12:34:00}
+datetest 8.15 {datetime('now','1.5 months')} {2003-12-07 12:34:00}
+datetest 8.16 {datetime('now','-5 years')} {1998-10-22 12:34:00}
+datetest 8.17 {datetime('now','+10.5 minutes')} {2003-10-22 12:44:30}
+datetest 8.18 {datetime('now','-1.25 hours')} {2003-10-22 11:19:00}
+datetest 8.19 {datetime('now','11.25 seconds')} {2003-10-22 12:34:11}
+set sqlite_current_time 0
+
+# Negative years work. Example: '-4713-11-26' is JD 1.5.
+#
+datetest 9.1 {julianday('-4713-11-24 12:00:00')} {0.0}
+datetest 9.2 {julianday(datetime(5))} {5.0}
+datetest 9.3 {julianday(datetime(10))} {10.0}
+datetest 9.4 {julianday(datetime(100))} {100.0}
+datetest 9.5 {julianday(datetime(1000))} {1000.0}
+datetest 9.6 {julianday(datetime(10000))} {10000.0}
+datetest 9.7 {julianday(datetime(100000))} {100000.0}
+
+# datetime() with just an HH:MM:SS correctly inserts the date 2000-01-01.
+#
+datetest 10.1 {datetime('01:02:03')} {2000-01-01 01:02:03}
+datetest 10.2 {date('01:02:03')} {2000-01-01}
+datetest 10.3 {strftime('%Y-%m-%d %H:%M','01:02:03')} {2000-01-01 01:02}
+
+# Test the new HH:MM:SS modifier
+#
+datetest 11.1 {datetime('2004-02-28 20:00:00', '-01:20:30')} \
+ {2004-02-28 18:39:30}
+datetest 11.2 {datetime('2004-02-28 20:00:00', '+12:30:00')} \
+ {2004-02-29 08:30:00}
+datetest 11.3 {datetime('2004-02-28 20:00:00', '+12:30')} \
+ {2004-02-29 08:30:00}
+datetest 11.4 {datetime('2004-02-28 20:00:00', '12:30')} \
+ {2004-02-29 08:30:00}
+datetest 11.5 {datetime('2004-02-28 20:00:00', '-12:00')} \
+ {2004-02-28 08:00:00}
+datetest 11.6 {datetime('2004-02-28 20:00:00', '-12:01')} \
+ {2004-02-28 07:59:00}
+datetest 11.7 {datetime('2004-02-28 20:00:00', '-11:59')} \
+ {2004-02-28 08:01:00}
+datetest 11.8 {datetime('2004-02-28 20:00:00', '11:59')} \
+ {2004-02-29 07:59:00}
+datetest 11.9 {datetime('2004-02-28 20:00:00', '12:01')} \
+ {2004-02-29 08:01:00}
+datetest 11.10 {datetime('2004-02-28 20:00:00', '12:60')} NULL
+
+# Ticket #1964
+datetest 12.1 {datetime('2005-09-01')} {2005-09-01 00:00:00}
+datetest 12.2 {datetime('2005-09-01','+0 hours')} {2005-09-01 00:00:00}
+
+# Ticket #1991
+do_test date-13.1 {
+ execsql {
+ SELECT strftime('%Y-%m-%d %H:%M:%f', julianday('2006-09-24T10:50:26.047'))
+ }
+} {{2006-09-24 10:50:26.047}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/default.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/default.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,52 @@
+# 2005 August 18
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing corner cases of the DEFAULT syntax
+# on table definitions.
+#
+# $Id: default.test,v 1.2 2005/08/20 03:03:04 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable bloblit {
+ do_test default-1.1 {
+ execsql {
+ CREATE TABLE t1(
+ a INTEGER,
+ b BLOB DEFAULT x'6869'
+ );
+ INSERT INTO t1(a) VALUES(1);
+ SELECT * from t1;
+ }
+ } {1 hi}
+}
+do_test default-1.2 {
+ execsql {
+ CREATE TABLE t2(
+ x INTEGER,
+ y INTEGER DEFAULT NULL
+ );
+ INSERT INTO t2(x) VALUES(1);
+ SELECT * FROM t2;
+ }
+} {1 {}}
+do_test default-1.3 {
+ catchsql {
+ CREATE TABLE t3(
+ x INTEGER,
+ y INTEGER DEFAULT (max(x,5))
+ )
+ }
+} {1 {default value of column [y] is not constant}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/delete.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/delete.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,313 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the DELETE FROM statement.
+#
+# $Id: delete.test,v 1.21 2006/01/03 00:33:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Try to delete from a non-existent table.
+#
+do_test delete-1.1 {
+ set v [catch {execsql {DELETE FROM test1}} msg]
+ lappend v $msg
+} {1 {no such table: test1}}
+
+# Try to delete from sqlite_master
+#
+do_test delete-2.1 {
+ set v [catch {execsql {DELETE FROM sqlite_master}} msg]
+ lappend v $msg
+} {1 {table sqlite_master may not be modified}}
+
+# Delete selected entries from a table with and without an index.
+#
+do_test delete-3.1.1 {
+ execsql {CREATE TABLE table1(f1 int, f2 int)}
+ execsql {INSERT INTO table1 VALUES(1,2)}
+ execsql {INSERT INTO table1 VALUES(2,4)}
+ execsql {INSERT INTO table1 VALUES(3,8)}
+ execsql {INSERT INTO table1 VALUES(4,16)}
+ execsql {SELECT * FROM table1 ORDER BY f1}
+} {1 2 2 4 3 8 4 16}
+do_test delete-3.1.2 {
+ execsql {DELETE FROM table1 WHERE f1=3}
+} {}
+do_test delete-3.1.3 {
+ execsql {SELECT * FROM table1 ORDER BY f1}
+} {1 2 2 4 4 16}
+do_test delete-3.1.4 {
+ execsql {CREATE INDEX index1 ON table1(f1)}
+ execsql {PRAGMA count_changes=on}
+ ifcapable explain {
+ execsql {EXPLAIN DELETE FROM table1 WHERE f1=3}
+ }
+ execsql {DELETE FROM 'table1' WHERE f1=3}
+} {0}
+do_test delete-3.1.5 {
+ execsql {SELECT * FROM table1 ORDER BY f1}
+} {1 2 2 4 4 16}
+do_test delete-3.1.6.1 {
+ execsql {DELETE FROM table1 WHERE f1=2}
+} {1}
+do_test delete-3.1.6.2 {
+ db changes
+} 1
+do_test delete-3.1.7 {
+ execsql {SELECT * FROM table1 ORDER BY f1}
+} {1 2 4 16}
+integrity_check delete-3.2
+
+
+# Semantic errors in the WHERE clause
+#
+do_test delete-4.1 {
+ execsql {CREATE TABLE table2(f1 int, f2 int)}
+ set v [catch {execsql {DELETE FROM table2 WHERE f3=5}} msg]
+ lappend v $msg
+} {1 {no such column: f3}}
+
+do_test delete-4.2 {
+ set v [catch {execsql {DELETE FROM table2 WHERE xyzzy(f1+4)}} msg]
+ lappend v $msg
+} {1 {no such function: xyzzy}}
+integrity_check delete-4.3
+
+# Lots of deletes
+#
+do_test delete-5.1.1 {
+ execsql {DELETE FROM table1}
+} {2}
+do_test delete-5.1.2 {
+ execsql {SELECT count(*) FROM table1}
+} {0}
+do_test delete-5.2.1 {
+ execsql {BEGIN TRANSACTION}
+ for {set i 1} {$i<=200} {incr i} {
+ execsql "INSERT INTO table1 VALUES($i,[expr {$i*$i}])"
+ }
+ execsql {COMMIT}
+ execsql {SELECT count(*) FROM table1}
+} {200}
+do_test delete-5.2.2 {
+ execsql {DELETE FROM table1}
+} {200}
+do_test delete-5.2.3 {
+ execsql {BEGIN TRANSACTION}
+ for {set i 1} {$i<=200} {incr i} {
+ execsql "INSERT INTO table1 VALUES($i,[expr {$i*$i}])"
+ }
+ execsql {COMMIT}
+ execsql {SELECT count(*) FROM table1}
+} {200}
+do_test delete-5.2.4 {
+ execsql {PRAGMA count_changes=off}
+ execsql {DELETE FROM table1}
+} {}
+do_test delete-5.2.5 {
+ execsql {SELECT count(*) FROM table1}
+} {0}
+do_test delete-5.2.6 {
+ execsql {BEGIN TRANSACTION}
+ for {set i 1} {$i<=200} {incr i} {
+ execsql "INSERT INTO table1 VALUES($i,[expr {$i*$i}])"
+ }
+ execsql {COMMIT}
+ execsql {SELECT count(*) FROM table1}
+} {200}
+do_test delete-5.3 {
+ for {set i 1} {$i<=200} {incr i 4} {
+ execsql "DELETE FROM table1 WHERE f1==$i"
+ }
+ execsql {SELECT count(*) FROM table1}
+} {150}
+do_test delete-5.4.1 {
+ execsql "DELETE FROM table1 WHERE f1>50"
+ db changes
+} [db one {SELECT count(*) FROM table1 WHERE f1>50}]
+do_test delete-5.4.2 {
+ execsql {SELECT count(*) FROM table1}
+} {37}
+do_test delete-5.5 {
+ for {set i 1} {$i<=70} {incr i 3} {
+ execsql "DELETE FROM table1 WHERE f1==$i"
+ }
+ execsql {SELECT f1 FROM table1 ORDER BY f1}
+} {2 3 6 8 11 12 14 15 18 20 23 24 26 27 30 32 35 36 38 39 42 44 47 48 50}
+do_test delete-5.6 {
+ for {set i 1} {$i<40} {incr i} {
+ execsql "DELETE FROM table1 WHERE f1==$i"
+ }
+ execsql {SELECT f1 FROM table1 ORDER BY f1}
+} {42 44 47 48 50}
+do_test delete-5.7 {
+ execsql "DELETE FROM table1 WHERE f1!=48"
+ execsql {SELECT f1 FROM table1 ORDER BY f1}
+} {48}
+integrity_check delete-5.8
+
+
+# Delete large quantities of data. We want to test the List overflow
+# mechanism in the vdbe.
+#
+do_test delete-6.1 {
+ execsql {BEGIN; DELETE FROM table1}
+ for {set i 1} {$i<=3000} {incr i} {
+ execsql "INSERT INTO table1 VALUES($i,[expr {$i*$i}])"
+ }
+ execsql {DELETE FROM table2}
+ for {set i 1} {$i<=3000} {incr i} {
+ execsql "INSERT INTO table2 VALUES($i,[expr {$i*$i}])"
+ }
+ execsql {COMMIT}
+ execsql {SELECT count(*) FROM table1}
+} {3000}
+do_test delete-6.2 {
+ execsql {SELECT count(*) FROM table2}
+} {3000}
+do_test delete-6.3 {
+ execsql {SELECT f1 FROM table1 WHERE f1<10 ORDER BY f1}
+} {1 2 3 4 5 6 7 8 9}
+do_test delete-6.4 {
+ execsql {SELECT f1 FROM table2 WHERE f1<10 ORDER BY f1}
+} {1 2 3 4 5 6 7 8 9}
+do_test delete-6.5.1 {
+ execsql {DELETE FROM table1 WHERE f1>7}
+ db changes
+} {2993}
+do_test delete-6.5.2 {
+ execsql {SELECT f1 FROM table1 ORDER BY f1}
+} {1 2 3 4 5 6 7}
+do_test delete-6.6 {
+ execsql {DELETE FROM table2 WHERE f1>7}
+ execsql {SELECT f1 FROM table2 ORDER BY f1}
+} {1 2 3 4 5 6 7}
+do_test delete-6.7 {
+ execsql {DELETE FROM table1}
+ execsql {SELECT f1 FROM table1}
+} {}
+do_test delete-6.8 {
+ execsql {INSERT INTO table1 VALUES(2,3)}
+ execsql {SELECT f1 FROM table1}
+} {2}
+do_test delete-6.9 {
+ execsql {DELETE FROM table2}
+ execsql {SELECT f1 FROM table2}
+} {}
+do_test delete-6.10 {
+ execsql {INSERT INTO table2 VALUES(2,3)}
+ execsql {SELECT f1 FROM table2}
+} {2}
+integrity_check delete-6.11
+
+do_test delete-7.1 {
+ execsql {
+ CREATE TABLE t3(a);
+ INSERT INTO t3 VALUES(1);
+ INSERT INTO t3 SELECT a+1 FROM t3;
+ INSERT INTO t3 SELECT a+2 FROM t3;
+ SELECT * FROM t3;
+ }
+} {1 2 3 4}
+ifcapable {trigger} {
+ do_test delete-7.2 {
+ execsql {
+ CREATE TABLE cnt(del);
+ INSERT INTO cnt VALUES(0);
+ CREATE TRIGGER r1 AFTER DELETE ON t3 FOR EACH ROW BEGIN
+ UPDATE cnt SET del=del+1;
+ END;
+ DELETE FROM t3 WHERE a<2;
+ SELECT * FROM t3;
+ }
+ } {2 3 4}
+ do_test delete-7.3 {
+ execsql {
+ SELECT * FROM cnt;
+ }
+ } {1}
+ do_test delete-7.4 {
+ execsql {
+ DELETE FROM t3;
+ SELECT * FROM t3;
+ }
+ } {}
+ do_test delete-7.5 {
+ execsql {
+ SELECT * FROM cnt;
+ }
+ } {4}
+ do_test delete-7.6 {
+ execsql {
+ INSERT INTO t3 VALUES(1);
+ INSERT INTO t3 SELECT a+1 FROM t3;
+ INSERT INTO t3 SELECT a+2 FROM t3;
+ CREATE TABLE t4 AS SELECT * FROM t3;
+ PRAGMA count_changes=ON;
+ DELETE FROM t3;
+ DELETE FROM t4;
+ }
+ } {4 4}
+} ;# endif trigger
+ifcapable {!trigger} {
+ execsql {DELETE FROM t3}
+}
+integrity_check delete-7.7
+
+# Make sure error messages are consistent when attempting to delete
+# from a read-only database. Ticket #304.
+#
+do_test delete-8.0 {
+ execsql {
+ PRAGMA count_changes=OFF;
+ INSERT INTO t3 VALUES(123);
+ SELECT * FROM t3;
+ }
+} {123}
+db close
+catch {file attributes test.db -permissions 0444}
+catch {file attributes test.db -readonly 1}
+sqlite3 db test.db
+set ::DB [sqlite3_connection_pointer db]
+do_test delete-8.1 {
+ catchsql {
+ DELETE FROM t3;
+ }
+} {1 {attempt to write a readonly database}}
+do_test delete-8.2 {
+ execsql {SELECT * FROM t3}
+} {123}
+do_test delete-8.3 {
+ catchsql {
+ DELETE FROM t3 WHERE 1;
+ }
+} {1 {attempt to write a readonly database}}
+do_test delete-8.4 {
+ execsql {SELECT * FROM t3}
+} {123}
+
+# Update for v3: In v2 the DELETE statement would succeed because no
+# database writes actually occur. Version 3 refuses to open a transaction
+# on a read-only file, so the statement fails.
+do_test delete-8.5 {
+ catchsql {
+ DELETE FROM t3 WHERE a<100;
+ }
+# v2 result: {0 {}}
+} {1 {attempt to write a readonly database}}
+do_test delete-8.6 {
+ execsql {SELECT * FROM t3}
+} {123}
+integrity_check delete-8.7
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/delete2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/delete2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,99 @@
+# 2003 September 6
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is a test to replicate the bug reported by
+# ticket #842.
+#
+# Ticket #842 was a database corruption problem caused by a DELETE that
+# removed an index entry but not the main table entry. To recreate the
+# problem do this:
+#
+# (1) Create a table with an index. Insert some data into that table.
+# (2) Start a query on the table but do not complete the query.
+# (3) Try to delete a single entry from the table.
+#
+# Step 3 will fail because there is still a read cursor on the table.
+# But the database is corrupted by the DELETE. It turns out that the
+# index entry was deleted first, before the table entry. And the index
+# delete worked. Thus an entry was deleted from the index but not from
+# the table.
+#
+# The solution to the problem was to detect that the table is locked
+# before the index entry is deleted.
+#
+# $Id: delete2.test,v 1.7 2006/08/16 16:42:48 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a table that has an index.
+#
+do_test delete2-1.1 {
+ set DB [sqlite3_connection_pointer db]
+ execsql {
+ CREATE TABLE q(s string, id string, constraint pk_q primary key(id));
+ BEGIN;
+ INSERT INTO q(s,id) VALUES('hello','id.1');
+ INSERT INTO q(s,id) VALUES('goodbye','id.2');
+ INSERT INTO q(s,id) VALUES('again','id.3');
+ END;
+ SELECT * FROM q;
+ }
+} {hello id.1 goodbye id.2 again id.3}
+do_test delete2-1.2 {
+ execsql {
+ SELECT * FROM q WHERE id='id.1';
+ }
+} {hello id.1}
+integrity_check delete2-1.3
+
+# Start a query on the table. The query should not use the index.
+# Do not complete the query, thus leaving the table locked.
+#
+do_test delete2-1.4 {
+ set STMT [sqlite3_prepare $DB {SELECT * FROM q} -1 TAIL]
+ sqlite3_step $STMT
+} SQLITE_ROW
+integrity_check delete2-1.5
+
+# Try to delete a row from the table while a read is in process.
+# As of 2006-08-16, this is allowed. (It used to fail with SQLITE_LOCKED.)
+#
+do_test delete2-1.6 {
+ catchsql {
+ DELETE FROM q WHERE rowid=1
+ }
+} {0 {}}
+integrity_check delete2-1.7
+do_test delete2-1.8 {
+ execsql {
+ SELECT * FROM q;
+ }
+} {goodbye id.2 again id.3}
+
+# Finalize the query, thus clearing the lock on the table. Then
+# retry the delete. The delete should work this time.
+#
+do_test delete2-1.9 {
+ sqlite3_finalize $STMT
+ catchsql {
+ DELETE FROM q WHERE rowid=1
+ }
+} {0 {}}
+integrity_check delete2-1.10
+do_test delete2-1.11 {
+ execsql {
+ SELECT * FROM q;
+ }
+} {goodbye id.2 again id.3}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/delete3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/delete3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,57 @@
+# 2005 August 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is a test of the DELETE command where a
+# large number of rows are deleted.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a table that contains a large number of rows.
+#
+do_test delete3-1.1 {
+ execsql {
+ CREATE TABLE t1(x integer primary key);
+ BEGIN;
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 SELECT x+2 FROM t1;
+ INSERT INTO t1 SELECT x+4 FROM t1;
+ INSERT INTO t1 SELECT x+8 FROM t1;
+ INSERT INTO t1 SELECT x+16 FROM t1;
+ INSERT INTO t1 SELECT x+32 FROM t1;
+ INSERT INTO t1 SELECT x+64 FROM t1;
+ INSERT INTO t1 SELECT x+128 FROM t1;
+ INSERT INTO t1 SELECT x+256 FROM t1;
+ INSERT INTO t1 SELECT x+512 FROM t1;
+ INSERT INTO t1 SELECT x+1024 FROM t1;
+ INSERT INTO t1 SELECT x+2048 FROM t1;
+ INSERT INTO t1 SELECT x+4096 FROM t1;
+ INSERT INTO t1 SELECT x+8192 FROM t1;
+ INSERT INTO t1 SELECT x+16384 FROM t1;
+ INSERT INTO t1 SELECT x+32768 FROM t1;
+ INSERT INTO t1 SELECT x+65536 FROM t1;
+ INSERT INTO t1 SELECT x+131072 FROM t1;
+ INSERT INTO t1 SELECT x+262144 FROM t1;
+ COMMIT;
+ SELECT count(*) FROM t1;
+ }
+} {524288}
+do_test delete3-1.2 {
+ execsql {
+ DELETE FROM t1 WHERE x%2==0;
+ SELECT count(*) FROM t1;
+ }
+} {262144}
+integrity_check delete3-1.3
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/descidx1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/descidx1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,337 @@
+# 2005 December 21
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is descending indices.
+#
+# $Id: descidx1.test,v 1.7 2006/07/11 14:17:52 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+db eval {PRAGMA legacy_file_format=OFF}
+
+# This procedure sets the value of the file-format in file 'test.db'
+# to $newval. Also, the schema cookie is incremented.
+#
+proc set_file_format {newval} {
+ set bt [btree_open test.db 10 0]
+ btree_begin_transaction $bt
+ set meta [btree_get_meta $bt]
+ lset meta 2 $newval ;# File format
+ lset meta 1 [expr [lindex $meta 1]+1] ;# Schema cookie
+ eval "btree_update_meta $bt $meta"
+ btree_commit $bt
+ btree_close $bt
+}
+
+# This procedure returns the value of the file-format in file 'test.db'.
+#
+proc get_file_format {{fname test.db}} {
+ set bt [btree_open $fname 10 0]
+ set meta [btree_get_meta $bt]
+ btree_close $bt
+ lindex $meta 2
+}
+
+# Verify that the file format starts as 4.
+#
+do_test descidx1-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ CREATE INDEX i1 ON t1(b ASC);
+ }
+ get_file_format
+} {4}
+do_test descidx1-1.2 {
+ execsql {
+ CREATE INDEX i2 ON t1(a DESC);
+ }
+ get_file_format
+} {4}
+
+# Put some information in the table and verify that the descending
+# index actually works.
+#
+do_test descidx1-2.1 {
+ execsql {
+ INSERT INTO t1 VALUES(1,1);
+ INSERT INTO t1 VALUES(2,2);
+ INSERT INTO t1 SELECT a+2, a+2 FROM t1;
+ INSERT INTO t1 SELECT a+4, a+4 FROM t1;
+ SELECT b FROM t1 WHERE a>3 AND a<7;
+ }
+} {6 5 4}
+do_test descidx1-2.2 {
+ execsql {
+ SELECT a FROM t1 WHERE b>3 AND b<7;
+ }
+} {4 5 6}
+do_test descidx1-2.3 {
+ execsql {
+ SELECT b FROM t1 WHERE a>=3 AND a<7;
+ }
+} {6 5 4 3}
+do_test descidx1-2.4 {
+ execsql {
+ SELECT b FROM t1 WHERE a>3 AND a<=7;
+ }
+} {7 6 5 4}
+do_test descidx1-2.5 {
+ execsql {
+ SELECT b FROM t1 WHERE a>=3 AND a<=7;
+ }
+} {7 6 5 4 3}
+do_test descidx1-2.6 {
+ execsql {
+ SELECT a FROM t1 WHERE b>=3 AND b<=7;
+ }
+} {3 4 5 6 7}
+
+# This procedure executes the SQL. Then it checks to see if the OP_Sort
+# opcode was executed. If an OP_Sort did occur, then "sort" is appended
+# to the result. If no OP_Sort happened, then "nosort" is appended.
+#
+# This procedure is used to check to make sure sorting is or is not
+# occurring as expected.
+#
+proc cksort {sql} {
+ set ::sqlite_sort_count 0
+ set data [execsql $sql]
+ if {$::sqlite_sort_count} {set x sort} {set x nosort}
+ lappend data $x
+ return $data
+}
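+
+# For example, the call in test descidx1-3.1 below:
+#
+#   cksort {SELECT a FROM t1 ORDER BY a}
+#
+# returns {1 2 3 4 5 6 7 8 nosort} - the query results with "nosort"
+# appended because no separate sorting pass was needed.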
+
+# Test sorting using a descending index.
+#
+do_test descidx1-3.1 {
+ cksort {SELECT a FROM t1 ORDER BY a}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx1-3.2 {
+ cksort {SELECT a FROM t1 ORDER BY a ASC}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx1-3.3 {
+ cksort {SELECT a FROM t1 ORDER BY a DESC}
+} {8 7 6 5 4 3 2 1 nosort}
+do_test descidx1-3.4 {
+ cksort {SELECT b FROM t1 ORDER BY a}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx1-3.5 {
+ cksort {SELECT b FROM t1 ORDER BY a ASC}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx1-3.6 {
+ cksort {SELECT b FROM t1 ORDER BY a DESC}
+} {8 7 6 5 4 3 2 1 nosort}
+do_test descidx1-3.7 {
+ cksort {SELECT a FROM t1 ORDER BY b}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx1-3.8 {
+ cksort {SELECT a FROM t1 ORDER BY b ASC}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx1-3.9 {
+ cksort {SELECT a FROM t1 ORDER BY b DESC}
+} {8 7 6 5 4 3 2 1 nosort}
+do_test descidx1-3.10 {
+ cksort {SELECT b FROM t1 ORDER BY b}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx1-3.11 {
+ cksort {SELECT b FROM t1 ORDER BY b ASC}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx1-3.12 {
+ cksort {SELECT b FROM t1 ORDER BY b DESC}
+} {8 7 6 5 4 3 2 1 nosort}
+
+do_test descidx1-3.21 {
+ cksort {SELECT a FROM t1 WHERE a>3 AND a<8 ORDER BY a}
+} {4 5 6 7 nosort}
+do_test descidx1-3.22 {
+ cksort {SELECT a FROM t1 WHERE a>3 AND a<8 ORDER BY a ASC}
+} {4 5 6 7 nosort}
+do_test descidx1-3.23 {
+ cksort {SELECT a FROM t1 WHERE a>3 AND a<8 ORDER BY a DESC}
+} {7 6 5 4 nosort}
+do_test descidx1-3.24 {
+ cksort {SELECT b FROM t1 WHERE a>3 AND a<8 ORDER BY a}
+} {4 5 6 7 nosort}
+do_test descidx1-3.25 {
+ cksort {SELECT b FROM t1 WHERE a>3 AND a<8 ORDER BY a ASC}
+} {4 5 6 7 nosort}
+do_test descidx1-3.26 {
+ cksort {SELECT b FROM t1 WHERE a>3 AND a<8 ORDER BY a DESC}
+} {7 6 5 4 nosort}
+
+# Create a table with indices that are descending on some terms and
+# ascending on others.
+#
+ifcapable bloblit {
+ do_test descidx1-4.1 {
+ execsql {
+ CREATE TABLE t2(a INT, b TEXT, c BLOB, d REAL);
+ CREATE INDEX i3 ON t2(a ASC, b DESC, c ASC);
+ CREATE INDEX i4 ON t2(b DESC, a ASC, d DESC);
+ INSERT INTO t2 VALUES(1,'one',x'31',1.0);
+ INSERT INTO t2 VALUES(2,'two',x'3232',2.0);
+ INSERT INTO t2 VALUES(3,'three',x'333333',3.0);
+ INSERT INTO t2 VALUES(4,'four',x'34343434',4.0);
+ INSERT INTO t2 VALUES(5,'five',x'3535353535',5.0);
+ INSERT INTO t2 VALUES(6,'six',x'363636363636',6.0);
+ INSERT INTO t2 VALUES(2,'two',x'323232',2.1);
+ INSERT INTO t2 VALUES(2,'zwei',x'3232',2.2);
+ INSERT INTO t2 VALUES(2,NULL,NULL,2.3);
+ SELECT count(*) FROM t2;
+ }
+ } {9}
+ do_test descidx1-4.2 {
+ execsql {
+ SELECT d FROM t2 ORDER BY a;
+ }
+ } {1.0 2.2 2.0 2.1 2.3 3.0 4.0 5.0 6.0}
+ do_test descidx1-4.3 {
+ execsql {
+ SELECT d FROM t2 WHERE a>=2;
+ }
+ } {2.2 2.0 2.1 2.3 3.0 4.0 5.0 6.0}
+ do_test descidx1-4.4 {
+ execsql {
+ SELECT d FROM t2 WHERE a>2;
+ }
+ } {3.0 4.0 5.0 6.0}
+ do_test descidx1-4.5 {
+ execsql {
+ SELECT d FROM t2 WHERE a=2 AND b>'two';
+ }
+ } {2.2}
+ do_test descidx1-4.6 {
+ execsql {
+ SELECT d FROM t2 WHERE a=2 AND b>='two';
+ }
+ } {2.2 2.0 2.1}
+ do_test descidx1-4.7 {
+ execsql {
+ SELECT d FROM t2 WHERE a=2 AND b<'two';
+ }
+ } {}
+ do_test descidx1-4.8 {
+ execsql {
+ SELECT d FROM t2 WHERE a=2 AND b<='two';
+ }
+ } {2.0 2.1}
+}
+
+do_test descidx1-5.1 {
+ execsql {
+ CREATE TABLE t3(a,b,c,d);
+ CREATE INDEX t3i1 ON t3(a DESC, b ASC, c DESC, d ASC);
+ INSERT INTO t3 VALUES(0,0,0,0);
+ INSERT INTO t3 VALUES(0,0,0,1);
+ INSERT INTO t3 VALUES(0,0,1,0);
+ INSERT INTO t3 VALUES(0,0,1,1);
+ INSERT INTO t3 VALUES(0,1,0,0);
+ INSERT INTO t3 VALUES(0,1,0,1);
+ INSERT INTO t3 VALUES(0,1,1,0);
+ INSERT INTO t3 VALUES(0,1,1,1);
+ INSERT INTO t3 VALUES(1,0,0,0);
+ INSERT INTO t3 VALUES(1,0,0,1);
+ INSERT INTO t3 VALUES(1,0,1,0);
+ INSERT INTO t3 VALUES(1,0,1,1);
+ INSERT INTO t3 VALUES(1,1,0,0);
+ INSERT INTO t3 VALUES(1,1,0,1);
+ INSERT INTO t3 VALUES(1,1,1,0);
+ INSERT INTO t3 VALUES(1,1,1,1);
+ SELECT count(*) FROM t3;
+ }
+} {16}
+do_test descidx1-5.2 {
+ cksort {
+ SELECT a||b||c||d FROM t3 ORDER BY a,b,c,d;
+ }
+} {0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111 sort}
+do_test descidx1-5.3 {
+ cksort {
+ SELECT a||b||c||d FROM t3 ORDER BY a DESC, b ASC, c DESC, d ASC;
+ }
+} {1010 1011 1000 1001 1110 1111 1100 1101 0010 0011 0000 0001 0110 0111 0100 0101 nosort}
+do_test descidx1-5.4 {
+ cksort {
+ SELECT a||b||c||d FROM t3 ORDER BY a ASC, b DESC, c ASC, d DESC;
+ }
+} {0101 0100 0111 0110 0001 0000 0011 0010 1101 1100 1111 1110 1001 1000 1011 1010 nosort}
+do_test descidx1-5.5 {
+ cksort {
+ SELECT a||b||c FROM t3 WHERE d=0 ORDER BY a DESC, b ASC, c DESC
+ }
+} {101 100 111 110 001 000 011 010 nosort}
+do_test descidx1-5.6 {
+ cksort {
+ SELECT a||b||c FROM t3 WHERE d=0 ORDER BY a ASC, b DESC, c ASC
+ }
+} {010 011 000 001 110 111 100 101 nosort}
+do_test descidx1-5.7 {
+ cksort {
+ SELECT a||b||c FROM t3 WHERE d=0 ORDER BY a ASC, b DESC, c DESC
+ }
+} {011 010 001 000 111 110 101 100 sort}
+do_test descidx1-5.8 {
+ cksort {
+ SELECT a||b||c FROM t3 WHERE d=0 ORDER BY a ASC, b ASC, c ASC
+ }
+} {000 001 010 011 100 101 110 111 sort}
+do_test descidx1-5.9 {
+ cksort {
+ SELECT a||b||c FROM t3 WHERE d=0 ORDER BY a DESC, b DESC, c ASC
+ }
+} {110 111 100 101 010 011 000 001 sort}
+
+# Test the legacy_file_format pragma here because we have access to
+# the get_file_format command.
+#
+ifcapable legacyformat {
+ do_test descidx1-6.1 {
+ db close
+ file delete -force test.db test.db-journal
+ sqlite3 db test.db
+ execsql {PRAGMA legacy_file_format}
+ } {1}
+} else {
+ do_test descidx1-6.1 {
+ db close
+ file delete -force test.db test.db-journal
+ sqlite3 db test.db
+ execsql {PRAGMA legacy_file_format}
+ } {0}
+}
+do_test descidx1-6.2 {
+ execsql {PRAGMA legacy_file_format=YES}
+ execsql {PRAGMA legacy_file_format}
+} {1}
+do_test descidx1-6.3 {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ }
+ get_file_format
+} {1}
+do_test descidx1-6.4 {
+ db close
+ file delete -force test.db test.db-journal
+ sqlite3 db test.db
+ execsql {PRAGMA legacy_file_format=NO}
+ execsql {PRAGMA legacy_file_format}
+} {0}
+do_test descidx1-6.5 {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ }
+ get_file_format
+} {4}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/descidx2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/descidx2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,184 @@
+# 2005 December 21
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is descending indices.
+#
+# $Id: descidx2.test,v 1.4 2006/07/11 14:17:52 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+db eval {PRAGMA legacy_file_format=OFF}
+
+# This procedure sets the value of the file-format in file 'test.db'
+# to $newval. Also, the schema cookie is incremented.
+#
+proc set_file_format {newval} {
+ set bt [btree_open test.db 10 0]
+ btree_begin_transaction $bt
+ set meta [btree_get_meta $bt]
+ lset meta 2 $newval ;# File format
+ lset meta 1 [expr [lindex $meta 1]+1] ;# Schema cookie
+ eval "btree_update_meta $bt $meta"
+ btree_commit $bt
+ btree_close $bt
+}
+
+# This procedure returns the value of the file-format in file 'test.db'.
+#
+proc get_file_format {{fname test.db}} {
+ set bt [btree_open $fname 10 0]
+ set meta [btree_get_meta $bt]
+ btree_close $bt
+ lindex $meta 2
+}
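+
+# A minimal sketch of how the two helpers above are combined by the tests
+# below (the real tests drive this through do_test):
+#
+#   set_file_format 3        ;# force the legacy on-disk format
+#   db close
+#   sqlite3 db test.db       ;# reopen so the new format takes effect
+#   get_file_format          ;# expected to return 3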
+
+# Verify that the file format starts as 4
+#
+do_test descidx2-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ CREATE INDEX i1 ON t1(b ASC);
+ }
+ get_file_format
+} {4}
+do_test descidx2-1.2 {
+ execsql {
+ CREATE INDEX i2 ON t1(a DESC);
+ }
+ get_file_format
+} {4}
+
+# Before adding any information to the database, set the file format
+# back to three. Then close and reopen the database. With the file
+# format set to three, SQLite should ignore the DESC argument on the
+# index.
+#
+do_test descidx2-2.0 {
+ set_file_format 3
+ db close
+ sqlite3 db test.db
+ get_file_format
+} {3}
+
+# Put some information in the table and verify that the DESC
+# on the index is ignored.
+#
+do_test descidx2-2.1 {
+ execsql {
+ INSERT INTO t1 VALUES(1,1);
+ INSERT INTO t1 VALUES(2,2);
+ INSERT INTO t1 SELECT a+2, a+2 FROM t1;
+ INSERT INTO t1 SELECT a+4, a+4 FROM t1;
+ SELECT b FROM t1 WHERE a>3 AND a<7;
+ }
+} {4 5 6}
+do_test descidx2-2.2 {
+ execsql {
+ SELECT a FROM t1 WHERE b>3 AND b<7;
+ }
+} {4 5 6}
+do_test descidx2-2.3 {
+ execsql {
+ SELECT b FROM t1 WHERE a>=3 AND a<7;
+ }
+} {3 4 5 6}
+do_test descidx2-2.4 {
+ execsql {
+ SELECT b FROM t1 WHERE a>3 AND a<=7;
+ }
+} {4 5 6 7}
+do_test descidx2-2.5 {
+ execsql {
+ SELECT b FROM t1 WHERE a>=3 AND a<=7;
+ }
+} {3 4 5 6 7}
+do_test descidx2-2.6 {
+ execsql {
+ SELECT a FROM t1 WHERE b>=3 AND b<=7;
+ }
+} {3 4 5 6 7}
+
+# This procedure executes the SQL. Then it checks to see if the OP_Sort
+# opcode was executed. If an OP_Sort did occur, then "sort" is appended
+# to the result. If no OP_Sort happened, then "nosort" is appended.
+#
+# This procedure is used to check to make sure sorting is or is not
+# occurring as expected.
+#
+proc cksort {sql} {
+ set ::sqlite_sort_count 0
+ set data [execsql $sql]
+ if {$::sqlite_sort_count} {set x sort} {set x nosort}
+ lappend data $x
+ return $data
+}
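+
+# A small illustration of cksort's return value: a query whose ORDER BY
+# can be satisfied directly by an index never invokes the sorter, so
+# "nosort" is appended, e.g.
+#
+#   cksort {SELECT a FROM t1 ORDER BY a}   ;# => {1 2 3 4 5 6 7 8 nosort}
+#
+# whereas a query that needs a separate sorting pass ends in "sort".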
+
+# Test sorting using a descending index.
+#
+do_test descidx2-3.1 {
+ cksort {SELECT a FROM t1 ORDER BY a}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx2-3.2 {
+ cksort {SELECT a FROM t1 ORDER BY a ASC}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx2-3.3 {
+ cksort {SELECT a FROM t1 ORDER BY a DESC}
+} {8 7 6 5 4 3 2 1 nosort}
+do_test descidx2-3.4 {
+ cksort {SELECT b FROM t1 ORDER BY a}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx2-3.5 {
+ cksort {SELECT b FROM t1 ORDER BY a ASC}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx2-3.6 {
+ cksort {SELECT b FROM t1 ORDER BY a DESC}
+} {8 7 6 5 4 3 2 1 nosort}
+do_test descidx2-3.7 {
+ cksort {SELECT a FROM t1 ORDER BY b}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx2-3.8 {
+ cksort {SELECT a FROM t1 ORDER BY b ASC}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx2-3.9 {
+ cksort {SELECT a FROM t1 ORDER BY b DESC}
+} {8 7 6 5 4 3 2 1 nosort}
+do_test descidx2-3.10 {
+ cksort {SELECT b FROM t1 ORDER BY b}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx2-3.11 {
+ cksort {SELECT b FROM t1 ORDER BY b ASC}
+} {1 2 3 4 5 6 7 8 nosort}
+do_test descidx2-3.12 {
+ cksort {SELECT b FROM t1 ORDER BY b DESC}
+} {8 7 6 5 4 3 2 1 nosort}
+
+do_test descidx2-3.21 {
+ cksort {SELECT a FROM t1 WHERE a>3 AND a<8 ORDER BY a}
+} {4 5 6 7 nosort}
+do_test descidx2-3.22 {
+ cksort {SELECT a FROM t1 WHERE a>3 AND a<8 ORDER BY a ASC}
+} {4 5 6 7 nosort}
+do_test descidx2-3.23 {
+ cksort {SELECT a FROM t1 WHERE a>3 AND a<8 ORDER BY a DESC}
+} {7 6 5 4 nosort}
+do_test descidx2-3.24 {
+ cksort {SELECT b FROM t1 WHERE a>3 AND a<8 ORDER BY a}
+} {4 5 6 7 nosort}
+do_test descidx2-3.25 {
+ cksort {SELECT b FROM t1 WHERE a>3 AND a<8 ORDER BY a ASC}
+} {4 5 6 7 nosort}
+do_test descidx2-3.26 {
+ cksort {SELECT b FROM t1 WHERE a>3 AND a<8 ORDER BY a DESC}
+} {7 6 5 4 nosort}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/descidx3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/descidx3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,155 @@
+# 2006 January 02
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is descending indices.
+#
+# $Id: descidx3.test,v 1.5 2006/07/11 14:17:52 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !bloblit {
+ finish_test
+ return
+}
+db eval {PRAGMA legacy_file_format=OFF}
+
+# This procedure sets the value of the file-format in file 'test.db'
+# to $newval. Also, the schema cookie is incremented.
+#
+proc set_file_format {newval} {
+ set bt [btree_open test.db 10 0]
+ btree_begin_transaction $bt
+ set meta [btree_get_meta $bt]
+ lset meta 2 $newval ;# File format
+ lset meta 1 [expr [lindex $meta 1]+1] ;# Schema cookie
+ eval "btree_update_meta $bt $meta"
+ btree_commit $bt
+ btree_close $bt
+}
+
+# This procedure returns the value of the file-format in file 'test.db'.
+#
+proc get_file_format {{fname test.db}} {
+ set bt [btree_open $fname 10 0]
+ set meta [btree_get_meta $bt]
+ btree_close $bt
+ lindex $meta 2
+}
+
+# Verify that the file format starts as 4.
+#
+do_test descidx3-1.1 {
+ execsql {
+ CREATE TABLE t1(i INTEGER PRIMARY KEY,a,b,c,d);
+ CREATE INDEX t1i1 ON t1(a DESC, b ASC, c DESC);
+ CREATE INDEX t1i2 ON t1(b DESC, c ASC, d DESC);
+ }
+ get_file_format
+} {4}
+
+# Put some information in the table and verify that the descending
+# index actually works.
+#
+do_test descidx3-2.1 {
+ execsql {
+ INSERT INTO t1 VALUES(1, NULL, NULL, NULL, NULL);
+ INSERT INTO t1 VALUES(2, 2, 2, 2, 2);
+ INSERT INTO t1 VALUES(3, 3, 3, 3, 3);
+ INSERT INTO t1 VALUES(4, 2.5, 2.5, 2.5, 2.5);
+ INSERT INTO t1 VALUES(5, -5, -5, -5, -5);
+ INSERT INTO t1 VALUES(6, 'six', 'six', 'six', 'six');
+ INSERT INTO t1 VALUES(7, x'77', x'77', x'77', x'77');
+ INSERT INTO t1 VALUES(8, 'eight', 'eight', 'eight', 'eight');
+ INSERT INTO t1 VALUES(9, x'7979', x'7979', x'7979', x'7979');
+ SELECT count(*) FROM t1;
+ }
+} 9
+do_test descidx3-2.2 {
+ execsql {
+ SELECT i FROM t1 ORDER BY a;
+ }
+} {1 5 2 4 3 8 6 7 9}
+do_test descidx3-2.3 {
+ execsql {
+ SELECT i FROM t1 ORDER BY a DESC;
+ }
+} {9 7 6 8 3 4 2 5 1}
+
+# The "natural" order for the index is decreasing
+do_test descidx3-2.4 {
+ execsql {
+ SELECT i FROM t1 WHERE a<=x'7979';
+ }
+} {9 7 6 8 3 4 2 5}
+do_test descidx3-2.5 {
+ execsql {
+ SELECT i FROM t1 WHERE a>-99;
+ }
+} {9 7 6 8 3 4 2 5}
+
+# Even when all values of t1.a are the same, sorting by a returns
+# the rows in reverse order because this is the natural order of the
+# index.
+#
+do_test descidx3-3.1 {
+ execsql {
+ UPDATE t1 SET a=1;
+ SELECT i FROM t1 ORDER BY a;
+ }
+} {9 7 6 8 3 4 2 5 1}
+do_test descidx3-3.2 {
+ execsql {
+ SELECT i FROM t1 WHERE a=1 AND b>0 AND b<'zzz'
+ }
+} {2 4 3 8 6}
+do_test descidx3-3.3 {
+ execsql {
+ SELECT i FROM t1 WHERE b>0 AND b<'zzz'
+ }
+} {6 8 3 4 2}
+do_test descidx3-3.4 {
+ execsql {
+ SELECT i FROM t1 WHERE a=1 AND b>-9999 AND b<x'ffffffff'
+ }
+} {5 2 4 3 8 6 7 9}
+do_test descidx3-3.5 {
+ execsql {
+ SELECT i FROM t1 WHERE b>-9999 AND b<x'ffffffff'
+ }
+} {9 7 6 8 3 4 2 5}
+
+ifcapable subquery {
+  # If the subquery capability is not compiled into the binary, then
+ # the IN(...) operator is not available. Hence these tests cannot be
+ # run.
+ do_test descidx3-4.1 {
+ execsql {
+ UPDATE t1 SET a=2 WHERE i<6;
+ SELECT i FROM t1 WHERE a IN (1,2) AND b>0 AND b<'zzz';
+ }
+ } {8 6 2 4 3}
+ do_test descidx3-4.2 {
+ execsql {
+ UPDATE t1 SET a=1;
+ SELECT i FROM t1 WHERE a IN (1,2) AND b>0 AND b<'zzz';
+ }
+ } {2 4 3 8 6}
+ do_test descidx3-4.3 {
+ execsql {
+ UPDATE t1 SET b=2;
+ SELECT i FROM t1 WHERE a IN (1,2) AND b>0 AND b<'zzz';
+ }
+ } {9 7 6 8 3 4 2 5 1}
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/diskfull.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/diskfull.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,75 @@
+# 2001 October 12
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this file is testing for correct handling of disk full
+# errors.
+#
+# $Id: diskfull.test,v 1.3 2005/09/09 10:46:19 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test diskfull-1.1 {
+ execsql {
+ CREATE TABLE t1(x);
+ INSERT INTO t1 VALUES(randstr(1000,1000));
+ INSERT INTO t1 SELECT * FROM t1;
+ INSERT INTO t1 SELECT * FROM t1;
+ INSERT INTO t1 SELECT * FROM t1;
+ INSERT INTO t1 SELECT * FROM t1;
+ CREATE INDEX t1i1 ON t1(x);
+ CREATE TABLE t2 AS SELECT x AS a, x AS b FROM t1;
+ CREATE INDEX t2i1 ON t2(b);
+ }
+} {}
+set sqlite_diskfull_pending 0
+integrity_check diskfull-1.2
+do_test diskfull-1.3 {
+ set sqlite_diskfull_pending 1
+ catchsql {
+ INSERT INTO t1 SELECT * FROM t1;
+ }
+} {1 {database or disk is full}}
+set sqlite_diskfull_pending 0
+integrity_check diskfull-1.4
+do_test diskfull-1.5 {
+ set sqlite_diskfull_pending 1
+ catchsql {
+ DELETE FROM t1;
+ }
+} {1 {database or disk is full}}
+set sqlite_diskfull_pending 0
+integrity_check diskfull-1.6
+
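+# Exercise VACUUM against a simulated disk-full error.  On each pass the
+# simulated failure is arranged (via sqlite_diskfull_pending) to occur one
+# step later in the VACUUM; the loop stops on the first pass in which the
+# VACUUM completes without the error ever firing.  After every attempt the
+# reopened database must still pass an integrity check.
+#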
+set go 1
+set i 0
+while {$go} {
+ incr i
+ do_test diskfull-2.$i.1 {
+ set sqlite_diskfull_pending $i
+ set sqlite_diskfull 0
+ set r [catchsql {VACUUM}]
+ if {!$sqlite_diskfull} {
+ set r {1 {database or disk is full}}
+ set go 0
+ }
+ if {$r=="1 {disk I/O error}"} {
+ set r {1 {database or disk is full}}
+ }
+ set r
+ } {1 {database or disk is full}}
+ set sqlite_diskfull_pending 0
+ db close
+ sqlite3 db test.db
+ integrity_check diskfull-2.$i.2
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/distinctagg.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/distinctagg.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,57 @@
+# 2005 September 11
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is the DISTINCT modifier on aggregate functions.
+#
+# $Id: distinctagg.test,v 1.2 2005/09/12 23:03:17 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test distinctagg-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(1,2,3);
+ INSERT INTO t1 VALUES(1,3,4);
+ INSERT INTO t1 VALUES(1,3,5);
+ SELECT count(distinct a),
+ count(distinct b),
+ count(distinct c),
+ count(all a) FROM t1;
+ }
+} {1 2 3 3}
+do_test distinctagg-1.2 {
+ execsql {
+ SELECT b, count(distinct c) FROM t1 GROUP BY b ORDER BY b
+ }
+} {2 1 3 2}
+do_test distinctagg-1.3 {
+ execsql {
+ INSERT INTO t1 SELECT a+1, b+3, c+5 FROM t1;
+ INSERT INTO t1 SELECT a+2, b+6, c+10 FROM t1;
+ INSERT INTO t1 SELECT a+4, b+12, c+20 FROM t1;
+ SELECT count(*), count(distinct a), count(distinct b) FROM t1
+ }
+} {24 8 16}
+do_test distinctagg-1.4 {
+ execsql {
+ SELECT a, count(distinct c) FROM t1 GROUP BY a ORDER BY a
+ }
+} {1 3 2 3 3 3 4 3 5 3 6 3 7 3 8 3}
+
+do_test distinctagg-2.1 {
+ catchsql {
+ SELECT count(distinct) FROM t1;
+ }
+} {1 {DISTINCT in aggregate must be followed by an expression}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/enc.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/enc.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,152 @@
+# 2002 May 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The focus of
+# this file is testing the SQLite routines used for converting between the
+# various supported Unicode encodings (UTF-8, UTF-16, UTF-16le and
+# UTF-16be).
+#
+# $Id: enc.test,v 1.5 2004/11/14 21:56:31 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Skip this test if the build does not support multiple encodings.
+#
+ifcapable {!utf16} {
+ finish_test
+ return
+}
+
+proc do_bincmp_test {testname got expect} {
+ binary scan $expect \c* expectvals
+ binary scan $got \c* gotvals
+ do_test $testname [list set dummy $gotvals] $expectvals
+}
+
+# $utf16 is a UTF-16 encoded string. Swap each pair of bytes around
+# to change the byte-order of the string.
+proc swap_byte_order {utf16} {
+ binary scan $utf16 \c* ints
+
+ foreach {a b} $ints {
+ lappend ints2 $b
+ lappend ints2 $a
+ }
+
+ return [binary format \c* $ints2]
+}
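+
+# For instance, the UTF-16LE encoding of "a" is the byte pair 0x61 0x00;
+# after swap_byte_order it becomes 0x00 0x61, which is the UTF-16BE
+# encoding of the same character.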
+
+#
+# Test that the SQLite routines for converting between UTF encodings
+# produce the same results as their TCL counterparts.
+#
+# $testname is the prefix to be used for the test names.
+# $str is a string to use for testing (encoded in UTF-8, as normal for TCL).
+#
+# The test procedure is:
+# 1. Convert the string from UTF-8 to UTF-16le and check that the TCL and
+# SQLite routines produce the same results.
+#
+# 2. Convert the string from UTF-8 to UTF-16be and check that the TCL and
+# SQLite routines produce the same results.
+#
+# 3. Use the SQLite routines to convert the native machine order UTF-16
+# representation back to the original UTF-8. Check that the result
+# matches the original representation.
+#
+# 4. Add a byte-order mark to each of the UTF-16 representations and
+# check that the SQLite routines can convert them back to UTF-8. For
+# byte-order mark info, refer to section 3.10 of the unicode standard.
+#
+# 5. Take the byte-order marked UTF-16 strings from step 4 and ensure
+# that SQLite can convert them both to native byte order UTF-16
+# strings, sans BOM.
+#
+# Coverage:
+#
+# sqlite_utf8to16be (step 2)
+# sqlite_utf8to16le (step 1)
+# sqlite_utf16to8 (steps 3, 4)
+# sqlite_utf16to16le (step 5)
+# sqlite_utf16to16be (step 5)
+#
+proc test_conversion {testname str} {
+
+ # Step 1.
+ set utf16le_sqlite3 [test_translate $str UTF8 UTF16LE]
+ set utf16le_tcl [encoding convertto unicode $str]
+ append utf16le_tcl "\x00\x00"
+ if { $::tcl_platform(byteOrder)!="littleEndian" } {
+ set utf16le_tcl [swap_byte_order $utf16le_tcl]
+ }
+ do_bincmp_test $testname.1 $utf16le_sqlite3 $utf16le_tcl
+ set utf16le $utf16le_tcl
+
+ # Step 2.
+ set utf16be_sqlite3 [test_translate $str UTF8 UTF16BE]
+ set utf16be_tcl [encoding convertto unicode $str]
+ append utf16be_tcl "\x00\x00"
+ if { $::tcl_platform(byteOrder)=="littleEndian" } {
+ set utf16be_tcl [swap_byte_order $utf16be_tcl]
+ }
+ do_bincmp_test $testname.2 $utf16be_sqlite3 $utf16be_tcl
+ set utf16be $utf16be_tcl
+
+ # Step 3.
+ if { $::tcl_platform(byteOrder)=="littleEndian" } {
+ set utf16 $utf16le
+ } else {
+ set utf16 $utf16be
+ }
+ set utf8_sqlite3 [test_translate $utf16 UTF16 UTF8]
+ do_bincmp_test $testname.3 $utf8_sqlite3 [binarize $str]
+
+ # Step 4 (little endian).
+ append utf16le_bom "\xFF\xFE" $utf16le
+ set utf8_sqlite3 [test_translate $utf16le_bom UTF16 UTF8 1]
+ do_bincmp_test $testname.4.le $utf8_sqlite3 [binarize $str]
+
+ # Step 4 (big endian).
+ append utf16be_bom "\xFE\xFF" $utf16be
+ set utf8_sqlite3 [test_translate $utf16be_bom UTF16 UTF8]
+ do_bincmp_test $testname.4.be $utf8_sqlite3 [binarize $str]
+
+ # Step 5 (little endian to little endian).
+ set utf16_sqlite3 [test_translate $utf16le_bom UTF16LE UTF16LE]
+ do_bincmp_test $testname.5.le.le $utf16_sqlite3 $utf16le
+
+ # Step 5 (big endian to big endian).
+ set utf16_sqlite3 [test_translate $utf16be_bom UTF16 UTF16BE]
+ do_bincmp_test $testname.5.be.be $utf16_sqlite3 $utf16be
+
+ # Step 5 (big endian to little endian).
+ set utf16_sqlite3 [test_translate $utf16be_bom UTF16 UTF16LE]
+ do_bincmp_test $testname.5.be.le $utf16_sqlite3 $utf16le
+
+ # Step 5 (little endian to big endian).
+ set utf16_sqlite3 [test_translate $utf16le_bom UTF16 UTF16BE]
+ do_bincmp_test $testname.5.le.be $utf16_sqlite3 $utf16be
+}
+
+translate_selftest
+
+test_conversion enc-1 "hello world"
+test_conversion enc-2 "sqlite"
+test_conversion enc-3 ""
+test_conversion enc-X "\u0100"
+test_conversion enc-4 "\u1234"
+test_conversion enc-5 "\u4321abc"
+test_conversion enc-6 "\u4321\u1234"
+test_conversion enc-7 [string repeat "abcde\u00EF\u00EE\uFFFCabc" 100]
+test_conversion enc-8 [string repeat "\u007E\u007F\u0080\u0081" 100]
+test_conversion enc-9 [string repeat "\u07FE\u07FF\u0800\u0801\uFFF0" 100]
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/enc2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/enc2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,554 @@
+# 2002 May 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The focus of
+# this file is testing the SQLite routines used for converting between the
+# various supported Unicode encodings (UTF-8, UTF-16, UTF-16le and
+# UTF-16be).
+#
+# $Id: enc2.test,v 1.28 2006/09/23 20:36:03 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If UTF16 support is disabled, ignore the tests in this file
+#
+ifcapable {!utf16} {
+ finish_test
+ return
+}
+
+# The rough organisation of tests in this file is:
+#
+# enc2.1.*: Simple tests with a UTF-8 db.
+# enc2.2.*: Simple tests with a UTF-16LE db.
+# enc2.3.*: Simple tests with a UTF-16BE db.
+# enc2.4.*: Test that attached databases must have the same text encoding
+# as the main database.
+# enc2.5.*: Test the behaviour of the library when a collation sequence is
+# not available for the most desirable text encoding.
+# enc2.6.*: Similar test for user functions.
+# enc2.7.*: Test that the VerifyCookie opcode protects against assuming the
+# wrong text encoding for the database.
+# enc2.8.*: Test sqlite3_complete16()
+#
+
+db close
+
+# Return the UTF-8 representation of the supplied UTF-16 string $str.
+proc utf8 {str} {
+  # If $str ends in two 0x00 bytes (a UTF-16 NUL terminator), knock these off before
+ # converting to UTF-8 using TCL.
+ binary scan $str \c* vals
+ if {[lindex $vals end]==0 && [lindex $vals end-1]==0} {
+ set str [binary format \c* [lrange $vals 0 end-2]]
+ }
+
+ set r [encoding convertfrom unicode $str]
+ return $r
+}
+
+#
+# This proc contains all the tests in this file. It is run
+# three times. Each time the file 'test.db' contains a database
+# with the following contents:
+set dbcontents {
+ CREATE TABLE t1(a PRIMARY KEY, b, c);
+ INSERT INTO t1 VALUES('one', 'I', 1);
+}
+# This proc tests that we can open and manipulate the test.db
+# database, and that it is possible to retrieve values in
+# various text encodings.
+#
+proc run_test_script {t enc} {
+
+# Open the database and pull out the single row.
+do_test $t.1 {
+ sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+ execsql {SELECT * FROM t1}
+} {one I 1}
+
+# Insert some data
+do_test $t.2 {
+ execsql {INSERT INTO t1 VALUES('two', 'II', 2);}
+ execsql {SELECT * FROM t1}
+} {one I 1 two II 2}
+
+# Insert some data
+do_test $t.3 {
+ execsql {
+ INSERT INTO t1 VALUES('three','III',3);
+ INSERT INTO t1 VALUES('four','IV',4);
+ INSERT INTO t1 VALUES('five','V',5);
+ }
+ execsql {SELECT * FROM t1}
+} {one I 1 two II 2 three III 3 four IV 4 five V 5}
+
+# Use the index
+do_test $t.4 {
+ execsql {
+ SELECT * FROM t1 WHERE a = 'one';
+ }
+} {one I 1}
+do_test $t.5 {
+ execsql {
+ SELECT * FROM t1 WHERE a = 'four';
+ }
+} {four IV 4}
+ifcapable subquery {
+ do_test $t.6 {
+ execsql {
+ SELECT * FROM t1 WHERE a IN ('one', 'two');
+ }
+ } {one I 1 two II 2}
+}
+
+# Now check that we can retrieve data in both UTF-16 and UTF-8
+do_test $t.7 {
+ set STMT [sqlite3_prepare $DB "SELECT a FROM t1 WHERE c>3;" -1 TAIL]
+ sqlite3_step $STMT
+ sqlite3_column_text $STMT 0
+} {four}
+
+do_test $t.8 {
+ sqlite3_step $STMT
+ utf8 [sqlite3_column_text16 $STMT 0]
+} {five}
+
+do_test $t.9 {
+ sqlite3_finalize $STMT
+} SQLITE_OK
+
+ifcapable vacuum {
+ execsql VACUUM
+}
+
+do_test $t.10 {
+ db eval {PRAGMA encoding}
+} $enc
+
+}
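+
+# run_test_script is invoked once per encoding in the loop below, e.g.
+#
+#   run_test_script enc2-1 UTF-8
+#
+# after test.db has been freshly created with that encoding.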
+
+# The three unicode encodings understood by SQLite.
+set encodings [list UTF-8 UTF-16le UTF-16be]
+
+set sqlite_os_trace 0
+set i 1
+foreach enc $encodings {
+ file delete -force test.db
+ sqlite3 db test.db
+ db eval "PRAGMA encoding = \"$enc\""
+ execsql $dbcontents
+ do_test enc2-$i.0.1 {
+ db eval {PRAGMA encoding}
+ } $enc
+ do_test enc2-$i.0.2 {
+ db eval {PRAGMA encoding=UTF8}
+ db eval {PRAGMA encoding}
+ } $enc
+ do_test enc2-$i.0.3 {
+ db eval {PRAGMA encoding=UTF16le}
+ db eval {PRAGMA encoding}
+ } $enc
+ do_test enc2-$i.0.4 {
+ db eval {PRAGMA encoding=UTF16be}
+ db eval {PRAGMA encoding}
+ } $enc
+
+ db close
+ run_test_script enc2-$i $enc
+ db close
+ incr i
+}
+
+# Test that it is an error to try to attach a database with a different
+# encoding to the main database.
+do_test enc2-4.1 {
+ file delete -force test.db
+ sqlite3 db test.db
+ db eval "PRAGMA encoding = 'UTF-8'"
+ db eval "CREATE TABLE abc(a, b, c);"
+} {}
+do_test enc2-4.2 {
+ file delete -force test2.db
+ sqlite3 db2 test2.db
+ db2 eval "PRAGMA encoding = 'UTF-16'"
+ db2 eval "CREATE TABLE abc(a, b, c);"
+} {}
+do_test enc2-4.3 {
+ catchsql {
+ ATTACH 'test2.db' as aux;
+ }
+} {1 {attached databases must use the same text encoding as main database}}
+
+db2 close
+db close
+
+# The following tests - enc2-5.* - test that SQLite selects the correct
+# collation sequence when more than one is available.
+
+set ::values [list one two three four five]
+set ::test_collate_enc INVALID
+proc test_collate {enc lhs rhs} {
+ set ::test_collate_enc $enc
+ set l [lsearch -exact $::values $lhs]
+ set r [lsearch -exact $::values $rhs]
+ set res [expr $l - $r]
+ # puts "enc=$enc lhs=$lhs/$l rhs=$rhs/$r res=$res"
+ return $res
+}
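+
+# test_collate orders strings by their position in $::values, so an
+# ORDER BY ... COLLATE test_collate query returns the rows in the order
+# "one two three four five" rather than alphabetically.  It also records
+# in ::test_collate_enc the text encoding with which SQLite invoked the
+# collation, which is what the enc2-5.* results check.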
+
+file delete -force test.db
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+do_test enc2-5.0 {
+ execsql {
+ CREATE TABLE t5(a);
+ INSERT INTO t5 VALUES('one');
+ INSERT INTO t5 VALUES('two');
+ INSERT INTO t5 VALUES('five');
+ INSERT INTO t5 VALUES('three');
+ INSERT INTO t5 VALUES('four');
+ }
+} {}
+do_test enc2-5.1 {
+ add_test_collate $DB 1 1 1
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate;}]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-8}
+do_test enc2-5.2 {
+ add_test_collate $DB 0 1 0
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate}]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-16LE}
+do_test enc2-5.3 {
+ add_test_collate $DB 0 0 1
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate}]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-16BE}
+
+db close
+file delete -force test.db
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+execsql {pragma encoding = 'UTF-16LE'}
+do_test enc2-5.4 {
+ execsql {
+ CREATE TABLE t5(a);
+ INSERT INTO t5 VALUES('one');
+ INSERT INTO t5 VALUES('two');
+ INSERT INTO t5 VALUES('five');
+ INSERT INTO t5 VALUES('three');
+ INSERT INTO t5 VALUES('four');
+ }
+} {}
+do_test enc2-5.5 {
+ add_test_collate $DB 1 1 1
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate}]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-16LE}
+do_test enc2-5.6 {
+ add_test_collate $DB 1 0 1
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate}]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-16BE}
+do_test enc2-5.7 {
+ add_test_collate $DB 1 0 0
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate}]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-8}
+
+db close
+file delete -force test.db
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+execsql {pragma encoding = 'UTF-16BE'}
+do_test enc2-5.8 {
+ execsql {
+ CREATE TABLE t5(a);
+ INSERT INTO t5 VALUES('one');
+ INSERT INTO t5 VALUES('two');
+ INSERT INTO t5 VALUES('five');
+ INSERT INTO t5 VALUES('three');
+ INSERT INTO t5 VALUES('four');
+ }
+} {}
+do_test enc2-5.9 {
+ add_test_collate $DB 1 1 1
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate}]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-16BE}
+do_test enc2-5.10 {
+ add_test_collate $DB 1 1 0
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate}]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-16LE}
+do_test enc2-5.11 {
+ add_test_collate $DB 1 0 0
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate}]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-8}
+
+# Also test that a UTF-16 collation factory works.
+do_test enc2-5-12 {
+ add_test_collate $DB 0 0 0
+ catchsql {
+ SELECT * FROM t5 ORDER BY 1 COLLATE test_collate
+ }
+} {1 {no such collation sequence: test_collate}}
+do_test enc2-5.13 {
+ add_test_collate_needed $DB
+ set res [execsql {SELECT * FROM t5 ORDER BY 1 COLLATE test_collate; }]
+ lappend res $::test_collate_enc
+} {one two three four five UTF-16BE}
+do_test enc2-5.14 {
+ set ::sqlite_last_needed_collation
+} test_collate
+
+db close
+file delete -force test.db
+
+do_test enc2-5.15 {
+ sqlite3 db test.db; set ::DB [sqlite3_connection_pointer db]
+ add_test_collate_needed $::DB
+ set ::sqlite_last_needed_collation
+} {}
+do_test enc2-5.16 {
+ execsql {CREATE TABLE t1(a varchar collate test_collate);}
+} {}
+do_test enc2-5.17 {
+ set ::sqlite_last_needed_collation
+} {test_collate}
+
+# The following tests - enc2-6.* - test that SQLite selects the correct
+# user function when more than one is available.
+
+proc test_function {enc arg} {
+ return "$enc $arg"
+}
+
+db close
+file delete -force test.db
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+execsql {pragma encoding = 'UTF-8'}
+do_test enc2-6.0 {
+ execsql {
+ CREATE TABLE t5(a);
+ INSERT INTO t5 VALUES('one');
+ }
+} {}
+do_test enc2-6.1 {
+ add_test_function $DB 1 1 1
+ execsql {
+ SELECT test_function('sqlite')
+ }
+} {{UTF-8 sqlite}}
+db close
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+do_test enc2-6.2 {
+ add_test_function $DB 0 1 0
+ execsql {
+ SELECT test_function('sqlite')
+ }
+} {{UTF-16LE sqlite}}
+db close
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+do_test enc2-6.3 {
+ add_test_function $DB 0 0 1
+ execsql {
+ SELECT test_function('sqlite')
+ }
+} {{UTF-16BE sqlite}}
+
+db close
+file delete -force test.db
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+execsql {pragma encoding = 'UTF-16LE'}
+do_test enc2-6.3 {
+ execsql {
+ CREATE TABLE t5(a);
+ INSERT INTO t5 VALUES('sqlite');
+ }
+} {}
+do_test enc2-6.4 {
+ add_test_function $DB 1 1 1
+ execsql {
+ SELECT test_function('sqlite')
+ }
+} {{UTF-16LE sqlite}}
+db close
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+do_test enc2-6.5 {
+ add_test_function $DB 0 1 0
+ execsql {
+ SELECT test_function('sqlite')
+ }
+} {{UTF-16LE sqlite}}
+db close
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+do_test enc2-6.6 {
+ add_test_function $DB 0 0 1
+ execsql {
+ SELECT test_function('sqlite')
+ }
+} {{UTF-16BE sqlite}}
+
+db close
+file delete -force test.db
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+execsql {pragma encoding = 'UTF-16BE'}
+do_test enc2-6.7 {
+ execsql {
+ CREATE TABLE t5(a);
+ INSERT INTO t5 VALUES('sqlite');
+ }
+} {}
+do_test enc2-6.8 {
+ add_test_function $DB 1 1 1
+ execsql {
+ SELECT test_function('sqlite')
+ }
+} {{UTF-16BE sqlite}}
+db close
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+do_test enc2-6.9 {
+ add_test_function $DB 0 1 0
+ execsql {
+ SELECT test_function('sqlite')
+ }
+} {{UTF-16LE sqlite}}
+db close
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+do_test enc2-6.10 {
+ add_test_function $DB 0 0 1
+ execsql {
+ SELECT test_function('sqlite')
+ }
+} {{UTF-16BE sqlite}}
+
+
+db close
+file delete -force test.db
+
+# The following tests - enc2-7.* - function as follows:
+#
+# 1: Open an empty database file assuming UTF-16 encoding.
+# 2: Open the same database with a different handle assuming UTF-8. Create
+# a table using this handle.
+# 3: Read the sqlite_master table from the first handle.
+# 4: Ensure the first handle recognises that the database encoding is UTF-8.
+#
+do_test enc2-7.1 {
+ sqlite3 db test.db
+ execsql {
+ PRAGMA encoding = 'UTF-16';
+ SELECT * FROM sqlite_master;
+ }
+} {}
+do_test enc2-7.2 {
+ set enc [execsql {
+ PRAGMA encoding;
+ }]
+ string range $enc 0 end-2 ;# Chop off the "le" or "be"
+} {UTF-16}
+do_test enc2-7.3 {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA encoding = 'UTF-8';
+ CREATE TABLE abc(a, b, c);
+ } db2
+} {}
+do_test enc2-7.4 {
+ execsql {
+ SELECT * FROM sqlite_master;
+ }
+} "table abc abc [expr $AUTOVACUUM?3:2] {CREATE TABLE abc(a, b, c)}"
+do_test enc2-7.5 {
+ execsql {
+ PRAGMA encoding;
+ }
+} {UTF-8}
+
+db close
+db2 close
+
+proc utf16 {utf8} {
+ set utf16 [encoding convertto unicode $utf8]
+ append utf16 "\x00\x00"
+ return $utf16
+}
+ifcapable {complete} {
+ do_test enc2-8.1 {
+ sqlite3_complete16 [utf16 "SELECT * FROM t1;"]
+ } {1}
+ do_test enc2-8.2 {
+ sqlite3_complete16 [utf16 "SELECT * FROM"]
+ } {0}
+}
+
+# Test that the encoding of an empty database may still be set after the
+# (empty) schema has been initialized.
+file delete -force test.db
+do_test enc2-9.1 {
+ sqlite3 db test.db
+ execsql {
+ PRAGMA encoding = 'UTF-8';
+ PRAGMA encoding;
+ }
+} {UTF-8}
+do_test enc2-9.2 {
+ sqlite3 db test.db
+ execsql {
+ PRAGMA encoding = 'UTF-16le';
+ PRAGMA encoding;
+ }
+} {UTF-16le}
+do_test enc2-9.3 {
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM sqlite_master;
+ PRAGMA encoding = 'UTF-8';
+ PRAGMA encoding;
+ }
+} {UTF-8}
+do_test enc2-9.4 {
+ sqlite3 db test.db
+ execsql {
+ PRAGMA encoding = 'UTF-16le';
+ CREATE TABLE abc(a, b, c);
+ PRAGMA encoding;
+ }
+} {UTF-16le}
+do_test enc2-9.5 {
+ sqlite3 db test.db
+ execsql {
+ PRAGMA encoding = 'UTF-8';
+ PRAGMA encoding;
+ }
+} {UTF-16le}
+
+# Ticket #1987.
+# Disallow encoding changes once the encoding has been set.
+#
+do_test enc2-10.1 {
+ db close
+ file delete -force test.db test.db-journal
+ sqlite3 db test.db
+ db eval {
+ PRAGMA encoding=UTF16;
+ CREATE TABLE t1(a);
+ PRAGMA encoding=UTF8;
+ CREATE TABLE t2(b);
+ }
+ db close
+ sqlite3 db test.db
+ db eval {
+ SELECT name FROM sqlite_master
+ }
+} {t1 t2}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/enc3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/enc3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,71 @@
+# 2002 May 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.
+#
+# The focus of this file is testing of the proper handling of conversions
+# to the native text representation.
+#
+# $Id: enc3.test,v 1.5 2006/01/12 19:42:41 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable {utf16} {
+ do_test enc3-1.1 {
+ execsql {
+ PRAGMA encoding=utf16le;
+ PRAGMA encoding;
+ }
+ } {UTF-16le}
+}
+do_test enc3-1.2 {
+ execsql {
+ CREATE TABLE t1(x,y);
+ INSERT INTO t1 VALUES('abc''123',5);
+ SELECT * FROM t1
+ }
+} {abc'123 5}
+do_test enc3-1.3 {
+ execsql {
+ SELECT quote(x) || ' ' || quote(y) FROM t1
+ }
+} {{'abc''123' 5}}
+ifcapable {bloblit} {
+ do_test enc3-1.4 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(x'616263646566',NULL);
+ SELECT * FROM t1
+ }
+ } {abcdef {}}
+ do_test enc3-1.5 {
+ execsql {
+ SELECT quote(x) || ' ' || quote(y) FROM t1
+ }
+ } {{X'616263646566' NULL}}
+}
+ifcapable {bloblit && utf16} {
+ do_test enc3-2.1 {
+ execsql {
+ PRAGMA encoding
+ }
+ } {UTF-16le}
+ do_test enc3-2.2 {
+ execsql {
+ CREATE TABLE t2(a);
+ INSERT INTO t2 VALUES(x'61006200630064006500');
+ SELECT CAST(a AS text) FROM t2 WHERE a LIKE 'abc%';
+ }
+ } {abcde}
+}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/expr.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/expr.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,654 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this file is testing expressions.
+#
+# $Id: expr.test,v 1.52 2006/09/01 15:49:06 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a table to work with.
+#
+execsql {CREATE TABLE test1(i1 int, i2 int, r1 real, r2 real, t1 text, t2 text)}
+execsql {INSERT INTO test1 VALUES(1,2,1.1,2.2,'hello','world')}
+proc test_expr {name settings expr result} {
+ do_test $name [format {
+ execsql {BEGIN; UPDATE test1 SET %s; SELECT %s FROM test1; ROLLBACK;}
+ } $settings $expr] $result
+}
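+
+# For example, the first call below runs, inside a transaction that is
+# rolled back, roughly:
+#
+#   UPDATE test1 SET i1=10, i2=20;
+#   SELECT i1+i2 FROM test1;
+#
+# and compares the single-row result against the expected value 30.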
+
+test_expr expr-1.1 {i1=10, i2=20} {i1+i2} 30
+test_expr expr-1.2 {i1=10, i2=20} {i1-i2} -10
+test_expr expr-1.3 {i1=10, i2=20} {i1*i2} 200
+test_expr expr-1.4 {i1=10, i2=20} {i1/i2} 0
+test_expr expr-1.5 {i1=10, i2=20} {i2/i1} 2
+test_expr expr-1.6 {i1=10, i2=20} {i2<i1} 0
+test_expr expr-1.7 {i1=10, i2=20} {i2<=i1} 0
+test_expr expr-1.8 {i1=10, i2=20} {i2>i1} 1
+test_expr expr-1.9 {i1=10, i2=20} {i2>=i1} 1
+test_expr expr-1.10 {i1=10, i2=20} {i2!=i1} 1
+test_expr expr-1.11 {i1=10, i2=20} {i2=i1} 0
+test_expr expr-1.12 {i1=10, i2=20} {i2<>i1} 1
+test_expr expr-1.13 {i1=10, i2=20} {i2==i1} 0
+test_expr expr-1.14 {i1=20, i2=20} {i2<i1} 0
+test_expr expr-1.15 {i1=20, i2=20} {i2<=i1} 1
+test_expr expr-1.16 {i1=20, i2=20} {i2>i1} 0
+test_expr expr-1.17 {i1=20, i2=20} {i2>=i1} 1
+test_expr expr-1.18 {i1=20, i2=20} {i2!=i1} 0
+test_expr expr-1.19 {i1=20, i2=20} {i2=i1} 1
+test_expr expr-1.20 {i1=20, i2=20} {i2<>i1} 0
+test_expr expr-1.21 {i1=20, i2=20} {i2==i1} 1
+test_expr expr-1.22 {i1=1, i2=2, r1=3.0} {i1+i2*r1} {7.0}
+test_expr expr-1.23 {i1=1, i2=2, r1=3.0} {(i1+i2)*r1} {9.0}
+test_expr expr-1.24 {i1=1, i2=2} {min(i1,i2,i1+i2,i1-i2)} {-1}
+test_expr expr-1.25 {i1=1, i2=2} {max(i1,i2,i1+i2,i1-i2)} {3}
+test_expr expr-1.26 {i1=1, i2=2} {max(i1,i2,i1+i2,i1-i2)} {3}
+test_expr expr-1.27 {i1=1, i2=2} {i1==1 AND i2=2} {1}
+test_expr expr-1.28 {i1=1, i2=2} {i1=2 AND i2=1} {0}
+test_expr expr-1.29 {i1=1, i2=2} {i1=1 AND i2=1} {0}
+test_expr expr-1.30 {i1=1, i2=2} {i1=2 AND i2=2} {0}
+test_expr expr-1.31 {i1=1, i2=2} {i1==1 OR i2=2} {1}
+test_expr expr-1.32 {i1=1, i2=2} {i1=2 OR i2=1} {0}
+test_expr expr-1.33 {i1=1, i2=2} {i1=1 OR i2=1} {1}
+test_expr expr-1.34 {i1=1, i2=2} {i1=2 OR i2=2} {1}
+test_expr expr-1.35 {i1=1, i2=2} {i1-i2=-1} {1}
+test_expr expr-1.36 {i1=1, i2=0} {not i1} {0}
+test_expr expr-1.37 {i1=1, i2=0} {not i2} {1}
+test_expr expr-1.38 {i1=1} {-i1} {-1}
+test_expr expr-1.39 {i1=1} {+i1} {1}
+test_expr expr-1.40 {i1=1, i2=2} {+(i2+i1)} {3}
+test_expr expr-1.41 {i1=1, i2=2} {-(i2+i1)} {-3}
+test_expr expr-1.42 {i1=1, i2=2} {i1|i2} {3}
+test_expr expr-1.42b {i1=1, i2=2} {4|2} {6}
+test_expr expr-1.43 {i1=1, i2=2} {i1&i2} {0}
+test_expr expr-1.43b {i1=1, i2=2} {4&5} {4}
+test_expr expr-1.44 {i1=1} {~i1} {-2}
+test_expr expr-1.45 {i1=1, i2=3} {i1<<i2} {8}
+test_expr expr-1.46 {i1=32, i2=3} {i1>>i2} {4}
+test_expr expr-1.47 {i1=9999999999, i2=8888888888} {i1<i2} 0
+test_expr expr-1.48 {i1=9999999999, i2=8888888888} {i1=i2} 0
+test_expr expr-1.49 {i1=9999999999, i2=8888888888} {i1>i2} 1
+test_expr expr-1.50 {i1=99999999999, i2=99999999998} {i1<i2} 0
+test_expr expr-1.51 {i1=99999999999, i2=99999999998} {i1=i2} 0
+test_expr expr-1.52 {i1=99999999999, i2=99999999998} {i1>i2} 1
+test_expr expr-1.53 {i1=099999999999, i2=99999999999} {i1<i2} 0
+test_expr expr-1.54 {i1=099999999999, i2=99999999999} {i1=i2} 1
+test_expr expr-1.55 {i1=099999999999, i2=99999999999} {i1>i2} 0
+test_expr expr-1.56 {i1=25, i2=11} {i1%i2} 3
+test_expr expr-1.58 {i1=NULL, i2=1} {coalesce(i1+i2,99)} 99
+test_expr expr-1.59 {i1=1, i2=NULL} {coalesce(i1+i2,99)} 99
+test_expr expr-1.60 {i1=NULL, i2=NULL} {coalesce(i1+i2,99)} 99
+test_expr expr-1.61 {i1=NULL, i2=1} {coalesce(i1-i2,99)} 99
+test_expr expr-1.62 {i1=1, i2=NULL} {coalesce(i1-i2,99)} 99
+test_expr expr-1.63 {i1=NULL, i2=NULL} {coalesce(i1-i2,99)} 99
+test_expr expr-1.64 {i1=NULL, i2=1} {coalesce(i1*i2,99)} 99
+test_expr expr-1.65 {i1=1, i2=NULL} {coalesce(i1*i2,99)} 99
+test_expr expr-1.66 {i1=NULL, i2=NULL} {coalesce(i1*i2,99)} 99
+test_expr expr-1.67 {i1=NULL, i2=1} {coalesce(i1/i2,99)} 99
+test_expr expr-1.68 {i1=1, i2=NULL} {coalesce(i1/i2,99)} 99
+test_expr expr-1.69 {i1=NULL, i2=NULL} {coalesce(i1/i2,99)} 99
+test_expr expr-1.70 {i1=NULL, i2=1} {coalesce(i1<i2,99)} 99
+test_expr expr-1.71 {i1=1, i2=NULL} {coalesce(i1>i2,99)} 99
+test_expr expr-1.72 {i1=NULL, i2=NULL} {coalesce(i1<=i2,99)} 99
+test_expr expr-1.73 {i1=NULL, i2=1} {coalesce(i1>=i2,99)} 99
+test_expr expr-1.74 {i1=1, i2=NULL} {coalesce(i1!=i2,99)} 99
+test_expr expr-1.75 {i1=NULL, i2=NULL} {coalesce(i1==i2,99)} 99
+test_expr expr-1.76 {i1=NULL, i2=NULL} {coalesce(not i1,99)} 99
+test_expr expr-1.77 {i1=NULL, i2=NULL} {coalesce(-i1,99)} 99
+test_expr expr-1.78 {i1=NULL, i2=NULL} {coalesce(i1 IS NULL AND i2=5,99)} 99
+test_expr expr-1.79 {i1=NULL, i2=NULL} {coalesce(i1 IS NULL OR i2=5,99)} 1
+test_expr expr-1.80 {i1=NULL, i2=NULL} {coalesce(i1=5 AND i2 IS NULL,99)} 99
+test_expr expr-1.81 {i1=NULL, i2=NULL} {coalesce(i1=5 OR i2 IS NULL,99)} 1
+test_expr expr-1.82 {i1=NULL, i2=3} {coalesce(min(i1,i2,1),99)} 99
+test_expr expr-1.83 {i1=NULL, i2=3} {coalesce(max(i1,i2,1),99)} 99
+test_expr expr-1.84 {i1=3, i2=NULL} {coalesce(min(i1,i2,1),99)} 99
+test_expr expr-1.85 {i1=3, i2=NULL} {coalesce(max(i1,i2,1),99)} 99
+test_expr expr-1.86 {i1=3, i2=8} {5 between i1 and i2} 1
+test_expr expr-1.87 {i1=3, i2=8} {5 not between i1 and i2} 0
+test_expr expr-1.88 {i1=3, i2=8} {55 between i1 and i2} 0
+test_expr expr-1.89 {i1=3, i2=8} {55 not between i1 and i2} 1
+test_expr expr-1.90 {i1=3, i2=NULL} {5 between i1 and i2} {{}}
+test_expr expr-1.91 {i1=3, i2=NULL} {5 not between i1 and i2} {{}}
+test_expr expr-1.92 {i1=3, i2=NULL} {2 between i1 and i2} 0
+test_expr expr-1.93 {i1=3, i2=NULL} {2 not between i1 and i2} 1
+test_expr expr-1.94 {i1=NULL, i2=8} {2 between i1 and i2} {{}}
+test_expr expr-1.95 {i1=NULL, i2=8} {2 not between i1 and i2} {{}}
+test_expr expr-1.94 {i1=NULL, i2=8} {55 between i1 and i2} 0
+test_expr expr-1.95 {i1=NULL, i2=8} {55 not between i1 and i2} 1
+test_expr expr-1.96 {i1=NULL, i2=3} {coalesce(i1<<i2,99)} 99
+test_expr expr-1.97 {i1=32, i2=NULL} {coalesce(i1>>i2,99)} 99
+test_expr expr-1.98 {i1=NULL, i2=NULL} {coalesce(i1|i2,99)} 99
+test_expr expr-1.99 {i1=32, i2=NULL} {coalesce(i1&i2,99)} 99
+test_expr expr-1.100 {i1=1, i2=''} {i1=i2} 0
+test_expr expr-1.101 {i1=0, i2=''} {i1=i2} 0
+
+# Check for proper handling of 64-bit integer values.
+#
+test_expr expr-1.102 {i1=40, i2=1} {i2<<i1} 1099511627776
+
+
+test_expr expr-2.1 {r1=1.23, r2=2.34} {r1+r2} 3.57
+test_expr expr-2.2 {r1=1.23, r2=2.34} {r1-r2} -1.11
+test_expr expr-2.3 {r1=1.23, r2=2.34} {r1*r2} 2.8782
+set tcl_precision 15
+test_expr expr-2.4 {r1=1.23, r2=2.34} {r1/r2} 0.525641025641026
+test_expr expr-2.5 {r1=1.23, r2=2.34} {r2/r1} 1.90243902439024
+test_expr expr-2.6 {r1=1.23, r2=2.34} {r2<r1} 0
+test_expr expr-2.7 {r1=1.23, r2=2.34} {r2<=r1} 0
+test_expr expr-2.8 {r1=1.23, r2=2.34} {r2>r1} 1
+test_expr expr-2.9 {r1=1.23, r2=2.34} {r2>=r1} 1
+test_expr expr-2.10 {r1=1.23, r2=2.34} {r2!=r1} 1
+test_expr expr-2.11 {r1=1.23, r2=2.34} {r2=r1} 0
+test_expr expr-2.12 {r1=1.23, r2=2.34} {r2<>r1} 1
+test_expr expr-2.13 {r1=1.23, r2=2.34} {r2==r1} 0
+test_expr expr-2.14 {r1=2.34, r2=2.34} {r2<r1} 0
+test_expr expr-2.15 {r1=2.34, r2=2.34} {r2<=r1} 1
+test_expr expr-2.16 {r1=2.34, r2=2.34} {r2>r1} 0
+test_expr expr-2.17 {r1=2.34, r2=2.34} {r2>=r1} 1
+test_expr expr-2.18 {r1=2.34, r2=2.34} {r2!=r1} 0
+test_expr expr-2.19 {r1=2.34, r2=2.34} {r2=r1} 1
+test_expr expr-2.20 {r1=2.34, r2=2.34} {r2<>r1} 0
+test_expr expr-2.21 {r1=2.34, r2=2.34} {r2==r1} 1
+test_expr expr-2.22 {r1=1.23, r2=2.34} {min(r1,r2,r1+r2,r1-r2)} {-1.11}
+test_expr expr-2.23 {r1=1.23, r2=2.34} {max(r1,r2,r1+r2,r1-r2)} {3.57}
+test_expr expr-2.24 {r1=25.0, r2=11.0} {r1%r2} 3.0
+test_expr expr-2.25 {r1=1.23, r2=NULL} {coalesce(r1+r2,99.0)} 99.0
+
+test_expr expr-3.1 {t1='abc', t2='xyz'} {t1<t2} 1
+test_expr expr-3.2 {t1='xyz', t2='abc'} {t1<t2} 0
+test_expr expr-3.3 {t1='abc', t2='abc'} {t1<t2} 0
+test_expr expr-3.4 {t1='abc', t2='xyz'} {t1<=t2} 1
+test_expr expr-3.5 {t1='xyz', t2='abc'} {t1<=t2} 0
+test_expr expr-3.6 {t1='abc', t2='abc'} {t1<=t2} 1
+test_expr expr-3.7 {t1='abc', t2='xyz'} {t1>t2} 0
+test_expr expr-3.8 {t1='xyz', t2='abc'} {t1>t2} 1
+test_expr expr-3.9 {t1='abc', t2='abc'} {t1>t2} 0
+test_expr expr-3.10 {t1='abc', t2='xyz'} {t1>=t2} 0
+test_expr expr-3.11 {t1='xyz', t2='abc'} {t1>=t2} 1
+test_expr expr-3.12 {t1='abc', t2='abc'} {t1>=t2} 1
+test_expr expr-3.13 {t1='abc', t2='xyz'} {t1=t2} 0
+test_expr expr-3.14 {t1='xyz', t2='abc'} {t1=t2} 0
+test_expr expr-3.15 {t1='abc', t2='abc'} {t1=t2} 1
+test_expr expr-3.16 {t1='abc', t2='xyz'} {t1==t2} 0
+test_expr expr-3.17 {t1='xyz', t2='abc'} {t1==t2} 0
+test_expr expr-3.18 {t1='abc', t2='abc'} {t1==t2} 1
+test_expr expr-3.19 {t1='abc', t2='xyz'} {t1<>t2} 1
+test_expr expr-3.20 {t1='xyz', t2='abc'} {t1<>t2} 1
+test_expr expr-3.21 {t1='abc', t2='abc'} {t1<>t2} 0
+test_expr expr-3.22 {t1='abc', t2='xyz'} {t1!=t2} 1
+test_expr expr-3.23 {t1='xyz', t2='abc'} {t1!=t2} 1
+test_expr expr-3.24 {t1='abc', t2='abc'} {t1!=t2} 0
+test_expr expr-3.25 {t1=NULL, t2='hi'} {t1 isnull} 1
+test_expr expr-3.25b {t1=NULL, t2='hi'} {t1 is null} 1
+test_expr expr-3.26 {t1=NULL, t2='hi'} {t2 isnull} 0
+test_expr expr-3.27 {t1=NULL, t2='hi'} {t1 notnull} 0
+test_expr expr-3.28 {t1=NULL, t2='hi'} {t2 notnull} 1
+test_expr expr-3.28b {t1=NULL, t2='hi'} {t2 is not null} 1
+test_expr expr-3.29 {t1='xyz', t2='abc'} {t1||t2} {xyzabc}
+test_expr expr-3.30 {t1=NULL, t2='abc'} {t1||t2} {{}}
+test_expr expr-3.31 {t1='xyz', t2=NULL} {t1||t2} {{}}
+test_expr expr-3.32 {t1='xyz', t2='abc'} {t1||' hi '||t2} {{xyz hi abc}}
+test_expr epxr-3.33 {t1='abc', t2=NULL} {coalesce(t1<t2,99)} 99
+test_expr epxr-3.34 {t1='abc', t2=NULL} {coalesce(t2<t1,99)} 99
+test_expr epxr-3.35 {t1='abc', t2=NULL} {coalesce(t1>t2,99)} 99
+test_expr epxr-3.36 {t1='abc', t2=NULL} {coalesce(t2>t1,99)} 99
+test_expr epxr-3.37 {t1='abc', t2=NULL} {coalesce(t1<=t2,99)} 99
+test_expr epxr-3.38 {t1='abc', t2=NULL} {coalesce(t2<=t1,99)} 99
+test_expr epxr-3.39 {t1='abc', t2=NULL} {coalesce(t1>=t2,99)} 99
+test_expr epxr-3.40 {t1='abc', t2=NULL} {coalesce(t2>=t1,99)} 99
+test_expr epxr-3.41 {t1='abc', t2=NULL} {coalesce(t1==t2,99)} 99
+test_expr epxr-3.42 {t1='abc', t2=NULL} {coalesce(t2==t1,99)} 99
+test_expr epxr-3.43 {t1='abc', t2=NULL} {coalesce(t1!=t2,99)} 99
+test_expr epxr-3.44 {t1='abc', t2=NULL} {coalesce(t2!=t1,99)} 99
+
+test_expr expr-4.1 {t1='abc', t2='Abc'} {t1<t2} 0
+test_expr expr-4.2 {t1='abc', t2='Abc'} {t1>t2} 1
+test_expr expr-4.3 {t1='abc', t2='Bbc'} {t1<t2} 0
+test_expr expr-4.4 {t1='abc', t2='Bbc'} {t1>t2} 1
+test_expr expr-4.5 {t1='0', t2='0.0'} {t1==t2} 0
+test_expr expr-4.6 {t1='0.000', t2='0.0'} {t1==t2} 0
+test_expr expr-4.7 {t1=' 0.000', t2=' 0.0'} {t1==t2} 0
+test_expr expr-4.8 {t1='0.0', t2='abc'} {t1<t2} 1
+test_expr expr-4.9 {t1='0.0', t2='abc'} {t1==t2} 0
+test_expr expr-4.10 {r1='0.0', r2='abc'} {r1>r2} 0
+test_expr expr-4.11 {r1='abc', r2='Abc'} {r1<r2} 0
+test_expr expr-4.12 {r1='abc', r2='Abc'} {r1>r2} 1
+test_expr expr-4.13 {r1='abc', r2='Bbc'} {r1<r2} 0
+test_expr expr-4.14 {r1='abc', r2='Bbc'} {r1>r2} 1
+test_expr expr-4.15 {r1='0', r2='0.0'} {r1==r2} 1
+test_expr expr-4.16 {r1='0.000', r2='0.0'} {r1==r2} 1
+test_expr expr-4.17 {r1=' 0.000', r2=' 0.0'} {r1==r2} 0
+test_expr expr-4.18 {r1='0.0', r2='abc'} {r1<r2} 1
+test_expr expr-4.19 {r1='0.0', r2='abc'} {r1==r2} 0
+test_expr expr-4.20 {r1='0.0', r2='abc'} {r1>r2} 0
+
+# CSL is true if LIKE is case sensitive and false if not.
+# NCSL is the opposite.  Use these variables as the expected result
+# for operations where case makes a difference.
+set CSL $sqlite_options(casesensitivelike)
+set NCSL [expr {!$CSL}]
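+
+# For example, with case-insensitive LIKE (the usual default), the
+# expression 'abc' LIKE 'ABC' evaluates to 1, so tests such as expr-5.2b
+# expect $NCSL; builds compiled with case-sensitive LIKE expect the
+# opposite.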
+
+test_expr expr-5.1 {t1='abc', t2='xyz'} {t1 LIKE t2} 0
+test_expr expr-5.2a {t1='abc', t2='abc'} {t1 LIKE t2} 1
+test_expr expr-5.2b {t1='abc', t2='ABC'} {t1 LIKE t2} $NCSL
+test_expr expr-5.3a {t1='abc', t2='a_c'} {t1 LIKE t2} 1
+test_expr expr-5.3b {t1='abc', t2='A_C'} {t1 LIKE t2} $NCSL
+test_expr expr-5.4 {t1='abc', t2='abc_'} {t1 LIKE t2} 0
+test_expr expr-5.5a {t1='abc', t2='a%c'} {t1 LIKE t2} 1
+test_expr expr-5.5b {t1='abc', t2='A%C'} {t1 LIKE t2} $NCSL
+test_expr expr-5.5c {t1='abdc', t2='a%c'} {t1 LIKE t2} 1
+test_expr expr-5.5d {t1='ac', t2='a%c'} {t1 LIKE t2} 1
+test_expr expr-5.5e {t1='ac', t2='A%C'} {t1 LIKE t2} $NCSL
+test_expr expr-5.6a {t1='abxyzzyc', t2='a%c'} {t1 LIKE t2} 1
+test_expr expr-5.6b {t1='abxyzzyc', t2='A%C'} {t1 LIKE t2} $NCSL
+test_expr expr-5.7a {t1='abxyzzy', t2='a%c'} {t1 LIKE t2} 0
+test_expr expr-5.7b {t1='abxyzzy', t2='A%C'} {t1 LIKE t2} 0
+test_expr expr-5.8a {t1='abxyzzycx', t2='a%c'} {t1 LIKE t2} 0
+test_expr expr-5.8b {t1='abxyzzycy', t2='a%cx'} {t1 LIKE t2} 0
+test_expr expr-5.8c {t1='abxyzzycx', t2='A%C'} {t1 LIKE t2} 0
+test_expr expr-5.8d {t1='abxyzzycy', t2='A%CX'} {t1 LIKE t2} 0
+test_expr expr-5.9a {t1='abc', t2='a%_c'} {t1 LIKE t2} 1
+test_expr expr-5.9b {t1='ac', t2='a%_c'} {t1 LIKE t2} 0
+test_expr expr-5.9c {t1='abc', t2='A%_C'} {t1 LIKE t2} $NCSL
+test_expr expr-5.9d {t1='ac', t2='A%_C'} {t1 LIKE t2} 0
+test_expr expr-5.10a {t1='abxyzzyc', t2='a%_c'} {t1 LIKE t2} 1
+test_expr expr-5.10b {t1='abxyzzyc', t2='A%_C'} {t1 LIKE t2} $NCSL
+test_expr expr-5.11 {t1='abc', t2='xyz'} {t1 NOT LIKE t2} 1
+test_expr expr-5.12a {t1='abc', t2='abc'} {t1 NOT LIKE t2} 0
+test_expr expr-5.12b {t1='abc', t2='ABC'} {t1 NOT LIKE t2} $CSL
+
+# The following tests only work on versions of TCL that support Unicode
+#
+if {"\u1234"!="u1234"} {
+ test_expr expr-5.13a "t1='a\u0080c', t2='a_c'" {t1 LIKE t2} 1
+ test_expr expr-5.13b "t1='a\u0080c', t2='A_C'" {t1 LIKE t2} $NCSL
+ test_expr expr-5.14a "t1='a\u07FFc', t2='a_c'" {t1 LIKE t2} 1
+ test_expr expr-5.14b "t1='a\u07FFc', t2='A_C'" {t1 LIKE t2} $NCSL
+ test_expr expr-5.15a "t1='a\u0800c', t2='a_c'" {t1 LIKE t2} 1
+ test_expr expr-5.15b "t1='a\u0800c', t2='A_C'" {t1 LIKE t2} $NCSL
+ test_expr expr-5.16a "t1='a\uFFFFc', t2='a_c'" {t1 LIKE t2} 1
+ test_expr expr-5.16b "t1='a\uFFFFc', t2='A_C'" {t1 LIKE t2} $NCSL
+ test_expr expr-5.17 "t1='a\u0080', t2='A__'" {t1 LIKE t2} 0
+ test_expr expr-5.18 "t1='a\u07FF', t2='A__'" {t1 LIKE t2} 0
+ test_expr expr-5.19 "t1='a\u0800', t2='A__'" {t1 LIKE t2} 0
+ test_expr expr-5.20 "t1='a\uFFFF', t2='A__'" {t1 LIKE t2} 0
+ test_expr expr-5.21a "t1='ax\uABCD', t2='a_\uABCD'" {t1 LIKE t2} 1
+ test_expr expr-5.21b "t1='ax\uABCD', t2='A_\uABCD'" {t1 LIKE t2} $NCSL
+ test_expr expr-5.22a "t1='ax\u1234', t2='a%\u1234'" {t1 LIKE t2} 1
+ test_expr expr-5.22b "t1='ax\u1234', t2='A%\u1234'" {t1 LIKE t2} $NCSL
+ test_expr expr-5.23a "t1='ax\uFEDC', t2='a_%'" {t1 LIKE t2} 1
+ test_expr expr-5.23b "t1='ax\uFEDC', t2='A_%'" {t1 LIKE t2} $NCSL
+ test_expr expr-5.24a "t1='ax\uFEDCy\uFEDC', t2='a%\uFEDC'" {t1 LIKE t2} 1
+ test_expr expr-5.24b "t1='ax\uFEDCy\uFEDC', t2='A%\uFEDC'" {t1 LIKE t2} $NCSL
+}
+
+test_expr expr-5.54 {t1='abc', t2=NULL} {t1 LIKE t2} {{}}
+test_expr expr-5.55 {t1='abc', t2=NULL} {t1 NOT LIKE t2} {{}}
+test_expr expr-5.56 {t1='abc', t2=NULL} {t2 LIKE t1} {{}}
+test_expr expr-5.57 {t1='abc', t2=NULL} {t2 NOT LIKE t1} {{}}
+
+# LIKE expressions that use ESCAPE characters.
+test_expr expr-5.58a {t1='abc', t2='a_c'} {t1 LIKE t2 ESCAPE '7'} 1
+test_expr expr-5.58b {t1='abc', t2='A_C'} {t1 LIKE t2 ESCAPE '7'} $NCSL
+test_expr expr-5.59a {t1='a_c', t2='a7_c'} {t1 LIKE t2 ESCAPE '7'} 1
+test_expr expr-5.59b {t1='a_c', t2='A7_C'} {t1 LIKE t2 ESCAPE '7'} $NCSL
+test_expr expr-5.60a {t1='abc', t2='a7_c'} {t1 LIKE t2 ESCAPE '7'} 0
+test_expr expr-5.60b {t1='abc', t2='A7_C'} {t1 LIKE t2 ESCAPE '7'} 0
+test_expr expr-5.61a {t1='a7Xc', t2='a7_c'} {t1 LIKE t2 ESCAPE '7'} 0
+test_expr expr-5.61b {t1='a7Xc', t2='A7_C'} {t1 LIKE t2 ESCAPE '7'} 0
+test_expr expr-5.62a {t1='abcde', t2='a%e'} {t1 LIKE t2 ESCAPE '7'} 1
+test_expr expr-5.62b {t1='abcde', t2='A%E'} {t1 LIKE t2 ESCAPE '7'} $NCSL
+test_expr expr-5.63a {t1='abcde', t2='a7%e'} {t1 LIKE t2 ESCAPE '7'} 0
+test_expr expr-5.63b {t1='abcde', t2='A7%E'} {t1 LIKE t2 ESCAPE '7'} 0
+test_expr expr-5.64a {t1='a7cde', t2='a7%e'} {t1 LIKE t2 ESCAPE '7'} 0
+test_expr expr-5.64b {t1='a7cde', t2='A7%E'} {t1 LIKE t2 ESCAPE '7'} 0
+test_expr expr-5.65a {t1='a7cde', t2='a77%e'} {t1 LIKE t2 ESCAPE '7'} 1
+test_expr expr-5.65b {t1='a7cde', t2='A77%E'} {t1 LIKE t2 ESCAPE '7'} $NCSL
+test_expr expr-5.66a {t1='abc7', t2='a%77'} {t1 LIKE t2 ESCAPE '7'} 1
+test_expr expr-5.66b {t1='abc7', t2='A%77'} {t1 LIKE t2 ESCAPE '7'} $NCSL
+test_expr expr-5.67a {t1='abc_', t2='a%7_'} {t1 LIKE t2 ESCAPE '7'} 1
+test_expr expr-5.67b {t1='abc_', t2='A%7_'} {t1 LIKE t2 ESCAPE '7'} $NCSL
+test_expr expr-5.68a {t1='abc7', t2='a%7_'} {t1 LIKE t2 ESCAPE '7'} 0
+test_expr expr-5.68b {t1='abc7', t2='A%7_'} {t1 LIKE t2 ESCAPE '7'} 0
+
+# These are the same test as the block above, but using a multi-byte
+# character as the escape character.
+if {"\u1234"!="u1234"} {
+ test_expr expr-5.69a "t1='abc', t2='a_c'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 1
+ test_expr expr-5.69b "t1='abc', t2='A_C'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" $NCSL
+ test_expr expr-5.70a "t1='a_c', t2='a\u1234_c'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 1
+ test_expr expr-5.70b "t1='a_c', t2='A\u1234_C'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" $NCSL
+ test_expr expr-5.71a "t1='abc', t2='a\u1234_c'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+ test_expr expr-5.71b "t1='abc', t2='A\u1234_C'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+ test_expr expr-5.72a "t1='a\u1234Xc', t2='a\u1234_c'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+ test_expr expr-5.72b "t1='a\u1234Xc', t2='A\u1234_C'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+ test_expr expr-5.73a "t1='abcde', t2='a%e'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 1
+ test_expr expr-5.73b "t1='abcde', t2='A%E'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" $NCSL
+ test_expr expr-5.74a "t1='abcde', t2='a\u1234%e'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+ test_expr expr-5.74b "t1='abcde', t2='A\u1234%E'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+ test_expr expr-5.75a "t1='a\u1234cde', t2='a\u1234%e'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+ test_expr expr-5.75b "t1='a\u1234cde', t2='A\u1234%E'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+ test_expr expr-5.76a "t1='a\u1234cde', t2='a\u1234\u1234%e'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 1
+ test_expr expr-5.76b "t1='a\u1234cde', t2='A\u1234\u1234%E'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" $NCSL
+ test_expr expr-5.77a "t1='abc\u1234', t2='a%\u1234\u1234'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 1
+ test_expr expr-5.77b "t1='abc\u1234', t2='A%\u1234\u1234'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" $NCSL
+ test_expr expr-5.78a "t1='abc_', t2='a%\u1234_'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 1
+ test_expr expr-5.78b "t1='abc_', t2='A%\u1234_'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" $NCSL
+ test_expr expr-5.79a "t1='abc\u1234', t2='a%\u1234_'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+ test_expr expr-5.79b "t1='abc\u1234', t2='A%\u1234_'" \
+ "t1 LIKE t2 ESCAPE '\u1234'" 0
+}
+
+test_expr expr-6.1 {t1='abc', t2='xyz'} {t1 GLOB t2} 0
+test_expr expr-6.2 {t1='abc', t2='ABC'} {t1 GLOB t2} 0
+test_expr expr-6.3 {t1='abc', t2='A?C'} {t1 GLOB t2} 0
+test_expr expr-6.4 {t1='abc', t2='a?c'} {t1 GLOB t2} 1
+test_expr expr-6.5 {t1='abc', t2='abc?'} {t1 GLOB t2} 0
+test_expr expr-6.6 {t1='abc', t2='A*C'} {t1 GLOB t2} 0
+test_expr expr-6.7 {t1='abc', t2='a*c'} {t1 GLOB t2} 1
+test_expr expr-6.8 {t1='abxyzzyc', t2='a*c'} {t1 GLOB t2} 1
+test_expr expr-6.9 {t1='abxyzzy', t2='a*c'} {t1 GLOB t2} 0
+test_expr expr-6.10 {t1='abxyzzycx', t2='a*c'} {t1 GLOB t2} 0
+test_expr expr-6.11 {t1='abc', t2='xyz'} {t1 NOT GLOB t2} 1
+test_expr expr-6.12 {t1='abc', t2='abc'} {t1 NOT GLOB t2} 0
+test_expr expr-6.13 {t1='abc', t2='a[bx]c'} {t1 GLOB t2} 1
+test_expr expr-6.14 {t1='abc', t2='a[cx]c'} {t1 GLOB t2} 0
+test_expr expr-6.15 {t1='abc', t2='a[a-d]c'} {t1 GLOB t2} 1
+test_expr expr-6.16 {t1='abc', t2='a[^a-d]c'} {t1 GLOB t2} 0
+test_expr expr-6.17 {t1='abc', t2='a[A-Dc]c'} {t1 GLOB t2} 0
+test_expr expr-6.18 {t1='abc', t2='a[^A-Dc]c'} {t1 GLOB t2} 1
+test_expr expr-6.19 {t1='abc', t2='a[]b]c'} {t1 GLOB t2} 1
+test_expr expr-6.20 {t1='abc', t2='a[^]b]c'} {t1 GLOB t2} 0
+test_expr expr-6.21a {t1='abcdefg', t2='a*[de]g'} {t1 GLOB t2} 0
+test_expr expr-6.21b {t1='abcdefg', t2='a*[df]g'} {t1 GLOB t2} 1
+test_expr expr-6.21c {t1='abcdefg', t2='a*[d-h]g'} {t1 GLOB t2} 1
+test_expr expr-6.21d {t1='abcdefg', t2='a*[b-e]g'} {t1 GLOB t2} 0
+test_expr expr-6.22a {t1='abcdefg', t2='a*[^de]g'} {t1 GLOB t2} 1
+test_expr expr-6.22b {t1='abcdefg', t2='a*[^def]g'} {t1 GLOB t2} 0
+test_expr expr-6.23 {t1='abcdefg', t2='a*?g'} {t1 GLOB t2} 1
+test_expr expr-6.24 {t1='ac', t2='a*c'} {t1 GLOB t2} 1
+test_expr expr-6.25 {t1='ac', t2='a*?c'} {t1 GLOB t2} 0
+test_expr expr-6.26 {t1='a*c', t2='a[*]c'} {t1 GLOB t2} 1
+test_expr expr-6.27 {t1='a?c', t2='a[?]c'} {t1 GLOB t2} 1
+test_expr expr-6.28 {t1='a[c', t2='a[[]c'} {t1 GLOB t2} 1
+
+
+# These tests only work on versions of TCL that support Unicode
+#
+if {"\u1234"!="u1234"} {
+ test_expr expr-6.26 "t1='a\u0080c', t2='a?c'" {t1 GLOB t2} 1
+ test_expr expr-6.27 "t1='a\u07ffc', t2='a?c'" {t1 GLOB t2} 1
+ test_expr expr-6.28 "t1='a\u0800c', t2='a?c'" {t1 GLOB t2} 1
+ test_expr expr-6.29 "t1='a\uffffc', t2='a?c'" {t1 GLOB t2} 1
+ test_expr expr-6.30 "t1='a\u1234', t2='a?'" {t1 GLOB t2} 1
+ test_expr expr-6.31 "t1='a\u1234', t2='a??'" {t1 GLOB t2} 0
+ test_expr expr-6.32 "t1='ax\u1234', t2='a?\u1234'" {t1 GLOB t2} 1
+ test_expr expr-6.33 "t1='ax\u1234', t2='a*\u1234'" {t1 GLOB t2} 1
+ test_expr expr-6.34 "t1='ax\u1234y\u1234', t2='a*\u1234'" {t1 GLOB t2} 1
+ test_expr expr-6.35 "t1='a\u1234b', t2='a\[x\u1234y\]b'" {t1 GLOB t2} 1
+ test_expr expr-6.36 "t1='a\u1234b', t2='a\[\u1233-\u1235\]b'" {t1 GLOB t2} 1
+ test_expr expr-6.37 "t1='a\u1234b', t2='a\[\u1234-\u124f\]b'" {t1 GLOB t2} 1
+ test_expr expr-6.38 "t1='a\u1234b', t2='a\[\u1235-\u124f\]b'" {t1 GLOB t2} 0
+ test_expr expr-6.39 "t1='a\u1234b', t2='a\[a-\u1235\]b'" {t1 GLOB t2} 1
+ test_expr expr-6.40 "t1='a\u1234b', t2='a\[a-\u1234\]b'" {t1 GLOB t2} 1
+ test_expr expr-6.41 "t1='a\u1234b', t2='a\[a-\u1233\]b'" {t1 GLOB t2} 0
+}
+
+test_expr expr-6.51 {t1='ABC', t2='xyz'} {t1 GLOB t2} 0
+test_expr expr-6.52 {t1='ABC', t2='abc'} {t1 GLOB t2} 0
+test_expr expr-6.53 {t1='ABC', t2='a?c'} {t1 GLOB t2} 0
+test_expr expr-6.54 {t1='ABC', t2='A?C'} {t1 GLOB t2} 1
+test_expr expr-6.55 {t1='ABC', t2='abc?'} {t1 GLOB t2} 0
+test_expr expr-6.56 {t1='ABC', t2='a*c'} {t1 GLOB t2} 0
+test_expr expr-6.57 {t1='ABC', t2='A*C'} {t1 GLOB t2} 1
+test_expr expr-6.58 {t1='ABxyzzyC', t2='A*C'} {t1 GLOB t2} 1
+test_expr expr-6.59 {t1='ABxyzzy', t2='A*C'} {t1 GLOB t2} 0
+test_expr expr-6.60 {t1='ABxyzzyCx', t2='A*C'} {t1 GLOB t2} 0
+test_expr expr-6.61 {t1='ABC', t2='xyz'} {t1 NOT GLOB t2} 1
+test_expr expr-6.62 {t1='ABC', t2='ABC'} {t1 NOT GLOB t2} 0
+test_expr expr-6.63 {t1='ABC', t2='A[Bx]C'} {t1 GLOB t2} 1
+test_expr expr-6.64 {t1='ABC', t2='A[Cx]C'} {t1 GLOB t2} 0
+test_expr expr-6.65 {t1='ABC', t2='A[A-D]C'} {t1 GLOB t2} 1
+test_expr expr-6.66 {t1='ABC', t2='A[^A-D]C'} {t1 GLOB t2} 0
+test_expr expr-6.67 {t1='ABC', t2='A[a-dC]C'} {t1 GLOB t2} 0
+test_expr expr-6.68 {t1='ABC', t2='A[^a-dC]C'} {t1 GLOB t2} 1
+test_expr expr-6.69a {t1='ABC', t2='A[]B]C'} {t1 GLOB t2} 1
+test_expr expr-6.69b {t1='A]C', t2='A[]B]C'} {t1 GLOB t2} 1
+test_expr expr-6.70a {t1='ABC', t2='A[^]B]C'} {t1 GLOB t2} 0
+test_expr expr-6.70b {t1='AxC', t2='A[^]B]C'} {t1 GLOB t2} 1
+test_expr expr-6.70c {t1='A]C', t2='A[^]B]C'} {t1 GLOB t2} 0
+test_expr expr-6.71 {t1='ABCDEFG', t2='A*[DE]G'} {t1 GLOB t2} 0
+test_expr expr-6.72 {t1='ABCDEFG', t2='A*[^DE]G'} {t1 GLOB t2} 1
+test_expr expr-6.73 {t1='ABCDEFG', t2='A*?G'} {t1 GLOB t2} 1
+test_expr expr-6.74 {t1='AC', t2='A*C'} {t1 GLOB t2} 1
+test_expr expr-6.75 {t1='AC', t2='A*?C'} {t1 GLOB t2} 0
+
+test_expr expr-6.63 {t1=NULL, t2='a*?c'} {t1 GLOB t2} {{}}
+test_expr expr-6.64 {t1='ac', t2=NULL} {t1 GLOB t2} {{}}
+test_expr expr-6.65 {t1=NULL, t2='a*?c'} {t1 NOT GLOB t2} {{}}
+test_expr expr-6.66 {t1='ac', t2=NULL} {t1 NOT GLOB t2} {{}}
+
+# Check that the affinity of a CAST expression is calculated correctly.
+ifcapable cast {
+ test_expr expr-6.67 {t1='01', t2=1} {t1 = t2} 0
+ test_expr expr-6.68 {t1='1', t2=1} {t1 = t2} 1
+ test_expr expr-6.69 {t1='01', t2=1} {CAST(t1 AS INTEGER) = t2} 1
+}
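+
+# In expr-6.67 the comparison is carried out as text, so '01' and '1' are
+# not equal.  The CAST in expr-6.69 gives the left operand INTEGER affinity,
+# so the comparison becomes numeric and 1 = 1 succeeds.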
+
+test_expr expr-case.1 {i1=1, i2=2} \
+ {CASE WHEN i1 = i2 THEN 'eq' ELSE 'ne' END} ne
+test_expr expr-case.2 {i1=2, i2=2} \
+ {CASE WHEN i1 = i2 THEN 'eq' ELSE 'ne' END} eq
+test_expr expr-case.3 {i1=NULL, i2=2} \
+ {CASE WHEN i1 = i2 THEN 'eq' ELSE 'ne' END} ne
+test_expr expr-case.4 {i1=2, i2=NULL} \
+ {CASE WHEN i1 = i2 THEN 'eq' ELSE 'ne' END} ne
+test_expr expr-case.5 {i1=2} \
+ {CASE i1 WHEN 1 THEN 'one' WHEN 2 THEN 'two' ELSE 'error' END} two
+test_expr expr-case.6 {i1=1} \
+ {CASE i1 WHEN 1 THEN 'one' WHEN NULL THEN 'two' ELSE 'error' END} one
+test_expr expr-case.7 {i1=2} \
+ {CASE i1 WHEN 1 THEN 'one' WHEN NULL THEN 'two' ELSE 'error' END} error
+test_expr expr-case.8 {i1=3} \
+ {CASE i1 WHEN 1 THEN 'one' WHEN NULL THEN 'two' ELSE 'error' END} error
+test_expr expr-case.9 {i1=3} \
+ {CASE i1 WHEN 1 THEN 'one' WHEN 2 THEN 'two' ELSE 'error' END} error
+test_expr expr-case.10 {i1=3} \
+ {CASE i1 WHEN 1 THEN 'one' WHEN 2 THEN 'two' END} {{}}
+test_expr expr-case.11 {i1=null} \
+ {CASE i1 WHEN 1 THEN 'one' WHEN 2 THEN 'two' ELSE 3 END} 3
+test_expr expr-case.12 {i1=1} \
+ {CASE i1 WHEN 1 THEN null WHEN 2 THEN 'two' ELSE 3 END} {{}}
+test_expr expr-case.13 {i1=7} \
+ { CASE WHEN i1 < 5 THEN 'low'
+ WHEN i1 < 10 THEN 'medium'
+ WHEN i1 < 15 THEN 'high' ELSE 'error' END} medium
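+
+# Note: a bare "WHEN NULL" arm can never be selected in the operand form of
+# CASE, because comparing the operand against NULL yields NULL rather than
+# true; that is why expr-case.7 and expr-case.8 above fall through to ELSE.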
+
+
+# The sqliteExprIfFalse and sqliteExprIfTrue routines are only
+# executed as part of a WHERE clause. Create a table suitable
+# for testing these functions.
+#
+execsql {DROP TABLE test1}
+execsql {CREATE TABLE test1(a int, b int);}
+for {set i 1} {$i<=20} {incr i} {
+ execsql "INSERT INTO test1 VALUES($i,[expr {int(pow(2,$i))}])"
+}
+execsql "INSERT INTO test1 VALUES(NULL,0)"
+do_test expr-7.1 {
+ execsql {SELECT * FROM test1 ORDER BY a}
+} {{} 0 1 2 2 4 3 8 4 16 5 32 6 64 7 128 8 256 9 512 10 1024 11 2048 12 4096 13 8192 14 16384 15 32768 16 65536 17 131072 18 262144 19 524288 20 1048576}
+
+proc test_expr2 {name expr result} {
+ do_test $name [format {
+ execsql {SELECT a FROM test1 WHERE %s ORDER BY a}
+ } $expr] $result
+}
+
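+# Since b is 2**a for each row, the pattern tests below simply read off the
+# decimal digits of b: for example expr-7.12 {b GLOB '1*4'} matches b=1024
+# (a=10) and b=16384 (a=14), and expr-7.10 {b LIKE '_4'} matches only b=64
+# (a=6).
+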
+test_expr2 expr-7.2 {a<10 AND a>8} {9}
+test_expr2 expr-7.3 {a<=10 AND a>=8} {8 9 10}
+test_expr2 expr-7.4 {a>=8 AND a<=10} {8 9 10}
+test_expr2 expr-7.5 {a>=20 OR a<=1} {1 20}
+test_expr2 expr-7.6 {b!=4 AND a<=3} {1 3}
+test_expr2 expr-7.7 {b==8 OR b==16 OR b==32} {3 4 5}
+test_expr2 expr-7.8 {NOT b<>8 OR b==1024} {3 10}
+test_expr2 expr-7.9 {b LIKE '10%'} {10 20}
+test_expr2 expr-7.10 {b LIKE '_4'} {6}
+test_expr2 expr-7.11 {a GLOB '1?'} {10 11 12 13 14 15 16 17 18 19}
+test_expr2 expr-7.12 {b GLOB '1*4'} {10 14}
+test_expr2 expr-7.13 {b GLOB '*1[456]'} {4}
+test_expr2 expr-7.14 {a ISNULL} {{}}
+test_expr2 expr-7.15 {a NOTNULL AND a<3} {1 2}
+test_expr2 expr-7.16 {a AND a<3} {1 2}
+test_expr2 expr-7.17 {NOT a} {}
+test_expr2 expr-7.18 {a==11 OR (b>1000 AND b<2000)} {10 11}
+test_expr2 expr-7.19 {a<=1 OR a>=20} {1 20}
+test_expr2 expr-7.20 {a<1 OR a>20} {}
+test_expr2 expr-7.21 {a>19 OR a<1} {20}
+test_expr2 expr-7.22 {a!=1 OR a=100} \
+ {2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20}
+test_expr2 expr-7.23 {(a notnull AND a<4) OR a==8} {1 2 3 8}
+test_expr2 expr-7.24 {a LIKE '2_' OR a==8} {8 20}
+test_expr2 expr-7.25 {a GLOB '2?' OR a==8} {8 20}
+test_expr2 expr-7.26 {a isnull OR a=8} {{} 8}
+test_expr2 expr-7.27 {a notnull OR a=8} \
+ {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20}
+test_expr2 expr-7.28 {a<0 OR b=0} {{}}
+test_expr2 expr-7.29 {b=0 OR a<0} {{}}
+test_expr2 expr-7.30 {a<0 AND b=0} {}
+test_expr2 expr-7.31 {b=0 AND a<0} {}
+test_expr2 expr-7.32 {a IS NULL AND (a<0 OR b=0)} {{}}
+test_expr2 expr-7.33 {a IS NULL AND (b=0 OR a<0)} {{}}
+test_expr2 expr-7.34 {a IS NULL AND (a<0 AND b=0)} {}
+test_expr2 expr-7.35 {a IS NULL AND (b=0 AND a<0)} {}
+test_expr2 expr-7.32 {(a<0 OR b=0) AND a IS NULL} {{}}
+test_expr2 expr-7.33 {(b=0 OR a<0) AND a IS NULL} {{}}
+test_expr2 expr-7.34 {(a<0 AND b=0) AND a IS NULL} {}
+test_expr2 expr-7.35 {(b=0 AND a<0) AND a IS NULL} {}
+test_expr2 expr-7.36 {a<2 OR (a<0 OR b=0)} {{} 1}
+test_expr2 expr-7.37 {a<2 OR (b=0 OR a<0)} {{} 1}
+test_expr2 expr-7.38 {a<2 OR (a<0 AND b=0)} {1}
+test_expr2 expr-7.39 {a<2 OR (b=0 AND a<0)} {1}
+test_expr2 expr-7.40 {((a<2 OR a IS NULL) AND b<3) OR b>1e10} {{} 1}
+test_expr2 expr-7.41 {a BETWEEN -1 AND 1} {1}
+test_expr2 expr-7.42 {a NOT BETWEEN 2 AND 100} {1}
+test_expr2 expr-7.43 {(b+1234)||'this is a string that is at least 32 characters long' BETWEEN 1 AND 2} {}
+test_expr2 expr-7.44 {123||'xabcdefghijklmnopqrstuvwyxz01234567890'||a BETWEEN '123a' AND '123b'} {}
+test_expr2 expr-7.45 {((123||'xabcdefghijklmnopqrstuvwyxz01234567890'||a) BETWEEN '123a' AND '123b')<0} {}
+test_expr2 expr-7.46 {((123||'xabcdefghijklmnopqrstuvwyxz01234567890'||a) BETWEEN '123a' AND '123z')>0} {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20}
+
+test_expr2 expr-7.50 {((a between 1 and 2 OR 0) AND 1) OR 0} {1 2}
+test_expr2 expr-7.51 {((a not between 3 and 100 OR 0) AND 1) OR 0} {1 2}
+
+ifcapable subquery {
+ test_expr2 expr-7.52 {((a in (1,2) OR 0) AND 1) OR 0} {1 2}
+ test_expr2 expr-7.53 \
+ {((a not in (3,4,5,6,7,8,9,10) OR 0) AND a<11) OR 0} {1 2}
+}
+test_expr2 expr-7.54 {((a>0 OR 0) AND a<3) OR 0} {1 2}
+ifcapable subquery {
+ test_expr2 expr-7.55 {((a in (1,2) OR 0) IS NULL AND 1) OR 0} {{}}
+ test_expr2 expr-7.56 \
+ {((a not in (3,4,5,6,7,8,9,10) IS NULL OR 0) AND 1) OR 0} {{}}
+}
+test_expr2 expr-7.57 {((a>0 IS NULL OR 0) AND 1) OR 0} {{}}
+
+test_expr2 expr-7.58 {(a||'')<='1'} {1}
+
+test_expr2 expr-7.59 {LIKE('10%',b)} {10 20}
+test_expr2 expr-7.60 {LIKE('_4',b)} {6}
+test_expr2 expr-7.61 {GLOB('1?',a)} {10 11 12 13 14 15 16 17 18 19}
+test_expr2 expr-7.62 {GLOB('1*4',b)} {10 14}
+test_expr2 expr-7.63 {GLOB('*1[456]',b)} {4}
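+
+# Note: in the function-call form the pattern comes first, so LIKE('10%',b)
+# is equivalent to b LIKE '10%' and GLOB('1?',a) to a GLOB '1?', which is
+# why expr-7.59 through expr-7.63 repeat the results of expr-7.9 through
+# expr-7.13 above.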
+
+# Test the CURRENT_TIME, CURRENT_DATE, and CURRENT_TIMESTAMP expressions.
+#
+set sqlite_current_time 1157124849
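+
+# Illustrative cross-check (editorial sketch: the test name expr-8.0 is made
+# up here and the standard Tcl [clock] command is assumed to be available in
+# the harness).  The fixed epoch value above is 2006-09-01 15:34:09 UTC,
+# which is the time the three tests below expect.
+do_test expr-8.0 {
+  clock format 1157124849 -format {%Y-%m-%d %H:%M:%S} -gmt 1
+} {2006-09-01 15:34:09}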
+do_test expr-8.1 {
+ execsql {SELECT CURRENT_TIME}
+} {15:34:09}
+do_test expr-8.2 {
+ execsql {SELECT CURRENT_DATE}
+} {2006-09-01}
+do_test expr-8.3 {
+ execsql {SELECT CURRENT_TIMESTAMP}
+} {{2006-09-01 15:34:09}}
+ifcapable datetime {
+ do_test expr-8.4 {
+ execsql {SELECT CURRENT_TIME==time('now');}
+ } 1
+ do_test expr-8.5 {
+ execsql {SELECT CURRENT_DATE==date('now');}
+ } 1
+ do_test expr-8.6 {
+ execsql {SELECT CURRENT_TIMESTAMP==datetime('now');}
+ } 1
+}
+set sqlite_current_time 0
+
+do_test expr-9.1 {
+ execsql {SELECT round(-('-'||'123'))}
+} 123.0
+
+# Test an error message that can be generated by the LIKE expression
+do_test expr-10.1 {
+ catchsql {SELECT 'abc' LIKE 'abc' ESCAPE ''}
+} {1 {ESCAPE expression must be a single character}}
+do_test expr-10.2 {
+ catchsql {SELECT 'abc' LIKE 'abc' ESCAPE 'ab'}
+} {1 {ESCAPE expression must be a single character}}
+
+# If we specify an integer constant that is bigger than the largest
+# possible integer, code the integer as a real number.
+#
+do_test expr-11.1 {
+ execsql {SELECT typeof(9223372036854775807)}
+} {integer}
+do_test expr-11.2 {
+ execsql {SELECT typeof(9223372036854775808)}
+} {real}
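+
+# Note: 9223372036854775807 is 2^63-1, the largest 64-bit signed integer, so
+# the literal one greater cannot be stored as an integer and is coded as a
+# real value instead, as the two tests above verify.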
+
+# These two statements used to leak memory (because of missing %destructor
+# directives in parse.y).
+do_test expr-12.1 {
+ catchsql {
+ SELECT (CASE a>4 THEN 1 ELSE 0 END) FROM test1;
+ }
+} {1 {near "THEN": syntax error}}
+do_test expr-12.2 {
+ catchsql {
+ SELECT (CASE WHEN a>4 THEN 1 ELSE 0) FROM test1;
+ }
+} {1 {near ")": syntax error}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/fkey1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fkey1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,77 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.
+#
+# This file implements tests for foreign keys.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable {!foreignkey} {
+ finish_test
+ return
+}
+
+# Create a table and some data to work with.
+#
+do_test fkey1-1.0 {
+ execsql {
+ CREATE TABLE t1(
+ a INTEGER PRIMARY KEY,
+ b INTEGER
+ REFERENCES t1 ON DELETE CASCADE
+ REFERENCES t2,
+ c TEXT,
+ FOREIGN KEY (b,c) REFERENCES t2(x,y) ON UPDATE CASCADE
+ );
+ }
+} {}
+do_test fkey1-1.1 {
+ execsql {
+ CREATE TABLE t2(
+ x INTEGER PRIMARY KEY,
+ y TEXT
+ );
+ }
+} {}
+do_test fkey1-1.2 {
+ execsql {
+ CREATE TABLE t3(
+ a INTEGER REFERENCES t2,
+ b INTEGER REFERENCES t1,
+ FOREIGN KEY (a,b) REFERENCES t2(x,y)
+ );
+ }
+} {}
+
+do_test fkey1-2.1 {
+ execsql {
+ CREATE TABLE t4(a integer primary key);
+ CREATE TABLE t5(x references t4);
+ CREATE TABLE t6(x references t4);
+ CREATE TABLE t7(x references t4);
+ CREATE TABLE t8(x references t4);
+ CREATE TABLE t9(x references t4);
+ CREATE TABLE t10(x references t4);
+ DROP TABLE t7;
+ DROP TABLE t9;
+ DROP TABLE t5;
+ DROP TABLE t8;
+ DROP TABLE t6;
+ DROP TABLE t10;
+ }
+} {}
+
+
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/format4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/format4.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,60 @@
+# 2005 December 29
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.
+#
+# This file implements tests to verify that the new serial_type
+# values of 8 (integer 0) and 9 (integer 1) work correctly.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+db eval {PRAGMA legacy_file_format=OFF}
+
+# The size of the database depends on whether or not autovacuum
+# is enabled.
+#
+if {[db one {PRAGMA auto_vacuum}]} {
+ set small 3072
+ set large 5120
+} else {
+ set small 2048
+ set large 4096
+}
+
+do_test format4-1.1 {
+ execsql {
+ CREATE TABLE t1(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9);
+ INSERT INTO t1 VALUES(0,0,0,0,0,0,0,0,0,0);
+ INSERT INTO t1 SELECT * FROM t1;
+ INSERT INTO t1 SELECT * FROM t1;
+ INSERT INTO t1 SELECT * FROM t1;
+ INSERT INTO t1 SELECT * FROM t1;
+ INSERT INTO t1 SELECT * FROM t1;
+ INSERT INTO t1 SELECT * FROM t1;
+ }
+ file size test.db
+} $small
+do_test format4-1.2 {
+ execsql {
+ UPDATE t1 SET x0=1, x1=1, x2=1, x3=1, x4=1, x5=1, x6=1, x7=1, x8=1, x9=1
+ }
+ file size test.db
+} $small
+do_test format4-1.3 {
+ execsql {
+ UPDATE t1 SET x0=2, x1=2, x2=2, x3=2, x4=2, x5=2, x6=2, x7=2, x8=2, x9=2
+ }
+ file size test.db
+} $large
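+
+# Note: serial types 8 and 9 encode the constant values 0 and 1 entirely in
+# the record header, with no payload bytes, so the all-zero and all-one rows
+# above fit in the smaller file size; the value 2 has no such shorthand and
+# needs an integer payload per column, which is why format4-1.3 expects the
+# larger size.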
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/fts1a.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts1a.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,186 @@
+# 2006 September 9
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library.  The
+# focus of this script is testing the FTS1 module.
+#
+# $Id: fts1a.test,v 1.4 2006/09/28 19:43:32 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS1 is not defined, omit this file.
+ifcapable !fts1 {
+ finish_test
+ return
+}
+
+# Construct a full-text search table containing five keywords:
+# one, two, three, four, and five, in various combinations. The
+# rowid for each will be a bitmask for the elements it contains.
+#
+db eval {
+ CREATE VIRTUAL TABLE t1 USING fts1(content);
+ INSERT INTO t1(content) VALUES('one');
+ INSERT INTO t1(content) VALUES('two');
+ INSERT INTO t1(content) VALUES('one two');
+ INSERT INTO t1(content) VALUES('three');
+ INSERT INTO t1(content) VALUES('one three');
+ INSERT INTO t1(content) VALUES('two three');
+ INSERT INTO t1(content) VALUES('one two three');
+ INSERT INTO t1(content) VALUES('four');
+ INSERT INTO t1(content) VALUES('one four');
+ INSERT INTO t1(content) VALUES('two four');
+ INSERT INTO t1(content) VALUES('one two four');
+ INSERT INTO t1(content) VALUES('three four');
+ INSERT INTO t1(content) VALUES('one three four');
+ INSERT INTO t1(content) VALUES('two three four');
+ INSERT INTO t1(content) VALUES('one two three four');
+ INSERT INTO t1(content) VALUES('five');
+ INSERT INTO t1(content) VALUES('one five');
+ INSERT INTO t1(content) VALUES('two five');
+ INSERT INTO t1(content) VALUES('one two five');
+ INSERT INTO t1(content) VALUES('three five');
+ INSERT INTO t1(content) VALUES('one three five');
+ INSERT INTO t1(content) VALUES('two three five');
+ INSERT INTO t1(content) VALUES('one two three five');
+ INSERT INTO t1(content) VALUES('four five');
+ INSERT INTO t1(content) VALUES('one four five');
+ INSERT INTO t1(content) VALUES('two four five');
+ INSERT INTO t1(content) VALUES('one two four five');
+ INSERT INTO t1(content) VALUES('three four five');
+ INSERT INTO t1(content) VALUES('one three four five');
+ INSERT INTO t1(content) VALUES('two three four five');
+ INSERT INTO t1(content) VALUES('one two three four five');
+}
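+
+# Illustrative sketch (editorial; the helper name fts1a_rows_with is made up
+# and is not used by the tests below): because the rowid of each row is a
+# bitmask of the words it contains (one=1, two=2, three=4, four=8, five=16),
+# the expected result of an AND query is simply every rowid whose mask
+# covers the queried bits.
+proc fts1a_rows_with {bits} {
+  set res {}
+  for {set i 1} {$i<=31} {incr i} {
+    if {($i & $bits) == $bits} {lappend res $i}
+  }
+  return $res
+}
+# For example [fts1a_rows_with 3] returns 3 7 11 15 19 23 27 31, the rows
+# expected for the query 'one two' in fts1a-1.2 below.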
+
+do_test fts1a-1.1 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts1a-1.2 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two'}
+} {3 7 11 15 19 23 27 31}
+do_test fts1a-1.3 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'two one'}
+} {3 7 11 15 19 23 27 31}
+do_test fts1a-1.4 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two three'}
+} {7 15 23 31}
+do_test fts1a-1.5 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one three two'}
+} {7 15 23 31}
+do_test fts1a-1.6 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'two three one'}
+} {7 15 23 31}
+do_test fts1a-1.7 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'two one three'}
+} {7 15 23 31}
+do_test fts1a-1.8 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'three one two'}
+} {7 15 23 31}
+do_test fts1a-1.9 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'three two one'}
+} {7 15 23 31}
+do_test fts1a-1.10 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two THREE'}
+} {7 15 23 31}
+do_test fts1a-1.11 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH ' ONE Two three '}
+} {7 15 23 31}
+
+do_test fts1a-2.1 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one"'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts1a-2.2 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one two"'}
+} {3 7 11 15 19 23 27 31}
+do_test fts1a-2.3 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"two one"'}
+} {}
+do_test fts1a-2.4 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one two three"'}
+} {7 15 23 31}
+do_test fts1a-2.5 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three two"'}
+} {}
+do_test fts1a-2.6 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one two three four"'}
+} {15 31}
+do_test fts1a-2.7 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three two four"'}
+} {}
+do_test fts1a-2.8 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three five"'}
+} {21}
+do_test fts1a-2.9 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three" five'}
+} {21 29}
+do_test fts1a-2.10 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'five "one three"'}
+} {21 29}
+do_test fts1a-2.11 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'five "one three" four'}
+} {29}
+do_test fts1a-2.12 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'five four "one three"'}
+} {29}
+do_test fts1a-2.13 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three" four five'}
+} {29}
+
+do_test fts1a-3.1 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts1a-3.2 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one -two'}
+} {1 5 9 13 17 21 25 29}
+do_test fts1a-3.3 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '-two one'}
+} {1 5 9 13 17 21 25 29}
+
+do_test fts1a-4.1 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one OR two'}
+} {1 2 3 5 6 7 9 10 11 13 14 15 17 18 19 21 22 23 25 26 27 29 30 31}
+do_test fts1a-4.2 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH '"one two" OR three'}
+} {3 4 5 6 7 11 12 13 14 15 19 20 21 22 23 27 28 29 30 31}
+do_test fts1a-4.3 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'three OR "one two"'}
+} {3 4 5 6 7 11 12 13 14 15 19 20 21 22 23 27 28 29 30 31}
+do_test fts1a-4.4 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two OR three'}
+} {3 5 7 11 13 15 19 21 23 27 29 31}
+do_test fts1a-4.5 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'three OR two one'}
+} {3 5 7 11 13 15 19 21 23 27 29 31}
+do_test fts1a-4.6 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two OR three OR four'}
+} {3 5 7 9 11 13 15 19 21 23 25 27 29 31}
+do_test fts1a-4.7 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH 'two OR three OR four one'}
+} {3 5 7 9 11 13 15 19 21 23 25 27 29 31}
+
+# Test the ability to handle NULL content
+#
+do_test fts1a-5.1 {
+ execsql {INSERT INTO t1(content) VALUES(NULL)}
+} {}
+do_test fts1a-5.2 {
+ set rowid [db last_insert_rowid]
+ execsql {SELECT content FROM t1 WHERE rowid=$rowid}
+} {{}}
+do_test fts1a-5.3 {
+ execsql {SELECT rowid FROM t1 WHERE content MATCH NULL}
+} {}
+
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/fts1b.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts1b.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,147 @@
+# 2006 September 13
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library.  The
+# focus of this script is testing the FTS1 module.
+#
+# $Id: fts1b.test,v 1.4 2006/09/18 02:12:48 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS1 is not defined, omit this file.
+ifcapable !fts1 {
+ finish_test
+ return
+}
+
+# Fill the full-text index "t1" with phrases in english, spanish,
+# and german. For the i-th row, fill in the names for the bits
+# that are set in the value of i. The least significant bit is
+# 1. For example, the value 5 is 101 in binary which will be
+# converted to "one three" in english.
+#
+proc fill_multilanguage_fulltext_t1 {} {
+ set english {one two three four five}
+ set spanish {un dos tres cuatro cinco}
+ set german {eine zwei drei vier funf}
+
+ for {set i 1} {$i<=31} {incr i} {
+ set cmd "INSERT INTO t1 VALUES"
+ set vset {}
+ foreach lang {english spanish german} {
+ set words {}
+ for {set j 0; set k 1} {$j<5} {incr j; incr k $k} {
+ if {$k&$i} {lappend words [lindex [set $lang] $j]}
+ }
+ lappend vset "'$words'"
+ }
+ set sql "INSERT INTO t1(english,spanish,german) VALUES([join $vset ,])"
+ # puts $sql
+ db eval $sql
+ }
+}
+
+# Construct a full-text search table whose english, spanish and german
+# columns each contain the corresponding five keywords in various
+# combinations.  The rowid for each row will be a bitmask for the
+# elements it contains.
+#
+db eval {
+ CREATE VIRTUAL TABLE t1 USING fts1(english,spanish,german);
+}
+fill_multilanguage_fulltext_t1
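+
+# For example, for i=7 (binary 111) the proc above inserts
+#   english='one two three', spanish='un dos tres', german='eine zwei drei',
+# which is why the cross-language query in fts1b-1.5 below returns rows
+# 7, 15, 23 and 31.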
+
+do_test fts1b-1.1 {
+ execsql {SELECT rowid FROM t1 WHERE english MATCH 'one'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts1b-1.2 {
+ execsql {SELECT rowid FROM t1 WHERE spanish MATCH 'one'}
+} {}
+do_test fts1b-1.3 {
+ execsql {SELECT rowid FROM t1 WHERE german MATCH 'one'}
+} {}
+do_test fts1b-1.4 {
+ execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'one'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts1b-1.5 {
+ execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'one dos drei'}
+} {7 15 23 31}
+do_test fts1b-1.6 {
+ execsql {SELECT english, spanish, german FROM t1 WHERE rowid=1}
+} {one un eine}
+do_test fts1b-1.7 {
+ execsql {SELECT rowid FROM t1 WHERE t1 MATCH '"one un"'}
+} {}
+
+do_test fts1b-2.1 {
+ execsql {
+ CREATE VIRTUAL TABLE t2 USING fts1(from,to);
+ INSERT INTO t2([from],[to]) VALUES ('one two three', 'four five six');
+ SELECT [from], [to] FROM t2
+ }
+} {{one two three} {four five six}}
+
+
+# Compute an SQL string that contains the words one, two, three,... to
+# describe bits set in the value $i. Only the lower 5 bits are examined.
+#
+proc wordset {i} {
+ set x {}
+ for {set j 0; set k 1} {$j<5} {incr j; incr k $k} {
+ if {$k&$i} {lappend x [lindex {one two three four five} $j]}
+ }
+ return '$x'
+}
+
+# Create a new FTS table with three columns:
+#
+# norm: words for the bits of rowid
+# plusone: words for the bits of rowid+1
+# invert: words for the bits of ~rowid
+#
+db eval {
+ CREATE VIRTUAL TABLE t4 USING fts1([norm],'plusone',"invert");
+}
+for {set i 1} {$i<=15} {incr i} {
+ set vset [list [wordset $i] [wordset [expr {$i+1}]] [wordset [expr {~$i}]]]
+ db eval "INSERT INTO t4(norm,plusone,invert) VALUES([join $vset ,]);"
+}
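+
+# For example, rowid 5 gets norm='one three' (5 = 00101), plusone='two three'
+# (6 = 00110) and invert='two four five' (the low five bits of ~5 are 11010),
+# so rowid 5 appears in fts1b-4.6 below ('norm:one plusone:two').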
+
+do_test fts1b-4.1 {
+ execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'norm:one'}
+} {1 3 5 7 9 11 13 15}
+do_test fts1b-4.2 {
+ execsql {SELECT rowid FROM t4 WHERE norm MATCH 'one'}
+} {1 3 5 7 9 11 13 15}
+do_test fts1b-4.3 {
+ execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'one'}
+} {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15}
+do_test fts1b-4.4 {
+ execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'plusone:one'}
+} {2 4 6 8 10 12 14}
+do_test fts1b-4.5 {
+ execsql {SELECT rowid FROM t4 WHERE plusone MATCH 'one'}
+} {2 4 6 8 10 12 14}
+do_test fts1b-4.6 {
+ execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'norm:one plusone:two'}
+} {1 5 9 13}
+do_test fts1b-4.7 {
+ execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'norm:one two'}
+} {1 3 5 7 9 11 13 15}
+do_test fts1b-4.8 {
+ execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'plusone:two norm:one'}
+} {1 5 9 13}
+do_test fts1b-4.9 {
+ execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'two norm:one'}
+} {1 3 5 7 9 11 13 15}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/fts1c.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts1c.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1213 @@
+# 2006 September 14
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library.  The
+# focus of this script is testing the FTS1 module.
+#
+# $Id: fts1c.test,v 1.11 2006/10/04 17:35:28 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS1 is not defined, omit this file.
+ifcapable !fts1 {
+ finish_test
+ return
+}
+
+# Create a table of sample email data.  The data comes from email
+# archives of Enron executives that were published as part of the
+# litigation against that company.
+#
+do_test fts1c-1.1 {
+ db eval {
+ CREATE VIRTUAL TABLE email USING fts1([from],[to],subject,body);
+ BEGIN TRANSACTION;
+INSERT INTO email([from],[to],subject,body) VALUES('savita.puthigai at enron.com', 'traders.eol at enron.com, traders.eol at enron.com', 'EnronOnline- Change to Autohedge', 'Effective Monday, October 22, 2001 the following changes will be made to the Autohedge functionality on EnronOnline.
+
+The volume on the hedge will now respect the minimum volume and volume increment settings on the parent product. See rules below:
+
+? If the transaction volume on the child is less than half of the parent''s minimum volume no hedge will occur.
+? If the transaction volume on the child is more than half the parent''s minimum volume but less than half the volume increment on the parent, the hedge will volume will be the parent''s minimum volume.
+? For all other volumes, the same rounding rules will apply based on the volume increment on the parent product.
+
+Please see example below:
+
+Parent''s Settings:
+Minimum: 5000
+Increment: 1000
+
+Volume on Autohedge transaction Volume Hedged
+1 - 2499 0
+2500 - 5499 5000
+5500 - 6499 6000');
+INSERT INTO email([from],[to],subject,body) VALUES('dana.davis at enron.com', 'laynie.east at enron.com, lisa.king at enron.com, lisa.best at enron.com,', 'Leaving Early', 'FYI:
+If it''s ok with everyone''s needs, I would like to leave @4pm. If you think
+you will need my assistance past the 4 o''clock hour just let me know; I''ll
+be more than willing to stay.');
+INSERT INTO email([from],[to],subject,body) VALUES('enron_update at concureworkplace.com', 'louise.kitchen at enron.com', '<<Concur Expense Document>> - CC02.06.02', 'The following expense report is ready for approval:
+
+Employee Name: Christopher F. Calger
+Status last changed by: Mollie E. Gustafson Ms
+Expense Report Name: CC02.06.02
+Report Total: $3,972.93
+Amount Due Employee: $3,972.93
+
+
+To approve this expense report, click on the following link for Concur Expense.
+http://expensexms.enron.com');
+INSERT INTO email([from],[to],subject,body) VALUES('jeff.duff at enron.com', 'julie.johnson at enron.com', 'Work request', 'Julie,
+
+Could you print off the current work request report by 1:30 today?
+
+Gentlemen,
+
+I''d like to review this today at 1:30 in our office. Also, could you provide
+me with your activity reports so I can have Julie enter this information.
+
+JD');
+INSERT INTO email([from],[to],subject,body) VALUES('v.weldon at enron.com', 'gary.l.carrier at usa.dupont.com, scott.joyce at bankofamerica.com', 'Enron News', 'This could turn into something big....
+http://biz.yahoo.com/rf/010129/n29305829.html');
+INSERT INTO email([from],[to],subject,body) VALUES('mark.haedicke at enron.com', 'paul.simons at enron.com', 'Re: First Polish Deal!', 'Congrats! Things seem to be building rapidly now on the Continent. Mark');
+INSERT INTO email([from],[to],subject,body) VALUES('e..carter at enron.com', 't..robinson at enron.com', 'FW: Producers Newsletter 9-24-2001', '
+The producer lumber pricing sheet.
+ -----Original Message-----
+From: Johnson, Jay
+Sent: Tuesday, October 16, 2001 3:42 PM
+To: Carter, Karen E.
+Subject: FW: Producers Newsletter 9-24-2001
+
+
+
+ -----Original Message-----
+From: Daigre, Sergai
+Sent: Friday, September 21, 2001 8:33 PM
+Subject: Producers Newsletter 9-24-2001
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('david.delainey at enron.com', 'kenneth.lay at enron.com', 'Greater Houston Partnership', 'Ken, in response to the letter from Mr Miguel San Juan, my suggestion would
+be to offer up the Falcon for their use; however, given the tight time frame
+and your recent visit with Mr. Fox that it would be difficult for either you
+or me to participate.
+
+I spoke to Max and he agrees with this approach.
+
+I hope this meets with your approval.
+
+Regards
+Delainey');
+INSERT INTO email([from],[to],subject,body) VALUES('lachandra.fenceroy at enron.com', 'lindy.donoho at enron.com', 'FW: Bus Applications Meeting Follow Up', 'Lindy,
+
+Here is the original memo we discussed earlier. Please provide any information that you may have.
+
+Your cooperation is greatly appreciated.
+
+Thanks,
+
+lachandra.fenceroy at enron.com
+713.853.3884
+877.498.3401 Pager
+
+ -----Original Message-----
+From: Bisbee, Joanne
+Sent: Wednesday, September 26, 2001 7:50 AM
+To: Fenceroy, LaChandra
+Subject: FW: Bus Applications Meeting Follow Up
+
+Lachandra, Please get with David Duff today and see what this is about. Who are our TW accounting business users?
+
+ -----Original Message-----
+From: Koh, Wendy
+Sent: Tuesday, September 25, 2001 2:41 PM
+To: Bisbee, Joanne
+Subject: Bus Applications Meeting Follow Up
+
+Lisa brought up a TW change effective Nov 1. It involves eliminating a turnback surcharge. I have no other information, but you might check with the business folks for any system changes required.
+
+Wendy');
+INSERT INTO email([from],[to],subject,body) VALUES('danny.mccarty at enron.com', 'fran.fagan at enron.com', 'RE: worksheets', 'Fran,
+ If Julie''s merit needs to be lump sum, just move it over to that column. Also, send me Eric Gadd''s sheets as well. Thanks.
+Dan
+
+ -----Original Message-----
+From: Fagan, Fran
+Sent: Thursday, December 20, 2001 11:10 AM
+To: McCarty, Danny
+Subject: worksheets
+
+As discussed, attached are your sheets for bonus and merit.
+
+Thanks,
+
+Fran Fagan
+Sr. HR Rep
+713.853.5219
+
+
+ << File: McCartyMerit.xls >> << File: mccartyBonusCommercial_UnP.xls >>
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('bert.meyers at enron.com', 'shift.dl-portland at enron.com', 'OCTOBER SCHEDULE', 'TEAM,
+
+PLEASE SEND ME ANY REQUESTS THAT YOU HAVE FOR OCTOBER. SO FAR I HAVE THEM FOR LEAF. I WOULD LIKE TO HAVE IT DONE BY THE 15TH OF THE MONTH. ANY QUESTIONS PLEASE GIVE ME A CALL.
+
+BERT');
+INSERT INTO email([from],[to],subject,body) VALUES('errol.mclaughlin at enron.com', 'john.arnold at enron.com, bilal.bajwa at enron.com, john.griffith at enron.com,', 'TRV Notification: (NG - PROPT P/L - 09/27/2001)', 'The report named: NG - PROPT P/L <http://trv.corp.enron.com/linkFromExcel.asp?report_cd=11&report_name=NG+-+PROPT+P/L&category_cd=5&category_name=FINANCIAL&toc_hide=1&sTV1=5&TV1Exp=Y&current_efct_date=09/27/2001>, published as of 09/27/2001 is now available for viewing on the website.');
+INSERT INTO email([from],[to],subject,body) VALUES('patrice.mims at enron.com', 'calvin.eakins at enron.com', 'Re: Small business supply assistance', 'Hi Calvin
+
+
+I spoke with Rickey (boy, is he long-winded!!). Gave him the name of our
+credit guy, Russell Diamond.
+
+Thank for your help!');
+INSERT INTO email([from],[to],subject,body) VALUES('legal <.hall at enron.com>', 'stephanie.panus at enron.com', 'Termination update', 'City of Vernon and Salt River Project terminated their contracts. I will fax these notices to you.');
+INSERT INTO email([from],[to],subject,body) VALUES('d..steffes at enron.com', 'richard.shapiro at enron.com', 'EES / ENA Government Affairs Staffing & Outside Services', 'Rick --
+
+Here is the information on staffing and outside services. Call if you need anything else.
+
+Jim
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('gelliott at industrialinfo.com', 'pcopello at industrialinfo.com', 'ECAAR (Gavin), WSCC (Diablo Canyon), & NPCC (Seabrook)', 'Dear Power Outage Database Customer,
+Attached you will find an excel document. The outages contained within are forced or rescheduled outages. Your daily delivery will still contain these outages.
+In addition to the two excel documents, there is a dbf file that is formatted like your daily deliveries you receive nightly. This will enable you to load the data into your regular database. Any questions please let me know. Thanks.
+Greg Elliott
+IIR, Inc.
+713-783-5147 x 3481
+outages at industrialinfo.com
+THE INFORMATION CONTAINED IN THIS E-MAIL IS LEGALLY PRIVILEGED AND CONFIDENTIAL INFORMATION INTENDED ONLY FOR THE USE OF THE INDIVIDUAL OR ENTITY NAMED ABOVE. YOU ARE HEREBY NOTIFIED THAT ANY DISSEMINATION, DISTRIBUTION, OR COPY OF THIS E-MAIL TO UNAUTHORIZED ENTITIES IS STRICTLY PROHIBITED. IF YOU HAVE RECEIVED THIS
+E-MAIL IN ERROR, PLEASE DELETE IT.
+ - OUTAGE.dbf
+ - 111201R.xls
+ - 111201.xls ');
+INSERT INTO email([from],[to],subject,body) VALUES('enron.announcements at enron.com', 'all_ena_egm_eim at enron.com', 'EWS Brown Bag', 'MARK YOUR LUNCH CALENDARS NOW !
+
+You are invited to attend the EWS Brown Bag Lunch Series
+
+Featuring: RAY BOWEN, COO
+
+Topic: Enron Industrial Markets
+
+Thursday, March 15, 2001
+11:30 am - 12:30 pm
+EB 5 C2
+
+
+You bring your lunch, Limited Seating
+We provide drinks and dessert. RSVP x 3-9610');
+INSERT INTO email([from],[to],subject,body) VALUES('chris.germany at enron.com', 'ingrid.immer at williams.com', 'Re: About St Pauls', 'Sounds good to me. I bet this is next to the Warick?? Hotel.
+
+
+
+
+"Immer, Ingrid" <Ingrid.Immer at Williams.com> on 12/21/2000 11:48:47 AM
+To: "''chris.germany at enron.com''" <chris.germany at enron.com>
+cc:
+Subject: About St Pauls
+
+
+
+
+ <<About St Pauls.url>>
+?
+?http://www.stpaulshouston.org/about.html
+
+Chris,
+
+I like the looks of this place.? What do you think about going here Christmas
+eve?? They have an 11:00 a.m. service and a candlelight service at 5:00 p.m.,
+among others.
+
+Let me know.?? ii
+
+ - About St Pauls.url
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('nas at cpuc.ca.gov', 'skatz at sempratrading.com, kmccrea at sablaw.com, thompson at wrightlaw.com,', 'Reply Brief filed July 31, 2000', ' - CPUC01-#76371-v1-Revised_Reply_Brief__Due_today_7_31_.doc');
+INSERT INTO email([from],[to],subject,body) VALUES('gascontrol at aglresources.com', 'dscott4 at enron.com, lcampbel at enron.com', 'Alert Posted 10:00 AM November 20,2000: E-GAS Request Reminder', 'Alert Posted 10:00 AM November 20,2000: E-GAS Request Reminder
+As discussed in the Winter Operations Meeting on Sept.29,2000,
+E-Gas(Emergency Gas) will not be offered this winter as a service from AGLC.
+Marketers and Poolers can receive gas via Peaking and IBSS nominations(daisy
+chain) from other marketers up to the 6 p.m. Same Day 2 nomination cycle.
+');
+INSERT INTO email([from],[to],subject,body) VALUES('dutch.quigley at enron.com', 'rwolkwitz at powermerchants.com', '', '
+
+Here is a goody for you');
+INSERT INTO email([from],[to],subject,body) VALUES('ryan.o''rourke at enron.com', 'k..allen at enron.com, randy.bhatia at enron.com, frank.ermis at enron.com,', 'TRV Notification: (West VaR - 11/07/2001)', 'The report named: West VaR <http://trv.corp.enron.com/linkFromExcel.asp?report_cd=36&report_name=West+VaR&category_cd=2&category_name=WEST&toc_hide=1&sTV1=2&TV1Exp=Y&current_efct_date=11/07/2001>, published as of 11/07/2001 is now available for viewing on the website.');
+INSERT INTO email([from],[to],subject,body) VALUES('mjones7 at txu.com', 'cstone1 at txu.com, ggreen2 at txu.com, timpowell at txu.com,', 'Enron / HPL Actuals for July 10, 2000', 'Teco Tap 10.000 / Enron ; 110.000 / HPL IFERC
+
+LS HPL LSK IC 30.000 / Enron
+');
+INSERT INTO email([from],[to],subject,body) VALUES('susan.pereira at enron.com', 'kkw816 at aol.com', 'soccer practice', 'Kathy-
+
+Is it safe to assume that practice is cancelled for tonight??
+
+Susan Pereira');
+INSERT INTO email([from],[to],subject,body) VALUES('mark.whitt at enron.com', 'barry.tycholiz at enron.com', 'Huber Internal Memo', 'Please look at this. I didn''t know how deep to go with the desk. Do you think this works.
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('m..forney at enron.com', 'george.phillips at enron.com', '', 'George,
+Give me a call and we will further discuss opportunities on the 13st floor.
+
+Thanks,
+JMForney
+3-7160');
+INSERT INTO email([from],[to],subject,body) VALUES('brad.mckay at enron.com', 'angusmcka at aol.com', 'Re: (no subject)', 'not yet');
+INSERT INTO email([from],[to],subject,body) VALUES('adam.bayer at enron.com', 'jonathan.mckay at enron.com', 'FW: Curve Fetch File', 'Here is the curve fetch file sent to me. It has plenty of points in it. If you give me a list of which ones you need we may be able to construct a secondary worksheet to vlookup the values.
+
+adam
+35227
+
+
+ -----Original Message-----
+From: Royed, Jeff
+Sent: Tuesday, September 25, 2001 11:37 AM
+To: Bayer, Adam
+Subject: Curve Fetch File
+
+Let me know if it works. It may be required to have a certain version of Oracle for it to work properly.
+
+
+
+Jeff Royed
+Enron
+Energy Operations
+Phone: 713-853-5295');
+INSERT INTO email([from],[to],subject,body) VALUES('matt.smith at enron.com', 'yan.wang at enron.com', 'Report Formats', 'Yan,
+
+The merged reports look great. I believe the only orientation changes are to
+"unmerge" the following six reports:
+
+31 Keystone Receipts
+15 Questar Pipeline
+40 Rockies Production
+22 West_2
+23 West_3
+25 CIG_WIC
+
+The orientation of the individual reports should be correct. Thanks.
+
+Mat
+
+PS. Just a reminder to add the "*" by the title of calculated points.');
+INSERT INTO email([from],[to],subject,body) VALUES('michelle.lokay at enron.com', 'jimboman at bigfoot.com', 'Egyptian Festival', '---------------------- Forwarded by Michelle Lokay/ET&S/Enron on 09/07/2000
+10:08 AM ---------------------------
+
+
+"Karkour, Randa" <Randa.Karkour at COMPAQ.com> on 09/07/2000 09:01:04 AM
+To: "''Agheb (E-mail)" <Agheb at aol.com>, "Leila Mankarious (E-mail)"
+<Leila_Mankarious at mhhs.org>, "''Marymankarious (E-mail)"
+<marymankarious at aol.com>, "Michelle lokay (E-mail)" <mlokay at enron.com>, "Ramy
+Mankarious (E-mail)" <Mankarious at aol.com>
+cc:
+
+Subject: Egyptian Festival
+
+
+ <<Egyptian Festival.url>>
+
+ http://www.egyptianfestival.com/
+
+ - Egyptian Festival.url
+');
+INSERT INTO email([from],[to],subject,body) VALUES('errol.mclaughlin at enron.com', 'sherry.dawson at enron.com', 'Urgent!!! --- New EAST books', 'This has to be done..................................
+
+Thanks
+---------------------- Forwarded by Errol McLaughlin/Corp/Enron on 12/20/2000
+08:39 AM ---------------------------
+
+
+
+ From: William Kelly @ ECT 12/20/2000 08:31 AM
+
+
+To: Kam Keiser/HOU/ECT at ECT, Darron C Giron/HOU/ECT at ECT, David
+Baumbach/HOU/ECT at ECT, Errol McLaughlin/Corp/Enron at ENRON
+cc: Kimat Singla/HOU/ECT at ECT, Kulvinder Fowler/NA/Enron at ENRON, Kyle R
+Lilly/HOU/ECT at ECT, Jeff Royed/Corp/Enron at ENRON, Alejandra
+Chavez/NA/Enron at ENRON, Crystal Hyde/HOU/ECT at ECT
+
+Subject: New EAST books
+
+We have new book names in TAGG for our intramonth portfolios and it is
+extremely important that any deal booked to the East is communicated quickly
+to someone on my team. I know it will take some time for the new names to
+sink in and I do not want us to miss any positions or P&L.
+
+Thanks for your help on this.
+
+New:
+Scott Neal : East Northeast
+Dick Jenkins: East Marketeast
+
+WK
+');
+INSERT INTO email([from],[to],subject,body) VALUES('david.forster at enron.com', 'eol.wide at enron.com', 'Change to Stack Manager', 'Effective immediately, there is a change to the Stack Manager which will
+affect any Inactive Child.
+
+An inactive Child with links to Parent products will not have their
+calculated prices updated until the Child product is Activated.
+
+When the Child Product is activated, the price will be recalculated and
+updated BEFORE it is displayed on the web.
+
+This means that if you are inputting a basis price on a Child product, you
+will not see the final, calculated price until you Activate the product, at
+which time the customer will also see it.
+
+If you have any questions, please contact the Help Desk on:
+
+Americas: 713 853 4357
+Europe: + 44 (0) 20 7783 7783
+Asia/Australia: +61 2 9229 2300
+
+Dave');
+INSERT INTO email([from],[to],subject,body) VALUES('vince.kaminski at enron.com', 'jhh1 at email.msn.com', 'Re: Light reading - see pieces beginning on page 7', 'John,
+
+I saw it. Very interesting.
+
+Vince
+
+
+
+
+
+"John H Herbert" <jhh1 at email.msn.com> on 07/28/2000 08:38:08 AM
+To: "Vince J Kaminski" <Vince_J_Kaminski at enron.com>
+cc:
+Subject: Light reading - see pieces beginning on page 7
+
+
+Cheers and have a nice weekend,
+
+
+JHHerbert
+
+
+
+
+ - gd000728.pdf
+
+
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('matthew.lenhart at enron.com', 'mmmarcantel at equiva.com', 'RE:', 'i will try to line up a pig for you ');
+INSERT INTO email([from],[to],subject,body) VALUES('jae.black at enron.com', 'claudette.harvey at enron.com, chaun.roberts at enron.com, judy.martinez at enron.com,', 'Disaster Recovery Equipment', 'As a reminder...there are several pieces of equipment that are set up on the 30th Floor, as well as on our floor, for the Disaster Recovery Team. PLEASE DO NOT TAKE, BORROW OR USE this equipment. Should you need to use another computer system, other than yours, or make conference calls please work with your Assistant to help find or set up equipment for you to use.
+
+Thanks for your understanding in this matter.
+
+T.Jae Black
+East Power Trading
+Assistant to Kevin Presto
+off. 713-853-5800
+fax 713-646-8272
+cell 713-539-4760');
+INSERT INTO email([from],[to],subject,body) VALUES('eric.bass at enron.com', 'dale.neuner at enron.com', '5 X 24', 'Dale,
+
+Have you heard anything more on the 5 X 24s? We would like to get this
+product out ASAP.
+
+
+Thanks,
+
+Eric');
+INSERT INTO email([from],[to],subject,body) VALUES('messenger at smartreminders.com', 'm..tholt at enron.com', '10% Coupon - PrintPal Printer Cartridges - 100% Guaranteed', '[IMAGE]
+[IMAGE][IMAGE][IMAGE]
+Dear SmartReminders Member,
+ [IMAGE] [IMAGE] [IMAGE] [IMAGE] [IMAGE] [IMAGE] [IMAGE] [IMAGE]
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+We respect your privacy and are a Certified Participant of the BBBOnLine
+ Privacy Program. To be removed from future offers,click here.
+SmartReminders.com is a permission based service. To unsubscribe click here . ');
+INSERT INTO email([from],[to],subject,body) VALUES('benjamin.rogers at enron.com', 'mark.bernstein at enron.com', '', 'The guy you are talking about left CIN under a "cloud of suspicion" sort of
+speak. He was the one who got into several bad deals and PPA''s in California
+for CIN, thus he left on a bad note. Let me know if you need more detail
+than that, I felt this was the type of info you were looking for. Thanks!
+Ben');
+INSERT INTO email([from],[to],subject,body) VALUES('enron_update at concureworkplace.com', 'michelle.cash at enron.com', 'Expense Report Receipts Not Received', 'Employee Name: Michelle Cash
+Report Name: Houston Cellular 8-11-01
+Report Date: 12/13/01
+Report ID: 594D37C9ED2111D5B452
+Submitted On: 12/13/01
+
+You are only allowed 2 reports with receipts outstanding. Your expense reports will not be paid until you meet this requirement.');
+INSERT INTO email([from],[to],subject,body) VALUES('susan.mara at enron.com', 'ray.alvarez at enron.com, mark.palmer at enron.com, karen.denne at enron.com,', 'CAISO Emergency Motion -- to discontinue market-based rates for', 'FYI. the latest broadside against the generators.
+
+Sue Mara
+Enron Corp.
+Tel: (415) 782-7802
+Fax:(415) 782-7854
+----- Forwarded by Susan J Mara/NA/Enron on 06/08/2001 12:24 PM -----
+
+
+ "Milner, Marcie" <MMilner at coral-energy.com> 06/08/2001 11:13 AM To: "''smara at enron.com''" <smara at enron.com> cc: Subject: CAISO Emergency Motion
+
+
+Sue, did you see this emergency motion the CAISO filed today? Apparently
+they are requesting that FERC discontinue market-based rates immediately and
+grant refunds plus interest on the difference between cost-based rates and
+market revenues received back to May 2000. They are requesting the
+commission act within 14 days. Have you heard anything about what they are
+doing?
+
+Marcie
+
+http://www.caiso.com/docs/2001/06/08/200106081005526469.pdf
+');
+INSERT INTO email([from],[to],subject,body) VALUES('fletcher.sturm at enron.com', 'eloy.escobar at enron.com', 'Re: General Brinks Position Meeting', 'Eloy,
+
+Who is General Brinks?
+
+Fletch');
+INSERT INTO email([from],[to],subject,body) VALUES('nailia.dindarova at enron.com', 'richard.shapiro at enron.com', 'Documents for Mark Frevert (on EU developments and lessons from', 'Rick,
+
+Here are the documents that Peter has prepared for Mark Frevert.
+
+Nailia
+---------------------- Forwarded by Nailia Dindarova/LON/ECT on 25/06/2001
+16:36 ---------------------------
+
+
+Nailia Dindarova
+25/06/2001 15:36
+To: Michael Brown/Enron at EUEnronXGate
+cc: Ross Sankey/Enron at EUEnronXGate, Eric Shaw/ENRON at EUEnronXGate, Peter
+Styles/LON/ECT at ECT
+
+Subject: Documents for Mark Frevert (on EU developments and lessons from
+California)
+
+Michael,
+
+
+These are the documents that Peter promised to give to you for Mark Frevert.
+He has now handed them to him in person but asked me to transmit them
+electronically to you, as well as Eric and Ross.
+
+Nailia
+
+
+
+
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('peggy.a.kostial at accenture.com', 'dave.samuels at enron.com', 'EOL-Accenture Deal Sheet', 'Dave -
+
+Attached are our comments and suggested changes. Please call to review.
+
+On the time line for completion, we have four critical steps to complete:
+ Finalize market analysis to refine business case, specifically
+ projected revenue stream
+ Complete counterparty surveying, including targeting 3 CPs for letters
+ of intent
+ Review Enron asset base for potential reuse/ licensing
+ Contract negotiations
+
+Joe will come back to us with an updated time line, but it is my
+expectation that we are still on the same schedule (we just begun week
+three) with possibly a week or so slippage.....contract negotiations will
+probably be the critical path.
+
+We will send our cut at the actual time line here shortly. Thanks,
+
+Peggy
+
+(See attached file: accenture-dealpoints v2.doc)
+ - accenture-dealpoints v2.doc ');
+INSERT INTO email([from],[to],subject,body) VALUES('thomas.martin at enron.com', 'thomas.martin at enron.com', 'Re: Guadalupe Power Partners LP', '---------------------- Forwarded by Thomas A Martin/HOU/ECT on 03/20/2001
+03:49 PM ---------------------------
+
+
+Thomas A Martin
+10/11/2000 03:55 PM
+To: Patrick Wade/HOU/ECT at ECT
+cc:
+Subject: Re: Guadalupe Power Partners LP
+
+The deal is physically served at Oasis Waha or Oasis Katy and is priced at
+either HSC, Waha or Katytailgate GD at buyers option three days prior to
+NYMEX close.
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('judy.townsend at enron.com', 'dan.junek at enron.com, chris.germany at enron.com', 'Columbia Distribution''s Capacity Available for Release - Sum', '---------------------- Forwarded by Judy Townsend/HOU/ECT on 03/09/2001 11:04
+AM ---------------------------
+
+
+agoddard at nisource.com on 03/08/2001 09:16:57 AM
+To: " - *Koch, Kent" <kkoch at nisource.com>, " -
+*Millar, Debra" <dmillar at nisource.com>, " - *Burke, Lynn"
+<lburke at nisource.com>
+cc: " - *Heckathorn, Tom" <theckathorn at nisource.com>
+Subject: Columbia Distribution''s Capacity Available for Release - Sum
+
+
+Attached is Columbia Distribution''s notice of capacity available for release
+for
+the summer of 2001 (Apr. 2001 through Oct. 2001).
+
+Please note that the deadline for bids is 3:00pm EST on March 20, 2001.
+
+If you have any questions, feel free to contact any of the representatives
+listed
+at the bottom of the attachment.
+
+Aaron Goddard
+
+
+
+
+ - 2001Summer.doc
+');
+INSERT INTO email([from],[to],subject,body) VALUES('rhonda.denton at enron.com', 'tim.belden at enron.com, dana.davis at enron.com, genia.fitzgerald at enron.com,', 'Split Rock Energy LLC', 'We have received the executed EEI contract from this CP dated 12/12/2000.
+Copies will be distributed to Legal and Credit.');
+INSERT INTO email([from],[to],subject,body) VALUES('kerrymcelroy at dwt.com', 'jack.speer at alcoa.com, crow at millernash.com, michaelearly at earthlink.net,', 'Oral Argument Request', ' - Oral Argument Request.doc');
+INSERT INTO email([from],[to],subject,body) VALUES('mike.carson at enron.com', 'rlmichaelis at hormel.com', '', 'Did you come in town this wk end..... My new number at our house is :
+713-668-3712...... my cell # is 281-381-7332
+
+the kid');
+INSERT INTO email([from],[to],subject,body) VALUES('cooper.richey at enron.com', 'trycooper at hotmail.com', 'FW: Contact Info', '
+
+-----Original Message-----
+From: Punja, Karim
+Sent: Thursday, December 13, 2001 2:35 PM
+To: Richey, Cooper
+Subject: Contact Info
+
+
+Cooper,
+
+Its been a real pleasure working with you (even though it was for only a small amount of time)
+I hope we can stay in touch.
+
+Home# 234-0249
+email: kpunja at hotmail.com
+
+Take Care,
+
+Karim.
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('bjm30 at earthlink.net', 'mcguinn.k at enron.com, mcguinn.ian at enron.com, mcguinn.stephen at enron.com,', 'email address change', 'Hello all.
+
+I haven''t talked to many of you via email recently but I do want to give you
+my new address for your email file:
+
+ bjm30 at earthlink.net
+
+I hope all is well.
+
+Brian McGuinn');
+INSERT INTO email([from],[to],subject,body) VALUES('shelley.corman at enron.com', 'steve.hotte at enron.com', 'Flat Panels', 'Can you please advise what is going on with the flat panels that we had planned to distribute to our gas logistics team. It was in the budget and we had the okay, but now I''m hearing there is some hold-up & the units are stored on 44.
+
+Shelley');
+INSERT INTO email([from],[to],subject,body) VALUES('sara.davidson at enron.com', 'john.schwartzenburg at enron.com, scott.dieball at enron.com, recipients at enron.com,', '2001 Enron Law Conference (Distribution List 2)', ' Enron Law Conference
+
+San Antonio, Texas May 2-4, 2001 Westin Riverwalk
+
+ See attached memo for more details!!
+
+
+? Registration for the law conference this year will be handled through an
+Online RSVP Form on the Enron Law Conference Website at
+http://lawconference.corp.enron.com. The website is still under construction
+and will not be available until Thursday, March 15, 2001.
+
+? We will send you another e-mail to confirm when the Law Conference Website
+is operational.
+
+? Please complete the Online RSVP Form as soon as it is available and submit
+it no later than Friday, March 30th.
+
+
+
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('tori.kuykendall at enron.com', 'heath.b.taylor at accenture.com', 'Re:', 'hey - thats funny about john - he definitely remembers him - i''ll call pat
+and let him know - we are coming on saturday - i just havent had a chance to
+call you guys back -- looking forward to it -- i probably need the
+directions again though');
+INSERT INTO email([from],[to],subject,body) VALUES('darron.giron at enron.com', 'bryce.baxter at enron.com', 'Re: Feedback for Audrey Cook', 'Bryce,
+
+I''ll get it done today.
+
+DG 3-9573
+
+
+
+
+
+ From: Bryce Baxter 06/12/2000 07:15 PM
+
+
+To: Darron C Giron/HOU/ECT at ECT
+cc:
+Subject: Feedback for Audrey Cook
+
+You were identified as a reviewer for Audrey Cook. If possible, could you
+complete her feedback by end of business Wednesday? It will really help me
+in the PRC process to have your input. Thanks.
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('casey.evans at enron.com', 'stephanie.sever at enron.com', 'Gas EOL ID', 'Stephanie,
+
+In conjunction with the recent movement of several power traders, they are changing the names of their gas books as well. The names of the new gas books and traders are as follows:
+
+PWR-NG-LT-SPP: Mike Carson
+PWR-NG-LT-SERC: Jeff King
+
+If you need to know their power desk to map their ID to their gas books, those desks are as follows:
+
+EPMI-LT-SPP: Mike Carson
+EPMI-LT-SERC: Jeff King
+
+I will be in training this afternoon, but will be back when class is over. Let me know if you have any questions.
+
+Thanks for your help!
+Casey');
+INSERT INTO email([from],[to],subject,body) VALUES('darrell.schoolcraft at enron.com', 'david.roensch at enron.com, kimberly.watson at enron.com, michelle.lokay at enron.com,', 'Postings', 'Please see the attached.
+
+
+ds
+
+
+
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('mcominsky at aol.com', 'cpatman at bracepatt.com, james_derrick at enron.com', 'Jurisprudence Luncheon', 'Carrin & Jim --
+
+It was an honor and a pleasure to meet both of you yesterday. I know we will
+have fun working together on this very special event.
+
+Jeff left the jurisprudence luncheon lists for me before he left on vacation.
+ I wasn''t sure whether he transmitted them to you as well. Would you please
+advise me if you would like them sent to you? I can email the MS Excel files
+or I can fax the hard copies to you. Please advise what is most convenient.
+
+I plan to be in town through the holidays and can be reached by phone, email,
+or cell phone at any time. My cell phone number is 713/705-4829.
+
+Thanks again for your interest in the ADL''s work. Martin.
+
+Martin B. Cominsky
+Director, Southwest Region
+Anti-Defamation League
+713/627-3490, ext. 122
+713/627-2011 (fax)
+MCominsky at aol.com');
+INSERT INTO email([from],[to],subject,body) VALUES('phillip.love at enron.com', 'todagost at utmb.edu, gbsonnta at utmb.edu', 'New President', 'I had a little bird put a word in my ear. Is there any possibility for Ben
+Raimer to be Bush''s secretary of HHS? Just curious about that infamous UTMB
+rumor mill. Hope things are well, happy holidays.
+PL');
+INSERT INTO email([from],[to],subject,body) VALUES('marie.heard at enron.com', 'ehamilton at fna.com', 'ISDA Master Agreement', 'Erin:
+
+Pursuant to your request, attached are the Schedule to the ISDA Master Agreement, together with Paragraph 13 to the ISDA Credit Support Annex. Please let me know if you need anything else. We look forward to hearing your comments.
+
+Marie
+
+Marie Heard
+Senior Legal Specialist
+Enron North America Corp.
+Phone: (713) 853-3907
+Fax: (713) 646-3490
+marie.heard at enron.com
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('andrea.ring at enron.com', 'beverly.beaty at enron.com', 'Re: Tennessee Buy - Louis Dreyfus', 'Beverly - once again thanks so much for your help on this.
+
+
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('karolyn.criado at enron.com', 'j..bonin at enron.com, felicia.case at enron.com, b..clapp at enron.com,', 'Price List week of Oct. 8-9, 2001', '
+Please contact me if you have any questions regarding last weeks prices.
+
+Thank you,
+Karolyn Criado
+3-9441
+
+
+
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('kevin.presto at enron.com', 'edward.baughman at enron.com, billy.braddock at enron.com', 'Associated', 'Please begin working on filling our Associated short position in 02. I would like to take this risk off the books.
+
+In addition, please find out what a buy-out of VEPCO would cost us. With Rogers transitioning to run our retail risk management, I would like to clean up our customer positions.
+
+We also need to continue to explore a JEA buy-out.
+
+Thanks.');
+INSERT INTO email([from],[to],subject,body) VALUES('stacy.dickson at enron.com', 'gregg.penman at enron.com', 'RE: Constellation TC 5-7-01', 'Gregg,
+
+I am at home with a sick baby. (Lots of fun!) I will call you about this
+tomorrow.
+
+Stacy');
+INSERT INTO email([from],[to],subject,body) VALUES('joe.quenet at enron.com', 'dfincher at utilicorp.com', '', 'hey big guy.....check this out.....
+
+ w ww.gorelieberman-2000.com/');
+INSERT INTO email([from],[to],subject,body) VALUES('k..allen at enron.com', 'jacqestc at aol.com', '', 'Jacques,
+
+I sent you a fax of Kevin Kolb''s comments on the release. The payoff on the note would be $36,248 ($36090(principal) + $158 (accrued interest)).
+This is assuming we wrap this up on Tuesday.
+
+Please email to confirm that their changes are ok so I can set up a meeting on Tuesday to reach closure.
+
+Phillip');
+INSERT INTO email([from],[to],subject,body) VALUES('kourtney.nelson at enron.com', 'mike.swerzbin at enron.com', 'Adjusted L/R Balance', 'Mike,
+
+I placed the adjusted L/R Balance on the Enronwest site. It is under the "Staff/Kourtney Nelson". There are two links:
+
+1) "Adj L_R" is the same data/format from the weekly strategy meeting.
+2) "New Gen 2001_2002" link has all of the supply side info that is used to calculate the L/R balance
+ -Please note the Data Flag column, a value of "3" indicates the project was cancelled, on hold, etc and is not included in the calc.
+
+Both of these sheets are interactive Excel spreadsheets and thus you can play around with the data as you please. Also, James Bruce is working to get his gen report on the web. That will help with your access to information on new gen.
+
+Please let me know if you have any questions or feedback,
+
+Kourtney
+
+
+
+Kourtney Nelson
+Fundamental Analysis
+Enron North America
+(503) 464-8280
+kourtney.nelson at enron.com');
+INSERT INTO email([from],[to],subject,body) VALUES('d..thomas at enron.com', 'naveed.ahmed at enron.com', 'FW: Current Enron TCC Portfolio', '
+
+-----Original Message-----
+From: Grace, Rebecca M.
+Sent: Monday, December 17, 2001 9:44 AM
+To: Thomas, Paul D.
+Cc: Cashion, Jim; Allen, Thresa A.; May, Tom
+Subject: RE: Current Enron TCC Portfolio
+
+
+Paul,
+
+I reviewed NY''s list. I agree with all of their contracts numbers and mw amounts.
+
+Call if you have any more questions.
+
+Rebecca
+
+
+
+ -----Original Message-----
+From: Thomas, Paul D.
+Sent: Monday, December 17, 2001 9:08 AM
+To: Grace, Rebecca M.
+Subject: FW: Current Enron TCC Portfolio
+
+ << File: enrontccs.xls >>
+Rebecca,
+Let me know if you see any differences.
+
+Paul
+X 3-0403
+-----Original Message-----
+From: Thomas, Paul D.
+Sent: Monday, December 17, 2001 9:04 AM
+To: Ahmed, Naveed
+Subject: FW: Current Enron TCC Portfolio
+
+
+
+
+-----Original Message-----
+From: Thomas, Paul D.
+Sent: Thursday, December 13, 2001 10:01 AM
+To: Baughman, Edward D.
+Subject: Current Enron TCC Portfolio
+
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('stephanie.panus at enron.com', 'william.bradford at enron.com, debbie.brackett at enron.com,', 'Coastal Merchant Energy/El Paso Merchant Energy', 'Coastal Merchant Energy, L.P. merged with and into El Paso Merchant Energy,
+L.P., effective February 1, 2001, with the surviving entity being El Paso
+Merchant Energy, L.P. We currently have ISDA Master Agreements with both
+counterparties. Please see the attached memo regarding the existing Masters
+and let us know which agreement should be terminated.
+
+Thanks,
+Stephanie
+');
+INSERT INTO email([from],[to],subject,body) VALUES('kam.keiser at enron.com', 'c..kenne at enron.com', 'RE: What about this too???', '
+
+ -----Original Message-----
+From: Kenne, Dawn C.
+Sent: Wednesday, February 06, 2002 11:50 AM
+To: Keiser, Kam
+Subject: What about this too???
+
+
+ << File: Netco Trader Matrix.xls >>
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('chris.meyer at enron.com', 'joe.parks at enron.com', 'Centana', 'Talked to Chip. We do need Cash Committe approval given the netting feature of your deal, which means Batch Funding Request. Please update per my previous e-mail and forward.
+
+Thanks
+
+chris
+x31666');
+INSERT INTO email([from],[to],subject,body) VALUES('debra.perlingiere at enron.com', 'jworman at academyofhealth.com', '', 'Have a great weekend! Happy Fathers Day!
+
+
+Debra Perlingiere
+Enron North America Corp.
+1400 Smith Street, EB 3885
+Houston, Texas 77002
+dperlin at enron.com
+Phone 713-853-7658
+Fax 713-646-3490');
+INSERT INTO email([from],[to],subject,body) VALUES('outlook.team at enron.com', '', 'Demo by Martha Janousek of Dashboard & Pipeline Profile / Julia &', 'CALENDAR ENTRY: APPOINTMENT
+
+Description:
+ Demo by Martha Janousek of Dashboard & Pipeline Profile / Julia & Dir Rpts. - 4102
+
+Date: 1/5/2001
+Time: 9:00 AM - 10:00 AM (Central Standard Time)
+
+Chairperson: Outlook Migration Team
+
+Detailed Description:');
+INSERT INTO email([from],[to],subject,body) VALUES('diana.seifert at enron.com', 'mark.taylor at enron.com', 'Guest access Chile', 'Hello Mark,
+
+Justin Boyd told me that your can help me with questions regarding Chile.
+We got a request for guest access through MG.
+The company is called Escondida and is a subsidiary of BHP Australia.
+
+Please advise if I can set up a guest account or not.
+F.Y.I.: MG is planning to put a "in w/h Chile" contract for Copper on-line as
+soon as Enron has done the due diligence for this country.
+Thanks !
+
+
+Best regards
+
+Diana Seifert
+EOL PCG');
+INSERT INTO email([from],[to],subject,body) VALUES('enron_update at concureworkplace.com', 'mark.whitt at enron.com', '<<Concur Expense Document>> - 121001', 'The Approval status has changed on the following report:
+
+Status last changed by: Barry L. Tycholiz
+Expense Report Name: 121001
+Report Total: $198.98
+Amount Due Employee: $198.98
+Amount Approved: $198.98
+Amount Paid: $0.00
+Approval Status: Approved
+Payment Status: Pending
+
+
+To review this expense report, click on the following link for Concur Expense.
+http://expensexms.enron.com');
+INSERT INTO email([from],[to],subject,body) VALUES('kevin.hyatt at enron.com', '', 'Technical Support', 'Outside the U.S., please refer to the list below:
+
+Australia:
+1800 678-515
+support at palm-au.com
+
+Canada:
+1905 305-6530
+support at palm.com
+
+New Zealand:
+0800 446-398
+support at palm-nz.com
+
+U.K.:
+0171 867 0108
+eurosupport at palm.3com.com
+
+Please refer to the Worldwide Customer Support card for a complete technical support contact list.');
+INSERT INTO email([from],[to],subject,body) VALUES('geoff.storey at enron.com', 'dutch.quigley at enron.com', 'RE:', 'duke contact?
+
+ -----Original Message-----
+From: Quigley, Dutch
+Sent: Wednesday, October 31, 2001 10:14 AM
+To: Storey, Geoff
+Subject: RE:
+
+bp corp Albert LaMore 281-366-4962
+
+running the reports now
+
+
+ -----Original Message-----
+From: Storey, Geoff
+Sent: Wednesday, October 31, 2001 10:10 AM
+To: Quigley, Dutch
+Subject: RE:
+
+give me a contact over there too
+BP
+
+
+ -----Original Message-----
+From: Quigley, Dutch
+Sent: Wednesday, October 31, 2001 9:42 AM
+To: Storey, Geoff
+Subject:
+
+Coral Jeff Whitnah 713-767-5374
+Relaint Steve McGinn 713-207-4000');
+INSERT INTO email([from],[to],subject,body) VALUES('pete.davis at enron.com', 'pete.davis at enron.com', 'Start Date: 4/22/01; HourAhead hour: 3; <CODESITE>', 'Start Date: 4/22/01; HourAhead hour: 3; No ancillary schedules awarded.
+Variances detected.
+Variances detected in Load schedule.
+
+ LOG MESSAGES:
+
+PARSING FILE -->> O:\Portland\WestDesk\California Scheduling\ISO Final
+Schedules\2001042203.txt
+
+---- Load Schedule ----
+$$$ Variance found in table tblLoads.
+ Details: (Hour: 3 / Preferred: 1.92 / Final: 1.89)
+ TRANS_TYPE: FINAL
+ LOAD_ID: PGE4
+ MKT_TYPE: 2
+ TRANS_DATE: 4/22/01
+ SC_ID: EPMI
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('john.postlethwaite at enron.com', 'john.zufferli at enron.com', 'Reference', 'John, hope things are going well up there for you. The big day is almost here for you and Jessica. I was wondering if I could use your name as a job reference if need be. I am just trying to get everything in order just in case something happens.
+
+John');
+INSERT INTO email([from],[to],subject,body) VALUES('jeffrey.shankman at enron.com', 'lschiffm at jonesday.com', 'Re:', 'I saw you called on the cell this a.m. Sorry I missed you. (I was in the
+shower). I have had a shitty week--I suspect my silence (not only to you,
+but others) after our phone call is a result of the week. I''m seeing Glen at
+11:15....talk to you');
+INSERT INTO email([from],[to],subject,body) VALUES('litebytz at enron.com', '', 'Lite Bytz RSVP', '
+This week''s Lite Bytz presentation will feature the following TOOLZ speaker:
+
+Richard McDougall
+Solaris 8
+Thursday, June 7, 2001
+
+If you have not already signed up, please RSVP via email to litebytz at enron.com by the end of the day Tuesday, June 5, 2001.
+
+*Remember: this is now a Brown Bag Event--so bring your lunch and we will provide cookies and drinks.
+
+Click below for more details.
+
+http://home.enron.com:84/messaging/litebytztoolzprint.jpg');
+ COMMIT;
+ }
+} {}
+
+###############################################################################
+# Everything above just builds an interesting test database. The actual
+# tests come after this comment.
+###############################################################################
+
+do_test fts1c-1.2 {
+ execsql {
+ SELECT rowid FROM email WHERE email MATCH 'mark'
+ }
+} {6 17 25 38 40 42 73 74}
+do_test fts1c-1.3 {
+ execsql {
+ SELECT rowid FROM email WHERE email MATCH 'susan'
+ }
+} {24 40}
+do_test fts1c-1.4 {
+ execsql {
+ SELECT rowid FROM email WHERE email MATCH 'mark susan'
+ }
+} {40}
+do_test fts1c-1.5 {
+ execsql {
+ SELECT rowid FROM email WHERE email MATCH 'susan mark'
+ }
+} {40}
+do_test fts1c-1.6 {
+ execsql {
+ SELECT rowid FROM email WHERE email MATCH '"mark susan"'
+ }
+} {}
+do_test fts1c-1.7 {
+ execsql {
+ SELECT rowid FROM email WHERE email MATCH 'mark -susan'
+ }
+} {6 17 25 38 42 73 74}
+do_test fts1c-1.8 {
+ execsql {
+ SELECT rowid FROM email WHERE email MATCH '-mark susan'
+ }
+} {24}
+do_test fts1c-1.9 {
+ execsql {
+ SELECT rowid FROM email WHERE email MATCH 'mark OR susan'
+ }
+} {6 17 24 25 38 40 42 73 74}
+
+# Some simple tests of the automatic "offsets(email)" column. In the sample
+# data set above, only one message, number 20, contains the words
+# "gas" and "reminder" in both body and subject.
+#
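+# (A note on reading the expected results: offsets(email) evidently
+# returns a space-separated list of 4-integer groups, one per matching
+# term occurrence, giving the column number, the index of the matching
+# query term, the byte offset of the match, and its length in bytes.
+# In fts1c-2.1 below, the group "2 0 42 3" would thus mean: column 2
+# (the subject), query term 0 ("gas"), offset 42, length 3, and
+# "2 1 54 8" the matching "reminder" in the same column.)
+#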
+do_test fts1c-2.1 {
+ execsql {
+ SELECT rowid, offsets(email) FROM email
+ WHERE email MATCH 'gas reminder'
+ }
+} {20 {2 0 42 3 2 1 54 8 3 0 42 3 3 1 54 8 3 0 129 3 3 0 143 3 3 0 240 3}}
+do_test fts1c-2.2 {
+ execsql {
+ SELECT rowid, offsets(email) FROM email
+ WHERE email MATCH 'subject:gas reminder'
+ }
+} {20 {2 0 42 3 2 1 54 8 3 1 54 8}}
+do_test fts1c-2.3 {
+ execsql {
+ SELECT rowid, offsets(email) FROM email
+ WHERE email MATCH 'body:gas reminder'
+ }
+} {20 {2 1 54 8 3 0 42 3 3 1 54 8 3 0 129 3 3 0 143 3 3 0 240 3}}
+do_test fts1c-2.4 {
+ execsql {
+ SELECT rowid, offsets(email) FROM email
+ WHERE subject MATCH 'gas reminder'
+ }
+} {20 {2 0 42 3 2 1 54 8}}
+do_test fts1c-2.5 {
+ execsql {
+ SELECT rowid, offsets(email) FROM email
+ WHERE body MATCH 'gas reminder'
+ }
+} {20 {3 0 42 3 3 1 54 8 3 0 129 3 3 0 143 3 3 0 240 3}}
+
+# Document 32 contains 5 instances of the word "child". But only
+# 3 of them are paired with "product". Make sure only those instances
+# that match the phrase appear in the offsets(email) list.
+#
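+# (The "+rowid=32" term below uses SQLite's unary "+" trick: it is
+# presumably there so the rowid equality cannot be used to drive the
+# query, forcing the MATCH term to be evaluated through the full-text
+# index while still restricting the output to document 32.)
+#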
+do_test fts1c-3.1 {
+ execsql {
+ SELECT rowid, offsets(email) FROM email
+ WHERE body MATCH 'child product' AND +rowid=32
+ }
+} {32 {3 0 94 5 3 0 114 5 3 0 207 5 3 1 213 7 3 0 245 5 3 1 251 7 3 0 409 5 3 1 415 7 3 1 493 7}}
+do_test fts1c-3.2 {
+ execsql {
+ SELECT rowid, offsets(email) FROM email
+ WHERE body MATCH '"child product"'
+ }
+} {32 {3 0 207 5 3 1 213 7 3 0 245 5 3 1 251 7 3 0 409 5 3 1 415 7}}
+
+# Snippet generator tests
+#
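+# (snippet() is called below both with its defaults and, in fts1c-4.4
+# and 4.5, with three extra arguments; judging from those results the
+# extra arguments override the default start markup "<b>", end markup
+# "</b>", and the "<b>...</b>" ellipsis used where text is elided.)
+#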
+do_test fts1c-4.1 {
+ execsql {
+ SELECT snippet(email) FROM email
+ WHERE email MATCH 'subject:gas reminder'
+ }
+} {{Alert Posted 10:00 AM November 20,2000: E-<b>GAS</b> Request <b>Reminder</b>}}
+do_test fts1c-4.2 {
+ execsql {
+ SELECT snippet(email) FROM email
+ WHERE email MATCH 'christmas candlelight'
+ }
+} {{<b>...</b> place.? What do you think about going here <b>Christmas</b>
+eve?? They have an 11:00 a.m. service and a <b>candlelight</b> service at 5:00 p.m.,
+among others. <b>...</b>}}
+
+do_test fts1c-4.3 {
+ execsql {
+ SELECT snippet(email) FROM email
+ WHERE email MATCH 'deal sheet potential reuse'
+ }
+} {{EOL-Accenture <b>Deal</b> <b>Sheet</b> <b>...</b> intent
+ Review Enron asset base for <b>potential</b> <b>reuse</b>/ licensing
+ Contract negotiations <b>...</b>}}
+do_test fts1c-4.4 {
+ execsql {
+ SELECT snippet(email,'<<<','>>>',' ') FROM email
+ WHERE email MATCH 'deal sheet potential reuse'
+ }
+} {{EOL-Accenture <<<Deal>>> <<<Sheet>>> intent
+ Review Enron asset base for <<<potential>>> <<<reuse>>>/ licensing
+ Contract negotiations }}
+do_test fts1c-4.5 {
+ execsql {
+ SELECT snippet(email,'<<<','>>>',' ') FROM email
+ WHERE email MATCH 'first things'
+ }
+} {{Re: <<<First>>> Polish Deal! Congrats! <<<Things>>> seem to be building rapidly now on the }}
+do_test fts1c-4.6 {
+ execsql {
+ SELECT snippet(email) FROM email
+ WHERE email MATCH 'chris is here'
+ }
+} {{<b>chris</b>.germany at enron.com <b>...</b> Sounds good to me. I bet this <b>is</b> next to the Warick?? Hotel. <b>...</b> place.? What do you think about going <b>here</b> Christmas
+eve?? They have an 11:00 a.m. <b>...</b>}}
+do_test fts1c-4.7 {
+ execsql {
+ SELECT snippet(email) FROM email
+ WHERE email MATCH '"pursuant to"'
+ }
+} {{Erin:
+
+<b>Pursuant</b> <b>to</b> your request, attached are the Schedule to <b>...</b>}}
+do_test fts1c-4.8 {
+ execsql {
+ SELECT snippet(email) FROM email
+ WHERE email MATCH 'ancillary load davis'
+ }
+} {{pete.<b>davis</b>@enron.com <b>...</b> Start Date: 4/22/01; HourAhead hour: 3; No <b>ancillary</b> schedules awarded.
+Variances detected.
+Variances detected in <b>Load</b> schedule.
+
+ LOG MESSAGES:
+
+PARSING <b>...</b>}}
+
+# Combinations of AND and OR operators:
+#
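+# (Judging from the expected results, OR binds more tightly than the
+# implicit AND between whitespace-separated terms, so a query such as
+# 'questar enron OR com' reads as questar AND (enron OR com); only the
+# one message containing "questar" is returned, not every message
+# containing "com".)
+#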
+do_test fts1c-5.1 {
+ execsql {
+ SELECT snippet(email) FROM email
+ WHERE email MATCH 'questar enron OR com'
+ }
+} {{matt.smith@<b>enron</b>.<b>com</b> <b>...</b> six reports:
+
+31 Keystone Receipts
+15 <b>Questar</b> Pipeline
+40 Rockies Production
+22 West_2 <b>...</b>}}
+do_test fts1c-5.2 {
+ execsql {
+ SELECT snippet(email) FROM email
+ WHERE email MATCH 'enron OR com questar'
+ }
+} {{matt.smith@<b>enron</b>.<b>com</b> <b>...</b> six reports:
+
+31 Keystone Receipts
+15 <b>Questar</b> Pipeline
+40 Rockies Production
+22 West_2 <b>...</b>}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/fts1d.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts1d.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,65 @@
+# 2006 October 1
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the FTS1 module, and in particular
+# the Porter stemmer.
+#
+# $Id: fts1d.test,v 1.1 2006/10/01 18:41:21 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS1 is not defined, omit this file.
+ifcapable !fts1 {
+ finish_test
+ return
+}
+
+do_test fts1d-1.1 {
+ execsql {
+ CREATE VIRTUAL TABLE t1 USING fts1(content, tokenize porter);
+ INSERT INTO t1(rowid, content) VALUES(1, 'running and jumping');
+ SELECT rowid FROM t1 WHERE content MATCH 'run jump';
+ }
+} {1}
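+# (The "tokenize porter" argument in fts1d-1.1 selects the Porter
+# stemmer tokenizer, so "running"/"jumping" and the query terms
+# "run"/"jump" are reduced to the same stems before indexing and
+# matching, which is why rowid 1 is found.)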
+do_test fts1d-1.2 {
+ execsql {
+ SELECT snippet(t1) FROM t1 WHERE t1 MATCH 'run jump';
+ }
+} {{<b>running</b> and <b>jumping</b>}}
+do_test fts1d-1.3 {
+ execsql {
+ INSERT INTO t1(rowid, content)
+ VALUES(2, 'abcdefghijklmnopqrstuvwyxz');
+ SELECT rowid, snippet(t1) FROM t1 WHERE t1 MATCH 'abcdefghijqrstuvwyxz'
+ }
+} {2 <b>abcdefghijklmnopqrstuvwyxz</b>}
+do_test fts1d-1.4 {
+ execsql {
+ SELECT rowid, snippet(t1) FROM t1 WHERE t1 MATCH 'abcdefghijXXXXqrstuvwyxz'
+ }
+} {2 <b>abcdefghijklmnopqrstuvwyxz</b>}
+do_test fts1d-1.5 {
+ execsql {
+ INSERT INTO t1(rowid, content)
+ VALUES(3, 'The value is 123456789');
+ SELECT rowid, snippet(t1) FROM t1 WHERE t1 MATCH '123789'
+ }
+} {3 {The value is <b>123456789</b>}}
+do_test fts1d-1.6 {
+ execsql {
+ SELECT rowid, snippet(t1) FROM t1 WHERE t1 MATCH '123000000789'
+ }
+} {3 {The value is <b>123456789</b>}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/fts1porter.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts1porter.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,23590 @@
+# 2006 October 1
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the FTS1 module, and in particular
+# the Porter stemmer.
+#
+# $Id: fts1porter.test,v 1.5 2006/10/03 19:37:37 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS1 is not defined, omit this file.
+ifcapable !fts1 {
+ finish_test
+ return
+}
+
+# Test data for the Porter stemmer. The first word of each line
+# is the input. The second word is the desired output.
+#
+# This test data is taken from http://www.tartarus.org/martin/PorterStemmer/
+# There is no claim of copyright made on that page, but you should
+# probably contact the author (Martin Porter - the inventor of the
+# Porter Stemmer algorithm) if you want to use this test data in a
+# commercial product of some kind. The stemmer code in FTS1 is a
+# complete rewrite from scratch based on the algorithm specification
+# and does not contain any code under copyright.
+#
+set porter_test_data {
+ a a
+ aaron aaron
+ abaissiez abaissiez
+ abandon abandon
+ abandoned abandon
+ abase abas
+ abash abash
+ abate abat
+ abated abat
+ abatement abat
+ abatements abat
+ abates abat
+ abbess abbess
+ abbey abbei
+ abbeys abbei
+ abbominable abbomin
+ abbot abbot
+ abbots abbot
+ abbreviated abbrevi
+ abed ab
+ abel abel
+ aberga aberga
+ abergavenny abergavenni
+ abet abet
+ abetting abet
+ abhominable abhomin
+ abhor abhor
+ abhorr abhorr
+ abhorred abhor
+ abhorring abhor
+ abhors abhor
+ abhorson abhorson
+ abide abid
+ abides abid
+ abilities abil
+ ability abil
+ abject abject
+ abjectly abjectli
+ abjects abject
+ abjur abjur
+ abjure abjur
+ able abl
+ abler abler
+ aboard aboard
+ abode abod
+ aboded abod
+ abodements abod
+ aboding abod
+ abominable abomin
+ abominably abomin
+ abominations abomin
+ abortive abort
+ abortives abort
+ abound abound
+ abounding abound
+ about about
+ above abov
+ abr abr
+ abraham abraham
+ abram abram
+ abreast abreast
+ abridg abridg
+ abridge abridg
+ abridged abridg
+ abridgment abridg
+ abroach abroach
+ abroad abroad
+ abrogate abrog
+ abrook abrook
+ abrupt abrupt
+ abruption abrupt
+ abruptly abruptli
+ absence absenc
+ absent absent
+ absey absei
+ absolute absolut
+ absolutely absolut
+ absolv absolv
+ absolver absolv
+ abstains abstain
+ abstemious abstemi
+ abstinence abstin
+ abstract abstract
+ absurd absurd
+ absyrtus absyrtu
+ abundance abund
+ abundant abund
+ abundantly abundantli
+ abus abu
+ abuse abus
+ abused abus
+ abuser abus
+ abuses abus
+ abusing abus
+ abutting abut
+ aby abi
+ abysm abysm
+ ac ac
+ academe academ
+ academes academ
+ accent accent
+ accents accent
+ accept accept
+ acceptable accept
+ acceptance accept
+ accepted accept
+ accepts accept
+ access access
+ accessary accessari
+ accessible access
+ accidence accid
+ accident accid
+ accidental accident
+ accidentally accident
+ accidents accid
+ accite accit
+ accited accit
+ accites accit
+ acclamations acclam
+ accommodate accommod
+ accommodated accommod
+ accommodation accommod
+ accommodations accommod
+ accommodo accommodo
+ accompanied accompani
+ accompany accompani
+ accompanying accompani
+ accomplices accomplic
+ accomplish accomplish
+ accomplished accomplish
+ accomplishing accomplish
+ accomplishment accomplish
+ accompt accompt
+ accord accord
+ accordant accord
+ accorded accord
+ accordeth accordeth
+ according accord
+ accordingly accordingli
+ accords accord
+ accost accost
+ accosted accost
+ account account
+ accountant account
+ accounted account
+ accounts account
+ accoutred accoutr
+ accoutrement accoutr
+ accoutrements accoutr
+ accrue accru
+ accumulate accumul
+ accumulated accumul
+ accumulation accumul
+ accurs accur
+ accursed accurs
+ accurst accurst
+ accus accu
+ accusation accus
+ accusations accus
+ accusative accus
+ accusativo accusativo
+ accuse accus
+ accused accus
+ accuser accus
+ accusers accus
+ accuses accus
+ accuseth accuseth
+ accusing accus
+ accustom accustom
+ accustomed accustom
+ ace ac
+ acerb acerb
+ ache ach
+ acheron acheron
+ aches ach
+ achiev achiev
+ achieve achiev
+ achieved achiev
+ achievement achiev
+ achievements achiev
+ achiever achiev
+ achieves achiev
+ achieving achiev
+ achilles achil
+ aching ach
+ achitophel achitophel
+ acknowledg acknowledg
+ acknowledge acknowledg
+ acknowledged acknowledg
+ acknowledgment acknowledg
+ acknown acknown
+ acold acold
+ aconitum aconitum
+ acordo acordo
+ acorn acorn
+ acquaint acquaint
+ acquaintance acquaint
+ acquainted acquaint
+ acquaints acquaint
+ acquir acquir
+ acquire acquir
+ acquisition acquisit
+ acquit acquit
+ acquittance acquitt
+ acquittances acquitt
+ acquitted acquit
+ acre acr
+ acres acr
+ across across
+ act act
+ actaeon actaeon
+ acted act
+ acting act
+ action action
+ actions action
+ actium actium
+ active activ
+ actively activ
+ activity activ
+ actor actor
+ actors actor
+ acts act
+ actual actual
+ acture actur
+ acute acut
+ acutely acut
+ ad ad
+ adage adag
+ adallas adalla
+ adam adam
+ adamant adam
+ add add
+ added ad
+ adder adder
+ adders adder
+ addeth addeth
+ addict addict
+ addicted addict
+ addiction addict
+ adding ad
+ addition addit
+ additions addit
+ addle addl
+ address address
+ addressing address
+ addrest addrest
+ adds add
+ adhere adher
+ adheres adher
+ adieu adieu
+ adieus adieu
+ adjacent adjac
+ adjoin adjoin
+ adjoining adjoin
+ adjourn adjourn
+ adjudg adjudg
+ adjudged adjudg
+ adjunct adjunct
+ administer administ
+ administration administr
+ admir admir
+ admirable admir
+ admiral admir
+ admiration admir
+ admire admir
+ admired admir
+ admirer admir
+ admiring admir
+ admiringly admiringli
+ admission admiss
+ admit admit
+ admits admit
+ admittance admitt
+ admitted admit
+ admitting admit
+ admonish admonish
+ admonishing admonish
+ admonishment admonish
+ admonishments admonish
+ admonition admonit
+ ado ado
+ adonis adoni
+ adopt adopt
+ adopted adopt
+ adoptedly adoptedli
+ adoption adopt
+ adoptious adopti
+ adopts adopt
+ ador ador
+ adoration ador
+ adorations ador
+ adore ador
+ adorer ador
+ adores ador
+ adorest adorest
+ adoreth adoreth
+ adoring ador
+ adorn adorn
+ adorned adorn
+ adornings adorn
+ adornment adorn
+ adorns adorn
+ adown adown
+ adramadio adramadio
+ adrian adrian
+ adriana adriana
+ adriano adriano
+ adriatic adriat
+ adsum adsum
+ adulation adul
+ adulterate adulter
+ adulterates adulter
+ adulterers adulter
+ adulteress adulteress
+ adulteries adulteri
+ adulterous adulter
+ adultery adulteri
+ adultress adultress
+ advanc advanc
+ advance advanc
+ advanced advanc
+ advancement advanc
+ advancements advanc
+ advances advanc
+ advancing advanc
+ advantage advantag
+ advantageable advantag
+ advantaged advantag
+ advantageous advantag
+ advantages advantag
+ advantaging advantag
+ advent advent
+ adventur adventur
+ adventure adventur
+ adventures adventur
+ adventuring adventur
+ adventurous adventur
+ adventurously adventur
+ adversaries adversari
+ adversary adversari
+ adverse advers
+ adversely advers
+ adversities advers
+ adversity advers
+ advertis adverti
+ advertise advertis
+ advertised advertis
+ advertisement advertis
+ advertising advertis
+ advice advic
+ advis advi
+ advise advis
+ advised advis
+ advisedly advisedli
+ advises advis
+ advisings advis
+ advocate advoc
+ advocation advoc
+ aeacida aeacida
+ aeacides aeacid
+ aedile aedil
+ aediles aedil
+ aegeon aegeon
+ aegion aegion
+ aegles aegl
+ aemelia aemelia
+ aemilia aemilia
+ aemilius aemiliu
+ aeneas aenea
+ aeolus aeolu
+ aer aer
+ aerial aerial
+ aery aeri
+ aesculapius aesculapiu
+ aeson aeson
+ aesop aesop
+ aetna aetna
+ afar afar
+ afear afear
+ afeard afeard
+ affability affabl
+ affable affabl
+ affair affair
+ affaire affair
+ affairs affair
+ affect affect
+ affectation affect
+ affectations affect
+ affected affect
+ affectedly affectedli
+ affecteth affecteth
+ affecting affect
+ affection affect
+ affectionate affection
+ affectionately affection
+ affections affect
+ affects affect
+ affeer affeer
+ affianc affianc
+ affiance affianc
+ affianced affianc
+ affied affi
+ affin affin
+ affined affin
+ affinity affin
+ affirm affirm
+ affirmation affirm
+ affirmatives affirm
+ afflict afflict
+ afflicted afflict
+ affliction afflict
+ afflictions afflict
+ afflicts afflict
+ afford afford
+ affordeth affordeth
+ affords afford
+ affray affrai
+ affright affright
+ affrighted affright
+ affrights affright
+ affront affront
+ affronted affront
+ affy affi
+ afield afield
+ afire afir
+ afloat afloat
+ afoot afoot
+ afore afor
+ aforehand aforehand
+ aforesaid aforesaid
+ afraid afraid
+ afresh afresh
+ afric afric
+ africa africa
+ african african
+ afront afront
+ after after
+ afternoon afternoon
+ afterward afterward
+ afterwards afterward
+ ag ag
+ again again
+ against against
+ agamemmon agamemmon
+ agamemnon agamemnon
+ agate agat
+ agaz agaz
+ age ag
+ aged ag
+ agenor agenor
+ agent agent
+ agents agent
+ ages ag
+ aggravate aggrav
+ aggrief aggrief
+ agile agil
+ agincourt agincourt
+ agitation agit
+ aglet aglet
+ agnize agniz
+ ago ago
+ agone agon
+ agony agoni
+ agree agre
+ agreed agre
+ agreeing agre
+ agreement agreement
+ agrees agre
+ agrippa agrippa
+ aground aground
+ ague agu
+ aguecheek aguecheek
+ agued agu
+ agueface aguefac
+ agues agu
+ ah ah
+ aha aha
+ ahungry ahungri
+ ai ai
+ aialvolio aialvolio
+ aiaria aiaria
+ aid aid
+ aidance aidanc
+ aidant aidant
+ aided aid
+ aiding aid
+ aidless aidless
+ aids aid
+ ail ail
+ aim aim
+ aimed aim
+ aimest aimest
+ aiming aim
+ aims aim
+ ainsi ainsi
+ aio aio
+ air air
+ aired air
+ airless airless
+ airs air
+ airy airi
+ ajax ajax
+ akilling akil
+ al al
+ alabaster alabast
+ alack alack
+ alacrity alacr
+ alarbus alarbu
+ alarm alarm
+ alarms alarm
+ alarum alarum
+ alarums alarum
+ alas ala
+ alb alb
+ alban alban
+ albans alban
+ albany albani
+ albeit albeit
+ albion albion
+ alchemist alchemist
+ alchemy alchemi
+ alcibiades alcibiad
+ alcides alcid
+ alder alder
+ alderman alderman
+ aldermen aldermen
+ ale al
+ alecto alecto
+ alehouse alehous
+ alehouses alehous
+ alencon alencon
+ alengon alengon
+ aleppo aleppo
+ ales al
+ alewife alewif
+ alexander alexand
+ alexanders alexand
+ alexandria alexandria
+ alexandrian alexandrian
+ alexas alexa
+ alias alia
+ alice alic
+ alien alien
+ aliena aliena
+ alight alight
+ alighted alight
+ alights alight
+ aliis alii
+ alike alik
+ alisander alisand
+ alive aliv
+ all all
+ alla alla
+ allay allai
+ allayed allai
+ allaying allai
+ allayment allay
+ allayments allay
+ allays allai
+ allegation alleg
+ allegations alleg
+ allege alleg
+ alleged alleg
+ allegiance allegi
+ allegiant allegi
+ alley allei
+ alleys allei
+ allhallowmas allhallowma
+ alliance allianc
+ allicholy allicholi
+ allied alli
+ allies alli
+ alligant allig
+ alligator allig
+ allons allon
+ allot allot
+ allots allot
+ allotted allot
+ allottery allotteri
+ allow allow
+ allowance allow
+ allowed allow
+ allowing allow
+ allows allow
+ allur allur
+ allure allur
+ allurement allur
+ alluring allur
+ allusion allus
+ ally alli
+ allycholly allycholli
+ almain almain
+ almanac almanac
+ almanack almanack
+ almanacs almanac
+ almighty almighti
+ almond almond
+ almost almost
+ alms alm
+ almsman almsman
+ aloes alo
+ aloft aloft
+ alone alon
+ along along
+ alonso alonso
+ aloof aloof
+ aloud aloud
+ alphabet alphabet
+ alphabetical alphabet
+ alphonso alphonso
+ alps alp
+ already alreadi
+ also also
+ alt alt
+ altar altar
+ altars altar
+ alter alter
+ alteration alter
+ altered alter
+ alters alter
+ althaea althaea
+ although although
+ altitude altitud
+ altogether altogeth
+ alton alton
+ alway alwai
+ always alwai
+ am am
+ amaimon amaimon
+ amain amain
+ amaking amak
+ amamon amamon
+ amaz amaz
+ amaze amaz
+ amazed amaz
+ amazedly amazedli
+ amazedness amazed
+ amazement amaz
+ amazes amaz
+ amazeth amazeth
+ amazing amaz
+ amazon amazon
+ amazonian amazonian
+ amazons amazon
+ ambassador ambassador
+ ambassadors ambassador
+ amber amber
+ ambiguides ambiguid
+ ambiguities ambigu
+ ambiguous ambigu
+ ambition ambit
+ ambitions ambit
+ ambitious ambiti
+ ambitiously ambiti
+ amble ambl
+ ambled ambl
+ ambles ambl
+ ambling ambl
+ ambo ambo
+ ambuscadoes ambuscado
+ ambush ambush
+ amen amen
+ amend amend
+ amended amend
+ amendment amend
+ amends amend
+ amerce amerc
+ america america
+ ames am
+ amiable amiabl
+ amid amid
+ amidst amidst
+ amiens amien
+ amis ami
+ amiss amiss
+ amities amiti
+ amity amiti
+ amnipotent amnipot
+ among among
+ amongst amongst
+ amorous amor
+ amorously amor
+ amort amort
+ amount amount
+ amounts amount
+ amour amour
+ amphimacus amphimacu
+ ample ampl
+ ampler ampler
+ amplest amplest
+ amplified amplifi
+ amplify amplifi
+ amply ampli
+ ampthill ampthil
+ amurath amurath
+ amyntas amynta
+ an an
+ anatomiz anatomiz
+ anatomize anatom
+ anatomy anatomi
+ ancestor ancestor
+ ancestors ancestor
+ ancestry ancestri
+ anchises anchis
+ anchor anchor
+ anchorage anchorag
+ anchored anchor
+ anchoring anchor
+ anchors anchor
+ anchovies anchovi
+ ancient ancient
+ ancientry ancientri
+ ancients ancient
+ ancus ancu
+ and and
+ andirons andiron
+ andpholus andpholu
+ andren andren
+ andrew andrew
+ andromache andromach
+ andronici andronici
+ andronicus andronicu
+ anew anew
+ ang ang
+ angel angel
+ angelica angelica
+ angelical angel
+ angelo angelo
+ angels angel
+ anger anger
+ angerly angerli
+ angers anger
+ anges ang
+ angiers angier
+ angl angl
+ anglais anglai
+ angle angl
+ angler angler
+ angleterre angleterr
+ angliae anglia
+ angling angl
+ anglish anglish
+ angrily angrili
+ angry angri
+ anguish anguish
+ angus angu
+ animal anim
+ animals anim
+ animis animi
+ anjou anjou
+ ankle ankl
+ anna anna
+ annals annal
+ anne ann
+ annex annex
+ annexed annex
+ annexions annexion
+ annexment annex
+ annothanize annothan
+ announces announc
+ annoy annoi
+ annoyance annoy
+ annoying annoi
+ annual annual
+ anoint anoint
+ anointed anoint
+ anon anon
+ another anoth
+ anselmo anselmo
+ answer answer
+ answerable answer
+ answered answer
+ answerest answerest
+ answering answer
+ answers answer
+ ant ant
+ ante ant
+ antenor antenor
+ antenorides antenorid
+ anteroom anteroom
+ anthem anthem
+ anthems anthem
+ anthony anthoni
+ anthropophagi anthropophagi
+ anthropophaginian anthropophaginian
+ antiates antiat
+ antic antic
+ anticipate anticip
+ anticipates anticip
+ anticipatest anticipatest
+ anticipating anticip
+ anticipation anticip
+ antick antick
+ anticly anticli
+ antics antic
+ antidote antidot
+ antidotes antidot
+ antigonus antigonu
+ antiopa antiopa
+ antipathy antipathi
+ antipholus antipholu
+ antipholuses antipholus
+ antipodes antipod
+ antiquary antiquari
+ antique antiqu
+ antiquity antiqu
+ antium antium
+ antoniad antoniad
+ antonio antonio
+ antonius antoniu
+ antony antoni
+ antres antr
+ anvil anvil
+ any ani
+ anybody anybodi
+ anyone anyon
+ anything anyth
+ anywhere anywher
+ ap ap
+ apace apac
+ apart apart
+ apartment apart
+ apartments apart
+ ape ap
+ apemantus apemantu
+ apennines apennin
+ apes ap
+ apiece apiec
+ apish apish
+ apollinem apollinem
+ apollo apollo
+ apollodorus apollodoru
+ apology apolog
+ apoplex apoplex
+ apoplexy apoplexi
+ apostle apostl
+ apostles apostl
+ apostrophas apostropha
+ apoth apoth
+ apothecary apothecari
+ appal appal
+ appall appal
+ appalled appal
+ appals appal
+ apparel apparel
+ apparell apparel
+ apparelled apparel
+ apparent appar
+ apparently appar
+ apparition apparit
+ apparitions apparit
+ appeach appeach
+ appeal appeal
+ appeals appeal
+ appear appear
+ appearance appear
+ appeared appear
+ appeareth appeareth
+ appearing appear
+ appears appear
+ appeas appea
+ appease appeas
+ appeased appeas
+ appelant appel
+ appele appel
+ appelee appele
+ appeles appel
+ appelez appelez
+ appellant appel
+ appellants appel
+ appelons appelon
+ appendix appendix
+ apperil apperil
+ appertain appertain
+ appertaining appertain
+ appertainings appertain
+ appertains appertain
+ appertinent appertin
+ appertinents appertin
+ appetite appetit
+ appetites appetit
+ applaud applaud
+ applauded applaud
+ applauding applaud
+ applause applaus
+ applauses applaus
+ apple appl
+ apples appl
+ appletart appletart
+ appliance applianc
+ appliances applianc
+ applications applic
+ applied appli
+ applies appli
+ apply appli
+ applying appli
+ appoint appoint
+ appointed appoint
+ appointment appoint
+ appointments appoint
+ appoints appoint
+ apprehend apprehend
+ apprehended apprehend
+ apprehends apprehend
+ apprehension apprehens
+ apprehensions apprehens
+ apprehensive apprehens
+ apprendre apprendr
+ apprenne apprenn
+ apprenticehood apprenticehood
+ appris appri
+ approach approach
+ approachers approach
+ approaches approach
+ approacheth approacheth
+ approaching approach
+ approbation approb
+ approof approof
+ appropriation appropri
+ approv approv
+ approve approv
+ approved approv
+ approvers approv
+ approves approv
+ appurtenance appurten
+ appurtenances appurten
+ apricocks apricock
+ april april
+ apron apron
+ aprons apron
+ apt apt
+ apter apter
+ aptest aptest
+ aptly aptli
+ aptness apt
+ aqua aqua
+ aquilon aquilon
+ aquitaine aquitain
+ arabia arabia
+ arabian arabian
+ araise arais
+ arbitrate arbitr
+ arbitrating arbitr
+ arbitrator arbitr
+ arbitrement arbitr
+ arbors arbor
+ arbour arbour
+ arc arc
+ arch arch
+ archbishop archbishop
+ archbishopric archbishopr
+ archdeacon archdeacon
+ arched arch
+ archelaus archelau
+ archer archer
+ archers archer
+ archery archeri
+ archibald archibald
+ archidamus archidamu
+ architect architect
+ arcu arcu
+ arde ard
+ arden arden
+ ardent ardent
+ ardour ardour
+ are ar
+ argal argal
+ argier argier
+ argo argo
+ argosies argosi
+ argosy argosi
+ argu argu
+ argue argu
+ argued argu
+ argues argu
+ arguing argu
+ argument argument
+ arguments argument
+ argus argu
+ ariachne ariachn
+ ariadne ariadn
+ ariel ariel
+ aries ari
+ aright aright
+ arinado arinado
+ arinies arini
+ arion arion
+ arise aris
+ arises aris
+ ariseth ariseth
+ arising aris
+ aristode aristod
+ aristotle aristotl
+ arithmetic arithmet
+ arithmetician arithmetician
+ ark ark
+ arm arm
+ arma arma
+ armado armado
+ armadoes armado
+ armagnac armagnac
+ arme arm
+ armed arm
+ armenia armenia
+ armies armi
+ armigero armigero
+ arming arm
+ armipotent armipot
+ armor armor
+ armour armour
+ armourer armour
+ armourers armour
+ armours armour
+ armoury armouri
+ arms arm
+ army armi
+ arn arn
+ aroint aroint
+ arose aros
+ arouse arous
+ aroused arous
+ arragon arragon
+ arraign arraign
+ arraigned arraign
+ arraigning arraign
+ arraignment arraign
+ arrant arrant
+ arras arra
+ array arrai
+ arrearages arrearag
+ arrest arrest
+ arrested arrest
+ arrests arrest
+ arriv arriv
+ arrival arriv
+ arrivance arriv
+ arrive arriv
+ arrived arriv
+ arrives arriv
+ arriving arriv
+ arrogance arrog
+ arrogancy arrog
+ arrogant arrog
+ arrow arrow
+ arrows arrow
+ art art
+ artemidorus artemidoru
+ arteries arteri
+ arthur arthur
+ article articl
+ articles articl
+ articulate articul
+ artificer artific
+ artificial artifici
+ artillery artilleri
+ artire artir
+ artist artist
+ artists artist
+ artless artless
+ artois artoi
+ arts art
+ artus artu
+ arviragus arviragu
+ as as
+ asaph asaph
+ ascanius ascaniu
+ ascend ascend
+ ascended ascend
+ ascendeth ascendeth
+ ascends ascend
+ ascension ascens
+ ascent ascent
+ ascribe ascrib
+ ascribes ascrib
+ ash ash
+ asham asham
+ ashamed asham
+ asher asher
+ ashes ash
+ ashford ashford
+ ashore ashor
+ ashouting ashout
+ ashy ashi
+ asia asia
+ aside asid
+ ask ask
+ askance askanc
+ asked ask
+ asker asker
+ asketh asketh
+ asking ask
+ asks ask
+ aslant aslant
+ asleep asleep
+ asmath asmath
+ asp asp
+ aspect aspect
+ aspects aspect
+ aspen aspen
+ aspersion aspers
+ aspic aspic
+ aspicious aspici
+ aspics aspic
+ aspir aspir
+ aspiration aspir
+ aspire aspir
+ aspiring aspir
+ asquint asquint
+ ass ass
+ assail assail
+ assailable assail
+ assailant assail
+ assailants assail
+ assailed assail
+ assaileth assaileth
+ assailing assail
+ assails assail
+ assassination assassin
+ assault assault
+ assaulted assault
+ assaults assault
+ assay assai
+ assaying assai
+ assays assai
+ assemblance assembl
+ assemble assembl
+ assembled assembl
+ assemblies assembl
+ assembly assembl
+ assent assent
+ asses ass
+ assez assez
+ assign assign
+ assigned assign
+ assigns assign
+ assinico assinico
+ assist assist
+ assistance assist
+ assistances assist
+ assistant assist
+ assistants assist
+ assisted assist
+ assisting assist
+ associate associ
+ associated associ
+ associates associ
+ assuage assuag
+ assubjugate assubjug
+ assum assum
+ assume assum
+ assumes assum
+ assumption assumpt
+ assur assur
+ assurance assur
+ assure assur
+ assured assur
+ assuredly assuredli
+ assures assur
+ assyrian assyrian
+ astonish astonish
+ astonished astonish
+ astraea astraea
+ astray astrai
+ astrea astrea
+ astronomer astronom
+ astronomers astronom
+ astronomical astronom
+ astronomy astronomi
+ asunder asund
+ at at
+ atalanta atalanta
+ ate at
+ ates at
+ athenian athenian
+ athenians athenian
+ athens athen
+ athol athol
+ athversary athversari
+ athwart athwart
+ atlas atla
+ atomies atomi
+ atomy atomi
+ atone aton
+ atonement aton
+ atonements aton
+ atropos atropo
+ attach attach
+ attached attach
+ attachment attach
+ attain attain
+ attainder attaind
+ attains attain
+ attaint attaint
+ attainted attaint
+ attainture attaintur
+ attempt attempt
+ attemptable attempt
+ attempted attempt
+ attempting attempt
+ attempts attempt
+ attend attend
+ attendance attend
+ attendant attend
+ attendants attend
+ attended attend
+ attendents attend
+ attendeth attendeth
+ attending attend
+ attends attend
+ attent attent
+ attention attent
+ attentive attent
+ attentivenes attentiven
+ attest attest
+ attested attest
+ attir attir
+ attire attir
+ attired attir
+ attires attir
+ attorney attornei
+ attorneyed attornei
+ attorneys attornei
+ attorneyship attorneyship
+ attract attract
+ attraction attract
+ attractive attract
+ attracts attract
+ attribute attribut
+ attributed attribut
+ attributes attribut
+ attribution attribut
+ attributive attribut
+ atwain atwain
+ au au
+ aubrey aubrei
+ auburn auburn
+ aucun aucun
+ audacious audaci
+ audaciously audaci
+ audacity audac
+ audible audibl
+ audience audienc
+ audis audi
+ audit audit
+ auditor auditor
+ auditors auditor
+ auditory auditori
+ audre audr
+ audrey audrei
+ aufidius aufidiu
+ aufidiuses aufidius
+ auger auger
+ aught aught
+ augment augment
+ augmentation augment
+ augmented augment
+ augmenting augment
+ augurer augur
+ augurers augur
+ augures augur
+ auguring augur
+ augurs augur
+ augury auguri
+ august august
+ augustus augustu
+ auld auld
+ aumerle aumerl
+ aunchient aunchient
+ aunt aunt
+ aunts aunt
+ auricular auricular
+ aurora aurora
+ auspicious auspici
+ aussi aussi
+ austere auster
+ austerely auster
+ austereness auster
+ austerity auster
+ austria austria
+ aut aut
+ authentic authent
+ author author
+ authorities author
+ authority author
+ authorized author
+ authorizing author
+ authors author
+ autolycus autolycu
+ autre autr
+ autumn autumn
+ auvergne auvergn
+ avail avail
+ avails avail
+ avarice avaric
+ avaricious avarici
+ avaunt avaunt
+ ave av
+ aveng aveng
+ avenge aveng
+ avenged aveng
+ averring aver
+ avert avert
+ aves av
+ avez avez
+ avis avi
+ avoid avoid
+ avoided avoid
+ avoiding avoid
+ avoids avoid
+ avoirdupois avoirdupoi
+ avouch avouch
+ avouched avouch
+ avouches avouch
+ avouchment avouch
+ avow avow
+ aw aw
+ await await
+ awaits await
+ awak awak
+ awake awak
+ awaked awak
+ awaken awaken
+ awakened awaken
+ awakens awaken
+ awakes awak
+ awaking awak
+ award award
+ awards award
+ awasy awasi
+ away awai
+ awe aw
+ aweary aweari
+ aweless aweless
+ awful aw
+ awhile awhil
+ awkward awkward
+ awl awl
+ awooing awoo
+ awork awork
+ awry awri
+ axe ax
+ axle axl
+ axletree axletre
+ ay ay
+ aye ay
+ ayez ayez
+ ayli ayli
+ azur azur
+ azure azur
+ b b
+ ba ba
+ baa baa
+ babbl babbl
+ babble babbl
+ babbling babbl
+ babe babe
+ babes babe
+ babies babi
+ baboon baboon
+ baboons baboon
+ baby babi
+ babylon babylon
+ bacare bacar
+ bacchanals bacchan
+ bacchus bacchu
+ bach bach
+ bachelor bachelor
+ bachelors bachelor
+ back back
+ backbite backbit
+ backbitten backbitten
+ backing back
+ backs back
+ backward backward
+ backwardly backwardli
+ backwards backward
+ bacon bacon
+ bacons bacon
+ bad bad
+ bade bade
+ badge badg
+ badged badg
+ badges badg
+ badly badli
+ badness bad
+ baes bae
+ baffl baffl
+ baffle baffl
+ baffled baffl
+ bag bag
+ baggage baggag
+ bagot bagot
+ bagpipe bagpip
+ bags bag
+ bail bail
+ bailiff bailiff
+ baillez baillez
+ baily baili
+ baisant baisant
+ baisees baise
+ baiser baiser
+ bait bait
+ baited bait
+ baiting bait
+ baitings bait
+ baits bait
+ bajazet bajazet
+ bak bak
+ bake bake
+ baked bake
+ baker baker
+ bakers baker
+ bakes bake
+ baking bake
+ bal bal
+ balanc balanc
+ balance balanc
+ balcony balconi
+ bald bald
+ baldrick baldrick
+ bale bale
+ baleful bale
+ balk balk
+ ball ball
+ ballad ballad
+ ballads ballad
+ ballast ballast
+ ballasting ballast
+ ballet ballet
+ ballow ballow
+ balls ball
+ balm balm
+ balms balm
+ balmy balmi
+ balsam balsam
+ balsamum balsamum
+ balth balth
+ balthasar balthasar
+ balthazar balthazar
+ bames bame
+ ban ban
+ banbury banburi
+ band band
+ bandied bandi
+ banding band
+ bandit bandit
+ banditti banditti
+ banditto banditto
+ bands band
+ bandy bandi
+ bandying bandi
+ bane bane
+ banes bane
+ bang bang
+ bangor bangor
+ banish banish
+ banished banish
+ banishers banish
+ banishment banish
+ banister banist
+ bank bank
+ bankrout bankrout
+ bankrupt bankrupt
+ bankrupts bankrupt
+ banks bank
+ banner banner
+ bannerets banneret
+ banners banner
+ banning ban
+ banns bann
+ banquet banquet
+ banqueted banquet
+ banqueting banquet
+ banquets banquet
+ banquo banquo
+ bans ban
+ baptism baptism
+ baptista baptista
+ baptiz baptiz
+ bar bar
+ barbarian barbarian
+ barbarians barbarian
+ barbarism barbar
+ barbarous barbar
+ barbary barbari
+ barbason barbason
+ barbed barb
+ barber barber
+ barbermonger barbermong
+ bard bard
+ bardolph bardolph
+ bards bard
+ bare bare
+ bared bare
+ barefac barefac
+ barefaced barefac
+ barefoot barefoot
+ bareheaded barehead
+ barely bare
+ bareness bare
+ barful bar
+ bargain bargain
+ bargains bargain
+ barge barg
+ bargulus bargulu
+ baring bare
+ bark bark
+ barking bark
+ barkloughly barkloughli
+ barks bark
+ barky barki
+ barley barlei
+ barm barm
+ barn barn
+ barnacles barnacl
+ barnardine barnardin
+ barne barn
+ barnes barn
+ barnet barnet
+ barns barn
+ baron baron
+ barons baron
+ barony baroni
+ barr barr
+ barrabas barraba
+ barrel barrel
+ barrels barrel
+ barren barren
+ barrenly barrenli
+ barrenness barren
+ barricado barricado
+ barricadoes barricado
+ barrow barrow
+ bars bar
+ barson barson
+ barter barter
+ bartholomew bartholomew
+ bas ba
+ basan basan
+ base base
+ baseless baseless
+ basely base
+ baseness base
+ baser baser
+ bases base
+ basest basest
+ bashful bash
+ bashfulness bash
+ basilisco basilisco
+ basilisk basilisk
+ basilisks basilisk
+ basimecu basimecu
+ basin basin
+ basingstoke basingstok
+ basins basin
+ basis basi
+ bask bask
+ basket basket
+ baskets basket
+ bass bass
+ bassanio bassanio
+ basset basset
+ bassianus bassianu
+ basta basta
+ bastard bastard
+ bastardizing bastard
+ bastardly bastardli
+ bastards bastard
+ bastardy bastardi
+ basted bast
+ bastes bast
+ bastinado bastinado
+ basting bast
+ bat bat
+ batailles batail
+ batch batch
+ bate bate
+ bated bate
+ bates bate
+ bath bath
+ bathe bath
+ bathed bath
+ bathing bath
+ baths bath
+ bating bate
+ batler batler
+ bats bat
+ batt batt
+ battalia battalia
+ battalions battalion
+ batten batten
+ batter batter
+ battering batter
+ batters batter
+ battery batteri
+ battle battl
+ battled battl
+ battlefield battlefield
+ battlements battlement
+ battles battl
+ batty batti
+ bauble baubl
+ baubles baubl
+ baubling baubl
+ baulk baulk
+ bavin bavin
+ bawcock bawcock
+ bawd bawd
+ bawdry bawdri
+ bawds bawd
+ bawdy bawdi
+ bawl bawl
+ bawling bawl
+ bay bai
+ baying bai
+ baynard baynard
+ bayonne bayonn
+ bays bai
+ be be
+ beach beach
+ beached beach
+ beachy beachi
+ beacon beacon
+ bead bead
+ beaded bead
+ beadle beadl
+ beadles beadl
+ beads bead
+ beadsmen beadsmen
+ beagle beagl
+ beagles beagl
+ beak beak
+ beaks beak
+ beam beam
+ beamed beam
+ beams beam
+ bean bean
+ beans bean
+ bear bear
+ beard beard
+ bearded beard
+ beardless beardless
+ beards beard
+ bearer bearer
+ bearers bearer
+ bearest bearest
+ beareth beareth
+ bearing bear
+ bears bear
+ beast beast
+ beastliest beastliest
+ beastliness beastli
+ beastly beastli
+ beasts beast
+ beat beat
+ beated beat
+ beaten beaten
+ beating beat
+ beatrice beatric
+ beats beat
+ beau beau
+ beaufort beaufort
+ beaumond beaumond
+ beaumont beaumont
+ beauteous beauteou
+ beautied beauti
+ beauties beauti
+ beautified beautifi
+ beautiful beauti
+ beautify beautifi
+ beauty beauti
+ beaver beaver
+ beavers beaver
+ became becam
+ because becaus
+ bechanc bechanc
+ bechance bechanc
+ bechanced bechanc
+ beck beck
+ beckon beckon
+ beckons beckon
+ becks beck
+ becom becom
+ become becom
+ becomed becom
+ becomes becom
+ becoming becom
+ becomings becom
+ bed bed
+ bedabbled bedabbl
+ bedash bedash
+ bedaub bedaub
+ bedazzled bedazzl
+ bedchamber bedchamb
+ bedclothes bedcloth
+ bedded bed
+ bedeck bedeck
+ bedecking bedeck
+ bedew bedew
+ bedfellow bedfellow
+ bedfellows bedfellow
+ bedford bedford
+ bedlam bedlam
+ bedrench bedrench
+ bedrid bedrid
+ beds bed
+ bedtime bedtim
+ bedward bedward
+ bee bee
+ beef beef
+ beefs beef
+ beehives beehiv
+ been been
+ beer beer
+ bees bee
+ beest beest
+ beetle beetl
+ beetles beetl
+ beeves beev
+ befall befal
+ befallen befallen
+ befalls befal
+ befell befel
+ befits befit
+ befitted befit
+ befitting befit
+ befor befor
+ before befor
+ beforehand beforehand
+ befortune befortun
+ befriend befriend
+ befriended befriend
+ befriends befriend
+ beg beg
+ began began
+ beget beget
+ begets beget
+ begetting beget
+ begg begg
+ beggar beggar
+ beggared beggar
+ beggarly beggarli
+ beggarman beggarman
+ beggars beggar
+ beggary beggari
+ begging beg
+ begin begin
+ beginners beginn
+ beginning begin
+ beginnings begin
+ begins begin
+ begnawn begnawn
+ begone begon
+ begot begot
+ begotten begotten
+ begrimed begrim
+ begs beg
+ beguil beguil
+ beguile beguil
+ beguiled beguil
+ beguiles beguil
+ beguiling beguil
+ begun begun
+ behalf behalf
+ behalfs behalf
+ behav behav
+ behaved behav
+ behavedst behavedst
+ behavior behavior
+ behaviors behavior
+ behaviour behaviour
+ behaviours behaviour
+ behead behead
+ beheaded behead
+ beheld beheld
+ behest behest
+ behests behest
+ behind behind
+ behold behold
+ beholder behold
+ beholders behold
+ beholdest beholdest
+ beholding behold
+ beholds behold
+ behoof behoof
+ behooffull behoofful
+ behooves behoov
+ behove behov
+ behoves behov
+ behowls behowl
+ being be
+ bel bel
+ belarius belariu
+ belch belch
+ belching belch
+ beldam beldam
+ beldame beldam
+ beldams beldam
+ belee bele
+ belgia belgia
+ belie beli
+ belied beli
+ belief belief
+ beliest beliest
+ believ believ
+ believe believ
+ believed believ
+ believes believ
+ believest believest
+ believing believ
+ belike belik
+ bell bell
+ bellario bellario
+ belle bell
+ bellied belli
+ bellies belli
+ bellman bellman
+ bellona bellona
+ bellow bellow
+ bellowed bellow
+ bellowing bellow
+ bellows bellow
+ bells bell
+ belly belli
+ bellyful belly
+ belman belman
+ belmont belmont
+ belock belock
+ belong belong
+ belonging belong
+ belongings belong
+ belongs belong
+ belov belov
+ beloved belov
+ beloving belov
+ below below
+ belt belt
+ belzebub belzebub
+ bemadding bemad
+ bemet bemet
+ bemete bemet
+ bemoan bemoan
+ bemoaned bemoan
+ bemock bemock
+ bemoil bemoil
+ bemonster bemonst
+ ben ben
+ bench bench
+ bencher bencher
+ benches bench
+ bend bend
+ bended bend
+ bending bend
+ bends bend
+ bene bene
+ beneath beneath
+ benedicite benedicit
+ benedick benedick
+ benediction benedict
+ benedictus benedictu
+ benefactors benefactor
+ benefice benefic
+ beneficial benefici
+ benefit benefit
+ benefited benefit
+ benefits benefit
+ benetted benet
+ benevolence benevol
+ benevolences benevol
+ benied beni
+ benison benison
+ bennet bennet
+ bent bent
+ bentii bentii
+ bentivolii bentivolii
+ bents bent
+ benumbed benumb
+ benvolio benvolio
+ bepaint bepaint
+ bepray beprai
+ bequeath bequeath
+ bequeathed bequeath
+ bequeathing bequeath
+ bequest bequest
+ ber ber
+ berard berard
+ berattle berattl
+ beray berai
+ bere bere
+ bereave bereav
+ bereaved bereav
+ bereaves bereav
+ bereft bereft
+ bergamo bergamo
+ bergomask bergomask
+ berhym berhym
+ berhyme berhym
+ berkeley berkelei
+ bermoothes bermooth
+ bernardo bernardo
+ berod berod
+ berowne berown
+ berri berri
+ berries berri
+ berrord berrord
+ berry berri
+ bertram bertram
+ berwick berwick
+ bescreen bescreen
+ beseech beseech
+ beseeched beseech
+ beseechers beseech
+ beseeching beseech
+ beseek beseek
+ beseem beseem
+ beseemeth beseemeth
+ beseeming beseem
+ beseems beseem
+ beset beset
+ beshrew beshrew
+ beside besid
+ besides besid
+ besieg besieg
+ besiege besieg
+ besieged besieg
+ beslubber beslubb
+ besmear besmear
+ besmeared besmear
+ besmirch besmirch
+ besom besom
+ besort besort
+ besotted besot
+ bespake bespak
+ bespeak bespeak
+ bespice bespic
+ bespoke bespok
+ bespotted bespot
+ bess bess
+ bessy bessi
+ best best
+ bestained bestain
+ bested best
+ bestial bestial
+ bestir bestir
+ bestirr bestirr
+ bestow bestow
+ bestowed bestow
+ bestowing bestow
+ bestows bestow
+ bestraught bestraught
+ bestrew bestrew
+ bestrid bestrid
+ bestride bestrid
+ bestrides bestrid
+ bet bet
+ betake betak
+ beteem beteem
+ bethink bethink
+ bethought bethought
+ bethrothed bethroth
+ bethump bethump
+ betid betid
+ betide betid
+ betideth betideth
+ betime betim
+ betimes betim
+ betoken betoken
+ betook betook
+ betossed betoss
+ betray betrai
+ betrayed betrai
+ betraying betrai
+ betrays betrai
+ betrims betrim
+ betroth betroth
+ betrothed betroth
+ betroths betroth
+ bett bett
+ betted bet
+ better better
+ bettered better
+ bettering better
+ betters better
+ betting bet
+ bettre bettr
+ between between
+ betwixt betwixt
+ bevel bevel
+ beverage beverag
+ bevis bevi
+ bevy bevi
+ bewail bewail
+ bewailed bewail
+ bewailing bewail
+ bewails bewail
+ beware bewar
+ bewasted bewast
+ beweep beweep
+ bewept bewept
+ bewet bewet
+ bewhored bewhor
+ bewitch bewitch
+ bewitched bewitch
+ bewitchment bewitch
+ bewray bewrai
+ beyond beyond
+ bezonian bezonian
+ bezonians bezonian
+ bianca bianca
+ bianco bianco
+ bias bia
+ bibble bibbl
+ bickerings bicker
+ bid bid
+ bidden bidden
+ bidding bid
+ biddings bid
+ biddy biddi
+ bide bide
+ bides bide
+ biding bide
+ bids bid
+ bien bien
+ bier bier
+ bifold bifold
+ big big
+ bigamy bigami
+ biggen biggen
+ bigger bigger
+ bigness big
+ bigot bigot
+ bilberry bilberri
+ bilbo bilbo
+ bilboes bilbo
+ bilbow bilbow
+ bill bill
+ billeted billet
+ billets billet
+ billiards billiard
+ billing bill
+ billow billow
+ billows billow
+ bills bill
+ bin bin
+ bind bind
+ bindeth bindeth
+ binding bind
+ binds bind
+ biondello biondello
+ birch birch
+ bird bird
+ birding bird
+ birdlime birdlim
+ birds bird
+ birnam birnam
+ birth birth
+ birthday birthdai
+ birthdom birthdom
+ birthplace birthplac
+ birthright birthright
+ birthrights birthright
+ births birth
+ bis bi
+ biscuit biscuit
+ bishop bishop
+ bishops bishop
+ bisson bisson
+ bit bit
+ bitch bitch
+ bite bite
+ biter biter
+ bites bite
+ biting bite
+ bits bit
+ bitt bitt
+ bitten bitten
+ bitter bitter
+ bitterest bitterest
+ bitterly bitterli
+ bitterness bitter
+ blab blab
+ blabb blabb
+ blabbing blab
+ blabs blab
+ black black
+ blackamoor blackamoor
+ blackamoors blackamoor
+ blackberries blackberri
+ blackberry blackberri
+ blacker blacker
+ blackest blackest
+ blackfriars blackfriar
+ blackheath blackheath
+ blackmere blackmer
+ blackness black
+ blacks black
+ bladder bladder
+ bladders bladder
+ blade blade
+ bladed blade
+ blades blade
+ blains blain
+ blam blam
+ blame blame
+ blamed blame
+ blameful blame
+ blameless blameless
+ blames blame
+ blanc blanc
+ blanca blanca
+ blanch blanch
+ blank blank
+ blanket blanket
+ blanks blank
+ blaspheme blasphem
+ blaspheming blasphem
+ blasphemous blasphem
+ blasphemy blasphemi
+ blast blast
+ blasted blast
+ blasting blast
+ blastments blastment
+ blasts blast
+ blaz blaz
+ blaze blaze
+ blazes blaze
+ blazing blaze
+ blazon blazon
+ blazoned blazon
+ blazoning blazon
+ bleach bleach
+ bleaching bleach
+ bleak bleak
+ blear blear
+ bleared blear
+ bleat bleat
+ bleated bleat
+ bleats bleat
+ bled bled
+ bleed bleed
+ bleedest bleedest
+ bleedeth bleedeth
+ bleeding bleed
+ bleeds bleed
+ blemish blemish
+ blemishes blemish
+ blench blench
+ blenches blench
+ blend blend
+ blended blend
+ blent blent
+ bless bless
+ blessed bless
+ blessedly blessedli
+ blessedness blessed
+ blesses bless
+ blesseth blesseth
+ blessing bless
+ blessings bless
+ blest blest
+ blew blew
+ blind blind
+ blinded blind
+ blindfold blindfold
+ blinding blind
+ blindly blindli
+ blindness blind
+ blinds blind
+ blink blink
+ blinking blink
+ bliss bliss
+ blist blist
+ blister blister
+ blisters blister
+ blithe blith
+ blithild blithild
+ bloat bloat
+ block block
+ blockish blockish
+ blocks block
+ blois bloi
+ blood blood
+ blooded blood
+ bloodhound bloodhound
+ bloodied bloodi
+ bloodier bloodier
+ bloodiest bloodiest
+ bloodily bloodili
+ bloodless bloodless
+ bloods blood
+ bloodshed bloodsh
+ bloodshedding bloodshed
+ bloodstained bloodstain
+ bloody bloodi
+ bloom bloom
+ blooms bloom
+ blossom blossom
+ blossoming blossom
+ blossoms blossom
+ blot blot
+ blots blot
+ blotted blot
+ blotting blot
+ blount blount
+ blow blow
+ blowed blow
+ blowers blower
+ blowest blowest
+ blowing blow
+ blown blown
+ blows blow
+ blowse blows
+ blubb blubb
+ blubber blubber
+ blubbering blubber
+ blue blue
+ bluecaps bluecap
+ bluest bluest
+ blunt blunt
+ blunted blunt
+ blunter blunter
+ bluntest bluntest
+ blunting blunt
+ bluntly bluntli
+ bluntness blunt
+ blunts blunt
+ blur blur
+ blurr blurr
+ blurs blur
+ blush blush
+ blushes blush
+ blushest blushest
+ blushing blush
+ blust blust
+ bluster bluster
+ blusterer bluster
+ blusters bluster
+ bo bo
+ boar boar
+ board board
+ boarded board
+ boarding board
+ boards board
+ boarish boarish
+ boars boar
+ boast boast
+ boasted boast
+ boastful boast
+ boasting boast
+ boasts boast
+ boat boat
+ boats boat
+ boatswain boatswain
+ bob bob
+ bobb bobb
+ boblibindo boblibindo
+ bobtail bobtail
+ bocchus bocchu
+ bode bode
+ boded bode
+ bodements bodement
+ bodes bode
+ bodg bodg
+ bodied bodi
+ bodies bodi
+ bodiless bodiless
+ bodily bodili
+ boding bode
+ bodkin bodkin
+ body bodi
+ bodykins bodykin
+ bog bog
+ boggle boggl
+ boggler boggler
+ bogs bog
+ bohemia bohemia
+ bohemian bohemian
+ bohun bohun
+ boil boil
+ boiling boil
+ boils boil
+ boist boist
+ boisterous boister
+ boisterously boister
+ boitier boitier
+ bold bold
+ bolden bolden
+ bolder bolder
+ boldest boldest
+ boldly boldli
+ boldness bold
+ bolds bold
+ bolingbroke bolingbrok
+ bolster bolster
+ bolt bolt
+ bolted bolt
+ bolter bolter
+ bolters bolter
+ bolting bolt
+ bolts bolt
+ bombard bombard
+ bombards bombard
+ bombast bombast
+ bon bon
+ bona bona
+ bond bond
+ bondage bondag
+ bonded bond
+ bondmaid bondmaid
+ bondman bondman
+ bondmen bondmen
+ bonds bond
+ bondslave bondslav
+ bone bone
+ boneless boneless
+ bones bone
+ bonfire bonfir
+ bonfires bonfir
+ bonjour bonjour
+ bonne bonn
+ bonnet bonnet
+ bonneted bonnet
+ bonny bonni
+ bonos bono
+ bonto bonto
+ bonville bonvil
+ bood bood
+ book book
+ bookish bookish
+ books book
+ boon boon
+ boor boor
+ boorish boorish
+ boors boor
+ boot boot
+ booted boot
+ booties booti
+ bootless bootless
+ boots boot
+ booty booti
+ bor bor
+ bora bora
+ borachio borachio
+ bordeaux bordeaux
+ border border
+ bordered border
+ borderers border
+ borders border
+ bore bore
+ boreas borea
+ bores bore
+ boring bore
+ born born
+ borne born
+ borough borough
+ boroughs borough
+ borrow borrow
+ borrowed borrow
+ borrower borrow
+ borrowing borrow
+ borrows borrow
+ bosko bosko
+ boskos bosko
+ bosky boski
+ bosom bosom
+ bosoms bosom
+ boson boson
+ boss boss
+ bosworth bosworth
+ botch botch
+ botcher botcher
+ botches botch
+ botchy botchi
+ both both
+ bots bot
+ bottle bottl
+ bottled bottl
+ bottles bottl
+ bottom bottom
+ bottomless bottomless
+ bottoms bottom
+ bouciqualt bouciqualt
+ bouge boug
+ bough bough
+ boughs bough
+ bought bought
+ bounce bounc
+ bouncing bounc
+ bound bound
+ bounded bound
+ bounden bounden
+ boundeth boundeth
+ bounding bound
+ boundless boundless
+ bounds bound
+ bounteous bounteou
+ bounteously bounteous
+ bounties bounti
+ bountiful bounti
+ bountifully bountifulli
+ bounty bounti
+ bourbier bourbier
+ bourbon bourbon
+ bourchier bourchier
+ bourdeaux bourdeaux
+ bourn bourn
+ bout bout
+ bouts bout
+ bove bove
+ bow bow
+ bowcase bowcas
+ bowed bow
+ bowels bowel
+ bower bower
+ bowing bow
+ bowl bowl
+ bowler bowler
+ bowling bowl
+ bowls bowl
+ bows bow
+ bowsprit bowsprit
+ bowstring bowstr
+ box box
+ boxes box
+ boy boi
+ boyet boyet
+ boyish boyish
+ boys boi
+ brabant brabant
+ brabantio brabantio
+ brabble brabbl
+ brabbler brabbler
+ brac brac
+ brace brace
+ bracelet bracelet
+ bracelets bracelet
+ brach brach
+ bracy braci
+ brag brag
+ bragg bragg
+ braggardism braggard
+ braggards braggard
+ braggart braggart
+ braggarts braggart
+ bragged brag
+ bragging brag
+ bragless bragless
+ brags brag
+ braid braid
+ braided braid
+ brain brain
+ brained brain
+ brainford brainford
+ brainish brainish
+ brainless brainless
+ brains brain
+ brainsick brainsick
+ brainsickly brainsickli
+ brake brake
+ brakenbury brakenburi
+ brakes brake
+ brambles brambl
+ bran bran
+ branch branch
+ branches branch
+ branchless branchless
+ brand brand
+ branded brand
+ brandish brandish
+ brandon brandon
+ brands brand
+ bras bra
+ brass brass
+ brassy brassi
+ brat brat
+ brats brat
+ brav brav
+ brave brave
+ braved brave
+ bravely brave
+ braver braver
+ bravery braveri
+ braves brave
+ bravest bravest
+ braving brave
+ brawl brawl
+ brawler brawler
+ brawling brawl
+ brawls brawl
+ brawn brawn
+ brawns brawn
+ bray brai
+ braying brai
+ braz braz
+ brazen brazen
+ brazier brazier
+ breach breach
+ breaches breach
+ bread bread
+ breadth breadth
+ break break
+ breaker breaker
+ breakfast breakfast
+ breaking break
+ breaks break
+ breast breast
+ breasted breast
+ breasting breast
+ breastplate breastplat
+ breasts breast
+ breath breath
+ breathe breath
+ breathed breath
+ breather breather
+ breathers breather
+ breathes breath
+ breathest breathest
+ breathing breath
+ breathless breathless
+ breaths breath
+ brecknock brecknock
+ bred bred
+ breech breech
+ breeches breech
+ breeching breech
+ breed breed
+ breeder breeder
+ breeders breeder
+ breeding breed
+ breeds breed
+ breese brees
+ breeze breez
+ breff breff
+ bretagne bretagn
+ brethen brethen
+ bretheren bretheren
+ brethren brethren
+ brevis brevi
+ brevity breviti
+ brew brew
+ brewage brewag
+ brewer brewer
+ brewers brewer
+ brewing brew
+ brews brew
+ briareus briareu
+ briars briar
+ brib brib
+ bribe bribe
+ briber briber
+ bribes bribe
+ brick brick
+ bricklayer bricklay
+ bricks brick
+ bridal bridal
+ bride bride
+ bridegroom bridegroom
+ bridegrooms bridegroom
+ brides bride
+ bridge bridg
+ bridgenorth bridgenorth
+ bridges bridg
+ bridget bridget
+ bridle bridl
+ bridled bridl
+ brief brief
+ briefer briefer
+ briefest briefest
+ briefly briefli
+ briefness brief
+ brier brier
+ briers brier
+ brigandine brigandin
+ bright bright
+ brighten brighten
+ brightest brightest
+ brightly brightli
+ brightness bright
+ brim brim
+ brimful brim
+ brims brim
+ brimstone brimston
+ brinded brind
+ brine brine
+ bring bring
+ bringer bringer
+ bringeth bringeth
+ bringing bring
+ bringings bring
+ brings bring
+ brinish brinish
+ brink brink
+ brisk brisk
+ brisky briski
+ bristle bristl
+ bristled bristl
+ bristly bristli
+ bristol bristol
+ bristow bristow
+ britain britain
+ britaine britain
+ britaines britain
+ british british
+ briton briton
+ britons briton
+ brittany brittani
+ brittle brittl
+ broach broach
+ broached broach
+ broad broad
+ broader broader
+ broadsides broadsid
+ brocas broca
+ brock brock
+ brogues brogu
+ broil broil
+ broiling broil
+ broils broil
+ broke broke
+ broken broken
+ brokenly brokenli
+ broker broker
+ brokers broker
+ brokes broke
+ broking broke
+ brooch brooch
+ brooches brooch
+ brood brood
+ brooded brood
+ brooding brood
+ brook brook
+ brooks brook
+ broom broom
+ broomstaff broomstaff
+ broth broth
+ brothel brothel
+ brother brother
+ brotherhood brotherhood
+ brotherhoods brotherhood
+ brotherly brotherli
+ brothers brother
+ broths broth
+ brought brought
+ brow brow
+ brown brown
+ browner browner
+ brownist brownist
+ browny browni
+ brows brow
+ browse brows
+ browsing brows
+ bruis brui
+ bruise bruis
+ bruised bruis
+ bruises bruis
+ bruising bruis
+ bruit bruit
+ bruited bruit
+ brundusium brundusium
+ brunt brunt
+ brush brush
+ brushes brush
+ brute brute
+ brutish brutish
+ brutus brutu
+ bubble bubbl
+ bubbles bubbl
+ bubbling bubbl
+ bubukles bubukl
+ buck buck
+ bucket bucket
+ buckets bucket
+ bucking buck
+ buckingham buckingham
+ buckle buckl
+ buckled buckl
+ buckler buckler
+ bucklers buckler
+ bucklersbury bucklersburi
+ buckles buckl
+ buckram buckram
+ bucks buck
+ bud bud
+ budded bud
+ budding bud
+ budge budg
+ budger budger
+ budget budget
+ buds bud
+ buff buff
+ buffet buffet
+ buffeting buffet
+ buffets buffet
+ bug bug
+ bugbear bugbear
+ bugle bugl
+ bugs bug
+ build build
+ builded build
+ buildeth buildeth
+ building build
+ buildings build
+ builds build
+ built built
+ bulk bulk
+ bulks bulk
+ bull bull
+ bullcalf bullcalf
+ bullen bullen
+ bullens bullen
+ bullet bullet
+ bullets bullet
+ bullocks bullock
+ bulls bull
+ bully bulli
+ bulmer bulmer
+ bulwark bulwark
+ bulwarks bulwark
+ bum bum
+ bumbast bumbast
+ bump bump
+ bumper bumper
+ bums bum
+ bunch bunch
+ bunches bunch
+ bundle bundl
+ bung bung
+ bunghole bunghol
+ bungle bungl
+ bunting bunt
+ buoy buoi
+ bur bur
+ burbolt burbolt
+ burd burd
+ burden burden
+ burdened burden
+ burdening burden
+ burdenous burden
+ burdens burden
+ burgh burgh
+ burgher burgher
+ burghers burgher
+ burglary burglari
+ burgomasters burgomast
+ burgonet burgonet
+ burgundy burgundi
+ burial burial
+ buried buri
+ burier burier
+ buriest buriest
+ burly burli
+ burn burn
+ burned burn
+ burnet burnet
+ burneth burneth
+ burning burn
+ burnish burnish
+ burns burn
+ burnt burnt
+ burr burr
+ burrows burrow
+ burs bur
+ burst burst
+ bursting burst
+ bursts burst
+ burthen burthen
+ burthens burthen
+ burton burton
+ bury buri
+ burying buri
+ bush bush
+ bushels bushel
+ bushes bush
+ bushy bushi
+ busied busi
+ busily busili
+ busines busin
+ business busi
+ businesses busi
+ buskin buskin
+ busky buski
+ buss buss
+ busses buss
+ bussing buss
+ bustle bustl
+ bustling bustl
+ busy busi
+ but but
+ butcheed butche
+ butcher butcher
+ butchered butcher
+ butcheries butcheri
+ butcherly butcherli
+ butchers butcher
+ butchery butcheri
+ butler butler
+ butt butt
+ butter butter
+ buttered butter
+ butterflies butterfli
+ butterfly butterfli
+ butterwoman butterwoman
+ buttery butteri
+ buttock buttock
+ buttocks buttock
+ button button
+ buttonhole buttonhol
+ buttons button
+ buttress buttress
+ buttry buttri
+ butts butt
+ buxom buxom
+ buy bui
+ buyer buyer
+ buying bui
+ buys bui
+ buzz buzz
+ buzzard buzzard
+ buzzards buzzard
+ buzzers buzzer
+ buzzing buzz
+ by by
+ bye bye
+ byzantium byzantium
+ c c
+ ca ca
+ cabbage cabbag
+ cabileros cabilero
+ cabin cabin
+ cabins cabin
+ cable cabl
+ cables cabl
+ cackling cackl
+ cacodemon cacodemon
+ caddis caddi
+ caddisses caddiss
+ cade cade
+ cadence cadenc
+ cadent cadent
+ cades cade
+ cadmus cadmu
+ caduceus caduceu
+ cadwal cadwal
+ cadwallader cadwallad
+ caelius caeliu
+ caelo caelo
+ caesar caesar
+ caesarion caesarion
+ caesars caesar
+ cage cage
+ caged cage
+ cagion cagion
+ cain cain
+ caithness caith
+ caitiff caitiff
+ caitiffs caitiff
+ caius caiu
+ cak cak
+ cake cake
+ cakes cake
+ calaber calab
+ calais calai
+ calamities calam
+ calamity calam
+ calchas calcha
+ calculate calcul
+ calen calen
+ calendar calendar
+ calendars calendar
+ calf calf
+ caliban caliban
+ calibans caliban
+ calipolis calipoli
+ cality caliti
+ caliver caliv
+ call call
+ callat callat
+ called call
+ callet callet
+ calling call
+ calls call
+ calm calm
+ calmest calmest
+ calmly calmli
+ calmness calm
+ calms calm
+ calpurnia calpurnia
+ calumniate calumni
+ calumniating calumni
+ calumnious calumni
+ calumny calumni
+ calve calv
+ calved calv
+ calves calv
+ calveskins calveskin
+ calydon calydon
+ cam cam
+ cambio cambio
+ cambria cambria
+ cambric cambric
+ cambrics cambric
+ cambridge cambridg
+ cambyses cambys
+ came came
+ camel camel
+ camelot camelot
+ camels camel
+ camest camest
+ camillo camillo
+ camlet camlet
+ camomile camomil
+ camp camp
+ campeius campeiu
+ camping camp
+ camps camp
+ can can
+ canakin canakin
+ canaries canari
+ canary canari
+ cancel cancel
+ cancell cancel
+ cancelled cancel
+ cancelling cancel
+ cancels cancel
+ cancer cancer
+ candidatus candidatu
+ candied candi
+ candle candl
+ candles candl
+ candlesticks candlestick
+ candy candi
+ canidius canidiu
+ cank cank
+ canker canker
+ cankerblossom cankerblossom
+ cankers canker
+ cannibally cannib
+ cannibals cannib
+ cannon cannon
+ cannoneer cannon
+ cannons cannon
+ cannot cannot
+ canon canon
+ canoniz canoniz
+ canonize canon
+ canonized canon
+ canons canon
+ canopied canopi
+ canopies canopi
+ canopy canopi
+ canst canst
+ canstick canstick
+ canterbury canterburi
+ cantle cantl
+ cantons canton
+ canus canu
+ canvas canva
+ canvass canvass
+ canzonet canzonet
+ cap cap
+ capability capabl
+ capable capabl
+ capacities capac
+ capacity capac
+ caparison caparison
+ capdv capdv
+ cape cape
+ capel capel
+ capels capel
+ caper caper
+ capers caper
+ capet capet
+ caphis caphi
+ capilet capilet
+ capitaine capitain
+ capital capit
+ capite capit
+ capitol capitol
+ capitulate capitul
+ capocchia capocchia
+ capon capon
+ capons capon
+ capp capp
+ cappadocia cappadocia
+ capriccio capriccio
+ capricious caprici
+ caps cap
+ capt capt
+ captain captain
+ captains captain
+ captainship captainship
+ captious captiou
+ captivate captiv
+ captivated captiv
+ captivates captiv
+ captive captiv
+ captives captiv
+ captivity captiv
+ captum captum
+ capucius capuciu
+ capulet capulet
+ capulets capulet
+ car car
+ carack carack
+ caracks carack
+ carat carat
+ caraways carawai
+ carbonado carbonado
+ carbuncle carbuncl
+ carbuncled carbuncl
+ carbuncles carbuncl
+ carcanet carcanet
+ carcase carcas
+ carcases carcas
+ carcass carcass
+ carcasses carcass
+ card card
+ cardecue cardecu
+ carded card
+ carders carder
+ cardinal cardin
+ cardinally cardin
+ cardinals cardin
+ cardmaker cardmak
+ cards card
+ carduus carduu
+ care care
+ cared care
+ career career
+ careers career
+ careful care
+ carefully carefulli
+ careless careless
+ carelessly carelessli
+ carelessness careless
+ cares care
+ caret caret
+ cargo cargo
+ carl carl
+ carlisle carlisl
+ carlot carlot
+ carman carman
+ carmen carmen
+ carnal carnal
+ carnally carnal
+ carnarvonshire carnarvonshir
+ carnation carnat
+ carnations carnat
+ carol carol
+ carous carou
+ carouse carous
+ caroused carous
+ carouses carous
+ carousing carous
+ carp carp
+ carpenter carpent
+ carper carper
+ carpet carpet
+ carpets carpet
+ carping carp
+ carriage carriag
+ carriages carriag
+ carried carri
+ carrier carrier
+ carriers carrier
+ carries carri
+ carrion carrion
+ carrions carrion
+ carry carri
+ carrying carri
+ cars car
+ cart cart
+ carters carter
+ carthage carthag
+ carts cart
+ carv carv
+ carve carv
+ carved carv
+ carver carver
+ carves carv
+ carving carv
+ cas ca
+ casa casa
+ casaer casaer
+ casca casca
+ case case
+ casement casement
+ casements casement
+ cases case
+ cash cash
+ cashier cashier
+ casing case
+ cask cask
+ casket casket
+ casketed casket
+ caskets casket
+ casque casqu
+ casques casqu
+ cassado cassado
+ cassandra cassandra
+ cassibelan cassibelan
+ cassio cassio
+ cassius cassiu
+ cassocks cassock
+ cast cast
+ castalion castalion
+ castaway castawai
+ castaways castawai
+ casted cast
+ caster caster
+ castigate castig
+ castigation castig
+ castile castil
+ castiliano castiliano
+ casting cast
+ castle castl
+ castles castl
+ casts cast
+ casual casual
+ casually casual
+ casualties casualti
+ casualty casualti
+ cat cat
+ cataian cataian
+ catalogue catalogu
+ cataplasm cataplasm
+ cataracts cataract
+ catarrhs catarrh
+ catastrophe catastroph
+ catch catch
+ catcher catcher
+ catches catch
+ catching catch
+ cate cate
+ catechising catechis
+ catechism catech
+ catechize catech
+ cater cater
+ caterpillars caterpillar
+ caters cater
+ caterwauling caterwaul
+ cates cate
+ catesby catesbi
+ cathedral cathedr
+ catlike catlik
+ catling catl
+ catlings catl
+ cato cato
+ cats cat
+ cattle cattl
+ caucasus caucasu
+ caudle caudl
+ cauf cauf
+ caught caught
+ cauldron cauldron
+ caus cau
+ cause caus
+ caused caus
+ causeless causeless
+ causer causer
+ causes caus
+ causest causest
+ causeth causeth
+ cautel cautel
+ cautelous cautel
+ cautels cautel
+ cauterizing cauter
+ caution caution
+ cautions caution
+ cavaleiro cavaleiro
+ cavalery cavaleri
+ cavaliers cavali
+ cave cave
+ cavern cavern
+ caverns cavern
+ caves cave
+ caveto caveto
+ caviary caviari
+ cavil cavil
+ cavilling cavil
+ cawdor cawdor
+ cawdron cawdron
+ cawing caw
+ ce ce
+ ceas cea
+ cease ceas
+ ceases ceas
+ ceaseth ceaseth
+ cedar cedar
+ cedars cedar
+ cedius cediu
+ celebrate celebr
+ celebrated celebr
+ celebrates celebr
+ celebration celebr
+ celerity celer
+ celestial celesti
+ celia celia
+ cell cell
+ cellar cellar
+ cellarage cellarag
+ celsa celsa
+ cement cement
+ censer censer
+ censor censor
+ censorinus censorinu
+ censur censur
+ censure censur
+ censured censur
+ censurers censur
+ censures censur
+ censuring censur
+ centaur centaur
+ centaurs centaur
+ centre centr
+ cents cent
+ centuries centuri
+ centurion centurion
+ centurions centurion
+ century centuri
+ cerberus cerberu
+ cerecloth cerecloth
+ cerements cerement
+ ceremonial ceremoni
+ ceremonies ceremoni
+ ceremonious ceremoni
+ ceremoniously ceremoni
+ ceremony ceremoni
+ ceres cere
+ cerns cern
+ certain certain
+ certainer certain
+ certainly certainli
+ certainties certainti
+ certainty certainti
+ certes cert
+ certificate certif
+ certified certifi
+ certifies certifi
+ certify certifi
+ ces ce
+ cesario cesario
+ cess cess
+ cesse cess
+ cestern cestern
+ cetera cetera
+ cette cett
+ chaces chace
+ chaf chaf
+ chafe chafe
+ chafed chafe
+ chafes chafe
+ chaff chaff
+ chaffless chaffless
+ chafing chafe
+ chain chain
+ chains chain
+ chair chair
+ chairs chair
+ chalic chalic
+ chalice chalic
+ chalices chalic
+ chalk chalk
+ chalks chalk
+ chalky chalki
+ challeng challeng
+ challenge challeng
+ challenged challeng
+ challenger challeng
+ challengers challeng
+ challenges challeng
+ cham cham
+ chamber chamber
+ chamberers chamber
+ chamberlain chamberlain
+ chamberlains chamberlain
+ chambermaid chambermaid
+ chambermaids chambermaid
+ chambers chamber
+ chameleon chameleon
+ champ champ
+ champagne champagn
+ champain champain
+ champains champain
+ champion champion
+ champions champion
+ chanc chanc
+ chance chanc
+ chanced chanc
+ chancellor chancellor
+ chances chanc
+ chandler chandler
+ chang chang
+ change chang
+ changeable changeabl
+ changed chang
+ changeful chang
+ changeling changel
+ changelings changel
+ changer changer
+ changes chang
+ changest changest
+ changing chang
+ channel channel
+ channels channel
+ chanson chanson
+ chant chant
+ chanticleer chanticl
+ chanting chant
+ chantries chantri
+ chantry chantri
+ chants chant
+ chaos chao
+ chap chap
+ chape chape
+ chapel chapel
+ chapeless chapeless
+ chapels chapel
+ chaplain chaplain
+ chaplains chaplain
+ chapless chapless
+ chaplet chaplet
+ chapmen chapmen
+ chaps chap
+ chapter chapter
+ character charact
+ charactered charact
+ characterless characterless
+ characters charact
+ charactery characteri
+ characts charact
+ charbon charbon
+ chare chare
+ chares chare
+ charg charg
+ charge charg
+ charged charg
+ chargeful charg
+ charges charg
+ chargeth chargeth
+ charging charg
+ chariest chariest
+ chariness chari
+ charing chare
+ chariot chariot
+ chariots chariot
+ charitable charit
+ charitably charit
+ charities chariti
+ charity chariti
+ charlemain charlemain
+ charles charl
+ charm charm
+ charmed charm
+ charmer charmer
+ charmeth charmeth
+ charmian charmian
+ charming charm
+ charmingly charmingli
+ charms charm
+ charneco charneco
+ charnel charnel
+ charolois charoloi
+ charon charon
+ charter charter
+ charters charter
+ chartreux chartreux
+ chary chari
+ charybdis charybdi
+ chas cha
+ chase chase
+ chased chase
+ chaser chaser
+ chaseth chaseth
+ chasing chase
+ chaste chast
+ chastely chast
+ chastis chasti
+ chastise chastis
+ chastised chastis
+ chastisement chastis
+ chastity chastiti
+ chat chat
+ chatham chatham
+ chatillon chatillon
+ chats chat
+ chatt chatt
+ chattels chattel
+ chatter chatter
+ chattering chatter
+ chattles chattl
+ chaud chaud
+ chaunted chaunt
+ chaw chaw
+ chawdron chawdron
+ che che
+ cheap cheap
+ cheapen cheapen
+ cheaper cheaper
+ cheapest cheapest
+ cheaply cheapli
+ cheapside cheapsid
+ cheat cheat
+ cheated cheat
+ cheater cheater
+ cheaters cheater
+ cheating cheat
+ cheats cheat
+ check check
+ checked check
+ checker checker
+ checking check
+ checks check
+ cheek cheek
+ cheeks cheek
+ cheer cheer
+ cheered cheer
+ cheerer cheerer
+ cheerful cheer
+ cheerfully cheerfulli
+ cheering cheer
+ cheerless cheerless
+ cheerly cheerli
+ cheers cheer
+ cheese chees
+ chequer chequer
+ cher cher
+ cherish cherish
+ cherished cherish
+ cherisher cherish
+ cherishes cherish
+ cherishing cherish
+ cherries cherri
+ cherry cherri
+ cherrypit cherrypit
+ chertsey chertsei
+ cherub cherub
+ cherubims cherubim
+ cherubin cherubin
+ cherubins cherubin
+ cheshu cheshu
+ chess chess
+ chest chest
+ chester chester
+ chestnut chestnut
+ chestnuts chestnut
+ chests chest
+ chetas cheta
+ chev chev
+ cheval cheval
+ chevalier chevali
+ chevaliers chevali
+ cheveril cheveril
+ chew chew
+ chewed chew
+ chewet chewet
+ chewing chew
+ chez chez
+ chi chi
+ chick chick
+ chicken chicken
+ chickens chicken
+ chicurmurco chicurmurco
+ chid chid
+ chidden chidden
+ chide chide
+ chiders chider
+ chides chide
+ chiding chide
+ chief chief
+ chiefest chiefest
+ chiefly chiefli
+ chien chien
+ child child
+ childed child
+ childeric childer
+ childhood childhood
+ childhoods childhood
+ childing child
+ childish childish
+ childishness childish
+ childlike childlik
+ childness child
+ children children
+ chill chill
+ chilling chill
+ chime chime
+ chimes chime
+ chimney chimnei
+ chimneypiece chimneypiec
+ chimneys chimnei
+ chimurcho chimurcho
+ chin chin
+ china china
+ chine chine
+ chines chine
+ chink chink
+ chinks chink
+ chins chin
+ chipp chipp
+ chipper chipper
+ chips chip
+ chiron chiron
+ chirping chirp
+ chirrah chirrah
+ chirurgeonly chirurgeonli
+ chisel chisel
+ chitopher chitoph
+ chivalrous chivalr
+ chivalry chivalri
+ choice choic
+ choicely choic
+ choicest choicest
+ choir choir
+ choirs choir
+ chok chok
+ choke choke
+ choked choke
+ chokes choke
+ choking choke
+ choler choler
+ choleric choler
+ cholers choler
+ chollors chollor
+ choose choos
+ chooser chooser
+ chooses choos
+ chooseth chooseth
+ choosing choos
+ chop chop
+ chopine chopin
+ choplogic choplog
+ chopp chopp
+ chopped chop
+ chopping chop
+ choppy choppi
+ chops chop
+ chopt chopt
+ chor chor
+ choristers chorist
+ chorus choru
+ chose chose
+ chosen chosen
+ chough chough
+ choughs chough
+ chrish chrish
+ christ christ
+ christen christen
+ christendom christendom
+ christendoms christendom
+ christening christen
+ christenings christen
+ christian christian
+ christianlike christianlik
+ christians christian
+ christmas christma
+ christom christom
+ christopher christoph
+ christophero christophero
+ chronicle chronicl
+ chronicled chronicl
+ chronicler chronicl
+ chroniclers chronicl
+ chronicles chronicl
+ chrysolite chrysolit
+ chuck chuck
+ chucks chuck
+ chud chud
+ chuffs chuff
+ church church
+ churches church
+ churchman churchman
+ churchmen churchmen
+ churchyard churchyard
+ churchyards churchyard
+ churl churl
+ churlish churlish
+ churlishly churlishli
+ churls churl
+ churn churn
+ chus chu
+ cicatrice cicatric
+ cicatrices cicatric
+ cicely cice
+ cicero cicero
+ ciceter cicet
+ ciel ciel
+ ciitzens ciitzen
+ cilicia cilicia
+ cimber cimber
+ cimmerian cimmerian
+ cinable cinabl
+ cincture cinctur
+ cinders cinder
+ cine cine
+ cinna cinna
+ cinque cinqu
+ cipher cipher
+ ciphers cipher
+ circa circa
+ circe circ
+ circle circl
+ circled circl
+ circlets circlet
+ circling circl
+ circuit circuit
+ circum circum
+ circumcised circumcis
+ circumference circumfer
+ circummur circummur
+ circumscrib circumscrib
+ circumscribed circumscrib
+ circumscription circumscript
+ circumspect circumspect
+ circumstance circumst
+ circumstanced circumstanc
+ circumstances circumst
+ circumstantial circumstanti
+ circumvent circumv
+ circumvention circumvent
+ cistern cistern
+ citadel citadel
+ cital cital
+ cite cite
+ cited cite
+ cites cite
+ cities citi
+ citing cite
+ citizen citizen
+ citizens citizen
+ cittern cittern
+ city citi
+ civet civet
+ civil civil
+ civility civil
+ civilly civilli
+ clack clack
+ clad clad
+ claim claim
+ claiming claim
+ claims claim
+ clamb clamb
+ clamber clamber
+ clammer clammer
+ clamor clamor
+ clamorous clamor
+ clamors clamor
+ clamour clamour
+ clamours clamour
+ clang clang
+ clangor clangor
+ clap clap
+ clapp clapp
+ clapped clap
+ clapper clapper
+ clapping clap
+ claps clap
+ clare clare
+ clarence clarenc
+ claret claret
+ claribel claribel
+ clasp clasp
+ clasps clasp
+ clatter clatter
+ claud claud
+ claudio claudio
+ claudius claudiu
+ clause claus
+ claw claw
+ clawed claw
+ clawing claw
+ claws claw
+ clay clai
+ clays clai
+ clean clean
+ cleanliest cleanliest
+ cleanly cleanli
+ cleans clean
+ cleanse cleans
+ cleansing cleans
+ clear clear
+ clearer clearer
+ clearest clearest
+ clearly clearli
+ clearness clear
+ clears clear
+ cleave cleav
+ cleaving cleav
+ clef clef
+ cleft cleft
+ cleitus cleitu
+ clemency clemenc
+ clement clement
+ cleomenes cleomen
+ cleopatpa cleopatpa
+ cleopatra cleopatra
+ clepeth clepeth
+ clept clept
+ clerestories clerestori
+ clergy clergi
+ clergyman clergyman
+ clergymen clergymen
+ clerk clerk
+ clerkly clerkli
+ clerks clerk
+ clew clew
+ client client
+ clients client
+ cliff cliff
+ clifford clifford
+ cliffords clifford
+ cliffs cliff
+ clifton clifton
+ climate climat
+ climature climatur
+ climb climb
+ climbed climb
+ climber climber
+ climbeth climbeth
+ climbing climb
+ climbs climb
+ clime clime
+ cling cling
+ clink clink
+ clinking clink
+ clinquant clinquant
+ clip clip
+ clipp clipp
+ clipper clipper
+ clippeth clippeth
+ clipping clip
+ clipt clipt
+ clitus clitu
+ clo clo
+ cloak cloak
+ cloakbag cloakbag
+ cloaks cloak
+ clock clock
+ clocks clock
+ clod clod
+ cloddy cloddi
+ clodpole clodpol
+ clog clog
+ clogging clog
+ clogs clog
+ cloister cloister
+ cloistress cloistress
+ cloquence cloquenc
+ clos clo
+ close close
+ closed close
+ closely close
+ closeness close
+ closer closer
+ closes close
+ closest closest
+ closet closet
+ closing close
+ closure closur
+ cloten cloten
+ clotens cloten
+ cloth cloth
+ clothair clothair
+ clotharius clothariu
+ clothe cloth
+ clothes cloth
+ clothier clothier
+ clothiers clothier
+ clothing cloth
+ cloths cloth
+ clotpoles clotpol
+ clotpoll clotpol
+ cloud cloud
+ clouded cloud
+ cloudiness cloudi
+ clouds cloud
+ cloudy cloudi
+ clout clout
+ clouted clout
+ clouts clout
+ cloven cloven
+ clover clover
+ cloves clove
+ clovest clovest
+ clowder clowder
+ clown clown
+ clownish clownish
+ clowns clown
+ cloy cloi
+ cloyed cloi
+ cloying cloi
+ cloyless cloyless
+ cloyment cloyment
+ cloys cloi
+ club club
+ clubs club
+ cluck cluck
+ clung clung
+ clust clust
+ clusters cluster
+ clutch clutch
+ clyster clyster
+ cneius cneiu
+ cnemies cnemi
+ co co
+ coach coach
+ coaches coach
+ coachmakers coachmak
+ coact coact
+ coactive coactiv
+ coagulate coagul
+ coal coal
+ coals coal
+ coarse coars
+ coarsely coars
+ coast coast
+ coasting coast
+ coasts coast
+ coat coat
+ coated coat
+ coats coat
+ cobble cobbl
+ cobbled cobbl
+ cobbler cobbler
+ cobham cobham
+ cobloaf cobloaf
+ cobweb cobweb
+ cobwebs cobweb
+ cock cock
+ cockatrice cockatric
+ cockatrices cockatric
+ cockle cockl
+ cockled cockl
+ cockney cocknei
+ cockpit cockpit
+ cocks cock
+ cocksure cocksur
+ coctus coctu
+ cocytus cocytu
+ cod cod
+ codding cod
+ codling codl
+ codpiece codpiec
+ codpieces codpiec
+ cods cod
+ coelestibus coelestibu
+ coesar coesar
+ coeur coeur
+ coffer coffer
+ coffers coffer
+ coffin coffin
+ coffins coffin
+ cog cog
+ cogging cog
+ cogitation cogit
+ cogitations cogit
+ cognition cognit
+ cognizance cogniz
+ cogscomb cogscomb
+ cohabitants cohabit
+ coher coher
+ cohere coher
+ coherence coher
+ coherent coher
+ cohorts cohort
+ coif coif
+ coign coign
+ coil coil
+ coin coin
+ coinage coinag
+ coiner coiner
+ coining coin
+ coins coin
+ col col
+ colbrand colbrand
+ colchos colcho
+ cold cold
+ colder colder
+ coldest coldest
+ coldly coldli
+ coldness cold
+ coldspur coldspur
+ colebrook colebrook
+ colic colic
+ collar collar
+ collars collar
+ collateral collater
+ colleagued colleagu
+ collect collect
+ collected collect
+ collection collect
+ college colleg
+ colleges colleg
+ collied colli
+ collier collier
+ colliers collier
+ collop collop
+ collusion collus
+ colme colm
+ colmekill colmekil
+ coloquintida coloquintida
+ color color
+ colors color
+ colossus colossu
+ colour colour
+ colourable colour
+ coloured colour
+ colouring colour
+ colours colour
+ colt colt
+ colted colt
+ colts colt
+ columbine columbin
+ columbines columbin
+ colville colvil
+ com com
+ comagene comagen
+ comart comart
+ comb comb
+ combat combat
+ combatant combat
+ combatants combat
+ combated combat
+ combating combat
+ combin combin
+ combinate combin
+ combination combin
+ combine combin
+ combined combin
+ combless combless
+ combustion combust
+ come come
+ comedian comedian
+ comedians comedian
+ comedy comedi
+ comeliness comeli
+ comely come
+ comer comer
+ comers comer
+ comes come
+ comest comest
+ comet comet
+ cometh cometh
+ comets comet
+ comfect comfect
+ comfit comfit
+ comfits comfit
+ comfort comfort
+ comfortable comfort
+ comforted comfort
+ comforter comfort
+ comforting comfort
+ comfortless comfortless
+ comforts comfort
+ comic comic
+ comical comic
+ coming come
+ comings come
+ cominius cominiu
+ comma comma
+ command command
+ commande command
+ commanded command
+ commander command
+ commanders command
+ commanding command
+ commandment command
+ commandments command
+ commands command
+ comme comm
+ commenc commenc
+ commence commenc
+ commenced commenc
+ commencement commenc
+ commences commenc
+ commencing commenc
+ commend commend
+ commendable commend
+ commendation commend
+ commendations commend
+ commended commend
+ commending commend
+ commends commend
+ comment comment
+ commentaries commentari
+ commenting comment
+ comments comment
+ commerce commerc
+ commingled commingl
+ commiseration commiser
+ commission commiss
+ commissioners commission
+ commissions commiss
+ commit commit
+ commits commit
+ committ committ
+ committed commit
+ committing commit
+ commix commix
+ commixed commix
+ commixtion commixt
+ commixture commixtur
+ commodious commodi
+ commodities commod
+ commodity commod
+ common common
+ commonalty commonalti
+ commoner common
+ commoners common
+ commonly commonli
+ commons common
+ commonweal commonw
+ commonwealth commonwealth
+ commotion commot
+ commotions commot
+ commune commun
+ communicat communicat
+ communicate commun
+ communication commun
+ communities commun
+ community commun
+ comonty comonti
+ compact compact
+ companies compani
+ companion companion
+ companions companion
+ companionship companionship
+ company compani
+ compar compar
+ comparative compar
+ compare compar
+ compared compar
+ comparing compar
+ comparison comparison
+ comparisons comparison
+ compartner compartn
+ compass compass
+ compasses compass
+ compassing compass
+ compassion compass
+ compassionate compassion
+ compeers compeer
+ compel compel
+ compell compel
+ compelled compel
+ compelling compel
+ compels compel
+ compensation compens
+ competence compet
+ competency compet
+ competent compet
+ competitor competitor
+ competitors competitor
+ compil compil
+ compile compil
+ compiled compil
+ complain complain
+ complainer complain
+ complainest complainest
+ complaining complain
+ complainings complain
+ complains complain
+ complaint complaint
+ complaints complaint
+ complement complement
+ complements complement
+ complete complet
+ complexion complexion
+ complexioned complexion
+ complexions complexion
+ complices complic
+ complies compli
+ compliment compliment
+ complimental compliment
+ compliments compliment
+ complot complot
+ complots complot
+ complotted complot
+ comply compli
+ compos compo
+ compose compos
+ composed compos
+ composition composit
+ compost compost
+ composture compostur
+ composure composur
+ compound compound
+ compounded compound
+ compounds compound
+ comprehend comprehend
+ comprehended comprehend
+ comprehends comprehend
+ compremises compremis
+ compris compri
+ comprising compris
+ compromis compromi
+ compromise compromis
+ compt compt
+ comptible comptibl
+ comptrollers comptrol
+ compulsatory compulsatori
+ compulsion compuls
+ compulsive compuls
+ compunctious compuncti
+ computation comput
+ comrade comrad
+ comrades comrad
+ comutual comutu
+ con con
+ concave concav
+ concavities concav
+ conceal conceal
+ concealed conceal
+ concealing conceal
+ concealment conceal
+ concealments conceal
+ conceals conceal
+ conceit conceit
+ conceited conceit
+ conceitless conceitless
+ conceits conceit
+ conceiv conceiv
+ conceive conceiv
+ conceived conceiv
+ conceives conceiv
+ conceiving conceiv
+ conception concept
+ conceptions concept
+ conceptious concepti
+ concern concern
+ concernancy concern
+ concerneth concerneth
+ concerning concern
+ concernings concern
+ concerns concern
+ conclave conclav
+ conclud conclud
+ conclude conclud
+ concluded conclud
+ concludes conclud
+ concluding conclud
+ conclusion conclus
+ conclusions conclus
+ concolinel concolinel
+ concord concord
+ concubine concubin
+ concupiscible concupisc
+ concupy concupi
+ concur concur
+ concurring concur
+ concurs concur
+ condemn condemn
+ condemnation condemn
+ condemned condemn
+ condemning condemn
+ condemns condemn
+ condescend condescend
+ condign condign
+ condition condit
+ conditionally condition
+ conditions condit
+ condole condol
+ condolement condol
+ condoling condol
+ conduce conduc
+ conduct conduct
+ conducted conduct
+ conducting conduct
+ conductor conductor
+ conduit conduit
+ conduits conduit
+ conected conect
+ coney conei
+ confection confect
+ confectionary confectionari
+ confections confect
+ confederacy confederaci
+ confederate confeder
+ confederates confeder
+ confer confer
+ conference confer
+ conferr conferr
+ conferring confer
+ confess confess
+ confessed confess
+ confesses confess
+ confesseth confesseth
+ confessing confess
+ confession confess
+ confessions confess
+ confessor confessor
+ confidence confid
+ confident confid
+ confidently confid
+ confin confin
+ confine confin
+ confined confin
+ confineless confineless
+ confiners confin
+ confines confin
+ confining confin
+ confirm confirm
+ confirmation confirm
+ confirmations confirm
+ confirmed confirm
+ confirmer confirm
+ confirmers confirm
+ confirming confirm
+ confirmities confirm
+ confirms confirm
+ confiscate confisc
+ confiscated confisc
+ confiscation confisc
+ confixed confix
+ conflict conflict
+ conflicting conflict
+ conflicts conflict
+ confluence confluenc
+ conflux conflux
+ conform conform
+ conformable conform
+ confound confound
+ confounded confound
+ confounding confound
+ confounds confound
+ confront confront
+ confronted confront
+ confus confu
+ confused confus
+ confusedly confusedli
+ confusion confus
+ confusions confus
+ confutation confut
+ confutes confut
+ congeal congeal
+ congealed congeal
+ congealment congeal
+ congee conge
+ conger conger
+ congest congest
+ congied congi
+ congratulate congratul
+ congreeing congre
+ congreeted congreet
+ congregate congreg
+ congregated congreg
+ congregation congreg
+ congregations congreg
+ congruent congruent
+ congruing congru
+ conies coni
+ conjectural conjectur
+ conjecture conjectur
+ conjectures conjectur
+ conjoin conjoin
+ conjoined conjoin
+ conjoins conjoin
+ conjointly conjointli
+ conjunct conjunct
+ conjunction conjunct
+ conjunctive conjunct
+ conjur conjur
+ conjuration conjur
+ conjurations conjur
+ conjure conjur
+ conjured conjur
+ conjurer conjur
+ conjurers conjur
+ conjures conjur
+ conjuring conjur
+ conjuro conjuro
+ conn conn
+ connected connect
+ connive conniv
+ conqu conqu
+ conquer conquer
+ conquered conquer
+ conquering conquer
+ conqueror conqueror
+ conquerors conqueror
+ conquers conquer
+ conquest conquest
+ conquests conquest
+ conquring conqur
+ conrade conrad
+ cons con
+ consanguineous consanguin
+ consanguinity consanguin
+ conscienc conscienc
+ conscience conscienc
+ consciences conscienc
+ conscionable conscion
+ consecrate consecr
+ consecrated consecr
+ consecrations consecr
+ consent consent
+ consented consent
+ consenting consent
+ consents consent
+ consequence consequ
+ consequences consequ
+ consequently consequ
+ conserve conserv
+ conserved conserv
+ conserves conserv
+ consider consid
+ considerance consider
+ considerate consider
+ consideration consider
+ considerations consider
+ considered consid
+ considering consid
+ considerings consid
+ considers consid
+ consign consign
+ consigning consign
+ consist consist
+ consisteth consisteth
+ consisting consist
+ consistory consistori
+ consists consist
+ consolate consol
+ consolation consol
+ consonancy conson
+ consonant conson
+ consort consort
+ consorted consort
+ consortest consortest
+ conspectuities conspectu
+ conspir conspir
+ conspiracy conspiraci
+ conspirant conspir
+ conspirator conspir
+ conspirators conspir
+ conspire conspir
+ conspired conspir
+ conspirers conspir
+ conspires conspir
+ conspiring conspir
+ constable constabl
+ constables constabl
+ constance constanc
+ constancies constanc
+ constancy constanc
+ constant constant
+ constantine constantin
+ constantinople constantinopl
+ constantly constantli
+ constellation constel
+ constitution constitut
+ constrain constrain
+ constrained constrain
+ constraineth constraineth
+ constrains constrain
+ constraint constraint
+ constring constr
+ construction construct
+ construe constru
+ consul consul
+ consuls consul
+ consulship consulship
+ consulships consulship
+ consult consult
+ consulting consult
+ consults consult
+ consum consum
+ consume consum
+ consumed consum
+ consumes consum
+ consuming consum
+ consummate consumm
+ consummation consumm
+ consumption consumpt
+ consumptions consumpt
+ contagion contagion
+ contagious contagi
+ contain contain
+ containing contain
+ contains contain
+ contaminate contamin
+ contaminated contamin
+ contemn contemn
+ contemned contemn
+ contemning contemn
+ contemns contemn
+ contemplate contempl
+ contemplation contempl
+ contemplative contempl
+ contempt contempt
+ contemptible contempt
+ contempts contempt
+ contemptuous contemptu
+ contemptuously contemptu
+ contend contend
+ contended contend
+ contending contend
+ contendon contendon
+ content content
+ contenta contenta
+ contented content
+ contenteth contenteth
+ contention content
+ contentious contenti
+ contentless contentless
+ contento contento
+ contents content
+ contest contest
+ contestation contest
+ continence contin
+ continency contin
+ continent contin
+ continents contin
+ continu continu
+ continual continu
+ continually continu
+ continuance continu
+ continuantly continuantli
+ continuate continu
+ continue continu
+ continued continu
+ continuer continu
+ continues continu
+ continuing continu
+ contract contract
+ contracted contract
+ contracting contract
+ contraction contract
+ contradict contradict
+ contradicted contradict
+ contradiction contradict
+ contradicts contradict
+ contraries contrari
+ contrarieties contrarieti
+ contrariety contrarieti
+ contrarious contrari
+ contrariously contrari
+ contrary contrari
+ contre contr
+ contribution contribut
+ contributors contributor
+ contrite contrit
+ contriv contriv
+ contrive contriv
+ contrived contriv
+ contriver contriv
+ contrives contriv
+ contriving contriv
+ control control
+ controll control
+ controller control
+ controlling control
+ controlment control
+ controls control
+ controversy controversi
+ contumelious contumeli
+ contumeliously contumeli
+ contumely contum
+ contusions contus
+ convenience conveni
+ conveniences conveni
+ conveniency conveni
+ convenient conveni
+ conveniently conveni
+ convented convent
+ conventicles conventicl
+ convents convent
+ convers conver
+ conversant convers
+ conversation convers
+ conversations convers
+ converse convers
+ conversed convers
+ converses convers
+ conversing convers
+ conversion convers
+ convert convert
+ converted convert
+ convertest convertest
+ converting convert
+ convertite convertit
+ convertites convertit
+ converts convert
+ convey convei
+ conveyance convey
+ conveyances convey
+ conveyers convey
+ conveying convei
+ convict convict
+ convicted convict
+ convince convinc
+ convinced convinc
+ convinces convinc
+ convive conviv
+ convocation convoc
+ convoy convoi
+ convulsions convuls
+ cony coni
+ cook cook
+ cookery cookeri
+ cooks cook
+ cool cool
+ cooled cool
+ cooling cool
+ cools cool
+ coop coop
+ coops coop
+ cop cop
+ copatain copatain
+ cope cope
+ cophetua cophetua
+ copied copi
+ copies copi
+ copious copiou
+ copper copper
+ copperspur copperspur
+ coppice coppic
+ copulation copul
+ copulatives copul
+ copy copi
+ cor cor
+ coragio coragio
+ coral coral
+ coram coram
+ corambus corambu
+ coranto coranto
+ corantos coranto
+ corbo corbo
+ cord cord
+ corded cord
+ cordelia cordelia
+ cordial cordial
+ cordis cordi
+ cords cord
+ core core
+ corin corin
+ corinth corinth
+ corinthian corinthian
+ coriolanus coriolanu
+ corioli corioli
+ cork cork
+ corky corki
+ cormorant cormor
+ corn corn
+ cornelia cornelia
+ cornelius corneliu
+ corner corner
+ corners corner
+ cornerstone cornerston
+ cornets cornet
+ cornish cornish
+ corns corn
+ cornuto cornuto
+ cornwall cornwal
+ corollary corollari
+ coronal coron
+ coronation coron
+ coronet coronet
+ coronets coronet
+ corporal corpor
+ corporals corpor
+ corporate corpor
+ corpse corps
+ corpulent corpul
+ correct correct
+ corrected correct
+ correcting correct
+ correction correct
+ correctioner correction
+ corrects correct
+ correspondence correspond
+ correspondent correspond
+ corresponding correspond
+ corresponsive correspons
+ corrigible corrig
+ corrival corriv
+ corrivals corriv
+ corroborate corrobor
+ corrosive corros
+ corrupt corrupt
+ corrupted corrupt
+ corrupter corrupt
+ corrupters corrupt
+ corruptible corrupt
+ corruptibly corrupt
+ corrupting corrupt
+ corruption corrupt
+ corruptly corruptli
+ corrupts corrupt
+ corse cors
+ corses cors
+ corslet corslet
+ cosmo cosmo
+ cost cost
+ costard costard
+ costermongers costermong
+ costlier costlier
+ costly costli
+ costs cost
+ cot cot
+ cote cote
+ coted cote
+ cotsall cotsal
+ cotsole cotsol
+ cotswold cotswold
+ cottage cottag
+ cottages cottag
+ cotus cotu
+ couch couch
+ couched couch
+ couching couch
+ couchings couch
+ coude coud
+ cough cough
+ coughing cough
+ could could
+ couldst couldst
+ coulter coulter
+ council council
+ councillor councillor
+ councils council
+ counsel counsel
+ counsell counsel
+ counsellor counsellor
+ counsellors counsellor
+ counselor counselor
+ counselors counselor
+ counsels counsel
+ count count
+ counted count
+ countenanc countenanc
+ countenance counten
+ countenances counten
+ counter counter
+ counterchange counterchang
+ countercheck countercheck
+ counterfeit counterfeit
+ counterfeited counterfeit
+ counterfeiting counterfeit
+ counterfeitly counterfeitli
+ counterfeits counterfeit
+ countermand countermand
+ countermands countermand
+ countermines countermin
+ counterpart counterpart
+ counterpoints counterpoint
+ counterpois counterpoi
+ counterpoise counterpois
+ counters counter
+ countervail countervail
+ countess countess
+ countesses countess
+ counties counti
+ counting count
+ countless countless
+ countries countri
+ countrv countrv
+ country countri
+ countryman countryman
+ countrymen countrymen
+ counts count
+ county counti
+ couper couper
+ couple coupl
+ coupled coupl
+ couplement couplement
+ couples coupl
+ couplet couplet
+ couplets couplet
+ cour cour
+ courage courag
+ courageous courag
+ courageously courag
+ courages courag
+ courier courier
+ couriers courier
+ couronne couronn
+ cours cour
+ course cours
+ coursed cours
+ courser courser
+ coursers courser
+ courses cours
+ coursing cours
+ court court
+ courted court
+ courteous courteou
+ courteously courteous
+ courtesan courtesan
+ courtesies courtesi
+ courtesy courtesi
+ courtezan courtezan
+ courtezans courtezan
+ courtier courtier
+ courtiers courtier
+ courtlike courtlik
+ courtly courtli
+ courtney courtnei
+ courts court
+ courtship courtship
+ cousin cousin
+ cousins cousin
+ couterfeit couterfeit
+ coutume coutum
+ covenant coven
+ covenants coven
+ covent covent
+ coventry coventri
+ cover cover
+ covered cover
+ covering cover
+ coverlet coverlet
+ covers cover
+ covert covert
+ covertly covertli
+ coverture covertur
+ covet covet
+ coveted covet
+ coveting covet
+ covetings covet
+ covetous covet
+ covetously covet
+ covetousness covet
+ covets covet
+ cow cow
+ coward coward
+ cowarded coward
+ cowardice cowardic
+ cowardly cowardli
+ cowards coward
+ cowardship cowardship
+ cowish cowish
+ cowl cowl
+ cowslip cowslip
+ cowslips cowslip
+ cox cox
+ coxcomb coxcomb
+ coxcombs coxcomb
+ coy coi
+ coystrill coystril
+ coz coz
+ cozen cozen
+ cozenage cozenag
+ cozened cozen
+ cozener cozen
+ cozeners cozen
+ cozening cozen
+ coziers cozier
+ crab crab
+ crabbed crab
+ crabs crab
+ crack crack
+ cracked crack
+ cracker cracker
+ crackers cracker
+ cracking crack
+ cracks crack
+ cradle cradl
+ cradled cradl
+ cradles cradl
+ craft craft
+ crafted craft
+ craftied crafti
+ craftier craftier
+ craftily craftili
+ crafts craft
+ craftsmen craftsmen
+ crafty crafti
+ cram cram
+ cramm cramm
+ cramp cramp
+ cramps cramp
+ crams cram
+ cranking crank
+ cranks crank
+ cranmer cranmer
+ crannied cranni
+ crannies cranni
+ cranny cranni
+ crants crant
+ crare crare
+ crash crash
+ crassus crassu
+ crav crav
+ crave crave
+ craved crave
+ craven craven
+ cravens craven
+ craves crave
+ craveth craveth
+ craving crave
+ crawl crawl
+ crawling crawl
+ crawls crawl
+ craz craz
+ crazed craze
+ crazy crazi
+ creaking creak
+ cream cream
+ create creat
+ created creat
+ creates creat
+ creating creat
+ creation creation
+ creator creator
+ creature creatur
+ creatures creatur
+ credence credenc
+ credent credent
+ credible credibl
+ credit credit
+ creditor creditor
+ creditors creditor
+ credo credo
+ credulity credul
+ credulous credul
+ creed creed
+ creek creek
+ creeks creek
+ creep creep
+ creeping creep
+ creeps creep
+ crept crept
+ crescent crescent
+ crescive cresciv
+ cressets cresset
+ cressid cressid
+ cressida cressida
+ cressids cressid
+ cressy cressi
+ crest crest
+ crested crest
+ crestfall crestfal
+ crestless crestless
+ crests crest
+ cretan cretan
+ crete crete
+ crevice crevic
+ crew crew
+ crews crew
+ crib crib
+ cribb cribb
+ cribs crib
+ cricket cricket
+ crickets cricket
+ cried cri
+ criedst criedst
+ crier crier
+ cries cri
+ criest criest
+ crieth crieth
+ crime crime
+ crimeful crime
+ crimeless crimeless
+ crimes crime
+ criminal crimin
+ crimson crimson
+ cringe cring
+ cripple crippl
+ crisp crisp
+ crisped crisp
+ crispian crispian
+ crispianus crispianu
+ crispin crispin
+ critic critic
+ critical critic
+ critics critic
+ croak croak
+ croaking croak
+ croaks croak
+ crocodile crocodil
+ cromer cromer
+ cromwell cromwel
+ crone crone
+ crook crook
+ crookback crookback
+ crooked crook
+ crooking crook
+ crop crop
+ cropp cropp
+ crosby crosbi
+ cross cross
+ crossed cross
+ crosses cross
+ crossest crossest
+ crossing cross
+ crossings cross
+ crossly crossli
+ crossness cross
+ crost crost
+ crotchets crotchet
+ crouch crouch
+ crouching crouch
+ crow crow
+ crowd crowd
+ crowded crowd
+ crowding crowd
+ crowds crowd
+ crowflowers crowflow
+ crowing crow
+ crowkeeper crowkeep
+ crown crown
+ crowned crown
+ crowner crowner
+ crownet crownet
+ crownets crownet
+ crowning crown
+ crowns crown
+ crows crow
+ crudy crudi
+ cruel cruel
+ cruell cruell
+ crueller crueller
+ cruelly cruelli
+ cruels cruel
+ cruelty cruelti
+ crum crum
+ crumble crumbl
+ crumbs crumb
+ crupper crupper
+ crusadoes crusado
+ crush crush
+ crushed crush
+ crushest crushest
+ crushing crush
+ crust crust
+ crusts crust
+ crusty crusti
+ crutch crutch
+ crutches crutch
+ cry cry
+ crying cry
+ crystal crystal
+ crystalline crystallin
+ crystals crystal
+ cub cub
+ cubbert cubbert
+ cubiculo cubiculo
+ cubit cubit
+ cubs cub
+ cuckold cuckold
+ cuckoldly cuckoldli
+ cuckolds cuckold
+ cuckoo cuckoo
+ cucullus cucullu
+ cudgel cudgel
+ cudgeled cudgel
+ cudgell cudgel
+ cudgelling cudgel
+ cudgels cudgel
+ cue cue
+ cues cue
+ cuff cuff
+ cuffs cuff
+ cuique cuiqu
+ cull cull
+ culling cull
+ cullion cullion
+ cullionly cullionli
+ cullions cullion
+ culpable culpabl
+ culverin culverin
+ cum cum
+ cumber cumber
+ cumberland cumberland
+ cunning cun
+ cunningly cunningli
+ cunnings cun
+ cuore cuor
+ cup cup
+ cupbearer cupbear
+ cupboarding cupboard
+ cupid cupid
+ cupids cupid
+ cuppele cuppel
+ cups cup
+ cur cur
+ curan curan
+ curate curat
+ curb curb
+ curbed curb
+ curbing curb
+ curbs curb
+ curd curd
+ curdied curdi
+ curds curd
+ cure cure
+ cured cure
+ cureless cureless
+ curer curer
+ cures cure
+ curfew curfew
+ curing cure
+ curio curio
+ curiosity curios
+ curious curiou
+ curiously curious
+ curl curl
+ curled curl
+ curling curl
+ curls curl
+ currance curranc
+ currants currant
+ current current
+ currents current
+ currish currish
+ curry curri
+ curs cur
+ curse curs
+ cursed curs
+ curses curs
+ cursies cursi
+ cursing curs
+ cursorary cursorari
+ curst curst
+ curster curster
+ curstest curstest
+ curstness curst
+ cursy cursi
+ curtail curtail
+ curtain curtain
+ curtains curtain
+ curtal curtal
+ curtis curti
+ curtle curtl
+ curtsied curtsi
+ curtsies curtsi
+ curtsy curtsi
+ curvet curvet
+ curvets curvet
+ cushes cush
+ cushion cushion
+ cushions cushion
+ custalorum custalorum
+ custard custard
+ custody custodi
+ custom custom
+ customary customari
+ customed custom
+ customer custom
+ customers custom
+ customs custom
+ custure custur
+ cut cut
+ cutler cutler
+ cutpurse cutpurs
+ cutpurses cutpurs
+ cuts cut
+ cutter cutter
+ cutting cut
+ cuttle cuttl
+ cxsar cxsar
+ cyclops cyclop
+ cydnus cydnu
+ cygnet cygnet
+ cygnets cygnet
+ cym cym
+ cymbals cymbal
+ cymbeline cymbelin
+ cyme cyme
+ cynic cynic
+ cynthia cynthia
+ cypress cypress
+ cypriot cypriot
+ cyprus cypru
+ cyrus cyru
+ cytherea cytherea
+ d d
+ dabbled dabbl
+ dace dace
+ dad dad
+ daedalus daedalu
+ daemon daemon
+ daff daff
+ daffed daf
+ daffest daffest
+ daffodils daffodil
+ dagger dagger
+ daggers dagger
+ dagonet dagonet
+ daily daili
+ daintier daintier
+ dainties dainti
+ daintiest daintiest
+ daintily daintili
+ daintiness dainti
+ daintry daintri
+ dainty dainti
+ daisied daisi
+ daisies daisi
+ daisy daisi
+ dale dale
+ dalliance dallianc
+ dallied dalli
+ dallies dalli
+ dally dalli
+ dallying dalli
+ dalmatians dalmatian
+ dam dam
+ damage damag
+ damascus damascu
+ damask damask
+ damasked damask
+ dame dame
+ dames dame
+ damm damm
+ damn damn
+ damnable damnabl
+ damnably damnabl
+ damnation damnat
+ damned damn
+ damns damn
+ damoiselle damoisel
+ damon damon
+ damosella damosella
+ damp damp
+ dams dam
+ damsel damsel
+ damsons damson
+ dan dan
+ danc danc
+ dance danc
+ dancer dancer
+ dances danc
+ dancing danc
+ dandle dandl
+ dandy dandi
+ dane dane
+ dang dang
+ danger danger
+ dangerous danger
+ dangerously danger
+ dangers danger
+ dangling dangl
+ daniel daniel
+ danish danish
+ dank dank
+ dankish dankish
+ danskers dansker
+ daphne daphn
+ dappled dappl
+ dapples dappl
+ dar dar
+ dardan dardan
+ dardanian dardanian
+ dardanius dardaniu
+ dare dare
+ dared dare
+ dareful dare
+ dares dare
+ darest darest
+ daring dare
+ darius dariu
+ dark dark
+ darken darken
+ darkening darken
+ darkens darken
+ darker darker
+ darkest darkest
+ darkling darkl
+ darkly darkli
+ darkness dark
+ darling darl
+ darlings darl
+ darnel darnel
+ darraign darraign
+ dart dart
+ darted dart
+ darter darter
+ dartford dartford
+ darting dart
+ darts dart
+ dash dash
+ dashes dash
+ dashing dash
+ dastard dastard
+ dastards dastard
+ dat dat
+ datchet datchet
+ date date
+ dated date
+ dateless dateless
+ dates date
+ daub daub
+ daughter daughter
+ daughters daughter
+ daunt daunt
+ daunted daunt
+ dauntless dauntless
+ dauphin dauphin
+ daventry daventri
+ davy davi
+ daw daw
+ dawn dawn
+ dawning dawn
+ daws daw
+ day dai
+ daylight daylight
+ days dai
+ dazzle dazzl
+ dazzled dazzl
+ dazzling dazzl
+ de de
+ dead dead
+ deadly deadli
+ deaf deaf
+ deafing deaf
+ deafness deaf
+ deafs deaf
+ deal deal
+ dealer dealer
+ dealers dealer
+ dealest dealest
+ dealing deal
+ dealings deal
+ deals deal
+ dealt dealt
+ dean dean
+ deanery deaneri
+ dear dear
+ dearer dearer
+ dearest dearest
+ dearly dearli
+ dearness dear
+ dears dear
+ dearth dearth
+ dearths dearth
+ death death
+ deathbed deathb
+ deathful death
+ deaths death
+ deathsman deathsman
+ deathsmen deathsmen
+ debarred debar
+ debase debas
+ debate debat
+ debated debat
+ debatement debat
+ debateth debateth
+ debating debat
+ debauch debauch
+ debile debil
+ debility debil
+ debitor debitor
+ debonair debonair
+ deborah deborah
+ debosh debosh
+ debt debt
+ debted debt
+ debtor debtor
+ debtors debtor
+ debts debt
+ debuty debuti
+ decay decai
+ decayed decai
+ decayer decay
+ decaying decai
+ decays decai
+ deceas decea
+ decease deceas
+ deceased deceas
+ deceit deceit
+ deceitful deceit
+ deceits deceit
+ deceiv deceiv
+ deceivable deceiv
+ deceive deceiv
+ deceived deceiv
+ deceiver deceiv
+ deceivers deceiv
+ deceives deceiv
+ deceivest deceivest
+ deceiveth deceiveth
+ deceiving deceiv
+ december decemb
+ decent decent
+ deceptious decepti
+ decerns decern
+ decide decid
+ decides decid
+ decimation decim
+ decipher deciph
+ deciphers deciph
+ decision decis
+ decius deciu
+ deck deck
+ decking deck
+ decks deck
+ deckt deckt
+ declare declar
+ declares declar
+ declension declens
+ declensions declens
+ declin declin
+ decline declin
+ declined declin
+ declines declin
+ declining declin
+ decoct decoct
+ decorum decorum
+ decreas decrea
+ decrease decreas
+ decreasing decreas
+ decree decre
+ decreed decre
+ decrees decre
+ decrepit decrepit
+ dedicate dedic
+ dedicated dedic
+ dedicates dedic
+ dedication dedic
+ deed deed
+ deedless deedless
+ deeds deed
+ deem deem
+ deemed deem
+ deep deep
+ deeper deeper
+ deepest deepest
+ deeply deepli
+ deeps deep
+ deepvow deepvow
+ deer deer
+ deesse deess
+ defac defac
+ deface defac
+ defaced defac
+ defacer defac
+ defacers defac
+ defacing defac
+ defam defam
+ default default
+ defeat defeat
+ defeated defeat
+ defeats defeat
+ defeatures defeatur
+ defect defect
+ defective defect
+ defects defect
+ defence defenc
+ defences defenc
+ defend defend
+ defendant defend
+ defended defend
+ defender defend
+ defenders defend
+ defending defend
+ defends defend
+ defense defens
+ defensible defens
+ defensive defens
+ defer defer
+ deferr deferr
+ defiance defianc
+ deficient defici
+ defied defi
+ defies defi
+ defil defil
+ defile defil
+ defiler defil
+ defiles defil
+ defiling defil
+ define defin
+ definement defin
+ definite definit
+ definitive definit
+ definitively definit
+ deflow deflow
+ deflower deflow
+ deflowered deflow
+ deform deform
+ deformed deform
+ deformities deform
+ deformity deform
+ deftly deftli
+ defunct defunct
+ defunction defunct
+ defuse defus
+ defy defi
+ defying defi
+ degenerate degener
+ degraded degrad
+ degree degre
+ degrees degre
+ deified deifi
+ deifying deifi
+ deign deign
+ deigned deign
+ deiphobus deiphobu
+ deities deiti
+ deity deiti
+ deja deja
+ deject deject
+ dejected deject
+ delabreth delabreth
+ delay delai
+ delayed delai
+ delaying delai
+ delays delai
+ delectable delect
+ deliberate deliber
+ delicate delic
+ delicates delic
+ delicious delici
+ deliciousness delici
+ delight delight
+ delighted delight
+ delightful delight
+ delights delight
+ delinquents delinqu
+ deliv deliv
+ deliver deliv
+ deliverance deliver
+ delivered deliv
+ delivering deliv
+ delivers deliv
+ delivery deliveri
+ delphos delpho
+ deluded delud
+ deluding delud
+ deluge delug
+ delve delv
+ delver delver
+ delves delv
+ demand demand
+ demanded demand
+ demanding demand
+ demands demand
+ demean demean
+ demeanor demeanor
+ demeanour demeanour
+ demerits demerit
+ demesnes demesn
+ demetrius demetriu
+ demi demi
+ demigod demigod
+ demise demis
+ demoiselles demoisel
+ demon demon
+ demonstrable demonstr
+ demonstrate demonstr
+ demonstrated demonstr
+ demonstrating demonstr
+ demonstration demonstr
+ demonstrative demonstr
+ demure demur
+ demurely demur
+ demuring demur
+ den den
+ denay denai
+ deni deni
+ denial denial
+ denials denial
+ denied deni
+ denier denier
+ denies deni
+ deniest deniest
+ denis deni
+ denmark denmark
+ dennis denni
+ denny denni
+ denote denot
+ denoted denot
+ denotement denot
+ denounc denounc
+ denounce denounc
+ denouncing denounc
+ dens den
+ denunciation denunci
+ deny deni
+ denying deni
+ deo deo
+ depart depart
+ departed depart
+ departest departest
+ departing depart
+ departure departur
+ depeche depech
+ depend depend
+ dependant depend
+ dependants depend
+ depended depend
+ dependence depend
+ dependences depend
+ dependency depend
+ dependent depend
+ dependents depend
+ depender depend
+ depending depend
+ depends depend
+ deplore deplor
+ deploring deplor
+ depopulate depopul
+ depos depo
+ depose depos
+ deposed depos
+ deposing depos
+ depositaries depositari
+ deprav deprav
+ depravation deprav
+ deprave deprav
+ depraved deprav
+ depraves deprav
+ depress depress
+ depriv depriv
+ deprive depriv
+ depth depth
+ depths depth
+ deputation deput
+ depute deput
+ deputed deput
+ deputies deputi
+ deputing deput
+ deputy deputi
+ deracinate deracin
+ derby derbi
+ dercetas derceta
+ dere dere
+ derides derid
+ derision deris
+ deriv deriv
+ derivation deriv
+ derivative deriv
+ derive deriv
+ derived deriv
+ derives deriv
+ derogate derog
+ derogately derog
+ derogation derog
+ des de
+ desartless desartless
+ descant descant
+ descend descend
+ descended descend
+ descending descend
+ descends descend
+ descension descens
+ descent descent
+ descents descent
+ describe describ
+ described describ
+ describes describ
+ descried descri
+ description descript
+ descriptions descript
+ descry descri
+ desdemon desdemon
+ desdemona desdemona
+ desert desert
+ deserts desert
+ deserv deserv
+ deserve deserv
+ deserved deserv
+ deservedly deservedli
+ deserver deserv
+ deservers deserv
+ deserves deserv
+ deservest deservest
+ deserving deserv
+ deservings deserv
+ design design
+ designment design
+ designments design
+ designs design
+ desir desir
+ desire desir
+ desired desir
+ desirers desir
+ desires desir
+ desirest desirest
+ desiring desir
+ desirous desir
+ desist desist
+ desk desk
+ desolate desol
+ desolation desol
+ desp desp
+ despair despair
+ despairing despair
+ despairs despair
+ despatch despatch
+ desperate desper
+ desperately desper
+ desperation desper
+ despis despi
+ despise despis
+ despised despis
+ despiser despis
+ despiseth despiseth
+ despising despis
+ despite despit
+ despiteful despit
+ despoiled despoil
+ dest dest
+ destin destin
+ destined destin
+ destinies destini
+ destiny destini
+ destitute destitut
+ destroy destroi
+ destroyed destroi
+ destroyer destroy
+ destroyers destroy
+ destroying destroi
+ destroys destroi
+ destruction destruct
+ destructions destruct
+ det det
+ detain detain
+ detains detain
+ detect detect
+ detected detect
+ detecting detect
+ detection detect
+ detector detector
+ detects detect
+ detention detent
+ determin determin
+ determinate determin
+ determination determin
+ determinations determin
+ determine determin
+ determined determin
+ determines determin
+ detest detest
+ detestable detest
+ detested detest
+ detesting detest
+ detests detest
+ detract detract
+ detraction detract
+ detractions detract
+ deucalion deucalion
+ deuce deuc
+ deum deum
+ deux deux
+ devant devant
+ devesting devest
+ device devic
+ devices devic
+ devil devil
+ devilish devilish
+ devils devil
+ devis devi
+ devise devis
+ devised devis
+ devises devis
+ devising devis
+ devoid devoid
+ devonshire devonshir
+ devote devot
+ devoted devot
+ devotion devot
+ devour devour
+ devoured devour
+ devourers devour
+ devouring devour
+ devours devour
+ devout devout
+ devoutly devoutli
+ dew dew
+ dewberries dewberri
+ dewdrops dewdrop
+ dewlap dewlap
+ dewlapp dewlapp
+ dews dew
+ dewy dewi
+ dexter dexter
+ dexteriously dexteri
+ dexterity dexter
+ di di
+ diable diabl
+ diablo diablo
+ diadem diadem
+ dial dial
+ dialect dialect
+ dialogue dialogu
+ dialogued dialogu
+ dials dial
+ diameter diamet
+ diamond diamond
+ diamonds diamond
+ dian dian
+ diana diana
+ diaper diaper
+ dibble dibbl
+ dic dic
+ dice dice
+ dicers dicer
+ dich dich
+ dick dick
+ dickens dicken
+ dickon dickon
+ dicky dicki
+ dictator dictat
+ diction diction
+ dictynna dictynna
+ did did
+ diddle diddl
+ didest didest
+ dido dido
+ didst didst
+ die die
+ died di
+ diedst diedst
+ dies di
+ diest diest
+ diet diet
+ dieted diet
+ dieter dieter
+ dieu dieu
+ diff diff
+ differ differ
+ difference differ
+ differences differ
+ differency differ
+ different differ
+ differing differ
+ differs differ
+ difficile difficil
+ difficult difficult
+ difficulties difficulti
+ difficulty difficulti
+ diffidence diffid
+ diffidences diffid
+ diffus diffu
+ diffused diffus
+ diffusest diffusest
+ dig dig
+ digest digest
+ digested digest
+ digestion digest
+ digestions digest
+ digg digg
+ digging dig
+ dighton dighton
+ dignified dignifi
+ dignifies dignifi
+ dignify dignifi
+ dignities digniti
+ dignity digniti
+ digress digress
+ digressing digress
+ digression digress
+ digs dig
+ digt digt
+ dilate dilat
+ dilated dilat
+ dilations dilat
+ dilatory dilatori
+ dild dild
+ dildos dildo
+ dilemma dilemma
+ dilemmas dilemma
+ diligence dilig
+ diligent dilig
+ diluculo diluculo
+ dim dim
+ dimension dimens
+ dimensions dimens
+ diminish diminish
+ diminishing diminish
+ diminution diminut
+ diminutive diminut
+ diminutives diminut
+ dimm dimm
+ dimmed dim
+ dimming dim
+ dimpled dimpl
+ dimples dimpl
+ dims dim
+ din din
+ dine dine
+ dined dine
+ diner diner
+ dines dine
+ ding ding
+ dining dine
+ dinner dinner
+ dinners dinner
+ dinnertime dinnertim
+ dint dint
+ diomed diom
+ diomede diomed
+ diomedes diomed
+ dion dion
+ dip dip
+ dipp dipp
+ dipping dip
+ dips dip
+ dir dir
+ dire dire
+ direct direct
+ directed direct
+ directing direct
+ direction direct
+ directions direct
+ directitude directitud
+ directive direct
+ directly directli
+ directs direct
+ direful dire
+ direness dire
+ direst direst
+ dirge dirg
+ dirges dirg
+ dirt dirt
+ dirty dirti
+ dis di
+ disability disabl
+ disable disabl
+ disabled disabl
+ disabling disabl
+ disadvantage disadvantag
+ disagree disagre
+ disallow disallow
+ disanimates disanim
+ disannul disannul
+ disannuls disannul
+ disappointed disappoint
+ disarm disarm
+ disarmed disarm
+ disarmeth disarmeth
+ disarms disarm
+ disaster disast
+ disasters disast
+ disastrous disastr
+ disbench disbench
+ disbranch disbranch
+ disburdened disburden
+ disburs disbur
+ disburse disburs
+ disbursed disburs
+ discandy discandi
+ discandying discandi
+ discard discard
+ discarded discard
+ discase discas
+ discased discas
+ discern discern
+ discerner discern
+ discerning discern
+ discernings discern
+ discerns discern
+ discharg discharg
+ discharge discharg
+ discharged discharg
+ discharging discharg
+ discipled discipl
+ disciples discipl
+ disciplin disciplin
+ discipline disciplin
+ disciplined disciplin
+ disciplines disciplin
+ disclaim disclaim
+ disclaiming disclaim
+ disclaims disclaim
+ disclos disclo
+ disclose disclos
+ disclosed disclos
+ discloses disclos
+ discolour discolour
+ discoloured discolour
+ discolours discolour
+ discomfit discomfit
+ discomfited discomfit
+ discomfiture discomfitur
+ discomfort discomfort
+ discomfortable discomfort
+ discommend discommend
+ disconsolate disconsol
+ discontent discont
+ discontented discont
+ discontentedly discontentedli
+ discontenting discont
+ discontents discont
+ discontinue discontinu
+ discontinued discontinu
+ discord discord
+ discordant discord
+ discords discord
+ discourse discours
+ discoursed discours
+ discourser discours
+ discourses discours
+ discoursive discours
+ discourtesy discourtesi
+ discov discov
+ discover discov
+ discovered discov
+ discoverers discover
+ discoveries discoveri
+ discovering discov
+ discovers discov
+ discovery discoveri
+ discredit discredit
+ discredited discredit
+ discredits discredit
+ discreet discreet
+ discreetly discreetli
+ discretion discret
+ discretions discret
+ discuss discuss
+ disdain disdain
+ disdained disdain
+ disdaineth disdaineth
+ disdainful disdain
+ disdainfully disdainfulli
+ disdaining disdain
+ disdains disdain
+ disdnguish disdnguish
+ diseas disea
+ disease diseas
+ diseased diseas
+ diseases diseas
+ disedg disedg
+ disembark disembark
+ disfigure disfigur
+ disfigured disfigur
+ disfurnish disfurnish
+ disgorge disgorg
+ disgrac disgrac
+ disgrace disgrac
+ disgraced disgrac
+ disgraceful disgrac
+ disgraces disgrac
+ disgracing disgrac
+ disgracious disgraci
+ disguis disgui
+ disguise disguis
+ disguised disguis
+ disguiser disguis
+ disguises disguis
+ disguising disguis
+ dish dish
+ dishabited dishabit
+ dishclout dishclout
+ dishearten dishearten
+ disheartens dishearten
+ dishes dish
+ dishonest dishonest
+ dishonestly dishonestli
+ dishonesty dishonesti
+ dishonor dishonor
+ dishonorable dishonor
+ dishonors dishonor
+ dishonour dishonour
+ dishonourable dishonour
+ dishonoured dishonour
+ dishonours dishonour
+ disinherit disinherit
+ disinherited disinherit
+ disjoin disjoin
+ disjoining disjoin
+ disjoins disjoin
+ disjoint disjoint
+ disjunction disjunct
+ dislik dislik
+ dislike dislik
+ disliken disliken
+ dislikes dislik
+ dislimns dislimn
+ dislocate disloc
+ dislodg dislodg
+ disloyal disloy
+ disloyalty disloyalti
+ dismal dismal
+ dismantle dismantl
+ dismantled dismantl
+ dismask dismask
+ dismay dismai
+ dismayed dismai
+ dismemb dismemb
+ dismember dismemb
+ dismes dism
+ dismiss dismiss
+ dismissed dismiss
+ dismissing dismiss
+ dismission dismiss
+ dismount dismount
+ dismounted dismount
+ disnatur disnatur
+ disobedience disobedi
+ disobedient disobedi
+ disobey disobei
+ disobeys disobei
+ disorb disorb
+ disorder disord
+ disordered disord
+ disorderly disorderli
+ disorders disord
+ disparage disparag
+ disparagement disparag
+ disparagements disparag
+ dispark dispark
+ dispatch dispatch
+ dispensation dispens
+ dispense dispens
+ dispenses dispens
+ dispers disper
+ disperse dispers
+ dispersed dispers
+ dispersedly dispersedli
+ dispersing dispers
+ dispiteous dispit
+ displac displac
+ displace displac
+ displaced displac
+ displant displant
+ displanting displant
+ display displai
+ displayed displai
+ displeas displea
+ displease displeas
+ displeased displeas
+ displeasing displeas
+ displeasure displeasur
+ displeasures displeasur
+ disponge dispong
+ disport disport
+ disports disport
+ dispos dispo
+ dispose dispos
+ disposed dispos
+ disposer dispos
+ disposing dispos
+ disposition disposit
+ dispositions disposit
+ dispossess dispossess
+ dispossessing dispossess
+ disprais disprai
+ dispraise disprais
+ dispraising disprais
+ dispraisingly dispraisingli
+ dispropertied disproperti
+ disproportion disproport
+ disproportioned disproport
+ disprov disprov
+ disprove disprov
+ disproved disprov
+ dispursed dispurs
+ disputable disput
+ disputation disput
+ disputations disput
+ dispute disput
+ disputed disput
+ disputes disput
+ disputing disput
+ disquantity disquant
+ disquiet disquiet
+ disquietly disquietli
+ disrelish disrelish
+ disrobe disrob
+ disseat disseat
+ dissemble dissembl
+ dissembled dissembl
+ dissembler dissembl
+ dissemblers dissembl
+ dissembling dissembl
+ dissembly dissembl
+ dissension dissens
+ dissensions dissens
+ dissentious dissenti
+ dissever dissev
+ dissipation dissip
+ dissolute dissolut
+ dissolutely dissolut
+ dissolution dissolut
+ dissolutions dissolut
+ dissolv dissolv
+ dissolve dissolv
+ dissolved dissolv
+ dissolves dissolv
+ dissuade dissuad
+ dissuaded dissuad
+ distaff distaff
+ distaffs distaff
+ distain distain
+ distains distain
+ distance distanc
+ distant distant
+ distaste distast
+ distasted distast
+ distasteful distast
+ distemp distemp
+ distemper distemp
+ distemperature distemperatur
+ distemperatures distemperatur
+ distempered distemp
+ distempering distemp
+ distil distil
+ distill distil
+ distillation distil
+ distilled distil
+ distills distil
+ distilment distil
+ distinct distinct
+ distinction distinct
+ distinctly distinctli
+ distingue distingu
+ distinguish distinguish
+ distinguishes distinguish
+ distinguishment distinguish
+ distract distract
+ distracted distract
+ distractedly distractedli
+ distraction distract
+ distractions distract
+ distracts distract
+ distrain distrain
+ distraught distraught
+ distress distress
+ distressed distress
+ distresses distress
+ distressful distress
+ distribute distribut
+ distributed distribut
+ distribution distribut
+ distrust distrust
+ distrustful distrust
+ disturb disturb
+ disturbed disturb
+ disturbers disturb
+ disturbing disturb
+ disunite disunit
+ disvalued disvalu
+ disvouch disvouch
+ dit dit
+ ditch ditch
+ ditchers ditcher
+ ditches ditch
+ dites dite
+ ditties ditti
+ ditty ditti
+ diurnal diurnal
+ div div
+ dive dive
+ diver diver
+ divers diver
+ diversely divers
+ diversity divers
+ divert divert
+ diverted divert
+ diverts divert
+ dives dive
+ divest divest
+ dividable divid
+ dividant divid
+ divide divid
+ divided divid
+ divides divid
+ divideth divideth
+ divin divin
+ divination divin
+ divine divin
+ divinely divin
+ divineness divin
+ diviner divin
+ divines divin
+ divinest divinest
+ divining divin
+ divinity divin
+ division divis
+ divisions divis
+ divorc divorc
+ divorce divorc
+ divorced divorc
+ divorcement divorc
+ divorcing divorc
+ divulg divulg
+ divulge divulg
+ divulged divulg
+ divulging divulg
+ dizy dizi
+ dizzy dizzi
+ do do
+ doating doat
+ dobbin dobbin
+ dock dock
+ docks dock
+ doct doct
+ doctor doctor
+ doctors doctor
+ doctrine doctrin
+ document document
+ dodge dodg
+ doe doe
+ doer doer
+ doers doer
+ does doe
+ doest doest
+ doff doff
+ dog dog
+ dogberry dogberri
+ dogfish dogfish
+ dogg dogg
+ dogged dog
+ dogs dog
+ doigts doigt
+ doing do
+ doings do
+ doit doit
+ doits doit
+ dolabella dolabella
+ dole dole
+ doleful dole
+ doll doll
+ dollar dollar
+ dollars dollar
+ dolor dolor
+ dolorous dolor
+ dolour dolour
+ dolours dolour
+ dolphin dolphin
+ dolt dolt
+ dolts dolt
+ domestic domest
+ domestics domest
+ dominance domin
+ dominations domin
+ dominator domin
+ domine domin
+ domineer domin
+ domineering domin
+ dominical domin
+ dominion dominion
+ dominions dominion
+ domitius domitiu
+ dommelton dommelton
+ don don
+ donalbain donalbain
+ donation donat
+ donc donc
+ doncaster doncast
+ done done
+ dong dong
+ donn donn
+ donne donn
+ donner donner
+ donnerai donnerai
+ doom doom
+ doomsday doomsdai
+ door door
+ doorkeeper doorkeep
+ doors door
+ dorcas dorca
+ doreus doreu
+ doricles doricl
+ dormouse dormous
+ dorothy dorothi
+ dorset dorset
+ dorsetshire dorsetshir
+ dost dost
+ dotage dotag
+ dotant dotant
+ dotard dotard
+ dotards dotard
+ dote dote
+ doted dote
+ doters doter
+ dotes dote
+ doteth doteth
+ doth doth
+ doting dote
+ double doubl
+ doubled doubl
+ doubleness doubl
+ doubler doubler
+ doublet doublet
+ doublets doublet
+ doubling doubl
+ doubly doubli
+ doubt doubt
+ doubted doubt
+ doubtful doubt
+ doubtfully doubtfulli
+ doubting doubt
+ doubtless doubtless
+ doubts doubt
+ doug doug
+ dough dough
+ doughty doughti
+ doughy doughi
+ douglas dougla
+ dout dout
+ doute dout
+ douts dout
+ dove dove
+ dovehouse dovehous
+ dover dover
+ doves dove
+ dow dow
+ dowager dowag
+ dowdy dowdi
+ dower dower
+ dowerless dowerless
+ dowers dower
+ dowlas dowla
+ dowle dowl
+ down down
+ downfall downfal
+ downright downright
+ downs down
+ downstairs downstair
+ downtrod downtrod
+ downward downward
+ downwards downward
+ downy downi
+ dowries dowri
+ dowry dowri
+ dowsabel dowsabel
+ doxy doxi
+ dozed doze
+ dozen dozen
+ dozens dozen
+ dozy dozi
+ drab drab
+ drabbing drab
+ drabs drab
+ drachma drachma
+ drachmas drachma
+ draff draff
+ drag drag
+ dragg dragg
+ dragged drag
+ dragging drag
+ dragon dragon
+ dragonish dragonish
+ dragons dragon
+ drain drain
+ drained drain
+ drains drain
+ drake drake
+ dram dram
+ dramatis dramati
+ drank drank
+ draught draught
+ draughts draught
+ drave drave
+ draw draw
+ drawbridge drawbridg
+ drawer drawer
+ drawers drawer
+ draweth draweth
+ drawing draw
+ drawling drawl
+ drawn drawn
+ draws draw
+ drayman drayman
+ draymen draymen
+ dread dread
+ dreaded dread
+ dreadful dread
+ dreadfully dreadfulli
+ dreading dread
+ dreads dread
+ dream dream
+ dreamer dreamer
+ dreamers dreamer
+ dreaming dream
+ dreams dream
+ dreamt dreamt
+ drearning drearn
+ dreary dreari
+ dreg dreg
+ dregs dreg
+ drench drench
+ drenched drench
+ dress dress
+ dressed dress
+ dresser dresser
+ dressing dress
+ dressings dress
+ drest drest
+ drew drew
+ dribbling dribbl
+ dried dri
+ drier drier
+ dries dri
+ drift drift
+ drily drili
+ drink drink
+ drinketh drinketh
+ drinking drink
+ drinkings drink
+ drinks drink
+ driv driv
+ drive drive
+ drivelling drivel
+ driven driven
+ drives drive
+ driveth driveth
+ driving drive
+ drizzle drizzl
+ drizzled drizzl
+ drizzles drizzl
+ droit droit
+ drollery drolleri
+ dromio dromio
+ dromios dromio
+ drone drone
+ drones drone
+ droop droop
+ droopeth droopeth
+ drooping droop
+ droops droop
+ drop drop
+ dropheir dropheir
+ droplets droplet
+ dropp dropp
+ dropper dropper
+ droppeth droppeth
+ dropping drop
+ droppings drop
+ drops drop
+ dropsied dropsi
+ dropsies dropsi
+ dropsy dropsi
+ dropt dropt
+ dross dross
+ drossy drossi
+ drought drought
+ drove drove
+ droven droven
+ drovier drovier
+ drown drown
+ drowned drown
+ drowning drown
+ drowns drown
+ drows drow
+ drowse drows
+ drowsily drowsili
+ drowsiness drowsi
+ drowsy drowsi
+ drudge drudg
+ drudgery drudgeri
+ drudges drudg
+ drug drug
+ drugg drugg
+ drugs drug
+ drum drum
+ drumble drumbl
+ drummer drummer
+ drumming drum
+ drums drum
+ drunk drunk
+ drunkard drunkard
+ drunkards drunkard
+ drunken drunken
+ drunkenly drunkenli
+ drunkenness drunken
+ dry dry
+ dryness dryness
+ dst dst
+ du du
+ dub dub
+ dubb dubb
+ ducat ducat
+ ducats ducat
+ ducdame ducdam
+ duchess duchess
+ duchies duchi
+ duchy duchi
+ duck duck
+ ducking duck
+ ducks duck
+ dudgeon dudgeon
+ due due
+ duellist duellist
+ duello duello
+ duer duer
+ dues due
+ duff duff
+ dug dug
+ dugs dug
+ duke duke
+ dukedom dukedom
+ dukedoms dukedom
+ dukes duke
+ dulcet dulcet
+ dulche dulch
+ dull dull
+ dullard dullard
+ duller duller
+ dullest dullest
+ dulling dull
+ dullness dull
+ dulls dull
+ dully dulli
+ dulness dul
+ duly duli
+ dumain dumain
+ dumb dumb
+ dumbe dumb
+ dumbly dumbl
+ dumbness dumb
+ dump dump
+ dumps dump
+ dun dun
+ duncan duncan
+ dung dung
+ dungeon dungeon
+ dungeons dungeon
+ dunghill dunghil
+ dunghills dunghil
+ dungy dungi
+ dunnest dunnest
+ dunsinane dunsinan
+ dunsmore dunsmor
+ dunstable dunstabl
+ dupp dupp
+ durance duranc
+ during dure
+ durst durst
+ dusky duski
+ dust dust
+ dusted dust
+ dusty dusti
+ dutch dutch
+ dutchman dutchman
+ duteous duteou
+ duties duti
+ dutiful duti
+ duty duti
+ dwarf dwarf
+ dwarfish dwarfish
+ dwell dwell
+ dwellers dweller
+ dwelling dwell
+ dwells dwell
+ dwelt dwelt
+ dwindle dwindl
+ dy dy
+ dye dye
+ dyed dy
+ dyer dyer
+ dying dy
+ e e
+ each each
+ eager eager
+ eagerly eagerli
+ eagerness eager
+ eagle eagl
+ eagles eagl
+ eaning ean
+ eanlings eanl
+ ear ear
+ earing ear
+ earl earl
+ earldom earldom
+ earlier earlier
+ earliest earliest
+ earliness earli
+ earls earl
+ early earli
+ earn earn
+ earned earn
+ earnest earnest
+ earnestly earnestli
+ earnestness earnest
+ earns earn
+ ears ear
+ earth earth
+ earthen earthen
+ earthlier earthlier
+ earthly earthli
+ earthquake earthquak
+ earthquakes earthquak
+ earthy earthi
+ eas ea
+ ease eas
+ eased eas
+ easeful eas
+ eases eas
+ easier easier
+ easiest easiest
+ easiliest easiliest
+ easily easili
+ easiness easi
+ easing eas
+ east east
+ eastcheap eastcheap
+ easter easter
+ eastern eastern
+ eastward eastward
+ easy easi
+ eat eat
+ eaten eaten
+ eater eater
+ eaters eater
+ eating eat
+ eats eat
+ eaux eaux
+ eaves eav
+ ebb ebb
+ ebbing eb
+ ebbs ebb
+ ebon ebon
+ ebony eboni
+ ebrew ebrew
+ ecce ecc
+ echapper echapp
+ echo echo
+ echoes echo
+ eclips eclip
+ eclipse eclips
+ eclipses eclips
+ ecolier ecoli
+ ecoutez ecoutez
+ ecstacy ecstaci
+ ecstasies ecstasi
+ ecstasy ecstasi
+ ecus ecu
+ eden eden
+ edg edg
+ edgar edgar
+ edge edg
+ edged edg
+ edgeless edgeless
+ edges edg
+ edict edict
+ edicts edict
+ edifice edific
+ edifices edific
+ edified edifi
+ edifies edifi
+ edition edit
+ edm edm
+ edmund edmund
+ edmunds edmund
+ edmundsbury edmundsburi
+ educate educ
+ educated educ
+ education educ
+ edward edward
+ eel eel
+ eels eel
+ effect effect
+ effected effect
+ effectless effectless
+ effects effect
+ effectual effectu
+ effectually effectu
+ effeminate effemin
+ effigies effigi
+ effus effu
+ effuse effus
+ effusion effus
+ eftest eftest
+ egal egal
+ egally egal
+ eget eget
+ egeus egeu
+ egg egg
+ eggs egg
+ eggshell eggshel
+ eglamour eglamour
+ eglantine eglantin
+ egma egma
+ ego ego
+ egregious egregi
+ egregiously egregi
+ egress egress
+ egypt egypt
+ egyptian egyptian
+ egyptians egyptian
+ eie eie
+ eight eight
+ eighteen eighteen
+ eighth eighth
+ eightpenny eightpenni
+ eighty eighti
+ eisel eisel
+ either either
+ eject eject
+ eke ek
+ el el
+ elbe elb
+ elbow elbow
+ elbows elbow
+ eld eld
+ elder elder
+ elders elder
+ eldest eldest
+ eleanor eleanor
+ elect elect
+ elected elect
+ election elect
+ elegancy eleg
+ elegies elegi
+ element element
+ elements element
+ elephant eleph
+ elephants eleph
+ elevated elev
+ eleven eleven
+ eleventh eleventh
+ elf elf
+ elflocks elflock
+ eliads eliad
+ elinor elinor
+ elizabeth elizabeth
+ ell ell
+ elle ell
+ ellen ellen
+ elm elm
+ eloquence eloqu
+ eloquent eloqu
+ else els
+ elsewhere elsewher
+ elsinore elsinor
+ eltham eltham
+ elves elv
+ elvish elvish
+ ely eli
+ elysium elysium
+ em em
+ emballing embal
+ embalm embalm
+ embalms embalm
+ embark embark
+ embarked embark
+ embarquements embarqu
+ embassade embassad
+ embassage embassag
+ embassies embassi
+ embassy embassi
+ embattailed embattail
+ embattl embattl
+ embattle embattl
+ embay embai
+ embellished embellish
+ embers ember
+ emblaze emblaz
+ emblem emblem
+ emblems emblem
+ embodied embodi
+ embold embold
+ emboldens embolden
+ emboss emboss
+ embossed emboss
+ embounded embound
+ embowel embowel
+ embowell embowel
+ embrac embrac
+ embrace embrac
+ embraced embrac
+ embracement embrac
+ embracements embrac
+ embraces embrac
+ embracing embrac
+ embrasures embrasur
+ embroider embroid
+ embroidery embroideri
+ emhracing emhrac
+ emilia emilia
+ eminence emin
+ eminent emin
+ eminently emin
+ emmanuel emmanuel
+ emnity emniti
+ empale empal
+ emperal emper
+ emperess emperess
+ emperial emperi
+ emperor emperor
+ empery emperi
+ emphasis emphasi
+ empire empir
+ empirics empir
+ empiricutic empiricut
+ empleached empleach
+ employ emploi
+ employed emploi
+ employer employ
+ employment employ
+ employments employ
+ empoison empoison
+ empress empress
+ emptied empti
+ emptier emptier
+ empties empti
+ emptiness empti
+ empty empti
+ emptying empti
+ emulate emul
+ emulation emul
+ emulations emul
+ emulator emul
+ emulous emul
+ en en
+ enact enact
+ enacted enact
+ enacts enact
+ enactures enactur
+ enamell enamel
+ enamelled enamel
+ enamour enamour
+ enamoured enamour
+ enanmour enanmour
+ encamp encamp
+ encamped encamp
+ encave encav
+ enceladus enceladu
+ enchaf enchaf
+ enchafed enchaf
+ enchant enchant
+ enchanted enchant
+ enchanting enchant
+ enchantingly enchantingli
+ enchantment enchant
+ enchantress enchantress
+ enchants enchant
+ enchas encha
+ encircle encircl
+ encircled encircl
+ enclos enclo
+ enclose enclos
+ enclosed enclos
+ encloses enclos
+ encloseth encloseth
+ enclosing enclos
+ enclouded encloud
+ encompass encompass
+ encompassed encompass
+ encompasseth encompasseth
+ encompassment encompass
+ encore encor
+ encorporal encorpor
+ encount encount
+ encounter encount
+ encountered encount
+ encounters encount
+ encourage encourag
+ encouraged encourag
+ encouragement encourag
+ encrimsoned encrimson
+ encroaching encroach
+ encumb encumb
+ end end
+ endamage endamag
+ endamagement endamag
+ endanger endang
+ endart endart
+ endear endear
+ endeared endear
+ endeavour endeavour
+ endeavours endeavour
+ ended end
+ ender ender
+ ending end
+ endings end
+ endite endit
+ endless endless
+ endow endow
+ endowed endow
+ endowments endow
+ endows endow
+ ends end
+ endu endu
+ endue endu
+ endur endur
+ endurance endur
+ endure endur
+ endured endur
+ endures endur
+ enduring endur
+ endymion endymion
+ eneas enea
+ enemies enemi
+ enemy enemi
+ enernies enerni
+ enew enew
+ enfeebled enfeebl
+ enfeebles enfeebl
+ enfeoff enfeoff
+ enfetter enfett
+ enfoldings enfold
+ enforc enforc
+ enforce enforc
+ enforced enforc
+ enforcedly enforcedli
+ enforcement enforc
+ enforces enforc
+ enforcest enforcest
+ enfranched enfranch
+ enfranchis enfranchi
+ enfranchise enfranchis
+ enfranchised enfranchis
+ enfranchisement enfranchis
+ enfreed enfre
+ enfreedoming enfreedom
+ engag engag
+ engage engag
+ engaged engag
+ engagements engag
+ engaging engag
+ engaol engaol
+ engend engend
+ engender engend
+ engenders engend
+ engilds engild
+ engine engin
+ engineer engin
+ enginer engin
+ engines engin
+ engirt engirt
+ england england
+ english english
+ englishman englishman
+ englishmen englishmen
+ engluts englut
+ englutted englut
+ engraffed engraf
+ engraft engraft
+ engrafted engraft
+ engrav engrav
+ engrave engrav
+ engross engross
+ engrossed engross
+ engrossest engrossest
+ engrossing engross
+ engrossments engross
+ enguard enguard
+ enigma enigma
+ enigmatical enigmat
+ enjoin enjoin
+ enjoined enjoin
+ enjoy enjoi
+ enjoyed enjoi
+ enjoyer enjoy
+ enjoying enjoi
+ enjoys enjoi
+ enkindle enkindl
+ enkindled enkindl
+ enlard enlard
+ enlarg enlarg
+ enlarge enlarg
+ enlarged enlarg
+ enlargement enlarg
+ enlargeth enlargeth
+ enlighten enlighten
+ enlink enlink
+ enmesh enmesh
+ enmities enmiti
+ enmity enmiti
+ ennoble ennobl
+ ennobled ennobl
+ enobarb enobarb
+ enobarbus enobarbu
+ enon enon
+ enormity enorm
+ enormous enorm
+ enough enough
+ enow enow
+ enpatron enpatron
+ enpierced enpierc
+ enquir enquir
+ enquire enquir
+ enquired enquir
+ enrag enrag
+ enrage enrag
+ enraged enrag
+ enrages enrag
+ enrank enrank
+ enrapt enrapt
+ enrich enrich
+ enriched enrich
+ enriches enrich
+ enridged enridg
+ enrings enr
+ enrob enrob
+ enrobe enrob
+ enroll enrol
+ enrolled enrol
+ enrooted enroot
+ enrounded enround
+ enschedul enschedul
+ ensconce ensconc
+ ensconcing ensconc
+ enseamed enseam
+ ensear ensear
+ enseigne enseign
+ enseignez enseignez
+ ensemble ensembl
+ enshelter enshelt
+ enshielded enshield
+ enshrines enshrin
+ ensign ensign
+ ensigns ensign
+ enskied enski
+ ensman ensman
+ ensnare ensnar
+ ensnared ensnar
+ ensnareth ensnareth
+ ensteep ensteep
+ ensu ensu
+ ensue ensu
+ ensued ensu
+ ensues ensu
+ ensuing ensu
+ enswathed enswath
+ ent ent
+ entail entail
+ entame entam
+ entangled entangl
+ entangles entangl
+ entendre entendr
+ enter enter
+ entered enter
+ entering enter
+ enterprise enterpris
+ enterprises enterpris
+ enters enter
+ entertain entertain
+ entertained entertain
+ entertainer entertain
+ entertaining entertain
+ entertainment entertain
+ entertainments entertain
+ enthrall enthral
+ enthralled enthral
+ enthron enthron
+ enthroned enthron
+ entice entic
+ enticements entic
+ enticing entic
+ entire entir
+ entirely entir
+ entitle entitl
+ entitled entitl
+ entitling entitl
+ entomb entomb
+ entombed entomb
+ entrails entrail
+ entrance entranc
+ entrances entranc
+ entrap entrap
+ entrapp entrapp
+ entre entr
+ entreat entreat
+ entreated entreat
+ entreaties entreati
+ entreating entreat
+ entreatments entreat
+ entreats entreat
+ entreaty entreati
+ entrench entrench
+ entry entri
+ entwist entwist
+ envelop envelop
+ envenom envenom
+ envenomed envenom
+ envenoms envenom
+ envied envi
+ envies envi
+ envious enviou
+ enviously envious
+ environ environ
+ environed environ
+ envoy envoi
+ envy envi
+ envying envi
+ enwheel enwheel
+ enwombed enwomb
+ enwraps enwrap
+ ephesian ephesian
+ ephesians ephesian
+ ephesus ephesu
+ epicure epicur
+ epicurean epicurean
+ epicures epicur
+ epicurism epicur
+ epicurus epicuru
+ epidamnum epidamnum
+ epidaurus epidauru
+ epigram epigram
+ epilepsy epilepsi
+ epileptic epilept
+ epilogue epilogu
+ epilogues epilogu
+ epistles epistl
+ epistrophus epistrophu
+ epitaph epitaph
+ epitaphs epitaph
+ epithet epithet
+ epitheton epitheton
+ epithets epithet
+ epitome epitom
+ equal equal
+ equalities equal
+ equality equal
+ equall equal
+ equally equal
+ equalness equal
+ equals equal
+ equinoctial equinocti
+ equinox equinox
+ equipage equipag
+ equity equiti
+ equivocal equivoc
+ equivocate equivoc
+ equivocates equivoc
+ equivocation equivoc
+ equivocator equivoc
+ er er
+ erbear erbear
+ erbearing erbear
+ erbears erbear
+ erbeat erbeat
+ erblows erblow
+ erboard erboard
+ erborne erborn
+ ercame ercam
+ ercast ercast
+ ercharg ercharg
+ ercharged ercharg
+ ercharging ercharg
+ ercles ercl
+ ercome ercom
+ ercover ercov
+ ercrows ercrow
+ erdoing erdo
+ ere er
+ erebus erebu
+ erect erect
+ erected erect
+ erecting erect
+ erection erect
+ erects erect
+ erewhile erewhil
+ erflourish erflourish
+ erflow erflow
+ erflowing erflow
+ erflows erflow
+ erfraught erfraught
+ erga erga
+ ergalled ergal
+ erglanced erglanc
+ ergo ergo
+ ergone ergon
+ ergrow ergrow
+ ergrown ergrown
+ ergrowth ergrowth
+ erhang erhang
+ erhanging erhang
+ erhasty erhasti
+ erhear erhear
+ erheard erheard
+ eringoes eringo
+ erjoy erjoi
+ erleap erleap
+ erleaps erleap
+ erleavens erleaven
+ erlook erlook
+ erlooking erlook
+ ermaster ermast
+ ermengare ermengar
+ ermount ermount
+ ern ern
+ ernight ernight
+ eros ero
+ erpaid erpaid
+ erparted erpart
+ erpast erpast
+ erpays erpai
+ erpeer erpeer
+ erperch erperch
+ erpicturing erpictur
+ erpingham erpingham
+ erposting erpost
+ erpow erpow
+ erpress erpress
+ erpressed erpress
+ err err
+ errand errand
+ errands errand
+ errant errant
+ errate errat
+ erraught erraught
+ erreaches erreach
+ erred er
+ errest errest
+ erring er
+ erroneous erron
+ error error
+ errors error
+ errs err
+ errule errul
+ errun errun
+ erset erset
+ ershade ershad
+ ershades ershad
+ ershine ershin
+ ershot ershot
+ ersized ersiz
+ erskip erskip
+ erslips erslip
+ erspreads erspread
+ erst erst
+ erstare erstar
+ erstep erstep
+ erstunk erstunk
+ ersway erswai
+ ersways erswai
+ erswell erswel
+ erta erta
+ ertake ertak
+ erteemed erteem
+ erthrow erthrow
+ erthrown erthrown
+ erthrows erthrow
+ ertook ertook
+ ertop ertop
+ ertopping ertop
+ ertrip ertrip
+ erturn erturn
+ erudition erudit
+ eruption erupt
+ eruptions erupt
+ ervalues ervalu
+ erwalk erwalk
+ erwatch erwatch
+ erween erween
+ erweens erween
+ erweigh erweigh
+ erweighs erweigh
+ erwhelm erwhelm
+ erwhelmed erwhelm
+ erworn erworn
+ es es
+ escalus escalu
+ escap escap
+ escape escap
+ escaped escap
+ escapes escap
+ eschew eschew
+ escoted escot
+ esill esil
+ especial especi
+ especially especi
+ esperance esper
+ espials espial
+ espied espi
+ espies espi
+ espous espou
+ espouse espous
+ espy espi
+ esquire esquir
+ esquires esquir
+ essay essai
+ essays essai
+ essence essenc
+ essential essenti
+ essentially essenti
+ esses ess
+ essex essex
+ est est
+ establish establish
+ established establish
+ estate estat
+ estates estat
+ esteem esteem
+ esteemed esteem
+ esteemeth esteemeth
+ esteeming esteem
+ esteems esteem
+ estimable estim
+ estimate estim
+ estimation estim
+ estimations estim
+ estime estim
+ estranged estrang
+ estridge estridg
+ estridges estridg
+ et et
+ etc etc
+ etceteras etcetera
+ ete et
+ eternal etern
+ eternally etern
+ eterne etern
+ eternity etern
+ eterniz eterniz
+ etes et
+ ethiop ethiop
+ ethiope ethiop
+ ethiopes ethiop
+ ethiopian ethiopian
+ etna etna
+ eton eton
+ etre etr
+ eunuch eunuch
+ eunuchs eunuch
+ euphrates euphrat
+ euphronius euphroniu
+ euriphile euriphil
+ europa europa
+ europe europ
+ ev ev
+ evade evad
+ evades evad
+ evans evan
+ evasion evas
+ evasions evas
+ eve ev
+ even even
+ evening even
+ evenly evenli
+ event event
+ eventful event
+ events event
+ ever ever
+ everlasting everlast
+ everlastingly everlastingli
+ evermore evermor
+ every everi
+ everyone everyon
+ everything everyth
+ everywhere everywher
+ evidence evid
+ evidences evid
+ evident evid
+ evil evil
+ evilly evilli
+ evils evil
+ evitate evit
+ ewe ew
+ ewer ewer
+ ewers ewer
+ ewes ew
+ exact exact
+ exacted exact
+ exactest exactest
+ exacting exact
+ exaction exact
+ exactions exact
+ exactly exactli
+ exacts exact
+ exalt exalt
+ exalted exalt
+ examin examin
+ examination examin
+ examinations examin
+ examine examin
+ examined examin
+ examines examin
+ exampl exampl
+ example exampl
+ exampled exampl
+ examples exampl
+ exasperate exasper
+ exasperates exasper
+ exceed exce
+ exceeded exceed
+ exceedeth exceedeth
+ exceeding exceed
+ exceedingly exceedingli
+ exceeds exce
+ excel excel
+ excelled excel
+ excellence excel
+ excellencies excel
+ excellency excel
+ excellent excel
+ excellently excel
+ excelling excel
+ excels excel
+ except except
+ excepted except
+ excepting except
+ exception except
+ exceptions except
+ exceptless exceptless
+ excess excess
+ excessive excess
+ exchang exchang
+ exchange exchang
+ exchanged exchang
+ exchequer exchequ
+ exchequers exchequ
+ excite excit
+ excited excit
+ excitements excit
+ excites excit
+ exclaim exclaim
+ exclaims exclaim
+ exclamation exclam
+ exclamations exclam
+ excludes exclud
+ excommunicate excommun
+ excommunication excommun
+ excrement excrement
+ excrements excrement
+ excursion excurs
+ excursions excurs
+ excus excu
+ excusable excus
+ excuse excus
+ excused excus
+ excuses excus
+ excusez excusez
+ excusing excus
+ execrable execr
+ execrations execr
+ execute execut
+ executed execut
+ executing execut
+ execution execut
+ executioner execution
+ executioners execution
+ executor executor
+ executors executor
+ exempt exempt
+ exempted exempt
+ exequies exequi
+ exercise exercis
+ exercises exercis
+ exeter exet
+ exeunt exeunt
+ exhal exhal
+ exhalation exhal
+ exhalations exhal
+ exhale exhal
+ exhales exhal
+ exhaust exhaust
+ exhibit exhibit
+ exhibiters exhibit
+ exhibition exhibit
+ exhort exhort
+ exhortation exhort
+ exigent exig
+ exil exil
+ exile exil
+ exiled exil
+ exion exion
+ exist exist
+ exists exist
+ exit exit
+ exits exit
+ exorciser exorcis
+ exorcisms exorc
+ exorcist exorcist
+ expect expect
+ expectance expect
+ expectancy expect
+ expectation expect
+ expectations expect
+ expected expect
+ expecters expect
+ expecting expect
+ expects expect
+ expedience expedi
+ expedient expedi
+ expediently expedi
+ expedition expedit
+ expeditious expediti
+ expel expel
+ expell expel
+ expelling expel
+ expels expel
+ expend expend
+ expense expens
+ expenses expens
+ experienc experienc
+ experience experi
+ experiences experi
+ experiment experi
+ experimental experiment
+ experiments experi
+ expert expert
+ expertness expert
+ expiate expiat
+ expiation expiat
+ expir expir
+ expiration expir
+ expire expir
+ expired expir
+ expires expir
+ expiring expir
+ explication explic
+ exploit exploit
+ exploits exploit
+ expos expo
+ expose expos
+ exposing expos
+ exposition exposit
+ expositor expositor
+ expostulate expostul
+ expostulation expostul
+ exposture expostur
+ exposure exposur
+ expound expound
+ expounded expound
+ express express
+ expressed express
+ expresseth expresseth
+ expressing express
+ expressive express
+ expressly expressli
+ expressure expressur
+ expuls expul
+ expulsion expuls
+ exquisite exquisit
+ exsufflicate exsuffl
+ extant extant
+ extemporal extempor
+ extemporally extempor
+ extempore extempor
+ extend extend
+ extended extend
+ extends extend
+ extent extent
+ extenuate extenu
+ extenuated extenu
+ extenuates extenu
+ extenuation extenu
+ exterior exterior
+ exteriorly exteriorli
+ exteriors exterior
+ extermin extermin
+ extern extern
+ external extern
+ extinct extinct
+ extincted extinct
+ extincture extinctur
+ extinguish extinguish
+ extirp extirp
+ extirpate extirp
+ extirped extirp
+ extol extol
+ extoll extol
+ extolment extol
+ exton exton
+ extort extort
+ extorted extort
+ extortion extort
+ extortions extort
+ extra extra
+ extract extract
+ extracted extract
+ extracting extract
+ extraordinarily extraordinarili
+ extraordinary extraordinari
+ extraught extraught
+ extravagancy extravag
+ extravagant extravag
+ extreme extrem
+ extremely extrem
+ extremes extrem
+ extremest extremest
+ extremities extrem
+ extremity extrem
+ exuent exuent
+ exult exult
+ exultation exult
+ ey ey
+ eyas eya
+ eyases eyas
+ eye ey
+ eyeball eyebal
+ eyeballs eyebal
+ eyebrow eyebrow
+ eyebrows eyebrow
+ eyed ei
+ eyeless eyeless
+ eyelid eyelid
+ eyelids eyelid
+ eyes ey
+ eyesight eyesight
+ eyestrings eyestr
+ eying ei
+ eyne eyn
+ eyrie eyri
+ fa fa
+ fabian fabian
+ fable fabl
+ fables fabl
+ fabric fabric
+ fabulous fabul
+ fac fac
+ face face
+ faced face
+ facere facer
+ faces face
+ faciant faciant
+ facile facil
+ facility facil
+ facinerious facineri
+ facing face
+ facit facit
+ fact fact
+ faction faction
+ factionary factionari
+ factions faction
+ factious factiou
+ factor factor
+ factors factor
+ faculties faculti
+ faculty faculti
+ fade fade
+ faded fade
+ fadeth fadeth
+ fadge fadg
+ fading fade
+ fadings fade
+ fadom fadom
+ fadoms fadom
+ fagot fagot
+ fagots fagot
+ fail fail
+ failing fail
+ fails fail
+ fain fain
+ faint faint
+ fainted faint
+ fainter fainter
+ fainting faint
+ faintly faintli
+ faintness faint
+ faints faint
+ fair fair
+ fairer fairer
+ fairest fairest
+ fairies fairi
+ fairing fair
+ fairings fair
+ fairly fairli
+ fairness fair
+ fairs fair
+ fairwell fairwel
+ fairy fairi
+ fais fai
+ fait fait
+ faites fait
+ faith faith
+ faithful faith
+ faithfull faithful
+ faithfully faithfulli
+ faithless faithless
+ faiths faith
+ faitors faitor
+ fal fal
+ falchion falchion
+ falcon falcon
+ falconbridge falconbridg
+ falconer falcon
+ falconers falcon
+ fall fall
+ fallacy fallaci
+ fallen fallen
+ falleth falleth
+ falliable falliabl
+ fallible fallibl
+ falling fall
+ fallow fallow
+ fallows fallow
+ falls fall
+ fally falli
+ falorous falor
+ false fals
+ falsehood falsehood
+ falsely fals
+ falseness fals
+ falser falser
+ falsify falsifi
+ falsing fals
+ falstaff falstaff
+ falstaffs falstaff
+ falter falter
+ fam fam
+ fame fame
+ famed fame
+ familiar familiar
+ familiarity familiar
+ familiarly familiarli
+ familiars familiar
+ family famili
+ famine famin
+ famish famish
+ famished famish
+ famous famou
+ famoused famous
+ famously famous
+ fan fan
+ fanatical fanat
+ fancies fanci
+ fancy fanci
+ fane fane
+ fanes fane
+ fang fang
+ fangled fangl
+ fangless fangless
+ fangs fang
+ fann fann
+ fanning fan
+ fans fan
+ fantasied fantasi
+ fantasies fantasi
+ fantastic fantast
+ fantastical fantast
+ fantastically fantast
+ fantasticoes fantastico
+ fantasy fantasi
+ fap fap
+ far far
+ farborough farborough
+ farced farc
+ fardel fardel
+ fardels fardel
+ fare fare
+ fares fare
+ farewell farewel
+ farewells farewel
+ fariner farin
+ faring fare
+ farm farm
+ farmer farmer
+ farmhouse farmhous
+ farms farm
+ farre farr
+ farrow farrow
+ farther farther
+ farthest farthest
+ farthing farth
+ farthingale farthingal
+ farthingales farthingal
+ farthings farth
+ fartuous fartuou
+ fas fa
+ fashion fashion
+ fashionable fashion
+ fashioning fashion
+ fashions fashion
+ fast fast
+ fasted fast
+ fasten fasten
+ fastened fasten
+ faster faster
+ fastest fastest
+ fasting fast
+ fastly fastli
+ fastolfe fastolf
+ fasts fast
+ fat fat
+ fatal fatal
+ fatally fatal
+ fate fate
+ fated fate
+ fates fate
+ father father
+ fathered father
+ fatherless fatherless
+ fatherly fatherli
+ fathers father
+ fathom fathom
+ fathomless fathomless
+ fathoms fathom
+ fatigate fatig
+ fatness fat
+ fats fat
+ fatted fat
+ fatter fatter
+ fattest fattest
+ fatting fat
+ fatuus fatuu
+ fauconbridge fauconbridg
+ faulconbridge faulconbridg
+ fault fault
+ faultiness faulti
+ faultless faultless
+ faults fault
+ faulty faulti
+ fausse fauss
+ fauste faust
+ faustuses faustus
+ faut faut
+ favor favor
+ favorable favor
+ favorably favor
+ favors favor
+ favour favour
+ favourable favour
+ favoured favour
+ favouredly favouredli
+ favourer favour
+ favourers favour
+ favouring favour
+ favourite favourit
+ favourites favourit
+ favours favour
+ favout favout
+ fawn fawn
+ fawneth fawneth
+ fawning fawn
+ fawns fawn
+ fay fai
+ fe fe
+ fealty fealti
+ fear fear
+ feared fear
+ fearest fearest
+ fearful fear
+ fearfull fearful
+ fearfully fearfulli
+ fearfulness fear
+ fearing fear
+ fearless fearless
+ fears fear
+ feast feast
+ feasted feast
+ feasting feast
+ feasts feast
+ feat feat
+ feated feat
+ feater feater
+ feather feather
+ feathered feather
+ feathers feather
+ featly featli
+ feats feat
+ featur featur
+ feature featur
+ featured featur
+ featureless featureless
+ features featur
+ february februari
+ fecks feck
+ fed fed
+ fedary fedari
+ federary federari
+ fee fee
+ feeble feebl
+ feebled feebl
+ feebleness feebl
+ feebling feebl
+ feebly feebli
+ feed feed
+ feeder feeder
+ feeders feeder
+ feedeth feedeth
+ feeding feed
+ feeds feed
+ feel feel
+ feeler feeler
+ feeling feel
+ feelingly feelingli
+ feels feel
+ fees fee
+ feet feet
+ fehemently fehement
+ feign feign
+ feigned feign
+ feigning feign
+ feil feil
+ feith feith
+ felicitate felicit
+ felicity felic
+ fell fell
+ fellest fellest
+ fellies felli
+ fellow fellow
+ fellowly fellowli
+ fellows fellow
+ fellowship fellowship
+ fellowships fellowship
+ fells fell
+ felon felon
+ felonious feloni
+ felony feloni
+ felt felt
+ female femal
+ females femal
+ feminine feminin
+ fen fen
+ fenc fenc
+ fence fenc
+ fencer fencer
+ fencing fenc
+ fends fend
+ fennel fennel
+ fenny fenni
+ fens fen
+ fenton fenton
+ fer fer
+ ferdinand ferdinand
+ fere fere
+ fernseed fernse
+ ferrara ferrara
+ ferrers ferrer
+ ferret ferret
+ ferry ferri
+ ferryman ferryman
+ fertile fertil
+ fertility fertil
+ fervency fervenc
+ fervour fervour
+ fery feri
+ fest fest
+ feste fest
+ fester fester
+ festinate festin
+ festinately festin
+ festival festiv
+ festivals festiv
+ fet fet
+ fetch fetch
+ fetches fetch
+ fetching fetch
+ fetlock fetlock
+ fetlocks fetlock
+ fett fett
+ fetter fetter
+ fettering fetter
+ fetters fetter
+ fettle fettl
+ feu feu
+ feud feud
+ fever fever
+ feverous fever
+ fevers fever
+ few few
+ fewer fewer
+ fewest fewest
+ fewness few
+ fickle fickl
+ fickleness fickl
+ fico fico
+ fiction fiction
+ fiddle fiddl
+ fiddler fiddler
+ fiddlestick fiddlestick
+ fidele fidel
+ fidelicet fidelicet
+ fidelity fidel
+ fidius fidiu
+ fie fie
+ field field
+ fielded field
+ fields field
+ fiend fiend
+ fiends fiend
+ fierce fierc
+ fiercely fierc
+ fierceness fierc
+ fiery fieri
+ fife fife
+ fifes fife
+ fifteen fifteen
+ fifteens fifteen
+ fifteenth fifteenth
+ fifth fifth
+ fifty fifti
+ fiftyfold fiftyfold
+ fig fig
+ fight fight
+ fighter fighter
+ fightest fightest
+ fighteth fighteth
+ fighting fight
+ fights fight
+ figo figo
+ figs fig
+ figur figur
+ figure figur
+ figured figur
+ figures figur
+ figuring figur
+ fike fike
+ fil fil
+ filberts filbert
+ filch filch
+ filches filch
+ filching filch
+ file file
+ filed file
+ files file
+ filial filial
+ filius filiu
+ fill fill
+ filled fill
+ fillet fillet
+ filling fill
+ fillip fillip
+ fills fill
+ filly filli
+ film film
+ fils fil
+ filth filth
+ filths filth
+ filthy filthi
+ fin fin
+ finally final
+ finch finch
+ find find
+ finder finder
+ findeth findeth
+ finding find
+ findings find
+ finds find
+ fine fine
+ fineless fineless
+ finely fine
+ finem finem
+ fineness fine
+ finer finer
+ fines fine
+ finest finest
+ fing fing
+ finger finger
+ fingering finger
+ fingers finger
+ fingre fingr
+ fingres fingr
+ finical finic
+ finish finish
+ finished finish
+ finisher finish
+ finless finless
+ finn finn
+ fins fin
+ finsbury finsburi
+ fir fir
+ firago firago
+ fire fire
+ firebrand firebrand
+ firebrands firebrand
+ fired fire
+ fires fire
+ firework firework
+ fireworks firework
+ firing fire
+ firk firk
+ firm firm
+ firmament firmament
+ firmly firmli
+ firmness firm
+ first first
+ firstlings firstl
+ fish fish
+ fisher fisher
+ fishermen fishermen
+ fishers fisher
+ fishes fish
+ fishified fishifi
+ fishmonger fishmong
+ fishpond fishpond
+ fisnomy fisnomi
+ fist fist
+ fisting fist
+ fists fist
+ fistula fistula
+ fit fit
+ fitchew fitchew
+ fitful fit
+ fitly fitli
+ fitment fitment
+ fitness fit
+ fits fit
+ fitted fit
+ fitter fitter
+ fittest fittest
+ fitteth fitteth
+ fitting fit
+ fitzwater fitzwat
+ five five
+ fivepence fivep
+ fives five
+ fix fix
+ fixed fix
+ fixes fix
+ fixeth fixeth
+ fixing fix
+ fixture fixtur
+ fl fl
+ flag flag
+ flagging flag
+ flagon flagon
+ flagons flagon
+ flags flag
+ flail flail
+ flakes flake
+ flaky flaki
+ flam flam
+ flame flame
+ flamen flamen
+ flamens flamen
+ flames flame
+ flaming flame
+ flaminius flaminiu
+ flanders flander
+ flannel flannel
+ flap flap
+ flaring flare
+ flash flash
+ flashes flash
+ flashing flash
+ flask flask
+ flat flat
+ flatly flatli
+ flatness flat
+ flats flat
+ flatt flatt
+ flatter flatter
+ flattered flatter
+ flatterer flatter
+ flatterers flatter
+ flatterest flatterest
+ flatteries flatteri
+ flattering flatter
+ flatters flatter
+ flattery flatteri
+ flaunts flaunt
+ flavio flavio
+ flavius flaviu
+ flaw flaw
+ flaws flaw
+ flax flax
+ flaxen flaxen
+ flay flai
+ flaying flai
+ flea flea
+ fleance fleanc
+ fleas flea
+ flecked fleck
+ fled fled
+ fledge fledg
+ flee flee
+ fleec fleec
+ fleece fleec
+ fleeces fleec
+ fleer fleer
+ fleering fleer
+ fleers fleer
+ fleet fleet
+ fleeter fleeter
+ fleeting fleet
+ fleming fleme
+ flemish flemish
+ flesh flesh
+ fleshes flesh
+ fleshly fleshli
+ fleshment fleshment
+ fleshmonger fleshmong
+ flew flew
+ flexible flexibl
+ flexure flexur
+ flibbertigibbet flibbertigibbet
+ flickering flicker
+ flidge flidg
+ fliers flier
+ flies fli
+ flieth flieth
+ flight flight
+ flights flight
+ flighty flighti
+ flinch flinch
+ fling fling
+ flint flint
+ flints flint
+ flinty flinti
+ flirt flirt
+ float float
+ floated float
+ floating float
+ flock flock
+ flocks flock
+ flood flood
+ floodgates floodgat
+ floods flood
+ floor floor
+ flora flora
+ florence florenc
+ florentine florentin
+ florentines florentin
+ florentius florentiu
+ florizel florizel
+ flote flote
+ floulish floulish
+ flour flour
+ flourish flourish
+ flourishes flourish
+ flourisheth flourisheth
+ flourishing flourish
+ flout flout
+ flouted flout
+ flouting flout
+ flouts flout
+ flow flow
+ flowed flow
+ flower flower
+ flowerets floweret
+ flowers flower
+ flowing flow
+ flown flown
+ flows flow
+ fluellen fluellen
+ fluent fluent
+ flung flung
+ flush flush
+ flushing flush
+ fluster fluster
+ flute flute
+ flutes flute
+ flutter flutter
+ flux flux
+ fluxive fluxiv
+ fly fly
+ flying fly
+ fo fo
+ foal foal
+ foals foal
+ foam foam
+ foamed foam
+ foaming foam
+ foams foam
+ foamy foami
+ fob fob
+ focative foc
+ fodder fodder
+ foe foe
+ foeman foeman
+ foemen foemen
+ foes foe
+ fog fog
+ foggy foggi
+ fogs fog
+ foh foh
+ foi foi
+ foil foil
+ foiled foil
+ foils foil
+ foin foin
+ foining foin
+ foins foin
+ fois foi
+ foison foison
+ foisons foison
+ foist foist
+ foix foix
+ fold fold
+ folded fold
+ folds fold
+ folio folio
+ folk folk
+ folks folk
+ follies folli
+ follow follow
+ followed follow
+ follower follow
+ followers follow
+ followest followest
+ following follow
+ follows follow
+ folly folli
+ fond fond
+ fonder fonder
+ fondly fondli
+ fondness fond
+ font font
+ fontibell fontibel
+ food food
+ fool fool
+ fooleries fooleri
+ foolery fooleri
+ foolhardy foolhardi
+ fooling fool
+ foolish foolish
+ foolishly foolishli
+ foolishness foolish
+ fools fool
+ foot foot
+ football footbal
+ footboy footboi
+ footboys footboi
+ footed foot
+ footfall footfal
+ footing foot
+ footman footman
+ footmen footmen
+ footpath footpath
+ footsteps footstep
+ footstool footstool
+ fopp fopp
+ fopped fop
+ foppery fopperi
+ foppish foppish
+ fops fop
+ for for
+ forage forag
+ foragers forag
+ forbade forbad
+ forbear forbear
+ forbearance forbear
+ forbears forbear
+ forbid forbid
+ forbidden forbidden
+ forbiddenly forbiddenli
+ forbids forbid
+ forbod forbod
+ forborne forborn
+ forc forc
+ force forc
+ forced forc
+ forceful forc
+ forceless forceless
+ forces forc
+ forcible forcibl
+ forcibly forcibl
+ forcing forc
+ ford ford
+ fordid fordid
+ fordo fordo
+ fordoes fordo
+ fordone fordon
+ fore fore
+ forecast forecast
+ forefather forefath
+ forefathers forefath
+ forefinger forefing
+ forego forego
+ foregone foregon
+ forehand forehand
+ forehead forehead
+ foreheads forehead
+ forehorse forehors
+ foreign foreign
+ foreigner foreign
+ foreigners foreign
+ foreknowing foreknow
+ foreknowledge foreknowledg
+ foremost foremost
+ forenamed forenam
+ forenoon forenoon
+ forerun forerun
+ forerunner forerunn
+ forerunning forerun
+ foreruns forerun
+ foresaid foresaid
+ foresaw foresaw
+ foresay foresai
+ foresee forese
+ foreseeing forese
+ foresees forese
+ foreshow foreshow
+ foreskirt foreskirt
+ forespent foresp
+ forest forest
+ forestall forestal
+ forestalled forestal
+ forester forest
+ foresters forest
+ forests forest
+ foretell foretel
+ foretelling foretel
+ foretells foretel
+ forethink forethink
+ forethought forethought
+ foretold foretold
+ forever forev
+ foreward foreward
+ forewarn forewarn
+ forewarned forewarn
+ forewarning forewarn
+ forfeit forfeit
+ forfeited forfeit
+ forfeiters forfeit
+ forfeiting forfeit
+ forfeits forfeit
+ forfeiture forfeitur
+ forfeitures forfeitur
+ forfend forfend
+ forfended forfend
+ forg forg
+ forgave forgav
+ forge forg
+ forged forg
+ forgeries forgeri
+ forgery forgeri
+ forges forg
+ forget forget
+ forgetful forget
+ forgetfulness forget
+ forgetive forget
+ forgets forget
+ forgetting forget
+ forgive forgiv
+ forgiven forgiven
+ forgiveness forgiv
+ forgo forgo
+ forgoing forgo
+ forgone forgon
+ forgot forgot
+ forgotten forgotten
+ fork fork
+ forked fork
+ forks fork
+ forlorn forlorn
+ form form
+ formal formal
+ formally formal
+ formed form
+ former former
+ formerly formerli
+ formless formless
+ forms form
+ fornication fornic
+ fornications fornic
+ fornicatress fornicatress
+ forres forr
+ forrest forrest
+ forsake forsak
+ forsaken forsaken
+ forsaketh forsaketh
+ forslow forslow
+ forsook forsook
+ forsooth forsooth
+ forspent forspent
+ forspoke forspok
+ forswear forswear
+ forswearing forswear
+ forswore forswor
+ forsworn forsworn
+ fort fort
+ forted fort
+ forth forth
+ forthcoming forthcom
+ forthlight forthlight
+ forthright forthright
+ forthwith forthwith
+ fortification fortif
+ fortifications fortif
+ fortified fortifi
+ fortifies fortifi
+ fortify fortifi
+ fortinbras fortinbra
+ fortitude fortitud
+ fortnight fortnight
+ fortress fortress
+ fortresses fortress
+ forts fort
+ fortun fortun
+ fortuna fortuna
+ fortunate fortun
+ fortunately fortun
+ fortune fortun
+ fortuned fortun
+ fortunes fortun
+ fortward fortward
+ forty forti
+ forum forum
+ forward forward
+ forwarding forward
+ forwardness forward
+ forwards forward
+ forwearied forweari
+ fosset fosset
+ fost fost
+ foster foster
+ fostered foster
+ fought fought
+ foughten foughten
+ foul foul
+ fouler fouler
+ foulest foulest
+ foully foulli
+ foulness foul
+ found found
+ foundation foundat
+ foundations foundat
+ founded found
+ founder founder
+ fount fount
+ fountain fountain
+ fountains fountain
+ founts fount
+ four four
+ fourscore fourscor
+ fourteen fourteen
+ fourth fourth
+ foutra foutra
+ fowl fowl
+ fowler fowler
+ fowling fowl
+ fowls fowl
+ fox fox
+ foxes fox
+ foxship foxship
+ fracted fract
+ fraction fraction
+ fractions fraction
+ fragile fragil
+ fragment fragment
+ fragments fragment
+ fragrant fragrant
+ frail frail
+ frailer frailer
+ frailties frailti
+ frailty frailti
+ fram fram
+ frame frame
+ framed frame
+ frames frame
+ frampold frampold
+ fran fran
+ francais francai
+ france franc
+ frances franc
+ franchise franchis
+ franchised franchis
+ franchisement franchis
+ franchises franchis
+ franciae francia
+ francis franci
+ francisca francisca
+ franciscan franciscan
+ francisco francisco
+ frank frank
+ franker franker
+ frankfort frankfort
+ franklin franklin
+ franklins franklin
+ frankly frankli
+ frankness frank
+ frantic frantic
+ franticly franticli
+ frateretto frateretto
+ fratrum fratrum
+ fraud fraud
+ fraudful fraud
+ fraught fraught
+ fraughtage fraughtag
+ fraughting fraught
+ fray frai
+ frays frai
+ freckl freckl
+ freckled freckl
+ freckles freckl
+ frederick frederick
+ free free
+ freed freed
+ freedom freedom
+ freedoms freedom
+ freehearted freeheart
+ freelier freelier
+ freely freeli
+ freeman freeman
+ freemen freemen
+ freeness freeness
+ freer freer
+ frees free
+ freestone freeston
+ freetown freetown
+ freeze freez
+ freezes freez
+ freezing freez
+ freezings freez
+ french french
+ frenchman frenchman
+ frenchmen frenchmen
+ frenchwoman frenchwoman
+ frenzy frenzi
+ frequent frequent
+ frequents frequent
+ fresh fresh
+ fresher fresher
+ freshes fresh
+ freshest freshest
+ freshly freshli
+ freshness fresh
+ fret fret
+ fretful fret
+ frets fret
+ fretted fret
+ fretten fretten
+ fretting fret
+ friar friar
+ friars friar
+ friday fridai
+ fridays fridai
+ friend friend
+ friended friend
+ friending friend
+ friendless friendless
+ friendliness friendli
+ friendly friendli
+ friends friend
+ friendship friendship
+ friendships friendship
+ frieze friez
+ fright fright
+ frighted fright
+ frightened frighten
+ frightful fright
+ frighting fright
+ frights fright
+ fringe fring
+ fringed fring
+ frippery fripperi
+ frisk frisk
+ fritters fritter
+ frivolous frivol
+ fro fro
+ frock frock
+ frog frog
+ frogmore frogmor
+ froissart froissart
+ frolic frolic
+ from from
+ front front
+ fronted front
+ frontier frontier
+ frontiers frontier
+ fronting front
+ frontlet frontlet
+ fronts front
+ frost frost
+ frosts frost
+ frosty frosti
+ froth froth
+ froward froward
+ frown frown
+ frowning frown
+ frowningly frowningli
+ frowns frown
+ froze froze
+ frozen frozen
+ fructify fructifi
+ frugal frugal
+ fruit fruit
+ fruiterer fruiter
+ fruitful fruit
+ fruitfully fruitfulli
+ fruitfulness fruit
+ fruition fruition
+ fruitless fruitless
+ fruits fruit
+ frush frush
+ frustrate frustrat
+ frutify frutifi
+ fry fry
+ fubb fubb
+ fuel fuel
+ fugitive fugit
+ fulfil fulfil
+ fulfill fulfil
+ fulfilling fulfil
+ fulfils fulfil
+ full full
+ fullam fullam
+ fuller fuller
+ fullers fuller
+ fullest fullest
+ fullness full
+ fully fulli
+ fulness ful
+ fulsome fulsom
+ fulvia fulvia
+ fum fum
+ fumble fumbl
+ fumbles fumbl
+ fumblest fumblest
+ fumbling fumbl
+ fume fume
+ fumes fume
+ fuming fume
+ fumiter fumit
+ fumitory fumitori
+ fun fun
+ function function
+ functions function
+ fundamental fundament
+ funeral funer
+ funerals funer
+ fur fur
+ furbish furbish
+ furies furi
+ furious furiou
+ furlongs furlong
+ furnace furnac
+ furnaces furnac
+ furnish furnish
+ furnished furnish
+ furnishings furnish
+ furniture furnitur
+ furnival furniv
+ furor furor
+ furr furr
+ furrow furrow
+ furrowed furrow
+ furrows furrow
+ furth furth
+ further further
+ furtherance further
+ furtherer further
+ furthermore furthermor
+ furthest furthest
+ fury furi
+ furze furz
+ furzes furz
+ fust fust
+ fustian fustian
+ fustilarian fustilarian
+ fusty fusti
+ fut fut
+ future futur
+ futurity futur
+ g g
+ gabble gabbl
+ gaberdine gaberdin
+ gabriel gabriel
+ gad gad
+ gadding gad
+ gads gad
+ gadshill gadshil
+ gag gag
+ gage gage
+ gaged gage
+ gagg gagg
+ gaging gage
+ gagne gagn
+ gain gain
+ gained gain
+ gainer gainer
+ gaingiving gaingiv
+ gains gain
+ gainsaid gainsaid
+ gainsay gainsai
+ gainsaying gainsai
+ gainsays gainsai
+ gainst gainst
+ gait gait
+ gaited gait
+ galathe galath
+ gale gale
+ galen galen
+ gales gale
+ gall gall
+ gallant gallant
+ gallantly gallantli
+ gallantry gallantri
+ gallants gallant
+ galled gall
+ gallery galleri
+ galley gallei
+ galleys gallei
+ gallia gallia
+ gallian gallian
+ galliard galliard
+ galliasses galliass
+ gallimaufry gallimaufri
+ galling gall
+ gallons gallon
+ gallop gallop
+ galloping gallop
+ gallops gallop
+ gallow gallow
+ galloway gallowai
+ gallowglasses gallowglass
+ gallows gallow
+ gallowses gallows
+ galls gall
+ gallus gallu
+ gam gam
+ gambol gambol
+ gambold gambold
+ gambols gambol
+ gamboys gamboi
+ game game
+ gamers gamer
+ games game
+ gamesome gamesom
+ gamester gamest
+ gaming game
+ gammon gammon
+ gamut gamut
+ gan gan
+ gangren gangren
+ ganymede ganymed
+ gaol gaol
+ gaoler gaoler
+ gaolers gaoler
+ gaols gaol
+ gap gap
+ gape gape
+ gapes gape
+ gaping gape
+ gar gar
+ garb garb
+ garbage garbag
+ garboils garboil
+ garcon garcon
+ gard gard
+ garde gard
+ garden garden
+ gardener garden
+ gardeners garden
+ gardens garden
+ gardez gardez
+ gardiner gardin
+ gardon gardon
+ gargantua gargantua
+ gargrave gargrav
+ garish garish
+ garland garland
+ garlands garland
+ garlic garlic
+ garment garment
+ garments garment
+ garmet garmet
+ garner garner
+ garners garner
+ garnish garnish
+ garnished garnish
+ garret garret
+ garrison garrison
+ garrisons garrison
+ gart gart
+ garter garter
+ garterd garterd
+ gartering garter
+ garters garter
+ gascony gasconi
+ gash gash
+ gashes gash
+ gaskins gaskin
+ gasp gasp
+ gasping gasp
+ gasted gast
+ gastness gast
+ gat gat
+ gate gate
+ gated gate
+ gates gate
+ gath gath
+ gather gather
+ gathered gather
+ gathering gather
+ gathers gather
+ gatories gatori
+ gatory gatori
+ gaud gaud
+ gaudeo gaudeo
+ gaudy gaudi
+ gauge gaug
+ gaul gaul
+ gaultree gaultre
+ gaunt gaunt
+ gauntlet gauntlet
+ gauntlets gauntlet
+ gav gav
+ gave gave
+ gavest gavest
+ gawded gawd
+ gawds gawd
+ gawsey gawsei
+ gay gai
+ gayness gay
+ gaz gaz
+ gaze gaze
+ gazed gaze
+ gazer gazer
+ gazers gazer
+ gazes gaze
+ gazeth gazeth
+ gazing gaze
+ gear gear
+ geck geck
+ geese gees
+ geffrey geffrei
+ geld geld
+ gelded geld
+ gelding geld
+ gelida gelida
+ gelidus gelidu
+ gelt gelt
+ gem gem
+ geminy gemini
+ gems gem
+ gen gen
+ gender gender
+ genders gender
+ general gener
+ generally gener
+ generals gener
+ generation gener
+ generations gener
+ generative gener
+ generosity generos
+ generous gener
+ genitive genit
+ genitivo genitivo
+ genius geniu
+ gennets gennet
+ genoa genoa
+ genoux genoux
+ gens gen
+ gent gent
+ gentilhomme gentilhomm
+ gentility gentil
+ gentle gentl
+ gentlefolks gentlefolk
+ gentleman gentleman
+ gentlemanlike gentlemanlik
+ gentlemen gentlemen
+ gentleness gentl
+ gentler gentler
+ gentles gentl
+ gentlest gentlest
+ gentlewoman gentlewoman
+ gentlewomen gentlewomen
+ gently gentli
+ gentry gentri
+ george georg
+ gerard gerard
+ germaines germain
+ germains germain
+ german german
+ germane german
+ germans german
+ germany germani
+ gertrude gertrud
+ gest gest
+ gests gest
+ gesture gestur
+ gestures gestur
+ get get
+ getrude getrud
+ gets get
+ getter getter
+ getting get
+ ghastly ghastli
+ ghost ghost
+ ghosted ghost
+ ghostly ghostli
+ ghosts ghost
+ gi gi
+ giant giant
+ giantess giantess
+ giantlike giantlik
+ giants giant
+ gib gib
+ gibber gibber
+ gibbet gibbet
+ gibbets gibbet
+ gibe gibe
+ giber giber
+ gibes gibe
+ gibing gibe
+ gibingly gibingli
+ giddily giddili
+ giddiness giddi
+ giddy giddi
+ gift gift
+ gifts gift
+ gig gig
+ giglets giglet
+ giglot giglot
+ gilbert gilbert
+ gild gild
+ gilded gild
+ gilding gild
+ gilliams gilliam
+ gillian gillian
+ gills gill
+ gillyvors gillyvor
+ gilt gilt
+ gimmal gimmal
+ gimmers gimmer
+ gin gin
+ ging ging
+ ginger ginger
+ gingerbread gingerbread
+ gingerly gingerli
+ ginn ginn
+ gins gin
+ gioucestershire gioucestershir
+ gipes gipe
+ gipsies gipsi
+ gipsy gipsi
+ gird gird
+ girded gird
+ girdle girdl
+ girdled girdl
+ girdles girdl
+ girdling girdl
+ girl girl
+ girls girl
+ girt girt
+ girth girth
+ gis gi
+ giv giv
+ give give
+ given given
+ giver giver
+ givers giver
+ gives give
+ givest givest
+ giveth giveth
+ giving give
+ givings give
+ glad glad
+ gladded glad
+ gladding glad
+ gladly gladli
+ gladness glad
+ glamis glami
+ glanc glanc
+ glance glanc
+ glanced glanc
+ glances glanc
+ glancing glanc
+ glanders glander
+ glansdale glansdal
+ glare glare
+ glares glare
+ glass glass
+ glasses glass
+ glassy glassi
+ glaz glaz
+ glazed glaze
+ gleams gleam
+ glean glean
+ gleaned glean
+ gleaning glean
+ gleeful gleeful
+ gleek gleek
+ gleeking gleek
+ gleeks gleek
+ glend glend
+ glendower glendow
+ glib glib
+ glide glide
+ glided glide
+ glides glide
+ glideth glideth
+ gliding glide
+ glimmer glimmer
+ glimmering glimmer
+ glimmers glimmer
+ glimpse glimps
+ glimpses glimps
+ glist glist
+ glistening glisten
+ glister glister
+ glistering glister
+ glisters glister
+ glitt glitt
+ glittering glitter
+ globe globe
+ globes globe
+ glooming gloom
+ gloomy gloomi
+ glories glori
+ glorified glorifi
+ glorify glorifi
+ glorious gloriou
+ gloriously glorious
+ glory glori
+ glose glose
+ gloss gloss
+ glosses gloss
+ glou glou
+ glouceste gloucest
+ gloucester gloucest
+ gloucestershire gloucestershir
+ glove glove
+ glover glover
+ gloves glove
+ glow glow
+ glowed glow
+ glowing glow
+ glowworm glowworm
+ gloz gloz
+ gloze gloze
+ glozes gloze
+ glu glu
+ glue glue
+ glued glu
+ glues glue
+ glut glut
+ glutt glutt
+ glutted glut
+ glutton glutton
+ gluttoning glutton
+ gluttony gluttoni
+ gnarled gnarl
+ gnarling gnarl
+ gnat gnat
+ gnats gnat
+ gnaw gnaw
+ gnawing gnaw
+ gnawn gnawn
+ gnaws gnaw
+ go go
+ goad goad
+ goaded goad
+ goads goad
+ goal goal
+ goat goat
+ goatish goatish
+ goats goat
+ gobbets gobbet
+ gobbo gobbo
+ goblet goblet
+ goblets goblet
+ goblin goblin
+ goblins goblin
+ god god
+ godded god
+ godden godden
+ goddess goddess
+ goddesses goddess
+ goddild goddild
+ godfather godfath
+ godfathers godfath
+ godhead godhead
+ godlike godlik
+ godliness godli
+ godly godli
+ godmother godmoth
+ gods god
+ godson godson
+ goer goer
+ goers goer
+ goes goe
+ goest goest
+ goeth goeth
+ goffe goff
+ gogs gog
+ going go
+ gold gold
+ golden golden
+ goldenly goldenli
+ goldsmith goldsmith
+ goldsmiths goldsmith
+ golgotha golgotha
+ goliases golias
+ goliath goliath
+ gon gon
+ gondola gondola
+ gondolier gondoli
+ gone gone
+ goneril goneril
+ gong gong
+ gonzago gonzago
+ gonzalo gonzalo
+ good good
+ goodfellow goodfellow
+ goodlier goodlier
+ goodliest goodliest
+ goodly goodli
+ goodman goodman
+ goodness good
+ goodnight goodnight
+ goodrig goodrig
+ goods good
+ goodwife goodwif
+ goodwill goodwil
+ goodwin goodwin
+ goodwins goodwin
+ goodyear goodyear
+ goodyears goodyear
+ goose goos
+ gooseberry gooseberri
+ goosequills goosequil
+ goot goot
+ gor gor
+ gorbellied gorbelli
+ gorboduc gorboduc
+ gordian gordian
+ gore gore
+ gored gore
+ gorg gorg
+ gorge gorg
+ gorgeous gorgeou
+ gorget gorget
+ gorging gorg
+ gorgon gorgon
+ gormandize gormand
+ gormandizing gormand
+ gory gori
+ gosling gosl
+ gospel gospel
+ gospels gospel
+ goss goss
+ gossamer gossam
+ gossip gossip
+ gossiping gossip
+ gossiplike gossiplik
+ gossips gossip
+ got got
+ goth goth
+ goths goth
+ gotten gotten
+ gourd gourd
+ gout gout
+ gouts gout
+ gouty gouti
+ govern govern
+ governance govern
+ governed govern
+ governess gover
+ government govern
+ governor governor
+ governors governor
+ governs govern
+ gower gower
+ gown gown
+ gowns gown
+ grac grac
+ grace grace
+ graced grace
+ graceful grace
+ gracefully gracefulli
+ graceless graceless
+ graces grace
+ gracing grace
+ gracious graciou
+ graciously gracious
+ gradation gradat
+ graff graff
+ graffing graf
+ graft graft
+ grafted graft
+ grafters grafter
+ grain grain
+ grained grain
+ grains grain
+ gramercies gramerci
+ gramercy gramerci
+ grammar grammar
+ grand grand
+ grandam grandam
+ grandame grandam
+ grandchild grandchild
+ grande grand
+ grandeur grandeur
+ grandfather grandfath
+ grandjurors grandjuror
+ grandmother grandmoth
+ grandpre grandpr
+ grandsir grandsir
+ grandsire grandsir
+ grandsires grandsir
+ grange grang
+ grant grant
+ granted grant
+ granting grant
+ grants grant
+ grape grape
+ grapes grape
+ grapple grappl
+ grapples grappl
+ grappling grappl
+ grasp grasp
+ grasped grasp
+ grasps grasp
+ grass grass
+ grasshoppers grasshopp
+ grassy grassi
+ grate grate
+ grated grate
+ grateful grate
+ grates grate
+ gratiano gratiano
+ gratify gratifi
+ gratii gratii
+ gratillity gratil
+ grating grate
+ gratis grati
+ gratitude gratitud
+ gratulate gratul
+ grav grav
+ grave grave
+ gravediggers gravedigg
+ gravel gravel
+ graveless graveless
+ gravell gravel
+ gravely grave
+ graven graven
+ graveness grave
+ graver graver
+ graves grave
+ gravest gravest
+ gravestone graveston
+ gravities graviti
+ gravity graviti
+ gravy gravi
+ gray grai
+ graymalkin graymalkin
+ graz graz
+ graze graze
+ grazed graze
+ grazing graze
+ grease greas
+ greases greas
+ greasily greasili
+ greasy greasi
+ great great
+ greater greater
+ greatest greatest
+ greatly greatli
+ greatness great
+ grecian grecian
+ grecians grecian
+ gree gree
+ greece greec
+ greed greed
+ greedily greedili
+ greediness greedi
+ greedy greedi
+ greeing gree
+ greek greek
+ greekish greekish
+ greeks greek
+ green green
+ greener greener
+ greenly greenli
+ greens green
+ greensleeves greensleev
+ greenwich greenwich
+ greenwood greenwood
+ greet greet
+ greeted greet
+ greeting greet
+ greetings greet
+ greets greet
+ greg greg
+ gregory gregori
+ gremio gremio
+ grew grew
+ grey grei
+ greybeard greybeard
+ greybeards greybeard
+ greyhound greyhound
+ greyhounds greyhound
+ grief grief
+ griefs grief
+ griev griev
+ grievance grievanc
+ grievances grievanc
+ grieve griev
+ grieved griev
+ grieves griev
+ grievest grievest
+ grieving griev
+ grievingly grievingli
+ grievous grievou
+ grievously grievous
+ griffin griffin
+ griffith griffith
+ grim grim
+ grime grime
+ grimly grimli
+ grin grin
+ grind grind
+ grinding grind
+ grindstone grindston
+ grinning grin
+ grip grip
+ gripe gripe
+ gripes gripe
+ griping gripe
+ grise grise
+ grisly grisli
+ grissel grissel
+ grize grize
+ grizzle grizzl
+ grizzled grizzl
+ groan groan
+ groaning groan
+ groans groan
+ groat groat
+ groats groat
+ groin groin
+ groom groom
+ grooms groom
+ grop grop
+ groping grope
+ gros gro
+ gross gross
+ grosser grosser
+ grossly grossli
+ grossness gross
+ ground ground
+ grounded ground
+ groundlings groundl
+ grounds ground
+ grove grove
+ grovel grovel
+ grovelling grovel
+ groves grove
+ grow grow
+ groweth groweth
+ growing grow
+ grown grown
+ grows grow
+ growth growth
+ grub grub
+ grubb grubb
+ grubs grub
+ grudge grudg
+ grudged grudg
+ grudges grudg
+ grudging grudg
+ gruel gruel
+ grumble grumbl
+ grumblest grumblest
+ grumbling grumbl
+ grumblings grumbl
+ grumio grumio
+ grund grund
+ grunt grunt
+ gualtier gualtier
+ guard guard
+ guardage guardag
+ guardant guardant
+ guarded guard
+ guardian guardian
+ guardians guardian
+ guards guard
+ guardsman guardsman
+ gud gud
+ gudgeon gudgeon
+ guerdon guerdon
+ guerra guerra
+ guess guess
+ guesses guess
+ guessingly guessingli
+ guest guest
+ guests guest
+ guiana guiana
+ guichard guichard
+ guide guid
+ guided guid
+ guider guider
+ guiderius guideriu
+ guides guid
+ guiding guid
+ guidon guidon
+ guienne guienn
+ guil guil
+ guildenstern guildenstern
+ guilders guilder
+ guildford guildford
+ guildhall guildhal
+ guile guil
+ guiled guil
+ guileful guil
+ guilfords guilford
+ guilt guilt
+ guiltian guiltian
+ guiltier guiltier
+ guiltily guiltili
+ guiltiness guilti
+ guiltless guiltless
+ guilts guilt
+ guilty guilti
+ guinea guinea
+ guinever guinev
+ guise guis
+ gul gul
+ gules gule
+ gulf gulf
+ gulfs gulf
+ gull gull
+ gulls gull
+ gum gum
+ gumm gumm
+ gums gum
+ gun gun
+ gunner gunner
+ gunpowder gunpowd
+ guns gun
+ gurnet gurnet
+ gurney gurnei
+ gust gust
+ gusts gust
+ gusty gusti
+ guts gut
+ gutter gutter
+ guy gui
+ guynes guyn
+ guysors guysor
+ gypsy gypsi
+ gyve gyve
+ gyved gyve
+ gyves gyve
+ h h
+ ha ha
+ haberdasher haberdash
+ habiliment habili
+ habiliments habili
+ habit habit
+ habitation habit
+ habited habit
+ habits habit
+ habitude habitud
+ hack hack
+ hacket hacket
+ hackney hacknei
+ hacks hack
+ had had
+ hadst hadst
+ haec haec
+ haeres haer
+ hag hag
+ hagar hagar
+ haggard haggard
+ haggards haggard
+ haggish haggish
+ haggled haggl
+ hags hag
+ hail hail
+ hailed hail
+ hailstone hailston
+ hailstones hailston
+ hair hair
+ hairless hairless
+ hairs hair
+ hairy hairi
+ hal hal
+ halberd halberd
+ halberds halberd
+ halcyon halcyon
+ hale hale
+ haled hale
+ hales hale
+ half half
+ halfcan halfcan
+ halfpence halfpenc
+ halfpenny halfpenni
+ halfpennyworth halfpennyworth
+ halfway halfwai
+ halidom halidom
+ hall hall
+ halloa halloa
+ halloing hallo
+ hallond hallond
+ halloo halloo
+ hallooing halloo
+ hallow hallow
+ hallowed hallow
+ hallowmas hallowma
+ hallown hallown
+ hals hal
+ halt halt
+ halter halter
+ halters halter
+ halting halt
+ halts halt
+ halves halv
+ ham ham
+ hames hame
+ hamlet hamlet
+ hammer hammer
+ hammered hammer
+ hammering hammer
+ hammers hammer
+ hamper hamper
+ hampton hampton
+ hams ham
+ hamstring hamstr
+ hand hand
+ handed hand
+ handful hand
+ handicraft handicraft
+ handicraftsmen handicraftsmen
+ handing hand
+ handiwork handiwork
+ handkercher handkerch
+ handkerchers handkerch
+ handkerchief handkerchief
+ handle handl
+ handled handl
+ handles handl
+ handless handless
+ handlest handlest
+ handling handl
+ handmaid handmaid
+ handmaids handmaid
+ hands hand
+ handsaw handsaw
+ handsome handsom
+ handsomely handsom
+ handsomeness handsom
+ handwriting handwrit
+ handy handi
+ hang hang
+ hanged hang
+ hangers hanger
+ hangeth hangeth
+ hanging hang
+ hangings hang
+ hangman hangman
+ hangmen hangmen
+ hangs hang
+ hannibal hannib
+ hap hap
+ hapless hapless
+ haply hapli
+ happ happ
+ happen happen
+ happened happen
+ happier happier
+ happies happi
+ happiest happiest
+ happily happili
+ happiness happi
+ happy happi
+ haps hap
+ harbinger harbing
+ harbingers harbing
+ harbor harbor
+ harbour harbour
+ harbourage harbourag
+ harbouring harbour
+ harbours harbour
+ harcourt harcourt
+ hard hard
+ harder harder
+ hardest hardest
+ hardiest hardiest
+ hardiment hardiment
+ hardiness hardi
+ hardly hardli
+ hardness hard
+ hardocks hardock
+ hardy hardi
+ hare hare
+ harelip harelip
+ hares hare
+ harfleur harfleur
+ hark hark
+ harlot harlot
+ harlotry harlotri
+ harlots harlot
+ harm harm
+ harmed harm
+ harmful harm
+ harming harm
+ harmless harmless
+ harmonious harmoni
+ harmony harmoni
+ harms harm
+ harness har
+ harp harp
+ harper harper
+ harpier harpier
+ harping harp
+ harpy harpi
+ harried harri
+ harrow harrow
+ harrows harrow
+ harry harri
+ harsh harsh
+ harshly harshli
+ harshness harsh
+ hart hart
+ harts hart
+ harum harum
+ harvest harvest
+ has ha
+ hast hast
+ haste hast
+ hasted hast
+ hasten hasten
+ hastes hast
+ hastily hastili
+ hasting hast
+ hastings hast
+ hasty hasti
+ hat hat
+ hatch hatch
+ hatches hatch
+ hatchet hatchet
+ hatching hatch
+ hatchment hatchment
+ hate hate
+ hated hate
+ hateful hate
+ hater hater
+ haters hater
+ hates hate
+ hateth hateth
+ hatfield hatfield
+ hath hath
+ hating hate
+ hatred hatr
+ hats hat
+ haud haud
+ hauf hauf
+ haught haught
+ haughtiness haughti
+ haughty haughti
+ haunch haunch
+ haunches haunch
+ haunt haunt
+ haunted haunt
+ haunting haunt
+ haunts haunt
+ hautboy hautboi
+ hautboys hautboi
+ have have
+ haven haven
+ havens haven
+ haver haver
+ having have
+ havings have
+ havior havior
+ haviour haviour
+ havoc havoc
+ hawk hawk
+ hawking hawk
+ hawks hawk
+ hawthorn hawthorn
+ hawthorns hawthorn
+ hay hai
+ hazard hazard
+ hazarded hazard
+ hazards hazard
+ hazel hazel
+ hazelnut hazelnut
+ he he
+ head head
+ headborough headborough
+ headed head
+ headier headier
+ heading head
+ headland headland
+ headless headless
+ headlong headlong
+ heads head
+ headsman headsman
+ headstrong headstrong
+ heady headi
+ heal heal
+ healed heal
+ healing heal
+ heals heal
+ health health
+ healthful health
+ healths health
+ healthsome healthsom
+ healthy healthi
+ heap heap
+ heaping heap
+ heaps heap
+ hear hear
+ heard heard
+ hearer hearer
+ hearers hearer
+ hearest hearest
+ heareth heareth
+ hearing hear
+ hearings hear
+ heark heark
+ hearken hearken
+ hearkens hearken
+ hears hear
+ hearsay hearsai
+ hearse hears
+ hearsed hears
+ hearst hearst
+ heart heart
+ heartache heartach
+ heartbreak heartbreak
+ heartbreaking heartbreak
+ hearted heart
+ hearten hearten
+ hearth hearth
+ hearths hearth
+ heartily heartili
+ heartiness hearti
+ heartless heartless
+ heartlings heartl
+ heartly heartli
+ hearts heart
+ heartsick heartsick
+ heartstrings heartstr
+ hearty hearti
+ heat heat
+ heated heat
+ heath heath
+ heathen heathen
+ heathenish heathenish
+ heating heat
+ heats heat
+ heauties heauti
+ heav heav
+ heave heav
+ heaved heav
+ heaven heaven
+ heavenly heavenli
+ heavens heaven
+ heaves heav
+ heavier heavier
+ heaviest heaviest
+ heavily heavili
+ heaviness heavi
+ heaving heav
+ heavings heav
+ heavy heavi
+ hebona hebona
+ hebrew hebrew
+ hecate hecat
+ hectic hectic
+ hector hector
+ hectors hector
+ hecuba hecuba
+ hedg hedg
+ hedge hedg
+ hedgehog hedgehog
+ hedgehogs hedgehog
+ hedges hedg
+ heed heed
+ heeded heed
+ heedful heed
+ heedfull heedful
+ heedfully heedfulli
+ heedless heedless
+ heel heel
+ heels heel
+ hefted heft
+ hefts heft
+ heifer heifer
+ heifers heifer
+ heigh heigh
+ height height
+ heighten heighten
+ heinous heinou
+ heinously heinous
+ heir heir
+ heiress heiress
+ heirless heirless
+ heirs heir
+ held held
+ helen helen
+ helena helena
+ helenus helenu
+ helias helia
+ helicons helicon
+ hell hell
+ hellespont hellespont
+ hellfire hellfir
+ hellish hellish
+ helm helm
+ helmed helm
+ helmet helmet
+ helmets helmet
+ helms helm
+ help help
+ helper helper
+ helpers helper
+ helpful help
+ helping help
+ helpless helpless
+ helps help
+ helter helter
+ hem hem
+ heme heme
+ hemlock hemlock
+ hemm hemm
+ hemp hemp
+ hempen hempen
+ hems hem
+ hen hen
+ hence henc
+ henceforth henceforth
+ henceforward henceforward
+ henchman henchman
+ henri henri
+ henricus henricu
+ henry henri
+ hens hen
+ hent hent
+ henton henton
+ her her
+ herald herald
+ heraldry heraldri
+ heralds herald
+ herb herb
+ herbert herbert
+ herblets herblet
+ herbs herb
+ herculean herculean
+ hercules hercul
+ herd herd
+ herds herd
+ herdsman herdsman
+ herdsmen herdsmen
+ here here
+ hereabout hereabout
+ hereabouts hereabout
+ hereafter hereaft
+ hereby herebi
+ hereditary hereditari
+ hereford hereford
+ herefordshire herefordshir
+ herein herein
+ hereof hereof
+ heresies heresi
+ heresy heresi
+ heretic heret
+ heretics heret
+ hereto hereto
+ hereupon hereupon
+ heritage heritag
+ heritier heriti
+ hermes herm
+ hermia hermia
+ hermione hermion
+ hermit hermit
+ hermitage hermitag
+ hermits hermit
+ herne hern
+ hero hero
+ herod herod
+ herods herod
+ heroes hero
+ heroic heroic
+ heroical heroic
+ herring her
+ herrings her
+ hers her
+ herself herself
+ hesperides hesperid
+ hesperus hesperu
+ hest hest
+ hests hest
+ heure heur
+ heureux heureux
+ hew hew
+ hewgh hewgh
+ hewing hew
+ hewn hewn
+ hews hew
+ hey hei
+ heyday heydai
+ hibocrates hibocr
+ hic hic
+ hiccups hiccup
+ hick hick
+ hid hid
+ hidden hidden
+ hide hide
+ hideous hideou
+ hideously hideous
+ hideousness hideous
+ hides hide
+ hidest hidest
+ hiding hide
+ hie hie
+ hied hi
+ hiems hiem
+ hies hi
+ hig hig
+ high high
+ higher higher
+ highest highest
+ highly highli
+ highmost highmost
+ highness high
+ hight hight
+ highway highwai
+ highways highwai
+ hilding hild
+ hildings hild
+ hill hill
+ hillo hillo
+ hilloa hilloa
+ hills hill
+ hilt hilt
+ hilts hilt
+ hily hili
+ him him
+ himself himself
+ hinc hinc
+ hinckley hincklei
+ hind hind
+ hinder hinder
+ hindered hinder
+ hinders hinder
+ hindmost hindmost
+ hinds hind
+ hing hing
+ hinge hing
+ hinges hing
+ hint hint
+ hip hip
+ hipp hipp
+ hipparchus hipparchu
+ hippolyta hippolyta
+ hips hip
+ hir hir
+ hire hire
+ hired hire
+ hiren hiren
+ hirtius hirtiu
+ his hi
+ hisperia hisperia
+ hiss hiss
+ hisses hiss
+ hissing hiss
+ hist hist
+ historical histor
+ history histori
+ hit hit
+ hither hither
+ hitherto hitherto
+ hitherward hitherward
+ hitherwards hitherward
+ hits hit
+ hitting hit
+ hive hive
+ hives hive
+ hizzing hizz
+ ho ho
+ hoa hoa
+ hoar hoar
+ hoard hoard
+ hoarded hoard
+ hoarding hoard
+ hoars hoar
+ hoarse hoars
+ hoary hoari
+ hob hob
+ hobbididence hobbidid
+ hobby hobbi
+ hobbyhorse hobbyhors
+ hobgoblin hobgoblin
+ hobnails hobnail
+ hoc hoc
+ hod hod
+ hodge hodg
+ hog hog
+ hogs hog
+ hogshead hogshead
+ hogsheads hogshead
+ hois hoi
+ hoise hois
+ hoist hoist
+ hoisted hoist
+ hoists hoist
+ holborn holborn
+ hold hold
+ holden holden
+ holder holder
+ holdeth holdeth
+ holdfast holdfast
+ holding hold
+ holds hold
+ hole hole
+ holes hole
+ holidam holidam
+ holidame holidam
+ holiday holidai
+ holidays holidai
+ holier holier
+ holiest holiest
+ holily holili
+ holiness holi
+ holla holla
+ holland holland
+ hollander holland
+ hollanders holland
+ holloa holloa
+ holloaing holloa
+ hollow hollow
+ hollowly hollowli
+ hollowness hollow
+ holly holli
+ holmedon holmedon
+ holofernes holofern
+ holp holp
+ holy holi
+ homage homag
+ homager homag
+ home home
+ homely home
+ homes home
+ homespuns homespun
+ homeward homeward
+ homewards homeward
+ homicide homicid
+ homicides homicid
+ homily homili
+ hominem hominem
+ hommes homm
+ homo homo
+ honest honest
+ honester honest
+ honestest honestest
+ honestly honestli
+ honesty honesti
+ honey honei
+ honeycomb honeycomb
+ honeying honei
+ honeyless honeyless
+ honeysuckle honeysuckl
+ honeysuckles honeysuckl
+ honi honi
+ honneur honneur
+ honor honor
+ honorable honor
+ honorably honor
+ honorato honorato
+ honors honor
+ honour honour
+ honourable honour
+ honourably honour
+ honoured honour
+ honourest honourest
+ honourible honour
+ honouring honour
+ honours honour
+ hoo hoo
+ hood hood
+ hooded hood
+ hoodman hoodman
+ hoods hood
+ hoodwink hoodwink
+ hoof hoof
+ hoofs hoof
+ hook hook
+ hooking hook
+ hooks hook
+ hoop hoop
+ hoops hoop
+ hoot hoot
+ hooted hoot
+ hooting hoot
+ hoots hoot
+ hop hop
+ hope hope
+ hopeful hope
+ hopeless hopeless
+ hopes hope
+ hopest hopest
+ hoping hope
+ hopkins hopkin
+ hoppedance hopped
+ hor hor
+ horace horac
+ horatio horatio
+ horizon horizon
+ horn horn
+ hornbook hornbook
+ horned horn
+ horner horner
+ horning horn
+ hornpipes hornpip
+ horns horn
+ horologe horolog
+ horrible horribl
+ horribly horribl
+ horrid horrid
+ horrider horrid
+ horridly horridli
+ horror horror
+ horrors horror
+ hors hor
+ horse hors
+ horseback horseback
+ horsed hors
+ horsehairs horsehair
+ horseman horseman
+ horsemanship horsemanship
+ horsemen horsemen
+ horses hors
+ horseway horsewai
+ horsing hors
+ hortensio hortensio
+ hortensius hortensiu
+ horum horum
+ hose hose
+ hospitable hospit
+ hospital hospit
+ hospitality hospit
+ host host
+ hostage hostag
+ hostages hostag
+ hostess hostess
+ hostile hostil
+ hostility hostil
+ hostilius hostiliu
+ hosts host
+ hot hot
+ hotly hotli
+ hotspur hotspur
+ hotter hotter
+ hottest hottest
+ hound hound
+ hounds hound
+ hour hour
+ hourly hourli
+ hours hour
+ hous hou
+ house hous
+ household household
+ householder household
+ householders household
+ households household
+ housekeeper housekeep
+ housekeepers housekeep
+ housekeeping housekeep
+ houseless houseless
+ houses hous
+ housewife housewif
+ housewifery housewiferi
+ housewives housew
+ hovel hovel
+ hover hover
+ hovered hover
+ hovering hover
+ hovers hover
+ how how
+ howbeit howbeit
+ howe how
+ howeer howeer
+ however howev
+ howl howl
+ howled howl
+ howlet howlet
+ howling howl
+ howls howl
+ howsoe howso
+ howsoever howsoev
+ howsome howsom
+ hoxes hox
+ hoy hoi
+ hoyday hoydai
+ hubert hubert
+ huddled huddl
+ huddling huddl
+ hue hue
+ hued hu
+ hues hue
+ hug hug
+ huge huge
+ hugely huge
+ hugeness huge
+ hugg hugg
+ hugger hugger
+ hugh hugh
+ hugs hug
+ hujus huju
+ hulk hulk
+ hulks hulk
+ hull hull
+ hulling hull
+ hullo hullo
+ hum hum
+ human human
+ humane human
+ humanely human
+ humanity human
+ humble humbl
+ humbled humbl
+ humbleness humbl
+ humbler humbler
+ humbles humbl
+ humblest humblest
+ humbling humbl
+ humbly humbl
+ hume hume
+ humh humh
+ humidity humid
+ humility humil
+ humming hum
+ humor humor
+ humorous humor
+ humors humor
+ humour humour
+ humourists humourist
+ humours humour
+ humphrey humphrei
+ humphry humphri
+ hums hum
+ hundred hundr
+ hundreds hundr
+ hundredth hundredth
+ hung hung
+ hungarian hungarian
+ hungary hungari
+ hunger hunger
+ hungerford hungerford
+ hungerly hungerli
+ hungry hungri
+ hunt hunt
+ hunted hunt
+ hunter hunter
+ hunters hunter
+ hunteth hunteth
+ hunting hunt
+ huntington huntington
+ huntress huntress
+ hunts hunt
+ huntsman huntsman
+ huntsmen huntsmen
+ hurdle hurdl
+ hurl hurl
+ hurling hurl
+ hurls hurl
+ hurly hurli
+ hurlyburly hurlyburli
+ hurricano hurricano
+ hurricanoes hurricano
+ hurried hurri
+ hurries hurri
+ hurry hurri
+ hurt hurt
+ hurting hurt
+ hurtled hurtl
+ hurtless hurtless
+ hurtling hurtl
+ hurts hurt
+ husband husband
+ husbanded husband
+ husbandless husbandless
+ husbandry husbandri
+ husbands husband
+ hush hush
+ hushes hush
+ husht husht
+ husks husk
+ huswife huswif
+ huswifes huswif
+ hutch hutch
+ hybla hybla
+ hydra hydra
+ hyen hyen
+ hymen hymen
+ hymenaeus hymenaeu
+ hymn hymn
+ hymns hymn
+ hyperboles hyperbol
+ hyperbolical hyperbol
+ hyperion hyperion
+ hypocrisy hypocrisi
+ hypocrite hypocrit
+ hypocrites hypocrit
+ hyrcan hyrcan
+ hyrcania hyrcania
+ hyrcanian hyrcanian
+ hyssop hyssop
+ hysterica hysterica
+ i i
+ iachimo iachimo
+ iaculis iaculi
+ iago iago
+ iament iament
+ ibat ibat
+ icarus icaru
+ ice ic
+ iceland iceland
+ ici ici
+ icicle icicl
+ icicles icicl
+ icy ici
+ idea idea
+ ideas idea
+ idem idem
+ iden iden
+ ides id
+ idiot idiot
+ idiots idiot
+ idle idl
+ idleness idl
+ idles idl
+ idly idli
+ idol idol
+ idolatrous idolatr
+ idolatry idolatri
+ ield ield
+ if if
+ ifs if
+ ignis igni
+ ignoble ignobl
+ ignobly ignobl
+ ignominious ignomini
+ ignominy ignomini
+ ignomy ignomi
+ ignorance ignor
+ ignorant ignor
+ ii ii
+ iii iii
+ iiii iiii
+ il il
+ ilbow ilbow
+ ild ild
+ ilion ilion
+ ilium ilium
+ ill ill
+ illegitimate illegitim
+ illiterate illiter
+ illness ill
+ illo illo
+ ills ill
+ illume illum
+ illumin illumin
+ illuminate illumin
+ illumineth illumineth
+ illusion illus
+ illusions illus
+ illustrate illustr
+ illustrated illustr
+ illustrious illustri
+ illyria illyria
+ illyrian illyrian
+ ils il
+ im im
+ image imag
+ imagery imageri
+ images imag
+ imagin imagin
+ imaginary imaginari
+ imagination imagin
+ imaginations imagin
+ imagine imagin
+ imagining imagin
+ imaginings imagin
+ imbar imbar
+ imbecility imbecil
+ imbrue imbru
+ imitari imitari
+ imitate imit
+ imitated imit
+ imitation imit
+ imitations imit
+ immaculate immacul
+ immanity imman
+ immask immask
+ immaterial immateri
+ immediacy immediaci
+ immediate immedi
+ immediately immedi
+ imminence immin
+ imminent immin
+ immoderate immoder
+ immoderately immoder
+ immodest immodest
+ immoment immoment
+ immortal immort
+ immortaliz immortaliz
+ immortally immort
+ immur immur
+ immured immur
+ immures immur
+ imogen imogen
+ imp imp
+ impaint impaint
+ impair impair
+ impairing impair
+ impale impal
+ impaled impal
+ impanelled impanel
+ impart impart
+ imparted impart
+ impartial imparti
+ impartment impart
+ imparts impart
+ impasted impast
+ impatience impati
+ impatient impati
+ impatiently impati
+ impawn impawn
+ impeach impeach
+ impeached impeach
+ impeachment impeach
+ impeachments impeach
+ impedes imped
+ impediment impedi
+ impediments impedi
+ impenetrable impenetr
+ imperator imper
+ imperceiverant imperceiver
+ imperfect imperfect
+ imperfection imperfect
+ imperfections imperfect
+ imperfectly imperfectli
+ imperial imperi
+ imperious imperi
+ imperiously imperi
+ impertinency impertin
+ impertinent impertin
+ impeticos impetico
+ impetuosity impetuos
+ impetuous impetu
+ impieties impieti
+ impiety impieti
+ impious impiou
+ implacable implac
+ implements implement
+ implies impli
+ implor implor
+ implorators implor
+ implore implor
+ implored implor
+ imploring implor
+ impon impon
+ import import
+ importance import
+ importancy import
+ important import
+ importantly importantli
+ imported import
+ importeth importeth
+ importing import
+ importless importless
+ imports import
+ importun importun
+ importunacy importunaci
+ importunate importun
+ importune importun
+ importunes importun
+ importunity importun
+ impos impo
+ impose impos
+ imposed impos
+ imposition imposit
+ impositions imposit
+ impossibilities imposs
+ impossibility imposs
+ impossible imposs
+ imposthume imposthum
+ impostor impostor
+ impostors impostor
+ impotence impot
+ impotent impot
+ impounded impound
+ impregnable impregn
+ imprese impres
+ impress impress
+ impressed impress
+ impressest impressest
+ impression impress
+ impressure impressur
+ imprimendum imprimendum
+ imprimis imprimi
+ imprint imprint
+ imprinted imprint
+ imprison imprison
+ imprisoned imprison
+ imprisoning imprison
+ imprisonment imprison
+ improbable improb
+ improper improp
+ improve improv
+ improvident improvid
+ impudence impud
+ impudency impud
+ impudent impud
+ impudently impud
+ impudique impudiqu
+ impugn impugn
+ impugns impugn
+ impure impur
+ imputation imput
+ impute imput
+ in in
+ inaccessible inaccess
+ inaidable inaid
+ inaudible inaud
+ inauspicious inauspici
+ incaged incag
+ incantations incant
+ incapable incap
+ incardinate incardin
+ incarnadine incarnadin
+ incarnate incarn
+ incarnation incarn
+ incens incen
+ incense incens
+ incensed incens
+ incensement incens
+ incenses incens
+ incensing incens
+ incertain incertain
+ incertainties incertainti
+ incertainty incertainti
+ incessant incess
+ incessantly incessantli
+ incest incest
+ incestuous incestu
+ inch inch
+ incharitable incharit
+ inches inch
+ incidency incid
+ incident incid
+ incision incis
+ incite incit
+ incites incit
+ incivil incivil
+ incivility incivil
+ inclin inclin
+ inclinable inclin
+ inclination inclin
+ incline inclin
+ inclined inclin
+ inclines inclin
+ inclining inclin
+ inclips inclip
+ include includ
+ included includ
+ includes includ
+ inclusive inclus
+ incomparable incompar
+ incomprehensible incomprehens
+ inconsiderate inconsider
+ inconstancy inconst
+ inconstant inconst
+ incontinency incontin
+ incontinent incontin
+ incontinently incontin
+ inconvenience inconveni
+ inconveniences inconveni
+ inconvenient inconveni
+ incony inconi
+ incorporate incorpor
+ incorps incorp
+ incorrect incorrect
+ increas increa
+ increase increas
+ increases increas
+ increaseth increaseth
+ increasing increas
+ incredible incred
+ incredulous incredul
+ incur incur
+ incurable incur
+ incurr incurr
+ incurred incur
+ incursions incurs
+ ind ind
+ inde ind
+ indebted indebt
+ indeed inde
+ indent indent
+ indented indent
+ indenture indentur
+ indentures indentur
+ index index
+ indexes index
+ india india
+ indian indian
+ indict indict
+ indicted indict
+ indictment indict
+ indies indi
+ indifferency indiffer
+ indifferent indiffer
+ indifferently indiffer
+ indigent indig
+ indigest indigest
+ indigested indigest
+ indign indign
+ indignation indign
+ indignations indign
+ indigne indign
+ indignities indign
+ indignity indign
+ indirect indirect
+ indirection indirect
+ indirections indirect
+ indirectly indirectli
+ indiscreet indiscreet
+ indiscretion indiscret
+ indispos indispo
+ indisposition indisposit
+ indissoluble indissolubl
+ indistinct indistinct
+ indistinguish indistinguish
+ indistinguishable indistinguish
+ indited indit
+ individable individ
+ indrench indrench
+ indu indu
+ indubitate indubit
+ induc induc
+ induce induc
+ induced induc
+ inducement induc
+ induction induct
+ inductions induct
+ indue indu
+ indued indu
+ indues indu
+ indulgence indulg
+ indulgences indulg
+ indulgent indulg
+ indurance indur
+ industrious industri
+ industriously industri
+ industry industri
+ inequality inequ
+ inestimable inestim
+ inevitable inevit
+ inexecrable inexecr
+ inexorable inexor
+ inexplicable inexplic
+ infallible infal
+ infallibly infal
+ infamonize infamon
+ infamous infam
+ infamy infami
+ infancy infanc
+ infant infant
+ infants infant
+ infect infect
+ infected infect
+ infecting infect
+ infection infect
+ infections infect
+ infectious infecti
+ infectiously infecti
+ infects infect
+ infer infer
+ inference infer
+ inferior inferior
+ inferiors inferior
+ infernal infern
+ inferr inferr
+ inferreth inferreth
+ inferring infer
+ infest infest
+ infidel infidel
+ infidels infidel
+ infinite infinit
+ infinitely infinit
+ infinitive infinit
+ infirm infirm
+ infirmities infirm
+ infirmity infirm
+ infixed infix
+ infixing infix
+ inflam inflam
+ inflame inflam
+ inflaming inflam
+ inflammation inflamm
+ inflict inflict
+ infliction inflict
+ influence influenc
+ influences influenc
+ infold infold
+ inform inform
+ informal inform
+ information inform
+ informations inform
+ informed inform
+ informer inform
+ informs inform
+ infortunate infortun
+ infring infr
+ infringe infring
+ infringed infring
+ infus infu
+ infuse infus
+ infused infus
+ infusing infus
+ infusion infus
+ ingener ingen
+ ingenious ingeni
+ ingeniously ingeni
+ inglorious inglori
+ ingots ingot
+ ingraffed ingraf
+ ingraft ingraft
+ ingrate ingrat
+ ingrated ingrat
+ ingrateful ingrat
+ ingratitude ingratitud
+ ingratitudes ingratitud
+ ingredient ingredi
+ ingredients ingredi
+ ingross ingross
+ inhabit inhabit
+ inhabitable inhabit
+ inhabitants inhabit
+ inhabited inhabit
+ inhabits inhabit
+ inhearse inhears
+ inhearsed inhears
+ inherent inher
+ inherit inherit
+ inheritance inherit
+ inherited inherit
+ inheriting inherit
+ inheritor inheritor
+ inheritors inheritor
+ inheritrix inheritrix
+ inherits inherit
+ inhibited inhibit
+ inhibition inhibit
+ inhoop inhoop
+ inhuman inhuman
+ iniquities iniqu
+ iniquity iniqu
+ initiate initi
+ injointed injoint
+ injunction injunct
+ injunctions injunct
+ injur injur
+ injure injur
+ injurer injur
+ injuries injuri
+ injurious injuri
+ injury injuri
+ injustice injustic
+ ink ink
+ inkhorn inkhorn
+ inkle inkl
+ inkles inkl
+ inkling inkl
+ inky inki
+ inlaid inlaid
+ inland inland
+ inlay inlai
+ inly inli
+ inmost inmost
+ inn inn
+ inner inner
+ innkeeper innkeep
+ innocence innoc
+ innocency innoc
+ innocent innoc
+ innocents innoc
+ innovation innov
+ innovator innov
+ inns inn
+ innumerable innumer
+ inoculate inocul
+ inordinate inordin
+ inprimis inprimi
+ inquir inquir
+ inquire inquir
+ inquiry inquiri
+ inquisition inquisit
+ inquisitive inquisit
+ inroads inroad
+ insane insan
+ insanie insani
+ insatiate insati
+ insconce insconc
+ inscrib inscrib
+ inscription inscript
+ inscriptions inscript
+ inscroll inscrol
+ inscrutable inscrut
+ insculp insculp
+ insculpture insculptur
+ insensible insens
+ inseparable insepar
+ inseparate insepar
+ insert insert
+ inserted insert
+ inset inset
+ inshell inshel
+ inshipp inshipp
+ inside insid
+ insinewed insinew
+ insinuate insinu
+ insinuateth insinuateth
+ insinuating insinu
+ insinuation insinu
+ insisted insist
+ insisting insist
+ insisture insistur
+ insociable insoci
+ insolence insol
+ insolent insol
+ insomuch insomuch
+ inspir inspir
+ inspiration inspir
+ inspirations inspir
+ inspire inspir
+ inspired inspir
+ install instal
+ installed instal
+ instalment instal
+ instance instanc
+ instances instanc
+ instant instant
+ instantly instantli
+ instate instat
+ instead instead
+ insteeped insteep
+ instigate instig
+ instigated instig
+ instigation instig
+ instigations instig
+ instigator instig
+ instinct instinct
+ instinctively instinct
+ institute institut
+ institutions institut
+ instruct instruct
+ instructed instruct
+ instruction instruct
+ instructions instruct
+ instructs instruct
+ instrument instrument
+ instrumental instrument
+ instruments instrument
+ insubstantial insubstanti
+ insufficience insuffici
+ insufficiency insuffici
+ insult insult
+ insulted insult
+ insulting insult
+ insultment insult
+ insults insult
+ insupportable insupport
+ insuppressive insuppress
+ insurrection insurrect
+ insurrections insurrect
+ int int
+ integer integ
+ integritas integrita
+ integrity integr
+ intellect intellect
+ intellects intellect
+ intellectual intellectu
+ intelligence intellig
+ intelligencer intelligenc
+ intelligencing intelligenc
+ intelligent intellig
+ intelligis intelligi
+ intelligo intelligo
+ intemperance intemper
+ intemperate intemper
+ intend intend
+ intended intend
+ intendeth intendeth
+ intending intend
+ intendment intend
+ intends intend
+ intenible inten
+ intent intent
+ intention intent
+ intentively intent
+ intents intent
+ inter inter
+ intercept intercept
+ intercepted intercept
+ intercepter intercept
+ interception intercept
+ intercepts intercept
+ intercession intercess
+ intercessors intercessor
+ interchained interchain
+ interchang interchang
+ interchange interchang
+ interchangeably interchang
+ interchangement interchang
+ interchanging interchang
+ interdiction interdict
+ interest interest
+ interim interim
+ interims interim
+ interior interior
+ interjections interject
+ interjoin interjoin
+ interlude interlud
+ intermingle intermingl
+ intermission intermiss
+ intermissive intermiss
+ intermit intermit
+ intermix intermix
+ intermixed intermix
+ interpose interpos
+ interposer interpos
+ interposes interpos
+ interpret interpret
+ interpretation interpret
+ interpreted interpret
+ interpreter interpret
+ interpreters interpret
+ interprets interpret
+ interr interr
+ interred inter
+ interrogatories interrogatori
+ interrupt interrupt
+ interrupted interrupt
+ interrupter interrupt
+ interruptest interruptest
+ interruption interrupt
+ interrupts interrupt
+ intertissued intertissu
+ intervallums intervallum
+ interview interview
+ intestate intest
+ intestine intestin
+ intil intil
+ intimate intim
+ intimation intim
+ intitled intitl
+ intituled intitul
+ into into
+ intolerable intoler
+ intoxicates intox
+ intreasured intreasur
+ intreat intreat
+ intrench intrench
+ intrenchant intrench
+ intricate intric
+ intrinse intrins
+ intrinsicate intrins
+ intrude intrud
+ intruder intrud
+ intruding intrud
+ intrusion intrus
+ inundation inund
+ inure inur
+ inurn inurn
+ invade invad
+ invades invad
+ invasion invas
+ invasive invas
+ invectively invect
+ invectives invect
+ inveigled inveigl
+ invent invent
+ invented invent
+ invention invent
+ inventions invent
+ inventor inventor
+ inventorially inventori
+ inventoried inventori
+ inventors inventor
+ inventory inventori
+ inverness inver
+ invert invert
+ invest invest
+ invested invest
+ investing invest
+ investments invest
+ inveterate inveter
+ invincible invinc
+ inviolable inviol
+ invised invis
+ invisible invis
+ invitation invit
+ invite invit
+ invited invit
+ invites invit
+ inviting invit
+ invitis inviti
+ invocate invoc
+ invocation invoc
+ invoke invok
+ invoked invok
+ invulnerable invulner
+ inward inward
+ inwardly inwardli
+ inwardness inward
+ inwards inward
+ ionia ionia
+ ionian ionian
+ ipse ips
+ ipswich ipswich
+ ira ira
+ irae ira
+ iras ira
+ ire ir
+ ireful ir
+ ireland ireland
+ iris iri
+ irish irish
+ irishman irishman
+ irishmen irishmen
+ irks irk
+ irksome irksom
+ iron iron
+ irons iron
+ irreconcil irreconcil
+ irrecoverable irrecover
+ irregular irregular
+ irregulous irregul
+ irreligious irreligi
+ irremovable irremov
+ irreparable irrepar
+ irresolute irresolut
+ irrevocable irrevoc
+ is is
+ isabel isabel
+ isabella isabella
+ isbel isbel
+ isbels isbel
+ iscariot iscariot
+ ise is
+ ish ish
+ isidore isidor
+ isis isi
+ island island
+ islander island
+ islanders island
+ islands island
+ isle isl
+ isles isl
+ israel israel
+ issu issu
+ issue issu
+ issued issu
+ issueless issueless
+ issues issu
+ issuing issu
+ ist ist
+ ista ista
+ it it
+ italian italian
+ italy itali
+ itch itch
+ itches itch
+ itching itch
+ item item
+ items item
+ iteration iter
+ ithaca ithaca
+ its it
+ itself itself
+ itshall itshal
+ iv iv
+ ivory ivori
+ ivy ivi
+ iwis iwi
+ ix ix
+ j j
+ jacet jacet
+ jack jack
+ jackanapes jackanap
+ jacks jack
+ jacksauce jacksauc
+ jackslave jackslav
+ jacob jacob
+ jade jade
+ jaded jade
+ jades jade
+ jail jail
+ jakes jake
+ jamany jamani
+ james jame
+ jamy jami
+ jane jane
+ jangled jangl
+ jangling jangl
+ january januari
+ janus janu
+ japhet japhet
+ jaquenetta jaquenetta
+ jaques jaqu
+ jar jar
+ jarring jar
+ jars jar
+ jarteer jarteer
+ jasons jason
+ jaunce jaunc
+ jauncing jaunc
+ jaundice jaundic
+ jaundies jaundi
+ jaw jaw
+ jawbone jawbon
+ jaws jaw
+ jay jai
+ jays jai
+ jc jc
+ je je
+ jealous jealou
+ jealousies jealousi
+ jealousy jealousi
+ jeer jeer
+ jeering jeer
+ jelly jelli
+ jenny jenni
+ jeopardy jeopardi
+ jephtha jephtha
+ jephthah jephthah
+ jerkin jerkin
+ jerkins jerkin
+ jerks jerk
+ jeronimy jeronimi
+ jerusalem jerusalem
+ jeshu jeshu
+ jesses jess
+ jessica jessica
+ jest jest
+ jested jest
+ jester jester
+ jesters jester
+ jesting jest
+ jests jest
+ jesu jesu
+ jesus jesu
+ jet jet
+ jets jet
+ jew jew
+ jewel jewel
+ jeweller jewel
+ jewels jewel
+ jewess jewess
+ jewish jewish
+ jewry jewri
+ jews jew
+ jezebel jezebel
+ jig jig
+ jigging jig
+ jill jill
+ jills jill
+ jingling jingl
+ joan joan
+ job job
+ jockey jockei
+ jocund jocund
+ jog jog
+ jogging jog
+ john john
+ johns john
+ join join
+ joinder joinder
+ joined join
+ joiner joiner
+ joineth joineth
+ joins join
+ joint joint
+ jointed joint
+ jointing joint
+ jointly jointli
+ jointress jointress
+ joints joint
+ jointure jointur
+ jollity jolliti
+ jolly jolli
+ jolt jolt
+ joltheads jolthead
+ jordan jordan
+ joseph joseph
+ joshua joshua
+ jot jot
+ jour jour
+ jourdain jourdain
+ journal journal
+ journey journei
+ journeying journei
+ journeyman journeyman
+ journeymen journeymen
+ journeys journei
+ jove jove
+ jovem jovem
+ jovial jovial
+ jowl jowl
+ jowls jowl
+ joy joi
+ joyed joi
+ joyful joy
+ joyfully joyfulli
+ joyless joyless
+ joyous joyou
+ joys joi
+ juan juan
+ jud jud
+ judas juda
+ judases judas
+ jude jude
+ judg judg
+ judge judg
+ judged judg
+ judgement judgement
+ judges judg
+ judgest judgest
+ judging judg
+ judgment judgment
+ judgments judgment
+ judicious judici
+ jug jug
+ juggle juggl
+ juggled juggl
+ juggler juggler
+ jugglers juggler
+ juggling juggl
+ jugs jug
+ juice juic
+ juiced juic
+ jul jul
+ jule jule
+ julia julia
+ juliet juliet
+ julietta julietta
+ julio julio
+ julius juliu
+ july juli
+ jump jump
+ jumpeth jumpeth
+ jumping jump
+ jumps jump
+ june june
+ junes june
+ junior junior
+ junius juniu
+ junkets junket
+ juno juno
+ jupiter jupit
+ jure jure
+ jurement jurement
+ jurisdiction jurisdict
+ juror juror
+ jurors juror
+ jury juri
+ jurymen jurymen
+ just just
+ justeius justeiu
+ justest justest
+ justice justic
+ justicer justic
+ justicers justic
+ justices justic
+ justification justif
+ justified justifi
+ justify justifi
+ justle justl
+ justled justl
+ justles justl
+ justling justl
+ justly justli
+ justness just
+ justs just
+ jutting jut
+ jutty jutti
+ juvenal juven
+ kam kam
+ kate kate
+ kated kate
+ kates kate
+ katharine katharin
+ katherina katherina
+ katherine katherin
+ kecksies kecksi
+ keech keech
+ keel keel
+ keels keel
+ keen keen
+ keenness keen
+ keep keep
+ keepdown keepdown
+ keeper keeper
+ keepers keeper
+ keepest keepest
+ keeping keep
+ keeps keep
+ keiser keiser
+ ken ken
+ kendal kendal
+ kennel kennel
+ kent kent
+ kentish kentish
+ kentishman kentishman
+ kentishmen kentishmen
+ kept kept
+ kerchief kerchief
+ kerely kere
+ kern kern
+ kernal kernal
+ kernel kernel
+ kernels kernel
+ kerns kern
+ kersey kersei
+ kettle kettl
+ kettledrum kettledrum
+ kettledrums kettledrum
+ key kei
+ keys kei
+ kibe kibe
+ kibes kibe
+ kick kick
+ kicked kick
+ kickshaws kickshaw
+ kickshawses kickshaws
+ kicky kicki
+ kid kid
+ kidney kidnei
+ kikely kike
+ kildare kildar
+ kill kill
+ killed kill
+ killer killer
+ killeth killeth
+ killing kill
+ killingworth killingworth
+ kills kill
+ kiln kiln
+ kimbolton kimbolton
+ kin kin
+ kind kind
+ kinder kinder
+ kindest kindest
+ kindle kindl
+ kindled kindl
+ kindless kindless
+ kindlier kindlier
+ kindling kindl
+ kindly kindli
+ kindness kind
+ kindnesses kind
+ kindred kindr
+ kindreds kindr
+ kinds kind
+ kine kine
+ king king
+ kingdom kingdom
+ kingdoms kingdom
+ kingly kingli
+ kings king
+ kinred kinr
+ kins kin
+ kinsman kinsman
+ kinsmen kinsmen
+ kinswoman kinswoman
+ kirtle kirtl
+ kirtles kirtl
+ kiss kiss
+ kissed kiss
+ kisses kiss
+ kissing kiss
+ kitchen kitchen
+ kitchens kitchen
+ kite kite
+ kites kite
+ kitten kitten
+ kj kj
+ kl kl
+ klll klll
+ knack knack
+ knacks knack
+ knapp knapp
+ knav knav
+ knave knave
+ knaveries knaveri
+ knavery knaveri
+ knaves knave
+ knavish knavish
+ knead knead
+ kneaded knead
+ kneading knead
+ knee knee
+ kneel kneel
+ kneeling kneel
+ kneels kneel
+ knees knee
+ knell knell
+ knew knew
+ knewest knewest
+ knife knife
+ knight knight
+ knighted knight
+ knighthood knighthood
+ knighthoods knighthood
+ knightly knightli
+ knights knight
+ knit knit
+ knits knit
+ knitters knitter
+ knitteth knitteth
+ knives knive
+ knobs knob
+ knock knock
+ knocking knock
+ knocks knock
+ knog knog
+ knoll knoll
+ knot knot
+ knots knot
+ knotted knot
+ knotty knotti
+ know know
+ knower knower
+ knowest knowest
+ knowing know
+ knowingly knowingli
+ knowings know
+ knowledge knowledg
+ known known
+ knows know
+ l l
+ la la
+ laban laban
+ label label
+ labell label
+ labienus labienu
+ labio labio
+ labor labor
+ laboring labor
+ labors labor
+ labour labour
+ laboured labour
+ labourer labour
+ labourers labour
+ labouring labour
+ labours labour
+ laboursome laboursom
+ labras labra
+ labyrinth labyrinth
+ lac lac
+ lace lace
+ laced lace
+ lacedaemon lacedaemon
+ laces lace
+ lacies laci
+ lack lack
+ lackbeard lackbeard
+ lacked lack
+ lackey lackei
+ lackeying lackei
+ lackeys lackei
+ lacking lack
+ lacks lack
+ lad lad
+ ladder ladder
+ ladders ladder
+ lade lade
+ laden laden
+ ladies ladi
+ lading lade
+ lads lad
+ lady ladi
+ ladybird ladybird
+ ladyship ladyship
+ ladyships ladyship
+ laer laer
+ laertes laert
+ lafeu lafeu
+ lag lag
+ lagging lag
+ laid laid
+ lain lain
+ laissez laissez
+ lake lake
+ lakes lake
+ lakin lakin
+ lam lam
+ lamb lamb
+ lambert lambert
+ lambkin lambkin
+ lambkins lambkin
+ lambs lamb
+ lame lame
+ lamely lame
+ lameness lame
+ lament lament
+ lamentable lament
+ lamentably lament
+ lamentation lament
+ lamentations lament
+ lamented lament
+ lamenting lament
+ lamentings lament
+ laments lament
+ lames lame
+ laming lame
+ lammas lamma
+ lammastide lammastid
+ lamound lamound
+ lamp lamp
+ lampass lampass
+ lamps lamp
+ lanc lanc
+ lancaster lancast
+ lance lanc
+ lances lanc
+ lanceth lanceth
+ lanch lanch
+ land land
+ landed land
+ landing land
+ landless landless
+ landlord landlord
+ landmen landmen
+ lands land
+ lane lane
+ lanes lane
+ langage langag
+ langley langlei
+ langton langton
+ language languag
+ languageless languageless
+ languages languag
+ langues langu
+ languish languish
+ languished languish
+ languishes languish
+ languishing languish
+ languishings languish
+ languishment languish
+ languor languor
+ lank lank
+ lantern lantern
+ lanterns lantern
+ lanthorn lanthorn
+ lap lap
+ lapis lapi
+ lapland lapland
+ lapp lapp
+ laps lap
+ lapse laps
+ lapsed laps
+ lapsing laps
+ lapwing lapw
+ laquais laquai
+ larded lard
+ larder larder
+ larding lard
+ lards lard
+ large larg
+ largely larg
+ largeness larg
+ larger larger
+ largess largess
+ largest largest
+ lark lark
+ larks lark
+ larron larron
+ lartius lartiu
+ larum larum
+ larums larum
+ las la
+ lascivious lascivi
+ lash lash
+ lass lass
+ lasses lass
+ last last
+ lasted last
+ lasting last
+ lastly lastli
+ lasts last
+ latch latch
+ latches latch
+ late late
+ lated late
+ lately late
+ later later
+ latest latest
+ lath lath
+ latin latin
+ latten latten
+ latter latter
+ lattice lattic
+ laud laud
+ laudable laudabl
+ laudis laudi
+ laugh laugh
+ laughable laughabl
+ laughed laugh
+ laugher laugher
+ laughest laughest
+ laughing laugh
+ laughs laugh
+ laughter laughter
+ launce launc
+ launcelot launcelot
+ launces launc
+ launch launch
+ laund laund
+ laundress laundress
+ laundry laundri
+ laur laur
+ laura laura
+ laurel laurel
+ laurels laurel
+ laurence laurenc
+ laus lau
+ lavache lavach
+ lave lave
+ lavee lave
+ lavender lavend
+ lavina lavina
+ lavinia lavinia
+ lavish lavish
+ lavishly lavishli
+ lavolt lavolt
+ lavoltas lavolta
+ law law
+ lawful law
+ lawfully lawfulli
+ lawless lawless
+ lawlessly lawlessli
+ lawn lawn
+ lawns lawn
+ lawrence lawrenc
+ laws law
+ lawyer lawyer
+ lawyers lawyer
+ lay lai
+ layer layer
+ layest layest
+ laying lai
+ lays lai
+ lazar lazar
+ lazars lazar
+ lazarus lazaru
+ lazy lazi
+ lc lc
+ ld ld
+ ldst ldst
+ le le
+ lead lead
+ leaden leaden
+ leader leader
+ leaders leader
+ leadest leadest
+ leading lead
+ leads lead
+ leaf leaf
+ leagu leagu
+ league leagu
+ leagued leagu
+ leaguer leaguer
+ leagues leagu
+ leah leah
+ leak leak
+ leaky leaki
+ lean lean
+ leander leander
+ leaner leaner
+ leaning lean
+ leanness lean
+ leans lean
+ leap leap
+ leaped leap
+ leaping leap
+ leaps leap
+ leapt leapt
+ lear lear
+ learn learn
+ learned learn
+ learnedly learnedli
+ learning learn
+ learnings learn
+ learns learn
+ learnt learnt
+ leas lea
+ lease leas
+ leases leas
+ leash leash
+ leasing leas
+ least least
+ leather leather
+ leathern leathern
+ leav leav
+ leave leav
+ leaven leaven
+ leavening leaven
+ leaver leaver
+ leaves leav
+ leaving leav
+ leavy leavi
+ lecher lecher
+ lecherous lecher
+ lechers lecher
+ lechery lecheri
+ lecon lecon
+ lecture lectur
+ lectures lectur
+ led led
+ leda leda
+ leech leech
+ leeches leech
+ leek leek
+ leeks leek
+ leer leer
+ leers leer
+ lees lee
+ leese lees
+ leet leet
+ leets leet
+ left left
+ leg leg
+ legacies legaci
+ legacy legaci
+ legate legat
+ legatine legatin
+ lege lege
+ legerity leger
+ leges lege
+ legg legg
+ legion legion
+ legions legion
+ legitimate legitim
+ legitimation legitim
+ legs leg
+ leicester leicest
+ leicestershire leicestershir
+ leiger leiger
+ leigers leiger
+ leisure leisur
+ leisurely leisur
+ leisures leisur
+ leman leman
+ lemon lemon
+ lena lena
+ lend lend
+ lender lender
+ lending lend
+ lendings lend
+ lends lend
+ length length
+ lengthen lengthen
+ lengthens lengthen
+ lengths length
+ lenity leniti
+ lennox lennox
+ lent lent
+ lenten lenten
+ lentus lentu
+ leo leo
+ leon leon
+ leonardo leonardo
+ leonati leonati
+ leonato leonato
+ leonatus leonatu
+ leontes leont
+ leopard leopard
+ leopards leopard
+ leper leper
+ leperous leper
+ lepidus lepidu
+ leprosy leprosi
+ lequel lequel
+ lers ler
+ les le
+ less less
+ lessen lessen
+ lessens lessen
+ lesser lesser
+ lesson lesson
+ lessoned lesson
+ lessons lesson
+ lest lest
+ lestrake lestrak
+ let let
+ lethargied lethargi
+ lethargies lethargi
+ lethargy lethargi
+ lethe leth
+ lets let
+ lett lett
+ letter letter
+ letters letter
+ letting let
+ lettuce lettuc
+ leur leur
+ leve leve
+ level level
+ levell level
+ levelled level
+ levels level
+ leven leven
+ levers lever
+ leviathan leviathan
+ leviathans leviathan
+ levied levi
+ levies levi
+ levity leviti
+ levy levi
+ levying levi
+ lewd lewd
+ lewdly lewdli
+ lewdness lewd
+ lewdsters lewdster
+ lewis lewi
+ liable liabl
+ liar liar
+ liars liar
+ libbard libbard
+ libelling libel
+ libels libel
+ liberal liber
+ liberality liber
+ liberte libert
+ liberties liberti
+ libertine libertin
+ libertines libertin
+ liberty liberti
+ library librari
+ libya libya
+ licence licenc
+ licens licen
+ license licens
+ licentious licenti
+ lichas licha
+ licio licio
+ lick lick
+ licked lick
+ licker licker
+ lictors lictor
+ lid lid
+ lids lid
+ lie lie
+ lied li
+ lief lief
+ liefest liefest
+ liege lieg
+ liegeman liegeman
+ liegemen liegemen
+ lien lien
+ lies li
+ liest liest
+ lieth lieth
+ lieu lieu
+ lieutenant lieuten
+ lieutenantry lieutenantri
+ lieutenants lieuten
+ lieve liev
+ life life
+ lifeblood lifeblood
+ lifeless lifeless
+ lifelings lifel
+ lift lift
+ lifted lift
+ lifter lifter
+ lifteth lifteth
+ lifting lift
+ lifts lift
+ lig lig
+ ligarius ligariu
+ liggens liggen
+ light light
+ lighted light
+ lighten lighten
+ lightens lighten
+ lighter lighter
+ lightest lightest
+ lightly lightli
+ lightness light
+ lightning lightn
+ lightnings lightn
+ lights light
+ lik lik
+ like like
+ liked like
+ likeliest likeliest
+ likelihood likelihood
+ likelihoods likelihood
+ likely like
+ likeness like
+ liker liker
+ likes like
+ likest likest
+ likewise likewis
+ liking like
+ likings like
+ lilies lili
+ lily lili
+ lim lim
+ limander limand
+ limb limb
+ limbeck limbeck
+ limbecks limbeck
+ limber limber
+ limbo limbo
+ limbs limb
+ lime lime
+ limed lime
+ limehouse limehous
+ limekilns limekiln
+ limit limit
+ limitation limit
+ limited limit
+ limits limit
+ limn limn
+ limp limp
+ limping limp
+ limps limp
+ lin lin
+ lincoln lincoln
+ lincolnshire lincolnshir
+ line line
+ lineal lineal
+ lineally lineal
+ lineament lineament
+ lineaments lineament
+ lined line
+ linen linen
+ linens linen
+ lines line
+ ling ling
+ lingare lingar
+ linger linger
+ lingered linger
+ lingers linger
+ linguist linguist
+ lining line
+ link link
+ links link
+ linsey linsei
+ linstock linstock
+ linta linta
+ lion lion
+ lionel lionel
+ lioness lioness
+ lions lion
+ lip lip
+ lipp lipp
+ lips lip
+ lipsbury lipsburi
+ liquid liquid
+ liquor liquor
+ liquorish liquorish
+ liquors liquor
+ lirra lirra
+ lisbon lisbon
+ lisp lisp
+ lisping lisp
+ list list
+ listen listen
+ listening listen
+ lists list
+ literatured literatur
+ lither lither
+ litter litter
+ little littl
+ littlest littlest
+ liv liv
+ live live
+ lived live
+ livelier liveli
+ livelihood livelihood
+ livelong livelong
+ lively live
+ liver liver
+ liveries liveri
+ livers liver
+ livery liveri
+ lives live
+ livest livest
+ liveth liveth
+ livia livia
+ living live
+ livings live
+ lizard lizard
+ lizards lizard
+ ll ll
+ lll lll
+ llous llou
+ lnd lnd
+ lo lo
+ loa loa
+ loach loach
+ load load
+ loaden loaden
+ loading load
+ loads load
+ loaf loaf
+ loam loam
+ loan loan
+ loath loath
+ loathe loath
+ loathed loath
+ loather loather
+ loathes loath
+ loathing loath
+ loathly loathli
+ loathness loath
+ loathsome loathsom
+ loathsomeness loathsom
+ loathsomest loathsomest
+ loaves loav
+ lob lob
+ lobbies lobbi
+ lobby lobbi
+ local local
+ lochaber lochab
+ lock lock
+ locked lock
+ locking lock
+ lockram lockram
+ locks lock
+ locusts locust
+ lode lode
+ lodg lodg
+ lodge lodg
+ lodged lodg
+ lodgers lodger
+ lodges lodg
+ lodging lodg
+ lodgings lodg
+ lodovico lodovico
+ lodowick lodowick
+ lofty lofti
+ log log
+ logger logger
+ loggerhead loggerhead
+ loggerheads loggerhead
+ loggets logget
+ logic logic
+ logs log
+ loins loin
+ loiter loiter
+ loiterer loiter
+ loiterers loiter
+ loitering loiter
+ lolling loll
+ lolls loll
+ lombardy lombardi
+ london london
+ londoners london
+ lone lone
+ loneliness loneli
+ lonely lone
+ long long
+ longaville longavil
+ longboat longboat
+ longed long
+ longer longer
+ longest longest
+ longeth longeth
+ longing long
+ longings long
+ longly longli
+ longs long
+ longtail longtail
+ loo loo
+ loof loof
+ look look
+ looked look
+ looker looker
+ lookers looker
+ lookest lookest
+ looking look
+ looks look
+ loon loon
+ loop loop
+ loos loo
+ loose loos
+ loosed loos
+ loosely loos
+ loosen loosen
+ loosing loos
+ lop lop
+ lopp lopp
+ loquitur loquitur
+ lord lord
+ lorded lord
+ lording lord
+ lordings lord
+ lordliness lordli
+ lordly lordli
+ lords lord
+ lordship lordship
+ lordships lordship
+ lorenzo lorenzo
+ lorn lorn
+ lorraine lorrain
+ lorship lorship
+ los lo
+ lose lose
+ loser loser
+ losers loser
+ loses lose
+ losest losest
+ loseth loseth
+ losing lose
+ loss loss
+ losses loss
+ lost lost
+ lot lot
+ lots lot
+ lott lott
+ lottery lotteri
+ loud loud
+ louder louder
+ loudly loudli
+ lour lour
+ loureth loureth
+ louring lour
+ louse lous
+ louses lous
+ lousy lousi
+ lout lout
+ louted lout
+ louts lout
+ louvre louvr
+ lov lov
+ love love
+ loved love
+ lovedst lovedst
+ lovel lovel
+ lovelier loveli
+ loveliness loveli
+ lovell lovel
+ lovely love
+ lover lover
+ lovered lover
+ lovers lover
+ loves love
+ lovest lovest
+ loveth loveth
+ loving love
+ lovingly lovingli
+ low low
+ lowe low
+ lower lower
+ lowest lowest
+ lowing low
+ lowliness lowli
+ lowly lowli
+ lown lown
+ lowness low
+ loyal loyal
+ loyally loyal
+ loyalties loyalti
+ loyalty loyalti
+ lozel lozel
+ lt lt
+ lubber lubber
+ lubberly lubberli
+ luc luc
+ luccicos luccico
+ luce luce
+ lucentio lucentio
+ luces luce
+ lucetta lucetta
+ luciana luciana
+ lucianus lucianu
+ lucifer lucif
+ lucifier lucifi
+ lucilius luciliu
+ lucina lucina
+ lucio lucio
+ lucius luciu
+ luck luck
+ luckier luckier
+ luckiest luckiest
+ luckily luckili
+ luckless luckless
+ lucky lucki
+ lucre lucr
+ lucrece lucrec
+ lucretia lucretia
+ lucullius luculliu
+ lucullus lucullu
+ lucy luci
+ lud lud
+ ludlow ludlow
+ lug lug
+ lugg lugg
+ luggage luggag
+ luke luke
+ lukewarm lukewarm
+ lull lull
+ lulla lulla
+ lullaby lullabi
+ lulls lull
+ lumbert lumbert
+ lump lump
+ lumpish lumpish
+ luna luna
+ lunacies lunaci
+ lunacy lunaci
+ lunatic lunat
+ lunatics lunat
+ lunes lune
+ lungs lung
+ lupercal luperc
+ lurch lurch
+ lure lure
+ lurk lurk
+ lurketh lurketh
+ lurking lurk
+ lurks lurk
+ luscious lusciou
+ lush lush
+ lust lust
+ lusted lust
+ luster luster
+ lustful lust
+ lustier lustier
+ lustiest lustiest
+ lustig lustig
+ lustihood lustihood
+ lustily lustili
+ lustre lustr
+ lustrous lustrou
+ lusts lust
+ lusty lusti
+ lute lute
+ lutes lute
+ lutestring lutestr
+ lutheran lutheran
+ luxurious luxuri
+ luxuriously luxuri
+ luxury luxuri
+ ly ly
+ lycaonia lycaonia
+ lycurguses lycurgus
+ lydia lydia
+ lye lye
+ lyen lyen
+ lying ly
+ lym lym
+ lymoges lymog
+ lynn lynn
+ lysander lysand
+ m m
+ ma ma
+ maan maan
+ mab mab
+ macbeth macbeth
+ maccabaeus maccabaeu
+ macdonwald macdonwald
+ macduff macduff
+ mace mace
+ macedon macedon
+ maces mace
+ machiavel machiavel
+ machination machin
+ machinations machin
+ machine machin
+ mack mack
+ macmorris macmorri
+ maculate macul
+ maculation macul
+ mad mad
+ madam madam
+ madame madam
+ madams madam
+ madcap madcap
+ madded mad
+ madding mad
+ made made
+ madeira madeira
+ madly madli
+ madman madman
+ madmen madmen
+ madness mad
+ madonna madonna
+ madrigals madrig
+ mads mad
+ maecenas maecena
+ maggot maggot
+ maggots maggot
+ magic magic
+ magical magic
+ magician magician
+ magistrate magistr
+ magistrates magistr
+ magnanimity magnanim
+ magnanimous magnanim
+ magni magni
+ magnifi magnifi
+ magnificence magnific
+ magnificent magnific
+ magnifico magnifico
+ magnificoes magnifico
+ magnus magnu
+ mahomet mahomet
+ mahu mahu
+ maid maid
+ maiden maiden
+ maidenhead maidenhead
+ maidenheads maidenhead
+ maidenhood maidenhood
+ maidenhoods maidenhood
+ maidenliest maidenliest
+ maidenly maidenli
+ maidens maiden
+ maidhood maidhood
+ maids maid
+ mail mail
+ mailed mail
+ mails mail
+ maim maim
+ maimed maim
+ maims maim
+ main main
+ maincourse maincours
+ maine main
+ mainly mainli
+ mainmast mainmast
+ mains main
+ maintain maintain
+ maintained maintain
+ maintains maintain
+ maintenance mainten
+ mais mai
+ maison maison
+ majestas majesta
+ majestee majeste
+ majestic majest
+ majestical majest
+ majestically majest
+ majesties majesti
+ majesty majesti
+ major major
+ majority major
+ mak mak
+ make make
+ makeless makeless
+ maker maker
+ makers maker
+ makes make
+ makest makest
+ maketh maketh
+ making make
+ makings make
+ mal mal
+ mala mala
+ maladies maladi
+ malady maladi
+ malapert malapert
+ malcolm malcolm
+ malcontent malcont
+ malcontents malcont
+ male male
+ maledictions maledict
+ malefactions malefact
+ malefactor malefactor
+ malefactors malefactor
+ males male
+ malevolence malevol
+ malevolent malevol
+ malhecho malhecho
+ malice malic
+ malicious malici
+ maliciously malici
+ malign malign
+ malignancy malign
+ malignant malign
+ malignantly malignantli
+ malkin malkin
+ mall mall
+ mallard mallard
+ mallet mallet
+ mallows mallow
+ malmsey malmsei
+ malt malt
+ maltworms maltworm
+ malvolio malvolio
+ mamillius mamilliu
+ mammering mammer
+ mammet mammet
+ mammets mammet
+ mammock mammock
+ man man
+ manacle manacl
+ manacles manacl
+ manage manag
+ managed manag
+ manager manag
+ managing manag
+ manakin manakin
+ manchus manchu
+ mandate mandat
+ mandragora mandragora
+ mandrake mandrak
+ mandrakes mandrak
+ mane mane
+ manent manent
+ manes mane
+ manet manet
+ manfully manfulli
+ mangle mangl
+ mangled mangl
+ mangles mangl
+ mangling mangl
+ mangy mangi
+ manhood manhood
+ manhoods manhood
+ manifest manifest
+ manifested manifest
+ manifests manifest
+ manifold manifold
+ manifoldly manifoldli
+ manka manka
+ mankind mankind
+ manlike manlik
+ manly manli
+ mann mann
+ manna manna
+ manner manner
+ mannerly mannerli
+ manners manner
+ manningtree manningtre
+ mannish mannish
+ manor manor
+ manors manor
+ mans man
+ mansion mansion
+ mansionry mansionri
+ mansions mansion
+ manslaughter manslaught
+ mantle mantl
+ mantled mantl
+ mantles mantl
+ mantua mantua
+ mantuan mantuan
+ manual manual
+ manure manur
+ manured manur
+ manus manu
+ many mani
+ map map
+ mapp mapp
+ maps map
+ mar mar
+ marble marbl
+ marbled marbl
+ marcade marcad
+ marcellus marcellu
+ march march
+ marches march
+ marcheth marcheth
+ marching march
+ marchioness marchio
+ marchpane marchpan
+ marcians marcian
+ marcius marciu
+ marcus marcu
+ mardian mardian
+ mare mare
+ mares mare
+ marg marg
+ margarelon margarelon
+ margaret margaret
+ marge marg
+ margent margent
+ margery margeri
+ maria maria
+ marian marian
+ mariana mariana
+ maries mari
+ marigold marigold
+ mariner marin
+ mariners marin
+ maritime maritim
+ marjoram marjoram
+ mark mark
+ marked mark
+ market market
+ marketable market
+ marketplace marketplac
+ markets market
+ marking mark
+ markman markman
+ marks mark
+ marl marl
+ marle marl
+ marmoset marmoset
+ marquess marquess
+ marquis marqui
+ marr marr
+ marriage marriag
+ marriages marriag
+ married marri
+ marries marri
+ marring mar
+ marrow marrow
+ marrowless marrowless
+ marrows marrow
+ marry marri
+ marrying marri
+ mars mar
+ marseilles marseil
+ marsh marsh
+ marshal marshal
+ marshalsea marshalsea
+ marshalship marshalship
+ mart mart
+ marted mart
+ martem martem
+ martext martext
+ martial martial
+ martin martin
+ martino martino
+ martius martiu
+ martlemas martlema
+ martlet martlet
+ marts mart
+ martyr martyr
+ martyrs martyr
+ marullus marullu
+ marv marv
+ marvel marvel
+ marvell marvel
+ marvellous marvel
+ marvellously marvel
+ marvels marvel
+ mary mari
+ mas ma
+ masculine masculin
+ masham masham
+ mask mask
+ masked mask
+ masker masker
+ maskers masker
+ masking mask
+ masks mask
+ mason mason
+ masonry masonri
+ masons mason
+ masque masqu
+ masquers masquer
+ masques masqu
+ masquing masqu
+ mass mass
+ massacre massacr
+ massacres massacr
+ masses mass
+ massy massi
+ mast mast
+ mastcr mastcr
+ master master
+ masterdom masterdom
+ masterest masterest
+ masterless masterless
+ masterly masterli
+ masterpiece masterpiec
+ masters master
+ mastership mastership
+ mastic mastic
+ mastiff mastiff
+ mastiffs mastiff
+ masts mast
+ match match
+ matches match
+ matcheth matcheth
+ matching match
+ matchless matchless
+ mate mate
+ mated mate
+ mater mater
+ material materi
+ mates mate
+ mathematics mathemat
+ matin matin
+ matron matron
+ matrons matron
+ matter matter
+ matters matter
+ matthew matthew
+ mattock mattock
+ mattress mattress
+ mature matur
+ maturity matur
+ maud maud
+ maudlin maudlin
+ maugre maugr
+ maul maul
+ maund maund
+ mauri mauri
+ mauritania mauritania
+ mauvais mauvai
+ maw maw
+ maws maw
+ maxim maxim
+ may mai
+ mayday maydai
+ mayest mayest
+ mayor mayor
+ maypole maypol
+ mayst mayst
+ maz maz
+ maze maze
+ mazed maze
+ mazes maze
+ mazzard mazzard
+ me me
+ meacock meacock
+ mead mead
+ meadow meadow
+ meadows meadow
+ meads mead
+ meagre meagr
+ meal meal
+ meals meal
+ mealy meali
+ mean mean
+ meanders meander
+ meaner meaner
+ meanest meanest
+ meaneth meaneth
+ meaning mean
+ meanings mean
+ meanly meanli
+ means mean
+ meant meant
+ meantime meantim
+ meanwhile meanwhil
+ measles measl
+ measur measur
+ measurable measur
+ measure measur
+ measured measur
+ measureless measureless
+ measures measur
+ measuring measur
+ meat meat
+ meats meat
+ mechanic mechan
+ mechanical mechan
+ mechanicals mechan
+ mechanics mechan
+ mechante mechant
+ med med
+ medal medal
+ meddle meddl
+ meddler meddler
+ meddling meddl
+ mede mede
+ medea medea
+ media media
+ mediation mediat
+ mediators mediat
+ medice medic
+ medicinal medicin
+ medicine medicin
+ medicines medicin
+ meditate medit
+ meditates medit
+ meditating medit
+ meditation medit
+ meditations medit
+ mediterranean mediterranean
+ mediterraneum mediterraneum
+ medlar medlar
+ medlars medlar
+ meed meed
+ meeds meed
+ meek meek
+ meekly meekli
+ meekness meek
+ meet meet
+ meeter meeter
+ meetest meetest
+ meeting meet
+ meetings meet
+ meetly meetli
+ meetness meet
+ meets meet
+ meg meg
+ mehercle mehercl
+ meilleur meilleur
+ meiny meini
+ meisen meisen
+ melancholies melancholi
+ melancholy melancholi
+ melford melford
+ mell mell
+ mellifluous melliflu
+ mellow mellow
+ mellowing mellow
+ melodious melodi
+ melody melodi
+ melt melt
+ melted melt
+ melteth melteth
+ melting melt
+ melts melt
+ melun melun
+ member member
+ members member
+ memento memento
+ memorable memor
+ memorandums memorandum
+ memorial memori
+ memorials memori
+ memories memori
+ memoriz memoriz
+ memorize memor
+ memory memori
+ memphis memphi
+ men men
+ menac menac
+ menace menac
+ menaces menac
+ menaphon menaphon
+ menas mena
+ mend mend
+ mended mend
+ mender mender
+ mending mend
+ mends mend
+ menecrates menecr
+ menelaus menelau
+ menenius meneniu
+ mental mental
+ menteith menteith
+ mention mention
+ mentis menti
+ menton menton
+ mephostophilus mephostophilu
+ mer mer
+ mercatante mercatant
+ mercatio mercatio
+ mercenaries mercenari
+ mercenary mercenari
+ mercer mercer
+ merchandise merchandis
+ merchandized merchand
+ merchant merchant
+ merchants merchant
+ mercies merci
+ merciful merci
+ mercifully mercifulli
+ merciless merciless
+ mercurial mercuri
+ mercuries mercuri
+ mercury mercuri
+ mercutio mercutio
+ mercy merci
+ mere mere
+ mered mere
+ merely mere
+ merest merest
+ meridian meridian
+ merit merit
+ merited merit
+ meritorious meritori
+ merits merit
+ merlin merlin
+ mermaid mermaid
+ mermaids mermaid
+ merops merop
+ merrier merrier
+ merriest merriest
+ merrily merrili
+ merriman merriman
+ merriment merriment
+ merriments merriment
+ merriness merri
+ merry merri
+ mervailous mervail
+ mes me
+ mesh mesh
+ meshes mesh
+ mesopotamia mesopotamia
+ mess mess
+ message messag
+ messages messag
+ messala messala
+ messaline messalin
+ messenger messeng
+ messengers messeng
+ messes mess
+ messina messina
+ met met
+ metal metal
+ metals metal
+ metamorphis metamorphi
+ metamorphoses metamorphos
+ metaphor metaphor
+ metaphysical metaphys
+ metaphysics metaphys
+ mete mete
+ metellus metellu
+ meteor meteor
+ meteors meteor
+ meteyard meteyard
+ metheglin metheglin
+ metheglins metheglin
+ methink methink
+ methinks methink
+ method method
+ methods method
+ methought methought
+ methoughts methought
+ metre metr
+ metres metr
+ metropolis metropoli
+ mette mett
+ mettle mettl
+ mettled mettl
+ meus meu
+ mew mew
+ mewed mew
+ mewling mewl
+ mexico mexico
+ mi mi
+ mice mice
+ michael michael
+ michaelmas michaelma
+ micher micher
+ miching mich
+ mickle mickl
+ microcosm microcosm
+ mid mid
+ midas mida
+ middest middest
+ middle middl
+ middleham middleham
+ midnight midnight
+ midriff midriff
+ midst midst
+ midsummer midsumm
+ midway midwai
+ midwife midwif
+ midwives midwiv
+ mienne mienn
+ might might
+ mightful might
+ mightier mightier
+ mightiest mightiest
+ mightily mightili
+ mightiness mighti
+ mightst mightst
+ mighty mighti
+ milan milan
+ milch milch
+ mild mild
+ milder milder
+ mildest mildest
+ mildew mildew
+ mildews mildew
+ mildly mildli
+ mildness mild
+ mile mile
+ miles mile
+ milford milford
+ militarist militarist
+ military militari
+ milk milk
+ milking milk
+ milkmaid milkmaid
+ milks milk
+ milksops milksop
+ milky milki
+ mill mill
+ mille mill
+ miller miller
+ milliner millin
+ million million
+ millioned million
+ millions million
+ mills mill
+ millstones millston
+ milo milo
+ mimic mimic
+ minc minc
+ mince minc
+ minces minc
+ mincing minc
+ mind mind
+ minded mind
+ minding mind
+ mindless mindless
+ minds mind
+ mine mine
+ mineral miner
+ minerals miner
+ minerva minerva
+ mines mine
+ mingle mingl
+ mingled mingl
+ mingling mingl
+ minikin minikin
+ minim minim
+ minime minim
+ minimo minimo
+ minimus minimu
+ mining mine
+ minion minion
+ minions minion
+ minist minist
+ minister minist
+ ministers minist
+ ministration ministr
+ minnow minnow
+ minnows minnow
+ minola minola
+ minority minor
+ minos mino
+ minotaurs minotaur
+ minstrel minstrel
+ minstrels minstrel
+ minstrelsy minstrelsi
+ mint mint
+ mints mint
+ minute minut
+ minutely minut
+ minutes minut
+ minx minx
+ mio mio
+ mir mir
+ mirable mirabl
+ miracle miracl
+ miracles miracl
+ miraculous miracul
+ miranda miranda
+ mire mire
+ mirror mirror
+ mirrors mirror
+ mirth mirth
+ mirthful mirth
+ miry miri
+ mis mi
+ misadventur misadventur
+ misadventure misadventur
+ misanthropos misanthropo
+ misapplied misappli
+ misbecame misbecam
+ misbecom misbecom
+ misbecome misbecom
+ misbegot misbegot
+ misbegotten misbegotten
+ misbeliever misbeliev
+ misbelieving misbeliev
+ misbhav misbhav
+ miscall miscal
+ miscalled miscal
+ miscarried miscarri
+ miscarries miscarri
+ miscarry miscarri
+ miscarrying miscarri
+ mischance mischanc
+ mischances mischanc
+ mischief mischief
+ mischiefs mischief
+ mischievous mischiev
+ misconceived misconceiv
+ misconst misconst
+ misconster misconst
+ misconstruction misconstruct
+ misconstrued misconstru
+ misconstrues misconstru
+ miscreant miscreant
+ miscreate miscreat
+ misdeed misde
+ misdeeds misde
+ misdemean misdemean
+ misdemeanours misdemeanour
+ misdoubt misdoubt
+ misdoubteth misdoubteth
+ misdoubts misdoubt
+ misenum misenum
+ miser miser
+ miserable miser
+ miserably miser
+ misericorde misericord
+ miseries miseri
+ misers miser
+ misery miseri
+ misfortune misfortun
+ misfortunes misfortun
+ misgive misgiv
+ misgives misgiv
+ misgiving misgiv
+ misgoverned misgovern
+ misgovernment misgovern
+ misgraffed misgraf
+ misguide misguid
+ mishap mishap
+ mishaps mishap
+ misheard misheard
+ misinterpret misinterpret
+ mislead mislead
+ misleader mislead
+ misleaders mislead
+ misleading mislead
+ misled misl
+ mislike mislik
+ misord misord
+ misplac misplac
+ misplaced misplac
+ misplaces misplac
+ mispris mispri
+ misprised mispris
+ misprision mispris
+ misprizing mispriz
+ misproud misproud
+ misquote misquot
+ misreport misreport
+ miss miss
+ missed miss
+ misses miss
+ misshap misshap
+ misshapen misshapen
+ missheathed missheath
+ missing miss
+ missingly missingli
+ missions mission
+ missive missiv
+ missives missiv
+ misspoke misspok
+ mist mist
+ mista mista
+ mistak mistak
+ mistake mistak
+ mistaken mistaken
+ mistakes mistak
+ mistaketh mistaketh
+ mistaking mistak
+ mistakings mistak
+ mistemp mistemp
+ mistempered mistemp
+ misterm misterm
+ mistful mist
+ misthink misthink
+ misthought misthought
+ mistletoe mistleto
+ mistook mistook
+ mistreadings mistread
+ mistress mistress
+ mistresses mistress
+ mistresss mistresss
+ mistriship mistriship
+ mistrust mistrust
+ mistrusted mistrust
+ mistrustful mistrust
+ mistrusting mistrust
+ mists mist
+ misty misti
+ misus misu
+ misuse misus
+ misused misus
+ misuses misus
+ mites mite
+ mithridates mithrid
+ mitigate mitig
+ mitigation mitig
+ mix mix
+ mixed mix
+ mixture mixtur
+ mixtures mixtur
+ mm mm
+ mnd mnd
+ moan moan
+ moans moan
+ moat moat
+ moated moat
+ mobled mobl
+ mock mock
+ mockable mockabl
+ mocker mocker
+ mockeries mockeri
+ mockers mocker
+ mockery mockeri
+ mocking mock
+ mocks mock
+ mockvater mockvat
+ mockwater mockwat
+ model model
+ modena modena
+ moderate moder
+ moderately moder
+ moderation moder
+ modern modern
+ modest modest
+ modesties modesti
+ modestly modestli
+ modesty modesti
+ modicums modicum
+ modo modo
+ module modul
+ moe moe
+ moi moi
+ moiety moieti
+ moist moist
+ moisten moisten
+ moisture moistur
+ moldwarp moldwarp
+ mole mole
+ molehill molehil
+ moles mole
+ molest molest
+ molestation molest
+ mollification mollif
+ mollis molli
+ molten molten
+ molto molto
+ mome mome
+ moment moment
+ momentary momentari
+ moming mome
+ mon mon
+ monachum monachum
+ monarch monarch
+ monarchies monarchi
+ monarchize monarch
+ monarcho monarcho
+ monarchs monarch
+ monarchy monarchi
+ monast monast
+ monastery monasteri
+ monastic monast
+ monday mondai
+ monde mond
+ money monei
+ moneys monei
+ mong mong
+ monger monger
+ mongers monger
+ monging mong
+ mongrel mongrel
+ mongrels mongrel
+ mongst mongst
+ monk monk
+ monkey monkei
+ monkeys monkei
+ monks monk
+ monmouth monmouth
+ monopoly monopoli
+ mons mon
+ monsieur monsieur
+ monsieurs monsieur
+ monster monster
+ monsters monster
+ monstrous monstrou
+ monstrously monstrous
+ monstrousness monstrous
+ monstruosity monstruos
+ montacute montacut
+ montage montag
+ montague montagu
+ montagues montagu
+ montano montano
+ montant montant
+ montez montez
+ montferrat montferrat
+ montgomery montgomeri
+ month month
+ monthly monthli
+ months month
+ montjoy montjoi
+ monument monument
+ monumental monument
+ monuments monument
+ mood mood
+ moods mood
+ moody moodi
+ moon moon
+ moonbeams moonbeam
+ moonish moonish
+ moonlight moonlight
+ moons moon
+ moonshine moonshin
+ moonshines moonshin
+ moor moor
+ moorfields moorfield
+ moors moor
+ moorship moorship
+ mop mop
+ mope mope
+ moping mope
+ mopping mop
+ mopsa mopsa
+ moral moral
+ moraler moral
+ morality moral
+ moralize moral
+ mordake mordak
+ more more
+ moreover moreov
+ mores more
+ morgan morgan
+ mori mori
+ morisco morisco
+ morn morn
+ morning morn
+ mornings morn
+ morocco morocco
+ morris morri
+ morrow morrow
+ morrows morrow
+ morsel morsel
+ morsels morsel
+ mort mort
+ mortal mortal
+ mortality mortal
+ mortally mortal
+ mortals mortal
+ mortar mortar
+ mortgaged mortgag
+ mortified mortifi
+ mortifying mortifi
+ mortimer mortim
+ mortimers mortim
+ mortis morti
+ mortise mortis
+ morton morton
+ mose mose
+ moss moss
+ mossgrown mossgrown
+ most most
+ mote mote
+ moth moth
+ mother mother
+ mothers mother
+ moths moth
+ motion motion
+ motionless motionless
+ motions motion
+ motive motiv
+ motives motiv
+ motley motlei
+ mots mot
+ mought mought
+ mould mould
+ moulded mould
+ mouldeth mouldeth
+ moulds mould
+ mouldy mouldi
+ moult moult
+ moulten moulten
+ mounch mounch
+ mounseur mounseur
+ mounsieur mounsieur
+ mount mount
+ mountain mountain
+ mountaineer mountain
+ mountaineers mountain
+ mountainous mountain
+ mountains mountain
+ mountant mountant
+ mountanto mountanto
+ mountebank mountebank
+ mountebanks mountebank
+ mounted mount
+ mounteth mounteth
+ mounting mount
+ mounts mount
+ mourn mourn
+ mourned mourn
+ mourner mourner
+ mourners mourner
+ mournful mourn
+ mournfully mournfulli
+ mourning mourn
+ mourningly mourningli
+ mournings mourn
+ mourns mourn
+ mous mou
+ mouse mous
+ mousetrap mousetrap
+ mousing mous
+ mouth mouth
+ mouthed mouth
+ mouths mouth
+ mov mov
+ movables movabl
+ move move
+ moveable moveabl
+ moveables moveabl
+ moved move
+ mover mover
+ movers mover
+ moves move
+ moveth moveth
+ moving move
+ movingly movingli
+ movousus movousu
+ mow mow
+ mowbray mowbrai
+ mower mower
+ mowing mow
+ mows mow
+ moy moi
+ moys moi
+ moyses moys
+ mrs mr
+ much much
+ muck muck
+ mud mud
+ mudded mud
+ muddied muddi
+ muddy muddi
+ muffins muffin
+ muffl muffl
+ muffle muffl
+ muffled muffl
+ muffler muffler
+ muffling muffl
+ mugger mugger
+ mugs mug
+ mulberries mulberri
+ mulberry mulberri
+ mule mule
+ mules mule
+ muleteers mulet
+ mulier mulier
+ mulieres mulier
+ muliteus muliteu
+ mull mull
+ mulmutius mulmutiu
+ multiplied multipli
+ multiply multipli
+ multiplying multipli
+ multipotent multipot
+ multitude multitud
+ multitudes multitud
+ multitudinous multitudin
+ mum mum
+ mumble mumbl
+ mumbling mumbl
+ mummers mummer
+ mummy mummi
+ mun mun
+ munch munch
+ muniments muniment
+ munition munit
+ murd murd
+ murder murder
+ murdered murder
+ murderer murder
+ murderers murder
+ murdering murder
+ murderous murder
+ murders murder
+ mure mure
+ murk murk
+ murkiest murkiest
+ murky murki
+ murmur murmur
+ murmurers murmur
+ murmuring murmur
+ murrain murrain
+ murray murrai
+ murrion murrion
+ murther murther
+ murtherer murther
+ murtherers murther
+ murthering murther
+ murtherous murther
+ murthers murther
+ mus mu
+ muscadel muscadel
+ muscovites muscovit
+ muscovits muscovit
+ muscovy muscovi
+ muse muse
+ muses muse
+ mush mush
+ mushrooms mushroom
+ music music
+ musical music
+ musician musician
+ musicians musician
+ musics music
+ musing muse
+ musings muse
+ musk musk
+ musket musket
+ muskets musket
+ muskos musko
+ muss muss
+ mussel mussel
+ mussels mussel
+ must must
+ mustachio mustachio
+ mustard mustard
+ mustardseed mustardse
+ muster muster
+ mustering muster
+ musters muster
+ musty musti
+ mutability mutabl
+ mutable mutabl
+ mutation mutat
+ mutations mutat
+ mute mute
+ mutes mute
+ mutest mutest
+ mutine mutin
+ mutineer mutin
+ mutineers mutin
+ mutines mutin
+ mutinies mutini
+ mutinous mutin
+ mutiny mutini
+ mutius mutiu
+ mutter mutter
+ muttered mutter
+ mutton mutton
+ muttons mutton
+ mutual mutual
+ mutualities mutual
+ mutually mutual
+ muzzl muzzl
+ muzzle muzzl
+ muzzled muzzl
+ mv mv
+ mww mww
+ my my
+ mynheers mynheer
+ myrmidon myrmidon
+ myrmidons myrmidon
+ myrtle myrtl
+ myself myself
+ myst myst
+ mysteries mysteri
+ mystery mysteri
+ n n
+ nag nag
+ nage nage
+ nags nag
+ naiads naiad
+ nail nail
+ nails nail
+ nak nak
+ naked nake
+ nakedness naked
+ nal nal
+ nam nam
+ name name
+ named name
+ nameless nameless
+ namely name
+ names name
+ namest namest
+ naming name
+ nan nan
+ nance nanc
+ nap nap
+ nape nape
+ napes nape
+ napkin napkin
+ napkins napkin
+ naples napl
+ napless napless
+ napping nap
+ naps nap
+ narbon narbon
+ narcissus narcissu
+ narines narin
+ narrow narrow
+ narrowly narrowli
+ naso naso
+ nasty nasti
+ nathaniel nathaniel
+ natifs natif
+ nation nation
+ nations nation
+ native nativ
+ nativity nativ
+ natur natur
+ natural natur
+ naturalize natur
+ naturally natur
+ nature natur
+ natured natur
+ natures natur
+ natus natu
+ naught naught
+ naughtily naughtili
+ naughty naughti
+ navarre navarr
+ nave nave
+ navel navel
+ navigation navig
+ navy navi
+ nay nai
+ nayward nayward
+ nayword nayword
+ nazarite nazarit
+ ne ne
+ neaf neaf
+ neamnoins neamnoin
+ neanmoins neanmoin
+ neapolitan neapolitan
+ neapolitans neapolitan
+ near near
+ nearer nearer
+ nearest nearest
+ nearly nearli
+ nearness near
+ neat neat
+ neatly neatli
+ neb neb
+ nebour nebour
+ nebuchadnezzar nebuchadnezzar
+ nec nec
+ necessaries necessari
+ necessarily necessarili
+ necessary necessari
+ necessitied necess
+ necessities necess
+ necessity necess
+ neck neck
+ necklace necklac
+ necks neck
+ nectar nectar
+ ned ned
+ nedar nedar
+ need need
+ needed need
+ needer needer
+ needful need
+ needfull needful
+ needing need
+ needle needl
+ needles needl
+ needless needless
+ needly needli
+ needs need
+ needy needi
+ neer neer
+ neeze neez
+ nefas nefa
+ negation negat
+ negative neg
+ negatives neg
+ neglect neglect
+ neglected neglect
+ neglecting neglect
+ neglectingly neglectingli
+ neglection neglect
+ negligence neglig
+ negligent neglig
+ negotiate negoti
+ negotiations negoti
+ negro negro
+ neigh neigh
+ neighbors neighbor
+ neighbour neighbour
+ neighbourhood neighbourhood
+ neighbouring neighbour
+ neighbourly neighbourli
+ neighbours neighbour
+ neighing neigh
+ neighs neigh
+ neither neither
+ nell nell
+ nemean nemean
+ nemesis nemesi
+ neoptolemus neoptolemu
+ nephew nephew
+ nephews nephew
+ neptune neptun
+ ner ner
+ nereides nereid
+ nerissa nerissa
+ nero nero
+ neroes nero
+ ners ner
+ nerve nerv
+ nerves nerv
+ nervii nervii
+ nervy nervi
+ nessus nessu
+ nest nest
+ nestor nestor
+ nests nest
+ net net
+ nether nether
+ netherlands netherland
+ nets net
+ nettle nettl
+ nettled nettl
+ nettles nettl
+ neuter neuter
+ neutral neutral
+ nev nev
+ never never
+ nevil nevil
+ nevils nevil
+ new new
+ newborn newborn
+ newer newer
+ newest newest
+ newgate newgat
+ newly newli
+ newness new
+ news new
+ newsmongers newsmong
+ newt newt
+ newts newt
+ next next
+ nibbling nibbl
+ nicanor nicanor
+ nice nice
+ nicely nice
+ niceness nice
+ nicer nicer
+ nicety niceti
+ nicholas nichola
+ nick nick
+ nickname nicknam
+ nicks nick
+ niece niec
+ nieces niec
+ niggard niggard
+ niggarding niggard
+ niggardly niggardli
+ nigh nigh
+ night night
+ nightcap nightcap
+ nightcaps nightcap
+ nighted night
+ nightgown nightgown
+ nightingale nightingal
+ nightingales nightingal
+ nightly nightli
+ nightmare nightmar
+ nights night
+ nightwork nightwork
+ nihil nihil
+ nile nile
+ nill nill
+ nilus nilu
+ nimble nimbl
+ nimbleness nimbl
+ nimbler nimbler
+ nimbly nimbl
+ nine nine
+ nineteen nineteen
+ ning ning
+ ningly ningli
+ ninny ninni
+ ninth ninth
+ ninus ninu
+ niobe niob
+ niobes niob
+ nip nip
+ nipp nipp
+ nipping nip
+ nipple nippl
+ nips nip
+ nit nit
+ nly nly
+ nnight nnight
+ nnights nnight
+ no no
+ noah noah
+ nob nob
+ nobility nobil
+ nobis nobi
+ noble nobl
+ nobleman nobleman
+ noblemen noblemen
+ nobleness nobl
+ nobler nobler
+ nobles nobl
+ noblesse nobless
+ noblest noblest
+ nobly nobli
+ nobody nobodi
+ noces noce
+ nod nod
+ nodded nod
+ nodding nod
+ noddle noddl
+ noddles noddl
+ noddy noddi
+ nods nod
+ noes noe
+ nointed noint
+ nois noi
+ noise nois
+ noiseless noiseless
+ noisemaker noisemak
+ noises nois
+ noisome noisom
+ nole nole
+ nominate nomin
+ nominated nomin
+ nomination nomin
+ nominativo nominativo
+ non non
+ nonage nonag
+ nonce nonc
+ none none
+ nonino nonino
+ nonny nonni
+ nonpareil nonpareil
+ nonsuits nonsuit
+ nony noni
+ nook nook
+ nooks nook
+ noon noon
+ noonday noondai
+ noontide noontid
+ nor nor
+ norbery norberi
+ norfolk norfolk
+ norman norman
+ normandy normandi
+ normans norman
+ north north
+ northampton northampton
+ northamptonshire northamptonshir
+ northerly northerli
+ northern northern
+ northgate northgat
+ northumberland northumberland
+ northumberlands northumberland
+ northward northward
+ norway norwai
+ norways norwai
+ norwegian norwegian
+ norweyan norweyan
+ nos no
+ nose nose
+ nosegays nosegai
+ noseless noseless
+ noses nose
+ noster noster
+ nostra nostra
+ nostril nostril
+ nostrils nostril
+ not not
+ notable notabl
+ notably notabl
+ notary notari
+ notch notch
+ note note
+ notebook notebook
+ noted note
+ notedly notedli
+ notes note
+ notest notest
+ noteworthy noteworthi
+ nothing noth
+ nothings noth
+ notice notic
+ notify notifi
+ noting note
+ notion notion
+ notorious notori
+ notoriously notori
+ notre notr
+ notwithstanding notwithstand
+ nought nought
+ noun noun
+ nouns noun
+ nourish nourish
+ nourished nourish
+ nourisher nourish
+ nourishes nourish
+ nourisheth nourisheth
+ nourishing nourish
+ nourishment nourish
+ nous nou
+ novel novel
+ novelties novelti
+ novelty novelti
+ noverbs noverb
+ novi novi
+ novice novic
+ novices novic
+ novum novum
+ now now
+ nowhere nowher
+ noyance noyanc
+ ns ns
+ nt nt
+ nubibus nubibu
+ numa numa
+ numb numb
+ number number
+ numbered number
+ numbering number
+ numberless numberless
+ numbers number
+ numbness numb
+ nun nun
+ nuncio nuncio
+ nuncle nuncl
+ nunnery nunneri
+ nuns nun
+ nuntius nuntiu
+ nuptial nuptial
+ nurs nur
+ nurse nurs
+ nursed nurs
+ nurser nurser
+ nursery nurseri
+ nurses nurs
+ nurseth nurseth
+ nursh nursh
+ nursing nurs
+ nurtur nurtur
+ nurture nurtur
+ nut nut
+ nuthook nuthook
+ nutmeg nutmeg
+ nutmegs nutmeg
+ nutriment nutriment
+ nuts nut
+ nutshell nutshel
+ ny ny
+ nym nym
+ nymph nymph
+ nymphs nymph
+ o o
+ oak oak
+ oaken oaken
+ oaks oak
+ oared oar
+ oars oar
+ oatcake oatcak
+ oaten oaten
+ oath oath
+ oathable oathabl
+ oaths oath
+ oats oat
+ ob ob
+ obduracy obduraci
+ obdurate obdur
+ obedience obedi
+ obedient obedi
+ obeisance obeis
+ oberon oberon
+ obey obei
+ obeyed obei
+ obeying obei
+ obeys obei
+ obidicut obidicut
+ object object
+ objected object
+ objections object
+ objects object
+ oblation oblat
+ oblations oblat
+ obligation oblig
+ obligations oblig
+ obliged oblig
+ oblique obliqu
+ oblivion oblivion
+ oblivious oblivi
+ obloquy obloqui
+ obscene obscen
+ obscenely obscen
+ obscur obscur
+ obscure obscur
+ obscured obscur
+ obscurely obscur
+ obscures obscur
+ obscuring obscur
+ obscurity obscur
+ obsequies obsequi
+ obsequious obsequi
+ obsequiously obsequi
+ observ observ
+ observance observ
+ observances observ
+ observancy observ
+ observant observ
+ observants observ
+ observation observ
+ observe observ
+ observed observ
+ observer observ
+ observers observ
+ observing observ
+ observingly observingli
+ obsque obsqu
+ obstacle obstacl
+ obstacles obstacl
+ obstinacy obstinaci
+ obstinate obstin
+ obstinately obstin
+ obstruct obstruct
+ obstruction obstruct
+ obstructions obstruct
+ obtain obtain
+ obtained obtain
+ obtaining obtain
+ occasion occas
+ occasions occas
+ occident occid
+ occidental occident
+ occulted occult
+ occupat occupat
+ occupation occup
+ occupations occup
+ occupied occupi
+ occupies occupi
+ occupy occupi
+ occurrence occurr
+ occurrences occurr
+ occurrents occurr
+ ocean ocean
+ oceans ocean
+ octavia octavia
+ octavius octaviu
+ ocular ocular
+ od od
+ odd odd
+ oddest oddest
+ oddly oddli
+ odds odd
+ ode od
+ odes od
+ odious odiou
+ odoriferous odorifer
+ odorous odor
+ odour odour
+ odours odour
+ ods od
+ oeillades oeillad
+ oes oe
+ oeuvres oeuvr
+ of of
+ ofephesus ofephesu
+ off off
+ offal offal
+ offence offenc
+ offenceful offenc
+ offences offenc
+ offend offend
+ offended offend
+ offendendo offendendo
+ offender offend
+ offenders offend
+ offendeth offendeth
+ offending offend
+ offendress offendress
+ offends offend
+ offense offens
+ offenseless offenseless
+ offenses offens
+ offensive offens
+ offer offer
+ offered offer
+ offering offer
+ offerings offer
+ offers offer
+ offert offert
+ offic offic
+ office offic
+ officed offic
+ officer offic
+ officers offic
+ offices offic
+ official offici
+ officious offici
+ offspring offspr
+ oft oft
+ often often
+ oftener often
+ oftentimes oftentim
+ oh oh
+ oil oil
+ oils oil
+ oily oili
+ old old
+ oldcastle oldcastl
+ olden olden
+ older older
+ oldest oldest
+ oldness old
+ olive oliv
+ oliver oliv
+ olivers oliv
+ olives oliv
+ olivia olivia
+ olympian olympian
+ olympus olympu
+ oman oman
+ omans oman
+ omen omen
+ ominous omin
+ omission omiss
+ omit omit
+ omittance omitt
+ omitted omit
+ omitting omit
+ omne omn
+ omnes omn
+ omnipotent omnipot
+ on on
+ once onc
+ one on
+ ones on
+ oneyers oney
+ ongles ongl
+ onion onion
+ onions onion
+ only onli
+ onset onset
+ onward onward
+ onwards onward
+ oo oo
+ ooze ooz
+ oozes ooz
+ oozy oozi
+ op op
+ opal opal
+ ope op
+ open open
+ opener open
+ opening open
+ openly openli
+ openness open
+ opens open
+ operant oper
+ operate oper
+ operation oper
+ operations oper
+ operative oper
+ opes op
+ oph oph
+ ophelia ophelia
+ opinion opinion
+ opinions opinion
+ opportune opportun
+ opportunities opportun
+ opportunity opportun
+ oppos oppo
+ oppose oppos
+ opposed oppos
+ opposeless opposeless
+ opposer oppos
+ opposers oppos
+ opposes oppos
+ opposing oppos
+ opposite opposit
+ opposites opposit
+ opposition opposit
+ oppositions opposit
+ oppress oppress
+ oppressed oppress
+ oppresses oppress
+ oppresseth oppresseth
+ oppressing oppress
+ oppression oppress
+ oppressor oppressor
+ opprest opprest
+ opprobriously opprobri
+ oppugnancy oppugn
+ opulency opul
+ opulent opul
+ or or
+ oracle oracl
+ oracles oracl
+ orange orang
+ oration orat
+ orator orat
+ orators orat
+ oratory oratori
+ orb orb
+ orbed orb
+ orbs orb
+ orchard orchard
+ orchards orchard
+ ord ord
+ ordain ordain
+ ordained ordain
+ ordaining ordain
+ order order
+ ordered order
+ ordering order
+ orderless orderless
+ orderly orderli
+ orders order
+ ordinance ordin
+ ordinant ordin
+ ordinaries ordinari
+ ordinary ordinari
+ ordnance ordnanc
+ ords ord
+ ordure ordur
+ ore or
+ organ organ
+ organs organ
+ orgillous orgil
+ orient orient
+ orifex orifex
+ origin origin
+ original origin
+ orisons orison
+ ork ork
+ orlando orlando
+ orld orld
+ orleans orlean
+ ornament ornament
+ ornaments ornament
+ orodes orod
+ orphan orphan
+ orphans orphan
+ orpheus orpheu
+ orsino orsino
+ ort ort
+ orthography orthographi
+ orts ort
+ oscorbidulchos oscorbidulcho
+ osier osier
+ osiers osier
+ osprey osprei
+ osr osr
+ osric osric
+ ossa ossa
+ ost ost
+ ostent ostent
+ ostentare ostentar
+ ostentation ostent
+ ostents ostent
+ ostler ostler
+ ostlers ostler
+ ostrich ostrich
+ osw osw
+ oswald oswald
+ othello othello
+ other other
+ othergates otherg
+ others other
+ otherwhere otherwher
+ otherwhiles otherwhil
+ otherwise otherwis
+ otter otter
+ ottoman ottoman
+ ottomites ottomit
+ oublie oubli
+ ouches ouch
+ ought ought
+ oui oui
+ ounce ounc
+ ounces ounc
+ ouphes ouph
+ our our
+ ours our
+ ourself ourself
+ ourselves ourselv
+ ousel ousel
+ out out
+ outbids outbid
+ outbrave outbrav
+ outbraves outbrav
+ outbreak outbreak
+ outcast outcast
+ outcries outcri
+ outcry outcri
+ outdar outdar
+ outdare outdar
+ outdares outdar
+ outdone outdon
+ outfac outfac
+ outface outfac
+ outfaced outfac
+ outfacing outfac
+ outfly outfli
+ outfrown outfrown
+ outgo outgo
+ outgoes outgo
+ outgrown outgrown
+ outjest outjest
+ outlaw outlaw
+ outlawry outlawri
+ outlaws outlaw
+ outliv outliv
+ outlive outliv
+ outlives outliv
+ outliving outliv
+ outlook outlook
+ outlustres outlustr
+ outpriz outpriz
+ outrage outrag
+ outrageous outrag
+ outrages outrag
+ outran outran
+ outright outright
+ outroar outroar
+ outrun outrun
+ outrunning outrun
+ outruns outrun
+ outscold outscold
+ outscorn outscorn
+ outsell outsel
+ outsells outsel
+ outside outsid
+ outsides outsid
+ outspeaks outspeak
+ outsport outsport
+ outstare outstar
+ outstay outstai
+ outstood outstood
+ outstretch outstretch
+ outstretched outstretch
+ outstrike outstrik
+ outstrip outstrip
+ outstripped outstrip
+ outswear outswear
+ outvenoms outvenom
+ outward outward
+ outwardly outwardli
+ outwards outward
+ outwear outwear
+ outweighs outweigh
+ outwent outwent
+ outworn outworn
+ outworths outworth
+ oven oven
+ over over
+ overawe overaw
+ overbear overbear
+ overblown overblown
+ overboard overboard
+ overbold overbold
+ overborne overborn
+ overbulk overbulk
+ overbuys overbui
+ overcame overcam
+ overcast overcast
+ overcharg overcharg
+ overcharged overcharg
+ overcome overcom
+ overcomes overcom
+ overdone overdon
+ overearnest overearnest
+ overfar overfar
+ overflow overflow
+ overflown overflown
+ overglance overgl
+ overgo overgo
+ overgone overgon
+ overgorg overgorg
+ overgrown overgrown
+ overhead overhead
+ overhear overhear
+ overheard overheard
+ overhold overhold
+ overjoyed overjoi
+ overkind overkind
+ overland overland
+ overleather overleath
+ overlive overl
+ overlook overlook
+ overlooking overlook
+ overlooks overlook
+ overmaster overmast
+ overmounting overmount
+ overmuch overmuch
+ overpass overpass
+ overpeer overp
+ overpeering overp
+ overplus overplu
+ overrul overrul
+ overrun overrun
+ overscutch overscutch
+ overset overset
+ overshades overshad
+ overshine overshin
+ overshines overshin
+ overshot overshot
+ oversights oversight
+ overspread overspread
+ overstain overstain
+ overswear overswear
+ overt overt
+ overta overta
+ overtake overtak
+ overtaketh overtaketh
+ overthrow overthrow
+ overthrown overthrown
+ overthrows overthrow
+ overtook overtook
+ overtopp overtopp
+ overture overtur
+ overturn overturn
+ overwatch overwatch
+ overween overween
+ overweening overween
+ overweigh overweigh
+ overwhelm overwhelm
+ overwhelming overwhelm
+ overworn overworn
+ ovid ovid
+ ovidius ovidiu
+ ow ow
+ owe ow
+ owed ow
+ owedst owedst
+ owen owen
+ owes ow
+ owest owest
+ oweth oweth
+ owing ow
+ owl owl
+ owls owl
+ own own
+ owner owner
+ owners owner
+ owning own
+ owns own
+ owy owi
+ ox ox
+ oxen oxen
+ oxford oxford
+ oxfordshire oxfordshir
+ oxlips oxlip
+ oyes oy
+ oyster oyster
+ p p
+ pabble pabbl
+ pabylon pabylon
+ pac pac
+ pace pace
+ paced pace
+ paces pace
+ pacified pacifi
+ pacify pacifi
+ pacing pace
+ pack pack
+ packet packet
+ packets packet
+ packhorses packhors
+ packing pack
+ packings pack
+ packs pack
+ packthread packthread
+ pacorus pacoru
+ paction paction
+ pad pad
+ paddle paddl
+ paddling paddl
+ paddock paddock
+ padua padua
+ pagan pagan
+ pagans pagan
+ page page
+ pageant pageant
+ pageants pageant
+ pages page
+ pah pah
+ paid paid
+ pail pail
+ pailfuls pail
+ pails pail
+ pain pain
+ pained pain
+ painful pain
+ painfully painfulli
+ pains pain
+ paint paint
+ painted paint
+ painter painter
+ painting paint
+ paintings paint
+ paints paint
+ pair pair
+ paired pair
+ pairs pair
+ pajock pajock
+ pal pal
+ palabras palabra
+ palace palac
+ palaces palac
+ palamedes palamed
+ palate palat
+ palates palat
+ palatine palatin
+ palating palat
+ pale pale
+ paled pale
+ paleness pale
+ paler paler
+ pales pale
+ palestine palestin
+ palfrey palfrei
+ palfreys palfrei
+ palisadoes palisado
+ pall pall
+ pallabris pallabri
+ pallas palla
+ pallets pallet
+ palm palm
+ palmer palmer
+ palmers palmer
+ palms palm
+ palmy palmi
+ palpable palpabl
+ palsied palsi
+ palsies palsi
+ palsy palsi
+ palt palt
+ palter palter
+ paltry paltri
+ paly pali
+ pamp pamp
+ pamper pamper
+ pamphlets pamphlet
+ pan pan
+ pancackes pancack
+ pancake pancak
+ pancakes pancak
+ pandar pandar
+ pandars pandar
+ pandarus pandaru
+ pander pander
+ panderly panderli
+ panders pander
+ pandulph pandulph
+ panel panel
+ pang pang
+ panging pang
+ pangs pang
+ pannier pannier
+ pannonians pannonian
+ pansa pansa
+ pansies pansi
+ pant pant
+ pantaloon pantaloon
+ panted pant
+ pantheon pantheon
+ panther panther
+ panthino panthino
+ panting pant
+ pantingly pantingli
+ pantler pantler
+ pantry pantri
+ pants pant
+ pap pap
+ papal papal
+ paper paper
+ papers paper
+ paphlagonia paphlagonia
+ paphos papho
+ papist papist
+ paps pap
+ par par
+ parable parabl
+ paracelsus paracelsu
+ paradise paradis
+ paradox paradox
+ paradoxes paradox
+ paragon paragon
+ paragons paragon
+ parallel parallel
+ parallels parallel
+ paramour paramour
+ paramours paramour
+ parapets parapet
+ paraquito paraquito
+ parasite parasit
+ parasites parasit
+ parca parca
+ parcel parcel
+ parcell parcel
+ parcels parcel
+ parch parch
+ parched parch
+ parching parch
+ parchment parchment
+ pard pard
+ pardon pardon
+ pardona pardona
+ pardoned pardon
+ pardoner pardon
+ pardoning pardon
+ pardonne pardonn
+ pardonner pardonn
+ pardonnez pardonnez
+ pardons pardon
+ pare pare
+ pared pare
+ parel parel
+ parent parent
+ parentage parentag
+ parents parent
+ parfect parfect
+ paring pare
+ parings pare
+ paris pari
+ parish parish
+ parishioners parishion
+ parisians parisian
+ paritors paritor
+ park park
+ parks park
+ parle parl
+ parler parler
+ parles parl
+ parley parlei
+ parlez parlez
+ parliament parliament
+ parlors parlor
+ parlour parlour
+ parlous parlou
+ parmacity parmac
+ parolles parol
+ parricide parricid
+ parricides parricid
+ parrot parrot
+ parrots parrot
+ parsley parslei
+ parson parson
+ part part
+ partake partak
+ partaken partaken
+ partaker partak
+ partakers partak
+ parted part
+ parthia parthia
+ parthian parthian
+ parthians parthian
+ parti parti
+ partial partial
+ partialize partial
+ partially partial
+ participate particip
+ participation particip
+ particle particl
+ particular particular
+ particularities particular
+ particularize particular
+ particularly particularli
+ particulars particular
+ parties parti
+ parting part
+ partisan partisan
+ partisans partisan
+ partition partit
+ partizan partizan
+ partlet partlet
+ partly partli
+ partner partner
+ partners partner
+ partridge partridg
+ parts part
+ party parti
+ pas pa
+ pash pash
+ pashed pash
+ pashful pash
+ pass pass
+ passable passabl
+ passado passado
+ passage passag
+ passages passag
+ passant passant
+ passed pass
+ passenger passeng
+ passengers passeng
+ passes pass
+ passeth passeth
+ passing pass
+ passio passio
+ passion passion
+ passionate passion
+ passioning passion
+ passions passion
+ passive passiv
+ passport passport
+ passy passi
+ past past
+ paste past
+ pasterns pastern
+ pasties pasti
+ pastime pastim
+ pastimes pastim
+ pastoral pastor
+ pastorals pastor
+ pastors pastor
+ pastry pastri
+ pasture pastur
+ pastures pastur
+ pasty pasti
+ pat pat
+ patay patai
+ patch patch
+ patchery patcheri
+ patches patch
+ pate pate
+ pated pate
+ patent patent
+ patents patent
+ paternal patern
+ pates pate
+ path path
+ pathetical pathet
+ paths path
+ pathway pathwai
+ pathways pathwai
+ patience patienc
+ patient patient
+ patiently patient
+ patients patient
+ patines patin
+ patrician patrician
+ patricians patrician
+ patrick patrick
+ patrimony patrimoni
+ patroclus patroclu
+ patron patron
+ patronage patronag
+ patroness patro
+ patrons patron
+ patrum patrum
+ patter patter
+ pattern pattern
+ patterns pattern
+ pattle pattl
+ pauca pauca
+ paucas pauca
+ paul paul
+ paulina paulina
+ paunch paunch
+ paunches paunch
+ pause paus
+ pauser pauser
+ pauses paus
+ pausingly pausingli
+ pauvres pauvr
+ pav pav
+ paved pave
+ pavement pavement
+ pavilion pavilion
+ pavilions pavilion
+ pavin pavin
+ paw paw
+ pawn pawn
+ pawns pawn
+ paws paw
+ pax pax
+ pay pai
+ payest payest
+ paying pai
+ payment payment
+ payments payment
+ pays pai
+ paysan paysan
+ paysans paysan
+ pe pe
+ peace peac
+ peaceable peaceabl
+ peaceably peaceabl
+ peaceful peac
+ peacemakers peacemak
+ peaces peac
+ peach peach
+ peaches peach
+ peacock peacock
+ peacocks peacock
+ peak peak
+ peaking peak
+ peal peal
+ peals peal
+ pear pear
+ peard peard
+ pearl pearl
+ pearls pearl
+ pears pear
+ peas pea
+ peasant peasant
+ peasantry peasantri
+ peasants peasant
+ peascod peascod
+ pease peas
+ peaseblossom peaseblossom
+ peat peat
+ peaten peaten
+ peating peat
+ pebble pebbl
+ pebbled pebbl
+ pebbles pebbl
+ peck peck
+ pecks peck
+ peculiar peculiar
+ pecus pecu
+ pedant pedant
+ pedantical pedant
+ pedascule pedascul
+ pede pede
+ pedestal pedest
+ pedigree pedigre
+ pedlar pedlar
+ pedlars pedlar
+ pedro pedro
+ peds ped
+ peel peel
+ peep peep
+ peeped peep
+ peeping peep
+ peeps peep
+ peer peer
+ peereth peereth
+ peering peer
+ peerless peerless
+ peers peer
+ peesel peesel
+ peevish peevish
+ peevishly peevishli
+ peflur peflur
+ peg peg
+ pegasus pegasu
+ pegs peg
+ peise peis
+ peised peis
+ peize peiz
+ pelf pelf
+ pelican pelican
+ pelion pelion
+ pell pell
+ pella pella
+ pelleted pellet
+ peloponnesus peloponnesu
+ pelt pelt
+ pelting pelt
+ pembroke pembrok
+ pen pen
+ penalties penalti
+ penalty penalti
+ penance penanc
+ pence penc
+ pencil pencil
+ pencill pencil
+ pencils pencil
+ pendant pendant
+ pendent pendent
+ pendragon pendragon
+ pendulous pendul
+ penelope penelop
+ penetrable penetr
+ penetrate penetr
+ penetrative penetr
+ penitence penit
+ penitent penit
+ penitential penitenti
+ penitently penit
+ penitents penit
+ penker penker
+ penknife penknif
+ penn penn
+ penned pen
+ penning pen
+ pennons pennon
+ penny penni
+ pennyworth pennyworth
+ pennyworths pennyworth
+ pens pen
+ pense pens
+ pension pension
+ pensioners pension
+ pensive pensiv
+ pensived pensiv
+ pensively pensiv
+ pent pent
+ pentecost pentecost
+ penthesilea penthesilea
+ penthouse penthous
+ penurious penuri
+ penury penuri
+ peopl peopl
+ people peopl
+ peopled peopl
+ peoples peopl
+ pepin pepin
+ pepper pepper
+ peppercorn peppercorn
+ peppered pepper
+ per per
+ peradventure peradventur
+ peradventures peradventur
+ perceiv perceiv
+ perceive perceiv
+ perceived perceiv
+ perceives perceiv
+ perceiveth perceiveth
+ perch perch
+ perchance perchanc
+ percies perci
+ percussion percuss
+ percy perci
+ perdie perdi
+ perdita perdita
+ perdition perdit
+ perdonato perdonato
+ perdu perdu
+ perdurable perdur
+ perdurably perdur
+ perdy perdi
+ pere pere
+ peregrinate peregrin
+ peremptorily peremptorili
+ peremptory peremptori
+ perfect perfect
+ perfected perfect
+ perfecter perfect
+ perfectest perfectest
+ perfection perfect
+ perfections perfect
+ perfectly perfectli
+ perfectness perfect
+ perfidious perfidi
+ perfidiously perfidi
+ perforce perforc
+ perform perform
+ performance perform
+ performances perform
+ performed perform
+ performer perform
+ performers perform
+ performing perform
+ performs perform
+ perfum perfum
+ perfume perfum
+ perfumed perfum
+ perfumer perfum
+ perfumes perfum
+ perge perg
+ perhaps perhap
+ periapts periapt
+ perigort perigort
+ perigouna perigouna
+ peril peril
+ perilous peril
+ perils peril
+ period period
+ periods period
+ perish perish
+ perished perish
+ perishest perishest
+ perisheth perisheth
+ perishing perish
+ periwig periwig
+ perjur perjur
+ perjure perjur
+ perjured perjur
+ perjuries perjuri
+ perjury perjuri
+ perk perk
+ perkes perk
+ permafoy permafoi
+ permanent perman
+ permission permiss
+ permissive permiss
+ permit permit
+ permitted permit
+ pernicious pernici
+ perniciously pernici
+ peroration peror
+ perpend perpend
+ perpendicular perpendicular
+ perpendicularly perpendicularli
+ perpetual perpetu
+ perpetually perpetu
+ perpetuity perpetu
+ perplex perplex
+ perplexed perplex
+ perplexity perplex
+ pers per
+ persecuted persecut
+ persecutions persecut
+ persecutor persecutor
+ perseus perseu
+ persever persev
+ perseverance persever
+ persevers persev
+ persia persia
+ persian persian
+ persist persist
+ persisted persist
+ persistency persist
+ persistive persist
+ persists persist
+ person person
+ personae persona
+ personage personag
+ personages personag
+ personal person
+ personally person
+ personate person
+ personated person
+ personates person
+ personating person
+ persons person
+ perspective perspect
+ perspectively perspect
+ perspectives perspect
+ perspicuous perspicu
+ persuade persuad
+ persuaded persuad
+ persuades persuad
+ persuading persuad
+ persuasion persuas
+ persuasions persuas
+ pert pert
+ pertain pertain
+ pertaining pertain
+ pertains pertain
+ pertaunt pertaunt
+ pertinent pertin
+ pertly pertli
+ perturb perturb
+ perturbation perturb
+ perturbations perturb
+ perturbed perturb
+ perus peru
+ perusal perus
+ peruse perus
+ perused perus
+ perusing perus
+ perverse pervers
+ perversely pervers
+ perverseness pervers
+ pervert pervert
+ perverted pervert
+ peseech peseech
+ pest pest
+ pester pester
+ pestiferous pestifer
+ pestilence pestil
+ pestilent pestil
+ pet pet
+ petar petar
+ peter peter
+ petit petit
+ petition petit
+ petitionary petitionari
+ petitioner petition
+ petitioners petition
+ petitions petit
+ peto peto
+ petrarch petrarch
+ petruchio petruchio
+ petter petter
+ petticoat petticoat
+ petticoats petticoat
+ pettiness petti
+ pettish pettish
+ pettitoes pettito
+ petty petti
+ peu peu
+ pew pew
+ pewter pewter
+ pewterer pewter
+ phaethon phaethon
+ phaeton phaeton
+ phantasime phantasim
+ phantasimes phantasim
+ phantasma phantasma
+ pharamond pharamond
+ pharaoh pharaoh
+ pharsalia pharsalia
+ pheasant pheasant
+ pheazar pheazar
+ phebe phebe
+ phebes phebe
+ pheebus pheebu
+ pheeze pheez
+ phibbus phibbu
+ philadelphos philadelpho
+ philario philario
+ philarmonus philarmonu
+ philemon philemon
+ philip philip
+ philippan philippan
+ philippe philipp
+ philippi philippi
+ phillida phillida
+ philo philo
+ philomel philomel
+ philomela philomela
+ philosopher philosoph
+ philosophers philosoph
+ philosophical philosoph
+ philosophy philosophi
+ philostrate philostr
+ philotus philotu
+ phlegmatic phlegmat
+ phoebe phoeb
+ phoebus phoebu
+ phoenicia phoenicia
+ phoenicians phoenician
+ phoenix phoenix
+ phorbus phorbu
+ photinus photinu
+ phrase phrase
+ phraseless phraseless
+ phrases phrase
+ phrygia phrygia
+ phrygian phrygian
+ phrynia phrynia
+ physic physic
+ physical physic
+ physician physician
+ physicians physician
+ physics physic
+ pia pia
+ pibble pibbl
+ pible pibl
+ picardy picardi
+ pick pick
+ pickaxe pickax
+ pickaxes pickax
+ pickbone pickbon
+ picked pick
+ pickers picker
+ picking pick
+ pickle pickl
+ picklock picklock
+ pickpurse pickpurs
+ picks pick
+ pickt pickt
+ pickthanks pickthank
+ pictur pictur
+ picture pictur
+ pictured pictur
+ pictures pictur
+ pid pid
+ pie pie
+ piec piec
+ piece piec
+ pieces piec
+ piecing piec
+ pied pi
+ piedness pied
+ pier pier
+ pierc pierc
+ pierce pierc
+ pierced pierc
+ pierces pierc
+ pierceth pierceth
+ piercing pierc
+ piercy pierci
+ piers pier
+ pies pi
+ piety pieti
+ pig pig
+ pigeon pigeon
+ pigeons pigeon
+ pight pight
+ pigmy pigmi
+ pigrogromitus pigrogromitu
+ pike pike
+ pikes pike
+ pil pil
+ pilate pilat
+ pilates pilat
+ pilchers pilcher
+ pile pile
+ piles pile
+ pilf pilf
+ pilfering pilfer
+ pilgrim pilgrim
+ pilgrimage pilgrimag
+ pilgrims pilgrim
+ pill pill
+ pillage pillag
+ pillagers pillag
+ pillar pillar
+ pillars pillar
+ pillicock pillicock
+ pillory pillori
+ pillow pillow
+ pillows pillow
+ pills pill
+ pilot pilot
+ pilots pilot
+ pimpernell pimpernel
+ pin pin
+ pinch pinch
+ pinched pinch
+ pinches pinch
+ pinching pinch
+ pindarus pindaru
+ pine pine
+ pined pine
+ pines pine
+ pinfold pinfold
+ pining pine
+ pinion pinion
+ pink pink
+ pinn pinn
+ pinnace pinnac
+ pins pin
+ pinse pins
+ pint pint
+ pintpot pintpot
+ pioned pion
+ pioneers pioneer
+ pioner pioner
+ pioners pioner
+ pious piou
+ pip pip
+ pipe pipe
+ piper piper
+ pipers piper
+ pipes pipe
+ piping pipe
+ pippin pippin
+ pippins pippin
+ pirate pirat
+ pirates pirat
+ pisa pisa
+ pisanio pisanio
+ pish pish
+ pismires pismir
+ piss piss
+ pissing piss
+ pistol pistol
+ pistols pistol
+ pit pit
+ pitch pitch
+ pitched pitch
+ pitcher pitcher
+ pitchers pitcher
+ pitchy pitchi
+ piteous piteou
+ piteously piteous
+ pitfall pitfal
+ pith pith
+ pithless pithless
+ pithy pithi
+ pitie piti
+ pitied piti
+ pities piti
+ pitiful piti
+ pitifully pitifulli
+ pitiless pitiless
+ pits pit
+ pittance pittanc
+ pittie pitti
+ pittikins pittikin
+ pity piti
+ pitying piti
+ pius piu
+ plac plac
+ place place
+ placed place
+ placentio placentio
+ places place
+ placeth placeth
+ placid placid
+ placing place
+ plack plack
+ placket placket
+ plackets placket
+ plagu plagu
+ plague plagu
+ plagued plagu
+ plagues plagu
+ plaguing plagu
+ plaguy plagui
+ plain plain
+ plainer plainer
+ plainest plainest
+ plaining plain
+ plainings plain
+ plainly plainli
+ plainness plain
+ plains plain
+ plainsong plainsong
+ plaintful plaint
+ plaintiff plaintiff
+ plaintiffs plaintiff
+ plaints plaint
+ planched planch
+ planet planet
+ planetary planetari
+ planets planet
+ planks plank
+ plant plant
+ plantage plantag
+ plantagenet plantagenet
+ plantagenets plantagenet
+ plantain plantain
+ plantation plantat
+ planted plant
+ planteth planteth
+ plants plant
+ plash plash
+ plashy plashi
+ plast plast
+ plaster plaster
+ plasterer plaster
+ plat plat
+ plate plate
+ plated plate
+ plates plate
+ platform platform
+ platforms platform
+ plats plat
+ platted plat
+ plausible plausibl
+ plausive plausiv
+ plautus plautu
+ play plai
+ played plai
+ player player
+ players player
+ playeth playeth
+ playfellow playfellow
+ playfellows playfellow
+ playhouse playhous
+ playing plai
+ plays plai
+ plea plea
+ pleach pleach
+ pleached pleach
+ plead plead
+ pleaded plead
+ pleader pleader
+ pleaders pleader
+ pleading plead
+ pleads plead
+ pleas plea
+ pleasance pleasanc
+ pleasant pleasant
+ pleasantly pleasantli
+ please pleas
+ pleased pleas
+ pleaser pleaser
+ pleasers pleaser
+ pleases pleas
+ pleasest pleasest
+ pleaseth pleaseth
+ pleasing pleas
+ pleasure pleasur
+ pleasures pleasur
+ plebeians plebeian
+ plebeii plebeii
+ plebs pleb
+ pledge pledg
+ pledges pledg
+ pleines plein
+ plenitude plenitud
+ plenteous plenteou
+ plenteously plenteous
+ plenties plenti
+ plentiful plenti
+ plentifully plentifulli
+ plenty plenti
+ pless pless
+ plessed pless
+ plessing pless
+ pliant pliant
+ plied pli
+ plies pli
+ plight plight
+ plighted plight
+ plighter plighter
+ plod plod
+ plodded plod
+ plodders plodder
+ plodding plod
+ plods plod
+ plood plood
+ ploody ploodi
+ plot plot
+ plots plot
+ plotted plot
+ plotter plotter
+ plough plough
+ ploughed plough
+ ploughman ploughman
+ ploughmen ploughmen
+ plow plow
+ plows plow
+ pluck pluck
+ plucked pluck
+ plucker plucker
+ plucking pluck
+ plucks pluck
+ plue plue
+ plum plum
+ plume plume
+ plumed plume
+ plumes plume
+ plummet plummet
+ plump plump
+ plumpy plumpi
+ plums plum
+ plung plung
+ plunge plung
+ plunged plung
+ plural plural
+ plurisy plurisi
+ plus plu
+ pluto pluto
+ plutus plutu
+ ply ply
+ po po
+ pocket pocket
+ pocketing pocket
+ pockets pocket
+ pocky pocki
+ pody podi
+ poem poem
+ poesy poesi
+ poet poet
+ poetical poetic
+ poetry poetri
+ poets poet
+ poictiers poictier
+ poinards poinard
+ poins poin
+ point point
+ pointblank pointblank
+ pointed point
+ pointing point
+ points point
+ pois poi
+ poise pois
+ poising pois
+ poison poison
+ poisoned poison
+ poisoner poison
+ poisoning poison
+ poisonous poison
+ poisons poison
+ poke poke
+ poking poke
+ pol pol
+ polack polack
+ polacks polack
+ poland poland
+ pold pold
+ pole pole
+ poleaxe poleax
+ polecat polecat
+ polecats polecat
+ polemon polemon
+ poles pole
+ poli poli
+ policies polici
+ policy polici
+ polish polish
+ polished polish
+ politic polit
+ politician politician
+ politicians politician
+ politicly politicli
+ polixenes polixen
+ poll poll
+ polluted pollut
+ pollution pollut
+ polonius poloniu
+ poltroons poltroon
+ polusion polus
+ polydamus polydamu
+ polydore polydor
+ polyxena polyxena
+ pomander pomand
+ pomegranate pomegran
+ pomewater pomewat
+ pomfret pomfret
+ pomgarnet pomgarnet
+ pommel pommel
+ pomp pomp
+ pompeius pompeiu
+ pompey pompei
+ pompion pompion
+ pompous pompou
+ pomps pomp
+ pond pond
+ ponder ponder
+ ponderous ponder
+ ponds pond
+ poniard poniard
+ poniards poniard
+ pont pont
+ pontic pontic
+ pontifical pontif
+ ponton ponton
+ pooh pooh
+ pool pool
+ poole pool
+ poop poop
+ poor poor
+ poorer poorer
+ poorest poorest
+ poorly poorli
+ pop pop
+ pope pope
+ popedom popedom
+ popilius popiliu
+ popingay popingai
+ popish popish
+ popp popp
+ poppy poppi
+ pops pop
+ popular popular
+ popularity popular
+ populous popul
+ porch porch
+ porches porch
+ pore pore
+ poring pore
+ pork pork
+ porn porn
+ porpentine porpentin
+ porridge porridg
+ porringer porring
+ port port
+ portable portabl
+ portage portag
+ portal portal
+ portance portanc
+ portcullis portculli
+ portend portend
+ portends portend
+ portent portent
+ portentous portent
+ portents portent
+ porter porter
+ porters porter
+ portia portia
+ portion portion
+ portly portli
+ portotartarossa portotartarossa
+ portrait portrait
+ portraiture portraitur
+ ports port
+ portugal portug
+ pose pose
+ posied posi
+ posies posi
+ position posit
+ positive posit
+ positively posit
+ posse poss
+ possess possess
+ possessed possess
+ possesses possess
+ possesseth possesseth
+ possessing possess
+ possession possess
+ possessions possess
+ possessor possessor
+ posset posset
+ possets posset
+ possibilities possibl
+ possibility possibl
+ possible possibl
+ possibly possibl
+ possitable possit
+ post post
+ poste post
+ posted post
+ posterior posterior
+ posteriors posterior
+ posterity poster
+ postern postern
+ posterns postern
+ posters poster
+ posthorse posthors
+ posthorses posthors
+ posthumus posthumu
+ posting post
+ postmaster postmast
+ posts post
+ postscript postscript
+ posture postur
+ postures postur
+ posy posi
+ pot pot
+ potable potabl
+ potations potat
+ potato potato
+ potatoes potato
+ potch potch
+ potency potenc
+ potent potent
+ potentates potent
+ potential potenti
+ potently potent
+ potents potent
+ pothecary pothecari
+ pother pother
+ potion potion
+ potions potion
+ potpan potpan
+ pots pot
+ potter potter
+ potting pot
+ pottle pottl
+ pouch pouch
+ poulter poulter
+ poultice poultic
+ poultney poultnei
+ pouncet pouncet
+ pound pound
+ pounds pound
+ pour pour
+ pourest pourest
+ pouring pour
+ pourquoi pourquoi
+ pours pour
+ pout pout
+ poverty poverti
+ pow pow
+ powd powd
+ powder powder
+ power power
+ powerful power
+ powerfully powerfulli
+ powerless powerless
+ powers power
+ pox pox
+ poys poi
+ poysam poysam
+ prabbles prabbl
+ practic practic
+ practice practic
+ practiced practic
+ practicer practic
+ practices practic
+ practicing practic
+ practis practi
+ practisants practis
+ practise practis
+ practiser practis
+ practisers practis
+ practises practis
+ practising practis
+ praeclarissimus praeclarissimu
+ praemunire praemunir
+ praetor praetor
+ praetors praetor
+ pragging prag
+ prague pragu
+ prain prain
+ prains prain
+ prais prai
+ praise prais
+ praised prais
+ praises prais
+ praisest praisest
+ praiseworthy praiseworthi
+ praising prais
+ prancing pranc
+ prank prank
+ pranks prank
+ prat prat
+ prate prate
+ prated prate
+ prater prater
+ prating prate
+ prattle prattl
+ prattler prattler
+ prattling prattl
+ prave prave
+ prawls prawl
+ prawns prawn
+ pray prai
+ prayer prayer
+ prayers prayer
+ praying prai
+ prays prai
+ pre pre
+ preach preach
+ preached preach
+ preachers preacher
+ preaches preach
+ preaching preach
+ preachment preachment
+ pread pread
+ preambulate preambul
+ precedence preced
+ precedent preced
+ preceding preced
+ precept precept
+ preceptial precepti
+ precepts precept
+ precinct precinct
+ precious preciou
+ preciously precious
+ precipice precipic
+ precipitating precipit
+ precipitation precipit
+ precise precis
+ precisely precis
+ preciseness precis
+ precisian precisian
+ precor precor
+ precurse precurs
+ precursors precursor
+ predeceased predeceas
+ predecessor predecessor
+ predecessors predecessor
+ predestinate predestin
+ predicament predica
+ predict predict
+ prediction predict
+ predictions predict
+ predominance predomin
+ predominant predomin
+ predominate predomin
+ preeches preech
+ preeminence preemin
+ preface prefac
+ prefer prefer
+ preferment prefer
+ preferments prefer
+ preferr preferr
+ preferreth preferreth
+ preferring prefer
+ prefers prefer
+ prefiguring prefigur
+ prefix prefix
+ prefixed prefix
+ preformed preform
+ pregnancy pregnanc
+ pregnant pregnant
+ pregnantly pregnantli
+ prejudicates prejud
+ prejudice prejudic
+ prejudicial prejudici
+ prelate prelat
+ premeditated premedit
+ premeditation premedit
+ premised premis
+ premises premis
+ prenez prenez
+ prenominate prenomin
+ prentice prentic
+ prentices prentic
+ preordinance preordin
+ prepar prepar
+ preparation prepar
+ preparations prepar
+ prepare prepar
+ prepared prepar
+ preparedly preparedli
+ prepares prepar
+ preparing prepar
+ prepost prepost
+ preposterous preposter
+ preposterously preposter
+ prerogatifes prerogatif
+ prerogative prerog
+ prerogatived prerogativ
+ presage presag
+ presagers presag
+ presages presag
+ presageth presageth
+ presaging presag
+ prescience prescienc
+ prescribe prescrib
+ prescript prescript
+ prescription prescript
+ prescriptions prescript
+ prescripts prescript
+ presence presenc
+ presences presenc
+ present present
+ presentation present
+ presented present
+ presenter present
+ presenters present
+ presenteth presenteth
+ presenting present
+ presently present
+ presentment present
+ presents present
+ preserv preserv
+ preservation preserv
+ preservative preserv
+ preserve preserv
+ preserved preserv
+ preserver preserv
+ preservers preserv
+ preserving preserv
+ president presid
+ press press
+ pressed press
+ presser presser
+ presses press
+ pressing press
+ pressure pressur
+ pressures pressur
+ prest prest
+ prester prester
+ presume presum
+ presumes presum
+ presuming presum
+ presumption presumpt
+ presumptuous presumptu
+ presuppos presuppo
+ pret pret
+ pretence pretenc
+ pretences pretenc
+ pretend pretend
+ pretended pretend
+ pretending pretend
+ pretense pretens
+ pretext pretext
+ pretia pretia
+ prettier prettier
+ prettiest prettiest
+ prettily prettili
+ prettiness pretti
+ pretty pretti
+ prevail prevail
+ prevailed prevail
+ prevaileth prevaileth
+ prevailing prevail
+ prevailment prevail
+ prevails prevail
+ prevent prevent
+ prevented prevent
+ prevention prevent
+ preventions prevent
+ prevents prevent
+ prey prei
+ preyful prey
+ preys prei
+ priam priam
+ priami priami
+ priamus priamu
+ pribbles pribbl
+ price price
+ prick prick
+ pricked prick
+ pricket pricket
+ pricking prick
+ pricks prick
+ pricksong pricksong
+ pride pride
+ prides pride
+ pridge pridg
+ prie prie
+ pried pri
+ prief prief
+ pries pri
+ priest priest
+ priesthood priesthood
+ priests priest
+ prig prig
+ primal primal
+ prime prime
+ primer primer
+ primero primero
+ primest primest
+ primitive primit
+ primo primo
+ primogenity primogen
+ primrose primros
+ primroses primros
+ primy primi
+ prince princ
+ princely princ
+ princes princ
+ princess princess
+ principal princip
+ principalities princip
+ principality princip
+ principle principl
+ principles principl
+ princox princox
+ prings pring
+ print print
+ printed print
+ printing print
+ printless printless
+ prints print
+ prioress prioress
+ priories priori
+ priority prioriti
+ priory priori
+ priscian priscian
+ prison prison
+ prisoner prison
+ prisoners prison
+ prisonment prison
+ prisonnier prisonni
+ prisons prison
+ pristine pristin
+ prithe prith
+ prithee prithe
+ privacy privaci
+ private privat
+ privately privat
+ privates privat
+ privilage privilag
+ privileg privileg
+ privilege privileg
+ privileged privileg
+ privileges privileg
+ privilegio privilegio
+ privily privili
+ privity priviti
+ privy privi
+ priz priz
+ prize prize
+ prized prize
+ prizer prizer
+ prizes prize
+ prizest prizest
+ prizing prize
+ pro pro
+ probable probabl
+ probal probal
+ probation probat
+ proceed proce
+ proceeded proceed
+ proceeders proceed
+ proceeding proceed
+ proceedings proceed
+ proceeds proce
+ process process
+ procession process
+ proclaim proclaim
+ proclaimed proclaim
+ proclaimeth proclaimeth
+ proclaims proclaim
+ proclamation proclam
+ proclamations proclam
+ proconsul proconsul
+ procrastinate procrastin
+ procreant procreant
+ procreants procreant
+ procreation procreat
+ procrus procru
+ proculeius proculeiu
+ procur procur
+ procurator procur
+ procure procur
+ procured procur
+ procures procur
+ procuring procur
+ prodigal prodig
+ prodigality prodig
+ prodigally prodig
+ prodigals prodig
+ prodigies prodigi
+ prodigious prodigi
+ prodigiously prodigi
+ prodigy prodigi
+ proditor proditor
+ produc produc
+ produce produc
+ produced produc
+ produces produc
+ producing produc
+ proface profac
+ profan profan
+ profanation profan
+ profane profan
+ profaned profan
+ profanely profan
+ profaneness profan
+ profaners profan
+ profaning profan
+ profess profess
+ professed profess
+ professes profess
+ profession profess
+ professions profess
+ professors professor
+ proffer proffer
+ proffered proffer
+ profferer proffer
+ proffers proffer
+ proficient profici
+ profit profit
+ profitable profit
+ profitably profit
+ profited profit
+ profiting profit
+ profitless profitless
+ profits profit
+ profound profound
+ profoundest profoundest
+ profoundly profoundli
+ progenitors progenitor
+ progeny progeni
+ progne progn
+ prognosticate prognost
+ prognostication prognost
+ progress progress
+ progression progress
+ prohibit prohibit
+ prohibition prohibit
+ project project
+ projection project
+ projects project
+ prolixious prolixi
+ prolixity prolix
+ prologue prologu
+ prologues prologu
+ prolong prolong
+ prolongs prolong
+ promethean promethean
+ prometheus prometheu
+ promis promi
+ promise promis
+ promised promis
+ promises promis
+ promiseth promiseth
+ promising promis
+ promontory promontori
+ promotion promot
+ promotions promot
+ prompt prompt
+ prompted prompt
+ promptement promptement
+ prompter prompter
+ prompting prompt
+ prompts prompt
+ prompture promptur
+ promulgate promulg
+ prone prone
+ prononcer prononc
+ prononcez prononcez
+ pronoun pronoun
+ pronounc pronounc
+ pronounce pronounc
+ pronounced pronounc
+ pronouncing pronounc
+ pronouns pronoun
+ proof proof
+ proofs proof
+ prop prop
+ propagate propag
+ propagation propag
+ propend propend
+ propension propens
+ proper proper
+ properer proper
+ properly properli
+ propertied properti
+ properties properti
+ property properti
+ prophecies propheci
+ prophecy propheci
+ prophesied prophesi
+ prophesier prophesi
+ prophesy prophesi
+ prophesying prophesi
+ prophet prophet
+ prophetess prophetess
+ prophetic prophet
+ prophetically prophet
+ prophets prophet
+ propinquity propinqu
+ propontic propont
+ proportion proport
+ proportionable proportion
+ proportions proport
+ propos propo
+ propose propos
+ proposed propos
+ proposer propos
+ proposes propos
+ proposing propos
+ proposition proposit
+ propositions proposit
+ propounded propound
+ propp propp
+ propre propr
+ propriety proprieti
+ props prop
+ propugnation propugn
+ prorogue prorogu
+ prorogued prorogu
+ proscription proscript
+ proscriptions proscript
+ prose prose
+ prosecute prosecut
+ prosecution prosecut
+ proselytes proselyt
+ proserpina proserpina
+ prosp prosp
+ prospect prospect
+ prosper prosper
+ prosperity prosper
+ prospero prospero
+ prosperous prosper
+ prosperously prosper
+ prospers prosper
+ prostitute prostitut
+ prostrate prostrat
+ protect protect
+ protected protect
+ protection protect
+ protector protector
+ protectors protector
+ protectorship protectorship
+ protectress protectress
+ protects protect
+ protest protest
+ protestation protest
+ protestations protest
+ protested protest
+ protester protest
+ protesting protest
+ protests protest
+ proteus proteu
+ protheus protheu
+ protract protract
+ protractive protract
+ proud proud
+ prouder prouder
+ proudest proudest
+ proudlier proudlier
+ proudly proudli
+ prouds proud
+ prov prov
+ provand provand
+ prove prove
+ proved prove
+ provender provend
+ proverb proverb
+ proverbs proverb
+ proves prove
+ proveth proveth
+ provide provid
+ provided provid
+ providence provid
+ provident provid
+ providently provid
+ provider provid
+ provides provid
+ province provinc
+ provinces provinc
+ provincial provinci
+ proving prove
+ provision provis
+ proviso proviso
+ provocation provoc
+ provok provok
+ provoke provok
+ provoked provok
+ provoker provok
+ provokes provok
+ provoketh provoketh
+ provoking provok
+ provost provost
+ prowess prowess
+ prudence prudenc
+ prudent prudent
+ prun prun
+ prune prune
+ prunes prune
+ pruning prune
+ pry pry
+ prying pry
+ psalm psalm
+ psalmist psalmist
+ psalms psalm
+ psalteries psalteri
+ ptolemies ptolemi
+ ptolemy ptolemi
+ public public
+ publican publican
+ publication public
+ publicly publicli
+ publicola publicola
+ publish publish
+ published publish
+ publisher publish
+ publishing publish
+ publius publiu
+ pucelle pucel
+ puck puck
+ pudder pudder
+ pudding pud
+ puddings pud
+ puddle puddl
+ puddled puddl
+ pudency pudenc
+ pueritia pueritia
+ puff puff
+ puffing puf
+ puffs puff
+ pugging pug
+ puis pui
+ puissance puissanc
+ puissant puissant
+ puke puke
+ puking puke
+ pulcher pulcher
+ puling pule
+ pull pull
+ puller puller
+ pullet pullet
+ pulling pull
+ pulls pull
+ pulpit pulpit
+ pulpiter pulpit
+ pulpits pulpit
+ pulse puls
+ pulsidge pulsidg
+ pump pump
+ pumpion pumpion
+ pumps pump
+ pun pun
+ punched punch
+ punish punish
+ punished punish
+ punishes punish
+ punishment punish
+ punishments punish
+ punk punk
+ punto punto
+ puny puni
+ pupil pupil
+ pupils pupil
+ puppet puppet
+ puppets puppet
+ puppies puppi
+ puppy puppi
+ pur pur
+ purblind purblind
+ purchas purcha
+ purchase purchas
+ purchased purchas
+ purchases purchas
+ purchaseth purchaseth
+ purchasing purchas
+ pure pure
+ purely pure
+ purer purer
+ purest purest
+ purg purg
+ purgation purgat
+ purgative purg
+ purgatory purgatori
+ purge purg
+ purged purg
+ purgers purger
+ purging purg
+ purifies purifi
+ purifying purifi
+ puritan puritan
+ purity puriti
+ purlieus purlieu
+ purple purpl
+ purpled purpl
+ purples purpl
+ purport purport
+ purpos purpo
+ purpose purpos
+ purposed purpos
+ purposely purpos
+ purposes purpos
+ purposeth purposeth
+ purposing purpos
+ purr purr
+ purs pur
+ purse purs
+ pursents pursent
+ purses purs
+ pursu pursu
+ pursue pursu
+ pursued pursu
+ pursuers pursuer
+ pursues pursu
+ pursuest pursuest
+ pursueth pursueth
+ pursuing pursu
+ pursuit pursuit
+ pursuivant pursuiv
+ pursuivants pursuiv
+ pursy pursi
+ purus puru
+ purveyor purveyor
+ push push
+ pushes push
+ pusillanimity pusillanim
+ put put
+ putrefy putrefi
+ putrified putrifi
+ puts put
+ putter putter
+ putting put
+ puttock puttock
+ puzzel puzzel
+ puzzle puzzl
+ puzzled puzzl
+ puzzles puzzl
+ py py
+ pygmalion pygmalion
+ pygmies pygmi
+ pygmy pygmi
+ pyramid pyramid
+ pyramides pyramid
+ pyramids pyramid
+ pyramis pyrami
+ pyramises pyramis
+ pyramus pyramu
+ pyrenean pyrenean
+ pyrrhus pyrrhu
+ pythagoras pythagora
+ qu qu
+ quadrangle quadrangl
+ quae quae
+ quaff quaff
+ quaffing quaf
+ quagmire quagmir
+ quail quail
+ quailing quail
+ quails quail
+ quaint quaint
+ quaintly quaintli
+ quak quak
+ quake quak
+ quakes quak
+ qualification qualif
+ qualified qualifi
+ qualifies qualifi
+ qualify qualifi
+ qualifying qualifi
+ qualite qualit
+ qualities qualiti
+ quality qualiti
+ qualm qualm
+ qualmish qualmish
+ quam quam
+ quand quand
+ quando quando
+ quantities quantiti
+ quantity quantiti
+ quare quar
+ quarrel quarrel
+ quarrell quarrel
+ quarreller quarrel
+ quarrelling quarrel
+ quarrelous quarrel
+ quarrels quarrel
+ quarrelsome quarrelsom
+ quarries quarri
+ quarry quarri
+ quart quart
+ quarter quarter
+ quartered quarter
+ quartering quarter
+ quarters quarter
+ quarts quart
+ quasi quasi
+ quat quat
+ quatch quatch
+ quay quai
+ que que
+ quean quean
+ queas quea
+ queasiness queasi
+ queasy queasi
+ queen queen
+ queens queen
+ quell quell
+ queller queller
+ quench quench
+ quenched quench
+ quenching quench
+ quenchless quenchless
+ quern quern
+ quest quest
+ questant questant
+ question question
+ questionable question
+ questioned question
+ questioning question
+ questionless questionless
+ questions question
+ questrists questrist
+ quests quest
+ queubus queubu
+ qui qui
+ quick quick
+ quicken quicken
+ quickens quicken
+ quicker quicker
+ quicklier quicklier
+ quickly quickli
+ quickness quick
+ quicksand quicksand
+ quicksands quicksand
+ quicksilverr quicksilverr
+ quid quid
+ quiddities quidditi
+ quiddits quiddit
+ quier quier
+ quiet quiet
+ quieter quieter
+ quietly quietli
+ quietness quiet
+ quietus quietu
+ quill quill
+ quillets quillet
+ quills quill
+ quilt quilt
+ quinapalus quinapalu
+ quince quinc
+ quinces quinc
+ quintain quintain
+ quintessence quintess
+ quintus quintu
+ quip quip
+ quips quip
+ quire quir
+ quiring quir
+ quirk quirk
+ quirks quirk
+ quis qui
+ quit quit
+ quite quit
+ quits quit
+ quittance quittanc
+ quitted quit
+ quitting quit
+ quiver quiver
+ quivering quiver
+ quivers quiver
+ quo quo
+ quod quod
+ quoifs quoif
+ quoint quoint
+ quoit quoit
+ quoits quoit
+ quondam quondam
+ quoniam quoniam
+ quote quot
+ quoted quot
+ quotes quot
+ quoth quoth
+ quotidian quotidian
+ r r
+ rabbit rabbit
+ rabble rabbl
+ rabblement rabblement
+ race race
+ rack rack
+ rackers racker
+ racket racket
+ rackets racket
+ racking rack
+ racks rack
+ radiance radianc
+ radiant radiant
+ radish radish
+ rafe rafe
+ raft raft
+ rag rag
+ rage rage
+ rages rage
+ rageth rageth
+ ragg ragg
+ ragged rag
+ raggedness ragged
+ raging rage
+ ragozine ragozin
+ rags rag
+ rah rah
+ rail rail
+ railed rail
+ railer railer
+ railest railest
+ raileth raileth
+ railing rail
+ rails rail
+ raiment raiment
+ rain rain
+ rainbow rainbow
+ raineth raineth
+ raining rain
+ rainold rainold
+ rains rain
+ rainy raini
+ rais rai
+ raise rais
+ raised rais
+ raises rais
+ raising rais
+ raisins raisin
+ rak rak
+ rake rake
+ rakers raker
+ rakes rake
+ ral ral
+ rald rald
+ ralph ralph
+ ram ram
+ rambures rambur
+ ramm ramm
+ rampallian rampallian
+ rampant rampant
+ ramping ramp
+ rampir rampir
+ ramps ramp
+ rams ram
+ ramsey ramsei
+ ramston ramston
+ ran ran
+ rance ranc
+ rancorous rancor
+ rancors rancor
+ rancour rancour
+ random random
+ rang rang
+ range rang
+ ranged rang
+ rangers ranger
+ ranges rang
+ ranging rang
+ rank rank
+ ranker ranker
+ rankest rankest
+ ranking rank
+ rankle rankl
+ rankly rankli
+ rankness rank
+ ranks rank
+ ransack ransack
+ ransacking ransack
+ ransom ransom
+ ransomed ransom
+ ransoming ransom
+ ransomless ransomless
+ ransoms ransom
+ rant rant
+ ranting rant
+ rap rap
+ rape rape
+ rapes rape
+ rapier rapier
+ rapiers rapier
+ rapine rapin
+ raps rap
+ rapt rapt
+ rapture raptur
+ raptures raptur
+ rar rar
+ rare rare
+ rarely rare
+ rareness rare
+ rarer rarer
+ rarest rarest
+ rarities rariti
+ rarity rariti
+ rascal rascal
+ rascalliest rascalliest
+ rascally rascal
+ rascals rascal
+ rased rase
+ rash rash
+ rasher rasher
+ rashly rashli
+ rashness rash
+ rat rat
+ ratcatcher ratcatch
+ ratcliff ratcliff
+ rate rate
+ rated rate
+ rately rate
+ rates rate
+ rather rather
+ ratherest ratherest
+ ratified ratifi
+ ratifiers ratifi
+ ratify ratifi
+ rating rate
+ rational ration
+ ratolorum ratolorum
+ rats rat
+ ratsbane ratsban
+ rattle rattl
+ rattles rattl
+ rattling rattl
+ rature ratur
+ raught raught
+ rav rav
+ rave rave
+ ravel ravel
+ raven raven
+ ravening raven
+ ravenous raven
+ ravens raven
+ ravenspurgh ravenspurgh
+ raves rave
+ ravin ravin
+ raving rave
+ ravish ravish
+ ravished ravish
+ ravisher ravish
+ ravishing ravish
+ ravishments ravish
+ raw raw
+ rawer rawer
+ rawly rawli
+ rawness raw
+ ray rai
+ rayed rai
+ rays rai
+ raz raz
+ raze raze
+ razed raze
+ razes raze
+ razeth razeth
+ razing raze
+ razor razor
+ razorable razor
+ razors razor
+ razure razur
+ re re
+ reach reach
+ reaches reach
+ reacheth reacheth
+ reaching reach
+ read read
+ reader reader
+ readiest readiest
+ readily readili
+ readiness readi
+ reading read
+ readins readin
+ reads read
+ ready readi
+ real real
+ really realli
+ realm realm
+ realms realm
+ reap reap
+ reapers reaper
+ reaping reap
+ reaps reap
+ rear rear
+ rears rear
+ rearward rearward
+ reason reason
+ reasonable reason
+ reasonably reason
+ reasoned reason
+ reasoning reason
+ reasonless reasonless
+ reasons reason
+ reave reav
+ rebate rebat
+ rebato rebato
+ rebeck rebeck
+ rebel rebel
+ rebell rebel
+ rebelling rebel
+ rebellion rebellion
+ rebellious rebelli
+ rebels rebel
+ rebound rebound
+ rebuk rebuk
+ rebuke rebuk
+ rebukeable rebuk
+ rebuked rebuk
+ rebukes rebuk
+ rebus rebu
+ recall recal
+ recant recant
+ recantation recant
+ recanter recant
+ recanting recant
+ receipt receipt
+ receipts receipt
+ receiv receiv
+ receive receiv
+ received receiv
+ receiver receiv
+ receives receiv
+ receivest receivest
+ receiveth receiveth
+ receiving receiv
+ receptacle receptacl
+ rechate rechat
+ reciprocal reciproc
+ reciprocally reciproc
+ recite recit
+ recited recit
+ reciterai reciterai
+ reck reck
+ recking reck
+ reckless reckless
+ reckon reckon
+ reckoned reckon
+ reckoning reckon
+ reckonings reckon
+ recks reck
+ reclaim reclaim
+ reclaims reclaim
+ reclusive reclus
+ recognizance recogniz
+ recognizances recogniz
+ recoil recoil
+ recoiling recoil
+ recollected recollect
+ recomforted recomfort
+ recomforture recomfortur
+ recommend recommend
+ recommended recommend
+ recommends recommend
+ recompens recompen
+ recompense recompens
+ reconcil reconcil
+ reconcile reconcil
+ reconciled reconcil
+ reconcilement reconcil
+ reconciler reconcil
+ reconciles reconcil
+ reconciliation reconcili
+ record record
+ recordation record
+ recorded record
+ recorder record
+ recorders record
+ records record
+ recount recount
+ recounted recount
+ recounting recount
+ recountments recount
+ recounts recount
+ recourse recours
+ recov recov
+ recover recov
+ recoverable recover
+ recovered recov
+ recoveries recoveri
+ recovers recov
+ recovery recoveri
+ recreant recreant
+ recreants recreant
+ recreate recreat
+ recreation recreat
+ rectify rectifi
+ rector rector
+ rectorship rectorship
+ recure recur
+ recured recur
+ red red
+ redbreast redbreast
+ redder redder
+ reddest reddest
+ rede rede
+ redeem redeem
+ redeemed redeem
+ redeemer redeem
+ redeeming redeem
+ redeems redeem
+ redeliver redeliv
+ redemption redempt
+ redime redim
+ redness red
+ redoubled redoubl
+ redoubted redoubt
+ redound redound
+ redress redress
+ redressed redress
+ redresses redress
+ reduce reduc
+ reechy reechi
+ reed reed
+ reeds reed
+ reek reek
+ reeking reek
+ reeks reek
+ reeky reeki
+ reel reel
+ reeleth reeleth
+ reeling reel
+ reels reel
+ refell refel
+ refer refer
+ reference refer
+ referr referr
+ referred refer
+ refigured refigur
+ refin refin
+ refined refin
+ reflect reflect
+ reflecting reflect
+ reflection reflect
+ reflex reflex
+ reform reform
+ reformation reform
+ reformed reform
+ refractory refractori
+ refrain refrain
+ refresh refresh
+ refreshing refresh
+ reft reft
+ refts reft
+ refuge refug
+ refus refu
+ refusal refus
+ refuse refus
+ refused refus
+ refusest refusest
+ refusing refus
+ reg reg
+ regal regal
+ regalia regalia
+ regan regan
+ regard regard
+ regardance regard
+ regarded regard
+ regardfully regardfulli
+ regarding regard
+ regards regard
+ regenerate regener
+ regent regent
+ regentship regentship
+ regia regia
+ regiment regiment
+ regiments regiment
+ regina regina
+ region region
+ regions region
+ regist regist
+ register regist
+ registers regist
+ regreet regreet
+ regreets regreet
+ regress regress
+ reguerdon reguerdon
+ regular regular
+ rehears rehear
+ rehearsal rehears
+ rehearse rehears
+ reign reign
+ reigned reign
+ reignier reignier
+ reigning reign
+ reigns reign
+ rein rein
+ reinforc reinforc
+ reinforce reinforc
+ reinforcement reinforc
+ reins rein
+ reiterate reiter
+ reject reject
+ rejected reject
+ rejoic rejoic
+ rejoice rejoic
+ rejoices rejoic
+ rejoiceth rejoiceth
+ rejoicing rejoic
+ rejoicingly rejoicingli
+ rejoindure rejoindur
+ rejourn rejourn
+ rel rel
+ relapse relaps
+ relate relat
+ relates relat
+ relation relat
+ relations relat
+ relative rel
+ releas relea
+ release releas
+ released releas
+ releasing releas
+ relent relent
+ relenting relent
+ relents relent
+ reliances relianc
+ relics relic
+ relief relief
+ reliev reliev
+ relieve reliev
+ relieved reliev
+ relieves reliev
+ relieving reliev
+ religion religion
+ religions religion
+ religious religi
+ religiously religi
+ relinquish relinquish
+ reliques reliqu
+ reliquit reliquit
+ relish relish
+ relume relum
+ rely reli
+ relying reli
+ remain remain
+ remainder remaind
+ remainders remaind
+ remained remain
+ remaineth remaineth
+ remaining remain
+ remains remain
+ remark remark
+ remarkable remark
+ remediate remedi
+ remedied remedi
+ remedies remedi
+ remedy remedi
+ rememb rememb
+ remember rememb
+ remembered rememb
+ remembers rememb
+ remembrance remembr
+ remembrancer remembranc
+ remembrances remembr
+ remercimens remercimen
+ remiss remiss
+ remission remiss
+ remissness remiss
+ remit remit
+ remnant remnant
+ remnants remnant
+ remonstrance remonstr
+ remorse remors
+ remorseful remors
+ remorseless remorseless
+ remote remot
+ remotion remot
+ remov remov
+ remove remov
+ removed remov
+ removedness removed
+ remover remov
+ removes remov
+ removing remov
+ remunerate remuner
+ remuneration remuner
+ rence renc
+ rend rend
+ render render
+ rendered render
+ renders render
+ rendezvous rendezv
+ renegado renegado
+ renege reneg
+ reneges reneg
+ renew renew
+ renewed renew
+ renewest renewest
+ renounce renounc
+ renouncement renounc
+ renouncing renounc
+ renowmed renowm
+ renown renown
+ renowned renown
+ rent rent
+ rents rent
+ repaid repaid
+ repair repair
+ repaired repair
+ repairing repair
+ repairs repair
+ repass repass
+ repast repast
+ repasture repastur
+ repay repai
+ repaying repai
+ repays repai
+ repeal repeal
+ repealing repeal
+ repeals repeal
+ repeat repeat
+ repeated repeat
+ repeating repeat
+ repeats repeat
+ repel repel
+ repent repent
+ repentance repent
+ repentant repent
+ repented repent
+ repenting repent
+ repents repent
+ repetition repetit
+ repetitions repetit
+ repin repin
+ repine repin
+ repining repin
+ replant replant
+ replenish replenish
+ replenished replenish
+ replete replet
+ replication replic
+ replied repli
+ replies repli
+ repliest repliest
+ reply repli
+ replying repli
+ report report
+ reported report
+ reporter report
+ reportest reportest
+ reporting report
+ reportingly reportingli
+ reports report
+ reposal repos
+ repose repos
+ reposeth reposeth
+ reposing repos
+ repossess repossess
+ reprehend reprehend
+ reprehended reprehend
+ reprehending reprehend
+ represent repres
+ representing repres
+ reprieve repriev
+ reprieves repriev
+ reprisal repris
+ reproach reproach
+ reproaches reproach
+ reproachful reproach
+ reproachfully reproachfulli
+ reprobate reprob
+ reprobation reprob
+ reproof reproof
+ reprov reprov
+ reprove reprov
+ reproveable reprov
+ reproves reprov
+ reproving reprov
+ repugn repugn
+ repugnancy repugn
+ repugnant repugn
+ repulse repuls
+ repulsed repuls
+ repurchas repurcha
+ repured repur
+ reputation reput
+ repute reput
+ reputed reput
+ reputeless reputeless
+ reputes reput
+ reputing reput
+ request request
+ requested request
+ requesting request
+ requests request
+ requiem requiem
+ requir requir
+ require requir
+ required requir
+ requires requir
+ requireth requireth
+ requiring requir
+ requisite requisit
+ requisites requisit
+ requit requit
+ requital requit
+ requite requit
+ requited requit
+ requites requit
+ rer rer
+ rere rere
+ rers rer
+ rescu rescu
+ rescue rescu
+ rescued rescu
+ rescues rescu
+ rescuing rescu
+ resemblance resembl
+ resemble resembl
+ resembled resembl
+ resembles resembl
+ resembleth resembleth
+ resembling resembl
+ reserv reserv
+ reservation reserv
+ reserve reserv
+ reserved reserv
+ reserves reserv
+ reside resid
+ residence resid
+ resident resid
+ resides resid
+ residing resid
+ residue residu
+ resign resign
+ resignation resign
+ resist resist
+ resistance resist
+ resisted resist
+ resisting resist
+ resists resist
+ resolute resolut
+ resolutely resolut
+ resolutes resolut
+ resolution resolut
+ resolv resolv
+ resolve resolv
+ resolved resolv
+ resolvedly resolvedli
+ resolves resolv
+ resolveth resolveth
+ resort resort
+ resorted resort
+ resounding resound
+ resounds resound
+ respeaking respeak
+ respect respect
+ respected respect
+ respecting respect
+ respective respect
+ respectively respect
+ respects respect
+ respice respic
+ respite respit
+ respites respit
+ responsive respons
+ respose respos
+ ress ress
+ rest rest
+ rested rest
+ resteth resteth
+ restful rest
+ resting rest
+ restitution restitut
+ restless restless
+ restor restor
+ restoration restor
+ restorative restor
+ restore restor
+ restored restor
+ restores restor
+ restoring restor
+ restrain restrain
+ restrained restrain
+ restraining restrain
+ restrains restrain
+ restraint restraint
+ rests rest
+ resty resti
+ resum resum
+ resume resum
+ resumes resum
+ resurrections resurrect
+ retail retail
+ retails retail
+ retain retain
+ retainers retain
+ retaining retain
+ retell retel
+ retention retent
+ retentive retent
+ retinue retinu
+ retir retir
+ retire retir
+ retired retir
+ retirement retir
+ retires retir
+ retiring retir
+ retold retold
+ retort retort
+ retorts retort
+ retourne retourn
+ retract retract
+ retreat retreat
+ retrograde retrograd
+ rets ret
+ return return
+ returned return
+ returnest returnest
+ returneth returneth
+ returning return
+ returns return
+ revania revania
+ reveal reveal
+ reveals reveal
+ revel revel
+ reveler revel
+ revell revel
+ reveller revel
+ revellers revel
+ revelling revel
+ revelry revelri
+ revels revel
+ reveng reveng
+ revenge reveng
+ revenged reveng
+ revengeful reveng
+ revengement reveng
+ revenger reveng
+ revengers reveng
+ revenges reveng
+ revenging reveng
+ revengingly revengingli
+ revenue revenu
+ revenues revenu
+ reverb reverb
+ reverberate reverber
+ reverbs reverb
+ reverenc reverenc
+ reverence rever
+ reverend reverend
+ reverent rever
+ reverently rever
+ revers rever
+ reverse revers
+ reversion revers
+ reverted revert
+ review review
+ reviewest reviewest
+ revil revil
+ revile revil
+ revisits revisit
+ reviv reviv
+ revive reviv
+ revives reviv
+ reviving reviv
+ revok revok
+ revoke revok
+ revokement revok
+ revolt revolt
+ revolted revolt
+ revolting revolt
+ revolts revolt
+ revolution revolut
+ revolutions revolut
+ revolve revolv
+ revolving revolv
+ reward reward
+ rewarded reward
+ rewarder reward
+ rewarding reward
+ rewards reward
+ reword reword
+ reworded reword
+ rex rex
+ rey rei
+ reynaldo reynaldo
+ rford rford
+ rful rful
+ rfull rfull
+ rhapsody rhapsodi
+ rheims rheim
+ rhenish rhenish
+ rhesus rhesu
+ rhetoric rhetor
+ rheum rheum
+ rheumatic rheumat
+ rheums rheum
+ rheumy rheumi
+ rhinoceros rhinocero
+ rhodes rhode
+ rhodope rhodop
+ rhubarb rhubarb
+ rhym rhym
+ rhyme rhyme
+ rhymers rhymer
+ rhymes rhyme
+ rhyming rhyme
+ rialto rialto
+ rib rib
+ ribald ribald
+ riband riband
+ ribands riband
+ ribaudred ribaudr
+ ribb ribb
+ ribbed rib
+ ribbon ribbon
+ ribbons ribbon
+ ribs rib
+ rice rice
+ rich rich
+ richard richard
+ richer richer
+ riches rich
+ richest richest
+ richly richli
+ richmond richmond
+ richmonds richmond
+ rid rid
+ riddance riddanc
+ ridden ridden
+ riddle riddl
+ riddles riddl
+ riddling riddl
+ ride ride
+ rider rider
+ riders rider
+ rides ride
+ ridest ridest
+ rideth rideth
+ ridge ridg
+ ridges ridg
+ ridiculous ridicul
+ riding ride
+ rids rid
+ rien rien
+ ries ri
+ rifle rifl
+ rift rift
+ rifted rift
+ rig rig
+ rigg rigg
+ riggish riggish
+ right right
+ righteous righteou
+ righteously righteous
+ rightful right
+ rightfully rightfulli
+ rightly rightli
+ rights right
+ rigol rigol
+ rigorous rigor
+ rigorously rigor
+ rigour rigour
+ ril ril
+ rim rim
+ rin rin
+ rinaldo rinaldo
+ rind rind
+ ring ring
+ ringing ring
+ ringleader ringlead
+ ringlets ringlet
+ rings ring
+ ringwood ringwood
+ riot riot
+ rioter rioter
+ rioting riot
+ riotous riotou
+ riots riot
+ rip rip
+ ripe ripe
+ ripely ripe
+ ripen ripen
+ ripened ripen
+ ripeness ripe
+ ripening ripen
+ ripens ripen
+ riper riper
+ ripest ripest
+ riping ripe
+ ripp ripp
+ ripping rip
+ rise rise
+ risen risen
+ rises rise
+ riseth riseth
+ rish rish
+ rising rise
+ rite rite
+ rites rite
+ rivage rivag
+ rival rival
+ rivality rival
+ rivall rival
+ rivals rival
+ rive rive
+ rived rive
+ rivelled rivel
+ river river
+ rivers river
+ rivet rivet
+ riveted rivet
+ rivets rivet
+ rivo rivo
+ rj rj
+ rless rless
+ road road
+ roads road
+ roam roam
+ roaming roam
+ roan roan
+ roar roar
+ roared roar
+ roarers roarer
+ roaring roar
+ roars roar
+ roast roast
+ roasted roast
+ rob rob
+ roba roba
+ robas roba
+ robb robb
+ robbed rob
+ robber robber
+ robbers robber
+ robbery robberi
+ robbing rob
+ robe robe
+ robed robe
+ robert robert
+ robes robe
+ robin robin
+ robs rob
+ robustious robusti
+ rochester rochest
+ rochford rochford
+ rock rock
+ rocks rock
+ rocky rocki
+ rod rod
+ rode rode
+ roderigo roderigo
+ rods rod
+ roe roe
+ roes roe
+ roger roger
+ rogero rogero
+ rogue rogu
+ roguery rogueri
+ rogues rogu
+ roguish roguish
+ roi roi
+ roisting roist
+ roll roll
+ rolled roll
+ rolling roll
+ rolls roll
+ rom rom
+ romage romag
+ roman roman
+ romano romano
+ romanos romano
+ romans roman
+ rome rome
+ romeo romeo
+ romish romish
+ rondure rondur
+ ronyon ronyon
+ rood rood
+ roof roof
+ roofs roof
+ rook rook
+ rooks rook
+ rooky rooki
+ room room
+ rooms room
+ root root
+ rooted root
+ rootedly rootedli
+ rooteth rooteth
+ rooting root
+ roots root
+ rope rope
+ ropery roperi
+ ropes rope
+ roping rope
+ ros ro
+ rosalind rosalind
+ rosalinda rosalinda
+ rosalinde rosalind
+ rosaline rosalin
+ roscius rosciu
+ rose rose
+ rosed rose
+ rosemary rosemari
+ rosencrantz rosencrantz
+ roses rose
+ ross ross
+ rosy rosi
+ rot rot
+ rote rote
+ roted rote
+ rother rother
+ rotherham rotherham
+ rots rot
+ rotted rot
+ rotten rotten
+ rottenness rotten
+ rotting rot
+ rotundity rotund
+ rouen rouen
+ rough rough
+ rougher rougher
+ roughest roughest
+ roughly roughli
+ roughness rough
+ round round
+ rounded round
+ roundel roundel
+ rounder rounder
+ roundest roundest
+ rounding round
+ roundly roundli
+ rounds round
+ roundure roundur
+ rous rou
+ rouse rous
+ roused rous
+ rousillon rousillon
+ rously rousli
+ roussi roussi
+ rout rout
+ routed rout
+ routs rout
+ rove rove
+ rover rover
+ row row
+ rowel rowel
+ rowland rowland
+ rowlands rowland
+ roy roi
+ royal royal
+ royalize royal
+ royally royal
+ royalties royalti
+ royalty royalti
+ roynish roynish
+ rs rs
+ rt rt
+ rub rub
+ rubb rubb
+ rubbing rub
+ rubbish rubbish
+ rubies rubi
+ rubious rubiou
+ rubs rub
+ ruby rubi
+ rud rud
+ rudand rudand
+ rudder rudder
+ ruddiness ruddi
+ ruddock ruddock
+ ruddy ruddi
+ rude rude
+ rudely rude
+ rudeness rude
+ ruder ruder
+ rudesby rudesbi
+ rudest rudest
+ rudiments rudiment
+ rue rue
+ rued ru
+ ruff ruff
+ ruffian ruffian
+ ruffians ruffian
+ ruffle ruffl
+ ruffling ruffl
+ ruffs ruff
+ rug rug
+ rugby rugbi
+ rugemount rugemount
+ rugged rug
+ ruin ruin
+ ruinate ruinat
+ ruined ruin
+ ruining ruin
+ ruinous ruinou
+ ruins ruin
+ rul rul
+ rule rule
+ ruled rule
+ ruler ruler
+ rulers ruler
+ rules rule
+ ruling rule
+ rumble rumbl
+ ruminaies ruminai
+ ruminat ruminat
+ ruminate rumin
+ ruminated rumin
+ ruminates rumin
+ rumination rumin
+ rumor rumor
+ rumour rumour
+ rumourer rumour
+ rumours rumour
+ rump rump
+ run run
+ runagate runag
+ runagates runag
+ runaway runawai
+ runaways runawai
+ rung rung
+ runn runn
+ runner runner
+ runners runner
+ running run
+ runs run
+ rupture ruptur
+ ruptures ruptur
+ rural rural
+ rush rush
+ rushes rush
+ rushing rush
+ rushling rushl
+ rushy rushi
+ russet russet
+ russia russia
+ russian russian
+ russians russian
+ rust rust
+ rusted rust
+ rustic rustic
+ rustically rustic
+ rustics rustic
+ rustle rustl
+ rustling rustl
+ rusts rust
+ rusty rusti
+ rut rut
+ ruth ruth
+ ruthful ruth
+ ruthless ruthless
+ rutland rutland
+ ruttish ruttish
+ ry ry
+ rye rye
+ rything ryth
+ s s
+ sa sa
+ saba saba
+ sabbath sabbath
+ sable sabl
+ sables sabl
+ sack sack
+ sackbuts sackbut
+ sackcloth sackcloth
+ sacked sack
+ sackerson sackerson
+ sacks sack
+ sacrament sacrament
+ sacred sacr
+ sacrific sacrif
+ sacrifice sacrific
+ sacrificers sacrific
+ sacrifices sacrific
+ sacrificial sacrifici
+ sacrificing sacrif
+ sacrilegious sacrilegi
+ sacring sacr
+ sad sad
+ sadder sadder
+ saddest saddest
+ saddle saddl
+ saddler saddler
+ saddles saddl
+ sadly sadli
+ sadness sad
+ saf saf
+ safe safe
+ safeguard safeguard
+ safely safe
+ safer safer
+ safest safest
+ safeties safeti
+ safety safeti
+ saffron saffron
+ sag sag
+ sage sage
+ sagittary sagittari
+ said said
+ saidst saidst
+ sail sail
+ sailing sail
+ sailmaker sailmak
+ sailor sailor
+ sailors sailor
+ sails sail
+ sain sain
+ saint saint
+ sainted saint
+ saintlike saintlik
+ saints saint
+ saith saith
+ sake sake
+ sakes sake
+ sala sala
+ salad salad
+ salamander salamand
+ salary salari
+ sale sale
+ salerio salerio
+ salicam salicam
+ salique saliqu
+ salisbury salisburi
+ sall sall
+ sallet sallet
+ sallets sallet
+ sallies salli
+ sallow sallow
+ sally salli
+ salmon salmon
+ salmons salmon
+ salt salt
+ salter salter
+ saltiers saltier
+ saltness salt
+ saltpetre saltpetr
+ salutation salut
+ salutations salut
+ salute salut
+ saluted salut
+ salutes salut
+ saluteth saluteth
+ salv salv
+ salvation salvat
+ salve salv
+ salving salv
+ same same
+ samingo samingo
+ samp samp
+ sampire sampir
+ sample sampl
+ sampler sampler
+ sampson sampson
+ samson samson
+ samsons samson
+ sancta sancta
+ sanctified sanctifi
+ sanctifies sanctifi
+ sanctify sanctifi
+ sanctimonies sanctimoni
+ sanctimonious sanctimoni
+ sanctimony sanctimoni
+ sanctities sanctiti
+ sanctity sanctiti
+ sanctuarize sanctuar
+ sanctuary sanctuari
+ sand sand
+ sandal sandal
+ sandbag sandbag
+ sanded sand
+ sands sand
+ sandy sandi
+ sandys sandi
+ sang sang
+ sanguine sanguin
+ sanguis sangui
+ sanity saniti
+ sans san
+ santrailles santrail
+ sap sap
+ sapient sapient
+ sapit sapit
+ sapless sapless
+ sapling sapl
+ sapphire sapphir
+ sapphires sapphir
+ saracens saracen
+ sarcenet sarcenet
+ sard sard
+ sardians sardian
+ sardinia sardinia
+ sardis sardi
+ sarum sarum
+ sat sat
+ satan satan
+ satchel satchel
+ sate sate
+ sated sate
+ satiate satiat
+ satiety satieti
+ satin satin
+ satire satir
+ satirical satir
+ satis sati
+ satisfaction satisfact
+ satisfied satisfi
+ satisfies satisfi
+ satisfy satisfi
+ satisfying satisfi
+ saturday saturdai
+ saturdays saturdai
+ saturn saturn
+ saturnine saturnin
+ saturninus saturninu
+ satyr satyr
+ satyrs satyr
+ sauc sauc
+ sauce sauc
+ sauced sauc
+ saucers saucer
+ sauces sauc
+ saucily saucili
+ sauciness sauci
+ saucy sauci
+ sauf sauf
+ saunder saunder
+ sav sav
+ savage savag
+ savagely savag
+ savageness savag
+ savagery savageri
+ savages savag
+ save save
+ saved save
+ saves save
+ saving save
+ saviour saviour
+ savory savori
+ savour savour
+ savouring savour
+ savours savour
+ savoury savouri
+ savoy savoi
+ saw saw
+ sawed saw
+ sawest sawest
+ sawn sawn
+ sawpit sawpit
+ saws saw
+ sawyer sawyer
+ saxons saxon
+ saxony saxoni
+ saxton saxton
+ say sai
+ sayest sayest
+ saying sai
+ sayings sai
+ says sai
+ sayst sayst
+ sblood sblood
+ sc sc
+ scab scab
+ scabbard scabbard
+ scabs scab
+ scaffold scaffold
+ scaffoldage scaffoldag
+ scal scal
+ scald scald
+ scalded scald
+ scalding scald
+ scale scale
+ scaled scale
+ scales scale
+ scaling scale
+ scall scall
+ scalp scalp
+ scalps scalp
+ scaly scali
+ scamble scambl
+ scambling scambl
+ scamels scamel
+ scan scan
+ scandal scandal
+ scandaliz scandaliz
+ scandalous scandal
+ scandy scandi
+ scann scann
+ scant scant
+ scanted scant
+ scanter scanter
+ scanting scant
+ scantling scantl
+ scants scant
+ scap scap
+ scape scape
+ scaped scape
+ scapes scape
+ scapeth scapeth
+ scar scar
+ scarce scarc
+ scarcely scarc
+ scarcity scarciti
+ scare scare
+ scarecrow scarecrow
+ scarecrows scarecrow
+ scarf scarf
+ scarfed scarf
+ scarfs scarf
+ scaring scare
+ scarlet scarlet
+ scarr scarr
+ scarre scarr
+ scars scar
+ scarus scaru
+ scath scath
+ scathe scath
+ scathful scath
+ scatt scatt
+ scatter scatter
+ scattered scatter
+ scattering scatter
+ scatters scatter
+ scelera scelera
+ scelerisque scelerisqu
+ scene scene
+ scenes scene
+ scent scent
+ scented scent
+ scept scept
+ scepter scepter
+ sceptre sceptr
+ sceptred sceptr
+ sceptres sceptr
+ schedule schedul
+ schedules schedul
+ scholar scholar
+ scholarly scholarli
+ scholars scholar
+ school school
+ schoolboy schoolboi
+ schoolboys schoolboi
+ schoolfellows schoolfellow
+ schooling school
+ schoolmaster schoolmast
+ schoolmasters schoolmast
+ schools school
+ sciatica sciatica
+ sciaticas sciatica
+ science scienc
+ sciences scienc
+ scimitar scimitar
+ scion scion
+ scions scion
+ scissors scissor
+ scoff scoff
+ scoffer scoffer
+ scoffing scof
+ scoffs scoff
+ scoggin scoggin
+ scold scold
+ scolding scold
+ scolds scold
+ sconce sconc
+ scone scone
+ scope scope
+ scopes scope
+ scorch scorch
+ scorched scorch
+ score score
+ scored score
+ scores score
+ scoring score
+ scorn scorn
+ scorned scorn
+ scornful scorn
+ scornfully scornfulli
+ scorning scorn
+ scorns scorn
+ scorpion scorpion
+ scorpions scorpion
+ scot scot
+ scotch scotch
+ scotches scotch
+ scotland scotland
+ scots scot
+ scottish scottish
+ scoundrels scoundrel
+ scour scour
+ scoured scour
+ scourg scourg
+ scourge scourg
+ scouring scour
+ scout scout
+ scouts scout
+ scowl scowl
+ scrap scrap
+ scrape scrape
+ scraping scrape
+ scraps scrap
+ scratch scratch
+ scratches scratch
+ scratching scratch
+ scream scream
+ screams scream
+ screech screech
+ screeching screech
+ screen screen
+ screens screen
+ screw screw
+ screws screw
+ scribbl scribbl
+ scribbled scribbl
+ scribe scribe
+ scribes scribe
+ scrimers scrimer
+ scrip scrip
+ scrippage scrippag
+ scripture scriptur
+ scriptures scriptur
+ scrivener scriven
+ scroll scroll
+ scrolls scroll
+ scroop scroop
+ scrowl scrowl
+ scroyles scroyl
+ scrubbed scrub
+ scruple scrupl
+ scruples scrupl
+ scrupulous scrupul
+ scuffles scuffl
+ scuffling scuffl
+ scullion scullion
+ sculls scull
+ scum scum
+ scurril scurril
+ scurrility scurril
+ scurrilous scurril
+ scurvy scurvi
+ scuse scuse
+ scut scut
+ scutcheon scutcheon
+ scutcheons scutcheon
+ scylla scylla
+ scythe scyth
+ scythed scyth
+ scythia scythia
+ scythian scythian
+ sdeath sdeath
+ se se
+ sea sea
+ seacoal seacoal
+ seafaring seafar
+ seal seal
+ sealed seal
+ sealing seal
+ seals seal
+ seam seam
+ seamen seamen
+ seamy seami
+ seaport seaport
+ sear sear
+ searce searc
+ search search
+ searchers searcher
+ searches search
+ searcheth searcheth
+ searching search
+ seared sear
+ seas sea
+ seasick seasick
+ seaside seasid
+ season season
+ seasoned season
+ seasons season
+ seat seat
+ seated seat
+ seats seat
+ sebastian sebastian
+ second second
+ secondarily secondarili
+ secondary secondari
+ seconded second
+ seconds second
+ secrecy secreci
+ secret secret
+ secretaries secretari
+ secretary secretari
+ secretly secretli
+ secrets secret
+ sect sect
+ sectary sectari
+ sects sect
+ secundo secundo
+ secure secur
+ securely secur
+ securing secur
+ security secur
+ sedg sedg
+ sedge sedg
+ sedges sedg
+ sedgy sedgi
+ sedition sedit
+ seditious sediti
+ seduc seduc
+ seduce seduc
+ seduced seduc
+ seducer seduc
+ seducing seduc
+ see see
+ seed seed
+ seeded seed
+ seedness seed
+ seeds seed
+ seedsman seedsman
+ seein seein
+ seeing see
+ seek seek
+ seeking seek
+ seeks seek
+ seel seel
+ seeling seel
+ seely seeli
+ seem seem
+ seemed seem
+ seemers seemer
+ seemest seemest
+ seemeth seemeth
+ seeming seem
+ seemingly seemingli
+ seemly seemli
+ seems seem
+ seen seen
+ seer seer
+ sees see
+ seese sees
+ seest seest
+ seethe seeth
+ seethes seeth
+ seething seeth
+ seeting seet
+ segregation segreg
+ seigneur seigneur
+ seigneurs seigneur
+ seiz seiz
+ seize seiz
+ seized seiz
+ seizes seiz
+ seizeth seizeth
+ seizing seiz
+ seizure seizur
+ seld seld
+ seldom seldom
+ select select
+ seleucus seleucu
+ self self
+ selfsame selfsam
+ sell sell
+ seller seller
+ selling sell
+ sells sell
+ selves selv
+ semblable semblabl
+ semblably semblabl
+ semblance semblanc
+ semblances semblanc
+ semblative sembl
+ semi semi
+ semicircle semicircl
+ semiramis semirami
+ semper semper
+ sempronius semproniu
+ senate senat
+ senator senat
+ senators senat
+ send send
+ sender sender
+ sendeth sendeth
+ sending send
+ sends send
+ seneca seneca
+ senior senior
+ seniory seniori
+ senis seni
+ sennet sennet
+ senoys senoi
+ sense sens
+ senseless senseless
+ senses sens
+ sensible sensibl
+ sensibly sensibl
+ sensual sensual
+ sensuality sensual
+ sent sent
+ sentenc sentenc
+ sentence sentenc
+ sentences sentenc
+ sententious sententi
+ sentinel sentinel
+ sentinels sentinel
+ separable separ
+ separate separ
+ separated separ
+ separates separ
+ separation separ
+ septentrion septentrion
+ sepulchre sepulchr
+ sepulchres sepulchr
+ sepulchring sepulchr
+ sequel sequel
+ sequence sequenc
+ sequent sequent
+ sequest sequest
+ sequester sequest
+ sequestration sequestr
+ sere sere
+ serenis sereni
+ serge serg
+ sergeant sergeant
+ serious seriou
+ seriously serious
+ sermon sermon
+ sermons sermon
+ serpent serpent
+ serpentine serpentin
+ serpents serpent
+ serpigo serpigo
+ serv serv
+ servant servant
+ servanted servant
+ servants servant
+ serve serv
+ served serv
+ server server
+ serves serv
+ serveth serveth
+ service servic
+ serviceable servic
+ services servic
+ servile servil
+ servility servil
+ servilius serviliu
+ serving serv
+ servingman servingman
+ servingmen servingmen
+ serviteur serviteur
+ servitor servitor
+ servitors servitor
+ servitude servitud
+ sessa sessa
+ session session
+ sessions session
+ sestos sesto
+ set set
+ setebos setebo
+ sets set
+ setter setter
+ setting set
+ settle settl
+ settled settl
+ settlest settlest
+ settling settl
+ sev sev
+ seven seven
+ sevenfold sevenfold
+ sevennight sevennight
+ seventeen seventeen
+ seventh seventh
+ seventy seventi
+ sever sever
+ several sever
+ severally sever
+ severals sever
+ severe sever
+ severed sever
+ severely sever
+ severest severest
+ severing sever
+ severity sever
+ severn severn
+ severs sever
+ sew sew
+ seward seward
+ sewer sewer
+ sewing sew
+ sex sex
+ sexes sex
+ sexton sexton
+ sextus sextu
+ seymour seymour
+ seyton seyton
+ sfoot sfoot
+ sh sh
+ shackle shackl
+ shackles shackl
+ shade shade
+ shades shade
+ shadow shadow
+ shadowed shadow
+ shadowing shadow
+ shadows shadow
+ shadowy shadowi
+ shady shadi
+ shafalus shafalu
+ shaft shaft
+ shafts shaft
+ shag shag
+ shak shak
+ shake shake
+ shaked shake
+ shaken shaken
+ shakes shake
+ shaking shake
+ shales shale
+ shall shall
+ shallenge shalleng
+ shallow shallow
+ shallowest shallowest
+ shallowly shallowli
+ shallows shallow
+ shalt shalt
+ sham sham
+ shambles shambl
+ shame shame
+ shamed shame
+ shameful shame
+ shamefully shamefulli
+ shameless shameless
+ shames shame
+ shamest shamest
+ shaming shame
+ shank shank
+ shanks shank
+ shap shap
+ shape shape
+ shaped shape
+ shapeless shapeless
+ shapen shapen
+ shapes shape
+ shaping shape
+ shar shar
+ shard shard
+ sharded shard
+ shards shard
+ share share
+ shared share
+ sharers sharer
+ shares share
+ sharing share
+ shark shark
+ sharp sharp
+ sharpen sharpen
+ sharpened sharpen
+ sharpens sharpen
+ sharper sharper
+ sharpest sharpest
+ sharply sharpli
+ sharpness sharp
+ sharps sharp
+ shatter shatter
+ shav shav
+ shave shave
+ shaven shaven
+ shaw shaw
+ she she
+ sheaf sheaf
+ sheal sheal
+ shear shear
+ shearers shearer
+ shearing shear
+ shearman shearman
+ shears shear
+ sheath sheath
+ sheathe sheath
+ sheathed sheath
+ sheathes sheath
+ sheathing sheath
+ sheaved sheav
+ sheaves sheav
+ shed shed
+ shedding shed
+ sheds shed
+ sheen sheen
+ sheep sheep
+ sheepcote sheepcot
+ sheepcotes sheepcot
+ sheeps sheep
+ sheepskins sheepskin
+ sheer sheer
+ sheet sheet
+ sheeted sheet
+ sheets sheet
+ sheffield sheffield
+ shelf shelf
+ shell shell
+ shells shell
+ shelt shelt
+ shelter shelter
+ shelters shelter
+ shelves shelv
+ shelving shelv
+ shelvy shelvi
+ shent shent
+ shepherd shepherd
+ shepherdes shepherd
+ shepherdess shepherdess
+ shepherdesses shepherdess
+ shepherds shepherd
+ sher sher
+ sheriff sheriff
+ sherris sherri
+ shes she
+ sheweth sheweth
+ shield shield
+ shielded shield
+ shields shield
+ shift shift
+ shifted shift
+ shifting shift
+ shifts shift
+ shilling shill
+ shillings shill
+ shin shin
+ shine shine
+ shines shine
+ shineth shineth
+ shining shine
+ shins shin
+ shiny shini
+ ship ship
+ shipboard shipboard
+ shipman shipman
+ shipmaster shipmast
+ shipmen shipmen
+ shipp shipp
+ shipped ship
+ shipping ship
+ ships ship
+ shipt shipt
+ shipwreck shipwreck
+ shipwrecking shipwreck
+ shipwright shipwright
+ shipwrights shipwright
+ shire shire
+ shirley shirlei
+ shirt shirt
+ shirts shirt
+ shive shive
+ shiver shiver
+ shivering shiver
+ shivers shiver
+ shoal shoal
+ shoals shoal
+ shock shock
+ shocks shock
+ shod shod
+ shoe shoe
+ shoeing shoe
+ shoemaker shoemak
+ shoes shoe
+ shog shog
+ shone shone
+ shook shook
+ shoon shoon
+ shoot shoot
+ shooter shooter
+ shootie shooti
+ shooting shoot
+ shoots shoot
+ shop shop
+ shops shop
+ shore shore
+ shores shore
+ shorn shorn
+ short short
+ shortcake shortcak
+ shorten shorten
+ shortened shorten
+ shortens shorten
+ shorter shorter
+ shortly shortli
+ shortness short
+ shot shot
+ shotten shotten
+ shoughs shough
+ should should
+ shoulder shoulder
+ shouldering shoulder
+ shoulders shoulder
+ shouldst shouldst
+ shout shout
+ shouted shout
+ shouting shout
+ shouts shout
+ shov shov
+ shove shove
+ shovel shovel
+ shovels shovel
+ show show
+ showed show
+ shower shower
+ showers shower
+ showest showest
+ showing show
+ shown shown
+ shows show
+ shreds shred
+ shrew shrew
+ shrewd shrewd
+ shrewdly shrewdli
+ shrewdness shrewd
+ shrewish shrewish
+ shrewishly shrewishli
+ shrewishness shrewish
+ shrews shrew
+ shrewsbury shrewsburi
+ shriek shriek
+ shrieking shriek
+ shrieks shriek
+ shrieve shriev
+ shrift shrift
+ shrill shrill
+ shriller shriller
+ shrills shrill
+ shrilly shrilli
+ shrimp shrimp
+ shrine shrine
+ shrink shrink
+ shrinking shrink
+ shrinks shrink
+ shriv shriv
+ shrive shrive
+ shriver shriver
+ shrives shrive
+ shriving shrive
+ shroud shroud
+ shrouded shroud
+ shrouding shroud
+ shrouds shroud
+ shrove shrove
+ shrow shrow
+ shrows shrow
+ shrub shrub
+ shrubs shrub
+ shrug shrug
+ shrugs shrug
+ shrunk shrunk
+ shudd shudd
+ shudders shudder
+ shuffl shuffl
+ shuffle shuffl
+ shuffled shuffl
+ shuffling shuffl
+ shun shun
+ shunless shunless
+ shunn shunn
+ shunned shun
+ shunning shun
+ shuns shun
+ shut shut
+ shuts shut
+ shuttle shuttl
+ shy shy
+ shylock shylock
+ si si
+ sibyl sibyl
+ sibylla sibylla
+ sibyls sibyl
+ sicil sicil
+ sicilia sicilia
+ sicilian sicilian
+ sicilius siciliu
+ sicils sicil
+ sicily sicili
+ sicinius siciniu
+ sick sick
+ sicken sicken
+ sickens sicken
+ sicker sicker
+ sickle sickl
+ sicklemen sicklemen
+ sicklied sickli
+ sickliness sickli
+ sickly sickli
+ sickness sick
+ sicles sicl
+ sicyon sicyon
+ side side
+ sided side
+ sides side
+ siege sieg
+ sieges sieg
+ sienna sienna
+ sies si
+ sieve siev
+ sift sift
+ sifted sift
+ sigeia sigeia
+ sigh sigh
+ sighed sigh
+ sighing sigh
+ sighs sigh
+ sight sight
+ sighted sight
+ sightless sightless
+ sightly sightli
+ sights sight
+ sign sign
+ signal signal
+ signet signet
+ signieur signieur
+ significant signific
+ significants signific
+ signified signifi
+ signifies signifi
+ signify signifi
+ signifying signifi
+ signior signior
+ signiories signiori
+ signiors signior
+ signiory signiori
+ signor signor
+ signories signori
+ signs sign
+ signum signum
+ silenc silenc
+ silence silenc
+ silenced silenc
+ silencing silenc
+ silent silent
+ silently silent
+ silius siliu
+ silk silk
+ silken silken
+ silkman silkman
+ silks silk
+ silliest silliest
+ silliness silli
+ silling sill
+ silly silli
+ silva silva
+ silver silver
+ silvered silver
+ silverly silverli
+ silvia silvia
+ silvius silviu
+ sima sima
+ simile simil
+ similes simil
+ simois simoi
+ simon simon
+ simony simoni
+ simp simp
+ simpcox simpcox
+ simple simpl
+ simpleness simpl
+ simpler simpler
+ simples simpl
+ simplicity simplic
+ simply simpli
+ simular simular
+ simulation simul
+ sin sin
+ since sinc
+ sincere sincer
+ sincerely sincer
+ sincerity sincer
+ sinel sinel
+ sinew sinew
+ sinewed sinew
+ sinews sinew
+ sinewy sinewi
+ sinful sin
+ sinfully sinfulli
+ sing sing
+ singe sing
+ singeing sing
+ singer singer
+ singes sing
+ singeth singeth
+ singing sing
+ single singl
+ singled singl
+ singleness singl
+ singly singli
+ sings sing
+ singular singular
+ singulariter singularit
+ singularities singular
+ singularity singular
+ singuled singul
+ sinister sinist
+ sink sink
+ sinking sink
+ sinks sink
+ sinn sinn
+ sinner sinner
+ sinners sinner
+ sinning sin
+ sinon sinon
+ sins sin
+ sip sip
+ sipping sip
+ sir sir
+ sire sire
+ siren siren
+ sirrah sirrah
+ sirs sir
+ sist sist
+ sister sister
+ sisterhood sisterhood
+ sisterly sisterli
+ sisters sister
+ sit sit
+ sith sith
+ sithence sithenc
+ sits sit
+ sitting sit
+ situate situat
+ situation situat
+ situations situat
+ siward siward
+ six six
+ sixpence sixpenc
+ sixpences sixpenc
+ sixpenny sixpenni
+ sixteen sixteen
+ sixth sixth
+ sixty sixti
+ siz siz
+ size size
+ sizes size
+ sizzle sizzl
+ skains skain
+ skamble skambl
+ skein skein
+ skelter skelter
+ skies ski
+ skilful skil
+ skilfully skilfulli
+ skill skill
+ skilless skilless
+ skillet skillet
+ skillful skill
+ skills skill
+ skim skim
+ skimble skimbl
+ skin skin
+ skinker skinker
+ skinny skinni
+ skins skin
+ skip skip
+ skipp skipp
+ skipper skipper
+ skipping skip
+ skirmish skirmish
+ skirmishes skirmish
+ skirr skirr
+ skirted skirt
+ skirts skirt
+ skittish skittish
+ skulking skulk
+ skull skull
+ skulls skull
+ sky sky
+ skyey skyei
+ skyish skyish
+ slab slab
+ slack slack
+ slackly slackli
+ slackness slack
+ slain slain
+ slake slake
+ sland sland
+ slander slander
+ slandered slander
+ slanderer slander
+ slanderers slander
+ slandering slander
+ slanderous slander
+ slanders slander
+ slash slash
+ slaught slaught
+ slaughter slaughter
+ slaughtered slaughter
+ slaughterer slaughter
+ slaughterman slaughterman
+ slaughtermen slaughtermen
+ slaughterous slaughter
+ slaughters slaughter
+ slave slave
+ slaver slaver
+ slavery slaveri
+ slaves slave
+ slavish slavish
+ slay slai
+ slayeth slayeth
+ slaying slai
+ slays slai
+ sleave sleav
+ sledded sled
+ sleek sleek
+ sleekly sleekli
+ sleep sleep
+ sleeper sleeper
+ sleepers sleeper
+ sleepest sleepest
+ sleeping sleep
+ sleeps sleep
+ sleepy sleepi
+ sleeve sleev
+ sleeves sleev
+ sleid sleid
+ sleided sleid
+ sleight sleight
+ sleights sleight
+ slender slender
+ slenderer slender
+ slenderly slenderli
+ slept slept
+ slew slew
+ slewest slewest
+ slice slice
+ slid slid
+ slide slide
+ slides slide
+ sliding slide
+ slight slight
+ slighted slight
+ slightest slightest
+ slightly slightli
+ slightness slight
+ slights slight
+ slily slili
+ slime slime
+ slimy slimi
+ slings sling
+ slink slink
+ slip slip
+ slipp slipp
+ slipper slipper
+ slippers slipper
+ slippery slipperi
+ slips slip
+ slish slish
+ slit slit
+ sliver sliver
+ slobb slobb
+ slomber slomber
+ slop slop
+ slope slope
+ slops slop
+ sloth sloth
+ slothful sloth
+ slough slough
+ slovenly slovenli
+ slovenry slovenri
+ slow slow
+ slower slower
+ slowly slowli
+ slowness slow
+ slubber slubber
+ slug slug
+ sluggard sluggard
+ sluggardiz sluggardiz
+ sluggish sluggish
+ sluic sluic
+ slumb slumb
+ slumber slumber
+ slumbers slumber
+ slumbery slumberi
+ slunk slunk
+ slut slut
+ sluts slut
+ sluttery slutteri
+ sluttish sluttish
+ sluttishness sluttish
+ sly sly
+ slys sly
+ smack smack
+ smacking smack
+ smacks smack
+ small small
+ smaller smaller
+ smallest smallest
+ smallness small
+ smalus smalu
+ smart smart
+ smarting smart
+ smartly smartli
+ smatch smatch
+ smatter smatter
+ smear smear
+ smell smell
+ smelling smell
+ smells smell
+ smelt smelt
+ smil smil
+ smile smile
+ smiled smile
+ smiles smile
+ smilest smilest
+ smilets smilet
+ smiling smile
+ smilingly smilingli
+ smirch smirch
+ smirched smirch
+ smit smit
+ smite smite
+ smites smite
+ smith smith
+ smithfield smithfield
+ smock smock
+ smocks smock
+ smok smok
+ smoke smoke
+ smoked smoke
+ smokes smoke
+ smoking smoke
+ smoky smoki
+ smooth smooth
+ smoothed smooth
+ smoothing smooth
+ smoothly smoothli
+ smoothness smooth
+ smooths smooth
+ smote smote
+ smoth smoth
+ smother smother
+ smothered smother
+ smothering smother
+ smug smug
+ smulkin smulkin
+ smutch smutch
+ snaffle snaffl
+ snail snail
+ snails snail
+ snake snake
+ snakes snake
+ snaky snaki
+ snap snap
+ snapp snapp
+ snapper snapper
+ snar snar
+ snare snare
+ snares snare
+ snarl snarl
+ snarleth snarleth
+ snarling snarl
+ snatch snatch
+ snatchers snatcher
+ snatches snatch
+ snatching snatch
+ sneak sneak
+ sneaking sneak
+ sneap sneap
+ sneaping sneap
+ sneck sneck
+ snip snip
+ snipe snipe
+ snipt snipt
+ snore snore
+ snores snore
+ snoring snore
+ snorting snort
+ snout snout
+ snow snow
+ snowballs snowbal
+ snowed snow
+ snowy snowi
+ snuff snuff
+ snuffs snuff
+ snug snug
+ so so
+ soak soak
+ soaking soak
+ soaks soak
+ soar soar
+ soaring soar
+ soars soar
+ sob sob
+ sobbing sob
+ sober sober
+ soberly soberli
+ sobriety sobrieti
+ sobs sob
+ sociable sociabl
+ societies societi
+ society societi
+ socks sock
+ socrates socrat
+ sod sod
+ sodden sodden
+ soe soe
+ soever soever
+ soft soft
+ soften soften
+ softens soften
+ softer softer
+ softest softest
+ softly softli
+ softness soft
+ soil soil
+ soiled soil
+ soilure soilur
+ soit soit
+ sojourn sojourn
+ sol sol
+ sola sola
+ solace solac
+ solanio solanio
+ sold sold
+ soldat soldat
+ solder solder
+ soldest soldest
+ soldier soldier
+ soldiers soldier
+ soldiership soldiership
+ sole sole
+ solely sole
+ solem solem
+ solemn solemn
+ solemness solem
+ solemnities solemn
+ solemnity solemn
+ solemniz solemniz
+ solemnize solemn
+ solemnized solemn
+ solemnly solemnli
+ soles sole
+ solicit solicit
+ solicitation solicit
+ solicited solicit
+ soliciting solicit
+ solicitings solicit
+ solicitor solicitor
+ solicits solicit
+ solid solid
+ solidares solidar
+ solidity solid
+ solinus solinu
+ solitary solitari
+ solomon solomon
+ solon solon
+ solum solum
+ solus solu
+ solyman solyman
+ some some
+ somebody somebodi
+ someone someon
+ somerset somerset
+ somerville somervil
+ something someth
+ sometime sometim
+ sometimes sometim
+ somever somev
+ somewhat somewhat
+ somewhere somewher
+ somewhither somewhith
+ somme somm
+ son son
+ sonance sonanc
+ song song
+ songs song
+ sonnet sonnet
+ sonneting sonnet
+ sonnets sonnet
+ sons son
+ sont sont
+ sonties sonti
+ soon soon
+ sooner sooner
+ soonest soonest
+ sooth sooth
+ soothe sooth
+ soothers soother
+ soothing sooth
+ soothsay soothsai
+ soothsayer soothsay
+ sooty sooti
+ sop sop
+ sophister sophist
+ sophisticated sophist
+ sophy sophi
+ sops sop
+ sorcerer sorcer
+ sorcerers sorcer
+ sorceress sorceress
+ sorceries sorceri
+ sorcery sorceri
+ sore sore
+ sorel sorel
+ sorely sore
+ sorer sorer
+ sores sore
+ sorrier sorrier
+ sorriest sorriest
+ sorrow sorrow
+ sorrowed sorrow
+ sorrowest sorrowest
+ sorrowful sorrow
+ sorrowing sorrow
+ sorrows sorrow
+ sorry sorri
+ sort sort
+ sortance sortanc
+ sorted sort
+ sorting sort
+ sorts sort
+ sossius sossiu
+ sot sot
+ soto soto
+ sots sot
+ sottish sottish
+ soud soud
+ sought sought
+ soul soul
+ sould sould
+ soulless soulless
+ souls soul
+ sound sound
+ sounded sound
+ sounder sounder
+ soundest soundest
+ sounding sound
+ soundless soundless
+ soundly soundli
+ soundness sound
+ soundpost soundpost
+ sounds sound
+ sour sour
+ source sourc
+ sources sourc
+ sourest sourest
+ sourly sourli
+ sours sour
+ sous sou
+ souse sous
+ south south
+ southam southam
+ southampton southampton
+ southerly southerli
+ southern southern
+ southward southward
+ southwark southwark
+ southwell southwel
+ souviendrai souviendrai
+ sov sov
+ sovereign sovereign
+ sovereignest sovereignest
+ sovereignly sovereignli
+ sovereignty sovereignti
+ sovereignvours sovereignvour
+ sow sow
+ sowing sow
+ sowl sowl
+ sowter sowter
+ space space
+ spaces space
+ spacious spaciou
+ spade spade
+ spades spade
+ spain spain
+ spak spak
+ spake spake
+ spakest spakest
+ span span
+ spangle spangl
+ spangled spangl
+ spaniard spaniard
+ spaniel spaniel
+ spaniels spaniel
+ spanish spanish
+ spann spann
+ spans span
+ spar spar
+ spare spare
+ spares spare
+ sparing spare
+ sparingly sparingli
+ spark spark
+ sparkle sparkl
+ sparkles sparkl
+ sparkling sparkl
+ sparks spark
+ sparrow sparrow
+ sparrows sparrow
+ sparta sparta
+ spartan spartan
+ spavin spavin
+ spavins spavin
+ spawn spawn
+ speak speak
+ speaker speaker
+ speakers speaker
+ speakest speakest
+ speaketh speaketh
+ speaking speak
+ speaks speak
+ spear spear
+ speargrass speargrass
+ spears spear
+ special special
+ specialities special
+ specially special
+ specialties specialti
+ specialty specialti
+ specify specifi
+ speciously specious
+ spectacle spectacl
+ spectacled spectacl
+ spectacles spectacl
+ spectators spectat
+ spectatorship spectatorship
+ speculation specul
+ speculations specul
+ speculative specul
+ sped sped
+ speech speech
+ speeches speech
+ speechless speechless
+ speed speed
+ speeded speed
+ speedier speedier
+ speediest speediest
+ speedily speedili
+ speediness speedi
+ speeding speed
+ speeds speed
+ speedy speedi
+ speens speen
+ spell spell
+ spelling spell
+ spells spell
+ spelt spelt
+ spencer spencer
+ spend spend
+ spendest spendest
+ spending spend
+ spends spend
+ spendthrift spendthrift
+ spent spent
+ sperato sperato
+ sperm sperm
+ spero spero
+ sperr sperr
+ spher spher
+ sphere sphere
+ sphered sphere
+ spheres sphere
+ spherical spheric
+ sphery spheri
+ sphinx sphinx
+ spice spice
+ spiced spice
+ spicery spiceri
+ spices spice
+ spider spider
+ spiders spider
+ spied spi
+ spies spi
+ spieth spieth
+ spightfully spightfulli
+ spigot spigot
+ spill spill
+ spilling spill
+ spills spill
+ spilt spilt
+ spilth spilth
+ spin spin
+ spinii spinii
+ spinners spinner
+ spinster spinster
+ spinsters spinster
+ spire spire
+ spirit spirit
+ spirited spirit
+ spiritless spiritless
+ spirits spirit
+ spiritual spiritu
+ spiritualty spiritualti
+ spirt spirt
+ spit spit
+ spital spital
+ spite spite
+ spited spite
+ spiteful spite
+ spites spite
+ spits spit
+ spitted spit
+ spitting spit
+ splay splai
+ spleen spleen
+ spleenful spleen
+ spleens spleen
+ spleeny spleeni
+ splendour splendour
+ splenitive splenit
+ splinter splinter
+ splinters splinter
+ split split
+ splits split
+ splitted split
+ splitting split
+ spoil spoil
+ spoils spoil
+ spok spok
+ spoke spoke
+ spoken spoken
+ spokes spoke
+ spokesman spokesman
+ sponge spong
+ spongy spongi
+ spoon spoon
+ spoons spoon
+ sport sport
+ sportful sport
+ sporting sport
+ sportive sportiv
+ sports sport
+ spot spot
+ spotless spotless
+ spots spot
+ spotted spot
+ spousal spousal
+ spouse spous
+ spout spout
+ spouting spout
+ spouts spout
+ sprag sprag
+ sprang sprang
+ sprat sprat
+ sprawl sprawl
+ spray sprai
+ sprays sprai
+ spread spread
+ spreading spread
+ spreads spread
+ sprighted spright
+ sprightful spright
+ sprightly sprightli
+ sprigs sprig
+ spring spring
+ springe spring
+ springes spring
+ springeth springeth
+ springhalt springhalt
+ springing spring
+ springs spring
+ springtime springtim
+ sprinkle sprinkl
+ sprinkles sprinkl
+ sprite sprite
+ sprited sprite
+ spritely sprite
+ sprites sprite
+ spriting sprite
+ sprout sprout
+ spruce spruce
+ sprung sprung
+ spun spun
+ spur spur
+ spurio spurio
+ spurn spurn
+ spurns spurn
+ spurr spurr
+ spurrer spurrer
+ spurring spur
+ spurs spur
+ spy spy
+ spying spy
+ squabble squabbl
+ squadron squadron
+ squadrons squadron
+ squand squand
+ squar squar
+ square squar
+ squarer squarer
+ squares squar
+ squash squash
+ squeak squeak
+ squeaking squeak
+ squeal squeal
+ squealing squeal
+ squeezes squeez
+ squeezing squeez
+ squele squel
+ squier squier
+ squints squint
+ squiny squini
+ squire squir
+ squires squir
+ squirrel squirrel
+ st st
+ stab stab
+ stabb stabb
+ stabbed stab
+ stabbing stab
+ stable stabl
+ stableness stabl
+ stables stabl
+ stablish stablish
+ stablishment stablish
+ stabs stab
+ stacks stack
+ staff staff
+ stafford stafford
+ staffords stafford
+ staffordshire staffordshir
+ stag stag
+ stage stage
+ stages stage
+ stagger stagger
+ staggering stagger
+ staggers stagger
+ stags stag
+ staid staid
+ staider staider
+ stain stain
+ stained stain
+ staines stain
+ staineth staineth
+ staining stain
+ stainless stainless
+ stains stain
+ stair stair
+ stairs stair
+ stake stake
+ stakes stake
+ stale stale
+ staled stale
+ stalk stalk
+ stalking stalk
+ stalks stalk
+ stall stall
+ stalling stall
+ stalls stall
+ stamford stamford
+ stammer stammer
+ stamp stamp
+ stamped stamp
+ stamps stamp
+ stanch stanch
+ stanchless stanchless
+ stand stand
+ standard standard
+ standards standard
+ stander stander
+ standers stander
+ standest standest
+ standeth standeth
+ standing stand
+ stands stand
+ staniel staniel
+ stanley stanlei
+ stanze stanz
+ stanzo stanzo
+ stanzos stanzo
+ staple stapl
+ staples stapl
+ star star
+ stare stare
+ stared stare
+ stares stare
+ staring stare
+ starings stare
+ stark stark
+ starkly starkli
+ starlight starlight
+ starling starl
+ starr starr
+ starry starri
+ stars star
+ start start
+ started start
+ starting start
+ startingly startingli
+ startle startl
+ startles startl
+ starts start
+ starv starv
+ starve starv
+ starved starv
+ starvelackey starvelackei
+ starveling starvel
+ starveth starveth
+ starving starv
+ state state
+ statelier stateli
+ stately state
+ states state
+ statesman statesman
+ statesmen statesmen
+ statilius statiliu
+ station station
+ statist statist
+ statists statist
+ statue statu
+ statues statu
+ stature statur
+ statures statur
+ statute statut
+ statutes statut
+ stave stave
+ staves stave
+ stay stai
+ stayed stai
+ stayest stayest
+ staying stai
+ stays stai
+ stead stead
+ steaded stead
+ steadfast steadfast
+ steadier steadier
+ steads stead
+ steal steal
+ stealer stealer
+ stealers stealer
+ stealing steal
+ steals steal
+ stealth stealth
+ stealthy stealthi
+ steed steed
+ steeds steed
+ steel steel
+ steeled steel
+ steely steeli
+ steep steep
+ steeped steep
+ steeple steepl
+ steeples steepl
+ steeps steep
+ steepy steepi
+ steer steer
+ steerage steerag
+ steering steer
+ steers steer
+ stelled stell
+ stem stem
+ stemming stem
+ stench stench
+ step step
+ stepdame stepdam
+ stephano stephano
+ stephen stephen
+ stepmothers stepmoth
+ stepp stepp
+ stepping step
+ steps step
+ sterile steril
+ sterility steril
+ sterling sterl
+ stern stern
+ sternage sternag
+ sterner sterner
+ sternest sternest
+ sternness stern
+ steterat steterat
+ stew stew
+ steward steward
+ stewards steward
+ stewardship stewardship
+ stewed stew
+ stews stew
+ stick stick
+ sticking stick
+ stickler stickler
+ sticks stick
+ stiff stiff
+ stiffen stiffen
+ stiffly stiffli
+ stifle stifl
+ stifled stifl
+ stifles stifl
+ stigmatic stigmat
+ stigmatical stigmat
+ stile stile
+ still still
+ stiller stiller
+ stillest stillest
+ stillness still
+ stilly stilli
+ sting sting
+ stinging sting
+ stingless stingless
+ stings sting
+ stink stink
+ stinking stink
+ stinkingly stinkingli
+ stinks stink
+ stint stint
+ stinted stint
+ stints stint
+ stir stir
+ stirr stirr
+ stirred stir
+ stirrer stirrer
+ stirrers stirrer
+ stirreth stirreth
+ stirring stir
+ stirrup stirrup
+ stirrups stirrup
+ stirs stir
+ stitchery stitcheri
+ stitches stitch
+ stithied stithi
+ stithy stithi
+ stoccadoes stoccado
+ stoccata stoccata
+ stock stock
+ stockfish stockfish
+ stocking stock
+ stockings stock
+ stockish stockish
+ stocks stock
+ stog stog
+ stogs stog
+ stoics stoic
+ stokesly stokesli
+ stol stol
+ stole stole
+ stolen stolen
+ stolest stolest
+ stomach stomach
+ stomachers stomach
+ stomaching stomach
+ stomachs stomach
+ ston ston
+ stone stone
+ stonecutter stonecutt
+ stones stone
+ stonish stonish
+ stony stoni
+ stood stood
+ stool stool
+ stools stool
+ stoop stoop
+ stooping stoop
+ stoops stoop
+ stop stop
+ stope stope
+ stopp stopp
+ stopped stop
+ stopping stop
+ stops stop
+ stor stor
+ store store
+ storehouse storehous
+ storehouses storehous
+ stores store
+ stories stori
+ storm storm
+ stormed storm
+ storming storm
+ storms storm
+ stormy stormi
+ story stori
+ stoup stoup
+ stoups stoup
+ stout stout
+ stouter stouter
+ stoutly stoutli
+ stoutness stout
+ stover stover
+ stow stow
+ stowage stowag
+ stowed stow
+ strachy strachi
+ stragglers straggler
+ straggling straggl
+ straight straight
+ straightest straightest
+ straightway straightwai
+ strain strain
+ strained strain
+ straining strain
+ strains strain
+ strait strait
+ straited strait
+ straiter straiter
+ straitly straitli
+ straitness strait
+ straits strait
+ strand strand
+ strange strang
+ strangely strang
+ strangeness strang
+ stranger stranger
+ strangers stranger
+ strangest strangest
+ strangle strangl
+ strangled strangl
+ strangler strangler
+ strangles strangl
+ strangling strangl
+ strappado strappado
+ straps strap
+ stratagem stratagem
+ stratagems stratagem
+ stratford stratford
+ strato strato
+ straw straw
+ strawberries strawberri
+ strawberry strawberri
+ straws straw
+ strawy strawi
+ stray strai
+ straying strai
+ strays strai
+ streak streak
+ streaks streak
+ stream stream
+ streamers streamer
+ streaming stream
+ streams stream
+ streching strech
+ street street
+ streets street
+ strength strength
+ strengthen strengthen
+ strengthened strengthen
+ strengthless strengthless
+ strengths strength
+ stretch stretch
+ stretched stretch
+ stretches stretch
+ stretching stretch
+ strew strew
+ strewing strew
+ strewings strew
+ strewments strewment
+ stricken stricken
+ strict strict
+ stricter stricter
+ strictest strictest
+ strictly strictli
+ stricture strictur
+ stride stride
+ strides stride
+ striding stride
+ strife strife
+ strifes strife
+ strik strik
+ strike strike
+ strikers striker
+ strikes strike
+ strikest strikest
+ striking strike
+ string string
+ stringless stringless
+ strings string
+ strip strip
+ stripes stripe
+ stripling stripl
+ striplings stripl
+ stripp stripp
+ stripping strip
+ striv striv
+ strive strive
+ strives strive
+ striving strive
+ strok strok
+ stroke stroke
+ strokes stroke
+ strond strond
+ stronds strond
+ strong strong
+ stronger stronger
+ strongest strongest
+ strongly strongli
+ strooke strook
+ strossers strosser
+ strove strove
+ strown strown
+ stroy stroi
+ struck struck
+ strucken strucken
+ struggle struggl
+ struggles struggl
+ struggling struggl
+ strumpet strumpet
+ strumpeted strumpet
+ strumpets strumpet
+ strung strung
+ strut strut
+ struts strut
+ strutted strut
+ strutting strut
+ stubble stubbl
+ stubborn stubborn
+ stubbornest stubbornest
+ stubbornly stubbornli
+ stubbornness stubborn
+ stuck stuck
+ studded stud
+ student student
+ students student
+ studied studi
+ studies studi
+ studious studiou
+ studiously studious
+ studs stud
+ study studi
+ studying studi
+ stuff stuff
+ stuffing stuf
+ stuffs stuff
+ stumble stumbl
+ stumbled stumbl
+ stumblest stumblest
+ stumbling stumbl
+ stump stump
+ stumps stump
+ stung stung
+ stupefy stupefi
+ stupid stupid
+ stupified stupifi
+ stuprum stuprum
+ sturdy sturdi
+ sty sty
+ styga styga
+ stygian stygian
+ styl styl
+ style style
+ styx styx
+ su su
+ sub sub
+ subcontracted subcontract
+ subdu subdu
+ subdue subdu
+ subdued subdu
+ subduements subduement
+ subdues subdu
+ subduing subdu
+ subject subject
+ subjected subject
+ subjection subject
+ subjects subject
+ submerg submerg
+ submission submiss
+ submissive submiss
+ submit submit
+ submits submit
+ submitting submit
+ suborn suborn
+ subornation suborn
+ suborned suborn
+ subscrib subscrib
+ subscribe subscrib
+ subscribed subscrib
+ subscribes subscrib
+ subscription subscript
+ subsequent subsequ
+ subsidies subsidi
+ subsidy subsidi
+ subsist subsist
+ subsisting subsist
+ substance substanc
+ substances substanc
+ substantial substanti
+ substitute substitut
+ substituted substitut
+ substitutes substitut
+ substitution substitut
+ subtile subtil
+ subtilly subtilli
+ subtle subtl
+ subtleties subtleti
+ subtlety subtleti
+ subtly subtli
+ subtractors subtractor
+ suburbs suburb
+ subversion subvers
+ subverts subvert
+ succedant succed
+ succeed succe
+ succeeded succeed
+ succeeders succeed
+ succeeding succeed
+ succeeds succe
+ success success
+ successantly successantli
+ successes success
+ successful success
+ successfully successfulli
+ succession success
+ successive success
+ successively success
+ successor successor
+ successors successor
+ succour succour
+ succours succour
+ such such
+ suck suck
+ sucker sucker
+ suckers sucker
+ sucking suck
+ suckle suckl
+ sucks suck
+ sudden sudden
+ suddenly suddenli
+ sue sue
+ sued su
+ suerly suerli
+ sues sue
+ sueth sueth
+ suff suff
+ suffer suffer
+ sufferance suffer
+ sufferances suffer
+ suffered suffer
+ suffering suffer
+ suffers suffer
+ suffic suffic
+ suffice suffic
+ sufficed suffic
+ suffices suffic
+ sufficeth sufficeth
+ sufficiency suffici
+ sufficient suffici
+ sufficiently suffici
+ sufficing suffic
+ sufficit sufficit
+ suffigance suffig
+ suffocate suffoc
+ suffocating suffoc
+ suffocation suffoc
+ suffolk suffolk
+ suffrage suffrag
+ suffrages suffrag
+ sug sug
+ sugar sugar
+ sugarsop sugarsop
+ suggest suggest
+ suggested suggest
+ suggesting suggest
+ suggestion suggest
+ suggestions suggest
+ suggests suggest
+ suis sui
+ suit suit
+ suitable suitabl
+ suited suit
+ suiting suit
+ suitor suitor
+ suitors suitor
+ suits suit
+ suivez suivez
+ sullen sullen
+ sullens sullen
+ sullied sulli
+ sullies sulli
+ sully sulli
+ sulph sulph
+ sulpherous sulpher
+ sulphur sulphur
+ sulphurous sulphur
+ sultan sultan
+ sultry sultri
+ sum sum
+ sumless sumless
+ summ summ
+ summa summa
+ summary summari
+ summer summer
+ summers summer
+ summit summit
+ summon summon
+ summoners summon
+ summons summon
+ sumpter sumpter
+ sumptuous sumptuou
+ sumptuously sumptuous
+ sums sum
+ sun sun
+ sunbeams sunbeam
+ sunburning sunburn
+ sunburnt sunburnt
+ sund sund
+ sunday sundai
+ sundays sundai
+ sunder sunder
+ sunders sunder
+ sundry sundri
+ sung sung
+ sunk sunk
+ sunken sunken
+ sunny sunni
+ sunrising sunris
+ suns sun
+ sunset sunset
+ sunshine sunshin
+ sup sup
+ super super
+ superficial superfici
+ superficially superfici
+ superfluity superflu
+ superfluous superflu
+ superfluously superflu
+ superflux superflux
+ superior superior
+ supernal supern
+ supernatural supernatur
+ superpraise superprais
+ superscript superscript
+ superscription superscript
+ superserviceable superservic
+ superstition superstit
+ superstitious superstiti
+ superstitiously superstiti
+ supersubtle supersubtl
+ supervise supervis
+ supervisor supervisor
+ supp supp
+ supper supper
+ suppers supper
+ suppertime suppertim
+ supping sup
+ supplant supplant
+ supple suppl
+ suppler suppler
+ suppliance supplianc
+ suppliant suppliant
+ suppliants suppliant
+ supplicant supplic
+ supplication supplic
+ supplications supplic
+ supplie suppli
+ supplied suppli
+ supplies suppli
+ suppliest suppliest
+ supply suppli
+ supplyant supplyant
+ supplying suppli
+ supplyment supplyment
+ support support
+ supportable support
+ supportance support
+ supported support
+ supporter support
+ supporters support
+ supporting support
+ supportor supportor
+ suppos suppo
+ supposal suppos
+ suppose suppos
+ supposed suppos
+ supposes suppos
+ supposest supposest
+ supposing suppos
+ supposition supposit
+ suppress suppress
+ suppressed suppress
+ suppresseth suppresseth
+ supremacy supremaci
+ supreme suprem
+ sups sup
+ sur sur
+ surance suranc
+ surcease surceas
+ surd surd
+ sure sure
+ surecard surecard
+ surely sure
+ surer surer
+ surest surest
+ sureties sureti
+ surety sureti
+ surfeit surfeit
+ surfeited surfeit
+ surfeiter surfeit
+ surfeiting surfeit
+ surfeits surfeit
+ surge surg
+ surgeon surgeon
+ surgeons surgeon
+ surgere surger
+ surgery surgeri
+ surges surg
+ surly surli
+ surmis surmi
+ surmise surmis
+ surmised surmis
+ surmises surmis
+ surmount surmount
+ surmounted surmount
+ surmounts surmount
+ surnam surnam
+ surname surnam
+ surnamed surnam
+ surpasseth surpasseth
+ surpassing surpass
+ surplice surplic
+ surplus surplu
+ surpris surpri
+ surprise surpris
+ surprised surpris
+ surrender surrend
+ surrey surrei
+ surreys surrei
+ survey survei
+ surveyest surveyest
+ surveying survei
+ surveyor surveyor
+ surveyors surveyor
+ surveys survei
+ survive surviv
+ survives surviv
+ survivor survivor
+ susan susan
+ suspect suspect
+ suspected suspect
+ suspecting suspect
+ suspects suspect
+ suspend suspend
+ suspense suspens
+ suspicion suspicion
+ suspicions suspicion
+ suspicious suspici
+ suspiration suspir
+ suspire suspir
+ sust sust
+ sustain sustain
+ sustaining sustain
+ sutler sutler
+ sutton sutton
+ suum suum
+ swabber swabber
+ swaddling swaddl
+ swag swag
+ swagg swagg
+ swagger swagger
+ swaggerer swagger
+ swaggerers swagger
+ swaggering swagger
+ swain swain
+ swains swain
+ swallow swallow
+ swallowed swallow
+ swallowing swallow
+ swallows swallow
+ swam swam
+ swan swan
+ swans swan
+ sward sward
+ sware sware
+ swarm swarm
+ swarming swarm
+ swart swart
+ swarth swarth
+ swarths swarth
+ swarthy swarthi
+ swashers swasher
+ swashing swash
+ swath swath
+ swathing swath
+ swathling swathl
+ sway swai
+ swaying swai
+ sways swai
+ swear swear
+ swearer swearer
+ swearers swearer
+ swearest swearest
+ swearing swear
+ swearings swear
+ swears swear
+ sweat sweat
+ sweaten sweaten
+ sweating sweat
+ sweats sweat
+ sweaty sweati
+ sweep sweep
+ sweepers sweeper
+ sweeps sweep
+ sweet sweet
+ sweeten sweeten
+ sweetens sweeten
+ sweeter sweeter
+ sweetest sweetest
+ sweetheart sweetheart
+ sweeting sweet
+ sweetly sweetli
+ sweetmeats sweetmeat
+ sweetness sweet
+ sweets sweet
+ swell swell
+ swelling swell
+ swellings swell
+ swells swell
+ swelter swelter
+ sweno sweno
+ swept swept
+ swerve swerv
+ swerver swerver
+ swerving swerv
+ swift swift
+ swifter swifter
+ swiftest swiftest
+ swiftly swiftli
+ swiftness swift
+ swill swill
+ swills swill
+ swim swim
+ swimmer swimmer
+ swimmers swimmer
+ swimming swim
+ swims swim
+ swine swine
+ swineherds swineherd
+ swing swing
+ swinge swing
+ swinish swinish
+ swinstead swinstead
+ switches switch
+ swits swit
+ switzers switzer
+ swol swol
+ swoll swoll
+ swoln swoln
+ swoon swoon
+ swooned swoon
+ swooning swoon
+ swoons swoon
+ swoop swoop
+ swoopstake swoopstak
+ swor swor
+ sword sword
+ sworder sworder
+ swords sword
+ swore swore
+ sworn sworn
+ swounded swound
+ swounds swound
+ swum swum
+ swung swung
+ sy sy
+ sycamore sycamor
+ sycorax sycorax
+ sylla sylla
+ syllable syllabl
+ syllables syllabl
+ syllogism syllog
+ symbols symbol
+ sympathise sympathis
+ sympathiz sympathiz
+ sympathize sympath
+ sympathized sympath
+ sympathy sympathi
+ synagogue synagogu
+ synod synod
+ synods synod
+ syracuse syracus
+ syracusian syracusian
+ syracusians syracusian
+ syria syria
+ syrups syrup
+ t t
+ ta ta
+ taber taber
+ table tabl
+ tabled tabl
+ tables tabl
+ tablet tablet
+ tabor tabor
+ taborer tabor
+ tabors tabor
+ tabourines tabourin
+ taciturnity taciturn
+ tack tack
+ tackle tackl
+ tackled tackl
+ tackles tackl
+ tackling tackl
+ tacklings tackl
+ taddle taddl
+ tadpole tadpol
+ taffeta taffeta
+ taffety taffeti
+ tag tag
+ tagrag tagrag
+ tah tah
+ tail tail
+ tailor tailor
+ tailors tailor
+ tails tail
+ taint taint
+ tainted taint
+ tainting taint
+ taints taint
+ tainture taintur
+ tak tak
+ take take
+ taken taken
+ taker taker
+ takes take
+ takest takest
+ taketh taketh
+ taking take
+ tal tal
+ talbot talbot
+ talbotites talbotit
+ talbots talbot
+ tale tale
+ talent talent
+ talents talent
+ taleporter taleport
+ tales tale
+ talk talk
+ talked talk
+ talker talker
+ talkers talker
+ talkest talkest
+ talking talk
+ talks talk
+ tall tall
+ taller taller
+ tallest tallest
+ tallies talli
+ tallow tallow
+ tally talli
+ talons talon
+ tam tam
+ tambourines tambourin
+ tame tame
+ tamed tame
+ tamely tame
+ tameness tame
+ tamer tamer
+ tames tame
+ taming tame
+ tamora tamora
+ tamworth tamworth
+ tan tan
+ tang tang
+ tangle tangl
+ tangled tangl
+ tank tank
+ tanlings tanl
+ tann tann
+ tanned tan
+ tanner tanner
+ tanquam tanquam
+ tanta tanta
+ tantaene tantaen
+ tap tap
+ tape tape
+ taper taper
+ tapers taper
+ tapestries tapestri
+ tapestry tapestri
+ taphouse taphous
+ tapp tapp
+ tapster tapster
+ tapsters tapster
+ tar tar
+ tardied tardi
+ tardily tardili
+ tardiness tardi
+ tardy tardi
+ tarentum tarentum
+ targe targ
+ targes targ
+ target target
+ targets target
+ tarpeian tarpeian
+ tarquin tarquin
+ tarquins tarquin
+ tarr tarr
+ tarre tarr
+ tarriance tarrianc
+ tarried tarri
+ tarries tarri
+ tarry tarri
+ tarrying tarri
+ tart tart
+ tartar tartar
+ tartars tartar
+ tartly tartli
+ tartness tart
+ task task
+ tasker tasker
+ tasking task
+ tasks task
+ tassel tassel
+ taste tast
+ tasted tast
+ tastes tast
+ tasting tast
+ tatt tatt
+ tatter tatter
+ tattered tatter
+ tatters tatter
+ tattle tattl
+ tattling tattl
+ tattlings tattl
+ taught taught
+ taunt taunt
+ taunted taunt
+ taunting taunt
+ tauntingly tauntingli
+ taunts taunt
+ taurus tauru
+ tavern tavern
+ taverns tavern
+ tavy tavi
+ tawdry tawdri
+ tawny tawni
+ tax tax
+ taxation taxat
+ taxations taxat
+ taxes tax
+ taxing tax
+ tc tc
+ te te
+ teach teach
+ teacher teacher
+ teachers teacher
+ teaches teach
+ teachest teachest
+ teacheth teacheth
+ teaching teach
+ team team
+ tear tear
+ tearful tear
+ tearing tear
+ tears tear
+ tearsheet tearsheet
+ teat teat
+ tedious tediou
+ tediously tedious
+ tediousness tedious
+ teem teem
+ teeming teem
+ teems teem
+ teen teen
+ teeth teeth
+ teipsum teipsum
+ telamon telamon
+ telamonius telamoniu
+ tell tell
+ teller teller
+ telling tell
+ tells tell
+ tellus tellu
+ temp temp
+ temper temper
+ temperality temper
+ temperance temper
+ temperate temper
+ temperately temper
+ tempers temper
+ tempest tempest
+ tempests tempest
+ tempestuous tempestu
+ temple templ
+ temples templ
+ temporal tempor
+ temporary temporari
+ temporiz temporiz
+ temporize tempor
+ temporizer tempor
+ temps temp
+ tempt tempt
+ temptation temptat
+ temptations temptat
+ tempted tempt
+ tempter tempter
+ tempters tempter
+ tempteth tempteth
+ tempting tempt
+ tempts tempt
+ ten ten
+ tenable tenabl
+ tenant tenant
+ tenantius tenantiu
+ tenantless tenantless
+ tenants tenant
+ tench tench
+ tend tend
+ tendance tendanc
+ tended tend
+ tender tender
+ tendered tender
+ tenderly tenderli
+ tenderness tender
+ tenders tender
+ tending tend
+ tends tend
+ tenedos tenedo
+ tenement tenement
+ tenements tenement
+ tenfold tenfold
+ tennis tenni
+ tenour tenour
+ tenours tenour
+ tens ten
+ tent tent
+ tented tent
+ tenth tenth
+ tenths tenth
+ tents tent
+ tenure tenur
+ tenures tenur
+ tercel tercel
+ tereus tereu
+ term term
+ termagant termag
+ termed term
+ terminations termin
+ termless termless
+ terms term
+ terra terra
+ terrace terrac
+ terram terram
+ terras terra
+ terre terr
+ terrene terren
+ terrestrial terrestri
+ terrible terribl
+ terribly terribl
+ territories territori
+ territory territori
+ terror terror
+ terrors terror
+ tertian tertian
+ tertio tertio
+ test test
+ testament testament
+ tested test
+ tester tester
+ testern testern
+ testify testifi
+ testimonied testimoni
+ testimonies testimoni
+ testimony testimoni
+ testiness testi
+ testril testril
+ testy testi
+ tetchy tetchi
+ tether tether
+ tetter tetter
+ tevil tevil
+ tewksbury tewksburi
+ text text
+ tgv tgv
+ th th
+ thaes thae
+ thames thame
+ than than
+ thane thane
+ thanes thane
+ thank thank
+ thanked thank
+ thankful thank
+ thankfully thankfulli
+ thankfulness thank
+ thanking thank
+ thankings thank
+ thankless thankless
+ thanks thank
+ thanksgiving thanksgiv
+ thasos thaso
+ that that
+ thatch thatch
+ thaw thaw
+ thawing thaw
+ thaws thaw
+ the the
+ theatre theatr
+ theban theban
+ thebes thebe
+ thee thee
+ theft theft
+ thefts theft
+ thein thein
+ their their
+ theirs their
+ theise theis
+ them them
+ theme theme
+ themes theme
+ themselves themselv
+ then then
+ thence thenc
+ thenceforth thenceforth
+ theoric theoric
+ there there
+ thereabout thereabout
+ thereabouts thereabout
+ thereafter thereaft
+ thereat thereat
+ thereby therebi
+ therefore therefor
+ therein therein
+ thereof thereof
+ thereon thereon
+ thereto thereto
+ thereunto thereunto
+ thereupon thereupon
+ therewith therewith
+ therewithal therewith
+ thersites thersit
+ these these
+ theseus theseu
+ thessalian thessalian
+ thessaly thessali
+ thetis theti
+ thews thew
+ they thei
+ thick thick
+ thicken thicken
+ thickens thicken
+ thicker thicker
+ thickest thickest
+ thicket thicket
+ thickskin thickskin
+ thief thief
+ thievery thieveri
+ thieves thiev
+ thievish thievish
+ thigh thigh
+ thighs thigh
+ thimble thimbl
+ thimbles thimbl
+ thin thin
+ thine thine
+ thing thing
+ things thing
+ think think
+ thinkest thinkest
+ thinking think
+ thinkings think
+ thinks think
+ thinkst thinkst
+ thinly thinli
+ third third
+ thirdly thirdli
+ thirds third
+ thirst thirst
+ thirsting thirst
+ thirsts thirst
+ thirsty thirsti
+ thirteen thirteen
+ thirties thirti
+ thirtieth thirtieth
+ thirty thirti
+ this thi
+ thisby thisbi
+ thisne thisn
+ thistle thistl
+ thistles thistl
+ thither thither
+ thitherward thitherward
+ thoas thoa
+ thomas thoma
+ thorn thorn
+ thorns thorn
+ thorny thorni
+ thorough thorough
+ thoroughly thoroughli
+ those those
+ thou thou
+ though though
+ thought thought
+ thoughtful thought
+ thoughts thought
+ thousand thousand
+ thousands thousand
+ thracian thracian
+ thraldom thraldom
+ thrall thrall
+ thralled thrall
+ thralls thrall
+ thrash thrash
+ thrasonical thrason
+ thread thread
+ threadbare threadbar
+ threaden threaden
+ threading thread
+ threat threat
+ threaten threaten
+ threatening threaten
+ threatens threaten
+ threatest threatest
+ threats threat
+ three three
+ threefold threefold
+ threepence threepenc
+ threepile threepil
+ threes three
+ threescore threescor
+ thresher thresher
+ threshold threshold
+ threw threw
+ thrice thrice
+ thrift thrift
+ thriftless thriftless
+ thrifts thrift
+ thrifty thrifti
+ thrill thrill
+ thrilling thrill
+ thrills thrill
+ thrive thrive
+ thrived thrive
+ thrivers thriver
+ thrives thrive
+ thriving thrive
+ throat throat
+ throats throat
+ throbbing throb
+ throbs throb
+ throca throca
+ throe throe
+ throes throe
+ thromuldo thromuldo
+ thron thron
+ throne throne
+ throned throne
+ thrones throne
+ throng throng
+ thronging throng
+ throngs throng
+ throstle throstl
+ throttle throttl
+ through through
+ throughfare throughfar
+ throughfares throughfar
+ throughly throughli
+ throughout throughout
+ throw throw
+ thrower thrower
+ throwest throwest
+ throwing throw
+ thrown thrown
+ throws throw
+ thrum thrum
+ thrumm thrumm
+ thrush thrush
+ thrust thrust
+ thrusteth thrusteth
+ thrusting thrust
+ thrusts thrust
+ thumb thumb
+ thumbs thumb
+ thump thump
+ thund thund
+ thunder thunder
+ thunderbolt thunderbolt
+ thunderbolts thunderbolt
+ thunderer thunder
+ thunders thunder
+ thunderstone thunderston
+ thunderstroke thunderstrok
+ thurio thurio
+ thursday thursdai
+ thus thu
+ thwack thwack
+ thwart thwart
+ thwarted thwart
+ thwarting thwart
+ thwartings thwart
+ thy thy
+ thyme thyme
+ thymus thymu
+ thyreus thyreu
+ thyself thyself
+ ti ti
+ tib tib
+ tiber tiber
+ tiberio tiberio
+ tibey tibei
+ ticed tice
+ tick tick
+ tickl tickl
+ tickle tickl
+ tickled tickl
+ tickles tickl
+ tickling tickl
+ ticklish ticklish
+ tiddle tiddl
+ tide tide
+ tides tide
+ tidings tide
+ tidy tidi
+ tie tie
+ tied ti
+ ties ti
+ tiff tiff
+ tiger tiger
+ tigers tiger
+ tight tight
+ tightly tightli
+ tike tike
+ til til
+ tile tile
+ till till
+ tillage tillag
+ tilly tilli
+ tilt tilt
+ tilter tilter
+ tilth tilth
+ tilting tilt
+ tilts tilt
+ tiltyard tiltyard
+ tim tim
+ timandra timandra
+ timber timber
+ time time
+ timeless timeless
+ timelier timeli
+ timely time
+ times time
+ timon timon
+ timor timor
+ timorous timor
+ timorously timor
+ tinct tinct
+ tincture tinctur
+ tinctures tinctur
+ tinder tinder
+ tingling tingl
+ tinker tinker
+ tinkers tinker
+ tinsel tinsel
+ tiny tini
+ tip tip
+ tipp tipp
+ tippling tippl
+ tips tip
+ tipsy tipsi
+ tiptoe tipto
+ tir tir
+ tire tire
+ tired tire
+ tires tire
+ tirest tirest
+ tiring tire
+ tirra tirra
+ tirrits tirrit
+ tis ti
+ tish tish
+ tisick tisick
+ tissue tissu
+ titan titan
+ titania titania
+ tithe tith
+ tithed tith
+ tithing tith
+ titinius titiniu
+ title titl
+ titled titl
+ titleless titleless
+ titles titl
+ tittle tittl
+ tittles tittl
+ titular titular
+ titus titu
+ tn tn
+ to to
+ toad toad
+ toads toad
+ toadstool toadstool
+ toast toast
+ toasted toast
+ toasting toast
+ toasts toast
+ toaze toaz
+ toby tobi
+ tock tock
+ tod tod
+ today todai
+ todpole todpol
+ tods tod
+ toe toe
+ toes toe
+ tofore tofor
+ toge toge
+ toged toge
+ together togeth
+ toil toil
+ toiled toil
+ toiling toil
+ toils toil
+ token token
+ tokens token
+ told told
+ toledo toledo
+ tolerable toler
+ toll toll
+ tolling toll
+ tom tom
+ tomb tomb
+ tombe tomb
+ tombed tomb
+ tombless tombless
+ tomboys tomboi
+ tombs tomb
+ tomorrow tomorrow
+ tomyris tomyri
+ ton ton
+ tongs tong
+ tongu tongu
+ tongue tongu
+ tongued tongu
+ tongueless tongueless
+ tongues tongu
+ tonight tonight
+ too too
+ took took
+ tool tool
+ tools tool
+ tooth tooth
+ toothache toothach
+ toothpick toothpick
+ toothpicker toothpick
+ top top
+ topas topa
+ topful top
+ topgallant topgal
+ topless topless
+ topmast topmast
+ topp topp
+ topping top
+ topple toppl
+ topples toppl
+ tops top
+ topsail topsail
+ topsy topsi
+ torch torch
+ torchbearer torchbear
+ torchbearers torchbear
+ torcher torcher
+ torches torch
+ torchlight torchlight
+ tore tore
+ torment torment
+ tormenta tormenta
+ tormente torment
+ tormented torment
+ tormenting torment
+ tormentors tormentor
+ torments torment
+ torn torn
+ torrent torrent
+ tortive tortiv
+ tortoise tortois
+ tortur tortur
+ torture tortur
+ tortured tortur
+ torturer tortur
+ torturers tortur
+ tortures tortur
+ torturest torturest
+ torturing tortur
+ toryne toryn
+ toss toss
+ tossed toss
+ tosseth tosseth
+ tossing toss
+ tot tot
+ total total
+ totally total
+ tott tott
+ tottered totter
+ totters totter
+ tou tou
+ touch touch
+ touched touch
+ touches touch
+ toucheth toucheth
+ touching touch
+ touchstone touchston
+ tough tough
+ tougher tougher
+ toughness tough
+ touraine tourain
+ tournaments tournament
+ tours tour
+ tous tou
+ tout tout
+ touze touz
+ tow tow
+ toward toward
+ towardly towardli
+ towards toward
+ tower tower
+ towering tower
+ towers tower
+ town town
+ towns town
+ township township
+ townsman townsman
+ townsmen townsmen
+ towton towton
+ toy toi
+ toys toi
+ trace trace
+ traces trace
+ track track
+ tract tract
+ tractable tractabl
+ trade trade
+ traded trade
+ traders trader
+ trades trade
+ tradesman tradesman
+ tradesmen tradesmen
+ trading trade
+ tradition tradit
+ traditional tradit
+ traduc traduc
+ traduced traduc
+ traducement traduc
+ traffic traffic
+ traffickers traffick
+ traffics traffic
+ tragedian tragedian
+ tragedians tragedian
+ tragedies tragedi
+ tragedy tragedi
+ tragic tragic
+ tragical tragic
+ trail trail
+ train train
+ trained train
+ training train
+ trains train
+ trait trait
+ traitor traitor
+ traitorly traitorli
+ traitorous traitor
+ traitorously traitor
+ traitors traitor
+ traitress traitress
+ traject traject
+ trammel trammel
+ trample trampl
+ trampled trampl
+ trampling trampl
+ tranc tranc
+ trance tranc
+ tranio tranio
+ tranquil tranquil
+ tranquillity tranquil
+ transcendence transcend
+ transcends transcend
+ transferred transfer
+ transfigur transfigur
+ transfix transfix
+ transform transform
+ transformation transform
+ transformations transform
+ transformed transform
+ transgress transgress
+ transgresses transgress
+ transgressing transgress
+ transgression transgress
+ translate translat
+ translated translat
+ translates translat
+ translation translat
+ transmigrates transmigr
+ transmutation transmut
+ transparent transpar
+ transport transport
+ transportance transport
+ transported transport
+ transporting transport
+ transports transport
+ transpose transpos
+ transshape transshap
+ trap trap
+ trapp trapp
+ trappings trap
+ traps trap
+ trash trash
+ travail travail
+ travails travail
+ travel travel
+ traveler travel
+ traveling travel
+ travell travel
+ travelled travel
+ traveller travel
+ travellers travel
+ travellest travellest
+ travelling travel
+ travels travel
+ travers traver
+ traverse travers
+ tray trai
+ treacherous treacher
+ treacherously treacher
+ treachers treacher
+ treachery treacheri
+ tread tread
+ treading tread
+ treads tread
+ treason treason
+ treasonable treason
+ treasonous treason
+ treasons treason
+ treasure treasur
+ treasurer treasur
+ treasures treasur
+ treasuries treasuri
+ treasury treasuri
+ treat treat
+ treaties treati
+ treatise treatis
+ treats treat
+ treaty treati
+ treble trebl
+ trebled trebl
+ trebles trebl
+ trebonius treboniu
+ tree tree
+ trees tree
+ tremble trembl
+ trembled trembl
+ trembles trembl
+ tremblest tremblest
+ trembling trembl
+ tremblingly tremblingli
+ tremor tremor
+ trempling trempl
+ trench trench
+ trenchant trenchant
+ trenched trench
+ trencher trencher
+ trenchering trencher
+ trencherman trencherman
+ trenchers trencher
+ trenches trench
+ trenching trench
+ trent trent
+ tres tre
+ trespass trespass
+ trespasses trespass
+ tressel tressel
+ tresses tress
+ treys trei
+ trial trial
+ trials trial
+ trib trib
+ tribe tribe
+ tribes tribe
+ tribulation tribul
+ tribunal tribun
+ tribune tribun
+ tribunes tribun
+ tributaries tributari
+ tributary tributari
+ tribute tribut
+ tributes tribut
+ trice trice
+ trick trick
+ tricking trick
+ trickling trickl
+ tricks trick
+ tricksy tricksi
+ trident trident
+ tried tri
+ trier trier
+ trifle trifl
+ trifled trifl
+ trifler trifler
+ trifles trifl
+ trifling trifl
+ trigon trigon
+ trill trill
+ trim trim
+ trimly trimli
+ trimm trimm
+ trimmed trim
+ trimming trim
+ trims trim
+ trinculo trinculo
+ trinculos trinculo
+ trinkets trinket
+ trip trip
+ tripartite tripartit
+ tripe tripe
+ triple tripl
+ triplex triplex
+ tripoli tripoli
+ tripolis tripoli
+ tripp tripp
+ tripping trip
+ trippingly trippingli
+ trips trip
+ tristful trist
+ triton triton
+ triumph triumph
+ triumphant triumphant
+ triumphantly triumphantli
+ triumpher triumpher
+ triumphers triumpher
+ triumphing triumph
+ triumphs triumph
+ triumvir triumvir
+ triumvirate triumvir
+ triumvirs triumvir
+ triumviry triumviri
+ trivial trivial
+ troat troat
+ trod trod
+ trodden trodden
+ troiant troiant
+ troien troien
+ troilus troilu
+ troiluses troilus
+ trojan trojan
+ trojans trojan
+ troll troll
+ tromperies tromperi
+ trompet trompet
+ troop troop
+ trooping troop
+ troops troop
+ trop trop
+ trophies trophi
+ trophy trophi
+ tropically tropic
+ trot trot
+ troth troth
+ trothed troth
+ troths troth
+ trots trot
+ trotting trot
+ trouble troubl
+ troubled troubl
+ troubler troubler
+ troubles troubl
+ troublesome troublesom
+ troublest troublest
+ troublous troublou
+ trough trough
+ trout trout
+ trouts trout
+ trovato trovato
+ trow trow
+ trowel trowel
+ trowest trowest
+ troy troi
+ troyan troyan
+ troyans troyan
+ truant truant
+ truce truce
+ truckle truckl
+ trudge trudg
+ true true
+ trueborn trueborn
+ truepenny truepenni
+ truer truer
+ truest truest
+ truie truie
+ trull trull
+ trulls trull
+ truly truli
+ trump trump
+ trumpery trumperi
+ trumpet trumpet
+ trumpeter trumpet
+ trumpeters trumpet
+ trumpets trumpet
+ truncheon truncheon
+ truncheoners truncheon
+ trundle trundl
+ trunk trunk
+ trunks trunk
+ trust trust
+ trusted trust
+ truster truster
+ trusters truster
+ trusting trust
+ trusts trust
+ trusty trusti
+ truth truth
+ truths truth
+ try try
+ ts ts
+ tu tu
+ tuae tuae
+ tub tub
+ tubal tubal
+ tubs tub
+ tuck tuck
+ tucket tucket
+ tuesday tuesdai
+ tuft tuft
+ tufts tuft
+ tug tug
+ tugg tugg
+ tugging tug
+ tuition tuition
+ tullus tullu
+ tully tulli
+ tumble tumbl
+ tumbled tumbl
+ tumbler tumbler
+ tumbling tumbl
+ tumult tumult
+ tumultuous tumultu
+ tun tun
+ tune tune
+ tuneable tuneabl
+ tuned tune
+ tuners tuner
+ tunes tune
+ tunis tuni
+ tuns tun
+ tupping tup
+ turban turban
+ turbans turban
+ turbulence turbul
+ turbulent turbul
+ turd turd
+ turf turf
+ turfy turfi
+ turk turk
+ turkey turkei
+ turkeys turkei
+ turkish turkish
+ turks turk
+ turlygod turlygod
+ turmoil turmoil
+ turmoiled turmoil
+ turn turn
+ turnbull turnbul
+ turncoat turncoat
+ turncoats turncoat
+ turned turn
+ turneth turneth
+ turning turn
+ turnips turnip
+ turns turn
+ turph turph
+ turpitude turpitud
+ turquoise turquois
+ turret turret
+ turrets turret
+ turtle turtl
+ turtles turtl
+ turvy turvi
+ tuscan tuscan
+ tush tush
+ tut tut
+ tutor tutor
+ tutored tutor
+ tutors tutor
+ tutto tutto
+ twain twain
+ twang twang
+ twangling twangl
+ twas twa
+ tway twai
+ tweaks tweak
+ tween tween
+ twelfth twelfth
+ twelve twelv
+ twelvemonth twelvemonth
+ twentieth twentieth
+ twenty twenti
+ twere twere
+ twice twice
+ twig twig
+ twiggen twiggen
+ twigs twig
+ twilight twilight
+ twill twill
+ twilled twill
+ twin twin
+ twine twine
+ twink twink
+ twinkle twinkl
+ twinkled twinkl
+ twinkling twinkl
+ twinn twinn
+ twins twin
+ twire twire
+ twist twist
+ twisted twist
+ twit twit
+ twits twit
+ twitting twit
+ twixt twixt
+ two two
+ twofold twofold
+ twopence twopenc
+ twopences twopenc
+ twos two
+ twould twould
+ tyb tyb
+ tybalt tybalt
+ tybalts tybalt
+ tyburn tyburn
+ tying ty
+ tyke tyke
+ tymbria tymbria
+ type type
+ types type
+ typhon typhon
+ tyrannical tyrann
+ tyrannically tyrann
+ tyrannize tyrann
+ tyrannous tyrann
+ tyranny tyranni
+ tyrant tyrant
+ tyrants tyrant
+ tyrian tyrian
+ tyrrel tyrrel
+ u u
+ ubique ubiqu
+ udders udder
+ udge udg
+ uds ud
+ uglier uglier
+ ugliest ugliest
+ ugly ugli
+ ulcer ulcer
+ ulcerous ulcer
+ ulysses ulyss
+ um um
+ umber umber
+ umbra umbra
+ umbrage umbrag
+ umfrevile umfrevil
+ umpire umpir
+ umpires umpir
+ un un
+ unable unabl
+ unaccommodated unaccommod
+ unaccompanied unaccompani
+ unaccustom unaccustom
+ unaching unach
+ unacquainted unacquaint
+ unactive unact
+ unadvis unadvi
+ unadvised unadvis
+ unadvisedly unadvisedli
+ unagreeable unagre
+ unanel unanel
+ unanswer unansw
+ unappeas unappea
+ unapproved unapprov
+ unapt unapt
+ unaptness unapt
+ unarm unarm
+ unarmed unarm
+ unarms unarm
+ unassail unassail
+ unassailable unassail
+ unattainted unattaint
+ unattempted unattempt
+ unattended unattend
+ unauspicious unauspici
+ unauthorized unauthor
+ unavoided unavoid
+ unawares unawar
+ unback unback
+ unbak unbak
+ unbanded unband
+ unbar unbar
+ unbarb unbarb
+ unbashful unbash
+ unbated unbat
+ unbatter unbatt
+ unbecoming unbecom
+ unbefitting unbefit
+ unbegot unbegot
+ unbegotten unbegotten
+ unbelieved unbeliev
+ unbend unbend
+ unbent unbent
+ unbewail unbewail
+ unbid unbid
+ unbidden unbidden
+ unbind unbind
+ unbinds unbind
+ unbitted unbit
+ unbless unbless
+ unblest unblest
+ unbloodied unbloodi
+ unblown unblown
+ unbodied unbodi
+ unbolt unbolt
+ unbolted unbolt
+ unbonneted unbonnet
+ unbookish unbookish
+ unborn unborn
+ unbosom unbosom
+ unbound unbound
+ unbounded unbound
+ unbow unbow
+ unbowed unbow
+ unbrac unbrac
+ unbraced unbrac
+ unbraided unbraid
+ unbreathed unbreath
+ unbred unbr
+ unbreech unbreech
+ unbridled unbridl
+ unbroke unbrok
+ unbruis unbrui
+ unbruised unbruis
+ unbuckle unbuckl
+ unbuckles unbuckl
+ unbuckling unbuckl
+ unbuild unbuild
+ unburden unburden
+ unburdens unburden
+ unburied unburi
+ unburnt unburnt
+ unburthen unburthen
+ unbutton unbutton
+ unbuttoning unbutton
+ uncapable uncap
+ uncape uncap
+ uncase uncas
+ uncasing uncas
+ uncaught uncaught
+ uncertain uncertain
+ uncertainty uncertainti
+ unchain unchain
+ unchanging unchang
+ uncharge uncharg
+ uncharged uncharg
+ uncharitably uncharit
+ unchary unchari
+ unchaste unchast
+ uncheck uncheck
+ unchilded unchild
+ uncivil uncivil
+ unclaim unclaim
+ unclasp unclasp
+ uncle uncl
+ unclean unclean
+ uncleanliness uncleanli
+ uncleanly uncleanli
+ uncleanness unclean
+ uncles uncl
+ unclew unclew
+ unclog unclog
+ uncoined uncoin
+ uncolted uncolt
+ uncomeliness uncomeli
+ uncomfortable uncomfort
+ uncompassionate uncompassion
+ uncomprehensive uncomprehens
+ unconfinable unconfin
+ unconfirm unconfirm
+ unconfirmed unconfirm
+ unconquer unconqu
+ unconquered unconqu
+ unconsidered unconsid
+ unconstant unconst
+ unconstrain unconstrain
+ unconstrained unconstrain
+ uncontemn uncontemn
+ uncontroll uncontrol
+ uncorrected uncorrect
+ uncounted uncount
+ uncouple uncoupl
+ uncourteous uncourt
+ uncouth uncouth
+ uncover uncov
+ uncovered uncov
+ uncropped uncrop
+ uncross uncross
+ uncrown uncrown
+ unction unction
+ unctuous unctuou
+ uncuckolded uncuckold
+ uncurable uncur
+ uncurbable uncurb
+ uncurbed uncurb
+ uncurls uncurl
+ uncurrent uncurr
+ uncurse uncurs
+ undaunted undaunt
+ undeaf undeaf
+ undeck undeck
+ undeeded undeed
+ under under
+ underbearing underbear
+ underborne underborn
+ undercrest undercrest
+ underfoot underfoot
+ undergo undergo
+ undergoes undergo
+ undergoing undergo
+ undergone undergon
+ underground underground
+ underhand underhand
+ underlings underl
+ undermine undermin
+ underminers undermin
+ underneath underneath
+ underprizing underpr
+ underprop underprop
+ understand understand
+ understandeth understandeth
+ understanding understand
+ understandings understand
+ understands understand
+ understood understood
+ underta underta
+ undertake undertak
+ undertakeing undertak
+ undertaker undertak
+ undertakes undertak
+ undertaking undertak
+ undertakings undertak
+ undertook undertook
+ undervalu undervalu
+ undervalued undervalu
+ underwent underw
+ underwrit underwrit
+ underwrite underwrit
+ undescried undescri
+ undeserved undeserv
+ undeserver undeserv
+ undeservers undeserv
+ undeserving undeserv
+ undetermin undetermin
+ undid undid
+ undinted undint
+ undiscernible undiscern
+ undiscover undiscov
+ undishonoured undishonour
+ undispos undispo
+ undistinguishable undistinguish
+ undistinguished undistinguish
+ undividable undivid
+ undivided undivid
+ undivulged undivulg
+ undo undo
+ undoes undo
+ undoing undo
+ undone undon
+ undoubted undoubt
+ undoubtedly undoubtedli
+ undream undream
+ undress undress
+ undressed undress
+ undrown undrown
+ unduteous undut
+ undutiful unduti
+ une un
+ uneared unear
+ unearned unearn
+ unearthly unearthli
+ uneasines uneasin
+ uneasy uneasi
+ uneath uneath
+ uneducated uneduc
+ uneffectual uneffectu
+ unelected unelect
+ unequal unequ
+ uneven uneven
+ unexamin unexamin
+ unexecuted unexecut
+ unexpected unexpect
+ unexperienc unexperienc
+ unexperient unexperi
+ unexpressive unexpress
+ unfair unfair
+ unfaithful unfaith
+ unfallible unfal
+ unfam unfam
+ unfashionable unfashion
+ unfasten unfasten
+ unfather unfath
+ unfathered unfath
+ unfed unf
+ unfeed unfe
+ unfeeling unfeel
+ unfeigned unfeign
+ unfeignedly unfeignedli
+ unfellowed unfellow
+ unfelt unfelt
+ unfenced unfenc
+ unfilial unfili
+ unfill unfil
+ unfinish unfinish
+ unfirm unfirm
+ unfit unfit
+ unfitness unfit
+ unfix unfix
+ unfledg unfledg
+ unfold unfold
+ unfolded unfold
+ unfoldeth unfoldeth
+ unfolding unfold
+ unfolds unfold
+ unfool unfool
+ unforc unforc
+ unforced unforc
+ unforfeited unforfeit
+ unfortified unfortifi
+ unfortunate unfortun
+ unfought unfought
+ unfrequented unfrequ
+ unfriended unfriend
+ unfurnish unfurnish
+ ungain ungain
+ ungalled ungal
+ ungart ungart
+ ungarter ungart
+ ungenitur ungenitur
+ ungentle ungentl
+ ungentleness ungentl
+ ungently ungent
+ ungird ungird
+ ungodly ungodli
+ ungor ungor
+ ungot ungot
+ ungotten ungotten
+ ungovern ungovern
+ ungracious ungraci
+ ungrateful ungrat
+ ungravely ungrav
+ ungrown ungrown
+ unguarded unguard
+ unguem unguem
+ unguided unguid
+ unhack unhack
+ unhair unhair
+ unhallow unhallow
+ unhallowed unhallow
+ unhand unhand
+ unhandled unhandl
+ unhandsome unhandsom
+ unhang unhang
+ unhappied unhappi
+ unhappily unhappili
+ unhappiness unhappi
+ unhappy unhappi
+ unhardened unharden
+ unharm unharm
+ unhatch unhatch
+ unheard unheard
+ unhearts unheart
+ unheedful unheed
+ unheedfully unheedfulli
+ unheedy unheedi
+ unhelpful unhelp
+ unhidden unhidden
+ unholy unholi
+ unhop unhop
+ unhopefullest unhopefullest
+ unhorse unhors
+ unhospitable unhospit
+ unhous unhou
+ unhoused unhous
+ unhurtful unhurt
+ unicorn unicorn
+ unicorns unicorn
+ unimproved unimprov
+ uninhabitable uninhabit
+ uninhabited uninhabit
+ unintelligent unintellig
+ union union
+ unions union
+ unite unit
+ united unit
+ unity uniti
+ universal univers
+ universe univers
+ universities univers
+ university univers
+ unjointed unjoint
+ unjust unjust
+ unjustice unjustic
+ unjustly unjustli
+ unkennel unkennel
+ unkept unkept
+ unkind unkind
+ unkindest unkindest
+ unkindly unkindli
+ unkindness unkind
+ unking unk
+ unkinglike unkinglik
+ unkiss unkiss
+ unknit unknit
+ unknowing unknow
+ unknown unknown
+ unlace unlac
+ unlaid unlaid
+ unlawful unlaw
+ unlawfully unlawfulli
+ unlearn unlearn
+ unlearned unlearn
+ unless unless
+ unlesson unlesson
+ unletter unlett
+ unlettered unlett
+ unlick unlick
+ unlike unlik
+ unlikely unlik
+ unlimited unlimit
+ unlineal unlin
+ unlink unlink
+ unload unload
+ unloaded unload
+ unloading unload
+ unloads unload
+ unlock unlock
+ unlocks unlock
+ unlook unlook
+ unlooked unlook
+ unloos unloo
+ unloose unloos
+ unlov unlov
+ unloving unlov
+ unluckily unluckili
+ unlucky unlucki
+ unmade unmad
+ unmake unmak
+ unmanly unmanli
+ unmann unmann
+ unmanner unmann
+ unmannerd unmannerd
+ unmannerly unmannerli
+ unmarried unmarri
+ unmask unmask
+ unmasked unmask
+ unmasking unmask
+ unmasks unmask
+ unmast unmast
+ unmatch unmatch
+ unmatchable unmatch
+ unmatched unmatch
+ unmeasurable unmeasur
+ unmeet unmeet
+ unmellowed unmellow
+ unmerciful unmerci
+ unmeritable unmerit
+ unmeriting unmerit
+ unminded unmind
+ unmindfull unmindful
+ unmingled unmingl
+ unmitigable unmitig
+ unmitigated unmitig
+ unmix unmix
+ unmoan unmoan
+ unmov unmov
+ unmoved unmov
+ unmoving unmov
+ unmuffles unmuffl
+ unmuffling unmuffl
+ unmusical unmus
+ unmuzzle unmuzzl
+ unmuzzled unmuzzl
+ unnatural unnatur
+ unnaturally unnatur
+ unnaturalness unnatur
+ unnecessarily unnecessarili
+ unnecessary unnecessari
+ unneighbourly unneighbourli
+ unnerved unnerv
+ unnoble unnobl
+ unnoted unnot
+ unnumb unnumb
+ unnumber unnumb
+ unowed unow
+ unpack unpack
+ unpaid unpaid
+ unparagon unparagon
+ unparallel unparallel
+ unpartial unparti
+ unpath unpath
+ unpaved unpav
+ unpay unpai
+ unpeaceable unpeac
+ unpeg unpeg
+ unpeople unpeopl
+ unpeopled unpeopl
+ unperfect unperfect
+ unperfectness unperfect
+ unpick unpick
+ unpin unpin
+ unpink unpink
+ unpitied unpiti
+ unpitifully unpitifulli
+ unplagu unplagu
+ unplausive unplaus
+ unpleas unplea
+ unpleasant unpleas
+ unpleasing unpleas
+ unpolicied unpolici
+ unpolish unpolish
+ unpolished unpolish
+ unpolluted unpollut
+ unpossess unpossess
+ unpossessing unpossess
+ unpossible unposs
+ unpractis unpracti
+ unpregnant unpregn
+ unpremeditated unpremedit
+ unprepar unprepar
+ unprepared unprepar
+ unpress unpress
+ unprevailing unprevail
+ unprevented unprev
+ unpriz unpriz
+ unprizable unpriz
+ unprofitable unprofit
+ unprofited unprofit
+ unproper unprop
+ unproperly unproperli
+ unproportion unproport
+ unprovide unprovid
+ unprovided unprovid
+ unprovident unprovid
+ unprovokes unprovok
+ unprun unprun
+ unpruned unprun
+ unpublish unpublish
+ unpurged unpurg
+ unpurpos unpurpo
+ unqualitied unqual
+ unqueen unqueen
+ unquestion unquest
+ unquestionable unquestion
+ unquiet unquiet
+ unquietly unquietli
+ unquietness unquiet
+ unraised unrais
+ unrak unrak
+ unread unread
+ unready unreadi
+ unreal unreal
+ unreasonable unreason
+ unreasonably unreason
+ unreclaimed unreclaim
+ unreconciled unreconcil
+ unreconciliable unreconcili
+ unrecounted unrecount
+ unrecuring unrecur
+ unregarded unregard
+ unregist unregist
+ unrelenting unrel
+ unremovable unremov
+ unremovably unremov
+ unreprievable unrepriev
+ unresolv unresolv
+ unrespected unrespect
+ unrespective unrespect
+ unrest unrest
+ unrestor unrestor
+ unrestrained unrestrain
+ unreveng unreveng
+ unreverend unreverend
+ unreverent unrever
+ unrevers unrev
+ unrewarded unreward
+ unrighteous unright
+ unrightful unright
+ unripe unrip
+ unripp unripp
+ unrivall unrival
+ unroll unrol
+ unroof unroof
+ unroosted unroost
+ unroot unroot
+ unrough unrough
+ unruly unruli
+ unsafe unsaf
+ unsaluted unsalut
+ unsanctified unsanctifi
+ unsatisfied unsatisfi
+ unsavoury unsavouri
+ unsay unsai
+ unscalable unscal
+ unscann unscann
+ unscarr unscarr
+ unschool unschool
+ unscorch unscorch
+ unscour unscour
+ unscratch unscratch
+ unseal unseal
+ unseam unseam
+ unsearch unsearch
+ unseason unseason
+ unseasonable unseason
+ unseasonably unseason
+ unseasoned unseason
+ unseconded unsecond
+ unsecret unsecret
+ unseduc unseduc
+ unseeing unse
+ unseeming unseem
+ unseemly unseemli
+ unseen unseen
+ unseminar unseminar
+ unseparable unsepar
+ unserviceable unservic
+ unset unset
+ unsettle unsettl
+ unsettled unsettl
+ unsever unsev
+ unsex unsex
+ unshak unshak
+ unshaked unshak
+ unshaken unshaken
+ unshaped unshap
+ unshapes unshap
+ unsheath unsheath
+ unsheathe unsheath
+ unshorn unshorn
+ unshout unshout
+ unshown unshown
+ unshrinking unshrink
+ unshrubb unshrubb
+ unshunn unshunn
+ unshunnable unshunn
+ unsifted unsift
+ unsightly unsightli
+ unsinew unsinew
+ unsisting unsist
+ unskilful unskil
+ unskilfully unskilfulli
+ unskillful unskil
+ unslipping unslip
+ unsmirched unsmirch
+ unsoil unsoil
+ unsolicited unsolicit
+ unsorted unsort
+ unsought unsought
+ unsound unsound
+ unsounded unsound
+ unspeak unspeak
+ unspeakable unspeak
+ unspeaking unspeak
+ unsphere unspher
+ unspoke unspok
+ unspoken unspoken
+ unspotted unspot
+ unsquar unsquar
+ unstable unstabl
+ unstaid unstaid
+ unstain unstain
+ unstained unstain
+ unstanched unstanch
+ unstate unstat
+ unsteadfast unsteadfast
+ unstooping unstoop
+ unstringed unstring
+ unstuff unstuff
+ unsubstantial unsubstanti
+ unsuitable unsuit
+ unsuiting unsuit
+ unsullied unsulli
+ unsunn unsunn
+ unsur unsur
+ unsure unsur
+ unsuspected unsuspect
+ unsway unswai
+ unswayable unsway
+ unswayed unswai
+ unswear unswear
+ unswept unswept
+ unsworn unsworn
+ untainted untaint
+ untalk untalk
+ untangle untangl
+ untangled untangl
+ untasted untast
+ untaught untaught
+ untempering untemp
+ untender untend
+ untent untent
+ untented untent
+ unthankful unthank
+ unthankfulness unthank
+ unthink unthink
+ unthought unthought
+ unthread unthread
+ unthrift unthrift
+ unthrifts unthrift
+ unthrifty unthrifti
+ untie unti
+ untied unti
+ until until
+ untimber untimb
+ untimely untim
+ untir untir
+ untirable untir
+ untired untir
+ untitled untitl
+ unto unto
+ untold untold
+ untouch untouch
+ untoward untoward
+ untowardly untowardli
+ untraded untrad
+ untrain untrain
+ untrained untrain
+ untread untread
+ untreasur untreasur
+ untried untri
+ untrimmed untrim
+ untrod untrod
+ untrodden untrodden
+ untroubled untroubl
+ untrue untru
+ untrussing untruss
+ untruth untruth
+ untruths untruth
+ untucked untuck
+ untun untun
+ untune untun
+ untuneable untun
+ untutor untutor
+ untutored untutor
+ untwine untwin
+ unurg unurg
+ unus unu
+ unused unus
+ unusual unusu
+ unvalued unvalu
+ unvanquish unvanquish
+ unvarnish unvarnish
+ unveil unveil
+ unveiling unveil
+ unvenerable unvener
+ unvex unvex
+ unviolated unviol
+ unvirtuous unvirtu
+ unvisited unvisit
+ unvulnerable unvulner
+ unwares unwar
+ unwarily unwarili
+ unwash unwash
+ unwatch unwatch
+ unwearied unweari
+ unwed unw
+ unwedgeable unwedg
+ unweeded unweed
+ unweighed unweigh
+ unweighing unweigh
+ unwelcome unwelcom
+ unwept unwept
+ unwhipp unwhipp
+ unwholesome unwholesom
+ unwieldy unwieldi
+ unwilling unwil
+ unwillingly unwillingli
+ unwillingness unwilling
+ unwind unwind
+ unwiped unwip
+ unwise unwis
+ unwisely unwis
+ unwish unwish
+ unwished unwish
+ unwitted unwit
+ unwittingly unwittingli
+ unwonted unwont
+ unwooed unwoo
+ unworthier unworthi
+ unworthiest unworthiest
+ unworthily unworthili
+ unworthiness unworthi
+ unworthy unworthi
+ unwrung unwrung
+ unyok unyok
+ unyoke unyok
+ up up
+ upbraid upbraid
+ upbraided upbraid
+ upbraidings upbraid
+ upbraids upbraid
+ uphoarded uphoard
+ uphold uphold
+ upholdeth upholdeth
+ upholding uphold
+ upholds uphold
+ uplift uplift
+ uplifted uplift
+ upmost upmost
+ upon upon
+ upper upper
+ uprear uprear
+ upreared uprear
+ upright upright
+ uprighteously upright
+ uprightness upright
+ uprise upris
+ uprising upris
+ uproar uproar
+ uproars uproar
+ uprous uprou
+ upshoot upshoot
+ upshot upshot
+ upside upsid
+ upspring upspr
+ upstairs upstair
+ upstart upstart
+ upturned upturn
+ upward upward
+ upwards upward
+ urchin urchin
+ urchinfield urchinfield
+ urchins urchin
+ urg urg
+ urge urg
+ urged urg
+ urgent urgent
+ urges urg
+ urgest urgest
+ urging urg
+ urinal urin
+ urinals urin
+ urine urin
+ urn urn
+ urns urn
+ urs ur
+ ursa ursa
+ ursley urslei
+ ursula ursula
+ urswick urswick
+ us us
+ usage usag
+ usance usanc
+ usances usanc
+ use us
+ used us
+ useful us
+ useless useless
+ user user
+ uses us
+ usest usest
+ useth useth
+ usher usher
+ ushered usher
+ ushering usher
+ ushers usher
+ using us
+ usual usual
+ usually usual
+ usurer usur
+ usurers usur
+ usuries usuri
+ usuring usur
+ usurp usurp
+ usurpation usurp
+ usurped usurp
+ usurper usurp
+ usurpers usurp
+ usurping usurp
+ usurpingly usurpingli
+ usurps usurp
+ usury usuri
+ ut ut
+ utensil utensil
+ utensils utensil
+ utility util
+ utmost utmost
+ utt utt
+ utter utter
+ utterance utter
+ uttered utter
+ uttereth uttereth
+ uttering utter
+ utterly utterli
+ uttermost uttermost
+ utters utter
+ uy uy
+ v v
+ va va
+ vacancy vacanc
+ vacant vacant
+ vacation vacat
+ vade vade
+ vagabond vagabond
+ vagabonds vagabond
+ vagram vagram
+ vagrom vagrom
+ vail vail
+ vailed vail
+ vailing vail
+ vaillant vaillant
+ vain vain
+ vainer vainer
+ vainglory vainglori
+ vainly vainli
+ vainness vain
+ vais vai
+ valanc valanc
+ valance valanc
+ vale vale
+ valence valenc
+ valentine valentin
+ valentinus valentinu
+ valentio valentio
+ valeria valeria
+ valerius valeriu
+ vales vale
+ valiant valiant
+ valiantly valiantli
+ valiantness valiant
+ validity valid
+ vallant vallant
+ valley vallei
+ valleys vallei
+ vally valli
+ valor valor
+ valorous valor
+ valorously valor
+ valour valour
+ valu valu
+ valuation valuat
+ value valu
+ valued valu
+ valueless valueless
+ values valu
+ valuing valu
+ vane vane
+ vanish vanish
+ vanished vanish
+ vanishes vanish
+ vanishest vanishest
+ vanishing vanish
+ vanities vaniti
+ vanity vaniti
+ vanquish vanquish
+ vanquished vanquish
+ vanquisher vanquish
+ vanquishest vanquishest
+ vanquisheth vanquisheth
+ vant vant
+ vantage vantag
+ vantages vantag
+ vantbrace vantbrac
+ vapians vapian
+ vapor vapor
+ vaporous vapor
+ vapour vapour
+ vapours vapour
+ vara vara
+ variable variabl
+ variance varianc
+ variation variat
+ variations variat
+ varied vari
+ variest variest
+ variety varieti
+ varld varld
+ varlet varlet
+ varletry varletri
+ varlets varlet
+ varletto varletto
+ varnish varnish
+ varrius varriu
+ varro varro
+ vary vari
+ varying vari
+ vassal vassal
+ vassalage vassalag
+ vassals vassal
+ vast vast
+ vastidity vastid
+ vasty vasti
+ vat vat
+ vater vater
+ vaudemont vaudemont
+ vaughan vaughan
+ vault vault
+ vaultages vaultag
+ vaulted vault
+ vaulting vault
+ vaults vault
+ vaulty vaulti
+ vaumond vaumond
+ vaunt vaunt
+ vaunted vaunt
+ vaunter vaunter
+ vaunting vaunt
+ vauntingly vauntingli
+ vaunts vaunt
+ vauvado vauvado
+ vaux vaux
+ vaward vaward
+ ve ve
+ veal veal
+ vede vede
+ vehemence vehem
+ vehemency vehem
+ vehement vehement
+ vehor vehor
+ veil veil
+ veiled veil
+ veiling veil
+ vein vein
+ veins vein
+ vell vell
+ velure velur
+ velutus velutu
+ velvet velvet
+ vendible vendibl
+ venerable vener
+ venereal vener
+ venetia venetia
+ venetian venetian
+ venetians venetian
+ veneys venei
+ venge veng
+ vengeance vengeanc
+ vengeances vengeanc
+ vengeful veng
+ veni veni
+ venial venial
+ venice venic
+ venison venison
+ venit venit
+ venom venom
+ venomous venom
+ venomously venom
+ vent vent
+ ventages ventag
+ vented vent
+ ventidius ventidiu
+ ventricle ventricl
+ vents vent
+ ventur ventur
+ venture ventur
+ ventured ventur
+ ventures ventur
+ venturing ventur
+ venturous ventur
+ venue venu
+ venus venu
+ venuto venuto
+ ver ver
+ verb verb
+ verba verba
+ verbal verbal
+ verbatim verbatim
+ verbosity verbos
+ verdict verdict
+ verdun verdun
+ verdure verdur
+ vere vere
+ verefore verefor
+ verg verg
+ verge verg
+ vergers verger
+ verges verg
+ verier verier
+ veriest veriest
+ verified verifi
+ verify verifi
+ verily verili
+ veritable verit
+ verite verit
+ verities veriti
+ verity veriti
+ vermilion vermilion
+ vermin vermin
+ vernon vernon
+ verona verona
+ veronesa veronesa
+ versal versal
+ verse vers
+ verses vers
+ versing vers
+ vert vert
+ very veri
+ vesper vesper
+ vessel vessel
+ vessels vessel
+ vestal vestal
+ vestments vestment
+ vesture vestur
+ vetch vetch
+ vetches vetch
+ veux veux
+ vex vex
+ vexation vexat
+ vexations vexat
+ vexed vex
+ vexes vex
+ vexest vexest
+ vexeth vexeth
+ vexing vex
+ vi vi
+ via via
+ vial vial
+ vials vial
+ viand viand
+ viands viand
+ vic vic
+ vicar vicar
+ vice vice
+ vicegerent viceger
+ vicentio vicentio
+ viceroy viceroi
+ viceroys viceroi
+ vices vice
+ vici vici
+ vicious viciou
+ viciousness vicious
+ vict vict
+ victims victim
+ victor victor
+ victoress victoress
+ victories victori
+ victorious victori
+ victors victor
+ victory victori
+ victual victual
+ victuall victual
+ victuals victual
+ videlicet videlicet
+ video video
+ vides vide
+ videsne videsn
+ vidi vidi
+ vie vie
+ vied vi
+ vienna vienna
+ view view
+ viewest viewest
+ vieweth vieweth
+ viewing view
+ viewless viewless
+ views view
+ vigil vigil
+ vigilance vigil
+ vigilant vigil
+ vigitant vigit
+ vigour vigour
+ vii vii
+ viii viii
+ vile vile
+ vilely vile
+ vileness vile
+ viler viler
+ vilest vilest
+ vill vill
+ village villag
+ villager villag
+ villagery villageri
+ villages villag
+ villain villain
+ villainies villaini
+ villainous villain
+ villainously villain
+ villains villain
+ villainy villaini
+ villanies villani
+ villanous villan
+ villany villani
+ villiago villiago
+ villian villian
+ villianda villianda
+ villians villian
+ vinaigre vinaigr
+ vincentio vincentio
+ vincere vincer
+ vindicative vindic
+ vine vine
+ vinegar vinegar
+ vines vine
+ vineyard vineyard
+ vineyards vineyard
+ vint vint
+ vintner vintner
+ viol viol
+ viola viola
+ violate violat
+ violated violat
+ violates violat
+ violation violat
+ violator violat
+ violence violenc
+ violent violent
+ violenta violenta
+ violenteth violenteth
+ violently violent
+ violet violet
+ violets violet
+ viper viper
+ viperous viper
+ vipers viper
+ vir vir
+ virgilia virgilia
+ virgin virgin
+ virginal virgin
+ virginalling virginal
+ virginity virgin
+ virginius virginiu
+ virgins virgin
+ virgo virgo
+ virtue virtu
+ virtues virtu
+ virtuous virtuou
+ virtuously virtuous
+ visag visag
+ visage visag
+ visages visag
+ visard visard
+ viscount viscount
+ visible visibl
+ visibly visibl
+ vision vision
+ visions vision
+ visit visit
+ visitation visit
+ visitations visit
+ visited visit
+ visiting visit
+ visitings visit
+ visitor visitor
+ visitors visitor
+ visits visit
+ visor visor
+ vita vita
+ vitae vita
+ vital vital
+ vitement vitement
+ vitruvio vitruvio
+ vitx vitx
+ viva viva
+ vivant vivant
+ vive vive
+ vixen vixen
+ viz viz
+ vizaments vizament
+ vizard vizard
+ vizarded vizard
+ vizards vizard
+ vizor vizor
+ vlouting vlout
+ vocation vocat
+ vocativo vocativo
+ vocatur vocatur
+ voce voce
+ voic voic
+ voice voic
+ voices voic
+ void void
+ voided void
+ voiding void
+ voke voke
+ volable volabl
+ volant volant
+ volivorco volivorco
+ volley vollei
+ volquessen volquessen
+ volsce volsc
+ volsces volsc
+ volscian volscian
+ volscians volscian
+ volt volt
+ voltemand voltemand
+ volubility volubl
+ voluble volubl
+ volume volum
+ volumes volum
+ volumnia volumnia
+ volumnius volumniu
+ voluntaries voluntari
+ voluntary voluntari
+ voluptuously voluptu
+ voluptuousness voluptu
+ vomissement vomiss
+ vomit vomit
+ vomits vomit
+ vor vor
+ vore vore
+ vortnight vortnight
+ vot vot
+ votaries votari
+ votarist votarist
+ votarists votarist
+ votary votari
+ votre votr
+ vouch vouch
+ voucher voucher
+ vouchers voucher
+ vouches vouch
+ vouching vouch
+ vouchsaf vouchsaf
+ vouchsafe vouchsaf
+ vouchsafed vouchsaf
+ vouchsafes vouchsaf
+ vouchsafing vouchsaf
+ voudrais voudrai
+ vour vour
+ vous vou
+ voutsafe voutsaf
+ vow vow
+ vowed vow
+ vowel vowel
+ vowels vowel
+ vowing vow
+ vows vow
+ vox vox
+ voyage voyag
+ voyages voyag
+ vraiment vraiment
+ vulcan vulcan
+ vulgar vulgar
+ vulgarly vulgarli
+ vulgars vulgar
+ vulgo vulgo
+ vulnerable vulner
+ vulture vultur
+ vultures vultur
+ vurther vurther
+ w w
+ wad wad
+ waddled waddl
+ wade wade
+ waded wade
+ wafer wafer
+ waft waft
+ waftage waftag
+ wafting waft
+ wafts waft
+ wag wag
+ wage wage
+ wager wager
+ wagers wager
+ wages wage
+ wagging wag
+ waggish waggish
+ waggling waggl
+ waggon waggon
+ waggoner waggon
+ wagon wagon
+ wagoner wagon
+ wags wag
+ wagtail wagtail
+ wail wail
+ wailful wail
+ wailing wail
+ wails wail
+ wain wain
+ wainropes wainrop
+ wainscot wainscot
+ waist waist
+ wait wait
+ waited wait
+ waiter waiter
+ waiteth waiteth
+ waiting wait
+ waits wait
+ wak wak
+ wake wake
+ waked wake
+ wakefield wakefield
+ waken waken
+ wakened waken
+ wakes wake
+ wakest wakest
+ waking wake
+ wales wale
+ walk walk
+ walked walk
+ walking walk
+ walks walk
+ wall wall
+ walled wall
+ wallet wallet
+ wallets wallet
+ wallon wallon
+ walloon walloon
+ wallow wallow
+ walls wall
+ walnut walnut
+ walter walter
+ wan wan
+ wand wand
+ wander wander
+ wanderer wander
+ wanderers wander
+ wandering wander
+ wanders wander
+ wands wand
+ wane wane
+ waned wane
+ wanes wane
+ waning wane
+ wann wann
+ want want
+ wanted want
+ wanteth wanteth
+ wanting want
+ wanton wanton
+ wantonly wantonli
+ wantonness wanton
+ wantons wanton
+ wants want
+ wappen wappen
+ war war
+ warble warbl
+ warbling warbl
+ ward ward
+ warded ward
+ warden warden
+ warder warder
+ warders warder
+ wardrobe wardrob
+ wardrop wardrop
+ wards ward
+ ware ware
+ wares ware
+ warily warili
+ warkworth warkworth
+ warlike warlik
+ warm warm
+ warmed warm
+ warmer warmer
+ warming warm
+ warms warm
+ warmth warmth
+ warn warn
+ warned warn
+ warning warn
+ warnings warn
+ warns warn
+ warp warp
+ warped warp
+ warr warr
+ warrant warrant
+ warranted warrant
+ warranteth warranteth
+ warrantise warrantis
+ warrantize warrant
+ warrants warrant
+ warranty warranti
+ warren warren
+ warrener warren
+ warring war
+ warrior warrior
+ warriors warrior
+ wars war
+ wart wart
+ warwick warwick
+ warwickshire warwickshir
+ wary wari
+ was wa
+ wash wash
+ washed wash
+ washer washer
+ washes wash
+ washford washford
+ washing wash
+ wasp wasp
+ waspish waspish
+ wasps wasp
+ wassail wassail
+ wassails wassail
+ wast wast
+ waste wast
+ wasted wast
+ wasteful wast
+ wasters waster
+ wastes wast
+ wasting wast
+ wat wat
+ watch watch
+ watched watch
+ watchers watcher
+ watches watch
+ watchful watch
+ watching watch
+ watchings watch
+ watchman watchman
+ watchmen watchmen
+ watchword watchword
+ water water
+ waterdrops waterdrop
+ watered water
+ waterfly waterfli
+ waterford waterford
+ watering water
+ waterish waterish
+ waterpots waterpot
+ waterrugs waterrug
+ waters water
+ waterton waterton
+ watery wateri
+ wav wav
+ wave wave
+ waved wave
+ waver waver
+ waverer waver
+ wavering waver
+ waves wave
+ waving wave
+ waw waw
+ wawl wawl
+ wax wax
+ waxed wax
+ waxen waxen
+ waxes wax
+ waxing wax
+ way wai
+ waylaid waylaid
+ waylay waylai
+ ways wai
+ wayward wayward
+ waywarder wayward
+ waywardness wayward
+ we we
+ weak weak
+ weaken weaken
+ weakens weaken
+ weaker weaker
+ weakest weakest
+ weakling weakl
+ weakly weakli
+ weakness weak
+ weal weal
+ wealsmen wealsmen
+ wealth wealth
+ wealthiest wealthiest
+ wealthily wealthili
+ wealthy wealthi
+ wealtlly wealtlli
+ wean wean
+ weapon weapon
+ weapons weapon
+ wear wear
+ wearer wearer
+ wearers wearer
+ wearied weari
+ wearies weari
+ weariest weariest
+ wearily wearili
+ weariness weari
+ wearing wear
+ wearisome wearisom
+ wears wear
+ weary weari
+ weasel weasel
+ weather weather
+ weathercock weathercock
+ weathers weather
+ weav weav
+ weave weav
+ weaver weaver
+ weavers weaver
+ weaves weav
+ weaving weav
+ web web
+ wed wed
+ wedded wed
+ wedding wed
+ wedg wedg
+ wedged wedg
+ wedges wedg
+ wedlock wedlock
+ wednesday wednesdai
+ weed weed
+ weeded weed
+ weeder weeder
+ weeding weed
+ weeds weed
+ weedy weedi
+ week week
+ weeke week
+ weekly weekli
+ weeks week
+ ween ween
+ weening ween
+ weep weep
+ weeper weeper
+ weeping weep
+ weepingly weepingli
+ weepings weep
+ weeps weep
+ weet weet
+ weigh weigh
+ weighed weigh
+ weighing weigh
+ weighs weigh
+ weight weight
+ weightier weightier
+ weightless weightless
+ weights weight
+ weighty weighti
+ weird weird
+ welcom welcom
+ welcome welcom
+ welcomer welcom
+ welcomes welcom
+ welcomest welcomest
+ welfare welfar
+ welkin welkin
+ well well
+ wells well
+ welsh welsh
+ welshman welshman
+ welshmen welshmen
+ welshwomen welshwomen
+ wench wench
+ wenches wench
+ wenching wench
+ wend wend
+ went went
+ wept wept
+ weraday weradai
+ were were
+ wert wert
+ west west
+ western western
+ westminster westminst
+ westmoreland westmoreland
+ westward westward
+ wet wet
+ wether wether
+ wetting wet
+ wezand wezand
+ whale whale
+ whales whale
+ wharf wharf
+ wharfs wharf
+ what what
+ whate whate
+ whatever whatev
+ whatsoe whatso
+ whatsoever whatsoev
+ whatsome whatsom
+ whe whe
+ wheat wheat
+ wheaten wheaten
+ wheel wheel
+ wheeling wheel
+ wheels wheel
+ wheer wheer
+ wheeson wheeson
+ wheezing wheez
+ whelk whelk
+ whelks whelk
+ whelm whelm
+ whelp whelp
+ whelped whelp
+ whelps whelp
+ when when
+ whenas whena
+ whence whenc
+ whencesoever whencesoev
+ whene whene
+ whenever whenev
+ whensoever whensoev
+ where where
+ whereabout whereabout
+ whereas wherea
+ whereat whereat
+ whereby wherebi
+ wherefore wherefor
+ wherein wherein
+ whereinto whereinto
+ whereof whereof
+ whereon whereon
+ whereout whereout
+ whereso whereso
+ wheresoe whereso
+ wheresoever wheresoev
+ wheresome wheresom
+ whereto whereto
+ whereuntil whereuntil
+ whereunto whereunto
+ whereupon whereupon
+ wherever wherev
+ wherewith wherewith
+ wherewithal wherewith
+ whet whet
+ whether whether
+ whetstone whetston
+ whetted whet
+ whew whew
+ whey whei
+ which which
+ whiff whiff
+ whiffler whiffler
+ while while
+ whiles while
+ whilst whilst
+ whin whin
+ whine whine
+ whined whine
+ whinid whinid
+ whining whine
+ whip whip
+ whipp whipp
+ whippers whipper
+ whipping whip
+ whips whip
+ whipster whipster
+ whipstock whipstock
+ whipt whipt
+ whirl whirl
+ whirled whirl
+ whirligig whirligig
+ whirling whirl
+ whirlpool whirlpool
+ whirls whirl
+ whirlwind whirlwind
+ whirlwinds whirlwind
+ whisp whisp
+ whisper whisper
+ whispering whisper
+ whisperings whisper
+ whispers whisper
+ whist whist
+ whistle whistl
+ whistles whistl
+ whistling whistl
+ whit whit
+ white white
+ whitehall whitehal
+ whitely white
+ whiteness white
+ whiter whiter
+ whites white
+ whitest whitest
+ whither whither
+ whiting white
+ whitmore whitmor
+ whitsters whitster
+ whitsun whitsun
+ whittle whittl
+ whizzing whizz
+ who who
+ whoa whoa
+ whoe whoe
+ whoever whoever
+ whole whole
+ wholesom wholesom
+ wholesome wholesom
+ wholly wholli
+ whom whom
+ whoobub whoobub
+ whoop whoop
+ whooping whoop
+ whor whor
+ whore whore
+ whoremaster whoremast
+ whoremasterly whoremasterli
+ whoremonger whoremong
+ whores whore
+ whoreson whoreson
+ whoresons whoreson
+ whoring whore
+ whorish whorish
+ whose whose
+ whoso whoso
+ whosoe whoso
+ whosoever whosoev
+ why why
+ wi wi
+ wick wick
+ wicked wick
+ wickednes wickedn
+ wickedness wicked
+ wicket wicket
+ wicky wicki
+ wid wid
+ wide wide
+ widens widen
+ wider wider
+ widow widow
+ widowed widow
+ widower widow
+ widowhood widowhood
+ widows widow
+ wield wield
+ wife wife
+ wight wight
+ wights wight
+ wild wild
+ wildcats wildcat
+ wilder wilder
+ wilderness wilder
+ wildest wildest
+ wildfire wildfir
+ wildly wildli
+ wildness wild
+ wilds wild
+ wiles wile
+ wilful wil
+ wilfull wilful
+ wilfully wilfulli
+ wilfulnes wilfuln
+ wilfulness wil
+ will will
+ willed will
+ willers willer
+ willeth willeth
+ william william
+ williams william
+ willing will
+ willingly willingli
+ willingness willing
+ willoughby willoughbi
+ willow willow
+ wills will
+ wilt wilt
+ wiltshire wiltshir
+ wimpled wimpl
+ win win
+ wince winc
+ winch winch
+ winchester winchest
+ wincot wincot
+ wind wind
+ winded wind
+ windgalls windgal
+ winding wind
+ windlasses windlass
+ windmill windmil
+ window window
+ windows window
+ windpipe windpip
+ winds wind
+ windsor windsor
+ windy windi
+ wine wine
+ wing wing
+ winged wing
+ wingfield wingfield
+ wingham wingham
+ wings wing
+ wink wink
+ winking wink
+ winks wink
+ winner winner
+ winners winner
+ winning win
+ winnow winnow
+ winnowed winnow
+ winnows winnow
+ wins win
+ winter winter
+ winterly winterli
+ winters winter
+ wip wip
+ wipe wipe
+ wiped wipe
+ wipes wipe
+ wiping wipe
+ wire wire
+ wires wire
+ wiry wiri
+ wisdom wisdom
+ wisdoms wisdom
+ wise wise
+ wiselier wiseli
+ wisely wise
+ wiser wiser
+ wisest wisest
+ wish wish
+ wished wish
+ wisher wisher
+ wishers wisher
+ wishes wish
+ wishest wishest
+ wisheth wisheth
+ wishful wish
+ wishing wish
+ wishtly wishtli
+ wisp wisp
+ wist wist
+ wit wit
+ witb witb
+ witch witch
+ witchcraft witchcraft
+ witches witch
+ witching witch
+ with with
+ withal withal
+ withdraw withdraw
+ withdrawing withdraw
+ withdrawn withdrawn
+ withdrew withdrew
+ wither wither
+ withered wither
+ withering wither
+ withers wither
+ withheld withheld
+ withhold withhold
+ withholds withhold
+ within within
+ withold withold
+ without without
+ withstand withstand
+ withstanding withstand
+ withstood withstood
+ witless witless
+ witness wit
+ witnesses wit
+ witnesseth witnesseth
+ witnessing wit
+ wits wit
+ witted wit
+ wittenberg wittenberg
+ wittiest wittiest
+ wittily wittili
+ witting wit
+ wittingly wittingli
+ wittol wittol
+ wittolly wittolli
+ witty witti
+ wiv wiv
+ wive wive
+ wived wive
+ wives wive
+ wiving wive
+ wizard wizard
+ wizards wizard
+ wo wo
+ woe woe
+ woeful woeful
+ woefull woeful
+ woefullest woefullest
+ woes woe
+ woful woful
+ wolf wolf
+ wolfish wolfish
+ wolsey wolsei
+ wolves wolv
+ wolvish wolvish
+ woman woman
+ womanhood womanhood
+ womanish womanish
+ womankind womankind
+ womanly womanli
+ womb womb
+ wombs womb
+ womby wombi
+ women women
+ won won
+ woncot woncot
+ wond wond
+ wonder wonder
+ wondered wonder
+ wonderful wonder
+ wonderfully wonderfulli
+ wondering wonder
+ wonders wonder
+ wondrous wondrou
+ wondrously wondrous
+ wont wont
+ wonted wont
+ woo woo
+ wood wood
+ woodbine woodbin
+ woodcock woodcock
+ woodcocks woodcock
+ wooden wooden
+ woodland woodland
+ woodman woodman
+ woodmonger woodmong
+ woods wood
+ woodstock woodstock
+ woodville woodvil
+ wooed woo
+ wooer wooer
+ wooers wooer
+ wooes wooe
+ woof woof
+ wooing woo
+ wooingly wooingli
+ wool wool
+ woollen woollen
+ woolly woolli
+ woolsack woolsack
+ woolsey woolsei
+ woolward woolward
+ woos woo
+ wor wor
+ worcester worcest
+ word word
+ words word
+ wore wore
+ worins worin
+ work work
+ workers worker
+ working work
+ workings work
+ workman workman
+ workmanly workmanli
+ workmanship workmanship
+ workmen workmen
+ works work
+ worky worki
+ world world
+ worldlings worldl
+ worldly worldli
+ worlds world
+ worm worm
+ worms worm
+ wormwood wormwood
+ wormy wormi
+ worn worn
+ worried worri
+ worries worri
+ worry worri
+ worrying worri
+ worse wors
+ worser worser
+ worship worship
+ worshipful worship
+ worshipfully worshipfulli
+ worshipp worshipp
+ worshipper worshipp
+ worshippers worshipp
+ worshippest worshippest
+ worships worship
+ worst worst
+ worsted worst
+ wort wort
+ worth worth
+ worthied worthi
+ worthier worthier
+ worthies worthi
+ worthiest worthiest
+ worthily worthili
+ worthiness worthi
+ worthless worthless
+ worths worth
+ worthy worthi
+ worts wort
+ wot wot
+ wots wot
+ wotting wot
+ wouid wouid
+ would would
+ wouldest wouldest
+ wouldst wouldst
+ wound wound
+ wounded wound
+ wounding wound
+ woundings wound
+ woundless woundless
+ wounds wound
+ wouns woun
+ woven woven
+ wow wow
+ wrack wrack
+ wrackful wrack
+ wrangle wrangl
+ wrangler wrangler
+ wranglers wrangler
+ wrangling wrangl
+ wrap wrap
+ wrapp wrapp
+ wraps wrap
+ wrapt wrapt
+ wrath wrath
+ wrathful wrath
+ wrathfully wrathfulli
+ wraths wrath
+ wreak wreak
+ wreakful wreak
+ wreaks wreak
+ wreath wreath
+ wreathed wreath
+ wreathen wreathen
+ wreaths wreath
+ wreck wreck
+ wrecked wreck
+ wrecks wreck
+ wren wren
+ wrench wrench
+ wrenching wrench
+ wrens wren
+ wrest wrest
+ wrested wrest
+ wresting wrest
+ wrestle wrestl
+ wrestled wrestl
+ wrestler wrestler
+ wrestling wrestl
+ wretch wretch
+ wretchcd wretchcd
+ wretched wretch
+ wretchedness wretched
+ wretches wretch
+ wring wring
+ wringer wringer
+ wringing wring
+ wrings wring
+ wrinkle wrinkl
+ wrinkled wrinkl
+ wrinkles wrinkl
+ wrist wrist
+ wrists wrist
+ writ writ
+ write write
+ writer writer
+ writers writer
+ writes write
+ writhled writhl
+ writing write
+ writings write
+ writs writ
+ written written
+ wrong wrong
+ wronged wrong
+ wronger wronger
+ wrongful wrong
+ wrongfully wrongfulli
+ wronging wrong
+ wrongly wrongli
+ wrongs wrong
+ wronk wronk
+ wrote wrote
+ wroth wroth
+ wrought wrought
+ wrung wrung
+ wry wry
+ wrying wry
+ wt wt
+ wul wul
+ wye wye
+ x x
+ xanthippe xanthipp
+ xi xi
+ xii xii
+ xiii xiii
+ xiv xiv
+ xv xv
+ y y
+ yard yard
+ yards yard
+ yare yare
+ yarely yare
+ yarn yarn
+ yaughan yaughan
+ yaw yaw
+ yawn yawn
+ yawning yawn
+ ycleped yclepe
+ ycliped yclipe
+ ye ye
+ yea yea
+ yead yead
+ year year
+ yearly yearli
+ yearn yearn
+ yearns yearn
+ years year
+ yeas yea
+ yeast yeast
+ yedward yedward
+ yell yell
+ yellow yellow
+ yellowed yellow
+ yellowing yellow
+ yellowness yellow
+ yellows yellow
+ yells yell
+ yelping yelp
+ yeoman yeoman
+ yeomen yeomen
+ yerk yerk
+ yes ye
+ yesterday yesterdai
+ yesterdays yesterdai
+ yesternight yesternight
+ yesty yesti
+ yet yet
+ yew yew
+ yicld yicld
+ yield yield
+ yielded yield
+ yielder yielder
+ yielders yielder
+ yielding yield
+ yields yield
+ yok yok
+ yoke yoke
+ yoked yoke
+ yokefellow yokefellow
+ yokes yoke
+ yoketh yoketh
+ yon yon
+ yond yond
+ yonder yonder
+ yongrey yongrei
+ yore yore
+ yorick yorick
+ york york
+ yorkists yorkist
+ yorks york
+ yorkshire yorkshir
+ you you
+ young young
+ younger younger
+ youngest youngest
+ youngling youngl
+ younglings youngl
+ youngly youngli
+ younker younker
+ your your
+ yours your
+ yourself yourself
+ yourselves yourselv
+ youth youth
+ youthful youth
+ youths youth
+ youtli youtli
+ zanies zani
+ zany zani
+ zeal zeal
+ zealous zealou
+ zeals zeal
+ zed zed
+ zenelophon zenelophon
+ zenith zenith
+ zephyrs zephyr
+ zir zir
+ zo zo
+ zodiac zodiac
+ zodiacs zodiac
+ zone zone
+ zounds zound
+ zwagger zwagger
+}
+
+# Create a full-text index to use for testing the stemmer.
+#
+db close
+sqlite3 db :memory:
+db eval {
+ CREATE VIRTUAL TABLE t1 USING fts1(word, tokenize Porter);
+}
+
+foreach {pfrom pto} $porter_test_data {
+ do_test fts1porter-$pfrom {
+ execsql {
+ DELETE FROM t1_term;
+ DELETE FROM t1_content;
+ INSERT INTO t1(word) VALUES($pfrom);
+ SELECT term FROM t1_term;
+ }
+ } $pto
+}
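+
+# (Illustration, not part of the imported test data: because both indexed
+# words and query terms pass through the Porter stemmer, a search for one
+# inflected form matches another.  Assuming the fts1 module is loaded, a
+# sketch like
+#
+#   db eval { INSERT INTO t1(word) VALUES('vanquished') }
+#   db eval { SELECT word FROM t1 WHERE word MATCH 'vanquishing' }
+#
+# would be expected to return the stored row, since both forms reduce to
+# the stem "vanquish".)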
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/func.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/func.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,703 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this file is testing built-in functions.
+#
+# $Id: func.test,v 1.55 2006/09/16 21:45:14 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a table to work with.
+#
+do_test func-0.0 {
+ execsql {CREATE TABLE tbl1(t1 text)}
+ foreach word {this program is free software} {
+ execsql "INSERT INTO tbl1 VALUES('$word')"
+ }
+ execsql {SELECT t1 FROM tbl1 ORDER BY t1}
+} {free is program software this}
+do_test func-0.1 {
+ execsql {
+ CREATE TABLE t2(a);
+ INSERT INTO t2 VALUES(1);
+ INSERT INTO t2 VALUES(NULL);
+ INSERT INTO t2 VALUES(345);
+ INSERT INTO t2 VALUES(NULL);
+ INSERT INTO t2 VALUES(67890);
+ SELECT * FROM t2;
+ }
+} {1 {} 345 {} 67890}
+
+# Check out the length() function
+#
+do_test func-1.0 {
+ execsql {SELECT length(t1) FROM tbl1 ORDER BY t1}
+} {4 2 7 8 4}
+do_test func-1.1 {
+ set r [catch {execsql {SELECT length(*) FROM tbl1 ORDER BY t1}} msg]
+ lappend r $msg
+} {1 {wrong number of arguments to function length()}}
+do_test func-1.2 {
+ set r [catch {execsql {SELECT length(t1,5) FROM tbl1 ORDER BY t1}} msg]
+ lappend r $msg
+} {1 {wrong number of arguments to function length()}}
+do_test func-1.3 {
+ execsql {SELECT length(t1), count(*) FROM tbl1 GROUP BY length(t1)
+ ORDER BY length(t1)}
+} {2 1 4 2 7 1 8 1}
+do_test func-1.4 {
+ execsql {SELECT coalesce(length(a),-1) FROM t2}
+} {1 -1 3 -1 5}
+
+# Check out the substr() function
+#
+do_test func-2.0 {
+ execsql {SELECT substr(t1,1,2) FROM tbl1 ORDER BY t1}
+} {fr is pr so th}
+do_test func-2.1 {
+ execsql {SELECT substr(t1,2,1) FROM tbl1 ORDER BY t1}
+} {r s r o h}
+do_test func-2.2 {
+ execsql {SELECT substr(t1,3,3) FROM tbl1 ORDER BY t1}
+} {ee {} ogr ftw is}
+do_test func-2.3 {
+ execsql {SELECT substr(t1,-1,1) FROM tbl1 ORDER BY t1}
+} {e s m e s}
+do_test func-2.4 {
+ execsql {SELECT substr(t1,-1,2) FROM tbl1 ORDER BY t1}
+} {e s m e s}
+do_test func-2.5 {
+ execsql {SELECT substr(t1,-2,1) FROM tbl1 ORDER BY t1}
+} {e i a r i}
+do_test func-2.6 {
+ execsql {SELECT substr(t1,-2,2) FROM tbl1 ORDER BY t1}
+} {ee is am re is}
+do_test func-2.7 {
+ execsql {SELECT substr(t1,-4,2) FROM tbl1 ORDER BY t1}
+} {fr {} gr wa th}
+do_test func-2.8 {
+ execsql {SELECT t1 FROM tbl1 ORDER BY substr(t1,2,20)}
+} {this software free program is}
+do_test func-2.9 {
+ execsql {SELECT substr(a,1,1) FROM t2}
+} {1 {} 3 {} 6}
+do_test func-2.10 {
+ execsql {SELECT substr(a,2,2) FROM t2}
+} {{} {} 45 {} 78}
+
+# Only do the following tests if TCL has UTF-8 capabilities
+#
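+# (In a Tcl build without Unicode support the \u escape is left as the
+# literal text "u1234", so the comparison below is false and the whole
+# UTF-8 block is skipped.)
+#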
+if {"\u1234"!="u1234"} {
+
+# Put some UTF-8 characters in the database
+#
+do_test func-3.0 {
+ execsql {DELETE FROM tbl1}
+ foreach word "contains UTF-8 characters hi\u1234ho" {
+ execsql "INSERT INTO tbl1 VALUES('$word')"
+ }
+ execsql {SELECT t1 FROM tbl1 ORDER BY t1}
+} "UTF-8 characters contains hi\u1234ho"
+do_test func-3.1 {
+ execsql {SELECT length(t1) FROM tbl1 ORDER BY t1}
+} {5 10 8 5}
+do_test func-3.2 {
+ execsql {SELECT substr(t1,1,2) FROM tbl1 ORDER BY t1}
+} {UT ch co hi}
+do_test func-3.3 {
+ execsql {SELECT substr(t1,1,3) FROM tbl1 ORDER BY t1}
+} "UTF cha con hi\u1234"
+do_test func-3.4 {
+ execsql {SELECT substr(t1,2,2) FROM tbl1 ORDER BY t1}
+} "TF ha on i\u1234"
+do_test func-3.5 {
+ execsql {SELECT substr(t1,2,3) FROM tbl1 ORDER BY t1}
+} "TF- har ont i\u1234h"
+do_test func-3.6 {
+ execsql {SELECT substr(t1,3,2) FROM tbl1 ORDER BY t1}
+} "F- ar nt \u1234h"
+do_test func-3.7 {
+ execsql {SELECT substr(t1,4,2) FROM tbl1 ORDER BY t1}
+} "-8 ra ta ho"
+do_test func-3.8 {
+ execsql {SELECT substr(t1,-1,1) FROM tbl1 ORDER BY t1}
+} "8 s s o"
+do_test func-3.9 {
+ execsql {SELECT substr(t1,-3,2) FROM tbl1 ORDER BY t1}
+} "F- er in \u1234h"
+do_test func-3.10 {
+ execsql {SELECT substr(t1,-4,3) FROM tbl1 ORDER BY t1}
+} "TF- ter ain i\u1234h"
+do_test func-3.99 {
+ execsql {DELETE FROM tbl1}
+ foreach word {this program is free software} {
+ execsql "INSERT INTO tbl1 VALUES('$word')"
+ }
+ execsql {SELECT t1 FROM tbl1}
+} {this program is free software}
+
+} ;# End \u1234!=u1234
+
+# Test the abs() and round() functions.
+#
+do_test func-4.1 {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(1,2,3);
+ INSERT INTO t1 VALUES(2,1.2345678901234,-12345.67890);
+ INSERT INTO t1 VALUES(3,-2,-5);
+ }
+ catchsql {SELECT abs(a,b) FROM t1}
+} {1 {wrong number of arguments to function abs()}}
+do_test func-4.2 {
+ catchsql {SELECT abs() FROM t1}
+} {1 {wrong number of arguments to function abs()}}
+do_test func-4.3 {
+ catchsql {SELECT abs(b) FROM t1 ORDER BY a}
+} {0 {2 1.2345678901234 2}}
+do_test func-4.4 {
+ catchsql {SELECT abs(c) FROM t1 ORDER BY a}
+} {0 {3 12345.6789 5}}
+do_test func-4.4.1 {
+ execsql {SELECT abs(a) FROM t2}
+} {1 {} 345 {} 67890}
+do_test func-4.4.2 {
+ execsql {SELECT abs(t1) FROM tbl1}
+} {0.0 0.0 0.0 0.0 0.0}
+
+do_test func-4.5 {
+ catchsql {SELECT round(a,b,c) FROM t1}
+} {1 {wrong number of arguments to function round()}}
+do_test func-4.6 {
+ catchsql {SELECT round(b,2) FROM t1 ORDER BY b}
+} {0 {-2.0 1.23 2.0}}
+do_test func-4.7 {
+ catchsql {SELECT round(b,0) FROM t1 ORDER BY a}
+} {0 {2.0 1.0 -2.0}}
+do_test func-4.8 {
+ catchsql {SELECT round(c) FROM t1 ORDER BY a}
+} {0 {3.0 -12346.0 -5.0}}
+do_test func-4.9 {
+ catchsql {SELECT round(c,a) FROM t1 ORDER BY a}
+} {0 {3.0 -12345.68 -5.0}}
+do_test func-4.10 {
+ catchsql {SELECT 'x' || round(c,a) || 'y' FROM t1 ORDER BY a}
+} {0 {x3.0y x-12345.68y x-5.0y}}
+do_test func-4.11 {
+ catchsql {SELECT round() FROM t1 ORDER BY a}
+} {1 {wrong number of arguments to function round()}}
+do_test func-4.12 {
+ execsql {SELECT coalesce(round(a,2),'nil') FROM t2}
+} {1.0 nil 345.0 nil 67890.0}
+do_test func-4.13 {
+ execsql {SELECT round(t1,2) FROM tbl1}
+} {0.0 0.0 0.0 0.0 0.0}
+do_test func-4.14 {
+ execsql {SELECT typeof(round(5.1,1));}
+} {real}
+do_test func-4.15 {
+ execsql {SELECT typeof(round(5.1));}
+} {real}
+
+
+# Test the upper() and lower() functions
+#
+do_test func-5.1 {
+ execsql {SELECT upper(t1) FROM tbl1}
+} {THIS PROGRAM IS FREE SOFTWARE}
+do_test func-5.2 {
+ execsql {SELECT lower(upper(t1)) FROM tbl1}
+} {this program is free software}
+do_test func-5.3 {
+ execsql {SELECT upper(a), lower(a) FROM t2}
+} {1 1 {} {} 345 345 {} {} 67890 67890}
+do_test func-5.4 {
+ catchsql {SELECT upper(a,5) FROM t2}
+} {1 {wrong number of arguments to function upper()}}
+do_test func-5.5 {
+ catchsql {SELECT upper(*) FROM t2}
+} {1 {wrong number of arguments to function upper()}}
+
+# Test the coalesce() and nullif() functions
+#
+do_test func-6.1 {
+ execsql {SELECT coalesce(a,'xyz') FROM t2}
+} {1 xyz 345 xyz 67890}
+do_test func-6.2 {
+ execsql {SELECT coalesce(upper(a),'nil') FROM t2}
+} {1 nil 345 nil 67890}
+do_test func-6.3 {
+ execsql {SELECT coalesce(nullif(1,1),'nil')}
+} {nil}
+do_test func-6.4 {
+ execsql {SELECT coalesce(nullif(1,2),'nil')}
+} {1}
+do_test func-6.5 {
+ execsql {SELECT coalesce(nullif(1,NULL),'nil')}
+} {1}
+
+
+# Test the last_insert_rowid() function
+#
+do_test func-7.1 {
+ execsql {SELECT last_insert_rowid()}
+} [db last_insert_rowid]
+
+# Tests for aggregate functions and how they handle NULLs.
+#
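+# (For example, with t2 holding the values 1, NULL, 345, NULL, 67890:
+# count(a) counts only the three non-NULL values, count(*) counts all five
+# rows, and sum()/avg()/min()/max() ignore the NULLs entirely -- which is
+# exactly what the expected result of func-8.1 below encodes.)
+#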
+do_test func-8.1 {
+ ifcapable explain {
+ execsql {EXPLAIN SELECT sum(a) FROM t2;}
+ }
+ execsql {
+ SELECT sum(a), count(a), round(avg(a),2), min(a), max(a), count(*) FROM t2;
+ }
+} {68236 3 22745.33 1 67890 5}
+do_test func-8.2 {
+ execsql {
+ SELECT max('z+'||a||'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP') FROM t2;
+ }
+} {z+67890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP}
+
+ifcapable tempdb {
+ do_test func-8.3 {
+ execsql {
+ CREATE TEMP TABLE t3 AS SELECT a FROM t2 ORDER BY a DESC;
+ SELECT min('z+'||a||'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP') FROM t3;
+ }
+ } {z+1abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP}
+} else {
+ do_test func-8.3 {
+ execsql {
+ CREATE TABLE t3 AS SELECT a FROM t2 ORDER BY a DESC;
+ SELECT min('z+'||a||'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP') FROM t3;
+ }
+ } {z+1abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP}
+}
+do_test func-8.4 {
+ execsql {
+ SELECT max('z+'||a||'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP') FROM t3;
+ }
+} {z+67890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP}
+
+# How do you test the random() function in a meaningful, deterministic way?
+#
+do_test func-9.1 {
+ execsql {
+ SELECT random() is not null;
+ }
+} {1}
+
+# Use the "sqlite_register_test_function" TCL command which is part of
+# the test fixture in order to verify correct operation of some of
+# the user-defined SQL function APIs that are not used by the built-in
+# functions.
+#
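+# (A reading of the tests below rather than a documented interface: the
+# testfunc() helper appears to take alternating type-name/value argument
+# pairs, setting its result from each pair in turn with the matching
+# sqlite3_result_*() call, so a statement such as
+#
+#   SELECT testfunc('int', 1234);
+#
+# would return 1234, and the value of the final pair is what a longer
+# call returns.)
+#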
+set ::DB [sqlite3_connection_pointer db]
+sqlite_register_test_function $::DB testfunc
+do_test func-10.1 {
+ catchsql {
+ SELECT testfunc(NULL,NULL);
+ }
+} {1 {first argument should be one of: int int64 string double null value}}
+do_test func-10.2 {
+ execsql {
+ SELECT testfunc(
+ 'string', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
+ 'int', 1234
+ );
+ }
+} {1234}
+do_test func-10.3 {
+ execsql {
+ SELECT testfunc(
+ 'string', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
+ 'string', NULL
+ );
+ }
+} {{}}
+do_test func-10.4 {
+ execsql {
+ SELECT testfunc(
+ 'string', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
+ 'double', 1.234
+ );
+ }
+} {1.234}
+do_test func-10.5 {
+ execsql {
+ SELECT testfunc(
+ 'string', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
+ 'int', 1234,
+ 'string', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
+ 'string', NULL,
+ 'string', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
+ 'double', 1.234,
+ 'string', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
+ 'int', 1234,
+ 'string', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
+ 'string', NULL,
+ 'string', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
+ 'double', 1.234
+ );
+ }
+} {1.234}
+
+# Test the built-in sqlite_version(*) SQL function.
+#
+do_test func-11.1 {
+ execsql {
+ SELECT sqlite_version(*);
+ }
+} [sqlite3 -version]
+
+# Test that destructors passed to sqlite3 by calls to sqlite3_result_text()
+# etc. are called. These tests use two special user-defined functions
+# (implemented in func.c) only available in test builds.
+#
+# Function test_destructor() takes one argument and returns a copy of the
+# text form of that argument. A destructor is associated with the return
+# value. Function test_destructor_count() returns the number of outstanding
+# destructor calls for values returned by test_destructor().
+#
+do_test func-12.1 {
+ execsql {
+ SELECT test_destructor('hello world'), test_destructor_count();
+ }
+} {{hello world} 1}
+do_test func-12.2 {
+ execsql {
+ SELECT test_destructor_count();
+ }
+} {0}
+do_test func-12.3 {
+ execsql {
+ SELECT test_destructor('hello')||' world', test_destructor_count();
+ }
+} {{hello world} 0}
+do_test func-12.4 {
+ execsql {
+ SELECT test_destructor_count();
+ }
+} {0}
+do_test func-12.5 {
+ execsql {
+ CREATE TABLE t4(x);
+ INSERT INTO t4 VALUES(test_destructor('hello'));
+ INSERT INTO t4 VALUES(test_destructor('world'));
+ SELECT min(test_destructor(x)), max(test_destructor(x)) FROM t4;
+ }
+} {hello world}
+do_test func-12.6 {
+ execsql {
+ SELECT test_destructor_count();
+ }
+} {0}
+do_test func-12.7 {
+ execsql {
+ DROP TABLE t4;
+ }
+} {}
+
+# Test that the auxdata API for scalar functions works. This test uses
+# a special user-defined function only available in test builds,
+# test_auxdata(). Function test_auxdata() takes any number of arguments.
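+#
+# (Interpretation of the expected results below, not a documented interface:
+# for each argument test_auxdata() appears to report 0 when no auxiliary
+# data had been attached yet and 1 when data set on an earlier invocation
+# was still available via sqlite3_get_auxdata(), which is how these tests
+# detect whether the auxdata survived between rows.)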
+btree_breakpoint
+do_test func-13.1 {
+ execsql {
+ SELECT test_auxdata('hello world');
+ }
+} {0}
+
+do_test func-13.2 {
+ execsql {
+ CREATE TABLE t4(a, b);
+ INSERT INTO t4 VALUES('abc', 'def');
+ INSERT INTO t4 VALUES('ghi', 'jkl');
+ }
+} {}
+do_test func-13.3 {
+ execsql {
+ SELECT test_auxdata('hello world') FROM t4;
+ }
+} {0 1}
+do_test func-13.4 {
+ execsql {
+ SELECT test_auxdata('hello world', 123) FROM t4;
+ }
+} {{0 0} {1 1}}
+do_test func-13.5 {
+ execsql {
+ SELECT test_auxdata('hello world', a) FROM t4;
+ }
+} {{0 0} {1 0}}
+do_test func-13.6 {
+ execsql {
+ SELECT test_auxdata('hello'||'world', a) FROM t4;
+ }
+} {{0 0} {1 0}}
+
+# Test that auxiliary data is preserved between calls for SQL variables.
+do_test func-13.7 {
+ set DB [sqlite3_connection_pointer db]
+ set sql "SELECT test_auxdata( ? , a ) FROM t4;"
+ set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+ sqlite3_bind_text $STMT 1 hello -1
+ set res [list]
+ while { "SQLITE_ROW"==[sqlite3_step $STMT] } {
+ lappend res [sqlite3_column_text $STMT 0]
+ }
+ lappend res [sqlite3_finalize $STMT]
+} {{0 0} {1 0} SQLITE_OK}
+
+# Make sure that a function with a very long name is rejected
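+# (The cutoff implied by the pair of tests is a 255-byte name: a name of
+# 254 X's is accepted while one of 256 X's is rejected.)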
+do_test func-14.1 {
+ catch {
+ db function [string repeat X 254] {return "hello"}
+ }
+} {0}
+do_test func-14.2 {
+ catch {
+ db function [string repeat X 256] {return "hello"}
+ }
+} {1}
+
+do_test func-15.1 {
+ catchsql {
+ select test_error(NULL);
+ }
+} {1 {}}
+
+# Test the quote function for BLOB and NULL values.
+do_test func-16.1 {
+ execsql {
+ CREATE TABLE tbl2(a, b);
+ }
+ set STMT [sqlite3_prepare $::DB "INSERT INTO tbl2 VALUES(?, ?)" -1 TAIL]
+ sqlite3_bind_blob $::STMT 1 abc 3
+ sqlite3_step $::STMT
+ sqlite3_finalize $::STMT
+ execsql {
+ SELECT quote(a), quote(b) FROM tbl2;
+ }
+} {X'616263' NULL}
+
+# Correctly handle function error messages that include %. Ticket #1354
+#
+do_test func-17.1 {
+ proc testfunc1 args {error "Error %d with %s percents %p"}
+ db function testfunc1 ::testfunc1
+ catchsql {
+ SELECT testfunc1(1,2,3);
+ }
+} {1 {Error %d with %s percents %p}}
+
+# The SUM function should return integer results when all inputs are integer.
+#
+do_test func-18.1 {
+ execsql {
+ CREATE TABLE t5(x);
+ INSERT INTO t5 VALUES(1);
+ INSERT INTO t5 VALUES(-99);
+ INSERT INTO t5 VALUES(10000);
+ SELECT sum(x) FROM t5;
+ }
+} {9902}
+do_test func-18.2 {
+ execsql {
+ INSERT INTO t5 VALUES(0.0);
+ SELECT sum(x) FROM t5;
+ }
+} {9902.0}
+
+# The sum of nothing is NULL. But the sum of all NULLs is NULL.
+#
+# The TOTAL of nothing is 0.0.
+#
+do_test func-18.3 {
+ execsql {
+ DELETE FROM t5;
+ SELECT sum(x), total(x) FROM t5;
+ }
+} {{} 0.0}
+do_test func-18.4 {
+ execsql {
+ INSERT INTO t5 VALUES(NULL);
+ SELECT sum(x), total(x) FROM t5
+ }
+} {{} 0.0}
+do_test func-18.5 {
+ execsql {
+ INSERT INTO t5 VALUES(NULL);
+ SELECT sum(x), total(x) FROM t5
+ }
+} {{} 0.0}
+do_test func-18.6 {
+ execsql {
+ INSERT INTO t5 VALUES(123);
+ SELECT sum(x), total(x) FROM t5
+ }
+} {123 123.0}
+
+# Ticket #1664, #1669, #1670, #1674: An integer overflow on SUM causes
+# an error. The non-standard TOTAL() function continues to give a helpful
+# result.
+#
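+# (In short: sum() keeps exact 64-bit integer arithmetic while all inputs
+# are integers and raises "integer overflow" when the accumulator would
+# wrap, whereas total() always accumulates as a floating point number, so a
+# query like
+#
+#   SELECT total(x) FROM t6;
+#
+# keeps returning an (approximate) result instead of failing -- a summary
+# of the tests below, not additional behaviour.)
+#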
+do_test func-18.10 {
+ execsql {
+ CREATE TABLE t6(x INTEGER);
+ INSERT INTO t6 VALUES(1);
+ INSERT INTO t6 VALUES(1<<62);
+ SELECT sum(x) - ((1<<62)+1) from t6;
+ }
+} 0
+do_test func-18.11 {
+ execsql {
+ SELECT typeof(sum(x)) FROM t6
+ }
+} integer
+do_test func-18.12 {
+ catchsql {
+ INSERT INTO t6 VALUES(1<<62);
+ SELECT sum(x) - ((1<<62)*2.0+1) from t6;
+ }
+} {1 {integer overflow}}
+do_test func-18.13 {
+ execsql {
+ SELECT total(x) - ((1<<62)*2.0+1) FROM t6
+ }
+} 0.0
+do_test func-18.14 {
+ execsql {
+ SELECT sum(-9223372036854775805);
+ }
+} -9223372036854775805
+
+ifcapable compound&&subquery {
+
+do_test func-18.15 {
+ catchsql {
+ SELECT sum(x) FROM
+ (SELECT 9223372036854775807 AS x UNION ALL
+ SELECT 10 AS x);
+ }
+} {1 {integer overflow}}
+do_test func-18.16 {
+ catchsql {
+ SELECT sum(x) FROM
+ (SELECT 9223372036854775807 AS x UNION ALL
+ SELECT -10 AS x);
+ }
+} {0 9223372036854775797}
+do_test func-18.17 {
+ catchsql {
+ SELECT sum(x) FROM
+ (SELECT -9223372036854775807 AS x UNION ALL
+ SELECT 10 AS x);
+ }
+} {0 -9223372036854775797}
+do_test func-18.18 {
+ catchsql {
+ SELECT sum(x) FROM
+ (SELECT -9223372036854775807 AS x UNION ALL
+ SELECT -10 AS x);
+ }
+} {1 {integer overflow}}
+do_test func-18.19 {
+ catchsql {
+ SELECT sum(x) FROM (SELECT 9 AS x UNION ALL SELECT -10 AS x);
+ }
+} {0 -1}
+do_test func-18.20 {
+ catchsql {
+ SELECT sum(x) FROM (SELECT -9 AS x UNION ALL SELECT 10 AS x);
+ }
+} {0 1}
+do_test func-18.21 {
+ catchsql {
+ SELECT sum(x) FROM (SELECT -10 AS x UNION ALL SELECT 9 AS x);
+ }
+} {0 -1}
+do_test func-18.22 {
+ catchsql {
+ SELECT sum(x) FROM (SELECT 10 AS x UNION ALL SELECT -9 AS x);
+ }
+} {0 1}
+
+} ;# ifcapable compound&&subquery
+
+# Integer overflow on abs()
+#
+do_test func-18.31 {
+ catchsql {
+ SELECT abs(-9223372036854775807);
+ }
+} {0 9223372036854775807}
+do_test func-18.32 {
+ catchsql {
+ SELECT abs(-9223372036854775807-1);
+ }
+} {1 {integer overflow}}
+
+# The MATCH function exists but is only a stub and always throws an error.
+#
+do_test func-19.1 {
+ execsql {
+ SELECT match(a,b) FROM t1 WHERE 0;
+ }
+} {}
+do_test func-19.2 {
+ catchsql {
+ SELECT 'abc' MATCH 'xyz';
+ }
+} {1 {unable to use function MATCH in the requested context}}
+do_test func-19.3 {
+ catchsql {
+ SELECT 'abc' NOT MATCH 'xyz';
+ }
+} {1 {unable to use function MATCH in the requested context}}
+do_test func-19.4 {
+ catchsql {
+ SELECT match(1,2,3);
+ }
+} {1 {wrong number of arguments to function match()}}
+
+# Soundex tests.
+#
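+# (soundex() is only compiled in when SQLite is built with SQLITE_SOUNDEX
+# defined, which is why the block below first probes for the function
+# inside a catch and silently skips itself when it is absent.)
+#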
+if {![catch {db eval {SELECT soundex('hello')}}]} {
+ set i 0
+ foreach {name sdx} {
+ euler E460
+ EULER E460
+ Euler E460
+ ellery E460
+ gauss G200
+ ghosh G200
+ hilbert H416
+ Heilbronn H416
+ knuth K530
+ kant K530
+ Lloyd L300
+ LADD L300
+ Lukasiewicz L222
+ Lissajous L222
+ A A000
+ 12345 ?000
+ } {
+ incr i
+ do_test func-20.$i {
+ execsql {SELECT soundex($name)}
+ } $sdx
+ }
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/hook.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/hook.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,297 @@
+# 2004 Jan 14
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the TCL interface to the
+# SQLite library.
+#
+# The focus of the tests in this file is the following interface:
+#
+# sqlite_commit_hook (tests hook-1..hook-3 inclusive)
+# sqlite_update_hook (tests hook-4-*)
+# sqlite_rollback_hook (tests hook-5.*)
+#
+# $Id: hook.test,v 1.11 2006/01/17 09:35:02 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test hook-1.2 {
+ db commit_hook
+} {}
+
+
+do_test hook-3.1 {
+ set commit_cnt 0
+ proc commit_hook {} {
+ incr ::commit_cnt
+ return 0
+ }
+ db commit_hook ::commit_hook
+ db commit_hook
+} {::commit_hook}
+do_test hook-3.2 {
+ set commit_cnt
+} {0}
+do_test hook-3.3 {
+ execsql {
+ CREATE TABLE t2(a,b);
+ }
+ set commit_cnt
+} {1}
+do_test hook-3.4 {
+ execsql {
+ INSERT INTO t2 VALUES(1,2);
+ INSERT INTO t2 SELECT a+1, b+1 FROM t2;
+ INSERT INTO t2 SELECT a+2, b+2 FROM t2;
+ }
+ set commit_cnt
+} {4}
+do_test hook-3.5 {
+ set commit_cnt {}
+ proc commit_hook {} {
+ set ::commit_cnt [execsql {SELECT * FROM t2}]
+ return 0
+ }
+ execsql {
+ INSERT INTO t2 VALUES(5,6);
+ }
+ set commit_cnt
+} {1 2 2 3 3 4 4 5 5 6}
+do_test hook-3.6 {
+ set commit_cnt {}
+ proc commit_hook {} {
+ set ::commit_cnt [execsql {SELECT * FROM t2}]
+ return 1
+ }
+ catchsql {
+ INSERT INTO t2 VALUES(6,7);
+ }
+} {1 {constraint failed}}
+do_test hook-3.7 {
+ set ::commit_cnt
+} {1 2 2 3 3 4 4 5 5 6 6 7}
+do_test hook-3.8 {
+ execsql {SELECT * FROM t2}
+} {1 2 2 3 3 4 4 5 5 6}
+
+# Test turning off the commit hook
+#
+do_test hook-3.9 {
+ db commit_hook {}
+ set ::commit_cnt {}
+ execsql {
+ INSERT INTO t2 VALUES(7,8);
+ }
+ set ::commit_cnt
+} {}
+
+#----------------------------------------------------------------------------
+# Tests for the update-hook.
+#
+# 4.1.* - Very simple tests. Test that the update hook is invoked correctly
+# for INSERT, DELETE and UPDATE statements, including DELETE
+# statements with no WHERE clause.
+# 4.2.* - Check that the update-hook is invoked for rows modified by trigger
+# bodies. Also that the database name is correctly reported when
+# an attached database is modified.
+# 4.3.* - Do some sorting, grouping, compound queries, population and
+# depopulation of indices, to make sure the update-hook is not
+# invoked incorrectly.
+#
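+# (For reference -- a sketch, not one of the imported tests: the Tcl layer
+# invokes the registered script with the operation, database name, table
+# name and rowid appended as arguments, so a hook registered as
+#
+#   proc log_change {op db tbl rowid} { puts "$op $db.$tbl rowid=$rowid" }
+#   db update_hook log_change
+#
+# would print a line such as "INSERT main.t1 rowid=4" for each modified
+# row, matching the four-element entries the tests below collect with
+# lappend.)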
+
+# Simple tests
+do_test hook-4.1.1 {
+ catchsql {
+ DROP TABLE t1;
+ }
+ execsql {
+ CREATE TABLE t1(a INTEGER PRIMARY KEY, b);
+ INSERT INTO t1 VALUES(1, 'one');
+ INSERT INTO t1 VALUES(2, 'two');
+ INSERT INTO t1 VALUES(3, 'three');
+ }
+ db update_hook [list lappend ::update_hook]
+} {}
+do_test hook-4.1.2 {
+ execsql {
+ INSERT INTO t1 VALUES(4, 'four');
+ DELETE FROM t1 WHERE b = 'two';
+ UPDATE t1 SET b = '' WHERE a = 1 OR a = 3;
+ DELETE FROM t1 WHERE 1; -- Avoid the truncate optimization (for now)
+ }
+ set ::update_hook
+} [list \
+ INSERT main t1 4 \
+ DELETE main t1 2 \
+ UPDATE main t1 1 \
+ UPDATE main t1 3 \
+ DELETE main t1 1 \
+ DELETE main t1 3 \
+ DELETE main t1 4 \
+]
+
+set ::update_hook {}
+ifcapable trigger {
+ do_test hook-4.2.1 {
+ catchsql {
+ DROP TABLE t2;
+ }
+ execsql {
+ CREATE TABLE t2(c INTEGER PRIMARY KEY, d);
+ CREATE TRIGGER t1_trigger AFTER INSERT ON t1 BEGIN
+ INSERT INTO t2 VALUES(new.a, new.b);
+ UPDATE t2 SET d = d || ' via trigger' WHERE new.a = c;
+ DELETE FROM t2 WHERE new.a = c;
+ END;
+ }
+ } {}
+ do_test hook-4.2.2 {
+ execsql {
+ INSERT INTO t1 VALUES(1, 'one');
+ INSERT INTO t1 VALUES(2, 'two');
+ }
+ set ::update_hook
+ } [list \
+ INSERT main t1 1 \
+ INSERT main t2 1 \
+ UPDATE main t2 1 \
+ DELETE main t2 1 \
+ INSERT main t1 2 \
+ INSERT main t2 2 \
+ UPDATE main t2 2 \
+ DELETE main t2 2 \
+ ]
+} else {
+ execsql {
+ INSERT INTO t1 VALUES(1, 'one');
+ INSERT INTO t1 VALUES(2, 'two');
+ }
+}
+
+# Update-hook + ATTACH
+set ::update_hook {}
+do_test hook-4.2.3 {
+ file delete -force test2.db
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ CREATE TABLE aux.t3(a INTEGER PRIMARY KEY, b);
+ INSERT INTO aux.t3 SELECT * FROM t1;
+ UPDATE t3 SET b = 'two or so' WHERE a = 2;
+ DELETE FROM t3 WHERE 1; -- Avoid the truncate optimization (for now)
+ }
+ set ::update_hook
+} [list \
+ INSERT aux t3 1 \
+ INSERT aux t3 2 \
+ UPDATE aux t3 2 \
+ DELETE aux t3 1 \
+ DELETE aux t3 2 \
+]
+
+ifcapable trigger {
+ execsql {
+ DROP TRIGGER t1_trigger;
+ }
+}
+
+# Test that other vdbe operations involving btree structures do not
+# incorrectly invoke the update-hook.
+set ::update_hook {}
+do_test hook-4.3.1 {
+ execsql {
+ CREATE INDEX t1_i ON t1(b);
+ INSERT INTO t1 VALUES(3, 'three');
+ UPDATE t1 SET b = '';
+ DELETE FROM t1 WHERE a > 1;
+ }
+ set ::update_hook
+} [list \
+ INSERT main t1 3 \
+ UPDATE main t1 1 \
+ UPDATE main t1 2 \
+ UPDATE main t1 3 \
+ DELETE main t1 2 \
+ DELETE main t1 3 \
+]
+set ::update_hook {}
+ifcapable compound {
+ do_test hook-4.3.2 {
+ execsql {
+ SELECT * FROM t1 UNION SELECT * FROM t3;
+ SELECT * FROM t1 UNION ALL SELECT * FROM t3;
+ SELECT * FROM t1 INTERSECT SELECT * FROM t3;
+ SELECT * FROM t1 EXCEPT SELECT * FROM t3;
+ SELECT * FROM t1 ORDER BY b;
+ SELECT * FROM t1 GROUP BY b;
+ }
+ set ::update_hook
+ } [list]
+}
+db update_hook {}
+#
+#----------------------------------------------------------------------------
+
+#----------------------------------------------------------------------------
+# Test the rollback-hook. The rollback-hook is a bit more complicated than
+# either the commit or update hooks because a rollback can happen
+# explicitly (an SQL ROLLBACK statement) or implicitly (a constraint or
+# error condition).
+#
+# hook-5.1.* - Test explicit rollbacks.
+# hook-5.2.* - Test implicit rollbacks caused by constraint failure.
+#
+# hook-5.3.* - Test implicit rollbacks caused by IO errors.
+# hook-5.4.* - Test implicit rollbacks caused by malloc() failure.
+# hook-5.5.* - Test hot-journal rollbacks. Or should the rollback hook
+# not be called for these?
+#
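+# (Unlike the commit hook, whose non-zero return value converts the commit
+# into a rollback as hook-3.6 above showed, the rollback hook's return
+# value is ignored; it is purely a notification.  For example -- an
+# illustration, not an imported test --
+#
+#   db rollback_hook {puts rolled-back}
+#
+# would simply print once for every rollback, explicit or implicit.)
+#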
+
+do_test hook-5.0 {
+ # Configure the rollback hook to increment global variable
+ # $::rollback_hook each time it is invoked.
+ set ::rollback_hook 0
+ db rollback_hook [list incr ::rollback_hook]
+} {}
+
+# Test explicit rollbacks. Not much can really go wrong here.
+#
+do_test hook-5.1.1 {
+ set ::rollback_hook 0
+ execsql {
+ BEGIN;
+ ROLLBACK;
+ }
+ set ::rollback_hook
+} {1}
+
+# Test implicit rollbacks caused by constraints.
+#
+do_test hook-5.2.1 {
+ set ::rollback_hook 0
+ catchsql {
+ DROP TABLE t1;
+ CREATE TABLE t1(a PRIMARY KEY, b);
+ INSERT INTO t1 VALUES('one', 'I');
+ INSERT INTO t1 VALUES('one', 'I');
+ }
+ set ::rollback_hook
+} {1}
+do_test hook-5.2.2 {
+ # Check that the INSERT transaction above really was rolled back.
+ execsql {
+ SELECT count(*) FROM t1;
+ }
+} {1}
+
+#
+# End rollback-hook testing.
+#----------------------------------------------------------------------------
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/in.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/in.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,367 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this file is testing the IN and BETWEEN operators.
+#
+# $Id: in.test,v 1.17 2006/05/23 23:25:10 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Generate the test data we will need for the first sequences of tests.
+#
+do_test in-1.0 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(a int, b int);
+ }
+ for {set i 1} {$i<=10} {incr i} {
+ execsql "INSERT INTO t1 VALUES($i,[expr {int(pow(2,$i))}])"
+ }
+ execsql {
+ COMMIT;
+ SELECT count(*) FROM t1;
+ }
+} {10}
+
+# Do basic testing of BETWEEN.
+#
+do_test in-1.1 {
+ execsql {SELECT a FROM t1 WHERE b BETWEEN 10 AND 50 ORDER BY a}
+} {4 5}
+do_test in-1.2 {
+ execsql {SELECT a FROM t1 WHERE b NOT BETWEEN 10 AND 50 ORDER BY a}
+} {1 2 3 6 7 8 9 10}
+do_test in-1.3 {
+ execsql {SELECT a FROM t1 WHERE b BETWEEN a AND a*5 ORDER BY a}
+} {1 2 3 4}
+do_test in-1.4 {
+ execsql {SELECT a FROM t1 WHERE b NOT BETWEEN a AND a*5 ORDER BY a}
+} {5 6 7 8 9 10}
+do_test in-1.6 {
+ execsql {SELECT a FROM t1 WHERE b BETWEEN a AND a*5 OR b=512 ORDER BY a}
+} {1 2 3 4 9}
+do_test in-1.7 {
+ execsql {SELECT a+ 100*(a BETWEEN 1 and 3) FROM t1 ORDER BY b}
+} {101 102 103 4 5 6 7 8 9 10}
+
+# The rest of this file concentrates on testing the IN operator.
+# Skip this if the library is compiled with SQLITE_OMIT_SUBQUERY
+# (because the IN operator is unavailable).
+#
+ifcapable !subquery {
+ finish_test
+ return
+}
+
+# Testing of the IN operator using static lists on the right-hand side.
+#
+do_test in-2.1 {
+ execsql {SELECT a FROM t1 WHERE b IN (8,12,16,24,32) ORDER BY a}
+} {3 4 5}
+do_test in-2.2 {
+ execsql {SELECT a FROM t1 WHERE b NOT IN (8,12,16,24,32) ORDER BY a}
+} {1 2 6 7 8 9 10}
+do_test in-2.3 {
+ execsql {SELECT a FROM t1 WHERE b IN (8,12,16,24,32) OR b=512 ORDER BY a}
+} {3 4 5 9}
+do_test in-2.4 {
+ execsql {SELECT a FROM t1 WHERE b NOT IN (8,12,16,24,32) OR b=512 ORDER BY a}
+} {1 2 6 7 8 9 10}
+do_test in-2.5 {
+ execsql {SELECT a+100*(b IN (8,16,24)) FROM t1 ORDER BY b}
+} {1 2 103 104 5 6 7 8 9 10}
+
+do_test in-2.6 {
+ execsql {SELECT a FROM t1 WHERE b IN (b+8,64)}
+} {6}
+do_test in-2.7 {
+ execsql {SELECT a FROM t1 WHERE b IN (max(5,10,b),20)}
+} {4 5 6 7 8 9 10}
+do_test in-2.8 {
+ execsql {SELECT a FROM t1 WHERE b IN (8*2,64/2) ORDER BY b}
+} {4 5}
+do_test in-2.9 {
+ execsql {SELECT a FROM t1 WHERE b IN (max(5,10),20)}
+} {}
+do_test in-2.10 {
+ execsql {SELECT a FROM t1 WHERE min(0,b IN (a,30))}
+} {}
+do_test in-2.11 {
+ set v [catch {execsql {SELECT a FROM t1 WHERE c IN (10,20)}} msg]
+ lappend v $msg
+} {1 {no such column: c}}
+
+# Testing the IN operator where the right-hand side is a SELECT
+#
+do_test in-3.1 {
+ execsql {
+ SELECT a FROM t1
+ WHERE b IN (SELECT b FROM t1 WHERE a<5)
+ ORDER BY a
+ }
+} {1 2 3 4}
+do_test in-3.2 {
+ execsql {
+ SELECT a FROM t1
+ WHERE b IN (SELECT b FROM t1 WHERE a<5) OR b==512
+ ORDER BY a
+ }
+} {1 2 3 4 9}
+do_test in-3.3 {
+ execsql {
+ SELECT a + 100*(b IN (SELECT b FROM t1 WHERE a<5)) FROM t1 ORDER BY b
+ }
+} {101 102 103 104 5 6 7 8 9 10}
+
+# Make sure the UPDATE and DELETE commands work with IN-SELECT
+#
+do_test in-4.1 {
+ execsql {
+ UPDATE t1 SET b=b*2
+ WHERE b IN (SELECT b FROM t1 WHERE a>8)
+ }
+ execsql {SELECT b FROM t1 ORDER BY b}
+} {2 4 8 16 32 64 128 256 1024 2048}
+do_test in-4.2 {
+ execsql {
+ DELETE FROM t1 WHERE b IN (SELECT b FROM t1 WHERE a>8)
+ }
+ execsql {SELECT a FROM t1 ORDER BY a}
+} {1 2 3 4 5 6 7 8}
+do_test in-4.3 {
+ execsql {
+ DELETE FROM t1 WHERE b NOT IN (SELECT b FROM t1 WHERE a>4)
+ }
+ execsql {SELECT a FROM t1 ORDER BY a}
+} {5 6 7 8}
+
+# Do an IN with a constant RHS but where the RHS has many, many
+# elements. We need to test that collisions in the hash table
+# are resolved properly.
+#
+do_test in-5.1 {
+ execsql {
+ INSERT INTO t1 VALUES('hello', 'world');
+ SELECT * FROM t1
+ WHERE a IN (
+ 'Do','an','IN','with','a','constant','RHS','but','where','the',
+ 'has','many','elements','We','need','to','test','that',
+ 'collisions','hash','table','are','resolved','properly',
+ 'This','in-set','contains','thirty','one','entries','hello');
+ }
+} {hello world}
+
+# Make sure the IN operator works with INTEGER PRIMARY KEY fields.
+#
+do_test in-6.1 {
+ execsql {
+ CREATE TABLE ta(a INTEGER PRIMARY KEY, b);
+ INSERT INTO ta VALUES(1,1);
+ INSERT INTO ta VALUES(2,2);
+ INSERT INTO ta VALUES(3,3);
+ INSERT INTO ta VALUES(4,4);
+ INSERT INTO ta VALUES(6,6);
+ INSERT INTO ta VALUES(8,8);
+ INSERT INTO ta VALUES(10,
+ 'This is a key that is long enough to require a malloc in the VDBE');
+ SELECT * FROM ta WHERE a<10;
+ }
+} {1 1 2 2 3 3 4 4 6 6 8 8}
+do_test in-6.2 {
+ execsql {
+ CREATE TABLE tb(a INTEGER PRIMARY KEY, b);
+ INSERT INTO tb VALUES(1,1);
+ INSERT INTO tb VALUES(2,2);
+ INSERT INTO tb VALUES(3,3);
+ INSERT INTO tb VALUES(5,5);
+ INSERT INTO tb VALUES(7,7);
+ INSERT INTO tb VALUES(9,9);
+ INSERT INTO tb VALUES(11,
+ 'This is a key that is long enough to require a malloc in the VDBE');
+ SELECT * FROM tb WHERE a<10;
+ }
+} {1 1 2 2 3 3 5 5 7 7 9 9}
+do_test in-6.3 {
+ execsql {
+ SELECT a FROM ta WHERE b IN (SELECT a FROM tb);
+ }
+} {1 2 3}
+do_test in-6.4 {
+ execsql {
+ SELECT a FROM ta WHERE b NOT IN (SELECT a FROM tb);
+ }
+} {4 6 8 10}
+do_test in-6.5 {
+ execsql {
+ SELECT a FROM ta WHERE b IN (SELECT b FROM tb);
+ }
+} {1 2 3 10}
+do_test in-6.6 {
+ execsql {
+ SELECT a FROM ta WHERE b NOT IN (SELECT b FROM tb);
+ }
+} {4 6 8}
+do_test in-6.7 {
+ execsql {
+ SELECT a FROM ta WHERE a IN (SELECT a FROM tb);
+ }
+} {1 2 3}
+do_test in-6.8 {
+ execsql {
+ SELECT a FROM ta WHERE a NOT IN (SELECT a FROM tb);
+ }
+} {4 6 8 10}
+do_test in-6.9 {
+ execsql {
+ SELECT a FROM ta WHERE a IN (SELECT b FROM tb);
+ }
+} {1 2 3}
+do_test in-6.10 {
+ execsql {
+ SELECT a FROM ta WHERE a NOT IN (SELECT b FROM tb);
+ }
+} {4 6 8 10}
+
+# Tests of IN operator against empty sets. (Ticket #185)
+#
+do_test in-7.1 {
+ execsql {
+ SELECT a FROM t1 WHERE a IN ();
+ }
+} {}
+do_test in-7.2 {
+ execsql {
+ SELECT a FROM t1 WHERE a IN (5);
+ }
+} {5}
+do_test in-7.3 {
+ execsql {
+ SELECT a FROM t1 WHERE a NOT IN () ORDER BY a;
+ }
+} {5 6 7 8 hello}
+do_test in-7.4 {
+ execsql {
+ SELECT a FROM t1 WHERE a IN (5) AND b IN ();
+ }
+} {}
+do_test in-7.5 {
+ execsql {
+ SELECT a FROM t1 WHERE a IN (5) AND b NOT IN ();
+ }
+} {5}
+do_test in-7.6 {
+ execsql {
+ SELECT a FROM ta WHERE a IN ();
+ }
+} {}
+do_test in-7.7 {
+ execsql {
+ SELECT a FROM ta WHERE a NOT IN ();
+ }
+} {1 2 3 4 6 8 10}
+
+do_test in-8.1 {
+ execsql {
+ SELECT b FROM t1 WHERE a IN ('hello','there')
+ }
+} {world}
+do_test in-8.2 {
+ execsql {
+ SELECT b FROM t1 WHERE a IN ("hello",'there')
+ }
+} {world}
+
+# Test constructs of the form: expr IN tablename
+#
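+# (A sketch of the equivalence being exercised: the shorthand
+#
+#   SELECT b FROM t1 WHERE a IN t4;
+#
+# behaves like
+#
+#   SELECT b FROM t1 WHERE a IN (SELECT * FROM t4);
+#
+# and is therefore only usable when the named table has a single column;
+# in-9.4 below shows the error raised when the two-column table tb is
+# named instead.)
+#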
+do_test in-9.1 {
+ execsql {
+ CREATE TABLE t4 AS SELECT a FROM tb;
+ SELECT * FROM t4;
+ }
+} {1 2 3 5 7 9 11}
+do_test in-9.2 {
+ execsql {
+ SELECT b FROM t1 WHERE a IN t4;
+ }
+} {32 128}
+do_test in-9.3 {
+ execsql {
+ SELECT b FROM t1 WHERE a NOT IN t4;
+ }
+} {64 256 world}
+do_test in-9.4 {
+ catchsql {
+ SELECT b FROM t1 WHERE a NOT IN tb;
+ }
+} {1 {only a single result allowed for a SELECT that is part of an expression}}
+
+# IN clauses in CHECK constraints. Ticket #1645
+#
+do_test in-10.1 {
+ execsql {
+ CREATE TABLE t5(
+ a INTEGER,
+ CHECK( a IN (111,222,333) )
+ );
+ INSERT INTO t5 VALUES(111);
+ SELECT * FROM t5;
+ }
+} {111}
+do_test in-10.2 {
+ catchsql {
+ INSERT INTO t5 VALUES(4);
+ }
+} {1 {constraint failed}}
+
+# Ticket #1821
+#
+# Type affinity applied to the right-hand side of an IN operator.
+#
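+# (The rule exercised here: values on the right-hand side of IN take on the
+# affinity of the left-hand expression before comparison.  A bare NUMERIC
+# column therefore coerces '2' to the number 2, while prefixing the column
+# with unary + strips its affinity so the comparison stays textual -- a
+# restatement of the inline comments in the tests below.)
+#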
+do_test in-11.1 {
+ execsql {
+ CREATE TABLE t6(a,b NUMERIC);
+ INSERT INTO t6 VALUES(1,2);
+ INSERT INTO t6 VALUES(2,3);
+ SELECT * FROM t6 WHERE b IN (2);
+ }
+} {1 2}
+do_test in-11.2 {
+ # The '2' should be coerced into 2 because t6.b is NUMERIC
+ execsql {
+ SELECT * FROM t6 WHERE b IN ('2');
+ }
+} {1 2}
+do_test in-11.3 {
+ # No coercion should occur here because of the unary + before b.
+ execsql {
+ SELECT * FROM t6 WHERE +b IN ('2');
+ }
+} {}
+do_test in-11.4 {
+  # No coercion because column a has affinity NONE
+ execsql {
+ SELECT * FROM t6 WHERE a IN ('2');
+ }
+} {}
+do_test in-11.5 {
+ execsql {
+ SELECT * FROM t6 WHERE a IN (2);
+ }
+} {2 3}
+do_test in-11.6 {
+  # No coercion because column a has affinity NONE
+ execsql {
+ SELECT * FROM t6 WHERE +a IN ('2');
+ }
+} {}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/index.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/index.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,711 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the CREATE INDEX statement.
+#
+# $Id: index.test,v 1.42 2006/03/29 00:24:07 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a basic index and verify it is added to sqlite_master
+#
+do_test index-1.1 {
+ execsql {CREATE TABLE test1(f1 int, f2 int, f3 int)}
+ execsql {CREATE INDEX index1 ON test1(f1)}
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {index1 test1}
+do_test index-1.1b {
+ execsql {SELECT name, sql, tbl_name, type FROM sqlite_master
+ WHERE name='index1'}
+} {index1 {CREATE INDEX index1 ON test1(f1)} test1 index}
+do_test index-1.1c {
+ db close
+ sqlite3 db test.db
+ execsql {SELECT name, sql, tbl_name, type FROM sqlite_master
+ WHERE name='index1'}
+} {index1 {CREATE INDEX index1 ON test1(f1)} test1 index}
+do_test index-1.1d {
+ db close
+ sqlite3 db test.db
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {index1 test1}
+
+# Verify that the index dies with the table
+#
+do_test index-1.2 {
+ execsql {DROP TABLE test1}
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {}
+
+# Try adding an index to a table that does not exist
+#
+do_test index-2.1 {
+ set v [catch {execsql {CREATE INDEX index1 ON test1(f1)}} msg]
+ lappend v $msg
+} {1 {no such table: main.test1}}
+
+# Try adding an index on a column of a table where the table
+# exists but the column does not.
+#
+do_test index-2.1 {
+ execsql {CREATE TABLE test1(f1 int, f2 int, f3 int)}
+ set v [catch {execsql {CREATE INDEX index1 ON test1(f4)}} msg]
+ lappend v $msg
+} {1 {table test1 has no column named f4}}
+
+# Try an index with some columns that match and others that do not.
+#
+do_test index-2.2 {
+ set v [catch {execsql {CREATE INDEX index1 ON test1(f1, f2, f4, f3)}} msg]
+ execsql {DROP TABLE test1}
+ lappend v $msg
+} {1 {table test1 has no column named f4}}
+
+# Try creating a bunch of indices on the same table
+#
+set r {}
+for {set i 1} {$i<100} {incr i} {
+ lappend r [format index%02d $i]
+}
+do_test index-3.1 {
+ execsql {CREATE TABLE test1(f1 int, f2 int, f3 int, f4 int, f5 int)}
+ for {set i 1} {$i<100} {incr i} {
+ set sql "CREATE INDEX [format index%02d $i] ON test1(f[expr {($i%5)+1}])"
+ execsql $sql
+ }
+ execsql {SELECT name FROM sqlite_master
+ WHERE type='index' AND tbl_name='test1'
+ ORDER BY name}
+} $r
+integrity_check index-3.2.1
+ifcapable {reindex} {
+ do_test index-3.2.2 {
+ execsql REINDEX
+ } {}
+}
+integrity_check index-3.2.3
+
+
+# Verify that all the indices go away when we drop the table.
+#
+do_test index-3.3 {
+ execsql {DROP TABLE test1}
+ execsql {SELECT name FROM sqlite_master
+ WHERE type='index' AND tbl_name='test1'
+ ORDER BY name}
+} {}
+
+# Create a table and insert values into that table. Then create
+# an index on that table. Verify that we can select values
+# from the table correctly using the index.
+#
+# Note that the index names "index9" and "indext" are chosen because
+# they both have the same hash.
+#
+do_test index-4.1 {
+ execsql {CREATE TABLE test1(cnt int, power int)}
+ for {set i 1} {$i<20} {incr i} {
+ execsql "INSERT INTO test1 VALUES($i,[expr {int(pow(2,$i))}])"
+ }
+ execsql {CREATE INDEX index9 ON test1(cnt)}
+ execsql {CREATE INDEX indext ON test1(power)}
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {index9 indext test1}
+do_test index-4.2 {
+ execsql {SELECT cnt FROM test1 WHERE power=4}
+} {2}
+do_test index-4.3 {
+ execsql {SELECT cnt FROM test1 WHERE power=1024}
+} {10}
+do_test index-4.4 {
+ execsql {SELECT power FROM test1 WHERE cnt=6}
+} {64}
+do_test index-4.5 {
+ execsql {DROP INDEX indext}
+ execsql {SELECT power FROM test1 WHERE cnt=6}
+} {64}
+do_test index-4.6 {
+ execsql {SELECT cnt FROM test1 WHERE power=1024}
+} {10}
+do_test index-4.7 {
+ execsql {CREATE INDEX indext ON test1(cnt)}
+ execsql {SELECT power FROM test1 WHERE cnt=6}
+} {64}
+do_test index-4.8 {
+ execsql {SELECT cnt FROM test1 WHERE power=1024}
+} {10}
+do_test index-4.9 {
+ execsql {DROP INDEX index9}
+ execsql {SELECT power FROM test1 WHERE cnt=6}
+} {64}
+do_test index-4.10 {
+ execsql {SELECT cnt FROM test1 WHERE power=1024}
+} {10}
+do_test index-4.11 {
+ execsql {DROP INDEX indext}
+ execsql {SELECT power FROM test1 WHERE cnt=6}
+} {64}
+do_test index-4.12 {
+ execsql {SELECT cnt FROM test1 WHERE power=1024}
+} {10}
+do_test index-4.13 {
+ execsql {DROP TABLE test1}
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {}
+integrity_check index-4.14
+
+# Do not allow indices to be added to sqlite_master
+#
+do_test index-5.1 {
+ set v [catch {execsql {CREATE INDEX index1 ON sqlite_master(name)}} msg]
+ lappend v $msg
+} {1 {table sqlite_master may not be indexed}}
+do_test index-5.2 {
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta'}
+} {}
+
+# Do not allow indices with duplicate names to be added
+#
+do_test index-6.1 {
+ execsql {CREATE TABLE test1(f1 int, f2 int)}
+ execsql {CREATE TABLE test2(g1 real, g2 real)}
+ execsql {CREATE INDEX index1 ON test1(f1)}
+ set v [catch {execsql {CREATE INDEX index1 ON test2(g1)}} msg]
+ lappend v $msg
+} {1 {index index1 already exists}}
+do_test index-6.1.1 {
+ catchsql {CREATE INDEX [index1] ON test2(g1)}
+} {1 {index index1 already exists}}
+do_test index-6.1b {
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {index1 test1 test2}
+do_test index-6.1c {
+ catchsql {CREATE INDEX IF NOT EXISTS index1 ON test1(f1)}
+} {0 {}}
+do_test index-6.2 {
+ set v [catch {execsql {CREATE INDEX test1 ON test2(g1)}} msg]
+ lappend v $msg
+} {1 {there is already a table named test1}}
+do_test index-6.2b {
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {index1 test1 test2}
+do_test index-6.3 {
+ execsql {DROP TABLE test1}
+ execsql {DROP TABLE test2}
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {}
+do_test index-6.4 {
+ execsql {
+ CREATE TABLE test1(a,b);
+ CREATE INDEX index1 ON test1(a);
+ CREATE INDEX index2 ON test1(b);
+ CREATE INDEX index3 ON test1(a,b);
+ DROP TABLE test1;
+ SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name;
+ }
+} {}
+integrity_check index-6.5
+
+
+# Create a primary key
+#
+do_test index-7.1 {
+ execsql {CREATE TABLE test1(f1 int, f2 int primary key)}
+ for {set i 1} {$i<20} {incr i} {
+ execsql "INSERT INTO test1 VALUES($i,[expr {int(pow(2,$i))}])"
+ }
+ execsql {SELECT count(*) FROM test1}
+} {19}
+do_test index-7.2 {
+ execsql {SELECT f1 FROM test1 WHERE f2=65536}
+} {16}
+do_test index-7.3 {
+ execsql {
+ SELECT name FROM sqlite_master
+ WHERE type='index' AND tbl_name='test1'
+ }
+} {sqlite_autoindex_test1_1}
+do_test index-7.4 {
+ execsql {DROP table test1}
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta'}
+} {}
+integrity_check index-7.5
+
+# Make sure we cannot drop a non-existent index.
+#
+do_test index-8.1 {
+ set v [catch {execsql {DROP INDEX index1}} msg]
+ lappend v $msg
+} {1 {no such index: index1}}
+
+# Make sure we don't actually create an index when the EXPLAIN keyword
+# is used.
+#
+do_test index-9.1 {
+ execsql {CREATE TABLE tab1(a int)}
+ ifcapable {explain} {
+ execsql {EXPLAIN CREATE INDEX idx1 ON tab1(a)}
+ }
+ execsql {SELECT name FROM sqlite_master WHERE tbl_name='tab1'}
+} {tab1}
+do_test index-9.2 {
+ execsql {CREATE INDEX idx1 ON tab1(a)}
+ execsql {SELECT name FROM sqlite_master WHERE tbl_name='tab1' ORDER BY name}
+} {idx1 tab1}
+integrity_check index-9.3
+
+# Allow more than one entry with the same key.
+#
+do_test index-10.0 {
+ execsql {
+ CREATE TABLE t1(a int, b int);
+ CREATE INDEX i1 ON t1(a);
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(2,4);
+ INSERT INTO t1 VALUES(3,8);
+ INSERT INTO t1 VALUES(1,12);
+ SELECT b FROM t1 WHERE a=1 ORDER BY b;
+ }
+} {2 12}
+do_test index-10.1 {
+ execsql {
+ SELECT b FROM t1 WHERE a=2 ORDER BY b;
+ }
+} {4}
+do_test index-10.2 {
+ execsql {
+ DELETE FROM t1 WHERE b=12;
+ SELECT b FROM t1 WHERE a=1 ORDER BY b;
+ }
+} {2}
+do_test index-10.3 {
+ execsql {
+ DELETE FROM t1 WHERE b=2;
+ SELECT b FROM t1 WHERE a=1 ORDER BY b;
+ }
+} {}
+do_test index-10.4 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES (1,1);
+ INSERT INTO t1 VALUES (1,2);
+ INSERT INTO t1 VALUES (1,3);
+ INSERT INTO t1 VALUES (1,4);
+ INSERT INTO t1 VALUES (1,5);
+ INSERT INTO t1 VALUES (1,6);
+ INSERT INTO t1 VALUES (1,7);
+ INSERT INTO t1 VALUES (1,8);
+ INSERT INTO t1 VALUES (1,9);
+ INSERT INTO t1 VALUES (2,0);
+ SELECT b FROM t1 WHERE a=1 ORDER BY b;
+ }
+} {1 2 3 4 5 6 7 8 9}
+do_test index-10.5 {
+ ifcapable subquery {
+ execsql { DELETE FROM t1 WHERE b IN (2, 4, 6, 8); }
+ } else {
+ execsql { DELETE FROM t1 WHERE b = 2 OR b = 4 OR b = 6 OR b = 8; }
+ }
+ execsql {
+ SELECT b FROM t1 WHERE a=1 ORDER BY b;
+ }
+} {1 3 5 7 9}
+do_test index-10.6 {
+ execsql {
+ DELETE FROM t1 WHERE b>2;
+ SELECT b FROM t1 WHERE a=1 ORDER BY b;
+ }
+} {1}
+do_test index-10.7 {
+ execsql {
+ DELETE FROM t1 WHERE b=1;
+ SELECT b FROM t1 WHERE a=1 ORDER BY b;
+ }
+} {}
+do_test index-10.8 {
+ execsql {
+ SELECT b FROM t1 ORDER BY b;
+ }
+} {0}
+integrity_check index-10.9
+
+# Automatically create an index when we specify a primary key.
+#
+do_test index-11.1 {
+ execsql {
+ CREATE TABLE t3(
+ a text,
+ b int,
+ c float,
+ PRIMARY KEY(b)
+ );
+ }
+ for {set i 1} {$i<=50} {incr i} {
+ execsql "INSERT INTO t3 VALUES('x${i}x',$i,0.$i)"
+ }
+ set sqlite_search_count 0
+ concat [execsql {SELECT c FROM t3 WHERE b==10}] $sqlite_search_count
+} {0.1 3}
+integrity_check index-11.2
+
+
+# Numeric strings should compare as if they were numbers. So even if the
+# strings are not identical character-by-character, they should compare
+# equal to one another if they represent the same number. Verify that this
+# is true in indices.
+#
+# Updated for sqlite3 v3: SQLite will now store these values as numbers
+# (because the affinity of column a is NUMERIC) so the quirky
+# representations are not retained. i.e. '+1.0' becomes '1'.
+do_test index-12.1 {
+ execsql {
+ CREATE TABLE t4(a NUM,b);
+ INSERT INTO t4 VALUES('0.0',1);
+ INSERT INTO t4 VALUES('0.00',2);
+ INSERT INTO t4 VALUES('abc',3);
+ INSERT INTO t4 VALUES('-1.0',4);
+ INSERT INTO t4 VALUES('+1.0',5);
+ INSERT INTO t4 VALUES('0',6);
+ INSERT INTO t4 VALUES('00000',7);
+ SELECT a FROM t4 ORDER BY b;
+ }
+} {0 0 abc -1 1 0 0}
+do_test index-12.2 {
+ execsql {
+ SELECT a FROM t4 WHERE a==0 ORDER BY b
+ }
+} {0 0 0 0}
+do_test index-12.3 {
+ execsql {
+ SELECT a FROM t4 WHERE a<0.5 ORDER BY b
+ }
+} {0 0 -1 0 0}
+do_test index-12.4 {
+ execsql {
+ SELECT a FROM t4 WHERE a>-0.5 ORDER BY b
+ }
+} {0 0 abc 1 0 0}
+do_test index-12.5 {
+ execsql {
+ CREATE INDEX t4i1 ON t4(a);
+ SELECT a FROM t4 WHERE a==0 ORDER BY b
+ }
+} {0 0 0 0}
+do_test index-12.6 {
+ execsql {
+ SELECT a FROM t4 WHERE a<0.5 ORDER BY b
+ }
+} {0 0 -1 0 0}
+do_test index-12.7 {
+ execsql {
+ SELECT a FROM t4 WHERE a>-0.5 ORDER BY b
+ }
+} {0 0 abc 1 0 0}
+integrity_check index-12.8
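+
+# A small illustrative sketch (an editorial assumption, not part of the
+# original script): with NUM affinity the quirky text forms are converted on
+# insert, which is why '+1.0' reads back as 1 in the tests above:
+#
+#   CREATE TABLE demo(a NUM);
+#   INSERT INTO demo VALUES('+1.0');
+#   SELECT a FROM demo;                    -- returns 1, not the text '+1.0'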
+
+# Make sure we cannot drop an automatically created index.
+#
+do_test index-13.1 {
+ execsql {
+ CREATE TABLE t5(
+ a int UNIQUE,
+ b float PRIMARY KEY,
+ c varchar(10),
+ UNIQUE(a,c)
+ );
+ INSERT INTO t5 VALUES(1,2,3);
+ SELECT * FROM t5;
+ }
+} {1 2.0 3}
+do_test index-13.2 {
+ set ::idxlist [execsql {
+ SELECT name FROM sqlite_master WHERE type="index" AND tbl_name="t5";
+ }]
+ llength $::idxlist
+} {3}
+for {set i 0} {$i<[llength $::idxlist]} {incr i} {
+ do_test index-13.3.$i {
+ catchsql "
+ DROP INDEX '[lindex $::idxlist $i]';
+ "
+ } {1 {index associated with UNIQUE or PRIMARY KEY constraint cannot be dropped}}
+}
+do_test index-13.4 {
+ execsql {
+ INSERT INTO t5 VALUES('a','b','c');
+ SELECT * FROM t5;
+ }
+} {1 2.0 3 a b c}
+integrity_check index-13.5
+
+# Check the sort order of data in an index.
+#
+do_test index-14.1 {
+ execsql {
+ CREATE TABLE t6(a,b,c);
+ CREATE INDEX t6i1 ON t6(a,b);
+ INSERT INTO t6 VALUES('','',1);
+ INSERT INTO t6 VALUES('',NULL,2);
+ INSERT INTO t6 VALUES(NULL,'',3);
+ INSERT INTO t6 VALUES('abc',123,4);
+ INSERT INTO t6 VALUES(123,'abc',5);
+ SELECT c FROM t6 ORDER BY a,b;
+ }
+} {3 5 2 1 4}
+do_test index-14.2 {
+ execsql {
+ SELECT c FROM t6 WHERE a='';
+ }
+} {2 1}
+do_test index-14.3 {
+ execsql {
+ SELECT c FROM t6 WHERE b='';
+ }
+} {1 3}
+do_test index-14.4 {
+ execsql {
+ SELECT c FROM t6 WHERE a>'';
+ }
+} {4}
+do_test index-14.5 {
+ execsql {
+ SELECT c FROM t6 WHERE a>='';
+ }
+} {2 1 4}
+do_test index-14.6 {
+ execsql {
+ SELECT c FROM t6 WHERE a>123;
+ }
+} {2 1 4}
+do_test index-14.7 {
+ execsql {
+ SELECT c FROM t6 WHERE a>=123;
+ }
+} {5 2 1 4}
+do_test index-14.8 {
+ execsql {
+ SELECT c FROM t6 WHERE a<'abc';
+ }
+} {5 2 1}
+do_test index-14.9 {
+ execsql {
+ SELECT c FROM t6 WHERE a<='abc';
+ }
+} {5 2 1 4}
+do_test index-14.10 {
+ execsql {
+ SELECT c FROM t6 WHERE a<='';
+ }
+} {5 2 1}
+do_test index-14.11 {
+ execsql {
+ SELECT c FROM t6 WHERE a<'';
+ }
+} {5}
+integrity_check index-14.12
+
+do_test index-15.1 {
+ execsql {
+ DELETE FROM t1;
+ SELECT * FROM t1;
+ }
+} {}
+do_test index-15.2 {
+ execsql {
+ INSERT INTO t1 VALUES('1.234e5',1);
+ INSERT INTO t1 VALUES('12.33e04',2);
+ INSERT INTO t1 VALUES('12.35E4',3);
+ INSERT INTO t1 VALUES('12.34e',4);
+ INSERT INTO t1 VALUES('12.32e+4',5);
+ INSERT INTO t1 VALUES('12.36E+04',6);
+ INSERT INTO t1 VALUES('12.36E+',7);
+ INSERT INTO t1 VALUES('+123.10000E+0003',8);
+ INSERT INTO t1 VALUES('+',9);
+ INSERT INTO t1 VALUES('+12347.E+02',10);
+ INSERT INTO t1 VALUES('+12347E+02',11);
+ SELECT b FROM t1 ORDER BY a;
+ }
+} {8 5 2 1 3 6 11 9 10 4 7}
+integrity_check index-15.1
+
+# The following tests - index-16.* - test that when a table definition
+# includes qualifications that specify the same constraint twice, only a
+# single index is generated to enforce the constraint.
+#
+# For example: "CREATE TABLE abc( x PRIMARY KEY, UNIQUE(x) );"
+#
+do_test index-16.1 {
+ execsql {
+ CREATE TABLE t7(c UNIQUE PRIMARY KEY);
+ SELECT count(*) FROM sqlite_master WHERE tbl_name = 't7' AND type = 'index';
+ }
+} {1}
+do_test index-16.2 {
+ execsql {
+ DROP TABLE t7;
+ CREATE TABLE t7(c UNIQUE PRIMARY KEY);
+ SELECT count(*) FROM sqlite_master WHERE tbl_name = 't7' AND type = 'index';
+ }
+} {1}
+do_test index-16.3 {
+ execsql {
+ DROP TABLE t7;
+ CREATE TABLE t7(c PRIMARY KEY, UNIQUE(c) );
+ SELECT count(*) FROM sqlite_master WHERE tbl_name = 't7' AND type = 'index';
+ }
+} {1}
+do_test index-16.4 {
+ execsql {
+ DROP TABLE t7;
+ CREATE TABLE t7(c, d , UNIQUE(c, d), PRIMARY KEY(c, d) );
+ SELECT count(*) FROM sqlite_master WHERE tbl_name = 't7' AND type = 'index';
+ }
+} {1}
+do_test index-16.5 {
+ execsql {
+ DROP TABLE t7;
+ CREATE TABLE t7(c, d , UNIQUE(c), PRIMARY KEY(c, d) );
+ SELECT count(*) FROM sqlite_master WHERE tbl_name = 't7' AND type = 'index';
+ }
+} {2}
+
+# Test that automatically created indices are named correctly. The current
+# convention is: "sqlite_autoindex_<table name>_<integer>"
+#
+# Then check that it is an error to try to drop any automatically created
+# indices.
+do_test index-17.1 {
+ execsql {
+ DROP TABLE t7;
+ CREATE TABLE t7(c, d UNIQUE, UNIQUE(c), PRIMARY KEY(c, d) );
+ SELECT name FROM sqlite_master WHERE tbl_name = 't7' AND type = 'index';
+ }
+} {sqlite_autoindex_t7_1 sqlite_autoindex_t7_2 sqlite_autoindex_t7_3}
+do_test index-17.2 {
+ catchsql {
+ DROP INDEX sqlite_autoindex_t7_1;
+ }
+} {1 {index associated with UNIQUE or PRIMARY KEY constraint cannot be dropped}}
+do_test index-17.3 {
+ catchsql {
+ DROP INDEX IF EXISTS sqlite_autoindex_t7_1;
+ }
+} {1 {index associated with UNIQUE or PRIMARY KEY constraint cannot be dropped}}
+do_test index-17.4 {
+ catchsql {
+ DROP INDEX IF EXISTS no_such_index;
+ }
+} {0 {}}
+
+
+# The following tests ensure that it is not possible to explicitly name
+# a schema object with a name beginning with "sqlite_". Granted that is a
+# little outside the focus of this test script, but this has got to be
+# tested somewhere.
+do_test index-18.1 {
+ catchsql {
+ CREATE TABLE sqlite_t1(a, b, c);
+ }
+} {1 {object name reserved for internal use: sqlite_t1}}
+do_test index-18.2 {
+ catchsql {
+ CREATE INDEX sqlite_i1 ON t7(c);
+ }
+} {1 {object name reserved for internal use: sqlite_i1}}
+ifcapable view {
+do_test index-18.3 {
+ catchsql {
+ CREATE VIEW sqlite_v1 AS SELECT * FROM t7;
+ }
+} {1 {object name reserved for internal use: sqlite_v1}}
+} ;# ifcapable view
+ifcapable {trigger} {
+ do_test index-18.4 {
+ catchsql {
+ CREATE TRIGGER sqlite_tr1 BEFORE INSERT ON t7 BEGIN SELECT 1; END;
+ }
+ } {1 {object name reserved for internal use: sqlite_tr1}}
+}
+do_test index-18.5 {
+ execsql {
+ DROP TABLE t7;
+ }
+} {}
+
+# These tests ensure that if multiple table definition constraints are
+# implemented by a single index, the correct ON CONFLICT policy applies.
+ifcapable conflict {
+ do_test index-19.1 {
+ execsql {
+ CREATE TABLE t7(a UNIQUE PRIMARY KEY);
+ CREATE TABLE t8(a UNIQUE PRIMARY KEY ON CONFLICT ROLLBACK);
+ INSERT INTO t7 VALUES(1);
+ INSERT INTO t8 VALUES(1);
+ }
+ } {}
+ do_test index-19.2 {
+ catchsql {
+ BEGIN;
+ INSERT INTO t7 VALUES(1);
+ }
+ } {1 {column a is not unique}}
+ do_test index-19.3 {
+ catchsql {
+ BEGIN;
+ }
+ } {1 {cannot start a transaction within a transaction}}
+ do_test index-19.4 {
+ catchsql {
+ INSERT INTO t8 VALUES(1);
+ }
+ } {1 {column a is not unique}}
+ do_test index-19.5 {
+ catchsql {
+ BEGIN;
+ COMMIT;
+ }
+ } {0 {}}
+ do_test index-19.6 {
+ catchsql {
+ DROP TABLE t7;
+ DROP TABLE t8;
+ CREATE TABLE t7(
+ a PRIMARY KEY ON CONFLICT FAIL,
+ UNIQUE(a) ON CONFLICT IGNORE
+ );
+ }
+ } {1 {conflicting ON CONFLICT clauses specified}}
+} ; # end of "ifcapable conflict" block
+
+ifcapable {reindex} {
+ do_test index-19.7 {
+ execsql REINDEX
+ } {}
+}
+integrity_check index-19.8
+
+# Drop index with a quoted name. Ticket #695.
+#
+do_test index-20.1 {
+ execsql {
+ CREATE INDEX "t6i2" ON t6(c);
+ DROP INDEX "t6i2";
+ }
+} {}
+do_test index-20.2 {
+ execsql {
+ DROP INDEX "t6i1";
+ }
+} {}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/index2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/index2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,74 @@
+# 2005 January 11
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the CREATE INDEX statement.
+#
+# $Id: index2.test,v 1.3 2006/03/03 19:12:30 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a table with a large number of columns
+#
+do_test index2-1.1 {
+ set sql {CREATE TABLE t1(}
+ for {set i 1} {$i<1000} {incr i} {
+ append sql "c$i,"
+ }
+ append sql "c1000);"
+ execsql $sql
+} {}
+do_test index2-1.2 {
+ set sql {INSERT INTO t1 VALUES(}
+ for {set i 1} {$i<1000} {incr i} {
+ append sql $i,
+ }
+ append sql {1000);}
+ execsql $sql
+} {}
+do_test index2-1.3 {
+ execsql {SELECT c123 FROM t1}
+} 123
+do_test index2-1.4 {
+ execsql BEGIN
+ for {set j 1} {$j<=100} {incr j} {
+ set sql {INSERT INTO t1 VALUES(}
+ for {set i 1} {$i<1000} {incr i} {
+ append sql [expr {$j*10000+$i}],
+ }
+ append sql "[expr {$j*10000+1000}]);"
+ execsql $sql
+ }
+ execsql COMMIT
+ execsql {SELECT count(*) FROM t1}
+} 101
+do_test index2-1.5 {
+ execsql {SELECT round(sum(c1000)) FROM t1}
+} {50601000.0}
+
+# Create indices with many columns
+#
+do_test index2-2.1 {
+ set sql "CREATE INDEX t1i1 ON t1("
+ for {set i 1} {$i<1000} {incr i} {
+ append sql c$i,
+ }
+ append sql c1000)
+ execsql $sql
+} {}
+do_test index2-2.2 {
+ ifcapable explain {
+ execsql {EXPLAIN SELECT c9 FROM t1 ORDER BY c1, c2, c3, c4, c5}
+ }
+ execsql {SELECT c9 FROM t1 ORDER BY c1, c2, c3, c4, c5, c6 LIMIT 5}
+} {9 10009 20009 30009 40009}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/index3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/index3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,58 @@
+# 2005 February 14
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the CREATE INDEX statement.
+#
+# $Id: index3.test,v 1.2 2005/08/20 03:03:04 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Ticket #1115. Make sure that when a UNIQUE index is created on a
+# non-unique column (or columns) that it fails and that it leaves no
+# residue behind.
+#
+do_test index3-1.1 {
+ execsql {
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(1);
+ SELECT * FROM t1;
+ }
+} {1 1}
+do_test index3-1.2 {
+ catchsql {
+ BEGIN;
+ CREATE UNIQUE INDEX i1 ON t1(a);
+ }
+} {1 {indexed columns are not unique}}
+do_test index3-1.3 {
+ catchsql COMMIT;
+} {0 {}}
+integrity_check index3-1.4
+
+# This test corrupts the database file so it must be the last test
+# in the series.
+#
+do_test index3-99.1 {
+ execsql {
+ PRAGMA writable_schema=on;
+ UPDATE sqlite_master SET sql='nonsense';
+ }
+ db close
+ sqlite3 db test.db
+ catchsql {
+ DROP INDEX i1;
+ }
+} {1 {malformed database schema - near "nonsense": syntax error}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/insert.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/insert.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,368 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the INSERT statement.
+#
+# $Id: insert.test,v 1.30 2006/06/11 23:41:56 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Try to insert into a non-existent table.
+#
+do_test insert-1.1 {
+ set v [catch {execsql {INSERT INTO test1 VALUES(1,2,3)}} msg]
+ lappend v $msg
+} {1 {no such table: test1}}
+
+# Try to insert into sqlite_master
+#
+do_test insert-1.2 {
+ set v [catch {execsql {INSERT INTO sqlite_master VALUES(1,2,3,4)}} msg]
+ lappend v $msg
+} {1 {table sqlite_master may not be modified}}
+
+# Try to insert the wrong number of entries.
+#
+do_test insert-1.3 {
+ execsql {CREATE TABLE test1(one int, two int, three int)}
+ set v [catch {execsql {INSERT INTO test1 VALUES(1,2)}} msg]
+ lappend v $msg
+} {1 {table test1 has 3 columns but 2 values were supplied}}
+do_test insert-1.3b {
+ set v [catch {execsql {INSERT INTO test1 VALUES(1,2,3,4)}} msg]
+ lappend v $msg
+} {1 {table test1 has 3 columns but 4 values were supplied}}
+do_test insert-1.3c {
+ set v [catch {execsql {INSERT INTO test1(one,two) VALUES(1,2,3,4)}} msg]
+ lappend v $msg
+} {1 {4 values for 2 columns}}
+do_test insert-1.3d {
+ set v [catch {execsql {INSERT INTO test1(one,two) VALUES(1)}} msg]
+ lappend v $msg
+} {1 {1 values for 2 columns}}
+
+# Try to insert into a non-existent column of a table.
+#
+do_test insert-1.4 {
+ set v [catch {execsql {INSERT INTO test1(one,four) VALUES(1,2)}} msg]
+ lappend v $msg
+} {1 {table test1 has no column named four}}
+
+# Make sure the inserts actually happen
+#
+do_test insert-1.5 {
+ execsql {INSERT INTO test1 VALUES(1,2,3)}
+ execsql {SELECT * FROM test1}
+} {1 2 3}
+do_test insert-1.5b {
+ execsql {INSERT INTO test1 VALUES(4,5,6)}
+ execsql {SELECT * FROM test1 ORDER BY one}
+} {1 2 3 4 5 6}
+do_test insert-1.5c {
+ execsql {INSERT INTO test1 VALUES(7,8,9)}
+ execsql {SELECT * FROM test1 ORDER BY one}
+} {1 2 3 4 5 6 7 8 9}
+
+do_test insert-1.6 {
+ execsql {DELETE FROM test1}
+ execsql {INSERT INTO test1(one,two) VALUES(1,2)}
+ execsql {SELECT * FROM test1 ORDER BY one}
+} {1 2 {}}
+do_test insert-1.6b {
+ execsql {INSERT INTO test1(two,three) VALUES(5,6)}
+ execsql {SELECT * FROM test1 ORDER BY one}
+} {{} 5 6 1 2 {}}
+do_test insert-1.6c {
+ execsql {INSERT INTO test1(three,one) VALUES(7,8)}
+ execsql {SELECT * FROM test1 ORDER BY one}
+} {{} 5 6 1 2 {} 8 {} 7}
+
+# A table to use for testing default values
+#
+do_test insert-2.1 {
+ execsql {
+ CREATE TABLE test2(
+ f1 int default -111,
+ f2 real default +4.32,
+ f3 int default +222,
+ f4 int default 7.89
+ )
+ }
+ execsql {SELECT * from test2}
+} {}
+do_test insert-2.2 {
+ execsql {INSERT INTO test2(f1,f3) VALUES(+10,-10)}
+ execsql {SELECT * FROM test2}
+} {10 4.32 -10 7.89}
+do_test insert-2.3 {
+ execsql {INSERT INTO test2(f2,f4) VALUES(1.23,-3.45)}
+ execsql {SELECT * FROM test2 WHERE f1==-111}
+} {-111 1.23 222 -3.45}
+do_test insert-2.4 {
+ execsql {INSERT INTO test2(f1,f2,f4) VALUES(77,+1.23,3.45)}
+ execsql {SELECT * FROM test2 WHERE f1==77}
+} {77 1.23 222 3.45}
+do_test insert-2.10 {
+ execsql {
+ DROP TABLE test2;
+ CREATE TABLE test2(
+ f1 int default 111,
+ f2 real default -4.32,
+ f3 text default hi,
+ f4 text default 'abc-123',
+ f5 varchar(10)
+ )
+ }
+ execsql {SELECT * from test2}
+} {}
+do_test insert-2.11 {
+ execsql {INSERT INTO test2(f2,f4) VALUES(-2.22,'hi!')}
+ execsql {SELECT * FROM test2}
+} {111 -2.22 hi hi! {}}
+do_test insert-2.12 {
+ execsql {INSERT INTO test2(f1,f5) VALUES(1,'xyzzy')}
+ execsql {SELECT * FROM test2 ORDER BY f1}
+} {1 -4.32 hi abc-123 xyzzy 111 -2.22 hi hi! {}}
+
+# Do additional inserts with default values, but this time
+# on a table that has indices. In particular we want to verify
+# that the correct default values are inserted into the indices.
+#
+do_test insert-3.1 {
+ execsql {
+ DELETE FROM test2;
+ CREATE INDEX index9 ON test2(f1,f2);
+ CREATE INDEX indext ON test2(f4,f5);
+ SELECT * from test2;
+ }
+} {}
+
+# Update for sqlite3 v3:
+# Change the 111 to '111' in the following two test cases, because
+# the default value is being inserted as a string. TODO: It shouldn't be.
+do_test insert-3.2 {
+ execsql {INSERT INTO test2(f2,f4) VALUES(-3.33,'hum')}
+ execsql {SELECT * FROM test2 WHERE f1='111' AND f2=-3.33}
+} {111 -3.33 hi hum {}}
+do_test insert-3.3 {
+ execsql {INSERT INTO test2(f1,f2,f5) VALUES(22,-4.44,'wham')}
+ execsql {SELECT * FROM test2 WHERE f1='111' AND f2=-3.33}
+} {111 -3.33 hi hum {}}
+do_test insert-3.4 {
+ execsql {SELECT * FROM test2 WHERE f1=22 AND f2=-4.44}
+} {22 -4.44 hi abc-123 wham}
+ifcapable {reindex} {
+ do_test insert-3.5 {
+ execsql REINDEX
+ } {}
+}
+integrity_check insert-3.5
+
+# Test of expressions in the VALUES clause
+#
+do_test insert-4.1 {
+ execsql {
+ CREATE TABLE t3(a,b,c);
+ INSERT INTO t3 VALUES(1+2+3,4,5);
+ SELECT * FROM t3;
+ }
+} {6 4 5}
+do_test insert-4.2 {
+ ifcapable subquery {
+ execsql {INSERT INTO t3 VALUES((SELECT max(a) FROM t3)+1,5,6);}
+ } else {
+ set maxa [execsql {SELECT max(a) FROM t3}]
+ execsql "INSERT INTO t3 VALUES($maxa+1,5,6);"
+ }
+ execsql {
+ SELECT * FROM t3 ORDER BY a;
+ }
+} {6 4 5 7 5 6}
+ifcapable subquery {
+ do_test insert-4.3 {
+ catchsql {
+ INSERT INTO t3 VALUES((SELECT max(a) FROM t3)+1,t3.a,6);
+ SELECT * FROM t3 ORDER BY a;
+ }
+ } {1 {no such column: t3.a}}
+}
+do_test insert-4.4 {
+ ifcapable subquery {
+ execsql {INSERT INTO t3 VALUES((SELECT b FROM t3 WHERE a=0),6,7);}
+ } else {
+ set b [execsql {SELECT b FROM t3 WHERE a = 0}]
+ if {$b==""} {set b NULL}
+ execsql "INSERT INTO t3 VALUES($b,6,7);"
+ }
+ execsql {
+ SELECT * FROM t3 ORDER BY a;
+ }
+} {{} 6 7 6 4 5 7 5 6}
+do_test insert-4.5 {
+ execsql {
+ SELECT b,c FROM t3 WHERE a IS NULL;
+ }
+} {6 7}
+do_test insert-4.6 {
+ catchsql {
+ INSERT INTO t3 VALUES(notafunc(2,3),2,3);
+ }
+} {1 {no such function: notafunc}}
+do_test insert-4.7 {
+ execsql {
+ INSERT INTO t3 VALUES(min(1,2,3),max(1,2,3),99);
+ SELECT * FROM t3 WHERE c=99;
+ }
+} {1 3 99}
+
+# Test the ability to insert from a temporary table into itself.
+# Ticket #275.
+#
+ifcapable tempdb {
+ do_test insert-5.1 {
+ execsql {
+ CREATE TEMP TABLE t4(x);
+ INSERT INTO t4 VALUES(1);
+ SELECT * FROM t4;
+ }
+ } {1}
+ do_test insert-5.2 {
+ execsql {
+ INSERT INTO t4 SELECT x+1 FROM t4;
+ SELECT * FROM t4;
+ }
+ } {1 2}
+ ifcapable {explain} {
+ do_test insert-5.3 {
+ # verify that a temporary table is used to copy t4 to t4
+ set x [execsql {
+ EXPLAIN INSERT INTO t4 SELECT x+2 FROM t4;
+ }]
+ expr {[lsearch $x OpenEphemeral]>0}
+ } {1}
+ }
+
+ do_test insert-5.4 {
+ # Verify that table "test1" begins on page 3. This should be the same
+ # page number used by "t4" above.
+ #
+ # Update for v3 - the first table now begins on page 2 of each file, not 3.
+ execsql {
+ SELECT rootpage FROM sqlite_master WHERE name='test1';
+ }
+ } [expr $AUTOVACUUM?3:2]
+ do_test insert-5.5 {
+ # Verify that "t4" begins on page 3.
+ #
+ # Update for v3 - the first table now begins on page 2 of each file, not 3.
+ execsql {
+ SELECT rootpage FROM sqlite_temp_master WHERE name='t4';
+ }
+ } {2}
+ do_test insert-5.6 {
+ # This should not use an intermediate temporary table.
+ execsql {
+ INSERT INTO t4 SELECT one FROM test1 WHERE three=7;
+ SELECT * FROM t4
+ }
+ } {1 2 8}
+ ifcapable {explain} {
+ do_test insert-5.7 {
+ # verify that no temporary table is used to copy test1 to t4
+ set x [execsql {
+ EXPLAIN INSERT INTO t4 SELECT one FROM test1;
+ }]
+ expr {[lsearch $x OpenTemp]>0}
+ } {0}
+ }
+}
+
+# Ticket #334: REPLACE statement corrupting indices.
+#
+ifcapable conflict {
+ # The REPLACE command is not available if SQLITE_OMIT_CONFLICT is
+ # defined at compilation time.
+ do_test insert-6.1 {
+ execsql {
+ CREATE TABLE t1(a INTEGER PRIMARY KEY, b UNIQUE);
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(2,3);
+ SELECT b FROM t1 WHERE b=2;
+ }
+ } {2}
+ do_test insert-6.2 {
+ execsql {
+ REPLACE INTO t1 VALUES(1,4);
+ SELECT b FROM t1 WHERE b=2;
+ }
+ } {}
+ do_test insert-6.3 {
+ execsql {
+ UPDATE OR REPLACE t1 SET a=2 WHERE b=4;
+ SELECT * FROM t1 WHERE b=4;
+ }
+ } {2 4}
+ do_test insert-6.4 {
+ execsql {
+ SELECT * FROM t1 WHERE b=3;
+ }
+ } {}
+ ifcapable {reindex} {
+ do_test insert-6.5 {
+ execsql REINDEX
+ } {}
+ }
+ do_test insert-6.6 {
+ execsql {
+ DROP TABLE t1;
+ }
+ } {}
+}
+
+# Test that the special optimization for queries of the form
+# "SELECT max(x) FROM tbl" where there is an index on tbl(x) works with
+# INSERT statements.
+do_test insert-7.1 {
+ execsql {
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ CREATE INDEX i1 ON t1(a);
+ }
+} {}
+do_test insert-7.2 {
+ execsql {
+ INSERT INTO t1 SELECT max(a) FROM t1;
+ }
+} {}
+do_test insert-7.3 {
+ execsql {
+ SELECT a FROM t1;
+ }
+} {1 2 2}
+
+# Ticket #1140: Check for an infinite loop in the algorithm that tests
+# to see if the right-hand side of an INSERT...SELECT references the left-hand
+# side.
+#
+ifcapable subquery&&compound {
+ do_test insert-8.1 {
+ execsql {
+ INSERT INTO t3 SELECT * FROM (SELECT * FROM t3 UNION ALL SELECT 1,2,3)
+ }
+ } {}
+}
+
+
+integrity_check insert-99.0
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/insert2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/insert2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,278 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the INSERT statement that takes its
+# result from a SELECT.
+#
+# $Id: insert2.test,v 1.18 2005/10/05 11:35:09 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create some tables with data that we can select against
+#
+do_test insert2-1.0 {
+ execsql {CREATE TABLE d1(n int, log int);}
+ for {set i 1} {$i<=20} {incr i} {
+ for {set j 0} {pow(2,$j)<$i} {incr j} {}
+ execsql "INSERT INTO d1 VALUES($i,$j)"
+ }
+ execsql {SELECT * FROM d1 ORDER BY n}
+} {1 0 2 1 3 2 4 2 5 3 6 3 7 3 8 3 9 4 10 4 11 4 12 4 13 4 14 4 15 4 16 4 17 5 18 5 19 5 20 5}
+
+# Insert into a new table from the old one.
+#
+do_test insert2-1.1.1 {
+ execsql {
+ CREATE TABLE t1(log int, cnt int);
+ PRAGMA count_changes=on;
+ }
+ ifcapable explain {
+ execsql {
+ EXPLAIN INSERT INTO t1 SELECT log, count(*) FROM d1 GROUP BY log;
+ }
+ }
+ execsql {
+ INSERT INTO t1 SELECT log, count(*) FROM d1 GROUP BY log;
+ }
+} {6}
+do_test insert2-1.1.2 {
+ db changes
+} {6}
+do_test insert2-1.1.3 {
+ execsql {SELECT * FROM t1 ORDER BY log}
+} {0 1 1 1 2 2 3 4 4 8 5 4}
+
+ifcapable compound {
+do_test insert2-1.2.1 {
+ catch {execsql {DROP TABLE t1}}
+ execsql {
+ CREATE TABLE t1(log int, cnt int);
+ INSERT INTO t1
+ SELECT log, count(*) FROM d1 GROUP BY log
+ EXCEPT SELECT n-1,log FROM d1;
+ }
+} {4}
+do_test insert2-1.2.2 {
+ execsql {
+ SELECT * FROM t1 ORDER BY log;
+ }
+} {0 1 3 4 4 8 5 4}
+do_test insert2-1.3.1 {
+ catch {execsql {DROP TABLE t1}}
+ execsql {
+ CREATE TABLE t1(log int, cnt int);
+ PRAGMA count_changes=off;
+ INSERT INTO t1
+ SELECT log, count(*) FROM d1 GROUP BY log
+ INTERSECT SELECT n-1,log FROM d1;
+ }
+} {}
+do_test insert2-1.3.2 {
+ execsql {
+ SELECT * FROM t1 ORDER BY log;
+ }
+} {1 1 2 2}
+} ;# ifcapable compound
+execsql {PRAGMA count_changes=off;}
+
+do_test insert2-1.4 {
+ catch {execsql {DROP TABLE t1}}
+ set r [execsql {
+ CREATE TABLE t1(log int, cnt int);
+ CREATE INDEX i1 ON t1(log);
+ CREATE INDEX i2 ON t1(cnt);
+ INSERT INTO t1 SELECT log, count() FROM d1 GROUP BY log;
+ SELECT * FROM t1 ORDER BY log;
+ }]
+ lappend r [execsql {SELECT cnt FROM t1 WHERE log=3}]
+ lappend r [execsql {SELECT log FROM t1 WHERE cnt=4 ORDER BY log}]
+} {0 1 1 1 2 2 3 4 4 8 5 4 4 {3 5}}
+
+do_test insert2-2.0 {
+ execsql {
+ CREATE TABLE t3(a,b,c);
+ CREATE TABLE t4(x,y);
+ INSERT INTO t4 VALUES(1,2);
+ SELECT * FROM t4;
+ }
+} {1 2}
+do_test insert2-2.1 {
+ execsql {
+ INSERT INTO t3(a,c) SELECT * FROM t4;
+ SELECT * FROM t3;
+ }
+} {1 {} 2}
+do_test insert2-2.2 {
+ execsql {
+ DELETE FROM t3;
+ INSERT INTO t3(c,b) SELECT * FROM t4;
+ SELECT * FROM t3;
+ }
+} {{} 2 1}
+do_test insert2-2.3 {
+ execsql {
+ DELETE FROM t3;
+ INSERT INTO t3(c,a,b) SELECT x, 'hi', y FROM t4;
+ SELECT * FROM t3;
+ }
+} {hi 2 1}
+
+integrity_check insert2-3.0
+
+# Fill table t4 with lots of data
+#
+do_test insert2-3.1 {
+ execsql {
+ SELECT * from t4;
+ }
+} {1 2}
+do_test insert2-3.2 {
+ set x [db total_changes]
+ execsql {
+ BEGIN;
+ INSERT INTO t4 VALUES(2,4);
+ INSERT INTO t4 VALUES(3,6);
+ INSERT INTO t4 VALUES(4,8);
+ INSERT INTO t4 VALUES(5,10);
+ INSERT INTO t4 VALUES(6,12);
+ INSERT INTO t4 VALUES(7,14);
+ INSERT INTO t4 VALUES(8,16);
+ INSERT INTO t4 VALUES(9,18);
+ INSERT INTO t4 VALUES(10,20);
+ COMMIT;
+ }
+ expr [db total_changes] - $x
+} {9}
+do_test insert2-3.2.1 {
+ execsql {
+ SELECT count(*) FROM t4;
+ }
+} {10}
+do_test insert2-3.3 {
+ ifcapable subquery {
+ execsql {
+ BEGIN;
+ INSERT INTO t4 SELECT x+(SELECT max(x) FROM t4),y FROM t4;
+ INSERT INTO t4 SELECT x+(SELECT max(x) FROM t4),y FROM t4;
+ INSERT INTO t4 SELECT x+(SELECT max(x) FROM t4),y FROM t4;
+ INSERT INTO t4 SELECT x+(SELECT max(x) FROM t4),y FROM t4;
+ COMMIT;
+ SELECT count(*) FROM t4;
+ }
+ } else {
+ db function max_x_t4 {execsql {SELECT max(x) FROM t4}}
+ execsql {
+ BEGIN;
+ INSERT INTO t4 SELECT x+max_x_t4() ,y FROM t4;
+ INSERT INTO t4 SELECT x+max_x_t4() ,y FROM t4;
+ INSERT INTO t4 SELECT x+max_x_t4() ,y FROM t4;
+ INSERT INTO t4 SELECT x+max_x_t4() ,y FROM t4;
+ COMMIT;
+ SELECT count(*) FROM t4;
+ }
+ }
+} {160}
+do_test insert2-3.4 {
+ execsql {
+ BEGIN;
+ UPDATE t4 SET y='lots of data for the row where x=' || x
+ || ' and y=' || y || ' - even more data to fill space';
+ COMMIT;
+ SELECT count(*) FROM t4;
+ }
+} {160}
+do_test insert2-3.5 {
+ ifcapable subquery {
+ execsql {
+ BEGIN;
+ INSERT INTO t4 SELECT x+(SELECT max(x)+1 FROM t4),y FROM t4;
+ SELECT count(*) from t4;
+ ROLLBACK;
+ }
+ } else {
+ execsql {
+ BEGIN;
+ INSERT INTO t4 SELECT x+max_x_t4()+1,y FROM t4;
+ SELECT count(*) from t4;
+ ROLLBACK;
+ }
+ }
+} {320}
+do_test insert2-3.6 {
+ execsql {
+ SELECT count(*) FROM t4;
+ }
+} {160}
+do_test insert2-3.7 {
+ execsql {
+ BEGIN;
+ DELETE FROM t4 WHERE x!=123;
+ SELECT count(*) FROM t4;
+ ROLLBACK;
+ }
+} {1}
+do_test insert2-3.8 {
+ db changes
+} {159}
+integrity_check insert2-3.9
+
+# Ticket #901
+#
+ifcapable tempdb {
+ do_test insert2-4.1 {
+ execsql {
+ CREATE TABLE Dependencies(depId integer primary key,
+ class integer, name str, flag str);
+ CREATE TEMPORARY TABLE DepCheck(troveId INT, depNum INT,
+ flagCount INT, isProvides BOOL, class INTEGER, name STRING,
+ flag STRING);
+ INSERT INTO DepCheck
+ VALUES(-1, 0, 1, 0, 2, 'libc.so.6', 'GLIBC_2.0');
+ INSERT INTO Dependencies
+ SELECT DISTINCT
+ NULL,
+ DepCheck.class,
+ DepCheck.name,
+ DepCheck.flag
+ FROM DepCheck LEFT OUTER JOIN Dependencies ON
+ DepCheck.class == Dependencies.class AND
+ DepCheck.name == Dependencies.name AND
+ DepCheck.flag == Dependencies.flag
+ WHERE
+ Dependencies.depId is NULL;
+ };
+ } {}
+}
+
+#--------------------------------------------------------------------
+# Test that the INSERT works when the SELECT statement (a) references
+# the table being inserted into and (b) is optimized to use an index
+# only.
+do_test insert2-5.1 {
+ execsql {
+ CREATE TABLE t2(a, b);
+ INSERT INTO t2 VALUES(1, 2);
+ CREATE INDEX t2i1 ON t2(a);
+ INSERT INTO t2 SELECT a, 3 FROM t2 WHERE a = 1;
+ SELECT * FROM t2;
+ }
+} {1 2 1 3}
+ifcapable subquery {
+ do_test insert2-5.2 {
+ execsql {
+ INSERT INTO t2 SELECT (SELECT a FROM t2), 4;
+ SELECT * FROM t2;
+ }
+ } {1 2 1 3 1 4}
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/insert3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/insert3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,168 @@
+# 2005 January 13
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing corner cases of the INSERT statement.
+#
+# $Id: insert3.test,v 1.5 2006/08/25 23:42:53 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# All the tests in this file require trigger support
+#
+ifcapable {trigger} {
+
+# Create a table and a corresponding insert trigger. Do a self-insert
+# into the table.
+#
+do_test insert3-1.0 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ CREATE TABLE log(x UNIQUE, y);
+ CREATE TRIGGER r1 AFTER INSERT ON t1 BEGIN
+ UPDATE log SET y=y+1 WHERE x=new.a;
+ INSERT OR IGNORE INTO log VALUES(new.a, 1);
+ END;
+ INSERT INTO t1 VALUES('hello','world');
+ INSERT INTO t1 VALUES(5,10);
+ SELECT * FROM log ORDER BY x;
+ }
+} {5 1 hello 1}
+do_test insert3-1.1 {
+ execsql {
+ INSERT INTO t1 SELECT a, b+10 FROM t1;
+ SELECT * FROM log ORDER BY x;
+ }
+} {5 2 hello 2}
+do_test insert3-1.2 {
+ execsql {
+ CREATE TABLE log2(x PRIMARY KEY,y);
+ CREATE TRIGGER r2 BEFORE INSERT ON t1 BEGIN
+ UPDATE log2 SET y=y+1 WHERE x=new.b;
+ INSERT OR IGNORE INTO log2 VALUES(new.b,1);
+ END;
+ INSERT INTO t1 VALUES(453,'hi');
+ SELECT * FROM log ORDER BY x;
+ }
+} {5 2 453 1 hello 2}
+do_test insert3-1.3 {
+ execsql {
+ SELECT * FROM log2 ORDER BY x;
+ }
+} {hi 1}
+ifcapable compound {
+ do_test insert3-1.4.1 {
+ execsql {
+ INSERT INTO t1 SELECT * FROM t1;
+ SELECT 'a:', x, y FROM log UNION ALL
+ SELECT 'b:', x, y FROM log2 ORDER BY x;
+ }
+ } {a: 5 4 b: 10 2 b: 20 1 a: 453 2 a: hello 4 b: hi 2 b: world 1}
+ do_test insert3-1.4.2 {
+ execsql {
+ SELECT 'a:', x, y FROM log UNION ALL
+ SELECT 'b:', x, y FROM log2 ORDER BY x, y;
+ }
+ } {a: 5 4 b: 10 2 b: 20 1 a: 453 2 a: hello 4 b: hi 2 b: world 1}
+ do_test insert3-1.5 {
+ execsql {
+ INSERT INTO t1(a) VALUES('xyz');
+ SELECT * FROM log ORDER BY x;
+ }
+ } {5 4 453 2 hello 4 xyz 1}
+}
+
+do_test insert3-2.1 {
+ execsql {
+ CREATE TABLE t2(
+ a INTEGER PRIMARY KEY,
+ b DEFAULT 'b',
+ c DEFAULT 'c'
+ );
+ CREATE TABLE t2dup(a,b,c);
+ CREATE TRIGGER t2r1 BEFORE INSERT ON t2 BEGIN
+ INSERT INTO t2dup(a,b,c) VALUES(new.a,new.b,new.c);
+ END;
+ INSERT INTO t2(a) VALUES(123);
+ INSERT INTO t2(b) VALUES(234);
+ INSERT INTO t2(c) VALUES(345);
+ SELECT * FROM t2dup;
+ }
+} {123 b c -1 234 c -1 b 345}
+do_test insert3-2.2 {
+ execsql {
+ DELETE FROM t2dup;
+ INSERT INTO t2(a) SELECT 1 FROM t1 LIMIT 1;
+ INSERT INTO t2(b) SELECT 987 FROM t1 LIMIT 1;
+ INSERT INTO t2(c) SELECT 876 FROM t1 LIMIT 1;
+ SELECT * FROM t2dup;
+ }
+} {1 b c -1 987 c -1 b 876}
+
+# Test for proper detection of malformed WHEN clauses on INSERT triggers.
+#
+do_test insert3-3.1 {
+ execsql {
+ CREATE TABLE t3(a,b,c);
+ CREATE TRIGGER t3r1 BEFORE INSERT on t3 WHEN nosuchcol BEGIN
+ SELECT 'illegal WHEN clause';
+ END;
+ }
+} {}
+do_test insert3-3.2 {
+ catchsql {
+ INSERT INTO t3 VALUES(1,2,3)
+ }
+} {1 {no such column: nosuchcol}}
+do_test insert3-3.3 {
+ execsql {
+ CREATE TABLE t4(a,b,c);
+ CREATE TRIGGER t4r1 AFTER INSERT on t4 WHEN nosuchcol BEGIN
+ SELECT 'illegal WHEN clause';
+ END;
+ }
+} {}
+do_test insert3-3.4 {
+ catchsql {
+ INSERT INTO t4 VALUES(1,2,3)
+ }
+} {1 {no such column: nosuchcol}}
+
+} ;# ifcapable {trigger}
+
+# Tests for the INSERT INTO ... DEFAULT VALUES construct
+#
+do_test insert4-3.5 {
+ execsql {
+ CREATE TABLE t5(
+ a INTEGER PRIMARY KEY,
+ b DEFAULT 'xyz'
+ );
+ INSERT INTO t5 DEFAULT VALUES;
+ SELECT * FROM t5;
+ }
+} {1 xyz}
+do_test insert4-3.6 {
+ execsql {
+ INSERT INTO t5 DEFAULT VALUES;
+ SELECT * FROM t5;
+ }
+} {1 xyz 2 xyz}
+do_test insert4-3.7 {
+ execsql {
+ CREATE TABLE t6(x,y DEFAULT 4.3, z DEFAULT x'6869');
+ INSERT INTO t6 DEFAULT VALUES;
+ SELECT * FROM t6;
+ }
+} {{} 4.3 hi}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/interrupt.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/interrupt.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,197 @@
+# 2004 Feb 8
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is the sqlite_interrupt() API.
+#
+# $Id: interrupt.test,v 1.13 2006/07/17 00:02:46 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+set DB [sqlite3_connection_pointer db]
+
+# Compute a checksum on the entire database.
+#
+proc cksum {{db db}} {
+ set txt [$db eval {SELECT name, type, sql FROM sqlite_master}]\n
+ foreach tbl [$db eval {SELECT name FROM sqlite_master WHERE type='table'}] {
+ append txt [$db eval "SELECT * FROM $tbl"]\n
+ }
+ foreach prag {default_synchronous default_cache_size} {
+ append txt $prag-[$db eval "PRAGMA $prag"]\n
+ }
+ set cksum [string length $txt]-[md5 $txt]
+ # puts $cksum-[file size test.db]
+ return $cksum
+}
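+
+# A hedged usage sketch (editorial, not part of the original script): the
+# checksum is captured before a statement runs and compared again after an
+# interrupt, so any change left behind by an interrupted statement shows up
+# as a mismatch:
+#
+#   set before [cksum]
+#   ;# ... run a statement and interrupt it part-way through ...
+#   if {[cksum] ne $before} { puts "interrupted statement modified the db" }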
+
+# This routine attempts to execute the sql in $sql. It triggers an
+# interrupt at progressively later and later points during the processing
+# and checks to make sure SQLITE_INTERRUPT is returned. Eventually,
+# the routine completes successfully.
+#
+proc interrupt_test {testid sql result {initcnt 0}} {
+ set orig_sum [cksum]
+ set i $initcnt
+ while 1 {
+ incr i
+ set ::sqlite_interrupt_count $i
+ do_test $testid.$i.1 [format {
+ set ::r [catchsql %s]
+ set ::code [db errorcode]
+ expr {$::code==0 || $::code==9}
+ } [list $sql]] 1
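+ # Error code 9 is SQLITE_INTERRUPT (0 is SQLITE_OK). If the statement was
+ # cut short, the database must be unchanged; otherwise it ran to completion
+ # and should have produced the expected result.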
+ if {$::code==9} {
+ do_test $testid.$i.2 {
+ cksum
+ } $orig_sum
+ } else {
+ do_test $testid.$i.99 {
+ set ::r
+ } [list 0 $result]
+ break
+ }
+ }
+ set ::sqlite_interrupt_count 0
+}
+
+do_test interrupt-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ SELECT name FROM sqlite_master;
+ }
+} {t1}
+interrupt_test interrupt-1.2 {DROP TABLE t1} {}
+do_test interrupt-1.3 {
+ execsql {
+ SELECT name FROM sqlite_master;
+ }
+} {}
+integrity_check interrupt-1.4
+
+do_test interrrupt-2.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,randstr(300,400));
+ INSERT INTO t1 SELECT a+1, randstr(300,400) FROM t1;
+ INSERT INTO t1 SELECT a+2, a || '-' || b FROM t1;
+ INSERT INTO t1 SELECT a+4, a || '-' || b FROM t1;
+ INSERT INTO t1 SELECT a+8, a || '-' || b FROM t1;
+ INSERT INTO t1 SELECT a+16, a || '-' || b FROM t1;
+ INSERT INTO t1 SELECT a+32, a || '-' || b FROM t1;
+ COMMIT;
+ UPDATE t1 SET b=substr(b,-5,5);
+ SELECT count(*) from t1;
+ }
+} 64
+set origsize [file size test.db]
+set cksum [db eval {SELECT md5sum(a || b) FROM t1}]
+ifcapable {vacuum} {
+ interrupt_test interrupt-2.2 {VACUUM} {} 100
+}
+do_test interrupt-2.3 {
+ execsql {
+ SELECT md5sum(a || b) FROM t1;
+ }
+} $cksum
+ifcapable {vacuum && !default_autovacuum} {
+ do_test interrupt-2.4 {
+ expr {$::origsize>[file size test.db]}
+ } 1
+}
+ifcapable {explain} {
+ do_test interrupt-2.5 {
+ set sql {EXPLAIN SELECT max(a,b), a, b FROM t1}
+ execsql $sql
+ set rc [catch {db eval $sql {sqlite3_interrupt $DB}} msg]
+ lappend rc $msg
+ } {1 interrupted}
+}
+integrity_check interrupt-2.6
+
+# Ticket #594. If an interrupt occurs in the middle of a transaction
+# and that transaction is later rolled back, the internal schema tables
+# would not reset.
+#
+ifcapable tempdb {
+ for {set i 1} {$i<50} {incr i 5} {
+ do_test interrupt-3.$i.1 {
+ execsql {
+ BEGIN;
+ CREATE TEMP TABLE t2(x,y);
+ SELECT name FROM sqlite_temp_master;
+ }
+ } {t2}
+ do_test interrupt-3.$i.2 {
+ set ::sqlite_interrupt_count $::i
+ catchsql {
+ INSERT INTO t2 SELECT * FROM t1;
+ }
+ } {1 interrupted}
+ do_test interrupt-3.$i.3 {
+ execsql {
+ SELECT name FROM sqlite_temp_master;
+ }
+ } {t2}
+ do_test interrupt-3.$i.4 {
+ catchsql {
+ ROLLBACK
+ }
+ } {0 {}}
+ do_test interrupt-3.$i.5 {
+ catchsql {SELECT name FROM sqlite_temp_master};
+ execsql {
+ SELECT name FROM sqlite_temp_master;
+ }
+ } {}
+ }
+}
+
+# There are reports of a memory leak if an interrupt occurs during
+# the beginning of a complex query - before the first callback. We
+# will try to reproduce it here:
+#
+execsql {
+ CREATE TABLE t2(a,b,c);
+ INSERT INTO t2 SELECT round(a/10), randstr(50,80), randstr(50,60) FROM t1;
+}
+set sql {
+ SELECT max(min(b,c)), min(max(b,c)), a FROM t2 GROUP BY a ORDER BY a;
+}
+set sqlite_interrupt_count 1000000
+execsql $sql
+set max_count [expr {1000000-$sqlite_interrupt_count}]
+for {set i 1} {$i<$max_count-5} {incr i 1} {
+ do_test interrupt-4.$i.1 {
+ set ::sqlite_interrupt_count $::i
+ catchsql $sql
+ } {1 interrupted}
+}
+
+# Interrupt during parsing
+#
+do_test interrupt-5.1 {
+ proc fake_interrupt {args} {sqlite3_interrupt $::DB; return SQLITE_OK}
+ db collation_needed fake_interrupt
+ catchsql {
+ CREATE INDEX fake ON fake1(a COLLATE fake_collation, b, c DESC);
+ }
+} {1 interrupt}
+do_test interrupt-5.2 {
+ proc fake_interrupt {args} {db interrupt; return SQLITE_OK}
+ db collation_needed fake_interrupt
+ catchsql {
+ CREATE INDEX fake ON fake1(a COLLATE fake_collation, b, c DESC);
+ }
+} {1 interrupt}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/intpkey.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/intpkey.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,605 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for the special processing associated
+# with INTEGER PRIMARY KEY columns.
+#
+# $Id: intpkey.test,v 1.23 2005/07/21 03:48:20 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a table with a primary key and a datatype other than
+# integer
+#
+do_test intpkey-1.0 {
+ execsql {
+ CREATE TABLE t1(a TEXT PRIMARY KEY, b, c);
+ }
+} {}
+
+# There should be an index associated with the primary key
+#
+do_test intpkey-1.1 {
+ execsql {
+ SELECT name FROM sqlite_master
+ WHERE type='index' AND tbl_name='t1';
+ }
+} {sqlite_autoindex_t1_1}
+
+# Now create a table with an integer primary key and verify that
+# there is no associated index.
+#
+do_test intpkey-1.2 {
+ execsql {
+ DROP TABLE t1;
+ CREATE TABLE t1(a INTEGER PRIMARY KEY, b, c);
+ SELECT name FROM sqlite_master
+ WHERE type='index' AND tbl_name='t1';
+ }
+} {}
+
+# Insert some records into the new table. Specify the primary key
+# and verify that the key is used as the record number.
+#
+do_test intpkey-1.3 {
+ execsql {
+ INSERT INTO t1 VALUES(5,'hello','world');
+ }
+ db last_insert_rowid
+} {5}
+do_test intpkey-1.4 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {5 hello world}
+do_test intpkey-1.5 {
+ execsql {
+ SELECT rowid, * FROM t1;
+ }
+} {5 5 hello world}
+
+# Attempting to insert a duplicate primary key should give a constraint
+# failure.
+#
+do_test intpkey-1.6 {
+ set r [catch {execsql {
+ INSERT INTO t1 VALUES(5,'second','entry');
+ }} msg]
+ lappend r $msg
+} {1 {PRIMARY KEY must be unique}}
+do_test intpkey-1.7 {
+ execsql {
+ SELECT rowid, * FROM t1;
+ }
+} {5 5 hello world}
+do_test intpkey-1.8 {
+ set r [catch {execsql {
+ INSERT INTO t1 VALUES(6,'second','entry');
+ }} msg]
+ lappend r $msg
+} {0 {}}
+do_test intpkey-1.8.1 {
+ db last_insert_rowid
+} {6}
+do_test intpkey-1.9 {
+ execsql {
+ SELECT rowid, * FROM t1;
+ }
+} {5 5 hello world 6 6 second entry}
+
+# A ROWID is automatically generated for new records that do not specify
+# the integer primary key.
+#
+do_test intpkey-1.10 {
+ execsql {
+ INSERT INTO t1(b,c) VALUES('one','two');
+ SELECT b FROM t1 ORDER BY b;
+ }
+} {hello one second}
+
+# Try to change the ROWID for the new entry.
+#
+do_test intpkey-1.11 {
+ execsql {
+ UPDATE t1 SET a=4 WHERE b='one';
+ SELECT * FROM t1;
+ }
+} {4 one two 5 hello world 6 second entry}
+
+# Make sure SELECT statements are able to use the primary key column
+# as an index.
+#
+do_test intpkey-1.12.1 {
+ execsql {
+ SELECT * FROM t1 WHERE a==4;
+ }
+} {4 one two}
+do_test intpkey-1.12.2 {
+ set sqlite_query_plan
+} {t1 *}
+
+# Try to insert a non-integer value into the primary key field. This
+# should result in a data type mismatch.
+#
+do_test intpkey-1.13.1 {
+ set r [catch {execsql {
+ INSERT INTO t1 VALUES('x','y','z');
+ }} msg]
+ lappend r $msg
+} {1 {datatype mismatch}}
+do_test intpkey-1.13.2 {
+ set r [catch {execsql {
+ INSERT INTO t1 VALUES('','y','z');
+ }} msg]
+ lappend r $msg
+} {1 {datatype mismatch}}
+do_test intpkey-1.14 {
+ set r [catch {execsql {
+ INSERT INTO t1 VALUES(3.4,'y','z');
+ }} msg]
+ lappend r $msg
+} {1 {datatype mismatch}}
+do_test intpkey-1.15 {
+ set r [catch {execsql {
+ INSERT INTO t1 VALUES(-3,'y','z');
+ }} msg]
+ lappend r $msg
+} {0 {}}
+do_test intpkey-1.16 {
+ execsql {SELECT * FROM t1}
+} {-3 y z 4 one two 5 hello world 6 second entry}
+
+#### INDICES
+# Check to make sure indices work correctly with integer primary keys
+#
+do_test intpkey-2.1 {
+ execsql {
+ CREATE INDEX i1 ON t1(b);
+ SELECT * FROM t1 WHERE b=='y'
+ }
+} {-3 y z}
+do_test intpkey-2.1.1 {
+ execsql {
+ SELECT * FROM t1 WHERE b=='y' AND rowid<0
+ }
+} {-3 y z}
+do_test intpkey-2.1.2 {
+ execsql {
+ SELECT * FROM t1 WHERE b=='y' AND rowid<0 AND rowid>=-20
+ }
+} {-3 y z}
+do_test intpkey-2.1.3 {
+ execsql {
+ SELECT * FROM t1 WHERE b>='y'
+ }
+} {-3 y z}
+do_test intpkey-2.1.4 {
+ execsql {
+ SELECT * FROM t1 WHERE b>='y' AND rowid<10
+ }
+} {-3 y z}
+
+do_test intpkey-2.2 {
+ execsql {
+ UPDATE t1 SET a=8 WHERE b=='y';
+ SELECT * FROM t1 WHERE b=='y';
+ }
+} {8 y z}
+do_test intpkey-2.3 {
+ execsql {
+ SELECT rowid, * FROM t1;
+ }
+} {4 4 one two 5 5 hello world 6 6 second entry 8 8 y z}
+do_test intpkey-2.4 {
+ execsql {
+ SELECT rowid, * FROM t1 WHERE b<'second'
+ }
+} {5 5 hello world 4 4 one two}
+do_test intpkey-2.4.1 {
+ execsql {
+ SELECT rowid, * FROM t1 WHERE 'second'>b
+ }
+} {5 5 hello world 4 4 one two}
+do_test intpkey-2.4.2 {
+ execsql {
+ SELECT rowid, * FROM t1 WHERE 8>rowid AND 'second'>b
+ }
+} {4 4 one two 5 5 hello world}
+do_test intpkey-2.4.3 {
+ execsql {
+ SELECT rowid, * FROM t1 WHERE 8>rowid AND 'second'>b AND 0<rowid
+ }
+} {4 4 one two 5 5 hello world}
+do_test intpkey-2.5 {
+ execsql {
+ SELECT rowid, * FROM t1 WHERE b>'a'
+ }
+} {5 5 hello world 4 4 one two 6 6 second entry 8 8 y z}
+do_test intpkey-2.6 {
+ execsql {
+ DELETE FROM t1 WHERE rowid=4;
+ SELECT * FROM t1 WHERE b>'a';
+ }
+} {5 hello world 6 second entry 8 y z}
+do_test intpkey-2.7 {
+ execsql {
+ UPDATE t1 SET a=-4 WHERE rowid=8;
+ SELECT * FROM t1 WHERE b>'a';
+ }
+} {5 hello world 6 second entry -4 y z}
+do_test intpkey-2.7 {
+ execsql {
+ SELECT * FROM t1
+ }
+} {-4 y z 5 hello world 6 second entry}
+
+# Do an SQL statement. Append the search count to the end of the result.
+#
+proc count sql {
+ set ::sqlite_search_count 0
+ return [concat [execsql $sql] $::sqlite_search_count]
+}
+
+# Create indices that include the integer primary key as one of their
+# columns.
+#
+do_test intpkey-3.1 {
+ execsql {
+ CREATE INDEX i2 ON t1(a);
+ }
+} {}
+do_test intpkey-3.2 {
+ count {
+ SELECT * FROM t1 WHERE a=5;
+ }
+} {5 hello world 0}
+do_test intpkey-3.3 {
+ count {
+ SELECT * FROM t1 WHERE a>4 AND a<6;
+ }
+} {5 hello world 2}
+do_test intpkey-3.4 {
+ count {
+ SELECT * FROM t1 WHERE b>='hello' AND b<'hello2';
+ }
+} {5 hello world 3}
+do_test intpkey-3.5 {
+ execsql {
+ CREATE INDEX i3 ON t1(c,a);
+ }
+} {}
+do_test intpkey-3.6 {
+ count {
+ SELECT * FROM t1 WHERE c=='world';
+ }
+} {5 hello world 3}
+do_test intpkey-3.7 {
+ execsql {INSERT INTO t1 VALUES(11,'hello','world')}
+ count {
+ SELECT * FROM t1 WHERE c=='world';
+ }
+} {5 hello world 11 hello world 5}
+do_test intpkey-3.8 {
+ count {
+ SELECT * FROM t1 WHERE c=='world' AND a>7;
+ }
+} {11 hello world 5}
+do_test intpkey-3.9 {
+ count {
+ SELECT * FROM t1 WHERE 7<a;
+ }
+} {11 hello world 1}
+
+# Test inequality constraints on integer primary keys and rowids
+#
+do_test intpkey-4.1 {
+ count {
+ SELECT * FROM t1 WHERE 11=rowid
+ }
+} {11 hello world 0}
+do_test intpkey-4.2 {
+ count {
+ SELECT * FROM t1 WHERE 11=rowid AND b=='hello'
+ }
+} {11 hello world 0}
+do_test intpkey-4.3 {
+ count {
+ SELECT * FROM t1 WHERE 11=rowid AND b=='hello' AND c IS NOT NULL;
+ }
+} {11 hello world 0}
+do_test intpkey-4.4 {
+ count {
+ SELECT * FROM t1 WHERE rowid==11
+ }
+} {11 hello world 0}
+do_test intpkey-4.5 {
+ count {
+ SELECT * FROM t1 WHERE oid==11 AND b=='hello'
+ }
+} {11 hello world 0}
+do_test intpkey-4.6 {
+ count {
+ SELECT * FROM t1 WHERE a==11 AND b=='hello' AND c IS NOT NULL;
+ }
+} {11 hello world 0}
+
+do_test intpkey-4.7 {
+ count {
+ SELECT * FROM t1 WHERE 8<rowid;
+ }
+} {11 hello world 1}
+do_test intpkey-4.8 {
+ count {
+ SELECT * FROM t1 WHERE 8<rowid AND 11>=oid;
+ }
+} {11 hello world 1}
+do_test intpkey-4.9 {
+ count {
+ SELECT * FROM t1 WHERE 11<=_rowid_ AND 12>=a;
+ }
+} {11 hello world 1}
+do_test intpkey-4.10 {
+ count {
+ SELECT * FROM t1 WHERE 0>=_rowid_;
+ }
+} {-4 y z 1}
+do_test intpkey-4.11 {
+ count {
+ SELECT * FROM t1 WHERE a<0;
+ }
+} {-4 y z 1}
+do_test intpkey-4.12 {
+ count {
+ SELECT * FROM t1 WHERE a<0 AND a>10;
+ }
+} {1}
+
+# Make sure it is OK to insert a rowid of 0
+#
+do_test intpkey-5.1 {
+ execsql {
+ INSERT INTO t1 VALUES(0,'zero','entry');
+ }
+ count {
+ SELECT * FROM t1 WHERE a=0;
+ }
+} {0 zero entry 0}
+do_test intpkey-5.2 {
+ execsql {
+ SELECT rowid, a FROM t1
+ }
+} {-4 -4 0 0 5 5 6 6 11 11}
+
+# Test the ability of the COPY command to put data into a
+# table that contains an integer primary key.
+#
+# The COPY command has been removed, but we retain these tests so
+# that the tables will contain the right data for the tests that follow.
+#
+do_test intpkey-6.1 {
+ execsql {
+ BEGIN;
+ INSERT INTO t1 VALUES(20,'b-20','c-20');
+ INSERT INTO t1 VALUES(21,'b-21','c-21');
+ INSERT INTO t1 VALUES(22,'b-22','c-22');
+ COMMIT;
+ SELECT * FROM t1 WHERE a>=20;
+ }
+} {20 b-20 c-20 21 b-21 c-21 22 b-22 c-22}
+do_test intpkey-6.2 {
+ execsql {
+ SELECT * FROM t1 WHERE b=='hello'
+ }
+} {5 hello world 11 hello world}
+do_test intpkey-6.3 {
+ execsql {
+ DELETE FROM t1 WHERE b='b-21';
+ SELECT * FROM t1 WHERE b=='b-21';
+ }
+} {}
+do_test intpkey-6.4 {
+ execsql {
+ SELECT * FROM t1 WHERE a>=20
+ }
+} {20 b-20 c-20 22 b-22 c-22}
+
+# Do an insert of values with the columns specified out of order.
+#
+do_test intpkey-7.1 {
+ execsql {
+ INSERT INTO t1(c,b,a) VALUES('row','new',30);
+ SELECT * FROM t1 WHERE rowid>=30;
+ }
+} {30 new row}
+do_test intpkey-7.2 {
+ execsql {
+ SELECT * FROM t1 WHERE rowid>20;
+ }
+} {22 b-22 c-22 30 new row}
+
+# Do an insert from a select statement.
+#
+do_test intpkey-8.1 {
+ execsql {
+ CREATE TABLE t2(x INTEGER PRIMARY KEY, y, z);
+ INSERT INTO t2 SELECT * FROM t1;
+ SELECT rowid FROM t2;
+ }
+} {-4 0 5 6 11 20 22 30}
+do_test intpkey-8.2 {
+ execsql {
+ SELECT x FROM t2;
+ }
+} {-4 0 5 6 11 20 22 30}
+
+do_test intpkey-9.1 {
+ execsql {
+ UPDATE t1 SET c='www' WHERE c='world';
+ SELECT rowid, a, c FROM t1 WHERE c=='www';
+ }
+} {5 5 www 11 11 www}
+
+
+# Check insert of NULL for primary key
+#
+do_test intpkey-10.1 {
+ execsql {
+ DROP TABLE t2;
+ CREATE TABLE t2(x INTEGER PRIMARY KEY, y, z);
+ INSERT INTO t2 VALUES(NULL, 1, 2);
+ SELECT * from t2;
+ }
+} {1 1 2}
+do_test intpkey-10.2 {
+ execsql {
+ INSERT INTO t2 VALUES(NULL, 2, 3);
+ SELECT * from t2 WHERE x=2;
+ }
+} {2 2 3}
+do_test intpkey-10.3 {
+ execsql {
+ INSERT INTO t2 SELECT NULL, z, y FROM t2;
+ SELECT * FROM t2;
+ }
+} {1 1 2 2 2 3 3 2 1 4 3 2}
+
+# This test checks to see if a floating point number can be used
+# to reference an integer primary key.
+#
+do_test intpkey-11.1 {
+ execsql {
+ SELECT b FROM t1 WHERE a=2.0+3.0;
+ }
+} {hello}
+do_test intpkey-11.2 {
+ execsql {
+ SELECT b FROM t1 WHERE a=2.0+3.5;
+ }
+} {}
+
+integrity_check intpkey-12.1
+
+# Try to use a string that looks like a floating point number as
+# an integer primary key. This should actually work when the floating
+# point value can be rounded to an integer without loss of data.
+#
+do_test intpkey-13.1 {
+ execsql {
+ SELECT * FROM t1 WHERE a=1;
+ }
+} {}
+do_test intpkey-13.2 {
+ execsql {
+ INSERT INTO t1 VALUES('1.0',2,3);
+ SELECT * FROM t1 WHERE a=1;
+ }
+} {1 2 3}
+do_test intpkey-13.3 {
+ catchsql {
+ INSERT INTO t1 VALUES('1.5',3,4);
+ }
+} {1 {datatype mismatch}}
+ifcapable {bloblit} {
+ do_test intpkey-13.4 {
+ catchsql {
+ INSERT INTO t1 VALUES(x'123456',3,4);
+ }
+ } {1 {datatype mismatch}}
+}
+do_test intpkey-13.5 {
+ catchsql {
+ INSERT INTO t1 VALUES('+1234567890',3,4);
+ }
+} {0 {}}
+
+# Compare an INTEGER PRIMARY KEY against a TEXT expression. The INTEGER
+# affinity should be applied to the text value before the comparison
+# takes place.
+#
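+# In schematic form (cf. tests 14.2 and 14.3 below): because column a has
+# INTEGER affinity, the text literal is coerced before the comparison, so
+# these two queries select the same rows:
+#
+#   SELECT * FROM t3 WHERE a > 2;      -- integer literal
+#   SELECT * FROM t3 WHERE a > '2';    -- text literal, coerced to 2
+#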
+do_test intpkey-14.1 {
+ execsql {
+ CREATE TABLE t3(a INTEGER PRIMARY KEY, b INTEGER, c TEXT);
+ INSERT INTO t3 VALUES(1, 1, 'one');
+ INSERT INTO t3 VALUES(2, 2, '2');
+ INSERT INTO t3 VALUES(3, 3, 3);
+ }
+} {}
+do_test intpkey-14.2 {
+ execsql {
+ SELECT * FROM t3 WHERE a>2;
+ }
+} {3 3 3}
+do_test intpkey-14.3 {
+ execsql {
+ SELECT * FROM t3 WHERE a>'2';
+ }
+} {3 3 3}
+do_test intpkey-14.4 {
+ execsql {
+ SELECT * FROM t3 WHERE a<'2';
+ }
+} {1 1 one}
+do_test intpkey-14.5 {
+ execsql {
+ SELECT * FROM t3 WHERE a<c;
+ }
+} {1 1 one}
+do_test intpkey-14.6 {
+ execsql {
+ SELECT * FROM t3 WHERE a=c;
+ }
+} {2 2 2 3 3 3}
+
+# Check for proper handling of primary keys greater than 2^31.
+# Ticket #1188
+#
+do_test intpkey-15.1 {
+ execsql {
+ INSERT INTO t1 VALUES(2147483647, 'big-1', 123);
+ SELECT * FROM t1 WHERE a>2147483648;
+ }
+} {}
+do_test intpkey-15.2 {
+ execsql {
+ INSERT INTO t1 VALUES(NULL, 'big-2', 234);
+ SELECT b FROM t1 WHERE a>=2147483648;
+ }
+} {big-2}
+do_test intpkey-15.3 {
+ execsql {
+ SELECT b FROM t1 WHERE a>2147483648;
+ }
+} {}
+do_test intpkey-15.4 {
+ execsql {
+ SELECT b FROM t1 WHERE a>=2147483647;
+ }
+} {big-1 big-2}
+do_test intpkey-15.5 {
+ execsql {
+ SELECT b FROM t1 WHERE a<2147483648;
+ }
+} {y zero 2 hello second hello b-20 b-22 new 3 big-1}
+do_test intpkey-15.6 {
+ execsql {
+ SELECT b FROM t1 WHERE a<12345678901;
+ }
+} {y zero 2 hello second hello b-20 b-22 new 3 big-1 big-2}
+do_test intpkey-15.7 {
+ execsql {
+ SELECT b FROM t1 WHERE a>12345678901;
+ }
+} {}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/ioerr.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/ioerr.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,259 @@
+# 2001 October 12
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.  The
+# focus of this file is testing for correct handling of I/O errors
+# such as writes failing because the disk is full.
+#
+# The tests in this file use special facilities that are only
+# available in the SQLite test fixture.
+#
+# $Id: ioerr.test,v 1.27 2006/09/15 07:28:51 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+# If SQLITE_DEFAULT_AUTOVACUUM is set to true, then a simulated IO error
+# on the 8th IO operation in the SQL script below doesn't report an error.
+#
+# This is because the 8th IO call attempts to read page 2 of the database
+# file when the file on disk is only 1 page. The pager layer detects that
+# this has happened and suppresses the error returned by the OS layer.
+#
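+# As its use below suggests, do_ioerr_test (supplied by the test fixture)
+# first runs the -sqlprep script, then runs the -sqlbody script repeatedly
+# with a simulated I/O failure injected at successive I/O operations,
+# checking that each failure is handled cleanly.  The -exclude option
+# lists injection points whose errors are expected to be absorbed and are
+# therefore skipped; for ioerr-1 that exclusion is (roughly)
+#
+#   expr {[execsql {pragma auto_vacuum}]==1 ? 4 : 0}
+#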
+do_ioerr_test ioerr-1 -erc 1 -sqlprep {
+ SELECT * FROM sqlite_master;
+} -sqlbody {
+ CREATE TABLE t1(a,b,c);
+ SELECT * FROM sqlite_master;
+ BEGIN TRANSACTION;
+ INSERT INTO t1 VALUES(1,2,3);
+ INSERT INTO t1 VALUES(4,5,6);
+ ROLLBACK;
+ SELECT * FROM t1;
+ BEGIN TRANSACTION;
+ INSERT INTO t1 VALUES(1,2,3);
+ INSERT INTO t1 VALUES(4,5,6);
+ COMMIT;
+ SELECT * FROM t1;
+ DELETE FROM t1 WHERE a<100;
+} -exclude [expr [string match [execsql {pragma auto_vacuum}] 1] ? 4 : 0]
+
+# Test for IO errors during a VACUUM.
+#
+# The first IO call is excluded from the test. This call attempts to read
+# the file-header of the temporary database used by VACUUM. Since the
+# database doesn't exist at that point, the IO error is not detected.
+#
+# Additionally, if auto-vacuum is enabled, the 12th IO error is not
+# detected. Same reason as the 8th in the test case above.
+#
+ifcapable vacuum {
+ do_ioerr_test ioerr-2 -cksum true -sqlprep {
+ BEGIN;
+ CREATE TABLE t1(a, b, c);
+ INSERT INTO t1 VALUES(1, randstr(50,50), randstr(50,50));
+ INSERT INTO t1 SELECT a+2, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT a+4, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT a+8, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT a+16, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT a+32, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT a+64, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT a+128, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 VALUES(1, randstr(600,600), randstr(600,600));
+ CREATE TABLE t2 AS SELECT * FROM t1;
+ CREATE TABLE t3 AS SELECT * FROM t1;
+ COMMIT;
+ DROP TABLE t2;
+ } -sqlbody {
+ VACUUM;
+ } -exclude [list \
+ 1 [expr [string match [execsql {pragma auto_vacuum}] 1]?9:-1]]
+}
+
+do_ioerr_test ioerr-3 -tclprep {
+ execsql {
+ PRAGMA cache_size = 10;
+ BEGIN;
+ CREATE TABLE abc(a);
+ INSERT INTO abc VALUES(randstr(1500,1500)); -- Page 4 is overflow
+ }
+ for {set i 0} {$i<150} {incr i} {
+ execsql {
+ INSERT INTO abc VALUES(randstr(100,100));
+ }
+ }
+ execsql COMMIT
+} -sqlbody {
+ CREATE TABLE abc2(a);
+ BEGIN;
+ DELETE FROM abc WHERE length(a)>100;
+ UPDATE abc SET a = randstr(90,90);
+ COMMIT;
+ CREATE TABLE abc3(a);
+}
+
+# Test IO errors that can occur retrieving a record header that flows over
+# onto an overflow page.
+do_ioerr_test ioerr-4 -tclprep {
+ set sql "CREATE TABLE abc(a1"
+ for {set i 2} {$i<1300} {incr i} {
+ append sql ", a$i"
+ }
+ append sql ");"
+ execsql $sql
+ execsql {INSERT INTO abc (a1) VALUES(NULL)}
+} -sqlbody {
+ SELECT * FROM abc;
+}
+
+# Test IO errors that may occur during a multi-file commit.
+#
+# Tests 8 and 17 are excluded when auto-vacuum is enabled for the same
+# reason as in test cases ioerr-1.XXX
+set ex ""
+if {[string match [execsql {pragma auto_vacuum}] 1]} {
+ set ex [list 4 17]
+}
+do_ioerr_test ioerr-5 -sqlprep {
+ ATTACH 'test2.db' AS test2;
+} -sqlbody {
+ BEGIN;
+ CREATE TABLE t1(a,b,c);
+ CREATE TABLE test2.t2(a,b,c);
+ COMMIT;
+} -exclude $ex
+
+# Test IO errors when replaying two hot journals from a 2-file
+# transaction. This test only runs on UNIX.
+ifcapable crashtest {
+ if {![catch {sqlite3 -has_codec} r] && !$r} {
+ do_ioerr_test ioerr-6 -tclprep {
+ execsql {
+ ATTACH 'test2.db' as aux;
+ CREATE TABLE tx(a, b);
+ CREATE TABLE aux.ty(a, b);
+ }
+ set rc [crashsql 2 test2.db-journal {
+ ATTACH 'test2.db' as aux;
+ PRAGMA cache_size = 10;
+ BEGIN;
+ CREATE TABLE aux.t2(a, b, c);
+ CREATE TABLE t1(a, b, c);
+ COMMIT;
+ }]
+ if {$rc!="1 {child process exited abnormally}"} {
+ error "Wrong error message: $rc"
+ }
+ } -sqlbody {
+ SELECT * FROM sqlite_master;
+ SELECT * FROM aux.sqlite_master;
+ }
+ }
+}
+
+# Test handling of IO errors that occur while rolling back hot journal
+# files.
+#
+# These tests can't be run on windows because the windows version of
+# SQLite holds a mandatory exclusive lock on journal files it has open.
+#
+if {$tcl_platform(platform)!="windows"} {
+ do_ioerr_test ioerr-7 -tclprep {
+ db close
+ sqlite3 db2 test2.db
+ db2 eval {
+ PRAGMA synchronous = 0;
+ CREATE TABLE t1(a, b);
+ INSERT INTO t1 VALUES(1, 2);
+ BEGIN;
+ INSERT INTO t1 VALUES(3, 4);
+ }
+ copy_file test2.db test.db
+ copy_file test2.db-journal test.db-journal
+ db2 close
+ } -tclbody {
+ sqlite3 db test.db
+ db eval {
+ SELECT * FROM t1;
+ }
+ } -exclude 1
+}
+
+# For test coverage: Cause an I/O failure while trying to read a
+# short field (one that fits into a Mem buffer without mallocing
+# for space).
+#
+do_ioerr_test ioerr-8 -tclprep {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(randstr(200,200), randstr(1000,1000), 2);
+ }
+ db close
+ sqlite3 db test.db
+} -sqlbody {
+ SELECT c FROM t1;
+}
+
+# For test coverage: Cause an IO error whilst reading the master-journal
+# name from a journal file.
+if {$tcl_platform(platform)=="unix"} {
+ do_ioerr_test ioerr-9 -tclprep {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(randstr(200,200), randstr(1000,1000), 2);
+ BEGIN;
+ INSERT INTO t1 VALUES(randstr(200,200), randstr(1000,1000), 2);
+ }
+ copy_file test.db-journal test2.db-journal
+ execsql {
+ COMMIT;
+ }
+ copy_file test2.db-journal test.db-journal
+ set f [open test.db-journal a]
+ fconfigure $f -encoding binary
+ puts -nonewline $f "hello"
+ puts -nonewline $f "\x00\x00\x00\x05\x01\x02\x03\x04"
+ puts -nonewline $f "\xd9\xd5\x05\xf9\x20\xa1\x63\xd7"
+ close $f
+ } -sqlbody {
+ SELECT a FROM t1;
+ }
+}
+
+# For test coverage: Cause an IO error during statement-journal playback
+# (i.e. the rollback that follows a constraint failure).
+do_ioerr_test ioerr-10 -tclprep {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(a PRIMARY KEY, b);
+ }
+ for {set i 0} {$i < 500} {incr i} {
+ execsql {INSERT INTO t1 VALUES(:i, 'hello world');}
+ }
+ execsql {
+ COMMIT;
+ }
+} -tclbody {
+
+ catch {execsql {
+ BEGIN;
+ INSERT INTO t1 VALUES('abc', 123);
+ INSERT INTO t1 VALUES('def', 123);
+ INSERT INTO t1 VALUES('ghi', 123);
+ INSERT INTO t1 SELECT (a+500)%900, 'good string' FROM t1;
+ }} msg
+
+ if {$msg != "column a is not unique"} {
+ error $msg
+ }
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/join.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/join.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,461 @@
+# 2002 May 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.
+#
+# This file implements tests for joins, including outer joins.
+#
+# $Id: join.test,v 1.22 2006/06/20 11:01:09 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test join-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(1,2,3);
+ INSERT INTO t1 VALUES(2,3,4);
+ INSERT INTO t1 VALUES(3,4,5);
+ SELECT * FROM t1;
+ }
+} {1 2 3 2 3 4 3 4 5}
+do_test join-1.2 {
+ execsql {
+ CREATE TABLE t2(b,c,d);
+ INSERT INTO t2 VALUES(1,2,3);
+ INSERT INTO t2 VALUES(2,3,4);
+ INSERT INTO t2 VALUES(3,4,5);
+ SELECT * FROM t2;
+ }
+} {1 2 3 2 3 4 3 4 5}
+
+do_test join-1.3 {
+ execsql2 {
+ SELECT * FROM t1 NATURAL JOIN t2;
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test join-1.3.1 {
+ execsql2 {
+ SELECT * FROM t2 NATURAL JOIN t1;
+ }
+} {b 2 c 3 d 4 a 1 b 3 c 4 d 5 a 2}
+do_test join-1.3.2 {
+ execsql2 {
+ SELECT * FROM t2 AS x NATURAL JOIN t1;
+ }
+} {b 2 c 3 d 4 a 1 b 3 c 4 d 5 a 2}
+do_test join-1.3.3 {
+ execsql2 {
+ SELECT * FROM t2 NATURAL JOIN t1 AS y;
+ }
+} {b 2 c 3 d 4 a 1 b 3 c 4 d 5 a 2}
+do_test join-1.3.4 {
+ execsql {
+ SELECT b FROM t1 NATURAL JOIN t2;
+ }
+} {2 3}
+do_test join-1.4.1 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 USING(b,c);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test join-1.4.2 {
+ execsql2 {
+ SELECT * FROM t1 AS x INNER JOIN t2 USING(b,c);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test join-1.4.3 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 AS y USING(b,c);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test join-1.4.4 {
+ execsql2 {
+ SELECT * FROM t1 AS x INNER JOIN t2 AS y USING(b,c);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test join-1.4.5 {
+ execsql {
+ SELECT b FROM t1 JOIN t2 USING(b);
+ }
+} {2 3}
+do_test join-1.5 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 USING(b);
+ }
+} {a 1 b 2 c 3 c 3 d 4 a 2 b 3 c 4 c 4 d 5}
+do_test join-1.6 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 USING(c);
+ }
+} {a 1 b 2 c 3 b 2 d 4 a 2 b 3 c 4 b 3 d 5}
+do_test join-1.7 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 USING(c,b);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+
+do_test join-1.8 {
+ execsql {
+ SELECT * FROM t1 NATURAL CROSS JOIN t2;
+ }
+} {1 2 3 4 2 3 4 5}
+do_test join-1.9 {
+ execsql {
+ SELECT * FROM t1 CROSS JOIN t2 USING(b,c);
+ }
+} {1 2 3 4 2 3 4 5}
+do_test join-1.10 {
+ execsql {
+ SELECT * FROM t1 NATURAL INNER JOIN t2;
+ }
+} {1 2 3 4 2 3 4 5}
+do_test join-1.11 {
+ execsql {
+ SELECT * FROM t1 INNER JOIN t2 USING(b,c);
+ }
+} {1 2 3 4 2 3 4 5}
+do_test join-1.12 {
+ execsql {
+ SELECT * FROM t1 natural inner join t2;
+ }
+} {1 2 3 4 2 3 4 5}
+
+ifcapable subquery {
+ do_test join-1.13 {
+ execsql2 {
+ SELECT * FROM t1 NATURAL JOIN
+ (SELECT b as 'c', c as 'd', d as 'e' FROM t2) as t3
+ }
+ } {a 1 b 2 c 3 d 4 e 5}
+ do_test join-1.14 {
+ execsql2 {
+ SELECT * FROM (SELECT b as 'c', c as 'd', d as 'e' FROM t2) as 'tx'
+ NATURAL JOIN t1
+ }
+ } {c 3 d 4 e 5 a 1 b 2}
+}
+
+do_test join-1.15 {
+ execsql {
+ CREATE TABLE t3(c,d,e);
+ INSERT INTO t3 VALUES(2,3,4);
+ INSERT INTO t3 VALUES(3,4,5);
+ INSERT INTO t3 VALUES(4,5,6);
+ SELECT * FROM t3;
+ }
+} {2 3 4 3 4 5 4 5 6}
+do_test join-1.16 {
+ execsql {
+ SELECT * FROM t1 natural join t2 natural join t3;
+ }
+} {1 2 3 4 5 2 3 4 5 6}
+do_test join-1.17 {
+ execsql2 {
+ SELECT * FROM t1 natural join t2 natural join t3;
+ }
+} {a 1 b 2 c 3 d 4 e 5 a 2 b 3 c 4 d 5 e 6}
+do_test join-1.18 {
+ execsql {
+ CREATE TABLE t4(d,e,f);
+ INSERT INTO t4 VALUES(2,3,4);
+ INSERT INTO t4 VALUES(3,4,5);
+ INSERT INTO t4 VALUES(4,5,6);
+ SELECT * FROM t4;
+ }
+} {2 3 4 3 4 5 4 5 6}
+do_test join-1.19.1 {
+ execsql {
+ SELECT * FROM t1 natural join t2 natural join t4;
+ }
+} {1 2 3 4 5 6}
+do_test join-1.19.2 {
+ execsql2 {
+ SELECT * FROM t1 natural join t2 natural join t4;
+ }
+} {a 1 b 2 c 3 d 4 e 5 f 6}
+do_test join-1.20 {
+ execsql {
+ SELECT * FROM t1 natural join t2 natural join t3 WHERE t1.a=1
+ }
+} {1 2 3 4 5}
+
+do_test join-2.1 {
+ execsql {
+ SELECT * FROM t1 NATURAL LEFT JOIN t2;
+ }
+} {1 2 3 4 2 3 4 5 3 4 5 {}}
+do_test join-2.2 {
+ execsql {
+ SELECT * FROM t2 NATURAL LEFT OUTER JOIN t1;
+ }
+} {1 2 3 {} 2 3 4 1 3 4 5 2}
+do_test join-2.3 {
+ catchsql {
+ SELECT * FROM t1 NATURAL RIGHT OUTER JOIN t2;
+ }
+} {1 {RIGHT and FULL OUTER JOINs are not currently supported}}
+do_test join-2.4 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.d
+ }
+} {1 2 3 {} {} {} 2 3 4 {} {} {} 3 4 5 1 2 3}
+do_test join-2.5 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.d WHERE t1.a>1
+ }
+} {2 3 4 {} {} {} 3 4 5 1 2 3}
+do_test join-2.6 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.d WHERE t2.b IS NULL OR t2.b>1
+ }
+} {1 2 3 {} {} {} 2 3 4 {} {} {}}
+
+do_test join-3.1 {
+ catchsql {
+ SELECT * FROM t1 NATURAL JOIN t2 ON t1.a=t2.b;
+ }
+} {1 {a NATURAL join may not have an ON or USING clause}}
+do_test join-3.2 {
+ catchsql {
+ SELECT * FROM t1 NATURAL JOIN t2 USING(b);
+ }
+} {1 {a NATURAL join may not have an ON or USING clause}}
+do_test join-3.3 {
+ catchsql {
+ SELECT * FROM t1 JOIN t2 ON t1.a=t2.b USING(b);
+ }
+} {1 {cannot have both ON and USING clauses in the same join}}
+do_test join-3.4 {
+ catchsql {
+ SELECT * FROM t1 JOIN t2 USING(a);
+ }
+} {1 {cannot join using column a - column not present in both tables}}
+do_test join-3.5 {
+ catchsql {
+ SELECT * FROM t1 USING(a);
+ }
+} {0 {1 2 3 2 3 4 3 4 5}}
+do_test join-3.6 {
+ catchsql {
+ SELECT * FROM t1 JOIN t2 ON t3.a=t2.b;
+ }
+} {1 {no such column: t3.a}}
+do_test join-3.7 {
+ catchsql {
+ SELECT * FROM t1 INNER OUTER JOIN t2;
+ }
+} {1 {unknown or unsupported join type: INNER OUTER}}
+do_test join-3.8 {
+ catchsql {
+ SELECT * FROM t1 LEFT BOGUS JOIN t2;
+ }
+} {1 {unknown or unsupported join type: LEFT BOGUS}}
+
+do_test join-4.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t5(a INTEGER PRIMARY KEY);
+ CREATE TABLE t6(a INTEGER);
+ INSERT INTO t6 VALUES(NULL);
+ INSERT INTO t6 VALUES(NULL);
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ COMMIT;
+ }
+ execsql {
+ SELECT * FROM t6 NATURAL JOIN t5;
+ }
+} {}
+do_test join-4.2 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a<t5.a;
+ }
+} {}
+do_test join-4.3 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a>t5.a;
+ }
+} {}
+do_test join-4.4 {
+ execsql {
+ UPDATE t6 SET a='xyz';
+ SELECT * FROM t6 NATURAL JOIN t5;
+ }
+} {}
+do_test join-4.6 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a<t5.a;
+ }
+} {}
+do_test join-4.7 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a>t5.a;
+ }
+} {}
+do_test join-4.8 {
+ execsql {
+ UPDATE t6 SET a=1;
+ SELECT * FROM t6 NATURAL JOIN t5;
+ }
+} {}
+do_test join-4.9 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a<t5.a;
+ }
+} {}
+do_test join-4.10 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a>t5.a;
+ }
+} {}
+
+do_test join-5.1 {
+ execsql {
+ BEGIN;
+ create table centros (id integer primary key, centro);
+ INSERT INTO centros VALUES(1,'xxx');
+ create table usuarios (id integer primary key, nombre, apellidos,
+ idcentro integer);
+ INSERT INTO usuarios VALUES(1,'a','aa',1);
+ INSERT INTO usuarios VALUES(2,'b','bb',1);
+ INSERT INTO usuarios VALUES(3,'c','cc',NULL);
+ create index idcentro on usuarios (idcentro);
+ END;
+ select usuarios.id, usuarios.nombre, centros.centro from
+ usuarios left outer join centros on usuarios.idcentro = centros.id;
+ }
+} {1 a xxx 2 b xxx 3 c {}}
+
+# A test for ticket #247.
+#
+do_test join-7.1 {
+ execsql {
+ CREATE TABLE t7 (x, y);
+ INSERT INTO t7 VALUES ("pa1", 1);
+ INSERT INTO t7 VALUES ("pa2", NULL);
+ INSERT INTO t7 VALUES ("pa3", NULL);
+ INSERT INTO t7 VALUES ("pa4", 2);
+ INSERT INTO t7 VALUES ("pa30", 131);
+ INSERT INTO t7 VALUES ("pa31", 130);
+ INSERT INTO t7 VALUES ("pa28", NULL);
+
+ CREATE TABLE t8 (a integer primary key, b);
+ INSERT INTO t8 VALUES (1, "pa1");
+ INSERT INTO t8 VALUES (2, "pa4");
+ INSERT INTO t8 VALUES (3, NULL);
+ INSERT INTO t8 VALUES (4, NULL);
+ INSERT INTO t8 VALUES (130, "pa31");
+ INSERT INTO t8 VALUES (131, "pa30");
+
+ SELECT coalesce(t8.a,999) from t7 LEFT JOIN t8 on y=a;
+ }
+} {1 999 999 2 131 130 999}
+
+# Make sure a left join works correctly when the right table is really
+# a view that is itself a join.  Ticket #306.
+#
+ifcapable view {
+do_test join-8.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t9(a INTEGER PRIMARY KEY, b);
+ INSERT INTO t9 VALUES(1,11);
+ INSERT INTO t9 VALUES(2,22);
+ CREATE TABLE t10(x INTEGER PRIMARY KEY, y);
+ INSERT INTO t10 VALUES(1,2);
+ INSERT INTO t10 VALUES(3,3);
+ CREATE TABLE t11(p INTEGER PRIMARY KEY, q);
+ INSERT INTO t11 VALUES(2,111);
+ INSERT INTO t11 VALUES(3,333);
+ CREATE VIEW v10_11 AS SELECT x, q FROM t10, t11 WHERE t10.y=t11.p;
+ COMMIT;
+ SELECT * FROM t9 LEFT JOIN v10_11 ON( a=x );
+ }
+} {1 11 1 111 2 22 {} {}}
+ifcapable subquery {
+ do_test join-8.2 {
+ execsql {
+ SELECT * FROM t9 LEFT JOIN (SELECT x, q FROM t10, t11 WHERE t10.y=t11.p)
+ ON( a=x);
+ }
+ } {1 11 1 111 2 22 {} {}}
+}
+do_test join-8.3 {
+ execsql {
+ SELECT * FROM v10_11 LEFT JOIN t9 ON( a=x );
+ }
+} {1 111 1 11 3 333 {} {}}
+} ;# ifcapable view
+
+# Ticket #350 describes a scenario where LEFT OUTER JOIN does not
+# function correctly if the right table in the join is really a
+# subquery.
+#
+# To test the problem, we generate the same LEFT OUTER JOIN in two
+# separate selects, one using a subquery and the other referencing
+# the table directly. Then connect the two SELECTs using an EXCEPT.
+# Both queries should generate the same results so the answer should
+# be an empty set.
+#
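+# In schematic form, each pair of queries below has the shape
+#
+#   SELECT * FROM a NATURAL LEFT JOIN b
+#   EXCEPT
+#   SELECT * FROM a NATURAL LEFT JOIN (SELECT * FROM b WHERE ...);
+#
+# and must return no rows when the subquery form is handled correctly.
+#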
+ifcapable compound {
+do_test join-9.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t12(a,b);
+ INSERT INTO t12 VALUES(1,11);
+ INSERT INTO t12 VALUES(2,22);
+ CREATE TABLE t13(b,c);
+ INSERT INTO t13 VALUES(22,222);
+ COMMIT;
+ }
+} {}
+
+ifcapable subquery {
+ do_test join-9.1.1 {
+ execsql {
+ SELECT * FROM t12 NATURAL LEFT JOIN t13
+ EXCEPT
+ SELECT * FROM t12 NATURAL LEFT JOIN (SELECT * FROM t13 WHERE b>0);
+ }
+ } {}
+}
+ifcapable view {
+ do_test join-9.2 {
+ execsql {
+ CREATE VIEW v13 AS SELECT * FROM t13 WHERE b>0;
+ SELECT * FROM t12 NATURAL LEFT JOIN t13
+ EXCEPT
+ SELECT * FROM t12 NATURAL LEFT JOIN v13;
+ }
+ } {}
+} ;# ifcapable view
+} ;# ifcapable compound
+
+# Ticket #1697: Left Join WHERE clause terms that contain an
+# aggregate subquery.
+#
+ifcapable subquery {
+do_test join-10.1 {
+ execsql {
+ CREATE TABLE t21(a,b,c);
+ CREATE TABLE t22(p,q);
+ CREATE INDEX i22 ON t22(q);
+ SELECT a FROM t21 LEFT JOIN t22 ON b=p WHERE q=
+ (SELECT max(m.q) FROM t22 m JOIN t21 n ON n.b=m.p WHERE n.c=1);
+ }
+} {}
+} ;# ifcapable subquery
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/join2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/join2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,75 @@
+# 2002 May 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.
+#
+# This file implements tests for joins, including outer joins.
+#
+# $Id: join2.test,v 1.2 2005/01/21 03:12:16 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test join2-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,11);
+ INSERT INTO t1 VALUES(2,22);
+ INSERT INTO t1 VALUES(3,33);
+ SELECT * FROM t1;
+ }
+} {1 11 2 22 3 33}
+do_test join2-1.2 {
+ execsql {
+ CREATE TABLE t2(b,c);
+ INSERT INTO t2 VALUES(11,111);
+ INSERT INTO t2 VALUES(33,333);
+ INSERT INTO t2 VALUES(44,444);
+ SELECT * FROM t2;
+ }
+} {11 111 33 333 44 444};
+do_test join2-1.3 {
+ execsql {
+ CREATE TABLE t3(c,d);
+ INSERT INTO t3 VALUES(111,1111);
+ INSERT INTO t3 VALUES(444,4444);
+ INSERT INTO t3 VALUES(555,5555);
+ SELECT * FROM t3;
+ }
+} {111 1111 444 4444 555 5555}
+
+do_test join2-1.4 {
+ execsql {
+ SELECT * FROM
+ t1 NATURAL JOIN t2 NATURAL JOIN t3
+ }
+} {1 11 111 1111}
+do_test join2-1.5 {
+ execsql {
+ SELECT * FROM
+ t1 NATURAL JOIN t2 NATURAL LEFT OUTER JOIN t3
+ }
+} {1 11 111 1111 3 33 333 {}}
+do_test join2-1.6 {
+ execsql {
+ SELECT * FROM
+ t1 NATURAL LEFT OUTER JOIN t2 NATURAL JOIN t3
+ }
+} {1 11 111 1111}
+ifcapable subquery {
+ do_test join2-1.7 {
+ execsql {
+ SELECT * FROM
+ t1 NATURAL LEFT OUTER JOIN (t2 NATURAL JOIN t3)
+ }
+ } {1 11 111 1111 2 22 {} {} 3 33 {} {}}
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/join3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/join3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,62 @@
+# 2002 May 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.
+#
+# This file implements tests for joins, including outer joins, where
+# there are a large number of tables involved in the join.
+#
+# $Id: join3.test,v 1.4 2005/01/19 23:24:51 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# An unrestricted join
+#
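+# For N=3, for example, the generated statement is simply
+#
+#   SELECT * FROM t1, t2, t3
+#
+# and, since each table tN holds the single value N, the expected result
+# is {1 2 3}.
+#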
+catch {unset ::result}
+set result {}
+for {set N 1} {$N<=$bitmask_size} {incr N} {
+ lappend result $N
+ do_test join3-1.$N {
+ execsql "CREATE TABLE t${N}(x);"
+ execsql "INSERT INTO t$N VALUES($N)"
+ set sql "SELECT * FROM t1"
+ for {set i 2} {$i<=$N} {incr i} {append sql ", t$i"}
+ execsql $sql
+ } $result
+}
+
+# Joins with a comparison
+#
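+# For N=3 the generated statement is
+#
+#   SELECT * FROM t1, t2, t3 WHERE t2.x==t1.x+1 AND t3.x==t2.x+1
+#
+# which again yields the single row {1 2 3}.
+#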
+set result {}
+for {set N 1} {$N<=$bitmask_size} {incr N} {
+ lappend result $N
+ do_test join3-2.$N {
+ set sql "SELECT * FROM t1"
+ for {set i 2} {$i<=$N} {incr i} {append sql ", t$i"}
+ set sep WHERE
+ for {set i 1} {$i<$N} {incr i} {
+ append sql " $sep t[expr {$i+1}].x==t$i.x+1"
+ set sep AND
+ }
+ execsql $sql
+ } $result
+}
+
+# Error when there are too many tables in the join
+#
+do_test join3-3.1 {
+ set sql "SELECT * FROM t1 AS t0, t1"
+ for {set i 2} {$i<=$bitmask_size} {incr i} {append sql ", t$i"}
+ catchsql $sql
+} [list 1 "at most $bitmask_size tables in a join"]
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/join4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/join4.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,98 @@
+# 2002 May 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.
+#
+# This file implements tests for left outer joins containing WHERE
+# clauses that restrict the scope of the left term of the join.
+#
+# $Id: join4.test,v 1.4 2005/03/29 03:11:00 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable tempdb {
+ do_test join4-1.1 {
+ execsql {
+ create temp table t1(a integer, b varchar(10));
+ insert into t1 values(1,'one');
+ insert into t1 values(2,'two');
+ insert into t1 values(3,'three');
+ insert into t1 values(4,'four');
+
+ create temp table t2(x integer, y varchar(10), z varchar(10));
+ insert into t2 values(2,'niban','ok');
+ insert into t2 values(4,'yonban','err');
+ }
+ execsql {
+ select * from t1 left outer join t2 on t1.a=t2.x where t2.z='ok'
+ }
+ } {2 two 2 niban ok}
+} else {
+ do_test join4-1.1 {
+ execsql {
+ create table t1(a integer, b varchar(10));
+ insert into t1 values(1,'one');
+ insert into t1 values(2,'two');
+ insert into t1 values(3,'three');
+ insert into t1 values(4,'four');
+
+ create table t2(x integer, y varchar(10), z varchar(10));
+ insert into t2 values(2,'niban','ok');
+ insert into t2 values(4,'yonban','err');
+ }
+ execsql {
+ select * from t1 left outer join t2 on t1.a=t2.x where t2.z='ok'
+ }
+ } {2 two 2 niban ok}
+}
+do_test join4-1.2 {
+ execsql {
+ select * from t1 left outer join t2 on t1.a=t2.x and t2.z='ok'
+ }
+} {1 one {} {} {} 2 two 2 niban ok 3 three {} {} {} 4 four {} {} {}}
+do_test join4-1.3 {
+ execsql {
+ create index i2 on t2(z);
+ }
+ execsql {
+ select * from t1 left outer join t2 on t1.a=t2.x where t2.z='ok'
+ }
+} {2 two 2 niban ok}
+do_test join4-1.4 {
+ execsql {
+ select * from t1 left outer join t2 on t1.a=t2.x and t2.z='ok'
+ }
+} {1 one {} {} {} 2 two 2 niban ok 3 three {} {} {} 4 four {} {} {}}
+do_test join4-1.5 {
+ execsql {
+ select * from t1 left outer join t2 on t1.a=t2.x where t2.z>='ok'
+ }
+} {2 two 2 niban ok}
+do_test join4-1.5.1 {
+ execsql {
+ select * from t1 left outer join t2 on t1.a=t2.x and t2.z>='ok'
+ }
+} {1 one {} {} {} 2 two 2 niban ok 3 three {} {} {} 4 four {} {} {}}
+ifcapable subquery {
+ do_test join4-1.6 {
+ execsql {
+ select * from t1 left outer join t2 on t1.a=t2.x where t2.z IN ('ok')
+ }
+ } {2 two 2 niban ok}
+ do_test join4-1.7 {
+ execsql {
+ select * from t1 left outer join t2 on t1.a=t2.x and t2.z IN ('ok')
+ }
+ } {1 one {} {} {} 2 two 2 niban ok 3 three {} {} {} 4 four {} {} {}}
+}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/join5.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/join5.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,62 @@
+# 2005 September 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.
+#
+# This file implements tests for left outer joins containing ON
+# clauses that restrict the scope of the left term of the join.
+#
+# $Id: join5.test,v 1.1 2005/09/19 21:05:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+do_test join5-1.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(a integer primary key, b integer, c integer);
+ CREATE TABLE t2(x integer primary key, y);
+ CREATE TABLE t3(p integer primary key, q);
+ INSERT INTO t3 VALUES(11,'t3-11');
+ INSERT INTO t3 VALUES(12,'t3-12');
+ INSERT INTO t2 VALUES(11,'t2-11');
+ INSERT INTO t2 VALUES(12,'t2-12');
+ INSERT INTO t1 VALUES(1, 5, 0);
+ INSERT INTO t1 VALUES(2, 11, 2);
+ INSERT INTO t1 VALUES(3, 12, 1);
+ COMMIT;
+ }
+} {}
+do_test join5-1.2 {
+ execsql {
+ select * from t1 left join t2 on t1.b=t2.x and t1.c=1
+ }
+} {1 5 0 {} {} 2 11 2 {} {} 3 12 1 12 t2-12}
+do_test join5-1.3 {
+ execsql {
+ select * from t1 left join t2 on t1.b=t2.x where t1.c=1
+ }
+} {3 12 1 12 t2-12}
+do_test join5-1.4 {
+ execsql {
+ select * from t1 left join t2 on t1.b=t2.x and t1.c=1
+ left join t3 on t1.b=t3.p and t1.c=2
+ }
+} {1 5 0 {} {} {} {} 2 11 2 {} {} 11 t3-11 3 12 1 12 t2-12 {} {}}
+do_test join5-1.5 {
+ execsql {
+ select * from t1 left join t2 on t1.b=t2.x and t1.c=1
+ left join t3 on t1.b=t3.p where t1.c=2
+ }
+} {2 11 2 {} {} 11 t3-11}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/journal1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/journal1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,67 @@
+# 2005 March 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.
+#
+# This file implements tests to make sure that leftover journals from
+# prior databases do not try to roll back into new databases.
+#
+# $Id: journal1.test,v 1.2 2005/03/20 22:54:56 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# These tests will not work on windows because windows uses
+# mandatory file locking, which breaks the file copy command.
+#
+if {$tcl_platform(platform)=="windows"} {
+ finish_test
+ return
+}
+
+# Create a sample database
+#
+do_test journal1-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,randstr(10,400));
+ INSERT INTO t1 VALUES(2,randstr(10,400));
+ INSERT INTO t1 SELECT a+2, a||b FROM t1;
+ INSERT INTO t1 SELECT a+4, a||b FROM t1;
+ SELECT count(*) FROM t1;
+ }
+} 8
+
+# Make changes to the database and save the journal file.
+# Then delete the database. Replace the journal file
+# and try to create a new database with the same name. The
+# old journal should not attempt to roll back into the new
+# database.
+#
+do_test journal1-1.2 {
+ execsql {
+ BEGIN;
+ DELETE FROM t1;
+ }
+ file copy -force test.db-journal test.db-journal-bu
+ execsql {
+ ROLLBACK;
+ }
+ db close
+ file delete test.db
+ file copy test.db-journal-bu test.db-journal
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM sqlite_master
+ }
+} {0 {}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/lastinsert.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/lastinsert.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,366 @@
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# Tests to make sure that the value returned by last_insert_rowid() (LIRID)
+# is updated properly, especially inside triggers
+#
+# Note 1: insert into table is now the only statement which changes LIRID
+# Note 2: upon entry into before or instead of triggers,
+# LIRID is unchanged (rather than -1)
+# Note 3: LIRID is changed within the context of a trigger,
+# but is restored once the trigger exits
+# Note 4: LIRID is not changed by an insert into a view (since everything
+# is done within instead of trigger context)
+#
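+# For example, the values below are the ones tests 1.1 through 1.3 expect:
+#
+#   insert into t1 values (1);
+#   insert into t1 values (NULL);     -- becomes rowid 2
+#   insert into t1 values (NULL);     -- becomes rowid 3
+#   select last_insert_rowid();       -- 3
+#   update t1 set k=4 where k=2;
+#   delete from t1 where k=4;
+#   select last_insert_rowid();       -- still 3
+#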
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# ----------------------------------------------------------------------------
+# 1.x - basic tests (no triggers)
+
+# LIRID changed properly after an insert into a table
+do_test lastinsert-1.1 {
+ catchsql {
+ create table t1 (k integer primary key);
+ insert into t1 values (1);
+ insert into t1 values (NULL);
+ insert into t1 values (NULL);
+ select last_insert_rowid();
+ }
+} {0 3}
+
+# LIRID unchanged after an update on a table
+do_test lastinsert-1.2 {
+ catchsql {
+ update t1 set k=4 where k=2;
+ select last_insert_rowid();
+ }
+} {0 3}
+
+# LIRID unchanged after a delete from a table
+do_test lastinsert-1.3 {
+ catchsql {
+ delete from t1 where k=4;
+ select last_insert_rowid();
+ }
+} {0 3}
+
+# LIRID unchanged after create table/view statements
+do_test lastinsert-1.4.1 {
+ catchsql {
+ create table t2 (k integer primary key, val1, val2, val3);
+ select last_insert_rowid();
+ }
+} {0 3}
+ifcapable view {
+do_test lastinsert-1.4.2 {
+ catchsql {
+ create view v as select * from t1;
+ select last_insert_rowid();
+ }
+} {0 3}
+} ;# ifcapable view
+
+# All remaining tests involve triggers. Skip them if triggers are not
+# supported in this build.
+#
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+# ----------------------------------------------------------------------------
+# 2.x - tests with after insert trigger
+
+# LIRID changed properly after an insert into table containing an after trigger
+do_test lastinsert-2.1 {
+ catchsql {
+ delete from t2;
+ create trigger r1 after insert on t1 for each row begin
+ insert into t2 values (NEW.k*2, last_insert_rowid(), NULL, NULL);
+ update t2 set k=k+10, val2=100+last_insert_rowid();
+ update t2 set val3=1000+last_insert_rowid();
+ end;
+ insert into t1 values (13);
+ select last_insert_rowid();
+ }
+} {0 13}
+
+# LIRID equals NEW.k upon entry into after insert trigger
+do_test lastinsert-2.2 {
+ catchsql {
+ select val1 from t2;
+ }
+} {0 13}
+
+# LIRID changed properly by insert within context of after insert trigger
+do_test lastinsert-2.3 {
+ catchsql {
+ select val2 from t2;
+ }
+} {0 126}
+
+# LIRID unchanged by update within context of after insert trigger
+do_test lastinsert-2.4 {
+ catchsql {
+ select val3 from t2;
+ }
+} {0 1026}
+
+# ----------------------------------------------------------------------------
+# 3.x - tests with after update trigger
+
+# LIRID not changed after an update onto a table containing an after trigger
+do_test lastinsert-3.1 {
+ catchsql {
+ delete from t2;
+ drop trigger r1;
+ create trigger r1 after update on t1 for each row begin
+ insert into t2 values (NEW.k*2, last_insert_rowid(), NULL, NULL);
+ update t2 set k=k+10, val2=100+last_insert_rowid();
+ update t2 set val3=1000+last_insert_rowid();
+ end;
+ update t1 set k=14 where k=3;
+ select last_insert_rowid();
+ }
+} {0 13}
+
+# LIRID unchanged upon entry into after update trigger
+do_test lastinsert-3.2 {
+ catchsql {
+ select val1 from t2;
+ }
+} {0 13}
+
+# LIRID changed properly by insert within context of after update trigger
+do_test lastinsert-3.3 {
+ catchsql {
+ select val2 from t2;
+ }
+} {0 128}
+
+# LIRID unchanged by update within context of after update trigger
+do_test lastinsert-3.4 {
+ catchsql {
+ select val3 from t2;
+ }
+} {0 1028}
+
+# ----------------------------------------------------------------------------
+# 4.x - tests with instead of insert trigger
+# These may not be run if either views or triggers were disabled at
+# compile-time
+
+ifcapable {view && trigger} {
+# LIRID not changed after an insert into view containing an instead of trigger
+do_test lastinsert-4.1 {
+ catchsql {
+ delete from t2;
+ drop trigger r1;
+ create trigger r1 instead of insert on v for each row begin
+ insert into t2 values (NEW.k*2, last_insert_rowid(), NULL, NULL);
+ update t2 set k=k+10, val2=100+last_insert_rowid();
+ update t2 set val3=1000+last_insert_rowid();
+ end;
+ insert into v values (15);
+ select last_insert_rowid();
+ }
+} {0 13}
+
+# LIRID unchanged upon entry into instead of trigger
+do_test lastinsert-4.2 {
+ catchsql {
+ select val1 from t2;
+ }
+} {0 13}
+
+# LIRID changed properly by insert within context of instead of trigger
+do_test lastinsert-4.3 {
+ catchsql {
+ select val2 from t2;
+ }
+} {0 130}
+
+# LIRID unchanged by update within context of instead of trigger
+do_test lastinsert-4.4 {
+ catchsql {
+ select val3 from t2;
+ }
+} {0 1030}
+} ;# ifcapable (view && trigger)
+
+# ----------------------------------------------------------------------------
+# 5.x - tests with before delete trigger
+
+# LIRID not changed after a delete on a table containing a before trigger
+do_test lastinsert-5.1 {
+ catchsql {
+ drop trigger r1; -- This was not created if views are disabled.
+ }
+ catchsql {
+ delete from t2;
+ create trigger r1 before delete on t1 for each row begin
+ insert into t2 values (77, last_insert_rowid(), NULL, NULL);
+ update t2 set k=k+10, val2=100+last_insert_rowid();
+ update t2 set val3=1000+last_insert_rowid();
+ end;
+ delete from t1 where k=1;
+ select last_insert_rowid();
+ }
+} {0 13}
+
+# LIRID unchanged upon entry into delete trigger
+do_test lastinsert-5.2 {
+ catchsql {
+ select val1 from t2;
+ }
+} {0 13}
+
+# LIRID changed properly by insert within context of delete trigger
+do_test lastinsert-5.3 {
+ catchsql {
+ select val2 from t2;
+ }
+} {0 177}
+
+# LIRID unchanged by update within context of delete trigger
+do_test lastinsert-5.4 {
+ catchsql {
+ select val3 from t2;
+ }
+} {0 1077}
+
+# ----------------------------------------------------------------------------
+# 6.x - tests with instead of update trigger
+# These tests may not run if either views or triggers are disabled.
+
+ifcapable {view && trigger} {
+# LIRID not changed after an update on a view containing an instead of trigger
+do_test lastinsert-6.1 {
+ catchsql {
+ delete from t2;
+ drop trigger r1;
+ create trigger r1 instead of update on v for each row begin
+ insert into t2 values (NEW.k*2, last_insert_rowid(), NULL, NULL);
+ update t2 set k=k+10, val2=100+last_insert_rowid();
+ update t2 set val3=1000+last_insert_rowid();
+ end;
+ update v set k=16 where k=14;
+ select last_insert_rowid();
+ }
+} {0 13}
+
+# LIRID unchanged upon entry into instead of trigger
+do_test lastinsert-6.2 {
+ catchsql {
+ select val1 from t2;
+ }
+} {0 13}
+
+# LIRID changed properly by insert within context of instead of trigger
+do_test lastinsert-6.3 {
+ catchsql {
+ select val2 from t2;
+ }
+} {0 132}
+
+# LIRID unchanged by update within context of instead of trigger
+do_test lastinsert-6.4 {
+ catchsql {
+ select val3 from t2;
+ }
+} {0 1032}
+} ;# ifcapable (view && trigger)
+
+# ----------------------------------------------------------------------------
+# 7.x - complex tests with temporary tables and nested instead of triggers
+# These do not run if views or triggers are disabled.
+
+ifcapable {trigger && view && tempdb} {
+do_test lastinsert-7.1 {
+ catchsql {
+ drop table t1; drop table t2; drop trigger r1;
+ create temp table t1 (k integer primary key);
+ create temp table t2 (k integer primary key);
+ create temp view v1 as select * from t1;
+ create temp view v2 as select * from t2;
+ create temp table rid (k integer primary key, rin, rout);
+ insert into rid values (1, NULL, NULL);
+ insert into rid values (2, NULL, NULL);
+ create temp trigger r1 instead of insert on v1 for each row begin
+ update rid set rin=last_insert_rowid() where k=1;
+ insert into t1 values (100+NEW.k);
+ insert into v2 values (100+last_insert_rowid());
+ update rid set rout=last_insert_rowid() where k=1;
+ end;
+ create temp trigger r2 instead of insert on v2 for each row begin
+ update rid set rin=last_insert_rowid() where k=2;
+ insert into t2 values (1000+NEW.k);
+ update rid set rout=last_insert_rowid() where k=2;
+ end;
+ insert into t1 values (77);
+ select last_insert_rowid();
+ }
+} {0 77}
+
+do_test lastinsert-7.2 {
+ catchsql {
+ insert into v1 values (5);
+ select last_insert_rowid();
+ }
+} {0 77}
+
+do_test lastinsert-7.3 {
+ catchsql {
+ select rin from rid where k=1;
+ }
+} {0 77}
+
+do_test lastinsert-7.4 {
+ catchsql {
+ select rout from rid where k=1;
+ }
+} {0 105}
+
+do_test lastinsert-7.5 {
+ catchsql {
+ select rin from rid where k=2;
+ }
+} {0 105}
+
+do_test lastinsert-7.6 {
+ catchsql {
+ select rout from rid where k=2;
+ }
+} {0 1205}
+
+do_test lastinsert-8.1 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ CREATE TABLE t2(x INTEGER PRIMARY KEY, y);
+ CREATE TABLE t3(a, b);
+ CREATE TRIGGER after_t2 AFTER INSERT ON t2 BEGIN
+ INSERT INTO t3 VALUES(new.x, new.y);
+ END;
+ INSERT INTO t2 VALUES(5000000000, 1);
+ SELECT last_insert_rowid();
+ }
+} 5000000000
+
+do_test lastinsert-9.1 {
+ db eval {INSERT INTO t2 VALUES(123456789012345,0)}
+ db last_insert_rowid
+} {123456789012345}
+
+
+} ;# ifcapable (view && trigger)
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/laststmtchanges.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/laststmtchanges.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,270 @@
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# Tests to make sure that the values returned by changes() and total_changes()
+# are updated properly, especially inside triggers
+#
+# Note 1: changes() remains constant within a statement and only updates
+# once the statement is finished (triggers count as part of
+# statement).
+# Note 2: changes() is changed within the context of a trigger much like
+# last_insert_rowid() (see lastinsert.test), but is restored once
+# the trigger exits.
+# Note 3: changes() is not changed by a change to a view (since everything
+# is done within instead of trigger context).
+#
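+# For example, the values below are the ones tests 1.1 and 1.2 expect:
+#
+#   -- eight single-row inserts into t0, five of them with x=1 ...
+#   select changes(), total_changes();   -- 1 8
+#   update t0 set x=3 where x=1;          -- modifies 5 rows
+#   select changes(), total_changes();   -- 5 13
+#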
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# ----------------------------------------------------------------------------
+# 1.x - basic tests (no triggers)
+
+# changes() set properly after insert
+do_test laststmtchanges-1.1 {
+ catchsql {
+ create table t0 (x);
+ insert into t0 values (1);
+ insert into t0 values (1);
+ insert into t0 values (2);
+ insert into t0 values (2);
+ insert into t0 values (1);
+ insert into t0 values (1);
+ insert into t0 values (1);
+ insert into t0 values (2);
+ select changes(), total_changes();
+ }
+} {0 {1 8}}
+
+# changes() set properly after update
+do_test laststmtchanges-1.2 {
+ catchsql {
+ update t0 set x=3 where x=1;
+ select changes(), total_changes();
+ }
+} {0 {5 13}}
+
+# changes() unchanged within an update statement
+do_test laststmtchanges-1.3 {
+ catchsql {
+ update t0 set x=x+changes() where x=3;
+ select count() from t0 where x=8;
+ }
+} {0 5}
+
+# changes() set properly after update on table where no rows changed
+do_test laststmtchanges-1.4 {
+ catchsql {
+ update t0 set x=77 where x=88;
+ select changes();
+ }
+} {0 0}
+
+# changes() set properly after delete from table
+do_test laststmtchanges-1.5 {
+ catchsql {
+ delete from t0 where x=2;
+ select changes();
+ }
+} {0 3}
+
+# All remaining tests involve triggers. Skip them if triggers are not
+# supported in this build.
+#
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+
+# ----------------------------------------------------------------------------
+# 2.x - tests with after insert trigger
+
+# changes() changed properly after insert into table containing after trigger
+do_test laststmtchanges-2.1 {
+ set ::tc [db total_changes]
+ catchsql {
+ create table t1 (k integer primary key);
+ create table t2 (k integer primary key, v1, v2);
+ create trigger r1 after insert on t1 for each row begin
+ insert into t2 values (NULL, changes(), NULL);
+ update t0 set x=x;
+ update t2 set v2=changes();
+ end;
+ insert into t1 values (77);
+ select changes();
+ }
+} {0 1}
+
+# changes() unchanged upon entry into after insert trigger
+do_test laststmtchanges-2.2 {
+ catchsql {
+ select v1 from t2;
+ }
+} {0 3}
+
+# changes() changed properly by update within context of after insert trigger
+do_test laststmtchanges-2.3 {
+ catchsql {
+ select v2 from t2;
+ }
+} {0 5}
+
+# Total changes caused by firing the trigger above:
+#
+# 1 from "insert into t1 values(77)" +
+# 1 from "insert into t2 values (NULL, changes(), NULL);" +
+# 5 from "update t0 set x=x;" +
+# 1 from "update t2 set v2=changes();"
+#
+do_test laststmtchanges-2.4 {
+ expr [db total_changes] - $::tc
+} {8}
+
+# ----------------------------------------------------------------------------
+# 3.x - tests with after update trigger
+
+# changes() changed properly after update into table containing after trigger
+do_test laststmtchanges-3.1 {
+ catchsql {
+ drop trigger r1;
+ delete from t2; delete from t2;
+ create trigger r1 after update on t1 for each row begin
+ insert into t2 values (NULL, changes(), NULL);
+ delete from t0 where oid=1 or oid=2;
+ update t2 set v2=changes();
+ end;
+ update t1 set k=k;
+ select changes();
+ }
+} {0 1}
+
+# changes() unchanged upon entry into after update trigger
+do_test laststmtchanges-3.2 {
+ catchsql {
+ select v1 from t2;
+ }
+} {0 0}
+
+# changes() changed properly by delete within context of after update trigger
+do_test laststmtchanges-3.3 {
+ catchsql {
+ select v2 from t2;
+ }
+} {0 2}
+
+# ----------------------------------------------------------------------------
+# 4.x - tests with before delete trigger
+
+# changes() changed properly on delete from table containing before trigger
+do_test laststmtchanges-4.1 {
+ catchsql {
+ drop trigger r1;
+ delete from t2; delete from t2;
+ create trigger r1 before delete on t1 for each row begin
+ insert into t2 values (NULL, changes(), NULL);
+ insert into t0 values (5);
+ update t2 set v2=changes();
+ end;
+ delete from t1;
+ select changes();
+ }
+} {0 1}
+
+# changes() unchanged upon entry into before delete trigger
+do_test laststmtchanges-4.2 {
+ catchsql {
+ select v1 from t2;
+ }
+} {0 0}
+
+# changes() changed properly by insert within context of before delete trigger
+do_test laststmtchanges-4.3 {
+ catchsql {
+ select v2 from t2;
+ }
+} {0 1}
+
+# ----------------------------------------------------------------------------
+# 5.x - complex tests with temporary tables and nested instead of triggers
+# These tests cannot run if the library does not have view support enabled.
+
+ifcapable view&&tempdb {
+
+do_test laststmtchanges-5.1 {
+ catchsql {
+ drop table t0; drop table t1; drop table t2;
+ create temp table t0(x);
+ create temp table t1 (k integer primary key);
+ create temp table t2 (k integer primary key);
+ create temp view v1 as select * from t1;
+ create temp view v2 as select * from t2;
+ create temp table n1 (k integer primary key, n);
+ create temp table n2 (k integer primary key, n);
+ insert into t0 values (1);
+ insert into t0 values (2);
+ insert into t0 values (1);
+ insert into t0 values (1);
+ insert into t0 values (1);
+ insert into t0 values (2);
+ insert into t0 values (2);
+ insert into t0 values (1);
+ create temp trigger r1 instead of insert on v1 for each row begin
+ insert into n1 values (NULL, changes());
+ update t0 set x=x*10 where x=1;
+ insert into n1 values (NULL, changes());
+ insert into t1 values (NEW.k);
+ insert into n1 values (NULL, changes());
+ update t0 set x=x*10 where x=0;
+ insert into v2 values (100+NEW.k);
+ insert into n1 values (NULL, changes());
+ end;
+ create temp trigger r2 instead of insert on v2 for each row begin
+ insert into n2 values (NULL, changes());
+ insert into t2 values (1000+NEW.k);
+ insert into n2 values (NULL, changes());
+ update t0 set x=x*100 where x=0;
+ insert into n2 values (NULL, changes());
+ delete from t0 where x=2;
+ insert into n2 values (NULL, changes());
+ end;
+ insert into t1 values (77);
+ select changes();
+ }
+} {0 1}
+
+do_test laststmtchanges-5.2 {
+ catchsql {
+ delete from t1 where k=88;
+ select changes();
+ }
+} {0 0}
+
+do_test laststmtchanges-5.3 {
+ catchsql {
+ insert into v1 values (5);
+ select changes();
+ }
+} {0 0}
+
+do_test laststmtchanges-5.4 {
+ catchsql {
+ select n from n1;
+ }
+} {0 {0 5 1 0}}
+
+do_test laststmtchanges-5.5 {
+ catchsql {
+ select n from n2;
+ }
+} {0 {0 1 0 3}}
+
+} ;# ifcapable view
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/like.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/like.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,386 @@
+# 2005 August 13
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library.  The
+# focus of this file is testing the LIKE and GLOB operators and
+# in particular the optimizations that occur to help those operators
+# run faster.
+#
+# $Id: like.test,v 1.5 2006/06/14 08:48:26 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create some sample data to work with.
+#
+do_test like-1.0 {
+ execsql {
+ CREATE TABLE t1(x TEXT);
+ }
+ foreach str {
+ a
+ ab
+ abc
+ abcd
+
+ acd
+ abd
+ bc
+ bcd
+
+ xyz
+ ABC
+ CDE
+ {ABC abc xyz}
+ } {
+ db eval {INSERT INTO t1 VALUES(:str)}
+ }
+ execsql {
+ SELECT count(*) FROM t1;
+ }
+} {12}
+
+# Test that both the case-sensitive and case-insensitive versions of LIKE work.
+#
+do_test like-1.1 {
+ execsql {
+ SELECT x FROM t1 WHERE x LIKE 'abc' ORDER BY 1;
+ }
+} {ABC abc}
+do_test like-1.2 {
+ execsql {
+ SELECT x FROM t1 WHERE x GLOB 'abc' ORDER BY 1;
+ }
+} {abc}
+do_test like-1.3 {
+ execsql {
+ SELECT x FROM t1 WHERE x LIKE 'ABC' ORDER BY 1;
+ }
+} {ABC abc}
+do_test like-1.4 {
+ execsql {
+ SELECT x FROM t1 WHERE x LIKE 'aBc' ORDER BY 1;
+ }
+} {ABC abc}
+do_test like-1.5 {
+ execsql {
+ PRAGMA case_sensitive_like=on;
+ SELECT x FROM t1 WHERE x LIKE 'abc' ORDER BY 1;
+ }
+} {abc}
+do_test like-1.6 {
+ execsql {
+ SELECT x FROM t1 WHERE x GLOB 'abc' ORDER BY 1;
+ }
+} {abc}
+do_test like-1.7 {
+ execsql {
+ SELECT x FROM t1 WHERE x LIKE 'ABC' ORDER BY 1;
+ }
+} {ABC}
+do_test like-1.8 {
+ execsql {
+ SELECT x FROM t1 WHERE x LIKE 'aBc' ORDER BY 1;
+ }
+} {}
+do_test like-1.9 {
+ execsql {
+ PRAGMA case_sensitive_like=off;
+ SELECT x FROM t1 WHERE x LIKE 'abc' ORDER BY 1;
+ }
+} {ABC abc}
+
+# Tests of the REGEXP operator
+#
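+# SQLite has no built-in regexp() implementation; the REGEXP operator is
+# simply rewritten into a call to a user function named "regexp", which
+# the next test registers from Tcl via [db function].  The MATCH operator
+# used a little further down works the same way with a function named
+# "match".
+#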
+do_test like-2.1 {
+ proc test_regexp {a b} {
+ return [regexp $a $b]
+ }
+ db function regexp test_regexp
+ execsql {
+ SELECT x FROM t1 WHERE x REGEXP 'abc' ORDER BY 1;
+ }
+} {{ABC abc xyz} abc abcd}
+do_test like-2.2 {
+ execsql {
+ SELECT x FROM t1 WHERE x REGEXP '^abc' ORDER BY 1;
+ }
+} {abc abcd}
+
+# Tests of the MATCH operator
+#
+do_test like-2.3 {
+ proc test_match {a b} {
+ return [string match $a $b]
+ }
+ db function match test_match
+ execsql {
+ SELECT x FROM t1 WHERE x MATCH '*abc*' ORDER BY 1;
+ }
+} {{ABC abc xyz} abc abcd}
+do_test like-2.4 {
+ execsql {
+ SELECT x FROM t1 WHERE x MATCH 'abc*' ORDER BY 1;
+ }
+} {abc abcd}
+
+# For the remaining tests, we need to have the like optimizations
+# enabled.
+#
+ifcapable !like_opt {
+ finish_test
+ return
+}
+
+# This procedure executes the SQL.  Then it appends to the result the
+# "sort" or "nosort" keyword, depending on whether or not a sort was
+# required, and then it appends the ::sqlite_query_plan variable.
+#
+proc queryplan {sql} {
+ set ::sqlite_sort_count 0
+ set data [execsql $sql]
+ if {$::sqlite_sort_count} {set x sort} {set x nosort}
+ lappend data $x
+ return [concat $data $::sqlite_query_plan]
+}
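+
+# For example, once the index i1 exists and case_sensitive_like is on,
+# test like-3.3 below expects
+#
+#   queryplan {SELECT x FROM t1 WHERE x LIKE 'abc%' ORDER BY 1}
+#
+# to return {abc abcd nosort {} i1}: the matching rows, "nosort" because
+# the index already delivers them in ORDER BY order, followed by the
+# contents of ::sqlite_query_plan.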
+
+# Perform tests on the like optimization.
+#
+# With no index on t1.x and with case sensitivity turned off, no optimization
+# is performed.
+#
+do_test like-3.1 {
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE x LIKE 'abc%' ORDER BY 1;
+ }
+} {ABC {ABC abc xyz} abc abcd sort t1 {}}
+do_test like-3.2 {
+ set sqlite_like_count
+} {12}
+
+# With an index on t1.x and case sensitivity on, optimize completely.
+#
+do_test like-3.3 {
+ set sqlite_like_count 0
+ execsql {
+ PRAGMA case_sensitive_like=on;
+ CREATE INDEX i1 ON t1(x);
+ }
+ queryplan {
+ SELECT x FROM t1 WHERE x LIKE 'abc%' ORDER BY 1;
+ }
+} {abc abcd nosort {} i1}
+do_test like-3.4 {
+ set sqlite_like_count
+} 0
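+
+# A minimal sketch (not one of the original tests, and guarded so it never
+# runs) of the effect verified above: with case_sensitive_like on and
+# index i1 in place, the prefix pattern 'abc%' selects the same rows as an
+# explicit index range constraint.
+if {0} {
+  set via_like  [execsql {SELECT x FROM t1 WHERE x LIKE 'abc%' ORDER BY 1}]
+  set via_range [execsql {SELECT x FROM t1 WHERE x>='abc' AND x<'abd' ORDER BY 1}]
+  if {$via_like ne $via_range} {error "range rewrite mismatch"}
+}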
+
+# Partial optimization when the pattern does not end in '%'
+#
+do_test like-3.5 {
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE x LIKE 'a_c' ORDER BY 1;
+ }
+} {abc nosort {} i1}
+do_test like-3.6 {
+ set sqlite_like_count
+} 6
+do_test like-3.7 {
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE x LIKE 'ab%d' ORDER BY 1;
+ }
+} {abcd abd nosort {} i1}
+do_test like-3.8 {
+ set sqlite_like_count
+} 4
+do_test like-3.9 {
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE x LIKE 'a_c%' ORDER BY 1;
+ }
+} {abc abcd nosort {} i1}
+do_test like-3.10 {
+ set sqlite_like_count
+} 6
+
+# No optimization when the pattern begins with a wildcard.
+# Note that the index is still used but only for sorting.
+#
+do_test like-3.11 {
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE x LIKE '%bcd' ORDER BY 1;
+ }
+} {abcd bcd nosort {} i1}
+do_test like-3.12 {
+ set sqlite_like_count
+} 12
+
+# No optimization for case insensitive LIKE
+#
+do_test like-3.13 {
+ set sqlite_like_count 0
+ queryplan {
+ PRAGMA case_sensitive_like=off;
+ SELECT x FROM t1 WHERE x LIKE 'abc%' ORDER BY 1;
+ }
+} {ABC {ABC abc xyz} abc abcd nosort {} i1}
+do_test like-3.14 {
+ set sqlite_like_count
+} 12
+
+# No optimization without an index.
+#
+do_test like-3.15 {
+ set sqlite_like_count 0
+ queryplan {
+ PRAGMA case_sensitive_like=on;
+ DROP INDEX i1;
+ SELECT x FROM t1 WHERE x LIKE 'abc%' ORDER BY 1;
+ }
+} {abc abcd sort t1 {}}
+do_test like-3.16 {
+ set sqlite_like_count
+} 12
+
+# No GLOB optimization without an index.
+#
+do_test like-3.17 {
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE x GLOB 'abc*' ORDER BY 1;
+ }
+} {abc abcd sort t1 {}}
+do_test like-3.18 {
+ set sqlite_like_count
+} 12
+
+# GLOB is optimized regardless of the case_sensitive_like setting.
+#
+do_test like-3.19 {
+ set sqlite_like_count 0
+ queryplan {
+ CREATE INDEX i1 ON t1(x);
+ SELECT x FROM t1 WHERE x GLOB 'abc*' ORDER BY 1;
+ }
+} {abc abcd nosort {} i1}
+do_test like-3.20 {
+ set sqlite_like_count
+} 0
+do_test like-3.21 {
+ set sqlite_like_count 0
+ queryplan {
+ PRAGMA case_sensitive_like=on;
+ SELECT x FROM t1 WHERE x GLOB 'abc*' ORDER BY 1;
+ }
+} {abc abcd nosort {} i1}
+do_test like-3.22 {
+ set sqlite_like_count
+} 0
+do_test like-3.23 {
+ set sqlite_like_count 0
+ queryplan {
+ PRAGMA case_sensitive_like=off;
+ SELECT x FROM t1 WHERE x GLOB 'a[bc]d' ORDER BY 1;
+ }
+} {abd acd nosort {} i1}
+do_test like-3.24 {
+ set sqlite_like_count
+} 6
+
+# No optimization if the LHS of the LIKE is not a column name or
+# if the RHS is not a string.
+#
+do_test like-4.1 {
+ execsql {PRAGMA case_sensitive_like=on}
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE x LIKE 'abc%' ORDER BY 1
+ }
+} {abc abcd nosort {} i1}
+do_test like-4.2 {
+ set sqlite_like_count
+} 0
+do_test like-4.3 {
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE +x LIKE 'abc%' ORDER BY 1
+ }
+} {abc abcd nosort {} i1}
+do_test like-4.4 {
+ set sqlite_like_count
+} 12
+do_test like-4.5 {
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE x LIKE ('ab' || 'c%') ORDER BY 1
+ }
+} {abc abcd nosort {} i1}
+do_test like-4.6 {
+ set sqlite_like_count
+} 12
+
+# Collating sequences on the index normally disable the LIKE optimization.
+# But if the index uses the NOCASE collating sequence, the LIKE optimization
+# is enabled when case_sensitive_like is OFF.
+#
+do_test like-5.1 {
+ execsql {PRAGMA case_sensitive_like=off}
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t1 WHERE x LIKE 'abc%' ORDER BY 1
+ }
+} {ABC {ABC abc xyz} abc abcd nosort {} i1}
+do_test like-5.2 {
+ set sqlite_like_count
+} 12
+do_test like-5.3 {
+ execsql {
+ CREATE TABLE t2(x COLLATE NOCASE);
+ INSERT INTO t2 SELECT * FROM t1;
+ CREATE INDEX i2 ON t2(x COLLATE NOCASE);
+ }
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t2 WHERE x LIKE 'abc%' ORDER BY 1
+ }
+} {abc ABC {ABC abc xyz} abcd nosort {} i2}
+do_test like-5.4 {
+ set sqlite_like_count
+} 0
+do_test like-5.5 {
+ execsql {
+ PRAGMA case_sensitive_like=on;
+ }
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t2 WHERE x LIKE 'abc%' ORDER BY 1
+ }
+} {abc abcd nosort {} i2}
+do_test like-5.6 {
+ set sqlite_like_count
+} 12
+do_test like-5.7 {
+ execsql {
+ PRAGMA case_sensitive_like=off;
+ }
+ set sqlite_like_count 0
+ queryplan {
+ SELECT x FROM t2 WHERE x GLOB 'abc*' ORDER BY 1
+ }
+} {abc abcd nosort {} i2}
+do_test like-5.8 {
+ set sqlite_like_count
+} 12
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/limit.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/limit.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,448 @@
+# 2001 November 6
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the LIMIT ... OFFSET ... clause
+# of SELECT statements.
+#
+# $Id: limit.test,v 1.30 2006/06/20 11:01:09 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Build some test data
+#
+execsql {
+ CREATE TABLE t1(x int, y int);
+ BEGIN;
+}
+for {set i 1} {$i<=32} {incr i} {
+ for {set j 0} {pow(2,$j)<$i} {incr j} {}
+ execsql "INSERT INTO t1 VALUES([expr {32-$i}],[expr {10-$j}])"
+}
+execsql {
+ COMMIT;
+}
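+
+# (For reference: the loop above inserts x = 31 down to 0; for each i,
+#  j is the smallest integer with 2^j >= i, so y = 10-j takes the values
+#  10, 9, 8, 7, 6 and 5 as x decreases.)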
+
+do_test limit-1.0 {
+ execsql {SELECT count(*) FROM t1}
+} {32}
+do_test limit-1.1 {
+ execsql {SELECT count(*) FROM t1 LIMIT 5}
+} {32}
+do_test limit-1.2.1 {
+ execsql {SELECT x FROM t1 ORDER BY x LIMIT 5}
+} {0 1 2 3 4}
+do_test limit-1.2.2 {
+ execsql {SELECT x FROM t1 ORDER BY x LIMIT 5 OFFSET 2}
+} {2 3 4 5 6}
+do_test limit-1.2.3 {
+ execsql {SELECT x FROM t1 ORDER BY x+1 LIMIT 5 OFFSET -2}
+} {0 1 2 3 4}
+do_test limit-1.2.4 {
+ execsql {SELECT x FROM t1 ORDER BY x+1 LIMIT 2, -5}
+} {2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31}
+do_test limit-1.2.5 {
+ execsql {SELECT x FROM t1 ORDER BY x+1 LIMIT -2, 5}
+} {0 1 2 3 4}
+do_test limit-1.2.6 {
+ execsql {SELECT x FROM t1 ORDER BY x+1 LIMIT -2, -5}
+} {0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31}
+do_test limit-1.2.7 {
+ execsql {SELECT x FROM t1 ORDER BY x LIMIT 2, 5}
+} {2 3 4 5 6}
+do_test limit-1.3 {
+ execsql {SELECT x FROM t1 ORDER BY x LIMIT 5 OFFSET 5}
+} {5 6 7 8 9}
+do_test limit-1.4.1 {
+ execsql {SELECT x FROM t1 ORDER BY x LIMIT 50 OFFSET 30}
+} {30 31}
+do_test limit-1.4.2 {
+ execsql {SELECT x FROM t1 ORDER BY x LIMIT 30, 50}
+} {30 31}
+do_test limit-1.5 {
+ execsql {SELECT x FROM t1 ORDER BY x LIMIT 50 OFFSET 50}
+} {}
+do_test limit-1.6 {
+ execsql {SELECT * FROM t1 AS a, t1 AS b ORDER BY a.x, b.x LIMIT 5}
+} {0 5 0 5 0 5 1 5 0 5 2 5 0 5 3 5 0 5 4 5}
+do_test limit-1.7 {
+ execsql {SELECT * FROM t1 AS a, t1 AS b ORDER BY a.x, b.x LIMIT 5 OFFSET 32}
+} {1 5 0 5 1 5 1 5 1 5 2 5 1 5 3 5 1 5 4 5}
+
+ifcapable {view && subquery} {
+ do_test limit-2.1 {
+ execsql {
+ CREATE VIEW v1 AS SELECT * FROM t1 LIMIT 2;
+ SELECT count(*) FROM (SELECT * FROM v1);
+ }
+ } 2
+} ;# ifcapable view
+do_test limit-2.2 {
+ execsql {
+ CREATE TABLE t2 AS SELECT * FROM t1 LIMIT 2;
+ SELECT count(*) FROM t2;
+ }
+} 2
+ifcapable subquery {
+ do_test limit-2.3 {
+ execsql {
+ SELECT count(*) FROM t1 WHERE rowid IN (SELECT rowid FROM t1 LIMIT 2);
+ }
+ } 2
+}
+
+ifcapable subquery {
+ do_test limit-3.1 {
+ execsql {
+ SELECT z FROM (SELECT y*10+x AS z FROM t1 ORDER BY x LIMIT 10)
+ ORDER BY z LIMIT 5;
+ }
+ } {50 51 52 53 54}
+}
+
+do_test limit-4.1 {
+ ifcapable subquery {
+ execsql {
+ BEGIN;
+ CREATE TABLE t3(x);
+ INSERT INTO t3 SELECT x FROM t1 ORDER BY x LIMIT 10 OFFSET 1;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ INSERT INTO t3 SELECT x+(SELECT max(x) FROM t3) FROM t3;
+ END;
+ SELECT count(*) FROM t3;
+ }
+ } else {
+ execsql {
+ BEGIN;
+ CREATE TABLE t3(x);
+ INSERT INTO t3 SELECT x FROM t1 ORDER BY x LIMIT 10 OFFSET 1;
+ }
+ for {set i 0} {$i<10} {incr i} {
+ set max_x_t3 [execsql {SELECT max(x) FROM t3}]
+ execsql "INSERT INTO t3 SELECT x+$max_x_t3 FROM t3;"
+ }
+ execsql {
+ END;
+ SELECT count(*) FROM t3;
+ }
+ }
+} {10240}
+do_test limit-4.2 {
+ execsql {
+ SELECT x FROM t3 LIMIT 2 OFFSET 10000
+ }
+} {10001 10002}
+do_test limit-4.3 {
+ execsql {
+ CREATE TABLE t4 AS SELECT x,
+ 'abcdefghijklmnopqrstuvwyxz ABCDEFGHIJKLMNOPQRSTUVWYXZ' || x ||
+ 'abcdefghijklmnopqrstuvwyxz ABCDEFGHIJKLMNOPQRSTUVWYXZ' || x ||
+ 'abcdefghijklmnopqrstuvwyxz ABCDEFGHIJKLMNOPQRSTUVWYXZ' || x ||
+ 'abcdefghijklmnopqrstuvwyxz ABCDEFGHIJKLMNOPQRSTUVWYXZ' || x ||
+ 'abcdefghijklmnopqrstuvwyxz ABCDEFGHIJKLMNOPQRSTUVWYXZ' || x AS y
+ FROM t3 LIMIT 1000;
+ SELECT x FROM t4 ORDER BY y DESC LIMIT 1 OFFSET 999;
+ }
+} {1000}
+
+do_test limit-5.1 {
+ execsql {
+ CREATE TABLE t5(x,y);
+ INSERT INTO t5 SELECT x-y, x+y FROM t1 WHERE x BETWEEN 10 AND 15
+ ORDER BY x LIMIT 2;
+ SELECT * FROM t5 ORDER BY x;
+ }
+} {5 15 6 16}
+do_test limit-5.2 {
+ execsql {
+ DELETE FROM t5;
+ INSERT INTO t5 SELECT x-y, x+y FROM t1 WHERE x BETWEEN 10 AND 15
+ ORDER BY x DESC LIMIT 2;
+ SELECT * FROM t5 ORDER BY x;
+ }
+} {9 19 10 20}
+do_test limit-5.3 {
+ execsql {
+ DELETE FROM t5;
+ INSERT INTO t5 SELECT x-y, x+y FROM t1 WHERE x ORDER BY x DESC LIMIT 31;
+ SELECT * FROM t5 ORDER BY x LIMIT 2;
+ }
+} {-4 6 -3 7}
+do_test limit-5.4 {
+ execsql {
+ SELECT * FROM t5 ORDER BY x DESC, y DESC LIMIT 2;
+ }
+} {21 41 21 39}
+do_test limit-5.5 {
+ execsql {
+ DELETE FROM t5;
+ INSERT INTO t5 SELECT a.x*100+b.x, a.y*100+b.y FROM t1 AS a, t1 AS b
+ ORDER BY 1, 2 LIMIT 1000;
+ SELECT count(*), sum(x), sum(y), min(x), max(x), min(y), max(y) FROM t5;
+ }
+} {1000 1528204 593161 0 3107 505 1005}
+
+# There is some controversy about whether LIMIT 0 should be the same as
+# no limit at all or if LIMIT 0 should result in zero output rows.
+#
+do_test limit-6.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t6(a);
+ INSERT INTO t6 VALUES(1);
+ INSERT INTO t6 VALUES(2);
+ INSERT INTO t6 SELECT a+2 FROM t6;
+ COMMIT;
+ SELECT * FROM t6;
+ }
+} {1 2 3 4}
+do_test limit-6.2 {
+ execsql {
+ SELECT * FROM t6 LIMIT -1 OFFSET -1;
+ }
+} {1 2 3 4}
+do_test limit-6.3 {
+ execsql {
+ SELECT * FROM t6 LIMIT 2 OFFSET -123;
+ }
+} {1 2}
+do_test limit-6.4 {
+ execsql {
+ SELECT * FROM t6 LIMIT -432 OFFSET 2;
+ }
+} {3 4}
+do_test limit-6.5 {
+ execsql {
+ SELECT * FROM t6 LIMIT -1
+ }
+} {1 2 3 4}
+do_test limit-6.6 {
+ execsql {
+ SELECT * FROM t6 LIMIT -1 OFFSET 1
+ }
+} {2 3 4}
+do_test limit-6.7 {
+ execsql {
+ SELECT * FROM t6 LIMIT 0
+ }
+} {}
+do_test limit-6.8 {
+ execsql {
+ SELECT * FROM t6 LIMIT 0 OFFSET 1
+ }
+} {}
+
+# Make sure LIMIT works well with compound SELECT statements.
+# Ticket #393
+#
+ifcapable compound {
+do_test limit-7.1.1 {
+ catchsql {
+ SELECT x FROM t2 LIMIT 5 UNION ALL SELECT a FROM t6;
+ }
+} {1 {LIMIT clause should come after UNION ALL not before}}
+do_test limit-7.1.2 {
+ catchsql {
+ SELECT x FROM t2 LIMIT 5 UNION SELECT a FROM t6;
+ }
+} {1 {LIMIT clause should come after UNION not before}}
+do_test limit-7.1.3 {
+ catchsql {
+ SELECT x FROM t2 LIMIT 5 EXCEPT SELECT a FROM t6 LIMIT 3;
+ }
+} {1 {LIMIT clause should come after EXCEPT not before}}
+do_test limit-7.1.4 {
+ catchsql {
+ SELECT x FROM t2 LIMIT 0,5 INTERSECT SELECT a FROM t6;
+ }
+} {1 {LIMIT clause should come after INTERSECT not before}}
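+
+# (As the error messages above show, in a compound SELECT the LIMIT may only
+#  appear after the final SELECT; it then applies to the result of the whole
+#  compound, as the tests below demonstrate.)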
+do_test limit-7.2 {
+ execsql {
+ SELECT x FROM t2 UNION ALL SELECT a FROM t6 LIMIT 5;
+ }
+} {31 30 1 2 3}
+do_test limit-7.3 {
+ execsql {
+ SELECT x FROM t2 UNION ALL SELECT a FROM t6 LIMIT 3 OFFSET 1;
+ }
+} {30 1 2}
+do_test limit-7.4 {
+ execsql {
+ SELECT x FROM t2 UNION ALL SELECT a FROM t6 ORDER BY 1 LIMIT 3 OFFSET 1;
+ }
+} {2 3 4}
+do_test limit-7.5 {
+ execsql {
+ SELECT x FROM t2 UNION SELECT x+2 FROM t2 LIMIT 2 OFFSET 1;
+ }
+} {31 32}
+do_test limit-7.6 {
+ execsql {
+ SELECT x FROM t2 UNION SELECT x+2 FROM t2 ORDER BY 1 DESC LIMIT 2 OFFSET 1;
+ }
+} {32 31}
+do_test limit-7.7 {
+ execsql {
+ SELECT a+9 FROM t6 EXCEPT SELECT y FROM t2 LIMIT 2;
+ }
+} {11 12}
+do_test limit-7.8 {
+ execsql {
+ SELECT a+9 FROM t6 EXCEPT SELECT y FROM t2 ORDER BY 1 DESC LIMIT 2;
+ }
+} {13 12}
+do_test limit-7.9 {
+ execsql {
+ SELECT a+26 FROM t6 INTERSECT SELECT x FROM t2 LIMIT 1;
+ }
+} {30}
+do_test limit-7.10 {
+ execsql {
+ SELECT a+27 FROM t6 INTERSECT SELECT x FROM t2 LIMIT 1;
+ }
+} {30}
+do_test limit-7.11 {
+ execsql {
+ SELECT a+27 FROM t6 INTERSECT SELECT x FROM t2 LIMIT 1 OFFSET 1;
+ }
+} {31}
+do_test limit-7.12 {
+ execsql {
+ SELECT a+27 FROM t6 INTERSECT SELECT x FROM t2
+ ORDER BY 1 DESC LIMIT 1 OFFSET 1;
+ }
+} {30}
+} ;# ifcapable compound
+
+# Tests for limit in conjunction with distinct. The distinct should
+# occur before both the limit and the offset. Ticket #749.
+#
+do_test limit-8.1 {
+ execsql {
+ SELECT DISTINCT cast(round(x/100) as integer) FROM t3 LIMIT 5;
+ }
+} {0 1 2 3 4}
+do_test limit-8.2 {
+ execsql {
+ SELECT DISTINCT cast(round(x/100) as integer) FROM t3 LIMIT 5 OFFSET 5;
+ }
+} {5 6 7 8 9}
+do_test limit-8.3 {
+ execsql {
+ SELECT DISTINCT cast(round(x/100) as integer) FROM t3 LIMIT 5 OFFSET 25;
+ }
+} {25 26 27 28 29}
+
+# Make sure limits on multiple subqueries work correctly.
+# Ticket #1035
+#
+ifcapable subquery {
+ do_test limit-9.1 {
+ execsql {
+ SELECT * FROM (SELECT * FROM t6 LIMIT 3);
+ }
+ } {1 2 3}
+}
+do_test limit-9.2.1 {
+ execsql {
+ CREATE TABLE t7 AS SELECT * FROM t6;
+ }
+} {}
+ifcapable subquery {
+ do_test limit-9.2.2 {
+ execsql {
+ SELECT * FROM (SELECT * FROM t7 LIMIT 3);
+ }
+ } {1 2 3}
+}
+ifcapable compound {
+ ifcapable subquery {
+ do_test limit-9.3 {
+ execsql {
+ SELECT * FROM (SELECT * FROM t6 LIMIT 3)
+ UNION
+ SELECT * FROM (SELECT * FROM t7 LIMIT 3)
+ ORDER BY 1
+ }
+ } {1 2 3}
+ do_test limit-9.4 {
+ execsql {
+ SELECT * FROM (SELECT * FROM t6 LIMIT 3)
+ UNION
+ SELECT * FROM (SELECT * FROM t7 LIMIT 3)
+ ORDER BY 1
+ LIMIT 2
+ }
+ } {1 2}
+ }
+ do_test limit-9.5 {
+ catchsql {
+ SELECT * FROM t6 LIMIT 3
+ UNION
+ SELECT * FROM t7 LIMIT 3
+ }
+ } {1 {LIMIT clause should come after UNION not before}}
+}
+
+# Test LIMIT and OFFSET using SQL variables.
+do_test limit-10.1 {
+ set limit 10
+ db eval {
+ SELECT x FROM t1 LIMIT :limit;
+ }
+} {31 30 29 28 27 26 25 24 23 22}
+do_test limit-10.2 {
+ set limit 5
+ set offset 5
+ db eval {
+ SELECT x FROM t1 LIMIT :limit OFFSET :offset;
+ }
+} {26 25 24 23 22}
+do_test limit-10.3 {
+ set limit -1
+ db eval {
+ SELECT x FROM t1 WHERE x<10 LIMIT :limit;
+ }
+} {9 8 7 6 5 4 3 2 1 0}
+do_test limit-10.4 {
+ set limit 1.5
+ set rc [catch {
+ db eval {
+ SELECT x FROM t1 WHERE x<10 LIMIT :limit;
+ } } msg]
+ list $rc $msg
+} {1 {datatype mismatch}}
+do_test limit-10.5 {
+ set limit "hello world"
+ set rc [catch {
+ db eval {
+ SELECT x FROM t1 WHERE x<10 LIMIT :limit;
+ } } msg]
+ list $rc $msg
+} {1 {datatype mismatch}}
+
+ifcapable subquery {
+do_test limit-11.1 {
+ db eval {
+ SELECT x FROM (SELECT x FROM t1 ORDER BY x LIMIT 0) ORDER BY x
+ }
+} {}
+} ;# ifcapable subquery
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/loadext.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/loadext.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,192 @@
+# 2006 July 14
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is extension loading.
+#
+# $Id: loadext.test,v 1.8 2006/08/23 20:07:22 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# The name of the test extension varies by operating system.
+#
+if {$::tcl_platform(platform) eq "windows"} {
+ set testextension ./testloadext.dll
+} else {
+ set testextension ./libtestloadext.so
+}
+
+# Make sure the test extension actually exists. If it does not
+# exist, try to create it. If unable to create it, then skip this
+# test file.
+#
+if {![file exists $testextension]} {
+ set srcdir [file dir $testdir]/src
+ set testextsrc $srcdir/test_loadext.c
+ if {[catch {
+ exec gcc -Wall -I$srcdir -I. -g -shared $testextsrc -o $testextension
+ } msg]} {
+ puts "Skipping loadext tests: Test extension not built..."
+ puts $msg
+ finish_test
+ return
+ }
+}
+
+# Test that loading the extension produces the expected results - adding
+# the half() function to the specified database handle.
+#
+do_test loadext-1.1 {
+ catchsql {
+ SELECT half(1.0);
+ }
+} {1 {no such function: half}}
+do_test loadext-1.2 {
+ db enable_load_extension 1
+ sqlite3_load_extension db $testextension testloadext_init
+ catchsql {
+ SELECT half(1.0);
+ }
+} {0 0.5}
+
+# Test that a second database connection (db2) can load the extension also.
+#
+do_test loadext-1.3 {
+ sqlite3 db2 test.db
+ sqlite3_enable_load_extension db2 1
+ catchsql {
+ SELECT half(1.0);
+ } db2
+} {1 {no such function: half}}
+do_test loadext-1.4 {
+ sqlite3_load_extension db2 $testextension testloadext_init
+ catchsql {
+ SELECT half(1.0);
+ } db2
+} {0 0.5}
+
+# Close the first database connection. Then check that the second database
+# can still use the half() function without a problem.
+#
+do_test loadext-1.5 {
+ db close
+ catchsql {
+ SELECT half(1.0);
+ } db2
+} {0 0.5}
+
+db2 close
+sqlite3 db test.db
+sqlite3_enable_load_extension db 1
+
+# Try to load an extension for which the file does not exist.
+#
+do_test loadext-2.1 {
+ set rc [catch {
+ sqlite3_load_extension db "${testextension}xx"
+ } msg]
+ list $rc $msg
+} [list 1 [subst -nocommands \
+ {unable to open shared library [${testextension}xx]}
+]]
+
+# Try to load an extension for which the file is not a shared object
+#
+do_test loadext-2.2 {
+ set fd [open "${testextension}xx" w]
+ puts $fd blah
+ close $fd
+ set rc [catch {
+ sqlite3_load_extension db "${testextension}xx"
+ } msg]
+ list $rc $msg
+} [list 1 [subst -nocommands \
+ {unable to open shared library [${testextension}xx]}
+]]
+
+# Try to load an extension for which the file is present but the
+# entry point is not.
+#
+do_test loadext-2.3 {
+ set rc [catch {
+ sqlite3_load_extension db $testextension icecream
+ } msg]
+ list $rc $msg
+} [list 1 [subst -nocommands \
+ {no entry point [icecream] in shared library [$testextension]}
+]]
+
+# Try to load an extension for which the entry point fails (returns non-zero)
+#
+do_test loadext-2.4 {
+ set rc [catch {
+ sqlite3_load_extension db $testextension testbrokenext_init
+ } msg]
+ list $rc $msg
+} {1 {error during initialization: broken!}}
+
+############################################################################
+# Tests for the load_extension() SQL function
+#
+
+db close
+sqlite3 db test.db
+sqlite3_enable_load_extension db 1
+do_test loadext-3.1 {
+ catchsql {
+ SELECT half(5);
+ }
+} {1 {no such function: half}}
+do_test loadext-3.2 {
+ catchsql {
+ SELECT load_extension($::testextension)
+ }
+} [list 1 "no entry point \[sqlite3_extension_init\]\
+ in shared library \[$testextension\]"]
+do_test loadext-3.3 {
+ catchsql {
+ SELECT load_extension($::testextension,'testloadext_init')
+ }
+} {0 {{}}}
+do_test loadext-3.4 {
+ catchsql {
+ SELECT half(5);
+ }
+} {0 2.5}
+
+# Ticket #1863
+# Make sure the extension loading mechanism will not work unless it
+# is explicitly enabled.
+#
+db close
+sqlite3 db test.db
+do_test loadext-4.1 {
+ catchsql {
+ SELECT load_extension($::testextension,'testloadext_init')
+ }
+} {1 {not authorized}}
+do_test loadext-4.2 {
+ sqlite3_enable_load_extension db 1
+ catchsql {
+ SELECT load_extension($::testextension,'testloadext_init')
+ }
+} {0 {{}}}
+
+do_test loadext-4.3 {
+ sqlite3_enable_load_extension db 0
+ catchsql {
+ SELECT load_extension($::testextension,'testloadext_init')
+ }
+} {1 {not authorized}}
+
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/loadext2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/loadext2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,139 @@
+# 2006 August 23
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is automatic extension loading and the
+# sqlite3_auto_extension() API.
+#
+# $Id: loadext2.test,v 1.1 2006/08/23 20:07:22 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if the appropriate APIs are defined
+# in the system under test.
+#
+if {[info command sqlite3_auto_extension_sqr]==""} {
+ finish_test
+ return
+}
+
+
+# None of the extensions are loaded by default.
+#
+do_test loadext2-1.1 {
+ catchsql {
+ SELECT sqr(2)
+ }
+} {1 {no such function: sqr}}
+do_test loadext2-1.2 {
+ catchsql {
+ SELECT cube(2)
+ }
+} {1 {no such function: cube}}
+
+# Register auto-loaders. The functions still do not exist.
+#
+do_test loadext2-1.3 {
+ sqlite3_auto_extension_sqr
+ sqlite3_auto_extension_cube
+ catchsql {
+ SELECT sqr(2)
+ }
+} {1 {no such function: sqr}}
+do_test loadext2-1.4 {
+ catchsql {
+ SELECT cube(2)
+ }
+} {1 {no such function: cube}}
+
+
+# The functions do exist in a new database connection.
+#
+do_test loadext2-1.5 {
+ sqlite3 db test.db
+ catchsql {
+ SELECT sqr(2)
+ }
+} {0 4.0}
+do_test loadext2-1.6 {
+ catchsql {
+ SELECT cube(2)
+ }
+} {0 8.0}
+
+
+# Reset extension auto-loading. Extensions already loaded into open
+# connections remain in effect.
+#
+do_test loadext2-1.7 {
+ sqlite3_reset_auto_extension
+ catchsql {
+ SELECT sqr(2)
+ }
+} {0 4.0}
+do_test loadext2-1.8 {
+ catchsql {
+ SELECT cube(2)
+ }
+} {0 8.0}
+
+
+# Register only the sqr() function.
+#
+do_test loadext2-1.9 {
+ sqlite3_auto_extension_sqr
+ sqlite3 db test.db
+ catchsql {
+ SELECT sqr(2)
+ }
+} {0 4.0}
+do_test loadext2-1.10 {
+ catchsql {
+ SELECT cube(2)
+ }
+} {1 {no such function: cube}}
+
+# Register only the cube() function.
+#
+do_test loadext2-1.11 {
+ sqlite3_reset_auto_extension
+ sqlite3_auto_extension_cube
+ sqlite3 db test.db
+ catchsql {
+ SELECT sqr(2)
+ }
+} {1 {no such function: sqr}}
+do_test loadext2-1.12 {
+ catchsql {
+ SELECT cube(2)
+ }
+} {0 8.0}
+
+# Register a broken entry point.
+#
+do_test loadext2-1.13 {
+ sqlite3_auto_extension_broken
+ set rc [catch {sqlite3 db test.db} errmsg]
+ lappend rc $errmsg
+} {1 {automatic extension loading failed: broken autoext!}}
+do_test loadext2-1.14 {
+ catchsql {
+ SELECT sqr(2)
+ }
+} {1 {no such function: sqr}}
+do_test loadext2-1.15 {
+ catchsql {
+ SELECT cube(2)
+ }
+} {0 8.0}
+
+
+sqlite3_reset_auto_extension
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/lock.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/lock.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,354 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is database locks.
+#
+# $Id: lock.test,v 1.33 2006/08/16 16:42:48 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create an alternative connection to the database
+#
+do_test lock-1.0 {
+ sqlite3 db2 ./test.db
+ set dummy {}
+} {}
+do_test lock-1.1 {
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {}
+do_test lock-1.2 {
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name} db2
+} {}
+do_test lock-1.3 {
+ execsql {CREATE TABLE t1(a int, b int)}
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {t1}
+do_test lock-1.5 {
+ catchsql {
+ SELECT name FROM sqlite_master WHERE type='table' ORDER BY name
+ } db2
+} {0 t1}
+
+do_test lock-1.6 {
+ execsql {INSERT INTO t1 VALUES(1,2)}
+ execsql {SELECT * FROM t1}
+} {1 2}
+# Update: The schema is now brought up to date by test lock-1.5.
+# do_test lock-1.7.1 {
+# catchsql {SELECT * FROM t1} db2
+# } {1 {no such table: t1}}
+do_test lock-1.7.2 {
+ catchsql {SELECT * FROM t1} db2
+} {0 {1 2}}
+do_test lock-1.8 {
+ execsql {UPDATE t1 SET a=b, b=a} db2
+ execsql {SELECT * FROM t1} db2
+} {2 1}
+do_test lock-1.9 {
+ execsql {SELECT * FROM t1}
+} {2 1}
+do_test lock-1.10 {
+ execsql {BEGIN TRANSACTION}
+ execsql {UPDATE t1 SET a = 0 WHERE 0}
+ execsql {SELECT * FROM t1}
+} {2 1}
+do_test lock-1.11 {
+ catchsql {SELECT * FROM t1} db2
+} {0 {2 1}}
+do_test lock-1.12 {
+ execsql {ROLLBACK}
+ catchsql {SELECT * FROM t1}
+} {0 {2 1}}
+
+do_test lock-1.13 {
+ execsql {CREATE TABLE t2(x int, y int)}
+ execsql {INSERT INTO t2 VALUES(8,9)}
+ execsql {SELECT * FROM t2}
+} {8 9}
+do_test lock-1.14.1 {
+ catchsql {SELECT * FROM t2} db2
+} {1 {no such table: t2}}
+do_test lock-1.14.2 {
+ catchsql {SELECT * FROM t1} db2
+} {0 {2 1}}
+do_test lock-1.15 {
+ catchsql {SELECT * FROM t2} db2
+} {0 {8 9}}
+
+do_test lock-1.16 {
+ db eval {SELECT * FROM t1} qv {
+ set x [db eval {SELECT * FROM t1}]
+ }
+ set x
+} {2 1}
+do_test lock-1.17 {
+ db eval {SELECT * FROM t1} qv {
+ set x [db eval {SELECT * FROM t2}]
+ }
+ set x
+} {8 9}
+
+# You cannot UPDATE a table from within the callback of a SELECT
+# on that same table because the SELECT has the table locked.
+#
+# 2006-08-16: Reads no longer block writes within the same
+# database connection.
+#
+#do_test lock-1.18 {
+# db eval {SELECT * FROM t1} qv {
+# set r [catch {db eval {UPDATE t1 SET a=b, b=a}} msg]
+# lappend r $msg
+# }
+# set r
+#} {1 {database table is locked}}
+
+# But you can UPDATE a different table from the one that is used in
+# the SELECT.
+#
+do_test lock-1.19 {
+ db eval {SELECT * FROM t1} qv {
+ set r [catch {db eval {UPDATE t2 SET x=y, y=x}} msg]
+ lappend r $msg
+ }
+ set r
+} {0 {}}
+do_test lock-1.20 {
+ execsql {SELECT * FROM t2}
+} {9 8}
+
+# It is possible to do a SELECT of the same table within the
+# callback of another SELECT on that same table because two
+# or more read-only cursors can be open at once.
+#
+do_test lock-1.21 {
+ db eval {SELECT * FROM t1} qv {
+ set r [catch {db eval {SELECT a FROM t1}} msg]
+ lappend r $msg
+ }
+ set r
+} {0 2}
+
+# Under UNIX you can do two SELECTs at once with different database
+# connections, because UNIX supports reader/writer locks. Under Windows,
+# this is not possible.
+#
+if {$::tcl_platform(platform)=="unix"} {
+ do_test lock-1.22 {
+ db eval {SELECT * FROM t1} qv {
+ set r [catch {db2 eval {SELECT a FROM t1}} msg]
+ lappend r $msg
+ }
+ set r
+ } {0 2}
+}
+integrity_check lock-1.23
+
+# If one thread has a transaction another thread cannot start
+# a transaction. -> Not true in version 3.0. But if one thread
+# has a RESERVED lock another thread cannot acquire one.
+#
+do_test lock-2.1 {
+ execsql {BEGIN TRANSACTION}
+ execsql {UPDATE t1 SET a = 0 WHERE 0}
+ execsql {BEGIN TRANSACTION} db2
+ set r [catch {execsql {UPDATE t1 SET a = 0 WHERE 0} db2} msg]
+ execsql {ROLLBACK} db2
+ lappend r $msg
+} {1 {database is locked}}
+
+# A thread can read when another has a RESERVED lock.
+#
+do_test lock-2.2 {
+ catchsql {SELECT * FROM t2} db2
+} {0 {9 8}}
+
+# If the other thread (the one that does not hold the transaction with
+# a RESERVED lock) tries to get a RESERVED lock, we do get a busy callback
+# as long as we were not originally holding a READ lock.
+#
+do_test lock-2.3.1 {
+ proc callback {count} {
+ set ::callback_value $count
+ break
+ }
+ set ::callback_value {}
+ db2 busy callback
+ # db2 does not hold a lock so we should get a busy callback here
+ set r [catch {execsql {UPDATE t1 SET a=b, b=a} db2} msg]
+ lappend r $msg
+ lappend r $::callback_value
+} {1 {database is locked} 0}
+do_test lock-2.3.2 {
+ set ::callback_value {}
+ execsql {BEGIN; SELECT rowid FROM sqlite_master LIMIT 1} db2
+ # This time db2 does hold a read lock. No busy callback this time.
+ set r [catch {execsql {UPDATE t1 SET a=b, b=a} db2} msg]
+ lappend r $msg
+ lappend r $::callback_value
+} {1 {database is locked} {}}
+catch {execsql {ROLLBACK} db2}
+do_test lock-2.4.1 {
+ proc callback {count} {
+ lappend ::callback_value $count
+ if {$count>4} break
+ }
+ set ::callback_value {}
+ db2 busy callback
+ # We get a busy callback because db2 is not holding a lock
+ set r [catch {execsql {UPDATE t1 SET a=b, b=a} db2} msg]
+ lappend r $msg
+ lappend r $::callback_value
+} {1 {database is locked} {0 1 2 3 4 5}}
+do_test lock-2.4.2 {
+ proc callback {count} {
+ lappend ::callback_value $count
+ if {$count>4} break
+ }
+ set ::callback_value {}
+ db2 busy callback
+ execsql {BEGIN; SELECT rowid FROM sqlite_master LIMIT 1} db2
+ # No busy callback this time because we are holding a lock
+ set r [catch {execsql {UPDATE t1 SET a=b, b=a} db2} msg]
+ lappend r $msg
+ lappend r $::callback_value
+} {1 {database is locked} {}}
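+# (The busy handler is deliberately not invoked in lock-2.3.2 and lock-2.4.2:
+#  db2 already holds a read lock, so waiting in the handler could deadlock
+#  against the connection that is trying to commit.)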
+catch {execsql {ROLLBACK} db2}
+do_test lock-2.5 {
+ proc callback {count} {
+ lappend ::callback_value $count
+ if {$count>4} break
+ }
+ set ::callback_value {}
+ db2 busy callback
+ set r [catch {execsql {SELECT * FROM t1} db2} msg]
+ lappend r $msg
+ lappend r $::callback_value
+} {0 {2 1} {}}
+execsql {ROLLBACK}
+
+# Test the built-in busy timeout handler
+#
+do_test lock-2.8 {
+ db2 timeout 400
+ execsql BEGIN
+ execsql {UPDATE t1 SET a = 0 WHERE 0}
+ catchsql {BEGIN EXCLUSIVE;} db2
+} {1 {database is locked}}
+do_test lock-2.9 {
+ db2 timeout 0
+ execsql COMMIT
+} {}
+integrity_check lock-2.10
+
+# Try to start two transactions in a row
+#
+do_test lock-3.1 {
+ execsql {BEGIN TRANSACTION}
+ set r [catch {execsql {BEGIN TRANSACTION}} msg]
+ execsql {ROLLBACK}
+ lappend r $msg
+} {1 {cannot start a transaction within a transaction}}
+integrity_check lock-3.2
+
+# Make sure the busy handler and error messages work when
+# opening a new pointer to the database while another pointer
+# has the database locked.
+#
+do_test lock-4.1 {
+ db2 close
+ catch {db eval ROLLBACK}
+ db eval BEGIN
+ db eval {UPDATE t1 SET a=0 WHERE 0}
+ sqlite3 db2 ./test.db
+ catchsql {UPDATE t1 SET a=0} db2
+} {1 {database is locked}}
+do_test lock-4.2 {
+ set ::callback_value {}
+ set rc [catch {db2 eval {UPDATE t1 SET a=0}} msg]
+ lappend rc $msg $::callback_value
+} {1 {database is locked} {}}
+do_test lock-4.3 {
+ proc callback {count} {
+ lappend ::callback_value $count
+ if {$count>4} break
+ }
+ db2 busy callback
+ set rc [catch {db2 eval {UPDATE t1 SET a=0}} msg]
+ lappend rc $msg $::callback_value
+} {1 {database is locked} {0 1 2 3 4 5}}
+execsql {ROLLBACK}
+
+# When one thread is writing, other threads cannot read; but if the
+# writing thread is only writing to its temporary tables, the other threads
+# can still read. -> Not so in 3.0. One thread can read while another
+# holds a RESERVED lock.
+#
+proc tx_exec {sql} {
+ db2 eval $sql
+}
+do_test lock-5.1 {
+ execsql {
+ SELECT * FROM t1
+ }
+} {2 1}
+do_test lock-5.2 {
+ db function tx_exec tx_exec
+ catchsql {
+ INSERT INTO t1(a,b) SELECT 3, tx_exec('SELECT y FROM t2 LIMIT 1');
+ }
+} {0 {}}
+
+ifcapable tempdb {
+ do_test lock-5.3 {
+ execsql {
+ CREATE TEMP TABLE t3(x);
+ SELECT * FROM t3;
+ }
+ } {}
+ do_test lock-5.4 {
+ catchsql {
+ INSERT INTO t3 SELECT tx_exec('SELECT y FROM t2 LIMIT 1');
+ }
+ } {0 {}}
+ do_test lock-5.5 {
+ execsql {
+ SELECT * FROM t3;
+ }
+ } {8}
+ do_test lock-5.6 {
+ catchsql {
+ UPDATE t1 SET a=tx_exec('SELECT x FROM t2');
+ }
+ } {0 {}}
+ do_test lock-5.7 {
+ execsql {
+ SELECT * FROM t1;
+ }
+ } {9 1 9 8}
+ do_test lock-5.8 {
+ catchsql {
+ UPDATE t3 SET x=tx_exec('SELECT x FROM t2');
+ }
+ } {0 {}}
+ do_test lock-5.9 {
+ execsql {
+ SELECT * FROM t3;
+ }
+ } {9}
+}
+
+do_test lock-999.1 {
+ rename db2 {}
+} {}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/lock2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/lock2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,163 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is database locks between competing processes.
+#
+# $Id: lock2.test,v 1.6 2005/09/17 16:48:19 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Launch another testfixture process to be controlled by this one. A
+# channel name is returned that may be passed as the first argument to proc
+# 'testfixture' to execute a command. The child testfixture process is shut
+# down by closing the channel.
+proc launch_testfixture {} {
+ set chan [open "|[file join . testfixture] tf_main.tcl" r+]
+ fconfigure $chan -buffering line
+ return $chan
+}
+
+# Execute a command in a child testfixture process, connected by two-way
+# channel $chan. Return the result of the command, or an error message.
+proc testfixture {chan cmd} {
+ puts $chan $cmd
+ puts $chan OVER
+ set r ""
+ while { 1 } {
+ set line [gets $chan]
+ if { $line == "OVER" } {
+ return $r
+ }
+ append r $line
+ }
+}
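+
+# A minimal usage sketch for the two helpers above (guarded so it never runs
+# as part of the suite): launch a child process, run a script in it, then
+# shut it down by closing the channel.
+if {0} {
+  set chan [launch_testfixture]
+  puts [testfixture $chan {
+    sqlite3 db test.db
+    db eval {SELECT name FROM sqlite_master}
+  }]
+  close $chan
+}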
+
+# Write the main loop for the child testfixture processes into file
+# tf_main.tcl. The parent (this script) interacts with the child processes
+# via a two way pipe. The parent writes a script to the stdin of the child
+# process, followed by the word "OVER" on a line of its own. The child
+# process evaluates the script and writes the results to stdout, followed
+# by an "OVER" of its own.
+set f [open tf_main.tcl w]
+puts $f {
+ set l [open log w]
+ set script ""
+ while {![eof stdin]} {
+ flush stdout
+ set line [gets stdin]
+ puts $l "READ $line"
+ if { $line == "OVER" } {
+ catch {eval $script} result
+ puts $result
+ puts $l "WRITE $result"
+ puts OVER
+ puts $l "WRITE OVER"
+ flush stdout
+ set script ""
+ } else {
+ append script $line
+ append script " ; "
+ }
+ }
+ close $l
+}
+close $f
+
+# Simple locking test case:
+#
+# lock2-1.1: Connect a second process to the database.
+# lock2-1.2: Establish a RESERVED lock with this process.
+# lock2-1.3: Get a SHARED lock with the second process.
+# lock2-1.4: Try for a RESERVED lock with process 2. This fails.
+# lock2-1.5: Try to upgrade the first process to EXCLUSIVE; this fails, so
+# it gets PENDING.
+# lock2-1.6: Release the SHARED lock held by the second process.
+# lock2-1.7: Attempt to reacquire a SHARED lock with the second process.
+# this fails due to the PENDING lock.
+# lock2-1.8: Ensure the first process can now upgrade to EXCLUSIVE.
+#
+do_test lock2-1.1 {
+ set ::tf1 [launch_testfixture]
+ testfixture $::tf1 "set sqlite_pending_byte $::sqlite_pending_byte"
+ testfixture $::tf1 {
+ sqlite3 db test.db -key xyzzy
+ db eval {select * from sqlite_master}
+ }
+} {}
+do_test lock2-1.1.1 {
+ execsql {pragma lock_status}
+} {main unlocked temp closed}
+do_test lock2-1.2 {
+ execsql {
+ BEGIN;
+ CREATE TABLE abc(a, b, c);
+ }
+} {}
+do_test lock2-1.3 {
+ testfixture $::tf1 {
+ db eval {
+ BEGIN;
+ SELECT * FROM sqlite_master;
+ }
+ }
+} {}
+do_test lock2-1.4 {
+ testfixture $::tf1 {
+ db eval {
+ CREATE TABLE def(d, e, f)
+ }
+ }
+} {database is locked}
+do_test lock2-1.5 {
+ catchsql {
+ COMMIT;
+ }
+} {1 {database is locked}}
+do_test lock2-1.6 {
+ testfixture $::tf1 {
+ db eval {
+ SELECT * FROM sqlite_master;
+ COMMIT;
+ }
+ }
+} {}
+do_test lock2-1.7 {
+ testfixture $::tf1 {
+ db eval {
+ BEGIN;
+ SELECT * FROM sqlite_master;
+ }
+ }
+} {database is locked}
+do_test lock2-1.8 {
+ catchsql {
+ COMMIT;
+ }
+} {0 {}}
+do_test lock2-1.9 {
+ execsql {
+ SELECT * FROM sqlite_master;
+ }
+} "table abc abc [expr $AUTOVACUUM?3:2] {CREATE TABLE abc(a, b, c)}"
+do_test lock2-1.10 {
+ testfixture $::tf1 {
+ db eval {
+ SELECT * FROM sqlite_master;
+ }
+ }
+} "table abc abc [expr $AUTOVACUUM?3:2] {CREATE TABLE abc(a, b, c)}"
+
+catch {testfixture $::tf1 {db close}}
+catch {close $::tf1}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/lock3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/lock3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,78 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is database locks and the operation of the
+# DEFERRED, IMMEDIATE, and EXCLUSIVE keywords as modifiers to the
+# BEGIN command.
+#
+# $Id: lock3.test,v 1.1 2004/10/05 02:41:43 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Establish two connections to the same database. Put some
+# sample data into the database.
+#
+do_test lock3-1.1 {
+ sqlite3 db2 test.db
+ execsql {
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(1);
+ }
+ execsql {
+ SELECT * FROM t1
+ } db2
+} 1
+
+# Get a deferred lock on the database using one connection. The
+# other connection should still be able to write.
+#
+do_test lock3-2.1 {
+ execsql {BEGIN DEFERRED TRANSACTION}
+ execsql {INSERT INTO t1 VALUES(2)} db2
+ execsql {END TRANSACTION}
+ execsql {SELECT * FROM t1}
+} {1 2}
+
+# Get an immediate lock on the database using one connection. The
+# other connection should be able to read the database but not write
+# it.
+#
+do_test lock3-3.1 {
+ execsql {BEGIN IMMEDIATE TRANSACTION}
+ catchsql {SELECT * FROM t1} db2
+} {0 {1 2}}
+do_test lock3-3.2 {
+ catchsql {INSERT INTO t1 VALUES(3)} db2
+} {1 {database is locked}}
+do_test lock3-3.3 {
+ execsql {END TRANSACTION}
+} {}
+
+
+# Get an exclusive lock on the database using one connection. The
+# other connection should be unable to read or write the database.
+#
+do_test lock3-4.1 {
+ execsql {BEGIN EXCLUSIVE TRANSACTION}
+ catchsql {SELECT * FROM t1} db2
+} {1 {database is locked}}
+do_test lock3-4.2 {
+ catchsql {INSERT INTO t1 VALUES(3)} db2
+} {1 {database is locked}}
+do_test lock3-4.3 {
+ execsql {END TRANSACTION}
+} {}
+
+catch {db2 close}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/main.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/main.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,319 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is exercising the code in main.c.
+#
+# $Id: main.test,v 1.25 2006/02/09 22:24:41 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only do the next group of tests if the sqlite3_complete API is available
+#
+ifcapable {complete} {
+
+# Tests of the sqlite_complete() function.
+#
+do_test main-1.1 {
+ db complete {This is a test}
+} {0}
+do_test main-1.2 {
+ db complete {
+ }
+} {1}
+do_test main-1.3 {
+ db complete {
+ -- a comment ;
+ }
+} {1}
+do_test main-1.4 {
+ db complete {
+ -- a comment ;
+ ;
+ }
+} {1}
+do_test main-1.5 {
+ db complete {DROP TABLE 'xyz;}
+} {0}
+do_test main-1.6 {
+ db complete {DROP TABLE 'xyz';}
+} {1}
+do_test main-1.7 {
+ db complete {DROP TABLE "xyz;}
+} {0}
+do_test main-1.8 {
+ db complete {DROP TABLE "xyz';}
+} {0}
+do_test main-1.9 {
+ db complete {DROP TABLE "xyz";}
+} {1}
+do_test main-1.10 {
+ db complete {DROP TABLE xyz; hi}
+} {0}
+do_test main-1.11 {
+ db complete {DROP TABLE xyz; }
+} {1}
+do_test main-1.12 {
+ db complete {DROP TABLE xyz; -- hi }
+} {1}
+do_test main-1.13 {
+ db complete {DROP TABLE xyz; -- hi
+ }
+} {1}
+do_test main-1.14 {
+ db complete {SELECT a-b FROM t1; }
+} {1}
+do_test main-1.15 {
+ db complete {SELECT a/e FROM t1 }
+} {0}
+do_test main-1.16 {
+ db complete {
+ CREATE TABLE abc(x,y);
+ }
+} {1}
+ifcapable {trigger} {
+ do_test main-1.17 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE abc BEGIN UPDATE pqr;
+ }
+ } {0}
+ do_test main-1.18 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE abc BEGIN UPDATE pqr; END;
+ }
+ } {1}
+ do_test main-1.19 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE abc BEGIN
+ UPDATE pqr;
+ unknown command;
+ }
+ } {0}
+ do_test main-1.20 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE backend BEGIN
+ UPDATE pqr;
+ }
+ } {0}
+ do_test main-1.21 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE end BEGIN
+ SELECT a, b FROM end;
+ }
+ } {0}
+ do_test main-1.22 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE end BEGIN
+ SELECT a, b FROM end;
+ END;
+ }
+ } {1}
+ do_test main-1.23 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE end BEGIN
+ SELECT a, b FROM end;
+ END;
+ SELECT a, b FROM end;
+ }
+ } {1}
+ do_test main-1.24 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE [;end;] BEGIN
+ UPDATE pqr;
+ }
+ } {0}
+ do_test main-1.25 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE backend BEGIN
+ UPDATE pqr SET a=[;end;];;;
+ }
+ } {0}
+ do_test main-1.26 {
+ db complete {
+ CREATE -- a comment
+ TRIGGER xyz AFTER DELETE backend BEGIN
+ UPDATE pqr SET a=5;
+ }
+ } {0}
+ do_test main-1.27.1 {
+ db complete {
+ CREATE -- a comment
+ TRIGGERX xyz AFTER DELETE backend BEGIN
+ UPDATE pqr SET a=5;
+ }
+ } {1}
+ do_test main-1.27.2 {
+ db complete {
+ CREATE/**/TRIGGER xyz AFTER DELETE backend BEGIN
+ UPDATE pqr SET a=5;
+ }
+ } {0}
+ ifcapable {explain} {
+ do_test main-1.27.3 {
+ db complete {
+ /* */ EXPLAIN -- A comment
+ CREATE/**/TRIGGER xyz AFTER DELETE backend BEGIN
+ UPDATE pqr SET a=5;
+ }
+ } {0}
+ }
+ do_test main-1.27.4 {
+ db complete {
+ BOGUS token
+ CREATE TRIGGER xyz AFTER DELETE backend BEGIN
+ UPDATE pqr SET a=5;
+ }
+ } {1}
+ ifcapable {explain} {
+ do_test main-1.27.5 {
+ db complete {
+ EXPLAIN
+ CREATE TEMP TRIGGER xyz AFTER DELETE backend BEGIN
+ UPDATE pqr SET a=5;
+ }
+ } {0}
+ }
+ do_test main-1.28 {
+ db complete {
+ CREATE TEMPORARY TRIGGER xyz AFTER DELETE backend BEGIN
+ UPDATE pqr SET a=5;
+ }
+ } {0}
+ do_test main-1.29 {
+ db complete {
+ CREATE TRIGGER xyz AFTER DELETE backend BEGIN
+ UPDATE pqr SET a=5;
+ EXPLAIN select * from xyz;
+ }
+ } {0}
+}
+do_test main-1.30 {
+ db complete {
+ CREATE TABLE /* In comment ; */
+ }
+} {0}
+do_test main-1.31 {
+ db complete {
+ CREATE TABLE /* In comment ; */ hi;
+ }
+} {1}
+do_test main-1.31 {
+ db complete {
+ CREATE TABLE /* In comment ; */;
+ }
+} {1}
+do_test main-1.32 {
+ db complete {
+ stuff;
+ /*
+ CREATE TABLE
+ multiple lines
+ of text
+ */
+ }
+} {1}
+do_test main-1.33 {
+ db complete {
+ /*
+ CREATE TABLE
+ multiple lines
+ of text;
+ }
+} {0}
+do_test main-1.34 {
+ db complete {
+ /*
+ CREATE TABLE
+ multiple lines "*/
+ of text;
+ }
+} {1}
+do_test main-1.35 {
+ db complete {hi /**/ there;}
+} {1}
+do_test main-1.36 {
+ db complete {hi there/***/;}
+} {1}
+
+} ;# end ifcapable {complete}
+
+
+# Try to open a database with a corrupt database file.
+#
+do_test main-2.0 {
+ catch {db close}
+ file delete -force test.db
+ set fd [open test.db w]
+ puts $fd hi!
+ close $fd
+ set v [catch {sqlite3 db test.db} msg]
+ if {$v} {lappend v $msg} {lappend v {}}
+} {0 {}}
+
+# Here are some tests for tokenize.c.
+#
+do_test main-3.1 {
+ catch {db close}
+ foreach f [glob -nocomplain testdb/*] {file delete -force $f}
+ file delete -force testdb
+ sqlite3 db testdb
+ set v [catch {execsql {SELECT * from T1 where x!!5}} msg]
+ lappend v $msg
+} {1 {unrecognized token: "!!"}}
+do_test main-3.2 {
+ catch {db close}
+ foreach f [glob -nocomplain testdb/*] {file delete -force $f}
+ file delete -force testdb
+ sqlite3 db testdb
+ set v [catch {execsql {SELECT * from T1 where ^x}} msg]
+ lappend v $msg
+} {1 {unrecognized token: "^"}}
+do_test main-3.2.2 {
+ catchsql {select 'abc}
+} {1 {unrecognized token: "'abc"}}
+do_test main-3.2.3 {
+ catchsql {select "abc}
+} {1 {unrecognized token: ""abc"}}
+
+do_test main-3.3 {
+ catch {db close}
+ foreach f [glob -nocomplain testdb/*] {file delete -force $f}
+ file delete -force testdb
+ sqlite3 db testdb
+ execsql {
+ create table T1(X REAL); /* C-style comments allowed */
+ insert into T1 values(0.5);
+ insert into T1 values(0.5e2);
+ insert into T1 values(0.5e-002);
+ insert into T1 values(5e-002);
+ insert into T1 values(-5.0e-2);
+ insert into T1 values(-5.1e-2);
+ insert into T1 values(0.5e2);
+ insert into T1 values(0.5E+02);
+ insert into T1 values(5E+02);
+ insert into T1 values(5.0E+03);
+ select x*10 from T1 order by x*5;
+ }
+} {-0.51 -0.5 0.05 0.5 5.0 500.0 500.0 500.0 5000.0 50000.0}
+do_test main-3.4 {
+ set v [catch {execsql {create bogus}} msg]
+ lappend v $msg
+} {1 {near "bogus": syntax error}}
+do_test main-3.5 {
+ set v [catch {execsql {create}} msg]
+ lappend v $msg
+} {1 {near "create": syntax error}}
+do_test main-3.6 {
+ catchsql {SELECT 'abc' + #9}
+} {1 {near "#9": syntax error}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/malloc.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/malloc.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,554 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file attempts to check the library in an out-of-memory situation.
+# When compiled with -DSQLITE_DEBUG=1, the SQLite library accepts a special
+# command (sqlite_malloc_fail N) which causes the N-th malloc to fail. This
+# special feature is used to see what happens in the library if a malloc
+# were to really fail due to an out-of-memory situation.
+#
+# $Id: malloc.test,v 1.35 2006/10/04 11:55:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if memory debugging is turned on.
+#
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG..."
+ finish_test
+ return
+}
+
+# Usage: do_malloc_test <test number> <options...>
+#
+# The first argument, <test number>, is an integer used to name the
+# tests executed by this proc. Options are as follows:
+#
+# -tclprep TCL script to run to prepare test.
+# -sqlprep SQL script to run to prepare test.
+# -tclbody TCL script to run with malloc failure simulation.
+# -sqlbody SQL script to run with malloc failure simulation.
+# -cleanup TCL script to run after the test.
+#
+# This command runs a series of tests to verify SQLite's ability
+# to handle an out-of-memory condition gracefully. It is assumed
+# that if this condition occurs a malloc() call will return a
+# NULL pointer. Linux, for example, doesn't do that by default. See
+# the "BUGS" section of malloc(3).
+#
+# On each iteration of a loop, the TCL commands in any argument passed
+# to the -tclbody switch, followed by the SQL commands in any argument
+# passed to the -sqlbody switch, are executed. On each iteration the
+# Nth call to sqliteMalloc() is made to fail, where N increases by one
+# each time the loop runs, starting from 1. When all commands execute
+# successfully, the loop ends.
+#
+proc do_malloc_test {tn args} {
+ array unset ::mallocopts
+ array set ::mallocopts $args
+
+ set ::go 1
+ for {set ::n 1} {$::go && $::n < 50000} {incr ::n} {
+ do_test malloc-$tn.$::n {
+
+ # Remove all traces of database files test.db and test2.db from the files
+ # system. Then open (empty database) "test.db" with the handle [db].
+ #
+ sqlite_malloc_fail 0
+ catch {db close}
+ catch {file delete -force test.db}
+ catch {file delete -force test.db-journal}
+ catch {file delete -force test2.db}
+ catch {file delete -force test2.db-journal}
+ catch {sqlite3 db test.db}
+ set ::DB [sqlite3_connection_pointer db]
+
+ # Execute any -tclprep and -sqlprep scripts.
+ #
+ if {[info exists ::mallocopts(-tclprep)]} {
+ eval $::mallocopts(-tclprep)
+ }
+ if {[info exists ::mallocopts(-sqlprep)]} {
+ execsql $::mallocopts(-sqlprep)
+ }
+
+ # Now set the ${::n}th malloc() to fail and execute the -tclbody and
+ # -sqlbody scripts.
+ #
+ sqlite_malloc_fail $::n
+ set ::mallocbody {}
+ if {[info exists ::mallocopts(-tclbody)]} {
+ append ::mallocbody "$::mallocopts(-tclbody)\n"
+ }
+ if {[info exists ::mallocopts(-sqlbody)]} {
+ append ::mallocbody "db eval {$::mallocopts(-sqlbody)}"
+ }
+ set v [catch $::mallocbody msg]
+
+ # If the test fails (if $v!=0) and the database connection actually
+ # exists, make sure the failure code is SQLITE_NOMEM.
+ if {$v && [info command db]=="db" && [info exists ::mallocopts(-sqlbody)]
+ && [db errorcode]!=7} {
+ set v 999
+ }
+
+ set leftover [lindex [sqlite_malloc_stat] 2]
+ if {$leftover>0} {
+ if {$leftover>1} {puts "\nLeftover: $leftover\nReturn=$v Message=$msg"}
+ set ::go 0
+ if {$v} {
+ puts "\nError message returned: $msg"
+ } else {
+ set v {1 1}
+ }
+ } else {
+ set v2 [expr {$msg=="" || $msg=="out of memory"}]
+ if {!$v2} {puts "\nError message returned: $msg"}
+ lappend v $v2
+ }
+ } {1 1}
+
+ if {[info exists ::mallocopts(-cleanup)]} {
+ catch [list uplevel #0 $::mallocopts(-cleanup)] msg
+ }
+ }
+ unset ::mallocopts
+}
+
+do_malloc_test 1 -tclprep {
+ db close
+} -tclbody {
+ if {[catch {sqlite3 db test.db}]} {
+ error "out of memory"
+ }
+} -sqlbody {
+ DROP TABLE IF EXISTS t1;
+ CREATE TABLE t1(
+ a int, b float, c double, d text, e varchar(20),
+ primary key(a,b,c)
+ );
+ CREATE INDEX i1 ON t1(a,b);
+ INSERT INTO t1 VALUES(1,2.3,4.5,'hi',x'746865726500');
+ INSERT INTO t1 VALUES(6,7.0,0.8,'hello','out yonder');
+ SELECT * FROM t1;
+ SELECT avg(b) FROM t1 GROUP BY a HAVING b>20.0;
+ DELETE FROM t1 WHERE a IN (SELECT min(a) FROM t1);
+ SELECT count(*) FROM t1;
+}
+
+# Ensure that no file descriptors were leaked.
+do_test malloc-1.X {
+ catch {db close}
+ set sqlite_open_file_count
+} {0}
+
+do_malloc_test 2 -sqlbody {
+ CREATE TABLE t1(a int, b int default 'abc', c int default 1);
+ CREATE INDEX i1 ON t1(a,b);
+ INSERT INTO t1 VALUES(1,1,'99 abcdefghijklmnopqrstuvwxyz');
+ INSERT INTO t1 VALUES(2,4,'98 abcdefghijklmnopqrstuvwxyz');
+ INSERT INTO t1 VALUES(3,9,'97 abcdefghijklmnopqrstuvwxyz');
+ INSERT INTO t1 VALUES(4,16,'96 abcdefghijklmnopqrstuvwxyz');
+ INSERT INTO t1 VALUES(5,25,'95 abcdefghijklmnopqrstuvwxyz');
+ INSERT INTO t1 VALUES(6,36,'94 abcdefghijklmnopqrstuvwxyz');
+ SELECT 'stuff', count(*) as 'other stuff', max(a+10) FROM t1;
+ UPDATE t1 SET b=b||b||b||b;
+ UPDATE t1 SET b=a WHERE a in (10,12,22);
+ INSERT INTO t1(c,b,a) VALUES(20,10,5);
+ INSERT INTO t1 SELECT * FROM t1
+ WHERE a IN (SELECT a FROM t1 WHERE a<10);
+ DELETE FROM t1 WHERE a>=10;
+ DROP INDEX i1;
+ DELETE FROM t1;
+}
+
+# Ensure that no file descriptors were leaked.
+do_test malloc-2.X {
+ catch {db close}
+ set sqlite_open_file_count
+} {0}
+
+do_malloc_test 3 -sqlbody {
+ BEGIN TRANSACTION;
+ CREATE TABLE t1(a int, b int, c int);
+ CREATE INDEX i1 ON t1(a,b);
+ INSERT INTO t1 VALUES(1,1,99);
+ INSERT INTO t1 VALUES(2,4,98);
+ INSERT INTO t1 VALUES(3,9,97);
+ INSERT INTO t1 VALUES(4,16,96);
+ INSERT INTO t1 VALUES(5,25,95);
+ INSERT INTO t1 VALUES(6,36,94);
+ INSERT INTO t1(c,b,a) VALUES(20,10,5);
+ DELETE FROM t1 WHERE a>=10;
+ DROP INDEX i1;
+ DELETE FROM t1;
+ ROLLBACK;
+}
+
+
+# Ensure that no file descriptors were leaked.
+do_test malloc-3.X {
+ catch {db close}
+ set sqlite_open_file_count
+} {0}
+
+do_malloc_test 4 -sqlbody {
+ BEGIN TRANSACTION;
+ CREATE TABLE t1(a int, b int, c int);
+ CREATE INDEX i1 ON t1(a,b);
+ INSERT INTO t1 VALUES(1,1,99);
+ INSERT INTO t1 VALUES(2,4,98);
+ INSERT INTO t1 VALUES(3,9,97);
+ INSERT INTO t1 VALUES(4,16,96);
+ INSERT INTO t1 VALUES(5,25,95);
+ INSERT INTO t1 VALUES(6,36,94);
+ UPDATE t1 SET b=a WHERE a in (10,12,22);
+ INSERT INTO t1 SELECT * FROM t1
+ WHERE a IN (SELECT a FROM t1 WHERE a<10);
+ DROP INDEX i1;
+ DELETE FROM t1;
+ COMMIT;
+}
+
+# Ensure that no file descriptors were leaked.
+do_test malloc-4.X {
+ catch {db close}
+ set sqlite_open_file_count
+} {0}
+
+do_malloc_test 5 -sqlbody {
+ BEGIN TRANSACTION;
+ CREATE TABLE t1(a,b);
+ CREATE TABLE t2(x,y);
+ CREATE TRIGGER r1 AFTER INSERT ON t1 BEGIN
+ INSERT INTO t2(x,y) VALUES(new.rowid,1);
+ END;
+ INSERT INTO t1(a,b) VALUES(2,3);
+ COMMIT;
+}
+
+# Ensure that no file descriptors were leaked.
+do_test malloc-5.X {
+ catch {db close}
+ set sqlite_open_file_count
+} {0}
+
+do_malloc_test 6 -sqlprep {
+ BEGIN TRANSACTION;
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ INSERT INTO t1 SELECT a*2 FROM t1;
+ DELETE FROM t1 where rowid%5 = 0;
+ COMMIT;
+} -sqlbody {
+ VACUUM;
+}
+
+do_malloc_test 7 -sqlprep {
+ CREATE TABLE t1(a, b);
+ INSERT INTO t1 VALUES(1, 2);
+ INSERT INTO t1 VALUES(3, 4);
+ INSERT INTO t1 VALUES(5, 6);
+ INSERT INTO t1 VALUES(7, randstr(1200,1200));
+} -sqlbody {
+ SELECT min(a) FROM t1 WHERE a<6 GROUP BY b;
+ SELECT a FROM t1 WHERE a<6 ORDER BY a;
+ SELECT b FROM t1 WHERE a>6;
+}
+
+# This block is designed to test some malloc failures that may occur
+# in vdbeapi.c. Specifically, it checks that a malloc failure that occurs
+# while converting UTF-16 text to integers and real numbers is handled
+# correctly.
+#
+# This is done by retrieving a string from the database engine and
+# manipulating it using the sqlite3_column_*** APIs. This doesn't
+# actually return an error to the user when a malloc() fails. That
+# could be viewed as a bug.
+#
+# These tests only run if UTF-16 support is compiled in.
+#
+if {$::sqlite_options(utf16)} {
+ do_malloc_test 8 -tclprep {
+ set sql "SELECT '[string repeat abc 20]', '[string repeat def 20]', ?"
+ set ::STMT [sqlite3_prepare $::DB $sql -1 X]
+ sqlite3_step $::STMT
+ if { $::tcl_platform(byteOrder)=="littleEndian" } {
+ set ::bomstr "\xFF\xFE"
+ } else {
+ set ::bomstr "\xFE\xFF"
+ }
+ append ::bomstr [encoding convertto unicode "123456789_123456789_12345678"]
+ } -tclbody {
+ sqlite3_column_text16 $::STMT 0
+ sqlite3_column_int $::STMT 0
+ sqlite3_column_text16 $::STMT 1
+ sqlite3_column_double $::STMT 1
+ sqlite3_reset $::STMT
+ sqlite3_bind_text16 $::STMT 1 $::bomstr 60
+ catch {sqlite3_finalize $::STMT}
+ if {[lindex [sqlite_malloc_stat] 2]<=0} {
+ error "out of memory"
+ }
+ } -cleanup {
+ sqlite3_finalize $::STMT
+ }
+}
+
+# This block tests that malloc() failures that occur whilst committing
+# a multi-file transaction are handled correctly.
+#
+do_malloc_test 9 -sqlprep {
+ ATTACH 'test2.db' as test2;
+ CREATE TABLE abc1(a, b, c);
+ CREATE TABLE test2.abc2(a, b, c);
+} -sqlbody {
+ BEGIN;
+ INSERT INTO abc1 VALUES(1, 2, 3);
+ INSERT INTO abc2 VALUES(1, 2, 3);
+ COMMIT;
+}
+
+# This block tests malloc() failures that occur while opening a
+# connection to a database.
+do_malloc_test 10 -sqlprep {
+ CREATE TABLE abc(a, b, c);
+} -tclbody {
+ sqlite3 db2 test.db
+ db2 eval {SELECT * FROM sqlite_master}
+ db2 close
+}
+
+# This block tests malloc() failures that occur within calls to
+# sqlite3_create_function().
+do_malloc_test 11 -tclbody {
+ set rc [sqlite3_create_function $::DB]
+ if {[string match $rc SQLITE_NOMEM]} {
+ error "out of memory"
+ }
+}
+
+do_malloc_test 12 -tclbody {
+ set sql16 [encoding convertto unicode "SELECT * FROM sqlite_master"]
+ append sql16 "\00\00"
+ set ::STMT [sqlite3_prepare16 $::DB $sql16 -1 DUMMY]
+ sqlite3_finalize $::STMT
+}
+
+# Test malloc errors when replaying two hot journals from a 2-file
+# transaction.
+ifcapable crashtest {
+ do_malloc_test 13 -tclprep {
+ set rc [crashsql 1 test2.db {
+ ATTACH 'test2.db' as aux;
+ PRAGMA cache_size = 10;
+ BEGIN;
+ CREATE TABLE aux.t2(a, b, c);
+ CREATE TABLE t1(a, b, c);
+ COMMIT;
+ }]
+ if {$rc!="1 {child process exited abnormally}"} {
+ error "Wrong error message: $rc"
+ }
+ } -tclbody {
+ db eval {ATTACH 'test2.db' as aux;}
+ set rc [catch {db eval {
+ SELECT * FROM t1;
+ SELECT * FROM t2;
+ }} err]
+ if {$rc && $err!="no such table: t1"} {
+ error $err
+ }
+ }
+}
+
+if {$tcl_platform(platform)!="windows"} {
+ do_malloc_test 14 -tclprep {
+ catch {db close}
+ sqlite3 db2 test2.db
+ db2 eval {
+ PRAGMA synchronous = 0;
+ CREATE TABLE t1(a, b);
+ INSERT INTO t1 VALUES(1, 2);
+ BEGIN;
+ INSERT INTO t1 VALUES(3, 4);
+ }
+ copy_file test2.db test.db
+ copy_file test2.db-journal test.db-journal
+ db2 close
+ } -tclbody {
+ sqlite3 db test.db
+ db eval {
+ SELECT * FROM t1;
+ }
+ }
+}
+
+proc string_compare {a b} {
+ return [string compare $a $b]
+}
+
+# Test for malloc() failures in sqlite3_create_collation() and
+# sqlite3_create_collation16().
+#
+do_malloc_test 15 -tclbody {
+ db collate string_compare string_compare
+ if {[catch {add_test_collate $::DB 1 1 1} msg]} {
+ if {$msg=="SQLITE_NOMEM"} {set msg "out of memory"}
+ error $msg
+ }
+
+ db complete {SELECT "hello """||'world"' [microsoft], * FROM anicetable;}
+ db complete {-- Useful comment}
+
+ execsql {
+ CREATE TABLE t1(a, b COLLATE string_compare);
+ INSERT INTO t1 VALUES(10, 'string');
+ INSERT INTO t1 VALUES(10, 'string2');
+ }
+}
+
+# Also test sqlite3_complete(). There are (currently) no malloc()
+# calls in this function, but test it anyway to guard against future changes.
+#
+do_malloc_test 16 -tclbody {
+ db complete {SELECT "hello """||'world"' [microsoft], * FROM anicetable;}
+ db complete {-- Useful comment}
+ db eval {
+ SELECT * FROM sqlite_master;
+ }
+}
+
+# Test handling of malloc() failures in sqlite3_open16().
+#
+do_malloc_test 17 -tclbody {
+ set DB2 0
+ set STMT 0
+
+ # open database using sqlite3_open16()
+ set filename [encoding convertto unicode test.db]
+ append filename "\x00\x00"
+ set DB2 [sqlite3_open16 $filename -unused]
+ if {0==$DB2} {
+ error "out of memory"
+ }
+
+ # Prepare statement
+ set rc [catch {sqlite3_prepare $DB2 {SELECT * FROM sqlite_master} -1 X} msg]
+ if {$rc} {
+ error [string range $msg 4 end]
+ }
+ set STMT $msg
+
+ # Finalize statement
+ set rc [sqlite3_finalize $STMT]
+ if {$rc!="SQLITE_OK"} {
+ error [sqlite3_errmsg $DB2]
+ }
+ set STMT 0
+
+ # Close database
+ set rc [sqlite3_close $DB2]
+ if {$rc!="SQLITE_OK"} {
+ error [sqlite3_errmsg $DB2]
+ }
+ set DB2 0
+} -cleanup {
+ if {$STMT!="0"} {
+ sqlite3_finalize $STMT
+ }
+ if {$DB2!="0"} {
+ set rc [sqlite3_close $DB2]
+ }
+}
+
+# Test handling of malloc() failures in sqlite3_errmsg16().
+#
+do_malloc_test 18 -tclbody {
+ catch {
+ db eval "SELECT [string repeat longcolumnname 10] FROM sqlite_master"
+ } msg
+ if {$msg=="out of memory"} {error $msg}
+ set utf16 [sqlite3_errmsg16 [sqlite3_connection_pointer db]]
+ binary scan $utf16 c* bytes
+ if {[llength $bytes]==0} {
+ error "out of memory"
+ }
+}
+
+# This test is aimed at coverage testing. Specifically, it is supposed to
+# cause a malloc() that is used only when converting between the two UTF-16
+# encodings (i.e. little-endian->big-endian) to fail. It only actually
+# hits this malloc() on little-endian hosts.
+#
+set static_string "\x00h\x00e\x00l\x00l\x00o"
+for {set l 0} {$l<10} {incr l} {
+ append static_string $static_string
+}
+append static_string "\x00\x00"
+do_malloc_test 19 -tclprep {
+ execsql {
+ PRAGMA encoding = "UTF16be";
+ CREATE TABLE abc(a, b, c);
+ }
+} -tclbody {
+ unset -nocomplain ::STMT
+ set r [catch {
+ set ::STMT [sqlite3_prepare $::DB {SELECT ?} -1 DUMMY]
+ sqlite3_bind_text16 -static $::STMT 1 $static_string 112
+ } msg]
+ if {$r} {error [string range $msg 4 end]}
+ set msg
+} -cleanup {
+ if {[info exists ::STMT]} {
+ sqlite3_finalize $::STMT
+ }
+}
+unset static_string
+
+# Make sure SQLITE_NOMEM is reported on an ATTACH failure even
+# when the malloc failure occurs within the nested parse.
+#
+do_malloc_test 20 -tclprep {
+ db close
+ file delete -force test2.db test2.db-journal
+ sqlite3 db test2.db
+ db eval {CREATE TABLE t1(x);}
+ db close
+} -tclbody {
+ if {[catch {sqlite3 db test.db}]} {
+ error "out of memory"
+ }
+} -sqlbody {
+ ATTACH DATABASE 'test2.db' AS t2;
+ SELECT * FROM t1;
+ DETACH DATABASE t2;
+}
+
+
+# Ensure that no file descriptors were leaked.
+do_test malloc-99.X {
+ catch {db close}
+ set sqlite_open_file_count
+} {0}
+
+puts open-file-count=$sqlite_open_file_count
+sqlite_malloc_fail 0
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/malloc2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/malloc2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,359 @@
+# 2005 March 18
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file attempts to check that the library can recover from a malloc()
+# failure when sqlite3_global_recover() is invoked.
+#
+# $Id: malloc2.test,v 1.5 2006/09/04 18:54:14 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if memory debugging is turned on.
+#
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG=1"
+ finish_test
+ return
+}
+
+ifcapable !globalrecover {
+ finish_test
+ return
+}
+
+# Generate a checksum based on the contents of the database. If the
+# checksum of two databases is the same, and the integrity-check passes
+# for both, the two databases are identical.
+#
+proc cksum {db} {
+ set ret [list]
+ ifcapable tempdb {
+ set sql {
+ SELECT name FROM sqlite_master WHERE type = 'table' UNION
+ SELECT name FROM sqlite_temp_master WHERE type = 'table' UNION
+ SELECT 'sqlite_master' UNION
+ SELECT 'sqlite_temp_master'
+ }
+ } else {
+ set sql {
+ SELECT name FROM sqlite_master WHERE type = 'table' UNION
+ SELECT 'sqlite_master'
+ }
+ }
+ set tbllist [$db eval $sql]
+ set txt {}
+ foreach tbl $tbllist {
+ append txt [$db eval "SELECT * FROM $tbl"]
+ }
+ # puts txt=$txt
+ return [md5 $txt]
+}
+
+proc do_malloc2_test {tn args} {
+ array set ::mallocopts $args
+ set sum [cksum db]
+
+ for {set ::n 1} {true} {incr ::n} {
+
+ # Run the SQL. Malloc number $::n is set to fail. A malloc() failure
+ # may or may not be reported.
+ sqlite_malloc_fail $::n
+ do_test malloc2-$tn.$::n.2 {
+ set res [catchsql [string trim $::mallocopts(-sql)]]
+ set rc [expr {
+ 0==[string compare $res {1 {out of memory}}] ||
+ 0==[lindex $res 0]
+ }]
+ if {$rc!=1} {
+ puts "Error: $res"
+ }
+ set rc
+ } {1}
+
+ # If $::n is greater than the number of malloc() calls required to
+ # execute the SQL, then this test is finished. Break out of the loop.
+ if {[lindex [sqlite_malloc_stat] 2]>0} {
+ sqlite_malloc_fail -1
+ break
+ }
+
+ # Nothing should work now, because the allocator should refuse to
+ # allocate any memory.
+ #
+ # Update: SQLite now automatically recovers from a malloc() failure.
+ # So the statement in the test below would work.
+if 0 {
+ do_test malloc2-$tn.$::n.3 {
+ catchsql {SELECT 'nothing should work'}
+ } {1 {out of memory}}
+}
+
+ # Recover from the malloc failure.
+ #
+ # Update: The new malloc() failure handling means that a transaction may
+ # still be active even if a malloc() has failed. But when these tests were
+ # written this was not the case. So do a manual ROLLBACK here so that the
+ # tests pass.
+ do_test malloc2-$tn.$::n.4 {
+ sqlite3_global_recover
+ catch {
+ execsql {
+ ROLLBACK;
+ }
+ }
+ expr 0
+ } {0}
+
+ # Checksum the database.
+ do_test malloc2-$tn.$::n.5 {
+ cksum db
+ } $sum
+
+ integrity_check malloc2-$tn.$::n.6
+ if {$::nErr>1} return
+ }
+ unset ::mallocopts
+}
+
+do_test malloc2.1.setup {
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ INSERT INTO abc VALUES(10, 20, 30);
+ INSERT INTO abc VALUES(40, 50, 60);
+ CREATE INDEX abc_i ON abc(a, b, c);
+ }
+} {}
+do_malloc2_test 1.1 -sql {
+ SELECT * FROM abc;
+}
+do_malloc2_test 1.2 -sql {
+ UPDATE abc SET c = c+10;
+}
+do_malloc2_test 1.3 -sql {
+ INSERT INTO abc VALUES(70, 80, 90);
+}
+do_malloc2_test 1.4 -sql {
+ DELETE FROM abc;
+}
+do_test malloc2.1.5 {
+ execsql {
+ SELECT * FROM abc;
+ }
+} {}
+
+do_test malloc2.2.setup {
+ execsql {
+ CREATE TABLE def(a, b, c);
+ CREATE INDEX def_i1 ON def(a);
+ CREATE INDEX def_i2 ON def(c);
+ BEGIN;
+ }
+ for {set i 0} {$i<20} {incr i} {
+ execsql {
+ INSERT INTO def VALUES(randstr(300,300),randstr(300,300),randstr(300,300));
+ }
+ }
+ execsql {
+ COMMIT;
+ }
+} {}
+do_malloc2_test 2 -sql {
+ BEGIN;
+ UPDATE def SET a = randstr(100,100) WHERE (oid%9)==0;
+ INSERT INTO def SELECT * FROM def WHERE (oid%13)==0;
+
+ CREATE INDEX def_i3 ON def(b);
+
+ UPDATE def SET a = randstr(100,100) WHERE (oid%9)==1;
+ INSERT INTO def SELECT * FROM def WHERE (oid%13)==1;
+
+ CREATE TABLE def2 AS SELECT * FROM def;
+ DROP TABLE def;
+ CREATE TABLE def AS SELECT * FROM def2;
+ DROP TABLE def2;
+
+ DELETE FROM def WHERE (oid%9)==2;
+ INSERT INTO def SELECT * FROM def WHERE (oid%13)==2;
+ COMMIT;
+}
+
+ifcapable tempdb {
+ do_test malloc2.3.setup {
+ execsql {
+ CREATE TEMP TABLE ghi(a, b, c);
+ BEGIN;
+ }
+ for {set i 0} {$i<20} {incr i} {
+ execsql {
+ INSERT INTO ghi VALUES(randstr(300,300),randstr(300,300),randstr(300,300));
+ }
+ }
+ execsql {
+ COMMIT;
+ }
+ } {}
+ do_malloc2_test 3 -sql {
+ BEGIN;
+ CREATE INDEX ghi_i1 ON ghi(a);
+ UPDATE def SET a = randstr(100,100) WHERE (oid%2)==0;
+ UPDATE ghi SET a = randstr(100,100) WHERE (oid%2)==0;
+ COMMIT;
+ }
+}
+
+############################################################################
+# The test cases below are designed to increase the code coverage of btree.c
+# and pager.c provided by this test file. The idea is that each malloc() that
+# occurs in these two source files should be made to fail at least once.
+#
+catchsql {
+ DROP TABLE ghi;
+}
+do_malloc2_test 4.1 -sql {
+ SELECT * FROM def ORDER BY oid ASC;
+ SELECT * FROM def ORDER BY oid DESC;
+}
+do_malloc2_test 4.2 -sql {
+ PRAGMA cache_size = 10;
+ BEGIN;
+
+ -- This will put about 25 pages on the free list.
+ DELETE FROM def WHERE 1;
+
+ -- Allocate 32 new root pages. This will exercise the 'extract specific
+ -- page from the freelist' code when in auto-vacuum mode (see the
+ -- allocatePage() routine in btree.c).
+ CREATE TABLE t1(a UNIQUE, b UNIQUE, c UNIQUE);
+ CREATE TABLE t2(a UNIQUE, b UNIQUE, c UNIQUE);
+ CREATE TABLE t3(a UNIQUE, b UNIQUE, c UNIQUE);
+ CREATE TABLE t4(a UNIQUE, b UNIQUE, c UNIQUE);
+ CREATE TABLE t5(a UNIQUE, b UNIQUE, c UNIQUE);
+ CREATE TABLE t6(a UNIQUE, b UNIQUE, c UNIQUE);
+ CREATE TABLE t7(a UNIQUE, b UNIQUE, c UNIQUE);
+ CREATE TABLE t8(a UNIQUE, b UNIQUE, c UNIQUE);
+
+ ROLLBACK;
+}
+
+########################################################################
+# Test that the global linked list of database handles works. An assert()
+# will fail if there is some problem.
+do_test malloc2-5 {
+ sqlite3 db1 test.db
+ sqlite3 db2 test.db
+ sqlite3 db3 test.db
+ sqlite3 db4 test.db
+ sqlite3 db5 test.db
+
+ # Close the head of the list:
+ db5 close
+
+ # Close the end of the list:
+ db1 close
+
+ # Close a handle from the middle of the list:
+ db3 close
+
+ # Close the other two. Then open and close one more database, to make
+ # sure the head of the list was set back to NULL.
+ db2 close
+ db4 close
+ sqlite db1 test.db
+ db1 close
+} {}
+
+########################################################################
+# Check that if a statement is active sqlite3_global_recover doesn't reset
+# the sqlite3_malloc_failed variable.
+#
+# Update: There is now no sqlite3_malloc_failed variable, so these tests
+# are not run.
+#
+# do_test malloc2-6.1 {
+# set ::STMT [sqlite3_prepare $::DB {SELECT * FROM def} -1 DUMMY]
+# sqlite3_step $::STMT
+# } {SQLITE_ROW}
+# do_test malloc2-6.2 {
+# sqlite3 db1 test.db
+# sqlite_malloc_fail 100
+# catchsql {
+# SELECT * FROM def;
+# } db1
+# } {1 {out of memory}}
+# do_test malloc2-6.3 {
+# sqlite3_global_recover
+# } {SQLITE_BUSY}
+# do_test malloc2-6.4 {
+# catchsql {
+# SELECT 'hello';
+# }
+# } {1 {out of memory}}
+# do_test malloc2-6.5 {
+# sqlite3_reset $::STMT
+# } {SQLITE_OK}
+# do_test malloc2-6.6 {
+# sqlite3_global_recover
+# } {SQLITE_OK}
+# do_test malloc2-6.7 {
+# catchsql {
+# SELECT 'hello';
+# }
+# } {0 hello}
+# do_test malloc2-6.8 {
+# sqlite3_step $::STMT
+# } {SQLITE_ERROR}
+# do_test malloc2-6.9 {
+# sqlite3_finalize $::STMT
+# } {SQLITE_SCHEMA}
+# do_test malloc2-6.10 {
+# db1 close
+# } {}
+
+########################################################################
+# Check that if an in-memory database is being used it is not possible
+# to recover from a malloc() failure.
+#
+# Update: An in-memory database can now survive a malloc() failure, so these
+# tests are not run.
+#
+# ifcapable memorydb {
+# do_test malloc2-7.1 {
+# sqlite3 db1 :memory:
+# list
+# } {}
+# do_test malloc2-7.2 {
+# sqlite_malloc_fail 100
+# catchsql {
+# SELECT * FROM def;
+# }
+# } {1 {out of memory}}
+# do_test malloc2-7.3 {
+# sqlite3_global_recover
+# } {SQLITE_ERROR}
+# do_test malloc2-7.4 {
+# catchsql {
+# SELECT 'hello';
+# }
+# } {1 {out of memory}}
+# do_test malloc2-7.5 {
+# db1 close
+# } {}
+# do_test malloc2-7.6 {
+# sqlite3_global_recover
+# } {SQLITE_OK}
+# do_test malloc2-7.7 {
+# catchsql {
+# SELECT 'hello';
+# }
+# } {0 hello}
+# }
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/malloc3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/malloc3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,655 @@
+# 2005 November 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file contains tests to ensure that the library handles malloc() failures
+# correctly. The emphasis of these tests is on the _prepare(), _step() and
+# _finalize() calls.
+#
+# $Id: malloc3.test,v 1.9 2006/01/23 07:52:41 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if memory debugging is turned on.
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG..."
+ finish_test
+ return
+}
+
+#--------------------------------------------------------------------------
+# NOTES ON RECOVERING FROM A MALLOC FAILURE
+#
+# The tests in this file exercise the behaviours described in the following
+# paragraphs: the behaviour of the system when malloc() fails inside a call
+# to _prepare(), _step(), _finalize() or _reset(). The handling of malloc()
+# failures within ancillary procedures is tested elsewhere.
+#
+# Overview:
+#
+# Executing a statement is done in three stages (prepare, step and finalize). A
+# malloc() failure may occur within any stage. If a memory allocation fails
+# during statement preparation, no statement handle is returned. From the user's
+# point of view the system state is as if _prepare() had never been called.
+#
+# If the memory allocation fails during the _step() or _finalize() calls, then
+# the database may be left in one of two states (after finalize() has been
+# called):
+#
+# * As if neither _step() nor _finalize() had ever been called on
+# the statement handle (i.e. any changes made by the statement are
+# rolled back).
+# * The current transaction may be rolled back. In this case a hot-journal
+# may or may not actually be present in the filesystem.
+#
+# The caller can tell the difference between these two scenarios by invoking
+# _get_autocommit().
+#
+#
+# Handling of sqlite3_reset():
+#
+# If a malloc() fails while executing an sqlite3_reset() call, this is handled
+# in the same way as a failure within _finalize(). The statement handle
+# is not deleted and must be passed to _finalize() for resource deallocation.
+# Attempting to _step() or _reset() the statement after a failed _reset() will
+# always return SQLITE_NOMEM.
+#
+#
+# Other active SQL statements:
+#
+# A malloc failure also affects concurrently executing SQL statements,
+# particularly when a statement is executing with READ_UNCOMMITTED set and
+# the malloc() failure mandates statement rollback only. Currently, if
+# transaction rollback is required, all other vdbes are aborted.
+#
+# Non-transient mallocs in btree.c:
+# * The Btree structure itself
+# * Each BtCursor structure
+#
+# Mallocs in pager.c:
+# readMasterJournal() - Space to read the master journal name
+# pager_delmaster() - Space for the entire master journal file
+#
+# sqlite3pager_open() - The pager structure itself
+# sqlite3pager_get() - Space for a new page
+# pager_open_journal() - Pager.aInJournal[] bitmap
+# sqlite3pager_write() - For in-memory databases only: history page and
+# statement history page.
+# pager_stmt_begin() - Pager.aInStmt[] bitmap
+#
+# None of the above are a huge problem. The most troublesome failures are the
+# transient malloc() calls in btree.c, which can occur during the tree-balance
+# operation. This means the tree being balanced will be internally inconsistent
+# after the malloc() fails. To avoid the corrupt tree being read by a
+# READ_UNCOMMITTED query, we have to make sure the transaction or statement
+# rollback occurs before sqlite3_step() returns, not during a subsequent
+# sqlite3_finalize().
+#--------------------------------------------------------------------------
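+
+# The following guarded snippet is an editorial sketch (not part of the
+# original test file) of the caller-side protocol described above, assuming
+# an open transaction and an existing table abc: after a failed _step() or
+# _finalize(), sqlite3_get_autocommit() tells the caller whether only the
+# statement or the whole transaction was rolled back. The Tcl wrapper
+# commands are the same ones used elsewhere in this file.
+if 0 {
+  set stmt [sqlite3_prepare $::DB {INSERT INTO abc VALUES(1, 2, 3)} -1 TAIL]
+  set rc [sqlite3_step $stmt]
+  sqlite3_finalize $stmt
+  if {$rc != "SQLITE_DONE"} {
+    if {[sqlite3_get_autocommit $::DB]} {
+      # The failure forced a rollback of the whole transaction.
+    } else {
+      # Only the failed statement was rolled back; the enclosing
+      # transaction is still open.
+    }
+  }
+}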
+
+#--------------------------------------------------------------------------
+# NOTES ON TEST IMPLEMENTATION
+#
+# The tests in this file are implemented differently from those in other
+# files. Instead, tests are specified using three primitives: SQL, PREP and
+# TEST. Each primitive has a single argument. Primitives are processed in
+# the order they are specified in the file.
+#
+# A TEST primitive specifies a TCL script as its argument. When a TEST
+# directive is encountered the Tcl script is evaluated. Usually, this Tcl
+# script contains one or more calls to [do_test].
+#
+# A PREP primitive specifies an SQL script as its argument. When a PREP
+# directive is encountered the SQL is evaluated using database connection
+# [db].
+#
+# The SQL primitives are where the action happens. An SQL primitive must
+# contain a single, valid SQL statement as its argument. When an SQL
+# primitive is encountered, it is evaluated one or more times to test the
+# behaviour of the system when malloc() fails during preparation or
+# execution of said statement. The Nth time the statement is executed,
+# the Nth malloc is said to fail. The statement is executed until it
+# succeeds, i.e. (M+1) times, where M is the number of mallocs() required
+# to prepare and execute the statement.
+#
+# Each time an SQL statement fails, the driver program (see proc [run_test]
+# below) figures out if a transaction has been automatically rolled back.
+# If not, it executes any TEST block immediately preceding the SQL
+# statement, then reexecutes the SQL statement with the next value of N.
+#
+# If a transaction has been automatically rolled back, then the driver
+# program executes all the SQL specified as part of SQL or PREP primitives
+# between the current SQL statement and the most recent "BEGIN". Any
+# TEST block immediately preceding the SQL statement is evaluated, and
+# then the SQL statement reexecuted with the incremented N value.
+#
+# Does that make sense? If not, read the code in [run_test] and it might.
+#
+# Extra restrictions imposed by the implementation:
+#
+# * If a PREP block starts a transaction, it must finish it.
+# * A PREP block may not close a transaction it did not start.
+#
+#--------------------------------------------------------------------------
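+
+# For orientation, a tiny hypothetical program in this notation might look
+# like the following (an editorial sketch, not part of the original file;
+# the table name "ex" is invented):
+#
+#   PREP { CREATE TABLE ex(a); }
+#   TEST 1 { do_test $testid { execsql {SELECT count(*) FROM ex} } {0} }
+#   SQL  { INSERT INTO ex VALUES(1); }
+#   TEST 2 { do_test $testid { execsql {SELECT count(*) FROM ex} } {1} }
+#
+# TEST 1 is re-evaluated before every retry of the INSERT (so it must also
+# hold after a failed, rolled-back attempt), while TEST 2 runs once the
+# INSERT finally succeeds.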
+
+
+# These procs are used to build up a "program" in global variable
+# ::run_test_script. At the end of this file, the proc [run_test] is used
+# to execute the program (and all test cases contained therein).
+#
+set ::run_test_script [list]
+proc TEST {id t} {lappend ::run_test_script -test [list $id $t]}
+proc PREP {p} {lappend ::run_test_script -prep [string trim $p]}
+
+# SQL --
+#
+# SQL ?-norollback? <sql-text>
+#
+# Add an 'SQL' primitive to the program (see notes above). If the -norollback
+# switch is present, then the statement is not allowed to automatically roll
+# back any active transaction if malloc() fails. It must roll back the statement
+# transaction only.
+#
+proc SQL {a1 {a2 ""}} {
+ # An SQL primitive parameter is a list of two elements, a boolean value
+ # indicating if the statement may cause transaction rollback when malloc()
+ # fails, and the sql statement itself.
+ if {$a2 == ""} {
+ lappend ::run_test_script -sql [list true [string trim $a1]]
+ } else {
+ lappend ::run_test_script -sql [list false [string trim $a2]]
+ }
+}
+
+# TEST_AUTOCOMMIT --
+#
+# A shorthand test to see if a transaction is active or not. The first
+# argument - $id - is the integer number of the test case. The second
+# argument is either 1 or 0, the expected value of the auto-commit flag.
+#
+proc TEST_AUTOCOMMIT {id a} {
+ TEST $id "do_test \$testid { sqlite3_get_autocommit $::DB } {$a}"
+}
+
+#--------------------------------------------------------------------------
+# Start of test program declaration
+#
+
+
+# Warm body test. A malloc() fails in the middle of a CREATE TABLE statement
+# in a single-statement transaction on an empty database. Not too much can go
+# wrong here.
+#
+TEST 1 {
+ do_test $testid {
+ execsql {SELECT tbl_name FROM sqlite_master;}
+ } {}
+}
+SQL {
+ CREATE TABLE abc(a, b, c);
+}
+TEST 2 {
+ do_test $testid.1 {
+ execsql {SELECT tbl_name FROM sqlite_master;}
+ } {abc}
+}
+
+# Insert a couple of rows into the table. Each insert is in its own
+# transaction. Test that the table is unpopulated before running the inserts
+# (and hence after each failure of the first insert), and that it has been
+# populated correctly after the final insert succeeds.
+#
+TEST 3 {
+ do_test $testid.2 {
+ execsql {SELECT * FROM abc}
+ } {}
+}
+SQL {INSERT INTO abc VALUES(1, 2, 3);}
+SQL {INSERT INTO abc VALUES(4, 5, 6);}
+SQL {INSERT INTO abc VALUES(7, 8, 9);}
+TEST 4 {
+ do_test $testid {
+ execsql {SELECT * FROM abc}
+ } {1 2 3 4 5 6 7 8 9}
+}
+
+# Test a CREATE INDEX statement. Because the table 'abc' is so small, the index
+# will fit entirely on a single page, so this doesn't test much that the CREATE
+# TABLE statement didn't already test - a few of the transient malloc()s in
+# btree.c, perhaps.
+#
+SQL {CREATE INDEX abc_i ON abc(a, b, c);}
+TEST 4 {
+ do_test $testid {
+ execsql {
+ SELECT * FROM abc ORDER BY a DESC;
+ }
+ } {7 8 9 4 5 6 1 2 3}
+}
+
+# Test a DELETE statement. Also create a trigger and a view, just to make sure
+# these statements don't have any obvious malloc() related bugs in them. Note
+# that the test above will be executed each time the DELETE fails, so we're
+# also testing rollback of a DELETE from a table with an index on it.
+#
+SQL {DELETE FROM abc WHERE a > 2;}
+SQL {CREATE TRIGGER abc_t AFTER INSERT ON abc BEGIN SELECT 'trigger!'; END;}
+SQL {CREATE VIEW abc_v AS SELECT * FROM abc;}
+TEST 5 {
+ do_test $testid {
+ execsql {
+ SELECT name, tbl_name FROM sqlite_master ORDER BY name;
+ SELECT * FROM abc;
+ }
+ } {abc abc abc_i abc abc_t abc abc_v abc_v 1 2 3}
+}
+
+set sql {
+ BEGIN;DELETE FROM abc;
+}
+for {set i 1} {$i < 100} {incr i} {
+ set a $i
+ set b "String value $i"
+ set c [string repeat X $i]
+ append sql "INSERT INTO abc VALUES ($a, '$b', '$c');"
+}
+append sql {COMMIT;}
+PREP $sql
+
+SQL {
+ DELETE FROM abc WHERE oid IN (SELECT oid FROM abc ORDER BY random() LIMIT 5);
+}
+TEST 6 {
+ do_test $testid.1 {
+ execsql {SELECT count(*) FROM abc}
+ } {94}
+ do_test $testid.2 {
+ execsql {
+ SELECT min(
+ (oid == a) AND 'String value ' || a == b AND a == length(c)
+ ) FROM abc;
+ }
+ } {1}
+}
+SQL {
+ DELETE FROM abc WHERE oid IN (SELECT oid FROM abc ORDER BY random() LIMIT 5);
+}
+TEST 7 {
+ do_test $testid {
+ execsql {SELECT count(*) FROM abc}
+ } {89}
+ do_test $testid {
+ execsql {
+ SELECT min(
+ (oid == a) AND 'String value ' || a == b AND a == length(c)
+ ) FROM abc;
+ }
+ } {1}
+}
+SQL {
+ DELETE FROM abc WHERE oid IN (SELECT oid FROM abc ORDER BY random() LIMIT 5);
+}
+TEST 9 {
+ do_test $testid {
+ execsql {SELECT count(*) FROM abc}
+ } {84}
+ do_test $testid {
+ execsql {
+ SELECT min(
+ (oid == a) AND 'String value ' || a == b AND a == length(c)
+ ) FROM abc;
+ }
+ } {1}
+}
+
+set padding [string repeat X 500]
+PREP [subst {
+ DROP TABLE abc;
+ CREATE TABLE abc(a PRIMARY KEY, padding, b, c);
+ INSERT INTO abc VALUES(0, '$padding', 2, 2);
+ INSERT INTO abc VALUES(3, '$padding', 5, 5);
+ INSERT INTO abc VALUES(6, '$padding', 8, 8);
+}]
+
+TEST 10 {
+ do_test $testid {
+ execsql {SELECT a, b, c FROM abc}
+ } {0 2 2 3 5 5 6 8 8}
+}
+
+SQL {BEGIN;}
+SQL {INSERT INTO abc VALUES(9, 'XXXXX', 11, 12);}
+TEST_AUTOCOMMIT 11 0
+SQL -norollback {UPDATE abc SET a = a + 1, c = c + 1;}
+TEST_AUTOCOMMIT 12 0
+SQL {DELETE FROM abc WHERE a = 10;}
+TEST_AUTOCOMMIT 13 0
+SQL {COMMIT;}
+
+TEST 14 {
+ do_test $testid.1 {
+ sqlite3_get_autocommit $::DB
+ } {1}
+ do_test $testid.2 {
+ execsql {SELECT a, b, c FROM abc}
+ } {1 2 3 4 5 6 7 8 9}
+}
+
+PREP [subst {
+ DROP TABLE abc;
+ CREATE TABLE abc(a, padding, b, c);
+ INSERT INTO abc VALUES(1, '$padding', 2, 3);
+ INSERT INTO abc VALUES(4, '$padding', 5, 6);
+ INSERT INTO abc VALUES(7, '$padding', 8, 9);
+ CREATE INDEX abc_i ON abc(a, padding, b, c);
+}]
+
+TEST 15 {
+ db eval {PRAGMA cache_size = 10}
+}
+
+SQL {BEGIN;}
+SQL -norollback {INSERT INTO abc (oid, a, padding, b, c) SELECT NULL, * FROM abc}
+TEST 16 {
+ do_test $testid {
+ execsql {SELECT a, count(*) FROM abc GROUP BY a;}
+ } {1 2 4 2 7 2}
+}
+SQL -norollback {INSERT INTO abc (oid, a, padding, b, c) SELECT NULL, * FROM abc}
+TEST 17 {
+ do_test $testid {
+ execsql {SELECT a, count(*) FROM abc GROUP BY a;}
+ } {1 4 4 4 7 4}
+}
+SQL -norollback {INSERT INTO abc (oid, a, padding, b, c) SELECT NULL, * FROM abc}
+TEST 18 {
+ do_test $testid {
+ execsql {SELECT a, count(*) FROM abc GROUP BY a;}
+ } {1 8 4 8 7 8}
+}
+SQL -norollback {INSERT INTO abc (oid, a, padding, b, c) SELECT NULL, * FROM abc}
+TEST 19 {
+ do_test $testid {
+ execsql {SELECT a, count(*) FROM abc GROUP BY a;}
+ } {1 16 4 16 7 16}
+}
+SQL {COMMIT;}
+TEST 21 {
+ do_test $testid {
+ execsql {SELECT a, count(*) FROM abc GROUP BY a;}
+ } {1 16 4 16 7 16}
+}
+
+SQL {BEGIN;}
+SQL {DELETE FROM abc WHERE oid %2}
+TEST 22 {
+ do_test $testid {
+ execsql {SELECT a, count(*) FROM abc GROUP BY a;}
+ } {1 8 4 8 7 8}
+}
+SQL {DELETE FROM abc}
+TEST 23 {
+ do_test $testid {
+ execsql {SELECT * FROM abc}
+ } {}
+}
+SQL {ROLLBACK;}
+TEST 24 {
+ do_test $testid {
+ execsql {SELECT a, count(*) FROM abc GROUP BY a;}
+ } {1 16 4 16 7 16}
+}
+
+# Test some schema modifications inside of a transaction. These should all
+# cause transaction rollback if they fail. Also query a view, to cover a bit
+# more code.
+#
+PREP {DROP VIEW abc_v;}
+TEST 25 {
+ do_test $testid {
+ execsql {
+ SELECT name, tbl_name FROM sqlite_master;
+ }
+ } {abc abc abc_i abc}
+}
+SQL {BEGIN;}
+SQL {CREATE TABLE def(d, e, f);}
+SQL {CREATE TABLE ghi(g, h, i);}
+TEST 26 {
+ do_test $testid {
+ execsql {
+ SELECT name, tbl_name FROM sqlite_master;
+ }
+ } {abc abc abc_i abc def def ghi ghi}
+}
+SQL {CREATE VIEW v1 AS SELECT * FROM def, ghi}
+SQL {CREATE UNIQUE INDEX ghi_i1 ON ghi(g);}
+TEST 27 {
+ do_test $testid {
+ execsql {
+ SELECT name, tbl_name FROM sqlite_master;
+ }
+ } {abc abc abc_i abc def def ghi ghi v1 v1 ghi_i1 ghi}
+}
+SQL {INSERT INTO def VALUES('a', 'b', 'c')}
+SQL {INSERT INTO def VALUES(1, 2, 3)}
+SQL -norollback {INSERT INTO ghi SELECT * FROM def}
+TEST 28 {
+ do_test $testid {
+ execsql {
+ SELECT * FROM def, ghi WHERE d = g;
+ }
+ } {a b c a b c 1 2 3 1 2 3}
+}
+SQL {COMMIT}
+TEST 29 {
+ do_test $testid {
+ execsql {
+ SELECT * FROM v1 WHERE d = g;
+ }
+ } {a b c a b c 1 2 3 1 2 3}
+}
+
+# Test a simple multi-file transaction
+#
+file delete -force test2.db
+SQL {ATTACH 'test2.db' AS aux;}
+SQL {BEGIN}
+SQL {CREATE TABLE aux.tbl2(x, y, z)}
+SQL {INSERT INTO tbl2 VALUES(1, 2, 3)}
+SQL {INSERT INTO def VALUES(4, 5, 6)}
+TEST 30 {
+ do_test $testid {
+ execsql {
+ SELECT * FROM tbl2, def WHERE d = x;
+ }
+ } {1 2 3 1 2 3}
+}
+SQL {COMMIT}
+TEST 31 {
+ do_test $testid {
+ execsql {
+ SELECT * FROM tbl2, def WHERE d = x;
+ }
+ } {1 2 3 1 2 3}
+}
+
+# Test what happens when a malloc() fails while there are other active
+# statements. This changes the way sqlite3VdbeHalt() works.
+TEST 32 {
+ if {![info exists ::STMT32]} {
+ set sql "SELECT name FROM sqlite_master"
+ set ::STMT32 [sqlite3_prepare $::DB $sql -1 DUMMY]
+ do_test $testid {
+ sqlite3_step $::STMT32
+ } {SQLITE_ROW}
+ }
+}
+SQL BEGIN
+TEST 33 {
+ do_test $testid {
+ execsql {SELECT * FROM ghi}
+ } {a b c 1 2 3}
+}
+SQL -norollback {
+ -- There is a unique index on ghi(g), so this statement may not cause
+ -- an automatic ROLLBACK. Hence the "-norollback" switch.
+ INSERT INTO ghi SELECT '2'||g, h, i FROM ghi;
+}
+TEST 34 {
+ if {[info exists ::STMT32]} {
+ do_test $testid {
+ sqlite3_finalize $::STMT32
+ } {SQLITE_OK}
+ unset ::STMT32
+ }
+}
+SQL COMMIT
+
+#
+# End of test program declaration
+#--------------------------------------------------------------------------
+
+proc run_test {arglist {pcstart 0} {iFailStart 1}} {
+ if {[llength $arglist] %2} {
+ error "Uneven number of arguments to TEST"
+ }
+
+ for {set i 0} {$i < $pcstart} {incr i} {
+ set k2 [lindex $arglist [expr 2 * $i]]
+ set v2 [lindex $arglist [expr 2 * $i + 1]]
+ set ac [sqlite3_get_autocommit $::DB] ;# Auto-Commit
+# puts "STARTUP"
+ switch -- $k2 {
+ -sql {db eval [lindex $v2 1]}
+ -prep {db eval $v2}
+ }
+ set nac [sqlite3_get_autocommit $::DB] ;# New Auto-Commit
+ if {$ac && !$nac} {set begin_pc $i}
+ }
+
+ db rollback_hook [list incr ::rollback_hook_count]
+
+ set iFail $iFailStart
+ set pc $pcstart
+ while {$pc*2 < [llength $arglist]} {
+
+ # Id of this iteration:
+ set iterid "(pc $pc).(iFail $iFail)"
+
+ set k [lindex $arglist [expr 2 * $pc]]
+ set v [lindex $arglist [expr 2 * $pc + 1]]
+
+ switch -- $k {
+
+ -test {
+ foreach {id script} $v {}
+ set testid "malloc3-(test $id).$iterid"
+ eval $script
+ incr pc
+ }
+
+ -sql {
+ set ::rollback_hook_count 0
+
+ set ac [sqlite3_get_autocommit $::DB] ;# Auto-Commit
+ sqlite_malloc_fail $iFail
+# puts "SQL $iterid [lindex $v 1]"
+ set rc [catch {db eval [lindex $v 1]} msg] ;# True error occurs
+# puts "rc = $rc msg = \"$msg\""
+ set nac [sqlite3_get_autocommit $::DB] ;# New Auto-Commit
+
+
+ if {$rc != 0 && $nac && !$ac} {
+ # Before [db eval] the auto-commit flag was clear. Now it
+ # is set. Since an error occurred we assume this was not a
+ # commit - therefore a rollback occurred. Check that the
+ # rollback-hook was invoked.
+ do_test malloc3-rollback_hook.$iterid {
+ set ::rollback_hook_count
+ } {1}
+ }
+
+ if {$rc == 0} {
+ # Successful execution of sql. Our "mallocs-until-failure"
+ # count should be greater than 0. Otherwise a malloc() failed
+ # and the error was not reported.
+ if {[lindex [sqlite_malloc_stat] 2] <= 0} {
+ error "Unreported malloc() failure"
+ }
+
+ if {$ac && !$nac} {
+ # Before the [db eval] the auto-commit flag was set, now it
+ # is clear. We can deduce that a "BEGIN" statement has just
+ # been successfully executed.
+ set begin_pc $pc
+ }
+
+ incr pc
+ set iFail 1
+ sqlite_malloc_fail 0
+ integrity_check "malloc3-(integrity).$iterid"
+ } elseif {[regexp {.*out of memory} $msg]} {
+ # Out of memory error, as expected
+ integrity_check "malloc3-(integrity).$iterid"
+ incr iFail
+ if {$nac && !$ac} {
+
+ if {![lindex $v 0]} {
+ error "Statement \"[lindex $v 1]\" caused a rollback"
+ }
+
+# puts "Statement \"[lindex $v 1]\" caused a rollback"
+ for {set i $begin_pc} {$i < $pc} {incr i} {
+ set k2 [lindex $arglist [expr 2 * $i]]
+ set v2 [lindex $arglist [expr 2 * $i + 1]]
+ set catchupsql ""
+ switch -- $k2 {
+ -sql {set catchupsql [lindex $v2 1]}
+ -prep {set catchupsql $v2}
+ }
+# puts "CATCHUP $iterid $i $catchupsql"
+ db eval $catchupsql
+ }
+ }
+ } else {
+ error $msg
+ }
+
+ while {[lindex $arglist [expr 2 * ($pc -1)]] == "-test"} {
+ incr pc -1
+ }
+ }
+
+ -prep {
+# puts "PREP $iterid $v"
+ db eval $v
+ incr pc
+ }
+
+ default { error "Unknown switch: $k" }
+ }
+# if {$iFail > ($iFailStart+1)} return
+ }
+}
+
+# Turn off the Tcl interface's prepared statement caching facility.
+db cache size 0
+
+run_test $::run_test_script 9 1
+# run_test [lrange $::run_test_script 0 3] 0 63
+sqlite_malloc_fail 0
+db close
+
+pp_check_for_leaks
+
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/malloc4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/malloc4.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,194 @@
+# 2005 November 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file contains tests to ensure that the library handles malloc() failures
+# correctly. The emphasis in this file is on the sqlite3_column_XXX() APIs.
+#
+# $Id: malloc4.test,v 1.3 2006/01/23 07:52:41 danielk1977 Exp $
+
+#---------------------------------------------------------------------------
+# NOTES ON EXPECTED BEHAVIOUR
+#
+# [193] When a memory allocation failure occurs during sqlite3_column_name(),
+# sqlite3_column_name16(), sqlite3_column_decltype(), or
+# sqlite3_column_decltype16() the function shall return NULL.
+#
+#---------------------------------------------------------------------------
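+
+# As an editorial illustration (not part of the original file) of the rule
+# above, using the same Tcl wrappers this script relies on: a NULL return
+# from sqlite3_column_name() surfaces in Tcl as an empty string.
+#
+#   set ::STMT [sqlite3_prepare $::DB {SELECT 1 AS c} -1 TAIL]
+#   sqlite_malloc_fail 1                       ;# make the next malloc() fail
+#   set name [sqlite3_column_name $::STMT 0]   ;# "" if the allocation failed
+#   sqlite_malloc_fail 0
+#   sqlite3_finalize $::STMT
+#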
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if memory debugging is turned on.
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG..."
+ finish_test
+ return
+}
+
+ifcapable !utf16 {
+ finish_test
+ return
+}
+
+proc do_stmt_test {id sql} {
+ set ::sql $sql
+ set go 1
+ for {set n 1} {$go} {incr n} {
+ set testid "malloc4-$id.(iFail $n)"
+
+ # Prepare the statement
+ do_test ${testid}.1 {
+ set ::STMT [sqlite3_prepare $::DB $sql -1 TAIL]
+ expr [string length $::STMT] > 0
+ } {1}
+
+ # Set the Nth malloc() to fail.
+ sqlite_malloc_fail $n
+
+ # Test malloc failure in the _name(), _name16(), decltype() and
+ # decltype16() APIs. Calls that occur after the malloc() failure should
+ # return NULL. No error is raised though.
+ #
+ # ${testid}.2.1 - Call _name()
+ # ${testid}.2.2 - Call _name16()
+ # ${testid}.2.3 - Call _name()
+ # ${testid}.2.4 - Check that the return values of the above three calls are
+ # consistent with each other and with the simulated
+ # malloc() failures.
+ #
+ # Because the code that implements the _decltype() and _decltype16() APIs
+ # is the same as the _name() and _name16() implementations, we don't worry
+ # about explicitly testing them.
+ #
+ do_test ${testid}.2.1 {
+ set mf1 [expr [lindex [sqlite_malloc_stat] 2] <= 0]
+ set ::name8 [sqlite3_column_name $::STMT 0]
+ set mf2 [expr [lindex [sqlite_malloc_stat] 2] <= 0]
+ expr {$mf1 == $mf2 || $::name8 == ""}
+ } {1}
+ do_test ${testid}.2.2 {
+ set mf1 [expr [lindex [sqlite_malloc_stat] 2] <= 0]
+ set ::name16 [sqlite3_column_name16 $::STMT 0]
+ set ::name16 [encoding convertfrom unicode $::name16]
+ set ::name16 [string range $::name16 0 end-1]
+ set mf2 [expr [lindex [sqlite_malloc_stat] 2] <= 0]
+ expr {$mf1 == $mf2 || $::name16 == ""}
+ } {1}
+ do_test ${testid}.2.3 {
+ set mf1 [expr [lindex [sqlite_malloc_stat] 2] <= 0]
+ set ::name8_2 [sqlite3_column_name $::STMT 0]
+ set mf2 [expr [lindex [sqlite_malloc_stat] 2] <= 0]
+ expr {$mf1 == $mf2 || $::name8_2 == ""}
+ } {1}
+ set ::mallocFailed [expr [lindex [sqlite_malloc_stat] 2] <= 0]
+ do_test ${testid}.2.4 {
+ expr {
+ $::name8 == $::name8_2 && $::name16 == $::name8 && !$::mallocFailed ||
+ $::name8 == $::name8_2 && $::name16 == "" && $::mallocFailed ||
+ $::name8 == $::name16 && $::name8_2 == "" && $::mallocFailed ||
+ $::name8_2 == $::name16 && $::name8 == "" && $::mallocFailed
+ }
+ } {1}
+
+ # Step the statement so that we can call _text() and _text16(). Before
+ # running sqlite3_step(), make sure that malloc() is not about to fail.
+ # Memory allocation failures that occur within sqlite3_step() are tested
+ # elsewhere.
+ set mf [lindex [sqlite_malloc_stat] 2]
+ sqlite_malloc_fail 0
+ do_test ${testid}.3 {
+ sqlite3_step $::STMT
+ } {SQLITE_ROW}
+ sqlite_malloc_fail $mf
+
+ # Test for malloc() failures within _text() and _text16().
+ #
+ do_test ${testid}.4.1 {
+ set ::text8 [sqlite3_column_text $::STMT 0]
+ set mf [expr [lindex [sqlite_malloc_stat] 2] <= 0 && !$::mallocFailed]
+ expr {$mf==0 || $::text8 == ""}
+ } {1}
+ do_test ${testid}.4.2 {
+ set ::text16 [sqlite3_column_text16 $::STMT 0]
+ set ::text16 [encoding convertfrom unicode $::text16]
+ set ::text16 [string range $::text16 0 end-1]
+ set mf [expr [lindex [sqlite_malloc_stat] 2] <= 0 && !$::mallocFailed]
+ expr {$mf==0 || $::text16 == ""}
+ } {1}
+ do_test ${testid}.4.3 {
+ set ::text8_2 [sqlite3_column_text $::STMT 0]
+ set mf [expr [lindex [sqlite_malloc_stat] 2] <= 0 && !$::mallocFailed]
+ expr {$mf==0 || $::text8_2 == "" || ($::text16 == "" && $::text8 != "")}
+ } {1}
+
+ # Test for malloc() failures within _int(), _int64() and _real(). The only
+ # way this can occur is if the string has to be translated from UTF-16 to
+ # UTF-8 before being converted to a numeric value.
+ do_test ${testid}.4.4.1 {
+ set mf [lindex [sqlite_malloc_stat] 2]
+ sqlite_malloc_fail 0
+ sqlite3_column_text16 $::STMT 0
+ sqlite_malloc_fail $mf
+ sqlite3_column_int $::STMT 0
+ } {0}
+ do_test ${testid}.4.5 {
+ set mf [lindex [sqlite_malloc_stat] 2]
+ sqlite_malloc_fail 0
+ sqlite3_column_text16 $::STMT 0
+ sqlite_malloc_fail $mf
+ sqlite3_column_int64 $::STMT 0
+ } {0}
+
+ do_test ${testid}.4.6 {
+ set mf [lindex [sqlite_malloc_stat] 2]
+ sqlite_malloc_fail 0
+ sqlite3_column_text16 $::STMT 0
+ sqlite_malloc_fail $mf
+ sqlite3_column_double $::STMT 0
+ } {0.0}
+
+ set mallocFailedAfterStep [expr \
+ [lindex [sqlite_malloc_stat] 2] <= 0 && !$::mallocFailed
+ ]
+
+ sqlite_malloc_fail 0
+ # Test that if a malloc() failed the next call to sqlite3_step() returns
+ # SQLITE_ERROR. If malloc() did not fail, it should return SQLITE_DONE.
+ #
+ do_test ${testid}.5 {
+ sqlite3_step $::STMT
+ } [expr {$mallocFailedAfterStep ? "SQLITE_ERROR" : "SQLITE_DONE"}]
+
+ do_test ${testid}.6 {
+ sqlite3_finalize $::STMT
+ } [expr {$mallocFailedAfterStep ? "SQLITE_NOMEM" : "SQLITE_OK"}]
+
+ if {$::mallocFailed == 0 && $mallocFailedAfterStep == 0} {
+ sqlite_malloc_fail 0
+ set go 0
+ }
+ }
+}
+
+execsql {
+ CREATE TABLE tbl(
+ the_first_reasonably_long_column_name that_also_has_quite_a_lengthy_type
+ );
+ INSERT INTO tbl VALUES(
+ 'An extra long string. Far too long to be stored in NBFS bytes.'
+ );
+}
+
+do_stmt_test 1 "SELECT * FROM tbl"
+
+sqlite_malloc_fail 0
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/malloc5.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/malloc5.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,223 @@
+# 2005 November 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file contains test cases focused on the two memory-management APIs,
+# sqlite3_soft_heap_limit() and sqlite3_release_memory().
+#
+# $Id: malloc5.test,v 1.7 2006/01/19 08:43:32 danielk1977 Exp $
+
+#---------------------------------------------------------------------------
+# NOTES ON EXPECTED BEHAVIOUR
+#
+#---------------------------------------------------------------------------
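+
+# A brief editorial usage sketch of the two APIs under test (not part of the
+# original file; the byte counts are arbitrary illustration values):
+#
+#   set prev [sqlite3_soft_heap_limit -1]      ;# query the current limit
+#   sqlite3_soft_heap_limit 100000             ;# cap the heap at roughly 100KB
+#   set freed [sqlite3_release_memory 500]     ;# ask to reclaim ~500 bytes
+#   sqlite3_soft_heap_limit $prev              ;# restore the previous limit
+#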
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+db close
+
+# Only run these tests if memory debugging is turned on.
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG..."
+ finish_test
+ return
+}
+
+# Skip these tests if OMIT_MEMORY_MANAGEMENT was defined at compile time.
+ifcapable !memorymanage {
+ finish_test
+ return
+}
+
+sqlite3 db test.db
+
+do_test malloc5-1.1 {
+ # Simplest possible test. Call sqlite3_release_memory when there is exactly
+ # one unused page in a single pager cache. This test case sets the
+ # value of the ::pgalloc variable, which is used in subsequent tests.
+ #
+ # Note: Even though executing this statement on an empty database
+ # modifies 2 pages (the root of sqlite_master and the new root page),
+ # the sqlite_master root (page 1) is never freed because the btree layer
+ # retains a reference to it for the entire transaction.
+ execsql {
+ BEGIN;
+ CREATE TABLE abc(a, b, c);
+ }
+ set ::pgalloc [sqlite3_release_memory]
+ expr $::pgalloc > 0
+} {1}
+do_test malloc5-1.2 {
+ # Test that the transaction started in the above test is still active.
+ # Because the page freed had been written to, freeing it required a
+ # journal sync and exclusive lock on the database file. Test that the file
+ # appears to be locked.
+ sqlite3 db2 test.db
+ catchsql {
+ SELECT * FROM abc;
+ } db2
+} {1 {database is locked}}
+do_test malloc5-1.3 {
+ # Again call [sqlite3_release_memory] when there is exactly one unused page
+ # in the cache. The same amount of memory is required, but no journal-sync
+ # or exclusive lock should be established.
+ execsql {
+ COMMIT;
+ BEGIN;
+ SELECT * FROM abc;
+ }
+ sqlite3_release_memory
+} $::pgalloc
+do_test malloc5-1.4 {
+ # Database should not be locked this time.
+ catchsql {
+ SELECT * FROM abc;
+ } db2
+} {0 {}}
+do_test malloc5-1.5 {
+ # Manipulate the cache so that it contains two unused pages. One requires
+ # a journal-sync to free, the other does not.
+ execsql {
+ SELECT * FROM abc;
+ CREATE TABLE def(d, e, f);
+ }
+ sqlite3_release_memory 500
+} $::pgalloc
+do_test malloc5-1.6 {
+ # Database should not be locked this time. The above test case only
+ # requested 500 bytes of memory, which can be obtained by freeing the page
+ # that does not require an fsync().
+ catchsql {
+ SELECT * FROM abc;
+ } db2
+} {0 {}}
+do_test malloc5-1.7 {
+ # Release another 500 bytes of memory. This time we require a sync(),
+ # so the database file will be locked afterwards.
+ sqlite3_release_memory 500
+} $::pgalloc
+do_test malloc5-1.8 {
+ catchsql {
+ SELECT * FROM abc;
+ } db2
+} {1 {database is locked}}
+do_test malloc5-1.9 {
+ execsql {
+ COMMIT;
+ }
+} {}
+
+do_test malloc5-2.1 {
+ # Put some data in tables abc and def. Both tables are still wholly
+ # contained within their root pages.
+ execsql {
+ INSERT INTO abc VALUES(1, 2, 3);
+ INSERT INTO abc VALUES(4, 5, 6);
+ INSERT INTO def VALUES(7, 8, 9);
+ INSERT INTO def VALUES(10,11,12);
+ }
+} {}
+do_test malloc5-2.2 {
+ # Load the root-page for table def into the cache. Then query table abc.
+ # Halfway through the query call sqlite3_release_memory(). The goal of this
+ # test is to make sure we don't free pages that are in use (specifically,
+ # the root of table abc).
+ set nRelease 0
+ execsql {
+ BEGIN;
+ SELECT * FROM def;
+ }
+ set data [list]
+ db eval {SELECT * FROM abc} {
+ incr nRelease [sqlite3_release_memory]
+ lappend data $a $b $c
+ }
+ execsql {
+ COMMIT;
+ }
+ list $nRelease $data
+} [list $pgalloc [list 1 2 3 4 5 6]]
+
+do_test malloc5-3.1 {
+ # Simple test to show that if two pagers are opened from within this
+ # thread, memory is freed from both when sqlite3_release_memory() is
+ # called.
+ execsql {
+ BEGIN;
+ SELECT * FROM abc;
+ }
+ execsql {
+ SELECT * FROM sqlite_master;
+ BEGIN;
+ SELECT * FROM def;
+ } db2
+ sqlite3_release_memory
+} [expr $::pgalloc * 2]
+do_test malloc5-3.2 {
+ concat \
+ [execsql {SELECT * FROM abc; COMMIT}] \
+ [execsql {SELECT * FROM def; COMMIT} db2]
+} {1 2 3 4 5 6 7 8 9 10 11 12}
+
+db2 close
+sqlite_malloc_outstanding -clearmaxbytes
+
+# The following two test cases each execute a transaction in which
+# 10000 rows are inserted into table abc. The first test case is used
+# to ensure that more than 1MB of dynamic memory is used to perform
+# the transaction.
+#
+# The second test case sets the "soft-heap-limit" to 100,000 bytes (0.1 MB)
+# and tests to see that this limit is not exceeded at any point during
+# transaction execution.
+#
+# Before executing malloc5-4.* we save the value of the current soft heap
+# limit in variable ::soft_limit. The original value is restored after
+# running the tests.
+#
+set ::soft_limit [sqlite3_soft_heap_limit -1]
+do_test malloc5-4.1 {
+ execsql {BEGIN;}
+ execsql {DELETE FROM abc;}
+ for {set i 0} {$i < 10000} {incr i} {
+ execsql "INSERT INTO abc VALUES($i, $i, '[string repeat X 100]');"
+ }
+ execsql {COMMIT;}
+ set ::nMaxBytes [sqlite_malloc_outstanding -maxbytes]
+ if {$::nMaxBytes==""} {set ::nMaxBytes 1000001}
+ expr $::nMaxBytes > 1000000
+} {1}
+do_test malloc5-4.2 {
+ sqlite3_release_memory
+ sqlite_malloc_outstanding -clearmaxbytes
+ sqlite3_soft_heap_limit 100000
+ execsql {BEGIN;}
+ for {set i 0} {$i < 10000} {incr i} {
+ execsql "INSERT INTO abc VALUES($i, $i, '[string repeat X 100]');"
+ }
+ execsql {COMMIT;}
+ set ::nMaxBytes [sqlite_malloc_outstanding -maxbytes]
+ if {$::nMaxBytes==""} {set ::nMaxBytes 0}
+ expr $::nMaxBytes <= 100000
+} {1}
+do_test malloc5-4.3 {
+ # Check that the content of table abc is at least roughly as expected.
+ execsql {
+ SELECT count(*), sum(a), sum(b) FROM abc;
+ }
+} [list 20000 [expr int(20000.0 * 4999.5)] [expr int(20000.0 * 4999.5)]]
+
+# Restore the soft heap limit.
+sqlite3_soft_heap_limit $::soft_limit
+finish_test
+
+catch {db close}
+
Added: freeswitch/trunk/libs/sqlite/test/malloc6.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/malloc6.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,153 @@
+# 2006 June 25
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file attempts to check the library in an out-of-memory situation.
+# When compiled with -DSQLITE_MEMDEBUG=1, the SQLite library accepts a special
+# command (sqlite_malloc_fail N) which causes the N-th malloc to fail. This
+# special feature is used to see what happens in the library if a malloc
+# were to really fail due to an out-of-memory situation.
+#
+# $Id: malloc6.test,v 1.1 2006/06/26 12:50:09 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if memory debugging is turned on.
+#
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG..."
+ finish_test
+ return
+}
+
+# Usage: do_malloc_test <test number> <options...>
+#
+# The first argument, <test number>, is an integer used to name the
+# tests executed by this proc. Options are as follows:
+#
+# -tclprep TCL script to run to prepare test.
+# -sqlprep SQL script to run to prepare test.
+# -tclbody TCL script to run with malloc failure simulation.
+# -sqlbody SQL script to run with malloc failure simulation.
+# -cleanup TCL script to run after the test.
+#
+# This command runs a series of tests to verify SQLite's ability
+# to handle an out-of-memory condition gracefully. It is assumed
+# that if this condition occurs a malloc() call will return a
+# NULL pointer. Linux, for example, doesn't do that by default. See
+# the "BUGS" section of malloc(3).
+#
+# On each iteration of a loop, the TCL commands in any argument passed
+# to the -tclbody switch, followed by the SQL commands in any argument
+# passed to the -sqlbody switch, are executed. On each iteration the
+# Nth call to sqliteMalloc() is made to fail, where N is increased
+# each time the loop runs, starting from 1. When all commands execute
+# successfully, the loop ends.
+#
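+# A hypothetical usage sketch (an editorial addition, not from the original
+# file; the table name "t0" and the SQL are invented for illustration):
+#
+#   do_malloc_test 2 -sqlprep {
+#     CREATE TABLE t0(x);
+#   } -sqlbody {
+#     INSERT INTO t0 VALUES(1);
+#   } -cleanup {
+#     catch {db eval {DROP TABLE t0}}
+#   }
+#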
+proc do_malloc_test {tn args} {
+ array unset ::mallocopts
+ array set ::mallocopts $args
+
+ set ::go 1
+ for {set ::n 1} {$::go && $::n < 50000} {incr ::n} {
+ do_test malloc6-$tn.$::n {
+
+ # Remove all traces of database files test.db and test2.db from the file
+ # system. Then open the (empty) database "test.db" with the handle [db].
+ #
+ sqlite_malloc_fail 0
+ catch {db close}
+ catch {file delete -force test.db}
+ catch {file delete -force test.db-journal}
+ catch {file delete -force test2.db}
+ catch {file delete -force test2.db-journal}
+ catch {sqlite3 db test.db}
+ set ::DB [sqlite3_connection_pointer db]
+
+ # Execute any -tclprep and -sqlprep scripts.
+ #
+ if {[info exists ::mallocopts(-tclprep)]} {
+ eval $::mallocopts(-tclprep)
+ }
+ if {[info exists ::mallocopts(-sqlprep)]} {
+ execsql $::mallocopts(-sqlprep)
+ }
+
+ # Now set the ${::n}th malloc() to fail and execute the -tclbody and
+ # -sqlbody scripts.
+ #
+ sqlite_malloc_fail $::n
+ set ::mallocbody {}
+ if {[info exists ::mallocopts(-tclbody)]} {
+ append ::mallocbody "$::mallocopts(-tclbody)\n"
+ }
+ if {[info exists ::mallocopts(-sqlbody)]} {
+ append ::mallocbody "db eval {$::mallocopts(-sqlbody)}"
+ }
+ set v [catch $::mallocbody msg]
+
+ # If the test fails (if $v!=0) and the database connection actually
+ # exists, make sure the failure code is SQLITE_NOMEM.
+ if {$v && [info command db]=="db" && [info exists ::mallocopts(-sqlbody)]
+ && [db errorcode]!=7} {
+ set v 999
+ }
+
+ set leftover [lindex [sqlite_malloc_stat] 2]
+ if {$leftover>0} {
+ if {$leftover>1} {puts "\nLeftover: $leftover\nReturn=$v Message=$msg"}
+ set ::go 0
+ if {$v} {
+ puts "\nError message returned: $msg"
+ } else {
+ set v {1 1}
+ }
+ } else {
+ set v2 [expr {$msg=="" || $msg=="out of memory"}]
+ if {!$v2} {puts "\nError message returned: $msg"}
+ lappend v $v2
+ }
+ } {1 1}
+
+ if {[info exists ::mallocopts(-cleanup)]} {
+ catch [list uplevel #0 $::mallocopts(-cleanup)] msg
+ }
+ }
+ unset ::mallocopts
+}
+
+set sqlite_os_trace 0
+do_malloc_test 1 -tclprep {
+ db close
+} -tclbody {
+ if {[catch {sqlite3 db test.db}]} {
+ error "out of memory"
+ }
+} -sqlbody {
+ DROP TABLE IF EXISTS t1;
+ CREATE TABLE IF NOT EXISTS t1(
+ a int, b float, c double, d text, e varchar(20),
+ primary key(a,b,c)
+ );
+ CREATE TABLE IF NOT EXISTS t1(
+ a int, b float, c double, d text, e varchar(20),
+ primary key(a,b,c)
+ );
+ DROP TABLE IF EXISTS t1;
+}
+
+# Ensure that no file descriptors were leaked.
+do_test malloc6-1.X {
+ catch {db close}
+ set sqlite_open_file_count
+} {0}
+
+sqlite_malloc_fail 0
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/malloc7.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/malloc7.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,146 @@
+# 2006 July 26
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file contains additional out-of-memory checks (see malloc.tcl)
+# added to expose a bug in out-of-memory handling for sqlite3_prepare16().
+#
+# $Id: malloc7.test,v 1.2 2006/07/26 14:57:30 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Only run these tests if memory debugging is turned on.
+#
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG..."
+ finish_test
+ return
+}
+
+# Usage: do_malloc_test <test number> <options...>
+#
+# The first argument, <test number>, is an integer used to name the
+# tests executed by this proc. Options are as follows:
+#
+# -tclprep TCL script to run to prepare test.
+# -sqlprep SQL script to run to prepare test.
+# -tclbody TCL script to run with malloc failure simulation.
+# -sqlbody SQL script to run with malloc failure simulation.
+# -cleanup TCL script to run after the test.
+#
+# This command runs a series of tests to verify SQLite's ability
+# to handle an out-of-memory condition gracefully. It is assumed
+# that if this condition occurs a malloc() call will return a
+# NULL pointer. Linux, for example, doesn't do that by default. See
+# the "BUGS" section of malloc(3).
+#
+# On each iteration of a loop, the TCL commands in any argument passed
+# to the -tclbody switch, followed by the SQL commands in any argument
+# passed to the -sqlbody switch, are executed. On each iteration the
+# Nth call to sqliteMalloc() is made to fail, where N is increased
+# each time the loop runs, starting from 1. When all commands execute
+# successfully, the loop ends.
+#
+proc do_malloc_test {tn args} {
+ array unset ::mallocopts
+ array set ::mallocopts $args
+
+ set ::go 1
+ for {set ::n 1} {$::go && $::n < 50000} {incr ::n} {
+ do_test malloc7-$tn.$::n {
+
+ # Remove all traces of database files test.db and test2.db from the file
+ # system. Then open the (empty) database "test.db" with the handle [db].
+ #
+ sqlite_malloc_fail 0
+ catch {db close}
+ catch {file delete -force test.db}
+ catch {file delete -force test.db-journal}
+ catch {file delete -force test2.db}
+ catch {file delete -force test2.db-journal}
+ catch {sqlite3 db test.db}
+ set ::DB [sqlite3_connection_pointer db]
+
+ # Execute any -tclprep and -sqlprep scripts.
+ #
+ if {[info exists ::mallocopts(-tclprep)]} {
+ eval $::mallocopts(-tclprep)
+ }
+ if {[info exists ::mallocopts(-sqlprep)]} {
+ execsql $::mallocopts(-sqlprep)
+ }
+
+ # Now set the ${::n}th malloc() to fail and execute the -tclbody and
+ # -sqlbody scripts.
+ #
+ sqlite_malloc_fail $::n
+ set ::mallocbody {}
+ if {[info exists ::mallocopts(-tclbody)]} {
+ append ::mallocbody "$::mallocopts(-tclbody)\n"
+ }
+ if {[info exists ::mallocopts(-sqlbody)]} {
+ append ::mallocbody "db eval {$::mallocopts(-sqlbody)}"
+ }
+ set v [catch $::mallocbody msg]
+
+ # If the test fails (if $v!=0) and the database connection actually
+ # exists, make sure the failure code is SQLITE_NOMEM.
+ if {$v && [info command db]=="db" && [info exists ::mallocopts(-sqlbody)]
+ && [db errorcode]!=7} {
+ set v 999
+ }
+
+ set leftover [lindex [sqlite_malloc_stat] 2]
+ if {$leftover>0} {
+ if {$leftover>1} {puts "\nLeftover: $leftover\nReturn=$v Message=$msg"}
+ set ::go 0
+ if {$v} {
+ puts "\nError message returned: $msg"
+ } else {
+ set v {1 1}
+ }
+ } else {
+ set v2 [expr {$msg=="" || $msg=="out of memory"}]
+ if {!$v2} {puts "\nError message returned: $msg"}
+ lappend v $v2
+ }
+ } {1 1}
+
+ if {[info exists ::mallocopts(-cleanup)]} {
+ catch [list uplevel #0 $::mallocopts(-cleanup)] msg
+ }
+ }
+ unset ::mallocopts
+}
+
+db eval {
+ CREATE TABLE t1(a,b,c,d);
+ CREATE INDEX i1 ON t1(b,c);
+}
+
+do_malloc_test 1 -tclbody {
+ set sql16 [encoding convertto unicode "SELECT * FROM sqlite_master"]
+ append sql16 "\00\00"
+ set nbyte [string length $sql16]
+ set ::STMT [sqlite3_prepare16 $::DB $sql16 $nbyte DUMMY]
+ sqlite3_finalize $::STMT
+}
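+# For reference, a fuller (purely illustrative) invocation of do_malloc_test,
+# sketching how the -sqlprep, -sqlbody and -cleanup options documented above
+# can be combined. The table name t1x is hypothetical and this sketch is not
+# part of the sqlite3_prepare16() checks this file is really about.
+#
+# do_malloc_test 2 -sqlprep {
+#   CREATE TABLE t1x(a,b);
+#   INSERT INTO t1x VALUES(1,2);
+# } -sqlbody {
+#   SELECT a, b FROM t1x;
+# } -cleanup {
+#   catch {db eval {DROP TABLE t1x}}
+# }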
+
+
+
+# Ensure that no file descriptors were leaked.
+do_test malloc-99.X {
+ catch {db close}
+ set sqlite_open_file_count
+} {0}
+
+puts open-file-count=$sqlite_open_file_count
+sqlite_malloc_fail 0
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/manydb.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/manydb.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,91 @@
+# 2005 October 3
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file tests the ability of the library to open
+# many different databases at the same time without leaking memory.
+#
+# $Id: manydb.test,v 1.3 2006/01/11 01:08:34 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+set N 300
+
+# First test how many file descriptors are available for use. To open a
+# database for writing SQLite requires 3 file descriptors (the database, the
+# journal and the directory).
+set filehandles {}
+catch {
+ for {set i 0} {$i<($N * 3)} {incr i} {
+ lappend filehandles [open testfile.1 w]
+ }
+}
+foreach fd $filehandles {
+ close $fd
+}
+catch {
+ file delete -force testfile.1
+}
+set N [expr $i / 3]
+
+# Create a bunch of random database names
+#
+unset -nocomplain dbname
+unset -nocomplain used
+for {set i 0} {$i<$N} {incr i} {
+ while 1 {
+ set name test-[format %08x [expr {int(rand()*0x7fffffff)}]].db
+ if {[info exists used($name)]} continue
+ set dbname($i) $name
+ set used($name) $i
+ break
+ }
+}
+
+# Create a bunch of databases
+#
+for {set i 0} {$i<$N} {incr i} {
+ do_test manydb-1.$i {
+ sqlite3 db$i $dbname($i)
+ execsql {
+ CREATE TABLE t1(a,b);
+ BEGIN;
+ INSERT INTO t1 VALUES(1,2);
+ } db$i
+ } {}
+}
+
+# Finish the transactions
+#
+for {set i 0} {$i<$N} {incr i} {
+ do_test manydb-2.$i {
+ execsql {
+ COMMIT;
+ SELECT * FROM t1;
+ } db$i
+ } {1 2}
+}
+
+
+# Close the databases and erase the files.
+#
+for {set i 0} {$i<$N} {incr i} {
+ do_test manydb-3.$i {
+ db$i close
+ file delete -force $dbname($i)
+ } {}
+}
+
+
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/memdb.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/memdb.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,417 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is the in-memory database backend.
+#
+# $Id: memdb.test,v 1.15 2006/01/30 22:48:44 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable memorydb {
+
+# In the following sequence of tests, compute the MD5 sum of the content
+# of a table, make lots of modifications to that table, then do a rollback.
+# Verify that after the rollback, the MD5 checksum is unchanged.
+#
+# These tests were borrowed from trans.tcl.
+#
+do_test memdb-1.1 {
+ db close
+ sqlite3 db :memory:
+ # sqlite3 db test.db
+ execsql {
+ BEGIN;
+ CREATE TABLE t3(x TEXT);
+ INSERT INTO t3 VALUES(randstr(10,400));
+ INSERT INTO t3 VALUES(randstr(10,400));
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ COMMIT;
+ SELECT count(*) FROM t3;
+ }
+} {1024}
+
+# The following procedure computes a "signature" for table "t3". If
+# T3 changes in any way, the signature should change.
+#
+# This is used to test ROLLBACK. We gather a signature for t3, then
+# make lots of changes to t3, then rollback and take another signature.
+# The two signatures should be the same.
+#
+proc signature {{fn {}}} {
+ set rx [db eval {SELECT x FROM t3}]
+ # set r1 [md5 $rx\n]
+ if {$fn!=""} {
+ # set fd [open $fn w]
+ # puts $fd $rx
+ # close $fd
+ }
+ # set r [db eval {SELECT count(*), md5sum(x) FROM t3}]
+ # puts "SIG($fn)=$r1"
+ return [list [string length $rx] $rx]
+}
+
+# Do rollbacks. Make sure the signature does not change.
+#
+set limit 10
+for {set i 2} {$i<=$limit} {incr i} {
+ set ::sig [signature one]
+ # puts "sig=$sig"
+ set cnt [lindex $::sig 0]
+ if {$i%2==0} {
+ execsql {PRAGMA synchronous=FULL}
+ } else {
+ execsql {PRAGMA synchronous=NORMAL}
+ }
+ do_test memdb-1.$i.1-$cnt {
+ execsql {
+ BEGIN;
+ DELETE FROM t3 WHERE random()%10!=0;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ ROLLBACK;
+ }
+ set sig2 [signature two]
+ } $sig
+ # puts "sig2=$sig2"
+ # if {$sig2!=$sig} exit
+ do_test memdb-1.$i.2-$cnt {
+ execsql {
+ BEGIN;
+ DELETE FROM t3 WHERE random()%10!=0;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ DELETE FROM t3 WHERE random()%10!=0;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ ROLLBACK;
+ }
+ signature
+ } $sig
+ if {$i<$limit} {
+ do_test memdb-1.$i.9-$cnt {
+ execsql {
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3 WHERE random()%10==0;
+ }
+ } {}
+ }
+ set ::pager_old_format 0
+}
+
+integrity_check memdb-2.1
+
+do_test memdb-3.1 {
+ execsql {
+ CREATE TABLE t4(a,b,c,d);
+ BEGIN;
+ INSERT INTO t4 VALUES(1,2,3,4);
+ SELECT * FROM t4;
+ }
+} {1 2 3 4}
+do_test memdb-3.2 {
+ execsql {
+ SELECT name FROM sqlite_master WHERE type='table';
+ }
+} {t3 t4}
+do_test memdb-3.3 {
+ execsql {
+ DROP TABLE t4;
+ SELECT name FROM sqlite_master WHERE type='table';
+ }
+} {t3}
+do_test memdb-3.4 {
+ execsql {
+ ROLLBACK;
+ SELECT name FROM sqlite_master WHERE type='table';
+ }
+} {t3 t4}
+
+# Create tables for the first group of tests.
+#
+do_test memdb-4.0 {
+ execsql {
+ CREATE TABLE t1(a, b, c, UNIQUE(a,b));
+ CREATE TABLE t2(x);
+ SELECT c FROM t1 ORDER BY c;
+ }
+} {}
+
+# Six columns of configuration data as follows:
+#
+# i The reference number of the test
+# conf The conflict resolution algorithm on the BEGIN statement
+# cmd An INSERT or REPLACE command to execute against table t1
+# t0 True if there is an error from $cmd
+# t1 Content of "c" column of t1 assuming no error in $cmd
+# t2 Content of "x" column of t2
+#
+foreach {i conf cmd t0 t1 t2} {
+ 1 {} INSERT 1 {} 1
+ 2 {} {INSERT OR IGNORE} 0 3 1
+ 3 {} {INSERT OR REPLACE} 0 4 1
+ 4 {} REPLACE 0 4 1
+ 5 {} {INSERT OR FAIL} 1 {} 1
+ 6 {} {INSERT OR ABORT} 1 {} 1
+ 7 {} {INSERT OR ROLLBACK} 1 {} {}
+} {
+
+ # All tests after test 1 depend on conflict resolution. So end the
+ # loop if that is not available in this build.
+ ifcapable !conflict {if {$i>1} break}
+
+ do_test memdb-4.$i {
+ if {$conf!=""} {set conf "ON CONFLICT $conf"}
+ set r0 [catch {execsql [subst {
+ DELETE FROM t1;
+ DELETE FROM t2;
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN $conf;
+ INSERT INTO t2 VALUES(1);
+ $cmd INTO t1 VALUES(1,2,4);
+ }]} r1]
+ catch {execsql {COMMIT}}
+ if {$r0} {set r1 {}} {set r1 [execsql {SELECT c FROM t1}]}
+ set r2 [execsql {SELECT x FROM t2}]
+ list $r0 $r1 $r2
+ } [list $t0 $t1 $t2]
+}
+
+do_test memdb-5.0 {
+ execsql {
+ DROP TABLE t2;
+ DROP TABLE t3;
+ CREATE TABLE t2(a,b,c);
+ INSERT INTO t2 VALUES(1,2,1);
+ INSERT INTO t2 VALUES(2,3,2);
+ INSERT INTO t2 VALUES(3,4,1);
+ INSERT INTO t2 VALUES(4,5,4);
+ SELECT c FROM t2 ORDER BY b;
+ CREATE TABLE t3(x);
+ INSERT INTO t3 VALUES(1);
+ }
+} {1 2 1 4}
+
+# Six columns of configuration data as follows:
+#
+# i The reference number of the test
+# conf1 The conflict resolution algorithm on the UNIQUE constraint
+# conf2 The conflict resolution algorithm on the BEGIN statement
+# cmd An UPDATE command to execute against table t1
+# t0 True if there is an error from $cmd
+# t1 Content of "a" column of t1 (ordered by b) assuming no error in $cmd
+# t2 Content of "x" column of t3
+#
+foreach {i conf1 conf2 cmd t0 t1 t2} {
+ 1 {} {} UPDATE 1 {6 7 8 9} 1
+ 2 REPLACE {} UPDATE 0 {7 6 9} 1
+ 3 IGNORE {} UPDATE 0 {6 7 3 9} 1
+ 4 FAIL {} UPDATE 1 {6 7 3 4} 1
+ 5 ABORT {} UPDATE 1 {1 2 3 4} 1
+ 6 ROLLBACK {} UPDATE 1 {1 2 3 4} 0
+ 7 REPLACE {} {UPDATE OR IGNORE} 0 {6 7 3 9} 1
+ 8 IGNORE {} {UPDATE OR REPLACE} 0 {7 6 9} 1
+ 9 FAIL {} {UPDATE OR IGNORE} 0 {6 7 3 9} 1
+ 10 ABORT {} {UPDATE OR REPLACE} 0 {7 6 9} 1
+ 11 ROLLBACK {} {UPDATE OR IGNORE} 0 {6 7 3 9} 1
+ 12 {} {} {UPDATE OR IGNORE} 0 {6 7 3 9} 1
+ 13 {} {} {UPDATE OR REPLACE} 0 {7 6 9} 1
+ 14 {} {} {UPDATE OR FAIL} 1 {6 7 3 4} 1
+ 15 {} {} {UPDATE OR ABORT} 1 {1 2 3 4} 1
+ 16 {} {} {UPDATE OR ROLLBACK} 1 {1 2 3 4} 0
+} {
+ # All tests after test 1 depend on conflict resolution. So end the
+ # loop if that is not available in this build.
+ ifcapable !conflict {
+ if {$i>1} break
+ }
+
+ if {$t0} {set t1 {column a is not unique}}
+ do_test memdb-5.$i {
+ if {$conf1!=""} {set conf1 "ON CONFLICT $conf1"}
+ if {$conf2!=""} {set conf2 "ON CONFLICT $conf2"}
+ set r0 [catch {execsql [subst {
+ DROP TABLE t1;
+ CREATE TABLE t1(a,b,c, UNIQUE(a) $conf1);
+ INSERT INTO t1 SELECT * FROM t2;
+ UPDATE t3 SET x=0;
+ BEGIN $conf2;
+ $cmd t3 SET x=1;
+ $cmd t1 SET b=b*2;
+ $cmd t1 SET a=c+5;
+ }]} r1]
+ catch {execsql {COMMIT}}
+ if {!$r0} {set r1 [execsql {SELECT a FROM t1 ORDER BY b}]}
+ set r2 [execsql {SELECT x FROM t3}]
+ list $r0 $r1 $r2
+ } [list $t0 $t1 $t2]
+}
+
+do_test memdb-6.1 {
+ execsql {
+ SELECT * FROM t2;
+ }
+} {1 2 1 2 3 2 3 4 1 4 5 4}
+do_test memdb-6.2 {
+ execsql {
+ BEGIN;
+ DROP TABLE t2;
+ SELECT name FROM sqlite_master WHERE type='table' ORDER BY 1;
+ }
+} {t1 t3 t4}
+do_test memdb-6.3 {
+ execsql {
+ ROLLBACK;
+ SELECT name FROM sqlite_master WHERE type='table' ORDER BY 1;
+ }
+} {t1 t2 t3 t4}
+do_test memdb-6.4 {
+ execsql {
+ SELECT * FROM t2;
+ }
+} {1 2 1 2 3 2 3 4 1 4 5 4}
+ifcapable compound {
+do_test memdb-6.5 {
+ execsql {
+ SELECT a FROM t2 UNION SELECT b FROM t2 ORDER BY 1;
+ }
+} {1 2 3 4 5}
+} ;# ifcapable compound
+do_test memdb-6.6 {
+ execsql {
+ CREATE INDEX i2 ON t2(c);
+ SELECT a FROM t2 ORDER BY c;
+ }
+} {1 3 2 4}
+do_test memdb-6.6 {
+ execsql {
+ SELECT a FROM t2 ORDER BY c DESC;
+ }
+} {4 2 3 1}
+do_test memdb-6.7 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t5(x,y);
+ INSERT INTO t5 VALUES(1,2);
+ SELECT * FROM t5;
+ }
+} {1 2}
+do_test memdb-6.8 {
+ execsql {
+ SELECT name FROM sqlite_master WHERE type='table' ORDER BY 1;
+ }
+} {t1 t2 t3 t4 t5}
+do_test memdb-6.9 {
+ execsql {
+ ROLLBACK;
+ SELECT name FROM sqlite_master WHERE type='table' ORDER BY 1;
+ }
+} {t1 t2 t3 t4}
+do_test memdb-6.10 {
+ execsql {
+ CREATE TABLE t5(x PRIMARY KEY, y UNIQUE);
+ SELECT * FROM t5;
+ }
+} {}
+do_test memdb-6.11 {
+ execsql {
+ SELECT * FROM t5 ORDER BY y DESC;
+ }
+} {}
+
+ifcapable conflict {
+ do_test memdb-6.12 {
+ execsql {
+ INSERT INTO t5 VALUES(1,2);
+ INSERT INTO t5 VALUES(3,4);
+ REPLACE INTO t5 VALUES(1,4);
+ SELECT rowid,* FROM t5;
+ }
+ } {3 1 4}
+ do_test memdb-6.13 {
+ execsql {
+ DELETE FROM t5 WHERE x>5;
+ SELECT * FROM t5;
+ }
+ } {1 4}
+ do_test memdb-6.14 {
+ execsql {
+ DELETE FROM t5 WHERE y<3;
+ SELECT * FROM t5;
+ }
+ } {1 4}
+}
+
+do_test memdb-6.15 {
+ execsql {
+ DELETE FROM t5 WHERE x>0;
+ SELECT * FROM t5;
+ }
+} {}
+
+ifcapable subquery {
+ do_test memdb-7.1 {
+ execsql {
+ CREATE TABLE t6(x);
+ INSERT INTO t6 VALUES(1);
+ INSERT INTO t6 SELECT x+1 FROM t6;
+ INSERT INTO t6 SELECT x+2 FROM t6;
+ INSERT INTO t6 SELECT x+4 FROM t6;
+ INSERT INTO t6 SELECT x+8 FROM t6;
+ INSERT INTO t6 SELECT x+16 FROM t6;
+ INSERT INTO t6 SELECT x+32 FROM t6;
+ INSERT INTO t6 SELECT x+64 FROM t6;
+ INSERT INTO t6 SELECT x+128 FROM t6;
+ SELECT count(*) FROM (SELECT DISTINCT x FROM t6);
+ }
+ } {256}
+ for {set i 1} {$i<=256} {incr i} {
+ do_test memdb-7.2.$i {
+ execsql "DELETE FROM t6 WHERE x=\
+ (SELECT x FROM t6 ORDER BY random() LIMIT 1)"
+ execsql {SELECT count(*) FROM t6}
+ } [expr {256-$i}]
+ }
+}
+
+# Ticket #1524
+#
+do_test memdb-8.1 {
+ db close
+ sqlite3 db {:memory:}
+ execsql {
+ PRAGMA auto_vacuum=TRUE;
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(randstr(5000,6000));
+ INSERT INTO t1 VALUES(randstr(5000,6000));
+ INSERT INTO t1 VALUES(randstr(5000,6000));
+ INSERT INTO t1 VALUES(randstr(5000,6000));
+ INSERT INTO t1 VALUES(randstr(5000,6000));
+ SELECT count(*) FROM t1;
+ }
+} 5
+do_test memdb-8.2 {
+ execsql {
+ DELETE FROM t1;
+ SELECT count(*) FROM t1;
+ }
+} 0
+
+
+} ;# ifcapable memorydb
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/memleak.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/memleak.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,96 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file repeatedly runs the other test scripts and checks for memory leaks.
+#
+# $Id: memleak.test,v 1.9 2005/03/16 12:15:22 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+rename finish_test really_finish_test
+proc finish_test {} {
+ catch {db close}
+ memleak_check
+}
+
+if {[file exists ./sqlite_test_count]} {
+ set COUNT [exec cat ./sqlite_test_count]
+} else {
+ set COUNT 3
+}
+
+# LeakList will hold a list of the number of unfreed mallocs after
+# each round of the test. This number should be constant. If it
+# grows, it may mean there is a memory leak in the library.
+#
+set LeakList {}
+
+set EXCLUDE {
+ all.test
+ quick.test
+ misuse.test
+ memleak.test
+ btree2.test
+ trans.test
+ crash.test
+ autovacuum_crash.test
+}
+# Test files btree2.test and btree4.test don't work if the
+# SQLITE_DEFAULT_AUTOVACUUM macro is defined to true (because they depend
+# on tables being allocated starting at page 2).
+#
+ifcapable default_autovacuum {
+ lappend EXCLUDE btree2.test
+ lappend EXCLUDE btree4.test
+}
+
+if {[sqlite3 -has-codec]} {
+ # lappend EXCLUDE
+}
+if {[llength $argv]>0} {
+ set FILELIST $argv
+ set argv {}
+} else {
+ set FILELIST [lsort -dictionary [glob $testdir/*.test]]
+}
+
+foreach testfile $FILELIST {
+ set tail [file tail $testfile]
+ if {[lsearch -exact $EXCLUDE $tail]>=0} continue
+ set LeakList {}
+ for {set COUNTER 0} {$COUNTER<$COUNT} {incr COUNTER} {
+ source $testfile
+ if {[info exists Leak]} {
+ lappend LeakList $Leak
+ }
+ }
+ if {$LeakList!=""} {
+ puts -nonewline memory-leak-test-$tail...
+ incr ::nTest
+ foreach x $LeakList {
+ if {$x!=[lindex $LeakList 0]} {
+ puts " failed! ($LeakList)"
+ incr ::nErr
+ lappend ::failList memory-leak-test-$tail
+ break
+ }
+ }
+ puts " Ok"
+ }
+}
+really_finish_test
+
+# Run the malloc tests and the misuse test after memory leak detection.
+# Both tests leak memory.
+#
+#catch {source $testdir/misuse.test}
+#catch {source $testdir/malloc.test}
+
+really_finish_test
Added: freeswitch/trunk/libs/sqlite/test/minmax.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/minmax.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,384 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing SELECT statements that contain
+# aggregate min() and max() functions and which are handled as
+# a special case.
+#
+# $Id: minmax.test,v 1.19 2006/03/26 01:21:23 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test minmax-1.0 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(x, y);
+ INSERT INTO t1 VALUES(1,1);
+ INSERT INTO t1 VALUES(2,2);
+ INSERT INTO t1 VALUES(3,2);
+ INSERT INTO t1 VALUES(4,3);
+ INSERT INTO t1 VALUES(5,3);
+ INSERT INTO t1 VALUES(6,3);
+ INSERT INTO t1 VALUES(7,3);
+ INSERT INTO t1 VALUES(8,4);
+ INSERT INTO t1 VALUES(9,4);
+ INSERT INTO t1 VALUES(10,4);
+ INSERT INTO t1 VALUES(11,4);
+ INSERT INTO t1 VALUES(12,4);
+ INSERT INTO t1 VALUES(13,4);
+ INSERT INTO t1 VALUES(14,4);
+ INSERT INTO t1 VALUES(15,4);
+ INSERT INTO t1 VALUES(16,5);
+ INSERT INTO t1 VALUES(17,5);
+ INSERT INTO t1 VALUES(18,5);
+ INSERT INTO t1 VALUES(19,5);
+ INSERT INTO t1 VALUES(20,5);
+ COMMIT;
+ SELECT DISTINCT y FROM t1 ORDER BY y;
+ }
+} {1 2 3 4 5}
+
+do_test minmax-1.1 {
+ set sqlite_search_count 0
+ execsql {SELECT min(x) FROM t1}
+} {1}
+do_test minmax-1.2 {
+ set sqlite_search_count
+} {19}
+do_test minmax-1.3 {
+ set sqlite_search_count 0
+ execsql {SELECT max(x) FROM t1}
+} {20}
+do_test minmax-1.4 {
+ set sqlite_search_count
+} {19}
+do_test minmax-1.5 {
+ execsql {CREATE INDEX t1i1 ON t1(x)}
+ set sqlite_search_count 0
+ execsql {SELECT min(x) FROM t1}
+} {1}
+do_test minmax-1.6 {
+ set sqlite_search_count
+} {2}
+do_test minmax-1.7 {
+ set sqlite_search_count 0
+ execsql {SELECT max(x) FROM t1}
+} {20}
+do_test minmax-1.8 {
+ set sqlite_search_count
+} {1}
+do_test minmax-1.9 {
+ set sqlite_search_count 0
+ execsql {SELECT max(y) FROM t1}
+} {5}
+do_test minmax-1.10 {
+ set sqlite_search_count
+} {19}
+
+do_test minmax-2.0 {
+ execsql {
+ CREATE TABLE t2(a INTEGER PRIMARY KEY, b);
+ INSERT INTO t2 SELECT * FROM t1;
+ }
+ set sqlite_search_count 0
+ execsql {SELECT min(a) FROM t2}
+} {1}
+do_test minmax-2.1 {
+ set sqlite_search_count
+} {0}
+do_test minmax-2.2 {
+ set sqlite_search_count 0
+ execsql {SELECT max(a) FROM t2}
+} {20}
+do_test minmax-2.3 {
+ set sqlite_search_count
+} {0}
+
+do_test minmax-3.0 {
+ ifcapable subquery {
+ execsql {INSERT INTO t2 VALUES((SELECT max(a) FROM t2)+1,999)}
+ } else {
+ db function max_a_t2 {execsql {SELECT max(a) FROM t2}}
+ execsql {INSERT INTO t2 VALUES(max_a_t2()+1,999)}
+ }
+ set sqlite_search_count 0
+ execsql {SELECT max(a) FROM t2}
+} {21}
+do_test minmax-3.1 {
+ set sqlite_search_count
+} {0}
+do_test minmax-3.2 {
+ ifcapable subquery {
+ execsql {INSERT INTO t2 VALUES((SELECT max(a) FROM t2)+1,999)}
+ } else {
+ db function max_a_t2 {execsql {SELECT max(a) FROM t2}}
+ execsql {INSERT INTO t2 VALUES(max_a_t2()+1,999)}
+ }
+ set sqlite_search_count 0
+ ifcapable subquery {
+ execsql { SELECT b FROM t2 WHERE a=(SELECT max(a) FROM t2) }
+ } else {
+ execsql { SELECT b FROM t2 WHERE a=max_a_t2() }
+ }
+} {999}
+do_test minmax-3.3 {
+ set sqlite_search_count
+} {0}
+
+ifcapable {compound && subquery} {
+ do_test minmax-4.1 {
+ execsql {
+ SELECT coalesce(min(x+0),-1), coalesce(max(x+0),-1) FROM
+ (SELECT * FROM t1 UNION SELECT NULL as 'x', NULL as 'y')
+ }
+ } {1 20}
+ do_test minmax-4.2 {
+ execsql {
+ SELECT y, coalesce(sum(x),0) FROM
+ (SELECT null AS x, y+1 AS y FROM t1 UNION SELECT * FROM t1)
+ GROUP BY y ORDER BY y;
+ }
+ } {1 1 2 5 3 22 4 92 5 90 6 0}
+ do_test minmax-4.3 {
+ execsql {
+ SELECT y, count(x), count(*) FROM
+ (SELECT null AS x, y+1 AS y FROM t1 UNION SELECT * FROM t1)
+ GROUP BY y ORDER BY y;
+ }
+ } {1 1 1 2 2 3 3 4 5 4 8 9 5 5 6 6 0 1}
+} ;# ifcapable compound
+
+# Make sure the min(x) and max(x) optimizations work on empty tables
+# including empty tables with indices. Ticket #296.
+#
+do_test minmax-5.1 {
+ execsql {
+ CREATE TABLE t3(x INTEGER UNIQUE NOT NULL);
+ SELECT coalesce(min(x),999) FROM t3;
+ }
+} {999}
+do_test minmax-5.2 {
+ execsql {
+ SELECT coalesce(min(rowid),999) FROM t3;
+ }
+} {999}
+do_test minmax-5.3 {
+ execsql {
+ SELECT coalesce(max(x),999) FROM t3;
+ }
+} {999}
+do_test minmax-5.4 {
+ execsql {
+ SELECT coalesce(max(rowid),999) FROM t3;
+ }
+} {999}
+do_test minmax-5.5 {
+ execsql {
+ SELECT coalesce(max(rowid),999) FROM t3 WHERE rowid<25;
+ }
+} {999}
+
+# Make sure the min(x) and max(x) optimizations work when there
+# is a LIMIT clause. Ticket #396.
+#
+do_test minmax-6.1 {
+ execsql {
+ SELECT min(a) FROM t2 LIMIT 1
+ }
+} {1}
+do_test minmax-6.2 {
+ execsql {
+ SELECT max(a) FROM t2 LIMIT 3
+ }
+} {22}
+do_test minmax-6.3 {
+ execsql {
+ SELECT min(a) FROM t2 LIMIT 0,100
+ }
+} {1}
+do_test minmax-6.4 {
+ execsql {
+ SELECT max(a) FROM t2 LIMIT 1,100
+ }
+} {}
+do_test minmax-6.5 {
+ execsql {
+ SELECT min(x) FROM t3 LIMIT 1
+ }
+} {{}}
+do_test minmax-6.6 {
+ execsql {
+ SELECT max(x) FROM t3 LIMIT 0
+ }
+} {}
+do_test minmax-6.7 {
+ execsql {
+ SELECT max(a) FROM t2 LIMIT 0
+ }
+} {}
+
+# Make sure the max(x) and min(x) optimizations work for nested
+# queries. Ticket #587.
+#
+do_test minmax-7.1 {
+ execsql {
+ SELECT max(x) FROM t1;
+ }
+} 20
+ifcapable subquery {
+ do_test minmax-7.2 {
+ execsql {
+ SELECT * FROM (SELECT max(x) FROM t1);
+ }
+ } 20
+}
+do_test minmax-7.3 {
+ execsql {
+ SELECT min(x) FROM t1;
+ }
+} 1
+ifcapable subquery {
+ do_test minmax-7.4 {
+ execsql {
+ SELECT * FROM (SELECT min(x) FROM t1);
+ }
+ } 1
+}
+
+# Make sure min(x) and max(x) work correctly when the datatype is
+# TEXT instead of NUMERIC. Ticket #623.
+#
+do_test minmax-8.1 {
+ execsql {
+ CREATE TABLE t4(a TEXT);
+ INSERT INTO t4 VALUES('1234');
+ INSERT INTO t4 VALUES('234');
+ INSERT INTO t4 VALUES('34');
+ SELECT min(a), max(a) FROM t4;
+ }
+} {1234 34}
+do_test minmax-8.2 {
+ execsql {
+ CREATE TABLE t5(a INTEGER);
+ INSERT INTO t5 VALUES('1234');
+ INSERT INTO t5 VALUES('234');
+ INSERT INTO t5 VALUES('34');
+ SELECT min(a), max(a) FROM t5;
+ }
+} {34 1234}
+
+# Ticket #658: Test the min()/max() optimization when the FROM clause
+# is a subquery.
+#
+ifcapable {compound && subquery} {
+ do_test minmax-9.1 {
+ execsql {
+ SELECT max(rowid) FROM (
+ SELECT max(rowid) FROM t4 UNION SELECT max(rowid) FROM t5
+ )
+ }
+ } {1}
+ do_test minmax-9.2 {
+ execsql {
+ SELECT max(rowid) FROM (
+ SELECT max(rowid) FROM t4 EXCEPT SELECT max(rowid) FROM t5
+ )
+ }
+ } {{}}
+} ;# ifcapable compound&&subquery
+
+# If there is a NULL in an aggregate max() or min(), ignore it. An
+# aggregate min() or max() will only return NULL if all values are NULL.
+#
+do_test minmax-10.1 {
+ execsql {
+ CREATE TABLE t6(x);
+ INSERT INTO t6 VALUES(1);
+ INSERT INTO t6 VALUES(2);
+ INSERT INTO t6 VALUES(NULL);
+ SELECT coalesce(min(x),-1) FROM t6;
+ }
+} {1}
+do_test minmax-10.2 {
+ execsql {
+ SELECT max(x) FROM t6;
+ }
+} {2}
+do_test minmax-10.3 {
+ execsql {
+ CREATE INDEX i6 ON t6(x);
+ SELECT coalesce(min(x),-1) FROM t6;
+ }
+} {1}
+do_test minmax-10.4 {
+ execsql {
+ SELECT max(x) FROM t6;
+ }
+} {2}
+do_test minmax-10.5 {
+ execsql {
+ DELETE FROM t6 WHERE x NOT NULL;
+ SELECT count(*) FROM t6;
+ }
+} 1
+do_test minmax-10.6 {
+ execsql {
+ SELECT count(x) FROM t6;
+ }
+} 0
+ifcapable subquery {
+ do_test minmax-10.7 {
+ execsql {
+ SELECT (SELECT min(x) FROM t6), (SELECT max(x) FROM t6);
+ }
+ } {{} {}}
+}
+do_test minmax-10.8 {
+ execsql {
+ SELECT min(x), max(x) FROM t6;
+ }
+} {{} {}}
+do_test minmax-10.9 {
+ execsql {
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ SELECT count(*) FROM t6;
+ }
+} 1024
+do_test minmax-10.10 {
+ execsql {
+ SELECT count(x) FROM t6;
+ }
+} 0
+ifcapable subquery {
+ do_test minmax-10.11 {
+ execsql {
+ SELECT (SELECT min(x) FROM t6), (SELECT max(x) FROM t6);
+ }
+ } {{} {}}
+}
+do_test minmax-10.12 {
+ execsql {
+ SELECT min(x), max(x) FROM t6;
+ }
+} {{} {}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/misc1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/misc1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,585 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for miscellaneous features that were
+# left out of other test files.
+#
+# $Id: misc1.test,v 1.41 2006/06/27 20:06:45 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Mimic the SQLite 2 collation type NUMERIC.
+db collate numeric numeric_collate
+proc numeric_collate {lhs rhs} {
+ if {$lhs == $rhs} {return 0}
+ return [expr ($lhs>$rhs)?1:-1]
+}
+
+# Mimic the SQLite 2 collation type TEXT.
+db collate text text_collate
+proc text_collate {lhs rhs} {
+ return [string compare $lhs $rhs]
+}
+
+# Test the creation and use of tables that have a large number
+# of columns.
+#
+do_test misc1-1.1 {
+ set cmd "CREATE TABLE manycol(x0 text"
+ for {set i 1} {$i<=99} {incr i} {
+ append cmd ",x$i text"
+ }
+ append cmd ")";
+ execsql $cmd
+ set cmd "INSERT INTO manycol VALUES(0"
+ for {set i 1} {$i<=99} {incr i} {
+ append cmd ",$i"
+ }
+ append cmd ")";
+ execsql $cmd
+ execsql "SELECT x99 FROM manycol"
+} 99
+do_test misc1-1.2 {
+ execsql {SELECT x0, x10, x25, x50, x75 FROM manycol}
+} {0 10 25 50 75}
+do_test misc1-1.3.1 {
+ for {set j 100} {$j<=1000} {incr j 100} {
+ set cmd "INSERT INTO manycol VALUES($j"
+ for {set i 1} {$i<=99} {incr i} {
+ append cmd ",[expr {$i+$j}]"
+ }
+ append cmd ")"
+ execsql $cmd
+ }
+ execsql {SELECT x50 FROM manycol ORDER BY x80+0}
+} {50 150 250 350 450 550 650 750 850 950 1050}
+do_test misc1-1.3.2 {
+ execsql {SELECT x50 FROM manycol ORDER BY x80}
+} {1050 150 250 350 450 550 650 750 50 850 950}
+do_test misc1-1.4 {
+ execsql {SELECT x75 FROM manycol WHERE x50=350}
+} 375
+do_test misc1-1.5 {
+ execsql {SELECT x50 FROM manycol WHERE x99=599}
+} 550
+do_test misc1-1.6 {
+ execsql {CREATE INDEX manycol_idx1 ON manycol(x99)}
+ execsql {SELECT x50 FROM manycol WHERE x99=899}
+} 850
+do_test misc1-1.7 {
+ execsql {SELECT count(*) FROM manycol}
+} 11
+do_test misc1-1.8 {
+ execsql {DELETE FROM manycol WHERE x98=1234}
+ execsql {SELECT count(*) FROM manycol}
+} 11
+do_test misc1-1.9 {
+ execsql {DELETE FROM manycol WHERE x98=998}
+ execsql {SELECT count(*) FROM manycol}
+} 10
+do_test misc1-1.10 {
+ execsql {DELETE FROM manycol WHERE x99=500}
+ execsql {SELECT count(*) FROM manycol}
+} 10
+do_test misc1-1.11 {
+ execsql {DELETE FROM manycol WHERE x99=599}
+ execsql {SELECT count(*) FROM manycol}
+} 9
+
+# Check GROUP BY expressions that name two or more columns.
+#
+do_test misc1-2.1 {
+ execsql {
+ BEGIN TRANSACTION;
+ CREATE TABLE agger(one text, two text, three text, four text);
+ INSERT INTO agger VALUES(1, 'one', 'hello', 'yes');
+ INSERT INTO agger VALUES(2, 'two', 'howdy', 'no');
+ INSERT INTO agger VALUES(3, 'thr', 'howareya', 'yes');
+ INSERT INTO agger VALUES(4, 'two', 'lothere', 'yes');
+ INSERT INTO agger VALUES(5, 'one', 'atcha', 'yes');
+ INSERT INTO agger VALUES(6, 'two', 'hello', 'no');
+ COMMIT
+ }
+ execsql {SELECT count(*) FROM agger}
+} 6
+do_test misc1-2.2 {
+ execsql {SELECT sum(one), two, four FROM agger
+ GROUP BY two, four ORDER BY sum(one) desc}
+} {8 two no 6 one yes 4 two yes 3 thr yes}
+do_test misc1-2.3 {
+ execsql {SELECT sum((one)), (two), (four) FROM agger
+ GROUP BY (two), (four) ORDER BY sum(one) desc}
+} {8 two no 6 one yes 4 two yes 3 thr yes}
+
+# Here's a test for a bug found by Joel Lucsy. The code below
+# was causing an assertion failure.
+#
+do_test misc1-3.1 {
+ set r [execsql {
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES('hi');
+ PRAGMA full_column_names=on;
+ SELECT rowid, * FROM t1;
+ }]
+ lindex $r 1
+} {hi}
+
+# Here's a test for yet another bug found by Joel Lucsy. The code
+# below was causing an assertion failure.
+#
+do_test misc1-4.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t2(a);
+ INSERT INTO t2 VALUES('This is a long string to use up a lot of disk -');
+ UPDATE t2 SET a=a||a||a||a;
+ INSERT INTO t2 SELECT '1 - ' || a FROM t2;
+ INSERT INTO t2 SELECT '2 - ' || a FROM t2;
+ INSERT INTO t2 SELECT '3 - ' || a FROM t2;
+ INSERT INTO t2 SELECT '4 - ' || a FROM t2;
+ INSERT INTO t2 SELECT '5 - ' || a FROM t2;
+ INSERT INTO t2 SELECT '6 - ' || a FROM t2;
+ COMMIT;
+ SELECT count(*) FROM t2;
+ }
+} {64}
+
+# Make sure we actually see a semicolon or end-of-file in the SQL input
+# before executing a command. Thus if "WHERE" is misspelled on an UPDATE,
+# the user won't accidentally update every record.
+#
+do_test misc1-5.1 {
+ catchsql {
+ CREATE TABLE t3(a,b);
+ INSERT INTO t3 VALUES(1,2);
+ INSERT INTO t3 VALUES(3,4);
+ UPDATE t3 SET a=0 WHEREwww b=2;
+ }
+} {1 {near "WHEREwww": syntax error}}
+do_test misc1-5.2 {
+ execsql {
+ SELECT * FROM t3 ORDER BY a;
+ }
+} {1 2 3 4}
+
+# Certain keywords (especially non-standard keywords like "REPLACE") can
+# also be used as identifiers. The way this works in the parser is that
+# the parser first detects a syntax error, the error handling routine
+# sees that the special keyword caused the error, then replaces the keyword
+# with "ID" and tries again.
+#
+# Check the operation of this logic.
+#
+do_test misc1-6.1 {
+ catchsql {
+ CREATE TABLE t4(
+ abort, asc, begin, cluster, conflict, copy, delimiters, desc, end,
+ explain, fail, ignore, key, offset, pragma, replace, temp,
+ vacuum, view
+ );
+ }
+} {0 {}}
+do_test misc1-6.2 {
+ catchsql {
+ INSERT INTO t4
+ VALUES(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19);
+ }
+} {0 {}}
+do_test misc1-6.3 {
+ execsql {
+ SELECT * FROM t4
+ }
+} {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19}
+do_test misc1-6.4 {
+ execsql {
+ SELECT abort+asc,max(key,pragma,temp) FROM t4
+ }
+} {3 17}
+
+# Test for multi-column primary keys, and for multiple primary keys.
+#
+do_test misc1-7.1 {
+ catchsql {
+ CREATE TABLE error1(
+ a TYPE PRIMARY KEY,
+ b TYPE PRIMARY KEY
+ );
+ }
+} {1 {table "error1" has more than one primary key}}
+do_test misc1-7.2 {
+ catchsql {
+ CREATE TABLE error1(
+ a INTEGER PRIMARY KEY,
+ b TYPE PRIMARY KEY
+ );
+ }
+} {1 {table "error1" has more than one primary key}}
+do_test misc1-7.3 {
+ execsql {
+ CREATE TABLE t5(a,b,c,PRIMARY KEY(a,b));
+ INSERT INTO t5 VALUES(1,2,3);
+ SELECT * FROM t5 ORDER BY a;
+ }
+} {1 2 3}
+do_test misc1-7.4 {
+ catchsql {
+ INSERT INTO t5 VALUES(1,2,4);
+ }
+} {1 {columns a, b are not unique}}
+do_test misc1-7.5 {
+ catchsql {
+ INSERT INTO t5 VALUES(0,2,4);
+ }
+} {0 {}}
+do_test misc1-7.6 {
+ execsql {
+ SELECT * FROM t5 ORDER BY a;
+ }
+} {0 2 4 1 2 3}
+
+do_test misc1-8.1 {
+ catchsql {
+ SELECT *;
+ }
+} {1 {no tables specified}}
+do_test misc1-8.2 {
+ catchsql {
+ SELECT t1.*;
+ }
+} {1 {no such table: t1}}
+
+execsql {
+ DROP TABLE t1;
+ DROP TABLE t2;
+ DROP TABLE t3;
+ DROP TABLE t4;
+}
+
+# 64-bit integers are represented exactly.
+#
+do_test misc1-9.1 {
+ catchsql {
+ CREATE TABLE t1(a unique not null, b unique not null);
+ INSERT INTO t1 VALUES('a',1234567890123456789);
+ INSERT INTO t1 VALUES('b',1234567891123456789);
+ INSERT INTO t1 VALUES('c',1234567892123456789);
+ SELECT * FROM t1;
+ }
+} {0 {a 1234567890123456789 b 1234567891123456789 c 1234567892123456789}}
+
+# A WHERE clause is not allowed to contain more than 99 terms. Check to
+# make sure this limit is enforced.
+#
+# 2005-07-16: There is no longer a limit on the number of terms in a
+# WHERE clause. But keep these tests just so that we have some tests
+# that use a large number of terms in the WHERE clause.
+#
+do_test misc1-10.0 {
+ execsql {SELECT count(*) FROM manycol}
+} {9}
+do_test misc1-10.1 {
+ set ::where {WHERE x0>=0}
+ for {set i 1} {$i<=99} {incr i} {
+ append ::where " AND x$i<>0"
+ }
+ catchsql "SELECT count(*) FROM manycol $::where"
+} {0 9}
+do_test misc1-10.2 {
+ catchsql "SELECT count(*) FROM manycol $::where AND rowid>0"
+} {0 9}
+do_test misc1-10.3 {
+ regsub "x0>=0" $::where "x0=0" ::where
+ catchsql "DELETE FROM manycol $::where"
+} {0 {}}
+do_test misc1-10.4 {
+ execsql {SELECT count(*) FROM manycol}
+} {8}
+do_test misc1-10.5 {
+ catchsql "DELETE FROM manycol $::where AND rowid>0"
+} {0 {}}
+do_test misc1-10.6 {
+ execsql {SELECT x1 FROM manycol WHERE x0=100}
+} {101}
+do_test misc1-10.7 {
+ regsub "x0=0" $::where "x0=100" ::where
+ catchsql "UPDATE manycol SET x1=x1+1 $::where"
+} {0 {}}
+do_test misc1-10.8 {
+ execsql {SELECT x1 FROM manycol WHERE x0=100}
+} {102}
+do_test misc1-10.9 {
+ catchsql "UPDATE manycol SET x1=x1+1 $::where AND rowid>0"
+} {0 {}}
+do_test misc1-10.10 {
+ execsql {SELECT x1 FROM manycol WHERE x0=100}
+} {103}
+
+# Make sure the initialization works even if a database is opened while
+# another process has the database locked.
+#
+# Update for v3: The BEGIN doesn't lock the database so the schema is read
+# and the SELECT returns successfully.
+do_test misc1-11.1 {
+ execsql {BEGIN}
+ execsql {UPDATE t1 SET a=0 WHERE 0}
+ sqlite3 db2 test.db
+ set rc [catch {db2 eval {SELECT count(*) FROM t1}} msg]
+ lappend rc $msg
+# v2 result: {1 {database is locked}}
+} {0 3}
+do_test misc1-11.2 {
+ execsql {COMMIT}
+ set rc [catch {db2 eval {SELECT count(*) FROM t1}} msg]
+ db2 close
+ lappend rc $msg
+} {0 3}
+
+# Make sure string comparisons really do compare strings in format4+.
+# Similar tests in the format3.test file show that for format3 and earlier
+# all comparisons were numeric if either operand looked like a number.
+#
+do_test misc1-12.1 {
+ execsql {SELECT '0'=='0.0'}
+} {0}
+do_test misc1-12.2 {
+ execsql {SELECT '0'==0.0}
+} {0}
+do_test misc1-12.3 {
+ execsql {SELECT '12345678901234567890'=='12345678901234567891'}
+} {0}
+do_test misc1-12.4 {
+ execsql {
+ CREATE TABLE t6(a INT UNIQUE, b TEXT UNIQUE);
+ INSERT INTO t6 VALUES('0','0.0');
+ SELECT * FROM t6;
+ }
+} {0 0.0}
+ifcapable conflict {
+ do_test misc1-12.5 {
+ execsql {
+ INSERT OR IGNORE INTO t6 VALUES(0.0,'x');
+ SELECT * FROM t6;
+ }
+ } {0 0.0}
+ do_test misc1-12.6 {
+ execsql {
+ INSERT OR IGNORE INTO t6 VALUES('y',0);
+ SELECT * FROM t6;
+ }
+ } {0 0.0 y 0}
+}
+do_test misc1-12.7 {
+ execsql {
+ CREATE TABLE t7(x INTEGER, y TEXT, z);
+ INSERT INTO t7 VALUES(0,0,1);
+ INSERT INTO t7 VALUES(0.0,0,2);
+ INSERT INTO t7 VALUES(0,0.0,3);
+ INSERT INTO t7 VALUES(0.0,0.0,4);
+ SELECT DISTINCT x, y FROM t7 ORDER BY z;
+ }
+} {0 0 0 0.0}
+do_test misc1-12.8 {
+ execsql {
+ SELECT min(z), max(z), count(z) FROM t7 GROUP BY x ORDER BY 1;
+ }
+} {1 4 4}
+do_test misc1-12.9 {
+ execsql {
+ SELECT min(z), max(z), count(z) FROM t7 GROUP BY y ORDER BY 1;
+ }
+} {1 2 2 3 4 2}
+
+# This used to be an error. But we changed the code so that arbitrary
+# identifiers can be used as a collating sequence. Collation is by text
+# if the identifier contains "text", "blob", or "clob" and is numeric
+# otherwise.
+#
+# Update: In v3, it is an error again.
+#
+#do_test misc1-12.10 {
+# catchsql {
+# SELECT * FROM t6 ORDER BY a COLLATE unknown;
+# }
+#} {0 {0 0 y 0}}
+do_test misc1-12.11 {
+ execsql {
+ CREATE TABLE t8(x TEXT COLLATE numeric, y INTEGER COLLATE text, z);
+ INSERT INTO t8 VALUES(0,0,1);
+ INSERT INTO t8 VALUES(0.0,0,2);
+ INSERT INTO t8 VALUES(0,0.0,3);
+ INSERT INTO t8 VALUES(0.0,0.0,4);
+ SELECT DISTINCT x, y FROM t8 ORDER BY z;
+ }
+} {0 0 0.0 0}
+do_test misc1-12.12 {
+ execsql {
+ SELECT min(z), max(z), count(z) FROM t8 GROUP BY x ORDER BY 1;
+ }
+} {1 3 2 2 4 2}
+do_test misc1-12.13 {
+ execsql {
+ SELECT min(z), max(z), count(z) FROM t8 GROUP BY y ORDER BY 1;
+ }
+} {1 4 4}
+
+# There was a problem with realloc() in the OP_MemStore operation of
+# the VDBE. A buffer was being reallocated but some pointers into
+# the old copy of the buffer were not being moved over to the new copy.
+# The following code tests for the problem.
+#
+ifcapable subquery {
+ do_test misc1-13.1 {
+ execsql {
+ CREATE TABLE t9(x,y);
+ INSERT INTO t9 VALUES('one',1);
+ INSERT INTO t9 VALUES('two',2);
+ INSERT INTO t9 VALUES('three',3);
+ INSERT INTO t9 VALUES('four',4);
+ INSERT INTO t9 VALUES('five',5);
+ INSERT INTO t9 VALUES('six',6);
+ INSERT INTO t9 VALUES('seven',7);
+ INSERT INTO t9 VALUES('eight',8);
+ INSERT INTO t9 VALUES('nine',9);
+ INSERT INTO t9 VALUES('ten',10);
+ INSERT INTO t9 VALUES('eleven',11);
+ SELECT y FROM t9
+ WHERE x=(SELECT x FROM t9 WHERE y=1)
+ OR x=(SELECT x FROM t9 WHERE y=2)
+ OR x=(SELECT x FROM t9 WHERE y=3)
+ OR x=(SELECT x FROM t9 WHERE y=4)
+ OR x=(SELECT x FROM t9 WHERE y=5)
+ OR x=(SELECT x FROM t9 WHERE y=6)
+ OR x=(SELECT x FROM t9 WHERE y=7)
+ OR x=(SELECT x FROM t9 WHERE y=8)
+ OR x=(SELECT x FROM t9 WHERE y=9)
+ OR x=(SELECT x FROM t9 WHERE y=10)
+ OR x=(SELECT x FROM t9 WHERE y=11)
+ OR x=(SELECT x FROM t9 WHERE y=12)
+ OR x=(SELECT x FROM t9 WHERE y=13)
+ OR x=(SELECT x FROM t9 WHERE y=14)
+ ;
+ }
+ } {1 2 3 4 5 6 7 8 9 10 11}
+}
+
+# Make sure a database connection still works after changing the
+# working directory.
+#
+do_test misc1-14.1 {
+ file mkdir tempdir
+ cd tempdir
+ execsql {BEGIN}
+ file exists ./test.db-journal
+} {0}
+do_test misc1-14.2 {
+ execsql {UPDATE t1 SET a=0 WHERE 0}
+ file exists ../test.db-journal
+} {1}
+do_test misc1-14.3 {
+ cd ..
+ file delete tempdir
+ execsql {COMMIT}
+ file exists ./test.db-journal
+} {0}
+
+# A failed create table should not leave the table in the internal
+# data structures. Ticket #238.
+#
+do_test misc1-15.1.1 {
+ catchsql {
+ CREATE TABLE t10 AS SELECT c1;
+ }
+} {1 {no such column: c1}}
+do_test misc1-15.1.2 {
+ catchsql {
+ CREATE TABLE t10 AS SELECT t9.c1;
+ }
+} {1 {no such column: t9.c1}}
+do_test misc1-15.1.3 {
+ catchsql {
+ CREATE TABLE t10 AS SELECT main.t9.c1;
+ }
+} {1 {no such column: main.t9.c1}}
+do_test misc1-15.2 {
+ catchsql {
+ CREATE TABLE t10 AS SELECT 1;
+ }
+ # The bug in ticket #238 caused the statement above to fail with
+ # the error "table t10 already exists"
+} {0 {}}
+
+# Test for memory leaks when a CREATE TABLE containing a primary key
+# fails. Ticket #249.
+#
+do_test misc1-16.1 {
+ catchsql {SELECT name FROM sqlite_master LIMIT 1}
+ catchsql {
+ CREATE TABLE test(a integer, primary key(a));
+ }
+} {0 {}}
+do_test misc1-16.2 {
+ catchsql {
+ CREATE TABLE test(a integer, primary key(a));
+ }
+} {1 {table test already exists}}
+do_test misc1-16.3 {
+ catchsql {
+ CREATE TABLE test2(a text primary key, b text, primary key(a,b));
+ }
+} {1 {table "test2" has more than one primary key}}
+do_test misc1-16.4 {
+ execsql {
+ INSERT INTO test VALUES(1);
+ SELECT rowid, a FROM test;
+ }
+} {1 1}
+do_test misc1-16.5 {
+ execsql {
+ INSERT INTO test VALUES(5);
+ SELECT rowid, a FROM test;
+ }
+} {1 1 5 5}
+do_test misc1-16.6 {
+ execsql {
+ INSERT INTO test VALUES(NULL);
+ SELECT rowid, a FROM test;
+ }
+} {1 1 5 5 6 6}
+
+ifcapable trigger&&tempdb {
+# Ticket #333: Temp triggers that modify persistent tables.
+#
+do_test misc1-17.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE RealTable(TestID INTEGER PRIMARY KEY, TestString TEXT);
+ CREATE TEMP TABLE TempTable(TestID INTEGER PRIMARY KEY, TestString TEXT);
+ CREATE TEMP TRIGGER trigTest_1 AFTER UPDATE ON TempTable BEGIN
+ INSERT INTO RealTable(TestString)
+ SELECT new.TestString FROM TempTable LIMIT 1;
+ END;
+ INSERT INTO TempTable(TestString) VALUES ('1');
+ INSERT INTO TempTable(TestString) VALUES ('2');
+ UPDATE TempTable SET TestString = TestString + 1 WHERE TestID=1 OR TestId=2;
+ COMMIT;
+ SELECT TestString FROM RealTable ORDER BY 1;
+ }
+} {2 3}
+}
+
+do_test misc1-18.1 {
+ set n [sqlite3_sleep 100]
+ expr {$n>=100}
+} {1}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/misc2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/misc2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,333 @@
+# 2003 June 21
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for miscellaneous features that were
+# left out of other test files.
+#
+# $Id: misc2.test,v 1.26 2006/09/29 14:01:07 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable {trigger} {
+# Test for ticket #360
+#
+do_test misc2-1.1 {
+ catchsql {
+ CREATE TABLE FOO(bar integer);
+ CREATE TRIGGER foo_insert BEFORE INSERT ON foo BEGIN
+ SELECT CASE WHEN (NOT new.bar BETWEEN 0 AND 20)
+ THEN raise(rollback, 'aiieee') END;
+ END;
+ INSERT INTO foo(bar) VALUES (1);
+ }
+} {0 {}}
+do_test misc2-1.2 {
+ catchsql {
+ INSERT INTO foo(bar) VALUES (111);
+ }
+} {1 aiieee}
+} ;# endif trigger
+
+# Make sure ROWID works on a view and a subquery. Ticket #364
+#
+do_test misc2-2.1 {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(1,2,3);
+ CREATE TABLE t2(a,b,c);
+ INSERT INTO t2 VALUES(7,8,9);
+ }
+} {}
+ifcapable subquery {
+ do_test misc2-2.2 {
+ execsql {
+ SELECT rowid, * FROM (SELECT * FROM t1, t2);
+ }
+ } {{} 1 2 3 7 8 9}
+}
+ifcapable view {
+ do_test misc2-2.3 {
+ execsql {
+ CREATE VIEW v1 AS SELECT * FROM t1, t2;
+ SELECT rowid, * FROM v1;
+ }
+ } {{} 1 2 3 7 8 9}
+} ;# ifcapable view
+
+# Ticket #2002 and #1952.
+ifcapable subquery {
+ do_test misc2-2.4 {
+ execsql2 {
+ SELECT * FROM (SELECT a, b AS 'a', c AS 'a', 4 AS 'a' FROM t1)
+ }
+ } {a 1 a:1 2 a:2 3 a:3 4}
+}
+
+# Check name binding precedence. Ticket #387
+#
+do_test misc2-3.1 {
+ catchsql {
+ SELECT t1.b+t2.b AS a, t1.a, t2.a FROM t1, t2 WHERE a==10
+ }
+} {1 {ambiguous column name: a}}
+
+# Make sure 32-bit integer overflow is handled properly in queries.
+# ticket #408
+#
+do_test misc2-4.1 {
+ execsql {
+ INSERT INTO t1 VALUES(4000000000,'a','b');
+ SELECT a FROM t1 WHERE a>1;
+ }
+} {4000000000}
+do_test misc2-4.2 {
+ execsql {
+ INSERT INTO t1 VALUES(2147483648,'b2','c2');
+ INSERT INTO t1 VALUES(2147483647,'b3','c3');
+ SELECT a FROM t1 WHERE a>2147483647;
+ }
+} {4000000000 2147483648}
+do_test misc2-4.3 {
+ execsql {
+ SELECT a FROM t1 WHERE a<2147483648;
+ }
+} {1 2147483647}
+do_test misc2-4.4 {
+ execsql {
+ SELECT a FROM t1 WHERE a<=2147483648;
+ }
+} {1 2147483648 2147483647}
+do_test misc2-4.5 {
+ execsql {
+ SELECT a FROM t1 WHERE a<10000000000;
+ }
+} {1 4000000000 2147483648 2147483647}
+do_test misc2-4.6 {
+ execsql {
+ SELECT a FROM t1 WHERE a<1000000000000 ORDER BY 1;
+ }
+} {1 2147483647 2147483648 4000000000}
+
+# There were some issues with expanding a SrcList object using a call
+# to sqliteSrcListAppend() if the SrcList had previously been duplicated
+# using a call to sqliteSrcListDup(). Ticket #416. The following test
+# makes sure the problem has been fixed.
+#
+ifcapable view {
+do_test misc2-5.1 {
+ execsql {
+ CREATE TABLE x(a,b);
+ CREATE VIEW y AS
+ SELECT x1.b AS p, x2.b AS q FROM x AS x1, x AS x2 WHERE x1.a=x2.a;
+ CREATE VIEW z AS
+ SELECT y1.p, y2.p FROM y AS y1, y AS y2 WHERE y1.q=y2.q;
+ SELECT * from z;
+ }
+} {}
+}
+
+# Make sure we can open a database with an empty filename. What this
+# does is store the database in a temporary file that is deleted when
+# the database is closed. Ticket #432.
+#
+do_test misc2-6.1 {
+ db close
+ sqlite3 db {}
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ SELECT * FROM t1;
+ }
+} {1 2}
+
+# Make sure we get an error message (not a segfault) on an attempt to
+# update a table from within the callback of a select on that same
+# table.
+#
+# 2006-08-16: This has changed. It is now permitted to update
+# the table being SELECTed from within the callback of the query.
+#
+do_test misc2-7.1 {
+ db close
+ file delete -force test.db
+ sqlite3 db test.db
+ execsql {
+ CREATE TABLE t1(x);
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 VALUES(3);
+ SELECT * FROM t1;
+ }
+} {1 2 3}
+do_test misc2-7.2 {
+ set rc [catch {
+ db eval {SELECT rowid FROM t1} {} {
+ db eval "DELETE FROM t1 WHERE rowid=$rowid"
+ }
+ } msg]
+ lappend rc $msg
+} {0 {}}
+do_test misc2-7.3 {
+ execsql {SELECT * FROM t1}
+} {}
+do_test misc2-7.4 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 VALUES(3);
+ INSERT INTO t1 VALUES(4);
+ }
+ db eval {SELECT rowid, x FROM t1} {
+ if {$x & 1} {
+ db eval {DELETE FROM t1 WHERE rowid=$rowid}
+ }
+ }
+ execsql {SELECT * FROM t1}
+} {2 4}
+do_test misc2-7.5 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 VALUES(3);
+ INSERT INTO t1 VALUES(4);
+ }
+ db eval {SELECT rowid, x FROM t1} {
+ if {$x & 1} {
+ db eval {DELETE FROM t1 WHERE rowid=$rowid+1}
+ }
+ }
+ execsql {SELECT * FROM t1}
+} {1 3}
+do_test misc2-7.6 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 VALUES(3);
+ INSERT INTO t1 VALUES(4);
+ }
+ db eval {SELECT rowid, x FROM t1} {
+ if {$x & 1} {
+ db eval {DELETE FROM t1}
+ }
+ }
+ execsql {SELECT * FROM t1}
+} {}
+do_test misc2-7.7 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 VALUES(3);
+ INSERT INTO t1 VALUES(4);
+ }
+ db eval {SELECT rowid, x FROM t1} {
+ if {$x & 1} {
+ db eval {UPDATE t1 SET x=x+100 WHERE rowid=$rowid}
+ }
+ }
+ execsql {SELECT * FROM t1}
+} {101 2 103 4}
+do_test misc2-7.8 {
+ execsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1);
+ }
+ db eval {SELECT rowid, x FROM t1} {
+ if {$x<10} {
+ db eval {INSERT INTO t1 VALUES($x+1)}
+ }
+ }
+ execsql {SELECT * FROM t1}
+} {1 2 3 4 5 6 7 8 9 10}
+
+db close
+file delete -force test.db
+sqlite3 db test.db
+
+# Ticket #453. If the SQL ended with "-", the tokenizer was calling that
+# an incomplete token, which caused problems. The solution was to just call
+# it a minus sign.
+#
+do_test misc2-8.1 {
+ catchsql {-}
+} {1 {near "-": syntax error}}
+
+# Ticket #513. Make sure the VDBE stack does not grow on a 3-way join.
+#
+ifcapable tempdb {
+ do_test misc2-9.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE counts(n INTEGER PRIMARY KEY);
+ INSERT INTO counts VALUES(0);
+ INSERT INTO counts VALUES(1);
+ INSERT INTO counts SELECT n+2 FROM counts;
+ INSERT INTO counts SELECT n+4 FROM counts;
+ INSERT INTO counts SELECT n+8 FROM counts;
+ COMMIT;
+
+ CREATE TEMP TABLE x AS
+ SELECT dim1.n, dim2.n, dim3.n
+ FROM counts AS dim1, counts AS dim2, counts AS dim3
+ WHERE dim1.n<10 AND dim2.n<10 AND dim3.n<10;
+
+ SELECT count(*) FROM x;
+ }
+ } {1000}
+ do_test misc2-9.2 {
+ execsql {
+ DROP TABLE x;
+ CREATE TEMP TABLE x AS
+ SELECT dim1.n, dim2.n, dim3.n
+ FROM counts AS dim1, counts AS dim2, counts AS dim3
+ WHERE dim1.n>=6 AND dim2.n>=6 AND dim3.n>=6;
+
+ SELECT count(*) FROM x;
+ }
+ } {1000}
+ do_test misc2-9.3 {
+ execsql {
+ DROP TABLE x;
+ CREATE TEMP TABLE x AS
+ SELECT dim1.n, dim2.n, dim3.n, dim4.n
+ FROM counts AS dim1, counts AS dim2, counts AS dim3, counts AS dim4
+ WHERE dim1.n<5 AND dim2.n<5 AND dim3.n<5 AND dim4.n<5;
+
+ SELECT count(*) FROM x;
+ }
+ } [expr 5*5*5*5]
+}
+
+# Ticket #1229. Sometimes when a "NEW.X" appears in a SELECT without
+# a FROM clause deep within a trigger, the code generator is unable to
+# trace the NEW.X back to an original table and thus figure out its
+# declared datatype.
+#
+# The SQL code below was causing a segfault.
+#
+ifcapable subquery&&trigger {
+ do_test misc2-10.1 {
+ execsql {
+ CREATE TABLE t1229(x);
+ CREATE TRIGGER r1229 BEFORE INSERT ON t1229 BEGIN
+ INSERT INTO t1229 SELECT y FROM (SELECT new.x y);
+ END;
+ INSERT INTO t1229 VALUES(1);
+ }
+ } {}
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/misc3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/misc3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,317 @@
+# 2003 December 17
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for miscellaneous features that were
+# left out of other test files.
+#
+# $Id: misc3.test,v 1.16 2005/01/21 03:12:16 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable {integrityck} {
+ # Ticket #529. Make sure an ABORT does not damage the in-memory cache
+ # that will be used by subsequent statements in the same transaction.
+ #
+ do_test misc3-1.1 {
+ execsql {
+ CREATE TABLE t1(a UNIQUE,b);
+ INSERT INTO t1
+ VALUES(1,'a23456789_b23456789_c23456789_d23456789_e23456789_');
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ UPDATE t1 SET b=b||b;
+ INSERT INTO t1 VALUES(2,'x');
+ UPDATE t1 SET b=substr(b,1,500);
+ BEGIN;
+ }
+ catchsql {UPDATE t1 SET a=CASE a WHEN 2 THEN 1 ELSE a END, b='y';}
+ execsql {
+ CREATE TABLE t2(x,y);
+ COMMIT;
+ PRAGMA integrity_check;
+ }
+ } ok
+}
+ifcapable {integrityck} {
+ do_test misc3-1.2 {
+ execsql {
+ DROP TABLE t1;
+ DROP TABLE t2;
+ }
+ ifcapable {vacuum} {execsql VACUUM}
+ execsql {
+ CREATE TABLE t1(a UNIQUE,b);
+ INSERT INTO t1
+ VALUES(1,'a23456789_b23456789_c23456789_d23456789_e23456789_');
+ INSERT INTO t1 SELECT a+1, b||b FROM t1;
+ INSERT INTO t1 SELECT a+2, b||b FROM t1;
+ INSERT INTO t1 SELECT a+4, b FROM t1;
+ INSERT INTO t1 SELECT a+8, b FROM t1;
+ INSERT INTO t1 SELECT a+16, b FROM t1;
+ INSERT INTO t1 SELECT a+32, b FROM t1;
+ INSERT INTO t1 SELECT a+64, b FROM t1;
+ BEGIN;
+ }
+ catchsql {UPDATE t1 SET a=CASE a WHEN 128 THEN 127 ELSE a END, b='';}
+ execsql {
+ INSERT INTO t1 VALUES(200,'hello out there');
+ COMMIT;
+ PRAGMA integrity_check;
+ }
+ } ok
+}
+
+# Tests of the sqliteAtoF() function in util.c
+#
+do_test misc3-2.1 {
+ execsql {SELECT 2e-25*0.5e25}
+} 1.0
+do_test misc3-2.2 {
+ execsql {SELECT 2.0e-25*000000.500000000000000000000000000000e+00025}
+} 1.0
+do_test misc3-2.3 {
+ execsql {SELECT 000000000002e-0000000025*0.5e25}
+} 1.0
+do_test misc3-2.4 {
+ execsql {SELECT 2e-25*0.5e250}
+} 1e+225
+do_test misc3-2.5 {
+ execsql {SELECT 2.0e-250*0.5e25}
+} 1e-225
+do_test misc3-2.6 {
+ execsql {SELECT '-2.0e-127' * '-0.5e27'}
+} 1e-100
+do_test misc3-2.7 {
+ execsql {SELECT '+2.0e-127' * '-0.5e27'}
+} -1e-100
+do_test misc3-2.8 {
+ execsql {SELECT 2.0e-27 * '+0.5e+127'}
+} 1e+100
+do_test misc3-2.9 {
+ execsql {SELECT 2.0e-27 * '+0.000005e+132'}
+} 1e+100
+
+# Ticket #522. Make sure integer overflow is handled properly in
+# indices.
+#
+integrity_check misc3-3.1
+do_test misc3-3.2 {
+ execsql {
+ CREATE TABLE t2(a INT UNIQUE);
+ }
+} {}
+integrity_check misc3-3.2.1
+do_test misc3-3.3 {
+ execsql {
+ INSERT INTO t2 VALUES(2147483648);
+ }
+} {}
+integrity_check misc3-3.3.1
+do_test misc3-3.4 {
+ execsql {
+ INSERT INTO t2 VALUES(-2147483649);
+ }
+} {}
+integrity_check misc3-3.4.1
+do_test misc3-3.5 {
+ execsql {
+ INSERT INTO t2 VALUES(+2147483649);
+ }
+} {}
+integrity_check misc3-3.5.1
+do_test misc3-3.6 {
+ execsql {
+ INSERT INTO t2 VALUES(+2147483647);
+ INSERT INTO t2 VALUES(-2147483648);
+ INSERT INTO t2 VALUES(-2147483647);
+ INSERT INTO t2 VALUES(2147483646);
+ SELECT * FROM t2 ORDER BY a;
+ }
+} {-2147483649 -2147483648 -2147483647 2147483646 2147483647 2147483648 2147483649}
+do_test misc3-3.7 {
+ execsql {
+ SELECT * FROM t2 WHERE a>=-2147483648 ORDER BY a;
+ }
+} {-2147483648 -2147483647 2147483646 2147483647 2147483648 2147483649}
+do_test misc3-3.8 {
+ execsql {
+ SELECT * FROM t2 WHERE a>-2147483648 ORDER BY a;
+ }
+} {-2147483647 2147483646 2147483647 2147483648 2147483649}
+do_test misc3-3.9 {
+ execsql {
+ SELECT * FROM t2 WHERE a>-2147483649 ORDER BY a;
+ }
+} {-2147483648 -2147483647 2147483646 2147483647 2147483648 2147483649}
+do_test misc3-3.10 {
+ execsql {
+ SELECT * FROM t2 WHERE a>=0 AND a<2147483649 ORDER BY a DESC;
+ }
+} {2147483648 2147483647 2147483646}
+do_test misc3-3.11 {
+ execsql {
+ SELECT * FROM t2 WHERE a>=0 AND a<=2147483648 ORDER BY a DESC;
+ }
+} {2147483648 2147483647 2147483646}
+do_test misc3-3.12 {
+ execsql {
+ SELECT * FROM t2 WHERE a>=0 AND a<2147483648 ORDER BY a DESC;
+ }
+} {2147483647 2147483646}
+do_test misc3-3.13 {
+ execsql {
+ SELECT * FROM t2 WHERE a>=0 AND a<=2147483647 ORDER BY a DESC;
+ }
+} {2147483647 2147483646}
+do_test misc3-3.14 {
+ execsql {
+ SELECT * FROM t2 WHERE a>=0 AND a<2147483647 ORDER BY a DESC;
+ }
+} {2147483646}
+
+# Ticket #565. A stack overflow was occurring when the subquery to the
+# right of an IN operator contains many NULLs.
+#
+do_test misc3-4.1 {
+ execsql {
+ CREATE TABLE t3(a INTEGER PRIMARY KEY, b);
+ INSERT INTO t3(b) VALUES('abc');
+ INSERT INTO t3(b) VALUES('xyz');
+ INSERT INTO t3(b) VALUES(NULL);
+ INSERT INTO t3(b) VALUES(NULL);
+ INSERT INTO t3(b) SELECT b||'d' FROM t3;
+ INSERT INTO t3(b) SELECT b||'e' FROM t3;
+ INSERT INTO t3(b) SELECT b||'f' FROM t3;
+ INSERT INTO t3(b) SELECT b||'g' FROM t3;
+ INSERT INTO t3(b) SELECT b||'h' FROM t3;
+ SELECT count(a), count(b) FROM t3;
+ }
+} {128 64}
+ifcapable subquery {
+do_test misc3-4.2 {
+ execsql {
+ SELECT count(a) FROM t3 WHERE b IN (SELECT b FROM t3);
+ }
+ } {64}
+ do_test misc3-4.3 {
+ execsql {
+ SELECT count(a) FROM t3 WHERE b IN (SELECT b FROM t3 ORDER BY a+1);
+ }
+ } {64}
+}
+
+# Ticket #601: Putting a left join inside "SELECT * FROM (<join-here>)"
+# gives different results than if the outer "SELECT * FROM ..." is omitted.
+#
+ifcapable subquery {
+ do_test misc3-5.1 {
+ execsql {
+ CREATE TABLE x1 (b, c);
+ INSERT INTO x1 VALUES('dog',3);
+ INSERT INTO x1 VALUES('cat',1);
+ INSERT INTO x1 VALUES('dog',4);
+ CREATE TABLE x2 (c, e);
+ INSERT INTO x2 VALUES(1,'one');
+ INSERT INTO x2 VALUES(2,'two');
+ INSERT INTO x2 VALUES(3,'three');
+ INSERT INTO x2 VALUES(4,'four');
+ SELECT x2.c AS c, e, b FROM x2 LEFT JOIN
+ (SELECT b, max(c)+0 AS c FROM x1 GROUP BY b)
+ USING(c);
+ }
+ } {1 one cat 2 two {} 3 three {} 4 four dog}
+ do_test misc3-5.2 {
+ execsql {
+ SELECT * FROM (
+ SELECT x2.c AS c, e, b FROM x2 LEFT JOIN
+ (SELECT b, max(c)+0 AS c FROM x1 GROUP BY b)
+ USING(c)
+ );
+ }
+ } {1 one cat 2 two {} 3 three {} 4 four dog}
+}
+
+ifcapable {explain} {
+ # Ticket #626: make sure EXPLAIN prevents the BEGIN and COMMIT statements
+ # it explains from actually executing.
+ #
+ do_test misc3-6.1 {
+ execsql {EXPLAIN BEGIN}
+ catchsql {BEGIN}
+ } {0 {}}
+ do_test misc3-6.2 {
+ execsql {EXPLAIN COMMIT}
+ catchsql {COMMIT}
+ } {0 {}}
+ do_test misc3-6.3 {
+ execsql {BEGIN; EXPLAIN ROLLBACK}
+ catchsql {ROLLBACK}
+ } {0 {}}
+}
+
+ifcapable {trigger} {
+# Ticket #640: vdbe stack overflow with a LIMIT clause on a SELECT inside
+# of a trigger.
+#
+do_test misc3-7.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE y1(a);
+ CREATE TABLE y2(b);
+ CREATE TABLE y3(c);
+ CREATE TRIGGER r1 AFTER DELETE ON y1 FOR EACH ROW BEGIN
+ INSERT INTO y3(c) SELECT b FROM y2 ORDER BY b LIMIT 1;
+ END;
+ INSERT INTO y1 VALUES(1);
+ INSERT INTO y1 VALUES(2);
+ INSERT INTO y1 SELECT a+2 FROM y1;
+ INSERT INTO y1 SELECT a+4 FROM y1;
+ INSERT INTO y1 SELECT a+8 FROM y1;
+ INSERT INTO y1 SELECT a+16 FROM y1;
+ INSERT INTO y2 SELECT a FROM y1;
+ COMMIT;
+ SELECT count(*) FROM y1;
+ }
+} 32
+do_test misc3-7.2 {
+ execsql {
+ DELETE FROM y1;
+ SELECT count(*) FROM y1;
+ }
+} 0
+do_test misc3-7.3 {
+ execsql {
+ SELECT count(*) FROM y3;
+ }
+} 32
+} ;# endif trigger
+
+# Ticket #668: VDBE stack overflow occurs when the left-hand side
+# of an IN expression is NULL and the result is used as an integer, not
+# as a jump.
+#
+ifcapable subquery {
+ do_test misc-8.1 {
+ execsql {
+ SELECT count(CASE WHEN b IN ('abc','xyz') THEN 'x' END) FROM t3
+ }
+ } {2}
+ do_test misc-8.2 {
+ execsql {
+ SELECT count(*) FROM t3 WHERE 1+(b IN ('abc','xyz'))==2
+ }
+ } {2}
+}
+
+finish_test
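
The misc3-8.x tests above depend on SQL three-valued logic: when b is NULL, the expression b IN ('abc','xyz') evaluates to NULL, so it satisfies neither the CASE branch nor the arithmetic comparison, and only the two non-NULL matches are counted. A minimal standalone sketch of that behavior, written in the same tester.tcl style; the table and test names are illustrative and not part of the imported file:

# Hypothetical sketch (not part of the imported misc3.test): a NULL left
# operand makes IN evaluate to NULL, which CASE treats as not matched.
do_test misc3-sketch-null-in {
  execsql {
    CREATE TABLE sketch(v);
    INSERT INTO sketch VALUES('abc');
    INSERT INTO sketch VALUES(NULL);
    SELECT count(*),
           count(CASE WHEN v IN ('abc','xyz') THEN 'x' END)
      FROM sketch;
  }
} {2 1}
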
Added: freeswitch/trunk/libs/sqlite/test/misc4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/misc4.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,185 @@
+# 2004 Jun 27
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for miscellaneous features that were
+# left out of other test files.
+#
+# $Id: misc4.test,v 1.21 2006/01/03 00:33:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Prepare a statement that will create a temporary table. Then do
+# a rollback. Then try to execute the prepared statement.
+#
+do_test misc4-1.1 {
+ set DB [sqlite3_connection_pointer db]
+ execsql {
+ CREATE TABLE t1(x);
+ INSERT INTO t1 VALUES(1);
+ }
+} {}
+
+ifcapable tempdb {
+ do_test misc4-1.2 {
+ set sql {CREATE TEMP TABLE t2 AS SELECT * FROM t1}
+ set stmt [sqlite3_prepare $DB $sql -1 TAIL]
+ execsql {
+ BEGIN;
+ CREATE TABLE t3(a,b,c);
+ INSERT INTO t1 SELECT * FROM t1;
+ ROLLBACK;
+ }
+ } {}
+ do_test misc4-1.3 {
+ sqlite3_step $stmt
+ } SQLITE_DONE
+ do_test misc4-1.4 {
+ execsql {
+ SELECT * FROM temp.t2;
+ }
+ } {1}
+
+ # Drop the temporary table, then rerun the prepared statement to
+  # recreate it. This reproduces ticket #807.
+ #
+ do_test misc4-1.5 {
+ execsql {DROP TABLE t2}
+ sqlite3_reset $stmt
+ sqlite3_step $stmt
+ } {SQLITE_ERROR}
+ do_test misc4-1.6 {
+ sqlite3_finalize $stmt
+ } {SQLITE_SCHEMA}
+}
+
+# Prepare but do not execute various CREATE statements. Then before
+# those statements are executed, try to use the tables, indices, views,
+# and triggers that were created.
+#
+do_test misc4-2.1 {
+ set stmt [sqlite3_prepare $DB {CREATE TABLE t3(x);} -1 TAIL]
+ catchsql {
+ INSERT INTO t3 VALUES(1);
+ }
+} {1 {no such table: t3}}
+do_test misc4-2.2 {
+ sqlite3_step $stmt
+} SQLITE_DONE
+do_test misc4-2.3 {
+ sqlite3_finalize $stmt
+} SQLITE_OK
+do_test misc4-2.4 {
+ catchsql {
+ INSERT INTO t3 VALUES(1);
+ }
+} {0 {}}
+
+# Ticket #966
+#
+ifcapable compound {
+do_test misc4-3.1 {
+ execsql {
+ CREATE TABLE Table1(ID integer primary key, Value TEXT);
+ INSERT INTO Table1 VALUES(1, 'x');
+ CREATE TABLE Table2(ID integer NOT NULL, Value TEXT);
+ INSERT INTO Table2 VALUES(1, 'z');
+ INSERT INTO Table2 VALUES (1, 'a');
+ SELECT ID, Value FROM Table1
+ UNION SELECT ID, max(Value) FROM Table2 GROUP BY 1
+ ORDER BY 1, 2;
+ }
+} {1 x 1 z}
+do_test misc4-3.2 {
+ catchsql {
+ SELECT ID, Value FROM Table1
+ UNION SELECT ID, max(Value) FROM Table2 GROUP BY 1, 2
+ ORDER BY 1, 2;
+ }
+} {1 {aggregate functions are not allowed in the GROUP BY clause}}
+} ;# ifcapable compound
+
+# Ticket #1047. Make sure column types are preserved in subqueries.
+#
+ifcapable subquery {
+ do_test misc4-4.1 {
+ execsql {
+ create table a(key varchar, data varchar);
+ create table b(key varchar, period integer);
+ insert into a values('01','data01');
+ insert into a values('+1','data+1');
+
+ insert into b values ('01',1);
+ insert into b values ('01',2);
+ insert into b values ('+1',3);
+ insert into b values ('+1',4);
+
+ select a.*, x.*
+ from a, (select key,sum(period) from b group by key) as x
+ where a.key=x.key;
+ }
+ } {01 data01 01 3 +1 data+1 +1 7}
+
+ # This test case tests the same property as misc4-4.1, but it is
+  # a bit smaller, which makes it easier to work with while debugging.
+ do_test misc4-4.2 {
+ execsql {
+ CREATE TABLE ab(a TEXT, b TEXT);
+ INSERT INTO ab VALUES('01', '1');
+ }
+ execsql {
+ select * from ab, (select b from ab) as x where x.b = ab.a;
+ }
+ } {}
+}
+
+
+# Ticket #1036. When creating tables from a SELECT on a view, use the
+# short names of columns.
+#
+ifcapable view {
+ do_test misc4-5.1 {
+ execsql {
+ create table t4(a,b);
+ create table t5(a,c);
+ insert into t4 values (1,2);
+ insert into t5 values (1,3);
+ create view myview as select t4.a a from t4 inner join t5 on t4.a=t5.a;
+ create table problem as select * from myview;
+ }
+ execsql2 {
+ select * FROM problem;
+ }
+ } {a 1}
+ do_test misc4-5.2 {
+ execsql2 {
+ create table t6 as select * from t4, t5;
+ select * from t6;
+ }
+ } {a 1 b 2 a:1 1 c 3}
+}
+
+# Ticket #1086
+do_test misc4-6.1 {
+ execsql {
+ CREATE TABLE abc(a);
+ INSERT INTO abc VALUES(1);
+ CREATE TABLE def(d, e, f, PRIMARY KEY(d, e));
+ }
+} {}
+do_test misc4-6.2 {
+ execsql {
+ SELECT a FROM abc LEFT JOIN def ON (abc.a=def.d);
+ }
+} {1}
+
+finish_test
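
Ticket #1036 above is about column naming: a table created with CREATE TABLE ... AS SELECT over a view should carry the short column name a, not a qualified or aliased form. A minimal variation, assuming the execsql2 helper from tester.tcl returns alternating column names and values as it does in misc4-5.x; the object names are illustrative and not part of the imported file:

# Hypothetical sketch (not part of the imported misc4.test): the copied
# table should expose the short column name "a".
do_test misc4-sketch-shortname {
  execsql {
    CREATE TABLE src(a, b);
    INSERT INTO src VALUES(7, 8);
    CREATE VIEW vw AS SELECT src.a AS a FROM src;
    CREATE TABLE copied AS SELECT * FROM vw;
  }
  execsql2 {SELECT * FROM copied}
} {a 7}
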
Added: freeswitch/trunk/libs/sqlite/test/misc5.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/misc5.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,618 @@
+# 2005 Mar 16
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for miscellaneous features that were
+# left out of other test files.
+#
+# $Id: misc5.test,v 1.15 2006/08/12 12:33:15 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Build records using the MakeRecord opcode such that the size of the
+# header is at the transition point in the size of a varint.
+#
+# This test causes an assertion failure or a buffer overrun in version
+# 3.1.5 and earlier.
+#
+for {set i 120} {$i<140} {incr i} {
+ do_test misc5-1.$i {
+ catchsql {DROP TABLE t1}
+ set sql1 {CREATE TABLE t1}
+ set sql2 {INSERT INTO t1 VALUES}
+ set sep (
+ for {set j 0} {$j<$i} {incr j} {
+ append sql1 ${sep}a$j
+ append sql2 ${sep}$j
+ set sep ,
+ }
+ append sql1 {);}
+ append sql2 {);}
+ execsql $sql1$sql2
+ } {}
+}
+
+# Make sure large integers are stored correctly.
+#
+ifcapable conflict {
+ do_test misc5-2.1 {
+ execsql {
+ create table t2(x unique);
+ insert into t2 values(1);
+ insert or ignore into t2 select x*2 from t2;
+ insert or ignore into t2 select x*4 from t2;
+ insert or ignore into t2 select x*16 from t2;
+ insert or ignore into t2 select x*256 from t2;
+ insert or ignore into t2 select x*65536 from t2;
+ insert or ignore into t2 select x*2147483648 from t2;
+ insert or ignore into t2 select x-1 from t2;
+ insert or ignore into t2 select x+1 from t2;
+ insert or ignore into t2 select -x from t2;
+ select count(*) from t2;
+ }
+ } 371
+} else {
+ do_test misc5-2.1 {
+ execsql {
+ BEGIN;
+ create table t2(x unique);
+ create table t2_temp(x);
+ insert into t2_temp values(1);
+ insert into t2_temp select x*2 from t2_temp;
+ insert into t2_temp select x*4 from t2_temp;
+ insert into t2_temp select x*16 from t2_temp;
+ insert into t2_temp select x*256 from t2_temp;
+ insert into t2_temp select x*65536 from t2_temp;
+ insert into t2_temp select x*2147483648 from t2_temp;
+ insert into t2_temp select x-1 from t2_temp;
+ insert into t2_temp select x+1 from t2_temp;
+ insert into t2_temp select -x from t2_temp;
+ INSERT INTO t2 SELECT DISTINCT(x) FROM t2_temp;
+ DROP TABLE t2_temp;
+ COMMIT;
+ select count(*) from t2;
+ }
+ } 371
+}
+do_test misc5-2.2 {
+ execsql {
+ select x from t2 order by x;
+ }
+} \
+"-4611686018427387905\
+-4611686018427387904\
+-4611686018427387903\
+-2305843009213693953\
+-2305843009213693952\
+-2305843009213693951\
+-1152921504606846977\
+-1152921504606846976\
+-1152921504606846975\
+-576460752303423489\
+-576460752303423488\
+-576460752303423487\
+-288230376151711745\
+-288230376151711744\
+-288230376151711743\
+-144115188075855873\
+-144115188075855872\
+-144115188075855871\
+-72057594037927937\
+-72057594037927936\
+-72057594037927935\
+-36028797018963969\
+-36028797018963968\
+-36028797018963967\
+-18014398509481985\
+-18014398509481984\
+-18014398509481983\
+-9007199254740993\
+-9007199254740992\
+-9007199254740991\
+-4503599627370497\
+-4503599627370496\
+-4503599627370495\
+-2251799813685249\
+-2251799813685248\
+-2251799813685247\
+-1125899906842625\
+-1125899906842624\
+-1125899906842623\
+-562949953421313\
+-562949953421312\
+-562949953421311\
+-281474976710657\
+-281474976710656\
+-281474976710655\
+-140737488355329\
+-140737488355328\
+-140737488355327\
+-70368744177665\
+-70368744177664\
+-70368744177663\
+-35184372088833\
+-35184372088832\
+-35184372088831\
+-17592186044417\
+-17592186044416\
+-17592186044415\
+-8796093022209\
+-8796093022208\
+-8796093022207\
+-4398046511105\
+-4398046511104\
+-4398046511103\
+-2199023255553\
+-2199023255552\
+-2199023255551\
+-1099511627777\
+-1099511627776\
+-1099511627775\
+-549755813889\
+-549755813888\
+-549755813887\
+-274877906945\
+-274877906944\
+-274877906943\
+-137438953473\
+-137438953472\
+-137438953471\
+-68719476737\
+-68719476736\
+-68719476735\
+-34359738369\
+-34359738368\
+-34359738367\
+-17179869185\
+-17179869184\
+-17179869183\
+-8589934593\
+-8589934592\
+-8589934591\
+-4294967297\
+-4294967296\
+-4294967295\
+-2147483649\
+-2147483648\
+-2147483647\
+-1073741825\
+-1073741824\
+-1073741823\
+-536870913\
+-536870912\
+-536870911\
+-268435457\
+-268435456\
+-268435455\
+-134217729\
+-134217728\
+-134217727\
+-67108865\
+-67108864\
+-67108863\
+-33554433\
+-33554432\
+-33554431\
+-16777217\
+-16777216\
+-16777215\
+-8388609\
+-8388608\
+-8388607\
+-4194305\
+-4194304\
+-4194303\
+-2097153\
+-2097152\
+-2097151\
+-1048577\
+-1048576\
+-1048575\
+-524289\
+-524288\
+-524287\
+-262145\
+-262144\
+-262143\
+-131073\
+-131072\
+-131071\
+-65537\
+-65536\
+-65535\
+-32769\
+-32768\
+-32767\
+-16385\
+-16384\
+-16383\
+-8193\
+-8192\
+-8191\
+-4097\
+-4096\
+-4095\
+-2049\
+-2048\
+-2047\
+-1025\
+-1024\
+-1023\
+-513\
+-512\
+-511\
+-257\
+-256\
+-255\
+-129\
+-128\
+-127\
+-65\
+-64\
+-63\
+-33\
+-32\
+-31\
+-17\
+-16\
+-15\
+-9\
+-8\
+-7\
+-5\
+-4\
+-3\
+-2\
+-1\
+0\
+1\
+2\
+3\
+4\
+5\
+7\
+8\
+9\
+15\
+16\
+17\
+31\
+32\
+33\
+63\
+64\
+65\
+127\
+128\
+129\
+255\
+256\
+257\
+511\
+512\
+513\
+1023\
+1024\
+1025\
+2047\
+2048\
+2049\
+4095\
+4096\
+4097\
+8191\
+8192\
+8193\
+16383\
+16384\
+16385\
+32767\
+32768\
+32769\
+65535\
+65536\
+65537\
+131071\
+131072\
+131073\
+262143\
+262144\
+262145\
+524287\
+524288\
+524289\
+1048575\
+1048576\
+1048577\
+2097151\
+2097152\
+2097153\
+4194303\
+4194304\
+4194305\
+8388607\
+8388608\
+8388609\
+16777215\
+16777216\
+16777217\
+33554431\
+33554432\
+33554433\
+67108863\
+67108864\
+67108865\
+134217727\
+134217728\
+134217729\
+268435455\
+268435456\
+268435457\
+536870911\
+536870912\
+536870913\
+1073741823\
+1073741824\
+1073741825\
+2147483647\
+2147483648\
+2147483649\
+4294967295\
+4294967296\
+4294967297\
+8589934591\
+8589934592\
+8589934593\
+17179869183\
+17179869184\
+17179869185\
+34359738367\
+34359738368\
+34359738369\
+68719476735\
+68719476736\
+68719476737\
+137438953471\
+137438953472\
+137438953473\
+274877906943\
+274877906944\
+274877906945\
+549755813887\
+549755813888\
+549755813889\
+1099511627775\
+1099511627776\
+1099511627777\
+2199023255551\
+2199023255552\
+2199023255553\
+4398046511103\
+4398046511104\
+4398046511105\
+8796093022207\
+8796093022208\
+8796093022209\
+17592186044415\
+17592186044416\
+17592186044417\
+35184372088831\
+35184372088832\
+35184372088833\
+70368744177663\
+70368744177664\
+70368744177665\
+140737488355327\
+140737488355328\
+140737488355329\
+281474976710655\
+281474976710656\
+281474976710657\
+562949953421311\
+562949953421312\
+562949953421313\
+1125899906842623\
+1125899906842624\
+1125899906842625\
+2251799813685247\
+2251799813685248\
+2251799813685249\
+4503599627370495\
+4503599627370496\
+4503599627370497\
+9007199254740991\
+9007199254740992\
+9007199254740993\
+18014398509481983\
+18014398509481984\
+18014398509481985\
+36028797018963967\
+36028797018963968\
+36028797018963969\
+72057594037927935\
+72057594037927936\
+72057594037927937\
+144115188075855871\
+144115188075855872\
+144115188075855873\
+288230376151711743\
+288230376151711744\
+288230376151711745\
+576460752303423487\
+576460752303423488\
+576460752303423489\
+1152921504606846975\
+1152921504606846976\
+1152921504606846977\
+2305843009213693951\
+2305843009213693952\
+2305843009213693953\
+4611686018427387903\
+4611686018427387904\
+4611686018427387905"
+
+# Ticket #1210. Do proper reference counting of Table structures
+# so that deeply nested SELECT statements can be flattened correctly.
+#
+ifcapable subquery {
+ do_test misc5-3.1 {
+ execsql {
+ CREATE TABLE songs(songid, artist, timesplayed);
+ INSERT INTO songs VALUES(1,'one',1);
+ INSERT INTO songs VALUES(2,'one',2);
+ INSERT INTO songs VALUES(3,'two',3);
+ INSERT INTO songs VALUES(4,'three',5);
+ INSERT INTO songs VALUES(5,'one',7);
+ INSERT INTO songs VALUES(6,'two',11);
+ SELECT DISTINCT artist
+ FROM (
+ SELECT DISTINCT artist
+ FROM songs
+ WHERE songid IN (
+ SELECT songid
+ FROM songs
+ WHERE LOWER(artist) = (
+ SELECT DISTINCT LOWER(artist)
+ FROM (
+ SELECT DISTINCT artist,sum(timesplayed) AS total
+ FROM songs
+ GROUP BY LOWER(artist)
+ ORDER BY total DESC
+ LIMIT 10
+ )
+ WHERE artist <> ''
+ )
+ )
+ )
+ ORDER BY LOWER(artist) ASC;
+ }
+ } {two}
+}
+
+# Ticket #1370. Do not overwrite small files (less than 1024 bytes)
+# when trying to open them as a database.
+#
+do_test misc5-4.1 {
+ db close
+ file delete -force test.db
+ set fd [open test.db w]
+ puts $fd "This is not really a database"
+ close $fd
+ sqlite3 db test.db
+ catchsql {
+ CREATE TABLE t1(a,b,c);
+ }
+} {1 {file is encrypted or is not a database}}
+
+# Ticket #1371. Allow floating point numbers of the form .N or N.
+#
+do_test misc5-5.1 {
+ execsql {SELECT .1 }
+} 0.1
+do_test misc5-5.2 {
+ execsql {SELECT 2. }
+} 2.0
+do_test misc5-5.3 {
+ execsql {SELECT 3.e0 }
+} 3.0
+do_test misc5-5.4 {
+ execsql {SELECT .4e+1}
+} 4.0
+
+# Ticket #1582. Ensure that an unknown table in a LIMIT clause applied to
+# a UNION ALL query causes an error, not a crash.
+#
+db close
+file delete -force test.db
+sqlite3 db test.db
+ifcapable subquery&&compound {
+ do_test misc5-6.1 {
+ catchsql {
+ SELECT * FROM sqlite_master
+ UNION ALL
+ SELECT * FROM sqlite_master
+ LIMIT (SELECT count(*) FROM blah);
+ }
+ } {1 {no such table: blah}}
+ do_test misc5-6.2 {
+ execsql {
+ CREATE TABLE logs(msg TEXT, timestamp INTEGER, dbtime TEXT);
+ }
+ catchsql {
+ SELECT * FROM logs WHERE logs.id >= (SELECT head FROM logs_base)
+ UNION ALL
+ SELECT * FROM logs
+ LIMIT (SELECT lmt FROM logs_base) ;
+ }
+ } {1 {no such column: logs.id}}
+}
+
+# Overflow the lemon parser stack by providing an overly complex
+# expression. Make sure that the overflow is detected and reported.
+#
+do_test misc5-7.1 {
+ execsql {CREATE TABLE t1(x)}
+ set sql "INSERT INTO t1 VALUES("
+ set tail ""
+ for {set i 0} {$i<200} {incr i} {
+ append sql "(1+"
+ append tail ")"
+ }
+ append sql 2$tail
+ catchsql $sql
+} {1 {parser stack overflow}}
+
+# Check the MISUSE return from sqlite3_busy_timeout
+#
+do_test misc5-8.1 {
+ set DB [sqlite3_connection_pointer db]
+ db close
+ sqlite3_busy_timeout $DB 1000
+} SQLITE_MISUSE
+sqlite3 db test.db
+
+# Ticket #1911
+#
+do_test misc5-9.1 {
+ execsql {
+ SELECT name, type FROM sqlite_master WHERE name IS NULL
+ UNION
+ SELECT type, name FROM sqlite_master WHERE type IS NULL
+ ORDER BY 1, 2, 1, 2, 1, 2
+ }
+} {}
+do_test misc5-9.2 {
+ execsql {
+ SELECT name, type FROM sqlite_master WHERE name IS NULL
+ UNION
+ SELECT type, name FROM sqlite_master WHERE type IS NULL
+ ORDER BY 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2
+ }
+} {}
+
+# Ticket #1912. Make the tokenizer require a space after a numeric
+# literal.
+#
+do_test misc5-10.1 {
+ catchsql {
+ SELECT 123abc
+ }
+} {1 {unrecognized token: "123abc"}}
+do_test misc5-10.2 {
+ catchsql {
+ SELECT 1*123.4e5ghi;
+ }
+} {1 {unrecognized token: "123.4e5ghi"}}
+
+
+
+finish_test
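
Tickets #1371 and #1912 above cover two sides of numeric-literal handling: the forms .N and N. are accepted as floating point numbers, while a literal immediately followed by identifier characters must be rejected as a single unrecognized token rather than silently split. A minimal sketch exercising both in one step, in the same harness style; the test name and literals are illustrative and not part of the imported file:

# Hypothetical sketch (not part of the imported misc5.test): ".5" and "2."
# parse as floats, but an identifier glued onto a number is an error.
do_test misc5-sketch-numeric {
  list [execsql {SELECT .5*2.}] [lindex [catchsql {SELECT 2.0abc}] 0]
} {1.0 1}
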
Added: freeswitch/trunk/libs/sqlite/test/misc6.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/misc6.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,46 @@
+# 2006 September 4
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to make sure sqlite3_value_text()
+# always returns a null-terminated string.
+#
+# $Id: misc6.test,v 1.2 2006/09/04 18:54:14 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test misc6-1.1 {
+ set DB [sqlite3_connection_pointer db]
+ sqlite3_create_function $DB
+ set STMT [sqlite3_prepare $DB {SELECT hex8(?)} -1 DUMMY]
+ set sqlite_static_bind_value {0123456789}
+ set sqlite_static_bind_nbyte 5
+ sqlite_bind $STMT 1 {} static-nbytes
+ sqlite3_step $STMT
+} SQLITE_ROW
+do_test misc6-1.2 {
+ sqlite3_column_text $STMT 0
+} {3031323334}
+do_test misc6-1.3 {
+ sqlite3_finalize $STMT
+ set STMT [sqlite3_prepare $DB {SELECT hex16(?)} -1 DUMMY]
+ set sqlite_static_bind_value {0123456789}
+ set sqlite_static_bind_nbyte 5
+ sqlite_bind $STMT 1 {} static-nbytes
+ sqlite3_step $STMT
+} SQLITE_ROW
+do_test misc6-1.4 {
+ sqlite3_column_text $STMT 0
+} {00300031003200330034}
+sqlite3_finalize $STMT
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/misuse.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/misuse.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,207 @@
+# 2002 May 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for the SQLITE_MISUSE detection logic.
+# This test file leaks memory and file descriptors.
+#
+# $Id: misuse.test,v 1.11 2006/01/03 00:33:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+proc catchsql2 {sql} {
+ set r [
+ catch {
+ set res [list]
+ db eval $sql data {
+ if { $res==[list] } {
+ foreach f $data(*) {lappend res $f}
+ }
+ foreach f $data(*) {lappend res $data($f)}
+ }
+ set res
+ } msg
+ ]
+ lappend r $msg
+}
+
+
+# Make sure the test logic works
+#
+do_test misuse-1.1 {
+ db close
+ catch {file delete -force test2.db}
+ catch {file delete -force test2.db-journal}
+ sqlite3 db test2.db; set ::DB [sqlite3_connection_pointer db]
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ }
+ catchsql2 {
+ SELECT * FROM t1
+ }
+} {0 {a b 1 2}}
+do_test misuse-1.2 {
+ catchsql2 {
+ SELECT x_coalesce(NULL,a) AS 'xyz' FROM t1
+ }
+} {1 {no such function: x_coalesce}}
+do_test misuse-1.3 {
+ sqlite3_create_function $::DB
+ catchsql2 {
+ SELECT x_coalesce(NULL,a) AS 'xyz' FROM t1
+ }
+} {0 {xyz 1}}
+
+# Use the x_sqlite_exec() SQL function to simulate the effect of two
+# threads trying to use the same database at the same time.
+#
+# It used to be prohibited to invoke sqlite_exec() from within a function,
+# but that has changed. The following tests used to cause errors but now
+# they do not.
+#
+ifcapable {utf16} {
+ do_test misuse-1.4 {
+ catchsql2 {
+ SELECT x_sqlite_exec('SELECT * FROM t1') AS xyz;
+ }
+ } {0 {xyz {1 2}}}
+}
+do_test misuse-1.5 {
+ catchsql2 {SELECT * FROM t1}
+} {0 {a b 1 2}}
+do_test misuse-1.6 {
+ catchsql {
+ SELECT * FROM t1
+ }
+} {0 {1 2}}
+
+# Attempt to register a new SQL function while an sqlite_exec() is active.
+#
+do_test misuse-2.1 {
+ db close
+ sqlite3 db test2.db; set ::DB [sqlite3_connection_pointer db]
+ execsql {
+ SELECT * FROM t1
+ }
+} {1 2}
+do_test misuse-2.2 {
+ catchsql2 {SELECT * FROM t1}
+} {0 {a b 1 2}}
+
+# We used to disallow creating a new function from within an exec().
+# But now this is acceptable.
+do_test misuse-2.3 {
+ set v [catch {
+ db eval {SELECT * FROM t1} {} {
+ sqlite3_create_function $::DB
+ }
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test misuse-2.4 {
+ catchsql2 {SELECT * FROM t1}
+} {0 {a b 1 2}}
+do_test misuse-2.5 {
+ catchsql {
+ SELECT * FROM t1
+ }
+} {0 {1 2}}
+
+# Attempt to register a new SQL aggregate while an sqlite_exec() is active.
+#
+do_test misuse-3.1 {
+ db close
+ sqlite3 db test2.db; set ::DB [sqlite3_connection_pointer db]
+ execsql {
+ SELECT * FROM t1
+ }
+} {1 2}
+do_test misuse-3.2 {
+ catchsql2 {SELECT * FROM t1}
+} {0 {a b 1 2}}
+
+# We used to disallow creating a new function from within an exec().
+# But now this is acceptable.
+do_test misuse-3.3 {
+ set v [catch {
+ db eval {SELECT * FROM t1} {} {
+ sqlite3_create_aggregate $::DB
+ }
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test misuse-3.4 {
+ catchsql2 {SELECT * FROM t1}
+} {0 {a b 1 2}}
+do_test misuse-3.5 {
+ catchsql {
+ SELECT * FROM t1
+ }
+} {0 {1 2}}
+
+# Attempt to close the database from an sqlite_exec callback.
+#
+# Update for v3: The db cannot be closed because there are active
+# VMs. The sqlite3_close call would return SQLITE_BUSY.
+do_test misuse-4.1 {
+ db close
+ sqlite3 db test2.db; set ::DB [sqlite3_connection_pointer db]
+ execsql {
+ SELECT * FROM t1
+ }
+} {1 2}
+do_test misuse-4.2 {
+ catchsql2 {SELECT * FROM t1}
+} {0 {a b 1 2}}
+do_test misuse-4.3 {
+ set v [catch {
+ db eval {SELECT * FROM t1} {} {
+ set r [sqlite3_close $::DB]
+ }
+ } msg]
+ lappend v $msg $r
+} {0 {} SQLITE_BUSY}
+do_test misuse-4.4 {
+ # Flush the TCL statement cache here, otherwise the sqlite3_close() will
+  # fail because there are still un-finalized VDBEs.
+ db cache flush
+ sqlite3_close $::DB
+ catchsql2 {SELECT * FROM t1}
+} {1 {library routine called out of sequence}}
+do_test misuse-4.5 {
+ catchsql {
+ SELECT * FROM t1
+ }
+} {1 {library routine called out of sequence}}
+
+# Attempt to use a database after it has been closed.
+#
+do_test misuse-5.1 {
+ db close
+ sqlite3 db test2.db; set ::DB [sqlite3_connection_pointer db]
+ execsql {
+ SELECT * FROM t1
+ }
+} {1 2}
+do_test misuse-5.2 {
+ catchsql2 {SELECT * FROM t1}
+} {0 {a b 1 2}}
+do_test misuse-5.3 {
+ db close
+ set r [catch {
+ sqlite3_prepare $::DB {SELECT * FROM t1} -1 TAIL
+ } msg]
+ lappend r $msg
+} {1 {(21) library routine called out of sequence}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/notnull.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/notnull.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,505 @@
+# 2002 January 29
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for the NOT NULL constraint.
+#
+# $Id: notnull.test,v 1.4 2006/01/17 09:35:02 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !conflict {
+ finish_test
+ return
+}
+
+do_test notnull-1.0 {
+ execsql {
+ CREATE TABLE t1 (
+ a NOT NULL,
+ b NOT NULL DEFAULT 5,
+ c NOT NULL ON CONFLICT REPLACE DEFAULT 6,
+ d NOT NULL ON CONFLICT IGNORE DEFAULT 7,
+ e NOT NULL ON CONFLICT ABORT DEFAULT 8
+ );
+ SELECT * FROM t1;
+ }
+} {}
+do_test notnull-1.1 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d,e) VALUES(1,2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 3 4 5}}
+do_test notnull-1.2 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(b,c,d,e) VALUES(2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-1.3 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR IGNORE INTO t1(b,c,d,e) VALUES(2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {}}
+do_test notnull-1.4 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(b,c,d,e) VALUES(2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-1.5 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(b,c,d,e) VALUES(2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-1.6 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,c,d,e) VALUES(1,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-1.7 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR IGNORE INTO t1(a,c,d,e) VALUES(1,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-1.8 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(a,c,d,e) VALUES(1,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-1.9 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(a,c,d,e) VALUES(1,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-1.10 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d,e) VALUES(1,null,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.b may not be NULL}}
+do_test notnull-1.11 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR IGNORE INTO t1(a,b,c,d,e) VALUES(1,null,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {}}
+do_test notnull-1.12 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(a,b,c,d,e) VALUES(1,null,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-1.13 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d,e) VALUES(1,2,null,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 6 4 5}}
+do_test notnull-1.14 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR IGNORE INTO t1(a,b,c,d,e) VALUES(1,2,null,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {}}
+do_test notnull-1.15 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(a,b,c,d,e) VALUES(1,2,null,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 6 4 5}}
+do_test notnull-1.16 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(a,b,c,d,e) VALUES(1,2,null,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.c may not be NULL}}
+do_test notnull-1.17 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(a,b,c,d,e) VALUES(1,2,3,null,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.d may not be NULL}}
+do_test notnull-1.18 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(a,b,c,e) VALUES(1,2,3,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 3 7 5}}
+do_test notnull-1.19 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d) VALUES(1,2,3,4);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 3 4 8}}
+do_test notnull-1.20 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d,e) VALUES(1,2,3,4,null);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.e may not be NULL}}
+do_test notnull-1.21 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(e,d,c,b,a) VALUES(1,2,3,null,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {5 5 3 2 1}}
+
+do_test notnull-2.1 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET a=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-2.2 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR REPLACE t1 SET a=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-2.3 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR IGNORE t1 SET a=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 2 3 4 5}}
+do_test notnull-2.4 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR ABORT t1 SET a=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-2.5 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET b=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.b may not be NULL}}
+do_test notnull-2.6 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR REPLACE t1 SET b=null, d=e, e=d;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 5 3 5 4}}
+do_test notnull-2.7 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR IGNORE t1 SET b=null, d=e, e=d;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 2 3 4 5}}
+do_test notnull-2.8 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET c=null, d=e, e=d;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 2 6 5 4}}
+do_test notnull-2.9 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET d=null, a=b, b=a;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 2 3 4 5}}
+do_test notnull-2.10 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET e=null, a=b, b=a;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.e may not be NULL}}
+
+do_test notnull-3.0 {
+ execsql {
+ CREATE INDEX t1a ON t1(a);
+ CREATE INDEX t1b ON t1(b);
+ CREATE INDEX t1c ON t1(c);
+ CREATE INDEX t1d ON t1(d);
+ CREATE INDEX t1e ON t1(e);
+ CREATE INDEX t1abc ON t1(a,b,c);
+ }
+} {}
+do_test notnull-3.1 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d,e) VALUES(1,2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 3 4 5}}
+do_test notnull-3.2 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(b,c,d,e) VALUES(2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-3.3 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR IGNORE INTO t1(b,c,d,e) VALUES(2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {}}
+do_test notnull-3.4 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(b,c,d,e) VALUES(2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-3.5 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(b,c,d,e) VALUES(2,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-3.6 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,c,d,e) VALUES(1,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-3.7 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR IGNORE INTO t1(a,c,d,e) VALUES(1,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-3.8 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(a,c,d,e) VALUES(1,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-3.9 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(a,c,d,e) VALUES(1,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-3.10 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d,e) VALUES(1,null,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.b may not be NULL}}
+do_test notnull-3.11 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR IGNORE INTO t1(a,b,c,d,e) VALUES(1,null,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {}}
+do_test notnull-3.12 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(a,b,c,d,e) VALUES(1,null,3,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 5 3 4 5}}
+do_test notnull-3.13 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d,e) VALUES(1,2,null,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 6 4 5}}
+do_test notnull-3.14 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR IGNORE INTO t1(a,b,c,d,e) VALUES(1,2,null,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {}}
+do_test notnull-3.15 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(a,b,c,d,e) VALUES(1,2,null,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 6 4 5}}
+do_test notnull-3.16 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(a,b,c,d,e) VALUES(1,2,null,4,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.c may not be NULL}}
+do_test notnull-3.17 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(a,b,c,d,e) VALUES(1,2,3,null,5);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.d may not be NULL}}
+do_test notnull-3.18 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR ABORT INTO t1(a,b,c,e) VALUES(1,2,3,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 3 7 5}}
+do_test notnull-3.19 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d) VALUES(1,2,3,4);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {1 2 3 4 8}}
+do_test notnull-3.20 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1(a,b,c,d,e) VALUES(1,2,3,4,null);
+ SELECT * FROM t1 order by a;
+ }
+} {1 {t1.e may not be NULL}}
+do_test notnull-3.21 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT OR REPLACE INTO t1(e,d,c,b,a) VALUES(1,2,3,null,5);
+ SELECT * FROM t1 order by a;
+ }
+} {0 {5 5 3 2 1}}
+
+do_test notnull-4.1 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET a=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-4.2 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR REPLACE t1 SET a=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-4.3 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR IGNORE t1 SET a=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 2 3 4 5}}
+do_test notnull-4.4 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR ABORT t1 SET a=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.a may not be NULL}}
+do_test notnull-4.5 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET b=null;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.b may not be NULL}}
+do_test notnull-4.6 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR REPLACE t1 SET b=null, d=e, e=d;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 5 3 5 4}}
+do_test notnull-4.7 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE OR IGNORE t1 SET b=null, d=e, e=d;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 2 3 4 5}}
+do_test notnull-4.8 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET c=null, d=e, e=d;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 2 6 5 4}}
+do_test notnull-4.9 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET d=null, a=b, b=a;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {0 {1 2 3 4 5}}
+do_test notnull-4.10 {
+ catchsql {
+ DELETE FROM t1;
+ INSERT INTO t1 VALUES(1,2,3,4,5);
+ UPDATE t1 SET e=null, a=b, b=a;
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 {t1.e may not be NULL}}
+
+finish_test
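
The notnull-1.x results above turn on the per-column conflict clauses declared in notnull-1.0: a NULL written to a column declared NOT NULL ON CONFLICT REPLACE is silently swapped for that column's DEFAULT, while NOT NULL ON CONFLICT ABORT makes the same statement fail. A minimal standalone sketch of that contrast, in the same harness style; the table and test names are illustrative and not part of the imported file:

# Hypothetical sketch (not part of the imported notnull.test): REPLACE
# substitutes the column default, ABORT rejects the row.
do_test notnull-sketch-replace-vs-abort {
  execsql {
    CREATE TABLE nn(
      p NOT NULL ON CONFLICT REPLACE DEFAULT 6,
      q NOT NULL ON CONFLICT ABORT DEFAULT 8
    );
  }
  list [execsql {INSERT INTO nn VALUES(NULL,1); SELECT * FROM nn;}] \
       [lindex [catchsql {INSERT INTO nn VALUES(1,NULL)}] 0]
} {{6 1} 1}
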
Added: freeswitch/trunk/libs/sqlite/test/null.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/null.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,252 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for proper treatment of the special
+# value NULL.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a table and some data to work with.
+#
+do_test null-1.0 {
+ execsql {
+ begin;
+ create table t1(a,b,c);
+ insert into t1 values(1,0,0);
+ insert into t1 values(2,0,1);
+ insert into t1 values(3,1,0);
+ insert into t1 values(4,1,1);
+ insert into t1 values(5,null,0);
+ insert into t1 values(6,null,1);
+ insert into t1 values(7,null,null);
+ commit;
+ select * from t1;
+ }
+} {1 0 0 2 0 1 3 1 0 4 1 1 5 {} 0 6 {} 1 7 {} {}}
+
+# Check for how arithmetic expressions handle NULL
+#
+do_test null-1.1 {
+ execsql {
+ select ifnull(a+b,99) from t1;
+ }
+} {1 2 4 5 99 99 99}
+do_test null-1.2 {
+ execsql {
+ select ifnull(b*c,99) from t1;
+ }
+} {0 0 0 1 99 99 99}
+
+# Check to see how the CASE expression handles NULL values. The
+# first WHEN for which the test expression is TRUE is selected.
+# FALSE and UNKNOWN test expressions are skipped.
+#
+do_test null-2.1 {
+ execsql {
+ select ifnull(case when b<>0 then 1 else 0 end, 99) from t1;
+ }
+} {0 0 1 1 0 0 0}
+do_test null-2.2 {
+ execsql {
+ select ifnull(case when not b<>0 then 1 else 0 end, 99) from t1;
+ }
+} {1 1 0 0 0 0 0}
+do_test null-2.3 {
+ execsql {
+ select ifnull(case when b<>0 and c<>0 then 1 else 0 end, 99) from t1;
+ }
+} {0 0 0 1 0 0 0}
+do_test null-2.4 {
+ execsql {
+ select ifnull(case when not (b<>0 and c<>0) then 1 else 0 end, 99) from t1;
+ }
+} {1 1 1 0 1 0 0}
+do_test null-2.5 {
+ execsql {
+ select ifnull(case when b<>0 or c<>0 then 1 else 0 end, 99) from t1;
+ }
+} {0 1 1 1 0 1 0}
+do_test null-2.6 {
+ execsql {
+ select ifnull(case when not (b<>0 or c<>0) then 1 else 0 end, 99) from t1;
+ }
+} {1 0 0 0 0 0 0}
+do_test null-2.7 {
+ execsql {
+ select ifnull(case b when c then 1 else 0 end, 99) from t1;
+ }
+} {1 0 0 1 0 0 0}
+do_test null-2.8 {
+ execsql {
+ select ifnull(case c when b then 1 else 0 end, 99) from t1;
+ }
+} {1 0 0 1 0 0 0}
+
+# Check to see that NULL values are ignored in aggregate functions.
+#
+do_test null-3.1 {
+ execsql {
+ select count(*), count(b), count(c), sum(b), sum(c),
+ avg(b), avg(c), min(b), max(b) from t1;
+ }
+} {7 4 6 2 3 0.5 0.5 0 1}
+
+# The sum of zero entries is a NULL, but the total of zero entries is 0.
+#
+do_test null-3.2 {
+ execsql {
+ SELECT sum(b), total(b) FROM t1 WHERE b<0
+ }
+} {{} 0.0}
+
+# Check to see how WHERE clauses handle NULL values. A NULL value
+# is the same as UNKNOWN. The WHERE clause should only select those
+# rows that are TRUE. FALSE and UNKNOWN rows are rejected.
+#
+do_test null-4.1 {
+ execsql {
+ select a from t1 where b<10
+ }
+} {1 2 3 4}
+do_test null-4.2 {
+ execsql {
+ select a from t1 where not b>10
+ }
+} {1 2 3 4}
+do_test null-4.3 {
+ execsql {
+ select a from t1 where b<10 or c=1;
+ }
+} {1 2 3 4 6}
+do_test null-4.4 {
+ execsql {
+ select a from t1 where b<10 and c=1;
+ }
+} {2 4}
+do_test null-4.5 {
+ execsql {
+ select a from t1 where not (b<10 and c=1);
+ }
+} {1 3 5}
+
+# The DISTINCT keyword on a SELECT statement should treat NULL values
+# as distinct
+#
+do_test null-5.1 {
+ execsql {
+ select distinct b from t1 order by b;
+ }
+} {{} 0 1}
+
+# A UNION of two queries should treat NULL values
+# as distinct
+#
+ifcapable compound {
+do_test null-6.1 {
+ execsql {
+ select b from t1 union select c from t1 order by c;
+ }
+} {{} 0 1}
+} ;# ifcapable compound
+
+# The UNIQUE constraint only applies to non-null values
+#
+ifcapable conflict {
+do_test null-7.1 {
+ execsql {
+ create table t2(a, b unique on conflict ignore);
+ insert into t2 values(1,1);
+ insert into t2 values(2,null);
+ insert into t2 values(3,null);
+ insert into t2 values(4,1);
+ select a from t2;
+ }
+ } {1 2 3}
+ do_test null-7.2 {
+ execsql {
+ create table t3(a, b, c, unique(b,c) on conflict ignore);
+ insert into t3 values(1,1,1);
+ insert into t3 values(2,null,1);
+ insert into t3 values(3,null,1);
+ insert into t3 values(4,1,1);
+ select a from t3;
+ }
+ } {1 2 3}
+}
+
+# Ticket #461 - Make sure nulls are handled correctly when doing a
+# lookup using an index.
+#
+do_test null-8.1 {
+ execsql {
+ CREATE TABLE t4(x,y);
+ INSERT INTO t4 VALUES(1,11);
+ INSERT INTO t4 VALUES(2,NULL);
+ SELECT x FROM t4 WHERE y=NULL;
+ }
+} {}
+ifcapable subquery {
+ do_test null-8.2 {
+ execsql {
+ SELECT x FROM t4 WHERE y IN (33,NULL);
+ }
+ } {}
+}
+do_test null-8.3 {
+ execsql {
+ SELECT x FROM t4 WHERE y<33 ORDER BY x;
+ }
+} {1}
+do_test null-8.4 {
+ execsql {
+ SELECT x FROM t4 WHERE y>6 ORDER BY x;
+ }
+} {1}
+do_test null-8.5 {
+ execsql {
+ SELECT x FROM t4 WHERE y!=33 ORDER BY x;
+ }
+} {1}
+do_test null-8.11 {
+ execsql {
+ CREATE INDEX t4i1 ON t4(y);
+ SELECT x FROM t4 WHERE y=NULL;
+ }
+} {}
+ifcapable subquery {
+ do_test null-8.12 {
+ execsql {
+ SELECT x FROM t4 WHERE y IN (33,NULL);
+ }
+ } {}
+}
+do_test null-8.13 {
+ execsql {
+ SELECT x FROM t4 WHERE y<33 ORDER BY x;
+ }
+} {1}
+do_test null-8.14 {
+ execsql {
+ SELECT x FROM t4 WHERE y>6 ORDER BY x;
+ }
+} {1}
+do_test null-8.15 {
+ execsql {
+ SELECT x FROM t4 WHERE y!=33 ORDER BY x;
+ }
+} {1}
+
+
+
+finish_test
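
The null-8.x tests above show that y=NULL and y IN (33,NULL) match no rows, with or without an index, because every comparison against NULL yields UNKNOWN; the IS NULL operator is what actually finds the NULL rows. A minimal sketch of that distinction, in the same harness style; the table and test names are illustrative and not part of the imported file:

# Hypothetical sketch (not part of the imported null.test): "=" never
# matches a NULL value, "IS NULL" does.
do_test null-sketch-is-null {
  execsql {
    CREATE TABLE t5(x, y);
    INSERT INTO t5 VALUES(1, 11);
    INSERT INTO t5 VALUES(2, NULL);
  }
  list [execsql {SELECT x FROM t5 WHERE y = NULL}] \
       [execsql {SELECT x FROM t5 WHERE y IS NULL}]
} {{} 2}
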
Added: freeswitch/trunk/libs/sqlite/test/pager.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/pager.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,570 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is the page cache subsystem.
+#
+# $Id: pager.test,v 1.25 2006/01/23 15:25:48 danielk1977 Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+if {[info commands pager_open]!=""} {
+db close
+
+# Basic sanity check. Open and close a pager.
+#
+do_test pager-1.0 {
+ catch {file delete -force ptf1.db}
+ catch {file delete -force ptf1.db-journal}
+ set v [catch {
+ set ::p1 [pager_open ptf1.db 10]
+ } msg]
+} {0}
+do_test pager-1.1 {
+ pager_stats $::p1
+} {ref 0 page 0 max 10 size -1 state 0 err 0 hit 0 miss 0 ovfl 0}
+do_test pager-1.2 {
+ pager_pagecount $::p1
+} {0}
+do_test pager-1.3 {
+ pager_stats $::p1
+} {ref 0 page 0 max 10 size -1 state 0 err 0 hit 0 miss 0 ovfl 0}
+do_test pager-1.4 {
+ pager_close $::p1
+} {}
+
+# Try to write a few pages.
+#
+do_test pager-2.1 {
+ set v [catch {
+ set ::p1 [pager_open ptf1.db 10]
+ } msg]
+} {0}
+#do_test pager-2.2 {
+# set v [catch {
+# set ::g1 [page_get $::p1 0]
+# } msg]
+# lappend v $msg
+#} {1 SQLITE_ERROR}
+do_test pager-2.3.1 {
+ set ::gx [page_lookup $::p1 1]
+} {}
+do_test pager-2.3.2 {
+ pager_stats $::p1
+} {ref 0 page 0 max 10 size -1 state 0 err 0 hit 0 miss 0 ovfl 0}
+do_test pager-2.3.3 {
+ set v [catch {
+ set ::g1 [page_get $::p1 1]
+ } msg]
+ if {$v} {lappend v $msg}
+ set v
+} {0}
+do_test pager-2.3.3 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager-2.3.4 {
+ set ::gx [page_lookup $::p1 1]
+ expr {$::gx!=""}
+} {1}
+do_test pager-2.3.5 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager-2.3.6 {
+ expr {$::g1==$::gx}
+} {1}
+do_test pager-2.3.7 {
+ page_unref $::gx
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager-2.4 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager-2.5 {
+ pager_pagecount $::p1
+} {0}
+do_test pager-2.6 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager-2.7 {
+ page_number $::g1
+} {1}
+do_test pager-2.8 {
+ page_read $::g1
+} {}
+do_test pager-2.9 {
+ page_unref $::g1
+} {}
+do_test pager-2.10 {
+ pager_stats $::p1
+} {ref 0 page 0 max 10 size -1 state 0 err 0 hit 0 miss 1 ovfl 0}
+do_test pager-2.11 {
+ set ::g1 [page_get $::p1 1]
+ expr {$::g1!=0}
+} {1}
+do_test pager-2.12 {
+ page_number $::g1
+} {1}
+do_test pager-2.13 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 2 ovfl 0}
+do_test pager-2.14 {
+ set v [catch {
+ page_write $::g1 "Page-One"
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test pager-2.15 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 1 state 2 err 0 hit 0 miss 2 ovfl 0}
+do_test pager-2.16 {
+ page_read $::g1
+} {Page-One}
+do_test pager-2.17 {
+ set v [catch {
+ pager_commit $::p1
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test pager-2.20 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size -1 state 1 err 0 hit 1 miss 2 ovfl 0}
+do_test pager-2.19 {
+ pager_pagecount $::p1
+} {1}
+do_test pager-2.21 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 1 state 1 err 0 hit 1 miss 2 ovfl 0}
+do_test pager-2.22 {
+ page_unref $::g1
+} {}
+do_test pager-2.23 {
+ pager_stats $::p1
+} {ref 0 page 0 max 10 size -1 state 0 err 0 hit 1 miss 2 ovfl 0}
+do_test pager-2.24 {
+ set v [catch {
+ page_get $::p1 1
+ } ::g1]
+ if {$v} {lappend v $::g1}
+ set v
+} {0}
+do_test pager-2.25 {
+ page_read $::g1
+} {Page-One}
+do_test pager-2.26 {
+ set v [catch {
+ page_write $::g1 {page-one}
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test pager-2.27 {
+ page_read $::g1
+} {page-one}
+do_test pager-2.28 {
+ set v [catch {
+ pager_rollback $::p1
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test pager-2.29 {
+ page_unref $::g1
+ set ::g1 [page_get $::p1 1]
+ page_read $::g1
+} {Page-One}
+do_test pager-2.99 {
+ pager_close $::p1
+} {}
+
+do_test pager-3.1 {
+ set v [catch {
+ set ::p1 [pager_open ptf1.db 15]
+ } msg]
+ if {$v} {lappend v $msg}
+ set v
+} {0}
+do_test pager-3.2 {
+ pager_pagecount $::p1
+} {1}
+do_test pager-3.3 {
+ set v [catch {
+ set ::g(1) [page_get $::p1 1]
+ } msg]
+ if {$v} {lappend v $msg}
+ set v
+} {0}
+do_test pager-3.4 {
+ page_read $::g(1)
+} {Page-One}
+do_test pager-3.5 {
+ for {set i 2} {$i<=20} {incr i} {
+ set gx [page_get $::p1 $i]
+ page_write $gx "Page-$i"
+ page_unref $gx
+ }
+ pager_commit $::p1
+} {}
+for {set i 2} {$i<=20} {incr i} {
+ do_test pager-3.6.[expr {$i-1}] [subst {
+ set gx \[page_get $::p1 $i\]
+ set v \[page_read \$gx\]
+ page_unref \$gx
+ set v
+ }] "Page-$i"
+}
+for {set i 1} {$i<=20} {incr i} {
+ regsub -all CNT {
+ set ::g1 [page_get $::p1 CNT]
+ set ::g2 [page_get $::p1 CNT]
+ set ::vx [page_read $::g2]
+ expr {$::g1==$::g2}
+ } $i body;
+ do_test pager-3.7.$i.1 $body {1}
+ regsub -all CNT {
+ page_unref $::g2
+ set vy [page_read $::g1]
+ expr {$vy==$::vx}
+ } $i body;
+ do_test pager-3.7.$i.2 $body {1}
+ regsub -all CNT {
+ page_unref $::g1
+ set gx [page_get $::p1 CNT]
+ set vy [page_read $gx]
+ page_unref $gx
+ expr {$vy==$::vx}
+ } $i body;
+ do_test pager-3.7.$i.3 $body {1}
+}
+do_test pager-3.99 {
+ pager_close $::p1
+} {}
+
+# tests of the checkpoint mechanism and api
+#
+do_test pager-4.0 {
+ set v [catch {
+ file delete -force ptf1.db
+ set ::p1 [pager_open ptf1.db 15]
+ } msg]
+ if {$v} {lappend v $msg}
+ set v
+} {0}
+do_test pager-4.1 {
+ set g1 [page_get $::p1 1]
+ page_write $g1 "Page-1 v0"
+ for {set i 2} {$i<=20} {incr i} {
+ set gx [page_get $::p1 $i]
+ page_write $gx "Page-$i v0"
+ page_unref $gx
+ }
+ pager_commit $::p1
+} {}
+for {set i 1} {$i<=20} {incr i} {
+ do_test pager-4.2.$i {
+ set gx [page_get $p1 $i]
+ set v [page_read $gx]
+ page_unref $gx
+ set v
+ } "Page-$i v0"
+}
+do_test pager-4.3 {
+ lrange [pager_stats $::p1] 0 1
+} {ref 1}
+do_test pager-4.4 {
+ lrange [pager_stats $::p1] 8 9
+} {state 1}
+
+for {set i 1} {$i<20} {incr i} {
+ do_test pager-4.5.$i.0 {
+ set res {}
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ set shouldbe "Page-$j v[expr {$i-1}]"
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager-4.5.$i.1 {
+ page_write $g1 "Page-1 v$i"
+ lrange [pager_stats $p1] 8 9
+ } {state 2}
+ do_test pager-4.5.$i.2 {
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ page_write $gx "Page-$j v$i"
+ page_unref $gx
+ if {$j==$i} {
+ pager_stmt_begin $p1
+ }
+ }
+ } {}
+ do_test pager-4.5.$i.3 {
+ set res {}
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ set shouldbe "Page-$j v$i"
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager-4.5.$i.4 {
+ pager_rollback $p1
+ set res {}
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ set shouldbe "Page-$j v[expr {$i-1}]"
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager-4.5.$i.5 {
+ page_write $g1 "Page-1 v$i"
+ lrange [pager_stats $p1] 8 9
+ } {state 2}
+ do_test pager-4.5.$i.6 {
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ page_write $gx "Page-$j v$i"
+ page_unref $gx
+ if {$j==$i} {
+ pager_stmt_begin $p1
+ }
+ }
+ } {}
+ do_test pager-4.5.$i.7 {
+ pager_stmt_rollback $p1
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ if {$j<=$i || $i==1} {
+ set shouldbe "Page-$j v$i"
+ } else {
+ set shouldbe "Page-$j v[expr {$i-1}]"
+ }
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager-4.5.$i.8 {
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ page_write $gx "Page-$j v$i"
+ page_unref $gx
+ if {$j==$i} {
+ pager_stmt_begin $p1
+ }
+ }
+ } {}
+ do_test pager-4.5.$i.9 {
+ pager_stmt_commit $p1
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ set shouldbe "Page-$j v$i"
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager-4.5.$i.10 {
+ pager_commit $p1
+ lrange [pager_stats $p1] 8 9
+ } {state 1}
+}
+
+# Test that nothing bad happens when sqlite3pager_set_cachesize() is
+# called with a negative argument.
+do_test pager-4.6.1 {
+ pager_close [pager_open ptf2.db -15]
+} {}
+
+# Test that truncating an in-memory database is OK.
+ifcapable memorydb {
+ do_test pager-4.6.2 {
+ set ::p2 [pager_open :memory: 10]
+ pager_truncate $::p2 5
+ } {}
+ do_test pager-4.6.3 {
+ for {set i 1} {$i<5} {incr i} {
+ set p [page_get $::p2 $i]
+ page_write $p "Page $i"
+ page_unref $p
+ pager_commit $::p2
+ }
+ pager_truncate $::p2 3
+ } {}
+ do_test pager-4.6.4 {
+ pager_close $::p2
+ } {}
+}
+
+do_test pager-4.99 {
+ pager_close $::p1
+} {}
+
+
+
+ file delete -force ptf1.db
+
+} ;# end if( not mem: and has pager_open command );
+
+if 0 {
+# Ticket #615: an assertion fault inside the pager. It is a benign
+# fault, but we might as well test for it.
+#
+do_test pager-5.1 {
+ sqlite3 db test.db
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(x);
+ PRAGMA synchronous=off;
+ COMMIT;
+ }
+} {}
+}
+
+# The following tests cover rolling back hot journal files.
+# They can't be run on Windows because the Windows version of
+# SQLite holds a mandatory exclusive lock on journal files it has open.
+#
+if {$tcl_platform(platform)!="windows"} {
+do_test pager-6.1 {
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ sqlite3 db2 test2.db
+ execsql {
+ PRAGMA synchronous = 0;
+ CREATE TABLE abc(a, b, c);
+ INSERT INTO abc VALUES(1, 2, randstr(200,200));
+ INSERT INTO abc VALUES(1, 2, randstr(200,200));
+ INSERT INTO abc VALUES(1, 2, randstr(200,200));
+ INSERT INTO abc VALUES(1, 2, randstr(200,200));
+ INSERT INTO abc VALUES(1, 2, randstr(200,200));
+ INSERT INTO abc VALUES(1, 2, randstr(200,200));
+ INSERT INTO abc VALUES(1, 2, randstr(200,200));
+ INSERT INTO abc VALUES(1, 2, randstr(200,200));
+ INSERT INTO abc VALUES(1, 2, randstr(200,200));
+ BEGIN;
+ UPDATE abc SET c = randstr(200,200);
+ } db2
+ copy_file test2.db test.db
+ copy_file test2.db-journal test.db-journal
+
+ set f [open test.db-journal a]
+ fconfigure $f -encoding binary
+ seek $f [expr [file size test.db-journal] - 1032] start
+ puts -nonewline $f "\00\00\00\00"
+ close $f
+
+ sqlite3 db test.db
+ execsql {
+ SELECT sql FROM sqlite_master
+ }
+} {{CREATE TABLE abc(a, b, c)}}
+
+do_test pager-6.2 {
+ copy_file test2.db test.db
+ copy_file test2.db-journal test.db-journal
+
+ set f [open test.db-journal a]
+ fconfigure $f -encoding binary
+ seek $f [expr [file size test.db-journal] - 1032] start
+ puts -nonewline $f "\00\00\00\FF"
+ close $f
+
+ sqlite3 db test.db
+ execsql {
+ SELECT sql FROM sqlite_master
+ }
+} {{CREATE TABLE abc(a, b, c)}}
+
+do_test pager-6.3 {
+ copy_file test2.db test.db
+ copy_file test2.db-journal test.db-journal
+
+ set f [open test.db-journal a]
+ fconfigure $f -encoding binary
+ seek $f [expr [file size test.db-journal] - 4] start
+ puts -nonewline $f "\00\00\00\00"
+ close $f
+
+ sqlite3 db test.db
+ execsql {
+ SELECT sql FROM sqlite_master
+ }
+} {{CREATE TABLE abc(a, b, c)}}
+
+do_test pager-6.4.1 {
+ execsql {
+ BEGIN;
+ SELECT sql FROM sqlite_master;
+ }
+ copy_file test2.db-journal test.db-journal;
+ sqlite3 db3 test.db
+ catchsql {
+ BEGIN;
+ SELECT sql FROM sqlite_master;
+ } db3;
+} {1 {database is locked}}
+do_test pager-6.4.2 {
+ file delete -force test.db-journal
+ catchsql {
+ SELECT sql FROM sqlite_master;
+ } db3;
+} {0 {{CREATE TABLE abc(a, b, c)}}}
+do_test pager-6.4.3 {
+ db3 close
+ execsql {
+ COMMIT;
+ }
+} {}
+
+do_test pager-6.5 {
+ copy_file test2.db test.db
+ copy_file test2.db-journal test.db-journal
+
+ set f [open test.db-journal a]
+ fconfigure $f -encoding binary
+ puts -nonewline $f "hello"
+ puts -nonewline $f "\x00\x00\x00\x05\x01\x02\x03\x04"
+ puts -nonewline $f "\xd9\xd5\x05\xf9\x20\xa1\x63\xd7"
+ close $f
+
+ sqlite3 db test.db
+ execsql {
+ SELECT sql FROM sqlite_master
+ }
+} {{CREATE TABLE abc(a, b, c)}}
+
+do_test pager-6.5 {
+ db2 close
+} {}
+}
+finish_test
+
+
+
Added: freeswitch/trunk/libs/sqlite/test/pager2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/pager2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,408 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is the page cache subsystem.
+#
+# $Id: pager2.test,v 1.5 2004/11/22 05:26:28 danielk1977 Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Don't run this test file if the pager test interface [pager_open] is not
+# available, or the library was compiled without in-memory database support.
+#
+if {[info commands pager_open]!=""} {
+ifcapable memorydb {
+db close
+
+# Basic sanity check. Open and close a pager.
+#
+do_test pager2-1.0 {
+ set v [catch {
+ set ::p1 [pager_open :memory: 10]
+ } msg]
+} {0}
+do_test pager2-1.1 {
+ pager_stats $::p1
+} {ref 0 page 0 max 10 size 0 state 0 err 0 hit 0 miss 0 ovfl 0}
+do_test pager2-1.2 {
+ pager_pagecount $::p1
+} {0}
+do_test pager2-1.3 {
+ pager_stats $::p1
+} {ref 0 page 0 max 10 size 0 state 0 err 0 hit 0 miss 0 ovfl 0}
+do_test pager2-1.4 {
+ pager_close $::p1
+} {}
+
+# Try to write a few pages.
+#
+do_test pager2-2.1 {
+ set v [catch {
+ set ::p1 [pager_open :memory: 10]
+ } msg]
+} {0}
+#do_test pager2-2.2 {
+# set v [catch {
+# set ::g1 [page_get $::p1 0]
+# } msg]
+# lappend v $msg
+#} {1 SQLITE_ERROR}
+do_test pager2-2.3.1 {
+ set ::gx [page_lookup $::p1 1]
+} {}
+do_test pager2-2.3.2 {
+ pager_stats $::p1
+} {ref 0 page 0 max 10 size 0 state 0 err 0 hit 0 miss 0 ovfl 0}
+do_test pager2-2.3.3 {
+ set v [catch {
+ set ::g1 [page_get $::p1 1]
+ } msg]
+ if {$v} {lappend v $msg}
+ set v
+} {0}
+do_test pager2-2.3.3 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager2-2.3.4 {
+ set ::gx [page_lookup $::p1 1]
+ expr {$::gx!=""}
+} {1}
+do_test pager2-2.3.5 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager2-2.3.6 {
+ expr {$::g1==$::gx}
+} {1}
+do_test pager2-2.3.7 {
+ page_unref $::gx
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager2-2.4 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager2-2.5 {
+ pager_pagecount $::p1
+} {0}
+do_test pager2-2.6 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager2-2.7 {
+ page_number $::g1
+} {1}
+do_test pager2-2.8 {
+ page_read $::g1
+} {}
+do_test pager2-2.9 {
+ page_unref $::g1
+} {}
+do_test pager2-2.10 {
+ pager_stats $::p1
+} {ref 0 page 1 max 10 size 0 state 1 err 0 hit 0 miss 1 ovfl 0}
+do_test pager2-2.11 {
+ set ::g1 [page_get $::p1 1]
+ expr {$::g1!=0}
+} {1}
+do_test pager2-2.12 {
+ page_number $::g1
+} {1}
+do_test pager2-2.13 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 0 state 1 err 0 hit 1 miss 1 ovfl 0}
+do_test pager2-2.14 {
+ set v [catch {
+ page_write $::g1 "Page-One"
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test pager2-2.15 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 1 state 4 err 0 hit 1 miss 1 ovfl 0}
+do_test pager2-2.16 {
+ page_read $::g1
+} {Page-One}
+do_test pager2-2.17 {
+ set v [catch {
+ pager_commit $::p1
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test pager2-2.20 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 1 state 1 err 0 hit 1 miss 1 ovfl 0}
+do_test pager2-2.19 {
+ pager_pagecount $::p1
+} {1}
+do_test pager2-2.21 {
+ pager_stats $::p1
+} {ref 1 page 1 max 10 size 1 state 1 err 0 hit 1 miss 1 ovfl 0}
+do_test pager2-2.22 {
+ page_unref $::g1
+} {}
+do_test pager2-2.23 {
+ pager_stats $::p1
+} {ref 0 page 1 max 10 size 1 state 1 err 0 hit 1 miss 1 ovfl 0}
+do_test pager2-2.24 {
+ set v [catch {
+ page_get $::p1 1
+ } ::g1]
+ if {$v} {lappend v $::g1}
+ set v
+} {0}
+do_test pager2-2.25 {
+ page_read $::g1
+} {Page-One}
+do_test pager2-2.26 {
+ set v [catch {
+ page_write $::g1 {page-one}
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test pager2-2.27 {
+ page_read $::g1
+} {page-one}
+do_test pager2-2.28 {
+ set v [catch {
+ pager_rollback $::p1
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test pager2-2.29 {
+ page_unref $::g1
+ set ::g1 [page_get $::p1 1]
+ page_read $::g1
+} {Page-One}
+#do_test pager2-2.99 {
+# pager_close $::p1
+#} {}
+
+#do_test pager2-3.1 {
+# set v [catch {
+# set ::p1 [pager_open :memory: 15]
+# } msg]
+# if {$v} {lappend v $msg}
+# set v
+#} {0}
+do_test pager2-3.2 {
+ pager_pagecount $::p1
+} {1}
+do_test pager2-3.3 {
+ set v [catch {
+ set ::g(1) [page_get $::p1 1]
+ } msg]
+ if {$v} {lappend v $msg}
+ set v
+} {0}
+do_test pager2-3.4 {
+ page_read $::g(1)
+} {Page-One}
+do_test pager2-3.5 {
+ for {set i 2} {$i<=20} {incr i} {
+ set gx [page_get $::p1 $i]
+ page_write $gx "Page-$i"
+ page_unref $gx
+ }
+ pager_commit $::p1
+} {}
+for {set i 2} {$i<=20} {incr i} {
+ do_test pager2-3.6.[expr {$i-1}] [subst {
+ set gx \[page_get $::p1 $i\]
+ set v \[page_read \$gx\]
+ page_unref \$gx
+ set v
+ }] "Page-$i"
+}
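+# The loop below uses [regsub] to substitute the current page number for the
+# CNT placeholder in each test body before handing it to do_test.  Each pass
+# checks that two page_get calls on the same page return the same handle and
+# that the page content survives unref and re-fetch.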
+for {set i 1} {$i<=20} {incr i} {
+ regsub -all CNT {
+ set ::g1 [page_get $::p1 CNT]
+ set ::g2 [page_get $::p1 CNT]
+ set ::vx [page_read $::g2]
+ expr {$::g1==$::g2}
+ } $i body;
+ do_test pager2-3.7.$i.1 $body {1}
+ regsub -all CNT {
+ page_unref $::g2
+ set vy [page_read $::g1]
+ expr {$vy==$::vx}
+ } $i body;
+ do_test pager2-3.7.$i.2 $body {1}
+ regsub -all CNT {
+ page_unref $::g1
+ set gx [page_get $::p1 CNT]
+ set vy [page_read $gx]
+ page_unref $gx
+ expr {$vy==$::vx}
+ } $i body;
+ do_test pager2-3.7.$i.3 $body {1}
+}
+do_test pager2-3.99 {
+ pager_close $::p1
+} {}
+
+# Tests of the checkpoint mechanism and API.
+#
+do_test pager2-4.0 {
+ set v [catch {
+ set ::p1 [pager_open :memory: 15]
+ } msg]
+ if {$v} {lappend v $msg}
+ set v
+} {0}
+do_test pager2-4.1 {
+ set g1 [page_get $::p1 1]
+ page_write $g1 "Page-1 v0"
+ for {set i 2} {$i<=20} {incr i} {
+ set gx [page_get $::p1 $i]
+ page_write $gx "Page-$i v0"
+ page_unref $gx
+ }
+ pager_commit $::p1
+} {}
+for {set i 1} {$i<=20} {incr i} {
+ do_test pager2-4.2.$i {
+ set gx [page_get $p1 $i]
+ set v [page_read $gx]
+ page_unref $gx
+ set v
+ } "Page-$i v0"
+}
+do_test pager2-4.3 {
+ lrange [pager_stats $::p1] 0 1
+} {ref 1}
+do_test pager2-4.4 {
+ lrange [pager_stats $::p1] 8 9
+} {state 1}
+
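+# Each pass of the loop below exercises one full cycle of the statement
+# journal ("checkpoint") interface: pages 2..20 are rewritten and a statement
+# sub-transaction is opened part way through with pager_stmt_begin.  The
+# tests then verify that pager_rollback undoes every write of the pass, that
+# pager_stmt_rollback undoes only the writes made after pager_stmt_begin,
+# and that pager_stmt_commit followed by pager_commit makes all of the
+# changes permanent.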
+for {set i 1} {$i<20} {incr i} {
+ do_test pager2-4.5.$i.0 {
+ set res {}
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ set shouldbe "Page-$j v[expr {$i-1}]"
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager2-4.5.$i.1 {
+ page_write $g1 "Page-1 v$i"
+ lrange [pager_stats $p1] 8 9
+ } {state 4}
+ do_test pager2-4.5.$i.2 {
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ page_write $gx "Page-$j v$i"
+ page_unref $gx
+ if {$j==$i} {
+ pager_stmt_begin $p1
+ }
+ }
+ } {}
+ do_test pager2-4.5.$i.3 {
+ set res {}
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ set shouldbe "Page-$j v$i"
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager2-4.5.$i.4 {
+ pager_rollback $p1
+ set res {}
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ set shouldbe "Page-$j v[expr {$i-1}]"
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager2-4.5.$i.5 {
+ page_write $g1 "Page-1 v$i"
+ lrange [pager_stats $p1] 8 9
+ } {state 4}
+ do_test pager2-4.5.$i.6 {
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ page_write $gx "Page-$j v$i"
+ page_unref $gx
+ if {$j==$i} {
+ pager_stmt_begin $p1
+ }
+ }
+ } {}
+ do_test pager2-4.5.$i.7 {
+ pager_stmt_rollback $p1
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ if {$j<=$i || $i==1} {
+ set shouldbe "Page-$j v$i"
+ } else {
+ set shouldbe "Page-$j v[expr {$i-1}]"
+ }
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager2-4.5.$i.8 {
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ page_write $gx "Page-$j v$i"
+ page_unref $gx
+ if {$j==$i} {
+ pager_stmt_begin $p1
+ }
+ }
+ } {}
+ do_test pager2-4.5.$i.9 {
+ pager_stmt_commit $p1
+ for {set j 2} {$j<=20} {incr j} {
+ set gx [page_get $p1 $j]
+ set value [page_read $gx]
+ page_unref $gx
+ set shouldbe "Page-$j v$i"
+ if {$value!=$shouldbe} {
+ lappend res $value $shouldbe
+ }
+ }
+ set res
+ } {}
+ do_test pager2-4.5.$i.10 {
+ pager_commit $p1
+ lrange [pager_stats $p1] 8 9
+ } {state 1}
+}
+
+do_test pager2-4.99 {
+ pager_close $::p1
+} {}
+
+} ;# ifcapable inmemory
+} ;# end if( has pager_open command );
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/pager3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/pager3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,73 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is page cache subsystem.
+#
+# $Id: pager3.test,v 1.3 2005/03/29 03:11:00 danielk1977 Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# This test makes sure the database file is truncated back to the correct
+# length on a rollback.
+#
+# After some preliminary setup, a transaction is started at NOTE (1).
+# The create table on the following line allocates an additional page
+# at the end of the database file. But that page is not written because
+# the database still has a RESERVED lock, not an EXCLUSIVE lock. The
+# new page is held in memory and the size of the file is unchanged.
+# The insert at NOTE (2) begins adding additional pages. Then it hits
+# a constraint error and aborts. The abort causes sqlite3OsTruncate()
+# to be called to restore the file to the same length as it was after
+# the create table.  But the page allocated by the create table had not yet
+# been written to disk, so this truncate actually lengthens the file.  Finally,
+# the rollback at NOTE (3) is called to undo all the changes since the
+# begin. This rollback should truncate the database again.
+#
+# This test was added because the second truncate at NOTE (3) was not
+# occurring on early versions of SQLite 3.0.
+#
+ifcapable tempdb {
+ do_test pager3-1.1 {
+ execsql {
+ create table t1(a unique, b);
+ insert into t1 values(1, 'abcdefghijklmnopqrstuvwxyz');
+ insert into t1 values(2, 'abcdefghijklmnopqrstuvwxyz');
+ update t1 set b=b||a||b;
+ update t1 set b=b||a||b;
+ update t1 set b=b||a||b;
+ update t1 set b=b||a||b;
+ update t1 set b=b||a||b;
+ update t1 set b=b||a||b;
+ create temp table t2 as select * from t1;
+ begin; ------- NOTE (1)
+ create table t3(x);
+ }
+ catchsql {
+ insert into t1 select 4-a, b from t2; ----- NOTE (2)
+ }
+ execsql {
+ rollback; ------- NOTE (3)
+ }
+ db close
+ sqlite3 db test.db
+ set r ok
+ ifcapable {integrityck} {
+ set r [execsql {
+ pragma integrity_check;
+ }]
+ }
+ set r
+ } ok
+}
+
+finish_test
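
A minimal sketch of the invariant the pager3.test comments above rely on:
after a ROLLBACK the database file should be back to the length it had before
the transaction began. The sketch is not part of the test file above; it
assumes a fresh test.db and the tester.tcl helpers, and the test and table
names are illustrative only:

  do_test pager3-sketch.1 {
    execsql {
      CREATE TABLE s1(a UNIQUE, b);
      INSERT INTO s1 VALUES(1, 'x');
    }
    set sz0 [file size test.db]                  ;# length before BEGIN
    execsql  {BEGIN; CREATE TABLE s2(x);}        ;# new page held in memory only
    catchsql {INSERT INTO s1 VALUES(1, 'dup');}  ;# constraint failure aborts
    execsql  {ROLLBACK;}
    expr {[file size test.db] == $sz0}           ;# file truncated back
  } {1}
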
Added: freeswitch/trunk/libs/sqlite/test/pagesize.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/pagesize.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,179 @@
+# 2004 September 2
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+# This file implements tests for the page_size PRAGMA.
+#
+# $Id: pagesize.test,v 1.11 2006/01/17 09:35:02 danielk1977 Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# This test script depends entirely on "PRAGMA page_size". So if this
+# pragma is not available, omit the whole file.
+ifcapable !pager_pragmas {
+ finish_test
+ return
+}
+
+do_test pagesize-1.1 {
+ execsql {PRAGMA page_size}
+} 1024
+ifcapable {explain} {
+ do_test pagesize-1.2 {
+ catch {execsql {EXPLAIN PRAGMA page_size}}
+ } 0
+}
+do_test pagesize-1.3 {
+ execsql {
+ CREATE TABLE t1(a);
+ PRAGMA page_size=2048;
+ PRAGMA page_size;
+ }
+} 1024
+
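+# Once the database file has been created (as in pagesize-1.3 above) the page
+# size can no longer be changed.  On a fresh database, values that are not
+# acceptable page sizes (too small, too large, or not a power of two) are
+# silently ignored and the previous setting is kept.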
+do_test pagesize-1.4 {
+ db close
+ file delete -force test.db
+ sqlite3 db test.db
+ execsql {
+ PRAGMA page_size=511;
+ PRAGMA page_size;
+ }
+} 1024
+do_test pagesize-1.5 {
+ execsql {
+ PRAGMA page_size=512;
+ PRAGMA page_size;
+ }
+} 512
+do_test pagesize-1.6 {
+ execsql {
+ PRAGMA page_size=8192;
+ PRAGMA page_size;
+ }
+} 8192
+do_test pagesize-1.7 {
+ execsql {
+ PRAGMA page_size=65537;
+ PRAGMA page_size;
+ }
+} 8192
+do_test pagesize-1.8 {
+ execsql {
+ PRAGMA page_size=1234;
+ PRAGMA page_size
+ }
+} 8192
+
+foreach PGSZ {512 2048 4096 8192} {
+ ifcapable memorydb {
+ do_test pagesize-2.$PGSZ.0 {
+ db close
+ sqlite3 db :memory:
+ execsql "PRAGMA page_size=$PGSZ;"
+ execsql {PRAGMA page_size}
+ } 1024
+ }
+ do_test pagesize-2.$PGSZ.1 {
+ db close
+ file delete -force test.db
+ sqlite3 db test.db
+ execsql "PRAGMA page_size=$PGSZ"
+ execsql {
+ CREATE TABLE t1(x);
+ PRAGMA page_size;
+ }
+ } $PGSZ
+ do_test pagesize-2.$PGSZ.2 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ PRAGMA page_size
+ }
+ } $PGSZ
+ do_test pagesize-2.$PGSZ.3 {
+ file size test.db
+ } [expr {$PGSZ*($AUTOVACUUM?3:2)}]
+ ifcapable {vacuum} {
+ do_test pagesize-2.$PGSZ.4 {
+ execsql {VACUUM}
+ } {}
+ }
+ integrity_check pagesize-2.$PGSZ.5
+ do_test pagesize-2.$PGSZ.6 {
+ db close
+ sqlite3 db test.db
+ execsql {PRAGMA page_size}
+ } $PGSZ
+ do_test pagesize-2.$PGSZ.7 {
+ execsql {
+ INSERT INTO t1 VALUES(randstr(10,9000));
+ INSERT INTO t1 VALUES(randstr(10,9000));
+ INSERT INTO t1 VALUES(randstr(10,9000));
+ BEGIN;
+ INSERT INTO t1 SELECT x||x FROM t1;
+ INSERT INTO t1 SELECT x||x FROM t1;
+ INSERT INTO t1 SELECT x||x FROM t1;
+ INSERT INTO t1 SELECT x||x FROM t1;
+ SELECT count(*) FROM t1;
+ }
+ } 48
+ do_test pagesize-2.$PGSZ.8 {
+ execsql {
+ ROLLBACK;
+ SELECT count(*) FROM t1;
+ }
+ } 3
+ integrity_check pagesize-2.$PGSZ.9
+ do_test pagesize-2.$PGSZ.10 {
+ db close
+ sqlite3 db test.db
+ execsql {PRAGMA page_size}
+ } $PGSZ
+ do_test pagesize-2.$PGSZ.11 {
+ execsql {
+ INSERT INTO t1 SELECT x||x FROM t1;
+ INSERT INTO t1 SELECT x||x FROM t1;
+ INSERT INTO t1 SELECT x||x FROM t1;
+ INSERT INTO t1 SELECT x||x FROM t1;
+ INSERT INTO t1 SELECT x||x FROM t1;
+ INSERT INTO t1 SELECT x||x FROM t1;
+ SELECT count(*) FROM t1;
+ }
+ } 192
+ do_test pagesize-2.$PGSZ.12 {
+ execsql {
+ BEGIN;
+ DELETE FROM t1 WHERE rowid%5!=0;
+ SELECT count(*) FROM t1;
+ }
+ } 38
+ do_test pagesize-2.$PGSZ.13 {
+ execsql {
+ ROLLBACK;
+ SELECT count(*) FROM t1;
+ }
+ } 192
+ integrity_check pagesize-2.$PGSZ.14
+ do_test pagesize-2.$PGSZ.15 {
+ execsql {DELETE FROM t1 WHERE rowid%5!=0}
+ ifcapable {vacuum} {execsql VACUUM}
+ execsql {SELECT count(*) FROM t1}
+ } 38
+ do_test pagesize-2.$PGSZ.16 {
+ execsql {DROP TABLE t1}
+ ifcapable {vacuum} {execsql VACUUM}
+ } {}
+ integrity_check pagesize-2.$PGSZ.17
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/pragma.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/pragma.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,843 @@
+# 2002 March 6
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for the PRAGMA command.
+#
+# $Id: pragma.test,v 1.44 2006/08/14 14:23:43 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Test organization:
+#
+# pragma-1.*: Test cache_size, default_cache_size and synchronous on main db.
+# pragma-2.*: Test synchronous on attached db.
+# pragma-3.*: Test detection of table/index inconsistency by integrity_check.
+# pragma-4.*: Test cache_size and default_cache_size on attached db.
+# pragma-5.*: Test that pragma synchronous may not be used inside of a
+# transaction.
+# pragma-6.*: Test schema-query pragmas.
+# pragma-7.*: Miscellaneous tests.
+# pragma-8.*: Test user_version and schema_version pragmas.
+# pragma-9.*: Test temp_store and temp_store_directory.
+# pragma-10.*: Test the count_changes pragma in the presence of triggers.
+# pragma-11.*: Test the collation_list pragma.
+#
+
+ifcapable !pragma {
+ finish_test
+ return
+}
+
+# Delete the preexisting database to avoid the special setup
+# that the "all.test" script does.
+#
+db close
+file delete test.db
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+
+ifcapable pager_pragmas {
+do_test pragma-1.1 {
+ execsql {
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {2000 2000 2}
+do_test pragma-1.2 {
+ execsql {
+ PRAGMA synchronous=OFF;
+ PRAGMA cache_size=1234;
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {1234 2000 0}
+do_test pragma-1.3 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {2000 2000 2}
+do_test pragma-1.4 {
+ execsql {
+ PRAGMA synchronous=OFF;
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {2000 2000 0}
+do_test pragma-1.5 {
+ execsql {
+ PRAGMA cache_size=4321;
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {4321 2000 0}
+do_test pragma-1.6 {
+ execsql {
+ PRAGMA synchronous=ON;
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {4321 2000 1}
+do_test pragma-1.7 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {2000 2000 2}
+do_test pragma-1.8 {
+ execsql {
+ PRAGMA default_cache_size=123;
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {123 123 2}
+do_test pragma-1.9.1 {
+ db close
+ sqlite3 db test.db; set ::DB [sqlite3_connection_pointer db]
+ execsql {
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {123 123 2}
+ifcapable vacuum {
+ do_test pragma-1.9.2 {
+ execsql {
+ VACUUM;
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+ } {123 123 2}
+}
+do_test pragma-1.10 {
+ execsql {
+ PRAGMA synchronous=NORMAL;
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {123 123 1}
+do_test pragma-1.11 {
+ execsql {
+ PRAGMA synchronous=FULL;
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {123 123 2}
+do_test pragma-1.12 {
+ db close
+ sqlite3 db test.db; set ::DB [sqlite3_connection_pointer db]
+ execsql {
+ PRAGMA cache_size;
+ PRAGMA default_cache_size;
+ PRAGMA synchronous;
+ }
+} {123 123 2}
+
+# Make sure the pragma handler understands numeric values in addition
+# to keywords like "off" and "full".
+#
+do_test pragma-1.13 {
+ execsql {
+ PRAGMA synchronous=0;
+ PRAGMA synchronous;
+ }
+} {0}
+do_test pragma-1.14 {
+ execsql {
+ PRAGMA synchronous=2;
+ PRAGMA synchronous;
+ }
+} {2}
+} ;# ifcapable pager_pragmas
+
+# Test turning "flag" pragmas on and off.
+#
+do_test pragma-1.15 {
+ execsql {
+ PRAGMA vdbe_listing=YES;
+ PRAGMA vdbe_listing;
+ }
+} {1}
+do_test pragma-1.16 {
+ execsql {
+ PRAGMA vdbe_listing=NO;
+ PRAGMA vdbe_listing;
+ }
+} {0}
+do_test pragma-1.17 {
+ execsql {
+ PRAGMA parser_trace=ON;
+ PRAGMA parser_trace=OFF;
+ }
+} {}
+do_test pragma-1.18 {
+ execsql {
+ PRAGMA bogus = -1234; -- Parsing of negative values
+ }
+} {}
+
+# Test modifying the safety_level of an attached database.
+do_test pragma-2.1 {
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ }
+} {}
+ifcapable pager_pragmas {
+do_test pragma-2.2 {
+ execsql {
+ pragma aux.synchronous;
+ }
+} {2}
+do_test pragma-2.3 {
+ execsql {
+ pragma aux.synchronous = OFF;
+ pragma aux.synchronous;
+ pragma synchronous;
+ }
+} {0 2}
+do_test pragma-2.4 {
+ execsql {
+ pragma aux.synchronous = ON;
+ pragma synchronous;
+ pragma aux.synchronous;
+ }
+} {2 1}
+} ;# ifcapable pager_pragmas
+
+# Construct a corrupted index and make sure the integrity_check
+# pragma finds it.
+#
+# These tests won't work if the database is encrypted.
+#
+do_test pragma-3.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t2(a,b,c);
+ CREATE INDEX i2 ON t2(a);
+ INSERT INTO t2 VALUES(11,2,3);
+ INSERT INTO t2 VALUES(22,3,4);
+ COMMIT;
+ SELECT rowid, * from t2;
+ }
+} {1 11 2 3 2 22 3 4}
+if {![sqlite3 -has-codec] && $sqlite_options(integrityck)} {
+ do_test pragma-3.2 {
+ set rootpage [execsql {SELECT rootpage FROM sqlite_master WHERE name='i2'}]
+ set db [btree_open test.db 100 0]
+ btree_begin_transaction $db
+ set c [btree_cursor $db $rootpage 1]
+ btree_first $c
+ btree_delete $c
+ btree_commit $db
+ btree_close $db
+ execsql {PRAGMA integrity_check}
+ } {{rowid 1 missing from index i2} {wrong # of entries in index i2}}
+}
+do_test pragma-3.3 {
+ execsql {
+ DROP INDEX i2;
+ }
+} {}
+
+# Test modifying the cache_size of an attached database.
+ifcapable pager_pragmas {
+do_test pragma-4.1 {
+ execsql {
+ pragma aux.cache_size;
+ pragma aux.default_cache_size;
+ }
+} {2000 2000}
+do_test pragma-4.2 {
+ execsql {
+ pragma aux.cache_size = 50;
+ pragma aux.cache_size;
+ pragma aux.default_cache_size;
+ }
+} {50 2000}
+do_test pragma-4.3 {
+ execsql {
+ pragma aux.default_cache_size = 456;
+ pragma aux.cache_size;
+ pragma aux.default_cache_size;
+ }
+} {456 456}
+do_test pragma-4.4 {
+ execsql {
+ pragma cache_size;
+ pragma default_cache_size;
+ }
+} {123 123}
+do_test pragma-4.5 {
+ execsql {
+ DETACH aux;
+ ATTACH 'test3.db' AS aux;
+ pragma aux.cache_size;
+ pragma aux.default_cache_size;
+ }
+} {2000 2000}
+do_test pragma-4.6 {
+ execsql {
+ DETACH aux;
+ ATTACH 'test2.db' AS aux;
+ pragma aux.cache_size;
+ pragma aux.default_cache_size;
+ }
+} {456 456}
+} ;# ifcapable pager_pragmas
+
+# Test that modifying the sync-level in the middle of a transaction is
+# disallowed.
+ifcapable pager_pragmas {
+do_test pragma-5.0 {
+ execsql {
+ pragma synchronous;
+ }
+} {2}
+do_test pragma-5.1 {
+ catchsql {
+ BEGIN;
+ pragma synchronous = OFF;
+ }
+} {1 {Safety level may not be changed inside a transaction}}
+do_test pragma-5.2 {
+ execsql {
+ pragma synchronous;
+ }
+} {2}
+catchsql {COMMIT;}
+} ;# ifcapable pager_pragmas
+
+# Test schema-query pragmas
+#
+ifcapable schema_pragmas {
+ifcapable tempdb {
+ do_test pragma-6.1 {
+ set res {}
+ execsql {SELECT * FROM sqlite_temp_master}
+ foreach {idx name file} [execsql {pragma database_list}] {
+ lappend res $idx $name
+ }
+ set res
+ } {0 main 1 temp 2 aux}
+}
+do_test pragma-6.2 {
+ execsql {
+ pragma table_info(t2)
+ }
+} {0 a {} 0 {} 0 1 b {} 0 {} 0 2 c {} 0 {} 0}
+do_test pragma-6.2.2 {
+ execsql {
+ CREATE TABLE t5(a TEXT DEFAULT CURRENT_TIMESTAMP, b DEFAULT (5+3));
+ PRAGMA table_info(t5);
+ }
+} {0 a TEXT 0 CURRENT_TIMESTAMP 0 1 b {} 0 5+3 0}
+ifcapable {foreignkey} {
+ do_test pragma-6.3 {
+ execsql {
+ CREATE TABLE t3(a int references t2(b), b UNIQUE);
+ pragma foreign_key_list(t3);
+ }
+ } {0 0 t2 a b}
+ do_test pragma-6.4 {
+ execsql {
+ pragma index_list(t3);
+ }
+ } {0 sqlite_autoindex_t3_1 1}
+}
+ifcapable {!foreignkey} {
+ execsql {CREATE TABLE t3(a,b UNIQUE)}
+}
+do_test pragma-6.5 {
+ execsql {
+ CREATE INDEX t3i1 ON t3(a,b);
+ pragma index_info(t3i1);
+ }
+} {0 0 a 1 1 b}
+} ;# ifcapable schema_pragmas
+# Miscellaneous tests
+#
+ifcapable schema_pragmas {
+do_test pragma-7.1 {
+ # Make sure a pragma knows to read the schema if it needs to
+ db close
+ sqlite3 db test.db
+ execsql {
+ pragma index_list(t3);
+ }
+} {0 t3i1 0 1 sqlite_autoindex_t3_1 1}
+} ;# ifcapable schema_pragmas
+ifcapable {utf16} {
+ do_test pragma-7.2 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ pragma encoding=bogus;
+ }
+ } {1 {unsupported encoding: bogus}}
+}
+ifcapable tempdb {
+ do_test pragma-7.3 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ pragma lock_status;
+ }
+ } {main unlocked temp closed}
+} else {
+ do_test pragma-7.3 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ pragma lock_status;
+ }
+ } {main unlocked}
+}
+
+
+#----------------------------------------------------------------------
+# Test cases pragma-8.* test the "PRAGMA schema_version" and "PRAGMA
+# user_version" statements.
+#
+# pragma-8.1: PRAGMA schema_version
+# pragma-8.2: PRAGMA user_version
+#
+
+ifcapable schema_version {
+
+# First check that we can set the schema version and then retrieve the
+# same value.
+do_test pragma-8.1.1 {
+ execsql {
+ PRAGMA schema_version = 105;
+ }
+} {}
+do_test pragma-8.1.2 {
+ execsql {
+ PRAGMA schema_version;
+ }
+} 105
+do_test pragma-8.1.3 {
+ execsql {
+ PRAGMA schema_version = 106;
+ }
+} {}
+do_test pragma-8.1.4 {
+ execsql {
+ PRAGMA schema_version;
+ }
+} 106
+
+# Check that creating a table modifies the schema-version (this is really
+# to verify that the value being read is in fact the schema version).
+do_test pragma-8.1.5 {
+ execsql {
+ CREATE TABLE t4(a, b, c);
+ INSERT INTO t4 VALUES(1, 2, 3);
+ SELECT * FROM t4;
+ }
+} {1 2 3}
+do_test pragma-8.1.6 {
+ execsql {
+ PRAGMA schema_version;
+ }
+} 107
+
+# Now open a second connection to the database. Ensure that changing the
+# schema-version using the first connection forces the second connection
+# to reload the schema. This has to be done using the C-API test functions,
+# because the TCL API accounts for SCHEMA_ERROR and retries the query.
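+# Through the raw API the schema change shows up as follows: sqlite3_step()
+# on a statement compiled against the stale schema returns SQLITE_ERROR, and
+# sqlite3_finalize() then reports SQLITE_SCHEMA (tests 8.1.9 and 8.1.10 below).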
+do_test pragma-8.1.7 {
+ sqlite3 db2 test.db; set ::DB2 [sqlite3_connection_pointer db2]
+ execsql {
+ SELECT * FROM t4;
+ } db2
+} {1 2 3}
+do_test pragma-8.1.8 {
+ execsql {
+ PRAGMA schema_version = 108;
+ }
+} {}
+do_test pragma-8.1.9 {
+ set ::STMT [sqlite3_prepare $::DB2 "SELECT * FROM t4" -1 DUMMY]
+ sqlite3_step $::STMT
+} SQLITE_ERROR
+do_test pragma-8.1.10 {
+ sqlite3_finalize $::STMT
+} SQLITE_SCHEMA
+
+# Make sure the schema-version can be manipulated in an attached database.
+file delete -force test2.db
+file delete -force test2.db-journal
+do_test pragma-8.1.11 {
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ CREATE TABLE aux.t1(a, b, c);
+ PRAGMA aux.schema_version = 205;
+ }
+} {}
+do_test pragma-8.1.12 {
+ execsql {
+ PRAGMA aux.schema_version;
+ }
+} 205
+do_test pragma-8.1.13 {
+ execsql {
+ PRAGMA schema_version;
+ }
+} 108
+
+# And check that modifying the schema-version in an attached database
+# forces the second connection to reload the schema.
+do_test pragma-8.1.14 {
+ sqlite3 db2 test.db; set ::DB2 [sqlite3_connection_pointer db2]
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ SELECT * FROM aux.t1;
+ } db2
+} {}
+do_test pragma-8.1.15 {
+ execsql {
+ PRAGMA aux.schema_version = 206;
+ }
+} {}
+do_test pragma-8.1.16 {
+ set ::STMT [sqlite3_prepare $::DB2 "SELECT * FROM aux.t1" -1 DUMMY]
+ sqlite3_step $::STMT
+} SQLITE_ERROR
+do_test pragma-8.1.17 {
+ sqlite3_finalize $::STMT
+} SQLITE_SCHEMA
+do_test pragma-8.1.18 {
+ db2 close
+} {}
+
+# Now test that the user-version can be read and written (and that we aren't
+# accidentally manipulating the schema-version instead).
+do_test pragma-8.2.1 {
+ execsql {
+ PRAGMA user_version;
+ }
+} {0}
+do_test pragma-8.2.2 {
+ execsql {
+ PRAGMA user_version = 2;
+ }
+} {}
+do_test pragma-8.2.3.1 {
+ execsql {
+ PRAGMA user_version;
+ }
+} {2}
+do_test pragma-8.2.3.2 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ PRAGMA user_version;
+ }
+} {2}
+do_test pragma-8.2.4.1 {
+ execsql {
+ PRAGMA schema_version;
+ }
+} {108}
+ifcapable vacuum {
+ do_test pragma-8.2.4.2 {
+ execsql {
+ VACUUM;
+ PRAGMA user_version;
+ }
+ } {2}
+ do_test pragma-8.2.4.3 {
+ execsql {
+ PRAGMA schema_version;
+ }
+ } {109}
+}
+db eval {ATTACH 'test2.db' AS aux}
+
+# Check that the user-version in the auxiliary database can be manipulated (
+# and that we aren't accidentally manipulating the same in the main db).
+do_test pragma-8.2.5 {
+ execsql {
+ PRAGMA aux.user_version;
+ }
+} {0}
+do_test pragma-8.2.6 {
+ execsql {
+ PRAGMA aux.user_version = 3;
+ }
+} {}
+do_test pragma-8.2.7 {
+ execsql {
+ PRAGMA aux.user_version;
+ }
+} {3}
+do_test pragma-8.2.8 {
+ execsql {
+ PRAGMA main.user_version;
+ }
+} {2}
+
+# Now check that a ROLLBACK resets the user-version if it has been modified
+# within a transaction.
+do_test pragma-8.2.9 {
+ execsql {
+ BEGIN;
+ PRAGMA aux.user_version = 10;
+ PRAGMA user_version = 11;
+ }
+} {}
+do_test pragma-8.2.10 {
+ execsql {
+ PRAGMA aux.user_version;
+ }
+} {10}
+do_test pragma-8.2.11 {
+ execsql {
+ PRAGMA main.user_version;
+ }
+} {11}
+do_test pragma-8.2.12 {
+ execsql {
+ ROLLBACK;
+ PRAGMA aux.user_version;
+ }
+} {3}
+do_test pragma-8.2.13 {
+ execsql {
+ PRAGMA main.user_version;
+ }
+} {2}
+
+# Try a negative value for the user-version
+do_test pragma-8.2.14 {
+ execsql {
+ PRAGMA user_version = -450;
+ }
+} {}
+do_test pragma-8.2.15 {
+ execsql {
+ PRAGMA user_version;
+ }
+} {-450}
+} ; # ifcapable schema_version
+
+
+# Test temp_store and temp_store_directory pragmas
+#
+ifcapable pager_pragmas {
+do_test pragma-9.1 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ PRAGMA temp_store;
+ }
+} {0}
+do_test pragma-9.2 {
+ execsql {
+ PRAGMA temp_store=file;
+ PRAGMA temp_store;
+ }
+} {1}
+do_test pragma-9.3 {
+ execsql {
+ PRAGMA temp_store=memory;
+ PRAGMA temp_store;
+ }
+} {2}
+do_test pragma-9.4 {
+ execsql {
+ PRAGMA temp_store_directory;
+ }
+} {}
+do_test pragma-9.5 {
+ set pwd [string map {' ''} [pwd]]
+ execsql "
+ PRAGMA temp_store_directory='$pwd';
+ "
+} {}
+do_test pragma-9.6 {
+ execsql {
+ PRAGMA temp_store_directory;
+ }
+} [pwd]
+do_test pragma-9.7 {
+ catchsql {
+ PRAGMA temp_store_directory='/NON/EXISTENT/PATH/FOOBAR';
+ }
+} {1 {not a writable directory}}
+do_test pragma-9.8 {
+ execsql {
+ PRAGMA temp_store_directory='';
+ }
+} {}
+ifcapable tempdb {
+ do_test pragma-9.9 {
+ execsql {
+ PRAGMA temp_store_directory;
+ PRAGMA temp_store=FILE;
+ CREATE TEMP TABLE temp_store_directory_test(a integer);
+ INSERT INTO temp_store_directory_test values (2);
+ SELECT * FROM temp_store_directory_test;
+ }
+ } {2}
+}
+do_test pragma-9.10 {
+ catchsql "
+ PRAGMA temp_store_directory='$pwd';
+ SELECT * FROM temp_store_directory_test;
+ "
+} {1 {no such table: temp_store_directory_test}}
+} ;# ifcapable pager_pragmas
+
+ifcapable trigger {
+
+do_test pragma-10.0 {
+ catchsql {
+ DROP TABLE main.t1;
+ }
+ execsql {
+ PRAGMA count_changes = 1;
+
+ CREATE TABLE t1(a PRIMARY KEY);
+ CREATE TABLE t1_mirror(a);
+ CREATE TABLE t1_mirror2(a);
+ CREATE TRIGGER t1_bi BEFORE INSERT ON t1 BEGIN
+ INSERT INTO t1_mirror VALUES(new.a);
+ END;
+ CREATE TRIGGER t1_ai AFTER INSERT ON t1 BEGIN
+ INSERT INTO t1_mirror2 VALUES(new.a);
+ END;
+ CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 BEGIN
+ UPDATE t1_mirror SET a = new.a WHERE a = old.a;
+ END;
+ CREATE TRIGGER t1_au AFTER UPDATE ON t1 BEGIN
+ UPDATE t1_mirror2 SET a = new.a WHERE a = old.a;
+ END;
+ CREATE TRIGGER t1_bd BEFORE DELETE ON t1 BEGIN
+ DELETE FROM t1_mirror WHERE a = old.a;
+ END;
+ CREATE TRIGGER t1_ad AFTER DELETE ON t1 BEGIN
+ DELETE FROM t1_mirror2 WHERE a = old.a;
+ END;
+ }
+} {}
+
+do_test pragma-10.1 {
+ execsql {
+ INSERT INTO t1 VALUES(randstr(10,10));
+ }
+} {1}
+do_test pragma-10.2 {
+ execsql {
+ UPDATE t1 SET a = randstr(10,10);
+ }
+} {1}
+do_test pragma-10.3 {
+ execsql {
+ DELETE FROM t1;
+ }
+} {1}
+
+} ;# ifcapable trigger
+
+ifcapable schema_pragmas {
+ do_test pragma-11.1 {
+ execsql2 {
+ pragma collation_list;
+ }
+ } {seq 0 name NOCASE seq 1 name BINARY}
+ do_test pragma-11.2 {
+ db collate New_Collation blah...
+ execsql {
+ pragma collation_list;
+ }
+ } {0 New_Collation 1 NOCASE 2 BINARY}
+}
+
+ifcapable schema_pragmas&&tempdb {
+ do_test pragma-12.1 {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA temp.table_info('abc');
+ } db2
+ } {}
+ db2 close
+
+ do_test pragma-12.2 {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA temp.default_cache_size = 200;
+ PRAGMA temp.default_cache_size;
+ } db2
+ } {200}
+ db2 close
+
+ do_test pragma-12.3 {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA temp.cache_size = 400;
+ PRAGMA temp.cache_size;
+ } db2
+ } {400}
+ db2 close
+}
+
+ifcapable bloblit {
+
+do_test pragma-13.1 {
+ execsql {
+ DROP TABLE IF EXISTS t4;
+ PRAGMA vdbe_trace=on;
+ PRAGMA vdbe_listing=on;
+ PRAGMA sql_trace=on;
+ CREATE TABLE t4(a INTEGER PRIMARY KEY,b);
+ INSERT INTO t4(b) VALUES(x'0123456789abcdef0123456789abcdef0123456789');
+ INSERT INTO t4(b) VALUES(randstr(30,30));
+ INSERT INTO t4(b) VALUES(1.23456);
+ INSERT INTO t4(b) VALUES(NULL);
+ INSERT INTO t4(b) VALUES(0);
+ INSERT INTO t4(b) SELECT b||b||b||b FROM t4;
+ SELECT * FROM t4;
+ }
+ execsql {
+ PRAGMA vdbe_trace=off;
+ PRAGMA vdbe_listing=off;
+ PRAGMA sql_trace=off;
+ }
+} {}
+
+} ;# ifcapable bloblit
+
+# Reset the sqlite3_temp_directory variable for the next run of tests:
+sqlite3 dbX :memory:
+dbX eval {PRAGMA temp_store_directory = ""}
+dbX close
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/printf.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/printf.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,242 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the sqlite_*_printf() interface.
+#
+# $Id: printf.test,v 1.21 2006/03/19 13:00:25 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
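+# Each test formats the same values with the sqlite3_mprintf_int test command
+# and with Tcl's own [format] and requires the two results to match.  For the
+# %x and %o conversions the reference value is masked to 32 bits first ($v32),
+# since those conversions print the unsigned 32-bit representation of negative
+# arguments.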
+set n 1
+foreach v {1 2 5 10 99 100 1000000 999999999 0 -1 -2 -5 -10 -99 -100 -9999999} {
+ set v32 [expr {$v&0xffffffff}]
+ do_test printf-1.$n.1 [subst {
+ sqlite3_mprintf_int {Three integers: %d %x %o} $v $v $v
+ }] [format {Three integers: %d %x %o} $v $v32 $v32]
+ do_test printf-1.$n.2 [subst {
+ sqlite3_mprintf_int {Three integers: (%6d) (%6x) (%6o)} $v $v $v
+ }] [format {Three integers: (%6d) (%6x) (%6o)} $v $v32 $v32]
+ do_test printf-1.$n.3 [subst {
+ sqlite3_mprintf_int {Three integers: (%-6d) (%-6x) (%-6o)} $v $v $v
+ }] [format {Three integers: (%-6d) (%-6x) (%-6o)} $v $v32 $v32]
+ do_test printf-1.$n.4 [subst {
+ sqlite3_mprintf_int {Three integers: (%+6d) (%+6x) (%+6o)} $v $v $v
+ }] [format {Three integers: (%+6d) (%+6x) (%+6o)} $v $v32 $v32]
+ do_test printf-1.$n.5 [subst {
+ sqlite3_mprintf_int {Three integers: (%06d) (%06x) (%06o)} $v $v $v
+ }] [format {Three integers: (%06d) (%06x) (%06o)} $v $v32 $v32]
+ do_test printf-1.$n.6 [subst {
+ sqlite3_mprintf_int {Three integers: (% 6d) (% 6x) (% 6o)} $v $v $v
+ }] [format {Three integers: (% 6d) (% 6x) (% 6o)} $v $v32 $v32]
+ do_test printf-1.$n.7 [subst {
+ sqlite3_mprintf_int {Three integers: (%#6d) (%#6x) (%#6o)} $v $v $v
+ }] [format {Three integers: (%#6d) (%#6x) (%#6o)} $v $v32 $v32]
+ incr n
+}
+
+
+if {$::tcl_platform(platform)!="windows"} {
+
+set m 1
+foreach {a b} {1 1 5 5 10 10 10 5} {
+ set n 1
+ foreach x {0.001 1.0e-20 1.0 0.0 100.0 9.99999 -0.00543 -1.0 -99.99999} {
+ do_test printf-2.$m.$n.1 [subst {
+ sqlite3_mprintf_double {A double: %*.*f} $a $b $x
+ }] [format {A double: %*.*f} $a $b $x]
+ do_test printf-2.$m.$n.2 [subst {
+ sqlite3_mprintf_double {A double: %*.*e} $a $b $x
+ }] [format {A double: %*.*e} $a $b $x]
+ do_test printf-2.$m.$n.3 [subst {
+ sqlite3_mprintf_double {A double: %*.*g} $a $b $x
+ }] [format {A double: %*.*g} $a $b $x]
+ do_test printf-2.$m.$n.4 [subst {
+ sqlite3_mprintf_double {A double: %d %d %g} $a $b $x
+ }] [format {A double: %d %d %g} $a $b $x]
+ do_test printf-2.$m.$n.5 [subst {
+ sqlite3_mprintf_double {A double: %d %d %#g} $a $b $x
+ }] [format {A double: %d %d %#g} $a $b $x]
+ do_test printf-2.$m.$n.6 [subst {
+ sqlite3_mprintf_double {A double: %d %d %010g} $a $b $x
+ }] [format {A double: %d %d %010g} $a $b $x]
+ incr n
+ }
+ incr m
+}
+
+} ;# endif not windows
+
+do_test printf-3.1 {
+ sqlite3_mprintf_str {A String: (%*.*s)} 10 10 {This is the string}
+} [format {A String: (%*.*s)} 10 10 {This is the string}]
+do_test printf-3.2 {
+ sqlite3_mprintf_str {A String: (%*.*s)} 10 5 {This is the string}
+} [format {A String: (%*.*s)} 10 5 {This is the string}]
+do_test printf-3.3 {
+ sqlite3_mprintf_str {A String: (%*.*s)} -10 5 {This is the string}
+} [format {A String: (%*.*s)} -10 5 {This is the string}]
+do_test printf-3.4 {
+ sqlite3_mprintf_str {%d %d A String: (%s)} 1 2 {This is the string}
+} [format {%d %d A String: (%s)} 1 2 {This is the string}]
+do_test printf-3.5 {
+ sqlite3_mprintf_str {%d %d A String: (%30s)} 1 2 {This is the string}
+} [format {%d %d A String: (%30s)} 1 2 {This is the string}]
+do_test printf-3.6 {
+ sqlite3_mprintf_str {%d %d A String: (%-30s)} 1 2 {This is the string}
+} [format {%d %d A String: (%-30s)} 1 2 {This is the string}]
+
+do_test printf-4.1 {
+ sqlite3_mprintf_str {%d %d A quoted string: '%q'} 1 2 {Hi Y'all}
+} {1 2 A quoted string: 'Hi Y''all'}
+do_test printf-4.2 {
+ sqlite3_mprintf_str {%d %d A NULL pointer in %%q: '%q'} 1 2
+} {1 2 A NULL pointer in %q: '(NULL)'}
+do_test printf-4.3 {
+ sqlite3_mprintf_str {%d %d A quoted string: %Q} 1 2 {Hi Y'all}
+} {1 2 A quoted string: 'Hi Y''all'}
+do_test printf-4.4 {
+ sqlite3_mprintf_str {%d %d A NULL pointer in %%Q: %Q} 1 2
+} {1 2 A NULL pointer in %Q: NULL}
+
+do_test printf-5.1 {
+ set x [sqlite3_mprintf_str {%d %d %100000s} 0 0 {Hello}]
+ string length $x
+} {344}
+do_test printf-5.2 {
+ sqlite3_mprintf_str {%d %d (%-10.10s) %} -9 -10 {HelloHelloHello}
+} {-9 -10 (HelloHello) %}
+
+do_test printf-6.1 {
+ sqlite3_mprintf_z_test , one two three four five six
+} {,one,two,three,four,five,six}
+
+
+do_test printf-7.1 {
+ sqlite3_mprintf_scaled {A double: %g} 1.0e307 1.0
+} {A double: 1e+307}
+do_test printf-7.2 {
+ sqlite3_mprintf_scaled {A double: %g} 1.0e307 10.0
+} {A double: 1e+308}
+do_test printf-7.3 {
+ sqlite3_mprintf_scaled {A double: %g} 1.0e307 100.0
+} {A double: NaN}
+
+do_test printf-8.1 {
+ sqlite3_mprintf_int {%u %u %u} 0x7fffffff 0x80000000 0xffffffff
+} {2147483647 2147483648 4294967295}
+do_test printf-8.2 {
+ sqlite3_mprintf_int {%lu %lu %lu} 0x7fffffff 0x80000000 0xffffffff
+} {2147483647 2147483648 4294967295}
+do_test printf-8.3 {
+ sqlite3_mprintf_int64 {%llu %llu %llu} 2147483647 2147483648 4294967296
+} {2147483647 2147483648 4294967296}
+do_test printf-8.4 {
+ sqlite3_mprintf_int64 {%lld %lld %lld} 2147483647 2147483648 4294967296
+} {2147483647 2147483648 4294967296}
+do_test printf-8.5 {
+ sqlite3_mprintf_int64 {%llx %llx %llx} 2147483647 2147483648 4294967296
+} {7fffffff 80000000 100000000}
+do_test printf-8.6 {
+ sqlite3_mprintf_int64 {%llx %llo %lld} -1 -1 -1
+} {ffffffffffffffff 1777777777777777777777 -1}
+
+do_test printf-9.1 {
+ sqlite3_mprintf_int {%*.*c} 4 4 65
+} {AAAA}
+do_test printf-9.2 {
+ sqlite3_mprintf_int {%*.*c} -4 1 66
+} {B }
+do_test printf-9.3 {
+ sqlite3_mprintf_int {%*.*c} 4 1 67
+} { C}
+do_test printf-9.4 {
+ sqlite3_mprintf_int {%d %d %c} 4 1 67
+} {4 1 C}
+set ten { }
+set fifty $ten$ten$ten$ten$ten
+do_test printf-9.5 {
+ sqlite3_mprintf_int {%d %*c} 1 -201 67
+} "1 C$fifty$fifty$fifty$fifty"
+do_test printf-9.6 {
+ sqlite3_mprintf_int {hi%12345.12346yhello} 0 0 0
+} {hi}
+
+# Ticket #812
+#
+do_test printf-10.1 {
+ sqlite3_mprintf_stronly %s {}
+} {}
+
+# Ticket #831
+#
+do_test printf-10.2 {
+ sqlite3_mprintf_stronly %q {}
+} {}
+
+# Ticket #1340: Test for loss of precision on large positive exponents
+#
+do_test printf-10.3 {
+ sqlite3_mprintf_double {%d %d %f} 1 1 1e300
+} {1 1 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.000000}
+
+# The non-standard '!' flag on a 'g' conversion forces a decimal point
+# and at least one digit on either side of the decimal point.
+#
+do_test printf-11.1 {
+ sqlite3_mprintf_double {%d %d %!g} 1 1 1
+} {1 1 1.0}
+do_test printf-11.2 {
+ sqlite3_mprintf_double {%d %d %!g} 1 1 123
+} {1 1 123.0}
+do_test printf-11.3 {
+ sqlite3_mprintf_double {%d %d %!g} 1 1 12.3
+} {1 1 12.3}
+do_test printf-11.4 {
+ sqlite3_mprintf_double {%d %d %!g} 1 1 0.123
+} {1 1 0.123}
+do_test printf-11.5 {
+ sqlite3_mprintf_double {%d %d %!.15g} 1 1 1
+} {1 1 1.0}
+do_test printf-11.6 {
+ sqlite3_mprintf_double {%d %d %!.15g} 1 1 1e10
+} {1 1 10000000000.0}
+do_test printf-11.7 {
+ sqlite3_mprintf_double {%d %d %!.15g} 1 1 1e300
+} {1 1 1.0e+300}
+
+# Additional tests for coverage
+#
+do_test printf-12.1 {
+ sqlite3_mprintf_double {%d %d %.2000g} 1 1 1.0
+} {1 1 1}
+
+# Floating point boundary cases
+#
+do_test printf-13.1 {
+ sqlite3_mprintf_hexdouble %.20f 4024000000000000
+} {10.00000000000000000000}
+do_test printf-13.2 {
+ sqlite3_mprintf_hexdouble %.20f 4197d78400000000
+} {100000000.00000000000000000000}
+do_test printf-13.3 {
+ sqlite3_mprintf_hexdouble %.20f 4693b8b5b5056e17
+} {100000000000000000000000000000000.00000000000000000000}
+
+do_test printf-14.1 {
+ sqlite3_mprintf_str {abc-%y-123} 0 0 {not used}
+} {abc-}
+do_test printf-14.2 {
+ sqlite3_mprintf_n_test {xyzzy}
+} 5
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/progress.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/progress.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,151 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the 'progress callback'.
+#
+# $Id: progress.test,v 1.6 2006/05/26 19:57:20 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If the progress callback is not available in this build, skip this
+# whole file.
+ifcapable !progress {
+ finish_test
+ return
+}
+
+# Build some test data
+#
+execsql {
+ BEGIN;
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 VALUES(3);
+ INSERT INTO t1 VALUES(4);
+ INSERT INTO t1 VALUES(5);
+ INSERT INTO t1 VALUES(6);
+ INSERT INTO t1 VALUES(7);
+ INSERT INTO t1 VALUES(8);
+ INSERT INTO t1 VALUES(9);
+ INSERT INTO t1 VALUES(10);
+ COMMIT;
+}
+
+
+# Test that the progress callback is invoked.
+do_test progress-1.0 {
+ set counter 0
+ db progress 1 "[namespace code {incr counter}] ; expr 0"
+ execsql {
+ SELECT * FROM t1
+ }
+ expr $counter > 1
+} 1
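+# In the Tcl interface, [db progress N SCRIPT] arranges for SCRIPT to be run
+# after every N virtual-machine opcodes.  A non-zero result from SCRIPT
+# interrupts the query, and N of 0 removes the handler.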
+do_test progress-1.0.1 {
+ db progress
+} {::namespace inscope :: {incr counter} ; expr 0}
+do_test progress-1.0.2 {
+ set v [catch {db progress xyz bogus} msg]
+ lappend v $msg
+} {1 {expected integer but got "xyz"}}
+
+# Test that the query is abandoned when the progress callback returns non-zero
+do_test progress-1.1 {
+ set counter 0
+ db progress 1 "[namespace code {incr counter}] ; expr 1"
+ set rc [catch {execsql {
+ SELECT * FROM t1
+ }}]
+ list $counter $rc
+} {1 1}
+
+# Test that the query is rolled back when the progress callback returns
+# non-zero.
+do_test progress-1.2 {
+
+ # This figures out how many opcodes it takes to copy 5 extra rows into t1.
+ db progress 1 "[namespace code {incr five_rows}] ; expr 0"
+ set five_rows 0
+ execsql {
+ INSERT INTO t1 SELECT a+10 FROM t1 WHERE a < 6
+ }
+ db progress 0 ""
+ execsql {
+ DELETE FROM t1 WHERE a > 10
+ }
+
+ # Now set up the progress callback to abandon the query after the number of
+ # opcodes to copy 5 rows. That way, when we try to copy 6 rows, we know
+ # some data will have been inserted into the table by the time the progress
+ # callback abandons the query.
+ db progress $five_rows "expr 1"
+ catchsql {
+ INSERT INTO t1 SELECT a+10 FROM t1 WHERE a < 9
+ }
+ execsql {
+ SELECT count(*) FROM t1
+ }
+} 10
+
+# Test that an active transaction remains active and is not rolled back after
+# the progress callback abandons a query.
+do_test progress-1.3 {
+
+ db progress 0 ""
+ execsql BEGIN
+ execsql {
+ INSERT INTO t1 VALUES(11)
+ }
+ db progress 1 "expr 1"
+ catchsql {
+ INSERT INTO t1 VALUES(12)
+ }
+ db progress 0 ""
+ execsql COMMIT
+ execsql {
+ SELECT count(*) FROM t1
+ }
+} 11
+
+# Check that a value of 0 for N means no progress callback
+do_test progress-1.4 {
+ set counter 0
+ db progress 0 "[namespace code {incr counter}] ; expr 0"
+ execsql {
+ SELECT * FROM t1;
+ }
+ set counter
+} 0
+
+db progress 0 ""
+
+# Make sure other queries can be run from within the progress
+# handler. Ticket #1827
+#
+do_test progress-1.5 {
+ set rx 0
+ proc set_rx {args} {
+ db progress 0 {}
+ set ::rx [db eval {SELECT count(*) FROM t1}]
+ return [expr 0]
+ }
+ db progress 10 set_rx
+ db eval {
+ SELECT sum(a) FROM t1
+ }
+} {66}
+do_test progress-1.6 {
+ set ::rx
+} {11}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/quick.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/quick.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,80 @@
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file runs the quick test suite: all test files except those listed
+# in EXCLUDE below.
+#
+# $Id: quick.test,v 1.45 2006/06/23 08:05:38 danielk1977 Exp $
+
+proc lshift {lvar} {
+ upvar $lvar l
+ set ret [lindex $l 0]
+ set l [lrange $l 1 end]
+ return $ret
+}
+while {[set arg [lshift argv]] != ""} {
+ switch -- $arg {
+ -sharedpagercache {
+ sqlite3_enable_shared_cache 1
+ }
+ default {
+ set argv [linsert $argv 0 $arg]
+ break
+ }
+ }
+}
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+rename finish_test really_finish_test
+proc finish_test {} {}
+set ISQUICK 1
+
+set EXCLUDE {
+ all.test
+ async.test
+ async2.test
+ btree2.test
+ btree3.test
+ btree4.test
+ btree5.test
+ btree6.test
+ corrupt.test
+ crash.test
+ loadext.test
+ malloc.test
+ malloc2.test
+ malloc3.test
+ memleak.test
+ misuse.test
+ quick.test
+
+ autovacuum_crash.test
+ btree8.test
+ utf16.test
+ shared_err.test
+ vtab_err.test
+}
+
+if {[sqlite3 -has-codec]} {
+ # lappend EXCLUDE \
+ # conflict.test
+}
+
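+# Source every *.test file in the test directory except those listed in
+# EXCLUDE, closing the database handle after each one and flagging any test
+# file that leaks open file descriptors.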
+foreach testfile [lsort -dictionary [glob $testdir/*.test]] {
+ set tail [file tail $testfile]
+ if {[lsearch -exact $EXCLUDE $tail]>=0} continue
+ source $testfile
+ catch {db close}
+ if {$sqlite_open_file_count>0} {
+ puts "$tail did not close all files: $sqlite_open_file_count"
+ incr nErr
+ lappend ::failList $tail
+ }
+}
+source $testdir/misuse.test
+
+set sqlite_open_file_count 0
+really_finish_test
Added: freeswitch/trunk/libs/sqlite/test/quote.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/quote.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,89 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is the ability to specify table and column names
+# as quoted strings.
+#
+# $Id: quote.test,v 1.6 2005/11/01 15:48:25 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a table with a strange name and with strange column names.
+#
+do_test quote-1.0 {
+ catchsql {CREATE TABLE '@abc' ( '#xyz' int, '!pqr' text );}
+} {0 {}}
+
+# Insert, update and query the table.
+#
+do_test quote-1.1 {
+ catchsql {INSERT INTO '@abc' VALUES(5,'hello')}
+} {0 {}}
+do_test quote-1.2.1 {
+ catchsql {SELECT * FROM '@abc'}
+} {0 {5 hello}}
+do_test quote-1.2.2 {
+ catchsql {SELECT * FROM [@abc]} ;# SqlServer compatibility
+} {0 {5 hello}}
+do_test quote-1.2.3 {
+ catchsql {SELECT * FROM `@abc`} ;# MySQL compatibility
+} {0 {5 hello}}
+do_test quote-1.3 {
+ catchsql {
+ SELECT '@abc'.'!pqr', '@abc'.'#xyz'+5 FROM '@abc'
+ }
+} {0 {hello 10}}
+do_test quote-1.3.1 {
+ catchsql {
+ SELECT '!pqr', '#xyz'+5 FROM '@abc'
+ }
+} {0 {!pqr 5}}
+do_test quote-1.3.2 {
+ catchsql {
+ SELECT "!pqr", "#xyz"+5 FROM '@abc'
+ }
+} {0 {hello 10}}
+do_test quote-1.3.3 {
+ catchsql {
+ SELECT [!pqr], `#xyz`+5 FROM '@abc'
+ }
+} {0 {hello 10}}
+do_test quote-1.3 {
+ set r [catch {
+ execsql {SELECT '@abc'.'!pqr', '@abc'.'#xyz'+5 FROM '@abc'}
+ } msg ]
+ lappend r $msg
+} {0 {hello 10}}
+do_test quote-1.4 {
+ set r [catch {
+ execsql {UPDATE '@abc' SET '#xyz'=11}
+ } msg ]
+ lappend r $msg
+} {0 {}}
+do_test quote-1.5 {
+ set r [catch {
+ execsql {SELECT '@abc'.'!pqr', '@abc'.'#xyz'+5 FROM '@abc'}
+ } msg ]
+ lappend r $msg
+} {0 {hello 16}}
+
+# Drop the table with the strange name.
+#
+do_test quote-1.6 {
+ set r [catch {
+ execsql {DROP TABLE '@abc'}
+ } msg ]
+ lappend r $msg
+} {0 {}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/reindex.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/reindex.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,172 @@
+# 2004 November 5
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+# This file implements tests for the REINDEX command.
+#
+# $Id: reindex.test,v 1.3 2005/01/27 00:22:04 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# There is nothing to test if REINDEX is disabled in this build.
+#
+ifcapable {!reindex} {
+ finish_test
+ return
+}
+
+# Basic sanity checks.
+#
+do_test reindex-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(3,4);
+ CREATE INDEX i1 ON t1(a);
+ REINDEX;
+ }
+} {}
+integrity_check reindex-1.2
+do_test reindex-1.3 {
+ execsql {
+ REINDEX t1;
+ }
+} {}
+integrity_check reindex-1.4
+do_test reindex-1.5 {
+ execsql {
+ REINDEX i1;
+ }
+} {}
+integrity_check reindex-1.6
+do_test reindex-1.7 {
+ execsql {
+ REINDEX main.t1;
+ }
+} {}
+do_test reindex-1.8 {
+ execsql {
+ REINDEX main.i1;
+ }
+} {}
+do_test reindex-1.9 {
+ catchsql {
+ REINDEX bogus
+ }
+} {1 {unable to identify the object to be reindexed}}
+
+# Set up a table for testing that includes several different collating
+# sequences including some that we can modify.
+#
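+# The procs c1 and c2 below compare strings in reverse order (c2 also folds
+# case); [db collate NAME PROC] registers them as SQL collating sequences,
+# which is why the initial ORDER BY results below come out reversed.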
+do_test reindex-2.1 {
+ proc c1 {a b} {
+ return [expr {-[string compare $a $b]}]
+ }
+ proc c2 {a b} {
+ return [expr {-[string compare [string tolower $a] [string tolower $b]]}]
+ }
+ db collate c1 c1
+ db collate c2 c2
+ execsql {
+ CREATE TABLE t2(
+ a TEXT PRIMARY KEY COLLATE c1,
+ b TEXT UNIQUE COLLATE c2,
+ c TEXT COLLATE nocase,
+ d TEST COLLATE binary
+ );
+ INSERT INTO t2 VALUES('abc','abc','abc','abc');
+ INSERT INTO t2 VALUES('ABCD','ABCD','ABCD','ABCD');
+ INSERT INTO t2 VALUES('bcd','bcd','bcd','bcd');
+ INSERT INTO t2 VALUES('BCDE','BCDE','BCDE','BCDE');
+ SELECT a FROM t2 ORDER BY a;
+ }
+} {bcd abc BCDE ABCD}
+do_test reindex-2.2 {
+ execsql {
+ SELECT b FROM t2 ORDER BY b;
+ }
+} {BCDE bcd ABCD abc}
+do_test reindex-2.3 {
+ execsql {
+ SELECT c FROM t2 ORDER BY c;
+ }
+} {abc ABCD bcd BCDE}
+do_test reindex-2.4 {
+ execsql {
+ SELECT d FROM t2 ORDER BY d;
+ }
+} {ABCD BCDE abc bcd}
+
+# Change a collating sequence function. Verify that REINDEX rebuilds
+# the index.
+#
+do_test reindex-2.5 {
+ proc c1 {a b} {
+ return [string compare $a $b]
+ }
+ execsql {
+ SELECT a FROM t2 ORDER BY a;
+ }
+} {bcd abc BCDE ABCD}
+ifcapable {integrityck} {
+ do_test reindex-2.5.1 {
+ string equal ok [execsql {PRAGMA integrity_check}]
+ } {0}
+}
+do_test reindex-2.6 {
+ execsql {
+ REINDEX c2;
+ SELECT a FROM t2 ORDER BY a;
+ }
+} {bcd abc BCDE ABCD}
+do_test reindex-2.7 {
+ execsql {
+ REINDEX t1;
+ SELECT a FROM t2 ORDER BY a;
+ }
+} {bcd abc BCDE ABCD}
+do_test reindex-2.8 {
+ execsql {
+ REINDEX c1;
+ SELECT a FROM t2 ORDER BY a;
+ }
+} {ABCD BCDE abc bcd}
+integrity_check reindex-2.8.1
+
+# Try to REINDEX an index for which the collation sequence is not available.
+#
+do_test reindex-3.1 {
+ sqlite3 db2 test.db
+ catchsql {
+ REINDEX c1;
+ } db2
+} {1 {no such collation sequence: c1}}
+do_test reindex-3.2 {
+ proc need_collate {collation} {
+ db2 collate c1 c1
+ }
+ db2 collation_needed need_collate
+ catchsql {
+ REINDEX c1;
+ } db2
+} {0 {}}
+do_test reindex-3.3 {
+ catchsql {
+ REINDEX;
+ } db2
+} {1 {no such collation sequence: c2}}
+
+do_test reindex-3.99 {
+ db2 close
+} {}
+
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/rollback.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/rollback.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,81 @@
+# 2004 June 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is verifying that a rollback in one statement
+# caused by an ON CONFLICT ROLLBACK clause aborts any other pending
+# statements.
+#
+# $Id: rollback.test,v 1.4 2006/01/17 09:35:02 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+set DB [sqlite3_connection_pointer db]
+
+do_test rollback-1.1 {
+ execsql {
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(1);
+ INSERT INTO t1 VALUES(2);
+ INSERT INTO t1 VALUES(3);
+ INSERT INTO t1 VALUES(4);
+ SELECT * FROM t1;
+ }
+} {1 2 3 4}
+
+ifcapable conflict {
+ do_test rollback-1.2 {
+ execsql {
+ CREATE TABLE t3(a unique on conflict rollback);
+ INSERT INTO t3 SELECT a FROM t1;
+ BEGIN;
+ INSERT INTO t1 SELECT * FROM t1;
+ }
+ } {}
+}
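+# The SELECT is prepared and stepped with the raw C API so that it stays
+# active while the conflicting INSERT below forces a ROLLBACK.  The pending
+# statement should then return SQLITE_ABORT until it is reset.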
+do_test rollback-1.3 {
+ set STMT [sqlite3_prepare $DB "SELECT a FROM t1" -1 TAIL]
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+
+ifcapable conflict {
+ # This causes a ROLLBACK, which deletes the table out from underneath the
+ # SELECT statement.
+ #
+ do_test rollback-1.4 {
+ catchsql {
+ INSERT INTO t3 SELECT a FROM t1;
+ }
+ } {1 {column a is not unique}}
+
+ # Try to continue with the SELECT statement
+ #
+ do_test rollback-1.5 {
+ sqlite3_step $STMT
+ } {SQLITE_ABORT}
+}
+
+# Restart the SELECT statement
+#
+do_test rollback-1.6 {
+ sqlite3_reset $STMT
+} {SQLITE_OK}
+do_test rollback-1.7 {
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test rollback-1.8 {
+ sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test rollback-1.9 {
+ sqlite3_finalize $STMT
+} {SQLITE_OK}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/rowid.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/rowid.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,674 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the magic ROWID column that is
+# found on all tables.
+#
+# $Id: rowid.test,v 1.18 2005/01/21 03:12:16 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Basic ROWID functionality tests.
+#
+do_test rowid-1.1 {
+ execsql {
+ CREATE TABLE t1(x int, y int);
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(3,4);
+ SELECT x FROM t1 ORDER BY y;
+ }
+} {1 3}
+do_test rowid-1.2 {
+ set r [execsql {SELECT rowid FROM t1 ORDER BY x}]
+ global x2rowid rowid2x
+ set x2rowid(1) [lindex $r 0]
+ set x2rowid(3) [lindex $r 1]
+ set rowid2x($x2rowid(1)) 1
+ set rowid2x($x2rowid(3)) 3
+ llength $r
+} {2}
+do_test rowid-1.3 {
+ global x2rowid
+ set sql "SELECT x FROM t1 WHERE rowid==$x2rowid(1)"
+ execsql $sql
+} {1}
+do_test rowid-1.4 {
+ global x2rowid
+ set sql "SELECT x FROM t1 WHERE rowid==$x2rowid(3)"
+ execsql $sql
+} {3}
+do_test rowid-1.5 {
+ global x2rowid
+ set sql "SELECT x FROM t1 WHERE oid==$x2rowid(1)"
+ execsql $sql
+} {1}
+do_test rowid-1.6 {
+ global x2rowid
+ set sql "SELECT x FROM t1 WHERE OID==$x2rowid(3)"
+ execsql $sql
+} {3}
+do_test rowid-1.7 {
+ global x2rowid
+ set sql "SELECT x FROM t1 WHERE _rowid_==$x2rowid(1)"
+ execsql $sql
+} {1}
+do_test rowid-1.7.1 {
+ while 1 {
+ set norow [expr {int(rand()*1000000)}]
+ if {$norow!=$x2rowid(1) && $norow!=$x2rowid(3)} break
+ }
+ execsql "SELECT x FROM t1 WHERE rowid=$norow"
+} {}
+do_test rowid-1.8 {
+ global x2rowid
+ set v [execsql {SELECT x, oid FROM t1 order by x}]
+ set v2 [list 1 $x2rowid(1) 3 $x2rowid(3)]
+ expr {$v==$v2}
+} {1}
+do_test rowid-1.9 {
+ global x2rowid
+ set v [execsql {SELECT x, RowID FROM t1 order by x}]
+ set v2 [list 1 $x2rowid(1) 3 $x2rowid(3)]
+ expr {$v==$v2}
+} {1}
+do_test rowid-1.9 {
+ global x2rowid
+ set v [execsql {SELECT x, _rowid_ FROM t1 order by x}]
+ set v2 [list 1 $x2rowid(1) 3 $x2rowid(3)]
+ expr {$v==$v2}
+} {1}
+
+# We can insert or update the ROWID column.
+#
+do_test rowid-2.1 {
+ catchsql {
+ INSERT INTO t1(rowid,x,y) VALUES(1234,5,6);
+ SELECT rowid, * FROM t1;
+ }
+} {0 {1 1 2 2 3 4 1234 5 6}}
+do_test rowid-2.2 {
+ catchsql {
+ UPDATE t1 SET rowid=12345 WHERE x==1;
+ SELECT rowid, * FROM t1
+ }
+} {0 {2 3 4 1234 5 6 12345 1 2}}
+do_test rowid-2.3 {
+ catchsql {
+ INSERT INTO t1(y,x,oid) VALUES(8,7,1235);
+ SELECT rowid, * FROM t1 WHERE rowid>1000;
+ }
+} {0 {1234 5 6 1235 7 8 12345 1 2}}
+do_test rowid-2.4 {
+ catchsql {
+ UPDATE t1 SET oid=12346 WHERE x==1;
+ SELECT rowid, * FROM t1;
+ }
+} {0 {2 3 4 1234 5 6 1235 7 8 12346 1 2}}
+do_test rowid-2.5 {
+ catchsql {
+ INSERT INTO t1(x,_rowid_,y) VALUES(9,1236,10);
+ SELECT rowid, * FROM t1 WHERE rowid>1000;
+ }
+} {0 {1234 5 6 1235 7 8 1236 9 10 12346 1 2}}
+do_test rowid-2.6 {
+ catchsql {
+ UPDATE t1 SET _rowid_=12347 WHERE x==1;
+ SELECT rowid, * FROM t1 WHERE rowid>1000;
+ }
+} {0 {1234 5 6 1235 7 8 1236 9 10 12347 1 2}}
+
+# But we can use ROWID in the WHERE clause of an UPDATE that does not
+# change the ROWID.
+#
+do_test rowid-2.7 {
+ global x2rowid
+ set sql "UPDATE t1 SET x=2 WHERE OID==$x2rowid(3)"
+ execsql $sql
+ execsql {SELECT x FROM t1 ORDER BY x}
+} {1 2 5 7 9}
+do_test rowid-2.8 {
+ global x2rowid
+ set sql "UPDATE t1 SET x=3 WHERE _rowid_==$x2rowid(3)"
+ execsql $sql
+ execsql {SELECT x FROM t1 ORDER BY x}
+} {1 3 5 7 9}
+
+# We cannot index by ROWID
+#
+do_test rowid-2.9 {
+ set v [catch {execsql {CREATE INDEX idxt1 ON t1(rowid)}} msg]
+ lappend v $msg
+} {1 {table t1 has no column named rowid}}
+do_test rowid-2.10 {
+ set v [catch {execsql {CREATE INDEX idxt1 ON t1(_rowid_)}} msg]
+ lappend v $msg
+} {1 {table t1 has no column named _rowid_}}
+do_test rowid-2.11 {
+ set v [catch {execsql {CREATE INDEX idxt1 ON t1(oid)}} msg]
+ lappend v $msg
+} {1 {table t1 has no column named oid}}
+do_test rowid-2.12 {
+ set v [catch {execsql {CREATE INDEX idxt1 ON t1(x, rowid)}} msg]
+ lappend v $msg
+} {1 {table t1 has no column named rowid}}
+
+# Columns defined in the CREATE statement override the built-in ROWID
+# column names.
+#
+do_test rowid-3.1 {
+ execsql {
+ CREATE TABLE t2(rowid int, x int, y int);
+ INSERT INTO t2 VALUES(0,2,3);
+ INSERT INTO t2 VALUES(4,5,6);
+ INSERT INTO t2 VALUES(7,8,9);
+ SELECT * FROM t2 ORDER BY x;
+ }
+} {0 2 3 4 5 6 7 8 9}
+do_test rowid-3.2 {
+ execsql {SELECT * FROM t2 ORDER BY rowid}
+} {0 2 3 4 5 6 7 8 9}
+do_test rowid-3.3 {
+ execsql {SELECT rowid, x, y FROM t2 ORDER BY rowid}
+} {0 2 3 4 5 6 7 8 9}
+do_test rowid-3.4 {
+ set r1 [execsql {SELECT _rowid_, rowid FROM t2 ORDER BY rowid}]
+ foreach {a b c d e f} $r1 {}
+ set r2 [execsql {SELECT _rowid_, rowid FROM t2 ORDER BY x DESC}]
+ foreach {u v w x y z} $r2 {}
+ expr {$u==$e && $w==$c && $y==$a}
+} {1}
+# sqlite3 v3: do_probtest no longer exists, so this test is disabled.
+if 0 {
+do_probtest rowid-3.5 {
+ set r1 [execsql {SELECT _rowid_, rowid FROM t2 ORDER BY rowid}]
+ foreach {a b c d e f} $r1 {}
+ expr {$a!=$b && $c!=$d && $e!=$f}
+} {1}
+}
+
+# Let's try some more complex examples, including some joins.
+#
+do_test rowid-4.1 {
+ execsql {
+ DELETE FROM t1;
+ DELETE FROM t2;
+ }
+ for {set i 1} {$i<=50} {incr i} {
+ execsql "INSERT INTO t1(x,y) VALUES($i,[expr {$i*$i}])"
+ }
+ execsql {INSERT INTO t2 SELECT _rowid_, x*y, y*y FROM t1}
+ execsql {SELECT t2.y FROM t1, t2 WHERE t1.x==4 AND t1.rowid==t2.rowid}
+} {256}
+do_test rowid-4.2 {
+ execsql {SELECT t2.y FROM t2, t1 WHERE t1.x==4 AND t1.rowid==t2.rowid}
+} {256}
+do_test rowid-4.2.1 {
+ execsql {SELECT t2.y FROM t2, t1 WHERE t1.x==4 AND t1.oid==t2.rowid}
+} {256}
+do_test rowid-4.2.2 {
+ execsql {SELECT t2.y FROM t2, t1 WHERE t1.x==4 AND t1._rowid_==t2.rowid}
+} {256}
+do_test rowid-4.2.3 {
+ execsql {SELECT t2.y FROM t2, t1 WHERE t1.x==4 AND t2.rowid==t1.rowid}
+} {256}
+do_test rowid-4.2.4 {
+ execsql {SELECT t2.y FROM t2, t1 WHERE t2.rowid==t1.oid AND t1.x==4}
+} {256}
+do_test rowid-4.2.5 {
+ execsql {SELECT t2.y FROM t1, t2 WHERE t1.x==4 AND t1._rowid_==t2.rowid}
+} {256}
+do_test rowid-4.2.6 {
+ execsql {SELECT t2.y FROM t1, t2 WHERE t1.x==4 AND t2.rowid==t1.rowid}
+} {256}
+do_test rowid-4.2.7 {
+ execsql {SELECT t2.y FROM t1, t2 WHERE t2.rowid==t1.oid AND t1.x==4}
+} {256}
+do_test rowid-4.3 {
+ execsql {CREATE INDEX idxt1 ON t1(x)}
+ execsql {SELECT t2.y FROM t1, t2 WHERE t1.x==4 AND t1.rowid==t2.rowid}
+} {256}
+do_test rowid-4.3.1 {
+ execsql {SELECT t2.y FROM t1, t2 WHERE t1.x==4 AND t1._rowid_==t2.rowid}
+} {256}
+do_test rowid-4.3.2 {
+ execsql {SELECT t2.y FROM t1, t2 WHERE t2.rowid==t1.oid AND 4==t1.x}
+} {256}
+do_test rowid-4.4 {
+ execsql {SELECT t2.y FROM t2, t1 WHERE t1.x==4 AND t1.rowid==t2.rowid}
+} {256}
+do_test rowid-4.4.1 {
+ execsql {SELECT t2.y FROM t2, t1 WHERE t1.x==4 AND t1._rowid_==t2.rowid}
+} {256}
+do_test rowid-4.4.2 {
+ execsql {SELECT t2.y FROM t2, t1 WHERE t2.rowid==t1.oid AND 4==t1.x}
+} {256}
+do_test rowid-4.5 {
+ execsql {CREATE INDEX idxt2 ON t2(y)}
+ set sqlite_search_count 0
+ concat [execsql {
+ SELECT t1.x FROM t2, t1
+ WHERE t2.y==256 AND t1.rowid==t2.rowid
+ }] $sqlite_search_count
+} {4 3}
+do_test rowid-4.5.1 {
+ set sqlite_search_count 0
+ concat [execsql {
+ SELECT t1.x FROM t2, t1
+ WHERE t1.OID==t2.rowid AND t2.y==81
+ }] $sqlite_search_count
+} {3 3}
+do_test rowid-4.6 {
+ execsql {
+ SELECT t1.x FROM t1, t2
+ WHERE t2.y==256 AND t1.rowid==t2.rowid
+ }
+} {4}
+
+do_test rowid-5.1.1 {
+ ifcapable subquery {
+ execsql {DELETE FROM t1 WHERE _rowid_ IN (SELECT oid FROM t1 WHERE x>8)}
+ } else {
+ set oids [execsql {SELECT oid FROM t1 WHERE x>8}]
+ set where "_rowid_ = [join $oids { OR _rowid_ = }]"
+ execsql "DELETE FROM t1 WHERE $where"
+ }
+} {}
+do_test rowid-5.1.2 {
+ execsql {SELECT max(x) FROM t1}
+} {8}
+
+# Make sure a "WHERE rowid=X" clause works when there is no ROWID of X.
+#
+do_test rowid-6.1 {
+ execsql {
+ SELECT x FROM t1
+ }
+} {1 2 3 4 5 6 7 8}
+do_test rowid-6.2 {
+ for {set ::norow 1} {1} {incr ::norow} {
+ if {[execsql "SELECT x FROM t1 WHERE rowid=$::norow"]==""} break
+ }
+ execsql [subst {
+ DELETE FROM t1 WHERE rowid=$::norow
+ }]
+} {}
+do_test rowid-6.3 {
+ execsql {
+ SELECT x FROM t1
+ }
+} {1 2 3 4 5 6 7 8}
+
+# Beginning with version 2.3.4, SQLite computes rowids of new rows by
+# finding the maximum current rowid and adding one. It falls back to
+# the old random algorithm if the maximum rowid is the largest integer.
+# The following tests are for this new behavior.
+#
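+# A minimal sketch of the allocation rule, assuming a fresh table t(x):
+#
+#   INSERT INTO t VALUES(1);                -- assigned rowid 1
+#   INSERT INTO t VALUES(2);                -- assigned rowid 2    (max rowid + 1)
+#   INSERT INTO t(rowid,x) VALUES(1000,3);
+#   INSERT INTO t VALUES(4);                -- assigned rowid 1001 (max rowid + 1)
+#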
+do_test rowid-7.0 {
+ execsql {
+ DELETE FROM t1;
+ DROP TABLE t2;
+ DROP INDEX idxt1;
+ INSERT INTO t1 VALUES(1,2);
+ SELECT rowid, * FROM t1;
+ }
+} {1 1 2}
+do_test rowid-7.1 {
+ execsql {
+ INSERT INTO t1 VALUES(99,100);
+ SELECT rowid,* FROM t1
+ }
+} {1 1 2 2 99 100}
+do_test rowid-7.2 {
+ execsql {
+ CREATE TABLE t2(a INTEGER PRIMARY KEY, b);
+ INSERT INTO t2(b) VALUES(55);
+ SELECT * FROM t2;
+ }
+} {1 55}
+do_test rowid-7.3 {
+ execsql {
+ INSERT INTO t2(b) VALUES(66);
+ SELECT * FROM t2;
+ }
+} {1 55 2 66}
+do_test rowid-7.4 {
+ execsql {
+ INSERT INTO t2(a,b) VALUES(1000000,77);
+ INSERT INTO t2(b) VALUES(88);
+ SELECT * FROM t2;
+ }
+} {1 55 2 66 1000000 77 1000001 88}
+do_test rowid-7.5 {
+ execsql {
+ INSERT INTO t2(a,b) VALUES(2147483647,99);
+ INSERT INTO t2(b) VALUES(11);
+ SELECT b FROM t2 ORDER BY b;
+ }
+} {11 55 66 77 88 99}
+ifcapable subquery {
+ do_test rowid-7.6 {
+ execsql {
+ SELECT b FROM t2 WHERE a NOT IN(1,2,1000000,1000001,2147483647);
+ }
+ } {11}
+ do_test rowid-7.7 {
+ execsql {
+ INSERT INTO t2(b) VALUES(22);
+ INSERT INTO t2(b) VALUES(33);
+ INSERT INTO t2(b) VALUES(44);
+ INSERT INTO t2(b) VALUES(55);
+ SELECT b FROM t2 WHERE a NOT IN(1,2,1000000,1000001,2147483647)
+ ORDER BY b;
+ }
+ } {11 22 33 44 55}
+}
+do_test rowid-7.8 {
+ execsql {
+ DELETE FROM t2 WHERE a!=2;
+ INSERT INTO t2(b) VALUES(111);
+ SELECT * FROM t2;
+ }
+} {2 66 3 111}
+
+ifcapable {trigger} {
+# Make sure AFTER triggers that do INSERTs do not change the last_insert_rowid.
+# Ticket #290
+#
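+# In other words, after "INSERT INTO t3 VALUES(123)" fires the r3 trigger
+# (which in turn inserts a row into t4), last_insert_rowid() should still
+# report 123, the rowid of the row inserted into t3, and not the rowid
+# generated for the trigger's insert into t4.
+#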
+do_test rowid-8.1 {
+ execsql {
+ CREATE TABLE t3(a integer primary key);
+ CREATE TABLE t4(x);
+ INSERT INTO t4 VALUES(1);
+ CREATE TRIGGER r3 AFTER INSERT on t3 FOR EACH ROW BEGIN
+ INSERT INTO t4 VALUES(NEW.a+10);
+ END;
+ SELECT * FROM t3;
+ }
+} {}
+do_test rowid-8.2 {
+ execsql {
+ SELECT rowid, * FROM t4;
+ }
+} {1 1}
+do_test rowid-8.3 {
+ execsql {
+ INSERT INTO t3 VALUES(123);
+ SELECT last_insert_rowid();
+ }
+} {123}
+do_test rowid-8.4 {
+ execsql {
+ SELECT * FROM t3;
+ }
+} {123}
+do_test rowid-8.5 {
+ execsql {
+ SELECT rowid, * FROM t4;
+ }
+} {1 1 2 133}
+do_test rowid-8.6 {
+ execsql {
+ INSERT INTO t3 VALUES(NULL);
+ SELECT last_insert_rowid();
+ }
+} {124}
+do_test rowid-8.7 {
+ execsql {
+ SELECT * FROM t3;
+ }
+} {123 124}
+do_test rowid-8.8 {
+ execsql {
+ SELECT rowid, * FROM t4;
+ }
+} {1 1 2 133 3 134}
+} ;# endif trigger
+
+# If triggers are not enabled, simulate their effect for the tests that
+# follow.
+ifcapable {!trigger} {
+ execsql {
+ CREATE TABLE t3(a integer primary key);
+ INSERT INTO t3 VALUES(123);
+ INSERT INTO t3 VALUES(124);
+ }
+}
+
+# Ticket #377: Comparison between an integer primary key and floating point
+# values.
+#
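+# The point is that the comparison must be done numerically: a<123.5 should
+# match only the row with rowid 123, and a==123.000 should still match 123
+# even though the literal is written as a floating point value.
+#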
+do_test rowid-9.1 {
+ execsql {
+ SELECT * FROM t3 WHERE a<123.5
+ }
+} {123}
+do_test rowid-9.2 {
+ execsql {
+ SELECT * FROM t3 WHERE a<124.5
+ }
+} {123 124}
+do_test rowid-9.3 {
+ execsql {
+ SELECT * FROM t3 WHERE a>123.5
+ }
+} {124}
+do_test rowid-9.4 {
+ execsql {
+ SELECT * FROM t3 WHERE a>122.5
+ }
+} {123 124}
+do_test rowid-9.5 {
+ execsql {
+ SELECT * FROM t3 WHERE a==123.5
+ }
+} {}
+do_test rowid-9.6 {
+ execsql {
+ SELECT * FROM t3 WHERE a==123.000
+ }
+} {123}
+do_test rowid-9.7 {
+ execsql {
+ SELECT * FROM t3 WHERE a>100.5 AND a<200.5
+ }
+} {123 124}
+do_test rowid-9.8 {
+ execsql {
+ SELECT * FROM t3 WHERE a>'xyz';
+ }
+} {}
+do_test rowid-9.9 {
+ execsql {
+ SELECT * FROM t3 WHERE a<'xyz';
+ }
+} {123 124}
+do_test rowid-9.10 {
+ execsql {
+ SELECT * FROM t3 WHERE a>=122.9 AND a<=123.1
+ }
+} {123}
+
+# Ticket #567.  Comparisons of ROWID or an integer primary key against
+# floating point numbers still do not always work.
+#
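+# The tricky cases are non-integral bounds: rowid>=5.5 must behave like
+# rowid>=6, and rowid>5.0 must exclude 5 while rowid>=5.0 includes it, with
+# the same answers whether the scan runs ascending or descending and with
+# the rowid on either side of the comparison operator.
+#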
+do_test rowid-10.1 {
+ execsql {
+ CREATE TABLE t5(a);
+ INSERT INTO t5 VALUES(1);
+ INSERT INTO t5 VALUES(2);
+ INSERT INTO t5 SELECT a+2 FROM t5;
+ INSERT INTO t5 SELECT a+4 FROM t5;
+ SELECT rowid, * FROM t5;
+ }
+} {1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8}
+do_test rowid-10.2 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>=5.5}
+} {6 6 7 7 8 8}
+do_test rowid-10.3 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>=5.0}
+} {5 5 6 6 7 7 8 8}
+do_test rowid-10.4 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>5.5}
+} {6 6 7 7 8 8}
+do_test rowid-10.3.2 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>5.0}
+} {6 6 7 7 8 8}
+do_test rowid-10.5 {
+ execsql {SELECT rowid, a FROM t5 WHERE 5.5<=rowid}
+} {6 6 7 7 8 8}
+do_test rowid-10.6 {
+ execsql {SELECT rowid, a FROM t5 WHERE 5.5<rowid}
+} {6 6 7 7 8 8}
+do_test rowid-10.7 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid<=5.5}
+} {1 1 2 2 3 3 4 4 5 5}
+do_test rowid-10.8 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid<5.5}
+} {1 1 2 2 3 3 4 4 5 5}
+do_test rowid-10.9 {
+ execsql {SELECT rowid, a FROM t5 WHERE 5.5>=rowid}
+} {1 1 2 2 3 3 4 4 5 5}
+do_test rowid-10.10 {
+ execsql {SELECT rowid, a FROM t5 WHERE 5.5>rowid}
+} {1 1 2 2 3 3 4 4 5 5}
+do_test rowid-10.11 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>=5.5 ORDER BY rowid DESC}
+} {8 8 7 7 6 6}
+do_test rowid-10.11.2 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>=5.0 ORDER BY rowid DESC}
+} {8 8 7 7 6 6 5 5}
+do_test rowid-10.12 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>5.5 ORDER BY rowid DESC}
+} {8 8 7 7 6 6}
+do_test rowid-10.12.2 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>5.0 ORDER BY rowid DESC}
+} {8 8 7 7 6 6}
+do_test rowid-10.13 {
+ execsql {SELECT rowid, a FROM t5 WHERE 5.5<=rowid ORDER BY rowid DESC}
+} {8 8 7 7 6 6}
+do_test rowid-10.14 {
+ execsql {SELECT rowid, a FROM t5 WHERE 5.5<rowid ORDER BY rowid DESC}
+} {8 8 7 7 6 6}
+do_test rowid-10.15 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid<=5.5 ORDER BY rowid DESC}
+} {5 5 4 4 3 3 2 2 1 1}
+do_test rowid-10.16 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid<5.5 ORDER BY rowid DESC}
+} {5 5 4 4 3 3 2 2 1 1}
+do_test rowid-10.17 {
+ execsql {SELECT rowid, a FROM t5 WHERE 5.5>=rowid ORDER BY rowid DESC}
+} {5 5 4 4 3 3 2 2 1 1}
+do_test rowid-10.18 {
+ execsql {SELECT rowid, a FROM t5 WHERE 5.5>rowid ORDER BY rowid DESC}
+} {5 5 4 4 3 3 2 2 1 1}
+
+do_test rowid-10.30 {
+ execsql {
+ CREATE TABLE t6(a);
+ INSERT INTO t6(rowid,a) SELECT -a,a FROM t5;
+ SELECT rowid, * FROM t6;
+ }
+} {-8 8 -7 7 -6 6 -5 5 -4 4 -3 3 -2 2 -1 1}
+do_test rowid-10.31.1 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid>=-5.5}
+} {-5 5 -4 4 -3 3 -2 2 -1 1}
+do_test rowid-10.31.2 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid>=-5.0}
+} {-5 5 -4 4 -3 3 -2 2 -1 1}
+do_test rowid-10.32.1 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid>=-5.5 ORDER BY rowid DESC}
+} {-1 1 -2 2 -3 3 -4 4 -5 5}
+do_test rowid-10.32.2 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid>=-5.0 ORDER BY rowid DESC}
+} {-1 1 -2 2 -3 3 -4 4 -5 5}
+do_test rowid-10.33 {
+ execsql {SELECT rowid, a FROM t6 WHERE -5.5<=rowid}
+} {-5 5 -4 4 -3 3 -2 2 -1 1}
+do_test rowid-10.34 {
+ execsql {SELECT rowid, a FROM t6 WHERE -5.5<=rowid ORDER BY rowid DESC}
+} {-1 1 -2 2 -3 3 -4 4 -5 5}
+do_test rowid-10.35.1 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid>-5.5}
+} {-5 5 -4 4 -3 3 -2 2 -1 1}
+do_test rowid-10.35.2 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid>-5.0}
+} {-4 4 -3 3 -2 2 -1 1}
+do_test rowid-10.36.1 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid>-5.5 ORDER BY rowid DESC}
+} {-1 1 -2 2 -3 3 -4 4 -5 5}
+do_test rowid-10.36.2 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid>-5.0 ORDER BY rowid DESC}
+} {-1 1 -2 2 -3 3 -4 4}
+do_test rowid-10.37 {
+ execsql {SELECT rowid, a FROM t6 WHERE -5.5<rowid}
+} {-5 5 -4 4 -3 3 -2 2 -1 1}
+do_test rowid-10.38 {
+ execsql {SELECT rowid, a FROM t6 WHERE -5.5<rowid ORDER BY rowid DESC}
+} {-1 1 -2 2 -3 3 -4 4 -5 5}
+do_test rowid-10.39 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid<=-5.5}
+} {-8 8 -7 7 -6 6}
+do_test rowid-10.40 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid<=-5.5 ORDER BY rowid DESC}
+} {-6 6 -7 7 -8 8}
+do_test rowid-10.41 {
+ execsql {SELECT rowid, a FROM t6 WHERE -5.5>=rowid}
+} {-8 8 -7 7 -6 6}
+do_test rowid-10.42 {
+ execsql {SELECT rowid, a FROM t6 WHERE -5.5>=rowid ORDER BY rowid DESC}
+} {-6 6 -7 7 -8 8}
+do_test rowid-10.43 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid<-5.5}
+} {-8 8 -7 7 -6 6}
+do_test rowid-10.44 {
+ execsql {SELECT rowid, a FROM t6 WHERE rowid<-5.5 ORDER BY rowid DESC}
+} {-6 6 -7 7 -8 8}
+do_test rowid-10.45 {
+ execsql {SELECT rowid, a FROM t6 WHERE -5.5>rowid}
+} {-8 8 -7 7 -6 6}
+do_test rowid-10.46 {
+ execsql {SELECT rowid, a FROM t6 WHERE -5.5>rowid ORDER BY rowid DESC}
+} {-6 6 -7 7 -8 8}
+
+# Comparison of rowid against string values.
+#
+do_test rowid-11.1 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>'abc'}
+} {}
+do_test rowid-11.2 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid>='abc'}
+} {}
+do_test rowid-11.3 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid<'abc'}
+} {1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8}
+do_test rowid-11.4 {
+ execsql {SELECT rowid, a FROM t5 WHERE rowid<='abc'}
+} {1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8}
+
+# Test the automatic generation of rowids when the table already contains
+# a rowid with the maximum value.
+#
+do_test rowid-12.1 {
+ execsql {
+ CREATE TABLE t7(x INTEGER PRIMARY KEY, y);
+ INSERT INTO t7 VALUES(9223372036854775807,'a');
+ SELECT y FROM t7;
+ }
+} {a}
+do_test rowid-12.2 {
+ execsql {
+ INSERT INTO t7 VALUES(NULL,'b');
+ SELECT y FROM t7;
+ }
+} {b a}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/safety.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/safety.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,68 @@
+# 2005 January 11
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the sqlite3SafetyOn and sqlite3SafetyOff
+# functions. Those routines are not strictly necessary - they are
+# designed to detect misuse of the library.
+#
+# $Id: safety.test,v 1.2 2006/01/03 00:33:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
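+# The tests use the test-harness command sqlite_set_magic to switch the
+# connection's internal "magic" state between SQLITE_MAGIC_OPEN (the normal
+# state) and SQLITE_MAGIC_BUSY, making the handle look as if it were already
+# busy inside another call.  The library is then expected to report
+# "library routine called out of sequence" or SQLITE_MISUSE instead of
+# proceeding.
+#
+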
+do_test safety-1.1 {
+ set DB [sqlite3_connection_pointer db]
+ db eval {CREATE TABLE t1(a)}
+ sqlite_set_magic $DB SQLITE_MAGIC_BUSY
+ catchsql {
+ SELECT name FROM sqlite_master;
+ }
+} {1 {library routine called out of sequence}}
+do_test safety-1.2 {
+ sqlite_set_magic $DB SQLITE_MAGIC_OPEN
+ catchsql {
+ SELECT name FROM sqlite_master
+ }
+} {0 t1}
+
+do_test safety-2.1 {
+ proc safety_on {} "sqlite_set_magic $DB SQLITE_MAGIC_BUSY"
+ db function safety_on safety_on
+ catchsql {
+ SELECT safety_on(), name FROM sqlite_master
+ }
+} {1 {library routine called out of sequence}}
+do_test safety-2.2 {
+ catchsql {
+ SELECT 'hello'
+ }
+} {1 {library routine called out of sequence}}
+do_test safety-2.3 {
+ sqlite3_close $DB
+} {SQLITE_MISUSE}
+do_test safety-2.4 {
+ sqlite_set_magic $DB SQLITE_MAGIC_OPEN
+ execsql {
+ SELECT name FROM sqlite_master
+ }
+} {t1}
+
+do_test safety-3.1 {
+ set rc [catch {
+ db eval {SELECT name FROM sqlite_master} {
+ sqlite_set_magic $DB SQLITE_MAGIC_BUSY
+ }
+ } msg]
+ lappend rc $msg
+} {1 {library routine called out of sequence}}
+sqlite_set_magic $DB SQLITE_MAGIC_OPEN
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/schema.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/schema.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,333 @@
+# 2005 Jan 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file tests the various conditions under which an SQLITE_SCHEMA
+# error should be returned.
+#
+# $Id: schema.test,v 1.5 2005/12/06 12:53:01 danielk1977 Exp $
+
+#---------------------------------------------------------------------
+# When any of the following types of SQL statements or actions are
+# executed, all pre-compiled statements are invalidated. An attempt
+# to execute an invalidated statement always returns SQLITE_SCHEMA.
+#
+# CREATE/DROP TABLE...................................schema-1.*
+# CREATE/DROP VIEW....................................schema-2.*
+# CREATE/DROP TRIGGER.................................schema-3.*
+# CREATE/DROP INDEX...................................schema-4.*
+# DETACH..............................................schema-5.*
+# Deleting a user-function............................schema-6.*
+# Deleting a collation sequence.......................schema-7.*
+# Setting or changing the authorization function......schema-8.*
+#
+# Test cases schema-9.* and schema-10.* test some specific bugs
+# that came up during development.
+#
+# Test cases schema-11.* test that it is impossible to delete or
+# change a collation sequence or user-function while SQL statements
+# are executing. Adding new collations or functions is allowed.
+#
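+# Most of the groups below follow the same basic pattern:
+#
+#   set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+#   execsql { <some schema-changing statement or action> }
+#   sqlite3_step $::STMT        ;# expected to return SQLITE_ERROR
+#   sqlite3_finalize $::STMT    ;# expected to return SQLITE_SCHEMA
+#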
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test schema-1.1 {
+ set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ }
+ sqlite3_step $::STMT
+} {SQLITE_ERROR}
+do_test schema-1.2 {
+ sqlite3_finalize $::STMT
+} {SQLITE_SCHEMA}
+do_test schema-1.3 {
+ set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+ execsql {
+ DROP TABLE abc;
+ }
+ sqlite3_step $::STMT
+} {SQLITE_ERROR}
+do_test schema-1.4 {
+ sqlite3_finalize $::STMT
+} {SQLITE_SCHEMA}
+
+ifcapable view {
+ do_test schema-2.1 {
+ set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+ execsql {
+ CREATE VIEW v1 AS SELECT * FROM sqlite_master;
+ }
+ sqlite3_step $::STMT
+ } {SQLITE_ERROR}
+ do_test schema-2.2 {
+ sqlite3_finalize $::STMT
+ } {SQLITE_SCHEMA}
+ do_test schema-2.3 {
+ set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+ execsql {
+ DROP VIEW v1;
+ }
+ sqlite3_step $::STMT
+ } {SQLITE_ERROR}
+ do_test schema-2.4 {
+ sqlite3_finalize $::STMT
+ } {SQLITE_SCHEMA}
+}
+
+ifcapable trigger {
+ do_test schema-3.1 {
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ }
+ set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+ execsql {
+ CREATE TRIGGER abc_trig AFTER INSERT ON abc BEGIN
+ SELECT 1, 2, 3;
+ END;
+ }
+ sqlite3_step $::STMT
+ } {SQLITE_ERROR}
+ do_test schema-3.2 {
+ sqlite3_finalize $::STMT
+ } {SQLITE_SCHEMA}
+ do_test schema-3.3 {
+ set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+ execsql {
+ DROP TRIGGER abc_trig;
+ }
+ sqlite3_step $::STMT
+ } {SQLITE_ERROR}
+ do_test schema-3.4 {
+ sqlite3_finalize $::STMT
+ } {SQLITE_SCHEMA}
+}
+
+do_test schema-4.1 {
+ catchsql {
+ CREATE TABLE abc(a, b, c);
+ }
+ set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+ execsql {
+ CREATE INDEX abc_index ON abc(a);
+ }
+ sqlite3_step $::STMT
+} {SQLITE_ERROR}
+do_test schema-4.2 {
+ sqlite3_finalize $::STMT
+} {SQLITE_SCHEMA}
+do_test schema-4.3 {
+ set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+ execsql {
+ DROP INDEX abc_index;
+ }
+ sqlite3_step $::STMT
+} {SQLITE_ERROR}
+do_test schema-4.4 {
+ sqlite3_finalize $::STMT
+} {SQLITE_SCHEMA}
+
+#---------------------------------------------------------------------
+# Tests 5.1 to 5.4 check that prepared statements are invalidated when
+# a database is DETACHed (but not when one is ATTACHed).
+#
+do_test schema-5.1 {
+ set sql {SELECT * FROM abc;}
+ set ::STMT [sqlite3_prepare $::DB $sql -1 TAIL]
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ }
+ sqlite3_step $::STMT
+} {SQLITE_DONE}
+do_test schema-5.2 {
+ sqlite3_reset $::STMT
+} {SQLITE_OK}
+do_test schema-5.3 {
+ execsql {
+ DETACH aux;
+ }
+ sqlite3_step $::STMT
+} {SQLITE_ERROR}
+do_test schema-5.4 {
+ sqlite3_finalize $::STMT
+} {SQLITE_SCHEMA}
+
+#---------------------------------------------------------------------
+# Tests 6.* check that prepared statements are invalidated when
+# a user-function is deleted (but not when one is added).
+do_test schema-6.1 {
+ set sql {SELECT * FROM abc;}
+ set ::STMT [sqlite3_prepare $::DB $sql -1 TAIL]
+ db function hello_function {}
+ sqlite3_step $::STMT
+} {SQLITE_DONE}
+do_test schema-6.2 {
+ sqlite3_reset $::STMT
+} {SQLITE_OK}
+do_test schema-6.3 {
+ sqlite_delete_function $::DB hello_function
+ sqlite3_step $::STMT
+} {SQLITE_ERROR}
+do_test schema-6.4 {
+ sqlite3_finalize $::STMT
+} {SQLITE_SCHEMA}
+
+#---------------------------------------------------------------------
+# Tests 7.* check that prepared statements are invalidated when
+# a collation sequence is deleted (but not when one is added).
+#
+ifcapable utf16 {
+ do_test schema-7.1 {
+ set sql {SELECT * FROM abc;}
+ set ::STMT [sqlite3_prepare $::DB $sql -1 TAIL]
+ add_test_collate $::DB 1 1 1
+ sqlite3_step $::STMT
+ } {SQLITE_DONE}
+ do_test schema-7.2 {
+ sqlite3_reset $::STMT
+ } {SQLITE_OK}
+ do_test schema-7.3 {
+ add_test_collate $::DB 0 0 0
+ sqlite3_step $::STMT
+ } {SQLITE_ERROR}
+ do_test schema-7.4 {
+ sqlite3_finalize $::STMT
+ } {SQLITE_SCHEMA}
+}
+
+#---------------------------------------------------------------------
+# Tests 8.1 and 8.2 check that prepared statements are invalidated when
+# the authorization function is set.
+#
+ifcapable auth {
+ do_test schema-8.1 {
+ set ::STMT [sqlite3_prepare $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+ db auth {}
+ sqlite3_step $::STMT
+ } {SQLITE_ERROR}
+ do_test schema-8.3 {
+ sqlite3_finalize $::STMT
+ } {SQLITE_SCHEMA}
+}
+
+#---------------------------------------------------------------------
+# schema-9.1: Test that if a table is dropped by one database connection,
+# other database connections are aware of the schema change.
+# schema-9.2: Test that if a view is dropped by one database connection,
+# other database connections are aware of the schema change.
+#
+do_test schema-9.1 {
+ sqlite3 db2 test.db
+ execsql {
+ DROP TABLE abc;
+ } db2
+ db2 close
+ catchsql {
+ SELECT * FROM abc;
+ }
+} {1 {no such table: abc}}
+execsql {
+ CREATE TABLE abc(a, b, c);
+}
+ifcapable view {
+ do_test schema-9.2 {
+ execsql {
+ CREATE VIEW abcview AS SELECT * FROM abc;
+ }
+ sqlite3 db2 test.db
+ execsql {
+ DROP VIEW abcview;
+ } db2
+ db2 close
+ catchsql {
+ SELECT * FROM abcview;
+ }
+ } {1 {no such table: abcview}}
+}
+
+#---------------------------------------------------------------------
+# Test that if a CREATE TABLE statement fails because there are other
+# btree cursors open on the same database file, it does not corrupt
+# the sqlite_master table.
+#
+do_test schema-10.1 {
+ execsql {
+ INSERT INTO abc VALUES(1, 2, 3);
+ }
+ set sql {SELECT * FROM abc}
+ set ::STMT [sqlite3_prepare $::DB $sql -1 TAIL]
+ sqlite3_step $::STMT
+} {SQLITE_ROW}
+do_test schema-10.2 {
+ catchsql {
+ CREATE TABLE t2(a, b, c);
+ }
+} {1 {database table is locked}}
+do_test schema-10.3 {
+ sqlite3_finalize $::STMT
+} {SQLITE_OK}
+do_test schema-10.4 {
+ sqlite3 db2 test.db
+ execsql {
+ SELECT * FROM abc
+ } db2
+} {1 2 3}
+do_test schema-10.5 {
+ db2 close
+} {}
+
+#---------------------------------------------------------------------
+# Attempting to delete or replace a user-function or collation sequence
+# while there are active statements returns an SQLITE_BUSY error.
+#
+# schema-11.1 - 11.4: User function.
+# schema-11.5 - 11.8: Collation sequence.
+#
+do_test schema-11.1 {
+ db function tstfunc {}
+ set sql {SELECT * FROM abc}
+ set ::STMT [sqlite3_prepare $::DB $sql -1 TAIL]
+ sqlite3_step $::STMT
+} {SQLITE_ROW}
+do_test schema-11.2 {
+ sqlite_delete_function $::DB tstfunc
+} {SQLITE_BUSY}
+do_test schema-11.3 {
+ set rc [catch {
+ db function tstfunc {}
+ } msg]
+ list $rc $msg
+} {1 {Unable to delete/modify user-function due to active statements}}
+do_test schema-11.4 {
+ sqlite3_finalize $::STMT
+} {SQLITE_OK}
+do_test schema-11.5 {
+ db collate tstcollate {}
+ set sql {SELECT * FROM abc}
+ set ::STMT [sqlite3_prepare $::DB $sql -1 TAIL]
+ sqlite3_step $::STMT
+} {SQLITE_ROW}
+do_test schema-11.6 {
+ sqlite_delete_collation $::DB tstcollate
+} {SQLITE_BUSY}
+do_test schema-11.7 {
+ set rc [catch {
+ db collate tstcollate {}
+ } msg]
+ list $rc $msg
+} {1 {Unable to delete/modify collation sequence due to active statements}}
+do_test schema-11.8 {
+ sqlite3_finalize $::STMT
+} {SQLITE_OK}
+
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/select1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/select1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,845 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the SELECT statement.
+#
+# $Id: select1.test,v 1.51 2006/04/11 14:16:22 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Try to select on a non-existent table.
+#
+do_test select1-1.1 {
+ set v [catch {execsql {SELECT * FROM test1}} msg]
+ lappend v $msg
+} {1 {no such table: test1}}
+
+
+execsql {CREATE TABLE test1(f1 int, f2 int)}
+
+do_test select1-1.2 {
+ set v [catch {execsql {SELECT * FROM test1, test2}} msg]
+ lappend v $msg
+} {1 {no such table: test2}}
+do_test select1-1.3 {
+ set v [catch {execsql {SELECT * FROM test2, test1}} msg]
+ lappend v $msg
+} {1 {no such table: test2}}
+
+execsql {INSERT INTO test1(f1,f2) VALUES(11,22)}
+
+
+# Make sure the columns are extracted correctly.
+#
+do_test select1-1.4 {
+ execsql {SELECT f1 FROM test1}
+} {11}
+do_test select1-1.5 {
+ execsql {SELECT f2 FROM test1}
+} {22}
+do_test select1-1.6 {
+ execsql {SELECT f2, f1 FROM test1}
+} {22 11}
+do_test select1-1.7 {
+ execsql {SELECT f1, f2 FROM test1}
+} {11 22}
+do_test select1-1.8 {
+ execsql {SELECT * FROM test1}
+} {11 22}
+do_test select1-1.8.1 {
+ execsql {SELECT *, * FROM test1}
+} {11 22 11 22}
+do_test select1-1.8.2 {
+ execsql {SELECT *, min(f1,f2), max(f1,f2) FROM test1}
+} {11 22 11 22}
+do_test select1-1.8.3 {
+ execsql {SELECT 'one', *, 'two', * FROM test1}
+} {one 11 22 two 11 22}
+
+execsql {CREATE TABLE test2(r1 real, r2 real)}
+execsql {INSERT INTO test2(r1,r2) VALUES(1.1,2.2)}
+
+do_test select1-1.9 {
+ execsql {SELECT * FROM test1, test2}
+} {11 22 1.1 2.2}
+do_test select1-1.9.1 {
+ execsql {SELECT *, 'hi' FROM test1, test2}
+} {11 22 1.1 2.2 hi}
+do_test select1-1.9.2 {
+ execsql {SELECT 'one', *, 'two', * FROM test1, test2}
+} {one 11 22 1.1 2.2 two 11 22 1.1 2.2}
+do_test select1-1.10 {
+ execsql {SELECT test1.f1, test2.r1 FROM test1, test2}
+} {11 1.1}
+do_test select1-1.11 {
+ execsql {SELECT test1.f1, test2.r1 FROM test2, test1}
+} {11 1.1}
+do_test select1-1.11.1 {
+ execsql {SELECT * FROM test2, test1}
+} {1.1 2.2 11 22}
+do_test select1-1.11.2 {
+ execsql {SELECT * FROM test1 AS a, test1 AS b}
+} {11 22 11 22}
+do_test select1-1.12 {
+ execsql {SELECT max(test1.f1,test2.r1), min(test1.f2,test2.r2)
+ FROM test2, test1}
+} {11 2.2}
+do_test select1-1.13 {
+ execsql {SELECT min(test1.f1,test2.r1), max(test1.f2,test2.r2)
+ FROM test1, test2}
+} {1.1 22}
+
+set long {This is a string that is too big to fit inside a NBFS buffer}
+do_test select1-2.0 {
+ execsql "
+ DROP TABLE test2;
+ DELETE FROM test1;
+ INSERT INTO test1 VALUES(11,22);
+ INSERT INTO test1 VALUES(33,44);
+ CREATE TABLE t3(a,b);
+ INSERT INTO t3 VALUES('abc',NULL);
+ INSERT INTO t3 VALUES(NULL,'xyz');
+ INSERT INTO t3 SELECT * FROM test1;
+ CREATE TABLE t4(a,b);
+ INSERT INTO t4 VALUES(NULL,'$long');
+ SELECT * FROM t3;
+ "
+} {abc {} {} xyz 11 22 33 44}
+
+# Error messages from sqliteExprCheck
+#
+do_test select1-2.1 {
+ set v [catch {execsql {SELECT count(f1,f2) FROM test1}} msg]
+ lappend v $msg
+} {1 {wrong number of arguments to function count()}}
+do_test select1-2.2 {
+ set v [catch {execsql {SELECT count(f1) FROM test1}} msg]
+ lappend v $msg
+} {0 2}
+do_test select1-2.3 {
+ set v [catch {execsql {SELECT Count() FROM test1}} msg]
+ lappend v $msg
+} {0 2}
+do_test select1-2.4 {
+ set v [catch {execsql {SELECT COUNT(*) FROM test1}} msg]
+ lappend v $msg
+} {0 2}
+do_test select1-2.5 {
+ set v [catch {execsql {SELECT COUNT(*)+1 FROM test1}} msg]
+ lappend v $msg
+} {0 3}
+do_test select1-2.5.1 {
+ execsql {SELECT count(*),count(a),count(b) FROM t3}
+} {4 3 3}
+do_test select1-2.5.2 {
+ execsql {SELECT count(*),count(a),count(b) FROM t4}
+} {1 0 1}
+do_test select1-2.5.3 {
+ execsql {SELECT count(*),count(a),count(b) FROM t4 WHERE b=5}
+} {0 0 0}
+do_test select1-2.6 {
+ set v [catch {execsql {SELECT min(*) FROM test1}} msg]
+ lappend v $msg
+} {1 {wrong number of arguments to function min()}}
+do_test select1-2.7 {
+ set v [catch {execsql {SELECT Min(f1) FROM test1}} msg]
+ lappend v $msg
+} {0 11}
+do_test select1-2.8 {
+ set v [catch {execsql {SELECT MIN(f1,f2) FROM test1}} msg]
+ lappend v [lsort $msg]
+} {0 {11 33}}
+do_test select1-2.8.1 {
+ execsql {SELECT coalesce(min(a),'xyzzy') FROM t3}
+} {11}
+do_test select1-2.8.2 {
+ execsql {SELECT min(coalesce(a,'xyzzy')) FROM t3}
+} {11}
+do_test select1-2.8.3 {
+ execsql {SELECT min(b), min(b) FROM t4}
+} [list $long $long]
+do_test select1-2.9 {
+ set v [catch {execsql {SELECT MAX(*) FROM test1}} msg]
+ lappend v $msg
+} {1 {wrong number of arguments to function MAX()}}
+do_test select1-2.10 {
+ set v [catch {execsql {SELECT Max(f1) FROM test1}} msg]
+ lappend v $msg
+} {0 33}
+do_test select1-2.11 {
+ set v [catch {execsql {SELECT max(f1,f2) FROM test1}} msg]
+ lappend v [lsort $msg]
+} {0 {22 44}}
+do_test select1-2.12 {
+ set v [catch {execsql {SELECT MAX(f1,f2)+1 FROM test1}} msg]
+ lappend v [lsort $msg]
+} {0 {23 45}}
+do_test select1-2.13 {
+ set v [catch {execsql {SELECT MAX(f1)+1 FROM test1}} msg]
+ lappend v $msg
+} {0 34}
+do_test select1-2.13.1 {
+ execsql {SELECT coalesce(max(a),'xyzzy') FROM t3}
+} {abc}
+do_test select1-2.13.2 {
+ execsql {SELECT max(coalesce(a,'xyzzy')) FROM t3}
+} {xyzzy}
+do_test select1-2.14 {
+ set v [catch {execsql {SELECT SUM(*) FROM test1}} msg]
+ lappend v $msg
+} {1 {wrong number of arguments to function SUM()}}
+do_test select1-2.15 {
+ set v [catch {execsql {SELECT Sum(f1) FROM test1}} msg]
+ lappend v $msg
+} {0 44}
+do_test select1-2.16 {
+ set v [catch {execsql {SELECT sum(f1,f2) FROM test1}} msg]
+ lappend v $msg
+} {1 {wrong number of arguments to function sum()}}
+do_test select1-2.17 {
+ set v [catch {execsql {SELECT SUM(f1)+1 FROM test1}} msg]
+ lappend v $msg
+} {0 45}
+do_test select1-2.17.1 {
+ execsql {SELECT sum(a) FROM t3}
+} {44.0}
+do_test select1-2.18 {
+ set v [catch {execsql {SELECT XYZZY(f1) FROM test1}} msg]
+ lappend v $msg
+} {1 {no such function: XYZZY}}
+do_test select1-2.19 {
+ set v [catch {execsql {SELECT SUM(min(f1,f2)) FROM test1}} msg]
+ lappend v $msg
+} {0 44}
+do_test select1-2.20 {
+ set v [catch {execsql {SELECT SUM(min(f1)) FROM test1}} msg]
+ lappend v $msg
+} {1 {misuse of aggregate function min()}}
+
+# WHERE clause expressions
+#
+do_test select1-3.1 {
+ set v [catch {execsql {SELECT f1 FROM test1 WHERE f1<11}} msg]
+ lappend v $msg
+} {0 {}}
+do_test select1-3.2 {
+ set v [catch {execsql {SELECT f1 FROM test1 WHERE f1<=11}} msg]
+ lappend v $msg
+} {0 11}
+do_test select1-3.3 {
+ set v [catch {execsql {SELECT f1 FROM test1 WHERE f1=11}} msg]
+ lappend v $msg
+} {0 11}
+do_test select1-3.4 {
+ set v [catch {execsql {SELECT f1 FROM test1 WHERE f1>=11}} msg]
+ lappend v [lsort $msg]
+} {0 {11 33}}
+do_test select1-3.5 {
+ set v [catch {execsql {SELECT f1 FROM test1 WHERE f1>11}} msg]
+ lappend v [lsort $msg]
+} {0 33}
+do_test select1-3.6 {
+ set v [catch {execsql {SELECT f1 FROM test1 WHERE f1!=11}} msg]
+ lappend v [lsort $msg]
+} {0 33}
+do_test select1-3.7 {
+ set v [catch {execsql {SELECT f1 FROM test1 WHERE min(f1,f2)!=11}} msg]
+ lappend v [lsort $msg]
+} {0 33}
+do_test select1-3.8 {
+ set v [catch {execsql {SELECT f1 FROM test1 WHERE max(f1,f2)!=11}} msg]
+ lappend v [lsort $msg]
+} {0 {11 33}}
+do_test select1-3.9 {
+ set v [catch {execsql {SELECT f1 FROM test1 WHERE count(f1,f2)!=11}} msg]
+ lappend v $msg
+} {1 {wrong number of arguments to function count()}}
+
+# ORDER BY expressions
+#
+do_test select1-4.1 {
+ set v [catch {execsql {SELECT f1 FROM test1 ORDER BY f1}} msg]
+ lappend v $msg
+} {0 {11 33}}
+do_test select1-4.2 {
+ set v [catch {execsql {SELECT f1 FROM test1 ORDER BY -f1}} msg]
+ lappend v $msg
+} {0 {33 11}}
+do_test select1-4.3 {
+ set v [catch {execsql {SELECT f1 FROM test1 ORDER BY min(f1,f2)}} msg]
+ lappend v $msg
+} {0 {11 33}}
+do_test select1-4.4 {
+ set v [catch {execsql {SELECT f1 FROM test1 ORDER BY min(f1)}} msg]
+ lappend v $msg
+} {1 {misuse of aggregate function min()}}
+
+# The restriction not allowing constants in the ORDER BY clause
+# has been removed. See ticket #1768
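+# A constant term now simply evaluates to the same sort key for every row,
+# so it is accepted and effectively has no influence on the result order;
+# the commented-out tests below are superseded by select1-4.5 and
+# select1-4.6 further down.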
+#do_test select1-4.5 {
+# catchsql {
+# SELECT f1 FROM test1 ORDER BY 8.4;
+# }
+#} {1 {ORDER BY terms must not be non-integer constants}}
+#do_test select1-4.6 {
+# catchsql {
+# SELECT f1 FROM test1 ORDER BY '8.4';
+# }
+#} {1 {ORDER BY terms must not be non-integer constants}}
+#do_test select1-4.7.1 {
+# catchsql {
+# SELECT f1 FROM test1 ORDER BY 'xyz';
+# }
+#} {1 {ORDER BY terms must not be non-integer constants}}
+#do_test select1-4.7.2 {
+# catchsql {
+# SELECT f1 FROM test1 ORDER BY -8.4;
+# }
+#} {1 {ORDER BY terms must not be non-integer constants}}
+#do_test select1-4.7.3 {
+# catchsql {
+# SELECT f1 FROM test1 ORDER BY +8.4;
+# }
+#} {1 {ORDER BY terms must not be non-integer constants}}
+#do_test select1-4.7.4 {
+# catchsql {
+# SELECT f1 FROM test1 ORDER BY 4294967296; -- constant larger than 32 bits
+# }
+#} {1 {ORDER BY terms must not be non-integer constants}}
+
+do_test select1-4.5 {
+ execsql {
+ SELECT f1 FROM test1 ORDER BY 8.4
+ }
+} {11 33}
+do_test select1-4.6 {
+ execsql {
+ SELECT f1 FROM test1 ORDER BY '8.4'
+ }
+} {11 33}
+
+do_test select1-4.8 {
+ execsql {
+ CREATE TABLE t5(a,b);
+ INSERT INTO t5 VALUES(1,10);
+ INSERT INTO t5 VALUES(2,9);
+ SELECT * FROM t5 ORDER BY 1;
+ }
+} {1 10 2 9}
+do_test select1-4.9.1 {
+ execsql {
+ SELECT * FROM t5 ORDER BY 2;
+ }
+} {2 9 1 10}
+do_test select1-4.9.2 {
+ execsql {
+ SELECT * FROM t5 ORDER BY +2;
+ }
+} {2 9 1 10}
+do_test select1-4.10.1 {
+ catchsql {
+ SELECT * FROM t5 ORDER BY 3;
+ }
+} {1 {ORDER BY column number 3 out of range - should be between 1 and 2}}
+do_test select1-4.10.2 {
+ catchsql {
+ SELECT * FROM t5 ORDER BY -1;
+ }
+} {1 {ORDER BY column number -1 out of range - should be between 1 and 2}}
+do_test select1-4.11 {
+ execsql {
+ INSERT INTO t5 VALUES(3,10);
+ SELECT * FROM t5 ORDER BY 2, 1 DESC;
+ }
+} {2 9 3 10 1 10}
+do_test select1-4.12 {
+ execsql {
+ SELECT * FROM t5 ORDER BY 1 DESC, b;
+ }
+} {3 10 2 9 1 10}
+do_test select1-4.13 {
+ execsql {
+ SELECT * FROM t5 ORDER BY b DESC, 1;
+ }
+} {1 10 3 10 2 9}
+
+
+# ORDER BY ignored on an aggregate query
+#
+do_test select1-5.1 {
+ set v [catch {execsql {SELECT max(f1) FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 33}
+
+execsql {CREATE TABLE test2(t1 test, t2 text)}
+execsql {INSERT INTO test2 VALUES('abc','xyz')}
+
+# Check for column naming
+#
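+# With PRAGMA full_column_names=off (the default) a plain column reference is
+# reported under its bare column name; with full_column_names=on it is
+# reported as "table.column".  An explicit AS alias is used verbatim in
+# either mode.
+#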
+do_test select1-6.1 {
+ set v [catch {execsql2 {SELECT f1 FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {f1 11 f1 33}}
+do_test select1-6.1.1 {
+ db eval {PRAGMA full_column_names=on}
+ set v [catch {execsql2 {SELECT f1 FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {test1.f1 11 test1.f1 33}}
+do_test select1-6.1.2 {
+ set v [catch {execsql2 {SELECT f1 as 'f1' FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {f1 11 f1 33}}
+do_test select1-6.1.3 {
+ set v [catch {execsql2 {SELECT * FROM test1 WHERE f1==11}} msg]
+ lappend v $msg
+} {0 {f1 11 f2 22}}
+do_test select1-6.1.4 {
+ set v [catch {execsql2 {SELECT DISTINCT * FROM test1 WHERE f1==11}} msg]
+ db eval {PRAGMA full_column_names=off}
+ lappend v $msg
+} {0 {f1 11 f2 22}}
+do_test select1-6.1.5 {
+ set v [catch {execsql2 {SELECT * FROM test1 WHERE f1==11}} msg]
+ lappend v $msg
+} {0 {f1 11 f2 22}}
+do_test select1-6.1.6 {
+ set v [catch {execsql2 {SELECT DISTINCT * FROM test1 WHERE f1==11}} msg]
+ lappend v $msg
+} {0 {f1 11 f2 22}}
+do_test select1-6.2 {
+ set v [catch {execsql2 {SELECT f1 as xyzzy FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {xyzzy 11 xyzzy 33}}
+do_test select1-6.3 {
+ set v [catch {execsql2 {SELECT f1 as "xyzzy" FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {xyzzy 11 xyzzy 33}}
+do_test select1-6.3.1 {
+ set v [catch {execsql2 {SELECT f1 as 'xyzzy ' FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {{xyzzy } 11 {xyzzy } 33}}
+do_test select1-6.4 {
+ set v [catch {execsql2 {SELECT f1+F2 as xyzzy FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {xyzzy 33 xyzzy 77}}
+do_test select1-6.4a {
+ set v [catch {execsql2 {SELECT f1+F2 FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {f1+F2 33 f1+F2 77}}
+do_test select1-6.5 {
+ set v [catch {execsql2 {SELECT test1.f1+F2 FROM test1 ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {test1.f1+F2 33 test1.f1+F2 77}}
+do_test select1-6.5.1 {
+ execsql2 {PRAGMA full_column_names=on}
+ set v [catch {execsql2 {SELECT test1.f1+F2 FROM test1 ORDER BY f2}} msg]
+ execsql2 {PRAGMA full_column_names=off}
+ lappend v $msg
+} {0 {test1.f1+F2 33 test1.f1+F2 77}}
+do_test select1-6.6 {
+ set v [catch {execsql2 {SELECT test1.f1+F2, t1 FROM test1, test2
+ ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {test1.f1+F2 33 t1 abc test1.f1+F2 77 t1 abc}}
+do_test select1-6.7 {
+ set v [catch {execsql2 {SELECT A.f1, t1 FROM test1 as A, test2
+ ORDER BY f2}} msg]
+ lappend v $msg
+} {0 {f1 11 t1 abc f1 33 t1 abc}}
+do_test select1-6.8 {
+ set v [catch {execsql2 {SELECT A.f1, f1 FROM test1 as A, test1 as B
+ ORDER BY f2}} msg]
+ lappend v $msg
+} {1 {ambiguous column name: f1}}
+do_test select1-6.8b {
+ set v [catch {execsql2 {SELECT A.f1, B.f1 FROM test1 as A, test1 as B
+ ORDER BY f2}} msg]
+ lappend v $msg
+} {1 {ambiguous column name: f2}}
+do_test select1-6.8c {
+ set v [catch {execsql2 {SELECT A.f1, f1 FROM test1 as A, test1 as A
+ ORDER BY f2}} msg]
+ lappend v $msg
+} {1 {ambiguous column name: A.f1}}
+do_test select1-6.9.1 {
+ set v [catch {execsql {SELECT A.f1, B.f1 FROM test1 as A, test1 as B
+ ORDER BY A.f1, B.f1}} msg]
+ lappend v $msg
+} {0 {11 11 11 33 33 11 33 33}}
+do_test select1-6.9.2 {
+ set v [catch {execsql2 {SELECT A.f1, B.f1 FROM test1 as A, test1 as B
+ ORDER BY A.f1, B.f1}} msg]
+ lappend v $msg
+} {0 {f1 11 f1 11 f1 33 f1 33 f1 11 f1 11 f1 33 f1 33}}
+
+ifcapable compound {
+do_test select1-6.10 {
+ set v [catch {execsql2 {
+ SELECT f1 FROM test1 UNION SELECT f2 FROM test1
+ ORDER BY f2;
+ }} msg]
+ lappend v $msg
+} {0 {f1 11 f1 22 f1 33 f1 44}}
+do_test select1-6.11 {
+ set v [catch {execsql2 {
+ SELECT f1 FROM test1 UNION SELECT f2+100 FROM test1
+ ORDER BY f2+100;
+ }} msg]
+ lappend v $msg
+} {0 {f1 11 f1 33 f1 122 f1 144}}
+} ;#ifcapable compound
+
+do_test select1-7.1 {
+ set v [catch {execsql {
+ SELECT f1 FROM test1 WHERE f2=;
+ }} msg]
+ lappend v $msg
+} {1 {near ";": syntax error}}
+ifcapable compound {
+do_test select1-7.2 {
+ set v [catch {execsql {
+ SELECT f1 FROM test1 UNION SELECT WHERE;
+ }} msg]
+ lappend v $msg
+} {1 {near "WHERE": syntax error}}
+} ;# ifcapable compound
+do_test select1-7.3 {
+ set v [catch {execsql {SELECT f1 FROM test1 as 'hi', test2 as}} msg]
+ lappend v $msg
+} {1 {near "as": syntax error}}
+do_test select1-7.4 {
+ set v [catch {execsql {
+ SELECT f1 FROM test1 ORDER BY;
+ }} msg]
+ lappend v $msg
+} {1 {near ";": syntax error}}
+do_test select1-7.5 {
+ set v [catch {execsql {
+ SELECT f1 FROM test1 ORDER BY f1 desc, f2 where;
+ }} msg]
+ lappend v $msg
+} {1 {near "where": syntax error}}
+do_test select1-7.6 {
+ set v [catch {execsql {
+ SELECT count(f1,f2 FROM test1;
+ }} msg]
+ lappend v $msg
+} {1 {near "FROM": syntax error}}
+do_test select1-7.7 {
+ set v [catch {execsql {
+ SELECT count(f1,f2+) FROM test1;
+ }} msg]
+ lappend v $msg
+} {1 {near ")": syntax error}}
+do_test select1-7.8 {
+ set v [catch {execsql {
+ SELECT f1 FROM test1 ORDER BY f2, f1+;
+ }} msg]
+ lappend v $msg
+} {1 {near ";": syntax error}}
+
+do_test select1-8.1 {
+ execsql {SELECT f1 FROM test1 WHERE 4.3+2.4 OR 1 ORDER BY f1}
+} {11 33}
+do_test select1-8.2 {
+ execsql {
+ SELECT f1 FROM test1 WHERE ('x' || f1) BETWEEN 'x10' AND 'x20'
+ ORDER BY f1
+ }
+} {11}
+do_test select1-8.3 {
+ execsql {
+ SELECT f1 FROM test1 WHERE 5-3==2
+ ORDER BY f1
+ }
+} {11 33}
+
+# TODO: This test is failing because f1 is now being loaded off the
+# disk as a vdbe integer, not a string. Hence the value of f1/(f1-11)
+# changes because of rounding. Disable the test for now.
+if 0 {
+do_test select1-8.4 {
+ execsql {
+ SELECT coalesce(f1/(f1-11),'x'),
+ coalesce(min(f1/(f1-11),5),'y'),
+ coalesce(max(f1/(f1-33),6),'z')
+ FROM test1 ORDER BY f1
+ }
+} {x y 6 1.5 1.5 z}
+}
+do_test select1-8.5 {
+ execsql {
+ SELECT min(1,2,3), -max(1,2,3)
+ FROM test1 ORDER BY f1
+ }
+} {1 -3 1 -3}
+
+
+# Check the behavior when the result set is empty
+#
+# SQLite v3 always sets r(*).
+#
+# do_test select1-9.1 {
+# catch {unset r}
+# set r(*) {}
+# db eval {SELECT * FROM test1 WHERE f1<0} r {}
+# set r(*)
+# } {}
+do_test select1-9.2 {
+ execsql {PRAGMA empty_result_callbacks=on}
+ catch {unset r}
+ set r(*) {}
+ db eval {SELECT * FROM test1 WHERE f1<0} r {}
+ set r(*)
+} {f1 f2}
+ifcapable subquery {
+ do_test select1-9.3 {
+ set r(*) {}
+ db eval {SELECT * FROM test1 WHERE f1<(select count(*) from test2)} r {}
+ set r(*)
+ } {f1 f2}
+}
+do_test select1-9.4 {
+ set r(*) {}
+ db eval {SELECT * FROM test1 ORDER BY f1} r {}
+ set r(*)
+} {f1 f2}
+do_test select1-9.5 {
+ set r(*) {}
+ db eval {SELECT * FROM test1 WHERE f1<0 ORDER BY f1} r {}
+ set r(*)
+} {f1 f2}
+unset r
+
+# Check for ORDER BY clauses that refer to an AS name in the column list
+#
+do_test select1-10.1 {
+ execsql {
+ SELECT f1 AS x FROM test1 ORDER BY x
+ }
+} {11 33}
+do_test select1-10.2 {
+ execsql {
+ SELECT f1 AS x FROM test1 ORDER BY -x
+ }
+} {33 11}
+do_test select1-10.3 {
+ execsql {
+ SELECT f1-23 AS x FROM test1 ORDER BY abs(x)
+ }
+} {10 -12}
+do_test select1-10.4 {
+ execsql {
+ SELECT f1-23 AS x FROM test1 ORDER BY -abs(x)
+ }
+} {-12 10}
+do_test select1-10.5 {
+ execsql {
+ SELECT f1-22 AS x, f2-22 as y FROM test1
+ }
+} {-11 0 11 22}
+do_test select1-10.6 {
+ execsql {
+ SELECT f1-22 AS x, f2-22 as y FROM test1 WHERE x>0 AND y<50
+ }
+} {11 22}
+
+# Check the ability to specify "TABLE.*" in the result set of a SELECT
+#
+do_test select1-11.1 {
+ execsql {
+ DELETE FROM t3;
+ DELETE FROM t4;
+ INSERT INTO t3 VALUES(1,2);
+ INSERT INTO t4 VALUES(3,4);
+ SELECT * FROM t3, t4;
+ }
+} {1 2 3 4}
+do_test select1-11.2.1 {
+ execsql {
+ SELECT * FROM t3, t4;
+ }
+} {1 2 3 4}
+do_test select1-11.2.2 {
+ execsql2 {
+ SELECT * FROM t3, t4;
+ }
+} {a 3 b 4 a 3 b 4}
+do_test select1-11.4.1 {
+ execsql {
+ SELECT t3.*, t4.b FROM t3, t4;
+ }
+} {1 2 4}
+do_test select1-11.4.2 {
+ execsql {
+ SELECT "t3".*, t4.b FROM t3, t4;
+ }
+} {1 2 4}
+do_test select1-11.5.1 {
+ execsql2 {
+ SELECT t3.*, t4.b FROM t3, t4;
+ }
+} {a 1 b 4 b 4}
+do_test select1-11.6 {
+ execsql2 {
+ SELECT x.*, y.b FROM t3 AS x, t4 AS y;
+ }
+} {a 1 b 4 b 4}
+do_test select1-11.7 {
+ execsql {
+ SELECT t3.b, t4.* FROM t3, t4;
+ }
+} {2 3 4}
+do_test select1-11.8 {
+ execsql2 {
+ SELECT t3.b, t4.* FROM t3, t4;
+ }
+} {b 4 a 3 b 4}
+do_test select1-11.9 {
+ execsql2 {
+ SELECT x.b, y.* FROM t3 AS x, t4 AS y;
+ }
+} {b 4 a 3 b 4}
+do_test select1-11.10 {
+ catchsql {
+ SELECT t5.* FROM t3, t4;
+ }
+} {1 {no such table: t5}}
+do_test select1-11.11 {
+ catchsql {
+ SELECT t3.* FROM t3 AS x, t4;
+ }
+} {1 {no such table: t3}}
+ifcapable subquery {
+ do_test select1-11.12 {
+ execsql2 {
+ SELECT t3.* FROM t3, (SELECT max(a), max(b) FROM t4)
+ }
+ } {a 1 b 2}
+ do_test select1-11.13 {
+ execsql2 {
+ SELECT t3.* FROM (SELECT max(a), max(b) FROM t4), t3
+ }
+ } {a 1 b 2}
+ do_test select1-11.14 {
+ execsql2 {
+ SELECT * FROM t3, (SELECT max(a), max(b) FROM t4) AS 'tx'
+ }
+ } {a 1 b 2 max(a) 3 max(b) 4}
+ do_test select1-11.15 {
+ execsql2 {
+ SELECT y.*, t3.* FROM t3, (SELECT max(a), max(b) FROM t4) AS y
+ }
+ } {max(a) 3 max(b) 4 a 1 b 2}
+}
+do_test select1-11.16 {
+ execsql2 {
+ SELECT y.* FROM t3 as y, t4 as z
+ }
+} {a 1 b 2}
+
+# Tests of SELECT statements without a FROM clause.
+#
+do_test select1-12.1 {
+ execsql2 {
+ SELECT 1+2+3
+ }
+} {1+2+3 6}
+do_test select1-12.2 {
+ execsql2 {
+ SELECT 1,'hello',2
+ }
+} {1 1 'hello' hello 2 2}
+do_test select1-12.3 {
+ execsql2 {
+ SELECT 1 AS 'a','hello' AS 'b',2 AS 'c'
+ }
+} {a 1 b hello c 2}
+do_test select1-12.4 {
+ execsql {
+ DELETE FROM t3;
+ INSERT INTO t3 VALUES(1,2);
+ }
+} {}
+
+ifcapable compound {
+do_test select1-12.5 {
+ execsql {
+ SELECT * FROM t3 UNION SELECT 3 AS 'a', 4 ORDER BY a;
+ }
+} {1 2 3 4}
+
+do_test select1-12.6 {
+ execsql {
+ SELECT 3, 4 UNION SELECT * FROM t3;
+ }
+} {1 2 3 4}
+} ;# ifcapable compound
+
+ifcapable subquery {
+ do_test select1-12.7 {
+ execsql {
+ SELECT * FROM t3 WHERE a=(SELECT 1);
+ }
+ } {1 2}
+ do_test select1-12.8 {
+ execsql {
+ SELECT * FROM t3 WHERE a=(SELECT 2);
+ }
+ } {}
+}
+
+ifcapable {compound && subquery} {
+ do_test select1-12.9 {
+ execsql2 {
+ SELECT x FROM (
+ SELECT a AS x, b AS y FROM t3 UNION SELECT a,b FROM t4 ORDER BY a,b
+ ) ORDER BY x;
+ }
+ } {x 1 x 3}
+ do_test select1-12.10 {
+ execsql2 {
+ SELECT z.x FROM (
+ SELECT a AS x,b AS y FROM t3 UNION SELECT a, b FROM t4 ORDER BY a,b
+ ) AS 'z' ORDER BY x;
+ }
+ } {x 1 x 3}
+} ;# ifcapable compound
+
+
+# Check for a VDBE stack growth problem that existed at one point.
+#
+ifcapable subquery {
+ do_test select1-13.1 {
+ execsql {
+ BEGIN;
+ create TABLE abc(a, b, c, PRIMARY KEY(a, b));
+ INSERT INTO abc VALUES(1, 1, 1);
+ }
+ for {set i 0} {$i<10} {incr i} {
+ execsql {
+ INSERT INTO abc SELECT a+(select max(a) FROM abc),
+ b+(select max(a) FROM abc), c+(select max(a) FROM abc) FROM abc;
+ }
+ }
+ execsql {COMMIT}
+
+ # This used to seg-fault when the problem existed.
+ execsql {
+ SELECT count(
+ (SELECT a FROM abc WHERE a = NULL AND b >= upper.c)
+ ) FROM abc AS upper;
+ }
+ } {0}
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/select2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/select2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,185 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the SELECT statement.
+#
+# $Id: select2.test,v 1.25 2005/07/21 03:15:01 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a table with some data
+#
+execsql {CREATE TABLE tbl1(f1 int, f2 int)}
+execsql {BEGIN}
+for {set i 0} {$i<=30} {incr i} {
+ execsql "INSERT INTO tbl1 VALUES([expr {$i%9}],[expr {$i%10}])"
+}
+execsql {COMMIT}
+
+# Do a second query inside a first.
+#
+do_test select2-1.1 {
+ set sql {SELECT DISTINCT f1 FROM tbl1 ORDER BY f1}
+ set r {}
+ catch {unset data}
+ db eval $sql data {
+ set f1 $data(f1)
+ lappend r $f1:
+ set sql2 "SELECT f2 FROM tbl1 WHERE f1=$f1 ORDER BY f2"
+ db eval $sql2 d2 {
+ lappend r $d2(f2)
+ }
+ }
+ set r
+} {0: 0 7 8 9 1: 0 1 8 9 2: 0 1 2 9 3: 0 1 2 3 4: 2 3 4 5: 3 4 5 6: 4 5 6 7: 5 6 7 8: 6 7 8}
+
+do_test select2-1.2 {
+ set sql {SELECT DISTINCT f1 FROM tbl1 WHERE f1>3 AND f1<5}
+ set r {}
+ db eval $sql data {
+ set f1 $data(f1)
+ lappend r $f1:
+ set sql2 "SELECT f2 FROM tbl1 WHERE f1=$f1 ORDER BY f2"
+ db eval $sql2 d2 {
+ lappend r $d2(f2)
+ }
+ }
+ set r
+} {4: 2 3 4}
+
+# Create a largish table. Do this twice, once using the TCL cache and once
+# without. Compare the performance to make sure things go faster with the
+# cache turned on.
+#
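+# The difference between the two loops: the cached run binds Tcl variables
+# inside "db eval {INSERT INTO tbl2 VALUES($i,$i2,$i3)}", so the compiled
+# statement can be reused, while the uncached run builds a fresh SQL string
+# for every row with execsql, forcing a new parse each time.
+#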
+ifcapable tclvar {
+ do_test select2-2.0.1 {
+ set t1 [time {
+ execsql {CREATE TABLE tbl2(f1 int, f2 int, f3 int); BEGIN;}
+ for {set i 1} {$i<=30000} {incr i} {
+ set i2 [expr {$i*2}]
+ set i3 [expr {$i*3}]
+ db eval {INSERT INTO tbl2 VALUES($i,$i2,$i3)}
+ }
+ execsql {COMMIT}
+ }]
+ list
+ } {}
+ puts "time with cache: $::t1"
+}
+catch {execsql {DROP TABLE tbl2}}
+do_test select2-2.0.2 {
+ set t2 [time {
+ execsql {CREATE TABLE tbl2(f1 int, f2 int, f3 int); BEGIN;}
+ for {set i 1} {$i<=30000} {incr i} {
+ set i2 [expr {$i*2}]
+ set i3 [expr {$i*3}]
+ execsql "INSERT INTO tbl2 VALUES($i,$i2,$i3)"
+ }
+ execsql {COMMIT}
+ }]
+ list
+} {}
+puts "time without cache: $t2"
+ifcapable tclvar {
+ do_test select2-2.0.3 {
+ expr {[lindex $t1 0]<[lindex $t2 0]}
+ } 1
+}
+
+do_test select2-2.1 {
+ execsql {SELECT count(*) FROM tbl2}
+} {30000}
+do_test select2-2.2 {
+ execsql {SELECT count(*) FROM tbl2 WHERE f2>1000}
+} {29500}
+
+do_test select2-3.1 {
+ execsql {SELECT f1 FROM tbl2 WHERE 1000=f2}
+} {500}
+
+do_test select2-3.2a {
+ execsql {CREATE INDEX idx1 ON tbl2(f2)}
+} {}
+do_test select2-3.2b {
+ execsql {SELECT f1 FROM tbl2 WHERE 1000=f2}
+} {500}
+do_test select2-3.2c {
+ execsql {SELECT f1 FROM tbl2 WHERE f2=1000}
+} {500}
+do_test select2-3.2d {
+ set sqlite_search_count 0
+btree_breakpoint
+ execsql {SELECT * FROM tbl2 WHERE 1000=f2}
+ set sqlite_search_count
+} {3}
+do_test select2-3.2e {
+ set sqlite_search_count 0
+ execsql {SELECT * FROM tbl2 WHERE f2=1000}
+ set sqlite_search_count
+} {3}
+
+# Make sure queries run faster with an index than without
+#
+do_test select2-3.3 {
+ execsql {DROP INDEX idx1}
+ set sqlite_search_count 0
+ execsql {SELECT f1 FROM tbl2 WHERE f2==2000}
+ set sqlite_search_count
+} {29999}
+
+# Make sure we can optimize functions in the WHERE clause that
+# use fields from two or more different tables.  (Bug #6)
+#
+do_test select2-4.1 {
+ execsql {
+ CREATE TABLE aa(a);
+ CREATE TABLE bb(b);
+ INSERT INTO aa VALUES(1);
+ INSERT INTO aa VALUES(3);
+ INSERT INTO bb VALUES(2);
+ INSERT INTO bb VALUES(4);
+ SELECT * FROM aa, bb WHERE max(a,b)>2;
+ }
+} {1 4 3 2 3 4}
+do_test select2-4.2 {
+ execsql {
+ INSERT INTO bb VALUES(0);
+ SELECT * FROM aa, bb WHERE b;
+ }
+} {1 2 1 4 3 2 3 4}
+do_test select2-4.3 {
+ execsql {
+ SELECT * FROM aa, bb WHERE NOT b;
+ }
+} {1 0 3 0}
+do_test select2-4.4 {
+ execsql {
+ SELECT * FROM aa, bb WHERE min(a,b);
+ }
+} {1 2 1 4 3 2 3 4}
+do_test select2-4.5 {
+ execsql {
+ SELECT * FROM aa, bb WHERE NOT min(a,b);
+ }
+} {1 0 3 0}
+do_test select2-4.6 {
+ execsql {
+ SELECT * FROM aa, bb WHERE CASE WHEN a=b-1 THEN 1 END;
+ }
+} {1 2 3 4}
+do_test select2-4.7 {
+ execsql {
+ SELECT * FROM aa, bb WHERE CASE WHEN a=b-1 THEN 0 ELSE 1 END;
+ }
+} {1 4 1 0 3 2 3 0}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/select3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/select3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,239 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing aggregate functions and the
+# GROUP BY and HAVING clauses of SELECT statements.
+#
+# $Id: select3.test,v 1.19 2006/04/11 14:16:22 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Build some test data
+#
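+# For each n from 1 through 31 the "log" column holds the smallest j such
+# that 2^j >= n, so the log values 0..5 label groups of 1, 1, 2, 4, 8 and 15
+# rows, which the aggregate and GROUP BY tests below rely on.
+#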
+do_test select3-1.0 {
+ execsql {
+ CREATE TABLE t1(n int, log int);
+ BEGIN;
+ }
+ for {set i 1} {$i<32} {incr i} {
+ for {set j 0} {pow(2,$j)<$i} {incr j} {}
+ execsql "INSERT INTO t1 VALUES($i,$j)"
+ }
+ execsql {
+ COMMIT
+ }
+ execsql {SELECT DISTINCT log FROM t1 ORDER BY log}
+} {0 1 2 3 4 5}
+
+# Basic aggregate functions.
+#
+do_test select3-1.1 {
+ execsql {SELECT count(*) FROM t1}
+} {31}
+do_test select3-1.2 {
+ execsql {
+ SELECT min(n),min(log),max(n),max(log),sum(n),sum(log),avg(n),avg(log)
+ FROM t1
+ }
+} {1 0 31 5 496 124 16.0 4.0}
+do_test select3-1.3 {
+ execsql {SELECT max(n)/avg(n), max(log)/avg(log) FROM t1}
+} {1.9375 1.25}
+
+# Try some basic GROUP BY clauses
+#
+do_test select3-2.1 {
+ execsql {SELECT log, count(*) FROM t1 GROUP BY log ORDER BY log}
+} {0 1 1 1 2 2 3 4 4 8 5 15}
+do_test select3-2.2 {
+ execsql {SELECT log, min(n) FROM t1 GROUP BY log ORDER BY log}
+} {0 1 1 2 2 3 3 5 4 9 5 17}
+do_test select3-2.3.1 {
+ execsql {SELECT log, avg(n) FROM t1 GROUP BY log ORDER BY log}
+} {0 1.0 1 2.0 2 3.5 3 6.5 4 12.5 5 24.0}
+do_test select3-2.3.2 {
+ execsql {SELECT log, avg(n)+1 FROM t1 GROUP BY log ORDER BY log}
+} {0 2.0 1 3.0 2 4.5 3 7.5 4 13.5 5 25.0}
+do_test select3-2.4 {
+ execsql {SELECT log, avg(n)-min(n) FROM t1 GROUP BY log ORDER BY log}
+} {0 0.0 1 0.0 2 0.5 3 1.5 4 3.5 5 7.0}
+do_test select3-2.5 {
+ execsql {SELECT log*2+1, avg(n)-min(n) FROM t1 GROUP BY log ORDER BY log}
+} {1 0.0 3 0.0 5 0.5 7 1.5 9 3.5 11 7.0}
+do_test select3-2.6 {
+ execsql {
+ SELECT log*2+1 as x, count(*) FROM t1 GROUP BY x ORDER BY x
+ }
+} {1 1 3 1 5 2 7 4 9 8 11 15}
+do_test select3-2.7 {
+ execsql {
+ SELECT log*2+1 AS x, count(*) AS y FROM t1 GROUP BY x ORDER BY y, x
+ }
+} {1 1 3 1 5 2 7 4 9 8 11 15}
+do_test select3-2.8 {
+ execsql {
+ SELECT log*2+1 AS x, count(*) AS y FROM t1 GROUP BY x ORDER BY 10-(x+y)
+ }
+} {11 15 9 8 7 4 5 2 3 1 1 1}
+#do_test select3-2.9 {
+# catchsql {
+# SELECT log, count(*) FROM t1 GROUP BY 'x' ORDER BY log;
+# }
+#} {1 {GROUP BY terms must not be non-integer constants}}
+do_test select3-2.10 {
+ catchsql {
+ SELECT log, count(*) FROM t1 GROUP BY 0 ORDER BY log;
+ }
+} {1 {GROUP BY column number 0 out of range - should be between 1 and 2}}
+do_test select3-2.11 {
+ catchsql {
+ SELECT log, count(*) FROM t1 GROUP BY 3 ORDER BY log;
+ }
+} {1 {GROUP BY column number 3 out of range - should be between 1 and 2}}
+do_test select3-2.12 {
+ catchsql {
+ SELECT log, count(*) FROM t1 GROUP BY 1 ORDER BY log;
+ }
+} {0 {0 1 1 1 2 2 3 4 4 8 5 15}}
+#do_test select3-2.13 {
+# catchsql {
+# SELECT log, count(*) FROM t1 GROUP BY 2 ORDER BY log;
+# }
+#} {0 {0 1 1 1 2 2 3 4 4 8 5 15}}
+#do_test select3-2.14 {
+# catchsql {
+# SELECT log, count(*) FROM t1 GROUP BY count(*) ORDER BY log;
+# }
+#} {0 {0 1 1 1 2 2 3 4 4 8 5 15}}
+
+# Cannot have a HAVING without a GROUP BY
+#
+do_test select3-3.1 {
+ set v [catch {execsql {SELECT log, count(*) FROM t1 HAVING log>=4}} msg]
+ lappend v $msg
+} {1 {a GROUP BY clause is required before HAVING}}
+
+# Toss in some HAVING clauses
+#
+do_test select3-4.1 {
+ execsql {SELECT log, count(*) FROM t1 GROUP BY log HAVING log>=4 ORDER BY log}
+} {4 8 5 15}
+do_test select3-4.2 {
+ execsql {
+ SELECT log, count(*) FROM t1
+ GROUP BY log
+ HAVING count(*)>=4
+ ORDER BY log
+ }
+} {3 4 4 8 5 15}
+do_test select3-4.3 {
+ execsql {
+ SELECT log, count(*) FROM t1
+ GROUP BY log
+ HAVING count(*)>=4
+ ORDER BY max(n)+0
+ }
+} {3 4 4 8 5 15}
+do_test select3-4.4 {
+ execsql {
+ SELECT log AS x, count(*) AS y FROM t1
+ GROUP BY x
+ HAVING y>=4
+ ORDER BY max(n)+0
+ }
+} {3 4 4 8 5 15}
+do_test select3-4.5 {
+ execsql {
+ SELECT log AS x FROM t1
+ GROUP BY x
+ HAVING count(*)>=4
+ ORDER BY max(n)+0
+ }
+} {3 4 5}
+
+do_test select3-5.1 {
+ execsql {
+ SELECT log, count(*), avg(n), max(n+log*2) FROM t1
+ GROUP BY log
+ ORDER BY max(n+log*2)+0, avg(n)+0
+ }
+} {0 1 1.0 1 1 1 2.0 4 2 2 3.5 8 3 4 6.5 14 4 8 12.5 24 5 15 24.0 41}
+do_test select3-5.2 {
+ execsql {
+ SELECT log, count(*), avg(n), max(n+log*2) FROM t1
+ GROUP BY log
+ ORDER BY max(n+log*2)+0, min(log,avg(n))+0
+ }
+} {0 1 1.0 1 1 1 2.0 4 2 2 3.5 8 3 4 6.5 14 4 8 12.5 24 5 15 24.0 41}
+
+# Test sorting of GROUP BY results in the presence of an index
+# on the GROUP BY column.
+#
+do_test select3-6.1 {
+ execsql {
+ SELECT log, min(n) FROM t1 GROUP BY log ORDER BY log;
+ }
+} {0 1 1 2 2 3 3 5 4 9 5 17}
+do_test select3-6.2 {
+ execsql {
+ SELECT log, min(n) FROM t1 GROUP BY log ORDER BY log DESC;
+ }
+} {5 17 4 9 3 5 2 3 1 2 0 1}
+do_test select3-6.3 {
+ execsql {
+ SELECT log, min(n) FROM t1 GROUP BY log ORDER BY 1;
+ }
+} {0 1 1 2 2 3 3 5 4 9 5 17}
+do_test select3-6.4 {
+ execsql {
+ SELECT log, min(n) FROM t1 GROUP BY log ORDER BY 1 DESC;
+ }
+} {5 17 4 9 3 5 2 3 1 2 0 1}
+do_test select3-6.5 {
+ execsql {
+ CREATE INDEX i1 ON t1(log);
+ SELECT log, min(n) FROM t1 GROUP BY log ORDER BY log;
+ }
+} {0 1 1 2 2 3 3 5 4 9 5 17}
+do_test select3-6.6 {
+ execsql {
+ SELECT log, min(n) FROM t1 GROUP BY log ORDER BY log DESC;
+ }
+} {5 17 4 9 3 5 2 3 1 2 0 1}
+do_test select3-6.7 {
+ execsql {
+ SELECT log, min(n) FROM t1 GROUP BY log ORDER BY 1;
+ }
+} {0 1 1 2 2 3 3 5 4 9 5 17}
+do_test select3-6.8 {
+ execsql {
+ SELECT log, min(n) FROM t1 GROUP BY log ORDER BY 1 DESC;
+ }
+} {5 17 4 9 3 5 2 3 1 2 0 1}
+
+# Sometimes an aggregate query can return no rows at all.
+#
+do_test select3-7.1 {
+ execsql {
+ CREATE TABLE t2(a,b);
+ INSERT INTO t2 VALUES(1,2);
+ SELECT a, sum(b) FROM t2 WHERE b=5 GROUP BY a;
+ }
+} {}
+do_test select3-7.2 {
+ execsql {
+ SELECT a, sum(b) FROM t2 WHERE b=5;
+ }
+} {{} {}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/select4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/select4.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,617 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this file is testing UNION, INTERSECT and EXCEPT operators
+# in SELECT statements.
+#
+# $Id: select4.test,v 1.20 2006/06/20 11:01:09 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Most tests in this file depend on compound-select. But there are a couple
+# right at the end that test DISTINCT, so we cannot omit the entire file.
+#
+ifcapable compound {
+
+# Build some test data
+#
+execsql {
+ CREATE TABLE t1(n int, log int);
+ BEGIN;
+}
+for {set i 1} {$i<32} {incr i} {
+ for {set j 0} {pow(2,$j)<$i} {incr j} {}
+ execsql "INSERT INTO t1 VALUES($i,$j)"
+}
+execsql {
+ COMMIT;
+}
+
+do_test select4-1.0 {
+ execsql {SELECT DISTINCT log FROM t1 ORDER BY log}
+} {0 1 2 3 4 5}
+
+# Union All operator
+#
+do_test select4-1.1a {
+ lsort [execsql {SELECT DISTINCT log FROM t1}]
+} {0 1 2 3 4 5}
+do_test select4-1.1b {
+ lsort [execsql {SELECT n FROM t1 WHERE log=3}]
+} {5 6 7 8}
+do_test select4-1.1c {
+ execsql {
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }
+} {0 1 2 3 4 5 5 6 7 8}
+do_test select4-1.1d {
+ execsql {
+ CREATE TABLE t2 AS
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ SELECT * FROM t2;
+ }
+} {0 1 2 3 4 5 5 6 7 8}
+execsql {DROP TABLE t2}
+do_test select4-1.1e {
+ execsql {
+ CREATE TABLE t2 AS
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log DESC;
+ SELECT * FROM t2;
+ }
+} {8 7 6 5 5 4 3 2 1 0}
+execsql {DROP TABLE t2}
+do_test select4-1.1f {
+ execsql {
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=2
+ }
+} {0 1 2 3 4 5 3 4}
+do_test select4-1.1g {
+ execsql {
+ CREATE TABLE t2 AS
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=2;
+ SELECT * FROM t2;
+ }
+} {0 1 2 3 4 5 3 4}
+execsql {DROP TABLE t2}
+ifcapable subquery {
+ do_test select4-1.2 {
+ execsql {
+ SELECT log FROM t1 WHERE n IN
+ (SELECT DISTINCT log FROM t1 UNION ALL
+ SELECT n FROM t1 WHERE log=3)
+ ORDER BY log;
+ }
+ } {0 1 2 2 3 3 3 3}
+}
+do_test select4-1.3 {
+ set v [catch {execsql {
+ SELECT DISTINCT log FROM t1 ORDER BY log
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }} msg]
+ lappend v $msg
+} {1 {ORDER BY clause should come after UNION ALL not before}}
+
+# Union operator
+#
+do_test select4-2.1 {
+ execsql {
+ SELECT DISTINCT log FROM t1
+ UNION
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }
+} {0 1 2 3 4 5 6 7 8}
+ifcapable subquery {
+ do_test select4-2.2 {
+ execsql {
+ SELECT log FROM t1 WHERE n IN
+ (SELECT DISTINCT log FROM t1 UNION
+ SELECT n FROM t1 WHERE log=3)
+ ORDER BY log;
+ }
+ } {0 1 2 2 3 3 3 3}
+}
+do_test select4-2.3 {
+ set v [catch {execsql {
+ SELECT DISTINCT log FROM t1 ORDER BY log
+ UNION
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }} msg]
+ lappend v $msg
+} {1 {ORDER BY clause should come after UNION not before}}
+
+# Except operator
+#
+do_test select4-3.1.1 {
+ execsql {
+ SELECT DISTINCT log FROM t1
+ EXCEPT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }
+} {0 1 2 3 4}
+do_test select4-3.1.2 {
+ execsql {
+ CREATE TABLE t2 AS
+ SELECT DISTINCT log FROM t1
+ EXCEPT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ SELECT * FROM t2;
+ }
+} {0 1 2 3 4}
+execsql {DROP TABLE t2}
+do_test select4-3.1.3 {
+ execsql {
+ CREATE TABLE t2 AS
+ SELECT DISTINCT log FROM t1
+ EXCEPT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log DESC;
+ SELECT * FROM t2;
+ }
+} {4 3 2 1 0}
+execsql {DROP TABLE t2}
+ifcapable subquery {
+ do_test select4-3.2 {
+ execsql {
+ SELECT log FROM t1 WHERE n IN
+ (SELECT DISTINCT log FROM t1 EXCEPT
+ SELECT n FROM t1 WHERE log=3)
+ ORDER BY log;
+ }
+ } {0 1 2 2}
+}
+do_test select4-3.3 {
+ set v [catch {execsql {
+ SELECT DISTINCT log FROM t1 ORDER BY log
+ EXCEPT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }} msg]
+ lappend v $msg
+} {1 {ORDER BY clause should come after EXCEPT not before}}
+
+# Intersect operator
+#
+do_test select4-4.1.1 {
+ execsql {
+ SELECT DISTINCT log FROM t1
+ INTERSECT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }
+} {5}
+
+do_test select4-4.1.2 {
+ execsql {
+ SELECT DISTINCT log FROM t1 UNION ALL SELECT 6
+ INTERSECT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }
+} {5 6}
+do_test select4-4.1.3 {
+ execsql {
+ CREATE TABLE t2 AS
+ SELECT DISTINCT log FROM t1 UNION ALL SELECT 6
+ INTERSECT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ SELECT * FROM t2;
+ }
+} {5 6}
+execsql {DROP TABLE t2}
+do_test select4-4.1.4 {
+ execsql {
+ CREATE TABLE t2 AS
+ SELECT DISTINCT log FROM t1 UNION ALL SELECT 6
+ INTERSECT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log DESC;
+ SELECT * FROM t2;
+ }
+} {6 5}
+execsql {DROP TABLE t2}
+ifcapable subquery {
+ do_test select4-4.2 {
+ execsql {
+ SELECT log FROM t1 WHERE n IN
+ (SELECT DISTINCT log FROM t1 INTERSECT
+ SELECT n FROM t1 WHERE log=3)
+ ORDER BY log;
+ }
+ } {3}
+}
+do_test select4-4.3 {
+ set v [catch {execsql {
+ SELECT DISTINCT log FROM t1 ORDER BY log
+ INTERSECT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }} msg]
+ lappend v $msg
+} {1 {ORDER BY clause should come after INTERSECT not before}}
+
+# Various error messages while processing UNION or INTERSECT
+#
+do_test select4-5.1 {
+ set v [catch {execsql {
+ SELECT DISTINCT log FROM t2
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }} msg]
+ lappend v $msg
+} {1 {no such table: t2}}
+do_test select4-5.2 {
+ set v [catch {execsql {
+ SELECT DISTINCT log AS "xyzzy" FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY xyzzy;
+ }} msg]
+ lappend v $msg
+} {0 {0 1 2 3 4 5 5 6 7 8}}
+do_test select4-5.2b {
+ set v [catch {execsql {
+ SELECT DISTINCT log AS xyzzy FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY 'xyzzy';
+ }} msg]
+ lappend v $msg
+} {0 {0 1 2 3 4 5 5 6 7 8}}
+do_test select4-5.2c {
+ set v [catch {execsql {
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY 'xyzzy';
+ }} msg]
+ lappend v $msg
+} {1 {ORDER BY term number 1 does not match any result column}}
+do_test select4-5.2d {
+ set v [catch {execsql {
+ SELECT DISTINCT log FROM t1
+ INTERSECT
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY 'xyzzy';
+ }} msg]
+ lappend v $msg
+} {1 {ORDER BY term number 1 does not match any result column}}
+do_test select4-5.2e {
+ set v [catch {execsql {
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY n;
+ }} msg]
+ lappend v $msg
+} {0 {0 1 2 3 4 5 5 6 7 8}}
+do_test select4-5.2f {
+ catchsql {
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }
+} {0 {0 1 2 3 4 5 5 6 7 8}}
+do_test select4-5.2g {
+ catchsql {
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY 1;
+ }
+} {0 {0 1 2 3 4 5 5 6 7 8}}
+do_test select4-5.2h {
+ catchsql {
+ SELECT DISTINCT log FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY 2;
+ }
+} {1 {ORDER BY position 2 should be between 1 and 1}}
+do_test select4-5.2i {
+ catchsql {
+ SELECT DISTINCT 1, log FROM t1
+ UNION ALL
+ SELECT 2, n FROM t1 WHERE log=3
+ ORDER BY 2, 1;
+ }
+} {0 {1 0 1 1 1 2 1 3 1 4 1 5 2 5 2 6 2 7 2 8}}
+do_test select4-5.2j {
+ catchsql {
+ SELECT DISTINCT 1, log FROM t1
+ UNION ALL
+ SELECT 2, n FROM t1 WHERE log=3
+ ORDER BY 1, 2 DESC;
+ }
+} {0 {1 5 1 4 1 3 1 2 1 1 1 0 2 8 2 7 2 6 2 5}}
+do_test select4-5.2k {
+ catchsql {
+ SELECT DISTINCT 1, log FROM t1
+ UNION ALL
+ SELECT 2, n FROM t1 WHERE log=3
+ ORDER BY n, 1;
+ }
+} {0 {1 0 1 1 1 2 1 3 1 4 1 5 2 5 2 6 2 7 2 8}}
+do_test select4-5.3 {
+ set v [catch {execsql {
+ SELECT DISTINCT log, n FROM t1
+ UNION ALL
+ SELECT n FROM t1 WHERE log=3
+ ORDER BY log;
+ }} msg]
+ lappend v $msg
+} {1 {SELECTs to the left and right of UNION ALL do not have the same number of result columns}}
+do_test select4-5.4 {
+ set v [catch {execsql {
+ SELECT log FROM t1 WHERE n=2
+ UNION ALL
+ SELECT log FROM t1 WHERE n=3
+ UNION ALL
+ SELECT log FROM t1 WHERE n=4
+ UNION ALL
+ SELECT log FROM t1 WHERE n=5
+ ORDER BY log;
+ }} msg]
+ lappend v $msg
+} {0 {1 2 2 3}}
+
+do_test select4-6.1 {
+ execsql {
+ SELECT log, count(*) as cnt FROM t1 GROUP BY log
+ UNION
+ SELECT log, n FROM t1 WHERE n=7
+ ORDER BY cnt, log;
+ }
+} {0 1 1 1 2 2 3 4 3 7 4 8 5 15}
+do_test select4-6.2 {
+ execsql {
+ SELECT log, count(*) FROM t1 GROUP BY log
+ UNION
+ SELECT log, n FROM t1 WHERE n=7
+ ORDER BY count(*), log;
+ }
+} {0 1 1 1 2 2 3 4 3 7 4 8 5 15}
+
+# NULLs are indistinct for the UNION operator.
+# Make sure the UNION operator recognizes this
+#
+do_test select4-6.3 {
+ execsql {
+ SELECT NULL UNION SELECT NULL UNION
+ SELECT 1 UNION SELECT 2 AS 'x'
+ ORDER BY x;
+ }
+} {{} 1 2}
+do_test select4-6.3.1 {
+ execsql {
+ SELECT NULL UNION ALL SELECT NULL UNION ALL
+ SELECT 1 UNION ALL SELECT 2 AS 'x'
+ ORDER BY x;
+ }
+} {{} {} 1 2}
+
+# Make sure the DISTINCT keyword treats NULLs as indistinct.
+#
+ifcapable subquery {
+ do_test select4-6.4 {
+ execsql {
+ SELECT * FROM (
+ SELECT NULL, 1 UNION ALL SELECT NULL, 1
+ );
+ }
+ } {{} 1 {} 1}
+ do_test select4-6.5 {
+ execsql {
+ SELECT DISTINCT * FROM (
+ SELECT NULL, 1 UNION ALL SELECT NULL, 1
+ );
+ }
+ } {{} 1}
+ do_test select4-6.6 {
+ execsql {
+ SELECT DISTINCT * FROM (
+ SELECT 1,2 UNION ALL SELECT 1,2
+ );
+ }
+ } {1 2}
+}
+
+# Test distinctness of NULL in other ways.
+#
+do_test select4-6.7 {
+ execsql {
+ SELECT NULL EXCEPT SELECT NULL
+ }
+} {}
+
+
+# Make sure column names are correct when a compound select appears as
+# an expression in the WHERE clause.
+#
+do_test select4-7.1 {
+ execsql {
+ CREATE TABLE t2 AS SELECT log AS 'x', count(*) AS 'y' FROM t1 GROUP BY log;
+ SELECT * FROM t2 ORDER BY x;
+ }
+} {0 1 1 1 2 2 3 4 4 8 5 15}
+ifcapable subquery {
+ do_test select4-7.2 {
+ execsql2 {
+ SELECT * FROM t1 WHERE n IN (SELECT n FROM t1 INTERSECT SELECT x FROM t2)
+ ORDER BY n
+ }
+ } {n 1 log 0 n 2 log 1 n 3 log 2 n 4 log 2 n 5 log 3}
+ do_test select4-7.3 {
+ execsql2 {
+ SELECT * FROM t1 WHERE n IN (SELECT n FROM t1 EXCEPT SELECT x FROM t2)
+ ORDER BY n LIMIT 2
+ }
+ } {n 6 log 3 n 7 log 3}
+ do_test select4-7.4 {
+ execsql2 {
+ SELECT * FROM t1 WHERE n IN (SELECT n FROM t1 UNION SELECT x FROM t2)
+ ORDER BY n LIMIT 2
+ }
+ } {n 1 log 0 n 2 log 1}
+} ;# ifcapable subquery
+
+} ;# ifcapable compound
+
+# Make sure DISTINCT works appropriately on TEXT and NUMERIC columns.
+do_test select4-8.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t3(a text, b float, c text);
+ INSERT INTO t3 VALUES(1, 1.1, '1.1');
+ INSERT INTO t3 VALUES(2, 1.10, '1.10');
+ INSERT INTO t3 VALUES(3, 1.10, '1.1');
+ INSERT INTO t3 VALUES(4, 1.1, '1.10');
+ INSERT INTO t3 VALUES(5, 1.2, '1.2');
+ INSERT INTO t3 VALUES(6, 1.3, '1.3');
+ COMMIT;
+ }
+ execsql {
+ SELECT DISTINCT b FROM t3 ORDER BY c;
+ }
+} {1.1 1.2 1.3}
+do_test select4-8.2 {
+ execsql {
+ SELECT DISTINCT c FROM t3 ORDER BY c;
+ }
+} {1.1 1.10 1.2 1.3}
+
+# Make sure the names of columns are taken from the right-most subquery
+# in a compound query. Ticket #1721
+#
+ifcapable compound {
+
+do_test select4-9.1 {
+ execsql2 {
+ SELECT x, y FROM t2 UNION SELECT a, b FROM t3 ORDER BY x LIMIT 1
+ }
+} {x 0 y 1}
+do_test select4-9.2 {
+ execsql2 {
+ SELECT x, y FROM t2 UNION ALL SELECT a, b FROM t3 ORDER BY x LIMIT 1
+ }
+} {x 0 y 1}
+do_test select4-9.3 {
+ execsql2 {
+ SELECT x, y FROM t2 EXCEPT SELECT a, b FROM t3 ORDER BY x LIMIT 1
+ }
+} {x 0 y 1}
+do_test select4-9.4 {
+ execsql2 {
+ SELECT x, y FROM t2 INTERSECT SELECT 0 AS a, 1 AS b;
+ }
+} {x 0 y 1}
+do_test select4-9.5 {
+ execsql2 {
+ SELECT 0 AS x, 1 AS y
+ UNION
+ SELECT 2 AS p, 3 AS q
+ UNION
+ SELECT 4 AS a, 5 AS b
+ ORDER BY x LIMIT 1
+ }
+} {x 0 y 1}
+
+ifcapable subquery {
+do_test select4-9.6 {
+ execsql2 {
+ SELECT * FROM (
+ SELECT 0 AS x, 1 AS y
+ UNION
+ SELECT 2 AS p, 3 AS q
+ UNION
+ SELECT 4 AS a, 5 AS b
+ ) ORDER BY 1 LIMIT 1;
+ }
+} {x 0 y 1}
+do_test select4-9.7 {
+ execsql2 {
+ SELECT * FROM (
+ SELECT 0 AS x, 1 AS y
+ UNION
+ SELECT 2 AS p, 3 AS q
+ UNION
+ SELECT 4 AS a, 5 AS b
+ ) ORDER BY x LIMIT 1;
+ }
+} {x 0 y 1}
+} ;# ifcapable subquery
+
+do_test select4-9.8 {
+ execsql2 {
+ SELECT 0 AS x, 1 AS y
+ UNION
+ SELECT 2 AS y, -3 AS x
+ ORDER BY x LIMIT 1;
+ }
+} {x 0 y 1}
+do_test select4-9.9.1 {
+ execsql2 {
+ SELECT 1 AS a, 2 AS b UNION ALL SELECT 3 AS b, 4 AS a
+ }
+} {a 1 b 2 a 3 b 4}
+
+ifcapable subquery {
+do_test select4-9.9.2 {
+ execsql2 {
+ SELECT * FROM (SELECT 1 AS a, 2 AS b UNION ALL SELECT 3 AS b, 4 AS a)
+ WHERE b=3
+ }
+} {}
+do_test select4-9.10 {
+ execsql2 {
+ SELECT * FROM (SELECT 1 AS a, 2 AS b UNION ALL SELECT 3 AS b, 4 AS a)
+ WHERE b=2
+ }
+} {a 1 b 2}
+do_test select4-9.11 {
+ execsql2 {
+ SELECT * FROM (SELECT 1 AS a, 2 AS b UNION ALL SELECT 3 AS e, 4 AS b)
+ WHERE b=2
+ }
+} {a 1 b 2}
+do_test select4-9.12 {
+ execsql2 {
+ SELECT * FROM (SELECT 1 AS a, 2 AS b UNION ALL SELECT 3 AS e, 4 AS b)
+ WHERE b>0
+ }
+} {a 1 b 2 a 3 b 4}
+} ;# ifcapable subquery
+
+} ;# ifcapable compound
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/select5.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/select5.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,192 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this file is testing aggregate functions and the
+# GROUP BY and HAVING clauses of SELECT statements.
+#
+# $Id: select5.test,v 1.16 2006/01/21 12:08:55 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Build some test data
+#
+execsql {
+ CREATE TABLE t1(x int, y int);
+ BEGIN;
+}
+for {set i 1} {$i<32} {incr i} {
+ for {set j 0} {pow(2,$j)<$i} {incr j} {}
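+  # Here x counts down from 31 to 1 and y is 10 minus the base-2 logarithm of
+  # i, so the distinct y values are 5 through 10 (checked in select5-1.0).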
+ execsql "INSERT INTO t1 VALUES([expr {32-$i}],[expr {10-$j}])"
+}
+execsql {
+ COMMIT
+}
+
+do_test select5-1.0 {
+ execsql {SELECT DISTINCT y FROM t1 ORDER BY y}
+} {5 6 7 8 9 10}
+
+# Sort by an aggregate function.
+#
+do_test select5-1.1 {
+ execsql {SELECT y, count(*) FROM t1 GROUP BY y ORDER BY y}
+} {5 15 6 8 7 4 8 2 9 1 10 1}
+do_test select5-1.2 {
+ execsql {SELECT y, count(*) FROM t1 GROUP BY y ORDER BY count(*), y}
+} {9 1 10 1 8 2 7 4 6 8 5 15}
+do_test select5-1.3 {
+ execsql {SELECT count(*), y FROM t1 GROUP BY y ORDER BY count(*), y}
+} {1 9 1 10 2 8 4 7 8 6 15 5}
+
+# Some error messages associated with aggregates and GROUP BY
+#
+do_test select5-2.1.1 {
+ catchsql {
+ SELECT y, count(*) FROM t1 GROUP BY z ORDER BY y
+ }
+} {1 {no such column: z}}
+do_test select5-2.1.2 {
+ catchsql {
+ SELECT y, count(*) FROM t1 GROUP BY temp.t1.y ORDER BY y
+ }
+} {1 {no such column: temp.t1.y}}
+do_test select5-2.2 {
+ set v [catch {execsql {
+ SELECT y, count(*) FROM t1 GROUP BY z(y) ORDER BY y
+ }} msg]
+ lappend v $msg
+} {1 {no such function: z}}
+do_test select5-2.3 {
+ set v [catch {execsql {
+ SELECT y, count(*) FROM t1 GROUP BY y HAVING count(*)<3 ORDER BY y
+ }} msg]
+ lappend v $msg
+} {0 {8 2 9 1 10 1}}
+do_test select5-2.4 {
+ set v [catch {execsql {
+ SELECT y, count(*) FROM t1 GROUP BY y HAVING z(y)<3 ORDER BY y
+ }} msg]
+ lappend v $msg
+} {1 {no such function: z}}
+do_test select5-2.5 {
+ set v [catch {execsql {
+ SELECT y, count(*) FROM t1 GROUP BY y HAVING count(*)<z ORDER BY y
+ }} msg]
+ lappend v $msg
+} {1 {no such column: z}}
+
+# Get the Agg function to rehash in vdbe.c
+#
+do_test select5-3.1 {
+ execsql {
+ SELECT x, count(*), avg(y) FROM t1 GROUP BY x HAVING x<4 ORDER BY x
+ }
+} {1 1 5.0 2 1 5.0 3 1 5.0}
+
+# Run various aggregate functions when the count is zero.
+#
+do_test select5-4.1 {
+ execsql {
+ SELECT avg(x) FROM t1 WHERE x>100
+ }
+} {{}}
+do_test select5-4.2 {
+ execsql {
+ SELECT count(x) FROM t1 WHERE x>100
+ }
+} {0}
+do_test select5-4.3 {
+ execsql {
+ SELECT min(x) FROM t1 WHERE x>100
+ }
+} {{}}
+do_test select5-4.4 {
+ execsql {
+ SELECT max(x) FROM t1 WHERE x>100
+ }
+} {{}}
+do_test select5-4.5 {
+ execsql {
+ SELECT sum(x) FROM t1 WHERE x>100
+ }
+} {{}}
+
+# Some tests for queries with a GROUP BY clause but no aggregate functions.
+#
+# Note: The query in test case 5-5.5 is not legal SQL. So if the
+# implementation changes in the future and it returns different results,
+# this is not such a big deal.
+#
+do_test select5-5.1 {
+ execsql {
+ CREATE TABLE t2(a, b, c);
+ INSERT INTO t2 VALUES(1, 2, 3);
+ INSERT INTO t2 VALUES(1, 4, 5);
+ INSERT INTO t2 VALUES(6, 4, 7);
+ CREATE INDEX t2_idx ON t2(a);
+ }
+} {}
+do_test select5-5.2 {
+ execsql {
+ SELECT a FROM t2 GROUP BY a;
+ }
+} {1 6}
+do_test select5-5.3 {
+ execsql {
+ SELECT a FROM t2 WHERE a>2 GROUP BY a;
+ }
+} {6}
+do_test select5-5.4 {
+ execsql {
+ SELECT a, b FROM t2 GROUP BY a, b;
+ }
+} {1 2 1 4 6 4}
+do_test select5-5.5 {
+ execsql {
+ SELECT a, b FROM t2 GROUP BY a;
+ }
+} {1 4 6 4}
+
+# NULLs compare equal to each other for the purposes of processing
+# the GROUP BY clause.
+#
+do_test select5-6.1 {
+ execsql {
+ CREATE TABLE t3(x,y);
+ INSERT INTO t3 VALUES(1,NULL);
+ INSERT INTO t3 VALUES(2,NULL);
+ INSERT INTO t3 VALUES(3,4);
+ SELECT count(x), y FROM t3 GROUP BY y ORDER BY 1
+ }
+} {1 4 2 {}}
+do_test select5-6.2 {
+ execsql {
+ CREATE TABLE t4(x,y,z);
+ INSERT INTO t4 VALUES(1,2,NULL);
+ INSERT INTO t4 VALUES(2,3,NULL);
+ INSERT INTO t4 VALUES(3,NULL,5);
+ INSERT INTO t4 VALUES(4,NULL,6);
+ INSERT INTO t4 VALUES(4,NULL,6);
+ INSERT INTO t4 VALUES(5,NULL,NULL);
+ INSERT INTO t4 VALUES(5,NULL,NULL);
+ INSERT INTO t4 VALUES(6,7,8);
+ SELECT max(x), count(x), y, z FROM t4 GROUP BY y, z ORDER BY 1
+ }
+} {1 1 2 {} 2 1 3 {} 3 1 {} 5 4 2 {} 6 5 2 {} {} 6 1 7 8}
+
+do_test select5-7.2 {
+ execsql {
+ SELECT count(*), count(x) as cnt FROM t4 GROUP BY y ORDER BY cnt;
+ }
+} {1 1 1 1 1 1 5 5}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/select6.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/select6.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,507 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this file is testing SELECT statements that contain
+# subqueries in their FROM clause.
+#
+# $Id: select6.test,v 1.24 2006/06/11 23:41:56 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Omit this whole file if the library is built without subquery support.
+ifcapable !subquery {
+ finish_test
+ return
+}
+
+do_test select6-1.0 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(x, y);
+ INSERT INTO t1 VALUES(1,1);
+ INSERT INTO t1 VALUES(2,2);
+ INSERT INTO t1 VALUES(3,2);
+ INSERT INTO t1 VALUES(4,3);
+ INSERT INTO t1 VALUES(5,3);
+ INSERT INTO t1 VALUES(6,3);
+ INSERT INTO t1 VALUES(7,3);
+ INSERT INTO t1 VALUES(8,4);
+ INSERT INTO t1 VALUES(9,4);
+ INSERT INTO t1 VALUES(10,4);
+ INSERT INTO t1 VALUES(11,4);
+ INSERT INTO t1 VALUES(12,4);
+ INSERT INTO t1 VALUES(13,4);
+ INSERT INTO t1 VALUES(14,4);
+ INSERT INTO t1 VALUES(15,4);
+ INSERT INTO t1 VALUES(16,5);
+ INSERT INTO t1 VALUES(17,5);
+ INSERT INTO t1 VALUES(18,5);
+ INSERT INTO t1 VALUES(19,5);
+ INSERT INTO t1 VALUES(20,5);
+ COMMIT;
+ SELECT DISTINCT y FROM t1 ORDER BY y;
+ }
+} {1 2 3 4 5}
+
+do_test select6-1.1 {
+ execsql2 {SELECT * FROM (SELECT x, y FROM t1 WHERE x<2)}
+} {x 1 y 1}
+do_test select6-1.2 {
+ execsql {SELECT count(*) FROM (SELECT y FROM t1)}
+} {20}
+do_test select6-1.3 {
+ execsql {SELECT count(*) FROM (SELECT DISTINCT y FROM t1)}
+} {5}
+do_test select6-1.4 {
+ execsql {SELECT count(*) FROM (SELECT DISTINCT * FROM (SELECT y FROM t1))}
+} {5}
+do_test select6-1.5 {
+ execsql {SELECT count(*) FROM (SELECT * FROM (SELECT DISTINCT y FROM t1))}
+} {5}
+
+do_test select6-1.6 {
+ execsql {
+ SELECT *
+ FROM (SELECT count(*),y FROM t1 GROUP BY y) AS a,
+ (SELECT max(x),y FROM t1 GROUP BY y) as b
+ WHERE a.y=b.y ORDER BY a.y
+ }
+} {1 1 1 1 2 2 3 2 4 3 7 3 8 4 15 4 5 5 20 5}
+do_test select6-1.7 {
+ execsql {
+ SELECT a.y, a.[count(*)], [max(x)], [count(*)]
+ FROM (SELECT count(*),y FROM t1 GROUP BY y) AS a,
+ (SELECT max(x),y FROM t1 GROUP BY y) as b
+ WHERE a.y=b.y ORDER BY a.y
+ }
+} {1 1 1 1 2 2 3 2 3 4 7 4 4 8 15 8 5 5 20 5}
+do_test select6-1.8 {
+ execsql {
+ SELECT q, p, r
+ FROM (SELECT count(*) as p , y as q FROM t1 GROUP BY y) AS a,
+ (SELECT max(x) as r, y as s FROM t1 GROUP BY y) as b
+ WHERE q=s ORDER BY s
+ }
+} {1 1 1 2 2 3 3 4 7 4 8 15 5 5 20}
+do_test select6-1.9 {
+ execsql {
+ SELECT q, p, r, b.[min(x)+y]
+ FROM (SELECT count(*) as p , y as q FROM t1 GROUP BY y) AS a,
+ (SELECT max(x) as r, y as s, min(x)+y FROM t1 GROUP BY y) as b
+ WHERE q=s ORDER BY s
+ }
+} {1 1 1 2 2 2 3 4 3 4 7 7 4 8 15 12 5 5 20 21}
+
+do_test select6-2.0 {
+ execsql {
+ CREATE TABLE t2(a INTEGER PRIMARY KEY, b);
+ INSERT INTO t2 SELECT * FROM t1;
+ SELECT DISTINCT b FROM t2 ORDER BY b;
+ }
+} {1 2 3 4 5}
+do_test select6-2.1 {
+ execsql2 {SELECT * FROM (SELECT a, b FROM t2 WHERE a<2)}
+} {a 1 b 1}
+do_test select6-2.2 {
+ execsql {SELECT count(*) FROM (SELECT b FROM t2)}
+} {20}
+do_test select6-2.3 {
+ execsql {SELECT count(*) FROM (SELECT DISTINCT b FROM t2)}
+} {5}
+do_test select6-2.4 {
+ execsql {SELECT count(*) FROM (SELECT DISTINCT * FROM (SELECT b FROM t2))}
+} {5}
+do_test select6-2.5 {
+ execsql {SELECT count(*) FROM (SELECT * FROM (SELECT DISTINCT b FROM t2))}
+} {5}
+
+do_test select6-2.6 {
+ execsql {
+ SELECT *
+ FROM (SELECT count(*),b FROM t2 GROUP BY b) AS a,
+ (SELECT max(a),b FROM t2 GROUP BY b) as b
+ WHERE a.b=b.b ORDER BY a.b
+ }
+} {1 1 1 1 2 2 3 2 4 3 7 3 8 4 15 4 5 5 20 5}
+do_test select6-2.7 {
+ execsql {
+ SELECT a.b, a.[count(*)], [max(a)], [count(*)]
+ FROM (SELECT count(*),b FROM t2 GROUP BY b) AS a,
+ (SELECT max(a),b FROM t2 GROUP BY b) as b
+ WHERE a.b=b.b ORDER BY a.b
+ }
+} {1 1 1 1 2 2 3 2 3 4 7 4 4 8 15 8 5 5 20 5}
+do_test select6-2.8 {
+ execsql {
+ SELECT q, p, r
+ FROM (SELECT count(*) as p , b as q FROM t2 GROUP BY b) AS a,
+ (SELECT max(a) as r, b as s FROM t2 GROUP BY b) as b
+ WHERE q=s ORDER BY s
+ }
+} {1 1 1 2 2 3 3 4 7 4 8 15 5 5 20}
+do_test select6-2.9 {
+ execsql {
+ SELECT a.q, a.p, b.r
+ FROM (SELECT count(*) as p , b as q FROM t2 GROUP BY q) AS a,
+ (SELECT max(a) as r, b as s FROM t2 GROUP BY s) as b
+ WHERE a.q=b.s ORDER BY a.q
+ }
+} {1 1 1 2 2 3 3 4 7 4 8 15 5 5 20}
+
+do_test select6-3.1 {
+ execsql2 {
+ SELECT * FROM (SELECT * FROM (SELECT * FROM t1 WHERE x=3));
+ }
+} {x 3 y 2}
+do_test select6-3.2 {
+ execsql {
+ SELECT * FROM
+ (SELECT a.q, a.p, b.r
+ FROM (SELECT count(*) as p , b as q FROM t2 GROUP BY q) AS a,
+ (SELECT max(a) as r, b as s FROM t2 GROUP BY s) as b
+ WHERE a.q=b.s ORDER BY a.q)
+ ORDER BY "a.q"
+ }
+} {1 1 1 2 2 3 3 4 7 4 8 15 5 5 20}
+do_test select6-3.3 {
+ execsql {
+ SELECT a,b,a+b FROM (SELECT avg(x) as 'a', avg(y) as 'b' FROM t1)
+ }
+} {10.5 3.7 14.2}
+do_test select6-3.4 {
+ execsql {
+ SELECT a,b,a+b FROM (SELECT avg(x) as 'a', avg(y) as 'b' FROM t1 WHERE y=4)
+ }
+} {11.5 4.0 15.5}
+do_test select6-3.5 {
+ execsql {
+ SELECT x,y,x+y FROM (SELECT avg(a) as 'x', avg(b) as 'y' FROM t2 WHERE a=4)
+ }
+} {4.0 3.0 7.0}
+do_test select6-3.6 {
+ execsql {
+ SELECT a,b,a+b FROM (SELECT avg(x) as 'a', avg(y) as 'b' FROM t1)
+ WHERE a>10
+ }
+} {10.5 3.7 14.2}
+do_test select6-3.7 {
+btree_breakpoint
+ execsql {
+ SELECT a,b,a+b FROM (SELECT avg(x) as 'a', avg(y) as 'b' FROM t1)
+ WHERE a<10
+ }
+} {}
+do_test select6-3.8 {
+ execsql {
+ SELECT a,b,a+b FROM (SELECT avg(x) as 'a', avg(y) as 'b' FROM t1 WHERE y=4)
+ WHERE a>10
+ }
+} {11.5 4.0 15.5}
+do_test select6-3.9 {
+ execsql {
+ SELECT a,b,a+b FROM (SELECT avg(x) as 'a', avg(y) as 'b' FROM t1 WHERE y=4)
+ WHERE a<10
+ }
+} {}
+do_test select6-3.10 {
+ execsql {
+ SELECT a,b,a+b FROM (SELECT avg(x) as 'a', y as 'b' FROM t1 GROUP BY b)
+ ORDER BY a
+ }
+} {1.0 1 2.0 2.5 2 4.5 5.5 3 8.5 11.5 4 15.5 18.0 5 23.0}
+do_test select6-3.11 {
+ execsql {
+ SELECT a,b,a+b FROM
+ (SELECT avg(x) as 'a', y as 'b' FROM t1 GROUP BY b)
+ WHERE b<4 ORDER BY a
+ }
+} {1.0 1 2.0 2.5 2 4.5 5.5 3 8.5}
+do_test select6-3.12 {
+ execsql {
+ SELECT a,b,a+b FROM
+ (SELECT avg(x) as 'a', y as 'b' FROM t1 GROUP BY b HAVING a>1)
+ WHERE b<4 ORDER BY a
+ }
+} {2.5 2 4.5 5.5 3 8.5}
+do_test select6-3.13 {
+ execsql {
+ SELECT a,b,a+b FROM
+ (SELECT avg(x) as 'a', y as 'b' FROM t1 GROUP BY b HAVING a>1)
+ ORDER BY a
+ }
+} {2.5 2 4.5 5.5 3 8.5 11.5 4 15.5 18.0 5 23.0}
+do_test select6-3.14 {
+ execsql {
+ SELECT [count(*)],y FROM (SELECT count(*), y FROM t1 GROUP BY y)
+ ORDER BY [count(*)]
+ }
+} {1 1 2 2 4 3 5 5 8 4}
+do_test select6-3.15 {
+ execsql {
+ SELECT [count(*)],y FROM (SELECT count(*), y FROM t1 GROUP BY y)
+ ORDER BY y
+ }
+} {1 1 2 2 4 3 8 4 5 5}
+
+do_test select6-4.1 {
+ execsql {
+ SELECT a,b,c FROM
+ (SELECT x AS 'a', y AS 'b', x+y AS 'c' FROM t1 WHERE y=4)
+ WHERE a<10 ORDER BY a;
+ }
+} {8 4 12 9 4 13}
+do_test select6-4.2 {
+ execsql {
+ SELECT y FROM (SELECT DISTINCT y FROM t1) WHERE y<5 ORDER BY y
+ }
+} {1 2 3 4}
+do_test select6-4.3 {
+ execsql {
+ SELECT DISTINCT y FROM (SELECT y FROM t1) WHERE y<5 ORDER BY y
+ }
+} {1 2 3 4}
+do_test select6-4.4 {
+ execsql {
+ SELECT avg(y) FROM (SELECT DISTINCT y FROM t1) WHERE y<5 ORDER BY y
+ }
+} {2.5}
+do_test select6-4.5 {
+ execsql {
+ SELECT avg(y) FROM (SELECT DISTINCT y FROM t1 WHERE y<5) ORDER BY y
+ }
+} {2.5}
+
+do_test select6-5.1 {
+ execsql {
+ SELECT a,x,b FROM
+ (SELECT x+3 AS 'a', x FROM t1 WHERE y=3) AS 'p',
+ (SELECT x AS 'b' FROM t1 WHERE y=4) AS 'q'
+ WHERE a=b
+ ORDER BY a
+ }
+} {8 5 8 9 6 9 10 7 10}
+do_test select6-5.2 {
+ execsql {
+ SELECT a,x,b FROM
+ (SELECT x+3 AS 'a', x FROM t1 WHERE y=3),
+ (SELECT x AS 'b' FROM t1 WHERE y=4)
+ WHERE a=b
+ ORDER BY a
+ }
+} {8 5 8 9 6 9 10 7 10}
+
+# Tests of compound sub-selects
+#
+do_test select6-6.1 {
+ execsql {
+ DELETE FROM t1 WHERE x>4;
+ SELECT * FROM t1
+ }
+} {1 1 2 2 3 2 4 3}
+ifcapable compound {
+ do_test select6-6.2 {
+ execsql {
+ SELECT * FROM (
+ SELECT x AS 'a' FROM t1 UNION ALL SELECT x+10 AS 'a' FROM t1
+ ) ORDER BY a;
+ }
+ } {1 2 3 4 11 12 13 14}
+ do_test select6-6.3 {
+ execsql {
+ SELECT * FROM (
+ SELECT x AS 'a' FROM t1 UNION ALL SELECT x+1 AS 'a' FROM t1
+ ) ORDER BY a;
+ }
+ } {1 2 2 3 3 4 4 5}
+ do_test select6-6.4 {
+ execsql {
+ SELECT * FROM (
+ SELECT x AS 'a' FROM t1 UNION SELECT x+1 AS 'a' FROM t1
+ ) ORDER BY a;
+ }
+ } {1 2 3 4 5}
+ do_test select6-6.5 {
+ execsql {
+ SELECT * FROM (
+ SELECT x AS 'a' FROM t1 INTERSECT SELECT x+1 AS 'a' FROM t1
+ ) ORDER BY a;
+ }
+ } {2 3 4}
+ do_test select6-6.6 {
+ execsql {
+ SELECT * FROM (
+ SELECT x AS 'a' FROM t1 EXCEPT SELECT x*2 AS 'a' FROM t1
+ ) ORDER BY a;
+ }
+ } {1 3}
+} ;# ifcapable compound
+
+# Subselects with no FROM clause
+#
+do_test select6-7.1 {
+ execsql {
+ SELECT * FROM (SELECT 1)
+ }
+} {1}
+do_test select6-7.2 {
+ execsql {
+ SELECT c,b,a,* FROM (SELECT 1 AS 'a', 2 AS 'b', 'abc' AS 'c')
+ }
+} {abc 2 1 1 2 abc}
+do_test select6-7.3 {
+ execsql {
+ SELECT c,b,a,* FROM (SELECT 1 AS 'a', 2 AS 'b', 'abc' AS 'c' WHERE 0)
+ }
+} {}
+do_test select6-7.4 {
+ execsql2 {
+ SELECT c,b,a,* FROM (SELECT 1 AS 'a', 2 AS 'b', 'abc' AS 'c' WHERE 1)
+ }
+} {c abc b 2 a 1 a 1 b 2 c abc}
+
+# The remaining tests in this file depend on the EXPLAIN keyword.
+# Skip these tests if EXPLAIN is disabled in the current build.
+#
+ifcapable {!explain} {
+ finish_test
+ return
+}
+
+# The following procedure compiles the SQL given as an argument and returns
+# TRUE if that SQL uses any transient tables and returns FALSE if no
+# transient tables are used. This is used to make sure that the
+# sqliteFlattenSubquery() routine in select.c is doing its job.
+#
+proc is_flat {sql} {
+ return [expr 0>[lsearch [execsql "EXPLAIN $sql"] OpenEphemeral]]
+}
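+# As a rough illustration of the expected behaviour: a simple flattenable
+# query such as [is_flat {SELECT * FROM (SELECT x FROM t1)}] should return 1,
+# while a query whose subquery must be materialized into an ephemeral table
+# should return 0.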
+
+# Check that the flattener works correctly for deeply nested subqueries
+# involving joins.
+#
+do_test select6-8.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t3(p,q);
+ INSERT INTO t3 VALUES(1,11);
+ INSERT INTO t3 VALUES(2,22);
+ CREATE TABLE t4(q,r);
+ INSERT INTO t4 VALUES(11,111);
+ INSERT INTO t4 VALUES(22,222);
+ COMMIT;
+ SELECT * FROM t3 NATURAL JOIN t4;
+ }
+} {1 11 111 2 22 222}
+do_test select6-8.2 {
+ execsql {
+ SELECT y, p, q, r FROM
+ (SELECT t1.y AS y, t2.b AS b FROM t1, t2 WHERE t1.x=t2.a) AS m,
+ (SELECT t3.p AS p, t3.q AS q, t4.r AS r FROM t3 NATURAL JOIN t4) as n
+ WHERE y=p
+ }
+} {1 1 11 111 2 2 22 222 2 2 22 222}
+# If view support is omitted from the build, then so is the query
+# "flattener". So omit this test and test select6-8.6 in that case.
+ifcapable view {
+do_test select6-8.3 {
+ is_flat {
+ SELECT y, p, q, r FROM
+ (SELECT t1.y AS y, t2.b AS b FROM t1, t2 WHERE t1.x=t2.a) AS m,
+ (SELECT t3.p AS p, t3.q AS q, t4.r AS r FROM t3 NATURAL JOIN t4) as n
+ WHERE y=p
+ }
+} {1}
+} ;# ifcapable view
+do_test select6-8.4 {
+ execsql {
+ SELECT DISTINCT y, p, q, r FROM
+ (SELECT t1.y AS y, t2.b AS b FROM t1, t2 WHERE t1.x=t2.a) AS m,
+ (SELECT t3.p AS p, t3.q AS q, t4.r AS r FROM t3 NATURAL JOIN t4) as n
+ WHERE y=p
+ }
+} {1 1 11 111 2 2 22 222}
+do_test select6-8.5 {
+ execsql {
+ SELECT * FROM
+ (SELECT y, p, q, r FROM
+ (SELECT t1.y AS y, t2.b AS b FROM t1, t2 WHERE t1.x=t2.a) AS m,
+ (SELECT t3.p AS p, t3.q AS q, t4.r AS r FROM t3 NATURAL JOIN t4) as n
+ WHERE y=p) AS e,
+ (SELECT r AS z FROM t4 WHERE q=11) AS f
+ WHERE e.r=f.z
+ }
+} {1 1 11 111 111}
+ifcapable view {
+do_test select6-8.6 {
+ is_flat {
+ SELECT * FROM
+ (SELECT y, p, q, r FROM
+ (SELECT t1.y AS y, t2.b AS b FROM t1, t2 WHERE t1.x=t2.a) AS m,
+ (SELECT t3.p AS p, t3.q AS q, t4.r AS r FROM t3 NATURAL JOIN t4) as n
+ WHERE y=p) AS e,
+ (SELECT r AS z FROM t4 WHERE q=11) AS f
+ WHERE e.r=f.z
+ }
+} {1}
+} ;# ifcapable view
+
+# Ticket #1634
+#
+do_test select6-9.1 {
+ execsql {
+ SELECT a.x, b.x FROM t1 AS a, (SELECT x FROM t1 LIMIT 2) AS b
+ }
+} {1 1 1 2 2 1 2 2 3 1 3 2 4 1 4 2}
+do_test select6-9.2 {
+ execsql {
+ SELECT x FROM (SELECT x FROM t1 LIMIT 2);
+ }
+} {1 2}
+do_test select6-9.3 {
+ execsql {
+ SELECT x FROM (SELECT x FROM t1 LIMIT 2 OFFSET 1);
+ }
+} {2 3}
+do_test select6-9.4 {
+ execsql {
+ SELECT x FROM (SELECT x FROM t1) LIMIT 2;
+ }
+} {1 2}
+do_test select6-9.5 {
+ execsql {
+ SELECT x FROM (SELECT x FROM t1) LIMIT 2 OFFSET 1;
+ }
+} {2 3}
+do_test select6-9.6 {
+ execsql {
+ SELECT x FROM (SELECT x FROM t1 LIMIT 2) LIMIT 3;
+ }
+} {1 2}
+do_test select6-9.7 {
+ execsql {
+ SELECT x FROM (SELECT x FROM t1 LIMIT -1) LIMIT 3;
+ }
+} {1 2 3}
+do_test select6-9.8 {
+ execsql {
+ SELECT x FROM (SELECT x FROM t1 LIMIT -1);
+ }
+} {1 2 3 4}
+do_test select6-9.9 {
+ execsql {
+ SELECT x FROM (SELECT x FROM t1 LIMIT -1 OFFSET 1);
+ }
+} {2 3 4}
+
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/select7.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/select7.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,75 @@
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this file is testing compound SELECT statements and nested
+# views.
+#
+# $Id: select7.test,v 1.7 2005/03/29 03:11:00 danielk1977 Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable compound {
+
+# A 3-way INTERSECT. Ticket #875
+ifcapable tempdb {
+ do_test select7-1.1 {
+ execsql {
+ create temp table t1(x);
+ insert into t1 values('amx');
+ insert into t1 values('anx');
+ insert into t1 values('amy');
+ insert into t1 values('bmy');
+ select * from t1 where x like 'a__'
+ intersect select * from t1 where x like '_m_'
+ intersect select * from t1 where x like '__x';
+ }
+ } {amx}
+}
+
+
+# Nested views do not handle * properly. Ticket #826.
+#
+ifcapable view {
+do_test select7-2.1 {
+ execsql {
+ CREATE TABLE x(id integer primary key, a TEXT NULL);
+ INSERT INTO x (a) VALUES ('first');
+ CREATE TABLE tempx(id integer primary key, a TEXT NULL);
+ INSERT INTO tempx (a) VALUES ('t-first');
+ CREATE VIEW tv1 AS SELECT x.id, tx.id FROM x JOIN tempx tx ON tx.id=x.id;
+ CREATE VIEW tv1b AS SELECT x.id, tx.id FROM x JOIN tempx tx on tx.id=x.id;
+ CREATE VIEW tv2 AS SELECT * FROM tv1 UNION SELECT * FROM tv1b;
+ SELECT * FROM tv2;
+ }
+} {1 1}
+} ;# ifcapable view
+
+} ;# ifcapable compound
+
+# Do not allow GROUP BY without an aggregate. Ticket #1039.
+#
+# Change: force any query with a GROUP BY clause to be processed as
+# an aggregate query, whether it contains aggregates or not.
+#
+ifcapable subquery {
+ # do_test select7-3.1 {
+ # catchsql {
+ # SELECT * FROM (SELECT * FROM sqlite_master) GROUP BY name
+ # }
+ # } {1 {GROUP BY may only be used on aggregate queries}}
+ do_test select7-3.1 {
+ catchsql {
+ SELECT * FROM (SELECT * FROM sqlite_master) GROUP BY name
+ }
+ } [list 0 [execsql {SELECT * FROM sqlite_master ORDER BY name}]]
+}
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/server1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/server1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,170 @@
+# 2006 January 09
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the SQLite library. The
+# focus of this script is testing the server mode of SQLite.
+#
+# This file is derived from thread1.test
+#
+# $Id: server1.test,v 1.4 2006/01/15 00:13:16 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Skip this whole file if the server testing code is not enabled
+#
+if {[llength [info command client_step]]==0 || [sqlite3 -has-codec]} {
+ finish_test
+ return
+}
+
+# The sample server implementation does not work right when memory
+# management is enabled.
+#
+ifcapable memorymanage {
+ finish_test
+ return
+}
+
+# Create some data to work with
+#
+do_test server1-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,'abcdefgh');
+ INSERT INTO t1 SELECT a+1, b||b FROM t1;
+ INSERT INTO t1 SELECT a+2, b||b FROM t1;
+ INSERT INTO t1 SELECT a+4, b||b FROM t1;
+ SELECT count(*), max(length(b)) FROM t1;
+ }
+} {8 64}
+
+# Interleave two threads on read access. Then make sure a third
+# thread can write the database. In other words:
+#
+# read-lock A
+# read-lock B
+# unlock A
+# unlock B
+# write-lock C
+#
+do_test server1-1.2 {
+ client_create A test.db
+ client_create B test.db
+ client_create C test.db
+ client_compile A {SELECT a FROM t1}
+ client_step A
+ client_result A
+} SQLITE_ROW
+do_test server1-1.3 {
+ client_argc A
+} 1
+do_test server1-1.4 {
+ client_argv A 0
+} 1
+do_test server1-1.5 {
+ client_compile B {SELECT b FROM t1}
+ client_step B
+ client_result B
+} SQLITE_ROW
+do_test server1-1.6 {
+ client_argc B
+} 1
+do_test server1-1.7 {
+ client_argv B 0
+} abcdefgh
+do_test server1-1.8 {
+ client_finalize A
+ client_result A
+} SQLITE_OK
+do_test server1-1.9 {
+ client_finalize B
+ client_result B
+} SQLITE_OK
+do_test server1-1.10 {
+ client_compile C {CREATE TABLE t2(x,y)}
+ client_step C
+ client_result C
+} SQLITE_DONE
+do_test server1-1.11 {
+ client_finalize C
+ client_result C
+} SQLITE_OK
+do_test server1-1.12 {
+ catchsql {SELECT name FROM sqlite_master}
+ execsql {SELECT name FROM sqlite_master}
+} {t1 t2}
+
+
+# Read from table t1. Do not finalize the statement. This
+# will leave the lock pending.
+#
+do_test server1-2.1 {
+ client_halt *
+ client_create A test.db
+ client_compile A {SELECT a FROM t1}
+ client_step A
+ client_result A
+} SQLITE_ROW
+
+# Read from the same table from another thread. This is allowed.
+#
+do_test server1-2.2 {
+ client_create B test.db
+ client_compile B {SELECT b FROM t1}
+ client_step B
+ client_result B
+} SQLITE_ROW
+
+# Write to a different table from another thread. This is allowed
+# because in server mode with a shared cache we have table-level locking.
+#
+do_test server1-2.3 {
+ client_create C test.db
+ client_compile C {INSERT INTO t2 VALUES(98,99)}
+ client_step C
+ client_result C
+ client_finalize C
+ client_result C
+} SQLITE_OK
+
+# But we cannot insert into table t1 because threads A and B have it locked.
+#
+do_test server1-2.4 {
+ client_compile C {INSERT INTO t1 VALUES(98,99)}
+ client_step C
+ client_result C
+ client_finalize C
+ client_result C
+} SQLITE_LOCKED
+do_test server1-2.5 {
+ client_finalize B
+ client_wait B
+ client_compile C {INSERT INTO t1 VALUES(98,99)}
+ client_step C
+ client_result C
+ client_finalize C
+ client_result C
+} SQLITE_LOCKED
+
+# The insert into t1 succeeds once the other two threads have finalized
+# their statements.
+do_test server1-2.6 {
+ client_finalize A
+ client_wait A
+ client_compile C {INSERT INTO t1 VALUES(98,99)}
+ client_step C
+ client_result C
+ client_finalize C
+ client_result C
+} SQLITE_OK
+
+client_halt *
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/shared.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/shared.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,860 @@
+# 2005 December 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# $Id: shared.test,v 1.21 2006/01/23 21:38:03 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+db close
+
+ifcapable !shared_cache {
+ finish_test
+ return
+}
+
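+# Enable shared-cache mode for the connections opened below (the prior
+# setting is kept in ::enable_shared_cache so it can be restored later).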
+set ::enable_shared_cache [sqlite3_enable_shared_cache 1]
+
+foreach av [list 0 1] {
+
+# Open the database connection and execute the auto-vacuum pragma
+file delete -force test.db
+sqlite3 db test.db
+
+ifcapable autovacuum {
+ do_test shared-[expr $av+1].1.0 {
+ execsql "pragma auto_vacuum=$::av"
+ execsql {pragma auto_vacuum}
+ } "$av"
+} else {
+ if {$av} {
+ db close
+ break
+ }
+}
+
+# $av is currently 0 if this loop iteration is to test with auto-vacuum turned
+# off, and 1 if it is turned on. Increment it so that (1 -> no auto-vacuum)
+# and (2 -> auto-vacuum). The sole reason for this is so that it looks nicer
+# when we use this variable as part of test-case names.
+#
+incr av
+
+# Test organization:
+#
+# shared-1.*: Simple test to verify basic sanity of table level locking when
+# two connections share a pager cache.
+# shared-2.*: Test that a read transaction can co-exist with a
+# write-transaction, including a simple test to ensure the
+# external locking protocol is still working.
+# shared-3.*: Simple test of read-uncommitted mode.
+# shared-4.*: Check that the schema is locked and unlocked correctly.
+# shared-5.*: Test that creating/dropping schema items works when databases
+# are attached in different orders to different handles.
+# shared-6.*: Locking, UNION ALL queries and sub-queries.
+# shared-7.*: Autovacuum and shared-cache.
+# shared-8.*: Tests related to the text encoding of shared-cache databases.
+# shared-9.*: TEMP triggers and shared-cache databases.
+# shared-10.*: Tests of sqlite3_close().
+# shared-11.*: Test transaction locking.
+#
+
+do_test shared-$av.1.1 {
+ # Open a second database on the file test.db. It should use the same pager
+ # cache and schema as the original connection. Verify that only 1 file is
+ # opened.
+ sqlite3 db2 test.db
+ set ::sqlite_open_file_count
+} {1}
+do_test shared-$av.1.2 {
+ # Add a table and a single row of data via the first connection.
+ # Ensure that the second connection can see them.
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ INSERT INTO abc VALUES(1, 2, 3);
+ } db
+ execsql {
+ SELECT * FROM abc;
+ } db2
+} {1 2 3}
+do_test shared-$av.1.3 {
+ # Have the first connection begin a transaction and obtain a read-lock
+ # on table abc. This should not prevent the second connection from
+ # querying abc.
+ execsql {
+ BEGIN;
+ SELECT * FROM abc;
+ }
+ execsql {
+ SELECT * FROM abc;
+ } db2
+} {1 2 3}
+do_test shared-$av.1.4 {
+ # Try to insert a row into abc via connection 2. This should fail because
+  # of the read-lock that connection 1 is holding on table abc (obtained in the
+ # previous test case).
+ catchsql {
+ INSERT INTO abc VALUES(4, 5, 6);
+ } db2
+} {1 {database table is locked: abc}}
+do_test shared-$av.1.5 {
+ # Using connection 2 (the one without the open transaction), try to create
+ # a new table. This should fail because of the open read transaction
+ # held by connection 1.
+ catchsql {
+ CREATE TABLE def(d, e, f);
+ } db2
+} {1 {database table is locked: sqlite_master}}
+do_test shared-$av.1.6 {
+ # Upgrade connection 1's transaction to a write transaction. Create
+ # a new table - def - and insert a row into it. Because the connection 1
+ # transaction modifies the schema, it should not be possible for
+  # connection 2 to access the database at all until connection 1
+ # has finished the transaction.
+ execsql {
+ CREATE TABLE def(d, e, f);
+ INSERT INTO def VALUES('IV', 'V', 'VI');
+ }
+} {}
+do_test shared-$av.1.7 {
+ # Read from the sqlite_master table with connection 1 (inside the
+ # transaction). Then test that we can not do this with connection 2. This
+ # is because of the schema-modified lock established by connection 1
+ # in the previous test case.
+ execsql {
+ SELECT * FROM sqlite_master;
+ }
+ catchsql {
+ SELECT * FROM sqlite_master;
+ } db2
+} {1 {database schema is locked: main}}
+do_test shared-$av.1.8 {
+ # Commit the connection 1 transaction.
+ execsql {
+ COMMIT;
+ }
+} {}
+
+do_test shared-$av.2.1 {
+ # Open connection db3 to the database. Use a different path to the same
+ # file so that db3 does *not* share the same pager cache as db and db2
+ # (there should be two open file handles).
+ if {$::tcl_platform(platform)=="unix"} {
+ sqlite3 db3 ./test.db
+ } else {
+ sqlite3 db3 TEST.DB
+ }
+ set ::sqlite_open_file_count
+} {2}
+do_test shared-$av.2.2 {
+ # Start read transactions on db and db2 (the shared pager cache). Ensure
+ # db3 cannot write to the database.
+ execsql {
+ BEGIN;
+ SELECT * FROM abc;
+ }
+ execsql {
+ BEGIN;
+ SELECT * FROM abc;
+ } db2
+ catchsql {
+ INSERT INTO abc VALUES(1, 2, 3);
+ } db2
+} {1 {database table is locked: abc}}
+do_test shared-$av.2.3 {
+ # Turn db's transaction into a write-transaction. db3 should still be
+ # able to read from table def (but will not see the new row). Connection
+ # db2 should not be able to read def (because of the write-lock).
+
+# Todo: The failed "INSERT INTO abc ..." statement in the above test
+# has started a write-transaction on db2 (should this be so?). This
+# would prevent connection db from starting a write-transaction. So roll the
+# db2 transaction back and replace it with a new read transaction.
+ execsql {
+ ROLLBACK;
+ BEGIN;
+ SELECT * FROM abc;
+ } db2
+
+ execsql {
+ INSERT INTO def VALUES('VII', 'VIII', 'IX');
+ }
+ concat [
+ catchsql { SELECT * FROM def; } db3
+ ] [
+ catchsql { SELECT * FROM def; } db2
+ ]
+} {0 {IV V VI} 1 {database table is locked: def}}
+do_test shared-$av.2.4 {
+ # Commit the open transaction on db. db2 still holds a read-transaction.
+ # This should prevent db3 from writing to the database, but not from
+ # reading.
+ execsql {
+ COMMIT;
+ }
+ concat [
+ catchsql { SELECT * FROM def; } db3
+ ] [
+ catchsql { INSERT INTO def VALUES('X', 'XI', 'XII'); } db3
+ ]
+} {0 {IV V VI VII VIII IX} 1 {database is locked}}
+
+catchsql COMMIT db2
+
+do_test shared-$av.3.1.1 {
+ # This test case starts a linear scan of table 'seq' using a
+ # read-uncommitted connection. In the middle of the scan, rows are added
+ # to the end of the seq table (ahead of the current cursor position).
+ # The uncommitted rows should be included in the results of the scan.
+ execsql "
+ CREATE TABLE seq(i PRIMARY KEY, x);
+ INSERT INTO seq VALUES(1, '[string repeat X 500]');
+ INSERT INTO seq VALUES(2, '[string repeat X 500]');
+ "
+ execsql {SELECT * FROM sqlite_master} db2
+ execsql {PRAGMA read_uncommitted = 1} db2
+
+ set ret [list]
+ db2 eval {SELECT i FROM seq ORDER BY i} {
+ if {$i < 4} {
+ set max [execsql {SELECT max(i) FROM seq}]
+ db eval {
+ INSERT INTO seq SELECT i + :max, x FROM seq;
+ }
+ }
+ lappend ret $i
+ }
+ set ret
+} {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16}
+do_test shared-$av.3.1.2 {
+ # Another linear scan through table seq using a read-uncommitted connection.
+ # This time, delete each row as it is read. Should not affect the results of
+ # the scan, but the table should be empty after the scan is concluded
+ # (test 3.1.3 verifies this).
+ set ret [list]
+ db2 eval {SELECT i FROM seq} {
+ db eval {DELETE FROM seq WHERE i = :i}
+ lappend ret $i
+ }
+ set ret
+} {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16}
+do_test shared-$av.3.1.3 {
+ execsql {
+ SELECT * FROM seq;
+ }
+} {}
+
+catch {db close}
+catch {db2 close}
+catch {db3 close}
+
+#--------------------------------------------------------------------------
+# Tests shared-4.* test that the schema locking rules are applied
+# correctly, i.e.:
+#
+# 1. All transactions require a read-lock on the schemas of databases they
+# access.
+# 2. Transactions that modify a database schema require a write-lock on that
+# schema.
+# 3. It is not possible to compile a statement while another handle has a
+# write-lock on the schema.
+#
+
+# Open two database handles db and db2. Each has a single attach database
+# (as well as main):
+#
+# db.main -> ./test.db
+# db.test2 -> ./test2.db
+# db2.main -> ./test2.db
+# db2.test -> ./test.db
+#
+file delete -force test.db
+file delete -force test2.db
+file delete -force test2.db-journal
+sqlite3 db test.db
+sqlite3 db2 test2.db
+do_test shared-$av.4.1.1 {
+ set sqlite_open_file_count
+} {2}
+do_test shared-$av.4.1.2 {
+ execsql {ATTACH 'test2.db' AS test2}
+ set sqlite_open_file_count
+} {2}
+do_test shared-$av.4.1.3 {
+ execsql {ATTACH 'test.db' AS test} db2
+ set sqlite_open_file_count
+} {2}
+
+# Sanity check: Create a table in ./test.db via handle db, and test that handle
+# db2 can "see" the new table immediately. A handle using a seperate pager
+# cache would have to reload the database schema before this were possible.
+#
+do_test shared-$av.4.2.1 {
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ CREATE TABLE def(d, e, f);
+ INSERT INTO abc VALUES('i', 'ii', 'iii');
+ INSERT INTO def VALUES('I', 'II', 'III');
+ }
+} {}
+do_test shared-$av.4.2.2 {
+ execsql {
+ SELECT * FROM test.abc;
+ } db2
+} {i ii iii}
+
+# Open a read-transaction and read from table abc via handle 2. Check that
+# handle 1 can read table abc. Check that handle 1 cannot modify table abc
+# or the database schema. Then check that handle 1 can modify table def.
+#
+do_test shared-$av.4.3.1 {
+ execsql {
+ BEGIN;
+ SELECT * FROM test.abc;
+ } db2
+} {i ii iii}
+do_test shared-$av.4.3.2 {
+ catchsql {
+ INSERT INTO abc VALUES('iv', 'v', 'vi');
+ }
+} {1 {database table is locked: abc}}
+do_test shared-$av.4.3.3 {
+ catchsql {
+ CREATE TABLE ghi(g, h, i);
+ }
+} {1 {database table is locked: sqlite_master}}
+do_test shared-$av.4.3.3 {
+ catchsql {
+ INSERT INTO def VALUES('IV', 'V', 'VI');
+ }
+} {0 {}}
+do_test shared-$av.4.3.4 {
+ # Cleanup: commit the transaction opened by db2.
+ execsql {
+ COMMIT
+ } db2
+} {}
+
+# Open a write-transaction using handle 1 and modify the database schema.
+# Then try to execute a compiled statement to read from the same
+# database via handle 2 (fails to get the lock on sqlite_master). Also
+# try to compile a read of the same database using handle 2 (also fails).
+# Finally, compile a read of the other database using handle 2. This
+# should also fail.
+#
+ifcapable compound {
+  do_test shared-$av.4.4.1.1 {
+ # Sanity check 1: Check that the schema is what we think it is when viewed
+ # via handle 1.
+ execsql {
+ CREATE TABLE test2.ghi(g, h, i);
+ SELECT 'test.db:'||name FROM sqlite_master
+ UNION ALL
+ SELECT 'test2.db:'||name FROM test2.sqlite_master;
+ }
+ } {test.db:abc test.db:def test2.db:ghi}
+ do_test shared-$av.4.4.1.2 {
+ # Sanity check 2: Check that the schema is what we think it is when viewed
+ # via handle 2.
+ execsql {
+ SELECT 'test2.db:'||name FROM sqlite_master
+ UNION ALL
+ SELECT 'test.db:'||name FROM test.sqlite_master;
+ } db2
+ } {test2.db:ghi test.db:abc test.db:def}
+}
+
+do_test shared-$av.4.4.2 {
+ set ::DB2 [sqlite3_connection_pointer db2]
+ set sql {SELECT * FROM abc}
+ set ::STMT1 [sqlite3_prepare $::DB2 $sql -1 DUMMY]
+ execsql {
+ BEGIN;
+ CREATE TABLE jkl(j, k, l);
+ }
+ sqlite3_step $::STMT1
+} {SQLITE_ERROR}
+do_test shared-$av.4.4.3 {
+ sqlite3_finalize $::STMT1
+} {SQLITE_LOCKED}
+do_test shared-$av.4.4.4 {
+ set rc [catch {
+ set ::STMT1 [sqlite3_prepare $::DB2 $sql -1 DUMMY]
+ } msg]
+ list $rc $msg
+} {1 {(6) database schema is locked: test}}
+do_test shared-$av.4.4.5 {
+ set rc [catch {
+ set ::STMT1 [sqlite3_prepare $::DB2 "SELECT * FROM ghi" -1 DUMMY]
+ } msg]
+ list $rc $msg
+} {1 {(6) database schema is locked: test}}
+
+
+catch {db2 close}
+catch {db close}
+
+#--------------------------------------------------------------------------
+# Tests shared-5.*
+#
+foreach db [list test.db test1.db test2.db test3.db] {
+ file delete -force $db ${db}-journal
+}
+do_test shared-$av.5.1.1 {
+ sqlite3 db1 test.db
+ sqlite3 db2 test.db
+ execsql {
+ ATTACH 'test1.db' AS test1;
+ ATTACH 'test2.db' AS test2;
+ ATTACH 'test3.db' AS test3;
+ } db1
+ execsql {
+ ATTACH 'test3.db' AS test3;
+ ATTACH 'test2.db' AS test2;
+ ATTACH 'test1.db' AS test1;
+ } db2
+} {}
+do_test shared-$av.5.1.2 {
+ execsql {
+ CREATE TABLE test1.t1(a, b);
+ CREATE INDEX test1.i1 ON t1(a, b);
+ } db1
+} {}
+ifcapable view {
+ do_test shared-$av.5.1.3 {
+ execsql {
+ CREATE VIEW test1.v1 AS SELECT * FROM t1;
+ } db1
+ } {}
+}
+ifcapable trigger {
+ do_test shared-$av.5.1.4 {
+ execsql {
+ CREATE TRIGGER test1.trig1 AFTER INSERT ON t1 BEGIN
+ INSERT INTO t1 VALUES(new.a, new.b);
+ END;
+ } db1
+ } {}
+}
+do_test shared-$av.5.1.5 {
+ execsql {
+ DROP INDEX i1;
+ } db2
+} {}
+ifcapable view {
+ do_test shared-$av.5.1.6 {
+ execsql {
+ DROP VIEW v1;
+ } db2
+ } {}
+}
+ifcapable trigger {
+ do_test shared-$av.5.1.7 {
+ execsql {
+ DROP TRIGGER trig1;
+ } db2
+ } {}
+}
+do_test shared-$av.5.1.8 {
+ execsql {
+ DROP TABLE t1;
+ } db2
+} {}
+ifcapable compound {
+ do_test shared-$av.5.1.9 {
+ execsql {
+ SELECT * FROM sqlite_master UNION ALL SELECT * FROM test1.sqlite_master
+ } db1
+ } {}
+}
+
+#--------------------------------------------------------------------------
+# Tests shared-6.* test that a query obtains all the read-locks it needs
+# before starting execution of the query. This means that there is no chance
+# some rows of data will be returned before a lock fails and SQLITE_LOCKED
+# is returned.
+#
+do_test shared-$av.6.1.1 {
+ execsql {
+ CREATE TABLE t1(a, b);
+ CREATE TABLE t2(a, b);
+ INSERT INTO t1 VALUES(1, 2);
+ INSERT INTO t2 VALUES(3, 4);
+ } db1
+} {}
+ifcapable compound {
+ do_test shared-$av.6.1.2 {
+ execsql {
+ SELECT * FROM t1 UNION ALL SELECT * FROM t2;
+ } db2
+ } {1 2 3 4}
+}
+do_test shared-$av.6.1.3 {
+ # Establish a write lock on table t2 via connection db2. Then make a
+ # UNION all query using connection db1 that first accesses t1, followed
+ # by t2. If the locks are grabbed at the start of the statement (as
+ # they should be), no rows are returned. If (as was previously the case)
+ # they are grabbed as the tables are accessed, the t1 rows will be
+ # returned before the query fails.
+ #
+ execsql {
+ BEGIN;
+ INSERT INTO t2 VALUES(5, 6);
+ } db2
+ set ret [list]
+ catch {
+ db1 eval {SELECT * FROM t1 UNION ALL SELECT * FROM t2} {
+ lappend ret $a $b
+ }
+ }
+ set ret
+} {}
+do_test shared-$av.6.1.4 {
+ execsql {
+ COMMIT;
+ BEGIN;
+ INSERT INTO t1 VALUES(7, 8);
+ } db2
+ set ret [list]
+ catch {
+ db1 eval {
+ SELECT (CASE WHEN a>4 THEN (SELECT a FROM t1) ELSE 0 END) AS d FROM t2;
+ } {
+ lappend ret $d
+ }
+ }
+ set ret
+} {}
+
+catch {db1 close}
+catch {db2 close}
+foreach f [list test.db test2.db] {
+ file delete -force $f ${f}-journal
+}
+
+#--------------------------------------------------------------------------
+# Tests shared-7.* test that auto-vacuum does not invalidate cursors from
+# other shared-cache users when it reorganizes the database on
+# COMMIT.
+#
+do_test shared-$av.7.1 {
+ # This test case sets up a test database in auto-vacuum mode consisting
+ # of two tables, t1 and t2. Both have a single index. Table t1 is
+ # populated first (so consists of pages toward the start of the db file),
+ # t2 second (pages toward the end of the file).
+ sqlite3 db test.db
+ sqlite3 db2 test.db
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(a PRIMARY KEY, b);
+ CREATE TABLE t2(a PRIMARY KEY, b);
+ }
+ set ::contents {}
+ for {set i 0} {$i < 100} {incr i} {
+ set a [string repeat "$i " 20]
+ set b [string repeat "$i " 20]
+ db eval {
+ INSERT INTO t1 VALUES(:a, :b);
+ }
+ lappend ::contents [list [expr $i+1] $a $b]
+ }
+ execsql {
+ INSERT INTO t2 SELECT * FROM t1;
+ COMMIT;
+ }
+} {}
+do_test shared-$av.7.2 {
+ # This test case deletes the contents of table t1 (the one at the start of
+  # the file) while many cursors are open on table t2 and its index. All of
+ # the non-root pages will be moved from the end to the start of the file
+ # when the DELETE is committed - this test verifies that moving the pages
+ # does not disturb the open cursors.
+ #
+
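+  # [lockrow] scans $tbl in ORDER BY a order.  While its cursor sits on the
+  # row whose rowid equals the first element of $oids, it either evaluates
+  # $body (if no more oids remain) or recurses to start another scan
+  # positioned on the next oid.  Each level contributes its own complete
+  # scan of the table, so [locktblrows] ends up returning one scan per row,
+  # each taken with cursors held open on all of the preceding rows.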
+ proc lockrow {db tbl oids body} {
+ set ret [list]
+ db eval "SELECT oid AS i, a, b FROM $tbl ORDER BY a" {
+ if {$i==[lindex $oids 0]} {
+ set noids [lrange $oids 1 end]
+ if {[llength $noids]==0} {
+ set subret [eval $body]
+ } else {
+ set subret [lockrow $db $tbl $noids $body]
+ }
+ }
+ lappend ret [list $i $a $b]
+ }
+ return [linsert $subret 0 $ret]
+ }
+ proc locktblrows {db tbl body} {
+ set oids [db eval "SELECT oid FROM $tbl"]
+ lockrow $db $tbl $oids $body
+ }
+
+ set scans [locktblrows db t2 {
+ execsql {
+ DELETE FROM t1;
+ } db2
+ }]
+ set error 0
+
+ # Test that each SELECT query returned the expected contents of t2.
+ foreach s $scans {
+ if {[lsort -integer -index 0 $s]!=$::contents} {
+ set error 1
+ }
+ }
+ set error
+} {0}
+
+catch {db close}
+catch {db2 close}
+unset -nocomplain contents
+
+#--------------------------------------------------------------------------
+# The following tests try to trick the shared-cache code into assuming
+# the wrong encoding for a database.
+#
+file delete -force test.db test.db-journal
+ifcapable utf16 {
+ do_test shared-$av.8.1.1 {
+ sqlite3 db test.db
+ execsql {
+ PRAGMA encoding = 'UTF-16';
+ SELECT * FROM sqlite_master;
+ }
+ } {}
+ do_test shared-$av.8.1.2 {
+ string range [execsql {PRAGMA encoding;}] 0 end-2
+ } {UTF-16}
+ do_test shared-$av.8.1.3 {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA encoding = 'UTF-8';
+ CREATE TABLE abc(a, b, c);
+ } db2
+ } {}
+ do_test shared-$av.8.1.4 {
+ execsql {
+ SELECT * FROM sqlite_master;
+ }
+ } "table abc abc [expr $AUTOVACUUM?3:2] {CREATE TABLE abc(a, b, c)}"
+ do_test shared-$av.8.1.5 {
+ db2 close
+ execsql {
+ PRAGMA encoding;
+ }
+ } {UTF-8}
+ file delete -force test2.db test2.db-journal
+ do_test shared-$av.8.2.1 {
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ SELECT * FROM aux.sqlite_master;
+ }
+ } {}
+ do_test shared-$av.8.2.2 {
+ sqlite3 db2 test2.db
+ execsql {
+ PRAGMA encoding = 'UTF-16';
+ CREATE TABLE def(d, e, f);
+ } db2
+ string range [execsql {PRAGMA encoding;} db2] 0 end-2
+ } {UTF-16}
+ do_test shared-$av.8.2.3 {
+ catchsql {
+ SELECT * FROM aux.sqlite_master;
+ }
+ } {1 {attached databases must use the same text encoding as main database}}
+}
+
+catch {db close}
+catch {db2 close}
+file delete -force test.db test2.db
+
+#---------------------------------------------------------------------------
+# The following tests - shared-9.* - test interactions between TEMP triggers
+# and shared-schemas.
+#
+ifcapable trigger&&tempdb {
+
+do_test shared-$av.9.1 {
+ sqlite3 db test.db
+ sqlite3 db2 test.db
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ CREATE TABLE abc_mirror(a, b, c);
+ CREATE TEMP TRIGGER BEFORE INSERT ON abc BEGIN
+ INSERT INTO abc_mirror(a, b, c) VALUES(new.a, new.b, new.c);
+ END;
+ INSERT INTO abc VALUES(1, 2, 3);
+ SELECT * FROM abc_mirror;
+ }
+} {1 2 3}
+do_test shared-$av.9.2 {
+ execsql {
+ INSERT INTO abc VALUES(4, 5, 6);
+ SELECT * FROM abc_mirror;
+ } db2
+} {1 2 3}
+do_test shared-$av.9.3 {
+ db close
+ db2 close
+} {}
+
+} ; # End shared-9.*
+
+#---------------------------------------------------------------------------
+# The following tests - shared-10.* - test that the library behaves
+# correctly when a connection to a shared-cache is closed.
+#
+do_test shared-$av.10.1 {
+ # Create a small sample database with two connections to it (db and db2).
+ file delete -force test.db
+ sqlite3 db test.db
+ sqlite3 db2 test.db
+ execsql {
+ CREATE TABLE ab(a PRIMARY KEY, b);
+ CREATE TABLE de(d PRIMARY KEY, e);
+ INSERT INTO ab VALUES('Chiang Mai', 100000);
+ INSERT INTO ab VALUES('Bangkok', 8000000);
+ INSERT INTO de VALUES('Ubon', 120000);
+ INSERT INTO de VALUES('Khon Kaen', 200000);
+ }
+} {}
+do_test shared-$av.10.2 {
+ # Open a read-transaction with the first connection, a write-transaction
+ # with the second.
+ execsql {
+ BEGIN;
+ SELECT * FROM ab;
+ }
+ execsql {
+ BEGIN;
+ INSERT INTO de VALUES('Pataya', 30000);
+ } db2
+} {}
+do_test shared-$av.10.3 {
+ # An external connection should be able to read the database, but not
+ # prepare a write operation.
+ if {$::tcl_platform(platform)=="unix"} {
+ sqlite3 db3 ./test.db
+ } else {
+ sqlite3 db3 TEST.DB
+ }
+ execsql {
+ SELECT * FROM ab;
+ } db3
+ catchsql {
+ BEGIN;
+ INSERT INTO de VALUES('Pataya', 30000);
+ } db3
+} {1 {database is locked}}
+do_test shared-$av.10.4 {
+ # Close the connection with the write-transaction open
+ db2 close
+} {}
+do_test shared-$av.10.5 {
+ # Test that the db2 transaction has been automatically rolled back.
+  # If it has not, the ('Pataya', 30000) entry will still be in the table.
+ execsql {
+ SELECT * FROM de;
+ }
+} {Ubon 120000 {Khon Kaen} 200000}
+do_test shared-$av.10.5 {
+ # Closing db2 should have dropped the shared-cache back to a read-lock.
+ # So db3 should be able to prepare a write...
+ catchsql {INSERT INTO de VALUES('Pataya', 30000);} db3
+} {0 {}}
+do_test shared-$av.10.6 {
+ # ... but not commit it.
+ catchsql {COMMIT} db3
+} {1 {database is locked}}
+do_test shared-$av.10.7 {
+ # Commit the (read-only) db transaction. Check via db3 to make sure the
+ # contents of table "de" are still as they should be.
+ execsql {
+ COMMIT;
+ }
+ execsql {
+ SELECT * FROM de;
+ } db3
+} {Ubon 120000 {Khon Kaen} 200000 Pataya 30000}
+do_test shared-$av.10.9 {
+ # Commit the external transaction.
+ catchsql {COMMIT} db3
+} {0 {}}
+integrity_check shared-$av.10.10
+do_test shared-$av.10.11 {
+ db close
+ db3 close
+} {}
+
+do_test shared-$av.11.1 {
+ file delete -force test.db
+ sqlite3 db test.db
+ sqlite3 db2 test.db
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ CREATE TABLE abc2(a, b, c);
+ BEGIN;
+ INSERT INTO abc VALUES(1, 2, 3);
+ }
+} {}
+do_test shared-$av.11.2 {
+ catchsql {BEGIN;} db2
+ catchsql {SELECT * FROM abc;} db2
+} {1 {database table is locked: abc}}
+do_test shared-$av.11.3 {
+ catchsql {BEGIN} db2
+} {1 {cannot start a transaction within a transaction}}
+do_test shared-$av.11.4 {
+ catchsql {SELECT * FROM abc2;} db2
+} {0 {}}
+do_test shared-$av.11.5 {
+ catchsql {INSERT INTO abc2 VALUES(1, 2, 3);} db2
+} {1 {database is locked}}
+do_test shared-$av.11.6 {
+ catchsql {SELECT * FROM abc2}
+} {0 {}}
+do_test shared-$av.11.6 {
+ execsql {
+ ROLLBACK;
+ PRAGMA read_uncommitted = 1;
+ } db2
+} {}
+do_test shared-$av.11.7 {
+ execsql {
+ INSERT INTO abc2 VALUES(4, 5, 6);
+ INSERT INTO abc2 VALUES(7, 8, 9);
+ }
+} {}
+do_test shared-$av.11.8 {
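+  # db2 has read_uncommitted set, so its join scan over abc and abc2 keeps
+  # running while the DELETE issued from inside the scan (on the default
+  # [db] connection) empties abc.  The expected result below shows the
+  # first pair (1 4) read before the DELETE, then an empty abc value
+  # alongside the remaining abc2 row (7).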
+ set res [list]
+ breakpoint
+ db2 eval {
+ SELECT abc.a as I, abc2.a as II FROM abc, abc2;
+ } {
+ execsql {
+ DELETE FROM abc WHERE 1;
+ }
+ lappend res $I $II
+ }
+ set res
+} {1 4 {} 7}
+
+do_test shared-$av.11.11 {
+ db close
+ db2 close
+} {}
+
+}
+
+sqlite3_enable_shared_cache $::enable_shared_cache
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/shared2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/shared2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,132 @@
+# 2005 January 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# $Id: shared2.test,v 1.4 2006/01/26 13:11:37 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+db close
+
+ifcapable !shared_cache {
+ finish_test
+ return
+}
+set ::enable_shared_cache [sqlite3_enable_shared_cache 1]
+
+# Test that if we delete all rows from a table any read-uncommitted
+# cursors are correctly invalidated. Test on both table and index btrees.
+do_test shared2-1.1 {
+ sqlite3 db1 test.db
+ sqlite3 db2 test.db
+
+ # Set up some data. Table "numbers" has 64 rows after this block
+ # is executed.
+ execsql {
+ BEGIN;
+ CREATE TABLE numbers(a PRIMARY KEY, b);
+ INSERT INTO numbers(oid) VALUES(NULL);
+ INSERT INTO numbers(oid) SELECT NULL FROM numbers;
+ INSERT INTO numbers(oid) SELECT NULL FROM numbers;
+ INSERT INTO numbers(oid) SELECT NULL FROM numbers;
+ INSERT INTO numbers(oid) SELECT NULL FROM numbers;
+ INSERT INTO numbers(oid) SELECT NULL FROM numbers;
+ INSERT INTO numbers(oid) SELECT NULL FROM numbers;
+ UPDATE numbers set a = oid, b = 'abcdefghijklmnopqrstuvwxyz0123456789';
+ COMMIT;
+ } db1
+} {}
+do_test shared2-1.2 {
+ # Put connection 2 in read-uncommitted mode and start a SELECT on table
+  # 'numbers'. Halfway through the SELECT, use connection 1 to delete the
+ # contents of this table.
+ execsql {
+ pragma read_uncommitted = 1;
+ } db2
+ set count [execsql {SELECT count(*) FROM numbers} db2]
+ db2 eval {SELECT a FROM numbers ORDER BY oid} {
+ if {$a==32} {
+ execsql {
+ BEGIN;
+ DELETE FROM numbers;
+ } db1
+ }
+ }
+ list $a $count
+} {32 64}
+do_test shared2-1.3 {
+ # Same test as 1.2, except scan using the index this time.
+ execsql {
+ ROLLBACK;
+ } db1
+ set count [execsql {SELECT count(*) FROM numbers} db2]
+ db2 eval {SELECT a, b FROM numbers ORDER BY a} {
+ if {$a==32} {
+ execsql {
+ DELETE FROM numbers;
+ } db1
+ }
+ }
+ list $a $count
+} {32 64}
+
+#---------------------------------------------------------------------------
+# These tests, shared2-2.*, test the outcome when data is added to or
+# removed from a table due to a rollback while a read-uncommitted
+# cursor is scanning it.
+#
+do_test shared2-2.1 {
+ execsql {
+ INSERT INTO numbers VALUES(1, 'Medium length text field');
+ INSERT INTO numbers VALUES(2, 'Medium length text field');
+ INSERT INTO numbers VALUES(3, 'Medium length text field');
+ INSERT INTO numbers VALUES(4, 'Medium length text field');
+ BEGIN;
+ DELETE FROM numbers WHERE (a%2)=0;
+ } db1
+ set res [list]
+ db2 eval {
+ SELECT a FROM numbers ORDER BY a;
+ } {
+ lappend res $a
+ if {$a==3} {
+ execsql {ROLLBACK} db1
+ }
+ }
+ set res
+} {1 3 4}
+do_test shared2-2.2 {
+ execsql {
+ BEGIN;
+ INSERT INTO numbers VALUES(5, 'Medium length text field');
+ INSERT INTO numbers VALUES(6, 'Medium length text field');
+ } db1
+ set res [list]
+ db2 eval {
+ SELECT a FROM numbers ORDER BY a;
+ } {
+ lappend res $a
+ if {$a==5} {
+ execsql {ROLLBACK} db1
+ }
+ }
+ set res
+} {1 2 3 4 5}
+
+db1 close
+db2 close
+
+do_test shared2-3.2 {
+ sqlite3_thread_cleanup
+ sqlite3_enable_shared_cache 1
+} {0}
+
+sqlite3_enable_shared_cache $::enable_shared_cache
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/shared3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/shared3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,47 @@
+# 2005 January 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# $Id: shared3.test,v 1.1 2006/05/24 12:43:28 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+db close
+
+ifcapable !shared_cache {
+ finish_test
+ return
+}
+set ::enable_shared_cache [sqlite3_enable_shared_cache 1]
+
+# Ticket #1824
+#
+do_test shared3-1.1 {
+ file delete -force test.db test.db-journal
+ sqlite3 db1 test.db
+ db1 eval {
+ PRAGMA encoding=UTF16;
+ CREATE TABLE t1(x,y);
+ INSERT INTO t1 VALUES('abc','This is a test string');
+ }
+ db1 close
+ sqlite3 db1 test.db
+ db1 eval {SELECT * FROM t1}
+} {abc {This is a test string}}
+do_test shared3-1.2 {
+ sqlite3 db2 test.db
+ db2 eval {SELECT y FROM t1 WHERE x='abc'}
+} {{This is a test string}}
+
+db1 close
+db2 close
+
+sqlite3_enable_shared_cache $::enable_shared_cache
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/shared_err.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/shared_err.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,412 @@
+# 2005 December 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# The focus of the tests in this file is IO errors that occur in a shared
+# cache context. What happens to connection B if another connection, A,
+# encounters an IO error whilst reading or writing the file-system?
+#
+# $Id: shared_err.test,v 1.9 2006/01/24 16:37:59 danielk1977 Exp $
+
+proc skip {args} {}
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+db close
+
+ifcapable !shared_cache||!subquery {
+ finish_test
+ return
+}
+set ::enable_shared_cache [sqlite3_enable_shared_cache 1]
+
+
+# Todo: This is a copy of the [do_malloc_test] proc in malloc.test
+# It would be better if these were consolidated.
+
+# Usage: do_malloc_test <test number> <options...>
+#
+# The first argument, <test number>, is an integer used to name the
+# tests executed by this proc. Options are as follows:
+#
+# -tclprep TCL script to run to prepare test.
+# -sqlprep SQL script to run to prepare test.
+# -tclbody TCL script to run with malloc failure simulation.
+# -sqlbody SQL script to run with malloc failure simulation.
+# -cleanup TCL script to run after the test.
+#
+# This command runs a series of tests to verify SQLite's ability
+# to handle an out-of-memory condition gracefully. It is assumed
+# that if this condition occurs a malloc() call will return a
+# NULL pointer. Linux, for example, doesn't do that by default. See
+# the "BUGS" section of malloc(3).
+#
+# On each iteration of the loop, the TCL commands in any argument passed to
+# the -tclbody switch, followed by the SQL commands in any argument
+# passed to the -sqlbody switch, are executed. On each iteration the
+# Nth call to sqliteMalloc() is made to fail, where N starts at 1 and
+# increases each time the loop runs. When all commands execute
+# successfully, the loop ends.
+#
+proc do_malloc_test {tn args} {
+ array unset ::mallocopts
+ array set ::mallocopts $args
+
+ set ::go 1
+ for {set ::n 1} {$::go && $::n < 50000} {incr ::n} {
+ do_test shared_malloc-$tn.$::n {
+
+      # Remove all traces of database files test.db and test2.db from the
+      # file system. Then open the (empty) database "test.db" with the
+      # handle [db].
+ #
+ sqlite_malloc_fail 0
+ catch {db close}
+ catch {file delete -force test.db}
+ catch {file delete -force test.db-journal}
+ catch {file delete -force test2.db}
+ catch {file delete -force test2.db-journal}
+ catch {sqlite3 db test.db}
+ set ::DB [sqlite3_connection_pointer db]
+
+ # Execute any -tclprep and -sqlprep scripts.
+ #
+ if {[info exists ::mallocopts(-tclprep)]} {
+ eval $::mallocopts(-tclprep)
+ }
+ if {[info exists ::mallocopts(-sqlprep)]} {
+ execsql $::mallocopts(-sqlprep)
+ }
+
+ # Now set the ${::n}th malloc() to fail and execute the -tclbody and
+ # -sqlbody scripts.
+ #
+ sqlite_malloc_fail $::n
+ set ::mallocbody {}
+ if {[info exists ::mallocopts(-tclbody)]} {
+ append ::mallocbody "$::mallocopts(-tclbody)\n"
+ }
+ if {[info exists ::mallocopts(-sqlbody)]} {
+ append ::mallocbody "db eval {$::mallocopts(-sqlbody)}"
+ }
+ set v [catch $::mallocbody msg]
+
+ set leftover [lindex [sqlite_malloc_stat] 2]
+ if {$leftover>0} {
+ if {$leftover>1} {puts "\nLeftover: $leftover\nReturn=$v Message=$msg"}
+ set ::go 0
+ if {$v} {
+ puts "\nError message returned: $msg"
+ } else {
+ set v {1 1}
+ }
+ } else {
+ set v2 [expr {$msg=="" || $msg=="out of memory"}]
+ if {!$v2} {puts "\nError message returned: $msg"}
+ lappend v $v2
+ }
+ } {1 1}
+
+ sqlite_malloc_fail 0
+ if {[info exists ::mallocopts(-cleanup)]} {
+ catch [list uplevel #0 $::mallocopts(-cleanup)] msg
+ }
+ }
+ unset ::mallocopts
+}
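+
+# A minimal usage sketch for the proc above (hypothetical test number,
+# table and SQL; shown commented out because the malloc hooks may not be
+# available at this point in the file):
+#
+#   do_malloc_test 99 -sqlprep {
+#     CREATE TABLE ex(a, b);
+#   } -sqlbody {
+#     INSERT INTO ex VALUES(1, 2);
+#     SELECT * FROM ex;
+#   } -cleanup {
+#     catch {db close}
+#   }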
+
+
+do_ioerr_test shared_ioerr-1 -tclprep {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA read_uncommitted = 1;
+ CREATE TABLE t1(a,b,c);
+ BEGIN;
+ SELECT * FROM sqlite_master;
+ } db2
+} -sqlbody {
+ SELECT * FROM sqlite_master;
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN TRANSACTION;
+ INSERT INTO t1 VALUES(1,2,3);
+ INSERT INTO t1 VALUES(4,5,6);
+ ROLLBACK;
+ SELECT * FROM t1;
+ BEGIN TRANSACTION;
+ INSERT INTO t1 VALUES(1,2,3);
+ INSERT INTO t1 VALUES(4,5,6);
+ COMMIT;
+ SELECT * FROM t1;
+ DELETE FROM t1 WHERE a<100;
+} -cleanup {
+ do_test shared_ioerr-1.$n.cleanup.1 {
+ set res [catchsql {
+ SELECT * FROM t1;
+ } db2]
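+    # Which of these results appears depends on how far the -sqlbody script
+    # got before the simulated IO error was hit.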
+ set possible_results [list \
+ "1 {disk I/O error}" \
+ "0 {1 2 3}" \
+ "0 {1 2 3 1 2 3 4 5 6}" \
+ "0 {1 2 3 1 2 3 4 5 6 1 2 3 4 5 6}" \
+ "0 {}" \
+ ]
+ set rc [expr [lsearch -exact $possible_results $res] >= 0]
+ if {$rc != 1} {
+ puts ""
+ puts "Result: $res"
+ }
+ set rc
+ } {1}
+ db2 close
+}
+
+do_ioerr_test shared_ioerr-2 -tclprep {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA read_uncommitted = 1;
+ BEGIN;
+ CREATE TABLE t1(a, b);
+ INSERT INTO t1(oid) VALUES(NULL);
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ INSERT INTO t1(oid) SELECT NULL FROM t1;
+ UPDATE t1 set a = oid, b = 'abcdefghijklmnopqrstuvwxyz0123456789';
+ CREATE INDEX i1 ON t1(a);
+ COMMIT;
+ BEGIN;
+ SELECT * FROM sqlite_master;
+ } db2
+} -tclbody {
+ set ::residx 0
+ execsql {DELETE FROM t1 WHERE 0 = (a % 2);}
+ incr ::residx
+
+ # When this transaction begins the table contains 512 entries. The
+ # two statements together add 512+146 more if it succeeds.
+ # (1024/7==146)
+ execsql {BEGIN;}
+ execsql {INSERT INTO t1 SELECT a+1, b FROM t1;}
+ execsql {INSERT INTO t1 SELECT 'string' || a, b FROM t1 WHERE 0 = (a%7);}
+ execsql {COMMIT;}
+
+ incr ::residx
+} -cleanup {
+ do_test shared_ioerr-2.$n.cleanup.1 {
+ set res [catchsql {
+ SELECT max(a), min(a), count(*) FROM (SELECT a FROM t1 order by a);
+ } db2]
+ set possible_results [list \
+ {0 {1024 1 1024}} \
+ {0 {1023 1 512}} \
+ {0 {string994 1 1170}} \
+ ]
+ set idx [lsearch -exact $possible_results $res]
+ set success [expr {$idx==$::residx || $res=="1 {disk I/O error}"}]
+ if {!$success} {
+ puts ""
+ puts "Result: \"$res\" ($::residx)"
+ }
+ set success
+ } {1}
+ db2 close
+}
+
+# This test is designed to provoke an IO error when a cursor position is
+# "saved" (because another cursor is going to modify the underlying table).
+#
+do_ioerr_test shared_ioerr-3 -tclprep {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA read_uncommitted = 1;
+ PRAGMA cache_size = 10;
+ BEGIN;
+ CREATE TABLE t1(a, b, UNIQUE(a, b));
+ } db2
+ for {set i 0} {$i < 200} {incr i} {
+ set a [string range [string repeat "[format %03d $i]." 5] 0 end-1]
+
+ set b [string repeat $i 2000]
+ execsql {INSERT INTO t1 VALUES($a, $b)} db2
+ }
+ execsql {COMMIT} db2
+ set ::DB2 [sqlite3_connection_pointer db2]
+ set ::STMT [sqlite3_prepare $::DB2 "SELECT a FROM t1 ORDER BY a" -1 DUMMY]
+ sqlite3_step $::STMT ;# Cursor points at 000.000.000.000
+ sqlite3_step $::STMT ;# Cursor points at 001.001.001.001
+
+} -tclbody {
+ execsql {
+ BEGIN;
+ INSERT INTO t1 VALUES('201.201.201.201.201', NULL);
+ UPDATE t1 SET a = '202.202.202.202.202' WHERE a LIKE '201%';
+ COMMIT;
+ }
+} -cleanup {
+ do_test shared_ioerr-3.$n.cleanup.1 {
+ sqlite3_step $::STMT
+ } {SQLITE_ROW}
+ do_test shared_ioerr-3.$n.cleanup.2 {
+ sqlite3_column_text $::STMT 0
+ } {002.002.002.002.002}
+ do_test shared_ioerr-3.$n.cleanup.3 {
+ sqlite3_finalize $::STMT
+ } {SQLITE_OK}
+# db2 eval {select * from sqlite_master}
+ db2 close
+}
+
+# Only run these tests if memory debugging is turned on.
+#
+if {[info command sqlite_malloc_stat]==""} {
+ puts "Skipping malloc tests: not compiled with -DSQLITE_MEMDEBUG..."
+ db close
+ sqlite3_enable_shared_cache $::enable_shared_cache
+ finish_test
+ return
+}
+
+# Provoke a malloc() failure when a cursor position is being saved. This
+# only happens with index cursors (because they malloc() space to save the
+# current key value). It does not happen with tables, because an integer
+# key does not require a malloc() to store.
+#
+# The library should return an SQLITE_NOMEM to the caller. The query that
+# owns the cursor (the one for which the position is not saved) should
+# continue unaffected.
+#
+do_malloc_test 4 -tclprep {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA read_uncommitted = 1;
+ BEGIN;
+ CREATE TABLE t1(a, b, UNIQUE(a, b));
+ } db2
+ for {set i 0} {$i < 5} {incr i} {
+ set a [string repeat $i 10]
+ set b [string repeat $i 2000]
+ execsql {INSERT INTO t1 VALUES($a, $b)} db2
+ }
+ execsql {COMMIT} db2
+ set ::DB2 [sqlite3_connection_pointer db2]
+ set ::STMT [sqlite3_prepare $::DB2 "SELECT a FROM t1 ORDER BY a" -1 DUMMY]
+ sqlite3_step $::STMT ;# Cursor points at 0000000000
+ sqlite3_step $::STMT ;# Cursor points at 1111111111
+} -tclbody {
+ execsql {
+ INSERT INTO t1 VALUES(6, NULL);
+ }
+} -cleanup {
+ do_test shared_malloc-4.$::n.cleanup.1 {
+ set ::rc [sqlite3_step $::STMT]
+ expr {$::rc=="SQLITE_ROW" || $::rc=="SQLITE_ABORT"}
+ } {1}
+ if {$::rc=="SQLITE_ROW"} {
+ do_test shared_malloc-4.$::n.cleanup.2 {
+ sqlite3_column_text $::STMT 0
+ } {2222222222}
+ }
+ do_test shared_malloc-4.$::n.cleanup.3 {
+ sqlite3_finalize $::STMT
+ } {SQLITE_OK}
+# db2 eval {select * from sqlite_master}
+ db2 close
+}
+
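+# Check that simply opening and closing two shared-cache connections is
+# robust against malloc() failure.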
+do_malloc_test 5 -tclbody {
+ sqlite3 dbX test.db
+ sqlite3 dbY test.db
+ dbX close
+ dbY close
+} -cleanup {
+ catch {dbX close}
+ catch {dbY close}
+}
+
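+# Check that disabling shared-cache mode (legal here because all
+# connections have been closed) is robust against malloc() failure.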
+do_malloc_test 6 -tclbody {
+ catch {db close}
+ sqlite3_thread_cleanup
+ sqlite3_enable_shared_cache 0
+} -cleanup {
+ sqlite3_enable_shared_cache 1
+}
+
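+# Changing the shared-cache setting while any connection is open is a
+# misuse of the API and should be reported as such.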
+do_test shared_misuse-7.1 {
+ sqlite3 db test.db
+ catch {
+ sqlite3_enable_shared_cache 0
+ } msg
+ set msg
+} {library routine called out of sequence}
+
+# Again provoke a malloc() failure when a cursor position is being saved,
+# this time during a ROLLBACK operation by some other handle.
+#
+# The library should return an SQLITE_NOMEM to the caller. The query that
+# owns the cursor (the one for which the position is not saved) should
+# be aborted.
+#
+set ::aborted 0
+do_malloc_test 8 -tclprep {
+ sqlite3 db2 test.db
+ execsql {
+ PRAGMA read_uncommitted = 1;
+ BEGIN;
+ CREATE TABLE t1(a, b, UNIQUE(a, b));
+ } db2
+ for {set i 0} {$i < 2} {incr i} {
+ set a [string repeat $i 10]
+ set b [string repeat $i 2000]
+ execsql {INSERT INTO t1 VALUES($a, $b)} db2
+ }
+ execsql {COMMIT} db2
+ set ::DB2 [sqlite3_connection_pointer db2]
+ set ::STMT [sqlite3_prepare $::DB2 "SELECT a FROM t1 ORDER BY a" -1 DUMMY]
+ sqlite3_step $::STMT ;# Cursor points at 0000000000
+ sqlite3_step $::STMT ;# Cursor points at 1111111111
+} -tclbody {
+ execsql {
+ BEGIN;
+ INSERT INTO t1 VALUES(6, NULL);
+ ROLLBACK;
+ }
+} -cleanup {
+ do_test shared_malloc-8.$::n.cleanup.1 {
+ lrange [execsql {
+ SELECT a FROM t1;
+ } db2] 0 1
+ } {0000000000 1111111111}
+ do_test shared_malloc-8.$::n.cleanup.2 {
+ set rc1 [sqlite3_step $::STMT]
+ set rc2 [sqlite3_finalize $::STMT]
+ if {$rc1=="SQLITE_ABORT"} {
+ incr ::aborted
+ }
+ expr {
+ ($rc1=="SQLITE_DONE" && $rc2=="SQLITE_OK") ||
+ ($rc1=="SQLITE_ABORT" && $rc2=="SQLITE_OK")
+ }
+ } {1}
+ db2 close
+}
+do_test shared_malloc-8.X {
+ # Test that one or more queries were aborted due to the malloc() failure.
+ expr $::aborted>=1
+} {1}
+
+catch {db close}
+sqlite3_enable_shared_cache $::enable_shared_cache
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/sort.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/sort.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,467 @@
+# 2001 September 15.
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the ORDER BY clause and sorting.
+#
+# $Id: sort.test,v 1.25 2005/11/14 22:29:06 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a bunch of data to sort against
+#
+do_test sort-1.0 {
+ execsql {
+ CREATE TABLE t1(
+ n int,
+ v varchar(10),
+ log int,
+ roman varchar(10),
+ flt real
+ );
+ INSERT INTO t1 VALUES(1,'one',0,'I',3.141592653);
+ INSERT INTO t1 VALUES(2,'two',1,'II',2.15);
+ INSERT INTO t1 VALUES(3,'three',1,'III',4221.0);
+ INSERT INTO t1 VALUES(4,'four',2,'IV',-0.0013442);
+ INSERT INTO t1 VALUES(5,'five',2,'V',-11);
+ INSERT INTO t1 VALUES(6,'six',2,'VI',0.123);
+ INSERT INTO t1 VALUES(7,'seven',2,'VII',123.0);
+ INSERT INTO t1 VALUES(8,'eight',3,'VIII',-1.6);
+ }
+ execsql {SELECT count(*) FROM t1}
+} {8}
+
+do_test sort-1.1 {
+ execsql {SELECT n FROM t1 ORDER BY n}
+} {1 2 3 4 5 6 7 8}
+do_test sort-1.1.1 {
+ execsql {SELECT n FROM t1 ORDER BY n ASC}
+} {1 2 3 4 5 6 7 8}
+do_test sort-1.1.2 {
+ execsql {SELECT ALL n FROM t1 ORDER BY n ASC}
+} {1 2 3 4 5 6 7 8}
+do_test sort-1.2 {
+ execsql {SELECT n FROM t1 ORDER BY n DESC}
+} {8 7 6 5 4 3 2 1}
+do_test sort-1.3a {
+ execsql {SELECT v FROM t1 ORDER BY v}
+} {eight five four one seven six three two}
+do_test sort-1.3b {
+ execsql {SELECT n FROM t1 ORDER BY v}
+} {8 5 4 1 7 6 3 2}
+do_test sort-1.4 {
+ execsql {SELECT n FROM t1 ORDER BY v DESC}
+} {2 3 6 7 1 4 5 8}
+do_test sort-1.5 {
+ execsql {SELECT flt FROM t1 ORDER BY flt}
+} {-11.0 -1.6 -0.0013442 0.123 2.15 3.141592653 123.0 4221.0}
+do_test sort-1.6 {
+ execsql {SELECT flt FROM t1 ORDER BY flt DESC}
+} {4221.0 123.0 3.141592653 2.15 0.123 -0.0013442 -1.6 -11.0}
+do_test sort-1.7 {
+ execsql {SELECT roman FROM t1 ORDER BY roman}
+} {I II III IV V VI VII VIII}
+do_test sort-1.8 {
+ execsql {SELECT n FROM t1 ORDER BY log, flt}
+} {1 2 3 5 4 6 7 8}
+do_test sort-1.8.1 {
+ execsql {SELECT n FROM t1 ORDER BY log asc, flt}
+} {1 2 3 5 4 6 7 8}
+do_test sort-1.8.2 {
+ execsql {SELECT n FROM t1 ORDER BY log, flt ASC}
+} {1 2 3 5 4 6 7 8}
+do_test sort-1.8.3 {
+ execsql {SELECT n FROM t1 ORDER BY log ASC, flt asc}
+} {1 2 3 5 4 6 7 8}
+do_test sort-1.9 {
+ execsql {SELECT n FROM t1 ORDER BY log, flt DESC}
+} {1 3 2 7 6 4 5 8}
+do_test sort-1.9.1 {
+ execsql {SELECT n FROM t1 ORDER BY log ASC, flt DESC}
+} {1 3 2 7 6 4 5 8}
+do_test sort-1.10 {
+ execsql {SELECT n FROM t1 ORDER BY log DESC, flt}
+} {8 5 4 6 7 2 3 1}
+do_test sort-1.11 {
+ execsql {SELECT n FROM t1 ORDER BY log DESC, flt DESC}
+} {8 7 6 4 5 3 2 1}
+
+# These tests are designed to reach some hard-to-reach places
+# inside the string comparison routines.
+#
+# (Later) The sorting behavior changed in 2.7.0. But we will
+# keep these tests. You can never have too many test cases!
+#
+do_test sort-2.1.1 {
+ execsql {
+ UPDATE t1 SET v='x' || -flt;
+ UPDATE t1 SET v='x-2b' where v=='x-0.123';
+ SELECT v FROM t1 ORDER BY v;
+ }
+} {x-123.0 x-2.15 x-2b x-3.141592653 x-4221.0 x0.0013442 x1.6 x11.0}
+do_test sort-2.1.2 {
+ execsql {
+ SELECT v FROM t1 ORDER BY substr(v,2,999);
+ }
+} {x-123.0 x-2.15 x-2b x-3.141592653 x-4221.0 x0.0013442 x1.6 x11.0}
+do_test sort-2.1.3 {
+ execsql {
+ SELECT v FROM t1 ORDER BY substr(v,2,999)+0.0;
+ }
+} {x-4221.0 x-123.0 x-3.141592653 x-2.15 x-2b x0.0013442 x1.6 x11.0}
+do_test sort-2.1.4 {
+ execsql {
+ SELECT v FROM t1 ORDER BY substr(v,2,999) DESC;
+ }
+} {x11.0 x1.6 x0.0013442 x-4221.0 x-3.141592653 x-2b x-2.15 x-123.0}
+do_test sort-2.1.5 {
+ execsql {
+ SELECT v FROM t1 ORDER BY substr(v,2,999)+0.0 DESC;
+ }
+} {x11.0 x1.6 x0.0013442 x-2b x-2.15 x-3.141592653 x-123.0 x-4221.0}
+
+# This tests a bug fixed in version 2.2.4.
+# Strings are normally mapped to upper-case for a caseless comparison.
+# But this can cause problems for characters that fall between 'Z' and 'a'
+# in ASCII.
+#
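+# (A quick illustration of the point above, using Tcl's byte-wise string
+# comparison rather than SQL; hypothetical test name.  The '`' character
+# is 0x60, which lies between 'Z' (0x5A) and 'a' (0x61), so folding the
+# letters to upper case flips its order relative to them.)
+do_test sort-3.0 {
+  list [string compare AGLIE` AGLIEN] [string compare aglie` aglien]
+} {1 -1}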
+do_test sort-3.1 {
+ execsql {
+ CREATE TABLE t2(a,b);
+ INSERT INTO t2 VALUES('AGLIENTU',1);
+ INSERT INTO t2 VALUES('AGLIE`',2);
+ INSERT INTO t2 VALUES('AGNA',3);
+ SELECT a, b FROM t2 ORDER BY a;
+ }
+} {AGLIENTU 1 AGLIE` 2 AGNA 3}
+do_test sort-3.2 {
+ execsql {
+ SELECT a, b FROM t2 ORDER BY a DESC;
+ }
+} {AGNA 3 AGLIE` 2 AGLIENTU 1}
+do_test sort-3.3 {
+ execsql {
+ DELETE FROM t2;
+ INSERT INTO t2 VALUES('aglientu',1);
+ INSERT INTO t2 VALUES('aglie`',2);
+ INSERT INTO t2 VALUES('agna',3);
+ SELECT a, b FROM t2 ORDER BY a;
+ }
+} {aglie` 2 aglientu 1 agna 3}
+do_test sort-3.4 {
+ execsql {
+ SELECT a, b FROM t2 ORDER BY a DESC;
+ }
+} {agna 3 aglientu 1 aglie` 2}
+
+# Version 2.7.0 testing.
+#
+do_test sort-4.1 {
+ execsql {
+ INSERT INTO t1 VALUES(9,'x2.7',3,'IX',4.0e5);
+ INSERT INTO t1 VALUES(10,'x5.0e10',3,'X',-4.0e5);
+ INSERT INTO t1 VALUES(11,'x-4.0e9',3,'XI',4.1e4);
+ INSERT INTO t1 VALUES(12,'x01234567890123456789',3,'XII',-4.2e3);
+ SELECT n FROM t1 ORDER BY n;
+ }
+} {1 2 3 4 5 6 7 8 9 10 11 12}
+do_test sort-4.2 {
+ execsql {
+ SELECT n||'' FROM t1 ORDER BY 1;
+ }
+} {1 10 11 12 2 3 4 5 6 7 8 9}
+do_test sort-4.3 {
+ execsql {
+ SELECT n+0 FROM t1 ORDER BY 1;
+ }
+} {1 2 3 4 5 6 7 8 9 10 11 12}
+do_test sort-4.4 {
+ execsql {
+ SELECT n||'' FROM t1 ORDER BY 1 DESC;
+ }
+} {9 8 7 6 5 4 3 2 12 11 10 1}
+do_test sort-4.5 {
+ execsql {
+ SELECT n+0 FROM t1 ORDER BY 1 DESC;
+ }
+} {12 11 10 9 8 7 6 5 4 3 2 1}
+do_test sort-4.6 {
+ execsql {
+ SELECT v FROM t1 ORDER BY 1;
+ }
+} {x-123.0 x-2.15 x-2b x-3.141592653 x-4.0e9 x-4221.0 x0.0013442 x01234567890123456789 x1.6 x11.0 x2.7 x5.0e10}
+do_test sort-4.7 {
+ execsql {
+ SELECT v FROM t1 ORDER BY 1 DESC;
+ }
+} {x5.0e10 x2.7 x11.0 x1.6 x01234567890123456789 x0.0013442 x-4221.0 x-4.0e9 x-3.141592653 x-2b x-2.15 x-123.0}
+do_test sort-4.8 {
+ execsql {
+ SELECT substr(v,2,99) FROM t1 ORDER BY 1;
+ }
+} {-123.0 -2.15 -2b -3.141592653 -4.0e9 -4221.0 0.0013442 01234567890123456789 1.6 11.0 2.7 5.0e10}
+#do_test sort-4.9 {
+# execsql {
+# SELECT substr(v,2,99)+0.0 FROM t1 ORDER BY 1;
+# }
+#} {-4000000000 -4221 -123 -3.141592653 -2.15 -2 0.0013442 1.6 2.7 11 50000000000 1.23456789012346e+18}
+
+do_test sort-5.1 {
+ execsql {
+ create table t3(a,b);
+ insert into t3 values(5,NULL);
+ insert into t3 values(6,NULL);
+ insert into t3 values(3,NULL);
+ insert into t3 values(4,'cd');
+ insert into t3 values(1,'ab');
+ insert into t3 values(2,NULL);
+ select a from t3 order by b, a;
+ }
+} {2 3 5 6 1 4}
+do_test sort-5.2 {
+ execsql {
+ select a from t3 order by b, a desc;
+ }
+} {6 5 3 2 1 4}
+do_test sort-5.3 {
+ execsql {
+ select a from t3 order by b desc, a;
+ }
+} {4 1 2 3 5 6}
+do_test sort-5.4 {
+ execsql {
+ select a from t3 order by b desc, a desc;
+ }
+} {4 1 6 5 3 2}
+
+do_test sort-6.1 {
+ execsql {
+ create index i3 on t3(b,a);
+ select a from t3 order by b, a;
+ }
+} {2 3 5 6 1 4}
+do_test sort-6.2 {
+ execsql {
+ select a from t3 order by b, a desc;
+ }
+} {6 5 3 2 1 4}
+do_test sort-6.3 {
+ execsql {
+ select a from t3 order by b desc, a;
+ }
+} {4 1 2 3 5 6}
+do_test sort-6.4 {
+ execsql {
+ select a from t3 order by b desc, a desc;
+ }
+} {4 1 6 5 3 2}
+
+do_test sort-7.1 {
+ execsql {
+ CREATE TABLE t4(
+ a INTEGER,
+ b VARCHAR(30)
+ );
+ INSERT INTO t4 VALUES(1,1);
+ INSERT INTO t4 VALUES(2,2);
+ INSERT INTO t4 VALUES(11,11);
+ INSERT INTO t4 VALUES(12,12);
+ SELECT a FROM t4 ORDER BY 1;
+ }
+} {1 2 11 12}
+do_test sort-7.2 {
+ execsql {
+ SELECT b FROM t4 ORDER BY 1
+ }
+} {1 11 12 2}
+
+# Omit tests sort-7.3 to sort-7.8 if view support was disabled at
+# compilation time.
+ifcapable view {
+do_test sort-7.3 {
+ execsql {
+ CREATE VIEW v4 AS SELECT * FROM t4;
+ SELECT a FROM v4 ORDER BY 1;
+ }
+} {1 2 11 12}
+do_test sort-7.4 {
+ execsql {
+ SELECT b FROM v4 ORDER BY 1;
+ }
+} {1 11 12 2}
+
+ifcapable compound {
+do_test sort-7.5 {
+ execsql {
+ SELECT a FROM t4 UNION SELECT a FROM v4 ORDER BY 1;
+ }
+} {1 2 11 12}
+do_test sort-7.6 {
+ execsql {
+ SELECT b FROM t4 UNION SELECT a FROM v4 ORDER BY 1;
+ }
+} {1 2 11 12 1 11 12 2} ;# text from t4.b and numeric from v4.a
+do_test sort-7.7 {
+ execsql {
+ SELECT a FROM t4 UNION SELECT b FROM v4 ORDER BY 1;
+ }
+} {1 2 11 12 1 11 12 2} ;# numeric from t4.a and text from v4.b
+do_test sort-7.8 {
+ execsql {
+ SELECT b FROM t4 UNION SELECT b FROM v4 ORDER BY 1;
+ }
+} {1 11 12 2}
+} ;# ifcapable compound
+} ;# ifcapable view
+
+#### Version 3 works differently here:
+#do_test sort-7.9 {
+# execsql {
+# SELECT b FROM t4 UNION SELECT b FROM v4 ORDER BY 1 COLLATE numeric;
+# }
+#} {1 2 11 12}
+#do_test sort-7.10 {
+# execsql {
+# SELECT b FROM t4 UNION SELECT b FROM v4 ORDER BY 1 COLLATE integer;
+# }
+#} {1 2 11 12}
+#do_test sort-7.11 {
+# execsql {
+# SELECT b FROM t4 UNION SELECT b FROM v4 ORDER BY 1 COLLATE text;
+# }
+#} {1 11 12 2}
+#do_test sort-7.12 {
+# execsql {
+# SELECT b FROM t4 UNION SELECT b FROM v4 ORDER BY 1 COLLATE blob;
+# }
+#} {1 11 12 2}
+#do_test sort-7.13 {
+# execsql {
+# SELECT b FROM t4 UNION SELECT b FROM v4 ORDER BY 1 COLLATE clob;
+# }
+#} {1 11 12 2}
+#do_test sort-7.14 {
+# execsql {
+# SELECT b FROM t4 UNION SELECT b FROM v4 ORDER BY 1 COLLATE varchar;
+# }
+#} {1 11 12 2}
+
+# Ticket #297
+#
+do_test sort-8.1 {
+ execsql {
+ CREATE TABLE t5(a real, b text);
+ INSERT INTO t5 VALUES(100,'A1');
+ INSERT INTO t5 VALUES(100.0,'A2');
+ SELECT * FROM t5 ORDER BY a, b;
+ }
+} {100.0 A1 100.0 A2}
+
+
+ifcapable {bloblit} {
+# BLOBs should sort after TEXT
+#
+do_test sort-9.1 {
+ execsql {
+ CREATE TABLE t6(x, y);
+ INSERT INTO t6 VALUES(1,1);
+ INSERT INTO t6 VALUES(2,'1');
+ INSERT INTO t6 VALUES(3,x'31');
+ INSERT INTO t6 VALUES(4,NULL);
+ SELECT x FROM t6 ORDER BY y;
+ }
+} {4 1 2 3}
+do_test sort-9.2 {
+ execsql {
+ SELECT x FROM t6 ORDER BY y DESC;
+ }
+} {3 2 1 4}
+do_test sort-9.3 {
+ execsql {
+ SELECT x FROM t6 WHERE y<1
+ }
+} {}
+do_test sort-9.4 {
+ execsql {
+ SELECT x FROM t6 WHERE y<'1'
+ }
+} {1}
+do_test sort-9.5 {
+ execsql {
+ SELECT x FROM t6 WHERE y<x'31'
+ }
+} {1 2}
+do_test sort-9.6 {
+ execsql {
+ SELECT x FROM t6 WHERE y>1
+ }
+} {2 3}
+do_test sort-9.7 {
+ execsql {
+ SELECT x FROM t6 WHERE y>'1'
+ }
+} {3}
+} ;# endif bloblit
+
+# Ticket #1092 - ORDER BY on rowid fields.
+do_test sort-10.1 {
+ execsql {
+ CREATE TABLE t7(c INTEGER PRIMARY KEY);
+ INSERT INTO t7 VALUES(1);
+ INSERT INTO t7 VALUES(2);
+ INSERT INTO t7 VALUES(3);
+ INSERT INTO t7 VALUES(4);
+ }
+} {}
+do_test sort-10.2 {
+ execsql {
+ SELECT c FROM t7 WHERE c<=3 ORDER BY c DESC;
+ }
+} {3 2 1}
+do_test sort-10.3 {
+ execsql {
+ SELECT c FROM t7 WHERE c<3 ORDER BY c DESC;
+ }
+} {2 1}
+
+# ticket #1358. Just because one table in a join gives a unique
+# result does not mean they all do. We cannot disable sorting unless
+# all tables in the join give unique results.
+#
+do_test sort-11.1 {
+ execsql {
+ create table t8(a unique, b, c);
+ insert into t8 values(1,2,3);
+ insert into t8 values(2,3,4);
+ create table t9(x,y);
+ insert into t9 values(2,4);
+ insert into t9 values(2,3);
+ select y from t8, t9 where a=1 order by a, y;
+ }
+} {3 4}
+
+# Trouble reported on the mailing list. Check for overly aggressive
+# (which is to say, incorrect) optimization of order-by with a rowid
+# in a join.
+#
+do_test sort-12.1 {
+ execsql {
+ create table a (id integer primary key);
+ create table b (id integer primary key, aId integer, text);
+ insert into a values (1);
+ insert into b values (2, 1, 'xxx');
+ insert into b values (1, 1, 'zzz');
+ insert into b values (3, 1, 'yyy');
+ select a.id, b.id, b.text from a join b on (a.id = b.aId)
+ order by a.id, b.text;
+ }
+} {1 2 xxx 1 3 yyy 1 1 zzz}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/subquery.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/subquery.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,424 @@
+# 2005 January 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is testing correlated subqueries
+#
+# $Id: subquery.test,v 1.14 2006/01/17 09:35:02 danielk1977 Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !subquery {
+ finish_test
+ return
+}
+
+do_test subquery-1.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(3,4);
+ INSERT INTO t1 VALUES(5,6);
+ INSERT INTO t1 VALUES(7,8);
+ CREATE TABLE t2(x,y);
+ INSERT INTO t2 VALUES(1,1);
+ INSERT INTO t2 VALUES(3,9);
+ INSERT INTO t2 VALUES(5,25);
+ INSERT INTO t2 VALUES(7,49);
+ COMMIT;
+ }
+ execsql {
+ SELECT a, (SELECT y FROM t2 WHERE x=a) FROM t1 WHERE b<8
+ }
+} {1 1 3 9 5 25}
+do_test subquery-1.2 {
+ execsql {
+ UPDATE t1 SET b=b+(SELECT y FROM t2 WHERE x=a);
+ SELECT * FROM t1;
+ }
+} {1 3 3 13 5 31 7 57}
+
+do_test subquery-1.3 {
+ execsql {
+ SELECT b FROM t1 WHERE EXISTS(SELECT * FROM t2 WHERE y=a)
+ }
+} {3}
+do_test subquery-1.4 {
+ execsql {
+ SELECT b FROM t1 WHERE NOT EXISTS(SELECT * FROM t2 WHERE y=a)
+ }
+} {13 31 57}
+
+# Simple tests to make sure correlated subqueries in WHERE clauses
+# are used by the query optimizer correctly.
+do_test subquery-1.5 {
+ execsql {
+ SELECT a, x FROM t1, t2 WHERE t1.a = (SELECT x);
+ }
+} {1 1 3 3 5 5 7 7}
+do_test subquery-1.6 {
+ execsql {
+ CREATE INDEX i1 ON t1(a);
+ SELECT a, x FROM t1, t2 WHERE t1.a = (SELECT x);
+ }
+} {1 1 3 3 5 5 7 7}
+do_test subquery-1.7 {
+ execsql {
+ SELECT a, x FROM t2, t1 WHERE t1.a = (SELECT x);
+ }
+} {1 1 3 3 5 5 7 7}
+
+# Try an aggregate in both the subquery and the parent query.
+do_test subquery-1.8 {
+ execsql {
+ SELECT count(*) FROM t1 WHERE a > (SELECT count(*) FROM t2);
+ }
+} {2}
+
+# Test a correlated subquery disables the "only open the index" optimization.
+do_test subquery-1.9.1 {
+ execsql {
+ SELECT (y*2)>b FROM t1, t2 WHERE a=x;
+ }
+} {0 1 1 1}
+do_test subquery-1.9.2 {
+ execsql {
+ SELECT a FROM t1 WHERE (SELECT (y*2)>b FROM t2 WHERE a=x);
+ }
+} {3 5 7}
+
+# Test that the flattening optimization works with subquery expressions.
+do_test subquery-1.10.1 {
+ execsql {
+ SELECT (SELECT a), b FROM t1;
+ }
+} {1 3 3 13 5 31 7 57}
+do_test subquery-1.10.2 {
+ execsql {
+ SELECT * FROM (SELECT (SELECT a), b FROM t1);
+ }
+} {1 3 3 13 5 31 7 57}
+do_test subquery-1.10.3 {
+ execsql {
+ SELECT * FROM (SELECT (SELECT sum(a) FROM t1));
+ }
+} {16}
+do_test subquery-1.10.4 {
+ execsql {
+ CREATE TABLE t5 (val int, period text PRIMARY KEY);
+ INSERT INTO t5 VALUES(5, '2001-3');
+ INSERT INTO t5 VALUES(10, '2001-4');
+ INSERT INTO t5 VALUES(15, '2002-1');
+ INSERT INTO t5 VALUES(5, '2002-2');
+ INSERT INTO t5 VALUES(10, '2002-3');
+ INSERT INTO t5 VALUES(15, '2002-4');
+ INSERT INTO t5 VALUES(10, '2003-1');
+ INSERT INTO t5 VALUES(5, '2003-2');
+ INSERT INTO t5 VALUES(25, '2003-3');
+ INSERT INTO t5 VALUES(5, '2003-4');
+
+ SELECT "a.period", vsum
+ FROM (SELECT
+ a.period,
+ (select sum(val) from t5 where period between a.period and '2002-4') vsum
+ FROM t5 a where a.period between '2002-1' and '2002-4')
+ WHERE vsum < 45 ;
+ }
+} {2002-2 30 2002-3 25 2002-4 15}
+do_test subquery-1.10.5 {
+ execsql {
+ SELECT "a.period", vsum from
+ (select a.period,
+ (select sum(val) from t5 where period between a.period and '2002-4') vsum
+ FROM t5 a where a.period between '2002-1' and '2002-4')
+ WHERE vsum < 45 ;
+ }
+} {2002-2 30 2002-3 25 2002-4 15}
+do_test subquery-1.10.6 {
+ execsql {
+ DROP TABLE t5;
+ }
+} {}
+
+
+
+#------------------------------------------------------------------
+# The following test cases - subquery-2.* - are not logically
+# organized. They're here largely because they were failing during
+# one stage of development of sub-queries.
+#
+do_test subquery-2.1 {
+ execsql {
+ SELECT (SELECT 10);
+ }
+} {10}
+do_test subquery-2.2.1 {
+ execsql {
+ CREATE TABLE t3(a PRIMARY KEY, b);
+ INSERT INTO t3 VALUES(1, 2);
+ INSERT INTO t3 VALUES(3, 1);
+ }
+} {}
+do_test subquery-2.2.2 {
+ execsql {
+ SELECT * FROM t3 WHERE a IN (SELECT b FROM t3);
+ }
+} {1 2}
+do_test subquery-2.2.3 {
+ execsql {
+ DROP TABLE t3;
+ }
+} {}
+do_test subquery-2.3.1 {
+ execsql {
+ CREATE TABLE t3(a TEXT);
+ INSERT INTO t3 VALUES('10');
+ }
+} {}
+do_test subquery-2.3.2 {
+ execsql {
+ SELECT a IN (10.0, 20) FROM t3;
+ }
+} {0}
+do_test subquery-2.3.3 {
+ execsql {
+ DROP TABLE t3;
+ }
+} {}
+do_test subquery-2.4.1 {
+ execsql {
+ CREATE TABLE t3(a TEXT);
+ INSERT INTO t3 VALUES('XX');
+ }
+} {}
+do_test subquery-2.4.2 {
+ execsql {
+ SELECT count(*) FROM t3 WHERE a IN (SELECT 'XX')
+ }
+} {1}
+do_test subquery-2.4.3 {
+ execsql {
+ DROP TABLE t3;
+ }
+} {}
+do_test subquery-2.5.1 {
+ execsql {
+ CREATE TABLE t3(a INTEGER);
+ INSERT INTO t3 VALUES(10);
+
+ CREATE TABLE t4(x TEXT);
+ INSERT INTO t4 VALUES('10.0');
+ }
+} {}
+do_test subquery-2.5.2 {
+  # In the expr "x IN (SELECT a FROM t3)" the LHS of the IN operator
+  # has text affinity and the RHS has integer affinity. The rule is
+  # that we try to convert both sides to an integer before doing the
+  # comparison. Hence, the integer value 10 in t3 will compare equal
+  # to the string value '10.0' in t4 because the t4 value will be
+  # converted into an integer.
+ execsql {
+ SELECT * FROM t4 WHERE x IN (SELECT a FROM t3);
+ }
+} {10.0}
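+# As an extra illustration (hypothetical test name): compared as raw text
+# the two values above are not equal, so the match really does depend on
+# the conversion to integer.
+do_test subquery-2.5.2-sketch {
+  execsql {
+    SELECT '10.0' = '10', CAST('10.0' AS INTEGER) = 10;
+  }
+} {0 1}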
+do_test subquery-2.5.3.1 {
+ # The t4i index cannot be used to resolve the "x IN (...)" constraint
+ # because the constraint has integer affinity but t4i has text affinity.
+ execsql {
+ CREATE INDEX t4i ON t4(x);
+ SELECT * FROM t4 WHERE x IN (SELECT a FROM t3);
+ }
+} {10.0}
+do_test subquery-2.5.3.2 {
+ # Verify that the t4i index was not used in the previous query
+ set ::sqlite_query_plan
+} {t4 {}}
+do_test subquery-2.5.4 {
+ execsql {
+ DROP TABLE t3;
+ DROP TABLE t4;
+ }
+} {}
+
+#------------------------------------------------------------------
+# The following test cases - subquery-3.* - test tickets that
+# were raised during development of correlated subqueries.
+#
+
+# Ticket 1083
+ifcapable view {
+ do_test subquery-3.1 {
+ catchsql { DROP TABLE t1; }
+ catchsql { DROP TABLE t2; }
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ CREATE VIEW v1 AS SELECT b FROM t1 WHERE a>0;
+ CREATE TABLE t2(p,q);
+ INSERT INTO t2 VALUES(2,9);
+ SELECT * FROM v1 WHERE EXISTS(SELECT * FROM t2 WHERE p=v1.b);
+ }
+ } {2}
+} else {
+ catchsql { DROP TABLE t1; }
+ catchsql { DROP TABLE t2; }
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ CREATE TABLE t2(p,q);
+ INSERT INTO t2 VALUES(2,9);
+ }
+}
+
+# Ticket 1084
+do_test subquery-3.2 {
+ catchsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ }
+ execsql {
+ SELECT (SELECT t1.a) FROM t1;
+ }
+} {1}
+
+# Test Cases subquery-3.3.* test correlated subqueries where the
+# parent query is an aggregate query. Ticket #1105 is an example
+# of such a query.
+#
+do_test subquery-3.3.1 {
+ execsql {
+ SELECT a, (SELECT b) FROM t1 GROUP BY a;
+ }
+} {1 2}
+do_test subquery-3.3.2 {
+ catchsql {DROP TABLE t2}
+ execsql {
+ CREATE TABLE t2(c, d);
+ INSERT INTO t2 VALUES(1, 'one');
+ INSERT INTO t2 VALUES(2, 'two');
+ SELECT a, (SELECT d FROM t2 WHERE a=c) FROM t1 GROUP BY a;
+ }
+} {1 one}
+do_test subquery-3.3.3 {
+ execsql {
+ INSERT INTO t1 VALUES(2, 4);
+ SELECT max(a), (SELECT d FROM t2 WHERE a=c) FROM t1;
+ }
+} {2 two}
+do_test subquery-3.3.4 {
+ execsql {
+ SELECT a, (SELECT (SELECT d FROM t2 WHERE a=c)) FROM t1 GROUP BY a;
+ }
+} {1 one 2 two}
+do_test subquery-3.3.5 {
+ execsql {
+ SELECT a, (SELECT count(*) FROM t2 WHERE a=c) FROM t1;
+ }
+} {1 1 2 1}
+
+#------------------------------------------------------------------
+# These tests - subquery-4.* - use the TCL statement cache to try
+# to expose bugs related to re-using statements that have been
+# passed to sqlite3_reset().
+#
+# One problem was that VDBE memory cells were not being initialised
+# to NULL on the second and subsequent executions.
+#
+do_test subquery-4.1.1 {
+ execsql {
+ SELECT (SELECT a FROM t1);
+ }
+} {1}
+do_test subquery-4.2 {
+ execsql {
+ DELETE FROM t1;
+ SELECT (SELECT a FROM t1);
+ }
+} {{}}
+do_test subquery-4.2.1 {
+ execsql {
+ CREATE TABLE t3(a PRIMARY KEY);
+ INSERT INTO t3 VALUES(10);
+ }
+ execsql {INSERT INTO t3 VALUES((SELECT max(a) FROM t3)+1)}
+} {}
+do_test subquery-4.2.2 {
+ execsql {INSERT INTO t3 VALUES((SELECT max(a) FROM t3)+1)}
+} {}
+
+#------------------------------------------------------------------
+# The subquery-5.* tests make sure string literals in double-quotes
+# are handled efficiently. Double-quote literals are first checked
+# to see if they match any column names. If there is not column name
+# match then those literals are used a string constants. When a
+# double-quoted string appears, we want to make sure that the search
+# for a matching column name did not cause an otherwise static subquery
+# to become a dynamic (correlated) subquery.
+#
+do_test subquery-5.1 {
+ proc callcntproc {n} {
+ incr ::callcnt
+ return $n
+ }
+ set callcnt 0
+ db function callcnt callcntproc
+ execsql {
+ CREATE TABLE t4(x,y);
+ INSERT INTO t4 VALUES('one',1);
+ INSERT INTO t4 VALUES('two',2);
+ INSERT INTO t4 VALUES('three',3);
+ INSERT INTO t4 VALUES('four',4);
+ CREATE TABLE t5(a,b);
+ INSERT INTO t5 VALUES(1,11);
+ INSERT INTO t5 VALUES(2,22);
+ INSERT INTO t5 VALUES(3,33);
+ INSERT INTO t5 VALUES(4,44);
+ SELECT b FROM t5 WHERE a IN
+ (SELECT callcnt(y)+0 FROM t4 WHERE x="two")
+ }
+} {22}
+do_test subquery-5.2 {
+ # This is the key test. The subquery should have only run once. If
+  # the double-quoted identifier "two" were causing the subquery to be
+ # processed as a correlated subquery, then it would have run 4 times.
+ set callcnt
+} {1}
+
+
+# Ticket #1380. Make sure correlated subqueries on an IN clause work
+# correctly when the left-hand side of the IN operator is constant.
+#
+do_test subquery-6.1 {
+ set callcnt 0
+ execsql {
+ SELECT x FROM t4 WHERE 1 IN (SELECT callcnt(count(*)) FROM t5 WHERE a=y)
+ }
+} {one two three four}
+do_test subquery-6.2 {
+ set callcnt
+} {4}
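+# The subquery above is correlated (it compares t5.a against the outer
+# column t4.y), so callcnt shows it ran once for each of the four t4 rows.
+# The next query's subquery has no reference to the outer query, so it
+# should run only once.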
+do_test subquery-6.3 {
+ set callcnt 0
+ execsql {
+ SELECT x FROM t4 WHERE 1 IN (SELECT callcnt(count(*)) FROM t5 WHERE a=1)
+ }
+} {one two three four}
+do_test subquery-6.4 {
+ set callcnt
+} {1}
+
+
+
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/subselect.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/subselect.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,178 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing SELECT statements that are part of
+# expressions.
+#
+# $Id: subselect.test,v 1.13 2005/09/08 10:37:01 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Omit this whole file if the library is built without subquery support.
+ifcapable !subquery {
+ finish_test
+ return
+}
+
+# Basic sanity checking. Try a simple subselect.
+#
+do_test subselect-1.1 {
+ execsql {
+ CREATE TABLE t1(a int, b int);
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(3,4);
+ INSERT INTO t1 VALUES(5,6);
+ }
+ execsql {SELECT * FROM t1 WHERE a = (SELECT count(*) FROM t1)}
+} {3 4}
+
+# Try a select with more than one result column.
+#
+do_test subselect-1.2 {
+ set v [catch {execsql {SELECT * FROM t1 WHERE a = (SELECT * FROM t1)}} msg]
+ lappend v $msg
+} {1 {only a single result allowed for a SELECT that is part of an expression}}
+
+# A subselect without an aggregate.
+#
+do_test subselect-1.3a {
+ execsql {SELECT b from t1 where a = (SELECT a FROM t1 WHERE b=2)}
+} {2}
+do_test subselect-1.3b {
+ execsql {SELECT b from t1 where a = (SELECT a FROM t1 WHERE b=4)}
+} {4}
+do_test subselect-1.3c {
+ execsql {SELECT b from t1 where a = (SELECT a FROM t1 WHERE b=6)}
+} {6}
+do_test subselect-1.3d {
+ execsql {SELECT b from t1 where a = (SELECT a FROM t1 WHERE b=8)}
+} {}
+
+# What if the subselect doesn't return any value? We should get
+# NULL as the result. Check it out.
+#
+do_test subselect-1.4 {
+ execsql {SELECT b from t1 where a = coalesce((SELECT a FROM t1 WHERE b=5),1)}
+} {2}
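+# (Illustrative sketch with a hypothetical test name: without the
+# coalesce() the scalar subquery yields NULL, so the WHERE clause above
+# would match nothing.)
+do_test subselect-1.4b {
+  execsql {SELECT (SELECT a FROM t1 WHERE b=5)}
+} {{}}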
+
+# Try multiple subselects within a single expression.
+#
+do_test subselect-1.5 {
+ execsql {
+ CREATE TABLE t2(x int, y int);
+ INSERT INTO t2 VALUES(1,2);
+ INSERT INTO t2 VALUES(2,4);
+ INSERT INTO t2 VALUES(3,8);
+ INSERT INTO t2 VALUES(4,16);
+ }
+ execsql {
+ SELECT y from t2
+ WHERE x = (SELECT sum(b) FROM t1 where a notnull) - (SELECT sum(a) FROM t1)
+ }
+} {8}
+
+# Try something useful. Delete every entry from t2 where the
+# x value is less than half of the maximum.
+#
+do_test subselect-1.6 {
+ execsql {DELETE FROM t2 WHERE x < 0.5*(SELECT max(x) FROM t2)}
+ execsql {SELECT x FROM t2 ORDER BY x}
+} {2 3 4}
+
+# Make sure sorting works for SELECTs that are used as scalar expressions.
+#
+do_test subselect-2.1 {
+ execsql {
+ SELECT (SELECT a FROM t1 ORDER BY a), (SELECT a FROM t1 ORDER BY a DESC)
+ }
+} {1 5}
+do_test subselect-2.2 {
+ execsql {
+ SELECT 1 IN (SELECT a FROM t1 ORDER BY a);
+ }
+} {1}
+do_test subselect-2.3 {
+ execsql {
+ SELECT 2 IN (SELECT a FROM t1 ORDER BY a DESC);
+ }
+} {0}
+
+# Verify that the ORDER BY clause is honored in a subquery.
+#
+ifcapable compound {
+do_test subselect-3.1 {
+ execsql {
+ CREATE TABLE t3(x int);
+ INSERT INTO t3 SELECT a FROM t1 UNION ALL SELECT b FROM t1;
+ SELECT * FROM t3 ORDER BY x;
+ }
+} {1 2 3 4 5 6}
+} ;# ifcapable compound
+ifcapable !compound {
+do_test subselect-3.1 {
+ execsql {
+ CREATE TABLE t3(x int);
+ INSERT INTO t3 SELECT a FROM t1;
+ INSERT INTO t3 SELECT b FROM t1;
+ SELECT * FROM t3 ORDER BY x;
+ }
+} {1 2 3 4 5 6}
+} ;# ifcapable !compound
+
+do_test subselect-3.2 {
+ execsql {
+ SELECT sum(x) FROM (SELECT x FROM t3 ORDER BY x LIMIT 2);
+ }
+} {3}
+do_test subselect-3.3 {
+ execsql {
+ SELECT sum(x) FROM (SELECT x FROM t3 ORDER BY x DESC LIMIT 2);
+ }
+} {11}
+do_test subselect-3.4 {
+ execsql {
+ SELECT (SELECT x FROM t3 ORDER BY x);
+ }
+} {1}
+do_test subselect-3.5 {
+ execsql {
+ SELECT (SELECT x FROM t3 ORDER BY x DESC);
+ }
+} {6}
+do_test subselect-3.6 {
+ execsql {
+ SELECT (SELECT x FROM t3 ORDER BY x LIMIT 1);
+ }
+} {1}
+do_test subselect-3.7 {
+ execsql {
+ SELECT (SELECT x FROM t3 ORDER BY x DESC LIMIT 1);
+ }
+} {6}
+do_test subselect-3.8 {
+ execsql {
+ SELECT (SELECT x FROM t3 ORDER BY x LIMIT 1 OFFSET 2);
+ }
+} {3}
+do_test subselect-3.9 {
+ execsql {
+ SELECT (SELECT x FROM t3 ORDER BY x DESC LIMIT 1 OFFSET 2);
+ }
+} {4}
+do_test subselect-3.10 {
+ execsql {
+ SELECT x FROM t3 WHERE x IN
+ (SELECT x FROM t3 ORDER BY x DESC LIMIT 1 OFFSET 2);
+ }
+} {4}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/sync.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/sync.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,97 @@
+# 2005 August 28
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that fsync is disabled when
+# pragma synchronous=off even for multi-database commits.
+#
+# $Id: sync.test,v 1.5 2006/02/11 01:25:51 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+#
+# These tests are only applicable on unix when pager pragmas are
+# enabled.
+#
+if {$::tcl_platform(platform)!="unix"} {
+ finish_test
+ return
+}
+ifcapable !pager_pragmas {
+ finish_test
+ return
+}
+
+do_test sync-1.1 {
+ set sqlite_sync_count 0
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ execsql {
+ PRAGMA fullfsync=OFF;
+ CREATE TABLE t1(a,b);
+ ATTACH DATABASE 'test2.db' AS db2;
+ CREATE TABLE db2.t2(x,y);
+ }
+ ifcapable !dirsync {
+ incr sqlite_sync_count 2
+ }
+ set sqlite_sync_count
+} 8
+ifcapable pager_pragmas {
+ do_test sync-1.2 {
+ set sqlite_sync_count 0
+ execsql {
+ PRAGMA main.synchronous=on;
+ PRAGMA db2.synchronous=on;
+ BEGIN;
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t2 VALUES(3,4);
+ COMMIT;
+ }
+ ifcapable !dirsync {
+ incr sqlite_sync_count 3
+ }
+ set sqlite_sync_count
+ } 8
+}
+do_test sync-1.3 {
+ set sqlite_sync_count 0
+ execsql {
+ PRAGMA main.synchronous=full;
+ PRAGMA db2.synchronous=full;
+ BEGIN;
+ INSERT INTO t1 VALUES(3,4);
+ INSERT INTO t2 VALUES(5,6);
+ COMMIT;
+ }
+ ifcapable !dirsync {
+ incr sqlite_sync_count 3
+ }
+ set sqlite_sync_count
+} 10
+ifcapable pager_pragmas {
+ do_test sync-1.4 {
+ set sqlite_sync_count 0
+ execsql {
+ PRAGMA main.synchronous=off;
+ PRAGMA db2.synchronous=off;
+ BEGIN;
+ INSERT INTO t1 VALUES(5,6);
+ INSERT INTO t2 VALUES(7,8);
+ COMMIT;
+ }
+ set sqlite_sync_count
+ } 0
+}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/table.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/table.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,676 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the CREATE TABLE statement.
+#
+# $Id: table.test,v 1.46 2006/09/01 15:49:06 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Create a basic table and verify it is added to sqlite_master
+#
+do_test table-1.1 {
+ execsql {
+ CREATE TABLE test1 (
+ one varchar(10),
+ two text
+ )
+ }
+ execsql {
+ SELECT sql FROM sqlite_master WHERE type!='meta'
+ }
+} {{CREATE TABLE test1 (
+ one varchar(10),
+ two text
+ )}}
+
+
+# Verify the other fields of the sqlite_master table.
+#
+do_test table-1.3 {
+ execsql {SELECT name, tbl_name, type FROM sqlite_master WHERE type!='meta'}
+} {test1 test1 table}
+
+# Close and reopen the database. Verify that everything is
+# still the same.
+#
+do_test table-1.4 {
+ db close
+ sqlite3 db test.db
+ execsql {SELECT name, tbl_name, type from sqlite_master WHERE type!='meta'}
+} {test1 test1 table}
+
+# Drop the table and make sure it disappears.
+#
+do_test table-1.5 {
+ execsql {DROP TABLE test1}
+ execsql {SELECT * FROM sqlite_master WHERE type!='meta'}
+} {}
+
+# Close and reopen the database. Verify that the table is
+# still gone.
+#
+do_test table-1.6 {
+ db close
+ sqlite3 db test.db
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta'}
+} {}
+
+# Repeat the above steps, but this time quote the table name.
+#
+do_test table-1.10 {
+ execsql {CREATE TABLE "create" (f1 int)}
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta'}
+} {create}
+do_test table-1.11 {
+ execsql {DROP TABLE "create"}
+ execsql {SELECT name FROM "sqlite_master" WHERE type!='meta'}
+} {}
+do_test table-1.12 {
+ execsql {CREATE TABLE test1("f1 ho" int)}
+ execsql {SELECT name as "X" FROM sqlite_master WHERE type!='meta'}
+} {test1}
+do_test table-1.13 {
+ execsql {DROP TABLE "TEST1"}
+ execsql {SELECT name FROM "sqlite_master" WHERE type!='meta'}
+} {}
+
+
+
+# Verify that we cannot make two tables with the same name
+#
+do_test table-2.1 {
+ execsql {CREATE TABLE TEST2(one text)}
+ catchsql {CREATE TABLE test2(two text default 'hi')}
+} {1 {table test2 already exists}}
+do_test table-2.1.1 {
+ catchsql {CREATE TABLE "test2" (two)}
+} {1 {table "test2" already exists}}
+do_test table-2.1b {
+ set v [catch {execsql {CREATE TABLE sqlite_master(two text)}} msg]
+ lappend v $msg
+} {1 {object name reserved for internal use: sqlite_master}}
+do_test table-2.1c {
+ db close
+ sqlite3 db test.db
+ set v [catch {execsql {CREATE TABLE sqlite_master(two text)}} msg]
+ lappend v $msg
+} {1 {object name reserved for internal use: sqlite_master}}
+do_test table-2.1d {
+ catchsql {CREATE TABLE IF NOT EXISTS test2(x,y)}
+} {0 {}}
+do_test table-2.1e {
+ catchsql {CREATE TABLE IF NOT EXISTS test2(x UNIQUE, y TEXT PRIMARY KEY)}
+} {0 {}}
+do_test table-2.1f {
+ execsql {DROP TABLE test2; SELECT name FROM sqlite_master WHERE type!='meta'}
+} {}
+
+# Verify that we cannot make a table with the same name as an index
+#
+do_test table-2.2a {
+ execsql {CREATE TABLE test2(one text); CREATE INDEX test3 ON test2(one)}
+ set v [catch {execsql {CREATE TABLE test3(two text)}} msg]
+ lappend v $msg
+} {1 {there is already an index named test3}}
+do_test table-2.2b {
+ db close
+ sqlite3 db test.db
+ set v [catch {execsql {CREATE TABLE test3(two text)}} msg]
+ lappend v $msg
+} {1 {there is already an index named test3}}
+do_test table-2.2c {
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {test2 test3}
+do_test table-2.2d {
+ execsql {DROP INDEX test3}
+ set v [catch {execsql {CREATE TABLE test3(two text)}} msg]
+ lappend v $msg
+} {0 {}}
+do_test table-2.2e {
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {test2 test3}
+do_test table-2.2f {
+ execsql {DROP TABLE test2; DROP TABLE test3}
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {}
+
+# Create a table with many field names
+#
+set big_table \
+{CREATE TABLE big(
+ f1 varchar(20),
+ f2 char(10),
+ f3 varchar(30) primary key,
+ f4 text,
+ f5 text,
+ f6 text,
+ f7 text,
+ f8 text,
+ f9 text,
+ f10 text,
+ f11 text,
+ f12 text,
+ f13 text,
+ f14 text,
+ f15 text,
+ f16 text,
+ f17 text,
+ f18 text,
+ f19 text,
+ f20 text
+)}
+do_test table-3.1 {
+ execsql $big_table
+ execsql {SELECT sql FROM sqlite_master WHERE type=='table'}
+} \{$big_table\}
+do_test table-3.2 {
+ set v [catch {execsql {CREATE TABLE BIG(xyz foo)}} msg]
+ lappend v $msg
+} {1 {table BIG already exists}}
+do_test table-3.3 {
+ set v [catch {execsql {CREATE TABLE biG(xyz foo)}} msg]
+ lappend v $msg
+} {1 {table biG already exists}}
+do_test table-3.4 {
+ set v [catch {execsql {CREATE TABLE bIg(xyz foo)}} msg]
+ lappend v $msg
+} {1 {table bIg already exists}}
+do_test table-3.5 {
+ db close
+ sqlite3 db test.db
+ set v [catch {execsql {CREATE TABLE Big(xyz foo)}} msg]
+ lappend v $msg
+} {1 {table Big already exists}}
+do_test table-3.6 {
+ execsql {DROP TABLE big}
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta'}
+} {}
+
+# Try creating large numbers of tables
+#
+set r {}
+for {set i 1} {$i<=100} {incr i} {
+ lappend r [format test%03d $i]
+}
+do_test table-4.1 {
+ for {set i 1} {$i<=100} {incr i} {
+ set sql "CREATE TABLE [format test%03d $i] ("
+ for {set k 1} {$k<$i} {incr k} {
+ append sql "field$k text,"
+ }
+ append sql "last_field text)"
+ execsql $sql
+ }
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} $r
+do_test table-4.1b {
+ db close
+ sqlite3 db test.db
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} $r
+
+# Drop the even-numbered tables
+#
+set r {}
+for {set i 1} {$i<=100} {incr i 2} {
+ lappend r [format test%03d $i]
+}
+do_test table-4.2 {
+ for {set i 2} {$i<=100} {incr i 2} {
+ # if {$i==38} {execsql {pragma vdbe_trace=on}}
+ set sql "DROP TABLE [format TEST%03d $i]"
+ execsql $sql
+ }
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} $r
+#exit
+
+# Drop the odd-numbered tables
+#
+do_test table-4.3 {
+ for {set i 1} {$i<=100} {incr i 2} {
+ set sql "DROP TABLE [format test%03d $i]"
+ execsql $sql
+ }
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta' ORDER BY name}
+} {}
+
+# Try to drop a table that does not exist
+#
+do_test table-5.1.1 {
+ catchsql {DROP TABLE test009}
+} {1 {no such table: test009}}
+do_test table-5.1.2 {
+ catchsql {DROP TABLE IF EXISTS test009}
+} {0 {}}
+
+# Try to drop sqlite_master
+#
+do_test table-5.2 {
+ catchsql {DROP TABLE IF EXISTS sqlite_master}
+} {1 {table sqlite_master may not be dropped}}
+
+# Make sure an EXPLAIN does not really create a new table
+#
+do_test table-5.3 {
+ ifcapable {explain} {
+ execsql {EXPLAIN CREATE TABLE test1(f1 int)}
+ }
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta'}
+} {}
+
+# Make sure an EXPLAIN does not really drop an existing table
+#
+do_test table-5.4 {
+ execsql {CREATE TABLE test1(f1 int)}
+ ifcapable {explain} {
+ execsql {EXPLAIN DROP TABLE test1}
+ }
+ execsql {SELECT name FROM sqlite_master WHERE type!='meta'}
+} {test1}
+
+# Create a table with a goofy name
+#
+#do_test table-6.1 {
+# execsql {CREATE TABLE 'Spaces In This Name!'(x int)}
+# execsql {INSERT INTO 'spaces in this name!' VALUES(1)}
+# set list [glob -nocomplain testdb/spaces*.tbl]
+#} {testdb/spaces+in+this+name+.tbl}
+
+# Try using keywords as table names or column names.
+#
+do_test table-7.1 {
+ set v [catch {execsql {
+ CREATE TABLE weird(
+ desc text,
+ asc text,
+ key int,
+ [14_vac] boolean,
+ fuzzy_dog_12 varchar(10),
+ begin blob,
+ end clob
+ )
+ }} msg]
+ lappend v $msg
+} {0 {}}
+do_test table-7.2 {
+ execsql {
+ INSERT INTO weird VALUES('a','b',9,0,'xyz','hi','y''all');
+ SELECT * FROM weird;
+ }
+} {a b 9 0 xyz hi y'all}
+do_test table-7.3 {
+ execsql2 {
+ SELECT * FROM weird;
+ }
+} {desc a asc b key 9 14_vac 0 fuzzy_dog_12 xyz begin hi end y'all}
+
+# Try out the CREATE TABLE AS syntax
+#
+do_test table-8.1 {
+ execsql2 {
+ CREATE TABLE t2 AS SELECT * FROM weird;
+ SELECT * FROM t2;
+ }
+} {desc a asc b key 9 14_vac 0 fuzzy_dog_12 xyz begin hi end y'all}
+do_test table-8.1.1 {
+ execsql {
+ SELECT sql FROM sqlite_master WHERE name='t2';
+ }
+} {{CREATE TABLE t2(
+ "desc" text,
+ "asc" text,
+ "key" int,
+ "14_vac" boolean,
+ fuzzy_dog_12 varchar(10),
+ "begin" blob,
+ "end" clob
+)}}
+do_test table-8.2 {
+ execsql {
+ CREATE TABLE "t3""xyz"(a,b,c);
+ INSERT INTO [t3"xyz] VALUES(1,2,3);
+ SELECT * FROM [t3"xyz];
+ }
+} {1 2 3}
+do_test table-8.3 {
+ execsql2 {
+ CREATE TABLE [t4"abc] AS SELECT count(*) as cnt, max(b+c) FROM [t3"xyz];
+ SELECT * FROM [t4"abc];
+ }
+} {cnt 1 max(b+c) 5}
+
+# Update for v3: The declaration type of anything except a column is now a
+# NULL pointer, so the created table has no column types. (Changed result
+# from {{CREATE TABLE 't4"abc'(cnt NUMERIC,"max(b+c)" NUMERIC)}}).
+do_test table-8.3.1 {
+ execsql {
+ SELECT sql FROM sqlite_master WHERE name='t4"abc'
+ }
+} {{CREATE TABLE "t4""abc"(cnt,"max(b+c)")}}
+
+ifcapable tempdb {
+ do_test table-8.4 {
+ execsql2 {
+ CREATE TEMPORARY TABLE t5 AS SELECT count(*) AS [y'all] FROM [t3"xyz];
+ SELECT * FROM t5;
+ }
+ } {y'all 1}
+}
+
+do_test table-8.5 {
+ db close
+ sqlite3 db test.db
+ execsql2 {
+ SELECT * FROM [t4"abc];
+ }
+} {cnt 1 max(b+c) 5}
+do_test table-8.6 {
+ execsql2 {
+ SELECT * FROM t2;
+ }
+} {desc a asc b key 9 14_vac 0 fuzzy_dog_12 xyz begin hi end y'all}
+do_test table-8.7 {
+ catchsql {
+ SELECT * FROM t5;
+ }
+} {1 {no such table: t5}}
+do_test table-8.8 {
+ catchsql {
+ CREATE TABLE t5 AS SELECT * FROM no_such_table;
+ }
+} {1 {no such table: no_such_table}}
+
+# Make sure we cannot have duplicate column names within a table.
+#
+do_test table-9.1 {
+ catchsql {
+ CREATE TABLE t6(a,b,a);
+ }
+} {1 {duplicate column name: a}}
+do_test table-9.2 {
+ catchsql {
+ CREATE TABLE t6(a varchar(100), b blob, a integer);
+ }
+} {1 {duplicate column name: a}}
+
+# Check the foreign key syntax.
+#
+ifcapable {foreignkey} {
+do_test table-10.1 {
+ catchsql {
+ CREATE TABLE t6(a REFERENCES t4(a) NOT NULL);
+ INSERT INTO t6 VALUES(NULL);
+ }
+} {1 {t6.a may not be NULL}}
+do_test table-10.2 {
+ catchsql {
+ DROP TABLE t6;
+ CREATE TABLE t6(a REFERENCES t4(a) MATCH PARTIAL);
+ }
+} {0 {}}
+do_test table-10.3 {
+ catchsql {
+ DROP TABLE t6;
+ CREATE TABLE t6(a REFERENCES t4 MATCH FULL ON DELETE SET NULL NOT NULL);
+ }
+} {0 {}}
+do_test table-10.4 {
+ catchsql {
+ DROP TABLE t6;
+ CREATE TABLE t6(a REFERENCES t4 MATCH FULL ON UPDATE SET DEFAULT DEFAULT 1);
+ }
+} {0 {}}
+do_test table-10.5 {
+ catchsql {
+ DROP TABLE t6;
+ CREATE TABLE t6(a NOT NULL NOT DEFERRABLE INITIALLY IMMEDIATE);
+ }
+} {0 {}}
+do_test table-10.6 {
+ catchsql {
+ DROP TABLE t6;
+ CREATE TABLE t6(a NOT NULL DEFERRABLE INITIALLY DEFERRED);
+ }
+} {0 {}}
+do_test table-10.7 {
+ catchsql {
+ DROP TABLE t6;
+ CREATE TABLE t6(a,
+ FOREIGN KEY (a) REFERENCES t4(b) DEFERRABLE INITIALLY DEFERRED
+ );
+ }
+} {0 {}}
+do_test table-10.8 {
+ catchsql {
+ DROP TABLE t6;
+ CREATE TABLE t6(a,b,c,
+ FOREIGN KEY (b,c) REFERENCES t4(x,y) MATCH PARTIAL
+ ON UPDATE SET NULL ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED
+ );
+ }
+} {0 {}}
+do_test table-10.9 {
+ catchsql {
+ DROP TABLE t6;
+ CREATE TABLE t6(a,b,c,
+ FOREIGN KEY (b,c) REFERENCES t4(x)
+ );
+ }
+} {1 {number of columns in foreign key does not match the number of columns in the referenced table}}
+do_test table-10.10 {
+ catchsql {DROP TABLE t6}
+ catchsql {
+ CREATE TABLE t6(a,b,c,
+ FOREIGN KEY (b,c) REFERENCES t4(x,y,z)
+ );
+ }
+} {1 {number of columns in foreign key does not match the number of columns in the referenced table}}
+do_test table-10.11 {
+ catchsql {DROP TABLE t6}
+ catchsql {
+ CREATE TABLE t6(a,b, c REFERENCES t4(x,y));
+ }
+} {1 {foreign key on c should reference only one column of table t4}}
+do_test table-10.12 {
+ catchsql {DROP TABLE t6}
+ catchsql {
+ CREATE TABLE t6(a,b,c,
+ FOREIGN KEY (b,x) REFERENCES t4(x,y)
+ );
+ }
+} {1 {unknown column "x" in foreign key definition}}
+do_test table-10.13 {
+ catchsql {DROP TABLE t6}
+ catchsql {
+ CREATE TABLE t6(a,b,c,
+ FOREIGN KEY (x,b) REFERENCES t4(x,y)
+ );
+ }
+} {1 {unknown column "x" in foreign key definition}}
+} ;# endif foreignkey
+
+# Test for the "typeof" function. More tests for the
+# typeof() function are found in bind.test and types.test.
+#
+do_test table-11.1 {
+ execsql {
+ CREATE TABLE t7(
+ a integer primary key,
+ b number(5,10),
+ c character varying (8),
+ d VARCHAR(9),
+ e clob,
+ f BLOB,
+ g Text,
+ h
+ );
+ INSERT INTO t7(a) VALUES(1);
+ SELECT typeof(a), typeof(b), typeof(c), typeof(d),
+ typeof(e), typeof(f), typeof(g), typeof(h)
+ FROM t7 LIMIT 1;
+ }
+} {integer null null null null null null null}
+do_test table-11.2 {
+ execsql {
+ SELECT typeof(a+b), typeof(a||b), typeof(c+d), typeof(c||d)
+ FROM t7 LIMIT 1;
+ }
+} {null null null null}
+
+# Test that when creating a table using CREATE TABLE AS, column types are
+# assigned correctly for (SELECT ...) and 'x AS y' expressions.
+do_test table-12.1 {
+ ifcapable subquery {
+ execsql {
+ CREATE TABLE t8 AS SELECT b, h, a as i, (SELECT f FROM t7) as j FROM t7;
+ }
+ } else {
+ execsql {
+ CREATE TABLE t8 AS SELECT b, h, a as i, f as j FROM t7;
+ }
+ }
+} {}
+do_test table-12.2 {
+ execsql {
+ SELECT sql FROM sqlite_master WHERE tbl_name = 't8'
+ }
+} {{CREATE TABLE t8(b number(5,10),h,i integer,j BLOB)}}
+
+#--------------------------------------------------------------------
+# Test cases table-13.*
+#
+# Test the ability to have default values of CURRENT_TIME, CURRENT_DATE
+# and CURRENT_TIMESTAMP.
+#
+do_test table-13.1 {
+ execsql {
+ CREATE TABLE tablet8(
+ a integer primary key,
+ tm text DEFAULT CURRENT_TIME,
+ dt text DEFAULT CURRENT_DATE,
+ dttm text DEFAULT CURRENT_TIMESTAMP
+ );
+ SELECT * FROM tablet8;
+ }
+} {}
+set i 0
+foreach {date time seconds} {
+ 1976-07-04 12:00:00 205329600
+ 1994-04-16 14:00:00 766504800
+ 2000-01-01 00:00:00 946684800
+ 2003-12-31 12:34:56 1072874096
+} {
+ incr i
+ set sqlite_current_time $seconds
+ do_test table-13.2.$i {
+ execsql "
+ INSERT INTO tablet8(a) VALUES($i);
+ SELECT tm, dt, dttm FROM tablet8 WHERE a=$i;
+ "
+ } [list $time $date [list $date $time]]
+}
+set sqlite_current_time 0
+
+#--------------------------------------------------------------------
+# Test cases table-14.*
+#
+# Test that a table cannot be created or dropped while other virtual
+# machines are active. This is required because otherwise when in
+# auto-vacuum mode the btree-layer may need to move the root-pages of
+# a table for which there is an open cursor.
+#
+
+# db eval {
+# pragma vdbe_trace = 0;
+# }
+# Try to create a table from within a callback:
+unset -nocomplain result
+do_test table-14.1 {
+ set rc [
+ catch {
+ db eval {SELECT * FROM tablet8 LIMIT 1} {} {
+ db eval {CREATE TABLE t9(a, b, c)}
+ }
+ } msg
+ ]
+ set result [list $rc $msg]
+} {1 {database table is locked}}
+
+do_test table-14.2 {
+ execsql {
+ CREATE TABLE t9(a, b, c)
+ }
+} {}
+
+# Try to drop a table from within a callback:
+do_test table-14.3 {
+ set rc [
+ catch {
+ db eval {SELECT * FROM tablet8 LIMIT 1} {} {
+ db eval {DROP TABLE t9;}
+ }
+ } msg
+ ]
+ set result [list $rc $msg]
+} {1 {database table is locked}}
+
+# Now attach a database and ensure that a table can be created in the
+# attached database whilst in a callback from a query on the main database.
+do_test table-14.4 {
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ execsql {
+ attach 'test2.db' as aux;
+ }
+ db eval {SELECT * FROM tablet8 LIMIT 1} {} {
+ db eval {CREATE TABLE aux.t1(a, b, c)}
+ }
+} {}
+
+# On the other hand, it should be impossible to drop a table when any VMs
+# are active. This is because VerifyCookie instructions may have already
+# been executed, and btree root-pages may not move after this (which
+# dropping a table might do).
+do_test table-14.5 {
+ set rc [
+ catch {
+ db eval {SELECT * FROM tablet8 LIMIT 1} {} {
+ db eval {DROP TABLE aux.t1;}
+ }
+ } msg
+ ]
+ set result [list $rc $msg]
+} {1 {database table is locked}}
+
+# Create and drop 2000 tables. This is to check that the balance_shallow()
+# routine works correctly on the sqlite_master table. At one point it
+# contained a bug that would prevent the right-child pointer of the
+# child page from being copied to the root page.
+#
+do_test table-15.1 {
+ execsql {BEGIN}
+ for {set i 0} {$i<2000} {incr i} {
+ execsql "CREATE TABLE tbl$i (a, b, c)"
+ }
+ execsql {COMMIT}
+} {}
+do_test table-15.2 {
+ execsql {BEGIN}
+ for {set i 0} {$i<2000} {incr i} {
+ execsql "DROP TABLE tbl$i"
+ }
+ execsql {COMMIT}
+} {}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tableapi.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tableapi.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,217 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the sqlite_exec_printf() and
+# sqlite_get_table_printf() APIs.
+#
+# $Id: tableapi.test,v 1.11 2006/06/27 20:39:05 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test tableapi-1.0 {
+ set ::dbx [sqlite3_open test.db]
+ catch {sqlite_exec_printf $::dbx {DROP TABLE xyz} {}}
+ sqlite3_exec_printf $::dbx {CREATE TABLE %s(a int, b text)} xyz
+} {0 {}}
+do_test tableapi-1.1 {
+ sqlite3_exec_printf $::dbx {
+ INSERT INTO xyz VALUES(1,'%q')
+ } {Hi Y'all}
+} {0 {}}
+do_test tableapi-1.2 {
+ sqlite3_exec_printf $::dbx {SELECT * FROM xyz} {}
+} {0 {a b 1 {Hi Y'all}}}
+
+do_test tableapi-2.1 {
+ sqlite3_get_table_printf $::dbx {
+ BEGIN TRANSACTION;
+ SELECT * FROM xyz WHERE b='%q'
+ } {Hi Y'all}
+} {0 1 2 a b 1 {Hi Y'all}}
+do_test tableapi-2.2 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz
+ } {}
+} {0 1 2 a b 1 {Hi Y'all}}
+do_test tableapi-2.3 {
+ for {set i 2} {$i<=50} {incr i} {
+ sqlite3_get_table_printf $::dbx \
+ "INSERT INTO xyz VALUES($i,'(%s)')" $i
+ }
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz ORDER BY a
+ } {}
+} {0 50 2 a b 1 {Hi Y'all} 2 (2) 3 (3) 4 (4) 5 (5) 6 (6) 7 (7) 8 (8) 9 (9) 10 (10) 11 (11) 12 (12) 13 (13) 14 (14) 15 (15) 16 (16) 17 (17) 18 (18) 19 (19) 20 (20) 21 (21) 22 (22) 23 (23) 24 (24) 25 (25) 26 (26) 27 (27) 28 (28) 29 (29) 30 (30) 31 (31) 32 (32) 33 (33) 34 (34) 35 (35) 36 (36) 37 (37) 38 (38) 39 (39) 40 (40) 41 (41) 42 (42) 43 (43) 44 (44) 45 (45) 46 (46) 47 (47) 48 (48) 49 (49) 50 (50)}
+do_test tableapi-2.3.1 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a>49 ORDER BY a
+ } {}
+} {0 1 2 a b 50 (50)}
+do_test tableapi-2.3.2 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a>47 ORDER BY a
+ } {}
+} {0 3 2 a b 48 (48) 49 (49) 50 (50)}
+do_test tableapi-2.4 {
+ set manyquote ''''''''
+ append manyquote $manyquote
+ append manyquote $manyquote
+ append manyquote $manyquote
+ append manyquote $manyquote
+ append manyquote $manyquote
+ append manyquote $manyquote
+ set ::big_str "$manyquote Hello $manyquote"
+ sqlite3_get_table_printf $::dbx {
+ INSERT INTO xyz VALUES(51,'%q')
+ } $::big_str
+} {0 0 0}
+do_test tableapi-2.5 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a>49 ORDER BY a;
+ } {}
+} "0 2 2 a b 50 (50) 51 \173$::big_str\175"
+do_test tableapi-2.6 {
+ sqlite3_get_table_printf $::dbx {
+ INSERT INTO xyz VALUES(52,NULL)
+ } {}
+ ifcapable subquery {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a IN (42,50,52) ORDER BY a DESC
+ } {}
+ } else {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a=42 OR a=50 OR a=52 ORDER BY a DESC
+ } {}
+ }
+} {0 3 2 a b 52 NULL 50 (50) 42 (42)}
+do_test tableapi-2.7 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a>1000
+ } {}
+} {0 0 0}
+
+# Repeat all tests with the empty_result_callbacks pragma turned on
+#
+do_test tableapi-3.1 {
+ sqlite3_get_table_printf $::dbx {
+ ROLLBACK;
+ PRAGMA empty_result_callbacks = ON;
+ SELECT * FROM xyz WHERE b='%q'
+ } {Hi Y'all}
+} {0 1 2 a b 1 {Hi Y'all}}
+do_test tableapi-3.2 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz
+ } {}
+} {0 1 2 a b 1 {Hi Y'all}}
+do_test tableapi-3.3 {
+ for {set i 2} {$i<=50} {incr i} {
+ sqlite3_get_table_printf $::dbx \
+ "INSERT INTO xyz VALUES($i,'(%s)')" $i
+ }
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz ORDER BY a
+ } {}
+} {0 50 2 a b 1 {Hi Y'all} 2 (2) 3 (3) 4 (4) 5 (5) 6 (6) 7 (7) 8 (8) 9 (9) 10 (10) 11 (11) 12 (12) 13 (13) 14 (14) 15 (15) 16 (16) 17 (17) 18 (18) 19 (19) 20 (20) 21 (21) 22 (22) 23 (23) 24 (24) 25 (25) 26 (26) 27 (27) 28 (28) 29 (29) 30 (30) 31 (31) 32 (32) 33 (33) 34 (34) 35 (35) 36 (36) 37 (37) 38 (38) 39 (39) 40 (40) 41 (41) 42 (42) 43 (43) 44 (44) 45 (45) 46 (46) 47 (47) 48 (48) 49 (49) 50 (50)}
+do_test tableapi-3.3.1 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a>49 ORDER BY a
+ } {}
+} {0 1 2 a b 50 (50)}
+do_test tableapi-3.3.2 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a>47 ORDER BY a
+ } {}
+} {0 3 2 a b 48 (48) 49 (49) 50 (50)}
+do_test tableapi-3.4 {
+ sqlite3_get_table_printf $::dbx {
+ INSERT INTO xyz VALUES(51,'%q')
+ } $::big_str
+} {0 0 0}
+do_test tableapi-3.5 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a>49 ORDER BY a;
+ } {}
+} "0 2 2 a b 50 (50) 51 \173$::big_str\175"
+do_test tableapi-3.6 {
+ sqlite3_get_table_printf $::dbx {
+ INSERT INTO xyz VALUES(52,NULL)
+ } {}
+ ifcapable subquery {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a IN (42,50,52) ORDER BY a DESC
+ } {}
+ } else {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a=42 OR a=50 OR a=52 ORDER BY a DESC
+ } {}
+ }
+} {0 3 2 a b 52 NULL 50 (50) 42 (42)}
+do_test tableapi-3.7 {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz WHERE a>1000
+ } {}
+} {0 0 2 a b}
+
+do_test tableapi-4.1 {
+ set rc [catch {
+ sqlite3_get_table_printf $::dbx {
+ SELECT * FROM xyz; SELECT * FROM sqlite_master
+ } {}
+ } msg]
+ concat $rc $msg
+} {0 1 {sqlite3_get_table() called with two or more incompatible queries}}
+
+# A report on the mailing list says that the sqlite_get_table() api fails
+# on queries involving more than 40 columns. The following code attempts
+# to test that complaint
+#
+do_test tableapi-5.1 {
+ set sql "CREATE TABLE t2("
+ set sep ""
+ for {set i 1} {$i<=100} {incr i} {
+ append sql ${sep}x$i
+ set sep ,
+ }
+ append sql )
+ sqlite3_get_table_printf $::dbx $sql {}
+ set sql "INSERT INTO t2 VALUES("
+ set sep ""
+ for {set i 1} {$i<=100} {incr i} {
+ append sql ${sep}$i
+ set sep ,
+ }
+ append sql )
+ sqlite3_get_table_printf $::dbx $sql {}
+ sqlite3_get_table_printf $::dbx {SELECT * FROM t2} {}
+} {0 1 100 x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14 x15 x16 x17 x18 x19 x20 x21 x22 x23 x24 x25 x26 x27 x28 x29 x30 x31 x32 x33 x34 x35 x36 x37 x38 x39 x40 x41 x42 x43 x44 x45 x46 x47 x48 x49 x50 x51 x52 x53 x54 x55 x56 x57 x58 x59 x60 x61 x62 x63 x64 x65 x66 x67 x68 x69 x70 x71 x72 x73 x74 x75 x76 x77 x78 x79 x80 x81 x82 x83 x84 x85 x86 x87 x88 x89 x90 x91 x92 x93 x94 x95 x96 x97 x98 x99 x100 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100}
+do_test tableapi-5.2 {
+ set sql "INSERT INTO t2 VALUES("
+ set sep ""
+ for {set i 1} {$i<=100} {incr i} {
+ append sql ${sep}[expr {$i+1000}]
+ set sep ,
+ }
+ append sql )
+ sqlite3_get_table_printf $::dbx $sql {}
+ sqlite3_get_table_printf $::dbx {SELECT * FROM t2} {}
+} {0 2 100 x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 x14 x15 x16 x17 x18 x19 x20 x21 x22 x23 x24 x25 x26 x27 x28 x29 x30 x31 x32 x33 x34 x35 x36 x37 x38 x39 x40 x41 x42 x43 x44 x45 x46 x47 x48 x49 x50 x51 x52 x53 x54 x55 x56 x57 x58 x59 x60 x61 x62 x63 x64 x65 x66 x67 x68 x69 x70 x71 x72 x73 x74 x75 x76 x77 x78 x79 x80 x81 x82 x83 x84 x85 x86 x87 x88 x89 x90 x91 x92 x93 x94 x95 x96 x97 x98 x99 x100 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100}
+
+do_test tableapi-6.1 {
+ sqlite3_get_table_printf $::dbx {PRAGMA user_version} {}
+} {0 1 1 {} 0}
+
+do_test tableapi-99.0 {
+ sqlite3_close $::dbx
+} {SQLITE_OK}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tclsqlite.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tclsqlite.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,457 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for the TCL interface to the
+# SQLite library.
+#
+# Actually, all tests are based on the TCL interface, so the main
+# interface is pretty well tested. This file contains some additional
+# tests for fringe issues that the main test suite does not cover.
+#
+# $Id: tclsqlite.test,v 1.56 2006/09/01 15:49:06 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Check the error messages generated by tclsqlite
+#
+if {[sqlite3 -has-codec]} {
+ set r "sqlite_orig HANDLE FILENAME ?-key CODEC-KEY?"
+} else {
+ set r "sqlite3 HANDLE FILENAME ?MODE?"
+}
+do_test tcl-1.1 {
+ set v [catch {sqlite3 bogus} msg]
+ lappend v $msg
+} [list 1 "wrong # args: should be \"$r\""]
+do_test tcl-1.2 {
+ set v [catch {db bogus} msg]
+ lappend v $msg
+} {1 {bad option "bogus": must be authorizer, busy, cache, changes, close, collate, collation_needed, commit_hook, complete, copy, enable_load_extension, errorcode, eval, exists, function, interrupt, last_insert_rowid, nullvalue, onecolumn, profile, progress, rekey, rollback_hook, timeout, total_changes, trace, transaction, update_hook, or version}}
+do_test tcl-1.3 {
+ execsql {CREATE TABLE t1(a int, b int)}
+ execsql {INSERT INTO t1 VALUES(10,20)}
+ set v [catch {
+ db eval {SELECT * FROM t1} data {
+ error "The error message"
+ }
+ } msg]
+ lappend v $msg
+} {1 {The error message}}
+do_test tcl-1.4 {
+ set v [catch {
+ db eval {SELECT * FROM t2} data {
+ error "The error message"
+ }
+ } msg]
+ lappend v $msg
+} {1 {no such table: t2}}
+do_test tcl-1.5 {
+ set v [catch {
+ db eval {SELECT * FROM t1} data {
+ break
+ }
+ } msg]
+ lappend v $msg
+} {0 {}}
+do_test tcl-1.6 {
+ set v [catch {
+ db eval {SELECT * FROM t1} data {
+ expr x*
+ }
+ } msg]
+ regsub {:.*$} $msg {} msg
+ lappend v $msg
+} {1 {syntax error in expression "x*"}}
+do_test tcl-1.7 {
+ set v [catch {db} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db SUBCOMMAND ..."}}
+if {[catch {db auth {}}]==0} {
+ do_test tcl-1.8 {
+ set v [catch {db authorizer 1 2 3} msg]
+ lappend v $msg
+ } {1 {wrong # args: should be "db authorizer ?CALLBACK?"}}
+}
+do_test tcl-1.9 {
+ set v [catch {db busy 1 2 3} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db busy CALLBACK"}}
+do_test tcl-1.10 {
+ set v [catch {db progress 1} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db progress N CALLBACK"}}
+do_test tcl-1.11 {
+ set v [catch {db changes xyz} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db changes "}}
+do_test tcl-1.12 {
+ set v [catch {db commit_hook a b c} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db commit_hook ?CALLBACK?"}}
+ifcapable {complete} {
+ do_test tcl-1.13 {
+ set v [catch {db complete} msg]
+ lappend v $msg
+ } {1 {wrong # args: should be "db complete SQL"}}
+}
+do_test tcl-1.14 {
+ set v [catch {db eval} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db eval SQL ?ARRAY-NAME? ?SCRIPT?"}}
+do_test tcl-1.15 {
+ set v [catch {db function} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db function NAME SCRIPT"}}
+do_test tcl-1.16 {
+ set v [catch {db last_insert_rowid xyz} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db last_insert_rowid "}}
+do_test tcl-1.17 {
+ set v [catch {db rekey} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db rekey KEY"}}
+do_test tcl-1.18 {
+ set v [catch {db timeout} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db timeout MILLISECONDS"}}
+do_test tcl-1.19 {
+ set v [catch {db collate} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db collate NAME SCRIPT"}}
+do_test tcl-1.20 {
+ set v [catch {db collation_needed} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db collation_needed SCRIPT"}}
+do_test tcl-1.21 {
+ set v [catch {db total_changes xyz} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db total_changes "}}
+do_test tcl-1.22 {
+ set v [catch {db copy} msg]
+ lappend v $msg
+} {1 {wrong # args: should be "db copy CONFLICT-ALGORITHM TABLE FILENAME ?SEPARATOR? ?NULLINDICATOR?"}}
+
+
+if {[sqlite3 -tcl-uses-utf]} {
+ catch {unset ::result}
+ do_test tcl-2.1 {
+ execsql "CREATE TABLE t\u0123x(a int, b\u1235 float)"
+ } {}
+ ifcapable schema_pragmas {
+ do_test tcl-2.2 {
+ execsql "PRAGMA table_info(t\u0123x)"
+ } "0 a int 0 {} 0 1 b\u1235 float 0 {} 0"
+ }
+ do_test tcl-2.3 {
+ execsql "INSERT INTO t\u0123x VALUES(1,2.3)"
+ db eval "SELECT * FROM t\u0123x" result break
+ set result(*)
+ } "a b\u1235"
+}
+
+
+# Test the onecolumn method
+#
+do_test tcl-3.1 {
+ execsql {
+ INSERT INTO t1 SELECT a*2, b*2 FROM t1;
+ INSERT INTO t1 SELECT a*2+1, b*2+1 FROM t1;
+ INSERT INTO t1 SELECT a*2+3, b*2+3 FROM t1;
+ }
+ set rc [catch {db onecolumn {SELECT * FROM t1 ORDER BY a}} msg]
+ lappend rc $msg
+} {0 10}
+do_test tcl-3.2 {
+ db onecolumn {SELECT * FROM t1 WHERE a<0}
+} {}
+do_test tcl-3.3 {
+ set rc [catch {db onecolumn} errmsg]
+ lappend rc $errmsg
+} {1 {wrong # args: should be "db onecolumn SQL"}}
+do_test tcl-3.4 {
+ set rc [catch {db onecolumn {SELECT bogus}} errmsg]
+ lappend rc $errmsg
+} {1 {no such column: bogus}}
+ifcapable {tclvar} {
+ do_test tcl-3.5 {
+ set b 50
+ set rc [catch {db one {SELECT * FROM t1 WHERE b>$b}} msg]
+ lappend rc $msg
+ } {0 41}
+ do_test tcl-3.6 {
+ set b 500
+ set rc [catch {db one {SELECT * FROM t1 WHERE b>$b}} msg]
+ lappend rc $msg
+ } {0 {}}
+ do_test tcl-3.7 {
+ set b 500
+ set rc [catch {db one {
+ INSERT INTO t1 VALUES(99,510);
+ SELECT * FROM t1 WHERE b>$b
+ }} msg]
+ lappend rc $msg
+ } {0 99}
+}
+ifcapable {!tclvar} {
+ execsql {INSERT INTO t1 VALUES(99,510)}
+}
+
+# Turn the busy handler on and off
+#
+do_test tcl-4.1 {
+ proc busy_callback {cnt} {
+ break
+ }
+ db busy busy_callback
+ db busy
+} {busy_callback}
+do_test tcl-4.2 {
+ db busy {}
+ db busy
+} {}
+
+ifcapable {tclvar} {
+ # Parsing of TCL variable names within SQL into bound parameters.
+ #
+ do_test tcl-5.1 {
+ execsql {CREATE TABLE t3(a,b,c)}
+ catch {unset x}
+ set x(1) 5
+ set x(2) 7
+ execsql {
+ INSERT INTO t3 VALUES($::x(1),$::x(2),$::x(3));
+ SELECT * FROM t3
+ }
+ } {5 7 {}}
+ do_test tcl-5.2 {
+ execsql {
+ SELECT typeof(a), typeof(b), typeof(c) FROM t3
+ }
+ } {text text null}
+ do_test tcl-5.3 {
+ catch {unset x}
+ set x [binary format h12 686900686f00]
+ execsql {
+ UPDATE t3 SET a=$::x;
+ }
+ db eval {
+ SELECT a FROM t3
+ } break
+ binary scan $a h12 adata
+ set adata
+ } {686900686f00}
+ do_test tcl-5.4 {
+ execsql {
+ SELECT typeof(a), typeof(b), typeof(c) FROM t3
+ }
+ } {blob text null}
+}
+
+# Operation of "break" and "continue" within row scripts
+#
+do_test tcl-6.1 {
+ db eval {SELECT * FROM t1} {
+ break
+ }
+ lappend a $b
+} {10 20}
+do_test tcl-6.2 {
+ set cnt 0
+ db eval {SELECT * FROM t1} {
+ if {$a>40} continue
+ incr cnt
+ }
+ set cnt
+} {4}
+do_test tcl-6.3 {
+ set cnt 0
+ db eval {SELECT * FROM t1} {
+ if {$a<40} continue
+ incr cnt
+ }
+ set cnt
+} {5}
+do_test tcl-6.4 {
+ proc return_test {x} {
+ db eval {SELECT * FROM t1} {
+ if {$a==$x} {return $b}
+ }
+ }
+ return_test 10
+} 20
+do_test tcl-6.5 {
+ return_test 20
+} 40
+do_test tcl-6.6 {
+ return_test 99
+} 510
+do_test tcl-6.7 {
+ return_test 0
+} {}
+
+do_test tcl-7.1 {
+ db version
+ expr 0
+} {0}
+
+# modify and reset the NULL representation
+#
+do_test tcl-8.1 {
+ db nullvalue NaN
+ execsql {INSERT INTO t1 VALUES(30,NULL)}
+ db eval {SELECT * FROM t1 WHERE b IS NULL}
+} {30 NaN}
+do_test tcl-8.2 {
+ db nullvalue NULL
+ db nullvalue
+} {NULL}
+do_test tcl-8.3 {
+ db nullvalue {}
+ db eval {SELECT * FROM t1 WHERE b IS NULL}
+} {30 {}}
+
+# Test the return type of user-defined functions
+#
+do_test tcl-9.1 {
+ db function ret_str {return "hi"}
+ execsql {SELECT typeof(ret_str())}
+} {text}
+do_test tcl-9.2 {
+ db function ret_dbl {return [expr {rand()*0.5}]}
+ execsql {SELECT typeof(ret_dbl())}
+} {real}
+do_test tcl-9.3 {
+ db function ret_int {return [expr {int(rand()*200)}]}
+ execsql {SELECT typeof(ret_int())}
+} {integer}
+
+# Recursive calls to the same user-defined function
+#
+ifcapable tclvar {
+ do_test tcl-9.10 {
+ proc userfunc_r1 {n} {
+ if {$n<=0} {return 0}
+ set nm1 [expr {$n-1}]
+ return [expr {[db eval {SELECT r1($nm1)}]+$n}]
+ }
+ db function r1 userfunc_r1
+ execsql {SELECT r1(10)}
+ } {55}
+ do_test tcl-9.11 {
+ execsql {SELECT r1(100)}
+ } {5050}
+}
+
+# Tests for the new transaction method
+#
+do_test tcl-10.1 {
+ db transaction {}
+} {}
+do_test tcl-10.2 {
+ db transaction deferred {}
+} {}
+do_test tcl-10.3 {
+ db transaction immediate {}
+} {}
+do_test tcl-10.4 {
+ db transaction exclusive {}
+} {}
+do_test tcl-10.5 {
+ set rc [catch {db transaction xyzzy {}} msg]
+ lappend rc $msg
+} {1 {bad transaction type "xyzzy": must be deferred, exclusive, or immediate}}
+do_test tcl-10.6 {
+ set rc [catch {db transaction {error test-error}} msg]
+ lappend rc $msg
+} {1 test-error}
+do_test tcl-10.7 {
+ db transaction {
+ db eval {CREATE TABLE t4(x)}
+ db transaction {
+ db eval {INSERT INTO t4 VALUES(1)}
+ }
+ }
+ db eval {SELECT * FROM t4}
+} 1
+do_test tcl-10.8 {
+ catch {
+ db transaction {
+ db eval {INSERT INTO t4 VALUES(2)}
+ db eval {INSERT INTO t4 VALUES(3)}
+ db eval {INSERT INTO t4 VALUES(4)}
+ error test-error
+ }
+ }
+ db eval {SELECT * FROM t4}
+} 1
+do_test tcl-10.9 {
+ db transaction {
+ db eval {INSERT INTO t4 VALUES(2)}
+ catch {
+ db transaction {
+ db eval {INSERT INTO t4 VALUES(3)}
+ db eval {INSERT INTO t4 VALUES(4)}
+ error test-error
+ }
+ }
+ }
+ db eval {SELECT * FROM t4}
+} {1 2 3 4}
+do_test tcl-10.10 {
+ for {set i 0} {$i<1} {incr i} {
+ db transaction {
+ db eval {INSERT INTO t4 VALUES(5)}
+ continue
+ }
+ }
+ db eval {SELECT * FROM t4}
+} {1 2 3 4 5}
+do_test tcl-10.11 {
+ for {set i 0} {$i<10} {incr i} {
+ db transaction {
+ db eval {INSERT INTO t4 VALUES(6)}
+ break
+ }
+ }
+ db eval {SELECT * FROM t4}
+} {1 2 3 4 5 6}
+do_test tcl-10.12 {
+ set rc [catch {
+ for {set i 0} {$i<10} {incr i} {
+ db transaction {
+ db eval {INSERT INTO t4 VALUES(7)}
+ return
+ }
+ }
+ }]
+} {2}
+do_test tcl-10.13 {
+ db eval {SELECT * FROM t4}
+} {1 2 3 4 5 6 7}
+
+do_test tcl-11.1 {
+ db exists {SELECT x,x*2,x+x FROM t4 WHERE x==4}
+} {1}
+do_test tcl-11.2 {
+ db exists {SELECT 0 FROM t4 WHERE x==4}
+} {1}
+do_test tcl-11.3 {
+ db exists {SELECT 1 FROM t4 WHERE x==8}
+} {0}
+
+do_test tcl-12.1 {
+ unset -nocomplain a b c version
+ set version [db version]
+ scan $version "%d.%d.%d" a b c
+ expr $a*1000000 + $b*1000 + $c
+} [sqlite3_libversion_number]
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/temptable.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/temptable.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,414 @@
+# 2001 October 7
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for temporary tables and indices.
+#
+# $Id: temptable.test,v 1.17 2006/01/24 00:15:16 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !tempdb {
+ finish_test
+ return
+}
+
+# Create an alternative connection to the database
+#
+do_test temptable-1.0 {
+ sqlite3 db2 ./test.db
+ set dummy {}
+} {}
+
+# Create a permanent table.
+#
+do_test temptable-1.1 {
+ execsql {CREATE TABLE t1(a,b,c);}
+ execsql {INSERT INTO t1 VALUES(1,2,3);}
+ execsql {SELECT * FROM t1}
+} {1 2 3}
+do_test temptable-1.2 {
+ catch {db2 eval {SELECT * FROM sqlite_master}}
+ db2 eval {SELECT * FROM t1}
+} {1 2 3}
+do_test temptable-1.3 {
+ execsql {SELECT name FROM sqlite_master}
+} {t1}
+do_test temptable-1.4 {
+ db2 eval {SELECT name FROM sqlite_master}
+} {t1}
+
+# Create a temporary table. Verify that only one of the two
+# processes can see it.
+#
+do_test temptable-1.5 {
+ db2 eval {
+ CREATE TEMP TABLE t2(x,y,z);
+ INSERT INTO t2 VALUES(4,5,6);
+ }
+ db2 eval {SELECT * FROM t2}
+} {4 5 6}
+do_test temptable-1.6 {
+ catch {execsql {SELECT * FROM sqlite_master}}
+ catchsql {SELECT * FROM t2}
+} {1 {no such table: t2}}
+do_test temptable-1.7 {
+ catchsql {INSERT INTO t2 VALUES(8,9,0);}
+} {1 {no such table: t2}}
+do_test temptable-1.8 {
+ db2 eval {INSERT INTO t2 VALUES(8,9,0);}
+ db2 eval {SELECT * FROM t2 ORDER BY x}
+} {4 5 6 8 9 0}
+do_test temptable-1.9 {
+ db2 eval {DELETE FROM t2 WHERE x==8}
+ db2 eval {SELECT * FROM t2 ORDER BY x}
+} {4 5 6}
+do_test temptable-1.10 {
+ db2 eval {DELETE FROM t2}
+ db2 eval {SELECT * FROM t2}
+} {}
+do_test temptable-1.11 {
+ db2 eval {
+ INSERT INTO t2 VALUES(7,6,5);
+ INSERT INTO t2 VALUES(4,3,2);
+ SELECT * FROM t2 ORDER BY x;
+ }
+} {4 3 2 7 6 5}
+do_test temptable-1.12 {
+ db2 eval {DROP TABLE t2;}
+ set r [catch {db2 eval {SELECT * FROM t2}} msg]
+ lappend r $msg
+} {1 {no such table: t2}}
+
+# Make sure temporary tables work with transactions
+#
+do_test temptable-2.1 {
+ execsql {
+ BEGIN TRANSACTION;
+ CREATE TEMPORARY TABLE t2(x,y);
+ INSERT INTO t2 VALUES(1,2);
+ SELECT * FROM t2;
+ }
+} {1 2}
+do_test temptable-2.2 {
+ execsql {ROLLBACK}
+ catchsql {SELECT * FROM t2}
+} {1 {no such table: t2}}
+do_test temptable-2.3 {
+ execsql {
+ BEGIN TRANSACTION;
+ CREATE TEMPORARY TABLE t2(x,y);
+ INSERT INTO t2 VALUES(1,2);
+ SELECT * FROM t2;
+ }
+} {1 2}
+do_test temptable-2.4 {
+ execsql {COMMIT}
+ catchsql {SELECT * FROM t2}
+} {0 {1 2}}
+do_test temptable-2.5 {
+ set r [catch {db2 eval {SELECT * FROM t2}} msg]
+ lappend r $msg
+} {1 {no such table: t2}}
+
+# Make sure indices on temporary tables are also temporary.
+#
+do_test temptable-3.1 {
+ execsql {
+ CREATE INDEX i2 ON t2(x);
+ SELECT name FROM sqlite_master WHERE type='index';
+ }
+} {}
+do_test temptable-3.2 {
+ execsql {
+ SELECT y FROM t2 WHERE x=1;
+ }
+} {2}
+do_test temptable-3.3 {
+ execsql {
+ DROP INDEX i2;
+ SELECT y FROM t2 WHERE x=1;
+ }
+} {2}
+do_test temptable-3.4 {
+ execsql {
+ CREATE INDEX i2 ON t2(x);
+ DROP TABLE t2;
+ }
+ catchsql {DROP INDEX i2}
+} {1 {no such index: i2}}
+
+# Check for correct name collision processing. A name collision can
+# occur when process A creates a temporary table T then process B
+# creates a permanent table also named T. The temp table in process A
+# hides the existence of the permanent table.
+#
+do_test temptable-4.1 {
+ execsql {
+ CREATE TEMP TABLE t2(x,y);
+ INSERT INTO t2 VALUES(10,20);
+ SELECT * FROM t2;
+ } db2
+} {10 20}
+do_test temptable-4.2 {
+ execsql {
+ CREATE TABLE t2(x,y,z);
+ INSERT INTO t2 VALUES(9,8,7);
+ SELECT * FROM t2;
+ }
+} {9 8 7}
+do_test temptable-4.3 {
+ catchsql {
+ SELECT * FROM t2;
+ } db2
+} {0 {10 20}}
+do_test temptable-4.4.1 {
+ catchsql {
+ SELECT * FROM temp.t2;
+ } db2
+} {0 {10 20}}
+do_test temptable-4.4.2 {
+ catchsql {
+ SELECT * FROM main.t2;
+ } db2
+} {1 {no such table: main.t2}}
+#do_test temptable-4.4.3 {
+# catchsql {
+# SELECT name FROM main.sqlite_master WHERE type='table';
+# } db2
+#} {1 {database schema has changed}}
+do_test temptable-4.4.4 {
+ catchsql {
+ SELECT name FROM main.sqlite_master WHERE type='table';
+ } db2
+} {0 {t1 t2}}
+do_test temptable-4.4.5 {
+ catchsql {
+ SELECT * FROM main.t2;
+ } db2
+} {0 {9 8 7}}
+do_test temptable-4.4.6 {
+ # TEMP takes precedence over MAIN
+ catchsql {
+ SELECT * FROM t2;
+ } db2
+} {0 {10 20}}
+do_test temptable-4.5 {
+ catchsql {
+ DROP TABLE t2; -- should drop TEMP
+ SELECT * FROM t2; -- data should be from MAIN
+ } db2
+} {0 {9 8 7}}
+do_test temptable-4.6 {
+ db2 close
+ sqlite3 db2 ./test.db
+ catchsql {
+ SELECT * FROM t2;
+ } db2
+} {0 {9 8 7}}
+do_test temptable-4.7 {
+ catchsql {
+ DROP TABLE t2;
+ SELECT * FROM t2;
+ }
+} {1 {no such table: t2}}
+do_test temptable-4.8 {
+ db2 close
+ sqlite3 db2 ./test.db
+ execsql {
+ CREATE TEMP TABLE t2(x unique,y);
+ INSERT INTO t2 VALUES(1,2);
+ SELECT * FROM t2;
+ } db2
+} {1 2}
+do_test temptable-4.9 {
+ execsql {
+ CREATE TABLE t2(x unique, y);
+ INSERT INTO t2 VALUES(3,4);
+ SELECT * FROM t2;
+ }
+} {3 4}
+do_test temptable-4.10.1 {
+ catchsql {
+ SELECT * FROM t2;
+ } db2
+} {0 {1 2}}
+# Update: The schema is reloaded in test temptable-4.10.1, and tclsqlite.c
+# handles the schema change and retries the query automatically.
+# do_test temptable-4.10.2 {
+# catchsql {
+# SELECT name FROM sqlite_master WHERE type='table'
+# } db2
+# } {1 {database schema has changed}}
+do_test temptable-4.10.3 {
+ catchsql {
+ SELECT name FROM sqlite_master WHERE type='table'
+ } db2
+} {0 {t1 t2}}
+do_test temptable-4.11 {
+ execsql {
+ SELECT * FROM t2;
+ } db2
+} {1 2}
+do_test temptable-4.12 {
+ execsql {
+ SELECT * FROM t2;
+ }
+} {3 4}
+do_test temptable-4.13 {
+ catchsql {
+ DROP TABLE t2; -- drops TEMP.T2
+ SELECT * FROM t2; -- uses MAIN.T2
+ } db2
+} {0 {3 4}}
+do_test temptable-4.14 {
+ execsql {
+ SELECT * FROM t2;
+ }
+} {3 4}
+do_test temptable-4.15 {
+ db2 close
+ sqlite3 db2 ./test.db
+ execsql {
+ SELECT * FROM t2;
+ } db2
+} {3 4}
+
+# Now create a temporary table in db2 and a permanent index in db. The
+# temporary table in db2 should mask the name of the permanent index,
+# but the permanent index should still be accessible and should still
+# be updated when its corresponding table changes.
+#
+do_test temptable-5.1 {
+ execsql {
+ CREATE TEMP TABLE mask(a,b,c)
+ } db2
+ execsql {
+ CREATE INDEX mask ON t2(x);
+ SELECT * FROM t2;
+ }
+} {3 4}
+#do_test temptable-5.2 {
+# catchsql {
+# SELECT * FROM t2;
+# } db2
+#} {1 {database schema has changed}}
+do_test temptable-5.3 {
+ catchsql {
+ SELECT * FROM t2;
+ } db2
+} {0 {3 4}}
+do_test temptable-5.4 {
+ execsql {
+ SELECT y FROM t2 WHERE x=3
+ }
+} {4}
+do_test temptable-5.5 {
+ execsql {
+ SELECT y FROM t2 WHERE x=3
+ } db2
+} {4}
+do_test temptable-5.6 {
+ execsql {
+ INSERT INTO t2 VALUES(1,2);
+ SELECT y FROM t2 WHERE x=1;
+ } db2
+} {2}
+do_test temptable-5.7 {
+ execsql {
+ SELECT y FROM t2 WHERE x=3
+ } db2
+} {4}
+do_test temptable-5.8 {
+ execsql {
+ SELECT y FROM t2 WHERE x=1;
+ }
+} {2}
+do_test temptable-5.9 {
+ execsql {
+ SELECT y FROM t2 WHERE x=3
+ }
+} {4}
+
+db2 close
+
+# Test for correct operation of read-only databases
+#
+do_test temptable-6.1 {
+ execsql {
+ CREATE TABLE t8(x);
+ INSERT INTO t8 VALUES('xyzzy');
+ SELECT * FROM t8;
+ }
+} {xyzzy}
+do_test temptable-6.2 {
+ db close
+ catch {file attributes test.db -permissions 0444}
+ catch {file attributes test.db -readonly 1}
+ sqlite3 db test.db
+ if {[file writable test.db]} {
+ error "Unable to make the database file test.db readonly - rerun this test as an unprivileged user"
+ }
+ execsql {
+ SELECT * FROM t8;
+ }
+} {xyzzy}
+do_test temptable-6.3 {
+ if {[file writable test.db]} {
+ error "Unable to make the database file test.db readonly - rerun this test as an unprivileged user"
+ }
+ catchsql {
+ CREATE TABLE t9(x,y);
+ }
+} {1 {attempt to write a readonly database}}
+do_test temptable-6.4 {
+ catchsql {
+ CREATE TEMP TABLE t9(x,y);
+ }
+} {0 {}}
+do_test temptable-6.5 {
+ catchsql {
+ INSERT INTO t9 VALUES(1,2);
+ SELECT * FROM t9;
+ }
+} {0 {1 2}}
+do_test temptable-6.6 {
+ if {[file writable test.db]} {
+ error "Unable to make the database file test.db readonly - rerun this test as an unprivileged user"
+ }
+ catchsql {
+ INSERT INTO t8 VALUES('hello');
+ SELECT * FROM t8;
+ }
+} {1 {attempt to write a readonly database}}
+do_test temptable-6.7 {
+ catchsql {
+ SELECT * FROM t8,t9;
+ }
+} {0 {xyzzy 1 2}}
+do_test temptable-6.8 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM t8,t9;
+ }
+} {1 {no such table: t9}}
+
+file delete -force test2.db test2.db-journal
+do_test temptable-7.1 {
+ catchsql {
+ ATTACH 'test2.db' AS two;
+ CREATE TEMP TABLE two.abc(x,y);
+ }
+} {1 {temporary table name must be unqualified}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tester.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tester.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,529 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements some common TCL routines used for regression
+# testing the SQLite library
+#
+# $Id: tester.tcl,v 1.69 2006/10/04 11:55:50 drh Exp $
+
+# Make sure tclsqlite3 was compiled correctly. Abort now with an
+# error message if not.
+#
+if {[sqlite3 -tcl-uses-utf]} {
+ if {"\u1234"=="u1234"} {
+ puts stderr "***** BUILD PROBLEM *****"
+ puts stderr "$argv0 was linked against an older version"
+ puts stderr "of TCL that does not support Unicode, but uses a header"
+ puts stderr "file (\"tcl.h\") from a new TCL version that does support"
+ puts stderr "Unicode. This combination causes internal errors."
+ puts stderr "Recompile using a TCL library and header file that match"
+ puts stderr "and try again.\n**************************"
+ exit 1
+ }
+} else {
+ if {"\u1234"!="u1234"} {
+ puts stderr "***** BUILD PROBLEM *****"
+ puts stderr "$argv0 was linked against an newer version"
+ puts stderr "of TCL that supports Unicode, but uses a header file"
+ puts stderr "(\"tcl.h\") from a old TCL version that does not support"
+ puts stderr "Unicode. This combination causes internal errors."
+ puts stderr "Recompile using a TCL library and header file that match"
+ puts stderr "and try again.\n**************************"
+ exit 1
+ }
+}
+
+set tcl_precision 15
+set sqlite_pending_byte 0x0010000
+
+# Use the pager codec if it is available
+#
+if {[sqlite3 -has-codec] && [info command sqlite_orig]==""} {
+ rename sqlite3 sqlite_orig
+ proc sqlite3 {args} {
+ if {[llength $args]==2 && [string index [lindex $args 0] 0]!="-"} {
+ lappend args -key {xyzzy}
+ }
+ uplevel 1 sqlite_orig $args
+ }
+}
+
+
+# Create a test database
+#
+catch {db close}
+file delete -force test.db
+file delete -force test.db-journal
+sqlite3 db ./test.db
+set ::DB [sqlite3_connection_pointer db]
+if {[info exists ::SETUP_SQL]} {
+ db eval $::SETUP_SQL
+}
+
+# Abort early if this script has been run before.
+#
+if {[info exists nTest]} return
+
+# Set the test counters to zero
+#
+set nErr 0
+set nTest 0
+set skip_test 0
+set failList {}
+set maxErr 1000
+
+# Invoke the do_test procedure to run a single test
+#
+proc do_test {name cmd expected} {
+ global argv nErr nTest skip_test maxErr
+ set ::sqlite_malloc_id $name
+ if {$skip_test} {
+ set skip_test 0
+ return
+ }
+ if {[llength $argv]==0} {
+ set go 1
+ } else {
+ set go 0
+ foreach pattern $argv {
+ if {[string match $pattern $name]} {
+ set go 1
+ break
+ }
+ }
+ }
+ if {!$go} return
+ incr nTest
+ puts -nonewline $name...
+ flush stdout
+ if {[catch {uplevel #0 "$cmd;\n"} result]} {
+ puts "\nError: $result"
+ incr nErr
+ lappend ::failList $name
+ if {$nErr>$maxErr} {puts "*** Giving up..."; finalize_testing}
+ } elseif {[string compare $result $expected]} {
+ puts "\nExpected: \[$expected\]\n Got: \[$result\]"
+ incr nErr
+ lappend ::failList $name
+ if {$nErr>=$maxErr} {puts "*** Giving up..."; finalize_testing}
+ } else {
+ puts " Ok"
+ }
+}
+
+# The following procedure uses the special "sqlite_malloc_stat" command
+# (which is only available if SQLite is compiled with -DSQLITE_DEBUG=1)
+# to see how many malloc()s have not been free()ed. The number
+# of surplus malloc()s is stored in the global variable $::Leak.
+# If the value in $::Leak grows, it may mean there is a memory leak
+# in the library.
+#
+proc memleak_check {} {
+ if {[info command sqlite_malloc_stat]!=""} {
+ set r [sqlite_malloc_stat]
+ set ::Leak [expr {[lindex $r 0]-[lindex $r 1]}]
+ }
+}
+
+# Run this routine last
+#
+proc finish_test {} {
+ finalize_testing
+}
+proc finalize_testing {} {
+ global nTest nErr sqlite_open_file_count
+ if {$nErr==0} memleak_check
+
+ catch {db close}
+ catch {db2 close}
+ catch {db3 close}
+
+ catch {
+ pp_check_for_leaks
+ }
+ sqlite3 db {}
+ # sqlite3_clear_tsd_memdebug
+ db close
+ if {$::sqlite3_tsd_count} {
+ puts "Thread-specific data leak: $::sqlite3_tsd_count instances"
+ incr nErr
+ } else {
+ puts "Thread-specific data deallocated properly"
+ }
+ incr nTest
+ puts "$nErr errors out of $nTest tests"
+ puts "Failures on these tests: $::failList"
+ if {$nErr>0 && ![working_64bit_int]} {
+ puts "******************************************************************"
+ puts "N.B.: The version of TCL that you used to build this test harness"
+ puts "is defective in that it does not support 64-bit integers. Some or"
+ puts "all of the test failures above might be a result from this defect"
+ puts "in your TCL build."
+ puts "******************************************************************"
+ }
+ if {$sqlite_open_file_count} {
+ puts "$sqlite_open_file_count files were left open"
+ incr nErr
+ }
+ foreach f [glob -nocomplain test.db-*-journal] {
+ file delete -force $f
+ }
+ foreach f [glob -nocomplain test.db-mj*] {
+ file delete -force $f
+ }
+ exit [expr {$nErr>0}]
+}
+
+# A procedure to execute SQL
+#
+proc execsql {sql {db db}} {
+ # puts "SQL = $sql"
+ uplevel [list $db eval $sql]
+}
+
+# Execute SQL and catch exceptions.
+#
+proc catchsql {sql {db db}} {
+ # puts "SQL = $sql"
+ set r [catch {$db eval $sql} msg]
+ lappend r $msg
+ return $r
+}
+
+# Do a VDBE code dump of the given SQL
+#
+proc explain {sql {db db}} {
+ puts ""
+ puts "addr opcode p1 p2 p3 "
+ puts "---- ------------ ------ ------ ---------------"
+ $db eval "explain $sql" {} {
+ puts [format {%-4d %-12.12s %-6d %-6d %s} $addr $opcode $p1 $p2 $p3]
+ }
+}
+
+# Another procedure to execute SQL. This one includes the field
+# names in the returned list.
+#
+proc execsql2 {sql} {
+ set result {}
+ db eval $sql data {
+ foreach f $data(*) {
+ lappend result $f $data($f)
+ }
+ }
+ return $result
+}
+
+# Use the non-callback API to execute multiple SQL statements
+#
+proc stepsql {dbptr sql} {
+ set sql [string trim $sql]
+ set r 0
+ while {[string length $sql]>0} {
+ if {[catch {sqlite3_prepare $dbptr $sql -1 sqltail} vm]} {
+ return [list 1 $vm]
+ }
+ set sql [string trim $sqltail]
+# while {[sqlite_step $vm N VAL COL]=="SQLITE_ROW"} {
+# foreach v $VAL {lappend r $v}
+# }
+ while {[sqlite3_step $vm]=="SQLITE_ROW"} {
+ for {set i 0} {$i<[sqlite3_data_count $vm]} {incr i} {
+ lappend r [sqlite3_column_text $vm $i]
+ }
+ }
+ if {[catch {sqlite3_finalize $vm} errmsg]} {
+ return [list 1 $errmsg]
+ }
+ }
+ return $r
+}
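+
+# Illustrative usage sketch (the statements below are hypothetical):
+# stepsql takes a raw connection pointer such as $::DB, not the "db"
+# command, and returns 0 followed by all column values on success,
+# or a list of the form {1 <error message>} on failure.
+#
+#   stepsql $::DB {
+#     SELECT 1; SELECT 2, 3;
+#   }   ;# would return {0 1 2 3}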
+
+# Delete a file or directory
+#
+proc forcedelete {filename} {
+ if {[catch {file delete -force $filename}]} {
+ exec rm -rf $filename
+ }
+}
+
+# Do an integrity check of the entire database
+#
+proc integrity_check {name} {
+ ifcapable integrityck {
+ do_test $name {
+ execsql {PRAGMA integrity_check}
+ } {ok}
+ }
+}
+
+# Evaluate a boolean expression of capabilities. If true, execute the
+# code; if false, skip it (or run the optional else-code, if supplied).
+#
+proc ifcapable {expr code {else ""} {elsecode ""}} {
+ regsub -all {[a-z_0-9]+} $expr {$::sqlite_options(&)} e2
+ if ($e2) {
+ set c [catch {uplevel 1 $code} r]
+ } else {
+ set c [catch {uplevel 1 $elsecode} r]
+ }
+ return -code $c $r
+}
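+
+# Illustrative usage sketch: the optional third and fourth arguments
+# supply an else-branch (the capability names and branch bodies below
+# are placeholders).
+#
+#   ifcapable {subquery&&tclvar} {
+#     # run the subquery-based form of a test
+#   } else {
+#     # run an equivalent test that avoids subqueries
+#   }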
+
+# This proc execs a separate process that crashes midway through executing
+# the SQL script $sql on database test.db.
+#
+# The crash occurs during a sync() of file $crashfile. When the crash
+# occurs a random subset of all unsynced writes made by the process are
+# written into the files on disk. Argument $crashdelay indicates the
+# number of file syncs to wait before crashing.
+#
+# The return value is a list of two elements. The first element is a
+# boolean, indicating whether or not the process actually crashed or
+# reported some other error. The second element in the returned list is the
+# error message. This is "child process exited abnormally" if the crash
+# occurred.
+#
+proc crashsql {crashdelay crashfile sql} {
+ if {$::tcl_platform(platform)!="unix"} {
+ error "crashsql should only be used on unix"
+ }
+ set cfile [file join [pwd] $crashfile]
+
+ set f [open crash.tcl w]
+ puts $f "sqlite3_crashparams $crashdelay $cfile"
+ puts $f "sqlite3 db test.db"
+ puts $f "db eval {pragma cache_size = 10}"
+ puts $f "db eval {"
+ puts $f "$sql"
+ puts $f "}"
+ close $f
+
+ set r [catch {
+ exec [info nameofexec] crash.tcl >@stdout
+ } msg]
+ lappend r $msg
+}
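+
+# Illustrative usage sketch (the table name, values and sync count are
+# hypothetical): crash the child process during a sync of test.db after
+# two file syncs, while it is committing a transaction.
+#
+#   crashsql 2 test.db {
+#     BEGIN;
+#     INSERT INTO abc VALUES(1, 2, 3);
+#     COMMIT;
+#   }
+#
+# When the simulated crash occurs, the call is expected to return
+# {1 {child process exited abnormally}}.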
+
+# Usage: do_ioerr_test <test number> <options...>
+#
+# This proc is used to implement test cases that check that IO errors
+# are correctly handled. The first argument, <test number>, is an integer
+# used to name the tests executed by this proc. Options are as follows:
+#
+# -tclprep TCL script to run to prepare test.
+# -sqlprep SQL script to run to prepare test.
+# -tclbody TCL script to run with IO error simulation.
+# -sqlbody SQL script to run with IO error simulation.
+# -exclude List of 'N' values not to test.
+# -erc Use extended result codes
+# -start Value of 'N' to begin with (default 1)
+#
+# -cksum Boolean. If true, test that the database does
+# not change during the execution of the test case.
+#
+proc do_ioerr_test {testname args} {
+
+ set ::ioerropts(-start) 1
+ set ::ioerropts(-cksum) 0
+ set ::ioerropts(-erc) 0
+ array set ::ioerropts $args
+
+ set ::go 1
+ for {set n $::ioerropts(-start)} {$::go} {incr n} {
+
+ # Skip this IO error if it was specified with the "-exclude" option.
+ if {[info exists ::ioerropts(-exclude)]} {
+ if {[lsearch $::ioerropts(-exclude) $n]!=-1} continue
+ }
+
+ # Delete the files test.db and test2.db, then execute the TCL and
+ # SQL (in that order) to prepare for the test case.
+ do_test $testname.$n.1 {
+ set ::sqlite_io_error_pending 0
+ catch {db close}
+ catch {file delete -force test.db}
+ catch {file delete -force test.db-journal}
+ catch {file delete -force test2.db}
+ catch {file delete -force test2.db-journal}
+ set ::DB [sqlite3 db test.db; sqlite3_connection_pointer db]
+ sqlite3_extended_result_codes $::DB $::ioerropts(-erc)
+ if {[info exists ::ioerropts(-tclprep)]} {
+ eval $::ioerropts(-tclprep)
+ }
+ if {[info exists ::ioerropts(-sqlprep)]} {
+ execsql $::ioerropts(-sqlprep)
+ }
+ expr 0
+ } {0}
+
+ # Read the 'checksum' of the database.
+ if {$::ioerropts(-cksum)} {
+ set checksum [cksum]
+ }
+
+ # Arrange for the Nth IO operation to fail.
+ do_test $testname.$n.2 [subst {
+ set ::sqlite_io_error_pending $n
+ }] $n
+
+ # Create a single TCL script from the TCL and SQL specified
+ # as the body of the test.
+ set ::ioerrorbody {}
+ if {[info exists ::ioerropts(-tclbody)]} {
+ append ::ioerrorbody "$::ioerropts(-tclbody)\n"
+ }
+ if {[info exists ::ioerropts(-sqlbody)]} {
+ append ::ioerrorbody "db eval {$::ioerropts(-sqlbody)}"
+ }
+
+ # Execute the TCL Script created in the above block. If
+ # there are at least N IO operations performed by SQLite as
+ # a result of the script, the Nth will fail.
+ do_test $testname.$n.3 {
+ set r [catch $::ioerrorbody msg]
+ # puts rc=[sqlite3_errcode $::DB]
+ set rc [sqlite3_errcode $::DB]
+ if {$::ioerropts(-erc)} {
+ # In extended result mode, all IOERRs are qualified
+ if {[regexp {^SQLITE_IOERR} $rc] && ![regexp {IOERR\+\d} $rc]} {
+ return $rc
+ }
+ } else {
+ # Not in extended result mode, no errors are qualified
+ if {[regexp {\+\d} $rc]} {
+ return $rc
+ }
+ }
+ set ::go [expr {$::sqlite_io_error_pending<=0}]
+ set s [expr $::sqlite_io_error_hit==0]
+ set ::sqlite_io_error_hit 0
+ # puts "$::sqlite_io_error_pending $r $msg"
+ # puts "r=$r s=$s go=$::go msg=\"$msg\""
+ expr { ($s && !$r && !$::go) || (!$s && $r && $::go) }
+ # expr {$::sqlite_io_error_pending>0 || $r!=0}
+ } {1}
+
+ # If an IO error occurred, then the checksum of the database should
+ # be the same as before the script that caused the IO error was run.
+ if {$::go && $::ioerropts(-cksum)} {
+ do_test $testname.$n.4 {
+ catch {db close}
+ set ::DB [sqlite3 db test.db; sqlite3_connection_pointer db]
+ cksum
+ } $checksum
+ }
+
+ set ::sqlite_io_error_pending 0
+ if {[info exists ::ioerropts(-cleanup)]} {
+ catch $::ioerropts(-cleanup)
+ }
+ }
+ set ::sqlite_io_error_pending 0
+ unset ::ioerropts
+}
+
+# Return a checksum based on the contents of database 'db'.
+#
+proc cksum {{db db}} {
+ set txt [$db eval {
+ SELECT name, type, sql FROM sqlite_master order by name
+ }]\n
+ foreach tbl [$db eval {
+ SELECT name FROM sqlite_master WHERE type='table' order by name
+ }] {
+ append txt [$db eval "SELECT * FROM $tbl"]\n
+ }
+ foreach prag {default_synchronous default_cache_size} {
+ append txt $prag-[$db eval "PRAGMA $prag"]\n
+ }
+ set cksum [string length $txt]-[md5 $txt]
+ # puts $cksum-[file size test.db]
+ return $cksum
+}
+
+# Copy file $from into $to. This is used because some versions of
+# TCL for windows (notably the 8.4.1 binary package shipped with the
+# current mingw release) have a broken "file copy" command.
+#
+proc copy_file {from to} {
+ if {$::tcl_platform(platform)=="unix"} {
+ file copy -force $from $to
+ } else {
+ set f [open $from]
+ fconfigure $f -translation binary
+ set t [open $to w]
+ fconfigure $t -translation binary
+ puts -nonewline $t [read $f [file size $from]]
+ close $t
+ close $f
+ }
+}
+
+# This command checks for outstanding calls to sqliteMalloc() from within
+# the current thread. A list is returned with one entry for each outstanding
+# malloc. Each list entry is itself a list of 5 items, as follows:
+#
+# { <number-bytes> <file-name> <line-number> <test-case> <stack-dump> }
+#
+proc check_for_leaks {} {
+ set ret [list]
+ set cnt 0
+ foreach alloc [sqlite_malloc_outstanding] {
+ foreach {nBytes file iLine userstring backtrace} $alloc {}
+ set stack [list]
+ set skip 0
+
+ # The first command in this block will probably fail on windows. This
+ # means there will be no stack dump available.
+ if {$cnt < 25 && $backtrace!=""} {
+ catch {
+ set stuff [eval "exec addr2line -e ./testfixture -f $backtrace"]
+ foreach {func line} $stuff {
+ if {$func != "??" || $line != "??:0"} {
+ regexp {.*/(.*)} $line dummy line
+ lappend stack "${func}() $line"
+ } else {
+ if {[lindex $stack end] != "..."} {
+ lappend stack "..."
+ }
+ }
+ }
+ }
+ incr cnt
+ }
+
+ if {!$skip} {
+ lappend ret [list $nBytes $file $iLine $userstring $stack]
+ }
+ }
+ return $ret
+}
+
+# Pretty print a report based on the return value of [check_for_leaks] to
+# stdout.
+proc pp_check_for_leaks {} {
+ set l [check_for_leaks]
+ set n 0
+ foreach leak $l {
+ foreach {nBytes file iLine userstring stack} $leak {}
+ puts "$nBytes bytes leaked at $file:$iLine ($userstring)"
+ foreach frame $stack {
+ puts " $frame"
+ }
+ incr n $nBytes
+ }
+ puts "Memory leaked: $n bytes in [llength $l] allocations"
+ puts ""
+}
+
+# If the library is compiled with the SQLITE_DEFAULT_AUTOVACUUM macro set
+# to non-zero, then set the global variable $AUTOVACUUM to 1.
+set AUTOVACUUM $sqlite_options(default_autovacuum)
Added: freeswitch/trunk/libs/sqlite/test/thread1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/thread1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,172 @@
+# 2003 December 18
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is multithreading behavior
+#
+# $Id: thread1.test,v 1.7 2004/06/19 00:16:31 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Skip this whole file if the thread testing code is not enabled
+#
+if {[llength [info command thread_step]]==0 || [sqlite3 -has-codec]} {
+ finish_test
+ return
+}
+
+# Create some data to work with
+#
+do_test thread1-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,'abcdefgh');
+ INSERT INTO t1 SELECT a+1, b||b FROM t1;
+ INSERT INTO t1 SELECT a+2, b||b FROM t1;
+ INSERT INTO t1 SELECT a+4, b||b FROM t1;
+ SELECT count(*), max(length(b)) FROM t1;
+ }
+} {8 64}
+
+# Interleave two threads on read access. Then make sure a third
+# thread can write the database. In other words:
+#
+# read-lock A
+# read-lock B
+# unlock A
+# unlock B
+# write-lock C
+#
+# At one point, the write-lock of C would fail on Linux.
+#
+do_test thread1-1.2 {
+ thread_create A test.db
+ thread_create B test.db
+ thread_create C test.db
+ thread_compile A {SELECT a FROM t1}
+ thread_step A
+ thread_result A
+} SQLITE_ROW
+do_test thread1-1.3 {
+ thread_argc A
+} 1
+do_test thread1-1.4 {
+ thread_argv A 0
+} 1
+do_test thread1-1.5 {
+ thread_compile B {SELECT b FROM t1}
+ thread_step B
+ thread_result B
+} SQLITE_ROW
+do_test thread1-1.6 {
+ thread_argc B
+} 1
+do_test thread1-1.7 {
+ thread_argv B 0
+} abcdefgh
+do_test thread1-1.8 {
+ thread_finalize A
+ thread_result A
+} SQLITE_OK
+do_test thread1-1.9 {
+ thread_finalize B
+ thread_result B
+} SQLITE_OK
+do_test thread1-1.10 {
+ thread_compile C {CREATE TABLE t2(x,y)}
+ thread_step C
+ thread_result C
+} SQLITE_DONE
+do_test thread1-1.11 {
+ thread_finalize C
+ thread_result C
+} SQLITE_OK
+do_test thread1-1.12 {
+ catchsql {SELECT name FROM sqlite_master}
+ execsql {SELECT name FROM sqlite_master}
+} {t1 t2}
+
+
+#
+# The following tests - thread1-2.* - test the following scenario:
+#
+# 1: Read-lock thread A
+# 2: Read-lock thread B
+# 3: Attempt to write in thread C -> SQLITE_BUSY
+# 4: Check db write failed from main thread.
+# 5: Unlock from thread A.
+# 6: Attempt to write in thread C -> SQLITE_BUSY
+# 7: Check db write failed from main thread.
+# 8: Unlock from thread B.
+# 9: Attempt to write in thread C -> SQLITE_DONE
+# 10: Finalize the write from thread C
+# 11: Check db write succeeded from main thread.
+#
+do_test thread1-2.1 {
+ thread_halt *
+ thread_create A test.db
+ thread_compile A {SELECT a FROM t1}
+ thread_step A
+ thread_result A
+} SQLITE_ROW
+do_test thread1-2.2 {
+ thread_create B test.db
+ thread_compile B {SELECT b FROM t1}
+ thread_step B
+ thread_result B
+} SQLITE_ROW
+do_test thread1-2.3 {
+ thread_create C test.db
+ thread_compile C {INSERT INTO t2 VALUES(98,99)}
+ thread_step C
+ thread_result C
+ thread_finalize C
+ thread_result C
+} SQLITE_BUSY
+
+do_test thread1-2.4 {
+ execsql {SELECT * FROM t2}
+} {}
+
+do_test thread1-2.5 {
+ thread_finalize A
+ thread_result A
+} SQLITE_OK
+do_test thread1-2.6 {
+ thread_compile C {INSERT INTO t2 VALUES(98,99)}
+ thread_step C
+ thread_result C
+ thread_finalize C
+ thread_result C
+} SQLITE_BUSY
+do_test thread1-2.7 {
+ execsql {SELECT * FROM t2}
+} {}
+do_test thread1-2.8 {
+ thread_finalize B
+ thread_result B
+} SQLITE_OK
+do_test thread1-2.9 {
+ thread_compile C {INSERT INTO t2 VALUES(98,99)}
+ thread_step C
+ thread_result C
+} SQLITE_DONE
+do_test thread1-2.10 {
+ thread_finalize C
+ thread_result C
+} SQLITE_OK
+do_test thread1-2.11 {
+ execsql {SELECT * FROM t2}
+} {98 99}
+
+thread_halt *
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/thread2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/thread2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,246 @@
+# 2006 January 14
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is multithreading behavior
+#
+# $Id: thread2.test,v 1.2 2006/01/18 18:33:42 danielk1977 Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# This file swaps database connections between threads. This
+# is illegal if memory-management is enabled, so skip this file
+# in that case.
+ifcapable memorymanage {
+ finish_test
+ return
+}
+
+
+# Skip this whole file if the thread testing code is not enabled
+#
+if {[llength [info command thread_step]]==0 || [sqlite3 -has-codec]} {
+ finish_test
+ return
+}
+if {![info exists threadsOverrideEachOthersLocks]} {
+ finish_test
+ return
+}
+
+# Create some data to work with
+#
+do_test thread2-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,'abcdefgh');
+ INSERT INTO t1 SELECT a+1, b||b FROM t1;
+ INSERT INTO t1 SELECT a+2, b||b FROM t1;
+ INSERT INTO t1 SELECT a+4, b||b FROM t1;
+ SELECT count(*), max(length(b)) FROM t1;
+ }
+} {8 64}
+
+# Use the thread_swap command to move the database connections between
+# threads, then verify that they still work.
+#
+do_test thread2-1.2 {
+ db close
+ thread_create A test.db
+ thread_create B test.db
+ thread_swap A B
+ thread_compile A {SELECT a FROM t1 LIMIT 1}
+ thread_result A
+} {SQLITE_OK}
+do_test thread2-1.3 {
+ thread_step A
+ thread_result A
+} {SQLITE_ROW}
+do_test thread2-1.4 {
+ thread_argv A 0
+} {1}
+do_test thread2-1.5 {
+ thread_finalize A
+ thread_result A
+} {SQLITE_OK}
+do_test thread2-1.6 {
+ thread_compile B {SELECT a FROM t1 LIMIT 1}
+ thread_result B
+} {SQLITE_OK}
+do_test thread2-1.7 {
+ thread_step B
+ thread_result B
+} {SQLITE_ROW}
+do_test thread2-1.8 {
+ thread_argv B 0
+} {1}
+do_test thread2-1.9 {
+ thread_finalize B
+ thread_result B
+} {SQLITE_OK}
+
+# Swap them again.
+#
+do_test thread2-2.2 {
+ thread_swap A B
+ thread_compile A {SELECT a FROM t1 LIMIT 1}
+ thread_result A
+} {SQLITE_OK}
+do_test thread2-2.3 {
+ thread_step A
+ thread_result A
+} {SQLITE_ROW}
+do_test thread2-2.4 {
+ thread_argv A 0
+} {1}
+do_test thread2-2.5 {
+ thread_finalize A
+ thread_result A
+} {SQLITE_OK}
+do_test thread2-2.6 {
+ thread_compile B {SELECT a FROM t1 LIMIT 1}
+ thread_result B
+} {SQLITE_OK}
+do_test thread2-2.7 {
+ thread_step B
+ thread_result B
+} {SQLITE_ROW}
+do_test thread2-2.8 {
+ thread_argv B 0
+} {1}
+do_test thread2-2.9 {
+ thread_finalize B
+ thread_result B
+} {SQLITE_OK}
+thread_halt A
+thread_halt B
+
+# Save the original (correct) value of threadsOverrideEachOthersLocks
+# so that it can be restored. If this value is left set incorrectly, lots
+# of things will go wrong in future tests.
+#
+set orig_threadOverride $threadsOverrideEachOthersLocks
+
+# Pretend we are on a system (like RedHat9) where threads do not
+# override each other's locks.
+#
+set threadsOverrideEachOthersLocks 0
+
+# Verify that we can move database connections between threads as
+# long as no locks are held.
+#
+do_test thread2-3.1 {
+ thread_create A test.db
+ set DB [thread_db_get A]
+ thread_halt A
+} {}
+do_test thread2-3.2 {
+ set STMT [sqlite3_prepare $DB {SELECT a FROM t1 LIMIT 1} -1 TAIL]
+ sqlite3_step $STMT
+} SQLITE_ROW
+do_test thread2-3.3 {
+ sqlite3_column_int $STMT 0
+} 1
+do_test thread2-3.4 {
+ sqlite3_finalize $STMT
+} SQLITE_OK
+do_test thread2-3.5 {
+ set STMT [sqlite3_prepare $DB {SELECT max(a) FROM t1} -1 TAIL]
+ sqlite3_step $STMT
+} SQLITE_ROW
+do_test thread2-3.6 {
+ sqlite3_column_int $STMT 0
+} 8
+do_test thread2-3.7 {
+ sqlite3_finalize $STMT
+} SQLITE_OK
+do_test thread2-3.8 {
+ sqlite3_close $DB
+} {SQLITE_OK}
+
+do_test thread2-3.10 {
+ thread_create A test.db
+ thread_compile A {SELECT a FROM t1 LIMIT 1}
+ thread_step A
+ thread_finalize A
+ set DB [thread_db_get A]
+ thread_halt A
+} {}
+do_test thread2-3.11 {
+ set STMT [sqlite3_prepare $DB {SELECT a FROM t1 LIMIT 1} -1 TAIL]
+ sqlite3_step $STMT
+} SQLITE_ROW
+do_test thread2-3.12 {
+ sqlite3_column_int $STMT 0
+} 1
+do_test thread2-3.13 {
+ sqlite3_finalize $STMT
+} SQLITE_OK
+do_test thread2-3.14 {
+ sqlite3_close $DB
+} SQLITE_OK
+
+do_test thread2-3.20 {
+ thread_create A test.db
+ thread_compile A {SELECT a FROM t1 LIMIT 3}
+ thread_step A
+ set STMT [thread_stmt_get A]
+ set DB [thread_db_get A]
+ thread_halt A
+} {}
+do_test thread2-3.21 {
+ sqlite3_step $STMT
+} SQLITE_ROW
+do_test thread2-3.22 {
+ sqlite3_column_int $STMT 0
+} 2
+do_test thread2-3.23 {
+ # The unlock fails here. But because we never check the return
+ # code from sqlite3OsUnlock (because we cannot do anything about it
+ # if it fails) we do not realize that an error has occurred.
+ sqlite3_finalize $STMT
+} SQLITE_OK
+do_test thread2-3.25 {
+ sqlite3_close $DB
+} SQLITE_OK
+
+do_test thread2-3.30 {
+ thread_create A test.db
+ thread_compile A {BEGIN}
+ thread_step A
+ thread_finalize A
+ thread_compile A {SELECT a FROM t1 LIMIT 1}
+ thread_step A
+ thread_finalize A
+ set DB [thread_db_get A]
+ thread_halt A
+} {}
+do_test thread2-3.31 {
+ set STMT [sqlite3_prepare $DB {INSERT INTO t1 VALUES(99,'error')} -1 TAIL]
+ sqlite3_step $STMT
+} SQLITE_ERROR
+do_test thread2-3.32 {
+ sqlite3_finalize $STMT
+} SQLITE_MISUSE
+do_test thread2-3.33 {
+ sqlite3_close $DB
+} SQLITE_OK
+
+# VERY important to set the override flag back to its true value.
+#
+set threadsOverrideEachOthersLocks $orig_threadOverride
+
+# Also important to halt the worker threads, which are using spin
+# locks and eating away CPU cycles.
+#
+thread_halt *
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/threadtest1.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/threadtest1.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,289 @@
+/*
+** 2002 January 15
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file implements a simple standalone program used to test whether
+** or not the SQLite library is threadsafe.
+**
+** Testing the thread safety of SQLite is difficult because there are very
+** few places in the code that are even potentially unsafe, and those
+** places execute for very short periods of time. So even if the library
+** is compiled with its mutexes disabled, it is likely to work correctly
+** in a multi-threaded program most of the time.
+**
+** This file is NOT part of the standard SQLite library. It is used for
+** testing only.
+*/
+#include "sqlite.h"
+#include <pthread.h>
+#include <sched.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+/*
+** Enable for tracing
+*/
+static int verbose = 0;
+
+/*
+** Come here to die.
+*/
+static void Exit(int rc){
+ exit(rc);
+}
+
+extern char *sqlite3_mprintf(const char *zFormat, ...);
+extern char *sqlite3_vmprintf(const char *zFormat, va_list);
+
+/*
+** When a lock occurs, yield.
+*/
+static int db_is_locked(void *NotUsed, int iCount){
+ /* sched_yield(); */
+ if( verbose ) printf("BUSY %s #%d\n", (char*)NotUsed, iCount);
+ usleep(100);
+ return iCount<25;
+}
+
+/*
+** Used to accumulate query results by db_query()
+*/
+struct QueryResult {
+ const char *zFile; /* Filename - used for error reporting */
+ int nElem; /* Number of used entries in azElem[] */
+ int nAlloc; /* Number of slots allocated for azElem[] */
+ char **azElem; /* The result of the query */
+};
+
+/*
+** The callback function for db_query
+*/
+static int db_query_callback(
+ void *pUser, /* Pointer to the QueryResult structure */
+ int nArg, /* Number of columns in this result row */
+ char **azArg, /* Text of data in all columns */
+ char **NotUsed /* Names of the columns */
+){
+ struct QueryResult *pResult = (struct QueryResult*)pUser;
+ int i;
+ if( pResult->nElem + nArg >= pResult->nAlloc ){
+ if( pResult->nAlloc==0 ){
+ pResult->nAlloc = nArg+1;
+ }else{
+ pResult->nAlloc = pResult->nAlloc*2 + nArg + 1;
+ }
+ pResult->azElem = realloc( pResult->azElem, pResult->nAlloc*sizeof(char*));
+ if( pResult->azElem==0 ){
+ fprintf(stdout,"%s: malloc failed\n", pResult->zFile);
+ return 1;
+ }
+ }
+ if( azArg==0 ) return 0;
+ for(i=0; i<nArg; i++){
+ pResult->azElem[pResult->nElem++] =
+ sqlite3_mprintf("%s",azArg[i] ? azArg[i] : "");
+ }
+ return 0;
+}
+
+/*
+** Execute a query against the database. NULL values are returned
+** as an empty string. The list is terminated by a single NULL pointer.
+*/
+char **db_query(sqlite *db, const char *zFile, const char *zFormat, ...){
+ char *zSql;
+ int rc;
+ char *zErrMsg = 0;
+ va_list ap;
+ struct QueryResult sResult;
+ va_start(ap, zFormat);
+ zSql = sqlite3_vmprintf(zFormat, ap);
+ va_end(ap);
+ memset(&sResult, 0, sizeof(sResult));
+ sResult.zFile = zFile;
+ if( verbose ) printf("QUERY %s: %s\n", zFile, zSql);
+ rc = sqlite3_exec(db, zSql, db_query_callback, &sResult, &zErrMsg);
+ if( rc==SQLITE_SCHEMA ){
+ if( zErrMsg ) free(zErrMsg);
+ rc = sqlite3_exec(db, zSql, db_query_callback, &sResult, &zErrMsg);
+ }
+ if( verbose ) printf("DONE %s %s\n", zFile, zSql);
+ if( zErrMsg ){
+ fprintf(stdout,"%s: query failed: %s - %s\n", zFile, zSql, zErrMsg);
+ free(zErrMsg);
+ free(zSql);
+ Exit(1);
+ }
+ sqlite3_free(zSql);
+ if( sResult.azElem==0 ){
+ db_query_callback(&sResult, 0, 0, 0);
+ }
+ sResult.azElem[sResult.nElem] = 0;
+ return sResult.azElem;
+}
+
+/*
+** Execute an SQL statement.
+*/
+void db_execute(sqlite *db, const char *zFile, const char *zFormat, ...){
+ char *zSql;
+ int rc;
+ char *zErrMsg = 0;
+ va_list ap;
+ va_start(ap, zFormat);
+ zSql = sqlite3_vmprintf(zFormat, ap);
+ va_end(ap);
+ if( verbose ) printf("EXEC %s: %s\n", zFile, zSql);
+ do{
+ rc = sqlite3_exec(db, zSql, 0, 0, &zErrMsg);
+ }while( rc==SQLITE_BUSY );
+ if( verbose ) printf("DONE %s: %s\n", zFile, zSql);
+ if( zErrMsg ){
+ fprintf(stdout,"%s: command failed: %s - %s\n", zFile, zSql, zErrMsg);
+ free(zErrMsg);
+ sqlite3_free(zSql);
+ Exit(1);
+ }
+ sqlite3_free(zSql);
+}
+
+/*
+** Free the results of a db_query() call.
+*/
+void db_query_free(char **az){
+ int i;
+ for(i=0; az[i]; i++){
+ sqlite3_free(az[i]);
+ }
+ free(az);
+}
+
+/*
+** Check results
+*/
+void db_check(const char *zFile, const char *zMsg, char **az, ...){
+ va_list ap;
+ int i;
+ char *z;
+ va_start(ap, az);
+ for(i=0; (z = va_arg(ap, char*))!=0; i++){
+ if( az[i]==0 || strcmp(az[i],z)!=0 ){
+ fprintf(stdout,"%s: %s: bad result in column %d: %s\n",
+ zFile, zMsg, i+1, az[i]);
+ db_query_free(az);
+ Exit(1);
+ }
+ }
+ va_end(ap);
+ db_query_free(az);
+}
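+
+/*
+** Typical use of db_query() together with db_check() -- this mirrors the
+** calls made in worker_bee() below:
+**
+**   az = db_query(db, zFilename, "SELECT count(*) FROM t%d", t);
+**   db_check(zFilename, "tX size", az, "100", 0);
+**
+** Note that db_check() frees the result array itself, so the caller does
+** not need to call db_query_free() afterwards.
+*/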
+
+pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
+pthread_cond_t sig = PTHREAD_COND_INITIALIZER;
+int thread_cnt = 0;
+
+static void *worker_bee(void *pArg){
+ const char *zFilename = (char*)pArg;
+ char *azErr;
+ int i, cnt;
+ int t = atoi(zFilename);
+ char **az;
+ sqlite *db;
+
+ pthread_mutex_lock(&lock);
+ thread_cnt++;
+ pthread_mutex_unlock(&lock);
+ printf("%s: START\n", zFilename);
+ fflush(stdout);
+ for(cnt=0; cnt<10; cnt++){
+ sqlite3_open(&zFilename[2], &db);
+ if( db==0 ){
+ fprintf(stdout,"%s: can't open\n", zFilename);
+ Exit(1);
+ }
+ sqlite3_busy_handler(db, db_is_locked, zFilename);
+ db_execute(db, zFilename, "CREATE TABLE t%d(a,b,c);", t);
+ for(i=1; i<=100; i++){
+ db_execute(db, zFilename, "INSERT INTO t%d VALUES(%d,%d,%d);",
+ t, i, i*2, i*i);
+ }
+ az = db_query(db, zFilename, "SELECT count(*) FROM t%d", t);
+ db_check(zFilename, "tX size", az, "100", 0);
+ az = db_query(db, zFilename, "SELECT avg(b) FROM t%d", t);
+ db_check(zFilename, "tX avg", az, "101", 0);
+ db_execute(db, zFilename, "DELETE FROM t%d WHERE a>50", t);
+ az = db_query(db, zFilename, "SELECT avg(b) FROM t%d", t);
+ db_check(zFilename, "tX avg2", az, "51", 0);
+ for(i=1; i<=50; i++){
+ char z1[30], z2[30];
+ az = db_query(db, zFilename, "SELECT b, c FROM t%d WHERE a=%d", t, i);
+ sprintf(z1, "%d", i*2);
+ sprintf(z2, "%d", i*i);
+ db_check(zFilename, "readback", az, z1, z2, 0);
+ }
+ db_execute(db, zFilename, "DROP TABLE t%d;", t);
+ sqlite3_close(db);
+ }
+ printf("%s: END\n", zFilename);
+ /* unlink(zFilename); */
+ fflush(stdout);
+ pthread_mutex_lock(&lock);
+ thread_cnt--;
+ if( thread_cnt<=0 ){
+ pthread_cond_signal(&sig);
+ }
+ pthread_mutex_unlock(&lock);
+ return 0;
+}
+
+int main(int argc, char **argv){
+ char *zFile;
+ int i, n;
+ pthread_t id;
+ if( argc>2 && strcmp(argv[1], "-v")==0 ){
+ verbose = 1;
+ argc--;
+ argv++;
+ }
+ if( argc<2 || (n=atoi(argv[1]))<1 ) n = 10;
+ for(i=0; i<n; i++){
+ char zBuf[200];
+ sprintf(zBuf, "testdb-%d", (i+1)/2);
+ unlink(zBuf);
+ }
+ for(i=0; i<n; i++){
+ zFile = sqlite3_mprintf("%d.testdb-%d", i%2+1, (i+2)/2);
+ if( (i%2)==0 ){
+ /* Remove both the database file and any old journal for the file
+ ** being used by this thread and the next one. */
+ char *zDb = &zFile[2];
+ char *zJournal = sqlite3_mprintf("%s-journal", zDb);
+ unlink(zDb);
+ unlink(zJournal);
+ free(zJournal);
+ }
+
+ pthread_create(&id, 0, worker_bee, (void*)zFile);
+ pthread_detach(id);
+ }
+ pthread_mutex_lock(&lock);
+ while( thread_cnt>0 ){
+ pthread_cond_wait(&sig, &lock);
+ }
+ pthread_mutex_unlock(&lock);
+ for(i=0; i<n; i++){
+ char zBuf[200];
+ sprintf(zBuf, "testdb-%d", (i+1)/2);
+ unlink(zBuf);
+ }
+ return 0;
+}
Added: freeswitch/trunk/libs/sqlite/test/threadtest2.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/threadtest2.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,129 @@
+/*
+** 2004 January 13
+**
+** The author disclaims copyright to this source code. In place of
+** a legal notice, here is a blessing:
+**
+** May you do good and not evil.
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This file implements a simple standalone program used to test whether
+** or not the SQLite library is threadsafe.
+**
+** This file is NOT part of the standard SQLite library. It is used for
+** testing only.
+*/
+#include <stdio.h>
+#include <unistd.h>
+#include <pthread.h>
+#include <string.h>
+#include <stdlib.h>
+#include "sqlite.h"
+
+/*
+** Name of the database
+*/
+#define DB_FILE "test.db"
+
+/*
+** When this variable becomes non-zero, all threads stop
+** what they are doing.
+*/
+volatile int all_stop = 0;
+
+/*
+** Callback from the integrity check. If the result is anything other
+** than "ok" it means the integrity check has failed. Set the "all_stop"
+** global variable to stop all other activity. Print the error message
+** or print OK if the string "ok" is seen.
+*/
+int check_callback(void *notUsed, int argc, char **argv, char **notUsed2){
+ if( strcmp(argv[0],"ok") ){
+ all_stop = 1;
+ fprintf(stderr,"pid=%d. %s\n", getpid(), argv[0]);
+ }else{
+ /* fprintf(stderr,"pid=%d. OK\n", getpid()); */
+ }
+ return 0;
+}
+
+/*
+** Do an integrity check on the database. If the first integrity check
+** fails, try it a second time.
+*/
+int integrity_check(sqlite *db){
+ int rc;
+ if( all_stop ) return 0;
+ /* fprintf(stderr,"pid=%d: CHECK\n", getpid()); */
+ rc = sqlite3_exec(db, "pragma integrity_check", check_callback, 0, 0);
+ if( rc!=SQLITE_OK && rc!=SQLITE_BUSY ){
+ fprintf(stderr,"pid=%d, Integrity check returns %d\n", getpid(), rc);
+ }
+ if( all_stop ){
+ sqlite3_exec(db, "pragma integrity_check", check_callback, 0, 0);
+ }
+ return 0;
+}
+
+/*
+** This is the worker thread
+*/
+void *worker(void *notUsed){
+ sqlite *db;
+ int rc;
+ int cnt = 0;
+ while( !all_stop && cnt++<10000 ){
+ if( cnt%1000==0 ) printf("pid=%d: %d\n", getpid(), cnt);
+ while( (sqlite3_open(DB_FILE, &db))!=SQLITE_OK ) sched_yield();
+ sqlite3_exec(db, "PRAGMA synchronous=OFF", 0, 0, 0);
+ integrity_check(db);
+ if( all_stop ){ sqlite3_close(db); break; }
+ /* fprintf(stderr, "pid=%d: BEGIN\n", getpid()); */
+ rc = sqlite3_exec(db, "INSERT INTO t1 VALUES('bogus data')", 0, 0, 0);
+ /* fprintf(stderr, "pid=%d: END rc=%d\n", getpid(), rc); */
+ sqlite3_close(db);
+ }
+ return 0;
+}
+
+/*
+** Initialize the database and start the threads
+*/
+int main(int argc, char **argv){
+ sqlite *db;
+ int i, rc;
+ pthread_t aThread[5];
+
+ if( strcmp(DB_FILE,":memory:") ){
+ char *zJournal = sqlite3_mprintf("%s-journal", DB_FILE);
+ unlink(DB_FILE);
+ unlink(zJournal);
+ free(zJournal);
+ }
+ sqlite3_open(DB_FILE, &db);
+ if( db==0 ){
+ fprintf(stderr,"unable to initialize database\n");
+ exit(1);
+ }
+ rc = sqlite3_exec(db, "CREATE TABLE t1(x);", 0,0,0);
+ if( rc ){
+ fprintf(stderr,"cannot create table t1: %d\n", rc);
+ exit(1);
+ }
+ sqlite3_close(db);
+ for(i=0; i<sizeof(aThread)/sizeof(aThread[0]); i++){
+ pthread_create(&aThread[i], 0, worker, 0);
+ }
+ for(i=0; i<sizeof(aThread)/sizeof(aThread[i]); i++){
+ pthread_join(aThread[i], 0);
+ }
+ if( !all_stop ){
+ printf("Everything seems ok.\n");
+ return 0;
+ }else{
+ printf("We hit an error.\n");
+ return 1;
+ }
+}
Added: freeswitch/trunk/libs/sqlite/test/tkt1435.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1435.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,111 @@
+# 2005 September 17
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1435 has been
+# fixed.
+#
+#
+# $Id: tkt1435.test,v 1.2 2006/01/17 09:35:02 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !memorydb {
+ finish_test
+ return
+}
+
+# Construct the sample database.
+#
+do_test tkt1435-1.0 {
+ sqlite3 db :memory:
+ execsql {
+ CREATE TABLE Instances(
+ instanceId INTEGER PRIMARY KEY,
+ troveName STR,
+ versionId INT,
+ flavorId INT,
+ timeStamps STR,
+ isPresent INT,
+ pinned BOOLEAN
+ );
+ INSERT INTO "Instances"
+ VALUES(1, 'libhello:runtime', 1, 1, 1126929880.094, 1, 1);
+ INSERT INTO "Instances"
+ VALUES(2, 'libhello:user', 1, 1, 1126929880.094, 1, 0);
+ INSERT INTO "Instances"
+ VALUES(3, 'libhello:script', 1, 1, 1126929880.094, 1, 0);
+ INSERT INTO "Instances"
+ VALUES(4, 'libhello', 1, 1, 1126929880.094, 1, 0);
+
+ CREATE TABLE Versions(versionId INTEGER PRIMARY KEY,version STR UNIQUE);
+ INSERT INTO "Versions" VALUES(0, NULL);
+ INSERT INTO "Versions" VALUES(1, '/localhost at rpl:linux/0-1-1');
+
+ CREATE TABLE Flavors(flavorId integer primary key, flavor str unique);
+ INSERT INTO "Flavors" VALUES(0, NULL);
+ INSERT INTO "Flavors" VALUES(1, '1#x86');
+
+ CREATE TEMPORARY TABLE tlList (
+ row INTEGER PRIMARY KEY,
+ name STRING,
+ version STRING,
+ flavor STRING
+ );
+
+ INSERT INTO tlList
+ values(NULL, 'libhello:script', '/localhost at rpl:linux/0-1-1', '1#x86');
+ INSERT INTO tlList
+ values(NULL, 'libhello:user', '/localhost at rpl:linux/0-1-1', '1#x86');
+ INSERT INTO tlList
+ values(NULL, 'libhello:runtime', '/localhost at rpl:linux/0-1-1', '1#x86');
+ }
+} {}
+
+# Run the query with an index
+#
+do_test tkt1435-1.1 {
+ execsql {
+ select row, pinned from tlList, Instances, Versions, Flavors
+ where
+ Instances.troveName = tlList.name
+ and Versions.version = tlList.version
+ and Instances.versionId = Versions.versionId
+ and ( Flavors.flavor = tlList.flavor or Flavors.flavor is NULL
+ and tlList.flavor = '')
+ and Instances.flavorId = Flavors.flavorId
+ order by row asc;
+ }
+} {1 0 2 0 3 1}
+
+# Create indices, analyze, and rerun the query.
+# Verify that the results are the same
+#
+do_test tkt1435-1.2 {
+ execsql {
+ CREATE INDEX InstancesNameIdx ON Instances(troveName);
+ CREATE UNIQUE INDEX InstancesIdx
+ ON Instances(troveName, versionId, flavorId);
+ ANALYZE;
+ select row, pinned from tlList, Instances, Versions, Flavors
+ where
+ Instances.troveName = tlList.name
+ and Versions.version = tlList.version
+ and Instances.versionId = Versions.versionId
+ and ( Flavors.flavor = tlList.flavor or Flavors.flavor is NULL
+ and tlList.flavor = '')
+ and Instances.flavorId = Flavors.flavorId
+ order by row asc;
+ }
+} {1 0 2 0 3 1}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tkt1443.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1443.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,180 @@
+# 2005 September 17
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1433 has been
+# fixed.
+#
+# The problem in ticket #1433 was that the dependencies on the right-hand
+# side of an IN operator were not being checked correctly. So in an
+# expression of the form:
+#
+# t1.x IN (1,t2.b,3)
+#
+# the optimizer was missing the fact that the right-hand side of the IN
+# depended on table t2. It was checking dependencies based on the
+# Expr.pRight field rather than Expr.pList and Expr.pSelect.
+#
+# Such a bug could be verified using a less elaborate test case (see the
+# sketch below). But this test case (from the original bug poster) exercises
+# so many different parts of the system all at once that it seemed like a
+# good one to include in the test suite.
+# NOTE: Yes, in spite of the name of this file (tkt1443.test), this
+# test is for ticket #1433, not #1443. I mistyped the name when I was
+# creating the file and had already checked in the file under the wrong
+# name by the time I noticed the error. With CVS it is a real hassle
+# to change filenames, so I'll just leave it as is. No harm done.
+#
+# $Id: tkt1443.test,v 1.4 2006/01/17 09:35:02 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !subquery||!memorydb {
+ finish_test
+ return
+}
+
+# Construct the sample database.
+#
+do_test tkt1443-1.0 {
+ sqlite3 db :memory:
+ execsql {
+ CREATE TABLE Items(
+ itemId integer primary key,
+ item str unique
+ );
+ INSERT INTO "Items" VALUES(0, 'ALL');
+ INSERT INTO "Items" VALUES(1, 'double:source');
+ INSERT INTO "Items" VALUES(2, 'double');
+ INSERT INTO "Items" VALUES(3, 'double:runtime');
+ INSERT INTO "Items" VALUES(4, '.*:runtime');
+
+ CREATE TABLE Labels(
+ labelId INTEGER PRIMARY KEY,
+ label STR UNIQUE
+ );
+ INSERT INTO "Labels" VALUES(0, 'ALL');
+ INSERT INTO "Labels" VALUES(1, 'localhost at rpl:linux');
+ INSERT INTO "Labels" VALUES(2, 'localhost at rpl:branch');
+
+ CREATE TABLE LabelMap(
+ itemId INTEGER,
+ labelId INTEGER,
+ branchId integer
+ );
+ INSERT INTO "LabelMap" VALUES(1, 1, 1);
+ INSERT INTO "LabelMap" VALUES(2, 1, 1);
+ INSERT INTO "LabelMap" VALUES(3, 1, 1);
+ INSERT INTO "LabelMap" VALUES(1, 2, 2);
+ INSERT INTO "LabelMap" VALUES(2, 2, 3);
+ INSERT INTO "LabelMap" VALUES(3, 2, 3);
+
+ CREATE TABLE Users (
+ userId INTEGER PRIMARY KEY,
+ user STRING UNIQUE,
+ salt BINARY,
+ password STRING
+ );
+ INSERT INTO "Users" VALUES(1, 'test', 'æ$d',
+ '43ba0f45014306bd6df529551ffdb3df');
+ INSERT INTO "Users" VALUES(2, 'limited', 'ª>S',
+ 'cf07c8348fdf675cc1f7696b7d45191b');
+ CREATE TABLE UserGroups (
+ userGroupId INTEGER PRIMARY KEY,
+ userGroup STRING UNIQUE
+ );
+ INSERT INTO "UserGroups" VALUES(1, 'test');
+ INSERT INTO "UserGroups" VALUES(2, 'limited');
+
+ CREATE TABLE UserGroupMembers (
+ userGroupId INTEGER,
+ userId INTEGER
+ );
+ INSERT INTO "UserGroupMembers" VALUES(1, 1);
+ INSERT INTO "UserGroupMembers" VALUES(2, 2);
+
+ CREATE TABLE Permissions (
+ userGroupId INTEGER,
+ labelId INTEGER NOT NULL,
+ itemId INTEGER NOT NULL,
+ write INTEGER,
+ capped INTEGER,
+ admin INTEGER
+ );
+ INSERT INTO "Permissions" VALUES(1, 0, 0, 1, 0, 1);
+ INSERT INTO "Permissions" VALUES(2, 2, 4, 0, 0, 0);
+ }
+} {}
+
+# Run the query with an index
+#
+do_test tkt1443-1.1 {
+ execsql {
+ select distinct
+ Items.Item as trove, UP.pattern as pattern
+ from
+ ( select
+ Permissions.labelId as labelId,
+ PerItems.item as pattern
+ from
+ Users, UserGroupMembers, Permissions
+ left outer join Items as PerItems
+ on Permissions.itemId = PerItems.itemId
+ where
+ Users.user = 'limited'
+ and Users.userId = UserGroupMembers.userId
+ and UserGroupMembers.userGroupId = Permissions.userGroupId
+ ) as UP join LabelMap on ( UP.labelId = 0 or
+ UP.labelId = LabelMap.labelId ),
+ Labels, Items
+ where
+ Labels.label = 'localhost at rpl:branch'
+ and Labels.labelId = LabelMap.labelId
+ and LabelMap.itemId = Items.itemId
+ ORDER BY +trove, +pattern
+ }
+} {double .*:runtime double:runtime .*:runtime double:source .*:runtime}
+
+# Create an index and rerun the query.
+# Verify that the results are the same
+#
+do_test tkt1443-1.2 {
+ execsql {
+ CREATE UNIQUE INDEX PermissionsIdx
+ ON Permissions(userGroupId, labelId, itemId);
+ select distinct
+ Items.Item as trove, UP.pattern as pattern
+ from
+ ( select
+ Permissions.labelId as labelId,
+ PerItems.item as pattern
+ from
+ Users, UserGroupMembers, Permissions
+ left outer join Items as PerItems
+ on Permissions.itemId = PerItems.itemId
+ where
+ Users.user = 'limited'
+ and Users.userId = UserGroupMembers.userId
+ and UserGroupMembers.userGroupId = Permissions.userGroupId
+ ) as UP join LabelMap on ( UP.labelId = 0 or
+ UP.labelId = LabelMap.labelId ),
+ Labels, Items
+ where
+ Labels.label = 'localhost at rpl:branch'
+ and Labels.labelId = LabelMap.labelId
+ and LabelMap.itemId = Items.itemId
+ ORDER BY +trove, +pattern
+ }
+} {double .*:runtime double:runtime .*:runtime double:source .*:runtime}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tkt1444.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1444.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,56 @@
+# 2005 September 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1444 has been
+# fixed.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !compound||!view {
+ finish_test
+ return
+}
+
+# The use of a VIEW that contained an ORDER BY clause within a UNION ALL
+# was causing problems. See ticket #1444.
+#
+do_test tkt1444-1.1 {
+ execsql {
+ CREATE TABLE DemoTable (x INTEGER, TextKey TEXT, DKey Real);
+ CREATE INDEX DemoTableIdx ON DemoTable (TextKey);
+ INSERT INTO DemoTable VALUES(9,8,7);
+ INSERT INTO DemoTable VALUES(1,2,3);
+ CREATE VIEW DemoView AS SELECT * FROM DemoTable ORDER BY TextKey;
+ SELECT * FROM DemoTable UNION ALL SELECT * FROM DemoView ORDER BY 1;
+ }
+} {1 2 3.0 1 2 3.0 9 8 7.0 9 8 7.0}
+do_test tkt1444-1.2 {
+ execsql {
+ SELECT * FROM DemoTable UNION ALL SELECT * FROM DemoView;
+ }
+} {9 8 7.0 1 2 3.0 1 2 3.0 9 8 7.0}
+do_test tkt1444-1.3 {
+ execsql {
+ DROP VIEW DemoView;
+ CREATE VIEW DemoView AS SELECT * FROM DemoTable;
+ SELECT * FROM DemoTable UNION ALL SELECT * FROM DemoView ORDER BY 1;
+ }
+} {1 2 3.0 1 2 3.0 9 8 7.0 9 8 7.0}
+do_test tkt1444-1.4 {
+ execsql {
+ SELECT * FROM DemoTable UNION ALL SELECT * FROM DemoView;
+ }
+} {9 8 7.0 1 2 3.0 9 8 7.0 1 2 3.0}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tkt1449.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1449.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,262 @@
+# 2005 September 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1449 has been
+# fixed.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Somewhere in tkt1449-1.1 is a VIEW definition that uses a subquery and
+# a compound SELECT. So we cannot run this file if any of these features
+# are not available.
+ifcapable !subquery||!compound||!view {
+ finish_test
+ return
+}
+
+# The following schema generated problems in ticket #1449. We've retained
+# the original schema here because it is so unbelievably complex that it
+# seemed like a good test case for SQLite.
+#
+do_test tkt1449-1.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE ACLS(ISSUEID text(50) not null, OBJECTID text(50) not null, PARTICIPANTID text(50) not null, PERMISSIONBITS int not null, constraint PK_ACLS primary key (ISSUEID, OBJECTID, PARTICIPANTID));
+ CREATE TABLE ACTIONITEMSTATUSES(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, FRIENDLYNAME text(100) not null, REVISION int not null, SHORTNAME text(30) not null, LONGNAME text(200) not null, ATTACHMENTHANDLING int not null, RESULT int not null, NOTIFYCREATOR text(1) null, NOTIFYASSIGNEE text(1) null, NOTIFYFYI text(1) null, NOTIFYCLOSURETEAM text(1) null, NOTIFYCOORDINATORS text(1) null, COMMENTREQUIRED text(1) not null, constraint PK_ACTIONITEMSTATUSES primary key (ISSUEID, OBJECTID));
+ CREATE TABLE ACTIONITEMTYPES(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, REVISION int not null, LABEL text(200) not null, INSTRUCTIONS text not null, EMAILINSTRUCTIONS text null, ALLOWEDSTATUSES text not null, INITIALSTATUS text(100) not null, COMMENTREQUIRED text(1) not null, ATTACHMENTHANDLING int not null, constraint PK_ACTIONITEMTYPES primary key (ISSUEID, OBJECTID));
+ CREATE TABLE ATTACHMENTS(TQUNID text(36) not null, OBJECTID text(50) null, ISSUEID text(50) null, DATASTREAM blob not null, CONTENTENCODING text(50) null, CONTENTCHARSET text(50) null, CONTENTTYPE text(100) null, CONTENTID text(100) null, CONTENTLOCATION text(100) null, CONTENTNAME text(100) not null, constraint PK_ATTACHMENTS primary key (TQUNID));
+ CREATE TABLE COMPLIANCEPOLICIES(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, BODY text null, constraint PK_COMPLIANCEPOLICIES primary key (ISSUEID, OBJECTID));
+ CREATE TABLE DBHISTORY(DATETIME text(25) not null, OPERATION text(20) not null, KUBIVERSION text(100) not null, FROMVERSION int null, TOVERSION int null);
+ CREATE TABLE DBINFO(FINGERPRINT text(32) not null, VERSION int not null);
+ CREATE TABLE DETACHEDATTACHMENTS (TQUNID text(36) not null, ISSUEID text(50) not null, OBJECTID text(50) not null, PATH text(300) not null, DETACHEDFILELASTMODTIMESTAMP text(25) null, CONTENTID text(100) not null, constraint PK_DETACHEDATTACHMENTS primary key (TQUNID));
+ CREATE TABLE DOCREFERENCES(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, REFERENCEDOCUMENTID text(50) null, constraint PK_DOCREFERENCES primary key (ISSUEID, OBJECTID));
+ CREATE TABLE DQ (TQUNID text(36) not null, ISSUEID text(50) not null, DEPENDSID text(50) null, DEPENDSTYPE int null, DEPENDSCOMMANDSTREAM blob null, DEPENDSNODEIDSEQNOKEY text(100) null, DEPENDSACLVERSION int null, constraint PK_DQ primary key (TQUNID));
+ CREATE TABLE EMAILQ(TIMEQUEUED int not null, NODEID text(50) not null, MIME blob not null, TQUNID text(36) not null);
+ CREATE TABLE ENTERPRISEDATA(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, DATE1 text(25) null, DATE2 text(25) null, DATE3 text(25) null, DATE4 text(25) null, DATE5 text(25) null, DATE6 text(25) null, DATE7 text(25) null, DATE8 text(25) null, DATE9 text(25) null, DATE10 text(25) null, VALUE1 int null, VALUE2 int null, VALUE3 int null, VALUE4 int null, VALUE5 int null, VALUE6 int null, VALUE7 int null, VALUE8 int null, VALUE9 int null, VALUE10 int null, VALUE11 int null, VALUE12 int null, VALUE13 int null, VALUE14 int null, VALUE15 int null, VALUE16 int null, VALUE17 int null, VALUE18 int null, VALUE19 int null, VALUE20 int null, STRING1 text(300) null, STRING2 text(300) null, STRING3 text(300) null, STRING4 text(300) null, STRING5 text(300) null, STRING6 text(300) null, STRING7 text(300) null, STRING8 text(300) null, STRING9 text(300) null, STRING10 text(300) null, LONGSTRING1 text null, LONGSTRING2 text null, LONGSTRING3 text null, LONGSTRING4 text null, LONGSTRING5 text null, LONGSTRING6 text null, LONGSTRING7 text null, LONGSTRING8 text null, LONGSTRING9 text null, LONGSTRING10 text null, constraint PK_ENTERPRISEDATA primary key (ISSUEID, OBJECTID));
+ CREATE TABLE FILEMORGUE(TQUNID text(36) not null, PATH text(300) not null, DELETEFOLDERWHENEMPTY text(1) null, constraint PK_FILEMORGUE primary key (TQUNID));
+ CREATE TABLE FILES(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, PARENTENTITYID text(50) null, BODY text null, BODYCONTENTTYPE text(100) null, ISOBSOLETE text(1) null, FILENAME text(300) not null, VISIBLENAME text(300) not null, VERSIONSTRING text(300) not null, DOCUMENTHASH text(40) not null, ISFINAL text(1) null, DOCREFERENCEID text(50) not null, constraint PK_FILES primary key (ISSUEID, OBJECTID));
+ CREATE TABLE FOLDERS(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, CONTAINERNAME text(300) null, CONTAINERACLSETTINGS text null, constraint PK_FOLDERS primary key (ISSUEID, OBJECTID));
+ CREATE TABLE GLOBALSETTINGS(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, SINGULARPROJECTLABEL text(30) not null, PLURALPROJECTLABEL text(30) not null, PROJECTREQUIRED text(1) not null, CUSTOMPROJECTSALLOWED text(1) not null, ACTIONITEMSPECXML text null, PROJECTLISTXML text null, ENTERPRISEDATALABELS text null, ENTERPRISEDATATABXSL text null, constraint PK_GLOBALSETTINGS primary key (ISSUEID, OBJECTID));
+ CREATE TABLE GLOBALSTRINGPROPERTIES(ID int not null, VALUE text(300) not null, constraint PK_GLOBALSTRINGPROPERTIES primary key (ID));
+ CREATE TABLE IMQ(TQUNID text(36) not null, DATETIMEQUEUED text(25) not null, ISSUEID text(50) not null, KUBIBUILD text(30) not null, FAILCOUNT int not null, LASTRUN text(25) null, ENVELOPESTREAM blob not null, PAYLOADSTREAM blob not null, constraint PK_IMQ primary key (TQUNID));
+ CREATE TABLE INVITATIONNODES(INVITATIONID text(50) not null, RECIPIENTNODEID text(50) not null, DATECREATED text(25) not null, constraint PK_INVITATIONNODES primary key (INVITATIONID, RECIPIENTNODEID));
+ CREATE TABLE INVITATIONS (INVITATIONID text(50) not null, SENDERNODEID text(50) not null, RECIPIENTEMAILADDR text(200) not null, RECIPIENTUSERID text(50) null, RECIPIENTNODES text null, ISSUEID text(50) not null, ENVELOPE text not null, MESSAGEBLOB blob not null, INVITATIONSTATE int not null, TQUNID text(36) not null, DATECREATED text(25) not null);
+ CREATE TABLE ISSUES (CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, CONTAINERNAME text(300) null, CONTAINERACLSETTINGS text null, ISINITIALIZED text(1) null, BLINDINVITES text null, ISSYSTEMISSUE text(1) not null, ISSUETYPE int not null, ACTIVITYTYPEID text(50) null, ISINCOMPLETE text(1) not null, constraint PK_ISSUES primary key (ISSUEID, OBJECTID));
+ CREATE TABLE ISSUESETTINGS (CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, ISSUENAME text(300) not null, ISSUEACLSETTINGS text not null, ISSUEDUEDATE text(25) null, ISSUEPRIORITY int null, ISSUESTATUS int null, DESCRIPTION text null, PROJECTID text(100) null, PROJECTNAME text null, PROJECTNAMEISCUSTOM text(1) null, ISSYSTEMISSUE text(1) not null, ACTIONITEMREVNUM int not null, constraint PK_ISSUESETTINGS primary key (ISSUEID, OBJECTID));
+ CREATE TABLE KMTPMSG (MSGID integer not null, SENDERID text(50) null, RECIPIENTIDLIST text not null, ISSUEID text(50) null, MESSAGETYPE int not null, ENVELOPE text null, MESSAGEBLOB blob not null, RECEIVEDDATE text(25) not null, constraint PK_KMTPMSG primary key (MSGID));
+ CREATE TABLE KMTPNODEQ(NODEID text(50) not null, MSGID int not null, RECEIVEDDATE text(25) not null, SENDCOUNT int not null);
+ CREATE TABLE KMTPQ(MSGID integer not null, SENDERID text(50) null, RECIPIENTIDLIST text not null, ISSUEID text(50) null, MESSAGETYPE int not null, ENVELOPE text null, MESSAGEBLOB blob not null, constraint PK_KMTPQ primary key (MSGID));
+ CREATE TABLE LOGENTRIES(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, PARENTENTITYID text(50) null, BODY text null, BODYCONTENTTYPE text(100) null, ISOBSOLETE text(1) null, ACTIONTYPE int not null, ASSOCIATEDOBJECTIDS text null, OLDENTITIES text null, NEWENTITIES text null, OTHERENTITIES text null, constraint PK_LOGENTRIES primary key (ISSUEID, OBJECTID));
+ CREATE TABLE LSBI(TQUNID text(36) not null, ISSUEID text(50) not null, TABLEITEMID text(50) null, TABLENODEID text(50) null, TABLECMD int null, TABLECONTAINERID text(50) null, TABLESEQNO int null, DIRTYCONTENT text null, STUBBED text(1) null, ENTITYSTUBDATA text null, UPDATENUMBER int not null, constraint PK_LSBI primary key (TQUNID));
+ CREATE TABLE LSBN(TQUNID text(36) not null, ISSUEID text(50) not null, NODEID text(50) not null, STORESEQNO int not null, SYNCSEQNO int not null, LASTMSGDATE text(25) null, constraint PK_LSBN primary key (TQUNID));
+ CREATE TABLE MMQ(TQUNID text(36) not null, ISSUEID text(50) not null, TABLEREQUESTNODE text(50) null, MMQENTRYINDEX text(60) null, DIRECTION int null, NODEID text(50) null, TABLEFIRSTSEQNO int null, TABLELASTSEQNO int null, NEXTRESENDTIMEOUT text(25) null, TABLETIMEOUTMULTIPLIER int null, constraint PK_MMQ primary key (TQUNID));
+ CREATE TABLE NODEREG(NODEID text(50) not null, USERID text(50) null, CREATETIME text(25) not null, TQUNID text(36) not null);
+ CREATE TABLE NODES (NODEID text(50) not null, USERID text(50) null, NODESTATE int not null, NODECERT text null, KUBIVERSION int not null, KUBIBUILD text(30) not null, TQUNID text(36) not null, LASTBINDDATE text(25) null, LASTUNBINDDATE text(25) null, LASTBINDIP text(15) null, NUMBINDS int not null, NUMSENDS int not null, NUMPOLLS int not null, NUMRECVS int not null);
+ CREATE TABLE PARTICIPANTNODES(ISSUEID text(50) not null, OBJECTID text(50) not null, NODEID text(50) not null, USERID text(50) null, NODESTATE int not null, NODECERT text null, KUBIVERSION int not null, KUBIBUILD text(30) not null, TQUNID text(36) not null);
+ CREATE TABLE PARTICIPANTS(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, PARTICIPANTSTATE int not null, PARTICIPANTROLE int not null, PARTICIPANTTEAM int not null, ISREQUIREDMEMBER text(1) null, USERID text(50) null, ISAGENT text(1) null, NAME text(150) not null, EMAILADDRESS text(200) not null, ISEMAILONLY text(1) not null, INVITATION text null, ACCEPTRESENDCOUNT int null, ACCEPTRESENDTIMEOUT text(25) null, ACCEPTLASTSENTTONODEID text(50) null, constraint PK_PARTICIPANTS primary key (ISSUEID, OBJECTID));
+ CREATE TABLE PARTICIPANTSETTINGS(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, PARTICIPANTID text(50) not null, TASKPIMSYNC text(1) null, MOBILESUPPORT text(1) null, NOTIFYBYEMAIL text(1) null, MARKEDCRITICAL text(1) null, constraint PK_PARTICIPANTSETTINGS primary key (ISSUEID, OBJECTID));
+ CREATE TABLE PARTITIONS(PARTITIONID text(50) not null, NAME text(100) not null, LDAPDN text(300) not null, SERVERNODEID text(50) not null, TQUNID text(36) not null);
+ CREATE TABLE PROJECTS(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, NAME text(100) not null, ID text(100) null, constraint PK_PROJECTS primary key (ISSUEID, OBJECTID));
+ CREATE TABLE TASKCOMPLETIONS(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, PARENTENTITYID text(50) null, BODY text null, BODYCONTENTTYPE text(100) null, ISOBSOLETE text(1) null, TASKID text(50) not null, DISPOSITION int not null, STATUSID text(50) not null, SHORTNAME text(30) not null, LONGNAME text(200) not null, constraint PK_TASKCOMPLETIONS primary key (ISSUEID, OBJECTID));
+ CREATE TABLE TASKS(CLASSID int null, SEQNO int not null, LASTMODONNODEID text(50) not null, PREVMODONNODEID text(50) null, ISSUEID text(50) not null, OBJECTID text(50) not null, REVISIONNUM int not null, CONTAINERID text(50) not null, AUTHORID text(50) not null, CREATIONDATE text(25) null, LASTMODIFIEDDATE text(25) null, UPDATENUMBER int null, PREVREVISIONNUM int null, LASTCMD int null, LASTCMDACLVERSION int null, USERDEFINEDFIELD text(300) null, LASTMODIFIEDBYID text(50) null, PARENTENTITYID text(50) null, BODY text null, BODYCONTENTTYPE text(100) null, ISOBSOLETE text(1) null, DUETIME text(25) null, ASSIGNEDTO text(50) not null, TARGETOBJECTIDS text null, RESPONSEID text(50) not null, TYPEID text(50) not null, LABEL text(200) not null, INSTRUCTIONS text not null, ALLOWEDSTATUSES text not null, ISSERIALREVIEW text(1) null, DAYSTOREVIEW int null, REVIEWERIDS text(500) null, REVIEWTYPE int null, REVIEWGROUP text(300) null, constraint PK_TASKS primary key (ISSUEID, OBJECTID));
+ CREATE TABLE USERS (USERID text(50) not null, USERSID text(100) not null, ENTERPRISEUSER text(1) not null, USEREMAILADDRESS text(200) null, EMAILVALIDATED text(1) null, VALIDATIONCOOKIE text(50) null, CREATETIME text(25) not null, TQUNID text(36) not null, PARTITIONID text(50) null);
+ CREATE VIEW CRITICALISSUES as
+
+
+ select
+ USERID, ISSUEID, ISSUENAME, min(DATE1) DATE1
+ from (
+ select p.USERID USERID, p.ISSUEID ISSUEID, iset.ISSUENAME ISSUENAME, t.DUETIME DATE1
+ from PARTICIPANTS p
+ join TASKS t on t.ASSIGNEDTO = p.OBJECTID
+ join TASKCOMPLETIONS tc on tc.TASKID = t.OBJECTID
+ join ISSUESETTINGS iset on iset.ISSUEID = p.ISSUEID
+ where (t.ISOBSOLETE = 'n' or t.ISOBSOLETE is null)
+ and tc.DISPOSITION = 1
+ and iset.ISSUESTATUS = 1
+ union
+ select p.USERID USERID, p.ISSUEID ISSUEID, iset.ISSUENAME ISSUENAME, iset.ISSUEDUEDATE DATE1
+ from PARTICIPANTS p
+ join PARTICIPANTSETTINGS ps on ps.PARTICIPANTID = p.OBJECTID
+ join ISSUESETTINGS iset on iset.ISSUEID = p.ISSUEID
+ where ps.MARKEDCRITICAL = 'y'
+ and iset.ISSUESTATUS = 1
+ ) as CRITICALDATA
+ group by USERID, ISSUEID, ISSUENAME;
+ CREATE VIEW CURRENTFILES as
+
+
+ select
+ d.ISSUEID as ISSUEID,
+ d.REFERENCEDOCUMENTID as OBJECTID,
+ f.VISIBLENAME as VISIBLENAME
+ from
+ DOCREFERENCES d
+ join FILES f on f.OBJECTID = d.REFERENCEDOCUMENTID;
+ CREATE VIEW ISSUEDATA as
+
+
+ select
+ ISSUES.OBJECTID as ISSUEID,
+ ISSUES.CREATIONDATE as CREATIONDATE,
+ ISSUES.AUTHORID as AUTHORID,
+ ISSUES.LASTMODIFIEDDATE as LASTMODIFIEDDATE,
+ ISSUES.LASTMODIFIEDBYID as LASTMODIFIEDBYID,
+ ISSUESETTINGS.ISSUENAME as ISSUENAME,
+ ISSUES.ISINITIALIZED as ISINITIALIZED,
+ ISSUES.ISSYSTEMISSUE as ISSYSTEMISSUE,
+ ISSUES.ISSUETYPE as ISSUETYPE,
+ ISSUES.ISINCOMPLETE as ISINCOMPLETE,
+ ISSUESETTINGS.REVISIONNUM as ISSUESETTINGS_REVISIONNUM,
+ ISSUESETTINGS.LASTMODIFIEDDATE as ISSUESETTINGS_LASTMODIFIEDDATE,
+ ISSUESETTINGS.LASTMODIFIEDBYID as ISSUESETTINGS_LASTMODIFIEDBYID,
+ ISSUESETTINGS.ISSUEDUEDATE as ISSUEDUEDATE,
+ ISSUESETTINGS.ISSUEPRIORITY as ISSUEPRIORITY,
+ ISSUESETTINGS.ISSUESTATUS as ISSUESTATUS,
+ ISSUESETTINGS.DESCRIPTION as DESCRIPTION,
+ ISSUESETTINGS.PROJECTID as PROJECTID,
+ ISSUESETTINGS.PROJECTNAME as PROJECTNAME,
+ ISSUESETTINGS.PROJECTNAMEISCUSTOM as PROJECTNAMEISCUSTOM,
+ ENTERPRISEDATA.REVISIONNUM as ENTERPRISEDATA_REVISIONNUM,
+ ENTERPRISEDATA.CREATIONDATE as ENTERPRISEDATA_CREATIONDATE,
+ ENTERPRISEDATA.AUTHORID as ENTERPRISEDATA_AUTHORID,
+ ENTERPRISEDATA.LASTMODIFIEDDATE as ENTERPRISEDATA_LASTMODIFIEDDATE,
+ ENTERPRISEDATA.LASTMODIFIEDBYID as ENTERPRISEDATA_LASTMODIFIEDBYID,
+ ENTERPRISEDATA.DATE1 as DATE1,
+ ENTERPRISEDATA.DATE2 as DATE2,
+ ENTERPRISEDATA.DATE3 as DATE3,
+ ENTERPRISEDATA.DATE4 as DATE4,
+ ENTERPRISEDATA.DATE5 as DATE5,
+ ENTERPRISEDATA.DATE6 as DATE6,
+ ENTERPRISEDATA.DATE7 as DATE7,
+ ENTERPRISEDATA.DATE8 as DATE8,
+ ENTERPRISEDATA.DATE9 as DATE9,
+ ENTERPRISEDATA.DATE10 as DATE10,
+ ENTERPRISEDATA.VALUE1 as VALUE1,
+ ENTERPRISEDATA.VALUE2 as VALUE2,
+ ENTERPRISEDATA.VALUE3 as VALUE3,
+ ENTERPRISEDATA.VALUE4 as VALUE4,
+ ENTERPRISEDATA.VALUE5 as VALUE5,
+ ENTERPRISEDATA.VALUE6 as VALUE6,
+ ENTERPRISEDATA.VALUE7 as VALUE7,
+ ENTERPRISEDATA.VALUE8 as VALUE8,
+ ENTERPRISEDATA.VALUE9 as VALUE9,
+ ENTERPRISEDATA.VALUE10 as VALUE10,
+ ENTERPRISEDATA.VALUE11 as VALUE11,
+ ENTERPRISEDATA.VALUE12 as VALUE12,
+ ENTERPRISEDATA.VALUE13 as VALUE13,
+ ENTERPRISEDATA.VALUE14 as VALUE14,
+ ENTERPRISEDATA.VALUE15 as VALUE15,
+ ENTERPRISEDATA.VALUE16 as VALUE16,
+ ENTERPRISEDATA.VALUE17 as VALUE17,
+ ENTERPRISEDATA.VALUE18 as VALUE18,
+ ENTERPRISEDATA.VALUE19 as VALUE19,
+ ENTERPRISEDATA.VALUE20 as VALUE20,
+ ENTERPRISEDATA.STRING1 as STRING1,
+ ENTERPRISEDATA.STRING2 as STRING2,
+ ENTERPRISEDATA.STRING3 as STRING3,
+ ENTERPRISEDATA.STRING4 as STRING4,
+ ENTERPRISEDATA.STRING5 as STRING5,
+ ENTERPRISEDATA.STRING6 as STRING6,
+ ENTERPRISEDATA.STRING7 as STRING7,
+ ENTERPRISEDATA.STRING8 as STRING8,
+ ENTERPRISEDATA.STRING9 as STRING9,
+ ENTERPRISEDATA.STRING10 as STRING10,
+ ENTERPRISEDATA.LONGSTRING1 as LONGSTRING1,
+ ENTERPRISEDATA.LONGSTRING2 as LONGSTRING2,
+ ENTERPRISEDATA.LONGSTRING3 as LONGSTRING3,
+ ENTERPRISEDATA.LONGSTRING4 as LONGSTRING4,
+ ENTERPRISEDATA.LONGSTRING5 as LONGSTRING5,
+ ENTERPRISEDATA.LONGSTRING6 as LONGSTRING6,
+ ENTERPRISEDATA.LONGSTRING7 as LONGSTRING7,
+ ENTERPRISEDATA.LONGSTRING8 as LONGSTRING8,
+ ENTERPRISEDATA.LONGSTRING9 as LONGSTRING9,
+ ENTERPRISEDATA.LONGSTRING10 as LONGSTRING10
+ from
+ ISSUES
+ join ISSUESETTINGS on ISSUES.OBJECTID = ISSUESETTINGS.ISSUEID
+ left outer join ENTERPRISEDATA on ISSUES.OBJECTID = ENTERPRISEDATA.ISSUEID;
+ CREATE VIEW ITEMS as
+
+ select 'FILES' as TABLENAME, CLASSID, SEQNO, LASTMODONNODEID, PREVMODONNODEID, ISSUEID, OBJECTID, REVISIONNUM, CONTAINERID, AUTHORID, CREATIONDATE, LASTMODIFIEDDATE, UPDATENUMBER, PREVREVISIONNUM, LASTCMD, LASTCMDACLVERSION, USERDEFINEDFIELD, LASTMODIFIEDBYID, PARENTENTITYID, BODY, BODYCONTENTTYPE, ISOBSOLETE, FILENAME, VISIBLENAME, VERSIONSTRING, DOCUMENTHASH, ISFINAL, DOCREFERENCEID, NULL as ACTIONTYPE, NULL as ASSOCIATEDOBJECTIDS, NULL as OLDENTITIES, NULL as NEWENTITIES, NULL as OTHERENTITIES, NULL as TQUNID, NULL as TABLEITEMID, NULL as TABLENODEID, NULL as TABLECMD, NULL as TABLECONTAINERID, NULL as TABLESEQNO, NULL as DIRTYCONTENT, NULL as STUBBED, NULL as ENTITYSTUBDATA, NULL as PARTICIPANTSTATE, NULL as PARTICIPANTROLE, NULL as PARTICIPANTTEAM, NULL as ISREQUIREDMEMBER, NULL as USERID, NULL as ISAGENT, NULL as NAME, NULL as EMAILADDRESS, NULL as ISEMAILONLY, NULL as INVITATION, NULL as ACCEPTRESENDCOUNT, NULL as ACCEPTRESENDTIMEOUT, NULL as ACCEPTLASTSENTTONODEID, NULL as TASKID, NULL as DISPOSITION, NULL as STATUSID, NULL as SHORTNAME, NULL as LONGNAME, NULL as DUETIME, NULL as ASSIGNEDTO, NULL as TARGETOBJECTIDS, NULL as RESPONSEID, NULL as TYPEID, NULL as LABEL, NULL as INSTRUCTIONS, NULL as ALLOWEDSTATUSES, NULL as ISSERIALREVIEW, NULL as DAYSTOREVIEW, NULL as REVIEWERIDS, NULL as REVIEWTYPE, NULL as REVIEWGROUP from FILES
+ union all
+ select 'LOGENTRIES' as TABLENAME, CLASSID, SEQNO, LASTMODONNODEID, PREVMODONNODEID, ISSUEID, OBJECTID, REVISIONNUM, CONTAINERID, AUTHORID, CREATIONDATE, LASTMODIFIEDDATE, UPDATENUMBER, PREVREVISIONNUM, LASTCMD, LASTCMDACLVERSION, USERDEFINEDFIELD, LASTMODIFIEDBYID, PARENTENTITYID, BODY, BODYCONTENTTYPE, ISOBSOLETE, NULL as FILENAME, NULL as VISIBLENAME, NULL as VERSIONSTRING, NULL as DOCUMENTHASH, NULL as ISFINAL, NULL as DOCREFERENCEID, ACTIONTYPE, ASSOCIATEDOBJECTIDS, OLDENTITIES, NEWENTITIES, OTHERENTITIES, NULL as TQUNID, NULL as TABLEITEMID, NULL as TABLENODEID, NULL as TABLECMD, NULL as TABLECONTAINERID, NULL as TABLESEQNO, NULL as DIRTYCONTENT, NULL as STUBBED, NULL as ENTITYSTUBDATA, NULL as PARTICIPANTSTATE, NULL as PARTICIPANTROLE, NULL as PARTICIPANTTEAM, NULL as ISREQUIREDMEMBER, NULL as USERID, NULL as ISAGENT, NULL as NAME, NULL as EMAILADDRESS, NULL as ISEMAILONLY, NULL as INVITATION, NULL as ACCEPTRESENDCOUNT, NULL as ACCEPTRESENDTIMEOUT, NULL as ACCEPTLASTSENTTONODEID, NULL as TASKID, NULL as DISPOSITION, NULL as STATUSID, NULL as SHORTNAME, NULL as LONGNAME, NULL as DUETIME, NULL as ASSIGNEDTO, NULL as TARGETOBJECTIDS, NULL as RESPONSEID, NULL as TYPEID, NULL as LABEL, NULL as INSTRUCTIONS, NULL as ALLOWEDSTATUSES, NULL as ISSERIALREVIEW, NULL as DAYSTOREVIEW, NULL as REVIEWERIDS, NULL as REVIEWTYPE, NULL as REVIEWGROUP from LOGENTRIES
+ union all
+ select 'LSBI' as TABLENAME, NULL as CLASSID, NULL as SEQNO, NULL as LASTMODONNODEID, NULL as PREVMODONNODEID, ISSUEID, NULL as OBJECTID, NULL as REVISIONNUM, NULL as CONTAINERID, NULL as AUTHORID, NULL as CREATIONDATE, NULL as LASTMODIFIEDDATE, UPDATENUMBER, NULL as PREVREVISIONNUM, NULL as LASTCMD, NULL as LASTCMDACLVERSION, NULL as USERDEFINEDFIELD, NULL as LASTMODIFIEDBYID, NULL as PARENTENTITYID, NULL as BODY, NULL as BODYCONTENTTYPE, NULL as ISOBSOLETE, NULL as FILENAME, NULL as VISIBLENAME, NULL as VERSIONSTRING, NULL as DOCUMENTHASH, NULL as ISFINAL, NULL as DOCREFERENCEID, NULL as ACTIONTYPE, NULL as ASSOCIATEDOBJECTIDS, NULL as OLDENTITIES, NULL as NEWENTITIES, NULL as OTHERENTITIES, TQUNID, TABLEITEMID, TABLENODEID, TABLECMD, TABLECONTAINERID, TABLESEQNO, DIRTYCONTENT, STUBBED, ENTITYSTUBDATA, NULL as PARTICIPANTSTATE, NULL as PARTICIPANTROLE, NULL as PARTICIPANTTEAM, NULL as ISREQUIREDMEMBER, NULL as USERID, NULL as ISAGENT, NULL as NAME, NULL as EMAILADDRESS, NULL as ISEMAILONLY, NULL as INVITATION, NULL as ACCEPTRESENDCOUNT, NULL as ACCEPTRESENDTIMEOUT, NULL as ACCEPTLASTSENTTONODEID, NULL as TASKID, NULL as DISPOSITION, NULL as STATUSID, NULL as SHORTNAME, NULL as LONGNAME, NULL as DUETIME, NULL as ASSIGNEDTO, NULL as TARGETOBJECTIDS, NULL as RESPONSEID, NULL as TYPEID, NULL as LABEL, NULL as INSTRUCTIONS, NULL as ALLOWEDSTATUSES, NULL as ISSERIALREVIEW, NULL as DAYSTOREVIEW, NULL as REVIEWERIDS, NULL as REVIEWTYPE, NULL as REVIEWGROUP from LSBI where TABLECMD=3
+ union all
+ select 'PARTICIPANTS' as TABLENAME, CLASSID, SEQNO, LASTMODONNODEID, PREVMODONNODEID, ISSUEID, OBJECTID, REVISIONNUM, CONTAINERID, AUTHORID, CREATIONDATE, LASTMODIFIEDDATE, UPDATENUMBER, PREVREVISIONNUM, LASTCMD, LASTCMDACLVERSION, USERDEFINEDFIELD, LASTMODIFIEDBYID, NULL as PARENTENTITYID, NULL as BODY, NULL as BODYCONTENTTYPE, NULL as ISOBSOLETE, NULL as FILENAME, NULL as VISIBLENAME, NULL as VERSIONSTRING, NULL as DOCUMENTHASH, NULL as ISFINAL, NULL as DOCREFERENCEID, NULL as ACTIONTYPE, NULL as ASSOCIATEDOBJECTIDS, NULL as OLDENTITIES, NULL as NEWENTITIES, NULL as OTHERENTITIES, NULL as TQUNID, NULL as TABLEITEMID, NULL as TABLENODEID, NULL as TABLECMD, NULL as TABLECONTAINERID, NULL as TABLESEQNO, NULL as DIRTYCONTENT, NULL as STUBBED, NULL as ENTITYSTUBDATA, PARTICIPANTSTATE, PARTICIPANTROLE, PARTICIPANTTEAM, ISREQUIREDMEMBER, USERID, ISAGENT, NAME, EMAILADDRESS, ISEMAILONLY, INVITATION, ACCEPTRESENDCOUNT, ACCEPTRESENDTIMEOUT, ACCEPTLASTSENTTONODEID, NULL as TASKID, NULL as DISPOSITION, NULL as STATUSID, NULL as SHORTNAME, NULL as LONGNAME, NULL as DUETIME, NULL as ASSIGNEDTO, NULL as TARGETOBJECTIDS, NULL as RESPONSEID, NULL as TYPEID, NULL as LABEL, NULL as INSTRUCTIONS, NULL as ALLOWEDSTATUSES, NULL as ISSERIALREVIEW, NULL as DAYSTOREVIEW, NULL as REVIEWERIDS, NULL as REVIEWTYPE, NULL as REVIEWGROUP from PARTICIPANTS
+ union all
+ select 'TASKCOMPLETIONS' as TABLENAME, CLASSID, SEQNO, LASTMODONNODEID, PREVMODONNODEID, ISSUEID, OBJECTID, REVISIONNUM, CONTAINERID, AUTHORID, CREATIONDATE, LASTMODIFIEDDATE, UPDATENUMBER, PREVREVISIONNUM, LASTCMD, LASTCMDACLVERSION, USERDEFINEDFIELD, LASTMODIFIEDBYID, PARENTENTITYID, BODY, BODYCONTENTTYPE, ISOBSOLETE, NULL as FILENAME, NULL as VISIBLENAME, NULL as VERSIONSTRING, NULL as DOCUMENTHASH, NULL as ISFINAL, NULL as DOCREFERENCEID, NULL as ACTIONTYPE, NULL as ASSOCIATEDOBJECTIDS, NULL as OLDENTITIES, NULL as NEWENTITIES, NULL as OTHERENTITIES, NULL as TQUNID, NULL as TABLEITEMID, NULL as TABLENODEID, NULL as TABLECMD, NULL as TABLECONTAINERID, NULL as TABLESEQNO, NULL as DIRTYCONTENT, NULL as STUBBED, NULL as ENTITYSTUBDATA, NULL as PARTICIPANTSTATE, NULL as PARTICIPANTROLE, NULL as PARTICIPANTTEAM, NULL as ISREQUIREDMEMBER, NULL as USERID, NULL as ISAGENT, NULL as NAME, NULL as EMAILADDRESS, NULL as ISEMAILONLY, NULL as INVITATION, NULL as ACCEPTRESENDCOUNT, NULL as ACCEPTRESENDTIMEOUT, NULL as ACCEPTLASTSENTTONODEID, TASKID, DISPOSITION, STATUSID, SHORTNAME, LONGNAME, NULL as DUETIME, NULL as ASSIGNEDTO, NULL as TARGETOBJECTIDS, NULL as RESPONSEID, NULL as TYPEID, NULL as LABEL, NULL as INSTRUCTIONS, NULL as ALLOWEDSTATUSES, NULL as ISSERIALREVIEW, NULL as DAYSTOREVIEW, NULL as REVIEWERIDS, NULL as REVIEWTYPE, NULL as REVIEWGROUP from TASKCOMPLETIONS
+ union all
+ select 'TASKS' as TABLENAME, CLASSID, SEQNO, LASTMODONNODEID, PREVMODONNODEID, ISSUEID, OBJECTID, REVISIONNUM, CONTAINERID, AUTHORID, CREATIONDATE, LASTMODIFIEDDATE, UPDATENUMBER, PREVREVISIONNUM, LASTCMD, LASTCMDACLVERSION, USERDEFINEDFIELD, LASTMODIFIEDBYID, PARENTENTITYID, BODY, BODYCONTENTTYPE, ISOBSOLETE, NULL as FILENAME, NULL as VISIBLENAME, NULL as VERSIONSTRING, NULL as DOCUMENTHASH, NULL as ISFINAL, NULL as DOCREFERENCEID, NULL as ACTIONTYPE, NULL as ASSOCIATEDOBJECTIDS, NULL as OLDENTITIES, NULL as NEWENTITIES, NULL as OTHERENTITIES, NULL as TQUNID, NULL as TABLEITEMID, NULL as TABLENODEID, NULL as TABLECMD, NULL as TABLECONTAINERID, NULL as TABLESEQNO, NULL as DIRTYCONTENT, NULL as STUBBED, NULL as ENTITYSTUBDATA, NULL as PARTICIPANTSTATE, NULL as PARTICIPANTROLE, NULL as PARTICIPANTTEAM, NULL as ISREQUIREDMEMBER, NULL as USERID, NULL as ISAGENT, NULL as NAME, NULL as EMAILADDRESS, NULL as ISEMAILONLY, NULL as INVITATION, NULL as ACCEPTRESENDCOUNT, NULL as ACCEPTRESENDTIMEOUT, NULL as ACCEPTLASTSENTTONODEID, NULL as TASKID, NULL as DISPOSITION, NULL as STATUSID, NULL as SHORTNAME, NULL as LONGNAME, DUETIME, ASSIGNEDTO, TARGETOBJECTIDS, RESPONSEID, TYPEID, LABEL, INSTRUCTIONS, ALLOWEDSTATUSES, ISSERIALREVIEW, DAYSTOREVIEW, REVIEWERIDS, REVIEWTYPE, REVIEWGROUP from TASKS;
+ CREATE VIEW TASKINFO as
+
+
+ select
+ t.ISSUEID as ISSUEID,
+ t.OBJECTID as OBJECTID,
+ t.ASSIGNEDTO as ASSIGNEDTO,
+ t.TARGETOBJECTIDS as TARGETOBJECTIDS,
+ t.DUETIME as DUETIME,
+ t.ISOBSOLETE as ISOBSOLETE,
+ tc.DISPOSITION as DISPOSITION
+ from
+ TASKS t
+ join TASKCOMPLETIONS tc on tc.TASKID = t.OBJECTID;
+ CREATE INDEX DQ_ISSUEID_DEPENDSID on DQ (ISSUEID, DEPENDSID);
+ CREATE INDEX EMAILQ_TIMEQUEUED on EMAILQ (TIMEQUEUED);
+ CREATE INDEX FOLDERS_CONTAINERID_ISSUEID on FOLDERS (CONTAINERID, ISSUEID);
+ CREATE INDEX IMQ_DATETIMEQUEUED on IMQ (DATETIMEQUEUED);
+ CREATE INDEX INVITATIONS_RECIPIENTUSERID_INVITATIONID on INVITATIONS (RECIPIENTUSERID, INVITATIONID);
+ CREATE INDEX INVITATIONS_TQUNID on INVITATIONS (TQUNID);
+ CREATE INDEX ISSUESETTINGS_CONTAINERID on ISSUESETTINGS (CONTAINERID);
+ CREATE INDEX KMTPMSG_RECEIVEDDATE on KMTPMSG (RECEIVEDDATE desc);
+ CREATE INDEX KMTPNODEQ_MSGID on KMTPNODEQ (MSGID);
+ CREATE INDEX KMTPNODEQ_NODEID_MSGID on KMTPNODEQ (NODEID, MSGID);
+ CREATE INDEX KMTPNODEQ_RECEIVEDDATE on KMTPNODEQ (RECEIVEDDATE desc);
+ CREATE INDEX LSBI_ISSUEID_TABLEITEMID on LSBI (ISSUEID, TABLEITEMID);
+ CREATE INDEX LSBN_ISSUEID_NODEID on LSBN (ISSUEID, NODEID);
+ CREATE INDEX MMQ_ISSUEID_MMQENTRYINDEX on MMQ (ISSUEID, MMQENTRYINDEX);
+ CREATE INDEX NODEREG_NODEID_USERID on NODEREG (NODEID, USERID);
+ CREATE INDEX NODEREG_TQUNID on NODEREG (TQUNID);
+ CREATE INDEX NODEREG_USERID_NODEID on NODEREG (USERID, NODEID);
+ CREATE INDEX NODES_NODEID on NODES (NODEID);
+ CREATE INDEX NODES_TQUNID on NODES (TQUNID);
+ CREATE INDEX PARTICIPANTNODES_ISSUEID_OBJECTID_NODEID on PARTICIPANTNODES (ISSUEID, OBJECTID, NODEID);
+ CREATE INDEX PARTICIPANTNODES_TQUNID on PARTICIPANTNODES (TQUNID);
+ CREATE INDEX PARTICIPANTSETTINGS_PARTICIPANTID on PARTICIPANTSETTINGS (PARTICIPANTID);
+ CREATE INDEX PARTITIONS_LDAPDN on PARTITIONS (LDAPDN);
+ CREATE INDEX PARTITIONS_PARTITIONID_SERVERNODEID on PARTITIONS (PARTITIONID, SERVERNODEID);
+ CREATE INDEX PARTITIONS_SERVERNODEID_PARTITIONID on PARTITIONS (SERVERNODEID, PARTITIONID);
+ CREATE INDEX PARTITIONS_TQUNID on PARTITIONS (TQUNID);
+ CREATE INDEX TASKCOMPLETIONS_TASKID on TASKCOMPLETIONS (TASKID);
+ CREATE INDEX TASKS_ASSIGNEDTO on TASKS (ASSIGNEDTO);
+ CREATE INDEX USERS_PARTITIONID_USERID on USERS (PARTITIONID, USERID);
+ CREATE INDEX USERS_TQUNID on USERS (TQUNID);
+ CREATE INDEX USERS_USERID_PARTITIONID on USERS (USERID, PARTITIONID);
+ CREATE INDEX USERS_USERSID_USERID on USERS (USERSID, USERID);
+ COMMIT;
+ }
+} {}
+
+# Given the schema above, the following query would cause an assertion fault
+# due to an uninitialized field in a Select structure.
+#
+do_test tkt1449-1.2 {
+ execsql {
+ select NEWENTITIES from ITEMS where ((ISSUEID = 'x') and (OBJECTID = 'y'))
+ }
+} {}
+
+finish_test
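
Editorial note: the tkt1449 schema above is large, but the query pattern in
tkt1449-1.2 reduces to selecting one column from a view defined over a
multi-branch UNION ALL, filtered on other columns of the view. A minimal
sketch of that shape (table and view names here are invented for
illustration and are not part of the test; this sketch is not claimed to
reproduce the original fault):

    CREATE TABLE a(issueid, objectid, x);
    CREATE TABLE b(issueid, objectid, y);
    CREATE VIEW items AS
      SELECT issueid, objectid, x, NULL AS y FROM a
      UNION ALL
      SELECT issueid, objectid, NULL AS x, y FROM b;
    -- same query shape as tkt1449-1.2:
    SELECT y FROM items WHERE issueid = 'x' AND objectid = 'y';
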
Added: freeswitch/trunk/libs/sqlite/test/tkt1473.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1473.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,728 @@
+# 2005 September 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1473 has been
+# fixed.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !compound {
+ finish_test
+ return
+}
+
+do_test tkt1473-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(3,4);
+ SELECT * FROM t1
+ }
+} {1 2 3 4}
+
+do_test tkt1473-1.2 {
+ execsql {
+ SELECT 1 FROM t1 WHERE a=1 UNION ALL SELECT 2 FROM t1 WHERE b=0
+ }
+} {1}
+do_test tkt1473-1.3 {
+ execsql {
+ SELECT 1 FROM t1 WHERE a=1 UNION SELECT 2 FROM t1 WHERE b=0
+ }
+} {1}
+do_test tkt1473-1.4 {
+ execsql {
+ SELECT 1 FROM t1 WHERE a=1 UNION ALL SELECT 2 FROM t1 WHERE b=4
+ }
+} {1 2}
+do_test tkt1473-1.5 {
+ execsql {
+ SELECT 1 FROM t1 WHERE a=1 UNION SELECT 2 FROM t1 WHERE b=4
+ }
+} {1 2}
+do_test tkt1473-1.6 {
+ execsql {
+ SELECT 1 FROM t1 WHERE a=0 UNION ALL SELECT 2 FROM t1 WHERE b=4
+ }
+} {2}
+do_test tkt1473-1.7 {
+ execsql {
+ SELECT 1 FROM t1 WHERE a=0 UNION SELECT 2 FROM t1 WHERE b=4
+ }
+} {2}
+do_test tkt1473-1.8 {
+ execsql {
+ SELECT 1 FROM t1 WHERE a=0 UNION ALL SELECT 2 FROM t1 WHERE b=0
+ }
+} {}
+do_test tkt1473-1.9 {
+ execsql {
+ SELECT 1 FROM t1 WHERE a=0 UNION SELECT 2 FROM t1 WHERE b=0
+ }
+} {}
+
+# Everything from this point on depends on sub-queries. So skip it
+# if sub-queries are not available.
+ifcapable !subquery {
+ finish_test
+ return
+}
+
+do_test tkt1473-2.2 {
+ execsql {
+ SELECT (SELECT 1 FROM t1 WHERE a=1 UNION ALL SELECT 2 FROM t1 WHERE b=0)
+ }
+} {1}
+do_test tkt1473-2.3 {
+ execsql {
+ SELECT (SELECT 1 FROM t1 WHERE a=1 UNION SELECT 2 FROM t1 WHERE b=0)
+ }
+} {1}
+do_test tkt1473-2.4 {
+ execsql {
+ SELECT (SELECT 1 FROM t1 WHERE a=1 UNION ALL SELECT 2 FROM t1 WHERE b=4)
+ }
+} {1}
+do_test tkt1473-2.5 {
+ execsql {
+ SELECT (SELECT 1 FROM t1 WHERE a=1 UNION SELECT 2 FROM t1 WHERE b=4)
+ }
+} {1}
+do_test tkt1473-2.6 {
+ execsql {
+ SELECT (SELECT 1 FROM t1 WHERE a=0 UNION ALL SELECT 2 FROM t1 WHERE b=4)
+ }
+} {2}
+do_test tkt1473-2.7 {
+ execsql {
+ SELECT (SELECT 1 FROM t1 WHERE a=0 UNION SELECT 2 FROM t1 WHERE b=4)
+ }
+} {2}
+do_test tkt1473-2.8 {
+ execsql {
+ SELECT (SELECT 1 FROM t1 WHERE a=0 UNION ALL SELECT 2 FROM t1 WHERE b=0)
+ }
+} {{}}
+do_test tkt1473-2.9 {
+ execsql {
+ SELECT (SELECT 1 FROM t1 WHERE a=0 UNION SELECT 2 FROM t1 WHERE b=0)
+ }
+} {{}}
+
+do_test tkt1473-3.2 {
+ execsql {
+ SELECT EXISTS
+ (SELECT 1 FROM t1 WHERE a=1 UNION ALL SELECT 2 FROM t1 WHERE b=0)
+ }
+} {1}
+do_test tkt1473-3.3 {
+ execsql {
+ SELECT EXISTS
+ (SELECT 1 FROM t1 WHERE a=1 UNION SELECT 2 FROM t1 WHERE b=0)
+ }
+} {1}
+do_test tkt1473-3.4 {
+ execsql {
+ SELECT EXISTS
+ (SELECT 1 FROM t1 WHERE a=1 UNION ALL SELECT 2 FROM t1 WHERE b=4)
+ }
+} {1}
+do_test tkt1473-3.5 {
+ execsql {
+ SELECT EXISTS
+ (SELECT 1 FROM t1 WHERE a=1 UNION SELECT 2 FROM t1 WHERE b=4)
+ }
+} {1}
+do_test tkt1473-3.6 {
+ execsql {
+ SELECT EXISTS
+ (SELECT 1 FROM t1 WHERE a=0 UNION ALL SELECT 2 FROM t1 WHERE b=4)
+ }
+} {1}
+do_test tkt1473-3.7 {
+ execsql {
+ SELECT EXISTS
+ (SELECT 1 FROM t1 WHERE a=0 UNION SELECT 2 FROM t1 WHERE b=4)
+ }
+} {1}
+do_test tkt1473-3.8 {
+ execsql {
+ SELECT EXISTS
+ (SELECT 1 FROM t1 WHERE a=0 UNION ALL SELECT 2 FROM t1 WHERE b=0)
+ }
+} {0}
+do_test tkt1473-3.9 {
+ execsql {
+ SELECT EXISTS
+ (SELECT 1 FROM t1 WHERE a=0 UNION SELECT 2 FROM t1 WHERE b=0)
+ }
+} {0}
+
+do_test tkt1473-4.1 {
+ execsql {
+ CREATE TABLE t2(x,y);
+ INSERT INTO t2 VALUES(1,2);
+ INSERT INTO t2 SELECT x+2, y+2 FROM t2;
+ INSERT INTO t2 SELECT x+4, y+4 FROM t2;
+ INSERT INTO t2 SELECT x+8, y+8 FROM t2;
+ INSERT INTO t2 SELECT x+16, y+16 FROM t2;
+ INSERT INTO t2 SELECT x+32, y+32 FROM t2;
+ INSERT INTO t2 SELECT x+64, y+64 FROM t2;
+ SELECT count(*), sum(x), sum(y) FROM t2;
+ }
+} {64 4096 4160}
+do_test tkt1473-4.2 {
+ execsql {
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=3
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=2
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=4
+ }
+} {2 4 8 10}
+do_test tkt1473-4.3 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=3
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=2
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {2}
+do_test tkt1473-4.4 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=3
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=2
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {4}
+do_test tkt1473-4.5 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=2
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=-4
+ )
+ }
+} {8}
+do_test tkt1473-4.6 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=-2
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=-3
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {10}
+do_test tkt1473-4.7 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=-2
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=-3
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=-4
+ )
+ }
+} {{}}
+
+do_test tkt1473-5.3 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=3
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=2
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {1}
+do_test tkt1473-5.4 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=3
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=2
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {1}
+
+do_test tkt1473-5.5 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=2
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=-4
+ )
+ }
+} {1}
+do_test tkt1473-5.6 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=-2
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=-3
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {1}
+do_test tkt1473-5.7 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION ALL
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION ALL
+ SELECT 3 FROM t2 WHERE x=2
+ UNION ALL
+ SELECT 4 FROM t2 WHERE x=-2
+ UNION ALL
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION ALL
+ SELECT 7 FROM t2 WHERE y=1
+ UNION ALL
+ SELECT 8 FROM t2 WHERE y=-3
+ UNION ALL
+ SELECT 9 FROM t2 WHERE y=3
+ UNION ALL
+ SELECT 10 FROM t2 WHERE y=-4
+ )
+ }
+} {0}
+
+do_test tkt1473-6.3 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION
+ SELECT 2 FROM t2 WHERE x=1
+ UNION
+ SELECT 3 FROM t2 WHERE x=2
+ UNION
+ SELECT 4 FROM t2 WHERE x=3
+ UNION
+ SELECT 5 FROM t2 WHERE x=4
+ UNION
+ SELECT 6 FROM t2 WHERE y=0
+ UNION
+ SELECT 7 FROM t2 WHERE y=1
+ UNION
+ SELECT 8 FROM t2 WHERE y=2
+ UNION
+ SELECT 9 FROM t2 WHERE y=3
+ UNION
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {1}
+do_test tkt1473-6.4 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION
+ SELECT 3 FROM t2 WHERE x=2
+ UNION
+ SELECT 4 FROM t2 WHERE x=3
+ UNION
+ SELECT 5 FROM t2 WHERE x=4
+ UNION
+ SELECT 6 FROM t2 WHERE y=0
+ UNION
+ SELECT 7 FROM t2 WHERE y=1
+ UNION
+ SELECT 8 FROM t2 WHERE y=2
+ UNION
+ SELECT 9 FROM t2 WHERE y=3
+ UNION
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {1}
+
+do_test tkt1473-6.5 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION
+ SELECT 3 FROM t2 WHERE x=2
+ UNION
+ SELECT 4 FROM t2 WHERE x=-1
+ UNION
+ SELECT 5 FROM t2 WHERE x=4
+ UNION
+ SELECT 6 FROM t2 WHERE y=0
+ UNION
+ SELECT 7 FROM t2 WHERE y=1
+ UNION
+ SELECT 8 FROM t2 WHERE y=2
+ UNION
+ SELECT 9 FROM t2 WHERE y=3
+ UNION
+ SELECT 10 FROM t2 WHERE y=-4
+ )
+ }
+} {1}
+do_test tkt1473-6.6 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION
+ SELECT 3 FROM t2 WHERE x=2
+ UNION
+ SELECT 4 FROM t2 WHERE x=-2
+ UNION
+ SELECT 5 FROM t2 WHERE x=4
+ UNION
+ SELECT 6 FROM t2 WHERE y=0
+ UNION
+ SELECT 7 FROM t2 WHERE y=1
+ UNION
+ SELECT 8 FROM t2 WHERE y=-3
+ UNION
+ SELECT 9 FROM t2 WHERE y=3
+ UNION
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {1}
+do_test tkt1473-6.7 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION
+ SELECT 3 FROM t2 WHERE x=2
+ UNION
+ SELECT 4 FROM t2 WHERE x=-2
+ UNION
+ SELECT 5 FROM t2 WHERE x=4
+ UNION
+ SELECT 6 FROM t2 WHERE y=0
+ UNION
+ SELECT 7 FROM t2 WHERE y=1
+ UNION
+ SELECT 8 FROM t2 WHERE y=-3
+ UNION
+ SELECT 9 FROM t2 WHERE y=3
+ UNION
+ SELECT 10 FROM t2 WHERE y=-4
+ )
+ }
+} {0}
+do_test tkt1473-6.8 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION
+ SELECT 3 FROM t2 WHERE x=2
+ UNION
+ SELECT 4 FROM t2 WHERE x=-2
+ UNION
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION
+ SELECT 7 FROM t2 WHERE y=1
+ UNION
+ SELECT 8 FROM t2 WHERE y=-3
+ UNION
+ SELECT 9 FROM t2 WHERE y=3
+ UNION
+ SELECT 10 FROM t2 WHERE y=4
+ )
+ }
+} {1}
+do_test tkt1473-6.9 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0
+ UNION
+ SELECT 2 FROM t2 WHERE x=-1
+ UNION
+ SELECT 3 FROM t2 WHERE x=2
+ UNION
+ SELECT 4 FROM t2 WHERE x=-2
+ UNION
+ SELECT 5 FROM t2 WHERE x=4
+ UNION ALL
+ SELECT 6 FROM t2 WHERE y=0
+ UNION
+ SELECT 7 FROM t2 WHERE y=1
+ UNION
+ SELECT 8 FROM t2 WHERE y=-3
+ UNION
+ SELECT 9 FROM t2 WHERE y=3
+ UNION
+ SELECT 10 FROM t2 WHERE y=-4
+ )
+ }
+} {0}
+
+do_test tkt1473-7.1 {
+ execsql {
+ SELECT 1 FROM t2 WHERE x=1 EXCEPT SELECT 2 FROM t2 WHERE y=2
+ }
+} {1}
+do_test tkt1473-7.2 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=1 EXCEPT SELECT 2 FROM t2 WHERE y=2
+ )
+ }
+} {1}
+do_test tkt1473-7.3 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=1 EXCEPT SELECT 2 FROM t2 WHERE y=2
+ )
+ }
+} {1}
+do_test tkt1473-7.4 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=0 EXCEPT SELECT 2 FROM t2 WHERE y=2
+ )
+ }
+} {{}}
+do_test tkt1473-7.5 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=0 EXCEPT SELECT 2 FROM t2 WHERE y=2
+ )
+ }
+} {0}
+
+do_test tkt1473-8.1 {
+ execsql {
+ SELECT 1 FROM t2 WHERE x=1 INTERSECT SELECT 2 FROM t2 WHERE y=2
+ }
+} {}
+do_test tkt1473-8.2 {
+ execsql {
+ SELECT 1 FROM t2 WHERE x=1 INTERSECT SELECT 1 FROM t2 WHERE y=2
+ }
+} {1}
+do_test tkt1473-8.3 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=1 INTERSECT SELECT 2 FROM t2 WHERE y=2
+ )
+ }
+} {{}}
+do_test tkt1473-8.4 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=1 INTERSECT SELECT 1 FROM t2 WHERE y=2
+ )
+ }
+} {1}
+do_test tkt1473-8.5 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=1 INTERSECT SELECT 2 FROM t2 WHERE y=2
+ )
+ }
+} {0}
+do_test tkt1473-8.6 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=1 INTERSECT SELECT 1 FROM t2 WHERE y=2
+ )
+ }
+} {1}
+do_test tkt1473-8.7 {
+ execsql {
+ SELECT (
+ SELECT 1 FROM t2 WHERE x=0 INTERSECT SELECT 1 FROM t2 WHERE y=2
+ )
+ }
+} {{}}
+do_test tkt1473-8.8 {
+ execsql {
+ SELECT EXISTS (
+ SELECT 1 FROM t2 WHERE x=1 INTERSECT SELECT 1 FROM t2 WHERE y=0
+ )
+ }
+} {0}
+
+
+
+
+finish_test
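
Editorial note: the tkt1473 cases above all come down to two behaviors of a
compound SELECT used as a subquery: a scalar subquery yields the first row of
the compound result (or NULL when the result is empty), while EXISTS collapses
the same result to 1 or 0. A compact restatement, assuming the t1 table
created at the top of the file (expected values shown in comments, matching
tests 2.4, 2.8 and 3.9):

    SELECT (SELECT 1 FROM t1 WHERE a=1 UNION ALL SELECT 2 FROM t1 WHERE b=4);    -- 1
    SELECT (SELECT 1 FROM t1 WHERE a=0 UNION ALL SELECT 2 FROM t1 WHERE b=0);    -- NULL
    SELECT EXISTS (SELECT 1 FROM t1 WHERE a=0 UNION SELECT 2 FROM t1 WHERE b=0); -- 0
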
Added: freeswitch/trunk/libs/sqlite/test/tkt1501.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1501.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,36 @@
+# 2005 November 16
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1501 is
+# fixed.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !compound {
+ finish_test
+ return
+}
+
+do_test tkt1501-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ SELECT a, b, 'abc' FROM t1
+ UNION
+ SELECT b, a, 'xyz' FROM t1
+ ORDER BY 2, 3;
+ }
+} {2 1 xyz 1 2 abc}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tkt1512.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1512.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,54 @@
+# 2005 September 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1512 is
+# fixed.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !vacuum {
+ finish_test
+ return
+}
+if {[db one {PRAGMA auto_vacuum}]} {
+ finish_test
+ return
+}
+
+do_test tkt1512-1.1 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ INSERT INTO t1 VALUES(3,4);
+ SELECT * FROM t1
+ }
+} {1 2 3 4}
+do_test tkt1512-1.2 {
+ file size test.db
+} {2048}
+do_test tkt1512-1.3 {
+ execsql {
+ DROP TABLE t1;
+ }
+ file size test.db
+} {2048}
+do_test tkt1512-1.4 {
+ execsql {
+ VACUUM;
+ }
+ file size test.db
+} {1024}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tkt1514.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1514.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,27 @@
+# 2005 November 16
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1514 is
+# fixed.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test tkt1514-1.1 {
+ catchsql {
+ CREATE TABLE t1(a,b);
+ SELECT a FROM t1 WHERE max(b)<10 GROUP BY a;
+ }
+} {1 {misuse of aggregate: max(b)}}
+
+finish_test
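
Editorial note (not part of the test file): the error checked above arises
because an aggregate such as max() cannot appear in a WHERE clause, which is
evaluated per row before grouping. The supported way to filter on an
aggregate is a HAVING clause, for example:

    SELECT a FROM t1 GROUP BY a HAVING max(b) < 10;
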
Added: freeswitch/trunk/libs/sqlite/test/tkt1536.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1536.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,38 @@
+# 2005 November 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1536 is
+# fixed.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test tkt1536-1.1 {
+ execsql {
+ CREATE TABLE t1(
+ a INTEGER PRIMARY KEY,
+ b TEXT
+ );
+ INSERT INTO t1 VALUES(1,'01');
+ SELECT typeof(a), typeof(b) FROM t1;
+ }
+} {integer text}
+do_test tkt1536-1.2 {
+ execsql {
+ INSERT INTO t1(b) SELECT b FROM t1;
+ SELECT b FROM t1 WHERE rowid=2;
+ }
+} {01}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tkt1537.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1537.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,122 @@
+# 2005 November 26
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1537 is
+# fixed.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test tkt1537-1.1 {
+ execsql {
+ CREATE TABLE t1(id, a1, a2);
+ INSERT INTO t1 VALUES(1, NULL, NULL);
+ INSERT INTO t1 VALUES(2, 1, 3);
+ CREATE TABLE t2(id, b);
+ INSERT INTO t2 VALUES(3, 1);
+ INSERT INTO t2 VALUES(4, NULL);
+ SELECT * FROM t1 LEFT JOIN t2 ON a1=b OR a2=+b;
+ }
+} {1 {} {} {} {} 2 1 3 3 1}
+do_test tkt1537-1.2 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON a1=b OR a2=b;
+ }
+} {1 {} {} {} {} 2 1 3 3 1}
+do_test tkt1537-1.3 {
+ execsql {
+ SELECT * FROM t2 LEFT JOIN t1 ON a1=b OR a2=b;
+ }
+} {3 1 2 1 3 4 {} {} {} {}}
+ifcapable subquery {
+ do_test tkt1537-1.4 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON b IN (a1,a2);
+ }
+ } {1 {} {} {} {} 2 1 3 3 1}
+ do_test tkt1537-1.5 {
+ execsql {
+ SELECT * FROM t2 LEFT JOIN t1 ON b IN (a2,a1);
+ }
+ } {3 1 2 1 3 4 {} {} {} {}}
+}
+do_test tkt1537-1.6 {
+ execsql {
+ CREATE INDEX t1a1 ON t1(a1);
+ CREATE INDEX t1a2 ON t1(a2);
+ CREATE INDEX t2b ON t2(b);
+ SELECT * FROM t1 LEFT JOIN t2 ON a1=b OR a2=b;
+ }
+} {1 {} {} {} {} 2 1 3 3 1}
+do_test tkt1537-1.7 {
+ execsql {
+ SELECT * FROM t2 LEFT JOIN t1 ON a1=b OR a2=b;
+ }
+} {3 1 2 1 3 4 {} {} {} {}}
+
+ifcapable subquery {
+ do_test tkt1537-1.8 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON b IN (a1,a2);
+ }
+ } {1 {} {} {} {} 2 1 3 3 1}
+ do_test tkt1537-1.9 {
+ execsql {
+ SELECT * FROM t2 LEFT JOIN t1 ON b IN (a2,a1);
+ }
+ } {3 1 2 1 3 4 {} {} {} {}}
+}
+
+execsql {
+ DROP INDEX t1a1;
+ DROP INDEX t1a2;
+ DROP INDEX t2b;
+}
+
+do_test tkt1537-2.1 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON b BETWEEN a1 AND a2;
+ }
+} {1 {} {} {} {} 2 1 3 3 1}
+do_test tkt1537-2.2 {
+ execsql {
+ CREATE INDEX t2b ON t2(b);
+ SELECT * FROM t1 LEFT JOIN t2 ON b BETWEEN a1 AND a2;
+ }
+} {1 {} {} {} {} 2 1 3 3 1}
+do_test tkt1537-2.3 {
+ execsql {
+ SELECT * FROM t2 LEFT JOIN t1 ON b BETWEEN a1 AND a2;
+ }
+} {3 1 2 1 3 4 {} {} {} {}}
+do_test tkt1537-2.4 {
+ execsql {
+ CREATE INDEX t1a1 ON t1(a1);
+ CREATE INDEX t1a2 ON t1(a2);
+ SELECT * FROM t2 LEFT JOIN t1 ON b BETWEEN a1 AND a2;
+ }
+} {3 1 2 1 3 4 {} {} {} {}}
+
+do_test tkt1537-3.1 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON b GLOB 'abc*' WHERE t1.id=1;
+ }
+} {1 {} {} {} {}}
+do_test tkt1537-3.2 {
+ execsql {
+ SELECT * FROM t2 LEFT JOIN t1 ON a1 GLOB 'abc*' WHERE t2.id=3;
+ }
+} {3 1 {} {} {}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tkt1567.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1567.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,51 @@
+# 2005 December 19
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1567 is
+# fixed.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test tkt1567-1.1 {
+ execsql {
+ CREATE TABLE t1(a TEXT PRIMARY KEY);
+ }
+ set bigstr abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ
+ for {set i 0} {$i<100} {incr i} {
+ set x [format %5d [expr $i*2]]
+ set sql "INSERT INTO t1 VALUES('$x-$bigstr')"
+ execsql $sql
+ }
+} {}
+integrity_check tkt1567-1.2
+
+do_test tkt1567-1.3 {
+ execsql {
+ BEGIN;
+ UPDATE t1 SET a = a||'x' WHERE rowid%2==0;
+ }
+} {}
+do_test tkt1567-1.4 {
+ catchsql {
+ UPDATE t1 SET a = CASE WHEN rowid<90 THEN substr(a,1,10) ELSE '9999' END;
+ }
+} {1 {column a is not unique}}
+do_test tkt1567-1.5 {
+ execsql {
+ COMMIT;
+ }
+} {}
+integrity_check tkt1567-1.6
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/tkt1644.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1644.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,111 @@
+# 2006 January 30
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1644 is
+# fixed. Ticket #1644 complains that precompiled statements
+# are not expired correctly as a result of changes to TEMP
+# views and triggers.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !tempdb||!view {
+ finish_test
+ return
+}
+
+# Create two tables T1 and T2 and make V1 point to T1.
+do_test tkt1644-1.1 {
+ execsql {
+ CREATE TABLE t1(a);
+ INSERT INTO t1 VALUES(1);
+ CREATE TABLE t2(b);
+ INSERT INTO t2 VALUES(99);
+ CREATE TEMP VIEW v1 AS SELECT * FROM t1;
+ SELECT * FROM v1;
+ }
+} {1}
+
+# The "SELECT * FROM v1" should be in the TCL interface cache below.
+# It will continue to point to T1 unless the cache is invalidated when
+# the view changes.
+#
+do_test tkt1644-1.2 {
+ execsql {
+ DROP VIEW v1;
+ CREATE TEMP VIEW v1 AS SELECT * FROM t2;
+ SELECT * FROM v1;
+ }
+} {99}
+
+# Cache an access to the T1 table.
+#
+do_test tkt1644-1.3 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} {1}
+
+# Create a temp table T1. Make sure the cache is invalidated so that
+# the statement is recompiled and refers to the empty temp table.
+#
+do_test tkt1644-1.4 {
+ execsql {
+ CREATE TEMP TABLE t1(x);
+ }
+ execsql {
+ SELECT * FROM t1;
+ }
+} {}
+
+ifcapable view {
+ do_test tkt1644-2.1 {
+ execsql {
+ CREATE TEMP TABLE temp_t1(a, b);
+ }
+ set ::DB [sqlite3_connection_pointer db]
+ set ::STMT [sqlite3_prepare $::DB "SELECT * FROM temp_t1" -1 DUMMY]
+ execsql {
+ DROP TABLE temp_t1;
+ }
+ list [sqlite3_step $::STMT] [sqlite3_finalize $::STMT]
+ } {SQLITE_ERROR SQLITE_SCHEMA}
+
+ do_test tkt1644-2.2 {
+ execsql {
+ CREATE TABLE real_t1(a, b);
+ CREATE TEMP VIEW temp_v1 AS SELECT * FROM real_t1;
+ }
+ set ::DB [sqlite3_connection_pointer db]
+ set ::STMT [sqlite3_prepare $::DB "SELECT * FROM temp_v1" -1 DUMMY]
+ execsql {
+ DROP VIEW temp_v1;
+ }
+ list [sqlite3_step $::STMT] [sqlite3_finalize $::STMT]
+ } {SQLITE_ERROR SQLITE_SCHEMA}
+
+ do_test tkt1644-2.3 {
+ execsql {
+ CREATE TEMP VIEW temp_v1 AS SELECT * FROM real_t1 LIMIT 10 OFFSET 10;
+ }
+ set ::DB [sqlite3_connection_pointer db]
+ set ::STMT [sqlite3_prepare $::DB "SELECT * FROM temp_v1" -1 DUMMY]
+ execsql {
+ DROP VIEW temp_v1;
+ }
+ list [sqlite3_step $::STMT] [sqlite3_finalize $::STMT]
+ } {SQLITE_ERROR SQLITE_SCHEMA}
+}
+
+
+finish_test
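
Editorial note: the tkt1644-2.x cases show the statement-expiration contract
at the C-API level -- after a schema change, sqlite3_step on the stale
statement reports SQLITE_ERROR and sqlite3_finalize reports SQLITE_SCHEMA. A
caller that wants to keep using such a statement is expected to re-prepare
it. An illustrative Tcl sketch using the same test-harness wrappers (not part
of the committed test; happy-path cleanup omitted for brevity):

    set ::STMT [sqlite3_prepare $::DB "SELECT * FROM real_t1" -1 DUMMY]
    set rc [sqlite3_step $::STMT]
    if {$rc eq "SQLITE_ERROR" && [sqlite3_finalize $::STMT] eq "SQLITE_SCHEMA"} {
      # The schema changed after prepare; prepare again and retry.
      set ::STMT [sqlite3_prepare $::DB "SELECT * FROM real_t1" -1 DUMMY]
      set rc [sqlite3_step $::STMT]
    }
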
Added: freeswitch/trunk/libs/sqlite/test/tkt1667.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1667.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,85 @@
+# 2006 February 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1667 has been
+# fixed.
+#
+#
+# $Id: tkt1667.test,v 1.2 2006/06/20 11:01:09 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !autovacuum||!tclvar {
+ finish_test
+ return
+}
+
+db close
+file delete -force test.db test.db-journal
+
+# Set the pending byte offset such that the page it is on is
+# the first autovacuum pointer map page in the file (assume a page
+# size of 1024).
+
+set first_ptrmap_page [expr 1024/5 + 3]
+set sqlite_pending_byte [expr 1024 * ($first_ptrmap_page-1)]
+
+sqlite3 db test.db
+
+do_test tkt1667-1 {
+ execsql {
+ PRAGMA auto_vacuum = 1;
+ BEGIN;
+ CREATE TABLE t1(a, b);
+ }
+ for {set i 0} {$i < 500} {incr i} {
+ execsql {
+ INSERT INTO t1 VALUES($i, randstr(1000, 2000))
+ }
+ }
+ execsql {
+ COMMIT;
+ }
+} {}
+for {set i 0} {$i < 500} {incr i} {
+ do_test tkt1667-2.$i.1 {
+ execsql {
+ DELETE FROM t1 WHERE a = $i;
+ }
+ } {}
+ integrity_check tkt1667-2.$i.2
+}
+
+do_test tkt1667-3 {
+ execsql {
+ BEGIN;
+ }
+ for {set i 0} {$i < 500} {incr i} {
+ execsql {
+ INSERT INTO t1 VALUES($i, randstr(1000, 2000))
+ }
+ }
+ execsql {
+ COMMIT;
+ }
+} {}
+do_test tkt1667-4.1 {
+ execsql {
+ DELETE FROM t1;
+ }
+} {}
+integrity_check tkt1667-4.2
+
+finish_test
+
+
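Editorial note on the page arithmetic in tkt1667 above, using only the numbers
already in the script and its assumed 1024-byte page size: 1024/5 truncates to
204 in Tcl, so first_ptrmap_page evaluates to 207 and sqlite_pending_byte is
set to 1024*206 = 210944, i.e. the pending (locking) byte lands on page 207,
the page the comment identifies as a pointer-map page:

    set first_ptrmap_page   [expr 1024/5 + 3]                      ;# 204 + 3 = 207
    set sqlite_pending_byte [expr 1024 * ($first_ptrmap_page-1)]   ;# 1024 * 206 = 210944
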
Added: freeswitch/trunk/libs/sqlite/test/tkt1873.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt1873.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,67 @@
+# 2006 June 27
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #1873 has been
+# fixed.
+#
+#
+# $Id: tkt1873.test,v 1.1 2006/06/27 16:34:58 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+file delete -force test2.db test2.db-journal
+
+do_test tkt1873-1.1 {
+ execsql {
+ CREATE TABLE t1(x, y);
+ ATTACH 'test2.db' AS aux;
+ CREATE TABLE aux.t2(x, y);
+ INSERT INTO t1 VALUES(1, 2);
+ INSERT INTO t1 VALUES(3, 4);
+ INSERT INTO t2 VALUES(5, 6);
+ INSERT INTO t2 VALUES(7, 8);
+ }
+} {}
+
+do_test tkt1873-1.2 {
+ set rc [catch {
+ db eval {SELECT * FROM t2 LIMIT 1} {
+ db eval {DETACH aux}
+ }
+ } msg]
+ list $rc $msg
+} {1 {database aux is locked}}
+
+do_test tkt1873-1.3 {
+ set rc [catch {
+ db eval {SELECT * FROM t1 LIMIT 1} {
+ db eval {DETACH aux}
+ }
+ } msg]
+ list $rc $msg
+} {0 {}}
+
+do_test tkt1873-1.4 {
+ catchsql {
+ select * from t2;
+ }
+} {1 {no such table: t2}}
+
+do_test tkt1873-1.5 {
+ catchsql {
+ ATTACH 'test2.db' AS aux;
+ select * from t2;
+ }
+} {0 {5 6 7 8}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/trace.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trace.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,148 @@
+# 2004 Jun 29
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for the "sqlite3_trace()" API.
+#
+# $Id: trace.test,v 1.6 2006/01/03 00:33:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !trace {
+ finish_test
+ return
+}
+
+set ::stmtlist {}
+do_test trace-1.1 {
+ set rc [catch {db trace 1 2 3} msg]
+ lappend rc $msg
+} {1 {wrong # args: should be "db trace ?CALLBACK?"}}
+proc trace_proc cmd {
+ lappend ::stmtlist [string trim $cmd]
+}
+do_test trace-1.2 {
+ db trace trace_proc
+ db trace
+} {trace_proc}
+do_test trace-1.3 {
+ execsql {
+ CREATE TABLE t1(a,b);
+ INSERT INTO t1 VALUES(1,2);
+ SELECT * FROM t1;
+ }
+} {1 2}
+do_test trace-1.4 {
+ set ::stmtlist
+} {{CREATE TABLE t1(a,b);} {INSERT INTO t1 VALUES(1,2);} {SELECT * FROM t1;}}
+do_test trace-1.5 {
+ db trace {}
+ db trace
+} {}
+
+# If we prepare a statement and execute it multiple times, the trace
+# happens on each execution.
+#
+db close
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+do_test trace-2.1 {
+ set STMT [sqlite3_prepare $DB {INSERT INTO t1 VALUES(2,3)} -1 TAIL]
+ db trace trace_proc
+ proc trace_proc sql {
+ global TRACE_OUT
+ set TRACE_OUT $sql
+ }
+ set TRACE_OUT {}
+ sqlite3_step $STMT
+ set TRACE_OUT
+} {INSERT INTO t1 VALUES(2,3)}
+do_test trace-2.2 {
+ set TRACE_OUT {}
+ sqlite3_reset $STMT
+ set TRACE_OUT
+} {}
+do_test trace-2.3 {
+ sqlite3_step $STMT
+ set TRACE_OUT
+} {INSERT INTO t1 VALUES(2,3)}
+do_test trace-2.4 {
+ execsql {SELECT * FROM t1}
+} {1 2 2 3 2 3}
+do_test trace-2.5 {
+ set TRACE_OUT
+} {SELECT * FROM t1}
+catch {sqlite3_finalize $STMT}
+
+# Similar tests, but this time for profiling.
+#
+do_test trace-3.1 {
+ set rc [catch {db profile 1 2 3} msg]
+ lappend rc $msg
+} {1 {wrong # args: should be "db profile ?CALLBACK?"}}
+set ::stmtlist {}
+proc profile_proc {cmd tm} {
+ lappend ::stmtlist [string trim $cmd]
+}
+do_test trace-3.2 {
+ db trace {}
+ db profile profile_proc
+ db profile
+} {profile_proc}
+do_test trace-3.3 {
+ execsql {
+ CREATE TABLE t2(a,b);
+ INSERT INTO t2 VALUES(1,2);
+ SELECT * FROM t2;
+ }
+} {1 2}
+do_test trace-3.4 {
+ set ::stmtlist
+} {{CREATE TABLE t2(a,b);} {INSERT INTO t2 VALUES(1,2);} {SELECT * FROM t2;}}
+do_test trace-3.5 {
+ db profile {}
+ db profile
+} {}
+
+# If we prepare a statement and execute it multiple times, the profile
+# happens on each execution.
+#
+db close
+sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+do_test trace-4.1 {
+ set STMT [sqlite3_prepare $DB {INSERT INTO t2 VALUES(2,3)} -1 TAIL]
+ db trace trace_proc
+ proc profile_proc {sql tm} {
+ global TRACE_OUT
+ set TRACE_OUT $sql
+ }
+ set TRACE_OUT {}
+ sqlite3_step $STMT
+ set TRACE_OUT
+} {INSERT INTO t2 VALUES(2,3)}
+do_test trace-4.2 {
+ set TRACE_OUT {}
+ sqlite3_reset $STMT
+ set TRACE_OUT
+} {}
+do_test trace-4.3 {
+ sqlite3_step $STMT
+ set TRACE_OUT
+} {INSERT INTO t2 VALUES(2,3)}
+do_test trace-4.4 {
+ execsql {SELECT * FROM t1}
+} {1 2 2 3 2 3}
+do_test trace-4.5 {
+ set TRACE_OUT
+} {SELECT * FROM t1}
+catch {sqlite3_finalize $STMT}
+
+finish_test
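
Editorial usage note (not part of the test file): the Tcl-level hooks
exercised above take a command prefix -- the trace callback is invoked with
the SQL text of each executed statement, and the profile callback with the
SQL text plus a run-time figure. A minimal sketch that logs both to stderr
and then clears the hooks:

    proc log_sql  {sql}    { puts stderr "TRACE:   $sql" }
    proc log_time {sql tm} { puts stderr "PROFILE: $tm $sql" }
    db trace   log_sql
    db profile log_time
    db eval { SELECT count(*) FROM t1 }
    db trace   {}          ;# an empty script clears each hook
    db profile {}
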
Added: freeswitch/trunk/libs/sqlite/test/trans.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trans.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,916 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is database locks.
+#
+# $Id: trans.test,v 1.32 2006/06/20 11:01:09 danielk1977 Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+# Create several tables to work with.
+#
+do_test trans-1.0 {
+ execsql {
+ CREATE TABLE one(a int PRIMARY KEY, b text);
+ INSERT INTO one VALUES(1,'one');
+ INSERT INTO one VALUES(2,'two');
+ INSERT INTO one VALUES(3,'three');
+ SELECT b FROM one ORDER BY a;
+ }
+} {one two three}
+do_test trans-1.1 {
+ execsql {
+ CREATE TABLE two(a int PRIMARY KEY, b text);
+ INSERT INTO two VALUES(1,'I');
+ INSERT INTO two VALUES(5,'V');
+ INSERT INTO two VALUES(10,'X');
+ SELECT b FROM two ORDER BY a;
+ }
+} {I V X}
+do_test trans-1.9 {
+ sqlite3 altdb test.db
+ execsql {SELECT b FROM one ORDER BY a} altdb
+} {one two three}
+do_test trans-1.10 {
+ execsql {SELECT b FROM two ORDER BY a} altdb
+} {I V X}
+integrity_check trans-1.11
+
+# Basic transactions
+#
+do_test trans-2.1 {
+ set v [catch {execsql {BEGIN}} msg]
+ lappend v $msg
+} {0 {}}
+do_test trans-2.2 {
+ set v [catch {execsql {END}} msg]
+ lappend v $msg
+} {0 {}}
+do_test trans-2.3 {
+ set v [catch {execsql {BEGIN TRANSACTION}} msg]
+ lappend v $msg
+} {0 {}}
+do_test trans-2.4 {
+ set v [catch {execsql {COMMIT TRANSACTION}} msg]
+ lappend v $msg
+} {0 {}}
+do_test trans-2.5 {
+ set v [catch {execsql {BEGIN TRANSACTION 'foo'}} msg]
+ lappend v $msg
+} {0 {}}
+do_test trans-2.6 {
+ set v [catch {execsql {ROLLBACK TRANSACTION 'foo'}} msg]
+ lappend v $msg
+} {0 {}}
+do_test trans-2.10 {
+ execsql {
+ BEGIN;
+ SELECT a FROM one ORDER BY a;
+ SELECT a FROM two ORDER BY a;
+ END;
+ }
+} {1 2 3 1 5 10}
+integrity_check trans-2.11
+
+# Check the locking behavior
+#
+do_test trans-3.1 {
+ execsql {
+ BEGIN;
+ UPDATE one SET a = 0 WHERE 0;
+ SELECT a FROM one ORDER BY a;
+ }
+} {1 2 3}
+do_test trans-3.2 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 5 10}}
+
+do_test trans-3.3 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3}}
+do_test trans-3.4 {
+ catchsql {
+ INSERT INTO one VALUES(4,'four');
+ }
+} {0 {}}
+do_test trans-3.5 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 5 10}}
+do_test trans-3.6 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3}}
+do_test trans-3.7 {
+ catchsql {
+ INSERT INTO two VALUES(4,'IV');
+ }
+} {0 {}}
+do_test trans-3.8 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 5 10}}
+do_test trans-3.9 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3}}
+do_test trans-3.10 {
+ execsql {END TRANSACTION}
+} {}
+
+do_test trans-3.11 {
+ set v [catch {execsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb} msg]
+ lappend v $msg
+} {0 {1 4 5 10}}
+do_test trans-3.12 {
+ set v [catch {execsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb} msg]
+ lappend v $msg
+} {0 {1 2 3 4}}
+do_test trans-3.13 {
+ set v [catch {execsql {
+ SELECT a FROM two ORDER BY a;
+ } db} msg]
+ lappend v $msg
+} {0 {1 4 5 10}}
+do_test trans-3.14 {
+ set v [catch {execsql {
+ SELECT a FROM one ORDER BY a;
+ } db} msg]
+ lappend v $msg
+} {0 {1 2 3 4}}
+integrity_check trans-3.15
+
+do_test trans-4.1 {
+ set v [catch {execsql {
+ COMMIT;
+ } db} msg]
+ lappend v $msg
+} {1 {cannot commit - no transaction is active}}
+do_test trans-4.2 {
+ set v [catch {execsql {
+ ROLLBACK;
+ } db} msg]
+ lappend v $msg
+} {1 {cannot rollback - no transaction is active}}
+do_test trans-4.3 {
+ catchsql {
+ BEGIN TRANSACTION;
+ UPDATE two SET a = 0 WHERE 0;
+ SELECT a FROM two ORDER BY a;
+ } db
+} {0 {1 4 5 10}}
+do_test trans-4.4 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 4 5 10}}
+do_test trans-4.5 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3 4}}
+do_test trans-4.6 {
+ catchsql {
+ BEGIN TRANSACTION;
+ SELECT a FROM one ORDER BY a;
+ } db
+} {1 {cannot start a transaction within a transaction}}
+do_test trans-4.7 {
+ catchsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb
+} {0 {1 4 5 10}}
+do_test trans-4.8 {
+ catchsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb
+} {0 {1 2 3 4}}
+do_test trans-4.9 {
+ set v [catch {execsql {
+ END TRANSACTION;
+ SELECT a FROM two ORDER BY a;
+ } db} msg]
+ lappend v $msg
+} {0 {1 4 5 10}}
+do_test trans-4.10 {
+ set v [catch {execsql {
+ SELECT a FROM two ORDER BY a;
+ } altdb} msg]
+ lappend v $msg
+} {0 {1 4 5 10}}
+do_test trans-4.11 {
+ set v [catch {execsql {
+ SELECT a FROM one ORDER BY a;
+ } altdb} msg]
+ lappend v $msg
+} {0 {1 2 3 4}}
+integrity_check trans-4.12
+do_test trans-4.98 {
+ altdb close
+ execsql {
+ DROP TABLE one;
+ DROP TABLE two;
+ }
+} {}
+integrity_check trans-4.99
+
+# Check out the commit/rollback behavior of the database
+#
+do_test trans-5.1 {
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {}
+do_test trans-5.2 {
+ execsql {BEGIN TRANSACTION}
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {}
+do_test trans-5.3 {
+ execsql {CREATE TABLE one(a text, b int)}
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {one}
+do_test trans-5.4 {
+ execsql {SELECT a,b FROM one ORDER BY b}
+} {}
+do_test trans-5.5 {
+ execsql {INSERT INTO one(a,b) VALUES('hello', 1)}
+ execsql {SELECT a,b FROM one ORDER BY b}
+} {hello 1}
+do_test trans-5.6 {
+ execsql {ROLLBACK}
+ execsql {SELECT name FROM sqlite_master WHERE type='table' ORDER BY name}
+} {}
+do_test trans-5.7 {
+ set v [catch {
+ execsql {SELECT a,b FROM one ORDER BY b}
+ } msg]
+ lappend v $msg
+} {1 {no such table: one}}
+
+# Test commits and rollbacks of CREATE TABLE, CREATE INDEX,
+# DROP TABLE and DROP INDEX statements.
+#
+do_test trans-5.8 {
+ execsql {
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name
+ }
+} {}
+do_test trans-5.9 {
+ execsql {
+ BEGIN TRANSACTION;
+ CREATE TABLE t1(a int, b int, c int);
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {t1}
+do_test trans-5.10 {
+ execsql {
+ CREATE INDEX i1 ON t1(a);
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i1 t1}
+do_test trans-5.11 {
+ execsql {
+ COMMIT;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i1 t1}
+do_test trans-5.12 {
+ execsql {
+ BEGIN TRANSACTION;
+ CREATE TABLE t2(a int, b int, c int);
+ CREATE INDEX i2a ON t2(a);
+ CREATE INDEX i2b ON t2(b);
+ DROP TABLE t1;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i2a i2b t2}
+do_test trans-5.13 {
+ execsql {
+ ROLLBACK;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i1 t1}
+do_test trans-5.14 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {t1}
+do_test trans-5.15 {
+ execsql {
+ ROLLBACK;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i1 t1}
+do_test trans-5.16 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ CREATE TABLE t2(x int, y int, z int);
+ CREATE INDEX i2x ON t2(x);
+ CREATE INDEX i2y ON t2(y);
+ INSERT INTO t2 VALUES(1,2,3);
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i2x i2y t1 t2}
+do_test trans-5.17 {
+ execsql {
+ COMMIT;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i2x i2y t1 t2}
+do_test trans-5.18 {
+ execsql {
+ SELECT * FROM t2;
+ }
+} {1 2 3}
+do_test trans-5.19 {
+ execsql {
+ SELECT x FROM t2 WHERE y=2;
+ }
+} {1}
+do_test trans-5.20 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ DROP TABLE t2;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {}
+do_test trans-5.21 {
+ set r [catch {execsql {
+ SELECT * FROM t2
+ }} msg]
+ lappend r $msg
+} {1 {no such table: t2}}
+do_test trans-5.22 {
+ execsql {
+ ROLLBACK;
+ SELECT name fROM sqlite_master
+ WHERE type='table' OR type='index'
+ ORDER BY name;
+ }
+} {i2x i2y t1 t2}
+do_test trans-5.23 {
+ execsql {
+ SELECT * FROM t2;
+ }
+} {1 2 3}
+integrity_check trans-5.24
+
+
+# Try to DROP and CREATE tables and indices with the same name
+# within a transaction. Make sure ROLLBACK works.
+#
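+# Note: the tests below use the execsql2 test-harness helper rather than
+# execsql.  It returns each column name followed by its value, which is
+# why the expected results read like {a 1 b 2 c 3} instead of {1 2 3}.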
+do_test trans-6.1 {
+ execsql2 {
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(p,q,r);
+ ROLLBACK;
+ SELECT * FROM t1;
+ }
+} {a 1 b 2 c 3}
+do_test trans-6.2 {
+ execsql2 {
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(p,q,r);
+ COMMIT;
+ SELECT * FROM t1;
+ }
+} {}
+do_test trans-6.3 {
+ execsql2 {
+ INSERT INTO t1 VALUES(1,2,3);
+ SELECT * FROM t1;
+ }
+} {p 1 q 2 r 3}
+do_test trans-6.4 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(4,5,6);
+ SELECT * FROM t1;
+ DROP TABLE t1;
+ }
+} {a 4 b 5 c 6}
+do_test trans-6.5 {
+ execsql2 {
+ ROLLBACK;
+ SELECT * FROM t1;
+ }
+} {p 1 q 2 r 3}
+do_test trans-6.6 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(4,5,6);
+ SELECT * FROM t1;
+ DROP TABLE t1;
+ }
+} {a 4 b 5 c 6}
+do_test trans-6.7 {
+ catchsql {
+ COMMIT;
+ SELECT * FROM t1;
+ }
+} {1 {no such table: t1}}
+
+# Repeat on a table with an automatically generated index.
+#
+do_test trans-6.10 {
+ execsql2 {
+ CREATE TABLE t1(a unique,b,c);
+ INSERT INTO t1 VALUES(1,2,3);
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(p unique,q,r);
+ ROLLBACK;
+ SELECT * FROM t1;
+ }
+} {a 1 b 2 c 3}
+do_test trans-6.11 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(p unique,q,r);
+ COMMIT;
+ SELECT * FROM t1;
+ }
+} {}
+do_test trans-6.12 {
+ execsql2 {
+ INSERT INTO t1 VALUES(1,2,3);
+ SELECT * FROM t1;
+ }
+} {p 1 q 2 r 3}
+do_test trans-6.13 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a unique,b,c);
+ INSERT INTO t1 VALUES(4,5,6);
+ SELECT * FROM t1;
+ DROP TABLE t1;
+ }
+} {a 4 b 5 c 6}
+do_test trans-6.14 {
+ execsql2 {
+ ROLLBACK;
+ SELECT * FROM t1;
+ }
+} {p 1 q 2 r 3}
+do_test trans-6.15 {
+ execsql2 {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a unique,b,c);
+ INSERT INTO t1 VALUES(4,5,6);
+ SELECT * FROM t1;
+ DROP TABLE t1;
+ }
+} {a 4 b 5 c 6}
+do_test trans-6.16 {
+ catchsql {
+ COMMIT;
+ SELECT * FROM t1;
+ }
+} {1 {no such table: t1}}
+
+do_test trans-6.20 {
+ execsql {
+ CREATE TABLE t1(a integer primary key,b,c);
+ INSERT INTO t1 VALUES(1,-2,-3);
+ INSERT INTO t1 VALUES(4,-5,-6);
+ SELECT * FROM t1;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test trans-6.21 {
+ execsql {
+ CREATE INDEX i1 ON t1(b);
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test trans-6.22 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ SELECT * FROM t1 WHERE b<1;
+ ROLLBACK;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test trans-6.23 {
+ execsql {
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test trans-6.24 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ ROLLBACK;
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+
+do_test trans-6.25 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ CREATE INDEX i1 ON t1(c);
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test trans-6.26 {
+ execsql {
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test trans-6.27 {
+ execsql {
+ ROLLBACK;
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test trans-6.28 {
+ execsql {
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {1 -2 -3 4 -5 -6}
+
+# The following repeats steps 6.20 through 6.28, but puts a "unique"
+# constraint on the first field of the table in order to generate an
+# automatic index.
+#
+do_test trans-6.30 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ CREATE TABLE t1(a int unique,b,c);
+ COMMIT;
+ INSERT INTO t1 VALUES(1,-2,-3);
+ INSERT INTO t1 VALUES(4,-5,-6);
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test trans-6.31 {
+ execsql {
+ CREATE INDEX i1 ON t1(b);
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test trans-6.32 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ SELECT * FROM t1 WHERE b<1;
+ ROLLBACK;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test trans-6.33 {
+ execsql {
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test trans-6.34 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP TABLE t1;
+ ROLLBACK;
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+
+do_test trans-6.35 {
+ execsql {
+ BEGIN TRANSACTION;
+ DROP INDEX i1;
+ CREATE INDEX i1 ON t1(c);
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test trans-6.36 {
+ execsql {
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test trans-6.37 {
+ execsql {
+ DROP INDEX i1;
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {1 -2 -3 4 -5 -6}
+do_test trans-6.38 {
+ execsql {
+ ROLLBACK;
+ SELECT * FROM t1 WHERE b<1;
+ }
+} {4 -5 -6 1 -2 -3}
+do_test trans-6.39 {
+ execsql {
+ SELECT * FROM t1 WHERE c<1;
+ }
+} {1 -2 -3 4 -5 -6}
+integrity_check trans-6.40
+
+# Test to make sure rollback restores the database back to its original
+# state.
+#
+do_test trans-7.1 {
+ execsql {BEGIN}
+ for {set i 0} {$i<1000} {incr i} {
+ set r1 [expr {rand()}]
+ set r2 [expr {rand()}]
+ set r3 [expr {rand()}]
+ execsql "INSERT INTO t2 VALUES($r1,$r2,$r3)"
+ }
+ execsql {COMMIT}
+ set ::checksum [execsql {SELECT md5sum(x,y,z) FROM t2}]
+ set ::checksum2 [
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+ ]
+ execsql {SELECT count(*) FROM t2}
+} {1001}
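+# The count above is 1001 rather than 1000 because t2 still holds the
+# single row (1,2,3) inserted by trans-5.16, in addition to the 1000
+# random rows added here.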
+do_test trans-7.2 {
+ execsql {SELECT md5sum(x,y,z) FROM t2}
+} $checksum
+do_test trans-7.2.1 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+do_test trans-7.3 {
+ execsql {
+ BEGIN;
+ DELETE FROM t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test trans-7.4 {
+ execsql {
+ BEGIN;
+ INSERT INTO t2 SELECT * FROM t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test trans-7.5 {
+ execsql {
+ BEGIN;
+ DELETE FROM t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test trans-7.6 {
+ execsql {
+ BEGIN;
+ INSERT INTO t2 SELECT * FROM t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test trans-7.7 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t3 AS SELECT * FROM t2;
+ INSERT INTO t2 SELECT * FROM t3;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+} $checksum
+do_test trans-7.8 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+ifcapable tempdb {
+ do_test trans-7.9 {
+ execsql {
+ BEGIN;
+ CREATE TEMP TABLE t3 AS SELECT * FROM t2;
+ INSERT INTO t2 SELECT * FROM t3;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+ } $checksum
+}
+do_test trans-7.10 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+ifcapable tempdb {
+ do_test trans-7.11 {
+ execsql {
+ BEGIN;
+ CREATE TEMP TABLE t3 AS SELECT * FROM t2;
+ INSERT INTO t2 SELECT * FROM t3;
+ DROP INDEX i2x;
+ DROP INDEX i2y;
+ CREATE INDEX i3a ON t3(x);
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+ } $checksum
+}
+do_test trans-7.12 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+ifcapable tempdb {
+ do_test trans-7.13 {
+ execsql {
+ BEGIN;
+ DROP TABLE t2;
+ ROLLBACK;
+ SELECT md5sum(x,y,z) FROM t2;
+ }
+ } $checksum
+}
+do_test trans-7.14 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+integrity_check trans-7.15
+
+# Arrange for another process to begin modifying the database but abort
+# and die in the middle of the modification. Then have this process read
+# the database. This process should detect the journal file and roll it
+# back. Verify that this happens correctly.
+#
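+# (The child process below is killed by sqlite_abort while its transaction
+# is still open, so it leaves a hot journal behind.  A minimal check of
+# that precondition, illustrative only and not part of the original
+# sequence, might look like:
+#
+#   do_test trans-8.0 {
+#     file exists test.db-journal
+#   } {1}
+#
+# run after the exec but before the first query against the database.)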
+set fd [open test.tcl w]
+puts $fd {
+ sqlite3 db test.db
+ db eval {
+ PRAGMA default_cache_size=20;
+ BEGIN;
+ CREATE TABLE t3 AS SELECT * FROM t2;
+ DELETE FROM t2;
+ }
+ sqlite_abort
+}
+close $fd
+file copy -force test.db test.db-bu1
+do_test trans-8.1 {
+ catch {exec [info nameofexec] test.tcl}
+ file copy -force test.db test.db-bu2
+ file copy -force test.db-journal test.db-bu2-journal
+ execsql {SELECT md5sum(x,y,z) FROM t2}
+} $checksum
+do_test trans-8.2 {
+ execsql {SELECT md5sum(type,name,tbl_name,rootpage,sql) FROM sqlite_master}
+} $checksum2
+integrity_check trans-8.3
+
+# In the following sequence of tests, compute the MD5 sum of the content
+# of a table, make lots of modifications to that table, then do a rollback.
+# Verify that after the rollback, the MD5 checksum is unchanged.
+#
+do_test trans-9.1 {
+ execsql {
+ PRAGMA default_cache_size=10;
+ }
+ db close
+ sqlite3 db test.db
+ execsql {
+ BEGIN;
+ CREATE TABLE t3(x TEXT);
+ INSERT INTO t3 VALUES(randstr(10,400));
+ INSERT INTO t3 VALUES(randstr(10,400));
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3;
+ COMMIT;
+ SELECT count(*) FROM t3;
+ }
+} {1024}
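+# The table starts with 2 literal rows and is then doubled by nine
+# "INSERT INTO t3 SELECT ..." statements, giving 2*2^9 = 1024 rows.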
+
+# The following procedure computes a "signature" for table "t3". If
+# T3 changes in any way, the signature should change.
+#
+# This is used to test ROLLBACK. We gather a signature for t3, then
+# make lots of changes to t3, then rollback and take another signature.
+# The two signatures should be the same.
+#
+proc signature {} {
+ return [db eval {SELECT count(*), md5sum(x) FROM t3}]
+}
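+# Typical use in the loop below: capture the signature, make changes
+# inside a transaction, ROLLBACK, then verify the signature is unchanged.
+# For example (illustrative only):
+#
+#   set sig [signature]
+#   execsql {BEGIN; DELETE FROM t3; ROLLBACK}
+#   expr {[signature] eq $sig}   ;# expect 1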
+
+# Repeat the following group of tests 20 times for quick testing and
+# 40 times for full testing. Each iteration of the test makes table
+# t3 a little larger, and thus takes a little longer, so running 40
+# iterations takes considerably more than twice as long as running 20.
+#
+if {[info exists ISQUICK]} {
+ set limit 20
+} else {
+ set limit 40
+}
+
+# Do rollbacks. Make sure the signature does not change.
+#
+for {set i 2} {$i<=$limit} {incr i} {
+ set ::sig [signature]
+ set cnt [lindex $::sig 0]
+ if {$i%2==0} {
+ execsql {PRAGMA fullfsync=ON}
+ } else {
+ execsql {PRAGMA fullfsync=OFF}
+ }
+ set sqlite_sync_count 0
+ set sqlite_fullsync_count 0
+ do_test trans-9.$i.1-$cnt {
+ execsql {
+ BEGIN;
+ DELETE FROM t3 WHERE random()%10!=0;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ ROLLBACK;
+ }
+ signature
+ } $sig
+ do_test trans-9.$i.2-$cnt {
+ execsql {
+ BEGIN;
+ DELETE FROM t3 WHERE random()%10!=0;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ DELETE FROM t3 WHERE random()%10!=0;
+ INSERT INTO t3 SELECT randstr(10,10)||x FROM t3;
+ ROLLBACK;
+ }
+ signature
+ } $sig
+ if {$i<$limit} {
+ do_test trans-9.$i.3-$cnt {
+ execsql {
+ INSERT INTO t3 SELECT randstr(10,400) FROM t3 WHERE random()%10==0;
+ }
+ } {}
+ if {$tcl_platform(platform)=="unix"} {
+ do_test trans-9.$i.4-$cnt {
+ expr {$sqlite_sync_count>0}
+ } 1
+ ifcapable pager_pragmas {
+ do_test trans-9.$i.5-$cnt {
+ expr {$sqlite_fullsync_count>0}
+ } [expr {$i%2==0}]
+ } else {
+ do_test trans-9.$i.5-$cnt {
+ expr {$sqlite_fullsync_count>0}
+ } {1}
+ }
+ }
+ }
+ set ::pager_old_format 0
+}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/trigger1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trigger1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,624 @@
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file tests creating and dropping triggers, and their interaction
+# with the database COMMIT/ROLLBACK logic.
+#
+# 1. CREATE and DROP TRIGGER tests
+# trig-1.1: Error if table does not exist
+# trig-1.2: Error if trigger already exists
+# trig-1.3: Created triggers are deleted if the transaction is rolled back
+# trig-1.4: DROP TRIGGER removes trigger
+# trig-1.5: Dropped triggers are restored if the transaction is rolled back
+# trig-1.6: Error if dropped trigger doesn't exist
+# trig-1.7: Dropping the table automatically drops all triggers
+# trig-1.8: A trigger created on a TEMP table is not inserted into sqlite_master
+# trig-1.9: Ensure that we cannot create a trigger on sqlite_master
+# trig-1.10: A DELETE inside a trigger body does not disturb the triggering DELETE
+# trig-1.11: A DELETE inside a trigger body does not disturb the triggering UPDATE
+# trig-1.12: Ensure that INSTEAD OF triggers cannot be created on tables
+# trig-1.13: Ensure that AFTER triggers cannot be created on views
+# trig-1.14: Ensure that BEFORE triggers cannot be created on views
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+do_test trigger1-1.1.1 {
+ catchsql {
+ CREATE TRIGGER trig UPDATE ON no_such_table BEGIN
+ SELECT * from sqlite_master;
+ END;
+ }
+} {1 {no such table: main.no_such_table}}
+
+ifcapable tempdb {
+ do_test trigger1-1.1.2 {
+ catchsql {
+ CREATE TEMP TRIGGER trig UPDATE ON no_such_table BEGIN
+ SELECT * from sqlite_master;
+ END;
+ }
+ } {1 {no such table: no_such_table}}
+}
+
+execsql {
+ CREATE TABLE t1(a);
+}
+execsql {
+ CREATE TRIGGER tr1 INSERT ON t1 BEGIN
+ INSERT INTO t1 values(1);
+ END;
+}
+do_test trigger1-1.2.0 {
+ catchsql {
+ CREATE TRIGGER IF NOT EXISTS tr1 DELETE ON t1 BEGIN
+ SELECT * FROM sqlite_master;
+ END
+ }
+} {0 {}}
+do_test trigger1-1.2.1 {
+ catchsql {
+ CREATE TRIGGER tr1 DELETE ON t1 BEGIN
+ SELECT * FROM sqlite_master;
+ END
+ }
+} {1 {trigger tr1 already exists}}
+do_test trigger1-1.2.2 {
+ catchsql {
+ CREATE TRIGGER "tr1" DELETE ON t1 BEGIN
+ SELECT * FROM sqlite_master;
+ END
+ }
+} {1 {trigger "tr1" already exists}}
+do_test trigger1-1.2.3 {
+ catchsql {
+ CREATE TRIGGER [tr1] DELETE ON t1 BEGIN
+ SELECT * FROM sqlite_master;
+ END
+ }
+} {1 {trigger [tr1] already exists}}
+
+do_test trigger1-1.3 {
+ catchsql {
+ BEGIN;
+ CREATE TRIGGER tr2 INSERT ON t1 BEGIN
+ SELECT * from sqlite_master; END;
+ ROLLBACK;
+ CREATE TRIGGER tr2 INSERT ON t1 BEGIN
+ SELECT * from sqlite_master; END;
+ }
+} {0 {}}
+
+do_test trigger1-1.4 {
+ catchsql {
+ DROP TRIGGER IF EXISTS tr1;
+ CREATE TRIGGER tr1 DELETE ON t1 BEGIN
+ SELECT * FROM sqlite_master;
+ END
+ }
+} {0 {}}
+
+do_test trigger1-1.5 {
+ execsql {
+ BEGIN;
+ DROP TRIGGER tr2;
+ ROLLBACK;
+ DROP TRIGGER tr2;
+ }
+} {}
+
+do_test trigger1-1.6.1 {
+ catchsql {
+ DROP TRIGGER IF EXISTS biggles;
+ }
+} {0 {}}
+
+do_test trigger1-1.6.2 {
+ catchsql {
+ DROP TRIGGER biggles;
+ }
+} {1 {no such trigger: biggles}}
+
+do_test trigger1-1.7 {
+ catchsql {
+ DROP TABLE t1;
+ DROP TRIGGER tr1;
+ }
+} {1 {no such trigger: tr1}}
+
+ifcapable tempdb {
+ execsql {
+ CREATE TEMP TABLE temp_table(a);
+ }
+ do_test trigger1-1.8 {
+ execsql {
+ CREATE TRIGGER temp_trig UPDATE ON temp_table BEGIN
+ SELECT * from sqlite_master;
+ END;
+ SELECT count(*) FROM sqlite_master WHERE name = 'temp_trig';
+ }
+ } {0}
+}
+
+do_test trigger1-1.9 {
+ catchsql {
+ CREATE TRIGGER tr1 AFTER UPDATE ON sqlite_master BEGIN
+ SELECT * FROM sqlite_master;
+ END;
+ }
+} {1 {cannot create trigger on system table}}
+
+# Check to make sure that a DELETE statement within the body of
+# a trigger does not mess up the DELETE that caused the trigger to
+# run in the first place.
+#
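+# In the test below the statement deletes the rows with a=1 and a=3.  The
+# trigger then tries to delete a=old.a+2 for each of them (a=3 and a=5);
+# neither attempt disturbs the outer DELETE, so only rows 2 and 4 remain.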
+do_test trigger1-1.10 {
+ execsql {
+ create table t1(a,b);
+ insert into t1 values(1,'a');
+ insert into t1 values(2,'b');
+ insert into t1 values(3,'c');
+ insert into t1 values(4,'d');
+ create trigger r1 after delete on t1 for each row begin
+ delete from t1 WHERE a=old.a+2;
+ end;
+ delete from t1 where a=1 OR a=3;
+ select * from t1;
+ drop table t1;
+ }
+} {2 b 4 d}
+
+do_test trigger1-1.11 {
+ execsql {
+ create table t1(a,b);
+ insert into t1 values(1,'a');
+ insert into t1 values(2,'b');
+ insert into t1 values(3,'c');
+ insert into t1 values(4,'d');
+ create trigger r1 after update on t1 for each row begin
+ delete from t1 WHERE a=old.a+2;
+ end;
+ update t1 set b='x-' || b where a=1 OR a=3;
+ select * from t1;
+ drop table t1;
+ }
+} {1 x-a 2 b 4 d}
+
+# Ensure that we cannot create INSTEAD OF triggers on tables
+do_test trigger1-1.12 {
+ catchsql {
+ create table t1(a,b);
+ create trigger t1t instead of update on t1 for each row begin
+ delete from t1 WHERE a=old.a+2;
+ end;
+ }
+} {1 {cannot create INSTEAD OF trigger on table: main.t1}}
+
+ifcapable view {
+# Ensure that we cannot create BEFORE triggers on views
+do_test trigger1-1.13 {
+ catchsql {
+ create view v1 as select * from t1;
+ create trigger v1t before update on v1 for each row begin
+ delete from t1 WHERE a=old.a+2;
+ end;
+ }
+} {1 {cannot create BEFORE trigger on view: main.v1}}
+# Ensure that we cannot create AFTER triggers on views
+do_test trigger1-1.14 {
+ catchsql {
+ drop view v1;
+ create view v1 as select * from t1;
+ create trigger v1t AFTER update on v1 for each row begin
+ delete from t1 WHERE a=old.a+2;
+ end;
+ }
+} {1 {cannot create AFTER trigger on view: main.v1}}
+} ;# ifcapable view
+
+# Check for memory leaks in the trigger parser
+#
+do_test trigger1-2.1 {
+ catchsql {
+ CREATE TRIGGER r1 AFTER INSERT ON t1 BEGIN
+ SELECT * FROM; -- Syntax error
+ END;
+ }
+} {1 {near ";": syntax error}}
+do_test trigger1-2.2 {
+ catchsql {
+ CREATE TRIGGER r1 AFTER INSERT ON t1 BEGIN
+ SELECT * FROM t1;
+ SELECT * FROM; -- Syntax error
+ END;
+ }
+} {1 {near ";": syntax error}}
+
+# Create a trigger that refers to a table that might not exist.
+#
+ifcapable tempdb {
+ do_test trigger1-3.1 {
+ execsql {
+ CREATE TEMP TABLE t2(x,y);
+ }
+ catchsql {
+ CREATE TRIGGER r1 AFTER INSERT ON t1 BEGIN
+ INSERT INTO t2 VALUES(NEW.a,NEW.b);
+ END;
+ }
+ } {0 {}}
+  do_test trigger1-3.2 {
+ catchsql {
+ INSERT INTO t1 VALUES(1,2);
+ SELECT * FROM t2;
+ }
+ } {1 {no such table: main.t2}}
+  do_test trigger1-3.3 {
+ db close
+ set rc [catch {sqlite3 db test.db} err]
+ if {$rc} {lappend rc $err}
+ set rc
+ } {0}
+  do_test trigger1-3.4 {
+ catchsql {
+ INSERT INTO t1 VALUES(1,2);
+ SELECT * FROM t2;
+ }
+ } {1 {no such table: main.t2}}
+  do_test trigger1-3.5 {
+ catchsql {
+ CREATE TEMP TABLE t2(x,y);
+ INSERT INTO t1 VALUES(1,2);
+ SELECT * FROM t2;
+ }
+ } {1 {no such table: main.t2}}
+  do_test trigger1-3.6 {
+ catchsql {
+ DROP TRIGGER r1;
+ CREATE TEMP TRIGGER r1 AFTER INSERT ON t1 BEGIN
+ INSERT INTO t2 VALUES(NEW.a,NEW.b);
+ END;
+ INSERT INTO t1 VALUES(1,2);
+ SELECT * FROM t2;
+ }
+ } {0 {1 2}}
+  do_test trigger1-3.7 {
+ execsql {
+ DROP TABLE t2;
+ CREATE TABLE t2(x,y);
+ SELECT * FROM t2;
+ }
+ } {}
+
+  # There are two versions of trigger1-3.8 and trigger1-3.9. One that uses
+ # compound SELECT statements, and another that does not.
+ ifcapable compound {
+ do_test trigger1-3.8 {
+ execsql {
+ INSERT INTO t1 VALUES(3,4);
+ SELECT * FROM t1 UNION ALL SELECT * FROM t2;
+ }
+ } {1 2 3 4 3 4}
+ do_test trigger1-3.9 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ INSERT INTO t1 VALUES(5,6);
+ SELECT * FROM t1 UNION ALL SELECT * FROM t2;
+ }
+ } {1 2 3 4 5 6 3 4}
+ } ;# ifcapable compound
+ ifcapable !compound {
+ do_test trigger1-3.8 {
+ execsql {
+ INSERT INTO t1 VALUES(3,4);
+ SELECT * FROM t1;
+ SELECT * FROM t2;
+ }
+ } {1 2 3 4 3 4}
+ do_test trigger1-3.9 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ INSERT INTO t1 VALUES(5,6);
+ SELECT * FROM t1;
+ SELECT * FROM t2;
+ }
+ } {1 2 3 4 5 6 3 4}
+ } ;# ifcapable !compound
+
+ do_test trigger1-4.1 {
+ execsql {
+ CREATE TEMP TRIGGER r1 BEFORE INSERT ON t1 BEGIN
+ INSERT INTO t2 VALUES(NEW.a,NEW.b);
+ END;
+ INSERT INTO t1 VALUES(7,8);
+ SELECT * FROM t2;
+ }
+ } {3 4 7 8}
+ do_test trigger1-4.2 {
+ sqlite3 db2 test.db
+ execsql {
+ INSERT INTO t1 VALUES(9,10);
+ } db2;
+ db2 close
+ execsql {
+ SELECT * FROM t2;
+ }
+ } {3 4 7 8}
+ do_test trigger1-4.3 {
+ execsql {
+ DROP TABLE t1;
+ SELECT * FROM t2;
+ };
+ } {3 4 7 8}
+ do_test trigger1-4.4 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM t2;
+ };
+ } {3 4 7 8}
+} else {
+ execsql {
+ CREATE TABLE t2(x,y);
+ DROP TABLE t1;
+ INSERT INTO t2 VALUES(3, 4);
+ INSERT INTO t2 VALUES(7, 8);
+ }
+}
+
+
+integrity_check trigger1-5.1
+
+# Create a trigger with the same name as a table. Make sure the
+# trigger works. Then drop the trigger. Make sure the table is
+# still there.
+#
+set view_v1 {}
+ifcapable view {
+ set view_v1 {view v1}
+}
+do_test trigger1-6.1 {
+ execsql {SELECT type, name FROM sqlite_master}
+} [concat $view_v1 {table t2}]
+do_test trigger1-6.2 {
+ execsql {
+ CREATE TRIGGER t2 BEFORE DELETE ON t2 BEGIN
+      SELECT RAISE(ABORT,'deletes are not allowed');
+ END;
+ SELECT type, name FROM sqlite_master;
+ }
+} [concat $view_v1 {table t2 trigger t2}]
+do_test trigger1-6.3 {
+ catchsql {DELETE FROM t2}
+} {1 {deletes are not allowed}}
+do_test trigger1-6.4 {
+ execsql {SELECT * FROM t2}
+} {3 4 7 8}
+do_test trigger1-6.5 {
+ db close
+ sqlite3 db test.db
+ execsql {SELECT type, name FROM sqlite_master}
+} [concat $view_v1 {table t2 trigger t2}]
+do_test trigger1-6.6 {
+ execsql {
+ DROP TRIGGER t2;
+ SELECT type, name FROM sqlite_master;
+ }
+} [concat $view_v1 {table t2}]
+do_test trigger1-6.7 {
+ execsql {SELECT * FROM t2}
+} {3 4 7 8}
+do_test trigger1-6.8 {
+ db close
+ sqlite3 db test.db
+ execsql {SELECT * FROM t2}
+} {3 4 7 8}
+
+integrity_check trigger1-7.1
+
+# Check to make sure the name of a trigger can be quoted so that keywords
+# can be used as trigger names. Ticket #468
+#
+do_test trigger1-8.1 {
+ execsql {
+ CREATE TRIGGER 'trigger' AFTER INSERT ON t2 BEGIN SELECT 1; END;
+ SELECT name FROM sqlite_master WHERE type='trigger';
+ }
+} {trigger}
+do_test trigger1-8.2 {
+ execsql {
+ DROP TRIGGER 'trigger';
+ SELECT name FROM sqlite_master WHERE type='trigger';
+ }
+} {}
+do_test trigger1-8.3 {
+ execsql {
+ CREATE TRIGGER "trigger" AFTER INSERT ON t2 BEGIN SELECT 1; END;
+ SELECT name FROM sqlite_master WHERE type='trigger';
+ }
+} {trigger}
+do_test trigger1-8.4 {
+ execsql {
+ DROP TRIGGER "trigger";
+ SELECT name FROM sqlite_master WHERE type='trigger';
+ }
+} {}
+do_test trigger1-8.5 {
+ execsql {
+ CREATE TRIGGER [trigger] AFTER INSERT ON t2 BEGIN SELECT 1; END;
+ SELECT name FROM sqlite_master WHERE type='trigger';
+ }
+} {trigger}
+do_test trigger1-8.6 {
+ execsql {
+ DROP TRIGGER [trigger];
+ SELECT name FROM sqlite_master WHERE type='trigger';
+ }
+} {}
+
+ifcapable conflict {
+ # Make sure REPLACE works inside of triggers.
+ #
+  # There are two versions of trigger1-9.1 and trigger1-9.2. One that uses
+ # compound SELECT statements, and another that does not.
+ ifcapable compound {
+ do_test trigger1-9.1 {
+ execsql {
+ CREATE TABLE t3(a,b);
+ CREATE TABLE t4(x UNIQUE, b);
+ CREATE TRIGGER r34 AFTER INSERT ON t3 BEGIN
+ REPLACE INTO t4 VALUES(new.a,new.b);
+ END;
+ INSERT INTO t3 VALUES(1,2);
+ SELECT * FROM t3 UNION ALL SELECT 99, 99 UNION ALL SELECT * FROM t4;
+ }
+ } {1 2 99 99 1 2}
+ do_test trigger1-9.2 {
+ execsql {
+ INSERT INTO t3 VALUES(1,3);
+ SELECT * FROM t3 UNION ALL SELECT 99, 99 UNION ALL SELECT * FROM t4;
+ }
+ } {1 2 1 3 99 99 1 3}
+ } else {
+ do_test trigger1-9.1 {
+ execsql {
+ CREATE TABLE t3(a,b);
+ CREATE TABLE t4(x UNIQUE, b);
+ CREATE TRIGGER r34 AFTER INSERT ON t3 BEGIN
+ REPLACE INTO t4 VALUES(new.a,new.b);
+ END;
+ INSERT INTO t3 VALUES(1,2);
+ SELECT * FROM t3; SELECT 99, 99; SELECT * FROM t4;
+ }
+ } {1 2 99 99 1 2}
+ do_test trigger1-9.2 {
+ execsql {
+ INSERT INTO t3 VALUES(1,3);
+ SELECT * FROM t3; SELECT 99, 99; SELECT * FROM t4;
+ }
+ } {1 2 1 3 99 99 1 3}
+ }
+ execsql {
+ DROP TABLE t3;
+ DROP TABLE t4;
+ }
+}
+
+
+# Ticket #764. At one stage TEMP triggers would fail to re-install when the
+# schema was reloaded. The following tests ensure that TEMP triggers are
+# correctly re-installed.
+#
+# Also verify that references within trigger programs are resolved at
+# statement compile time, not trigger installation time. This means, for
+# example, that you can drop and re-create tables referenced by triggers.
+ifcapable tempdb {
+ do_test trigger1-10.0 {
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ }
+ } {}
+ do_test trigger1-10.1 {
+ execsql {
+ CREATE TABLE main.t4(a, b, c);
+ CREATE TABLE temp.t4(a, b, c);
+ CREATE TABLE aux.t4(a, b, c);
+ CREATE TABLE insert_log(db, a, b, c);
+ }
+ } {}
+ do_test trigger1-10.2 {
+ execsql {
+ CREATE TEMP TRIGGER trig1 AFTER INSERT ON main.t4 BEGIN
+ INSERT INTO insert_log VALUES('main', new.a, new.b, new.c);
+ END;
+ CREATE TEMP TRIGGER trig2 AFTER INSERT ON temp.t4 BEGIN
+ INSERT INTO insert_log VALUES('temp', new.a, new.b, new.c);
+ END;
+ CREATE TEMP TRIGGER trig3 AFTER INSERT ON aux.t4 BEGIN
+ INSERT INTO insert_log VALUES('aux', new.a, new.b, new.c);
+ END;
+ }
+ } {}
+ do_test trigger1-10.3 {
+ execsql {
+ INSERT INTO main.t4 VALUES(1, 2, 3);
+ INSERT INTO temp.t4 VALUES(4, 5, 6);
+ INSERT INTO aux.t4 VALUES(7, 8, 9);
+ }
+ } {}
+ do_test trigger1-10.4 {
+ execsql {
+ SELECT * FROM insert_log;
+ }
+ } {main 1 2 3 temp 4 5 6 aux 7 8 9}
+ do_test trigger1-10.5 {
+ execsql {
+ BEGIN;
+ INSERT INTO main.t4 VALUES(1, 2, 3);
+ INSERT INTO temp.t4 VALUES(4, 5, 6);
+ INSERT INTO aux.t4 VALUES(7, 8, 9);
+ ROLLBACK;
+ }
+ } {}
+ do_test trigger1-10.6 {
+ execsql {
+ SELECT * FROM insert_log;
+ }
+ } {main 1 2 3 temp 4 5 6 aux 7 8 9}
+ do_test trigger1-10.7 {
+ execsql {
+ DELETE FROM insert_log;
+ INSERT INTO main.t4 VALUES(11, 12, 13);
+ INSERT INTO temp.t4 VALUES(14, 15, 16);
+ INSERT INTO aux.t4 VALUES(17, 18, 19);
+ }
+ } {}
+ do_test trigger1-10.8 {
+ execsql {
+ SELECT * FROM insert_log;
+ }
+ } {main 11 12 13 temp 14 15 16 aux 17 18 19}
+  do_test trigger1-10.9 {
+ # Drop and re-create the insert_log table in a different database. Note
+ # that we can change the column names because the trigger programs don't
+ # use them explicitly.
+ execsql {
+ DROP TABLE insert_log;
+ CREATE TABLE aux.insert_log(db, d, e, f);
+ }
+ } {}
+ do_test trigger1-10.10 {
+ execsql {
+ INSERT INTO main.t4 VALUES(21, 22, 23);
+ INSERT INTO temp.t4 VALUES(24, 25, 26);
+ INSERT INTO aux.t4 VALUES(27, 28, 29);
+ }
+ } {}
+ do_test trigger1-10.11 {
+ execsql {
+ SELECT * FROM insert_log;
+ }
+ } {main 21 22 23 temp 24 25 26 aux 27 28 29}
+}
+
+do_test trigger1-11.1 {
+ catchsql {SELECT raise(abort,'message');}
+} {1 {RAISE() may only be used within a trigger-program}}
+
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/trigger2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trigger2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,742 @@
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# Regression testing of FOR EACH ROW table triggers
+#
+# 1. Trigger execution order tests.
+# These tests ensure that BEFORE and AFTER triggers are fired at the correct
+# times relative to each other and the triggering statement.
+#
+# trigger2-1.1.*: ON UPDATE trigger execution model.
+# trigger2-1.2.*: DELETE trigger execution model.
+# trigger2-1.3.*: INSERT trigger execution model.
+#
+# 2. Trigger program execution tests.
+# These tests ensure that trigger programs execute correctly (i.e. that a
+# trigger program can correctly execute INSERT, UPDATE, DELETE and SELECT
+# statements, and combinations thereof).
+#
+# 3. Selective trigger execution
+# This tests that conditional triggers (i.e. UPDATE OF triggers and triggers
+# with WHEN clauses) are fired only when they are supposed to be.
+#
+# trigger2-3.1: UPDATE OF triggers
+# trigger2-3.2: WHEN clause
+#
+# 4. Cascaded trigger execution
+# Tests that trigger-programs may cause other triggers to fire. Also that a
+# trigger-program is never executed recursively.
+#
+# trigger2-4.1: Trivial cascading trigger
+# trigger2-4.2: Trivial recursive trigger handling
+#
+# 5. Count changes behaviour.
+# Verify that rows altered by triggers are not included in the return value
+# of the "count changes" interface.
+#
+# 6. ON CONFLICT clause handling
+# trigger2-6.1[a-f]: INSERT statements
+# trigger2-6.2[a-f]: UPDATE statements
+#
+# 7. & 8. Triggers on views fire correctly.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+# 1.
+ifcapable subquery {
+ set ii 0
+ set tbl_definitions [list \
+ {CREATE TABLE tbl (a, b);} \
+ {CREATE TABLE tbl (a INTEGER PRIMARY KEY, b);} \
+ {CREATE TABLE tbl (a, b PRIMARY KEY);} \
+ {CREATE TABLE tbl (a, b); CREATE INDEX tbl_idx ON tbl(b);} \
+ ]
+ ifcapable tempdb {
+ lappend tbl_definitions \
+ {CREATE TEMP TABLE tbl (a, b); CREATE INDEX tbl_idx ON tbl(b);}
+ lappend tbl_definitions {CREATE TEMP TABLE tbl (a, b);}
+ lappend tbl_definitions \
+ {CREATE TEMPORARY TABLE tbl (a INTEGER PRIMARY KEY, b);}
+ }
+ foreach tbl_defn $tbl_definitions {
+ incr ii
+ catchsql { DROP INDEX tbl_idx; }
+ catchsql {
+ DROP TABLE rlog;
+ DROP TABLE clog;
+ DROP TABLE tbl;
+ DROP TABLE other_tbl;
+ }
+
+ execsql $tbl_defn
+
+ execsql {
+ INSERT INTO tbl VALUES(1, 2);
+ INSERT INTO tbl VALUES(3, 4);
+
+ CREATE TABLE rlog (idx, old_a, old_b, db_sum_a, db_sum_b, new_a, new_b);
+ CREATE TABLE clog (idx, old_a, old_b, db_sum_a, db_sum_b, new_a, new_b);
+
+ CREATE TRIGGER before_update_row BEFORE UPDATE ON tbl FOR EACH ROW
+ BEGIN
+ INSERT INTO rlog VALUES ( (SELECT coalesce(max(idx),0) + 1 FROM rlog),
+ old.a, old.b,
+ (SELECT coalesce(sum(a),0) FROM tbl),
+ (SELECT coalesce(sum(b),0) FROM tbl),
+ new.a, new.b);
+ END;
+
+ CREATE TRIGGER after_update_row AFTER UPDATE ON tbl FOR EACH ROW
+ BEGIN
+ INSERT INTO rlog VALUES ( (SELECT coalesce(max(idx),0) + 1 FROM rlog),
+ old.a, old.b,
+ (SELECT coalesce(sum(a),0) FROM tbl),
+ (SELECT coalesce(sum(b),0) FROM tbl),
+ new.a, new.b);
+ END;
+
+ CREATE TRIGGER conditional_update_row AFTER UPDATE ON tbl FOR EACH ROW
+ WHEN old.a = 1
+ BEGIN
+ INSERT INTO clog VALUES ( (SELECT coalesce(max(idx),0) + 1 FROM clog),
+ old.a, old.b,
+ (SELECT coalesce(sum(a),0) FROM tbl),
+ (SELECT coalesce(sum(b),0) FROM tbl),
+ new.a, new.b);
+ END;
+ }
+
+ do_test trigger2-1.$ii.1 {
+ set r {}
+ foreach v [execsql {
+ UPDATE tbl SET a = a * 10, b = b * 10;
+ SELECT * FROM rlog ORDER BY idx;
+ SELECT * FROM clog ORDER BY idx;
+ }] {
+ lappend r [expr {int($v)}]
+ }
+ set r
+ } [list 1 1 2 4 6 10 20 \
+ 2 1 2 13 24 10 20 \
+ 3 3 4 13 24 30 40 \
+ 4 3 4 40 60 30 40 \
+ 1 1 2 13 24 10 20 ]
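+    # Reading the expected rlog rows above: for each of the two rows, the
+    # BEFORE trigger logs the table sums as they stand before that row is
+    # changed (4,6 and then 13,24) and the AFTER trigger logs them after
+    # the change (13,24 and then 40,60).  clog receives a single row
+    # because its WHEN clause (old.a = 1) matches only the first row.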
+
+ execsql {
+ DELETE FROM rlog;
+ DELETE FROM tbl;
+ INSERT INTO tbl VALUES (100, 100);
+ INSERT INTO tbl VALUES (300, 200);
+ CREATE TRIGGER delete_before_row BEFORE DELETE ON tbl FOR EACH ROW
+ BEGIN
+ INSERT INTO rlog VALUES ( (SELECT coalesce(max(idx),0) + 1 FROM rlog),
+ old.a, old.b,
+ (SELECT coalesce(sum(a),0) FROM tbl),
+ (SELECT coalesce(sum(b),0) FROM tbl),
+ 0, 0);
+ END;
+
+ CREATE TRIGGER delete_after_row AFTER DELETE ON tbl FOR EACH ROW
+ BEGIN
+ INSERT INTO rlog VALUES ( (SELECT coalesce(max(idx),0) + 1 FROM rlog),
+ old.a, old.b,
+ (SELECT coalesce(sum(a),0) FROM tbl),
+ (SELECT coalesce(sum(b),0) FROM tbl),
+ 0, 0);
+ END;
+ }
+ do_test trigger2-1.$ii.2 {
+ set r {}
+ foreach v [execsql {
+ DELETE FROM tbl;
+ SELECT * FROM rlog;
+ }] {
+ lappend r [expr {int($v)}]
+ }
+ set r
+ } [list 1 100 100 400 300 0 0 \
+ 2 100 100 300 200 0 0 \
+ 3 300 200 300 200 0 0 \
+ 4 300 200 0 0 0 0 ]
+
+ execsql {
+ DELETE FROM rlog;
+ CREATE TRIGGER insert_before_row BEFORE INSERT ON tbl FOR EACH ROW
+ BEGIN
+ INSERT INTO rlog VALUES ( (SELECT coalesce(max(idx),0) + 1 FROM rlog),
+ 0, 0,
+ (SELECT coalesce(sum(a),0) FROM tbl),
+ (SELECT coalesce(sum(b),0) FROM tbl),
+ new.a, new.b);
+ END;
+
+ CREATE TRIGGER insert_after_row AFTER INSERT ON tbl FOR EACH ROW
+ BEGIN
+ INSERT INTO rlog VALUES ( (SELECT coalesce(max(idx),0) + 1 FROM rlog),
+ 0, 0,
+ (SELECT coalesce(sum(a),0) FROM tbl),
+ (SELECT coalesce(sum(b),0) FROM tbl),
+ new.a, new.b);
+ END;
+ }
+ do_test trigger2-1.$ii.3 {
+ execsql {
+
+ CREATE TABLE other_tbl(a, b);
+ INSERT INTO other_tbl VALUES(1, 2);
+ INSERT INTO other_tbl VALUES(3, 4);
+ -- INSERT INTO tbl SELECT * FROM other_tbl;
+ INSERT INTO tbl VALUES(5, 6);
+ DROP TABLE other_tbl;
+
+ SELECT * FROM rlog;
+ }
+ } [list 1 0 0 0 0 5 6 \
+ 2 0 0 5 6 5 6 ]
+
+ integrity_check trigger2-1.$ii.4
+ }
+ catchsql {
+ DROP TABLE rlog;
+ DROP TABLE clog;
+ DROP TABLE tbl;
+ DROP TABLE other_tbl;
+ }
+}
+
+# 2.
+set ii 0
+foreach tr_program {
+ {UPDATE tbl SET b = old.b;}
+ {INSERT INTO log VALUES(new.c, 2, 3);}
+ {DELETE FROM log WHERE a = 1;}
+ {INSERT INTO tbl VALUES(500, new.b * 10, 700);
+ UPDATE tbl SET c = old.c;
+ DELETE FROM log;}
+ {INSERT INTO log select * from tbl;}
+} {
+ foreach test_varset [ list \
+ {
+ set statement {UPDATE tbl SET c = 10 WHERE a = 1;}
+ set prep {INSERT INTO tbl VALUES(1, 2, 3);}
+ set newC 10
+ set newB 2
+ set newA 1
+ set oldA 1
+ set oldB 2
+ set oldC 3
+ } \
+ {
+ set statement {DELETE FROM tbl WHERE a = 1;}
+ set prep {INSERT INTO tbl VALUES(1, 2, 3);}
+ set oldA 1
+ set oldB 2
+ set oldC 3
+ } \
+ {
+ set statement {INSERT INTO tbl VALUES(1, 2, 3);}
+ set newA 1
+ set newB 2
+ set newC 3
+ }
+ ] \
+ {
+ set statement {}
+ set prep {}
+ set newA {''}
+ set newB {''}
+ set newC {''}
+ set oldA {''}
+ set oldB {''}
+ set oldC {''}
+
+ incr ii
+
+ eval $test_varset
+
+ set statement_type [string range $statement 0 5]
+ set tr_program_fixed $tr_program
+ if {$statement_type == "DELETE"} {
+ regsub -all new\.a $tr_program_fixed {''} tr_program_fixed
+ regsub -all new\.b $tr_program_fixed {''} tr_program_fixed
+ regsub -all new\.c $tr_program_fixed {''} tr_program_fixed
+ }
+ if {$statement_type == "INSERT"} {
+ regsub -all old\.a $tr_program_fixed {''} tr_program_fixed
+ regsub -all old\.b $tr_program_fixed {''} tr_program_fixed
+ regsub -all old\.c $tr_program_fixed {''} tr_program_fixed
+ }
+
+
+ set tr_program_cooked $tr_program
+ regsub -all new\.a $tr_program_cooked $newA tr_program_cooked
+ regsub -all new\.b $tr_program_cooked $newB tr_program_cooked
+ regsub -all new\.c $tr_program_cooked $newC tr_program_cooked
+ regsub -all old\.a $tr_program_cooked $oldA tr_program_cooked
+ regsub -all old\.b $tr_program_cooked $oldB tr_program_cooked
+ regsub -all old\.c $tr_program_cooked $oldC tr_program_cooked
+
+ catchsql {
+ DROP TABLE tbl;
+ DROP TABLE log;
+ }
+
+ execsql {
+ CREATE TABLE tbl(a PRIMARY KEY, b, c);
+ CREATE TABLE log(a, b, c);
+ }
+
+ set query {SELECT * FROM tbl; SELECT * FROM log;}
+ set prep "$prep; INSERT INTO log VALUES(1, 2, 3);\
+ INSERT INTO log VALUES(10, 20, 30);"
+
+# Check execution of BEFORE programs:
+
+ set before_data [ execsql "$prep $tr_program_cooked $statement $query" ]
+
+ execsql "DELETE FROM tbl; DELETE FROM log; $prep";
+ execsql "CREATE TRIGGER the_trigger BEFORE [string range $statement 0 6]\
+ ON tbl BEGIN $tr_program_fixed END;"
+
+ do_test trigger2-2.$ii-before "execsql {$statement $query}" $before_data
+
+ execsql "DROP TRIGGER the_trigger;"
+ execsql "DELETE FROM tbl; DELETE FROM log;"
+
+# Check execution of AFTER programs
+ set after_data [ execsql "$prep $statement $tr_program_cooked $query" ]
+
+ execsql "DELETE FROM tbl; DELETE FROM log; $prep";
+ execsql "CREATE TRIGGER the_trigger AFTER [string range $statement 0 6]\
+ ON tbl BEGIN $tr_program_fixed END;"
+
+ do_test trigger2-2.$ii-after "execsql {$statement $query}" $after_data
+ execsql "DROP TRIGGER the_trigger;"
+
+ integrity_check trigger2-2.$ii-integrity
+ }
+}
+catchsql {
+ DROP TABLE tbl;
+ DROP TABLE log;
+}
+
+# 3.
+
+# trigger2-3.1: UPDATE OF triggers
+execsql {
+ CREATE TABLE tbl (a, b, c, d);
+ CREATE TABLE log (a);
+ INSERT INTO log VALUES (0);
+ INSERT INTO tbl VALUES (0, 0, 0, 0);
+ INSERT INTO tbl VALUES (1, 0, 0, 0);
+ CREATE TRIGGER tbl_after_update_cd BEFORE UPDATE OF c, d ON tbl
+ BEGIN
+ UPDATE log SET a = a + 1;
+ END;
+}
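+# The trigger above fires once per affected row, but only for UPDATE
+# statements that mention column c or d.  In trigger2-3.1 below that is
+# 2 rows for the first UPDATE, 0 for the second, 1 for the third and 0
+# for the fourth, so the log ends up at 3.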
+do_test trigger2-3.1 {
+ execsql {
+ UPDATE tbl SET b = 1, c = 10; -- 2
+ UPDATE tbl SET b = 10; -- 0
+ UPDATE tbl SET d = 4 WHERE a = 0; --1
+ UPDATE tbl SET a = 4, b = 10; --0
+ SELECT * FROM log;
+ }
+} {3}
+execsql {
+ DROP TABLE tbl;
+ DROP TABLE log;
+}
+
+# trigger2-3.2: WHEN clause
+set when_triggers [list {t1 BEFORE INSERT ON tbl WHEN new.a > 20}]
+ifcapable subquery {
+ lappend when_triggers \
+ {t2 BEFORE INSERT ON tbl WHEN (SELECT count(*) FROM tbl) = 0}
+}
+
+execsql {
+ CREATE TABLE tbl (a, b, c, d);
+ CREATE TABLE log (a);
+ INSERT INTO log VALUES (0);
+}
+
+foreach trig $when_triggers {
+ execsql "CREATE TRIGGER $trig BEGIN UPDATE log set a = a + 1; END;"
+}
+
+ifcapable subquery {
+ set t232 {1 0 1}
+} else {
+ set t232 {0 0 1}
+}
+do_test trigger2-3.2 {
+ execsql {
+
+ INSERT INTO tbl VALUES(0, 0, 0, 0); -- 1 (ifcapable subquery)
+ SELECT * FROM log;
+ UPDATE log SET a = 0;
+
+ INSERT INTO tbl VALUES(0, 0, 0, 0); -- 0
+ SELECT * FROM log;
+ UPDATE log SET a = 0;
+
+ INSERT INTO tbl VALUES(200, 0, 0, 0); -- 1
+ SELECT * FROM log;
+ UPDATE log SET a = 0;
+ }
+} $t232
+execsql {
+ DROP TABLE tbl;
+ DROP TABLE log;
+}
+integrity_check trigger2-3.3
+
+# Simple cascaded trigger
+execsql {
+ CREATE TABLE tblA(a, b);
+ CREATE TABLE tblB(a, b);
+ CREATE TABLE tblC(a, b);
+
+ CREATE TRIGGER tr1 BEFORE INSERT ON tblA BEGIN
+ INSERT INTO tblB values(new.a, new.b);
+ END;
+
+ CREATE TRIGGER tr2 BEFORE INSERT ON tblB BEGIN
+ INSERT INTO tblC values(new.a, new.b);
+ END;
+}
+do_test trigger2-4.1 {
+ execsql {
+ INSERT INTO tblA values(1, 2);
+ SELECT * FROM tblA;
+ SELECT * FROM tblB;
+ SELECT * FROM tblC;
+ }
+} {1 2 1 2 1 2}
+execsql {
+ DROP TABLE tblA;
+ DROP TABLE tblB;
+ DROP TABLE tblC;
+}
+
+# Simple recursive trigger
+execsql {
+ CREATE TABLE tbl(a, b, c);
+ CREATE TRIGGER tbl_trig BEFORE INSERT ON tbl
+ BEGIN
+ INSERT INTO tbl VALUES (new.a, new.b, new.c);
+ END;
+}
+do_test trigger2-4.2 {
+ execsql {
+ INSERT INTO tbl VALUES (1, 2, 3);
+ select * from tbl;
+ }
+} {1 2 3 1 2 3}
+execsql {
+ DROP TABLE tbl;
+}
+
+# 5.
+execsql {
+ CREATE TABLE tbl(a, b, c);
+ CREATE TRIGGER tbl_trig BEFORE INSERT ON tbl
+ BEGIN
+ INSERT INTO tbl VALUES (1, 2, 3);
+ INSERT INTO tbl VALUES (2, 2, 3);
+ UPDATE tbl set b = 10 WHERE a = 1;
+ DELETE FROM tbl WHERE a = 1;
+ DELETE FROM tbl;
+ END;
+}
+do_test trigger2-5 {
+ execsql {
+ INSERT INTO tbl VALUES(100, 200, 300);
+ }
+ db changes
+} {1}
+execsql {
+ DROP TABLE tbl;
+}
+
+ifcapable conflict {
+ # Handling of ON CONFLICT by INSERT statements inside triggers
+ execsql {
+ CREATE TABLE tbl (a primary key, b, c);
+ CREATE TRIGGER ai_tbl AFTER INSERT ON tbl BEGIN
+ INSERT OR IGNORE INTO tbl values (new.a, 0, 0);
+ END;
+ }
+ do_test trigger2-6.1a {
+ execsql {
+ BEGIN;
+ INSERT INTO tbl values (1, 2, 3);
+ SELECT * from tbl;
+ }
+ } {1 2 3}
+ do_test trigger2-6.1b {
+ catchsql {
+ INSERT OR ABORT INTO tbl values (2, 2, 3);
+ }
+ } {1 {column a is not unique}}
+ do_test trigger2-6.1c {
+ execsql {
+ SELECT * from tbl;
+ }
+ } {1 2 3}
+ do_test trigger2-6.1d {
+ catchsql {
+ INSERT OR FAIL INTO tbl values (2, 2, 3);
+ }
+ } {1 {column a is not unique}}
+ do_test trigger2-6.1e {
+ execsql {
+ SELECT * from tbl;
+ }
+ } {1 2 3 2 2 3}
+ do_test trigger2-6.1f {
+ execsql {
+ INSERT OR REPLACE INTO tbl values (2, 2, 3);
+ SELECT * from tbl;
+ }
+ } {1 2 3 2 0 0}
+ do_test trigger2-6.1g {
+ catchsql {
+ INSERT OR ROLLBACK INTO tbl values (3, 2, 3);
+ }
+ } {1 {column a is not unique}}
+ do_test trigger2-6.1h {
+ execsql {
+ SELECT * from tbl;
+ }
+ } {}
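+  # The table is empty here because the OR ROLLBACK conflict in
+  # trigger2-6.1g rolled back the entire transaction opened in
+  # trigger2-6.1a, discarding every row inserted since that BEGIN.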
+ execsql {DELETE FROM tbl}
+
+
+ # Handling of ON CONFLICT by UPDATE statements inside triggers
+ execsql {
+ INSERT INTO tbl values (4, 2, 3);
+ INSERT INTO tbl values (6, 3, 4);
+ CREATE TRIGGER au_tbl AFTER UPDATE ON tbl BEGIN
+ UPDATE OR IGNORE tbl SET a = new.a, c = 10;
+ END;
+ }
+ do_test trigger2-6.2a {
+ execsql {
+ BEGIN;
+ UPDATE tbl SET a = 1 WHERE a = 4;
+ SELECT * from tbl;
+ }
+ } {1 2 10 6 3 4}
+ do_test trigger2-6.2b {
+ catchsql {
+ UPDATE OR ABORT tbl SET a = 4 WHERE a = 1;
+ }
+ } {1 {column a is not unique}}
+ do_test trigger2-6.2c {
+ execsql {
+ SELECT * from tbl;
+ }
+ } {1 2 10 6 3 4}
+ do_test trigger2-6.2d {
+ catchsql {
+ UPDATE OR FAIL tbl SET a = 4 WHERE a = 1;
+ }
+ } {1 {column a is not unique}}
+ do_test trigger2-6.2e {
+ execsql {
+ SELECT * from tbl;
+ }
+ } {4 2 10 6 3 4}
+ do_test trigger2-6.2f.1 {
+ execsql {
+ UPDATE OR REPLACE tbl SET a = 1 WHERE a = 4;
+ SELECT * from tbl;
+ }
+ } {1 3 10}
+ do_test trigger2-6.2f.2 {
+ execsql {
+ INSERT INTO tbl VALUES (2, 3, 4);
+ SELECT * FROM tbl;
+ }
+ } {1 3 10 2 3 4}
+ do_test trigger2-6.2g {
+ catchsql {
+ UPDATE OR ROLLBACK tbl SET a = 4 WHERE a = 1;
+ }
+ } {1 {column a is not unique}}
+ do_test trigger2-6.2h {
+ execsql {
+ SELECT * from tbl;
+ }
+ } {4 2 3 6 3 4}
+ execsql {
+ DROP TABLE tbl;
+ }
+} ; # ifcapable conflict
+
+# 7. Triggers on views
+ifcapable view {
+
+do_test trigger2-7.1 {
+ execsql {
+ CREATE TABLE ab(a, b);
+ CREATE TABLE cd(c, d);
+ INSERT INTO ab VALUES (1, 2);
+ INSERT INTO ab VALUES (0, 0);
+ INSERT INTO cd VALUES (3, 4);
+
+ CREATE TABLE tlog(ii INTEGER PRIMARY KEY,
+ olda, oldb, oldc, oldd, newa, newb, newc, newd);
+
+ CREATE VIEW abcd AS SELECT a, b, c, d FROM ab, cd;
+
+ CREATE TRIGGER before_update INSTEAD OF UPDATE ON abcd BEGIN
+ INSERT INTO tlog VALUES(NULL,
+ old.a, old.b, old.c, old.d, new.a, new.b, new.c, new.d);
+ END;
+ CREATE TRIGGER after_update INSTEAD OF UPDATE ON abcd BEGIN
+ INSERT INTO tlog VALUES(NULL,
+ old.a, old.b, old.c, old.d, new.a, new.b, new.c, new.d);
+ END;
+
+ CREATE TRIGGER before_delete INSTEAD OF DELETE ON abcd BEGIN
+ INSERT INTO tlog VALUES(NULL,
+ old.a, old.b, old.c, old.d, 0, 0, 0, 0);
+ END;
+ CREATE TRIGGER after_delete INSTEAD OF DELETE ON abcd BEGIN
+ INSERT INTO tlog VALUES(NULL,
+ old.a, old.b, old.c, old.d, 0, 0, 0, 0);
+ END;
+
+ CREATE TRIGGER before_insert INSTEAD OF INSERT ON abcd BEGIN
+ INSERT INTO tlog VALUES(NULL,
+ 0, 0, 0, 0, new.a, new.b, new.c, new.d);
+ END;
+ CREATE TRIGGER after_insert INSTEAD OF INSERT ON abcd BEGIN
+ INSERT INTO tlog VALUES(NULL,
+ 0, 0, 0, 0, new.a, new.b, new.c, new.d);
+ END;
+ }
+} {};
+
+do_test trigger2-7.2 {
+ execsql {
+ UPDATE abcd SET a = 100, b = 5*5 WHERE a = 1;
+ DELETE FROM abcd WHERE a = 1;
+ INSERT INTO abcd VALUES(10, 20, 30, 40);
+ SELECT * FROM tlog;
+ }
+} [ list 1 1 2 3 4 100 25 3 4 \
+ 2 1 2 3 4 100 25 3 4 \
+ 3 1 2 3 4 0 0 0 0 \
+ 4 1 2 3 4 0 0 0 0 \
+ 5 0 0 0 0 10 20 30 40 \
+ 6 0 0 0 0 10 20 30 40 ]
+
+do_test trigger2-7.3 {
+ execsql {
+ DELETE FROM tlog;
+ INSERT INTO abcd VALUES(10, 20, 30, 40);
+ UPDATE abcd SET a = 100, b = 5*5 WHERE a = 1;
+ DELETE FROM abcd WHERE a = 1;
+ SELECT * FROM tlog;
+ }
+} [ list \
+ 1 0 0 0 0 10 20 30 40 \
+ 2 0 0 0 0 10 20 30 40 \
+ 3 1 2 3 4 100 25 3 4 \
+ 4 1 2 3 4 100 25 3 4 \
+ 5 1 2 3 4 0 0 0 0 \
+ 6 1 2 3 4 0 0 0 0 \
+]
+do_test trigger2-7.4 {
+ execsql {
+ DELETE FROM tlog;
+ DELETE FROM abcd WHERE a = 1;
+ INSERT INTO abcd VALUES(10, 20, 30, 40);
+ UPDATE abcd SET a = 100, b = 5*5 WHERE a = 1;
+ SELECT * FROM tlog;
+ }
+} [ list \
+ 1 1 2 3 4 0 0 0 0 \
+ 2 1 2 3 4 0 0 0 0 \
+ 3 0 0 0 0 10 20 30 40 \
+ 4 0 0 0 0 10 20 30 40 \
+ 5 1 2 3 4 100 25 3 4 \
+ 6 1 2 3 4 100 25 3 4 \
+]
+
+do_test trigger2-8.1 {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(1,2,3);
+ CREATE VIEW v1 AS
+ SELECT a+b AS x, b+c AS y, a+c AS z FROM t1;
+ SELECT * FROM v1;
+ }
+} {3 5 4}
+do_test trigger2-8.2 {
+ execsql {
+ CREATE TABLE v1log(a,b,c,d,e,f);
+ CREATE TRIGGER r1 INSTEAD OF DELETE ON v1 BEGIN
+ INSERT INTO v1log VALUES(OLD.x,NULL,OLD.y,NULL,OLD.z,NULL);
+ END;
+ DELETE FROM v1 WHERE x=1;
+ SELECT * FROM v1log;
+ }
+} {}
+do_test trigger2-8.3 {
+ execsql {
+ DELETE FROM v1 WHERE x=3;
+ SELECT * FROM v1log;
+ }
+} {3 {} 5 {} 4 {}}
+do_test trigger2-8.4 {
+ execsql {
+ INSERT INTO t1 VALUES(4,5,6);
+ DELETE FROM v1log;
+ DELETE FROM v1 WHERE y=11;
+ SELECT * FROM v1log;
+ }
+} {9 {} 11 {} 10 {}}
+do_test trigger2-8.5 {
+ execsql {
+ CREATE TRIGGER r2 INSTEAD OF INSERT ON v1 BEGIN
+ INSERT INTO v1log VALUES(NULL,NEW.x,NULL,NEW.y,NULL,NEW.z);
+ END;
+ DELETE FROM v1log;
+ INSERT INTO v1 VALUES(1,2,3);
+ SELECT * FROM v1log;
+ }
+} {{} 1 {} 2 {} 3}
+do_test trigger2-8.6 {
+ execsql {
+ CREATE TRIGGER r3 INSTEAD OF UPDATE ON v1 BEGIN
+ INSERT INTO v1log VALUES(OLD.x,NEW.x,OLD.y,NEW.y,OLD.z,NEW.z);
+ END;
+ DELETE FROM v1log;
+ UPDATE v1 SET x=x+100, y=y+200, z=z+300;
+ SELECT * FROM v1log;
+ }
+} {3 103 5 205 4 304 9 109 11 211 10 310}
+
+} ;# ifcapable view
+
+integrity_check trigger2-9.9
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/trigger3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trigger3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,176 @@
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file tests the RAISE() function.
+#
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+# Test that we can cause ROLLBACK, FAIL and ABORT correctly
+# catchsql { DROP TABLE tbl; }
+catchsql { CREATE TABLE tbl (a, b, c) }
+
+execsql {
+ CREATE TRIGGER before_tbl_insert BEFORE INSERT ON tbl BEGIN SELECT CASE
+ WHEN (new.a = 4) THEN RAISE(IGNORE) END;
+ END;
+
+ CREATE TRIGGER after_tbl_insert AFTER INSERT ON tbl BEGIN SELECT CASE
+ WHEN (new.a = 1) THEN RAISE(ABORT, 'Trigger abort')
+ WHEN (new.a = 2) THEN RAISE(FAIL, 'Trigger fail')
+ WHEN (new.a = 3) THEN RAISE(ROLLBACK, 'Trigger rollback') END;
+ END;
+}
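+# A reminder of the semantics exercised below: RAISE(ABORT) undoes the
+# current statement but leaves the open transaction intact, RAISE(FAIL)
+# stops the statement without undoing changes it has already made,
+# RAISE(ROLLBACK) aborts the whole transaction, and RAISE(IGNORE)
+# silently skips the current row and continues.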
+# ABORT
+do_test trigger3-1.1 {
+ catchsql {
+ BEGIN;
+ INSERT INTO tbl VALUES (5, 5, 6);
+ INSERT INTO tbl VALUES (1, 5, 6);
+ }
+} {1 {Trigger abort}}
+do_test trigger3-1.2 {
+ execsql {
+ SELECT * FROM tbl;
+ ROLLBACK;
+ }
+} {5 5 6}
+do_test trigger3-1.3 {
+ execsql {SELECT * FROM tbl}
+} {}
+
+# FAIL
+do_test trigger3-2.1 {
+ catchsql {
+ BEGIN;
+ INSERT INTO tbl VALUES (5, 5, 6);
+ INSERT INTO tbl VALUES (2, 5, 6);
+ }
+} {1 {Trigger fail}}
+do_test trigger3-2.2 {
+ execsql {
+ SELECT * FROM tbl;
+ ROLLBACK;
+ }
+} {5 5 6 2 5 6}
+# ROLLBACK
+do_test trigger3-3.1 {
+ catchsql {
+ BEGIN;
+ INSERT INTO tbl VALUES (5, 5, 6);
+ INSERT INTO tbl VALUES (3, 5, 6);
+ }
+} {1 {Trigger rollback}}
+do_test trigger3-3.2 {
+ execsql {
+ SELECT * FROM tbl;
+ }
+} {}
+# IGNORE
+do_test trigger3-4.1 {
+ catchsql {
+ BEGIN;
+ INSERT INTO tbl VALUES (5, 5, 6);
+ INSERT INTO tbl VALUES (4, 5, 6);
+ }
+} {0 {}}
+do_test trigger3-4.2 {
+ execsql {
+ SELECT * FROM tbl;
+ ROLLBACK;
+ }
+} {5 5 6}
+
+# Check that we can also do RAISE(IGNORE) for UPDATE and DELETE
+execsql {DROP TABLE tbl;}
+execsql {CREATE TABLE tbl (a, b, c);}
+execsql {INSERT INTO tbl VALUES(1, 2, 3);}
+execsql {INSERT INTO tbl VALUES(4, 5, 6);}
+execsql {
+ CREATE TRIGGER before_tbl_update BEFORE UPDATE ON tbl BEGIN
+ SELECT CASE WHEN (old.a = 1) THEN RAISE(IGNORE) END;
+ END;
+
+ CREATE TRIGGER before_tbl_delete BEFORE DELETE ON tbl BEGIN
+ SELECT CASE WHEN (old.a = 1) THEN RAISE(IGNORE) END;
+ END;
+}
+do_test trigger3-5.1 {
+ execsql {
+ UPDATE tbl SET c = 10;
+ SELECT * FROM tbl;
+ }
+} {1 2 3 4 5 10}
+do_test trigger3-5.2 {
+ execsql {
+ DELETE FROM tbl;
+ SELECT * FROM tbl;
+ }
+} {1 2 3}
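+# In both tests above the BEFORE triggers RAISE(IGNORE) for the row with
+# a = 1, so that row is neither updated (its c stays 3) nor deleted,
+# while the row with a = 4 is updated and then removed.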
+
+# Check that RAISE(IGNORE) works correctly for nested triggers:
+execsql {CREATE TABLE tbl2(a, b, c)}
+execsql {
+ CREATE TRIGGER after_tbl2_insert AFTER INSERT ON tbl2 BEGIN
+ UPDATE tbl SET c = 10;
+ INSERT INTO tbl2 VALUES (new.a, new.b, new.c);
+ END;
+}
+do_test trigger3-6 {
+ execsql {
+ INSERT INTO tbl2 VALUES (1, 2, 3);
+ SELECT * FROM tbl2;
+ SELECT * FROM tbl;
+ }
+} {1 2 3 1 2 3 1 2 3}
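+# Two things are checked above: the RAISE(IGNORE) fired from the nested
+# UPDATE trigger skips that row without abandoning the rest of the outer
+# trigger program (tbl keeps c = 3 and the second INSERT into tbl2 still
+# runs), and that recursive INSERT does not fire the trigger again.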
+
+# Check that things also work for view-triggers
+
+ifcapable view {
+
+execsql {CREATE VIEW tbl_view AS SELECT * FROM tbl}
+execsql {
+ CREATE TRIGGER tbl_view_insert INSTEAD OF INSERT ON tbl_view BEGIN
+ SELECT CASE WHEN (new.a = 1) THEN RAISE(ROLLBACK, 'View rollback')
+ WHEN (new.a = 2) THEN RAISE(IGNORE)
+ WHEN (new.a = 3) THEN RAISE(ABORT, 'View abort') END;
+ END;
+}
+
+do_test trigger3-7.1 {
+ catchsql {
+ INSERT INTO tbl_view VALUES(1, 2, 3);
+ }
+} {1 {View rollback}}
+do_test trigger3-7.2 {
+ catchsql {
+ INSERT INTO tbl_view VALUES(2, 2, 3);
+ }
+} {0 {}}
+do_test trigger3-7.3 {
+ catchsql {
+ INSERT INTO tbl_view VALUES(3, 2, 3);
+ }
+} {1 {View abort}}
+
+} ;# ifcapable view
+
+integrity_check trigger3-8.1
+
+catchsql { DROP TABLE tbl; }
+catchsql { DROP TABLE tbl2; }
+catchsql { DROP VIEW tbl_view; }
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/trigger4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trigger4.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,200 @@
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file tests the triggers of views.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If either views or triggers are disabled in this build, omit this file.
+ifcapable {!trigger || !view} {
+ finish_test
+ return
+}
+
+do_test trigger4-1.1 {
+ execsql {
+ create table test1(id integer primary key,a);
+ create table test2(id integer,b);
+ create view test as
+ select test1.id as id,a as a,b as b
+ from test1 join test2 on test2.id = test1.id;
+ create trigger I_test instead of insert on test
+ begin
+ insert into test1 (id,a) values (NEW.id,NEW.a);
+ insert into test2 (id,b) values (NEW.id,NEW.b);
+ end;
+ insert into test values(1,2,3);
+ select * from test1;
+ }
+} {1 2}
+do_test trigger4-1.2 {
+ execsql {
+ select * from test2;
+ }
+} {1 3}
+do_test trigger4-1.3 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ insert into test values(4,5,6);
+ select * from test1;
+ }
+} {1 2 4 5}
+do_test trigger4-1.4 {
+ execsql {
+ select * from test2;
+ }
+} {1 3 4 6}
+
+do_test trigger4-2.1 {
+ execsql {
+ create trigger U_test instead of update on test
+ begin
+ update test1 set a=NEW.a where id=NEW.id;
+ update test2 set b=NEW.b where id=NEW.id;
+ end;
+ update test set a=22 where id=1;
+ select * from test1;
+ }
+} {1 22 4 5}
+do_test trigger4-2.2 {
+ execsql {
+ select * from test2;
+ }
+} {1 3 4 6}
+do_test trigger4-2.3 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ update test set b=66 where id=4;
+ select * from test1;
+ }
+} {1 22 4 5}
+do_test trigger4-2.4 {
+ execsql {
+ select * from test2;
+ }
+} {1 3 4 66}
+
+do_test trigger4-3.1 {
+ catchsql {
+ drop table test2;
+ insert into test values(7,8,9);
+ }
+} {1 {no such table: main.test2}}
+do_test trigger4-3.2 {
+ db close
+ sqlite3 db test.db
+ catchsql {
+ insert into test values(7,8,9);
+ }
+} {1 {no such table: main.test2}}
+do_test trigger4-3.3 {
+ catchsql {
+ update test set a=222 where id=1;
+ }
+} {1 {no such table: main.test2}}
+do_test trigger4-3.4 {
+ execsql {
+ select * from test1;
+ }
+} {1 22 4 5}
+do_test trigger4-3.5 {
+ execsql {
+ create table test2(id,b);
+ insert into test values(7,8,9);
+ select * from test1;
+ }
+} {1 22 4 5 7 8}
+do_test trigger4-3.6 {
+ execsql {
+ select * from test2;
+ }
+} {7 9}
+do_test trigger4-3.7 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ update test set b=99 where id=7;
+ select * from test2;
+ }
+} {7 99}
+
+do_test trigger4-4.1 {
+ db close
+ file delete -force trigtest.db
+ file delete -force trigtest.db-journal
+ sqlite3 db trigtest.db
+ catchsql {drop table tbl; drop view vw}
+ execsql {
+ create table tbl(a integer primary key, b integer);
+ create view vw as select * from tbl;
+ create trigger t_del_tbl instead of delete on vw for each row begin
+ delete from tbl where a = old.a;
+ end;
+ create trigger t_upd_tbl instead of update on vw for each row begin
+ update tbl set a=new.a, b=new.b where a = old.a;
+ end;
+ create trigger t_ins_tbl instead of insert on vw for each row begin
+ insert into tbl values (new.a,new.b);
+ end;
+ insert into tbl values(101,1001);
+ insert into tbl values(102,1002);
+ insert into tbl select a+2, b+2 from tbl;
+ insert into tbl select a+4, b+4 from tbl;
+ insert into tbl select a+8, b+8 from tbl;
+ insert into tbl select a+16, b+16 from tbl;
+ insert into tbl select a+32, b+32 from tbl;
+ insert into tbl select a+64, b+64 from tbl;
+ select count(*) from vw;
+ }
+} {128}
+do_test trigger4-4.2 {
+ execsql {select a, b from vw where a<103 or a>226 order by a}
+} {101 1001 102 1002 227 1127 228 1128}
+
+# Test DELETE on the view
+do_test trigger4-5.1 {
+ catchsql {delete from vw where a>101 and a<2000}
+} {0 {}}
+do_test trigger4-5.2 {
+ execsql {select * from vw}
+} {101 1001}
+
+# Test INSERT into the view
+do_test trigger4-6.1 {
+ catchsql {
+ insert into vw values(102,1002);
+ insert into vw select a+2, b+2 from vw;
+ insert into vw select a+4, b+4 from vw;
+ insert into vw select a+8, b+8 from vw;
+ insert into vw select a+16, b+16 from vw;
+ insert into vw select a+32, b+32 from vw;
+ insert into vw select a+64, b+64 from vw;
+ }
+} {0 {}}
+do_test trigger4-6.2 {
+ execsql {select count(*) from vw}
+} {128}
+
+# Test UPDATE of the view
+do_test trigger4-7.1 {
+ catchsql {update vw set b=b+1000 where a>101 and a<2000}
+} {0 {}}
+do_test trigger4-7.2 {
+ execsql {select a, b from vw where a<=102 or a>=227 order by a}
+} {101 1001 102 2002 227 2127 228 2128}
+
+integrity_check trigger4-99.9
+
+file delete -force trigtest.db trigtest.db-journal
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/trigger5.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trigger5.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,43 @@
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file tests a trigger problem reported in ticket #844.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+# Ticket #844
+#
+do_test trigger5-1.1 {
+ execsql {
+ CREATE TABLE Item(
+ a integer PRIMARY KEY NOT NULL ,
+ b double NULL ,
+ c int NOT NULL DEFAULT 0
+ );
+ CREATE TABLE Undo(UndoAction TEXT);
+ INSERT INTO Item VALUES (1,38205.60865,340);
+ CREATE TRIGGER trigItem_UNDO_AD AFTER DELETE ON Item FOR EACH ROW
+ BEGIN
+ INSERT INTO Undo SELECT 'INSERT INTO Item (a,b,c) VALUES ('
+ || coalesce(old.a,'NULL') || ',' || quote(old.b) || ',' || old.c || ');';
+ END;
+ DELETE FROM Item WHERE a = 1;
+ SELECT * FROM Undo;
+ }
+} {{INSERT INTO Item (a,b,c) VALUES (1,38205.60865,340);}}
+
+integrity_check trigger5-99.9
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/trigger6.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trigger6.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,82 @@
+# 2004 December 07
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to make sure the expressions of an INSERT
+# or UPDATE statement are evaluated only once.  See ticket #980.
+# If an expression uses a function that has side-effects or which
+# is not deterministic (ex: random()) then we want to make sure
+# that the same evaluation occurs for the actual INSERT/UPDATE and
+# for the NEW.* fields of any triggers that fire.
+#
+# $Id: trigger6.test,v 1.2 2005/05/05 11:04:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+do_test trigger6-1.1 {
+ execsql {
+ CREATE TABLE t1(x, y);
+ CREATE TABLE log(a, b, c);
+ CREATE TRIGGER r1 BEFORE INSERT ON t1 BEGIN
+ INSERT INTO log VALUES(1, new.x, new.y);
+ END;
+ CREATE TRIGGER r2 BEFORE UPDATE ON t1 BEGIN
+ INSERT INTO log VALUES(2, new.x, new.y);
+ END;
+ }
+ set ::trigger6_cnt 0
+ proc trigger6_counter {args} {
+ incr ::trigger6_cnt
+ return $::trigger6_cnt
+ }
+ db function counter trigger6_counter
+ execsql {
+ INSERT INTO t1 VALUES(1,counter());
+ SELECT * FROM t1;
+ }
+} {1 1}
+do_test trigger6-1.2 {
+ execsql {
+ SELECT * FROM log;
+ }
+} {1 1 1}
+do_test trigger6-1.3 {
+ execsql {
+ DELETE FROM t1;
+ DELETE FROM log;
+ INSERT INTO t1 VALUES(2,counter(2,3)+4);
+ SELECT * FROM t1;
+ }
+} {2 6}
+do_test trigger6-1.4 {
+ execsql {
+ SELECT * FROM log;
+ }
+} {1 2 6}
+do_test trigger6-1.5 {
+ execsql {
+ DELETE FROM log;
+ UPDATE t1 SET y=counter(5);
+ SELECT * FROM t1;
+ }
+} {2 3}
+do_test trigger6-1.6 {
+ execsql {
+ SELECT * FROM log;
+ }
+} {2 2 3}
+
+finish_test
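
The once-only evaluation guarantee can also be checked outside the test harness. A minimal sketch, assuming the SQLite Tcl bindings; the function and table names are illustrative:

    package require sqlite3
    sqlite3 db :memory:
    set ::cnt 0
    proc counter {} {incr ::cnt}    ;# a non-deterministic scalar function
    db function counter counter
    db eval {
      CREATE TABLE t(x, y);
      CREATE TABLE log(a, b);
      CREATE TRIGGER r BEFORE INSERT ON t BEGIN
        INSERT INTO log VALUES(new.x, new.y);
      END;
      INSERT INTO t VALUES(1, counter());
    }
    # counter() ran exactly once, so the inserted row and the trigger's NEW.y agree
    puts [db eval {SELECT * FROM t}]     ;# => 1 1
    puts [db eval {SELECT * FROM log}]   ;# => 1 1
    db close
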
Added: freeswitch/trunk/libs/sqlite/test/trigger7.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trigger7.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,121 @@
+# 2005 August 18
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to increase coverage of trigger.c.
+#
+# $Id: trigger7.test,v 1.1 2005/08/19 02:26:27 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+
+# Error messages resulting from qualified trigger names.
+#
+do_test trigger7-1.1 {
+ execsql {
+ CREATE TABLE t1(x, y);
+ }
+ catchsql {
+ CREATE TEMP TRIGGER main.r1 AFTER INSERT ON t1 BEGIN
+ SELECT 'no nothing';
+ END
+ }
+} {1 {temporary trigger may not have qualified name}}
+do_test trigger7-1.2 {
+ catchsql {
+ CREATE TRIGGER not_a_db.r1 AFTER INSERT ON t1 BEGIN
+ SELECT 'no nothing';
+ END
+ }
+} {1 {unknown database not_a_db}}
+
+
+# When the UPDATE OF syntax is used, no code is generated for triggers
+# that do not match the update columns.
+#
+ifcapable explain {
+ do_test trigger7-2.1 {
+ execsql {
+ CREATE TRIGGER r1 AFTER UPDATE OF x ON t1 BEGIN
+ SELECT '___update_t1.x___';
+ END;
+ CREATE TRIGGER r2 AFTER UPDATE OF y ON t1 BEGIN
+ SELECT '___update_t1.y___';
+ END;
+ }
+ set txt [db eval {EXPLAIN UPDATE t1 SET x=5}]
+ string match *___update_t1.x___* $txt
+ } 1
+ do_test trigger7-2.2 {
+ set txt [db eval {EXPLAIN UPDATE t1 SET x=5}]
+ string match *___update_t1.y___* $txt
+ } 0
+ do_test trigger7-2.3 {
+ set txt [db eval {EXPLAIN UPDATE t1 SET y=5}]
+ string match *___update_t1.x___* $txt
+ } 0
+ do_test trigger7-2.4 {
+ set txt [db eval {EXPLAIN UPDATE t1 SET y=5}]
+ string match *___update_t1.y___* $txt
+ } 1
+ do_test trigger7-2.5 {
+ set txt [db eval {EXPLAIN UPDATE t1 SET rowid=5}]
+ string match *___update_t1.x___* $txt
+ } 0
+ do_test trigger7-2.6 {
+ set txt [db eval {EXPLAIN UPDATE t1 SET rowid=5}]
+ string match *___update_t1.x___* $txt
+ } 0
+}
+
+# Test the ability to create many triggers on the same table, then
+# selectively drop those triggers.
+#
+do_test trigger7-3.1 {
+ execsql {
+ CREATE TABLE t2(x,y,z);
+ CREATE TRIGGER t2r1 AFTER INSERT ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r2 BEFORE INSERT ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r3 AFTER UPDATE ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r4 BEFORE UPDATE ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r5 AFTER DELETE ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r6 BEFORE DELETE ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r7 AFTER INSERT ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r8 BEFORE INSERT ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r9 AFTER UPDATE ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r10 BEFORE UPDATE ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r11 AFTER DELETE ON t2 BEGIN SELECT 1; END;
+ CREATE TRIGGER t2r12 BEFORE DELETE ON t2 BEGIN SELECT 1; END;
+ DROP TRIGGER t2r6;
+ }
+} {}
+
+# This test corrupts the database file so it must be the last test
+# in the series.
+#
+do_test trigger7-99.1 {
+ execsql {
+ PRAGMA writable_schema=on;
+ UPDATE sqlite_master SET sql='nonsense';
+ }
+ db close
+ sqlite3 db test.db
+ catchsql {
+ DROP TRIGGER t2r5
+ }
+} {1 {malformed database schema - near "nonsense": syntax error}}
+
+finish_test
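
The UPDATE OF behaviour verified above with EXPLAIN can also be observed directly: a trigger declared on specific columns only fires when one of those columns appears in the UPDATE's SET list. A small sketch, assuming the SQLite Tcl bindings; the names are illustrative:

    package require sqlite3
    sqlite3 db :memory:
    db eval {
      CREATE TABLE t(x, y);
      CREATE TABLE audit(msg);
      CREATE TRIGGER tr AFTER UPDATE OF x ON t BEGIN
        INSERT INTO audit VALUES('x changed');
      END;
      INSERT INTO t VALUES(1, 1);
      UPDATE t SET y = 2;   -- does not name x, trigger does not fire
      UPDATE t SET x = 2;   -- names x, trigger fires
    }
    puts [db eval {SELECT count(*) FROM audit}]   ;# => 1
    db close
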
Added: freeswitch/trunk/libs/sqlite/test/trigger8.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/trigger8.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,42 @@
+# 2006 February 27
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to make sure abusively large triggers
+# (triggers with 100s or 1000s of statements) work.
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+ifcapable {!trigger} {
+ finish_test
+ return
+}
+
+
+do_test trigger8-1.1 {
+ execsql {
+ CREATE TABLE t1(x);
+ CREATE TABLE t2(y);
+ }
+ set sql "CREATE TRIGGER r10000 AFTER INSERT ON t1 BEGIN\n"
+ for {set i 0} {$i<10000} {incr i} {
+ append sql " INSERT INTO t2 VALUES($i);\n"
+ }
+ append sql "END;"
+ execsql $sql
+ execsql {
+ INSERT INTO t1 VALUES(5);
+ SELECT count(*) FROM t2;
+ }
+} {10000}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/types.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/types.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,324 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. Specifically
+# it tests that the different storage classes (integer, real, text etc.)
+# all work correctly.
+#
+# $Id: types.test,v 1.19 2006/06/27 12:51:13 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Tests in this file are organized roughly as follows:
+#
+# types-1.*.*: Test that values are stored using the expected storage
+# classes when various forms of literals are inserted into
+# columns with different affinities.
+# types-1.1.*: INSERT INTO <table> VALUES(...)
+# types-1.2.*: INSERT INTO <table> SELECT...
+# types-1.3.*: UPDATE <table> SET...
+#
+# types-2.*.*: Check that values can be stored and retrieved using the
+# various storage classes.
+# types-2.1.*: INTEGER
+# types-2.2.*: REAL
+# types-2.3.*: NULL
+# types-2.4.*: TEXT
+# types-2.5.*: Records with a few different storage classes.
+#
+# types-3.*: Test that the '=' operator respects manifest types.
+#
+
+# Disable encryption on the database for this test.
+db close
+set DB [sqlite3 db test.db; sqlite3_connection_pointer db]
+sqlite3_rekey $DB {}
+
+# Create a table with one column for each type of affinity
+do_test types-1.1.0 {
+ execsql {
+ CREATE TABLE t1(i integer, n numeric, t text, o blob);
+ }
+} {}
+
+# Each element of the following list represents one test case.
+#
+# The first value of each sub-list is an SQL literal. The following
+# four values are the storage classes that would be used if the
+# literal were inserted into a column with affinity INTEGER, NUMERIC, TEXT
+# or NONE, respectively.
+set values {
+ { 5.0 integer integer text real }
+ { 5.1 real real text real }
+ { 5 integer integer text integer }
+ { '5.0' integer integer text text }
+ { '5.1' real real text text }
+ { '-5.0' integer integer text text }
+ { '-5.0' integer integer text text }
+ { '5' integer integer text text }
+ { 'abc' text text text text }
+ { NULL null null null null }
+}
+ifcapable {bloblit} {
+ lappend values { X'00' blob blob blob blob }
+}
+
+# This code tests that the storage classes specified above (in the $values
+# table) are correctly assigned when values are inserted using a statement
+# of the form:
+#
+#    INSERT INTO <table> VALUES(<values>);
+#
+set tnum 1
+foreach val $values {
+ set lit [lindex $val 0]
+ execsql "DELETE FROM t1;"
+ execsql "INSERT INTO t1 VALUES($lit, $lit, $lit, $lit);"
+ do_test types-1.1.$tnum {
+ execsql {
+ SELECT typeof(i), typeof(n), typeof(t), typeof(o) FROM t1;
+ }
+ } [lrange $val 1 end]
+ incr tnum
+}
+
+# This code tests that the storage classes specified above (in the $values
+# table) are correctly assigned when values are inserted using a statement
+# of the form:
+#
+# INSERT INTO t1 SELECT ....
+#
+set tnum 1
+foreach val $values {
+ set lit [lindex $val 0]
+ execsql "DELETE FROM t1;"
+ execsql "INSERT INTO t1 SELECT $lit, $lit, $lit, $lit;"
+ do_test types-1.2.$tnum {
+ execsql {
+ SELECT typeof(i), typeof(n), typeof(t), typeof(o) FROM t1;
+ }
+ } [lrange $val 1 end]
+ incr tnum
+}
+
+# This code tests that the storage classes specified above (in the $values
+# table) are correctly assigned when values are inserted using a statement
+# of the form:
+#
+# UPDATE <table> SET <column> = <value>;
+#
+set tnum 1
+foreach val $values {
+ set lit [lindex $val 0]
+ execsql "UPDATE t1 SET i = $lit, n = $lit, t = $lit, o = $lit;"
+ do_test types-1.3.$tnum {
+ execsql {
+ SELECT typeof(i), typeof(n), typeof(t), typeof(o) FROM t1;
+ }
+ } [lrange $val 1 end]
+ incr tnum
+}
+
+execsql {
+ DROP TABLE t1;
+}
+
+# Open the table with root-page $rootpage at the btree
+# level. Return a list of the lengths of the records
+# in the table, in the table's default scanning order.
+proc record_sizes {rootpage} {
+ set bt [btree_open test.db 10 0]
+ set c [btree_cursor $bt $rootpage 0]
+ btree_first $c
+ while 1 {
+ lappend res [btree_payload_size $c]
+ if {[btree_next $c]} break
+ }
+ btree_close_cursor $c
+ btree_close $bt
+ set res
+}
+
+
+# Create a table and insert some 1-byte integers. Make sure they
+# can be read back OK. These should be 3 byte records.
+do_test types-2.1.1 {
+ execsql {
+ CREATE TABLE t1(a integer);
+ INSERT INTO t1 VALUES(0);
+ INSERT INTO t1 VALUES(120);
+ INSERT INTO t1 VALUES(-120);
+ }
+} {}
+do_test types-2.1.2 {
+ execsql {
+ SELECT a FROM t1;
+ }
+} {0 120 -120}
+
+# Try some 2-byte integers (4 byte records)
+do_test types-2.1.3 {
+ execsql {
+ INSERT INTO t1 VALUES(30000);
+ INSERT INTO t1 VALUES(-30000);
+ }
+} {}
+do_test types-2.1.4 {
+ execsql {
+ SELECT a FROM t1;
+ }
+} {0 120 -120 30000 -30000}
+
+# 4-byte integers (6 byte records)
+do_test types-2.1.5 {
+ execsql {
+ INSERT INTO t1 VALUES(2100000000);
+ INSERT INTO t1 VALUES(-2100000000);
+ }
+} {}
+do_test types-2.1.6 {
+ execsql {
+ SELECT a FROM t1;
+ }
+} {0 120 -120 30000 -30000 2100000000 -2100000000}
+
+# 8-byte integers (10 byte records)
+do_test types-2.1.7 {
+ execsql {
+ INSERT INTO t1 VALUES(9000000*1000000*1000000);
+ INSERT INTO t1 VALUES(-9000000*1000000*1000000);
+ }
+} {}
+do_test types-2.1.8 {
+ execsql {
+ SELECT a FROM t1;
+ }
+} [list 0 120 -120 30000 -30000 2100000000 -2100000000 \
+ 9000000000000000000 -9000000000000000000]
+
+# Check that all the record sizes are as we expected.
+ifcapable legacyformat {
+ do_test types-2.1.9 {
+ set root [db eval {select rootpage from sqlite_master where name = 't1'}]
+ record_sizes $root
+ } {3 3 3 4 4 6 6 10 10}
+} else {
+ do_test types-2.1.9 {
+ set root [db eval {select rootpage from sqlite_master where name = 't1'}]
+ record_sizes $root
+ } {2 3 3 4 4 6 6 10 10}
+}
+
+# Insert some reals. These should be 10 byte records.
+do_test types-2.2.1 {
+ execsql {
+ CREATE TABLE t2(a float);
+ INSERT INTO t2 VALUES(0.0);
+ INSERT INTO t2 VALUES(12345.678);
+ INSERT INTO t2 VALUES(-12345.678);
+ }
+} {}
+do_test types-2.2.2 {
+ execsql {
+ SELECT a FROM t2;
+ }
+} {0.0 12345.678 -12345.678}
+
+# Check that all the record sizes are as we expected.
+ifcapable legacyformat {
+ do_test types-2.2.3 {
+ set root [db eval {select rootpage from sqlite_master where name = 't2'}]
+ record_sizes $root
+ } {3 10 10}
+} else {
+ do_test types-2.2.3 {
+ set root [db eval {select rootpage from sqlite_master where name = 't2'}]
+ record_sizes $root
+ } {2 10 10}
+}
+
+# Insert a NULL. This should be a two byte record.
+do_test types-2.3.1 {
+ execsql {
+ CREATE TABLE t3(a nullvalue);
+ INSERT INTO t3 VALUES(NULL);
+ }
+} {}
+do_test types-2.3.2 {
+ execsql {
+ SELECT a ISNULL FROM t3;
+ }
+} {1}
+
+# Check that all the record sizes are as we expected.
+do_test types-2.3.3 {
+ set root [db eval {select rootpage from sqlite_master where name = 't3'}]
+ record_sizes $root
+} {2}
+
+# Insert a couple of strings.
+do_test types-2.4.1 {
+ set string10 abcdefghij
+ set string500 [string repeat $string10 50]
+ set string500000 [string repeat $string10 50000]
+
+ execsql "
+ CREATE TABLE t4(a string);
+ INSERT INTO t4 VALUES('$string10');
+ INSERT INTO t4 VALUES('$string500');
+ INSERT INTO t4 VALUES('$string500000');
+ "
+} {}
+do_test types-2.4.2 {
+ execsql {
+ SELECT a FROM t4;
+ }
+} [list $string10 $string500 $string500000]
+
+# Check that all the record sizes are as we expected. This is dependent on
+# the database encoding.
+if { $sqlite_options(utf16)==0 || [execsql {pragma encoding}] == "UTF-8" } {
+ do_test types-2.4.3 {
+ set root [db eval {select rootpage from sqlite_master where name = 't4'}]
+ record_sizes $root
+ } {12 503 500004}
+} else {
+ do_test types-2.4.3 {
+ set root [db eval {select rootpage from sqlite_master where name = 't4'}]
+ record_sizes $root
+ } {22 1003 1000004}
+}
+
+do_test types-2.5.1 {
+ execsql {
+ DROP TABLE t1;
+ DROP TABLE t2;
+ DROP TABLE t3;
+ DROP TABLE t4;
+ CREATE TABLE t1(a, b, c);
+ }
+} {}
+do_test types-2.5.2 {
+ set string10 abcdefghij
+ set string500 [string repeat $string10 50]
+ set string500000 [string repeat $string10 50000]
+
+ execsql "INSERT INTO t1 VALUES(NULL, '$string10', 4000);"
+ execsql "INSERT INTO t1 VALUES('$string500', 4000, NULL);"
+ execsql "INSERT INTO t1 VALUES(4000, NULL, '$string500000');"
+} {}
+do_test types-2.5.3 {
+ execsql {
+ SELECT * FROM t1;
+ }
+} [list {} $string10 4000 $string500 4000 {} 4000 {} $string500000]
+
+finish_test
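
The affinity rules driving the $values table above boil down to: INTEGER and NUMERIC affinity convert text that looks like a number, TEXT affinity converts numbers to text, and a column with no affinity (declared BLOB) keeps whatever storage class it is given. A minimal sketch, assuming the SQLite Tcl bindings; the names are illustrative:

    package require sqlite3
    sqlite3 db :memory:
    db eval {
      CREATE TABLE aff(i INTEGER, t TEXT, o BLOB);
      INSERT INTO aff VALUES('5', '5', '5');   -- the same literal under three affinities
    }
    puts [db eval {SELECT typeof(i), typeof(t), typeof(o) FROM aff}]
    ;# => integer text text
    db close
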
Added: freeswitch/trunk/libs/sqlite/test/types2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/types2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,313 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The focus
+# of this file is testing the interaction of manifest types, type affinity
+# and comparison expressions.
+#
+# $Id: types2.test,v 1.6 2006/05/23 23:22:29 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Tests in this file are organized roughly as follows:
+#
+# types2-1.*: The '=' operator in the absence of an index.
+# types2-2.*: The '=' operator implemented using an index.
+# types2-3.*: The '<' operator implemented using an index.
+# types2-4.*: The '>' operator in the absence of an index.
+# types2-5.*: The 'IN(x, y...)' operator in the absence of an index.
+# types2-6.*: The 'IN(x, y...)' operator with an index.
+# types2-7.*: The 'IN(SELECT...)' operator in the absence of an index.
+# types2-8.*: The 'IN(SELECT...)' operator with an index.
+#
+# All tests test the operators using literals and columns, but no
+# other types of expressions. All expressions except columns are
+# handled similarly in the implementation.
+
+execsql {
+ CREATE TABLE t1(
+ i1 INTEGER,
+ i2 INTEGER,
+ n1 NUMERIC,
+ n2 NUMERIC,
+ t1 TEXT,
+ t2 TEXT,
+ o1 BLOB,
+ o2 BLOB
+ );
+ INSERT INTO t1 VALUES(NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
+}
+
+proc test_bool {testname vars expr res} {
+ if { $vars != "" } {
+ execsql "UPDATE t1 SET $vars"
+ }
+
+ foreach {t e r} [list $testname $expr $res] {}
+
+ do_test $t.1 "execsql {SELECT $e FROM t1}" $r
+ do_test $t.2 "execsql {SELECT 1 FROM t1 WHERE $expr}" [expr $r?"1":""]
+ do_test $t.3 "execsql {SELECT 1 FROM t1 WHERE NOT ($e)}" [expr $r?"":"1"]
+}
+
+# Compare literals against literals. This should always use a numeric
+# comparison.
+#
+# Changed by ticket #805: Use no affinity for literal comparisons.
+#
+test_bool types2-1.1 "" {500 = 500.0} 1
+test_bool types2-1.2 "" {'500' = 500.0} 0
+test_bool types2-1.3 "" {500 = '500.0'} 0
+test_bool types2-1.4 "" {'500' = '500.0'} 0
+
+# Compare literals against a column with TEXT affinity
+test_bool types2-1.5 {t1=500} {500 = t1} 1
+test_bool types2-1.6 {t1=500} {'500' = t1} 1
+test_bool types2-1.7 {t1=500} {500.0 = t1} 0
+test_bool types2-1.8 {t1=500} {'500.0' = t1} 0
+test_bool types2-1.9 {t1='500'} {500 = t1} 1
+test_bool types2-1.10 {t1='500'} {'500' = t1} 1
+test_bool types2-1.11 {t1='500'} {500.0 = t1} 0
+test_bool types2-1.12 {t1='500'} {'500.0' = t1} 0
+
+# Compare literals against a column with NUMERIC affinity
+test_bool types2-1.13 {n1=500} {500 = n1} 1
+test_bool types2-1.14 {n1=500} {'500' = n1} 1
+test_bool types2-1.15 {n1=500} {500.0 = n1} 1
+test_bool types2-1.16 {n1=500} {'500.0' = n1} 1
+test_bool types2-1.17 {n1='500'} {500 = n1} 1
+test_bool types2-1.18 {n1='500'} {'500' = n1} 1
+test_bool types2-1.19 {n1='500'} {500.0 = n1} 1
+test_bool types2-1.20 {n1='500'} {'500.0' = n1} 1
+
+# Compare literals against a column with affinity NONE
+test_bool types2-1.21 {o1=500} {500 = o1} 1
+test_bool types2-1.22 {o1=500} {'500' = o1} 0
+test_bool types2-1.23 {o1=500} {500.0 = o1} 1
+test_bool types2-1.24 {o1=500} {'500.0' = o1} 0
+test_bool types2-1.25 {o1='500'} {500 = o1} 0
+test_bool types2-1.26 {o1='500'} {'500' = o1} 1
+test_bool types2-1.27 {o1='500'} {500.0 = o1} 0
+test_bool types2-1.28 {o1='500'} {'500.0' = o1} 0
+
+set vals [list 10 10.0 '10' '10.0' 20 20.0 '20' '20.0' 30 30.0 '30' '30.0']
+# 1 2 3 4 5 6 7 8 9 10 11 12
+
+execsql {
+ CREATE TABLE t2(i INTEGER, n NUMERIC, t TEXT, o XBLOBY);
+ CREATE INDEX t2i1 ON t2(i);
+ CREATE INDEX t2i2 ON t2(n);
+ CREATE INDEX t2i3 ON t2(t);
+ CREATE INDEX t2i4 ON t2(o);
+}
+foreach v $vals {
+ execsql "INSERT INTO t2 VALUES($v, $v, $v, $v);"
+}
+
+proc test_boolset {testname where set} {
+ set ::tb_sql "SELECT rowid FROM t2 WHERE $where"
+ do_test $testname {
+ lsort -integer [execsql $::tb_sql]
+ } $set
+}
+
+test_boolset types2-2.1 {i = 10} {1 2 3 4}
+test_boolset types2-2.2 {i = 10.0} {1 2 3 4}
+test_boolset types2-2.3 {i = '10'} {1 2 3 4}
+test_boolset types2-2.4 {i = '10.0'} {1 2 3 4}
+
+test_boolset types2-2.5 {n = 20} {5 6 7 8}
+test_boolset types2-2.6 {n = 20.0} {5 6 7 8}
+test_boolset types2-2.7 {n = '20'} {5 6 7 8}
+test_boolset types2-2.8 {n = '20.0'} {5 6 7 8}
+
+test_boolset types2-2.9 {t = 20} {5 7}
+test_boolset types2-2.10 {t = 20.0} {6 8}
+test_boolset types2-2.11 {t = '20'} {5 7}
+test_boolset types2-2.12 {t = '20.0'} {6 8}
+
+test_boolset types2-2.10 {o = 30} {9 10}
+test_boolset types2-2.11 {o = 30.0} {9 10}
+test_boolset types2-2.12 {o = '30'} 11
+test_boolset types2-2.13 {o = '30.0'} 12
+
+test_boolset types2-3.1 {i < 20} {1 2 3 4}
+test_boolset types2-3.2 {i < 20.0} {1 2 3 4}
+test_boolset types2-3.3 {i < '20'} {1 2 3 4}
+test_boolset types2-3.4 {i < '20.0'} {1 2 3 4}
+
+test_boolset types2-3.1 {n < 20} {1 2 3 4}
+test_boolset types2-3.2 {n < 20.0} {1 2 3 4}
+test_boolset types2-3.3 {n < '20'} {1 2 3 4}
+test_boolset types2-3.4 {n < '20.0'} {1 2 3 4}
+
+test_boolset types2-3.1 {t < 20} {1 2 3 4}
+test_boolset types2-3.2 {t < 20.0} {1 2 3 4 5 7}
+test_boolset types2-3.3 {t < '20'} {1 2 3 4}
+test_boolset types2-3.4 {t < '20.0'} {1 2 3 4 5 7}
+
+test_boolset types2-3.1 {o < 20} {1 2}
+test_boolset types2-3.2 {o < 20.0} {1 2}
+test_boolset types2-3.3 {o < '20'} {1 2 3 4 5 6 9 10}
+test_boolset types2-3.3 {o < '20.0'} {1 2 3 4 5 6 7 9 10}
+
+# Compare literals against literals (always a numeric comparison).
+# Change (by ticket #805): No affinity in comparisons
+test_bool types2-4.1 "" {500 > 60.0} 1
+test_bool types2-4.2 "" {'500' > 60.0} 1
+test_bool types2-4.3 "" {500 > '60.0'} 0
+test_bool types2-4.4 "" {'500' > '60.0'} 0
+
+# Compare literals against a column with TEXT affinity
+test_bool types2-4.5 {t1=500.0} {t1 > 500} 1
+test_bool types2-4.6 {t1=500.0} {t1 > '500' } 1
+test_bool types2-4.7 {t1=500.0} {t1 > 500.0 } 0
+test_bool types2-4.8 {t1=500.0} {t1 > '500.0' } 0
+test_bool types2-4.9 {t1='500.0'} {t1 > 500 } 1
+test_bool types2-4.10 {t1='500.0'} {t1 > '500' } 1
+test_bool types2-4.11 {t1='500.0'} {t1 > 500.0 } 0
+test_bool types2-4.12 {t1='500.0'} {t1 > '500.0' } 0
+
+# Compare literals against a column with NUMERIC affinity
+test_bool types2-4.13 {n1=400} {500 > n1} 1
+test_bool types2-4.14 {n1=400} {'500' > n1} 1
+test_bool types2-4.15 {n1=400} {500.0 > n1} 1
+test_bool types2-4.16 {n1=400} {'500.0' > n1} 1
+test_bool types2-4.17 {n1='400'} {500 > n1} 1
+test_bool types2-4.18 {n1='400'} {'500' > n1} 1
+test_bool types2-4.19 {n1='400'} {500.0 > n1} 1
+test_bool types2-4.20 {n1='400'} {'500.0' > n1} 1
+
+# Compare literals against a column with affinity NONE
+test_bool types2-4.21 {o1=500} {500 > o1} 0
+test_bool types2-4.22 {o1=500} {'500' > o1} 1
+test_bool types2-4.23 {o1=500} {500.0 > o1} 0
+test_bool types2-4.24 {o1=500} {'500.0' > o1} 1
+test_bool types2-4.25 {o1='500'} {500 > o1} 0
+test_bool types2-4.26 {o1='500'} {'500' > o1} 0
+test_bool types2-4.27 {o1='500'} {500.0 > o1} 0
+test_bool types2-4.28 {o1='500'} {'500.0' > o1} 1
+
+ifcapable subquery {
+ # types2-5.* - The 'IN (x, y....)' operator with no index.
+ #
+ # Compare literals against literals (no affinity applied)
+ test_bool types2-5.1 {} {(NULL IN ('10.0', 20)) ISNULL} 1
+ test_bool types2-5.2 {} {10 IN ('10.0', 20)} 0
+ test_bool types2-5.3 {} {'10' IN ('10.0', 20)} 0
+ test_bool types2-5.4 {} {10 IN (10.0, 20)} 1
+ test_bool types2-5.5 {} {'10.0' IN (10, 20)} 1
+
+ # Compare literals against a column with TEXT affinity
+ test_bool types2-5.6 {t1='10.0'} {t1 IN (10.0, 20)} 1
+ test_bool types2-5.7 {t1='10.0'} {t1 IN (10, 20)} 0
+ test_bool types2-5.8 {t1='10'} {t1 IN (10.0, 20)} 0
+ test_bool types2-5.9 {t1='10'} {t1 IN (20, '10.0')} 0
+ test_bool types2-5.10 {t1=10} {t1 IN (20, '10')} 1
+
+ # Compare literals against a column with NUMERIC affinity
+ test_bool types2-5.11 {n1='10.0'} {n1 IN (10.0, 20)} 1
+ test_bool types2-5.12 {n1='10.0'} {n1 IN (10, 20)} 1
+ test_bool types2-5.13 {n1='10'} {n1 IN (10.0, 20)} 1
+ test_bool types2-5.14 {n1='10'} {n1 IN (20, '10.0')} 1
+ test_bool types2-5.15 {n1=10} {n1 IN (20, '10')} 1
+
+ # Compare literals against a column with affinity NONE
+ test_bool types2-5.16 {o1='10.0'} {o1 IN (10.0, 20)} 0
+ test_bool types2-5.17 {o1='10.0'} {o1 IN (10, 20)} 0
+ test_bool types2-5.18 {o1='10'} {o1 IN (10.0, 20)} 0
+ test_bool types2-5.19 {o1='10'} {o1 IN (20, '10.0')} 0
+ test_bool types2-5.20 {o1=10} {o1 IN (20, '10')} 0
+ test_bool types2-5.21 {o1='10.0'} {o1 IN (10, 20, '10.0')} 1
+ test_bool types2-5.22 {o1='10'} {o1 IN (10.0, 20, '10')} 1
+ test_bool types2-5.23 {o1=10} {n1 IN (20, '10', 10)} 1
+}
+
+# Tests named types2-6.* use the same infrastructure as the types2-2.*
+# tests. The contents of the vals array are repeated here for easy
+# reference.
+#
+# set vals [list 10 10.0 '10' '10.0' 20 20.0 '20' '20.0' 30 30.0 '30' '30.0']
+# 1 2 3 4 5 6 7 8 9 10 11 12
+
+ifcapable subquery {
+ test_boolset types2-6.1 {o IN ('10', 30)} {3 9 10}
+ test_boolset types2-6.2 {o IN (20.0, 30.0)} {5 6 9 10}
+ test_boolset types2-6.3 {t IN ('10', 30)} {1 3 9 11}
+ test_boolset types2-6.4 {t IN (20.0, 30.0)} {6 8 10 12}
+ test_boolset types2-6.5 {n IN ('10', 30)} {1 2 3 4 9 10 11 12}
+ test_boolset types2-6.6 {n IN (20.0, 30.0)} {5 6 7 8 9 10 11 12}
+ test_boolset types2-6.7 {i IN ('10', 30)} {1 2 3 4 9 10 11 12}
+ test_boolset types2-6.8 {i IN (20.0, 30.0)} {5 6 7 8 9 10 11 12}
+
+  # Also test that IN(x, y, z) works on a rowid:
+ test_boolset types2-6.9 {rowid IN (1, 6, 10)} {1 6 10}
+}
+
+# Tests types2-7.* concentrate on expressions of the form
+# "x IN (SELECT...)" with no index.
+execsql {
+ CREATE TABLE t3(i INTEGER, n NUMERIC, t TEXT, o BLOB);
+ INSERT INTO t3 VALUES(1, 1, 1, 1);
+ INSERT INTO t3 VALUES(2, 2, 2, 2);
+ INSERT INTO t3 VALUES(3, 3, 3, 3);
+ INSERT INTO t3 VALUES('1', '1', '1', '1');
+ INSERT INTO t3 VALUES('1.0', '1.0', '1.0', '1.0');
+}
+
+ifcapable subquery {
+ test_bool types2-7.1 {i1=1} {i1 IN (SELECT i FROM t3)} 1
+ test_bool types2-7.2 {i1='2.0'} {i1 IN (SELECT i FROM t3)} 1
+ test_bool types2-7.3 {i1='2.0'} {i1 IN (SELECT n FROM t3)} 1
+ test_bool types2-7.4 {i1='2.0'} {i1 IN (SELECT t FROM t3)} 1
+ test_bool types2-7.5 {i1='2.0'} {i1 IN (SELECT o FROM t3)} 1
+
+ test_bool types2-7.6 {n1=1} {n1 IN (SELECT n FROM t3)} 1
+ test_bool types2-7.7 {n1='2.0'} {n1 IN (SELECT i FROM t3)} 1
+ test_bool types2-7.8 {n1='2.0'} {n1 IN (SELECT n FROM t3)} 1
+ test_bool types2-7.9 {n1='2.0'} {n1 IN (SELECT t FROM t3)} 1
+ test_bool types2-7.10 {n1='2.0'} {n1 IN (SELECT o FROM t3)} 1
+
+ test_bool types2-7.6 {t1=1} {t1 IN (SELECT t FROM t3)} 1
+ test_bool types2-7.7 {t1='2.0'} {t1 IN (SELECT t FROM t3)} 0
+ test_bool types2-7.8 {t1='2.0'} {t1 IN (SELECT n FROM t3)} 1
+ test_bool types2-7.9 {t1='2.0'} {t1 IN (SELECT i FROM t3)} 1
+ test_bool types2-7.10 {t1='2.0'} {t1 IN (SELECT o FROM t3)} 0
+ test_bool types2-7.11 {t1='1.0'} {t1 IN (SELECT t FROM t3)} 1
+ test_bool types2-7.12 {t1='1.0'} {t1 IN (SELECT o FROM t3)} 1
+
+ test_bool types2-7.13 {o1=2} {o1 IN (SELECT o FROM t3)} 1
+ test_bool types2-7.14 {o1='2'} {o1 IN (SELECT o FROM t3)} 0
+ test_bool types2-7.15 {o1='2'} {o1 IN (SELECT o||'' FROM t3)} 1
+}
+
+# set vals [list 10 10.0 '10' '10.0' 20 20.0 '20' '20.0' 30 30.0 '30' '30.0']
+# 1 2 3 4 5 6 7 8 9 10 11 12
+execsql {
+ CREATE TABLE t4(i INTEGER, n NUMERIC, t VARCHAR(20), o LARGE BLOB);
+ INSERT INTO t4 VALUES(10, 20, 20, 30);
+}
+ifcapable subquery {
+ test_boolset types2-8.1 {i IN (SELECT i FROM t4)} {1 2 3 4}
+ test_boolset types2-8.2 {n IN (SELECT i FROM t4)} {1 2 3 4}
+ test_boolset types2-8.3 {t IN (SELECT i FROM t4)} {1 2 3 4}
+ test_boolset types2-8.4 {o IN (SELECT i FROM t4)} {1 2 3 4}
+ test_boolset types2-8.5 {i IN (SELECT t FROM t4)} {5 6 7 8}
+ test_boolset types2-8.6 {n IN (SELECT t FROM t4)} {5 6 7 8}
+ test_boolset types2-8.7 {t IN (SELECT t FROM t4)} {5 7}
+ test_boolset types2-8.8 {o IN (SELECT t FROM t4)} {7}
+ test_boolset types2-8.9 {i IN (SELECT o FROM t4)} {9 10 11 12}
+ test_boolset types2-8.6 {n IN (SELECT o FROM t4)} {9 10 11 12}
+ test_boolset types2-8.7 {t IN (SELECT o FROM t4)} {9 11}
+ test_boolset types2-8.8 {o IN (SELECT o FROM t4)} {9 10}
+}
+
+finish_test
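
A condensed illustration of the comparison rules checked above: before comparing, the literal is coerced toward the column's affinity, so the same literal can match a NUMERIC column but not a TEXT or no-affinity column. A sketch assuming the SQLite Tcl bindings; the names are illustrative:

    package require sqlite3
    sqlite3 db :memory:
    db eval {
      CREATE TABLE c(t TEXT, n NUMERIC, o BLOB);
      INSERT INTO c VALUES(500, 500, 500);   -- stored as '500', 500, 500
    }
    # TEXT column: textual compare, '500' <> '500.0'.  NUMERIC column: the string
    # literal is converted, so 500 = 500.0.  No-affinity column: the storage
    # classes differ (integer vs text), so there is no match.
    puts [db eval {SELECT t = 500.0, n = '500.0', o = '500' FROM c}]
    ;# => 0 1 0
    db close
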
Added: freeswitch/trunk/libs/sqlite/test/types3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/types3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,90 @@
+# 2005 June 25
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The focus
+# of this file is testing the interaction of SQLite manifest types
+# with Tcl dual-representations.
+#
+# $Id: types3.test,v 1.5 2006/01/17 09:35:03 danielk1977 Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# A variable with only a string representation comes in as TEXT
+do_test types3-1.1 {
+ set V {}
+ append V {}
+ concat [tcl_variable_type V] [execsql {SELECT typeof(:V)}]
+} {string text}
+
+# A variable with an integer representation comes in as INTEGER
+do_test types3-1.2 {
+ set V [expr {1+2}]
+ concat [tcl_variable_type V] [execsql {SELECT typeof(:V)}]
+} {int integer}
+do_test types3-1.3 {
+ set V [expr {1+123456789012345}]
+ concat [tcl_variable_type V] [execsql {SELECT typeof(:V)}]
+} {wideInt integer}
+
+# A double variable comes in as REAL
+do_test types3-1.4 {
+ set V [expr {1.0+1}]
+ concat [tcl_variable_type V] [execsql {SELECT typeof(:V)}]
+} {double real}
+
+# A byte-array variable comes in as a BLOB if it has no string representation,
+# or as TEXT if there is a string representation.
+#
+do_test types3-1.5 {
+ set V [binary format a3 abc]
+ concat [tcl_variable_type V] [execsql {SELECT typeof(:V)}]
+} {bytearray blob}
+do_test types3-1.6 {
+ set V "abc"
+ binary scan $V a3 x
+ concat [tcl_variable_type V] [execsql {SELECT typeof(:V)}]
+} {bytearray text}
+
+# Check to make sure return values are of the right types.
+#
+ifcapable bloblit {
+ do_test types3-2.1 {
+ set V [db one {SELECT x'616263'}]
+ tcl_variable_type V
+ } bytearray
+}
+do_test types3-2.2 {
+ set V [db one {SELECT 123}]
+ tcl_variable_type V
+} int
+do_test types3-2.3 {
+ set V [db one {SELECT 1234567890123456}]
+ tcl_variable_type V
+} wideInt
+do_test types3-2.4.1 {
+ set V [db one {SELECT 1234567890123456.1}]
+ tcl_variable_type V
+} double
+do_test types3-2.4.2 {
+ set V [db one {SELECT 1234567890123.456}]
+ tcl_variable_type V
+} double
+do_test types3-2.5 {
+ set V [db one {SELECT '1234567890123456.0'}]
+ tcl_variable_type V
+} {}
+do_test types3-2.6 {
+ set V [db one {SELECT NULL}]
+ tcl_variable_type V
+} {}
+
+finish_test
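
The mapping from Tcl internal representations to SQLite types applies to any bound variable, not just the harness's :V. A minimal sketch, assuming the SQLite Tcl bindings; the variable name is illustrative:

    package require sqlite3
    sqlite3 db :memory:
    set V abc                             ;# string representation only
    puts [db eval {SELECT typeof(:V)}]    ;# => text
    set V [expr {1 + 2}]                  ;# integer representation
    puts [db eval {SELECT typeof(:V)}]    ;# => integer
    set V [expr {1.0 + 1}]                ;# double representation
    puts [db eval {SELECT typeof(:V)}]    ;# => real
    db close
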
Added: freeswitch/trunk/libs/sqlite/test/unique.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/unique.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,253 @@
+# 2001 September 27
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the CREATE UNIQUE INDEX statement,
+# primary keys, and the UNIQUE constraint on table columns.
+#
+# $Id: unique.test,v 1.8 2005/06/24 03:53:06 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Try to create a table with two primary keys.
+# (This is not valid SQL and SQLite rejects it.)
+#
+do_test unique-1.1 {
+ catchsql {
+ CREATE TABLE t1(
+ a int PRIMARY KEY,
+ b int PRIMARY KEY,
+ c text
+ );
+ }
+} {1 {table "t1" has more than one primary key}}
+do_test unique-1.1b {
+ catchsql {
+ CREATE TABLE t1(
+ a int PRIMARY KEY,
+ b int UNIQUE,
+ c text
+ );
+ }
+} {0 {}}
+do_test unique-1.2 {
+ catchsql {
+ INSERT INTO t1(a,b,c) VALUES(1,2,3)
+ }
+} {0 {}}
+do_test unique-1.3 {
+ catchsql {
+ INSERT INTO t1(a,b,c) VALUES(1,3,4)
+ }
+} {1 {column a is not unique}}
+do_test unique-1.4 {
+ execsql {
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 2 3}
+do_test unique-1.5 {
+ catchsql {
+ INSERT INTO t1(a,b,c) VALUES(3,2,4)
+ }
+} {1 {column b is not unique}}
+do_test unique-1.6 {
+ execsql {
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 2 3}
+do_test unique-1.7 {
+ catchsql {
+ INSERT INTO t1(a,b,c) VALUES(3,4,5)
+ }
+} {0 {}}
+do_test unique-1.8 {
+ execsql {
+ SELECT * FROM t1 ORDER BY a;
+ }
+} {1 2 3 3 4 5}
+integrity_check unique-1.9
+
+do_test unique-2.0 {
+ execsql {
+ DROP TABLE t1;
+ CREATE TABLE t2(a int, b int);
+ INSERT INTO t2(a,b) VALUES(1,2);
+ INSERT INTO t2(a,b) VALUES(3,4);
+ SELECT * FROM t2 ORDER BY a;
+ }
+} {1 2 3 4}
+do_test unique-2.1 {
+ catchsql {
+ CREATE UNIQUE INDEX i2 ON t2(a)
+ }
+} {0 {}}
+do_test unique-2.2 {
+ catchsql {
+ SELECT * FROM t2 ORDER BY a
+ }
+} {0 {1 2 3 4}}
+do_test unique-2.3 {
+ catchsql {
+ INSERT INTO t2 VALUES(1,5);
+ }
+} {1 {column a is not unique}}
+do_test unique-2.4 {
+ catchsql {
+ SELECT * FROM t2 ORDER BY a
+ }
+} {0 {1 2 3 4}}
+do_test unique-2.5 {
+ catchsql {
+ DROP INDEX i2;
+ SELECT * FROM t2 ORDER BY a;
+ }
+} {0 {1 2 3 4}}
+do_test unique-2.6 {
+ catchsql {
+ INSERT INTO t2 VALUES(1,5)
+ }
+} {0 {}}
+do_test unique-2.7 {
+ catchsql {
+ SELECT * FROM t2 ORDER BY a, b;
+ }
+} {0 {1 2 1 5 3 4}}
+do_test unique-2.8 {
+ catchsql {
+ CREATE UNIQUE INDEX i2 ON t2(a);
+ }
+} {1 {indexed columns are not unique}}
+do_test unique-2.9 {
+ catchsql {
+ CREATE INDEX i2 ON t2(a);
+ }
+} {0 {}}
+integrity_check unique-2.10
+
+# Test the UNIQUE keyword as used on two or more fields.
+#
+do_test unique-3.1 {
+ catchsql {
+ CREATE TABLE t3(
+ a int,
+ b int,
+ c int,
+ d int,
+ unique(a,c,d)
+ );
+ }
+} {0 {}}
+do_test unique-3.2 {
+ catchsql {
+ INSERT INTO t3(a,b,c,d) VALUES(1,2,3,4);
+ SELECT * FROM t3 ORDER BY a,b,c,d;
+ }
+} {0 {1 2 3 4}}
+do_test unique-3.3 {
+ catchsql {
+ INSERT INTO t3(a,b,c,d) VALUES(1,2,3,5);
+ SELECT * FROM t3 ORDER BY a,b,c,d;
+ }
+} {0 {1 2 3 4 1 2 3 5}}
+do_test unique-3.4 {
+ catchsql {
+ INSERT INTO t3(a,b,c,d) VALUES(1,4,3,5);
+ SELECT * FROM t3 ORDER BY a,b,c,d;
+ }
+} {1 {columns a, c, d are not unique}}
+integrity_check unique-3.5
+
+# Make sure NULLs are distinct as far as the UNIQUE tests are
+# concerned.
+#
+do_test unique-4.1 {
+ execsql {
+ CREATE TABLE t4(a UNIQUE, b, c, UNIQUE(b,c));
+ INSERT INTO t4 VALUES(1,2,3);
+ INSERT INTO t4 VALUES(NULL, 2, NULL);
+ SELECT * FROM t4;
+ }
+} {1 2 3 {} 2 {}}
+do_test unique-4.2 {
+ catchsql {
+ INSERT INTO t4 VALUES(NULL, 3, 4);
+ }
+} {0 {}}
+do_test unique-4.3 {
+ execsql {
+ SELECT * FROM t4
+ }
+} {1 2 3 {} 2 {} {} 3 4}
+do_test unique-4.4 {
+ catchsql {
+ INSERT INTO t4 VALUES(2, 2, NULL);
+ }
+} {0 {}}
+do_test unique-4.5 {
+ execsql {
+ SELECT * FROM t4
+ }
+} {1 2 3 {} 2 {} {} 3 4 2 2 {}}
+
+# Ticket #1301. Any NULL value in a set of unique columns should
+# cause the rows to be distinct.
+#
+do_test unique-4.6 {
+ catchsql {
+ INSERT INTO t4 VALUES(NULL, 2, NULL);
+ }
+} {0 {}}
+do_test unique-4.7 {
+ execsql {SELECT * FROM t4}
+} {1 2 3 {} 2 {} {} 3 4 2 2 {} {} 2 {}}
+do_test unique-4.8 {
+ catchsql {CREATE UNIQUE INDEX i4a ON t4(a,b)}
+} {0 {}}
+do_test unique-4.9 {
+ catchsql {CREATE UNIQUE INDEX i4b ON t4(a,b,c)}
+} {0 {}}
+do_test unique-4.10 {
+ catchsql {CREATE UNIQUE INDEX i4c ON t4(b)}
+} {1 {indexed columns are not unique}}
+integrity_check unique-4.99
+
+# Test the error message generation logic. In particular, make sure we
+# do not overflow the static buffer used to generate the error message.
+#
+do_test unique-5.1 {
+ execsql {
+ CREATE TABLE t5(
+ first_column_with_long_name,
+ second_column_with_long_name,
+ third_column_with_long_name,
+ fourth_column_with_long_name,
+ fifth_column_with_long_name,
+ sixth_column_with_long_name,
+ UNIQUE(
+ first_column_with_long_name,
+ second_column_with_long_name,
+ third_column_with_long_name,
+ fourth_column_with_long_name,
+ fifth_column_with_long_name,
+ sixth_column_with_long_name
+ )
+ );
+ INSERT INTO t5 VALUES(1,2,3,4,5,6);
+ SELECT * FROM t5;
+ }
+} {1 2 3 4 5 6}
+do_test unique-5.2 {
+ catchsql {
+ INSERT INTO t5 VALUES(1,2,3,4,5,6);
+ }
+} {1 {columns first_column_with_long_name, second_column_with_long_name, third_column_with_long_name, fourth_column_with_long_name, fifth_column_with_long_name, ... are not unique}}
+
+finish_test
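
As the unique-4.* tests above show, NULLs never collide with each other under a UNIQUE constraint, while duplicate non-NULL values do. A minimal sketch, assuming the SQLite Tcl bindings; the names are illustrative and the exact error text varies with the SQLite version:

    package require sqlite3
    sqlite3 db :memory:
    db eval {
      CREATE TABLE u(a UNIQUE);
      INSERT INTO u VALUES(NULL);
      INSERT INTO u VALUES(NULL);   -- allowed: NULLs are treated as distinct
      INSERT INTO u VALUES(1);
    }
    puts [db eval {SELECT count(*) FROM u}]                 ;# => 3
    puts [catch {db eval {INSERT INTO u VALUES(1)}} msg]    ;# => 1
    puts $msg   ;# e.g. "column a is not unique"
    db close
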
Added: freeswitch/trunk/libs/sqlite/test/update.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/update.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,596 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the UPDATE statement.
+#
+# $Id: update.test,v 1.17 2005/01/21 03:12:16 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Try to update a non-existent table
+#
+do_test update-1.1 {
+ set v [catch {execsql {UPDATE test1 SET f2=5 WHERE f1<1}} msg]
+ lappend v $msg
+} {1 {no such table: test1}}
+
+# Try to update a read-only table
+#
+do_test update-2.1 {
+ set v [catch \
+ {execsql {UPDATE sqlite_master SET name='xyz' WHERE name='123'}} msg]
+ lappend v $msg
+} {1 {table sqlite_master may not be modified}}
+
+# Create a table to work with
+#
+do_test update-3.1 {
+ execsql {CREATE TABLE test1(f1 int,f2 int)}
+ for {set i 1} {$i<=10} {incr i} {
+ set sql "INSERT INTO test1 VALUES($i,[expr {int(pow(2,$i))}])"
+ execsql $sql
+ }
+ execsql {SELECT * FROM test1 ORDER BY f1}
+} {1 2 2 4 3 8 4 16 5 32 6 64 7 128 8 256 9 512 10 1024}
+
+# Unknown column name in an expression
+#
+do_test update-3.2 {
+ set v [catch {execsql {UPDATE test1 SET f1=f3*2 WHERE f2==32}} msg]
+ lappend v $msg
+} {1 {no such column: f3}}
+do_test update-3.3 {
+ set v [catch {execsql {UPDATE test1 SET f1=test2.f1*2 WHERE f2==32}} msg]
+ lappend v $msg
+} {1 {no such column: test2.f1}}
+do_test update-3.4 {
+ set v [catch {execsql {UPDATE test1 SET f3=f1*2 WHERE f2==32}} msg]
+ lappend v $msg
+} {1 {no such column: f3}}
+
+# Actually do some updates
+#
+do_test update-3.5 {
+ execsql {UPDATE test1 SET f2=f2*3}
+} {}
+do_test update-3.6 {
+ execsql {SELECT * FROM test1 ORDER BY f1}
+} {1 6 2 12 3 24 4 48 5 96 6 192 7 384 8 768 9 1536 10 3072}
+do_test update-3.7 {
+ execsql {PRAGMA count_changes=on}
+ execsql {UPDATE test1 SET f2=f2/3 WHERE f1<=5}
+} {5}
+do_test update-3.8 {
+ execsql {SELECT * FROM test1 ORDER BY f1}
+} {1 2 2 4 3 8 4 16 5 32 6 192 7 384 8 768 9 1536 10 3072}
+do_test update-3.9 {
+ execsql {UPDATE test1 SET f2=f2/3 WHERE f1>5}
+} {5}
+do_test update-3.10 {
+ execsql {SELECT * FROM test1 ORDER BY f1}
+} {1 2 2 4 3 8 4 16 5 32 6 64 7 128 8 256 9 512 10 1024}
+
+# Swap the values of f1 and f2 for all elements
+#
+do_test update-3.11 {
+ execsql {UPDATE test1 SET F2=f1, F1=f2}
+} {10}
+do_test update-3.12 {
+ execsql {SELECT * FROM test1 ORDER BY F1}
+} {2 1 4 2 8 3 16 4 32 5 64 6 128 7 256 8 512 9 1024 10}
+do_test update-3.13 {
+ execsql {PRAGMA count_changes=off}
+ execsql {UPDATE test1 SET F2=f1, F1=f2}
+} {}
+do_test update-3.14 {
+ execsql {SELECT * FROM test1 ORDER BY F1}
+} {1 2 2 4 3 8 4 16 5 32 6 64 7 128 8 256 9 512 10 1024}
+
+# Create duplicate entries and make sure updating still
+# works.
+#
+do_test update-4.0 {
+ execsql {
+ DELETE FROM test1 WHERE f1<=5;
+ INSERT INTO test1(f1,f2) VALUES(8,88);
+ INSERT INTO test1(f1,f2) VALUES(8,888);
+ INSERT INTO test1(f1,f2) VALUES(77,128);
+ INSERT INTO test1(f1,f2) VALUES(777,128);
+ }
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-4.1 {
+ execsql {UPDATE test1 SET f2=f2+1 WHERE f1==8}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 89 8 257 8 889 9 512 10 1024 77 128 777 128}
+do_test update-4.2 {
+ execsql {UPDATE test1 SET f2=f2-1 WHERE f1==8 and f2>800}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 89 8 257 8 888 9 512 10 1024 77 128 777 128}
+do_test update-4.3 {
+ execsql {UPDATE test1 SET f2=f2-1 WHERE f1==8 and f2<800}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-4.4 {
+ execsql {UPDATE test1 SET f1=f1+1 WHERE f2==128}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 8 88 8 128 8 256 8 888 9 512 10 1024 78 128 778 128}
+do_test update-4.5 {
+ execsql {UPDATE test1 SET f1=f1-1 WHERE f1>100 and f2==128}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 8 88 8 128 8 256 8 888 9 512 10 1024 78 128 777 128}
+do_test update-4.6 {
+ execsql {
+ PRAGMA count_changes=on;
+ UPDATE test1 SET f1=f1-1 WHERE f1<=100 and f2==128;
+ }
+} {2}
+do_test update-4.7 {
+ execsql {
+ PRAGMA count_changes=off;
+ SELECT * FROM test1 ORDER BY f1,f2
+ }
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+
+# Repeat the previous sequence of tests with an index.
+#
+do_test update-5.0 {
+ execsql {CREATE INDEX idx1 ON test1(f1)}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-5.1 {
+ execsql {UPDATE test1 SET f2=f2+1 WHERE f1==8}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 89 8 257 8 889 9 512 10 1024 77 128 777 128}
+do_test update-5.2 {
+ execsql {UPDATE test1 SET f2=f2-1 WHERE f1==8 and f2>800}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 89 8 257 8 888 9 512 10 1024 77 128 777 128}
+do_test update-5.3 {
+ execsql {UPDATE test1 SET f2=f2-1 WHERE f1==8 and f2<800}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-5.4 {
+ execsql {UPDATE test1 SET f1=f1+1 WHERE f2==128}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 8 88 8 128 8 256 8 888 9 512 10 1024 78 128 778 128}
+do_test update-5.4.1 {
+ execsql {SELECT * FROM test1 WHERE f1==78 ORDER BY f1,f2}
+} {78 128}
+do_test update-5.4.2 {
+ execsql {SELECT * FROM test1 WHERE f1==778 ORDER BY f1,f2}
+} {778 128}
+do_test update-5.4.3 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 128 8 256 8 888}
+do_test update-5.5 {
+ execsql {UPDATE test1 SET f1=f1-1 WHERE f1>100 and f2==128}
+} {}
+do_test update-5.5.1 {
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 8 88 8 128 8 256 8 888 9 512 10 1024 78 128 777 128}
+do_test update-5.5.2 {
+ execsql {SELECT * FROM test1 WHERE f1==78 ORDER BY f1,f2}
+} {78 128}
+do_test update-5.5.3 {
+ execsql {SELECT * FROM test1 WHERE f1==778 ORDER BY f1,f2}
+} {}
+do_test update-5.5.4 {
+ execsql {SELECT * FROM test1 WHERE f1==777 ORDER BY f1,f2}
+} {777 128}
+do_test update-5.5.5 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 128 8 256 8 888}
+do_test update-5.6 {
+ execsql {
+ PRAGMA count_changes=on;
+ UPDATE test1 SET f1=f1-1 WHERE f1<=100 and f2==128;
+ }
+} {2}
+do_test update-5.6.1 {
+ execsql {
+ PRAGMA count_changes=off;
+ SELECT * FROM test1 ORDER BY f1,f2
+ }
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-5.6.2 {
+ execsql {SELECT * FROM test1 WHERE f1==77 ORDER BY f1,f2}
+} {77 128}
+do_test update-5.6.3 {
+ execsql {SELECT * FROM test1 WHERE f1==778 ORDER BY f1,f2}
+} {}
+do_test update-5.6.4 {
+ execsql {SELECT * FROM test1 WHERE f1==777 ORDER BY f1,f2}
+} {777 128}
+do_test update-5.6.5 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 256 8 888}
+
+# Repeat the previous sequence of tests with a different index.
+#
+execsql {PRAGMA synchronous=FULL}
+do_test update-6.0 {
+ execsql {DROP INDEX idx1}
+ execsql {CREATE INDEX idx1 ON test1(f2)}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-6.1 {
+ execsql {UPDATE test1 SET f2=f2+1 WHERE f1==8}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 89 8 257 8 889 9 512 10 1024 77 128 777 128}
+do_test update-6.1.1 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 89 8 257 8 889}
+do_test update-6.1.2 {
+ execsql {SELECT * FROM test1 WHERE f2==89 ORDER BY f1,f2}
+} {8 89}
+do_test update-6.1.3 {
+ execsql {SELECT * FROM test1 WHERE f1==88 ORDER BY f1,f2}
+} {}
+do_test update-6.2 {
+ execsql {UPDATE test1 SET f2=f2-1 WHERE f1==8 and f2>800}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 89 8 257 8 888 9 512 10 1024 77 128 777 128}
+do_test update-6.3 {
+ execsql {UPDATE test1 SET f2=f2-1 WHERE f1==8 and f2<800}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-6.3.1 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 256 8 888}
+do_test update-6.3.2 {
+ execsql {SELECT * FROM test1 WHERE f2==89 ORDER BY f1,f2}
+} {}
+do_test update-6.3.3 {
+ execsql {SELECT * FROM test1 WHERE f2==88 ORDER BY f1,f2}
+} {8 88}
+do_test update-6.4 {
+ execsql {UPDATE test1 SET f1=f1+1 WHERE f2==128}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 8 88 8 128 8 256 8 888 9 512 10 1024 78 128 778 128}
+do_test update-6.4.1 {
+ execsql {SELECT * FROM test1 WHERE f1==78 ORDER BY f1,f2}
+} {78 128}
+do_test update-6.4.2 {
+ execsql {SELECT * FROM test1 WHERE f1==778 ORDER BY f1,f2}
+} {778 128}
+do_test update-6.4.3 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 128 8 256 8 888}
+do_test update-6.5 {
+ execsql {UPDATE test1 SET f1=f1-1 WHERE f1>100 and f2==128}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 8 88 8 128 8 256 8 888 9 512 10 1024 78 128 777 128}
+do_test update-6.5.1 {
+ execsql {SELECT * FROM test1 WHERE f1==78 ORDER BY f1,f2}
+} {78 128}
+do_test update-6.5.2 {
+ execsql {SELECT * FROM test1 WHERE f1==778 ORDER BY f1,f2}
+} {}
+do_test update-6.5.3 {
+ execsql {SELECT * FROM test1 WHERE f1==777 ORDER BY f1,f2}
+} {777 128}
+do_test update-6.5.4 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 128 8 256 8 888}
+do_test update-6.6 {
+ execsql {UPDATE test1 SET f1=f1-1 WHERE f1<=100 and f2==128}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-6.6.1 {
+ execsql {SELECT * FROM test1 WHERE f1==77 ORDER BY f1,f2}
+} {77 128}
+do_test update-6.6.2 {
+ execsql {SELECT * FROM test1 WHERE f1==778 ORDER BY f1,f2}
+} {}
+do_test update-6.6.3 {
+ execsql {SELECT * FROM test1 WHERE f1==777 ORDER BY f1,f2}
+} {777 128}
+do_test update-6.6.4 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 256 8 888}
+
+# Repeat the previous sequence of tests with multiple
+# indices
+#
+do_test update-7.0 {
+ execsql {CREATE INDEX idx2 ON test1(f2)}
+ execsql {CREATE INDEX idx3 ON test1(f1,f2)}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-7.1 {
+ execsql {UPDATE test1 SET f2=f2+1 WHERE f1==8}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 89 8 257 8 889 9 512 10 1024 77 128 777 128}
+do_test update-7.1.1 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 89 8 257 8 889}
+do_test update-7.1.2 {
+ execsql {SELECT * FROM test1 WHERE f2==89 ORDER BY f1,f2}
+} {8 89}
+do_test update-7.1.3 {
+ execsql {SELECT * FROM test1 WHERE f1==88 ORDER BY f1,f2}
+} {}
+do_test update-7.2 {
+ execsql {UPDATE test1 SET f2=f2-1 WHERE f1==8 and f2>800}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 89 8 257 8 888 9 512 10 1024 77 128 777 128}
+do_test update-7.3 {
+ # explain {UPDATE test1 SET f2=f2-1 WHERE f1==8 and F2<300}
+ execsql {UPDATE test1 SET f2=f2-1 WHERE f1==8 and f2<800}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-7.3.1 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 256 8 888}
+do_test update-7.3.2 {
+ execsql {SELECT * FROM test1 WHERE f2==89 ORDER BY f1,f2}
+} {}
+do_test update-7.3.3 {
+ execsql {SELECT * FROM test1 WHERE f2==88 ORDER BY f1,f2}
+} {8 88}
+do_test update-7.4 {
+ execsql {UPDATE test1 SET f1=f1+1 WHERE f2==128}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 8 88 8 128 8 256 8 888 9 512 10 1024 78 128 778 128}
+do_test update-7.4.1 {
+ execsql {SELECT * FROM test1 WHERE f1==78 ORDER BY f1,f2}
+} {78 128}
+do_test update-7.4.2 {
+ execsql {SELECT * FROM test1 WHERE f1==778 ORDER BY f1,f2}
+} {778 128}
+do_test update-7.4.3 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 128 8 256 8 888}
+do_test update-7.5 {
+ execsql {UPDATE test1 SET f1=f1-1 WHERE f1>100 and f2==128}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 8 88 8 128 8 256 8 888 9 512 10 1024 78 128 777 128}
+do_test update-7.5.1 {
+ execsql {SELECT * FROM test1 WHERE f1==78 ORDER BY f1,f2}
+} {78 128}
+do_test update-7.5.2 {
+ execsql {SELECT * FROM test1 WHERE f1==778 ORDER BY f1,f2}
+} {}
+do_test update-7.5.3 {
+ execsql {SELECT * FROM test1 WHERE f1==777 ORDER BY f1,f2}
+} {777 128}
+do_test update-7.5.4 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 128 8 256 8 888}
+do_test update-7.6 {
+ execsql {UPDATE test1 SET f1=f1-1 WHERE f1<=100 and f2==128}
+ execsql {SELECT * FROM test1 ORDER BY f1,f2}
+} {6 64 7 128 8 88 8 256 8 888 9 512 10 1024 77 128 777 128}
+do_test update-7.6.1 {
+ execsql {SELECT * FROM test1 WHERE f1==77 ORDER BY f1,f2}
+} {77 128}
+do_test update-7.6.2 {
+ execsql {SELECT * FROM test1 WHERE f1==778 ORDER BY f1,f2}
+} {}
+do_test update-7.6.3 {
+ execsql {SELECT * FROM test1 WHERE f1==777 ORDER BY f1,f2}
+} {777 128}
+do_test update-7.6.4 {
+ execsql {SELECT * FROM test1 WHERE f1==8 ORDER BY f1,f2}
+} {8 88 8 256 8 888}
+
+# Error messages
+#
+do_test update-9.1 {
+ set v [catch {execsql {
+ UPDATE test1 SET x=11 WHERE f1=1025
+ }} msg]
+ lappend v $msg
+} {1 {no such column: x}}
+do_test update-9.2 {
+ set v [catch {execsql {
+ UPDATE test1 SET f1=x(11) WHERE f1=1025
+ }} msg]
+ lappend v $msg
+} {1 {no such function: x}}
+do_test update-9.3 {
+ set v [catch {execsql {
+ UPDATE test1 SET f1=11 WHERE x=1025
+ }} msg]
+ lappend v $msg
+} {1 {no such column: x}}
+do_test update-9.4 {
+ set v [catch {execsql {
+ UPDATE test1 SET f1=11 WHERE x(f1)=1025
+ }} msg]
+ lappend v $msg
+} {1 {no such function: x}}
+
+# Try doing updates on a unique column where the value does not
+# really change.
+#
+do_test update-10.1 {
+ execsql {
+ DROP TABLE test1;
+ CREATE TABLE t1(
+ a integer primary key,
+ b UNIQUE,
+ c, d,
+ e, f,
+ UNIQUE(c,d)
+ );
+ INSERT INTO t1 VALUES(1,2,3,4,5,6);
+ INSERT INTO t1 VALUES(2,3,4,4,6,7);
+ SELECT * FROM t1
+ }
+} {1 2 3 4 5 6 2 3 4 4 6 7}
+do_test update-10.2 {
+ catchsql {
+ UPDATE t1 SET a=1, e=9 WHERE f=6;
+ SELECT * FROM t1;
+ }
+} {0 {1 2 3 4 9 6 2 3 4 4 6 7}}
+do_test update-10.3 {
+ catchsql {
+ UPDATE t1 SET a=1, e=10 WHERE f=7;
+ SELECT * FROM t1;
+ }
+} {1 {PRIMARY KEY must be unique}}
+do_test update-10.4 {
+ catchsql {
+ SELECT * FROM t1;
+ }
+} {0 {1 2 3 4 9 6 2 3 4 4 6 7}}
+do_test update-10.5 {
+ catchsql {
+ UPDATE t1 SET b=2, e=11 WHERE f=6;
+ SELECT * FROM t1;
+ }
+} {0 {1 2 3 4 11 6 2 3 4 4 6 7}}
+do_test update-10.6 {
+ catchsql {
+ UPDATE t1 SET b=2, e=12 WHERE f=7;
+ SELECT * FROM t1;
+ }
+} {1 {column b is not unique}}
+do_test update-10.7 {
+ catchsql {
+ SELECT * FROM t1;
+ }
+} {0 {1 2 3 4 11 6 2 3 4 4 6 7}}
+do_test update-10.8 {
+ catchsql {
+ UPDATE t1 SET c=3, d=4, e=13 WHERE f=6;
+ SELECT * FROM t1;
+ }
+} {0 {1 2 3 4 13 6 2 3 4 4 6 7}}
+do_test update-10.9 {
+ catchsql {
+ UPDATE t1 SET c=3, d=4, e=14 WHERE f=7;
+ SELECT * FROM t1;
+ }
+} {1 {columns c, d are not unique}}
+do_test update-10.10 {
+ catchsql {
+ SELECT * FROM t1;
+ }
+} {0 {1 2 3 4 13 6 2 3 4 4 6 7}}
+
+# Make sure we can handle a subquery in the where clause.
+#
+ifcapable subquery {
+ do_test update-11.1 {
+ execsql {
+ UPDATE t1 SET e=e+1 WHERE b IN (SELECT b FROM t1);
+ SELECT b,e FROM t1;
+ }
+ } {2 14 3 7}
+ do_test update-11.2 {
+ execsql {
+ UPDATE t1 SET e=e+1 WHERE a IN (SELECT a FROM t1);
+ SELECT a,e FROM t1;
+ }
+ } {1 15 2 8}
+}
+
+integrity_check update-12.1
+
+# Ticket 602. Updates should occur in the same order as the records
+# were discovered in the WHERE clause.
+#
+do_test update-13.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t2(a);
+ INSERT INTO t2 VALUES(1);
+ INSERT INTO t2 VALUES(2);
+ INSERT INTO t2 SELECT a+2 FROM t2;
+ INSERT INTO t2 SELECT a+4 FROM t2;
+ INSERT INTO t2 SELECT a+8 FROM t2;
+ INSERT INTO t2 SELECT a+16 FROM t2;
+ INSERT INTO t2 SELECT a+32 FROM t2;
+ INSERT INTO t2 SELECT a+64 FROM t2;
+ INSERT INTO t2 SELECT a+128 FROM t2;
+ INSERT INTO t2 SELECT a+256 FROM t2;
+ INSERT INTO t2 SELECT a+512 FROM t2;
+ INSERT INTO t2 SELECT a+1024 FROM t2;
+ COMMIT;
+ SELECT count(*) FROM t2;
+ }
+} {2048}
+do_test update-13.2 {
+ execsql {
+ SELECT count(*) FROM t2 WHERE a=rowid;
+ }
+} {2048}
+do_test update-13.3 {
+ execsql {
+ UPDATE t2 SET rowid=rowid-1;
+ SELECT count(*) FROM t2 WHERE a=rowid+1;
+ }
+} {2048}
+do_test update-13.3 {
+ execsql {
+ UPDATE t2 SET rowid=rowid+10000;
+ UPDATE t2 SET rowid=rowid-9999;
+ SELECT count(*) FROM t2 WHERE a=rowid;
+ }
+} {2048}
+do_test update-13.4 {
+ execsql {
+ BEGIN;
+ INSERT INTO t2 SELECT a+2048 FROM t2;
+ INSERT INTO t2 SELECT a+4096 FROM t2;
+ INSERT INTO t2 SELECT a+8192 FROM t2;
+ SELECT count(*) FROM t2 WHERE a=rowid;
+ COMMIT;
+ }
+} 16384
+do_test update-13.5 {
+ execsql {
+ UPDATE t2 SET rowid=rowid-1;
+ SELECT count(*) FROM t2 WHERE a=rowid+1;
+ }
+} 16384
+
+integrity_check update-13.6
+
+ifcapable {trigger} {
+# Test for proper detection of malformed WHEN clauses on UPDATE triggers.
+#
+do_test update-14.1 {
+ execsql {
+ CREATE TABLE t3(a,b,c);
+ CREATE TRIGGER t3r1 BEFORE UPDATE on t3 WHEN nosuchcol BEGIN
+ SELECT 'illegal WHEN clause';
+ END;
+ }
+} {}
+do_test update-14.2 {
+ catchsql {
+ UPDATE t3 SET a=1;
+ }
+} {1 {no such column: nosuchcol}}
+do_test update-14.3 {
+ execsql {
+ CREATE TABLE t4(a,b,c);
+ CREATE TRIGGER t4r1 AFTER UPDATE on t4 WHEN nosuchcol BEGIN
+ SELECT 'illegal WHEN clause';
+ END;
+ }
+} {}
+do_test update-14.4 {
+ catchsql {
+ UPDATE t4 SET a=1;
+ }
+} {1 {no such column: nosuchcol}}
+
+} ;# ifcapable {trigger}
+
+
+finish_test
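
Several of the update-* tests rely on PRAGMA count_changes, which makes each INSERT, UPDATE, or DELETE return a single row holding the number of rows it touched. A minimal sketch, assuming the SQLite Tcl bindings; the names are illustrative:

    package require sqlite3
    sqlite3 db :memory:
    db eval {
      CREATE TABLE t(a);
      INSERT INTO t VALUES(1);
      INSERT INTO t VALUES(2);
      PRAGMA count_changes=on;
    }
    # with the pragma on, the UPDATE statement itself yields the change count
    puts [db eval {UPDATE t SET a = a + 1}]   ;# => 2
    db close
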
Added: freeswitch/trunk/libs/sqlite/test/utf16.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/utf16.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,75 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file re-runs selected tests with the database encoding set to UTF-16.
+#
+# $Id: utf16.test,v 1.5 2006/01/09 23:40:26 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+rename finish_test really_finish_test2
+proc finish_test {} {}
+set ISQUICK 1
+
+if { [llength $argv]>0 } {
+ set FILES $argv
+ set argv [list]
+} else {
+ set F {
+ alter.test alter2.test alter3.test
+ auth.test bind.test blob.test capi2.test capi3.test collate1.test
+ collate2.test collate3.test collate4.test collate5.test collate6.test
+ conflict.test date.test delete.test expr.test fkey1.test func.test
+ hook.test index.test insert2.test insert.test interrupt.test in.test
+ intpkey.test ioerr.test join2.test join.test lastinsert.test
+ laststmtchanges.test limit.test lock2.test lock.test main.test
+ memdb.test minmax.test misc1.test misc2.test misc3.test notnull.test
+ null.test progress.test quote.test rowid.test select1.test select2.test
+ select3.test select4.test select5.test select6.test sort.test
+ subselect.test tableapi.test table.test temptable.test
+ trace.test trigger1.test trigger2.test trigger3.test
+ trigger4.test types2.test types.test unique.test update.test
+ vacuum.test view.test where.test
+ }
+ foreach f $F {lappend FILES $testdir/$f}
+}
+
+rename sqlite3 real_sqlite3
+proc sqlite3 {args} {
+ set r [eval "real_sqlite3 $args"]
+ if { [llength $args] == 2 } {
+ [lindex $args 0] eval {pragma encoding = 'UTF-16'}
+ }
+ set r
+}
+
+rename do_test really_do_test
+proc do_test {args} {
+ set sc [concat really_do_test "utf16-[lindex $args 0]" [lrange $args 1 end]]
+ eval $sc
+}
+
+foreach f $FILES {
+ source $f
+ catch {db close}
+ if {$sqlite_open_file_count>0} {
+ puts "$tail did not close all files: $sqlite_open_file_count"
+ incr nErr
+ lappend ::failList $tail
+ }
+}
+
+rename sqlite3 ""
+rename real_sqlite3 sqlite3
+rename finish_test ""
+rename really_finish_test2 finish_test
+rename do_test ""
+rename really_do_test do_test
+finish_test
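
The wrapper above works because PRAGMA encoding only takes effect when it runs before the database schema is created, which is why it is issued immediately after each sqlite3 call. A minimal sketch, assuming the SQLite Tcl bindings:

    package require sqlite3
    sqlite3 db :memory:
    db eval {PRAGMA encoding = 'UTF-16'}   ;# must precede any CREATE TABLE
    db eval {CREATE TABLE t(x); INSERT INTO t VALUES('abc')}
    puts [db eval {PRAGMA encoding}]       ;# => UTF-16le or UTF-16be, per native byte order
    db close
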
Added: freeswitch/trunk/libs/sqlite/test/utf16align.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/utf16align.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,84 @@
+# 2006 February 16
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# This file contains code to verify that the SQLITE_UTF16_ALIGNED
+# flag passed into the sqlite3_create_collation() function ensures
+# that all strings passed to that function are aligned on an even
+# byte boundary.
+#
+# $Id: utf16align.test,v 1.1 2006/02/16 18:16:38 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Skip this entire test if we do not support UTF16
+#
+ifcapable !utf16 {
+ finish_test
+ return
+}
+
+# Create a database with a UTF16 encoding. Put in lots of string
+# data of varying lengths.
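+#
+# The add_alignment_test_collations helper (a test fixture, presumably
+# implemented in C) registers two collating sequences: utf16_aligned,
+# created with the SQLITE_UTF16_ALIGNED flag, and utf16_unaligned,
+# created without it.  The collation callbacks bump
+# $unaligned_string_counter whenever they receive a string that does not
+# start on an even byte boundary.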
+#
+do_test utf16align-1.0 {
+ set unaligned_string_counter 0
+ add_alignment_test_collations [sqlite3_connection_pointer db]
+ execsql {
+ PRAGMA encoding=UTF16;
+ CREATE TABLE t1(
+ id INTEGER PRIMARY KEY,
+ spacer TEXT,
+ a TEXT COLLATE utf16_aligned,
+ b TEXT COLLATE utf16_unaligned
+ );
+ INSERT INTO t1(a) VALUES("abc");
+ INSERT INTO t1(a) VALUES("defghi");
+ INSERT INTO t1(a) VALUES("jklmnopqrstuv");
+ INSERT INTO t1(a) VALUES("wxyz0123456789-");
+ UPDATE t1 SET b=a||'-'||a;
+ INSERT INTO t1(a,b) SELECT a||b, b||a FROM t1;
+ INSERT INTO t1(a,b) SELECT a||b, b||a FROM t1;
+ INSERT INTO t1(a,b) SELECT a||b, b||a FROM t1;
+ INSERT INTO t1(a,b) VALUES('one','two');
+ INSERT INTO t1(a,b) SELECT a, b FROM t1;
+ UPDATE t1 SET spacer = CASE WHEN rowid&1 THEN 'x' ELSE 'xx' END;
+ SELECT count(*) FROM t1;
+ }
+} 66
+do_test utf16align-1.1 {
+ set unaligned_string_counter
+} 0
+
+# Create an index that uses the unaligned collation.  We should see
+# some unaligned strings passed to the collating function.
+#
+do_test utf16align-1.2 {
+ execsql {
+ CREATE INDEX t1i1 ON t1(spacer, b);
+ }
+ # puts $unaligned_string_counter
+ expr {$unaligned_string_counter>0}
+} 1
+
+# Create another index that uses the aligned collation. This time
+# there should be no unaligned accesses
+#
+do_test utf16align-1.3 {
+ set unaligned_string_counter 0
+ execsql {
+ CREATE INDEX t1i2 ON t1(spacer, a);
+ }
+ expr {$unaligned_string_counter>0}
+} 0
+integrity_check utf16align-1.4
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/vacuum.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vacuum.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,359 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the VACUUM statement.
+#
+# $Id: vacuum.test,v 1.38 2006/10/04 11:55:50 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If the VACUUM statement is disabled in the current build, skip all
+# the tests in this file.
+#
+ifcapable {!vacuum} {
+ finish_test
+ return
+}
+if $AUTOVACUUM {
+ finish_test
+ return
+}
+
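+# The cksum proc below computes a fingerprint of the database: the schema
+# read from sqlite_master, the full contents of every table, and the
+# default_cache_size pragma, reduced to a string length plus an md5 hash.
+# The tests use it to verify that VACUUM rebuilds the file without
+# changing any of its logical content.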
+set fcnt 1
+proc cksum {{db db}} {
+ set sql "SELECT name, type, sql FROM sqlite_master ORDER BY name, type"
+ set txt [$db eval $sql]\n
+ set sql "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
+ foreach tbl [$db eval $sql] {
+ append txt [$db eval "SELECT * FROM $tbl"]\n
+ }
+ foreach prag {default_cache_size} {
+ append txt $prag-[$db eval "PRAGMA $prag"]\n
+ }
+ if 0 {
+ global fcnt
+ set fd [open dump$fcnt.txt w]
+ puts -nonewline $fd $txt
+ close $fd
+ incr fcnt
+ }
+ set cksum [string length $txt]-[md5 $txt]
+ # puts $cksum-[file size test.db]
+ return $cksum
+}
+do_test vacuum-1.1 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(a INTEGER PRIMARY KEY, b, c);
+ INSERT INTO t1 VALUES(NULL,randstr(10,100),randstr(5,50));
+ INSERT INTO t1 VALUES(123456,randstr(10,100),randstr(5,50));
+ INSERT INTO t1 SELECT NULL, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT NULL, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT NULL, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT NULL, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT NULL, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT NULL, b||'-'||rowid, c||'-'||rowid FROM t1;
+ INSERT INTO t1 SELECT NULL, b||'-'||rowid, c||'-'||rowid FROM t1;
+ CREATE INDEX i1 ON t1(b,c);
+ CREATE UNIQUE INDEX i2 ON t1(c,a);
+ CREATE TABLE t2 AS SELECT * FROM t1;
+ COMMIT;
+ DROP TABLE t2;
+ }
+ set ::size1 [file size test.db]
+ set ::cksum [cksum]
+ expr {$::cksum!=""}
+} {1}
+do_test vacuum-1.2 {
+ execsql {
+ VACUUM;
+ }
+ cksum
+} $cksum
+ifcapable vacuum {
+ do_test vacuum-1.3 {
+ expr {[file size test.db]<$::size1}
+ } {1}
+}
+do_test vacuum-1.4 {
+ set sql_script {
+ BEGIN;
+ CREATE TABLE t2 AS SELECT * FROM t1;
+ CREATE TABLE t3 AS SELECT * FROM t1;
+ CREATE VIEW v1 AS SELECT b, c FROM t3;
+ CREATE TRIGGER r1 AFTER DELETE ON t2 BEGIN SELECT 1; END;
+ COMMIT;
+ DROP TABLE t2;
+ }
+ # If the library was compiled to omit view support, comment out the
+ # create view in the script $sql_script before executing it. Similarly,
+ # if triggers are not supported, comment out the trigger definition.
+ ifcapable !view {
+ regsub {CREATE VIEW} $sql_script {-- CREATE VIEW} sql_script
+ }
+ ifcapable !trigger {
+ regsub {CREATE TRIGGER} $sql_script {-- CREATE TRIGGER} sql_script
+ }
+ execsql $sql_script
+ set ::size1 [file size test.db]
+ set ::cksum [cksum]
+ expr {$::cksum!=""}
+} {1}
+do_test vacuum-1.5 {
+ execsql {
+ VACUUM;
+ }
+ cksum
+} $cksum
+
+ifcapable vacuum {
+ do_test vacuum-1.6 {
+ expr {[file size test.db]<$::size1}
+ } {1}
+}
+ifcapable vacuum {
+ do_test vacuum-2.1 {
+ catchsql {
+ BEGIN;
+ VACUUM;
+ COMMIT;
+ }
+ } {1 {cannot VACUUM from within a transaction}}
+ catch {db eval COMMIT}
+}
+do_test vacuum-2.2 {
+ sqlite3 db2 test.db
+ execsql {
+ BEGIN;
+ CREATE TABLE t4 AS SELECT * FROM t1;
+ CREATE TABLE t5 AS SELECT * FROM t1;
+ COMMIT;
+ DROP TABLE t4;
+ DROP TABLE t5;
+ } db2
+ set ::cksum [cksum db2]
+ catchsql {
+ VACUUM
+ }
+} {0 {}}
+do_test vacuum-2.3 {
+ cksum
+} $cksum
+do_test vacuum-2.4 {
+ catch {db2 eval {SELECT count(*) FROM sqlite_master}}
+ cksum db2
+} $cksum
+
+# Make sure the schema cookie is incremented by vacuum.
+#
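+# If the schema cookie changes, the second connection (db3) is forced to
+# re-read the database schema before reusing its prepared statements, so
+# the INSERT and SELECT issued through db3 after the VACUUM below should
+# still work and see the rebuilt t7 table.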
+do_test vacuum-2.5 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t6 AS SELECT * FROM t1;
+ CREATE TABLE t7 AS SELECT * FROM t1;
+ COMMIT;
+ }
+ sqlite3 db3 test.db
+ execsql {
+ -- The "SELECT * FROM sqlite_master" statement ensures that this test
+ -- works when shared-cache is enabled. If shared-cache is enabled, then
+ -- db3 shares a cache with db2 (but not db - it was opened as
+ -- "./test.db").
+ SELECT * FROM sqlite_master;
+ SELECT * FROM t7 LIMIT 1
+ } db3
+ execsql {
+ VACUUM;
+ }
+ execsql {
+ INSERT INTO t7 VALUES(1234567890,'hello','world');
+ } db3
+ execsql {
+ SELECT * FROM t7 WHERE a=1234567890
+ }
+} {1234567890 hello world}
+integrity_check vacuum-2.6
+do_test vacuum-2.7 {
+ execsql {
+ SELECT * FROM t7 WHERE a=1234567890
+ } db3
+} {1234567890 hello world}
+do_test vacuum-2.8 {
+ execsql {
+ INSERT INTO t7 SELECT * FROM t6;
+ SELECT count(*) FROM t7;
+ }
+} 513
+integrity_check vacuum-2.9
+do_test vacuum-2.10 {
+ execsql {
+ DELETE FROM t7;
+ SELECT count(*) FROM t7;
+ } db3
+} 0
+integrity_check vacuum-2.11
+db3 close
+
+
+# Ticket #427. Make sure VACUUM works when the EMPTY_RESULT_CALLBACKS
+# pragma is turned on.
+#
+do_test vacuum-3.1 {
+ db close
+ db2 close
+ file delete test.db
+ sqlite3 db test.db
+ execsql {
+ PRAGMA empty_result_callbacks=on;
+ VACUUM;
+ }
+} {}
+
+# Ticket #464. Make sure VACUUM works with the sqlite3_prepare() API.
+#
+do_test vacuum-4.1 {
+ db close
+ sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
+ set VM [sqlite3_prepare $DB {VACUUM} -1 TAIL]
+ sqlite3_step $VM
+} {SQLITE_DONE}
+do_test vacuum-4.2 {
+ sqlite3_finalize $VM
+} SQLITE_OK
+
+# Ticket #515. VACUUM after deleting and recreating the table that
+# a view refers to. Omit this test if the library is not view-enabled.
+#
+ifcapable view {
+do_test vacuum-5.1 {
+ db close
+ file delete -force test.db
+ sqlite3 db test.db
+ catchsql {
+ CREATE TABLE Test (TestID int primary key);
+ INSERT INTO Test VALUES (NULL);
+ CREATE VIEW viewTest AS SELECT * FROM Test;
+
+ BEGIN;
+ CREATE TABLE tempTest (TestID int primary key, Test2 int NULL);
+ INSERT INTO tempTest SELECT TestID, 1 FROM Test;
+ DROP TABLE Test;
+ CREATE TABLE Test(TestID int primary key, Test2 int NULL);
+ INSERT INTO Test SELECT * FROM tempTest;
+ DROP TABLE tempTest;
+ COMMIT;
+ VACUUM;
+ }
+} {0 {}}
+do_test vacuum-5.2 {
+ catchsql {
+ VACUUM;
+ }
+} {0 {}}
+} ;# ifcapable view
+
+# Ensure vacuum works with complicated table names.
+do_test vacuum-6.1 {
+ execsql {
+ CREATE TABLE "abc abc"(a, b, c);
+ INSERT INTO "abc abc" VALUES(1, 2, 3);
+ VACUUM;
+ }
+} {}
+do_test vacuum-6.2 {
+ execsql {
+ select * from "abc abc";
+ }
+} {1 2 3}
+
+# Also ensure that blobs survive a vacuum.
+ifcapable {bloblit} {
+ do_test vacuum-6.3 {
+ execsql {
+ DELETE FROM "abc abc";
+ INSERT INTO "abc abc" VALUES(X'00112233', NULL, NULL);
+ VACUUM;
+ }
+ } {}
+ do_test vacuum-6.4 {
+ execsql {
+ select count(*) from "abc abc" WHERE a = X'00112233';
+ }
+ } {1}
+}
+
+# Check what happens when an in-memory database is vacuumed. The
+# [file delete] command covers us in case the library was compiled
+# without in-memory database support.
+#
+file delete -force :memory:
+do_test vacuum-7.0 {
+ sqlite3 db2 :memory:
+ execsql {
+ CREATE TABLE t1(t);
+ VACUUM;
+ } db2
+} {}
+db2 close
+
+# Ticket #873. VACUUM a database that has ' in its name.
+#
+do_test vacuum-8.1 {
+ file delete -force a'z.db
+ file delete -force a'z.db-journal
+ sqlite3 db2 a'z.db
+ execsql {
+ CREATE TABLE t1(t);
+ VACUUM;
+ } db2
+} {}
+db2 close
+
+# Ticket #1095: Vacuum a table that uses AUTOINCREMENT
+#
+ifcapable {autoinc} {
+ do_test vacuum-9.1 {
+ execsql {
+ DROP TABLE 'abc abc';
+ CREATE TABLE autoinc(a INTEGER PRIMARY KEY AUTOINCREMENT, b);
+ INSERT INTO autoinc(b) VALUES('hi');
+ INSERT INTO autoinc(b) VALUES('there');
+ DELETE FROM autoinc;
+ }
+ set ::cksum [cksum]
+ expr {$::cksum!=""}
+ } {1}
+ do_test vacuum-9.2 {
+ execsql {
+ VACUUM;
+ }
+ cksum
+ } $::cksum
+ do_test vacuum-9.3 {
+ execsql {
+ INSERT INTO autoinc(b) VALUES('one');
+ INSERT INTO autoinc(b) VALUES('two');
+ }
+ set ::cksum [cksum]
+ expr {$::cksum!=""}
+ } {1}
+ do_test vacuum-9.4 {
+ execsql {
+ VACUUM;
+ }
+ cksum
+ } $::cksum
+}
+
+file delete -force {a'z.db}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/vacuum2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vacuum2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,42 @@
+# 2005 February 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the VACUUM statement.
+#
+# $Id: vacuum2.test,v 1.2 2006/01/16 16:24:25 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If the VACUUM statement is disabled in the current build, skip all
+# the tests in this file.
+#
+ifcapable {!vacuum||!autoinc} {
+ finish_test
+ return
+}
+if $AUTOVACUUM {
+ finish_test
+ return
+}
+
+# Ticket #1121 - make sure vacuum works if all autoincrement tables
+# have been deleted.
+#
+do_test vacuum2-1.1 {
+ execsql {
+ CREATE TABLE t1(x INTEGER PRIMARY KEY AUTOINCREMENT, y);
+ DROP TABLE t1;
+ VACUUM;
+ }
+} {}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/varint.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/varint.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,32 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this script is variable-length integer encoding scheme.
+#
+# $Id: varint.test,v 1.1 2004/05/18 15:57:42 drh Exp $
+
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Test reading and writing of varints.
+#
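+# A varint stores a 64-bit integer in 1 to 9 bytes; in each of the first
+# eight bytes the high-order bit is a continuation flag, so small values
+# occupy fewer bytes.  The btree_varint_test command is a test helper
+# that presumably encodes and decodes a sequence of values derived from
+# its start, multiplier, count and increment arguments and fails if any
+# value does not survive the round trip.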
+set cnt 0
+foreach start {0 100 10000 1000000 0x10000000} {
+ foreach mult {1 0x10 0x100 0x1000 0x10000 0x100000 0x1000000 0x10000000} {
+ foreach incr {1 500 10000 50000000} {
+ incr cnt
+ do_test varint-1.$cnt {
+ btree_varint_test $start $mult 5000 $incr
+ } {}
+ }
+ }
+}
Added: freeswitch/trunk/libs/sqlite/test/view.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/view.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,501 @@
+# 2002 February 26
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing VIEW statements.
+#
+# $Id: view.test,v 1.33 2006/09/11 23:45:50 drh Exp $
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Omit this entire file if the library is not configured with views enabled.
+ifcapable !view {
+ finish_test
+ return
+}
+
+do_test view-1.0 {
+ execsql {
+ CREATE TABLE t1(a,b,c);
+ INSERT INTO t1 VALUES(1,2,3);
+ INSERT INTO t1 VALUES(4,5,6);
+ INSERT INTO t1 VALUES(7,8,9);
+ SELECT * FROM t1;
+ }
+} {1 2 3 4 5 6 7 8 9}
+
+do_test view-1.1 {
+ execsql {
+ BEGIN;
+ CREATE VIEW IF NOT EXISTS v1 AS SELECT a,b FROM t1;
+ SELECT * FROM v1 ORDER BY a;
+ }
+} {1 2 4 5 7 8}
+do_test view-1.2 {
+ catchsql {
+ ROLLBACK;
+ SELECT * FROM v1 ORDER BY a;
+ }
+} {1 {no such table: v1}}
+do_test view-1.3 {
+ execsql {
+ CREATE VIEW v1 AS SELECT a,b FROM t1;
+ SELECT * FROM v1 ORDER BY a;
+ }
+} {1 2 4 5 7 8}
+do_test view-1.3.1 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM v1 ORDER BY a;
+ }
+} {1 2 4 5 7 8}
+do_test view-1.4 {
+ catchsql {
+ DROP VIEW IF EXISTS v1;
+ SELECT * FROM v1 ORDER BY a;
+ }
+} {1 {no such table: v1}}
+do_test view-1.5 {
+ execsql {
+ CREATE VIEW v1 AS SELECT a,b FROM t1;
+ SELECT * FROM v1 ORDER BY a;
+ }
+} {1 2 4 5 7 8}
+do_test view-1.6 {
+ catchsql {
+ DROP TABLE t1;
+ SELECT * FROM v1 ORDER BY a;
+ }
+} {1 {no such table: main.t1}}
+do_test view-1.7 {
+ execsql {
+ CREATE TABLE t1(x,a,b,c);
+ INSERT INTO t1 VALUES(1,2,3,4);
+ INSERT INTO t1 VALUES(4,5,6,7);
+ INSERT INTO t1 VALUES(7,8,9,10);
+ SELECT * FROM v1 ORDER BY a;
+ }
+} {2 3 5 6 8 9}
+do_test view-1.8 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM v1 ORDER BY a;
+ }
+} {2 3 5 6 8 9}
+
+do_test view-2.1 {
+ execsql {
+ CREATE VIEW v2 AS SELECT * FROM t1 WHERE a>5
+ }; # No semicolon
+ execsql2 {
+ SELECT * FROM v2;
+ }
+} {x 7 a 8 b 9 c 10}
+do_test view-2.2 {
+ catchsql {
+ INSERT INTO v2 VALUES(1,2,3,4);
+ }
+} {1 {cannot modify v2 because it is a view}}
+do_test view-2.3 {
+ catchsql {
+ UPDATE v2 SET a=10 WHERE a=5;
+ }
+} {1 {cannot modify v2 because it is a view}}
+do_test view-2.4 {
+ catchsql {
+ DELETE FROM v2;
+ }
+} {1 {cannot modify v2 because it is a view}}
+do_test view-2.5 {
+ execsql {
+ INSERT INTO t1 VALUES(11,12,13,14);
+ SELECT * FROM v2 ORDER BY x;
+ }
+} {7 8 9 10 11 12 13 14}
+do_test view-2.6 {
+ execsql {
+ SELECT x FROM v2 WHERE a>10
+ }
+} {11}
+
+# Test that the column names of views are generated correctly.
+#
+do_test view-3.1 {
+ execsql2 {
+ SELECT * FROM v1 LIMIT 1
+ }
+} {a 2 b 3}
+do_test view-3.2 {
+ execsql2 {
+ SELECT * FROM v2 LIMIT 1
+ }
+} {x 7 a 8 b 9 c 10}
+do_test view-3.3 {
+ execsql2 {
+ DROP VIEW v1;
+ CREATE VIEW v1 AS SELECT a AS 'xyz', b+c AS 'pqr', c-b FROM t1;
+ SELECT * FROM v1 LIMIT 1
+ }
+} {xyz 2 pqr 7 c-b 1}
+
+ifcapable compound {
+do_test view-3.4 {
+ execsql2 {
+ CREATE VIEW v3 AS SELECT a FROM t1 UNION SELECT b FROM t1 ORDER BY b;
+ SELECT * FROM v3 LIMIT 4;
+ }
+} {a 2 a 3 a 5 a 6}
+do_test view-3.5 {
+ execsql2 {
+ CREATE VIEW v4 AS
+ SELECT a, b FROM t1
+ UNION
+ SELECT b AS 'x', a AS 'y' FROM t1
+ ORDER BY x, y;
+ SELECT b FROM v4 ORDER BY b LIMIT 4;
+ }
+} {b 2 b 3 b 5 b 6}
+} ;# ifcapable compound
+
+
+do_test view-4.1 {
+ catchsql {
+ DROP VIEW t1;
+ }
+} {1 {use DROP TABLE to delete table t1}}
+do_test view-4.2 {
+ execsql {
+ SELECT 1 FROM t1 LIMIT 1;
+ }
+} 1
+do_test view-4.3 {
+ catchsql {
+ DROP TABLE v1;
+ }
+} {1 {use DROP VIEW to delete view v1}}
+do_test view-4.4 {
+ execsql {
+ SELECT 1 FROM v1 LIMIT 1;
+ }
+} {1}
+do_test view-4.5 {
+ catchsql {
+ CREATE INDEX i1v1 ON v1(xyz);
+ }
+} {1 {views may not be indexed}}
+
+do_test view-5.1 {
+ execsql {
+ CREATE TABLE t2(y,a);
+ INSERT INTO t2 VALUES(22,2);
+ INSERT INTO t2 VALUES(33,3);
+ INSERT INTO t2 VALUES(44,4);
+ INSERT INTO t2 VALUES(55,5);
+ SELECT * FROM t2;
+ }
+} {22 2 33 3 44 4 55 5}
+do_test view-5.2 {
+ execsql {
+ CREATE VIEW v5 AS
+ SELECT t1.x AS v, t2.y AS w FROM t1 JOIN t2 USING(a);
+ SELECT * FROM v5;
+ }
+} {1 22 4 55}
+
+# Verify that the view v5 gets flattened.  See sqliteFlattenSubquery().
+# This will only work if EXPLAIN is enabled.
+# Ticket #272
+#
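+# If the view is flattened into the outer query, no ephemeral table is
+# needed to materialize it, so the EXPLAIN output should contain no
+# OpenEphemeral opcode and the [lsearch] below returns -1.
+#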
+ifcapable {explain} {
+do_test view-5.3 {
+ lsearch [execsql {
+ EXPLAIN SELECT * FROM v5;
+ }] OpenEphemeral
+} {-1}
+do_test view-5.4 {
+ execsql {
+ SELECT * FROM v5 AS a, t2 AS b WHERE a.w=b.y;
+ }
+} {1 22 22 2 4 55 55 5}
+do_test view-5.5 {
+ lsearch [execsql {
+ EXPLAIN SELECT * FROM v5 AS a, t2 AS b WHERE a.w=b.y;
+ }] OpenEphemeral
+} {-1}
+do_test view-5.6 {
+ execsql {
+ SELECT * FROM t2 AS b, v5 AS a WHERE a.w=b.y;
+ }
+} {22 2 1 22 55 5 4 55}
+do_test view-5.7 {
+ lsearch [execsql {
+ EXPLAIN SELECT * FROM t2 AS b, v5 AS a WHERE a.w=b.y;
+ }] OpenEphemeral
+} {-1}
+do_test view-5.8 {
+ execsql {
+ SELECT * FROM t1 AS a, v5 AS b, t2 AS c WHERE a.x=b.v AND b.w=c.y;
+ }
+} {1 2 3 4 1 22 22 2 4 5 6 7 4 55 55 5}
+do_test view-5.9 {
+ lsearch [execsql {
+ EXPLAIN SELECT * FROM t1 AS a, v5 AS b, t2 AS c WHERE a.x=b.v AND b.w=c.y;
+ }] OpenEphemeral
+} {-1}
+} ;# endif explain
+
+do_test view-6.1 {
+ execsql {
+ SELECT min(x), min(a), min(b), min(c), min(a+b+c) FROM v2;
+ }
+} {7 8 9 10 27}
+do_test view-6.2 {
+ execsql {
+ SELECT max(x), max(a), max(b), max(c), max(a+b+c) FROM v2;
+ }
+} {11 12 13 14 39}
+
+do_test view-7.1 {
+ execsql {
+ CREATE TABLE test1(id integer primary key, a);
+ CREATE TABLE test2(id integer, b);
+ INSERT INTO test1 VALUES(1,2);
+ INSERT INTO test2 VALUES(1,3);
+ CREATE VIEW test AS
+ SELECT test1.id, a, b
+ FROM test1 JOIN test2 ON test2.id=test1.id;
+ SELECT * FROM test;
+ }
+} {1 2 3}
+do_test view-7.2 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM test;
+ }
+} {1 2 3}
+do_test view-7.3 {
+ execsql {
+ DROP VIEW test;
+ CREATE VIEW test AS
+ SELECT test1.id, a, b
+ FROM test1 JOIN test2 USING(id);
+ SELECT * FROM test;
+ }
+} {1 2 3}
+do_test view-7.4 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM test;
+ }
+} {1 2 3}
+do_test view-7.5 {
+ execsql {
+ DROP VIEW test;
+ CREATE VIEW test AS
+ SELECT test1.id, a, b
+ FROM test1 NATURAL JOIN test2;
+ SELECT * FROM test;
+ }
+} {1 2 3}
+do_test view-7.6 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM test;
+ }
+} {1 2 3}
+
+do_test view-8.1 {
+ execsql {
+ CREATE VIEW v6 AS SELECT pqr, xyz FROM v1;
+ SELECT * FROM v6 ORDER BY xyz;
+ }
+} {7 2 13 5 19 8 27 12}
+do_test view-8.2 {
+ db close
+ sqlite3 db test.db
+ execsql {
+ SELECT * FROM v6 ORDER BY xyz;
+ }
+} {7 2 13 5 19 8 27 12}
+do_test view-8.3 {
+ execsql {
+ CREATE VIEW v7 AS SELECT pqr+xyz AS a FROM v6;
+ SELECT * FROM v7 ORDER BY a;
+ }
+} {9 18 27 39}
+
+ifcapable subquery {
+ do_test view-8.4 {
+ execsql {
+ CREATE VIEW v8 AS SELECT max(cnt) AS mx FROM
+ (SELECT a%2 AS eo, count(*) AS cnt FROM t1 GROUP BY eo);
+ SELECT * FROM v8;
+ }
+ } 3
+ do_test view-8.5 {
+ execsql {
+ SELECT mx+10, mx*2 FROM v8;
+ }
+ } {13 6}
+ do_test view-8.6 {
+ execsql {
+ SELECT mx+10, pqr FROM v6, v8 WHERE xyz=2;
+ }
+ } {13 7}
+ do_test view-8.7 {
+ execsql {
+ SELECT mx+10, pqr FROM v6, v8 WHERE xyz>2;
+ }
+ } {13 13 13 19 13 27}
+} ;# ifcapable subquery
+
+# Tests for a bug found by Michiel de Wit involving ORDER BY in a VIEW.
+#
+do_test view-9.1 {
+ execsql {
+ INSERT INTO t2 SELECT * FROM t2 WHERE a<5;
+ INSERT INTO t2 SELECT * FROM t2 WHERE a<4;
+ INSERT INTO t2 SELECT * FROM t2 WHERE a<3;
+ SELECT DISTINCT count(*) FROM t2 GROUP BY a ORDER BY 1;
+ }
+} {1 2 4 8}
+do_test view-9.2 {
+ execsql {
+ SELECT DISTINCT count(*) FROM t2 GROUP BY a ORDER BY 1 LIMIT 3;
+ }
+} {1 2 4}
+do_test view-9.3 {
+ execsql {
+ CREATE VIEW v9 AS
+ SELECT DISTINCT count(*) FROM t2 GROUP BY a ORDER BY 1 LIMIT 3;
+ SELECT * FROM v9;
+ }
+} {1 2 4}
+do_test view-9.4 {
+ execsql {
+ SELECT * FROM v9 ORDER BY 1 DESC;
+ }
+} {4 2 1}
+do_test view-9.5 {
+ execsql {
+ CREATE VIEW v10 AS
+ SELECT DISTINCT a, count(*) FROM t2 GROUP BY a ORDER BY 2 LIMIT 3;
+ SELECT * FROM v10;
+ }
+} {5 1 4 2 3 4}
+do_test view-9.6 {
+ execsql {
+ SELECT * FROM v10 ORDER BY 1;
+ }
+} {3 4 4 2 5 1}
+
+# Tables with columns having peculiar quoted names used in views
+# Ticket #756.
+#
+do_test view-10.1 {
+ execsql {
+ CREATE TABLE t3("9" integer, [4] text);
+ INSERT INTO t3 VALUES(1,2);
+ CREATE VIEW v_t3_a AS SELECT a.[9] FROM t3 AS a;
+ CREATE VIEW v_t3_b AS SELECT "4" FROM t3;
+ SELECT * FROM v_t3_a;
+ }
+} {1}
+do_test view-10.2 {
+ execsql {
+ SELECT * FROM v_t3_b;
+ }
+} {2}
+
+do_test view-11.1 {
+ execsql {
+ CREATE TABLE t4(a COLLATE NOCASE);
+ INSERT INTO t4 VALUES('This');
+ INSERT INTO t4 VALUES('this');
+ INSERT INTO t4 VALUES('THIS');
+ SELECT * FROM t4 WHERE a = 'THIS';
+ }
+} {This this THIS}
+ifcapable subquery {
+ do_test view-11.2 {
+ execsql {
+ SELECT * FROM (SELECT * FROM t4) WHERE a = 'THIS';
+ }
+ } {This this THIS}
+}
+do_test view-11.3 {
+ execsql {
+ CREATE VIEW v11 AS SELECT * FROM t4;
+ SELECT * FROM v11 WHERE a = 'THIS';
+ }
+} {This this THIS}
+
+# Ticket #1270: Do not allow parameters in view definitions.
+#
+do_test view-12.1 {
+ catchsql {
+ CREATE VIEW v12 AS SELECT a FROM t1 WHERE b=?
+ }
+} {1 {parameters are not allowed in views}}
+
+do_test view-13.1 {
+ file delete -force test2.db
+ catchsql {
+ ATTACH 'test2.db' AS two;
+ CREATE TABLE two.t2(x,y);
+ CREATE VIEW v13 AS SELECT y FROM two.t2;
+ }
+} {1 {view v13 cannot reference objects in database two}}
+
+# Ticket #1658
+#
+do_test view-14.1 {
+ catchsql {
+ CREATE TEMP VIEW t1 AS SELECT a,b FROM t1;
+ SELECT * FROM temp.t1;
+ }
+} {1 {view t1 is circularly defined}}
+
+# Tickets #1688, #1709
+#
+do_test view-15.1 {
+ execsql2 {
+ CREATE VIEW v15 AS SELECT a AS x, b AS y FROM t1;
+ SELECT * FROM v15 LIMIT 1;
+ }
+} {x 2 y 3}
+do_test view-15.2 {
+ execsql2 {
+ SELECT x, y FROM v15 LIMIT 1
+ }
+} {x 2 y 3}
+
+do_test view-16.1 {
+ catchsql {
+ CREATE VIEW IF NOT EXISTS v1 AS SELECT * FROM t1;
+ }
+} {0 {}}
+do_test view-16.2 {
+ execsql {
+ SELECT sql FROM sqlite_master WHERE name='v1'
+ }
+} {{CREATE VIEW v1 AS SELECT a AS 'xyz', b+c AS 'pqr', c-b FROM t1}}
+do_test view-16.3 {
+ catchsql {
+ DROP VIEW IF EXISTS nosuchview
+ }
+} {0 {}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/vtab1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vtab1.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,871 @@
+# 2006 June 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is creating and dropping virtual tables.
+#
+# $Id: vtab1.test,v 1.38 2006/09/16 21:45:14 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !vtab||!schema_pragmas {
+ finish_test
+ return
+}
+
+#----------------------------------------------------------------------
+# Organization of tests in this file:
+#
+# vtab1-1.*: Error conditions and other issues surrounding creation/connection
+# of a virtual module.
+# vtab1-2.*: Test sqlite3_declare_vtab() and the xConnect/xDisconnect methods.
+# vtab1-3.*: Table scans and WHERE clauses.
+# vtab1-4.*: Table scans and ORDER BY clauses.
+# vtab1-5.*: Test queries that include joins. This brings the
+# sqlite3_index_info.estimatedCost variable into play.
+# vtab1-6.*: Test UPDATE/INSERT/DELETE on vtables.
+# vtab1-7.*: Test sqlite3_last_insert_rowid().
+#
+# This file uses the "echo" module (see src/test8.c). Refer to comments
+# in that file for the special behaviour of the Tcl $echo_module variable.
+#
+# TODO:
+# * How to test the sqlite3_index_constraint_usage.omit field?
+# * vtab1-5.*
+#
+
+
+#----------------------------------------------------------------------
+# Test cases vtab1.1.*
+#
+
+# We cannot create a virtual table if the module has not been registered.
+#
+do_test vtab1-1.1 {
+ catchsql {
+ CREATE VIRTUAL TABLE t1 USING echo;
+ }
+} {1 {no such module: echo}}
+do_test vtab1-1.2 {
+ execsql {
+ SELECT name FROM sqlite_master ORDER BY 1
+ }
+} {}
+
+# Register the module
+register_echo_module [sqlite3_connection_pointer db]
+
+# Once a module has been registered, virtual tables using that module
+# may be created.  However, if a module's xCreate() fails to call
+# sqlite3_declare_vtab(), an error is raised and the table is not created.
+#
+# The "echo" module does not invoke sqlite3_declare_vtab() if it is
+# passed zero arguments.
+#
+do_test vtab1-1.3 {
+ catchsql {
+ CREATE VIRTUAL TABLE t1 USING echo;
+ }
+} {1 {vtable constructor did not declare schema: t1}}
+do_test vtab1-1.4 {
+ execsql {
+ SELECT name FROM sqlite_master ORDER BY 1
+ }
+} {}
+
+# The "echo" module xCreate method returns an error and does not create
+# the virtual table if it is passed an argument that does not correspond
+# to an existing real table in the same database.
+#
+do_test vtab1-1.5 {
+ catchsql {
+ CREATE VIRTUAL TABLE t1 USING echo(no_such_table);
+ }
+} {1 {vtable constructor failed: t1}}
+do_test vtab1-1.6 {
+ execsql {
+ SELECT name FROM sqlite_master ORDER BY 1
+ }
+} {}
+
+# Test to make sure nothing goes wrong and no memory is leaked if we
+# select an illegal table-name (i.e. a reserved name or the name of a
+# table that already exists).
+#
+do_test vtab1-1.7 {
+ catchsql {
+ CREATE VIRTUAL TABLE sqlite_master USING echo;
+ }
+} {1 {object name reserved for internal use: sqlite_master}}
+do_test vtab1-1.8 {
+ catchsql {
+ CREATE TABLE treal(a, b, c);
+ CREATE VIRTUAL TABLE treal USING echo(treal);
+ }
+} {1 {table treal already exists}}
+do_test vtab1-1.9 {
+ execsql {
+ DROP TABLE treal;
+ SELECT name FROM sqlite_master ORDER BY 1
+ }
+} {}
+
+do_test vtab1-1.10 {
+ execsql {
+ CREATE TABLE treal(a, b, c);
+ CREATE VIRTUAL TABLE techo USING echo(treal);
+ }
+ db close
+ sqlite3 db test.db
+ catchsql {
+ SELECT * FROM techo;
+ }
+} {1 {no such module: echo}}
+do_test vtab1-1.11 {
+ catchsql {
+ INSERT INTO techo VALUES(1, 2, 3);
+ }
+} {1 {no such module: echo}}
+do_test vtab1-1.12 {
+ catchsql {
+ UPDATE techo SET a = 10;
+ }
+} {1 {no such module: echo}}
+do_test vtab1-1.13 {
+ catchsql {
+ DELETE FROM techo;
+ }
+} {1 {no such module: echo}}
+do_test vtab1-1.14 {
+ catchsql {
+ PRAGMA table_info(techo)
+ }
+} {1 {no such module: echo}}
+do_test vtab1-1.15 {
+ catchsql {
+ DROP TABLE techo;
+ }
+} {1 {no such module: echo}}
+
+register_echo_module [sqlite3_connection_pointer db]
+do_test vtab1-1.X {
+ execsql {
+ DROP TABLE techo;
+ DROP TABLE treal;
+ SELECT sql FROM sqlite_master;
+ }
+} {}
+
+#----------------------------------------------------------------------
+# Test cases vtab1.2.*
+#
+# At this point, the database is completely empty. The echo module
+# has already been registered.
+
+# If a single argument is passed to the echo module during table
+# creation, it is assumed to be the name of a table in the same
+# database. The echo module attempts to set the schema of the
+# new virtual table to be the same as the existing database table.
+#
+do_test vtab1-2.1 {
+ execsql {
+ CREATE TABLE template(a, b, c);
+ }
+ execsql { PRAGMA table_info(template); }
+} [list \
+ 0 a {} 0 {} 0 \
+ 1 b {} 0 {} 0 \
+ 2 c {} 0 {} 0 \
+]
+do_test vtab1-2.2 {
+ execsql {
+ CREATE VIRTUAL TABLE t1 USING echo(template);
+ }
+ execsql { PRAGMA table_info(t1); }
+} [list \
+ 0 a {} 0 {} 0 \
+ 1 b {} 0 {} 0 \
+ 2 c {} 0 {} 0 \
+]
+
+# Test that the database can be unloaded. This should invoke the xDisconnect()
+# callback for the successfully created virtual table (t1).
+#
+do_test vtab1-2.3 {
+ set echo_module [list]
+ db close
+ set echo_module
+} [list xDisconnect]
+
+# Re-open the database. This should not cause any virtual methods to
+# be called. The invocation of xConnect() is delayed until the virtual
+# table schema is first required by the compiler.
+#
+do_test vtab1-2.4 {
+ set echo_module [list]
+ sqlite3 db test.db
+ db cache size 0
+ set echo_module
+} {}
+
+# Try to query the virtual table schema. This should fail, as the
+# echo module has not been registered with this database connection.
+#
+do_test vtab1.2.6 {
+breakpoint
+ catchsql { PRAGMA table_info(t1); }
+} {1 {no such module: echo}}
+
+# Register the module
+register_echo_module [sqlite3_connection_pointer db]
+
+# Try to query the virtual table schema again. This time it should
+# invoke the xConnect method and succeed.
+#
+do_test vtab1.2.7 {
+ execsql { PRAGMA table_info(t1); }
+} [list \
+ 0 a {} 0 {} 0 \
+ 1 b {} 0 {} 0 \
+ 2 c {} 0 {} 0 \
+]
+do_test vtab1.2.8 {
+ set echo_module
+} {xConnect echo main t1 template}
+
+# Drop table t1. This should cause the xDestroy (but not xDisconnect) method
+# to be invoked.
+do_test vtab1-2.5 {
+ set echo_module ""
+ execsql {
+ DROP TABLE t1;
+ }
+ set echo_module
+} {xDestroy}
+
+do_test vtab1-2.6 {
+ execsql {
+ PRAGMA table_info(t1);
+ }
+} {}
+do_test vtab1-2.7 {
+ execsql {
+ SELECT sql FROM sqlite_master;
+ }
+} [list {CREATE TABLE template(a, b, c)}]
+# Clean up other test artifacts:
+do_test vtab1-2.8 {
+ execsql {
+ DROP TABLE template;
+ SELECT sql FROM sqlite_master;
+ }
+} [list]
+
+#----------------------------------------------------------------------
+# Test cases vtab1-3.* test table scans and the echo module's
+# xBestIndex/xFilter handling of WHERE conditions.
+
+do_test vtab1-3.1 {
+ set echo_module ""
+ execsql {
+ CREATE TABLE treal(a INTEGER, b INTEGER, c);
+ CREATE INDEX treal_idx ON treal(b);
+ CREATE VIRTUAL TABLE t1 USING echo(treal);
+ }
+ set echo_module
+} [list xCreate echo main t1 treal \
+ xSync echo(treal) \
+ xCommit echo(treal) \
+]
+
+# Test that a SELECT on t1 doesn't crash. No rows are returned
+# because the underlying real table is currently empty.
+#
+do_test vtab1-3.2 {
+ execsql {
+ SELECT a, b, c FROM t1;
+ }
+} {}
+
+# Put some data into the table treal. Then try a few simple SELECT
+# statements on t1.
+#
+do_test vtab1-3.3 {
+ execsql {
+ INSERT INTO treal VALUES(1, 2, 3);
+ INSERT INTO treal VALUES(4, 5, 6);
+ SELECT * FROM t1;
+ }
+} {1 2 3 4 5 6}
+do_test vtab1-3.4 {
+ execsql {
+ SELECT a FROM t1;
+ }
+} {1 4}
+do_test vtab1-3.5 {
+ execsql {
+ SELECT rowid FROM t1;
+ }
+} {1 2}
+do_test vtab1-3.6 {
+ set echo_module ""
+ execsql {
+ SELECT * FROM t1;
+ }
+} {1 2 3 4 5 6}
+do_test vtab1-3.7 {
+ execsql {
+ SELECT rowid, * FROM t1;
+ }
+} {1 1 2 3 2 4 5 6}
+do_test vtab1-3.8 {
+ execsql {
+ SELECT a AS d, b AS e, c AS f FROM t1;
+ }
+} {1 2 3 4 5 6}
+
+# Execute some SELECT statements with WHERE clauses on the t1 table.
+# Then check the echo_module variable (written to by the module methods
+# in test8.c) to make sure the xBestIndex() and xFilter() methods were
+# called correctly.
+#
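+# The echo module records each call as an SQL string of the form
+# "SELECT rowid, * FROM '<real-table>' WHERE ...", with one ? for every
+# constraint it agreed to handle; the values bound to those parameters
+# appear after the xFilter entry in $echo_module.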
+do_test vtab1-3.8 {
+ set echo_module ""
+ execsql {
+ SELECT * FROM t1;
+ }
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal'} \
+ xFilter {SELECT rowid, * FROM 'treal'} ]
+do_test vtab1-3.9 {
+ set echo_module ""
+ execsql {
+ SELECT * FROM t1 WHERE b = 5;
+ }
+} {4 5 6}
+do_test vtab1-3.10 {
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal' WHERE b = ?} \
+ xFilter {SELECT rowid, * FROM 'treal' WHERE b = ?} 5 ]
+do_test vtab1-3.10 {
+ set echo_module ""
+ execsql {
+ SELECT * FROM t1 WHERE b >= 5 AND b <= 10;
+ }
+} {4 5 6}
+do_test vtab1-3.11 {
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal' WHERE b >= ? AND b <= ?} \
+ xFilter {SELECT rowid, * FROM 'treal' WHERE b >= ? AND b <= ?} 5 10 ]
+do_test vtab1-3.12 {
+ set echo_module ""
+ execsql {
+ SELECT * FROM t1 WHERE b BETWEEN 2 AND 10;
+ }
+} {1 2 3 4 5 6}
+do_test vtab1-3.13 {
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal' WHERE b >= ? AND b <= ?} \
+ xFilter {SELECT rowid, * FROM 'treal' WHERE b >= ? AND b <= ?} 2 10 ]
+
+# Add a function for the MATCH operator. Everything always matches!
+#proc test_match {lhs rhs} {
+# lappend ::echo_module MATCH $lhs $rhs
+# return 1
+#}
+#db function match test_match
+
+set echo_module ""
+do_test vtab1-3.12 {
+ set echo_module ""
+ catchsql {
+ SELECT * FROM t1 WHERE a MATCH 'string';
+ }
+} {1 {unable to use function MATCH in the requested context}}
+do_test vtab1-3.13 {
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal'} \
+ xFilter {SELECT rowid, * FROM 'treal'}]
+do_test vtab1-3.14 {
+ set echo_module ""
+ execsql {
+ SELECT * FROM t1 WHERE b MATCH 'string';
+ }
+} {}
+do_test vtab1-3.15 {
+ set echo_module
+} [list xBestIndex \
+ {SELECT rowid, * FROM 'treal' WHERE b LIKE (SELECT '%'||?||'%')} \
+ xFilter \
+ {SELECT rowid, * FROM 'treal' WHERE b LIKE (SELECT '%'||?||'%')} \
+ string ]
+
+#----------------------------------------------------------------------
+# Test cases vtab1-4.* test table scans and the echo module's
+# xBestIndex/xFilter handling of ORDER BY clauses.
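+#
+# When cksort (defined below) reports "nosort", the echo module's
+# xBestIndex() has claimed the ORDER BY for itself, presumably by setting
+# the orderByConsumed flag in sqlite3_index_info, so SQLite adds no
+# sorting pass of its own.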
+
+# This procedure executes the SQL. Then it checks to see if the OP_Sort
+# opcode was executed. If an OP_Sort did occur, then "sort" is appended
+# to the result. If no OP_Sort happened, then "nosort" is appended.
+#
+# This procedure is used to check to make sure sorting is or is not
+# occurring as expected.
+#
+proc cksort {sql} {
+ set ::sqlite_sort_count 0
+ set data [execsql $sql]
+ if {$::sqlite_sort_count} {set x sort} {set x nosort}
+ lappend data $x
+ return $data
+}
+
+do_test vtab1-4.1 {
+ set echo_module ""
+ cksort {
+ SELECT b FROM t1 ORDER BY b;
+ }
+} {2 5 nosort}
+do_test vtab1-4.2 {
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal' ORDER BY b ASC} \
+ xFilter {SELECT rowid, * FROM 'treal' ORDER BY b ASC} ]
+do_test vtab1-4.3 {
+ set echo_module ""
+ cksort {
+ SELECT b FROM t1 ORDER BY b DESC;
+ }
+} {5 2 nosort}
+do_test vtab1-4.4 {
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal' ORDER BY b DESC} \
+ xFilter {SELECT rowid, * FROM 'treal' ORDER BY b DESC} ]
+do_test vtab1-4.3 {
+ set echo_module ""
+ cksort {
+ SELECT b FROM t1 ORDER BY b||'';
+ }
+} {2 5 sort}
+do_test vtab1-4.4 {
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal'} \
+ xFilter {SELECT rowid, * FROM 'treal'} ]
+
+execsql {
+ DROP TABLE t1;
+ DROP TABLE treal;
+}
+
+#----------------------------------------------------------------------
+# Test cases vtab1-5 test SELECT queries that include joins on virtual
+# tables.
+
+proc filter {log} {
+ set out [list]
+ for {set ii 0} {$ii < [llength $log]} {incr ii} {
+ if {[lindex $log $ii] eq "xFilter"} {
+ lappend out xFilter
+ lappend out [lindex $log [expr $ii+1]]
+ }
+ }
+ return $out
+}
+
+do_test vtab1-5-1 {
+ execsql {
+ CREATE TABLE t1(a, b, c);
+ CREATE TABLE t2(d, e, f);
+ INSERT INTO t1 VALUES(1, 'red', 'green');
+ INSERT INTO t1 VALUES(2, 'blue', 'black');
+ INSERT INTO t2 VALUES(1, 'spades', 'clubs');
+ INSERT INTO t2 VALUES(2, 'hearts', 'diamonds');
+ CREATE VIRTUAL TABLE et1 USING echo(t1);
+ CREATE VIRTUAL TABLE et2 USING echo(t2);
+ }
+} {}
+
+do_test vtab1-5-2 {
+ set echo_module ""
+ execsql {
+ SELECT * FROM et1, et2;
+ }
+} [list \
+ 1 red green 1 spades clubs \
+ 1 red green 2 hearts diamonds \
+ 2 blue black 1 spades clubs \
+ 2 blue black 2 hearts diamonds \
+]
+do_test vtab1-5-3 {
+ filter $echo_module
+} [list \
+ xFilter {SELECT rowid, * FROM 't1'} \
+ xFilter {SELECT rowid, * FROM 't2'} \
+ xFilter {SELECT rowid, * FROM 't2'} \
+]
+do_test vtab1-5-4 {
+ set echo_module ""
+ execsql {
+ SELECT * FROM et1, et2 WHERE et2.d = 2;
+ }
+} [list \
+ 1 red green 2 hearts diamonds \
+ 2 blue black 2 hearts diamonds \
+]
+do_test vtab1-5-5 {
+ filter $echo_module
+} [list \
+ xFilter {SELECT rowid, * FROM 't1'} \
+ xFilter {SELECT rowid, * FROM 't2'} \
+ xFilter {SELECT rowid, * FROM 't2'} \
+]
+do_test vtab1-5-6 {
+ execsql {
+ CREATE INDEX i1 ON t2(d);
+ }
+
+ db close
+ sqlite3 db test.db
+ register_echo_module [sqlite3_connection_pointer db]
+
+ set echo_module ""
+ execsql {
+ SELECT * FROM et1, et2 WHERE et2.d = 2;
+ }
+} [list \
+ 1 red green 2 hearts diamonds \
+ 2 blue black 2 hearts diamonds \
+]
+do_test vtab1-5-7 {
+ filter $echo_module
+} [list \
+ xFilter {SELECT rowid, * FROM 't2' WHERE d = ?} \
+ xFilter {SELECT rowid, * FROM 't1'} \
+]
+
+execsql {
+ DROP TABLE t1;
+ DROP TABLE t2;
+ DROP TABLE et1;
+ DROP TABLE et2;
+}
+
+#----------------------------------------------------------------------
+# Test cases vtab1-6 test INSERT, UPDATE and DELETE operations
+# on virtual tables.
+do_test vtab1-6-1 {
+ execsql { SELECT sql FROM sqlite_master }
+} {}
+do_test vtab1-6-2 {
+ execsql {
+ CREATE TABLE treal(a PRIMARY KEY, b, c);
+ CREATE VIRTUAL TABLE techo USING echo(treal);
+ SELECT name FROM sqlite_master WHERE type = 'table';
+ }
+} {treal techo}
+do_test vtab1-6-3 {
+ execsql {
+ INSERT INTO techo VALUES(1, 2, 3);
+ SELECT * FROM techo;
+ }
+} {1 2 3}
+do_test vtab1-6-4 {
+ execsql {
+ UPDATE techo SET a = 5;
+ SELECT * FROM techo;
+ }
+} {5 2 3}
+
+do_test vtab1-6-5 {
+ execsql {
+ UPDATE techo set a = a||b||c;
+ SELECT * FROM techo;
+ }
+} {523 2 3}
+
+do_test vtab1-6-6 {
+ execsql {
+ UPDATE techo set rowid = 10;
+ SELECT rowid FROM techo;
+ }
+} {10}
+
+do_test vtab1-6-7 {
+ execsql {
+ DELETE FROM techo;
+ SELECT * FROM techo;
+ }
+} {}
+
+
+file delete -force test2.db
+file delete -force test2.db-journal
+sqlite3 db2 test2.db
+execsql {
+ CREATE TABLE techo(a PRIMARY KEY, b, c);
+} db2
+proc check_echo_table {tn} {
+ set ::data1 [execsql {SELECT rowid, * FROM techo}]
+ set ::data2 [execsql {SELECT rowid, * FROM techo} db2]
+ do_test $tn {
+ string equal $::data1 $::data2
+ } 1
+}
+set tn 0
+foreach stmt [list \
+ {INSERT INTO techo VALUES('abc', 'def', 'ghi')} \
+ {INSERT INTO techo SELECT a||'.'||rowid, b, c FROM techo} \
+ {INSERT INTO techo SELECT a||'x'||rowid, b, c FROM techo} \
+ {INSERT INTO techo SELECT a||'y'||rowid, b, c FROM techo} \
+ {DELETE FROM techo WHERE (oid % 3) = 0} \
+ {UPDATE techo set rowid = 100 WHERE rowid = 1} \
+ {INSERT INTO techo(a, b) VALUES('hello', 'world')} \
+ {DELETE FROM techo} \
+] {
+ execsql $stmt
+ execsql $stmt db2
+ check_echo_table vtab1-6.8.[incr tn]
+}
+
+db2 close
+
+
+
+#----------------------------------------------------------------------
+# Test cases vtab1-7 test that the value returned by
+# sqlite3_last_insert_rowid() is set correctly when rows are inserted
+# into virtual tables.
+do_test vtab1.7-1 {
+ execsql {
+ CREATE TABLE real_abc(a PRIMARY KEY, b, c);
+ CREATE VIRTUAL TABLE echo_abc USING echo(real_abc);
+ }
+} {}
+do_test vtab1.7-2 {
+ execsql {
+ INSERT INTO echo_abc VALUES(1, 2, 3);
+ SELECT last_insert_rowid();
+ }
+} {1}
+do_test vtab1.7-3 {
+ execsql {
+ INSERT INTO echo_abc(rowid) VALUES(31427);
+ SELECT last_insert_rowid();
+ }
+} {31427}
+do_test vtab1.7-4 {
+ execsql {
+ INSERT INTO echo_abc SELECT a||'.v2', b, c FROM echo_abc;
+ SELECT last_insert_rowid();
+ }
+} {31429}
+do_test vtab1.7-5 {
+ execsql {
+ SELECT rowid, a, b, c FROM echo_abc
+ }
+} [list 1 1 2 3 \
+ 31427 {} {} {} \
+ 31428 1.v2 2 3 \
+ 31429 {} {} {} \
+]
+
+# Now test that DELETE and UPDATE operations do not modify the value
+# returned by last_insert_rowid().
+do_test vtab1.7-6 {
+ execsql {
+ UPDATE echo_abc SET c = 5 WHERE b = 2;
+ SELECT last_insert_rowid();
+ }
+} {31429}
+do_test vtab1.7-7 {
+ execsql {
+ UPDATE echo_abc SET rowid = 5 WHERE rowid = 1;
+ SELECT last_insert_rowid();
+ }
+} {31429}
+do_test vtab1.7-8 {
+ execsql {
+ DELETE FROM echo_abc WHERE b = 2;
+ SELECT last_insert_rowid();
+ }
+} {31429}
+do_test vtab1.7-9 {
+ execsql {
+ SELECT rowid, a, b, c FROM echo_abc
+ }
+} [list 31427 {} {} {} \
+ 31429 {} {} {} \
+]
+do_test vtab1.7-10 {
+ execsql {
+ DELETE FROM echo_abc WHERE b = 2;
+ SELECT last_insert_rowid();
+ }
+} {31429}
+do_test vtab1.7-11 {
+ execsql {
+ SELECT rowid, a, b, c FROM real_abc
+ }
+} [list 31427 {} {} {} \
+ 31429 {} {} {} \
+]
+do_test vtab1.7-12 {
+ execsql {
+ DELETE FROM echo_abc;
+ SELECT last_insert_rowid();
+ }
+} {31429}
+do_test vtab1.7-13 {
+ execsql {
+ SELECT rowid, a, b, c FROM real_abc
+ }
+} {}
+
+do_test vtab1.8-1 {
+ set echo_module ""
+ execsql {
+ ATTACH 'test2.db' AS aux;
+ CREATE VIRTUAL TABLE aux.e2 USING echo(real_abc);
+ }
+ set echo_module
+} [list xCreate echo aux e2 real_abc \
+ xSync echo(real_abc) \
+ xCommit echo(real_abc) \
+]
+do_test vtab1.8-2 {
+ execsql {
+ DROP TABLE aux.e2;
+ DROP TABLE treal;
+ DROP TABLE techo;
+ DROP TABLE echo_abc;
+ DROP TABLE real_abc;
+ }
+} {}
+
+do_test vtab1.9-1 {
+ set echo_module ""
+ execsql {
+ CREATE TABLE r(a, b, c);
+ CREATE VIRTUAL TABLE e USING echo(r, e_log);
+ SELECT name FROM sqlite_master;
+ }
+} {r e e_log}
+do_test vtab1.9-2 {
+ execsql {
+ DROP TABLE e;
+ SELECT name FROM sqlite_master;
+ }
+} {r}
+
+do_test vtab1.9-3 {
+ set echo_module ""
+ execsql {
+ CREATE VIRTUAL TABLE e USING echo(r, e_log, virtual 1 2 3 varchar(32));
+ }
+ set echo_module
+} [list \
+ xCreate echo main e r e_log {virtual 1 2 3 varchar(32)} \
+ xSync echo(r) \
+ xCommit echo(r) \
+]
+
+do_test vtab1.10-1 {
+ execsql {
+ CREATE TABLE del(d);
+ CREATE VIRTUAL TABLE e2 USING echo(del);
+ }
+ db close
+ sqlite3 db test.db
+ register_echo_module [sqlite3_connection_pointer db]
+ execsql {
+ DROP TABLE del;
+ }
+ catchsql {
+ SELECT * FROM e2;
+ }
+} {1 {vtable constructor failed: e2}}
+do_test vtab1.10-2 {
+ set rc [catch {
+ set ptr [sqlite3_connection_pointer db]
+ sqlite3_declare_vtab $ptr {CREATE TABLE abc(a, b, c)}
+ } msg]
+ list $rc $msg
+} {1 {library routine called out of sequence}}
+do_test vtab1.10-3 {
+ set ::echo_module_begin_fail r
+ catchsql {
+ INSERT INTO e VALUES(1, 2, 3);
+ }
+} {1 {SQL logic error or missing database}}
+do_test vtab1.10-4 {
+ catch {execsql {
+ EXPLAIN SELECT * FROM e WHERE rowid = 2;
+ EXPLAIN QUERY PLAN SELECT * FROM e WHERE rowid = 2 ORDER BY rowid;
+ }}
+} {0}
+
+do_test vtab1.10-5 {
+ set echo_module ""
+ execsql {
+ SELECT * FROM e WHERE rowid||'' MATCH 'pattern';
+ }
+ set echo_module
+} [list \
+ xBestIndex {SELECT rowid, * FROM 'r'} \
+ xFilter {SELECT rowid, * FROM 'r'} \
+]
+proc match_func {args} {return ""}
+do_test vtab1.10-6 {
+ set echo_module ""
+ db function match match_func
+ execsql {
+ SELECT * FROM e WHERE match('pattern', rowid, 'pattern2');
+ }
+ set echo_module
+} [list \
+ xBestIndex {SELECT rowid, * FROM 'r'} \
+ xFilter {SELECT rowid, * FROM 'r'} \
+]
+
+
+# Testing the xFindFunction interface
+#
+catch {rename ::echo_glob_overload {}}
+do_test vtab1.11-1 {
+ execsql {
+ INSERT INTO r(a,b,c) VALUES(1,'?',99);
+ INSERT INTO r(a,b,c) VALUES(2,3,99);
+ SELECT a GLOB b FROM e
+ }
+} {1 0}
+proc ::echo_glob_overload {a b} {
+ return [list $b $a]
+}
+do_test vtab1.11-2 {
+ execsql {
+ SELECT a like 'b' FROM e
+ }
+} {0 0}
+do_test vtab1.11-3 {
+ execsql {
+ SELECT a glob '2' FROM e
+ }
+} {{1 2} {2 2}}
+do_test vtab1.11-4 {
+ execsql {
+ SELECT glob('2',a) FROM e
+ }
+} {0 1}
+do_test vtab1.11-5 {
+ execsql {
+ SELECT glob(a,'2') FROM e
+ }
+} {{2 1} {2 2}}
+
+unset -nocomplain echo_module_begin_fail
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/vtab2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vtab2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,72 @@
+# 2006 June 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# $Id: vtab2.test,v 1.6 2006/08/13 19:04:19 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !vtab||!schema_pragmas {
+ finish_test
+ return
+}
+
+register_schema_module [sqlite3_connection_pointer db]
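+# The "schema" module exposes one row for each column of each table in
+# every attached database, with columns (database, tablename, cid, name,
+# type, not_null, dflt_value, pk), roughly PRAGMA table_info() flattened
+# across the whole schema.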
+do_test vtab2-1.1 {
+ execsql {
+ CREATE VIRTUAL TABLE schema USING schema;
+ SELECT * FROM schema;
+ }
+} [list \
+ main schema 0 database {} 0 {} 0 \
+ main schema 1 tablename {} 0 {} 0 \
+ main schema 2 cid {} 0 {} 0 \
+ main schema 3 name {} 0 {} 0 \
+ main schema 4 type {} 0 {} 0 \
+ main schema 5 not_null {} 0 {} 0 \
+ main schema 6 dflt_value {} 0 {} 0 \
+ main schema 7 pk {} 0 {} 0 \
+]
+
+register_tclvar_module [sqlite3_connection_pointer db]
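+# The "tclvar" module exposes global Tcl variables as a virtual table
+# with columns (name, arrayname, value); a Tcl array produces one row
+# per element, with the element key reported in arrayname.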
+do_test vtab2-2.1 {
+ set ::abc 123
+ execsql {
+ CREATE VIRTUAL TABLE vars USING tclvar;
+ SELECT * FROM vars WHERE name='abc';
+ }
+} [list abc "" 123]
+do_test vtab2-2.2 {
+ set A(1) 1
+ set A(2) 4
+ set A(3) 9
+ execsql {
+ SELECT * FROM vars WHERE name='A';
+ }
+} [list A 1 1 A 2 4 A 3 9]
+unset -nocomplain result
+unset -nocomplain var
+set result {}
+foreach var [lsort [info vars tcl_*]] {
+ catch {lappend result $var [set $var]}
+}
+do_test vtab2-2.3 {
+ execsql {
+ SELECT name, value FROM vars
+ WHERE name MATCH 'tcl_*' AND arrayname = ''
+ ORDER BY name;
+ }
+} $result
+unset result
+unset var
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/vtab3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vtab3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,142 @@
+# 2006 June 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is the authorisation callback and virtual tables.
+#
+# $Id: vtab3.test,v 1.2 2006/06/20 11:01:09 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !vtab||!auth {
+ finish_test
+ return
+}
+
+set ::auth_fail 0
+set ::auth_log [list]
+set ::auth_filter [list SQLITE_READ SQLITE_UPDATE SQLITE_SELECT SQLITE_PRAGMA]
+
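+# Authorizer callback used by the tests below.  Codes listed in
+# $::auth_filter are always allowed; every other invocation is appended
+# to $::auth_log, and the callback answers SQLITE_DENY on the
+# $::auth_fail'th such invocation (a value of 0 means never deny).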
+proc auth {code arg1 arg2 arg3 arg4} {
+ if {[lsearch $::auth_filter $code]>-1} {
+ return SQLITE_OK
+ }
+ lappend ::auth_log $code $arg1 $arg2 $arg3 $arg4
+ incr ::auth_fail -1
+ if {$::auth_fail == 0} {
+ return SQLITE_DENY
+ }
+ return SQLITE_OK
+}
+
+do_test vtab3-1.1 {
+ execsql {
+ CREATE TABLE elephant(
+ name VARCHAR(32),
+ color VARCHAR(16),
+ age INTEGER,
+ UNIQUE(name, color)
+ );
+ }
+} {}
+
+
+do_test vtab3-1.2 {
+ register_echo_module [sqlite3_connection_pointer db]
+ db authorizer ::auth
+ execsql {
+ CREATE VIRTUAL TABLE pachyderm USING echo(elephant);
+ }
+ set ::auth_log
+} [list \
+ SQLITE_INSERT sqlite_master {} main {} \
+ SQLITE_CREATE_VTABLE pachyderm echo main {} \
+]
+
+do_test vtab3-1.3 {
+ set ::auth_log [list]
+ execsql {
+ DROP TABLE pachyderm;
+ }
+ set ::auth_log
+} [list \
+ SQLITE_DELETE sqlite_master {} main {} \
+ SQLITE_DROP_VTABLE pachyderm echo main {} \
+ SQLITE_DELETE pachyderm {} main {} \
+ SQLITE_DELETE sqlite_master {} main {} \
+]
+
+do_test vtab3-1.4 {
+ set ::auth_fail 1
+ catchsql {
+ CREATE VIRTUAL TABLE pachyderm USING echo(elephant);
+ }
+} {1 {not authorized}}
+do_test vtab3-1.5 {
+ execsql {
+ SELECT name FROM sqlite_master WHERE type = 'table';
+ }
+} {elephant}
+
+do_test vtab3-1.5 {
+ set ::auth_fail 2
+ catchsql {
+ CREATE VIRTUAL TABLE pachyderm USING echo(elephant);
+ }
+} {1 {not authorized}}
+do_test vtab3-1.6 {
+ execsql {
+ SELECT name FROM sqlite_master WHERE type = 'table';
+ }
+} {elephant}
+
+do_test vtab3-1.5 {
+ set ::auth_fail 3
+ catchsql {
+ CREATE VIRTUAL TABLE pachyderm USING echo(elephant);
+ }
+} {0 {}}
+do_test vtab3-1.6 {
+ execsql {
+ SELECT name FROM sqlite_master WHERE type = 'table';
+ }
+} {elephant pachyderm}
+
+foreach i [list 1 2 3 4] {
+ set ::auth_fail $i
+ do_test vtab3-1.7.$i.1 {
+ set rc [catch {
+ execsql {DROP TABLE pachyderm;}
+ } msg]
+ if {$msg eq "authorization denied"} {set msg "not authorized"}
+ list $rc $msg
+ } {1 {not authorized}}
+ do_test vtab3-1.7.$i.2 {
+ execsql {
+ SELECT name FROM sqlite_master WHERE type = 'table';
+ }
+ } {elephant pachyderm}
+}
+do_test vtab3-1.8.1 {
+ set ::auth_fail 0
+ catchsql {
+ DROP TABLE pachyderm;
+ }
+} {0 {}}
+do_test vtab3-1.8.2 {
+ execsql {
+ SELECT name FROM sqlite_master WHERE type = 'table';
+ }
+} {elephant}
+
+finish_test
+
+
Added: freeswitch/trunk/libs/sqlite/test/vtab4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vtab4.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,194 @@
+# 2006 June 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus is on testing the following virtual table methods:
+#
+# xBegin
+# xSync
+# xCommit
+# xRollback
+#
+# $Id: vtab4.test,v 1.2 2006/09/02 22:14:59 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+unset -nocomplain echo_module
+unset -nocomplain echo_module_sync_fail
+
+ifcapable !vtab {
+ finish_test
+ return
+}
+
+# Register the echo module
+db cache size 0
+register_echo_module [sqlite3_connection_pointer db]
+
+do_test vtab4-1.1 {
+ execsql {
+ CREATE TABLE treal(a PRIMARY KEY, b, c);
+ CREATE VIRTUAL TABLE techo USING echo(treal);
+ }
+} {}
+
+# Test an INSERT, UPDATE and DELETE statement on the virtual table
+# in an implicit transaction. Each should result in a single call
+# to xBegin, xSync and xCommit.
+#
+do_test vtab4-1.2 {
+ set echo_module [list]
+ execsql {
+ INSERT INTO techo VALUES(1, 2, 3);
+ }
+ set echo_module
+} {xBegin echo(treal) xSync echo(treal) xCommit echo(treal)}
+do_test vtab4-1.3 {
+ set echo_module [list]
+ execsql {
+ UPDATE techo SET a = 2;
+ }
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal'} \
+ xBegin echo(treal) \
+ xFilter {SELECT rowid, * FROM 'treal'} \
+ xSync echo(treal) \
+ xCommit echo(treal) \
+]
+do_test vtab4-1.4 {
+ set echo_module [list]
+ execsql {
+ DELETE FROM techo;
+ }
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal'} \
+ xBegin echo(treal) \
+ xFilter {SELECT rowid, * FROM 'treal'} \
+ xSync echo(treal) \
+ xCommit echo(treal) \
+]
+
+# Ensure xBegin is not called more than once in a single transaction.
+#
+do_test vtab4-2.1 {
+ set echo_module [list]
+ execsql {
+ BEGIN;
+ INSERT INTO techo VALUES(1, 2, 3);
+ INSERT INTO techo VALUES(4, 5, 6);
+ INSERT INTO techo VALUES(7, 8, 9);
+ COMMIT;
+ }
+ set echo_module
+} {xBegin echo(treal) xSync echo(treal) xCommit echo(treal)}
+
+# Try a transaction with two virtual tables.
+#
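+# Both tables should receive xBegin when they are first written, and at
+# COMMIT time xSync is expected on every participating table before any
+# of them receives xCommit, as the expected log below shows.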
+do_test vtab4-2.2 {
+ execsql {
+ CREATE TABLE sreal(a, b, c UNIQUE);
+ CREATE VIRTUAL TABLE secho USING echo(sreal);
+ }
+ set echo_module [list]
+ execsql {
+ BEGIN;
+ INSERT INTO secho SELECT * FROM techo;
+ DELETE FROM techo;
+ COMMIT;
+ }
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'treal'} \
+ xBegin echo(sreal) \
+ xFilter {SELECT rowid, * FROM 'treal'} \
+ xBestIndex {SELECT rowid, * FROM 'treal'} \
+ xBegin echo(treal) \
+ xFilter {SELECT rowid, * FROM 'treal'} \
+ xSync echo(sreal) \
+ xSync echo(treal) \
+ xCommit echo(sreal) \
+ xCommit echo(treal) \
+]
+do_test vtab4-2.3 {
+ execsql {
+ SELECT * FROM secho;
+ }
+} {1 2 3 4 5 6 7 8 9}
+do_test vtab4-2.4 {
+ execsql {
+ SELECT * FROM techo;
+ }
+} {}
+
+# Try an explicit ROLLBACK on a transaction with two open virtual tables.
+do_test vtab4-2.5 {
+ set echo_module [list]
+ execsql {
+ BEGIN;
+ INSERT INTO techo SELECT * FROM secho;
+ DELETE FROM secho;
+ ROLLBACK;
+ }
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'sreal'} \
+ xBegin echo(treal) \
+ xFilter {SELECT rowid, * FROM 'sreal'} \
+ xBestIndex {SELECT rowid, * FROM 'sreal'} \
+ xBegin echo(sreal) \
+ xFilter {SELECT rowid, * FROM 'sreal'} \
+ xRollback echo(treal) \
+ xRollback echo(sreal) \
+]
+do_test vtab4-2.6 {
+ execsql {
+ SELECT * FROM secho;
+ }
+} {1 2 3 4 5 6 7 8 9}
+do_test vtab4-2.7 {
+ execsql {
+ SELECT * FROM techo;
+ }
+} {}
+
+do_test vtab4-3.1 {
+ set echo_module [list]
+ set echo_module_sync_fail treal
+ catchsql {
+ INSERT INTO techo VALUES(1, 2, 3);
+ }
+} {1 {unknown error}}
+do_test vtab4-3.2 {
+ set echo_module
+} {xBegin echo(treal) xSync echo(treal) xRollback echo(treal)}
+
+breakpoint
+do_test vtab4-3.3 {
+ set echo_module [list]
+ set echo_module_sync_fail sreal
+ catchsql {
+ BEGIN;
+ INSERT INTO techo SELECT * FROM secho;
+ DELETE FROM secho;
+ COMMIT;
+ }
+ set echo_module
+} [list xBestIndex {SELECT rowid, * FROM 'sreal'} \
+ xBegin echo(treal) \
+ xFilter {SELECT rowid, * FROM 'sreal'} \
+ xBestIndex {SELECT rowid, * FROM 'sreal'} \
+ xBegin echo(sreal) \
+ xFilter {SELECT rowid, * FROM 'sreal'} \
+ xSync echo(treal) \
+ xSync echo(sreal) \
+ xRollback echo(treal) \
+ xRollback echo(sreal) \
+]
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/vtab5.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vtab5.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,152 @@
+# 2006 June 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# $Id: vtab5.test,v 1.6 2006/06/21 12:36:26 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !vtab {
+ finish_test
+ return
+}
+
+# The following tests - vtab5-1.* - ensure that an INSERT, DELETE or UPDATE
+# statement can be executed immediately after a CREATE or schema reload. The
+# point here is testing that the parser always calls xConnect() before the
+# schema of a virtual table is used.
+#
+register_echo_module [sqlite3_connection_pointer db]
+do_test vtab5-1.1 {
+ execsql {
+ CREATE TABLE treal(a VARCHAR(16), b INTEGER, c FLOAT);
+ INSERT INTO treal VALUES('a', 'b', 'c');
+ CREATE VIRTUAL TABLE techo USING echo(treal);
+ }
+} {}
+do_test vtab5.1.2 {
+ execsql {
+ SELECT * FROM techo;
+ }
+} {a b c}
+do_test vtab5.1.3 {
+ db close
+ sqlite3 db test.db
+ register_echo_module [sqlite3_connection_pointer db]
+ execsql {
+ INSERT INTO techo VALUES('c', 'd', 'e');
+ SELECT * FROM techo;
+ }
+} {a b c c d e}
+do_test vtab5.1.4 {
+ db close
+ sqlite3 db test.db
+ register_echo_module [sqlite3_connection_pointer db]
+ execsql {
+ UPDATE techo SET a = 10;
+ SELECT * FROM techo;
+ }
+} {10 b c 10 d e}
+do_test vtab5.1.5 {
+ db close
+ sqlite3 db test.db
+ register_echo_module [sqlite3_connection_pointer db]
+ execsql {
+ DELETE FROM techo WHERE b > 'c';
+ SELECT * FROM techo;
+ }
+} {10 b c}
+do_test vtab5.1.X {
+ execsql {
+ DROP TABLE techo;
+ DROP TABLE treal;
+ }
+} {}
+
+# The following tests - vtab5-2.* - ensure that collation sequences
+# assigned to virtual table columns via the "CREATE TABLE" statement
+# passed to sqlite3_declare_vtab() are used correctly.
+#
+do_test vtab5.2.1 {
+ execsql {
+ CREATE TABLE strings(str COLLATE NOCASE);
+ INSERT INTO strings VALUES('abc1');
+ INSERT INTO strings VALUES('Abc3');
+ INSERT INTO strings VALUES('ABc2');
+ INSERT INTO strings VALUES('aBc4');
+ SELECT str FROM strings ORDER BY 1;
+ }
+} {abc1 ABc2 Abc3 aBc4}
+do_test vtab5.2.2 {
+ execsql {
+ CREATE VIRTUAL TABLE echo_strings USING echo(strings);
+ SELECT str FROM echo_strings ORDER BY 1;
+ }
+} {abc1 ABc2 Abc3 aBc4}
+do_test vtab5.2.3 {
+ execsql {
+ SELECT str||'' FROM echo_strings ORDER BY 1;
+ }
+} {ABc2 Abc3 aBc4 abc1}
+
+# Test that it is impossible to create a trigger on a virtual table.
+#
+ifcapable trigger {
+ do_test vtab5.3.1 {
+ catchsql {
+ CREATE TRIGGER trig INSTEAD OF INSERT ON echo_strings BEGIN
+ SELECT 1, 2, 3;
+ END;
+ }
+ } {1 {cannot create triggers on virtual tables}}
+ do_test vtab5.3.2 {
+ catchsql {
+ CREATE TRIGGER trig AFTER INSERT ON echo_strings BEGIN
+ SELECT 1, 2, 3;
+ END;
+ }
+ } {1 {cannot create triggers on virtual tables}}
+  do_test vtab5.3.3 {
+ catchsql {
+ CREATE TRIGGER trig BEFORE INSERT ON echo_strings BEGIN
+ SELECT 1, 2, 3;
+ END;
+ }
+ } {1 {cannot create triggers on virtual tables}}
+}
+
+# Test that it is impossible to create an index on a virtual table.
+#
+do_test vtab5.4.1 {
+ catchsql {
+ CREATE INDEX echo_strings_i ON echo_strings(str);
+ }
+} {1 {virtual tables may not be indexed}}
+
+# Test that it is impossible to add a column to a virtual table.
+#
+do_test vtab5.4.2 {
+ catchsql {
+ ALTER TABLE echo_strings ADD COLUMN col2;
+ }
+} {1 {virtual tables may not be altered}}
+
+# Test that it is impossible to rename a virtual table.
+#
+do_test vtab5.4.3 {
+ catchsql {
+ ALTER TABLE echo_strings RENAME TO echo_strings2;
+ }
+} {1 {virtual tables may not be altered}}
+
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/vtab6.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vtab6.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,457 @@
+# 2002 May 24
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests for joins, including outer joins involving
+# virtual tables. The test cases in this file are copied from the file
+# join.test, and some of the comments still reflect that.
+#
+# $Id: vtab6.test,v 1.2 2006/06/28 18:18:10 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !vtab {
+ finish_test
+ return
+}
+
+register_echo_module [sqlite3_connection_pointer db]
+
+execsql {
+ CREATE TABLE real_t1(a,b,c);
+ CREATE TABLE real_t2(b,c,d);
+ CREATE TABLE real_t3(c,d,e);
+ CREATE TABLE real_t4(d,e,f);
+ CREATE TABLE real_t5(a INTEGER PRIMARY KEY);
+ CREATE TABLE real_t6(a INTEGER);
+ CREATE TABLE real_t7 (x, y);
+ CREATE TABLE real_t8 (a integer primary key, b);
+ CREATE TABLE real_t9(a INTEGER PRIMARY KEY, b);
+ CREATE TABLE real_t10(x INTEGER PRIMARY KEY, y);
+ CREATE TABLE real_t11(p INTEGER PRIMARY KEY, q);
+ CREATE TABLE real_t12(a,b);
+ CREATE TABLE real_t13(b,c);
+ CREATE TABLE real_t21(a,b,c);
+ CREATE TABLE real_t22(p,q);
+}
+foreach t [list t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t21 t22] {
+ execsql "CREATE VIRTUAL TABLE $t USING echo(real_$t)"
+}
+
+do_test vtab6-1.1 {
+ execsql {
+ INSERT INTO t1 VALUES(1,2,3);
+ INSERT INTO t1 VALUES(2,3,4);
+ INSERT INTO t1 VALUES(3,4,5);
+ SELECT * FROM t1;
+ }
+} {1 2 3 2 3 4 3 4 5}
+do_test vtab6-1.2 {
+ execsql {
+ INSERT INTO t2 VALUES(1,2,3);
+ INSERT INTO t2 VALUES(2,3,4);
+ INSERT INTO t2 VALUES(3,4,5);
+ SELECT * FROM t2;
+ }
+} {1 2 3 2 3 4 3 4 5}
+
+do_test vtab6-1.3 {
+ execsql2 {
+ SELECT * FROM t1 NATURAL JOIN t2;
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test vtab6-1.3.1 {
+ execsql2 {
+ SELECT * FROM t2 NATURAL JOIN t1;
+ }
+} {b 2 c 3 d 4 a 1 b 3 c 4 d 5 a 2}
+do_test vtab6-1.3.2 {
+ execsql2 {
+ SELECT * FROM t2 AS x NATURAL JOIN t1;
+ }
+} {b 2 c 3 d 4 a 1 b 3 c 4 d 5 a 2}
+do_test vtab6-1.3.3 {
+ execsql2 {
+ SELECT * FROM t2 NATURAL JOIN t1 AS y;
+ }
+} {b 2 c 3 d 4 a 1 b 3 c 4 d 5 a 2}
+do_test vtab6-1.3.4 {
+ execsql {
+ SELECT b FROM t1 NATURAL JOIN t2;
+ }
+} {2 3}
+do_test vtab6-1.4.1 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 USING(b,c);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test vtab6-1.4.2 {
+ execsql2 {
+ SELECT * FROM t1 AS x INNER JOIN t2 USING(b,c);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test vtab6-1.4.3 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 AS y USING(b,c);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test vtab6-1.4.4 {
+ execsql2 {
+ SELECT * FROM t1 AS x INNER JOIN t2 AS y USING(b,c);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+do_test vtab6-1.4.5 {
+ execsql {
+ SELECT b FROM t1 JOIN t2 USING(b);
+ }
+} {2 3}
+do_test vtab6-1.5 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 USING(b);
+ }
+} {a 1 b 2 c 3 c 3 d 4 a 2 b 3 c 4 c 4 d 5}
+do_test vtab6-1.6 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 USING(c);
+ }
+} {a 1 b 2 c 3 b 2 d 4 a 2 b 3 c 4 b 3 d 5}
+do_test vtab6-1.7 {
+ execsql2 {
+ SELECT * FROM t1 INNER JOIN t2 USING(c,b);
+ }
+} {a 1 b 2 c 3 d 4 a 2 b 3 c 4 d 5}
+
+do_test vtab6-1.8 {
+ execsql {
+ SELECT * FROM t1 NATURAL CROSS JOIN t2;
+ }
+} {1 2 3 4 2 3 4 5}
+do_test vtab6-1.9 {
+ execsql {
+ SELECT * FROM t1 CROSS JOIN t2 USING(b,c);
+ }
+} {1 2 3 4 2 3 4 5}
+do_test vtab6-1.10 {
+ execsql {
+ SELECT * FROM t1 NATURAL INNER JOIN t2;
+ }
+} {1 2 3 4 2 3 4 5}
+do_test vtab6-1.11 {
+ execsql {
+ SELECT * FROM t1 INNER JOIN t2 USING(b,c);
+ }
+} {1 2 3 4 2 3 4 5}
+do_test vtab6-1.12 {
+ execsql {
+ SELECT * FROM t1 natural inner join t2;
+ }
+} {1 2 3 4 2 3 4 5}
+
+ifcapable subquery {
+breakpoint
+ do_test vtab6-1.13 {
+ execsql2 {
+ SELECT * FROM t1 NATURAL JOIN
+ (SELECT b as 'c', c as 'd', d as 'e' FROM t2) as t3
+ }
+ } {a 1 b 2 c 3 d 4 e 5}
+ do_test vtab6-1.14 {
+ execsql2 {
+ SELECT * FROM (SELECT b as 'c', c as 'd', d as 'e' FROM t2) as 'tx'
+ NATURAL JOIN t1
+ }
+ } {c 3 d 4 e 5 a 1 b 2}
+}
+
+do_test vtab6-1.15 {
+ execsql {
+ INSERT INTO t3 VALUES(2,3,4);
+ INSERT INTO t3 VALUES(3,4,5);
+ INSERT INTO t3 VALUES(4,5,6);
+ SELECT * FROM t3;
+ }
+} {2 3 4 3 4 5 4 5 6}
+do_test vtab6-1.16 {
+ execsql {
+ SELECT * FROM t1 natural join t2 natural join t3;
+ }
+} {1 2 3 4 5 2 3 4 5 6}
+do_test vtab6-1.17 {
+ execsql2 {
+ SELECT * FROM t1 natural join t2 natural join t3;
+ }
+} {a 1 b 2 c 3 d 4 e 5 a 2 b 3 c 4 d 5 e 6}
+do_test vtab6-1.18 {
+ execsql {
+ INSERT INTO t4 VALUES(2,3,4);
+ INSERT INTO t4 VALUES(3,4,5);
+ INSERT INTO t4 VALUES(4,5,6);
+ SELECT * FROM t4;
+ }
+} {2 3 4 3 4 5 4 5 6}
+do_test vtab6-1.19.1 {
+ execsql {
+ SELECT * FROM t1 natural join t2 natural join t4;
+ }
+} {1 2 3 4 5 6}
+do_test vtab6-1.19.2 {
+ execsql2 {
+ SELECT * FROM t1 natural join t2 natural join t4;
+ }
+} {a 1 b 2 c 3 d 4 e 5 f 6}
+do_test vtab6-1.20 {
+ execsql {
+ SELECT * FROM t1 natural join t2 natural join t3 WHERE t1.a=1
+ }
+} {1 2 3 4 5}
+
+do_test vtab6-2.1 {
+ execsql {
+ SELECT * FROM t1 NATURAL LEFT JOIN t2;
+ }
+} {1 2 3 4 2 3 4 5 3 4 5 {}}
+do_test vtab6-2.2 {
+ execsql {
+ SELECT * FROM t2 NATURAL LEFT OUTER JOIN t1;
+ }
+} {1 2 3 {} 2 3 4 1 3 4 5 2}
+do_test vtab6-2.3 {
+ catchsql {
+ SELECT * FROM t1 NATURAL RIGHT OUTER JOIN t2;
+ }
+} {1 {RIGHT and FULL OUTER JOINs are not currently supported}}
+do_test vtab6-2.4 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.d
+ }
+} {1 2 3 {} {} {} 2 3 4 {} {} {} 3 4 5 1 2 3}
+do_test vtab6-2.5 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.d WHERE t1.a>1
+ }
+} {2 3 4 {} {} {} 3 4 5 1 2 3}
+do_test vtab6-2.6 {
+ execsql {
+ SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.d WHERE t2.b IS NULL OR t2.b>1
+ }
+} {1 2 3 {} {} {} 2 3 4 {} {} {}}
+
+do_test vtab6-3.1 {
+ catchsql {
+ SELECT * FROM t1 NATURAL JOIN t2 ON t1.a=t2.b;
+ }
+} {1 {a NATURAL join may not have an ON or USING clause}}
+do_test vtab6-3.2 {
+ catchsql {
+ SELECT * FROM t1 NATURAL JOIN t2 USING(b);
+ }
+} {1 {a NATURAL join may not have an ON or USING clause}}
+do_test vtab6-3.3 {
+ catchsql {
+ SELECT * FROM t1 JOIN t2 ON t1.a=t2.b USING(b);
+ }
+} {1 {cannot have both ON and USING clauses in the same join}}
+do_test vtab6-3.4 {
+ catchsql {
+ SELECT * FROM t1 JOIN t2 USING(a);
+ }
+} {1 {cannot join using column a - column not present in both tables}}
+do_test vtab6-3.5 {
+ catchsql {
+ SELECT * FROM t1 USING(a);
+ }
+} {0 {1 2 3 2 3 4 3 4 5}}
+do_test vtab6-3.6 {
+ catchsql {
+ SELECT * FROM t1 JOIN t2 ON t3.a=t2.b;
+ }
+} {1 {no such column: t3.a}}
+do_test vtab6-3.7 {
+ catchsql {
+ SELECT * FROM t1 INNER OUTER JOIN t2;
+ }
+} {1 {unknown or unsupported join type: INNER OUTER}}
+do_test vtab6-3.8 {
+ catchsql {
+ SELECT * FROM t1 LEFT BOGUS JOIN t2;
+ }
+} {1 {unknown or unsupported join type: LEFT BOGUS}}
+
+do_test vtab6-4.1 {
+ execsql {
+ BEGIN;
+ INSERT INTO t6 VALUES(NULL);
+ INSERT INTO t6 VALUES(NULL);
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ INSERT INTO t6 SELECT * FROM t6;
+ COMMIT;
+ }
+ execsql {
+ SELECT * FROM t6 NATURAL JOIN t5;
+ }
+} {}
+do_test vtab6-4.2 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a<t5.a;
+ }
+} {}
+do_test vtab6-4.3 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a>t5.a;
+ }
+} {}
+do_test vtab6-4.4 {
+ execsql {
+ UPDATE t6 SET a='xyz';
+ SELECT * FROM t6 NATURAL JOIN t5;
+ }
+} {}
+do_test vtab6-4.6 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a<t5.a;
+ }
+} {}
+do_test vtab6-4.7 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a>t5.a;
+ }
+} {}
+do_test vtab6-4.8 {
+ execsql {
+ UPDATE t6 SET a=1;
+ SELECT * FROM t6 NATURAL JOIN t5;
+ }
+} {}
+do_test vtab6-4.9 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a<t5.a;
+ }
+} {}
+do_test vtab6-4.10 {
+ execsql {
+ SELECT * FROM t6, t5 WHERE t6.a>t5.a;
+ }
+} {}
+
+# A test for ticket #247.
+#
+do_test vtab6-7.1 {
+ execsql {
+ INSERT INTO t7 VALUES ("pa1", 1);
+ INSERT INTO t7 VALUES ("pa2", NULL);
+ INSERT INTO t7 VALUES ("pa3", NULL);
+ INSERT INTO t7 VALUES ("pa4", 2);
+ INSERT INTO t7 VALUES ("pa30", 131);
+ INSERT INTO t7 VALUES ("pa31", 130);
+ INSERT INTO t7 VALUES ("pa28", NULL);
+
+ INSERT INTO t8 VALUES (1, "pa1");
+ INSERT INTO t8 VALUES (2, "pa4");
+ INSERT INTO t8 VALUES (3, NULL);
+ INSERT INTO t8 VALUES (4, NULL);
+ INSERT INTO t8 VALUES (130, "pa31");
+ INSERT INTO t8 VALUES (131, "pa30");
+
+ SELECT coalesce(t8.a,999) from t7 LEFT JOIN t8 on y=a;
+ }
+} {1 999 999 2 131 130 999}
+
+# Make sure a left join where the right table is really a view that
+# is itself a join works right. Ticket #306.
+#
+ifcapable view {
+do_test vtab6-8.1 {
+ execsql {
+ BEGIN;
+ INSERT INTO t9 VALUES(1,11);
+ INSERT INTO t9 VALUES(2,22);
+ INSERT INTO t10 VALUES(1,2);
+ INSERT INTO t10 VALUES(3,3);
+ INSERT INTO t11 VALUES(2,111);
+ INSERT INTO t11 VALUES(3,333);
+ CREATE VIEW v10_11 AS SELECT x, q FROM t10, t11 WHERE t10.y=t11.p;
+ COMMIT;
+ SELECT * FROM t9 LEFT JOIN v10_11 ON( a=x );
+ }
+} {1 11 1 111 2 22 {} {}}
+ifcapable subquery {
+ do_test vtab6-8.2 {
+ execsql {
+ SELECT * FROM t9 LEFT JOIN (SELECT x, q FROM t10, t11 WHERE t10.y=t11.p)
+ ON( a=x);
+ }
+ } {1 11 1 111 2 22 {} {}}
+}
+do_test vtab6-8.3 {
+ execsql {
+ SELECT * FROM v10_11 LEFT JOIN t9 ON( a=x );
+ }
+} {1 111 1 11 3 333 {} {}}
+} ;# ifcapable view
+
+# Ticket #350 describes a scenario where LEFT OUTER JOIN does not
+# function correctly if the right table in the join is really a
+# subquery.
+#
+# To test the problem, we generate the same LEFT OUTER JOIN in two
+# separate SELECTs, one using a subquery and the other referencing the
+# table directly. Then connect the two SELECTs using an EXCEPT.
+# Both queries should generate the same results so the answer should
+# be an empty set.
+#
+ifcapable compound {
+do_test vtab6-9.1 {
+ execsql {
+ BEGIN;
+ INSERT INTO t12 VALUES(1,11);
+ INSERT INTO t12 VALUES(2,22);
+ INSERT INTO t13 VALUES(22,222);
+ COMMIT;
+ }
+} {}
+
+ifcapable subquery {
+ do_test vtab6-9.1.1 {
+ execsql {
+ SELECT * FROM t12 NATURAL LEFT JOIN t13
+ EXCEPT
+ SELECT * FROM t12 NATURAL LEFT JOIN (SELECT * FROM t13 WHERE b>0);
+ }
+ } {}
+}
+ifcapable view {
+ do_test vtab6-9.2 {
+ execsql {
+ CREATE VIEW v13 AS SELECT * FROM t13 WHERE b>0;
+ SELECT * FROM t12 NATURAL LEFT JOIN t13
+ EXCEPT
+ SELECT * FROM t12 NATURAL LEFT JOIN v13;
+ }
+ } {}
+} ;# ifcapable view
+} ;# ifcapable compound
+
+ifcapable subquery {
+do_test vtab6-10.1 {
+ execsql {
+ CREATE INDEX i22 ON real_t22(q);
+ SELECT a FROM t21 LEFT JOIN t22 ON b=p WHERE q=
+ (SELECT max(m.q) FROM t22 m JOIN t21 n ON n.b=m.p WHERE n.c=1);
+ }
+} {}
+} ;# ifcapable subquery
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/vtab7.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vtab7.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,199 @@
+# 2006 July 25
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The focus
+# of this test is reading and writing to the database from within a
+# virtual table xSync() callback.
+#
+# $Id: vtab7.test,v 1.2 2006/07/26 16:22:16 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !vtab {
+ finish_test
+ return
+}
+
+# Register the echo module. Code inside the echo module appends elements
+# to the global tcl list variable ::echo_module whenever SQLite invokes
+# certain module callbacks. This includes the xSync(), xCommit() and
+# xRollback() callbacks. For each of these callbacks, two elements are
+# appended to ::echo_module, as follows:
+#
+# Module method Elements appended to ::echo_module
+# -------------------------------------------------------
+# xSync() xSync echo($tablename)
+# xCommit() xCommit echo($tablename)
+# xRollback() xRollback echo($tablename)
+# -------------------------------------------------------
+#
+# In each case, $tablename is replaced by the name of the real table (not
+# the echo table). By setting up a tcl trace on the ::echo_module variable,
+# code in this file arranges for a Tcl script to be executed from within
+# the echo module xSync() callback.
+#
+register_echo_module [sqlite3_connection_pointer db]
+trace add variable ::echo_module write echo_module_trace
+
+# This Tcl proc is invoked whenever the ::echo_module variable is written.
+#
+proc echo_module_trace {args} {
+ # Filter out writes to ::echo_module that are not xSync, xCommit or
+ # xRollback callbacks.
+ if {[llength $::echo_module] < 2} return
+ set x [lindex $::echo_module end-1]
+ if {$x ne "xSync" && $x ne "xCommit" && $x ne "xRollback"} return
+
+ regexp {^echo.(.*).$} [lindex $::echo_module end] dummy tablename
+ # puts "Ladies and gentlemen, an $x on $tablename!"
+
+ if {[info exists ::callbacks($x,$tablename)]} {
+ eval $::callbacks($x,$tablename)
+ }
+}
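+
+# As an illustrative sketch only (hypothetical, not one of the numbered
+# tests below): assuming a real table "abc" echoed by the virtual table
+# "abc2" (created in vtab7-1.1 below), a script can be attached to the
+# xSync() callback of "abc" like this:
+#
+#   set ::callbacks(xSync,abc) { puts "xSync fired on abc" }
+#   execsql { INSERT INTO abc2 VALUES(7, 8, 9) }
+#
+# The write to abc2 triggers xSync() on the real table, the trace on
+# ::echo_module fires, and echo_module_trace evaluates the registered
+# script from inside the callback.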
+
+# The following tests, vtab7-1.*, test that the trace callback on
+# ::echo_module is providing the expected tcl callbacks.
+do_test vtab7-1.1 {
+ execsql {
+ CREATE TABLE abc(a, b, c);
+ CREATE VIRTUAL TABLE abc2 USING echo(abc);
+ }
+} {}
+
+do_test vtab7-1.2 {
+ set ::callbacks(xSync,abc) {incr ::counter}
+ set ::counter 0
+ execsql {
+ INSERT INTO abc2 VALUES(1, 2, 3);
+ }
+ set ::counter
+} {1}
+
+# Write to an existing database table from within an xSync callback.
+do_test vtab7-2.1 {
+ set ::callbacks(xSync,abc) {
+ execsql {INSERT INTO log VALUES('xSync');}
+ }
+ execsql {
+ CREATE TABLE log(msg);
+ INSERT INTO abc2 VALUES(4, 5, 6);
+ SELECT * FROM log;
+ }
+} {xSync}
+do_test vtab7-2.3 {
+ execsql {
+ INSERT INTO abc2 VALUES(4, 5, 6);
+ SELECT * FROM log;
+ }
+} {xSync xSync}
+do_test vtab7-2.4 {
+ execsql {
+ INSERT INTO abc2 VALUES(4, 5, 6);
+ SELECT * FROM log;
+ }
+} {xSync xSync xSync}
+
+# Create a database table from within an xSync callback.
+do_test vtab7-2.5 {
+ set ::callbacks(xSync,abc) {
+ execsql { CREATE TABLE newtab(d, e, f); }
+ }
+ execsql {
+ INSERT INTO abc2 VALUES(1, 2, 3);
+ SELECT name FROM sqlite_master ORDER BY name;
+ }
+} {abc abc2 log newtab}
+
+# Drop a database table from within an xSync callback.
+do_test vtab7-2.6 {
+ set ::callbacks(xSync,abc) {
+ execsql { DROP TABLE newtab }
+ }
+ execsql {
+ INSERT INTO abc2 VALUES(1, 2, 3);
+ SELECT name FROM sqlite_master ORDER BY name;
+ }
+} {abc abc2 log}
+
+# Write to an attached database from xSync().
+do_test vtab7-3.1 {
+ file delete -force test2.db
+ file delete -force test2.db-journal
+ execsql {
+ ATTACH 'test2.db' AS db2;
+ CREATE TABLE db2.stuff(description, shape, color);
+ }
+ set ::callbacks(xSync,abc) {
+ execsql { INSERT INTO db2.stuff VALUES('abc', 'square', 'green'); }
+ }
+ execsql {
+ INSERT INTO abc2 VALUES(1, 2, 3);
+ SELECT * from stuff;
+ }
+} {abc square green}
+
+# UPDATE: The next test passes, but leaks memory. So leave it out.
+#
+# The following tests test that writing to the database from within
+# the xCommit callback causes a misuse error.
+# do_test vtab7-4.1 {
+# unset -nocomplain ::callbacks(xSync,abc)
+# set ::callbacks(xCommit,abc) {
+# execsql { INSERT INTO log VALUES('hello') }
+# }
+# catchsql {
+# INSERT INTO abc2 VALUES(1, 2, 3);
+# }
+# } {1 {library routine called out of sequence}}
+
+# These tests, vtab7-4.*, test that an SQLITE_LOCKED error is returned
+# if an attempt is made to write to a virtual module table or to create a
+# new virtual table from within an xSync() callback.
+do_test vtab7-4.1 {
+ execsql {
+ CREATE TABLE def(d, e, f);
+ CREATE VIRTUAL TABLE def2 USING echo(def);
+ }
+ set ::callbacks(xSync,abc) {
+ set ::error [catchsql { INSERT INTO def2 VALUES(1, 2, 3) }]
+ }
+ execsql {
+ INSERT INTO abc2 VALUES(1, 2, 3);
+ }
+ set ::error
+} {1 {database table is locked}}
+do_test vtab7-4.2 {
+ set ::callbacks(xSync,abc) {
+ set ::error [catchsql { CREATE VIRTUAL TABLE def3 USING echo(def) }]
+ }
+ execsql {
+ INSERT INTO abc2 VALUES(1, 2, 3);
+ }
+ set ::error
+} {1 {database table is locked}}
+
+do_test vtab7-4.3 {
+ set ::callbacks(xSync,abc) {
+ set ::error [catchsql { DROP TABLE def2 }]
+ }
+ execsql {
+ INSERT INTO abc2 VALUES(1, 2, 3);
+ SELECT name FROM sqlite_master ORDER BY name;
+ }
+ set ::error
+} {1 {database table is locked}}
+
+trace remove variable ::echo_module write echo_module_trace
+unset -nocomplain ::callbacks
+
+finish_test
+
Added: freeswitch/trunk/libs/sqlite/test/vtab9.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vtab9.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,50 @@
+# 2006 August 29
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is inserting into virtual tables from a SELECT
+# statement.
+#
+# $Id: vtab9.test,v 1.1 2006/08/29 18:46:14 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !vtab {
+ finish_test
+ return
+}
+
+do_test vtab9-1.1 {
+ register_echo_module [sqlite3_connection_pointer db]
+ execsql {
+ CREATE TABLE t0(a);
+ CREATE VIRTUAL TABLE t1 USING echo(t0);
+ INSERT INTO t1 SELECT 'hello';
+ SELECT rowid, * FROM t1;
+ }
+} {1 hello}
+
+do_test vtab9-1.2 {
+ execsql {
+ CREATE TABLE t2(a,b,c);
+ CREATE VIRTUAL TABLE t3 USING echo(t2);
+ CREATE TABLE d1(a,b,c);
+ INSERT INTO d1 VALUES(1,2,3);
+ INSERT INTO d1 VALUES('a','b','c');
+ INSERT INTO d1 VALUES(NULL,'x',123.456);
+ INSERT INTO d1 VALUES(x'6869',123456789,-12345);
+ INSERT INTO t3(a,b,c) SELECT * FROM d1;
+ SELECT rowid, * FROM t3;
+ }
+} {1 1 2 3 2 a b c 3 {} x 123.456 4 hi 123456789 -12345}
+
+unset -nocomplain echo_module_begin_fail
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/vtab_err.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/vtab_err.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,161 @@
+# 2006 June 10
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+#
+# $Id: vtab_err.test,v 1.3 2006/08/15 14:21:16 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+ifcapable !vtab {
+ finish_test
+ return
+}
+
+# Usage: do_malloc_test <test number> <options...>
+#
+# The first argument, <test number>, is an integer used to name the
+# tests executed by this proc. Options are as follows:
+#
+# -tclprep TCL script to run to prepare test.
+# -sqlprep SQL script to run to prepare test.
+# -tclbody TCL script to run with malloc failure simulation.
+#     -sqlbody        SQL script to run with malloc failure simulation.
+# -cleanup TCL script to run after the test.
+#
+# This command runs a series of tests to verify SQLite's ability
+# to handle an out-of-memory condition gracefully. It is assumed
+# that if this condition occurs a malloc() call will return a
+# NULL pointer. Linux, for example, doesn't do that by default. See
+# the "BUGS" section of malloc(3).
+#
+# On each iteration of the loop, the TCL commands in any argument passed
+# to the -tclbody switch, followed by the SQL commands in any argument
+# passed to the -sqlbody switch, are executed. On each iteration the
+# Nth call to sqliteMalloc() is made to fail, where N starts at 1 and
+# increases by one each time the loop runs. The loop ends once all
+# commands execute successfully.
+#
+proc do_malloc_test {tn args} {
+ array unset ::mallocopts
+ array set ::mallocopts $args
+
+ set ::go 1
+ for {set ::n 1} {$::go && $::n < 50000} {incr ::n} {
+ do_test $tn.$::n {
+
+      # Remove all traces of the database files test.db and test2.db from the
+      # file system. Then open the (empty) database "test.db" with the handle [db].
+ #
+ sqlite_malloc_fail 0
+ catch {db close}
+ catch {file delete -force test.db}
+ catch {file delete -force test.db-journal}
+ catch {file delete -force test2.db}
+ catch {file delete -force test2.db-journal}
+ catch {sqlite3 db test.db}
+ set ::DB [sqlite3_connection_pointer db]
+
+ # Execute any -tclprep and -sqlprep scripts.
+ #
+ if {[info exists ::mallocopts(-tclprep)]} {
+ eval $::mallocopts(-tclprep)
+ }
+ if {[info exists ::mallocopts(-sqlprep)]} {
+ execsql $::mallocopts(-sqlprep)
+ }
+
+ # Now set the ${::n}th malloc() to fail and execute the -tclbody and
+ # -sqlbody scripts.
+ #
+ sqlite_malloc_fail $::n
+ set ::mallocbody {}
+ if {[info exists ::mallocopts(-tclbody)]} {
+ append ::mallocbody "$::mallocopts(-tclbody)\n"
+ }
+ if {[info exists ::mallocopts(-sqlbody)]} {
+ append ::mallocbody "db eval {$::mallocopts(-sqlbody)}"
+ }
+ set v [catch $::mallocbody msg]
+
+ # If the test fails (if $v!=0) and the database connection actually
+ # exists, make sure the failure code is SQLITE_NOMEM.
+ if {$v&&[info command db]=="db"&&[info exists ::mallocopts(-sqlbody)]} {
+ if {[db errorcode]!=7 && $msg!="vtable constructor failed: e"} {
+ set v 999
+ }
+ }
+
+ set leftover [lindex [sqlite_malloc_stat] 2]
+ if {$leftover>0} {
+ if {$leftover>1} {puts "\nLeftover: $leftover\nReturn=$v Message=$msg"}
+ set ::go 0
+ if {$v} {
+ puts "\nError message returned: $msg"
+ } else {
+ set v {1 1}
+ }
+ } else {
+ set v2 [expr {
+ $msg == "" || $msg == "out of memory" ||
+ $msg == "vtable constructor failed: e"
+ }]
+ if {!$v2} {puts "\nError message returned: $msg"}
+ lappend v $v2
+ }
+ } {1 1}
+
+ if {[info exists ::mallocopts(-cleanup)]} {
+ catch [list uplevel #0 $::mallocopts(-cleanup)] msg
+ }
+ }
+ unset ::mallocopts
+}
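+
+# A minimal illustrative invocation (hypothetical; table "r0" and test
+# name "vtab_err-0" are not used elsewhere in this file) might look like:
+#
+#   do_malloc_test vtab_err-0 -sqlprep {
+#     CREATE TABLE r0(a, b, c);
+#   } -sqlbody {
+#     INSERT INTO r0 VALUES(1, 2, 3);
+#   }
+#
+# Each iteration re-creates an empty test.db, runs the -sqlprep SQL, then
+# arranges for the Nth malloc() to fail while the -sqlbody SQL executes.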
+
+unset -nocomplain echo_module_begin_fail
+do_ioerr_test vtab_err-1 -tclprep {
+ register_echo_module [sqlite3_connection_pointer db]
+} -sqlbody {
+ BEGIN;
+ CREATE TABLE r(a PRIMARY KEY, b, c);
+ CREATE VIRTUAL TABLE e USING echo(r);
+ INSERT INTO e VALUES(1, 2, 3);
+ INSERT INTO e VALUES('a', 'b', 'c');
+ UPDATE e SET c = 10;
+ DELETE FROM e WHERE a = 'a';
+ COMMIT;
+ BEGIN;
+ CREATE TABLE r2(a, b, c);
+ INSERT INTO r2 SELECT * FROM e;
+ INSERT INTO e SELECT a||'x', b, c FROM r2;
+ COMMIT;
+}
+
+
+do_malloc_test vtab_err-2 -tclprep {
+ register_echo_module [sqlite3_connection_pointer db]
+} -sqlbody {
+ BEGIN;
+ CREATE TABLE r(a PRIMARY KEY, b, c);
+ CREATE VIRTUAL TABLE e USING echo(r);
+ INSERT INTO e VALUES(1, 2, 3);
+ INSERT INTO e VALUES('a', 'b', 'c');
+ UPDATE e SET c = 10;
+ DELETE FROM e WHERE a = 'a';
+ COMMIT;
+ BEGIN;
+ CREATE TABLE r2(a, b, c);
+ INSERT INTO r2 SELECT * FROM e;
+ INSERT INTO e SELECT a||'x', b, c FROM r2;
+ COMMIT;
+}
+
+sqlite_malloc_fail 0
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/where.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/where.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,908 @@
+# 2001 September 15
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the use of indices in WHERE clauses.
+#
+# $Id: where.test,v 1.38 2005/11/14 22:29:06 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Build some test data
+#
+do_test where-1.0 {
+ execsql {
+ CREATE TABLE t1(w int, x int, y int);
+ CREATE TABLE t2(p int, q int, r int, s int);
+ }
+ for {set i 1} {$i<=100} {incr i} {
+ set w $i
+ set x [expr {int(log($i)/log(2))}]
+ set y [expr {$i*$i + 2*$i + 1}]
+ execsql "INSERT INTO t1 VALUES($w,$x,$y)"
+ }
+
+ ifcapable subquery {
+ execsql {
+ INSERT INTO t2 SELECT 101-w, x, (SELECT max(y) FROM t1)+1-y, y FROM t1;
+ }
+ } else {
+ set maxy [execsql {select max(y) from t1}]
+ execsql "
+ INSERT INTO t2 SELECT 101-w, x, $maxy+1-y, y FROM t1;
+ "
+ }
+
+ execsql {
+ CREATE INDEX i1w ON t1(w);
+ CREATE INDEX i1xy ON t1(x,y);
+ CREATE INDEX i2p ON t2(p);
+ CREATE INDEX i2r ON t2(r);
+ CREATE INDEX i2qs ON t2(q, s);
+ }
+} {}
+
+# Do an SQL statement. Append the search count to the end of the result.
+#
+proc count sql {
+ set ::sqlite_search_count 0
+ return [concat [execsql $sql] $::sqlite_search_count]
+}
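+
+# For example, [count {SELECT x, y FROM t1 WHERE w=10}] returns the query
+# result with the MoveTo/Next tally appended, e.g. {3 121 3} when the i1w
+# index is used (see where-1.1 below).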
+
+# Verify that queries use an index.  We are using the special variable
+# "sqlite_search_count" which tallies the number of executions of MoveTo
+# and Next operators in the VDBE.  By verifying that the search count is
+# small we can be assured that indices are being used properly.
+#
+do_test where-1.1 {
+ count {SELECT x, y FROM t1 WHERE w=10}
+} {3 121 3}
+do_test where-1.1.2 {
+ set sqlite_query_plan
+} {t1 i1w}
+do_test where-1.2 {
+ count {SELECT x, y FROM t1 WHERE w=11}
+} {3 144 3}
+do_test where-1.3 {
+ count {SELECT x, y FROM t1 WHERE 11=w}
+} {3 144 3}
+do_test where-1.4 {
+ count {SELECT x, y FROM t1 WHERE 11=w AND x>2}
+} {3 144 3}
+do_test where-1.4.2 {
+ set sqlite_query_plan
+} {t1 i1w}
+do_test where-1.5 {
+ count {SELECT x, y FROM t1 WHERE y<200 AND w=11 AND x>2}
+} {3 144 3}
+do_test where-1.5.2 {
+ set sqlite_query_plan
+} {t1 i1w}
+do_test where-1.6 {
+ count {SELECT x, y FROM t1 WHERE y<200 AND x>2 AND w=11}
+} {3 144 3}
+do_test where-1.7 {
+ count {SELECT x, y FROM t1 WHERE w=11 AND y<200 AND x>2}
+} {3 144 3}
+do_test where-1.8 {
+ count {SELECT x, y FROM t1 WHERE w>10 AND y=144 AND x=3}
+} {3 144 3}
+do_test where-1.8.2 {
+ set sqlite_query_plan
+} {t1 i1xy}
+do_test where-1.8.3 {
+ count {SELECT x, y FROM t1 WHERE y=144 AND x=3}
+ set sqlite_query_plan
+} {{} i1xy}
+do_test where-1.9 {
+ count {SELECT x, y FROM t1 WHERE y=144 AND w>10 AND x=3}
+} {3 144 3}
+do_test where-1.10 {
+ count {SELECT x, y FROM t1 WHERE x=3 AND w>=10 AND y=121}
+} {3 121 3}
+do_test where-1.11 {
+ count {SELECT x, y FROM t1 WHERE x=3 AND y=100 AND w<10}
+} {3 100 3}
+
+# New for SQLite version 2.1: Verify that inequality constraints
+# are used correctly.
+#
+do_test where-1.12 {
+ count {SELECT w FROM t1 WHERE x=3 AND y<100}
+} {8 3}
+do_test where-1.13 {
+ count {SELECT w FROM t1 WHERE x=3 AND 100>y}
+} {8 3}
+do_test where-1.14 {
+ count {SELECT w FROM t1 WHERE 3=x AND y<100}
+} {8 3}
+do_test where-1.15 {
+ count {SELECT w FROM t1 WHERE 3=x AND 100>y}
+} {8 3}
+do_test where-1.16 {
+ count {SELECT w FROM t1 WHERE x=3 AND y<=100}
+} {8 9 5}
+do_test where-1.17 {
+ count {SELECT w FROM t1 WHERE x=3 AND 100>=y}
+} {8 9 5}
+do_test where-1.18 {
+ count {SELECT w FROM t1 WHERE x=3 AND y>225}
+} {15 3}
+do_test where-1.19 {
+ count {SELECT w FROM t1 WHERE x=3 AND 225<y}
+} {15 3}
+do_test where-1.20 {
+ count {SELECT w FROM t1 WHERE x=3 AND y>=225}
+} {14 15 5}
+do_test where-1.21 {
+ count {SELECT w FROM t1 WHERE x=3 AND 225<=y}
+} {14 15 5}
+do_test where-1.22 {
+ count {SELECT w FROM t1 WHERE x=3 AND y>121 AND y<196}
+} {11 12 5}
+do_test where-1.23 {
+ count {SELECT w FROM t1 WHERE x=3 AND y>=121 AND y<=196}
+} {10 11 12 13 9}
+do_test where-1.24 {
+ count {SELECT w FROM t1 WHERE x=3 AND 121<y AND 196>y}
+} {11 12 5}
+do_test where-1.25 {
+ count {SELECT w FROM t1 WHERE x=3 AND 121<=y AND 196>=y}
+} {10 11 12 13 9}
+
+# Need to work on optimizing the BETWEEN operator.
+#
+# do_test where-1.26 {
+# count {SELECT w FROM t1 WHERE x=3 AND y BETWEEN 121 AND 196}
+# } {10 11 12 13 9}
+
+do_test where-1.27 {
+ count {SELECT w FROM t1 WHERE x=3 AND y+1==122}
+} {10 17}
+
+do_test where-1.28 {
+ count {SELECT w FROM t1 WHERE x+1=4 AND y+1==122}
+} {10 99}
+do_test where-1.29 {
+ count {SELECT w FROM t1 WHERE y==121}
+} {10 99}
+
+
+do_test where-1.30 {
+ count {SELECT w FROM t1 WHERE w>97}
+} {98 99 100 3}
+do_test where-1.31 {
+ count {SELECT w FROM t1 WHERE w>=97}
+} {97 98 99 100 4}
+do_test where-1.33 {
+ count {SELECT w FROM t1 WHERE w==97}
+} {97 2}
+do_test where-1.33.1 {
+ count {SELECT w FROM t1 WHERE w<=97 AND w==97}
+} {97 2}
+do_test where-1.33.2 {
+ count {SELECT w FROM t1 WHERE w<98 AND w==97}
+} {97 2}
+do_test where-1.33.3 {
+ count {SELECT w FROM t1 WHERE w>=97 AND w==97}
+} {97 2}
+do_test where-1.33.4 {
+ count {SELECT w FROM t1 WHERE w>96 AND w==97}
+} {97 2}
+do_test where-1.33.5 {
+ count {SELECT w FROM t1 WHERE w==97 AND w==97}
+} {97 2}
+do_test where-1.34 {
+ count {SELECT w FROM t1 WHERE w+1==98}
+} {97 99}
+do_test where-1.35 {
+ count {SELECT w FROM t1 WHERE w<3}
+} {1 2 2}
+do_test where-1.36 {
+ count {SELECT w FROM t1 WHERE w<=3}
+} {1 2 3 3}
+do_test where-1.37 {
+ count {SELECT w FROM t1 WHERE w+1<=4 ORDER BY w}
+} {1 2 3 99}
+
+do_test where-1.38 {
+ count {SELECT (w) FROM t1 WHERE (w)>(97)}
+} {98 99 100 3}
+do_test where-1.39 {
+ count {SELECT (w) FROM t1 WHERE (w)>=(97)}
+} {97 98 99 100 4}
+do_test where-1.40 {
+ count {SELECT (w) FROM t1 WHERE (w)==(97)}
+} {97 2}
+do_test where-1.41 {
+ count {SELECT (w) FROM t1 WHERE ((w)+(1))==(98)}
+} {97 99}
+
+
+# Do the same kind of thing except use a join as the data source.
+#
+do_test where-2.1 {
+ count {
+ SELECT w, p FROM t2, t1
+ WHERE x=q AND y=s AND r=8977
+ }
+} {34 67 6}
+do_test where-2.2 {
+ count {
+ SELECT w, p FROM t2, t1
+ WHERE x=q AND s=y AND r=8977
+ }
+} {34 67 6}
+do_test where-2.3 {
+ count {
+ SELECT w, p FROM t2, t1
+ WHERE x=q AND s=y AND r=8977 AND w>10
+ }
+} {34 67 6}
+do_test where-2.4 {
+ count {
+ SELECT w, p FROM t2, t1
+ WHERE p<80 AND x=q AND s=y AND r=8977 AND w>10
+ }
+} {34 67 6}
+do_test where-2.5 {
+ count {
+ SELECT w, p FROM t2, t1
+ WHERE p<80 AND x=q AND 8977=r AND s=y AND w>10
+ }
+} {34 67 6}
+do_test where-2.6 {
+ count {
+ SELECT w, p FROM t2, t1
+ WHERE x=q AND p=77 AND s=y AND w>5
+ }
+} {24 77 6}
+do_test where-2.7 {
+ count {
+ SELECT w, p FROM t1, t2
+ WHERE x=q AND p>77 AND s=y AND w=5
+ }
+} {5 96 6}
+
+# Let's do a 3-way join.
+#
+do_test where-3.1 {
+ count {
+ SELECT A.w, B.p, C.w FROM t1 as A, t2 as B, t1 as C
+ WHERE C.w=101-B.p AND B.r=10202-A.y AND A.w=11
+ }
+} {11 90 11 8}
+do_test where-3.2 {
+ count {
+ SELECT A.w, B.p, C.w FROM t1 as A, t2 as B, t1 as C
+ WHERE C.w=101-B.p AND B.r=10202-A.y AND A.w=12
+ }
+} {12 89 12 8}
+do_test where-3.3 {
+ count {
+ SELECT A.w, B.p, C.w FROM t1 as A, t2 as B, t1 as C
+ WHERE A.w=15 AND B.p=C.w AND B.r=10202-A.y
+ }
+} {15 86 86 8}
+
+# Test to see that the special case of a constant WHERE clause is
+# handled.
+#
+do_test where-4.1 {
+ count {
+ SELECT * FROM t1 WHERE 0
+ }
+} {0}
+do_test where-4.2 {
+ count {
+ SELECT * FROM t1 WHERE 1 LIMIT 1
+ }
+} {1 0 4 0}
+do_test where-4.3 {
+ execsql {
+ SELECT 99 WHERE 0
+ }
+} {}
+do_test where-4.4 {
+ execsql {
+ SELECT 99 WHERE 1
+ }
+} {99}
+do_test where-4.5 {
+ execsql {
+ SELECT 99 WHERE 0.1
+ }
+} {99}
+do_test where-4.6 {
+ execsql {
+ SELECT 99 WHERE 0.0
+ }
+} {}
+
+# Verify that IN operators in a WHERE clause are handled correctly.
+# Omit these tests if the build is not capable of sub-queries.
+#
+ifcapable subquery {
+ do_test where-5.1 {
+ count {
+ SELECT * FROM t1 WHERE rowid IN (1,2,3,1234) order by 1;
+ }
+ } {1 0 4 2 1 9 3 1 16 4}
+ do_test where-5.2 {
+ count {
+ SELECT * FROM t1 WHERE rowid+0 IN (1,2,3,1234) order by 1;
+ }
+ } {1 0 4 2 1 9 3 1 16 199}
+ do_test where-5.3 {
+ count {
+ SELECT * FROM t1 WHERE w IN (-1,1,2,3) order by 1;
+ }
+ } {1 0 4 2 1 9 3 1 16 14}
+ do_test where-5.4 {
+ count {
+ SELECT * FROM t1 WHERE w+0 IN (-1,1,2,3) order by 1;
+ }
+ } {1 0 4 2 1 9 3 1 16 199}
+ do_test where-5.5 {
+ count {
+ SELECT * FROM t1 WHERE rowid IN
+ (select rowid from t1 where rowid IN (-1,2,4))
+ ORDER BY 1;
+ }
+ } {2 1 9 4 2 25 3}
+ do_test where-5.6 {
+ count {
+ SELECT * FROM t1 WHERE rowid+0 IN
+ (select rowid from t1 where rowid IN (-1,2,4))
+ ORDER BY 1;
+ }
+ } {2 1 9 4 2 25 201}
+ do_test where-5.7 {
+ count {
+ SELECT * FROM t1 WHERE w IN
+ (select rowid from t1 where rowid IN (-1,2,4))
+ ORDER BY 1;
+ }
+ } {2 1 9 4 2 25 9}
+ do_test where-5.8 {
+ count {
+ SELECT * FROM t1 WHERE w+0 IN
+ (select rowid from t1 where rowid IN (-1,2,4))
+ ORDER BY 1;
+ }
+ } {2 1 9 4 2 25 201}
+ do_test where-5.9 {
+ count {
+ SELECT * FROM t1 WHERE x IN (1,7) ORDER BY 1;
+ }
+ } {2 1 9 3 1 16 7}
+ do_test where-5.10 {
+ count {
+ SELECT * FROM t1 WHERE x+0 IN (1,7) ORDER BY 1;
+ }
+ } {2 1 9 3 1 16 199}
+ do_test where-5.11 {
+ count {
+ SELECT * FROM t1 WHERE y IN (6400,8100) ORDER BY 1;
+ }
+ } {79 6 6400 89 6 8100 199}
+ do_test where-5.12 {
+ count {
+ SELECT * FROM t1 WHERE x=6 AND y IN (6400,8100) ORDER BY 1;
+ }
+ } {79 6 6400 89 6 8100 7}
+ do_test where-5.13 {
+ count {
+ SELECT * FROM t1 WHERE x IN (1,7) AND y NOT IN (6400,8100) ORDER BY 1;
+ }
+ } {2 1 9 3 1 16 7}
+ do_test where-5.14 {
+ count {
+ SELECT * FROM t1 WHERE x IN (1,7) AND y IN (9,10) ORDER BY 1;
+ }
+ } {2 1 9 8}
+ do_test where-5.15 {
+ count {
+ SELECT * FROM t1 WHERE x IN (1,7) AND y IN (9,16) ORDER BY 1;
+ }
+ } {2 1 9 3 1 16 11}
+}
+
+# This procedure executes the SQL. Then it checks to see if the OP_Sort
+# opcode was executed. If an OP_Sort did occur, then "sort" is appended
+# to the result. If no OP_Sort happened, then "nosort" is appended.
+#
+# This procedure is used to check to make sure sorting is or is not
+# occurring as expected.
+#
+proc cksort {sql} {
+ set ::sqlite_sort_count 0
+ set data [execsql $sql]
+ if {$::sqlite_sort_count} {set x sort} {set x nosort}
+ lappend data $x
+ return $data
+}
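+
+# For example, [cksort {SELECT * FROM t3 ORDER BY a LIMIT 3}] returns the
+# three matching rows followed by "nosort" when the ORDER BY is satisfied
+# by an index, or "sort" when an OP_Sort was required (compare where-6.2
+# and where-6.3 below).
+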
+# Check out the logic that attempts to implement the ORDER BY clause
+# using an index rather than by sorting.
+#
+do_test where-6.1 {
+ execsql {
+ CREATE TABLE t3(a,b,c);
+ CREATE INDEX t3a ON t3(a);
+ CREATE INDEX t3bc ON t3(b,c);
+ CREATE INDEX t3acb ON t3(a,c,b);
+ INSERT INTO t3 SELECT w, 101-w, y FROM t1;
+ SELECT count(*), sum(a), sum(b), sum(c) FROM t3;
+ }
+} {100 5050 5050 348550}
+do_test where-6.2 {
+ cksort {
+ SELECT * FROM t3 ORDER BY a LIMIT 3
+ }
+} {1 100 4 2 99 9 3 98 16 nosort}
+do_test where-6.3 {
+ cksort {
+ SELECT * FROM t3 ORDER BY a+1 LIMIT 3
+ }
+} {1 100 4 2 99 9 3 98 16 sort}
+do_test where-6.4 {
+ cksort {
+ SELECT * FROM t3 WHERE a<10 ORDER BY a LIMIT 3
+ }
+} {1 100 4 2 99 9 3 98 16 nosort}
+do_test where-6.5 {
+ cksort {
+ SELECT * FROM t3 WHERE a>0 AND a<10 ORDER BY a LIMIT 3
+ }
+} {1 100 4 2 99 9 3 98 16 nosort}
+do_test where-6.6 {
+ cksort {
+ SELECT * FROM t3 WHERE a>0 ORDER BY a LIMIT 3
+ }
+} {1 100 4 2 99 9 3 98 16 nosort}
+do_test where-6.7 {
+ cksort {
+ SELECT * FROM t3 WHERE b>0 ORDER BY a LIMIT 3
+ }
+} {1 100 4 2 99 9 3 98 16 nosort}
+ifcapable subquery {
+ do_test where-6.8 {
+ cksort {
+ SELECT * FROM t3 WHERE a IN (3,5,7,1,9,4,2) ORDER BY a LIMIT 3
+ }
+ } {1 100 4 2 99 9 3 98 16 sort}
+}
+do_test where-6.9.1 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY a LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.9.1.1 {
+ cksort {
+ SELECT * FROM t3 WHERE a>=1 AND a=1 AND c>0 ORDER BY a LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.9.1.2 {
+ cksort {
+ SELECT * FROM t3 WHERE a<2 AND a=1 AND c>0 ORDER BY a LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.9.2 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY a,c LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.9.3 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY c LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.9.4 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY a DESC LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.9.5 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY a DESC, c DESC LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.9.6 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY c DESC LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.9.7 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY c,a LIMIT 3
+ }
+} {1 100 4 sort}
+do_test where-6.9.8 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY a DESC, c ASC LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.9.9 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY a ASC, c DESC LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.10 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY a LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.11 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY a,c LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.12 {
+ cksort {
+ SELECT * FROM t3 WHERE a=1 AND c>0 ORDER BY a,c,b LIMIT 3
+ }
+} {1 100 4 nosort}
+do_test where-6.13 {
+ cksort {
+ SELECT * FROM t3 WHERE a>0 ORDER BY a DESC LIMIT 3
+ }
+} {100 1 10201 99 2 10000 98 3 9801 nosort}
+do_test where-6.13.1 {
+ cksort {
+ SELECT * FROM t3 WHERE a>0 ORDER BY -a LIMIT 3
+ }
+} {100 1 10201 99 2 10000 98 3 9801 sort}
+do_test where-6.14 {
+ cksort {
+ SELECT * FROM t3 ORDER BY b LIMIT 3
+ }
+} {100 1 10201 99 2 10000 98 3 9801 nosort}
+do_test where-6.15 {
+ cksort {
+ SELECT t3.a, t1.x FROM t3, t1 WHERE t3.a=t1.w ORDER BY t3.a LIMIT 3
+ }
+} {1 0 2 1 3 1 nosort}
+do_test where-6.16 {
+ cksort {
+ SELECT t3.a, t1.x FROM t3, t1 WHERE t3.a=t1.w ORDER BY t1.x, t3.a LIMIT 3
+ }
+} {1 0 2 1 3 1 sort}
+do_test where-6.19 {
+ cksort {
+ SELECT y FROM t1 ORDER BY w LIMIT 3;
+ }
+} {4 9 16 nosort}
+do_test where-6.20 {
+ cksort {
+ SELECT y FROM t1 ORDER BY rowid LIMIT 3;
+ }
+} {4 9 16 nosort}
+do_test where-6.21 {
+ cksort {
+ SELECT y FROM t1 ORDER BY rowid, y LIMIT 3;
+ }
+} {4 9 16 sort}
+do_test where-6.22 {
+ cksort {
+ SELECT y FROM t1 ORDER BY rowid, y DESC LIMIT 3;
+ }
+} {4 9 16 sort}
+do_test where-6.23 {
+ cksort {
+ SELECT y FROM t1 WHERE y>4 ORDER BY rowid, w, x LIMIT 3;
+ }
+} {9 16 25 sort}
+do_test where-6.24 {
+ cksort {
+ SELECT y FROM t1 WHERE y>=9 ORDER BY rowid, x DESC, w LIMIT 3;
+ }
+} {9 16 25 sort}
+do_test where-6.25 {
+ cksort {
+ SELECT y FROM t1 WHERE y>4 AND y<25 ORDER BY rowid;
+ }
+} {9 16 nosort}
+do_test where-6.26 {
+ cksort {
+ SELECT y FROM t1 WHERE y>=4 AND y<=25 ORDER BY oid;
+ }
+} {4 9 16 25 nosort}
+do_test where-6.27 {
+ cksort {
+ SELECT y FROM t1 WHERE y<=25 ORDER BY _rowid_, w+y;
+ }
+} {4 9 16 25 sort}
+
+
+# Tests for reverse-order sorting.
+#
+do_test where-7.1 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 ORDER BY y;
+ }
+} {8 9 10 11 12 13 14 15 nosort}
+do_test where-7.2 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 ORDER BY y DESC;
+ }
+} {15 14 13 12 11 10 9 8 nosort}
+do_test where-7.3 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>100 ORDER BY y LIMIT 3;
+ }
+} {10 11 12 nosort}
+do_test where-7.4 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>100 ORDER BY y DESC LIMIT 3;
+ }
+} {15 14 13 nosort}
+do_test where-7.5 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>121 ORDER BY y DESC;
+ }
+} {15 14 13 12 11 nosort}
+do_test where-7.6 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>=121 ORDER BY y DESC;
+ }
+} {15 14 13 12 11 10 nosort}
+do_test where-7.7 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>=121 AND y<196 ORDER BY y DESC;
+ }
+} {12 11 10 nosort}
+do_test where-7.8 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>=121 AND y<=196 ORDER BY y DESC;
+ }
+} {13 12 11 10 nosort}
+do_test where-7.9 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>121 AND y<=196 ORDER BY y DESC;
+ }
+} {13 12 11 nosort}
+do_test where-7.10 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>100 AND y<196 ORDER BY y DESC;
+ }
+} {12 11 10 nosort}
+do_test where-7.11 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>=121 AND y<196 ORDER BY y;
+ }
+} {10 11 12 nosort}
+do_test where-7.12 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>=121 AND y<=196 ORDER BY y;
+ }
+} {10 11 12 13 nosort}
+do_test where-7.13 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>121 AND y<=196 ORDER BY y;
+ }
+} {11 12 13 nosort}
+do_test where-7.14 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>100 AND y<196 ORDER BY y;
+ }
+} {10 11 12 nosort}
+do_test where-7.15 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y<81 ORDER BY y;
+ }
+} {nosort}
+do_test where-7.16 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y<=81 ORDER BY y;
+ }
+} {8 nosort}
+do_test where-7.17 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>256 ORDER BY y;
+ }
+} {nosort}
+do_test where-7.18 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>=256 ORDER BY y;
+ }
+} {15 nosort}
+do_test where-7.19 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y<81 ORDER BY y DESC;
+ }
+} {nosort}
+do_test where-7.20 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y<=81 ORDER BY y DESC;
+ }
+} {8 nosort}
+do_test where-7.21 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>256 ORDER BY y DESC;
+ }
+} {nosort}
+do_test where-7.22 {
+ cksort {
+ SELECT w FROM t1 WHERE x=3 AND y>=256 ORDER BY y DESC;
+ }
+} {15 nosort}
+do_test where-7.23 {
+ cksort {
+ SELECT w FROM t1 WHERE x=0 AND y<4 ORDER BY y;
+ }
+} {nosort}
+do_test where-7.24 {
+ cksort {
+ SELECT w FROM t1 WHERE x=0 AND y<=4 ORDER BY y;
+ }
+} {1 nosort}
+do_test where-7.25 {
+ cksort {
+ SELECT w FROM t1 WHERE x=6 AND y>10201 ORDER BY y;
+ }
+} {nosort}
+do_test where-7.26 {
+ cksort {
+ SELECT w FROM t1 WHERE x=6 AND y>=10201 ORDER BY y;
+ }
+} {100 nosort}
+do_test where-7.27 {
+ cksort {
+ SELECT w FROM t1 WHERE x=0 AND y<4 ORDER BY y DESC;
+ }
+} {nosort}
+do_test where-7.28 {
+ cksort {
+ SELECT w FROM t1 WHERE x=0 AND y<=4 ORDER BY y DESC;
+ }
+} {1 nosort}
+do_test where-7.29 {
+ cksort {
+ SELECT w FROM t1 WHERE x=6 AND y>10201 ORDER BY y DESC;
+ }
+} {nosort}
+do_test where-7.30 {
+ cksort {
+ SELECT w FROM t1 WHERE x=6 AND y>=10201 ORDER BY y DESC;
+ }
+} {100 nosort}
+do_test where-7.31 {
+ cksort {
+ SELECT y FROM t1 ORDER BY rowid DESC LIMIT 3
+ }
+} {10201 10000 9801 nosort}
+do_test where-7.32 {
+ cksort {
+ SELECT y FROM t1 WHERE y<25 ORDER BY rowid DESC
+ }
+} {16 9 4 nosort}
+do_test where-7.33 {
+ cksort {
+ SELECT y FROM t1 WHERE y<=25 ORDER BY rowid DESC
+ }
+} {25 16 9 4 nosort}
+do_test where-7.34 {
+ cksort {
+ SELECT y FROM t1 WHERE y<25 AND y>4 ORDER BY rowid DESC, y DESC
+ }
+} {16 9 sort}
+do_test where-7.35 {
+ cksort {
+ SELECT y FROM t1 WHERE y<25 AND y>=4 ORDER BY rowid DESC
+ }
+} {16 9 4 nosort}
+
+do_test where-8.1 {
+ execsql {
+ CREATE TABLE t4 AS SELECT * FROM t1;
+ CREATE INDEX i4xy ON t4(x,y);
+ }
+ cksort {
+ SELECT w FROM t4 WHERE x=4 and y<1000 ORDER BY y DESC limit 3;
+ }
+} {30 29 28 nosort}
+do_test where-8.2 {
+ execsql {
+ DELETE FROM t4;
+ }
+ cksort {
+ SELECT w FROM t4 WHERE x=4 and y<1000 ORDER BY y DESC limit 3;
+ }
+} {nosort}
+
+# Make sure searches with an index work with an empty table.
+#
+do_test where-9.1 {
+ execsql {
+ CREATE TABLE t5(x PRIMARY KEY);
+ SELECT * FROM t5 WHERE x<10;
+ }
+} {}
+do_test where-9.2 {
+ execsql {
+ SELECT * FROM t5 WHERE x<10 ORDER BY x DESC;
+ }
+} {}
+do_test where-9.3 {
+ execsql {
+ SELECT * FROM t5 WHERE x=10;
+ }
+} {}
+
+do_test where-10.1 {
+ execsql {
+ SELECT 1 WHERE abs(random())<0
+ }
+} {}
+do_test where-10.2 {
+ proc tclvar_func {vname} {return [set ::$vname]}
+ db function tclvar tclvar_func
+ set ::v1 0
+ execsql {
+ SELECT count(*) FROM t1 WHERE tclvar('v1');
+ }
+} {0}
+do_test where-10.3 {
+ set ::v1 1
+ execsql {
+ SELECT count(*) FROM t1 WHERE tclvar('v1');
+ }
+} {100}
+do_test where-10.4 {
+ set ::v1 1
+ proc tclvar_func {vname} {
+ upvar #0 $vname v
+ set v [expr {!$v}]
+ return $v
+ }
+ execsql {
+ SELECT count(*) FROM t1 WHERE tclvar('v1');
+ }
+} {50}
+
+# Ticket #1376. The query below was causing a segfault.
+# The problem was the age-old error of calling realloc() on an
+# array while there are still pointers to individual elements of
+# that array.
+#
+do_test where-11.1 {
+btree_breakpoint
+ execsql {
+ CREATE TABLE t99(Dte INT, X INT);
+ DELETE FROM t99 WHERE (Dte = 2451337) OR (Dte = 2451339) OR
+ (Dte BETWEEN 2451345 AND 2451347) OR (Dte = 2451351) OR
+ (Dte BETWEEN 2451355 AND 2451356) OR (Dte = 2451358) OR
+ (Dte = 2451362) OR (Dte = 2451365) OR (Dte = 2451367) OR
+ (Dte BETWEEN 2451372 AND 2451376) OR (Dte BETWEEN 2451382 AND 2451384) OR
+ (Dte = 2451387) OR (Dte BETWEEN 2451389 AND 2451391) OR
+ (Dte BETWEEN 2451393 AND 2451395) OR (Dte = 2451400) OR
+ (Dte = 2451402) OR (Dte = 2451404) OR (Dte BETWEEN 2451416 AND 2451418) OR
+ (Dte = 2451422) OR (Dte = 2451426) OR (Dte BETWEEN 2451445 AND 2451446) OR
+ (Dte = 2451456) OR (Dte = 2451458) OR (Dte BETWEEN 2451465 AND 2451467) OR
+ (Dte BETWEEN 2451469 AND 2451471) OR (Dte = 2451474) OR
+ (Dte BETWEEN 2451477 AND 2451501) OR (Dte BETWEEN 2451503 AND 2451509) OR
+ (Dte BETWEEN 2451511 AND 2451514) OR (Dte BETWEEN 2451518 AND 2451521) OR
+ (Dte BETWEEN 2451523 AND 2451531) OR (Dte BETWEEN 2451533 AND 2451537) OR
+ (Dte BETWEEN 2451539 AND 2451544) OR (Dte BETWEEN 2451546 AND 2451551) OR
+ (Dte BETWEEN 2451553 AND 2451555) OR (Dte = 2451557) OR
+ (Dte BETWEEN 2451559 AND 2451561) OR (Dte = 2451563) OR
+ (Dte BETWEEN 2451565 AND 2451566) OR (Dte BETWEEN 2451569 AND 2451571) OR
+ (Dte = 2451573) OR (Dte = 2451575) OR (Dte = 2451577) OR (Dte = 2451581) OR
+ (Dte BETWEEN 2451583 AND 2451586) OR (Dte BETWEEN 2451588 AND 2451592) OR
+ (Dte BETWEEN 2451596 AND 2451598) OR (Dte = 2451600) OR
+ (Dte BETWEEN 2451602 AND 2451603) OR (Dte = 2451606) OR (Dte = 2451611);
+ }
+} {}
+
+
+integrity_check {where-99.0}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/where2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/where2.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,454 @@
+# 2005 July 28
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the use of indices in WHERE clauses
+# based on recent changes to the optimizer.
+#
+# $Id: where2.test,v 1.9 2006/05/11 13:26:26 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Build some test data
+#
+do_test where2-1.0 {
+ execsql {
+ BEGIN;
+ CREATE TABLE t1(w int, x int, y int, z int);
+ }
+ for {set i 1} {$i<=100} {incr i} {
+ set w $i
+ set x [expr {int(log($i)/log(2))}]
+ set y [expr {$i*$i + 2*$i + 1}]
+ set z [expr {$x+$y}]
+ ifcapable tclvar {
+ execsql {INSERT INTO t1 VALUES($::w,$::x,$::y,$::z)}
+ } else {
+ execsql {INSERT INTO t1 VALUES(:w,:x,:y,:z)}
+ }
+ }
+ execsql {
+ CREATE UNIQUE INDEX i1w ON t1(w);
+ CREATE INDEX i1xy ON t1(x,y);
+ CREATE INDEX i1zyx ON t1(z,y,x);
+ COMMIT;
+ }
+} {}
+
+# Do an SQL statement. Append the search count to the end of the result.
+#
+proc count sql {
+ set ::sqlite_search_count 0
+ return [concat [execsql $sql] $::sqlite_search_count]
+}
+
+# This procedure executes the SQL. Then it checks to see if the OP_Sort
+# opcode was executed. If an OP_Sort did occur, then "sort" is appended
+# to the result. If no OP_Sort happened, then "nosort" is appended.
+#
+# This procedure is used to check to make sure sorting is or is not
+# occurring as expected.
+#
+proc cksort {sql} {
+ set ::sqlite_sort_count 0
+ set data [execsql $sql]
+ if {$::sqlite_sort_count} {set x sort} {set x nosort}
+ lappend data $x
+ return $data
+}
+
+# This procedure executes the SQL. Then it appends to the result the
+# "sort" or "nosort" keyword (as in the cksort procedure above) then
+# it appends the ::sqlite_query_plan variable.
+#
+proc queryplan {sql} {
+ set ::sqlite_sort_count 0
+ set data [execsql $sql]
+ if {$::sqlite_sort_count} {set x sort} {set x nosort}
+ lappend data $x
+ return [concat $data $::sqlite_query_plan]
+}
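+
+# For example, [queryplan {SELECT * FROM t1 WHERE w=85 AND x=6 AND y=7396}]
+# returns the matching row, then "nosort" or "sort", and then the
+# table/index pairs from ::sqlite_query_plan, e.g.
+# {85 6 7396 7402 nosort t1 i1w} (see where2-1.1 below).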
+
+
+# Prefer a UNIQUE index over another index.
+#
+do_test where2-1.1 {
+ queryplan {
+ SELECT * FROM t1 WHERE w=85 AND x=6 AND y=7396
+ }
+} {85 6 7396 7402 nosort t1 i1w}
+
+# Always prefer a rowid== constraint over any other index.
+#
+do_test where2-1.3 {
+ queryplan {
+ SELECT * FROM t1 WHERE w=85 AND x=6 AND y=7396 AND rowid=85
+ }
+} {85 6 7396 7402 nosort t1 *}
+
+# When constrained by a UNIQUE index, the ORDER BY clause is always ignored.
+#
+do_test where2-2.1 {
+ queryplan {
+ SELECT * FROM t1 WHERE w=85 ORDER BY random(5);
+ }
+} {85 6 7396 7402 nosort t1 i1w}
+do_test where2-2.2 {
+ queryplan {
+ SELECT * FROM t1 WHERE x=6 AND y=7396 ORDER BY random(5);
+ }
+} {85 6 7396 7402 sort t1 i1xy}
+do_test where2-2.3 {
+ queryplan {
+ SELECT * FROM t1 WHERE rowid=85 AND x=6 AND y=7396 ORDER BY random(5);
+ }
+} {85 6 7396 7402 nosort t1 *}
+
+
+# Efficient handling of forward and reverse table scans.
+#
+do_test where2-3.1 {
+ queryplan {
+ SELECT * FROM t1 ORDER BY rowid LIMIT 2
+ }
+} {1 0 4 4 2 1 9 10 nosort t1 *}
+do_test where2-3.2 {
+ queryplan {
+ SELECT * FROM t1 ORDER BY rowid DESC LIMIT 2
+ }
+} {100 6 10201 10207 99 6 10000 10006 nosort t1 *}
+
+# The IN operator can be used by indices at multiple layers
+#
+ifcapable subquery {
+ do_test where2-4.1 {
+ queryplan {
+ SELECT * FROM t1 WHERE z IN (10207,10006) AND y IN (10000,10201)
+ AND x>0 AND x<10
+ ORDER BY w
+ }
+ } {99 6 10000 10006 100 6 10201 10207 sort t1 i1zyx}
+ do_test where2-4.2 {
+ queryplan {
+ SELECT * FROM t1 WHERE z IN (10207,10006) AND y=10000
+ AND x>0 AND x<10
+ ORDER BY w
+ }
+ } {99 6 10000 10006 sort t1 i1zyx}
+ do_test where2-4.3 {
+ queryplan {
+ SELECT * FROM t1 WHERE z=10006 AND y IN (10000,10201)
+ AND x>0 AND x<10
+ ORDER BY w
+ }
+ } {99 6 10000 10006 sort t1 i1zyx}
+ ifcapable compound {
+ do_test where2-4.4 {
+ queryplan {
+ SELECT * FROM t1 WHERE z IN (SELECT 10207 UNION SELECT 10006)
+ AND y IN (10000,10201)
+ AND x>0 AND x<10
+ ORDER BY w
+ }
+ } {99 6 10000 10006 100 6 10201 10207 sort t1 i1zyx}
+ do_test where2-4.5 {
+ queryplan {
+ SELECT * FROM t1 WHERE z IN (SELECT 10207 UNION SELECT 10006)
+ AND y IN (SELECT 10000 UNION SELECT 10201)
+ AND x>0 AND x<10
+ ORDER BY w
+ }
+ } {99 6 10000 10006 100 6 10201 10207 sort t1 i1zyx}
+ }
+ do_test where2-4.6 {
+ queryplan {
+ SELECT * FROM t1
+ WHERE x IN (1,2,3,4,5,6,7,8)
+ AND y IN (10000,10001,10002,10003,10004,10005)
+ ORDER BY 2
+ }
+ } {99 6 10000 10006 sort t1 i1xy}
+
+  # Duplicate entries on the RHS of an IN operator do not cause duplicate
+ # output rows.
+ #
+  do_test where2-4.6b {
+ queryplan {
+ SELECT * FROM t1 WHERE z IN (10207,10006,10006,10207)
+ ORDER BY w
+ }
+ } {99 6 10000 10006 100 6 10201 10207 sort t1 i1zyx}
+ ifcapable compound {
+ do_test where2-4.7 {
+ queryplan {
+ SELECT * FROM t1 WHERE z IN (
+ SELECT 10207 UNION ALL SELECT 10006
+ UNION ALL SELECT 10006 UNION ALL SELECT 10207)
+ ORDER BY w
+ }
+ } {99 6 10000 10006 100 6 10201 10207 sort t1 i1zyx}
+ }
+
+} ;# ifcapable subquery
+
+# The use of an IN operator disables the index as a sorter.
+#
+do_test where2-5.1 {
+ queryplan {
+ SELECT * FROM t1 WHERE w=99 ORDER BY w
+ }
+} {99 6 10000 10006 nosort t1 i1w}
+
+ifcapable subquery {
+ do_test where2-5.2 {
+ queryplan {
+ SELECT * FROM t1 WHERE w IN (99) ORDER BY w
+ }
+ } {99 6 10000 10006 sort t1 i1w}
+}
+
+# Verify that OR clauses get translated into IN operators.
+#
+set ::idx {}
+ifcapable subquery {set ::idx i1w}
+do_test where2-6.1 {
+ queryplan {
+ SELECT * FROM t1 WHERE w=99 OR w=100 ORDER BY +w
+ }
+} [list 99 6 10000 10006 100 6 10201 10207 sort t1 $::idx]
+do_test where2-6.2 {
+ queryplan {
+ SELECT * FROM t1 WHERE w=99 OR w=100 OR 6=w ORDER BY +w
+ }
+} [list 6 2 49 51 99 6 10000 10006 100 6 10201 10207 sort t1 $::idx]
+
+do_test where2-6.3 {
+ queryplan {
+ SELECT * FROM t1 WHERE w=99 OR w=100 OR 6=+w ORDER BY +w
+ }
+} {6 2 49 51 99 6 10000 10006 100 6 10201 10207 sort t1 {}}
+do_test where2-6.4 {
+ queryplan {
+ SELECT * FROM t1 WHERE w=99 OR +w=100 OR 6=w ORDER BY +w
+ }
+} {6 2 49 51 99 6 10000 10006 100 6 10201 10207 sort t1 {}}
+
+set ::idx {}
+ifcapable subquery {set ::idx i1zyx}
+do_test where2-6.5 {
+ queryplan {
+ SELECT b.* FROM t1 a, t1 b
+ WHERE a.w=1 AND (a.y=b.z OR b.z=10)
+ ORDER BY +b.w
+ }
+} [list 1 0 4 4 2 1 9 10 sort a i1w b $::idx]
+do_test where2-6.6 {
+ queryplan {
+ SELECT b.* FROM t1 a, t1 b
+ WHERE a.w=1 AND (b.z=10 OR a.y=b.z OR b.z=10)
+ ORDER BY +b.w
+ }
+} [list 1 0 4 4 2 1 9 10 sort a i1w b $::idx]
+
+# Unique queries (queries that are guaranteed to return only a single
+# result row) do not call the sorter.  But all tables must give
+# a unique result. If any one table in the join does not give a unique
+# result then sorting is necessary.
+#
+do_test where2-7.1 {
+ cksort {
+ create table t8(a unique, b, c);
+ insert into t8 values(1,2,3);
+ insert into t8 values(2,3,4);
+ create table t9(x,y);
+ insert into t9 values(2,4);
+ insert into t9 values(2,3);
+ select y from t8, t9 where a=1 order by a, y;
+ }
+} {3 4 sort}
+do_test where2-7.2 {
+ cksort {
+ select * from t8 where a=1 order by b, c
+ }
+} {1 2 3 nosort}
+do_test where2-7.3 {
+ cksort {
+ select * from t8, t9 where a=1 and y=3 order by b, x
+ }
+} {1 2 3 2 3 sort}
+do_test where2-7.4 {
+ cksort {
+ create unique index i9y on t9(y);
+ select * from t8, t9 where a=1 and y=3 order by b, x
+ }
+} {1 2 3 2 3 nosort}
+
+# Ticket #1807.  Using IN constraints on multiple columns of
+# a multi-column index.
+#
+ifcapable subquery {
+ do_test where2-8.1 {
+ execsql {
+ SELECT * FROM t1 WHERE x IN (20,21) AND y IN (1,2)
+ }
+ } {}
+ do_test where2-8.2 {
+ execsql {
+ SELECT * FROM t1 WHERE x IN (1,2) AND y IN (-5,-6)
+ }
+ } {}
+ execsql {CREATE TABLE tx AS SELECT * FROM t1}
+ do_test where2-8.3 {
+ execsql {
+ SELECT w FROM t1
+ WHERE x IN (SELECT x FROM tx WHERE rowid<0)
+ AND +y IN (SELECT y FROM tx WHERE rowid=1)
+ }
+ } {}
+ do_test where2-8.4 {
+ execsql {
+ SELECT w FROM t1
+ WHERE x IN (SELECT x FROM tx WHERE rowid=1)
+ AND y IN (SELECT y FROM tx WHERE rowid<0)
+ }
+ } {}
+ #set sqlite_where_trace 1
+ do_test where2-8.5 {
+ execsql {
+ CREATE INDEX tx_xyz ON tx(x, y, z, w);
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 12 AND 14)
+ }
+ } {12 13 14}
+ do_test where2-8.6 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 12 AND 14)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 10 AND 20)
+ }
+ } {12 13 14}
+ do_test where2-8.7 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 12 AND 14)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 10 AND 20)
+ }
+ } {10 11 12 13 14 15}
+ do_test where2-8.8 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 10 AND 20)
+ }
+ } {10 11 12 13 14 15 16 17 18 19 20}
+ do_test where2-8.9 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 2 AND 4)
+ }
+ } {}
+ do_test where2-8.10 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 2 AND 4)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 10 AND 20)
+ }
+ } {}
+ do_test where2-8.11 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 2 AND 4)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 10 AND 20)
+ }
+ } {}
+ do_test where2-8.12 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN -4 AND -2)
+ }
+ } {}
+ do_test where2-8.13 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN -4 AND -2)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 10 AND 20)
+ }
+ } {}
+ do_test where2-8.14 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN -4 AND -2)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 10 AND 20)
+ }
+ } {}
+ do_test where2-8.15 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 200 AND 300)
+ }
+ } {}
+ do_test where2-8.16 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 200 AND 300)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 10 AND 20)
+ }
+ } {}
+ do_test where2-8.17 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE w BETWEEN 200 AND 300)
+ AND y IN (SELECT y FROM t1 WHERE w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE w BETWEEN 10 AND 20)
+ }
+ } {}
+ do_test where2-8.18 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE +w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE +w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE +w BETWEEN 200 AND 300)
+ }
+ } {}
+ do_test where2-8.19 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE +w BETWEEN 10 AND 20)
+ AND y IN (SELECT y FROM t1 WHERE +w BETWEEN 200 AND 300)
+ AND z IN (SELECT z FROM t1 WHERE +w BETWEEN 10 AND 20)
+ }
+ } {}
+ do_test where2-8.20 {
+ execsql {
+ SELECT w FROM tx
+ WHERE x IN (SELECT x FROM t1 WHERE +w BETWEEN 200 AND 300)
+ AND y IN (SELECT y FROM t1 WHERE +w BETWEEN 10 AND 20)
+ AND z IN (SELECT z FROM t1 WHERE +w BETWEEN 10 AND 20)
+ }
+ } {}
+}
+finish_test
Added: freeswitch/trunk/libs/sqlite/test/where3.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/where3.test Tue Dec 19 15:11:50 2006
@@ -0,0 +1,81 @@
+# 2006 January 31
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library. The
+# focus of this file is testing the join reordering optimization
+# in cases that include a LEFT JOIN.
+#
+# $Id: where3.test,v 1.2 2006/06/06 11:45:55 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# The following is from ticket #1652.
+#
+# A comma join then a left outer join: A,B left join C.
+# Arrange indices so that the B table is chosen to go first.
+# Also put an index on C, but make sure that A is chosen before C.
+#
+do_test where3-1.1 {
+ execsql {
+ CREATE TABLE t1(a, b);
+ CREATE TABLE t2(p, q);
+ CREATE TABLE t3(x, y);
+
+ INSERT INTO t1 VALUES(111,'one');
+ INSERT INTO t1 VALUES(222,'two');
+ INSERT INTO t1 VALUES(333,'three');
+
+ INSERT INTO t2 VALUES(1,111);
+ INSERT INTO t2 VALUES(2,222);
+ INSERT INTO t2 VALUES(4,444);
+ CREATE INDEX t2i1 ON t2(p);
+
+ INSERT INTO t3 VALUES(999,'nine');
+ CREATE INDEX t3i1 ON t3(x);
+
+ SELECT * FROM t1, t2 LEFT JOIN t3 ON q=x WHERE p=2 AND a=q;
+ }
+} {222 two 2 222 {} {}}
+
+# Ticket #1830
+#
+# This is similar to the above but with the LEFT JOIN on the
+# other side.
+#
+do_test where3-1.2 {
+ execsql {
+ CREATE TABLE parent1(parent1key, child1key, Child2key, child3key);
+ CREATE TABLE child1 ( child1key NVARCHAR, value NVARCHAR );
+ CREATE UNIQUE INDEX PKIDXChild1 ON child1 ( child1key );
+ CREATE TABLE child2 ( child2key NVARCHAR, value NVARCHAR );
+
+ INSERT INTO parent1(parent1key,child1key,child2key)
+ VALUES ( 1, 'C1.1', 'C2.1' );
+ INSERT INTO child1 ( child1key, value ) VALUES ( 'C1.1', 'Value for C1.1' );
+ INSERT INTO child2 ( child2key, value ) VALUES ( 'C2.1', 'Value for C2.1' );
+
+ INSERT INTO parent1 ( parent1key, child1key, child2key )
+ VALUES ( 2, 'C1.2', 'C2.2' );
+ INSERT INTO child2 ( child2key, value ) VALUES ( 'C2.2', 'Value for C2.2' );
+
+ INSERT INTO parent1 ( parent1key, child1key, child2key )
+ VALUES ( 3, 'C1.3', 'C2.3' );
+ INSERT INTO child1 ( child1key, value ) VALUES ( 'C1.3', 'Value for C1.3' );
+ INSERT INTO child2 ( child2key, value ) VALUES ( 'C2.3', 'Value for C2.3' );
+
+ SELECT parent1.parent1key, child1.value, child2.value
+ FROM parent1
+ LEFT OUTER JOIN child1 ON child1.child1key = parent1.child1key
+ INNER JOIN child2 ON child2.child2key = parent1.child2key;
+ }
+} {1 {Value for C1.1} {Value for C2.1} 2 {} {Value for C2.2} 3 {Value for C1.3} {Value for C2.3}}
+
+finish_test
Added: freeswitch/trunk/libs/sqlite/tool/diffdb.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/diffdb.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,44 @@
+/*
+** A utility for printing the differences between two SQLite database files.
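+**
+** Build and run (a sketch; it assumes only that a C compiler such as gcc
+** is available):
+**
+**     gcc -o diffdb diffdb.c
+**     ./diffdb first.db second.db
+**
+** The two files are compared in 1024-byte pages; the number of each page
+** that differs is printed, followed by a count of the pages checked.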
+*/
+#include <stdio.h>
+#include <ctype.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <string.h>     /* for memcmp() */
+
+
+#define PAGESIZE 1024
+static int db1 = -1;
+static int db2 = -1;
+
+int main(int argc, char **argv){
+ int iPg;
+ unsigned char a1[PAGESIZE], a2[PAGESIZE];
+ if( argc!=3 ){
+ fprintf(stderr,"Usage: %s FILENAME FILENAME\n", argv[0]);
+ exit(1);
+ }
+ db1 = open(argv[1], O_RDONLY);
+ if( db1<0 ){
+ fprintf(stderr,"%s: can't open %s\n", argv[0], argv[1]);
+ exit(1);
+ }
+ db2 = open(argv[2], O_RDONLY);
+ if( db2<0 ){
+ fprintf(stderr,"%s: can't open %s\n", argv[0], argv[2]);
+ exit(1);
+ }
+ iPg = 1;
+ while( read(db1, a1, PAGESIZE)==PAGESIZE && read(db2,a2,PAGESIZE)==PAGESIZE ){
+ if( memcmp(a1,a2,PAGESIZE) ){
+ printf("Page %d\n", iPg);
+ }
+ iPg++;
+ }
+ printf("%d pages checked\n", iPg-1);
+ close(db1);
+ close(db2);
+  return 0;
+}
Added: freeswitch/trunk/libs/sqlite/tool/lemon.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/lemon.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,4767 @@
+/*
+** This file contains all sources (including headers) to the LEMON
+** LALR(1) parser generator. The sources have been combined into a
+** single file to make it easy to include LEMON in the source tree
+** and Makefile of another program.
+**
+** The author of this program disclaims copyright.
+*/
+#include <stdio.h>
+#include <stdarg.h>
+#include <string.h>
+#include <ctype.h>
+#include <stdlib.h>
+
+#ifndef __WIN32__
+# if defined(_WIN32) || defined(WIN32)
+# define __WIN32__
+# endif
+#endif
+
+/* #define PRIVATE static */
+#define PRIVATE
+
+#ifdef TEST
+#define MAXRHS 5 /* Set low to exercise exception code */
+#else
+#define MAXRHS 1000
+#endif
+
+char *msort();
+extern void *malloc();
+
+/******** From the file "action.h" *************************************/
+struct action *Action_new();
+struct action *Action_sort();
+
+/********* From the file "assert.h" ************************************/
+void myassert();
+#ifndef NDEBUG
+# define assert(X) if(!(X))myassert(__FILE__,__LINE__)
+#else
+# define assert(X)
+#endif
+
+/********** From the file "build.h" ************************************/
+void FindRulePrecedences();
+void FindFirstSets();
+void FindStates();
+void FindLinks();
+void FindFollowSets();
+void FindActions();
+
+/********* From the file "configlist.h" *********************************/
+void Configlist_init(/* void */);
+struct config *Configlist_add(/* struct rule *, int */);
+struct config *Configlist_addbasis(/* struct rule *, int */);
+void Configlist_closure(/* void */);
+void Configlist_sort(/* void */);
+void Configlist_sortbasis(/* void */);
+struct config *Configlist_return(/* void */);
+struct config *Configlist_basis(/* void */);
+void Configlist_eat(/* struct config * */);
+void Configlist_reset(/* void */);
+
+/********* From the file "error.h" ***************************************/
+void ErrorMsg(const char *, int,const char *, ...);
+
+/****** From the file "option.h" ******************************************/
+struct s_options {
+ enum { OPT_FLAG=1, OPT_INT, OPT_DBL, OPT_STR,
+ OPT_FFLAG, OPT_FINT, OPT_FDBL, OPT_FSTR} type;
+ char *label;
+ char *arg;
+ char *message;
+};
+int OptInit(/* char**,struct s_options*,FILE* */);
+int OptNArgs(/* void */);
+char *OptArg(/* int */);
+void OptErr(/* int */);
+void OptPrint(/* void */);
+
+/******** From the file "parse.h" *****************************************/
+void Parse(/* struct lemon *lemp */);
+
+/********* From the file "plink.h" ***************************************/
+struct plink *Plink_new(/* void */);
+void Plink_add(/* struct plink **, struct config * */);
+void Plink_copy(/* struct plink **, struct plink * */);
+void Plink_delete(/* struct plink * */);
+
+/********** From the file "report.h" *************************************/
+void Reprint(/* struct lemon * */);
+void ReportOutput(/* struct lemon * */);
+void ReportTable(/* struct lemon * */);
+void ReportHeader(/* struct lemon * */);
+void CompressTables(/* struct lemon * */);
+void ResortStates(/* struct lemon * */);
+
+/********** From the file "set.h" ****************************************/
+void SetSize(/* int N */); /* All sets will be of size N */
+char *SetNew(/* void */); /* A new set for element 0..N */
+void SetFree(/* char* */); /* Deallocate a set */
+
+int SetAdd(/* char*,int */); /* Add element to a set */
+int SetUnion(/* char *A,char *B */); /* A <- A U B, thru element N */
+
+#define SetFind(X,Y) (X[Y]) /* True if Y is in set X */
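+
+/* A minimal usage sketch for the set primitives above (illustrative only;
+** the size and element values are arbitrary):
+**
+**     char *s;
+**     SetSize(100);           /* Sets created after this hold elements 0..100 */
+**     s = SetNew();           /* Allocate a new, empty set                     */
+**     SetAdd(s, 3);           /* Add element 3                                 */
+**     if( SetFind(s, 3) ){ }  /* Test membership of element 3                  */
+**     SetFree(s);             /* Release the set                               */
+*/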
+
+/********** From the file "struct.h" *************************************/
+/*
+** Principal data structures for the LEMON parser generator.
+*/
+
+typedef enum {B_FALSE=0, B_TRUE} Boolean;
+
+/* Symbols (terminals and nonterminals) of the grammar are stored
+** in the following: */
+struct symbol {
+ char *name; /* Name of the symbol */
+ int index; /* Index number for this symbol */
+ enum {
+ TERMINAL,
+ NONTERMINAL,
+ MULTITERMINAL
+ } type; /* Symbols are all either TERMINALS or NTs */
+ struct rule *rule; /* Linked list of rules of this (if an NT) */
+ struct symbol *fallback; /* fallback token in case this token doesn't parse */
+ int prec; /* Precedence if defined (-1 otherwise) */
+ enum e_assoc {
+ LEFT,
+ RIGHT,
+ NONE,
+ UNK
+ } assoc; /* Associativity if precedence is defined */
+ char *firstset; /* First-set for all rules of this symbol */
+ Boolean lambda; /* True if NT and can generate an empty string */
+ char *destructor; /* Code which executes whenever this symbol is
+ ** popped from the stack during error processing */
+ int destructorln; /* Line number of destructor code */
+ char *datatype; /* The data type of information held by this
+ ** object. Only used if type==NONTERMINAL */
+ int dtnum; /* The data type number. In the parser, the value
+ ** stack is a union. The .yy%d element of this
+ ** union is the correct data type for this object */
+ /* The following fields are used by MULTITERMINALs only */
+ int nsubsym; /* Number of constituent symbols in the MULTI */
+ struct symbol **subsym; /* Array of constituent symbols */
+};
+
+/* Each production rule in the grammar is stored in the following
+** structure. */
+struct rule {
+ struct symbol *lhs; /* Left-hand side of the rule */
+ char *lhsalias; /* Alias for the LHS (NULL if none) */
+ int ruleline; /* Line number for the rule */
+ int nrhs; /* Number of RHS symbols */
+ struct symbol **rhs; /* The RHS symbols */
+ char **rhsalias; /* An alias for each RHS symbol (NULL if none) */
+ int line; /* Line number at which code begins */
+ char *code; /* The code executed when this rule is reduced */
+ struct symbol *precsym; /* Precedence symbol for this rule */
+ int index; /* An index number for this rule */
+ Boolean canReduce; /* True if this rule is ever reduced */
+ struct rule *nextlhs; /* Next rule with the same LHS */
+ struct rule *next; /* Next rule in the global list */
+};
+
+/* A configuration is a production rule of the grammar together with
+** a mark (dot) showing how much of that rule has been processed so far.
+** Configurations also contain a follow-set which is a list of terminal
+** symbols which are allowed to immediately follow the end of the rule.
+** Every configuration is recorded as an instance of the following: */
+struct config {
+ struct rule *rp; /* The rule upon which the configuration is based */
+ int dot; /* The parse point */
+ char *fws; /* Follow-set for this configuration only */
+ struct plink *fplp; /* Follow-set forward propagation links */
+ struct plink *bplp; /* Follow-set backwards propagation links */
+ struct state *stp; /* Pointer to state which contains this */
+ enum {
+ COMPLETE, /* The status is used during followset and */
+ INCOMPLETE /* shift computations */
+ } status;
+ struct config *next; /* Next configuration in the state */
+ struct config *bp; /* The next basis configuration */
+};
+
+/* Every shift or reduce operation is stored as one of the following */
+struct action {
+ struct symbol *sp; /* The look-ahead symbol */
+ enum e_action {
+ SHIFT,
+ ACCEPT,
+ REDUCE,
+ ERROR,
+ CONFLICT, /* Was a reduce, but part of a conflict */
+ SH_RESOLVED, /* Was a shift. Precedence resolved conflict */
+ RD_RESOLVED, /* Was reduce. Precedence resolved conflict */
+ NOT_USED /* Deleted by compression */
+ } type;
+ union {
+ struct state *stp; /* The new state, if a shift */
+ struct rule *rp; /* The rule, if a reduce */
+ } x;
+ struct action *next; /* Next action for this state */
+ struct action *collide; /* Next action with the same hash */
+};
+
+/* Each state of the generated parser's finite state machine
+** is encoded as an instance of the following structure. */
+struct state {
+ struct config *bp; /* The basis configurations for this state */
+ struct config *cfp; /* All configurations in this set */
+ int statenum; /* Sequential number for this state */
+ struct action *ap; /* Array of actions for this state */
+ int nTknAct, nNtAct; /* Number of actions on terminals and nonterminals */
+ int iTknOfst, iNtOfst; /* yy_action[] offset for terminals and nonterms */
+ int iDflt; /* Default action */
+};
+#define NO_OFFSET (-2147483647)
+
+/* A followset propagation link indicates that the contents of one
+** configuration followset should be propagated to another whenever
+** the first changes. */
+struct plink {
+ struct config *cfp; /* The configuration to which linked */
+ struct plink *next; /* The next propagate link */
+};
+
+/* The state vector for the entire parser generator is recorded as
+** follows. (LEMON uses no global variables and makes little use of
+** static variables. Fields in the following structure can be thought
+** of as being global variables in the program.) */
+struct lemon {
+ struct state **sorted; /* Table of states sorted by state number */
+ struct rule *rule; /* List of all rules */
+ int nstate; /* Number of states */
+ int nrule; /* Number of rules */
+ int nsymbol; /* Number of terminal and nonterminal symbols */
+ int nterminal; /* Number of terminal symbols */
+ struct symbol **symbols; /* Sorted array of pointers to symbols */
+ int errorcnt; /* Number of errors */
+ struct symbol *errsym; /* The error symbol */
+ struct symbol *wildcard; /* Token that matches anything */
+ char *name; /* Name of the generated parser */
+ char *arg; /* Declaration of the 3rd argument to parser */
+ char *tokentype; /* Type of terminal symbols in the parser stack */
+ char *vartype; /* The default type of non-terminal symbols */
+ char *start; /* Name of the start symbol for the grammar */
+ char *stacksize; /* Size of the parser stack */
+ char *include; /* Code to put at the start of the C file */
+ int includeln; /* Line number for start of include code */
+ char *error; /* Code to execute when an error is seen */
+ int errorln; /* Line number for start of error code */
+ char *overflow; /* Code to execute on a stack overflow */
+ int overflowln; /* Line number for start of overflow code */
+ char *failure; /* Code to execute on parser failure */
+ int failureln; /* Line number for start of failure code */
+ char *accept; /* Code to execute when the parser accepts */
+ int acceptln; /* Line number for the start of accept code */
+ char *extracode; /* Code appended to the generated file */
+ int extracodeln; /* Line number for the start of the extra code */
+ char *tokendest; /* Code to execute to destroy token data */
+ int tokendestln; /* Line number for token destroyer code */
+ char *vardest; /* Code for the default non-terminal destructor */
+ int vardestln; /* Line number for default non-term destructor code*/
+ char *filename; /* Name of the input file */
+ char *outname; /* Name of the current output file */
+ char *tokenprefix; /* A prefix added to token names in the .h file */
+ int nconflict; /* Number of parsing conflicts */
+ int tablesize; /* Size of the parse tables */
+ int basisflag; /* Print only basis configurations */
+ int has_fallback; /* True if any %fallback is seen in the grammar */
+ char *argv0; /* Name of the program */
+};
+
+#define MemoryCheck(X) if((X)==0){ \
+ extern void memory_error(); \
+ memory_error(); \
+}
+
+/**************** From the file "table.h" *********************************/
+/*
+** All code in this file has been automatically generated
+** from a specification in the file
+** "table.q"
+** by the associative array code building program "aagen".
+** Do not edit this file! Instead, edit the specification
+** file, then rerun aagen.
+*/
+/*
+** Code for processing tables in the LEMON parser generator.
+*/
+
+/* Routines for handling strings */
+
+char *Strsafe();
+
+void Strsafe_init(/* void */);
+int Strsafe_insert(/* char * */);
+char *Strsafe_find(/* char * */);
+
+/* Routines for handling symbols of the grammar */
+
+struct symbol *Symbol_new();
+int Symbolcmpp(/* struct symbol **, struct symbol ** */);
+void Symbol_init(/* void */);
+int Symbol_insert(/* struct symbol *, char * */);
+struct symbol *Symbol_find(/* char * */);
+struct symbol *Symbol_Nth(/* int */);
+int Symbol_count(/* */);
+struct symbol **Symbol_arrayof(/* */);
+
+/* Routines to manage the state table */
+
+int Configcmp(/* struct config *, struct config * */);
+struct state *State_new();
+void State_init(/* void */);
+int State_insert(/* struct state *, struct config * */);
+struct state *State_find(/* struct config * */);
+struct state **State_arrayof(/* */);
+
+/* Routines used for efficiency in Configlist_add */
+
+void Configtable_init(/* void */);
+int Configtable_insert(/* struct config * */);
+struct config *Configtable_find(/* struct config * */);
+void Configtable_clear(/* int(*)(struct config *) */);
+/****************** From the file "action.c" *******************************/
+/*
+** Routines processing parser actions in the LEMON parser generator.
+*/
+
+/* Allocate a new parser action */
+struct action *Action_new(){
+ static struct action *freelist = 0;
+ struct action *new;
+
+ if( freelist==0 ){
+ int i;
+ int amt = 100;
+ freelist = (struct action *)malloc( sizeof(struct action)*amt );
+ if( freelist==0 ){
+ fprintf(stderr,"Unable to allocate memory for a new parser action.");
+ exit(1);
+ }
+ for(i=0; i<amt-1; i++) freelist[i].next = &freelist[i+1];
+ freelist[amt-1].next = 0;
+ }
+ new = freelist;
+ freelist = freelist->next;
+ return new;
+}
+
+/* Compare two actions */
+static int actioncmp(ap1,ap2)
+struct action *ap1;
+struct action *ap2;
+{
+ int rc;
+ rc = ap1->sp->index - ap2->sp->index;
+ if( rc==0 ) rc = (int)ap1->type - (int)ap2->type;
+ if( rc==0 ){
+ assert( ap1->type==REDUCE || ap1->type==RD_RESOLVED || ap1->type==CONFLICT);
+ assert( ap2->type==REDUCE || ap2->type==RD_RESOLVED || ap2->type==CONFLICT);
+ rc = ap1->x.rp->index - ap2->x.rp->index;
+ }
+ return rc;
+}
+
+/* Sort parser actions */
+struct action *Action_sort(ap)
+struct action *ap;
+{
+ ap = (struct action *)msort((char *)ap,(char **)&ap->next,actioncmp);
+ return ap;
+}
+
+void Action_add(app,type,sp,arg)
+struct action **app;
+enum e_action type;
+struct symbol *sp;
+char *arg;
+{
+ struct action *new;
+ new = Action_new();
+ new->next = *app;
+ *app = new;
+ new->type = type;
+ new->sp = sp;
+ if( type==SHIFT ){
+ new->x.stp = (struct state *)arg;
+ }else{
+ new->x.rp = (struct rule *)arg;
+ }
+}
+/********************** New code to implement the "acttab" module ***********/
+/*
+** This module implements routines used to construct the yy_action[] table.
+*/
+
+/*
+** The state of the yy_action table under construction is an instance of
+** the following structure
+*/
+typedef struct acttab acttab;
+struct acttab {
+ int nAction; /* Number of used slots in aAction[] */
+ int nActionAlloc; /* Slots allocated for aAction[] */
+ struct {
+ int lookahead; /* Value of the lookahead token */
+ int action; /* Action to take on the given lookahead */
+ } *aAction, /* The yy_action[] table under construction */
+ *aLookahead; /* A single new transaction set */
+ int mnLookahead; /* Minimum aLookahead[].lookahead */
+ int mnAction; /* Action associated with mnLookahead */
+ int mxLookahead; /* Maximum aLookahead[].lookahead */
+ int nLookahead; /* Used slots in aLookahead[] */
+ int nLookaheadAlloc; /* Slots allocated in aLookahead[] */
+};
+
+/* Return the number of entries in the yy_action table */
+#define acttab_size(X) ((X)->nAction)
+
+/* The value for the N-th entry in yy_action */
+#define acttab_yyaction(X,N) ((X)->aAction[N].action)
+
+/* The value for the N-th entry in yy_lookahead */
+#define acttab_yylookahead(X,N) ((X)->aAction[N].lookahead)
+
+/* Free all memory associated with the given acttab */
+void acttab_free(acttab *p){
+ free( p->aAction );
+ free( p->aLookahead );
+ free( p );
+}
+
+/* Allocate a new acttab structure */
+acttab *acttab_alloc(void){
+ acttab *p = malloc( sizeof(*p) );
+ if( p==0 ){
+ fprintf(stderr,"Unable to allocate memory for a new acttab.");
+ exit(1);
+ }
+ memset(p, 0, sizeof(*p));
+ return p;
+}
+
+/* Add a new action to the current transaction set
+*/
+void acttab_action(acttab *p, int lookahead, int action){
+ if( p->nLookahead>=p->nLookaheadAlloc ){
+ p->nLookaheadAlloc += 25;
+ p->aLookahead = realloc( p->aLookahead,
+ sizeof(p->aLookahead[0])*p->nLookaheadAlloc );
+ if( p->aLookahead==0 ){
+ fprintf(stderr,"malloc failed\n");
+ exit(1);
+ }
+ }
+ if( p->nLookahead==0 ){
+ p->mxLookahead = lookahead;
+ p->mnLookahead = lookahead;
+ p->mnAction = action;
+ }else{
+ if( p->mxLookahead<lookahead ) p->mxLookahead = lookahead;
+ if( p->mnLookahead>lookahead ){
+ p->mnLookahead = lookahead;
+ p->mnAction = action;
+ }
+ }
+ p->aLookahead[p->nLookahead].lookahead = lookahead;
+ p->aLookahead[p->nLookahead].action = action;
+ p->nLookahead++;
+}
+
+/*
+** Add the transaction set built up with prior calls to acttab_action()
+** into the current action table. Then reset the transaction set back
+** to an empty set in preparation for a new round of acttab_action() calls.
+**
+** Return the offset into the action table of the new transaction.
+*/
+int acttab_insert(acttab *p){
+ int i, j, k, n;
+ assert( p->nLookahead>0 );
+
+ /* Make sure we have enough space to hold the expanded action table
+ ** in the worst case. The worst case occurs if the transaction set
+ ** must be appended to the current action table
+ */
+ n = p->mxLookahead + 1;
+ if( p->nAction + n >= p->nActionAlloc ){
+ int oldAlloc = p->nActionAlloc;
+ p->nActionAlloc = p->nAction + n + p->nActionAlloc + 20;
+ p->aAction = realloc( p->aAction,
+ sizeof(p->aAction[0])*p->nActionAlloc);
+ if( p->aAction==0 ){
+ fprintf(stderr,"malloc failed\n");
+ exit(1);
+ }
+ for(i=oldAlloc; i<p->nActionAlloc; i++){
+ p->aAction[i].lookahead = -1;
+ p->aAction[i].action = -1;
+ }
+ }
+
+ /* Scan the existing action table looking for an offset where we can
+ ** insert the current transaction set. Fall out of the loop when that
+ ** offset is found. In the worst case, we fall out of the loop when
+ ** i reaches p->nAction, which means we append the new transaction set.
+ **
+ ** i is the index in p->aAction[] where p->mnLookahead is inserted.
+ */
+ for(i=0; i<p->nAction+p->mnLookahead; i++){
+ if( p->aAction[i].lookahead<0 ){
+ for(j=0; j<p->nLookahead; j++){
+ k = p->aLookahead[j].lookahead - p->mnLookahead + i;
+ if( k<0 ) break;
+ if( p->aAction[k].lookahead>=0 ) break;
+ }
+ if( j<p->nLookahead ) continue;
+ for(j=0; j<p->nAction; j++){
+ if( p->aAction[j].lookahead==j+p->mnLookahead-i ) break;
+ }
+ if( j==p->nAction ){
+ break; /* Fits in empty slots */
+ }
+ }else if( p->aAction[i].lookahead==p->mnLookahead ){
+ if( p->aAction[i].action!=p->mnAction ) continue;
+ for(j=0; j<p->nLookahead; j++){
+ k = p->aLookahead[j].lookahead - p->mnLookahead + i;
+ if( k<0 || k>=p->nAction ) break;
+ if( p->aLookahead[j].lookahead!=p->aAction[k].lookahead ) break;
+ if( p->aLookahead[j].action!=p->aAction[k].action ) break;
+ }
+ if( j<p->nLookahead ) continue;
+ n = 0;
+ for(j=0; j<p->nAction; j++){
+ if( p->aAction[j].lookahead<0 ) continue;
+ if( p->aAction[j].lookahead==j+p->mnLookahead-i ) n++;
+ }
+ if( n==p->nLookahead ){
+ break; /* Same as a prior transaction set */
+ }
+ }
+ }
+ /* Insert transaction set at index i. */
+ for(j=0; j<p->nLookahead; j++){
+ k = p->aLookahead[j].lookahead - p->mnLookahead + i;
+ p->aAction[k] = p->aLookahead[j];
+ if( k>=p->nAction ) p->nAction = k+1;
+ }
+ p->nLookahead = 0;
+
+ /* Return the offset that is added to the lookahead in order to get the
+ ** index into yy_action of the action */
+ return i - p->mnLookahead;
+}
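+
+/* A typical call sequence for the acttab module above (a sketch; the
+** lookahead and action numbers are arbitrary):
+**
+**     acttab *pTab = acttab_alloc();
+**     acttab_action(pTab, 5, 13);        /* Build up one transaction set   */
+**     acttab_action(pTab, 7, 22);
+**     int ofst = acttab_insert(pTab);    /* Fold the set into yy_action[]  */
+**     int n = acttab_size(pTab);         /* Entries in the table so far    */
+**     acttab_free(pTab);
+*/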
+
+/********************** From the file "assert.c" ****************************/
+/*
+** A more efficient way of handling assertions.
+*/
+void myassert(file,line)
+char *file;
+int line;
+{
+ fprintf(stderr,"Assertion failed on line %d of file \"%s\"\n",line,file);
+ exit(1);
+}
+/********************** From the file "build.c" *****************************/
+/*
+** Routines to construct the finite state machine for the LEMON
+** parser generator.
+*/
+
+/* Find a precedence symbol of every rule in the grammar.
+**
+** Those rules which have a precedence symbol coded in the input
+** grammar using the "[symbol]" construct will already have the
+** rp->precsym field filled. Other rules take as their precedence
+** symbol the first RHS symbol with a defined precedence. If there
+** are no RHS symbols with a defined precedence, the precedence
+** symbol field is left blank.
+*/
+void FindRulePrecedences(xp)
+struct lemon *xp;
+{
+ struct rule *rp;
+ for(rp=xp->rule; rp; rp=rp->next){
+ if( rp->precsym==0 ){
+ int i, j;
+ for(i=0; i<rp->nrhs && rp->precsym==0; i++){
+ struct symbol *sp = rp->rhs[i];
+ if( sp->type==MULTITERMINAL ){
+ for(j=0; j<sp->nsubsym; j++){
+ if( sp->subsym[j]->prec>=0 ){
+ rp->precsym = sp->subsym[j];
+ break;
+ }
+ }
+ }else if( sp->prec>=0 ){
+ rp->precsym = rp->rhs[i];
+ }
+ }
+ }
+ }
+ return;
+}
+
+/* Find all nonterminals which will generate the empty string.
+** Then go back and compute the first sets of every nonterminal.
+** The first set is the set of all terminal symbols which can begin
+** a string generated by that nonterminal.
+*/
+void FindFirstSets(lemp)
+struct lemon *lemp;
+{
+ int i, j;
+ struct rule *rp;
+ int progress;
+
+ for(i=0; i<lemp->nsymbol; i++){
+ lemp->symbols[i]->lambda = B_FALSE;
+ }
+ for(i=lemp->nterminal; i<lemp->nsymbol; i++){
+ lemp->symbols[i]->firstset = SetNew();
+ }
+
+ /* First compute all lambdas */
+ do{
+ progress = 0;
+ for(rp=lemp->rule; rp; rp=rp->next){
+ if( rp->lhs->lambda ) continue;
+ for(i=0; i<rp->nrhs; i++){
+ struct symbol *sp = rp->rhs[i];
+ if( sp->type!=TERMINAL || sp->lambda==B_FALSE ) break;
+ }
+ if( i==rp->nrhs ){
+ rp->lhs->lambda = B_TRUE;
+ progress = 1;
+ }
+ }
+ }while( progress );
+
+ /* Now compute all first sets */
+ do{
+ struct symbol *s1, *s2;
+ progress = 0;
+ for(rp=lemp->rule; rp; rp=rp->next){
+ s1 = rp->lhs;
+ for(i=0; i<rp->nrhs; i++){
+ s2 = rp->rhs[i];
+ if( s2->type==TERMINAL ){
+ progress += SetAdd(s1->firstset,s2->index);
+ break;
+ }else if( s2->type==MULTITERMINAL ){
+ for(j=0; j<s2->nsubsym; j++){
+ progress += SetAdd(s1->firstset,s2->subsym[j]->index);
+ }
+ break;
+ }else if( s1==s2 ){
+ if( s1->lambda==B_FALSE ) break;
+ }else{
+ progress += SetUnion(s1->firstset,s2->firstset);
+ if( s2->lambda==B_FALSE ) break;
+ }
+ }
+ }
+ }while( progress );
+ return;
+}
+
+/* Compute all LR(0) states for the grammar. Links
+** are added between some states so that the LR(1) follow sets
+** can be computed later.
+*/
+PRIVATE struct state *getstate(/* struct lemon * */); /* forward reference */
+void FindStates(lemp)
+struct lemon *lemp;
+{
+ struct symbol *sp;
+ struct rule *rp;
+
+ Configlist_init();
+
+ /* Find the start symbol */
+ if( lemp->start ){
+ sp = Symbol_find(lemp->start);
+ if( sp==0 ){
+ ErrorMsg(lemp->filename,0,
+"The specified start symbol \"%s\" is not \
+in a nonterminal of the grammar. \"%s\" will be used as the start \
+symbol instead.",lemp->start,lemp->rule->lhs->name);
+ lemp->errorcnt++;
+ sp = lemp->rule->lhs;
+ }
+ }else{
+ sp = lemp->rule->lhs;
+ }
+
+ /* Make sure the start symbol doesn't occur on the right-hand side of
+ ** any rule. Report an error if it does. (YACC would generate a new
+ ** start symbol in this case.) */
+ for(rp=lemp->rule; rp; rp=rp->next){
+ int i;
+ for(i=0; i<rp->nrhs; i++){
+ if( rp->rhs[i]==sp ){ /* FIX ME: Deal with multiterminals */
+ ErrorMsg(lemp->filename,0,
+"The start symbol \"%s\" occurs on the \
+right-hand side of a rule. This will result in a parser which \
+does not work properly.",sp->name);
+ lemp->errorcnt++;
+ }
+ }
+ }
+
+ /* The basis configuration set for the first state
+ ** is all rules which have the start symbol as their
+ ** left-hand side */
+ for(rp=sp->rule; rp; rp=rp->nextlhs){
+ struct config *newcfp;
+ newcfp = Configlist_addbasis(rp,0);
+ SetAdd(newcfp->fws,0);
+ }
+
+ /* Compute the first state. All other states will be
+ ** computed automatically during the computation of the first one.
+ ** The returned pointer to the first state is not used. */
+ (void)getstate(lemp);
+ return;
+}
+
+/* Return a pointer to a state which is described by the configuration
+** list which has been built from calls to Configlist_add.
+*/
+PRIVATE void buildshifts(/* struct lemon *, struct state * */); /* Forwd ref */
+PRIVATE struct state *getstate(lemp)
+struct lemon *lemp;
+{
+ struct config *cfp, *bp;
+ struct state *stp;
+
+ /* Extract the sorted basis of the new state. The basis was constructed
+ ** by prior calls to "Configlist_addbasis()". */
+ Configlist_sortbasis();
+ bp = Configlist_basis();
+
+ /* Get a state with the same basis */
+ stp = State_find(bp);
+ if( stp ){
+ /* A state with the same basis already exists! Copy all the follow-set
+ ** propagation links from the state under construction into the
+ ** preexisting state, then return a pointer to the preexisting state */
+ struct config *x, *y;
+ for(x=bp, y=stp->bp; x && y; x=x->bp, y=y->bp){
+ Plink_copy(&y->bplp,x->bplp);
+ Plink_delete(x->fplp);
+ x->fplp = x->bplp = 0;
+ }
+ cfp = Configlist_return();
+ Configlist_eat(cfp);
+ }else{
+ /* This really is a new state. Construct all the details */
+ Configlist_closure(lemp); /* Compute the configuration closure */
+ Configlist_sort(); /* Sort the configuration closure */
+ cfp = Configlist_return(); /* Get a pointer to the config list */
+ stp = State_new(); /* A new state structure */
+ MemoryCheck(stp);
+ stp->bp = bp; /* Remember the configuration basis */
+ stp->cfp = cfp; /* Remember the configuration closure */
+ stp->statenum = lemp->nstate++; /* Every state gets a sequence number */
+ stp->ap = 0; /* No actions, yet. */
+ State_insert(stp,stp->bp); /* Add to the state table */
+ buildshifts(lemp,stp); /* Recursively compute successor states */
+ }
+ return stp;
+}
+
+/*
+** Return true if two symbols are the same.
+*/
+int same_symbol(a,b)
+struct symbol *a;
+struct symbol *b;
+{
+ int i;
+ if( a==b ) return 1;
+ if( a->type!=MULTITERMINAL ) return 0;
+ if( b->type!=MULTITERMINAL ) return 0;
+ if( a->nsubsym!=b->nsubsym ) return 0;
+ for(i=0; i<a->nsubsym; i++){
+ if( a->subsym[i]!=b->subsym[i] ) return 0;
+ }
+ return 1;
+}
+
+/* Construct all successor states to the given state. A "successor"
+** state is any state which can be reached by a shift action.
+*/
+PRIVATE void buildshifts(lemp,stp)
+struct lemon *lemp;
+struct state *stp; /* The state from which successors are computed */
+{
+ struct config *cfp; /* For looping thru the config closure of "stp" */
+ struct config *bcfp; /* For the inner loop on config closure of "stp" */
+ struct config *new; /* */
+ struct symbol *sp; /* Symbol following the dot in configuration "cfp" */
+ struct symbol *bsp; /* Symbol following the dot in configuration "bcfp" */
+ struct state *newstp; /* A pointer to a successor state */
+
+ /* Each configuration becomes complete after it contributes to a successor
+ ** state. Initially, all configurations are incomplete */
+ for(cfp=stp->cfp; cfp; cfp=cfp->next) cfp->status = INCOMPLETE;
+
+ /* Loop through all configurations of the state "stp" */
+ for(cfp=stp->cfp; cfp; cfp=cfp->next){
+ if( cfp->status==COMPLETE ) continue; /* Already used by inner loop */
+ if( cfp->dot>=cfp->rp->nrhs ) continue; /* Can't shift this config */
+ Configlist_reset(); /* Reset the new config set */
+ sp = cfp->rp->rhs[cfp->dot]; /* Symbol after the dot */
+
+ /* For every configuration in the state "stp" which has the symbol "sp"
+ ** following its dot, add the same configuration to the basis set under
+ ** construction but with the dot shifted one symbol to the right. */
+ for(bcfp=cfp; bcfp; bcfp=bcfp->next){
+ if( bcfp->status==COMPLETE ) continue; /* Already used */
+ if( bcfp->dot>=bcfp->rp->nrhs ) continue; /* Can't shift this one */
+ bsp = bcfp->rp->rhs[bcfp->dot]; /* Get symbol after dot */
+ if( !same_symbol(bsp,sp) ) continue; /* Must be same as for "cfp" */
+ bcfp->status = COMPLETE; /* Mark this config as used */
+ new = Configlist_addbasis(bcfp->rp,bcfp->dot+1);
+ Plink_add(&new->bplp,bcfp);
+ }
+
+ /* Get a pointer to the state described by the basis configuration set
+ ** constructed in the preceding loop */
+ newstp = getstate(lemp);
+
+ /* The state "newstp" is reached from the state "stp" by a shift action
+ ** on the symbol "sp" */
+ if( sp->type==MULTITERMINAL ){
+ int i;
+ for(i=0; i<sp->nsubsym; i++){
+ Action_add(&stp->ap,SHIFT,sp->subsym[i],(char*)newstp);
+ }
+ }else{
+ Action_add(&stp->ap,SHIFT,sp,(char *)newstp);
+ }
+ }
+}
+
+/*
+** Construct the propagation links
+*/
+void FindLinks(lemp)
+struct lemon *lemp;
+{
+ int i;
+ struct config *cfp, *other;
+ struct state *stp;
+ struct plink *plp;
+
+ /* Housekeeping detail:
+ ** Add to every propagate link a pointer back to the state to
+ ** which the link is attached. */
+ for(i=0; i<lemp->nstate; i++){
+ stp = lemp->sorted[i];
+ for(cfp=stp->cfp; cfp; cfp=cfp->next){
+ cfp->stp = stp;
+ }
+ }
+
+ /* Convert all backlinks into forward links. Only the forward
+ ** links are used in the follow-set computation. */
+ for(i=0; i<lemp->nstate; i++){
+ stp = lemp->sorted[i];
+ for(cfp=stp->cfp; cfp; cfp=cfp->next){
+ for(plp=cfp->bplp; plp; plp=plp->next){
+ other = plp->cfp;
+ Plink_add(&other->fplp,cfp);
+ }
+ }
+ }
+}
+
+/* Compute all followsets.
+**
+** A followset is the set of all symbols which can come immediately
+** after a configuration.
+*/
+void FindFollowSets(lemp)
+struct lemon *lemp;
+{
+ int i;
+ struct config *cfp;
+ struct plink *plp;
+ int progress;
+ int change;
+
+ for(i=0; i<lemp->nstate; i++){
+ for(cfp=lemp->sorted[i]->cfp; cfp; cfp=cfp->next){
+ cfp->status = INCOMPLETE;
+ }
+ }
+
+ do{
+ progress = 0;
+ for(i=0; i<lemp->nstate; i++){
+ for(cfp=lemp->sorted[i]->cfp; cfp; cfp=cfp->next){
+ if( cfp->status==COMPLETE ) continue;
+ for(plp=cfp->fplp; plp; plp=plp->next){
+ change = SetUnion(plp->cfp->fws,cfp->fws);
+ if( change ){
+ plp->cfp->status = INCOMPLETE;
+ progress = 1;
+ }
+ }
+ cfp->status = COMPLETE;
+ }
+ }
+ }while( progress );
+}
+
+static int resolve_conflict();
+
+/* Compute the reduce actions, and resolve conflicts.
+*/
+void FindActions(lemp)
+struct lemon *lemp;
+{
+ int i,j;
+ struct config *cfp;
+ struct state *stp;
+ struct symbol *sp;
+ struct rule *rp;
+
+ /* Add all of the reduce actions
+ ** A reduce action is added for each element of the followset of
+ ** a configuration which has its dot at the extreme right.
+ */
+ for(i=0; i<lemp->nstate; i++){ /* Loop over all states */
+ stp = lemp->sorted[i];
+ for(cfp=stp->cfp; cfp; cfp=cfp->next){ /* Loop over all configurations */
+ if( cfp->rp->nrhs==cfp->dot ){ /* Is dot at extreme right? */
+ for(j=0; j<lemp->nterminal; j++){
+ if( SetFind(cfp->fws,j) ){
+ /* Add a reduce action to the state "stp" which will reduce by the
+ ** rule "cfp->rp" if the lookahead symbol is "lemp->symbols[j]" */
+ Action_add(&stp->ap,REDUCE,lemp->symbols[j],(char *)cfp->rp);
+ }
+ }
+ }
+ }
+ }
+
+ /* Add the accepting token */
+ if( lemp->start ){
+ sp = Symbol_find(lemp->start);
+ if( sp==0 ) sp = lemp->rule->lhs;
+ }else{
+ sp = lemp->rule->lhs;
+ }
+ /* Add to the first state (which is always the starting state of the
+ ** finite state machine) an action to ACCEPT if the lookahead is the
+ ** start nonterminal. */
+ Action_add(&lemp->sorted[0]->ap,ACCEPT,sp,0);
+
+ /* Resolve conflicts */
+ for(i=0; i<lemp->nstate; i++){
+ struct action *ap, *nap;
+ struct state *stp;
+ stp = lemp->sorted[i];
+ assert( stp->ap );
+ stp->ap = Action_sort(stp->ap);
+ for(ap=stp->ap; ap && ap->next; ap=ap->next){
+ for(nap=ap->next; nap && nap->sp==ap->sp; nap=nap->next){
+ /* The two actions "ap" and "nap" have the same lookahead.
+ ** Figure out which one should be used */
+ lemp->nconflict += resolve_conflict(ap,nap,lemp->errsym);
+ }
+ }
+ }
+
+ /* Report an error for each rule that can never be reduced. */
+ for(rp=lemp->rule; rp; rp=rp->next) rp->canReduce = B_FALSE;
+ for(i=0; i<lemp->nstate; i++){
+ struct action *ap;
+ for(ap=lemp->sorted[i]->ap; ap; ap=ap->next){
+ if( ap->type==REDUCE ) ap->x.rp->canReduce = B_TRUE;
+ }
+ }
+ for(rp=lemp->rule; rp; rp=rp->next){
+ if( rp->canReduce ) continue;
+ ErrorMsg(lemp->filename,rp->ruleline,"This rule can not be reduced.\n");
+ lemp->errorcnt++;
+ }
+}
+
+/* Resolve a conflict between the two given actions. If the
+** conflict can't be resolved, return non-zero.
+**
+** NO LONGER TRUE:
+** To resolve a conflict, first look to see if either action
+** is on an error rule. In that case, take the action which
+** is not associated with the error rule. If neither or both
+** actions are associated with an error rule, then try to
+** use precedence to resolve the conflict.
+**
+** If either action is a SHIFT, then it must be apx. This
+** function won't work if apx->type==REDUCE and apy->type==SHIFT.
+*/
+static int resolve_conflict(apx,apy,errsym)
+struct action *apx;
+struct action *apy;
+struct symbol *errsym; /* The error symbol (if defined. NULL otherwise) */
+{
+ struct symbol *spx, *spy;
+ int errcnt = 0;
+ assert( apx->sp==apy->sp ); /* Otherwise there would be no conflict */
+ if( apx->type==SHIFT && apy->type==REDUCE ){
+ spx = apx->sp;
+ spy = apy->x.rp->precsym;
+ if( spy==0 || spx->prec<0 || spy->prec<0 ){
+ /* Not enough precedence information. */
+ apy->type = CONFLICT;
+ errcnt++;
+ }else if( spx->prec>spy->prec ){ /* Higher precedence wins */
+ apy->type = RD_RESOLVED;
+ }else if( spx->prec<spy->prec ){
+ apx->type = SH_RESOLVED;
+ }else if( spx->prec==spy->prec && spx->assoc==RIGHT ){ /* Use operator */
+ apy->type = RD_RESOLVED; /* associativity */
+ }else if( spx->prec==spy->prec && spx->assoc==LEFT ){ /* to break tie */
+ apx->type = SH_RESOLVED;
+ }else{
+ assert( spx->prec==spy->prec && spx->assoc==NONE );
+ apy->type = CONFLICT;
+ errcnt++;
+ }
+ }else if( apx->type==REDUCE && apy->type==REDUCE ){
+ spx = apx->x.rp->precsym;
+ spy = apy->x.rp->precsym;
+ if( spx==0 || spy==0 || spx->prec<0 ||
+ spy->prec<0 || spx->prec==spy->prec ){
+ apy->type = CONFLICT;
+ errcnt++;
+ }else if( spx->prec>spy->prec ){
+ apy->type = RD_RESOLVED;
+ }else if( spx->prec<spy->prec ){
+ apx->type = RD_RESOLVED;
+ }
+ }else{
+ assert(
+ apx->type==SH_RESOLVED ||
+ apx->type==RD_RESOLVED ||
+ apx->type==CONFLICT ||
+ apy->type==SH_RESOLVED ||
+ apy->type==RD_RESOLVED ||
+ apy->type==CONFLICT
+ );
+ /* The REDUCE/SHIFT case cannot happen because SHIFTs come before
+ ** REDUCEs on the list. If we reach this point it must be because
+ ** the parser conflict had already been resolved. */
+ }
+ return errcnt;
+}
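+
+/* Worked example of the rules above (an illustrative sketch; the grammar
+** fragment is hypothetical): for  expr ::= expr PLUS expr  declared
+** %left PLUS, a SHIFT on PLUS and a REDUCE by that rule have equal
+** precedence with LEFT associativity, so the shift is marked SH_RESOLVED
+** and the reduce is taken, grouping operators from the left.
+*/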
+/********************* From the file "configlist.c" *************************/
+/*
+** Routines for processing a configuration list and building a state
+** in the LEMON parser generator.
+*/
+
+static struct config *freelist = 0; /* List of free configurations */
+static struct config *current = 0; /* Top of list of configurations */
+static struct config **currentend = 0; /* Last on list of configs */
+static struct config *basis = 0; /* Top of list of basis configs */
+static struct config **basisend = 0; /* End of list of basis configs */
+
+/* Return a pointer to a new configuration */
+PRIVATE struct config *newconfig(){
+ struct config *new;
+ if( freelist==0 ){
+ int i;
+ int amt = 3;
+ freelist = (struct config *)malloc( sizeof(struct config)*amt );
+ if( freelist==0 ){
+ fprintf(stderr,"Unable to allocate memory for a new configuration.");
+ exit(1);
+ }
+ for(i=0; i<amt-1; i++) freelist[i].next = &freelist[i+1];
+ freelist[amt-1].next = 0;
+ }
+ new = freelist;
+ freelist = freelist->next;
+ return new;
+}
+
+/* The configuration "old" is no longer used */
+PRIVATE void deleteconfig(old)
+struct config *old;
+{
+ old->next = freelist;
+ freelist = old;
+}
+
+/* Initialize the configuration list builder */
+void Configlist_init(){
+ current = 0;
+ currentend = &current;
+ basis = 0;
+ basisend = &basis;
+ Configtable_init();
+ return;
+}
+
+/* Reinitialize the configuration list builder */
+void Configlist_reset(){
+ current = 0;
+ currentend = &current;
+ basis = 0;
+ basisend = &basis;
+ Configtable_clear(0);
+ return;
+}
+
+/* Add another configuration to the configuration list */
+struct config *Configlist_add(rp,dot)
+struct rule *rp; /* The rule */
+int dot; /* Index into the RHS of the rule where the dot goes */
+{
+ struct config *cfp, model;
+
+ assert( currentend!=0 );
+ model.rp = rp;
+ model.dot = dot;
+ cfp = Configtable_find(&model);
+ if( cfp==0 ){
+ cfp = newconfig();
+ cfp->rp = rp;
+ cfp->dot = dot;
+ cfp->fws = SetNew();
+ cfp->stp = 0;
+ cfp->fplp = cfp->bplp = 0;
+ cfp->next = 0;
+ cfp->bp = 0;
+ *currentend = cfp;
+ currentend = &cfp->next;
+ Configtable_insert(cfp);
+ }
+ return cfp;
+}
+
+/* Add a basis configuration to the configuration list */
+struct config *Configlist_addbasis(rp,dot)
+struct rule *rp;
+int dot;
+{
+ struct config *cfp, model;
+
+ assert( basisend!=0 );
+ assert( currentend!=0 );
+ model.rp = rp;
+ model.dot = dot;
+ cfp = Configtable_find(&model);
+ if( cfp==0 ){
+ cfp = newconfig();
+ cfp->rp = rp;
+ cfp->dot = dot;
+ cfp->fws = SetNew();
+ cfp->stp = 0;
+ cfp->fplp = cfp->bplp = 0;
+ cfp->next = 0;
+ cfp->bp = 0;
+ *currentend = cfp;
+ currentend = &cfp->next;
+ *basisend = cfp;
+ basisend = &cfp->bp;
+ Configtable_insert(cfp);
+ }
+ return cfp;
+}
+
+/* Compute the closure of the configuration list */
+void Configlist_closure(lemp)
+struct lemon *lemp;
+{
+ struct config *cfp, *newcfp;
+ struct rule *rp, *newrp;
+ struct symbol *sp, *xsp;
+ int i, dot;
+
+ assert( currentend!=0 );
+ for(cfp=current; cfp; cfp=cfp->next){
+ rp = cfp->rp;
+ dot = cfp->dot;
+ if( dot>=rp->nrhs ) continue;
+ sp = rp->rhs[dot];
+ if( sp->type==NONTERMINAL ){
+ if( sp->rule==0 && sp!=lemp->errsym ){
+ ErrorMsg(lemp->filename,rp->line,"Nonterminal \"%s\" has no rules.",
+ sp->name);
+ lemp->errorcnt++;
+ }
+ for(newrp=sp->rule; newrp; newrp=newrp->nextlhs){
+ newcfp = Configlist_add(newrp,0);
+ for(i=dot+1; i<rp->nrhs; i++){
+ xsp = rp->rhs[i];
+ if( xsp->type==TERMINAL ){
+ SetAdd(newcfp->fws,xsp->index);
+ break;
+ }else if( xsp->type==MULTITERMINAL ){
+ int k;
+ for(k=0; k<xsp->nsubsym; k++){
+ SetAdd(newcfp->fws, xsp->subsym[k]->index);
+ }
+ break;
+ }else{
+ SetUnion(newcfp->fws,xsp->firstset);
+ if( xsp->lambda==B_FALSE ) break;
+ }
+ }
+ if( i==rp->nrhs ) Plink_add(&cfp->fplp,newcfp);
+ }
+ }
+ }
+ return;
+}
+
+/* Sort the configuration list */
+void Configlist_sort(){
+ current = (struct config *)msort((char *)current,(char **)&(current->next),Configcmp);
+ currentend = 0;
+ return;
+}
+
+/* Sort the basis configuration list */
+void Configlist_sortbasis(){
+ basis = (struct config *)msort((char *)current,(char **)&(current->bp),Configcmp);
+ basisend = 0;
+ return;
+}
+
+/* Return a pointer to the head of the configuration list and
+** reset the list */
+struct config *Configlist_return(){
+ struct config *old;
+ old = current;
+ current = 0;
+ currentend = 0;
+ return old;
+}
+
+/* Return a pointer to the head of the basis configuration list and
+** reset the basis list */
+struct config *Configlist_basis(){
+ struct config *old;
+ old = basis;
+ basis = 0;
+ basisend = 0;
+ return old;
+}
+
+/* Free all elements of the given configuration list */
+void Configlist_eat(cfp)
+struct config *cfp;
+{
+ struct config *nextcfp;
+ for(; cfp; cfp=nextcfp){
+ nextcfp = cfp->next;
+ assert( cfp->fplp==0 );
+ assert( cfp->bplp==0 );
+ if( cfp->fws ) SetFree(cfp->fws);
+ deleteconfig(cfp);
+ }
+ return;
+}
+/***************** From the file "error.c" *********************************/
+/*
+** Code for printing error messages.
+*/
+
+/* Find a good place to break "msg" so that its length is at least "min"
+** but no more than "max". Make the point as close to max as possible.
+*/
+static int findbreak(msg,min,max)
+char *msg;
+int min;
+int max;
+{
+ int i,spot;
+ char c;
+ for(i=spot=min; i<=max; i++){
+ c = msg[i];
+ if( c=='\t' ) msg[i] = ' ';
+ if( c=='\n' ){ msg[i] = ' '; spot = i; break; }
+ if( c==0 ){ spot = i; break; }
+ if( c=='-' && i<max-1 ) spot = i+1;
+ if( c==' ' ) spot = i;
+ }
+ return spot;
+}
+
+/*
+** The error message is split across multiple lines if necessary. The
+** splits occur at a space, if there is a space available near the end
+** of the line.
+*/
+#define ERRMSGSIZE 10000 /* Hope this is big enough. No way to error check */
+#define LINEWIDTH 79 /* Max width of any output line */
+#define PREFIXLIMIT 30 /* Max width of the prefix on each line */
+void ErrorMsg(const char *filename, int lineno, const char *format, ...){
+ char errmsg[ERRMSGSIZE];
+ char prefix[PREFIXLIMIT+10];
+ int errmsgsize;
+ int prefixsize;
+ int availablewidth;
+ va_list ap;
+ int end, restart, base;
+
+ va_start(ap, format);
+ /* Prepare a prefix to be prepended to every output line */
+ if( lineno>0 ){
+ sprintf(prefix,"%.*s:%d: ",PREFIXLIMIT-10,filename,lineno);
+ }else{
+ sprintf(prefix,"%.*s: ",PREFIXLIMIT-10,filename);
+ }
+ prefixsize = strlen(prefix);
+ availablewidth = LINEWIDTH - prefixsize;
+
+ /* Generate the error message */
+ vsprintf(errmsg,format,ap);
+ va_end(ap);
+ errmsgsize = strlen(errmsg);
+ /* Remove trailing '\n's from the error message. */
+ while( errmsgsize>0 && errmsg[errmsgsize-1]=='\n' ){
+ errmsg[--errmsgsize] = 0;
+ }
+
+ /* Print the error message */
+ base = 0;
+ while( errmsg[base]!=0 ){
+ end = restart = findbreak(&errmsg[base],0,availablewidth);
+ restart += base;
+ while( errmsg[restart]==' ' ) restart++;
+ fprintf(stdout,"%s%.*s\n",prefix,end,&errmsg[base]);
+ base = restart;
+ }
+}
+/**************** From the file "main.c" ************************************/
+/*
+** Main program file for the LEMON parser generator.
+*/
+
+/* Report an out-of-memory condition and abort. This function
+** is used mostly by the "MemoryCheck" macro in struct.h
+*/
+void memory_error(){
+ fprintf(stderr,"Out of memory. Aborting...\n");
+ exit(1);
+}
+
+static int nDefine = 0; /* Number of -D options on the command line */
+static char **azDefine = 0; /* Name of the -D macros */
+
+/* This routine is called with the argument to each -D command-line option.
+** Add the macro defined to the azDefine array.
+*/
+static void handle_D_option(char *z){
+ char **paz;
+ nDefine++;
+ azDefine = realloc(azDefine, sizeof(azDefine[0])*nDefine);
+ if( azDefine==0 ){
+ fprintf(stderr,"out of memory\n");
+ exit(1);
+ }
+ paz = &azDefine[nDefine-1];
+ *paz = malloc( strlen(z)+1 );
+ if( *paz==0 ){
+ fprintf(stderr,"out of memory\n");
+ exit(1);
+ }
+ strcpy(*paz, z);
+ for(z=*paz; *z && *z!='='; z++){}
+ *z = 0;
+}
+
+
+/* The main program. Parse the command line and do it... */
+int main(argc,argv)
+int argc;
+char **argv;
+{
+ static int version = 0;
+ static int rpflag = 0;
+ static int basisflag = 0;
+ static int compress = 0;
+ static int quiet = 0;
+ static int statistics = 0;
+ static int mhflag = 0;
+ static struct s_options options[] = {
+ {OPT_FLAG, "b", (char*)&basisflag, "Print only the basis in report."},
+ {OPT_FLAG, "c", (char*)&compress, "Don't compress the action table."},
+ {OPT_FSTR, "D", (char*)handle_D_option, "Define an %ifdef macro."},
+ {OPT_FLAG, "g", (char*)&rpflag, "Print grammar without actions."},
+ {OPT_FLAG, "m", (char*)&mhflag, "Output a makeheaders compatible file"},
+ {OPT_FLAG, "q", (char*)&quiet, "(Quiet) Don't print the report file."},
+ {OPT_FLAG, "s", (char*)&statistics,
+ "Print parser stats to standard output."},
+ {OPT_FLAG, "x", (char*)&version, "Print the version number."},
+ {OPT_FLAG,0,0,0}
+ };
+ int i;
+ struct lemon lem;
+
+ OptInit(argv,options,stderr);
+ if( version ){
+ printf("Lemon version 1.0\n");
+ exit(0);
+ }
+ if( OptNArgs()!=1 ){
+ fprintf(stderr,"Exactly one filename argument is required.\n");
+ exit(1);
+ }
+ memset(&lem, 0, sizeof(lem));
+ lem.errorcnt = 0;
+
+ /* Initialize the machine */
+ Strsafe_init();
+ Symbol_init();
+ State_init();
+ lem.argv0 = argv[0];
+ lem.filename = OptArg(0);
+ lem.basisflag = basisflag;
+ Symbol_new("$");
+ lem.errsym = Symbol_new("error");
+
+ /* Parse the input file */
+ Parse(&lem);
+ if( lem.errorcnt ) exit(lem.errorcnt);
+ if( lem.nrule==0 ){
+ fprintf(stderr,"Empty grammar.\n");
+ exit(1);
+ }
+
+ /* Count and index the symbols of the grammar */
+ lem.nsymbol = Symbol_count();
+ Symbol_new("{default}");
+ lem.symbols = Symbol_arrayof();
+ for(i=0; i<=lem.nsymbol; i++) lem.symbols[i]->index = i;
+ qsort(lem.symbols,lem.nsymbol+1,sizeof(struct symbol*),
+ (int(*)())Symbolcmpp);
+ for(i=0; i<=lem.nsymbol; i++) lem.symbols[i]->index = i;
+ for(i=1; isupper(lem.symbols[i]->name[0]); i++);
+ lem.nterminal = i;
+
+ /* Generate a reprint of the grammar, if requested on the command line */
+ if( rpflag ){
+ Reprint(&lem);
+ }else{
+ /* Initialize the size for all follow and first sets */
+ SetSize(lem.nterminal);
+
+ /* Find the precedence for every production rule (that has one) */
+ FindRulePrecedences(&lem);
+
+ /* Compute the lambda-nonterminals and the first-sets for every
+ ** nonterminal */
+ FindFirstSets(&lem);
+
+ /* Compute all LR(0) states. Also record follow-set propagation
+ ** links so that the follow-set can be computed later */
+ lem.nstate = 0;
+ FindStates(&lem);
+ lem.sorted = State_arrayof();
+
+ /* Tie up loose ends on the propagation links */
+ FindLinks(&lem);
+
+ /* Compute the follow set of every reducible configuration */
+ FindFollowSets(&lem);
+
+ /* Compute the action tables */
+ FindActions(&lem);
+
+ /* Compress the action tables */
+ if( compress==0 ) CompressTables(&lem);
+
+ /* Reorder and renumber the states so that states with fewer choices
+ ** occur at the end. */
+ ResortStates(&lem);
+
+ /* Generate a report of the parser generated. (the "y.output" file) */
+ if( !quiet ) ReportOutput(&lem);
+
+ /* Generate the source code for the parser */
+ ReportTable(&lem, mhflag);
+
+ /* Produce a header file for use by the scanner. (This step is
+ ** omitted if the "-m" option is used because makeheaders will
+ ** generate the file for us.) */
+ if( !mhflag ) ReportHeader(&lem);
+ }
+ if( statistics ){
+ printf("Parser statistics: %d terminals, %d nonterminals, %d rules\n",
+ lem.nterminal, lem.nsymbol - lem.nterminal, lem.nrule);
+ printf(" %d states, %d parser table entries, %d conflicts\n",
+ lem.nstate, lem.tablesize, lem.nconflict);
+ }
+ if( lem.nconflict ){
+ fprintf(stderr,"%d parsing conflicts.\n",lem.nconflict);
+ }
+ exit(lem.errorcnt + lem.nconflict);
+ return (lem.errorcnt + lem.nconflict);
+}
+/******************** From the file "msort.c" *******************************/
+/*
+** A generic merge-sort program.
+**
+** USAGE:
+** Let "ptr" be a pointer to some structure which is at the head of
+** a null-terminated list. Then to sort the list call:
+**
+** ptr = msort(ptr,&(ptr->next),cmpfnc);
+**
+** In the above, "cmpfnc" is a pointer to a function which compares
+** two instances of the structure and returns an integer, as in
+** strcmp. The second argument is a pointer to the pointer to the
+** second element of the linked list. This address is used to compute
+** the offset to the "next" field within the structure. The offset to
+** the "next" field must be constant for all structures in the list.
+**
+** The function returns a new pointer which is the head of the list
+** after sorting.
+**
+** ALGORITHM:
+** Merge-sort.
+*/
+
+/*
+** Return a pointer to the next structure in the linked list.
+*/
+#define NEXT(A) (*(char**)(((unsigned long)A)+offset))
+
+/*
+** Inputs:
+** a: A sorted, null-terminated linked list. (May be null).
+** b: A sorted, null-terminated linked list. (May be null).
+** cmp: A pointer to the comparison function.
+** offset: Offset in the structure to the "next" field.
+**
+** Return Value:
+** A pointer to the head of a sorted list containing the elements
+** of both a and b.
+**
+** Side effects:
+** The "next" pointers for elements in the lists a and b are
+** changed.
+*/
+static char *merge(a,b,cmp,offset)
+char *a;
+char *b;
+int (*cmp)();
+int offset;
+{
+ char *ptr, *head;
+
+ if( a==0 ){
+ head = b;
+ }else if( b==0 ){
+ head = a;
+ }else{
+ if( (*cmp)(a,b)<0 ){
+ ptr = a;
+ a = NEXT(a);
+ }else{
+ ptr = b;
+ b = NEXT(b);
+ }
+ head = ptr;
+ while( a && b ){
+ if( (*cmp)(a,b)<0 ){
+ NEXT(ptr) = a;
+ ptr = a;
+ a = NEXT(a);
+ }else{
+ NEXT(ptr) = b;
+ ptr = b;
+ b = NEXT(b);
+ }
+ }
+ if( a ) NEXT(ptr) = a;
+ else NEXT(ptr) = b;
+ }
+ return head;
+}
+
+/*
+** Inputs:
+** list: Pointer to a singly-linked list of structures.
+** next: Pointer to pointer to the second element of the list.
+** cmp: A comparison function.
+**
+** Return Value:
+** A pointer to the head of a sorted list containing the elements
+** originally in list.
+**
+** Side effects:
+** The "next" pointers for elements in list are changed.
+*/
+#define LISTSIZE 30
+char *msort(list,next,cmp)
+char *list;
+char **next;
+int (*cmp)();
+{
+ unsigned long offset;
+ char *ep;
+ char *set[LISTSIZE];
+ int i;
+ offset = (unsigned long)next - (unsigned long)list;
+ for(i=0; i<LISTSIZE; i++) set[i] = 0;
+ while( list ){
+ ep = list;
+ list = NEXT(list);
+ NEXT(ep) = 0;
+ for(i=0; i<LISTSIZE-1 && set[i]!=0; i++){
+ ep = merge(ep,set[i],cmp,offset);
+ set[i] = 0;
+ }
+ set[i] = ep;
+ }
+ ep = 0;
+ for(i=0; i<LISTSIZE; i++) if( set[i] ) ep = merge(ep,set[i],cmp,offset);
+ return ep;
+}
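+
+/* A usage sketch for msort(). The node type and comparison function below
+** are hypothetical; they do not appear elsewhere in this file and are shown
+** only to illustrate how the offset of the "next" field is supplied:
+**
+**   struct node {
+**     int value;
+**     struct node *next;
+**   };
+**
+**   static int node_cmp(struct node *a, struct node *b){
+**     return a->value - b->value;
+**   }
+**
+**   list = (struct node*)msort((char*)list, (char**)&(list->next),
+**                              (int(*)())node_cmp);
+**
+** Only the address &(list->next) matters: msort() subtracts the list pointer
+** from it to obtain the byte offset of the "next" field, so the same call
+** pattern works for any singly-linked structure.
+*/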
+/************************ From the file "option.c" **************************/
+static char **argv;
+static struct s_options *op;
+static FILE *errstream;
+
+#define ISOPT(X) ((X)[0]=='-'||(X)[0]=='+'||strchr((X),'=')!=0)
+
+/*
+** Print the command line with a caret pointing to the k-th character
+** of the n-th field.
+*/
+static void errline(n,k,err)
+int n;
+int k;
+FILE *err;
+{
+ int spcnt, i;
+ if( argv[0] ) fprintf(err,"%s",argv[0]);
+ spcnt = strlen(argv[0]) + 1;
+ for(i=1; i<n && argv[i]; i++){
+ fprintf(err," %s",argv[i]);
+ spcnt += strlen(argv[i])+1;
+ }
+ spcnt += k;
+ for(; argv[i]; i++) fprintf(err," %s",argv[i]);
+ if( spcnt<20 ){
+ fprintf(err,"\n%*s^-- here\n",spcnt,"");
+ }else{
+ fprintf(err,"\n%*shere --^\n",spcnt-7,"");
+ }
+}
+
+/*
+** Return the index of the N-th non-switch argument. Return -1
+** if N is out of range.
+*/
+static int argindex(n)
+int n;
+{
+ int i;
+ int dashdash = 0;
+ if( argv!=0 && *argv!=0 ){
+ for(i=1; argv[i]; i++){
+ if( dashdash || !ISOPT(argv[i]) ){
+ if( n==0 ) return i;
+ n--;
+ }
+ if( strcmp(argv[i],"--")==0 ) dashdash = 1;
+ }
+ }
+ return -1;
+}
+
+static char emsg[] = "Command line syntax error: ";
+
+/*
+** Process a flag command line argument.
+*/
+static int handleflags(i,err)
+int i;
+FILE *err;
+{
+ int v;
+ int errcnt = 0;
+ int j;
+ for(j=0; op[j].label; j++){
+ if( strncmp(&argv[i][1],op[j].label,strlen(op[j].label))==0 ) break;
+ }
+ v = argv[i][0]=='-' ? 1 : 0;
+ if( op[j].label==0 ){
+ if( err ){
+ fprintf(err,"%sundefined option.\n",emsg);
+ errline(i,1,err);
+ }
+ errcnt++;
+ }else if( op[j].type==OPT_FLAG ){
+ *((int*)op[j].arg) = v;
+ }else if( op[j].type==OPT_FFLAG ){
+ (*(void(*)())(op[j].arg))(v);
+ }else if( op[j].type==OPT_FSTR ){
+ (*(void(*)())(op[j].arg))(&argv[i][2]);
+ }else{
+ if( err ){
+ fprintf(err,"%smissing argument on switch.\n",emsg);
+ errline(i,1,err);
+ }
+ errcnt++;
+ }
+ return errcnt;
+}
+
+/*
+** Process a command line switch which has an argument.
+*/
+static int handleswitch(i,err)
+int i;
+FILE *err;
+{
+ int lv = 0;
+ double dv = 0.0;
+ char *sv = 0, *end;
+ char *cp;
+ int j;
+ int errcnt = 0;
+ cp = strchr(argv[i],'=');
+ assert( cp!=0 );
+ *cp = 0;
+ for(j=0; op[j].label; j++){
+ if( strcmp(argv[i],op[j].label)==0 ) break;
+ }
+ *cp = '=';
+ if( op[j].label==0 ){
+ if( err ){
+ fprintf(err,"%sundefined option.\n",emsg);
+ errline(i,0,err);
+ }
+ errcnt++;
+ }else{
+ cp++;
+ switch( op[j].type ){
+ case OPT_FLAG:
+ case OPT_FFLAG:
+ if( err ){
+ fprintf(err,"%soption requires an argument.\n",emsg);
+ errline(i,0,err);
+ }
+ errcnt++;
+ break;
+ case OPT_DBL:
+ case OPT_FDBL:
+ dv = strtod(cp,&end);
+ if( *end ){
+ if( err ){
+ fprintf(err,"%sillegal character in floating-point argument.\n",emsg);
+ errline(i,((unsigned long)end)-(unsigned long)argv[i],err);
+ }
+ errcnt++;
+ }
+ break;
+ case OPT_INT:
+ case OPT_FINT:
+ lv = strtol(cp,&end,0);
+ if( *end ){
+ if( err ){
+ fprintf(err,"%sillegal character in integer argument.\n",emsg);
+ errline(i,((unsigned long)end)-(unsigned long)argv[i],err);
+ }
+ errcnt++;
+ }
+ break;
+ case OPT_STR:
+ case OPT_FSTR:
+ sv = cp;
+ break;
+ }
+ switch( op[j].type ){
+ case OPT_FLAG:
+ case OPT_FFLAG:
+ break;
+ case OPT_DBL:
+ *(double*)(op[j].arg) = dv;
+ break;
+ case OPT_FDBL:
+ (*(void(*)())(op[j].arg))(dv);
+ break;
+ case OPT_INT:
+ *(int*)(op[j].arg) = lv;
+ break;
+ case OPT_FINT:
+ (*(void(*)())(op[j].arg))((int)lv);
+ break;
+ case OPT_STR:
+ *(char**)(op[j].arg) = sv;
+ break;
+ case OPT_FSTR:
+ (*(void(*)())(op[j].arg))(sv);
+ break;
+ }
+ }
+ return errcnt;
+}
+
+int OptInit(a,o,err)
+char **a;
+struct s_options *o;
+FILE *err;
+{
+ int errcnt = 0;
+ argv = a;
+ op = o;
+ errstream = err;
+ if( argv && *argv && op ){
+ int i;
+ for(i=1; argv[i]; i++){
+ if( argv[i][0]=='+' || argv[i][0]=='-' ){
+ errcnt += handleflags(i,err);
+ }else if( strchr(argv[i],'=') ){
+ errcnt += handleswitch(i,err);
+ }
+ }
+ }
+ if( errcnt>0 ){
+ fprintf(err,"Valid command line options for \"%s\" are:\n",*a);
+ OptPrint();
+ exit(1);
+ }
+ return 0;
+}
+
+int OptNArgs(){
+ int cnt = 0;
+ int dashdash = 0;
+ int i;
+ if( argv!=0 && argv[0]!=0 ){
+ for(i=1; argv[i]; i++){
+ if( dashdash || !ISOPT(argv[i]) ) cnt++;
+ if( strcmp(argv[i],"--")==0 ) dashdash = 1;
+ }
+ }
+ return cnt;
+}
+
+char *OptArg(n)
+int n;
+{
+ int i;
+ i = argindex(n);
+ return i>=0 ? argv[i] : 0;
+}
+
+void OptErr(n)
+int n;
+{
+ int i;
+ i = argindex(n);
+ if( i>=0 ) errline(i,0,errstream);
+}
+
+void OptPrint(){
+ int i;
+ int max, len;
+ max = 0;
+ for(i=0; op[i].label; i++){
+ len = strlen(op[i].label) + 1;
+ switch( op[i].type ){
+ case OPT_FLAG:
+ case OPT_FFLAG:
+ break;
+ case OPT_INT:
+ case OPT_FINT:
+ len += 9; /* length of "<integer>" */
+ break;
+ case OPT_DBL:
+ case OPT_FDBL:
+ len += 6; /* length of "<real>" */
+ break;
+ case OPT_STR:
+ case OPT_FSTR:
+ len += 8; /* length of "<string>" */
+ break;
+ }
+ if( len>max ) max = len;
+ }
+ for(i=0; op[i].label; i++){
+ switch( op[i].type ){
+ case OPT_FLAG:
+ case OPT_FFLAG:
+ fprintf(errstream," -%-*s %s\n",max,op[i].label,op[i].message);
+ break;
+ case OPT_INT:
+ case OPT_FINT:
+ fprintf(errstream," %s=<integer>%*s %s\n",op[i].label,
+ (int)(max-strlen(op[i].label)-9),"",op[i].message);
+ break;
+ case OPT_DBL:
+ case OPT_FDBL:
+ fprintf(errstream," %s=<real>%*s %s\n",op[i].label,
+ (int)(max-strlen(op[i].label)-6),"",op[i].message);
+ break;
+ case OPT_STR:
+ case OPT_FSTR:
+ fprintf(errstream," %s=<string>%*s %s\n",op[i].label,
+ (int)(max-strlen(op[i].label)-8),"",op[i].message);
+ break;
+ }
+ }
+}
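+
+/* A usage sketch for the option routines above. The field order in
+** s_options (type, label, value pointer, help message) is assumed from the
+** way the handlers above index op[]; the variable names are illustrative
+** only:
+**
+**   static int basisflag = 0;
+**   static char *outdir = 0;
+**   static struct s_options options[] = {
+**     {OPT_FLAG, "b", (char*)&basisflag, "Print only the basis in report."},
+**     {OPT_STR,  "d", (char*)&outdir,    "Output directory."},
+**     {OPT_FLAG, 0, 0, 0}
+**   };
+**
+**   OptInit(argv, options, stderr);
+**   if( OptNArgs()!=1 ){ OptPrint(); exit(1); }
+**   inputfile = OptArg(0);
+**
+** A leading '-' sets an OPT_FLAG value to 1, a leading '+' clears it, and
+** "d=DIR" routes through handleswitch() to store DIR in outdir.
+*/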
+/*********************** From the file "parse.c" ****************************/
+/*
+** Input file parser for the LEMON parser generator.
+*/
+
+/* The state of the parser */
+struct pstate {
+ char *filename; /* Name of the input file */
+ int tokenlineno; /* Linenumber at which current token starts */
+ int errorcnt; /* Number of errors so far */
+ char *tokenstart; /* Text of current token */
+ struct lemon *gp; /* Global state vector */
+ enum e_state {
+ INITIALIZE,
+ WAITING_FOR_DECL_OR_RULE,
+ WAITING_FOR_DECL_KEYWORD,
+ WAITING_FOR_DECL_ARG,
+ WAITING_FOR_PRECEDENCE_SYMBOL,
+ WAITING_FOR_ARROW,
+ IN_RHS,
+ LHS_ALIAS_1,
+ LHS_ALIAS_2,
+ LHS_ALIAS_3,
+ RHS_ALIAS_1,
+ RHS_ALIAS_2,
+ PRECEDENCE_MARK_1,
+ PRECEDENCE_MARK_2,
+ RESYNC_AFTER_RULE_ERROR,
+ RESYNC_AFTER_DECL_ERROR,
+ WAITING_FOR_DESTRUCTOR_SYMBOL,
+ WAITING_FOR_DATATYPE_SYMBOL,
+ WAITING_FOR_FALLBACK_ID,
+ WAITING_FOR_WILDCARD_ID
+ } state; /* The state of the parser */
+ struct symbol *fallback; /* The fallback token */
+ struct symbol *lhs; /* Left-hand side of current rule */
+ char *lhsalias; /* Alias for the LHS */
+ int nrhs; /* Number of right-hand side symbols seen */
+ struct symbol *rhs[MAXRHS]; /* RHS symbols */
+ char *alias[MAXRHS]; /* Aliases for each RHS symbol (or NULL) */
+ struct rule *prevrule; /* Previous rule parsed */
+ char *declkeyword; /* Keyword of a declaration */
+ char **declargslot; /* Where the declaration argument should be put */
+ int *decllnslot; /* Where the declaration linenumber is put */
+ enum e_assoc declassoc; /* Assign this association to decl arguments */
+ int preccounter; /* Assign this precedence to decl arguments */
+ struct rule *firstrule; /* Pointer to first rule in the grammar */
+ struct rule *lastrule; /* Pointer to the most recently parsed rule */
+};
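+
+/* The states above track progress through grammar text such as this typical
+** lemon fragment (shown only to indicate which tokens drive which states):
+**
+**   %left PLUS MINUS.
+**   %right UMINUS.
+**   expr(A) ::= expr(B) PLUS expr(C).  { A = B + C; }
+**   expr(A) ::= MINUS expr(B). [UMINUS]  { A = -B; }
+**
+** The %-keywords are handled by the WAITING_FOR_DECL_* states; the LHS,
+** aliases, "::=" and RHS symbols by the WAITING_FOR_ARROW, LHS_ALIAS_*,
+** IN_RHS and RHS_ALIAS_* states; and the bracketed precedence mark by
+** PRECEDENCE_MARK_1 and PRECEDENCE_MARK_2.
+*/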
+
+/* Parse a single token */
+static void parseonetoken(psp)
+struct pstate *psp;
+{
+ char *x;
+ x = Strsafe(psp->tokenstart); /* Save the token permanently */
+#if 0
+ printf("%s:%d: Token=[%s] state=%d\n",psp->filename,psp->tokenlineno,
+ x,psp->state);
+#endif
+ switch( psp->state ){
+ case INITIALIZE:
+ psp->prevrule = 0;
+ psp->preccounter = 0;
+ psp->firstrule = psp->lastrule = 0;
+ psp->gp->nrule = 0;
+ /* Fall thru to next case */
+ case WAITING_FOR_DECL_OR_RULE:
+ if( x[0]=='%' ){
+ psp->state = WAITING_FOR_DECL_KEYWORD;
+ }else if( islower(x[0]) ){
+ psp->lhs = Symbol_new(x);
+ psp->nrhs = 0;
+ psp->lhsalias = 0;
+ psp->state = WAITING_FOR_ARROW;
+ }else if( x[0]=='{' ){
+ if( psp->prevrule==0 ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+"There is not prior rule opon which to attach the code \
+fragment which begins on this line.");
+ psp->errorcnt++;
+ }else if( psp->prevrule->code!=0 ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+"Code fragment beginning on this line is not the first \
+to follow the previous rule.");
+ psp->errorcnt++;
+ }else{
+ psp->prevrule->line = psp->tokenlineno;
+ psp->prevrule->code = &x[1];
+ }
+ }else if( x[0]=='[' ){
+ psp->state = PRECEDENCE_MARK_1;
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Token \"%s\" should be either \"%%\" or a nonterminal name.",
+ x);
+ psp->errorcnt++;
+ }
+ break;
+ case PRECEDENCE_MARK_1:
+ if( !isupper(x[0]) ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "The precedence symbol must be a terminal.");
+ psp->errorcnt++;
+ }else if( psp->prevrule==0 ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "There is no prior rule to assign precedence \"[%s]\".",x);
+ psp->errorcnt++;
+ }else if( psp->prevrule->precsym!=0 ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+"Precedence mark on this line is not the first \
+to follow the previous rule.");
+ psp->errorcnt++;
+ }else{
+ psp->prevrule->precsym = Symbol_new(x);
+ }
+ psp->state = PRECEDENCE_MARK_2;
+ break;
+ case PRECEDENCE_MARK_2:
+ if( x[0]!=']' ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Missing \"]\" on precedence mark.");
+ psp->errorcnt++;
+ }
+ psp->state = WAITING_FOR_DECL_OR_RULE;
+ break;
+ case WAITING_FOR_ARROW:
+ if( x[0]==':' && x[1]==':' && x[2]=='=' ){
+ psp->state = IN_RHS;
+ }else if( x[0]=='(' ){
+ psp->state = LHS_ALIAS_1;
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Expected to see a \":\" following the LHS symbol \"%s\".",
+ psp->lhs->name);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_RULE_ERROR;
+ }
+ break;
+ case LHS_ALIAS_1:
+ if( isalpha(x[0]) ){
+ psp->lhsalias = x;
+ psp->state = LHS_ALIAS_2;
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "\"%s\" is not a valid alias for the LHS \"%s\"\n",
+ x,psp->lhs->name);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_RULE_ERROR;
+ }
+ break;
+ case LHS_ALIAS_2:
+ if( x[0]==')' ){
+ psp->state = LHS_ALIAS_3;
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Missing \")\" following LHS alias name \"%s\".",psp->lhsalias);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_RULE_ERROR;
+ }
+ break;
+ case LHS_ALIAS_3:
+ if( x[0]==':' && x[1]==':' && x[2]=='=' ){
+ psp->state = IN_RHS;
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Missing \"->\" following: \"%s(%s)\".",
+ psp->lhs->name,psp->lhsalias);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_RULE_ERROR;
+ }
+ break;
+ case IN_RHS:
+ if( x[0]=='.' ){
+ struct rule *rp;
+ rp = (struct rule *)malloc( sizeof(struct rule) +
+ sizeof(struct symbol*)*psp->nrhs + sizeof(char*)*psp->nrhs );
+ if( rp==0 ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Can't allocate enough memory for this rule.");
+ psp->errorcnt++;
+ psp->prevrule = 0;
+ }else{
+ int i;
+ rp->ruleline = psp->tokenlineno;
+ rp->rhs = (struct symbol**)&rp[1];
+ rp->rhsalias = (char**)&(rp->rhs[psp->nrhs]);
+ for(i=0; i<psp->nrhs; i++){
+ rp->rhs[i] = psp->rhs[i];
+ rp->rhsalias[i] = psp->alias[i];
+ }
+ rp->lhs = psp->lhs;
+ rp->lhsalias = psp->lhsalias;
+ rp->nrhs = psp->nrhs;
+ rp->code = 0;
+ rp->precsym = 0;
+ rp->index = psp->gp->nrule++;
+ rp->nextlhs = rp->lhs->rule;
+ rp->lhs->rule = rp;
+ rp->next = 0;
+ if( psp->firstrule==0 ){
+ psp->firstrule = psp->lastrule = rp;
+ }else{
+ psp->lastrule->next = rp;
+ psp->lastrule = rp;
+ }
+ psp->prevrule = rp;
+ }
+ psp->state = WAITING_FOR_DECL_OR_RULE;
+ }else if( isalpha(x[0]) ){
+ if( psp->nrhs>=MAXRHS ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Too many symbols on RHS or rule beginning at \"%s\".",
+ x);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_RULE_ERROR;
+ }else{
+ psp->rhs[psp->nrhs] = Symbol_new(x);
+ psp->alias[psp->nrhs] = 0;
+ psp->nrhs++;
+ }
+ }else if( (x[0]=='|' || x[0]=='/') && psp->nrhs>0 ){
+ struct symbol *msp = psp->rhs[psp->nrhs-1];
+ if( msp->type!=MULTITERMINAL ){
+ struct symbol *origsp = msp;
+ msp = malloc(sizeof(*msp));
+ memset(msp, 0, sizeof(*msp));
+ msp->type = MULTITERMINAL;
+ msp->nsubsym = 1;
+ msp->subsym = malloc(sizeof(struct symbol*));
+ msp->subsym[0] = origsp;
+ msp->name = origsp->name;
+ psp->rhs[psp->nrhs-1] = msp;
+ }
+ msp->nsubsym++;
+ msp->subsym = realloc(msp->subsym, sizeof(struct symbol*)*msp->nsubsym);
+ msp->subsym[msp->nsubsym-1] = Symbol_new(&x[1]);
+ if( islower(x[1]) || islower(msp->subsym[0]->name[0]) ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Cannot form a compound containing a non-terminal");
+ psp->errorcnt++;
+ }
+ }else if( x[0]=='(' && psp->nrhs>0 ){
+ psp->state = RHS_ALIAS_1;
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Illegal character on RHS of rule: \"%s\".",x);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_RULE_ERROR;
+ }
+ break;
+ case RHS_ALIAS_1:
+ if( isalpha(x[0]) ){
+ psp->alias[psp->nrhs-1] = x;
+ psp->state = RHS_ALIAS_2;
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "\"%s\" is not a valid alias for the RHS symbol \"%s\"\n",
+ x,psp->rhs[psp->nrhs-1]->name);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_RULE_ERROR;
+ }
+ break;
+ case RHS_ALIAS_2:
+ if( x[0]==')' ){
+ psp->state = IN_RHS;
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Missing \")\" following LHS alias name \"%s\".",psp->lhsalias);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_RULE_ERROR;
+ }
+ break;
+ case WAITING_FOR_DECL_KEYWORD:
+ if( isalpha(x[0]) ){
+ psp->declkeyword = x;
+ psp->declargslot = 0;
+ psp->decllnslot = 0;
+ psp->state = WAITING_FOR_DECL_ARG;
+ if( strcmp(x,"name")==0 ){
+ psp->declargslot = &(psp->gp->name);
+ }else if( strcmp(x,"include")==0 ){
+ psp->declargslot = &(psp->gp->include);
+ psp->decllnslot = &psp->gp->includeln;
+ }else if( strcmp(x,"code")==0 ){
+ psp->declargslot = &(psp->gp->extracode);
+ psp->decllnslot = &psp->gp->extracodeln;
+ }else if( strcmp(x,"token_destructor")==0 ){
+ psp->declargslot = &psp->gp->tokendest;
+ psp->decllnslot = &psp->gp->tokendestln;
+ }else if( strcmp(x,"default_destructor")==0 ){
+ psp->declargslot = &psp->gp->vardest;
+ psp->decllnslot = &psp->gp->vardestln;
+ }else if( strcmp(x,"token_prefix")==0 ){
+ psp->declargslot = &psp->gp->tokenprefix;
+ }else if( strcmp(x,"syntax_error")==0 ){
+ psp->declargslot = &(psp->gp->error);
+ psp->decllnslot = &psp->gp->errorln;
+ }else if( strcmp(x,"parse_accept")==0 ){
+ psp->declargslot = &(psp->gp->accept);
+ psp->decllnslot = &psp->gp->acceptln;
+ }else if( strcmp(x,"parse_failure")==0 ){
+ psp->declargslot = &(psp->gp->failure);
+ psp->decllnslot = &psp->gp->failureln;
+ }else if( strcmp(x,"stack_overflow")==0 ){
+ psp->declargslot = &(psp->gp->overflow);
+ psp->decllnslot = &psp->gp->overflowln;
+ }else if( strcmp(x,"extra_argument")==0 ){
+ psp->declargslot = &(psp->gp->arg);
+ }else if( strcmp(x,"token_type")==0 ){
+ psp->declargslot = &(psp->gp->tokentype);
+ }else if( strcmp(x,"default_type")==0 ){
+ psp->declargslot = &(psp->gp->vartype);
+ }else if( strcmp(x,"stack_size")==0 ){
+ psp->declargslot = &(psp->gp->stacksize);
+ }else if( strcmp(x,"start_symbol")==0 ){
+ psp->declargslot = &(psp->gp->start);
+ }else if( strcmp(x,"left")==0 ){
+ psp->preccounter++;
+ psp->declassoc = LEFT;
+ psp->state = WAITING_FOR_PRECEDENCE_SYMBOL;
+ }else if( strcmp(x,"right")==0 ){
+ psp->preccounter++;
+ psp->declassoc = RIGHT;
+ psp->state = WAITING_FOR_PRECEDENCE_SYMBOL;
+ }else if( strcmp(x,"nonassoc")==0 ){
+ psp->preccounter++;
+ psp->declassoc = NONE;
+ psp->state = WAITING_FOR_PRECEDENCE_SYMBOL;
+ }else if( strcmp(x,"destructor")==0 ){
+ psp->state = WAITING_FOR_DESTRUCTOR_SYMBOL;
+ }else if( strcmp(x,"type")==0 ){
+ psp->state = WAITING_FOR_DATATYPE_SYMBOL;
+ }else if( strcmp(x,"fallback")==0 ){
+ psp->fallback = 0;
+ psp->state = WAITING_FOR_FALLBACK_ID;
+ }else if( strcmp(x,"wildcard")==0 ){
+ psp->state = WAITING_FOR_WILDCARD_ID;
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Unknown declaration keyword: \"%%%s\".",x);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_DECL_ERROR;
+ }
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Illegal declaration keyword: \"%s\".",x);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_DECL_ERROR;
+ }
+ break;
+ case WAITING_FOR_DESTRUCTOR_SYMBOL:
+ if( !isalpha(x[0]) ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Symbol name missing after %destructor keyword");
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_DECL_ERROR;
+ }else{
+ struct symbol *sp = Symbol_new(x);
+ psp->declargslot = &sp->destructor;
+ psp->decllnslot = &sp->destructorln;
+ psp->state = WAITING_FOR_DECL_ARG;
+ }
+ break;
+ case WAITING_FOR_DATATYPE_SYMBOL:
+ if( !isalpha(x[0]) ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Symbol name missing after %destructor keyword");
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_DECL_ERROR;
+ }else{
+ struct symbol *sp = Symbol_new(x);
+ psp->declargslot = &sp->datatype;
+ psp->decllnslot = 0;
+ psp->state = WAITING_FOR_DECL_ARG;
+ }
+ break;
+ case WAITING_FOR_PRECEDENCE_SYMBOL:
+ if( x[0]=='.' ){
+ psp->state = WAITING_FOR_DECL_OR_RULE;
+ }else if( isupper(x[0]) ){
+ struct symbol *sp;
+ sp = Symbol_new(x);
+ if( sp->prec>=0 ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Symbol \"%s\" has already be given a precedence.",x);
+ psp->errorcnt++;
+ }else{
+ sp->prec = psp->preccounter;
+ sp->assoc = psp->declassoc;
+ }
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Can't assign a precedence to \"%s\".",x);
+ psp->errorcnt++;
+ }
+ break;
+ case WAITING_FOR_DECL_ARG:
+ if( (x[0]=='{' || x[0]=='\"' || isalnum(x[0])) ){
+ if( *(psp->declargslot)!=0 ){
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "The argument \"%s\" to declaration \"%%%s\" is not the first.",
+ x[0]=='\"' ? &x[1] : x,psp->declkeyword);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_DECL_ERROR;
+ }else{
+ *(psp->declargslot) = (x[0]=='\"' || x[0]=='{') ? &x[1] : x;
+ if( psp->decllnslot ) *psp->decllnslot = psp->tokenlineno;
+ psp->state = WAITING_FOR_DECL_OR_RULE;
+ }
+ }else{
+ ErrorMsg(psp->filename,psp->tokenlineno,
+ "Illegal argument to %%%s: %s",psp->declkeyword,x);
+ psp->errorcnt++;
+ psp->state = RESYNC_AFTER_DECL_ERROR;
+ }
+ break;
+ case WAITING_FOR_FALLBACK_ID:
+ if( x[0]=='.' ){
+ psp->state = WAITING_FOR_DECL_OR_RULE;
+ }else if( !isupper(x[0]) ){
+ ErrorMsg(psp->filename, psp->tokenlineno,
+ "%%fallback argument \"%s\" should be a token", x);
+ psp->errorcnt++;
+ }else{
+ struct symbol *sp = Symbol_new(x);
+ if( psp->fallback==0 ){
+ psp->fallback = sp;
+ }else if( sp->fallback ){
+ ErrorMsg(psp->filename, psp->tokenlineno,
+ "More than one fallback assigned to token %s", x);
+ psp->errorcnt++;
+ }else{
+ sp->fallback = psp->fallback;
+ psp->gp->has_fallback = 1;
+ }
+ }
+ break;
+ case WAITING_FOR_WILDCARD_ID:
+ if( x[0]=='.' ){
+ psp->state = WAITING_FOR_DECL_OR_RULE;
+ }else if( !isupper(x[0]) ){
+ ErrorMsg(psp->filename, psp->tokenlineno,
+ "%%wildcard argument \"%s\" should be a token", x);
+ psp->errorcnt++;
+ }else{
+ struct symbol *sp = Symbol_new(x);
+ if( psp->gp->wildcard==0 ){
+ psp->gp->wildcard = sp;
+ }else{
+ ErrorMsg(psp->filename, psp->tokenlineno,
+ "Extra wildcard to token: %s", x);
+ psp->errorcnt++;
+ }
+ }
+ break;
+ case RESYNC_AFTER_RULE_ERROR:
+/* if( x[0]=='.' ) psp->state = WAITING_FOR_DECL_OR_RULE;
+** break; */
+ case RESYNC_AFTER_DECL_ERROR:
+ if( x[0]=='.' ) psp->state = WAITING_FOR_DECL_OR_RULE;
+ if( x[0]=='%' ) psp->state = WAITING_FOR_DECL_KEYWORD;
+ break;
+ }
+}
+
+/* Run the preprocessor over the input file text. The global variables
+** azDefine[0] through azDefine[nDefine-1] contain the names of all defined
+** macros. This routine looks for "%ifdef" and "%ifndef" and "%endif" and
+** comments them out. Text in between is also commented out as appropriate.
+*/
+static void preprocess_input(char *z){
+ int i, j, k, n;
+ int exclude = 0;
+ int start;
+ int lineno = 1;
+ int start_lineno;
+ for(i=0; z[i]; i++){
+ if( z[i]=='\n' ) lineno++;
+ if( z[i]!='%' || (i>0 && z[i-1]!='\n') ) continue;
+ if( strncmp(&z[i],"%endif",6)==0 && isspace(z[i+6]) ){
+ if( exclude ){
+ exclude--;
+ if( exclude==0 ){
+ for(j=start; j<i; j++) if( z[j]!='\n' ) z[j] = ' ';
+ }
+ }
+ for(j=i; z[j] && z[j]!='\n'; j++) z[j] = ' ';
+ }else if( (strncmp(&z[i],"%ifdef",6)==0 && isspace(z[i+6]))
+ || (strncmp(&z[i],"%ifndef",7)==0 && isspace(z[i+7])) ){
+ if( exclude ){
+ exclude++;
+ }else{
+ for(j=i+7; isspace(z[j]); j++){}
+ for(n=0; z[j+n] && !isspace(z[j+n]); n++){}
+ exclude = 1;
+ for(k=0; k<nDefine; k++){
+ if( strncmp(azDefine[k],&z[j],n)==0 && strlen(azDefine[k])==n ){
+ exclude = 0;
+ break;
+ }
+ }
+ if( z[i+3]=='n' ) exclude = !exclude;
+ if( exclude ){
+ start = i;
+ start_lineno = lineno;
+ }
+ }
+ for(j=i; z[j] && z[j]!='\n'; j++) z[j] = ' ';
+ }
+ }
+ if( exclude ){
+ fprintf(stderr,"unterminated %%ifdef starting on line %d\n", start_lineno);
+ exit(1);
+ }
+}
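+
+/* For example, when azDefine[] contains the (hypothetical) name "SMALL" and
+** the input holds
+**
+**   %ifdef SMALL
+**   cmd ::= SHRINK.
+**   %endif
+**
+** only the %ifdef and %endif lines are blanked out and the rule survives.
+** If "SMALL" were not defined (or the block used %ifndef SMALL), every
+** non-newline character of the block would be overwritten with spaces, so
+** line numbers reported by later passes remain correct either way.
+*/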
+
+/* In spite of its name, this function is really a scanner. It reads
+** in the entire input file (all at once) then tokenizes it. Each
+** token is passed to the function "parseonetoken" which builds all
+** the appropriate data structures in the global state vector "gp".
+*/
+void Parse(gp)
+struct lemon *gp;
+{
+ struct pstate ps;
+ FILE *fp;
+ char *filebuf;
+ int filesize;
+ int lineno;
+ int c;
+ char *cp, *nextcp;
+ int startline = 0;
+
+ ps.gp = gp;
+ ps.filename = gp->filename;
+ ps.errorcnt = 0;
+ ps.state = INITIALIZE;
+
+ /* Begin by reading the input file */
+ fp = fopen(ps.filename,"rb");
+ if( fp==0 ){
+ ErrorMsg(ps.filename,0,"Can't open this file for reading.");
+ gp->errorcnt++;
+ return;
+ }
+ fseek(fp,0,2);
+ filesize = ftell(fp);
+ rewind(fp);
+ filebuf = (char *)malloc( filesize+1 );
+ if( filebuf==0 ){
+ ErrorMsg(ps.filename,0,"Can't allocate %d of memory to hold this file.",
+ filesize+1);
+ gp->errorcnt++;
+ return;
+ }
+ if( fread(filebuf,1,filesize,fp)!=filesize ){
+ ErrorMsg(ps.filename,0,"Can't read in all %d bytes of this file.",
+ filesize);
+ free(filebuf);
+ gp->errorcnt++;
+ return;
+ }
+ fclose(fp);
+ filebuf[filesize] = 0;
+
+ /* Make an initial pass through the file to handle %ifdef and %ifndef */
+ preprocess_input(filebuf);
+
+ /* Now scan the text of the input file */
+ lineno = 1;
+ for(cp=filebuf; (c= *cp)!=0; ){
+ if( c=='\n' ) lineno++; /* Keep track of the line number */
+ if( isspace(c) ){ cp++; continue; } /* Skip all white space */
+ if( c=='/' && cp[1]=='/' ){ /* Skip C++ style comments */
+ cp+=2;
+ while( (c= *cp)!=0 && c!='\n' ) cp++;
+ continue;
+ }
+ if( c=='/' && cp[1]=='*' ){ /* Skip C style comments */
+ cp+=2;
+ while( (c= *cp)!=0 && (c!='/' || cp[-1]!='*') ){
+ if( c=='\n' ) lineno++;
+ cp++;
+ }
+ if( c ) cp++;
+ continue;
+ }
+ ps.tokenstart = cp; /* Mark the beginning of the token */
+ ps.tokenlineno = lineno; /* Linenumber on which token begins */
+ if( c=='\"' ){ /* String literals */
+ cp++;
+ while( (c= *cp)!=0 && c!='\"' ){
+ if( c=='\n' ) lineno++;
+ cp++;
+ }
+ if( c==0 ){
+ ErrorMsg(ps.filename,ps.tokenlineno,
+"String starting on this line is not terminated before the end of the file.");
+ ps.errorcnt++;
+ nextcp = cp;
+ }else{
+ nextcp = cp+1;
+ }
+ }else if( c=='{' ){ /* A block of C code */
+ int level;
+ cp++;
+ for(level=1; (c= *cp)!=0 && (level>1 || c!='}'); cp++){
+ if( c=='\n' ) lineno++;
+ else if( c=='{' ) level++;
+ else if( c=='}' ) level--;
+ else if( c=='/' && cp[1]=='*' ){ /* Skip comments */
+ int prevc;
+ cp = &cp[2];
+ prevc = 0;
+ while( (c= *cp)!=0 && (c!='/' || prevc!='*') ){
+ if( c=='\n' ) lineno++;
+ prevc = c;
+ cp++;
+ }
+ }else if( c=='/' && cp[1]=='/' ){ /* Skip C++ style comments too */
+ cp = &cp[2];
+ while( (c= *cp)!=0 && c!='\n' ) cp++;
+ if( c ) lineno++;
+ }else if( c=='\'' || c=='\"' ){ /* String a character literals */
+ int startchar, prevc;
+ startchar = c;
+ prevc = 0;
+ for(cp++; (c= *cp)!=0 && (c!=startchar || prevc=='\\'); cp++){
+ if( c=='\n' ) lineno++;
+ if( prevc=='\\' ) prevc = 0;
+ else prevc = c;
+ }
+ }
+ }
+ if( c==0 ){
+ ErrorMsg(ps.filename,ps.tokenlineno,
+"C code starting on this line is not terminated before the end of the file.");
+ ps.errorcnt++;
+ nextcp = cp;
+ }else{
+ nextcp = cp+1;
+ }
+ }else if( isalnum(c) ){ /* Identifiers */
+ while( (c= *cp)!=0 && (isalnum(c) || c=='_') ) cp++;
+ nextcp = cp;
+ }else if( c==':' && cp[1]==':' && cp[2]=='=' ){ /* The operator "::=" */
+ cp += 3;
+ nextcp = cp;
+ }else if( (c=='/' || c=='|') && isalpha(cp[1]) ){
+ cp += 2;
+ while( (c = *cp)!=0 && (isalnum(c) || c=='_') ) cp++;
+ nextcp = cp;
+ }else{ /* All other (one character) operators */
+ cp++;
+ nextcp = cp;
+ }
+ c = *cp;
+ *cp = 0; /* Null terminate the token */
+ parseonetoken(&ps); /* Parse the token */
+ *cp = c; /* Restore the buffer */
+ cp = nextcp;
+ }
+ free(filebuf); /* Release the buffer after parsing */
+ gp->rule = ps.firstrule;
+ gp->errorcnt = ps.errorcnt;
+}
+/*************************** From the file "plink.c" *********************/
+/*
+** Routines processing configuration follow-set propagation links
+** in the LEMON parser generator.
+*/
+static struct plink *plink_freelist = 0;
+
+/* Allocate a new plink */
+struct plink *Plink_new(){
+ struct plink *new;
+
+ if( plink_freelist==0 ){
+ int i;
+ int amt = 100;
+ plink_freelist = (struct plink *)malloc( sizeof(struct plink)*amt );
+ if( plink_freelist==0 ){
+ fprintf(stderr,
+ "Unable to allocate memory for a new follow-set propagation link.\n");
+ exit(1);
+ }
+ for(i=0; i<amt-1; i++) plink_freelist[i].next = &plink_freelist[i+1];
+ plink_freelist[amt-1].next = 0;
+ }
+ new = plink_freelist;
+ plink_freelist = plink_freelist->next;
+ return new;
+}
+
+/* Add a plink to a plink list */
+void Plink_add(plpp,cfp)
+struct plink **plpp;
+struct config *cfp;
+{
+ struct plink *new;
+ new = Plink_new();
+ new->next = *plpp;
+ *plpp = new;
+ new->cfp = cfp;
+}
+
+/* Transfer every plink on the list "from" to the list "to" */
+void Plink_copy(to,from)
+struct plink **to;
+struct plink *from;
+{
+ struct plink *nextpl;
+ while( from ){
+ nextpl = from->next;
+ from->next = *to;
+ *to = from;
+ from = nextpl;
+ }
+}
+
+/* Delete every plink on the list */
+void Plink_delete(plp)
+struct plink *plp;
+{
+ struct plink *nextpl;
+
+ while( plp ){
+ nextpl = plp->next;
+ plp->next = plink_freelist;
+ plink_freelist = plp;
+ plp = nextpl;
+ }
+}
+/*********************** From the file "report.c" **************************/
+/*
+** Procedures for generating reports and tables in the LEMON parser generator.
+*/
+
+/* Generate a filename with the given suffix. Space to hold the
+** name comes from malloc() and must be freed by the calling
+** function.
+*/
+PRIVATE char *file_makename(lemp,suffix)
+struct lemon *lemp;
+char *suffix;
+{
+ char *name;
+ char *cp;
+
+ name = malloc( strlen(lemp->filename) + strlen(suffix) + 5 );
+ if( name==0 ){
+ fprintf(stderr,"Can't allocate space for a filename.\n");
+ exit(1);
+ }
+ strcpy(name,lemp->filename);
+ cp = strrchr(name,'.');
+ if( cp ) *cp = 0;
+ strcat(name,suffix);
+ return name;
+}
+
+/* Open a file with a name based on the name of the input file,
+** but with a different (specified) suffix, and return a pointer
+** to the stream */
+PRIVATE FILE *file_open(lemp,suffix,mode)
+struct lemon *lemp;
+char *suffix;
+char *mode;
+{
+ FILE *fp;
+
+ if( lemp->outname ) free(lemp->outname);
+ lemp->outname = file_makename(lemp, suffix);
+ fp = fopen(lemp->outname,mode);
+ if( fp==0 && *mode=='w' ){
+ fprintf(stderr,"Can't open file \"%s\".\n",lemp->outname);
+ lemp->errorcnt++;
+ return 0;
+ }
+ return fp;
+}
+
+/* Duplicate the input file without comments and without actions
+** on rules */
+void Reprint(lemp)
+struct lemon *lemp;
+{
+ struct rule *rp;
+ struct symbol *sp;
+ int i, j, maxlen, len, ncolumns, skip;
+ printf("// Reprint of input file \"%s\".\n// Symbols:\n",lemp->filename);
+ maxlen = 10;
+ for(i=0; i<lemp->nsymbol; i++){
+ sp = lemp->symbols[i];
+ len = strlen(sp->name);
+ if( len>maxlen ) maxlen = len;
+ }
+ ncolumns = 76/(maxlen+5);
+ if( ncolumns<1 ) ncolumns = 1;
+ skip = (lemp->nsymbol + ncolumns - 1)/ncolumns;
+ for(i=0; i<skip; i++){
+ printf("//");
+ for(j=i; j<lemp->nsymbol; j+=skip){
+ sp = lemp->symbols[j];
+ assert( sp->index==j );
+ printf(" %3d %-*.*s",j,maxlen,maxlen,sp->name);
+ }
+ printf("\n");
+ }
+ for(rp=lemp->rule; rp; rp=rp->next){
+ printf("%s",rp->lhs->name);
+ /* if( rp->lhsalias ) printf("(%s)",rp->lhsalias); */
+ printf(" ::=");
+ for(i=0; i<rp->nrhs; i++){
+ sp = rp->rhs[i];
+ printf(" %s", sp->name);
+ if( sp->type==MULTITERMINAL ){
+ for(j=1; j<sp->nsubsym; j++){
+ printf("|%s", sp->subsym[j]->name);
+ }
+ }
+ /* if( rp->rhsalias[i] ) printf("(%s)",rp->rhsalias[i]); */
+ }
+ printf(".");
+ if( rp->precsym ) printf(" [%s]",rp->precsym->name);
+ /* if( rp->code ) printf("\n %s",rp->code); */
+ printf("\n");
+ }
+}
+
+void ConfigPrint(fp,cfp)
+FILE *fp;
+struct config *cfp;
+{
+ struct rule *rp;
+ struct symbol *sp;
+ int i, j;
+ rp = cfp->rp;
+ fprintf(fp,"%s ::=",rp->lhs->name);
+ for(i=0; i<=rp->nrhs; i++){
+ if( i==cfp->dot ) fprintf(fp," *");
+ if( i==rp->nrhs ) break;
+ sp = rp->rhs[i];
+ fprintf(fp," %s", sp->name);
+ if( sp->type==MULTITERMINAL ){
+ for(j=1; j<sp->nsubsym; j++){
+ fprintf(fp,"|%s",sp->subsym[j]->name);
+ }
+ }
+ }
+}
+
+/* #define TEST */
+#if 0
+/* Print a set */
+PRIVATE void SetPrint(out,set,lemp)
+FILE *out;
+char *set;
+struct lemon *lemp;
+{
+ int i;
+ char *spacer;
+ spacer = "";
+ fprintf(out,"%12s[","");
+ for(i=0; i<lemp->nterminal; i++){
+ if( SetFind(set,i) ){
+ fprintf(out,"%s%s",spacer,lemp->symbols[i]->name);
+ spacer = " ";
+ }
+ }
+ fprintf(out,"]\n");
+}
+
+/* Print a plink chain */
+PRIVATE void PlinkPrint(out,plp,tag)
+FILE *out;
+struct plink *plp;
+char *tag;
+{
+ while( plp ){
+ fprintf(out,"%12s%s (state %2d) ","",tag,plp->cfp->stp->statenum);
+ ConfigPrint(out,plp->cfp);
+ fprintf(out,"\n");
+ plp = plp->next;
+ }
+}
+#endif
+
+/* Print an action to the given file descriptor. Return FALSE if
+** nothing was actually printed.
+*/
+int PrintAction(struct action *ap, FILE *fp, int indent){
+ int result = 1;
+ switch( ap->type ){
+ case SHIFT:
+ fprintf(fp,"%*s shift %d",indent,ap->sp->name,ap->x.stp->statenum);
+ break;
+ case REDUCE:
+ fprintf(fp,"%*s reduce %d",indent,ap->sp->name,ap->x.rp->index);
+ break;
+ case ACCEPT:
+ fprintf(fp,"%*s accept",indent,ap->sp->name);
+ break;
+ case ERROR:
+ fprintf(fp,"%*s error",indent,ap->sp->name);
+ break;
+ case CONFLICT:
+ fprintf(fp,"%*s reduce %-3d ** Parsing conflict **",
+ indent,ap->sp->name,ap->x.rp->index);
+ break;
+ case SH_RESOLVED:
+ case RD_RESOLVED:
+ case NOT_USED:
+ result = 0;
+ break;
+ }
+ return result;
+}
+
+/* Generate the "y.output" log file */
+void ReportOutput(lemp)
+struct lemon *lemp;
+{
+ int i;
+ struct state *stp;
+ struct config *cfp;
+ struct action *ap;
+ FILE *fp;
+
+ fp = file_open(lemp,".out","wb");
+ if( fp==0 ) return;
+ fprintf(fp," \b");
+ for(i=0; i<lemp->nstate; i++){
+ stp = lemp->sorted[i];
+ fprintf(fp,"State %d:\n",stp->statenum);
+ if( lemp->basisflag ) cfp=stp->bp;
+ else cfp=stp->cfp;
+ while( cfp ){
+ char buf[20];
+ if( cfp->dot==cfp->rp->nrhs ){
+ sprintf(buf,"(%d)",cfp->rp->index);
+ fprintf(fp," %5s ",buf);
+ }else{
+ fprintf(fp," ");
+ }
+ ConfigPrint(fp,cfp);
+ fprintf(fp,"\n");
+#if 0
+ SetPrint(fp,cfp->fws,lemp);
+ PlinkPrint(fp,cfp->fplp,"To ");
+ PlinkPrint(fp,cfp->bplp,"From");
+#endif
+ if( lemp->basisflag ) cfp=cfp->bp;
+ else cfp=cfp->next;
+ }
+ fprintf(fp,"\n");
+ for(ap=stp->ap; ap; ap=ap->next){
+ if( PrintAction(ap,fp,30) ) fprintf(fp,"\n");
+ }
+ fprintf(fp,"\n");
+ }
+ fclose(fp);
+ return;
+}
+
+/* Search for the file "name" which is in the same directory as
+** the executable */
+PRIVATE char *pathsearch(argv0,name,modemask)
+char *argv0;
+char *name;
+int modemask;
+{
+ char *pathlist;
+ char *path,*cp;
+ char c;
+ extern int access();
+
+#ifdef __WIN32__
+ cp = strrchr(argv0,'\\');
+#else
+ cp = strrchr(argv0,'/');
+#endif
+ if( cp ){
+ c = *cp;
+ *cp = 0;
+ path = (char *)malloc( strlen(argv0) + strlen(name) + 2 );
+ if( path ) sprintf(path,"%s/%s",argv0,name);
+ *cp = c;
+ }else{
+ extern char *getenv();
+ pathlist = getenv("PATH");
+ if( pathlist==0 ) pathlist = ".:/bin:/usr/bin";
+ path = (char *)malloc( strlen(pathlist)+strlen(name)+2 );
+ if( path!=0 ){
+ while( *pathlist ){
+ cp = strchr(pathlist,':');
+ if( cp==0 ) cp = &pathlist[strlen(pathlist)];
+ c = *cp;
+ *cp = 0;
+ sprintf(path,"%s/%s",pathlist,name);
+ *cp = c;
+ if( c==0 ) pathlist = "";
+ else pathlist = &cp[1];
+ if( access(path,modemask)==0 ) break;
+ }
+ }
+ }
+ return path;
+}
+
+/* Given an action, compute the integer value for that action
+** which is to be put in the action table of the generated machine.
+** Return negative if no action should be generated.
+*/
+PRIVATE int compute_action(lemp,ap)
+struct lemon *lemp;
+struct action *ap;
+{
+ int act;
+ switch( ap->type ){
+ case SHIFT: act = ap->x.stp->statenum; break;
+ case REDUCE: act = ap->x.rp->index + lemp->nstate; break;
+ case ERROR: act = lemp->nstate + lemp->nrule; break;
+ case ACCEPT: act = lemp->nstate + lemp->nrule + 1; break;
+ default: act = -1; break;
+ }
+ return act;
+}
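+
+/* For example, with lemp->nstate==100 and lemp->nrule==20:
+**
+**   shift to state 57   ->  57
+**   reduce by rule 3    -> 103   (nstate + rule index)
+**   error               -> 120   (nstate + nrule)
+**   accept              -> 121   (nstate + nrule + 1)
+**
+** Any other action type yields -1, meaning no table entry is generated.
+*/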
+
+#define LINESIZE 1000
+/* The next cluster of routines is for reading the template file
+** and writing the results to the generated parser */
+/* The first function transfers data from "in" to "out" until
+** a line is seen which begins with "%%". The line number is
+** tracked.
+**
+** If name!=0, then any word that begins with "Parse" is changed to
+** begin with *name instead.
+*/
+PRIVATE void tplt_xfer(name,in,out,lineno)
+char *name;
+FILE *in;
+FILE *out;
+int *lineno;
+{
+ int i, iStart;
+ char line[LINESIZE];
+ while( fgets(line,LINESIZE,in) && (line[0]!='%' || line[1]!='%') ){
+ (*lineno)++;
+ iStart = 0;
+ if( name ){
+ for(i=0; line[i]; i++){
+ if( line[i]=='P' && strncmp(&line[i],"Parse",5)==0
+ && (i==0 || !isalpha(line[i-1]))
+ ){
+ if( i>iStart ) fprintf(out,"%.*s",i-iStart,&line[iStart]);
+ fprintf(out,"%s",name);
+ i += 4;
+ iStart = i+1;
+ }
+ }
+ }
+ fprintf(out,"%s",&line[iStart]);
+ }
+}
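+
+/* For example, with name=="Sql" a template line such as
+**
+**   void *ParseAlloc(void *(*mallocProc)(size_t));
+**
+** is written to the output as
+**
+**   void *SqlAlloc(void *(*mallocProc)(size_t));
+**
+** whereas a word like "yyParser" is left alone because the character just
+** before "Parse" is alphabetic.
+*/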
+
+/* The next function finds the template file and opens it, returning
+** a pointer to the opened file. */
+PRIVATE FILE *tplt_open(lemp)
+struct lemon *lemp;
+{
+ static char templatename[] = "lempar.c";
+ char buf[1000];
+ FILE *in;
+ char *tpltname;
+ char *cp;
+
+ cp = strrchr(lemp->filename,'.');
+ if( cp ){
+ sprintf(buf,"%.*s.lt",(int)(cp-lemp->filename),lemp->filename);
+ }else{
+ sprintf(buf,"%s.lt",lemp->filename);
+ }
+ if( access(buf,004)==0 ){
+ tpltname = buf;
+ }else if( access(templatename,004)==0 ){
+ tpltname = templatename;
+ }else{
+ tpltname = pathsearch(lemp->argv0,templatename,0);
+ }
+ if( tpltname==0 ){
+ fprintf(stderr,"Can't find the parser driver template file \"%s\".\n",
+ templatename);
+ lemp->errorcnt++;
+ return 0;
+ }
+ in = fopen(tpltname,"rb");
+ if( in==0 ){
+ fprintf(stderr,"Can't open the template file \"%s\".\n",templatename);
+ lemp->errorcnt++;
+ return 0;
+ }
+ return in;
+}
+
+/* Print a #line directive line to the output file. */
+PRIVATE void tplt_linedir(out,lineno,filename)
+FILE *out;
+int lineno;
+char *filename;
+{
+ fprintf(out,"#line %d \"",lineno);
+ while( *filename ){
+ if( *filename == '\\' ) putc('\\',out);
+ putc(*filename,out);
+ filename++;
+ }
+ fprintf(out,"\"\n");
+}
+
+/* Print a string to the file and keep the linenumber up to date */
+PRIVATE void tplt_print(out,lemp,str,strln,lineno)
+FILE *out;
+struct lemon *lemp;
+char *str;
+int strln;
+int *lineno;
+{
+ if( str==0 ) return;
+ tplt_linedir(out,strln,lemp->filename);
+ (*lineno)++;
+ while( *str ){
+ if( *str=='\n' ) (*lineno)++;
+ putc(*str,out);
+ str++;
+ }
+ if( str[-1]!='\n' ){
+ putc('\n',out);
+ (*lineno)++;
+ }
+ tplt_linedir(out,*lineno+2,lemp->outname);
+ (*lineno)+=2;
+ return;
+}
+
+/*
+** The following routine emits code for the destructor for the
+** symbol sp
+*/
+void emit_destructor_code(out,sp,lemp,lineno)
+FILE *out;
+struct symbol *sp;
+struct lemon *lemp;
+int *lineno;
+{
+ char *cp = 0;
+
+ int linecnt = 0;
+ if( sp->type==TERMINAL ){
+ cp = lemp->tokendest;
+ if( cp==0 ) return;
+ tplt_linedir(out,lemp->tokendestln,lemp->filename);
+ fprintf(out,"{");
+ }else if( sp->destructor ){
+ cp = sp->destructor;
+ tplt_linedir(out,sp->destructorln,lemp->filename);
+ fprintf(out,"{");
+ }else if( lemp->vardest ){
+ cp = lemp->vardest;
+ if( cp==0 ) return;
+ tplt_linedir(out,lemp->vardestln,lemp->filename);
+ fprintf(out,"{");
+ }else{
+ assert( 0 ); /* Cannot happen */
+ }
+ for(; *cp; cp++){
+ if( *cp=='$' && cp[1]=='$' ){
+ fprintf(out,"(yypminor->yy%d)",sp->dtnum);
+ cp++;
+ continue;
+ }
+ if( *cp=='\n' ) linecnt++;
+ fputc(*cp,out);
+ }
+ (*lineno) += 3 + linecnt;
+ fprintf(out,"}\n");
+ tplt_linedir(out,*lineno,lemp->outname);
+ return;
+}
+
+/*
+** Return TRUE (non-zero) if the given symbol has a destructor.
+*/
+int has_destructor(sp, lemp)
+struct symbol *sp;
+struct lemon *lemp;
+{
+ int ret;
+ if( sp->type==TERMINAL ){
+ ret = lemp->tokendest!=0;
+ }else{
+ ret = lemp->vardest!=0 || sp->destructor!=0;
+ }
+ return ret;
+}
+
+/*
+** Append text to a dynamically allocated string. If zText is 0 then
+** reset the string to be empty again. Always return the complete text
+** of the string (which is overwritten with each call).
+**
+** n bytes of zText are stored. If n==0 then all of zText up to the first
+** \000 terminator is stored. zText can contain up to two instances of
+** %d. The values of p1 and p2 are written into the first and second
+** %d.
+**
+** If n==-1, then the previous character is overwritten.
+*/
+PRIVATE char *append_str(char *zText, int n, int p1, int p2){
+ static char *z = 0;
+ static int alloced = 0;
+ static int used = 0;
+ int c;
+ char zInt[40];
+
+ if( zText==0 ){
+ used = 0;
+ return z;
+ }
+ if( n<=0 ){
+ if( n<0 ){
+ used += n;
+ assert( used>=0 );
+ }
+ n = strlen(zText);
+ }
+ if( n+sizeof(zInt)*2+used >= alloced ){
+ alloced = n + sizeof(zInt)*2 + used + 200;
+ z = realloc(z, alloced);
+ }
+ if( z==0 ) return "";
+ while( n-- > 0 ){
+ c = *(zText++);
+ if( c=='%' && zText[0]=='d' ){
+ sprintf(zInt, "%d", p1);
+ p1 = p2;
+ strcpy(&z[used], zInt);
+ used += strlen(&z[used]);
+ zText++;
+ n--;
+ }else{
+ z[used++] = c;
+ }
+ }
+ z[used] = 0;
+ return z;
+}
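+
+/* For example, the sequence of calls
+**
+**   append_str(0,0,0,0);                          (reset the buffer)
+**   append_str("yymsp[%d].minor.yy%d",0,-2,5);    (append with substitution)
+**   z = append_str(0,0,0,0);                      (fetch the result)
+**
+** leaves z pointing at the text "yymsp[-2].minor.yy5"; p1 fills the first
+** %d and p2 the second.
+*/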
+
+/*
+** zCode is a string that is the action associated with a rule. Expand
+** the symbols in this string so that they refer to elements of the parser
+** stack.
+*/
+PRIVATE void translate_code(struct lemon *lemp, struct rule *rp){
+ char *cp, *xp;
+ int i;
+ char lhsused = 0; /* True if the LHS element has been used */
+ char used[MAXRHS]; /* True for each RHS element which is used */
+
+ for(i=0; i<rp->nrhs; i++) used[i] = 0;
+ lhsused = 0;
+
+ append_str(0,0,0,0);
+ for(cp=rp->code; *cp; cp++){
+ if( isalpha(*cp) && (cp==rp->code || (!isalnum(cp[-1]) && cp[-1]!='_')) ){
+ char saved;
+ for(xp= &cp[1]; isalnum(*xp) || *xp=='_'; xp++);
+ saved = *xp;
+ *xp = 0;
+ if( rp->lhsalias && strcmp(cp,rp->lhsalias)==0 ){
+ append_str("yygotominor.yy%d",0,rp->lhs->dtnum,0);
+ cp = xp;
+ lhsused = 1;
+ }else{
+ for(i=0; i<rp->nrhs; i++){
+ if( rp->rhsalias[i] && strcmp(cp,rp->rhsalias[i])==0 ){
+ if( cp!=rp->code && cp[-1]=='@' ){
+ /* If the argument is of the form @X then substitute
+ ** the token number of X, not the value of X */
+ append_str("yymsp[%d].major",-1,i-rp->nrhs+1,0);
+ }else{
+ struct symbol *sp = rp->rhs[i];
+ int dtnum;
+ if( sp->type==MULTITERMINAL ){
+ dtnum = sp->subsym[0]->dtnum;
+ }else{
+ dtnum = sp->dtnum;
+ }
+ append_str("yymsp[%d].minor.yy%d",0,i-rp->nrhs+1, dtnum);
+ }
+ cp = xp;
+ used[i] = 1;
+ break;
+ }
+ }
+ }
+ *xp = saved;
+ }
+ append_str(cp, 1, 0, 0);
+ } /* End loop */
+
+ /* Check to make sure the LHS has been used */
+ if( rp->lhsalias && !lhsused ){
+ ErrorMsg(lemp->filename,rp->ruleline,
+ "Label \"%s\" for \"%s(%s)\" is never used.",
+ rp->lhsalias,rp->lhs->name,rp->lhsalias);
+ lemp->errorcnt++;
+ }
+
+ /* Generate destructor code for RHS symbols which are not used in the
+ ** reduce code */
+ for(i=0; i<rp->nrhs; i++){
+ if( rp->rhsalias[i] && !used[i] ){
+ ErrorMsg(lemp->filename,rp->ruleline,
+ "Label %s for \"%s(%s)\" is never used.",
+ rp->rhsalias[i],rp->rhs[i]->name,rp->rhsalias[i]);
+ lemp->errorcnt++;
+ }else if( rp->rhsalias[i]==0 ){
+ if( has_destructor(rp->rhs[i],lemp) ){
+ append_str(" yy_destructor(%d,&yymsp[%d].minor);\n", 0,
+ rp->rhs[i]->index,i-rp->nrhs+1);
+ }else{
+ /* No destructor defined for this term */
+ }
+ }
+ }
+ cp = append_str(0,0,0,0);
+ rp->code = Strsafe(cp);
+}
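+
+/* For example, the rule
+**
+**   expr(A) ::= expr(B) PLUS expr(C).  { A = B + C; }
+**
+** has its action rewritten along the lines of
+**
+**   yygotominor.yyN = yymsp[-2].minor.yyN + yymsp[0].minor.yyN;
+**
+** where each yyN is the ".dtnum" slot assigned to the corresponding symbol
+** by print_stack_union() below, and the -2/0 offsets locate B and C on the
+** parser stack relative to the top.
+*/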
+
+/*
+** Generate code which executes when the rule "rp" is reduced. Write
+** the code to "out". Make sure lineno stays up-to-date.
+*/
+PRIVATE void emit_code(out,rp,lemp,lineno)
+FILE *out;
+struct rule *rp;
+struct lemon *lemp;
+int *lineno;
+{
+ char *cp;
+ int linecnt = 0;
+
+ /* Generate code to do the reduce action */
+ if( rp->code ){
+ tplt_linedir(out,rp->line,lemp->filename);
+ fprintf(out,"{%s",rp->code);
+ for(cp=rp->code; *cp; cp++){
+ if( *cp=='\n' ) linecnt++;
+ } /* End loop */
+ (*lineno) += 3 + linecnt;
+ fprintf(out,"}\n");
+ tplt_linedir(out,*lineno,lemp->outname);
+ } /* End if( rp->code ) */
+
+ return;
+}
+
+/*
+** Print the definition of the union used for the parser's data stack.
+** This union contains fields for every possible data type for tokens
+** and nonterminals. In the process of computing and printing this
+** union, also set the ".dtnum" field of every terminal and nonterminal
+** symbol.
+*/
+void print_stack_union(out,lemp,plineno,mhflag)
+FILE *out; /* The output stream */
+struct lemon *lemp; /* The main info structure for this parser */
+int *plineno; /* Pointer to the line number */
+int mhflag; /* True if generating makeheaders output */
+{
+ int lineno = *plineno; /* The line number of the output */
+ char **types; /* A hash table of datatypes */
+ int arraysize; /* Size of the "types" array */
+ int maxdtlength; /* Maximum length of any ".datatype" field. */
+ char *stddt; /* Standardized name for a datatype */
+ int i,j; /* Loop counters */
+ int hash; /* For hashing the name of a type */
+ char *name; /* Name of the parser */
+
+ /* Allocate and initialize types[] and allocate stddt[] */
+ arraysize = lemp->nsymbol * 2;
+ types = (char**)malloc( arraysize * sizeof(char*) );
+ for(i=0; i<arraysize; i++) types[i] = 0;
+ maxdtlength = 0;
+ if( lemp->vartype ){
+ maxdtlength = strlen(lemp->vartype);
+ }
+ for(i=0; i<lemp->nsymbol; i++){
+ int len;
+ struct symbol *sp = lemp->symbols[i];
+ if( sp->datatype==0 ) continue;
+ len = strlen(sp->datatype);
+ if( len>maxdtlength ) maxdtlength = len;
+ }
+ stddt = (char*)malloc( maxdtlength*2 + 1 );
+ if( types==0 || stddt==0 ){
+ fprintf(stderr,"Out of memory.\n");
+ exit(1);
+ }
+
+ /* Build a hash table of datatypes. The ".dtnum" field of each symbol
+ ** is filled in with the hash index plus 1. A ".dtnum" value of 0 is
+ ** used for terminal symbols. If there is no %default_type defined then
+ ** 0 is also used as the .dtnum value for nonterminals which do not specify
+ ** a datatype using the %type directive.
+ */
+ for(i=0; i<lemp->nsymbol; i++){
+ struct symbol *sp = lemp->symbols[i];
+ char *cp;
+ if( sp==lemp->errsym ){
+ sp->dtnum = arraysize+1;
+ continue;
+ }
+ if( sp->type!=NONTERMINAL || (sp->datatype==0 && lemp->vartype==0) ){
+ sp->dtnum = 0;
+ continue;
+ }
+ cp = sp->datatype;
+ if( cp==0 ) cp = lemp->vartype;
+ j = 0;
+ while( isspace(*cp) ) cp++;
+ while( *cp ) stddt[j++] = *cp++;
+ while( j>0 && isspace(stddt[j-1]) ) j--;
+ stddt[j] = 0;
+ hash = 0;
+ for(j=0; stddt[j]; j++){
+ hash = hash*53 + stddt[j];
+ }
+ hash = (hash & 0x7fffffff)%arraysize;
+ while( types[hash] ){
+ if( strcmp(types[hash],stddt)==0 ){
+ sp->dtnum = hash + 1;
+ break;
+ }
+ hash++;
+ if( hash>=arraysize ) hash = 0;
+ }
+ if( types[hash]==0 ){
+ sp->dtnum = hash + 1;
+ types[hash] = (char*)malloc( strlen(stddt)+1 );
+ if( types[hash]==0 ){
+ fprintf(stderr,"Out of memory.\n");
+ exit(1);
+ }
+ strcpy(types[hash],stddt);
+ }
+ }
+
+ /* Print out the definition of YYTOKENTYPE and YYMINORTYPE */
+ name = lemp->name ? lemp->name : "Parse";
+ lineno = *plineno;
+ if( mhflag ){ fprintf(out,"#if INTERFACE\n"); lineno++; }
+ fprintf(out,"#define %sTOKENTYPE %s\n",name,
+ lemp->tokentype?lemp->tokentype:"void*"); lineno++;
+ if( mhflag ){ fprintf(out,"#endif\n"); lineno++; }
+ fprintf(out,"typedef union {\n"); lineno++;
+ fprintf(out," %sTOKENTYPE yy0;\n",name); lineno++;
+ for(i=0; i<arraysize; i++){
+ if( types[i]==0 ) continue;
+ fprintf(out," %s yy%d;\n",types[i],i+1); lineno++;
+ free(types[i]);
+ }
+ fprintf(out," int yy%d;\n",lemp->errsym->dtnum); lineno++;
+ free(stddt);
+ free(types);
+ fprintf(out,"} YYMINORTYPE;\n"); lineno++;
+ *plineno = lineno;
+}
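+
+/* For instance, a grammar declaring "%token_type {Token}" and
+** "%type expr {Expr*}" produces output of roughly this shape (the yy numbers
+** come from the hash above and will differ from grammar to grammar):
+**
+**   #define ParseTOKENTYPE Token
+**   typedef union {
+**     ParseTOKENTYPE yy0;
+**     Expr* yy14;
+**     int yy23;
+**   } YYMINORTYPE;
+**
+** where the final int member is the slot reserved for the error symbol.
+*/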
+
+/*
+** Return the name of a C datatype able to represent values between
+** lwr and upr, inclusive.
+*/
+static const char *minimum_size_type(int lwr, int upr){
+ if( lwr>=0 ){
+ if( upr<=255 ){
+ return "unsigned char";
+ }else if( upr<65535 ){
+ return "unsigned short int";
+ }else{
+ return "unsigned int";
+ }
+ }else if( lwr>=-127 && upr<=127 ){
+ return "signed char";
+ }else if( lwr>=-32767 && upr<32767 ){
+ return "short";
+ }else{
+ return "int";
+ }
+}
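+
+/* For example, minimum_size_type(0,300) returns "unsigned short int" and
+** minimum_size_type(-5,120) returns "signed char".
+*/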
+
+/*
+** Each state contains a set of token actions and a set of
+** nonterminal actions. Each of these sets makes an instance
+** of the following structure. An array of these structures is used
+** to order the creation of entries in the yy_action[] table.
+*/
+struct axset {
+ struct state *stp; /* A pointer to a state */
+ int isTkn; /* True to use tokens. False for non-terminals */
+ int nAction; /* Number of actions */
+};
+
+/*
+** Compare two axset structures for sorting purposes
+*/
+static int axset_compare(const void *a, const void *b){
+ struct axset *p1 = (struct axset*)a;
+ struct axset *p2 = (struct axset*)b;
+ return p2->nAction - p1->nAction;
+}
+
+/* Generate C source code for the parser */
+void ReportTable(lemp, mhflag)
+struct lemon *lemp;
+int mhflag; /* Output in makeheaders format if true */
+{
+ FILE *out, *in;
+ char line[LINESIZE];
+ int lineno;
+ struct state *stp;
+ struct action *ap;
+ struct rule *rp;
+ struct acttab *pActtab;
+ int i, j, n;
+ char *name;
+ int mnTknOfst, mxTknOfst;
+ int mnNtOfst, mxNtOfst;
+ struct axset *ax;
+
+ in = tplt_open(lemp);
+ if( in==0 ) return;
+ out = file_open(lemp,".c","wb");
+ if( out==0 ){
+ fclose(in);
+ return;
+ }
+ lineno = 1;
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate the include code, if any */
+ tplt_print(out,lemp,lemp->include,lemp->includeln,&lineno);
+ if( mhflag ){
+ char *name = file_makename(lemp, ".h");
+ fprintf(out,"#include \"%s\"\n", name); lineno++;
+ free(name);
+ }
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate #defines for all tokens */
+ if( mhflag ){
+ char *prefix;
+ fprintf(out,"#if INTERFACE\n"); lineno++;
+ if( lemp->tokenprefix ) prefix = lemp->tokenprefix;
+ else prefix = "";
+ for(i=1; i<lemp->nterminal; i++){
+ fprintf(out,"#define %s%-30s %2d\n",prefix,lemp->symbols[i]->name,i);
+ lineno++;
+ }
+ fprintf(out,"#endif\n"); lineno++;
+ }
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate the defines */
+ fprintf(out,"#define YYCODETYPE %s\n",
+ minimum_size_type(0, lemp->nsymbol+5)); lineno++;
+ fprintf(out,"#define YYNOCODE %d\n",lemp->nsymbol+1); lineno++;
+ fprintf(out,"#define YYACTIONTYPE %s\n",
+ minimum_size_type(0, lemp->nstate+lemp->nrule+5)); lineno++;
+ if( lemp->wildcard ){
+ fprintf(out,"#define YYWILDCARD %d\n",
+ lemp->wildcard->index); lineno++;
+ }
+ print_stack_union(out,lemp,&lineno,mhflag);
+ if( lemp->stacksize ){
+ if( atoi(lemp->stacksize)<=0 ){
+ ErrorMsg(lemp->filename,0,
+"Illegal stack size: [%s]. The stack size should be an integer constant.",
+ lemp->stacksize);
+ lemp->errorcnt++;
+ lemp->stacksize = "100";
+ }
+ fprintf(out,"#define YYSTACKDEPTH %s\n",lemp->stacksize); lineno++;
+ }else{
+ fprintf(out,"#define YYSTACKDEPTH 100\n"); lineno++;
+ }
+ if( mhflag ){
+ fprintf(out,"#if INTERFACE\n"); lineno++;
+ }
+ name = lemp->name ? lemp->name : "Parse";
+ if( lemp->arg && lemp->arg[0] ){
+ int i;
+ i = strlen(lemp->arg);
+ while( i>=1 && isspace(lemp->arg[i-1]) ) i--;
+ while( i>=1 && (isalnum(lemp->arg[i-1]) || lemp->arg[i-1]=='_') ) i--;
+ fprintf(out,"#define %sARG_SDECL %s;\n",name,lemp->arg); lineno++;
+ fprintf(out,"#define %sARG_PDECL ,%s\n",name,lemp->arg); lineno++;
+ fprintf(out,"#define %sARG_FETCH %s = yypParser->%s\n",
+ name,lemp->arg,&lemp->arg[i]); lineno++;
+ fprintf(out,"#define %sARG_STORE yypParser->%s = %s\n",
+ name,&lemp->arg[i],&lemp->arg[i]); lineno++;
+ }else{
+ fprintf(out,"#define %sARG_SDECL\n",name); lineno++;
+ fprintf(out,"#define %sARG_PDECL\n",name); lineno++;
+ fprintf(out,"#define %sARG_FETCH\n",name); lineno++;
+ fprintf(out,"#define %sARG_STORE\n",name); lineno++;
+ }
+ if( mhflag ){
+ fprintf(out,"#endif\n"); lineno++;
+ }
+ fprintf(out,"#define YYNSTATE %d\n",lemp->nstate); lineno++;
+ fprintf(out,"#define YYNRULE %d\n",lemp->nrule); lineno++;
+ fprintf(out,"#define YYERRORSYMBOL %d\n",lemp->errsym->index); lineno++;
+ fprintf(out,"#define YYERRSYMDT yy%d\n",lemp->errsym->dtnum); lineno++;
+ if( lemp->has_fallback ){
+ fprintf(out,"#define YYFALLBACK 1\n"); lineno++;
+ }
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate the action table and its associates:
+ **
+ ** yy_action[] A single table containing all actions.
+ ** yy_lookahead[] A table containing the lookahead for each entry in
+ ** yy_action. Used to detect hash collisions.
+ ** yy_shift_ofst[] For each state, the offset into yy_action for
+ ** shifting terminals.
+ ** yy_reduce_ofst[] For each state, the offset into yy_action for
+ ** shifting non-terminals after a reduce.
+ ** yy_default[] Default action for each state.
+ */
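+
+ /* Conceptually, the generated parser looks up a shift action for token
+ ** iToken in state iState as
+ **
+ **   i = yy_shift_ofst[iState] + iToken;
+ **   act = (yy_lookahead[i]==iToken) ? yy_action[i] : yy_default[iState];
+ **
+ ** (bounds checks omitted), and yy_reduce_ofst[] plays the same role for
+ ** the nonterminal pushed after a reduce.
+ */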
+
+ /* Compute the actions on all states and count them up */
+ ax = malloc( sizeof(ax[0])*lemp->nstate*2 );
+ if( ax==0 ){
+ fprintf(stderr,"malloc failed\n");
+ exit(1);
+ }
+ for(i=0; i<lemp->nstate; i++){
+ stp = lemp->sorted[i];
+ ax[i*2].stp = stp;
+ ax[i*2].isTkn = 1;
+ ax[i*2].nAction = stp->nTknAct;
+ ax[i*2+1].stp = stp;
+ ax[i*2+1].isTkn = 0;
+ ax[i*2+1].nAction = stp->nNtAct;
+ }
+ mxTknOfst = mnTknOfst = 0;
+ mxNtOfst = mnNtOfst = 0;
+
+ /* Compute the action table. In order to try to keep the size of the
+ ** action table to a minimum, the heuristic of placing the largest action
+ ** sets first is used.
+ */
+ qsort(ax, lemp->nstate*2, sizeof(ax[0]), axset_compare);
+ pActtab = acttab_alloc();
+ for(i=0; i<lemp->nstate*2 && ax[i].nAction>0; i++){
+ stp = ax[i].stp;
+ if( ax[i].isTkn ){
+ for(ap=stp->ap; ap; ap=ap->next){
+ int action;
+ if( ap->sp->index>=lemp->nterminal ) continue;
+ action = compute_action(lemp, ap);
+ if( action<0 ) continue;
+ acttab_action(pActtab, ap->sp->index, action);
+ }
+ stp->iTknOfst = acttab_insert(pActtab);
+ if( stp->iTknOfst<mnTknOfst ) mnTknOfst = stp->iTknOfst;
+ if( stp->iTknOfst>mxTknOfst ) mxTknOfst = stp->iTknOfst;
+ }else{
+ for(ap=stp->ap; ap; ap=ap->next){
+ int action;
+ if( ap->sp->index<lemp->nterminal ) continue;
+ if( ap->sp->index==lemp->nsymbol ) continue;
+ action = compute_action(lemp, ap);
+ if( action<0 ) continue;
+ acttab_action(pActtab, ap->sp->index, action);
+ }
+ stp->iNtOfst = acttab_insert(pActtab);
+ if( stp->iNtOfst<mnNtOfst ) mnNtOfst = stp->iNtOfst;
+ if( stp->iNtOfst>mxNtOfst ) mxNtOfst = stp->iNtOfst;
+ }
+ }
+ free(ax);
+
+ /* Output the yy_action table */
+ fprintf(out,"static const YYACTIONTYPE yy_action[] = {\n"); lineno++;
+ n = acttab_size(pActtab);
+ for(i=j=0; i<n; i++){
+ int action = acttab_yyaction(pActtab, i);
+ if( action<0 ) action = lemp->nsymbol + lemp->nrule + 2;
+ if( j==0 ) fprintf(out," /* %5d */ ", i);
+ fprintf(out, " %4d,", action);
+ if( j==9 || i==n-1 ){
+ fprintf(out, "\n"); lineno++;
+ j = 0;
+ }else{
+ j++;
+ }
+ }
+ fprintf(out, "};\n"); lineno++;
+
+ /* Output the yy_lookahead table */
+ fprintf(out,"static const YYCODETYPE yy_lookahead[] = {\n"); lineno++;
+ for(i=j=0; i<n; i++){
+ int la = acttab_yylookahead(pActtab, i);
+ if( la<0 ) la = lemp->nsymbol;
+ if( j==0 ) fprintf(out," /* %5d */ ", i);
+ fprintf(out, " %4d,", la);
+ if( j==9 || i==n-1 ){
+ fprintf(out, "\n"); lineno++;
+ j = 0;
+ }else{
+ j++;
+ }
+ }
+ fprintf(out, "};\n"); lineno++;
+
+ /* Output the yy_shift_ofst[] table */
+ fprintf(out, "#define YY_SHIFT_USE_DFLT (%d)\n", mnTknOfst-1); lineno++;
+ n = lemp->nstate;
+ while( n>0 && lemp->sorted[n-1]->iTknOfst==NO_OFFSET ) n--;
+ fprintf(out, "#define YY_SHIFT_MAX %d\n", n-1); lineno++;
+ fprintf(out, "static const %s yy_shift_ofst[] = {\n",
+ minimum_size_type(mnTknOfst-1, mxTknOfst)); lineno++;
+ for(i=j=0; i<n; i++){
+ int ofst;
+ stp = lemp->sorted[i];
+ ofst = stp->iTknOfst;
+ if( ofst==NO_OFFSET ) ofst = mnTknOfst - 1;
+ if( j==0 ) fprintf(out," /* %5d */ ", i);
+ fprintf(out, " %4d,", ofst);
+ if( j==9 || i==n-1 ){
+ fprintf(out, "\n"); lineno++;
+ j = 0;
+ }else{
+ j++;
+ }
+ }
+ fprintf(out, "};\n"); lineno++;
+
+ /* Output the yy_reduce_ofst[] table */
+ fprintf(out, "#define YY_REDUCE_USE_DFLT (%d)\n", mnNtOfst-1); lineno++;
+ n = lemp->nstate;
+ while( n>0 && lemp->sorted[n-1]->iNtOfst==NO_OFFSET ) n--;
+ fprintf(out, "#define YY_REDUCE_MAX %d\n", n-1); lineno++;
+ fprintf(out, "static const %s yy_reduce_ofst[] = {\n",
+ minimum_size_type(mnNtOfst-1, mxNtOfst)); lineno++;
+ for(i=j=0; i<n; i++){
+ int ofst;
+ stp = lemp->sorted[i];
+ ofst = stp->iNtOfst;
+ if( ofst==NO_OFFSET ) ofst = mnNtOfst - 1;
+ if( j==0 ) fprintf(out," /* %5d */ ", i);
+ fprintf(out, " %4d,", ofst);
+ if( j==9 || i==n-1 ){
+ fprintf(out, "\n"); lineno++;
+ j = 0;
+ }else{
+ j++;
+ }
+ }
+ fprintf(out, "};\n"); lineno++;
+
+ /* Output the default action table */
+ fprintf(out, "static const YYACTIONTYPE yy_default[] = {\n"); lineno++;
+ n = lemp->nstate;
+ for(i=j=0; i<n; i++){
+ stp = lemp->sorted[i];
+ if( j==0 ) fprintf(out," /* %5d */ ", i);
+ fprintf(out, " %4d,", stp->iDflt);
+ if( j==9 || i==n-1 ){
+ fprintf(out, "\n"); lineno++;
+ j = 0;
+ }else{
+ j++;
+ }
+ }
+ fprintf(out, "};\n"); lineno++;
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate the table of fallback tokens.
+ */
+ if( lemp->has_fallback ){
+ for(i=0; i<lemp->nterminal; i++){
+ struct symbol *p = lemp->symbols[i];
+ if( p->fallback==0 ){
+ fprintf(out, " 0, /* %10s => nothing */\n", p->name);
+ }else{
+ fprintf(out, " %3d, /* %10s => %s */\n", p->fallback->index,
+ p->name, p->fallback->name);
+ }
+ lineno++;
+ }
+ }
+ tplt_xfer(lemp->name, in, out, &lineno);
+
+ /* Generate a table containing the symbolic name of every symbol
+ */
+ for(i=0; i<lemp->nsymbol; i++){
+ sprintf(line,"\"%s\",",lemp->symbols[i]->name);
+ fprintf(out," %-15s",line);
+ if( (i&3)==3 ){ fprintf(out,"\n"); lineno++; }
+ }
+ if( (i&3)!=0 ){ fprintf(out,"\n"); lineno++; }
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate a table containing a text string that describes every
+** rule in the rule set of the grammar. This information is used
+ ** when tracing REDUCE actions.
+ */
+ for(i=0, rp=lemp->rule; rp; rp=rp->next, i++){
+ assert( rp->index==i );
+ fprintf(out," /* %3d */ \"%s ::=", i, rp->lhs->name);
+ for(j=0; j<rp->nrhs; j++){
+ struct symbol *sp = rp->rhs[j];
+ fprintf(out," %s", sp->name);
+ if( sp->type==MULTITERMINAL ){
+ int k;
+ for(k=1; k<sp->nsubsym; k++){
+ fprintf(out,"|%s",sp->subsym[k]->name);
+ }
+ }
+ }
+ fprintf(out,"\",\n"); lineno++;
+ }
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate code which executes every time a symbol is popped from
+ ** the stack while processing errors or while destroying the parser.
+ ** (In other words, generate the %destructor actions)
+ */
+ if( lemp->tokendest ){
+ for(i=0; i<lemp->nsymbol; i++){
+ struct symbol *sp = lemp->symbols[i];
+ if( sp==0 || sp->type!=TERMINAL ) continue;
+ fprintf(out," case %d:\n",sp->index); lineno++;
+ }
+ for(i=0; i<lemp->nsymbol && lemp->symbols[i]->type!=TERMINAL; i++);
+ if( i<lemp->nsymbol ){
+ emit_destructor_code(out,lemp->symbols[i],lemp,&lineno);
+ fprintf(out," break;\n"); lineno++;
+ }
+ }
+ if( lemp->vardest ){
+ struct symbol *dflt_sp = 0;
+ for(i=0; i<lemp->nsymbol; i++){
+ struct symbol *sp = lemp->symbols[i];
+ if( sp==0 || sp->type==TERMINAL ||
+ sp->index<=0 || sp->destructor!=0 ) continue;
+ fprintf(out," case %d:\n",sp->index); lineno++;
+ dflt_sp = sp;
+ }
+ if( dflt_sp!=0 ){
+ emit_destructor_code(out,dflt_sp,lemp,&lineno);
+ fprintf(out," break;\n"); lineno++;
+ }
+ }
+ for(i=0; i<lemp->nsymbol; i++){
+ struct symbol *sp = lemp->symbols[i];
+ if( sp==0 || sp->type==TERMINAL || sp->destructor==0 ) continue;
+ fprintf(out," case %d:\n",sp->index); lineno++;
+
+ /* Combine duplicate destructors into a single case */
+ for(j=i+1; j<lemp->nsymbol; j++){
+ struct symbol *sp2 = lemp->symbols[j];
+ if( sp2 && sp2->type!=TERMINAL && sp2->destructor
+ && sp2->dtnum==sp->dtnum
+ && strcmp(sp->destructor,sp2->destructor)==0 ){
+ fprintf(out," case %d:\n",sp2->index); lineno++;
+ sp2->destructor = 0;
+ }
+ }
+
+ emit_destructor_code(out,lemp->symbols[i],lemp,&lineno);
+ fprintf(out," break;\n"); lineno++;
+ }
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate code which executes whenever the parser stack overflows */
+ tplt_print(out,lemp,lemp->overflow,lemp->overflowln,&lineno);
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate the table of rule information
+ **
+ ** Note: This code depends on the fact that rules are numbered
+ ** sequentially beginning with 0.
+ */
+ for(rp=lemp->rule; rp; rp=rp->next){
+ fprintf(out," { %d, %d },\n",rp->lhs->index,rp->nrhs); lineno++;
+ }
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate code which executes during each REDUCE action */
+ for(rp=lemp->rule; rp; rp=rp->next){
+ if( rp->code ) translate_code(lemp, rp);
+ }
+ for(rp=lemp->rule; rp; rp=rp->next){
+ struct rule *rp2;
+ if( rp->code==0 ) continue;
+ fprintf(out," case %d:\n",rp->index); lineno++;
+ for(rp2=rp->next; rp2; rp2=rp2->next){
+ if( rp2->code==rp->code ){
+ fprintf(out," case %d:\n",rp2->index); lineno++;
+ rp2->code = 0;
+ }
+ }
+ emit_code(out,rp,lemp,&lineno);
+ fprintf(out," break;\n"); lineno++;
+ }
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate code which executes if a parse fails */
+ tplt_print(out,lemp,lemp->failure,lemp->failureln,&lineno);
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate code which executes when a syntax error occurs */
+ tplt_print(out,lemp,lemp->error,lemp->errorln,&lineno);
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Generate code which executes when the parser accepts its input */
+ tplt_print(out,lemp,lemp->accept,lemp->acceptln,&lineno);
+ tplt_xfer(lemp->name,in,out,&lineno);
+
+ /* Append any additional code the user desires */
+ tplt_print(out,lemp,lemp->extracode,lemp->extracodeln,&lineno);
+
+ fclose(in);
+ fclose(out);
+ return;
+}
+
+/* Generate a header file for the parser */
+void ReportHeader(lemp)
+struct lemon *lemp;
+{
+ FILE *out, *in;
+ char *prefix;
+ char line[LINESIZE];
+ char pattern[LINESIZE];
+ int i;
+
+ if( lemp->tokenprefix ) prefix = lemp->tokenprefix;
+ else prefix = "";
+ in = file_open(lemp,".h","rb");
+ if( in ){
+ for(i=1; i<lemp->nterminal && fgets(line,LINESIZE,in); i++){
+ sprintf(pattern,"#define %s%-30s %2d\n",prefix,lemp->symbols[i]->name,i);
+ if( strcmp(line,pattern) ) break;
+ }
+ fclose(in);
+ if( i==lemp->nterminal ){
+ /* No change in the file. Don't rewrite it. */
+ return;
+ }
+ }
+ out = file_open(lemp,".h","wb");
+ if( out ){
+ for(i=1; i<lemp->nterminal; i++){
+ fprintf(out,"#define %s%-30s %2d\n",prefix,lemp->symbols[i]->name,i);
+ }
+ fclose(out);
+ }
+ return;
+}
+
+/* Reduce the size of the action tables, if possible, by making use
+** of defaults.
+**
+** In this version, we take the most frequent REDUCE action and make
+** it the default. Except, there is no default if the wildcard token
+** is a possible look-ahead.
+*/
+void CompressTables(lemp)
+struct lemon *lemp;
+{
+ struct state *stp;
+ struct action *ap, *ap2;
+ struct rule *rp, *rp2, *rbest;
+ int nbest, n;
+ int i;
+ int usesWildcard;
+
+ for(i=0; i<lemp->nstate; i++){
+ stp = lemp->sorted[i];
+ nbest = 0;
+ rbest = 0;
+ usesWildcard = 0;
+
+ for(ap=stp->ap; ap; ap=ap->next){
+ if( ap->type==SHIFT && ap->sp==lemp->wildcard ){
+ usesWildcard = 1;
+ }
+ if( ap->type!=REDUCE ) continue;
+ rp = ap->x.rp;
+ if( rp==rbest ) continue;
+ n = 1;
+ for(ap2=ap->next; ap2; ap2=ap2->next){
+ if( ap2->type!=REDUCE ) continue;
+ rp2 = ap2->x.rp;
+ if( rp2==rbest ) continue;
+ if( rp2==rp ) n++;
+ }
+ if( n>nbest ){
+ nbest = n;
+ rbest = rp;
+ }
+ }
+
+ /* Do not make a default if the number of rules to default
+ ** is not at least 1 or if the wildcard token is a possible
+ ** lookahead.
+ */
+ if( nbest<1 || usesWildcard ) continue;
+
+
+ /* Combine matching REDUCE actions into a single default */
+ for(ap=stp->ap; ap; ap=ap->next){
+ if( ap->type==REDUCE && ap->x.rp==rbest ) break;
+ }
+ assert( ap );
+ ap->sp = Symbol_new("{default}");
+ for(ap=ap->next; ap; ap=ap->next){
+ if( ap->type==REDUCE && ap->x.rp==rbest ) ap->type = NOT_USED;
+ }
+ stp->ap = Action_sort(stp->ap);
+ }
+}
+
+
+/*
+** Compare two states for sorting purposes. The smaller state is the
+** one with the most non-terminal actions. If they have the same number
+** of non-terminal actions, then the smaller is the one with the most
+** token actions.
+*/
+static int stateResortCompare(const void *a, const void *b){
+ const struct state *pA = *(const struct state**)a;
+ const struct state *pB = *(const struct state**)b;
+ int n;
+
+ n = pB->nNtAct - pA->nNtAct;
+ if( n==0 ){
+ n = pB->nTknAct - pA->nTknAct;
+ }
+ return n;
+}
+
+
+/*
+** Renumber and resort states so that states with fewer choices
+** occur at the end. Except, keep state 0 as the first state.
+*/
+void ResortStates(lemp)
+struct lemon *lemp;
+{
+ int i;
+ struct state *stp;
+ struct action *ap;
+
+ for(i=0; i<lemp->nstate; i++){
+ stp = lemp->sorted[i];
+ stp->nTknAct = stp->nNtAct = 0;
+ stp->iDflt = lemp->nstate + lemp->nrule;
+ stp->iTknOfst = NO_OFFSET;
+ stp->iNtOfst = NO_OFFSET;
+ for(ap=stp->ap; ap; ap=ap->next){
+ if( compute_action(lemp,ap)>=0 ){
+ if( ap->sp->index<lemp->nterminal ){
+ stp->nTknAct++;
+ }else if( ap->sp->index<lemp->nsymbol ){
+ stp->nNtAct++;
+ }else{
+ stp->iDflt = compute_action(lemp, ap);
+ }
+ }
+ }
+ }
+ qsort(&lemp->sorted[1], lemp->nstate-1, sizeof(lemp->sorted[0]),
+ stateResortCompare);
+ for(i=0; i<lemp->nstate; i++){
+ lemp->sorted[i]->statenum = i;
+ }
+}
+
+
+/***************** From the file "set.c" ************************************/
+/*
+** Set manipulation routines for the LEMON parser generator.
+*/
+
+static int size = 0;
+
+/* Set the set size */
+void SetSize(n)
+int n;
+{
+ size = n+1;
+}
+
+/* Allocate a new set */
+char *SetNew(){
+ char *s;
+ int i;
+ s = (char*)malloc( size );
+ if( s==0 ){
+ extern void memory_error();
+ memory_error();
+ }
+ for(i=0; i<size; i++) s[i] = 0;
+ return s;
+}
+
+/* Deallocate a set */
+void SetFree(s)
+char *s;
+{
+ free(s);
+}
+
+/* Add a new element to the set. Return TRUE if the element was added
+** and FALSE if it was already there. */
+int SetAdd(s,e)
+char *s;
+int e;
+{
+ int rv;
+ rv = s[e];
+ s[e] = 1;
+ return !rv;
+}
+
+/* Add every element of s2 to s1. Return TRUE if s1 changes. */
+int SetUnion(s1,s2)
+char *s1;
+char *s2;
+{
+ int i, progress;
+ progress = 0;
+ for(i=0; i<size; i++){
+ if( s2[i]==0 ) continue;
+ if( s1[i]==0 ){
+ progress = 1;
+ s1[i] = 1;
+ }
+ }
+ return progress;
+}
+/********************** From the file "table.c" ****************************/
+/*
+** All code in this file has been automatically generated
+** from a specification in the file
+** "table.q"
+** by the associative array code building program "aagen".
+** Do not edit this file! Instead, edit the specification
+** file, then rerun aagen.
+*/
+/*
+** Code for processing tables in the LEMON parser generator.
+*/
+
+PRIVATE int strhash(x)
+char *x;
+{
+ int h = 0;
+ while( *x) h = h*13 + *(x++);
+ return h;
+}
+
+/* Works like strdup, sort of. Save a string in malloced memory, but
+** keep strings in a table so that the same string is not in more
+** than one place.
+*/
+char *Strsafe(y)
+char *y;
+{
+ char *z;
+
+ if( y==0 ) return 0;
+ z = Strsafe_find(y);
+ if( z==0 && (z=malloc( strlen(y)+1 ))!=0 ){
+ strcpy(z,y);
+ Strsafe_insert(z);
+ }
+ MemoryCheck(z);
+ return z;
+}
+
+/* There is one instance of the following structure for each
+** associative array of type "x1".
+*/
+struct s_x1 {
+ int size; /* The number of available slots. */
+ /* Must be a power of 2 greater than or */
+ /* equal to 1 */
+ int count; /* Number of slots currently filled */
+ struct s_x1node *tbl; /* The data stored here */
+ struct s_x1node **ht; /* Hash table for lookups */
+};
+
+/* There is one instance of this structure for every data element
+** in an associative array of type "x1".
+*/
+typedef struct s_x1node {
+ char *data; /* The data */
+ struct s_x1node *next; /* Next entry with the same hash */
+ struct s_x1node **from; /* Previous link */
+} x1node;
+
+/* There is only one instance of the array, which is the following */
+static struct s_x1 *x1a;
+
+/* Allocate a new associative array */
+void Strsafe_init(){
+ if( x1a ) return;
+ x1a = (struct s_x1*)malloc( sizeof(struct s_x1) );
+ if( x1a ){
+ x1a->size = 1024;
+ x1a->count = 0;
+ x1a->tbl = (x1node*)malloc(
+ (sizeof(x1node) + sizeof(x1node*))*1024 );
+ if( x1a->tbl==0 ){
+ free(x1a);
+ x1a = 0;
+ }else{
+ int i;
+ x1a->ht = (x1node**)&(x1a->tbl[1024]);
+ for(i=0; i<1024; i++) x1a->ht[i] = 0;
+ }
+ }
+}
+/* Insert a new record into the array. Return TRUE if successful.
+** Prior data with the same key is NOT overwritten */
+int Strsafe_insert(data)
+char *data;
+{
+ x1node *np;
+ int h;
+ int ph;
+
+ if( x1a==0 ) return 0;
+ ph = strhash(data);
+ h = ph & (x1a->size-1);
+ np = x1a->ht[h];
+ while( np ){
+ if( strcmp(np->data,data)==0 ){
+ /* An existing entry with the same key is found. */
+ /* Fail because overwrite is not allowed. */
+ return 0;
+ }
+ np = np->next;
+ }
+ if( x1a->count>=x1a->size ){
+ /* Need to make the hash table bigger */
+ int i,size;
+ struct s_x1 array;
+ array.size = size = x1a->size*2;
+ array.count = x1a->count;
+ array.tbl = (x1node*)malloc(
+ (sizeof(x1node) + sizeof(x1node*))*size );
+ if( array.tbl==0 ) return 0; /* Fail due to malloc failure */
+ array.ht = (x1node**)&(array.tbl[size]);
+ for(i=0; i<size; i++) array.ht[i] = 0;
+ for(i=0; i<x1a->count; i++){
+ x1node *oldnp, *newnp;
+ oldnp = &(x1a->tbl[i]);
+ h = strhash(oldnp->data) & (size-1);
+ newnp = &(array.tbl[i]);
+ if( array.ht[h] ) array.ht[h]->from = &(newnp->next);
+ newnp->next = array.ht[h];
+ newnp->data = oldnp->data;
+ newnp->from = &(array.ht[h]);
+ array.ht[h] = newnp;
+ }
+ free(x1a->tbl);
+ *x1a = array;
+ }
+ /* Insert the new data */
+ h = ph & (x1a->size-1);
+ np = &(x1a->tbl[x1a->count++]);
+ np->data = data;
+ if( x1a->ht[h] ) x1a->ht[h]->from = &(np->next);
+ np->next = x1a->ht[h];
+ x1a->ht[h] = np;
+ np->from = &(x1a->ht[h]);
+ return 1;
+}
+
+/* Return a pointer to data assigned to the given key. Return NULL
+** if no such key. */
+char *Strsafe_find(key)
+char *key;
+{
+ int h;
+ x1node *np;
+
+ if( x1a==0 ) return 0;
+ h = strhash(key) & (x1a->size-1);
+ np = x1a->ht[h];
+ while( np ){
+ if( strcmp(np->data,key)==0 ) break;
+ np = np->next;
+ }
+ return np ? np->data : 0;
+}
+
+/* Return a pointer to the (terminal or nonterminal) symbol "x".
+** Create a new symbol if this is the first time "x" has been seen.
+*/
+struct symbol *Symbol_new(x)
+char *x;
+{
+ struct symbol *sp;
+
+ sp = Symbol_find(x);
+ if( sp==0 ){
+ sp = (struct symbol *)malloc( sizeof(struct symbol) );
+ MemoryCheck(sp);
+ sp->name = Strsafe(x);
+ sp->type = isupper(*x) ? TERMINAL : NONTERMINAL;
+ sp->rule = 0;
+ sp->fallback = 0;
+ sp->prec = -1;
+ sp->assoc = UNK;
+ sp->firstset = 0;
+ sp->lambda = B_FALSE;
+ sp->destructor = 0;
+ sp->datatype = 0;
+ Symbol_insert(sp,sp->name);
+ }
+ return sp;
+}
+
+/* Compare two symbols for sorting purposes
+**
+** Symbols that begin with upper case letters (terminals or tokens)
+** must sort before symbols that begin with lower case letters
+** (non-terminals). Other than that, the order does not matter.
+**
+** We find experimentally that leaving the symbols in their original
+** order (the order they appeared in the grammar file) gives the
+** smallest parser tables in SQLite.
+*/
+int Symbolcmpp(struct symbol **a, struct symbol **b){
+ int i1 = (**a).index + 10000000*((**a).name[0]>'Z');
+ int i2 = (**b).index + 10000000*((**b).name[0]>'Z');
+ return i1-i2;
+}
+
+/* There is one instance of the following structure for each
+** associative array of type "x2".
+*/
+struct s_x2 {
+ int size; /* The number of available slots. */
+ /* Must be a power of 2 greater than or */
+ /* equal to 1 */
+ int count; /* Number of slots currently filled */
+ struct s_x2node *tbl; /* The data stored here */
+ struct s_x2node **ht; /* Hash table for lookups */
+};
+
+/* There is one instance of this structure for every data element
+** in an associative array of type "x2".
+*/
+typedef struct s_x2node {
+ struct symbol *data; /* The data */
+ char *key; /* The key */
+ struct s_x2node *next; /* Next entry with the same hash */
+ struct s_x2node **from; /* Previous link */
+} x2node;
+
+/* There is only one instance of the array, which is the following */
+static struct s_x2 *x2a;
+
+/* Allocate a new associative array */
+void Symbol_init(){
+ if( x2a ) return;
+ x2a = (struct s_x2*)malloc( sizeof(struct s_x2) );
+ if( x2a ){
+ x2a->size = 128;
+ x2a->count = 0;
+ x2a->tbl = (x2node*)malloc(
+ (sizeof(x2node) + sizeof(x2node*))*128 );
+ if( x2a->tbl==0 ){
+ free(x2a);
+ x2a = 0;
+ }else{
+ int i;
+ x2a->ht = (x2node**)&(x2a->tbl[128]);
+ for(i=0; i<128; i++) x2a->ht[i] = 0;
+ }
+ }
+}
+/* Insert a new record into the array. Return TRUE if successful.
+** Prior data with the same key is NOT overwritten */
+int Symbol_insert(data,key)
+struct symbol *data;
+char *key;
+{
+ x2node *np;
+ int h;
+ int ph;
+
+ if( x2a==0 ) return 0;
+ ph = strhash(key);
+ h = ph & (x2a->size-1);
+ np = x2a->ht[h];
+ while( np ){
+ if( strcmp(np->key,key)==0 ){
+ /* An existing entry with the same key is found. */
+ /* Fail because overwrite is not allowed. */
+ return 0;
+ }
+ np = np->next;
+ }
+ if( x2a->count>=x2a->size ){
+ /* Need to make the hash table bigger */
+ int i,size;
+ struct s_x2 array;
+ array.size = size = x2a->size*2;
+ array.count = x2a->count;
+ array.tbl = (x2node*)malloc(
+ (sizeof(x2node) + sizeof(x2node*))*size );
+ if( array.tbl==0 ) return 0; /* Fail due to malloc failure */
+ array.ht = (x2node**)&(array.tbl[size]);
+ for(i=0; i<size; i++) array.ht[i] = 0;
+ for(i=0; i<x2a->count; i++){
+ x2node *oldnp, *newnp;
+ oldnp = &(x2a->tbl[i]);
+ h = strhash(oldnp->key) & (size-1);
+ newnp = &(array.tbl[i]);
+ if( array.ht[h] ) array.ht[h]->from = &(newnp->next);
+ newnp->next = array.ht[h];
+ newnp->key = oldnp->key;
+ newnp->data = oldnp->data;
+ newnp->from = &(array.ht[h]);
+ array.ht[h] = newnp;
+ }
+ free(x2a->tbl);
+ *x2a = array;
+ }
+ /* Insert the new data */
+ h = ph & (x2a->size-1);
+ np = &(x2a->tbl[x2a->count++]);
+ np->key = key;
+ np->data = data;
+ if( x2a->ht[h] ) x2a->ht[h]->from = &(np->next);
+ np->next = x2a->ht[h];
+ x2a->ht[h] = np;
+ np->from = &(x2a->ht[h]);
+ return 1;
+}
+
+/* Return a pointer to data assigned to the given key. Return NULL
+** if no such key. */
+struct symbol *Symbol_find(key)
+char *key;
+{
+ int h;
+ x2node *np;
+
+ if( x2a==0 ) return 0;
+ h = strhash(key) & (x2a->size-1);
+ np = x2a->ht[h];
+ while( np ){
+ if( strcmp(np->key,key)==0 ) break;
+ np = np->next;
+ }
+ return np ? np->data : 0;
+}
+
+/* Return the n-th data. Return NULL if n is out of range. */
+struct symbol *Symbol_Nth(n)
+int n;
+{
+ struct symbol *data;
+ if( x2a && n>0 && n<=x2a->count ){
+ data = x2a->tbl[n-1].data;
+ }else{
+ data = 0;
+ }
+ return data;
+}
+
+/* Return the size of the array */
+int Symbol_count()
+{
+ return x2a ? x2a->count : 0;
+}
+
+/* Return an array of pointers to all data in the table.
+** The array is obtained from malloc. Return NULL on memory allocation
+** problems, or if the array is empty. */
+struct symbol **Symbol_arrayof()
+{
+ struct symbol **array;
+ int i,size;
+ if( x2a==0 ) return 0;
+ size = x2a->count;
+ array = (struct symbol **)malloc( sizeof(struct symbol *)*size );
+ if( array ){
+ for(i=0; i<size; i++) array[i] = x2a->tbl[i].data;
+ }
+ return array;
+}
+
+/* Compare two configurations */
+int Configcmp(a,b)
+struct config *a;
+struct config *b;
+{
+ int x;
+ x = a->rp->index - b->rp->index;
+ if( x==0 ) x = a->dot - b->dot;
+ return x;
+}
+
+/* Compare two states */
+PRIVATE int statecmp(a,b)
+struct config *a;
+struct config *b;
+{
+ int rc;
+ for(rc=0; rc==0 && a && b; a=a->bp, b=b->bp){
+ rc = a->rp->index - b->rp->index;
+ if( rc==0 ) rc = a->dot - b->dot;
+ }
+ if( rc==0 ){
+ if( a ) rc = 1;
+ if( b ) rc = -1;
+ }
+ return rc;
+}
+
+/* Hash a state */
+PRIVATE int statehash(a)
+struct config *a;
+{
+ int h=0;
+ while( a ){
+ h = h*571 + a->rp->index*37 + a->dot;
+ a = a->bp;
+ }
+ return h;
+}
+
+/* Allocate a new state structure */
+struct state *State_new()
+{
+ struct state *new;
+ new = (struct state *)malloc( sizeof(struct state) );
+ MemoryCheck(new);
+ return new;
+}
+
+/* There is one instance of the following structure for each
+** associative array of type "x3".
+*/
+struct s_x3 {
+ int size; /* The number of available slots. */
+ /* Must be a power of 2 greater than or */
+ /* equal to 1 */
+ int count; /* Number of slots currently filled */
+ struct s_x3node *tbl; /* The data stored here */
+ struct s_x3node **ht; /* Hash table for lookups */
+};
+
+/* There is one instance of this structure for every data element
+** in an associative array of type "x3".
+*/
+typedef struct s_x3node {
+ struct state *data; /* The data */
+ struct config *key; /* The key */
+ struct s_x3node *next; /* Next entry with the same hash */
+ struct s_x3node **from; /* Previous link */
+} x3node;
+
+/* There is only one instance of the array, which is the following */
+static struct s_x3 *x3a;
+
+/* Allocate a new associative array */
+void State_init(){
+ if( x3a ) return;
+ x3a = (struct s_x3*)malloc( sizeof(struct s_x3) );
+ if( x3a ){
+ x3a->size = 128;
+ x3a->count = 0;
+ x3a->tbl = (x3node*)malloc(
+ (sizeof(x3node) + sizeof(x3node*))*128 );
+ if( x3a->tbl==0 ){
+ free(x3a);
+ x3a = 0;
+ }else{
+ int i;
+ x3a->ht = (x3node**)&(x3a->tbl[128]);
+ for(i=0; i<128; i++) x3a->ht[i] = 0;
+ }
+ }
+}
+/* Insert a new record into the array. Return TRUE if successful.
+** Prior data with the same key is NOT overwritten */
+int State_insert(data,key)
+struct state *data;
+struct config *key;
+{
+ x3node *np;
+ int h;
+ int ph;
+
+ if( x3a==0 ) return 0;
+ ph = statehash(key);
+ h = ph & (x3a->size-1);
+ np = x3a->ht[h];
+ while( np ){
+ if( statecmp(np->key,key)==0 ){
+ /* An existing entry with the same key is found. */
+ /* Fail because overwrite is not allowed. */
+ return 0;
+ }
+ np = np->next;
+ }
+ if( x3a->count>=x3a->size ){
+ /* Need to make the hash table bigger */
+ int i,size;
+ struct s_x3 array;
+ array.size = size = x3a->size*2;
+ array.count = x3a->count;
+ array.tbl = (x3node*)malloc(
+ (sizeof(x3node) + sizeof(x3node*))*size );
+ if( array.tbl==0 ) return 0; /* Fail due to malloc failure */
+ array.ht = (x3node**)&(array.tbl[size]);
+ for(i=0; i<size; i++) array.ht[i] = 0;
+ for(i=0; i<x3a->count; i++){
+ x3node *oldnp, *newnp;
+ oldnp = &(x3a->tbl[i]);
+ h = statehash(oldnp->key) & (size-1);
+ newnp = &(array.tbl[i]);
+ if( array.ht[h] ) array.ht[h]->from = &(newnp->next);
+ newnp->next = array.ht[h];
+ newnp->key = oldnp->key;
+ newnp->data = oldnp->data;
+ newnp->from = &(array.ht[h]);
+ array.ht[h] = newnp;
+ }
+ free(x3a->tbl);
+ *x3a = array;
+ }
+ /* Insert the new data */
+ h = ph & (x3a->size-1);
+ np = &(x3a->tbl[x3a->count++]);
+ np->key = key;
+ np->data = data;
+ if( x3a->ht[h] ) x3a->ht[h]->from = &(np->next);
+ np->next = x3a->ht[h];
+ x3a->ht[h] = np;
+ np->from = &(x3a->ht[h]);
+ return 1;
+}
+
+/* Return a pointer to data assigned to the given key. Return NULL
+** if no such key. */
+struct state *State_find(key)
+struct config *key;
+{
+ int h;
+ x3node *np;
+
+ if( x3a==0 ) return 0;
+ h = statehash(key) & (x3a->size-1);
+ np = x3a->ht[h];
+ while( np ){
+ if( statecmp(np->key,key)==0 ) break;
+ np = np->next;
+ }
+ return np ? np->data : 0;
+}
+
+/* Return an array of pointers to all data in the table.
+** The array is obtained from malloc. Return NULL on memory allocation
+** problems, or if the array is empty. */
+struct state **State_arrayof()
+{
+ struct state **array;
+ int i,size;
+ if( x3a==0 ) return 0;
+ size = x3a->count;
+ array = (struct state **)malloc( sizeof(struct state *)*size );
+ if( array ){
+ for(i=0; i<size; i++) array[i] = x3a->tbl[i].data;
+ }
+ return array;
+}
+
+/* Hash a configuration */
+PRIVATE int confighash(a)
+struct config *a;
+{
+ int h=0;
+ h = h*571 + a->rp->index*37 + a->dot;
+ return h;
+}
+
+/* There is one instance of the following structure for each
+** associative array of type "x4".
+*/
+struct s_x4 {
+ int size; /* The number of available slots. */
+ /* Must be a power of 2 greater than or */
+ /* equal to 1 */
+ int count; /* Number of slots currently filled */
+ struct s_x4node *tbl; /* The data stored here */
+ struct s_x4node **ht; /* Hash table for lookups */
+};
+
+/* There is one instance of this structure for every data element
+** in an associative array of type "x4".
+*/
+typedef struct s_x4node {
+ struct config *data; /* The data */
+ struct s_x4node *next; /* Next entry with the same hash */
+ struct s_x4node **from; /* Previous link */
+} x4node;
+
+/* There is only one instance of the array, which is the following */
+static struct s_x4 *x4a;
+
+/* Allocate a new associative array */
+void Configtable_init(){
+ if( x4a ) return;
+ x4a = (struct s_x4*)malloc( sizeof(struct s_x4) );
+ if( x4a ){
+ x4a->size = 64;
+ x4a->count = 0;
+ x4a->tbl = (x4node*)malloc(
+ (sizeof(x4node) + sizeof(x4node*))*64 );
+ if( x4a->tbl==0 ){
+ free(x4a);
+ x4a = 0;
+ }else{
+ int i;
+ x4a->ht = (x4node**)&(x4a->tbl[64]);
+ for(i=0; i<64; i++) x4a->ht[i] = 0;
+ }
+ }
+}
+/* Insert a new record into the array. Return TRUE if successful.
+** Prior data with the same key is NOT overwritten */
+int Configtable_insert(data)
+struct config *data;
+{
+ x4node *np;
+ int h;
+ int ph;
+
+ if( x4a==0 ) return 0;
+ ph = confighash(data);
+ h = ph & (x4a->size-1);
+ np = x4a->ht[h];
+ while( np ){
+ if( Configcmp(np->data,data)==0 ){
+ /* An existing entry with the same key is found. */
+ /* Fail because overwrite is not allowed. */
+ return 0;
+ }
+ np = np->next;
+ }
+ if( x4a->count>=x4a->size ){
+ /* Need to make the hash table bigger */
+ int i,size;
+ struct s_x4 array;
+ array.size = size = x4a->size*2;
+ array.count = x4a->count;
+ array.tbl = (x4node*)malloc(
+ (sizeof(x4node) + sizeof(x4node*))*size );
+ if( array.tbl==0 ) return 0; /* Fail due to malloc failure */
+ array.ht = (x4node**)&(array.tbl[size]);
+ for(i=0; i<size; i++) array.ht[i] = 0;
+ for(i=0; i<x4a->count; i++){
+ x4node *oldnp, *newnp;
+ oldnp = &(x4a->tbl[i]);
+ h = confighash(oldnp->data) & (size-1);
+ newnp = &(array.tbl[i]);
+ if( array.ht[h] ) array.ht[h]->from = &(newnp->next);
+ newnp->next = array.ht[h];
+ newnp->data = oldnp->data;
+ newnp->from = &(array.ht[h]);
+ array.ht[h] = newnp;
+ }
+ free(x4a->tbl);
+ *x4a = array;
+ }
+ /* Insert the new data */
+ h = ph & (x4a->size-1);
+ np = &(x4a->tbl[x4a->count++]);
+ np->data = data;
+ if( x4a->ht[h] ) x4a->ht[h]->from = &(np->next);
+ np->next = x4a->ht[h];
+ x4a->ht[h] = np;
+ np->from = &(x4a->ht[h]);
+ return 1;
+}
+
+/* Return a pointer to data assigned to the given key. Return NULL
+** if no such key. */
+struct config *Configtable_find(key)
+struct config *key;
+{
+ int h;
+ x4node *np;
+
+ if( x4a==0 ) return 0;
+ h = confighash(key) & (x4a->size-1);
+ np = x4a->ht[h];
+ while( np ){
+ if( Configcmp(np->data,key)==0 ) break;
+ np = np->next;
+ }
+ return np ? np->data : 0;
+}
+
+/* Remove all data from the table. Pass each data to the function "f"
+** as it is removed. ("f" may be null to avoid this step.) */
+void Configtable_clear(f)
+int(*f)(/* struct config * */);
+{
+ int i;
+ if( x4a==0 || x4a->count==0 ) return;
+ if( f ) for(i=0; i<x4a->count; i++) (*f)(x4a->tbl[i].data);
+ for(i=0; i<x4a->size; i++) x4a->ht[i] = 0;
+ x4a->count = 0;
+ return;
+}
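
The Strsafe_init()/Strsafe()/Strsafe_find() routines above implement simple string interning: each distinct string is stored once in the x1 hash table, so interned strings can later be compared by pointer. For illustration only, a minimal usage sketch (hypothetical driver code, not part of this commit; it assumes the declarations from lemon.c above are visible):

  #include <assert.h>
  #include <string.h>

  /* Intern the same text twice; both calls return the same pointer,
  ** so later comparisons can use == instead of strcmp(). */
  static void intern_demo(void){
    char *a, *b;
    Strsafe_init();                      /* set up the x1a hash table   */
    a = Strsafe("SELECT");               /* first call copies the text  */
    b = Strsafe("SELECT");               /* second call reuses the copy */
    assert( a==b );
    assert( Strsafe_find("SELECT")==a );
    assert( strcmp(a,"SELECT")==0 );
  }
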
Added: freeswitch/trunk/libs/sqlite/tool/lempar.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/lempar.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,730 @@
+/* Driver template for the LEMON parser generator.
+** The author disclaims copyright to this source code.
+*/
+/* First off, code is included which follows the "include" declaration
+** in the input file. */
+#include <stdio.h>
+%%
+/* Next are all token values, in a form suitable for use by makeheaders.
+** This section will be null unless lemon is run with the -m switch.
+*/
+/*
+** These constants (all generated automatically by the parser generator)
+** specify the various kinds of tokens (terminals) that the parser
+** understands.
+**
+** Each symbol here is a terminal symbol in the grammar.
+*/
+%%
+/* Make sure the INTERFACE macro is defined.
+*/
+#ifndef INTERFACE
+# define INTERFACE 1
+#endif
+/* The next thing included is a series of defines which control
+** various aspects of the generated parser.
+** YYCODETYPE is the data type used for storing terminal
+** and nonterminal numbers. "unsigned char" is
+** used if there are fewer than 250 terminals
+** and nonterminals. "int" is used otherwise.
+** YYNOCODE is a number of type YYCODETYPE which corresponds
+** to no legal terminal or nonterminal number. This
+** number is used to fill in empty slots of the hash
+** table.
+** YYFALLBACK If defined, this indicates that one or more tokens
+** have fall-back values which should be used if the
+** original value of the token will not parse.
+** YYACTIONTYPE is the data type used for storing terminal
+** and nonterminal numbers. "unsigned char" is
+** used if there are fewer than 250 rules and
+** states combined. "int" is used otherwise.
+** ParseTOKENTYPE is the data type used for minor tokens given
+** directly to the parser from the tokenizer.
+** YYMINORTYPE is the data type used for all minor tokens.
+** This is typically a union of many types, one of
+** which is ParseTOKENTYPE. The entry in the union
+** for base tokens is called "yy0".
+** YYSTACKDEPTH is the maximum depth of the parser's stack.
+** ParseARG_SDECL A static variable declaration for the %extra_argument
+** ParseARG_PDECL A parameter declaration for the %extra_argument
+** ParseARG_STORE Code to store %extra_argument into yypParser
+** ParseARG_FETCH Code to extract %extra_argument from yypParser
+** YYNSTATE the combined number of states.
+** YYNRULE the number of rules in the grammar
+** YYERRORSYMBOL is the code number of the error symbol. If not
+** defined, then do no error processing.
+*/
+%%
+#define YY_NO_ACTION (YYNSTATE+YYNRULE+2)
+#define YY_ACCEPT_ACTION (YYNSTATE+YYNRULE+1)
+#define YY_ERROR_ACTION (YYNSTATE+YYNRULE)
+
+/* Next are the tables used to determine what action to take based on the
+** current state and lookahead token. These tables are used to implement
+** functions that take a state number and lookahead value and return an
+** action integer.
+**
+** Suppose the action integer is N. Then the action is determined as
+** follows
+**
+** 0 <= N < YYNSTATE Shift N. That is, push the lookahead
+** token onto the stack and goto state N.
+**
+** YYNSTATE <= N < YYNSTATE+YYNRULE Reduce by rule N-YYNSTATE.
+**
+** N == YYNSTATE+YYNRULE A syntax error has occurred.
+**
+** N == YYNSTATE+YYNRULE+1 The parser accepts its input.
+**
+** N == YYNSTATE+YYNRULE+2 No such action. Denotes unused
+** slots in the yy_action[] table.
+**
+** The action table is constructed as a single large table named yy_action[].
+** Given state S and lookahead X, the action is computed as
+**
+** yy_action[ yy_shift_ofst[S] + X ]
+**
+** If the index value yy_shift_ofst[S]+X is out of range or if the value
+** yy_lookahead[yy_shift_ofst[S]+X] is not equal to X or if yy_shift_ofst[S]
+** is equal to YY_SHIFT_USE_DFLT, it means that the action is not in the table
+** and that yy_default[S] should be used instead.
+**
+** The formula above is for computing the action when the lookahead is
+** a terminal symbol. If the lookahead is a non-terminal (as occurs after
+** a reduce action) then the yy_reduce_ofst[] array is used in place of
+** the yy_shift_ofst[] array and YY_REDUCE_USE_DFLT is used in place of
+** YY_SHIFT_USE_DFLT.
+**
+** The following are the tables generated in this section:
+**
+** yy_action[] A single table containing all actions.
+** yy_lookahead[] A table containing the lookahead for each entry in
+** yy_action. Used to detect hash collisions.
+** yy_shift_ofst[] For each state, the offset into yy_action for
+** shifting terminals.
+** yy_reduce_ofst[] For each state, the offset into yy_action for
+** shifting non-terminals after a reduce.
+** yy_default[] Default action for each state.
+*/
+%%
+#define YY_SZ_ACTTAB (int)(sizeof(yy_action)/sizeof(yy_action[0]))
+
+/* The next table maps tokens into fallback tokens. If a construct
+** like the following:
+**
+** %fallback ID X Y Z.
+**
+** appears in the grammar, then ID becomes a fallback token for X, Y,
+** and Z. Whenever one of the tokens X, Y, or Z is input to the parser
+** but it does not parse, the type of the token is changed to ID and
+** the parse is retried before an error is thrown.
+*/
+#ifdef YYFALLBACK
+static const YYCODETYPE yyFallback[] = {
+%%
+};
+#endif /* YYFALLBACK */
+
+/* The following structure represents a single element of the
+** parser's stack. Information stored includes:
+**
+** + The state number for the parser at this level of the stack.
+**
+** + The value of the token stored at this level of the stack.
+** (In other words, the "major" token.)
+**
+** + The semantic value stored at this level of the stack. This is
+** the information used by the action routines in the grammar.
+** It is sometimes called the "minor" token.
+*/
+struct yyStackEntry {
+ int stateno; /* The state-number */
+ int major; /* The major token value. This is the code
+ ** number for the token at this stack level */
+ YYMINORTYPE minor; /* The user-supplied minor token value. This
+ ** is the value of the token */
+};
+typedef struct yyStackEntry yyStackEntry;
+
+/* The state of the parser is completely contained in an instance of
+** the following structure */
+struct yyParser {
+ int yyidx; /* Index of top element in stack */
+ int yyerrcnt; /* Shifts left before leaving error-recovery mode */
+ ParseARG_SDECL /* A place to hold %extra_argument */
+ yyStackEntry yystack[YYSTACKDEPTH]; /* The parser's stack */
+};
+typedef struct yyParser yyParser;
+
+#ifndef NDEBUG
+#include <stdio.h>
+static FILE *yyTraceFILE = 0;
+static char *yyTracePrompt = 0;
+#endif /* NDEBUG */
+
+#ifndef NDEBUG
+/*
+** Turn parser tracing on by giving a stream to which to write the trace
+** and a prompt to preface each trace message. Tracing is turned off
+** by making either argument NULL
+**
+** Inputs:
+** <ul>
+** <li> A FILE* to which trace output should be written.
+** If NULL, then tracing is turned off.
+** <li> A prefix string written at the beginning of every
+** line of trace output. If NULL, then tracing is
+** turned off.
+** </ul>
+**
+** Outputs:
+** None.
+*/
+void ParseTrace(FILE *TraceFILE, char *zTracePrompt){
+ yyTraceFILE = TraceFILE;
+ yyTracePrompt = zTracePrompt;
+ if( yyTraceFILE==0 ) yyTracePrompt = 0;
+ else if( yyTracePrompt==0 ) yyTraceFILE = 0;
+}
+#endif /* NDEBUG */
+
+#ifndef NDEBUG
+/* For tracing shifts, the names of all terminals and nonterminals
+** are required. The following table supplies these names */
+static const char *const yyTokenName[] = {
+%%
+};
+#endif /* NDEBUG */
+
+#ifndef NDEBUG
+/* For tracing reduce actions, the names of all rules are required.
+*/
+static const char *const yyRuleName[] = {
+%%
+};
+#endif /* NDEBUG */
+
+/*
+** This function returns the symbolic name associated with a token
+** value.
+*/
+const char *ParseTokenName(int tokenType){
+#ifndef NDEBUG
+ if( tokenType>0 && tokenType<(sizeof(yyTokenName)/sizeof(yyTokenName[0])) ){
+ return yyTokenName[tokenType];
+ }else{
+ return "Unknown";
+ }
+#else
+ return "";
+#endif
+}
+
+/*
+** This function allocates a new parser.
+** The only argument is a pointer to a function which works like
+** malloc.
+**
+** Inputs:
+** A pointer to the function used to allocate memory.
+**
+** Outputs:
+** A pointer to a parser. This pointer is used in subsequent calls
+** to Parse and ParseFree.
+*/
+void *ParseAlloc(void *(*mallocProc)(size_t)){
+ yyParser *pParser;
+ pParser = (yyParser*)(*mallocProc)( (size_t)sizeof(yyParser) );
+ if( pParser ){
+ pParser->yyidx = -1;
+ }
+ return pParser;
+}
+
+/* The following function deletes the value associated with a
+** symbol. The symbol can be either a terminal or nonterminal.
+** "yymajor" is the symbol code, and "yypminor" is a pointer to
+** the value.
+*/
+static void yy_destructor(YYCODETYPE yymajor, YYMINORTYPE *yypminor){
+ switch( yymajor ){
+ /* Here are inserted the actions which take place when a
+ ** terminal or non-terminal is destroyed. This can happen
+ ** when the symbol is popped from the stack during a
+ ** reduce or during error processing or when a parser is
+ ** being destroyed before it is finished parsing.
+ **
+ ** Note: during a reduce, the only symbols destroyed are those
+ ** which appear on the RHS of the rule, but which are not used
+ ** inside the C code.
+ */
+%%
+ default: break; /* If no destructor action specified: do nothing */
+ }
+}
+
+/*
+** Pop the parser's stack once.
+**
+** If there is a destructor routine associated with the token which
+** is popped from the stack, then call it.
+**
+** Return the major token number for the symbol popped.
+*/
+static int yy_pop_parser_stack(yyParser *pParser){
+ YYCODETYPE yymajor;
+ yyStackEntry *yytos = &pParser->yystack[pParser->yyidx];
+
+ if( pParser->yyidx<0 ) return 0;
+#ifndef NDEBUG
+ if( yyTraceFILE && pParser->yyidx>=0 ){
+ fprintf(yyTraceFILE,"%sPopping %s\n",
+ yyTracePrompt,
+ yyTokenName[yytos->major]);
+ }
+#endif
+ yymajor = yytos->major;
+ yy_destructor( yymajor, &yytos->minor);
+ pParser->yyidx--;
+ return yymajor;
+}
+
+/*
+** Deallocate and destroy a parser. Destructors are all called for
+** all stack elements before shutting the parser down.
+**
+** Inputs:
+** <ul>
+** <li> A pointer to the parser. This should be a pointer
+** obtained from ParseAlloc.
+** <li> A pointer to a function used to reclaim memory obtained
+** from malloc.
+** </ul>
+*/
+void ParseFree(
+ void *p, /* The parser to be deleted */
+ void (*freeProc)(void*) /* Function used to reclaim memory */
+){
+ yyParser *pParser = (yyParser*)p;
+ if( pParser==0 ) return;
+ while( pParser->yyidx>=0 ) yy_pop_parser_stack(pParser);
+ (*freeProc)((void*)pParser);
+}
+
+/*
+** Find the appropriate action for a parser given the terminal
+** look-ahead token iLookAhead.
+**
+** If the look-ahead token is YYNOCODE, then check to see if the action is
+** independent of the look-ahead. If it is, return the action, otherwise
+** return YY_NO_ACTION.
+*/
+static int yy_find_shift_action(
+ yyParser *pParser, /* The parser */
+ YYCODETYPE iLookAhead /* The look-ahead token */
+){
+ int i;
+ int stateno = pParser->yystack[pParser->yyidx].stateno;
+
+ if( stateno>YY_SHIFT_MAX || (i = yy_shift_ofst[stateno])==YY_SHIFT_USE_DFLT ){
+ return yy_default[stateno];
+ }
+ if( iLookAhead==YYNOCODE ){
+ return YY_NO_ACTION;
+ }
+ i += iLookAhead;
+ if( i<0 || i>=YY_SZ_ACTTAB || yy_lookahead[i]!=iLookAhead ){
+ if( iLookAhead>0 ){
+#ifdef YYFALLBACK
+ int iFallback; /* Fallback token */
+ if( iLookAhead<sizeof(yyFallback)/sizeof(yyFallback[0])
+ && (iFallback = yyFallback[iLookAhead])!=0 ){
+#ifndef NDEBUG
+ if( yyTraceFILE ){
+ fprintf(yyTraceFILE, "%sFALLBACK %s => %s\n",
+ yyTracePrompt, yyTokenName[iLookAhead], yyTokenName[iFallback]);
+ }
+#endif
+ return yy_find_shift_action(pParser, iFallback);
+ }
+#endif
+#ifdef YYWILDCARD
+ {
+ int j = i - iLookAhead + YYWILDCARD;
+ if( j>=0 && j<YY_SZ_ACTTAB && yy_lookahead[j]==YYWILDCARD ){
+#ifndef NDEBUG
+ if( yyTraceFILE ){
+ fprintf(yyTraceFILE, "%sWILDCARD %s => %s\n",
+ yyTracePrompt, yyTokenName[iLookAhead], yyTokenName[YYWILDCARD]);
+ }
+#endif /* NDEBUG */
+ return yy_action[j];
+ }
+ }
+#endif /* YYWILDCARD */
+ }
+ return yy_default[stateno];
+ }else{
+ return yy_action[i];
+ }
+}
+
+/*
+** Find the appropriate action for a parser given the non-terminal
+** look-ahead token iLookAhead.
+**
+** If the look-ahead token is YYNOCODE, then check to see if the action is
+** independent of the look-ahead. If it is, return the action, otherwise
+** return YY_NO_ACTION.
+*/
+static int yy_find_reduce_action(
+ int stateno, /* Current state number */
+ YYCODETYPE iLookAhead /* The look-ahead token */
+){
+ int i;
+ /* int stateno = pParser->yystack[pParser->yyidx].stateno; */
+
+ if( stateno>YY_REDUCE_MAX ||
+ (i = yy_reduce_ofst[stateno])==YY_REDUCE_USE_DFLT ){
+ return yy_default[stateno];
+ }
+ if( iLookAhead==YYNOCODE ){
+ return YY_NO_ACTION;
+ }
+ i += iLookAhead;
+ if( i<0 || i>=YY_SZ_ACTTAB || yy_lookahead[i]!=iLookAhead ){
+ return yy_default[stateno];
+ }else{
+ return yy_action[i];
+ }
+}
+
+/*
+** Perform a shift action.
+*/
+static void yy_shift(
+ yyParser *yypParser, /* The parser to be shifted */
+ int yyNewState, /* The new state to shift in */
+ int yyMajor, /* The major token to shift in */
+ YYMINORTYPE *yypMinor /* Pointer to the minor token to shift in */
+){
+ yyStackEntry *yytos;
+ yypParser->yyidx++;
+ if( yypParser->yyidx>=YYSTACKDEPTH ){
+ ParseARG_FETCH;
+ yypParser->yyidx--;
+#ifndef NDEBUG
+ if( yyTraceFILE ){
+ fprintf(yyTraceFILE,"%sStack Overflow!\n",yyTracePrompt);
+ }
+#endif
+ while( yypParser->yyidx>=0 ) yy_pop_parser_stack(yypParser);
+ /* Here code is inserted which will execute if the parser
+ ** stack ever overflows */
+%%
+ ParseARG_STORE; /* Suppress warning about unused %extra_argument var */
+ return;
+ }
+ yytos = &yypParser->yystack[yypParser->yyidx];
+ yytos->stateno = yyNewState;
+ yytos->major = yyMajor;
+ yytos->minor = *yypMinor;
+#ifndef NDEBUG
+ if( yyTraceFILE && yypParser->yyidx>0 ){
+ int i;
+ fprintf(yyTraceFILE,"%sShift %d\n",yyTracePrompt,yyNewState);
+ fprintf(yyTraceFILE,"%sStack:",yyTracePrompt);
+ for(i=1; i<=yypParser->yyidx; i++)
+ fprintf(yyTraceFILE," %s",yyTokenName[yypParser->yystack[i].major]);
+ fprintf(yyTraceFILE,"\n");
+ }
+#endif
+}
+
+/* The following table contains information about every rule that
+** is used during the reduce.
+*/
+static const struct {
+ YYCODETYPE lhs; /* Symbol on the left-hand side of the rule */
+ unsigned char nrhs; /* Number of right-hand side symbols in the rule */
+} yyRuleInfo[] = {
+%%
+};
+
+static void yy_accept(yyParser*); /* Forward Declaration */
+
+/*
+** Perform a reduce action and the shift that must immediately
+** follow the reduce.
+*/
+static void yy_reduce(
+ yyParser *yypParser, /* The parser */
+ int yyruleno /* Number of the rule by which to reduce */
+){
+ int yygoto; /* The next state */
+ int yyact; /* The next action */
+ YYMINORTYPE yygotominor; /* The LHS of the rule reduced */
+ yyStackEntry *yymsp; /* The top of the parser's stack */
+ int yysize; /* Amount to pop the stack */
+ ParseARG_FETCH;
+ yymsp = &yypParser->yystack[yypParser->yyidx];
+#ifndef NDEBUG
+ if( yyTraceFILE && yyruleno>=0
+ && yyruleno<(int)(sizeof(yyRuleName)/sizeof(yyRuleName[0])) ){
+ fprintf(yyTraceFILE, "%sReduce [%s].\n", yyTracePrompt,
+ yyRuleName[yyruleno]);
+ }
+#endif /* NDEBUG */
+
+#ifndef NDEBUG
+ /* Silence complaints from purify about yygotominor being uninitialized
+ ** in some cases when it is copied into the stack after the following
+ ** switch. yygotominor is uninitialized when a rule reduces that does
+ ** not set the value of its left-hand side nonterminal. Leaving the
+ ** value of the nonterminal uninitialized is utterly harmless as long
+ ** as the value is never used. So really the only thing this code
+ ** accomplishes is to quieten purify.
+ */
+ memset(&yygotominor, 0, sizeof(yygotominor));
+#endif
+
+ switch( yyruleno ){
+ /* Beginning here are the reduction cases. A typical example
+ ** follows:
+ ** case 0:
+ ** #line <lineno> <grammarfile>
+ ** { ... } // User supplied code
+ ** #line <lineno> <thisfile>
+ ** break;
+ */
+%%
+ };
+ yygoto = yyRuleInfo[yyruleno].lhs;
+ yysize = yyRuleInfo[yyruleno].nrhs;
+ yypParser->yyidx -= yysize;
+ yyact = yy_find_reduce_action(yymsp[-yysize].stateno,yygoto);
+ if( yyact < YYNSTATE ){
+#ifdef NDEBUG
+ /* If we are not debugging and the reduce action popped at least
+ ** one element off the stack, then we can push the new element back
+ ** onto the stack here, and skip the stack overflow test in yy_shift().
+ ** That gives a significant speed improvement. */
+ if( yysize ){
+ yypParser->yyidx++;
+ yymsp -= yysize-1;
+ yymsp->stateno = yyact;
+ yymsp->major = yygoto;
+ yymsp->minor = yygotominor;
+ }else
+#endif
+ {
+ yy_shift(yypParser,yyact,yygoto,&yygotominor);
+ }
+ }else if( yyact == YYNSTATE + YYNRULE + 1 ){
+ yy_accept(yypParser);
+ }
+}
+
+/*
+** The following code executes when the parse fails
+*/
+static void yy_parse_failed(
+ yyParser *yypParser /* The parser */
+){
+ ParseARG_FETCH;
+#ifndef NDEBUG
+ if( yyTraceFILE ){
+ fprintf(yyTraceFILE,"%sFail!\n",yyTracePrompt);
+ }
+#endif
+ while( yypParser->yyidx>=0 ) yy_pop_parser_stack(yypParser);
+ /* Here code is inserted which will be executed whenever the
+ ** parser fails */
+%%
+ ParseARG_STORE; /* Suppress warning about unused %extra_argument variable */
+}
+
+/*
+** The following code executes when a syntax error first occurs.
+*/
+static void yy_syntax_error(
+ yyParser *yypParser, /* The parser */
+ int yymajor, /* The major type of the error token */
+ YYMINORTYPE yyminor /* The minor type of the error token */
+){
+ ParseARG_FETCH;
+#define TOKEN (yyminor.yy0)
+%%
+ ParseARG_STORE; /* Suppress warning about unused %extra_argument variable */
+}
+
+/*
+** The following is executed when the parser accepts
+*/
+static void yy_accept(
+ yyParser *yypParser /* The parser */
+){
+ ParseARG_FETCH;
+#ifndef NDEBUG
+ if( yyTraceFILE ){
+ fprintf(yyTraceFILE,"%sAccept!\n",yyTracePrompt);
+ }
+#endif
+ while( yypParser->yyidx>=0 ) yy_pop_parser_stack(yypParser);
+ /* Here code is inserted which will be executed whenever the
+ ** parser accepts */
+%%
+ ParseARG_STORE; /* Suppress warning about unused %extra_argument variable */
+}
+
+/* The main parser program.
+** The first argument is a pointer to a structure obtained from
+** "ParseAlloc" which describes the current state of the parser.
+** The second argument is the major token number. The third is
+** the minor token. The fourth optional argument is whatever the
+** user wants (and specified in the grammar) and is available for
+** use by the action routines.
+**
+** Inputs:
+** <ul>
+** <li> A pointer to the parser (an opaque structure.)
+** <li> The major token number.
+** <li> The minor token number.
+** <li> An optional argument of a grammar-specified type.
+** </ul>
+**
+** Outputs:
+** None.
+*/
+void Parse(
+ void *yyp, /* The parser */
+ int yymajor, /* The major token code number */
+ ParseTOKENTYPE yyminor /* The value for the token */
+ ParseARG_PDECL /* Optional %extra_argument parameter */
+){
+ YYMINORTYPE yyminorunion;
+ int yyact; /* The parser action. */
+ int yyendofinput; /* True if we are at the end of input */
+ int yyerrorhit = 0; /* True if yymajor has invoked an error */
+ yyParser *yypParser; /* The parser */
+
+ /* (re)initialize the parser, if necessary */
+ yypParser = (yyParser*)yyp;
+ if( yypParser->yyidx<0 ){
+ /* if( yymajor==0 ) return; // not sure why this was here... */
+ yypParser->yyidx = 0;
+ yypParser->yyerrcnt = -1;
+ yypParser->yystack[0].stateno = 0;
+ yypParser->yystack[0].major = 0;
+ }
+ yyminorunion.yy0 = yyminor;
+ yyendofinput = (yymajor==0);
+ ParseARG_STORE;
+
+#ifndef NDEBUG
+ if( yyTraceFILE ){
+ fprintf(yyTraceFILE,"%sInput %s\n",yyTracePrompt,yyTokenName[yymajor]);
+ }
+#endif
+
+ do{
+ yyact = yy_find_shift_action(yypParser,yymajor);
+ if( yyact<YYNSTATE ){
+ yy_shift(yypParser,yyact,yymajor,&yyminorunion);
+ yypParser->yyerrcnt--;
+ if( yyendofinput && yypParser->yyidx>=0 ){
+ yymajor = 0;
+ }else{
+ yymajor = YYNOCODE;
+ }
+ }else if( yyact < YYNSTATE + YYNRULE ){
+ yy_reduce(yypParser,yyact-YYNSTATE);
+ }else if( yyact == YY_ERROR_ACTION ){
+ int yymx;
+#ifndef NDEBUG
+ if( yyTraceFILE ){
+ fprintf(yyTraceFILE,"%sSyntax Error!\n",yyTracePrompt);
+ }
+#endif
+#ifdef YYERRORSYMBOL
+ /* A syntax error has occurred.
+ ** The response to an error depends upon whether or not the
+ ** grammar defines an error token "ERROR".
+ **
+ ** This is what we do if the grammar does define ERROR:
+ **
+ ** * Call the %syntax_error function.
+ **
+ ** * Begin popping the stack until we enter a state where
+ ** it is legal to shift the error symbol, then shift
+ ** the error symbol.
+ **
+ ** * Set the error count to three.
+ **
+ ** * Begin accepting and shifting new tokens. No new error
+ ** processing will occur until three tokens have been
+ ** shifted successfully.
+ **
+ */
+ if( yypParser->yyerrcnt<0 ){
+ yy_syntax_error(yypParser,yymajor,yyminorunion);
+ }
+ yymx = yypParser->yystack[yypParser->yyidx].major;
+ if( yymx==YYERRORSYMBOL || yyerrorhit ){
+#ifndef NDEBUG
+ if( yyTraceFILE ){
+ fprintf(yyTraceFILE,"%sDiscard input token %s\n",
+ yyTracePrompt,yyTokenName[yymajor]);
+ }
+#endif
+ yy_destructor(yymajor,&yyminorunion);
+ yymajor = YYNOCODE;
+ }else{
+ while(
+ yypParser->yyidx >= 0 &&
+ yymx != YYERRORSYMBOL &&
+ (yyact = yy_find_reduce_action(
+ yypParser->yystack[yypParser->yyidx].stateno,
+ YYERRORSYMBOL)) >= YYNSTATE
+ ){
+ yy_pop_parser_stack(yypParser);
+ }
+ if( yypParser->yyidx < 0 || yymajor==0 ){
+ yy_destructor(yymajor,&yyminorunion);
+ yy_parse_failed(yypParser);
+ yymajor = YYNOCODE;
+ }else if( yymx!=YYERRORSYMBOL ){
+ YYMINORTYPE u2;
+ u2.YYERRSYMDT = 0;
+ yy_shift(yypParser,yyact,YYERRORSYMBOL,&u2);
+ }
+ }
+ yypParser->yyerrcnt = 3;
+ yyerrorhit = 1;
+#else /* YYERRORSYMBOL is not defined */
+ /* This is what we do if the grammar does not define ERROR:
+ **
+ ** * Report an error message, and throw away the input token.
+ **
+ ** * If the input token is $, then fail the parse.
+ **
+ ** As before, subsequent error messages are suppressed until
+ ** three input tokens have been successfully shifted.
+ */
+ if( yypParser->yyerrcnt<=0 ){
+ yy_syntax_error(yypParser,yymajor,yyminorunion);
+ }
+ yypParser->yyerrcnt = 3;
+ yy_destructor(yymajor,&yyminorunion);
+ if( yyendofinput ){
+ yy_parse_failed(yypParser);
+ }
+ yymajor = YYNOCODE;
+#endif
+ }else{
+ yy_accept(yypParser);
+ yymajor = YYNOCODE;
+ }
+ }while( yymajor!=YYNOCODE && yypParser->yyidx>=0 );
+ return;
+}
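
The long comment near the top of lempar.c above explains how an action is computed from the generated tables. Stripped of the fallback and wildcard handling, yy_find_shift_action() reduces to roughly the following sketch (illustration only, not part of this commit; it assumes the generated yy_* tables and the YY_* macros emitted by lemon are in scope):

  /* Given state S and terminal lookahead X, return either a real entry
  ** from yy_action[] (when the probe hits) or the per-state default
  ** action from yy_default[]. */
  static int lookup_action(int S, int X){
    int i;
    if( S>YY_SHIFT_MAX || (i = yy_shift_ofst[S])==YY_SHIFT_USE_DFLT ){
      return yy_default[S];             /* no token actions in this state */
    }
    i += X;
    if( i<0 || i>=YY_SZ_ACTTAB || yy_lookahead[i]!=X ){
      return yy_default[S];             /* probe missed: use the default  */
    }
    return yy_action[i];                /* shift, reduce, accept or error */
  }
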
Added: freeswitch/trunk/libs/sqlite/tool/memleak.awk
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/memleak.awk Tue Dec 19 15:11:50 2006
@@ -0,0 +1,29 @@
+#
+# This script looks for memory leaks by analyzing the output of "sqlite"
+# when compiled with the SQLITE_DEBUG=2 option.
+#
+/[0-9]+ malloc / {
+ mem[$6] = $0
+}
+/[0-9]+ realloc / {
+ mem[$8] = "";
+ mem[$10] = $0
+}
+/[0-9]+ free / {
+ if (mem[$6]=="") {
+ print "*** free without a malloc at",$6
+ }
+ mem[$6] = "";
+ str[$6] = ""
+}
+/^string at / {
+ addr = $4
+ sub("string at " addr " is ","")
+ str[addr] = $0
+}
+END {
+ for(addr in mem){
+ if( mem[addr]=="" ) continue
+ print mem[addr], str[addr]
+ }
+}
Added: freeswitch/trunk/libs/sqlite/tool/memleak2.awk
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/memleak2.awk Tue Dec 19 15:11:50 2006
@@ -0,0 +1,29 @@
+# This AWK script reads the output of testfixture when compiled for memory
+# debugging. It generates SQL commands that can be fed into an sqlite
+# instance to determine what memory is never freed. A typical usage would
+# be as follows:
+#
+# make -f memleak.mk fulltest 2>mem.out
+# awk -f ../sqlite/tool/memleak2.awk mem.out | ./sqlite :memory:
+#
+# The job performed by this script is the same as that done by memleak.awk.
+# The difference is that this script uses much less memory when the size
+# of the mem.out file is huge.
+#
+BEGIN {
+ print "CREATE TABLE mem(loc INTEGER PRIMARY KEY, src);"
+}
+/[0-9]+ malloc / {
+ print "INSERT INTO mem VALUES(" strtonum($6) ",'" $0 "');"
+}
+/[0-9]+ realloc / {
+ print "INSERT INTO mem VALUES(" strtonum($10) \
+ ",(SELECT src FROM mem WHERE loc=" strtonum($8) "));"
+ print "DELETE FROM mem WHERE loc=" strtonum($8) ";"
+}
+/[0-9]+ free / {
+ print "DELETE FROM mem WHERE loc=" strtonum($6) ";"
+}
+END {
+ print "SELECT src FROM mem;"
+}
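
Both awk scripts above use the same bookkeeping model: remember each malloc()/realloc() line keyed by the address it returned, drop the entry when a free() (or the old address of a realloc()) consumes it, and report whatever is still remembered at the end as a leak. A small C sketch of that model, for illustration only (hypothetical helper code, not part of this commit):

  #include <stdio.h>
  #include <string.h>

  #define MAXLIVE 10000
  static unsigned long liveAddr[MAXLIVE];   /* addresses still allocated    */
  static char liveSrc[MAXLIVE][120];        /* trace line that created them */
  static int nLive = 0;

  static void note_alloc(unsigned long addr, const char *srcLine){
    if( nLive>=MAXLIVE ) return;            /* toy model: ignore overflow   */
    liveAddr[nLive] = addr;
    strncpy(liveSrc[nLive], srcLine, sizeof(liveSrc[0])-1);
    liveSrc[nLive][sizeof(liveSrc[0])-1] = 0;
    nLive++;
  }

  static void note_free(unsigned long addr){
    int i;
    for(i=0; i<nLive; i++){
      if( liveAddr[i]==addr ){
        nLive--;
        if( i<nLive ){                      /* fill the hole with the last  */
          liveAddr[i] = liveAddr[nLive];
          memcpy(liveSrc[i], liveSrc[nLive], sizeof(liveSrc[0]));
        }
        return;
      }
    }
    printf("*** free without a malloc at %lx\n", addr);
  }

  static void report_leaks(void){
    int i;
    for(i=0; i<nLive; i++) printf("LEAK: %s\n", liveSrc[i]);
  }
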
Added: freeswitch/trunk/libs/sqlite/tool/memleak3.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/memleak3.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,233 @@
+#/bin/sh
+# \
+exec `which tclsh` $0 "$@"
+#
+# The author disclaims copyright to this source code. In place of
+# a legal notice, here is a blessing:
+#
+# May you do good and not evil.
+# May you find forgiveness for yourself and forgive others.
+# May you share freely, never taking more than you give.
+######################################################################
+
+set doco "
+This script is a tool to help track down memory leaks in the sqlite
+library. The library must be compiled with the preprocessor symbol
+SQLITE_MEMDEBUG set to at least 2. It must be set to 3 to enable stack
+traces.
+
+To use, run the leaky application and save the standard error output.
+Then, execute this program with the first argument the name of the
+application binary (or interpreter) and the second argument the name of the
+text file that contains the collected stderr output.
+
+If all goes well, a summary of unfreed allocations is printed out. If the
+GNU C library is in use and SQLITE_MEMDEBUG is 3 or greater, a stack trace is
+printed out for each unmatched allocation.
+
+If the \"-r <n>\" option is passed, then the program stops and prints out
+the state of the heap immediately after the <n>th call to malloc() or
+realloc().
+
+Example:
+
+$ ./testfixture ../sqlite/test/select1.test 2> memtrace.out
+$ tclsh $argv0 ?-r <malloc-number>? ./testfixture memtrace.out
+"
+
+
+proc usage {} {
+ set prg [file tail $::argv0]
+ puts "Usage: $prg ?-r <malloc-number>? <binary file> <mem trace file>"
+ puts ""
+ puts [string trim $::doco]
+ exit -1
+}
+
+proc shift {listvar} {
+ upvar $listvar l
+ set ret [lindex $l 0]
+ set l [lrange $l 1 end]
+ return $ret
+}
+
+# Argument handling. The following vars are set:
+#
+# $exe - the name of the executable (i.e. "testfixture" or "./sqlite3")
+# $memfile - the name of the file containing the trace output.
+# $report_at - The malloc number to stop and report at. Or -1 to read
+# all of $memfile.
+#
+set report_at -1
+while {[llength $argv]>2} {
+ set arg [shift argv]
+ switch -- $arg {
+ "-r" {
+ set report_at [shift argv]
+ }
+ default {
+ usage
+ }
+ }
+}
+if {[llength $argv]!=2} usage
+set exe [lindex $argv 0]
+set memfile [lindex $argv 1]
+
+# If stack traces are enabled, the 'addr2line' program is called to
+# translate a binary stack address into a human-readable form.
+set addr2line addr2line
+
+# When the SQLITE_MEMDEBUG is set as described above, SQLite prints
+# out a line for each malloc(), realloc() or free() call that the
+# library makes. If SQLITE_MEMDEBUG is 3, then a stack trace is printed
+# out before each malloc() and realloc() line.
+#
+# This program parses each line the SQLite library outputs and updates
+# the following global Tcl variables to reflect the "current" state of
+# the heap used by SQLite.
+#
+set nBytes 0 ;# Total number of bytes currently allocated.
+set nMalloc 0 ;# Total number of malloc()/realloc() calls.
+set nPeak 0 ;# Peak of nBytes.
+set iPeak 0 ;# nMalloc when nPeak was set.
+#
+# More detailed state information is stored in the $memmap array.
+# Each key in the memmap array is the address of a chunk of memory
+# currently allocated from the heap. The value is a list of the
+# following form
+#
+# {<number-of-bytes> <malloc id> <stack trace>}
+#
+array unset memmap
+
+proc process_input {input_file array_name} {
+ upvar $array_name mem
+ set input [open $input_file]
+
+ set MALLOC {([[:digit:]]+) malloc ([[:digit:]]+) bytes at 0x([[:xdigit:]]+)}
+ # set STACK {^[[:digit:]]+: STACK: (.*)$}
+ set STACK {^STACK: (.*)$}
+ set FREE {[[:digit:]]+ free ([[:digit:]]+) bytes at 0x([[:xdigit:]]+)}
+ set REALLOC {([[:digit:]]+) realloc ([[:digit:]]+) to ([[:digit:]]+)}
+ append REALLOC { bytes at 0x([[:xdigit:]]+) to 0x([[:xdigit:]]+)}
+
+ set stack ""
+ while { ![eof $input] } {
+ set line [gets $input]
+ if {[regexp $STACK $line dummy stack]} {
+ # Do nothing. The variable $stack now stores the hexadecimal stack dump
+ # for the next malloc() or realloc().
+
+ } elseif { [regexp $MALLOC $line dummy mallocid bytes addr] } {
+ # If this is a 'malloc' line, set an entry in the mem array. Each entry
+ # is a list of length three, the number of bytes allocated, the malloc
+ # number and the stack dump when it was allocated.
+ set mem($addr) [list $bytes "malloc $mallocid" $stack]
+ set stack ""
+
+ # Increase the current heap usage
+ incr ::nBytes $bytes
+
+ # Increase the number of malloc() calls
+ incr ::nMalloc
+
+ if {$::nBytes > $::nPeak} {
+ set ::nPeak $::nBytes
+ set ::iPeak $::nMalloc
+ }
+
+ } elseif { [regexp $FREE $line dummy bytes addr] } {
+ # If this is a 'free' line, remove the entry from the mem array. If the
+ # entry does not exist, or is the wrong number of bytes, announce a
+ # problem. This is more likely a bug in the regular expressions for
+ # this script than an SQLite defect.
+ if { [lindex $mem($addr) 0] != $bytes } {
+ error "byte count mismatch"
+ }
+ unset mem($addr)
+
+ # Decrease the current heap usage
+ incr ::nBytes [expr -1 * $bytes]
+
+ } elseif { [regexp $REALLOC $line dummy mallocid ob b oa a] } {
+ # "free" the old allocation in the internal model:
+ incr ::nBytes [expr -1 * $ob]
+ unset mem($oa);
+
+ # "malloc" the new allocation
+ set mem($a) [list $b "realloc $mallocid" $stack]
+ incr ::nBytes $b
+ set stack ""
+
+ # Increase the number of malloc() calls
+ incr ::nMalloc
+
+ if {$::nBytes > $::nPeak} {
+ set ::nPeak $::nBytes
+ set ::iPeak $::nMalloc
+ }
+
+ } else {
+ # puts "REJECT: $line"
+ }
+
+ if {$::nMalloc==$::report_at} report
+ }
+
+ close $input
+}
+
+proc printstack {stack} {
+ set fcount 10
+ if {[llength $stack]<10} {
+ set fcount [llength $stack]
+ }
+ foreach frame [lrange $stack 1 $fcount] {
+ foreach {f l} [split [exec $::addr2line -f --exe=$::exe $frame] \n] {}
+ puts [format "%-30s %s" $f $l]
+ }
+ if {[llength $stack]>0 } {puts ""}
+}
+
+proc report {} {
+
+ foreach key [array names ::memmap] {
+ set stack [lindex $::memmap($key) 2]
+ set bytes [lindex $::memmap($key) 0]
+ lappend summarymap($stack) $bytes
+ }
+
+ set sorted [list]
+ foreach stack [array names summarymap] {
+ set allocs $summarymap($stack)
+ set sum 0
+ foreach a $allocs {
+ incr sum $a
+ }
+ lappend sorted [list $sum $stack]
+ }
+
+ set sorted [lsort -integer -index 0 $sorted]
+ foreach s $sorted {
+ set sum [lindex $s 0]
+ set stack [lindex $s 1]
+ set allocs $summarymap($stack)
+ puts "$sum bytes in [llength $allocs] chunks ($allocs)"
+ printstack $stack
+ }
+
+ # Print out summary statistics
+ puts "Total allocations : $::nMalloc"
+ puts "Total outstanding allocations: [array size ::memmap]"
+ puts "Current heap usage : $::nBytes bytes"
+ puts "Peak heap usage : $::nPeak bytes (malloc #$::iPeak)"
+
+ exit
+}
+
+process_input $memfile memmap
+report
+
+
+
Added: freeswitch/trunk/libs/sqlite/tool/mkkeywordhash.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/mkkeywordhash.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,509 @@
+/*
+** Compile and run this standalone program in order to generate code that
+** implements a function that will translate alphabetic identifiers into
+** parser token codes.
+*/
+#include <stdio.h>
+#include <string.h>
+#include <stdlib.h>
+
+/*
+** All the keywords of the SQL language are stored in a hash
+** table composed of instances of the following structure.
+*/
+typedef struct Keyword Keyword;
+struct Keyword {
+ char *zName; /* The keyword name */
+ char *zTokenType; /* Token value for this keyword */
+ int mask; /* Code this keyword if non-zero */
+ int id; /* Unique ID for this record */
+ int hash; /* Hash on the keyword */
+ int offset; /* Offset to start of name string */
+ int len; /* Length of this keyword, not counting final \000 */
+ int prefix; /* Number of characters in prefix */
+ int iNext; /* Index in aKeywordTable[] of next with same hash */
+ int substrId; /* Id to another keyword this keyword is embedded in */
+ int substrOffset; /* Offset into substrId for start of this keyword */
+};
+
+/*
+** Define masks used to determine which keywords are allowed
+*/
+#ifdef SQLITE_OMIT_ALTERTABLE
+# define ALTER 0
+#else
+# define ALTER 0x00000001
+#endif
+#define ALWAYS 0x00000002
+#ifdef SQLITE_OMIT_ANALYZE
+# define ANALYZE 0
+#else
+# define ANALYZE 0x00000004
+#endif
+#ifdef SQLITE_OMIT_ATTACH
+# define ATTACH 0
+#else
+# define ATTACH 0x00000008
+#endif
+#ifdef SQLITE_OMIT_AUTOINCREMENT
+# define AUTOINCR 0
+#else
+# define AUTOINCR 0x00000010
+#endif
+#ifdef SQLITE_OMIT_CAST
+# define CAST 0
+#else
+# define CAST 0x00000020
+#endif
+#ifdef SQLITE_OMIT_COMPOUND_SELECT
+# define COMPOUND 0
+#else
+# define COMPOUND 0x00000040
+#endif
+#ifdef SQLITE_OMIT_CONFLICT_CLAUSE
+# define CONFLICT 0
+#else
+# define CONFLICT 0x00000080
+#endif
+#ifdef SQLITE_OMIT_EXPLAIN
+# define EXPLAIN 0
+#else
+# define EXPLAIN 0x00000100
+#endif
+#ifdef SQLITE_OMIT_FOREIGN_KEY
+# define FKEY 0
+#else
+# define FKEY 0x00000200
+#endif
+#ifdef SQLITE_OMIT_PRAGMA
+# define PRAGMA 0
+#else
+# define PRAGMA 0x00000400
+#endif
+#ifdef SQLITE_OMIT_REINDEX
+# define REINDEX 0
+#else
+# define REINDEX 0x00000800
+#endif
+#ifdef SQLITE_OMIT_SUBQUERY
+# define SUBQUERY 0
+#else
+# define SUBQUERY 0x00001000
+#endif
+#ifdef SQLITE_OMIT_TRIGGER
+# define TRIGGER 0
+#else
+# define TRIGGER 0x00002000
+#endif
+#ifdef SQLITE_OMIT_VACUUM
+# define VACUUM 0
+#else
+# define VACUUM 0x00004000
+#endif
+#ifdef SQLITE_OMIT_VIEW
+# define VIEW 0
+#else
+# define VIEW 0x00008000
+#endif
+#ifdef SQLITE_OMIT_VIRTUALTABLE
+# define VTAB 0
+#else
+# define VTAB 0x00010000
+#endif
+
+/*
+** These are the keywords
+*/
+static Keyword aKeywordTable[] = {
+ { "ABORT", "TK_ABORT", CONFLICT|TRIGGER },
+ { "ADD", "TK_ADD", ALTER },
+ { "AFTER", "TK_AFTER", TRIGGER },
+ { "ALL", "TK_ALL", ALWAYS },
+ { "ALTER", "TK_ALTER", ALTER },
+ { "ANALYZE", "TK_ANALYZE", ANALYZE },
+ { "AND", "TK_AND", ALWAYS },
+ { "AS", "TK_AS", ALWAYS },
+ { "ASC", "TK_ASC", ALWAYS },
+ { "ATTACH", "TK_ATTACH", ATTACH },
+ { "AUTOINCREMENT", "TK_AUTOINCR", AUTOINCR },
+ { "BEFORE", "TK_BEFORE", TRIGGER },
+ { "BEGIN", "TK_BEGIN", ALWAYS },
+ { "BETWEEN", "TK_BETWEEN", ALWAYS },
+ { "BY", "TK_BY", ALWAYS },
+ { "CASCADE", "TK_CASCADE", FKEY },
+ { "CASE", "TK_CASE", ALWAYS },
+ { "CAST", "TK_CAST", CAST },
+ { "CHECK", "TK_CHECK", ALWAYS },
+ { "COLLATE", "TK_COLLATE", ALWAYS },
+ { "COLUMN", "TK_COLUMNKW", ALTER },
+ { "COMMIT", "TK_COMMIT", ALWAYS },
+ { "CONFLICT", "TK_CONFLICT", CONFLICT },
+ { "CONSTRAINT", "TK_CONSTRAINT", ALWAYS },
+ { "CREATE", "TK_CREATE", ALWAYS },
+ { "CROSS", "TK_JOIN_KW", ALWAYS },
+ { "CURRENT_DATE", "TK_CTIME_KW", ALWAYS },
+ { "CURRENT_TIME", "TK_CTIME_KW", ALWAYS },
+ { "CURRENT_TIMESTAMP","TK_CTIME_KW", ALWAYS },
+ { "DATABASE", "TK_DATABASE", ATTACH },
+ { "DEFAULT", "TK_DEFAULT", ALWAYS },
+ { "DEFERRED", "TK_DEFERRED", ALWAYS },
+ { "DEFERRABLE", "TK_DEFERRABLE", FKEY },
+ { "DELETE", "TK_DELETE", ALWAYS },
+ { "DESC", "TK_DESC", ALWAYS },
+ { "DETACH", "TK_DETACH", ATTACH },
+ { "DISTINCT", "TK_DISTINCT", ALWAYS },
+ { "DROP", "TK_DROP", ALWAYS },
+ { "END", "TK_END", ALWAYS },
+ { "EACH", "TK_EACH", TRIGGER },
+ { "ELSE", "TK_ELSE", ALWAYS },
+ { "ESCAPE", "TK_ESCAPE", ALWAYS },
+ { "EXCEPT", "TK_EXCEPT", COMPOUND },
+ { "EXCLUSIVE", "TK_EXCLUSIVE", ALWAYS },
+ { "EXISTS", "TK_EXISTS", ALWAYS },
+ { "EXPLAIN", "TK_EXPLAIN", EXPLAIN },
+ { "FAIL", "TK_FAIL", CONFLICT|TRIGGER },
+ { "FOR", "TK_FOR", TRIGGER },
+ { "FOREIGN", "TK_FOREIGN", FKEY },
+ { "FROM", "TK_FROM", ALWAYS },
+ { "FULL", "TK_JOIN_KW", ALWAYS },
+ { "GLOB", "TK_LIKE_KW", ALWAYS },
+ { "GROUP", "TK_GROUP", ALWAYS },
+ { "HAVING", "TK_HAVING", ALWAYS },
+ { "IF", "TK_IF", ALWAYS },
+ { "IGNORE", "TK_IGNORE", CONFLICT|TRIGGER },
+ { "IMMEDIATE", "TK_IMMEDIATE", ALWAYS },
+ { "IN", "TK_IN", ALWAYS },
+ { "INDEX", "TK_INDEX", ALWAYS },
+ { "INITIALLY", "TK_INITIALLY", FKEY },
+ { "INNER", "TK_JOIN_KW", ALWAYS },
+ { "INSERT", "TK_INSERT", ALWAYS },
+ { "INSTEAD", "TK_INSTEAD", TRIGGER },
+ { "INTERSECT", "TK_INTERSECT", COMPOUND },
+ { "INTO", "TK_INTO", ALWAYS },
+ { "IS", "TK_IS", ALWAYS },
+ { "ISNULL", "TK_ISNULL", ALWAYS },
+ { "JOIN", "TK_JOIN", ALWAYS },
+ { "KEY", "TK_KEY", ALWAYS },
+ { "LEFT", "TK_JOIN_KW", ALWAYS },
+ { "LIKE", "TK_LIKE_KW", ALWAYS },
+ { "LIMIT", "TK_LIMIT", ALWAYS },
+ { "MATCH", "TK_MATCH", ALWAYS },
+ { "NATURAL", "TK_JOIN_KW", ALWAYS },
+ { "NOT", "TK_NOT", ALWAYS },
+ { "NOTNULL", "TK_NOTNULL", ALWAYS },
+ { "NULL", "TK_NULL", ALWAYS },
+ { "OF", "TK_OF", ALWAYS },
+ { "OFFSET", "TK_OFFSET", ALWAYS },
+ { "ON", "TK_ON", ALWAYS },
+ { "OR", "TK_OR", ALWAYS },
+ { "ORDER", "TK_ORDER", ALWAYS },
+ { "OUTER", "TK_JOIN_KW", ALWAYS },
+ { "PLAN", "TK_PLAN", EXPLAIN },
+ { "PRAGMA", "TK_PRAGMA", PRAGMA },
+ { "PRIMARY", "TK_PRIMARY", ALWAYS },
+ { "QUERY", "TK_QUERY", EXPLAIN },
+ { "RAISE", "TK_RAISE", TRIGGER },
+ { "REFERENCES", "TK_REFERENCES", FKEY },
+ { "REGEXP", "TK_LIKE_KW", ALWAYS },
+ { "REINDEX", "TK_REINDEX", REINDEX },
+ { "RENAME", "TK_RENAME", ALTER },
+ { "REPLACE", "TK_REPLACE", CONFLICT },
+ { "RESTRICT", "TK_RESTRICT", FKEY },
+ { "RIGHT", "TK_JOIN_KW", ALWAYS },
+ { "ROLLBACK", "TK_ROLLBACK", ALWAYS },
+ { "ROW", "TK_ROW", TRIGGER },
+ { "SELECT", "TK_SELECT", ALWAYS },
+ { "SET", "TK_SET", ALWAYS },
+ { "STATEMENT", "TK_STATEMENT", TRIGGER },
+ { "TABLE", "TK_TABLE", ALWAYS },
+ { "TEMP", "TK_TEMP", ALWAYS },
+ { "TEMPORARY", "TK_TEMP", ALWAYS },
+ { "THEN", "TK_THEN", ALWAYS },
+ { "TO", "TK_TO", ALTER },
+ { "TRANSACTION", "TK_TRANSACTION", ALWAYS },
+ { "TRIGGER", "TK_TRIGGER", TRIGGER },
+ { "UNION", "TK_UNION", COMPOUND },
+ { "UNIQUE", "TK_UNIQUE", ALWAYS },
+ { "UPDATE", "TK_UPDATE", ALWAYS },
+ { "USING", "TK_USING", ALWAYS },
+ { "VACUUM", "TK_VACUUM", VACUUM },
+ { "VALUES", "TK_VALUES", ALWAYS },
+ { "VIEW", "TK_VIEW", VIEW },
+ { "VIRTUAL", "TK_VIRTUAL", VTAB },
+ { "WHEN", "TK_WHEN", ALWAYS },
+ { "WHERE", "TK_WHERE", ALWAYS },
+};
+
+/* Number of keywords */
+static int NKEYWORD = (sizeof(aKeywordTable)/sizeof(aKeywordTable[0]));
+
+/* An array to map all upper-case characters into their corresponding
+** lower-case character.
+*/
+const unsigned char sqlite3UpperToLower[] = {
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
+ 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
+ 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
+ 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 97, 98, 99,100,101,102,103,
+ 104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,
+ 122, 91, 92, 93, 94, 95, 96, 97, 98, 99,100,101,102,103,104,105,106,107,
+ 108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,
+ 126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,
+ 144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,
+ 162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,
+ 180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,
+ 198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,
+ 216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,
+ 234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,
+ 252,253,254,255
+};
+#define UpperToLower sqlite3UpperToLower
+
+/*
+** Comparison function for two Keyword records
+*/
+static int keywordCompare1(const void *a, const void *b){
+ const Keyword *pA = (Keyword*)a;
+ const Keyword *pB = (Keyword*)b;
+ int n = pA->len - pB->len;
+ if( n==0 ){
+ n = strcmp(pA->zName, pB->zName);
+ }
+ return n;
+}
+static int keywordCompare2(const void *a, const void *b){
+ const Keyword *pA = (Keyword*)a;
+ const Keyword *pB = (Keyword*)b;
+ int n = strcmp(pA->zName, pB->zName);
+ return n;
+}
+static int keywordCompare3(const void *a, const void *b){
+ const Keyword *pA = (Keyword*)a;
+ const Keyword *pB = (Keyword*)b;
+ int n = pA->offset - pB->offset;
+ return n;
+}
+
+/*
+** Return a KeywordTable entry with the given id
+*/
+static Keyword *findById(int id){
+ int i;
+ for(i=0; i<NKEYWORD; i++){
+ if( aKeywordTable[i].id==id ) break;
+ }
+ return &aKeywordTable[i];
+}
+
+/*
+** This routine does the work. The generated code is printed on standard
+** output.
+*/
+int main(int argc, char **argv){
+ int i, j, k, h;
+ int bestSize, bestCount;
+ int count;
+ int nChar;
+ int aHash[1000]; /* 1000 is much bigger than NKEYWORD */
+
+ /* Remove entries from the list of keywords that have mask==0 */
+ for(i=j=0; i<NKEYWORD; i++){
+ if( aKeywordTable[i].mask==0 ) continue;
+ if( j<i ){
+ aKeywordTable[j] = aKeywordTable[i];
+ }
+ j++;
+ }
+ NKEYWORD = j;
+
+ /* Fill in the lengths of strings and hashes for all entries. */
+ for(i=0; i<NKEYWORD; i++){
+ Keyword *p = &aKeywordTable[i];
+ p->len = strlen(p->zName);
+ p->hash = (UpperToLower[p->zName[0]]*4) ^
+ (UpperToLower[p->zName[p->len-1]]*3) ^ p->len;
+ p->id = i+1;
+ }
+
+ /* Sort the table from shortest to longest keyword */
+ qsort(aKeywordTable, NKEYWORD, sizeof(aKeywordTable[0]), keywordCompare1);
+
+ /* Look for short keywords embedded in longer keywords */
+ for(i=NKEYWORD-2; i>=0; i--){
+ Keyword *p = &aKeywordTable[i];
+ for(j=NKEYWORD-1; j>i && p->substrId==0; j--){
+ Keyword *pOther = &aKeywordTable[j];
+ if( pOther->substrId ) continue;
+ if( pOther->len<=p->len ) continue;
+ for(k=0; k<=pOther->len-p->len; k++){
+ if( memcmp(p->zName, &pOther->zName[k], p->len)==0 ){
+ p->substrId = pOther->id;
+ p->substrOffset = k;
+ break;
+ }
+ }
+ }
+ }
+
+ /* Sort the table into alphabetical order */
+ qsort(aKeywordTable, NKEYWORD, sizeof(aKeywordTable[0]), keywordCompare2);
+
+ /* Fill in the offset for all entries */
+ nChar = 0;
+ for(i=0; i<NKEYWORD; i++){
+ Keyword *p = &aKeywordTable[i];
+ if( p->offset>0 || p->substrId ) continue;
+ p->offset = nChar;
+ nChar += p->len;
+ for(k=p->len-1; k>=1; k--){
+ for(j=i+1; j<NKEYWORD; j++){
+ Keyword *pOther = &aKeywordTable[j];
+ if( pOther->offset>0 || pOther->substrId ) continue;
+ if( pOther->len<=k ) continue;
+ if( memcmp(&p->zName[p->len-k], pOther->zName, k)==0 ){
+ p = pOther;
+ p->offset = nChar - k;
+ nChar = p->offset + p->len;
+ p->zName += k;
+ p->len -= k;
+ p->prefix = k;
+ j = i;
+ k = p->len;
+ }
+ }
+ }
+ }
+ for(i=0; i<NKEYWORD; i++){
+ Keyword *p = &aKeywordTable[i];
+ if( p->substrId ){
+ p->offset = findById(p->substrId)->offset + p->substrOffset;
+ }
+ }
+
+ /* Sort the table by offset */
+ qsort(aKeywordTable, NKEYWORD, sizeof(aKeywordTable[0]), keywordCompare3);
+
+ /* Figure out how big to make the hash table in order to minimize the
+ ** number of collisions */
+ bestSize = NKEYWORD;
+ bestCount = NKEYWORD*NKEYWORD;
+ for(i=NKEYWORD/2; i<=2*NKEYWORD; i++){
+ for(j=0; j<i; j++) aHash[j] = 0;
+ for(j=0; j<NKEYWORD; j++){
+ h = aKeywordTable[j].hash % i;
+ aHash[h] *= 2;
+ aHash[h]++;
+ }
+ for(j=count=0; j<i; j++) count += aHash[j];
+ if( count<bestCount ){
+ bestCount = count;
+ bestSize = i;
+ }
+ }
+
+ /* Compute the hash */
+ for(i=0; i<bestSize; i++) aHash[i] = 0;
+ for(i=0; i<NKEYWORD; i++){
+ h = aKeywordTable[i].hash % bestSize;
+ aKeywordTable[i].iNext = aHash[h];
+ aHash[h] = i+1;
+ }
+
+ /* Begin generating code */
+ printf("/* Hash score: %d */\n", bestCount);
+ printf("static int keywordCode(const char *z, int n){\n");
+
+ printf(" static const char zText[%d] =\n", nChar+1);
+ for(i=j=0; i<NKEYWORD; i++){
+ Keyword *p = &aKeywordTable[i];
+ if( p->substrId ) continue;
+ if( j==0 ) printf(" \"");
+ printf("%s", p->zName);
+ j += p->len;
+ if( j>60 ){
+ printf("\"\n");
+ j = 0;
+ }
+ }
+ printf("%s;\n", j>0 ? "\"" : " ");
+
+ printf(" static const unsigned char aHash[%d] = {\n", bestSize);
+ for(i=j=0; i<bestSize; i++){
+ if( j==0 ) printf(" ");
+ printf(" %3d,", aHash[i]);
+ j++;
+ if( j>12 ){
+ printf("\n");
+ j = 0;
+ }
+ }
+ printf("%s };\n", j==0 ? "" : "\n");
+
+ printf(" static const unsigned char aNext[%d] = {\n", NKEYWORD);
+ for(i=j=0; i<NKEYWORD; i++){
+ if( j==0 ) printf(" ");
+ printf(" %3d,", aKeywordTable[i].iNext);
+ j++;
+ if( j>12 ){
+ printf("\n");
+ j = 0;
+ }
+ }
+ printf("%s };\n", j==0 ? "" : "\n");
+
+ printf(" static const unsigned char aLen[%d] = {\n", NKEYWORD);
+ for(i=j=0; i<NKEYWORD; i++){
+ if( j==0 ) printf(" ");
+ printf(" %3d,", aKeywordTable[i].len+aKeywordTable[i].prefix);
+ j++;
+ if( j>12 ){
+ printf("\n");
+ j = 0;
+ }
+ }
+ printf("%s };\n", j==0 ? "" : "\n");
+
+ printf(" static const unsigned short int aOffset[%d] = {\n", NKEYWORD);
+ for(i=j=0; i<NKEYWORD; i++){
+ if( j==0 ) printf(" ");
+ printf(" %3d,", aKeywordTable[i].offset);
+ j++;
+ if( j>12 ){
+ printf("\n");
+ j = 0;
+ }
+ }
+ printf("%s };\n", j==0 ? "" : "\n");
+
+ printf(" static const unsigned char aCode[%d] = {\n", NKEYWORD);
+ for(i=j=0; i<NKEYWORD; i++){
+ char *zToken = aKeywordTable[i].zTokenType;
+ if( j==0 ) printf(" ");
+ printf("%s,%*s", zToken, (int)(14-strlen(zToken)), "");
+ j++;
+ if( j>=5 ){
+ printf("\n");
+ j = 0;
+ }
+ }
+ printf("%s };\n", j==0 ? "" : "\n");
+
+ printf(" int h, i;\n");
+ printf(" if( n<2 ) return TK_ID;\n");
+ printf(" h = ((charMap(z[0])*4) ^\n"
+ " (charMap(z[n-1])*3) ^\n"
+ " n) %% %d;\n", bestSize);
+ printf(" for(i=((int)aHash[h])-1; i>=0; i=((int)aNext[i])-1){\n");
+ printf(" if( aLen[i]==n &&"
+ " sqlite3StrNICmp(&zText[aOffset[i]],z,n)==0 ){\n");
+ printf(" return aCode[i];\n");
+ printf(" }\n");
+ printf(" }\n");
+ printf(" return TK_ID;\n");
+ printf("}\n");
+ printf("int sqlite3KeywordCode(const unsigned char *z, int n){\n");
+ printf(" return keywordCode((char*)z, n);\n");
+ printf("}\n");
+
+ return 0;
+}
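
As a rough Tcl sketch (not part of the generator above), the hash computed by the generated keywordCode() function is the same formula used to fill p->hash earlier: the lower-cased first character times 4, XORed with the lower-cased last character times 3, XORed with the length, reduced modulo the hash-table size; the table size 127 below is only an example value:

    proc keyword_hash {z tableSize} {
        set n     [string length $z]
        set first [scan [string tolower [string index $z 0]] %c]
        set last  [scan [string tolower [string index $z end]] %c]
        # Same formula that the generator emits into keywordCode().
        return [expr {(($first*4) ^ ($last*3) ^ $n) % $tableSize}]
    }
    puts [keyword_hash SELECT 127]   ;# a bucket index in the range 0..126
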
Added: freeswitch/trunk/libs/sqlite/tool/mkopts.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/mkopts.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,51 @@
+#!/usr/bin/tclsh
+#
+# This script is used to generate the array of strings and the enum
+# that appear at the beginning of the C code implementation of
+# a TCL command and that define the available subcommands for that
+# TCL command.
+
+set prefix {}
+while {![eof stdin]} {
+ set line [gets stdin]
+ if {$line==""} continue
+ regsub -all "\[ \t\n,\]+" [string trim $line] { } line
+ foreach token [split $line { }] {
+ if {![regexp {(([a-zA-Z]+)_)?([_a-zA-Z]+)} $token all px p2 name]} continue
+ lappend namelist [string tolower $name]
+ if {$px!=""} {set prefix $p2}
+ }
+}
+
+puts " static const char *${prefix}_strs\[\] = \173"
+set col 0
+proc put_item x {
+ global col
+ if {$col==0} {puts -nonewline " "}
+ if {$col<2} {
+ puts -nonewline [format " %-21s" $x]
+ incr col
+ } else {
+ puts $x
+ set col 0
+ }
+}
+proc finalize {} {
+ global col
+ if {$col>0} {puts {}}
+ set col 0
+}
+
+foreach name [lsort $namelist] {
+ put_item \"$name\",
+}
+put_item 0
+finalize
+puts " \175;"
+puts " enum ${prefix}_enum \173"
+foreach name [lsort $namelist] {
+ regsub -all {@} $name {} name
+ put_item ${prefix}_[string toupper $name],
+}
+finalize
+puts " \175;"
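
For illustration (not part of the script above), the regexp in the input loop splits a token such as "DB_ENABLE" into the shared prefix and the subcommand name that feed the generated ${prefix}_strs array and ${prefix}_enum; the token is invented:

    regexp {(([a-zA-Z]+)_)?([_a-zA-Z]+)} DB_ENABLE all px p2 name
    puts "prefix=$p2 name=[string tolower $name]"
    # prints: prefix=DB name=enable
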
Added: freeswitch/trunk/libs/sqlite/tool/omittest.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/omittest.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,175 @@
+
+set rcsid {$Id: omittest.tcl,v 1.2 2006/06/20 11:01:09 danielk1977 Exp $}
+
+# Documentation for this script. This may be output to stderr
+# if the script is invoked incorrectly.
+set ::USAGE_MESSAGE {
+This Tcl script is used to test the various compile time options
+available for omitting code (the SQLITE_OMIT_xxx options). It
+should be invoked as follows:
+
+ <script> ?-makefile PATH-TO-MAKEFILE?
+
+The default value for ::MAKEFILE is "../Makefile.linux-gcc".
+
+This script builds the testfixture program and runs the SQLite test suite
+once with each SQLITE_OMIT_ option defined and then once with all options
+defined together. Each run is performed in a separate directory created
+as a sub-directory of the current directory by the script. The output
+of the build is saved in <sub-directory>/build.log. The output of the
+test-suite is saved in <sub-directory>/test.log.
+
+Almost any SQLite makefile (except those generated by configure - see below)
+should work. The following properties are required:
+
+ * The makefile should support the "testfixture" target.
+ * The makefile should support the "test" target.
+ * The makefile should support the variable "OPTS" as a way to pass
+ options from the make command line to lemon and the C compiler.
+
+More precisely, the following two invocations must be supported:
+
+ make -f $::MAKEFILE testfixture OPTS="-DSQLITE_OMIT_ALTERTABLE=1"
+ make -f $::MAKEFILE test
+
+Makefiles generated by the sqlite configure program cannot be used as
+they do not respect the OPTS variable.
+}
+
+
+# Build a testfixture executable and run quick.test using it. The first
+# parameter is the name of the directory to create and use to run the
+# test in. The second parameter is a list of OMIT symbols to define
+# when doing so. For example:
+#
+# run_quick_test /tmp/testdir {SQLITE_OMIT_TRIGGER SQLITE_OMIT_VIEW}
+#
+#
+proc run_quick_test {dir omit_symbol_list} {
+ # Compile the value of the OPTS Makefile variable.
+ set opts "-DSQLITE_MEMDEBUG=2 -DSQLITE_DEBUG -DOS_UNIX"
+ foreach sym $omit_symbol_list {
+ append opts " -D${sym}=1"
+ }
+
+ # Create the directory and do the build. If an error occurs return
+ # early without attempting to run the test suite.
+ file mkdir $dir
+ puts -nonewline "Building $dir..."
+ flush stdout
+ set rc [catch {
+ exec make -C $dir -f $::MAKEFILE testfixture OPTS=$opts >& $dir/build.log
+ }]
+ if {$rc} {
+ puts "No good. See $dir/build.log."
+ return
+ } else {
+ puts "Ok"
+ }
+
+ # Create an empty file "$dir/sqlite3". This is to trick the makefile out
+ # of trying to build the sqlite shell. The sqlite shell won't build
+ # with some of the OMIT options (e.g. OMIT_COMPLETE).
+ if {![file exists $dir/sqlite3]} {
+ set wr [open $dir/sqlite3 w]
+ puts $wr "dummy"
+ close $wr
+ }
+
+ # Run the test suite.
+ puts -nonewline "Testing $dir..."
+ flush stdout
+ set rc [catch {
+ exec make -C $dir -f $::MAKEFILE test OPTS=$opts >& $dir/test.log
+ }]
+ if {$rc} {
+ puts "No good. See $dir/test.log."
+ } else {
+ puts "Ok"
+ }
+}
+
+
+# This proc processes the command line options passed to this script.
+# Currently the only option supported is "-makefile", default
+# "../Makefile.linux-gcc". Set the ::MAKEFILE variable to the value of this
+# option.
+#
+proc process_options {argv} {
+ set ::MAKEFILE ../Makefile.linux-gcc ;# Default value
+ for {set i 0} {$i < [llength $argv]} {incr i} {
+ switch -- [lindex $argv $i] {
+ -makefile {
+ incr i
+ set ::MAKEFILE [lindex $argv $i]
+ }
+
+ default {
+ puts stderr [string trim $::USAGE_MESSAGE]
+ exit -1
+ }
+ }
+ set ::MAKEFILE [file normalize $::MAKEFILE]
+ }
+}
+
+# Main routine.
+#
+
+proc main {argv} {
+ # List of SQLITE_OMIT_XXX symbols supported by SQLite.
+ set ::SYMBOLS [list \
+ SQLITE_OMIT_VIEW \
+ SQLITE_OMIT_VIRTUALTABLE \
+ SQLITE_OMIT_ALTERTABLE \
+ SQLITE_OMIT_EXPLAIN \
+ SQLITE_OMIT_FLOATING_POINT \
+ SQLITE_OMIT_FOREIGN_KEY \
+ SQLITE_OMIT_INTEGRITY_CHECK \
+ SQLITE_OMIT_MEMORYDB \
+ SQLITE_OMIT_PAGER_PRAGMAS \
+ SQLITE_OMIT_PRAGMA \
+ SQLITE_OMIT_PROGRESS_CALLBACK \
+ SQLITE_OMIT_REINDEX \
+ SQLITE_OMIT_SCHEMA_PRAGMAS \
+ SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS \
+ SQLITE_OMIT_DATETIME_FUNCS \
+ SQLITE_OMIT_SUBQUERY \
+ SQLITE_OMIT_TCL_VARIABLE \
+ SQLITE_OMIT_TRIGGER \
+ SQLITE_OMIT_UTF16 \
+ SQLITE_OMIT_VACUUM \
+ SQLITE_OMIT_COMPLETE \
+ SQLITE_OMIT_AUTOVACUUM \
+ SQLITE_OMIT_AUTHORIZATION \
+ SQLITE_OMIT_AUTOINCREMENT \
+ SQLITE_OMIT_BLOB_LITERAL \
+ SQLITE_OMIT_COMPOUND_SELECT \
+ SQLITE_OMIT_CONFLICT_CLAUSE \
+ ]
+
+ # Process any command line options.
+ process_options $argv
+
+ # First try a test with all OMIT symbols except SQLITE_OMIT_FLOATING_POINT
+ # and SQLITE_OMIT_PRAGMA defined. The former doesn't work (causes segfaults)
+ # and the latter is currently incompatible with the test suite (this should
+ # be fixed, but it will be a lot of work).
+ set allsyms [list]
+ foreach s $::SYMBOLS {
+ if {$s!="SQLITE_OMIT_FLOATING_POINT" && $s!="SQLITE_OMIT_PRAGMA"} {
+ lappend allsyms $s
+ }
+ }
+ run_quick_test test_OMIT_EVERYTHING $allsyms
+
+ # Now try one quick.test with each of the OMIT symbols defined. Included
+ # are the OMIT_FLOATING_POINT and OMIT_PRAGMA symbols, even though we
+ # know they will fail. It's good to be reminded of this from time to time.
+ foreach sym $::SYMBOLS {
+ set dirname "test_[string range $sym 7 end]"
+ run_quick_test $dirname $sym
+ }
+}
+
+main $argv
Added: freeswitch/trunk/libs/sqlite/tool/opcodeDoc.awk
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/opcodeDoc.awk Tue Dec 19 15:11:50 2006
@@ -0,0 +1,23 @@
+#
+# Extract opcode documentation from sqliteVdbe.c and generate HTML
+#
+BEGIN {
+ print "<html><body bgcolor=white>"
+ print "<h1>SQLite Virtual Database Engine Opcodes</h1>"
+ print "<table>"
+}
+/ Opcode: /,/\*\// {
+ if( $2=="Opcode:" ){
+ printf "<tr><td>%s %s %s %s</td>\n<td>\n", $3, $4, $5, $6
+ }else if( $1=="*/" ){
+ printf "</td></tr>\n"
+ }else if( NF>1 ){
+ sub(/^ *\*\* /,"")
+ gsub(/&/,"\\&amp;")
+ gsub(/</,"\\&lt;")
+ print
+ }
+}
+END {
+ print "</table></body></html>"
+}
Added: freeswitch/trunk/libs/sqlite/tool/report1.txt
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/report1.txt Tue Dec 19 15:11:50 2006
@@ -0,0 +1,66 @@
+The SQL database used for ACD contains 113 tables and indices implemented
+in GDBM. The following are statistics on the sizes of keys and data
+within these tables and indices.
+
+Entries: 962080
+Size: 45573853
+Avg Size: 48
+Key Size: 11045299
+Avg Key Size: 12
+Max Key Size: 99
+
+
+ Size of key Cumulative
+ and data Instances Percentage
+------------ ---------- -----------
+ 0..8 266 0%
+ 9..12 5485 0%
+ 13..16 73633 8%
+ 17..24 180918 27%
+ 25..32 209823 48%
+ 33..40 148995 64%
+ 41..48 76304 72%
+ 49..56 14346 73%
+ 57..64 15725 75%
+ 65..80 44916 80%
+ 81..96 127815 93%
+ 97..112 34769 96%
+ 113..128 13314 98%
+ 129..144 8098 99%
+ 145..160 3355 99%
+ 161..176 1159 99%
+ 177..192 629 99%
+ 193..208 221 99%
+ 209..224 210 99%
+ 225..240 129 99%
+ 241..256 57 99%
+ 257..288 496 99%
+ 289..320 60 99%
+ 321..352 37 99%
+ 353..384 46 99%
+ 385..416 22 99%
+ 417..448 24 99%
+ 449..480 26 99%
+ 481..512 27 99%
+ 513..1024 471 99%
+ 1025..2048 389 99%
+ 2049..4096 182 99%
+ 4097..8192 74 99%
+ 8193..16384 34 99%
+16385..32768 17 99%
+32769..65536 5 99%
+65537..131073 3 100%
+
+
+This information is gathered to help design the new built-in
+backend for sqlite 2.0. Note in particular that 99% of all
+database entries have a combined key and data size of less than
+144 bytes. So if a leaf node in the new database is able to
+store 144 bytes of combined key and data, only 1% of the leaves
+will require overflow pages. Furthermore, note that no key
+is larger than 99 bytes, so the key will never be on an
+overflow page.
+
+The average combined size of key+data is 48. Add in 16 bytes of
+overhead for a total of 64. That means that a 1K page will
+store (on average) about 16 entries.
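
The arithmetic of that last paragraph, restated as a small Tcl calculation (the 16-byte overhead figure is the assumption made in the report itself):

    set avg_entry 48      ;# average combined key+data size, from the report
    set overhead  16      ;# assumed per-entry overhead
    set page_size 1024
    puts [expr {$page_size / ($avg_entry + $overhead)}]   ;# => 16 entries per page
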
Added: freeswitch/trunk/libs/sqlite/tool/showdb.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/showdb.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,86 @@
+/*
+** A utility for printing all or part of an SQLite database file.
+*/
+#include <stdio.h>
+#include <ctype.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdlib.h>
+
+
+static int pagesize = 1024;
+static int db = -1;
+static int mxPage = 0;
+static int perLine = 32;
+
+static void out_of_memory(void){
+ fprintf(stderr,"Out of memory...\n");
+ exit(1);
+}
+
+static void print_page(int iPg){
+ unsigned char *aData;
+ int i, j;
+ aData = malloc(pagesize);
+ if( aData==0 ) out_of_memory();
+ lseek(db, (iPg-1)*pagesize, SEEK_SET);
+ read(db, aData, pagesize);
+ fprintf(stdout, "Page %d:\n", iPg);
+ for(i=0; i<pagesize; i += perLine){
+ fprintf(stdout, " %03x: ",i);
+ for(j=0; j<perLine; j++){
+ fprintf(stdout,"%02x ", aData[i+j]);
+ }
+ for(j=0; j<perLine; j++){
+ fprintf(stdout,"%c", isprint(aData[i+j]) ? aData[i+j] : '.');
+ }
+ fprintf(stdout,"\n");
+ }
+ free(aData);
+}
+
+int main(int argc, char **argv){
+ struct stat sbuf;
+ if( argc<2 ){
+ fprintf(stderr,"Usage: %s FILENAME ?PAGE? ...\n", argv[0]);
+ exit(1);
+ }
+ db = open(argv[1], O_RDONLY);
+ if( db<0 ){
+ fprintf(stderr,"%s: can't open %s\n", argv[0], argv[1]);
+ exit(1);
+ }
+ fstat(db, &sbuf);
+ mxPage = sbuf.st_size/pagesize + 1;
+ if( argc==2 ){
+ int i;
+ for(i=1; i<=mxPage; i++) print_page(i);
+ }else{
+ int i;
+ for(i=2; i<argc; i++){
+ int iStart, iEnd;
+ char *zLeft;
+ iStart = strtol(argv[i], &zLeft, 0);
+ if( zLeft && strcmp(zLeft,"..end")==0 ){
+ iEnd = mxPage;
+ }else if( zLeft && zLeft[0]=='.' && zLeft[1]=='.' ){
+ iEnd = strtol(&zLeft[2], 0, 0);
+ }else{
+ iEnd = iStart;
+ }
+ if( iStart<1 || iEnd<iStart || iEnd>mxPage ){
+ fprintf(stderr,
+ "Page argument should be LOWER?..UPPER?. Range 1 to %d\n",
+ mxPage);
+ exit(1);
+ }
+ while( iStart<=iEnd ){
+ print_page(iStart);
+ iStart++;
+ }
+ }
+ }
+ close(db);
+}
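
A rough Tcl restatement (not part of the committed C file) of the page-range arguments that showdb.c accepts, i.e. "N", "N..M" and "N..end"; the helper name and the sample values are invented:

    proc parse_range {arg mxPage} {
        if {[regexp {^([0-9]+)\.\.end$} $arg -> lo]}         { return [list $lo $mxPage] }
        if {[regexp {^([0-9]+)\.\.([0-9]+)$} $arg -> lo hi]} { return [list $lo $hi] }
        if {[regexp {^([0-9]+)$} $arg -> lo]}                { return [list $lo $lo] }
        error "Page argument should be LOWER?..UPPER?"
    }
    puts [parse_range 3..end 20]   ;# => 3 20
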
Added: freeswitch/trunk/libs/sqlite/tool/showjournal.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/showjournal.c Tue Dec 19 15:11:50 2006
@@ -0,0 +1,76 @@
+/*
+** A utility for printing an SQLite database journal.
+*/
+#include <stdio.h>
+#include <ctype.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <stdlib.h>
+
+
+static int pagesize = 1024;
+static int db = -1;
+static int mxPage = 0;
+
+static void out_of_memory(void){
+ fprintf(stderr,"Out of memory...\n");
+ exit(1);
+}
+
+static void print_page(int iPg){
+ unsigned char *aData;
+ int i, j;
+ aData = malloc(pagesize);
+ if( aData==0 ) out_of_memory();
+ read(db, aData, pagesize);
+ fprintf(stdout, "Page %d:\n", iPg);
+ for(i=0; i<pagesize; i += 16){
+ fprintf(stdout, " %03x: ",i);
+ for(j=0; j<16; j++){
+ fprintf(stdout,"%02x ", aData[i+j]);
+ }
+ for(j=0; j<16; j++){
+ fprintf(stdout,"%c", isprint(aData[i+j]) ? aData[i+j] : '.');
+ }
+ fprintf(stdout,"\n");
+ }
+ free(aData);
+}
+
+int main(int argc, char **argv){
+ struct stat sbuf;
+ unsigned int u;
+ int rc;
+ unsigned char zBuf[10];
+ unsigned char zBuf2[sizeof(u)];
+ if( argc!=2 ){
+ fprintf(stderr,"Usage: %s FILENAME\n", argv[0]);
+ exit(1);
+ }
+ db = open(argv[1], O_RDONLY);
+ if( db<0 ){
+ fprintf(stderr,"%s: can't open %s\n", argv[0], argv[1]);
+ exit(1);
+ }
+ read(db, zBuf, 8);
+ if( zBuf[7]==0xd6 ){
+ read(db, &u, sizeof(u));
+ printf("Records in Journal: %u\n", u);
+ read(db, &u, sizeof(u));
+ printf("Magic Number: 0x%08x\n", u);
+ }
+ read(db, zBuf2, sizeof(zBuf2));
+ u = zBuf2[0]<<24 | zBuf2[1]<<16 | zBuf2[2]<<8 | zBuf2[3];
+ printf("Database Size: %u\n", u);
+ while( read(db, zBuf2, sizeof(zBuf2))==sizeof(zBuf2) ){
+ u = zBuf2[0]<<24 | zBuf2[1]<<16 | zBuf2[2]<<8 | zBuf2[3];
+ print_page(u);
+ if( zBuf[7]==0xd6 ){
+ read(db, &u, sizeof(u));
+ printf("Checksum: 0x%08x\n", u);
+ }
+ }
+ close(db);
+}
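
The journal reader above decodes 4-byte big-endian integers with explicit shifts; for comparison, a one-line Tcl equivalent (the byte string here is invented) would be:

    binary scan "\x00\x00\x04\x00" I u   ;# 32-bit big-endian integer
    puts $u                              ;# => 1024
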
Added: freeswitch/trunk/libs/sqlite/tool/space_used.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/space_used.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,111 @@
+# Run this TCL script using "testfixture" in order to get a report that shows
+# how much disk space a particular database file uses to actually store data
+# versus how much space is unused.
+#
+
+# Get the name of the database to analyze
+#
+if {[llength $argv]!=1} {
+ puts stderr "Usage: $argv0 database-name"
+ exit 1
+}
+set file_to_analyze [lindex $argv 0]
+
+# Open the database
+#
+sqlite db [lindex $argv 0]
+set DB [btree_open [lindex $argv 0]]
+
+# Output the schema for the generated report
+#
+puts \
+{BEGIN;
+CREATE TABLE space_used(
+ name clob, -- Name of a table or index in the database file
+ is_index boolean, -- TRUE if it is an index, false for a table
+ payload int, -- Total amount of data stored in this table or index
+ pri_pages int, -- Number of primary pages used
+ ovfl_pages int, -- Number of overflow pages used
+ pri_unused int, -- Number of unused bytes on primary pages
+ ovfl_unused int -- Number of unused bytes on overflow pages
+);}
+
+# This query will be used to find the root page number for every index and
+# table in the database.
+#
+set sql {
+ SELECT name, type, rootpage FROM sqlite_master
+ UNION ALL
+ SELECT 'sqlite_master', 'table', 2
+ ORDER BY 1
+}
+
+# Initialize variables used for summary statistics.
+#
+set total_size 0
+set total_primary 0
+set total_overflow 0
+set total_unused_primary 0
+set total_unused_ovfl 0
+
+# Analyze every table in the database, one at a time.
+#
+foreach {name type rootpage} [db eval $sql] {
+ set cursor [btree_cursor $DB $rootpage 0]
+ set go [btree_first $cursor]
+ set size 0
+ catch {unset pg_used}
+ set unused_ovfl 0
+ set n_overflow 0
+ while {$go==0} {
+ set payload [btree_payload_size $cursor]
+ incr size $payload
+ set stat [btree_cursor_dump $cursor]
+ set pgno [lindex $stat 0]
+ set freebytes [lindex $stat 4]
+ set pg_used($pgno) $freebytes
+ if {$payload>238} {
+ set n [expr {($payload-238+1019)/1020}]
+ incr n_overflow $n
+ incr unused_ovfl [expr {$n*1020+238-$payload}]
+ }
+ set go [btree_next $cursor]
+ }
+ btree_close_cursor $cursor
+ set n_primary [llength [array names pg_used]]
+ set unused_primary 0
+ foreach x [array names pg_used] {incr unused_primary $pg_used($x)}
+ regsub -all ' $name '' name
+ puts -nonewline "INSERT INTO space_used VALUES('$name'"
+ puts -nonewline ",[expr {$type=="index"}]"
+ puts ",$size,$n_primary,$n_overflow,$unused_primary,$unused_ovfl);"
+ incr total_size $size
+ incr total_primary $n_primary
+ incr total_overflow $n_overflow
+ incr total_unused_primary $unused_primary
+ incr total_unused_ovfl $unused_ovfl
+}
+
+# Output summary statistics:
+#
+puts "-- Total payload size: $total_size"
+puts "-- Total pages used: $total_primary primary and $total_overflow overflow"
+set file_pgcnt [expr {[file size [lindex $argv 0]]/1024}]
+puts -nonewline "-- Total unused bytes on primary pages: $total_unused_primary"
+if {$total_primary>0} {
+ set upp [expr {$total_unused_primary/$total_primary}]
+ puts " (avg $upp bytes/page)"
+} else {
+ puts ""
+}
+puts -nonewline "-- Total unused bytes on overflow pages: $total_unused_ovfl"
+if {$total_overflow>0} {
+ set upp [expr {$total_unused_ovfl/$total_overflow}]
+ puts " (avg $upp bytes/page)"
+} else {
+ puts ""
+}
+set n_free [expr {$file_pgcnt-$total_primary-$total_overflow}]
+if {$n_free>0} {incr n_free -1}
+puts "-- Total pages on freelist: $n_free"
+puts "COMMIT;"
Added: freeswitch/trunk/libs/sqlite/tool/spaceanal.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/spaceanal.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,813 @@
+# Run this TCL script using "testfixture" in order to get a report that shows
+# how much disk space a particular database file uses to actually store data
+# versus how much space is unused.
+#
+
+if {[catch {
+
+# Get the name of the database to analyze
+#
+#set argv $argv0
+if {[llength $argv]!=1} {
+ puts stderr "Usage: $argv0 database-name"
+ exit 1
+}
+set file_to_analyze [lindex $argv 0]
+if {![file exists $file_to_analyze]} {
+ puts stderr "No such file: $file_to_analyze"
+ exit 1
+}
+if {![file readable $file_to_analyze]} {
+ puts stderr "File is not readable: $file_to_analyze"
+ exit 1
+}
+if {[file size $file_to_analyze]<512} {
+ puts stderr "Empty or malformed database: $file_to_analyze"
+ exit 1
+}
+
+# Open the database
+#
+sqlite3 db [lindex $argv 0]
+set DB [btree_open [lindex $argv 0] 1000 0]
+
+# In-memory database for collecting statistics. This script loops through
+# the tables and indices in the database being analyzed, adding a row for each
+# to an in-memory database (for which the schema is shown below). It then
+# queries the in-memory db to produce the space-analysis report.
+#
+sqlite3 mem :memory:
+set tabledef\
+{CREATE TABLE space_used(
+ name clob, -- Name of a table or index in the database file
+ tblname clob, -- Name of associated table
+ is_index boolean, -- TRUE if it is an index, false for a table
+ nentry int, -- Number of entries in the BTree
+ leaf_entries int, -- Number of leaf entries
+ payload int, -- Total amount of data stored in this table or index
+ ovfl_payload int, -- Total amount of data stored on overflow pages
+ ovfl_cnt int, -- Number of entries that use overflow
+ mx_payload int, -- Maximum payload size
+ int_pages int, -- Number of interior pages used
+ leaf_pages int, -- Number of leaf pages used
+ ovfl_pages int, -- Number of overflow pages used
+ int_unused int, -- Number of unused bytes on interior pages
+ leaf_unused int, -- Number of unused bytes on primary pages
+ ovfl_unused int -- Number of unused bytes on overflow pages
+);}
+mem eval $tabledef
+
+proc integerify {real} {
+ return [expr int($real)]
+}
+mem function int integerify
+
+# Quote a string for use in an SQL query. Examples:
+#
+# [quote {hello world}] == {'hello world'}
+# [quote {hello world's}] == {'hello world''s'}
+#
+proc quote {txt} {
+ regsub -all ' $txt '' q
+ return '$q'
+}
+
+# This proc is a wrapper around the btree_cursor_info command. The
+# second argument is an open btree cursor returned by [btree_cursor].
+# The first argument is the name of an array variable that exists in
+# the scope of the caller. If the third argument is non-zero, then
+# info is returned for the page that lies $up entries upwards in the
+# tree-structure. (i.e. $up==1 returns the parent page, $up==2 the
+# grandparent etc.)
+#
+# The following entries in that array are filled in with information retrieved
+# using [btree_cursor_info]:
+#
+# $arrayvar(page_no) = The page number
+# $arrayvar(entry_no) = The entry number
+# $arrayvar(page_entries) = Total number of entries on this page
+# $arrayvar(cell_size) = Cell size (local payload + header)
+# $arrayvar(page_freebytes) = Number of free bytes on this page
+# $arrayvar(page_freeblocks) = Number of free blocks on the page
+# $arrayvar(payload_bytes) = Total payload size (local + overflow)
+# $arrayvar(header_bytes) = Header size in bytes
+# $arrayvar(local_payload_bytes) = Local payload size
+# $arrayvar(parent) = Parent page number
+#
+proc cursor_info {arrayvar csr {up 0}} {
+ upvar $arrayvar a
+ foreach [list a(page_no) \
+ a(entry_no) \
+ a(page_entries) \
+ a(cell_size) \
+ a(page_freebytes) \
+ a(page_freeblocks) \
+ a(payload_bytes) \
+ a(header_bytes) \
+ a(local_payload_bytes) \
+ a(parent) ] [btree_cursor_info $csr $up] {}
+}
+
+# Determine the page-size of the database. This global variable is used
+# throughout the script.
+#
+set pageSize [db eval {PRAGMA page_size}]
+
+# Analyze every table in the database, one at a time.
+#
+# The following query returns the name and root-page of each table in the
+# database, including the sqlite_master table.
+#
+set sql {
+ SELECT name, rootpage FROM sqlite_master
+ WHERE type='table' AND rootpage>0
+ UNION ALL
+ SELECT 'sqlite_master', 1
+ ORDER BY 1
+}
+set wideZero [expr {10000000000 - 10000000000}]
+foreach {name rootpage} [db eval $sql] {
+ puts stderr "Analyzing table $name..."
+
+ # Code below traverses the table being analyzed (table name $name), using the
+ # btree cursor $csr. Statistics related to table $name are accumulated in
+ # the following variables:
+ #
+ set total_payload $wideZero ;# Payload space used by all entries
+ set total_ovfl $wideZero ;# Payload space on overflow pages
+ set unused_int $wideZero ;# Unused space on interior nodes
+ set unused_leaf $wideZero ;# Unused space on leaf nodes
+ set unused_ovfl $wideZero ;# Unused space on overflow pages
+ set cnt_ovfl $wideZero ;# Number of entries that use overflows
+ set cnt_leaf_entry $wideZero ;# Number of leaf entries
+ set cnt_int_entry $wideZero ;# Number of interior entries
+ set mx_payload $wideZero ;# Maximum payload size
+ set ovfl_pages $wideZero ;# Number of overflow pages used
+ set leaf_pages $wideZero ;# Number of leaf pages
+ set int_pages $wideZero ;# Number of interior pages
+
+ # As the btree is traversed, the array variable $seen($pgno) is set to 1
+ # the first time page $pgno is encountered.
+ #
+ catch {unset seen}
+
+ # The following loop runs once for each entry in table $name. The table
+ # is traversed using the btree cursor stored in variable $csr
+ #
+ set csr [btree_cursor $DB $rootpage 0]
+ for {btree_first $csr} {![btree_eof $csr]} {btree_next $csr} {
+ incr cnt_leaf_entry
+
+ # Retrieve information about the entry the btree-cursor points to into
+ # the array variable $ci (cursor info).
+ #
+ cursor_info ci $csr
+
+ # Check if the payload of this entry is greater than the current
+ # $mx_payload statistic for the table. Also increase the $total_payload
+ # statistic.
+ #
+ if {$ci(payload_bytes)>$mx_payload} {set mx_payload $ci(payload_bytes)}
+ incr total_payload $ci(payload_bytes)
+
+ # If this entry uses overflow pages, then update the $cnt_ovfl,
+ # $total_ovfl, $ovfl_pages and $unused_ovfl statistics.
+ #
+ set ovfl [expr {$ci(payload_bytes)-$ci(local_payload_bytes)}]
+ if {$ovfl} {
+ incr cnt_ovfl
+ incr total_ovfl $ovfl
+ set n [expr {int(ceil($ovfl/($pageSize-4.0)))}]
+ incr ovfl_pages $n
+ incr unused_ovfl [expr {$n*($pageSize-4) - $ovfl}]
+ }
+
+ # If this is the first table entry analyzed for the page, then update
+ # the page-related statistics $leaf_pages and $unused_leaf. Also, if
+ # this page has a parent page that has not been analyzed, retrieve
+ # info for the parent and update statistics for it too.
+ #
+ if {![info exists seen($ci(page_no))]} {
+ set seen($ci(page_no)) 1
+ incr leaf_pages
+ incr unused_leaf $ci(page_freebytes)
+
+ # Now check if the page has a parent that has not been analyzed. If
+ # so, update the $int_pages, $cnt_int_entry and $unused_int statistics
+ # accordingly. Then check if the parent page has a parent that has
+ # not yet been analyzed etc.
+ #
+ # set parent $ci(parent_page_no)
+ for {set up 1} \
+ {$ci(parent)!=0 && ![info exists seen($ci(parent))]} {incr up} \
+ {
+ # Mark the parent as seen.
+ #
+ set seen($ci(parent)) 1
+
+ # Retrieve info for the parent and update statistics.
+ cursor_info ci $csr $up
+ incr int_pages
+ incr cnt_int_entry $ci(page_entries)
+ incr unused_int $ci(page_freebytes)
+ }
+ }
+ }
+ btree_close_cursor $csr
+
+ # Handle the special case where a table contains no data. In this case
+ # all statistics are zero, except for the number of leaf pages (1) and
+ # the unused bytes on leaf pages ($pageSize - 8).
+ #
+ # An exception to the above is the sqlite_master table. If it is empty
+ # then all statistics are zero except for the number of leaf pages (1),
+ # and the number of unused bytes on leaf pages ($pageSize - 112).
+ #
+ if {[llength [array names seen]]==0} {
+ set leaf_pages 1
+ if {$rootpage==1} {
+ set unused_leaf [expr {$pageSize-112}]
+ } else {
+ set unused_leaf [expr {$pageSize-8}]
+ }
+ }
+
+ # Insert the statistics for the table analyzed into the in-memory database.
+ #
+ set sql "INSERT INTO space_used VALUES("
+ append sql [quote $name]
+ append sql ",[quote $name]"
+ append sql ",0"
+ append sql ",[expr {$cnt_leaf_entry+$cnt_int_entry}]"
+ append sql ",$cnt_leaf_entry"
+ append sql ",$total_payload"
+ append sql ",$total_ovfl"
+ append sql ",$cnt_ovfl"
+ append sql ",$mx_payload"
+ append sql ",$int_pages"
+ append sql ",$leaf_pages"
+ append sql ",$ovfl_pages"
+ append sql ",$unused_int"
+ append sql ",$unused_leaf"
+ append sql ",$unused_ovfl"
+ append sql );
+ mem eval $sql
+}
+
+# Analyze every index in the database, one at a time.
+#
+# The query below returns the name, associated table and root-page number
+# for every index in the database.
+#
+set sql {
+ SELECT name, tbl_name, rootpage FROM sqlite_master WHERE type='index'
+ ORDER BY 2, 1
+}
+foreach {name tbl_name rootpage} [db eval $sql] {
+ puts stderr "Analyzing index $name of table $tbl_name..."
+
+ # Code below traverses the index being analyzed (index name $name), using the
+ # btree cursor $csr. Statistics related to index $name are accumulated in
+ # the following variables:
+ #
+ set total_payload $wideZero ;# Payload space used by all entries
+ set total_ovfl $wideZero ;# Payload space on overflow pages
+ set unused_leaf $wideZero ;# Unused space on leaf nodes
+ set unused_ovfl $wideZero ;# Unused space on overflow pages
+ set cnt_ovfl $wideZero ;# Number of entries that use overflows
+ set cnt_leaf_entry $wideZero ;# Number of leaf entries
+ set mx_payload $wideZero ;# Maximum payload size
+ set ovfl_pages $wideZero ;# Number of overflow pages used
+ set leaf_pages $wideZero ;# Number of leaf pages
+
+ # As the btree is traversed, the array variable $seen($pgno) is set to 1
+ # the first time page $pgno is encountered.
+ #
+ catch {unset seen}
+
+ # The following loop runs once for each entry in index $name. The index
+ # is traversed using the btree cursor stored in variable $csr
+ #
+ set csr [btree_cursor $DB $rootpage 0]
+ for {btree_first $csr} {![btree_eof $csr]} {btree_next $csr} {
+ incr cnt_leaf_entry
+
+ # Retrieve information about the entry the btree-cursor points to into
+ # the array variable $ci (cursor info).
+ #
+ cursor_info ci $csr
+
+ # Check if the payload of this entry is greater than the current
+ # $mx_payload statistic for the table. Also increase the $total_payload
+ # statistic.
+ #
+ set payload [btree_keysize $csr]
+ if {$payload>$mx_payload} {set mx_payload $payload}
+ incr total_payload $payload
+
+ # If this entry uses overflow pages, then update the $cnt_ovfl,
+ # $total_ovfl, $ovfl_pages and $unused_ovfl statistics.
+ #
+ set ovfl [expr {$payload-$ci(local_payload_bytes)}]
+ if {$ovfl} {
+ incr cnt_ovfl
+ incr total_ovfl $ovfl
+ set n [expr {int(ceil($ovfl/($pageSize-4.0)))}]
+ incr ovfl_pages $n
+ incr unused_ovfl [expr {$n*($pageSize-4) - $ovfl}]
+ }
+
+ # If this is the first table entry analyzed for the page, then update
+ # the page-related statistics $leaf_pages and $unused_leaf.
+ #
+ if {![info exists seen($ci(page_no))]} {
+ set seen($ci(page_no)) 1
+ incr leaf_pages
+ incr unused_leaf $ci(page_freebytes)
+ }
+ }
+ btree_close_cursor $csr
+
+ # Handle the special case where an index contains no data. In this case
+ # all statistics are zero, except for the number of leaf pages (1) and
+ # the unused bytes on leaf pages ($pageSize - 8).
+ #
+ if {[llength [array names seen]]==0} {
+ set leaf_pages 1
+ set unused_leaf [expr {$pageSize-8}]
+ }
+
+ # Insert the statistics for the index analyzed into the in-memory database.
+ #
+ set sql "INSERT INTO space_used VALUES("
+ append sql [quote $name]
+ append sql ",[quote $tbl_name]"
+ append sql ",1"
+ append sql ",$cnt_leaf_entry"
+ append sql ",$cnt_leaf_entry"
+ append sql ",$total_payload"
+ append sql ",$total_ovfl"
+ append sql ",$cnt_ovfl"
+ append sql ",$mx_payload"
+ append sql ",0"
+ append sql ",$leaf_pages"
+ append sql ",$ovfl_pages"
+ append sql ",0"
+ append sql ",$unused_leaf"
+ append sql ",$unused_ovfl"
+ append sql );
+ mem eval $sql
+}
+
+# Generate a single line of output in the statistics section of the
+# report.
+#
+proc statline {title value {extra {}}} {
+ set len [string length $title]
+ set dots [string range {......................................} $len end]
+ set len [string length $value]
+ set sp2 [string range { } $len end]
+ if {$extra ne ""} {
+ set extra " $extra"
+ }
+ puts "$title$dots $value$sp2$extra"
+}
+
+# Generate a formatted percentage value for $num/$denom
+#
+proc percent {num denom {of {}}} {
+ if {$denom==0.0} {return ""}
+ set v [expr {$num*100.0/$denom}]
+ set of {}
+ if {$v==100.0 || $v<0.001 || ($v>1.0 && $v<99.0)} {
+ return [format {%5.1f%% %s} $v $of]
+ } elseif {$v<0.1 || $v>99.9} {
+ return [format {%7.3f%% %s} $v $of]
+ } else {
+ return [format {%6.2f%% %s} $v $of]
+ }
+}
+
+proc divide {num denom} {
+ if {$denom==0} {return 0.0}
+ return [format %.2f [expr double($num)/double($denom)]]
+}
+
+# Generate a subreport that covers some subset of the database.
+# the $where clause determines which subset to analyze.
+#
+proc subreport {title where} {
+ global pageSize file_pgcnt
+
+ # Query the in-memory database for the sum of various statistics
+ # for the subset of tables/indices identified by the WHERE clause in
+ # $where. Note that even if the WHERE clause matches no rows, the
+ # following query returns exactly one row (because it is an aggregate).
+ #
+ # The results of the query are stored directly by SQLite into local
+ # variables (i.e. $nentry, $nleaf etc.).
+ #
+ mem eval "
+ SELECT
+ int(sum(nentry)) AS nentry,
+ int(sum(leaf_entries)) AS nleaf,
+ int(sum(payload)) AS payload,
+ int(sum(ovfl_payload)) AS ovfl_payload,
+ max(mx_payload) AS mx_payload,
+ int(sum(ovfl_cnt)) as ovfl_cnt,
+ int(sum(leaf_pages)) AS leaf_pages,
+ int(sum(int_pages)) AS int_pages,
+ int(sum(ovfl_pages)) AS ovfl_pages,
+ int(sum(leaf_unused)) AS leaf_unused,
+ int(sum(int_unused)) AS int_unused,
+ int(sum(ovfl_unused)) AS ovfl_unused
+ FROM space_used WHERE $where" {} {}
+
+ # Output the sub-report title, nicely decorated with * characters.
+ #
+ puts ""
+ set len [string length $title]
+ set stars [string repeat * [expr 65-$len]]
+ puts "*** $title $stars"
+ puts ""
+
+ # Calculate statistics and store the results in TCL variables, as follows:
+ #
+ # total_pages: Database pages consumed.
+ # total_pages_percent: Pages consumed as a percentage of the file.
+ # storage: Bytes consumed.
+ # payload_percent: Payload bytes used as a percentage of $storage.
+ # total_unused: Unused bytes on pages.
+ # avg_payload: Average payload per btree entry.
+ # avg_fanout: Average fanout for internal pages.
+ # avg_unused: Average unused bytes per btree entry.
+ # ovfl_cnt_percent: Percentage of btree entries that use overflow pages.
+ #
+ set total_pages [expr {$leaf_pages+$int_pages+$ovfl_pages}]
+ set total_pages_percent [percent $total_pages $file_pgcnt]
+ set storage [expr {$total_pages*$pageSize}]
+ set payload_percent [percent $payload $storage {of storage consumed}]
+ set total_unused [expr {$ovfl_unused+$int_unused+$leaf_unused}]
+ set avg_payload [divide $payload $nleaf]
+ set avg_unused [divide $total_unused $nleaf]
+ if {$int_pages>0} {
+ # TODO: Is this formula correct?
+ set nTab [mem eval "
+ SELECT count(*) FROM (
+ SELECT DISTINCT tblname FROM space_used WHERE $where AND is_index=0
+ )
+ "]
+ set avg_fanout [mem eval "
+ SELECT (sum(leaf_pages+int_pages)-$nTab)/sum(int_pages) FROM space_used
+ WHERE $where AND is_index = 0
+ "]
+ set avg_fanout [format %.2f $avg_fanout]
+ }
+ set ovfl_cnt_percent [percent $ovfl_cnt $nleaf {of all entries}]
+
+ # Print out the sub-report statistics.
+ #
+ statline {Percentage of total database} $total_pages_percent
+ statline {Number of entries} $nleaf
+ statline {Bytes of storage consumed} $storage
+ statline {Bytes of payload} $payload $payload_percent
+ statline {Average payload per entry} $avg_payload
+ statline {Average unused bytes per entry} $avg_unused
+ if {[info exists avg_fanout]} {
+ statline {Average fanout} $avg_fanout
+ }
+ statline {Maximum payload per entry} $mx_payload
+ statline {Entries that use overflow} $ovfl_cnt $ovfl_cnt_percent
+ if {$int_pages>0} {
+ statline {Index pages used} $int_pages
+ }
+ statline {Primary pages used} $leaf_pages
+ statline {Overflow pages used} $ovfl_pages
+ statline {Total pages used} $total_pages
+ if {$int_unused>0} {
+ set int_unused_percent \
+ [percent $int_unused [expr {$int_pages*$pageSize}] {of index space}]
+ statline "Unused bytes on index pages" $int_unused $int_unused_percent
+ }
+ statline "Unused bytes on primary pages" $leaf_unused \
+ [percent $leaf_unused [expr {$leaf_pages*$pageSize}] {of primary space}]
+ statline "Unused bytes on overflow pages" $ovfl_unused \
+ [percent $ovfl_unused [expr {$ovfl_pages*$pageSize}] {of overflow space}]
+ statline "Unused bytes on all pages" $total_unused \
+ [percent $total_unused $storage {of all space}]
+ return 1
+}
+
+# Calculate the overhead in pages caused by auto-vacuum.
+#
+# This procedure calculates and returns the number of pages used by the
+# auto-vacuum 'pointer-map'. If the database does not support auto-vacuum,
+# then 0 is returned. The two arguments are the size of the database file in
+# pages and the page size used by the database (in bytes).
+proc autovacuum_overhead {filePages pageSize} {
+
+ # Read the value of meta 4. If non-zero, then the database supports
+ # auto-vacuum. It would be possible to use "PRAGMA auto_vacuum" instead,
+ # but that would not work if the SQLITE_OMIT_PRAGMA macro was defined
+ # when the library was built.
+ set meta4 [lindex [btree_get_meta $::DB] 4]
+
+ # If the database is not an auto-vacuum database or the file consists
+ # of one page only then there is no overhead for auto-vacuum. Return zero.
+ if {0==$meta4 || $filePages==1} {
+ return 0
+ }
+
+ # The number of entries on each pointer map page. The layout of the
+ # database file is one pointer-map page, followed by $ptrsPerPage other
+ # pages, followed by a pointer-map page etc. The first pointer-map page
+ # is the second page of the file overall.
+ set ptrsPerPage [expr double($pageSize/5)]
+
+ # Return the number of pointer map pages in the database.
+ return [expr int(ceil( ($filePages-1.0)/($ptrsPerPage+1.0) ))]
+}
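
A worked example of the formula in the proc above, with hypothetical page counts: for 1024-byte pages each pointer-map page covers 1024/5 = 204 entries, so a 1000-page auto-vacuum database carries ceil(999/205) = 5 pointer-map pages of overhead:

    set filePages 1000
    set pageSize  1024
    set ptrsPerPage [expr double($pageSize/5)]
    puts [expr int(ceil(($filePages-1.0)/($ptrsPerPage+1.0)))]   ;# => 5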
+
+
+# Calculate the summary statistics for the database and store the results
+# in TCL variables. They are output below. Variables are as follows:
+#
+# pageSize: Size of each page in bytes.
+# file_bytes: File size in bytes.
+# file_pgcnt: Number of pages in the file.
+# file_pgcnt2: Number of pages in the file (calculated).
+# av_pgcnt: Pages consumed by the auto-vacuum pointer-map.
+# av_percent: Percentage of the file consumed by auto-vacuum pointer-map.
+# inuse_pgcnt: Data pages in the file.
+# inuse_percent: Percentage of pages used to store data.
+# free_pgcnt: Free pages calculated as (<total pages> - <in-use pages>)
+# free_pgcnt2: Free pages in the file according to the file header.
+# free_percent: Percentage of file consumed by free pages (calculated).
+# free_percent2: Percentage of file consumed by free pages (header).
+# ntable: Number of tables in the db.
+# nindex: Number of indices in the db.
+# nautoindex: Number of indices created automatically.
+# nmanindex: Number of indices created manually.
+# user_payload: Number of bytes of payload in table btrees
+# (not including sqlite_master)
+# user_percent: $user_payload as a percentage of total file size.
+
+set file_bytes [file size $file_to_analyze]
+set file_pgcnt [expr {$file_bytes/$pageSize}]
+
+set av_pgcnt [autovacuum_overhead $file_pgcnt $pageSize]
+set av_percent [percent $av_pgcnt $file_pgcnt]
+
+set sql {SELECT sum(leaf_pages+int_pages+ovfl_pages) FROM space_used}
+set inuse_pgcnt [expr int([mem eval $sql])]
+set inuse_percent [percent $inuse_pgcnt $file_pgcnt]
+
+set free_pgcnt [expr $file_pgcnt-$inuse_pgcnt-$av_pgcnt]
+set free_percent [percent $free_pgcnt $file_pgcnt]
+set free_pgcnt2 [lindex [btree_get_meta $DB] 0]
+set free_percent2 [percent $free_pgcnt2 $file_pgcnt]
+
+set file_pgcnt2 [expr {$inuse_pgcnt+$free_pgcnt2+$av_pgcnt}]
+
+set ntable [db eval {SELECT count(*)+1 FROM sqlite_master WHERE type='table'}]
+set nindex [db eval {SELECT count(*) FROM sqlite_master WHERE type='index'}]
+set sql {SELECT count(*) FROM sqlite_master WHERE name LIKE 'sqlite_autoindex%'}
+set nautoindex [db eval $sql]
+set nmanindex [expr {$nindex-$nautoindex}]
+
+# set total_payload [mem eval "SELECT sum(payload) FROM space_used"]
+set user_payload [mem one {SELECT int(sum(payload)) FROM space_used
+ WHERE NOT is_index AND name NOT LIKE 'sqlite_master'}]
+set user_percent [percent $user_payload $file_bytes]
+
+# Output the summary statistics calculated above.
+#
+puts "/** Disk-Space Utilization Report For $file_to_analyze"
+catch {
+ puts "*** As of [clock format [clock seconds] -format {%Y-%b-%d %H:%M:%S}]"
+}
+puts ""
+statline {Page size in bytes} $pageSize
+statline {Pages in the whole file (measured)} $file_pgcnt
+statline {Pages in the whole file (calculated)} $file_pgcnt2
+statline {Pages that store data} $inuse_pgcnt $inuse_percent
+statline {Pages on the freelist (per header)} $free_pgcnt2 $free_percent2
+statline {Pages on the freelist (calculated)} $free_pgcnt $free_percent
+statline {Pages of auto-vacuum overhead} $av_pgcnt $av_percent
+statline {Number of tables in the database} $ntable
+statline {Number of indices} $nindex
+statline {Number of named indices} $nmanindex
+statline {Automatically generated indices} $nautoindex
+statline {Size of the file in bytes} $file_bytes
+statline {Bytes of user payload stored} $user_payload $user_percent
+
+# Output table rankings
+#
+puts ""
+puts "*** Page counts for all tables with their indices ********************"
+puts ""
+mem eval {SELECT tblname, count(*) AS cnt,
+ int(sum(int_pages+leaf_pages+ovfl_pages)) AS size
+ FROM space_used GROUP BY tblname ORDER BY size+0 DESC, tblname} {} {
+ statline [string toupper $tblname] $size [percent $size $file_pgcnt]
+}
+
+# Output subreports
+#
+if {$nindex>0} {
+ subreport {All tables and indices} 1
+}
+subreport {All tables} {NOT is_index}
+if {$nindex>0} {
+ subreport {All indices} {is_index}
+}
+foreach tbl [mem eval {SELECT name FROM space_used WHERE NOT is_index
+ ORDER BY name}] {
+ regsub ' $tbl '' qn
+ set name [string toupper $tbl]
+ set n [mem eval "SELECT count(*) FROM space_used WHERE tblname='$qn'"]
+ if {$n>1} {
+ subreport "Table $name and all its indices" "tblname='$qn'"
+ subreport "Table $name w/o any indices" "name='$qn'"
+ subreport "Indices of table $name" "tblname='$qn' AND is_index"
+ } else {
+ subreport "Table $name" "name='$qn'"
+ }
+}
+
+# Output instructions on what the numbers above mean.
+#
+puts {
+*** Definitions ******************************************************
+
+Page size in bytes
+
+ The number of bytes in a single page of the database file.
+ Usually 1024.
+
+Number of pages in the whole file
+}
+puts \
+" The number of $pageSize-byte pages that go into forming the complete
+ database"
+puts \
+{
+Pages that store data
+
+ The number of pages that store data, either as primary B*Tree pages or
+ as overflow pages. The number at the right is the data pages divided by
+ the total number of pages in the file.
+
+Pages on the freelist
+
+ The number of pages that are not currently in use but are reserved for
+ future use. The percentage at the right is the number of freelist pages
+ divided by the total number of pages in the file.
+
+Pages of auto-vacuum overhead
+
+ The number of pages that store data used by the database to facilitate
+ auto-vacuum. This is zero for databases that do not support auto-vacuum.
+
+Number of tables in the database
+
+ The number of tables in the database, including the SQLITE_MASTER table
+ used to store schema information.
+
+Number of indices
+
+ The total number of indices in the database.
+
+Number of named indices
+
+ The number of indices created using an explicit CREATE INDEX statement.
+
+Automatically generated indices
+
+ The number of indices used to implement PRIMARY KEY or UNIQUE constraints
+ on tables.
+
+Size of the file in bytes
+
+ The total amount of disk space used by the entire database file.
+
+Bytes of user payload stored
+
+ The total number of bytes of user payload stored in the database. The
+ schema information in the SQLITE_MASTER table is not counted when
+ computing this number. The percentage at the right shows the payload
+ divided by the total file size.
+
+Percentage of total database
+
+ The amount of the complete database file that is devoted to storing
+ information described by this category.
+
+Number of entries
+
+ The total number of B-Tree key/value pairs stored under this category.
+
+Bytes of storage consumed
+
+ The total amount of disk space required to store all B-Tree entries
+ under this category. This is the total number of pages used times
+ the page size.
+
+Bytes of payload
+
+ The amount of payload stored under this category. Payload is the data
+ part of table entries and the key part of index entries. The percentage
+ at the right is the bytes of payload divided by the bytes of storage
+ consumed.
+
+Average payload per entry
+
+ The average amount of payload on each entry. This is just the bytes of
+ payload divided by the number of entries.
+
+Average unused bytes per entry
+
+ The average amount of free space remaining on all pages under this
+ category on a per-entry basis. This is the number of unused bytes on
+ all pages divided by the number of entries.
+
+Maximum payload per entry
+
+ The largest payload size of any entry.
+
+Entries that use overflow
+
+ The number of entries that use one or more overflow pages.
+
+Total pages used
+
+ This is the number of pages used to hold all information in the current
+ category. This is the sum of index, primary, and overflow pages.
+
+Index pages used
+
+ This is the number of pages in a table B-tree that hold only key (rowid)
+ information and no data.
+
+Primary pages used
+
+ This is the number of B-tree pages that hold both key and data.
+
+Overflow pages used
+
+ The total number of overflow pages used for this category.
+
+Unused bytes on index pages
+
+ The total number of bytes of unused space on all index pages. The
+ percentage at the right is the number of unused bytes divided by the
+ total number of bytes on index pages.
+
+Unused bytes on primary pages
+
+ The total number of bytes of unused space on all primary pages. The
+ percentage at the right is the number of unused bytes divided by the
+ total number of bytes on primary pages.
+
+Unused bytes on overflow pages
+
+ The total number of bytes of unused space on all overflow pages. The
+ percentage at the right is the number of unused bytes divided by the
+ total number of bytes on overflow pages.
+
+Unused bytes on all pages
+
+ The total number of bytes of unused space on all primary and overflow
+ pages. The percentage at the right is the number of unused bytes
+ divided by the total number of bytes.
+}
+
+# Output a dump of the in-memory database. This can be used for more
+# complex offline analysis.
+#
+puts "**********************************************************************"
+puts "The entire text of this report can be sourced into any SQL database"
+puts "engine for further analysis. All of the text above is an SQL comment."
+puts "The data used to generate this report follows:"
+puts "*/"
+puts "BEGIN;"
+puts $tabledef
+unset -nocomplain x
+mem eval {SELECT * FROM space_used} x {
+ puts -nonewline "INSERT INTO space_used VALUES"
+ set sep (
+ foreach col $x(*) {
+ set v $x($col)
+ if {$v=="" || ![string is double $v]} {set v [quote $v]}
+ puts -nonewline $sep$v
+ set sep ,
+ }
+ puts ");"
+}
+puts "COMMIT;"
+
+} err]} {
+ puts "ERROR: $err"
+ puts $errorInfo
+ exit 1
+}
Added: freeswitch/trunk/libs/sqlite/tool/speedtest.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/speedtest.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,275 @@
+#!/usr/bin/tclsh
+#
+# Run this script using TCLSH to do a speed comparison between
+# various versions of SQLite, PostgreSQL, and MySQL.
+#
+
+# Run a test
+#
+set cnt 1
+proc runtest {title} {
+ global cnt
+ set sqlfile test$cnt.sql
+ puts "<h2>Test $cnt: $title</h2>"
+ incr cnt
+ set fd [open $sqlfile r]
+ set sql [string trim [read $fd [file size $sqlfile]]]
+ close $fd
+ set sx [split $sql \n]
+ set n [llength $sx]
+ if {$n>8} {
+ set sql {}
+ for {set i 0} {$i<3} {incr i} {append sql [lindex $sx $i]<br>\n}
+ append sql "<i>... [expr {$n-6}] lines omitted</i><br>\n"
+ for {set i [expr {$n-3}]} {$i<$n} {incr i} {
+ append sql [lindex $sx $i]<br>\n
+ }
+ } else {
+ regsub -all \n [string trim $sql] <br> sql
+ }
+ puts "<blockquote>"
+ puts "$sql"
+ puts "</blockquote><table border=0 cellpadding=0 cellspacing=0>"
+ set format {<tr><td>%s</td><td align="right"> %.3f</td></tr>}
+ set delay 1000
+# exec sync; after $delay;
+# set t [time "exec psql drh <$sqlfile" 1]
+# set t [expr {[lindex $t 0]/1000000.0}]
+# puts [format $format PostgreSQL: $t]
+ exec sync; after $delay;
+ set t [time "exec mysql -f drh <$sqlfile" 1]
+ set t [expr {[lindex $t 0]/1000000.0}]
+ puts [format $format MySQL: $t]
+# set t [time "exec ./sqlite232 s232.db <$sqlfile" 1]
+# set t [expr {[lindex $t 0]/1000000.0}]
+# puts [format $format {SQLite 2.3.2:} $t]
+# set t [time "exec ./sqlite-100 s100.db <$sqlfile" 1]
+# set t [expr {[lindex $t 0]/1000000.0}]
+# puts [format $format {SQLite 2.4 (cache=100):} $t]
+ exec sync; after $delay;
+ set t [time "exec ./sqlite248 s2k.db <$sqlfile" 1]
+ set t [expr {[lindex $t 0]/1000000.0}]
+ puts [format $format {SQLite 2.4.8:} $t]
+ exec sync; after $delay;
+ set t [time "exec ./sqlite248 sns.db <$sqlfile" 1]
+ set t [expr {[lindex $t 0]/1000000.0}]
+ puts [format $format {SQLite 2.4.8 (nosync):} $t]
+ exec sync; after $delay;
+ set t [time "exec ./sqlite2412 s2kb.db <$sqlfile" 1]
+ set t [expr {[lindex $t 0]/1000000.0}]
+ puts [format $format {SQLite 2.4.12:} $t]
+ exec sync; after $delay;
+ set t [time "exec ./sqlite2412 snsb.db <$sqlfile" 1]
+ set t [expr {[lindex $t 0]/1000000.0}]
+ puts [format $format {SQLite 2.4.12 (nosync):} $t]
+# set t [time "exec ./sqlite-t1 st1.db <$sqlfile" 1]
+# set t [expr {[lindex $t 0]/1000000.0}]
+# puts [format $format {SQLite 2.4 (test):} $t]
+ puts "</table>"
+}
+
+# Initialize the environment
+#
+expr srand(1)
+catch {exec /bin/sh -c {rm -f s*.db}}
+set fd [open clear.sql w]
+puts $fd {
+ drop table t1;
+ drop table t2;
+}
+close $fd
+catch {exec psql drh <clear.sql}
+catch {exec mysql drh <clear.sql}
+set fd [open 2kinit.sql w]
+puts $fd {
+ PRAGMA default_cache_size=2000;
+ PRAGMA default_synchronous=on;
+}
+close $fd
+exec ./sqlite248 s2k.db <2kinit.sql
+exec ./sqlite2412 s2kb.db <2kinit.sql
+set fd [open nosync-init.sql w]
+puts $fd {
+ PRAGMA default_cache_size=2000;
+ PRAGMA default_synchronous=off;
+}
+close $fd
+exec ./sqlite248 sns.db <nosync-init.sql
+exec ./sqlite2412 snsb.db <nosync-init.sql
+set ones {zero one two three four five six seven eight nine
+ ten eleven twelve thirteen fourteen fifteen sixteen seventeen
+ eighteen nineteen}
+set tens {{} ten twenty thirty forty fifty sixty seventy eighty ninety}
+proc number_name {n} {
+ if {$n>=1000} {
+ set txt "[number_name [expr {$n/1000}]] thousand"
+ set n [expr {$n%1000}]
+ } else {
+ set txt {}
+ }
+ if {$n>=100} {
+ append txt " [lindex $::ones [expr {$n/100}]] hundred"
+ set n [expr {$n%100}]
+ }
+ if {$n>=20} {
+ append txt " [lindex $::tens [expr {$n/10}]]"
+ set n [expr {$n%10}]
+ }
+ if {$n>0} {
+ append txt " [lindex $::ones $n]"
+ }
+ set txt [string trim $txt]
+ if {$txt==""} {set txt zero}
+ return $txt
+}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd "CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));"
+for {set i 1} {$i<=1000} {incr i} {
+ set r [expr {int(rand()*100000)}]
+ puts $fd "INSERT INTO t1 VALUES($i,$r,'[number_name $r]');"
+}
+close $fd
+runtest {1000 INSERTs}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+puts $fd "CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100));"
+for {set i 1} {$i<=25000} {incr i} {
+ set r [expr {int(rand()*500000)}]
+ puts $fd "INSERT INTO t2 VALUES($i,$r,'[number_name $r]');"
+}
+puts $fd "COMMIT;"
+close $fd
+runtest {25000 INSERTs in a transaction}
+
+
+
+set fd [open test$cnt.sql w]
+for {set i 0} {$i<100} {incr i} {
+ set lwr [expr {$i*100}]
+ set upr [expr {($i+10)*100}]
+ puts $fd "SELECT count(*), avg(b) FROM t2 WHERE b>=$lwr AND b<$upr;"
+}
+close $fd
+runtest {100 SELECTs without an index}
+
+
+
+set fd [open test$cnt.sql w]
+for {set i 1} {$i<=100} {incr i} {
+ puts $fd "SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%[number_name $i]%';"
+}
+close $fd
+runtest {100 SELECTs on a string comparison}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd {CREATE INDEX i2a ON t2(a);}
+puts $fd {CREATE INDEX i2b ON t2(b);}
+close $fd
+runtest {Creating an index}
+
+
+
+set fd [open test$cnt.sql w]
+for {set i 0} {$i<5000} {incr i} {
+ set lwr [expr {$i*100}]
+ set upr [expr {($i+1)*100}]
+ puts $fd "SELECT count(*), avg(b) FROM t2 WHERE b>=$lwr AND b<$upr;"
+}
+close $fd
+runtest {5000 SELECTs with an index}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+for {set i 0} {$i<1000} {incr i} {
+ set lwr [expr {$i*10}]
+ set upr [expr {($i+1)*10}]
+ puts $fd "UPDATE t1 SET b=b*2 WHERE a>=$lwr AND a<$upr;"
+}
+puts $fd "COMMIT;"
+close $fd
+runtest {1000 UPDATEs without an index}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+for {set i 1} {$i<=25000} {incr i} {
+ set r [expr {int(rand()*500000)}]
+ puts $fd "UPDATE t2 SET b=$r WHERE a=$i;"
+}
+puts $fd "COMMIT;"
+close $fd
+runtest {25000 UPDATEs with an index}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+for {set i 1} {$i<=25000} {incr i} {
+ set r [expr {int(rand()*500000)}]
+ puts $fd "UPDATE t2 SET c='[number_name $r]' WHERE a=$i;"
+}
+puts $fd "COMMIT;"
+close $fd
+runtest {25000 text UPDATEs with an index}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+puts $fd "INSERT INTO t1 SELECT * FROM t2;"
+puts $fd "INSERT INTO t2 SELECT * FROM t1;"
+puts $fd "COMMIT;"
+close $fd
+runtest {INSERTs from a SELECT}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd {DELETE FROM t2 WHERE c LIKE '%fifty%';}
+close $fd
+runtest {DELETE without an index}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd {DELETE FROM t2 WHERE a>10 AND a<20000;}
+close $fd
+runtest {DELETE with an index}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd {INSERT INTO t2 SELECT * FROM t1;}
+close $fd
+runtest {A big INSERT after a big DELETE}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd {BEGIN;}
+puts $fd {DELETE FROM t1;}
+for {set i 1} {$i<=3000} {incr i} {
+ set r [expr {int(rand()*100000)}]
+ puts $fd "INSERT INTO t1 VALUES($i,$r,'[number_name $r]');"
+}
+puts $fd {COMMIT;}
+close $fd
+runtest {A big DELETE followed by many small INSERTs}
+
+
+
+set fd [open test$cnt.sql w]
+puts $fd {DROP TABLE t1;}
+puts $fd {DROP TABLE t2;}
+close $fd
+runtest {DROP TABLE}
Added: freeswitch/trunk/libs/sqlite/tool/speedtest2.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/speedtest2.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,207 @@
+#!/usr/bin/tclsh
+#
+# Run this script using TCLSH to do a speed comparison between
+# various versions of SQLite, PostgreSQL, and MySQL.
+#
+
+# Run a test
+#
+set cnt 1
+proc runtest {title} {
+ global cnt
+ set sqlfile test$cnt.sql
+ puts "<h2>Test $cnt: $title</h2>"
+ incr cnt
+ set fd [open $sqlfile r]
+ set sql [string trim [read $fd [file size $sqlfile]]]
+ close $fd
+ set sx [split $sql \n]
+ set n [llength $sx]
+ if {$n>8} {
+ set sql {}
+ for {set i 0} {$i<3} {incr i} {append sql [lindex $sx $i]<br>\n}
+ append sql "<i>... [expr {$n-6}] lines omitted</i><br>\n"
+ for {set i [expr {$n-3}]} {$i<$n} {incr i} {
+ append sql [lindex $sx $i]<br>\n
+ }
+ } else {
+ regsub -all \n [string trim $sql] <br> sql
+ }
+ puts "<blockquote>"
+ puts "$sql"
+ puts "</blockquote><table border=0 cellpadding=0 cellspacing=0>"
+ set format {<tr><td>%s</td><td align="right"> %.3f</td></tr>}
+ set delay 1000
+ exec sync; after $delay;
+ set t [time "exec psql drh <$sqlfile" 1]
+ set t [expr {[lindex $t 0]/1000000.0}]
+ puts [format $format PostgreSQL: $t]
+ exec sync; after $delay;
+ set t [time "exec mysql -f drh <$sqlfile" 1]
+ set t [expr {[lindex $t 0]/1000000.0}]
+ puts [format $format MySQL: $t]
+# set t [time "exec ./sqlite232 s232.db <$sqlfile" 1]
+# set t [expr {[lindex $t 0]/1000000.0}]
+# puts [format $format {SQLite 2.3.2:} $t]
+# set t [time "exec ./sqlite-100 s100.db <$sqlfile" 1]
+# set t [expr {[lindex $t 0]/1000000.0}]
+# puts [format $format {SQLite 2.4 (cache=100):} $t]
+ exec sync; after $delay;
+ set t [time "exec ./sqlite240 s2k.db <$sqlfile" 1]
+ set t [expr {[lindex $t 0]/1000000.0}]
+ puts [format $format {SQLite 2.4:} $t]
+ exec sync; after $delay;
+ set t [time "exec ./sqlite240 sns.db <$sqlfile" 1]
+ set t [expr {[lindex $t 0]/1000000.0}]
+ puts [format $format {SQLite 2.4 (nosync):} $t]
+# set t [time "exec ./sqlite-t1 st1.db <$sqlfile" 1]
+# set t [expr {[lindex $t 0]/1000000.0}]
+# puts [format $format {SQLite 2.4 (test):} $t]
+ puts "</table>"
+}
+
+# Initialize the environment
+#
+expr srand(1)
+catch {exec /bin/sh -c {rm -f s*.db}}
+set fd [open clear.sql w]
+puts $fd {
+ drop table t1;
+ drop table t2;
+}
+close $fd
+catch {exec psql drh <clear.sql}
+catch {exec mysql drh <clear.sql}
+set fd [open 2kinit.sql w]
+puts $fd {
+ PRAGMA default_cache_size=2000;
+ PRAGMA default_synchronous=on;
+}
+close $fd
+exec ./sqlite240 s2k.db <2kinit.sql
+exec ./sqlite-t1 st1.db <2kinit.sql
+set fd [open nosync-init.sql w]
+puts $fd {
+ PRAGMA default_cache_size=2000;
+ PRAGMA default_synchronous=off;
+}
+close $fd
+exec ./sqlite240 sns.db <nosync-init.sql
+set ones {zero one two three four five six seven eight nine
+ ten eleven twelve thirteen fourteen fifteen sixteen seventeen
+ eighteen nineteen}
+set tens {{} ten twenty thirty forty fifty sixty seventy eighty ninety}
+proc number_name {n} {
+ if {$n>=1000} {
+ set txt "[number_name [expr {$n/1000}]] thousand"
+ set n [expr {$n%1000}]
+ } else {
+ set txt {}
+ }
+ if {$n>=100} {
+ append txt " [lindex $::ones [expr {$n/100}]] hundred"
+ set n [expr {$n%100}]
+ }
+ if {$n>=20} {
+ append txt " [lindex $::tens [expr {$n/10}]]"
+ set n [expr {$n%10}]
+ }
+ if {$n>0} {
+ append txt " [lindex $::ones $n]"
+ }
+ set txt [string trim $txt]
+ if {$txt==""} {set txt zero}
+ return $txt
+}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+puts $fd "CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));"
+for {set i 1} {$i<=25000} {incr i} {
+ set r [expr {int(rand()*500000)}]
+ puts $fd "INSERT INTO t1 VALUES($i,$r,'[number_name $r]');"
+}
+puts $fd "COMMIT;"
+close $fd
+runtest {25000 INSERTs in a transaction}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "DELETE FROM t1;"
+close $fd
+runtest {DELETE everything}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+for {set i 1} {$i<=25000} {incr i} {
+ set r [expr {int(rand()*500000)}]
+ puts $fd "INSERT INTO t1 VALUES($i,$r,'[number_name $r]');"
+}
+puts $fd "COMMIT;"
+close $fd
+runtest {25000 INSERTs in a transaction}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "DELETE FROM t1;"
+close $fd
+runtest {DELETE everything}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+for {set i 1} {$i<=25000} {incr i} {
+ set r [expr {int(rand()*500000)}]
+ puts $fd "INSERT INTO t1 VALUES($i,$r,'[number_name $r]');"
+}
+puts $fd "COMMIT;"
+close $fd
+runtest {25000 INSERTs in a transaction}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "DELETE FROM t1;"
+close $fd
+runtest {DELETE everything}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+for {set i 1} {$i<=25000} {incr i} {
+ set r [expr {int(rand()*500000)}]
+ puts $fd "INSERT INTO t1 VALUES($i,$r,'[number_name $r]');"
+}
+puts $fd "COMMIT;"
+close $fd
+runtest {25000 INSERTs in a transaction}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "DELETE FROM t1;"
+close $fd
+runtest {DELETE everything}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "BEGIN;"
+for {set i 1} {$i<=25000} {incr i} {
+ set r [expr {int(rand()*500000)}]
+ puts $fd "INSERT INTO t1 VALUES($i,$r,'[number_name $r]');"
+}
+puts $fd "COMMIT;"
+close $fd
+runtest {25000 INSERTs in a transaction}
+
+
+set fd [open test$cnt.sql w]
+puts $fd "DELETE FROM t1;"
+close $fd
+runtest {DELETE everything}
+
+
+set fd [open test$cnt.sql w]
+puts $fd {DROP TABLE t1;}
+close $fd
+runtest {DROP TABLE}
Added: freeswitch/trunk/libs/sqlite/www/arch.fig
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/arch.fig Tue Dec 19 15:11:50 2006
@@ -0,0 +1,64 @@
+#FIG 3.2
+Portrait
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 3.00 75.00 135.00
+ 3675 8550 3675 9075
+2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 3.00 75.00 135.00
+ 3675 7200 3675 7725
+2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 3.00 75.00 135.00
+ 3675 5775 3675 6300
+2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 3.00 75.00 135.00
+ 3675 3975 3675 4500
+2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 3.00 75.00 135.00
+ 3675 2625 3675 3150
+2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 3.00 75.00 135.00
+ 3675 1275 3675 1800
+2 1 0 3 0 7 100 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 3.00 75.00 135.00
+ 3675 9900 3675 10425
+2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
+ 2550 10425 4875 10425 4875 11250 2550 11250 2550 10425
+2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
+ 2550 9075 4875 9075 4875 9900 2550 9900 2550 9075
+2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
+ 2550 7725 4875 7725 4875 8550 2550 8550 2550 7725
+2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
+ 2550 6300 4875 6300 4875 7200 2550 7200 2550 6300
+2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
+ 2550 4500 4875 4500 4875 5775 2550 5775 2550 4500
+2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
+ 2550 3150 4875 3150 4875 3975 2550 3975 2550 3150
+2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
+ 2550 1800 4875 1800 4875 2625 2550 2625 2550 1800
+2 2 0 1 0 11 100 0 20 0.000 0 0 7 0 0 5
+ 2550 450 4875 450 4875 1275 2550 1275 2550 450
+4 1 0 100 0 0 20 0.0000 4 195 1020 3675 750 Interface\001
+4 1 0 100 0 0 14 0.0000 4 195 2040 3675 1125 main.c table.c tclsqlite.c\001
+4 1 0 100 0 0 20 0.0000 4 195 1920 3675 6675 Virtual Machine\001
+4 1 0 100 0 0 14 0.0000 4 150 570 3675 7050 vdbe.c\001
+4 1 0 100 0 0 20 0.0000 4 195 1830 3675 4875 Code Generator\001
+4 1 0 100 0 0 14 0.0000 4 195 1860 3675 5175 build.c delete.c expr.c\001
+4 1 0 100 0 0 14 0.0000 4 195 2115 3675 5400 insert.c select.c update.c\001
+4 1 0 100 0 0 14 0.0000 4 150 705 3675 5625 where.c\001
+4 1 0 100 0 0 20 0.0000 4 195 735 3675 3450 Parser\001
+4 1 0 100 0 0 20 0.0000 4 195 1140 3675 2100 Tokenizer\001
+4 1 0 100 0 0 14 0.0000 4 150 870 3675 2475 tokenize.c\001
+4 1 0 100 0 0 20 0.0000 4 255 1350 3675 9375 Page Cache\001
+4 1 0 100 0 0 14 0.0000 4 150 630 3675 3825 parse.y\001
+4 1 0 100 0 0 14 0.0000 4 150 600 3675 8400 btree.c\001
+4 1 0 100 0 0 14 0.0000 4 150 645 3675 9750 pager.c\001
+4 1 0 100 0 0 20 0.0000 4 195 1620 3675 8025 B-tree Driver\001
+4 1 0 100 0 0 14 0.0000 4 105 345 3675 11100 os.c\001
+4 1 0 100 0 0 20 0.0000 4 195 1470 3675 10725 OS Interface\001
Added: freeswitch/trunk/libs/sqlite/www/arch.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/arch.png
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/arch.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/arch.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,221 @@
+#
+# Run this Tcl script to generate the sqlite.html file.
+#
+set rcsid {$Id: arch.tcl,v 1.16 2004/10/10 17:24:54 drh Exp $}
+source common.tcl
+header {Architecture of SQLite}
+puts {
+<h2>The Architecture Of SQLite</h2>
+
+<h3>Introduction</h3>
+
+<table align="right" border="1" cellpadding="15" cellspacing="1">
+<tr><th>Block Diagram Of SQLite</th></tr>
+<tr><td><img src="arch2.gif"></td></tr>
+</table>
+<p>This document describes the architecture of the SQLite library.
+The information here is useful to those who want to understand or
+modify the inner workings of SQLite.
+</p>
+
+<p>
+A block diagram showing the main components of SQLite
+and how they interrelate is shown at the right. The text that
+follows will provide a quick overview of each of these components.
+</p>
+
+
+<p>
+This document describes SQLite version 3.0. Version 2.8 and
+earlier are similar but the details differ.
+</p>
+
+<h3>Interface</h3>
+
+<p>Much of the public interface to the SQLite library is implemented by
+functions found in the <b>main.c</b>, <b>legacy.c</b>, and
+<b>vdbeapi.c</b> source files
+though some routines are
+scattered about in other files where they can have access to data
+structures with file scope. The
+<b>sqlite3_get_table()</b> routine is implemented in <b>table.c</b>.
+<b>sqlite3_mprintf()</b> is found in <b>printf.c</b>.
+<b>sqlite3_complete()</b> is in <b>tokenize.c</b>.
+The Tcl interface is implemented by <b>tclsqlite.c</b>. More
+information on the C interface to SQLite is
+<a href="capi3ref.html">available separately</a>.</p>
+
+<p>To avoid name collisions with other software, all external
+symbols in the SQLite library begin with the prefix <b>sqlite3</b>.
+Those symbols that are intended for external use (in other words,
+those symbols which form the API for SQLite) begin
+with <b>sqlite3_</b>.</p>
+
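+<p>As a small, purely illustrative sketch of this convention (not part of
+the library documentation proper), the fragment below calls three of the
+public <b>sqlite3_</b> entry points; the file name and table are arbitrary
+examples and error handling is mostly omitted:</p>
+
+<blockquote><pre>
+#include "sqlite3.h"
+
+int demo(void){
+  sqlite3 *db;
+  if( sqlite3_open("demo.db", &amp;db)!=SQLITE_OK ) return 1;  /* public API call */
+  sqlite3_exec(db, "CREATE TABLE demo(x);", 0, 0, 0);        /* public API call */
+  sqlite3_close(db);                                         /* public API call */
+  return 0;
+}
+</pre></blockquote>
+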
+<h3>Tokenizer</h3>
+
+<p>When a string containing SQL statements is to be executed, the
+interface passes that string to the tokenizer. The job of the tokenizer
+is to break the original string up into tokens and pass those tokens
+one by one to the parser. The tokenizer is hand-coded in C in
+the file <b>tokenize.c</b>.
+
+<p>Note that in this design, the tokenizer calls the parser. People
+who are familiar with YACC and BISON may be used to doing things the
+other way around -- having the parser call the tokenizer. The author
+of SQLite
+has done it both ways and finds things generally work out nicer for
+the tokenizer to call the parser. YACC has it backwards.</p>
+
+<h3>Parser</h3>
+
+<p>The parser is the piece that assigns meaning to tokens based on
+their context. The parser for SQLite is generated using the
+<a href="http://www.hwaci.com/sw/lemon/">Lemon</a> LALR(1) parser
+generator. Lemon does the same job as YACC/BISON, but it uses
+a different input syntax which is less error-prone.
+Lemon also generates a parser which is reentrant and thread-safe.
+And lemon defines the concept of a non-terminal destructor so
+that it does not leak memory when syntax errors are encountered.
+The source file that drives Lemon is found in <b>parse.y</b>.</p>
+
+<p>Because
+lemon is a program not normally found on development machines, the
+complete source code to lemon (just one C file) is included in the
+SQLite distribution in the "tool" subdirectory. Documentation on
+lemon is found in the "doc" subdirectory of the distribution.
+</p>
+
+<h3>Code Generator</h3>
+
+<p>After the parser assembles tokens into complete SQL statements,
+it calls the code generator to produce virtual machine code that
+will do the work that the SQL statements request. There are many
+files in the code generator:
+<b>attach.c</b>,
+<b>auth.c</b>,
+<b>build.c</b>,
+<b>delete.c</b>,
+<b>expr.c</b>,
+<b>insert.c</b>,
+<b>pragma.c</b>,
+<b>select.c</b>,
+<b>trigger.c</b>,
+<b>update.c</b>,
+<b>vacuum.c</b>
+and <b>where.c</b>.
+These files are where most of the serious magic happens.
+<b>expr.c</b> handles code generation for expressions.
+<b>where.c</b> handles code generation for WHERE clauses on
+SELECT, UPDATE, and DELETE statements. The files <b>attach.c</b>,
+<b>delete.c</b>, <b>insert.c</b>, <b>select.c</b>, <b>trigger.c</b>,
+<b>update.c</b>, and <b>vacuum.c</b> handle the code generation
+for SQL statements with the same names. (Each of these files calls routines
+in <b>expr.c</b> and <b>where.c</b> as necessary.) All other
+SQL statements are coded out of <b>build.c</b>.
+The <b>auth.c</b> file implements the functionality of
+<b>sqlite3_set_authorizer()</b>.</p>
+
+<h3>Virtual Machine</h3>
+
+<p>The program generated by the code generator is executed by
+the virtual machine. Additional information about the virtual
+machine is <a href="opcode.html">available separately</a>.
+To summarize, the virtual machine implements an abstract computing
+engine specifically designed to manipulate database files. The
+machine has a stack which is used for intermediate storage.
+Each instruction contains an opcode and
+up to three additional operands.</p>
+
+<p>The virtual machine itself is entirely contained in a single
+source file <b>vdbe.c</b>. The virtual machine also has
+its own header files: <b>vdbe.h</b> that defines an interface
+between the virtual machine and the rest of the SQLite library and
+<b>vdbeInt.h</b> which defines structures private to the virtual machine.
+The <b>vdbeaux.c</b> file contains utilities used by the virtual
+machine and interface modules used by the rest of the library to
+construct VM programs. The <b>vdbeapi.c</b> file contains external
+interfaces to the virtual machine such as the
+<b>sqlite3_bind_...</b> family of functions. Individual values
+(strings, integers, floating point numbers, and BLOBs) are stored
+in an internal object named "Mem" which is implemented by
+<b>vdbemem.c</b>.</p>
+
+<p>
+SQLite implements SQL functions using callbacks to C-language routines.
+Even the built-in SQL functions are implemented this way. Most of
+the built-in SQL functions (ex: <b>coalesce()</b>, <b>count()</b>,
+<b>substr()</b>, and so forth) can be found in <b>func.c</b>.
+Date and time conversion functions are found in <b>date.c</b>.
+</p>
+
+<h3>B-Tree</h3>
+
+<p>An SQLite database is maintained on disk using a B-tree implementation
+found in the <b>btree.c</b> source file. A separate B-tree is used for
+each table and index in the database. All B-trees are stored in the
+same disk file. Details of the file format are recorded in a large
+comment at the beginning of <b>btree.c</b>.</p>
+
+<p>The interface to the B-tree subsystem is defined by the header file
+<b>btree.h</b>.
+</p>
+
+<h3>Page Cache</h3>
+
+<p>The B-tree module requests information from the disk in fixed-size
+chunks. The default chunk size is 1024 bytes but can vary between 512
+and 65536 bytes.
+The page cache is responsible for reading, writing, and
+caching these chunks.
+The page cache also provides the rollback and atomic commit abstraction
+and takes care of locking of the database file. The
+B-tree driver requests particular pages from the page cache and notifies
+the page cache when it wants to modify pages or to commit or roll back
+changes, and the page cache handles all the messy details of making sure
+the requests are handled quickly, safely, and efficiently.</p>
+
+<p>The code to implement the page cache is contained in the single C
+source file <b>pager.c</b>. The interface to the page cache subsystem
+is defined by the header file <b>pager.h</b>.
+</p>
+
+<h3>OS Interface</h3>
+
+<p>
+In order to provide portability between POSIX and Win32 operating systems,
+SQLite uses an abstraction layer to interface with the operating system.
+The interface to the OS abstraction layer is defined in
+<b>os.h</b>. Each supported operating system has its own implementation:
+<b>os_unix.c</b> for Unix, <b>os_win.c</b> for Windows, and so forth.
+Each of these operating-system-specific implementations typically has its own
+header file: <b>os_unix.h</b>, <b>os_win.h</b>, etc.
+</p>
+
+<h3>Utilities</h3>
+
+<p>
+Memory allocation and caseless string comparison routines are located
+in <b>util.c</b>.
+Symbol tables used by the parser are maintained by hash tables found
+in <b>hash.c</b>. The <b>utf.c</b> source file contains Unicode
+conversion subroutines.
+SQLite has its own private implementation of <b>printf()</b> (with
+some extensions) in <b>printf.c</b> and its own random number generator
+in <b>random.c</b>.
+</p>
+
+<h3>Test Code</h3>
+
+<p>
+If you count regression test scripts,
+more than half the total code base of SQLite is devoted to testing.
+There are many <b>assert()</b> statements in the main code files.
+In addition, the source files <b>test1.c</b> through <b>test5.c</b>
+together with <b>md5.c</b> implement extensions used for testing
+purposes only. The <b>os_test.c</b> backend interface is used to
+simulate power failures to verify the crash-recovery mechanism in
+the pager.
+</p>
+
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/arch2.fig
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/arch2.fig Tue Dec 19 15:11:50 2006
@@ -0,0 +1,123 @@
+#FIG 3.2
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+0 32 #000000
+0 33 #868686
+0 34 #dfefd7
+0 35 #d7efef
+0 36 #efdbef
+0 37 #efdbd7
+0 38 #e7efcf
+0 39 #9e9e9e
+6 3225 3900 4650 6000
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 5475 4575 5475 4575 5925 3225 5925 3225 5475
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 5550 4650 5550 4650 6000 3300 6000 3300 5550
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 4650 4575 4650 4575 5100 3225 5100 3225 4650
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 4725 4650 4725 4650 5175 3300 5175 3300 4725
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 3900 4575 3900 4575 4350 3225 4350 3225 3900
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 3975 4650 3975 4650 4425 3300 4425 3300 3975
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 4350 3900 4650
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 5100 3900 5475
+4 1 0 50 0 2 12 0.0000 4 135 1050 3900 5775 OS Interface\001
+4 1 0 50 0 2 12 0.0000 4 135 615 3900 4200 B-Tree\001
+4 1 0 50 0 2 12 0.0000 4 180 495 3900 4950 Pager\001
+-6
+6 5400 4725 6825 5250
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 4725 6750 4725 6750 5175 5400 5175 5400 4725
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 4800 6825 4800 6825 5250 5475 5250 5475 4800
+4 1 0 50 0 2 12 0.0000 4 135 630 6000 5025 Utilities\001
+-6
+6 5400 5550 6825 6075
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 5550 6750 5550 6750 6000 5400 6000 5400 5550
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 5625 6825 5625 6825 6075 5475 6075 5475 5625
+4 1 0 50 0 2 12 0.0000 4 135 855 6000 5850 Test Code\001
+-6
+6 5400 2775 6825 3750
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 2850 6825 2850 6825 3750 5475 3750 5475 2850
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 2775 6750 2775 6750 3675 5400 3675 5400 2775
+4 1 0 50 0 2 12 0.0000 4 135 420 6075 3150 Code\001
+4 1 0 50 0 2 12 0.0000 4 135 855 6075 3375 Generator\001
+-6
+6 5400 1950 6825 2475
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 1950 6750 1950 6750 2400 5400 2400 5400 1950
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 2025 6825 2025 6825 2475 5475 2475 5475 2025
+4 1 0 50 0 2 12 0.0000 4 135 570 6075 2250 Parser\001
+-6
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 1050 6750 1050 6750 1500 5400 1500 5400 1050
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 1125 6825 1125 6825 1575 5475 1575 5475 1125
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 1050 4575 1050 4575 1500 3225 1500 3225 1050
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 1125 4650 1125 4650 1575 3300 1575 3300 1125
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 1800 4575 1800 4575 2250 3225 2250 3225 1800
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 1875 4650 1875 4650 2325 3300 2325 3300 1875
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 2550 4575 2550 4575 3000 3225 3000 3225 2550
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 2625 4650 2625 4650 3075 3300 3075 3300 2625
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 1500 3900 1800
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 2250 3900 2550
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 3000 3900 3900
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4575 1950 5400 1350
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 5400 2925 4650 2325
+2 2 0 1 0 34 55 0 20 0.000 0 0 -1 0 0 5
+ 2850 750 4875 750 4875 3375 2850 3375 2850 750
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 6075 1500 6075 1950
+2 3 0 1 0 35 55 0 20 0.000 0 0 -1 0 0 5
+ 2850 3675 4875 3675 4875 6225 2850 6225 2850 3675
+2 2 0 1 0 37 55 0 20 0.000 0 0 -1 0 0 5
+ 5175 750 7200 750 7200 4050 5175 4050 5175 750
+2 2 0 1 0 38 55 0 20 0.000 0 0 -1 0 0 5
+ 5175 4425 7200 4425 7200 6225 5175 6225 5175 4425
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 6075 2475 6075 2775
+4 1 0 50 0 2 12 0.0000 4 135 855 6075 1350 Tokenizer\001
+4 1 0 50 0 1 12 1.5708 4 180 1020 7125 2250 SQL Compiler\001
+4 1 0 50 0 1 12 1.5708 4 135 345 3075 2025 Core\001
+4 1 0 50 0 2 12 0.0000 4 135 1290 3900 2850 Virtual Machine\001
+4 1 0 50 0 2 12 0.0000 4 165 1185 3900 1995 SQL Command\001
+4 1 0 50 0 2 12 0.0000 4 135 855 3900 2183 Processor\001
+4 1 0 50 0 2 14 0.0000 4 150 870 3900 1350 Interface\001
+4 1 0 50 0 1 12 1.5708 4 135 885 7125 5400 Accessories\001
+4 1 0 50 0 1 12 1.5708 4 135 645 3075 4875 Backend\001
Added: freeswitch/trunk/libs/sqlite/www/arch2.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/arch2b.fig
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/arch2b.fig Tue Dec 19 15:11:50 2006
@@ -0,0 +1,125 @@
+#FIG 3.2
+Landscape
+Center
+Inches
+Letter
+100.00
+Single
+-2
+1200 2
+0 32 #000000
+0 33 #868686
+0 34 #dfefd7
+0 35 #d7efef
+0 36 #efdbef
+0 37 #efdbd7
+0 38 #e7efcf
+0 39 #9e9e9e
+6 3225 3900 4650 6000
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 5475 4575 5475 4575 5925 3225 5925 3225 5475
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 5550 4650 5550 4650 6000 3300 6000 3300 5550
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 4650 4575 4650 4575 5100 3225 5100 3225 4650
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 4725 4650 4725 4650 5175 3300 5175 3300 4725
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 3900 4575 3900 4575 4350 3225 4350 3225 3900
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 3975 4650 3975 4650 4425 3300 4425 3300 3975
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 4350 3900 4650
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 5100 3900 5475
+4 1 0 50 0 2 12 0.0000 4 135 1050 3900 5775 OS Interface\001
+4 1 0 50 0 2 12 0.0000 4 135 615 3900 4200 B-Tree\001
+4 1 0 50 0 2 12 0.0000 4 180 495 3900 4950 Pager\001
+-6
+6 5175 4275 7200 6150
+6 5400 4519 6825 5090
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 4519 6750 4519 6750 5009 5400 5009 5400 4519
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 4601 6825 4601 6825 5090 5475 5090 5475 4601
+4 1 0 50 0 2 12 0.0000 4 135 630 6000 4845 Utilities\001
+-6
+6 5400 5416 6825 5987
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 5416 6750 5416 6750 5906 5400 5906 5400 5416
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 5498 6825 5498 6825 5987 5475 5987 5475 5498
+4 1 0 50 0 2 12 0.0000 4 135 855 6000 5742 Test Code\001
+-6
+2 2 0 1 0 38 55 0 20 0.000 0 0 -1 0 0 5
+ 5175 4275 7200 4275 7200 6150 5175 6150 5175 4275
+4 1 0 50 0 1 12 1.5708 4 135 885 7125 5253 Accessories\001
+-6
+6 5400 2700 6825 3675
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 2775 6825 2775 6825 3675 5475 3675 5475 2775
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 2700 6750 2700 6750 3600 5400 3600 5400 2700
+4 1 0 50 0 2 12 0.0000 4 135 420 6075 3075 Code\001
+4 1 0 50 0 2 12 0.0000 4 135 855 6075 3300 Generator\001
+-6
+6 5400 1875 6825 2400
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 1875 6750 1875 6750 2325 5400 2325 5400 1875
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 1950 6825 1950 6825 2400 5475 2400 5475 1950
+4 1 0 50 0 2 12 0.0000 4 135 570 6075 2175 Parser\001
+-6
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 5400 1050 6750 1050 6750 1500 5400 1500 5400 1050
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 5475 1125 6825 1125 6825 1575 5475 1575 5475 1125
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 1050 4575 1050 4575 1500 3225 1500 3225 1050
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 1125 4650 1125 4650 1575 3300 1575 3300 1125
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 1800 4575 1800 4575 2250 3225 2250 3225 1800
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 1875 4650 1875 4650 2325 3300 2325 3300 1875
+2 2 0 1 0 7 51 0 20 0.000 0 0 -1 0 0 5
+ 3225 2550 4575 2550 4575 3000 3225 3000 3225 2550
+2 2 0 0 0 33 52 0 20 0.000 0 0 -1 0 0 5
+ 3300 2625 4650 2625 4650 3075 3300 3075 3300 2625
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 1500 3900 1800
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 2250 3900 2550
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 3900 3000 3900 3900
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 4575 1950 5400 1350
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 5400 2925 4650 2175
+2 2 0 1 0 34 55 0 20 0.000 0 0 -1 0 0 5
+ 2850 750 4875 750 4875 3375 2850 3375 2850 750
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 6075 1500 6075 1800
+2 3 0 1 0 35 55 0 20 0.000 0 0 -1 0 0 5
+ 2850 3675 4875 3675 4875 6150 2850 6150 2850 3675
+2 2 0 1 0 37 55 0 20 0.000 0 0 -1 0 0 5
+ 5175 750 7200 750 7200 3975 5175 3975 5175 750
+2 1 0 1 0 38 50 0 -1 0.000 0 0 -1 1 0 2
+ 1 1 1.00 60.00 120.00
+ 6075 2400 6075 2700
+4 1 0 50 0 2 12 0.0000 4 135 855 6075 1350 Tokenizer\001
+4 1 0 50 0 1 12 1.5708 4 180 1020 7125 2250 SQL Compiler\001
+4 1 0 50 0 1 12 1.5708 4 135 345 3075 2025 Core\001
+4 1 0 50 0 2 12 0.0000 4 135 1290 3900 2850 Virtual Machine\001
+4 1 0 50 0 2 12 0.0000 4 165 1185 3900 1995 SQL Command\001
+4 1 0 50 0 2 12 0.0000 4 135 855 3900 2183 Processor\001
+4 1 0 50 0 2 14 0.0000 4 150 870 3900 1350 Interface\001
+4 1 0 50 0 1 12 1.5708 4 135 645 3075 4875 Backend\001
Added: freeswitch/trunk/libs/sqlite/www/audit.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/audit.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,214 @@
+#
+# Run this Tcl script to generate the audit.html file.
+#
+set rcsid {$Id: audit.tcl,v 1.1 2002/07/13 16:52:35 drh Exp $}
+
+puts {<html>
+<head>
+ <title>SQLite Security Audit Procedure</title>
+</head>
+<body bgcolor=white>
+<h1 align=center>
+SQLite Security Audit Procedure
+</h1>}
+puts "<p align=center>
+(This page was last modified on [lrange $rcsid 3 4] UTC)
+</p>"
+
+puts {
+<p>
+A security audit for SQLite consists of two components. First, there is
+a check for common errors that often lead to security problems. Second,
+an attempt is made to construct a proof that SQLite has certain desirable
+security properties.
+</p>
+
+<h2>Part I: Things to check</h2>
+
+<p>
+Scan all source code and check for the following common errors:
+</p>
+
+<ol>
+<li><p>
+Verify that the destination buffer is large enough to hold its result
+in every call to the following routines:
+<ul>
+<li> <b>strcpy()</b> </li>
+<li> <b>strncpy()</b> </li>
+<li> <b>strcat()</b> </li>
+<li> <b>memcpy()</b> </li>
+<li> <b>memset()</b> </li>
+<li> <b>memmove()</b> </li>
+<li> <b>bcopy()</b> </li>
+<li> <b>sprintf()</b> </li>
+<li> <b>scanf()</b> </li>
+</ul>
+</p></li>
+<li><p>
+Verify that pointers returned by subroutines are not NULL before using
+the pointers. In particular, make sure the return values for the following
+routines are checked before they are used:
+<ul>
+<li> <b>malloc()</b> </li>
+<li> <b>realloc()</b> </li>
+<li> <b>sqliteMalloc()</b> </li>
+<li> <b>sqliteRealloc()</b> </li>
+<li> <b>sqliteStrDup()</b> </li>
+<li> <b>sqliteStrNDup()</b> </li>
+<li> <b>sqliteExpr()</b> </li>
+<li> <b>sqliteExprFunction()</b> </li>
+<li> <b>sqliteExprListAppend()</b> </li>
+<li> <b>sqliteResultSetOfSelect()</b> </li>
+<li> <b>sqliteIdListAppend()</b> </li>
+<li> <b>sqliteSrcListAppend()</b> </li>
+<li> <b>sqliteSelectNew()</b> </li>
+<li> <b>sqliteTableNameToTable()</b> </li>
+<li> <b>sqliteTableTokenToSrcList()</b> </li>
+<li> <b>sqliteWhereBegin()</b> </li>
+<li> <b>sqliteFindTable()</b> </li>
+<li> <b>sqliteFindIndex()</b> </li>
+<li> <b>sqliteTableNameFromToken()</b> </li>
+<li> <b>sqliteGetVdbe()</b> </li>
+<li> <b>sqlite_mprintf()</b> </li>
+<li> <b>sqliteExprDup()</b> </li>
+<li> <b>sqliteExprListDup()</b> </li>
+<li> <b>sqliteSrcListDup()</b> </li>
+<li> <b>sqliteIdListDup()</b> </li>
+<li> <b>sqliteSelectDup()</b> </li>
+<li> <b>sqliteFindFunction()</b> </li>
+<li> <b>sqliteTriggerSelectStep()</b> </li>
+<li> <b>sqliteTriggerInsertStep()</b> </li>
+<li> <b>sqliteTriggerUpdateStep()</b> </li>
+<li> <b>sqliteTriggerDeleteStep()</b> </li>
+</ul>
+</p></li>
+<li><p>
+On all functions and procedures, verify that pointer parameters are not NULL
+before dereferencing those parameters.
+</p></li>
+<li><p>
+Check to make sure that temporary files are opened safely: that the process
+will not overwrite an existing file when opening the temp file and that
+another process is unable to substitute a file for the temp file being
+opened. (A minimal sketch of one such safe-open pattern follows this list.)
+</p></li>
+</ol>
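+
+<p>
+By way of illustration only (this is a generic safe-open pattern, not a
+description of how SQLite itself creates its temporary files), item 4
+is typically satisfied on POSIX systems by creating the file with the
+O_EXCL flag so that an existing file is never opened or overwritten:
+</p>
+
+<blockquote><pre>
+#include &lt;fcntl.h&gt;
+
+/* Create a brand-new temporary file.  Because of O_EXCL the call fails
+** (returns a negative descriptor) if the name already exists, so an
+** existing file can be neither overwritten nor substituted. */
+int open_temp_file(const char *zName){
+  return open(zName, O_RDWR|O_CREAT|O_EXCL, 0600);
+}
+</pre></blockquote>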
+
+
+
+<h2>Part II: Things to prove</h2>
+
+<p>
+Prove that SQLite exhibits the characteristics outlined below:
+</p>
+
+<ol>
+<li><p>
+The following are preconditions:</p>
+<p><ul>
+<li><b>Z</b> is an arbitrary-length NUL-terminated string.</li>
+<li>An existing SQLite database has been opened. The return value
+ from the call to <b>sqlite_open()</b> is stored in the variable
+ <b>db</b>.</li>
+<li>The database contains at least one table of the form:
+<blockquote><pre>
+CREATE TABLE t1(a CLOB);
+</pre></blockquote></li>
+<li>There are no user-defined functions other than the standard
+ build-in functions.</li>
+</ul></p>
+<p>The following statement of C code is executed:</p>
+<blockquote><pre>
+sqlite_exec_printf(
+ db,
+ "INSERT INTO t1(a) VALUES('%q');",
+ 0, 0, 0, Z
+);
+</pre></blockquote>
+<p>Prove the following are true for all possible values of string <b>Z</b>:</p>
+<ol type="a">
+<li><p>
+The call to <b>sqlite_exec_printf()</b> will
+return in a length of time that is a polynomial in <b>strlen(Z)</b>.
+It might return an error code but it will not crash.
+</p></li>
+<li><p>
+At most one new row will be inserted into table t1.
+</p></li>
+<li><p>
+No preexisting rows of t1 will be deleted or modified.
+</p></li>
+<li><p>
+No tables other than t1 will be altered in any way.
+</p></li>
+<li><p>
+No preexisting files on the host computer's filesystem, other than
+the database file itself, will be deleted or modified.
+</p></li>
+<li><p>
+For some constants <b>K1</b> and <b>K2</b>,
+if at least <b>K1*strlen(Z) + K2</b> bytes of contiguous memory are
+available to <b>malloc()</b>, then the call to <b>sqlite_exec_printf()</b>
+will not return SQLITE_NOMEM.
+</p></li>
+</ol>
+</p></li>
+
+
+<li><p>
+The following are preconditions:
+<p><ul>
+<li><b>Z</b> is an arbitrary-length NUL-terminated string.</li>
+<li>An existing SQLite database has been opened. The return value
+ from the call to <b>sqlite_open()</b> is stored in the variable
+ <b>db</b>.</li>
+<li>There exists a callback function <b>cb()</b> that appends all
+ information passed in through its parameters into a single
+ data buffer called <b>Y</b>.</li>
+<li>There are no user-defined functions other than the standard
+ built-in functions.</li>
+</ul></p>
+<p>The following statement of C code is executed:</p>
+<blockquote><pre>
+sqlite_exec(db, Z, cb, 0, 0);
+</pre></blockquote>
+<p>Prove the following are true for all possible values of string <b>Z</b>:</p>
+<ol type="a">
+<li><p>
+The call to <b>sqlite_exec()</b> will
+return in a length of time which is a polynomial in <b>strlen(Z)</b>.
+It might return an error code but it will not crash.
+</p></li>
+<li><p>
+After <b>sqlite_exec()</b> returns, the buffer <b>Y</b> will not contain
+any content from any preexisting file on the host computer's file system,
+except for the database file.
+</p></li>
+<li><p>
+After the call to <b>sqlite_exec()</b> returns, the database file will
+still be well-formed. It might not contain the same data, but it will
+still be a properly constructed SQLite database file.
+</p></li>
+<li><p>
+No preexisting files on the host computer's filesystem, other than
+the database file itself, will be deleted or modified.
+</p></li>
+<li><p>
+For some constants <b>K1</b> and <b>K2</b>,
+if at least <b>K1*strlen(Z) + K2</b> bytes of contiguous memory are
+available to <b>malloc()</b>, then the call to <b>sqlite_exec()</b>
+will not return SQLITE_NOMEM.
+</p></li>
+</ol>
+</p></li>
+
+</ol>
+}
+puts {
+<p><hr /></p>
+<p><a href="index.html"><img src="/goback.jpg" border=0 />
+Back to the SQLite Home Page</a>
+</p>
+
+</body></html>}
Added: freeswitch/trunk/libs/sqlite/www/autoinc.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/autoinc.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,109 @@
+#
+# Run this Tcl script to generate the autoinc.html file.
+#
+set rcsid {$Id: }
+source common.tcl
+
+if {[llength $argv]>0} {
+ set outputdir [lindex $argv 0]
+} else {
+ set outputdir ""
+}
+
+header {SQLite Autoincrement}
+puts {
+<h1>SQLite Autoincrement</h1>
+
+<p>
+In SQLite, every row of every table has an integer ROWID.
+The ROWID for each row is unique among all rows in the same table.
+In SQLite version 2.8 the ROWID is a 32-bit signed integer.
+Version 3.0 of SQLite expanded the ROWID to be a 64-bit signed integer.
+</p>
+
+<p>
+You can access the ROWID of an SQLite table using one of the special column
+names ROWID, _ROWID_, or OID.
+If, however, you declare an ordinary table column to use one of those special
+names, then the use of that name will refer to the declared column, not
+to the internal ROWID.
+</p>
+
+<p>
+If a table contains a column of type INTEGER PRIMARY KEY, then that
+column becomes an alias for the ROWID. You can then access the ROWID
+using any of four different names, the original three names described above
+or the name given to the INTEGER PRIMARY KEY column. All these names are
+aliases for one another and work equally well in any context.
+</p>
+
+<p>
+When a new row is inserted into an SQLite table, the ROWID can either
+be specified as part of the INSERT statement or it can be assigned
+automatically by the database engine. To specify a ROWID manually,
+just include it in the list of values to be inserted. For example:
+</p>
+
+<blockquote><pre>
+CREATE TABLE test1(a INT, b TEXT);
+INSERT INTO test1(rowid, a, b) VALUES(123, 5, 'hello');
+</pre></blockquote>
+
+<p>
+If no ROWID is specified on the insert, an appropriate ROWID is created
+automatically. The usual algorithm is to give the newly created row
+a ROWID that is one larger than the largest ROWID in the table prior
+to the insert. If the table is initially empty, then a ROWID of 1 is
+used. If the largest ROWID is equal to the largest possible integer
+(9223372036854775807 in SQLite version 3.0 and later) then the database
+engine starts picking candidate ROWIDs at random until it finds one
+that has not previously been used.
+</p>
+
+<p>
+The normal ROWID selection algorithm described above
+will generate monotonically increasing
+unique ROWIDs as long as you never use the maximum ROWID value and you never
+delete the entry in the table with the largest ROWID.
+If you ever delete rows or if you ever create a row with the maximum possible
+ROWID, then ROWIDs from previously deleted rows might be reused when creating
+new rows, and newly created ROWIDs might not be in strictly ascending order.
+</p>
+
+
+<h2>The AUTOINCREMENT Keyword</h2>
+
+<p>
+If a column has the type INTEGER PRIMARY KEY AUTOINCREMENT then a slightly
+different ROWID selection algorithm is used.
+The ROWID chosen for the new row is one larger than the largest ROWID
+that has ever before existed in that same table. If the table has never
+before contained any data, then a ROWID of 1 is used. If the table
+has previously held a row with the largest possible ROWID, then new INSERTs
+are not allowed and any attempt to insert a new row will fail with an
+SQLITE_FULL error.
+</p>
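+
+<p>
+The short sketch below illustrates that rule; it assumes the SQLite
+version 3 C interface (sqlite3_open, sqlite3_exec, and
+sqlite3_last_insert_rowid), uses an arbitrary example database file name,
+and omits error checking for brevity:
+</p>
+
+<blockquote><pre>
+#include &lt;stdio.h&gt;
+#include "sqlite3.h"
+
+int main(void){
+  sqlite3 *db;
+  sqlite3_open("autoinc-demo.db", &amp;db);
+  sqlite3_exec(db,
+    "CREATE TABLE t(k INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT);"
+    "INSERT INTO t(v) VALUES('first');"    /* gets ROWID 1 */
+    "DELETE FROM t;"                       /* largest-ever ROWID is still 1 */
+    "INSERT INTO t(v) VALUES('second');",  /* gets ROWID 2, never 1 again */
+    0, 0, 0);
+  printf("last rowid = %lld\n", (long long)sqlite3_last_insert_rowid(db));
+  sqlite3_close(db);
+  return 0;
+}
+</pre></blockquote>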
+
+<p>
+SQLite keeps track of the largest ROWID that a table has ever held using
+the special SQLITE_SEQUENCE table. The SQLITE_SEQUENCE table is created
+and initialized automatically whenever a normal table that contains an
+AUTOINCREMENT column is created. The content of the SQLITE_SEQUENCE table
+can be modified using ordinary UPDATE, INSERT, and DELETE statements.
+But making modifications to this table will likely perturb the AUTOINCREMENT
+key generation algorithm. Make sure you know what you are doing before
+you undertake such changes.
+</p>
+
+<p>
+The behavior implemented by the AUTOINCREMENT keyword is subtly different
+from the default behavior. With AUTOINCREMENT, rows with automatically
+selected ROWIDs are guaranteed to have ROWIDs that have never been used
+before by the same table in the same database. And the automatically generated
+ROWIDs are guaranteed to be monotonically increasing. These are important
+properties in certain applications. But if your application does not
+need these properties, you should probably stay with the default behavior
+since the use of AUTOINCREMENT requires additional work to be done
+as each row is inserted and thus causes INSERTs to run a little slower.
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/c_interface.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/c_interface.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1116 @@
+#
+# Run this Tcl script to generate the sqlite.html file.
+#
+set rcsid {$Id: c_interface.tcl,v 1.43 2004/11/19 11:59:24 danielk1977 Exp $}
+source common.tcl
+header {The C language interface to the SQLite library}
+puts {
+<h2>The C language interface to the SQLite library</h2>
+
+<p>The SQLite library is designed to be very easy to use from
+a C or C++ program. This document gives an overview of the C/C++
+programming interface.</p>
+
+<h3>1.0 The Core API</h3>
+
+<p>The interface to the SQLite library consists of three core functions,
+one opaque data structure, and some constants used as return values.
+The core interface is as follows:</p>
+
+<blockquote><pre>
+typedef struct sqlite sqlite;
+#define SQLITE_OK 0 /* Successful result */
+
+sqlite *sqlite_open(const char *dbname, int mode, char **errmsg);
+
+void sqlite_close(sqlite *db);
+
+int sqlite_exec(
+ sqlite *db,
+ char *sql,
+ int (*xCallback)(void*,int,char**,char**),
+ void *pArg,
+ char **errmsg
+);
+</pre></blockquote>
+
+<p>
+The above is all you really need to know in order to use SQLite
+in your C or C++ programs. There are other interface functions
+available (and described below) but we will begin by describing
+the core functions shown above.
+</p>
+
+<a name="sqlite_open">
+<h4>1.1 Opening a database</h4>
+
+<p>Use the <b>sqlite_open</b> function to open an existing SQLite
+database or to create a new SQLite database. The first argument
+is the database name. The second argument is intended to signal
+whether the database is going to be used for reading and writing
+or just for reading. But in the current implementation, the
+second argument to <b>sqlite_open</b> is ignored.
+The third argument is a pointer to a string pointer.
+If the third argument is not NULL and an error occurs
+while trying to open the database, then an error message will be
+written to memory obtained from malloc() and *errmsg will be made
+to point to this error message. The calling function is responsible
+for freeing the memory when it has finished with it.</p>
+
+<p>The name of an SQLite database is the name of a file that will
+contain the database. If the file does not exist, SQLite attempts
+to create and initialize it. If the file is read-only (due to
+permission bits or because it is located on read-only media like
+a CD-ROM) then SQLite opens the database for reading only. The
+entire SQL database is stored in a single file on the disk. But
+additional temporary files may be created during the execution of
+an SQL command in order to store the database rollback journal or
+temporary and intermediate results of a query.</p>
+
+<p>The return value of the <b>sqlite_open</b> function is a
+pointer to an opaque <b>sqlite</b> structure. This pointer will
+be the first argument to all subsequent SQLite function calls that
+deal with the same database. NULL is returned if the open fails
+for any reason.</p>
+
+<a name="sqlite_close">
+<h4>1.2 Closing the database</h4>
+
+<p>To close an SQLite database, call the <b>sqlite_close</b>
+function passing it the sqlite structure pointer that was obtained
+from a prior call to <b>sqlite_open</b>.
+If a transaction is active when the database is closed, the transaction
+is rolled back.</p>
+
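+<p>As a minimal, purely illustrative sketch (the file name and the error
+handling shown are examples only), opening and later closing a database
+looks like this:</p>
+
+<blockquote><pre>
+#include &lt;stdio.h&gt;
+#include &lt;stdlib.h&gt;
+#include "sqlite.h"
+
+int main(void){
+  char *zErrMsg = 0;
+  sqlite *db;
+
+  /* The second (mode) argument is currently ignored, as noted above. */
+  db = sqlite_open("example.db", 0, &amp;zErrMsg);
+  if( db==0 ){
+    fprintf(stderr, "Unable to open database: %s\n",
+            zErrMsg ? zErrMsg : "out of memory");
+    free(zErrMsg);     /* the error message came from malloc() */
+    return 1;
+  }
+
+  /* ... use the database via sqlite_exec() ... */
+
+  sqlite_close(db);    /* any active transaction is rolled back */
+  return 0;
+}
+</pre></blockquote>
+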
+<a name="sqlite_exec">
+<h4>1.3 Executing SQL statements</h4>
+
+<p>The <b>sqlite_exec</b> function is used to process SQL statements
+and queries. This function requires 5 parameters as follows:</p>
+
+<ol>
+<li><p>A pointer to the sqlite structure obtained from a prior call
+ to <b>sqlite_open</b>.</p></li>
+<li><p>A null-terminated string containing the text of one or more
+ SQL statements and/or queries to be processed.</p></li>
+<li><p>A pointer to a callback function which is invoked once for each
+ row in the result of a query. This argument may be NULL, in which
+ case no callbacks will ever be invoked.</p></li>
+<li><p>A pointer that is forwarded to become the first argument
+ to the callback function.</p></li>
+<li><p>A pointer to an error string. Error messages are written to space
+ obtained from malloc() and the error string is made to point to
+ the malloced space. The calling function is responsible for freeing
+ this space when it has finished with it.
+ This argument may be NULL, in which case error messages are not
+ reported back to the calling function.</p></li>
+</ol>
+
+<p>
+The callback function is used to receive the results of a query. A
+prototype for the callback function is as follows:</p>
+
+<blockquote><pre>
+int Callback(void *pArg, int argc, char **argv, char **columnNames){
+ return 0;
+}
+</pre></blockquote>
+
+<a name="callback_row_data">
+<p>The first argument to the callback is just a copy of the fourth argument
+to <b>sqlite_exec</b>. This parameter can be used to pass arbitrary
+information through to the callback function from client code.
+The second argument is the number of columns in the query result.
+The third argument is an array of pointers to strings where each string
+is a single column of the result for that record. Note that the
+callback function reports a NULL value in the database as a NULL pointer,
+which is very different from an empty string. If the i-th parameter
+is an empty string, we will get:</p>
+<blockquote><pre>
+argv[i][0] == 0
+</pre></blockquote>
+<p>But if the i-th parameter is NULL we will get:</p>
+<blockquote><pre>
+argv[i] == 0
+</pre></blockquote>
+
+<p>The names of the columns are contained in the first <i>argc</i>
+entries of the fourth argument.
+If the <a href="pragma.html#pragma_show_datatypes">SHOW_DATATYPES</a> pragma
+is on (it is off by default) then
+the second <i>argc</i> entries in the 4th argument are the datatypes
+for the corresponding columns.
+</p>
+
+<p>If the <a href="pragma.html#pragma_empty_result_callbacks">
+EMPTY_RESULT_CALLBACKS</a> pragma is set to ON and the result of
+a query is an empty set, then the callback is invoked once with the
+third parameter (argv) set to 0. In other words
+<blockquote><pre>
+argv == 0
+</pre></blockquote>
+The second parameter (argc)
+and the fourth parameter (columnNames) are still valid
+and can be used to determine the number and names of the result
+columns if there had been a result.
+The default behavior is not to invoke the callback at all if the
+result set is empty.</p>
+
+<a name="callback_returns_nonzero">
+<p>The callback function should normally return 0. If the callback
+function returns non-zero, the query is immediately aborted and
+<b>sqlite_exec</b> will return SQLITE_ABORT.</p>
+
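+<p>As a concrete but purely illustrative sketch that ties these pieces
+together, the callback below simply prints every column of every row;
+the query and table name used in the comment are arbitrary examples:</p>
+
+<blockquote><pre>
+#include &lt;stdio.h&gt;
+#include "sqlite.h"
+
+/* Called once for each result row.  A NULL column value arrives as a
+** NULL pointer in argv[], not as an empty string. */
+static int print_row(void *pArg, int argc, char **argv, char **columnNames){
+  int i;
+  for(i=0; i&lt;argc; i++){
+    printf("%s = %s\n", columnNames[i], argv[i] ? argv[i] : "NULL");
+  }
+  printf("\n");
+  return 0;   /* returning non-zero would abort the query with SQLITE_ABORT */
+}
+
+/* Typical invocation, given a db handle from sqlite_open():
+**
+**    sqlite_exec(db, "SELECT * FROM tbl1;", print_row, 0, &amp;zErrMsg);
+*/
+</pre></blockquote>
+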
+<h4>1.4 Error Codes</h4>
+
+<p>
+The <b>sqlite_exec</b> function normally returns SQLITE_OK. But
+if something goes wrong it can return a different value to indicate
+the type of error. Here is a complete list of the return codes:
+</p>
+
+<blockquote><pre>
+#define SQLITE_OK 0 /* Successful result */
+#define SQLITE_ERROR 1 /* SQL error or missing database */
+#define SQLITE_INTERNAL 2 /* An internal logic error in SQLite */
+#define SQLITE_PERM 3 /* Access permission denied */
+#define SQLITE_ABORT 4 /* Callback routine requested an abort */
+#define SQLITE_BUSY 5 /* The database file is locked */
+#define SQLITE_LOCKED 6 /* A table in the database is locked */
+#define SQLITE_NOMEM 7 /* A malloc() failed */
+#define SQLITE_READONLY 8 /* Attempt to write a readonly database */
+#define SQLITE_INTERRUPT 9 /* Operation terminated by sqlite_interrupt() */
+#define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */
+#define SQLITE_CORRUPT 11 /* The database disk image is malformed */
+#define SQLITE_NOTFOUND 12 /* (Internal Only) Table or record not found */
+#define SQLITE_FULL 13 /* Insertion failed because database is full */
+#define SQLITE_CANTOPEN 14 /* Unable to open the database file */
+#define SQLITE_PROTOCOL 15 /* Database lock protocol error */
+#define SQLITE_EMPTY 16 /* (Internal Only) Database table is empty */
+#define SQLITE_SCHEMA 17 /* The database schema changed */
+#define SQLITE_TOOBIG 18 /* Too much data for one row of a table */
+#define SQLITE_CONSTRAINT 19 /* Abort due to constraint violation */
+#define SQLITE_MISMATCH 20 /* Data type mismatch */
+#define SQLITE_MISUSE 21 /* Library used incorrectly */
+#define SQLITE_NOLFS 22 /* Uses OS features not supported on host */
+#define SQLITE_AUTH 23 /* Authorization denied */
+#define SQLITE_ROW 100 /* sqlite_step() has another row ready */
+#define SQLITE_DONE 101 /* sqlite_step() has finished executing */
+</pre></blockquote>
+
+<p>
+The meanings of these various return values are as follows:
+</p>
+
+<blockquote>
+<dl>
+<dt>SQLITE_OK</dt>
+<dd><p>This value is returned if everything worked and there were no errors.
+</p></dd>
+<dt>SQLITE_INTERNAL</dt>
+<dd><p>This value indicates that an internal consistency check within
+the SQLite library failed. This can only happen if there is a bug in
+the SQLite library. If you ever get an SQLITE_INTERNAL reply from
+an <b>sqlite_exec</b> call, please report the problem on the SQLite
+mailing list.
+</p></dd>
+<dt>SQLITE_ERROR</dt>
+<dd><p>This return value indicates that there was an error in the SQL
+that was passed into the <b>sqlite_exec</b>.
+</p></dd>
+<dt>SQLITE_PERM</dt>
+<dd><p>This return value says that the access permissions on the database
+file are such that the file cannot be opened.
+</p></dd>
+<dt>SQLITE_ABORT</dt>
+<dd><p>This value is returned if the callback function returns non-zero.
+</p></dd>
+<dt>SQLITE_BUSY</dt>
+<dd><p>This return code indicates that another program or thread has
+the database locked. SQLite allows two or more threads to read the
+database at the same time, but only one thread can have the database
+open for writing at any one time. Locking in SQLite is on the
+entire database.</p></dd>
+<dt>SQLITE_LOCKED</dt>
+<dd><p>This return code is similar to SQLITE_BUSY in that it indicates
+that the database is locked. But the source of the lock is a recursive
+call to <b>sqlite_exec</b>. This return can only occur if you attempt
+to invoke sqlite_exec from within a callback routine of a query
+from a prior invocation of sqlite_exec. Recursive calls to
+sqlite_exec are allowed as long as they do
+not attempt to write the same table.
+</p></dd>
+<dt>SQLITE_NOMEM</dt>
+<dd><p>This value is returned if a call to <b>malloc</b> fails.
+</p></dd>
+<dt>SQLITE_READONLY</dt>
+<dd><p>This return code indicates that an attempt was made to write to
+a database file that is opened for reading only.
+</p></dd>
+<dt>SQLITE_INTERRUPT</dt>
+<dd><p>This value is returned if a call to <b>sqlite_interrupt</b>
+interrupts a database operation in progress.
+</p></dd>
+<dt>SQLITE_IOERR</dt>
+<dd><p>This value is returned if the operating system informs SQLite
+that it is unable to perform some disk I/O operation. This could mean
+that there is no more space left on the disk.
+</p></dd>
+<dt>SQLITE_CORRUPT</dt>
+<dd><p>This value is returned if SQLite detects that the database it is
+working on has become corrupted. Corruption might occur due to a rogue
+process writing to the database file or it might happen due to a
+previously undetected logic error in SQLite. This value is also
+returned if a disk I/O error occurs in such a way that SQLite is forced
+to leave the database file in a corrupted state. The latter should only
+happen due to a hardware or operating system malfunction.
+</p></dd>
+<dt>SQLITE_FULL</dt>
+<dd><p>This value is returned if an insertion failed because there is
+no space left on the disk, or the database is too big to hold any
+more information. The latter case should only occur for databases
+that are larger than 2GB in size.
+</p></dd>
+<dt>SQLITE_CANTOPEN</dt>
+<dd><p>This value is returned if the database file could not be opened
+for some reason.
+</p></dd>
+<dt>SQLITE_PROTOCOL</dt>
+<dd><p>This value is returned if some other process is messing with
+file locks and has violated the file locking protocol that SQLite uses
+on its rollback journal files.
+</p></dd>
+<dt>SQLITE_SCHEMA</dt>
+<dd><p>When the database is first opened, SQLite reads the database schema
+into memory and uses that schema to parse new SQL statements. If another
+process changes the schema, the command currently being processed will
+abort because the virtual machine code generated assumed the old
+schema. This is the return code for such cases. Retrying the
+command usually will clear the problem.
+</p></dd>
+<dt>SQLITE_TOOBIG</dt>
+<dd><p>SQLite will not store more than about 1 megabyte of data in a single
+row of a single table. If you attempt to store more than 1 megabyte
+in a single row, this is the return code you get.
+</p></dd>
+<dt>SQLITE_CONSTRAINT</dt>
+<dd><p>This constant is returned if the SQL statement would have violated
+a database constraint.
+</p></dd>
+<dt>SQLITE_MISMATCH</dt>
+<dd><p>This error occurs when there is an attempt to insert non-integer
+data into a column labeled INTEGER PRIMARY KEY. For most columns, SQLite
+ignores the data type and allows any kind of data to be stored. But
+an INTEGER PRIMARY KEY column is only allowed to store integer data.
+</p></dd>
+<dt>SQLITE_MISUSE</dt>
+<dd><p>This error might occur if one or more of the SQLite API routines
+is used incorrectly. Examples of incorrect usage include calling
+<b>sqlite_exec</b> after the database has been closed using
+<b>sqlite_close</b> or
+calling <b>sqlite_exec</b> with the same
+database pointer simultaneously from two separate threads.
+</p></dd>
+<dt>SQLITE_NOLFS</dt>
+<dd><p>This error means that you have attempted to create or access a
+database file that is larger than 2GB on a legacy Unix machine that
+lacks large file support.
+</p></dd>
+<dt>SQLITE_AUTH</dt>
+<dd><p>This error indicates that the authorizer callback
+has disallowed the SQL you are attempting to execute.
+</p></dd>
+<dt>SQLITE_ROW</dt>
+<dd><p>This is one of the return codes from the
+<b>sqlite_step</b> routine which is part of the non-callback API.
+It indicates that another row of result data is available.
+</p></dd>
+<dt>SQLITE_DONE</dt>
+<dd><p>This is one of the return codes from the
+<b>sqlite_step</b> routine which is part of the non-callback API.
+It indicates that the SQL statement has been completely executed and
+the <b>sqlite_finalize</b> routine is ready to be called.
+</p></dd>
+</dl>
+</blockquote>
+
+<h3>2.0 Accessing Data Without Using A Callback Function</h3>
+
+<p>
+The <b>sqlite_exec</b> routine described above used to be the only
+way to retrieve data from an SQLite database. But many programmers found
+it inconvenient to use a callback function to obtain results. So beginning
+with SQLite version 2.7.7, a second access interface is available that
+does not use callbacks.
+</p>
+
+<p>
+The new interface uses three separate functions to replace the single
+<b>sqlite_exec</b> function.
+</p>
+
+<blockquote><pre>
+typedef struct sqlite_vm sqlite_vm;
+
+int sqlite_compile(
+ sqlite *db, /* The open database */
+ const char *zSql, /* SQL statement to be compiled */
+ const char **pzTail, /* OUT: uncompiled tail of zSql */
+ sqlite_vm **ppVm, /* OUT: the virtual machine to execute zSql */
+ char **pzErrmsg /* OUT: Error message. */
+);
+
+int sqlite_step(
+ sqlite_vm *pVm, /* The virtual machine to execute */
+ int *pN, /* OUT: Number of columns in result */
+ const char ***pazValue, /* OUT: Column data */
+ const char ***pazColName /* OUT: Column names and datatypes */
+);
+
+int sqlite_finalize(
+ sqlite_vm *pVm, /* The virtual machine to be finalized */
+ char **pzErrMsg /* OUT: Error message */
+);
+</pre></blockquote>
+
+<p>
+The strategy is to compile a single SQL statement using
+<b>sqlite_compile</b> then invoke <b>sqlite_step</b> multiple times,
+once for each row of output, and finally call <b>sqlite_finalize</b>
+to clean up after the SQL has finished execution.
+</p>
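+
+<p>
+A minimal sketch of that pattern follows. It assumes an already-open
+database handle and omits standard headers; the helper name and the
+handling of SQLITE_BUSY are illustrative only, not taken from the SQLite
+sources. The subsections below describe each of the three calls in detail.
+</p>
+
+<blockquote><pre>
+int run_query(sqlite *db, const char *zSql, char **pzErrMsg){
+  sqlite_vm *pVm = 0;
+  const char *zTail;
+  const char **azValue, **azColName;
+  int rc, nCol, i;
+
+  rc = sqlite_compile(db, zSql, &zTail, &pVm, pzErrMsg);
+  if( rc!=SQLITE_OK ) return rc;
+
+  /* A production program would also handle SQLITE_BUSY here by waiting
+  ** briefly and then calling sqlite_step() again. */
+  while( (rc = sqlite_step(pVm, &nCol, &azValue, &azColName))==SQLITE_ROW ){
+    for(i=0; i!=nCol; i++){
+      printf("%s = %s\n", azColName[i], azValue[i] ? azValue[i] : "NULL");
+    }
+  }
+
+  /* sqlite_finalize() releases the virtual machine and reports the
+  ** final status of the statement. */
+  return sqlite_finalize(pVm, pzErrMsg);
+}
+</pre></blockquote>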
+
+<h4>2.1 Compiling An SQL Statement Into A Virtual Machine</h4>
+
+<p>
+The <b>sqlite_compile</b> routine "compiles" a single SQL statement (specified
+by the second parameter) and generates a virtual machine that is able
+to execute that statement.
+As with most interface routines, the first parameter must be a pointer
+to an sqlite structure that was obtained from a prior call to
+<b>sqlite_open</b>.</p>
+
+<p>
+A pointer to the virtual machine is written into the location that the
+4th parameter points to.
+Space to hold the virtual machine is dynamically allocated. To avoid
+a memory leak, the calling function must invoke
+<b>sqlite_finalize</b> on the virtual machine after it has finished
+with it.
+The 4th parameter may be set to NULL if an error is encountered during
+compilation.
+</p>
+
+<p>
+If any errors are encountered during compilation, an error message is
+written into memory obtained from <b>malloc</b> and the 5th parameter
+is made to point to that memory. If the 5th parameter is NULL, then
+no error message is generated. If the 5th parameter is not NULL, then
+the calling function should dispose of the memory containing the error
+message by calling <b>sqlite_freemem</b>.
+</p>
+
+<p>
+If the 2nd parameter actually contains two or more statements of SQL,
+only the first statement is compiled. (This is different from the
+behavior of <b>sqlite_exec</b> which executes all SQL statements
+in its input string.) The 3rd parameter to <b>sqlite_compile</b>
+is made to point to the first character beyond the end of the first
+statement of SQL in the input. If the 2nd parameter contains only
+a single SQL statement, then the 3rd parameter will be made to point
+to the '\000' terminator at the end of the 2nd parameter.
+</p>
+
+<p>
+On success, <b>sqlite_compile</b> returns SQLITE_OK.
+Otherwise an error code is returned.
+</p>
+
+<h4>2.2 Step-By-Step Execution Of An SQL Statement</h4>
+
+<p>
+After a virtual machine has been generated using <b>sqlite_compile</b>
+it is executed by one or more calls to <b>sqlite_step</b>. Each
+invocation of <b>sqlite_step</b>, except the last one,
+returns a single row of the result.
+The number of columns in the result is stored in the integer that
+the 2nd parameter points to.
+The pointer specified by the 3rd parameter is made to point
+to an array of pointers to column values.
+The pointer in the 4th parameter is made to point to an array
+of pointers to column names and datatypes.
+The 2nd through 4th parameters to <b>sqlite_step</b> convey the
+same information as the 2nd through 4th parameters of the
+<b>callback</b> routine when using
+the <b>sqlite_exec</b> interface. Except, with <b>sqlite_step</b>
+the column datatype information is always included in the
+4th parameter regardless of whether or not the
+<a href="pragma.html#pragma_show_datatypes">SHOW_DATATYPES</a> pragma
+is on or off.
+</p>
+
+<p>
+Each invocation of <b>sqlite_step</b> returns an integer code that
+indicates what happened during that step. This code may be
+SQLITE_BUSY, SQLITE_ROW, SQLITE_DONE, SQLITE_ERROR, or
+SQLITE_MISUSE.
+</p>
+
+<p>
+If the virtual machine is unable to open the database file because
+it is locked by another thread or process, <b>sqlite_step</b>
+will return SQLITE_BUSY. The calling function should do some other
+activity, or sleep, for a short amount of time to give the lock a
+chance to clear, then invoke <b>sqlite_step</b> again. This can
+be repeated as many times as desired.
+</p>
+
+<p>
+Whenever another row of result data is available,
+<b>sqlite_step</b> will return SQLITE_ROW. The row data is
+stored in an array of pointers to strings and the 3rd parameter
+is made to point to this array.
+</p>
+
+<p>
+When all processing is complete, <b>sqlite_step</b> will return
+either SQLITE_DONE or SQLITE_ERROR. SQLITE_DONE indicates that the
+statement completed successfully and SQLITE_ERROR indicates that there
+was a run-time error. (The details of the error are obtained from
+<b>sqlite_finalize</b>.) It is a misuse of the library to attempt
+to call <b>sqlite_step</b> again after it has returned SQLITE_DONE
+or SQLITE_ERROR.
+</p>
+
+<p>
+When <b>sqlite_step</b> returns SQLITE_DONE or SQLITE_ERROR,
+the *pN and *pazColName values are set to the number of columns
+in the result set and to the names of the columns, just as they
+are for an SQLITE_ROW return. This allows the calling code to
+find the number of result columns and the column names and datatypes
+even if the result set is empty. The *pazValue parameter is always
+set to NULL when the return code is SQLITE_DONE or SQLITE_ERROR.
+If the SQL being executed is a statement that does not
+return a result (such as an INSERT or an UPDATE) then *pN will
+be set to zero and *pazColName will be set to NULL.
+</p>
+
+<p>
+If you abuse the library by trying to call <b>sqlite_step</b>
+inappropriately, it will attempt to return SQLITE_MISUSE.
+This can happen if you call sqlite_step() on the same virtual machine
+at the same
+time from two or more threads or if you call sqlite_step()
+again after it returned SQLITE_DONE or SQLITE_ERROR or if you
+pass in an invalid virtual machine pointer to sqlite_step().
+You should not depend on the SQLITE_MISUSE return code to indicate
+an error. It is possible that a misuse of the interface will go
+undetected and result in a program crash. The SQLITE_MISUSE return code is
+intended as a debugging aid only - to help you detect incorrect
+usage prior to a mishap. The misuse detection logic is not guaranteed
+to work in every case.
+</p>
+
+<h4>2.3 Deleting A Virtual Machine</h4>
+
+<p>
+Every virtual machine that <b>sqlite_compile</b> creates should
+eventually be handed to <b>sqlite_finalize</b>. The sqlite_finalize()
+procedure deallocates the memory and other resources that the virtual
+machine uses. Failure to call sqlite_finalize() will result in
+resource leaks in your program.
+</p>
+
+<p>
+The <b>sqlite_finalize</b> routine also returns the result code
+that indicates success or failure of the SQL operation that the
+virtual machine carried out.
+The value returned by sqlite_finalize() will be the same as would
+have been returned had the same SQL been executed by <b>sqlite_exec</b>.
+The error message returned will also be the same.
+</p>
+
+<p>
+It is acceptable to call <b>sqlite_finalize</b> on a virtual machine
+before <b>sqlite_step</b> has returned SQLITE_DONE. Doing so has
+the effect of interrupting the operation in progress. Partially completed
+changes will be rolled back and the database will be restored to its
+original state (unless an alternative recovery algorithm is selected using
+an ON CONFLICT clause in the SQL being executed.) The effect is the
+same as if a callback function of <b>sqlite_exec</b> had returned
+non-zero.
+</p>
+
+<p>
+It is also acceptable to call <b>sqlite_finalize</b> on a virtual machine
+that has never been passed to <b>sqlite_step</b> even once.
+</p>
+
+<h3>3.0 The Extended API</h3>
+
+<p>Only the three core routines described in section 1.0 are required to use
+SQLite. But there are many other functions that provide
+useful interfaces. These extended routines are as follows:
+</p>
+
+<blockquote><pre>
+int sqlite_last_insert_rowid(sqlite*);
+
+int sqlite_changes(sqlite*);
+
+int sqlite_get_table(
+ sqlite*,
+ char *sql,
+ char ***result,
+ int *nrow,
+ int *ncolumn,
+ char **errmsg
+);
+
+void sqlite_free_table(char**);
+
+void sqlite_interrupt(sqlite*);
+
+int sqlite_complete(const char *sql);
+
+void sqlite_busy_handler(sqlite*, int (*)(void*,const char*,int), void*);
+
+void sqlite_busy_timeout(sqlite*, int ms);
+
+const char sqlite_version[];
+
+const char sqlite_encoding[];
+
+int sqlite_exec_printf(
+ sqlite*,
+ char *sql,
+ int (*)(void*,int,char**,char**),
+ void*,
+ char **errmsg,
+ ...
+);
+
+int sqlite_exec_vprintf(
+ sqlite*,
+ char *sql,
+ int (*)(void*,int,char**,char**),
+ void*,
+ char **errmsg,
+ va_list
+);
+
+int sqlite_get_table_printf(
+ sqlite*,
+ char *sql,
+ char ***result,
+ int *nrow,
+ int *ncolumn,
+ char **errmsg,
+ ...
+);
+
+int sqlite_get_table_vprintf(
+ sqlite*,
+ char *sql,
+ char ***result,
+ int *nrow,
+ int *ncolumn,
+ char **errmsg,
+ va_list
+);
+
+char *sqlite_mprintf(const char *zFormat, ...);
+
+char *sqlite_vmprintf(const char *zFormat, va_list);
+
+void sqlite_freemem(char*);
+
+void sqlite_progress_handler(sqlite*, int, int (*)(void*), void*);
+
+</pre></blockquote>
+
+<p>All of the above definitions are included in the "sqlite.h"
+header file that comes in the source tree.</p>
+
+<h4>3.1 The ROWID of the most recent insert</h4>
+
+<p>Every row of an SQLite table has a unique integer key. If the
+table has a column labeled INTEGER PRIMARY KEY, then that column
+serves as the key. If there is no INTEGER PRIMARY KEY column then
+the key is a unique integer. The key for a row can be accessed in
+a SELECT statement or used in a WHERE or ORDER BY clause using any
+of the names "ROWID", "OID", or "_ROWID_".</p>
+
+<p>When you do an insert into a table that does not have an INTEGER PRIMARY
+KEY column, or if the table does have an INTEGER PRIMARY KEY but the value
+for that column is not specified in the VALUES clause of the insert, then
+the key is automatically generated. You can find the value of the key
+for the most recent INSERT statement using the
+<b>sqlite_last_insert_rowid</b> API function.</p>
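+
+<p>For example, the following sketch (using a hypothetical "users" table
+and omitting standard headers) inserts a row and then reads back the key
+that was generated for it:</p>
+
+<blockquote><pre>
+int insert_and_report(sqlite *db){
+  int rc = sqlite_exec(db,
+     "INSERT INTO users(employee_name, login) VALUES('dummy', 'dmy')",
+     0, 0, 0);
+  if( rc==SQLITE_OK ){
+    /* The integer key that was assigned to the row just inserted. */
+    printf("new ROWID = %d\n", sqlite_last_insert_rowid(db));
+  }
+  return rc;
+}
+</pre></blockquote>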
+
+<h4>3.2 The number of rows that changed</h4>
+
+<p>The <b>sqlite_changes</b> API function returns the number of rows
+that have been inserted, deleted, or modified since the database was
+last quiescent. A "quiescent" database is one in which there are
+no outstanding calls to <b>sqlite_exec</b> and no VMs created by
+<b>sqlite_compile</b> that have not been finalized by <b>sqlite_finalize</b>.
+In common usage, <b>sqlite_changes</b> returns the number
+of rows inserted, deleted, or modified by the most recent <b>sqlite_exec</b>
+call or since the most recent <b>sqlite_compile</b>. But if you have
+nested calls to <b>sqlite_exec</b> (that is, if the callback routine
+of one <b>sqlite_exec</b> invokes another <b>sqlite_exec</b>) or if
+you invoke <b>sqlite_compile</b> to create a new VM while there is
+still another VM in existence, then
+the meaning of the number returned by <b>sqlite_changes</b> is more
+complex.
+The number reported includes any changes
+that were later undone by a ROLLBACK or ABORT. But rows that are
+deleted because of a DROP TABLE are <em>not</em> counted.</p>
+
+<p>SQLite implements the command "<b>DELETE FROM table</b>" (without
+a WHERE clause) by dropping the table then recreating it.
+This is much faster than deleting the elements of the table individually.
+But it also means that the value returned from <b>sqlite_changes</b>
+will be zero regardless of the number of elements that were originally
+in the table. If an accurate count of the number of elements deleted
+is necessary, use "<b>DELETE FROM table WHERE 1</b>" instead.</p>
+
+<h4>3.3 Querying into memory obtained from malloc()</h4>
+
+<p>The <b>sqlite_get_table</b> function is a wrapper around
+<b>sqlite_exec</b> that collects all the information from successive
+callbacks and writes it into memory obtained from malloc(). This
+is a convenience function that allows the application to get the
+entire result of a database query with a single function call.</p>
+
+<p>The main result from <b>sqlite_get_table</b> is an array of pointers
+to strings. There is one element in this array for each column of
+each row in the result. NULL results are represented by a NULL
+pointer. In addition to the regular data, there is an added row at the
+beginning of the array that contains the name of each column of the
+result.</p>
+
+<p>As an example, consider the following query:</p>
+
+<blockquote>
+SELECT employee_name, login, host FROM users WHERE login LIKE 'd%';
+</blockquote>
+
+<p>This query will return the name, login and host computer name
+for every employee whose login begins with the letter "d". If this
+query is submitted to <b>sqlite_get_table</b> the result might
+look like this:</p>
+
+<blockquote>
+nrow = 2<br>
+ncolumn = 3<br>
+result[0] = "employee_name"<br>
+result[1] = "login"<br>
+result[2] = "host"<br>
+result[3] = "dummy"<br>
+result[4] = "No such user"<br>
+result[5] = 0<br>
+result[6] = "D. Richard Hipp"<br>
+result[7] = "drh"<br>
+result[8] = "zadok"
+</blockquote>
+
+<p>Notice that the "host" value for the "dummy" record is NULL so
+the result[] array contains a NULL pointer at that slot.</p>
+
+<p>If the result set of a query is empty, then by default
+<b>sqlite_get_table</b> will set nrow to 0 and leave its
+result parameter set to NULL. But if the EMPTY_RESULT_CALLBACKS
+pragma is ON then the result parameter is initialized to the names
+of the columns only. For example, consider this query which has
+an empty result set:</p>
+
+<blockquote>
+SELECT employee_name, login, host FROM users WHERE employee_name IS NULL;
+</blockquote>
+
+<p>
+The default behavior gives these results:
+</p>
+
+<blockquote>
+nrow = 0<br>
+ncolumn = 0<br>
+result = 0<br>
+</blockquote>
+
+<p>
+But if the EMPTY_RESULT_CALLBACKS pragma is ON, then the following
+is returned:
+</p>
+
+<blockquote>
+nrow = 0<br>
+ncolumn = 3<br>
+result[0] = "employee_name"<br>
+result[1] = "login"<br>
+result[2] = "host"<br>
+</blockquote>
+
+<p>Memory to hold the information returned by <b>sqlite_get_table</b>
+is obtained from malloc(). But the calling function should not try
+to free this information directly. Instead, pass the complete table
+to <b>sqlite_free_table</b> when the table is no longer needed.
+It is safe to call <b>sqlite_free_table</b> with a NULL pointer such
+as would be returned if the result set is empty.</p>
+
+<p>The <b>sqlite_get_table</b> routine returns the same integer
+result code as <b>sqlite_exec</b>.</p>
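+
+<p>In code, the example query above might be run as in the following sketch
+(standard headers omitted; the helper name is invented for this example):</p>
+
+<blockquote><pre>
+void list_d_users(sqlite *db){
+  char **azResult;
+  int nRow, nColumn, i, j, rc;
+
+  rc = sqlite_get_table(db,
+     "SELECT employee_name, login, host FROM users WHERE login LIKE 'd%'",
+     &azResult, &nRow, &nColumn, 0);
+  if( rc!=SQLITE_OK ) return;
+
+  /* Entries 0 through nColumn-1 hold the column names; data rows follow. */
+  for(i=1; i!=nRow+1; i++){
+    for(j=0; j!=nColumn; j++){
+      char *zValue = azResult[i*nColumn + j];
+      printf("%s = %s\n", azResult[j], zValue ? zValue : "NULL");
+    }
+  }
+  sqlite_free_table(azResult);   /* never free() the table directly */
+}
+</pre></blockquote>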
+
+<h4>3.4 Interrupting an SQLite operation</h4>
+
+<p>The <b>sqlite_interrupt</b> function can be called from a
+different thread or from a signal handler to cause the current database
+operation to exit at its first opportunity. When this happens,
+the <b>sqlite_exec</b> routine (or the equivalent) that started
+the database operation will return SQLITE_INTERRUPT.</p>
+
+<h4>3.5 Testing for a complete SQL statement</h4>
+
+<p>The next interface routine to SQLite is a convenience function used
+to test whether or not a string forms a complete SQL statement.
+If the <b>sqlite_complete</b> function returns true for a given
+input string, then that string forms a complete SQL statement.
+There are no guarantees that the syntax of that statement is correct,
+but we at least know the statement is complete. If <b>sqlite_complete</b>
+returns false, then more text is required to complete the SQL statement.</p>
+
+<p>For the purpose of the <b>sqlite_complete</b> function, an SQL
+statement is complete if it ends in a semicolon.</p>
+
+<p>The <b>sqlite</b> command-line utility uses the <b>sqlite_complete</b>
+function to know when it needs to call <b>sqlite_exec</b>. After each
+line of input is received, <b>sqlite</b> calls <b>sqlite_complete</b>
+on all input in its buffer. If <b>sqlite_complete</b> returns true,
+then <b>sqlite_exec</b> is called and the input buffer is reset. If
+<b>sqlite_complete</b> returns false, then the prompt is changed to
+the continuation prompt and another line of text is read and added to
+the input buffer.</p>
+
+<h4>3.6 Library version string</h4>
+
+<p>The SQLite library exports the string constant named
+<b>sqlite_version</b> which contains the version number of the
+library. The header file contains a macro SQLITE_VERSION
+with the same information. If desired, a program can compare
+the SQLITE_VERSION macro against the <b>sqlite_version</b>
+string constant to verify that the version number of the
+header file and the library match.</p>
+
+<h4>3.7 Library character encoding</h4>
+
+<p>By default, SQLite assumes that all data uses a fixed-size
+8-bit character (iso8859). But if you give the --enable-utf8 option
+to the configure script, then the library assumes UTF-8 variable
+sized characters. This makes a difference for the LIKE and GLOB
+operators and the LENGTH() and SUBSTR() functions. The static
+string <b>sqlite_encoding</b> will be set to either "UTF-8" or
+"iso8859" to indicate how the library was compiled. In addition,
+the <b>sqlite.h</b> header file will define one of the
+macros <b>SQLITE_UTF8</b> or <b>SQLITE_ISO8859</b>, as appropriate.</p>
+
+<p>Note that the character encoding mechanism used by SQLite cannot
+be changed at run-time. This is a compile-time option only. The
+<b>sqlite_encoding</b> character string just tells you how the library
+was compiled.</p>
+
+<h4>3.8 Changing the library's response to locked files</h4>
+
+<p>The <b>sqlite_busy_handler</b> procedure can be used to register
+a busy callback with an open SQLite database. The busy callback will
+be invoked whenever SQLite tries to access a database that is locked.
+The callback will typically do some other useful work, or perhaps sleep,
+in order to give the lock a chance to clear. If the callback returns
+non-zero, then SQLite tries again to access the database and the cycle
+repeats. If the callback returns zero, then SQLite aborts the current
+operation and returns SQLITE_BUSY.</p>
+
+<p>The arguments to <b>sqlite_busy_handler</b> are the opaque
+structure returned from <b>sqlite_open</b>, a pointer to the busy
+callback function, and a generic pointer that will be passed as
+the first argument to the busy callback. When SQLite invokes the
+busy callback, it sends it three arguments: the generic pointer
+that was passed in as the third argument to <b>sqlite_busy_handler</b>,
+the name of the database table or index that the library is trying
+to access, and the number of times that the library has attempted to
+access the database table or index.</p>
+
+<p>For the common case where we want the busy callback to sleep,
+the SQLite library provides a convenience routine <b>sqlite_busy_timeout</b>.
+The first argument to <b>sqlite_busy_timeout</b> is a pointer to
+an open SQLite database and the second argument is a number of milliseconds.
+After <b>sqlite_busy_timeout</b> has been executed, the SQLite library
+will wait for the lock to clear for at least the number of milliseconds
+specified before it returns SQLITE_BUSY. Specifying zero milliseconds for
+the timeout restores the default behavior.</p>
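+
+<p>A sketch of a hand-written busy handler is shown below. The handler
+retries a fixed number of times before giving up; the call to sleep() and
+the retry limit are illustrative assumptions, not requirements of the
+interface.</p>
+
+<blockquote><pre>
+/* Retry up to 10 times, then let SQLite return SQLITE_BUSY. */
+static int retry_busy(void *pArg, const char *zTable, int nTries){
+  if( nTries>=10 ) return 0;   /* give up */
+  sleep(1);                    /* or do some other useful work */
+  return 1;                    /* ask SQLite to try the lock again */
+}
+
+void install_busy_handling(sqlite *db){
+  sqlite_busy_handler(db, retry_busy, 0);
+  /* Or, more simply, wait up to two seconds for locks to clear:
+  **   sqlite_busy_timeout(db, 2000);
+  */
+}
+</pre></blockquote>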
+
+<h4>3.9 Using the <tt>_printf()</tt> wrapper functions</h4>
+
+<p>The four utility functions</p>
+
+<p>
+<ul>
+<li><b>sqlite_exec_printf()</b></li>
+<li><b>sqlite_exec_vprintf()</b></li>
+<li><b>sqlite_get_table_printf()</b></li>
+<li><b>sqlite_get_table_vprintf()</b></li>
+</ul>
+</p>
+
+<p>implement the same query functionality as <b>sqlite_exec</b>
+and <b>sqlite_get_table</b>. But instead of taking a complete
+SQL statement as their second argument, the four <b>_printf</b>
+routines take a printf-style format string. The SQL statement to
+be executed is generated from this format string and from whatever
+additional arguments are attached to the end of the function call.</p>
+
+<p>There are two advantages to using the SQLite printf
+functions instead of <b>sprintf</b>. First of all, with the
+SQLite printf routines, there is never a danger of overflowing a
+static buffer as there is with <b>sprintf</b>. The SQLite
+printf routines automatically allocate (and later frees)
+as much memory as is
+necessary to hold the SQL statements generated.</p>
+
+<p>The second advantage the SQLite printf routines have over
+<b>sprintf</b> is two new formatting options specifically designed
+to support string literals in SQL. Within the format string,
+the %q formatting option works very much like %s in that it
+reads a null-terminated string from the argument list and inserts
+it into the result. But %q translates the inserted string by
+making two copies of every single-quote (') character in the
+substituted string. This has the effect of escaping the end-of-string
+meaning of single-quote within a string literal. The %Q formatting
+option works similarly; it translates the single-quotes like %q and
+additionally encloses the resulting string in single-quotes.
+If the argument for the %Q formatting option is a NULL pointer,
+the resulting string is the word NULL, without single quotes.
+</p>
+
+<p>Consider an example. Suppose you are trying to insert a string
+value into a database table where the string value was obtained from
+user input. Suppose the string to be inserted is stored in a variable
+named zString. The code to do the insertion might look like this:</p>
+
+<blockquote><pre>
+sqlite_exec_printf(db,
+ "INSERT INTO table1 VALUES('%s')",
+ 0, 0, 0, zString);
+</pre></blockquote>
+
+<p>If the zString variable holds text like "Hello", then this statement
+will work just fine. But suppose the user enters a string like
+"Hi y'all". The SQL statement generated reads as follows:</p>
+
+<blockquote><pre>
+INSERT INTO table1 VALUES('Hi y'all')
+</pre></blockquote>
+
+<p>This is not valid SQL because of the apostrophe in the word "y'all".
+But if the %q formatting option is used instead of %s, like this:</p>
+
+<blockquote><pre>
+sqlite_exec_printf(db,
+ "INSERT INTO table1 VALUES('%q')",
+ 0, 0, 0, zString);
+</pre></blockquote>
+
+<p>Then the generated SQL will look like the following:</p>
+
+<blockquote><pre>
+INSERT INTO table1 VALUES('Hi y''all')
+</pre></blockquote>
+
+<p>Here the apostrophe has been escaped and the SQL statement is well-formed.
+When generating SQL on-the-fly from data that might contain a
+single-quote character ('), it is always a good idea to use the
+SQLite printf routines and the %q formatting option instead of <b>sprintf</b>.
+</p>
+
+<p>If the %Q formatting option is used instead of %q, like this:</p>
+
+<blockquote><pre>
+sqlite_exec_printf(db,
+ "INSERT INTO table1 VALUES(%Q)",
+ 0, 0, 0, zString);
+</pre></blockquote>
+
+<p>Then the generated SQL will look like the following:</p>
+
+<blockquote><pre>
+INSERT INTO table1 VALUES('Hi y''all')
+</pre></blockquote>
+
+<p>If the value of the zString variable is NULL, the generated SQL
+will look like the following:</p>
+
+<blockquote><pre>
+INSERT INTO table1 VALUES(NULL)
+</pre></blockquote>
+
+<p>All of the _printf() routines above are built around the following
+two functions:</p>
+
+<blockquote><pre>
+char *sqlite_mprintf(const char *zFormat, ...);
+char *sqlite_vmprintf(const char *zFormat, va_list);
+</pre></blockquote>
+
+<p>The <b>sqlite_mprintf()</b> routine works like the standard library
+<b>sprintf()</b> except that it writes its results into memory obtained
+from malloc() and returns a pointer to the malloced buffer.
+<b>sqlite_mprintf()</b> also understands the %q and %Q extensions described
+above. The <b>sqlite_vmprintf()</b> routine is the va_list variant of the same
+routine. The string pointer that these routines return should be freed
+by passing it to <b>sqlite_freemem()</b>.
+</p>
+
+<h4>3.10 Performing background jobs during large queries</h4>
+
+<p>The <b>sqlite_progress_handler()</b> routine can be used to register a
+callback routine with an SQLite database to be invoked periodically during long
+running calls to <b>sqlite_exec()</b>, <b>sqlite_step()</b> and the various
+wrapper functions.
+</p>
+
+<p>The callback is invoked every N virtual machine operations, where N is
+supplied as the second argument to <b>sqlite_progress_handler()</b>. The third
+and fourth arguments to <b>sqlite_progress_handler()</b> are a pointer to the
+routine to be invoked and a void pointer to be passed as the first argument to
+it.
+</p>
+
+<p>The time taken to execute each virtual machine operation can vary based on
+many factors. A typical rate for a 1 GHz PC is between half a million and
+three million operations per second, but it may be much higher or lower,
+depending on the query. As such it
+is difficult to schedule background operations based on virtual machine
+operations. Instead, it is recommended that a callback be scheduled relatively
+frequently (say every 1000 instructions) and external timer routines used to
+determine whether or not background jobs need to be run.
+</p>
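+
+<p>A sketch of that arrangement follows. The flag checked by the callback
+is hypothetical; some other part of the program would set it when
+background work is queued.</p>
+
+<blockquote><pre>
+static int background_pending = 0;   /* set elsewhere when work is queued */
+
+/* Invoked every 1000 virtual machine operations during long queries. */
+static int run_background_jobs(void *pArg){
+  if( background_pending ){
+    background_pending = 0;
+    /* ... service timers, flush buffers, update a progress bar ... */
+  }
+  return 0;   /* continue with the query */
+}
+
+void install_progress_handler(sqlite *db){
+  sqlite_progress_handler(db, 1000, run_background_jobs, 0);
+}
+</pre></blockquote>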
+
+<a name="cfunc">
+<h3>4.0 Adding New SQL Functions</h3>
+
+<p>Beginning with version 2.4.0, SQLite allows the SQL language to be
+extended with new functions implemented as C code. The following interface
+is used:
+</p>
+
+<blockquote><pre>
+typedef struct sqlite_func sqlite_func;
+
+int sqlite_create_function(
+ sqlite *db,
+ const char *zName,
+ int nArg,
+ void (*xFunc)(sqlite_func*,int,const char**),
+ void *pUserData
+);
+int sqlite_create_aggregate(
+ sqlite *db,
+ const char *zName,
+ int nArg,
+ void (*xStep)(sqlite_func*,int,const char**),
+ void (*xFinalize)(sqlite_func*),
+ void *pUserData
+);
+
+char *sqlite_set_result_string(sqlite_func*,const char*,int);
+void sqlite_set_result_int(sqlite_func*,int);
+void sqlite_set_result_double(sqlite_func*,double);
+void sqlite_set_result_error(sqlite_func*,const char*,int);
+
+void *sqlite_user_data(sqlite_func*);
+void *sqlite_aggregate_context(sqlite_func*, int nBytes);
+int sqlite_aggregate_count(sqlite_func*);
+</pre></blockquote>
+
+<p>
+The <b>sqlite_create_function()</b> interface is used to create
+regular functions and <b>sqlite_create_aggregate()</b> is used to
+create new aggregate functions. In both cases, the <b>db</b>
+parameter is an open SQLite database on which the functions should
+be registered, <b>zName</b> is the name of the new function,
+<b>nArg</b> is the number of arguments, and <b>pUserData</b> is
+a pointer which is passed through unchanged to the C implementation
+of the function. Both routines return 0 on success and non-zero
+if there are any errors.
+</p>
+
+<p>
+The length of a function name may not exceed 255 characters.
+Any attempt to create a function whose name exceeds 255 characters
+in length will result in an error.
+</p>
+
+<p>
+For regular functions, the <b>xFunc</b> callback is invoked once
+for each function call. The implementation of xFunc should call
+one of the <b>sqlite_set_result_...</b> interfaces to return its
+result. The <b>sqlite_user_data()</b> routine can be used to
+retrieve the <b>pUserData</b> pointer that was passed in when the
+function was registered.
+</p>
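+
+<p>For example, a hypothetical <b>half(X)</b> function that returns one
+half of its numeric argument could be implemented and registered as in
+the sketch below (standard headers omitted; atof() is from the standard
+C library):</p>
+
+<blockquote><pre>
+/* Implementation of half(X).  Arguments arrive as strings. */
+static void halfFunc(sqlite_func *context, int argc, const char **argv){
+  if( argc==1 && argv[0] ){
+    sqlite_set_result_double(context, 0.5*atof(argv[0]));
+  }
+  /* if no result is set, SQLite uses NULL as the function result */
+}
+
+int register_half(sqlite *db){
+  return sqlite_create_function(db, "half", 1, halfFunc, 0);
+}
+</pre></blockquote>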
+
+<p>
+For aggregate functions, the <b>xStep</b> callback is invoked once
+for each row in the result and then <b>xFinalize</b> is invoked at the
+end to compute a final answer. The xStep routine can use the
+<b>sqlite_aggregate_context()</b> interface to allocate memory that
+will be unique to that particular instance of the SQL function.
+This memory will be automatically deleted after xFinalize is called.
+The <b>sqlite_aggregate_count()</b> routine can be used to find out
+how many rows of data were passed to the aggregate. The xFinalize
+callback should invoke one of the <b>sqlite_set_result_...</b>
+interfaces to set the final result of the aggregate.
+</p>
+
+<p>
+SQLite now implements all of its built-in functions using this
+interface. For additional information and examples on how to create
+new SQL functions, review the SQLite source code in the file
+<b>func.c</b>.
+</p>
+
+<h3>5.0 Multi-Threading And SQLite</h3>
+
+<p>
+If SQLite is compiled with the THREADSAFE preprocessor macro set to 1,
+then it is safe to use SQLite from two or more threads of the same process
+at the same time. But each thread should have its own <b>sqlite*</b>
+pointer returned from <b>sqlite_open</b>. It is never safe for two
+or more threads to access the same <b>sqlite*</b> pointer at the same time.
+</p>
+
+<p>
+In precompiled SQLite libraries available on the website, the Unix
+versions are compiled with THREADSAFE turned off but the Windows
+versions are compiled with THREADSAFE turned on. If you need something
+different than this you will have to recompile.
+</p>
+
+<p>
+Under Unix, an <b>sqlite*</b> pointer should not be carried across a
+<b>fork()</b> system call into the child process. The child process
+should open its own copy of the database after the <b>fork()</b>.
+</p>
+
+<h3>6.0 Usage Examples</h3>
+
+<p>For examples of how the SQLite C/C++ interface can be used,
+refer to the source code for the <b>sqlite</b> program in the
+file <b>src/shell.c</b> of the source tree.
+Additional information about sqlite is available at
+<a href="sqlite.html">sqlite.html</a>.
+See also the sources to the Tcl interface for SQLite in
+the source file <b>src/tclsqlite.c</b>.</p>
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/capi3.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/capi3.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,466 @@
+set rcsid {$Id: capi3.tcl,v 1.9 2005/03/11 04:39:58 drh Exp $}
+source common.tcl
+header {C/C++ Interface For SQLite Version 3}
+
+proc AddHyperlinks {txt} {
+ regsub -all {([^:alnum:>])(sqlite3_\w+)(\([^\)]*\))} $txt \
+ {\1<a href="capi3ref.html#\2">\2</a>\3} t2
+ puts $t2
+}
+
+AddHyperlinks {
+<h2>C/C++ Interface For SQLite Version 3</h2>
+
+<h3>1.0 Overview</h3>
+
+<p>
+SQLite version 3.0 is a new version of SQLite, derived from
+the SQLite 2.8.13 code base, but with an incompatible file format
+and API.
+SQLite version 3.0 was created to answer demand for the following features:
+</p>
+
+<ul>
+<li>Support for UTF-16.</li>
+<li>User-definable text collating sequences.</li>
+<li>The ability to store BLOBs in indexed columns.</li>
+</ul>
+
+<p>
+It was necessary to move to version 3.0 to implement these features because
+each requires incompatible changes to the database file format. Other
+incompatible changes, such as a cleanup of the API, were introduced at the
+same time under the theory that it is best to get your incompatible changes
+out of the way all at once.
+</p>
+
+<p>
+The API for version 3.0 is similar to the version 2.X API,
+but with some important changes. Most noticeably, the "<tt>sqlite_</tt>"
+prefix that occurs at the beginning of all API functions and data
+structures is changed to "<tt>sqlite3_</tt>".
+This avoids confusion between the two APIs and allows linking against both
+SQLite 2.X and SQLite 3.0 at the same time.
+</p>
+
+<p>
+There is no agreement on what the C datatype for a UTF-16
+string should be. Therefore, SQLite uses a generic type of void*
+to refer to UTF-16 strings. Client software can cast the void*
+to whatever datatype is appropriate for their system.
+</p>
+
+<h3>2.0 C/C++ Interface</h3>
+
+<p>
+The API for SQLite 3.0 includes 83 separate functions in addition
+to several data structures and #defines. (A complete
+<a href="capi3ref.html">API reference</a> is provided as a separate document.)
+Fortunately, the interface is not nearly as complex as its size implies.
+Simple programs can still make do with only 3 functions:
+<a href="capi3ref.html#sqlite3_open">sqlite3_open()</a>,
+<a href="capi3ref.html#sqlite3_exec">sqlite3_exec()</a>, and
+<a href="capi3ref.html#sqlite3_close">sqlite3_close()</a>.
+More control over the execution of the database engine is provided
+using
+<a href="capi3ref.html#sqlite3_prepare">sqlite3_prepare()</a>
+to compile an SQL statement into byte code and
+<a href="capi3ref.html#sqlite3_prepare">sqlite3_step()</a>
+to execute that byte code.
+A family of routines with names beginning with
+<a href="capi3ref.html#sqlite3_column_blob">sqlite3_column_</a>
+is used to extract information about the result set of a query.
+Many interface functions come in pairs, with both a UTF-8 and
+UTF-16 version. And there is a collection of routines
+used to implement user-defined SQL functions and user-defined
+text collating sequences.
+</p>
+
+
+<h4>2.1 Opening and closing a database</h4>
+
+<blockquote><pre>
+ typedef struct sqlite3 sqlite3;
+ int sqlite3_open(const char*, sqlite3**);
+ int sqlite3_open16(const void*, sqlite3**);
+ int sqlite3_close(sqlite3*);
+ const char *sqlite3_errmsg(sqlite3*);
+ const void *sqlite3_errmsg16(sqlite3*);
+ int sqlite3_errcode(sqlite3*);
+</pre></blockquote>
+
+<p>
+The sqlite3_open() routine returns an integer error code rather than
+a pointer to the sqlite3 structure as the version 2 interface did.
+The difference between sqlite3_open()
+and sqlite3_open16() is that sqlite3_open16() takes UTF-16 (in host native
+byte order) for the name of the database file. If a new database file
+needs to be created, then sqlite3_open16() sets the internal text
+representation to UTF-16 whereas sqlite3_open() sets the text
+representation to UTF-8.
+</p>
+
+<p>
+The opening and/or creating of the database file is deferred until the
+file is actually needed. This allows options and parameters, such
+as the native text representation and default page size, to be
+set using PRAGMA statements.
+</p>
+
+<p>
+The sqlite3_errcode() routine returns a result code for the most
+recent major API call. sqlite3_errmsg() returns an English-language
+text error message for the most recent error. The error message is
+represented in UTF-8 and will be ephemeral - it could disappear on
+the next call to any SQLite API function. sqlite3_errmsg16() works like
+sqlite3_errmsg() except that it returns the error message represented
+as UTF-16 in host native byte order.
+</p>
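+
+<p>
+A typical open sequence, sketched below, checks the return code and uses
+sqlite3_errmsg() to report a failure. The helper name is invented for
+this example and standard headers are omitted.
+</p>
+
+<blockquote><pre>
+ int open_or_report(const char *zFilename, sqlite3 **pDb){
+   int rc = sqlite3_open(zFilename, pDb);
+   if( rc!=SQLITE_OK ){
+     /* A handle is generally still returned so that the error message
+     ** can be read; it should still be passed to sqlite3_close(). */
+     fprintf(stderr, "cannot open %s: %s\n", zFilename, sqlite3_errmsg(*pDb));
+     sqlite3_close(*pDb);
+     *pDb = 0;
+   }
+   return rc;
+ }
+</pre></blockquote>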
+
+<p>
+The error codes for SQLite version 3 are unchanged from version 2.
+They are as follows:
+</p>
+
+<blockquote><pre>
+#define SQLITE_OK 0 /* Successful result */
+#define SQLITE_ERROR 1 /* SQL error or missing database */
+#define SQLITE_INTERNAL 2 /* An internal logic error in SQLite */
+#define SQLITE_PERM 3 /* Access permission denied */
+#define SQLITE_ABORT 4 /* Callback routine requested an abort */
+#define SQLITE_BUSY 5 /* The database file is locked */
+#define SQLITE_LOCKED 6 /* A table in the database is locked */
+#define SQLITE_NOMEM 7 /* A malloc() failed */
+#define SQLITE_READONLY 8 /* Attempt to write a readonly database */
+#define SQLITE_INTERRUPT 9 /* Operation terminated by sqlite_interrupt() */
+#define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */
+#define SQLITE_CORRUPT 11 /* The database disk image is malformed */
+#define SQLITE_NOTFOUND 12 /* (Internal Only) Table or record not found */
+#define SQLITE_FULL 13 /* Insertion failed because database is full */
+#define SQLITE_CANTOPEN 14 /* Unable to open the database file */
+#define SQLITE_PROTOCOL 15 /* Database lock protocol error */
+#define SQLITE_EMPTY 16 /* (Internal Only) Database table is empty */
+#define SQLITE_SCHEMA 17 /* The database schema changed */
+#define SQLITE_TOOBIG 18 /* Too much data for one row of a table */
+#define SQLITE_CONSTRAINT 19 /* Abort due to constraint violation */
+#define SQLITE_MISMATCH 20 /* Data type mismatch */
+#define SQLITE_MISUSE 21 /* Library used incorrectly */
+#define SQLITE_NOLFS 22 /* Uses OS features not supported on host */
+#define SQLITE_AUTH 23 /* Authorization denied */
+#define SQLITE_ROW 100 /* sqlite_step() has another row ready */
+#define SQLITE_DONE 101 /* sqlite_step() has finished executing */
+</pre></blockquote>
+
+<h4>2.2 Executing SQL statements</h4>
+
+<blockquote><pre>
+ typedef int (*sqlite_callback)(void*,int,char**, char**);
+ int sqlite3_exec(sqlite3*, const char *sql, sqlite_callback, void*, char**);
+</pre></blockquote>
+
+<p>
+The sqlite3_exec function works much as it did in SQLite version 2.
+Zero or more SQL statements specified in the second parameter are compiled
+and executed. Query results are returned to a callback routine.
+See the <a href="capi3ref.html#sqlite3_exec">API reference</a> for additional
+information.
+</p>
+
+<p>
+In SQLite version 3, the sqlite3_exec routine is just a wrapper around
+calls to the prepared statement interface.
+</p>
+
+<blockquote><pre>
+ typedef struct sqlite3_stmt sqlite3_stmt;
+ int sqlite3_prepare(sqlite3*, const char*, int, sqlite3_stmt**, const char**);
+ int sqlite3_prepare16(sqlite3*, const void*, int, sqlite3_stmt**, const void**);
+ int sqlite3_finalize(sqlite3_stmt*);
+ int sqlite3_reset(sqlite3_stmt*);
+</pre></blockquote>
+
+<p>
+The sqlite3_prepare interface compiles a single SQL statement into byte code
+for later execution. This interface is now the preferred way of accessing
+the database.
+</p>
+
+<p>
+The SQL statement is a UTF-8 string for sqlite3_prepare().
+The sqlite3_prepare16() works the same way except
+that it expects a UTF-16 string as SQL input.
+Only the first SQL statement in the input string is compiled.
+The fifth parameter is filled in with a pointer to the next (uncompiled)
+SQL statement in the input string, if any.
+The sqlite3_finalize() routine deallocates a prepared SQL statement.
+All prepared statements must be finalized before the database can be
+closed.
+The sqlite3_reset() routine resets a prepared SQL statement so that it
+can be executed again.
+</p>
+
+<p>
+The SQL statement may contain tokens of the form "?" or "?nnn" or ":aaa"
+where "nnn" is an integer and "aaa" is an identifier.
+Such tokens represent unspecified literal values (or "wildcards")
+to be filled in later by the
+<a href="capi3ref.html#sqlite3_bind_blob">sqlite3_bind</a> interface.
+Each wildcard has an associated number which is its sequence in the
+statement or the "nnn" in the case of a "?nnn" form.
+It is allowed for the same wildcard
+to occur more than once in the same SQL statement, in which case
+all instances of that wildcard will be filled in with the same value.
+Unbound wildcards have a value of NULL.
+</p>
+
+<blockquote><pre>
+ int sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*));
+ int sqlite3_bind_double(sqlite3_stmt*, int, double);
+ int sqlite3_bind_int(sqlite3_stmt*, int, int);
+ int sqlite3_bind_int64(sqlite3_stmt*, int, long long int);
+ int sqlite3_bind_null(sqlite3_stmt*, int);
+ int sqlite3_bind_text(sqlite3_stmt*, int, const char*, int n, void(*)(void*));
+ int sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int n, void(*)(void*));
+ int sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*);
+</pre></blockquote>
+
+<p>
+There is an assortment of sqlite3_bind routines used to assign values
+to wildcards in a prepared SQL statement. Unbound wildcards
+are interpreted as NULLs. Bindings are not reset by sqlite3_reset().
+But wildcards can be rebound to new values after an sqlite3_reset().
+</p>
+
+<p>
+After an SQL statement has been prepared (and optionally bound), it
+is executed using:
+</p>
+
+<blockquote><pre>
+ int sqlite3_step(sqlite3_stmt*);
+</pre></blockquote>
+
+<p>
+The sqlite3_step() routine returns SQLITE_ROW if it is returning a single
+row of the result set, or SQLITE_DONE if execution has completed, either
+normally or due to an error. It might also return SQLITE_BUSY if it is
+unable to open the database file. If the return value is SQLITE_ROW, then
+the following routines can be used to extract information about that row
+of the result set:
+</p>
+
+<blockquote><pre>
+ const void *sqlite3_column_blob(sqlite3_stmt*, int iCol);
+ int sqlite3_column_bytes(sqlite3_stmt*, int iCol);
+ int sqlite3_column_bytes16(sqlite3_stmt*, int iCol);
+ int sqlite3_column_count(sqlite3_stmt*);
+ const char *sqlite3_column_decltype(sqlite3_stmt *, int iCol);
+ const void *sqlite3_column_decltype16(sqlite3_stmt *, int iCol);
+ double sqlite3_column_double(sqlite3_stmt*, int iCol);
+ int sqlite3_column_int(sqlite3_stmt*, int iCol);
+ long long int sqlite3_column_int64(sqlite3_stmt*, int iCol);
+ const char *sqlite3_column_name(sqlite3_stmt*, int iCol);
+ const void *sqlite3_column_name16(sqlite3_stmt*, int iCol);
+ const unsigned char *sqlite3_column_text(sqlite3_stmt*, int iCol);
+ const void *sqlite3_column_text16(sqlite3_stmt*, int iCol);
+ int sqlite3_column_type(sqlite3_stmt*, int iCol);
+</pre></blockquote>
+
+<p>
+The
+<a href="capi3ref.html#sqlite3_column_count">sqlite3_column_count()</a>
+function returns the number of columns in
+the result set. sqlite3_column_count() can be called at any time after
+sqlite3_prepare().
+<a href="capi3ref.html#sqlite3_data_count">sqlite3_data_count()</a>
+works similarly to
+sqlite3_column_count() except that it only works following sqlite3_step().
+If the previous call to sqlite3_step() returned SQLITE_DONE or an error code,
+then sqlite3_data_count() will return 0 whereas sqlite3_column_count() will
+continue to return the number of columns in the result set.
+</p>
+
+<p>Returned data is examined using the other sqlite3_column_***() functions,
+all of which take a column number as their second parameter. Columns are
+zero-indexed from left to right. Note that this is different from parameters,
+which are indexed starting at one.
+</p>
+
+<p>
+The sqlite3_column_type() function returns the
+datatype for the value in the Nth column. The return value is one
+of these:
+</p>
+
+<blockquote><pre>
+ #define SQLITE_INTEGER 1
+ #define SQLITE_FLOAT 2
+ #define SQLITE_TEXT 3
+ #define SQLITE_BLOB 4
+ #define SQLITE_NULL 5
+</pre></blockquote>
+
+<p>
+The sqlite3_column_decltype() routine returns text which is the
+declared type of the column in the CREATE TABLE statement. For an
+expression, the return type is an empty string. sqlite3_column_name()
+returns the name of the Nth column. sqlite3_column_bytes() returns
+the number of bytes in a column that has type BLOB or the number of bytes
+in a TEXT string with UTF-8 encoding. sqlite3_column_bytes16() returns
+the same value for BLOBs but for TEXT strings returns the number of bytes
+in a UTF-16 encoding.
+sqlite3_column_blob() returns BLOB data.
+sqlite3_column_text() returns TEXT data as UTF-8.
+sqlite3_column_text16() returns TEXT data as UTF-16.
+sqlite3_column_int() returns INTEGER data in the host machine's native
+integer format.
+sqlite3_column_int64() returns 64-bit INTEGER data.
+Finally, sqlite3_column_double() returns floating point data.
+</p>
+
+<p>
+It is not necessary to retrieve data in the format specified by
+sqlite3_column_type(). If a different format is requested, the data
+is converted automatically.
+</p>
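+
+<p>
+Putting the pieces of this section together, the sketch below prepares a
+parameterized query against a hypothetical "users" table, binds a value
+to the wildcard, walks the result rows, and finalizes the statement.
+Standard headers are omitted and the helper name is invented for this
+example.
+</p>
+
+<blockquote><pre>
+ int list_users_on_host(sqlite3 *db, const char *zHost){
+   sqlite3_stmt *pStmt = 0;
+   const char *zName;
+   int rc;
+
+   rc = sqlite3_prepare(db,
+        "SELECT employee_name FROM users WHERE host=?",
+        -1, &pStmt, 0);   /* -1: read the SQL up to its nul terminator */
+   if( rc!=SQLITE_OK ) return rc;
+
+   /* Wildcards are numbered starting at 1.  Passing 0 as the destructor
+   ** means the bound string is managed by the caller and stays valid. */
+   sqlite3_bind_text(pStmt, 1, zHost, -1, 0);
+
+   while( sqlite3_step(pStmt)==SQLITE_ROW ){
+     zName = (const char*)sqlite3_column_text(pStmt, 0);
+     printf("%s\n", zName ? zName : "NULL");
+   }
+
+   /* sqlite3_finalize() reports any error encountered during execution. */
+   return sqlite3_finalize(pStmt);
+ }
+</pre></blockquote>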
+
+<h4>2.3 User-defined functions</h4>
+
+<p>
+User defined functions can be created using the following routine:
+</p>
+
+<blockquote><pre>
+ typedef struct sqlite3_value sqlite3_value;
+ int sqlite3_create_function(
+ sqlite3 *,
+ const char *zFunctionName,
+ int nArg,
+ int eTextRep,
+ void*,
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
+ void (*xStep)(sqlite3_context*,int,sqlite3_value**),
+ void (*xFinal)(sqlite3_context*)
+ );
+ int sqlite3_create_function16(
+ sqlite3*,
+ const void *zFunctionName,
+ int nArg,
+ int eTextRep,
+ void*,
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
+ void (*xStep)(sqlite3_context*,int,sqlite3_value**),
+ void (*xFinal)(sqlite3_context*)
+ );
+ #define SQLITE_UTF8 1
+ #define SQLITE_UTF16 2
+ #define SQLITE_UTF16BE 3
+ #define SQLITE_UTF16LE 4
+ #define SQLITE_ANY 5
+</pre></blockquote>
+
+<p>
+The nArg parameter specifies the number of arguments to the function.
+A value of 0 indicates that any number of arguments is allowed. The
+eTextRep parameter specifies what representation text values are expected
+to be in for arguments to this function. The value of this parameter should
+be one of the constants defined above. SQLite version 3 allows multiple
+implementations of the same function using different text representations.
+The database engine chooses the function that minimizes the number
+of text conversions required.
+</p>
+
+<p>
+Normal functions specify only xFunc and leave xStep and xFinal set to NULL.
+Aggregate functions specify xStep and xFinal and leave xFunc set to NULL.
+There is no separate sqlite3_create_aggregate() API.
+</p>
+
+<p>
+The function name is specified in UTF-8. A separate sqlite3_create_function16()
+API works the same as sqlite3_create_function()
+except that the function name is specified in UTF-16 host byte order.
+</p>
+
+<p>
+Notice that the parameters to functions are now pointers to sqlite3_value
+structures instead of pointers to strings as in SQLite version 2.X.
+The following routines are used to extract useful information from these
+"values":
+</p>
+
+<blockquote><pre>
+ const void *sqlite3_value_blob(sqlite3_value*);
+ int sqlite3_value_bytes(sqlite3_value*);
+ int sqlite3_value_bytes16(sqlite3_value*);
+ double sqlite3_value_double(sqlite3_value*);
+ int sqlite3_value_int(sqlite3_value*);
+ long long int sqlite3_value_int64(sqlite3_value*);
+ const unsigned char *sqlite3_value_text(sqlite3_value*);
+ const void *sqlite3_value_text16(sqlite3_value*);
+ int sqlite3_value_type(sqlite3_value*);
+</pre></blockquote>
+
+<p>
+Function implementations use the following APIs to acquire context and
+to report results:
+</p>
+
+<blockquote><pre>
+ void *sqlite3_aggregate_context(sqlite3_context*, int nbyte);
+ void *sqlite3_user_data(sqlite3_context*);
+ void sqlite3_result_blob(sqlite3_context*, const void*, int n, void(*)(void*));
+ void sqlite3_result_double(sqlite3_context*, double);
+ void sqlite3_result_error(sqlite3_context*, const char*, int);
+ void sqlite3_result_error16(sqlite3_context*, const void*, int);
+ void sqlite3_result_int(sqlite3_context*, int);
+ void sqlite3_result_int64(sqlite3_context*, long long int);
+ void sqlite3_result_null(sqlite3_context*);
+ void sqlite3_result_text(sqlite3_context*, const char*, int n, void(*)(void*));
+ void sqlite3_result_text16(sqlite3_context*, const void*, int n, void(*)(void*));
+ void sqlite3_result_value(sqlite3_context*, sqlite3_value*);
+ void *sqlite3_get_auxdata(sqlite3_context*, int);
+ void sqlite3_set_auxdata(sqlite3_context*, int, void*, void (*)(void*));
+</pre></blockquote>
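+
+<p>
+As an illustration, here is a sketch of a hypothetical half(X) function
+implemented against this interface and registered for any text
+representation:
+</p>
+
+<blockquote><pre>
+ static void halfFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+   sqlite3_result_double(context, 0.5*sqlite3_value_double(argv[0]));
+ }
+
+ int register_half(sqlite3 *db){
+   return sqlite3_create_function(db, "half", 1, SQLITE_ANY, 0,
+                                  halfFunc, 0, 0);
+ }
+</pre></blockquote>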
+
+<h4>2.4 User-defined collating sequences</h4>
+
+<p>
+The following routines are used to implement user-defined
+collating sequences:
+</p>
+
+<blockquote><pre>
+ sqlite3_create_collation(sqlite3*, const char *zName, int eTextRep, void*,
+ int(*xCompare)(void*,int,const void*,int,const void*));
+ sqlite3_create_collation16(sqlite3*, const void *zName, int eTextRep, void*,
+ int(*xCompare)(void*,int,const void*,int,const void*));
+ sqlite3_collation_needed(sqlite3*, void*,
+ void(*)(void*,sqlite3*,int eTextRep,const char*));
+ sqlite3_collation_needed16(sqlite3*, void*,
+ void(*)(void*,sqlite3*,int eTextRep,const void*));
+</pre></blockquote>
+
+<p>
+The sqlite3_create_collation() function specifies a collating sequence name
+and a comparison function to implement that collating sequence. The
+comparison function is only used for comparing text values. The eTextRep
+parameter is one of SQLITE_UTF8, SQLITE_UTF16LE, SQLITE_UTF16BE, or
+SQLITE_ANY to specify which text representation the comparison function works
+with. Separate comparison functions can exist for the same collating
+sequence for each of the UTF-8, UTF-16LE and UTF-16BE text representations.
+The sqlite3_create_collation16() routine works like sqlite3_create_collation() except
+that the collation name is specified in UTF-16 host byte order instead of
+in UTF-8.
+</p>
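+
+<p>
+For instance, a hypothetical ASCII-only case-insensitive collation might be
+registered roughly as follows. The comparison function is deliberately
+simplistic and assumes the POSIX strncasecmp() routine; it is a sketch,
+not code from the SQLite sources.
+</p>
+
+<blockquote><pre>
+ /* Compare two UTF-8 byte ranges, ignoring ASCII case. */
+ static int nocaseCompare(void *pArg, int nA, const void *a,
+                          int nB, const void *b){
+   int n = nA>nB ? nB : nA;
+   int rc = strncasecmp((const char*)a, (const char*)b, n);
+   return rc!=0 ? rc : nA-nB;
+ }
+
+ void register_nocase(sqlite3 *db){
+   sqlite3_create_collation(db, "nocase_ascii", SQLITE_UTF8, 0,
+                            nocaseCompare);
+ }
+</pre></blockquote>
+
+<p>
+Once registered, the collation can be named in SQL, for example in a
+clause such as "ORDER BY name COLLATE nocase_ascii".
+</p>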
+
+<p>
+The sqlite3_collation_needed() routine registers a callback which the
+database engine will invoke if it encounters an unknown collating sequence.
+The callback can look up an appropriate comparison function and invoke
+sqlite3_create_collation() as needed. The fourth parameter to the callback
+is the name of the collating sequence in UTF-8. For sqlite3_collation_needed16()
+the callback sends the collating sequence name in UTF-16 host byte order.
+</p>
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/capi3ref.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/capi3ref.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1725 @@
+set rcsid {$Id: capi3ref.tcl,v 1.45 2006/09/15 16:58:49 drh Exp $}
+source common.tcl
+header {C/C++ Interface For SQLite Version 3}
+puts {
+<h2>C/C++ Interface For SQLite Version 3</h2>
+}
+
+proc api {name prototype desc {notused x}} {
+ global apilist specialname
+ if {$name==""} {
+ regsub -all {sqlite3_[a-z0-9_]+\(} $prototype \
+ {[lappend name [string trimright & (]]} x1
+ subst $x1
+ } else {
+ lappend specialname $name
+ }
+ lappend apilist [list $name $prototype $desc]
+}
+
+api {extended-result-codes} {
+#define SQLITE_IOERR_READ
+#define SQLITE_IOERR_SHORT_READ
+#define SQLITE_IOERR_WRITE
+#define SQLITE_IOERR_FSYNC
+#define SQLITE_IOERR_DIR_FSYNC
+#define SQLITE_IOERR_TRUNCATE
+#define SQLITE_IOERR_FSTAT
+#define SQLITE_IOERR_UNLOCK
+#define SQLITE_IOERR_RDLOCK
+...
+} {
+In its default configuration, SQLite API routines return one of 26 integer
+result codes described at result-codes. However, experience has shown that
+many of these result codes are too coarse-grained. They do not provide as
+much information about problems as users might like. In an effort to
+address this, newer versions of SQLite (version 3.3.8 and later) include
+support for additional result codes that provide more detailed information
+about errors. The extended result codes are enabled (or disabled) for
+each database
+connection using the sqlite3_extended_result_codes() API.
+
+Some of the available extended result codes are listed above.
+We expect the number of extended result codes to expand
+over time. Software that uses extended result codes should expect
+to see new result codes in future releases of SQLite.
+
+The symbolic name for an extended result code always contains a related
+primary result code as a prefix. Primary result codes contain a single
+"_" character. Extended result codes contain two or more "_" characters.
+The numeric value of an extended result code can be converted to its
+corresponding primary result code by masking off all but the least
+significant 8 bits.
+
+A complete list of available extended result codes and
+details about the meaning of the various extended result codes can be
+found by consulting the C code, especially the sqlite3.h header
+file and its antecedent sqlite.h.in. Additional information
+is also available at the SQLite wiki:
+http://www.sqlite.org/cvstrac/wiki?p=ExtendedResultCodes
+}
+
+
+api {result-codes} {
+#define SQLITE_OK 0 /* Successful result */
+#define SQLITE_ERROR 1 /* SQL error or missing database */
+#define SQLITE_INTERNAL 2 /* An internal logic error in SQLite */
+#define SQLITE_PERM 3 /* Access permission denied */
+#define SQLITE_ABORT 4 /* Callback routine requested an abort */
+#define SQLITE_BUSY 5 /* The database file is locked */
+#define SQLITE_LOCKED 6 /* A table in the database is locked */
+#define SQLITE_NOMEM 7 /* A malloc() failed */
+#define SQLITE_READONLY 8 /* Attempt to write a readonly database */
+#define SQLITE_INTERRUPT 9 /* Operation terminated by sqlite_interrupt() */
+#define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */
+#define SQLITE_CORRUPT 11 /* The database disk image is malformed */
+#define SQLITE_NOTFOUND 12 /* (Internal Only) Table or record not found */
+#define SQLITE_FULL 13 /* Insertion failed because database is full */
+#define SQLITE_CANTOPEN 14 /* Unable to open the database file */
+#define SQLITE_PROTOCOL 15 /* Database lock protocol error */
+#define SQLITE_EMPTY 16 /* (Internal Only) Database table is empty */
+#define SQLITE_SCHEMA 17 /* The database schema changed */
+#define SQLITE_TOOBIG 18 /* Too much data for one row of a table */
+#define SQLITE_CONSTRAINT 19 /* Abort due to constraint violation */
+#define SQLITE_MISMATCH 20 /* Data type mismatch */
+#define SQLITE_MISUSE 21 /* Library used incorrectly */
+#define SQLITE_NOLFS 22 /* Uses OS features not supported on host */
+#define SQLITE_AUTH 23 /* Authorization denied */
+#define SQLITE_ROW 100 /* sqlite_step() has another row ready */
+#define SQLITE_DONE 101 /* sqlite_step() has finished executing */
+} {
+Many SQLite functions return an integer result code from the set shown
+above in order to indicate success or failure.
+
+The result codes above are the only ones returned by SQLite in its
+default configuration. However, the sqlite3_extended_result_codes()
+API can be used to set a database connection to return more detailed
+result codes. See the documentation on sqlite3_extended_result_codes()
+or extended-result-codes for additional information.
+}
+
+api {} {
+ int sqlite3_extended_result_codes(sqlite3*, int onoff);
+} {
+This routine enables or disables the extended-result-codes feature.
+By default, SQLite API routines return one of only 26 integer
+result codes described at result-codes. When extended result codes
+are enabled by this routine, the repertoire of result codes can be
+much larger and can (hopefully) provide more detailed information
+about the cause of an error.
+
+The second argument is a boolean value that turns extended result
+codes on and off. Extended result codes are off by default for
+backwards compatibility with older versions of SQLite.
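+
+For illustration only, a minimal sketch of enabling the feature and
+recovering the primary code from an extended one might look like the
+following (the connection "db" and table "t1" are assumed to exist):
+
+<blockquote><pre>
+ #include "sqlite3.h"
+
+ /* Sketch: enable extended result codes on an open connection and
+ ** reduce an extended code back to its primary code. */
+ int try_insert(sqlite3 *db){
+   int rc;
+   sqlite3_extended_result_codes(db, 1);
+   rc = sqlite3_exec(db, "INSERT INTO t1 VALUES(1)", 0, 0, 0);
+   if( rc!=SQLITE_OK ){
+     rc = rc & 0xff;   /* keep only the low 8 bits: the primary code */
+   }
+   return rc;
+ }
+</pre></blockquote>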
+}
+
+api {} {
+ const char *sqlite3_libversion(void);
+} {
+ Return a pointer to a string which contains the version number of
+ the library. The same string is available in the global
+ variable named "sqlite3_version". This interface is provided because
+ Windows applications are often unable to access global variables in DLLs.
+}
+
+api {} {
+ void *sqlite3_aggregate_context(sqlite3_context*, int nBytes);
+} {
+ Aggregate functions use this routine to allocate
+ a structure for storing their state. The first time this routine
+ is called for a particular aggregate, a new structure of size nBytes
+ is allocated, zeroed, and returned. On subsequent calls (for the
+ same aggregate instance) the same buffer is returned. The implementation
+ of the aggregate can use the returned buffer to accumulate data.
+
+ The buffer is freed automatically by SQLite when the query that
+ invoked the aggregate function terminates.
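+
+ For illustration only, an aggregate that sums its numeric argument
+ could keep a running total in this buffer. The names "mysum", SumCtx,
+ sumStep, and sumFinal below are invented for this sketch:
+
+ <blockquote><pre>
+ #include "sqlite3.h"
+
+ typedef struct SumCtx { double total; } SumCtx;
+
+ /* xStep: called once per row; the buffer is zeroed on first use */
+ static void sumStep(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+   SumCtx *p = (SumCtx*)sqlite3_aggregate_context(ctx, sizeof(*p));
+   if( p && argc==1 ) p->total += sqlite3_value_double(argv[0]);
+ }
+
+ /* xFinal: called once at the end to produce the result */
+ static void sumFinal(sqlite3_context *ctx){
+   SumCtx *p = (SumCtx*)sqlite3_aggregate_context(ctx, sizeof(*p));
+   sqlite3_result_double(ctx, p ? p->total : 0.0);
+ }
+
+ /* registered with, for example:
+ **   sqlite3_create_function(db, "mysum", 1, SQLITE_ANY, 0,
+ **                           0, sumStep, sumFinal);                */
+ </pre></blockquote>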
+}
+
+api {} {
+ int sqlite3_aggregate_count(sqlite3_context*);
+} {
+ This function is deprecated. It continues to exist so as not to
+ break any legacy code that might happen to use it. But it should not
+ be used in any new code.
+
+ In order to encourage people to not use this function, we are not going
+ to tell you what it does.
+}
+
+api {} {
+ int sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*));
+ int sqlite3_bind_double(sqlite3_stmt*, int, double);
+ int sqlite3_bind_int(sqlite3_stmt*, int, int);
+ int sqlite3_bind_int64(sqlite3_stmt*, int, long long int);
+ int sqlite3_bind_null(sqlite3_stmt*, int);
+ int sqlite3_bind_text(sqlite3_stmt*, int, const char*, int n, void(*)(void*));
+ int sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int n, void(*)(void*));
+ #define SQLITE_STATIC ((void(*)(void *))0)
+ #define SQLITE_TRANSIENT ((void(*)(void *))-1)
+} {
+ In the SQL strings input to sqlite3_prepare() and sqlite3_prepare16(),
+ one or more literals can be replaced by a parameter "?" or ":AAA" or
+ "@AAA" or "\$VVV"
+ where AAA is an alphanumeric identifier and VVV is a variable name according
+ to the syntax rules of the TCL programming language.
+ The values of these parameters (also called "host parameter names")
+ can be set using the sqlite3_bind_*() routines.
+
+ The first argument to the sqlite3_bind_*() routines always is a pointer
+ to the sqlite3_stmt structure returned from sqlite3_prepare(). The second
+ argument is the index of the parameter to be set. The first parameter has
+ an index of 1. When the same named parameter is used more than once, second
+ and subsequent
+ occurrences have the same index as the first occurrence. The index for
+ named parameters can be looked up using the
+ sqlite3_bind_parameter_name() API if desired.
+
+ The third argument is the value to bind to the parameter.
+
+ In those
+ routines that have a fourth argument, its value is the number of bytes
+ in the parameter. To be clear: the value is the number of bytes in the
+ string, not the number of characters. The number
+ of bytes does not include the zero-terminator at the end of strings.
+ If the fourth parameter is negative, the length of the string is
+ number of bytes up to the first zero terminator.
+
+ The fifth argument to sqlite3_bind_blob(), sqlite3_bind_text(), and
+ sqlite3_bind_text16() is a destructor used to dispose of the BLOB or
+ text after SQLite has finished with it. If the fifth argument is the
+ special value SQLITE_STATIC, then the library assumes that the information
+ is in static, unmanaged space and does not need to be freed. If the
+ fifth argument has the value SQLITE_TRANSIENT, then SQLite makes its
+ own private copy of the data immediately, before the sqlite3_bind_*()
+ routine returns.
+
+ The sqlite3_bind_*() routines must be called after
+ sqlite3_prepare() or sqlite3_reset() and before sqlite3_step().
+ Bindings are not cleared by the sqlite3_reset() routine.
+ Unbound parameters are interpreted as NULL.
+
+ These routines return SQLITE_OK on success or an error code if
+ anything goes wrong. SQLITE_RANGE is returned if the parameter
+ index is out of range. SQLITE_NOMEM is returned if malloc fails.
+ SQLITE_MISUSE is returned if these routines are called on a virtual
+ machine that is in the wrong state or which has already been finalized.
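+
+ For illustration only (the table "t1", its column "x", and the helper
+ name insert_text are invented), the usual prepare/bind/step pattern
+ looks roughly like this:
+
+ <blockquote><pre>
+ #include "sqlite3.h"
+
+ /* Sketch: bind a text value to the single "?" parameter and run it. */
+ int insert_text(sqlite3 *db, const char *zValue){
+   sqlite3_stmt *pStmt;
+   int rc = sqlite3_prepare(db, "INSERT INTO t1(x) VALUES(?)", -1, &pStmt, 0);
+   if( rc!=SQLITE_OK ) return rc;
+   /* parameter indexes start at 1; SQLITE_TRANSIENT tells SQLite to
+   ** make its own copy, so zValue need not outlive this call */
+   sqlite3_bind_text(pStmt, 1, zValue, -1, SQLITE_TRANSIENT);
+   rc = sqlite3_step(pStmt);
+   sqlite3_finalize(pStmt);
+   return rc==SQLITE_DONE ? SQLITE_OK : rc;
+ }
+ </pre></blockquote>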
+}
+
+api {} {
+ int sqlite3_bind_parameter_count(sqlite3_stmt*);
+} {
+ Return the number of parameters in the precompiled statement given as
+ the argument.
+}
+
+api {} {
+ const char *sqlite3_bind_parameter_name(sqlite3_stmt*, int n);
+} {
+ Return the name of the n-th parameter in the precompiled statement.
+ Parameters of the form ":AAA" or "@AAA" or "\$VVV" have a name which is the
+ string ":AAA" or "\$VVV". In other words, the initial ":" or "$" or "@"
+ is included as part of the name.
+ Parameters of the form "?" have no name.
+
+ The first bound parameter has an index of 1, not 0.
+
+ If the value n is out of range or if the n-th parameter is nameless,
+ then NULL is returned. The returned string is always in the
+ UTF-8 encoding.
+}
+
+api {} {
+ int sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName);
+} {
+ Return the index of the parameter with the given name.
+ The name must match exactly.
+ If there is no parameter with the given name, return 0.
+ The string zName is always in the UTF-8 encoding.
+}
+
+api {} {
+ int sqlite3_busy_handler(sqlite3*, int(*)(void*,int), void*);
+} {
+ This routine identifies a callback function that might be invoked
+ whenever an attempt is made to open a database table
+ that another thread or process has locked.
+ If the busy callback is NULL, then SQLITE_BUSY is returned immediately
+ upon encountering the lock.
+ If the busy callback is not NULL, then the
+ callback will be invoked with two arguments. The
+ second argument is the number of prior calls to the busy callback
+ for the same lock. If the
+ busy callback returns 0, then no additional attempts are made to
+ access the database and SQLITE_BUSY is returned.
+ If the callback returns non-zero, then another attempt is made to open the
+ database for reading and the cycle repeats.
+
+ The presence of a busy handler does not guarantee that
+ it will be invoked when there is lock contention.
+ If SQLite determines that invoking the busy handler could result in
+ a deadlock, it will return SQLITE_BUSY instead.
+ Consider a scenario where one process is holding a read lock that
+ it is trying to promote to a reserved lock and
+ a second process is holding a reserved lock that it is trying
+ to promote to an exclusive lock. The first process cannot proceed
+ because it is blocked by the second and the second process cannot
+ proceed because it is blocked by the first. If both processes
+ invoke the busy handlers, neither will make any progress. Therefore,
+ SQLite returns SQLITE_BUSY for the first process, hoping that this
+ will induce the first process to release its read lock and allow
+ the second process to proceed.
+
+ The default busy callback is NULL.
+
+ SQLite is re-entrant, so the busy handler may start a new query.
+ (It is not clear why anyone would ever want to do this, but it
+ is allowed, in theory.) But the busy handler may not close the
+ database. Closing the database from a busy handler will delete
+ data structures out from under the executing query and will
+ probably result in a coredump.
+
+ There can only be a single busy handler defined for each database
+ connection. Setting a new busy handler clears any previous one.
+ Note that calling sqlite3_busy_timeout() will also set or clear
+ the busy handler.
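+
+ For illustration only (the handler name and retry count are arbitrary),
+ a busy handler that retries a handful of times and then gives up might
+ look like this; a real handler would normally also sleep between
+ attempts:
+
+ <blockquote><pre>
+ #include "sqlite3.h"
+
+ /* Sketch: the second argument is the number of prior calls made for
+ ** the same lock.  Return 0 to stop retrying, non-zero to retry. */
+ static int my_busy_handler(void *pArg, int nPrior){
+   (void)pArg;
+   return nPrior >= 5 ? 0 : 1;
+ }
+
+ /* installed with:  sqlite3_busy_handler(db, my_busy_handler, 0); */
+ </pre></blockquote>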
+}
+
+api {} {
+ int sqlite3_busy_timeout(sqlite3*, int ms);
+} {
+ This routine sets a busy handler that sleeps for a while when a
+ table is locked. The handler will sleep multiple times until
+ at least "ms" milliseconds of sleeping have been done. After
+ "ms" milliseconds of sleeping, the handler returns 0 which
+ causes sqlite3_exec() to return SQLITE_BUSY.
+
+ Calling this routine with an argument less than or equal to zero
+ turns off all busy handlers.
+
+ There can only be a single busy handler for a particular database
+ connection. If another busy handler was defined
+ (using sqlite3_busy_handler()) prior to calling
+ this routine, that other busy handler is cleared.
+}
+
+api {} {
+ int sqlite3_changes(sqlite3*);
+} {
+ This function returns the number of database rows that were changed
+ (or inserted or deleted) by the most recently completed
+ INSERT, UPDATE, or DELETE
+ statement. Only changes that are directly specified by the INSERT,
+ UPDATE, or DELETE statement are counted. Auxiliary changes caused by
+ triggers are not counted. Use the sqlite3_total_changes() function
+ to find the total number of changes including changes caused by triggers.
+
+ Within the body of a trigger, the sqlite3_changes() function reports
+ the number of rows that were changed by the most recently
+ completed INSERT, UPDATE, or DELETE statement within the trigger body.
+
+ SQLite implements the command "DELETE FROM table" without a WHERE clause
+ by dropping and recreating the table. (This is much faster than going
+ through and deleting individual elements from the table.) Because of
+ this optimization, the change count for "DELETE FROM table" will be
+ zero regardless of the number of elements that were originally in the
+ table. To get an accurate count of the number of rows deleted, use
+ "DELETE FROM table WHERE 1" instead.
+}
+
+api {} {
+ int sqlite3_total_changes(sqlite3*);
+} {
+ This function returns the total number of database rows that have
+ been modified, inserted, or deleted since the database connection was
+ created using sqlite3_open(). All changes are counted, including
+ changes by triggers and changes to TEMP and auxiliary databases.
+ However, changes to the SQLITE_MASTER table (caused by statements
+ such as CREATE TABLE) are not counted. Nor are changes counted when
+ an entire table is deleted using DROP TABLE.
+
+ See also the sqlite3_changes() API.
+
+ SQLite implements the command "DELETE FROM table" without a WHERE clause
+ by dropping and recreating the table. (This is much faster than going
+ through and deleting individual elements from the table.) Because of
+ this optimization, the change count for "DELETE FROM table" will be
+ zero regardless of the number of elements that were originally in the
+ table. To get an accurate count of the number of rows deleted, use
+ "DELETE FROM table WHERE 1" instead.
+}
+
+api {} {
+ int sqlite3_close(sqlite3*);
+} {
+ Call this function with a pointer to a structure that was previously
+ returned from sqlite3_open() or sqlite3_open16()
+ and the corresponding database will be closed.
+
+ SQLITE_OK is returned if the close is successful. If there are
+ prepared statements that have not been finalized, then SQLITE_BUSY
+ is returned. SQLITE_ERROR might be returned if the argument is not
+ a valid connection pointer returned by sqlite3_open() or if the connection
+ pointer has been closed previously.
+}
+
+api {} {
+const void *sqlite3_column_blob(sqlite3_stmt*, int iCol);
+int sqlite3_column_bytes(sqlite3_stmt*, int iCol);
+int sqlite3_column_bytes16(sqlite3_stmt*, int iCol);
+double sqlite3_column_double(sqlite3_stmt*, int iCol);
+int sqlite3_column_int(sqlite3_stmt*, int iCol);
+long long int sqlite3_column_int64(sqlite3_stmt*, int iCol);
+const unsigned char *sqlite3_column_text(sqlite3_stmt*, int iCol);
+const void *sqlite3_column_text16(sqlite3_stmt*, int iCol);
+int sqlite3_column_type(sqlite3_stmt*, int iCol);
+#define SQLITE_INTEGER 1
+#define SQLITE_FLOAT 2
+#define SQLITE_TEXT 3
+#define SQLITE_BLOB 4
+#define SQLITE_NULL 5
+} {
+ These routines return information about a single column
+ of the current result row of a query. In every
+ case the first argument is a pointer to the SQL statement that is being
+ executed (the sqlite_stmt* that was returned from sqlite3_prepare()) and
+ the second argument is the index of the column for which information
+ should be returned. iCol is zero-indexed. The left-most column has an
+ index of 0.
+
+ If the SQL statement is not currently pointing to a valid row, or if
+ the column index is out of range, the result is undefined.
+
+ If the result is a BLOB then the sqlite3_column_bytes() routine returns
+ the number of bytes in that BLOB. No type conversions occur.
+ If the result is a string (or a number since a number can be converted
+ into a string) then sqlite3_column_bytes() converts
+ the value into a UTF-8 string and returns
+ the number of bytes in the resulting string. The value returned does
+ not include the \\000 terminator at the end of the string. The
+ sqlite3_column_bytes16() routine converts the value into a UTF-16
+ encoding and returns the number of bytes (not characters) in the
+ resulting string. The \\u0000 terminator is not included in this count.
+
+ These routines attempt to convert the value where appropriate. For
+ example, if the internal representation is FLOAT and a text result
+ is requested, sprintf() is used internally to do the conversion
+ automatically. The following table details the conversions that
+ are applied:
+
+<blockquote>
+<table border="1">
+<tr><th>Internal Type</th><th>Requested Type</th><th>Conversion</th></tr>
+<tr><td> NULL </td><td> INTEGER</td><td>Result is 0</td></tr>
+<tr><td> NULL </td><td> FLOAT </td><td> Result is 0.0</td></tr>
+<tr><td> NULL </td><td> TEXT </td><td> Result is NULL pointer</td></tr>
+<tr><td> NULL </td><td> BLOB </td><td> Result is NULL pointer</td></tr>
+<tr><td> INTEGER </td><td> FLOAT </td><td> Convert from integer to float</td></tr>
+<tr><td> INTEGER </td><td> TEXT </td><td> ASCII rendering of the integer</td></tr>
+<tr><td> INTEGER </td><td> BLOB </td><td> Same as for INTEGER->TEXT</td></tr>
+<tr><td> FLOAT </td><td> INTEGER</td><td>Convert from float to integer</td></tr>
+<tr><td> FLOAT </td><td> TEXT </td><td> ASCII rendering of the float</td></tr>
+<tr><td> FLOAT </td><td> BLOB </td><td> Same as FLOAT->TEXT</td></tr>
+<tr><td> TEXT </td><td> INTEGER</td><td>Use atoi()</td></tr>
+<tr><td> TEXT </td><td> FLOAT </td><td> Use atof()</td></tr>
+<tr><td> TEXT </td><td> BLOB </td><td> No change</td></tr>
+<tr><td> BLOB </td><td> INTEGER</td><td>Convert to TEXT then use atoi()</td></tr>
+<tr><td> BLOB </td><td> FLOAT </td><td> Convert to TEXT then use atof()</td></tr>
+<tr><td> BLOB </td><td> TEXT </td><td> Add a \\000 terminator if needed</td></tr>
+</table>
+</blockquote>
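+
+ For illustration only (the helper name dump_query is invented), the
+ usual access pattern looks roughly like this:
+
+<blockquote><pre>
+ #include "sqlite3.h"
+ #include "stdio.h"
+
+ /* Sketch: run a query and print every column of every row as text. */
+ int dump_query(sqlite3 *db, const char *zSql){
+   sqlite3_stmt *pStmt;
+   int rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0);
+   if( rc!=SQLITE_OK ) return rc;
+   while( sqlite3_step(pStmt)==SQLITE_ROW ){
+     int i, n = sqlite3_column_count(pStmt);
+     for(i=0; i!=n; i++){
+       /* sqlite3_column_text() converts the value to UTF-8 as needed */
+       const unsigned char *z = sqlite3_column_text(pStmt, i);
+       printf("%s ", z ? (const char*)z : "NULL");
+     }
+     printf("\\n");
+   }
+   return sqlite3_finalize(pStmt);
+ }
+</pre></blockquote>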
+}
+
+api {} {
+int sqlite3_column_count(sqlite3_stmt *pStmt);
+} {
+ Return the number of columns in the result set returned by the prepared
+ SQL statement. This routine returns 0 if pStmt is an SQL statement
+ that does not return data (for example an UPDATE).
+
+ See also sqlite3_data_count().
+}
+
+api {} {
+const char *sqlite3_column_decltype(sqlite3_stmt *, int i);
+const void *sqlite3_column_decltype16(sqlite3_stmt*,int);
+} {
+ The first argument is a prepared SQL statement. If this statement
+ is a SELECT statement and the Nth column of the returned result set
+ of the SELECT is a table column, then the declared type of the table
+ column is returned. If the Nth column of the result set is not a table
+ column, then a NULL pointer is returned. The returned string is
+ UTF-8 encoded for sqlite3_column_decltype() and UTF-16 encoded
+ for sqlite3_column_decltype16(). For example, in the database schema:
+
+ <blockquote><pre>
+ CREATE TABLE t1(c1 INTEGER);
+ </pre></blockquote>
+
+ And the following statement compiled:
+
+ <blockquote><pre>
+ SELECT c1 + 1, c1 FROM t1;
+ </pre></blockquote>
+
+ Then this routine would return the string "INTEGER" for the second
+ result column (i==1), and a NULL pointer for the first result column
+ (i==0).
+
+ If the following statements were compiled then this routine would
+ return "INTEGER" for the first (only) result column.
+
+ <blockquote><pre>
+ SELECT (SELECT c1) FROM t1;
+ SELECT (SELECT c1 FROM t1);
+ SELECT c1 FROM (SELECT c1 FROM t1);
+ SELECT * FROM (SELECT c1 FROM t1);
+ SELECT * FROM (SELECT * FROM t1);
+ </pre></blockquote>
+}
+
+api {} {
+ int sqlite3_table_column_metadata(
+ sqlite3 *db, /* Connection handle */
+ const char *zDbName, /* Database name or NULL */
+ const char *zTableName, /* Table name */
+ const char *zColumnName, /* Column name */
+ char const **pzDataType, /* OUTPUT: Declared data type */
+ char const **pzCollSeq, /* OUTPUT: Collation sequence name */
+ int *pNotNull, /* OUTPUT: True if NOT NULL constraint exists */
+ int *pPrimaryKey, /* OUTPUT: True if column part of PK */
+ int *pAutoinc /* OUTPUT: True if column is auto-increment */
+ );
+} {
+ This routine is used to obtain meta information about a specific column of a
+ specific database table accessible using the connection handle passed as the
+ first function argument.
+
+ The column is identified by the second, third and fourth parameters to
+ this function. The second parameter is either the name of the database
+ (i.e. "main", "temp" or an attached database) containing the specified
+ table or NULL. If it is NULL, then all attached databases are searched
+ for the table using the same algorithm as the database engine uses to
+ resolve unqualified table references.
+
+ The third and fourth parameters to this function are the table and column
+ name of the desired column, respectively. Neither of these parameters
+ may be NULL.
+
+ Meta information is returned by writing to the memory locations passed as
+ the 5th and subsequent parameters to this function. Any of these
+ arguments may be NULL, in which case the corresponding element of meta
+ information is omitted.
+
+<pre>
+ Parameter Output Type Description
+ -----------------------------------
+ 5th const char* Declared data type
+ 6th const char* Name of the column's default collation sequence
+ 7th int True if the column has a NOT NULL constraint
+ 8th int True if the column is part of the PRIMARY KEY
+ 9th int True if the column is AUTOINCREMENT
+</pre>
+
+ The memory pointed to by the character pointers returned for the
+ declaration type and collation sequence is valid only until the next
+ call to any sqlite API function.
+
+ This function may load one or more schemas from database files. If an
+ error occurs during this process, or if the requested table or column
+ cannot be found, an SQLITE error code is returned and an error message
+ left in the database handle (to be retrieved using sqlite3_errmsg()).
+ Specifying an SQL view instead of a table as the third argument is also
+ considered an error.
+
+ If the specified column is "rowid", "oid" or "_rowid_" and an
+ INTEGER PRIMARY KEY column has been explicitly declared, then the output
+ parameters are set for the explicitly declared column. If there is no
+ explicitly declared IPK column, then the data-type is "INTEGER", the
+ collation sequence "BINARY" and the primary-key flag is set. Both
+ the not-null and auto-increment flags are clear.
+
+ This API is only available if the library was compiled with the
+ SQLITE_ENABLE_COLUMN_METADATA preprocessor symbol defined.
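+
+ For illustration only (the table "t1" and column "id" are invented), a
+ call that asks whether a column is part of the primary key might look
+ like this:
+
+<blockquote><pre>
+ #include "sqlite3.h"
+
+ /* Sketch: pass NULL for every piece of meta information we do not
+ ** need; only the primary-key flag is requested here. */
+ int is_pk_column(sqlite3 *db){
+   int isPk = 0;
+   int rc = sqlite3_table_column_metadata(
+       db, 0, "t1", "id",
+       0, 0, 0,          /* declared type, collation, NOT NULL: unused */
+       &isPk,            /* PRIMARY KEY flag */
+       0);               /* AUTOINCREMENT flag: unused */
+   return rc==SQLITE_OK ? isPk : -1;
+ }
+</pre></blockquote>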
+}
+
+api {} {
+const char *sqlite3_column_database_name(sqlite3_stmt *pStmt, int N);
+const void *sqlite3_column_database_name16(sqlite3_stmt *pStmt, int N);
+} {
+If the Nth column returned by statement pStmt is a column reference,
+these functions may be used to access the name of the database (either
+"main", "temp" or the name of an attached database) that contains
+the column. If the Nth column is not a column reference, NULL is
+returned.
+
+See the description of function sqlite3_column_decltype() for a
+description of exactly which expressions are considered column references.
+
+Function sqlite3_column_database_name() returns a pointer to a UTF-8
+encoded string. sqlite3_column_database_name16() returns a pointer
+to a UTF-16 encoded string.
+}
+
+api {} {
+const char *sqlite3_column_origin_name(sqlite3_stmt *pStmt, int N);
+const void *sqlite3_column_origin_name16(sqlite3_stmt *pStmt, int N);
+} {
+If the Nth column returned by statement pStmt is a column reference,
+these functions may be used to access the schema name of the referenced
+column in the database schema. If the Nth column is not a column
+reference, NULL is returned.
+
+See the description of function sqlite3_column_decltype() for a
+description of exactly which expressions are considered column references.
+
+Function sqlite3_column_origin_name() returns a pointer to a UTF-8
+encoded string. sqlite3_column_origin_name16() returns a pointer
+to a UTF-16 encoded string.
+}
+
+api {} {
+const char *sqlite3_column_table_name(sqlite3_stmt *pStmt, int N);
+const void *sqlite3_column_table_name16(sqlite3_stmt *pStmt, int N);
+} {
+If the Nth column returned by statement pStmt is a column reference,
+these functions may be used to access the name of the table that
+contains the column. If the Nth column is not a column reference,
+NULL is returned.
+
+See the description of function sqlite3_column_decltype() for a
+description of exactly which expressions are considered column references.
+
+Function sqlite3_column_table_name() returns a pointer to a UTF-8
+encoded string. sqlite3_column_table_name16() returns a pointer
+to a UTF-16 encoded string.
+}
+
+api {} {
+const char *sqlite3_column_name(sqlite3_stmt*,int);
+const void *sqlite3_column_name16(sqlite3_stmt*,int);
+} {
+ The first argument is a prepared SQL statement. This function returns
+ the column heading for the Nth column of that statement, where N is the
+ second function argument. The string returned is UTF-8 for
+ sqlite3_column_name() and UTF-16 for sqlite3_column_name16().
+}
+
+api {} {
+void *sqlite3_commit_hook(sqlite3*, int(*xCallback)(void*), void *pArg);
+} {
+ <i>Experimental</i>
+
+ Register a callback function to be invoked whenever a new transaction
+ is committed. The pArg argument is passed through to the callback.
+ If the callback function returns non-zero, then the commit
+ is converted into a rollback.
+
+ If another function was previously registered, its pArg value is returned.
+ Otherwise NULL is returned.
+
+ Registering a NULL function disables the callback. Only a single commit
+ hook callback can be registered at a time.
+}
+
+api {} {
+int sqlite3_complete(const char *sql);
+int sqlite3_complete16(const void *sql);
+} {
+ These functions return true if the given input string comprises
+ one or more complete SQL statements.
+ The argument must be a nul-terminated UTF-8 string for sqlite3_complete()
+ and a nul-terminated UTF-16 string for sqlite3_complete16().
+
+ These routines do not check to see if the SQL statement is well-formed.
+ They only check to see that the statement is terminated by a semicolon
+ that is not part of a string literal and is not inside
+ the body of a trigger.
+} {}
+
+api {} {
+int sqlite3_create_collation(
+ sqlite3*,
+ const char *zName,
+ int pref16,
+ void*,
+ int(*xCompare)(void*,int,const void*,int,const void*)
+);
+int sqlite3_create_collation16(
+ sqlite3*,
+ const char *zName,
+ int pref16,
+ void*,
+ int(*xCompare)(void*,int,const void*,int,const void*)
+);
+#define SQLITE_UTF8 1
+#define SQLITE_UTF16LE 2
+#define SQLITE_UTF16BE 3
+#define SQLITE_UTF16 4
+} {
+ These two functions are used to add new collation sequences to the
+ sqlite3 handle specified as the first argument.
+
+ The name of the new collation sequence is specified as a UTF-8 string
+ for sqlite3_create_collation() and a UTF-16 string for
+ sqlite3_create_collation16(). In both cases the name is passed as the
+ second function argument.
+
+ The third argument must be one of the constants SQLITE_UTF8,
+ SQLITE_UTF16LE or SQLITE_UTF16BE, indicating that the user-supplied
+ routine expects to be passed pointers to strings encoded using UTF-8,
+ UTF-16 little-endian or UTF-16 big-endian respectively. The
+ SQLITE_UTF16 constant indicates that text strings are expected in
+ UTF-16 in the native byte order of the host machine.
+
+ A pointer to the user supplied routine must be passed as the fifth
+ argument. If it is NULL, this is the same as deleting the collation
+ sequence (so that SQLite cannot call it anymore). Each time the user
+ supplied function is invoked, it is passed a copy of the void* passed as
+ the fourth argument to sqlite3_create_collation() or
+ sqlite3_create_collation16() as its first argument.
+
+ The remaining arguments to the user-supplied routine are two strings,
+ each represented by a [length, data] pair and encoded in the encoding
+ that was passed as the third argument when the collation sequence was
+ registered. The user routine should return negative, zero or positive if
+ the first string is less than, equal to, or greater than the second
+ string. i.e. (STRING1 - STRING2).
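+
+ For illustration only, a comparison routine that ignores ASCII case
+ (a sketch that assumes SQLITE_UTF8 text and makes no attempt at full
+ Unicode case folding; the name "NOCASE_ASCII" is invented) might look
+ like this:
+
+ <blockquote><pre>
+ #include "sqlite3.h"
+ #include "ctype.h"
+
+ /* Sketch: byte-wise, ASCII-case-insensitive compare.  The strings
+ ** arrive as (length, data) pairs in the registered encoding. */
+ static int nocase_cmp(void *pArg, int n1, const void *z1,
+                                   int n2, const void *z2){
+   const unsigned char *a = (const unsigned char*)z1;
+   const unsigned char *b = (const unsigned char*)z2;
+   int i, n = n1 > n2 ? n2 : n1;
+   (void)pArg;
+   for(i=0; i!=n; i++){
+     int c = tolower(a[i]) - tolower(b[i]);
+     if( c ) return c;
+   }
+   return n1 - n2;   /* on a tie, the shorter string sorts first */
+ }
+
+ /* registered with:
+ **   sqlite3_create_collation(db, "NOCASE_ASCII", SQLITE_UTF8,
+ **                            0, nocase_cmp);                     */
+ </pre></blockquote>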
+}
+
+api {} {
+int sqlite3_collation_needed(
+ sqlite3*,
+ void*,
+ void(*)(void*,sqlite3*,int eTextRep,const char*)
+);
+int sqlite3_collation_needed16(
+ sqlite3*,
+ void*,
+ void(*)(void*,sqlite3*,int eTextRep,const void*)
+);
+} {
+ To avoid having to register all collation sequences before a database
+ can be used, a single callback function may be registered with the
+ database handle to be called whenever an undefined collation sequence is
+ required.
+
+ If the function is registered using the sqlite3_collation_needed() API,
+ then it is passed the names of undefined collation sequences as strings
+ encoded in UTF-8. If sqlite3_collation_needed16() is used, the names
+ are passed as UTF-16 in machine native byte order. A call to either
+ function replaces any existing callback.
+
+ When the user-function is invoked, the first argument passed is a copy
+ of the second argument to sqlite3_collation_needed() or
+ sqlite3_collation_needed16(). The second argument is the database
+ handle. The third argument is one of SQLITE_UTF8, SQLITE_UTF16BE or
+ SQLITE_UTF16LE, indicating the most desirable form of the collation
+ sequence function required. The fourth argument is the name of the
+ required collation sequence.
+
+ The collation sequence is returned to SQLite by a collation-needed
+ callback using the sqlite3_create_collation() or
+ sqlite3_create_collation16() APIs, described above.
+}
+
+api {} {
+int sqlite3_create_function(
+ sqlite3 *,
+ const char *zFunctionName,
+ int nArg,
+ int eTextRep,
+ void *pUserData,
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
+ void (*xStep)(sqlite3_context*,int,sqlite3_value**),
+ void (*xFinal)(sqlite3_context*)
+);
+int sqlite3_create_function16(
+ sqlite3*,
+ const void *zFunctionName,
+ int nArg,
+ int eTextRep,
+ void *pUserData,
+ void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
+ void (*xStep)(sqlite3_context*,int,sqlite3_value**),
+ void (*xFinal)(sqlite3_context*)
+);
+#define SQLITE_UTF8 1
+#define SQLITE_UTF16LE 2
+#define SQLITE_UTF16BE 3
+#define SQLITE_UTF16 4
+#define SQLITE_ANY 5
+} {
+ These two functions are used to add SQL functions or aggregates
+ implemented in C. The
+ only difference between these two routines is that the second argument, the
+ name of the (scalar) function or aggregate, is encoded in UTF-8 for
+ sqlite3_create_function() and UTF-16 for sqlite3_create_function16().
+ The length of the name is limited to 255 bytes, exclusive of the
+ zero-terminator. Note that the name length limit is in bytes, not
+ characters. Any attempt to create a function with a longer name
+ will result in an SQLITE_ERROR error.
+
+ The first argument is the database handle that the new function or
+ aggregate is to be added to. If a single program uses more than one
+ database handle internally, then user functions or aggregates must
+ be added individually to each database handle with which they will be
+ used.
+
+ The third argument is the number of arguments that the function or
+ aggregate takes. If this argument is -1 then the function or
+ aggregate may take any number of arguments. The maximum number
+ of arguments to a new SQL function is 127. A number larger than
+ 127 for the third argument results in an SQLITE_ERROR error.
+
+ The fourth argument, eTextRep, specifies what type of text arguments
+ this function prefers to receive. Any function should be able to work
+ with UTF-8, UTF-16LE, or UTF-16BE. But some implementations may be
+ more efficient with one representation than another. Users are allowed
+ to specify separate implementations for the same function which are called
+ depending on the text representation of the arguments. The implementation
+ which provides the best match is used. If there is only a single
+ implementation which does not care what text representation is used,
+ then the fourth argument should be SQLITE_ANY.
+
+ The fifth argument is an arbitrary pointer. The function implementations
+ can gain access to this pointer using the sqlite3_user_data() API.
+
+ The sixth, seventh and eighth arguments, xFunc, xStep and xFinal, are
+ pointers to user implemented C functions that implement the user
+ function or aggregate. A scalar function requires an implementation of
+ the xFunc callback only; NULL pointers should be passed as the xStep
+ and xFinal arguments. An aggregate function requires an implementation
+ of xStep and xFinal, and NULL should be passed for xFunc. To delete an
+ existing user function or aggregate, pass NULL for all three function
+ callbacks. Specifying an inconsistent set of callback values, such as an
+ xFunc and an xFinal, or an xStep but no xFinal, results in an SQLITE_ERROR
+ return.
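+
+ For illustration only, a scalar function half(X) returning X/2 (the
+ function name and helper name are invented for this sketch) could be
+ implemented and registered like this:
+
+ <blockquote><pre>
+ #include "sqlite3.h"
+
+ /* Sketch: xFunc implementation for a hypothetical half(X) function. */
+ static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+   if( argc==1 ){
+     sqlite3_result_double(ctx, sqlite3_value_double(argv[0]) / 2.0);
+   }else{
+     sqlite3_result_null(ctx);
+   }
+ }
+
+ /* one argument, any text encoding, no aggregate callbacks:
+ **   sqlite3_create_function(db, "half", 1, SQLITE_ANY, 0,
+ **                           halfFunc, 0, 0);                     */
+ </pre></blockquote>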
+}
+
+api {} {
+int sqlite3_data_count(sqlite3_stmt *pStmt);
+} {
+ Return the number of values in the current row of the result set.
+
+ After a call to sqlite3_step() that returns SQLITE_ROW, this routine
+ will return the same value as the sqlite3_column_count() function.
+ After sqlite3_step() has returned an SQLITE_DONE, SQLITE_BUSY or
+ error code, or before sqlite3_step() has been called on a
+ prepared SQL statement, this routine returns zero.
+}
+
+api {} {
+int sqlite3_errcode(sqlite3 *db);
+} {
+ Return the error code for the most recent failed sqlite3_* API call associated
+ with sqlite3 handle 'db'. If a prior API call failed but the most recent
+ API call succeeded, the return value from this routine is undefined.
+
+ Calls to many sqlite3_* functions set the error code and string returned
+ by sqlite3_errcode(), sqlite3_errmsg() and sqlite3_errmsg16()
+ (overwriting the previous values). Note that calls to sqlite3_errcode(),
+ sqlite3_errmsg() and sqlite3_errmsg16() themselves do not affect the
+ results of future invocations. Calls to API routines that do not return
+ an error code (examples: sqlite3_data_count() or sqlite3_mprintf()) do
+ not change the error code returned by this routine.
+
+ Assuming no other intervening sqlite3_* API calls are made, the error
+ code returned by this function is associated with the same error as
+ the strings returned by sqlite3_errmsg() and sqlite3_errmsg16().
+} {}
+
+api {} {
+const char *sqlite3_errmsg(sqlite3*);
+const void *sqlite3_errmsg16(sqlite3*);
+} {
+ Return a pointer to a UTF-8 encoded string (sqlite3_errmsg)
+ or a UTF-16 encoded string (sqlite3_errmsg16) describing in English the
+ error condition for the most recent sqlite3_* API call. The returned
+ string is always terminated by an 0x00 byte.
+
+ The string "not an error" is returned when the most recent API call was
+ successful.
+}
+
+api {} {
+int sqlite3_exec(
+ sqlite3*, /* An open database */
+ const char *sql, /* SQL to be executed */
+ sqlite_callback, /* Callback function */
+ void *, /* 1st argument to callback function */
+ char **errmsg /* Error msg written here */
+);
+} {
+ A function that executes one or more statements of SQL.
+
+ If one or more of the SQL statements are queries, then
+ the callback function specified by the 3rd argument is
+ invoked once for each row of the query result. This callback
+ should normally return 0. If the callback returns a non-zero
+ value then the query is aborted, all subsequent SQL statements
+ are skipped and the sqlite3_exec() function returns SQLITE_ABORT.
+
+ The 4th argument is an arbitrary pointer that is passed
+ to the callback function as its first argument.
+
+ The 2nd argument to the callback function is the number of
+ columns in the query result. The 3rd argument to the callback
+ is an array of strings holding the values for each column.
+ The 4th argument to the callback is an array of strings holding
+ the names of each column.
+
+ The callback function may be NULL, even for queries. A NULL
+ callback is not an error. It just means that no callback
+ will be invoked.
+
+ If an error occurs while parsing or evaluating the SQL (but
+ not while executing the callback) then an appropriate error
+ message is written into memory obtained from malloc() and
+ *errmsg is made to point to that message. The calling function
+ is responsible for freeing the memory that holds the error
+ message. Use sqlite3_free() for this. If errmsg==NULL,
+ then no error message is ever written.
+
+ The return value is SQLITE_OK if there are no errors and
+ some other return code if there is an error. The particular
+ return value depends on the type of error.
+
+ If the query could not be executed because a database file is
+ locked or busy, then this function returns SQLITE_BUSY. (This
+ behavior can be modified somewhat using the sqlite3_busy_handler()
+ and sqlite3_busy_timeout() functions.)
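+
+ For illustration only (the callback name and the table queried in the
+ comment are invented), a callback that prints each row might look like
+ this:
+
+ <blockquote><pre>
+ #include "sqlite3.h"
+ #include "stdio.h"
+
+ /* Sketch: print each row as "name = value" pairs. */
+ static int print_row(void *pArg, int nCol, char **azVal, char **azName){
+   int i;
+   (void)pArg;
+   for(i=0; i!=nCol; i++){
+     printf("%s = %s\\n", azName[i], azVal[i] ? azVal[i] : "NULL");
+   }
+   return 0;   /* returning non-zero would abort the query */
+ }
+
+ /* invoked as, for example:
+ **   char *zErr = 0;
+ **   int rc = sqlite3_exec(db, "SELECT * FROM t1", print_row, 0, &zErr);
+ **   if( rc!=SQLITE_OK ){ report zErr; sqlite3_free(zErr); }
+ */
+ </pre></blockquote>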
+} {}
+
+api {} {
+int sqlite3_finalize(sqlite3_stmt *pStmt);
+} {
+ The sqlite3_finalize() function is called to delete a prepared
+ SQL statement obtained by a previous call to sqlite3_prepare()
+ or sqlite3_prepare16(). If the statement was executed successfully, or
+ not executed at all, then SQLITE_OK is returned. If execution of the
+ statement failed then an error code is returned.
+
+ All prepared statements must be finalized before sqlite3_close() is
+ called or else the close will fail with a return code of SQLITE_BUSY.
+
+ This routine can be called at any point during the execution of the
+ virtual machine. If the virtual machine has not completed execution
+ when this routine is called, that is like encountering an error or
+ an interrupt. (See sqlite3_interrupt().) Incomplete updates may be
+ rolled back and transactions canceled, depending on the circumstances,
+ and the result code returned will be SQLITE_ABORT.
+}
+
+api {} {
+void *sqlite3_malloc(int);
+void *sqlite3_realloc(void*, int);
+void sqlite3_free(void*);
+} {
+ These routines provide access to the memory allocator used by SQLite.
+ Depending on how SQLite has been compiled and the OS-layer backend,
+ the memory allocator used by SQLite might be the standard system
+ malloc()/realloc()/free(), or it might be something different. With
+ certain compile-time flags, SQLite will add wrapper logic around the
+ memory allocator to add memory leak and buffer overrun detection. The
+ OS layer might substitute a completely different memory allocator.
+ Use these APIs to be sure you are always using the correct memory
+ allocator.
+
+ The sqlite3_free() API, not the standard free() from the system library,
+ should always be used to free the memory buffer returned by
+ sqlite3_mprintf() or sqlite3_vmprintf() and to free the error message
+ string returned by sqlite3_exec(). Using free() instead of sqlite3_free()
+ might accidentally work on some systems and build configurations but
+ will fail on others.
+
+ Compatibility Note: Prior to version 3.4.0, the sqlite3_free API
+ was prototyped to take a <tt>char*</tt> parameter rather than
+ <tt>void*</tt>. Like this:
+<blockquote><pre>
+void sqlite3_free(char*);
+</pre></blockquote>
+ The change to using <tt>void*</tt> might cause warnings when
+ compiling older code against
+ newer libraries, but everything should still work correctly.
+}
+
+api {} {
+int sqlite3_get_table(
+ sqlite3*, /* An open database */
+ const char *sql, /* SQL to be executed */
+ char ***resultp, /* Result written to a char *[] that this points to */
+ int *nrow, /* Number of result rows written here */
+ int *ncolumn, /* Number of result columns written here */
+ char **errmsg /* Error msg written here */
+);
+void sqlite3_free_table(char **result);
+} {
+ The sqlite3_get_table() routine is really just a wrapper around sqlite3_exec().
+ Instead of invoking a user-supplied callback for each row of the
+ result, this routine remembers each row of the result in memory
+ obtained from malloc(), then returns all of the result after the
+ query has finished.
+
+ As an example, suppose the query result were this table:
+
+ <pre>
+ Name | Age
+ -----------------------
+ Alice | 43
+ Bob | 28
+ Cindy | 21
+ </pre>
+
+ If the 3rd argument were &azResult then after the function returns
+ azResult will contain the following data:
+
+ <pre>
+ azResult[0] = "Name";
+ azResult[1] = "Age";
+ azResult[2] = "Alice";
+ azResult[3] = "43";
+ azResult[4] = "Bob";
+ azResult[5] = "28";
+ azResult[6] = "Cindy";
+ azResult[7] = "21";
+ </pre>
+
+ Notice that there is an extra row of data containing the column
+ headers. But the *nrow return value is still 3. *ncolumn is
+ set to 2. In general, the number of values inserted into azResult
+ will be ((*nrow) + 1)*(*ncolumn).
+
+ After the calling function has finished using the result, it should
+ pass the result data pointer to sqlite3_free_table() in order to
+ release the memory that was malloc-ed. Because of the way the
+ malloc() happens, the calling function must not try to call
+ malloc() directly. Only sqlite3_free_table() is able to release
+ the memory properly and safely.
+
+ The return value of this routine is the same as from sqlite3_exec().
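+
+ For illustration only (the helper name and the query are invented), the
+ calling pattern might look like this:
+
+ <blockquote><pre>
+ #include "sqlite3.h"
+ #include "stdio.h"
+
+ /* Sketch: fetch an entire result set into memory, then print it. */
+ int dump_table(sqlite3 *db){
+   char **azResult = 0;
+   int nRow = 0, nCol = 0, i;
+   char *zErr = 0;
+   int rc = sqlite3_get_table(db, "SELECT Name, Age FROM people",
+                              &azResult, &nRow, &nCol, &zErr);
+   if( rc==SQLITE_OK ){
+     /* the first nCol entries are the column headers; data follows */
+     for(i=0; i!=(nRow+1)*nCol; i++){
+       printf("%s\\n", azResult[i] ? azResult[i] : "NULL");
+     }
+   }
+   sqlite3_free_table(azResult);
+   sqlite3_free(zErr);
+   return rc;
+ }
+ </pre></blockquote>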
+}
+
+api {sqlite3_interrupt} {
+ void sqlite3_interrupt(sqlite3*);
+} {
+ This function causes any pending database operation to abort and
+ return at its earliest opportunity. This routine is typically
+ called in response to a user action such as pressing "Cancel"
+ or Ctrl-C where the user wants a long query operation to halt
+ immediately.
+} {}
+
+api {} {
+long long int sqlite3_last_insert_rowid(sqlite3*);
+} {
+ Each entry in an SQLite table has a unique integer key called the "rowid".
+ The rowid is always available as an undeclared column
+ named ROWID, OID, or _ROWID_.
+ If the table has a column of type INTEGER PRIMARY KEY then that column
+ is an alias for the rowid.
+
+ This routine
+ returns the rowid of the most recent INSERT into the database
+ from the database connection given in the first argument. If
+ no inserts have ever occurred on this database connection, zero
+ is returned.
+
+ If an INSERT occurs within a trigger, then the rowid of the
+ inserted row is returned by this routine as long as the trigger
+ is running. But once the trigger terminates, the value returned
+ by this routine reverts to the last value inserted before the
+ trigger fired.
+} {}
+
+api {} {
+char *sqlite3_mprintf(const char*,...);
+char *sqlite3_vmprintf(const char*, va_list);
+} {
+ These routines are variants of the "sprintf()" from the
+ standard C library. The resulting string is written into memory
+ obtained from malloc() so that there is never a possibility of buffer
+ overflow. These routines also implement some additional formatting
+ options that are useful for constructing SQL statements.
+
+ The strings returned by these routines should be freed by calling
+ sqlite3_free().
+
+ All of the usual printf formatting options apply. In addition, there
+ is a "%q" option. %q works like %s in that it substitutes a null-terminated
+ string from the argument list. But %q also doubles every '\\'' character.
+ %q is designed for use inside a string literal. By doubling each '\\''
+ character it escapes that character and allows it to be inserted into
+ the string.
+
+ For example, suppose some string variable contains text as follows:
+
+ <blockquote><pre>
+ char *zText = "It's a happy day!";
+ </pre></blockquote>
+
+ One can use this text in an SQL statement as follows:
+
+ <blockquote><pre>
+ char *zSQL = sqlite3_mprintf("INSERT INTO table1 VALUES('%q')", zText);
+ sqlite3_exec(db, zSQL, 0, 0, 0);
+ sqlite3_free(zSQL);
+ </pre></blockquote>
+
+ Because the %q format string is used, the '\\'' character in zText
+ is escaped and the SQL generated is as follows:
+
+ <blockquote><pre>
+ INSERT INTO table1 VALUES('It''s a happy day!')
+ </pre></blockquote>
+
+ This is correct. Had we used %s instead of %q, the generated SQL
+ would have looked like this:
+
+ <blockquote><pre>
+ INSERT INTO table1 VALUES('It's a happy day!');
+ </pre></blockquote>
+
+ This second example is an SQL syntax error. As a general rule you
+ should always use %q instead of %s when inserting text into a string
+ literal.
+} {}
+
+api {} {
+int sqlite3_open(
+ const char *filename, /* Database filename (UTF-8) */
+ sqlite3 **ppDb /* OUT: SQLite db handle */
+);
+int sqlite3_open16(
+ const void *filename, /* Database filename (UTF-16) */
+ sqlite3 **ppDb /* OUT: SQLite db handle */
+);
+} {
+ Open the sqlite database file "filename". The "filename" is UTF-8
+ encoded for sqlite3_open() and UTF-16 encoded in the native byte order
+ for sqlite3_open16(). An sqlite3* handle is returned in *ppDb, even
+ if an error occurs. If the database is opened (or created) successfully,
+ then SQLITE_OK is returned. Otherwise an error code is returned. The
+ sqlite3_errmsg() or sqlite3_errmsg16() routines can be used to obtain
+ an English language description of the error.
+
+ If the database file does not exist, then a new database will be created
+ as needed.
+ The encoding for the database will be UTF-8 if sqlite3_open() is called and
+ UTF-16 if sqlite3_open16 is used.
+
+ Whether or not an error occurs when it is opened, resources associated
+ with the sqlite3* handle should be released by passing it to
+ sqlite3_close() when it is no longer required.
+
+ The returned sqlite3* can only be used in the same thread in which it
+ was created. It is an error to call sqlite3_open() in one thread then
+ pass the resulting database handle off to another thread to use. This
+ restriction is due to goofy design decisions (bugs?) in the way some
+ threading implementations interact with file locks.
+
+ Note to windows users: The encoding used for the filename argument
+ of sqlite3_open() must be UTF-8, not whatever codepage is currently
+ defined. Filenames containing international characters must be converted
+ to UTF-8 prior to passing them into sqlite3_open().
+}
+
+api {} {
+int sqlite3_prepare(
+ sqlite3 *db, /* Database handle */
+ const char *zSql, /* SQL statement, UTF-8 encoded */
+ int nBytes, /* Length of zSql in bytes. */
+ sqlite3_stmt **ppStmt, /* OUT: Statement handle */
+ const char **pzTail /* OUT: Pointer to unused portion of zSql */
+);
+int sqlite3_prepare16(
+ sqlite3 *db, /* Database handle */
+ const void *zSql, /* SQL statement, UTF-16 encoded */
+ int nBytes, /* Length of zSql in bytes. */
+ sqlite3_stmt **ppStmt, /* OUT: Statement handle */
+ const void **pzTail /* OUT: Pointer to unused portion of zSql */
+);
+} {
+ To execute an SQL query, it must first be compiled into a byte-code
+ program using one of the following routines. The only difference between
+ them is that the second argument, specifying the SQL statement to
+ compile, is assumed to be encoded in UTF-8 for the sqlite3_prepare()
+ function and UTF-16 for sqlite3_prepare16().
+
+ The first argument "db" is an SQLite database handle. The second
+ argument "zSql" is the statement to be compiled, encoded as either
+ UTF-8 or UTF-16 (see above). If the next argument, "nBytes", is less
+ than zero, then zSql is read up to the first nul terminator. If
+ "nBytes" is not less than zero, then it is the length of the string zSql
+ in bytes (not characters).
+
+ *pzTail is made to point to the first byte past the end of the first
+ SQL statement in zSql. This routine only compiles the first statement
+ in zSql, so *pzTail is left pointing to what remains uncompiled.
+
+ *ppStmt is left pointing to a compiled SQL statement that can be
+ executed using sqlite3_step(). Or if there is an error, *ppStmt may be
+ set to NULL. If the input text contained no SQL (if the input is an
+ empty string or a comment) then *ppStmt is set to NULL. The calling
+ procedure is responsible for deleting this compiled SQL statement
+ using sqlite3_finalize() after it has finished with it.
+
+ On success, SQLITE_OK is returned. Otherwise an error code is returned.
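+
+ When zSql contains several statements, the pzTail output makes it easy
+ to compile and run them one after another. A sketch (the helper name is
+ invented and error handling is abbreviated):
+
+ <blockquote><pre>
+ #include "sqlite3.h"
+
+ /* Sketch: execute every statement in a semicolon-separated script. */
+ int run_script(sqlite3 *db, const char *zSql){
+   int rc = SQLITE_OK;
+   while( zSql && zSql[0] && rc==SQLITE_OK ){
+     sqlite3_stmt *pStmt = 0;
+     const char *zTail = 0;
+     rc = sqlite3_prepare(db, zSql, -1, &pStmt, &zTail);
+     if( rc==SQLITE_OK && pStmt ){
+       while( sqlite3_step(pStmt)==SQLITE_ROW ){}   /* discard any rows */
+       rc = sqlite3_finalize(pStmt);
+     }
+     zSql = zTail;   /* advance past the statement just compiled */
+   }
+   return rc;
+ }
+ </pre></blockquote>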
+}
+
+api {} {
+void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*);
+} {
+ <i>Experimental</i>
+
+ This routine configures a callback function - the progress callback - that
+ is invoked periodically during long running calls to sqlite3_exec(),
+ sqlite3_step() and sqlite3_get_table().
+ An example use for this API is to keep
+ a GUI updated during a large query.
+
+ The progress callback is invoked once for every N virtual machine opcodes,
+ where N is the second argument to this function. The progress callback
+ itself is identified by the third argument to this function. The fourth
+ argument to this function is a void pointer passed to the progress callback
+ function each time it is invoked.
+
+ If a call to sqlite3_exec(), sqlite3_step() or sqlite3_get_table() results
+ in less than N opcodes being executed, then the progress callback is not
+ invoked.
+
+ To remove the progress callback altogether, pass NULL as the third
+ argument to this function.
+
+ If the progress callback returns a result other than 0, then the current
+ query is immediately terminated and any database changes rolled back. If the
+ query was part of a larger transaction, then the transaction is not rolled
+ back and remains active. The sqlite3_exec() call returns SQLITE_ABORT.
+
+}
+
+api {} {
+int sqlite3_reset(sqlite3_stmt *pStmt);
+} {
+ The sqlite3_reset() function is called to reset a prepared SQL
+ statement obtained by a previous call to sqlite3_prepare() or
+ sqlite3_prepare16() back to its initial state, ready to be re-executed.
+ Any SQL statement variables that had values bound to them using
+ the sqlite3_bind_*() API retain their values.
+}
+
+api {} {
+void sqlite3_result_blob(sqlite3_context*, const void*, int n, void(*)(void*));
+void sqlite3_result_double(sqlite3_context*, double);
+void sqlite3_result_error(sqlite3_context*, const char*, int);
+void sqlite3_result_error16(sqlite3_context*, const void*, int);
+void sqlite3_result_int(sqlite3_context*, int);
+void sqlite3_result_int64(sqlite3_context*, long long int);
+void sqlite3_result_null(sqlite3_context*);
+void sqlite3_result_text(sqlite3_context*, const char*, int n, void(*)(void*));
+void sqlite3_result_text16(sqlite3_context*, const void*, int n, void(*)(void*));
+void sqlite3_result_text16be(sqlite3_context*, const void*, int n, void(*)(void*));
+void sqlite3_result_text16le(sqlite3_context*, const void*, int n, void(*)(void*));
+void sqlite3_result_value(sqlite3_context*, sqlite3_value*);
+} {
+ User-defined functions invoke these routines in order to
+ set their return value. The sqlite3_result_value() routine is used
+ to return an exact copy of one of the arguments to the function.
+
+ The operation of these routines is very similar to the operation of
+ sqlite3_bind_blob() and its cousins. Refer to the documentation there
+ for additional information.
+}
+
+api {} {
+int sqlite3_set_authorizer(
+ sqlite3*,
+ int (*xAuth)(void*,int,const char*,const char*,const char*,const char*),
+ void *pUserData
+);
+#define SQLITE_CREATE_INDEX 1 /* Index Name Table Name */
+#define SQLITE_CREATE_TABLE 2 /* Table Name NULL */
+#define SQLITE_CREATE_TEMP_INDEX 3 /* Index Name Table Name */
+#define SQLITE_CREATE_TEMP_TABLE 4 /* Table Name NULL */
+#define SQLITE_CREATE_TEMP_TRIGGER 5 /* Trigger Name Table Name */
+#define SQLITE_CREATE_TEMP_VIEW 6 /* View Name NULL */
+#define SQLITE_CREATE_TRIGGER 7 /* Trigger Name Table Name */
+#define SQLITE_CREATE_VIEW 8 /* View Name NULL */
+#define SQLITE_DELETE 9 /* Table Name NULL */
+#define SQLITE_DROP_INDEX 10 /* Index Name Table Name */
+#define SQLITE_DROP_TABLE 11 /* Table Name NULL */
+#define SQLITE_DROP_TEMP_INDEX 12 /* Index Name Table Name */
+#define SQLITE_DROP_TEMP_TABLE 13 /* Table Name NULL */
+#define SQLITE_DROP_TEMP_TRIGGER 14 /* Trigger Name Table Name */
+#define SQLITE_DROP_TEMP_VIEW 15 /* View Name NULL */
+#define SQLITE_DROP_TRIGGER 16 /* Trigger Name Table Name */
+#define SQLITE_DROP_VIEW 17 /* View Name NULL */
+#define SQLITE_INSERT 18 /* Table Name NULL */
+#define SQLITE_PRAGMA 19 /* Pragma Name 1st arg or NULL */
+#define SQLITE_READ 20 /* Table Name Column Name */
+#define SQLITE_SELECT 21 /* NULL NULL */
+#define SQLITE_TRANSACTION 22 /* NULL NULL */
+#define SQLITE_UPDATE 23 /* Table Name Column Name */
+#define SQLITE_ATTACH 24 /* Filename NULL */
+#define SQLITE_DETACH 25 /* Database Name NULL */
+#define SQLITE_ALTER_TABLE 26 /* Database Name Table Name */
+#define SQLITE_REINDEX 27 /* Index Name NULL */
+#define SQLITE_ANALYZE 28 /* Table Name NULL */
+#define SQLITE_CREATE_VTABLE 29 /* Table Name Module Name */
+#define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */
+#define SQLITE_FUNCTION 31 /* Function Name NULL */
+
+#define SQLITE_DENY 1 /* Abort the SQL statement with an error */
+#define SQLITE_IGNORE 2 /* Don't allow access, but don't generate an error */
+} {
+ This routine registers a callback with the SQLite library. The
+ callback is invoked by sqlite3_prepare() to authorize various
+ operations against the database. The callback should
+ return SQLITE_OK if access is allowed, SQLITE_DENY if the entire
+ SQL statement should be aborted with an error and SQLITE_IGNORE
+ if the operation should be treated as a no-op.
+
+ Each database connection may have at most one authorizer registered
+ at a time. Each call
+ to sqlite3_set_authorizer() overrides the previous authorizer.
+ Setting the callback to NULL disables the authorizer.
+
+ The second argument to the access authorization function will be one
+ of the defined constants shown. These values signify what kind of operation
+ is to be authorized. The 3rd and 4th arguments to the authorization
+ function will be arguments or NULL depending on which of the
+ codes is used as the second argument. For example, if the
+ 2nd argument code is SQLITE_READ then the 3rd argument will be the name
+ of the table that is being read from and the 4th argument will be the
+ name of the column that is being read from. Or if the 2nd argument
+ is SQLITE_FUNCTION then the 3rd argument will be the name of the
+ function that is being invoked and the 4th argument will be NULL.
+
+ The 5th argument is the name
+ of the database ("main", "temp", etc.) where applicable. The 6th argument
+ is the name of the inner-most trigger or view that is responsible for
+ the access attempt or NULL if this access attempt is directly from
+ input SQL code.
+
+ The return value of the authorization function should be one of the
+ constants SQLITE_OK, SQLITE_DENY, or SQLITE_IGNORE. A return of
+ SQLITE_OK means that the operation is permitted and that
+ sqlite3_prepare() can proceed as normal.
+ A return of SQLITE_DENY means that the sqlite3_prepare()
+ should fail with an error. A return of SQLITE_IGNORE causes the
+ sqlite3_prepare() to continue as normal but the requested
+ operation is silently converted into a no-op. A return of SQLITE_IGNORE
+ in response to an SQLITE_READ or SQLITE_FUNCTION causes the column
+ being read or the function being invoked to return a NULL.
+
+ The intent of this routine is to allow applications to safely execute
+ user-entered SQL. An appropriate callback can deny the user-entered
+ SQL access to certain operations (ex: anything that changes the database)
+ or to deny access to certain tables or columns within the database.
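+
+ For illustration only (the callback name is invented), an authorizer
+ that refuses any attempt to delete rows while allowing everything else
+ might look like this:
+
+ <blockquote><pre>
+ #include "sqlite3.h"
+
+ /* Sketch: forbid DELETE, allow everything else. */
+ static int no_delete_auth(void *pUserData, int code,
+                           const char *z1, const char *z2,
+                           const char *zDb, const char *zTrigger){
+   (void)pUserData; (void)z1; (void)z2; (void)zDb; (void)zTrigger;
+   return code==SQLITE_DELETE ? SQLITE_DENY : SQLITE_OK;
+ }
+
+ /* installed with:  sqlite3_set_authorizer(db, no_delete_auth, 0); */
+ </pre></blockquote>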
+}
+
+api {} {
+int sqlite3_step(sqlite3_stmt*);
+} {
+ After an SQL query has been prepared with a call to either
+ sqlite3_prepare() or sqlite3_prepare16(), then this function must be
+ called one or more times to execute the statement.
+
+ The return value will be either SQLITE_BUSY, SQLITE_DONE,
+ SQLITE_ROW, SQLITE_ERROR, or SQLITE_MISUSE.
+
+ SQLITE_BUSY means that the database engine attempted to open
+ a locked database and there is no busy callback registered.
+ Call sqlite3_step() again to retry the open.
+
+ SQLITE_DONE means that the statement has finished executing
+ successfully. sqlite3_step() should not be called again on this virtual
+ machine without first calling sqlite3_reset() to reset the virtual
+ machine back to its initial state.
+
+ If the SQL statement being executed returns any data, then
+ SQLITE_ROW is returned each time a new row of data is ready
+ for processing by the caller. The values may be accessed using
+ the sqlite3_column_*() functions. sqlite3_step()
+ is called again to retrieve the next row of data.
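+
+ For example, a minimal sketch of a typical row loop (assuming an open
+ database handle db; the query text is illustrative and error handling
+ is reduced to a single check) might look like this:
+
+ <blockquote><pre>
+ /* Sketch only: loop over the rows of an illustrative query. */
+ sqlite3_stmt *pStmt;
+ if( sqlite3_prepare(db, "SELECT name FROM tbl1", -1, &pStmt, 0)==SQLITE_OK ){
+   while( sqlite3_step(pStmt)==SQLITE_ROW ){
+     const unsigned char *zName = sqlite3_column_text(pStmt, 0);
+     /* ... use zName here ... */
+   }
+   sqlite3_finalize(pStmt);
+ }
+ </pre></blockquote>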
+
+ SQLITE_ERROR means that a run-time error (such as a constraint
+ violation) has occurred. sqlite3_step() should not be called again on
+ the VM. More information may be found by calling sqlite3_errmsg().
+ A more specific error code (example: SQLITE_INTERRUPT, SQLITE_SCHEMA,
+ SQLITE_CORRUPT, and so forth) can be obtained by calling
+ sqlite3_reset() on the prepared statement.
+
+ SQLITE_MISUSE means that this routine was called inappropriately.
+ Perhaps it was called on a virtual machine that had already been
+ finalized or on one that had previously returned SQLITE_ERROR or
+ SQLITE_DONE. Or it could be the case that a database connection
+ is being used by a different thread than the one in which it was created.
+
+ <b>Goofy Interface Alert:</b>
+ The sqlite3_step() API always returns a generic error code,
+ SQLITE_ERROR, following any error other than SQLITE_BUSY and SQLITE_MISUSE.
+ You must call sqlite3_reset() (or sqlite3_finalize()) in order to find
+ the specific error code that better describes the error. We admit that
+ this is a goofy design. Sqlite3_step() would be much easier to use if
+ it returned the specific error code directly. But we cannot change that
+ now without breaking backwards compatibility.
+
+ Note that there is never any harm in calling sqlite3_reset() after
+ getting back an SQLITE_ERROR from sqlite3_step(). Any API that can
+ be used after an sqlite3_step() can also be used after sqlite3_reset().
+ You may want to create a simple wrapper around sqlite3_step() to make
+ this easier. For example:
+
+ <blockquote><pre>
+ int less_goofy_sqlite3_step(sqlite3_stmt *pStatement){
+ int rc;
+ rc = sqlite3_step(pStatement);
+ if( rc==SQLITE_ERROR ){
+ rc = sqlite3_reset(pStatement);
+ }
+ return rc;
+ }
+ </pre></blockquote>
+
+ Simply substitute the less_goofy_sqlite3_step() call above for
+ the normal sqlite3_step() everywhere in your code, and you will
+ always get back the specific error code rather than a generic
+ SQLITE_ERROR error code.
+}
+
+api {} {
+void *sqlite3_trace(sqlite3*, void(*xTrace)(void*,const char*), void*);
+} {
+ Register a function that is called each time an SQL statement is evaluated.
+ The callback function is invoked on the first call to sqlite3_step() after
+ calls to sqlite3_prepare() or sqlite3_reset().
+ This function can be used (for example) to generate
+ a log file of all SQL executed against a database. This can be
+ useful when debugging an application that uses SQLite.
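+
+ A sketch of such a logger (assuming an open database handle db and the
+ standard I/O library) might be:
+
+ <blockquote><pre>
+ /* Sketch only: write each traced SQL statement to standard error. */
+ static void traceCallback(void *pArg, const char *zSql){
+   fputs(zSql, stderr);
+   fputs("; ", stderr);
+ }
+
+ sqlite3_trace(db, traceCallback, 0);
+ </pre></blockquote>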
+}
+
+api {} {
+void *sqlite3_user_data(sqlite3_context*);
+} {
+ The pUserData argument to the sqlite3_create_function() and
+ sqlite3_create_function16() routines used to register user functions
+ is available to the implementation of the function using this
+ call.
+}
+
+api {} {
+const void *sqlite3_value_blob(sqlite3_value*);
+int sqlite3_value_bytes(sqlite3_value*);
+int sqlite3_value_bytes16(sqlite3_value*);
+double sqlite3_value_double(sqlite3_value*);
+int sqlite3_value_int(sqlite3_value*);
+long long int sqlite3_value_int64(sqlite3_value*);
+const unsigned char *sqlite3_value_text(sqlite3_value*);
+const void *sqlite3_value_text16(sqlite3_value*);
+const void *sqlite3_value_text16be(sqlite3_value*);
+const void *sqlite3_value_text16le(sqlite3_value*);
+int sqlite3_value_type(sqlite3_value*);
+} {
+ This group of routines returns information about arguments to
+ a user-defined function. Function implementations use these routines
+ to access their arguments. These routines are the same as the
+ sqlite3_column_... routines except that these routines take a single
+ sqlite3_value* pointer instead of an sqlite3_stmt* and an integer
+ column number.
+
+ See the documentation under sqlite3_column_blob for additional
+ information.
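+
+ For example, a hypothetical scalar function half(X), registered with
+ sqlite3_create_function() on an assumed database handle db, might read
+ its single argument like this:
+
+ <blockquote><pre>
+ /* Sketch of a hypothetical half(X) function that returns X/2. */
+ static void halfFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+   double x = sqlite3_value_double(argv[0]);
+   sqlite3_result_double(context, x/2.0);
+ }
+
+ sqlite3_create_function(db, "half", 1, SQLITE_ANY, 0, halfFunc, 0, 0);
+ </pre></blockquote>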
+}
+
+api {} {
+ int sqlite3_sleep(int);
+} {
+ Sleep for a little while. The argument is the number of
+ milliseconds to sleep for.
+
+ If the operating system does not support sleep requests with
+ millisecond time resolution, then the time will be rounded up to
+ the nearest second. The number of milliseconds of sleep actually
+ requested from the operating system is returned.
+}
+
+api {} {
+ int sqlite3_expired(sqlite3_stmt*);
+} {
+ Return TRUE (non-zero) if the statement supplied as an argument needs
+ to be recompiled. A statement needs to be recompiled whenever the
+ execution environment changes in a way that would alter the program
+ that sqlite3_prepare() generates. For example, if new functions or
+ collating sequences are registered or if an authorizer function is
+ added or changed.
+}
+
+api {} {
+ int sqlite3_transfer_bindings(sqlite3_stmt*, sqlite3_stmt*);
+} {
+ Move all bindings from the first prepared statement over to the second.
+ This routine is useful, for example, if the first prepared statement
+ fails with an SQLITE_SCHEMA error. The same SQL can be prepared into
+ the second prepared statement then all of the bindings transfered over
+ to the second statement before the first statement is finalized.
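+
+ A sketch of that recovery pattern (assuming pOld is the expired
+ statement, zSql is its original text, and db is the database handle;
+ error checks omitted) might be:
+
+ <blockquote><pre>
+ /* Sketch only: recover from SQLITE_SCHEMA by re-preparing. */
+ sqlite3_stmt *pNew;
+ sqlite3_prepare(db, zSql, -1, &pNew, 0);
+ sqlite3_transfer_bindings(pOld, pNew);
+ sqlite3_finalize(pOld);
+ pOld = pNew;
+ </pre></blockquote>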
+}
+
+api {} {
+ int sqlite3_global_recover();
+} {
+ This function used to be involved in recovering from out-of-memory
+ errors. But as of SQLite version 3.3.0, out-of-memory recovery is
+ automatic and this routine now does nothing. The interface is retained
+ to avoid link errors with legacy code.
+}
+
+api {} {
+ int sqlite3_get_autocommit(sqlite3*);
+} {
+ Test to see whether or not the database connection is in autocommit
+ mode. Return TRUE if it is and FALSE if not. Autocommit mode is on
+ by default. Autocommit is disabled by a BEGIN statement and reenabled
+ by the next COMMIT or ROLLBACK.
+}
+
+api {} {
+ int sqlite3_clear_bindings(sqlite3_stmt*);
+} {
+ Set all the parameters in the compiled SQL statement back to NULL.
+}
+
+api {} {
+ sqlite3 *sqlite3_db_handle(sqlite3_stmt*);
+} {
+ Return the sqlite3* database handle to which the prepared statement given
+ in the argument belongs. This is the same database handle that was
+ the first argument to the sqlite3_prepare() that was used to create
+ the statement in the first place.
+}
+
+api {} {
+ void *sqlite3_update_hook(
+ sqlite3*,
+ void(*)(void *,int ,char const *,char const *,sqlite_int64),
+ void*
+ );
+} {
+ Register a callback function with the database connection identified by the
+ first argument to be invoked whenever a row is updated, inserted or deleted.
+ Any callback set by a previous call to this function for the same
+ database connection is overridden.
+
+ The second argument is a pointer to the function to invoke when a
+ row is updated, inserted or deleted. The first argument to the callback is
+ a copy of the third argument to sqlite3_update_hook. The second callback
+ argument is one of SQLITE_INSERT, SQLITE_DELETE or SQLITE_UPDATE, depending
+ on the operation that caused the callback to be invoked. The third and
+ fourth arguments to the callback contain pointers to the database and
+ table name containing the affected row. The final callback parameter is
+ the rowid of the row. In the case of an update, this is the rowid after
+ the update takes place.
+
+ The update hook is not invoked when internal system tables are
+ modified (i.e. sqlite_master and sqlite_sequence).
+
+ If another function was previously registered, its pArg value is returned.
+ Otherwise NULL is returned.
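+
+ For illustration, a sketch of a hook that merely records the rowid of
+ the most recently changed row (db assumed to be an open database
+ handle) might be:
+
+ <blockquote><pre>
+ /* Illustrative sketch: remember the last rowid that was changed. */
+ static sqlite_int64 lastRowid = 0;
+ static void updateHook(
+   void *pArg,            /* Copy of the 3rd argument to sqlite3_update_hook() */
+   int op,                /* SQLITE_INSERT, SQLITE_DELETE, or SQLITE_UPDATE */
+   const char *zDb,       /* Database name */
+   const char *zTable,    /* Table name */
+   sqlite_int64 rowid     /* Rowid of the affected row */
+ ){
+   lastRowid = rowid;
+ }
+
+ sqlite3_update_hook(db, updateHook, 0);
+ </pre></blockquote>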
+
+ See also: sqlite3_commit_hook(), sqlite3_rollback_hook()
+}
+
+api {} {
+ void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*);
+} {
+ Register a callback to be invoked whenever a transaction is rolled
+ back.
+
+ The new callback function overrides any existing rollback-hook
+ callback. If there was an existing callback, then it's pArg value
+ (the third argument to sqlite3_rollback_hook() when it was registered)
+ is returned. Otherwise, NULL is returned.
+
+ For the purposes of this API, a transaction is said to have been
+ rolled back if an explicit "ROLLBACK" statement is executed, or
+ an error or constraint causes an implicit rollback to occur. The
+ callback is not invoked if a transaction is automatically rolled
+ back because the database connection is closed.
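+
+ As a sketch (db assumed to be an open database handle):
+
+ <blockquote><pre>
+ /* Illustrative sketch: count rollbacks. */
+ static int nRollback = 0;
+ static void rollbackHook(void *pArg){ nRollback++; }
+
+ sqlite3_rollback_hook(db, rollbackHook, 0);
+ </pre></blockquote>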
+}
+
+api {} {
+ int sqlite3_enable_shared_cache(int);
+} {
+ This routine enables or disables the sharing of the database cache
+ and schema data structures between connections to the same database.
+ Sharing is enabled if the argument is true and disabled if the argument
+ is false.
+
+ Cache sharing is enabled and disabled on a thread-by-thread basis.
+ Each call to this routine enables or disables cache sharing only for
+ connections created in the same thread in which this routine is called.
+ There is no mechanism for sharing cache between database connections
+ running in different threads.
+
+ Sharing must be disabled prior to shutting down a thread or else
+ the thread will leak memory. Call this routine with an argument of
+ 0 to turn off sharing. Or use the sqlite3_thread_cleanup() API.
+
+ This routine must not be called when any database connections
+ are active in the current thread. Enabling or disabling shared
+ cache while there are active database connections will result
+ in memory corruption.
+
+ When the shared cache is enabled, the
+ following routines must always be called from the same thread:
+ sqlite3_open(), sqlite3_prepare(), sqlite3_step(), sqlite3_reset(),
+ sqlite3_finalize(), and sqlite3_close().
+ This is due to the fact that the shared cache makes use of
+ thread-specific storage so that it will be available for sharing
+ with other connections.
+
+ This routine returns SQLITE_OK if shared cache was
+ enabled or disabled successfully. An error code is returned
+ otherwise.
+
+ Shared cache is disabled by default for backward compatibility.
+}
+
+api {} {
+ void sqlite3_thread_cleanup(void);
+} {
+ This routine makes sure that all thread local storage used by SQLite
+ in the current thread has been deallocated. A thread can call this
+ routine prior to terminating in order to make sure there are no memory
+ leaks.
+
+ This routine is not strictly necessary. If cache sharing has been
+ disabled using sqlite3_enable_shared_cache() and if all database
+ connections have been closed and if SQLITE_ENABLE_MEMORY_MANAGEMENT is
+ on and all memory has been freed, then the thread local storage will
+ already have been automatically deallocated. This routine is provided
+ as a convenience to the programmer who just wants to make sure that there
+ are no leaks.
+}
+
+api {} {
+ int sqlite3_release_memory(int N);
+} {
+ This routine attempts to free at least N bytes of memory from the caches
+ of database connections that were created in the same thread from which this
+ routine is called. The value returned is the number of bytes actually
+ freed.
+
+ This routine is only available if memory management has been enabled
+ by compiling with the SQLITE_ENABLE_MEMORY_MANAGEMENT macro.
+}
+
+api {} {
+ void sqlite3_soft_heap_limit(int N);
+} {
+ This routine sets the soft heap limit for the current thread to N.
+ If the total heap usage by SQLite in the current thread exceeds N,
+ then sqlite3_release_memory() is called to try to reduce the memory usage
+ below the soft limit.
+
+ Prior to shutting down a thread sqlite3_soft_heap_limit() must be set to
+ zero (the default) or else the thread will leak memory. Alternatively, use
+ the sqlite3_thread_cleanup() API.
+
+ A negative or zero value for N means that there is no soft heap limit and
+ sqlite3_release_memory() will only be called when memory is exhausted.
+ The default value for the soft heap limit is zero.
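+
+ For example (a sketch; the 8 megabyte figure is arbitrary):
+
+ <blockquote><pre>
+ /* Sketch only: cap SQLite heap usage in this thread at about 8MB. */
+ sqlite3_soft_heap_limit(8*1024*1024);
+ </pre></blockquote>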
+
+ SQLite makes a best effort to honor the soft heap limit. But if it
+ is unable to reduce memory usage below the soft limit, execution will
+ continue without error or notification. This is why the limit is
+ called a "soft" limit. It is advisory only.
+
+ This routine is only available if memory management has been enabled
+ by compiling with the SQLITE_ENABLE_MEMORY_MANAGEMENT macro.
+}
+
+api {} {
+ void sqlite3_thread_cleanup(void);
+} {
+ This routine ensures that a thread that has used SQLite in the past
+ has released any thread-local storage it might have allocated.
+ When the rest of the API is used properly, the cleanup of
+ thread-local storage should be completely automatic. You should
+ never really need to invoke this API. But it is provided to you
+ as a precaution and as a potential work-around for future
+ thread-related memory leaks.
+}
+
+set n 0
+set i 0
+foreach item $apilist {
+ set namelist [lindex $item 0]
+ foreach name $namelist {
+ set n_to_name($n) $name
+ set n_to_idx($n) $i
+ set name_to_idx($name) $i
+ incr n
+ }
+ incr i
+}
+set i 0
+foreach name [lsort [array names name_to_idx]] {
+ set sname($i) $name
+ incr i
+}
+#parray n_to_name
+#parray n_to_idx
+#parray name_to_idx
+#parray sname
+incr n -1
+puts {<table width="100%" cellpadding="5"><tr>}
+set nrow [expr {($n+2)/3}]
+set i 0
+for {set j 0} {$j<3} {incr j} {
+ if {$j>0} {puts {<td width="10"></td>}}
+ puts {<td valign="top">}
+ set limit [expr {$i+$nrow}]
+ puts {<ul>}
+ while {$i<$limit && $i<$n} {
+ set name $sname($i)
+ if {[regexp {^sqlite} $name]} {set display $name} {set display <i>$name</i>}
+ puts "<li><a href=\"#$name\">$display</a></li>"
+ incr i
+ }
+ puts {</ul></td>}
+}
+puts "</table>"
+puts "<!-- $n entries. $nrow rows in 3 columns -->"
+
+proc resolve_name {ignore_list name} {
+ global name_to_idx
+ if {![info exists name_to_idx($name)] || [lsearch $ignore_list $name]>=0} {
+ return $name
+ } else {
+ return "<a href=\"#$name\">$name</a>"
+ }
+}
+
+foreach name [lsort [array names name_to_idx]] {
+ set i $name_to_idx($name)
+ if {[info exists done($i)]} continue
+ set done($i) 1
+ foreach {namelist prototype desc} [lindex $apilist $i] break
+ foreach name $namelist {
+ puts "<a name=\"$name\">"
+ }
+ puts "<p><hr></p>"
+ puts "<blockquote><pre>"
+ regsub "^( *\n)+" $prototype {} p2
+ regsub "(\n *)+\$" $p2 {} p3
+ puts $p3
+ puts "</pre></blockquote>"
+ regsub -all {\[} $desc {\[} desc
+ regsub -all {sqlite3_[a-z0-9_]+} $desc "\[resolve_name $name &\]" d2
+ foreach x $specialname {
+ regsub -all $x $d2 "\[resolve_name $name &\]" d2
+ }
+ regsub -all "\n( *\n)+" [subst $d2] "</p>\n\n<p>" d3
+ puts "<p>$d3</p>"
+}
+
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/changes.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/changes.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1535 @@
+#
+# Run this script to generate a changes.html output file
+#
+source common.tcl
+header {SQLite changes}
+puts {
+<p>
+This page provides a high-level summary of changes to SQLite.
+For more detail, refer to the checkin logs generated by
+CVS at
+<a href="http://www.sqlite.org/cvstrac/timeline">
+http://www.sqlite.org/cvstrac/timeline</a>.
+</p>
+
+<DL>
+}
+
+
+proc chng {date desc} {
+ if {[regexp {\(([0-9.]+)\)} $date all vers]} {
+ set label [string map {. _} $vers]
+ puts "<A NAME=\"version_$label\">"
+ }
+ puts "<DT><B>$date</B></DT>"
+ puts "<DD><P><UL>$desc</UL></P></DD>"
+}
+
+chng {2006 October 9 (3.3.8)} {
+<li>Support for full text search using the
+<a href="http://www.sqlite.org/cvstrac/wiki?p=FullTextIndex">FTS1 module</a>
+(beta)</li>
+<li>Added OS-X locking patches (beta - disabled by default)</li>
+<li>Introduce extended error codes and add error codes for various
+kinds of I/O errors.</li>
+<li>Added support for IF EXISTS on CREATE/DROP TRIGGER/VIEW</li>
+<li>Fix the regression test suite so that it works with Tcl8.5</li>
+<li>Enhance sqlite3_set_authorizer() to provide notification of calls to
+ SQL functions.</li>
+<li>Added experimental API: sqlite3_auto_extension()</li>
+<li>Various minor bug fixes</li>
+}
+
+chng {2006 August 12 (3.3.7)} {
+<li>Added support for
+<a href="http://www.sqlite.org/cvstrac/wiki?p=VirtualTables">virtual tables</a>
+(beta)</li>
+<li>Added support for
+<a href="http://www.sqlite.org/cvstrac/wiki?p=LoadableExtensions">
+dynamically loaded extensions</a> (beta)</li>
+<li>The
+<a href="capi3ref.html#sqlite3_interrupt">sqlite3_interrupt()</a>
+routine can be called for a different thread</li>
+<li>Added the <a href="lang_expr.html#match">MATCH</a> operator.</li>
+<li>The default file format is now 1.</li>
+}
+
+chng {2006 June 6 (3.3.6)} {
+<li>Plays better with virus scanners on windows</li>
+<li>Faster :memory: databases</li>
+<li>Fix an obscure segfault in UTF-8 to UTF-16 conversions</li>
+<li>Added driver for OS/2</li>
+<li>Correct column meta-information returned for aggregate queries</li>
+<li>Enhanced output from EXPLAIN QUERY PLAN</li>
+<li>LIMIT 0 now works on subqueries</li>
+<li>Bug fixes and performance enhancements in the query optimizer</li>
+<li>Correctly handle NULL filenames in ATTACH and DETACH</li>
+<li>Improved syntax error messages in the parser</li>
+<li>Fix type coercion rules for the IN operator</li>
+}
+
+chng {2006 April 5 (3.3.5)} {
+<li>CHECK constraints use conflict resolution algorithms correctly.</li>
+<li>The SUM() function throws an error on integer overflow.</li>
+<li>Choose the column names in a compound query from the left-most SELECT
+ instead of the right-most.</li>
+<li>The sqlite3_create_collation() function
+ honors the SQLITE_UTF16_ALIGNED flag.</li>
+<li>SQLITE_SECURE_DELETE compile-time option causes deletes to overwrite
+ old data with zeros.</li>
+<li>Detect integer overflow in abs().</li>
+<li>The random() function provides 64 bits of randomness instead of
+ only 32 bits.</li>
+<li>Parser detects and reports automaton stack overflow.</li>
+<li>Change the round() function to return REAL instead of TEXT.</li>
+<li>Allow WHERE clause terms on the left table of a LEFT OUTER JOIN to
+ contain aggregate subqueries.</li>
+<li>Skip over leading spaces in text to numeric conversions.</li>
+<li>Various minor bug and documentation typo fixes and
+ performance enhancements.</li>
+}
+
+chng {2006 February 11 (3.3.4)} {
+<li>Fix a blunder in the Unix mutex implementation that can lead to
+deadlock on multithreaded systems.</li>
+<li>Fix an alignment problem on 64-bit machines</li>
+<li>Added the fullfsync pragma.</li>
+<li>Fix an optimizer bug that could have caused some unusual LEFT OUTER JOINs
+to give incorrect results.</li>
+<li>The SUM function detects integer overflow and converts to accumulating
+an approximate result using floating point numbers</li>
+<li>Host parameter names can begin with '@' for compatibility with SQL Server.
+</li>
+<li>Other miscellaneous bug fixes</li>
+}
+
+chng {2006 January 31 (3.3.3)} {
+<li>Removed support for an ON CONFLICT clause on CREATE INDEX - it never
+worked correctly so this should not present any backward compatibility
+problems.</li>
+<li>Authorizer callback now notified of ALTER TABLE ADD COLUMN commands</li>
+<li>After any changes to the TEMP database schema, all prepared statements
+are invalidated and must be recreated using a new call to
+sqlite3_prepare()</li>
+<li>Other minor bug fixes in preparation for the first stable release
+of version 3.3</li>
+}
+
+chng {2006 January 24 (3.3.2 beta)} {
+<li>Bug fixes and speed improvements. Improved test coverage.</li>
+<li>Changes to the OS-layer interface: mutexes must now be recursive.</li>
+<li>Discontinue the use of thread-specific data for out-of-memory
+exception handling</li>
+}
+
+chng {2006 January 16 (3.3.1 alpha)} {
+<li>Countless bug fixes</li>
+<li>Speed improvements</li>
+<li>Database connections can now be used by multiple threads, not just
+the thread in which they were created.</li>
+}
+
+chng {2006 January 10 (3.3.0 alpha)} {
+<li>CHECK constraints</li>
+<li>IF EXISTS and IF NOT EXISTS clauses on CREATE/DROP TABLE/INDEX.</li>
+<li>DESC indices</li>
+<li>More efficient encoding of boolean values resulting in smaller database
+files</li>
+<li>More aggressive SQLITE_OMIT_FLOATING_POINT</li>
+<li>Separate INTEGER and REAL affinity</li>
+<li>Added a virtual function layer for the OS interface</li>
+<li>"exists" method added to the TCL interface</li>
+<li>Improved response to out-of-memory errors</li>
+<li>Database cache can be optionally shared between connections
+in the same thread</li>
+<li>Optional READ UNCOMMITTED isolation (instead of the default
+isolation level of SERIALIZABLE) and table level locking when
+database connections share a common cache.</li>
+}
+
+chng {2005 December 19 (3.2.8)} {
+<li>Fix an obscure bug that can cause database corruption under the
+following unusual circumstances: A large INSERT or UPDATE statement which
+is part of an even larger transaction fails due to a uniqueness constraint
+but the containing transaction commits.</li>
+}
+
+chng {2005 December 19 (2.8.17)} {
+<li>Fix an obscure bug that can cause database corruption under the
+following unusual circumstances: A large INSERT or UPDATE statement which
+is part of an even larger transaction fails due to a uniqueness constraint
+but the containing transaction commits.</li>
+}
+
+chng {2005 September 24 (3.2.7)} {
+<li>GROUP BY now considers NULLs to be equal again, as it should
+</li>
+<li>Now compiles on Solaris and OpenBSD and other Unix variants
+that lack the fdatasync() function</li>
+<li>Now compiles on MSVC++6 again</li>
+<li>Fix uninitialized variables causing malfunctions for various obscure
+queries</li>
+<li>Correctly compute a LEFT OUTER JOIN that is constrained on the
+left table only</li>
+}
+
+chng {2005 September 17 (3.2.6)} {
+<li>Fix a bug that can cause database corruption if a VACUUM (or
+ autovacuum) fails and is rolled back on a database that is
+ larger than 1GiB</li>
+<li>LIKE optimization now works for columns with COLLATE NOCASE</li>
+<li>ORDER BY and GROUP BY now use bounded memory</li>
+<li>Added support for COUNT(DISTINCT expr)</li>
+<li>Change the way SUM() handles NULL values in order to comply with
+ the SQL standard</li>
+<li>Use fdatasync() instead of fsync() where possible in order to speed
+ up commits slightly</li>
+<li>Use of the CROSS keyword in a join turns off the table reordering
+ optimization</li>
+<li>Added the experimental and undocumented EXPLAIN QUERY PLAN capability</li>
+<li>Use the unicode API in windows</li>
+}
+
+chng {2005 August 27 (3.2.5)} {
+<li>Fix a bug affecting DELETE and UPDATE statements that changed
+more than 40960 rows.</li>
+<li>Change the makefile so that it no longer requires GNUmake extensions</li>
+<li>Fix the --enable-threadsafe option on the configure script</li>
+<li>Fix a code generator bug that occurs when the left-hand side of an IN
+operator is constant and the right-hand side is a SELECT statement</li>
+<li>The PRAGMA synchronous=off statement now disables syncing of the
+master journal file in addition to the normal rollback journals</li>
+}
+
+chng {2005 August 24 (3.2.4)} {
+<li>Fix a bug introduced in the previous release
+that can cause a segfault while generating code
+for complex WHERE clauses.</li>
+<li>Allow floating point literals to begin or end with a decimal point.</li>
+}
+
+chng {2005 August 21 (3.2.3)} {
+<li>Added support for the CAST operator</li>
+<li>Tcl interface allows BLOB values to be transferred to user-defined
+functions</li>
+<li>Added the "transaction" method to the Tcl interface</li>
+<li>Allow the DEFAULT value of a column to call functions that have constant
+operands</li>
+<li>Added the ANALYZE command for gathering statistics on indices and
+using those statistics when picking an index in the optimizer</li>
+<li>Remove the limit (formerly 100) on the number of terms in the
+WHERE clause</li>
+<li>The right-hand side of the IN operator can now be a list of expressions
+instead of just a list of constants</li>
+<li>Rework the optimizer so that it is able to make better use of indices</li>
+<li>The order of tables in a join is adjusted automatically to make
+better use of indices</li>
+<li>The IN operator is now a candidate for optimization even if the left-hand
+side is not the left-most term of the index. Multiple IN operators can be
+used with the same index.</li>
+<li>WHERE clause expressions using BETWEEN and OR are now candidates
+for optimization</li>
+<li>Added the "case_sensitive_like" pragma and the SQLITE_CASE_SENSITIVE_LIKE
+compile-time option to set its default value to "on".</li>
+<li>Use indices to help with GLOB expressions and LIKE expressions too
+when the case_sensitive_like pragma is enabled</li>
+<li>Added support for grave-accent quoting for compatibility with MySQL</li>
+<li>Improved test coverage</li>
+<li>Dozens of minor bug fixes</li>
+}
+
+chng {2005 June 13 (3.2.2)} {
+<li>Added the sqlite3_db_handle() API</li>
+<li>Added the sqlite3_get_autocommit() API</li>
+<li>Added a REGEXP operator to the parser. There is no function to back
+up this operator in the standard build but users can add their own using
+sqlite3_create_function()</li>
+<li>Speed improvements and library footprint reductions.</li>
+<li>Fix byte alignment problems on 64-bit architectures.</li>
+<li>Many, many minor bug fixes and documentation updates.</li>
+}
+
+chng {2005 March 29 (3.2.1)} {
+<li>Fix a memory allocation error in the new ADD COLUMN command.</li>
+<li>Documentation updates</li>
+}
+
+chng {2005 March 21 (3.2.0)} {
+<li>Added support for ALTER TABLE ADD COLUMN.</li>
+<li>Added support for the "T" separator in ISO-8601 date/time strings.</li>
+<li>Improved support for Cygwin.</li>
+<li>Numerous bug fixes and documentation updates.</li>
+}
+
+chng {2005 March 16 (3.1.6)} {
+<li>Fix a bug that could cause database corruption when inserting
+ records into tables with around 125 columns.</li>
+<li>sqlite3_step() is now much more likely to invoke the busy handler
+ and less likely to return SQLITE_BUSY.</li>
+<li>Fix memory leaks that used to occur after a malloc() failure.</li>
+}
+
+chng {2005 March 11 (3.1.5)} {
+<li>The ioctl on OS-X to control syncing to disk is F_FULLFSYNC,
+ not F_FULLSYNC. The previous release had it wrong.</li>
+}
+
+chng {2005 March 10 (3.1.4)} {
+<li>Fix a bug in autovacuum that could cause database corruption if
+a CREATE UNIQUE INDEX fails because of a constraint violation.
+This problem only occurs if the new autovacuum feature introduced in
+version 3.1 is turned on.</li>
+<li>The F_FULLSYNC ioctl (currently only supported on OS-X) is disabled
+if the synchronous pragma is set to something other than "full".</li>
+<li>Add additional forward compatibility to the future version 3.2 database
+file format.</li>
+<li>Fix a bug in WHERE clauses of the form (rowid<'2')</li>
+<li>New SQLITE_OMIT_... compile-time options added</li>
+<li>Updates to the man page</li>
+<li>Remove the use of strcasecmp() from the shell</li>
+<li>Windows DLL exports symbols Tclsqlite_Init and Sqlite_Init</li>
+}
+
+chng {2005 February 19 (3.1.3)} {
+<li>Fix a problem with VACUUM on databases from which tables containing
+AUTOINCREMENT have been dropped.</li>
+<li>Add forward compatibility to the future version 3.2 database file
+format.</li>
+<li>Documentation updates</li>
+}
+
+chng {2005 February 15 (3.1.2)} {
+<li>Fix a bug that can lead to database corruption if there are two
+open connections to the same database and one connection does a VACUUM
+and the second makes some change to the database.</li>
+<li>Allow "?" parameters in the LIMIT clause.</li>
+<li>Fix VACUUM so that it works with AUTOINCREMENT.</li>
+<li>Fix a race condition in AUTOVACUUM that can lead to corrupt databases</li>
+<li>Add a numeric version number to the sqlite3.h include file.</li>
+<li>Other minor bug fixes and performance enhancements.</li>
+}
+
+chng {2005 February 15 (2.8.16)} {
+<li>Fix a bug that can lead to database corruption if there are two
+open connections to the same database and one connection does a VACUUM
+and the second makes some change to the database.</li>
+<li>Correctly handle quoted names in CREATE INDEX statements.</li>
+<li>Fix a naming conflict between sqlite.h and sqlite3.h.</li>
+<li>Avoid excess heap usage when copying expressions.</li>
+<li>Other minor bug fixes.</li>
+}
+
+chng {2005 February 1 (3.1.1 BETA)} {
+<li>Automatic caching of prepared statements in the TCL interface</li>
+<li>ATTACH and DETACH as well as some other operations cause existing
+ prepared statements to expire.</li>
+<li>Numerous minor bug fixes</li>
+}
+
+chng {2005 January 21 (3.1.0 ALPHA)} {
+<li>Autovacuum support added</li>
+<li>CURRENT_TIME, CURRENT_DATE, and CURRENT_TIMESTAMP added</li>
+<li>Support for the EXISTS clause added.</li>
+<li>Support for correlated subqueries added.</li>
+<li>Added the ESCAPE clause on the LIKE operator.</li>
+<li>Support for ALTER TABLE ... RENAME TABLE ... added</li>
+<li>AUTOINCREMENT keyword supported on INTEGER PRIMARY KEY</li>
+<li>Many SQLITE_OMIT_ macros inserted to omit features at compile-time
+ and reduce the library footprint.</li>
+<li>The REINDEX command was added.</li>
+<li>The engine no longer consults the main table if it can get
+ all the information it needs from an index.</li>
+<li>Many nuisance bugs fixed.</li>
+}
+
+chng {2004 October 11 (3.0.8)} {
+<li>Add support for DEFERRED, IMMEDIATE, and EXCLUSIVE transactions.</li>
+<li>Allow new user-defined functions to be created when there are
+already one or more precompiled SQL statements.</li>
+<li>Fix portability problems for Mingw/MSYS.</li>
+<li>Fix a byte alignment problem on 64-bit Sparc machines.</li>
+<li>Fix the ".import" command of the shell so that it ignores \r
+characters at the end of lines.</li>
+<li>The "csv" mode option in the shell puts strings inside double-quotes.</li>
+<li>Fix typos in documentation.</li>
+<li>Convert array constants in the code to have type "const".</li>
+<li>Numerous code optimizations, especially optimizations designed to
+make the code footprint smaller.</li>
+}
+
+chng {2004 September 18 (3.0.7)} {
+<li>The BTree module allocates large buffers using malloc() instead of
+ off of the stack, in order to play better on machines with limited
+ stack space.</li>
+<li>Fixed naming conflicts so that versions 2.8 and 3.0 can be
+ linked and used together in the same ANSI-C source file.</li>
+<li>New interface: sqlite3_bind_parameter_index()</li>
+<li>Add support for wildcard parameters of the form: "?nnn"</li>
+<li>Fix problems found on 64-bit systems.</li>
+<li>Removed encode.c file (containing unused routines) from the
+ version 3.0 source tree.</li>
+<li>The sqlite3_trace() callbacks occur before each statement
+ is executed, not when the statement is compiled.</li>
+<li>Makefile updates and miscellaneous bug fixes.</li>
+}
+
+chng {2004 September 02 (3.0.6 beta)} {
+<li>Better detection and handling of corrupt database files.</li>
+<li>The sqlite3_step() interface returns SQLITE_BUSY if it is unable
+ to commit a change because of a lock</li>
+<li>Combine the implementations of LIKE and GLOB into a single
+ pattern-matching subroutine.</li>
+<li>Miscellaneous code size optimizations and bug fixes</li>
+}
+
+chng {2004 August 29 (3.0.5 beta)} {
+<li>Support for ":AAA" style bind parameter names.</li>
+<li>Added the new sqlite3_bind_parameter_name() interface.</li>
+<li>Support for TCL variable names embedded in SQL statements in the
+ TCL bindings.</li>
+<li>The TCL bindings transfer data without necessarily doing a conversion
+ to a string.</li>
+<li>The database for TEMP tables is not created until it is needed.</li>
+<li>Add the ability to specify an alternative temporary file directory
+ using the "sqlite_temp_directory" global variable.</li>
+<li>A compile-time option (SQLITE_BUSY_RESERVED_LOCK) causes the busy
+ handler to be called when there is contention for a RESERVED lock.</li>
+<li>Various bug fixes and optimizations</li>
+}
+
+chng {2004 August 8 (3.0.4 beta)} {
+<li>CREATE TABLE and DROP TABLE now work correctly as prepared statements.</li>
+<li>Fix a bug in VACUUM and UNIQUE indices.</li>
+<li>Add the ".import" command to the command-line shell.</li>
+<li>Fix a bug that could cause index corruption when an attempt to
+ delete rows of a table is blocked by a pending query.</li>
+<li>Library size optimizations.</li>
+<li>Other minor bug fixes.</li>
+}
+
+chng {2004 July 22 (2.8.15)} {
+<li>This is a maintenance release only. Various minor bugs have been
+fixed and some portability enhancements are added.</li>
+}
+
+chng {2004 July 22 (3.0.3 beta)} {
+<li>The second beta release for SQLite 3.0.</li>
+<li>Add support for "PRAGMA page_size" to adjust the page size of
+the database.</li>
+<li>Various bug fixes and documentation updates.</li>
+}
+
+chng {2004 June 30 (3.0.2 beta)} {
+<li>The first beta release for SQLite 3.0.</li>
+}
+
+chng {2004 June 22 (3.0.1 alpha)} {
+<li><font color="red"><b>
+ *** Alpha Release - Research And Testing Use Only ***</b></font>
+<li>Lots of bug fixes.</li>
+}
+
+chng {2004 June 18 (3.0.0 alpha)} {
+<li><font color="red"><b>
+ *** Alpha Release - Research And Testing Use Only ***</b></font>
+<li>Support for internationalization including UTF-8, UTF-16, and
+ user defined collating sequences.</li>
+<li>New file format that is 25% to 35% smaller for typical use.</li>
+<li>Improved concurrency.</li>
+<li>Atomic commits for ATTACHed databases.</li>
+<li>Remove cruft from the APIs.</li>
+<li>BLOB support.</li>
+<li>64-bit rowids.</li>
+<li><a href="version3.html">More information</a>.
+}
+
+chng {2004 June 9 (2.8.14)} {
+<li>Fix the min() and max() optimizer so that it works when the FROM
+ clause consists of a subquery.</li>
+<li>Ignore extra whitespace at the end of "." commands in the shell.</li>
+<li>Bundle sqlite_encode_binary() and sqlite_decode_binary() with the
+ library.</li>
+<li>The TEMP_STORE and DEFAULT_TEMP_STORE pragmas now work.</li>
+<li>Code changes to compile cleanly using OpenWatcom.</li>
+<li>Fix VDBE stack overflow problems with INSTEAD OF triggers and
+ NULLs in IN operators.</li>
+<li>Add the global variable sqlite_temp_directory which if set defines the
+ directory in which temporary files are stored.</li>
+<li>sqlite_interrupt() plays well with VACUUM.</li>
+<li>Other minor bug fixes.</li>
+}
+
+chng {2004 March 8 (2.8.13)} {
+<li>Refactor parts of the code in order to make the code footprint
+ smaller. The code is now also a little bit faster.</li>
+<li>sqlite_exec() is now implemented as a wrapper around sqlite_compile()
+ and sqlite_step().</li>
+<li>The built-in min() and max() functions now honor the difference between
+ NUMERIC and TEXT datatypes. Formerly, min() and max() always assumed
+ their arguments were of type NUMERIC.</li>
+<li>New HH:MM:SS modifier to the built-in date/time functions.</li>
+<li>Experimental sqlite_last_statement_changes() API added. Fixed the
+ last_insert_rowid() function so that it works correctly with
+ triggers.</li>
+<li>Add functions prototypes for the database encryption API.</li>
+<li>Fix several nuisance bugs.</li>
+}
+
+chng {2004 February 8 (2.8.12)} {
+<li>Fix a bug that might corrupt the rollback journal if a power failure
+ or external program halt occurs in the middle of a COMMIT. The corrupt
+ journal can lead to database corruption when it is rolled back.</li>
+<li>Reduce the size and increase the speed of various modules, especially
+ the virtual machine.</li>
+<li>Allow "<expr> IN <table>" as a shorthand for
+ "<expr> IN (SELECT * FROM <table>".</li>
+<li>Optimizations to the sqlite_mprintf() routine.</li>
+<li>Make sure the MIN() and MAX() optimizations work within subqueries.</li>
+}
+
+chng {2004 January 14 (2.8.11)} {
+<li>Fix a bug in how the IN operator handles NULLs in subqueries. The bug
+ was introduced by the previous release.</li>
+}
+
+chng {2004 January 13 (2.8.10)} {
+<li>Fix a potential database corruption problem on Unix caused by the fact
+ that all posix advisory locks are cleared whenever you close() a file.
+ The work-around is to embargo all close() calls while locks are
+ outstanding.</li>
+<li>Performance enhancements on some corner cases of COUNT(*).</li>
+<li>Make sure the in-memory backend responds sanely if malloc() fails.</li>
+<li>Allow sqlite_exec() to be called from within user-defined SQL
+ functions.</li>
+<li>Improved accuracy of floating-point conversions using "long double".</li>
+<li>Bug fixes in the experimental date/time functions.</li>
+}
+
+chng {2004 January 5 (2.8.9)} {
+<li>Fix a 32-bit integer overflow problem that could result in corrupt
+ indices in a database if large negative numbers (less than -2147483648)
+ were inserted into an indexed numeric column.</li>
+<li>Fix a locking problem on multi-threaded Linux implementations.</li>
+<li>Always use "." instead of "," as the decimal point even if the locale
+ requests ",".</li>
+<li>Added UTC to localtime conversions to the experimental date/time
+ functions.</li>
+<li>Bug fixes to date/time functions.</li>
+}
+
+chng {2003 December 17 (2.8.8)} {
+<li>Fix a critical bug introduced into 2.8.0 which could cause
+ database corruption.</li>
+<li>Fix a problem with 3-way joins that do not use indices</li>
+<li>The VACUUM command now works with the non-callback API</li>
+<li>Improvements to the "PRAGMA integrity_check" command</li>
+}
+
+chng {2003 December 4 (2.8.7)} {
+<li>Added experimental sqlite_bind() and sqlite_reset() APIs.</li>
+<li>If the name of the database is an empty string, open a new database
+ in a temporary file that is automatically deleted when the database
+ is closed.</li>
+<li>Performance enhancements in the lemon-generated parser</li>
+<li>Experimental date/time functions revised.</li>
+<li>Disallow temporary indices on permanent tables.</li>
+<li>Documentation updates and typo fixes</li>
+<li>Added experimental sqlite_progress_handler() callback API</li>
+<li>Removed support for the Oracle8 outer join syntax.</li>
+<li>Allow GLOB and LIKE operators to work as functions.</li>
+<li>Other minor documentation and makefile changes and bug fixes.</li>
+}
+
+chng {2003 August 21 (2.8.6)} {
+<li>Moved the CVS repository to www.sqlite.org</li>
+<li>Update the NULL-handling documentation.</li>
+<li>Experimental date/time functions added.</li>
+<li>Bug fix: correctly evaluate a view of a view without segfaulting.</li>
+<li>Bug fix: prevent database corruption if you dropped a
+ trigger that had the same name as a table.</li>
+<li>Bug fix: allow a VACUUM (without segfaulting) on an empty
+ database after setting the EMPTY_RESULT_CALLBACKS pragma.</li>
+<li>Bug fix: if an integer value will not fit in a 32-bit int, store it in
+ a double instead.</li>
+<li>Bug fix: Make sure the journal file directory entry is committed to disk
+ before writing the database file.</li>
+}
+
+chng {2003 July 22 (2.8.5)} {
+<li>Make LIMIT work on a compound SELECT statement.</li>
+<li>LIMIT 0 now shows no rows. Use LIMIT -1 to see all rows.</li>
+<li>Correctly handle comparisons between an INTEGER PRIMARY KEY and
+ a floating point number.</li>
+<li>Fix several important bugs in the new ATTACH and DETACH commands.</li>
+<li>Updated the <a href="nulls.html">NULL-handling document</a>.</li>
+<li>Allow NULL arguments in sqlite_compile() and sqlite_step().</li>
+<li>Many minor bug fixes</li>
+}
+
+chng {2003 June 29 (2.8.4)} {
+<li>Enhanced the "PRAGMA integrity_check" command to verify indices.</li>
+<li>Added authorization hooks for the new ATTACH and DETACH commands.</li>
+<li>Many documentation updates</li>
+<li>Many minor bug fixes</li>
+}
+
+chng {2003 June 4 (2.8.3)} {
+<li>Fix a problem that will corrupt the indices on a table if you
+ do an INSERT OR REPLACE or an UPDATE OR REPLACE on a table that
+ contains an INTEGER PRIMARY KEY plus one or more indices.</li>
+<li>Fix a bug in windows locking code so that locks work correctly
+ when simultaneously accessed by Win95 and WinNT systems.</li>
+<li>Add the ability for INSERT and UPDATE statements to refer to the
+ "rowid" (or "_rowid_" or "oid") columns.</li>
+<li>Other important bug fixes</li>
+}
+
+chng {2003 May 17 (2.8.2)} {
+<li>Fix a problem that will corrupt the database file if you drop a
+ table from the main database that has a TEMP index.</li>
+}
+
+chng {2003 May 16 (2.8.1)} {
+<li>Reactivated the VACUUM command that reclaims unused disk space in
+ a database file.</li>
+<li>Added the ATTACH and DETACH commands to allow interacting with multiple
+ database files at the same time.</li>
+<li>Added support for TEMP triggers and indices.</li>
+<li>Added support for in-memory databases.</li>
+<li>Removed the experimental sqlite_open_aux_file(). Its function is
+ subsumed in the new ATTACH command.</li>
+<li>The precedence order for ON CONFLICT clauses was changed so that
+ ON CONFLICT clauses on BEGIN statements have a higher precedence than
+ ON CONFLICT clauses on constraints.</li>
+<li>Many, many bug fixes and compatibility enhancements.</li>
+}
+
+chng {2003 Feb 16 (2.8.0)} {
+<li>Modified the journal file format to make it more resistant to corruption
+ that can occur after an OS crash or power failure.</li>
+<li>Added a new C/C++ API that does not use callback for returning data.</li>
+}
+
+chng {2003 Jan 25 (2.7.6)} {
+<li>Performance improvements. The library is now much faster.</li>
+<li>Added the <b>sqlite_set_authorizer()</b> API. Formal documentation has
+ not been written - see the source code comments for instructions on
+ how to use this function.</li>
+<li>Fix a bug in the GLOB operator that was preventing it from working
+ with upper-case letters.</li>
+<li>Various minor bug fixes.</li>
+}
+
+chng {2002 Dec 27 (2.7.5)} {
+<li>Fix an uninitialized variable in pager.c which could (with a probability
+ of about 1 in 4 billion) result in a corrupted database.</li>
+}
+
+chng {2002 Dec 17 (2.7.4)} {
+<li>Database files can now grow to be up to 2^41 bytes. The old limit
+ was 2^31 bytes.</li>
+<li>The optimizer will now scan tables in the reverse direction if doing so will
+ satisfy an ORDER BY ... DESC clause.</li>
+<li>The full pathname of the database file is now remembered even if
+ a relative path is passed into sqlite_open(). This allows
+ the library to continue operating correctly after a chdir().</li>
+<li>Speed improvements in the VDBE.</li>
+<li>Lots of little bug fixes.</li>
+}
+
+chng {2002 Oct 30 (2.7.3)} {
+<li>Various compiler compatibility fixes.</li>
+<li>Fix a bug in the "expr IN ()" operator.</li>
+<li>Accept column names in parentheses.</li>
+<li>Fix a problem with string memory management in the VDBE</li>
+<li>Fix a bug in the "table_info" pragma"</li>
+<li>Export the sqlite_function_type() API function in the Windows DLL</li>
+<li>Fix locking behavior under windows</li>
+<li>Fix a bug in LEFT OUTER JOIN</li>
+}
+
+chng {2002 Sep 25 (2.7.2)} {
+<li>Prevent journal file overflows on huge transactions.</li>
+<li>Fix a memory leak that occurred when sqlite_open() failed.</li>
+<li>Honor the ORDER BY and LIMIT clause of a SELECT even if the
+ result set is used for an INSERT.</li>
+<li>Do not put write locks on the file used to hold TEMP tables.</li>
+<li>Added documentation on SELECT DISTINCT and on how SQLite handles NULLs.</li>
+<li>Fix a problem that was causing poor performance when many thousands
+ of SQL statements were executed by a single sqlite_exec() call.</li>
+}
+
+chng {2002 Aug 31 (2.7.1)} {
+<li>Fix a bug in the ORDER BY logic that was introduced in version 2.7.0</li>
+<li>C-style comments are now accepted by the tokenizer.</li>
+<li>INSERT runs a little faster when the source is a SELECT statement.</li>
+}
+
+chng {2002 Aug 25 (2.7.0)} {
+<li>Make a distinction between numeric and text values when sorting.
+ Text values sort according to memcmp(). Numeric values sort in
+ numeric order.</li>
+<li>Allow multiple simultaneous readers under windows by simulating
+ the reader/writers locks that are missing from Win95/98/ME.</li>
+<li>An error is now returned when trying to start a transaction if
+ another transaction is already active.</li>
+}
+
+chng {2002 Aug 12 (2.6.3)} {
+<li>Add the ability to read both little-endian and big-endian databases.
+ So databases created under SunOS or MacOSX can be read and written
+ under Linux or Windows and vice versa.</li>
+<li>Convert to the new website: http://www.sqlite.org/</li>
+<li>Allow transactions to span Linux Threads</li>
+<li>Bug fix in the processing of the ORDER BY clause for GROUP BY queries</li>
+}
+
+chng {2002 Jly 30 (2.6.2)} {
+<li>Text files read by the COPY command can now have line terminators
+ of LF, CRLF, or CR.</li>
+<li>SQLITE_BUSY is handled correctly if encountered during database
+ initialization.</li>
+<li>Fix to UPDATE triggers on TEMP tables.</li>
+<li>Documentation updates.</li>
+}
+
+chng {2002 Jly 19 (2.6.1)} {
+<li>Include a static string in the library that responds to the RCS
+ "ident" command and which contains the library version number.</li>
+<li>Fix an assertion failure that occurred when deleting all rows of
+ a table with the "count_changes" pragma turned on.</li>
+<li>Better error reporting when problems occur during the automatic
+ 2.5.6 to 2.6.0 database format upgrade.</li>
+}
+
+chng {2002 Jly 17 (2.6.0)} {
+<li>Change the format of indices to correct a design flaw that originated
+ with version 2.1.0. <font color="red">*** This is an incompatible
+ file format change ***</font> When version 2.6.0 or later of the
+ library attempts to open a database file created by version 2.5.6 or
+ earlier, it will automatically and irreversibly convert the file format.
+ <b>Make backup copies of older database files before opening them with
+ version 2.6.0 of the library.</b>
+ </li>
+}
+
+chng {2002 Jly 7 (2.5.6)} {
+<li>Fix more problems with rollback. Enhance the test suite to exercise
+ the rollback logic extensively in order to prevent any future problems.
+ </li>
+}
+
+chng {2002 Jly 6 (2.5.5)} {
+<li>Fix a bug which could cause database corruption during a rollback.
+ This bug was introduced in version 2.4.0 by the freelist
+ optimization of check-in [410].</li>
+<li>Fix a bug in aggregate functions for VIEWs.</li>
+<li>Other minor changes and enhancements.</li>
+}
+
+chng {2002 Jly 1 (2.5.4)} {
+<li>Make the "AS" keyword optional again.</li>
+<li>The datatype of columns now appear in the 4th argument to the
+ callback.</li>
+<li>Added the <b>sqlite_open_aux_file()</b> API, though it is still
+ mostly undocumented and untested.</li>
+<li>Added additional test cases and fixed a few bugs that those
+ test cases found.</li>
+}
+
+chng {2002 Jun 24 (2.5.3)} {
+<li>Bug fix: Database corruption can occur due to the optimization
+ that was introduced in version 2.4.0 (check-in [410]). The problem
+ should now be fixed. The use of versions 2.4.0 through 2.5.2 is
+ not recommended.</li>
+}
+
+chng {2002 Jun 24 (2.5.2)} {
+<li>Added the new <b>SQLITE_TEMP_MASTER</b> table which records the schema
+ for temporary tables in the same way that <b>SQLITE_MASTER</b> does for
+ persistent tables.</li>
+<li>Added an optimization to UNION ALL</li>
+<li>Fixed a bug in the processing of LEFT OUTER JOIN</li>
+<li>The LIMIT clause now works on subselects</li>
+<li>ORDER BY works on subselects</li>
+<li>There is a new TypeOf() function used to determine if an expression
+ is numeric or text.</li>
+<li>Autoincrement now works for INSERT from a SELECT.</li>
+}
+
+chng {2002 Jun 19 (2.5.1)} {
+<li>The query optimizer now attempts to implement the ORDER BY clause
+ using an index. Sorting is still used if no suitable index is
+ available.</li>
+}
+
+chng {2002 Jun 17 (2.5.0)} {
+<li>Added support for row triggers.</li>
+<li>Added SQL-92 compliant handling of NULLs.</li>
+<li>Add support for the full SQL-92 join syntax and LEFT OUTER JOINs.</li>
+<li>Double-quoted strings are interpreted as column names, not text literals.</li>
+<li>Parse (but do not implement) foreign keys.</li>
+<li>Performance improvements in the parser, pager, and WHERE clause code
+ generator.</li>
+<li>Make the LIMIT clause work on subqueries. (ORDER BY still does not
+ work, though.)</li>
+<li>Added the "%Q" expansion to sqlite_*_printf().</li>
+<li>Bug fixes too numerous to mention (see the change log).</li>
+}
+
+chng {2002 May 09 (2.4.12)} {
+<li>Added logic to detect when the library API routines are called out
+ of sequence.</li>
+}
+
+chng {2002 May 08 (2.4.11)} {
+<li>Bug fix: Column names in the result set were not being generated
+ correctly for some (rather complex) VIEWs. This could cause a
+ segfault under certain circumstances.</li>
+}
+
+chng {2002 May 02 (2.4.10)} {
+<li>Bug fix: Generate correct column headers when a compound SELECT is used
+ as a subquery.</li>
+<li>Added the sqlite_encode_binary() and sqlite_decode_binary() functions to
+ the source tree. But they are not yet linked into the library.</li>
+<li>Documentation updates.</li>
+<li>Export the sqlite_changes() function from windows DLLs.</li>
+<li>Bug fix: Do not attempt the subquery flattening optimization on queries
+ that lack a FROM clause. To do so causes a segfault.</li>
+}
+
+chng {2002 Apr 21 (2.4.9)} {
+<li>Fix a bug that was causing the precompiled binary of SQLITE.EXE to
+ report "out of memory" under Windows 98.</li>
+}
+
+chng {2002 Apr 20 (2.4.8)} {
+<li>Make sure VIEWs are created after their corresponding TABLEs in the
+ output of the <b>.dump</b> command in the shell.</li>
+<li>Speed improvements: Do not do synchronous updates on TEMP tables.</li>
+<li>Many improvements and enhancements to the shell.</li>
+<li>Make the GLOB and LIKE operators functions that can be overridden
+ by a programmer. This allows, for example, the LIKE operator to
+ be changed to be case sensitive.</li>
+}
+
+chng {2002 Apr 06 (2.4.7)} {
+<li>Add the ability to put TABLE.* in the column list of a
+ SELECT statement.</li>
+<li>Permit SELECT statements without a FROM clause.</li>
+<li>Added the <b>last_insert_rowid()</b> SQL function.</li>
+<li>Do not count rows where the IGNORE conflict resolution occurs in
+ the row count.</li>
+<li>Make sure function expressions in the VALUES clause of an INSERT
+ are correct.</li>
+<li>Added the <b>sqlite_changes()</b> API function to return the number
+ of rows that changed in the most recent operation.</li>
+}
+
+chng {2002 Apr 02 (2.4.6)} {
+<li>Bug fix: Correctly handle terms in the WHERE clause of a join that
+ do not contain a comparison operator.</li>
+}
+
+chng {2002 Apr 01 (2.4.5)} {
+<li>Bug fix: Correctly handle functions that appear in the WHERE clause
+ of a join.</li>
+<li>When the PRAGMA vdbe_trace=ON is set, correctly print the P3 operand
+ value when it is a pointer to a structure rather than a pointer to
+ a string.</li>
+<li>When inserting an explicit NULL into an INTEGER PRIMARY KEY, convert
+ the NULL value into a unique key automatically.</li>
+}
+
+chng {2002 Mar 24 (2.4.4)} {
+<li>Allow "VIEW" to be a column name</li>
+<li>Added support for CASE expressions (patch from Dan Kennedy)</li>
+<li>Added RPMS to the delivery (patches from Doug Henry)</li>
+<li>Fix typos in the documentation</li>
+<li>Cut over configuration management to a new CVS repository with
+ its own CVSTrac bug tracking system.</li>
+}
+
+chng {2002 Mar 22 (2.4.3)} {
+<li>Fix a bug in SELECT that occurs when a compound SELECT is used as a
+ subquery in the FROM of a SELECT.</li>
+<li>The <b>sqlite_get_table()</b> function now returns an error if you
+ give it two or more SELECTs that return different numbers of columns.</li>
+}
+
+chng {2002 Mar 14 (2.4.2)} {
+<li>Bug fix: Fix an assertion failure that occurred when ROWID was a column
+ in a SELECT statement on a view.</li>
+<li>Bug fix: Fix an uninitialized variable in the VDBE that could cause an
+ assertion failure.</li>
+<li>Make the os.h header file more robust in detecting when the compile is
+ for windows and when it is for unix.</li>
+}
+
+chng {2002 Mar 13 (2.4.1)} {
+<li>Using an unnamed subquery in a FROM clause would cause a segfault.</li>
+<li>The parser now insists on seeing a semicolon or the end of input before
+ executing a statement. This avoids an accidental disaster if the
+ WHERE keyword is misspelled in an UPDATE or DELETE statement.</li>
+}
+
+
+chng {2002 Mar 10 (2.4.0)} {
+<li>Change the name of the sanity_check PRAGMA to <b>integrity_check</b>
+ and make it available in all compiles.</li>
+<li>SELECT min() or max() of an indexed column with no WHERE or GROUP BY
+ clause is handled as a special case which avoids a complete table scan.</li>
+<li>Automatically generated ROWIDs are now sequential.</li>
+<li>Do not allow dot-commands of the command-line shell to occur in the
+ middle of a real SQL command.</li>
+<li>Modifications to the "lemon" parser generator so that the parser tables
+ are 4 times smaller.</li>
+<li>Added support for user-defined functions implemented in C.</li>
+<li>Added support for new functions: <b>coalesce()</b>, <b>lower()</b>,
+ <b>upper()</b>, and <b>random()</b>
+<li>Added support for VIEWs.</li>
+<li>Added the subquery flattening optimizer.</li>
+<li>Modified the B-Tree and Pager modules so that disk pages that do not
+ contain real data (free pages) are not journaled and are not
+ written from memory back to the disk when they change. This does not
+ impact database integrity, since the
+ pages contain no real data, but it does make large INSERT operations
+ about 2.5 times faster and large DELETEs about 5 times faster.</li>
+<li>Made the CACHE_SIZE pragma persistent</li>
+<li>Added the SYNCHRONOUS pragma</li>
+<li>Fixed a bug that was causing updates to fail inside of transactions when
+ the database contained a temporary table.</li>
+}
+
+chng {2002 Feb 18 (2.3.3)} {
+<li>Allow identifiers to be quoted in square brackets, for compatibility
+ with MS-Access.</li>
+<li>Added support for sub-queries in the FROM clause of a SELECT.</li>
+<li>More efficient implementation of sqliteFileExists() under Windows.
+ (by Joel Luscy)</li>
+<li>The VALUES clause of an INSERT can now contain expressions, including
+ scalar SELECT clauses.</li>
+<li>Added support for CREATE TABLE AS SELECT</li>
+<li>Bug fix: Creating and dropping a table all within a single
+ transaction was not working.</li>
+}
+
+chng {2002 Feb 14 (2.3.2)} {
+<li>Bug fix: There was an incorrect assert() in pager.c. The real code was
+ all correct (as far as is known) so everything should work OK if you
+ compile with -DNDEBUG=1. When asserts are not disabled, there
+ could be a fault.</li>
+}
+
+chng {2002 Feb 13 (2.3.1)} {
+<li>Bug fix: An assertion was failing if "PRAGMA full_column_names=ON;" was
+ set and you did a query that used a rowid, like this:
+ "SELECT rowid, * FROM ...".</li>
+}
+
+chng {2002 Jan 30 (2.3.0)} {
+<li>Fix a serious bug in the INSERT command which was causing data to go
+ into the wrong columns if the data source was a SELECT and the INSERT
+ clauses specified its columns in some order other than the default.</li>
+<li>Added the ability to resolve constraint conflicts in ways other than
+ an abort and rollback. See the documentation on the "ON CONFLICT"
+ clause for details.</li>
+<li>Temporary files are now automatically deleted by the operating system
+ when closed. There are no more dangling temporary files on a program
+ crash. (If the OS crashes, fsck will delete the file after reboot
+ under Unix. I do not know what happens under Windows.)</li>
+<li>NOT NULL constraints are honored.</li>
+<li>The COPY command puts NULLs in columns whose data is '\N'.</li>
+<li>In the COPY command, backslash can now be used to escape a newline.</li>
+<li>Added the SANITY_CHECK pragma.</li>
+}
+
+chng {2002 Jan 28 (2.2.5)} {
+<li>Important bug fix: the IN operator was not working if either the
+ left-hand or right-hand side was derived from an INTEGER PRIMARY KEY.</li>
+<li>Do not escape the backslash '\' character in the output of the
+ <b>sqlite</b> command-line access program.</li>
+}
+
+chng {2002 Jan 22 (2.2.4)} {
+<li>The label to the right of an AS in the column list of a SELECT can now
+ be used as part of an expression in the WHERE, ORDER BY, GROUP BY, and/or
+ HAVING clauses.</li>
+<li>Fix a bug in the <b>-separator</b> command-line option to the <b>sqlite</b>
+ command.</li>
+<li>Fix a problem with the sort order when comparing upper-case strings against
+ characters greater than 'Z' but less than 'a'.</li>
+<li>Report an error if an ORDER BY or GROUP BY expression is constant.</li>
+}
+
+chng {2002 Jan 16 (2.2.3)} {
+<li>Fix warning messages in VC++ 7.0. (Patches from nicolas352001)</li>
+<li>Make the library thread-safe. (The code is there and appears to work
+ but has not been stressed.)</li>
+<li>Added the new <b>sqlite_last_insert_rowid()</b> API function.</li>
+}
+
+chng {2002 Jan 13 (2.2.2)} {
+<li>Bug fix: An assertion was failing when a temporary table with an index
+ had the same name as a permanent table created by a separate process.</li>
+<li>Bug fix: Updates to tables containing an INTEGER PRIMARY KEY and an
+ index could fail.</li>
+}
+
+chng {2002 Jan 9 (2.2.1)} {
+<li>Bug fix: An attempt to delete a single row of a table with a WHERE
+ clause of "ROWID=x" when no such rowid exists was causing an error.</li>
+<li>Bug fix: Passing in a NULL as the 3rd parameter to <b>sqlite_open()</b>
+ would sometimes cause a coredump.</li>
+<li>Bug fix: DROP TABLE followed by a CREATE TABLE with the same name all
+ within a single transaction was causing a coredump.</li>
+<li>Makefile updates from A. Rottmann</li>
+}
+
+chng {2001 Dec 22 (2.2.0)} {
+<li>Columns of type INTEGER PRIMARY KEY are actually used as the primary
+ key in underlying B-Tree representation of the table.</li>
+<li>Several obscure, unrelated bugs were found and fixed while
+ implementing the integer primary key change of the previous bullet.</li>
+<li>Added the ability to specify "*" as part of a larger column list in
+ the result section of a SELECT statement. For example:
+ <nobr>"<b>SELECT rowid, * FROM table1;</b>"</nobr>.</li>
+<li>Updates to comments and documentation.</li>
+}
+
+chng {2001 Dec 14 (2.1.7)} {
+<li>Fix a bug in <b>CREATE TEMPORARY TABLE</b> which was causing the
+ table to be initially allocated in the main database file instead
+ of in the separate temporary file. This bug could cause the library
+ to suffer an assertion failure and it could cause "page leaks" in the
+ main database file.</li>
+<li>Fix a bug in the b-tree subsystem that could sometimes cause the first
+ row of a table to be repeated during a database scan.</li>
+}
+
+chng {2001 Dec 14 (2.1.6)} {
+<li>Fix the locking mechanism yet again to prevent
+ <b>sqlite_exec()</b> from returning SQLITE_PROTOCOL
+ unnecessarily. This time the bug was a race condition in
+ the locking code. This change affects both POSIX and Windows users.</li>
+}
+
+chng {2001 Dec 6 (2.1.5)} {
+<li>Fix for another problem (unrelated to the one fixed in 2.1.4)
+ that sometimes causes <b>sqlite_exec()</b> to return SQLITE_PROTOCOL
+ unnecessarily. This time the bug was
+ in the POSIX locking code and should not affect Windows users.</li>
+}
+
+chng {2001 Dec 4 (2.1.4)} {
+<li>Sometimes <b>sqlite_exec()</b> would return SQLITE_PROTOCOL when it
+ should have returned SQLITE_BUSY.</li>
+<li>The fix to the previous bug uncovered a deadlock which was also
+ fixed.</li>
+<li>Add the ability to put a single .command in the second argument
+ of the sqlite shell</li>
+<li>Updates to the FAQ</li>
+}
+
+chng {2001 Nov 23 (2.1.3)} {
+<li>Fix the behavior of comparison operators
+ (ex: "<b><</b>", "<b>==</b>", etc.)
+ so that they are consistent with the order of entries in an index.</li>
+<li>Correct handling of integers in SQL expressions that are larger than
+ what can be represented by the machine integer.</li>
+}
+
+chng {2001 Nov 22 (2.1.2)} {
+<li>Changes to support 64-bit architectures.</li>
+<li>Fix a bug in the locking protocol.</li>
+<li>Fix a bug that could (rarely) cause the database to become
+ unreadable after a DROP TABLE due to corruption to the SQLITE_MASTER
+ table.</li>
+<li>Change the code so that version 2.1.1 databases that were rendered
+ unreadable by the above bug can be read by this version of
+ the library even though the SQLITE_MASTER table is (slightly)
+ corrupted.</li>
+}
+
+chng {2001 Nov 13 (2.1.1)} {
+<li>Bug fix: Sometimes arbitrary strings were passed to the callback
+ function when the actual value of a column was NULL.</li>
+}
+
+chng {2001 Nov 12 (2.1.0)} {
+<li>Change the format of data records so that records up to 16MB in size
+ can be stored.</li>
+<li>Change the format of indices to allow for better query optimization.</li>
+<li>Implement the "LIMIT ... OFFSET ..." clause on SELECT statements.</li>
+}
+
+chng {2001 Nov 3 (2.0.8)} {
+<li>Made selected parameters in API functions <b>const</b>. This should
+ be fully backwards compatible.</li>
+<li>Documentation updates</li>
+<li>Simplify the design of the VDBE by restricting the number of sorters
+ and lists to 1.
+ In practice, no more than one sorter and one list was ever used anyhow.
+ </li>
+}
+
+chng {2001 Oct 21 (2.0.7)} {
+<li>Any UTF-8 character or ISO8859 character can be used as part of
+ an identifier.</li>
+<li>Patches from Christian Werner to improve ODBC compatibility and to
+ fix a bug in the round() function.</li>
+<li>Plug some memory leaks that used to occur if malloc() failed.
+ We have been and continue to be memory leak free as long as
+ malloc() works.</li>
+<li>Changes to some test scripts so that they work on Windows in
+ addition to Unix.</li>
+}
+
+chng {2001 Oct 19 (2.0.6)} {
+<li>Added the EMPTY_RESULT_CALLBACKS pragma</li>
+<li>Support for UTF-8 and ISO8859 characters in column and table names.</li>
+<li>Bug fix: Compute correct table names when the FULL_COLUMN_NAMES pragma
+ is turned on.</li>
+}
+
+chng {2001 Oct 14 (2.0.5)} {
+<li>Added the COUNT_CHANGES pragma.</li>
+<li>Changes to the FULL_COLUMN_NAMES pragma to help out the ODBC driver.</li>
+<li>Bug fix: "SELECT count(*)" was returning NULL for empty tables.
+ Now it returns 0.</li>
+}
+
+chng {2001 Oct 13 (2.0.4)} {
+<li>Bug fix: an obscure and relatively harmless bug was causing one of
+ the tests to fail when gcc optimizations are turned on. This release
+ fixes the problem.</li>
+}
+
+chng {2001 Oct 13 (2.0.3)} {
+<li>Bug fix: the <b>sqlite_busy_timeout()</b> function was delaying 1000
+ times too long before failing.</li>
+<li>Bug fix: an assertion was failing if the disk holding the database
+ file became full or stopped accepting writes for some other reason.
+ New tests were added to detect similar problems in the future.</li>
+<li>Added new operators: <b>&</b> (bitwise-and)
+ <b>|</b> (bitwise-or), <b>~</b> (ones-complement),
+ <b><<</b> (shift left), <b>>></b> (shift right).</li>
+<li>Added new functions: <b>round()</b> and <b>abs()</b>.</li>
+}
+
+chng {2001 Oct 9 (2.0.2)} {
+<li>Fix two bugs in the locking protocol. (One was masking the other.)</li>
+<li>Removed some unused "#include <unistd.h>" that were causing problems
+ for VC++.</li>
+<li>Fixed <b>sqlite.h</b> so that it is usable from C++</li>
+<li>Added the FULL_COLUMN_NAMES pragma. When set to "ON", the names of
+ columns are reported back as TABLE.COLUMN instead of just COLUMN.</li>
+<li>Added the TABLE_INFO() and INDEX_INFO() pragmas to help support the
+ ODBC interface.</li>
+<li>Added support for TEMPORARY tables and indices.</li>
+}
+
+chng {2001 Oct 2 (2.0.1)} {
+<li>Remove some C++ style comments from btree.c so that it will compile
+ using compilers other than gcc.</li>
+<li>The ".dump" output from the shell does not work if there are embedded
+ newlines anywhere in the data. This is an old bug that was carried
+ forward from version 1.0. To fix it, the ".dump" output no longer
+ uses the COPY command. It instead generates INSERT statements.</li>
+<li>Extend the expression syntax to support "expr NOT NULL" (with a
+ space between the "NOT" and the "NULL") in addition to "expr NOTNULL"
+ (with no space).</li>
+}
+
+chng {2001 Sep 28 (2.0.0)} {
+<li>Automatically build binaries for Linux and Windows and put them on
+ the website.</li>
+}
+
+chng {2001 Sep 28 (2.0-alpha-4)} {
+<li>Incorporate makefile patches from A. Rottmann to use LIBTOOL</li>
+}
+
+chng {2001 Sep 27 (2.0-alpha-3)} {
+<li>SQLite now honors the UNIQUE keyword in CREATE UNIQUE INDEX. Primary
+ keys are required to be unique.</li>
+<li>File format changed back to what it was for alpha-1</li>
+<li>Fixes to the rollback and locking behavior</li>
+}
+
+chng {2001 Sep 20 (2.0-alpha-2)} {
+<li>Initial release of version 2.0. The idea of renaming the library
+ to "SQLus" was abandoned in favor of keeping the "SQLite" name and
+ bumping the major version number.</li>
+<li>The pager and btree subsystems added back. They are now the only
+ available backend.</li>
+<li>The Dbbe abstraction and the GDBM and memory drivers were removed.</li>
+<li>Copyright on all code was disclaimed. The library is now in the
+ public domain.</li>
+}
+
+chng {2001 Jul 23 (1.0.32)} {
+<li>Pager and btree subsystems removed. These will be used in a follow-on
+ SQL server library named "SQLus".</li>
+<li>Add the ability to use quoted strings as table and column names in
+ expressions.</li>
+}
+
+chng {2001 Apr 14 (1.0.31)} {
+<li>Pager subsystem added but not yet used.</li>
+<li>More robust handling of out-of-memory errors.</li>
+<li>New tests added to the test suite.</li>
+}
+
+chng {2001 Apr 6 (1.0.30)} {
+<li>Remove the <b>sqlite_encoding</b> TCL variable that was introduced
+ in the previous version.</li>
+<li>Add options <b>-encoding</b> and <b>-tcl-uses-utf</b> to the
+ <b>sqlite</b> TCL command.</li>
+<li>Add tests to make sure that tclsqlite was compiled using Tcl header
+ files and libraries that match.</li>
+}
+
+chng {2001 Apr 5 (1.0.29)} {
+<li>The library now assumes data is stored as UTF-8 if the --enable-utf8
+ option is given to configure. The default behavior is to assume
+ iso8859-x, as it has always done. This only makes a difference for
+ LIKE and GLOB operators and the LENGTH and SUBSTR functions.</li>
+<li>If the library is not configured for UTF-8 and the Tcl library
+ is one of the newer ones that uses UTF-8 internally,
+ then a conversion from UTF-8 to iso8859 and
+ back again is done inside the TCL interface.</li>
+}
+
+chng {2001 Apr 4 (1.0.28)} {
+<li>Added limited support for transactions. At this point, transactions
+ will do table locking on the GDBM backend. There is no support (yet)
+ for rollback or atomic commit.</li>
+<li>Added special column names ROWID, OID, and _ROWID_ that refer to the
+ unique random integer key associated with every row of every table.</li>
+<li>Additional tests added to the regression suite to cover the new ROWID
+ feature and the TCL interface bugs mentioned below.</li>
+<li>Changes to the "lemon" parser generator to help it work better when
+ compiled using MSVC.</li>
+<li>Bug fixes in the TCL interface identified by Oleg Oleinick.</li>
+}
+
+chng {2001 Mar 20 (1.0.27)} {
+<li>When doing DELETE and UPDATE, the library used to write the record
+ numbers of records to be deleted or updated into a temporary file.
+ This is changed so that the record numbers are held in memory.</li>
+<li>The DELETE command without a WHERE clause just removes the database
+ files from the disk, rather than going through and deleting record
+ by record.</li>
+}
+
+chng {2001 Mar 20 (1.0.26)} {
+<li>A serious bug fixed on Windows. Windows users should upgrade.
+ No impact to Unix.</li>
+}
+
+chng {2001 Mar 15 (1.0.25)} {
+<li>Modify the test scripts to identify tests that depend on system
+ load and processor speed and
+ to warn the user that a failure of one of those (rare) tests does
+ not necessarily mean the library is malfunctioning. No changes to
+ code.
+ </li>
+}
+
+chng {2001 Mar 14 (1.0.24)} {
+<li>Fix a bug which was causing
+ the UPDATE command to fail on systems where "malloc(0)" returns
+ NULL. The problem does not appear on Windows, Linux, or HPUX but does
+ cause the library to fail on QNX.
+ </li>
+}
+
+chng {2001 Feb 19 (1.0.23)} {
+<li>An unrelated (and minor) bug from Mark Muranwski was fixed. The algorithm
+ for figuring out where to put temporary files for a "memory:" database
+ was not working quite right.
+ </li>
+}
+
+chng {2001 Feb 19 (1.0.22)} {
+<li>The previous fix was not quite right. This one seems to work better.
+ </li>
+}
+
+chng {2001 Feb 19 (1.0.21)} {
+<li>The UPDATE statement was not working when the WHERE clause contained
+ some terms that could be satisfied using indices and other terms that
+ could not. Fixed.</li>
+}
+
+chng {2001 Feb 11 (1.0.20)} {
+<li>Merge development changes into the main trunk. Future work toward
+ using a BTree file structure will use a separate CVS source tree. This
+ CVS tree will continue to support the GDBM version of SQLite only.</li>
+}
+
+chng {2001 Feb 6 (1.0.19)} {
+<li>Fix a strange (but valid) C declaration that was causing problems
+ for QNX. No logical changes.</li>
+}
+
+chng {2001 Jan 4 (1.0.18)} {
+<li>Print the offending SQL statement when an error occurs.</li>
+<li>Do not require commas between constraints in CREATE TABLE statements.</li>
+<li>Added the "-echo" option to the shell.</li>
+<li>Changes to comments.</li>
+}
+
+chng {2000 Dec 10 (1.0.17)} {
+<li>Rewrote <b>sqlite_complete()</b> to make it faster.</li>
+<li>Minor tweaks to other code to make it run a little faster.</li>
+<li>Added new tests for <b>sqlite_complete()</b> and for memory leaks.</li>
+}
+
+chng {2000 Dec 4 (1.0.16)} {
+<li>Documentation updates. Mostly fixing of typos and spelling errors.</li>
+}
+
+chng {2000 Oct 23 (1.0.15)} {
+<li>Documentation updates</li>
+<li>Some sanity checking code was removed from the inner loop of vdbe.c
+ to help the library to run a little faster. The code is only
+ removed if you compile with -DNDEBUG.</li>
+}
+
+chng {2000 Oct 19 (1.0.14)} {
+<li>Added a "memory:" backend driver that stores its database in an
+ in-memory hash table.</li>
+}
+
+chng {2000 Oct 18 (1.0.13)} {
+<li>Break out the GDBM driver into a separate file in anticipation
+ of adding new drivers.</li>
+<li>Allow the name of a database to be prefixed by the driver type.
+ For now, the only driver type is "gdbm:".</li>
+}
+
+chng {2000 Oct 16 (1.0.12)} {
+<li>Fixed an off-by-one error that was causing a coredump in
+ the '%q' format directive of the new
+ <b>sqlite_..._printf()</b> routines.</li>
+<li>Added the <b>sqlite_interrupt()</b> interface.</li>
+<li>In the shell, <b>sqlite_interrupt()</b> is invoked when the
+ user presses Control-C</li>
+<li>Fixed some instances where <b>sqlite_exec()</b> was
+ returning the wrong error code.</li>
+}
+
+chng {2000 Oct 11 (1.0.10)} {
+<li>Added notes on how to compile for Windows95/98.</li>
+<li>Removed a few variables that were not being used. Etc.</li>
+}
+
+chng {2000 Oct 8 (1.0.9)} {
+<li>Added the <b>sqlite_..._printf()</b> interface routines.</li>
+<li>Modified the <b>sqlite</b> shell program to use the new interface
+ routines.</li>
+<li>Modified the <b>sqlite</b> shell program to print the schema for
+ the built-in SQLITE_MASTER table, if explicitly requested.</li>
+}
+
+chng {2000 Sep 30 (1.0.8)} {
+<li>Begin writing documentation on the TCL interface.</li>
+}
+
+chng {2000 Sep 29 (Not Released)} {
+<li>Added the <b>sqlite_get_table()</b> API</li>
+<li>Updated the documentation due to the above change.</li>
+<li>Modified the <b>sqlite</b> shell to make use of the new
+ sqlite_get_table() API in order to print a list of tables
+ in multiple columns, similar to the way "ls" prints filenames.</li>
+<li>Modified the <b>sqlite</b> shell to print a semicolon at the
+ end of each CREATE statement in the output of the ".schema" command.</li>
+}
+
+chng {2000 Sep 21 (Not Released)} {
+<li>Change the tclsqlite "eval" method to return a list of results if
+ no callback script is specified.</li>
+<li>Change tclsqlite.c to use the Tcl_Obj interface</li>
+<li>Add tclsqlite.c to the libsqlite.a library</li>
+}
+
+chng {2000 Sep 13 (Version 1.0.5)} {
+<li>Changed the print format for floating point values from "%g" to "%.15g".
+ </li>
+<li>Changed the comparison function so that numbers in exponential notation
+ (ex: 1.234e+05) sort in numerical order.</li>
+}
+
+chng {2000 Aug 28 (Version 1.0.4)} {
+<li>Added functions <b>length()</b> and <b>substr()</b>.</li>
+<li>Fix a bug in the <b>sqlite</b> shell program that was causing
+ a coredump when the output mode was "column" and the first row
+ of data contained a NULL.</li>
+}
+
+chng {2000 Aug 22 (Version 1.0.3)} {
+<li>In the sqlite shell, print the "Database opened READ ONLY" message
+ to stderr instead of stdout.</li>
+<li>In the sqlite shell, now print the version number on initial startup.</li>
+<li>Add the <b>sqlite_version[]</b> string constant to the library</li>
+<li>Makefile updates</li>
+<li>Bug fix: incorrect VDBE code was being generated for the following
+ circumstance: a query on an indexed table containing a WHERE clause with
+ an IN operator that had a subquery on its right-hand side.</li>
+}
+
+chng {2000 Aug 18 (Version 1.0.1)} {
+<li>Fix a bug in the configure script.</li>
+<li>Minor revisions to the website.</li>
+}
+
+chng {2000 Aug 17 (Version 1.0)} {
+<li>Change the <b>sqlite</b> program so that it can read
+ databases for which it lacks write permission. (It used to
+ refuse all access if it could not write.)</li>
+}
+
+chng {2000 Aug 9} {
+<li>Treat carriage returns as white space.</li>
+}
+
+chng {2000 Aug 8} {
+<li>Added pattern matching to the ".table" command in the "sqlite"
+command shell.</li>
+}
+
+chng {2000 Aug 4} {
+<li>Documentation updates</li>
+<li>Added "busy" and "timeout" methods to the Tcl interface</li>
+}
+
+chng {2000 Aug 3} {
+<li>File format version number was being stored in sqlite_master.tcl
+ multiple times. This was harmless, but unnecessary. It is now fixed.</li>
+}
+
+chng {2000 Aug 2} {
+<li>The file format for indices was changed slightly in order to work
+ around an inefficiency that can sometimes come up with GDBM when
+ there are large indices having many entries with the same key.
+ <font color="red">** Incompatible Change **</font></li>
+}
+
+chng {2000 Aug 1} {
+<li>The parser's stack was overflowing on a very long UPDATE statement.
+ This is now fixed.</li>
+}
+
+chng {2000 July 31} {
+<li>Finish the <a href="vdbe.html">VDBE tutorial</a>.</li>
+<li>Added documentation on compiling to WindowsNT.</li>
+<li>Fix a configuration program for WindowsNT.</li>
+<li>Fix a configuration problem for HPUX.</li>
+}
+
+chng {2000 July 29} {
+<li>Better labels on column names of the result.</li>
+}
+
+chng {2000 July 28} {
+<li>Added the <b>sqlite_busy_handler()</b>
+ and <b>sqlite_busy_timeout()</b> interface.</li>
+}
+
+chng {2000 June 23} {
+<li>Begin writing the <a href="vdbe.html">VDBE tutorial</a>.</li>
+}
+
+chng {2000 June 21} {
+<li>Clean up comments and variable names. Changes to documentation.
+ No functional changes to the code.</li>
+}
+
+chng {2000 June 19} {
+<li>Column names in UPDATE statements were case sensitive.
+ This mistake has now been fixed.</li>
+}
+
+chng {2000 June 16} {
+<li>Added the concatenate string operator (||)</li>
+}
+
+chng {2000 June 12} {
+<li>Added the fcnt() function to the SQL interpreter. The fcnt() function
+ returns the number of database "Fetch" operations that have occurred.
+ This function is designed for use in test scripts to verify that
+ queries are efficient and appropriately optimized. Fcnt() has no other
+ useful purpose, as far as I know.</li>
+<li>Added a bunch more tests that take advantage of the new fcnt() function.
+ The new tests did not uncover any new problems.</li>
+}
+
+chng {2000 June 8} {
+<li>Added lots of new test cases</li>
+<li>Fix a few bugs discovered while adding test cases</li>
+<li>Begin adding lots of new documentation</li>
+}
+
+chng {2000 June 6} {
+<li>Added compound select operators: <B>UNION</b>, <b>UNION ALL</B>,
+<b>INTERSECT</b>, and <b>EXCEPT</b></li>
+<li>Added support for using <b>(SELECT ...)</b> within expressions</li>
+<li>Added support for <b>IN</b> and <b>BETWEEN</b> operators</li>
+<li>Added support for <b>GROUP BY</b> and <b>HAVING</b></li>
+<li>NULL values are now reported to the callback as a NULL pointer
+ rather than an empty string.</li>
+}
+
+chng {2000 June 3} {
+<li>Added support for default values on columns of a table.</li>
+<li>Improved test coverage. Fixed a few obscure bugs found by the
+improved tests.</li>
+}
+
+chng {2000 June 2} {
+<li>All database files to be modified by an UPDATE, INSERT or DELETE are
+now locked before any changes are made to any files.
+This makes it safe (I think) to access
+the same database simultaneously from multiple processes.</li>
+<li>The code appears stable so we are now calling it "beta".</li>
+}
+
+chng {2000 June 1} {
+<li>Better support for file locking so that two or more processes
+(or threads)
+can access the same database simultaneously. More work needed in
+this area, though.</li>
+}
+
+chng {2000 May 31} {
+<li>Added support for aggregate functions (Ex: <b>COUNT(*)</b>, <b>MIN(...)</b>)
+to the SELECT statement.</li>
+<li>Added support for <B>SELECT DISTINCT ...</B></li>
+}
+
+chng {2000 May 30} {
+<li>Added the <b>LIKE</b> operator.</li>
+<li>Added a <b>GLOB</b> operator: similar to <B>LIKE</B>
+but it uses Unix shell globbing wildcards instead of the '%'
+and '_' wildcards of SQL.</li>
+<li>Added the <B>COPY</b> command patterned after
+<a href="http://www.postgresql.org/">PostgreSQL</a> so that SQLite
+can now read the output of the <b>pg_dump</b> database dump utility
+of PostgreSQL.</li>
+<li>Added a <B>VACUUM</B> command that calls the
+<b>gdbm_reorganize()</b> function on the underlying database
+files.</li>
+<li>And many, many bug fixes...</li>
+}
+
+chng {2000 May 29} {
+<li>Initial Public Release of Alpha code</li>
+}
+
+puts {
+</DL>
+}
+footer {$Id:}
Added: freeswitch/trunk/libs/sqlite/www/common.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/common.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,88 @@
+# This file contains TCL procedures used to generate standard parts of
+# web pages.
+#
+
+proc header {txt} {
+ puts "<html><head><title>$txt</title></head>"
+ puts \
+{<body bgcolor="white" link="#50695f" vlink="#508896">
+<table width="100%" border="0">
+<tr><td valign="top">
+<a href="index.html"><img src="sqlite.gif" border="none"></a></td>
+<td width="100%"></td>
+<td valign="bottom">
+<ul>
+<li><a href="http://www.sqlite.org/cvstrac/tktnew">bugs</a></li>
+<li><a href="changes.html">changes</a></li>
+<li><a href="contrib">contrib</a></li>
+<li><a href="download.html#cvs">cvs repository</a></li>
+<li><a href="docs.html">documentation</a></li>
+</ul>
+</td>
+<td width="10"></td>
+<td valign="bottom">
+<ul>
+<li><a href="download.html">download</a></li>
+<li><a href="faq.html">faq</a></li>
+<li><a href="index.html">home</a></li>
+<li><a href="support.html">mailing list</a></li>
+<li><a href="index.html">news</a></li>
+</ul>
+</td>
+<td width="10"></td>
+<td valign="bottom">
+<ul>
+<li><a href="quickstart.html">quick start</a></li>
+<li><a href="support.html">support</a></li>
+<li><a href="lang.html">syntax</a></li>
+<li><a href="http://www.sqlite.org/cvstrac/timeline">timeline</a></li>
+<li><a href="http://www.sqlite.org/cvstrac/wiki">wiki</a></li>
+</ul>
+</td>
+</tr></table>
+<table width="100%">
+<tr><td bgcolor="#80a796"></td></tr>
+</table>}
+}
+
+proc footer {{rcsid {}}} {
+ puts {
+<table width="100%">
+<tr><td bgcolor="#80a796"></td></tr>
+</table>}
+ set date [lrange $rcsid 3 4]
+ if {$date!=""} {
+ puts "<small><i>This page last modified on $date</i></small>"
+ }
+ puts {</body></html>}
+}
+
+
+# The following proc is used to ensure consistent formatting in the
+# HTML generated by lang.tcl and pragma.tcl.
+#
+proc Syntax {args} {
+ puts {<table cellpadding="10">}
+ foreach {rule body} $args {
+ puts "<tr><td align=\"right\" valign=\"top\">"
+ puts "<i><font color=\"#ff3434\">$rule</font></i> ::=</td>"
+ regsub -all < $body {%LT} body
+ regsub -all > $body {%GT} body
+ regsub -all %LT $body {</font></b><i><font color="#ff3434">} body
+ regsub -all %GT $body {</font></i><b><font color="#2c2cf0">} body
+ regsub -all {[]|[*?]} $body {</font></b>&<b><font color="#2c2cf0">} body
+ regsub -all "\n" [string trim $body] "<br>\n" body
+ regsub -all "\n *" $body "\n\\ \\ \\ \\ " body
+ regsub -all {[|,.*()]} $body {<big>&</big>} body
+ regsub -all { = } $body { <big>=</big> } body
+ regsub -all {STAR} $body {<big>*</big>} body
+ ## These metacharacters must be handled to undo being
+ ## treated as SQL punctuation characters above.
+ regsub -all {RPPLUS} $body {</font></b>)+<b><font color="#2c2cf0">} body
+ regsub -all {LP} $body {</font></b>(<b><font color="#2c2cf0">} body
+ regsub -all {RP} $body {</font></b>)<b><font color="#2c2cf0">} body
+ ## Place the left-hand side of the rule in the 2nd table column.
+ puts "<td><b><font color=\"#2c2cf0\">$body</font></b></td></tr>"
+ }
+ puts {</table>}
+}
Added: freeswitch/trunk/libs/sqlite/www/compile.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/compile.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,278 @@
+#
+# Run this Tcl script to generate the compile.html file.
+#
+set rcsid {$Id: compile.tcl,v 1.5 2005/03/19 15:10:45 drh Exp $ }
+source common.tcl
+header {Compilation Options For SQLite}
+
+puts {
+<h1>Compilation Options For SQLite</h1>
+
+<p>
+For most purposes, SQLite can be built just fine using the default
+compilation options. However, if required, the compile-time options
+documented below can be used to
+<a href="#omitfeatures">omit SQLite features</a> (resulting in
+a smaller compiled library size) or to change the
+<a href="#defaults">default values</a> of some parameters.
+</p>
+<p>
+Every effort has been made to ensure that the various combinations
+of compilation options work harmoniously and produce a working library.
+Nevertheless, it is strongly recommended that the SQLite test-suite
+be executed to check for errors before using an SQLite library built
+with non-standard compilation options.
+</p>
+<a name="defaults"></a>
+<h2>Options To Set Default Parameter Values</h2>
+
+<p><b>SQLITE_DEFAULT_AUTOVACUUM=<i><1 or 0></i></b><br>
+This macro determines if SQLite creates databases with the
+<a href="pragma.html#pragma_auto_vacuum">auto-vacuum</a>
+flag set by default. The default value is 0 (do not create auto-vacuum
+databases). In any case the compile-time default may be overridden by the
+"PRAGMA auto_vacuum" command.
+</p>
+
+<p><b>SQLITE_DEFAULT_CACHE_SIZE=<i><pages></i></b><br>
+This macro sets the default size of the page-cache for each attached
+database, in pages. This can be overridden by the "PRAGMA cache_size"
+command. The default value is 2000.
+</p>
+
+<p><b>SQLITE_DEFAULT_PAGE_SIZE=<i><bytes></i></b><br>
+This macro is used to set the default page-size used when a
+database is created. The value assigned must be a power of 2. The
+default value is 1024. The compile-time default may be overridden at
+runtime by the "PRAGMA page_size" command.
+</p>
+
+<p><b>SQLITE_DEFAULT_TEMP_CACHE_SIZE=<i><pages></i></b><br>
+This macro sets the default size of the page-cache for temporary files
+created by SQLite to store intermediate results, in pages. It does
+not affect the page-cache for the temp database, where tables created
+using "CREATE TEMP TABLE" are stored. The default value is 500.
+</p>
+
+<p><b>SQLITE_MAX_PAGE_SIZE=<i><bytes></i></b><br>
+This is used to set the maximum allowable page-size that can
+be specified by the "PRAGMA page_size" command. The default value
+is 8192.
+</p>
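+
+<p>For illustration only (a sketch assuming a library built with the stock
+settings described above), the compile-time defaults can be overridden at
+runtime with statements such as the following:</p>
+
+<blockquote><pre>
+PRAGMA auto_vacuum = 1;    -- typically effective only on a new, empty database
+PRAGMA cache_size = 4000;  -- overrides SQLITE_DEFAULT_CACHE_SIZE for this connection
+PRAGMA page_size = 4096;   -- must be a power of 2, no larger than SQLITE_MAX_PAGE_SIZE
+</pre></blockquote>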
+
+<a name="omitfeatures"></a>
+<h2>Options To Omit Features</h2>
+
+<p>The following options are used to reduce the size of the compiled
+library by omitting optional features. This is probably only useful
+in embedded systems where space is especially tight, as even with all
+features included the SQLite library is relatively small. Don't forget
+to tell your compiler to optimize for binary size! (the -Os option if
+using GCC).</p>
+
+<p>The macros in this section do not require values. The following
+compilation switches all have the same effect:<br>
+-DSQLITE_OMIT_ALTERTABLE<br>
+-DSQLITE_OMIT_ALTERTABLE=1<br>
+-DSQLITE_OMIT_ALTERTABLE=0
+</p>
+
+<p>If any of these options are defined, then the same set of SQLITE_OMIT_XXX
+options must also be defined when using the 'lemon' tool to generate a parse.c
+file. Because of this, these options may only be used when the library is built
+from source, not from the collection of pre-packaged C files provided for
+non-UNIX like platforms on the website.
+</p>
+
+<p><b>SQLITE_OMIT_ALTERTABLE</b><br>
+When this option is defined, the
+<a href="lang_altertable.html">ALTER TABLE</a> command is not included in the
+library. Executing an ALTER TABLE statement causes a parse error.
+</p>
+
+<p><b>SQLITE_OMIT_AUTHORIZATION</b><br>
+Defining this option omits the authorization callback feature from the
+library. The <a href="capi3ref.html#sqlite3_set_authorizer">
+sqlite3_set_authorizer()</a> API function is not present in the library.
+</p>
+
+<p><b>SQLITE_OMIT_AUTOVACUUM</b><br>
+If this option is defined, the library cannot create or write to
+databases that support
+<a href="pragma.html#pragma_auto_vacuum">auto-vacuum</a>. Executing a
+"PRAGMA auto_vacuum" statement is not an error, but does not return a value
+or modify the auto-vacuum flag in the database file. If a database that
+supports auto-vacuum is opened by a library compiled with this option, it
+is automatically opened in read-only mode.
+</p>
+
+<p><b>SQLITE_OMIT_AUTOINCREMENT</b><br>
+This option is used to omit the AUTOINCREMENT functionality. When this
+macro is defined, columns declared as "INTEGER PRIMARY KEY AUTOINCREMENT"
+behave in the same way as columns declared as "INTEGER PRIMARY KEY" when a
+NULL is inserted. The sqlite_sequence system table is neither created, nor
+respected if it already exists.
+</p>
+<p><i>TODO: Need a link here - AUTOINCREMENT is not yet documented</i></p>
+
+<p><b>SQLITE_OMIT_BLOB_LITERAL</b><br>
+When this option is defined, it is not possible to specify a blob in
+an SQL statement using the X'ABCD' syntax.</p>
+}
+#<p>WARNING: The VACUUM command depends on this syntax for vacuuming databases
+#that contain blobs, so disabling this functionality may render a database
+#unvacuumable.
+#</p>
+#<p><i>TODO: Need a link here - is that syntax documented anywhere?</i><p>
+puts {
+
+<p><b>SQLITE_OMIT_COMPLETE</b><br>
+This option causes the <a href="capi3ref.html#sqlite3_complete">
+sqlite3_complete</a> API to be omitted.
+</p>
+
+<p><b>SQLITE_OMIT_COMPOUND_SELECT</b><br>
+This option is used to omit the compound SELECT functionality.
+<a href="lang_select.html">SELECT statements</a> that use the
+UNION, UNION ALL, INTERSECT or EXCEPT compound SELECT operators will
+cause a parse error.
+</p>
+
+<p><b>SQLITE_OMIT_CONFLICT_CLAUSE</b><br>
+In the future, this option will be used to omit the
+<a href="lang_conflict.html">ON CONFLICT</a> clause from the library.
+</p>
+
+<p><b>SQLITE_OMIT_DATETIME_FUNCS</b><br>
+If this option is defined, SQLite's built-in date and time manipulation
+functions are omitted. Specifically, the SQL functions julianday(), date(),
+time(), datetime() and strftime() are not available. The default column
+values CURRENT_TIME, CURRENT_DATE and CURRENT_TIMESTAMP are still available.
+</p>
+
+<p><b>SQLITE_OMIT_EXPLAIN</b><br>
+Defining this option causes the EXPLAIN command to be omitted from the
+library. Attempting to execute an EXPLAIN statement will cause a parse
+error.
+</p>
+
+<p><b>SQLITE_OMIT_FLOATING_POINT</b><br>
+This option is used to omit floating-point number support from the SQLite
+library. When this option is defined, specifying a floating point number as a
+literal (e.g. "1.01") results in a parse error.
+</p>
+<p>In the future, this option may also disable other floating point
+functionality, for example the sqlite3_result_double(),
+sqlite3_bind_double(), sqlite3_value_double() and sqlite3_column_double()
+API functions.
+</p>
+
+<p><b>SQLITE_OMIT_FOREIGN_KEY</b><br>
+If this option is defined, FOREIGN KEY clauses in column declarations are
+ignored.
+</p>
+
+<p><b>SQLITE_OMIT_INTEGRITY_CHECK</b><br>
+This option may be used to omit the
+<a href="pragma.html#pragma_integrity_check">"PRAGMA integrity_check"</a>
+command from the compiled library.
+</p>
+
+<p><b>SQLITE_OMIT_MEMORYDB</b><br>
+When this is defined, the library does not respect the special database
+name ":memory:" (normally used to create an in-memory database). If
+":memory:" is passed to sqlite3_open(), a file with this name will be
+opened or created.
+</p>
+
+<p><b>SQLITE_OMIT_PAGER_PRAGMAS</b><br>
+Defining this option omits pragmas related to the pager subsystem from
+the build. Currently, the
+<a href="pragma.html#pragma_default_cache_size">default_cache_size</a> and
+<a href="pragma.html#pragma_cache_size">cache_size</a> pragmas are omitted.
+</p>
+
+<p><b>SQLITE_OMIT_PRAGMA</b><br>
+This option is used to omit the <a href="pragma.html">PRAGMA command</a>
+from the library. Note that it is useful to define the macros that omit
+specific pragmas in addition to this, as they may also remove supporting code
+in other sub-systems. This macro removes the PRAGMA command only.
+</p>
+
+<p><b>SQLITE_OMIT_PROGRESS_CALLBACK</b><br>
+This option may be defined to omit the capability to issue "progress"
+callbacks during long-running SQL statements. The
+<a href="capi3ref.html#sqlite3_progress_handler">sqlite3_progress_handler()</a>
+API function is not present in the library.</p>
+
+<p><b>SQLITE_OMIT_REINDEX</b><br>
+When this option is defined, the <a href="lang_reindex.html">REINDEX</a>
+command is not included in the library. Executing a REINDEX statement causes
+a parse error.
+</p>
+
+<p><b>SQLITE_OMIT_SCHEMA_PRAGMAS</b><br>
+Defining this option omits pragmas for querying the database schema from
+the build. Currently, the
+<a href="pragma.html#pragma_table_info">table_info</a>,
+<a href="pragma.html#pragma_index_info">index_info</a>,
+<a href="pragma.html#pragma_index_list">index_list</a> and
+<a href="pragma.html#pragma_database_list">database_list</a>
+pragmas are omitted.
+</p>
+
+<p><b>SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS</b><br>
+Defining this option omits pragmas for querying and modifying the
+database schema version and user version from the build. Specifically, the
+<a href="pragma.html#pragma_schema_version">schema_version</a> and
+<a href="pragma.html#pragma_user_version">user_version</a>
+pragmas are omitted.</p>
+
+<p><b>SQLITE_OMIT_SUBQUERY</b><br>
+If defined, support for sub-selects and the IN() operator is omitted.
+</p>
+
+<p><b>SQLITE_OMIT_TCL_VARIABLE</b><br>
+If this macro is defined, then the special "$<variable-name>" syntax
+used to automatically bind SQL variables to TCL variables is omitted.
+</p>
+
+<p><b>SQLITE_OMIT_TRIGGER</b><br>
+Defining this option omits support for TRIGGER objects. Neither the
+<a href="lang_createtrigger.html">CREATE TRIGGER</a> nor
+<a href="lang_droptrigger.html">DROP TRIGGER</a>
+command is available in this case; attempting to execute either results
+in a parse error.
+</p>
+<p>
+WARNING: If this macro is defined, it will not be possible to open a database
+for which the schema contains TRIGGER objects.
+</p>
+
+<p><b>SQLITE_OMIT_UTF16</b><br>
+This macro is used to omit support for UTF16 text encoding. When this is
+defined all API functions that return or accept UTF16 encoded text are
+unavailable. These functions can be identified by the fact that they end
+with '16', for example sqlite3_prepare16(), sqlite3_column_text16() and
+sqlite3_bind_text16().
+</p>
+
+<p><b>SQLITE_OMIT_VACUUM</b><br>
+When this option is defined, the <a href="lang_vacuum.html">VACUUM</a>
+command is not included in the library. Executing a VACUUM statement causes
+a parse error.
+</p>
+
+<p><b>SQLITE_OMIT_VIEW</b><br>
+Defining this option omits support for VIEW objects. Neither the
+<a href="lang_createview.html">CREATE VIEW</a> or
+<a href="lang_dropview.html">DROP VIEW</a>
+commands are available in this case, attempting to execute either will result
+in a parse error.
+</p>
+<p>
+WARNING: If this macro is defined, it will not be possible to open a database
+for which the schema contains VIEW objects.
+</p>
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/conflict.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/conflict.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,91 @@
+#
+# Run this Tcl script to generate the constraint.html file.
+#
+set rcsid {$Id: conflict.tcl,v 1.4 2004/10/10 17:24:55 drh Exp $ }
+source common.tcl
+header {Constraint Conflict Resolution in SQLite}
+puts {
+<h1>Constraint Conflict Resolution in SQLite</h1>
+
+<p>
+In most SQL databases, if you have a UNIQUE constraint on
+a table and you try to do an UPDATE or INSERT that violates
+the constraint, the database will abort the operation in
+progress, back out any prior changes associated with
+the UPDATE or INSERT command, and return an error.
+This is the default behavior of SQLite.
+Beginning with version 2.3.0, though, SQLite allows you to
+define alternative ways for dealing with constraint violations.
+This article describes those alternatives and how to use them.
+</p>
+
+<h2>Conflict Resolution Algorithms</h2>
+
+<p>
+SQLite defines five constraint conflict resolution algorithms
+as follows:
+</p>
+
+<dl>
+<dt><b>ROLLBACK</b></dt>
+<dd><p>When a constraint violation occurs, an immediate ROLLBACK
+occurs, thus ending the current transaction, and the command aborts
+with a return code of SQLITE_CONSTRAINT. If no transaction is
+active (other than the implied transaction that is created on every
+command) then this algorithm works the same as ABORT.</p></dd>
+
+<dt><b>ABORT</b></dt>
+<dd><p>When a constraint violation occurs, the command backs out
+any prior changes it might have made and aborts with a return code
+of SQLITE_CONSTRAINT. But no ROLLBACK is executed so changes
+from prior commands within the same transaction
+are preserved. This is the default behavior for SQLite.</p></dd>
+
+<dt><b>FAIL</b></dt>
+<dd><p>When a constraint violation occurs, the command aborts with a
+return code SQLITE_CONSTRAINT. But any changes to the database that
+the command made prior to encountering the constraint violation
+are preserved and are not backed out. For example, if an UPDATE
+statement encounters a constraint violation on the 100th row that
+it attempts to update, then the first 99 row changes are preserved
+but changes to rows 100 and beyond never occur.</p></dd>
+
+<dt><b>IGNORE</b></dt>
+<dd><p>When a constraint violation occurs, the one row that contains
+the constraint violation is not inserted or changed. But the command
+continues executing normally. Other rows before and after the row that
+contained the constraint violation continue to be inserted or updated
+normally. No error is returned.</p></dd>
+
+<dt><b>REPLACE</b></dt>
+<dd><p>When a UNIQUE constraint violation occurs, the pre-existing row
+that caused the constraint violation is removed prior to inserting
+or updating the current row. Thus the insert or update always occurs.
+The command continues executing normally. No error is returned.</p></dd>
+</dl>
+
+<h2>Why So Many Choices?</h2>
+
+<p>SQLite provides multiple conflict resolution algorithms for a
+couple of reasons. First, SQLite tries to be roughly compatible with as
+many other SQL databases as possible, but different SQL database
+engines exhibit different conflict resolution strategies. For
+example, PostgreSQL always uses ROLLBACK, Oracle always uses ABORT, and
+MySQL usually uses FAIL but can be instructed to use IGNORE or REPLACE.
+By supporting all five alternatives, SQLite provides maximum
+portability.</p>
+
+<p>Another reason for supporting multiple algorithms is that sometimes
+it is useful to use an algorithm other than the default.
+Suppose, for example, you are
+inserting 1000 records into a database, all within a single
+transaction, but one of those records is malformed and causes
+a constraint error. Under PostgreSQL or Oracle, none of the
+1000 records would get inserted. In MySQL, some subset of the
+records that appeared before the malformed record would be inserted
+but the rest would not. Neither behavior is especially helpful.
+What you really want is to use the IGNORE algorithm to insert
+all but the malformed record.</p>
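+
+<p>As a rough sketch (the table and values here are hypothetical, and the exact
+syntax is described in the ON CONFLICT documentation), the conflict algorithms
+are selected either in the schema or on the command itself:</p>
+
+<blockquote><pre>
+-- IGNORE chosen as the default algorithm for this UNIQUE constraint:
+CREATE TABLE t1(a INTEGER PRIMARY KEY, b TEXT UNIQUE ON CONFLICT IGNORE);
+
+INSERT INTO t1(b) VALUES('alpha');
+INSERT INTO t1(b) VALUES('alpha');     -- duplicate is silently dropped, no error
+
+-- The algorithm named on the command overrides the schema default:
+INSERT OR REPLACE INTO t1(a,b) VALUES(1,'beta');   -- replaces the existing row 1
+</pre></blockquote>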
+
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/copyright-release.html
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/copyright-release.html Tue Dec 19 15:11:50 2006
@@ -0,0 +1,109 @@
+<html>
+<body bgcolor="white">
+<h1 align="center">
+Copyright Release for<br>
+Contributions To SQLite
+</h1>
+
+<p>
+SQLite is software that implements an embeddable SQL database engine.
+SQLite is available for free download from http://www.sqlite.org/.
+The principal author and maintainer of SQLite has disclaimed all
+copyright interest in his contributions to SQLite
+and thus released his contributions into the public domain.
+In order to keep the SQLite software unencumbered by copyright
+claims, the principal author asks others who may from time to
+time contribute changes and enhancements to likewise disclaim
+their own individual copyright interest.
+</p>
+
+<p>
+Because the SQLite software found at http://www.sqlite.org/ is in the
+public domain, anyone is free to download the SQLite software
+from that website, make changes to the software, use, distribute,
+or sell the modified software, under either the original name or
+under some new name, without any need to obtain permission, pay
+royalties, acknowledge the original source of the software, or
+in any other way compensate, identify, or notify the original authors.
+Nobody is in any way compelled to contribute their SQLite changes and
+enhancements back to the SQLite website. This document concerns
+only changes and enhancements to SQLite that are intentionally and
+deliberately contributed back to the SQLite website.
+</p>
+
+<p>
+For the purposes of this document, "SQLite software" shall mean any
+computer source code, documentation, makefiles, test scripts, or
+other information that is published on the SQLite website,
+http://www.sqlite.org/. Precompiled binaries are excluded from
+the definition of "SQLite software" in this document because the
+process of compiling the software may introduce information from
+outside sources which is not properly a part of SQLite.
+</p>
+
+<p>
+The header comments on the SQLite source files exhort the reader to
+share freely and to never take more than one gives.
+In the spirit of that exhortation I make the following declarations:
+</p>
+
+<ol>
+<li><p>
+I dedicate to the public domain
+any and all copyright interest in the SQLite software that
+was publicly available on the SQLite website (http://www.sqlite.org/) prior
+to the date of the signature below and any changes or enhancements to
+the SQLite software
+that I may cause to be published on that website in the future.
+I make this dedication for the benefit of the public at large and
+to the detriment of my heirs and successors. I intend this
+dedication to be an overt act of relinquishment in perpetuity of
+all present and future rights to the SQLite software under copyright
+law.
+</p></li>
+
+<li><p>
+To the best of my knowledge and belief, the changes and enhancements that
+I have contributed to SQLite are either originally written by me
+or are derived from prior works which I have verified are also
+in the public domain and are not subject to claims of copyright
+by other parties.
+</p></li>
+
+<li><p>
+To the best of my knowledge and belief, no individual, business, organization,
+government, or other entity has any copyright interest
+in the SQLite software as it existed on the
+SQLite website as of the date on the signature line below.
+</p></li>
+
+<li><p>
+I agree never to publish any additional information
+to the SQLite website (by CVS, email, scp, FTP, or any other means) unless
+that information is an original work of authorship by me or is derived from
+prior published versions of SQLite.
+I agree never to copy and paste code into the SQLite code base from
+other sources.
+I agree never to publish on the SQLite website any information that
+would violate a law or breach a contract.
+</p></li>
+</ol>
+
+<p>
+<table width="100%" cellpadding="0" cellspacing="0">
+<tr>
+<td width="60%" valign="top">
+Signature:
+<p> </p>
+<p> </p>
+<p> </p>
+</td><td valign="top" align="left">
+Date:
+</td></tr>
+<td colspan=2>
+Name (printed):
+</td>
+</tr>
+</table>
+</body>
+</html>
Added: freeswitch/trunk/libs/sqlite/www/copyright-release.pdf
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/copyright.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/copyright.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,59 @@
+set rcsid {$Id: copyright.tcl,v 1.2 2006/05/03 23:39:37 drh Exp $}
+source common.tcl
+header {SQLite Copyright}
+puts {
+<h2>SQLite Copyright</h2>
+
+<p>
+The original author of SQLite has dedicated the code to the public domain.
+Anyone is free to copy, modify, publish, use, compile, sell, or distribute
+the original SQLite code, either in source code form or as a compiled binary,
+for any purpose, commercial or non-commercial, and by any means.
+</p>
+
+<h2>Contributed Code</h2>
+
+<p>
+In order to keep SQLite completely free and unencumbered by copyright,
+other contributors to the SQLite code base are asked to likewise dedicate
+their contributions to the public domain.
+If you want to send a patch or enhancement for possible inclusion in the
+SQLite source tree, please accompany the patch with the following statement:
+</p>
+
+<blockquote><i>
+The author or authors of this code dedicate any and all copyright interest
+in this code to the public domain. We make this dedication for the benefit
+of the public at large and to the detriment of our heirs and successors.
+We intend this dedication to be an overt act of relinquishment in
+perpetuity of all present and future rights to this code under copyright law.
+</i></blockquote>
+
+<p>
+Regrettably, as of 2003 October 20,
+we will no longer be able to accept patches or changes to
+SQLite that are not accompanied by a statement such as the above.
+In addition, if you make
+changes or enhancements as an employee, then a simple statement such as the
+above is insufficient. You must also send by surface mail a copyright release
+signed by a company officer.
+A signed original of the copyright release should be mailed to:</p>
+
+<blockquote>
+Hwaci<br>
+6200 Maple Cove Lane<br>
+Charlotte, NC 28269<br>
+USA
+</blockquote>
+
+<p>
+A template copyright release is available
+in <a href="copyright-release.pdf">PDF</a> or
+<a href="copyright-release.html">HTML</a>.
+You can use this release to make future changes. If you have contributed
+changes or enhancements to SQLite in the past, and have not already done
+so, you are invited to complete and sign a copy of the template and mail
+it to the address above.
+</p>
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/datatype3.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/datatype3.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,432 @@
+set rcsid {$Id: datatype3.tcl,v 1.14 2006/05/23 23:22:29 drh Exp $}
+source common.tcl
+header {Datatypes In SQLite Version 3}
+puts {
+<h2>Datatypes In SQLite Version 3</h2>
+
+<h3>1. Storage Classes</h3>
+
+<P>Version 2 of SQLite stores all column values as ASCII text.
+Version 3 enhances this by providing the ability to store integer and
+real numbers in a more compact format and the capability to store
+BLOB data.</P>
+
+<P>Each value stored in an SQLite database (or manipulated by the
+database engine) has one of the following storage classes:</P>
+<UL>
+ <LI><P><B>NULL</B>. The value is a NULL value.</P>
+ <LI><P><B>INTEGER</B>. The value is a signed integer, stored in 1,
+ 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value.</P>
+ <LI><P><B>REAL</B>. The value is a floating point value, stored as
+ an 8-byte IEEE floating point number.</P>
+ <LI><P><B>TEXT</B>. The value is a text string, stored using the
+ database encoding (UTF-8, UTF-16BE or UTF-16LE).</P>
+ <LI><P><B>BLOB</B>. The value is a blob of data, stored exactly as
+ it was input.</P>
+</UL>
+
+<P>As in SQLite version 2, any column in a version 3 database except an INTEGER
+PRIMARY KEY may be used to store any type of value. The exception to
+this rule is described below under 'Strict Affinity Mode'.</P>
+
+<P>All values supplied to SQLite, whether as literals embedded in SQL
+statements or values bound to pre-compiled SQL statements
+are assigned a storage class before the SQL statement is executed.
+Under circumstances described below, the
+database engine may convert values between numeric storage classes
+(INTEGER and REAL) and TEXT during query execution.
+</P>
+
+<P>Storage classes are initially assigned as follows:</P>
+<UL>
+ <LI><P>Values specified as literals as part of SQL statements are
+ assigned storage class TEXT if they are enclosed by single or double
+ quotes, INTEGER if the literal is specified as an unquoted number
+ with no decimal point or exponent, REAL if the literal is an
+ unquoted number with a decimal point or exponent and NULL if the
+ value is a NULL. Literals with storage class BLOB are specified
+ using the X'ABCD' notation.</P>
+ <LI><P>Values supplied using the sqlite3_bind_* APIs are assigned
+ the storage class that most closely matches the native type bound
+ (i.e. sqlite3_bind_blob() binds a value with storage class BLOB).</P>
+</UL>
+<P>The storage class of a value that is the result of an SQL scalar
+operator depends on the outermost operator of the expression.
+User-defined functions may return values with any storage class. It
+is not generally possible to determine the storage class of the
+result of an expression at compile time.</P>
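+
+<p>A quick way to observe the storage class that SQLite assigns to a literal is
+the built-in <b>typeof()</b> SQL function. The following is only a sketch of a
+shell session; the values are arbitrary:</p>
+
+<blockquote>
+<PRE>SELECT typeof(500), typeof(500.0), typeof('500'), typeof(X'0500'), typeof(NULL);
+integer|real|text|blob|null
+</PRE>
+</blockquote>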
+
+<a name="affinity">
+<h3>2. Column Affinity</h3>
+
+<p>
+In SQLite version 3, the type of a value is associated with the value
+itself, not with the column or variable in which the value is stored.
+(This is sometimes called
+<a href="http://www.cliki.net/manifest%20type%20system">
+manifest typing</a>.)
+All other SQL database engines that we are aware of use the more
+restrictive system of static typing where the type is associated with
+the container, not the value.
+</p>
+
+<p>
+In order to maximize compatibility between SQLite and other database
+engines, SQLite supports the concept of "type affinity" on columns.
+The type affinity of a column is the recommended type for data stored
+in that column. The key here is that the type is recommended, not
+required. Any column can still store any type of data, in theory.
+It is just that some columns, given the choice, will prefer to use
+one storage class over another. The preferred storage class for
+a column is called its "affinity".
+</p>
+
+<P>Each column in an SQLite 3 database is assigned one of the
+following type affinities:</P>
+<UL>
+ <LI>TEXT</LI>
+ <LI>NUMERIC</LI>
+ <LI>INTEGER</LI>
+ <LI>REAL</li>
+ <LI>NONE</LI>
+</UL>
+
+<P>A column with TEXT affinity stores all data using storage classes
+NULL, TEXT or BLOB. If numerical data is inserted into a column with
+TEXT affinity it is converted to text form before being stored.</P>
+
+<P>A column with NUMERIC affinity may contain values using all five
+storage classes. When text data is inserted into a NUMERIC column, an
+attempt is made to convert it to an integer or real number before it
+is stored. If the conversion is successful, then the value is stored
+using the INTEGER or REAL storage class. If the conversion cannot be
+performed the value is stored using the TEXT storage class. No
+attempt is made to convert NULL or blob values.</P>
+
+<P>A column that uses INTEGER affinity behaves in the same way as a
+column with NUMERIC affinity, except that if a real value with no
+floating point component (or text value that converts to such) is
+inserted it is converted to an integer and stored using the INTEGER
+storage class.</P>
+
+<P>A column with REAL affinity behaves like a column with NUMERIC
+affinity except that it forces integer values into floating point
+representation. (As an optimization, integer values are stored on
+disk as integers in order to take up less space and are only converted
+to floating point as the value is read out of the table.)</P>
+
+<P>A column with affinity NONE does not prefer one storage class over
+another. It makes no attempt to coerce data before
+it is inserted.</P>
+
+<h4>2.1 Determination Of Column Affinity</h4>
+
+<P>The type affinity of a column is determined by the declared type
+of the column, according to the following rules:</P>
+<OL>
+ <LI><P>If the datatype contains the string "INT" then it
+ is assigned INTEGER affinity.</P>
+
+ <LI><P>If the datatype of the column contains any of the strings
+ "CHAR", "CLOB", or "TEXT" then that
+ column has TEXT affinity. Notice that the type VARCHAR contains the
+ string "CHAR" and is thus assigned TEXT affinity.</P>
+
+ <LI><P>If the datatype for a column
+ contains the string "BLOB" or if
+ no datatype is specified then the column has affinity NONE.</P>
+
+ <LI><P>If the datatype for a column
+ contains any of the strings "REAL", "FLOA",
+ or "DOUB" then the column has REAL affinity</P>
+
+ <LI><P>Otherwise, the affinity is NUMERIC.</P>
+</OL>
+
+<P>If a table is created using a "CREATE TABLE <table> AS
+SELECT..." statement, then all columns have no datatype specified
+and they are given no affinity.</P>
+
+<h4>2.2 Column Affinity Example</h4>
+
+<blockquote>
+<PRE>CREATE TABLE t1(
+ t TEXT,
+ nu NUMERIC,
+ i INTEGER,
+ no BLOB
+);
+
+-- Storage classes for the following row:
+-- TEXT, REAL, INTEGER, TEXT
+INSERT INTO t1 VALUES('500.0', '500.0', '500.0', '500.0');
+
+-- Storage classes for the following row:
+-- TEXT, REAL, INTEGER, REAL
+INSERT INTO t1 VALUES(500.0, 500.0, 500.0, 500.0);
+</PRE>
+</blockquote>
+
+<h3>3. Comparison Expressions</h3>
+
+<P>Like SQLite version 2, version 3
+features the binary comparison operators '=',
+'<', '<=', '>=' and '!=', an operation to test for set
+membership, 'IN', and the ternary comparison operator 'BETWEEN'.</P>
+<P>The results of a comparison depend on the storage classes of the
+two values being compared, according to the following rules:</P>
+<UL>
+ <LI><P>A value with storage class NULL is considered less than any
+ other value (including another value with storage class NULL).</P>
+
+ <LI><P>An INTEGER or REAL value is less than any TEXT or BLOB value.
+ When an INTEGER or REAL is compared to another INTEGER or REAL, a
+ numerical comparison is performed.</P>
+
+ <LI><P>A TEXT value is less than a BLOB value. When two TEXT values
+ are compared, the C library function memcmp() is usually used to
+ determine the result. However this can be overridden, as described
+ under 'User-defined collation Sequences' below.</P>
+
+ <LI><P>When two BLOB values are compared, the result is always
+ determined using memcmp().</P>
+</UL>
+
+<P>SQLite may attempt to convert values between the numeric storage
+classes (INTEGER and REAL) and TEXT before performing a comparison.
+For binary comparisons, this is done in the cases enumerated below.
+The term "expression" used in the bullet points below means any
+SQL scalar expression or literal other than a column value.</P>
+<UL>
+ <LI><P>When a column value is compared to the result of an
+ expression, the affinity of the column is applied to the result of
+ the expression before the comparison takes place.</P>
+
+ <LI><P>When two column values are compared, if one column has
+ INTEGER or NUMERIC affinity and the other does not, the NUMERIC
+ affinity is applied to any values with storage class TEXT extracted
+ from the non-NUMERIC column.</P>
+
+ <LI><P>When the results of two expressions are compared, no
+ conversions occur. The results are compared as is. If a string
+ is compared to a number, the number will always be less than the
+ string.</P>
+</UL>
+
+<P>
+In SQLite, the expression "a BETWEEN b AND c" is equivalent to "a >= b
+AND a <= c", even if this means that different affinities are applied to
+'a' in each of the comparisons required to evaluate the expression.
+</P>
+
+<P>Expressions of the type "a IN (SELECT b ....)" are handled by the three
+rules enumerated above for binary comparisons (e.g. in a
+similar manner to "a = b"). For example if 'b' is a column value
+and 'a' is an expression, then the affinity of 'b' is applied to 'a'
+before any comparisons take place.</P>
+
+<P>SQLite treats the expression "a IN (x, y, z)" as equivalent to "a = x OR
+a = y OR a = z".
+</P>
+
+<h4>3.1 Comparison Example</h4>
+
+<blockquote>
+<PRE>
+CREATE TABLE t1(
+ a TEXT,
+ b NUMERIC,
+ c BLOB
+);
+
+-- Storage classes for the following row:
+-- TEXT, REAL, TEXT
+INSERT INTO t1 VALUES('500', '500', '500');
+
+-- 60 and 40 are converted to '60' and '40' and values are compared as TEXT.
+SELECT a < 60, a < 40 FROM t1;
+1|0
+
+-- Comparisons are numeric. No conversions are required.
+SELECT b < 60, b < 600 FROM t1;
+0|1
+
+-- Both 60 and 600 (storage class INTEGER) are less than '500'
+-- (storage class TEXT).
+SELECT c < 60, c < 600 FROM t1;
+0|0
+</PRE>
+</blockquote>
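+
+<P>As a further illustration of the BETWEEN and IN rules above, the
+following sketch reuses table t1; the rewritten forms shown in the
+comments are simply the equivalences described in this section:</P>
+
+<blockquote>
+<PRE>
+-- "a BETWEEN 40 AND 60" is evaluated as "a >= 40 AND a <= 60";
+-- the TEXT affinity of column a is applied to 40 and 60 in each comparison.
+SELECT a BETWEEN 40 AND 60 FROM t1;
+
+-- "b IN (60, 600)" is handled like "b = 60 OR b = 600";
+-- the affinity of column b is applied to each member of the list.
+SELECT b IN (60, 600) FROM t1;
+</PRE>
+</blockquote>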
+<h3>4. Operators</h3>
+
+<P>All mathematical operators (which is to say, all operators other
+than the concatenation operator "||") apply NUMERIC
+affinity to all operands prior to being carried out. If one or both
+operands cannot be converted to NUMERIC then the result of the
+operation is NULL.</P>
+
+<P>For the concatenation operator, TEXT affinity is applied to both
+operands. If either operand cannot be converted to TEXT (because it
+is NULL or a BLOB) then the result of the concatenation is NULL.</P>
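+
+<P>A brief illustrative sketch of the operator rules above (the results
+noted in the comments follow from the rules in this section):</P>
+
+<blockquote>
+<PRE>
+-- NUMERIC affinity is applied to both operands; the result is 5.
+SELECT 2 + '3';
+
+-- TEXT affinity is applied to both operands; the result is '45'.
+SELECT 4 || 5;
+
+-- One operand of || is NULL, so the result of the concatenation is NULL.
+SELECT 'abc' || NULL;
+</PRE>
+</blockquote>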
+
+<h3>5. Sorting, Grouping and Compound SELECTs</h3>
+
+<P>When values are sorted by an ORDER BY clause, values with storage
+class NULL come first, followed by INTEGER and REAL values
+interspersed in numeric order, followed by TEXT values usually in
+memcmp() order, and finally BLOB values in memcmp() order. No storage
+class conversions occur before the sort.</P>
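+
+<P>For example (an illustrative sketch; table s1 is hypothetical and
+column x is declared with no datatype, so each value keeps its own
+storage class):</P>
+
+<blockquote>
+<PRE>
+CREATE TABLE s1(x);
+INSERT INTO s1 VALUES(NULL);
+INSERT INTO s1 VALUES('abc');
+INSERT INTO s1 VALUES(3.5);
+INSERT INTO s1 VALUES(2);
+INSERT INTO s1 VALUES(X'0102');
+
+-- Rows are returned in the order: NULL, 2, 3.5, 'abc', X'0102'
+SELECT x FROM s1 ORDER BY x;
+</PRE>
+</blockquote>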
+
+<P>When grouping values with the GROUP BY clause, values with
+different storage classes are considered distinct, except for INTEGER
+and REAL values, which are considered equal if they are numerically
+equal. No affinities are applied to any values as the result of a
+GROUP BY clause.</P>
+
+<P>The compound SELECT operators UNION,
+INTERSECT and EXCEPT perform implicit comparisons between values.
+Before these comparisons are performed an affinity may be applied to
+each value. The same affinity, if any, is applied to all values that
+may be returned in a single column of the compound SELECT result set.
+The affinity applied is the affinity of the column returned by the
+left-most component SELECT that has a column value (and not some
+other kind of expression) in that position. If for a given compound
+SELECT column none of the component SELECTs return a column value, no
+affinity is applied to the values from that column before they are
+compared.</P>
+
+<h3>6. Other Affinity Modes</h3>
+
+<P>The above sections describe the operation of the database engine
+in 'normal' affinity mode. SQLite version 3 will feature two other affinity
+modes, as follows:</P>
+<UL>
+ <LI><P><B>Strict affinity</B> mode. In this mode if a conversion
+ between storage classes is ever required, the database engine
+ returns an error and the current statement is rolled back.</P>
+
+ <LI><P><B>No affinity</B> mode. In this mode no conversions between
+ storage classes are ever performed. Comparisons between values of
+ different storage classes (except for INTEGER and REAL) are always
+ false.</P>
+</UL>
+
+<a name="collation"></a>
+<h3>7. User-defined Collation Sequences</h3>
+
+<p>
+By default, when SQLite compares two text values, the result of the
+comparison is determined using memcmp(), regardless of the encoding of the
+string. SQLite v3 provides the ability for users to supply arbitrary
+comparison functions, known as user-defined collation sequences, to be used
+instead of memcmp().
+</p>
+<p>
+Aside from the default collation sequence BINARY, implemented using
+memcmp(), SQLite features one extra built-in collation sequence
+intended for testing purposes, the NOCASE collation:
+</p>
+<UL>
+ <LI><b>BINARY</b> - Compares string data using memcmp(), regardless
+ of text encoding.</LI>
+ <LI><b>NOCASE</b> - The same as binary, except the 26 upper case
+ characters used by the English language are
+ folded to their lower case equivalents before
+ the comparison is performed.</LI>
+</UL>
+
+
+<h4>7.1 Assigning Collation Sequences from SQL</h4>
+
+<p>
+Each column of each table has a default collation type. If a collation type
+other than BINARY is required, a COLLATE clause is specified as part of the
+<a href="lang_createtable.html">column definition</a> to define it.
+</p>
+
+<p>
+Whenever two text values are compared by SQLite, a collation sequence is
+used to determine the results of the comparison according to the following
+rules. Sections 3 and 5 of this document describe the circumstances under
+which such a comparison takes place.
+</p>
+
+<p>
+For binary comparison operators (=, <, >, <= and >=) if either operand is a
+column, then the default collation type of the column determines the
+collation sequence to use for the comparison. If both operands are columns,
+then the collation type for the left operand determines the collation
+sequence used. If neither operand is a column, then the BINARY collation
+sequence is used.
+</p>
+
+<p>
+The expression "x BETWEEN y and z" is equivalent to "x >= y AND x <=
+z". The expression "x IN (SELECT y ...)" is handled in the same way as the
+expression "x = y" for the purposes of determining the collation sequence
+to use. The collation sequence used for expressions of the form "x IN (y, z
+...)" is the default collation type of x if x is a column, or BINARY
+otherwise.
+</p>
+
+<p>
+An <a href="lang_select.html">ORDER BY</a> clause that is part of a SELECT
+statement may be assigned a collation sequence to be used for the sort
+operation explicitly. In this case the explicit collation sequence is
+always used. Otherwise, if the expression sorted by an ORDER BY clause is
+a column, then the default collation type of the column is used to
+determine sort order. If the expression is not a column, then the BINARY
+collation sequence is used.
+</p>
+
+<h4>7.2 Collation Sequences Example</h4>
+<p>
+The examples below identify the collation sequences that would be used to
+determine the results of text comparisons that may be performed by various
+SQL statements. Note that a text comparison may not be required, and no
+collation sequence used, in the case of numeric, blob or NULL values.
+</p>
+<blockquote>
+<PRE>
+CREATE TABLE t1(
+ a, -- default collation type BINARY
+ b COLLATE BINARY, -- default collation type BINARY
+ c COLLATE REVERSE, -- default collation type REVERSE
+ d COLLATE NOCASE -- default collation type NOCASE
+);
+
+-- Text comparison is performed using the BINARY collation sequence.
+SELECT (a = b) FROM t1;
+
+-- Text comparison is performed using the NOCASE collation sequence.
+SELECT (d = a) FROM t1;
+
+-- Text comparison is performed using the BINARY collation sequence.
+SELECT (a = d) FROM t1;
+
+-- Text comparison is performed using the REVERSE collation sequence.
+SELECT ('abc' = c) FROM t1;
+
+-- Text comparison is performed using the REVERSE collation sequence.
+SELECT (c = 'abc') FROM t1;
+
+-- Grouping is performed using the NOCASE collation sequence (i.e. values
+-- 'abc' and 'ABC' are placed in the same group).
+SELECT count(*) FROM t1 GROUP BY d;
+
+-- Grouping is performed using the BINARY collation sequence.
+SELECT count(*) FROM t1 GROUP BY (d || '');
+
+-- Sorting is performed using the REVERSE collation sequence.
+SELECT * FROM t1 ORDER BY c;
+
+-- Sorting is performed using the BINARY collation sequence.
+SELECT * FROM t1 ORDER BY (c || '');
+
+-- Sorting is performed using the NOCASE collation sequence.
+SELECT * FROM t1 ORDER BY c COLLATE NOCASE;
+
+</PRE>
+</blockquote>
+
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/datatypes.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/datatypes.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,243 @@
+#
+# Run this script to generate a datatypes.html output file
+#
+set rcsid {$Id: datatypes.tcl,v 1.8 2004/10/10 17:24:55 drh Exp $}
+source common.tcl
+header {Datatypes In SQLite version 2}
+puts {
+<h2>Datatypes In SQLite Version 2</h2>
+
+<h3>1.0 Typelessness</h3>
+<p>
+SQLite is "typeless". This means that you can store any
+kind of data you want in any column of any table, regardless of the
+declared datatype of that column.
+(See the one exception to this rule in section 2.0 below.)
+This behavior is a feature, not
+a bug. A database is supposed to store and retrieve data and it
+should not matter to the database what format that data is in.
+The strong typing system found in most other SQL engines and
+codified in the SQL language spec is a misfeature -
+it is an example of the implementation showing through into the
+interface. SQLite seeks to overcome this misfeature by allowing
+you to store any kind of data into any kind of column and by
+allowing flexibility in the specification of datatypes.
+</p>
+
+<p>
+A datatype to SQLite is any sequence of zero or more names
+optionally followed by a parenthesized list of one or two
+signed integers. Notice in particular that a datatype may
+be <em>zero</em> or more names. That means that an empty
+string is a valid datatype as far as SQLite is concerned.
+So you can declare tables where the datatype of each column
+is left unspecified, like this:
+</p>
+
+<blockquote><pre>
+CREATE TABLE ex1(a,b,c);
+</pre></blockquote>
+
+<p>
+Even though SQLite allows the datatype to be omitted, it is
+still a good idea to include it in your CREATE TABLE statements,
+since the data type often serves as a good hint to other
+programmers about what you intend to put in the column. And
+if you ever port your code to another database engine, that
+other engine will probably require a datatype of some kind.
+SQLite accepts all the usual datatypes. For example:
+</p>
+
+<blockquote><pre>
+CREATE TABLE ex2(
+ a VARCHAR(10),
+ b NVARCHAR(15),
+ c TEXT,
+ d INTEGER,
+ e FLOAT,
+ f BOOLEAN,
+ g CLOB,
+ h BLOB,
+ i TIMESTAMP,
+ j NUMERIC(10,5),
+ k VARYING CHARACTER (24),
+ l NATIONAL VARYING CHARACTER(16)
+);
+</pre></blockquote>
+
+<p>
+And so forth. Basically any sequence of names optionally followed by
+one or two signed integers in parentheses will do.
+</p>
+
+<h3>2.0 The INTEGER PRIMARY KEY</h3>
+
+<p>
+One exception to the typelessness of SQLite is a column whose type
+is INTEGER PRIMARY KEY. (And you must use "INTEGER" not "INT".
+A column of type INT PRIMARY KEY is typeless just like any other.)
+INTEGER PRIMARY KEY columns must contain a 32-bit signed integer. Any
+attempt to insert non-integer data will result in an error.
+</p>
+
+<p>
+INTEGER PRIMARY KEY columns can be used to implement the equivalent
+of AUTOINCREMENT. If you try to insert a NULL into an INTEGER PRIMARY
+KEY column, the column will actually be filled with an integer that is
+one greater than the largest key already in the table. Or if the
+largest key is 2147483647, then the column will be filled with a
+random integer. Either way, the INTEGER PRIMARY KEY column will be
+assigned a unique integer. You can retrieve this integer using
+the <b>sqlite_last_insert_rowid()</b> API function or using the
+<b>last_insert_rowid()</b> SQL function in a subsequent SELECT statement.
+</p>
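+
+<p>
+For example (a minimal sketch; the table and values here are purely
+illustrative):
+</p>
+
+<blockquote><pre>
+CREATE TABLE logs(id INTEGER PRIMARY KEY, msg TEXT);
+INSERT INTO logs VALUES(NULL, 'first');    -- id is assigned 1
+INSERT INTO logs VALUES(NULL, 'second');   -- id is assigned 2
+SELECT last_insert_rowid();                -- returns 2
+</pre></blockquote>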
+
+<h3>3.0 Comparison and Sort Order</h3>
+
+<p>
+SQLite is typeless for the purpose of deciding what data is allowed
+to be stored in a column. But some notion of type comes into play
+when sorting and comparing data. For these purposes, a column or
+an expression can be one of two types: <b>numeric</b> and <b>text</b>.
+The sort or comparison may give different results depending on which
+type of data is being sorted or compared.
+</p>
+
+<p>
+If data is of type <b>text</b> then the comparison is determined by
+the standard C data comparison functions <b>memcmp()</b> or
+<b>strcmp()</b>. The comparison looks at bytes from two inputs one
+by one and returns the first non-zero difference.
+Strings are '\000' terminated so shorter
+strings sort before longer strings, as you would expect.
+</p>
+
+<p>
+For numeric data, this situation is more complex. If both inputs
+look like well-formed numbers, then they are converted
+into floating point values using <b>atof()</b> and compared numerically.
+If one input is not a well-formed number but the other is, then the
+number is considered to be less than the non-number. If neither input
+is a well-formed number, then <b>strcmp()</b> is used to do the
+comparison.
+</p>
+
+<p>
+Do not be confused by the fact that a column might have a "numeric"
+datatype. This does not mean that the column can contain only numbers.
+It merely means that if the column does contain a number, that number
+will sort in numerical order.
+</p>
+
+<p>
+For both text and numeric values, NULL sorts before any other value.
+A comparison of any value against NULL using operators like "<" or
+">=" is always false.
+</p>
+
+<h3>4.0 How SQLite Determines Datatypes</h3>
+
+<p>
+For SQLite version 2.6.3 and earlier, all values used the numeric datatype.
+The text datatype appears in version 2.7.0 and later. In the sequel it
+is assumed that you are using version 2.7.0 or later of SQLite.
+</p>
+
+<p>
+For an expression, the datatype of the result is often determined by
+the outermost operator. For example, arithmetic operators ("+", "*", "%")
+always return a numeric result. The string concatenation operator
+("||") returns a text result. And so forth. If you are ever in doubt
+about the datatype of an expression you can use the special <b>typeof()</b>
+SQL function to determine what the datatype is. For example:
+</p>
+
+<blockquote><pre>
+sqlite> SELECT typeof('abc'+123);
+numeric
+sqlite> SELECT typeof('abc'||123);
+text
+</pre></blockquote>
+
+<p>
+For table columns, the datatype is determined by the type declaration
+of the CREATE TABLE statement. The datatype is text if and only if
+the type declaration contains one or more of the following strings:
+</p>
+
+<blockquote>
+BLOB<br>
+CHAR<br>
+CLOB<br>
+TEXT
+</blockquote>
+
+<p>
+The search for these strings in the type declaration is case insensitive,
+of course. If any of the above strings occur anywhere in the type
+declaration, then the datatype of the column is text. Notice that
+the type "VARCHAR" contains "CHAR" as a substring so it is considered
+text.</p>
+
+<p>If none of the strings above occur anywhere in the type declaration,
+then the datatype is numeric. Note in particular that the datatype for columns
+with an empty type declaration is numeric.
+</p>
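+
+<p>
+As a quick sketch of these rules (the table below is hypothetical):
+</p>
+
+<blockquote><pre>
+CREATE TABLE ex3(
+  p VARCHAR(10),   -- declaration contains "CHAR": text datatype
+  q CLOB,          -- declaration contains "CLOB": text datatype
+  r,               -- empty type declaration: numeric datatype
+  s INTEGER        -- no text keyword present: numeric datatype
+);
+</pre></blockquote>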
+
+<h3>5.0 Examples</h3>
+
+<p>
+Consider the following two command sequences:
+</p>
+
+<blockquote><pre>
+CREATE TABLE t1(a INTEGER UNIQUE); CREATE TABLE t2(b TEXT UNIQUE);
+INSERT INTO t1 VALUES('0'); INSERT INTO t2 VALUES(0);
+INSERT INTO t1 VALUES('0.0'); INSERT INTO t2 VALUES(0.0);
+</pre></blockquote>
+
+<p>In the sequence on the left, the second insert will fail. In this case,
+the strings '0' and '0.0' are treated as numbers since they are being
+inserted into a numeric column but 0==0.0 which violates the uniqueness
+constraint. However, the second insert in the right-hand sequence works. In
+this case, the constants 0 and 0.0 are treated as strings, which means that
+they are distinct.</p>
+
+<p>SQLite always converts numbers into double-precision (64-bit) floats
+for comparison purposes. This means that a long sequence of digits that
+differ only in insignificant digits will compare equal if they
+are in a numeric column but will compare unequal if they are in a text
+column. We have:</p>
+
+<blockquote><pre>
+INSERT INTO t1 INSERT INTO t2
+ VALUES('12345678901234567890'); VALUES(12345678901234567890);
+INSERT INTO t1 INSERT INTO t2
+ VALUES('12345678901234567891'); VALUES(12345678901234567891);
+</pre></blockquote>
+
+<p>As before, the second insert on the left will fail because the comparison
+will convert both strings into floating-point numbers first and the only
+difference in the strings is in the 20th digit, which exceeds the resolution
+of a 64-bit float. In contrast, the second insert on the right will work
+because in that case, the numbers being inserted are strings and are
+compared using memcmp().</p>
+
+<p>
+Numeric and text types make a difference for the DISTINCT keyword too:
+</p>
+
+<blockquote><pre>
+CREATE TABLE t3(a INTEGER); CREATE TABLE t4(b TEXT);
+INSERT INTO t3 VALUES('0'); INSERT INTO t4 VALUES(0);
+INSERT INTO t3 VALUES('0.0'); INSERT INTO t4 VALUES(0.0);
+SELECT DISTINCT * FROM t3; SELECT DISTINCT * FROM t4;
+</pre></blockquote>
+
+<p>
+The SELECT statement on the left returns a single row since '0' and '0.0'
+are treated as numbers and are therefore indistinct. But the SELECT
+statement on the right returns two rows since 0 and 0.0 are treated
+as strings, which are different.</p>
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/different.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/different.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,220 @@
+set rcsid {$Id: different.tcl,v 1.7 2006/05/11 13:33:15 drh Exp $}
+source common.tcl
+header {Distinctive Features Of SQLite}
+puts {
+<p>
+This page highlights some of the characteristics of SQLite that are
+unusual and which make SQLite different from many other SQL
+database engines.
+</p>
+}
+proc feature {tag name text} {
+ puts "<a name=\"$tag\" />"
+ puts "<p><b>$name</b></p>\n"
+ puts "<blockquote>$text</blockquote>\n"
+}
+
+feature zeroconfig {Zero-Configuration} {
+ SQLite does not need to be "installed" before it is used.
+ There is no "setup" procedure. There is no
+ server process that needs to be started, stopped, or configured.
+ There is
+ no need for an administrator to create a new database instance or assign
+ access permissions to users.
+ SQLite uses no configuration files.
+ Nothing needs to be done to tell the system that SQLite is running.
+ No actions are required to recover after a system crash or power failure.
+ There is nothing to troubleshoot.
+ <p>
+ SQLite just works.
+ <p>
+ Other more familiar database engines run great once you get them going.
+ But doing the initial installation and configuration can be
+ intimidatingly complex.
+}
+
+feature serverless {Serverless} {
+ Most SQL database engines are implemented as a separate server
+ process. Programs that want to access the database communicate
+ with the server using some kind of interprocess communication
+ (typically TCP/IP) to send requests to the server and to receive
+ back results. SQLite does not work this way. With SQLite, the
+ process that wants to access the database reads and writes
+ directly from the database files on disk. There is no intermediary
+ server process.
+ <p>
+ There are advantages and disadvantages to being serverless. The
+ main advantage is that there is no separate server process
+ to install, setup, configure, initialize, manage, and troubleshoot.
+ This is one reason why SQLite is a "zero-configuration" database
+ engine. Programs that use SQLite require no administrative support
+ for setting up the database engine before they are run. Any program
+ that is able to access the disk is able to use an SQLite database.
+ <p>
+ On the other hand, a database engine that uses a server can provide
+ better protection from bugs in the client application - stray pointers
+ in a client cannot corrupt memory on the server. And because a server
+ is a single persistent process, it is able to control database access with
+ more precision, allowing for finer-grained locking and better concurrency.
+ <p>
+ Most SQL database engines are client/server based. Of those that are
+ serverless, SQLite is the only one that this author knows of that
+ allows multiple applications to access the same database at the same time.
+}
+
+feature onefile {Single Database File} {
+ An SQLite database is a single ordinary disk file that can be located
+ anywhere in the directory hierarchy. If SQLite can read
+ the disk file then it can read anything in the database. If the disk
+ file and its directory are writable, then SQLite can change anything
+ in the database. Database files can easily be copied onto a USB
+ memory stick or emailed for sharing.
+ <p>
+ Other SQL database engines tend to store data as a large collection of
+ files. Often these files are in a standard location that only the
+ database engine itself can access. This makes the data more secure,
+ but also makes it harder to access. Some SQL database engines provide
+ the option of writing directly to disk and bypassing the filesystem
+ altogether. This provides added performance, but at the cost of
+ considerable setup and maintenance complexity.
+}
+
+feature small {Compact} {
+ When optimized for size, the whole SQLite library with everything enabled
+ is less than 225KiB in size (as measured on an ix86 using the "size"
+ utility from the GNU compiler suite.) Unneeded features can be disabled
+ at compile-time to further reduce the size of the library to under
+ 170KiB if desired.
+ <p>
+ Most other SQL database engines are much larger than this. IBM boasts
+ that its recently released CloudScape database engine is "only" a 2MiB
+ jar file - 10 times larger than SQLite even after it is compressed!
+ Firebird boasts that its client-side library is only 350KiB. That's
+ 50% larger than SQLite and does not even contain the database engine.
+ The Berkeley DB library from Sleepycat is 450KiB and it omits SQL
+ support, providing the programmer with only simple key/value pairs.
+}
+
+feature typing {Manifest typing} {
+ Most SQL database engines use static typing. A datatype is associated
+ with each column in a table and only values of that particular datatype
+ are allowed to be stored in that column. SQLite relaxes this restriction
+ by using manifest typing.
+ In manifest typing, the datatype is a property of the value itself, not
+ of the column in which the value is stored.
+ SQLite thus allows the user to store
+ any value of any datatype into any column regardless of the declared type
+ of that column. (There are some exceptions to this rule: An INTEGER
+ PRIMARY KEY column may only store integers. And SQLite attempts to coerce
+ values into the declared datatype of the column when it can.)
+ <p>
+ The SQL language specification calls for static typing. So some people
+ feel that the use of manifest typing is a bug in SQLite. But the authors
+ of SQLite feel very strongly that this is a feature. The authors argue
+ that static typing is a bug in the SQL specification that SQLite has fixed
+ in a backwards compatible way.
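+ <p>
+ As a small illustrative sketch (the table is hypothetical; the behavior
+ follows the description above):
+ <blockquote><pre>
+CREATE TABLE m1(x INTEGER);
+INSERT INTO m1 VALUES(5);        -- stored as an integer
+INSERT INTO m1 VALUES('five');   -- stored as text, despite the INTEGER declaration
+</pre></blockquote>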
+}
+
+feature flex {Variable-length records} {
+ Most other SQL database engines allocate a fixed amount of disk space
+ for each row in most tables. They play special tricks for handling
+ BLOBs and CLOBs which can be of wildly varying length. But for most
+ tables, if you declare a column to be a VARCHAR(100) then the database
+ engine will allocate
+ 100 bytes of disk space regardless of how much information you actually
+ store in that column.
+ <p>
+ SQLite, in contrast, uses only the amount of disk space actually
+ needed to store the information in a row. If you store a single
+ character in a VARCHAR(100) column, then only a single byte of disk
+ space is consumed. (Actually two bytes - there is some overhead at
+ the beginning of each column to record its datatype and length.)
+ <p>
+ The use of variable-length records by SQLite has a number of advantages.
+ It results in smaller database files, obviously. It also makes the
+ database run faster, since there is less information to move to and from
+ disk. And, the use of variable-length records makes it possible for
+ SQLite to employ manifest typing instead of static typing.
+}
+
+feature readable {Readable source code} {
+ The source code to SQLite is designed to be readable and accessible to
+ the average programmer. All procedures and data structures and many
+ automatic variables are carefully commented with useful information about
+ what they do. Boilerplate commenting is omitted.
+}
+
+feature vdbe {SQL statements compile into virtual machine code} {
+ Every SQL database engine compiles each SQL statement into some kind of
+ internal data structure which is then used to carry out the work of the
+ statement. But in most SQL engines that internal data structure is a
+ complex web of interlinked structures and objects. In SQLite, the compiled
+ form of statements is a short program in a machine-language like
+ representation. Users of the database can view this
+ <a href="opcode.html">virtual machine language</a>
+ by prepending the <a href="lang_explain.html">EXPLAIN</a> keyword
+ to a query.
+ <p>
+ The use of a virtual machine in SQLite has been a great benefit to the
+ library's development. The virtual machine provides a crisp, well-defined
+ junction between the front-end of SQLite (the part that parses SQL
+ statements and generates virtual machine code) and the back-end (the
+ part that executes the virtual machine code and computes a result.)
+ The virtual machine allows the developers to see clearly and in an
+ easily readable form what SQLite is trying to do with each statement
+ it compiles, which is a tremendous help in debugging.
+ Depending on how it is compiled, SQLite also has the capability of
+ tracing the execution of the virtual machine - printing each
+ virtual machine instruction and its result as it executes.
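+ <p>
+ For example, one might run the following (ex1 is just an arbitrary
+ table name, and the opcode listing produced depends on the SQLite
+ version):
+ <blockquote><pre>
+EXPLAIN SELECT * FROM ex1;
+</pre></blockquote>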
+}
+
+#feature binding {Tight bindings to dynamic languages} {
+# Because it is embedded, SQLite can have a much tighter and more natural
+# binding to high-level dynamic languages such as Tcl, Perl, Python,
+# PHP, and Ruby.
+# For example,
+#}
+
+feature license {Public domain} {
+ The source code for SQLite is in the public domain. No claim of copyright
+ is made on any part of the core source code. (The documentation and test
+ code is a different matter - some sections of documentation and test logic
+ are governed by open-source licenses.) All contributors to the
+ SQLite core software have signed affidavits specifically disavowing any
+ copyright interest in the code. This means that anybody is able to legally
+ do anything they want with the SQLite source code.
+ <p>
+ There are other SQL database engines with liberal licenses that allow
+ the code to be broadly and freely used. But those other engines are
+ still governed by copyright law. SQLite is different in that copyright
+ law simply does not apply.
+ <p>
+ The source code files for other SQL database engines typically begin
+ with a comment describing your license rights to view and copy that file.
+ The SQLite source code contains no license since it is not governed by
+ copyright. Instead of a license, the SQLite source code offers a blessing:
+ <blockquote>
+ <i>May you do good and not evil<br>
+ May you find forgiveness for yourself and forgive others<br>
+ May you share freely, never taking more than you give.</i>
+ </blockquote>
+}
+
+feature extensions {SQL language extensions} {
+ SQLite provides a number of enhancements to the SQL language
+ not normally found in other database engines.
+ The EXPLAIN keyword and manifest typing have already been mentioned
+ above. SQLite also provides statements such as
+ <a href="lang_replace.html">REPLACE</a> and the
+ <a href="lang_conflict.html">ON CONFLICT</a> clause that allow for
+ added control over the resolution of constraint conflicts.
+ SQLite supports <a href="lang_attach.html">ATTACH</a> and
+ <a href="lang_detach.html">DETACH</a> commands that allow multiple
+ independent databases to be used together in the same query.
+ And SQLite defines APIs that allow the user to add new
+ <a href="capi3ref.html#sqlite3_create_function">SQL functions</a>
+ and <a href="capi3ref.html#sqlite3_create_collation">collating sequences</a>.
+}
+
+
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/direct1b.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/docs.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/docs.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,149 @@
+# This script generates the "docs.html" page that describes various
+# sources of documentation available for SQLite.
+#
+set rcsid {$Id: docs.tcl,v 1.14 2006/01/30 16:20:30 drh Exp $}
+source common.tcl
+header {SQLite Documentation}
+puts {
+<h2>Available Documentation</h2>
+<table width="100%" cellpadding="5">
+}
+
+proc doc {name url desc} {
+ puts {<tr><td valign="top" align="right">}
+ regsub -all { +} $name {\&nbsp;} name
+ puts "<a href=\"$url\">$name</a></td>"
+ puts {<td width="10"></td>}
+ puts {<td valign="top" align="left">}
+ puts $desc
+ puts {</td></tr>}
+}
+
+doc {Appropriate Uses For SQLite} {whentouse.html} {
+ This document describes situations where SQLite is an appropriate
+ database engine to use versus situations where a client/server
+ database engine might be a better choice.
+}
+
+doc {Distinctive Features} {different.html} {
+ This document enumerates and describes some of the features of
+ SQLite that make it different from other SQL database engines.
+}
+
+doc {SQLite In 5 Minutes Or Less} {quickstart.html} {
+ A very quick introduction to programming with SQLite.
+}
+
+doc {SQL Syntax} {lang.html} {
+ This document describes the SQL language that is understood by
+ SQLite.
+}
+doc {Version 3 C/C++ API<br>Reference} {capi3ref.html} {
+ This document describes each API function separately.
+}
+doc {Sharing Cache Mode} {sharedcache.html} {
+ Version 3.3.0 and later supports the ability for two or more
+ database connections to share the same page and schema cache.
+ This feature is useful for certain specialized applications.
+}
+doc {Tcl API} {tclsqlite.html} {
+ A description of the TCL interface bindings for SQLite.
+}
+
+doc {Pragma commands} {pragma.html} {
+ This document describes SQLite performance tuning options and other
+ special purpose database commands.
+}
+doc {SQLite Version 3} {version3.html} {
+ A summary of the changes between SQLite version 2.8 and SQLite version 3.0.
+}
+doc {Version 3 C/C++ API} {capi3.html} {
+ A description of the C/C++ interface bindings for SQLite version 3.0.0
+ and following.
+}
+doc {Version 3 DataTypes } {datatype3.html} {
+ SQLite version 3 introduces the concept of manifest typing, where the
+ type of a value is associated with the value itself, not the column that
+ it is stored in.
+ This page describes data typing for SQLite version 3 in further detail.
+}
+
+doc {Locking And Concurrency<br>In SQLite Version 3} {lockingv3.html} {
+ A description of how the new locking code in version 3 increases
+ concurrency and decreases the problem of writer starvation.
+}
+
+doc {Overview Of The Optimizer} {optoverview.html} {
+ A quick overview of the various query optimizations that are
+ attempted by the SQLite code generator.
+}
+
+
+doc {Null Handling} {nulls.html} {
+ Different SQL database engines handle NULLs in different ways. The
+ SQL standards are ambiguous. This document describes how SQLite handles
+ NULLs in comparison with other SQL database engines.
+}
+
+doc {Copyright} {copyright.html} {
+ SQLite is in the public domain. This document describes what that means
+ and the implications for contributors.
+}
+
+doc {Unsupported SQL} {omitted.html} {
+ This page describes features of SQL that SQLite does not support.
+}
+
+doc {Version 2 C/C++ API} {c_interface.html} {
+ A description of the C/C++ interface bindings for SQLite through version
+ 2.8.
+}
+
+
+doc {Version 2 DataTypes } {datatypes.html} {
+ A description of how SQLite version 2 handles SQL datatypes.
+ Short summary: Everything is a string.
+}
+
+doc {Release History} {changes.html} {
+ A chronology of SQLite releases going back to version 1.0.0
+}
+
+
+doc {Speed Comparison} {speed.html} {
+ The speed of version 2.7.6 of SQLite is compared against PostgreSQL and
+ MySQL.
+}
+
+doc {Architecture} {arch.html} {
+ An architectural overview of the SQLite library, useful for those who want
+ to hack the code.
+}
+
+doc {VDBE Tutorial} {vdbe.html} {
+ The VDBE is the subsystem within SQLite that does the actual work of
+ executing SQL statements. This page describes the principles of operation
+ for the VDBE in SQLite version 2.7. This is essential reading for anyone
+ who wants to modify the SQLite sources.
+}
+
+doc {VDBE Opcodes} {opcode.html} {
+ This document is an automatically generated description of the various
+ opcodes that the VDBE understands. Programmers can use this document as
+ a reference to better understand the output of EXPLAIN listings from
+ SQLite.
+}
+
+doc {Compilation Options} {compile.html} {
+ This document describes the compile time options that may be set to
+ modify the default behaviour of the library or omit optional features
+ in order to reduce binary size.
+}
+
+doc {Backwards Compatibility} {formatchng.html} {
+ This document details all of the incompatible changes to the SQLite
+ file format that have occurred since version 1.0.0.
+}
+
+puts {</table>}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/download.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/download.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,218 @@
+#
+# Run this TCL script to generate HTML for the download.html file.
+#
+set rcsid {$Id: download.tcl,v 1.23 2006/10/08 18:56:57 drh Exp $}
+source common.tcl
+header {SQLite Download Page}
+
+puts {
+<h2>SQLite Download Page</h2>
+<table width="100%" cellpadding="5">
+}
+
+proc Product {pattern desc} {
+ regsub {V[23]} $pattern {*} p3
+ regsub V2 $pattern {(2[0-9a-z._]+)} pattern
+ regsub V3 $pattern {(3[0-9a-z._]+)} pattern
+ set p2 [string map {* .*} $pattern]
+ set flist [glob -nocomplain $p3]
+ foreach file [lsort -dict $flist] {
+ if {![regexp ^$p2\$ $file all version]} continue
+ regsub -all _ $version . version
+ set size [file size $file]
+ set units bytes
+ if {$size>1024*1024} {
+ set size [format %.2f [expr {$size/(1024.0*1024.0)}]]
+ set units MiB
+ } elseif {$size>1024} {
+ set size [format %.2f [expr {$size/(1024.0)}]]
+ set units KiB
+ }
+ puts "<tr><td width=\"10\"></td>"
+ puts "<td valign=\"top\" align=\"right\">"
+ puts "<a href=\"$file\">$file</a><br>($size $units)</td>"
+ puts "<td width=\"5\"></td>"
+ regsub -all VERSION $desc $version d2
+ puts "<td valign=\"top\">[string trim $d2]</td></tr>"
+ }
+}
+cd doc
+
+proc Heading {title} {
+ puts "<tr><td colspan=4><big><b>$title</b></big></td></tr>"
+}
+
+Heading {Precompiled Binaries for Linux}
+
+Product sqlite3-V3.bin.gz {
+ A command-line program for accessing and modifying
+ SQLite version 3.* databases.
+ See <a href="sqlite.html">the documentation</a> for additional information.
+}
+
+Product sqlite-V3.bin.gz {
+ A command-line program for accessing and modifying
+ SQLite databases.
+ See <a href="sqlite.html">the documentation</a> for additional information.
+}
+
+Product tclsqlite-V3.so.gz {
+ Bindings for <a href="http://www.tcl.tk/">Tcl/Tk</a>.
+ You can import this shared library into either
+ tclsh or wish to get SQLite database access from Tcl/Tk.
+ See <a href="tclsqlite.html">the documentation</a> for details.
+}
+
+Product sqlite-V3.so.gz {
+ A precompiled shared-library for Linux without the TCL bindings.
+}
+
+Product fts1-V3.so.gz {
+ A precompiled
+ <a href="http://www.sqlite.org/cvstrac/wiki?p=FtsOne">FTS Module</a>
+ for Linux.
+}
+
+Product sqlite-devel-V3.i386.rpm {
+ RPM containing documentation, header files, and static library for
+ SQLite version VERSION.
+}
+Product sqlite-V3-1.i386.rpm {
+ RPM containing shared libraries and the <b>sqlite</b> command-line
+ program for SQLite version VERSION.
+}
+
+Product sqlite*_analyzer-V3.bin.gz {
+ An analysis program for database files compatible with SQLite
+ version VERSION and later.
+}
+
+Heading {Precompiled Binaries For Windows}
+
+Product sqlite-V3.zip {
+ A command-line program for accessing and modifying SQLite databases.
+ See <a href="sqlite.html">the documentation</a> for additional information.
+}
+Product tclsqlite-V3.zip {
+ Bindings for <a href="http://www.tcl.tk/">Tcl/Tk</a>.
+ You can import this shared library into either
+ tclsh or wish to get SQLite database access from Tcl/Tk.
+ See <a href="tclsqlite.html">the documentation</a> for details.
+}
+Product sqlitedll-V3.zip {
+ This is a DLL of the SQLite library without the TCL bindings.
+ The only external dependency is MSVCRT.DLL.
+}
+
+Product fts1dll-V3.zip {
+ A precompiled
+ <a href="http://www.sqlite.org/cvstrac/wiki?p=FtsOne">FTS Module</a>
+ for win32.
+}
+
+Product sqlite*_analyzer-V3.zip {
+ An analysis program for database files compatible with SQLite version
+ VERSION and later.
+}
+
+
+Heading {Source Code}
+
+Product {sqlite-V3.tar.gz} {
+ A tarball of the complete source tree for SQLite version VERSION
+ including all of the documentation.
+}
+
+Product {sqlite-source-V3.zip} {
+ This ZIP archive contains pure C source code for the SQLite library.
+ Unlike the tarballs below, all of the preprocessing and automatic
+ code generation has already been done on this C source code, so it
+ can be processed directly with any ordinary C compiler.
+ This file is provided as a service to
+ MS-Windows users who lack the build support infrastructure of Unix.
+}
+
+Product {sqlite-V3-tea.tar.gz} {
+ A tarball of preprocessed source code together with a
+ <a href="http://www.tcl.tk/doc/tea/">Tcl Extension Architecture (TEA)</a>
+ compatible configure script and makefile.
+}
+
+Product {sqlite-V3.src.rpm} {
+ An RPM containing complete source code for SQLite version VERSION.
+}
+
+Heading {Cross-Platform Binaries}
+
+Product {sqlite-V3.kit} {
+ A <a href="http://www.equi4.com/starkit.html">starkit</a> containing
+ precompiled SQLite binaries and Tcl bindings for Linux-x86, Windows,
+ and Mac OS-X ppc and x86.
+}
+
+Heading {Historical Binaries And Source Code}
+
+Product sqlite-V2.bin.gz {
+ A command-line program for accessing and modifying
+ SQLite version 2.* databases on Linux-x86.
+}
+Product sqlite-V2.zip {
+ A command-line program for accessing and modifying
+ SQLite version 2.* databases on win32.
+}
+
+Product sqlite*_analyzer-V2.bin.gz {
+ An analysis program for version 2.* database files on Linux-x86
+}
+Product sqlite*_analyzer-V2.zip {
+ An analysis program for version 2.* database files on win32.
+}
+Product {sqlite-source-V2.zip} {
+ This ZIP archive contains C source code for the SQLite library
+ version VERSION.
+}
+
+
+
+
+puts {
+</table>
+
+<a name="cvs">
+<h3>Direct Access To The Sources Via Anonymous CVS</h3>
+
+<p>
+All SQLite source code is maintained in a
+<a href="http://www.cvshome.org/">CVS</a> repository that is
+available for read-only access by anyone. You can
+interactively view the
+repository contents and download individual files
+by visiting
+<a href="http://www.sqlite.org/cvstrac/dir?d=sqlite">
+http://www.sqlite.org/cvstrac/dir?d=sqlite</a>.
+To access the repository directly, use the following
+commands:
+</p>
+
+<blockquote><pre>
+cvs -d :pserver:anonymous@www.sqlite.org:/sqlite login
+cvs -d :pserver:anonymous@www.sqlite.org:/sqlite checkout sqlite
+</pre></blockquote>
+
+<p>
+When the first command prompts you for a password, enter "anonymous".
+</p>
+
+<p>
+To access the SQLite version 2.8 sources, begin by getting the 3.0
+tree as described above. Then update to the "version_2" branch
+as follows:
+</p>
+
+<blockquote><pre>
+cvs update -r version_2
+</pre></blockquote>
+
+}
+
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/dynload.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/dynload.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,70 @@
+#
+# Run this Tcl script to generate the dynload.html file.
+#
+set rcsid {$Id: dynload.tcl,v 1.1 2001/02/11 16:58:22 drh Exp $}
+
+puts {<html>
+<head>
+ <title>How to build a dynamically loaded Tcl extension for SQLite</title>
+</head>
+<body bgcolor=white>
+<h1 align=center>
+How To Build A Dynamically Loaded Tcl Extension
+</h1>}
+puts {<p>
+<i>This note was contributed by
+<a href="bsaunder at tampabay.rr.com.nospam">Bill Saunders</a>. Thanks, Bill!</i>
+
+<p>
+To compile the SQLite Tcl extension into a dynamically loaded module
+I did the following:
+</p>
+
+<ol>
+<li><p>Do a standard compile.
+(I had a directory called bld at the same level as sqlite, i.e.
+ /root/bld
+ /root/sqlite
+I followed the directions and did a standard build in the bld
+directory.)</p></li>
+
+<li><p>
+Now do the following in the bld directory
+<blockquote><pre>
+gcc -shared -I. -lgdbm ../sqlite/src/tclsqlite.c libsqlite.a -o sqlite.so
+</pre></blockquote></p></li>
+
+<li><p>
+This should produce the file sqlite.so in the bld directory</p></li>
+
+<li><p>
+Create a pkgIndex.tcl file that contains this line
+
+<blockquote><pre>
+package ifneeded sqlite 1.0 [list load [file join $dir sqlite.so]]
+</pre></blockquote></p></li>
+
+<li><p>
+To use this put sqlite.so and pkgIndex.tcl in the same directory</p></li>
+
+<li><p>
+From that directory start wish</p></li>
+
+<li><p>
+Execute the following Tcl command (tells Tcl where to find loadable
+modules)
+<blockquote><pre>
+lappend auto_path [exec pwd]
+</pre></blockquote></p></li>
+
+<li><p>
+Load the package
+<blockquote><pre>
+package require sqlite
+</pre></blockquote></p></li>
+
+<li><p>
+Have fun....</p></li>
+</ol>
+
+</body></html>}
Added: freeswitch/trunk/libs/sqlite/www/faq.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/faq.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,517 @@
+#
+# Run this script to generate a faq.html output file
+#
+set rcsid {$Id: faq.tcl,v 1.36 2006/04/05 01:02:08 drh Exp $}
+source common.tcl
+header {SQLite Frequently Asked Questions</title>}
+
+set cnt 1
+proc faq {question answer} {
+ set ::faq($::cnt) [list [string trim $question] [string trim $answer]]
+ incr ::cnt
+}
+
+#############
+# Enter questions and answers here.
+
+faq {
+ How do I create an AUTOINCREMENT field?
+} {
+ <p>Short answer: A column declared INTEGER PRIMARY KEY will
+ autoincrement.</p>
+
+ <p>Here is the long answer:
+ If you declare a column of a table to be INTEGER PRIMARY KEY, then
+ whenever you insert a NULL
+ into that column of the table, the NULL is automatically converted
+ into an integer which is one greater than the largest value of that
+ column over all other rows in the table, or 1 if the table is empty.
+ (If the largest possible integer key, 9223372036854775807, is already in use, then an
+ unused key value is chosen at random.)
+ For example, suppose you have a table like this:
+<blockquote><pre>
+CREATE TABLE t1(
+ a INTEGER PRIMARY KEY,
+ b INTEGER
+);
+</pre></blockquote>
+ <p>With this table, the statement</p>
+<blockquote><pre>
+INSERT INTO t1 VALUES(NULL,123);
+</pre></blockquote>
+ <p>is logically equivalent to saying:</p>
+<blockquote><pre>
+INSERT INTO t1 VALUES((SELECT max(a) FROM t1)+1,123);
+</pre></blockquote>
+
+ <p>There is a new API function named
+ <a href="capi3ref.html#sqlite3_last_insert_rowid">
+ sqlite3_last_insert_rowid()</a> which will return the integer key
+ for the most recent insert operation.</p>
+
+ <p>Note that the integer key is one greater than the largest
+ key that was in the table just prior to the insert. The new key
+ will be unique over all keys currently in the table, but it might
+ overlap with keys that have been previously deleted from the
+ table. To create keys that are unique over the lifetime of the
+ table, add the AUTOINCREMENT keyword to the INTEGER PRIMARY KEY
+ declaration. Then the key chosen will be one more than the
+ largest key that has ever existed in that table. If the largest
+ possible key has previously existed in that table, then the INSERT
+ will fail with an SQLITE_FULL error code.</p>
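+
+ <p>For example, a table using the AUTOINCREMENT keyword might be
+ declared as follows (table t2 here is purely illustrative):</p>
+<blockquote><pre>
+CREATE TABLE t2(
+  a INTEGER PRIMARY KEY AUTOINCREMENT,
+  b INTEGER
+);
+INSERT INTO t2 VALUES(NULL,456);  -- "a" gets a key larger than any ever used in t2
+</pre></blockquote>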
+}
+
+faq {
+ What datatypes does SQLite support?
+} {
+ <p>See <a href="datatype3.html">http://www.sqlite.org/datatype3.html</a>.</p>
+}
+
+faq {
+ SQLite lets me insert a string into a database column of type integer!
+} {
+ <p>This is a feature, not a bug. SQLite does not enforce data type
+ constraints. Any data can be
+ inserted into any column. You can put arbitrary length strings into
+ integer columns, floating point numbers in boolean columns, or dates
+ in character columns. The datatype you assign to a column in the
+ CREATE TABLE command does not restrict what data can be put into
+ that column. Every column is able to hold
+ an arbitrary length string. (There is one exception: Columns of
+ type INTEGER PRIMARY KEY may only hold a 64-bit signed integer.
+ An error will result
+ if you try to put anything other than an integer into an
+ INTEGER PRIMARY KEY column.)</p>
+
+ <p>But SQLite does use the declared type of a column as a hint
+ that you prefer values in that format. So, for example, if a
+ column is of type INTEGER and you try to insert a string into
+ that column, SQLite will attempt to convert the string into an
+ integer. If it can, it inserts the integer instead. If not,
+ it inserts the string. This feature is sometimes
+ called <a href="datatype3.html#affinity">type or column affinity</a>.
+ </p>
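+
+ <p>For example (an illustrative sketch; the column and values are
+ hypothetical):</p>
+<blockquote><pre>
+CREATE TABLE t(i INTEGER);
+INSERT INTO t VALUES('123');   -- convertible: stored as the integer 123
+INSERT INTO t VALUES('xyz');   -- not convertible: stored as the text 'xyz'
+SELECT typeof(i) FROM t;       -- returns "integer" then "text"
+</pre></blockquote>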
+}
+
+faq {
+ Why does SQLite think that the expression '0'=='00' is TRUE?
+} {
+ <p>As of version 2.7.0, it doesn't. See the document on
+ <a href="datatype3.html">datatypes in SQLite version 3</a>
+ for details.</p>
+}
+
+faq {
+ Why doesn't SQLite allow me to use '0' and '0.0' as the primary
+ key on two different rows of the same table?
+} {
+ <p>Your primary key must have a numeric type. Change the datatype of
+ your primary key to TEXT and it should work.</p>
+
+ <p>Every row must have a unique primary key. For a column with a
+ numeric type, SQLite thinks that <b>'0'</b> and <b>'0.0'</b> are the
+ same value because they compare equal to one another numerically.
+ (See the previous question.) Hence the values are not unique.</p>
+}
+
+faq {
+ My linux box is not able to read an SQLite database that was created
+ on my SparcStation.
+} {
+ <p>You need to upgrade your SQLite library to version 2.6.3 or later.</p>
+
+ <p>The x86 processor on your linux box is little-endian (meaning that
+ the least significant byte of integers comes first) but the Sparc is
+ big-endian (the most significant byte comes first). SQLite databases
+ created on a little-endian architecture cannot be read on a big-endian
+ machine by version 2.6.2 or earlier of SQLite. Beginning with
+ version 2.6.3, SQLite should be able to read and write database files
+ regardless of byte order of the machine on which the file was created.</p>
+}
+
+faq {
+ Can multiple applications or multiple instances of the same
+ application access a single database file at the same time?
+} {
+ <p>Multiple processes can have the same database open at the same
+ time. Multiple processes can be doing a SELECT
+ at the same time. But only one process can be making changes to
+ the database at any moment in time.</p>
+
+ <p>SQLite uses reader/writer locks to control access to the database.
+ (Under Win95/98/ME which lacks support for reader/writer locks, a
+ probabilistic simulation is used instead.)
+ But use caution: this locking mechanism might
+ not work correctly if the database file is kept on an NFS filesystem.
+ This is because fcntl() file locking is broken on many NFS implementations.
+ You should avoid putting SQLite database files on NFS if multiple
+ processes might try to access the file at the same time. On Windows,
+ Microsoft's documentation says that locking may not work under FAT
+ filesystems if you are not running the Share.exe daemon. People who
+ have a lot of experience with Windows tell me that file locking of
+ network files is very buggy and is not dependable. If what they
+ say is true, sharing an SQLite database between two or more Windows
+ machines might cause unexpected problems.</p>
+
+ <p>We are aware of no other <i>embedded</i> SQL database engine that
+ supports as much concurrency as SQLite. SQLite allows multiple processes
+ to have the database file open at once, and for multiple processes to
+ read the database at once. When any process wants to write, it must
+ lock the entire database file for the duration of its update. But that
+ normally only takes a few milliseconds. Other processes just wait on
+ the writer to finish then continue about their business. Other embedded
+ SQL database engines typically only allow a single process to connect to
+ the database at once.</p>
+
+ <p>However, client/server database engines (such as PostgreSQL, MySQL,
+ or Oracle) usually support a higher level of concurrency and allow
+ multiple processes to be writing to the same database at the same time.
+ This is possible in a client/server database because there is always a
+ single well-controlled server process available to coordinate access.
+ If your application has a need for a lot of concurrency, then you should
+ consider using a client/server database. But experience suggests that
+ most applications need much less concurrency than their designers imagine.
+ </p>
+
+ <p>When SQLite tries to access a file that is locked by another
+ process, the default behavior is to return SQLITE_BUSY. You can
+ adjust this behavior from C code using the
+ <a href="capi3ref#sqlite3_busy_handler">sqlite3_busy_handler()</a> or
+ <a href="capi3ref#sqlite3_busy_timeout">sqlite3_busy_timeout()</a>
+ API functions.</p>
+}
+
+faq {
+ Is SQLite threadsafe?
+} {
+ <p>Yes. Sometimes. In order to be thread-safe, SQLite must be compiled
+ with the THREADSAFE preprocessor macro set to 1. In the default
+ distribution, the windows binaries are compiled to be threadsafe but
+ the linux binaries are not. If you want to change this, you'll have to
+ recompile.</p>
+
+ <p>"Threadsafe" in the previous paragraph means that two or more threads
+ can run SQLite at the same time on different "<b>sqlite3</b>" structures
+ returned from separate calls to
+ <a href="capi3ref#sqlite3_open">sqlite3_open()</a>. It is never safe
+ to use the same <b>sqlite3</b> structure pointer in two
+ or more threads.</p>
+
+ <p>Prior to version 3.3.1,
+ an <b>sqlite3</b> structure could only be used in the same thread
+ that called <a href="capi3ref#sqlite3_open">sqlite3_open</a> to create it.
+ You could not open a
+ database in one thread then pass the handle off to another thread for
+ it to use. This was due to limitations (bugs?) in many common threading
+ implementations such as on RedHat9. Specifically, an fcntl() lock
+ created by one thread cannot be removed or modified by a different
+ thread on the troublesome systems. And since SQLite uses fcntl()
+ locks heavily for concurrency control, serious problems arose if you
+ start moving database connections across threads.</p>
+
+ <p>The restriction on moving database connections across threads
+ was relaxed somewhat in version 3.3.1. With that and subsequent
+ versions, it is safe to move a connection handle across threads
+ as long as the connection is not holding any fcntl() locks. You
+ can safely assume that no locks are being held if no
+ transaction is pending and all statements have been finalized.</p>
+
+ <p>Under UNIX, you should not carry an open SQLite database across
+ a fork() system call into the child process. Problems will result
+ if you do.</p>
+}
+
+faq {
+ How do I list all tables/indices contained in an SQLite database
+} {
+ <p>If you are running the <b>sqlite3</b> command-line access program
+ you can type "<b>.tables</b>" to get a list of all tables. Or you
+ can type "<b>.schema</b>" to see the complete database schema including
+ all tables and indices. Either of these commands can be followed by
+ a LIKE pattern that will restrict the tables that are displayed.</p>
+
+ <p>From within a C/C++ program (or a script using Tcl/Ruby/Perl/Python
+ bindings) you can get access to table and index names by doing a SELECT
+ on a special table named "<b>SQLITE_MASTER</b>". Every SQLite database
+ has an SQLITE_MASTER table that defines the schema for the database.
+ The SQLITE_MASTER table looks like this:</p>
+<blockquote><pre>
+CREATE TABLE sqlite_master (
+ type TEXT,
+ name TEXT,
+ tbl_name TEXT,
+ rootpage INTEGER,
+ sql TEXT
+);
+</pre></blockquote>
+ <p>For tables, the <b>type</b> field will always be <b>'table'</b> and the
+ <b>name</b> field will be the name of the table. So to get a list of
+ all tables in the database, use the following SELECT command:</p>
+<blockquote><pre>
+SELECT name FROM sqlite_master
+WHERE type='table'
+ORDER BY name;
+</pre></blockquote>
+ <p>For indices, <b>type</b> is equal to <b>'index'</b>, <b>name</b> is the
+ name of the index and <b>tbl_name</b> is the name of the table to which
+ the index belongs. For both tables and indices, the <b>sql</b> field is
+ the text of the original CREATE TABLE or CREATE INDEX statement that
+ created the table or index. For automatically created indices (used
+ to implement the PRIMARY KEY or UNIQUE constraints) the <b>sql</b> field
+ is NULL.</p>
+
+ <p>The SQLITE_MASTER table is read-only. You cannot change this table
+ using UPDATE, INSERT, or DELETE. The table is automatically updated by
+ CREATE TABLE, CREATE INDEX, DROP TABLE, and DROP INDEX commands.</p>
+
+ <p>Temporary tables do not appear in the SQLITE_MASTER table. Temporary
+ tables and their indices and triggers occur in another special table
+ named SQLITE_TEMP_MASTER. SQLITE_TEMP_MASTER works just like SQLITE_MASTER
+ except that it is only visible to the application that created the
+ temporary tables. To get a list of all tables, both permanent and
+ temporary, one can use a command similar to the following:
+<blockquote><pre>
+SELECT name FROM
+ (SELECT * FROM sqlite_master UNION ALL
+ SELECT * FROM sqlite_temp_master)
+WHERE type='table'
+ORDER BY name
+</pre></blockquote>
+}
+
+faq {
+ Are there any known size limits to SQLite databases?
+} {
+ <p>A database is limited in size to 2 tebibytes (2<sup>41</sup> bytes).
+ That is a theoretical limitation. In practice, you should try to keep
+ your SQLite databases below 100 gigabytes to avoid performance problems.
+ If you need to store 100 gigabytes or more in a database, consider using
+ an enterprise database engine which is designed for that purpose.</p>
+
+ <p>The theoretical limit on the number of rows in a table is
+ 2<sup>64</sup>-1, though obviously you will run into the file size
+ limitation prior to reaching the row limit. A single row can hold
+ up to 2<sup>30</sup> bytes of data in the current implementation. The
+ underlying file format supports row sizes up to about 2<sup>62</sup> bytes.
+ </p>
+
+ <p>There are probably limits on the number of tables or indices or
+ the number of columns in a table or index, but nobody is sure what
+ those limits are. In practice, SQLite must read and parse the original
+ SQL of all table and index declarations every time a new database file
+ is opened, so for the best performance of
+ <a href="capi3ref.html#sqlite3_open">sqlite3_open()</a> it is best
+ to keep down the number of declared tables. Likewise, though there
+ is no limit on the number of columns in a table, more than a few hundred
+ seems extreme. Only the first 31 columns of a table are candidates for
+ certain optimizations. You can put as many columns in an index as you like
+ but indexes with more than 30 columns will not be used to optimize queries.
+ </p>
+
+ <p>The names of tables, indices, views, triggers, and columns can be
+ as long as desired. However, the names of SQL functions (as created
+ by the
+ <a href="capi3ref.html#sqlite3_create_function">sqlite3_create_function()</a>
+ API) may not exceed 255 characters in length.</p>
+}
+
+faq {
+ What is the maximum size of a VARCHAR in SQLite?
+} {
+ <p>SQLite does not enforce the length of a VARCHAR. You can declare
+ a VARCHAR(10) and SQLite will be happy to let you put 500 characters
+ in it. And it will keep all 500 characters intact - it never truncates.
+ </p>
+}
+
+faq {
+ Does SQLite support a BLOB type?
+} {
+ <p>SQLite versions 3.0 and later allow you to store BLOB data in any
+ column, even columns that are declared to hold some other type.</p>
+}
+
+faq {
+ How do I add or delete columns from an existing table in SQLite.
+} {
+ <p>SQLite has limited
+ <a href="lang_altertable.html">ALTER TABLE</a> support that you can
+ use to add a column to the end of a table or to change the name of
+ a table.
+ If you want to make more complex changes to the structure of a table,
+ you will have to recreate the
+ table. You can save existing data to a temporary table, drop the
+ old table, create the new table, then copy the data back in from
+ the temporary table.</p>
+
+ <p>For example, suppose you have a table named "t1" with columns
+  named "a", "b", and "c" and that you want to delete column "c" from
+ this table. The following steps illustrate how this could be done:
+ </p>
+
+ <blockquote><pre>
+BEGIN TRANSACTION;
+CREATE TEMPORARY TABLE t1_backup(a,b);
+INSERT INTO t1_backup SELECT a,b FROM t1;
+DROP TABLE t1;
+CREATE TABLE t1(a,b);
+INSERT INTO t1 SELECT a,b FROM t1_backup;
+DROP TABLE t1_backup;
+COMMIT;
+</pre></blockquote>
+}
+
+faq {
+ I deleted a lot of data but the database file did not get any
+ smaller. Is this a bug?
+} {
+ <p>No. When you delete information from an SQLite database, the
+ unused disk space is added to an internal "free-list" and is reused
+ the next time you insert data. The disk space is not lost. But
+ neither is it returned to the operating system.</p>
+
+ <p>If you delete a lot of data and want to shrink the database file,
+ run the <a href="lang_vacuum.html">VACUUM</a> command.
+ VACUUM will reconstruct
+ the database from scratch. This will leave the database with an empty
+ free-list and a file that is minimal in size. Note, however, that the
+ VACUUM can take some time to run (around a half second per megabyte
+ on the Linux box where SQLite is developed) and it can use up to twice
+ as much temporary disk space as the original file while it is running.
+ </p>
+
+ <p>As of SQLite version 3.1, an alternative to using the VACUUM command
+ is auto-vacuum mode, enabled using the
+ <a href="pragma.html#pragma_auto_vacuum">auto_vacuum pragma</a>.</p>
+}
+
+faq {
+ Can I use SQLite in my commercial product without paying royalties?
+} {
+ <p>Yes. SQLite is in the
+ <a href="copyright.html">public domain</a>. No claim of ownership is made
+ to any part of the code. You can do anything you want with it.</p>
+}
+
+faq {
+ How do I use a string literal that contains an embedded single-quote (')
+ character?
+} {
+ <p>The SQL standard specifies that single-quotes in strings are escaped
+ by putting two single quotes in a row. SQL works like the Pascal programming
+  language in this regard. SQLite follows this standard. Example:
+ </p>
+
+ <blockquote><pre>
+ INSERT INTO xyz VALUES('5 O''clock');
+ </pre></blockquote>
+}
+
+faq {What is an SQLITE_SCHEMA error, and why am I getting one?} {
+ <p>An SQLITE_SCHEMA error is returned when a
+ prepared SQL statement is no longer valid and cannot be executed.
+ When this occurs, the statement must be recompiled from SQL using
+ the
+ <a href="capi3ref.html#sqlite3_prepare">sqlite3_prepare()</a> API.
+ In SQLite version 3, an SQLITE_SCHEMA error can
+ only occur when using the
+ <a href="capi3ref.html#sqlite3_prepare">sqlite3_prepare()</a>/<a
+ href="capi3ref.html#sqlite3_step">sqlite3_step()</a>/<a
+ href="capi3ref.html#sqlite3_finalize">sqlite3_finalize()</a>
+ API to execute SQL, not when using the
+  <a href="capi3ref.html#sqlite3_exec">sqlite3_exec()</a> interface. This was not
+ the case in version 2.</p>
+
+ <p>The most common reason for a prepared statement to become invalid
+ is that the schema of the database was modified after the SQL was
+ prepared (possibly by another process). The other reasons this can
+ happen are:</p>
+ <ul>
+ <li>A database was <a href="lang_detach.html">DETACH</a>ed.
+ <li>The database was <a href="lang_vacuum.html">VACUUM</a>ed
+ <li>A user-function definition was deleted or changed.
+ <li>A collation sequence definition was deleted or changed.
+ <li>The authorization function was changed.
+ </ul>
+
+ <p>In all cases, the solution is to recompile the statement from SQL
+ and attempt to execute it again. Because a prepared statement can be
+ invalidated by another process changing the database schema, all code
+ that uses the
+ <a href="capi3ref.html#sqlite3_prepare">sqlite3_prepare()</a>/<a
+ href="capi3ref.html#sqlite3_step">sqlite3_step()</a>/<a
+ href="capi3ref.html#sqlite3_finalize">sqlite3_finalize()</a>
+ API should be prepared to handle SQLITE_SCHEMA errors. An example
+ of one approach to this follows:</p>
+
+ <blockquote><pre>
+
+ int rc;
+ sqlite3_stmt *pStmt;
+ char zSql[] = "SELECT .....";
+
+ do {
+ /* Compile the statement from SQL. Assume success. */
+ sqlite3_prepare(pDb, zSql, -1, &pStmt, 0);
+
+ while( SQLITE_ROW==sqlite3_step(pStmt) ){
+ /* Do something with the row of available data */
+ }
+
+ /* Finalize the statement. If an SQLITE_SCHEMA error has
+      ** occurred, then the above call to sqlite3_step() will have
+ ** returned SQLITE_ERROR. sqlite3_finalize() will return
+ ** SQLITE_SCHEMA. In this case the loop will execute again.
+ */
+ rc = sqlite3_finalize(pStmt);
+ } while( rc==SQLITE_SCHEMA );
+
+ </pre></blockquote>
+}
+
+faq {Why does ROUND(9.95,1) return 9.9 instead of 10.0?
+ Shouldn't 9.95 round up?} {
+ <p>SQLite uses binary arithmetic and in binary, there is no
+  way to write 9.95 in a finite number of bits. The closest
+ you can get to 9.95 in a 64-bit IEEE float (which is what
+ SQLite uses) is 9.949999999999999289457264239899814128875732421875.
+ So when you type "9.95", SQLite really understands the number to be
+ the much longer value shown above. And that value rounds down.</p>
+
+ <p>This kind of problem comes up all the time when dealing with
+ floating point binary numbers. The general rule to remember is
+ that most fractional numbers that have a finite representation in decimal
+ (a.k.a "base-10")
+ do not have a finite representation in binary (a.k.a "base-2").
+ And so they are
+ approximated using the closest binary number available. That
+ approximation is usually very close, but it will be slightly off
+ and in some cases can cause your results to be a little different
+ from what you might expect.</p>
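+
+  <p>You can see the value that is actually being used with a single line of
+  C (just an illustration, assuming &lt;stdio.h&gt; has been included; any
+  language that exposes IEEE 754 doubles will show the same thing):</p>
+
+  <blockquote><pre>
+  printf("%.48f\n", 9.95);   /* prints the long value shown above */
+  </pre></blockquote>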
+}
+
+# End of questions and answers.
+#############
+
+puts {<h2>Frequently Asked Questions</h2>}
+
+# puts {<DL COMPACT>}
+# for {set i 1} {$i<$cnt} {incr i} {
+# puts " <DT><A HREF=\"#q$i\">($i)</A></DT>"
+# puts " <DD>[lindex $faq($i) 0]</DD>"
+# }
+# puts {</DL>}
+puts {<OL>}
+for {set i 1} {$i<$cnt} {incr i} {
+ puts "<li><a href=\"#q$i\">[lindex $faq($i) 0]</a></li>"
+}
+puts {</OL>}
+
+for {set i 1} {$i<$cnt} {incr i} {
+ puts "<A NAME=\"q$i\"><HR />"
+ puts "<P><B>($i) [lindex $faq($i) 0]</B></P>\n"
+ puts "<BLOCKQUOTE>[lindex $faq($i) 1]</BLOCKQUOTE></LI>\n"
+}
+
+puts {</OL>}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/fileformat.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/fileformat.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,785 @@
+#
+# Run this script to generate a fileformat.html output file
+#
+set rcsid {$Id: fileformat.tcl,v 1.13 2004/10/10 17:24:55 drh Exp $}
+source common.tcl
+header {SQLite Database File Format (Version 2)}
+puts {
+<h2>SQLite 2.X Database File Format</h2>
+
+<p>
+This document describes the disk file format for SQLite versions 2.1
+through 2.8. SQLite version 3.0 and following uses a very different
+format which is described separately.
+</p>
+
+<h3>1.0 Layers</h3>
+
+<p>
+SQLite is implemented in layers.
+(See the <a href="arch.html">architecture description</a>.)
+The format of database files is determined by three different
+layers in the architecture.
+</p>
+
+<ul>
+<li>The <b>schema</b> layer implemented by the VDBE.</li>
+<li>The <b>b-tree</b> layer implemented by btree.c</li>
+<li>The <b>pager</b> layer implemented by pager.c</li>
+</ul>
+
+<p>
+We will describe each layer beginning with the bottom (pager)
+layer and working upwards.
+</p>
+
+<h3>2.0 The Pager Layer</h3>
+
+<p>
+An SQLite database consists of
+"pages" of data. Each page is 1024 bytes in size.
+Pages are numbered beginning with 1.
+A page number of 0 is used to indicate "no such page" in the
+B-Tree and Schema layers.
+</p>
+
+<p>
+The pager layer is responsible for implementing transactions
+with atomic commit and rollback. It does this using a separate
+journal file. Whenever a new transaction is started, a journal
+file is created that records the original state of the database.
+If the program terminates before completing the transaction, the next
+process to open the database can use the journal file to restore
+the database to its original state.
+</p>
+
+<p>
+The journal file is located in the same directory as the database
+file and has the same name as the database file but with the
+characters "<tt>-journal</tt>" appended.
+</p>
+
+<p>
+The pager layer does not impose any content restrictions on the
+main database file. As far as the pager is concerned, each page
+contains 1024 bytes of arbitrary data. But there is structure to
+the journal file.
+</p>
+
+<p>
+A journal file begins with 8 bytes as follows:
+0xd9, 0xd5, 0x05, 0xf9, 0x20, 0xa1, 0x63, and 0xd6.
+Processes that are attempting to rollback a journal use these 8 bytes
+as a sanity check to make sure the file they think is a journal really
+is a valid journal. Prior versions of SQLite used different journal
+file formats. The magic numbers for these prior formats are different
+so that if a new version of the library attempts to rollback a journal
+created by an earlier version, it can detect that the journal uses
+an obsolete format and make the necessary adjustments. This article
+describes only the newest journal format - supported as of version
+2.8.0.
+</p>
+
+<p>
+Following the 8-byte prefix are three 4-byte integers that tell us
+the number of pages that have been committed to the journal,
+a magic number used for
+sanity checking each page, and the
+original size of the main database file before the transaction was
+started. The number of committed pages is used to limit how far
+into the journal to read. The use of the checksum magic number is
+described below.
+The original size of the database is used to restore the database
+file back to its original size.
+The size is expressed in pages (1024 bytes per page).
+</p>
+
+<p>
+All three integers in the journal header and all other multi-byte
+numbers used in the journal file are big-endian.
+That means that the most significant byte
+occurs first. That way, a journal file that is
+originally created on one machine can be rolled back by another
+machine that uses a different byte order. So, for example, a
+transaction that failed to complete on your big-endian SparcStation
+can still be rolled back on your little-endian Linux box.
+</p>
+
+<p>
+After the 8-byte prefix and the three 4-byte integers, the
+journal file consists of zero or more page records. Each page
+record is a 4-byte (big-endian) page number followed by 1024 bytes
+of data and a 4-byte checksum.
+The data is the original content of the database page
+before the transaction was started. So to roll back the transaction,
+the data is simply written into the corresponding page of the
+main database file. Pages can appear in the journal in any order,
+but they are guaranteed to appear only once. All page numbers will be
+between 1 and the maximum specified by the database size integer that
+appeared at the beginning of the journal.
+</p>
+
+<p>
+The so-called checksum at the end of each record is not really a
+checksum - it is the sum of the page number and the magic number which
+was the second integer in the journal header. The purpose of this
+value is to try to detect journal corruption that might have occurred
+because of a power loss or OS crash that occurred while the journal
+file was being written to disk. It could have been the case that the
+meta-data for the journal file, specifically the size of the file, had
+been written to the disk so that when the machine reboots it appears that
+the file is large enough to hold the current record. But even though the
+file size has changed, the data for the file might not have made it to
+the disk surface at the time of the OS crash or power loss. This means
+that after reboot, the end of the journal file will contain quasi-random
+garbage data. The checksum is an attempt to detect such corruption. If
+the checksum does not match, that page of the journal is not rolled back.
+</p>
+
+<p>
+Here is a summary of the journal file format:
+</p>
+
+<ul>
+<li>8 byte prefix: 0xd9, 0xd5, 0x05, 0xf9, 0x20, 0xa1, 0x63, 0xd6</li>
+<li>4 byte number of records in journal</li>
+<li>4 byte magic number used for page checksums</li>
+<li>4 byte initial database page count</li>
+<li>Zero or more instances of the following:
+ <ul>
+ <li>4 byte page number</li>
+ <li>1024 bytes of original data for the page</li>
+ <li>4 byte checksum</li>
+ </ul>
+</li>
+</ul>
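+
+<p>
+Expressed as C structures (a descriptive sketch only - the journal is read
+and written a field at a time rather than through structs like these, and
+every multi-byte integer is big-endian on disk):
+</p>
+
+<blockquote><pre>
+static const unsigned char aJournalMagic[8] = {
+  0xd9, 0xd5, 0x05, 0xf9, 0x20, 0xa1, 0x63, 0xd6
+};
+
+struct JournalHeader {          /* appears once, at offset 0 */
+  unsigned char aMagic[8];      /* must equal aJournalMagic */
+  unsigned char nRec[4];        /* number of page records that follow */
+  unsigned char cksumMagic[4];  /* magic number used in the page checksums */
+  unsigned char origDbSize[4];  /* original database size, in 1024-byte pages */
+};
+
+struct JournalRecord {          /* repeated nRec times */
+  unsigned char pgno[4];        /* page number */
+  unsigned char aData[1024];    /* original content of that page */
+  unsigned char cksum[4];       /* pgno + cksumMagic */
+};
+</pre></blockquote>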
+
+<h3>3.0 The B-Tree Layer</h3>
+
+<p>
+The B-Tree layer builds on top of the pager layer to implement
+one or more separate b-trees all in the same disk file. The
+algorithms used are taken from Knuth's <i>The Art Of Computer
+Programming.</i></p>
+
+<p>
+Page 1 of a database contains a header string used for sanity
+checking, a few 32-bit words of configuration data, and a pointer
+to the beginning of a list of unused pages in the database.
+All other pages in the
+database are either pages of a b-tree, overflow pages, or unused
+pages on the freelist.
+</p>
+
+<p>
+Each b-tree page contains zero or more database entries.
+Each entry has a unique key of one or more bytes and data of
+zero or more bytes.
+Both the key and data are arbitrary byte sequences. The combination
+of key and data are collectively known as "payload". The current
+implementation limits the amount of payload in a single entry to
+1048576 bytes. This limit can be raised to 16777216 by adjusting
+a single #define in the source code and recompiling. But most entries
+contain less than a hundred bytes of payload so a megabyte limit seems
+more than enough.
+</p>
+
+<p>
+Up to 238 bytes of payload for an entry can be held directly on
+a b-tree page. Any additional payload is contained on a linked list
+of overflow pages. This limit on the amount of payload held directly
+on b-tree pages guarantees that each b-tree page can hold at least
+4 entries. In practice, most entries are smaller than 238 bytes and
+thus most pages can hold more than 4 entries.
+</p>
+
+<p>
+A single database file can hold any number of separate, independent b-trees.
+Each b-tree is identified by its root page, which never changes.
+Child pages of the b-tree may change as entries are added and removed
+and pages split and combine. But the root page always stays the same.
+The b-tree itself does not record which pages are root pages and which
+are not. That information is handled entirely at the schema layer.
+</p>
+
+<h4>3.1 B-Tree Page 1 Details</h4>
+
+<p>
+Page 1 begins with the following 48-byte string:
+</p>
+
+<blockquote><pre>
+** This file contains an SQLite 2.1 database **
+</pre></blockquote>
+
+<p>
+If you count the number of characters in the string above, you will
+see that there are only 47. A '\000' terminator byte is added to
+bring the total to 48.
+</p>
+
+<p>
+A frequent question is why the string says version 2.1 when (as
+of this writing) we are up to version 2.7.0 of SQLite and any
+change to the second digit of the version is supposed to represent
+a database format change. The answer to this is that the B-tree
+layer has not changed since version 2.1. There have been
+database format changes since version 2.1 but those changes have
+all been in the schema layer. Because the format of the b-tree
+layer is unchanged since version 2.1.0, the header string still
+says version 2.1.
+</p>
+
+<p>
+After the format string is a 4-byte integer used to determine the
+byte-order of the database. The integer has a value of
+0xdae37528. If this number is expressed as 0xda, 0xe3, 0x75, 0x28, then
+the database is in a big-endian format and all 16 and 32-bit integers
+elsewhere in the b-tree layer are also big-endian. If the number is
+expressed as 0x28, 0x75, 0xe3, and 0xda, then the database is in a
+little-endian format and all other multi-byte numbers in the b-tree
+layer are also little-endian.
+Prior to version 2.6.3, the SQLite engine was only able to read databases
+that used the same byte order as the processor they were running on.
+But beginning with 2.6.3, SQLite can read or write databases in any
+byte order.
+</p>
+
+<p>
+After the byte-order code are six 4-byte integers. Each integer is in the
+byte order determined by the byte-order code. The first integer is the
+page number for the first page of the freelist. If there are no unused
+pages in the database, then this integer is 0. The second integer is
+the number of unused pages in the database. The last 4 integers are
+not used by the b-tree layer. These are the so-called "meta" values that
+are passed up to the schema layer
+and used there for configuration and format version information.
+All bytes of page 1 beyond the meta-value integers are unused
+and are initialized to zero.
+</p>
+
+<p>
+Here is a summary of the information contained on page 1 in the b-tree layer:
+</p>
+
+<ul>
+<li>48 byte header string</li>
+<li>4 byte integer used to determine the byte-order</li>
+<li>4 byte integer which is the first page of the freelist</li>
+<li>4 byte integer which is the number of pages on the freelist</li>
+<li>36 bytes of meta-data arranged as nine 4-byte integers</li>
+<li>928 bytes of unused space</li>
+</ul>
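+
+<p>
+A minimal sketch, in C, of how a reader might validate page 1 and detect the
+database byte order (assuming the first 1024 bytes of the file have already
+been read into an array of unsigned char named "page"):
+</p>
+
+<blockquote><pre>
+static const char zHdr[48] =
+    "** This file contains an SQLite 2.1 database **";
+
+int isBigEndian;
+if( memcmp(page, zHdr, 48)!=0 ){
+  /* not an SQLite 2.x database file */
+}
+if( page[48]==0xda ){
+  isBigEndian = 1;   /* 0xda 0xe3 0x75 0x28: big-endian database */
+}else{
+  isBigEndian = 0;   /* 0x28 0x75 0xe3 0xda: little-endian database */
+}
+/* bytes 52..55: first freelist page;  bytes 56..59: number of freelist pages */
+</pre></blockquote>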
+
+<h4>3.2 Structure Of A Single B-Tree Page</h4>
+
+<p>
+Conceptually, a b-tree page contains N database entries and N+1 pointers
+to other b-tree pages.
+</p>
+
+<blockquote>
+<table border=1 cellspacing=0 cellpadding=5>
+<tr>
+<td align="center">Ptr<br>0</td>
+<td align="center">Entry<br>0</td>
+<td align="center">Ptr<br>1</td>
+<td align="center">Entry<br>1</td>
+<td align="center"><b>...</b></td>
+<td align="center">Ptr<br>N-1</td>
+<td align="center">Entry<br>N-1</td>
+<td align="center">Ptr<br>N</td>
+</tr>
+</table>
+</blockquote>
+
+<p>
+The entries are arranged in increasing order. That is, the key to
+Entry 0 is less than the key to Entry 1, and the key to Entry 1 is
+less than the key of Entry 2, and so forth. The pointers point to
+pages containing additional entries that have keys in between the
+entries on either side. So Ptr 0 points to another b-tree page that
+contains entries that all have keys less than Key 0, and Ptr 1
+points to a b-tree page where all entries have keys greater than Key 0
+but less than Key 1, and so forth.
+</p>
+
+<p>
+Each b-tree page in SQLite consists of a header, zero or more "cells"
+each holding a single entry and pointer, and zero or more "free blocks"
+that represent unused space on the page.
+</p>
+
+<p>
+The header on a b-tree page is the first 8 bytes of the page.
+The header contains the value
+of the right-most pointer (Ptr N) and the byte offset into the page
+of the first cell and the first free block. The pointer is a 32-bit
+value and the offsets are each 16-bit values. We have:
+</p>
+
+<blockquote>
+<table border=1 cellspacing=0 cellpadding=5>
+<tr>
+<td align="center" width=30>0</td>
+<td align="center" width=30>1</td>
+<td align="center" width=30>2</td>
+<td align="center" width=30>3</td>
+<td align="center" width=30>4</td>
+<td align="center" width=30>5</td>
+<td align="center" width=30>6</td>
+<td align="center" width=30>7</td>
+</tr>
+<tr>
+<td align="center" colspan=4>Ptr N</td>
+<td align="center" colspan=2>Cell 0</td>
+<td align="center" colspan=2>Freeblock 0</td>
+</tr>
+</table>
+</blockquote>
+
+<p>
+The 1016 bytes of a b-tree page that come after the header contain
+cells and freeblocks. All 1016 bytes are covered by either a cell
+or a freeblock.
+</p>
+
+<p>
+The cells are connected in a linked list. Cell 0 contains Ptr 0 and
+Entry 0. Bytes 4 and 5 of the header point to Cell 0. Cell 0 then
+points to Cell 1 which contains Ptr 1 and Entry 1. And so forth.
+Cells vary in size. Every cell has a 12-byte header and at least 4
+bytes of payload space. Space is allocated to payload in increments
+of 4 bytes. Thus the minimum size of a cell is 16 bytes and up to
+63 cells can fit on a single page. The size of a cell is always a multiple
+of 4 bytes.
+A cell can have up to 238 bytes of payload space. If
+the payload is more than 238 bytes, then an additional 4 byte page
+number is appended to the cell which is the page number of the first
+overflow page containing the additional payload. The maximum size
+of a cell is thus 254 bytes, meaning that at least 4 cells can fit into
+the 1016 bytes of space available on a b-tree page.
+An average cell is usually around 52 to 100 bytes in size with about
+10 or 20 cells to a page.
+</p>
+
+<p>
+The data layout of a cell looks like this:
+</p>
+
+<blockquote>
+<table border=1 cellspacing=0 cellpadding=5>
+<tr>
+<td align="center" width=20>0</td>
+<td align="center" width=20>1</td>
+<td align="center" width=20>2</td>
+<td align="center" width=20>3</td>
+<td align="center" width=20>4</td>
+<td align="center" width=20>5</td>
+<td align="center" width=20>6</td>
+<td align="center" width=20>7</td>
+<td align="center" width=20>8</td>
+<td align="center" width=20>9</td>
+<td align="center" width=20>10</td>
+<td align="center" width=20>11</td>
+<td align="center" width=100>12 ... 249</td>
+<td align="center" width=20>250</td>
+<td align="center" width=20>251</td>
+<td align="center" width=20>252</td>
+<td align="center" width=20>253</td>
+</tr>
+<tr>
+<td align="center" colspan=4>Ptr</td>
+<td align="center" colspan=2>Keysize<br>(low)</td>
+<td align="center" colspan=2>Next</td>
+<td align="center" colspan=1>Ksz<br>(hi)</td>
+<td align="center" colspan=1>Dsz<br>(hi)</td>
+<td align="center" colspan=2>Datasize<br>(low)</td>
+<td align="center" colspan=1>Payload</td>
+<td align="center" colspan=4>Overflow<br>Pointer</td>
+</tr>
+</table>
+</blockquote>
+
+<p>
+The first four bytes are the pointer. The size of the key is a 24-bit value
+where the upper 8 bits are taken from byte 8 and the lower 16 bits are
+taken from bytes 4 and 5 (or bytes 5 and 4 on little-endian machines.)
+The size of the data is another 24-bit value where the upper 8 bits
+are taken from byte 9 and the lower 16 bits are taken from bytes 10 and
+11 or 11 and 10, depending on the byte order. Bytes 6 and 7 are the
+offset to the next cell in the linked list of all cells on the current
+page. This offset is 0 for the last cell on the page.
+</p>
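+
+<p>
+For concreteness, here is a hedged sketch (assuming a big-endian database
+file and a pointer "cell" of type unsigned char* aimed at the first byte of
+a cell) of how those header fields can be decoded:
+</p>
+
+<blockquote><pre>
+unsigned int leftPtr  = cell[0]*0x1000000u + cell[1]*0x10000 + cell[2]*0x100 + cell[3];
+unsigned int keySize  = cell[8]*0x10000 + cell[4]*0x100 + cell[5];    /* 24 bits */
+unsigned int nextCell = cell[6]*0x100 + cell[7];        /* 0 for the last cell */
+unsigned int dataSize = cell[9]*0x10000 + cell[10]*0x100 + cell[11];  /* 24 bits */
+</pre></blockquote>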
+
+<p>
+The payload itself can be any number of bytes between 1 and 1048576.
+But space to hold the payload is allocated in 4-byte chunks up to
+238 bytes. If the entry contains more than 238 bytes of payload, then
+additional payload data is stored on a linked list of overflow pages.
+A 4-byte page number giving the first page of this linked list is
+appended to the cell.
+</p>
+
+<p>
+Each overflow page begins with a 4-byte value which is the
+page number of the next overflow page in the list. This value is
+0 for the last page in the list. The remaining
+1020 bytes of the overflow page are available for storing payload.
+Note that a full page is allocated regardless of the number of overflow
+bytes stored. Thus, if the total payload for an entry is 239 bytes,
+the first 238 are stored in the cell and the overflow page stores just
+one byte.
+</p>
+
+<p>
+The structure of an overflow page looks like this:
+</p>
+
+<blockquote>
+<table border=1 cellspacing=0 cellpadding=5>
+<tr>
+<td align="center" width=20>0</td>
+<td align="center" width=20>1</td>
+<td align="center" width=20>2</td>
+<td align="center" width=20>3</td>
+<td align="center" width=200>4 ... 1023</td>
+</tr>
+<tr>
+<td align="center" colspan=4>Next Page</td>
+<td align="center" colspan=1>Overflow Data</td>
+</tr>
+</table>
+</blockquote>
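+
+<p>
+A hedged sketch of walking an overflow chain in C (read_page() is a
+hypothetical helper that loads a 1024-byte page by page number; a big-endian
+database file is assumed, and firstOverflowPage comes from the last 4 bytes
+of the cell):
+</p>
+
+<blockquote><pre>
+unsigned int pgno = firstOverflowPage;
+while( pgno!=0 ){
+  unsigned char *pg = read_page(pgno);            /* hypothetical helper */
+  /* pg[4] through pg[1023] hold up to 1020 bytes of additional payload */
+  pgno = pg[0]*0x1000000u + pg[1]*0x10000 + pg[2]*0x100 + pg[3];  /* 0 = end */
+}
+</pre></blockquote>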
+
+<p>
+All space on a b-tree page which is not used by the header or by cells
+is filled by freeblocks. Freeblocks, like cells, are variable in size.
+The size of a freeblock is at least 4 bytes and is always a multiple of
+4 bytes.
+The first 4 bytes contain a header and the remaining bytes
+are unused. The structure of the freeblock is as follows:
+</p>
+
+<blockquote>
+<table border=1 cellspacing=0 cellpadding=5>
+<tr>
+<td align="center" width=20>0</td>
+<td align="center" width=20>1</td>
+<td align="center" width=20>2</td>
+<td align="center" width=20>3</td>
+<td align="center" width=200>4 ... 1015</td>
+</tr>
+<tr>
+<td align="center" colspan=2>Size</td>
+<td align="center" colspan=2>Next</td>
+<td align="center" colspan=1>Unused</td>
+</tr>
+</table>
+</blockquote>
+
+<p>
+Freeblocks are stored in a linked list in increasing order. That is
+to say, the first freeblock occurs at a lower index into the page than
+the second free block, and so forth. The first 2 bytes of the header
+are an integer which is the total number of bytes in the freeblock.
+The second 2 bytes are the index into the page of the next freeblock
+in the list. The last freeblock has a Next value of 0.
+</p>
+
+<p>
+When a new b-tree is created in a database, the root page of the b-tree
+consists of a header and a single 1016 byte freeblock. As entries are
+added, space is carved off of that freeblock and used to make cells.
+When b-tree entries are deleted, the space used by their cells is converted
+into freeblocks. Adjacent freeblocks are merged, but the page can still
+become fragmented. The b-tree code will occasionally try to defragment
+the page by moving all cells to the beginning and constructing a single
+freeblock at the end to take up all remaining space.
+</p>
+
+<h4>3.3 The B-Tree Free Page List</h4>
+
+<p>
+When information is removed from an SQLite database such that one or
+more pages are no longer needed, those pages are added to a list of
+free pages so that they can be reused later when new information is
+added. This subsection describes the structure of this freelist.
+</p>
+
+<p>
+The 32-bit integer beginning at byte-offset 52 in page 1 of the database
+contains the address of the first page in a linked list of free pages.
+If there are no free pages available, this integer has a value of 0.
+The 32-bit integer at byte-offset 56 in page 1 contains the number of
+free pages on the freelist.
+</p>
+
+<p>
+The freelist contains a trunk and many branches. The trunk of
+the freelist is composed of overflow pages. That is to say, each page
+contains a single 32-bit integer at byte offset 0 which
+is the page number of the next page on the freelist trunk.
+The payload area
+of each trunk page is used to record pointers to branch pages.
+The first 32-bit integer in the payload area of a trunk page
+is the number of branch pages to follow (between 0 and 254)
+and each subsequent 32-bit integer is a page number for a branch page.
+The following diagram shows the structure of a trunk freelist page:
+</p>
+
+<blockquote>
+<table border=1 cellspacing=0 cellpadding=5>
+<tr>
+<td align="center" width=20>0</td>
+<td align="center" width=20>1</td>
+<td align="center" width=20>2</td>
+<td align="center" width=20>3</td>
+<td align="center" width=20>4</td>
+<td align="center" width=20>5</td>
+<td align="center" width=20>6</td>
+<td align="center" width=20>7</td>
+<td align="center" width=200>8 ... 1023</td>
+</tr>
+<tr>
+<td align="center" colspan=4>Next trunk page</td>
+<td align="center" colspan=4># of branch pages</td>
+<td align="center" colspan=1>Page numbers for branch pages</td>
+</tr>
+</table>
+</blockquote>
+
+<p>
+It is important to note that only the pages on the trunk of the freelist
+contain pointers to other pages. The branch pages contain no
+data whatsoever. The fact that the branch pages are completely
+blank allows for an important optimization in the paging layer. When
+a branch page is removed from the freelist to be reused, it is not
+necessary to write the original content of that page into the rollback
+journal. The branch page contained no data to begin with, so there is
+no need to restore the page in the event of a rollback. Similarly,
+when a page is no longer needed and is added to the freelist as a branch
+page, it is not necessary to write the content of that page
+into the database file.
+Again, the page contains no real data so it is not necessary to record the
+content of that page. By reducing the amount of disk I/O required,
+these two optimizations allow some database operations
+to go four to six times faster than they would otherwise.
+</p>
+
+<h3>4.0 The Schema Layer</h3>
+
+<p>
+The schema layer implements an SQL database on top of one or more
+b-trees and keeps track of the root page numbers for all b-trees.
+Where the b-tree layer provides only unformatted data storage with
+a unique key, the schema layer allows each entry to contain multiple
+columns. The schema layer also allows indices and non-unique key values.
+</p>
+
+<p>
+The schema layer implements two separate data storage abstractions:
+tables and indices. Each table and each index uses its own b-tree
+but they use the b-tree capabilities in different ways. For a table,
+the b-tree key is a unique 4-byte integer and the b-tree data is the
+content of the table row, encoded so that columns can be separately
+extracted. For indices, the b-tree key varies in size depending on the
+size of the fields being indexed and the b-tree data is empty.
+</p>
+
+<h4>4.1 SQL Table Implementation Details</h4>
+
+<p>Each row of an SQL table is stored in a single b-tree entry.
+The b-tree key is a 4-byte big-endian integer that is the ROWID
+or INTEGER PRIMARY KEY for that table row.
+The key is stored in a big-endian format so
+that keys will sort in numerical order using the memcmp() function.</p>
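+
+<p>
+A small illustration (not actual library code) of why big-endian works here,
+assuming a non-negative ROWID:
+</p>
+
+<blockquote><pre>
+unsigned int rowid = 17;              /* example ROWID */
+unsigned char key[4];
+key[0] = (rowid/0x1000000) % 0x100;   /* most significant byte first ... */
+key[1] = (rowid/0x10000) % 0x100;
+key[2] = (rowid/0x100) % 0x100;
+key[3] = rowid % 0x100;               /* ... so memcmp() orders keys numerically */
+</pre></blockquote>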
+
+<p>The content of a table row is stored in the data portion of
+the corresponding b-tree table. The content is encoded to allow
+individual columns of the row to be extracted as necessary. Assuming
+that the table has N columns, the content is encoded as N+1 offsets
+followed by N column values, as follows:
+</p>
+
+<blockquote>
+<table border=1 cellspacing=0 cellpadding=5>
+<tr>
+<td>offset 0</td>
+<td>offset 1</td>
+<td><b>...</b></td>
+<td>offset N-1</td>
+<td>offset N</td>
+<td>value 0</td>
+<td>value 1</td>
+<td><b>...</b></td>
+<td>value N-1</td>
+</tr>
+</table>
+</blockquote>
+
+<p>
+The offsets can be either 8-bit, 16-bit, or 24-bit integers depending
+on how much data is to be stored. If the total size of the content
+is less than 256 bytes then 8-bit offsets are used. If the total size
+of the b-tree data is less than 65536 then 16-bit offsets are used.
+24-bit offsets are used otherwise. Offsets are always little-endian,
+which means that the least significant byte occurs first.
+</p>
+
+<p>
+Data is stored as a nul-terminated string. An empty string consists
+of just the nul terminator. A NULL value is an empty string with no
+nul-terminator. Thus a NULL value occupies zero bytes and an empty string
+occupies 1 byte.
+</p>
+
+<p>
+Column values are stored in the order that they appear in the CREATE TABLE
+statement. The offsets at the beginning of the record contain the
+byte index of the corresponding column value. Thus, Offset 0 contains
+the byte index for Value 0, Offset 1 contains the byte offset
+of Value 1, and so forth. The number of bytes in a column value can
+always be found by subtracting offsets. This allows NULLs to be
+recovered from the record unambiguously.
+</p>
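+
+<p>
+A hedged sketch of extracting column i from such a record in C, assuming
+8-bit offsets (total content under 256 bytes), a record pointer "rec" of
+type unsigned char*, and offsets measured from the start of the record:
+</p>
+
+<blockquote><pre>
+int start = rec[i];            /* Offset i: where Value i begins */
+int size  = rec[i+1] - rec[i]; /* subtract adjacent offsets */
+if( size==0 ){
+  /* zero bytes: the value is NULL (no nul terminator present) */
+}else{
+  /* size-1 bytes of text followed by a nul terminator */
+  unsigned char *value = rec + start;
+}
+</pre></blockquote>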
+
+<p>
+Most columns are stored in the b-tree data as described above.
+The one exception is a column that has type INTEGER PRIMARY KEY.
+INTEGER PRIMARY KEY columns correspond to the 4-byte b-tree key.
+When an SQL statement attempts to read the INTEGER PRIMARY KEY,
+the 4-byte b-tree key is read rather than information out of the
+b-tree data. But there is still an Offset associated with the
+INTEGER PRIMARY KEY, just like any other column. However, the Value
+associated with that offset is always NULL.
+</p>
+
+<h4>4.2 SQL Index Implementation Details</h4>
+
+<p>
+SQL indices are implemented using a b-tree in which the key is used
+but the data is always empty. The purpose of an index is to map
+one or more column values into the ROWID for the table entry that
+contains those column values.
+</p>
+
+<p>
+Each b-tree key in an index consists of one or more column values followed
+by a 4-byte ROWID. Each column value is nul-terminated (even NULL values)
+and begins with a single character that indicates the datatype for that
+column value. Only three datatypes are supported: NULL, Number, and
+Text. NULL values are encoded as the character 'a' followed by the
+nul terminator. Numbers are encoded as the character 'b' followed by
+a string that has been crafted so that sorting the string using memcmp()
+will sort the corresponding numbers in numerical order. (See the
+sqliteRealToSortable() function in util.c of the SQLite sources for
+additional information on this encoding.) Numbers are also nul-terminated.
+Text values consist of the character 'c' followed by a copy of the
+text string and a nul-terminator. These encoding rules result in
+NULLs being sorted first, followed by numerical values in numerical
+order, followed by text values in lexicographical order.
+</p>
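+
+<p>
+As a rough illustration (a sketch, not the exact library encoding), an index
+key for a single TEXT column holding "abc", belonging to the row with
+ROWID 17, would be laid out something like this:
+</p>
+
+<blockquote><pre>
+unsigned char key[] = {
+  'c', 'a', 'b', 'c', 0x00,    /* 'c' marker, the text, nul terminator */
+  0x00, 0x00, 0x00, 0x11       /* 4-byte ROWID (17), shown big-endian here */
+};
+/* A NULL column value would instead be encoded as 'a' followed by 0x00. */
+</pre></blockquote>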
+
+<h4>4.3 SQL Schema Storage And Root B-Tree Page Numbers</h4>
+
+<p>
+The database schema is stored in the database in a special table named
+"sqlite_master", which always has a root b-tree page number of 2.
+This table contains the original CREATE TABLE,
+CREATE INDEX, CREATE VIEW, and CREATE TRIGGER statements used to define
+the database to begin with. Whenever an SQLite database is opened,
+the sqlite_master table is scanned from beginning to end and
+all the original CREATE statements are played back through the parser
+in order to reconstruct an in-memory representation of the database
+schema for use in subsequent command parsing. For each CREATE TABLE
+and CREATE INDEX statement, the root page number for the corresponding
+b-tree is also recorded in the sqlite_master table so that SQLite will
+know where to look for the appropriate b-tree.
+</p>
+
+<p>
+SQLite users can query the sqlite_master table just like any other table
+in the database. But the sqlite_master table cannot be directly written.
+The sqlite_master table is automatically updated in response to CREATE
+and DROP statements but it cannot be changed using INSERT, UPDATE, or
+DELETE statements as that would risk corrupting the database.
+</p>
+
+<p>
+SQLite stores temporary tables and indices in a separate
+file from the main database file. The temporary table database file
+has the same structure as the main database file. The schema table
+for the temporary tables is stored on page 2 just as in the main
+database. But the schema table for the temporary database is named
+"sqlite_temp_master" instead of "sqlite_master". Other than the
+name change, it works exactly the same.
+</p>
+
+<h4>4.4 Schema Version Numbering And Other Meta-Information</h4>
+
+<p>
+The nine 32-bit integers that are stored beginning at byte offset
+60 of Page 1 in the b-tree layer are passed up into the schema layer
+and used for versioning and configuration information. The meaning
+of the first four integers is shown below. The other five are currently
+unused.
+</p>
+
+<ol>
+<li>The schema version number</li>
+<li>The format version number</li>
+<li>The recommended pager cache size</li>
+<li>The safety level</li>
+</ol>
+
+<p>
+The first meta-value, the schema version number, is used to detect when
+the schema of the database is changed by a CREATE or DROP statement.
+Recall that when a database is first opened the sqlite_master table is
+scanned and an internal representation of the tables, indices, views,
+and triggers for the database is built in memory. This internal
+representation is used for all subsequent SQL command parsing and
+execution. But what if another process were to change the schema
+by adding or removing a table, index, view, or trigger? If the original
+process were to continue using the old schema, it could potentially
+corrupt the database by writing to a table that no longer exists.
+To avoid this problem, the schema version number is changed whenever
+a CREATE or DROP statement is executed. Before each command is
+executed, the current schema version number for the database file
+is compared against the schema version number from when the sqlite_master
+table was last read. If those numbers are different, the internal
+schema representation is erased and the sqlite_master table is reread
+to reconstruct the internal schema representation.
+(Calls to sqlite_exec() generally return SQLITE_SCHEMA when this happens.)
+</p>
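+
+<p>
+In outline (a hedged sketch with illustrative names, not the actual SQLite
+internals), the check works like this:
+</p>
+
+<blockquote><pre>
+int fileCookie = read_schema_version(db);    /* hypothetical helper: reads */
+                                             /* the first meta-value        */
+if( fileCookie!=cachedSchemaVersion ){
+  discard_in_memory_schema();                /* hypothetical helper */
+  reread_sqlite_master();                    /* hypothetical helper */
+  /* sqlite_exec() reports SQLITE_SCHEMA to the caller in this case */
+}
+</pre></blockquote>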
+
+<p>
+The second meta-value is the schema format version number. This
+number tells what version of the schema layer should be used to
+interpret the file. There have been changes to the schema layer
+over time and this number is used to detect when an older database
+file is being processed by a newer version of the library.
+As of this writing (SQLite version 2.7.0) the current format version
+is "4".
+</p>
+
+<p>
+The third meta-value is the recommended pager cache size as set
+by the DEFAULT_CACHE_SIZE pragma. If the value is positive it
+means that synchronous behavior is enabled (via the DEFAULT_SYNCHRONOUS
+pragma) and if negative it means that synchronous behavior is
+disabled.
+</p>
+
+<p>
+The fourth meta-value is the safety level, added in version 2.8.0.
+A value of 1 corresponds to a SYNCHRONOUS setting of OFF. In other
+words, SQLite does not pause to wait for journal data to reach the disk
+surface before overwriting pages of the database. A value of 2 corresponds
+to a SYNCHRONOUS setting of NORMAL. A value of 3 corresponds to a
+SYNCHRONOUS setting of FULL. If the value is 0, that means it has not
+been initialized so the default synchronous setting of NORMAL is used.
+</p>
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/formatchng.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/formatchng.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,272 @@
+#
+# Run this Tcl script to generate the formatchng.html file.
+#
+set rcsid {$Id: formatchng.tcl,v 1.19 2006/08/12 14:38:47 drh Exp $ }
+source common.tcl
+header {File Format Changes in SQLite}
+puts {
+<h2>File Format Changes in SQLite</h2>
+
+<p>
+Every effort is made to keep SQLite fully backwards compatible from
+one release to the next. Rarely, however, some
+enhancements or bug fixes may require a change to
+the underlying file format. When this happens, you
+must convert the contents of your
+databases into a portable ASCII representation using the old version
+of the library and then reload the data using the new version of the
+library.
+</p>
+
+<p>
+You can tell if you should reload your databases by comparing the
+version numbers of the old and new libraries. If the first digit
+of the version number is different, then a reload of the database will
+be required. If the second digit changes, newer versions of SQLite
+will be able to read and write older database files, but older versions
+of the library may have difficulty reading or writing newer database
+files.
+For example, upgrading from
+version 2.8.14 to 3.0.0 requires a reload. Going from
+version 3.0.8 to 3.1.0 is backwards compatible but not necessarily
+forwards compatible.
+</p>
+
+<p>
+The following table summarizes the SQLite file format changes that have
+occurred since version 1.0.0:
+</p>
+
+<blockquote>
+<table border=2 cellpadding=5>
+<tr>
+ <th>Version Change</th>
+ <th>Approx. Date</th>
+ <th>Description Of File Format Change</th>
+</tr>
+<tr>
+ <td valign="top">1.0.32 to 2.0.0</td>
+ <td valign="top">2001-Sep-20</td>
+ <td>Version 1.0.X of SQLite used the GDBM library as its backend
+ interface to the disk. Beginning in version 2.0.0, GDBM was replaced
+ by a custom B-Tree library written especially for SQLite. The new
+ B-Tree backend is twice as fast as GDBM, supports atomic commits and
+ rollback, and stores an entire database in a single disk file instead
+  of using a separate file for each table as GDBM does. The two
+ file formats are not even remotely similar.</td>
+</tr>
+<tr>
+ <td valign="top">2.0.8 to 2.1.0</td>
+ <td valign="top">2001-Nov-12</td>
+ <td>The same basic B-Tree format is used but the details of the
+ index keys were changed in order to provide better query
+ optimization opportunities. Some of the headers were also changed in order
+ to increase the maximum size of a row from 64KB to 24MB.<p>
+
+ This change is an exception to the version number rule described above
+  in that it is neither forwards nor backwards compatible. A complete
+ reload of the database is required. This is the only exception.</td>
+</tr>
+<tr>
+ <td valign="top">2.1.7 to 2.2.0</td>
+ <td valign="top">2001-Dec-21</td>
+ <td>Beginning with version 2.2.0, SQLite no longer builds an index for
+ an INTEGER PRIMARY KEY column. Instead, it uses that column as the actual
+ B-Tree key for the main table.<p>Version 2.2.0 and later of the library
+ will automatically detect when it is reading a 2.1.x database and will
+ disable the new INTEGER PRIMARY KEY feature. In other words, version
+ 2.2.x is backwards compatible to version 2.1.x. But version 2.1.x is not
+ forward compatible with version 2.2.x. If you try to open
+ a 2.2.x database with an older 2.1.x library and that database contains
+ an INTEGER PRIMARY KEY, you will likely get a coredump. If the database
+ schema does not contain any INTEGER PRIMARY KEYs, then the version 2.1.x
+ and version 2.2.x database files will be identical and completely
+ interchangeable.</p>
+</tr>
+<tr>
+ <td valign="top">2.2.5 to 2.3.0</td>
+ <td valign="top">2002-Jan-30</td>
+ <td>Beginning with version 2.3.0, SQLite supports some additional syntax
+ (the "ON CONFLICT" clause) in the CREATE TABLE and CREATE INDEX statements
+ that are stored in the SQLITE_MASTER table. If you create a database that
+ contains this new syntax, then try to read that database using version 2.2.5
+ or earlier, the parser will not understand the new syntax and you will get
+ an error. Otherwise, databases for 2.2.x and 2.3.x are interchangeable.</td>
+</tr>
+<tr>
+ <td valign="top">2.3.3 to 2.4.0</td>
+ <td valign="top">2002-Mar-10</td>
+ <td>Beginning with version 2.4.0, SQLite added support for views.
+ Information about views is stored in the SQLITE_MASTER table. If an older
+ version of SQLite attempts to read a database that contains VIEW information
+ in the SQLITE_MASTER table, the parser will not understand the new syntax
+ and initialization will fail. Also, the
+ way SQLite keeps track of unused disk blocks in the database file
+ changed slightly.
+ If an older version of SQLite attempts to write a database that
+ was previously written by version 2.4.0 or later, then it may leak disk
+ blocks.</td>
+</tr>
+<tr>
+ <td valign="top">2.4.12 to 2.5.0</td>
+ <td valign="top">2002-Jun-17</td>
+ <td>Beginning with version 2.5.0, SQLite added support for triggers.
+ Information about triggers is stored in the SQLITE_MASTER table. If an older
+ version of SQLite attempts to read a database that contains a CREATE TRIGGER
+ in the SQLITE_MASTER table, the parser will not understand the new syntax
+ and initialization will fail.
+ </td>
+</tr>
+<tr>
+ <td valign="top">2.5.6 to 2.6.0</td>
+ <td valign="top">2002-July-17</td>
+ <td>A design flaw in the layout of indices required a file format change
+ to correct. This change appeared in version 2.6.0.<p>
+
+ If you use version 2.6.0 or later of the library to open a database file
+ that was originally created by version 2.5.6 or earlier, an attempt to
+ rebuild the database into the new format will occur automatically.
+ This can take some time for a large database. (Allow 1 or 2 seconds
+ per megabyte of database under Unix - longer under Windows.) This format
+ conversion is irreversible. It is <strong>strongly</strong> suggested
+ that you make a backup copy of older database files prior to opening them
+ with version 2.6.0 or later of the library, in case there are errors in
+ the format conversion logic.<p>
+
+ Version 2.6.0 or later of the library cannot open read-only database
+ files from version 2.5.6 or earlier, since read-only files cannot be
+ upgraded to the new format.</p>
+ </td>
+</tr>
+<tr>
+ <td valign="top">2.6.3 to 2.7.0</td>
+ <td valign="top">2002-Aug-13</td>
+ <td><p>Beginning with version 2.7.0, SQLite understands two different
+ datatypes: text and numeric. Text data sorts in memcmp() order.
+ Numeric data sorts in numerical order if it looks like a number,
+ or in memcmp() order if it does not.</p>
+
+ <p>When SQLite version 2.7.0 or later opens a 2.6.3 or earlier database,
+ it assumes all columns of all tables have type "numeric". For 2.7.0
+ and later databases, columns have type "text" if their datatype
+ string contains the substrings "char" or "clob" or "blob" or "text".
+ Otherwise they are of type "numeric".</p>
+
+ <p>Because "text" columns have a different sort order from numeric,
+ indices on "text" columns occur in a different order for version
+  2.7.0 and later databases. Hence version 2.6.3 and earlier of SQLite
+ will be unable to read a 2.7.0 or later database. But version 2.7.0
+ and later of SQLite will read earlier databases.</p>
+ </td>
+</tr>
+<tr>
+ <td valign="top">2.7.6 to 2.8.0</td>
+ <td valign="top">2003-Feb-14</td>
+ <td><p>Version 2.8.0 introduces a change to the format of the rollback
+ journal file. The main database file format is unchanged. Versions
+ 2.7.6 and earlier can read and write 2.8.0 databases and vice versa.
+ Version 2.8.0 can rollback a transaction that was started by version
+ 2.7.6 and earlier. But version 2.7.6 and earlier cannot rollback a
+ transaction started by version 2.8.0 or later.</p>
+
+ <p>The only time this would ever be an issue is when you have a program
+ using version 2.8.0 or later that crashes with an incomplete
+ transaction, then you try to examine the database using version 2.7.6 or
+ earlier. The 2.7.6 code will not be able to read the journal file
+ and thus will not be able to rollback the incomplete transaction
+ to restore the database.</p>
+ </td>
+</tr>
+<tr>
+ <td valign="top">2.8.14 to 3.0.0</td>
+ <td valign="top">2004-Jun-18</td>
+ <td><p>Version 3.0.0 is a major upgrade for SQLite that incorporates
+ support for UTF-16, BLOBs, and a more compact encoding that results
+ in database files that are typically 25% to 50% smaller. The new file
+ format is very different and is completely incompatible with the
+ version 2 file format.</p>
+ </td>
+</tr>
+<tr>
+ <td valign="top">3.0.8 to 3.1.0</td>
+ <td valign="top">2005-Jan-21</td>
+ <td><p>Version 3.1.0 adds support for
+ <a href="pragma.html#pragma_auto_vacuum">autovacuum mode</a>.
+ Prior versions of SQLite will be able to read an autovacuumed
+  database but will not be able to write it. If autovacuum is disabled
+ (which is the default condition)
+ then databases are fully forwards and backwards compatible.</p>
+ </td>
+</tr>
+<tr>
+ <td valign="top">3.1.6 to 3.2.0</td>
+ <td valign="top">2005-Mar-19</td>
+ <td><p>Version 3.2.0 adds support for the
+ <a href="lang_altertable.html">ALTER TABLE ADD COLUMN</a>
+ command. A database that has been modified by this command can
+ not be read by a version of SQLite prior to 3.1.4. Running
+ <a href="lang_vacuum.html">VACUUM</a>
+ after the ALTER TABLE
+ restores the database to a format such that it can be read by earlier
+ SQLite versions.</p>
+ </td>
+</tr>
+<tr>
+ <td valign="top">3.2.8 to 3.3.0</td>
+ <td valign="top">2006-Jan-10</td>
+ <td><p>Version 3.3.0 adds support for descending indices and
+ uses a new encoding for boolean values that requires
+ less disk space. Version 3.3.0 can read and write database
+ files created by prior versions of SQLite. But prior versions
+ of SQLite will not be able to read or write databases created
+ by Version 3.3.0</p>
+  <p>If you need backwards and forwards compatibility, you can
+ compile with -DSQLITE_DEFAULT_FILE_FORMAT=1. Or at runtime
+ you can say "PRAGMA legacy_file_format=ON" prior to creating
+ a new database file</p>
+ <p>Once a database file is created, its format is fixed. So
+ a database file created by SQLite 3.2.8 and merely modified
+  by version 3.3.0 or later will retain the old format. The exception
+  is the VACUUM command, which recreates the database: running VACUUM
+  under version 3.3.0 or later will change the file format to the latest
+ edition.</p>
+ </td>
+</tr>
+<tr>
+ <td valign="top">3.3.6 to 3.3.7</td>
+ <td valign="top">2006-Aug-12</td>
+ <td><p>The previous file format change has caused so much
+ grief that the default behavior has been changed back to
+  the original file format. This means that the DESC option on
+  indices is ignored by default and that the more efficient encoding
+ of boolean values is not used. In that way, older versions
+ of SQLite can read and write databases created by newer
+ versions. If the new features are desired, they can be
+ enabled using pragma: "PRAGMA legacy_file_format=OFF".</p>
+ <p>To be clear: both old and new file formats continue to
+ be understood and continue to work. But the old file format
+ is used by default instead of the new. This might change
+ again in some future release - we may go back to generating
+ the new file format by default - but probably not until
+ all users have upgraded to a version of SQLite that will
+ understand the new file format. That might take several
+ years.</p></td>
+</tr>
+</table>
+</blockquote>
+
+<p>
+To perform a database reload, have ready versions of the
+<b>sqlite</b> command-line utility for both the old and new
+version of SQLite. Call these two executables "<b>sqlite-old</b>"
+and "<b>sqlite-new</b>". Suppose the name of your old database
+is "<b>old.db</b>" and you want to create a new database with
+the same information named "<b>new.db</b>". The command to do
+this is as follows:
+</p>
+
+<blockquote>
+ sqlite-old old.db .dump | sqlite-new new.db
+</blockquote>
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/fullscanb.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/index-ex1-x-b.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/index.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/index.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,114 @@
+#!/usr/bin/tclsh
+source common.tcl
+header {SQLite home page}
+puts {
+<table width="100%" border="0" cellspacing="5">
+<tr>
+<td width="50%" valign="top">
+<h2>About SQLite</h2>
+<p>
+ <table align="right" border="0"><tr><td>
+ <a href="http://osdir.com/Article6677.phtml">
+ <img src="2005osaward.gif"></a>
+ </td></tr></table>
+SQLite is a small
+C library that implements a self-contained, embeddable,
+zero-configuration
+SQL database engine.
+Features include:
+</p>
+
+<p><ul>
+<li>Transactions are atomic, consistent, isolated, and durable (ACID)
+ even after system crashes and power failures.
+<li>Zero-configuration - no setup or administration needed.</li>
+<li>Implements most of SQL92.
+ (<a href="omitted.html">Features not supported</a>)</li>
+<li>A complete database is stored in a single disk file.</li>
+<li>Database files can be freely shared between machines with
+ different byte orders.</li>
+<li>Supports databases up to 2 terabytes
+ (2<sup><small>41</small></sup> bytes) in size.</li>
+<li>Sizes of strings and BLOBs limited only by available memory.</li>
+<li>Small code footprint: less than 250KiB fully configured or less
+ than 150KiB with optional features omitted.</li>
+<li><a href="speed.html">Faster</a> than popular client/server database
+ engines for most common operations.</li>
+<li>Simple, easy to use <a href="capi3.html">API</a>.</li>
+<li><a href="tclsqlite.html">TCL bindings</a> included.
+ Bindings for many other languages
+ <a href="http://www.sqlite.org/cvstrac/wiki?p=SqliteWrappers">
+ available separately.</a></li>
+<li>Well-commented source code with over 95% test coverage.</li>
+<li>Self-contained: no external dependencies.</li>
+<li>Sources are in the <a href="copyright.html">public domain</a>.
+ Use for any purpose.</li>
+</ul>
+</p>
+
+<p>
+The SQLite distribution comes with a standalone command-line
+access program (<a href="sqlite.html">sqlite</a>) that can
+be used to administer an SQLite database and which serves as
+an example of how to use the SQLite library.
+</p>
+
+</td>
+<td width="1" bgcolor="#80a796"></td>
+<td valign="top" width="50%">
+<h2>News</h2>
+}
+
+proc newsitem {date title text} {
+ puts "<h3>$date - $title</h3>"
+ regsub -all "\n( *\n)+" $text "</p>\n\n<p>" txt
+ puts "<p>$txt</p>"
+ puts "<hr width=\"50%\">"
+}
+
+newsitem {2006-Oct-9} {Version 3.3.8} {
+ Version 3.3.8 adds support for full-text search using the
+ <a href="http://www.sqlite.org/cvstrac/wiki?p=FtsOne">FTS1
+ module.</a> There are also minor bug fixes. Upgrade only if
+ you want to try out the new full-text search capabilities or if
+ you are having problems with 3.3.7.
+}
+
+newsitem {2006-Aug-12} {Version 3.3.7} {
+ Version 3.3.7 includes support for loadable extensions and virtual
+ tables. But both features are still considered "beta" and their
+ APIs are subject to change in a future release. This release is
+ mostly to make available the minor bug fixes that have accumulated
+ since 3.3.6. Upgrading is not necessary. Do so only if you encounter
+ one of the obscure bugs that have been fixed or if you want to try
+ out the new features.
+}
+
+newsitem {2006-Jun-19} {New Book About SQLite} {
+ <a href="http://www.apress.com/book/bookDisplay.html?bID=10130">
+ <i>The Definitive Guide to SQLite</i></a>, a new book by
+  <a href="http://www.mikesclutter.com">Mike Owens</a>,
+  is now available from <a href="http://www.apress.com">Apress</a>.
+  The book covers the latest SQLite internals as well as
+ the native C interface and bindings for PHP, Python,
+ Perl, Ruby, Tcl, and Java. Recommended.
+}
+
+newsitem {2006-Jun-6} {Version 3.3.6} {
+  Changes include improved tolerance for Windows virus scanners
+ and faster :memory: databases. There are also fixes for several
+ obscure bugs. Upgrade if you are having problems.
+}
+
+newsitem {2006-Apr-5} {Version 3.3.5} {
+ This release fixes many minor bugs and documentation typos and
+ provides some minor new features and performance enhancements.
+ Upgrade only if you are having problems or need one of the new features.
+}
+
+
+puts {
+<p align="right"><a href="oldnews.html">Old news...</a></p>
+</td></tr></table>
+}
+footer {$Id: index.tcl,v 1.143 2006/10/08 18:56:57 drh Exp $}
Added: freeswitch/trunk/libs/sqlite/www/indirect1b1.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/lang.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/lang.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,2054 @@
+#
+# Run this Tcl script to generate the lang-*.html files.
+#
+set rcsid {$Id: lang.tcl,v 1.118 2006/09/23 20:46:23 drh Exp $}
+source common.tcl
+
+if {[llength $argv]>0} {
+ set outputdir [lindex $argv 0]
+} else {
+ set outputdir ""
+}
+
+header {Query Language Understood by SQLite}
+puts {
+<h1>SQL As Understood By SQLite</h1>
+
+<p>The SQLite library understands most of the standard SQL
+language. But it does <a href="omitted.html">omit some features</a>
+while at the same time
+adding a few features of its own. This document attempts to
+describe precisely what parts of the SQL language SQLite does
+and does not support. A list of <a href="lang_keywords.html">keywords</a> is
+also provided.</p>
+
+<p>In all of the syntax diagrams that follow, literal text is shown in
+bold blue. Non-terminal symbols are shown in italic red. Operators
+that are part of the syntactic markup itself are shown in black roman.</p>
+
+<p>This document is just an overview of the SQL syntax implemented
+by SQLite. Many low-level productions are omitted. For detailed information
+on the language that SQLite understands, refer to the source code and
+the grammar file "parse.y".</p>
+
+
+<p>SQLite implements the following syntax:</p>
+<p><ul>
+}
+
+proc slink {label} {
+ if {[string match *.html $label]} {
+ return $label
+ }
+ if {[string length $::outputdir]==0} {
+ return #$label
+ } else {
+ return lang_$label.html
+ }
+}
+
+foreach {section} [lsort -index 0 -dictionary {
+ {{CREATE TABLE} createtable}
+ {{CREATE VIRTUAL TABLE} createvtab}
+ {{CREATE INDEX} createindex}
+ {VACUUM vacuum}
+ {{DROP TABLE} droptable}
+ {{DROP INDEX} dropindex}
+ {INSERT insert}
+ {REPLACE replace}
+ {DELETE delete}
+ {UPDATE update}
+ {SELECT select}
+ {comment comment}
+ {COPY copy}
+ {EXPLAIN explain}
+ {expression expr}
+ {{BEGIN TRANSACTION} transaction}
+ {{COMMIT TRANSACTION} transaction}
+ {{END TRANSACTION} transaction}
+ {{ROLLBACK TRANSACTION} transaction}
+ {PRAGMA pragma.html}
+ {{ON CONFLICT clause} conflict}
+ {{CREATE VIEW} createview}
+ {{DROP VIEW} dropview}
+ {{CREATE TRIGGER} createtrigger}
+ {{DROP TRIGGER} droptrigger}
+ {{ATTACH DATABASE} attach}
+ {{DETACH DATABASE} detach}
+ {REINDEX reindex}
+ {{ALTER TABLE} altertable}
+ {{ANALYZE} analyze}
+}] {
+ foreach {s_title s_tag} $section {}
+ puts "<li><a href=\"[slink $s_tag]\">$s_title</a></li>"
+}
+puts {</ul></p>
+
+<p>Details on the implementation of each command are provided in
+the sequel.</p>
+}
+
+proc Operator {name} {
+ return "<font color=\"#2c2cf0\"><big>$name</big></font>"
+}
+proc Nonterminal {name} {
+ return "<i><font color=\"#ff3434\">$name</font></i>"
+}
+proc Keyword {name} {
+ return "<font color=\"#2c2cf0\">$name</font>"
+}
+proc Example {text} {
+ puts "<blockquote><pre>$text</pre></blockquote>"
+}
+
+proc Section {name label} {
+ global outputdir
+
+ if {[string length $outputdir]!=0} {
+ if {[llength [info commands puts_standard]]>0} {
+ footer $::rcsid
+ }
+
+ if {[string length $label]>0} {
+ rename puts puts_standard
+ proc puts {str} {
+ regsub -all {href="#([a-z]+)"} $str {href="lang_\1.html"} str
+ puts_standard $::section_file $str
+ }
+ rename footer footer_standard
+ proc footer {id} {
+ footer_standard $id
+ rename footer ""
+ rename puts ""
+ rename puts_standard puts
+ rename footer_standard footer
+ }
+ set ::section_file [open [file join $outputdir lang_$label.html] w]
+ header "Query Language Understood by SQLite: $name"
+ puts "<h1>SQL As Understood By SQLite</h1>"
+ puts "<a href=\"lang.html\">\[Contents\]</a>"
+ puts "<h2>$name</h2>"
+ return
+ }
+ }
+ puts "\n<hr />"
+ if {$label!=""} {
+ puts "<a name=\"$label\"></a>"
+ }
+ puts "<h1>$name</h1>\n"
+}
+
+Section {ALTER TABLE} altertable
+
+Syntax {sql-statement} {
+ALTER TABLE [<database-name> .] <table-name> <alteration>
+} {alteration} {
+RENAME TO <new-table-name>
+} {alteration} {
+ADD [COLUMN] <column-def>
+}
+
+puts {
+<p>SQLite's version of the ALTER TABLE command allows the user to
+rename or add a new column to an existing table. It is not possible
+to remove a column from a table.
+</p>
+
+<p>The RENAME TO syntax is used to rename the table identified by
+<i>[database-name.]table-name</i> to <i>new-table-name</i>. This command
+cannot be used to move a table between attached databases, only to rename
+a table within the same database.</p>
+
+<p>If the table being renamed has triggers or indices, then these remain
+attached to the table after it has been renamed. However, if there are
+any view definitions, or statements executed by triggers that refer to
+the table being renamed, these are not automatically modified to use the new
+table name. If this is required, the triggers or view definitions must be
+dropped and recreated to use the new table name by hand.
+</p>
+
+<p>The ADD [COLUMN] syntax is used to add a new column to an existing table.
+The new column is always appended to the end of the list of existing columns.
+<i>Column-def</i> may take any of the forms permissible in a CREATE TABLE
+statement, with the following restrictions:
+<ul>
+<li>The column may not have a PRIMARY KEY or UNIQUE constraint.</li>
+<li>The column may not have a default value of CURRENT_TIME, CURRENT_DATE
+ or CURRENT_TIMESTAMP.</li>
+<li>If a NOT NULL constraint is specified, then the column must have a
+ default value other than NULL.</li>
+</ul>
+
+<p>The execution time of the ALTER TABLE command is independent of
+the amount of data in the table. The ALTER TABLE command runs as quickly
+on a table with 10 million rows as it does on a table with 1 row.
+</p>
+
+<p>After ADD COLUMN has been run on a database, that database will not
+be readable by SQLite version 3.1.3 and earlier until the database
+is <a href="lang_vacuum.html">VACUUM</a>ed.</p>
+}
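+
+puts {
+<p>For illustration, the following hypothetical statements (the table and
+column names are invented for this sketch) rename a table and then append
+a new column to it:</p>
+}
+Example {
+ALTER TABLE contacts RENAME TO customers;
+ALTER TABLE customers ADD COLUMN email TEXT DEFAULT '';
+}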
+
+Section {ANALYZE} analyze
+
+Syntax {sql-statement} {
+ ANALYZE
+}
+Syntax {sql-statement} {
+ ANALYZE <database-name>
+}
+Syntax {sql-statement} {
+ ANALYZE [<database-name> .] <table-name>
+}
+
+puts {
+<p>The ANALYZE command gathers statistics about indices and stores them
+in special tables in the database where the query optimizer can use
+them to help make better index choices.
+If no arguments are given, all indices in all attached databases are
+analyzed. If a database name is given as the argument, all indices
+in that one database are analyzed. If the argument is a table name,
+then only indices associated with that one table are analyzed.</p>
+
+<p>The initial implementation stores all statistics in a single
+table named <b>sqlite_stat1</b>. Future enhancements may create
+additional tables with the same name pattern except with the "1"
+changed to a different digit. The <b>sqlite_stat1</b> table cannot
+be <a href="#droptable">DROP</a>ped,
+but all the content can be <a href="#delete">DELETE</a>d which has the
+same effect.</p>
+}
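+
+puts {
+<p>For example, assuming an attached database named "aux1" that contains a
+table named "log" (both names are hypothetical), any of the following forms
+may be used:</p>
+}
+Example {
+ANALYZE;
+ANALYZE aux1;
+ANALYZE aux1.log;
+}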
+
+Section {ATTACH DATABASE} attach
+
+Syntax {sql-statement} {
+ATTACH [DATABASE] <database-filename> AS <database-name>
+}
+
+puts {
+<p>The ATTACH DATABASE statement adds another database
+file to the current database connection. If the filename contains
+punctuation characters it must be quoted. The names 'main' and
+'temp' refer to the main database and the database used for
+temporary tables. These cannot be detached. Attached databases
+are removed using the <a href="#detach">DETACH DATABASE</a>
+statement.</p>
+
+<p>You can read from and write to an attached database and you
+can modify the schema of the attached database. This is a new
+feature of SQLite version 3.0. In SQLite 2.8, schema changes
+to attached databases were not allowed.</p>
+
+<p>You cannot create a new table with the same name as a table in
+an attached database, but you can attach a database which contains
+tables whose names are duplicates of tables in the main database. It is
+also permissible to attach the same database file multiple times.</p>
+
+<p>Tables in an attached database can be referred to using the syntax
+<i>database-name.table-name</i>. If an attached table doesn't have
+a duplicate table name in the main database, it doesn't require a
+database name prefix. When a database is attached, all of its
+tables which don't have duplicate names become the default table
+of that name. Any tables of that name attached afterwards require the table
+prefix. If the default table of a given name is detached, then
+the last table of that name attached becomes the new default.</p>
+
+<p>
+Transactions involving multiple attached databases are atomic,
+assuming that the main database is not ":memory:". If the main
+database is ":memory:" then
+transactions continue to be atomic within each individual
+database file. But if the host computer crashes in the middle
+of a COMMIT where two or more database files are updated,
+some of those files might get the changes where others
+might not.
+Atomic commit of attached databases is a new feature of SQLite version 3.0.
+In SQLite version 2.8, all commits to attached databases behaved as if
+the main database were ":memory:".
+</p>
+
+<p>There is a compile-time limit of 10 attached database files.</p>
+}
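+
+puts {
+<p>A short example, using a made-up file name and database name:</p>
+}
+Example {
+ATTACH DATABASE 'archive.db' AS archive;
+SELECT count(*) FROM archive.orders;
+}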
+
+
+Section {BEGIN TRANSACTION} transaction
+
+Syntax {sql-statement} {
+BEGIN [ DEFERRED | IMMEDIATE | EXCLUSIVE ] [TRANSACTION [<name>]]
+}
+Syntax {sql-statement} {
+END [TRANSACTION [<name>]]
+}
+Syntax {sql-statement} {
+COMMIT [TRANSACTION [<name>]]
+}
+Syntax {sql-statement} {
+ROLLBACK [TRANSACTION [<name>]]
+}
+
+puts {
+<p>Beginning in version 2.0, SQLite supports transactions with
+rollback and atomic commit.</p>
+
+<p>The optional transaction name is ignored. SQLite currently
+does not allow nested transactions.</p>
+
+<p>
+No changes can be made to the database except within a transaction.
+Any command that changes the database (basically, any SQL command
+other than SELECT) will automatically start a transaction if
+one is not already in effect. Automatically started transactions
+are committed at the conclusion of the command.
+</p>
+
+<p>
+Transactions can be started manually using the BEGIN
+command. Such transactions usually persist until the next
+COMMIT or ROLLBACK command. But a transaction will also
+ROLLBACK if the database is closed or if an error occurs
+and the ROLLBACK conflict resolution algorithm is specified.
+See the documentation on the <a href="#conflict">ON CONFLICT</a>
+clause for additional information about the ROLLBACK
+conflict resolution algorithm.
+</p>
+
+<p>
+In SQLite version 3.0.8 and later, transactions can be deferred,
+immediate, or exclusive. Deferred means that no locks are acquired
+on the database until the database is first accessed. Thus with a
+deferred transaction, the BEGIN statement itself does nothing. Locks
+are not acquired until the first read or write operation. The first read
+operation against a database creates a SHARED lock and the first
+write operation creates a RESERVED lock. Because the acquisition of
+locks is deferred until they are needed, it is possible that another
+thread or process could create a separate transaction and write to
+the database after the BEGIN on the current thread has executed.
+If the transaction is immediate, then RESERVED locks
+are acquired on all databases as soon as the BEGIN command is
+executed, without waiting for the
+database to be used. After a BEGIN IMMEDIATE, you are guaranteed that
+no other thread or process will be able to write to the database or
+do a BEGIN IMMEDIATE or BEGIN EXCLUSIVE. Other processes can continue
+to read from the database, however. An exclusive transaction causes
+EXCLUSIVE locks to be acquired on all databases. After a BEGIN
+EXCLUSIVE, you are guaranteed that no other thread or process will
+be able to read or write the database until the transaction is
+complete.
+</p>
+
+<p>
+A description of the meaning of SHARED, RESERVED, and EXCLUSIVE locks
+is available <a href="lockingv3.html">separately</a>.
+</p>
+
+<p>
+The default behavior for SQLite version 3.0.8 is a
+deferred transaction. For SQLite version 3.0.0 through 3.0.7,
+deferred is the only kind of transaction available. For SQLite
+version 2.8 and earlier, all transactions are exclusive.
+</p>
+
+<p>
+The COMMIT command does not actually perform a commit until all
+pending SQL commands finish. Thus if two or more SELECT statements
+are in the middle of processing and a COMMIT is executed, the commit
+will not actually occur until all SELECT statements finish.
+</p>
+
+<p>
+An attempt to execute COMMIT might result in an SQLITE_BUSY return code.
+This indicates that another thread or process had a read lock on the database
+that prevented the database from being updated. When COMMIT fails in this
+way, the transaction remains active and the COMMIT can be retried later
+after the reader has had a chance to clear.
+</p>
+}
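+
+puts {
+<p>As a sketch (the table name is hypothetical), the following sequence
+groups two changes into a single atomic transaction:</p>
+}
+Example {
+BEGIN IMMEDIATE;
+UPDATE accounts SET balance = balance - 100 WHERE id = 1;
+UPDATE accounts SET balance = balance + 100 WHERE id = 2;
+COMMIT;
+}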
+
+
+Section comment comment
+
+Syntax {comment} {<SQL-comment> | <C-comment>
+} {SQL-comment} {-- <single-line>
+} {C-comment} {/STAR <multiple-lines> [STAR/]
+}
+
+puts {
+<p> Comments aren't SQL commands, but can occur in SQL queries. They are
+treated as whitespace by the parser. They can begin anywhere whitespace
+can be found, including inside expressions that span multiple lines.
+</p>
+
+<p> SQL comments only extend to the end of the current line.</p>
+
+<p> C comments can span any number of lines. If there is no terminating
+delimiter, they extend to the end of the input. This is not treated as
+an error. A new SQL statement can begin on a line after a multiline
+comment ends. C comments can be embedded anywhere whitespace can occur,
+including inside expressions, and in the middle of other SQL statements.
+C comments do not nest. SQL comments inside a C comment will be ignored.
+</p>
+}
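+
+puts {
+<p>For example (the statement itself is hypothetical):</p>
+}
+Example {
+-- A single-line SQL comment
+SELECT * FROM customers; /* A C-style comment
+   spanning two lines */
+}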
+
+
+Section COPY copy
+
+Syntax {sql-statement} {
+COPY [ OR <conflict-algorithm> ] [<database-name> .] <table-name> FROM <filename>
+[ USING DELIMITERS <delim> ]
+}
+
+puts {
+<p>The COPY command is available in SQLite version 2.8 and earlier.
+The COPY command has been removed from SQLite version 3.0 due to
+complications in trying to support it in a mixed UTF-8/16 environment.
+In version 3.0, the <a href="sqlite.html">command-line shell</a>
+contains a new command <b>.import</b> that can be used as a substitute
+for COPY.
+</p>
+
+<p>The COPY command is an extension used to load large amounts of
+data into a table. It is modeled after a similar command found
+in PostgreSQL. In fact, the SQLite COPY command is specifically
+designed to be able to read the output of the PostgreSQL dump
+utility <b>pg_dump</b> so that data can be easily transferred from
+PostgreSQL into SQLite.</p>
+
+<p>The table-name is the name of an existing table which is to
+be filled with data. The filename is a string or identifier that
+names a file from which data will be read. The filename can be
+<b>STDIN</b> to read data from standard input.</p>
+
+<p>Each line of the input file is converted into a single record
+in the table. Columns are separated by tabs. If a tab occurs as
+data within a column, then that tab is preceded by a backslash "\"
+character. A backslash in the data appears as two backslashes in
+a row. The optional USING DELIMITERS clause can specify a delimiter
+other than tab.</p>
+
+<p>If a column consists of the character "\N", that column is filled
+with the value NULL.</p>
+
+<p>The optional conflict-clause allows the specification of an alternative
+constraint conflict resolution algorithm to use for this one command.
+See the section titled
+<a href="#conflict">ON CONFLICT</a> for additional information.</p>
+
+<p>When the input data source is STDIN, the input can be terminated
+by a line that contains only a backslash and a dot:}
+puts "\"[Operator \\.]\".</p>"
+
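+puts {
+<p>For example, under SQLite version 2.8 a pipe-delimited file could be
+loaded with a command such as the following (the file and table names are
+hypothetical):</p>
+}
+Example {
+COPY OR IGNORE customers FROM 'customers.txt' USING DELIMITERS '|';
+}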
+
+Section {CREATE INDEX} createindex
+
+Syntax {sql-statement} {
+CREATE [UNIQUE] INDEX [IF NOT EXISTS] [<database-name> .] <index-name>
+ON <table-name> ( <column-name> [, <column-name>]* )
+} {column-name} {
+<name> [ COLLATE <collation-name>] [ ASC | DESC ]
+}
+
+puts {
+<p>The CREATE INDEX command consists of the keywords "CREATE INDEX" followed
+by the name of the new index, the keyword "ON", the name of a previously
+created table that is to be indexed, and a parenthesized list of names of
+columns in the table that are used for the index key.
+Each column name can be followed by one of the "ASC" or "DESC" keywords
+to indicate sort order, but the sort order is ignored in the current
+implementation. Sorting is always done in ascending order.</p>
+
+<p>The COLLATE clause following each column name defines a collating
+sequence used for text entries in that column. The default collating
+sequence is the collating sequence defined for that column in the
+CREATE TABLE statement. Or if no collating sequence is otherwise defined,
+the built-in BINARY collating sequence is used.</p>
+
+<p>There are no arbitrary limits on the number of indices that can be
+attached to a single table, nor on the number of columns in an index.</p>
+
+<p>If the UNIQUE keyword appears between CREATE and INDEX then duplicate
+index entries are not allowed. Any attempt to insert a duplicate entry
+will result in an error.</p>
+
+<p>The exact text
+of each CREATE INDEX statement is stored in the <b>sqlite_master</b>
+or <b>sqlite_temp_master</b> table, depending on whether the table
+being indexed is temporary. Every time the database is opened,
+all CREATE INDEX statements
+are read from the <b>sqlite_master</b> table and used to regenerate
+SQLite's internal representation of the index layout.</p>
+
+<p>If the optional IF NOT EXISTS clause is present and another index
+with the same name already exists, then this command becomes a no-op.</p>
+
+<p>Indexes are removed with the <a href="#dropindex">DROP INDEX</a>
+command.</p>
+}
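+
+puts {
+<p>For example, the following statements (all names are invented for this
+sketch) create an ordinary index and a unique index:</p>
+}
+Example {
+CREATE INDEX idx_customers_name ON customers(name);
+CREATE UNIQUE INDEX IF NOT EXISTS idx_orders_number ON orders(order_number);
+}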
+
+
+Section {CREATE TABLE} {createtable}
+
+Syntax {sql-command} {
+CREATE [TEMP | TEMPORARY] TABLE [IF NOT EXISTS] [<database-name> .] <table-name> (
+ <column-def> [, <column-def>]*
+ [, <constraint>]*
+)
+} {sql-command} {
+CREATE [TEMP | TEMPORARY] TABLE [<database-name>.] <table-name> AS <select-statement>
+} {column-def} {
+<name> [<type>] [[CONSTRAINT <name>] <column-constraint>]*
+} {type} {
+<typename> |
+<typename> ( <number> ) |
+<typename> ( <number> , <number> )
+} {column-constraint} {
+NOT NULL [ <conflict-clause> ] |
+PRIMARY KEY [<sort-order>] [ <conflict-clause> ] [AUTOINCREMENT] |
+UNIQUE [ <conflict-clause> ] |
+CHECK ( <expr> ) |
+DEFAULT <value> |
+COLLATE <collation-name>
+} {constraint} {
+PRIMARY KEY ( <column-list> ) [ <conflict-clause> ] |
+UNIQUE ( <column-list> ) [ <conflict-clause> ] |
+CHECK ( <expr> )
+} {conflict-clause} {
+ON CONFLICT <conflict-algorithm>
+}
+
+puts {
+<p>A CREATE TABLE statement is basically the keywords "CREATE TABLE"
+followed by the name of a new table and a parenthesized list of column
+definitions and constraints. The table name can be either an identifier
+or a string. Table names that begin with "<b>sqlite_</b>" are reserved
+for use by the engine.</p>
+
+<p>Each column definition is the name of the column followed by the
+datatype for that column, then one or more optional column constraints.
+The datatype for the column does not restrict what data may be put
+in that column.
+See <a href="datatype3.html">Datatypes In SQLite Version 3</a> for
+additional information.
+The UNIQUE constraint causes an index to be created on the specified
+columns. This index must contain unique keys.
+The COLLATE clause specifies what text <a href="datatype3.html#collation">
+collating function</a> to use when comparing text entries for the column.
+The built-in BINARY collating function is used by default.
+<p>
+The DEFAULT constraint specifies a default value to use when doing an INSERT.
+The value may be NULL, a string constant or a number. Starting with version
+3.1.0, the default value may also be one of the special case-independent
+keywords CURRENT_TIME, CURRENT_DATE or CURRENT_TIMESTAMP. If the value is
+NULL, a string constant or number, it is literally inserted into the column
+whenever an INSERT statement that does not specify a value for the column is
+executed. If the value is CURRENT_TIME, CURRENT_DATE or CURRENT_TIMESTAMP, then
+the current UTC date and/or time is inserted into the column. For
+CURRENT_TIME, the format is HH:MM:SS. For CURRENT_DATE, YYYY-MM-DD. The format
+for CURRENT_TIMESTAMP is "YYYY-MM-DD HH:MM:SS".
+</p>
+
+<p>Specifying a PRIMARY KEY normally just creates a UNIQUE index
+on the corresponding columns. However, if the primary key is on a single column
+that has datatype INTEGER, then that column is used internally
+as the actual key of the B-Tree for the table. This means that the column
+may only hold unique integer values. (Except for this one case,
+SQLite ignores the datatype specification of columns and allows
+any kind of data to be put in a column regardless of its declared
+datatype.) If a table does not have an INTEGER PRIMARY KEY column,
+then the B-Tree key will be an automatically generated integer. The
+B-Tree key for a row can always be accessed using one of the
+special names "<b>ROWID</b>", "<b>OID</b>", or "<b>_ROWID_</b>".
+This is true regardless of whether or not there is an INTEGER
+PRIMARY KEY. An INTEGER PRIMARY KEY column can also include the
+keyword AUTOINCREMENT. The AUTOINCREMENT keyword modifies the way
+that B-Tree keys are automatically generated. Additional detail
+on automatic B-Tree key generation is available
+<a href="autoinc.html">separately</a>.</p>
+
+<p>According to the SQL standard, PRIMARY KEY should imply NOT NULL.
+Unfortunately, due to a long-standing coding oversight, this is not
+the case in SQLite. SQLite allows NULL values
+in a PRIMARY KEY column. We could change SQLite to conform to the
+standard (and we might do so in the future), but by the time the
+oversight was discovered, SQLite was in such wide use that we feared
+breaking legacy code if we fixed the problem. So for now we have
+chosen to continue allowing NULLs in PRIMARY KEY columns.
+Developers should be aware, however, that we may change SQLite to
+conform to the SQL standard in the future and should design new programs
+accordingly.</p>
+
+<p>If the "TEMP" or "TEMPORARY" keyword occurs in between "CREATE"
+and "TABLE" then the table that is created is only visible
+within that same database connection
+and is automatically deleted when
+the database connection is closed. Any indices created on a temporary table
+are also temporary. Temporary tables and indices are stored in a
+separate file distinct from the main database file.</p>
+
+<p> If a <database-name> is specified, then the table is created in
+the named database. It is an error to specify both a <database-name>
+and the TEMP keyword, unless the <database-name> is "temp". If no
+database name is specified, and the TEMP keyword is not present,
+the table is created in the main database.</p>
+
+<p>The optional conflict-clause following each constraint
+allows the specification of an alternative default
+constraint conflict resolution algorithm for that constraint.
+The default is ABORT. Different constraints within the same
+table may have different default conflict resolution algorithms.
+If a COPY, INSERT, or UPDATE command specifies a different conflict
+resolution algorithm, then that algorithm is used in place of the
+default algorithm specified in the CREATE TABLE statement.
+See the section titled
+<a href="#conflict">ON CONFLICT</a> for additional information.</p>
+
+<p>CHECK constraints are supported as of version 3.3.0. Prior
+to version 3.3.0, CHECK constraints were parsed but not enforced.</p>
+
+<p>There are no arbitrary limits on the number
+of columns or on the number of constraints in a table.
+The total amount of data in a single row is limited to about
+1 megabyte in version 2.8. In version 3.0 there is no arbitrary
+limit on the amount of data in a row.</p>
+
+
+<p>The CREATE TABLE AS form defines the table to be
+the result set of a query. The names of the table columns are
+the names of the columns in the result.</p>
+
+<p>The exact text
+of each CREATE TABLE statement is stored in the <b>sqlite_master</b>
+table. Every time the database is opened, all CREATE TABLE statements
+are read from the <b>sqlite_master</b> table and used to regenerate
+SQLite's internal representation of the table layout.
+If the original command was a CREATE TABLE AS then an equivalent
+CREATE TABLE statement is synthesized and stored in <b>sqlite_master</b>
+in place of the original command.
+The text of CREATE TEMPORARY TABLE statements is stored in the
+<b>sqlite_temp_master</b> table.
+</p>
+
+<p>If the optional IF NOT EXISTS clause is present and another table
+with the same name already exists, then this command becomes a no-op.</p>
+
+<p>Tables are removed using the <a href="#droptable">DROP TABLE</a>
+statement. </p>
+}
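+
+puts {
+<p>As an illustrative sketch (all names are hypothetical), the following
+statement combines several of the constraints described above:</p>
+}
+Example {
+CREATE TABLE orders(
+  id INTEGER PRIMARY KEY AUTOINCREMENT,
+  customer_name TEXT NOT NULL,
+  quantity INTEGER DEFAULT 1 CHECK( quantity > 0 ),
+  created TEXT DEFAULT CURRENT_TIMESTAMP,
+  UNIQUE(customer_name, created) ON CONFLICT IGNORE
+);
+}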
+
+
+Section {CREATE TRIGGER} createtrigger
+
+Syntax {sql-statement} {
+CREATE [TEMP | TEMPORARY] TRIGGER [IF NOT EXISTS] <trigger-name> [ BEFORE | AFTER ]
+<database-event> ON [<database-name> .] <table-name>
+<trigger-action>
+}
+
+Syntax {sql-statement} {
+CREATE [TEMP | TEMPORARY] TRIGGER [IF NOT EXISTS] <trigger-name> INSTEAD OF
+<database-event> ON [<database-name> .] <view-name>
+<trigger-action>
+}
+
+Syntax {database-event} {
+DELETE |
+INSERT |
+UPDATE |
+UPDATE OF <column-list>
+}
+
+Syntax {trigger-action} {
+[ FOR EACH ROW | FOR EACH STATEMENT ] [ WHEN <expression> ]
+BEGIN
+ <trigger-step> ; [ <trigger-step> ; ]*
+END
+}
+
+Syntax {trigger-step} {
+<update-statement> | <insert-statement> |
+<delete-statement> | <select-statement>
+}
+
+puts {
+<p>The CREATE TRIGGER statement is used to add triggers to the
+database schema. Triggers are database operations (the <i>trigger-action</i>)
+that are automatically performed when a specified database event (the
+<i>database-event</i>) occurs. </p>
+
+<p>A trigger may be specified to fire whenever a DELETE, INSERT or UPDATE of a
+particular database table occurs, or whenever an UPDATE occurs on one or more
+specified columns of a table.</p>
+
+<p>At this time SQLite supports only FOR EACH ROW triggers, not FOR EACH
+STATEMENT triggers. Hence explicitly specifying FOR EACH ROW is optional. FOR
+EACH ROW implies that the SQL statements specified as <i>trigger-steps</i>
+may be executed (depending on the WHEN clause) for each database row being
+inserted, updated or deleted by the statement causing the trigger to fire.</p>
+
+<p>Both the WHEN clause and the <i>trigger-steps</i> may access elements of
+the row being inserted, deleted or updated using references of the form
+"NEW.<i>column-name</i>" and "OLD.<i>column-name</i>", where
+<i>column-name</i> is the name of a column from the table that the trigger
+is associated with. OLD and NEW references may only be used in triggers on
+<i>trigger-event</i>s for which they are relevant, as follows:</p>
+
+<table border=0 cellpadding=10>
+<tr>
+<td valign="top" align="right" width=120><i>INSERT</i></td>
+<td valign="top">NEW references are valid</td>
+</tr>
+<tr>
+<td valign="top" align="right" width=120><i>UPDATE</i></td>
+<td valign="top">NEW and OLD references are valid</td>
+</tr>
+<tr>
+<td valign="top" align="right" width=120><i>DELETE</i></td>
+<td valign="top">OLD references are valid</td>
+</tr>
+</table>
+</p>
+
+<p>If a WHEN clause is supplied, the SQL statements specified as <i>trigger-steps</i> are only executed for rows for which the WHEN clause is true. If no WHEN clause is supplied, the SQL statements are executed for all rows.</p>
+
+<p>The specified <i>trigger-time</i> determines when the <i>trigger-steps</i>
+will be executed relative to the insertion, modification or removal of the
+associated row.</p>
+
+<p>An ON CONFLICT clause may be specified as part of an UPDATE or INSERT
+<i>trigger-step</i>. However if an ON CONFLICT clause is specified as part of
+the statement causing the trigger to fire, then this conflict handling
+policy is used instead.</p>
+
+<p>Triggers are automatically dropped when the table that they are
+associated with is dropped.</p>
+
+<p>Triggers may be created on views, as well as ordinary tables, by specifying
+INSTEAD OF in the CREATE TRIGGER statement. If one or more ON INSERT, ON DELETE
+or ON UPDATE triggers are defined on a view, then it is not an error to execute
+an INSERT, DELETE or UPDATE statement on the view, respectively. Thereafter,
+executing an INSERT, DELETE or UPDATE on the view causes the associated
+ triggers to fire. The real tables underlying the view are not modified
+ (except possibly explicitly, by a trigger program).</p>
+
+<p><b>Example:</b></p>
+
+<p>Assuming that customer records are stored in the "customers" table, and
+that order records are stored in the "orders" table, the following trigger
+ensures that all associated orders are redirected when a customer changes
+his or her address:</p>
+}
+Example {
+CREATE TRIGGER update_customer_address UPDATE OF address ON customers
+ BEGIN
+ UPDATE orders SET address = new.address WHERE customer_name = old.name;
+ END;
+}
+puts {
+<p>With this trigger installed, executing the statement:</p>
+}
+
+Example {
+UPDATE customers SET address = '1 Main St.' WHERE name = 'Jack Jones';
+}
+puts {
+<p>causes the following to be automatically executed:</p>
+}
+Example {
+UPDATE orders SET address = '1 Main St.' WHERE customer_name = 'Jack Jones';
+}
+
+puts {
+<p>Note that currently, triggers may behave oddly when created on tables
+ with INTEGER PRIMARY KEY fields. If a BEFORE trigger program modifies the
+ INTEGER PRIMARY KEY field of a row that will be subsequently updated by the
+ statement that causes the trigger to fire, then the update may not occur.
+ The workaround is to declare the table with a PRIMARY KEY column instead
+ of an INTEGER PRIMARY KEY column.</p>
+}
+
+puts {
+<p>A special SQL function RAISE() may be used within a trigger-program, with the following syntax</p>
+}
+Syntax {raise-function} {
+RAISE ( ABORT, <error-message> ) |
+RAISE ( FAIL, <error-message> ) |
+RAISE ( ROLLBACK, <error-message> ) |
+RAISE ( IGNORE )
+}
+puts {
+<p>When one of the first three forms is called during trigger-program execution, the specified ON CONFLICT processing is performed (either ABORT, FAIL or
+ ROLLBACK) and the current query terminates. An error code of SQLITE_CONSTRAINT is returned to the user, along with the specified error message.</p>
+
+<p>When RAISE(IGNORE) is called, the remainder of the current trigger program,
+the statement that caused the trigger program to execute and any subsequent
+ trigger programs that would have been executed are abandoned. No database
+ changes are rolled back. If the statement that caused the trigger program
+ to execute is itself part of a trigger program, then that trigger program
+ resumes execution at the beginning of the next step.
+</p>
+
+<p>Triggers are removed using the <a href="#droptrigger">DROP TRIGGER</a>
+statement.</p>
+}
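+
+puts {
+<p>Returning to the RAISE() function, the following sketch (table and column
+names are hypothetical) shows a BEFORE INSERT trigger that uses RAISE() to
+reject bad rows:</p>
+}
+Example {
+CREATE TRIGGER validate_quantity BEFORE INSERT ON orders
+  BEGIN
+    SELECT CASE WHEN new.quantity <= 0
+      THEN RAISE(ABORT, 'quantity must be positive') END;
+  END;
+}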
+
+
+Section {CREATE VIEW} {createview}
+
+Syntax {sql-command} {
+CREATE [TEMP | TEMPORARY] VIEW [IF NOT EXISTS] [<database-name>.] <view-name> AS <select-statement>
+}
+
+puts {
+<p>The CREATE VIEW command assigns a name to a pre-packaged
+<a href="#select">SELECT</a>
+statement. Once the view is created, it can be used in the FROM clause
+of another SELECT in place of a table name.
+</p>
+
+<p>If the "TEMP" or "TEMPORARY" keyword occurs in between "CREATE"
+and "VIEW" then the view that is created is only visible to the
+process that opened the database and is automatically deleted when
+the database is closed.</p>
+
+<p> If a <database-name> is specified, then the view is created in
+the named database. It is an error to specify both a <database-name>
+and the TEMP keyword, unless the <database-name> is "temp". If no
+database name is specified, and the TEMP keyword is not present,
+the view is created in the main database.</p>
+
+<p>You cannot COPY, DELETE, INSERT or UPDATE a view. Views are read-only
+in SQLite. However, in many cases you can use a <a href="#createtrigger">
+TRIGGER</a> on the view to accomplish the same thing. Views are removed
+with the <a href="#dropview">DROP VIEW</a>
+command.</p>
+}
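+
+puts {
+<p>A minimal example (the underlying table and columns are hypothetical):</p>
+}
+Example {
+CREATE VIEW big_orders AS
+  SELECT customer_name, quantity FROM orders WHERE quantity > 10;
+SELECT * FROM big_orders;
+}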
+
+Section {CREATE VIRTUAL TABLE} {createvtab}
+
+Syntax {sql-command} {
+CREATE VIRTUAL TABLE [<database-name> .] <table-name> USING <module-name> [( <arguments> )]
+}
+
+puts {
+<p>A virtual table is an interface to an external storage or computation
+engine that appears to be a table but does not actually store information
+in the database file.</p>
+
+<p>In general, you can do anything with a virtual table that can be done
+with an ordinary table, except that you cannot create triggers on a
+virtual table. Some virtual table implementations might impose additional
+restrictions. For example, many virtual tables are read-only.</p>
+
+<p>The <module-name> is the name of an object that implements
+the virtual table. The <module-name> must be registered with
+the SQLite database connection using
+<a href="capi3ref.html#sqlite3_create_module">sqlite3_create_module</a>
+prior to issuing the CREATE VIRTUAL TABLE statement.
+The module takes zero or more comma-separated arguments.
+The arguments can be just about any text as long as it has balanced
+parentheses. The argument syntax is sufficiently general that the
+arguments can be made to appear as column definitions in a traditional
+<a href="#createtable">CREATE TABLE</a> statement.
+SQLite passes the module arguments directly
+to the module without any interpretation. It is the responsibility
+of the module implementation to parse and interpret its own arguments.</p>
+
+<p>A virtual table is destroyed using the ordinary
+<a href="#droptable">DROP TABLE</a> statement. There is no
+DROP VIRTUAL TABLE statement.</p>
+}
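+
+puts {
+<p>For example, using the fts1 full-text search module (assuming the module
+has been registered with the database connection as described above; the
+table and column names are hypothetical):</p>
+}
+Example {
+CREATE VIRTUAL TABLE mail USING fts1(subject, body);
+}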
+
+Section DELETE delete
+
+Syntax {sql-statement} {
+DELETE FROM [<database-name> .] <table-name> [WHERE <expr>]
+}
+
+puts {
+<p>The DELETE command is used to remove records from a table.
+The command consists of the "DELETE FROM" keywords followed by
+the name of the table from which records are to be removed.
+</p>
+
+<p>Without a WHERE clause, all rows of the table are removed.
+If a WHERE clause is supplied, then only those rows that match
+the expression are removed.</p>
+}
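+
+puts {
+<p>For example (table and column names are hypothetical):</p>
+}
+Example {
+DELETE FROM orders WHERE quantity = 0;
+DELETE FROM session_log;
+}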
+
+
+Section {DETACH DATABASE} detach
+
+Syntax {sql-command} {
+DETACH [DATABASE] <database-name>
+}
+
+puts {
+<p>This statement detaches an additional database connection previously
+attached using the <a href="#attach">ATTACH DATABASE</a> statement. It
+is possible to have the same database file attached multiple times using
+different names, and detaching one connection to a file will leave the
+others intact.</p>
+
+<p>This statement will fail if SQLite is in the middle of a transaction.</p>
+}
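+
+puts {
+<p>For example, to detach a hypothetical database previously attached under
+the name "archive":</p>
+}
+Example {
+DETACH DATABASE archive;
+}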
+
+
+Section {DROP INDEX} dropindex
+
+Syntax {sql-command} {
+DROP INDEX [IF EXISTS] [<database-name> .] <index-name>
+}
+
+puts {
+<p>The DROP INDEX statement removes an index added
+with the <a href="#createindex">
+CREATE INDEX</a> statement. The index named is completely removed from
+the disk. The only way to recover the index is to reenter the
+appropriate CREATE INDEX command.</p>
+
+<p>The DROP INDEX statement does not reduce the size of the database
+file in the default mode.
+Empty space in the database is retained for later INSERTs. To
+remove free space in the database, use the <a href="#vacuum">VACUUM</a>
+command. If AUTOVACUUM mode is enabled for a database then space
+will be freed automatically by DROP INDEX.</p>
+}
+
+
+Section {DROP TABLE} droptable
+
+Syntax {sql-command} {
+DROP TABLE [IF EXISTS] [<database-name>.] <table-name>
+}
+
+puts {
+<p>The DROP TABLE statement removes a table added with the <a href=
+"#createtable">CREATE TABLE</a> statement. The name specified is the
+table name. It is completely removed from the database schema and the
+disk file. The table can not be recovered. All indices associated
+with the table are also deleted.</p>
+
+<p>The DROP TABLE statement does not reduce the size of the database
+file in the default mode. Empty space in the database is retained for
+later INSERTs. To
+remove free space in the database, use the <a href="#vacuum">VACUUM</a>
+command. If AUTOVACUUM mode is enabled for a database then space
+will be freed automatically by DROP TABLE.</p>
+
+<p>The optional IF EXISTS clause suppresses the error that would normally
+result if the table does not exist.</p>
+}
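+
+puts {
+<p>For example (the table name is hypothetical):</p>
+}
+Example {
+DROP TABLE IF EXISTS import_staging;
+}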
+
+
+Section {DROP TRIGGER} droptrigger
+Syntax {sql-statement} {
+DROP TRIGGER [IF EXISTS] [<database-name> .] <trigger-name>
+}
+puts {
+<p>The DROP TRIGGER statement removes a trigger created by the
+<a href="#createtrigger">CREATE TRIGGER</a> statement. The trigger is
+deleted from the database schema. Note that triggers are automatically
+dropped when the associated table is dropped.</p>
+}
+
+
+Section {DROP VIEW} dropview
+
+Syntax {sql-command} {
+DROP VIEW [IF EXISTS] <view-name>
+}
+
+puts {
+<p>The DROP VIEW statement removes a view created by the <a href=
+"#createview">CREATE VIEW</a> statement. The name specified is the
+view name. It is removed from the database schema, but no actual data
+in the underlying base tables is modified.</p>
+}
+
+
+Section EXPLAIN explain
+
+Syntax {sql-statement} {
+EXPLAIN <sql-statement>
+}
+
+puts {
+<p>The EXPLAIN command modifier is a non-standard extension. The
+idea comes from a similar command found in PostgreSQL, but the operation
+is completely different.</p>
+
+<p>If the EXPLAIN keyword appears before any other SQLite SQL command
+then instead of actually executing the command, the SQLite library will
+report back the sequence of virtual machine instructions it would have
+used to execute the command had the EXPLAIN keyword not been present.
+For additional information about virtual machine instructions see
+the <a href="arch.html">architecture description</a> or the documentation
+on <a href="opcode.html">available opcodes</a> for the virtual machine.</p>
+}
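+
+puts {
+<p>For example, prefixing any statement with EXPLAIN reports the virtual
+machine program instead of running the statement (the query below is
+hypothetical):</p>
+}
+Example {
+EXPLAIN SELECT customer_name FROM orders WHERE quantity > 10;
+}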
+
+
+Section expression expr
+
+Syntax {expr} {
+<expr> <binary-op> <expr> |
+<expr> [NOT] <like-op> <expr> [ESCAPE <expr>] |
+<unary-op> <expr> |
+( <expr> ) |
+<column-name> |
+<table-name> . <column-name> |
+<database-name> . <table-name> . <column-name> |
+<literal-value> |
+<parameter> |
+<function-name> ( <expr-list> | STAR ) |
+<expr> ISNULL |
+<expr> NOTNULL |
+<expr> [NOT] BETWEEN <expr> AND <expr> |
+<expr> [NOT] IN ( <value-list> ) |
+<expr> [NOT] IN ( <select-statement> ) |
+<expr> [NOT] IN [<database-name> .] <table-name> |
+[EXISTS] ( <select-statement> ) |
+CASE [<expr>] LP WHEN <expr> THEN <expr> RPPLUS [ELSE <expr>] END |
+CAST ( <expr> AS <type> )
+} {like-op} {
+LIKE | GLOB | REGEXP | MATCH
+}
+
+puts {
+<p>This section is different from the others. Most other sections of
+this document talks about a particular SQL command. This section does
+not talk about a standalone command but about "expressions" which are
+subcomponents of most other commands.</p>
+
+<p>SQLite understands the following binary operators, in order from
+highest to lowest precedence:</p>
+
+<blockquote><pre>
+<font color="#2c2cf0"><big>||
+* / %
++ -
+<< >> & |
+< <= > >=
+= == != <> </big>IN
+AND
+OR</font>
+</pre></blockquote>
+
+<p>Supported unary operators are these:</p>
+
+<blockquote><pre>
+<font color="#2c2cf0"><big>- + ! ~ NOT</big></font>
+</pre></blockquote>
+
+<p>The unary operator [Operator +] is a no-op. It can be applied
+to strings, numbers, or blobs and it always gives as its result the
+value of the operand.</p>
+
+<p>Note that there are two variations of the equals and not equals
+operators. Equals can be either}
+puts "[Operator =] or [Operator ==].
+The non-equals operator can be either
+[Operator !=] or [Operator {<>}].
+The [Operator ||] operator is \"concatenate\" - it joins together
+the two strings of its operands.
+The operator [Operator %] outputs the remainder of its left
+operand modulo its right operand.</p>
+
+<p>The result of any binary operator is a numeric value, except
+for the [Operator ||] concatenation operator which gives a string
+result.</p>"
+
+puts {
+
+<a name="literal_value"></a>
+<p>
+A literal value is an integer number or a floating point number.
+Scientific notation is supported. The "." character is always used
+as the decimal point even if the locale setting specifies "," for
+this role - the use of "," for the decimal point would result in
+syntactic ambiguity. A string constant is formed by enclosing the
+string in single quotes ('). A single quote within the string can
+be encoded by putting two single quotes in a row - as in Pascal.
+C-style escapes using the backslash character are not supported because
+they are not standard SQL.
+BLOB literals are string literals containing hexadecimal data and
+preceded by a single "x" or "X" character. For example:</p>
+
+<blockquote><pre>
+X'53514C697465'
+</pre></blockquote>
+
+<p>
+A literal value can also be the token "NULL".
+</p>
+
+<p>
+A parameter specifies a placeholder in the expression for a literal
+value that is filled in at runtime using the
+<a href="capi3ref.html#sqlite3_bind_int">sqlite3_bind</a> API.
+Parameters can take several forms:
+</p>
+
+<blockquote>
+<table>
+<tr>
+<td align="right" valign="top"><b>?</b><i>NNN</i></td><td width="20"></td>
+<td>A question mark followed by a number <i>NNN</i> holds a spot for the
+NNN-th parameter. NNN must be between 1 and 999.</td>
+</tr>
+<tr>
+<td align="right" valign="top"><b>?</b></td><td width="20"></td>
+<td>A question mark that is not followed by a number holds a spot for
+the next unused parameter.</td>
+</tr>
+<tr>
+<td align="right" valign="top"><b>:</b><i>AAAA</i></td><td width="20"></td>
+<td>A colon followed by an identifier name holds a spot for a named
+parameter with the name AAAA. Named parameters are also numbered.
+The number assigned is the next unused number. To avoid confusion,
+it is best to avoid mixing named and numbered parameters.</td>
+</tr>
+<tr>
+<td align="right" valign="top"><b>@</b><i>AAAA</i></td><td width="20"></td>
+<td>An "at" sign works exactly like a colon.</td>
+</tr>
+<tr>
+<td align="right" valign="top"><b>$</b><i>AAAA</i></td><td width="20"></td>
+<td>A dollar-sign followed by an identifier name also holds a spot for a named
+parameter with the name AAAA. The identifier name in this case can include
+one or more occurrences of "::" and a suffix enclosed in "(...)" containing
+any text at all. This syntax is the form of a variable name in the Tcl
+programming language.</td>
+</tr>
+</table>
+</blockquote>
+
+<p>Parameters that are not assigned values using
+<a href="capi3ref.html#sqlite3_bind_int">sqlite3_bind</a> are treated
+as NULL.</p>
+
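+<p>For example, the following hypothetical statements show the numbered,
+anonymous and named parameter forms side by side:</p>
+
+<blockquote><pre>
+INSERT INTO orders(customer_name, quantity) VALUES(?1, ?2);
+INSERT INTO orders(customer_name, quantity) VALUES(?, ?);
+INSERT INTO orders(customer_name, quantity) VALUES(:name, @qty);
+</pre></blockquote>
+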
+<a name="like"></a>
+<p>The LIKE operator does a pattern matching comparison. The operand
+to the right contains the pattern, the left hand operand contains the
+string to match against the pattern.
+}
+puts "A percent symbol [Operator %] in the pattern matches any
+sequence of zero or more characters in the string. An underscore
+[Operator _] in the pattern matches any single character in the
+string. Any other character matches itself or its lower/upper case
+equivalent (i.e. case-insensitive matching). (A bug: SQLite only
+understands upper/lower case for 7-bit Latin characters. Hence the
+LIKE operator is case sensitive for 8-bit iso8859 characters or UTF-8
+characters. For example, the expression <b>'a' LIKE 'A'</b>
+is TRUE but <b>'æ' LIKE 'Æ'</b> is FALSE.).</p>"
+
+puts {
+<p>If the optional ESCAPE clause is present, then the expression
+following the ESCAPE keyword must evaluate to a string consisting of
+a single character. This character may be used in the LIKE pattern
+to include literal percent or underscore characters. The escape
+character followed by a percent symbol, underscore or itself matches a
+literal percent symbol, underscore or escape character in the string,
+respectively. The infix LIKE operator is implemented by calling the
+user function <a href="#likeFunc"> like(<i>X</i>,<i>Y</i>)</a>.</p>
+}
+
+puts {
+<p>The LIKE operator is not case sensitive and will match upper case
+characters on one side against lower case characters on the other.
+(A bug: SQLite only understands upper/lower case for 7-bit Latin
+characters. Hence the LIKE operator is case sensitive for 8-bit
+iso8859 characters or UTF-8 characters. For example, the expression
+<b>'a' LIKE 'A'</b> is TRUE but
+<b>'æ' LIKE 'Æ'</b> is FALSE.).</p>
+
+<p>The infix LIKE
+operator is implemented by calling the user function <a href="#likeFunc">
+like(<i>X</i>,<i>Y</i>)</a>. If an ESCAPE clause is present, it adds
+a third parameter to the function call. The functionality of LIKE can be
+overridden by defining an alternative implementation of the
+like() SQL function.</p>
+
+<a name="glob"></a>
+<p>The GLOB operator is similar to LIKE but uses the Unix
+file globbing syntax for its wildcards. Also, GLOB is case
+sensitive, unlike LIKE. Both GLOB and LIKE may be preceded by
+the NOT keyword to invert the sense of the test. The infix GLOB
+operator is implemented by calling the user function <a href="#globFunc">
+glob(<i>X</i>,<i>Y</i>)</a> and can be modified by overriding
+that function.</p>
+
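+<p>For example, assuming a hypothetical "customers" table, the first query
+below matches names beginning with "jo" in any case, while the second
+(GLOB) match is case sensitive:</p>
+
+<blockquote><pre>
+SELECT * FROM customers WHERE name LIKE 'jo%';
+SELECT * FROM customers WHERE name GLOB 'Jo*';
+</pre></blockquote>
+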
+<a name="regexp"></a>
+<p>The REGEXP operator is a special syntax for the regexp()
+user function. No regexp() user function is defined by default
+and so use of the REGEXP operator will normally result in an
+error message. If a user-defined function named "regexp"
+is added at run-time, that function will be called in order
+to implement the REGEXP operator.</p>
+
+<a name="match"></a>
+<p>The MATCH operator is a special syntax for the match()
+user function. The default match() function implementation
+raises an exception and is not really useful for anything.
+But extensions can override the match() function with more
+helpful logic.</p>
+
+<p>A column name can be any of the names defined in the CREATE TABLE
+statement or one of the following special identifiers: "<b>ROWID</b>",
+"<b>OID</b>", or "<b>_ROWID_</b>".
+These special identifiers all describe the
+unique random integer key (the "row key") associated with every
+row of every table.
+The special identifiers only refer to the row key if the CREATE TABLE
+statement does not define a real column with the same name. Row keys
+act like read-only columns. A row key can be used anywhere a regular
+column can be used, except that you cannot change the value
+of a row key in an UPDATE or INSERT statement.
+"SELECT * ..." does not return the row key.</p>
+
+<p>SELECT statements can appear in expressions as either the
+right-hand operand of the IN operator, as a scalar quantity, or
+as the operand of an EXISTS operator.
+As a scalar quantity or the operand of an IN operator,
+the SELECT should have only a single column in its
+result. Compound SELECTs (connected with keywords like UNION or
+EXCEPT) are allowed.
+With the EXISTS operator, the columns in the result set of the SELECT are
+ignored and the expression returns TRUE if one or more rows exist
+and FALSE if the result set is empty.
+If no terms in the SELECT expression refer to values in the containing
+query, then the expression is evaluated once prior to any other
+processing and the result is reused as necessary. If the SELECT expression
+does contain variables from the outer query, then the SELECT is reevaluated
+every time it is needed.</p>
+
+<p>When a SELECT is the right operand of the IN operator, the IN
+operator returns TRUE if the result of the left operand is any of
+the values generated by the select. The IN operator may be preceded
+by the NOT keyword to invert the sense of the test.</p>
+
+<p>When a SELECT appears within an expression but is not the right
+operand of an IN operator, then the first row of the result of the
+SELECT becomes the value used in the expression. If the SELECT yields
+more than one result row, all rows after the first are ignored. If
+the SELECT yields no rows, then the value of the SELECT is NULL.</p>
+
+<p>A CAST expression changes the datatype of the <expr> into the
+type specified by <type>.
+<type> can be any non-empty type name that is valid
+for the type in a column definition of a CREATE TABLE statement.</p>
+
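+<p>For example, the following hypothetical expression converts a text value
+into an integer:</p>
+
+<blockquote><pre>
+SELECT CAST('123' AS INTEGER);
+</pre></blockquote>
+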
+<p>Both simple and aggregate functions are supported. A simple
+function can be used in any expression. Simple functions return
+a result immediately based on their inputs. Aggregate functions
+may only be used in a SELECT statement. Aggregate functions compute
+their result across all rows of the result set.</p>
+
+<a name="corefunctions"></a>
+<b>Core Functions</b>
+
+<p>The core functions shown below are available by default. Additional
+functions may be written in C and added to the database engine using
+the <a href="capi3ref.html#cfunc">sqlite3_create_function()</a>
+API.</p>
+
+<table border=0 cellpadding=10>
+<tr>
+<td valign="top" align="right" width=120>abs(<i>X</i>)</td>
+<td valign="top">Return the absolute value of argument <i>X</i>.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">coalesce(<i>X</i>,<i>Y</i>,...)</td>
+<td valign="top">Return a copy of the first non-NULL argument. If
+all arguments are NULL then NULL is returned. There must be at least
+2 arguments.</td>
+</tr>
+
+<tr>
+<a name="globFunc"></a>
+<td valign="top" align="right">glob(<i>X</i>,<i>Y</i>)</td>
+<td valign="top">This function is used to implement the
+"<b>X GLOB Y</b>" syntax of SQLite. The
+<a href="capi3ref.html#sqlite3_create_function">sqlite3_create_function()</a>
+interface can
+be used to override this function and thereby change the operation
+of the <a href="#globFunc">GLOB</a> operator.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">ifnull(<i>X</i>,<i>Y</i>)</td>
+<td valign="top">Return a copy of the first non-NULL argument. If
+both arguments are NULL then NULL is returned. This behaves the same as
+<b>coalesce()</b> above.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">last_insert_rowid()</td>
+<td valign="top">Return the ROWID of the last row insert from this
+connection to the database. This is the same value that would be returned
+from the <b>sqlite_last_insert_rowid()</b> API function.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">length(<i>X</i>)</td>
+<td valign="top">Return the string length of <i>X</i> in characters.
+If SQLite is configured to support UTF-8, then the number of UTF-8
+characters is returned, not the number of bytes.</td>
+</tr>
+
+<tr>
+<a name="likeFunc"></a>
+<td valign="top" align="right">like(<i>X</i>,<i>Y</i> [,<i>Z</i>])</td>
+<td valign="top">
+This function is used to implement the "<b>X LIKE Y [ESCAPE Z]</b>"
+syntax of SQL. If the optional ESCAPE clause is present, then the
+user-function is invoked with three arguments. Otherwise, it is
+invoked with two arguments only. The
+<a href="capi3ref.html#sqlite3_create_function">
+sqlite3_create_function()</a> interface can be used to override this
+function and thereby change the operation of the <a
+href= "#like">LIKE</a> operator. When doing this, it may be important
+to override both the two and three argument versions of the like()
+function. Otherwise, different code may be called to implement the
+LIKE operator depending on whether or not an ESCAPE clause was
+specified.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">load_extension(<i>X</i>)<br>
+load_extension(<i>X</i>,<i>Y</i>)</td>
+<td valign="top">Load SQLite extensions out of the shared library
+file named <i>X</i> using the entry point <i>Y</i>. The result
+is a NULL. If <i>Y</i> is omitted then the default entry point
+of <b>sqlite3_extension_init</b> is used. This function raises
+an exception if the extension fails to load or initialize correctly.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">lower(<i>X</i>)</td>
+<td valign="top">Return a copy of string <i>X</i> will all characters
+converted to lower case. The C library <b>tolower()</b> routine is used
+for the conversion, which means that this function might not
+work correctly on UTF-8 characters.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">max(<i>X</i>,<i>Y</i>,...)</td>
+<td valign="top">Return the argument with the maximum value. Arguments
+may be strings in addition to numbers. The maximum value is determined
+by the usual sort order. Note that <b>max()</b> is a simple function when
+it has 2 or more arguments but converts to an aggregate function if given
+only a single argument.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">min(<i>X</i>,<i>Y</i>,...)</td>
+<td valign="top">Return the argument with the minimum value. Arguments
+may be strings in addition to numbers. The minimum value is determined
+by the usual sort order. Note that <b>min()</b> is a simple function when
+it has 2 or more arguments but converts to an aggregate function if given
+only a single argument.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">nullif(<i>X</i>,<i>Y</i>)</td>
+<td valign="top">Return the first argument if the arguments are different,
+otherwise return NULL.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">quote(<i>X</i>)</td>
+<td valign="top">This routine returns a string which is the value of
+its argument suitable for inclusion into another SQL statement.
+Strings are surrounded by single-quotes with escapes on interior quotes
+as needed. BLOBs are encoded as hexadecimal literals.
+The current implementation of VACUUM uses this function. The function
+is also useful when writing triggers to implement undo/redo functionality.
+</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">random(*)</td>
+<td valign="top">Return a pseudo-random integer
+between -9223372036854775808 and +9223372036854775807.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">round(<i>X</i>)<br>round(<i>X</i>,<i>Y</i>)</td>
+<td valign="top">Round off the number <i>X</i> to <i>Y</i> digits to the
+right of the decimal point. If the <i>Y</i> argument is omitted, 0 is
+assumed.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">soundex(<i>X</i>)</td>
+<td valign="top">Compute the soundex encoding of the string <i>X</i>.
+The string "?000" is returned if the argument is NULL.
+This function is omitted from SQLite by default.
+It is only available if the -DSQLITE_SOUNDEX=1 compiler option
+is used when SQLite is built.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">sqlite_version(*)</td>
+<td valign="top">Return the version string for the SQLite library
+that is running. Example: "2.8.0"</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">substr(<i>X</i>,<i>Y</i>,<i>Z</i>)</td>
+<td valign="top">Return a substring of input string <i>X</i> that begins
+with the <i>Y</i>-th character and which is <i>Z</i> characters long.
+The left-most character of <i>X</i> is number 1. If <i>Y</i> is negative
+then the first character of the substring is found by counting from the
+right rather than the left. If SQLite is configured to support UTF-8,
+then character indices refer to actual UTF-8 characters, not bytes.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">typeof(<i>X</i>)</td>
+<td valign="top">Return the type of the expression <i>X</i>. The only
+return values are "null", "integer", "real", "text", and "blob".
+SQLite's type handling is
+explained in <a href="datatype3.html">Datatypes in SQLite Version 3</a>.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">upper(<i>X</i>)</td>
+<td valign="top">Return a copy of input string <i>X</i> converted to all
+upper-case letters. The implementation of this function uses the C library
+routine <b>toupper()</b> which means it may not work correctly on
+UTF-8 strings.</td>
+</tr>
+</table>
+
+<b>Date And Time Functions</b>
+
+<p>Date and time functions are documented in the
+<a href="http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions">
+SQLite Wiki</a>.</p>
+
+<a name="aggregatefunctions"></a>
+<b>Aggregate Functions</b>
+
+<p>
+The aggregate functions shown below are available by default. Additional
+aggregate functions written in C may be added using the
+<a href="capi3ref.html#sqlite3_create_function">sqlite3_create_function()</a>
+API.</p>
+
+<p>
+In any aggregate function that takes a single argument, that argument
+can be preceded by the keyword DISTINCT. In such cases, duplicate
+elements are filtered before being passed into the aggregate function.
+For example, the function "count(distinct X)" will return the number
+of distinct values of column X instead of the total number of non-null
+values in column X.
+</p>
+
+<table border=0 cellpadding=10>
+<tr>
+<td valign="top" align="right" width=120>avg(<i>X</i>)</td>
+<td valign="top">Return the average value of all non-NULL <i>X</i> within a
+group. String and BLOB values that do not look like numbers are
+interpreted as 0.
+The result of avg() is always a floating point value even if all
+inputs are integers.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">count(<i>X</i>)<br>count(*)</td>
+<td valign="top">The first form return a count of the number of times
+that <i>X</i> is not NULL in a group. The second form (with no argument)
+returns the total number of rows in the group.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">max(<i>X</i>)</td>
+<td valign="top">Return the maximum value of all values in the group.
+The usual sort order is used to determine the maximum.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">min(<i>X</i>)</td>
+<td valign="top">Return the minimum non-NULL value of all values in the group.
+The usual sort order is used to determine the minimum. NULL is only returned
+if all values in the group are NULL.</td>
+</tr>
+
+<tr>
+<td valign="top" align="right">sum(<i>X</i>)<br>total(<i>X</i>)</td>
+<td valign="top">Return the numeric sum of all non-NULL values in the group.
+ If there are no non-NULL input rows then sum() returns
+ NULL but total() returns 0.0.
+ NULL is not normally a helpful result for the sum of no rows
+ but the SQL standard requires it and most other
+ SQL database engines implement sum() that way so SQLite does it in the
+ same way in order to be compatible. The non-standard total() function
+ is provided as a convenient way to work around this design problem
+ in the SQL language.</p>
+
+ <p>The result of total() is always a floating point value.
+ The result of sum() is an integer value if all non-NULL inputs are integers.
+ If any input to sum() is neither an integer nor a NULL
+ then sum() returns a floating point value
+ which might be an approximation to the true sum.</p>
+
+ <p>Sum() will throw an "integer overflow" exception if all inputs
+ are integers or NULL
+ and an integer overflow occurs at any point during the computation.
+ Total() never throws an exception.</p>
+</tr>
+</table>
+}
+
+
+Section INSERT insert
+
+Syntax {sql-statement} {
+INSERT [OR <conflict-algorithm>] INTO [<database-name> .] <table-name> [(<column-list>)] VALUES(<value-list>) |
+INSERT [OR <conflict-algorithm>] INTO [<database-name> .] <table-name> [(<column-list>)] <select-statement>
+}
+
+puts {
+<p>The INSERT statement comes in two basic forms. The first form
+(with the "VALUES" keyword) creates a single new row in an existing table.
+If no column-list is specified then the number of values must
+be the same as the number of columns in the table. If a column-list
+is specified, then the number of values must match the number of
+specified columns. Columns of the table that do not appear in the
+column list are filled with the default value, or with NULL if no
+default value is specified.
+</p>
+
+<p>The second form of the INSERT statement takes its data from a
+SELECT statement. The number of columns in the result of the
+SELECT must exactly match the number of columns in the table if
+no column list is specified, or it must match the number of columns
+named in the column list. A new entry is made in the table
+for every row of the SELECT result. The SELECT may be simple
+or compound.</p>
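+
+<p>As a brief sketch (the table and column names here are made up for
+illustration):</p>
+
+<blockquote><pre>
+-- First form: insert a single row of literal values
+INSERT INTO contacts(name, phone) VALUES('Alice', '555-1212');
+
+-- Second form: insert every row returned by a SELECT
+INSERT INTO contacts_archive(name, phone)
+  SELECT name, phone FROM contacts WHERE name LIKE 'A%';
+</pre></blockquote>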
+
+<p>The optional conflict-clause allows the specification of an alternative
+constraint conflict resolution algorithm to use during this one command.
+See the section titled
+<a href="#conflict">ON CONFLICT</a> for additional information.
+For compatibility with MySQL, the parser allows the use of the
+single keyword <a href="#replace">REPLACE</a> as an alias for "INSERT OR REPLACE".
+</p>
+}
+
+
+Section {ON CONFLICT clause} conflict
+
+Syntax {conflict-clause} {
+ON CONFLICT <conflict-algorithm>
+} {conflict-algorithm} {
+ROLLBACK | ABORT | FAIL | IGNORE | REPLACE
+}
+
+puts {
+<p>The ON CONFLICT clause is not a separate SQL command. It is a
+non-standard clause that can appear in many other SQL commands.
+It is given its own section in this document because it is not
+part of standard SQL and therefore might not be familiar.</p>
+
+<p>The syntax for the ON CONFLICT clause is as shown above for
+the CREATE TABLE command. For the INSERT and
+UPDATE commands, the keywords "ON CONFLICT" are replaced by "OR", to make
+the syntax seem more natural. For example, instead of
+"INSERT ON CONFLICT IGNORE" we have "INSERT OR IGNORE".
+The keywords change but the meaning of the clause is the same
+either way.</p>
+
+<p>The ON CONFLICT clause specifies an algorithm used to resolve
+constraint conflicts. There are five choices: ROLLBACK, ABORT,
+FAIL, IGNORE, and REPLACE. The default algorithm is ABORT. This
+is what they mean:</p>
+
+<dl>
+<dt><b>ROLLBACK</b></dt>
+<dd><p>When a constraint violation occurs, an immediate ROLLBACK
+occurs, thus ending the current transaction, and the command aborts
+with a return code of SQLITE_CONSTRAINT. If no transaction is
+active (other than the implied transaction that is created on every
+command) then this algorithm works the same as ABORT.</p></dd>
+
+<dt><b>ABORT</b></dt>
+<dd><p>When a constraint violation occurs, the command backs out
+any prior changes it might have made and aborts with a return code
+of SQLITE_CONSTRAINT. But no ROLLBACK is executed so changes
+from prior commands within the same transaction
+are preserved. This is the default behavior.</p></dd>
+
+<dt><b>FAIL</b></dt>
+<dd><p>When a constraint violation occurs, the command aborts with a
+return code SQLITE_CONSTRAINT. But any changes to the database that
+the command made prior to encountering the constraint violation
+are preserved and are not backed out. For example, if an UPDATE
+statement encountered a constraint violation on the 100th row that
+it attempts to update, then the first 99 row changes are preserved
+but changes to rows 100 and beyond never occur.</p></dd>
+
+<dt><b>IGNORE</b></dt>
+<dd><p>When a constraint violation occurs, the one row that contains
+the constraint violation is not inserted or changed. But the command
+continues executing normally. Other rows before and after the row that
+contained the constraint violation continue to be inserted or updated
+normally. No error is returned.</p></dd>
+
+<dt><b>REPLACE</b></dt>
+<dd><p>When a UNIQUE constraint violation occurs, the pre-existing rows
+that are causing the constraint violation are removed prior to inserting
+or updating the current row. Thus the insert or update always occurs.
+The command continues executing normally. No error is returned.
+If a NOT NULL constraint violation occurs, the NULL value is replaced
+by the default value for that column. If the column has no default
+value, then the ABORT algorithm is used. If a CHECK constraint violation
+occurs then the IGNORE algorithm is used.</p>
+
+<p>When this conflict resolution strategy deletes rows in order to
+satisfy a constraint, it does not invoke delete triggers on those
+rows. This behavior might change in a future release.</p>
+</dl>
+
+<p>The algorithm specified in the OR clause of an INSERT or UPDATE
+overrides any algorithm specified in a CREATE TABLE.
+If no algorithm is specified anywhere, the ABORT algorithm is used.</p>
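+
+<p>For example, assuming a hypothetical table <b>contacts</b> whose
+<b>name</b> column carries a UNIQUE constraint:</p>
+
+<blockquote><pre>
+-- OR IGNORE: a conflicting row is silently skipped; no error is raised
+INSERT OR IGNORE INTO contacts(name, phone) VALUES('Alice', '555-0000');
+
+-- OR REPLACE: the pre-existing row causing the conflict is removed
+-- and the new row is inserted in its place
+INSERT OR REPLACE INTO contacts(name, phone) VALUES('Alice', '555-0000');
+</pre></blockquote>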
+}
+
+Section REINDEX reindex
+
+Syntax {sql-statement} {
+ REINDEX <collation name>
+}
+Syntax {sql-statement} {
+ REINDEX [<database-name> .] <table/index-name>
+}
+
+puts {
+<p>The REINDEX command is used to delete and recreate indices from scratch.
+This is useful when the definition of a collation sequence has changed.
+</p>
+
+<p>In the first form, all indices in all attached databases that use the
+named collation sequence are recreated. In the second form, if
+<i>[database-name.]table/index-name</i> identifies a table, then all indices
+associated with the table are rebuilt. If an index is identified, then only
+this specific index is deleted and recreated.
+</p>
+
+<p>If no <i>database-name</i> is specified and there exists both a table or
+index and a collation sequence of the specified name, then only the indices
+associated with the collation sequence are reconstructed. This ambiguity can
+be avoided by always specifying a <i>database-name</i> when reindexing a
+specific table or index.</p>
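+
+<p>For example (using the built-in NOCASE collation and a made-up table name):</p>
+
+<blockquote><pre>
+-- First form: rebuild every index that uses the NOCASE collation
+REINDEX NOCASE;
+
+-- Second form: rebuild all indices on one table in the main database
+REINDEX main.contacts;
+</pre></blockquote>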
+}
+
+Section REPLACE replace
+
+Syntax {sql-statement} {
+REPLACE INTO [<database-name> .] <table-name> [( <column-list> )] VALUES ( <value-list> ) |
+REPLACE INTO [<database-name> .] <table-name> [( <column-list> )] <select-statement>
+}
+
+puts {
+<p>The REPLACE command is an alias for the "INSERT OR REPLACE" variant
+of the <a href="#insert">INSERT</a> command. This alias is provided for
+compatibility with MySQL. See the
+<a href="#insert">INSERT</a> command documentation for additional
+information.</p>
+}
+
+
+Section SELECT select
+
+Syntax {sql-statement} {
+SELECT [ALL | DISTINCT] <result> [FROM <table-list>]
+[WHERE <expr>]
+[GROUP BY <expr-list>]
+[HAVING <expr>]
+[<compound-op> <select>]*
+[ORDER BY <sort-expr-list>]
+[LIMIT <integer> [LP OFFSET | , RP <integer>]]
+} {result} {
+<result-column> [, <result-column>]*
+} {result-column} {
+STAR | <table-name> . STAR | <expr> [ [AS] <string> ]
+} {table-list} {
+<table> [<join-op> <table> <join-args>]*
+} {table} {
+<table-name> [AS <alias>] |
+( <select> ) [AS <alias>]
+} {join-op} {
+, | [NATURAL] [LEFT | RIGHT | FULL] [OUTER | INNER | CROSS] JOIN
+} {join-args} {
+[ON <expr>] [USING ( <id-list> )]
+} {sort-expr-list} {
+<expr> [<sort-order>] [, <expr> [<sort-order>]]*
+} {sort-order} {
+[ COLLATE <collation-name> ] [ ASC | DESC ]
+} {compound_op} {
+UNION | UNION ALL | INTERSECT | EXCEPT
+}
+
+puts {
+<p>The SELECT statement is used to query the database. The
+result of a SELECT is zero or more rows of data where each row
+has a fixed number of columns. The number of columns in the
+result is specified by the expression list in between the
+SELECT and FROM keywords. Any arbitrary expression can be used
+as a result. If a result expression is }
+puts "[Operator *] then all columns of all tables are substituted"
+puts {for that one expression. If the expression is the name of}
+puts "a table followed by [Operator .*] then the result is all columns"
+puts {in that one table.</p>
+
+<p>The DISTINCT keyword causes a subset of result rows to be returned,
+in which each result row is different. NULL values are not treated as
+distinct from each other. The default behavior is that all result rows
+be returned, which can be made explicit with the keyword ALL.</p>
+
+<p>The query is executed against one or more tables specified after
+the FROM keyword. If multiple table names are separated by commas,
+then the query is against the cross join of the various tables.
+The full SQL-92 join syntax can also be used to specify joins.
+A sub-query
+in parentheses may be substituted for any table name in the FROM clause.
+The entire FROM clause may be omitted, in which case the result is a
+single row consisting of the values of the expression list.
+</p>
+
+<p>The WHERE clause can be used to limit the number of rows over
+which the query operates.</p>
+
+<p>The GROUP BY clause causes one or more rows of the result to
+be combined into a single row of output. This is especially useful
+when the result contains aggregate functions. The expressions in
+the GROUP BY clause do <em>not</em> have to be expressions that
+appear in the result. The HAVING clause is similar to WHERE except
+that HAVING applies after grouping has occurred. The HAVING expression
+may refer to values, even aggregate functions, that are not in the result.</p>
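+
+<p>A small sketch (the <b>orders</b> table is hypothetical):</p>
+
+<blockquote><pre>
+-- One output row per customer; HAVING filters after grouping
+SELECT customer, sum(amount) FROM orders
+ GROUP BY customer
+ HAVING sum(amount) > 100;
+</pre></blockquote>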
+
+<p>The ORDER BY clause causes the output rows to be sorted.
+The argument to ORDER BY is a list of expressions that are used as the
+key for the sort. The expressions do not have to be part of the
+result for a simple SELECT, but in a compound SELECT each sort
+expression must exactly match one of the result columns. Each
+sort expression may optionally be followed by the COLLATE keyword and
+the name of a collating function used for ordering text, and/or by
+the keyword ASC or DESC to specify the sort order.</p>
+
+<p>The LIMIT clause places an upper bound on the number of rows
+returned in the result. A negative LIMIT indicates no upper bound.
+The optional OFFSET following LIMIT specifies how many
+rows to skip at the beginning of the result set.
+In a compound query, the LIMIT clause may only appear on the
+final SELECT statement.
+The limit is applied to the entire query not
+to the individual SELECT statement to which it is attached.
+Note that if the OFFSET keyword is used in the LIMIT clause, then the
+limit is the first number and the offset is the second number. If a
+comma is used instead of the OFFSET keyword, then the offset is the
+first number and the limit is the second number. This seeming
+contradiction is intentional - it maximizes compatibility with legacy
+SQL database systems.
+</p>
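+
+<p>For instance, both of the following statements (over a hypothetical
+<b>orders</b> table) skip the first 20 rows and return the next 10:</p>
+
+<blockquote><pre>
+SELECT * FROM orders ORDER BY amount LIMIT 10 OFFSET 20;
+SELECT * FROM orders ORDER BY amount LIMIT 20,10;
+</pre></blockquote>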
+
+<p>A compound SELECT is formed from two or more simple SELECTs connected
+by one of the operators UNION, UNION ALL, INTERSECT, or EXCEPT. In
+a compound SELECT, all the constituent SELECTs must specify the
+same number of result columns. There may be only a single ORDER BY
+clause at the end of the compound SELECT. The UNION and UNION ALL
+operators combine the results of the SELECTs to the right and left into
+a single big table. The difference is that in UNION all result rows
+are distinct where in UNION ALL there may be duplicates.
+The INTERSECT operator takes the intersection of the results of the
+left and right SELECTs. EXCEPT takes the result of left SELECT after
+removing the results of the right SELECT. When three or more SELECTs
+are connected into a compound, they group from left to right.</p>
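+
+<p>As an illustration (the table names are invented):</p>
+
+<blockquote><pre>
+-- UNION removes duplicate rows; UNION ALL would keep them
+SELECT name FROM contacts
+UNION
+SELECT name FROM contacts_archive;
+
+-- EXCEPT returns rows of the left SELECT that are not in the right SELECT
+SELECT name FROM contacts
+EXCEPT
+SELECT name FROM contacts_archive;
+</pre></blockquote>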
+}
+
+
+Section UPDATE update
+
+Syntax {sql-statement} {
+UPDATE [ OR <conflict-algorithm> ] [<database-name> .] <table-name>
+SET <assignment> [, <assignment>]*
+[WHERE <expr>]
+} {assignment} {
+<column-name> = <expr>
+}
+
+puts {
+<p>The UPDATE statement is used to change the value of columns in
+selected rows of a table. Each assignment in an UPDATE specifies
+a column name to the left of the equals sign and an arbitrary expression
+to the right. The expressions may use the values of other columns.
+All expressions are evaluated before any assignments are made.
+A WHERE clause can be used to restrict which rows are updated.</p>
+
+<p>The optional conflict-clause allows the specification of an alternative
+constraint conflict resolution algorithm to use during this one command.
+See the section titled
+<a href="#conflict">ON CONFLICT</a> for additional information.</p>
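+
+<p>Because all expressions are evaluated before any assignment takes effect,
+an UPDATE can, for example, swap two columns (the table <b>points</b> is
+hypothetical):</p>
+
+<blockquote><pre>
+UPDATE points SET x = y, y = x WHERE id = 1;
+</pre></blockquote>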
+}
+
+
+Section VACUUM vacuum
+
+Syntax {sql-statement} {
+VACUUM [<index-or-table-name>]
+}
+
+puts {
+<p>The VACUUM command is an SQLite extension modeled after a similar
+command found in PostgreSQL. If VACUUM is invoked with the name of a
+table or index then it is supposed to clean up the named table or index.
+In version 1.0 of SQLite, the VACUUM command would invoke
+<b>gdbm_reorganize()</b> to clean up the backend database file.</p>
+
+<p>
+VACUUM became a no-op when the GDBM backend was removed from
+SQLite in version 2.0.0.
+VACUUM was reimplemented in version 2.8.1.
+The index or table name argument is now ignored.
+</p>
+
+<p>When an object (table, index, or trigger) is dropped from the
+database, it leaves behind empty space. This makes the database
+file larger than it needs to be, but can speed up inserts. In time
+inserts and deletes can leave the database file structure fragmented,
+which slows down disk access to the database contents.
+
+The VACUUM command cleans
+the main database by copying its contents to a temporary database file and
+reloading the original database file from the copy. This eliminates
+free pages, aligns table data to be contiguous, and otherwise cleans
+up the database file structure. It is not possible to perform the same
+process on an attached database file.</p>
+
+<p>This command will fail if there is an active transaction. This
+command has no effect on an in-memory database.</p>
+
+<p>As of SQLite version 3.1, an alternative to using the VACUUM command
+is auto-vacuum mode, enabled using the
+<a href="pragma.html#pragma_auto_vacuum">auto_vacuum pragma</a>.</p>
+}
+
+# A list of keywords. An asterisk occurs after the keyword if it is on
+# the fallback list.
+#
+set keyword_list [lsort {
+ ABORT*
+ AFTER*
+ ALL
+ ALTER
+ AND
+ AS
+ ASC*
+ ATTACH*
+ AUTOINCREMENT
+ BEFORE*
+ BEGIN*
+ BETWEEN
+ BY
+ CASCADE*
+ CASE
+ CHECK
+ COLLATE
+ COMMIT
+ CONFLICT*
+ CONSTRAINT
+ CREATE
+ CROSS
+ CURRENT_DATE*
+ CURRENT_TIME*
+ CURRENT_TIMESTAMP*
+ DATABASE*
+ DEFAULT
+ DEFERRED*
+ DEFERRABLE
+ DELETE
+ DESC*
+ DETACH*
+ DISTINCT
+ DROP
+ END*
+ EACH*
+ ELSE
+ ESCAPE
+ EXCEPT
+ EXCLUSIVE*
+ EXPLAIN*
+ FAIL*
+ FOR*
+ FOREIGN
+ FROM
+ FULL
+ GLOB*
+ GROUP
+ HAVING
+ IGNORE*
+ IMMEDIATE*
+ IN
+ INDEX
+ INITIALLY*
+ INNER
+ INSERT
+ INSTEAD*
+ INTERSECT
+ INTO
+ IS
+ ISNULL
+ JOIN
+ KEY*
+ LEFT
+ LIKE*
+ LIMIT
+ MATCH*
+ NATURAL
+ NOT
+ NOTNULL
+ NULL
+ OF*
+ OFFSET*
+ ON
+ OR
+ ORDER
+ OUTER
+ PRAGMA*
+ PRIMARY
+ RAISE*
+ REFERENCES
+ REINDEX*
+ RENAME*
+ REPLACE*
+ RESTRICT*
+ RIGHT
+ ROLLBACK
+ ROW*
+ SELECT
+ SET
+ STATEMENT*
+ TABLE
+ TEMP*
+ TEMPORARY*
+ THEN
+ TO
+ TRANSACTION
+ TRIGGER*
+ UNION
+ UNIQUE
+ UPDATE
+ USING
+ VACUUM*
+ VALUES
+ VIEW*
+ WHEN
+ WHERE
+}]
+
+
+
+Section {SQLite keywords} keywords
+
+puts {
+<p>The SQL standard specifies a huge number of keywords which may not
+be used as the names of tables, indices, columns, or databases. The
+list is so long that few people can remember them all. For most SQL
+code, your safest bet is to never use any English language word as the
+name of a user-defined object.</p>
+
+<p>If you want to use a keyword as a name, you need to quote it. There
+are three ways of quoting keywords in SQLite:</p>
+
+<p>
+<blockquote>
+<table>
+<tr> <td valign="top"><b>'keyword'</b></td><td width="20"></td>
+ <td>A keyword in single quotes is interpreted as a literal string
+ if it occurs in a context where a string literal is allowed, otherwise
+ it is understood as an identifier.</td></tr>
+<tr> <td valign="top"><b>"keyword"</b></td><td></td>
+ <td>A keyword in double-quotes is interpreted as an identifier if
+ it matches a known identifier. Otherwise it is interpreted as a
+ string literal.</td></tr>
+<tr> <td valign="top"><b>[keyword]</b></td><td></td>
+ <td>A keyword enclosed in square brackets is always understood as
+ an identifier. This is not standard SQL. This quoting mechanism
+ is used by MS Access and SQL Server and is included in SQLite for
+ compatibility.</td></tr>
+</table>
+</blockquote>
+</p>
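+
+<p>For example, to use the keyword "group" as a table name (an invented
+name, for illustration only):</p>
+
+<blockquote><pre>
+-- Double quotes mark "group" as an identifier
+CREATE TABLE "group"(id INTEGER PRIMARY KEY, label TEXT);
+
+-- Square brackets work too (non-standard, but accepted by SQLite)
+SELECT label FROM [group] WHERE id = 1;
+</pre></blockquote>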
+
+<p>Quoted keywords are unaesthetic.
+To help you avoid them, SQLite allows many keywords to be used unquoted
+as the names of databases, tables, indices, triggers, views, and/or columns.
+In the list of keywords that follows, those that can be used as identifiers
+are shown in an italic font. Keywords that must be quoted in order to be
+used as identifiers are shown in bold.</p>
+
+<p>
+SQLite adds new keywords from time to time as it takes on new features.
+So to prevent your code from being broken by future enhancements, you should
+normally quote any identifier that is an English language word, even if
+you do not have to.
+</p>
+
+<p>
+The following are the keywords currently recognized by SQLite:
+</p>
+
+<blockquote>
+<table width="100%">
+<tr>
+<td align="left" valign="top" width="20%">
+}
+
+set n [llength $keyword_list]
+set nCol 5
+set nRow [expr {($n+$nCol-1)/$nCol}]
+set i 0
+foreach word $keyword_list {
+ if {[string index $word end]=="*"} {
+ set word [string range $word 0 end-1]
+ set font i
+ } else {
+ set font b
+ }
+ if {$i==$nRow} {
+ puts "</td><td valign=\"top\" align=\"left\" width=\"20%\">"
+ set i 1
+ } else {
+ incr i
+ }
+ puts "<$font>$word</$font><br>"
+}
+
+puts {
+</td></tr></table></blockquote>
+
+<h2>Special names</h2>
+
+<p>The following are not keywords in SQLite, but are used as names of
+system objects. They can be used as an identifier for a different
+type of object.</p>
+
+<blockquote><b>
+ _ROWID_<br>
+ MAIN<br>
+ OID<br>
+ ROWID<br>
+ SQLITE_MASTER<br>
+ SQLITE_SEQUENCE<br>
+ SQLITE_TEMP_MASTER<br>
+ TEMP<br>
+</b></blockquote>
+}
+
+footer $rcsid
+if {[string length $outputdir]} {
+ footer $rcsid
+}
Added: freeswitch/trunk/libs/sqlite/www/lockingv3.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/lockingv3.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,567 @@
+#
+# Run this script to generate a lockingv3.html output file
+#
+set rcsid {$Id: }
+source common.tcl
+header {File Locking And Concurrency In SQLite Version 3}
+
+proc HEADING {level title} {
+ global pnum
+ incr pnum($level)
+ foreach i [array names pnum] {
+ if {$i>$level} {set pnum($i) 0}
+ }
+ set h [expr {$level+1}]
+ if {$h>6} {set h 6}
+ set n $pnum(1).$pnum(2)
+ for {set i 3} {$i<=$level} {incr i} {
+ append n .$pnum($i)
+ }
+ puts "<h$h>$n $title</h$h>"
+}
+set pnum(1) 0
+set pnum(2) 0
+set pnum(3) 0
+set pnum(4) 0
+set pnum(5) 0
+set pnum(6) 0
+set pnum(7) 0
+set pnum(8) 0
+
+HEADING 1 {File Locking And Concurrency In SQLite Version 3}
+
+puts {
+<p>Version 3 of SQLite introduces a more complex locking and journaling
+mechanism designed to improve concurrency and reduce the writer starvation
+problem. The new mechanism also allows atomic commits of transactions
+involving multiple database files.
+This document describes the new locking mechanism.
+The intended audience is programmers who want to understand and/or modify
+the pager code and reviewers working to verify the design
+of SQLite version 3.
+</p>
+}
+
+HEADING 1 {Overview}
+
+puts {
+<p>
+Locking and concurrency control are handled by the
+<a href="http://www.sqlite.org/cvstrac/getfile/sqlite/src/pager.c">
+pager module</a>.
+The pager module is responsible for making SQLite "ACID" (Atomic,
+Consistent, Isolated, and Durable). The pager module makes sure changes
+happen all at once, that either all changes occur or none of them do,
+that two or more processes do not try to access the database
+in incompatible ways at the same time, and that once changes have been
+written they persist until explicitly deleted. The pager also provides
+a memory cache of some of the contents of the disk file.</p>
+
+<p>The pager is unconcerned
+with the details of B-Trees, text encodings, indices, and so forth.
+From the point of view of the pager the database consists of
+a single file of uniform-sized blocks. Each block is called a
+"page" and is usually 1024 bytes in size. The pages are numbered
+beginning with 1. So the first 1024 bytes of the database are called
+"page 1" and the second 1024 bytes are called "page 2" and so forth. All
+other encoding details are handled by higher layers of the library.
+The pager communicates with the operating system using one of several
+modules
+(Examples:
+<a href="http://www.sqlite.org/cvstrac/getfile/sqlite/src/os_unix.c">
+os_unix.c</a>,
+<a href="http://www.sqlite.org/cvstrac/getfile/sqlite/src/os_win.c">
+os_win.c</a>)
+that provides a uniform abstraction for operating system services.
+</p>
+
+<p>The pager module effectively controls access for separate threads, or
+separate processes, or both. Throughout this document whenever the
+word "process" is written you may substitute the word "thread" without
+changing the truth of the statement.</p>
+}
+
+HEADING 1 {Locking}
+
+puts {
+<p>
+From the point of view of a single process, a database file
+can be in one of five locking states:
+</p>
+
+<p>
+<table cellpadding="20">
+<tr><td valign="top">UNLOCKED</td>
+<td valign="top">
+No locks are held on the database. The database may be neither read nor
+written. Any internally cached data is considered suspect and subject to
+verification against the database file before being used. Other
+processes can read or write the database as their own locking states
+permit. This is the default state.
+</td></tr>
+
+<tr><td valign="top">SHARED</td>
+<td valign="top">
+The database may be read but not written. Any number of
+processes can hold SHARED locks at the same time, hence there can be
+many simultaneous readers. But no other thread or process is allowed
+to write to the database file while one or more SHARED locks are active.
+</td></tr>
+
+<tr><td valign="top">RESERVED</td>
+<td valign="top">
+A RESERVED lock means that the process is planning on writing to the
+database file at some point in the future but that it is currently just
+reading from the file. Only a single RESERVED lock may be active at one
+time, though multiple SHARED locks can coexist with a single RESERVED lock.
+RESERVED differs from PENDING in that new SHARED locks can be acquired
+while there is a RESERVED lock.
+</td></tr>
+
+<tr><td valign="top">PENDING</td>
+<td valign="top">
+A PENDING lock means that the process holding the lock wants to write
+to the database as soon as possible and is just waiting on all current
+SHARED locks to clear so that it can get an EXCLUSIVE lock. No new
+SHARED locks are permitted against the database if
+a PENDING lock is active, though existing SHARED locks are allowed to
+continue.
+</td></tr>
+
+<tr><td valign="top">EXCLUSIVE</td>
+<td valign="top">
+An EXCLUSIVE lock is needed in order to write to the database file.
+Only one EXCLUSIVE lock is allowed on the file and no other locks of
+any kind are allowed to coexist with an EXCLUSIVE lock. In order to
+maximize concurrency, SQLite works to minimize the amount of time that
+EXCLUSIVE locks are held.
+</td></tr>
+</table>
+</p>
+
+<p>
+The operating system interface layer understands and tracks all five
+locking states described above.
+The pager module only tracks four of the five locking states.
+A PENDING lock is always just a temporary
+stepping stone on the path to an EXCLUSIVE lock and so the pager module
+does not track PENDING locks.
+</p>
+}
+
+HEADING 1 {The Rollback Journal}
+
+puts {
+<p>Any time a process wants to make a change to a database file, it
+first records enough information in the <em>rollback journal</em> to
+restore the database file back to its initial condition. Thus, before
+altering any page of the database, the original contents of that page
+must be written into the journal. The journal also records the initial
+size of the database so that if the database file grows it can be truncated
+back to its original size on a rollback.</p>
+
+<p>The rollback journal is an ordinary disk file that has the same name as
+the database file with the suffix "<tt>-journal</tt>" added.</p>
+
+<p>If SQLite is working with multiple databases at the same time
+(using the ATTACH command) then each database has its own journal.
+But there is also a separate aggregate journal
+called the <em>master journal</em>.
+The master journal does not contain page data used for rolling back
+changes. Instead the master journal contains the names of the
+individual file journals for each of the ATTACHed databases. Each of
+the individual file journals also contains the name of the master journal.
+If there are no ATTACHed databases (or if none of the ATTACHed databases
+is participating in the current transaction) no master journal is
+created and the normal rollback journal contains an empty string
+in the place normally reserved for recording the name of the master
+journal.</p>
+
+<p>An individual file journal is said to be <em>hot</em>
+if it needs to be rolled back
+in order to restore the integrity of its database.
+A hot journal is created when a process is in the middle of a database
+update and a program or operating system crash or power failure prevents
+the update from completing.
+Hot journals are an exception condition.
+Hot journals exist to recover from crashes and power failures.
+If everything is working correctly
+(that is, if there are no crashes or power failures)
+you will never get a hot journal.
+</p>
+
+<p>
+If no master journal is involved, then
+a journal is hot if it exists and its corresponding database file
+does not have a RESERVED lock.
+If a master journal is named in the file journal, then the file journal
+is hot if its master journal exists and there is no RESERVED
+lock on the corresponding database file.
+It is important to understand when a journal is hot so the
+preceding rules will be repeated in bullets:
+</p>
+
+<ul>
+<li>A journal is hot if...
+ <ul>
+ <li>It exists, and</li>
+    <li>Its master journal exists or the master journal name is an
+ empty string, and</li>
+ <li>There is no RESERVED lock on the corresponding database file.</li>
+ </ul>
+</li>
+</ul>
+}
+
+HEADING 2 {Dealing with hot journals}
+
+puts {
+<p>
+Before reading from a database file, SQLite always checks to see if that
+database file has a hot journal. If the file does have a hot journal, then
+the journal is rolled back before the file is read. In this way, we ensure
+that the database file is in a consistent state before it is read.
+</p>
+
+<p>When a process wants to read from a database file, it follows
+this sequence of steps:
+</p>
+
+<ol>
+<li>Open the database file and obtain a SHARED lock. If the SHARED lock
+ cannot be obtained, fail immediately and return SQLITE_BUSY.</li>
+<li>Check to see if the database file has a hot journal. If the file
+ does not have a hot journal, we are done. Return immediately.
+ If there is a hot journal, that journal must be rolled back by
+ the subsequent steps of this algorithm.</li>
+<li>Acquire a PENDING lock then an EXCLUSIVE lock on the database file.
+ (Note: Do not acquire a RESERVED lock because that would make
+ other processes think the journal was no longer hot.) If we
+ fail to acquire these locks it means another process
+ is already trying to do the rollback. In that case,
+ drop all locks, close the database, and return SQLITE_BUSY. </li>
+<li>Read the journal file and roll back the changes.</li>
+<li>Wait for the rolled back changes to be written onto
+ the surface of the disk. This protects the integrity of the database
+ in case another power failure or crash occurs.</li>
+<li>Delete the journal file.</li>
+<li>Delete the master journal file if it is safe to do so.
+ This step is optional. It is here only to prevent stale
+ master journals from cluttering up the disk drive.
+ See the discussion below for details.</li>
+<li>Drop the EXCLUSIVE and PENDING locks but retain the SHARED lock.</li>
+</ol>
+
+<p>After the algorithm above completes successfully, it is safe to
+read from the database file. Once all reading has completed, the
+SHARED lock is dropped.</p>
+}
+
+HEADING 2 {Deleting stale master journals}
+
+puts {
+<p>A stale master journal is a master journal that is no longer being
+used for anything. There is no requirement that stale master journals
+be deleted. The only reason for doing so is to free up disk space.</p>
+
+<p>A master journal is stale if no individual file journals are pointing
+to it. To figure out if a master journal is stale, we first read the
+master journal to obtain the names of all of its file journals. Then
+we check each of those file journals. If any of the file journals named
+in the master journal exists and points back to the master journal, then
+the master journal is not stale. If all file journals are either missing
+or refer to other master journals or no master journal at all, then the
+master journal we are testing is stale and can be safely deleted.</p>
+}
+
+HEADING 1 {Writing to a database file}
+
+puts {
+<p>To write to a database, a process must first acquire a SHARED lock
+as described above (possibly rolling back incomplete changes if there
+is a hot journal).
+After a SHARED lock is obtained, a RESERVED lock must be acquired.
+The RESERVED lock signals that the process intends to write to the
+database at some point in the future. Only one process at a time
+can hold a RESERVED lock. But other processes can continue to read
+the database while the RESERVED lock is held.
+</p>
+
+<p>If the process that wants to write is unable to obtain a RESERVED
+lock, it must mean that another process already has a RESERVED lock.
+In that case, the write attempt fails and returns SQLITE_BUSY.</p>
+
+<p>After obtaining a RESERVED lock, the process that wants to write
+creates a rollback journal. The header of the journal is initialized
+with the original size of the database file. Space in the journal header
+is also reserved for a master journal name, though the master journal
+name is initially empty.</p>
+
+<p>Before making changes to any page of the database, the process writes
+the original content of that page into the rollback journal. Changes
+to pages are held in memory at first and are not written to the disk.
+The original database file remains unaltered, which means that other
+processes can continue to read the database.</p>
+
+<p>Eventually, the writing process will want to update the database
+file, either because its memory cache has filled up or because it is
+ready to commit its changes. Before this happens, the writer must
+make sure no other process is reading the database and that the rollback
+journal data is safely on the disk surface so that it can be used to
+rollback incomplete changes in the event of a power failure.
+The steps are as follows:</p>
+
+<ol>
+<li>Make sure all rollback journal data has actually been written to
+ the surface of the disk (and is not just being held in the operating
+    system's or disk controller's cache) so that if a power failure occurs
+ the data will still be there after power is restored.</li>
+<li>Obtain a PENDING lock and then an EXCLUSIVE lock on the database file.
+    If other processes still have SHARED locks, the writer might have
+ to wait until those SHARED locks clear before it is able to obtain
+ an EXCLUSIVE lock.</li>
+<li>Write all page modifications currently held in memory out to the
+ original database disk file.</li>
+</ol>
+
+<p>
+If the reason for writing to the database file is because the memory
+cache was full, then the writer will not commit right away. Instead,
+the writer might continue to make changes to other pages. Before
+subsequent changes are written to the database file, the rollback
+journal must be flushed to disk again. Note also that the EXCLUSIVE
+lock that the writer obtained in order to write to the database initially
+must be held until all changes are committed. That means that no other
+processes are able to access the database from the
+time the memory cache first spills to disk until the transaction
+commits.
+</p>
+
+<p>
+When a writer is ready to commit its changes, it executes the following
+steps:
+</p>
+
+<ol>
+<li value="4">
+ Obtain an EXCLUSIVE lock on the database file and
+ make sure all memory changes have been written to the database file
+ using the algorithm of steps 1-3 above.</li>
+<li>Flush all database file changes to the disk. Wait for those changes
+ to actually be written onto the disk surface.</li>
+<li>Delete the journal file. This is the instant when the changes are
+ committed. Prior to deleting the journal file, if a power failure
+ or crash occurs, the next process to open the database will see that
+ it has a hot journal and will roll the changes back.
+ After the journal is deleted, there will no longer be a hot journal
+ and the changes will persist.
+ </li>
+<li>Drop the EXCLUSIVE and PENDING locks from the database file.
+ </li>
+</ol>
+
+<p>As soon as the PENDING lock is released from the database file, other
+processes can begin reading the database again. In the current implementation,
+the RESERVED lock is also released, but that is not essential. Future
+versions of SQLite might provide a "CHECKPOINT" SQL command that will
+commit all changes made so far within a transaction but retain the
+RESERVED lock so that additional changes can be made without giving
+any other process an opportunity to write.</p>
+
+<p>If a transaction involves multiple databases, then a more complex
+commit sequence is used, as follows:</p>
+
+<ol>
+<li value="4">
+ Make sure all individual database files have an EXCLUSIVE lock and a
+ valid journal.
+<li>Create a master-journal. The name of the master-journal is arbitrary.
+ (The current implementation appends random suffixes to the name of the
+  main database file until it finds a name that does not already exist.)
+ Fill the master journal with the names of all the individual journals
+ and flush its contents to disk.
+<li>Write the name of the master journal into
+ all individual journals (in space set aside for that purpose in the
+ headers of the individual journals) and flush the contents of the
+ individual journals to disk and wait for those changes to reach the
+ disk surface.
+<li>Flush all database file changes to the disk. Wait for those changes
+ to actually be written onto the disk surface.</li>
+<li>Delete the master journal file. This is the instant when the changes are
+ committed. Prior to deleting the master journal file, if a power failure
+ or crash occurs, the individual file journals will be considered hot
+ and will be rolled back by the next process that
+ attempts to read them. After the master journal has been deleted,
+ the file journals will no longer be considered hot and the changes
+ will persist.
+ </li>
+<li>Delete all individual journal files.
+<li>Drop the EXCLUSIVE and PENDING locks from all database files.
+ </li>
+</ol>
+}
+
+HEADING 2 {Writer starvation}
+
+puts {
+<p>In SQLite version 2, if many processes are reading from the database,
+it might be the case that there is never a time when there are
+no active readers. And if there is always at least one read lock on the
+database, no process would ever be able to make changes to the database
+because it would be impossible to acquire a write lock. This situation
+is called <em>writer starvation</em>.</p>
+
+<p>SQLite version 3 seeks to avoid writer starvation through the use of
+the PENDING lock. The PENDING lock allows existing readers to continue
+but prevents new readers from connecting to the database. So when a
+process wants to write to a busy database, it can set a PENDING lock which
+will prevent new readers from coming in. Assuming existing readers do
+eventually complete, all SHARED locks will eventually clear and the
+writer will be given a chance to make its changes.</p>
+}
+
+HEADING 1 {How To Corrupt Your Database Files}
+
+puts {
+<p>The pager module is robust but it is not completely failsafe.
+It can be subverted. This section attempts to identify and explain
+the risks.</p>
+
+<p>
+Clearly, a hardware or operating system fault that introduces incorrect data
+into the middle of the database file or journal will cause problems.
+Likewise,
+if a rogue process opens a database file or journal and writes malformed
+data into the middle of it, then the database will become corrupt.
+There is not much that can be done about these kinds of problems
+so they are given no further attention.
+</p>
+
+<p>
+SQLite uses POSIX advisory locks to implement locking on Unix. On
+windows it uses the LockFile(), LockFileEx(), and UnlockFile() system
+calls. SQLite assumes that these system calls all work as advertised. If
+that is not the case, then database corruption can result. One should
+note that POSIX advisory locking is known to be buggy or even unimplemented
+on many NFS implementations (including recent versions of Mac OS X)
+and that there are reports of locking problems
+for network filesystems under windows. Your best defense is to not
+use SQLite for files on a network filesystem.
+</p>
+
+<p>
+SQLite uses the fsync() system call to flush data to the disk under Unix and
+it uses the FlushFileBuffers() to do the same under windows. Once again,
+SQLite assumes that these operating system services function as advertised.
+But it has been reported that fsync() and FlushFileBuffers() do not always
+work correctly, especially with inexpensive IDE disks. Apparently some
+manufacturers of IDE disks have defective controller chips that report
+that data has reached the disk surface when in fact the data is still
+in volatile cache memory in the disk drive electronics. There are also
+reports that windows sometimes chooses to ignore FlushFileBuffers() for
+unspecified reasons. The author cannot verify any of these reports.
+But if they are true, it means that database corruption is a possibility
+following an unexpected power loss. These are hardware and/or operating
+system bugs that SQLite is unable to defend against.
+</p>
+
+<p>
+If a crash or power failure occurs and results in a hot journal but that
+journal is deleted, the next process to open the database will not
+know that it contains changes that need to be rolled back. The rollback
+will not occur and the database will be left in an inconsistent state.
+Rollback journals might be deleted for any number of reasons:
+</p>
+
+<ul>
+<li>An administrator might be cleaning up after an OS crash or power failure,
+ see the journal file, think it is junk, and delete it.</li>
+<li>Someone (or some process) might rename the database file but fail to
+ also rename its associated journal.</li>
+<li>If the database file has aliases (hard or soft links) and the file
+ is opened by a different alias than the one used to create the journal,
+ then the journal will not be found. To avoid this problem, you should
+ not create links to SQLite database files.</li>
+<li>Filesystem corruption following a power failure might cause the
+ journal to be renamed or deleted.</li>
+</ul>
+
+<p>
+The last (fourth) bullet above merits additional comment. When SQLite creates
+a journal file on Unix, it opens the directory that contains that file and
+calls fsync() on the directory, in an effort to push the directory information
+to disk. But suppose some other process is adding files to or removing
+files from the directory that contains the database and journal at the
+moment of a power failure. The supposedly unrelated actions of this other
+process might result in the journal file being dropped from the directory and
+moved into "lost+found". This is an unlikely scenario, but it could happen.
+The best defenses are to use a journaling filesystem or to keep the
+database and journal in a directory by themselves.
+</p>
+
+<p>
+For a commit involving multiple databases and a master journal, if the
+various databases were on different disk volumes and a power failure occurs
+during the commit, then when the machine comes back up the disks might
+be remounted with different names. Or some disks might not be mounted
+at all. When this happens the individual file journals and the master
+journal might not be able to find each other. The worst outcome from
+this scenario is that the commit ceases to be atomic.
+Some databases might be rolled back and others might not.
+All databases will continue to be self-consistent.
+To defend against this problem, keep all databases
+on the same disk volume and/or remount disks using exactly the same names
+after a power failure.
+</p>
+}
+
+HEADING 1 {Transaction Control At The SQL Level}
+
+puts {
+<p>
+The changes to locking and concurrency control in SQLite version 3 also
+introduce some subtle changes in the way transactions work at the SQL
+language level.
+By default, SQLite version 3 operates in <em>autocommit</em> mode.
+In autocommit mode,
+all changes to the database are committed as soon as all operations associated
+with the current database connection complete.</p>
+
+<p>The SQL command "BEGIN TRANSACTION" (the TRANSACTION keyword
+is optional) is used to take SQLite out of autocommit mode.
+Note that the BEGIN command does not acquire any locks on the database.
+After a BEGIN command, a SHARED lock will be acquired when the first
+SELECT statement is executed. A RESERVED lock will be acquired when
+the first INSERT, UPDATE, or DELETE statement is executed. No EXCLUSIVE
+lock is acquired until either the memory cache fills up and must
+be spilled to disk or until the transaction commits. In this way,
+the system delays blocking read access to the database file until the
+last possible moment.
+</p>
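+
+<p>The following sketch (against an imaginary <b>orders</b> table) notes the
+point at which each lock is acquired:</p>
+
+<blockquote><pre>
+BEGIN TRANSACTION;             -- no locks are acquired yet
+SELECT count(*) FROM orders;   -- first read: a SHARED lock is acquired
+UPDATE orders SET amount = 0;  -- first write: a RESERVED lock is acquired
+COMMIT;                        -- EXCLUSIVE lock taken briefly, then all locks released
+</pre></blockquote>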
+
+<p>The SQL command "COMMIT" does not actually commit the changes to
+disk. It just turns autocommit back on. Then, at the conclusion of
+the command, the regular autocommit logic takes over and causes the
+actual commit to disk to occur.
+The SQL command "ROLLBACK" also operates by turning autocommit back on,
+but it also sets a flag that tells the autocommit logic to rollback rather
+than commit.</p>
+
+<p>If the SQL COMMIT command turns autocommit on and the autocommit logic
+then tries to commit changes but fails because some other process is holding
+a SHARED lock, then autocommit is turned back off automatically. This
+allows the user to retry the COMMIT at a later time after the SHARED lock
+has had an opportunity to clear.</p>
+
+<p>If multiple commands are being executed against the same SQLite database
+connection at the same time, then autocommit is deferred until the very
+last command completes. For example, if a SELECT statement is being
+executed, the execution of the command will pause as each row of the
+result is returned. During this pause other INSERT, UPDATE, or DELETE
+commands can be executed against other tables in the database. But none
+of these changes will commit until the original SELECT statement finishes.
+</p>
+}
+
+
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/mingw.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/mingw.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,160 @@
+#
+# Run this Tcl script to generate the mingw.html file.
+#
+set rcsid {$Id: mingw.tcl,v 1.4 2003/03/30 18:58:58 drh Exp $}
+
+puts {<html>
+<head>
+ <title>Notes On How To Build MinGW As A Cross-Compiler</title>
+</head>
+<body bgcolor=white>
+<h1 align=center>
+Notes On How To Build MinGW As A Cross-Compiler
+</h1>}
+puts "<p align=center>
+(This page was last modified on [lrange $rcsid 3 4] UTC)
+</p>"
+
+puts {
+<p><a href="http://www.mingw.org/">MinGW</a> or
+<a href="http://www.mingw.org/">Minimalist GNU For Windows</a>
+is a version of the popular GCC compiler that builds Win95/Win98/WinNT
+binaries. See the website for details.</p>
+
+<p>This page describes how you can build MinGW
+from sources as a cross-compiler
+running under Linux. Doing so will allow you to construct
+WinNT binaries from the comfort and convenience of your
+Unix desktop.</p>
+}
+
+proc Link {path {file {}}} {
+ if {$file!=""} {
+ set path $path/$file
+ } else {
+ set file $path
+ }
+ puts "<a href=\"$path\">$file</a>"
+}
+
+puts {
+<p>Here are the steps:</p>
+
+<ol>
+<li>
+<p>Get a copy of source code. You will need the binutils, the
+compiler, and the MinGW runtime. Each are available separately.
+As of this writing, Mumit Khan has collected everything you need
+together in one FTP site:
+}
+set ftpsite \
+ ftp://ftp.nanotech.wisc.edu/pub/khan/gnu-win32/mingw32/snapshots/gcc-2.95.2-1
+Link $ftpsite
+puts {
+The three files you will need are:</p>
+<ul>
+<li>}
+Link $ftpsite binutils-19990818-1-src.tar.gz
+puts </li><li>
+Link $ftpsite gcc-2.95.2-1-src.tar.gz
+puts </li><li>
+Link $ftpsite mingw-20000203.zip
+puts {</li>
+</ul>
+
+<p>Put all the downloads in a directory out of the way. The sequel
+will assume all downloads are in a directory named
+<b>~/mingw/download</b>.</p>
+</li>
+
+<li>
+<p>
+Create a directory in which to install the new compiler suite and make
+the new directory writable.
+Depending on what directory you choose, you might need to become
+root. The example shell commands that follow
+will assume the installation directory is
+<b>/opt/mingw</b> and that your user ID is <b>drh</b>.</p>
+<blockquote><pre>
+su
+mkdir /opt/mingw
+chown drh /opt/mingw
+exit
+</pre></blockquote>
+</li>
+
+<li>
+<p>Unpack the source tarballs into a separate directory.</p>
+<blockquote><pre>
+mkdir ~/mingw/src
+cd ~/mingw/src
+tar xzf ../download/binutils-*.tar.gz
+tar xzf ../download/gcc-*.tar.gz
+unzip ../download/mingw-*.zip
+</pre></blockquote>
+</li>
+
+<li>
+<p>Create a directory in which to put all the build products.</p>
+<blockquote><pre>
+mkdir ~/mingw/bld
+</pre></blockquote>
+</li>
+
+<li>
+<p>Configure and build binutils and add the results to your PATH.</p>
+<blockquote><pre>
+mkdir ~/mingw/bld/binutils
+cd ~/mingw/bld/binutils
+../../src/binutils/configure --prefix=/opt/mingw --target=i386-mingw32 -v
+make 2>&1 | tee make.out
+make install 2>&1 | tee make-install.out
+export PATH=$PATH:/opt/mingw/bin
+</pre></blockquote>
+</li>
+
+<li>
+<p>Manually copy the runtime include files into the installation directory
+before trying to build the compiler.</p>
+<blockquote><pre>
+mkdir /opt/mingw/i386-mingw32/include
+cd ~/mingw/src/mingw-runtime*/mingw/include
+cp -r * /opt/mingw/i386-mingw32/include
+</pre></blockquote>
+</li>
+
+<li>
+<p>Configure and build the compiler</p>
+<blockquote><pre>
+mkdir ~/mingw/bld/gcc
+cd ~/mingw/bld/gcc
+../../src/gcc-*/configure --prefix=/opt/mingw --target=i386-mingw32 -v
+cd gcc
+make installdirs
+cd ..
+make 2>&1 | tee make.out
+make install
+</pre></blockquote>
+</li>
+
+<li>
+<p>Configure and build the MinGW runtime</p>
+<blockquote><pre>
+mkdir ~/mingw/bld/runtime
+cd ~/mingw/bld/runtime
+../../src/mingw-runtime*/configure --prefix=/opt/mingw --target=i386-mingw32 -v
+make install-target-w32api
+make install
+</pre></blockquote>
+</li>
+</ol>
+
+<p>And you are done...</p>
+}
+puts {
+<p><hr /></p>
+<p><a href="index.html"><img src="/goback.jpg" border=0 />
+Back to the SQLite Home Page</a>
+</p>
+
+</body></html>}
Added: freeswitch/trunk/libs/sqlite/www/nulls.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/nulls.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,329 @@
+#
+# Run this script to generate a nulls.html output file
+#
+set rcsid {$Id: nulls.tcl,v 1.8 2004/10/10 17:24:55 drh Exp $}
+source common.tcl
+header {NULL Handling in SQLite}
+puts {
+<h2>NULL Handling in SQLite Versus Other Database Engines</h2>
+
+<p>
+The goal is
+to make SQLite handle NULLs in a standards-compliant way.
+But the descriptions in the SQL standards on how to handle
+NULLs seem ambiguous.
+It is not clear from the standards documents exactly how NULLs should
+be handled in all circumstances.
+</p>
+
+<p>
+So instead of going by the standards documents, various popular
+SQL engines were tested to see how they handle NULLs. The idea
+was to make SQLite work like all the other engines.
+A SQL test script was developed and run by volunteers on various
+SQL RDBMSes and the results of those tests were used to deduce
+how each engine processed NULL values.
+The original tests were run in May of 2002.
+A copy of the test script is found at the end of this document.
+</p>
+
+<p>
+SQLite was originally coded in such a way that the answer to
+all questions in the chart below would be "Yes". But the
+experiments run on other SQL engines showed that none of them
+worked this way. So SQLite was modified to work the same as
+Oracle, PostgreSQL, and DB2. This involved making NULLs
+indistinct for the purposes of the SELECT DISTINCT statement and
+for the UNION operator in a SELECT. NULLs are still distinct
+in a UNIQUE column. This seems somewhat arbitrary, but the desire
+to be compatible with other engines outweighed that objection.
+</p>
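+
+<p>The behavior can be seen with a short experiment (the table name is
+invented):</p>
+
+<blockquote><pre>
+CREATE TABLE t(x INTEGER UNIQUE);
+INSERT INTO t VALUES(NULL);
+INSERT INTO t VALUES(NULL);   -- allowed: NULLs are distinct in a UNIQUE column
+
+SELECT DISTINCT x FROM t;     -- returns a single NULL row: NULLs are
+                              -- indistinct for SELECT DISTINCT
+</pre></blockquote>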
+
+<p>
+It is possible to make SQLite treat NULLs as distinct for the
+purposes of the SELECT DISTINCT and UNION. To do so, one should
+change the value of the NULL_ALWAYS_DISTINCT #define in the
+<tt>sqliteInt.h</tt> source file and recompile.
+</p>
+
+<blockquote>
+<p>
+<i>Update 2003-07-13:</i>
+Since this document was originally written some of the database engines
+tested have been updated and users have been kind enough to send in
+corrections to the chart below. The original data showed a wide variety
+of behaviors, but over time the range of behaviors has converged toward
+the PostgreSQL/Oracle model. The only significant difference
+is that Informix and MS-SQL both treat NULLs as
+indistinct in a UNIQUE column.
+</p>
+
+<p>
+The fact that NULLs are distinct for UNIQUE columns but are indistinct for
+SELECT DISTINCT and UNION continues to be puzzling. It seems that NULLs
+should be either distinct everywhere or nowhere. And the SQL standards
+documents suggest that NULLs should be distinct everywhere. Yet as of
+this writing, no SQL engine tested treats NULLs as distinct in a SELECT
+DISTINCT statement or in a UNION.
+</p>
+</blockquote>
+
+
+<p>
+The following table shows the results of the NULL handling experiments.
+</p>
+
+<table border=1 cellpadding=3 width="100%">
+<tr><th>  </th>
+<th>SQLite</th>
+<th>PostgreSQL</th>
+<th>Oracle</th>
+<th>Informix</th>
+<th>DB2</th>
+<th>MS-SQL</th>
+<th>OCELOT</th>
+</tr>
+
+<tr><td>Adding anything to null gives null</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+<tr><td>Multiplying null by zero gives null</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+<tr><td>nulls are distinct in a UNIQUE column</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#aaaad2">(Note 4)</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+<tr><td>nulls are distinct in SELECT DISTINCT</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+</tr>
+<tr><td>nulls are distinct in a UNION</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+</tr>
+<tr><td>"CASE WHEN null THEN 1 ELSE 0 END" is 0?</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+<tr><td>"null OR true" is true</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+<tr><td>"not (null AND false)" is true</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+</table>
+
+<table border=1 cellpadding=3 width="100%">
+<tr><th>  </th>
+<th>MySQL<br>3.23.41</th>
+<th>MySQL<br>4.0.16</th>
+<th>Firebird</th>
+<th>SQL<br>Anywhere</th>
+<th>Borland<br>Interbase</th>
+</tr>
+
+<tr><td>Adding anything to null gives null</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+<tr><td>Multiplying null by zero gives null</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+<tr><td>nulls are distinct in a UNIQUE column</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#aaaad2">(Note 4)</td>
+<td valign="center" align="center" bgcolor="#aaaad2">(Note 4)</td>
+</tr>
+<tr><td>nulls are distinct in SELECT DISTINCT</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No (Note 1)</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+</tr>
+<tr><td>nulls are distinct in a UNION</td>
+<td valign="center" align="center" bgcolor="#aaaad2">(Note 3)</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No (Note 1)</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+</tr>
+<tr><td>"CASE WHEN null THEN 1 ELSE 0 END" is 0?</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#aaaad2">(Note 5)</td>
+</tr>
+<tr><td>"null OR true" is true</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+<tr><td>"not (null AND false)" is true</td>
+<td valign="center" align="center" bgcolor="#c7a9a9">No</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+<td valign="center" align="center" bgcolor="#a9c7a9">Yes</td>
+</tr>
+</table>
+
+<table border=0 align="right" cellpadding=0 cellspacing=0>
+<tr>
+<td valign="top" rowspan=5>Notes: </td>
+<td>1. </td>
+<td>Older versions of Firebird omit all NULLs from SELECT DISTINCT
+and from UNION.</td>
+</tr>
+<tr><td>2. </td>
+<td>Test data unavailable.</td>
+</tr>
+<tr><td>3. </td>
+<td>MySQL version 3.23.41 does not support UNION.</td>
+</tr>
+<tr><td>4. </td>
+<td>DB2, SQL Anywhere, and Borland Interbase
+do not allow NULLs in a UNIQUE column.</td>
+</tr>
+<tr><td>5. </td>
+<td>Borland Interbase does not support CASE expressions.</td>
+</tr>
+</table>
+<br clear="both">
+
+<p> </p>
+<p>
+The following script was used to gather information for the table
+above.
+</p>
+
+<pre>
+-- I have about decided that SQL's treatment of NULLs is capricious and cannot be
+-- deduced by logic. It must be discovered by experiment. To that end, I have
+-- prepared the following script to test how various SQL databases deal with NULL.
+-- My aim is to use the information gathered from this script to make SQLite as much
+-- like other databases as possible.
+--
+-- If you could please run this script in your database engine and mail the results
+-- to me at drh@hwaci.com, that will be a big help. Please be sure to identify the
+-- database engine you use for this test. Thanks.
+--
+-- If you have to change anything to get this script to run with your database
+-- engine, please send your revised script together with your results.
+--
+
+-- Create a test table with data
+create table t1(a int, b int, c int);
+insert into t1 values(1,0,0);
+insert into t1 values(2,0,1);
+insert into t1 values(3,1,0);
+insert into t1 values(4,1,1);
+insert into t1 values(5,null,0);
+insert into t1 values(6,null,1);
+insert into t1 values(7,null,null);
+
+-- Check to see what CASE does with NULLs in its test expressions
+select a, case when b<>0 then 1 else 0 end from t1;
+select a+10, case when not b<>0 then 1 else 0 end from t1;
+select a+20, case when b<>0 and c<>0 then 1 else 0 end from t1;
+select a+30, case when not (b<>0 and c<>0) then 1 else 0 end from t1;
+select a+40, case when b<>0 or c<>0 then 1 else 0 end from t1;
+select a+50, case when not (b<>0 or c<>0) then 1 else 0 end from t1;
+select a+60, case b when c then 1 else 0 end from t1;
+select a+70, case c when b then 1 else 0 end from t1;
+
+-- What happens when you multiply a NULL by zero?
+select a+80, b*0 from t1;
+select a+90, b*c from t1;
+
+-- What happens to NULL for other operators?
+select a+100, b+c from t1;
+
+-- Test the treatment of aggregate operators
+select count(*), count(b), sum(b), avg(b), min(b), max(b) from t1;
+
+-- Check the behavior of NULLs in WHERE clauses
+select a+110 from t1 where b<10;
+select a+120 from t1 where not b>10;
+select a+130 from t1 where b<10 OR c=1;
+select a+140 from t1 where b<10 AND c=1;
+select a+150 from t1 where not (b<10 AND c=1);
+select a+160 from t1 where not (c=1 AND b<10);
+
+-- Check the behavior of NULLs in a DISTINCT query
+select distinct b from t1;
+
+-- Check the behavior of NULLs in a UNION query
+select b from t1 union select b from t1;
+
+-- Create a new table with a unique column. Check to see if NULLs are considered
+-- to be distinct.
+create table t2(a int, b int unique);
+insert into t2 values(1,1);
+insert into t2 values(2,null);
+insert into t2 values(3,null);
+select * from t2;
+
+drop table t1;
+drop table t2;
+</pre>
+}
+
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/oldnews.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/oldnews.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,351 @@
+#!/usr/bin/tclsh
+source common.tcl
+header {SQLite Older News}
+
+proc newsitem {date title text} {
+ puts "<h3>$date - $title</h3>"
+ regsub -all "\n( *\n)+" $text "</p>\n\n<p>" txt
+ puts "<p>$txt</p>"
+ puts "<hr width=\"50%\">"
+}
+
+
+newsitem {2006-Feb-11} {Version 3.3.4} {
+  This release fixes several bugs, including a
+  blunder that might cause a deadlock on multithreaded systems.
+ Anyone using SQLite in a multithreaded environment should probably upgrade.
+}
+
+newsitem {2006-Jan-31} {Version 3.3.3 stable} {
+ There have been no major problems discovered in version 3.3.2, so
+ we hereby declare the new APIs and language features to be stable
+ and supported.
+}
+
+newsitem {2006-Jan-24} {Version 3.3.2 beta} {
+ More bug fixes and performance improvements as we move closer to
+ a production-ready version 3.3.x.
+}
+
+newsitem {2006-Jan-16} {Version 3.3.1 alpha} {
+ Many bugs found in last week's alpha release have now been fixed and
+ the library is running much faster again.
+
+ Database connections can now be moved between threads as long as the
+ connection holds no locks at the time it is moved. Thus the common
+ paradigm of maintaining a pool of database connections and handing
+ them off to transient worker threads is now supported.
+ Please help test this new feature.
+ See <a href="http://www.sqlite.org/cvstrac/wiki?p=MultiThreading">
+ the MultiThreading wiki page</a> for additional
+ information.
+}
+
+newsitem {2006-Jan-10} {Version 3.3.0 alpha} {
+ Version 3.3.0 adds support for CHECK constraints, DESC indices,
+ separate REAL and INTEGER column affinities, a new OS interface layer
+ design, and many other changes. The code passed a regression
+ test but should still be considered alpha. Please report any
+ problems.
+
+ The file format for version 3.3.0 has changed slightly to support
+ descending indices and
+ a more efficient encoding of boolean values. SQLite 3.3.0 will read and
+ write legacy databases created with any prior version of SQLite 3. But
+ databases created by version 3.3.0 will not be readable or writable
+  by earlier versions of SQLite. The older file format can be
+ specified at compile-time for those rare cases where it is needed.
+}
+
+newsitem {2005-Dec-19} {Versions 3.2.8 and 2.8.17} {
+ These versions contain one-line changes to 3.2.7 and 2.8.16 to fix a bug
+ that has been present since March of 2002 and version 2.4.0.
+ That bug might possibly cause database corruption if a large INSERT or
+ UPDATE statement within a multi-statement transaction fails due to a
+ uniqueness constraint but the containing transaction commits.
+}
+
+
+newsitem {2005-Sep-24} {Version 3.2.7} {
+ This version fixes several minor and obscure bugs.
+ Upgrade only if you are having problems.
+}
+
+newsitem {2005-Sep-16} {Version 3.2.6 - Critical Bug Fix} {
+ This version fixes a bug that can result in database
+ corruption if a VACUUM of a 1 gibibyte or larger database fails
+  (perhaps due to running out of disk space or an unexpected power loss)
+ and is later rolled back.
+ <p>
+ Also in this release:
+ The ORDER BY and GROUP BY processing was rewritten to use less memory.
+ Support for COUNT(DISTINCT) was added. The LIKE operator can now be
+ used by the optimizer on columns with COLLATE NOCASE.
+}
+
+newsitem {2005-Aug-27} {Version 3.2.5} {
+ This release fixes a few more lingering bugs in the new code.
+ We expect that this release will be stable and ready for production use.
+}
+
+newsitem {2005-Aug-24} {Version 3.2.4} {
+ This release fixes a bug in the new optimizer that can lead to segfaults
+ when parsing very complex WHERE clauses.
+}
+
+newsitem {2005-Aug-21} {Version 3.2.3} {
+ This release adds the <a href="lang_analyze.html">ANALYZE</a> command,
+ the <a href="lang_expr.html">CAST</a> operator, and many
+ very substantial improvements to the query optimizer. See the
+ <a href="changes.html#version_3_2_3">change log</a> for additional
+ information.
+}
+
+newsitem {2005-Aug-2} {2005 Open Source Award for SQLite} {
+ SQLite and its primary author D. Richard Hipp have been honored with
+ a <a href="http://osdir.com/Article6677.phtml">2005 Open Source
+ Award</a> from Google and O'Reilly.<br clear="right">
+}
+
+
+newsitem {2005-Jun-13} {Version 3.2.2} {
+ This release includes numerous minor bug fixes, speed improvements,
+ and code size reductions. There is no reason to upgrade unless you
+ are having problems or unless you just want to.
+}
+
+newsitem {2005-Mar-29} {Version 3.2.1} {
+ This release fixes a memory allocation problem in the new
+ <a href="lang_altertable.html">ALTER TABLE ADD COLUMN</a>
+ command.
+}
+
+newsitem {2005-Mar-21} {Version 3.2.0} {
+ The primary purpose for version 3.2.0 is to add support for
+ <a href="lang_altertable.html">ALTER TABLE ADD COLUMN</a>.
+ The new ADD COLUMN capability is made
+ possible by AOL developers supporting and embracing great
+ open-source software. Thanks, AOL!
+
+ Version 3.2.0 also fixes an obscure but serious bug that was discovered
+ just prior to release. If you have a multi-statement transaction and
+ within that transaction an UPDATE or INSERT statement fails due to a
+ constraint, then you try to rollback the whole transaction, the rollback
+ might not work correctly. See
+ <a href="http://www.sqlite.org/cvstrac/tktview?tn=1171">Ticket #1171</a>
+ for details. Upgrading is recommended for all users.
+}
+
+newsitem {2005-Mar-16} {Version 3.1.6} {
+ Version 3.1.6 fixes a critical bug that can cause database corruption
+ when inserting rows into tables with around 125 columns. This bug was
+ introduced in version 3.0.0. See
+ <a href="http://www.sqlite.org/cvstrac/tktview?tn=1163">Ticket #1163</a>
+ for additional information.
+}
+
+newsitem {2005-Mar-11} {Versions 3.1.4 and 3.1.5 Released} {
+ Version 3.1.4 fixes a critical bug that could cause database corruption
+ if the autovacuum mode of version 3.1.0 is turned on (it is off by
+ default) and a CREATE UNIQUE INDEX is executed within a transaction but
+ fails because the indexed columns are not unique. Anyone using the
+ autovacuum feature and unique indices should upgrade.
+
+ Version 3.1.5 adds the ability to disable
+ the F_FULLFSYNC ioctl() in OS-X by setting "PRAGMA synchronous=on" instead
+ of the default "PRAGMA synchronous=full". There was an attempt to add
+ this capability in 3.1.4 but it did not work due to a spelling error.
+}
+
+newsitem {2005-Feb-19} {Version 3.1.3 Released} {
+ Version 3.1.3 cleans up some minor issues discovered in version 3.1.2.
+}
+
+newsitem {2005-Feb-15} {Versions 2.8.16 and 3.1.2 Released} {
+ A critical bug in the VACUUM command that can lead to database
+ corruption has been fixed in both the 2.x branch and the main
+ 3.x line. This bug has existed in all prior versions of SQLite.
+ Even though it is unlikely you will ever encounter this bug,
+ it is suggested that all users upgrade. See
+ <a href="http://www.sqlite.org/cvstrac/tktview?tn=1116">
+  ticket #1116</a> for additional information.
+
+ Version 3.1.2 is also the first stable release of the 3.1
+ series. SQLite 3.1 features added support for correlated
+ subqueries, autovacuum, autoincrement, ALTER TABLE, and
+ other enhancements. See the
+ <a href="http://www.sqlite.org/releasenotes310.html">release notes
+ for version 3.1.0</a> for a detailed description of the
+ changes available in the 3.1 series.
+}
+
+newsitem {2005-Feb-01} {Version 3.1.1 (beta) Released} {
+ Version 3.1.1 (beta) is now available on the
+  website. Version 3.1.1 is fully backwards compatible with the 3.0 series
+ and features many new features including Autovacuum and correlated
+ subqueries. The
+ <a href="http://www.sqlite.org/releasenotes310.html">release notes</a>
+  from version 3.1.0 apply equally to this beta release. A stable release
+ is expected within a couple of weeks.
+}
+
+newsitem {2005-Jan-21} {Version 3.1.0 (alpha) Released} {
+ Version 3.1.0 (alpha) is now available on the
+  website. Version 3.1.0 is fully backwards compatible with the 3.0 series
+ and features many new features including Autovacuum and correlated
+ subqueries. See the
+ <a href="http://www.sqlite.org/releasenotes310.html">release notes</a>
+ for details.
+
+ This is an alpha release. A beta release is expected in about a week
+ with the first stable release to follow after two more weeks.
+}
+
+newsitem {2004-Nov-09} {SQLite at the 2004 International PHP Conference} {
+ There was a talk on the architecture of SQLite and how to optimize
+ SQLite queries at the 2004 International PHP Conference in Frankfurt,
+ Germany.
+ <a href="http://www.sqlite.org/php2004/page-001.html">
+ Slides</a> from that talk are available.
+}
+
+newsitem {2004-Oct-11} {Version 3.0.8} {
+ Version 3.0.8 of SQLite contains several code optimizations and minor
+ bug fixes and adds support for DEFERRED, IMMEDIATE, and EXCLUSIVE
+ transactions. This is an incremental release. There is no reason
+ to upgrade from version 3.0.7 if that version is working for you.
+}
+
+
+newsitem {2004-Oct-10} {SQLite at the 11<sup><small>th</small></sup>
+Annual Tcl/Tk Conference} {
+ There will be a talk on the use of SQLite in Tcl/Tk at the
+ 11<sup><small>th</small></sup> Tcl/Tk Conference this week in
+ New Orleans. Visit <a href="http://www.tcl.tk/community/tcl2004/">
+ http://www.tcl.tk/</a> for details.
+ <a href="http://www.sqlite.org/tclconf2004/page-001.html">
+ Slides</a> from the talk are available.
+}
+
+newsitem {2004-Sep-18} {Version 3.0.7} {
+ Version 3.0 has now been in use by multiple projects for several
+ months with no major difficulties. We consider it stable and
+ ready for production use.
+}
+
+newsitem {2004-Sep-02} {Version 3.0.6 (beta)} {
+ Because of some important changes to sqlite3_step(),
+ we have decided to
+ do an additional beta release prior to the first "stable" release.
+ If no serious problems are discovered in this version, we will
+ release version 3.0 "stable" in about a week.
+}
+
+
+newsitem {2004-Aug-29} {Version 3.0.5 (beta)} {
+ The fourth beta release of SQLite version 3.0 is now available.
+ The next release is expected to be called "stable".
+}
+
+
+newsitem {2004-Aug-08} {Version 3.0.4 (beta)} {
+ The third beta release of SQLite version 3.0 is now available.
+ This new beta fixes several bugs including a database corruption
+ problem that can occur when doing a DELETE while a SELECT is pending.
+ Expect at least one more beta before version 3.0 goes final.
+}
+
+newsitem {2004-July-22} {Version 3.0.3 (beta)} {
+ The second beta release of SQLite version 3.0 is now available.
+ This new beta fixes many bugs and adds support for databases with
+ varying page sizes. The next 3.0 release will probably be called
+ a final or stable release.
+
+ Version 3.0 adds support for internationalization and a new
+ more compact file format.
+ <a href="version3.html">Details.</a>
+ The API and file format have been fixed since 3.0.2. All
+ regression tests pass (over 100000 tests) and the test suite
+ exercises over 95% of the code.
+
+ SQLite version 3.0 is made possible in part by AOL
+ developers supporting and embracing great Open-Source Software.
+}
+
+newsitem {2004-Jul-22} {Version 2.8.15} {
+ SQLite version 2.8.15 is a maintenance release for the version 2.8
+ series. Version 2.8 continues to be maintained with bug fixes, but
+ no new features will be added to version 2.8. All the changes in
+  this release are minor. If you are not having problems, there is
+  no reason to upgrade.
+}
+
+newsitem {2004-Jun-30} {Version 3.0.2 (beta) Released} {
+ The first beta release of SQLite version 3.0 is now available.
+ Version 3.0 adds support for internationalization and a new
+ more compact file format.
+ <a href="version3.html">Details.</a>
+ As of this release, the API and file format are frozen. All
+ regression tests pass (over 100000 tests) and the test suite
+ exercises over 95% of the code.
+
+ SQLite version 3.0 is made possible in part by AOL
+ developers supporting and embracing great Open-Source Software.
+}
+
+
+newsitem {2004-Jun-25} {Website hacked} {
+ The www.sqlite.org website was hacked sometime around 2004-Jun-22
+ because the lead SQLite developer failed to properly patch CVS.
+ Evidence suggests that the attacker was unable to elevate privileges
+ above user "cvs". Nevertheless, as a precaution the entire website
+ has been reconstructed from scratch on a fresh machine. All services
+ should be back to normal as of 2004-Jun-28.
+}
+
+
+newsitem {2004-Jun-18} {Version 3.0.0 (alpha) Released} {
+ The first alpha release of SQLite version 3.0 is available for
+ public review and comment. Version 3.0 enhances internationalization support
+ through the use of UTF-16 and user-defined text collating sequences.
+ BLOBs can now be stored directly, without encoding.
+ A new file format results in databases that are 25% smaller (depending
+ on content). The code is also a little faster. In spite of the many
+ new features, the library footprint is still less than 240KB
+ (x86, gcc -O1).
+ <a href="version3.html">Additional information</a>.
+
+ Our intent is to freeze the file format and API on 2004-Jul-01.
+ Users are encouraged to review and evaluate this alpha release carefully
+ and submit any feedback prior to that date.
+
+ The 2.8 series of SQLite will continue to be supported with bug
+ fixes for the foreseeable future.
+}
+
+newsitem {2004-Jun-09} {Version 2.8.14 Released} {
+ SQLite version 2.8.14 is a patch release to the stable 2.8 series.
+ There is no reason to upgrade if 2.8.13 is working ok for you.
+ This is only a bug-fix release. Most development effort is
+ going into version 3.0.0 which is due out soon.
+}
+
+newsitem {2004-May-31} {CVS Access Temporarily Disabled} {
+ Anonymous access to the CVS repository will be suspended
+ for 2 weeks beginning on 2004-June-04. Everyone will still
+ be able to download
+ prepackaged source bundles, create or modify trouble tickets, or view
+ change logs during the CVS service interruption. Full open access to the
+ CVS repository will be restored on 2004-June-18.
+}
+
+newsitem {2004-Apr-23} {Work Begins On SQLite Version 3} {
+ Work has begun on version 3 of SQLite. Version 3 is a major
+  change to both the C-language API and the underlying file format
+ that will enable SQLite to better support internationalization.
+  The first beta is scheduled for release on 2004-July-01.
+
+ Plans are to continue to support SQLite version 2.8 with
+ bug fixes. But all new development will occur in version 3.0.
+}
+footer {$Id: oldnews.tcl,v 1.16 2006/08/12 14:38:47 drh Exp $}
Added: freeswitch/trunk/libs/sqlite/www/omitted.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/omitted.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,85 @@
+#
+# Run this script to generate the omitted.html output file
+#
+set rcsid {$Id: omitted.tcl,v 1.10 2005/11/03 00:41:18 drh Exp $}
+source common.tcl
+header {SQL Features That SQLite Does Not Implement}
+puts {
+<h2>SQL Features That SQLite Does Not Implement</h2>
+
+<p>
+Rather than try to list all the features of SQL92 that SQLite does
+support, it is much easier to list those that it does not.
+Unsupported features of SQL92 are shown below.</p>
+
+<p>
+The order of this list gives some hint as to when a feature might
+be added to SQLite. Those features near the top of the list are
+likely to be added in the near future. There are no immediate
+plans to add features near the bottom of the list.
+</p>
+
+<table cellpadding="10">
+}
+
+proc feature {name desc} {
+ puts "<tr><td valign=\"top\"><b><nobr>$name</nobr></b></td>"
+  puts "<td width=\"10\"> </td>"
+ puts "<td valign=\"top\">$desc</td></tr>"
+}
+
+feature {FOREIGN KEY constraints} {
+ FOREIGN KEY constraints are parsed but are not enforced.
+}
+
+feature {Complete trigger support} {
+ There is some support for triggers but it is not complete. Missing
+ subfeatures include FOR EACH STATEMENT triggers (currently all triggers
+ must be FOR EACH ROW), INSTEAD OF triggers on tables (currently
+ INSTEAD OF triggers are only allowed on views), and recursive
+ triggers - triggers that trigger themselves.
+}
+
+feature {Complete ALTER TABLE support} {
+ Only the RENAME TABLE and ADD COLUMN variants of the
+ ALTER TABLE command are supported. Other kinds of ALTER TABLE operations
+ such as
+ DROP COLUMN, ALTER COLUMN, ADD CONSTRAINT, and so forth are omitted.
+}
+
+feature {Nested transactions} {
+ The current implementation only allows a single active transaction.
+}
+
+feature {RIGHT and FULL OUTER JOIN} {
+ LEFT OUTER JOIN is implemented, but not RIGHT OUTER JOIN or
+ FULL OUTER JOIN.
+}
+
+feature {Writing to VIEWs} {
+ VIEWs in SQLite are read-only. You may not execute a DELETE, INSERT, or
+ UPDATE statement on a view. But you can create a trigger
+ that fires on an attempt to DELETE, INSERT, or UPDATE a view and do
+  what you need in the body of the trigger. (A sketch of this technique
+  appears at the end of this page.)
+}
+
+feature {GRANT and REVOKE} {
+ Since SQLite reads and writes an ordinary disk file, the
+ only access permissions that can be applied are the normal
+ file access permissions of the underlying operating system.
+ The GRANT and REVOKE commands commonly found on client/server
+ RDBMSes are not implemented because they would be meaningless
+ for an embedded database engine.
+}
+
+puts {
+</table>
+
+<p>
+If you find other SQL92 features that SQLite does not support, please
+add them to the Wiki page at
+<a href="http://www.sqlite.org/cvstrac/wiki?p=UnsupportedSql">
+http://www.sqlite.org/cvstrac/wiki?p=UnsupportedSql</a>
+</p>
+}
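+
+puts {
+<p>
+As noted in the "Writing to VIEWs" entry above, the usual workaround for
+read-only views is an INSTEAD OF trigger on the view.  The following is
+only a sketch of the idea; the table, view, and trigger names here are
+illustrative, not part of any real schema.
+</p>
+
+<blockquote><pre>
+CREATE TABLE t1(a INTEGER, b TEXT);
+CREATE VIEW v1 AS SELECT a, b FROM t1;
+
+CREATE TRIGGER v1_insert INSTEAD OF INSERT ON v1
+BEGIN
+  INSERT INTO t1(a, b) VALUES(new.a, new.b);
+END;
+</pre></blockquote>
+}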
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/opcode.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/opcode.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,243 @@
+#
+# Run this Tcl script to generate the opcode.html file.
+#
+set rcsid {$Id: opcode.tcl,v 1.15 2005/03/09 12:26:51 danielk1977 Exp $}
+source common.tcl
+header {SQLite Virtual Machine Opcodes}
+puts {
+<h2>SQLite Virtual Machine Opcodes</h2>
+}
+
+set fd [open [lindex $argv 0] r]
+set file [read $fd [file size [lindex $argv 0]]]
+close $fd
+set current_op {}
+foreach line [split $file \n] {
+ set line [string trim $line]
+ if {[string index $line 1]!="*"} {
+ set current_op {}
+ continue
+ }
+ if {[regexp {^/\* Opcode: } $line]} {
+ set current_op [lindex $line 2]
+ set txt [lrange $line 3 end]
+    regsub -all {>} $txt {\&gt;} txt
+    regsub -all {<} $txt {\&lt;} txt
+ set Opcode($current_op:args) $txt
+ lappend OpcodeList $current_op
+ continue
+ }
+ if {$current_op==""} continue
+ if {[regexp {^\*/} $line]} {
+ set current_op {}
+ continue
+ }
+ set line [string trim [string range $line 3 end]]
+ if {$line==""} {
+ append Opcode($current_op:text) \n<p>
+ } else {
+    regsub -all {>} $line {\&gt;} line
+    regsub -all {<} $line {\&lt;} line
+ append Opcode($current_op:text) \n$line
+ }
+}
+unset file
+
+puts {
+<h3>Introduction</h3>
+
+<p>In order to execute an SQL statement, the SQLite library first parses
+the SQL, analyzes the statement, then generates a short program to execute
+the statement. The program is generated for a "virtual machine" implemented
+by the SQLite library. This document describes the operation of that
+virtual machine.</p>
+
+<p>This document is intended as a reference, not a tutorial.
+A separate <a href="vdbe.html">Virtual Machine Tutorial</a> is
+available. If you are looking for a narrative description
+of how the virtual machine works, you should read the tutorial
+and not this document. Once you have a basic idea of what the
+virtual machine does, you can refer back to this document for
+the details on a particular opcode.
+Unfortunately, the virtual machine tutorial was written for
+SQLite version 1.0. There are substantial changes in the virtual
+machine for version 2.0 and the document has not been updated.
+</p>
+
+<p>The source code to the virtual machine is in the <b>vdbe.c</b> source
+file. All of the opcode definitions further down in this document are
+contained in comments in the source file. In fact, the opcode table
+in this document
+was generated by scanning the <b>vdbe.c</b> source file
+and extracting the necessary information from comments. So the
+source code comments are really the canonical source of information
+about the virtual machine. When in doubt, refer to the source code.</p>
+
+<p>Each instruction in the virtual machine consists of an opcode and
+up to three operands named P1, P2 and P3. P1 may be an arbitrary
+integer. P2 must be a non-negative integer. P2 is always the
+jump destination in any operation that might cause a jump.
+P3 is a null-terminated
+string or NULL. Some operators use all three operands. Some use
+one or two. Some operators use none of the operands.</p>
+
+<p>The virtual machine begins execution on instruction number 0.
+Execution continues until (1) a Halt instruction is seen, or
+(2) the program counter becomes one greater than the address of
+the last instruction, or (3) there is an execution error.
+When the virtual machine halts, all memory
+that it allocated is released and all database cursors it may
+have had open are closed. If the execution stopped due to an
+error, any pending transactions are terminated and changes made
+to the database are rolled back.</p>
+
+<p>The virtual machine also contains an operand stack of unlimited
+depth. Many of the opcodes use operands from the stack. See the
+individual opcode descriptions for details.</p>
+
+<p>The virtual machine can have zero or more cursors. Each cursor
+is a pointer into a single table or index within the database.
+There can be multiple cursors pointing at the same index or table.
+All cursors operate independently, even cursors pointing to the same
+indices or tables.
+The only way for the virtual machine to interact with a database
+file is through a cursor.
+Instructions in the virtual
+machine can create a new cursor (Open), read data from a cursor
+(Column), advance the cursor to the next entry in the table
+(Next) or index (NextIdx), and many other operations.
+All cursors are automatically
+closed when the virtual machine terminates.</p>
+
+<p>The virtual machine contains an arbitrary number of fixed memory
+locations with addresses beginning at zero and growing upward.
+Each memory location can hold an arbitrary string. The memory
+cells are typically used to hold the result of a scalar SELECT
+that is part of a larger expression.</p>
+
+<p>The virtual machine contains a single sorter.
+The sorter is able to accumulate records, sort those records,
+then play the records back in sorted order. The sorter is used
+to implement the ORDER BY clause of a SELECT statement.</p>
+
+<p>The virtual machine contains a single "List".
+The list stores a list of integers. The list is used to hold the
+rowids for records of a database table that need to be modified.
+The WHERE clause of an UPDATE or DELETE statement scans through
+the table and writes the rowid of every record to be modified
+into the list. Then the list is played back and the table is modified
+in a separate step.</p>
+
+<p>The virtual machine can contain an arbitrary number of "Sets".
+Each set holds an arbitrary number of strings. Sets are used to
+implement the IN operator with a constant right-hand side.</p>
+
+<p>The virtual machine can open a single external file for reading.
+This external read file is used to implement the COPY command.</p>
+
+<p>Finally, the virtual machine can have a single set of aggregators.
+An aggregator is a device used to implement the GROUP BY clause
+of a SELECT. An aggregator has one or more slots that can hold
+values being extracted by the select. The number of slots is the
+same for all aggregators and is defined by the AggReset operation.
+At any point in time a single aggregator is current or "has focus".
+There are operations to read or write to memory slots of the aggregator
+in focus. There are also operations to change the focus aggregator
+and to scan through all aggregators.</p>
+
+<h3>Viewing Programs Generated By SQLite</h3>
+
+<p>Every SQL statement that SQLite interprets results in a program
+for the virtual machine. But if you precede the SQL statement with
+the keyword "EXPLAIN" the virtual machine will not execute the
+program. Instead, the instructions of the program will be returned
+like a query result. This feature is useful for debugging and
+for learning how the virtual machine operates.</p>
+
+<p>You can use the <b>sqlite</b> command-line tool to see the
+instructions generated by an SQL statement. The following is
+an example:</p>}
+
+proc Code {body} {
+ puts {<blockquote><tt>}
+  regsub -all {&} [string trim $body] {\&amp;} body
+  regsub -all {>} $body {\&gt;} body
+  regsub -all {<} $body {\&lt;} body
+ regsub -all {\(\(\(} $body {<b>} body
+ regsub -all {\)\)\)} $body {</b>} body
+  regsub -all { } $body {\&nbsp;} body
+ regsub -all \n $body <br>\n body
+ puts $body
+ puts {</tt></blockquote>}
+}
+
+Code {
+$ (((sqlite ex1)))
+sqlite> (((.explain)))
+sqlite> (((explain delete from tbl1 where two<20;)))
+addr opcode p1 p2 p3
+---- ------------ ----- ----- ----------------------------------------
+0 Transaction 0 0
+1 VerifyCookie 219 0
+2 ListOpen 0 0
+3 Open 0 3 tbl1
+4 Rewind 0 0
+5 Next 0 12
+6 Column 0 1
+7 Integer 20 0
+8 Ge 0 5
+9 Recno 0 0
+10 ListWrite 0 0
+11 Goto 0 5
+12 Close 0 0
+13 ListRewind 0 0
+14 OpenWrite 0 3
+15 ListRead 0 19
+16 MoveTo 0 0
+17 Delete 0 0
+18 Goto 0 15
+19 ListClose 0 0
+20 Commit 0 0
+}
+
+puts {
+<p>All you have to do is add the "EXPLAIN" keyword to the front of the
+SQL statement. But if you use the ".explain" command to <b>sqlite</b>
+first, it will set up the output mode to make the program more easily
+viewable.</p>
+
+<p>If <b>sqlite</b> has been compiled without the "-DNDEBUG=1" option
+(that is, with the NDEBUG preprocessor macro not defined) then you
+can put the SQLite virtual machine in a mode where it will trace its
+execution by writing messages to standard output. The non-standard
+SQL "PRAGMA" comments can be used to turn tracing on and off. To
+turn tracing on, enter:
+</p>
+
+<blockquote><pre>
+PRAGMA vdbe_trace=on;
+</pre></blockquote>
+
+<p>
+You can turn tracing back off by entering a similar statement but
+changing the value "on" to "off".</p>
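+
+<p>For example, the corresponding statement to turn tracing back off is:</p>
+
+<blockquote><pre>
+PRAGMA vdbe_trace=off;
+</pre></blockquote>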
+
+<h3>The Opcodes</h3>
+}
+
+puts "<p>There are currently [llength $OpcodeList] opcodes defined by
+the virtual machine."
+puts {All currently defined opcodes are described in the table below.
+This table was generated automatically by scanning the source code
+from the file <b>vdbe.c</b>.</p>}
+
+puts {
+<p><table cellspacing="1" border="1" cellpadding="10">
+<tr><th>Opcode Name</th><th>Description</th></tr>}
+foreach op [lsort -dictionary $OpcodeList] {
+ puts {<tr><td valign="top" align="center">}
+ puts "<a name=\"$op\">$op</a>"
+ puts "<td>[string trim $Opcode($op:text)]</td></tr>"
+}
+puts {</table></p>}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/optimizer.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/optimizer.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,265 @@
+#
+# Run this TCL script to generate HTML for the optimizer.html file.
+#
+set rcsid {$Id: optimizer.tcl,v 1.1 2005/08/30 22:44:06 drh Exp $}
+source common.tcl
+header {The SQLite Query Optimizer}
+
+proc CODE {text} {
+ puts "<blockquote><pre>"
+ puts $text
+ puts "</pre></blockquote>"
+}
+proc IMAGE {name {caption {}}} {
+ puts "<center><img src=\"$name\">"
+ if {$caption!=""} {
+ puts "<br>$caption"
+ }
+ puts "</center>"
+}
+proc PARAGRAPH {text} {
+ puts "<p>$text</p>\n"
+}
+proc HEADING {level name} {
+ puts "<h$level>$name</h$level>"
+}
+
+HEADING 1 {The SQLite Query Optimizer}
+
+PARAGRAPH {
+ This article describes how the SQLite query optimizer works.
+ This is not something you have to know in order to use SQLite - many
+ programmers use SQLite successfully without the slightest hint of what
+  goes on inside.
+ But a basic understanding of what SQLite is doing
+ behind the scenes will help you to write more efficient SQL. And the
+ knowledge gained by studying the SQLite query optimizer has broad
+ application since most other relational database engines operate
+ similarly.
+ A solid understanding of how the query optimizer works is also
+  required before making meaningful changes or additions to SQLite, so
+ this article should be read closely by anyone aspiring
+ to hack the source code.
+}
+
+HEADING 2 Background
+
+PARAGRAPH {
+ It is important to understand that SQL is a programming language.
+  SQL is a peculiar programming language in that it
+  describes <u>what</u> the programmer wants to compute, not <u>how</u>
+  to compute it, as most other programming languages do.
+  But peculiar or not, SQL is still just a programming language.
+}
+
+PARAGRAPH {
+ It is very helpful to think of each SQL statement as a separate
+ program.
+ An important job of the SQL database engine is to translate each
+  SQL statement from its descriptive form that specifies what
+ information is desired (the <u>what</u>)
+ into a procedural form that specifies how to go
+ about acquiring the desired information (the <u>how</u>).
+ The task of translating the <u>what</u> into a
+ <u>how</u> is assigned to the query optimizer.
+}
+
+PARAGRAPH {
+ The beauty of SQL comes from the fact that the optimizer frees the programmer
+ from having to worry over the details of <u>how</u>. The programmer
+ only has to specify the <u>what</u> and then leave the optimizer
+  to deal with all of the minutiae of implementing the
+ <u>how</u>. Thus the programmer is able to think and work at a
+ much higher level and leave the optimizer to stress over the low-level
+ work.
+}
+
+HEADING 2 {Database Layout}
+
+PARAGRAPH {
+ An SQLite database consists of one or more "b-trees".
+ Each b-tree contains zero or more "rows".
+ A single row contains a "key" and some "data".
+ In general, both the key and the data are arbitrary binary
+ data of any length.
+ The keys must all be unique within a single b-tree.
+ Rows are stored in order of increasing key values - each
+  b-tree has a comparison function for keys that determines
+ this order.
+}
+
+PARAGRAPH {
+ In SQLite, each SQL table is stored as a b-tree where the
+ key is a 64-bit integer and the data is the content of the
+ table row. The 64-bit integer key is the ROWID. And, of course,
+ if the table has an INTEGER PRIMARY KEY, then that integer is just
+ an alias for the ROWID.
+}
+
+PARAGRAPH {
+ Consider the following block of SQL code:
+}
+
+CODE {
+ CREATE TABLE ex1(
+ id INTEGER PRIMARY KEY,
+ x VARCHAR(30),
+ y INTEGER
+ );
+ INSERT INTO ex1 VALUES(NULL,'abc',12345);
+ INSERT INTO ex1 VALUES(NULL,456,'def');
+ INSERT INTO ex1 VALUES(100,'hello','world');
+ INSERT INTO ex1 VALUES(-5,'abc','xyz');
+ INSERT INTO ex1 VALUES(54321,NULL,987);
+}
+
+PARAGRAPH {
+ This code generates a new b-tree (named "ex1") containing 5 rows.
+ This table can be visualized as follows:
+}
+IMAGE table-ex1b2.gif
+
+PARAGRAPH {
+  Note that the key for each row of the b-tree is the INTEGER PRIMARY KEY
+ for that row. (Remember that the INTEGER PRIMARY KEY is just an alias
+ for the ROWID.) The other fields of the table form the data for each
+ entry in the b-tree. Note also that the b-tree entries are in ROWID order
+ which is different from the order that they were originally inserted.
+}
+
+PARAGRAPH {
+ Now consider the following SQL query:
+}
+CODE {
+ SELECT y FROM ex1 WHERE x=456;
+}
+
+PARAGRAPH {
+ When the SQLite parser and query optimizer are handed this query, they
+ have to translate it into a procedure that will find the desired result.
+  In this case, they do what is called a "full table scan". They start
+ at the beginning of the b-tree that contains the table and visit each
+ row. Within each row, the value of the "x" column is tested and when it
+ is found to match 456, the value of the "y" column is output.
+ We can represent this procedure graphically as follows:
+}
+IMAGE fullscanb.gif
+
+PARAGRAPH {
+ A full table scan is the access method of last resort. It will always
+  work. But if the table contains millions of rows and you are only looking
+  for a single one, it might take a very long time to find the particular
+  row you are interested in.
+ In particular, the time needed to access a single row of the table is
+ proportional to the total number of rows in the table.
+ So a big part of the job of the optimizer is to try to find ways to
+ satisfy the query without doing a full table scan.
+}
+PARAGRAPH {
+  The usual way to avoid doing a full table scan is to use a binary search
+ to find the particular row or rows of interest in the table.
+ Consider the next query which searches on rowid instead of x:
+}
+CODE {
+ SELECT y FROM ex1 WHERE rowid=2;
+}
+
+PARAGRAPH {
+ In the previous query, we could not use a binary search for x because
+ the values of x were not ordered. But the rowid values are ordered.
+ So instead of having to visit every row of the b-tree looking for one
+ that has a rowid value of 2, we can do a binary search for that particular
+ row and output its corresponding y value. We show this graphically
+ as follows:
+}
+IMAGE direct1b.gif
+
+PARAGRAPH {
+ When doing a binary search, we only have to look at a number of
+  rows that is proportional to the logarithm of the number of entries
+  in the table. For a table with just 5 entries as in the example above,
+ the difference between a full table scan and a binary search is
+ negligible. In fact, the full table scan might be faster. But in
+ a database that has 5 million rows, a binary search will be able to
+ find the desired row in only about 23 tries, whereas the full table
+ scan will need to look at all 5 million rows. So the binary search
+ is about 200,000 times faster in that case.
+}
+PARAGRAPH {
+ A 200,000-fold speed improvement is huge. So we always want to do
+ a binary search rather than a full table scan when we can.
+}
+PARAGRAPH {
+  The problem with a binary search is that it only works if the
+  fields you are searching on are in sorted order. So we can do a binary
+ search when looking up the rowid because the rows of the table are
+ sorted by rowid. But we cannot use a binary search when looking up
+ x because the values in the x column are in no particular order.
+}
+PARAGRAPH {
+ The way to work around this problem and to permit binary searching on
+ fields like x is to provide an index.
+ An index is another b-tree.
+ But in the index b-tree the key is not the rowid but rather the field
+ or fields being indexed followed by the rowid.
+ The data in an index b-tree is empty - it is not needed or used.
+ The following diagram shows an index on the x field of our example table:
+}
+IMAGE index-ex1-x-b.gif
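+
+PARAGRAPH {
+  An index like the one in the diagram could be created with a statement
+  such as the following (the index name here is illustrative only):
+}
+CODE {
+  CREATE INDEX ex1_idx_x ON ex1(x);
+}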
+
+PARAGRAPH {
+  An important point to note about the index is that the keys of the
+ b-tree are in sorted order. (Recall that NULL values in SQLite sort
+ first, followed by numeric values in numerical order, then strings, and
+  finally BLOBs.) This is the property that will allow us to do a
+ binary search for the field x. The rowid is also included in every
+ key for two reasons. First, by including the rowid we guarantee that
+ every key will be unique. And second, the rowid will be used to look
+ up the actual table entry after doing the binary search. Finally, note
+ that the data portion of the index b-tree serves no purpose and is thus
+ kept empty to save space in the disk file.
+}
+PARAGRAPH {
+ Remember what the original query example looked like:
+}
+CODE {
+ SELECT y FROM ex1 WHERE x=456;
+}
+
+PARAGRAPH {
+ The first time this query was encountered we had to do a full table
+ scan. But now that we have an index on x, we can do a binary search
+ on that index for the entry where x==456. Then from that entry we
+ can find the rowid value and use the rowid to look up the corresponding
+ entry in the original table. From the entry in the original table,
+ we can find the value y and return it as our result. The following
+ diagram shows this process graphically:
+}
+IMAGE indirect1b1.gif
+
+PARAGRAPH {
+ With the index, we are able to look up an entry based on the value of
+  x after visiting only a logarithmic number of b-tree entries. Unlike
+ the case where we were searching using rowid, we have to do two binary
+ searches for each output row. But for a 5-million row table, that is
+ still only 46 searches instead of 5 million for a 100,000-fold speedup.
+}
+
+HEADING 3 {Parsing The WHERE Clause}
+
+
+
+# parsing the where clause
+# rowid lookup
+# index lookup
+# index lookup without the table
+# how an index is chosen
+# joins
+# join reordering
+# order by using an index
+# group by using an index
+# OR -> IN optimization
+# Bitmap indices
+# LIKE and GLOB optimization
+# subquery flattening
+# MIN and MAX optimizations
Added: freeswitch/trunk/libs/sqlite/www/optimizing.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/optimizing.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,15 @@
+set rcsid {$Id: optimizing.tcl,v 1.1 2005/01/17 03:42:52 drh Exp $}
+source common.tcl
+header {Hints For Optimizing Queries In SQLite}
+proc section {level tag name} {
+ incr level
+ if {$level>6} {set level 6}
+  puts "\n<a name=\"$tag\" />"
+ puts "<h$level>$name</h$level>\n"
+}
+section 1 recompile {Recompile the library for optimal performance}
+section 2 avoidtrans {Minimize the number of transactions}
+section 3 usebind {Use sqlite3_bind to insert large chunks of data}
+section 4 useindices {Use appropriate indices}
+section 5 recordjoin {Reorder the tables in a join}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/optoverview.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/optoverview.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,516 @@
+#
+# Run this TCL script to generate HTML for the optoverview.html file.
+#
+set rcsid {$Id: optoverview.tcl,v 1.5 2005/11/24 13:15:34 drh Exp $}
+source common.tcl
+header {The SQLite Query Optimizer Overview}
+
+proc CODE {text} {
+ puts "<blockquote><pre>"
+ puts $text
+ puts "</pre></blockquote>"
+}
+proc SYNTAX {text} {
+ puts "<blockquote><pre>"
+  set t2 [string map {& &amp; < &lt; > &gt;} $text]
+ regsub -all "/(\[^\n/\]+)/" $t2 {</b><i>\1</i><b>} t3
+ puts "<b>$t3</b>"
+ puts "</pre></blockquote>"
+}
+proc IMAGE {name {caption {}}} {
+ puts "<center><img src=\"$name\">"
+ if {$caption!=""} {
+ puts "<br>$caption"
+ }
+ puts "</center>"
+}
+proc PARAGRAPH {text} {
+ # regsub -all "/(\[a-zA-Z0-9\]+)/" $text {<i>\1</i>} t2
+ regsub -all "\\*(\[^\n*\]+)\\*" $text {<tt><b><big>\1</big></b></tt>} t3
+ puts "<p>$t3</p>\n"
+}
+set level(0) 0
+set level(1) 0
+proc HEADING {n name {tag {}}} {
+ if {$tag!=""} {
+ puts "<a name=\"$tag\">"
+ }
+ global level
+ incr level($n)
+ for {set i [expr {$n+1}]} {$i<10} {incr i} {
+ set level($i) 0
+ }
+ if {$n==0} {
+ set num {}
+ } elseif {$n==1} {
+ set num $level(1).0
+ } else {
+ set num $level(1)
+ for {set i 2} {$i<=$n} {incr i} {
+ append num .$level($i)
+ }
+ }
+ incr n 1
+ puts "<h$n>$num $name</h$n>"
+}
+
+HEADING 0 {The SQLite Query Optimizer Overview}
+
+PARAGRAPH {
+ This document provides a terse overview of how the query optimizer
+ for SQLite works. This is not a tutorial. The reader is likely to
+ need some prior knowledge of how database engines operate
+ in order to fully understand this text.
+}
+
+HEADING 1 {WHERE clause analysis} where_clause
+
+PARAGRAPH {
+ The WHERE clause on a query is broken up into "terms" where each term
+ is separated from the others by an AND operator.
+}
+PARAGRAPH {
+ All terms of the WHERE clause are analyzed to see if they can be
+ satisfied using indices.
+ Terms that cannot be satisfied through the use of indices become
+ tests that are evaluated against each row of the relevant input
+ tables. No tests are done for terms that are completely satisfied by
+ indices. Sometimes
+ one or more terms will provide hints to indices but still must be
+ evaluated against each row of the input tables.
+}
+
+PARAGRAPH {
+ The analysis of a term might cause new "virtual" terms to
+ be added to the WHERE clause. Virtual terms can be used with
+ indices to restrict a search. But virtual terms never generate code
+ that is tested against input rows.
+}
+
+PARAGRAPH {
+ To be usable by an index a term must be of one of the following
+ forms:
+}
+SYNTAX {
+ /column/ = /expression/
+ /column/ > /expression/
+ /column/ >= /expression/
+ /column/ < /expression/
+ /column/ <= /expression/
+ /expression/ = /column/
+ /expression/ > /column/
+ /expression/ >= /column/
+ /expression/ < /column/
+ /expression/ <= /column/
+ /column/ IN (/expression-list/)
+ /column/ IN (/subquery/)
+}
+PARAGRAPH {
+ If an index is created using a statement like this:
+}
+CODE {
+ CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
+}
+PARAGRAPH {
+ Then the index might be used if the initial columns of the index
+ (columns a, b, and so forth) appear in WHERE clause terms.
+ All index columns must be used with
+ the *=* or *IN* operators except for
+ the right-most column which can use inequalities. For the right-most
+ column of an index that is used, there can be up to two inequalities
+ that must sandwich the allowed values of the column between two extremes.
+}
+PARAGRAPH {
+ It is not necessary for every column of an index to appear in a
+ WHERE clause term in order for that index to be used.
+ But there can not be gaps in the columns of the index that are used.
+ Thus for the example index above, if there is no WHERE clause term
+  that constrains column c, then terms that constrain columns a and b can
+  be used with the index but not terms that constrain columns d through z.
+ Similarly, no index column will be used (for indexing purposes)
+ that is to the right of a
+ column that is constrained only by inequalities.
+ For the index above and WHERE clause like this:
+}
+CODE {
+ ... WHERE a=5 AND b IN (1,2,3) AND c>12 AND d='hello'
+}
+PARAGRAPH {
+ Only columns a, b, and c of the index would be usable. The d column
+ would not be usable because it occurs to the right of c and c is
+ constrained only by inequalities.
+}
+
+HEADING 1 {The BETWEEN optimization} between_opt
+
+PARAGRAPH {
+ If a term of the WHERE clause is of the following form:
+}
+SYNTAX {
+ /expr1/ BETWEEN /expr2/ AND /expr3/
+}
+PARAGRAPH {
+ Then two virtual terms are added as follows:
+}
+SYNTAX {
+ /expr1/ >= /expr2/ AND /expr1/ <= /expr3/
+}
+PARAGRAPH {
+ If both virtual terms end up being used as constraints on an index,
+ then the original BETWEEN term is omitted and the corresponding test
+ is not performed on input rows.
+ Thus if the BETWEEN term ends up being used as an index constraint
+ no tests are ever performed on that term.
+ On the other hand, the
+  virtual terms themselves never cause tests to be performed on
+ input rows.
+ Thus if the BETWEEN term is not used as an index constraint and
+ instead must be used to test input rows, the <i>expr1</i> expression is
+ only evaluated once.
+}
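+
+PARAGRAPH {
+  As a concrete (and purely illustrative) example, a WHERE clause term
+  such as:
+}
+CODE {
+  ... WHERE b BETWEEN 10 AND 20
+}
+PARAGRAPH {
+  causes the following two virtual terms to be added, and those virtual
+  terms may then constrain an index on column b:
+}
+CODE {
+  b>=10 AND b<=20
+}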
+
+HEADING 1 {The OR optimization} or_opt
+
+PARAGRAPH {
+ If a term consists of multiple subterms containing a common column
+ name and separated by OR, like this:
+}
+SYNTAX {
+ /column/ = /expr1/ OR /column/ = /expr2/ OR /column/ = /expr3/ OR ...
+}
+PARAGRAPH {
+ Then the term is rewritten as follows:
+}
+SYNTAX {
+ /column/ IN (/expr1/,/expr2/,/expr3/,/expr4/,...)
+}
+PARAGRAPH {
+  The rewritten term then might go on to constrain an index using the
+ normal rules for *IN* operators.
+ Note that <i>column</i> must be the same column in every OR-connected subterm,
+ although the column can occur on either the left or the right side of
+ the *=* operator.
+}
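+
+PARAGRAPH {
+  For example, a term like the following (the column name is illustrative):
+}
+CODE {
+  ... WHERE b=1 OR b=2 OR b=3
+}
+PARAGRAPH {
+  is rewritten as:
+}
+CODE {
+  ... WHERE b IN (1,2,3)
+}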
+
+HEADING 1 {The LIKE optimization} like_opt
+
+PARAGRAPH {
+ Terms that are composed of the LIKE or GLOB operator
+ can sometimes be used to constrain indices.
+ There are many conditions on this use:
+}
+PARAGRAPH {
+ <ol>
+ <li>The left-hand side of the LIKE or GLOB operator must be the name
+ of an indexed column.</li>
+ <li>The right-hand side of the LIKE or GLOB must be a string literal
+ that does not begin with a wildcard character.</li>
+ <li>The ESCAPE clause cannot appear on the LIKE operator.</li>
+  <li>The built-in functions used to implement LIKE and GLOB must not
+ have been overloaded using the sqlite3_create_function() API.</li>
+ <li>For the GLOB operator, the column must use the default BINARY
+ collating sequence.</li>
+ <li>For the LIKE operator, if case_sensitive_like mode is enabled then
+ the column must use the default BINARY collating sequence, or if
+ case_sensitive_like mode is disabled then the column must use the
+ built-in NOCASE collating sequence.</li>
+ </ol>
+}
+PARAGRAPH {
+ The LIKE operator has two modes that can be set by a pragma. The
+ default mode is for LIKE comparisons to be insensitive to differences
+ of case for latin1 characters. Thus, by default, the following
+ expression is true:
+}
+CODE {
+ 'a' LIKE 'A'
+}
+PARAGRAPH {
+  But if you turn on the case_sensitive_like pragma as follows:
+}
+CODE {
+ PRAGMA case_sensitive_like=ON;
+}
+PARAGRAPH {
+ Then the LIKE operator pays attention to case and the example above would
+ evaluate to false. Note that case insensitivity only applies to
+ latin1 characters - basically the upper and lower case letters of English
+ in the lower 127 byte codes of ASCII. International character sets
+ are case sensitive in SQLite unless a user-supplied collating
+ sequence is used. But if you employ a user-supplied collating sequence,
+  the LIKE optimization described here will never be taken.
+}
+PARAGRAPH {
+ The LIKE operator is case insensitive by default because this is what
+ the SQL standard requires. You can change the default behavior at
+ compile time by using the -DSQLITE_CASE_SENSITIVE_LIKE command-line option
+ to the compiler.
+}
+PARAGRAPH {
+ The LIKE optimization might occur if the column named on the left of the
+ operator uses the BINARY collating sequence (which is the default) and
+ case_sensitive_like is turned on. Or the optimization might occur if
+ the column uses the built-in NOCASE collating sequence and the
+ case_sensitive_like mode is off. These are the only two combinations
+ under which LIKE operators will be optimized. If the column on the
+  left-hand side of the LIKE operator uses any collating sequence other
+ than the built-in BINARY and NOCASE collating sequences, then no optimizations
+ will ever be attempted on the LIKE operator.
+}
+PARAGRAPH {
+ The GLOB operator is always case sensitive. The column on the left side
+ of the GLOB operator must always use the built-in BINARY collating sequence
+ or no attempt will be made to optimize that operator with indices.
+}
+PARAGRAPH {
+ The right-hand side of the GLOB or LIKE operator must be a literal string
+ value that does not begin with a wildcard. If the right-hand side is a
+ parameter that is bound to a string, then no optimization is attempted.
+ If the right-hand side begins with a wildcard character then no
+ optimization is attempted.
+}
+PARAGRAPH {
+ Suppose the initial sequence of non-wildcard characters on the right-hand
+ side of the LIKE or GLOB operator is <i>x</i>. We are using a single
+ character to denote this non-wildcard prefix but the reader should
+ understand that the prefix can consist of more than 1 character.
+  Let <i>y</i> be the smallest string that is the same length as <i>x</i> but which
+ compares greater than <i>x</i>. For example, if <i>x</i> is *hello* then
+ <i>y</i> would be *hellp*.
+ The LIKE and GLOB optimizations consist of adding two virtual terms
+ like this:
+}
+SYNTAX {
+ /column/ >= /x/ AND /column/ < /y/
+}
+PARAGRAPH {
+ Under most circumstances, the original LIKE or GLOB operator is still
+ tested against each input row even if the virtual terms are used to
+ constrain an index. This is because we do not know what additional
+ constraints may be imposed by characters to the right
+ of the <i>x</i> prefix. However, if there is only a single global wildcard
+ to the right of <i>x</i>, then the original LIKE or GLOB test is disabled.
+ In other words, if the pattern is like this:
+}
+SYNTAX {
+ /column/ LIKE /x/%
+ /column/ GLOB /x/*
+}
+PARAGRAPH {
+ Then the original LIKE or GLOB tests are disabled when the virtual
+ terms constrain an index because in that case we know that all of the
+ rows selected by the index will pass the LIKE or GLOB test.
+}
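+
+PARAGRAPH {
+  Putting this together with the *hello*/*hellp* example above, and
+  assuming (illustratively) that column b is an indexed text column using
+  the default BINARY collating sequence with case_sensitive_like turned
+  on, a term like:
+}
+CODE {
+  ... WHERE b LIKE 'hello%'
+}
+PARAGRAPH {
+  causes the virtual terms below to be added.  Because the only wildcard
+  follows the prefix, the original LIKE test is also disabled in this case.
+}
+CODE {
+  b>='hello' AND b<'hellp'
+}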
+
+HEADING 1 {Joins} joins
+
+PARAGRAPH {
+ The current implementation of
+ SQLite uses only loop joins. That is to say, joins are implemented as
+ nested loops.
+}
+PARAGRAPH {
+ The default order of the nested loops in a join is for the left-most
+ table in the FROM clause to form the outer loop and the right-most
+ table to form the inner loop.
+ However, SQLite will nest the loops in a different order if doing so
+ will help it to select better indices.
+}
+PARAGRAPH {
+ Inner joins can be freely reordered. However a left outer join is
+ neither commutative nor associative and hence will not be reordered.
+ Inner joins to the left and right of the outer join might be reordered
+ if the optimizer thinks that is advantageous but the outer joins are
+ always evaluated in the order in which they occur.
+}
+PARAGRAPH {
+ When selecting the order of tables in a join, SQLite uses a greedy
+ algorithm that runs in polynomial time.
+}
+PARAGRAPH {
+ The ON and USING clauses of a join are converted into additional
+ terms of the WHERE clause prior to WHERE clause analysis described
+ above in paragraph 1.0. Thus
+ with SQLite, there is no advantage to use the newer SQL92 join syntax
+ over the older SQL89 comma-join syntax. They both end up accomplishing
+ exactly the same thing.
+}
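+
+PARAGRAPH {
+  For instance, the two (illustrative) queries below end up being handled
+  identically by the optimizer:
+}
+CODE {
+  SELECT * FROM t1 JOIN t2 ON t1.a=t2.b;
+  SELECT * FROM t1, t2 WHERE t1.a=t2.b;
+}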
+PARAGRAPH {
+ Join reordering is automatic and usually works well enough that
+  programmers do not have to think about it. But occasionally some
+ hints from the programmer are needed. For a description of when
+ hints might be necessary and how to provide those hints, see the
+ <a href="http://www.sqlite.org/cvstrac/wiki?p=QueryPlans">QueryPlans</a>
+ page in the Wiki.
+}
+
+HEADING 1 {Choosing between multiple indices} multi_index
+
+PARAGRAPH {
+ Each table in the FROM clause of a query can use at most one index,
+ and SQLite strives to use at least one index on each table. Sometimes,
+ two or more indices might be candidates for use on a single table.
+ For example:
+}
+CODE {
+ CREATE TABLE ex2(x,y,z);
+ CREATE INDEX ex2i1 ON ex2(x);
+ CREATE INDEX ex2i2 ON ex2(y);
+ SELECT z FROM ex2 WHERE x=5 AND y=6;
+}
+PARAGRAPH {
+ For the SELECT statement above, the optimizer can use the ex2i1 index
+  to look up rows of ex2 that contain x=5 and then test each row against
+  the y=6 term. Or it can use the ex2i2 index to look up rows
+  of ex2 that contain y=6 and then test each of those rows against the
+ x=5 term.
+}
+PARAGRAPH {
+ When faced with a choice of two or more indices, SQLite tries to estimate
+ the total amount of work needed to perform the query using each option.
+ It then selects the option that gives the least estimated work.
+}
+PARAGRAPH {
+ To help the optimizer get a more accurate estimate of the work involved
+  in using various indices, the user may optionally run the ANALYZE command.
+  The ANALYZE command scans all indices of the database where there might
+ be a choice between two or more indices and gathers statistics on the
+ selectiveness of those indices. The results of this scan are stored
+ in the sqlite_stat1 table.
+ The contents of the sqlite_stat1 table are not updated as the database
+ changes so after making significant changes it might be prudent to
+ rerun ANALYZE.
+ The results of an ANALYZE command are only available to database connections
+ that are opened after the ANALYZE command completes.
+}
+PARAGRAPH {
+ Once created, the sqlite_stat1 table cannot be dropped. But its
+ content can be viewed, modified, or erased. Erasing the entire content
+ of the sqlite_stat1 table has the effect of undoing the ANALYZE command.
+ Changing the content of the sqlite_stat1 table can get the optimizer
+ deeply confused and cause it to make silly index choices. Making
+ updates to the sqlite_stat1 table (except by running ANALYZE) is
+ not recommended.
+}
+PARAGRAPH {
+ Terms of the WHERE clause can be manually disqualified for use with
+ indices by prepending a unary *+* operator to the column name. The
+ unary *+* is a no-op and will not slow down the evaluation of the test
+ specified by the term.
+ But it will prevent the term from constraining an index.
+ So, in the example above, if the query were rewritten as:
+}
+CODE {
+ SELECT z FROM ex2 WHERE +x=5 AND y=6;
+}
+PARAGRAPH {
+ The *+* operator on the *x* column would prevent that term from
+ constraining an index. This would force the use of the ex2i2 index.
+}
+
+HEADING 1 {Avoidance of table lookups} index_only
+
+PARAGRAPH {
+ When doing an indexed lookup of a row, the usual procedure is to
+ do a binary search on the index to find the index entry, then extract
+ the rowid from the index and use that rowid to do a binary search on
+ the original table. Thus a typical indexed lookup involves two
+ binary searches.
+ If, however, all columns that were to be fetched from the table are
+ already available in the index itself, SQLite will use the values
+ contained in the index and will never look up the original table
+ row. This saves one binary search for each row and can make many
+ queries run twice as fast.
+}
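+PARAGRAPH {
+ For example, with the ex2 table defined earlier, adding a hypothetical
+ index that covers every column used by the query lets SQLite answer
+ the query from the index alone:
+}
+CODE {
+ CREATE INDEX ex2i3 ON ex2(x,y,z);
+ SELECT z FROM ex2 WHERE x=5 AND y=6;
+}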
+
+HEADING 1 {ORDER BY optimizations} order_by
+
+PARAGRAPH {
+ SQLite attempts to use an index to satisfy the ORDER BY clause of a
+ query when possible.
+ When faced with the choice of using an index to satisfy WHERE clause
+ constraints or satisfying an ORDER BY clause, SQLite does the same
+ work analysis described in section 6.0
+ and chooses the index that it believes will result in the fastest answer.
+
+}
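+PARAGRAPH {
+ As a sketch, with the ex2 table and its ex2i1 index on ex2(x) defined
+ earlier, a query such as the following can return rows in index order
+ and thereby avoid a separate sorting pass:
+}
+CODE {
+ SELECT x, z FROM ex2 ORDER BY x;
+}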
+
+HEADING 1 {Subquery flattening} flattening
+
+PARAGRAPH {
+ When a subquery occurs in the FROM clause of a SELECT, the default
+ behavior is to evaluate the subquery into a transient table, then run
+ the outer SELECT against the transient table.
+ This is problematic since the transient table will not have any indices
+ and the outer query (which is likely a join) will be forced to do a
+ full table scan on the transient table.
+}
+PARAGRAPH {
+ To overcome this problem, SQLite attempts to flatten subqueries in
+ the FROM clause of a SELECT.
+ This involves inserting the FROM clause of the subquery into the
+ FROM clause of the outer query and rewriting expressions in
+ the outer query that refer to the result set of the subquery.
+ For example:
+}
+CODE {
+ SELECT a FROM (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5
+}
+PARAGRAPH {
+ Would be rewritten using query flattening as:
+}
+CODE {
+ SELECT x+y AS a FROM t1 WHERE z<100 AND a>5
+}
+PARAGRAPH {
+ There is a long list of conditions that must all be met in order for
+ query flattening to occur.
+}
+PARAGRAPH {
+ <ol>
+ <li> The subquery and the outer query do not both use aggregates.</li>
+ <li> The subquery is not an aggregate or the outer query is not a join. </li>
+ <li> The subquery is not the right operand of a left outer join, or
+ the subquery is not itself a join. </li>
+ <li> The subquery is not DISTINCT or the outer query is not a join. </li>
+ <li> The subquery is not DISTINCT or the outer query does not use
+ aggregates. </li>
+ <li> The subquery does not use aggregates or the outer query is not
+ DISTINCT. </li>
+ <li> The subquery has a FROM clause. </li>
+ <li> The subquery does not use LIMIT or the outer query is not a join. </li>
+ <li> The subquery does not use LIMIT or the outer query does not use
+ aggregates. </li>
+ <li> The subquery does not use aggregates or the outer query does not
+ use LIMIT. </li>
+ <li> The subquery and the outer query do not both have ORDER BY clauses.</li>
+ <li> The subquery is not the right term of a LEFT OUTER JOIN or the
+ subquery has no WHERE clause. </li>
+ </ol>
+}
+PARAGRAPH {
+ The proof that query flattening may safely occur if all of the
+ above conditions are met is left as an exercise for the reader.
+}
+PARAGRAPH {
+ Query flattening is an important optimization when views are used as
+ each use of a view is translated into a subquery.
+}
+
+HEADING 1 {The MIN/MAX optimization} minmax
+
+PARAGRAPH {
+ Queries of the following forms will be optimized to run in logarithmic
+ time assuming appropriate indices exist:
+}
+CODE {
+ SELECT MIN(x) FROM table;
+ SELECT MAX(x) FROM table;
+}
+PARAGRAPH {
+ In order for these optimizations to occur, they must appear in exactly
+ the form shown above - changing only the name of the table and column.
+ It is not permissible to add a WHERE clause or do any arithmetic on the
+ result. The result set must contain a single column.
+ The column in the MIN or MAX function must be an indexed column.
+}
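+PARAGRAPH {
+ For example, assuming an index exists on table.x, the first statement
+ below qualifies for this optimization while the second, because of its
+ WHERE clause, does not and is evaluated as an ordinary aggregate query:
+}
+CODE {
+ SELECT MIN(x) FROM table;
+ SELECT MIN(x) FROM table WHERE y>10;
+}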
Added: freeswitch/trunk/libs/sqlite/www/pragma.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/pragma.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,527 @@
+#
+# Run this Tcl script to generate the pragma.html file.
+#
+set rcsid {$Id: pragma.tcl,v 1.18 2006/06/20 00:22:38 drh Exp $}
+source common.tcl
+header {Pragma statements supported by SQLite}
+
+proc Section {name {label {}}} {
+ puts "\n<hr />"
+ if {$label!=""} {
+ puts "<a name=\"$label\"></a>"
+ }
+ puts "<h1>$name</h1>\n"
+}
+
+puts {
+<p>The <a href="#syntax">PRAGMA command</a> is a special command used to
+modify the operation of the SQLite library or to query the library for
+internal (non-table) data. The PRAGMA command is issued using the same
+interface as other SQLite commands (e.g. SELECT, INSERT) but is
+different in the following important respects:
+</p>
+<ul>
+<li>Specific pragma statements may be removed and others added in future
+ releases of SQLite. Use with caution!
+<li>No error messages are generated if an unknown pragma is issued.
+ Unknown pragmas are simply ignored. This means if there is a typo in
+ a pragma statement the library does not inform the user of the fact.
+<li>Some pragmas take effect during the SQL compilation stage, not the
+    execution stage. This means that if using the C-language sqlite3_prepare(),
+    sqlite3_step(), sqlite3_finalize() API (or similar in a wrapper
+    interface), the pragma may be applied to the library during the
+    sqlite3_prepare() call.
+<li>The pragma command is unlikely to be compatible with any other SQL
+ engine.
+</ul>
+
+<p>The available pragmas fall into four basic categories:</p>
+<ul>
+<li>Pragmas used to <a href="#schema">query the schema</a> of the current
+ database.
+<li>Pragmas used to <a href="#modify">modify the operation</a> of the
+ SQLite library in some manner, or to query for the current mode of
+ operation.
+<li>Pragmas used to <a href="#version">query or modify the database's two
+ version values</a>, the schema-version and the user-version.
+<li>Pragmas used to <a href="#debug">debug the library</a> and verify that
+ database files are not corrupted.
+</ul>
+}
+
+Section {PRAGMA command syntax} syntax
+
+Syntax {sql-statement} {
+PRAGMA <name> [= <value>] |
+PRAGMA <function>(<arg>)
+}
+
+puts {
+<p>The pragmas that take an integer <b><i>value</i></b> also accept
+symbolic names. The strings "<b>on</b>", "<b>true</b>", and "<b>yes</b>"
+are equivalent to <b>1</b>. The strings "<b>off</b>", "<b>false</b>",
+and "<b>no</b>" are equivalent to <b>0</b>. These strings are case-
+insensitive, and do not require quotes. An unrecognized string will be
+treated as <b>1</b>, and will not generate an error. When the <i>value</i>
+is returned, it is returned as an integer.</p>
+}
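+
+puts {
+<p>For example, the following statements should all have the same effect
+(count_changes is used here purely as an illustration):</p>
+<pre>
+PRAGMA count_changes = 1;
+PRAGMA count_changes = on;
+PRAGMA count_changes = yes;
+PRAGMA count_changes = true;
+</pre>
+}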
+
+Section {Pragmas to modify library operation} modify
+
+puts {
+<ul>
+<a name="pragma_auto_vacuum"></a>
+<li><p><b>PRAGMA auto_vacuum;
+ <br>PRAGMA auto_vacuum = </b><i>0 | 1</i><b>;</b></p>
+ <p> Query or set the auto-vacuum flag in the database.</p>
+
+ <p>Normally, when a transaction that deletes data from a database is
+ committed, the database file remains the same size. Unused database file
+ pages are marked as such and reused later on, when data is inserted into
+ the database. In this mode the <a href="lang_vacuum.html">VACUUM</a>
+ command is used to reclaim unused space.</p>
+
+ <p>When the auto-vacuum flag is set, the database file shrinks when a
+ transaction that deletes data is committed (The VACUUM command is not
+ useful in a database with the auto-vacuum flag set). To support this
+ functionality the database stores extra information internally, resulting
+ in slightly larger database files than would otherwise be possible.</p>
+
+ <p>It is only possible to modify the value of the auto-vacuum flag before
+ any tables have been created in the database. No error message is
+ returned if an attempt to modify the auto-vacuum flag is made after
+ one or more tables have been created.
+ </p></li>
+
+<a name="pragma_cache_size"></a>
+<li><p><b>PRAGMA cache_size;
+ <br>PRAGMA cache_size = </b><i>Number-of-pages</i><b>;</b></p>
+ <p>Query or change the maximum number of database disk pages that SQLite
+ will hold in memory at once. Each page uses about 1.5K of memory.
+ The default cache size is 2000. If you are doing UPDATEs or DELETEs
+ that change many rows of a database and you do not mind if SQLite
+ uses more memory, you can increase the cache size for a possible speed
+ improvement.</p>
+ <p>When you change the cache size using the cache_size pragma, the
+ change only endures for the current session. The cache size reverts
+ to the default value when the database is closed and reopened. Use
+ the <a href="#pragma_default_cache_size"><b>default_cache_size</b></a>
+  pragma to set the cache size permanently.</p></li>
+
+<a name="pragma_case_sensitive_like"></a>
+<li><p><b>PRAGMA case_sensitive_like;
+ <br>PRAGMA case_sensitive_like = </b><i>0 | 1</i><b>;</b></p>
+ <p>The default behavior of the LIKE operator is to ignore case
+ for latin1 characters. Hence, by default <b>'a' LIKE 'A'</b> is
+ true. The case_sensitive_like pragma can be turned on to change
+ this behavior. When case_sensitive_like is enabled,
+ <b>'a' LIKE 'A'</b> is false but <b>'a' LIKE 'a'</b> is still true.</p>
+ </li>
+
+<a name="pragma_count_changes"></a>
+<li><p><b>PRAGMA count_changes;
+ <br>PRAGMA count_changes = </b><i>0 | 1</i><b>;</b></p>
+ <p>Query or change the count-changes flag. Normally, when the
+ count-changes flag is not set, INSERT, UPDATE and DELETE statements
+ return no data. When count-changes is set, each of these commands
+ returns a single row of data consisting of one integer value - the
+ number of rows inserted, modified or deleted by the command. The
+ returned change count does not include any insertions, modifications
+  or deletions performed by triggers.</p></li>
+
+<a name="pragma_default_cache_size"></a>
+<li><p><b>PRAGMA default_cache_size;
+ <br>PRAGMA default_cache_size = </b><i>Number-of-pages</i><b>;</b></p>
+ <p>Query or change the maximum number of database disk pages that SQLite
+ will hold in memory at once. Each page uses 1K on disk and about
+ 1.5K in memory.
+ This pragma works like the
+ <a href="#pragma_cache_size"><b>cache_size</b></a>
+ pragma with the additional
+ feature that it changes the cache size persistently. With this pragma,
+ you can set the cache size once and that setting is retained and reused
+ every time you reopen the database.</p></li>
+
+<a name="pragma_default_synchronous"></a>
+<li><p><b>PRAGMA default_synchronous;</b></p>
+ <p>This pragma was available in version 2.8 but was removed in version
+ 3.0. It is a dangerous pragma whose use is discouraged. To help
+  dissuade users of version 2.8 from employing this pragma, the documentation
+ will not tell you what it does.</p></li>
+
+
+<a name="pragma_empty_result_callbacks"></a>
+<li><p><b>PRAGMA empty_result_callbacks;
+ <br>PRAGMA empty_result_callbacks = </b><i>0 | 1</i><b>;</b></p>
+ <p>Query or change the empty-result-callbacks flag.</p>
+ <p>The empty-result-callbacks flag affects the sqlite3_exec API only.
+ Normally, when the empty-result-callbacks flag is cleared, the
+ callback function supplied to the sqlite3_exec() call is not invoked
+ for commands that return zero rows of data. When empty-result-callbacks
+ is set in this situation, the callback function is invoked exactly once,
+ with the third parameter set to 0 (NULL). This is to enable programs
+ that use the sqlite3_exec() API to retrieve column-names even when
+ a query returns no data.
+  </p></li>
+
+<a name="pragma_encoding"></a>
+<li><p><b>PRAGMA encoding;
+ <br>PRAGMA encoding = "UTF-8";
+ <br>PRAGMA encoding = "UTF-16";
+ <br>PRAGMA encoding = "UTF-16le";
+ <br>PRAGMA encoding = "UTF-16be";</b></p>
+  <p>In the first form, if the main database has already been
+ created, then this pragma returns the text encoding used by the
+ main database, one of "UTF-8", "UTF-16le" (little-endian UTF-16
+ encoding) or "UTF-16be" (big-endian UTF-16 encoding). If the main
+ database has not already been created, then the value returned is the
+ text encoding that will be used to create the main database, if
+ it is created by this session.</p>
+ <p>The second and subsequent forms of this pragma are only useful if
+ the main database has not already been created. In this case the
+ pragma sets the encoding that the main database will be created with if
+ it is created by this session. The string "UTF-16" is interpreted
+ as "UTF-16 encoding using native machine byte-ordering". If the second
+ and subsequent forms are used after the database file has already
+ been created, they have no effect and are silently ignored.</p>
+
+ <p>Once an encoding has been set for a database, it cannot be changed.</p>
+
+ <p>Databases created by the ATTACH command always use the same encoding
+ as the main database.</p>
+</li>
+
+<a name="pragma_full_column_names"></a>
+<li><p><b>PRAGMA full_column_names;
+ <br>PRAGMA full_column_names = </b><i>0 | 1</i><b>;</b></p>
+ <p>Query or change the full-column-names flag. This flag affects
+ the way SQLite names columns of data returned by SELECT statements
+ when the expression for the column is a table-column name or the
+ wildcard "*". Normally, such result columns are named
+  <table-name/alias>.<column-name> if the SELECT statement joins
+ two or
+ more tables together, or simply <column-name> if the SELECT
+ statement queries a single table. When the full-column-names flag
+  is set, such columns are always named <table-name/alias>.<column-name>
+  regardless of whether or not a join is performed.
+ </p>
+ <p>If both the short-column-names and full-column-names are set,
+ then the behaviour associated with the full-column-names flag is
+ exhibited.
+ </p>
+</li>
+
+<a name="pragma_fullfsync"></a>
+<li><p><b>PRAGMA fullfsync
+ <br>PRAGMA fullfsync = </b><i>0 | 1</i><b>;</b></p>
+  <p>Query or change the fullfsync flag. This flag
+  determines whether or not the F_FULLFSYNC syncing method is used
+ on systems that support it. The default value is off. As of this
+ writing (2006-02-10) only Mac OS X supports F_FULLFSYNC.
+ </p>
+</li>
+
+<a name="pragma_legacy_file_format"></a>
+<li><p><b>PRAGMA legacy_file_format;
+ <br>PRAGMA legacy_file_format = <i>ON | OFF</i></b></p>
+ <p>This pragma sets or queries the value of the legacy_file_format
+ flag. When this flag is on, new SQLite databases are created in
+ a file format that is readable and writable by all versions of
+ SQLite going back to 3.0.0. When the flag is off, new databases
+  are created using the latest file format which might not be
+ readable or writable by older versions of SQLite.</p>
+
+  <p>This flag only affects newly created databases. It has no
+  effect on databases that already exist.</p>
+</li>
+
+
+<a name="pragma_page_size"></a>
+<li><p><b>PRAGMA page_size;
+ <br>PRAGMA page_size = </b><i>bytes</i><b>;</b></p>
+ <p>Query or set the page-size of the database. The page-size
+ may only be set if the database has not yet been created. The page
+ size must be a power of two greater than or equal to 512 and less
+ than or equal to 8192. The upper limit may be modified by setting
+ the value of macro SQLITE_MAX_PAGE_SIZE during compilation. The
+ maximum upper bound is 32768.
+ </p>
+</li>
+
+<a name="pragma_read_uncommitted"></a>
+<li><p><b>PRAGMA read_uncommitted;
+ <br>PRAGMA read_uncommitted = </b><i>0 | 1</i><b>;</b></p>
+ <p>Query, set, or clear READ UNCOMMITTED isolation. The default isolation
+ level for SQLite is SERIALIZABLE. Any process or thread can select
+ READ UNCOMMITTED isolation, but SERIALIZABLE will still be used except
+ between connections that share a common page and schema cache.
+ Cache sharing is enabled using the
+ <a href="capi3ref.html#sqlite3_enable_shared_cache">
+ sqlite3_enable_shared_cache()</a> API and is only available between
+  connections running in the same thread. Cache sharing is off by default.
+ </p>
+</li>
+
+<a name="pragma_short_column_names"></a>
+<li><p><b>PRAGMA short_column_names;
+ <br>PRAGMA short_column_names = </b><i>0 | 1</i><b>;</b></p>
+ <p>Query or change the short-column-names flag. This flag affects
+ the way SQLite names columns of data returned by SELECT statements
+ when the expression for the column is a table-column name or the
+ wildcard "*". Normally, such result columns are named
+  <table-name/alias>.<column-name> if the SELECT statement
+ joins two or more tables together, or simply <column-name> if
+ the SELECT statement queries a single table. When the short-column-names
+ flag is set, such columns are always named <column-name>
+ regardless of whether or not a join is performed.
+ </p>
+ <p>If both the short-column-names and full-column-names are set,
+ then the behaviour associated with the full-column-names flag is
+ exhibited.
+ </p>
+</li>
+
+<a name="pragma_synchronous"></a>
+<li><p><b>PRAGMA synchronous;
+ <br>PRAGMA synchronous = FULL; </b>(2)<b>
+ <br>PRAGMA synchronous = NORMAL; </b>(1)<b>
+ <br>PRAGMA synchronous = OFF; </b>(0)</p>
+ <p>Query or change the setting of the "synchronous" flag.
+ The first (query) form will return the setting as an
+ integer. When synchronous is FULL (2), the SQLite database engine will
+ pause at critical moments to make sure that data has actually been
+ written to the disk surface before continuing. This ensures that if
+ the operating system crashes or if there is a power failure, the database
+ will be uncorrupted after rebooting. FULL synchronous is very
+ safe, but it is also slow.
+ When synchronous is NORMAL, the SQLite database
+ engine will still pause at the most critical moments, but less often
+ than in FULL mode. There is a very small (though non-zero) chance that
+ a power failure at just the wrong time could corrupt the database in
+ NORMAL mode. But in practice, you are more likely to suffer
+ a catastrophic disk failure or some other unrecoverable hardware
+ fault.
+ With synchronous OFF (0), SQLite continues without pausing
+ as soon as it has handed data off to the operating system.
+ If the application running SQLite crashes, the data will be safe, but
+ the database might become corrupted if the operating system
+ crashes or the computer loses power before that data has been written
+ to the disk surface. On the other hand, some
+ operations are as much as 50 or more times faster with synchronous OFF.
+ </p>
+ <p>In SQLite version 2, the default value is NORMAL. For version 3, the
+ default was changed to FULL.
+ </p>
+</li>
+
+
+<a name="pragma_temp_store"></a>
+<li><p><b>PRAGMA temp_store;
+ <br>PRAGMA temp_store = DEFAULT;</b> (0)<b>
+ <br>PRAGMA temp_store = FILE;</b> (1)<b>
+ <br>PRAGMA temp_store = MEMORY;</b> (2)</p>
+ <p>Query or change the setting of the "<b>temp_store</b>" parameter.
+ When temp_store is DEFAULT (0), the compile-time C preprocessor macro
+ TEMP_STORE is used to determine where temporary tables and indices
+ are stored. When
+ temp_store is MEMORY (2) temporary tables and indices are kept in memory.
+ When temp_store is FILE (1) temporary tables and indices are stored
+ in a file. The <a href="#pragma_temp_store_directory">
+ temp_store_directory</a> pragma can be used to specify the directory
+  containing this file when <b>FILE</b> is specified.
+  When the temp_store setting is changed,
+ all existing temporary tables, indices, triggers, and views are
+ immediately deleted.</p>
+
+ <p>It is possible for the library compile-time C preprocessor symbol
+ TEMP_STORE to override this pragma setting. The following table summarizes
+ the interaction of the TEMP_STORE preprocessor macro and the
+ temp_store pragma:</p>
+
+ <blockquote>
+ <table cellpadding="2" border="1">
+ <tr><th valign="bottom">TEMP_STORE</th>
+ <th valign="bottom">PRAGMA<br>temp_store</th>
+ <th>Storage used for<br>TEMP tables and indices</th></tr>
+ <tr><td align="center">0</td>
+ <td align="center"><em>any</em></td>
+ <td align="center">file</td></tr>
+ <tr><td align="center">1</td>
+ <td align="center">0</td>
+ <td align="center">file</td></tr>
+ <tr><td align="center">1</td>
+ <td align="center">1</td>
+ <td align="center">file</td></tr>
+ <tr><td align="center">1</td>
+ <td align="center">2</td>
+ <td align="center">memory</td></tr>
+ <tr><td align="center">2</td>
+ <td align="center">0</td>
+ <td align="center">memory</td></tr>
+ <tr><td align="center">2</td>
+ <td align="center">1</td>
+ <td align="center">file</td></tr>
+ <tr><td align="center">2</td>
+ <td align="center">2</td>
+ <td align="center">memory</td></tr>
+ <tr><td align="center">3</td>
+ <td align="center"><em>any</em></td>
+ <td align="center">memory</td></tr>
+ </table>
+ </blockquote>
+ </li>
+ <br>
+
+<a name="pragma_temp_store_directory"></a>
+<li><p><b>PRAGMA temp_store_directory;
+ <br>PRAGMA temp_store_directory = 'directory-name';</b></p>
+ <p>Query or change the setting of the "temp_store_directory" - the
+ directory where files used for storing temporary tables and indices
+ are kept. This setting lasts for the duration of the current connection
+ only and resets to its default value for each new connection opened.
+
+ <p>When the temp_store_directory setting is changed, all existing temporary
+  tables, indices, triggers, and views are immediately deleted. In
+ practice, temp_store_directory should be set immediately after the
+ database is opened. </p>
+
+ <p>The value <i>directory-name</i> should be enclosed in single quotes.
+ To revert the directory to the default, set the <i>directory-name</i> to
+ an empty string, e.g., <i>PRAGMA temp_store_directory = ''</i>. An
+ error is raised if <i>directory-name</i> is not found or is not
+ writable. </p>
+
+ <p>The default directory for temporary files depends on the OS. For
+  Unix/Linux/OSX, the default is the first writable directory found
+ in the list of: <b>/var/tmp, /usr/tmp, /tmp,</b> and <b>
+ <i>current-directory</i></b>. For Windows NT, the default
+ directory is determined by Windows, generally
+ <b>C:\Documents and Settings\<i>user-name</i>\Local Settings\Temp\</b>.
+ Temporary files created by SQLite are unlinked immediately after
+ opening, so that the operating system can automatically delete the
+ files when the SQLite process exits. Thus, temporary files are not
+ normally visible through <i>ls</i> or <i>dir</i> commands.</p>
+
+ </li>
+</ul>
+}
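+
+puts {
+<p>As a rough sketch of how several of the pragmas above might be combined,
+a session that favors speed over durability could begin with something
+like the following (the particular values are only examples):</p>
+<pre>
+PRAGMA cache_size = 10000;
+PRAGMA synchronous = OFF;
+PRAGMA temp_store = MEMORY;
+PRAGMA count_changes = 0;
+</pre>
+}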
+
+Section {Pragmas to query the database schema} schema
+
+puts {
+<ul>
+<a name="pragma_database_list"></a>
+<li><p><b>PRAGMA database_list;</b></p>
+ <p>For each open database, invoke the callback function once with
+ information about that database. Arguments include the index and
+ the name the database was attached with. The first row will be for
+ the main database. The second row will be for the database used to
+ store temporary tables.</p></li>
+
+<a name="pragma_foreign_key_list"></a>
+<li><p><b>PRAGMA foreign_key_list(</b><i>table-name</i><b>);</b></p>
+ <p>For each foreign key that references a column in the argument
+ table, invoke the callback function with information about that
+ foreign key. The callback function will be invoked once for each
+ column in each foreign key.</p></li>
+
+<a name="pragma_index_info"></a>
+<li><p><b>PRAGMA index_info(</b><i>index-name</i><b>);</b></p>
+ <p>For each column that the named index references, invoke the
+ callback function
+ once with information about that column, including the column name,
+ and the column number.</p></li>
+
+<a name="pragma_index_list"></a>
+<li><p><b>PRAGMA index_list(</b><i>table-name</i><b>);</b></p>
+ <p>For each index on the named table, invoke the callback function
+ once with information about that index. Arguments include the
+ index name and a flag to indicate whether or not the index must be
+ unique.</p></li>
+
+<a name="pragma_table_info"></a>
+<li><p><b>PRAGMA table_info(</b><i>table-name</i><b>);</b></p>
+ <p>For each column in the named table, invoke the callback function
+ once with information about that column, including the column name,
+ data type, whether or not the column can be NULL, and the default
+ value for the column.</p></li>
+</ul>
+}
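+
+puts {
+<p>For example, given a hypothetical table t1, the schema pragmas above
+might be invoked as follows:</p>
+<pre>
+PRAGMA table_info(t1);
+PRAGMA index_list(t1);
+PRAGMA foreign_key_list(t1);
+</pre>
+}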
+
+Section {Pragmas to query/modify version values} version
+
+puts {
+
+<ul>
+<a name="pragma_schema_version"></a>
+<a name="pragma_user_version"></a>
+<li><p><b>PRAGMA [database.]schema_version;
+ <br>PRAGMA [database.]schema_version = </b><i>integer </i><b>;
+ <br>PRAGMA [database.]user_version;
+ <br>PRAGMA [database.]user_version = </b><i>integer </i><b>;</b>
+
+
+<p> The pragmas schema_version and user_version are used to set or get
+ the value of the schema-version and user-version, respectively. Both
+ the schema-version and the user-version are 32-bit signed integers
+ stored in the database header.</p>
+
+<p> The schema-version is usually only manipulated internally by SQLite.
+ It is incremented by SQLite whenever the database schema is modified
+ (by creating or dropping a table or index). The schema version is
+ used by SQLite each time a query is executed to ensure that the
+ internal cache of the schema used when compiling the SQL query matches
+ the schema of the database against which the compiled query is actually
+ executed. Subverting this mechanism by using "PRAGMA schema_version"
+ to modify the schema-version is potentially dangerous and may lead
+ to program crashes or database corruption. Use with caution!</p>
+
+<p> The user-version is not used internally by SQLite. It may be used by
+ applications for any purpose.</p>
+</li>
+</ul>
+}
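+
+puts {
+<p>As an illustration, an application might use the user-version to record
+its own schema revision:</p>
+<pre>
+PRAGMA user_version;       -- returns 0 for a freshly created database
+PRAGMA user_version = 2;   -- record the application's schema revision
+PRAGMA user_version;       -- now returns 2
+</pre>
+}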
+
+Section {Pragmas to debug the library} debug
+
+puts {
+<ul>
+<a name="pragma_integrity_check"></a>
+<li><p><b>PRAGMA integrity_check;</b></p>
+ <p>The command does an integrity check of the entire database. It
+ looks for out-of-order records, missing pages, malformed records, and
+ corrupt indices.
+ If any problems are found, then a single string is returned which is
+ a description of all problems. If everything is in order, "ok" is
+ returned.</p></li>
+
+<a name="pragma_parser_trace"></a>
+<li><p><b>PRAGMA parser_trace = ON; </b>(1)<b>
+ <br>PRAGMA parser_trace = OFF;</b> (0)</p>
+ <p>Turn tracing of the SQL parser inside of the
+ SQLite library on and off. This is used for debugging.
+ This only works if the library is compiled without the NDEBUG macro.
+ </p></li>
+
+<a name="pragma_vdbe_trace"></a>
+<li><p><b>PRAGMA vdbe_trace = ON; </b>(1)<b>
+ <br>PRAGMA vdbe_trace = OFF;</b> (0)</p>
+ <p>Turn tracing of the virtual database engine inside of the
+ SQLite library on and off. This is used for debugging. See the
+ <a href="vdbe.html#trace">VDBE documentation</a> for more
+ information.</p></li>
+
+<a name="pragma_vdbe_listing"></a>
+<li><p><b>PRAGMA vdbe_listing = ON; </b>(1)<b>
+ <br>PRAGMA vdbe_listing = OFF;</b> (0)</p>
+ <p>Turn listings of virtual machine programs on and off.
+  When listing is on, the entire content of a program is printed
+ just prior to beginning execution. This is like automatically
+ executing an EXPLAIN prior to each statement. The statement
+ executes normally after the listing is printed.
+ This is used for debugging. See the
+ <a href="vdbe.html#trace">VDBE documentation</a> for more
+ information.</p></li>
+</ul>
+
+}
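+
+puts {
+<p>For example, a quick health check of a database might consist of the
+single statement below; on an undamaged database the only output should
+be the word "ok":</p>
+<pre>
+PRAGMA integrity_check;
+</pre>
+}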
Added: freeswitch/trunk/libs/sqlite/www/quickstart.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/quickstart.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,110 @@
+#
+# Run this TCL script to generate HTML for the quickstart.html file.
+#
+set rcsid {$Id: quickstart.tcl,v 1.8 2006/06/13 11:27:22 drh Exp $}
+source common.tcl
+header {SQLite In 5 Minutes Or Less}
+puts {
+<p>Here is what you do to start experimenting with SQLite without having
+to do a lot of tedious reading and configuration:</p>
+
+<h2>Download The Code</h2>
+
+<ul>
+<li><p>Get a copy of the prebuilt binaries for your machine, or get a copy
+of the sources and compile them yourself. Visit
+the <a href="download.html">download</a> page for more information.</p></li>
+</ul>
+
+<h2>Create A New Database</h2>
+
+<ul>
+<li><p>At a shell or DOS prompt, enter: "<b>sqlite3 test.db</b>". This will
+create a new database named "test.db". (You can use a different name if
+you like.)</p></li>
+<li><p>Enter SQL commands at the prompt to create and populate the
+new database, as in the sample session below.</p></li>
+<li><p>Additional documentation is available <a href="sqlite.html">here</a>.</p></li>
+</ul>
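+
+<p>For example, a very small first database could be created by typing SQL
+statements like these at the "sqlite>" prompt (the table and data are
+chosen purely for illustration):</p>
+
+<blockquote><pre>
+CREATE TABLE tbl1(one TEXT, two INTEGER);
+INSERT INTO tbl1 VALUES('hello', 10);
+INSERT INTO tbl1 VALUES('goodbye', 20);
+SELECT * FROM tbl1;
+</pre></blockquote>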
+
+<h2>Write Programs That Use SQLite</h2>
+
+<ul>
+<li><p>Below is a simple TCL program that demonstrates how to use
+the TCL interface to SQLite. The program executes the SQL statements
+given as the second argument on the database defined by the first
+argument. The commands to watch for are the <b>sqlite3</b> command
+on line 7 which opens an SQLite database and creates
+a new TCL command named "<b>db</b>" to access that database, the
+invocation of the <b>db</b> command on line 8 to execute
+SQL commands against the database, and the closing of the database connection
+on the last line of the script.</p>
+
+<blockquote><pre>
+#!/usr/bin/tclsh
+if {$argc!=2} {
+ puts stderr "Usage: $argv0 DATABASE SQL-STATEMENT"
+ exit 1
+}
+load /usr/lib/tclsqlite3.so Sqlite3
+<b>sqlite3</b> db [lindex $argv 0]
+<b>db</b> eval [lindex $argv 1] x {
+ foreach v $x(*) {
+ puts "$v = $x($v)"
+ }
+ puts ""
+}
+<b>db</b> close
+</pre></blockquote>
+</li>
+
+<li><p>Below is a simple C program that demonstrates how to use
+the C/C++ interface to SQLite. The name of a database is given by
+the first argument and the second argument is one or more SQL statements
+to execute against the database. The function calls to pay attention
+to here are the call to <b>sqlite3_open()</b> on line 22 which opens
+the database, <b>sqlite3_exec()</b> on line 27 that executes SQL
+commands against the database, and <b>sqlite3_close()</b> on line 31
+that closes the database connection.</p>
+
+<blockquote><pre>
+#include <stdio.h>
+#include <sqlite3.h>
+
+static int callback(void *NotUsed, int argc, char **argv, char **azColName){
+ int i;
+ for(i=0; i<argc; i++){
+ printf("%s = %s\n", azColName[i], argv[i] ? argv[i] : "NULL");
+ }
+ printf("\n");
+ return 0;
+}
+
+int main(int argc, char **argv){
+ sqlite3 *db;
+ char *zErrMsg = 0;
+ int rc;
+
+ if( argc!=3 ){
+ fprintf(stderr, "Usage: %s DATABASE SQL-STATEMENT\n", argv[0]);
+ exit(1);
+ }
+ rc = <b>sqlite3_open</b>(argv[1], &db);
+ if( rc ){
+ fprintf(stderr, "Can't open database: %s\n", sqlite3_errmsg(db));
+ sqlite3_close(db);
+ exit(1);
+ }
+ rc = <b>sqlite3_exec</b>(db, argv[2], callback, 0, &zErrMsg);
+ if( rc!=SQLITE_OK ){
+ fprintf(stderr, "SQL error: %s\n", zErrMsg);
+ sqlite3_free(zErrMsg);
+ }
+ <b>sqlite3_close</b>(db);
+ return 0;
+}
+</pre></blockquote>
+</li>
+</ul>
+}
+footer {$Id: quickstart.tcl,v 1.8 2006/06/13 11:27:22 drh Exp $}
Added: freeswitch/trunk/libs/sqlite/www/shared.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/sharedcache.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/sharedcache.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,221 @@
+#
+# Run this script to generate the sharedcache.html output file
+#
+set rcsid {$Id: }
+source common.tcl
+header {SQLite Shared-Cache Mode}
+
+proc HEADING {level title} {
+ global pnum
+ incr pnum($level)
+ foreach i [array names pnum] {
+ if {$i>$level} {set pnum($i) 0}
+ }
+ set h [expr {$level+1}]
+ if {$h>6} {set h 6}
+ set n $pnum(1).$pnum(2)
+ for {set i 3} {$i<=$level} {incr i} {
+ append n .$pnum($i)
+ }
+ puts "<h$h>$n $title</h$h>"
+}
+set pnum(1) 0
+set pnum(2) 0
+set pnum(3) 0
+set pnum(4) 0
+set pnum(5) 0
+set pnum(6) 0
+set pnum(7) 0
+set pnum(8) 0
+
+HEADING 1 {SQLite Shared-Cache Mode}
+
+puts {
+<p>Starting with version 3.3.0, SQLite includes a special "shared-cache"
+mode (disabled by default) intended for use in embedded servers. If
+shared-cache mode is enabled and a thread establishes multiple connections
+to the same database, the connections share a single data and schema cache.
+This can significantly reduce the quantity of memory and IO required by
+the system.</p>
+
+<p>Using shared-cache mode imposes some extra restrictions on
+passing database handles between threads and changes the semantics
+of the locking model in some cases. These details are described in full by
+this document. A basic understanding of the normal SQLite locking model (see
+<a href="lockingv3.html">File Locking And Concurrency In SQLite Version 3</a>
+for details) is assumed.</p>
+}
+
+HEADING 1 {Shared-Cache Locking Model}
+
+puts {
+<p>Externally, from the point of view of another process or thread, two
+or more database connections using a shared-cache appear as a single
+connection. The locking protocol used to arbitrate between multiple
+shared-caches or regular database users is described elsewhere.
+</p>
+
+<table style="margin:auto">
+<tr><td>
+<img src="shared.gif">
+<!-- <pre>
+ +--------------+ +--------------+
+ | Connection 2 | | Connection 3 |
+ +--------------+ +--------------+
+ | |
+ V V
++--------------+ +--------------+
+| Connection 1 | | Shared cache |
++--------------+ +--------------+
+ | |
+ V V
+ +----------------+
+ | Database |
+ +----------------+
+</pre> -->
+</table>
+<p style="font-style:italic;text-align:center">Figure 1</p>
+
+<p>Figure 1 depicts an example runtime configuration where three
+database connections have been established. Connection 1 is a normal
+SQLite database connection. Connections 2 and 3 share a cache (and so must
+have been established by the same process thread). The normal locking
+protocol is used to serialize database access between connection 1 and
+the shared cache. The internal protocol used to serialize (or not, see
+"Read-Uncommitted Isolation Mode" below) access to the shared-cache by
+connections 2 and 3 is described in the remainder of this section.
+</p>
+
+<p>There are three levels to the shared-cache locking model,
+transaction level locking, table level locking and schema level locking.
+They are described in the following three sub-sections.</p>
+
+}
+
+HEADING 2 {Transaction Level Locking}
+
+puts {
+<p>SQLite connections can open two kinds of transactions, read and write
+transactions. This is not done explicitly, a transaction is implicitly a
+read-transaction until it first writes to a database table, at which point
+it becomes a write-transaction.
+</p>
+<p>At most one connection to a single shared cache may open a
+write transaction at any one time. This may co-exist with any number of read
+transactions.
+</p>
+}
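+
+puts {
+<p>As a sketch, the following sequence (the table t1 is only an example)
+starts out as a read-transaction and is implicitly upgraded to a
+write-transaction by the UPDATE:</p>
+
+<pre>
+ BEGIN;                    -- opens a read-transaction
+ SELECT * FROM t1;         -- still a read-transaction
+ UPDATE t1 SET a = a + 1;  -- now a write-transaction
+ COMMIT;
+</pre>
+}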
+
+HEADING 2 {Table Level Locking}
+
+puts {
+<p>When two or more connections use a shared-cache, locks are used to
+serialize concurrent access attempts on a per-table basis. Tables support
+two types of locks, "read-locks" and "write-locks". Locks are granted to
+connections - at any one time, each database connection has either a
+read-lock, write-lock or no lock on each database table.
+</p>
+
+<p>At any one time, a single table may have any number of active read-locks
+or a single active write lock. To read data from a table, a connection must
+first obtain a read-lock. To write to a table, a connection must obtain a
+write-lock on that table. If a required table lock cannot be obtained,
+the query fails and SQLITE_LOCKED is returned to the caller.
+</p>
+
+<p>Once a connection obtains a table lock, it is not released until the
+current transaction (read or write) is concluded.
+</p>
+}
+
+HEADING 3 {Read-Uncommitted Isolation Mode}
+
+puts {
+<p>The behaviour described above may be modified slightly by using the
+<i>read_uncommitted</i> pragma to change the isolation level from serialized
+(the default), to read-uncommitted.</p>
+
+<p> A database connection in read-uncommitted mode does not attempt
+to obtain read-locks before reading from database tables as described
+above. This can lead to inconsistent query results if another database
+connection modifies a table while it is being read, but it also means that
+a read-transaction opened by a connection in read-uncommitted mode can
+neither block nor be blocked by any other connection.</p>
+
+<p>Read-uncommitted mode has no effect on the locks required to write to
+database tables (i.e. read-uncommitted connections must still obtain
+write-locks and hence database writes may still block or be blocked).
+Also, read-uncommitted mode has no effect on the <i>sqlite_master</i>
+locks required by the rules enumerated below (see section
+"Schema (sqlite_master) Level Locking").
+</p>
+
+<pre>
+ /* Set the value of the read-uncommitted flag:
+ **
+ ** True -> Set the connection to read-uncommitted mode.
+  ** False -> Set the connection to serialized (the default) mode.
+ */
+ PRAGMA read_uncommitted = <boolean>;
+
+ /* Retrieve the current value of the read-uncommitted flag */
+ PRAGMA read_uncommitted;
+</pre>
+}
+
+HEADING 2 {Schema (sqlite_master) Level Locking}
+
+puts {
+<p>The <i>sqlite_master</i> table supports shared-cache read and write
+locks in the same way as all other database tables (see description
+above). The following special rules also apply:
+</p>
+
+<ul>
+<li>A connection must obtain a read-lock on <i>sqlite_master</i> before
+accessing any database tables or obtaining any other read or write locks.</li>
+<li>Before executing a statement that modifies the database schema (i.e.
+a CREATE or DROP TABLE statement), a connection must obtain a write-lock on
+<i>sqlite_master</i>.
+</li>
+<li>A connection may not compile an SQL statement if any other connection
+is holding a write-lock on the <i>sqlite_master</i> table of any attached
+database (including the default database, "main").
+</li>
+</ul>
+}
+
+HEADING 1 {Thread Related Issues}
+
+puts {
+<p>When shared-cache mode is enabled, a database connection may only be
+used by the thread that called sqlite3_open() to create it. If another
+thread attempts to use the database connection, in most cases an
+SQLITE_MISUSE error is returned. However, this is not guaranteed and
+programs should not depend on this behaviour; in some cases a segfault
+may result.
+</p>
+}
+
+HEADING 1 {Enabling Shared-Cache Mode}
+
+puts {
+<p>Shared-cache mode is enabled on a thread-wide basis. Using the C
+interface, the following API can be used to enable or disable shared-cache
+mode for the calling thread:
+</p>
+
+<pre>
+int sqlite3_enable_shared_cache(int);
+</pre>
+
+<p>It is illegal to call sqlite3_enable_shared_cache() if one or more
+open database connections were opened by the calling thread. If the argument
+is non-zero, shared-cache mode is enabled. If the argument is zero,
+shared-cache mode is disabled. The return value is either SQLITE_OK (if the
+operation was successful), SQLITE_NOMEM (if a malloc() failed), or
+SQLITE_MISUSE (if the thread has open database connections).
+</p>
+}
+
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/speed.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/speed.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,495 @@
+#
+# Run this Tcl script to generate the speed.html file.
+#
+set rcsid {$Id: speed.tcl,v 1.17 2005/03/12 15:55:11 drh Exp $ }
+source common.tcl
+header {SQLite Database Speed Comparison}
+
+puts {
+<h2>Database Speed Comparison</h2>
+
+<font color="red"><b>
+Note: This document is old. It describes a speed comparison between
+an older version of SQLite against archaic versions of MySQL and PostgreSQL.
+Readers are invited to contribute more up-to-date speed comparisons
+on the <a href="http://www.sqlite.org/cvstrac/wiki">SQLite Wiki</a>.
+<p>
+The numbers here are old enough to be nearly meaningless. Until it is
+updated, use this document only as proof that SQLite is not a
+sluggard.
+</b></font>
+
+<h3>Executive Summary</h3>
+
+<p>A series of tests were run to measure the relative performance of
+SQLite 2.7.6, PostgreSQL 7.1.3, and MySQL 3.23.41.
+The following are general
+conclusions drawn from these experiments:
+</p>
+
+<ul>
+<li><p>
+ SQLite 2.7.6 is significantly faster (sometimes as much as 10 or
+ 20 times faster) than the default PostgreSQL 7.1.3 installation
+ on RedHat 7.2 for most common operations.
+</p></li>
+<li><p>
+ SQLite 2.7.6 is often faster (sometimes
+ more than twice as fast) than MySQL 3.23.41
+ for most common operations.
+</p></li>
+<li><p>
+ SQLite does not execute CREATE INDEX or DROP TABLE as fast as
+ the other databases. But this is not seen as a problem because
+ those are infrequent operations.
+</p></li>
+<li><p>
+ SQLite works best if you group multiple operations together into
+ a single transaction.
+</p></li>
+</ul>
+
+<p>
+The results presented here come with the following caveats:
+</p>
+
+<ul>
+<li><p>
+ These tests did not attempt to measure multi-user performance or
+ optimization of complex queries involving multiple joins and subqueries.
+</p></li>
+<li><p>
+ These tests are on a relatively small (approximately 14 megabyte) database.
+ They do not measure how well the database engines scale to larger problems.
+</p></li>
+</ul>
+
+<h3>Test Environment</h3>
+
+<p>
+The platform used for these tests is a 1.6GHz Athlon with 1GB of memory
+and an IDE disk drive. The operating system is RedHat Linux 7.2 with
+a stock kernel.
+</p>
+
+<p>
+The PostgreSQL and MySQL servers used were as delivered by default on
+RedHat 7.2. (PostgreSQL version 7.1.3 and MySQL version 3.23.41.)
+No effort was made to tune these engines. Note in particular
+that the default MySQL configuration on RedHat 7.2 does not support
+transactions. Not having to support transactions gives MySQL a
+big speed advantage, but SQLite is still able to hold its own on most
+tests.
+</p>
+
+<p>
+I am told that the default PostgreSQL configuration in RedHat 7.3
+is unnecessarily conservative (it is designed to
+work on a machine with 8MB of RAM) and that PostgreSQL could
+be made to run a lot faster with some knowledgeable configuration
+tuning.
+Matt Sergeant reports that he has tuned his PostgreSQL installation
+and rerun the tests shown below. His results show that
+PostgreSQL and MySQL run at about the same speed. For Matt's
+results, visit
+</p>
+
+<blockquote>
+<a href="http://www.sergeant.org/sqlite_vs_pgsync.html">
+http://www.sergeant.org/sqlite_vs_pgsync.html</a>
+</blockquote>
+
+<p>
+SQLite was tested in the same configuration that it appears
+on the website. It was compiled with -O6 optimization and with
+the -DNDEBUG=1 switch which disables the many "assert()" statements
+in the SQLite code. The -DNDEBUG=1 compiler option roughly doubles
+the speed of SQLite.
+</p>
+
+<p>
+All tests are conducted on an otherwise quiescent machine.
+A simple Tcl script was used to generate and run all the tests.
+A copy of this Tcl script can be found in the SQLite source tree
+in the file <b>tools/speedtest.tcl</b>.
+</p>
+
+<p>
+The times reported on all tests represent wall-clock time
+in seconds. Two separate time values are reported for SQLite.
+The first value is for SQLite in its default configuration with
+full disk synchronization turned on. With synchronization turned
+on, SQLite executes
+an <b>fsync()</b> system call (or the equivalent) at key points
+to make certain that critical data has
+actually been written to the disk drive surface. Synchronization
+is necessary to guarantee the integrity of the database if the
+operating system crashes or the computer powers down unexpectedly
+in the middle of a database update. The second time reported for SQLite is
+when synchronization is turned off. With synchronization off,
+SQLite is sometimes much faster, but there is a risk that an
+operating system crash or an unexpected power failure could
+damage the database. Generally speaking, the synchronous SQLite
+times are for comparison against PostgreSQL (which is also
+synchronous) and the asynchronous SQLite times are for
+comparison against the asynchronous MySQL engine.
+</p>
+
+<h3>Test 1: 1000 INSERTs</h3>
+<blockquote>
+CREATE TABLE t1(a INTEGER, b INTEGER, c VARCHAR(100));<br>
+INSERT INTO t1 VALUES(1,13153,'thirteen thousand one hundred fifty three');<br>
+INSERT INTO t1 VALUES(2,75560,'seventy five thousand five hundred sixty');<br>
+<i>... 995 lines omitted</i><br>
+INSERT INTO t1 VALUES(998,66289,'sixty six thousand two hundred eighty nine');<br>
+INSERT INTO t1 VALUES(999,24322,'twenty four thousand three hundred twenty two');<br>
+INSERT INTO t1 VALUES(1000,94142,'ninety four thousand one hundred forty two');<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 4.373</td></tr>
+<tr><td>MySQL:</td><td align="right"> 0.114</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 13.061</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 0.223</td></tr>
+</table>
+
+<p>
+Because it does not have a central server to coordinate access,
+SQLite must close and reopen the database file, and thus invalidate
+its cache, for each transaction. In this test, each SQL statement
+is a separate transaction so the database file must be opened and closed
+and the cache must be flushed 1000 times. In spite of this, the asynchronous
+version of SQLite is still nearly as fast as MySQL. Notice how much slower
+the synchronous version is, however. SQLite calls <b>fsync()</b> after
+each synchronous transaction to make sure that all data is safely on
+the disk surface before continuing. For most of the 13 seconds in the
+synchronous test, SQLite was sitting idle waiting on disk I/O to complete.</p>
+
+
+<h3>Test 2: 25000 INSERTs in a transaction</h3>
+<blockquote>
+BEGIN;<br>
+CREATE TABLE t2(a INTEGER, b INTEGER, c VARCHAR(100));<br>
+INSERT INTO t2 VALUES(1,59672,'fifty nine thousand six hundred seventy two');<br>
+<i>... 24997 lines omitted</i><br>
+INSERT INTO t2 VALUES(24999,89569,'eighty nine thousand five hundred sixty nine');<br>
+INSERT INTO t2 VALUES(25000,94666,'ninety four thousand six hundred sixty six');<br>
+COMMIT;<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 4.900</td></tr>
+<tr><td>MySQL:</td><td align="right"> 2.184</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 0.914</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 0.757</td></tr>
+</table>
+
+<p>
+When all the INSERTs are put in a transaction, SQLite no longer has to
+close and reopen the database or invalidate its cache between each statement.
+It also does not
+have to do any fsync()s until the very end. When unshackled in
+this way, SQLite is much faster than either PostgreSQL or MySQL.
+</p>
+
+<h3>Test 3: 25000 INSERTs into an indexed table</h3>
+<blockquote>
+BEGIN;<br>
+CREATE TABLE t3(a INTEGER, b INTEGER, c VARCHAR(100));<br>
+CREATE INDEX i3 ON t3(c);<br>
+<i>... 24998 lines omitted</i><br>
+INSERT INTO t3 VALUES(24999,88509,'eighty eight thousand five hundred nine');<br>
+INSERT INTO t3 VALUES(25000,84791,'eighty four thousand seven hundred ninety one');<br>
+COMMIT;<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 8.175</td></tr>
+<tr><td>MySQL:</td><td align="right"> 3.197</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 1.555</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 1.402</td></tr>
+</table>
+
+<p>
+There were reports that SQLite did not perform as well on an indexed table.
+This test was recently added to disprove those rumors. It is true that
+SQLite is not as fast at creating new index entries as the other engines
+(see Test 6 below) but its overall speed is still better.
+</p>
+
+<h3>Test 4: 100 SELECTs without an index</h3>
+<blockquote>
+BEGIN;<br>
+SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<1000;<br>
+SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<1100;<br>
+<i>... 96 lines omitted</i><br>
+SELECT count(*), avg(b) FROM t2 WHERE b>=9800 AND b<10800;<br>
+SELECT count(*), avg(b) FROM t2 WHERE b>=9900 AND b<10900;<br>
+COMMIT;<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 3.629</td></tr>
+<tr><td>MySQL:</td><td align="right"> 2.760</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 2.494</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 2.526</td></tr>
+</table>
+
+
+<p>
+This test does 100 queries on a 25000 entry table without an index,
+thus requiring a full table scan. Prior versions of SQLite used to
+be slower than PostgreSQL and MySQL on this test, but recent performance
+enhancements have increased its speed so that it is now the fastest
+of the group.
+</p>
+
+<h3>Test 5: 100 SELECTs on a string comparison</h3>
+<blockquote>
+BEGIN;<br>
+SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one%';<br>
+SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%two%';<br>
+<i>... 96 lines omitted</i><br>
+SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%ninety nine%';<br>
+SELECT count(*), avg(b) FROM t2 WHERE c LIKE '%one hundred%';<br>
+COMMIT;<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 13.409</td></tr>
+<tr><td>MySQL:</td><td align="right"> 4.640</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 3.362</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 3.372</td></tr>
+</table>
+
+<p>
+This test still does 100 full table scans but it uses
+string comparisons instead of numerical comparisons.
+SQLite is over three times faster than PostgreSQL here and about 30%
+faster than MySQL.
+</p>
+
+<h3>Test 6: Creating an index</h3>
+<blockquote>
+CREATE INDEX i2a ON t2(a);<br>CREATE INDEX i2b ON t2(b);
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 0.381</td></tr>
+<tr><td>MySQL:</td><td align="right"> 0.318</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 0.777</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 0.659</td></tr>
+</table>
+
+<p>
+SQLite is slower at creating new indices. This is not a huge problem
+(since new indices are not created very often) but it is something that
+is being worked on. Hopefully, future versions of SQLite will do better
+here.
+</p>
+
+<h3>Test 7: 5000 SELECTs with an index</h3>
+<blockquote>
+SELECT count(*), avg(b) FROM t2 WHERE b>=0 AND b<100;<br>
+SELECT count(*), avg(b) FROM t2 WHERE b>=100 AND b<200;<br>
+SELECT count(*), avg(b) FROM t2 WHERE b>=200 AND b<300;<br>
+<i>... 4994 lines omitted</i><br>
+SELECT count(*), avg(b) FROM t2 WHERE b>=499700 AND b<499800;<br>
+SELECT count(*), avg(b) FROM t2 WHERE b>=499800 AND b<499900;<br>
+SELECT count(*), avg(b) FROM t2 WHERE b>=499900 AND b<500000;<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 4.614</td></tr>
+<tr><td>MySQL:</td><td align="right"> 1.270</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 1.121</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 1.162</td></tr>
+</table>
+
+<p>
+All three database engines run faster when they have indices to work with.
+But SQLite is still the fastest.
+</p>
+
+<h3>Test 8: 1000 UPDATEs without an index</h3>
+<blockquote>
+BEGIN;<br>
+UPDATE t1 SET b=b*2 WHERE a>=0 AND a<10;<br>
+UPDATE t1 SET b=b*2 WHERE a>=10 AND a<20;<br>
+<i>... 996 lines omitted</i><br>
+UPDATE t1 SET b=b*2 WHERE a>=9980 AND a<9990;<br>
+UPDATE t1 SET b=b*2 WHERE a>=9990 AND a<10000;<br>
+COMMIT;<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 1.739</td></tr>
+<tr><td>MySQL:</td><td align="right"> 8.410</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 0.637</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 0.638</td></tr>
+</table>
+
+<p>
+For this particular UPDATE test, MySQL is consistently
+five or ten times
+slower than PostgreSQL and SQLite. I do not know why. MySQL is
+normally a very fast engine. Perhaps this problem has been addressed
+in later versions of MySQL.
+</p>
+
+<h3>Test 9: 25000 UPDATEs with an index</h3>
+<blockquote>
+BEGIN;<br>
+UPDATE t2 SET b=468026 WHERE a=1;<br>
+UPDATE t2 SET b=121928 WHERE a=2;<br>
+<i>... 24996 lines omitted</i><br>
+UPDATE t2 SET b=35065 WHERE a=24999;<br>
+UPDATE t2 SET b=347393 WHERE a=25000;<br>
+COMMIT;<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 18.797</td></tr>
+<tr><td>MySQL:</td><td align="right"> 8.134</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 3.520</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 3.104</td></tr>
+</table>
+
+<p>
+As recently as version 2.7.0, SQLite ran at about the same speed as
+MySQL on this test. But recent optimizations to SQLite have more
+than doubled the speed of UPDATEs.
+</p>
+
+<h3>Test 10: 25000 text UPDATEs with an index</h3>
+<blockquote>
+BEGIN;<br>
+UPDATE t2 SET c='one hundred forty eight thousand three hundred eighty two' WHERE a=1;<br>
+UPDATE t2 SET c='three hundred sixty six thousand five hundred two' WHERE a=2;<br>
+<i>... 24996 lines omitted</i><br>
+UPDATE t2 SET c='three hundred eighty three thousand ninety nine' WHERE a=24999;<br>
+UPDATE t2 SET c='two hundred fifty six thousand eight hundred thirty' WHERE a=25000;<br>
+COMMIT;<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 48.133</td></tr>
+<tr><td>MySQL:</td><td align="right"> 6.982</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 2.408</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 1.725</td></tr>
+</table>
+
+<p>
+Here again, version 2.7.0 of SQLite used to run at about the same speed
+as MySQL. But now version 2.7.6 is over two times faster than MySQL and
+over twenty times faster than PostgreSQL.
+</p>
+
+<p>
+In fairness to PostgreSQL, it started thrashing on this test. A
+knowledgeable administrator might be able to get PostgreSQL to run a lot
+faster here by tweaking and tuning the server a little.
+</p>
+
+<h3>Test 11: INSERTs from a SELECT</h3>
+<blockquote>
+BEGIN;<br>INSERT INTO t1 SELECT b,a,c FROM t2;<br>INSERT INTO t2 SELECT b,a,c FROM t1;<br>COMMIT;
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 61.364</td></tr>
+<tr><td>MySQL:</td><td align="right"> 1.537</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 2.787</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 1.599</td></tr>
+</table>
+
+<p>
+The asynchronous SQLite is just a shade slower than MySQL on this test.
+(MySQL seems to be especially adept at INSERT...SELECT statements.)
+The PostgreSQL engine is still thrashing - most of the 61 seconds it used
+were spent waiting on disk I/O.
+</p>
+
+<h3>Test 12: DELETE without an index</h3>
+<blockquote>
+DELETE FROM t2 WHERE c LIKE '%fifty%';
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 1.509</td></tr>
+<tr><td>MySQL:</td><td align="right"> 0.975</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 4.004</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 0.560</td></tr>
+</table>
+
+<p>
+The synchronous version of SQLite is the slowest of the group in this test,
+but the asynchronous version is the fastest.
+The difference is the extra time needed to execute fsync().
+</p>
+
+<h3>Test 13: DELETE with an index</h3>
+<blockquote>
+DELETE FROM t2 WHERE a>10 AND a<20000;
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 1.316</td></tr>
+<tr><td>MySQL:</td><td align="right"> 2.262</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 2.068</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 0.752</td></tr>
+</table>
+
+<p>
+This test is significant because it is one of the few where
+PostgreSQL is faster than MySQL. The asynchronous SQLite is,
+however, faster than both of the other two.
+</p>
+
+<h3>Test 14: A big INSERT after a big DELETE</h3>
+<blockquote>
+INSERT INTO t2 SELECT * FROM t1;
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 13.168</td></tr>
+<tr><td>MySQL:</td><td align="right"> 1.815</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 3.210</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 1.485</td></tr>
+</table>
+
+<p>
+Some older versions of SQLite (prior to version 2.4.0)
+would show decreasing performance after a
+sequence of DELETEs followed by new INSERTs. As this test shows, the
+problem has now been resolved.
+</p>
+
+<h3>Test 15: A big DELETE followed by many small INSERTs</h3>
+<blockquote>
+BEGIN;<br>
+DELETE FROM t1;<br>
+INSERT INTO t1 VALUES(1,10719,'ten thousand seven hundred nineteen');<br>
+<i>... 11997 lines omitted</i><br>
+INSERT INTO t1 VALUES(11999,72836,'seventy two thousand eight hundred thirty six');<br>
+INSERT INTO t1 VALUES(12000,64231,'sixty four thousand two hundred thirty one');<br>
+COMMIT;<br>
+
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 4.556</td></tr>
+<tr><td>MySQL:</td><td align="right"> 1.704</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 0.618</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 0.406</td></tr>
+</table>
+
+<p>
+SQLite is very good at doing INSERTs within a transaction, which probably
+explains why it is so much faster than the other databases at this test.
+</p>
+
+<h3>Test 16: DROP TABLE</h3>
+<blockquote>
+DROP TABLE t1;<br>DROP TABLE t2;<br>DROP TABLE t3;
+</blockquote><table border=0 cellpadding=0 cellspacing=0>
+<tr><td>PostgreSQL:</td><td align="right"> 0.135</td></tr>
+<tr><td>MySQL:</td><td align="right"> 0.015</td></tr>
+<tr><td>SQLite 2.7.6:</td><td align="right"> 0.939</td></tr>
+<tr><td>SQLite 2.7.6 (nosync):</td><td align="right"> 0.254</td></tr>
+</table>
+
+<p>
+SQLite is slower than the other databases when it comes to dropping tables.
+This probably is because when SQLite drops a table, it has to go through and
+erase the records in the database file that deal with that table. MySQL and
+PostgreSQL, on the other hand, use separate files to represent each table
+so they can drop a table simply by deleting a file, which is much faster.
+</p>
+
+<p>
+On the other hand, dropping tables is not a very common operation
+so if SQLite takes a little longer, that is not seen as a big problem.
+</p>
+
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/sqlite.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/sqlite.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,577 @@
+#
+# Run this Tcl script to generate the sqlite.html file.
+#
+set rcsid {$Id: sqlite.tcl,v 1.24 2006/08/19 13:32:05 drh Exp $}
+source common.tcl
+header {sqlite: A command-line access program for SQLite databases}
+puts {
+<h2>sqlite: A command-line access program for SQLite databases</h2>
+
+<p>The SQLite library includes a simple command-line utility named
+<b>sqlite</b> that allows the user to manually enter and execute SQL
+commands against an SQLite database. This document provides a brief
+introduction on how to use <b>sqlite</b>.
+
+<h3>Getting Started</h3>
+
+<p>To start the <b>sqlite</b> program, just type "sqlite" followed by
+the name of the file that holds the SQLite database. If the file does
+not exist, a new one is created automatically.
+The <b>sqlite</b> program will
+then prompt you to enter SQL. Type in SQL statements (terminated by a
+semicolon), press "Enter" and the SQL will be executed.</p>
+
+<p>For example, to create a new SQLite database named "ex1"
+with a single table named "tbl1", you might do this:</p>
+}
+
+proc Code {body} {
+ puts {<blockquote><tt>}
+ regsub -all {&} [string trim $body] {\&amp;} body
+ regsub -all {>} $body {\&gt;} body
+ regsub -all {<} $body {\&lt;} body
+ regsub -all {\(\(\(} $body {<b>} body
+ regsub -all {\)\)\)} $body {</b>} body
+ regsub -all { } $body {\&nbsp;} body
+ regsub -all \n $body <br>\n body
+ puts $body
+ puts {</tt></blockquote>}
+}
+
+Code {
+$ (((sqlite ex1)))
+SQLite version 2.0.0
+Enter ".help" for instructions
+sqlite> (((create table tbl1(one varchar(10), two smallint);)))
+sqlite> (((insert into tbl1 values('hello!',10);)))
+sqlite> (((insert into tbl1 values('goodbye', 20);)))
+sqlite> (((select * from tbl1;)))
+hello!|10
+goodbye|20
+sqlite>
+}
+
+puts {
+<p>You can terminate the sqlite program by typing your system's
+End-Of-File character (usually a Control-D) or the interrupt
+character (usually a Control-C).</p>
+
+<p>Make sure you type a semicolon at the end of each SQL command!
+The sqlite program looks for a semicolon to know when your SQL command is
+complete. If you omit the semicolon, sqlite will give you a
+continuation prompt and wait for you to enter more text to be
+added to the current SQL command. This feature allows you to
+enter SQL commands that span multiple lines. For example:</p>
+}
+
+Code {
+sqlite> (((CREATE TABLE tbl2 ()))
+ ...> ((( f1 varchar(30) primary key,)))
+ ...> ((( f2 text,)))
+ ...> ((( f3 real)))
+ ...> ((();)))
+sqlite>
+}
+
+puts {
+
+<h3>Aside: Querying the SQLITE_MASTER table</h3>
+
+<p>The database schema in an SQLite database is stored in
+a special table named "sqlite_master".
+You can execute "SELECT" statements against the
+special sqlite_master table just like any other table
+in an SQLite database. For example:</p>
+}
+
+Code {
+$ (((sqlite ex1)))
+SQLite version 2.0.0
+Enter ".help" for instructions
+sqlite> (((select * from sqlite_master;)))
+ type = table
+ name = tbl1
+tbl_name = tbl1
+rootpage = 3
+ sql = create table tbl1(one varchar(10), two smallint)
+sqlite>
+}
+
+puts {
+<p>
+But you cannot execute DROP TABLE, UPDATE, INSERT or DELETE against
+the sqlite_master table. The sqlite_master
+table is updated automatically as you create or drop tables and
+indices from the database. You can not make manual changes
+to the sqlite_master table.
+</p>
+
+<p>
+The schema for TEMPORARY tables is not stored in the "sqlite_master" table
+since TEMPORARY tables are not visible to applications other than the
+application that created the table. The schema for TEMPORARY tables
+is stored in another special table named "sqlite_temp_master". The
+"sqlite_temp_master" table is temporary itself.
+</p>
+
+<h3>Special commands to sqlite</h3>
+
+<p>
+Most of the time, sqlite just reads lines of input and passes them
+on to the SQLite library for execution.
+But if an input line begins with a dot ("."), then
+that line is intercepted and interpreted by the sqlite program itself.
+These "dot commands" are typically used to change the output format
+of queries, or to execute certain prepackaged query statements.
+</p>
+
+<p>
+For a listing of the available dot commands, you can enter ".help"
+at any time. For example:
+</p>}
+
+Code {
+sqlite> (((.help)))
+.databases List names and files of attached databases
+.dump ?TABLE? ... Dump the database in a text format
+.echo ON|OFF Turn command echo on or off
+.exit Exit this program
+.explain ON|OFF Turn output mode suitable for EXPLAIN on or off.
+.header(s) ON|OFF Turn display of headers on or off
+.help Show this message
+.indices TABLE Show names of all indices on TABLE
+.mode MODE Set mode to one of "line(s)", "column(s)",
+ "insert", "list", or "html"
+.mode insert TABLE Generate SQL insert statements for TABLE
+.nullvalue STRING Print STRING instead of nothing for NULL data
+.output FILENAME Send output to FILENAME
+.output stdout Send output to the screen
+.prompt MAIN CONTINUE Replace the standard prompts
+.quit Exit this program
+.read FILENAME Execute SQL in FILENAME
+.schema ?TABLE? Show the CREATE statements
+.separator STRING Change separator string for "list" mode
+.show Show the current values for various settings
+.tables ?PATTERN? List names of tables matching a pattern
+.timeout MS Try opening locked tables for MS milliseconds
+.width NUM NUM ... Set column widths for "column" mode
+sqlite>
+}
+
+puts {
+<h3>Changing Output Formats</h3>
+
+<p>The sqlite program is able to show the results of a query
+in five different formats: "line", "column", "list", "html", and "insert".
+You can use the ".mode" dot command to switch between these output
+formats.</p>
+
+<p>The default output mode is "list". In
+list mode, each record of a query result is written on one line of
+output and each column within that record is separated by a specific
+separator string. The default separator is a pipe symbol ("|").
+List mode is especially useful when you are going to send the output
+of a query to another program (such as AWK) for additional processing.</p>}
+
+Code {
+sqlite> (((.mode list)))
+sqlite> (((select * from tbl1;)))
+hello|10
+goodbye|20
+sqlite>
+}
+
+puts {
+<p>You can use the ".separator" dot command to change the separator
+for list mode. For example, to change the separator to a comma and
+a space, you could do this:</p>}
+
+Code {
+sqlite> (((.separator ", ")))
+sqlite> (((select * from tbl1;)))
+hello, 10
+goodbye, 20
+sqlite>
+}
+
+puts {
+<p>In "line" mode, each column in a row of the database
+is shown on a line by itself. Each line consists of the column
+name, an equal sign and the column data. Successive records are
+separated by a blank line. Here is an example of line mode
+output:</p>}
+
+Code {
+sqlite> (((.mode line)))
+sqlite> (((select * from tbl1;)))
+one = hello
+two = 10
+
+one = goodbye
+two = 20
+sqlite>
+}
+
+puts {
+<p>In column mode, each record is shown on a separate line with the
+data aligned in columns. For example:</p>}
+
+Code {
+sqlite> (((.mode column)))
+sqlite> (((select * from tbl1;)))
+one         two
+----------  ----------
+hello       10
+goodbye     20
+sqlite>
+}
+
+puts {
+<p>By default, each column is at least 10 characters wide.
+Data that is too wide to fit in a column is truncated. You can
+adjust the column widths using the ".width" command. Like this:</p>}
+
+Code {
+sqlite> (((.width 12 6)))
+sqlite> (((select * from tbl1;)))
+one           two
+------------  ------
+hello         10
+goodbye       20
+sqlite>
+}
+
+puts {
+<p>The ".width" command in the example above sets the width of the first
+column to 12 and the width of the second column to 6. All other column
+widths were unaltered. You can give as many arguments to ".width" as
+necessary to specify the widths of as many columns as are in your
+query results.</p>
+
+<p>If you specify a width of 0 for a column, then the column
+width is automatically adjusted to be the maximum of three
+numbers: 10, the width of the header, and the width of the
+first row of data. This makes the column width self-adjusting.
+The default width setting for every column is this
+auto-adjusting 0 value.</p>
+
+<p>The column labels that appear on the first two lines of output
+can be turned on and off using the ".header" dot command. In the
+examples above, the column labels are on. To turn them off you
+could do this:</p>}
+
+Code {
+sqlite> (((.header off)))
+sqlite> (((select * from tbl1;)))
+hello 10
+goodbye 20
+sqlite>
+}
+
+puts {
+<p>Another useful output mode is "insert". In insert mode, the output
+is formatted to look like SQL INSERT statements. You can use insert
+mode to generate text that can later be used to input data into a
+different database.</p>
+
+<p>When specifying insert mode, you have to give an extra argument
+which is the name of the table to be inserted into. For example:</p>
+}
+
+Code {
+sqlite> (((.mode insert new_table)))
+sqlite> (((select * from tbl1;)))
+INSERT INTO 'new_table' VALUES('hello',10);
+INSERT INTO 'new_table' VALUES('goodbye',20);
+sqlite>
+}
+
+puts {
+<p>The last output mode is "html". In this mode, sqlite writes
+the results of the query as an XHTML table. The beginning
+&lt;TABLE&gt; and the ending &lt;/TABLE&gt; are not written, but
+all of the intervening &lt;TR&gt;s, &lt;TH&gt;s, and &lt;TD&gt;s
+are. The html output mode is envisioned as being useful for
+CGI.</p>
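+
+<p>For example, the same query shown above might produce row markup
+along these lines (a rough sketch; the exact layout of the generated
+tags may differ):</p>
+
+<blockquote><pre>
+sqlite&gt; .mode html
+sqlite&gt; select * from tbl1;
+&lt;TR&gt;&lt;TD&gt;hello&lt;/TD&gt;&lt;TD&gt;10&lt;/TD&gt;&lt;/TR&gt;
+&lt;TR&gt;&lt;TD&gt;goodbye&lt;/TD&gt;&lt;TD&gt;20&lt;/TD&gt;&lt;/TR&gt;
+</pre></blockquote>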
+}
+
+puts {
+<h3>Writing results to a file</h3>
+
+<p>By default, sqlite sends query results to standard output. You
+can change this using the ".output" command. Just put the name of
+an output file as an argument to the .output command and all subsequent
+query results will be written to that file. Use ".output stdout" to
+begin writing to standard output again. For example:</p>}
+
+Code {
+sqlite> (((.mode list)))
+sqlite> (((.separator |)))
+sqlite> (((.output test_file_1.txt)))
+sqlite> (((select * from tbl1;)))
+sqlite> (((.exit)))
+$ (((cat test_file_1.txt)))
+hello|10
+goodbye|20
+$
+}
+
+puts {
+<h3>Querying the database schema</h3>
+
+<p>The sqlite program provides several convenience commands that
+are useful for looking at the schema of the database. There is
+nothing that these commands do that cannot be done by some other
+means. These commands are provided purely as a shortcut.</p>
+
+<p>For example, to see a list of the tables in the database, you
+can enter ".tables".</p>
+}
+
+Code {
+sqlite> (((.tables)))
+tbl1
+tbl2
+sqlite>
+}
+
+puts {
+<p>The ".tables" command is the same as setting list mode then
+executing the following query:</p>
+
+<blockquote><pre>
+SELECT name FROM sqlite_master WHERE type='table'
+UNION ALL SELECT name FROM sqlite_temp_master WHERE type='table'
+ORDER BY name;
+</pre></blockquote>
+
+<p>In fact, if you look at the source code to the sqlite program
+(found in the source tree in the file src/shell.c) you'll find
+exactly the above query.</p>
+
+<p>The ".indices" command works in a similar way to list all of
+the indices for a particular table. The ".indices" command takes
+a single argument which is the name of the table for which the
+indices are desired. Last, but not least, is the ".schema" command.
+With no arguments, the ".schema" command shows the original CREATE TABLE
+and CREATE INDEX statements that were used to build the current database.
+If you give the name of a table to ".schema", it shows the original
+CREATE statement used to make that table and all of its indices.
+We have:</p>}
+
+Code {
+sqlite> (((.schema)))
+create table tbl1(one varchar(10), two smallint)
+CREATE TABLE tbl2 (
+ f1 varchar(30) primary key,
+ f2 text,
+ f3 real
+)
+sqlite> (((.schema tbl2)))
+CREATE TABLE tbl2 (
+ f1 varchar(30) primary key,
+ f2 text,
+ f3 real
+)
+sqlite>
+}
+
+puts {
+<p>The ".schema" command accomplishes the same thing as setting
+list mode, then entering the following query:</p>
+
+<blockquote><pre>
+SELECT sql FROM
+ (SELECT * FROM sqlite_master UNION ALL
+ SELECT * FROM sqlite_temp_master)
+WHERE type!='meta'
+ORDER BY tbl_name, type DESC, name
+</pre></blockquote>
+
+<p>Or, if you give an argument to ".schema" because you only
+want the schema for a single table, the query looks like this:</p>
+
+<blockquote><pre>
+SELECT sql FROM
+ (SELECT * FROM sqlite_master UNION ALL
+ SELECT * FROM sqlite_temp_master)
+WHERE tbl_name LIKE '%s' AND type!='meta'
+ORDER BY type DESC, name
+</pre></blockquote>
+
+<p>The <b>%s</b> in the query above is replaced by the argument
+to ".schema", of course. Notice that the argument to the ".schema"
+command appears to the right of an SQL LIKE operator. So you can
+use wildcards in the name of the table. For example, to get the
+schema for all tables whose names contain the character string
+"abc" you could enter:</p>}
+
+Code {
+sqlite> (((.schema %abc%)))
+}
+
+puts {
+<p>
+Along these same lines,
+the ".table" command also accepts a pattern as its first argument.
+If you give an argument to the .table command, a "%" is both
+appended and prepended and a LIKE clause is added to the query.
+This allows you to list only those tables that match a particular
+pattern.</p>
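+
+<p>For example, to list only the tables whose names contain "tbl"
+(a small illustrative session):</p>
+
+<blockquote><pre>
+sqlite&gt; .tables tbl
+tbl1
+tbl2
+sqlite&gt;
+</pre></blockquote>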
+
+<p>The ".databases" command shows a list of all databases open in
+the current connection. There will always be at least 2. The first
+one is "main", the original database opened. The second is "temp",
+the database used for temporary tables. There may be additional
+databases listed for databases attached using the ATTACH statement.
+The first output column is the name the database is attached with,
+and the second column is the filename of the external file.</p>}
+
+Code {
+sqlite> (((.databases)))
+}
+
+puts {
+<h3>Converting An Entire Database To An ASCII Text File</h3>
+
+<p>Use the ".dump" command to convert the entire contents of a
+database into a single ASCII text file. This file can be converted
+back into a database by piping it back into <b>sqlite</b>.</p>
+
+<p>A good way to make an archival copy of a database is this:</p>
+}
+
+Code {
+$ (((echo '.dump' | sqlite ex1 | gzip -c >ex1.dump.gz)))
+}
+
+puts {
+<p>This generates a file named <b>ex1.dump.gz</b> that contains everything
+you need to reconstruct the database at a later time, or on another
+machine. To reconstruct the database, just type:</p>
+}
+
+Code {
+$ (((zcat ex1.dump.gz | sqlite ex2)))
+}
+
+puts {
+<p>The text format used is the same as used by
+<a href="http://www.postgresql.org/">PostgreSQL</a>, so you
+can also use the .dump command to export an SQLite database
+into a PostgreSQL database. Like this:</p>
+}
+
+Code {
+$ (((createdb ex2)))
+$ (((echo '.dump' | sqlite ex1 | psql ex2)))
+}
+
+puts {
+<p>You can almost (but not quite) go the other way and export
+a PostgreSQL database into SQLite using the <b>pg_dump</b> utility.
+Unfortunately, when <b>pg_dump</b> writes the database schema information,
+it uses some SQL syntax that SQLite does not understand.
+So you cannot pipe the output of <b>pg_dump</b> directly
+into <b>sqlite</b>.
+But if you can recreate the
+schema separately, you can use <b>pg_dump</b> with the <b>-a</b>
+option to list just the data
+of a PostgreSQL database and import that directly into SQLite.</p>
+}
+
+Code {
+$ (((sqlite ex3 <schema.sql)))
+$ (((pg_dump -a ex2 | sqlite ex3)))
+}
+
+puts {
+<h3>Other Dot Commands</h3>
+
+<p>The ".explain" dot command can be used to set the output mode
+to "column" and to set the column widths to values that are reasonable
+for looking at the output of an EXPLAIN command. The EXPLAIN command
+is an SQLite-specific SQL extension that is useful for debugging. If any
+regular SQL is prefaced by EXPLAIN, then the SQL command is parsed and
+analyzed but is not executed. Instead, the sequence of virtual machine
+instructions that would have been used to execute the SQL command is
+returned like a query result. For example:</p>}
+
+Code {
+sqlite> (((.explain)))
+sqlite> (((explain delete from tbl1 where two<20;)))
+addr  opcode        p1     p2     p3
+----  ------------  -----  -----  -------------------------------------
+0     ListOpen      0      0
+1     Open          0      1      tbl1
+2     Next          0      9
+3     Field         0      1
+4     Integer       20     0
+5     Ge            0      2
+6     Key           0      0
+7     ListWrite     0      0
+8     Goto          0      2
+9     Noop          0      0
+10    ListRewind    0      0
+11    ListRead      0      14
+12    Delete        0      0
+13    Goto          0      11
+14    ListClose     0      0
+}
+
+puts {
+
+<p>The ".timeout" command sets the amount of time that the <b>sqlite</b>
+program will wait for locks to clear on files it is trying to access
+before returning an error. The default value of the timeout is zero so
+that an error is returned immediately if any needed database table or
+index is locked.</p>
+
+<p>And finally, we mention the ".exit" command which causes the
+sqlite program to exit.</p>
+
+<h3>Using sqlite in a shell script</h3>
+
+<p>
+One way to use sqlite in a shell script is to use "echo" or
+"cat" to generate a sequence of commands in a file, then invoke sqlite
+while redirecting input from the generated command file. This
+works fine and is appropriate in many circumstances. But as
+an added convenience, sqlite allows a single SQL command to be
+entered on the command line as a second argument after the
+database name. When the sqlite program is launched with two
+arguments, the second argument is passed to the SQLite library
+for processing, the query results are printed on standard output
+in list mode, and the program exits. This mechanism is designed
+to make sqlite easy to use in conjunction with programs like
+"awk". For example:</p>}
+
+Code {
+$ (((sqlite ex1 'select * from tbl1' |)))
+> ((( awk '{printf "<tr><td>%s<td>%s\n",$1,$2 }')))
+<tr><td>hello<td>10
+<tr><td>goodbye<td>20
+$
+}
+
+puts {
+<h3>Ending shell commands</h3>
+
+<p>
+SQLite commands are normally terminated by a semicolon. In the shell
+you can also use the word "GO" (case-insensitive) or a slash character
+"/" on a line by itself to end a command. These are used by SQL Server
+and Oracle, respectively. They are not understood by <b>sqlite_exec()</b>;
+they work only in the shell, which translates them into a semicolon before
+passing the command to that function.</p>
+}
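+
+puts {
+<p>For example, the query used earlier could be terminated with "GO"
+instead of a semicolon (a sketch of such a session):</p>}
+
+Code {
+sqlite> (((SELECT * FROM tbl1)))
+   ...> (((GO)))
+hello|10
+goodbye|20
+sqlite>
+}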
+
+puts {
+<h3>Compiling the sqlite program from sources</h3>
+
+<p>
+The sqlite program is built automatically when you compile the
+sqlite library. Just get a copy of the source tree, run
+"configure" and then "make".</p>
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/support.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/support.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,79 @@
+set rcsid {$Id: support.tcl,v 1.6 2005/12/05 22:22:40 drh Exp $}
+source common.tcl
+header {SQLite Support Options}
+puts {
+<h2>SQLite Support Options</h2>
+
+
+<h3>Mailing List</h3>
+<p>
+A mailing list has been set up for asking questions and
+for open discussion of problems
+and issues by the SQLite user community.
+To subscribe to the mailing list, send an email to
+<a href="mailto:sqlite-users-subscribe at sqlite.org">
+sqlite-users-subscribe at sqlite.org</a>.
+If you would prefer to get digests rather than individual
+emails, send a message to
+<a href="mailto:sqlite-users-digest-subscribe at sqlite.org">
+sqlite-users-digest-subscribe at sqlite.org</a>.
+For additional information about operating and using this
+mailing list, send a message to
+<a href="mailto:sqlite-users-help at sqlite.org">
+sqlite-users-help at sqlite.org</a> and instructions will be
+sent to you by return email.
+</p>
+
+<p>
+There are multiple archives of the mailing list:
+</p>
+
+<blockquote>
+<a href="http://www.mail-archive.com/sqlite-users%40sqlite.org/">
+http://www.mail-archive.com/sqlite-users%40sqlite.org</a><br>
+<a href="http://marc.10east.com/?l=sqlite-users&r=1&w=2">
+http://marc.10east.com/?l=sqlite-users&r=1&w=2</a><br>
+<a href="http://news.gmane.org/gmane.comp.db.sqlite.general">
+http://news.gmane.org/gmane.comp.db.sqlite.general</a>
+</blockquote>
+
+</p>
+
+<a name="directemail">
+<h3>Direct E-Mail To The Author</h3>
+
+<p>
+Use the mailing list.
+Please do <b>not</b> send email directly to the author of SQLite
+unless:
+<ul>
+<li>You have or intend to acquire a professional support contract
+as described below, or</li>
+<li>You are working on an open source project.</li>
+</ul>
+You are welcome to use SQLite in closed source, proprietary, and/or
+commercial projects and to ask questions about such use on the public
+mailing list. But please do not ask to receive free direct technical
+support. The software is free; direct technical support is not.
+</p>
+
+
+<h3>Professional Support</h3>
+
+<p>
+If you would like professional support for SQLite
+or if you want custom modifications to SQLite performed by the
+original author, these services are available for a modest fee.
+For additional information visit
+<a href="http://www.hwaci.com/sw/sqlite/prosupport.html">
+http://www.hwaci.com/sw/sqlite/prosupport.html</a> or contact:</p>
+
+<blockquote>
+D. Richard Hipp <br />
+Hwaci - Applied Software Research <br />
+704.948.4565 <br />
+<a href="mailto:drh at hwaci.com">drh at hwaci.com</a>
+</blockquote>
+
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/table-ex1b2.gif
==============================================================================
Binary file. No diff available.
Added: freeswitch/trunk/libs/sqlite/www/tclsqlite.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/tclsqlite.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,607 @@
+#
+# Run this Tcl script to generate the tclsqlite.html file.
+#
+set rcsid {$Id: tclsqlite.tcl,v 1.16 2006/01/05 15:50:07 drh Exp $}
+source common.tcl
+header {The Tcl interface to the SQLite library}
+proc METHOD {name text} {
+ puts "<a name=\"$name\">\n<h3>The \"$name\" method</h3>\n"
+ puts $text
+}
+puts {
+<h2>The Tcl interface to the SQLite library</h2>
+
+<p>The SQLite library is designed to be very easy to use from
+a Tcl or Tcl/Tk script. This document gives an overview of the Tcl
+programming interface.</p>
+
+<h3>The API</h3>
+
+<p>The interface to the SQLite library consists of a single
+Tcl command named <b>sqlite</b> (version 2.8) or <b>sqlite3</b>
+(version 3.0). Because there is only this
+one command, the interface is not placed in a separate
+namespace.</p>
+
+<p>The <b>sqlite3</b> command is used as follows:</p>
+
+<blockquote>
+<b>sqlite3</b> <i>dbcmd database-name</i>
+</blockquote>
+
+<p>
+The <b>sqlite3</b> command opens the database named in the second
+argument. If the database does not already exist, it is
+automatically created.
+The <b>sqlite3</b> command also creates a new Tcl
+command to control the database. The name of the new Tcl command
+is given by the first argument. This approach is similar to the
+way widgets are created in Tk.
+</p>
+
+<p>
+The name of the database is just the name of a disk file in which
+the database is stored. If the name of the database is an empty
+string or the special name ":memory:" then a new database is created
+in memory.
+</p>
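+
+<p>For example, one way to open a scratch database held entirely in
+memory would be:</p>
+
+<blockquote>
+<b>sqlite3 db1 :memory:</b>
+</blockquote>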
+
+<p>The <b>sqlite3</b> command also accepts an optional third argument called
+the "mode". This argument is a legacy from SQLite version 2 and is
+currently ignored.</p>
+
+<p>
+Once an SQLite database is open, it can be controlled using
+methods of the <i>dbcmd</i>. There are currently 22 methods
+defined:</p>
+
+<p>
+<ul>
+}
+foreach m [lsort {
+ authorizer
+ busy
+ cache
+ changes
+ close
+ collate
+ collation_needed
+ commit_hook
+ complete
+ copy
+ errorcode
+ eval
+ exists
+ function
+ last_insert_rowid
+ nullvalue
+ onecolumn
+ progress
+ timeout
+ total_changes
+ trace
+ transaction
+}] {
+ puts "<li><a href=\"#$m\">$m</a></li>"
+}
+puts {
+</ul>
+</p>
+
+<p>The use of each of these methods will be explained in the sequel, though
+not in the order shown above.</p>
+
+}
+
+##############################################################################
+METHOD eval {
+<p>
+The most useful <i>dbcmd</i> method is "eval". The eval method is used
+to execute SQL on the database. The syntax of the eval method looks
+like this:</p>
+
+<blockquote>
+<i>dbcmd</i> <b>eval</b> <i>sql</i>
+ ?<i>array-name </i>? ?<i>script</i>?
+</blockquote>
+
+<p>
+The job of the eval method is to execute the SQL statement or statements
+given in the second argument. For example, to create a new table in
+a database, you can do this:</p>
+
+<blockquote>
+<b>sqlite3 db1 ./testdb<br>
+db1 eval {CREATE TABLE t1(a int, b text)}</b>
+</blockquote>
+
+<p>The above code creates a new table named <b>t1</b> with columns
+<b>a</b> and <b>b</b>. What could be simpler?</p>
+
+<p>Query results are returned as a list of column values. If a
+query requests 2 columns and there are 3 rows matching the query,
+then the returned list will contain 6 elements. For example:</p>
+
+<blockquote>
+<b>db1 eval {INSERT INTO t1 VALUES(1,'hello')}<br>
+db1 eval {INSERT INTO t1 VALUES(2,'goodbye')}<br>
+db1 eval {INSERT INTO t1 VALUES(3,'howdy!')}<br>
+set x [db1 eval {SELECT * FROM t1 ORDER BY a}]</b>
+</blockquote>
+
+<p>The variable <b>$x</b> is set by the above code to</p>
+
+<blockquote>
+<b>1 hello 2 goodbye 3 howdy!</b>
+</blockquote>
+
+<p>You can also process the results of a query one row at a time
+by specifying the name of an array variable and a script following
+the SQL code. For each row of the query result, the values of all
+columns will be inserted into the array variable and the script will
+be executed. For instance:</p>
+
+<blockquote>
+<b>db1 eval {SELECT * FROM t1 ORDER BY a} values {<br>
+ parray values<br>
+ puts ""<br>
+}</b>
+</blockquote>
+
+<p>This last code will give the following output:</p>
+
+<blockquote><b>
+values(*) = a b<br>
+values(a) = 1<br>
+values(b) = hello<p>
+
+values(*) = a b<br>
+values(a) = 2<br>
+values(b) = goodbye<p>
+
+values(*) = a b<br>
+values(a) = 3<br>
+values(b) = howdy!</b>
+</blockquote>
+
+<p>
+For each column in a row of the result, the name of that column
+is used as an index into the array. The value of the column is stored
+in the corresponding array entry. The special array index * is
+used to store a list of column names in the order that they appear.
+</p>
+
+<p>
+If the array variable name is omitted or is the empty string, then the value of
+each column is stored in a variable with the same name as the column
+itself. For example:
+</p>
+
+<blockquote>
+<b>db1 eval {SELECT * FROM t1 ORDER BY a} {<br>
+ puts "a=$a b=$b"<br>
+}</b>
+</blockquote>
+
+<p>
+From this we get the following output
+</p>
+
+<blockquote><b>
+a=1 b=hello<br>
+a=2 b=goodbye<br>
+a=3 b=howdy!</b>
+</blockquote>
+
+<p>
+Tcl variable names can appear in the SQL statement of the second argument
+in any position where it is legal to put a string or number literal. The
+value of the variable is substituted for the variable name. If the
+variable does not exist, a NULL value is used. For example:
+</p>
+
+<blockquote><b>
+db1 eval {INSERT INTO t1 VALUES(5,$bigblob)}
+</b></blockquote>
+
+<p>
+Note that it is not necessary to quote the $bigblob value. That happens
+automatically. If $bigblob is a large string or binary object, this
+technique is not only easier to write, it is also much more efficient
+since it avoids making a copy of the content of $bigblob.
+</p>
+
+}
+
+##############################################################################
+METHOD close {
+
+<p>
+As its name suggests, the "close" method to an SQLite database just
+closes the database. This has the side-effect of deleting the
+<i>dbcmd</i> Tcl command. Here is an example of opening and then
+immediately closing a database:
+</p>
+
+<blockquote>
+<b>sqlite3 db1 ./testdb<br>
+db1 close</b>
+</blockquote>
+
+<p>
+If you delete the <i>dbcmd</i> directly, that has the same effect
+as invoking the "close" method. So the following code is equivalent
+to the previous:</p>
+
+<blockquote>
+<b>sqlite3 db1 ./testdb<br>
+rename db1 {}</b>
+</blockquote>
+}
+
+##############################################################################
+METHOD transaction {
+
+<p>
+The "transaction" method is used to execute a TCL script inside an SQLite
+database transaction. The transaction is committed when the script completes,
+or it rolls back if the script fails. If the transaction occurs within
+another transaction (even one that is started manually using BEGIN) it
+is a no-op.
+</p>
+
+<p>
+The transaction command can be used to group together several SQLite
+commands in a safe way. You can always start transactions manually using
+BEGIN, of
+course. But if an error occurs so that the COMMIT or ROLLBACK is never
+run, then the database will remain locked indefinitely. Also, BEGIN
+does not nest, so you have to make sure no other transactions are active
+before starting a new one. The "transaction" method takes care of
+all of these details automatically.
+</p>
+
+<p>
+The syntax looks like this:
+</p>
+
+<blockquote>
+<i>dbcmd</i> <b>transaction</b> <i>?transaction-type?</i>
+ <i>script</i>
+</blockquote>
+
+
+<p>
+The <i>transaction-type</i> can be one of <b>deferred</b>,
+<b>exclusive</b> or <b>immediate</b>. The default is deferred.
+</p>
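+
+<p>For example, the following sketch (reusing the <b>db1</b> connection and
+<b>t1</b> table from the "eval" examples above) inserts two rows inside a
+single transaction:</p>
+
+<blockquote>
+<b>db1 transaction {<br>
+  db1 eval {INSERT INTO t1 VALUES(4,'four')}<br>
+  db1 eval {INSERT INTO t1 VALUES(5,'five')}<br>
+}</b>
+</blockquote>
+
+<p>If either INSERT fails, the whole transaction is rolled back;
+otherwise it is committed when the script finishes.</p>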
+}
+
+##############################################################################
+METHOD cache {
+
+<p>
+The "eval" method described <a href="#eval">above</a> keeps a cache of
+<a href="capi3ref.html#sqlite3_prepare">prepared statements</a>
+for recently evaluated SQL commands.
+The "cache" method is used to control this cache.
+The first form of this command is:</p>
+
+<blockquote>
+<i>dbcmd</i> <b>cache size</b> <i>N</i>
+</blockquote>
+
+<p>This sets the maximum number of statements that can be cached.
+The upper limit is 100. The default is 10. If you set the cache size
+to 0, no caching is done.</p>
+
+<p>The second form of the command is this:</p>
+
+
+<blockquote>
+<i>dbcmd</i> <b>cache flush</b>
+</blockquote>
+
+<p>The cache-flush method
+<a href="capi3ref.html#sqlite3_finalize">finalizes</a>
+all prepared statements currently
+in the cache.</p>
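+
+<p>For example, one might enlarge the cache to 20 statements and later
+empty it again:</p>
+
+<blockquote>
+<b>db1 cache size 20<br>
+db1 cache flush</b>
+</blockquote>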
+
+}
+
+##############################################################################
+METHOD complete {
+
+<p>
+The "complete" method takes a string of supposed SQL as its only argument.
+It returns TRUE if the string is a complete statement of SQL and FALSE if
+there is more to be entered.</p>
+
+<p>The "complete" method is useful when building interactive applications
+in order to know when the user has finished entering a line of SQL code.
+This is really just an interface to the
+<a href="capi3ref.html#sqlite3_complete"><b>sqlite3_complete()</b></a> C
+function.</p>
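+
+<p>For example (the expected return values are shown as comments):</p>
+
+<blockquote>
+<b>db1 complete {SELECT * FROM t1}   ;# returns 0<br>
+db1 complete {SELECT * FROM t1;}  ;# returns 1</b>
+</blockquote>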
+}
+
+##############################################################################
+METHOD copy {
+
+<p>
+The "copy" method copies data from a file into a table.
+It returns the number of rows processed successfully from the file.
+The syntax of the copy method looks like this:</p>
+
+<blockquote>
+<i>dbcmd</i> <b>copy</b> <i>conflict-algorithm</i>
+ <i>table-name </i> <i>file-name </i>
+ ?<i>column-separator </i>?
+ ?<i>null-indicator</i>?
+</blockquote>
+
+<p>Conflict-algorithm must be one of the SQLite conflict algorithms for
+the INSERT statement: <i>rollback</i>, <i>abort</i>,
+<i>fail</i>, <i>ignore</i>, or <i>replace</i>. See the SQLite Language
+section for <a href="lang.html#conflict">ON CONFLICT</a> for more information.
+The conflict-algorithm must be specified in lower case.
+</p>
+
+<p>Table-name must already exist as a table. File-name must exist, and
+each row must contain the same number of columns as defined in the table.
+If a line in the file contains more or fewer columns than the table defines,
+the copy method rolls back any inserts and returns an error.</p>
+
+<p>Column-separator is an optional column separator string. The default is
+the ASCII tab character \t. </p>
+
+<p>Null-indicator is an optional string that indicates a column value is null.
+The default is an empty string. Note that column-separator and
+null-indicator are optional positional arguments; if null-indicator
+is specified, a column-separator argument must be specified and
+precede the null-indicator argument.</p>
+
+<p>The copy method implements similar functionality to the <b>.import</b>
+SQLite shell command.
+The SQLite 2.x <a href="lang.html#copy"><b>COPY</b></a> statement
+(using the PostgreSQL COPY file format)
+can be implemented with this method as:</p>
+
+<blockquote>
+dbcmd copy $conflictalgo
+ $tablename $filename
+ \t
+ \\N
+</blockquote>
+
+}
+
+##############################################################################
+METHOD timeout {
+
+<p>The "timeout" method is used to control how long the SQLite library
+will wait for locks to clear before giving up on a database transaction.
+The default timeout is 0 milliseconds. (In other words, the default behavior
+is not to wait at all.)</p>
+
+<p>The SQLite database allows multiple simultaneous
+readers or a single writer but not both. If any process is writing to
+the database, no other process is allowed to read or write. If any process
+is reading the database, other processes are allowed to read but not write.
+The entire database shares a single lock.</p>
+
+<p>When SQLite tries to open a database and finds that it is locked, it
+can optionally delay for a short while and try to open the file again.
+This process repeats until the query times out and SQLite returns a
+failure. The timeout is adjustable. It is set to 0 by default so that
+if the database is locked, the SQL statement fails immediately. But you
+can use the "timeout" method to change the timeout value to a positive
+number. For example:</p>
+
+<blockquote><b>db1 timeout 2000</b></blockquote>
+
+<p>The argument to the timeout method is the maximum number of milliseconds
+to wait for the lock to clear. So in the example above, the maximum delay
+would be 2 seconds.</p>
+}
+
+##############################################################################
+METHOD busy {
+
+<p>The "busy" method, like "timeout", only comes into play when the
+database is locked. But the "busy" method gives the programmer much more
+control over what action to take. The "busy" method specifies a callback
+Tcl procedure that is invoked whenever SQLite tries to open a locked
+database. This callback can do whatever is desired. Presumably, the
+callback will do some other useful work for a short while (such as service
+GUI events) then return
+so that the lock can be tried again. The callback procedure should
+return "0" if it wants SQLite to try again to open the database and
+should return "1" if it wants SQLite to abandon the current operation.
+}
+
+##############################################################################
+METHOD exists {
+
+<p>The "exists" method is similar to "onecolumn" and "eval" in that
+it executes SQL statements. The difference is that the "exists" method
+always returns a boolean value which is TRUE if a query in the SQL
+statement it executes returns one or more rows and FALSE if the SQL
+returns an empty set.</p>
+
+<p>The "exists" method is often used to test for the existance of
+rows in a table. For example:</p>
+
+<blockquote><b>
+if {[db exists {SELECT 1 FROM table1 WHERE user=$user}]} {<br>
+ # Processing if $user exists<br>
+} else {<br>
+ # Processing if $user does not exist<br>
+}
+</b></blockquote>
+}
+
+
+##############################################################################
+METHOD last_insert_rowid {
+
+<p>The "last_insert_rowid" method returns an integer which is the ROWID
+of the most recently inserted database row.</p>
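+
+<p>For example:</p>
+
+<blockquote>
+<b>db1 eval {INSERT INTO t1 VALUES(6,'six')}<br>
+set rowid [db1 last_insert_rowid]</b>
+</blockquote>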
+}
+
+##############################################################################
+METHOD function {
+
+<p>The "function" method registers new SQL functions with the SQLite engine.
+The arguments are the name of the new SQL function and a TCL command that
+implements that function. Arguments to the function are appended to the
+TCL command before it is invoked.</p>
+
+<p>
+The following example creates a new SQL function named "hex" that converts
+its numeric argument into a hexadecimal encoded string:
+</p>
+
+<blockquote><b>
+db function hex {format 0x%X}
+</b></blockquote>
+
+}
+
+##############################################################################
+METHOD nullvalue {
+
+<p>
+The "nullvalue" method changes the representation for NULL returned
+as result of the "eval" method.</p>
+
+<blockquote><b>
+db1 nullvalue NULL
+</b></blockquote>
+
+<p>The "nullvalue" method is useful to differ between NULL and empty
+column values as Tcl lacks a NULL representation. The default
+representation for NULL values is an empty string.</p>
+}
+
+
+
+##############################################################################
+METHOD onecolumn {
+
+<p>The "onecolumn" method works like
+"<a href="#eval">eval</a>" in that it evaluates the
+SQL query statement given as its argument. The difference is that
+"onecolumn" returns a single element which is the first column of the
+first row of the query result.</p>
+
+<p>This is a convenience method. It saves the user from having to
+do a "<tt>[lindex ... 0]</tt>" on the results of an "eval"
+in order to extract a single column result.</p>
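+
+<p>For example, to fetch a single count:</p>
+
+<blockquote>
+<b>set n [db1 onecolumn {SELECT count(*) FROM t1}]</b>
+</blockquote>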
+}
+
+##############################################################################
+METHOD changes {
+
+<p>The "changes" method returns an integer which is the number of rows
+in the database that were inserted, deleted, and/or modified by the most
+recent "eval" method.</p>
+}
+
+##############################################################################
+METHOD total_changes {
+
+<p>The "total_changes" method returns an integer which is the number of rows
+in the database that were inserted, deleted, and/or modified since the
+current database connection was first opened.</p>
+}
+
+##############################################################################
+METHOD authorizer {
+
+<p>The "authorizer" method provides access to the
+<a href="capi3ref.html#sqlite3_set_authorizer">sqlite3_set_authorizer</a>
+C/C++ interface. The argument to authorizer is the name of a procedure that
+is called when SQL statements are being compiled in order to authorize
+certain operations. The callback procedure takes 5 arguments which describe
+the operation being coded. If the callback returns the text string
+"SQLITE_OK", then the operation is allowed. If it returns "SQLITE_IGNORE",
+then the operation is silently disabled. If the return is "SQLITE_DENY"
+then the compilation fails with an error.
+</p>
+
+<p>If the argument is an empty string then the authorizer is disabled.
+If the argument is omitted, then the current authorizer is returned.</p>
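+
+<p>For example, a sketch of an authorizer that refuses every DELETE
+(the operation-code string "SQLITE_DELETE" passed to the callback is
+assumed here; the callback procedure name is only illustrative):</p>
+
+<blockquote>
+<b>proc auth {op args} {<br>
+  # op identifies the operation being authorized<br>
+  if {$op=="SQLITE_DELETE"} {return SQLITE_DENY}<br>
+  return SQLITE_OK<br>
+}<br>
+db1 authorizer auth</b>
+</blockquote>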
+}
+
+##############################################################################
+METHOD progress {
+
+<p>This method registers a callback that is invoked periodically during
+query processing. There are two arguments: the number of SQLite virtual
+machine opcodes between invocations, and the TCL command to invoke.
+Setting the progress callback to an empty string disables it.</p>
+
+<p>The progress callback can be used to display the status of a lengthy
+query or to process GUI events during a lengthy query.</p>
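+
+<p>For example, to service pending Tcl/Tk events every 1000 opcodes
+(a sketch):</p>
+
+<blockquote>
+<b>db1 progress 1000 update</b>
+</blockquote>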
+}
+
+
+##############################################################################
+METHOD collate {
+
+<p>This method registers new text collating sequences. There are
+two arguments: the name of the collating sequence and the name of a
+TCL procedure that implements a comparison function for the collating
+sequence.
+</p>
+
+<p>For example, the following code implements a collating sequence called
+"NOCASE" that sorts in text order without regard to case:
+</p>
+
+<blockquote><b>
+proc nocase_compare {a b} {<br>
+ return [string compare [string tolower $a] [string tolower $b]]<br>
+}<br>
+db collate NOCASE nocase_compare<br>
+</b></blockquote>
+}
+
+##############################################################################
+METHOD collation_needed {
+
+<p>This method registers a callback routine that is invoked when the SQLite
+engine needs a particular collating sequence but does not have that
+collating sequence registered. The callback can register the collating
+sequence. The callback is invoked with a single parameter which is the
+name of the needed collating sequence.</p>
+}
+
+##############################################################################
+METHOD commit_hook {
+
+<p>This method registers a callback routine that is invoked just before
+SQLite tries to commit changes to a database. If the callback throws
+an exception or returns a non-zero result, then the transaction rolls back
+rather than committing.</p>
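+
+<p>For example, a hook that vetoes every commit might look like this
+(the procedure name is only illustrative):</p>
+
+<blockquote>
+<b>proc deny_commit {} {<br>
+  return 1      ;# non-zero means roll the transaction back<br>
+}<br>
+db1 commit_hook deny_commit</b>
+</blockquote>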
+}
+
+##############################################################################
+METHOD errorcode {
+
+<p>This method returns the numeric error code that resulted from the most
+recent SQLite operation.</p>
+}
+
+##############################################################################
+METHOD trace {
+
+<p>The "trace" method registers a callback that is invoked as each SQL
+statement is compiled. The text of the SQL is appended as a single string
+to the command before it is invoked. This can be used (for example) to
+keep a log of all SQL operations that an application performs.
+</p>
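+
+<p>For example, to echo every SQL statement to standard error
+(a sketch; the procedure name is only illustrative):</p>
+
+<blockquote>
+<b>proc log_sql {sql} {<br>
+  puts stderr "SQL: $sql"<br>
+}<br>
+db1 trace log_sql</b>
+</blockquote>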
+}
+
+
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/vdbe.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/vdbe.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,1988 @@
+#
+# Run this Tcl script to generate the vdbe.html file.
+#
+set rcsid {$Id: vdbe.tcl,v 1.14 2005/03/12 15:55:11 drh Exp $}
+source common.tcl
+header {The Virtual Database Engine of SQLite}
+puts {
+<h2>The Virtual Database Engine of SQLite</h2>
+
+<blockquote><b>
+This document describes the virtual machine used in SQLite version 2.8.0.
+The virtual machine in SQLite version 3.0 and 3.1 is very similar in
+concept but many of the opcodes have changed and the algorithms are
+somewhat different. Use this document as a rough guide to the idea
+behind the virtual machine in SQLite version 3, not as a reference on
+how the virtual machine works.
+</b></blockquote>
+}
+
+puts {
+<p>If you want to know how the SQLite library works internally,
+you need to begin with a solid understanding of the Virtual Database
+Engine or VDBE. The VDBE occurs right in the middle of the
+processing stream (see the <a href="arch.html">architecture diagram</a>)
+and so it seems to touch most parts of the library. Even
+parts of the code that do not directly interact with the VDBE
+are usually in a supporting role. The VDBE really is the heart of
+SQLite.</p>
+
+<p>This article is a brief introduction to how the VDBE
+works and in particular how the various VDBE instructions
+(documented <a href="opcode.html">here</a>) work together
+to do useful things with the database. The style is tutorial,
+beginning with simple tasks and working toward solving more
+complex problems. Along the way we will visit most
+submodules in the SQLite library. After completing this tutorial,
+you should have a pretty good understanding of how SQLite works
+and will be ready to begin studying the actual source code.</p>
+
+<h2>Preliminaries</h2>
+
+<p>The VDBE implements a virtual computer that runs a program in
+its virtual machine language. The goal of each program is to
+interrogate or change the database. Toward this end, the machine
+language that the VDBE implements is specifically designed to
+search, read, and modify databases.</p>
+
+<p>Each instruction of the VDBE language contains an opcode and
+three operands labeled P1, P2, and P3. Operand P1 is an arbitrary
+integer. P2 is a non-negative integer. P3 is a pointer to a data
+structure or null-terminated string, possibly null. Only a few VDBE
+instructions use all three operands. Many instructions use only
+one or two operands. A significant number of instructions use
+no operands at all but instead take their data and store their results
+on the execution stack. The details of what each instruction
+does and which operands it uses are described in the separate
+<a href="opcode.html">opcode description</a> document.</p>
+
+<p>A VDBE program begins
+execution on instruction 0 and continues with successive instructions
+until it either (1) encounters a fatal error, (2) executes a
+Halt instruction, or (3) advances the program counter past the
+last instruction of the program. When the VDBE completes execution,
+all open database cursors are closed, all memory is freed, and
+everything is popped from the stack.
+So there are never any worries about memory leaks or
+undeallocated resources.</p>
+
+<p>If you have done any assembly language programming or have
+worked with any kind of abstract machine before, all of these
+details should be familiar to you. So let's jump right in and
+start looking at some code.</p>
+
+<a name="insert1">
+<h2>Inserting Records Into The Database</h2>
+
+<p>We begin with a problem that can be solved using a VDBE program
+that is only a few instructions long. Suppose we have an SQL
+table that was created like this:</p>
+
+<blockquote><pre>
+CREATE TABLE examp(one text, two int);
+</pre></blockquote>
+
+<p>In words, we have a database table named "examp" that has two
+columns of data named "one" and "two". Now suppose we want to insert a single
+record into this table. Like this:</p>
+
+<blockquote><pre>
+INSERT INTO examp VALUES('Hello, World!',99);
+</pre></blockquote>
+
+<p>We can see the VDBE program that SQLite uses to implement this
+INSERT using the <b>sqlite</b> command-line utility. First start
+up <b>sqlite</b> on a new, empty database, then create the table.
+Next change the output format of <b>sqlite</b> to a form that
+is designed to work with VDBE program dumps by entering the
+".explain" command.
+Finally, enter the INSERT statement shown above, but precede the
+INSERT with the special keyword "EXPLAIN". The EXPLAIN keyword
+will cause <b>sqlite</b> to print the VDBE program rather than
+execute it. We have:</p>
+}
+proc Code {body} {
+ puts {<blockquote><tt>}
+ regsub -all {&} [string trim $body] {\&amp;} body
+ regsub -all {>} $body {\&gt;} body
+ regsub -all {<} $body {\&lt;} body
+ regsub -all {\(\(\(} $body {<b>} body
+ regsub -all {\)\)\)} $body {</b>} body
+ regsub -all { } $body {\&nbsp;} body
+ regsub -all \n $body <br>\n body
+ puts $body
+ puts {</tt></blockquote>}
+}
+
+Code {
+$ (((sqlite test_database_1)))
+sqlite> (((CREATE TABLE examp(one text, two int);)))
+sqlite> (((.explain)))
+sqlite> (((EXPLAIN INSERT INTO examp VALUES('Hello, World!',99);)))
+addr  opcode        p1     p2     p3
+----  ------------  -----  -----  -----------------------------------
+0     Transaction   0      0
+1     VerifyCookie  0      81
+2     Transaction   1      0
+3     Integer       0      0
+4     OpenWrite     0      3      examp
+5     NewRecno      0      0
+6     String        0      0      Hello, World!
+7     Integer       99     0      99
+8     MakeRecord    2      0
+9     PutIntKey     0      1
+10    Close         0      0
+11    Commit        0      0
+12    Halt          0      0
+}
+
+puts {<p>As you can see above, our simple insert statement is
+implemented in 12 instructions. The first 3 and last 2 instructions are
+a standard prologue and epilogue, so the real work is done in the middle
+7 instructions. There are no jumps, so the program executes once through
+from top to bottom. Let's now look at each instruction in detail.<p>
+}
+
+Code {
+0 Transaction 0 0
+1 VerifyCookie 0 81
+2 Transaction 1 0
+}
+puts {
+<p>The instruction <a href="opcode.html#Transaction">Transaction</a>
+begins a transaction. The transaction ends when a Commit or Rollback
+opcode is encountered. P1 is the index of the database file on which
+the transaction is started. Index 0 is the main database file. A write
+lock is obtained on the database file when a transaction is started.
+No other process can read or write the file while the transaction is
+underway. Starting a transaction also creates a rollback journal. A
+transaction must be started before any changes can be made to the
+database.</p>
+
+<p>The instruction <a href="opcode.html#VerifyCookie">VerifyCookie</a>
+checks cookie 0 (the database schema version) to make sure it is equal
+to P2 (the value obtained when the database schema was last read).
+P1 is the database number (0 for the main database). This is done to
+make sure the database schema hasn't been changed by another thread, in
+which case it has to be reread.</p>
+
+<p> The second <a href="opcode.html#Transaction">Transaction</a>
+instruction begins a transaction and starts a rollback journal for
+database 1, the database used for temporary tables.</p>
+}
+
+proc stack args {
+ puts "<blockquote><table border=2>"
+ foreach elem $args {
+ puts "<tr><td align=left>$elem</td></tr>"
+ }
+ puts "</table></blockquote>"
+}
+
+Code {
+3 Integer 0 0
+4 OpenWrite 0 3 examp
+}
+puts {
+<p> The instruction <a href="opcode.html#Integer">Integer</a> pushes
+the integer value P1 (0) onto the stack. Here 0 is the number of the
+database to use in the following OpenWrite instruction. If P3 is not
+NULL then it is a string representation of the same integer. Afterwards
+the stack looks like this:</p>
+}
+stack {(integer) 0}
+
+puts {
+<p> The instruction <a href="opcode.html#OpenWrite">OpenWrite</a> opens
+a new read/write cursor with handle P1 (0 in this case) on table "examp",
+whose root page is P2 (3, in this database file). Cursor handles can be
+any non-negative integer. But the VDBE allocates cursors in an array
+with the size of the array being one more than the largest cursor. So
+to conserve memory, it is best to use handles beginning with zero and
+working upward consecutively. Here P3 ("examp") is the name of the
+table being opened, but this is unused, and only generated to make the
+code easier to read. This instruction pops the database number to use
+(0, the main database) from the top of the stack, so afterwards the
+stack is empty again.</p>
+}
+
+Code {
+5 NewRecno 0 0
+}
+puts {
+<p> The instruction <a href="opcode.html#NewRecno">NewRecno</a> creates
+a new integer record number for the table pointed to by cursor P1. The
+record number is one not currently used as a key in the table. The new
+record number is pushed onto the stack. Afterwards the stack looks like
+this:</p>
+}
+stack {(integer) new record key}
+
+Code {
+6 String 0 0 Hello, World!
+}
+puts {
+<p> The instruction <a href="opcode.html#String">String</a> pushes its
+P3 operand onto the stack. Afterwards the stack looks like this:</p>
+}
+stack {(string) "Hello, World!"} \
+ {(integer) new record key}
+
+Code {
+7 Integer 99 0 99
+}
+puts {
+<p> The instruction <a href="opcode.html#Integer">Integer</a> pushes
+its P1 operand (99) onto the stack. Afterwards the stack looks like
+this:</p>
+}
+stack {(integer) 99} \
+ {(string) "Hello, World!"} \
+ {(integer) new record key}
+
+Code {
+8 MakeRecord 2 0
+}
+puts {
+<p> The instruction <a href="opcode.html#MakeRecord">MakeRecord</a> pops
+the top P1 elements off the stack (2 in this case) and converts them into
+the binary format used for storing records in a database file.
+(See the <a href="fileformat.html">file format</a> description for
+details.) The new record generated by the MakeRecord instruction is
+pushed back onto the stack. Afterwards the stack looks like this:</p>
+}
+stack {(record) "Hello, World!", 99} \
+ {(integer) new record key}
+
+Code {
+9 PutIntKey 0 1
+}
+puts {
+<p> The instruction <a href="opcode.html#PutIntKey">PutIntKey</a> uses
+the top 2 stack entries to write an entry into the table pointed to by
+cursor P1. A new entry is created if it doesn't already exist or the
+data for an existing entry is overwritten. The record data is the top
+stack entry, and the key is the next entry down. The stack is popped
+twice by this instruction. Because operand P2 is 1 the row change count
+is incremented and the rowid is stored for subsequent return by the
+sqlite_last_insert_rowid() function. If P2 is 0 the row change count is
+unmodified. This instruction is where the insert actually occurs.</p>
+}
+
+Code {
+10 Close 0 0
+}
+puts {
+<p> The instruction <a href="opcode.html#Close">Close</a> closes a
+cursor previously opened as P1 (0, the only open cursor). If P1 is not
+currently open, this instruction is a no-op.</p>
+}
+
+Code {
+11 Commit 0 0
+}
+puts {
+<p> The instruction <a href="opcode.html#Commit">Commit</a> causes all
+modifications to the database that have been made since the last
+Transaction to actually take effect. No additional modifications are
+allowed until another transaction is started. The Commit instruction
+deletes the journal file and releases the write lock on the database.
+A read lock continues to be held if there are still cursors open.</p>
+}
+
+Code {
+12 Halt 0 0
+}
+puts {
+<p> The instruction <a href="opcode.html#Halt">Halt</a> causes the VDBE
+engine to exit immediately. All open cursors, Lists, Sorts, etc are
+closed automatically. P1 is the result code returned by sqlite_exec().
+For a normal halt, this should be SQLITE_OK (0). For errors, it can be
+some other value. The operand P2 is only used when there is an error.
+There is an implied "Halt 0 0 0" instruction at the end of every
+program, which the VDBE appends when it prepares a program to run.</p>
+
+
+<a name="trace">
+<h2>Tracing VDBE Program Execution</h2>
+
+<p>If the SQLite library is compiled without the NDEBUG preprocessor
+macro, then the PRAGMA <a href="pragma.html#pragma_vdbe_trace">vdbe_trace
+</a> causes the VDBE to trace the execution of programs. Though this
+feature was originally intended for testing and debugging, it can also
+be useful in learning about how the VDBE operates.
+Use "<tt>PRAGMA vdbe_trace=ON;</tt>" to turn tracing on and
+"<tt>PRAGMA vdbe_trace=OFF</tt>" to turn tracing back off.
+Like this:</p>
+}
+
+Code {
+sqlite> (((PRAGMA vdbe_trace=ON;)))
+ 0 Halt 0 0
+sqlite> (((INSERT INTO examp VALUES('Hello, World!',99);)))
+ 0 Transaction 0 0
+ 1 VerifyCookie 0 81
+ 2 Transaction 1 0
+ 3 Integer 0 0
+Stack: i:0
+ 4 OpenWrite 0 3 examp
+ 5 NewRecno 0 0
+Stack: i:2
+ 6 String 0 0 Hello, World!
+Stack: t[Hello,.World!] i:2
+ 7 Integer 99 0 99
+Stack: si:99 t[Hello,.World!] i:2
+ 8 MakeRecord 2 0
+Stack: s[...Hello,.World!.99] i:2
+ 9 PutIntKey 0 1
+ 10 Close 0 0
+ 11 Commit 0 0
+ 12 Halt 0 0
+}
+
+puts {
+<p>With tracing mode on, the VDBE prints each instruction prior
+to executing it. After the instruction is executed, the top few
+entries in the stack are displayed. The stack display is omitted
+if the stack is empty.</p>
+
+<p>On the stack display, most entries are shown with a prefix
+that tells the datatype of that stack entry. Integers begin
+with "<tt>i:</tt>". Floating point values begin with "<tt>r:</tt>".
+(The "r" stands for "real-number".) Strings begin with either
+"<tt>s:</tt>", "<tt>t:</tt>", "<tt>e:</tt>" or "<tt>z:</tt>".
+The difference among the string prefixes is caused by how their
+memory is allocated. The z: strings are stored in memory obtained
+from <b>malloc()</b>. The t: strings are statically allocated.
+The e: strings are ephemeral. All other strings have the s: prefix.
+This doesn't make any difference to you,
+the observer, but it is vitally important to the VDBE since the
+z: strings need to be passed to <b>free()</b> when they are
+popped to avoid a memory leak. Note that only the first 10
+characters of string values are displayed and that binary
+values (such as the result of the MakeRecord instruction) are
+treated as strings. The only other datatype that can be stored
+on the VDBE stack is a NULL, which is displayed without a prefix
+as simply "<tt>NULL</tt>". If an integer has been placed on the
+stack as both an integer and a string, its prefix is "<tt>si:</tt>".</p>
+
+
+<a name="query1">
+<h2>Simple Queries</h2>
+
+<p>At this point, you should understand the basics of how the VDBE
+writes to a database. Now let's look at how it does queries.
+We will use the following simple SELECT statement as our example:</p>
+
+<blockquote><pre>
+SELECT * FROM examp;
+</pre></blockquote>
+
+<p>The VDBE program generated for this SQL statement is as follows:</p>
+}
+
+Code {
+sqlite> (((EXPLAIN SELECT * FROM examp;)))
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 ColumnName 0 0 one
+1 ColumnName 1 0 two
+2 Integer 0 0
+3 OpenRead 0 3 examp
+4 VerifyCookie 0 81
+5 Rewind 0 10
+6 Column 0 0
+7 Column 0 1
+8 Callback 2 0
+9 Next 0 6
+10 Close 0 0
+11 Halt 0 0
+}
+
+puts {
+<p>Before we begin looking at this program, let's briefly review
+how queries work in SQLite so that we will know what we are trying
+to accomplish. For each row in the result of a query,
+SQLite will invoke a callback function with the following
+prototype:</p>
+
+<blockquote><pre>
+int Callback(void *pUserData, int nColumn, char *azData[], char *azColumnName[]);
+</pre></blockquote>
+
+<p>The SQLite library supplies the VDBE with a pointer to the callback function
+and the <b>pUserData</b> pointer. (Both the callback and the user data were
+originally passed in as arguments to the <b>sqlite_exec()</b> API function.)
+The job of the VDBE is to
+come up with values for <b>nColumn</b>, <b>azData[]</b>,
+and <b>azColumnName[]</b>.
+<b>nColumn</b> is the number of columns in the results, of course.
+<b>azColumnName[]</b> is an array of strings where each string is the name
+of one of the result columns. <b>azData[]</b> is an array of strings holding
+the actual data.</p>
+}
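+
+puts {
+<p>As a minimal sketch (not part of the VDBE itself) of how an application
+supplies this callback, consider the fragment below. It assumes the
+SQLite 2.x C interface of the same era as this document
+(sqlite_open(), sqlite_exec() and sqlite_close()); the database file name
+and the query are made up for the example:</p>
+
+<blockquote><pre>
+#include &lt;stdio.h&gt;
+#include "sqlite.h"
+
+/* Invoked once per result row; prints each column as "name = value". */
+static int print_row(void *pUserData, int nColumn,
+                     char **azData, char **azColumnName){
+  int i;
+  (void)pUserData;                     /* unused in this example */
+  for(i=0; i&lt;nColumn; i++){
+    printf("%s = %s\n", azColumnName[i], azData[i] ? azData[i] : "NULL");
+  }
+  return 0;                            /* non-zero would abort the query */
+}
+
+int main(void){
+  char *zErrMsg = 0;
+  sqlite *db = sqlite_open("test.db", 0666, &amp;zErrMsg);
+  if( db==0 ) return 1;
+  sqlite_exec(db, "SELECT * FROM examp;", print_row, 0, &amp;zErrMsg);
+  sqlite_close(db);
+  return 0;
+}
+</pre></blockquote>
+}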
+
+Code {
+0 ColumnName 0 0 one
+1 ColumnName 1 0 two
+}
+puts {
+<p>The first two instructions in the VDBE program for our query are
+concerned with setting up values for <b>azColumnName[]</b>.
+The <a href="opcode.html#ColumnName">ColumnName</a> instructions tell
+the VDBE what values to fill in for each element of the <b>azColumnName[]</b>
+array. Every query will begin with one ColumnName instruction for each
+column in the result, and there will be a matching Column instruction for
+each one later in the query.
+</p>
+}
+
+Code {
+2 Integer 0 0
+3 OpenRead 0 3 examp
+4 VerifyCookie 0 81
+}
+puts {
+<p>Instructions 2 and 3 open a read cursor on the database table that is
+to be queried. This works the same as the OpenWrite instruction in the
+INSERT example except that the cursor is opened for reading this time
+instead of for writing. Instruction 4 verifies the database schema as
+in the INSERT example.</p>
+}
+
+Code {
+5 Rewind 0 10
+}
+puts {
+<p> The <a href="opcode.html#Rewind">Rewind</a> instruction initializes
+a loop that iterates over the "examp" table. It rewinds the cursor P1
+to the first entry in its table. This is required by the Column and
+Next instructions, which use the cursor to iterate through the table.
+If the table is empty, then jump to P2 (10), which is the instruction just
+past the loop. If the table is not empty, fall through to the following
+instruction at 6, which is the beginning of the loop body.</p>
+}
+
+Code {
+6 Column 0 0
+7 Column 0 1
+8 Callback 2 0
+}
+puts {
+<p> The instructions 6 through 8 form the body of the loop that will
+execute once for each record in the database file.
+
+The <a href="opcode.html#Column">Column</a> instructions at addresses 6
+and 7 each take the P2-th column from the P1-th cursor and push it onto
+the stack. In this example, the first Column instruction is pushing the
+value for the column "one" onto the stack and the second Column
+instruction is pushing the value for column "two".
+
+The <a href="opcode.html#Callback">Callback</a> instruction at address 8
+invokes the callback() function. The P1 operand to Callback becomes the
+value for <b>nColumn</b>. The Callback instruction pops P1 values from
+the stack and uses them to fill the <b>azData[]</b> array.</p>
+}
+
+Code {
+9 Next 0 6
+}
+puts {
+<p>The instruction at address 9 implements the branching part of the
+loop. Together with the Rewind at address 5 it forms the loop logic.
+This is a key concept that you should pay close attention to.
+The <a href="opcode.html#Next">Next</a> instruction advances the cursor
+P1 to the next record. If the cursor advance was successful, then jump
+immediately to P2 (6, the beginning of the loop body). If the cursor
+was at the end, then fall through to the following instruction, which
+ends the loop.</p>
+}
+
+Code {
+10 Close 0 0
+11 Halt 0 0
+}
+puts {
+<p>The Close instruction at the end of the program closes the
+cursor that points into the table "examp". It is not really necessary
+to call Close here since all cursors will be automatically closed
+by the VDBE when the program halts. But we needed an instruction
+for the Rewind to jump to so we might as well go ahead and have that
+instruction do something useful.
+The Halt instruction ends the VDBE program.</p>
+
+<p>Note that the program for this SELECT query didn't contain the
+Transaction and Commit instructions used in the INSERT example. Because
+the SELECT is a read operation that doesn't alter the database, it
+doesn't require a transaction.</p>
+}
+
+
+puts {
+<a name="query2">
+<h2>A Slightly More Complex Query</h2>
+
+<p>The key points of the previous example were the use of the Callback
+instruction to invoke the callback function, and the use of the Next
+instruction to implement a loop over all records of the database file.
+This example attempts to drive home those ideas by demonstrating a
+slightly more complex query that involves more columns of
+output, some of which are computed values, and a WHERE clause that
+limits which records actually make it to the callback function.
+Consider this query:</p>
+
+<blockquote><pre>
+SELECT one, two, one || two AS 'both'
+FROM examp
+WHERE one LIKE 'H%'
+</pre></blockquote>
+
+<p>This query is perhaps a bit contrived, but it does serve to
+illustrate our points. The result will have three columns with
+names "one", "two", and "both". The first two columns are direct
+copies of the two columns in the table and the third result
+column is a string formed by concatenating the first and
+second columns of the table.
+Finally, the
+WHERE clause says that we will only choose rows for the
+results where the "one" column begins with an "H".
+Here is what the VDBE program looks like for this query:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 ColumnName 0 0 one
+1 ColumnName 1 0 two
+2 ColumnName 2 0 both
+3 Integer 0 0
+4 OpenRead 0 3 examp
+5 VerifyCookie 0 81
+6 Rewind 0 18
+7 String 0 0 H%
+8 Column 0 0
+9 Function 2 0 ptr(0x7f1ac0)
+10 IfNot 1 17
+11 Column 0 0
+12 Column 0 1
+13 Column 0 0
+14 Column 0 1
+15 Concat 2 0
+16 Callback 3 0
+17 Next 0 7
+18 Close 0 0
+19 Halt 0 0
+}
+
+puts {
+<p>Except for the WHERE clause, the structure of the program for
+this example is very much like the prior example, just with an
+extra column. There are now 3 columns, instead of 2 as before,
+and there are three ColumnName instructions.
+A cursor is opened using the OpenRead instruction, just like in the
+prior example. The Rewind instruction at address 6 and the
+Next at address 17 form a loop over all records of the table.
+The Close instruction at the end is there to give the
+Rewind instruction something to jump to when it is done. All of
+this is just like in the first query demonstration.</p>
+
+<p>The Callback instruction in this example has to generate
+data for three result columns instead of two, but is otherwise
+the same as in the first query. When the Callback instruction
+is invoked, the left-most column of the result should be
+the lowest in the stack and the right-most result column should
+be the top of the stack. We can see the stack being set up
+this way at addresses 11 through 15. The Column instructions at
+11 and 12 push the values for the first two columns in the result.
+The two Column instructions at 13 and 14 pull in the values needed
+to compute the third result column and the Concat instruction at
+15 joins them together into a single entry on the stack.</p>
+
+<p>The only thing that is really new about the current example
+is the WHERE clause which is implemented by instructions at
+addresses 7 through 10. Instructions at address 7 and 8 push
+onto the stack the value of the "one" column from the table
+and the literal string "H%".
+The <a href="opcode.html#Function">Function</a> instruction at address 9
+pops these two values from the stack and pushes the result of the LIKE()
+function back onto the stack.
+The <a href="opcode.html#IfNot">IfNot</a> instruction pops the top stack
+value and causes an immediate jump forward to the Next instruction if that
+value was false (that is, if the "one" column does <em>not</em> match "H%").
+Taking this jump effectively skips the callback, which is the whole point
+of the WHERE clause. If the result
+of the comparison is true, the jump is not taken and control
+falls through to the Callback instruction below.</p>
+
+<p>Notice how the LIKE operator is implemented. It is a user-defined
+function in SQLite, so the address of its function definition is
+specified in P3. The operand P1 is the number of function arguments for
+it to take from the stack. In this case the LIKE() function takes 2
+arguments. The arguments are taken off the stack in reverse order
+(right-to-left), so the pattern to match is the top stack element, and
+the next element is the data to compare. The return value is pushed
+onto the stack.</p>
+
+
+<a name="pattern1">
+<h2>A Template For SELECT Programs</h2>
+
+<p>The first two query examples illustrate a kind of template that
+every SELECT program will follow. Basically, we have:</p>
+
+<p>
+<ol>
+<li>Initialize the <b>azColumnName[]</b> array for the callback.</li>
+<li>Open a cursor into the table to be queried.</li>
+<li>For each record in the table, do:
+ <ol type="a">
+ <li>If the WHERE clause evaluates to FALSE, then skip the steps that
+ follow and continue to the next record.</li>
+ <li>Compute all columns for the current row of the result.</li>
+ <li>Invoke the callback function for the current row of the result.</li>
+ </ol>
+<li>Close the cursor.</li>
+</ol>
+</p>
+
+<p>This template will be expanded considerably as we consider
+additional complications such as joins, compound selects, using
+indices to speed the search, sorting, and aggregate functions
+with and without GROUP BY and HAVING clauses.
+But the same basic ideas will continue to apply.</p>
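+
+<p>As an analogy only (this is not the actual VDBE implementation, and the
+table contents are made up), the same template can be sketched in ordinary
+C over a small in-memory table:</p>
+
+<blockquote><pre>
+/* Conceptual sketch of the SELECT template over an in-memory table.
+** The real VDBE walks btree cursors instead of a C array. */
+#include &lt;stdio.h&gt;
+
+static struct { const char *one; int two; } aExamp[] = {
+  { "Hello, World!", 99 }, { "Goodbye", 42 }
+};
+
+int main(void){
+  const char *azColumnName[] = { "one", "two" };  /* step 1: column names  */
+  int i;
+  for(i=0; i&lt;2; i++){                             /* step 3: each record   */
+    if( aExamp[i].one[0]!='H' ) continue;         /* 3a: WHERE clause test */
+    /* 3b: compute the result columns; 3c: invoke the callback */
+    printf("%s=%s %s=%d\n", azColumnName[0], aExamp[i].one,
+                            azColumnName[1], aExamp[i].two);
+  }
+  return 0;
+}
+</pre></blockquote>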
+
+<h2>UPDATE And DELETE Statements</h2>
+
+<p>The UPDATE and DELETE statements are coded using a template
+that is very similar to the SELECT statement template. The main
+difference, of course, is that the end action is to modify the
+database rather than invoke a callback function. Because it modifies
+the database it will also use transactions. Let's begin
+by looking at a DELETE statement:</p>
+
+<blockquote><pre>
+DELETE FROM examp WHERE two<50;
+</pre></blockquote>
+
+<p>This DELETE statement will remove every record from the "examp"
+table where the "two" column is less than 50.
+The code generated to do this is as follows:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 Transaction 1 0
+1 Transaction 0 0
+2 VerifyCookie 0 178
+3 Integer 0 0
+4 OpenRead 0 3 examp
+5 Rewind 0 12
+6 Column 0 1
+7 Integer 50 0 50
+8 Ge 1 11
+9 Recno 0 0
+10 ListWrite 0 0
+11 Next 0 6
+12 Close 0 0
+13 ListRewind 0 0
+14 Integer 0 0
+15 OpenWrite 0 3
+16 ListRead 0 20
+17 NotExists 0 19
+18 Delete 0 1
+19 Goto 0 16
+20 ListReset 0 0
+21 Close 0 0
+22 Commit 0 0
+23 Halt 0 0
+}
+
+puts {
+<p>Here is what the program must do. First it has to locate all of
+the records in the table "examp" that are to be deleted. This is
+done using a loop very much like the loop used in the SELECT examples
+above. Once all records have been located, then we can go back through
+and delete them one by one. Note that we cannot delete each record
+as soon as we find it. We have to locate all records first, then
+go back and delete them. This is because the SQLite database
+backend might change the scan order after a delete operation.
+And if the scan
+order changes in the middle of the scan, some records might be
+visited more than once and other records might not be visited at all.</p>
+
+<p>So the implementation of DELETE is really in two loops. The first loop
+(instructions 5 through 11) locates the records that are to be deleted
+and saves their keys onto a temporary list, and the second loop
+(instructions 16 through 19) uses the key list to delete the records one
+by one. </p>
+}
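+
+puts {
+<p>The two-loop structure can be sketched in ordinary C. This is only an
+analogy over a made-up in-memory table, not the actual VDBE code:</p>
+
+<blockquote><pre>
+/* Loop 1 records the keys of the doomed rows; loop 2 deletes them.
+** The scan itself is therefore never disturbed by the deletions. */
+#include &lt;stdio.h&gt;
+
+static struct { int key; int two; int deleted; } aExamp[] = {
+  { 1, 99, 0 }, { 2, 42, 0 }, { 3, 7, 0 }
+};
+
+int main(void){
+  int aKeyList[3];                        /* the temporary key list       */
+  int i, j, nKey = 0;
+  for(i=0; i&lt;3; i++){                     /* loop 1: Recno + ListWrite    */
+    if( aExamp[i].two&lt;50 ) aKeyList[nKey++] = aExamp[i].key;
+  }
+  for(i=0; i&lt;nKey; i++){                  /* loop 2: ListRead + Delete    */
+    for(j=0; j&lt;3; j++){
+      if( aExamp[j].key==aKeyList[i] ) aExamp[j].deleted = 1;
+    }
+  }
+  for(i=0; i&lt;3; i++){                     /* show what survived           */
+    if( !aExamp[i].deleted ) printf("%d|%d\n", aExamp[i].key, aExamp[i].two);
+  }
+  return 0;
+}
+</pre></blockquote>
+}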
+
+
+Code {
+0 Transaction 1 0
+1 Transaction 0 0
+2 VerifyCookie 0 178
+3 Integer 0 0
+4 OpenRead 0 3 examp
+}
+puts {
+<p>Instructions 0 through 4 are as in the INSERT example. They start
+transactions for the main and temporary databases, verify the database
+schema for the main database, and open a read cursor on the table
+"examp". Notice that the cursor is opened for reading, not writing. At
+this stage of the program we are only going to be scanning the table,
+not changing it. We will reopen the same table for writing later, at
+instruction 15.</p>
+}
+
+Code {
+5 Rewind 0 12
+}
+puts {
+<p>As in the SELECT example, the <a href="opcode.html#Rewind">Rewind</a>
+instruction rewinds the cursor to the beginning of the table, readying
+it for use in the loop body.</p>
+}
+
+Code {
+6 Column 0 1
+7 Integer 50 0 50
+8 Ge 1 11
+}
+puts {
+<p>The WHERE clause is implemented by instructions 6 through 8.
+The job of the WHERE clause is to skip the ListWrite if the WHERE
+condition is false. To this end, it jumps ahead to the Next instruction
+if the "two" column (extracted by the Column instruction) is
+greater than or equal to 50.</p>
+
+<p>As before, the Column instruction uses cursor P1 and pushes the data
+record in column P2 (1, column "two") onto the stack. The Integer
+instruction pushes the value 50 onto the top of the stack. After these
+two instructions the stack looks like:</p>
+}
+stack {(integer) 50} \
+ {(record) current record for column "two" }
+
+puts {
+<p>The <a href="opcode.html#Ge">Ge</a> operator compares the top two
+elements on the stack, pops them, and then branches based on the result
+of the comparison. If the second element is >= the top element, then
+jump to address P2 (the Next instruction at the end of the loop).
+Because P1 is true, if either operand is NULL (and thus the result is
+NULL) then take the jump. If we don't jump, just advance to the next
+instruction.</p>
+}
+
+Code {
+9 Recno 0 0
+10 ListWrite 0 0
+}
+puts {
+<p>The <a href="opcode.html#Recno">Recno</a> instruction pushes onto the
+stack an integer which is the first 4 bytes of the key to the current
+entry in a sequential scan of the table pointed to by cursor P1.
+The <a href="opcode.html#ListWrite">ListWrite</a> instruction writes the
+integer on the top of the stack into a temporary storage list and pops
+the top element. This is the important work of this loop, to store the
+keys of the records to be deleted so we can delete them in the second
+loop. After this ListWrite instruction the stack is empty again.</p>
+}
+
+Code {
+11 Next 0 6
+12 Close 0 0
+}
+puts {
+<p> The Next instruction advances cursor P1 to point to the next
+element in its table, and if the advance was successful
+branches to P2 (6, the beginning of the loop body). The Close
+instruction closes cursor P1. It doesn't affect the temporary storage
+list because it isn't associated with cursor P1; it is instead a global
+working list (which can be saved with ListPush).</p>
+}
+
+Code {
+13 ListRewind 0 0
+}
+puts {
+<p> The <a href="opcode.html#ListRewind">ListRewind</a> instruction
+rewinds the temporary storage list to the beginning. This prepares it
+for use in the second loop.</p>
+}
+
+Code {
+14 Integer 0 0
+15 OpenWrite 0 3
+}
+puts {
+<p> As in the INSERT example, we push the database number P1 (0, the main
+database) onto the stack and use OpenWrite to open the cursor P1 on table
+P2 (base page 3, "examp") for modification.</p>
+}
+
+Code {
+16 ListRead 0 20
+17 NotExists 0 19
+18 Delete 0 1
+19 Goto 0 16
+}
+puts {
+<p>This loop does the actual deleting. It is organized differently from
+the first loop above. The ListRead instruction plays the role
+that Next did in the first loop, but because ListRead jumps to P2 on
+failure, and Next jumps on success, we put it at the start of the loop
+instead of the end. This means that we have to put a Goto at the end of
+the loop to jump back to the loop test at the beginning. So this
+loop has the form of a C while(){...} loop, while the first loop
+had the form of a do{...}while() loop. The Delete instruction
+fills the role that the callback function did in the preceding examples.
+</p>
+<p>The <a href="opcode.html#ListRead">ListRead</a> instruction reads an
+element from the temporary storage list and pushes it onto the stack.
+If this was successful, it continues to the next instruction. If this
+fails because the list is empty, it branches to P2, which is the
+instruction just after the loop. Afterwards the stack looks like:</p>
+}
+stack {(integer) key for current record}
+
+puts {
+<p>Notice the similarity between the ListRead and Next instructions.
+Both operations work according to this rule:
+</p>
+<blockquote>
+Push the next "thing" onto the stack and fall through OR jump to P2,
+depending on whether or not there is a next "thing" to push.
+</blockquote>
+<p>One difference between Next and ListRead is their idea of a "thing".
+The "things" for the Next instruction are records in a database file.
+"Things" for ListRead are integer keys in a list. Another difference
+is whether to jump or fall through if there is no next "thing". In this
+case, Next falls through, and ListRead jumps. Later on, we will see
+other looping instructions (NextIdx and SortNext) that operate using the
+same principle.</p>
+
+<p>The <a href="opcode.html#NotExists">NotExists</a> instruction pops
+the top stack element and uses it as an integer key. If a record with
+that key does not exist in table P1, then jump to P2. If a record does
+exist, then fall thru to the next instruction. In this case P2 takes
+us to the Goto at the end of the loop, which jumps back to the ListRead
+at the beginning. This could have been coded to have P2 be 16, the
+ListRead at the start of the loop, but the SQLite parser which generated
+this code didn't make that optimization.</p>
+<p>The <a href="opcode.html#Delete">Delete</a> does the work of this
+loop; it pops an integer key off the stack (placed there by the
+preceding ListRead) and deletes the record of cursor P1 that has that key.
+Because P2 is true, the row change counter is incremented.</p>
+<p>The <a href="opcode.html#Goto">Goto</a> jumps back to the beginning
+of the loop. This is the end of the loop.</p>
+}
+
+Code {
+20 ListReset 0 0
+21 Close 0 0
+22 Commit 0 0
+23 Halt 0 0
+}
+puts {
+<p>This block of instructions cleans up the VDBE program. Three of these
+instructions aren't really required, but are generated by the SQLite
+parser from its code templates, which are designed to handle more
+complicated cases.</p>
+<p>The <a href="opcode.html#ListReset">ListReset</a> instruction empties
+the temporary storage list. This list is emptied automatically when the
+VDBE program terminates, so it isn't necessary in this case. The Close
+instruction closes the cursor P1. Again, this is done by the VDBE
+engine when it is finished running this program. The Commit ends the
+current transaction successfully, and causes all changes that occurred
+in this transaction to be saved to the database. The final Halt is also
+unnecessary, since it is added to every VDBE program when it is
+prepared to run.</p>
+
+
+<p>UPDATE statements work very much like DELETE statements except
+that instead of deleting the record they replace it with a new one.
+Consider this example:
+</p>
+
+<blockquote><pre>
+UPDATE examp SET one= '(' || one || ')' WHERE two < 50;
+</pre></blockquote>
+
+<p>Instead of deleting records where the "two" column is less than
+50, this statement just puts the "one" column in parentheses.
+The VDBE program to implement this statement follows:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 Transaction 1 0
+1 Transaction 0 0
+2 VerifyCookie 0 178
+3 Integer 0 0
+4 OpenRead 0 3 examp
+5 Rewind 0 12
+6 Column 0 1
+7 Integer 50 0 50
+8 Ge 1 11
+9 Recno 0 0
+10 ListWrite 0 0
+11 Next 0 6
+12 Close 0 0
+13 Integer 0 0
+14 OpenWrite 0 3
+15 ListRewind 0 0
+16 ListRead 0 28
+17 Dup 0 0
+18 NotExists 0 16
+19 String 0 0 (
+20 Column 0 0
+21 Concat 2 0
+22 String 0 0 )
+23 Concat 2 0
+24 Column 0 1
+25 MakeRecord 2 0
+26 PutIntKey 0 1
+27 Goto 0 16
+28 ListReset 0 0
+29 Close 0 0
+30 Commit 0 0
+31 Halt 0 0
+}
+
+puts {
+<p>This program is essentially the same as the DELETE program except
+that the body of the second loop has been replaced by a sequence of
+instructions (at addresses 17 through 26) that update the record rather
+than delete it. Most of this instruction sequence should already be
+familiar to you, but there are a couple of minor twists so we will go
+over it briefly. Also note that the order of some of the instructions
+before and after the 2nd loop has changed. This is just the way the
+SQLite parser chose to output the code using a different template.</p>
+
+<p>As we enter the interior of the second loop (at instruction 17)
+the stack contains a single integer which is the key of the
+record we want to modify. We are going to need to use this
+key twice: once to fetch the old value of the record and
+a second time to write back the revised record. So the first instruction
+is a Dup to make a duplicate of the key on the top of the stack. The
+Dup instruction will duplicate any element of the stack, not just the top
+element. You specify which element to duplicate using the
+P1 operand. When P1 is 0, the top of the stack is duplicated.
+When P1 is 1, the next element down on the stack is duplicated.
+And so forth.</p>
+
+<p>After duplicating the key, the next instruction, NotExists,
+pops the stack once and uses the value popped as a key to
+check the existence of a record in the database file. If there is no record
+for this key, it jumps back to the ListRead to get another key.</p>
+
+<p>Instructions 19 through 25 construct a new database record
+that will be used to replace the existing record. This is
+the same kind of code that we saw
+in the description of INSERT and will not be described further.
+After instruction 25 executes, the stack looks like this:</p>
+}
+
+stack {(record) new data record} {(integer) key}
+
+puts {
+<p>The PutIntKey instruction (also described
+during the discussion about INSERT) writes an entry into the
+database file whose data is the top of the stack and whose key
+is the next on the stack, and then pops the stack twice. The
+PutIntKey instruction will overwrite the data of an existing record
+with the same key, which is what we want here. Overwriting was not
+an issue with INSERT because with INSERT the key was generated
+by the NewRecno instruction which is guaranteed to provide a key
+that has not been used before.</p>
+}
+
+if 0 {<p>(By the way, since keys must
+all be unique and each key is a 32-bit integer, a single
+SQLite database table can have no more than 2<sup>32</sup>
+rows. Actually, the Key instruction starts to become
+very inefficient as you approach this upper bound, so it
+is best to keep the number of entries below 2<sup>31</sup>
+or so. Surely a couple billion records will be enough for
+most applications!)</p>
+}
+
+puts {
+<h2>CREATE and DROP</h2>
+
+<p>Using CREATE or DROP to create or destroy a table or index is
+really the same as doing an INSERT or DELETE from the special
+"sqlite_master" table, at least from the point of view of the VDBE.
+The sqlite_master table is a special table that is automatically
+created for every SQLite database. It looks like this:</p>
+
+<blockquote><pre>
+CREATE TABLE sqlite_master (
+ type TEXT, -- either "table" or "index"
+ name TEXT, -- name of this table or index
+ tbl_name TEXT, -- for indices: name of associated table
+ sql TEXT -- SQL text of the original CREATE statement
+)
+</pre></blockquote>
+
+<p>Every table (except the "sqlite_master" table itself)
+and every named index in an SQLite database has an entry
+in the sqlite_master table. You can query this table using
+a SELECT statement just like any other table. But you are
+not allowed to directly change the table using UPDATE, INSERT,
+or DELETE. Changes to sqlite_master have to occur using
+the CREATE and DROP commands because SQLite also has to update
+some of its internal data structures when tables and indices
+are added or destroyed.</p>
+
+<p>But from the point of view of the VDBE, a CREATE works
+pretty much like an INSERT and a DROP works like a DELETE.
+When the SQLite library opens an existing database,
+the first thing it does is a SELECT to read the "sql"
+columns from all entries of the sqlite_master table.
+The "sql" column contains the complete SQL text of the
+CREATE statement that originally generated the index or
+table. This text is fed back into the SQLite parser
+and used to reconstruct the
+internal data structures describing the index or table.</p>
+
+<h2>Using Indexes To Speed Searching</h2>
+
+<p>In the example queries above, every row of the table being
+queried must be loaded off of the disk and examined, even if only
+a small percentage of the rows end up in the result. This can
+take a long time on a big table. To speed things up, SQLite
+can use an index.</p>
+
+<p>An SQLite file associates a key with some data. For an SQLite
+table, the database file is set up so that the key is an integer
+and the data is the information for one row of the table.
+Indices in SQLite reverse this arrangement. The index key
+is (some of) the information being stored and the index data
+is an integer.
+To access a table row that has some particular
+content, we first look up the content in the index table to find
+its integer index, then we use that integer to look up the
+complete record in the table.</p>
+
+<p>Note that SQLite uses b-trees, which are a sorted data structure,
+so indices can be used when the WHERE clause of the SELECT statement
+contains tests for equality or inequality. Queries like the following
+can use an index if it is available:</p>
+
+<blockquote><pre>
+SELECT * FROM examp WHERE two==50;
+SELECT * FROM examp WHERE two<50;
+SELECT * FROM examp WHERE two IN (50, 100);
+</pre></blockquote>
+
+<p>If there exists an index that maps the "two" column of the "examp"
+table into integers, then SQLite will use that index to find the integer
+keys of all rows in examp that have a value of 50 for column two, or
+all rows that are less than 50, etc.
+But the following queries cannot use the index:</p>
+
+<blockquote><pre>
+SELECT * FROM examp WHERE two%50 == 10;
+SELECT * FROM examp WHERE two&127 == 3;
+</pre></blockquote>
+
+<p>Note that the SQLite parser will not always generate code to use an
+index, even if it is possible to do so. The following queries will not
+currently use the index:</p>
+
+<blockquote><pre>
+SELECT * FROM examp WHERE two+10 == 50;
+SELECT * FROM examp WHERE two==50 OR two==100;
+</pre></blockquote>
+
+<p>To understand better how indices work, let's first look at how
+they are created. Let's go ahead and put an index on the "two"
+column of the "examp" table. We have:</p>
+
+<blockquote><pre>
+CREATE INDEX examp_idx1 ON examp(two);
+</pre></blockquote>
+
+<p>The VDBE code generated by the above statement looks like the
+following:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 Transaction 1 0
+1 Transaction 0 0
+2 VerifyCookie 0 178
+3 Integer 0 0
+4 OpenWrite 0 2
+5 NewRecno 0 0
+6 String 0 0 index
+7 String 0 0 examp_idx1
+8 String 0 0 examp
+9 CreateIndex 0 0 ptr(0x791380)
+10 Dup 0 0
+11 Integer 0 0
+12 OpenWrite 1 0
+13 String 0 0 CREATE INDEX examp_idx1 ON examp(tw
+14 MakeRecord 5 0
+15 PutIntKey 0 0
+16 Integer 0 0
+17 OpenRead 2 3 examp
+18 Rewind 2 24
+19 Recno 2 0
+20 Column 2 1
+21 MakeIdxKey 1 0 n
+22 IdxPut 1 0 indexed columns are not unique
+23 Next 2 19
+24 Close 2 0
+25 Close 1 0
+26 Integer 333 0
+27 SetCookie 0 0
+28 Close 0 0
+29 Commit 0 0
+30 Halt 0 0
+}
+
+puts {
+<p>Remember that every table (except sqlite_master) and every named
+index has an entry in the sqlite_master table. Since we are creating
+a new index, we have to add a new entry to sqlite_master. This is
+handled by instructions 3 through 15. Adding an entry to sqlite_master
+works just like any other INSERT statement so we will not say any more
+about it here. In this example, we want to focus on populating the
+new index with valid data, which happens on instructions 16 through
+23.</p>
+}
+
+Code {
+16 Integer 0 0
+17 OpenRead 2 3 examp
+}
+puts {
+<p>The first thing that happens is that we open the table being
+indexed for reading. In order to construct an index for a table,
+we have to know what is in that table. The index itself has already been
+opened for writing on cursor 1 by instructions 10 through 12.</p>
+}
+
+Code {
+18 Rewind 2 24
+19 Recno 2 0
+20 Column 2 1
+21 MakeIdxKey 1 0 n
+22 IdxPut 1 0 indexed columns are not unique
+23 Next 2 19
+}
+puts {
+<p>Instructions 18 through 23 implement a loop over every row of the
+table being indexed. For each table row, we first extract the integer
+key for that row using Recno in instruction 19, then get the value of
+the "two" column using Column in instruction 20.
+The <a href="opcode.html#MakeIdxKey">MakeIdxKey</a> instruction at 21
+converts data from the "two" column (which is on the top of the stack)
+into a valid index key. For an index on a single column, this is
+basically a no-op. But if the P1 operand to MakeIdxKey had been
+greater than one, multiple entries would have been popped from the stack
+and converted into a single index key.
+The <a href="opcode.html#IdxPut">IdxPut</a> instruction at 22 is what
+actually creates the index entry. IdxPut pops two elements from the
+stack. The top of the stack is used as a key to fetch an entry from the
+index table. Then the integer which was second on the stack is added to the
+set of integers for that index and the new record is written back to the
+database file. Note
+that the same index entry can store multiple integers if there
+are two or more table entries with the same value for the two
+column.
+</p>
+
+<p>Now let's look at how this index will be used. Consider the
+following query:</p>
+
+<blockquote><pre>
+SELECT * FROM examp WHERE two==50;
+</pre></blockquote>
+
+<p>SQLite generates the following VDBE code to handle this query:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 ColumnName 0 0 one
+1 ColumnName 1 0 two
+2 Integer 0 0
+3 OpenRead 0 3 examp
+4 VerifyCookie 0 256
+5 Integer 0 0
+6 OpenRead 1 4 examp_idx1
+7 Integer 50 0 50
+8 MakeKey 1 0 n
+9 MemStore 0 0
+10 MoveTo 1 19
+11 MemLoad 0 0
+12 IdxGT 1 19
+13 IdxRecno 1 0
+14 MoveTo 0 0
+15 Column 0 0
+16 Column 0 1
+17 Callback 2 0
+18 Next 1 11
+19 Close 0 0
+20 Close 1 0
+21 Halt 0 0
+}
+
+puts {
+<p>The SELECT begins in a familiar fashion. First the column
+names are initialized and the table being queried is opened.
+Things become different beginning with instructions 5 and 6 where
+the index file is also opened. Instructions 7 and 8 make
+a key with the value of 50.
+The <a href="opcode.html#MemStore">MemStore</a> instruction at 9 stores
+the index key in VDBE memory location 0. The VDBE memory is used to
+avoid having to fetch a value from deep in the stack, which can be done,
+but makes the program harder to generate. The following instruction
+<a href="opcode.html#MoveTo">MoveTo</a> at address 10 pops the key off
+the stack and moves the index cursor to the first row of the index with
+that key. This initializes the cursor for use in the following loop.</p>
+
+<p>Instructions 11 through 18 implement a loop over all index records
+with the key that was fetched by instruction 8. All of the index
+records with this key will be contiguous in the index table, so we walk
+through them and fetch the corresponding table key from the index.
+This table key is then used to move the cursor to that row in the table.
+The rest of the loop is the same as the loop for the non-indexed SELECT
+query.</p>
+
+<p>The loop begins with the <a href="opcode.html#MemLoad">MemLoad</a>
+instruction at 11 which pushes a copy of the index key back onto the
+stack. The instruction <a href="opcode.html#IdxGT">IdxGT</a> at 12
+compares the key to the key in the current index record pointed to by
+cursor P1. If the index key at the current cursor location is greater
+than the key we are looking for, then jump out of the loop.</p>
+
+<p>The instruction <a href="opcode.html#IdxRecno">IdxRecno</a> at 13
+pushes onto the stack the table record number from the index. The
+following MoveTo pops it and moves the table cursor to that row. The
+next 3 instructions select the column data the same way as in the
+non-indexed case. The Column instructions fetch the column data and the
+callback function is invoked. The final Next instruction advances the
+index cursor, not the table cursor, to the next row, and then branches
+back to the start of the loop if there are any index records left.</p>
+
+<p>Since the index is used to look up values in the table,
+it is important that the index and table be kept consistent.
+Now that there is an index on the examp table, we will have
+to update that index whenever data is inserted, deleted, or
+changed in the examp table. Remember the first example above
+where we were able to insert a new row into the "examp" table using
+13 VDBE instructions. Now that this table is indexed, 20
+instructions are required. The SQL statement is this:</p>
+
+<blockquote><pre>
+INSERT INTO examp VALUES('Hello, World!',99);
+</pre></blockquote>
+
+<p>And the generated code looks like this:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 Transaction 1 0
+1 Transaction 0 0
+2 VerifyCookie 0 256
+3 Integer 0 0
+4 OpenWrite 0 3 examp
+5 Integer 0 0
+6 OpenWrite 1 4 examp_idx1
+7 NewRecno 0 0
+8 String 0 0 Hello, World!
+9 Integer 99 0 99
+10 Dup 2 1
+11 Dup 1 1
+12 MakeIdxKey 1 0 n
+13 IdxPut 1 0
+14 MakeRecord 2 0
+15 PutIntKey 0 1
+16 Close 0 0
+17 Close 1 0
+18 Commit 0 0
+19 Halt 0 0
+}
+
+puts {
+<p>At this point, you should understand the VDBE well enough to
+figure out on your own how the above program works. So we will
+not discuss it further in this text.</p>
+
+<h2>Joins</h2>
+
+<p>In a join, two or more tables are combined to generate a single
+result. The result table consists of every possible combination
+of rows from the tables being joined. The easiest and most natural
+way to implement this is with nested loops.</p>
+
+<p>Recall the query template discussed above where there was a
+single loop that searched through every record of the table.
+In a join we have basically the same thing except that there
+are nested loops. For example, to join two tables, the query
+template might look something like this:</p>
+
+<p>
+<ol>
+<li>Initialize the <b>azColumnName[]</b> array for the callback.</li>
+<li>Open two cursors, one to each of the two tables being queried.</li>
+<li>For each record in the first table, do:
+ <ol type="a">
+ <li>For each record in the second table do:
+ <ol type="i">
+ <li>If the WHERE clause evaluates to FALSE, then skip the steps that
+ follow and continue to the next record.</li>
+ <li>Compute all columns for the current row of the result.</li>
+ <li>Invoke the callback function for the current row of the result.</li>
+ </ol></li>
+ </ol>
+<li>Close both cursors.</li>
+</ol>
+</p>
+
+<p>This template will work, but it is likely to be slow since we
+are now dealing with an O(N<sup>2</sup>) loop. But it often works
+out that the WHERE clause can be factored into terms and that one or
+more of those terms will involve only columns in the first table.
+When this happens, we can factor part of the WHERE clause test out of
+the inner loop and gain a lot of efficiency. So a better template
+would be something like this:</p>
+
+<p>
+<ol>
+<li>Initialize the <b>azColumnName[]</b> array for the callback.</li>
+<li>Open two cursors, one to each of the two tables being queried.</li>
+<li>For each record in the first table, do:
+ <ol type="a">
+ <li>Evaluate terms of the WHERE clause that only involve columns from
+ the first table. If any term is false (meaning that the whole
+ WHERE clause must be false) then skip the rest of this loop and
+ continue to the next record.</li>
+ <li>For each record in the second table do:
+ <ol type="i">
+ <li>If the WHERE clause evaluates to FALSE, then skip the steps that
+ follow and continue to the next record.</li>
+ <li>Compute all columns for the current row of the result.</li>
+ <li>Invoke the callback function for the current row of the result.</li>
+ </ol></li>
+ </ol>
+<li>Close both cursors.</li>
+</ol>
+</p>
+
+<p>Additional speed-up can occur if an index can be used to speed
+the search of either of the two loops.</p>
+
+<p>SQLite always constructs the loops in the same order as the
+tables appear in the FROM clause of the SELECT statement. The
+left-most table becomes the outer loop and the right-most table
+becomes the inner loop. It is possible, in theory, to reorder
+the loops in some circumstances to speed the evaluation of the
+join. But SQLite does not attempt this optimization.</p>
+
+<p>You can see how SQLite constructs nested loops in the following
+example:</p>
+
+<blockquote><pre>
+CREATE TABLE examp2(three int, four int);
+SELECT * FROM examp, examp2 WHERE two<50 AND four==two;
+</pre></blockquote>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 ColumnName 0 0 examp.one
+1 ColumnName 1 0 examp.two
+2 ColumnName 2 0 examp2.three
+3 ColumnName 3 0 examp2.four
+4 Integer 0 0
+5 OpenRead 0 3 examp
+6 VerifyCookie 0 909
+7 Integer 0 0
+8 OpenRead 1 5 examp2
+9 Rewind 0 24
+10 Column 0 1
+11 Integer 50 0 50
+12 Ge 1 23
+13 Rewind 1 23
+14 Column 1 1
+15 Column 0 1
+16 Ne 1 22
+17 Column 0 0
+18 Column 0 1
+19 Column 1 0
+20 Column 1 1
+21 Callback 4 0
+22 Next 1 14
+23 Next 0 10
+24 Close 0 0
+25 Close 1 0
+26 Halt 0 0
+}
+
+puts {
+<p>The outer loop over table examp is implemented by instructions
+7 through 23. The inner loop is instructions 13 through 22.
+Notice that the "two<50" term of the WHERE expression involves
+only columns from the first table and can be factored out of
+the inner loop. SQLite does this and implements the "two<50"
+test in instructions 10 through 12. The "four==two" test is
+implemented by instructions 14 through 16 in the inner loop.</p>
+
+<p>SQLite does not impose any arbitrary limits on the tables in
+a join. It also allows a table to be joined with itself.</p>
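+
+<p>As a rough C analogy (made-up in-memory tables, not the actual VDBE
+code), the nested loops for the query above, with the "two&lt;50" test
+hoisted out of the inner loop, look like this:</p>
+
+<blockquote><pre>
+#include &lt;stdio.h&gt;
+
+static struct { const char *one; int two; } aExamp[] =
+  { { "Hello, World!", 42 }, { "Goodbye", 99 } };
+static struct { int three; int four; } aExamp2[] =
+  { { 3, 42 }, { 4, 99 } };
+
+int main(void){
+  int i, j;
+  for(i=0; i&lt;2; i++){                       /* outer loop over examp       */
+    if( aExamp[i].two>=50 ) continue;       /* factored-out term: two&lt;50   */
+    for(j=0; j&lt;2; j++){                     /* inner loop over examp2      */
+      if( aExamp2[j].four!=aExamp[i].two ) continue;    /* four==two       */
+      printf("%s|%d|%d|%d\n", aExamp[i].one, aExamp[i].two,
+             aExamp2[j].three, aExamp2[j].four);        /* callback        */
+    }
+  }
+  return 0;
+}
+</pre></blockquote>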
+
+<h2>The ORDER BY clause</h2>
+
+<p>For historical reasons, and for efficiency, all sorting is currently
+done in memory.</p>
+
+<p>SQLite implements the ORDER BY clause using a special
+set of instructions to control an object called a sorter. In the
+inner-most loop of the query, where there would normally be
+a Callback instruction, instead a record is constructed that
+contains both callback parameters and a key. This record
+is added to the sorter (in a linked list). After the query loop
+finishes, the list of records is sorted and this list is walked. For
+each record on the list, the callback is invoked. Finally, the sorter
+is closed and memory is deallocated.</p>
+
+<p>We can see the process in action in the following query:</p>
+
+<blockquote><pre>
+SELECT * FROM examp ORDER BY one DESC, two;
+</pre></blockquote>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 ColumnName 0 0 one
+1 ColumnName 1 0 two
+2 Integer 0 0
+3 OpenRead 0 3 examp
+4 VerifyCookie 0 909
+5 Rewind 0 14
+6 Column 0 0
+7 Column 0 1
+8 SortMakeRec 2 0
+9 Column 0 0
+10 Column 0 1
+11 SortMakeKey 2 0 D+
+12 SortPut 0 0
+13 Next 0 6
+14 Close 0 0
+15 Sort 0 0
+16 SortNext 0 19
+17 SortCallback 2 0
+18 Goto 0 16
+19 SortReset 0 0
+20 Halt 0 0
+}
+
+puts {
+<p>There is only one sorter object, so there are no instructions to open
+or close it. It is opened automatically when needed, and it is closed
+when the VDBE program halts.</p>
+
+<p>The query loop is built from instructions 5 through 13. Instructions
+6 through 8 build a record that contains the azData[] values for a single
+invocation of the callback. A sort key is generated by instructions
+9 through 11. Instruction 12 combines the invocation record and the
+sort key into a single entry and puts that entry on the sort list.</p>
+
+<p>The P3 argument of instruction 11 is of particular interest. The
+sort key is formed by prepending one character from P3 to each string
+and concatenating all the strings. The sort comparison function will
+look at this character to determine whether the sort order is
+ascending or descending, and whether to sort as a string or number.
+In this example, the first column should be sorted as a string
+in descending order so its prefix is "D" and the second column should
+be sorted numerically in ascending order so its prefix is "+". Ascending
+string sorting uses "A", and descending numeric sorting uses "-".</p>
+
+<p>After the query loop ends, the table being queried is closed at
+instruction 14. This is done early in order to allow other processes
+or threads to access that table, if desired. The list of records
+that was built up inside the query loop is sorted by the instruction
+at 15. Instructions 16 through 18 walk through the record list
+(which is now in sorted order) and invoke the callback once for
+each record. Finally, the sorter is closed at instruction 19.</p>
+
+<h2>Aggregate Functions And The GROUP BY and HAVING Clauses</h2>
+
+<p>To compute aggregate functions, the VDBE implements a special
+data structure and instructions for controlling that data structure.
+The data structure is an unordered set of buckets, where each bucket
+has a key and one or more memory locations. Within the query
+loop, the GROUP BY clause is used to construct a key and the bucket
+with that key is brought into focus. A new bucket is created with
+the key if one did not previously exist. Once the bucket is in
+focus, the memory locations of the bucket are used to accumulate
+the values of the various aggregate functions. After the query
+loop terminates, each bucket is visited once to generate a
+single row of the results.</p>
+
+<p>An example will help to clarify this concept. Consider the
+following query:</p>
+
+<blockquote><pre>
+SELECT three, min(three+four)+avg(four)
+FROM examp2
+GROUP BY three;
+</pre></blockquote>
+
+
+<p>The VDBE code generated for this query is as follows:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 ColumnName 0 0 three
+1 ColumnName 1 0 min(three+four)+avg(four)
+2 AggReset 0 3
+3 AggInit 0 1 ptr(0x7903a0)
+4 AggInit 0 2 ptr(0x790700)
+5 Integer 0 0
+6 OpenRead 0 5 examp2
+7 VerifyCookie 0 909
+8 Rewind 0 23
+9 Column 0 0
+10 MakeKey 1 0 n
+11 AggFocus 0 14
+12 Column 0 0
+13 AggSet 0 0
+14 Column 0 0
+15 Column 0 1
+16 Add 0 0
+17 Integer 1 0
+18 AggFunc 0 1 ptr(0x7903a0)
+19 Column 0 1
+20 Integer 2 0
+21 AggFunc 0 1 ptr(0x790700)
+22 Next 0 9
+23 Close 0 0
+24 AggNext 0 31
+25 AggGet 0 0
+26 AggGet 0 1
+27 AggGet 0 2
+28 Add 0 0
+29 Callback 2 0
+30 Goto 0 24
+31 Noop 0 0
+32 Halt 0 0
+}
+
+puts {
+<p>The first instruction of interest is the
+<a href="opcode.html#AggReset">AggReset</a> at 2.
+The AggReset instruction initializes the set of buckets to be the
+empty set and specifies the number of memory slots available in each
+bucket as P2. In this example, each bucket will hold 3 memory slots.
+It is not obvious, but if you look closely at the rest of the program
+you can figure out what each of these slots is intended for.</p>
+
+<blockquote><table border="2" cellpadding="5">
+<tr><th>Memory Slot</th><th>Intended Use Of This Memory Slot</th></tr>
+<tr><td>0</td><td>The "three" column -- the key to the bucket</td></tr>
+<tr><td>1</td><td>The minimum "three+four" value</td></tr>
+<tr><td>2</td><td>The sum of all "four" values. This is used to compute
+ "avg(four)".</td></tr>
+</table></blockquote>
+
+<p>The query loop is implemented by instructions 8 through 22.
+The aggregate key specified by the GROUP BY clause is computed
+by instructions 9 and 10. Instruction 11 causes the appropriate
+bucket to come into focus. If a bucket with the given key does
+not already exist, a new bucket is created and control falls
+through to instructions 12 and 13 which initialize the bucket.
+If the bucket does already exist, then a jump is made to instruction
+14. The values of aggregate functions are updated by the instructions
+between 11 and 21. Instructions 14 through 18 update memory
+slot 1 to hold the running value of "min(three+four)". Then the sum of the
+"four" column is updated by instructions 19 through 21.</p>
+
+<p>After the query loop is finished, the table "examp2" is closed at
+instruction 23 so that its lock will be released and it can be
+used by other threads or processes. The next step is to loop
+over all aggregate buckets and output one row of the result for
+each bucket. This is done by the loop at instructions 24
+through 30. The AggNext instruction at 24 brings the next bucket
+into focus, or jumps to the end of the loop if all buckets have
+been examined already. The three values in the aggregator bucket are
+fetched in order at instructions 25 through 27, and the Add instruction
+at 28 combines the last two of them into the second column of the result.
+Finally, the callback is invoked at instruction 29.</p>
+
+<p>In summary then, any query with aggregate functions is implemented
+by two loops. The first loop scans the input table and computes
+aggregate information into buckets and the second loop scans through
+all the buckets to compute the final result.</p>
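+
+<p>The same two-loop idea can be sketched in ordinary C. This is a
+conceptual analogy with made-up data and a simple bucket array, not the
+actual VDBE code:</p>
+
+<blockquote><pre>
+#include &lt;stdio.h&gt;
+
+typedef struct {
+  int key;          /* the GROUP BY value ("three")            */
+  int minVal;       /* running min(three+four)                 */
+  int sum, cnt;     /* running sum and count for avg(four)     */
+} Bucket;
+
+static Bucket aBucket[100];
+static int nBucket = 0;
+
+static Bucket *aggFocus(int key){      /* like AggFocus: find or create */
+  int i;
+  for(i=0; i&lt;nBucket; i++){
+    if( aBucket[i].key==key ) return &amp;aBucket[i];
+  }
+  aBucket[nBucket].key = key;
+  aBucket[nBucket].minVal = 0x7fffffff;
+  aBucket[nBucket].sum = 0;
+  aBucket[nBucket].cnt = 0;
+  return &amp;aBucket[nBucket++];
+}
+
+int main(void){
+  static const int aThree[] = { 1, 1, 2 }, aFour[] = { 5, 7, 9 };
+  int i;
+  for(i=0; i&lt;3; i++){                  /* first loop: scan the input table */
+    Bucket *p = aggFocus(aThree[i]);
+    int v = aThree[i] + aFour[i];
+    if( v&lt;p->minVal ) p->minVal = v;   /* min(three+four)                  */
+    p->sum += aFour[i];  p->cnt++;     /* accumulate for avg(four)         */
+  }
+  for(i=0; i&lt;nBucket; i++){            /* second loop: one row per bucket  */
+    printf("%d|%g\n", aBucket[i].key,
+           aBucket[i].minVal + (double)aBucket[i].sum/aBucket[i].cnt);
+  }
+  return 0;
+}
+</pre></blockquote>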
+
+<p>The realization that an aggregate query is really two consecutive
+loops makes it much easier to understand the difference between
+a WHERE clause and a HAVING clause in an SQL query statement. The
+WHERE clause is a restriction on the first loop and the HAVING
+clause is a restriction on the second loop. You can see this
+by adding both a WHERE and a HAVING clause to our example query:</p>
+
+
+<blockquote><pre>
+SELECT three, min(three+four)+avg(four)
+FROM examp2
+WHERE three>four
+GROUP BY three
+HAVING avg(four)<10;
+</pre></blockquote>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 ColumnName 0 0 three
+1 ColumnName 1 0 min(three+four)+avg(four)
+2 AggReset 0 3
+3 AggInit 0 1 ptr(0x7903a0)
+4 AggInit 0 2 ptr(0x790700)
+5 Integer 0 0
+6 OpenRead 0 5 examp2
+7 VerifyCookie 0 909
+8 Rewind 0 26
+9 Column 0 0
+10 Column 0 1
+11 Le 1 25
+12 Column 0 0
+13 MakeKey 1 0 n
+14 AggFocus 0 17
+15 Column 0 0
+16 AggSet 0 0
+17 Column 0 0
+18 Column 0 1
+19 Add 0 0
+20 Integer 1 0
+21 AggFunc 0 1 ptr(0x7903a0)
+22 Column 0 1
+23 Integer 2 0
+24 AggFunc 0 1 ptr(0x790700)
+25 Next 0 9
+26 Close 0 0
+27 AggNext 0 37
+28 AggGet 0 2
+29 Integer 10 0 10
+30 Ge 1 27
+31 AggGet 0 0
+32 AggGet 0 1
+33 AggGet 0 2
+34 Add 0 0
+35 Callback 2 0
+36 Goto 0 27
+37 Noop 0 0
+38 Halt 0 0
+}
+
+puts {
+<p>The code generated in this last example is the same as the
+previous except for the addition of two conditional jumps used
+to implement the extra WHERE and HAVING clauses. The WHERE
+clause is implemented by instructions 9 through 11 in the query
+loop. The HAVING clause is implemented by instructions 28 through
+30 in the output loop.</p>
+
+<h2>Using SELECT Statements As Terms In An Expression</h2>
+
+<p>The very name "Structured Query Language" tells us that SQL should
+support nested queries. And, in fact, two different kinds of nesting
+are supported. Any SELECT statement that returns a single-row, single-column
+result can be used as a term in an expression of another SELECT statement.
+And, a SELECT statement that returns a single-column, multi-row result
+can be used as the right-hand operand of the IN and NOT IN operators.
+We will begin this section with an example of the first kind of nesting,
+where a single-row, single-column SELECT is used as a term in an expression
+of another SELECT. Here is our example:</p>
+
+<blockquote><pre>
+SELECT * FROM examp
+WHERE two!=(SELECT three FROM examp2
+ WHERE four=5);
+</pre></blockquote>
+
+<p>The way SQLite deals with this is to first run the inner SELECT
+(the one against examp2) and store its result in a private memory
+cell. SQLite then substitutes the value of this private memory
+cell for the inner SELECT when it evaluates the outer SELECT.
+The code looks like this:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 String 0 0
+1 MemStore 0 1
+2 Integer 0 0
+3 OpenRead 1 5 examp2
+4 VerifyCookie 0 909
+5 Rewind 1 13
+6 Column 1 1
+7 Integer 5 0 5
+8 Ne 1 12
+9 Column 1 0
+10 MemStore 0 1
+11 Goto 0 13
+12 Next 1 6
+13 Close 1 0
+14 ColumnName 0 0 one
+15 ColumnName 1 0 two
+16 Integer 0 0
+17 OpenRead 0 3 examp
+18 Rewind 0 26
+19 Column 0 1
+20 MemLoad 0 0
+21 Eq 1 25
+22 Column 0 0
+23 Column 0 1
+24 Callback 2 0
+25 Next 0 19
+26 Close 0 0
+27 Halt 0 0
+}
+
+puts {
+<p>The private memory cell is initialized to NULL by the first
+two instructions. Instructions 2 through 13 implement the inner
+SELECT statement against the examp2 table. Notice that instead of
+sending the result to a callback or storing the result on a sorter,
+the result of the query is pushed into the memory cell by instruction
+10 and the loop is abandoned by the Goto at instruction 11.
+The Next instruction at 12 is vestigial and never executes.</p>
+
+<p>The outer SELECT is implemented by instructions 14 through 25.
+In particular, the WHERE clause that contains the nested select
+is implemented by instructions 19 through 21. You can see that
+the result of the inner select is loaded onto the stack by instruction
+20 and used by the conditional jump at 21.</p>
+
+<p>When the result of a sub-select is a scalar, a single private memory
+cell can be used, as shown in the previous
+example. But when the result of a sub-select is a vector, such
+as when the sub-select is the right-hand operand of IN or NOT IN,
+a different approach is needed. In this case,
+the result of the sub-select is
+stored in a transient table and the contents of that table
+are tested using the Found or NotFound operators. Consider this
+example:</p>
+
+<blockquote><pre>
+SELECT * FROM examp
+WHERE two IN (SELECT three FROM examp2);
+</pre></blockquote>
+
+<p>The code generated to implement this last query is as follows:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 OpenTemp 1 1
+1 Integer 0 0
+2 OpenRead 2 5 examp2
+3 VerifyCookie 0 909
+4 Rewind 2 10
+5 Column 2 0
+6 IsNull -1 9
+7 String 0 0
+8 PutStrKey 1 0
+9 Next 2 5
+10 Close 2 0
+11 ColumnName 0 0 one
+12 ColumnName 1 0 two
+13 Integer 0 0
+14 OpenRead 0 3 examp
+15 Rewind 0 25
+16 Column 0 1
+17 NotNull -1 20
+18 Pop 1 0
+19 Goto 0 24
+20 NotFound 1 24
+21 Column 0 0
+22 Column 0 1
+23 Callback 2 0
+24 Next 0 16
+25 Close 0 0
+26 Halt 0 0
+}
+
+puts {
+<p>The transient table in which the results of the inner SELECT are
+stored is created by the <a href="opcode.html#OpenTemp">OpenTemp</a>
+instruction at 0. This opcode is used for tables that exist for the
+duration of a single SQL statement only. The transient cursor is always
+opened read/write even if the main database is read-only. The transient
+table is deleted automatically when the cursor is closed. The P2 value
+of 1 means the cursor points to a BTree index, which has no data but can
+have an arbitrary key.</p>
+
+<p>The inner SELECT statement is implemented by instructions 1 through 10.
+All this code does is make an entry in the temporary table for each
+row of the examp2 table with a non-NULL value for the "three" column.
+The key for each temporary table entry is the "three" column of examp2
+and the data is an empty string since it is never used.</p>
+
+<p>The outer SELECT is implemented by instructions 11 through 25. In
+particular, the WHERE clause containing the IN operator is implemented
+by instructions at 16, 17, and 20. Instruction 16 pushes the value of
+the "two" column for the current row onto the stack and instruction 17
+checks to see that it is non-NULL. If this is successful, execution
+jumps to 20, where it tests to see if the top of the stack matches any key
+in the temporary table. The rest of the code is the same as what has
+been shown before.</p>
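+
+<p>Conceptually (ignoring NULL handling), the IN operator therefore behaves
+like the following C sketch, which uses made-up data and a plain array in
+place of the transient table:</p>
+
+<blockquote><pre>
+#include &lt;stdio.h&gt;
+
+static int aExampTwo[]    = { 99, 42 };   /* examp.two                    */
+static int aExamp2Three[] = { 42, 7 };    /* examp2.three (inner SELECT)  */
+
+static int inSet(int v, const int *aSet, int n){   /* like NotFound       */
+  int i;
+  for(i=0; i&lt;n; i++) if( aSet[i]==v ) return 1;
+  return 0;
+}
+
+int main(void){
+  int i;
+  for(i=0; i&lt;2; i++){                     /* outer loop over examp        */
+    if( !inSet(aExampTwo[i], aExamp2Three, 2) ) continue;
+    printf("row with two=%d\n", aExampTwo[i]);         /* callback        */
+  }
+  return 0;
+}
+</pre></blockquote>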
+
+<h2>Compound SELECT Statements</h2>
+
+<p>SQLite also allows two or more SELECT statements to be joined as
+peers using operators UNION, UNION ALL, INTERSECT, and EXCEPT. These
+compound select statements are implemented using transient tables.
+The implementation is slightly different for each operator, but the
+basic ideas are the same. For an example we will use the EXCEPT
+operator.</p>
+
+<blockquote><pre>
+SELECT two FROM examp
+EXCEPT
+SELECT four FROM examp2;
+</pre></blockquote>
+
+<p>The result of this last example should be every unique value
+of the "two" column in the examp table, except any value that is
+in the "four" column of examp2 is removed. The code to implement
+this query is as follows:</p>
+}
+
+Code {
+addr opcode p1 p2 p3
+---- ------------ ----- ----- -----------------------------------
+0 OpenTemp 0 1
+1 KeyAsData 0 1
+2 Integer 0 0
+3 OpenRead 1 3 examp
+4 VerifyCookie 0 909
+5 Rewind 1 11
+6 Column 1 1
+7 MakeRecord 1 0
+8 String 0 0
+9 PutStrKey 0 0
+10 Next 1 6
+11 Close 1 0
+12 Integer 0 0
+13 OpenRead 2 5 examp2
+14 Rewind 2 20
+15 Column 2 1
+16 MakeRecord 1 0
+17 NotFound 0 19
+18 Delete 0 0
+19 Next 2 15
+20 Close 2 0
+21 ColumnName 0 0 four
+22 Rewind 0 26
+23 Column 0 0
+24 Callback 1 0
+25 Next 0 23
+26 Close 0 0
+27 Halt 0 0
+}
+
+puts {
+<p>The transient table in which the result is built is created by
+instruction 0. Three loops then follow. The loop at instructions
+5 through 10 implements the first SELECT statement. The second
+SELECT statement is implemented by the loop at instructions 14 through
+19. Finally, a loop at instructions 22 through 25 reads the transient
+table and invokes the callback once for each row in the result.</p>
+
+<p>Instruction 1 is of particular importance in this example. Normally,
+the Column instruction extracts the value of a column from a larger
+record in the data of an SQLite file entry. Instruction 1 sets a flag on
+the transient table so that Column will instead treat the key of the
+SQLite file entry as if it were data and extract column information from
+the key.</p>
+
+<p>Here is what is going to happen: The first SELECT statement
+will construct rows of the result and save each row as the key of
+an entry in the transient table. The data for each entry in the
+transient table is never used, so we fill it in with an empty string.
+The second SELECT statement also constructs rows, but the rows
+constructed by the second SELECT are removed from the transient table.
+That is why we want the rows to be stored in the key of the SQLite file
+instead of in the data -- so they can be easily located and deleted.</p>
+
+<p>Let's look more closely at what is happening here. The first
+SELECT is implemented by the loop at instructions 5 through 10.
+Instruction 5 initializes the loop by rewinding its cursor.
+Instruction 6 extracts the value of the "two" column from "examp"
+and instruction 7 converts this into a row. Instruction 8 pushes
+an empty string onto the stack. Finally, instruction 9 writes the
+row into the temporary table. But remember, the PutStrKey opcode uses
+the top of the stack as the record data and the next on stack as the
+key. For an INSERT statement, the row generated by the
+MakeRecord opcode is the record data and the record key is an integer
+created by the NewRecno opcode. But here the roles are reversed and
+the row created by MakeRecord is the record key and the record data is
+just an empty string.</p>
+
+<p>The second SELECT is implemented by instructions 14 through 19.
+Instruction 14 initializes the loop by rewinding its cursor.
+A new result row is created from the "four" column of table "examp2"
+by instructions 15 and 16. But instead of using PutStrKey to write this
+new row into the temporary table, we call Delete to remove
+it from the temporary table if it exists.</p>
+
+<p>The result of the compound select is sent to the callback routine
+by the loop at instructions 22 through 25. There is nothing new
+or remarkable about this loop, except for the fact that the Column
+instruction at 23 will be extracting a column out of the record key
+rather than the record data.</p>
+
+<h2>Summary</h2>
+
+<p>This article has reviewed all of the major techniques used by
+SQLite's VDBE to implement SQL statements. What has not been shown
+is that most of these techniques can be used in combination to
+generate code for an appropriately complex query statement. For
+example, we have shown how sorting is accomplished on a simple query
+and we have shown how to implement a compound query. But we did
+not give an example of sorting in a compound query. This is because
+sorting a compound query does not introduce any new concepts: it
+merely combines two previous ideas (sorting and compounding)
+in the same VDBE program.</p>
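+
+<p>At the SQL level, such a query is just the compound statement with an
+ORDER BY clause appended, roughly like this sketch (reusing the examp and
+examp2 tables from the earlier examples):</p>
+
+<blockquote><pre>
+SELECT two FROM examp
+EXCEPT
+SELECT four FROM examp2
+ORDER BY two;
+</pre></blockquote>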
+
+<p>For additional information on how the SQLite library
+functions, the reader is directed to look at the SQLite source
+code directly. If you understand the material in this article,
+you should not have much difficulty in following the sources.
+Serious students of the internals of SQLite will probably
+also want to make a careful study of the VDBE opcodes
+as documented <a href="opcode.html">here</a>. Most of the
+opcode documentation is extracted from comments in the source
+code using a script so you can also get information about the
+various opcodes directly from the <b>vdbe.c</b> source file.
+If you have successfully read this far, you should have little
+difficulty understanding the rest.</p>
+
+<p>If you find errors in either the documentation or the code,
+feel free to fix them and/or contact the author at
+<a href="mailto:drh at hwaci.com">drh at hwaci.com</a>. Your bug fixes or
+suggestions are always welcomed.</p>
+}
+footer $rcsid
Added: freeswitch/trunk/libs/sqlite/www/version3.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/version3.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,293 @@
+#!/usr/bin/tclsh
+source common.tcl
+header {SQLite Version 3 Overview}
+puts {
+<h2>SQLite Version 3 Overview</h2>
+
+<p>
+SQLite version 3.0 introduces important changes to the library, including:
+</p>
+
+<ul>
+<li>A more compact format for database files.</li>
+<li>Manifest typing and BLOB support.</li>
+<li>Support for both UTF-8 and UTF-16 text.</li>
+<li>User-defined text collating sequences.</li>
+<li>64-bit ROWIDs.</li>
+<li>Improved Concurrency.</li>
+</ul>
+
+<p>
+This document is a quick introduction to the changes for SQLite 3.0
+for users who are already familiar with SQLite version 2.8.
+</p>
+
+<h3>Naming Changes</h3>
+
+<p>
+SQLite version 2.8 will continue to be supported with bug fixes
+for the foreseeable future. In order to allow SQLite version 2.8
+and SQLite version 3.0 to peacefully coexist, the names of key files
+and APIs in SQLite version 3.0 have been changed to include the
+character "3". For example, the include file used by C programs
+has been changed from "sqlite.h" to "sqlite3.h". And the name of
+the shell program used to interact with databases has been changed
+from "sqlite.exe" to "sqlite3.exe". With these changes, it is possible
+to have both SQLite 2.8 and SQLite 3.0 installed on the same system at
+the same time. And it is possible for the same C program to link
+against both SQLite 2.8 and SQLite 3.0 at the same time and to use
+both libraries at the same time.
+</p>
+
+<h3>New File Format</h3>
+
+<p>
+The format used by SQLite database files has been completely revised.
+The old version 2.1 format and the new 3.0 format are incompatible with
+one another. Version 2.8 of SQLite will not read a version 3.0 database
+file and version 3.0 of SQLite will not read a version 2.8 database file.
+</p>
+
+<p>
+To convert an SQLite 2.8 database into an SQLite 3.0 database, have
+ready the command-line shells for both version 2.8 and 3.0. Then
+enter a command like the following:
+</p>
+
+<blockquote><pre>
+sqlite OLD.DB .dump | sqlite3 NEW.DB
+</pre></blockquote>
+
+<p>
+The new database file format uses B+trees for tables. In a B+tree, all
+data is stored in the leaves of the tree instead of in both the leaves and
+the intermediate branch nodes. The use of B+trees for tables allows for
+better scalability and the storage of larger data fields without the use of
+overflow pages. Traditional B-trees are still used for indices.</p>
+
+<p>
+The new file format also supports variable page sizes between 512 and
+32768 bytes.  The size of a page is stored in the file header so the
+same library can read databases with different page sizes, in theory,
+though this feature has not yet been implemented in practice.
+</p>
+
+<p>
+The new file format omits unused fields from its disk images. For example,
+indices use only the key part of a B-tree record and not the data. So
+for indices, the field that records the length of the data is omitted.
+Integer values such as the length of key and data are stored using
+a variable-length encoding so that only one or two bytes are required to
+store the most common cases but up to 64-bits of information can be encoded
+if needed.
+Integer and floating point data is stored on the disk in binary rather
+than being converted into ASCII as in SQLite version 2.8.
+These changes taken together result in database files that are typically
+25% to 35% smaller than the equivalent files in SQLite version 2.8.
+</p>
+
+<p>
+Details of the low-level B-tree format used in SQLite version 3.0 can
+be found in header comments to the
+<a href="http://www.sqlite.org/cvstrac/getfile/sqlite/src/btree.c">btree.c</a>
+source file.
+</p>
+
+<h3>Manifest Typing and BLOB Support</h3>
+
+<p>
+SQLite version 2.8 will deal with data in various formats internally,
+but when writing to the disk or interacting through its API, SQLite 2.8
+always converts data into ASCII text. SQLite 3.0, in contrast, exposes
+its internal data representations to the user and stores binary representations
+to disk when appropriate. The exposing of non-ASCII representations was
+added in order to support BLOBs.
+</p>
+
+<p>
+SQLite version 2.8 had the feature that any type of data could be stored
+in any table column regardless of the declared type of that column. This
+feature is retained in version 3.0, though in a slightly modified form.
+Each table column will store any type of data, though columns have an
+affinity for the format of data defined by their declared datatype.
+When data is inserted into a column, that column will make an attempt
+to convert the data format into the column's declared type.  All SQL
+database engines do this. The difference is that SQLite 3.0 will
+still store the data even if a format conversion is not possible.
+</p>
+
+<p>
+For example, if you have a table column declared to be of type "INTEGER"
+and you try to insert a string, the column will look at the text string
+and see if it looks like a number.  If the string does look like a number,
+it is converted into a number (and into an integer if the number has no
+fractional part) and stored that way.  But if the string is not
+a well-formed number it is still stored as a string. A column with a
+type of "TEXT" tries to convert numbers into an ASCII-Text representation
+before storing them. But BLOBs are stored in TEXT columns as BLOBs because
+you cannot in general convert a BLOB into text.
+</p>
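+
+<p>A short session illustrates this behavior (the table "t1" is just a
+hypothetical sketch; the built-in typeof() function reports how each value
+was actually stored):</p>
+
+<blockquote><pre>
+CREATE TABLE t1(a INTEGER, b TEXT);
+INSERT INTO t1 VALUES('123', 456);
+INSERT INTO t1 VALUES('xyz', 789);
+SELECT typeof(a), typeof(b) FROM t1;
+-- row 1: integer|text    ('123' was converted to the integer 123)
+-- row 2: text|text       ('xyz' is not a well-formed number, so it stays text)
+</pre></blockquote>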
+
+<p>
+In most other SQL database engines the datatype is associated with
+the table column that holds the data - with the data container.
+In SQLite 3.0, the datatype is associated with the data itself, not
+with its container.
+<a href="http://www.paulgraham.com/">Paul Graham</a> in his book
+<a href="http://www.paulgraham.com/acl.html"><i>ANSI Common Lisp</i></a>
+calls this property "Manifest Typing".
+Other writers have other definitions for the term "manifest typing",
+so beware of confusion. But by whatever name, that is the datatype
+model supported by SQLite 3.0.
+</p>
+
+<p>
+Additional information about datatypes in SQLite version 3.0 is
+available
+<a href="datatype3.html">separately</a>.
+</p>
+
+<h3>Support for UTF-8 and UTF-16</h3>
+
+<p>
+The new API for SQLite 3.0 contains routines that accept text as
+both UTF-8 and UTF-16 in the native byte order of the host machine.
+Each database file manages text as either UTF-8, UTF-16BE (big-endian),
+or UTF-16LE (little-endian). Internally and in the disk file, the
+same text representation is used everywhere. If the text representation
+specified by the database file (in the file header) does not match
+the text representation required by the interface routines, then text
+is converted on-the-fly.
+Constantly converting text from one representation to another can be
+computationally expensive, so it is suggested that programmers choose a
+single representation and stick with it throughout their application.
+</p>
+
+<p>
+In the current implementation of SQLite, the SQL parser only works
+with UTF-8 text. So if you supply UTF-16 text it will be converted.
+This is just an implementation issue and there is nothing to prevent
+future versions of SQLite from parsing UTF-16 encoded SQL natively.
+</p>
+
+<p>
+When creating new user-defined SQL functions and collating sequences,
+each function or collating sequence can specify whether it works with
+UTF-8, UTF-16BE, or UTF-16LE.  Separate implementations can be registered
+for each encoding.  If an SQL function or collating sequence is required
+but a version for the current text encoding is not available, then
+the text is automatically converted. As before, this conversion takes
+computation time, so programmers are advised to pick a single
+encoding and stick with it in order to minimize the amount of unnecessary
+format juggling.
+</p>
+
+<p>
+SQLite is not particular about the text it receives and is more than
+happy to process text strings that are not normalized or even
+well-formed UTF-8 or UTF-16. Thus, programmers who want to store
+ISO8859 data can do so using the UTF-8 interfaces.  As long as no
+attempts are made to use a UTF-16 collating sequence or SQL function,
+the byte sequence of the text will not be modified in any way.
+</p>
+
+<h3>User-defined Collating Sequences</h3>
+
+<p>
+A collating sequence is just a defined order for text. When SQLite 3.0
+sorts (or uses a comparison operator like "<" or ">=") the sort order
+is first determined by the data type.
+</p>
+
+<ul>
+<li>NULLs sort first</li>
+<li>Numeric values sort next in numerical order</li>
+<li>Text values come after numerics</li>
+<li>BLOBs sort last</li>
+</ul>
+
+<p>
+Collating sequences are used for comparing two text strings.
+The collating sequence does not change the ordering of NULLs, numbers,
+or BLOBs, only text.
+</p>
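+
+<p>A small hypothetical sketch of this cross-type ordering (quote() is used
+only to make each stored value visible):</p>
+
+<blockquote><pre>
+CREATE TABLE t(x);
+INSERT INTO t VALUES(X'0102');
+INSERT INTO t VALUES('abc');
+INSERT INTO t VALUES(3.5);
+INSERT INTO t VALUES(NULL);
+SELECT quote(x) FROM t ORDER BY x;
+-- NULL, 3.5, 'abc', X'0102'
+</pre></blockquote>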
+
+<p>
+A collating sequence is implemented as a function that takes the
+two strings being compared as inputs and returns negative, zero, or
+positive if the first string is less than, equal to, or greater than
+the second.
+SQLite 3.0 comes with a single built-in collating sequence named "BINARY"
+which is implemented using the memcmp() routine from the standard C library.
+The BINARY collating sequence works well for English text. For other
+languages or locales, alternative collating sequences may be preferred.
+</p>
+
+<p>
+The decision of which collating sequence to use is controlled by the
+COLLATE clause in SQL. A COLLATE clause can occur on a table definition,
+to define a default collating sequence for a table column, on a field
+of an index, or in the ORDER BY clause of a SELECT statement.
+Planned enhancements to SQLite are to include standard CAST() syntax
+to allow the collating sequence of an expression to be defined.
+</p>
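+
+<p>As a rough sketch of where a COLLATE clause can appear (assuming the
+application has registered a collating sequence named "latin1" through
+the API):</p>
+
+<blockquote><pre>
+CREATE TABLE names(x TEXT COLLATE latin1);
+CREATE INDEX names_by_x ON names(x COLLATE latin1);
+SELECT x FROM names ORDER BY x COLLATE latin1;
+</pre></blockquote>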
+
+<h3>64-bit ROWIDs</h3>
+
+<p>
+Every row of a table has a unique rowid.
+If the table defines a column with the type "INTEGER PRIMARY KEY" then that
+column becomes an alias for the rowid. But with or without an INTEGER PRIMARY
+KEY column, every row still has a rowid.
+</p>
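+
+<p>A minimal sketch of the alias (the "events" table is hypothetical):</p>
+
+<blockquote><pre>
+CREATE TABLE events(id INTEGER PRIMARY KEY, msg TEXT);
+INSERT INTO events(msg) VALUES('hello');
+SELECT id, rowid FROM events;
+-- both columns return the same value: the rowid of the row
+</pre></blockquote>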
+
+<p>
+In SQLite version 3.0, the rowid is a 64-bit signed integer.
+This is an expansion of SQLite version 2.8 which only permitted
+rowids of 32-bits.
+</p>
+
+<p>
+To minimize storage space, the 64-bit rowid is stored as a variable length
+integer. Rowids between 0 and 127 use only a single byte.
+Rowids between 128 and 16383 use just 2 bytes.  Rowids up to 2097152 use three
+bytes.  And so forth.  Negative rowids are allowed but they always use
+nine bytes of storage and so their use is discouraged. When rowids
+are generated automatically by SQLite, they will always be non-negative.
+</p>
+
+<h3>Improved Concurrency</h3>
+
+<p>
+SQLite version 2.8 allowed multiple simultaneous readers or a single
+writer but not both. SQLite version 3.0 allows one process to begin
+writing the database while other processes continue to read. The
+writer must still obtain an exclusive lock on the database for a brief
+interval in order to commit its changes, but the exclusive lock is no
+longer required for the entire write operation.
+A <a href="lockingv3.html">more detailed report</a> on the locking
+behavior of SQLite version 3.0 is available separately.
+</p>
+
+<p>
+A limited form of table-level locking is now also available in SQLite.
+If each table is stored in a separate database file, those separate
+files can be attached to the main database (using the ATTACH command)
+and the combined databases will function as one. But locks will only
+be acquired on individual files as needed. So if you redefine "database"
+to mean two or more database files, then it is entirely possible for
+two processes to be writing to the same database at the same time.
+To further support this capability, commits of transactions involving
+two or more ATTACHed databases are now atomic.
+</p>
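+
+<p>A sketch of this arrangement (the file and table names are hypothetical):</p>
+
+<blockquote><pre>
+ATTACH DATABASE 'archive.db' AS archive;
+ATTACH DATABASE 'scratch.db' AS scratch;
+CREATE TABLE archive.log(msg TEXT);
+CREATE TABLE scratch.work(msg TEXT);
+BEGIN;
+INSERT INTO archive.log VALUES('row one');
+INSERT INTO scratch.work VALUES('row two');
+COMMIT;   -- this commit spans both attached files and is atomic
+</pre></blockquote>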
+
+<h3>Credits</h3>
+
+<p>
+SQLite version 3.0 is made possible in part by AOL developers
+supporting and embracing great Open-Source Software.
+</p>
+
+
+}
+footer {$Id: version3.tcl,v 1.6 2006/03/03 21:39:54 drh Exp $}
Added: freeswitch/trunk/libs/sqlite/www/whentouse.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/whentouse.tcl Tue Dec 19 15:11:50 2006
@@ -0,0 +1,252 @@
+#
+# Run this TCL script to generate HTML for the goals.html file.
+#
+set rcsid {$Id: whentouse.tcl,v 1.6 2005/08/16 14:44:49 drh Exp $}
+source common.tcl
+header {Appropriate Uses For SQLite}
+
+puts {
+<p>
+SQLite is different from most other SQL database engines in that its
+primary design goal is to be simple:
+</p>
+
+<ul>
+<li>Simple to administer</li>
+<li>Simple to operate</li>
+<li>Simple to embed in a larger program</li>
+<li>Simple to maintain and customize</li>
+</ul>
+
+<p>
+Many people like SQLite because it is small and fast. But those
+qualities are just happy accidents.
+Users also find that SQLite is very reliable. Reliability is
+a consequence of simplicity. With less complication, there is
+less to go wrong. So, yes, SQLite is small, fast, and reliable,
+but first and foremost, SQLite strives to be simple.
+</p>
+
+<p>
+Simplicity in a database engine can be either a strength or a
+weakness, depending on what you are trying to do. In order to
+achieve simplicity, SQLite has had to sacrifice other characteristics
+that some people find useful, such as high concurrency, fine-grained
+access control, a rich set of built-in functions, stored procedures,
+esoteric SQL language features, XML and/or Java extensions,
+tera- or peta-byte scalability, and so forth. If you need some of these
+features and do not mind the added complexity that they
+bring, then SQLite is probably not the database for you.
+SQLite is not intended to be an enterprise database engine. It
+is not designed to compete with Oracle or PostgreSQL.
+</p>
+
+<p>
+The basic rule of thumb for when it is appropriate to use SQLite is
+this: Use SQLite in situations where simplicity of administration,
+implementation, and maintenance is more important than the countless
+complex features that enterprise database engines provide.
+As it turns out, situations where simplicity is the better choice
+are more common than many people realize.
+</p>
+
+<h2>Situations Where SQLite Works Well</h2>
+
+<ul>
+<li><p><b>Websites</b></p>
+
+<p>SQLite usually will work great as the database engine for low to
+medium traffic websites (which is to say, 99.9% of all websites).
+The amount of web traffic that SQLite can handle depends, of course,
+on how heavily the website uses its database. Generally
+speaking, any site that gets fewer than 100000 hits/day should work
+fine with SQLite.
+The 100000 hits/day figure is a conservative estimate, not a
+hard upper bound.
+SQLite has been demonstrated to work with 10 times that amount
+of traffic.</p>
+</li>
+
+<li><p><b>Embedded devices and applications</b></p>
+
+<p>Because an SQLite database requires little or no administration,
+SQLite is a good choice for devices or services that must work
+unattended and without human support. SQLite is a good fit for
+use in cellphones, PDAs, set-top boxes, and/or appliances. It also
+works well as an embedded database in downloadable consumer applications.
+</p>
+</li>
+
+<li><p><b>Application File Format</b></p>
+
+<p>
+SQLite has been used with great success as the on-disk file format
+for desktop applications such as financial analysis tools, CAD
+packages, record keeping programs, and so forth. The traditional
+File/Open operation does an sqlite3_open() and executes a
+BEGIN TRANSACTION to get exclusive access to the content. File/Save
+does a COMMIT followed by another BEGIN TRANSACTION. The use
+of transactions guarantees that updates to the application file are atomic,
+durable, isolated, and consistent.
+</p>
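+
+<p>In SQL terms the pattern is roughly this sketch:</p>
+
+<blockquote><pre>
+-- File/Open
+BEGIN TRANSACTION;
+-- the user edits the document through INSERT/UPDATE/DELETE statements
+-- File/Save
+COMMIT;
+BEGIN TRANSACTION;
+</pre></blockquote>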
+
+<p>
+Temporary triggers can be added to the database to record all
+changes into a (temporary) undo/redo log table. These changes can then
+be played back when the user presses the Undo and Redo buttons. Using
+this technique, an unlimited-depth undo/redo implementation can be written
+in surprisingly little code, as sketched below.
+</p>
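+
+<p>A minimal sketch of such an undo log (the "doc" table and its "body"
+column are hypothetical application tables):</p>
+
+<blockquote><pre>
+CREATE TEMP TABLE undolog(seq INTEGER PRIMARY KEY, sql TEXT);
+CREATE TEMP TRIGGER doc_undo_delete AFTER DELETE ON doc BEGIN
+  INSERT INTO undolog(sql) VALUES(
+    'INSERT INTO doc(rowid,body) VALUES(' ||
+    old.rowid || ',' || quote(old.body) || ')'
+  );
+END;
+-- Pressing Undo replays and then discards the most recent undolog entries.
+</pre></blockquote>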
+</li>
+
+<li><p><b>Replacement for <i>ad hoc</i> disk files</b></p>
+
+<p>Many programs use fopen(), fread(), and fwrite() to create and
+manage files of data in home-grown formats. SQLite works
+particularly well as a
+replacement for these <i>ad hoc</i> data files.</p>
+</li>
+
+<li><p><b>Internal or temporary databases</b></p>
+
+<p>
+For programs that have a lot of data that must be sifted and sorted
+in diverse ways, it is often easier and quicker to load the data into
+an in-memory SQLite database and use queries with joins and ORDER BY
+clauses to extract the data in the form and order needed rather than
+to try to code the same operations manually.
+Using an SQL database internally in this way also gives the program
+greater flexibility since new columns and indices can be added without
+having to recode every query.
+</p>
+</li>
+
+<li><p><b>Command-line dataset analysis tool</b></p>
+
+<p>
+Experienced SQL users can employ
+the command-line <b>sqlite</b> program to analyze miscellaneous
+datasets. Raw data can be imported from CSV files, then that
+data can be sliced and diced to generate a myriad of summary
+reports. Possible uses include website log analysis, sports
+statistics analysis, compilation of programming metrics, and
+analysis of experimental results.
+</p>
+
+<p>
+You can also do the same thing with an enterprise client/server
+database, of course. The advantages to using SQLite in this situation
+are that SQLite is much easier to set up and the resulting database
+is a single file that you can store on a floppy disk or flash-memory stick
+or email to a colleague.
+</p>
+</li>
+
+<li><p><b>Stand-in for an enterprise database during demos or testing</b></p>
+
+<p>
+If you are writing a client application for an enterprise database engine,
+it makes sense to use a generic database backend that allows you to connect
+to many different kinds of SQL database engines. It makes even better
+sense to
+go ahead and include SQLite in the mix of supported databases and to statically
+link the SQLite engine in with the client. That way the client program
+can be used standalone with an SQLite data file for testing or for
+demonstrations.
+</p>
+</li>
+
+<li><p><b>Database Pedagogy</b></p>
+
+<p>
+Because it is simple to set up and use (installation is trivial: just
+copy the <b>sqlite</b> or <b>sqlite.exe</b> executable to the target machine
+and run it) SQLite makes a good database engine for use in teaching SQL.
+Students can easily create as many databases as they like and can
+email databases to the instructor for comments or grading. For more
+advanced students who are interested in studying how an RDBMS is
+implemented, the modular, well-commented, and well-documented SQLite code
+can serve as a good basis. This is not to say that SQLite is an accurate
+model of how other database engines are implemented, but rather a student who
+understands how SQLite works can more quickly comprehend the operational
+principles of other systems.
+</p>
+</li>
+
+<li><p><b>Experimental SQL language extensions</b></p>
+
+<p>The simple, modular design of SQLite makes it a good platform for
+prototyping new, experimental database language features or ideas.
+</p>
+</li>
+
+
+</ul>
+
+<h2>Situations Where Another RDBMS May Work Better</h2>
+
+<ul>
+<li><p><b>Client/Server Applications</b></p>
+
+<p>If you have many client programs accessing a common database
+over a network, you should consider using a client/server database
+engine instead of SQLite. SQLite will work over a network filesystem,
+but because of the latency associated with most network filesystems,
+performance will not be great. Also, the file locking logic of
+many network filesystem implementations contains bugs (on both Unix
+and Windows).  If file locking does not work as it should,
+it might be possible for two or more client programs to modify the
+same part of the same database at the same time, resulting in
+database corruption. Because this problem results from bugs in
+the underlying filesystem implementation, there is nothing SQLite
+can do to prevent it.</p>
+
+<p>A good rule of thumb is that you should avoid using SQLite
+in situations where the same database will be accessed simultaneously
+from many computers over a network filesystem.</p>
+</li>
+
+<li><p><b>High-volume Websites</b></p>
+
+<p>SQLite will normally work fine as the database backend to a website.
+But if your website is so busy that you are thinking of splitting the
+database component off onto a separate machine, then you should
+definitely consider using an enterprise-class client/server database
+engine instead of SQLite.</p>
+</li>
+
+<li><p><b>Very large datasets</b></p>
+
+<p>When you start a transaction in SQLite (which happens automatically
+before any write operation that is not within an explicit BEGIN...COMMIT)
+the engine has to allocate a bitmap of dirty pages in the disk file to
+help it manage its rollback journal. SQLite needs 256 bytes of RAM for
+every 1MB of database. For smaller databases, the amount of memory
+required is not a problem, but when databases begin to grow into the
+multi-gigabyte range, the size of the bitmap can get quite large (a 10 GB
+database, for example, needs roughly 2.5 MB of RAM for this bitmap).  If
+you need to store and modify more than a few dozen GB of data, you should
+consider using a different database engine.
+</p>
+</li>
+
+<li><p><b>High Concurrency</b></p>
+
+<p>
+SQLite uses reader/writer locks on the entire database file. That means
+if any process is reading from any part of the database, all other
+processes are prevented from writing any other part of the database.
+Similarly, if any one process is writing to the database,
+all other processes are prevented from reading any other part of the
+database.
+For many situations, this is not a problem. Each application
+does its database work quickly and moves on, and no lock lasts for more
+than a few dozen milliseconds. But there are some applications that require
+more concurrency, and those applications may need to seek a different
+solution.
+</p>
+</li>
+
+</ul>
+
+}
+footer $rcsid