[Freeswitch-svn] [commit] r4351 - in freeswitch/trunk/libs/sqlite: . ext/fts1 ext/fts2 src test tool www

Freeswitch SVN mikej at freeswitch.org
Thu Feb 22 17:09:42 EST 2007


Author: mikej
Date: Thu Feb 22 17:09:42 2007
New Revision: 4351

Added:
   freeswitch/trunk/libs/sqlite/.update
   freeswitch/trunk/libs/sqlite/ext/fts2/
   freeswitch/trunk/libs/sqlite/ext/fts2/README.txt
   freeswitch/trunk/libs/sqlite/ext/fts2/fts2.c
   freeswitch/trunk/libs/sqlite/ext/fts2/fts2.h
   freeswitch/trunk/libs/sqlite/ext/fts2/fts2_hash.c
   freeswitch/trunk/libs/sqlite/ext/fts2/fts2_hash.h
   freeswitch/trunk/libs/sqlite/ext/fts2/fts2_porter.c
   freeswitch/trunk/libs/sqlite/ext/fts2/fts2_tokenizer.h
   freeswitch/trunk/libs/sqlite/ext/fts2/fts2_tokenizer1.c
   freeswitch/trunk/libs/sqlite/test/capi3c.test
   freeswitch/trunk/libs/sqlite/test/fts1e.test
   freeswitch/trunk/libs/sqlite/test/fts1f.test
   freeswitch/trunk/libs/sqlite/test/fts1i.test
   freeswitch/trunk/libs/sqlite/test/fts1j.test
   freeswitch/trunk/libs/sqlite/test/fts2a.test
   freeswitch/trunk/libs/sqlite/test/fts2b.test
   freeswitch/trunk/libs/sqlite/test/fts2c.test
   freeswitch/trunk/libs/sqlite/test/fts2d.test
   freeswitch/trunk/libs/sqlite/test/fts2e.test
   freeswitch/trunk/libs/sqlite/test/fts2f.test
   freeswitch/trunk/libs/sqlite/test/fts2g.test
   freeswitch/trunk/libs/sqlite/test/fts2h.test
   freeswitch/trunk/libs/sqlite/test/fts2i.test
   freeswitch/trunk/libs/sqlite/test/fts2j.test
   freeswitch/trunk/libs/sqlite/test/schema2.test
   freeswitch/trunk/libs/sqlite/test/speed1.test
   freeswitch/trunk/libs/sqlite/test/tkt2141.test
   freeswitch/trunk/libs/sqlite/test/tkt2192.test
   freeswitch/trunk/libs/sqlite/test/tkt2213.test
   freeswitch/trunk/libs/sqlite/test/where4.test
   freeswitch/trunk/libs/sqlite/tool/fragck.tcl
   freeswitch/trunk/libs/sqlite/www/typesafe.tcl
Modified:
   freeswitch/trunk/libs/sqlite/   (props changed)
   freeswitch/trunk/libs/sqlite/Makefile.in
   freeswitch/trunk/libs/sqlite/VERSION
   freeswitch/trunk/libs/sqlite/ext/fts1/fts1.c
   freeswitch/trunk/libs/sqlite/ext/fts1/fts1_porter.c
   freeswitch/trunk/libs/sqlite/src/   (props changed)
   freeswitch/trunk/libs/sqlite/src/btree.c
   freeswitch/trunk/libs/sqlite/src/btree.h
   freeswitch/trunk/libs/sqlite/src/build.c
   freeswitch/trunk/libs/sqlite/src/callback.c
   freeswitch/trunk/libs/sqlite/src/date.c
   freeswitch/trunk/libs/sqlite/src/delete.c
   freeswitch/trunk/libs/sqlite/src/expr.c
   freeswitch/trunk/libs/sqlite/src/func.c
   freeswitch/trunk/libs/sqlite/src/loadext.c
   freeswitch/trunk/libs/sqlite/src/main.c
   freeswitch/trunk/libs/sqlite/src/os.h
   freeswitch/trunk/libs/sqlite/src/os_os2.c
   freeswitch/trunk/libs/sqlite/src/os_unix.c
   freeswitch/trunk/libs/sqlite/src/os_win.c
   freeswitch/trunk/libs/sqlite/src/pager.c
   freeswitch/trunk/libs/sqlite/src/pager.h
   freeswitch/trunk/libs/sqlite/src/parse.y
   freeswitch/trunk/libs/sqlite/src/pragma.c
   freeswitch/trunk/libs/sqlite/src/prepare.c
   freeswitch/trunk/libs/sqlite/src/printf.c
   freeswitch/trunk/libs/sqlite/src/random.c
   freeswitch/trunk/libs/sqlite/src/select.c
   freeswitch/trunk/libs/sqlite/src/shell.c
   freeswitch/trunk/libs/sqlite/src/sqlite.h.in
   freeswitch/trunk/libs/sqlite/src/sqlite3ext.h
   freeswitch/trunk/libs/sqlite/src/sqliteInt.h
   freeswitch/trunk/libs/sqlite/src/tclsqlite.c
   freeswitch/trunk/libs/sqlite/src/test1.c
   freeswitch/trunk/libs/sqlite/src/test3.c
   freeswitch/trunk/libs/sqlite/src/test8.c
   freeswitch/trunk/libs/sqlite/src/test_autoext.c
   freeswitch/trunk/libs/sqlite/src/tokenize.c
   freeswitch/trunk/libs/sqlite/src/trigger.c
   freeswitch/trunk/libs/sqlite/src/update.c
   freeswitch/trunk/libs/sqlite/src/utf.c
   freeswitch/trunk/libs/sqlite/src/vacuum.c
   freeswitch/trunk/libs/sqlite/src/vdbe.c
   freeswitch/trunk/libs/sqlite/src/vdbe.h
   freeswitch/trunk/libs/sqlite/src/vdbeInt.h
   freeswitch/trunk/libs/sqlite/src/vdbeapi.c
   freeswitch/trunk/libs/sqlite/src/vdbeaux.c
   freeswitch/trunk/libs/sqlite/src/vdbemem.c
   freeswitch/trunk/libs/sqlite/src/vtab.c
   freeswitch/trunk/libs/sqlite/src/where.c
   freeswitch/trunk/libs/sqlite/test/all.test
   freeswitch/trunk/libs/sqlite/test/alter2.test
   freeswitch/trunk/libs/sqlite/test/btree.test
   freeswitch/trunk/libs/sqlite/test/capi2.test
   freeswitch/trunk/libs/sqlite/test/capi3.test
   freeswitch/trunk/libs/sqlite/test/collate1.test
   freeswitch/trunk/libs/sqlite/test/collate2.test
   freeswitch/trunk/libs/sqlite/test/conflict.test
   freeswitch/trunk/libs/sqlite/test/date.test
   freeswitch/trunk/libs/sqlite/test/func.test
   freeswitch/trunk/libs/sqlite/test/ioerr.test
   freeswitch/trunk/libs/sqlite/test/malloc.test
   freeswitch/trunk/libs/sqlite/test/misc5.test
   freeswitch/trunk/libs/sqlite/test/pragma.test
   freeswitch/trunk/libs/sqlite/test/quick.test
   freeswitch/trunk/libs/sqlite/test/select6.test
   freeswitch/trunk/libs/sqlite/test/select7.test
   freeswitch/trunk/libs/sqlite/test/tableapi.test
   freeswitch/trunk/libs/sqlite/test/tester.tcl
   freeswitch/trunk/libs/sqlite/test/threadtest2.c
   freeswitch/trunk/libs/sqlite/test/trigger4.test
   freeswitch/trunk/libs/sqlite/test/utf16.test
   freeswitch/trunk/libs/sqlite/test/vtab1.test
   freeswitch/trunk/libs/sqlite/test/vtab_err.test
   freeswitch/trunk/libs/sqlite/test/where.test
   freeswitch/trunk/libs/sqlite/test/where2.test
   freeswitch/trunk/libs/sqlite/test/where3.test
   freeswitch/trunk/libs/sqlite/tool/lemon.c
   freeswitch/trunk/libs/sqlite/tool/lempar.c
   freeswitch/trunk/libs/sqlite/tool/spaceanal.tcl
   freeswitch/trunk/libs/sqlite/www/capi3ref.tcl
   freeswitch/trunk/libs/sqlite/www/changes.tcl
   freeswitch/trunk/libs/sqlite/www/different.tcl
   freeswitch/trunk/libs/sqlite/www/index.tcl
   freeswitch/trunk/libs/sqlite/www/lang.tcl
   freeswitch/trunk/libs/sqlite/www/oldnews.tcl
   freeswitch/trunk/libs/sqlite/www/pragma.tcl
   freeswitch/trunk/libs/sqlite/www/sqlite.tcl

Log:
Sync up our in-tree sqlite with the 3.3.13 official release.  A follow-up commit will finish this process for the Windows build.

Added: freeswitch/trunk/libs/sqlite/.update
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/.update	Thu Feb 22 17:09:42 2007
@@ -0,0 +1 @@
+Thu Feb 22 16:53:55 EST 2007

Modified: freeswitch/trunk/libs/sqlite/Makefile.in
==============================================================================
--- freeswitch/trunk/libs/sqlite/Makefile.in	(original)
+++ freeswitch/trunk/libs/sqlite/Makefile.in	Thu Feb 22 17:09:42 2007
@@ -116,9 +116,7 @@
 
 # You should not have to change anything below this line
 ###############################################################################
-OPTS = 
-OPTS += -DSQLITE_OMIT_CURSOR          # Cursors do not work at this time
-TCC += -DSQLITE_OMIT_CURSOR
+TCC += -DSQLITE_OMIT_LOAD_EXTENSION=1
 
 # Object files for the SQLite library.
 #
@@ -305,7 +303,7 @@
 # Rules to build the LEMON compiler generator
 #
 lemon$(BEXE):	$(TOP)/tool/lemon.c $(TOP)/tool/lempar.c
-	$(BCC) -o lemon $(TOP)/tool/lemon.c
+	$(BCC) -o lemon$(BEXE) $(TOP)/tool/lemon.c
 	cp $(TOP)/tool/lempar.c .
 
 
@@ -393,7 +391,7 @@
 
 parse.c:	$(TOP)/src/parse.y lemon$(BEXE) $(TOP)/addopcodes.awk
 	cp $(TOP)/src/parse.y .
-	./lemon $(OPTS) parse.y
+	./lemon$(BEXE) $(OPTS) parse.y
 	mv parse.h parse.h.temp
 	awk -f $(TOP)/addopcodes.awk parse.h.temp >parse.h
 
@@ -667,6 +665,7 @@
 	$(LTINSTALL) sqlite3 $(DESTDIR)$(exec_prefix)/bin
 	$(INSTALL) -d $(DESTDIR)$(prefix)/include
 	$(INSTALL) -m 0644 sqlite3.h $(DESTDIR)$(prefix)/include
+	$(INSTALL) -m 0644 $(TOP)/src/sqlite3ext.h $(DESTDIR)$(prefix)/include
 	$(INSTALL) -d $(DESTDIR)$(libdir)/pkgconfig; 
 	$(INSTALL) -m 0644 sqlite3.pc $(DESTDIR)$(libdir)/pkgconfig; 
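
The build now defines SQLITE_OMIT_LOAD_EXTENSION, which leaves the runtime
extension-loading API out of the library, and sqlite3ext.h is now installed
alongside sqlite3.h.  A minimal sketch of how an application that optionally
loads extensions might guard for such a build, assuming the application is
compiled with the same define as the library (the function name is
hypothetical):

    #include <sqlite3.h>

    /* Returns SQLITE_OK if extension loading could be enabled, or
    ** SQLITE_ERROR when the library was built without the API. */
    static int try_enable_load_extension(sqlite3 *db){
    #ifdef SQLITE_OMIT_LOAD_EXTENSION
      (void)db;
      return SQLITE_ERROR;
    #else
      return sqlite3_enable_load_extension(db, 1);
    #endif
    }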
 

Modified: freeswitch/trunk/libs/sqlite/VERSION
==============================================================================
--- freeswitch/trunk/libs/sqlite/VERSION	(original)
+++ freeswitch/trunk/libs/sqlite/VERSION	Thu Feb 22 17:09:42 2007
@@ -1 +1 @@
-3.3.8
+3.3.13

Modified: freeswitch/trunk/libs/sqlite/ext/fts1/fts1.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/ext/fts1/fts1.c	(original)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/fts1.c	Thu Feb 22 17:09:42 2007
@@ -50,14 +50,14 @@
   char *s;      /* Content of the string */
 } StringBuffer;
 
-void initStringBuffer(StringBuffer *sb){
+static void initStringBuffer(StringBuffer *sb){
   sb->len = 0;
   sb->alloced = 100;
   sb->s = malloc(100);
   sb->s[0] = '\0';
 }
 
-void nappend(StringBuffer *sb, const char *zFrom, int nFrom){
+static void nappend(StringBuffer *sb, const char *zFrom, int nFrom){
   if( sb->len + nFrom >= sb->alloced ){
     sb->alloced = sb->len + nFrom + 100;
     sb->s = realloc(sb->s, sb->alloced+1);
@@ -70,7 +70,7 @@
   sb->len += nFrom;
   sb->s[sb->len] = 0;
 }
-void append(StringBuffer *sb, const char *zFrom){
+static void append(StringBuffer *sb, const char *zFrom){
   nappend(sb, zFrom, strlen(zFrom));
 }
 
@@ -847,25 +847,31 @@
 }
 
 /* Format a string, replacing each occurrence of the % character with
- * zName.  This may be more convenient than sqlite_mprintf()
+ * zDb.zName.  This may be more convenient than sqlite_mprintf()
  * when one string is used repeatedly in a format string.
  * The caller must free() the returned string. */
-static char *string_format(const char *zFormat, const char *zName){
+static char *string_format(const char *zFormat,
+                           const char *zDb, const char *zName){
   const char *p;
   size_t len = 0;
+  size_t nDb = strlen(zDb);
   size_t nName = strlen(zName);
+  size_t nFullTableName = nDb+1+nName;
   char *result;
   char *r;
 
   /* first compute length needed */
   for(p = zFormat ; *p ; ++p){
-    len += (*p=='%' ? nName : 1);
+    len += (*p=='%' ? nFullTableName : 1);
   }
   len += 1;  /* for null terminator */
 
   r = result = malloc(len);
   for(p = zFormat; *p; ++p){
     if( *p=='%' ){
+      memcpy(r, zDb, nDb);
+      r += nDb;
+      *r++ = '.';
       memcpy(r, zName, nName);
       r += nName;
     } else {
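
For illustration, a hypothetical call under the new signature (names chosen
for illustration only): the '%' placeholder now expands to the qualified
name zDb.zName.

    char *zSql = string_format("drop table %_content", "main", "t1");
    /* zSql is "drop table main.t1_content"; caller must free() it. */
    free(zSql);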
@@ -877,8 +883,9 @@
   return result;
 }
 
-static int sql_exec(sqlite3 *db, const char *zName, const char *zFormat){
-  char *zCommand = string_format(zFormat, zName);
+static int sql_exec(sqlite3 *db, const char *zDb, const char *zName,
+                    const char *zFormat){
+  char *zCommand = string_format(zFormat, zDb, zName);
   int rc;
   TRACE(("FTS1 sql: %s\n", zCommand));
   rc = sqlite3_exec(db, zCommand, NULL, 0, NULL);
@@ -886,9 +893,9 @@
   return rc;
 }
 
-static int sql_prepare(sqlite3 *db, const char *zName, sqlite3_stmt **ppStmt,
-                const char *zFormat){
-  char *zCommand = string_format(zFormat, zName);
+static int sql_prepare(sqlite3 *db, const char *zDb, const char *zName,
+                       sqlite3_stmt **ppStmt, const char *zFormat){
+  char *zCommand = string_format(zFormat, zDb, zName);
   int rc;
   TRACE(("FTS1 prepare: %s\n", zCommand));
   rc = sqlite3_prepare(db, zCommand, -1, ppStmt, NULL);
@@ -1040,6 +1047,7 @@
 struct fulltext_vtab {
   sqlite3_vtab base;               /* Base class used by SQLite core */
   sqlite3 *db;                     /* The database connection */
+  const char *zDb;                 /* logical database name */
   const char *zName;               /* virtual table name */
   int nColumn;                     /* number of columns in virtual table */
   char **azColumn;                 /* column names.  malloced */
@@ -1139,7 +1147,7 @@
       default:
         zStmt = fulltext_zStatement[iStmt];
     }
-    rc = sql_prepare(v->db, v->zName, &v->pFulltextStatements[iStmt],
+    rc = sql_prepare(v->db, v->zDb, v->zName, &v->pFulltextStatements[iStmt],
                          zStmt);
     if( zStmt != fulltext_zStatement[iStmt]) free((void *) zStmt);
     if( rc!=SQLITE_OK ) return rc;
@@ -1242,7 +1250,7 @@
   return sql_single_step_statement(v, CONTENT_UPDATE_STMT, &s);
 }
 
-void freeStringArray(int nString, const char **pString){
+static void freeStringArray(int nString, const char **pString){
   int i;
 
   for (i=0 ; i < nString ; ++i) {
@@ -1634,7 +1642,7 @@
 **     [pqr]   becomes   pqr
 **     `mno`   becomes   mno
 */
-void dequoteString(char *z){
+static void dequoteString(char *z){
   int quote;
   int i, j;
   if( z==0 ) return;
@@ -1676,7 +1684,7 @@
 **     input:      delimiters ( '[' , ']' , '...' )
 **     output:     [ ] ...
 */
-void tokenListToIdList(char **azIn){
+static void tokenListToIdList(char **azIn){
   int i, j;
   if( azIn ){
     for(i=0, j=-1; azIn[i]; i++){
@@ -1699,8 +1707,7 @@
 ** the result.
 */
 static char *firstToken(char *zIn, char **pzTail){
-  int i, n, ttype;
-  i = 0;
+  int n, ttype;
   while(1){
     n = getToken(zIn, &ttype);
     if( ttype==TOKEN_SPACE ){
@@ -1743,6 +1750,7 @@
 ** and use by fulltextConnect and fulltextCreate.
 */
 typedef struct TableSpec {
+  const char *zDb;         /* Logical database name */
   const char *zName;       /* Name of the full-text index */
   int nColumn;             /* Number of columns to be indexed */
   char **azColumn;         /* Original names of columns to be indexed */
@@ -1753,7 +1761,7 @@
 /*
 ** Reclaim all of the memory used by a TableSpec
 */
-void clearTableSpec(TableSpec *p) {
+static void clearTableSpec(TableSpec *p) {
   free(p->azColumn);
   free(p->azContentColumn);
   free(p->azTokenizer);
@@ -1767,8 +1775,9 @@
  * We return parsed information in a TableSpec structure.
  * 
  */
-int parseSpec(TableSpec *pSpec, int argc, const char *const*argv, char**pzErr){
-  int i, j, n;
+static int parseSpec(TableSpec *pSpec, int argc, const char *const*argv,
+                     char**pzErr){
+  int i, n;
   char *z, *zDummy;
   char **azArg;
   const char *zTokenizer = 0;    /* argv[] entry describing the tokenizer */
@@ -1804,11 +1813,12 @@
   /* Identify the column names and the tokenizer and delimiter arguments
   ** in the argv[][] array.
   */
+  pSpec->zDb = azArg[1];
   pSpec->zName = azArg[2];
   pSpec->nColumn = 0;
   pSpec->azColumn = azArg;
   zTokenizer = "tokenize simple";
-  for(i=3, j=0; i<argc; ++i){
+  for(i=3; i<argc; ++i){
     if( startsWith(azArg[i],"tokenize") ){
       zTokenizer = azArg[i];
     }else{
@@ -1904,6 +1914,7 @@
   memset(v, 0, sizeof(*v));
   /* sqlite will initialize v->base */
   v->db = db;
+  v->zDb = spec->zDb;       /* Freed when azColumn is freed */
   v->zName = spec->zName;   /* Freed when azColumn is freed */
   v->nColumn = spec->nColumn;
   v->azContentColumn = spec->azContentColumn;
@@ -2020,11 +2031,11 @@
   append(&schema, "CREATE TABLE %_content(");
   appendList(&schema, spec.nColumn, spec.azContentColumn);
   append(&schema, ")");
-  rc = sql_exec(db, spec.zName, schema.s);
+  rc = sql_exec(db, spec.zDb, spec.zName, schema.s);
   free(schema.s);
   if( rc!=SQLITE_OK ) goto out;
 
-  rc = sql_exec(db, spec.zName,
+  rc = sql_exec(db, spec.zDb, spec.zName,
     "create table %_term(term text, segment integer, doclist blob, "
                         "primary key(term, segment));");
   if( rc!=SQLITE_OK ) goto out;
@@ -2039,6 +2050,7 @@
 /* Decide how to handle an SQL query. */
 static int fulltextBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
   int i;
+  TRACE(("FTS1 BestIndex\n"));
 
   for(i=0; i<pInfo->nConstraint; ++i){
     const struct sqlite3_index_constraint *pConstraint;
@@ -2047,10 +2059,12 @@
       if( pConstraint->iColumn==-1 &&
           pConstraint->op==SQLITE_INDEX_CONSTRAINT_EQ ){
         pInfo->idxNum = QUERY_ROWID;      /* lookup by rowid */
+        TRACE(("FTS1 QUERY_ROWID\n"));
       } else if( pConstraint->iColumn>=0 &&
                  pConstraint->op==SQLITE_INDEX_CONSTRAINT_MATCH ){
         /* full-text search */
         pInfo->idxNum = QUERY_FULLTEXT + pConstraint->iColumn;
+        TRACE(("FTS1 QUERY_FULLTEXT %d\n", pConstraint->iColumn));
       } else continue;
 
       pInfo->aConstraintUsage[i].argvIndex = 1;
@@ -2065,7 +2079,6 @@
     }
   }
   pInfo->idxNum = QUERY_GENERIC;
-  TRACE(("FTS1 BestIndex\n"));
   return SQLITE_OK;
 }
 
@@ -2080,8 +2093,10 @@
   int rc;
 
   TRACE(("FTS1 Destroy %p\n", pVTab));
-  rc = sql_exec(v->db, v->zName,
-                    "drop table %_content; drop table %_term");
+  rc = sql_exec(v->db, v->zDb, v->zName,
+                "drop table if exists %_content;"
+                "drop table if exists %_term;"
+                );
   if( rc!=SQLITE_OK ) return rc;
 
   fulltext_vtab_destroy((fulltext_vtab *)pVTab);
@@ -2815,6 +2830,11 @@
 ** number idxNum-QUERY_FULLTEXT, 0 indexed.  argv[0] is the right-hand
 ** side of the MATCH operator.
 */
+/* TODO(shess) Upgrade the cursor initialization and destruction to
+** account for fulltextFilter() being called multiple times on the
+** same cursor.  The current solution is very fragile.  Apply fix to
+** fts2 as appropriate.
+*/
 static int fulltextFilter(
   sqlite3_vtab_cursor *pCursor,     /* The cursor used for this query */
   int idxNum, const char *idxStr,   /* Which indexing scheme to use */
@@ -2829,9 +2849,10 @@
 
   zSql = sqlite3_mprintf("select rowid, * from %%_content %s",
                           idxNum==QUERY_GENERIC ? "" : "where rowid=?");
-  rc = sql_prepare(v->db, v->zName, &c->pStmt, zSql);
+  sqlite3_finalize(c->pStmt);
+  rc = sql_prepare(v->db, v->zDb, v->zName, &c->pStmt, zSql);
   sqlite3_free(zSql);
-  if( rc!=SQLITE_OK ) goto out;
+  if( rc!=SQLITE_OK ) return rc;
 
   c->iCursorType = idxNum;
   switch( idxNum ){
@@ -2840,7 +2861,7 @@
 
     case QUERY_ROWID:
       rc = sqlite3_bind_int64(c->pStmt, 1, sqlite3_value_int64(argv[0]));
-      if( rc!=SQLITE_OK ) goto out;
+      if( rc!=SQLITE_OK ) return rc;
       break;
 
     default:   /* full-text search */
@@ -2851,16 +2872,14 @@
       assert( argc==1 );
       queryClear(&c->q);
       rc = fulltextQuery(v, idxNum-QUERY_FULLTEXT, zQuery, -1, &pResult, &c->q);
-      if( rc!=SQLITE_OK ) goto out;
+      if( rc!=SQLITE_OK ) return rc;
+      if( c->result.pDoclist!=NULL ) docListDelete(c->result.pDoclist);
       readerInit(&c->result, pResult);
       break;
     }
   }
 
-  rc = fulltextNext(pCursor);
-
-out:
-  return rc;
+  return fulltextNext(pCursor);
 }
 
 /* This is the xEof method of the virtual table.  The SQLite core
@@ -3081,11 +3100,11 @@
   int rc = deleteTerms(v, pTerms, iRow);
   if( rc!=SQLITE_OK ) return rc;
 
-  /* Now add positions for terms which appear in the updated row. */
-  rc = insertTerms(v, pTerms, iRow, pValues);
+  rc = content_update(v, pValues, iRow);  /* execute an SQL UPDATE */
   if( rc!=SQLITE_OK ) return rc;
 
-  return content_update(v, pValues, iRow);  /* execute an SQL UPDATE */
+  /* Now add positions for terms which appear in the updated row. */
+  return insertTerms(v, pTerms, iRow, pValues);
 }
 
 /* This function implements the xUpdate callback; it's the top-level entry

Modified: freeswitch/trunk/libs/sqlite/ext/fts1/fts1_porter.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/ext/fts1/fts1_porter.c	(original)
+++ freeswitch/trunk/libs/sqlite/ext/fts1/fts1_porter.c	Thu Feb 22 17:09:42 2007
@@ -70,9 +70,6 @@
   sqlite3_tokenizer **ppTokenizer
 ){
   porter_tokenizer *t;
-  int i;
-
-for(i=0; i<argc; i++) printf("argv[%d] = %s\n", i, argv[i]);
   t = (porter_tokenizer *) calloc(sizeof(porter_tokenizer), 1);
   *ppTokenizer = &t->base;
   return SQLITE_OK;
@@ -563,7 +560,7 @@
 ** part of a token.  In other words, delimiters all must have
 ** values of 0x7f or lower.
 */
-const char isIdChar[] = {
+static const char isIdChar[] = {
 /* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,  /* 3x */
     0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,  /* 4x */

Added: freeswitch/trunk/libs/sqlite/ext/fts2/README.txt
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts2/README.txt	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,4 @@
+This folder contains source code to the second full-text search
+extension for SQLite.  While the API is the same, this version uses a
+substantially different storage schema from fts1, so tables will need
+to be rebuilt.

Added: freeswitch/trunk/libs/sqlite/ext/fts2/fts2.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts2/fts2.c	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,5282 @@
+/* The author disclaims copyright to this source code.
+ *
+ * This is an SQLite module implementing full-text search.
+ */
+
+/*
+** The code in this file is only compiled if:
+**
+**     * The FTS2 module is being built as an extension
+**       (in which case SQLITE_CORE is not defined), or
+**
+**     * The FTS2 module is being built into the core of
+**       SQLite (in which case SQLITE_ENABLE_FTS2 is defined).
+*/
+
+/* TODO(shess) Consider exporting this comment to an HTML file or the
+** wiki.
+*/
+/* The full-text index is stored in a series of b+tree (-like)
+** structures called segments which map terms to doclists.  The
+** structures are like b+trees in layout, but are constructed from the
+** bottom up in optimal fashion and are not updatable.  Since trees
+** are built from the bottom up, things will be described from the
+** bottom up.
+**
+**
+**** Varints ****
+** The basic unit of encoding is a variable-length integer called a
+** varint.  We encode variable-length integers in little-endian order
+** using seven bits per byte as follows:
+**
+** KEY:
+**         A = 0xxxxxxx    7 bits of data and one flag bit
+**         B = 1xxxxxxx    7 bits of data and one flag bit
+**
+**  7 bits - A
+** 14 bits - BA
+** 21 bits - BBA
+** and so on.
+**
+** This is identical to how sqlite encodes varints (see util.c).
+**
+**
+**** Document lists ****
+** A doclist (document list) holds a docid-sorted list of hits for a
+** given term.  Doclists hold docids, and can optionally associate
+** token positions and offsets with docids.
+**
+** A DL_POSITIONS_OFFSETS doclist is stored like this:
+**
+** array {
+**   varint docid;
+**   array {                (position list for column 0)
+**     varint position;     (delta from previous position plus POS_BASE)
+**     varint startOffset;  (delta from previous startOffset)
+**     varint endOffset;    (delta from startOffset)
+**   }
+**   array {
+**     varint POS_COLUMN;   (marks start of position list for new column)
+**     varint column;       (index of new column)
+**     array {
+**       varint position;   (delta from previous position plus POS_BASE)
+**       varint startOffset;(delta from previous startOffset)
+**       varint endOffset;  (delta from startOffset)
+**     }
+**   }
+**   varint POS_END;        (marks end of positions for this document)
+** }
+**
+** Here, array { X } means zero or more occurrences of X, adjacent in
+** memory.  A "position" is an index of a token in the token stream
+** generated by the tokenizer, while an "offset" is a byte offset,
+** both based at 0.  Note that POS_END and POS_COLUMN occur in the
+** same logical place as the position element, and act as sentinels
+** ending a position list array.
+**
+** A DL_POSITIONS doclist omits the startOffset and endOffset
+** information.  A DL_DOCIDS doclist omits both the position and
+** offset information, becoming an array of varint-encoded docids.
+**
+** On-disk data is stored as type DL_DEFAULT, so we don't serialize
+** the type.  Due to how deletion is implemented in the segmentation
+** system, on-disk doclists MUST store at least positions.
+**
+**
+**** Segment leaf nodes ****
+** Segment leaf nodes store terms and doclists, ordered by term.  Leaf
+** nodes are written using LeafWriter, and read using LeafReader (to
+** iterate through a single leaf node's data) and LeavesReader (to
+** iterate through a segment's entire leaf layer).  Leaf nodes have
+** the format:
+**
+** varint iHeight;             (height from leaf level, always 0)
+** varint nTerm;               (length of first term)
+** char pTerm[nTerm];          (content of first term)
+** varint nDoclist;            (length of term's associated doclist)
+** char pDoclist[nDoclist];    (content of doclist)
+** array {
+**                             (further terms are delta-encoded)
+**   varint nPrefix;           (length of prefix shared with previous term)
+**   varint nSuffix;           (length of unshared suffix)
+**   char pTermSuffix[nSuffix];(unshared suffix of next term)
+**   varint nDoclist;          (length of term's associated doclist)
+**   char pDoclist[nDoclist];  (content of doclist)
+** }
+**
+** Here, array { X } means zero or more occurrences of X, adjacent in
+** memory.
+**
+** Leaf nodes are broken into blocks which are stored contiguously in
+** the %_segments table in sorted order.  This means that when the end
+** of a node is reached, the next term is in the node with the next
+** greater node id.
+**
+** New data is spilled to a new leaf node when the current node
+** exceeds LEAF_MAX bytes (default 2048).  New data which itself is
+** larger than STANDALONE_MIN (default 1024) is placed in a standalone
+** node (a leaf node with a single term and doclist).  The goal of
+** these settings is to pack together groups of small doclists while
+** making it efficient to directly access large doclists.  The
+** assumption is that large doclists represent terms which are more
+** likely to be query targets.
+**
+** TODO(shess) It may be useful for blocking decisions to be more
+** dynamic.  For instance, it may make more sense to have a 2.5k leaf
+** node rather than splitting into 2k and .5k nodes.  My intuition is
+** that this might extend through 2x or 4x the pagesize.
+**
+**
+**** Segment interior nodes ****
+** Segment interior nodes store blockids for subtree nodes and terms
+** to describe what data is stored by each subtree.  Interior
+** nodes are written using InteriorWriter, and read using
+** InteriorReader.  InteriorWriters are created as needed when
+** SegmentWriter creates new leaf nodes, or when an interior node
+** itself grows too big and must be split.  The format of interior
+** nodes:
+**
+** varint iHeight;           (height from leaf level, always >0)
+** varint iBlockid;          (block id of node's leftmost subtree)
+** optional {
+**   varint nTerm;           (length of first term)
+**   char pTerm[nTerm];      (content of first term)
+**   array {
+**                                (further terms are delta-encoded)
+**     varint nPrefix;            (length of shared prefix with previous term)
+**     varint nSuffix;            (length of unshared suffix)
+**     char pTermSuffix[nSuffix]; (unshared suffix of next term)
+**   }
+** }
+**
+** Here, optional { X } means an optional element, while array { X }
+** means zero or more occurrences of X, adjacent in memory.
+**
+** An interior node encodes n terms separating n+1 subtrees.  The
+** subtree blocks are contiguous, so only the first subtree's blockid
+** is encoded.  The subtree at iBlockid will contain all terms less
+** than the first term encoded (or all terms if no term is encoded).
+** Otherwise, for terms greater than or equal to pTerm[i] but less
+** than pTerm[i+1], the subtree for that term will be rooted at
+** iBlockid+i.  Interior nodes only store enough term data to
+** distinguish adjacent children (if the rightmost term of the left
+** child is "something", and the leftmost term of the right child is
+** "wicked", only "w" is stored).
+**
+** New data is spilled to a new interior node at the same height when
+** the current node exceeds INTERIOR_MAX bytes (default 2048).
+** INTERIOR_MIN_TERMS (default 7) keeps large terms from monopolizing
+** interior nodes and making the tree too skinny.  The interior nodes
+** at a given height are naturally tracked by interior nodes at
+** height+1, and so on.
+**
+**
+**** Segment directory ****
+** The segment directory in table %_segdir stores meta-information for
+** merging and deleting segments, and also the root node of the
+** segment's tree.
+**
+** The root node is the top node of the segment's tree after encoding
+** the entire segment, restricted to ROOT_MAX bytes (default 1024).
+** This could be either a leaf node or an interior node.  If the top
+** node requires more than ROOT_MAX bytes, it is flushed to %_segments
+** and a new root interior node is generated (which should always fit
+** within ROOT_MAX because it only needs space for 2 varints, the
+** height and the blockid of the previous root).
+**
+** The meta-information in the segment directory is:
+**   level               - segment level (see below)
+**   idx                 - index within level
+**                       - (level,idx uniquely identify a segment)
+**   start_block         - first leaf node
+**   leaves_end_block    - last leaf node
+**   end_block           - last block (including interior nodes)
+**   root                - contents of root node
+**
+** If the root node is a leaf node, then start_block,
+** leaves_end_block, and end_block are all 0.
+**
+**
+**** Segment merging ****
+** To amortize update costs, segments are grouped into levels and
+** merged in batches.  Each increase in level represents exponentially
+** more documents.
+**
+** New documents (actually, document updates) are tokenized and
+** written individually (using LeafWriter) to a level 0 segment, with
+** incrementing idx.  When idx reaches MERGE_COUNT (default 16), all
+** level 0 segments are merged into a single level 1 segment.  Level 1
+** is populated like level 0, and eventually MERGE_COUNT level 1
+** segments are merged to a single level 2 segment (representing
+** MERGE_COUNT^2 updates), and so on.
+**
+** A segment merge traverses all segments at a given level in
+** parallel, performing a straightforward sorted merge.  Since segment
+** leaf nodes are written into the %_segments table in order, this
+** merge traverses the underlying sqlite disk structures efficiently.
+** After the merge, all segment blocks from the merged level are
+** deleted.
+**
+** MERGE_COUNT controls how often we merge segments.  16 seems to be
+** somewhat of a sweet spot for insertion performance.  32 and 64 show
+** very similar performance numbers to 16 on insertion, though they're
+** a tiny bit slower (perhaps due to more overhead in merge-time
+** sorting).  8 is about 20% slower than 16, 4 about 50% slower than
+** 16, 2 about 66% slower than 16.
+**
+** At query time, high MERGE_COUNT increases the number of segments
+** which need to be scanned and merged.  For instance, with 100k docs
+** inserted:
+**
+**    MERGE_COUNT   segments
+**       16           25
+**        8           12
+**        4           10
+**        2            6
+**
+** This appears to have only a moderate impact on queries for very
+** frequent terms (which are somewhat dominated by segment merge
+** costs), and infrequent and non-existent terms still seem to be fast
+** even with many segments.
+**
+** TODO(shess) That said, it would be nice to have a better query-side
+** argument for MERGE_COUNT of 16.  Also, it's possible/likely that
+** optimizations to things like doclist merging will swing the sweet
+** spot around.
+**
+**
+**
+**** Handling of deletions and updates ****
+** Since we're using a segmented structure, with no docid-oriented
+** index into the term index, we clearly cannot simply update the term
+** index when a document is deleted or updated.  For deletions, we
+** write an empty doclist (varint(docid) varint(POS_END)), for updates
+** we simply write the new doclist.  Segment merges overwrite older
+** data for a particular docid with newer data, so deletes or updates
+** will eventually overtake the earlier data and knock it out.  The
+** query logic likewise merges doclists so that newer data knocks out
+** older data.
+**
+** TODO(shess) Provide a VACUUM type operation to clear out all
+** deletions and duplications.  This would basically be a forced merge
+** into a single segment.
+*/
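
As a concrete illustration of the encoding described above (values chosen
for illustration): a DL_POSITIONS doclist holding docid 5 with positions 3
and 7 in column 0, followed by docid 17 with position 1, is the byte
sequence below.  Docids and positions are delta-encoded, positions are
offset by POS_BASE, each position list ends with POS_END, and every value
here fits in a single-byte varint.

    static const unsigned char exampleDoclist[] = {
      5,         /* docid 5 (delta from 0)                      */
      3+2, 4+2,  /* positions 3 and 7: deltas 3,4 plus POS_BASE */
      0,         /* POS_END                                     */
      12,        /* docid delta 17-5                            */
      1+2,       /* position 1: delta 1 plus POS_BASE           */
      0          /* POS_END                                     */
    };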
+
+#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS2)
+
+#if defined(SQLITE_ENABLE_FTS2) && !defined(SQLITE_CORE)
+# define SQLITE_CORE 1
+#endif
+
+#include <assert.h>
+#if !defined(__APPLE__)
+#include <malloc.h>
+#endif
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+
+#include "fts2.h"
+#include "fts2_hash.h"
+#include "fts2_tokenizer.h"
+#include "sqlite3.h"
+#include "sqlite3ext.h"
+SQLITE_EXTENSION_INIT1
+
+
+/* TODO(shess) MAN, this thing needs some refactoring.  At minimum, it
+** would be nice to order the file better, perhaps something along the
+** lines of:
+**
+**  - utility functions
+**  - table setup functions
+**  - table update functions
+**  - table query functions
+**
+** Put the query functions last because they're likely to reference
+** typedefs or functions from the table update section.
+*/
+
+#if 0
+# define TRACE(A)  printf A; fflush(stdout)
+#else
+# define TRACE(A)
+#endif
+
+typedef enum DocListType {
+  DL_DOCIDS,              /* docids only */
+  DL_POSITIONS,           /* docids + positions */
+  DL_POSITIONS_OFFSETS    /* docids + positions + offsets */
+} DocListType;
+
+/*
+** By default, only positions and not offsets are stored in the doclists.
+** To change this so that offsets are stored too, compile with
+**
+**          -DDL_DEFAULT=DL_POSITIONS_OFFSETS
+**
+** If DL_DEFAULT is set to DL_DOCIDS, your table can only be inserted
+** into (no deletes or updates).
+*/
+#ifndef DL_DEFAULT
+# define DL_DEFAULT DL_POSITIONS
+#endif
+
+enum {
+  POS_END = 0,        /* end of this position list */
+  POS_COLUMN,         /* followed by new column number */
+  POS_BASE
+};
+
+/* MERGE_COUNT controls how often we merge segments (see comment at
+** top of file).
+*/
+#define MERGE_COUNT 16
+
+/* utility functions */
+
+/* CLEAR() and SCRAMBLE() abstract memset() on a pointer to a single
+** record to prevent errors of the form:
+**
+** my_function(SomeType *b){
+**   memset(b, '\0', sizeof(b));  // sizeof(b)!=sizeof(*b)
+** }
+*/
+/* TODO(shess) Obvious candidates for a header file. */
+#define CLEAR(b) memset(b, '\0', sizeof(*(b)))
+
+#ifndef NDEBUG
+#  define SCRAMBLE(b) memset(b, 0x55, sizeof(*(b)))
+#else
+#  define SCRAMBLE(b)
+#endif
+
+/* We may need up to VARINT_MAX bytes to store an encoded 64-bit integer. */
+#define VARINT_MAX 10
+
+/* Write a 64-bit variable-length integer to memory starting at p[0].
+ * The length of data written will be between 1 and VARINT_MAX bytes.
+ * The number of bytes written is returned. */
+static int putVarint(char *p, sqlite_int64 v){
+  unsigned char *q = (unsigned char *) p;
+  sqlite_uint64 vu = v;
+  do{
+    *q++ = (unsigned char) ((vu & 0x7f) | 0x80);
+    vu >>= 7;
+  }while( vu!=0 );
+  q[-1] &= 0x7f;  /* turn off high bit in final byte */
+  assert( q - (unsigned char *)p <= VARINT_MAX );
+  return (int) (q - (unsigned char *)p);
+}
+
+/* Read a 64-bit variable-length integer from memory starting at p[0].
+ * Return the number of bytes read, or 0 on error.
+ * The value is stored in *v. */
+static int getVarint(const char *p, sqlite_int64 *v){
+  const unsigned char *q = (const unsigned char *) p;
+  sqlite_uint64 x = 0, y = 1;
+  while( (*q & 0x80) == 0x80 ){
+    x += y * (*q++ & 0x7f);
+    y <<= 7;
+    if( q - (unsigned char *)p >= VARINT_MAX ){  /* bad data */
+      assert( 0 );
+      return 0;
+    }
+  }
+  x += y * (*q++);
+  *v = (sqlite_int64) x;
+  return (int) (q - (unsigned char *)p);
+}
+
+static int getVarint32(const char *p, int *pi){
+ sqlite_int64 i;
+ int ret = getVarint(p, &i);
+ *pi = (int) i;
+ assert( *pi==i );
+ return ret;
+}
+
+/*******************************************************************/
+/* DataBuffer is used to collect data into a buffer in piecemeal
+** fashion.  It implements the usual distinction between amount of
+** data currently stored (nData) and buffer capacity (nCapacity).
+**
+** dataBufferInit - create a buffer with given initial capacity.
+** dataBufferReset - forget buffer's data, retaining capacity.
+** dataBufferDestroy - free buffer's data.
+** dataBufferExpand - expand capacity without adding data.
+** dataBufferAppend - append data.
+** dataBufferAppend2 - append two pieces of data at once.
+** dataBufferReplace - replace buffer's data.
+*/
+typedef struct DataBuffer {
+  char *pData;          /* Pointer to malloc'ed buffer. */
+  int nCapacity;        /* Size of pData buffer. */
+  int nData;            /* End of data loaded into pData. */
+} DataBuffer;
+
+static void dataBufferInit(DataBuffer *pBuffer, int nCapacity){
+  assert( nCapacity>=0 );
+  pBuffer->nData = 0;
+  pBuffer->nCapacity = nCapacity;
+  pBuffer->pData = nCapacity==0 ? NULL : malloc(nCapacity);
+}
+static void dataBufferReset(DataBuffer *pBuffer){
+  pBuffer->nData = 0;
+}
+static void dataBufferDestroy(DataBuffer *pBuffer){
+  if( pBuffer->pData!=NULL ) free(pBuffer->pData);
+  SCRAMBLE(pBuffer);
+}
+static void dataBufferExpand(DataBuffer *pBuffer, int nAddCapacity){
+  assert( nAddCapacity>0 );
+  /* TODO(shess) Consider expanding more aggressively.  Note that the
+  ** underlying malloc implementation may take care of such things for
+  ** us already.
+  */
+  if( pBuffer->nData+nAddCapacity>pBuffer->nCapacity ){
+    pBuffer->nCapacity = pBuffer->nData+nAddCapacity;
+    pBuffer->pData = realloc(pBuffer->pData, pBuffer->nCapacity);
+  }
+}
+static void dataBufferAppend(DataBuffer *pBuffer,
+                             const char *pSource, int nSource){
+  assert( nSource>0 && pSource!=NULL );
+  dataBufferExpand(pBuffer, nSource);
+  memcpy(pBuffer->pData+pBuffer->nData, pSource, nSource);
+  pBuffer->nData += nSource;
+}
+static void dataBufferAppend2(DataBuffer *pBuffer,
+                              const char *pSource1, int nSource1,
+                              const char *pSource2, int nSource2){
+  assert( nSource1>0 && pSource1!=NULL );
+  assert( nSource2>0 && pSource2!=NULL );
+  dataBufferExpand(pBuffer, nSource1+nSource2);
+  memcpy(pBuffer->pData+pBuffer->nData, pSource1, nSource1);
+  memcpy(pBuffer->pData+pBuffer->nData+nSource1, pSource2, nSource2);
+  pBuffer->nData += nSource1+nSource2;
+}
+static void dataBufferReplace(DataBuffer *pBuffer,
+                              const char *pSource, int nSource){
+  dataBufferReset(pBuffer);
+  dataBufferAppend(pBuffer, pSource, nSource);
+}
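
A minimal usage sketch of the DataBuffer helpers above (the function name is
hypothetical):

    static void dataBufferExample(void){
      DataBuffer buf;
      dataBufferInit(&buf, 0);                   /* start empty               */
      dataBufferAppend(&buf, "abc", 3);          /* buf now holds "abc"       */
      dataBufferAppend2(&buf, "de", 2, "f", 1);  /* append two pieces at once */
      assert( buf.nData==6 );
      dataBufferDestroy(&buf);                   /* release the storage       */
    }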
+
+/* StringBuffer is a null-terminated version of DataBuffer. */
+typedef struct StringBuffer {
+  DataBuffer b;            /* Includes null terminator. */
+} StringBuffer;
+
+static void initStringBuffer(StringBuffer *sb){
+  dataBufferInit(&sb->b, 100);
+  dataBufferReplace(&sb->b, "", 1);
+}
+static int stringBufferLength(StringBuffer *sb){
+  return sb->b.nData-1;
+}
+static char *stringBufferData(StringBuffer *sb){
+  return sb->b.pData;
+}
+static void stringBufferDestroy(StringBuffer *sb){
+  dataBufferDestroy(&sb->b);
+}
+
+static void nappend(StringBuffer *sb, const char *zFrom, int nFrom){
+  assert( sb->b.nData>0 );
+  if( nFrom>0 ){
+    sb->b.nData--;
+    dataBufferAppend2(&sb->b, zFrom, nFrom, "", 1);
+  }
+}
+static void append(StringBuffer *sb, const char *zFrom){
+  nappend(sb, zFrom, strlen(zFrom));
+}
+
+/* Append a list of strings separated by commas. */
+static void appendList(StringBuffer *sb, int nString, char **azString){
+  int i;
+  for(i=0; i<nString; ++i){
+    if( i>0 ) append(sb, ", ");
+    append(sb, azString[i]);
+  }
+}
+
+static int endsInWhiteSpace(StringBuffer *p){
+  return stringBufferLength(p)>0 &&
+    isspace(stringBufferData(p)[stringBufferLength(p)-1]);
+}
+
+/* If the StringBuffer ends in something other than white space, add a
+** single space character to the end.
+*/
+static void appendWhiteSpace(StringBuffer *p){
+  if( stringBufferLength(p)==0 ) return;
+  if( !endsInWhiteSpace(p) ) append(p, " ");
+}
+
+/* Remove white space from the end of the StringBuffer */
+static void trimWhiteSpace(StringBuffer *p){
+  while( endsInWhiteSpace(p) ){
+    p->b.pData[--p->b.nData-1] = '\0';
+  }
+}
+
+/*******************************************************************/
+/* DLReader is used to read document elements from a doclist.  The
+** current docid is cached, so dlrDocid() is fast.  DLReader does not
+** own the doclist buffer.
+**
+** dlrAtEnd - true if there's no more data to read.
+** dlrDocid - docid of current document.
+** dlrDocData - doclist data for current document (including docid).
+** dlrDocDataBytes - length of same.
+** dlrAllDataBytes - length of all remaining data.
+** dlrPosData - position data for current document.
+** dlrPosDataLen - length of pos data for current document (incl POS_END).
+** dlrStep - step to current document.
+** dlrInit - initialize for a doclist of given type against given data.
+** dlrDestroy - clean up.
+**
+** Expected usage is something like:
+**
+**   DLReader reader;
+**   dlrInit(&reader, pData, nData);
+**   while( !dlrAtEnd(&reader) ){
+**     // calls to dlrDocid() and kin.
+**     dlrStep(&reader);
+**   }
+**   dlrDestroy(&reader);
+*/
+typedef struct DLReader {
+  DocListType iType;
+  const char *pData;
+  int nData;
+
+  sqlite_int64 iDocid;
+  int nElement;
+} DLReader;
+
+static int dlrAtEnd(DLReader *pReader){
+  assert( pReader->nData>=0 );
+  return pReader->nData==0;
+}
+static sqlite_int64 dlrDocid(DLReader *pReader){
+  assert( !dlrAtEnd(pReader) );
+  return pReader->iDocid;
+}
+static const char *dlrDocData(DLReader *pReader){
+  assert( !dlrAtEnd(pReader) );
+  return pReader->pData;
+}
+static int dlrDocDataBytes(DLReader *pReader){
+  assert( !dlrAtEnd(pReader) );
+  return pReader->nElement;
+}
+static int dlrAllDataBytes(DLReader *pReader){
+  assert( !dlrAtEnd(pReader) );
+  return pReader->nData;
+}
+/* TODO(shess) Consider adding a field to track iDocid varint length
+** to make these two functions faster.  This might matter (a tiny bit)
+** for queries.
+*/
+static const char *dlrPosData(DLReader *pReader){
+  sqlite_int64 iDummy;
+  int n = getVarint(pReader->pData, &iDummy);
+  assert( !dlrAtEnd(pReader) );
+  return pReader->pData+n;
+}
+static int dlrPosDataLen(DLReader *pReader){
+  sqlite_int64 iDummy;
+  int n = getVarint(pReader->pData, &iDummy);
+  assert( !dlrAtEnd(pReader) );
+  return pReader->nElement-n;
+}
+static void dlrStep(DLReader *pReader){
+  assert( !dlrAtEnd(pReader) );
+
+  /* Skip past current doclist element. */
+  assert( pReader->nElement<=pReader->nData );
+  pReader->pData += pReader->nElement;
+  pReader->nData -= pReader->nElement;
+
+  /* If there is more data, read the next doclist element. */
+  if( pReader->nData!=0 ){
+    sqlite_int64 iDocidDelta;
+    int iDummy, n = getVarint(pReader->pData, &iDocidDelta);
+    pReader->iDocid += iDocidDelta;
+    if( pReader->iType>=DL_POSITIONS ){
+      assert( n<pReader->nData );
+      while( 1 ){
+        n += getVarint32(pReader->pData+n, &iDummy);
+        assert( n<=pReader->nData );
+        if( iDummy==POS_END ) break;
+        if( iDummy==POS_COLUMN ){
+          n += getVarint32(pReader->pData+n, &iDummy);
+          assert( n<pReader->nData );
+        }else if( pReader->iType==DL_POSITIONS_OFFSETS ){
+          n += getVarint32(pReader->pData+n, &iDummy);
+          n += getVarint32(pReader->pData+n, &iDummy);
+          assert( n<pReader->nData );
+        }
+      }
+    }
+    pReader->nElement = n;
+    assert( pReader->nElement<=pReader->nData );
+  }
+}
+static void dlrInit(DLReader *pReader, DocListType iType,
+                    const char *pData, int nData){
+  assert( pData!=NULL && nData!=0 );
+  pReader->iType = iType;
+  pReader->pData = pData;
+  pReader->nData = nData;
+  pReader->nElement = 0;
+  pReader->iDocid = 0;
+
+  /* Load the first element's data.  There must be a first element. */
+  dlrStep(pReader);
+}
+static void dlrDestroy(DLReader *pReader){
+  SCRAMBLE(pReader);
+}
+
+#ifndef NDEBUG
+/* Verify that the doclist can be validly decoded.  Also returns the
+** last docid found because it's convenient in other assertions for
+** DLWriter.
+*/
+static void docListValidate(DocListType iType, const char *pData, int nData,
+                            sqlite_int64 *pLastDocid){
+  sqlite_int64 iPrevDocid = 0;
+  assert( nData>0 );
+  assert( pData!=0 );
+  assert( pData+nData>pData );
+  while( nData!=0 ){
+    sqlite_int64 iDocidDelta;
+    int n = getVarint(pData, &iDocidDelta);
+    iPrevDocid += iDocidDelta;
+    if( iType>DL_DOCIDS ){
+      int iDummy;
+      while( 1 ){
+        n += getVarint32(pData+n, &iDummy);
+        if( iDummy==POS_END ) break;
+        if( iDummy==POS_COLUMN ){
+          n += getVarint32(pData+n, &iDummy);
+        }else if( iType>DL_POSITIONS ){
+          n += getVarint32(pData+n, &iDummy);
+          n += getVarint32(pData+n, &iDummy);
+        }
+        assert( n<=nData );
+      }
+    }
+    assert( n<=nData );
+    pData += n;
+    nData -= n;
+  }
+  if( pLastDocid ) *pLastDocid = iPrevDocid;
+}
+#define ASSERT_VALID_DOCLIST(i, p, n, o) docListValidate(i, p, n, o)
+#else
+#define ASSERT_VALID_DOCLIST(i, p, n, o) assert( 1 )
+#endif
+
+/*******************************************************************/
+/* DLWriter is used to write doclist data to a DataBuffer.  DLWriter
+** always appends to the buffer and does not own it.
+**
+** dlwInit - initialize to write a doclist of a given type to a buffer.
+** dlwDestroy - clear the writer's memory.  Does not free buffer.
+** dlwAppend - append raw doclist data to buffer.
+** dlwAdd - construct doclist element and append to buffer.
+*/
+typedef struct DLWriter {
+  DocListType iType;
+  DataBuffer *b;
+  sqlite_int64 iPrevDocid;
+} DLWriter;
+
+static void dlwInit(DLWriter *pWriter, DocListType iType, DataBuffer *b){
+  pWriter->b = b;
+  pWriter->iType = iType;
+  pWriter->iPrevDocid = 0;
+}
+static void dlwDestroy(DLWriter *pWriter){
+  SCRAMBLE(pWriter);
+}
+/* iFirstDocid is the first docid in the doclist in pData.  It is
+** needed because pData may point within a larger doclist, in which
+** case the first item would be delta-encoded.
+**
+** iLastDocid is the final docid in the doclist in pData.  It is
+** needed to create the new iPrevDocid for future delta-encoding.  The
+** code could decode the passed doclist to recreate iLastDocid, but
+** the only current user (docListMerge) already has decoded this
+** information.
+*/
+/* TODO(shess) This has become just a helper for docListMerge.
+** Consider a refactor to make this cleaner.
+*/
+static void dlwAppend(DLWriter *pWriter,
+                      const char *pData, int nData,
+                      sqlite_int64 iFirstDocid, sqlite_int64 iLastDocid){
+  sqlite_int64 iDocid = 0;
+  char c[VARINT_MAX];
+  int nFirstOld, nFirstNew;     /* Old and new varint len of first docid. */
+#ifndef NDEBUG
+  sqlite_int64 iLastDocidDelta;
+#endif
+
+  /* Recode the initial docid as delta from iPrevDocid. */
+  nFirstOld = getVarint(pData, &iDocid);
+  assert( nFirstOld<nData || (nFirstOld==nData && pWriter->iType==DL_DOCIDS) );
+  nFirstNew = putVarint(c, iFirstDocid-pWriter->iPrevDocid);
+
+  /* Verify that the incoming doclist is valid AND that it ends with
+  ** the expected docid.  This is essential because we'll trust this
+  ** docid in future delta-encoding.
+  */
+  ASSERT_VALID_DOCLIST(pWriter->iType, pData, nData, &iLastDocidDelta);
+  assert( iLastDocid==iFirstDocid-iDocid+iLastDocidDelta );
+
+  /* Append recoded initial docid and everything else.  Rest of docids
+  ** should have been delta-encoded from previous initial docid.
+  */
+  if( nFirstOld<nData ){
+    dataBufferAppend2(pWriter->b, c, nFirstNew,
+                      pData+nFirstOld, nData-nFirstOld);
+  }else{
+    dataBufferAppend(pWriter->b, c, nFirstNew);
+  }
+  pWriter->iPrevDocid = iLastDocid;
+}
+static void dlwAdd(DLWriter *pWriter, sqlite_int64 iDocid,
+                   const char *pPosList, int nPosList){
+  char c[VARINT_MAX];
+  int n = putVarint(c, iDocid-pWriter->iPrevDocid);
+
+  assert( pWriter->iPrevDocid<iDocid );
+  assert( pPosList==0 || pWriter->iType>DL_DOCIDS );
+
+  dataBufferAppend(pWriter->b, c, n);
+
+  if( pWriter->iType>DL_DOCIDS ){
+    n = putVarint(c, 0);
+    if( nPosList>0 ){
+      dataBufferAppend2(pWriter->b, pPosList, nPosList, c, n);
+    }else{
+      dataBufferAppend(pWriter->b, c, n);
+    }
+  }
+  pWriter->iPrevDocid = iDocid;
+}
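
A short sketch of DLWriter use for the simplest doclist type (the function
name is hypothetical):

    /* Build a DL_DOCIDS doclist containing docids 5 and 17 into *out. */
    static void dlwExample(DataBuffer *out){
      DLWriter writer;
      dlwInit(&writer, DL_DOCIDS, out);
      dlwAdd(&writer, 5, NULL, 0);   /* written as varint(5)                 */
      dlwAdd(&writer, 17, NULL, 0);  /* delta-encoded, written as varint(12) */
      dlwDestroy(&writer);           /* out keeps the data; writer never owns it */
    }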
+
+/*******************************************************************/
+/* PLReader is used to read data from a document's position list.  As
+** the caller steps through the list, data is cached so that varints
+** only need to be decoded once.
+**
+** plrInit, plrDestroy - create/destroy a reader.
+** plrColumn, plrPosition, plrStartOffset, plrEndOffset - accessors
+** plrAtEnd - at end of stream, only call plrDestroy once true.
+** plrStep - step to the next element.
+*/
+typedef struct PLReader {
+  /* These refer to the next position's data.  nData will reach 0 when
+  ** reading the last position, so plrStep() signals EOF by setting
+  ** pData to NULL.
+  */
+  const char *pData;
+  int nData;
+
+  DocListType iType;
+  int iColumn;         /* the last column read */
+  int iPosition;       /* the last position read */
+  int iStartOffset;    /* the last start offset read */
+  int iEndOffset;      /* the last end offset read */
+} PLReader;
+
+static int plrAtEnd(PLReader *pReader){
+  return pReader->pData==NULL;
+}
+static int plrColumn(PLReader *pReader){
+  assert( !plrAtEnd(pReader) );
+  return pReader->iColumn;
+}
+static int plrPosition(PLReader *pReader){
+  assert( !plrAtEnd(pReader) );
+  return pReader->iPosition;
+}
+static int plrStartOffset(PLReader *pReader){
+  assert( !plrAtEnd(pReader) );
+  return pReader->iStartOffset;
+}
+static int plrEndOffset(PLReader *pReader){
+  assert( !plrAtEnd(pReader) );
+  return pReader->iEndOffset;
+}
+static void plrStep(PLReader *pReader){
+  int i, n;
+
+  assert( !plrAtEnd(pReader) );
+
+  if( pReader->nData==0 ){
+    pReader->pData = NULL;
+    return;
+  }
+
+  n = getVarint32(pReader->pData, &i);
+  if( i==POS_COLUMN ){
+    n += getVarint32(pReader->pData+n, &pReader->iColumn);
+    pReader->iPosition = 0;
+    pReader->iStartOffset = 0;
+    n += getVarint32(pReader->pData+n, &i);
+  }
+  /* Should never see adjacent column changes. */
+  assert( i!=POS_COLUMN );
+
+  if( i==POS_END ){
+    pReader->nData = 0;
+    pReader->pData = NULL;
+    return;
+  }
+
+  pReader->iPosition += i-POS_BASE;
+  if( pReader->iType==DL_POSITIONS_OFFSETS ){
+    n += getVarint32(pReader->pData+n, &i);
+    pReader->iStartOffset += i;
+    n += getVarint32(pReader->pData+n, &i);
+    pReader->iEndOffset = pReader->iStartOffset+i;
+  }
+  assert( n<=pReader->nData );
+  pReader->pData += n;
+  pReader->nData -= n;
+}
+
+static void plrInit(PLReader *pReader, DocListType iType,
+                    const char *pData, int nData){
+  pReader->pData = pData;
+  pReader->nData = nData;
+  pReader->iType = iType;
+  pReader->iColumn = 0;
+  pReader->iPosition = 0;
+  pReader->iStartOffset = 0;
+  pReader->iEndOffset = 0;
+  plrStep(pReader);
+}
+static void plrDestroy(PLReader *pReader){
+  SCRAMBLE(pReader);
+}
+
+/*******************************************************************/
+/* PLWriter is used in constructing a document's position list.  As a
+** convenience, if iType is DL_DOCIDS, PLWriter becomes a no-op.
+**
+** plwInit - init for writing a document's poslist.
+** plwReset - reset the writer for a new document.
+** plwDestroy - clear a writer.
+** plwNew - malloc storage and initialize it.
+** plwDelete - clear and free storage.
+** plwDlwAdd - append the docid and poslist to a doclist writer.
+** plwAdd - append position and offset information.
+*/
+/* TODO(shess) PLWriter is used in two ways.  fulltextUpdate() uses it
+** in construction of a new doclist.  docListTrim() and mergePosList()
+** use it when trimming.  In the former case, it wants to own the
+** DataBuffer, in the latter it's possible it could encode into a
+** pre-existing DataBuffer.
+*/
+typedef struct PLWriter {
+  DataBuffer b;
+
+  sqlite_int64 iDocid;
+  DocListType iType;
+  int iColumn;    /* the last column written */
+  int iPos;       /* the last position written */
+  int iOffset;    /* the last start offset written */
+} PLWriter;
+
+static void plwDlwAdd(PLWriter *pWriter, DLWriter *dlWriter){
+  dlwAdd(dlWriter, pWriter->iDocid, pWriter->b.pData, pWriter->b.nData);
+}
+static void plwAdd(PLWriter *pWriter, int iColumn, int iPos,
+                   int iStartOffset, int iEndOffset){
+  /* Worst-case space for POS_COLUMN, iColumn, iPosDelta,
+  ** iStartOffsetDelta, and iEndOffsetDelta.
+  */
+  char c[5*VARINT_MAX];
+  int n = 0;
+
+  if( pWriter->iType==DL_DOCIDS ) return;
+
+  if( iColumn!=pWriter->iColumn ){
+    n += putVarint(c+n, POS_COLUMN);
+    n += putVarint(c+n, iColumn);
+    pWriter->iColumn = iColumn;
+    pWriter->iPos = 0;
+    pWriter->iOffset = 0;
+  }
+  assert( iPos>=pWriter->iPos );
+  n += putVarint(c+n, POS_BASE+(iPos-pWriter->iPos));
+  pWriter->iPos = iPos;
+  if( pWriter->iType==DL_POSITIONS_OFFSETS ){
+    assert( iStartOffset>=pWriter->iOffset );
+    n += putVarint(c+n, iStartOffset-pWriter->iOffset);
+    pWriter->iOffset = iStartOffset;
+    assert( iEndOffset>=iStartOffset );
+    n += putVarint(c+n, iEndOffset-iStartOffset);
+  }
+  dataBufferAppend(&pWriter->b, c, n);
+}
+static void plwReset(PLWriter *pWriter,
+                     sqlite_int64 iDocid, DocListType iType){
+  dataBufferReset(&pWriter->b);
+  pWriter->iDocid = iDocid;
+  pWriter->iType = iType;
+  pWriter->iColumn = 0;
+  pWriter->iPos = 0;
+  pWriter->iOffset = 0;
+}
+static void plwInit(PLWriter *pWriter, sqlite_int64 iDocid, DocListType iType){
+  dataBufferInit(&pWriter->b, 0);
+  plwReset(pWriter, iDocid, iType);
+}
+static PLWriter *plwNew(sqlite_int64 iDocid, DocListType iType){
+  PLWriter *pWriter = malloc(sizeof(PLWriter));
+  plwInit(pWriter, iDocid, iType);
+  return pWriter;
+}
+static void plwDestroy(PLWriter *pWriter){
+  dataBufferDestroy(&pWriter->b);
+  SCRAMBLE(pWriter);
+}
+static void plwDelete(PLWriter *pWriter){
+  plwDestroy(pWriter);
+  free(pWriter);
+}
+
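
Putting PLWriter and DLWriter together, a sketch of how one document's
position list feeds into a doclist (the function name is hypothetical):

    /* Append docid 42 with positions 3 and 7 (column 0) to a DL_POSITIONS
    ** doclist being built in *out.  Offsets are ignored for DL_POSITIONS. */
    static void plwExample(DataBuffer *out){
      DLWriter dlWriter;
      PLWriter plWriter;
      dlwInit(&dlWriter, DL_POSITIONS, out);
      plwInit(&plWriter, 42, DL_POSITIONS);
      plwAdd(&plWriter, 0, 3, 0, 0);     /* column 0, position 3           */
      plwAdd(&plWriter, 0, 7, 0, 0);     /* column 0, position 7           */
      plwDlwAdd(&plWriter, &dlWriter);   /* emit docid + poslist + POS_END */
      plwDestroy(&plWriter);
      dlwDestroy(&dlWriter);
    }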
+
+/* Copy the doclist data of iType in pData/nData into *out, trimming
+** unnecessary data as we go.  Only columns matching iColumn are
+** copied; all columns are copied if iColumn is -1.  Elements with no
+** matching columns are dropped.  The output is an iOutType doclist.
+*/
+static void docListTrim(DocListType iType, const char *pData, int nData,
+                        int iColumn, DocListType iOutType, DataBuffer *out){
+  DLReader dlReader;
+  DLWriter dlWriter;
+  PLWriter plWriter;
+
+  assert( iOutType<=iType );
+
+  dlrInit(&dlReader, iType, pData, nData);
+  dlwInit(&dlWriter, iOutType, out);
+  plwInit(&plWriter, 0, iOutType);
+
+  while( !dlrAtEnd(&dlReader) ){
+    PLReader plReader;
+    int match = 0;
+
+    plrInit(&plReader, dlReader.iType,
+            dlrPosData(&dlReader), dlrPosDataLen(&dlReader));
+    plwReset(&plWriter, dlrDocid(&dlReader), iOutType);
+
+    while( !plrAtEnd(&plReader) ){
+      if( iColumn==-1 || plrColumn(&plReader)==iColumn ){
+        match = 1;
+        plwAdd(&plWriter, plrColumn(&plReader), plrPosition(&plReader),
+               plrStartOffset(&plReader), plrEndOffset(&plReader));
+      }
+      plrStep(&plReader);
+    }
+    if( match ) plwDlwAdd(&plWriter, &dlWriter);
+
+    plrDestroy(&plReader);
+    dlrStep(&dlReader);
+  }
+  plwDestroy(&plWriter);
+  dlwDestroy(&dlWriter);
+  dlrDestroy(&dlReader);
+}
+
+/* Used by docListMerge() to keep doclists in ascending order by
+** docid, then ascending order by age (so the newest comes first).
+*/
+typedef struct OrderedDLReader {
+  DLReader *pReader;
+
+  /* TODO(shess) If we assume that docListMerge pReaders is ordered by
+  ** age (which we do), then we could use pReader comparisons to break
+  ** ties.
+  */
+  int idx;
+} OrderedDLReader;
+
+/* Order eof to end, then by docid asc, idx desc. */
+static int orderedDLReaderCmp(OrderedDLReader *r1, OrderedDLReader *r2){
+  if( dlrAtEnd(r1->pReader) ){
+    if( dlrAtEnd(r2->pReader) ) return 0;  /* Both atEnd(). */
+    return 1;                              /* Only r1 atEnd(). */
+  }
+  if( dlrAtEnd(r2->pReader) ) return -1;   /* Only r2 atEnd(). */
+
+  if( dlrDocid(r1->pReader)<dlrDocid(r2->pReader) ) return -1;
+  if( dlrDocid(r1->pReader)>dlrDocid(r2->pReader) ) return 1;
+
+  /* Descending on idx. */
+  return r2->idx-r1->idx;
+}
+
+/* Bubble p[0] to appropriate place in p[1..n-1].  Assumes that
+** p[1..n-1] is already sorted.
+*/
+/* TODO(shess) Is this frequent enough to warrant a binary search?
+** Before implementing that, instrument the code to check.  In most
+** current usage, I expect that p[0] will be less than p[1] a very
+** high proportion of the time.
+*/
+static void orderedDLReaderReorder(OrderedDLReader *p, int n){
+  while( n>1 && orderedDLReaderCmp(p, p+1)>0 ){
+    OrderedDLReader tmp = p[0];
+    p[0] = p[1];
+    p[1] = tmp;
+    n--;
+    p++;
+  }
+}
+
+/* Given an array of doclist readers, merge their doclist elements
+** into out in sorted order (by docid), dropping elements from older
+** readers when there is a duplicate docid.  pReaders is assumed to be
+** ordered by age, oldest first.
+*/
+/* TODO(shess) nReaders must be <= MERGE_COUNT.  This should probably
+** be fixed.
+*/
+static void docListMerge(DataBuffer *out,
+                         DLReader *pReaders, int nReaders){
+  OrderedDLReader readers[MERGE_COUNT];
+  DLWriter writer;
+  int i, n;
+  const char *pStart = 0;
+  int nStart = 0;
+  sqlite_int64 iFirstDocid = 0, iLastDocid = 0;
+
+  assert( nReaders>0 );
+  if( nReaders==1 ){
+    dataBufferAppend(out, dlrDocData(pReaders), dlrAllDataBytes(pReaders));
+    return;
+  }
+
+  assert( nReaders<=MERGE_COUNT );
+  n = 0;
+  for(i=0; i<nReaders; i++){
+    assert( pReaders[i].iType==pReaders[0].iType );
+    readers[i].pReader = pReaders+i;
+    readers[i].idx = i;
+    n += dlrAllDataBytes(&pReaders[i]);
+  }
+  /* Conservatively size output to sum of inputs.  Output should end
+  ** up strictly smaller than input.
+  */
+  dataBufferExpand(out, n);
+
+  /* Get the readers into sorted order. */
+  while( i-->0 ){
+    orderedDLReaderReorder(readers+i, nReaders-i);
+  }
+
+  dlwInit(&writer, pReaders[0].iType, out);
+  while( !dlrAtEnd(readers[0].pReader) ){
+    sqlite_int64 iDocid = dlrDocid(readers[0].pReader);
+
+    /* If this is a continuation of the current buffer to copy, extend
+    ** that buffer.  memcpy() seems to be more efficient if it has
+    ** lots of data to copy.
+    */
+    if( dlrDocData(readers[0].pReader)==pStart+nStart ){
+      nStart += dlrDocDataBytes(readers[0].pReader);
+    }else{
+      if( pStart!=0 ){
+        dlwAppend(&writer, pStart, nStart, iFirstDocid, iLastDocid);
+      }
+      pStart = dlrDocData(readers[0].pReader);
+      nStart = dlrDocDataBytes(readers[0].pReader);
+      iFirstDocid = iDocid;
+    }
+    iLastDocid = iDocid;
+    dlrStep(readers[0].pReader);
+
+    /* Drop all of the older elements with the same docid. */
+    for(i=1; i<nReaders &&
+             !dlrAtEnd(readers[i].pReader) &&
+             dlrDocid(readers[i].pReader)==iDocid; i++){
+      dlrStep(readers[i].pReader);
+    }
+
+    /* Get the readers back into order. */
+    while( i-->0 ){
+      orderedDLReaderReorder(readers+i, nReaders-i);
+    }
+  }
+
+  /* Copy over any remaining elements. */
+  if( nStart>0 ) dlwAppend(&writer, pStart, nStart, iFirstDocid, iLastDocid);
+  dlwDestroy(&writer);
+}
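+
+/* For illustration: merging three readers ordered oldest to newest
+** whose docids are
+**   r0: 1 3 5    r1: 3 4    r2: 5 6
+** yields the docids 1 3 4 5 6, with the element for docid 3 taken
+** from r1 and the element for docid 5 taken from r2; the duplicate
+** elements in the older readers are dropped.
+*/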
+
+/* pLeft and pRight are DLReaders positioned to the same docid.
+**
+** If there are no instances in pLeft or pRight where the position
+** of pLeft is one less than the position of pRight, then this
+** routine adds nothing to pOut.
+**
+** If there are one or more instances where positions from pLeft
+** are exactly one less than positions from pRight, then add a new
+** document record to pOut.  If pOut wants to hold positions, then
+** include the positions from pRight that are one more than a
+** position in pLeft.  In other words:  pRight.iPos==pLeft.iPos+1.
+*/
+static void mergePosList(DLReader *pLeft, DLReader *pRight, DLWriter *pOut){
+  PLReader left, right;
+  PLWriter writer;
+  int match = 0;
+
+  assert( dlrDocid(pLeft)==dlrDocid(pRight) );
+  assert( pOut->iType!=DL_POSITIONS_OFFSETS );
+
+  plrInit(&left, pLeft->iType, dlrPosData(pLeft), dlrPosDataLen(pLeft));
+  plrInit(&right, pRight->iType, dlrPosData(pRight), dlrPosDataLen(pRight));
+  plwInit(&writer, dlrDocid(pLeft), pOut->iType);
+
+  while( !plrAtEnd(&left) && !plrAtEnd(&right) ){
+    if( plrColumn(&left)<plrColumn(&right) ){
+      plrStep(&left);
+    }else if( plrColumn(&left)>plrColumn(&right) ){
+      plrStep(&right);
+    }else if( plrPosition(&left)+1<plrPosition(&right) ){
+      plrStep(&left);
+    }else if( plrPosition(&left)+1>plrPosition(&right) ){
+      plrStep(&right);
+    }else{
+      match = 1;
+      plwAdd(&writer, plrColumn(&right), plrPosition(&right), 0, 0);
+      plrStep(&left);
+      plrStep(&right);
+    }
+  }
+
+  /* TODO(shess) We could remember the output position, encode the
+  ** docid, then encode the poslist directly into the output.  If no
+  ** match, we back out to the stored output position.  This would
+  ** also reduce the malloc count.
+  */
+  if( match ) plwDlwAdd(&writer, pOut);
+
+  plrDestroy(&left);
+  plrDestroy(&right);
+  plwDestroy(&writer);
+}
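+
+/* For illustration: if, within the shared docid, pLeft has positions
+** 3 and 9 in column 0 and pRight has positions 4 and 11 in the same
+** column, only the pair (3,4) satisfies pRight.iPos==pLeft.iPos+1, so
+** a single document record is added to pOut, carrying position 4
+** (pRight's position) when pOut is a DL_POSITIONS writer.
+*/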
+
+/* We have two doclists with positions:  pLeft and pRight.
+** Write the phrase intersection of these two doclists into pOut.
+**
+** A phrase intersection means that two documents only match
+** if pLeft.iPos+1==pRight.iPos.
+**
+** iType controls the type of data written to pOut.  If iType is
+** DL_POSITIONS, the positions are those from pRight.
+*/
+static void docListPhraseMerge(
+  const char *pLeft, int nLeft,
+  const char *pRight, int nRight,
+  DocListType iType,
+  DataBuffer *pOut      /* Write the combined doclist here */
+){
+  DLReader left, right;
+  DLWriter writer;
+
+  if( nLeft==0 || nRight==0 ) return;
+
+  assert( iType!=DL_POSITIONS_OFFSETS );
+
+  dlrInit(&left, DL_POSITIONS, pLeft, nLeft);
+  dlrInit(&right, DL_POSITIONS, pRight, nRight);
+  dlwInit(&writer, iType, pOut);
+
+  while( !dlrAtEnd(&left) && !dlrAtEnd(&right) ){
+    if( dlrDocid(&left)<dlrDocid(&right) ){
+      dlrStep(&left);
+    }else if( dlrDocid(&right)<dlrDocid(&left) ){
+      dlrStep(&right);
+    }else{
+      mergePosList(&left, &right, &writer);
+      dlrStep(&left);
+      dlrStep(&right);
+    }
+  }
+
+  dlrDestroy(&left);
+  dlrDestroy(&right);
+  dlwDestroy(&writer);
+}
+
+/* We have two DL_DOCIDS doclists:  pLeft and pRight.
+** Write the intersection of these two doclists into pOut as a
+** DL_DOCIDS doclist.
+*/
+static void docListAndMerge(
+  const char *pLeft, int nLeft,
+  const char *pRight, int nRight,
+  DataBuffer *pOut      /* Write the combined doclist here */
+){
+  DLReader left, right;
+  DLWriter writer;
+
+  if( nLeft==0 || nRight==0 ) return;
+
+  dlrInit(&left, DL_DOCIDS, pLeft, nLeft);
+  dlrInit(&right, DL_DOCIDS, pRight, nRight);
+  dlwInit(&writer, DL_DOCIDS, pOut);
+
+  while( !dlrAtEnd(&left) && !dlrAtEnd(&right) ){
+    if( dlrDocid(&left)<dlrDocid(&right) ){
+      dlrStep(&left);
+    }else if( dlrDocid(&right)<dlrDocid(&left) ){
+      dlrStep(&right);
+    }else{
+      dlwAdd(&writer, dlrDocid(&left), 0, 0);
+      dlrStep(&left);
+      dlrStep(&right);
+    }
+  }
+
+  dlrDestroy(&left);
+  dlrDestroy(&right);
+  dlwDestroy(&writer);
+}
+
+/* We have two DL_DOCIDS doclists:  pLeft and pRight.
+** Write the union of these two doclists into pOut as a
+** DL_DOCIDS doclist.
+*/
+static void docListOrMerge(
+  const char *pLeft, int nLeft,
+  const char *pRight, int nRight,
+  DataBuffer *pOut      /* Write the combined doclist here */
+){
+  DLReader left, right;
+  DLWriter writer;
+
+  if( nLeft==0 ){
+    dataBufferAppend(pOut, pRight, nRight);
+    return;
+  }
+  if( nRight==0 ){
+    dataBufferAppend(pOut, pLeft, nLeft);
+    return;
+  }
+
+  dlrInit(&left, DL_DOCIDS, pLeft, nLeft);
+  dlrInit(&right, DL_DOCIDS, pRight, nRight);
+  dlwInit(&writer, DL_DOCIDS, pOut);
+
+  while( !dlrAtEnd(&left) || !dlrAtEnd(&right) ){
+    if( dlrAtEnd(&right) || dlrDocid(&left)<dlrDocid(&right) ){
+      dlwAdd(&writer, dlrDocid(&left), 0, 0);
+      dlrStep(&left);
+    }else if( dlrAtEnd(&left) || dlrDocid(&right)<dlrDocid(&left) ){
+      dlwAdd(&writer, dlrDocid(&right), 0, 0);
+      dlrStep(&right);
+    }else{
+      dlwAdd(&writer, dlrDocid(&left), 0, 0);
+      dlrStep(&left);
+      dlrStep(&right);
+    }
+  }
+
+  dlrDestroy(&left);
+  dlrDestroy(&right);
+  dlwDestroy(&writer);
+}
+
+/* We have two DL_DOCIDS doclists:  pLeft and pRight.
+** Write into pOut a DL_DOCIDS doclist containing all documents that
+** occur in pLeft but not in pRight.
+*/
+static void docListExceptMerge(
+  const char *pLeft, int nLeft,
+  const char *pRight, int nRight,
+  DataBuffer *pOut      /* Write the combined doclist here */
+){
+  DLReader left, right;
+  DLWriter writer;
+
+  if( nLeft==0 ) return;
+  if( nRight==0 ){
+    dataBufferAppend(pOut, pLeft, nLeft);
+    return;
+  }
+
+  dlrInit(&left, DL_DOCIDS, pLeft, nLeft);
+  dlrInit(&right, DL_DOCIDS, pRight, nRight);
+  dlwInit(&writer, DL_DOCIDS, pOut);
+
+  while( !dlrAtEnd(&left) ){
+    while( !dlrAtEnd(&right) && dlrDocid(&right)<dlrDocid(&left) ){
+      dlrStep(&right);
+    }
+    if( dlrAtEnd(&right) || dlrDocid(&left)<dlrDocid(&right) ){
+      dlwAdd(&writer, dlrDocid(&left), 0, 0);
+    }
+    dlrStep(&left);
+  }
+
+  dlrDestroy(&left);
+  dlrDestroy(&right);
+  dlwDestroy(&writer);
+}
+
+static char *string_dup_n(const char *s, int n){
+  char *str = malloc(n + 1);
+  memcpy(str, s, n);
+  str[n] = '\0';
+  return str;
+}
+
+/* Duplicate a string; the caller must free() the returned string.
+ * (We don't use strdup() since it's not part of the standard C library and
+ * may not be available everywhere.) */
+static char *string_dup(const char *s){
+  return string_dup_n(s, strlen(s));
+}
+
+/* Format a string, replacing each occurrence of the % character with
+ * zDb.zName.  This may be more convenient than sqlite3_mprintf()
+ * when one string is used repeatedly in a format string.
+ * The caller must free() the returned string. */
+static char *string_format(const char *zFormat,
+                           const char *zDb, const char *zName){
+  const char *p;
+  size_t len = 0;
+  size_t nDb = strlen(zDb);
+  size_t nName = strlen(zName);
+  size_t nFullTableName = nDb+1+nName;
+  char *result;
+  char *r;
+
+  /* first compute length needed */
+  for(p = zFormat ; *p ; ++p){
+    len += (*p=='%' ? nFullTableName : 1);
+  }
+  len += 1;  /* for null terminator */
+
+  r = result = malloc(len);
+  for(p = zFormat; *p; ++p){
+    if( *p=='%' ){
+      memcpy(r, zDb, nDb);
+      r += nDb;
+      *r++ = '.';
+      memcpy(r, zName, nName);
+      r += nName;
+    } else {
+      *r++ = *p;
+    }
+  }
+  *r++ = '\0';
+  assert( r == result + len );
+  return result;
+}
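+
+/* For illustration (hypothetical database and table names):
+**
+**   char *z = string_format("select rowid from %_content",
+**                           "main", "email");
+**
+** sets z to "select rowid from main.email_content", which the caller
+** must free().
+*/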
+
+static int sql_exec(sqlite3 *db, const char *zDb, const char *zName,
+                    const char *zFormat){
+  char *zCommand = string_format(zFormat, zDb, zName);
+  int rc;
+  TRACE(("FTS2 sql: %s\n", zCommand));
+  rc = sqlite3_exec(db, zCommand, NULL, 0, NULL);
+  free(zCommand);
+  return rc;
+}
+
+static int sql_prepare(sqlite3 *db, const char *zDb, const char *zName,
+                       sqlite3_stmt **ppStmt, const char *zFormat){
+  char *zCommand = string_format(zFormat, zDb, zName);
+  int rc;
+  TRACE(("FTS2 prepare: %s\n", zCommand));
+  rc = sqlite3_prepare(db, zCommand, -1, ppStmt, NULL);
+  free(zCommand);
+  return rc;
+}
+
+/* end utility functions */
+
+/* Forward reference */
+typedef struct fulltext_vtab fulltext_vtab;
+
+/* A single term in a query is represented by an instance of
+** the following structure.
+*/
+typedef struct QueryTerm {
+  short int nPhrase; /* How many following terms are part of the same phrase */
+  short int iPhrase; /* This is the i-th term of a phrase. */
+  short int iColumn; /* Column of the index that must match this term */
+  signed char isOr;  /* this term is preceded by "OR" */
+  signed char isNot; /* this term is preceded by "-" */
+  char *pTerm;       /* text of the term.  '\000' terminated.  malloced */
+  int nTerm;         /* Number of bytes in pTerm[] */
+} QueryTerm;
+
+
+/* A query string is parsed into a Query structure.
+ *
+ * We could, in theory, allow query strings to be complicated
+ * nested expressions with precedence determined by parentheses.
+ * But none of the major search engines do this.  (Perhaps the
+ * feeling is that a parenthesized expression is too complex an
+ * idea for the average user to grasp.)  Taking our lead from
+ * the major search engines, we will allow queries to be a list
+ * of terms (with an implied AND operator) or phrases in double-quotes,
+ * with a single optional "-" before each non-phrase term to designate
+ * negation and an optional OR connector.
+ *
+ * OR binds more tightly than the implied AND, which is what the
+ * major search engines seem to do.  So, for example:
+ * 
+ *    [one two OR three]     ==>    one AND (two OR three)
+ *    [one OR two three]     ==>    (one OR two) AND three
+ *
+ * A "-" before a term matches all entries that lack that term.
+ * The "-" must occur immediately before the term with in intervening
+ * space.  This is how the search engines do it.
+ *
+ * A NOT term cannot be the right-hand operand of an OR.  If this
+ * occurs in the query string, the NOT is ignored:
+ *
+ *    [one OR -two]          ==>    one OR two
+ *
+ */
+typedef struct Query {
+  fulltext_vtab *pFts;  /* The full text index */
+  int nTerms;           /* Number of terms in the query */
+  QueryTerm *pTerms;    /* Array of terms.  Space obtained from malloc() */
+  int nextIsOr;         /* Set the isOr flag on the next inserted term */
+  int nextColumn;       /* Next word parsed must be in this column */
+  int dfltColumn;       /* The default column */
+} Query;
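+
+/* For illustration: the example query [one two OR three] above is
+** parsed into three QueryTerm entries, "one", "two", and "three",
+** with isOr set only on "three" (the term preceded by OR); a leading
+** "-" on a term would set isNot, and the terms of a quoted phrase are
+** linked together through nPhrase/iPhrase.
+*/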
+
+
+/*
+** An instance of the following structure keeps track of generated
+** matching-word offset information and snippets.
+*/
+typedef struct Snippet {
+  int nMatch;     /* Total number of matches */
+  int nAlloc;     /* Space allocated for aMatch[] */
+  struct snippetMatch { /* One entry for each matching term */
+    char snStatus;       /* Status flag for use while constructing snippets */
+    short int iCol;      /* The column that contains the match */
+    short int iTerm;     /* The index in Query.pTerms[] of the matching term */
+    short int nByte;     /* Number of bytes in the term */
+    int iStart;          /* The offset to the first character of the term */
+  } *aMatch;      /* Points to space obtained from malloc */
+  char *zOffset;  /* Text rendering of aMatch[] */
+  int nOffset;    /* strlen(zOffset) */
+  char *zSnippet; /* Snippet text */
+  int nSnippet;   /* strlen(zSnippet) */
+} Snippet;
+
+
+typedef enum QueryType {
+  QUERY_GENERIC,   /* table scan */
+  QUERY_ROWID,     /* lookup by rowid */
+  QUERY_FULLTEXT   /* QUERY_FULLTEXT + [i] is a full-text search for column i*/
+} QueryType;
+
+typedef enum fulltext_statement {
+  CONTENT_INSERT_STMT,
+  CONTENT_SELECT_STMT,
+  CONTENT_UPDATE_STMT,
+  CONTENT_DELETE_STMT,
+
+  BLOCK_INSERT_STMT,
+  BLOCK_SELECT_STMT,
+  BLOCK_DELETE_STMT,
+
+  SEGDIR_MAX_INDEX_STMT,
+  SEGDIR_SET_STMT,
+  SEGDIR_SELECT_STMT,
+  SEGDIR_SPAN_STMT,
+  SEGDIR_DELETE_STMT,
+  SEGDIR_SELECT_ALL_STMT,
+
+  MAX_STMT                     /* Always at end! */
+} fulltext_statement;
+
+/* These must exactly match the enum above. */
+/* TODO(shess): Is there some risk that a statement will be used in two
+** cursors at once, e.g.  if a query joins a virtual table to itself?
+** If so perhaps we should move some of these to the cursor object.
+*/
+static const char *const fulltext_zStatement[MAX_STMT] = {
+  /* CONTENT_INSERT */ NULL,  /* generated in contentInsertStatement() */
+  /* CONTENT_SELECT */ "select * from %_content where rowid = ?",
+  /* CONTENT_UPDATE */ NULL,  /* generated in contentUpdateStatement() */
+  /* CONTENT_DELETE */ "delete from %_content where rowid = ?",
+
+  /* BLOCK_INSERT */ "insert into %_segments values (?)",
+  /* BLOCK_SELECT */ "select block from %_segments where rowid = ?",
+  /* BLOCK_DELETE */ "delete from %_segments where rowid between ? and ?",
+
+  /* SEGDIR_MAX_INDEX */ "select max(idx) from %_segdir where level = ?",
+  /* SEGDIR_SET */ "insert into %_segdir values (?, ?, ?, ?, ?, ?)",
+  /* SEGDIR_SELECT */
+  "select start_block, leaves_end_block, root from %_segdir "
+  " where level = ? order by idx",
+  /* SEGDIR_SPAN */
+  "select min(start_block), max(end_block) from %_segdir "
+  " where level = ? and start_block <> 0",
+  /* SEGDIR_DELETE */ "delete from %_segdir where level = ?",
+  /* SEGDIR_SELECT_ALL */ "select root from %_segdir order by level desc, idx",
+};
+
+/*
+** A connection to a fulltext index is an instance of the following
+** structure.  The xCreate and xConnect methods create an instance
+** of this structure and xDestroy and xDisconnect free that instance.
+** All other methods receive a pointer to the structure as one of their
+** arguments.
+*/
+struct fulltext_vtab {
+  sqlite3_vtab base;               /* Base class used by SQLite core */
+  sqlite3 *db;                     /* The database connection */
+  const char *zDb;                 /* logical database name */
+  const char *zName;               /* virtual table name */
+  int nColumn;                     /* number of columns in virtual table */
+  char **azColumn;                 /* column names.  malloced */
+  char **azContentColumn;          /* column names in content table; malloced */
+  sqlite3_tokenizer *pTokenizer;   /* tokenizer for inserts and queries */
+
+  /* Precompiled statements which we keep as long as the table is
+  ** open.
+  */
+  sqlite3_stmt *pFulltextStatements[MAX_STMT];
+
+  /* Precompiled statements used for segment merges.  We run a
+  ** separate select across the leaf level of each tree being merged.
+  */
+  sqlite3_stmt *pLeafSelectStmts[MERGE_COUNT];
+  /* The statement used to prepare pLeafSelectStmts. */
+#define LEAF_SELECT \
+  "select block from %_segments where rowid between ? and ? order by rowid"
+};
+
+/*
+** When the core wants to do a query, it creates a cursor using a
+** call to xOpen.  This structure is an instance of a cursor.  It
+** is destroyed by xClose.
+*/
+typedef struct fulltext_cursor {
+  sqlite3_vtab_cursor base;        /* Base class used by SQLite core */
+  QueryType iCursorType;           /* Copy of sqlite3_index_info.idxNum */
+  sqlite3_stmt *pStmt;             /* Prepared statement in use by the cursor */
+  int eof;                         /* True if at End Of Results */
+  Query q;                         /* Parsed query string */
+  Snippet snippet;                 /* Cached snippet for the current row */
+  int iColumn;                     /* Column being searched */
+  DataBuffer result;               /* Doclist results from fulltextQuery */
+  DLReader reader;                 /* Result reader if result not empty */
+} fulltext_cursor;
+
+static struct fulltext_vtab *cursor_vtab(fulltext_cursor *c){
+  return (fulltext_vtab *) c->base.pVtab;
+}
+
+static const sqlite3_module fulltextModule;   /* forward declaration */
+
+/* Return a dynamically generated statement of the form
+ *   insert into %_content (rowid, ...) values (?, ...)
+ */
+static const char *contentInsertStatement(fulltext_vtab *v){
+  StringBuffer sb;
+  int i;
+
+  initStringBuffer(&sb);
+  append(&sb, "insert into %_content (rowid, ");
+  appendList(&sb, v->nColumn, v->azContentColumn);
+  append(&sb, ") values (?");
+  for(i=0; i<v->nColumn; ++i)
+    append(&sb, ", ?");
+  append(&sb, ")");
+  return stringBufferData(&sb);
+}
+
+/* Return a dynamically generated statement of the form
+ *   update %_content set [col_0] = ?, [col_1] = ?, ...
+ *                    where rowid = ?
+ */
+static const char *contentUpdateStatement(fulltext_vtab *v){
+  StringBuffer sb;
+  int i;
+
+  initStringBuffer(&sb);
+  append(&sb, "update %_content set ");
+  for(i=0; i<v->nColumn; ++i) {
+    if( i>0 ){
+      append(&sb, ", ");
+    }
+    append(&sb, v->azContentColumn[i]);
+    append(&sb, " = ?");
+  }
+  append(&sb, " where rowid = ?");
+  return stringBufferData(&sb);
+}
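+
+/* For illustration (hypothetical content columns c0subject and
+** c1body): the two generators above produce
+**
+**   insert into %_content (rowid, c0subject, c1body) values (?, ?, ?)
+**   update %_content set c0subject = ?, c1body = ? where rowid = ?
+**
+** with the % placeholder later expanded by string_format() in
+** sql_prepare().
+*/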
+
+/* Puts a freshly-prepared statement determined by iStmt in *ppStmt.
+** If the indicated statement has never been prepared, it is prepared
+** and cached, otherwise the cached version is reset.
+*/
+static int sql_get_statement(fulltext_vtab *v, fulltext_statement iStmt,
+                             sqlite3_stmt **ppStmt){
+  assert( iStmt<MAX_STMT );
+  if( v->pFulltextStatements[iStmt]==NULL ){
+    const char *zStmt;
+    int rc;
+    switch( iStmt ){
+      case CONTENT_INSERT_STMT:
+        zStmt = contentInsertStatement(v); break;
+      case CONTENT_UPDATE_STMT:
+        zStmt = contentUpdateStatement(v); break;
+      default:
+        zStmt = fulltext_zStatement[iStmt];
+    }
+    rc = sql_prepare(v->db, v->zDb, v->zName, &v->pFulltextStatements[iStmt],
+                         zStmt);
+    if( zStmt != fulltext_zStatement[iStmt]) free((void *) zStmt);
+    if( rc!=SQLITE_OK ) return rc;
+  } else {
+    int rc = sqlite3_reset(v->pFulltextStatements[iStmt]);
+    if( rc!=SQLITE_OK ) return rc;
+  }
+
+  *ppStmt = v->pFulltextStatements[iStmt];
+  return SQLITE_OK;
+}
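+
+/* For illustration, the usual pattern for these cached statements
+** (content_select() below follows it) is:
+**
+**   sqlite3_stmt *s;
+**   int rc = sql_get_statement(v, CONTENT_SELECT_STMT, &s);
+**   if( rc==SQLITE_OK ) rc = sqlite3_bind_int64(s, 1, iRow);
+**   if( rc==SQLITE_OK ) rc = sql_step_statement(v, CONTENT_SELECT_STMT, &s);
+*/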
+
+/* Step the indicated statement, handling errors SQLITE_BUSY (by
+** retrying) and SQLITE_SCHEMA (by re-preparing and transferring
+** bindings to the new statement).
+** TODO(adam): We should extend this function so that it can work with
+** statements declared locally, not only globally cached statements.
+*/
+static int sql_step_statement(fulltext_vtab *v, fulltext_statement iStmt,
+                              sqlite3_stmt **ppStmt){
+  int rc;
+  sqlite3_stmt *s = *ppStmt;
+  assert( iStmt<MAX_STMT );
+  assert( s==v->pFulltextStatements[iStmt] );
+
+  while( (rc=sqlite3_step(s))!=SQLITE_DONE && rc!=SQLITE_ROW ){
+    sqlite3_stmt *pNewStmt;
+
+    if( rc==SQLITE_BUSY ) continue;
+    if( rc!=SQLITE_ERROR ) return rc;
+
+    rc = sqlite3_reset(s);
+    if( rc!=SQLITE_SCHEMA ) return SQLITE_ERROR;
+
+    v->pFulltextStatements[iStmt] = NULL;   /* Still in s */
+    rc = sql_get_statement(v, iStmt, &pNewStmt);
+    if( rc!=SQLITE_OK ) goto err;
+    *ppStmt = pNewStmt;
+
+    rc = sqlite3_transfer_bindings(s, pNewStmt);
+    if( rc!=SQLITE_OK ) goto err;
+
+    rc = sqlite3_finalize(s);
+    if( rc!=SQLITE_OK ) return rc;
+    s = pNewStmt;
+  }
+  return rc;
+
+ err:
+  sqlite3_finalize(s);
+  return rc;
+}
+
+/* Like sql_step_statement(), but convert SQLITE_DONE to SQLITE_OK.
+** Useful for statements like UPDATE, where we expect no results.
+*/
+static int sql_single_step_statement(fulltext_vtab *v,
+                                     fulltext_statement iStmt,
+                                     sqlite3_stmt **ppStmt){
+  int rc = sql_step_statement(v, iStmt, ppStmt);
+  return (rc==SQLITE_DONE) ? SQLITE_OK : rc;
+}
+
+/* Like sql_get_statement(), but for special replicated LEAF_SELECT
+** statements.
+*/
+/* TODO(shess) Write version for generic statements and then share
+** that between the cached-statement functions.
+*/
+static int sql_get_leaf_statement(fulltext_vtab *v, int idx,
+                                  sqlite3_stmt **ppStmt){
+  assert( idx>=0 && idx<MERGE_COUNT );
+  if( v->pLeafSelectStmts[idx]==NULL ){
+    int rc = sql_prepare(v->db, v->zDb, v->zName, &v->pLeafSelectStmts[idx],
+                         LEAF_SELECT);
+    if( rc!=SQLITE_OK ) return rc;
+  }else{
+    int rc = sqlite3_reset(v->pLeafSelectStmts[idx]);
+    if( rc!=SQLITE_OK ) return rc;
+  }
+
+  *ppStmt = v->pLeafSelectStmts[idx];
+  return SQLITE_OK;
+}
+
+/* Like sql_step_statement(), but for special replicated LEAF_SELECT
+** statements.
+*/
+/* TODO(shess) Write version for generic statements and then share
+** that between the cached-statement functions.
+*/
+static int sql_step_leaf_statement(fulltext_vtab *v, int idx,
+                                   sqlite3_stmt **ppStmt){
+  int rc;
+  sqlite3_stmt *s = *ppStmt;
+
+  while( (rc=sqlite3_step(s))!=SQLITE_DONE && rc!=SQLITE_ROW ){
+    sqlite3_stmt *pNewStmt;
+
+    if( rc==SQLITE_BUSY ) continue;
+    if( rc!=SQLITE_ERROR ) return rc;
+
+    rc = sqlite3_reset(s);
+    if( rc!=SQLITE_SCHEMA ) return SQLITE_ERROR;
+
+    v->pLeafSelectStmts[idx] = NULL;   /* Still in s */
+    rc = sql_get_leaf_statement(v, idx, &pNewStmt);
+    if( rc!=SQLITE_OK ) goto err;
+    *ppStmt = pNewStmt;
+
+    rc = sqlite3_transfer_bindings(s, pNewStmt);
+    if( rc!=SQLITE_OK ) goto err;
+
+    rc = sqlite3_finalize(s);
+    if( rc!=SQLITE_OK ) return rc;
+    s = pNewStmt;
+  }
+  return rc;
+
+ err:
+  sqlite3_finalize(s);
+  return rc;
+}
+
+/* insert into %_content (rowid, ...) values ([rowid], [pValues]) */
+static int content_insert(fulltext_vtab *v, sqlite3_value *rowid,
+                          sqlite3_value **pValues){
+  sqlite3_stmt *s;
+  int i;
+  int rc = sql_get_statement(v, CONTENT_INSERT_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_value(s, 1, rowid);
+  if( rc!=SQLITE_OK ) return rc;
+
+  for(i=0; i<v->nColumn; ++i){
+    rc = sqlite3_bind_value(s, 2+i, pValues[i]);
+    if( rc!=SQLITE_OK ) return rc;
+  }
+
+  return sql_single_step_statement(v, CONTENT_INSERT_STMT, &s);
+}
+
+/* update %_content set col0 = pValues[0], col1 = pValues[1], ...
+ *                  where rowid = [iRowid] */
+static int content_update(fulltext_vtab *v, sqlite3_value **pValues,
+                          sqlite_int64 iRowid){
+  sqlite3_stmt *s;
+  int i;
+  int rc = sql_get_statement(v, CONTENT_UPDATE_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  for(i=0; i<v->nColumn; ++i){
+    rc = sqlite3_bind_value(s, 1+i, pValues[i]);
+    if( rc!=SQLITE_OK ) return rc;
+  }
+
+  rc = sqlite3_bind_int64(s, 1+v->nColumn, iRowid);
+  if( rc!=SQLITE_OK ) return rc;
+
+  return sql_single_step_statement(v, CONTENT_UPDATE_STMT, &s);
+}
+
+static void freeStringArray(int nString, const char **pString){
+  int i;
+
+  for (i=0 ; i < nString ; ++i) {
+    free((void *) pString[i]);
+  }
+  free((void *) pString);
+}
+
+/* select * from %_content where rowid = [iRow]
+ * The caller must delete the returned array and all strings in it.
+ *
+ * TODO: Perhaps we should return pointer/length strings here for consistency
+ * with other code which uses pointer/length. */
+static int content_select(fulltext_vtab *v, sqlite_int64 iRow,
+                          const char ***pValues){
+  sqlite3_stmt *s;
+  const char **values;
+  int i;
+  int rc;
+
+  *pValues = NULL;
+
+  rc = sql_get_statement(v, CONTENT_SELECT_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int64(s, 1, iRow);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sql_step_statement(v, CONTENT_SELECT_STMT, &s);
+  if( rc!=SQLITE_ROW ) return rc;
+
+  values = (const char **) malloc(v->nColumn * sizeof(const char *));
+  for(i=0; i<v->nColumn; ++i){
+    values[i] = string_dup((char*)sqlite3_column_text(s, i));
+  }
+
+  /* We expect only one row.  We must execute another sqlite3_step()
+   * to complete the iteration; otherwise the table will remain locked. */
+  rc = sqlite3_step(s);
+  if( rc==SQLITE_DONE ){
+    *pValues = values;
+    return SQLITE_OK;
+  }
+
+  freeStringArray(v->nColumn, values);
+  return rc;
+}
+
+/* delete from %_content where rowid = [iRow] */
+static int content_delete(fulltext_vtab *v, sqlite_int64 iRow){
+  sqlite3_stmt *s;
+  int rc = sql_get_statement(v, CONTENT_DELETE_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int64(s, 1, iRow);
+  if( rc!=SQLITE_OK ) return rc;
+
+  return sql_single_step_statement(v, CONTENT_DELETE_STMT, &s);
+}
+
+/* insert into %_segments values ([pData])
+**   returns assigned rowid in *piBlockid
+*/
+static int block_insert(fulltext_vtab *v, const char *pData, int nData,
+                        sqlite_int64 *piBlockid){
+  sqlite3_stmt *s;
+  int rc = sql_get_statement(v, BLOCK_INSERT_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_blob(s, 1, pData, nData, SQLITE_STATIC);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sql_step_statement(v, BLOCK_INSERT_STMT, &s);
+  if( rc==SQLITE_ROW ) return SQLITE_ERROR;
+  if( rc!=SQLITE_DONE ) return rc;
+
+  *piBlockid = sqlite3_last_insert_rowid(v->db);
+  return SQLITE_OK;
+}
+
+/* delete from %_segments
+**   where rowid between [iStartBlockid] and [iEndBlockid]
+**
+** Deletes the inclusive range of blocks; used to delete the blocks
+** which form a segment.
+*/
+static int block_delete(fulltext_vtab *v,
+                        sqlite_int64 iStartBlockid, sqlite_int64 iEndBlockid){
+  sqlite3_stmt *s;
+  int rc = sql_get_statement(v, BLOCK_DELETE_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int64(s, 1, iStartBlockid);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int64(s, 2, iEndBlockid);
+  if( rc!=SQLITE_OK ) return rc;
+
+  return sql_single_step_statement(v, BLOCK_DELETE_STMT, &s);
+}
+
+/* Returns SQLITE_ROW with *pidx set to the maximum segment idx found
+** at iLevel.  Returns SQLITE_DONE if there are no segments at
+** iLevel.  Otherwise returns an error.
+*/
+static int segdir_max_index(fulltext_vtab *v, int iLevel, int *pidx){
+  sqlite3_stmt *s;
+  int rc = sql_get_statement(v, SEGDIR_MAX_INDEX_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int(s, 1, iLevel);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sql_step_statement(v, SEGDIR_MAX_INDEX_STMT, &s);
+  /* Should always get at least one row due to how max() works. */
+  if( rc==SQLITE_DONE ) return SQLITE_DONE;
+  if( rc!=SQLITE_ROW ) return rc;
+
+  /* NULL means that there were no inputs to max(). */
+  if( SQLITE_NULL==sqlite3_column_type(s, 0) ){
+    rc = sqlite3_step(s);
+    if( rc==SQLITE_ROW ) return SQLITE_ERROR;
+    return rc;
+  }
+
+  *pidx = sqlite3_column_int(s, 0);
+
+  /* We expect only one row.  We must execute another sqlite3_step()
+   * to complete the iteration; otherwise the table will remain locked. */
+  rc = sqlite3_step(s);
+  if( rc==SQLITE_ROW ) return SQLITE_ERROR;
+  if( rc!=SQLITE_DONE ) return rc;
+  return SQLITE_ROW;
+}
+
+/* insert into %_segdir values (
+**   [iLevel], [idx],
+**   [iStartBlockid], [iLeavesEndBlockid], [iEndBlockid],
+**   [pRootData]
+** )
+*/
+static int segdir_set(fulltext_vtab *v, int iLevel, int idx,
+                      sqlite_int64 iStartBlockid,
+                      sqlite_int64 iLeavesEndBlockid,
+                      sqlite_int64 iEndBlockid,
+                      const char *pRootData, int nRootData){
+  sqlite3_stmt *s;
+  int rc = sql_get_statement(v, SEGDIR_SET_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int(s, 1, iLevel);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int(s, 2, idx);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int64(s, 3, iStartBlockid);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int64(s, 4, iLeavesEndBlockid);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int64(s, 5, iEndBlockid);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_blob(s, 6, pRootData, nRootData, SQLITE_STATIC);
+  if( rc!=SQLITE_OK ) return rc;
+
+  return sql_single_step_statement(v, SEGDIR_SET_STMT, &s);
+}
+
+/* Queries %_segdir for the block span of the segments in level
+** iLevel.  Returns SQLITE_DONE if there are no blocks for iLevel,
+** SQLITE_ROW if there are blocks, else an error.
+*/
+static int segdir_span(fulltext_vtab *v, int iLevel,
+                       sqlite_int64 *piStartBlockid,
+                       sqlite_int64 *piEndBlockid){
+  sqlite3_stmt *s;
+  int rc = sql_get_statement(v, SEGDIR_SPAN_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int(s, 1, iLevel);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sql_step_statement(v, SEGDIR_SPAN_STMT, &s);
+  if( rc==SQLITE_DONE ) return SQLITE_DONE;  /* Should never happen */
+  if( rc!=SQLITE_ROW ) return rc;
+
+  /* This happens if all segments at this level are entirely inline. */
+  if( SQLITE_NULL==sqlite3_column_type(s, 0) ){
+    /* We expect only one row.  We must execute another sqlite3_step()
+     * to complete the iteration; otherwise the table will remain locked. */
+    int rc2 = sqlite3_step(s);
+    if( rc2==SQLITE_ROW ) return SQLITE_ERROR;
+    return rc2;
+  }
+
+  *piStartBlockid = sqlite3_column_int64(s, 0);
+  *piEndBlockid = sqlite3_column_int64(s, 1);
+
+  /* We expect only one row.  We must execute another sqlite3_step()
+   * to complete the iteration; otherwise the table will remain locked. */
+  rc = sqlite3_step(s);
+  if( rc==SQLITE_ROW ) return SQLITE_ERROR;
+  if( rc!=SQLITE_DONE ) return rc;
+  return SQLITE_ROW;
+}
+
+/* Delete the segment blocks and segment directory records for all
+** segments at iLevel.
+*/
+static int segdir_delete(fulltext_vtab *v, int iLevel){
+  sqlite3_stmt *s;
+  sqlite_int64 iStartBlockid, iEndBlockid;
+  int rc = segdir_span(v, iLevel, &iStartBlockid, &iEndBlockid);
+  if( rc!=SQLITE_ROW && rc!=SQLITE_DONE ) return rc;
+
+  if( rc==SQLITE_ROW ){
+    rc = block_delete(v, iStartBlockid, iEndBlockid);
+    if( rc!=SQLITE_OK ) return rc;
+  }
+
+  /* Delete the segment directory itself. */
+  rc = sql_get_statement(v, SEGDIR_DELETE_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int64(s, 1, iLevel);
+  if( rc!=SQLITE_OK ) return rc;
+
+  return sql_single_step_statement(v, SEGDIR_DELETE_STMT, &s);
+}
+
+/*
+** Free the memory used to contain a fulltext_vtab structure.
+*/
+static void fulltext_vtab_destroy(fulltext_vtab *v){
+  int iStmt, i;
+
+  TRACE(("FTS2 Destroy %p\n", v));
+  for( iStmt=0; iStmt<MAX_STMT; iStmt++ ){
+    if( v->pFulltextStatements[iStmt]!=NULL ){
+      sqlite3_finalize(v->pFulltextStatements[iStmt]);
+      v->pFulltextStatements[iStmt] = NULL;
+    }
+  }
+
+  for( i=0; i<MERGE_COUNT; i++ ){
+    if( v->pLeafSelectStmts[i]!=NULL ){
+      sqlite3_finalize(v->pLeafSelectStmts[i]);
+      v->pLeafSelectStmts[i] = NULL;
+    }
+  }
+
+  if( v->pTokenizer!=NULL ){
+    v->pTokenizer->pModule->xDestroy(v->pTokenizer);
+    v->pTokenizer = NULL;
+  }
+  
+  free(v->azColumn);
+  for(i = 0; i < v->nColumn; ++i) {
+    sqlite3_free(v->azContentColumn[i]);
+  }
+  free(v->azContentColumn);
+  free(v);
+}
+
+/*
+** Token types for parsing the arguments to xConnect or xCreate.
+*/
+#define TOKEN_EOF         0    /* End of file */
+#define TOKEN_SPACE       1    /* Any kind of whitespace */
+#define TOKEN_ID          2    /* An identifier */
+#define TOKEN_STRING      3    /* A string literal */
+#define TOKEN_PUNCT       4    /* A single punctuation character */
+
+/*
+** If X is a character that can be used in an identifier then
+** IdChar(X) will be true.  Otherwise it is false.
+**
+** For ASCII, any character with the high-order bit set is
+** allowed in an identifier.  For 7-bit characters, 
+** sqlite3IsIdChar[X] must be 1.
+**
+** Ticket #1066.  The SQL standard does not allow '$' in the
+** middle of identifiers.  But many SQL implementations do.
+** SQLite will allow '$' in identifiers for compatibility.
+** But the feature is undocumented.
+*/
+static const char isIdChar[] = {
+/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */
+    0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  /* 2x */
+    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,  /* 3x */
+    0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,  /* 4x */
+    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1,  /* 5x */
+    0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,  /* 6x */
+    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0,  /* 7x */
+};
+#define IdChar(C)  (((c=C)&0x80)!=0 || (c>0x1f && isIdChar[c-0x20]))
+
+
+/*
+** Return the length of the token that begins at z[0]. 
+** Store the token type in *tokenType before returning.
+*/
+static int getToken(const char *z, int *tokenType){
+  int i, c;
+  switch( *z ){
+    case 0: {
+      *tokenType = TOKEN_EOF;
+      return 0;
+    }
+    case ' ': case '\t': case '\n': case '\f': case '\r': {
+      for(i=1; isspace(z[i]); i++){}
+      *tokenType = TOKEN_SPACE;
+      return i;
+    }
+    case '\'':
+    case '"': {
+      int delim = z[0];
+      for(i=1; (c=z[i])!=0; i++){
+        if( c==delim ){
+          if( z[i+1]==delim ){
+            i++;
+          }else{
+            break;
+          }
+        }
+      }
+      *tokenType = TOKEN_STRING;
+      return i + (c!=0);
+    }
+    case '[': {
+      for(i=1, c=z[0]; c!=']' && (c=z[i])!=0; i++){}
+      *tokenType = TOKEN_ID;
+      return i;
+    }
+    default: {
+      if( !IdChar(*z) ){
+        break;
+      }
+      for(i=1; IdChar(z[i]); i++){}
+      *tokenType = TOKEN_ID;
+      return i;
+    }
+  }
+  *tokenType = TOKEN_PUNCT;
+  return 1;
+}
+
+/*
+** A token extracted from a string is an instance of the following
+** structure.
+*/
+typedef struct Token {
+  const char *z;       /* Pointer to token text.  Not '\000' terminated */
+  short int n;         /* Length of the token text in bytes. */
+} Token;
+
+/*
+** Given an input string (which is really one of the argv[] parameters
+** passed into xConnect or xCreate) split the string up into tokens.
+** Return an array of pointers to '\000' terminated strings, one string
+** for each non-whitespace token.
+**
+** The returned array is terminated by a single NULL pointer.
+**
+** Space to hold the returned array is obtained from a single
+** malloc and should be freed by passing the return value to free().
+** The individual strings within the token list are all a part of
+** the single memory allocation and will all be freed at once.
+*/
+static char **tokenizeString(const char *z, int *pnToken){
+  int nToken = 0;
+  Token *aToken = malloc( strlen(z) * sizeof(aToken[0]) );
+  int n = 1;
+  int e, i;
+  int totalSize = 0;
+  char **azToken;
+  char *zCopy;
+  while( n>0 ){
+    n = getToken(z, &e);
+    if( e!=TOKEN_SPACE ){
+      aToken[nToken].z = z;
+      aToken[nToken].n = n;
+      nToken++;
+      totalSize += n+1;
+    }
+    z += n;
+  }
+  azToken = (char**)malloc( nToken*sizeof(char*) + totalSize );
+  zCopy = (char*)&azToken[nToken];
+  nToken--;
+  for(i=0; i<nToken; i++){
+    azToken[i] = zCopy;
+    n = aToken[i].n;
+    memcpy(zCopy, aToken[i].z, n);
+    zCopy[n] = 0;
+    zCopy += n+1;
+  }
+  azToken[nToken] = 0;
+  free(aToken);
+  *pnToken = nToken;
+  return azToken;
+}
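+
+/* For illustration: tokenizeString("tokenize porter(arg)", &n) returns
+** the five strings "tokenize", "porter", "(", "arg", ")" and sets n to
+** 5; the trailing TOKEN_EOF entry is dropped and its pointer slot is
+** reused for the terminating NULL.
+*/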
+
+/*
+** Convert an SQL-style quoted string into a normal string by removing
+** the quote characters.  The conversion is done in-place.  If the
+** input does not begin with a quote character, then this routine
+** is a no-op.
+**
+** Examples:
+**
+**     "abc"   becomes   abc
+**     'xyz'   becomes   xyz
+**     [pqr]   becomes   pqr
+**     `mno`   becomes   mno
+*/
+static void dequoteString(char *z){
+  int quote;
+  int i, j;
+  if( z==0 ) return;
+  quote = z[0];
+  switch( quote ){
+    case '\'':  break;
+    case '"':   break;
+    case '`':   break;                /* For MySQL compatibility */
+    case '[':   quote = ']';  break;  /* For MS SqlServer compatibility */
+    default:    return;
+  }
+  for(i=1, j=0; z[i]; i++){
+    if( z[i]==quote ){
+      if( z[i+1]==quote ){
+        z[j++] = quote;
+        i++;
+      }else{
+        z[j++] = 0;
+        break;
+      }
+    }else{
+      z[j++] = z[i];
+    }
+  }
+}
+
+/*
+** The input azIn is a NULL-terminated list of tokens.  Remove the first
+** token and all punctuation tokens.  Remove the quotes from
+** around string literal tokens.
+**
+** Example:
+**
+**     input:      tokenize chinese ( 'simplifed' , 'mixed' )
+**     output:     chinese simplifed mixed
+**
+** Another example:
+**
+**     input:      delimiters ( '[' , ']' , '...' )
+**     output:     [ ] ...
+*/
+static void tokenListToIdList(char **azIn){
+  int i, j;
+  if( azIn ){
+    for(i=0, j=-1; azIn[i]; i++){
+      if( isalnum(azIn[i][0]) || azIn[i][1] ){
+        dequoteString(azIn[i]);
+        if( j>=0 ){
+          azIn[j] = azIn[i];
+        }
+        j++;
+      }
+    }
+    azIn[j] = 0;
+  }
+}
+
+
+/*
+** Find the first alphanumeric token in the string zIn.  Null-terminate
+** this token.  Remove any quotation marks.  And return a pointer to
+** the result.
+*/
+static char *firstToken(char *zIn, char **pzTail){
+  int n, ttype;
+  while(1){
+    n = getToken(zIn, &ttype);
+    if( ttype==TOKEN_SPACE ){
+      zIn += n;
+    }else if( ttype==TOKEN_EOF ){
+      *pzTail = zIn;
+      return 0;
+    }else{
+      zIn[n] = 0;
+      *pzTail = &zIn[1];
+      dequoteString(zIn);
+      return zIn;
+    }
+  }
+  /*NOTREACHED*/
+}
+
+/* Return true if...
+**
+**   *  s begins with the string t, ignoring case
+**   *  s is longer than t
+**   *  The first character of s beyond t is not alphanumeric
+** 
+** Ignore leading space in *s.
+**
+** To put it another way, return true if the first token of
+** s[] is t[].
+*/
+static int startsWith(const char *s, const char *t){
+  while( isspace(*s) ){ s++; }
+  while( *t ){
+    if( tolower(*s++)!=tolower(*t++) ) return 0;
+  }
+  return *s!='_' && !isalnum(*s);
+}
+
+/*
+** An instance of this structure defines the "spec" of a
+** full text index.  This structure is populated by parseSpec
+** and used by fulltextConnect and fulltextCreate.
+*/
+typedef struct TableSpec {
+  const char *zDb;         /* Logical database name */
+  const char *zName;       /* Name of the full-text index */
+  int nColumn;             /* Number of columns to be indexed */
+  char **azColumn;         /* Original names of columns to be indexed */
+  char **azContentColumn;  /* Column names for %_content */
+  char **azTokenizer;      /* Name of tokenizer and its arguments */
+} TableSpec;
+
+/*
+** Reclaim all of the memory used by a TableSpec
+*/
+static void clearTableSpec(TableSpec *p) {
+  free(p->azColumn);
+  free(p->azContentColumn);
+  free(p->azTokenizer);
+}
+
+/* Parse a CREATE VIRTUAL TABLE statement, which looks like this:
+ *
+ * CREATE VIRTUAL TABLE email
+ *        USING fts2(subject, body, tokenize mytokenizer(myarg))
+ *
+ * We return parsed information in a TableSpec structure.
+ * 
+ */
+static int parseSpec(TableSpec *pSpec, int argc, const char *const*argv,
+                     char**pzErr){
+  int i, n;
+  char *z, *zDummy;
+  char **azArg;
+  const char *zTokenizer = 0;    /* argv[] entry describing the tokenizer */
+
+  assert( argc>=3 );
+  /* Current interface:
+  ** argv[0] - module name
+  ** argv[1] - database name
+  ** argv[2] - table name
+  ** argv[3..] - columns, optionally followed by tokenizer specification
+  **             and snippet delimiters specification.
+  */
+
+  /* Make a copy of the complete argv[][] array in a single allocation.
+  ** The argv[][] array is read-only and transient.  We can write to the
+  ** copy in order to modify things and the copy is persistent.
+  */
+  CLEAR(pSpec);
+  for(i=n=0; i<argc; i++){
+    n += strlen(argv[i]) + 1;
+  }
+  azArg = malloc( sizeof(char*)*argc + n );
+  if( azArg==0 ){
+    return SQLITE_NOMEM;
+  }
+  z = (char*)&azArg[argc];
+  for(i=0; i<argc; i++){
+    azArg[i] = z;
+    strcpy(z, argv[i]);
+    z += strlen(z)+1;
+  }
+
+  /* Identify the column names and the tokenizer and delimiter arguments
+  ** in the argv[][] array.
+  */
+  pSpec->zDb = azArg[1];
+  pSpec->zName = azArg[2];
+  pSpec->nColumn = 0;
+  pSpec->azColumn = azArg;
+  zTokenizer = "tokenize simple";
+  for(i=3; i<argc; ++i){
+    if( startsWith(azArg[i],"tokenize") ){
+      zTokenizer = azArg[i];
+    }else{
+      z = azArg[pSpec->nColumn] = firstToken(azArg[i], &zDummy);
+      pSpec->nColumn++;
+    }
+  }
+  if( pSpec->nColumn==0 ){
+    azArg[0] = "content";
+    pSpec->nColumn = 1;
+  }
+
+  /*
+  ** Construct the list of content column names.
+  **
+  ** Each content column name will be of the form cNNAAAA
+  ** where NN is the column number and AAAA is the sanitized
+  ** column name.  "sanitized" means that special characters are
+  ** converted to "_".  The cNN prefix guarantees that all column
+  ** names are unique.
+  **
+  ** The AAAA suffix is not strictly necessary.  It is included
+  ** for the convenience of people who might examine the generated
+  ** %_content table and wonder what the columns are used for.
+  */
+  pSpec->azContentColumn = malloc( pSpec->nColumn * sizeof(char *) );
+  if( pSpec->azContentColumn==0 ){
+    clearTableSpec(pSpec);
+    return SQLITE_NOMEM;
+  }
+  for(i=0; i<pSpec->nColumn; i++){
+    char *p;
+    pSpec->azContentColumn[i] = sqlite3_mprintf("c%d%s", i, azArg[i]);
+    for (p = pSpec->azContentColumn[i]; *p ; ++p) {
+      if( !isalnum(*p) ) *p = '_';
+    }
+  }
+
+  /*
+  ** Parse the tokenizer specification string.
+  */
+  pSpec->azTokenizer = tokenizeString(zTokenizer, &n);
+  tokenListToIdList(pSpec->azTokenizer);
+
+  return SQLITE_OK;
+}
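+
+/* For illustration (hypothetical table and column names): for
+**
+**   CREATE VIRTUAL TABLE email USING fts2(subject, body, tokenize porter)
+**
+** parseSpec() produces nColumn==2, azColumn=={"subject","body"},
+** azContentColumn=={"c0subject","c1body"}, and azTokenizer=={"porter"}
+** (the leading "tokenize" keyword is removed by tokenListToIdList()).
+*/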
+
+/*
+** Generate a CREATE TABLE statement that describes the schema of
+** the virtual table.  Return a pointer to this schema string.
+**
+** Space is obtained from sqlite3_mprintf() and should be freed
+** using sqlite3_free().
+*/
+static char *fulltextSchema(
+  int nColumn,                  /* Number of columns */
+  const char *const* azColumn,  /* List of columns */
+  const char *zTableName        /* Name of the table */
+){
+  int i;
+  char *zSchema, *zNext;
+  const char *zSep = "(";
+  zSchema = sqlite3_mprintf("CREATE TABLE x");
+  for(i=0; i<nColumn; i++){
+    zNext = sqlite3_mprintf("%s%s%Q", zSchema, zSep, azColumn[i]);
+    sqlite3_free(zSchema);
+    zSchema = zNext;
+    zSep = ",";
+  }
+  zNext = sqlite3_mprintf("%s,%Q)", zSchema, zTableName);
+  sqlite3_free(zSchema);
+  return zNext;
+}
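+
+/* For illustration (hypothetical names): for columns "subject" and
+** "body" in a table named "email", fulltextSchema() returns
+**
+**   CREATE TABLE x('subject','body','email')
+**
+** The extra final column is named after the table itself so that the
+** table name can be used on the left-hand side of MATCH.
+*/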
+
+/*
+** Build a new sqlite3_vtab structure that will describe the
+** fulltext index defined by spec.
+*/
+static int constructVtab(
+  sqlite3 *db,              /* The SQLite database connection */
+  TableSpec *spec,          /* Parsed spec information from parseSpec() */
+  sqlite3_vtab **ppVTab,    /* Write the resulting vtab structure here */
+  char **pzErr              /* Write any error message here */
+){
+  int rc;
+  int n;
+  fulltext_vtab *v = 0;
+  const sqlite3_tokenizer_module *m = NULL;
+  char *schema;
+
+  v = (fulltext_vtab *) malloc(sizeof(fulltext_vtab));
+  if( v==0 ) return SQLITE_NOMEM;
+  CLEAR(v);
+  /* sqlite will initialize v->base */
+  v->db = db;
+  v->zDb = spec->zDb;       /* Freed when azColumn is freed */
+  v->zName = spec->zName;   /* Freed when azColumn is freed */
+  v->nColumn = spec->nColumn;
+  v->azContentColumn = spec->azContentColumn;
+  spec->azContentColumn = 0;
+  v->azColumn = spec->azColumn;
+  spec->azColumn = 0;
+
+  if( spec->azTokenizer==0 ){
+    return SQLITE_NOMEM;
+  }
+  /* TODO(shess) For now, add new tokenizers as else if clauses. */
+  if( spec->azTokenizer[0]==0 || startsWith(spec->azTokenizer[0], "simple") ){
+    sqlite3Fts2SimpleTokenizerModule(&m);
+  }else if( startsWith(spec->azTokenizer[0], "porter") ){
+    sqlite3Fts2PorterTokenizerModule(&m);
+  }else{
+    *pzErr = sqlite3_mprintf("unknown tokenizer: %s", spec->azTokenizer[0]);
+    rc = SQLITE_ERROR;
+    goto err;
+  }
+  for(n=0; spec->azTokenizer[n]; n++){}
+  if( n ){
+    rc = m->xCreate(n-1, (const char*const*)&spec->azTokenizer[1],
+                    &v->pTokenizer);
+  }else{
+    rc = m->xCreate(0, 0, &v->pTokenizer);
+  }
+  if( rc!=SQLITE_OK ) goto err;
+  v->pTokenizer->pModule = m;
+
+  /* TODO: verify the existence of backing tables foo_content, foo_term */
+
+  schema = fulltextSchema(v->nColumn, (const char*const*)v->azColumn,
+                          spec->zName);
+  rc = sqlite3_declare_vtab(db, schema);
+  sqlite3_free(schema);
+  if( rc!=SQLITE_OK ) goto err;
+
+  memset(v->pFulltextStatements, 0, sizeof(v->pFulltextStatements));
+
+  *ppVTab = &v->base;
+  TRACE(("FTS2 Connect %p\n", v));
+
+  return rc;
+
+err:
+  fulltext_vtab_destroy(v);
+  return rc;
+}
+
+static int fulltextConnect(
+  sqlite3 *db,
+  void *pAux,
+  int argc, const char *const*argv,
+  sqlite3_vtab **ppVTab,
+  char **pzErr
+){
+  TableSpec spec;
+  int rc = parseSpec(&spec, argc, argv, pzErr);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = constructVtab(db, &spec, ppVTab, pzErr);
+  clearTableSpec(&spec);
+  return rc;
+}
+
+/* The %_content table holds the text of each document, with
+** the rowid used as the docid.
+*/
+/* TODO(shess) This comment needs elaboration to match the updated
+** code.  Work it into the top-of-file comment at that time.
+*/
+static int fulltextCreate(sqlite3 *db, void *pAux,
+                          int argc, const char * const *argv,
+                          sqlite3_vtab **ppVTab, char **pzErr){
+  int rc;
+  TableSpec spec;
+  StringBuffer schema;
+  TRACE(("FTS2 Create\n"));
+
+  rc = parseSpec(&spec, argc, argv, pzErr);
+  if( rc!=SQLITE_OK ) return rc;
+
+  initStringBuffer(&schema);
+  append(&schema, "CREATE TABLE %_content(");
+  appendList(&schema, spec.nColumn, spec.azContentColumn);
+  append(&schema, ")");
+  rc = sql_exec(db, spec.zDb, spec.zName, stringBufferData(&schema));
+  stringBufferDestroy(&schema);
+  if( rc!=SQLITE_OK ) goto out;
+
+  rc = sql_exec(db, spec.zDb, spec.zName,
+                "create table %_segments(block blob);");
+  if( rc!=SQLITE_OK ) goto out;
+
+  rc = sql_exec(db, spec.zDb, spec.zName,
+                "create table %_segdir("
+                "  level integer,"
+                "  idx integer,"
+                "  start_block integer,"
+                "  leaves_end_block integer,"
+                "  end_block integer,"
+                "  root blob,"
+                "  primary key(level, idx)"
+                ");");
+  if( rc!=SQLITE_OK ) goto out;
+
+  rc = constructVtab(db, &spec, ppVTab, pzErr);
+
+out:
+  clearTableSpec(&spec);
+  return rc;
+}
+
+/* Decide how to handle an SQL query. */
+static int fulltextBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
+  int i;
+  TRACE(("FTS2 BestIndex\n"));
+
+  for(i=0; i<pInfo->nConstraint; ++i){
+    const struct sqlite3_index_constraint *pConstraint;
+    pConstraint = &pInfo->aConstraint[i];
+    if( pConstraint->usable ) {
+      if( pConstraint->iColumn==-1 &&
+          pConstraint->op==SQLITE_INDEX_CONSTRAINT_EQ ){
+        pInfo->idxNum = QUERY_ROWID;      /* lookup by rowid */
+        TRACE(("FTS2 QUERY_ROWID\n"));
+      } else if( pConstraint->iColumn>=0 &&
+                 pConstraint->op==SQLITE_INDEX_CONSTRAINT_MATCH ){
+        /* full-text search */
+        pInfo->idxNum = QUERY_FULLTEXT + pConstraint->iColumn;
+        TRACE(("FTS2 QUERY_FULLTEXT %d\n", pConstraint->iColumn));
+      } else continue;
+
+      pInfo->aConstraintUsage[i].argvIndex = 1;
+      pInfo->aConstraintUsage[i].omit = 1;
+
+      /* An arbitrary value for now.
+       * TODO: Perhaps rowid matches should be considered cheaper than
+       * full-text searches. */
+      pInfo->estimatedCost = 1.0;   
+
+      return SQLITE_OK;
+    }
+  }
+  pInfo->idxNum = QUERY_GENERIC;
+  return SQLITE_OK;
+}
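+
+/* For illustration (hypothetical table name "email"):
+**
+**   SELECT rowid FROM email WHERE rowid = ?       -> QUERY_ROWID
+**   SELECT rowid FROM email WHERE body MATCH ?    -> QUERY_FULLTEXT + 1
+**   SELECT rowid FROM email                       -> QUERY_GENERIC
+**
+** assuming "body" is the second declared column (iColumn==1).
+*/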
+
+static int fulltextDisconnect(sqlite3_vtab *pVTab){
+  TRACE(("FTS2 Disconnect %p\n", pVTab));
+  fulltext_vtab_destroy((fulltext_vtab *)pVTab);
+  return SQLITE_OK;
+}
+
+static int fulltextDestroy(sqlite3_vtab *pVTab){
+  fulltext_vtab *v = (fulltext_vtab *)pVTab;
+  int rc;
+
+  TRACE(("FTS2 Destroy %p\n", pVTab));
+  rc = sql_exec(v->db, v->zDb, v->zName,
+                "drop table if exists %_content;"
+                "drop table if exists %_segments;"
+                "drop table if exists %_segdir;"
+                );
+  if( rc!=SQLITE_OK ) return rc;
+
+  fulltext_vtab_destroy((fulltext_vtab *)pVTab);
+  return SQLITE_OK;
+}
+
+static int fulltextOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor){
+  fulltext_cursor *c;
+
+  c = (fulltext_cursor *) calloc(sizeof(fulltext_cursor), 1);
+  /* sqlite will initialize c->base */
+  *ppCursor = &c->base;
+  TRACE(("FTS2 Open %p: %p\n", pVTab, c));
+
+  return SQLITE_OK;
+}
+
+
+/* Free all of the dynamically allocated memory held by *q
+*/
+static void queryClear(Query *q){
+  int i;
+  for(i = 0; i < q->nTerms; ++i){
+    free(q->pTerms[i].pTerm);
+  }
+  free(q->pTerms);
+  CLEAR(q);
+}
+
+/* Free all of the dynamically allocated memory held by the
+** Snippet
+*/
+static void snippetClear(Snippet *p){
+  free(p->aMatch);
+  free(p->zOffset);
+  free(p->zSnippet);
+  CLEAR(p);
+}
+/*
+** Append a single entry to the p->aMatch[] log.
+*/
+static void snippetAppendMatch(
+  Snippet *p,               /* Append the entry to this snippet */
+  int iCol, int iTerm,      /* The column and query term */
+  int iStart, int nByte     /* Offset and size of the match */
+){
+  int i;
+  struct snippetMatch *pMatch;
+  if( p->nMatch+1>=p->nAlloc ){
+    p->nAlloc = p->nAlloc*2 + 10;
+    p->aMatch = realloc(p->aMatch, p->nAlloc*sizeof(p->aMatch[0]) );
+    if( p->aMatch==0 ){
+      p->nMatch = 0;
+      p->nAlloc = 0;
+      return;
+    }
+  }
+  i = p->nMatch++;
+  pMatch = &p->aMatch[i];
+  pMatch->iCol = iCol;
+  pMatch->iTerm = iTerm;
+  pMatch->iStart = iStart;
+  pMatch->nByte = nByte;
+}
+
+/*
+** Sizing information for the circular buffer used in snippetOffsetsOfColumn()
+*/
+#define FTS2_ROTOR_SZ   (32)
+#define FTS2_ROTOR_MASK (FTS2_ROTOR_SZ-1)
+
+/*
+** Add entries to pSnippet->aMatch[] for every match that occurs against
+** document zDoc[0..nDoc-1] which is stored in column iColumn.
+*/
+static void snippetOffsetsOfColumn(
+  Query *pQuery,
+  Snippet *pSnippet,
+  int iColumn,
+  const char *zDoc,
+  int nDoc
+){
+  const sqlite3_tokenizer_module *pTModule;  /* The tokenizer module */
+  sqlite3_tokenizer *pTokenizer;             /* The specific tokenizer */
+  sqlite3_tokenizer_cursor *pTCursor;        /* Tokenizer cursor */
+  fulltext_vtab *pVtab;                /* The full text index */
+  int nColumn;                         /* Number of columns in the index */
+  const QueryTerm *aTerm;              /* Query string terms */
+  int nTerm;                           /* Number of query string terms */  
+  int i, j;                            /* Loop counters */
+  int rc;                              /* Return code */
+  unsigned int match, prevMatch;       /* Phrase search bitmasks */
+  const char *zToken;                  /* Next token from the tokenizer */
+  int nToken;                          /* Size of zToken */
+  int iBegin, iEnd, iPos;              /* Offsets of beginning and end */
+
+  /* The following variables keep a circular buffer of the last
+  ** few tokens */
+  unsigned int iRotor = 0;             /* Index of current token */
+  int iRotorBegin[FTS2_ROTOR_SZ];      /* Beginning offset of token */
+  int iRotorLen[FTS2_ROTOR_SZ];        /* Length of token */
+
+  pVtab = pQuery->pFts;
+  nColumn = pVtab->nColumn;
+  pTokenizer = pVtab->pTokenizer;
+  pTModule = pTokenizer->pModule;
+  rc = pTModule->xOpen(pTokenizer, zDoc, nDoc, &pTCursor);
+  if( rc ) return;
+  pTCursor->pTokenizer = pTokenizer;
+  aTerm = pQuery->pTerms;
+  nTerm = pQuery->nTerms;
+  if( nTerm>=FTS2_ROTOR_SZ ){
+    nTerm = FTS2_ROTOR_SZ - 1;
+  }
+  prevMatch = 0;
+  while(1){
+    rc = pTModule->xNext(pTCursor, &zToken, &nToken, &iBegin, &iEnd, &iPos);
+    if( rc ) break;
+    iRotorBegin[iRotor&FTS2_ROTOR_MASK] = iBegin;
+    iRotorLen[iRotor&FTS2_ROTOR_MASK] = iEnd-iBegin;
+    match = 0;
+    for(i=0; i<nTerm; i++){
+      int iCol;
+      iCol = aTerm[i].iColumn;
+      if( iCol>=0 && iCol<nColumn && iCol!=iColumn ) continue;
+      if( aTerm[i].nTerm!=nToken ) continue;
+      if( memcmp(aTerm[i].pTerm, zToken, nToken) ) continue;
+      if( aTerm[i].iPhrase>1 && (prevMatch & (1<<i))==0 ) continue;
+      match |= 1<<i;
+      if( i==nTerm-1 || aTerm[i+1].iPhrase==1 ){
+        for(j=aTerm[i].iPhrase-1; j>=0; j--){
+          int k = (iRotor-j) & FTS2_ROTOR_MASK;
+          snippetAppendMatch(pSnippet, iColumn, i-j,
+                iRotorBegin[k], iRotorLen[k]);
+        }
+      }
+    }
+    prevMatch = match<<1;
+    iRotor++;
+  }
+  pTModule->xClose(pTCursor);  
+}
+
+
+/*
+** Compute all offsets for the current row of the query.  
+** If the offsets have already been computed, this routine is a no-op.
+*/
+static void snippetAllOffsets(fulltext_cursor *p){
+  int nColumn;
+  int iColumn, i;
+  int iFirst, iLast;
+  fulltext_vtab *pFts;
+
+  if( p->snippet.nMatch ) return;
+  if( p->q.nTerms==0 ) return;
+  pFts = p->q.pFts;
+  nColumn = pFts->nColumn;
+  iColumn = p->iCursorType;
+  if( iColumn<0 || iColumn>=nColumn ){
+    iFirst = 0;
+    iLast = nColumn-1;
+  }else{
+    iFirst = iColumn;
+    iLast = iColumn;
+  }
+  for(i=iFirst; i<=iLast; i++){
+    const char *zDoc;
+    int nDoc;
+    zDoc = (const char*)sqlite3_column_text(p->pStmt, i+1);
+    nDoc = sqlite3_column_bytes(p->pStmt, i+1);
+    snippetOffsetsOfColumn(&p->q, &p->snippet, i, zDoc, nDoc);
+  }
+}
+
+/*
+** Convert the information in the aMatch[] array of the snippet
+** into the string zOffset[0..nOffset-1].
+*/
+static void snippetOffsetText(Snippet *p){
+  int i;
+  int cnt = 0;
+  StringBuffer sb;
+  char zBuf[200];
+  if( p->zOffset ) return;
+  initStringBuffer(&sb);
+  for(i=0; i<p->nMatch; i++){
+    struct snippetMatch *pMatch = &p->aMatch[i];
+    zBuf[0] = ' ';
+    sprintf(&zBuf[cnt>0], "%d %d %d %d", pMatch->iCol,
+        pMatch->iTerm, pMatch->iStart, pMatch->nByte);
+    append(&sb, zBuf);
+    cnt++;
+  }
+  p->zOffset = stringBufferData(&sb);
+  p->nOffset = stringBufferLength(&sb);
+}
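+
+/* For example, two matches might cause snippetOffsetText() above to
+** produce the string "0 0 4 5 1 0 22 5": space-separated
+** (iCol, iTerm, iStart, nByte) quadruples, one per match.
+*/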
+
+/*
+** zDoc[0..nDoc-1] is a phrase of text.  aMatch[0..nMatch-1] is a set
+** of matching words, some of which might be in zDoc.  zDoc is the text
+** of column number iCol.
+**
+** iBreak is a suggested spot in zDoc where we could begin or end an
+** excerpt.  Return a value similar to iBreak but possibly adjusted
+** to be a little to the left or right so that the break point is better.
+*/
+static int wordBoundary(
+  int iBreak,                   /* The suggested break point */
+  const char *zDoc,             /* Document text */
+  int nDoc,                     /* Number of bytes in zDoc[] */
+  struct snippetMatch *aMatch,  /* Matching words */
+  int nMatch,                   /* Number of entries in aMatch[] */
+  int iCol                      /* The column number for zDoc[] */
+){
+  int i;
+  if( iBreak<=10 ){
+    return 0;
+  }
+  if( iBreak>=nDoc-10 ){
+    return nDoc;
+  }
+  for(i=0; i<nMatch && aMatch[i].iCol<iCol; i++){}
+  while( i<nMatch && aMatch[i].iStart+aMatch[i].nByte<iBreak ){ i++; }
+  if( i<nMatch ){
+    if( aMatch[i].iStart<iBreak+10 ){
+      return aMatch[i].iStart;
+    }
+    if( i>0 && aMatch[i-1].iStart+aMatch[i-1].nByte>=iBreak ){
+      return aMatch[i-1].iStart;
+    }
+  }
+  for(i=1; i<=10; i++){
+    if( isspace(zDoc[iBreak-i]) ){
+      return iBreak - i + 1;
+    }
+    if( isspace(zDoc[iBreak+i]) ){
+      return iBreak + i + 1;
+    }
+  }
+  return iBreak;
+}
+
+
+
+/*
+** Allowed values for Snippet.aMatch[].snStatus
+*/
+#define SNIPPET_IGNORE  0   /* It is ok to omit this match from the snippet */
+#define SNIPPET_DESIRED 1   /* We want to include this match in the snippet */
+
+/*
+** Generate the text of a snippet.
+*/
+static void snippetText(
+  fulltext_cursor *pCursor,   /* The cursor we need the snippet for */
+  const char *zStartMark,     /* Markup to appear before each match */
+  const char *zEndMark,       /* Markup to appear after each match */
+  const char *zEllipsis       /* Ellipsis mark */
+){
+  int i, j;
+  struct snippetMatch *aMatch;
+  int nMatch;
+  int nDesired;
+  StringBuffer sb;
+  int tailCol;
+  int tailOffset;
+  int iCol;
+  int nDoc;
+  const char *zDoc;
+  int iStart, iEnd;
+  int tailEllipsis = 0;
+  int iMatch;
+
+  free(pCursor->snippet.zSnippet);
+  pCursor->snippet.zSnippet = 0;
+  aMatch = pCursor->snippet.aMatch;
+  nMatch = pCursor->snippet.nMatch;
+  initStringBuffer(&sb);
+
+  for(i=0; i<nMatch; i++){
+    aMatch[i].snStatus = SNIPPET_IGNORE;
+  }
+  nDesired = 0;
+  for(i=0; i<pCursor->q.nTerms; i++){
+    for(j=0; j<nMatch; j++){
+      if( aMatch[j].iTerm==i ){
+        aMatch[j].snStatus = SNIPPET_DESIRED;
+        nDesired++;
+        break;
+      }
+    }
+  }
+
+  iMatch = 0;
+  tailCol = -1;
+  tailOffset = 0;
+  for(i=0; i<nMatch && nDesired>0; i++){
+    if( aMatch[i].snStatus!=SNIPPET_DESIRED ) continue;
+    nDesired--;
+    iCol = aMatch[i].iCol;
+    zDoc = (const char*)sqlite3_column_text(pCursor->pStmt, iCol+1);
+    nDoc = sqlite3_column_bytes(pCursor->pStmt, iCol+1);
+    iStart = aMatch[i].iStart - 40;
+    iStart = wordBoundary(iStart, zDoc, nDoc, aMatch, nMatch, iCol);
+    if( iStart<=10 ){
+      iStart = 0;
+    }
+    if( iCol==tailCol && iStart<=tailOffset+20 ){
+      iStart = tailOffset;
+    }
+    if( (iCol!=tailCol && tailCol>=0) || iStart!=tailOffset ){
+      trimWhiteSpace(&sb);
+      appendWhiteSpace(&sb);
+      append(&sb, zEllipsis);
+      appendWhiteSpace(&sb);
+    }
+    iEnd = aMatch[i].iStart + aMatch[i].nByte + 40;
+    iEnd = wordBoundary(iEnd, zDoc, nDoc, aMatch, nMatch, iCol);
+    if( iEnd>=nDoc-10 ){
+      iEnd = nDoc;
+      tailEllipsis = 0;
+    }else{
+      tailEllipsis = 1;
+    }
+    while( iMatch<nMatch && aMatch[iMatch].iCol<iCol ){ iMatch++; }
+    while( iStart<iEnd ){
+      while( iMatch<nMatch && aMatch[iMatch].iStart<iStart
+             && aMatch[iMatch].iCol<=iCol ){
+        iMatch++;
+      }
+      if( iMatch<nMatch && aMatch[iMatch].iStart<iEnd
+             && aMatch[iMatch].iCol==iCol ){
+        nappend(&sb, &zDoc[iStart], aMatch[iMatch].iStart - iStart);
+        iStart = aMatch[iMatch].iStart;
+        append(&sb, zStartMark);
+        nappend(&sb, &zDoc[iStart], aMatch[iMatch].nByte);
+        append(&sb, zEndMark);
+        iStart += aMatch[iMatch].nByte;
+        for(j=iMatch+1; j<nMatch; j++){
+          if( aMatch[j].iTerm==aMatch[iMatch].iTerm
+              && aMatch[j].snStatus==SNIPPET_DESIRED ){
+            nDesired--;
+            aMatch[j].snStatus = SNIPPET_IGNORE;
+          }
+        }
+      }else{
+        nappend(&sb, &zDoc[iStart], iEnd - iStart);
+        iStart = iEnd;
+      }
+    }
+    tailCol = iCol;
+    tailOffset = iEnd;
+  }
+  trimWhiteSpace(&sb);
+  if( tailEllipsis ){
+    appendWhiteSpace(&sb);
+    append(&sb, zEllipsis);
+  }
+  pCursor->snippet.zSnippet = stringBufferData(&sb);
+  pCursor->snippet.nSnippet = stringBufferLength(&sb);
+}
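+
+/* For example, assuming zStartMark="[", zEndMark="]" and zEllipsis="...",
+** snippetText() above could produce something like:
+**   "...the quick brown [fox] jumps over..."
+** for a document matching the term "fox".
+*/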
+
+
+/*
+** Close the cursor.  For additional information see the documentation
+** on the xClose method of the virtual table interface.
+*/
+static int fulltextClose(sqlite3_vtab_cursor *pCursor){
+  fulltext_cursor *c = (fulltext_cursor *) pCursor;
+  TRACE(("FTS2 Close %p\n", c));
+  sqlite3_finalize(c->pStmt);
+  queryClear(&c->q);
+  snippetClear(&c->snippet);
+  if( c->result.nData!=0 ) dlrDestroy(&c->reader);
+  dataBufferDestroy(&c->result);
+  free(c);
+  return SQLITE_OK;
+}
+
+static int fulltextNext(sqlite3_vtab_cursor *pCursor){
+  fulltext_cursor *c = (fulltext_cursor *) pCursor;
+  int rc;
+
+  TRACE(("FTS2 Next %p\n", pCursor));
+  snippetClear(&c->snippet);
+  if( c->iCursorType < QUERY_FULLTEXT ){
+    /* TODO(shess) Handle SQLITE_SCHEMA AND SQLITE_BUSY. */
+    rc = sqlite3_step(c->pStmt);
+    switch( rc ){
+      case SQLITE_ROW:
+        c->eof = 0;
+        return SQLITE_OK;
+      case SQLITE_DONE:
+        c->eof = 1;
+        return SQLITE_OK;
+      default:
+        c->eof = 1;
+        return rc;
+    }
+  } else {  /* full-text query */
+    rc = sqlite3_reset(c->pStmt);
+    if( rc!=SQLITE_OK ) return rc;
+
+    if( c->result.nData==0 || dlrAtEnd(&c->reader) ){
+      c->eof = 1;
+      return SQLITE_OK;
+    }
+    rc = sqlite3_bind_int64(c->pStmt, 1, dlrDocid(&c->reader));
+    dlrStep(&c->reader);
+    if( rc!=SQLITE_OK ) return rc;
+    /* TODO(shess) Handle SQLITE_SCHEMA AND SQLITE_BUSY. */
+    rc = sqlite3_step(c->pStmt);
+    if( rc==SQLITE_ROW ){   /* the case we expect */
+      c->eof = 0;
+      return SQLITE_OK;
+    }
+    /* an error occurred; abort */
+    return rc==SQLITE_DONE ? SQLITE_ERROR : rc;
+  }
+}
+
+
+/* TODO(shess) If we pushed LeafReader to the top of the file, or to
+** another file, term_select() could be pushed above
+** docListOfTerm().
+*/
+static int termSelect(fulltext_vtab *v, int iColumn,
+                      const char *pTerm, int nTerm,
+                      DocListType iType, DataBuffer *out);
+
+/* Return a DocList corresponding to the query term *pQTerm.  If *pQTerm
+** is the first term of a phrase query, go ahead and evaluate the phrase
+** query and return the doclist for the entire phrase query.
+**
+** The resulting DL_DOCIDS doclist is stored in pResult, which is
+** overwritten.
+*/
+static int docListOfTerm(
+  fulltext_vtab *v,   /* The full text index */
+  int iColumn,        /* column to restrict to.  No restriction if >=nColumn */
+  QueryTerm *pQTerm,  /* Term we are looking for, or 1st term of a phrase */
+  DataBuffer *pResult /* Write the result here */
+){
+  DataBuffer left, right, new;
+  int i, rc;
+
+  /* No phrase search if no position info. */
+  assert( pQTerm->nPhrase==0 || DL_DEFAULT!=DL_DOCIDS );
+
+  dataBufferInit(&left, 0);
+  rc = termSelect(v, iColumn, pQTerm->pTerm, pQTerm->nTerm,
+                  0<pQTerm->nPhrase ? DL_POSITIONS : DL_DOCIDS, &left);
+  if( rc ) return rc;
+  for(i=1; i<=pQTerm->nPhrase && left.nData>0; i++){
+    dataBufferInit(&right, 0);
+    rc = termSelect(v, iColumn, pQTerm[i].pTerm, pQTerm[i].nTerm,
+                    DL_POSITIONS, &right);
+    if( rc ){
+      dataBufferDestroy(&left);
+      return rc;
+    }
+    dataBufferInit(&new, 0);
+    docListPhraseMerge(left.pData, left.nData, right.pData, right.nData,
+                       i<pQTerm->nPhrase ? DL_POSITIONS : DL_DOCIDS, &new);
+    dataBufferDestroy(&left);
+    dataBufferDestroy(&right);
+    left = new;
+  }
+  *pResult = left;
+  return SQLITE_OK;
+}
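+
+/* Informal example for docListOfTerm() above: for the phrase query
+** "full text", a DL_POSITIONS doclist for "full" is loaded into left,
+** then docListPhraseMerge() (defined earlier in this file) folds in the
+** positions doclist for "text", keeping only documents where the two
+** terms appear as an adjacent pair; the final merge emits a plain
+** DL_DOCIDS doclist, which becomes *pResult.
+*/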
+
+/* Add a new term pTerm[0..nTerm-1] to the query *q.
+*/
+static void queryAdd(Query *q, const char *pTerm, int nTerm){
+  QueryTerm *t;
+  ++q->nTerms;
+  q->pTerms = realloc(q->pTerms, q->nTerms * sizeof(q->pTerms[0]));
+  if( q->pTerms==0 ){
+    q->nTerms = 0;
+    return;
+  }
+  t = &q->pTerms[q->nTerms - 1];
+  CLEAR(t);
+  t->pTerm = malloc(nTerm+1);
+  memcpy(t->pTerm, pTerm, nTerm);
+  t->pTerm[nTerm] = 0;
+  t->nTerm = nTerm;
+  t->isOr = q->nextIsOr;
+  q->nextIsOr = 0;
+  t->iColumn = q->nextColumn;
+  q->nextColumn = q->dfltColumn;
+}
+
+/*
+** Check to see if the string zToken[0..nToken-1] matches any
+** column name in the virtual table.  If it does,
+** return the zero-indexed column number.  If not, return -1.
+*/
+static int checkColumnSpecifier(
+  fulltext_vtab *pVtab,    /* The virtual table */
+  const char *zToken,      /* Text of the token */
+  int nToken               /* Number of characters in the token */
+){
+  int i;
+  for(i=0; i<pVtab->nColumn; i++){
+    if( memcmp(pVtab->azColumn[i], zToken, nToken)==0
+        && pVtab->azColumn[i][nToken]==0 ){
+      return i;
+    }
+  }
+  return -1;
+}
+
+/*
+** Parse the text at pSegment[0..nSegment-1].  Add additional terms
+** to the query being assembled in pQuery.
+**
+** inPhrase is true if pSegment[0..nSegment-1] is contained within
+** double-quotes.  If inPhrase is true, then the first term
+** is marked with the number of terms in the phrase less one and
+** OR and "-" syntax is ignored.  If inPhrase is false, then every
+** term found is marked with nPhrase=0 and OR and "-" syntax is significant.
+*/
+static int tokenizeSegment(
+  sqlite3_tokenizer *pTokenizer,          /* The tokenizer to use */
+  const char *pSegment, int nSegment,     /* Query expression being parsed */
+  int inPhrase,                           /* True if within "..." */
+  Query *pQuery                           /* Append results here */
+){
+  const sqlite3_tokenizer_module *pModule = pTokenizer->pModule;
+  sqlite3_tokenizer_cursor *pCursor;
+  int firstIndex = pQuery->nTerms;
+  int iCol;
+  int nTerm = 1;
+
+  int rc = pModule->xOpen(pTokenizer, pSegment, nSegment, &pCursor);
+  if( rc!=SQLITE_OK ) return rc;
+  pCursor->pTokenizer = pTokenizer;
+
+  while( 1 ){
+    const char *pToken;
+    int nToken, iBegin, iEnd, iPos;
+
+    rc = pModule->xNext(pCursor,
+                        &pToken, &nToken,
+                        &iBegin, &iEnd, &iPos);
+    if( rc!=SQLITE_OK ) break;
+    if( !inPhrase &&
+        pSegment[iEnd]==':' &&
+         (iCol = checkColumnSpecifier(pQuery->pFts, pToken, nToken))>=0 ){
+      pQuery->nextColumn = iCol;
+      continue;
+    }
+    if( !inPhrase && pQuery->nTerms>0 && nToken==2
+         && pSegment[iBegin]=='O' && pSegment[iBegin+1]=='R' ){
+      pQuery->nextIsOr = 1;
+      continue;
+    }
+    queryAdd(pQuery, pToken, nToken);
+    if( !inPhrase && iBegin>0 && pSegment[iBegin-1]=='-' ){
+      pQuery->pTerms[pQuery->nTerms-1].isNot = 1;
+    }
+    pQuery->pTerms[pQuery->nTerms-1].iPhrase = nTerm;
+    if( inPhrase ){
+      nTerm++;
+    }
+  }
+
+  if( inPhrase && pQuery->nTerms>firstIndex ){
+    pQuery->pTerms[firstIndex].nPhrase = pQuery->nTerms - firstIndex - 1;
+  }
+
+  return pModule->xClose(pCursor);
+}
+
+/* Parse a query string, yielding a Query object pQuery.
+**
+** The calling function will need to call queryClear() to clean up
+** the dynamically allocated memory held by pQuery.
+*/
+static int parseQuery(
+  fulltext_vtab *v,        /* The fulltext index */
+  const char *zInput,      /* Input text of the query string */
+  int nInput,              /* Size of the input text */
+  int dfltColumn,          /* Default column of the index to match against */
+  Query *pQuery            /* Write the parse results here. */
+){
+  int iInput, inPhrase = 0;
+
+  if( zInput==0 ) nInput = 0;
+  if( nInput<0 ) nInput = strlen(zInput);
+  pQuery->nTerms = 0;
+  pQuery->pTerms = NULL;
+  pQuery->nextIsOr = 0;
+  pQuery->nextColumn = dfltColumn;
+  pQuery->dfltColumn = dfltColumn;
+  pQuery->pFts = v;
+
+  for(iInput=0; iInput<nInput; ++iInput){
+    int i;
+    for(i=iInput; i<nInput && zInput[i]!='"'; ++i){}
+    if( i>iInput ){
+      tokenizeSegment(v->pTokenizer, zInput+iInput, i-iInput, inPhrase,
+                       pQuery);
+    }
+    iInput = i;
+    if( i<nInput ){
+      assert( zInput[i]=='"' );
+      inPhrase = !inPhrase;
+    }
+  }
+
+  if( inPhrase ){
+    /* unmatched quote */
+    queryClear(pQuery);
+    return SQLITE_ERROR;
+  }
+  return SQLITE_OK;
+}
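+
+/* Example of the query syntax handled by parseQuery() and
+** tokenizeSegment() above, assuming a hypothetical column named "title":
+**
+**   title:sqlite -fossil "full text" OR index
+**
+** This parses into: "sqlite" restricted to the title column, "fossil"
+** with isNot set, the phrase terms "full" (iPhrase=1, nPhrase=1) and
+** "text" (iPhrase=2), and "index" with isOr set.
+*/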
+
+/* Perform a full-text query using the search expression in
+** zInput[0..nInput-1].  Return a list of matching documents
+** in pResult.
+**
+** Queries must match column iColumn.  If iColumn>=nColumn,
+** they are allowed to match against any column.
+*/
+static int fulltextQuery(
+  fulltext_vtab *v,      /* The full text index */
+  int iColumn,           /* Match against this column by default */
+  const char *zInput,    /* The query string */
+  int nInput,            /* Number of bytes in zInput[] */
+  DataBuffer *pResult,   /* Write the result doclist here */
+  Query *pQuery          /* Put parsed query string here */
+){
+  int i, iNext, rc;
+  DataBuffer left, right, or, new;
+  int nNot = 0;
+  QueryTerm *aTerm;
+
+  /* TODO(shess) I think that the queryClear() calls below are not
+  ** necessary, because fulltextClose() already clears the query.
+  */
+  rc = parseQuery(v, zInput, nInput, iColumn, pQuery);
+  if( rc!=SQLITE_OK ) return rc;
+
+  /* Empty or NULL queries return no results. */
+  if( pQuery->nTerms==0 ){
+    dataBufferInit(pResult, 0);
+    return SQLITE_OK;
+  }
+
+  /* Merge AND terms. */
+  /* TODO(shess) I think we can early-exit if( i>nNot && left.nData==0 ). */
+  aTerm = pQuery->pTerms;
+  for(i = 0; i<pQuery->nTerms; i=iNext){
+    if( aTerm[i].isNot ){
+      /* Handle all NOT terms in a separate pass */
+      nNot++;
+      iNext = i + aTerm[i].nPhrase+1;
+      continue;
+    }
+    iNext = i + aTerm[i].nPhrase + 1;
+    rc = docListOfTerm(v, aTerm[i].iColumn, &aTerm[i], &right);
+    if( rc ){
+      if( i!=nNot ) dataBufferDestroy(&left);
+      queryClear(pQuery);
+      return rc;
+    }
+    while( iNext<pQuery->nTerms && aTerm[iNext].isOr ){
+      rc = docListOfTerm(v, aTerm[iNext].iColumn, &aTerm[iNext], &or);
+      iNext += aTerm[iNext].nPhrase + 1;
+      if( rc ){
+        if( i!=nNot ) dataBufferDestroy(&left);
+        dataBufferDestroy(&right);
+        queryClear(pQuery);
+        return rc;
+      }
+      dataBufferInit(&new, 0);
+      docListOrMerge(right.pData, right.nData, or.pData, or.nData, &new);
+      dataBufferDestroy(&right);
+      dataBufferDestroy(&or);
+      right = new;
+    }
+    if( i==nNot ){           /* first term processed. */
+      left = right;
+    }else{
+      dataBufferInit(&new, 0);
+      docListAndMerge(left.pData, left.nData, right.pData, right.nData, &new);
+      dataBufferDestroy(&right);
+      dataBufferDestroy(&left);
+      left = new;
+    }
+  }
+
+  if( nNot==pQuery->nTerms ){
+    /* We do not yet know how to handle a query of only NOT terms */
+    return SQLITE_ERROR;
+  }
+
+  /* Do the EXCEPT terms */
+  for(i=0; i<pQuery->nTerms;  i += aTerm[i].nPhrase + 1){
+    if( !aTerm[i].isNot ) continue;
+    rc = docListOfTerm(v, aTerm[i].iColumn, &aTerm[i], &right);
+    if( rc ){
+      queryClear(pQuery);
+      dataBufferDestroy(&left);
+      return rc;
+    }
+    dataBufferInit(&new, 0);
+    docListExceptMerge(left.pData, left.nData, right.pData, right.nData, &new);
+    dataBufferDestroy(&right);
+    dataBufferDestroy(&left);
+    left = new;
+  }
+
+  *pResult = left;
+  return rc;
+}
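+
+/* Example for fulltextQuery() above: given the query
+** "sqlite OR fts2 database -beta", the doclists for "sqlite" and "fts2"
+** are OR-merged, that result is AND-merged with the doclist for
+** "database", and documents containing "beta" are then removed by the
+** EXCEPT pass.
+*/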
+
+/*
+** This is the xFilter interface for the virtual table.  See
+** the virtual table xFilter method documentation for additional
+** information.
+**
+** If idxNum==QUERY_GENERIC then do a full table scan against
+** the %_content table.
+**
+** If idxNum==QUERY_ROWID then do a rowid lookup for a single entry
+** in the %_content table.
+**
+** If idxNum>=QUERY_FULLTEXT then use the full text index.  The
+** column on the left-hand side of the MATCH operator is column
+** number idxNum-QUERY_FULLTEXT, 0 indexed.  argv[0] is the right-hand
+** side of the MATCH operator.
+*/
+/* TODO(shess) Upgrade the cursor initialization and destruction to
+** account for fulltextFilter() being called multiple times on the
+** same cursor.  The current solution is very fragile.  Apply fix to
+** fts2 as appropriate.
+*/
+static int fulltextFilter(
+  sqlite3_vtab_cursor *pCursor,     /* The cursor used for this query */
+  int idxNum, const char *idxStr,   /* Which indexing scheme to use */
+  int argc, sqlite3_value **argv    /* Arguments for the indexing scheme */
+){
+  fulltext_cursor *c = (fulltext_cursor *) pCursor;
+  fulltext_vtab *v = cursor_vtab(c);
+  int rc;
+  char *zSql;
+
+  TRACE(("FTS2 Filter %p\n",pCursor));
+
+  zSql = sqlite3_mprintf("select rowid, * from %%_content %s",
+                          idxNum==QUERY_GENERIC ? "" : "where rowid=?");
+  sqlite3_finalize(c->pStmt);
+  rc = sql_prepare(v->db, v->zDb, v->zName, &c->pStmt, zSql);
+  sqlite3_free(zSql);
+  if( rc!=SQLITE_OK ) return rc;
+
+  c->iCursorType = idxNum;
+  switch( idxNum ){
+    case QUERY_GENERIC:
+      break;
+
+    case QUERY_ROWID:
+      rc = sqlite3_bind_int64(c->pStmt, 1, sqlite3_value_int64(argv[0]));
+      if( rc!=SQLITE_OK ) return rc;
+      break;
+
+    default:   /* full-text search */
+    {
+      const char *zQuery = (const char *)sqlite3_value_text(argv[0]);
+      assert( idxNum<=QUERY_FULLTEXT+v->nColumn);
+      assert( argc==1 );
+      queryClear(&c->q);
+      if( c->result.nData!=0 ){
+        /* This case happens if the same cursor is used repeatedly. */
+        dlrDestroy(&c->reader);
+        dataBufferReset(&c->result);
+      }else{
+        dataBufferInit(&c->result, 0);
+      }
+      rc = fulltextQuery(v, idxNum-QUERY_FULLTEXT, zQuery, -1, &c->result, &c->q);
+      if( rc!=SQLITE_OK ) return rc;
+      if( c->result.nData!=0 ){
+        dlrInit(&c->reader, DL_DOCIDS, c->result.pData, c->result.nData);
+      }
+      break;
+    }
+  }
+
+  return fulltextNext(pCursor);
+}
+
+/* This is the xEof method of the virtual table.  The SQLite core
+** calls this routine to find out if it has reached the end of
+** a query's results set.
+*/
+static int fulltextEof(sqlite3_vtab_cursor *pCursor){
+  fulltext_cursor *c = (fulltext_cursor *) pCursor;
+  return c->eof;
+}
+
+/* This is the xColumn method of the virtual table.  The SQLite
+** core calls this method during a query when it needs the value
+** of a column from the virtual table.  This method needs to use
+** one of the sqlite3_result_*() routines to store the requested
+** value back in the pContext.
+*/
+static int fulltextColumn(sqlite3_vtab_cursor *pCursor,
+                          sqlite3_context *pContext, int idxCol){
+  fulltext_cursor *c = (fulltext_cursor *) pCursor;
+  fulltext_vtab *v = cursor_vtab(c);
+
+  if( idxCol<v->nColumn ){
+    sqlite3_value *pVal = sqlite3_column_value(c->pStmt, idxCol+1);
+    sqlite3_result_value(pContext, pVal);
+  }else if( idxCol==v->nColumn ){
+    /* The extra column whose name is the same as the table.
+    ** Return a blob which is a pointer to the cursor
+    */
+    sqlite3_result_blob(pContext, &c, sizeof(c), SQLITE_TRANSIENT);
+  }
+  return SQLITE_OK;
+}
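+
+/* Note on the blob returned above: the FTS2 snippet-related SQL
+** functions receive this hidden column as an argument and copy the
+** stored fulltext_cursor pointer back out of the blob, which is how
+** they find the cursor (and hence the current row) to operate on.
+*/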
+
+/* This is the xRowid method.  The SQLite core calls this routine to
+** retrieve the rowid for the current row of the result set.  The
+** rowid should be written to *pRowid.
+*/
+static int fulltextRowid(sqlite3_vtab_cursor *pCursor, sqlite_int64 *pRowid){
+  fulltext_cursor *c = (fulltext_cursor *) pCursor;
+
+  *pRowid = sqlite3_column_int64(c->pStmt, 0);
+  return SQLITE_OK;
+}
+
+/* Add all terms in [zText] to the given hash table.  If [iColumn] >= 0,
+ * we also store positions and offsets in the hash table using the given
+ * column number. */
+static int buildTerms(fulltext_vtab *v, fts2Hash *terms, sqlite_int64 iDocid,
+                      const char *zText, int iColumn){
+  sqlite3_tokenizer *pTokenizer = v->pTokenizer;
+  sqlite3_tokenizer_cursor *pCursor;
+  const char *pToken;
+  int nTokenBytes;
+  int iStartOffset, iEndOffset, iPosition;
+  int rc;
+
+  rc = pTokenizer->pModule->xOpen(pTokenizer, zText, -1, &pCursor);
+  if( rc!=SQLITE_OK ) return rc;
+
+  pCursor->pTokenizer = pTokenizer;
+  while( SQLITE_OK==pTokenizer->pModule->xNext(pCursor,
+                                               &pToken, &nTokenBytes,
+                                               &iStartOffset, &iEndOffset,
+                                               &iPosition) ){
+    PLWriter *p;
+
+    /* Positions can't be negative; we use -1 as a terminator internally. */
+    if( iPosition<0 ){
+      pTokenizer->pModule->xClose(pCursor);
+      return SQLITE_ERROR;
+    }
+
+    p = fts2HashFind(terms, pToken, nTokenBytes);
+    if( p==NULL ){
+      p = plwNew(iDocid, DL_DEFAULT);
+      fts2HashInsert(terms, pToken, nTokenBytes, p);
+    }
+    if( iColumn>=0 ){
+      plwAdd(p, iColumn, iPosition, iStartOffset, iEndOffset);
+    }
+  }
+
+  /* TODO(shess) Check return?  Should this be able to cause errors at
+  ** this point?  Actually, same question about sqlite3_finalize(),
+  ** though one could argue that failure there means that the data is
+  ** not durable.  *ponder*
+  */
+  pTokenizer->pModule->xClose(pCursor);
+  return rc;
+}
+
+/* Add doclists for all terms in [pValues] to the hash table [terms]. */
+static int insertTerms(fulltext_vtab *v, fts2Hash *terms, sqlite_int64 iRowid,
+                sqlite3_value **pValues){
+  int i;
+  for(i = 0; i < v->nColumn ; ++i){
+    char *zText = (char*)sqlite3_value_text(pValues[i]);
+    int rc = buildTerms(v, terms, iRowid, zText, i);
+    if( rc!=SQLITE_OK ) return rc;
+  }
+  return SQLITE_OK;
+}
+
+/* Add empty doclists for all terms in the given row's content to the hash
+ * table [pTerms]. */
+static int deleteTerms(fulltext_vtab *v, fts2Hash *pTerms, sqlite_int64 iRowid){
+  const char **pValues;
+  int i, rc;
+
+  /* TODO(shess) Should we allow such tables at all? */
+  if( DL_DEFAULT==DL_DOCIDS ) return SQLITE_ERROR;
+
+  rc = content_select(v, iRowid, &pValues);
+  if( rc!=SQLITE_OK ) return rc;
+
+  for(i = 0 ; i < v->nColumn; ++i) {
+    rc = buildTerms(v, pTerms, iRowid, pValues[i], -1);
+    if( rc!=SQLITE_OK ) break;
+  }
+
+  freeStringArray(v->nColumn, pValues);
+  return rc;
+}
+
+/* Insert a row into the %_content table; set *piRowid to be the ID of the
+ * new row.  Fill [pTerms] with new doclists for the full-text index. */
+static int index_insert(fulltext_vtab *v, sqlite3_value *pRequestRowid,
+                        sqlite3_value **pValues,
+                        sqlite_int64 *piRowid, fts2Hash *pTerms){
+  int rc;
+
+  rc = content_insert(v, pRequestRowid, pValues);  /* execute an SQL INSERT */
+  if( rc!=SQLITE_OK ) return rc;
+  *piRowid = sqlite3_last_insert_rowid(v->db);
+  return insertTerms(v, pTerms, *piRowid, pValues);
+}
+
+/* Delete a row from the %_content table; fill [pTerms] with empty doclists
+ * to be written to the full-text index. */
+static int index_delete(fulltext_vtab *v, sqlite_int64 iRow, fts2Hash *pTerms){
+  int rc = deleteTerms(v, pTerms, iRow);
+  if( rc!=SQLITE_OK ) return rc;
+  return content_delete(v, iRow);  /* execute an SQL DELETE */
+}
+
+/* Update a row in the %_content table; fill [pTerms] with new doclists
+ * for the full-text index. */
+static int index_update(fulltext_vtab *v, sqlite_int64 iRow,
+                        sqlite3_value **pValues, fts2Hash *pTerms){
+  /* Generate an empty doclist for each term that previously appeared in this
+   * row. */
+  int rc = deleteTerms(v, pTerms, iRow);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = content_update(v, pValues, iRow);  /* execute an SQL UPDATE */
+  if( rc!=SQLITE_OK ) return rc;
+
+  /* Now add positions for terms which appear in the updated row. */
+  return insertTerms(v, pTerms, iRow, pValues);
+}
+
+/*******************************************************************/
+/* InteriorWriter is used to collect terms and block references into
+** interior nodes in %_segments.  See commentary at top of file for
+** format.
+*/
+
+/* How large interior nodes can grow. */
+#define INTERIOR_MAX 2048
+
+/* Minimum number of terms per interior node (except the root). This
+** prevents large terms from making the tree too skinny - must be >0
+** so that the tree always makes progress.  Note that the min tree
+** fanout will be INTERIOR_MIN_TERMS+1.
+*/
+#define INTERIOR_MIN_TERMS 7
+#if INTERIOR_MIN_TERMS<1
+# error INTERIOR_MIN_TERMS must be greater than 0.
+#endif
+
+/* ROOT_MAX controls how much data is stored inline in the segment
+** directory.
+*/
+/* TODO(shess) Push ROOT_MAX down to whoever is writing things.  It's
+** only here so that interiorWriterRootInfo() and leafWriterRootInfo()
+** can both see it, but if the caller passed it in, we wouldn't even
+** need a define.
+*/
+#define ROOT_MAX 1024
+#if ROOT_MAX<VARINT_MAX*2
+# error ROOT_MAX must have enough space for a header.
+#endif
+
+/* InteriorBlock stores a linked-list of interior blocks while a lower
+** layer is being constructed.
+*/
+typedef struct InteriorBlock {
+  DataBuffer term;           /* Leftmost term in block's subtree. */
+  DataBuffer data;           /* Accumulated data for the block. */
+  struct InteriorBlock *next;
+} InteriorBlock;
+
+static InteriorBlock *interiorBlockNew(int iHeight, sqlite_int64 iChildBlock,
+                                       const char *pTerm, int nTerm){
+  InteriorBlock *block = calloc(1, sizeof(InteriorBlock));
+  char c[VARINT_MAX+VARINT_MAX];
+  int n;
+
+  dataBufferInit(&block->term, 0);
+  dataBufferReplace(&block->term, pTerm, nTerm);
+
+  n = putVarint(c, iHeight);
+  n += putVarint(c+n, iChildBlock);
+  dataBufferInit(&block->data, INTERIOR_MAX);
+  dataBufferReplace(&block->data, c, n);
+
+  return block;
+}
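+
+/* Example of the byte layout accumulated in block->data by
+** interiorBlockNew() above and interiorWriterAppend() below:
+**
+**   varint(iHeight) varint(iChildBlock)    (header, written above)
+**   varint(5) "apple"                      (first appended term)
+**   varint(4) varint(1) "y"                ("apply", delta-encoded)
+**
+** The block's leftmost term is kept separately in block->term.
+*/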
+
+#ifndef NDEBUG
+/* Verify that the data is readable as an interior node. */
+static void interiorBlockValidate(InteriorBlock *pBlock){
+  const char *pData = pBlock->data.pData;
+  int nData = pBlock->data.nData;
+  int n, iDummy;
+  sqlite_int64 iBlockid;
+
+  assert( nData>0 );
+  assert( pData!=0 );
+  assert( pData+nData>pData );
+
+  /* Must lead with height of node as a varint(n), n>0 */
+  n = getVarint32(pData, &iDummy);
+  assert( n>0 );
+  assert( iDummy>0 );
+  assert( n<nData );
+  pData += n;
+  nData -= n;
+
+  /* Must contain iBlockid. */
+  n = getVarint(pData, &iBlockid);
+  assert( n>0 );
+  assert( n<=nData );
+  pData += n;
+  nData -= n;
+
+  /* Zero or more terms of positive length */
+  if( nData!=0 ){
+    /* First term is not delta-encoded. */
+    n = getVarint32(pData, &iDummy);
+    assert( n>0 );
+    assert( iDummy>0 );
+    assert( n+iDummy>0);
+    assert( n+iDummy<=nData );
+    pData += n+iDummy;
+    nData -= n+iDummy;
+
+    /* Following terms delta-encoded. */
+    while( nData!=0 ){
+      /* Length of shared prefix. */
+      n = getVarint32(pData, &iDummy);
+      assert( n>0 );
+      assert( iDummy>=0 );
+      assert( n<nData );
+      pData += n;
+      nData -= n;
+
+      /* Length and data of distinct suffix. */
+      n = getVarint32(pData, &iDummy);
+      assert( n>0 );
+      assert( iDummy>0 );
+      assert( n+iDummy>0);
+      assert( n+iDummy<=nData );
+      pData += n+iDummy;
+      nData -= n+iDummy;
+    }
+  }
+}
+#define ASSERT_VALID_INTERIOR_BLOCK(x) interiorBlockValidate(x)
+#else
+#define ASSERT_VALID_INTERIOR_BLOCK(x) assert( 1 )
+#endif
+
+typedef struct InteriorWriter {
+  int iHeight;                   /* from 0 at leaves. */
+  InteriorBlock *first, *last;
+  struct InteriorWriter *parentWriter;
+
+  DataBuffer term;               /* Last term written to block "last". */
+  sqlite_int64 iOpeningChildBlock; /* First child block in block "last". */
+#ifndef NDEBUG
+  sqlite_int64 iLastChildBlock;  /* for consistency checks. */
+#endif
+} InteriorWriter;
+
+/* Initialize an interior node where pTerm[nTerm] marks the leftmost
+** term in the tree.  iChildBlock is the leftmost child block at the
+** next level down the tree.
+*/
+static void interiorWriterInit(int iHeight, const char *pTerm, int nTerm,
+                               sqlite_int64 iChildBlock,
+                               InteriorWriter *pWriter){
+  InteriorBlock *block;
+  assert( iHeight>0 );
+  CLEAR(pWriter);
+
+  pWriter->iHeight = iHeight;
+  pWriter->iOpeningChildBlock = iChildBlock;
+#ifndef NDEBUG
+  pWriter->iLastChildBlock = iChildBlock;
+#endif
+  block = interiorBlockNew(iHeight, iChildBlock, pTerm, nTerm);
+  pWriter->last = pWriter->first = block;
+  ASSERT_VALID_INTERIOR_BLOCK(pWriter->last);
+  dataBufferInit(&pWriter->term, 0);
+}
+
+/* Append the child node rooted at iChildBlock to the interior node,
+** with pTerm[nTerm] as the leftmost term in iChildBlock's subtree.
+*/
+static void interiorWriterAppend(InteriorWriter *pWriter,
+                                 const char *pTerm, int nTerm,
+                                 sqlite_int64 iChildBlock){
+  char c[VARINT_MAX+VARINT_MAX];
+  int n, nPrefix = 0;
+
+  ASSERT_VALID_INTERIOR_BLOCK(pWriter->last);
+
+  /* The first term written into an interior node is actually
+  ** associated with the second child added (the first child was added
+  ** in interiorWriterInit, or in the if clause at the bottom of this
+  ** function).  That term gets encoded straight up, with nPrefix left
+  ** at 0.
+  */
+  if( pWriter->term.nData==0 ){
+    n = putVarint(c, nTerm);
+  }else{
+    while( nPrefix<pWriter->term.nData &&
+           pTerm[nPrefix]==pWriter->term.pData[nPrefix] ){
+      nPrefix++;
+    }
+
+    n = putVarint(c, nPrefix);
+    n += putVarint(c+n, nTerm-nPrefix);
+  }
+
+#ifndef NDEBUG
+  pWriter->iLastChildBlock++;
+#endif
+  assert( pWriter->iLastChildBlock==iChildBlock );
+
+  /* Overflow to a new block if the new term makes the current block
+  ** too big, and the current block already has enough terms.
+  */
+  if( pWriter->last->data.nData+n+nTerm-nPrefix>INTERIOR_MAX &&
+      iChildBlock-pWriter->iOpeningChildBlock>INTERIOR_MIN_TERMS ){
+    pWriter->last->next = interiorBlockNew(pWriter->iHeight, iChildBlock,
+                                           pTerm, nTerm);
+    pWriter->last = pWriter->last->next;
+    pWriter->iOpeningChildBlock = iChildBlock;
+    dataBufferReset(&pWriter->term);
+  }else{
+    dataBufferAppend2(&pWriter->last->data, c, n,
+                      pTerm+nPrefix, nTerm-nPrefix);
+    dataBufferReplace(&pWriter->term, pTerm, nTerm);
+  }
+  ASSERT_VALID_INTERIOR_BLOCK(pWriter->last);
+}
+
+/* Free the space used by pWriter, including the linked-list of
+** InteriorBlocks, and parentWriter, if present.
+*/
+static int interiorWriterDestroy(InteriorWriter *pWriter){
+  InteriorBlock *block = pWriter->first;
+
+  while( block!=NULL ){
+    InteriorBlock *b = block;
+    block = block->next;
+    dataBufferDestroy(&b->term);
+    dataBufferDestroy(&b->data);
+    free(b);
+  }
+  if( pWriter->parentWriter!=NULL ){
+    interiorWriterDestroy(pWriter->parentWriter);
+    free(pWriter->parentWriter);
+  }
+  dataBufferDestroy(&pWriter->term);
+  SCRAMBLE(pWriter);
+  return SQLITE_OK;
+}
+
+/* If pWriter can fit entirely in ROOT_MAX, return it as the root info
+** directly, leaving *piEndBlockid unchanged.  Otherwise, flush
+** pWriter to %_segments, building a new layer of interior nodes, and
+** recursively ask for their root info.
+*/
+static int interiorWriterRootInfo(fulltext_vtab *v, InteriorWriter *pWriter,
+                                  char **ppRootInfo, int *pnRootInfo,
+                                  sqlite_int64 *piEndBlockid){
+  InteriorBlock *block = pWriter->first;
+  sqlite_int64 iBlockid = 0;
+  int rc;
+
+  /* If we can fit the segment inline */
+  if( block==pWriter->last && block->data.nData<ROOT_MAX ){
+    *ppRootInfo = block->data.pData;
+    *pnRootInfo = block->data.nData;
+    return SQLITE_OK;
+  }
+
+  /* Flush the first block to %_segments, and create a new level of
+  ** interior node.
+  */
+  ASSERT_VALID_INTERIOR_BLOCK(block);
+  rc = block_insert(v, block->data.pData, block->data.nData, &iBlockid);
+  if( rc!=SQLITE_OK ) return rc;
+  *piEndBlockid = iBlockid;
+
+  pWriter->parentWriter = malloc(sizeof(*pWriter->parentWriter));
+  interiorWriterInit(pWriter->iHeight+1,
+                     block->term.pData, block->term.nData,
+                     iBlockid, pWriter->parentWriter);
+
+  /* Flush additional blocks and append to the higher interior
+  ** node.
+  */
+  for(block=block->next; block!=NULL; block=block->next){
+    ASSERT_VALID_INTERIOR_BLOCK(block);
+    rc = block_insert(v, block->data.pData, block->data.nData, &iBlockid);
+    if( rc!=SQLITE_OK ) return rc;
+    *piEndBlockid = iBlockid;
+
+    interiorWriterAppend(pWriter->parentWriter,
+                         block->term.pData, block->term.nData, iBlockid);
+  }
+
+  /* Parent node gets the chance to be the root. */
+  return interiorWriterRootInfo(v, pWriter->parentWriter,
+                                ppRootInfo, pnRootInfo, piEndBlockid);
+}
+
+/****************************************************************/
+/* InteriorReader is used to read off the data from an interior node
+** (see comment at top of file for the format).
+*/
+typedef struct InteriorReader {
+  const char *pData;
+  int nData;
+
+  DataBuffer term;          /* previous term, for decoding term delta. */
+
+  sqlite_int64 iBlockid;
+} InteriorReader;
+
+static void interiorReaderDestroy(InteriorReader *pReader){
+  SCRAMBLE(pReader);
+}
+
+static void interiorReaderInit(const char *pData, int nData,
+                               InteriorReader *pReader){
+  int n, nTerm;
+
+  /* Require at least the leading flag byte */
+  assert( nData>0 );
+  assert( pData[0]!='\0' );
+
+  CLEAR(pReader);
+
+  /* Decode the base blockid, and set the cursor to the first term. */
+  n = getVarint(pData+1, &pReader->iBlockid);
+  assert( 1+n<=nData );
+  pReader->pData = pData+1+n;
+  pReader->nData = nData-(1+n);
+
+  /* A single-child interior node (such as when a leaf node was too
+  ** large for the segment directory) won't have any terms.
+  ** Otherwise, decode the first term.
+  */
+  if( pReader->nData==0 ){
+    dataBufferInit(&pReader->term, 0);
+  }else{
+    n = getVarint32(pReader->pData, &nTerm);
+    dataBufferInit(&pReader->term, nTerm);
+    dataBufferReplace(&pReader->term, pReader->pData+n, nTerm);
+    assert( n+nTerm<=pReader->nData );
+    pReader->pData += n+nTerm;
+    pReader->nData -= n+nTerm;
+  }
+}
+
+static int interiorReaderAtEnd(InteriorReader *pReader){
+  return pReader->term.nData==0;
+}
+
+static sqlite_int64 interiorReaderCurrentBlockid(InteriorReader *pReader){
+  return pReader->iBlockid;
+}
+
+static int interiorReaderTermBytes(InteriorReader *pReader){
+  assert( !interiorReaderAtEnd(pReader) );
+  return pReader->term.nData;
+}
+static const char *interiorReaderTerm(InteriorReader *pReader){
+  assert( !interiorReaderAtEnd(pReader) );
+  return pReader->term.pData;
+}
+
+/* Step forward to the next term in the node. */
+static void interiorReaderStep(InteriorReader *pReader){
+  assert( !interiorReaderAtEnd(pReader) );
+
+  /* If the last term has been read, signal eof, else construct the
+  ** next term.
+  */
+  if( pReader->nData==0 ){
+    dataBufferReset(&pReader->term);
+  }else{
+    int n, nPrefix, nSuffix;
+
+    n = getVarint32(pReader->pData, &nPrefix);
+    n += getVarint32(pReader->pData+n, &nSuffix);
+
+    /* Truncate the current term and append suffix data. */
+    pReader->term.nData = nPrefix;
+    dataBufferAppend(&pReader->term, pReader->pData+n, nSuffix);
+
+    assert( n+nSuffix<=pReader->nData );
+    pReader->pData += n+nSuffix;
+    pReader->nData -= n+nSuffix;
+  }
+  pReader->iBlockid++;
+}
+
+/* Compare the current term to pTerm[nTerm], returning strcmp-style
+** results.
+*/
+static int interiorReaderTermCmp(InteriorReader *pReader,
+                                 const char *pTerm, int nTerm){
+  const char *pReaderTerm = interiorReaderTerm(pReader);
+  int nReaderTerm = interiorReaderTermBytes(pReader);
+  int c, n = nReaderTerm<nTerm ? nReaderTerm : nTerm;
+
+  if( n==0 ){
+    if( nReaderTerm>0 ) return -1;
+    if( nTerm>0 ) return 1;
+    return 0;
+  }
+
+  c = memcmp(pReaderTerm, pTerm, n);
+  if( c!=0 ) return c;
+  return nReaderTerm - nTerm;
+}
+
+/****************************************************************/
+/* LeafWriter is used to collect terms and associated doclist data
+** into leaf blocks in %_segments (see top of file for format info).
+** Expected usage is:
+**
+** LeafWriter writer;
+** leafWriterInit(0, 0, &writer);
+** while( sorted_terms_left_to_process ){
+**   // data is doclist data for that term.
+**   rc = leafWriterStep(v, &writer, pTerm, nTerm, pData, nData);
+**   if( rc!=SQLITE_OK ) goto err;
+** }
+** rc = leafWriterFinalize(v, &writer);
+**err:
+** leafWriterDestroy(&writer);
+** return rc;
+**
+** leafWriterStep() may write a collected leaf out to %_segments.
+** leafWriterFinalize() finishes writing any buffered data and stores
+** a root node in %_segdir.  leafWriterDestroy() frees all buffers and
+** InteriorWriters allocated as part of writing this segment.
+**
+** TODO(shess) Document leafWriterStepMerge().
+*/
+
+/* Put terms with data this big in their own block. */
+#define STANDALONE_MIN 1024
+
+/* Keep leaf blocks below this size. */
+#define LEAF_MAX 2048
+
+typedef struct LeafWriter {
+  int iLevel;
+  int idx;
+  sqlite_int64 iStartBlockid;     /* needed to create the root info */
+  sqlite_int64 iEndBlockid;       /* when we're done writing. */
+
+  DataBuffer term;                /* previous encoded term */
+  DataBuffer data;                /* encoding buffer */
+
+  /* Number of leading bytes of the first term in the current node
+  ** needed to distinguish that term from the last term of the
+  ** previous node.
+  */
+  int nTermDistinct;
+
+  InteriorWriter parentWriter;    /* if we overflow */
+  int has_parent;
+} LeafWriter;
+
+static void leafWriterInit(int iLevel, int idx, LeafWriter *pWriter){
+  CLEAR(pWriter);
+  pWriter->iLevel = iLevel;
+  pWriter->idx = idx;
+
+  dataBufferInit(&pWriter->term, 32);
+
+  /* Start out with a reasonably sized block, though it can grow. */
+  dataBufferInit(&pWriter->data, LEAF_MAX);
+}
+
+#ifndef NDEBUG
+/* Verify that the data is readable as a leaf node. */
+static void leafNodeValidate(const char *pData, int nData){
+  int n, iDummy;
+
+  if( nData==0 ) return;
+  assert( nData>0 );
+  assert( pData!=0 );
+  assert( pData+nData>pData );
+
+  /* Must lead with a varint(0) */
+  n = getVarint32(pData, &iDummy);
+  assert( iDummy==0 );
+  assert( n>0 );
+  assert( n<nData );
+  pData += n;
+  nData -= n;
+
+  /* Leading term length and data must fit in buffer. */
+  n = getVarint32(pData, &iDummy);
+  assert( n>0 );
+  assert( iDummy>0 );
+  assert( n+iDummy>0 );
+  assert( n+iDummy<nData );
+  pData += n+iDummy;
+  nData -= n+iDummy;
+
+  /* Leading term's doclist length and data must fit. */
+  n = getVarint32(pData, &iDummy);
+  assert( n>0 );
+  assert( iDummy>0 );
+  assert( n+iDummy>0 );
+  assert( n+iDummy<=nData );
+  ASSERT_VALID_DOCLIST(DL_DEFAULT, pData+n, iDummy, NULL);
+  pData += n+iDummy;
+  nData -= n+iDummy;
+
+  /* Verify that trailing terms and doclists also are readable. */
+  while( nData!=0 ){
+    n = getVarint32(pData, &iDummy);
+    assert( n>0 );
+    assert( iDummy>=0 );
+    assert( n<nData );
+    pData += n;
+    nData -= n;
+    n = getVarint32(pData, &iDummy);
+    assert( n>0 );
+    assert( iDummy>0 );
+    assert( n+iDummy>0 );
+    assert( n+iDummy<nData );
+    pData += n+iDummy;
+    nData -= n+iDummy;
+
+    n = getVarint32(pData, &iDummy);
+    assert( n>0 );
+    assert( iDummy>0 );
+    assert( n+iDummy>0 );
+    assert( n+iDummy<=nData );
+    ASSERT_VALID_DOCLIST(DL_DEFAULT, pData+n, iDummy, NULL);
+    pData += n+iDummy;
+    nData -= n+iDummy;
+  }
+}
+#define ASSERT_VALID_LEAF_NODE(p, n) leafNodeValidate(p, n)
+#else
+#define ASSERT_VALID_LEAF_NODE(p, n) assert( 1 )
+#endif
+
+/* Flush the current leaf node to %_segments, adding the resulting
+** blockid and the starting term to the interior node which will
+** contain it.
+*/
+static int leafWriterInternalFlush(fulltext_vtab *v, LeafWriter *pWriter,
+                                   int iData, int nData){
+  sqlite_int64 iBlockid = 0;
+  const char *pStartingTerm;
+  int nStartingTerm, rc, n;
+
+  /* Must have the leading varint(0) flag, plus at least some
+  ** valid-looking data.
+  */
+  assert( nData>2 );
+  assert( iData>=0 );
+  assert( iData+nData<=pWriter->data.nData );
+  ASSERT_VALID_LEAF_NODE(pWriter->data.pData+iData, nData);
+
+  rc = block_insert(v, pWriter->data.pData+iData, nData, &iBlockid);
+  if( rc!=SQLITE_OK ) return rc;
+  assert( iBlockid!=0 );
+
+  /* Reconstruct the first term in the leaf for purposes of building
+  ** the interior node.
+  */
+  n = getVarint32(pWriter->data.pData+iData+1, &nStartingTerm);
+  pStartingTerm = pWriter->data.pData+iData+1+n;
+  assert( pWriter->data.nData>iData+1+n+nStartingTerm );
+  assert( pWriter->nTermDistinct>0 );
+  assert( pWriter->nTermDistinct<=nStartingTerm );
+  nStartingTerm = pWriter->nTermDistinct;
+
+  if( pWriter->has_parent ){
+    interiorWriterAppend(&pWriter->parentWriter,
+                         pStartingTerm, nStartingTerm, iBlockid);
+  }else{
+    interiorWriterInit(1, pStartingTerm, nStartingTerm, iBlockid,
+                       &pWriter->parentWriter);
+    pWriter->has_parent = 1;
+  }
+
+  /* Track the span of this segment's leaf nodes. */
+  if( pWriter->iEndBlockid==0 ){
+    pWriter->iEndBlockid = pWriter->iStartBlockid = iBlockid;
+  }else{
+    pWriter->iEndBlockid++;
+    assert( iBlockid==pWriter->iEndBlockid );
+  }
+
+  return SQLITE_OK;
+}
+static int leafWriterFlush(fulltext_vtab *v, LeafWriter *pWriter){
+  int rc = leafWriterInternalFlush(v, pWriter, 0, pWriter->data.nData);
+  if( rc!=SQLITE_OK ) return rc;
+
+  /* Re-initialize the output buffer. */
+  dataBufferReset(&pWriter->data);
+
+  return SQLITE_OK;
+}
+
+/* Fetch the root info for the segment.  If the entire leaf fits
+** within ROOT_MAX, then it will be returned directly, otherwise it
+** will be flushed and the root info will be returned from the
+** interior node.  *piEndBlockid is set to the blockid of the last
+** interior or leaf node written to disk (0 if none are written at
+** all).
+*/
+static int leafWriterRootInfo(fulltext_vtab *v, LeafWriter *pWriter,
+                              char **ppRootInfo, int *pnRootInfo,
+                              sqlite_int64 *piEndBlockid){
+  /* we can fit the segment entirely inline */
+  if( !pWriter->has_parent && pWriter->data.nData<ROOT_MAX ){
+    *ppRootInfo = pWriter->data.pData;
+    *pnRootInfo = pWriter->data.nData;
+    *piEndBlockid = 0;
+    return SQLITE_OK;
+  }
+
+  /* Flush remaining leaf data. */
+  if( pWriter->data.nData>0 ){
+    int rc = leafWriterFlush(v, pWriter);
+    if( rc!=SQLITE_OK ) return rc;
+  }
+
+  /* We must have flushed a leaf at some point. */
+  assert( pWriter->has_parent );
+
+  /* Tentatively set the end leaf blockid as the end blockid.  If the
+  ** interior node can be returned inline, this will be the final
+  ** blockid, otherwise it will be overwritten by
+  ** interiorWriterRootInfo().
+  */
+  *piEndBlockid = pWriter->iEndBlockid;
+
+  return interiorWriterRootInfo(v, &pWriter->parentWriter,
+                                ppRootInfo, pnRootInfo, piEndBlockid);
+}
+
+/* Collect the rootInfo data and store it into the segment directory.
+** This has the effect of flushing the segment's leaf data to
+** %_segments, and also flushing any interior nodes to %_segments.
+*/
+static int leafWriterFinalize(fulltext_vtab *v, LeafWriter *pWriter){
+  sqlite_int64 iEndBlockid;
+  char *pRootInfo;
+  int rc, nRootInfo;
+
+  rc = leafWriterRootInfo(v, pWriter, &pRootInfo, &nRootInfo, &iEndBlockid);
+  if( rc!=SQLITE_OK ) return rc;
+
+  /* Don't bother storing an entirely empty segment. */
+  if( iEndBlockid==0 && nRootInfo==0 ) return SQLITE_OK;
+
+  return segdir_set(v, pWriter->iLevel, pWriter->idx,
+                    pWriter->iStartBlockid, pWriter->iEndBlockid,
+                    iEndBlockid, pRootInfo, nRootInfo);
+}
+
+static void leafWriterDestroy(LeafWriter *pWriter){
+  if( pWriter->has_parent ) interiorWriterDestroy(&pWriter->parentWriter);
+  dataBufferDestroy(&pWriter->term);
+  dataBufferDestroy(&pWriter->data);
+}
+
+/* Encode a term into the leafWriter, delta-encoding as appropriate.
+** Returns the number of leading bytes of the new term needed to
+** distinguish it from the previous term; this can be used to set
+** nTermDistinct when a node boundary is crossed.
+*/
+static int leafWriterEncodeTerm(LeafWriter *pWriter,
+                                const char *pTerm, int nTerm){
+  char c[VARINT_MAX+VARINT_MAX];
+  int n, nPrefix = 0;
+
+  assert( nTerm>0 );
+  while( nPrefix<pWriter->term.nData &&
+         pTerm[nPrefix]==pWriter->term.pData[nPrefix] ){
+    nPrefix++;
+    /* Failing this implies that the terms weren't in order. */
+    assert( nPrefix<nTerm );
+  }
+
+  if( pWriter->data.nData==0 ){
+    /* Encode the node header and leading term as:
+    **  varint(0)
+    **  varint(nTerm)
+    **  char pTerm[nTerm]
+    */
+    n = putVarint(c, '\0');
+    n += putVarint(c+n, nTerm);
+    dataBufferAppend2(&pWriter->data, c, n, pTerm, nTerm);
+  }else{
+    /* Delta-encode the term as:
+    **  varint(nPrefix)
+    **  varint(nSuffix)
+    **  char pTermSuffix[nSuffix]
+    */
+    n = putVarint(c, nPrefix);
+    n += putVarint(c+n, nTerm-nPrefix);
+    dataBufferAppend2(&pWriter->data, c, n, pTerm+nPrefix, nTerm-nPrefix);
+  }
+  dataBufferReplace(&pWriter->term, pTerm, nTerm);
+
+  return nPrefix+1;
+}
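+
+/* Worked example for leafWriterEncodeTerm() above: if the previous term
+** was "apple" and the new term is "apply", nPrefix becomes 4, the bytes
+** varint(4) varint(1) "y" are appended, and the function returns 5 (the
+** number of leading bytes of "apply" that distinguish it from "apple").
+*/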
+
+/* Used to avoid a memmove when a large amount of doclist data is in
+** the buffer.  This constructs a node and term header before
+** iDoclistData and flushes the resulting complete node using
+** leafWriterInternalFlush().
+*/
+static int leafWriterInlineFlush(fulltext_vtab *v, LeafWriter *pWriter,
+                                 const char *pTerm, int nTerm,
+                                 int iDoclistData){
+  char c[VARINT_MAX+VARINT_MAX];
+  int iData, n = putVarint(c, 0);
+  n += putVarint(c+n, nTerm);
+
+  /* There should always be room for the header.  Even if pTerm shared
+  ** a substantial prefix with the previous term, the entire prefix
+  ** could be constructed from earlier data in the doclist, so there
+  ** should be room.
+  */
+  assert( iDoclistData>=n+nTerm );
+
+  iData = iDoclistData-(n+nTerm);
+  memcpy(pWriter->data.pData+iData, c, n);
+  memcpy(pWriter->data.pData+iData+n, pTerm, nTerm);
+
+  return leafWriterInternalFlush(v, pWriter, iData, pWriter->data.nData-iData);
+}
+
+/* Push pTerm[nTerm] along with the doclist data to the leaf layer of
+** %_segments.
+*/
+static int leafWriterStepMerge(fulltext_vtab *v, LeafWriter *pWriter,
+                               const char *pTerm, int nTerm,
+                               DLReader *pReaders, int nReaders){
+  char c[VARINT_MAX+VARINT_MAX];
+  int iTermData = pWriter->data.nData, iDoclistData;
+  int i, nData, n, nActualData, nActual, rc, nTermDistinct;
+
+  ASSERT_VALID_LEAF_NODE(pWriter->data.pData, pWriter->data.nData);
+  nTermDistinct = leafWriterEncodeTerm(pWriter, pTerm, nTerm);
+
+  /* Remember nTermDistinct if opening a new node. */
+  if( iTermData==0 ) pWriter->nTermDistinct = nTermDistinct;
+
+  iDoclistData = pWriter->data.nData;
+
+  /* Estimate the length of the merged doclist so we can leave space
+  ** to encode it.
+  */
+  for(i=0, nData=0; i<nReaders; i++){
+    nData += dlrAllDataBytes(&pReaders[i]);
+  }
+  n = putVarint(c, nData);
+  dataBufferAppend(&pWriter->data, c, n);
+
+  docListMerge(&pWriter->data, pReaders, nReaders);
+  ASSERT_VALID_DOCLIST(DL_DEFAULT,
+                       pWriter->data.pData+iDoclistData+n,
+                       pWriter->data.nData-iDoclistData-n, NULL);
+
+  /* The actual amount of doclist data at this point could be smaller
+  ** than the length we encoded.  Additionally, the space required to
+  ** encode this length could be smaller.  For small doclists, this is
+  ** not a big deal, we can just use memmove() to adjust things.
+  */
+  nActualData = pWriter->data.nData-(iDoclistData+n);
+  nActual = putVarint(c, nActualData);
+  assert( nActualData<=nData );
+  assert( nActual<=n );
+
+  /* If the new doclist is big enough to force a standalone leaf
+  ** node, we can immediately flush it inline without doing the
+  ** memmove().
+  */
+  /* TODO(shess) This test matches leafWriterStep(), which does this
+  ** test before it knows the cost to varint-encode the term and
+  ** doclist lengths.  At some point, change to
+  ** pWriter->data.nData-iTermData>STANDALONE_MIN.
+  */
+  if( nTerm+nActualData>STANDALONE_MIN ){
+    /* Push leaf node from before this term. */
+    if( iTermData>0 ){
+      rc = leafWriterInternalFlush(v, pWriter, 0, iTermData);
+      if( rc!=SQLITE_OK ) return rc;
+
+      pWriter->nTermDistinct = nTermDistinct;
+    }
+
+    /* Fix the encoded doclist length. */
+    iDoclistData += n - nActual;
+    memcpy(pWriter->data.pData+iDoclistData, c, nActual);
+
+    /* Push the standalone leaf node. */
+    rc = leafWriterInlineFlush(v, pWriter, pTerm, nTerm, iDoclistData);
+    if( rc!=SQLITE_OK ) return rc;
+
+    /* Leave the node empty. */
+    dataBufferReset(&pWriter->data);
+
+    return rc;
+  }
+
+  /* At this point, we know that the doclist was small, so do the
+  ** memmove if indicated.
+  */
+  if( nActual<n ){
+    memmove(pWriter->data.pData+iDoclistData+nActual,
+            pWriter->data.pData+iDoclistData+n,
+            pWriter->data.nData-(iDoclistData+n));
+    pWriter->data.nData -= n-nActual;
+  }
+
+  /* Replace written length with actual length. */
+  memcpy(pWriter->data.pData+iDoclistData, c, nActual);
+
+  /* If the node is too large, break things up. */
+  /* TODO(shess) This test matches leafWriterStep(), which does this
+  ** test before it knows the cost to varint-encode the term and
+  ** doclist lengths.  At some point, change to
+  ** pWriter->data.nData>LEAF_MAX.
+  */
+  if( iTermData+nTerm+nActualData>LEAF_MAX ){
+    /* Flush out the leading data as a node */
+    rc = leafWriterInternalFlush(v, pWriter, 0, iTermData);
+    if( rc!=SQLITE_OK ) return rc;
+
+    pWriter->nTermDistinct = nTermDistinct;
+
+    /* Rebuild header using the current term */
+    n = putVarint(pWriter->data.pData, 0);
+    n += putVarint(pWriter->data.pData+n, nTerm);
+    memcpy(pWriter->data.pData+n, pTerm, nTerm);
+    n += nTerm;
+
+    /* There should always be room, because the previous encoding
+    ** included all data necessary to construct the term.
+    */
+    assert( n<iDoclistData );
+    /* So long as STANDALONE_MIN is half or less of LEAF_MAX, the
+    ** following memcpy() is safe (as opposed to needing a memmove).
+    */
+    assert( 2*STANDALONE_MIN<=LEAF_MAX );
+    assert( n+pWriter->data.nData-iDoclistData<iDoclistData );
+    memcpy(pWriter->data.pData+n,
+           pWriter->data.pData+iDoclistData,
+           pWriter->data.nData-iDoclistData);
+    pWriter->data.nData -= iDoclistData-n;
+  }
+  ASSERT_VALID_LEAF_NODE(pWriter->data.pData, pWriter->data.nData);
+
+  return SQLITE_OK;
+}
+
+/* Push pTerm[nTerm] along with the doclist data to the leaf layer of
+** %_segments.
+*/
+/* TODO(shess) Revise writeZeroSegment() so that doclists are
+** constructed directly in pWriter->data.
+*/
+static int leafWriterStep(fulltext_vtab *v, LeafWriter *pWriter,
+                          const char *pTerm, int nTerm,
+                          const char *pData, int nData){
+  int rc;
+  DLReader reader;
+
+  dlrInit(&reader, DL_DEFAULT, pData, nData);
+  rc = leafWriterStepMerge(v, pWriter, pTerm, nTerm, &reader, 1);
+  dlrDestroy(&reader);
+
+  return rc;
+}
+
+
+/****************************************************************/
+/* LeafReader is used to iterate over an individual leaf node. */
+typedef struct LeafReader {
+  DataBuffer term;          /* copy of current term. */
+
+  const char *pData;        /* data for current term. */
+  int nData;
+} LeafReader;
+
+static void leafReaderDestroy(LeafReader *pReader){
+  dataBufferDestroy(&pReader->term);
+  SCRAMBLE(pReader);
+}
+
+static int leafReaderAtEnd(LeafReader *pReader){
+  return pReader->nData<=0;
+}
+
+/* Access the current term. */
+static int leafReaderTermBytes(LeafReader *pReader){
+  return pReader->term.nData;
+}
+static const char *leafReaderTerm(LeafReader *pReader){
+  assert( pReader->term.nData>0 );
+  return pReader->term.pData;
+}
+
+/* Access the doclist data for the current term. */
+static int leafReaderDataBytes(LeafReader *pReader){
+  int nData;
+  assert( pReader->term.nData>0 );
+  getVarint32(pReader->pData, &nData);
+  return nData;
+}
+static const char *leafReaderData(LeafReader *pReader){
+  int n, nData;
+  assert( pReader->term.nData>0 );
+  n = getVarint32(pReader->pData, &nData);
+  return pReader->pData+n;
+}
+
+static void leafReaderInit(const char *pData, int nData,
+                           LeafReader *pReader){
+  int nTerm, n;
+
+  assert( nData>0 );
+  assert( pData[0]=='\0' );
+
+  CLEAR(pReader);
+
+  /* Read the first term, skipping the header byte. */
+  n = getVarint32(pData+1, &nTerm);
+  dataBufferInit(&pReader->term, nTerm);
+  dataBufferReplace(&pReader->term, pData+1+n, nTerm);
+
+  /* Position after the first term. */
+  assert( 1+n+nTerm<nData );
+  pReader->pData = pData+1+n+nTerm;
+  pReader->nData = nData-1-n-nTerm;
+}
+
+/* Step the reader forward to the next term. */
+static void leafReaderStep(LeafReader *pReader){
+  int n, nData, nPrefix, nSuffix;
+  assert( !leafReaderAtEnd(pReader) );
+
+  /* Skip previous entry's data block. */
+  n = getVarint32(pReader->pData, &nData);
+  assert( n+nData<=pReader->nData );
+  pReader->pData += n+nData;
+  pReader->nData -= n+nData;
+
+  if( !leafReaderAtEnd(pReader) ){
+    /* Construct the new term using a prefix from the old term plus a
+    ** suffix from the leaf data.
+    */
+    n = getVarint32(pReader->pData, &nPrefix);
+    n += getVarint32(pReader->pData+n, &nSuffix);
+    assert( n+nSuffix<pReader->nData );
+    pReader->term.nData = nPrefix;
+    dataBufferAppend(&pReader->term, pReader->pData+n, nSuffix);
+
+    pReader->pData += n+nSuffix;
+    pReader->nData -= n+nSuffix;
+  }
+}
+
+/* strcmp-style comparison of pReader's current term against pTerm. */
+static int leafReaderTermCmp(LeafReader *pReader,
+                             const char *pTerm, int nTerm){
+  int c, n = pReader->term.nData<nTerm ? pReader->term.nData : nTerm;
+  if( n==0 ){
+    if( pReader->term.nData>0 ) return -1;
+    if( nTerm>0 ) return 1;
+    return 0;
+  }
+
+  c = memcmp(pReader->term.pData, pTerm, n);
+  if( c!=0 ) return c;
+  return pReader->term.nData - nTerm;
+}
+
+
+/****************************************************************/
+/* LeavesReader wraps LeafReader to allow iterating over the entire
+** leaf layer of the tree.
+*/
+typedef struct LeavesReader {
+  int idx;                  /* Index within the segment. */
+
+  sqlite3_stmt *pStmt;      /* Statement we're streaming leaves from. */
+  int eof;                  /* we've seen SQLITE_DONE from pStmt. */
+
+  LeafReader leafReader;    /* reader for the current leaf. */
+  DataBuffer rootData;      /* root data for inline segments. */
+} LeavesReader;
+
+/* Access the current term. */
+static int leavesReaderTermBytes(LeavesReader *pReader){
+  assert( !pReader->eof );
+  return leafReaderTermBytes(&pReader->leafReader);
+}
+static const char *leavesReaderTerm(LeavesReader *pReader){
+  assert( !pReader->eof );
+  return leafReaderTerm(&pReader->leafReader);
+}
+
+/* Access the doclist data for the current term. */
+static int leavesReaderDataBytes(LeavesReader *pReader){
+  assert( !pReader->eof );
+  return leafReaderDataBytes(&pReader->leafReader);
+}
+static const char *leavesReaderData(LeavesReader *pReader){
+  assert( !pReader->eof );
+  return leafReaderData(&pReader->leafReader);
+}
+
+static int leavesReaderAtEnd(LeavesReader *pReader){
+  return pReader->eof;
+}
+
+static void leavesReaderDestroy(LeavesReader *pReader){
+  leafReaderDestroy(&pReader->leafReader);
+  dataBufferDestroy(&pReader->rootData);
+  SCRAMBLE(pReader);
+}
+
+/* Initialize pReader with the given root data (if iStartBlockid==0
+** the leaf data was entirely contained in the root), or from the
+** stream of blocks between iStartBlockid and iEndBlockid, inclusive.
+*/
+static int leavesReaderInit(fulltext_vtab *v,
+                            int idx,
+                            sqlite_int64 iStartBlockid,
+                            sqlite_int64 iEndBlockid,
+                            const char *pRootData, int nRootData,
+                            LeavesReader *pReader){
+  CLEAR(pReader);
+  pReader->idx = idx;
+
+  dataBufferInit(&pReader->rootData, 0);
+  if( iStartBlockid==0 ){
+    /* Entire leaf level fit in root data. */
+    dataBufferReplace(&pReader->rootData, pRootData, nRootData);
+    leafReaderInit(pReader->rootData.pData, pReader->rootData.nData,
+                   &pReader->leafReader);
+  }else{
+    sqlite3_stmt *s;
+    int rc = sql_get_leaf_statement(v, idx, &s);
+    if( rc!=SQLITE_OK ) return rc;
+
+    rc = sqlite3_bind_int64(s, 1, iStartBlockid);
+    if( rc!=SQLITE_OK ) return rc;
+
+    rc = sqlite3_bind_int64(s, 2, iEndBlockid);
+    if( rc!=SQLITE_OK ) return rc;
+
+    rc = sql_step_leaf_statement(v, idx, &s);
+    if( rc==SQLITE_DONE ){
+      pReader->eof = 1;
+      return SQLITE_OK;
+    }
+    if( rc!=SQLITE_ROW ) return rc;
+
+    pReader->pStmt = s;
+    leafReaderInit(sqlite3_column_blob(pReader->pStmt, 0),
+                   sqlite3_column_bytes(pReader->pStmt, 0),
+                   &pReader->leafReader);
+  }
+  return SQLITE_OK;
+}
+
+/* Step the current leaf forward to the next term.  If we reach the
+** end of the current leaf, step forward to the next leaf block.
+*/
+static int leavesReaderStep(fulltext_vtab *v, LeavesReader *pReader){
+  assert( !leavesReaderAtEnd(pReader) );
+  leafReaderStep(&pReader->leafReader);
+
+  if( leafReaderAtEnd(&pReader->leafReader) ){
+    int rc;
+    if( pReader->rootData.pData ){
+      pReader->eof = 1;
+      return SQLITE_OK;
+    }
+    rc = sql_step_leaf_statement(v, pReader->idx, &pReader->pStmt);
+    if( rc!=SQLITE_ROW ){
+      pReader->eof = 1;
+      return rc==SQLITE_DONE ? SQLITE_OK : rc;
+    }
+    leafReaderDestroy(&pReader->leafReader);
+    leafReaderInit(sqlite3_column_blob(pReader->pStmt, 0),
+                   sqlite3_column_bytes(pReader->pStmt, 0),
+                   &pReader->leafReader);
+  }
+  return SQLITE_OK;
+}
+
+/* Order LeavesReaders by their term, ignoring idx.  Readers at eof
+** always sort to the end.
+*/
+static int leavesReaderTermCmp(LeavesReader *lr1, LeavesReader *lr2){
+  if( leavesReaderAtEnd(lr1) ){
+    if( leavesReaderAtEnd(lr2) ) return 0;
+    return 1;
+  }
+  if( leavesReaderAtEnd(lr2) ) return -1;
+
+  return leafReaderTermCmp(&lr1->leafReader,
+                           leavesReaderTerm(lr2), leavesReaderTermBytes(lr2));
+}
+
+/* Similar to leavesReaderTermCmp(), with additional ordering by idx
+** so that older segments sort before newer segments.
+*/
+static int leavesReaderCmp(LeavesReader *lr1, LeavesReader *lr2){
+  int c = leavesReaderTermCmp(lr1, lr2);
+  if( c!=0 ) return c;
+  return lr1->idx-lr2->idx;
+}
+
+/* Assume that pLr[1]..pLr[nLr-1] are sorted.  Bubble pLr[0] into its
+** sorted position.
+*/
+static void leavesReaderReorder(LeavesReader *pLr, int nLr){
+  while( nLr>1 && leavesReaderCmp(pLr, pLr+1)>0 ){
+    LeavesReader tmp = pLr[0];
+    pLr[0] = pLr[1];
+    pLr[1] = tmp;
+    nLr--;
+    pLr++;
+  }
+}
+
+/* Initializes pReaders with the segments from level iLevel, returning
+** the number of segments in *piReaders.  Leaves pReaders in sorted
+** order.
+*/
+static int leavesReadersInit(fulltext_vtab *v, int iLevel,
+                             LeavesReader *pReaders, int *piReaders){
+  sqlite3_stmt *s;
+  int i, rc = sql_get_statement(v, SEGDIR_SELECT_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  rc = sqlite3_bind_int(s, 1, iLevel);
+  if( rc!=SQLITE_OK ) return rc;
+
+  i = 0;
+  while( (rc = sql_step_statement(v, SEGDIR_SELECT_STMT, &s))==SQLITE_ROW ){
+    sqlite_int64 iStart = sqlite3_column_int64(s, 0);
+    sqlite_int64 iEnd = sqlite3_column_int64(s, 1);
+    const char *pRootData = sqlite3_column_blob(s, 2);
+    int nRootData = sqlite3_column_bytes(s, 2);
+
+    assert( i<MERGE_COUNT );
+    rc = leavesReaderInit(v, i, iStart, iEnd, pRootData, nRootData,
+                          &pReaders[i]);
+    if( rc!=SQLITE_OK ) break;
+
+    i++;
+  }
+  if( rc!=SQLITE_DONE ){
+    while( i-->0 ){
+      leavesReaderDestroy(&pReaders[i]);
+    }
+    return rc;
+  }
+
+  *piReaders = i;
+
+  /* Leave our results sorted by term, then age. */
+  while( i-- ){
+    leavesReaderReorder(pReaders+i, *piReaders-i);
+  }
+  return SQLITE_OK;
+}
+
+/* Merge doclists from pReaders[nReaders] into a single doclist, which
+** is written to pWriter.  Assumes pReaders is ordered oldest to
+** newest.
+*/
+/* TODO(shess) Consider putting this inline in segmentMerge(). */
+static int leavesReadersMerge(fulltext_vtab *v,
+                              LeavesReader *pReaders, int nReaders,
+                              LeafWriter *pWriter){
+  DLReader dlReaders[MERGE_COUNT];
+  const char *pTerm = leavesReaderTerm(pReaders);
+  int i, nTerm = leavesReaderTermBytes(pReaders);
+
+  assert( nReaders<=MERGE_COUNT );
+
+  for(i=0; i<nReaders; i++){
+    dlrInit(&dlReaders[i], DL_DEFAULT,
+            leavesReaderData(pReaders+i),
+            leavesReaderDataBytes(pReaders+i));
+  }
+
+  return leafWriterStepMerge(v, pWriter, pTerm, nTerm, dlReaders, nReaders);
+}
+
+/* Forward ref due to mutual recursion with segdirNextIndex(). */
+static int segmentMerge(fulltext_vtab *v, int iLevel);
+
+/* Put the next available index at iLevel into *pidx.  If iLevel
+** already has MERGE_COUNT segments, they are merged to a higher
+** level to make room.
+*/
+static int segdirNextIndex(fulltext_vtab *v, int iLevel, int *pidx){
+  int rc = segdir_max_index(v, iLevel, pidx);
+  if( rc==SQLITE_DONE ){              /* No segments at iLevel. */
+    *pidx = 0;
+  }else if( rc==SQLITE_ROW ){
+    if( *pidx==(MERGE_COUNT-1) ){
+      rc = segmentMerge(v, iLevel);
+      if( rc!=SQLITE_OK ) return rc;
+      *pidx = 0;
+    }else{
+      (*pidx)++;
+    }
+  }else{
+    return rc;
+  }
+  return SQLITE_OK;
+}
+
+/* Merge MERGE_COUNT segments at iLevel into a new segment at
+** iLevel+1.  If iLevel+1 is already full of segments, those will be
+** merged to make room.
+*/
+static int segmentMerge(fulltext_vtab *v, int iLevel){
+  LeafWriter writer;
+  LeavesReader lrs[MERGE_COUNT];
+  int i, rc, idx = 0;
+
+  /* Determine the next available segment index at the next level,
+  ** merging as necessary.
+  */
+  rc = segdirNextIndex(v, iLevel+1, &idx);
+  if( rc!=SQLITE_OK ) return rc;
+
+  /* TODO(shess) This assumes that we'll always see exactly
+  ** MERGE_COUNT segments to merge at a given level.  That will be
+  ** broken if we allow the developer to request preemptive or
+  ** deferred merging.
+  */
+  memset(&lrs, '\0', sizeof(lrs));
+  rc = leavesReadersInit(v, iLevel, lrs, &i);
+  if( rc!=SQLITE_OK ) return rc;
+  assert( i==MERGE_COUNT );
+
+  leafWriterInit(iLevel+1, idx, &writer);
+
+  /* Since leavesReaderReorder() pushes readers at eof to the end,
+  ** when the first reader is empty, all will be empty.
+  */
+  while( !leavesReaderAtEnd(lrs) ){
+    /* Figure out how many readers share their next term. */
+    for(i=1; i<MERGE_COUNT && !leavesReaderAtEnd(lrs+i); i++){
+      if( 0!=leavesReaderTermCmp(lrs, lrs+i) ) break;
+    }
+
+    rc = leavesReadersMerge(v, lrs, i, &writer);
+    if( rc!=SQLITE_OK ) goto err;
+
+    /* Step forward those that were merged. */
+    while( i-->0 ){
+      rc = leavesReaderStep(v, lrs+i);
+      if( rc!=SQLITE_OK ) goto err;
+
+      /* Reorder by term, then by age. */
+      leavesReaderReorder(lrs+i, MERGE_COUNT-i);
+    }
+  }
+
+  for(i=0; i<MERGE_COUNT; i++){
+    leavesReaderDestroy(&lrs[i]);
+  }
+
+  rc = leafWriterFinalize(v, &writer);
+  leafWriterDestroy(&writer);
+  if( rc!=SQLITE_OK ) return rc;
+
+  /* Delete the merged segment data. */
+  return segdir_delete(v, iLevel);
+
+ err:
+  for(i=0; i<MERGE_COUNT; i++){
+    leavesReaderDestroy(&lrs[i]);
+  }
+  leafWriterDestroy(&writer);
+  return rc;
+}
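+
+/* Illustrative cascade (MERGE_COUNT's value is defined earlier in this
+** file; assume 16 purely for the arithmetic): the 16th segment written at
+** level 0 triggers a merge into a single level-1 segment, the 16th level-1
+** segment triggers a merge into level 2, and so on, so a level-N segment
+** covers roughly 16^N level-0 flushes.
+*/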
+
+/* Read pData[nData] as a leaf node, and if the doclist for
+** pTerm[nTerm] is present, merge it over *out (any duplicate doclists
+** read from pData will overwrite those in *out).
+*/
+static int loadSegmentLeaf(fulltext_vtab *v, const char *pData, int nData,
+                           const char *pTerm, int nTerm, DataBuffer *out){
+  LeafReader reader;
+  assert( nData>1 );
+  assert( *pData=='\0' );
+
+  leafReaderInit(pData, nData, &reader);
+  while( !leafReaderAtEnd(&reader) ){
+    int c = leafReaderTermCmp(&reader, pTerm, nTerm);
+    if( c==0 ){
+      if( out->nData==0 ){
+        dataBufferReplace(out,
+                          leafReaderData(&reader), leafReaderDataBytes(&reader));
+      }else{
+        DLReader readers[2];
+        DataBuffer result;
+        dlrInit(&readers[0], DL_DEFAULT, out->pData, out->nData);
+        dlrInit(&readers[1], DL_DEFAULT,
+                leafReaderData(&reader), leafReaderDataBytes(&reader));
+        dataBufferInit(&result, out->nData+leafReaderDataBytes(&reader));
+        docListMerge(&result, readers, 2);
+        dataBufferDestroy(out);
+        *out = result;
+      }
+    }
+    if( c>=0 ) break;
+    leafReaderStep(&reader);
+  }
+  leafReaderDestroy(&reader);
+  return SQLITE_OK;
+}
+
+/* Traverse the tree represented by pData[nData] looking for
+** pTerm[nTerm], merging its doclist over *out if found (any duplicate
+** doclists read from the segment rooted at pData will overwrite those
+** in *out).
+*/
+static int loadSegment(fulltext_vtab *v, const char *pData, int nData,
+                       const char *pTerm, int nTerm, DataBuffer *out){
+  int rc;
+  sqlite3_stmt *s = NULL;
+
+  assert( nData>1 );
+
+  /* Process data as an interior node until we reach a leaf. */
+  while( *pData!='\0' ){
+    sqlite_int64 iBlockid;
+    InteriorReader reader;
+
+    /* Scan the node data until we find a term greater than our term.
+    ** Our target child will be in the blockid under that term, or in
+    ** the last blockid in the node if we never find such a term.
+    */
+    interiorReaderInit(pData, nData, &reader);
+    while( !interiorReaderAtEnd(&reader) ){
+      if( interiorReaderTermCmp(&reader, pTerm, nTerm)>0 ) break;
+      interiorReaderStep(&reader);
+    }
+
+    /* Grab the child blockid before calling sql_get_statement(),
+    ** because sql_get_statement() may reset our data out from under
+    ** us.
+    */
+    iBlockid = interiorReaderCurrentBlockid(&reader);
+    interiorReaderDestroy(&reader);
+
+    rc = sql_get_statement(v, BLOCK_SELECT_STMT, &s);
+    if( rc!=SQLITE_OK ) return rc;
+
+    rc = sqlite3_bind_int64(s, 1, iBlockid);
+    if( rc!=SQLITE_OK ) return rc;
+
+    rc = sql_step_statement(v, BLOCK_SELECT_STMT, &s);
+    if( rc==SQLITE_DONE ) return SQLITE_ERROR;
+    if( rc!=SQLITE_ROW ) return rc;
+
+    pData = sqlite3_column_blob(s, 0);
+    nData = sqlite3_column_bytes(s, 0);
+  }
+
+  rc = loadSegmentLeaf(v, pData, nData, pTerm, nTerm, out);
+  if( rc!=SQLITE_OK ) return rc;
+
+  /* If we selected a child node, we need to finish that select. */
+  if( s!=NULL ){
+    /* We expect only one row.  We must execute another sqlite3_step()
+     * to complete the iteration; otherwise the table will remain
+     * locked. */
+    rc = sqlite3_step(s);
+    if( rc==SQLITE_ROW ) return SQLITE_ERROR;
+    if( rc!=SQLITE_DONE ) return rc;
+  }
+  return SQLITE_OK;
+}
+
+/* Scan the database and merge together the posting lists for the term
+** into *out.
+*/
+static int termSelect(fulltext_vtab *v, int iColumn,
+                      const char *pTerm, int nTerm,
+                      DocListType iType, DataBuffer *out){
+  DataBuffer doclist;
+  sqlite3_stmt *s;
+  int rc = sql_get_statement(v, SEGDIR_SELECT_ALL_STMT, &s);
+  if( rc!=SQLITE_OK ) return rc;
+
+  dataBufferInit(&doclist, 0);
+
+  /* Traverse the segments from oldest to newest so that newer doclist
+  ** elements for given docids overwrite older elements.
+  */
+  while( (rc=sql_step_statement(v, SEGDIR_SELECT_ALL_STMT, &s))==SQLITE_ROW ){
+    rc = loadSegment(v, sqlite3_column_blob(s, 0), sqlite3_column_bytes(s, 0),
+                     pTerm, nTerm, &doclist);
+    if( rc!=SQLITE_OK ) goto err;
+  }
+  if( rc==SQLITE_DONE ){
+    if( doclist.nData!=0 ){
+      /* TODO(shess) The old term_select_all() code applied the column
+      ** restrict as we merged segments, leading to smaller buffers.
+      ** This is probably worthwhile to bring back, once the new storage
+      ** system is checked in.
+      */
+      if( iColumn==v->nColumn ) iColumn = -1;
+      docListTrim(DL_DEFAULT, doclist.pData, doclist.nData,
+                  iColumn, iType, out);
+    }
+    rc = SQLITE_OK;
+  }
+
+ err:
+  dataBufferDestroy(&doclist);
+  return rc;
+}
+
+/****************************************************************/
+/* Used to hold hashtable data for sorting. */
+typedef struct TermData {
+  const char *pTerm;
+  int nTerm;
+  PLWriter *pWriter;
+} TermData;
+
+/* Orders TermData elements in strcmp fashion ( <0 for less-than, 0
+** for equal, >0 for greater-than).
+*/
+static int termDataCmp(const void *av, const void *bv){
+  const TermData *a = (const TermData *)av;
+  const TermData *b = (const TermData *)bv;
+  int n = a->nTerm<b->nTerm ? a->nTerm : b->nTerm;
+  int c = memcmp(a->pTerm, b->pTerm, n);
+  if( c!=0 ) return c;
+  return a->nTerm-b->nTerm;
+}
+
+/* Order pTerms data by term, then write a new level 0 segment using
+** LeafWriter.
+*/
+static int writeZeroSegment(fulltext_vtab *v, fts2Hash *pTerms){
+  fts2HashElem *e;
+  int idx, rc, i, n;
+  TermData *pData;
+  LeafWriter writer;
+  DataBuffer dl;
+
+  /* Determine the next index at level 0, merging as necessary. */
+  rc = segdirNextIndex(v, 0, &idx);
+  if( rc!=SQLITE_OK ) return rc;
+
+  n = fts2HashCount(pTerms);
+  pData = malloc(n*sizeof(TermData));
+
+  for(i = 0, e = fts2HashFirst(pTerms); e; i++, e = fts2HashNext(e)){
+    assert( i<n );
+    pData[i].pTerm = fts2HashKey(e);
+    pData[i].nTerm = fts2HashKeysize(e);
+    pData[i].pWriter = fts2HashData(e);
+  }
+  assert( i==n );
+
+  /* TODO(shess) Should we allow user-defined collation sequences
+  ** here?  I think we only need that once we support prefix searches.
+  */
+  if( n>1 ) qsort(pData, n, sizeof(*pData), termDataCmp);
+
+  /* TODO(shess) Refactor so that we can write directly to the segment
+  ** DataBuffer, as happens for segment merges.
+  */
+  leafWriterInit(0, idx, &writer);
+  dataBufferInit(&dl, 0);
+  for(i=0; i<n; i++){
+    DLWriter dlw;
+    dataBufferReset(&dl);
+    dlwInit(&dlw, DL_DEFAULT, &dl);
+    plwDlwAdd(pData[i].pWriter, &dlw);
+    rc = leafWriterStep(v, &writer,
+                        pData[i].pTerm, pData[i].nTerm, dl.pData, dl.nData);
+    dlwDestroy(&dlw);
+    if( rc!=SQLITE_OK ) goto err;
+  }
+  dataBufferDestroy(&dl);
+  rc = leafWriterFinalize(v, &writer);
+
+ err:
+  free(pData);
+  leafWriterDestroy(&writer);
+  return rc;
+}
+
+/* This function implements the xUpdate callback; it's the top-level entry
+ * point for inserting, deleting or updating a row in a full-text table. */
+static int fulltextUpdate(sqlite3_vtab *pVtab, int nArg, sqlite3_value **ppArg,
+                   sqlite_int64 *pRowid){
+  fulltext_vtab *v = (fulltext_vtab *) pVtab;
+  fts2Hash terms;   /* maps term string -> PosList */
+  int rc;
+  fts2HashElem *e;
+
+  TRACE(("FTS2 Update %p\n", pVtab));
+  
+  fts2HashInit(&terms, FTS2_HASH_STRING, 1);
+
+  if( nArg<2 ){
+    rc = index_delete(v, sqlite3_value_int64(ppArg[0]), &terms);
+  } else if( sqlite3_value_type(ppArg[0]) != SQLITE_NULL ){
+    /* An update:
+     * ppArg[0] = old rowid
+     * ppArg[1] = new rowid
+     * ppArg[2..2+v->nColumn-1] = values
+     * ppArg[2+v->nColumn] = value for magic column (we ignore this)
+     */
+    sqlite_int64 rowid = sqlite3_value_int64(ppArg[0]);
+    if( sqlite3_value_type(ppArg[1]) != SQLITE_INTEGER ||
+      sqlite3_value_int64(ppArg[1]) != rowid ){
+      rc = SQLITE_ERROR;  /* we don't allow changing the rowid */
+    } else {
+      assert( nArg==2+v->nColumn+1);
+      rc = index_update(v, rowid, &ppArg[2], &terms);
+    }
+  } else {
+    /* An insert:
+     * ppArg[1] = requested rowid
+     * ppArg[2..2+v->nColumn-1] = values
+     * ppArg[2+v->nColumn] = value for magic column (we ignore this)
+     */
+    assert( nArg==2+v->nColumn+1);
+    rc = index_insert(v, ppArg[1], &ppArg[2], pRowid, &terms);
+  }
+
+  if( rc==SQLITE_OK ) rc = writeZeroSegment(v, &terms);
+
+  /* clean up */
+  for(e=fts2HashFirst(&terms); e; e=fts2HashNext(e)){
+    plwDelete(fts2HashData(e));
+  }
+  fts2HashClear(&terms);
+
+  return rc;
+}
+
+/*
+** Implementation of the snippet() function for FTS2
+*/
+static void snippetFunc(
+  sqlite3_context *pContext,
+  int argc,
+  sqlite3_value **argv
+){
+  fulltext_cursor *pCursor;
+  if( argc<1 ) return;
+  if( sqlite3_value_type(argv[0])!=SQLITE_BLOB ||
+      sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){
+    sqlite3_result_error(pContext, "illegal first argument to html_snippet",-1);
+  }else{
+    const char *zStart = "<b>";
+    const char *zEnd = "</b>";
+    const char *zEllipsis = "<b>...</b>";
+    memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor));
+    if( argc>=2 ){
+      zStart = (const char*)sqlite3_value_text(argv[1]);
+      if( argc>=3 ){
+        zEnd = (const char*)sqlite3_value_text(argv[2]);
+        if( argc>=4 ){
+          zEllipsis = (const char*)sqlite3_value_text(argv[3]);
+        }
+      }
+    }
+    snippetAllOffsets(pCursor);
+    snippetText(pCursor, zStart, zEnd, zEllipsis);
+    sqlite3_result_text(pContext, pCursor->snippet.zSnippet,
+                        pCursor->snippet.nSnippet, SQLITE_STATIC);
+  }
+}
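+
+/* Sketch of typical SQL usage of snippet() (the table name "docs" is made
+** up; the optional 2nd..4th arguments override the <b>, </b> and
+** <b>...</b> defaults handled above):
+**
+**   SELECT snippet(docs) FROM docs WHERE docs MATCH 'sqlite';
+**   SELECT snippet(docs, '[', ']', ' ... ') FROM docs WHERE docs MATCH 'sqlite';
+*/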
+
+/*
+** Implementation of the offsets() function for FTS2
+*/
+static void snippetOffsetsFunc(
+  sqlite3_context *pContext,
+  int argc,
+  sqlite3_value **argv
+){
+  fulltext_cursor *pCursor;
+  if( argc<1 ) return;
+  if( sqlite3_value_type(argv[0])!=SQLITE_BLOB ||
+      sqlite3_value_bytes(argv[0])!=sizeof(pCursor) ){
+    sqlite3_result_error(pContext, "illegal first argument to offsets",-1);
+  }else{
+    memcpy(&pCursor, sqlite3_value_blob(argv[0]), sizeof(pCursor));
+    snippetAllOffsets(pCursor);
+    snippetOffsetText(&pCursor->snippet);
+    sqlite3_result_text(pContext,
+                        pCursor->snippet.zOffset, pCursor->snippet.nOffset,
+                        SQLITE_STATIC);
+  }
+}
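+
+/* Sketch of typical SQL usage of offsets() (the table name "docs" is made
+** up, and the exact field layout produced by snippetOffsetText() should be
+** treated as an assumption here: one group of integers per match, giving
+** the column, the matching query term, the byte offset and the byte length):
+**
+**   SELECT offsets(docs) FROM docs WHERE docs MATCH 'sqlite';
+*/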
+
+/*
+** This routine implements the xFindFunction method for the FTS2
+** virtual table.
+*/
+static int fulltextFindFunction(
+  sqlite3_vtab *pVtab,
+  int nArg,
+  const char *zName,
+  void (**pxFunc)(sqlite3_context*,int,sqlite3_value**),
+  void **ppArg
+){
+  if( strcmp(zName,"snippet")==0 ){
+    *pxFunc = snippetFunc;
+    return 1;
+  }else if( strcmp(zName,"offsets")==0 ){
+    *pxFunc = snippetOffsetsFunc;
+    return 1;
+  }
+  return 0;
+}
+
+static const sqlite3_module fulltextModule = {
+  /* iVersion      */ 0,
+  /* xCreate       */ fulltextCreate,
+  /* xConnect      */ fulltextConnect,
+  /* xBestIndex    */ fulltextBestIndex,
+  /* xDisconnect   */ fulltextDisconnect,
+  /* xDestroy      */ fulltextDestroy,
+  /* xOpen         */ fulltextOpen,
+  /* xClose        */ fulltextClose,
+  /* xFilter       */ fulltextFilter,
+  /* xNext         */ fulltextNext,
+  /* xEof          */ fulltextEof,
+  /* xColumn       */ fulltextColumn,
+  /* xRowid        */ fulltextRowid,
+  /* xUpdate       */ fulltextUpdate,
+  /* xBegin        */ 0, 
+  /* xSync         */ 0,
+  /* xCommit       */ 0,
+  /* xRollback     */ 0,
+  /* xFindFunction */ fulltextFindFunction,
+};
+
+int sqlite3Fts2Init(sqlite3 *db){
+  sqlite3_overload_function(db, "snippet", -1);
+  sqlite3_overload_function(db, "offsets", -1);
+  return sqlite3_create_module(db, "fts2", &fulltextModule, 0);
+}
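+
+/* Minimal end-to-end sketch, assuming FTS2 is compiled in and
+** sqlite3Fts2Init() has been run for the connection (table and column
+** names are made up; error handling omitted):
+**
+**   sqlite3 *db;
+**   sqlite3_open(":memory:", &db);
+**   sqlite3_exec(db, "CREATE VIRTUAL TABLE docs USING fts2(body);", 0, 0, 0);
+**   sqlite3_exec(db, "INSERT INTO docs(body) VALUES('hello full text');", 0, 0, 0);
+**   sqlite3_exec(db, "SELECT rowid FROM docs WHERE docs MATCH 'hello';", 0, 0, 0);
+*/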
+
+#if !SQLITE_CORE
+int sqlite3_extension_init(sqlite3 *db, char **pzErrMsg,
+                           const sqlite3_api_routines *pApi){
+  SQLITE_EXTENSION_INIT2(pApi)
+  return sqlite3Fts2Init(db);
+}
+#endif
+
+#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS2) */

Added: freeswitch/trunk/libs/sqlite/ext/fts2/fts2.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts2/fts2.h	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,11 @@
+#include "sqlite3.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif  /* __cplusplus */
+
+int sqlite3Fts2Init(sqlite3 *db);
+
+#ifdef __cplusplus
+}  /* extern "C" */
+#endif  /* __cplusplus */

Added: freeswitch/trunk/libs/sqlite/ext/fts2/fts2_hash.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts2/fts2_hash.c	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,369 @@
+/*
+** 2001 September 22
+**
+** The author disclaims copyright to this source code.  In place of
+** a legal notice, here is a blessing:
+**
+**    May you do good and not evil.
+**    May you find forgiveness for yourself and forgive others.
+**    May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This is the implementation of generic hash-tables used in SQLite.
+** We've modified it slightly to serve as a standalone hash table
+** implementation for the full-text indexing module.
+*/
+#include <assert.h>
+#include <stdlib.h>
+#include <string.h>
+
+/*
+** The code in this file is only compiled if:
+**
+**     * The FTS2 module is being built as an extension
+**       (in which case SQLITE_CORE is not defined), or
+**
+**     * The FTS2 module is being built into the core of
+**       SQLite (in which case SQLITE_ENABLE_FTS2 is defined).
+*/
+#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS2)
+
+
+#include "fts2_hash.h"
+
+static void *malloc_and_zero(int n){
+  void *p = malloc(n);
+  if( p ){
+    memset(p, 0, n);
+  }
+  return p;
+}
+
+/* Turn bulk memory into a hash table object by initializing the
+** fields of the Hash structure.
+**
+** "pNew" is a pointer to the hash table that is to be initialized.
+** keyClass is one of the constants 
+** FTS2_HASH_BINARY or FTS2_HASH_STRING.  The value of keyClass 
+** determines what kind of key the hash table will use.  "copyKey" is
+** true if the hash table should make its own private copy of keys and
+** false if it should just use the supplied pointer.
+*/
+void sqlite3Fts2HashInit(fts2Hash *pNew, int keyClass, int copyKey){
+  assert( pNew!=0 );
+  assert( keyClass>=FTS2_HASH_STRING && keyClass<=FTS2_HASH_BINARY );
+  pNew->keyClass = keyClass;
+  pNew->copyKey = copyKey;
+  pNew->first = 0;
+  pNew->count = 0;
+  pNew->htsize = 0;
+  pNew->ht = 0;
+  pNew->xMalloc = malloc_and_zero;
+  pNew->xFree = free;
+}
+
+/* Remove all entries from a hash table.  Reclaim all memory.
+** Call this routine to delete a hash table or to reset a hash table
+** to the empty state.
+*/
+void sqlite3Fts2HashClear(fts2Hash *pH){
+  fts2HashElem *elem;         /* For looping over all elements of the table */
+
+  assert( pH!=0 );
+  elem = pH->first;
+  pH->first = 0;
+  if( pH->ht ) pH->xFree(pH->ht);
+  pH->ht = 0;
+  pH->htsize = 0;
+  while( elem ){
+    fts2HashElem *next_elem = elem->next;
+    if( pH->copyKey && elem->pKey ){
+      pH->xFree(elem->pKey);
+    }
+    pH->xFree(elem);
+    elem = next_elem;
+  }
+  pH->count = 0;
+}
+
+/*
+** Hash and comparison functions when the mode is FTS2_HASH_STRING
+*/
+static int strHash(const void *pKey, int nKey){
+  const char *z = (const char *)pKey;
+  int h = 0;
+  if( nKey<=0 ) nKey = (int) strlen(z);
+  while( nKey > 0  ){
+    h = (h<<3) ^ h ^ *z++;
+    nKey--;
+  }
+  return h & 0x7fffffff;
+}
+static int strCompare(const void *pKey1, int n1, const void *pKey2, int n2){
+  if( n1!=n2 ) return 1;
+  return strncmp((const char*)pKey1,(const char*)pKey2,n1);
+}
+
+/*
+** Hash and comparison functions when the mode is FTS2_HASH_BINARY
+*/
+static int binHash(const void *pKey, int nKey){
+  int h = 0;
+  const char *z = (const char *)pKey;
+  while( nKey-- > 0 ){
+    h = (h<<3) ^ h ^ *(z++);
+  }
+  return h & 0x7fffffff;
+}
+static int binCompare(const void *pKey1, int n1, const void *pKey2, int n2){
+  if( n1!=n2 ) return 1;
+  return memcmp(pKey1,pKey2,n1);
+}
+
+/*
+** Return a pointer to the appropriate hash function given the key class.
+**
+** The C syntax in this function definition may be unfamiliar to some
+** programmers, so we provide the following additional explanation:
+**
+** The name of the function is "hashFunction".  The function takes a
+** single parameter "keyClass".  The return value of hashFunction()
+** is a pointer to another function.  Specifically, the return value
+** of hashFunction() is a pointer to a function that takes two parameters
+** with types "const void*" and "int" and returns an "int".
+*/
+static int (*hashFunction(int keyClass))(const void*,int){
+  if( keyClass==FTS2_HASH_STRING ){
+    return &strHash;
+  }else{
+    assert( keyClass==FTS2_HASH_BINARY );
+    return &binHash;
+  }
+}
+
+/*
+** Return a pointer to the appropriate comparison function given the
+** key class.
+**
+** For help in interpreting the obscure C code in the function definition,
+** see the header comment on the previous function.
+*/
+static int (*compareFunction(int keyClass))(const void*,int,const void*,int){
+  if( keyClass==FTS2_HASH_STRING ){
+    return &strCompare;
+  }else{
+    assert( keyClass==FTS2_HASH_BINARY );
+    return &binCompare;
+  }
+}
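+
+/* Usage sketch for the two selectors above (purely illustrative):
+**
+**   int (*xHash)(const void*,int) = hashFunction(FTS2_HASH_STRING);
+**   int (*xCmp)(const void*,int,const void*,int) =
+**       compareFunction(FTS2_HASH_STRING);
+**   int h  = xHash("abc", 3);           -- bucket is then h & (htsize-1)
+**   int eq = xCmp("abc", 3, "abc", 3);  -- ==0 when the keys match
+*/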
+
+/* Link an element into the hash table
+*/
+static void insertElement(
+  fts2Hash *pH,            /* The complete hash table */
+  struct _fts2ht *pEntry,  /* The entry into which pNew is inserted */
+  fts2HashElem *pNew       /* The element to be inserted */
+){
+  fts2HashElem *pHead;     /* First element already in pEntry */
+  pHead = pEntry->chain;
+  if( pHead ){
+    pNew->next = pHead;
+    pNew->prev = pHead->prev;
+    if( pHead->prev ){ pHead->prev->next = pNew; }
+    else             { pH->first = pNew; }
+    pHead->prev = pNew;
+  }else{
+    pNew->next = pH->first;
+    if( pH->first ){ pH->first->prev = pNew; }
+    pNew->prev = 0;
+    pH->first = pNew;
+  }
+  pEntry->count++;
+  pEntry->chain = pNew;
+}
+
+
+/* Resize the hash table so that it contains "new_size" buckets.
+** "new_size" must be a power of 2.  The hash table might fail
+** to resize if the xMalloc() callback fails.
+*/
+static void rehash(fts2Hash *pH, int new_size){
+  struct _fts2ht *new_ht;          /* The new hash table */
+  fts2HashElem *elem, *next_elem;  /* For looping over existing elements */
+  int (*xHash)(const void*,int);   /* The hash function */
+
+  assert( (new_size & (new_size-1))==0 );
+  new_ht = (struct _fts2ht *)pH->xMalloc( new_size*sizeof(struct _fts2ht) );
+  if( new_ht==0 ) return;
+  if( pH->ht ) pH->xFree(pH->ht);
+  pH->ht = new_ht;
+  pH->htsize = new_size;
+  xHash = hashFunction(pH->keyClass);
+  for(elem=pH->first, pH->first=0; elem; elem = next_elem){
+    int h = (*xHash)(elem->pKey, elem->nKey) & (new_size-1);
+    next_elem = elem->next;
+    insertElement(pH, &new_ht[h], elem);
+  }
+}
+
+/* This function (for internal use only) locates an element in a
+** hash table that matches the given key.  The hash for this key has
+** already been computed and is passed as the 4th parameter.
+*/
+static fts2HashElem *findElementGivenHash(
+  const fts2Hash *pH, /* The pH to be searched */
+  const void *pKey,   /* The key we are searching for */
+  int nKey,
+  int h               /* The hash for this key. */
+){
+  fts2HashElem *elem;            /* Used to loop thru the element list */
+  int count;                     /* Number of elements left to test */
+  int (*xCompare)(const void*,int,const void*,int);  /* comparison function */
+
+  if( pH->ht ){
+    struct _fts2ht *pEntry = &pH->ht[h];
+    elem = pEntry->chain;
+    count = pEntry->count;
+    xCompare = compareFunction(pH->keyClass);
+    while( count-- && elem ){
+      if( (*xCompare)(elem->pKey,elem->nKey,pKey,nKey)==0 ){ 
+        return elem;
+      }
+      elem = elem->next;
+    }
+  }
+  return 0;
+}
+
+/* Remove a single entry from the hash table given a pointer to that
+** element and a hash on the element's key.
+*/
+static void removeElementGivenHash(
+  fts2Hash *pH,         /* The pH containing "elem" */
+  fts2HashElem* elem,   /* The element to be removed from the pH */
+  int h                 /* Hash value for the element */
+){
+  struct _fts2ht *pEntry;
+  if( elem->prev ){
+    elem->prev->next = elem->next; 
+  }else{
+    pH->first = elem->next;
+  }
+  if( elem->next ){
+    elem->next->prev = elem->prev;
+  }
+  pEntry = &pH->ht[h];
+  if( pEntry->chain==elem ){
+    pEntry->chain = elem->next;
+  }
+  pEntry->count--;
+  if( pEntry->count<=0 ){
+    pEntry->chain = 0;
+  }
+  if( pH->copyKey && elem->pKey ){
+    pH->xFree(elem->pKey);
+  }
+  pH->xFree( elem );
+  pH->count--;
+  if( pH->count<=0 ){
+    assert( pH->first==0 );
+    assert( pH->count==0 );
+    fts2HashClear(pH);
+  }
+}
+
+/* Attempt to locate an element of the hash table pH with a key
+** that matches pKey,nKey.  Return the data for this element if it is
+** found, or NULL if there is no match.
+*/
+void *sqlite3Fts2HashFind(const fts2Hash *pH, const void *pKey, int nKey){
+  int h;                 /* A hash on key */
+  fts2HashElem *elem;    /* The element that matches key */
+  int (*xHash)(const void*,int);  /* The hash function */
+
+  if( pH==0 || pH->ht==0 ) return 0;
+  xHash = hashFunction(pH->keyClass);
+  assert( xHash!=0 );
+  h = (*xHash)(pKey,nKey);
+  assert( (pH->htsize & (pH->htsize-1))==0 );
+  elem = findElementGivenHash(pH,pKey,nKey, h & (pH->htsize-1));
+  return elem ? elem->data : 0;
+}
+
+/* Insert an element into the hash table pH.  The key is pKey,nKey
+** and the data is "data".
+**
+** If no element exists with a matching key, then a new
+** element is created.  A copy of the key is made if the copyKey
+** flag is set.  NULL is returned.
+**
+** If another element already exists with the same key, then the
+** new data replaces the old data and the old data is returned.
+** The key is not copied in this instance.  If a malloc fails, then
+** the new data is returned and the hash table is unchanged.
+**
+** If the "data" parameter to this function is NULL, then the
+** element corresponding to "key" is removed from the hash table.
+*/
+void *sqlite3Fts2HashInsert(
+  fts2Hash *pH,        /* The hash table to insert into */
+  const void *pKey,    /* The key */
+  int nKey,            /* Number of bytes in the key */
+  void *data           /* The data */
+){
+  int hraw;                 /* Raw hash value of the key */
+  int h;                    /* the hash of the key modulo hash table size */
+  fts2HashElem *elem;       /* Used to loop thru the element list */
+  fts2HashElem *new_elem;   /* New element added to the pH */
+  int (*xHash)(const void*,int);  /* The hash function */
+
+  assert( pH!=0 );
+  xHash = hashFunction(pH->keyClass);
+  assert( xHash!=0 );
+  hraw = (*xHash)(pKey, nKey);
+  assert( (pH->htsize & (pH->htsize-1))==0 );
+  h = hraw & (pH->htsize-1);
+  elem = findElementGivenHash(pH,pKey,nKey,h);
+  if( elem ){
+    void *old_data = elem->data;
+    if( data==0 ){
+      removeElementGivenHash(pH,elem,h);
+    }else{
+      elem->data = data;
+    }
+    return old_data;
+  }
+  if( data==0 ) return 0;
+  new_elem = (fts2HashElem*)pH->xMalloc( sizeof(fts2HashElem) );
+  if( new_elem==0 ) return data;
+  if( pH->copyKey && pKey!=0 ){
+    new_elem->pKey = pH->xMalloc( nKey );
+    if( new_elem->pKey==0 ){
+      pH->xFree(new_elem);
+      return data;
+    }
+    memcpy((void*)new_elem->pKey, pKey, nKey);
+  }else{
+    new_elem->pKey = (void*)pKey;
+  }
+  new_elem->nKey = nKey;
+  pH->count++;
+  if( pH->htsize==0 ){
+    rehash(pH,8);
+    if( pH->htsize==0 ){
+      pH->count = 0;
+      pH->xFree(new_elem);
+      return data;
+    }
+  }
+  if( pH->count > pH->htsize ){
+    rehash(pH,pH->htsize*2);
+  }
+  assert( pH->htsize>0 );
+  assert( (pH->htsize & (pH->htsize-1))==0 );
+  h = hraw & (pH->htsize-1);
+  insertElement(pH, &pH->ht[h], new_elem);
+  new_elem->data = data;
+  return 0;
+}
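+
+/* Usage sketch of the insert/find/delete contract documented above
+** (illustrative only; "payload" stands in for caller-owned data):
+**
+**   fts2Hash h;
+**   fts2HashInit(&h, FTS2_HASH_STRING, 1);
+**   fts2HashInsert(&h, "term", 4, payload);   -- returns 0: new key
+**   fts2HashFind(&h, "term", 4);              -- returns payload
+**   fts2HashInsert(&h, "term", 4, 0);         -- NULL data deletes the entry
+**   fts2HashClear(&h);
+*/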
+
+#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS2) */

Added: freeswitch/trunk/libs/sqlite/ext/fts2/fts2_hash.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts2/fts2_hash.h	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,112 @@
+/*
+** 2001 September 22
+**
+** The author disclaims copyright to this source code.  In place of
+** a legal notice, here is a blessing:
+**
+**    May you do good and not evil.
+**    May you find forgiveness for yourself and forgive others.
+**    May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This is the header file for the generic hash-table implementation
+** used in SQLite.  We've modified it slightly to serve as a standalone
+** hash table implementation for the full-text indexing module.
+**
+*/
+#ifndef _FTS2_HASH_H_
+#define _FTS2_HASH_H_
+
+/* Forward declarations of structures. */
+typedef struct fts2Hash fts2Hash;
+typedef struct fts2HashElem fts2HashElem;
+
+/* A complete hash table is an instance of the following structure.
+** The internals of this structure are intended to be opaque -- client
+** code should not attempt to access or modify the fields of this structure
+** directly.  Change this structure only by using the routines below.
+** However, many of the "procedures" and "functions" for modifying and
+** accessing this structure are really macros, so we can't really make
+** this structure opaque.
+*/
+struct fts2Hash {
+  char keyClass;          /* FTS2_HASH_STRING or FTS2_HASH_BINARY */
+  char copyKey;           /* True if copy of key made on insert */
+  int count;              /* Number of entries in this table */
+  fts2HashElem *first;    /* The first element of the array */
+  void *(*xMalloc)(int);  /* malloc() function to use */
+  void (*xFree)(void *);  /* free() function to use */
+  int htsize;             /* Number of buckets in the hash table */
+  struct _fts2ht {        /* the hash table */
+    int count;               /* Number of entries with this hash */
+    fts2HashElem *chain;     /* Pointer to first entry with this hash */
+  } *ht;
+};
+
+/* Each element in the hash table is an instance of the following 
+** structure.  All elements are stored on a single doubly-linked list.
+**
+** Again, this structure is intended to be opaque, but it can't really
+** be opaque because it is used by macros.
+*/
+struct fts2HashElem {
+  fts2HashElem *next, *prev; /* Next and previous elements in the table */
+  void *data;                /* Data associated with this element */
+  void *pKey; int nKey;      /* Key associated with this element */
+};
+
+/*
+** There are 2 different modes of operation for a hash table:
+**
+**   FTS2_HASH_STRING        pKey points to a string that is nKey bytes long
+**                           (including the null-terminator, if any).  Case
+**                           is respected in comparisons.
+**
+**   FTS2_HASH_BINARY        pKey points to binary data nKey bytes long. 
+**                           memcmp() is used to compare keys.
+**
+** A copy of the key is made if the copyKey parameter to fts2HashInit is 1.  
+*/
+#define FTS2_HASH_STRING    1
+#define FTS2_HASH_BINARY    2
+
+/*
+** Access routines.  To delete, insert a NULL pointer.
+*/
+void sqlite3Fts2HashInit(fts2Hash*, int keytype, int copyKey);
+void *sqlite3Fts2HashInsert(fts2Hash*, const void *pKey, int nKey, void *pData);
+void *sqlite3Fts2HashFind(const fts2Hash*, const void *pKey, int nKey);
+void sqlite3Fts2HashClear(fts2Hash*);
+
+/*
+** Shorthand for the functions above
+*/
+#define fts2HashInit   sqlite3Fts2HashInit
+#define fts2HashInsert sqlite3Fts2HashInsert
+#define fts2HashFind   sqlite3Fts2HashFind
+#define fts2HashClear  sqlite3Fts2HashClear
+
+/*
+** Macros for looping over all elements of a hash table.  The idiom is
+** like this:
+**
+**   fts2Hash h;
+**   fts2HashElem *p;
+**   ...
+**   for(p=fts2HashFirst(&h); p; p=fts2HashNext(p)){
+**     SomeStructure *pData = fts2HashData(p);
+**     // do something with pData
+**   }
+*/
+#define fts2HashFirst(H)  ((H)->first)
+#define fts2HashNext(E)   ((E)->next)
+#define fts2HashData(E)   ((E)->data)
+#define fts2HashKey(E)    ((E)->pKey)
+#define fts2HashKeysize(E) ((E)->nKey)
+
+/*
+** Number of entries in a hash table
+*/
+#define fts2HashCount(H)  ((H)->count)
+
+#endif /* _FTS2_HASH_H_ */

Added: freeswitch/trunk/libs/sqlite/ext/fts2/fts2_porter.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts2/fts2_porter.c	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,642 @@
+/*
+** 2006 September 30
+**
+** The author disclaims copyright to this source code.  In place of
+** a legal notice, here is a blessing:
+**
+**    May you do good and not evil.
+**    May you find forgiveness for yourself and forgive others.
+**    May you share freely, never taking more than you give.
+**
+*************************************************************************
+** Implementation of the full-text-search tokenizer that implements
+** a Porter stemmer.
+*/
+
+/*
+** The code in this file is only compiled if:
+**
+**     * The FTS2 module is being built as an extension
+**       (in which case SQLITE_CORE is not defined), or
+**
+**     * The FTS2 module is being built into the core of
+**       SQLite (in which case SQLITE_ENABLE_FTS2 is defined).
+*/
+#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS2)
+
+
+#include <assert.h>
+#if !defined(__APPLE__)
+#include <malloc.h>
+#else
+#include <stdlib.h>
+#endif
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+
+#include "fts2_tokenizer.h"
+
+/*
+** Class derived from sqlite3_tokenizer
+*/
+typedef struct porter_tokenizer {
+  sqlite3_tokenizer base;      /* Base class */
+} porter_tokenizer;
+
+/*
+** Class derived from sqlite3_tokenizer_cursor
+*/
+typedef struct porter_tokenizer_cursor {
+  sqlite3_tokenizer_cursor base;
+  const char *zInput;          /* input we are tokenizing */
+  int nInput;                  /* size of the input */
+  int iOffset;                 /* current position in zInput */
+  int iToken;                  /* index of next token to be returned */
+  char *zToken;                /* storage for current token */
+  int nAllocated;              /* space allocated to zToken buffer */
+} porter_tokenizer_cursor;
+
+
+/* Forward declaration */
+static const sqlite3_tokenizer_module porterTokenizerModule;
+
+
+/*
+** Create a new tokenizer instance.
+*/
+static int porterCreate(
+  int argc, const char * const *argv,
+  sqlite3_tokenizer **ppTokenizer
+){
+  porter_tokenizer *t;
+  t = (porter_tokenizer *) calloc(sizeof(porter_tokenizer), 1);
+  *ppTokenizer = &t->base;
+  return SQLITE_OK;
+}
+
+/*
+** Destroy a tokenizer
+*/
+static int porterDestroy(sqlite3_tokenizer *pTokenizer){
+  free(pTokenizer);
+  return SQLITE_OK;
+}
+
+/*
+** Prepare to begin tokenizing a particular string.  The input
+** string to be tokenized is zInput[0..nInput-1].  A cursor
+** used to incrementally tokenize this string is returned in 
+** *ppCursor.
+*/
+static int porterOpen(
+  sqlite3_tokenizer *pTokenizer,         /* The tokenizer */
+  const char *zInput, int nInput,        /* String to be tokenized */
+  sqlite3_tokenizer_cursor **ppCursor    /* OUT: Tokenization cursor */
+){
+  porter_tokenizer_cursor *c;
+
+  c = (porter_tokenizer_cursor *) malloc(sizeof(porter_tokenizer_cursor));
+  c->zInput = zInput;
+  if( zInput==0 ){
+    c->nInput = 0;
+  }else if( nInput<0 ){
+    c->nInput = (int)strlen(zInput);
+  }else{
+    c->nInput = nInput;
+  }
+  c->iOffset = 0;                 /* start tokenizing at the beginning */
+  c->iToken = 0;
+  c->zToken = NULL;               /* no space allocated, yet. */
+  c->nAllocated = 0;
+
+  *ppCursor = &c->base;
+  return SQLITE_OK;
+}
+
+/*
+** Close a tokenization cursor previously opened by a call to
+** porterOpen() above.
+*/
+static int porterClose(sqlite3_tokenizer_cursor *pCursor){
+  porter_tokenizer_cursor *c = (porter_tokenizer_cursor *) pCursor;
+  free(c->zToken);
+  free(c);
+  return SQLITE_OK;
+}
+/*
+** Vowel or consonant
+*/
+static const char cType[] = {
+   0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0,
+   1, 1, 1, 2, 1
+};
+
+/*
+** isConsonant() and isVowel() determine whether the first character of
+** the string they point to is a consonant or a vowel, according
+** to Porter's rules.
+**
+** A consonant is any letter other than 'a', 'e', 'i', 'o', or 'u'.
+** 'Y' is a consonant unless it follows another consonant,
+** in which case it is a vowel.
+**
+** In these routines, the letters are in reverse order.  So the 'y' rule
+** is that 'y' is a consonant unless it is followed by another
+** consonant.
+*/
+static int isVowel(const char*);
+static int isConsonant(const char *z){
+  int j;
+  char x = *z;
+  if( x==0 ) return 0;
+  assert( x>='a' && x<='z' );
+  j = cType[x-'a'];
+  if( j<2 ) return j;
+  return z[1]==0 || isVowel(z + 1);
+}
+static int isVowel(const char *z){
+  int j;
+  char x = *z;
+  if( x==0 ) return 0;
+  assert( x>='a' && x<='z' );
+  j = cType[x-'a'];
+  if( j<2 ) return 1-j;
+  return isConsonant(z + 1);
+}
+
+/*
+** Let any sequence of one or more vowels be represented by V and let
+** C be sequence of one or more consonants.  Then every word can be
+** represented as:
+**
+**           [C] (VC){m} [V]
+**
+** In prose:  A word is an optional consonant followed by zero or more
+** vowel-consonant pairs followed by an optional vowel.  "m" is the
+** number of vowel-consonant pairs.  This routine computes the value
+** of m for the first i bytes of a word.
+**
+** Return true if the m-value for z is 1 or more.  In other words,
+** return true if z contains at least one vowel that is followed
+** by a consonant.
+**
+** In this routine z[] is in reverse order.  So we are really looking
+** for an instance of a consonant followed by a vowel.
+*/
+static int m_gt_0(const char *z){
+  while( isVowel(z) ){ z++; }
+  if( *z==0 ) return 0;
+  while( isConsonant(z) ){ z++; }
+  return *z!=0;
+}
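+
+/* Worked m-values for the pattern above (words shown in forward order,
+** even though the routines receive them reversed):
+**
+**   "tree"    ->  C="tr" V="ee"                -> m=0  (m_gt_0() is false)
+**   "trouble" ->  C="tr" V="ou" C="bl" V="e"   -> m=1  (m_gt_0() is true)
+**   "oaten"   ->  V="oa" C="t" V="e" C="n"     -> m=2
+*/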
+
+/* Like m_gt_0() above except we are looking for a value of m which is
+** exactly 1.
+*/
+static int m_eq_1(const char *z){
+  while( isVowel(z) ){ z++; }
+  if( *z==0 ) return 0;
+  while( isConsonant(z) ){ z++; }
+  if( *z==0 ) return 0;
+  while( isVowel(z) ){ z++; }
+  if( *z==0 ) return 1;
+  while( isConsonant(z) ){ z++; }
+  return *z==0;
+}
+
+/* Like m_gt_0() above except we are looking for a value of m>1 instead
+** of m>0.
+*/
+static int m_gt_1(const char *z){
+  while( isVowel(z) ){ z++; }
+  if( *z==0 ) return 0;
+  while( isConsonant(z) ){ z++; }
+  if( *z==0 ) return 0;
+  while( isVowel(z) ){ z++; }
+  if( *z==0 ) return 0;
+  while( isConsonant(z) ){ z++; }
+  return *z!=0;
+}
+
+/*
+** Return TRUE if there is a vowel anywhere within the string z[].
+*/
+static int hasVowel(const char *z){
+  while( isConsonant(z) ){ z++; }
+  return *z!=0;
+}
+
+/*
+** Return TRUE if the word ends in a double consonant.
+**
+** The text is reversed here. So we are really looking at
+** the first two characters of z[].
+*/
+static int doubleConsonant(const char *z){
+  return isConsonant(z) && z[0]==z[1] && isConsonant(z+1);
+}
+
+/*
+** Return TRUE if the word ends with three letters which
+** are consonant-vowel-consonant and where the final consonant
+** is not 'w', 'x', or 'y'.
+**
+** The word is reversed here.  So we are really checking the
+** first three letters and the first one cannot be in [wxy].
+*/
+static int star_oh(const char *z){
+  return
+    z[0]!=0 && isConsonant(z) &&
+    z[0]!='w' && z[0]!='x' && z[0]!='y' &&
+    z[1]!=0 && isVowel(z+1) &&
+    z[2]!=0 && isConsonant(z+2);
+}
+
+/*
+** If the word ends with zFrom and xCond() is true for the stem
+** of the word that precedes the zFrom ending, then change the
+** ending to zTo.
+**
+** The input word *pz and zFrom are both in reverse order.  zTo
+** is in normal order. 
+**
+** Return TRUE if zFrom matches.  Return FALSE if zFrom does not
+** match.  Note that TRUE is returned even if xCond() fails and
+** no substitution occurs.
+*/
+static int stem(
+  char **pz,             /* The word being stemmed (Reversed) */
+  const char *zFrom,     /* If the ending matches this... (Reversed) */
+  const char *zTo,       /* ... change the ending to this (not reversed) */
+  int (*xCond)(const char*)   /* Condition that must be true */
+){
+  char *z = *pz;
+  while( *zFrom && *zFrom==*z ){ z++; zFrom++; }
+  if( *zFrom!=0 ) return 0;
+  if( xCond && !xCond(z) ) return 1;
+  while( *zTo ){
+    *(--z) = *(zTo++);
+  }
+  *pz = z;
+  return 1;
+}
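+
+/* Worked example: stemming "relational".  Reversed, *pz points at
+** "lanoitaler".  The Step 2 call below
+**
+**   stem(&z, "lanoita", "ate", m_gt_0);
+**
+** matches the reversed ending "lanoita" ("ational"), m_gt_0() holds for
+** the remaining stem "rel", so the ending is rewritten in place and *pz
+** now spells "etaler" -- "relate" read forward.
+*/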
+
+/*
+** This is the fallback stemmer used when the porter stemmer is
+** inappropriate.  The input word is copied into the output with
+** US-ASCII case folding.  If the input word is too long (more
+** than 20 bytes if it contains no digits or more than 6 bytes if
+** it contains digits) then the word is truncated to 20 or 6 bytes
+** by taking 10 or 3 bytes from the beginning and end.
+*/
+static void copy_stemmer(const char *zIn, int nIn, char *zOut, int *pnOut){
+  int i, mx, j;
+  int hasDigit = 0;
+  for(i=0; i<nIn; i++){
+    int c = zIn[i];
+    if( c>='A' && c<='Z' ){
+      zOut[i] = c - 'A' + 'a';
+    }else{
+      if( c>='0' && c<='9' ) hasDigit = 1;
+      zOut[i] = c;
+    }
+  }
+  mx = hasDigit ? 3 : 10;
+  if( nIn>mx*2 ){
+    for(j=mx, i=nIn-mx; i<nIn; i++, j++){
+      zOut[j] = zOut[i];
+    }
+    i = j;
+  }
+  zOut[i] = 0;
+  *pnOut = i;
+}
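+
+/* Illustrative truncation (made-up input): a 26-letter word with no
+** digits is case-folded and shortened to its first 10 plus its last 10
+** bytes, so "abcdefghijklmnopqrstuvwxyz" becomes "abcdefghijqrstuvwxyz".
+** If the word contains a digit, the limits drop to 3+3 bytes.
+*/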
+
+
+/*
+** Stem the input word zIn[0..nIn-1].  Store the output in zOut.
+** zOut is at least big enough to hold nIn bytes.  Write the actual
+** size of the output word (exclusive of the '\0' terminator) into *pnOut.
+**
+** Any upper-case characters in the US-ASCII character set ([A-Z])
+** are converted to lower case.  Upper-case UTF characters are
+** unchanged.
+**
+** Words that are longer than about 20 bytes are stemmed by retaining
+** a few bytes from the beginning and the end of the word.  If the
+** word contains digits, 3 bytes are taken from the beginning and
+** 3 bytes from the end.  For long words without digits, 10 bytes
+** are taken from each end.  US-ASCII case folding still applies.
+** 
+** If the input word contains no digits but does contain characters not
+** in [a-zA-Z], then no stemming is attempted and this routine just
+** copies the input into the output with US-ASCII case folding.
+**
+** Stemming never increases the length of the word.  So there is
+** no chance of overflowing the zOut buffer.
+*/
+static void porter_stemmer(const char *zIn, int nIn, char *zOut, int *pnOut){
+  int i, j, c;
+  char zReverse[28];
+  char *z, *z2;
+  if( nIn<3 || nIn>=sizeof(zReverse)-7 ){
+    /* The word is too big or too small for the porter stemmer.
+    ** Fall back to the copy stemmer */
+    copy_stemmer(zIn, nIn, zOut, pnOut);
+    return;
+  }
+  for(i=0, j=sizeof(zReverse)-6; i<nIn; i++, j--){
+    c = zIn[i];
+    if( c>='A' && c<='Z' ){
+      zReverse[j] = c + 'a' - 'A';
+    }else if( c>='a' && c<='z' ){
+      zReverse[j] = c;
+    }else{
+      /* The use of a character not in [a-zA-Z] means that we fall back
+      ** to the copy stemmer */
+      copy_stemmer(zIn, nIn, zOut, pnOut);
+      return;
+    }
+  }
+  memset(&zReverse[sizeof(zReverse)-5], 0, 5);
+  z = &zReverse[j+1];
+
+
+  /* Step 1a */
+  if( z[0]=='s' ){
+    if(
+     !stem(&z, "sess", "ss", 0) &&
+     !stem(&z, "sei", "i", 0)  &&
+     !stem(&z, "ss", "ss", 0)
+    ){
+      z++;
+    }
+  }
+
+  /* Step 1b */  
+  z2 = z;
+  if( stem(&z, "dee", "ee", m_gt_0) ){
+    /* Do nothing.  The work was all in the test */
+  }else if( 
+     (stem(&z, "gni", "", hasVowel) || stem(&z, "de", "", hasVowel))
+      && z!=z2
+  ){
+     if( stem(&z, "ta", "ate", 0) ||
+         stem(&z, "lb", "ble", 0) ||
+         stem(&z, "zi", "ize", 0) ){
+       /* Do nothing.  The work was all in the test */
+     }else if( doubleConsonant(z) && (*z!='l' && *z!='s' && *z!='z') ){
+       z++;
+     }else if( m_eq_1(z) && star_oh(z) ){
+       *(--z) = 'e';
+     }
+  }
+
+  /* Step 1c */
+  if( z[0]=='y' && hasVowel(z+1) ){
+    z[0] = 'i';
+  }
+
+  /* Step 2 */
+  switch( z[1] ){
+   case 'a':
+     stem(&z, "lanoita", "ate", m_gt_0) ||
+     stem(&z, "lanoit", "tion", m_gt_0);
+     break;
+   case 'c':
+     stem(&z, "icne", "ence", m_gt_0) ||
+     stem(&z, "icna", "ance", m_gt_0);
+     break;
+   case 'e':
+     stem(&z, "rezi", "ize", m_gt_0);
+     break;
+   case 'g':
+     stem(&z, "igol", "log", m_gt_0);
+     break;
+   case 'l':
+     stem(&z, "ilb", "ble", m_gt_0) ||
+     stem(&z, "illa", "al", m_gt_0) ||
+     stem(&z, "iltne", "ent", m_gt_0) ||
+     stem(&z, "ile", "e", m_gt_0) ||
+     stem(&z, "ilsuo", "ous", m_gt_0);
+     break;
+   case 'o':
+     stem(&z, "noitazi", "ize", m_gt_0) ||
+     stem(&z, "noita", "ate", m_gt_0) ||
+     stem(&z, "rota", "ate", m_gt_0);
+     break;
+   case 's':
+     stem(&z, "msila", "al", m_gt_0) ||
+     stem(&z, "ssenevi", "ive", m_gt_0) ||
+     stem(&z, "ssenluf", "ful", m_gt_0) ||
+     stem(&z, "ssensuo", "ous", m_gt_0);
+     break;
+   case 't':
+     stem(&z, "itila", "al", m_gt_0) ||
+     stem(&z, "itivi", "ive", m_gt_0) ||
+     stem(&z, "itilib", "ble", m_gt_0);
+     break;
+  }
+
+  /* Step 3 */
+  switch( z[0] ){
+   case 'e':
+     stem(&z, "etaci", "ic", m_gt_0) ||
+     stem(&z, "evita", "", m_gt_0)   ||
+     stem(&z, "ezila", "al", m_gt_0);
+     break;
+   case 'i':
+     stem(&z, "itici", "ic", m_gt_0);
+     break;
+   case 'l':
+     stem(&z, "laci", "ic", m_gt_0) ||
+     stem(&z, "luf", "", m_gt_0);
+     break;
+   case 's':
+     stem(&z, "ssen", "", m_gt_0);
+     break;
+  }
+
+  /* Step 4 */
+  switch( z[1] ){
+   case 'a':
+     if( z[0]=='l' && m_gt_1(z+2) ){
+       z += 2;
+     }
+     break;
+   case 'c':
+     if( z[0]=='e' && z[2]=='n' && (z[3]=='a' || z[3]=='e')  && m_gt_1(z+4)  ){
+       z += 4;
+     }
+     break;
+   case 'e':
+     if( z[0]=='r' && m_gt_1(z+2) ){
+       z += 2;
+     }
+     break;
+   case 'i':
+     if( z[0]=='c' && m_gt_1(z+2) ){
+       z += 2;
+     }
+     break;
+   case 'l':
+     if( z[0]=='e' && z[2]=='b' && (z[3]=='a' || z[3]=='i') && m_gt_1(z+4) ){
+       z += 4;
+     }
+     break;
+   case 'n':
+     if( z[0]=='t' ){
+       if( z[2]=='a' ){
+         if( m_gt_1(z+3) ){
+           z += 3;
+         }
+       }else if( z[2]=='e' ){
+         stem(&z, "tneme", "", m_gt_1) ||
+         stem(&z, "tnem", "", m_gt_1) ||
+         stem(&z, "tne", "", m_gt_1);
+       }
+     }
+     break;
+   case 'o':
+     if( z[0]=='u' ){
+       if( m_gt_1(z+2) ){
+         z += 2;
+       }
+     }else if( z[3]=='s' || z[3]=='t' ){
+       stem(&z, "noi", "", m_gt_1);
+     }
+     break;
+   case 's':
+     if( z[0]=='m' && z[2]=='i' && m_gt_1(z+3) ){
+       z += 3;
+     }
+     break;
+   case 't':
+     stem(&z, "eta", "", m_gt_1) ||
+     stem(&z, "iti", "", m_gt_1);
+     break;
+   case 'u':
+     if( z[0]=='s' && z[2]=='o' && m_gt_1(z+3) ){
+       z += 3;
+     }
+     break;
+   case 'v':
+   case 'z':
+     if( z[0]=='e' && z[2]=='i' && m_gt_1(z+3) ){
+       z += 3;
+     }
+     break;
+  }
+
+  /* Step 5a */
+  if( z[0]=='e' ){
+    if( m_gt_1(z+1) ){
+      z++;
+    }else if( m_eq_1(z+1) && !star_oh(z+1) ){
+      z++;
+    }
+  }
+
+  /* Step 5b */
+  if( m_gt_1(z) && z[0]=='l' && z[1]=='l' ){
+    z++;
+  }
+
+  /* z[] is now the stemmed word in reverse order.  Flip it back
+  ** around into forward order and return.
+  */
+  *pnOut = i = strlen(z);
+  zOut[i] = 0;
+  while( *z ){
+    zOut[--i] = *(z++);
+  }
+}
+
+/*
+** Characters that can be part of a token.  We assume any character
+** whose value is 0x80 or greater (any UTF character) can be
+** part of a token.  In other words, delimiters all must have
+** values of 0x7f or lower.
+*/
+static const char isIdChar[] = {
+/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF */
+    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,  /* 3x */
+    0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,  /* 4x */
+    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1,  /* 5x */
+    0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,  /* 6x */
+    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0,  /* 7x */
+};
+#define idChar(C)  (((ch=C)&0x80)!=0 || (ch>0x2f && isIdChar[ch-0x30]))
+#define isDelim(C) (((ch=C)&0x80)==0 && (ch<0x30 || !isIdChar[ch-0x30]))
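+
+/* Quick sanity checks on the classification macros (note that they expect
+** an int variable named "ch" to be in scope, as in porterNext() below):
+**
+**   int ch;
+**   idChar('a');    -- true:  0x61-0x30 indexes a 1 in the table
+**   idChar('_');    -- true:  0x5f-0x30 indexes a 1 in the table
+**   isDelim(' ');   -- true:  0x20 is below 0x30, hence a delimiter
+**   idChar(0xc3);   -- true:  high bit set, treated as part of a token
+*/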
+
+/*
+** Extract the next token from a tokenization cursor.  The cursor must
+** have been opened by a prior call to porterOpen().
+*/
+static int porterNext(
+  sqlite3_tokenizer_cursor *pCursor,  /* Cursor returned by porterOpen */
+  const char **pzToken,               /* OUT: *pzToken is the token text */
+  int *pnBytes,                       /* OUT: Number of bytes in token */
+  int *piStartOffset,                 /* OUT: Starting offset of token */
+  int *piEndOffset,                   /* OUT: Ending offset of token */
+  int *piPosition                     /* OUT: Position integer of token */
+){
+  porter_tokenizer_cursor *c = (porter_tokenizer_cursor *) pCursor;
+  const char *z = c->zInput;
+
+  while( c->iOffset<c->nInput ){
+    int iStartOffset, ch;
+
+    /* Scan past delimiter characters */
+    while( c->iOffset<c->nInput && isDelim(z[c->iOffset]) ){
+      c->iOffset++;
+    }
+
+    /* Count non-delimiter characters. */
+    iStartOffset = c->iOffset;
+    while( c->iOffset<c->nInput && !isDelim(z[c->iOffset]) ){
+      c->iOffset++;
+    }
+
+    if( c->iOffset>iStartOffset ){
+      int n = c->iOffset-iStartOffset;
+      if( n>c->nAllocated ){
+        c->nAllocated = n+20;
+        c->zToken = realloc(c->zToken, c->nAllocated);
+      }
+      porter_stemmer(&z[iStartOffset], n, c->zToken, pnBytes);
+      *pzToken = c->zToken;
+      *piStartOffset = iStartOffset;
+      *piEndOffset = c->iOffset;
+      *piPosition = c->iToken++;
+      return SQLITE_OK;
+    }
+  }
+  return SQLITE_DONE;
+}
+
+/*
+** The set of routines that implement the porter-stemmer tokenizer
+*/
+static const sqlite3_tokenizer_module porterTokenizerModule = {
+  0,
+  porterCreate,
+  porterDestroy,
+  porterOpen,
+  porterClose,
+  porterNext,
+};
+
+/*
+** Allocate a new porter tokenizer.  Return a pointer to the new
+** tokenizer in *ppModule
+*/
+void sqlite3Fts2PorterTokenizerModule(
+  sqlite3_tokenizer_module const**ppModule
+){
+  *ppModule = &porterTokenizerModule;
+}
+
+#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS2) */

Added: freeswitch/trunk/libs/sqlite/ext/fts2/fts2_tokenizer.h
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts2/fts2_tokenizer.h	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,90 @@
+/*
+** 2006 July 10
+**
+** The author disclaims copyright to this source code.
+**
+*************************************************************************
+** Defines the interface to tokenizers used by fulltext-search.  There
+** are three basic components:
+**
+** sqlite3_tokenizer_module is a singleton defining the tokenizer
+** interface functions.  This is essentially the class structure for
+** tokenizers.
+**
+** sqlite3_tokenizer is used to define a particular tokenizer, perhaps
+** including customization information defined at creation time.
+**
+** sqlite3_tokenizer_cursor is generated by a tokenizer to generate
+** tokens from a particular input.
+*/
+#ifndef _FTS2_TOKENIZER_H_
+#define _FTS2_TOKENIZER_H_
+
+/* TODO(shess) Only used for SQLITE_OK and SQLITE_DONE at this time.
+** If tokenizers are to be allowed to call sqlite3_*() functions, then
+** we will need a way to register the API consistently.
+*/
+#include "sqlite3.h"
+
+/*
+** Structures used by the tokenizer interface.
+*/
+typedef struct sqlite3_tokenizer sqlite3_tokenizer;
+typedef struct sqlite3_tokenizer_cursor sqlite3_tokenizer_cursor;
+typedef struct sqlite3_tokenizer_module sqlite3_tokenizer_module;
+
+struct sqlite3_tokenizer_module {
+  int iVersion;                  /* currently 0 */
+
+  /*
+  ** Create and destroy a tokenizer.  argc/argv are passed down from
+  ** the fulltext virtual table creation to allow customization.
+  */
+  int (*xCreate)(int argc, const char *const*argv,
+                 sqlite3_tokenizer **ppTokenizer);
+  int (*xDestroy)(sqlite3_tokenizer *pTokenizer);
+
+  /*
+  ** Tokenize a particular input.  Call xOpen() to prepare to
+  ** tokenize, xNext() repeatedly until it returns SQLITE_DONE, then
+  ** xClose() to free any internal state.  The pInput passed to
+  ** xOpen() must exist until the cursor is closed.  The ppToken
+  ** result from xNext() is only valid until the next call to xNext()
+  ** or until xClose() is called.
+  */
+  /* TODO(shess) current implementation requires pInput to be
+  ** nul-terminated.  This should either be fixed, or pInput/nBytes
+  ** should be converted to zInput.
+  */
+  int (*xOpen)(sqlite3_tokenizer *pTokenizer,
+               const char *pInput, int nBytes,
+               sqlite3_tokenizer_cursor **ppCursor);
+  int (*xClose)(sqlite3_tokenizer_cursor *pCursor);
+  int (*xNext)(sqlite3_tokenizer_cursor *pCursor,
+               const char **ppToken, int *pnBytes,
+               int *piStartOffset, int *piEndOffset, int *piPosition);
+};
+
+struct sqlite3_tokenizer {
+  const sqlite3_tokenizer_module *pModule;  /* The module for this tokenizer */
+  /* Tokenizer implementations will typically add additional fields */
+};
+
+struct sqlite3_tokenizer_cursor {
+  sqlite3_tokenizer *pTokenizer;       /* Tokenizer for this cursor. */
+  /* Tokenizer implementations will typically add additional fields */
+};
+
+/*
+** Get the module for a tokenizer which generates tokens based on a
+** set of non-token characters.  The default is to break tokens at any
+** non-alnum character, though the set of delimiters can also be
+** specified by the first argv argument to xCreate().
+*/
+/* TODO(shess) This doesn't belong here.  Need some sort of
+** registration process.
+*/
+void sqlite3Fts2SimpleTokenizerModule(sqlite3_tokenizer_module const**ppModule);
+void sqlite3Fts2PorterTokenizerModule(sqlite3_tokenizer_module const**ppModule);
+
+#endif /* _FTS2_TOKENIZER_H_ */
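
A minimal sketch of driving this interface directly, using the simple tokenizer declared above. Error handling is omitted, and the assignments to the base fields are an assumption about the calling convention: the tokenizer implementations leave pModule and pTokenizer zeroed, so a caller such as the fts2 core is expected to fill them in.

#include <stdio.h>
#include "fts2_tokenizer.h"

static void dumpTokens(const char *zText){
  const sqlite3_tokenizer_module *pModule;
  sqlite3_tokenizer *pTok = 0;
  sqlite3_tokenizer_cursor *pCsr = 0;
  const char *zToken;
  int nToken, iStart, iEnd, iPos;

  sqlite3Fts2SimpleTokenizerModule(&pModule);
  pModule->xCreate(0, 0, &pTok);         /* argc==0: default delimiter set */
  pTok->pModule = pModule;
  pModule->xOpen(pTok, zText, -1, &pCsr);  /* nBytes<0: implementation uses strlen() */
  pCsr->pTokenizer = pTok;               /* xNext() reads this field */
  while( pModule->xNext(pCsr, &zToken, &nToken,
                        &iStart, &iEnd, &iPos)==SQLITE_OK ){
    printf("#%d %.*s [%d..%d]\n", iPos, nToken, zToken, iStart, iEnd);
  }
  pModule->xClose(pCsr);
  pModule->xDestroy(pTok);
}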

Added: freeswitch/trunk/libs/sqlite/ext/fts2/fts2_tokenizer1.c
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/ext/fts2/fts2_tokenizer1.c	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,220 @@
+/*
+** The author disclaims copyright to this source code.
+**
+*************************************************************************
+** Implementation of the "simple" full-text-search tokenizer.
+*/
+
+/*
+** The code in this file is only compiled if:
+**
+**     * The FTS2 module is being built as an extension
+**       (in which case SQLITE_CORE is not defined), or
+**
+**     * The FTS2 module is being built into the core of
+**       SQLite (in which case SQLITE_ENABLE_FTS2 is defined).
+*/
+#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS2)
+
+
+#include <assert.h>
+#if !defined(__APPLE__)
+#include <malloc.h>
+#else
+#include <stdlib.h>
+#endif
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+
+#include "fts2_tokenizer.h"
+
+typedef struct simple_tokenizer {
+  sqlite3_tokenizer base;
+  char delim[128];             /* flag ASCII delimiters */
+} simple_tokenizer;
+
+typedef struct simple_tokenizer_cursor {
+  sqlite3_tokenizer_cursor base;
+  const char *pInput;          /* input we are tokenizing */
+  int nBytes;                  /* size of the input */
+  int iOffset;                 /* current position in pInput */
+  int iToken;                  /* index of next token to be returned */
+  char *pToken;                /* storage for current token */
+  int nTokenAllocated;         /* space allocated to pToken buffer */
+} simple_tokenizer_cursor;
+
+
+/* Forward declaration */
+static const sqlite3_tokenizer_module simpleTokenizerModule;
+
+static int isDelim(simple_tokenizer *t, unsigned char c){
+  return c<0x80 && t->delim[c];
+}
+
+/*
+** Create a new tokenizer instance.
+*/
+static int simpleCreate(
+  int argc, const char * const *argv,
+  sqlite3_tokenizer **ppTokenizer
+){
+  simple_tokenizer *t;
+
+  t = (simple_tokenizer *) calloc(sizeof(simple_tokenizer), 1);
+  /* TODO(shess) Delimiters need to remain the same from run to run,
+  ** else we need to reindex.  One solution would be a meta-table to
+  ** track such information in the database, then we'd only want this
+  ** information on the initial create.
+  */
+  if( argc>1 ){
+    int i, n = strlen(argv[1]);
+    for(i=0; i<n; i++){
+      unsigned char ch = argv[1][i];
+      /* We explicitly don't support UTF-8 delimiters for now. */
+      if( ch>=0x80 ){
+        free(t);
+        return SQLITE_ERROR;
+      }
+      t->delim[ch] = 1;
+    }
+  } else {
+    /* Mark non-alphanumeric ASCII characters as delimiters */
+    int i;
+    for(i=1; i<0x80; i++){
+      t->delim[i] = !isalnum(i);
+    }
+  }
+
+  *ppTokenizer = &t->base;
+  return SQLITE_OK;
+}
+
+/*
+** Destroy a tokenizer
+*/
+static int simpleDestroy(sqlite3_tokenizer *pTokenizer){
+  free(pTokenizer);
+  return SQLITE_OK;
+}
+
+/*
+** Prepare to begin tokenizing a particular string.  The input
+** string to be tokenized is pInput[0..nBytes-1].  A cursor
+** used to incrementally tokenize this string is returned in 
+** *ppCursor.
+*/
+static int simpleOpen(
+  sqlite3_tokenizer *pTokenizer,         /* The tokenizer */
+  const char *pInput, int nBytes,        /* String to be tokenized */
+  sqlite3_tokenizer_cursor **ppCursor    /* OUT: Tokenization cursor */
+){
+  simple_tokenizer_cursor *c;
+
+  c = (simple_tokenizer_cursor *) malloc(sizeof(simple_tokenizer_cursor));
+  c->pInput = pInput;
+  if( pInput==0 ){
+    c->nBytes = 0;
+  }else if( nBytes<0 ){
+    c->nBytes = (int)strlen(pInput);
+  }else{
+    c->nBytes = nBytes;
+  }
+  c->iOffset = 0;                 /* start tokenizing at the beginning */
+  c->iToken = 0;
+  c->pToken = NULL;               /* no space allocated, yet. */
+  c->nTokenAllocated = 0;
+
+  *ppCursor = &c->base;
+  return SQLITE_OK;
+}
+
+/*
+** Close a tokenization cursor previously opened by a call to
+** simpleOpen() above.
+*/
+static int simpleClose(sqlite3_tokenizer_cursor *pCursor){
+  simple_tokenizer_cursor *c = (simple_tokenizer_cursor *) pCursor;
+  free(c->pToken);
+  free(c);
+  return SQLITE_OK;
+}
+
+/*
+** Extract the next token from a tokenization cursor.  The cursor must
+** have been opened by a prior call to simpleOpen().
+*/
+static int simpleNext(
+  sqlite3_tokenizer_cursor *pCursor,  /* Cursor returned by simpleOpen */
+  const char **ppToken,               /* OUT: *ppToken is the token text */
+  int *pnBytes,                       /* OUT: Number of bytes in token */
+  int *piStartOffset,                 /* OUT: Starting offset of token */
+  int *piEndOffset,                   /* OUT: Ending offset of token */
+  int *piPosition                     /* OUT: Position integer of token */
+){
+  simple_tokenizer_cursor *c = (simple_tokenizer_cursor *) pCursor;
+  simple_tokenizer *t = (simple_tokenizer *) pCursor->pTokenizer;
+  unsigned char *p = (unsigned char *)c->pInput;
+
+  while( c->iOffset<c->nBytes ){
+    int iStartOffset;
+
+    /* Scan past delimiter characters */
+    while( c->iOffset<c->nBytes && isDelim(t, p[c->iOffset]) ){
+      c->iOffset++;
+    }
+
+    /* Count non-delimiter characters. */
+    iStartOffset = c->iOffset;
+    while( c->iOffset<c->nBytes && !isDelim(t, p[c->iOffset]) ){
+      c->iOffset++;
+    }
+
+    if( c->iOffset>iStartOffset ){
+      int i, n = c->iOffset-iStartOffset;
+      if( n>c->nTokenAllocated ){
+        c->nTokenAllocated = n+20;
+        c->pToken = realloc(c->pToken, c->nTokenAllocated);
+      }
+      for(i=0; i<n; i++){
+        /* TODO(shess) This needs expansion to handle UTF-8
+        ** case-insensitivity.
+        */
+        unsigned char ch = p[iStartOffset+i];
+        c->pToken[i] = ch<0x80 ? tolower(ch) : ch;
+      }
+      *ppToken = c->pToken;
+      *pnBytes = n;
+      *piStartOffset = iStartOffset;
+      *piEndOffset = c->iOffset;
+      *piPosition = c->iToken++;
+
+      return SQLITE_OK;
+    }
+  }
+  return SQLITE_DONE;
+}
+
+/*
+** The set of routines that implement the simple tokenizer
+*/
+static const sqlite3_tokenizer_module simpleTokenizerModule = {
+  0,
+  simpleCreate,
+  simpleDestroy,
+  simpleOpen,
+  simpleClose,
+  simpleNext,
+};
+
+/*
+** Allocate a new simple tokenizer.  Return a pointer to the new
+** tokenizer in *ppModule
+*/
+void sqlite3Fts2SimpleTokenizerModule(
+  sqlite3_tokenizer_module const**ppModule
+){
+  *ppModule = &simpleTokenizerModule;
+}
+
+#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS2) */
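
To complement the driver sketched after fts2_tokenizer.h above, a hypothetical way to instantiate the simple tokenizer with an explicit delimiter set. Per the argc>1 branch of simpleCreate(), argv[1] supplies the delimiter characters; treating argv[0] as the tokenizer name is an assumption about how the fts2 core passes its arguments.

#include "fts2_tokenizer.h"

static int openSimpleWithDelims(sqlite3_tokenizer **ppTok){
  const sqlite3_tokenizer_module *pModule;
  /* argv[1] lists the ASCII characters to treat as delimiters */
  static const char *const azArg[] = { "simple", " .,;:-" };
  sqlite3Fts2SimpleTokenizerModule(&pModule);
  return pModule->xCreate(2, azArg, ppTok);
}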

Modified: freeswitch/trunk/libs/sqlite/src/btree.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/btree.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/btree.c	Thu Feb 22 17:09:42 2007
@@ -9,7 +9,7 @@
 **    May you share freely, never taking more than you give.
 **
 *************************************************************************
-** $Id: btree.c,v 1.328 2006/08/16 16:42:48 drh Exp $
+** $Id: btree.c,v 1.335 2007/02/10 19:22:36 drh Exp $
 **
 ** This file implements a external (disk-based) database using BTrees.
 ** For a detailed discussion of BTrees, refer to
@@ -421,7 +421,8 @@
 */
 #if SQLITE_TEST
 # define TRACE(X)   if( sqlite3_btree_trace )\
-                        { sqlite3DebugPrintf X; fflush(stdout); }
+/*                        { sqlite3DebugPrintf X; fflush(stdout); } */ \
+{ printf X; fflush(stdout); }
 int sqlite3_btree_trace=0;  /* True to enable tracing */
 #else
 # define TRACE(X)
@@ -1039,91 +1040,6 @@
 #endif
 
 
-/*
-** Do sanity checking on a page.  Throw an exception if anything is
-** not right.
-**
-** This routine is used for internal error checking only.  It is omitted
-** from most builds.
-*/
-#if defined(BTREE_DEBUG) && !defined(NDEBUG) && 0
-static void _pageIntegrity(MemPage *pPage){
-  int usableSize;
-  u8 *data;
-  int i, j, idx, c, pc, hdr, nFree;
-  int cellOffset;
-  int nCell, cellLimit;
-  u8 *used;
-
-  used = sqliteMallocRaw( pPage->pBt->pageSize );
-  if( used==0 ) return;
-  usableSize = pPage->pBt->usableSize;
-  assert( pPage->aData==&((unsigned char*)pPage)[-pPage->pBt->pageSize] );
-  hdr = pPage->hdrOffset;
-  assert( hdr==(pPage->pgno==1 ? 100 : 0) );
-  assert( pPage->pgno==sqlite3pager_pagenumber(pPage->aData) );
-  c = pPage->aData[hdr];
-  if( pPage->isInit ){
-    assert( pPage->leaf == ((c & PTF_LEAF)!=0) );
-    assert( pPage->zeroData == ((c & PTF_ZERODATA)!=0) );
-    assert( pPage->leafData == ((c & PTF_LEAFDATA)!=0) );
-    assert( pPage->intKey == ((c & (PTF_INTKEY|PTF_LEAFDATA))!=0) );
-    assert( pPage->hasData ==
-             !(pPage->zeroData || (!pPage->leaf && pPage->leafData)) );
-    assert( pPage->cellOffset==pPage->hdrOffset+12-4*pPage->leaf );
-    assert( pPage->nCell = get2byte(&pPage->aData[hdr+3]) );
-  }
-  data = pPage->aData;
-  memset(used, 0, usableSize);
-  for(i=0; i<hdr+10-pPage->leaf*4; i++) used[i] = 1;
-  nFree = 0;
-  pc = get2byte(&data[hdr+1]);
-  while( pc ){
-    int size;
-    assert( pc>0 && pc<usableSize-4 );
-    size = get2byte(&data[pc+2]);
-    assert( pc+size<=usableSize );
-    nFree += size;
-    for(i=pc; i<pc+size; i++){
-      assert( used[i]==0 );
-      used[i] = 1;
-    }
-    pc = get2byte(&data[pc]);
-  }
-  idx = 0;
-  nCell = get2byte(&data[hdr+3]);
-  cellLimit = get2byte(&data[hdr+5]);
-  assert( pPage->isInit==0 
-         || pPage->nFree==nFree+data[hdr+7]+cellLimit-(cellOffset+2*nCell) );
-  cellOffset = pPage->cellOffset;
-  for(i=0; i<nCell; i++){
-    int size;
-    pc = get2byte(&data[cellOffset+2*i]);
-    assert( pc>0 && pc<usableSize-4 );
-    size = cellSize(pPage, &data[pc]);
-    assert( pc+size<=usableSize );
-    for(j=pc; j<pc+size; j++){
-      assert( used[j]==0 );
-      used[j] = 1;
-    }
-  }
-  for(i=cellOffset+2*nCell; i<cellimit; i++){
-    assert( used[i]==0 );
-    used[i] = 1;
-  }
-  nFree = 0;
-  for(i=0; i<usableSize; i++){
-    assert( used[i]<=1 );
-    if( used[i]==0 ) nFree++;
-  }
-  assert( nFree==data[hdr+7] );
-  sqliteFree(used);
-}
-#define pageIntegrity(X) _pageIntegrity(X)
-#else
-# define pageIntegrity(X)
-#endif
-
 /* A bunch of assert() statements to check the transaction state variables
 ** of handle p (type Btree*) are internally consistent.
 */
@@ -1430,7 +1346,6 @@
   }
 
   pPage->isInit = 1;
-  pageIntegrity(pPage);
   return SQLITE_OK;
 }
 
@@ -1461,7 +1376,6 @@
   pPage->idxShift = 0;
   pPage->nCell = 0;
   pPage->isInit = 1;
-  pageIntegrity(pPage);
 }
 
 /*
@@ -1636,8 +1550,13 @@
     return SQLITE_NOMEM;
   }
   rc = sqlite3pager_open(&pBt->pPager, zFilename, EXTRA_SIZE, flags);
+  if( rc==SQLITE_OK ){
+    rc = sqlite3pager_read_fileheader(pBt->pPager,sizeof(zDbHeader),zDbHeader);
+  }
   if( rc!=SQLITE_OK ){
-    if( pBt->pPager ) sqlite3pager_close(pBt->pPager);
+    if( pBt->pPager ){
+      sqlite3pager_close(pBt->pPager);
+    }
     sqliteFree(pBt);
     sqliteFree(p);
     *ppBtree = 0;
@@ -1650,7 +1569,6 @@
   pBt->pCursor = 0;
   pBt->pPage1 = 0;
   pBt->readOnly = sqlite3pager_isreadonly(pBt->pPager);
-  sqlite3pager_read_fileheader(pBt->pPager, sizeof(zDbHeader), zDbHeader);
   pBt->pageSize = get2byte(&zDbHeader[16]);
   if( pBt->pageSize<512 || pBt->pageSize>SQLITE_MAX_PAGE_SIZE
        || ((pBt->pageSize-1)&pBt->pageSize)!=0 ){
@@ -2013,13 +1931,15 @@
 */
 static void unlockBtreeIfUnused(BtShared *pBt){
   if( pBt->inTransaction==TRANS_NONE && pBt->pCursor==0 && pBt->pPage1!=0 ){
-    if( pBt->pPage1->aData==0 ){
-      MemPage *pPage = pBt->pPage1;
-      pPage->aData = &((u8*)pPage)[-pBt->pageSize];
-      pPage->pBt = pBt;
-      pPage->pgno = 1;
+    if( sqlite3pager_refcount(pBt->pPager)>=1 ){
+      if( pBt->pPage1->aData==0 ){
+        MemPage *pPage = pBt->pPage1;
+        pPage->aData = &((u8*)pPage)[-pBt->pageSize];
+        pPage->pBt = pBt;
+        pPage->pgno = 1;
+      }
+      releasePage(pBt->pPage1);
     }
-    releasePage(pBt->pPage1);
     pBt->pPage1 = 0;
     pBt->inStmt = 0;
   }
@@ -2971,7 +2891,6 @@
   assert( pCur->eState==CURSOR_VALID );
   pBt = pCur->pBtree->pBt;
   pPage = pCur->pPage;
-  pageIntegrity(pPage);
   assert( pCur->idx>=0 && pCur->idx<pPage->nCell );
   getCellInfo(pCur);
   aPayload = pCur->info.pCell + pCur->info.nHeader;
@@ -3109,7 +3028,6 @@
   assert( pCur!=0 && pCur->pPage!=0 );
   assert( pCur->eState==CURSOR_VALID );
   pPage = pCur->pPage;
-  pageIntegrity(pPage);
   assert( pCur->idx>=0 && pCur->idx<pPage->nCell );
   getCellInfo(pCur);
   aPayload = pCur->info.pCell;
@@ -3171,7 +3089,6 @@
   assert( pCur->eState==CURSOR_VALID );
   rc = getAndInitPage(pBt, newPgno, &pNewPage, pCur->pPage);
   if( rc ) return rc;
-  pageIntegrity(pNewPage);
   pNewPage->idxParent = pCur->idx;
   pOldPage = pCur->pPage;
   pOldPage->idxShift = 0;
@@ -3219,10 +3136,8 @@
   pPage = pCur->pPage;
   assert( pPage!=0 );
   assert( !isRootPage(pPage) );
-  pageIntegrity(pPage);
   pParent = pPage->pParent;
   assert( pParent!=0 );
-  pageIntegrity(pParent);
   idxParent = pPage->idxParent;
   sqlite3pager_ref(pParent->aData);
   releasePage(pPage);
@@ -3252,7 +3167,6 @@
       return rc;
     }
     releasePage(pCur->pPage);
-    pageIntegrity(pRoot);
     pCur->pPage = pRoot;
   }
   pCur->idx = 0;
@@ -3396,7 +3310,7 @@
     assert( pCur->pPage->nCell==0 );
     return SQLITE_OK;
   }
-   for(;;){
+  for(;;){
     int lwr, upr;
     Pgno chldPg;
     MemPage *pPage = pCur->pPage;
@@ -3406,7 +3320,6 @@
     if( !pPage->intKey && pKey==0 ){
       return SQLITE_CORRUPT_BKPT;
     }
-    pageIntegrity(pPage);
     while( lwr<=upr ){
       void *pCellKey;
       i64 nCellKey;
@@ -3659,14 +3572,14 @@
   int rc;
   int n;     /* Number of pages on the freelist */
   int k;     /* Number of leaves on the trunk of the freelist */
+  MemPage *pTrunk = 0;
+  MemPage *pPrevTrunk = 0;
 
   pPage1 = pBt->pPage1;
   n = get4byte(&pPage1->aData[36]);
   if( n>0 ){
     /* There are pages on the freelist.  Reuse one of those pages. */
-    MemPage *pTrunk = 0;
     Pgno iTrunk;
-    MemPage *pPrevTrunk = 0;
     u8 searchList = 0; /* If the free-list must be searched for 'nearby' */
     
     /* If the 'exact' parameter was true and a query of the pointer-map
@@ -3707,16 +3620,8 @@
       }
       rc = getPage(pBt, iTrunk, &pTrunk);
       if( rc ){
-        releasePage(pPrevTrunk);
-        return rc;
-      }
-
-      /* TODO: This should move to after the loop? */
-      rc = sqlite3pager_write(pTrunk->aData);
-      if( rc ){
-        releasePage(pTrunk);
-        releasePage(pPrevTrunk);
-        return rc;
+        pTrunk = 0;
+        goto end_allocate_page;
       }
 
       k = get4byte(&pTrunk->aData[4]);
@@ -3725,6 +3630,10 @@
         ** So extract the trunk page itself and use it as the newly 
         ** allocated page */
         assert( pPrevTrunk==0 );
+        rc = sqlite3pager_write(pTrunk->aData);
+        if( rc ){
+          goto end_allocate_page;
+        }
         *pPgno = iTrunk;
         memcpy(&pPage1->aData[32], &pTrunk->aData[0], 4);
         *ppPage = pTrunk;
@@ -3732,7 +3641,8 @@
         TRACE(("ALLOCATE: %d trunk - %d free pages left\n", *pPgno, n-1));
       }else if( k>pBt->usableSize/4 - 8 ){
         /* Value of k is out of range.  Database corruption */
-        return SQLITE_CORRUPT_BKPT;
+        rc = SQLITE_CORRUPT_BKPT;
+        goto end_allocate_page;
 #ifndef SQLITE_OMIT_AUTOVACUUM
       }else if( searchList && nearby==iTrunk ){
         /* The list is being searched and this trunk page is the page
@@ -3741,6 +3651,10 @@
         assert( *pPgno==iTrunk );
         *ppPage = pTrunk;
         searchList = 0;
+        rc = sqlite3pager_write(pTrunk->aData);
+        if( rc ){
+          goto end_allocate_page;
+        }
         if( k==0 ){
           if( !pPrevTrunk ){
             memcpy(&pPage1->aData[32], &pTrunk->aData[0], 4);
@@ -3756,26 +3670,26 @@
           Pgno iNewTrunk = get4byte(&pTrunk->aData[8]);
           rc = getPage(pBt, iNewTrunk, &pNewTrunk);
           if( rc!=SQLITE_OK ){
-            releasePage(pTrunk);
-            releasePage(pPrevTrunk);
-            return rc;
+            goto end_allocate_page;
           }
           rc = sqlite3pager_write(pNewTrunk->aData);
           if( rc!=SQLITE_OK ){
             releasePage(pNewTrunk);
-            releasePage(pTrunk);
-            releasePage(pPrevTrunk);
-            return rc;
+            goto end_allocate_page;
           }
           memcpy(&pNewTrunk->aData[0], &pTrunk->aData[0], 4);
           put4byte(&pNewTrunk->aData[4], k-1);
           memcpy(&pNewTrunk->aData[8], &pTrunk->aData[12], (k-1)*4);
+          releasePage(pNewTrunk);
           if( !pPrevTrunk ){
             put4byte(&pPage1->aData[32], iNewTrunk);
           }else{
+            rc = sqlite3pager_write(pPrevTrunk->aData);
+            if( rc ){
+              goto end_allocate_page;
+            }
             put4byte(&pPrevTrunk->aData[0], iNewTrunk);
           }
-          releasePage(pNewTrunk);
         }
         pTrunk = 0;
         TRACE(("ALLOCATE: %d trunk - %d free pages left\n", *pPgno, n-1));
@@ -3785,6 +3699,10 @@
         int closest;
         Pgno iPage;
         unsigned char *aData = pTrunk->aData;
+        rc = sqlite3pager_write(aData);
+        if( rc ){
+          goto end_allocate_page;
+        }
         if( nearby>0 ){
           int i, dist;
           closest = 0;
@@ -3828,8 +3746,8 @@
         }
       }
       releasePage(pPrevTrunk);
+      pPrevTrunk = 0;
     }while( searchList );
-    releasePage(pTrunk);
   }else{
     /* There are no pages on the freelist, so create a new page at the
     ** end of the file */
@@ -3858,6 +3776,10 @@
   }
 
   assert( *pPgno!=PENDING_BYTE_PAGE(pBt) );
+
+end_allocate_page:
+  releasePage(pTrunk);
+  releasePage(pPrevTrunk);
   return rc;
 }
 
@@ -4258,7 +4180,6 @@
     put2byte(&data[ins], idx);
     put2byte(&data[hdr+3], pPage->nCell);
     pPage->idxShift = 1;
-    pageIntegrity(pPage);
 #ifndef SQLITE_OMIT_AUTOVACUUM
     if( pPage->pBt->autoVacuum ){
       /* The cell may contain a pointer to an overflow page. If so, write
@@ -4998,8 +4919,6 @@
   ** But the parent page will always be initialized.
   */
   assert( pParent->isInit );
-  /* assert( pPage->isInit ); // No! pPage might have been added to freelist */
-  /* pageIntegrity(pPage);    // No! pPage might have been added to freelist */ 
   rc = balance(pParent, 0);
   
   /*
@@ -5971,6 +5890,7 @@
 **   aResult[7] =  Header size in bytes
 **   aResult[8] =  Local payload size
 **   aResult[9] =  Parent page number
+**   aResult[10]=  Page number of the first overflow page
 **
 ** This routine is used for testing and debugging only.
 */
@@ -5984,14 +5904,12 @@
     return rc;
   }
 
-  pageIntegrity(pPage);
   assert( pPage->isInit );
   getTempCursor(pCur, &tmpCur);
   while( upCnt-- ){
     moveToParent(&tmpCur);
   }
   pPage = tmpCur.pPage;
-  pageIntegrity(pPage);
   aResult[0] = sqlite3pager_pagenumber(pPage->aData);
   assert( aResult[0]==pPage->pgno );
   aResult[1] = tmpCur.idx;
@@ -6021,6 +5939,11 @@
   }else{
     aResult[9] = pPage->pParent->pgno;
   }
+  if( tmpCur.info.iOverflow ){
+    aResult[10] = get4byte(&tmpCur.info.pCell[tmpCur.info.iOverflow]);
+  }else{
+    aResult[10] = 0;
+  }
   releaseTempCursor(&tmpCur);
   return SQLITE_OK;
 }
@@ -6041,10 +5964,12 @@
 typedef struct IntegrityCk IntegrityCk;
 struct IntegrityCk {
   BtShared *pBt;    /* The tree being checked out */
-  Pager *pPager; /* The associated pager.  Also accessible by pBt->pPager */
-  int nPage;     /* Number of pages in the database */
-  int *anRef;    /* Number of times each page is referenced */
-  char *zErrMsg; /* An error message.  NULL of no errors seen. */
+  Pager *pPager;    /* The associated pager.  Also accessible by pBt->pPager */
+  int nPage;        /* Number of pages in the database */
+  int *anRef;       /* Number of times each page is referenced */
+  int mxErr;        /* Stop accumulating errors when this reaches zero */
+  char *zErrMsg;    /* An error message.  NULL if no errors seen. */
+  int nErr;         /* Number of messages written to zErrMsg so far */
 };
 
 #ifndef SQLITE_OMIT_INTEGRITY_CHECK
@@ -6059,6 +5984,9 @@
 ){
   va_list ap;
   char *zMsg2;
+  if( !pCheck->mxErr ) return;
+  pCheck->mxErr--;
+  pCheck->nErr++;
   va_start(ap, zFormat);
   zMsg2 = sqlite3VMPrintf(zFormat, ap);
   va_end(ap);
@@ -6142,7 +6070,7 @@
   int i;
   int expected = N;
   int iFirst = iPage;
-  while( N-- > 0 ){
+  while( N-- > 0 && pCheck->mxErr ){
     unsigned char *pOvfl;
     if( iPage<1 ){
       checkAppendMsg(pCheck, zContext,
@@ -6254,7 +6182,7 @@
   /* Check out all the cells.
   */
   depth = 0;
-  for(i=0; i<pPage->nCell; i++){
+  for(i=0; i<pPage->nCell && pCheck->mxErr; i++){
     u8 *pCell;
     int sz;
     CellInfo info;
@@ -6369,7 +6297,13 @@
 ** and a pointer to that error message is returned.  The calling function
 ** is responsible for freeing the error message when it is done.
 */
-char *sqlite3BtreeIntegrityCheck(Btree *p, int *aRoot, int nRoot){
+char *sqlite3BtreeIntegrityCheck(
+  Btree *p,     /* The btree to be checked */
+  int *aRoot,   /* An array of root pages numbers for individual trees */
+  int nRoot,    /* Number of entries in aRoot[] */
+  int mxErr,    /* Stop reporting errors after this many */
+  int *pnErr    /* Write number of errors seen to this variable */
+){
   int i;
   int nRef;
   IntegrityCk sCheck;
@@ -6382,6 +6316,9 @@
   sCheck.pBt = pBt;
   sCheck.pPager = pBt->pPager;
   sCheck.nPage = sqlite3pager_pagecount(sCheck.pPager);
+  sCheck.mxErr = mxErr;
+  sCheck.nErr = 0;
+  *pnErr = 0;
   if( sCheck.nPage==0 ){
     unlockBtreeIfUnused(pBt);
     return 0;
@@ -6389,6 +6326,7 @@
   sCheck.anRef = sqliteMallocRaw( (sCheck.nPage+1)*sizeof(sCheck.anRef[0]) );
   if( !sCheck.anRef ){
     unlockBtreeIfUnused(pBt);
+    *pnErr = 1;
     return sqlite3MPrintf("Unable to malloc %d bytes", 
         (sCheck.nPage+1)*sizeof(sCheck.anRef[0]));
   }
@@ -6406,7 +6344,7 @@
 
   /* Check all the tables.
   */
-  for(i=0; i<nRoot; i++){
+  for(i=0; i<nRoot && sCheck.mxErr; i++){
     if( aRoot[i]==0 ) continue;
 #ifndef SQLITE_OMIT_AUTOVACUUM
     if( pBt->autoVacuum && aRoot[i]>1 ){
@@ -6418,7 +6356,7 @@
 
   /* Make sure every page in the file is referenced
   */
-  for(i=1; i<=sCheck.nPage; i++){
+  for(i=1; i<=sCheck.nPage && sCheck.mxErr; i++){
 #ifdef SQLITE_OMIT_AUTOVACUUM
     if( sCheck.anRef[i]==0 ){
       checkAppendMsg(&sCheck, 0, "Page %d is never used", i);
@@ -6451,6 +6389,7 @@
   /* Clean  up and report errors.
   */
   sqliteFree(sCheck.anRef);
+  *pnErr = sCheck.nErr;
   return sCheck.zErrMsg;
 }
 #endif /* SQLITE_OMIT_INTEGRITY_CHECK */
@@ -6509,7 +6448,6 @@
     rc = sqlite3pager_get(pBtFrom->pPager, i, &pPage);
     if( rc ) break;
     rc = sqlite3pager_overwrite(pBtTo->pPager, i, pPage);
-    if( rc ) break;
     sqlite3pager_unref(pPage);
   }
   for(i=nPage+1; rc==SQLITE_OK && i<=nToPage; i++){

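A sketch of a caller updated for the revised sqlite3BtreeIntegrityCheck() interface above: the new mxErr argument caps the number of messages accumulated, and *pnErr reports how many were generated. The helper name and the cap of 100 are illustrative.

#include <stdio.h>
#include "sqliteInt.h"
#include "btree.h"

static int checkTrees(Btree *p, int *aRoot, int nRoot){
  int nErr = 0;
  char *zReport = sqlite3BtreeIntegrityCheck(p, aRoot, nRoot, 100, &nErr);
  if( zReport ){
    fprintf(stderr, "%d integrity problem(s):\n%s\n", nErr, zReport);
    sqliteFree(zReport);   /* the caller owns the report string */
  }
  return nErr;
}
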
Modified: freeswitch/trunk/libs/sqlite/src/btree.h
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/btree.h	(original)
+++ freeswitch/trunk/libs/sqlite/src/btree.h	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 ** subsystem.  See comments in the source code for a detailed description
 ** of what each interface routine does.
 **
-** @(#) $Id: btree.h,v 1.71 2006/06/27 16:34:57 danielk1977 Exp $
+** @(#) $Id: btree.h,v 1.72 2007/01/27 02:24:55 drh Exp $
 */
 #ifndef _BTREE_H_
 #define _BTREE_H_
@@ -131,7 +131,7 @@
 int sqlite3BtreeDataSize(BtCursor*, u32 *pSize);
 int sqlite3BtreeData(BtCursor*, u32 offset, u32 amt, void*);
 
-char *sqlite3BtreeIntegrityCheck(Btree*, int *aRoot, int nRoot);
+char *sqlite3BtreeIntegrityCheck(Btree*, int *aRoot, int nRoot, int, int*);
 struct Pager *sqlite3BtreePager(Btree*);
 
 

Modified: freeswitch/trunk/libs/sqlite/src/build.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/build.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/build.c	Thu Feb 22 17:09:42 2007
@@ -22,7 +22,7 @@
 **     COMMIT
 **     ROLLBACK
 **
-** $Id: build.c,v 1.411 2006/09/11 23:45:49 drh Exp $
+** $Id: build.c,v 1.413 2007/02/02 12:44:37 drh Exp $
 */
 #include "sqliteInt.h"
 #include <ctype.h>
@@ -1222,6 +1222,10 @@
 ** If no versions of the requested collations sequence are available, or
 ** another error occurs, NULL is returned and an error message written into
 ** pParse.
+**
+** This routine is a wrapper around sqlite3FindCollSeq().  This routine
+** invokes the collation factory if the named collation cannot be found
+** and generates an error message.
 */
 CollSeq *sqlite3LocateCollSeq(Parse *pParse, const char *zName, int nName){
   sqlite3 *db = pParse->db;
@@ -2457,7 +2461,7 @@
     const char *zColName = pListItem->zName;
     Column *pTabCol;
     int requestedSortOrder;
-    char *zColl;                   /* Collation sequence */
+    char *zColl;                   /* Collation sequence name */
 
     for(j=0, pTabCol=pTab->aCol; j<pTab->nCol; j++, pTabCol++){
       if( sqlite3StrICmp(zColName, pTabCol->zName)==0 ) break;
@@ -2467,6 +2471,12 @@
         pTab->zName, zColName);
       goto exit_create_index;
     }
+    /* TODO:  Add a test to make sure that the same column is not named
+    ** more than once within the same index.  Only the first instance of
+    ** the column will ever be used by the optimizer.  Note that using the
+    ** same column more than once cannot be an error because that would 
+    ** break backwards compatibility - it needs to be a warning.
+    */
     pIndex->aiColumn[i] = j;
     if( pListItem->pExpr ){
       assert( pListItem->pExpr->pColl );
@@ -2941,15 +2951,6 @@
 }
 
 /*
-** Add an alias to the last identifier on the given identifier list.
-*/
-void sqlite3SrcListAddAlias(SrcList *pList, Token *pToken){
-  if( pList && pList->nSrc>0 ){
-    pList->a[pList->nSrc-1].zAlias = sqlite3NameFromToken(pToken);
-  }
-}
-
-/*
 ** Delete an entire SrcList including all its substructure.
 */
 void sqlite3SrcListDelete(SrcList *pList){
@@ -2969,6 +2970,74 @@
 }
 
 /*
+** This routine is called by the parser to add a new term to the
+** end of a growing FROM clause.  The "p" parameter is the part of
+** the FROM clause that has already been constructed.  "p" is NULL
+** if this is the first term of the FROM clause.  pTable and pDatabase
+** are the name of the table and database named in the FROM clause term.
+** pDatabase is NULL if the database name qualifier is missing - the
+** usual case.  If the term has an alias, then pAlias points to the
+** alias token.  If the term is a subquery, then pSubquery is the
+** SELECT statement that the subquery encodes.  The pTable and
+** pDatabase parameters are NULL for subqueries.  The pOn and pUsing
+** parameters are the content of the ON and USING clauses.
+**
+** Return a new SrcList which encodes the FROM clause with the new
+** term added.
+*/
+SrcList *sqlite3SrcListAppendFromTerm(
+  SrcList *p,             /* The left part of the FROM clause already seen */
+  Token *pTable,          /* Name of the table to add to the FROM clause */
+  Token *pDatabase,       /* Name of the database containing pTable */
+  Token *pAlias,          /* The right-hand side of the AS subexpression */
+  Select *pSubquery,      /* A subquery used in place of a table name */
+  Expr *pOn,              /* The ON clause of a join */
+  IdList *pUsing          /* The USING clause of a join */
+){
+  struct SrcList_item *pItem;
+  p = sqlite3SrcListAppend(p, pTable, pDatabase);
+  if( p==0 || p->nSrc==0 ){
+    sqlite3ExprDelete(pOn);
+    sqlite3IdListDelete(pUsing);
+    sqlite3SelectDelete(pSubquery);
+    return p;
+  }
+  pItem = &p->a[p->nSrc-1];
+  if( pAlias && pAlias->n ){
+    pItem->zAlias = sqlite3NameFromToken(pAlias);
+  }
+  pItem->pSelect = pSubquery;
+  pItem->pOn = pOn;
+  pItem->pUsing = pUsing;
+  return p;
+}
+
+/*
+** When building up a FROM clause in the parser, the join operator
+** is initially attached to the left operand.  But the code generator
+** expects the join operator to be on the right operand.  This routine
+** shifts all join operators from left to right for an entire FROM
+** clause.
+**
+** Example: Suppose the join is like this:
+**
+**           A natural cross join B
+**
+** The operator is "natural cross join".  The A and B operands are stored
+** in p->a[0] and p->a[1], respectively.  The parser initially stores the
+** operator with A.  This routine shifts that operator over to B.
+*/
+void sqlite3SrcListShiftJoinType(SrcList *p){
+  if( p && p->a ){
+    int i;
+    for(i=p->nSrc-1; i>0; i--){
+      p->a[i].jointype = p->a[i-1].jointype;
+    }
+    p->a[0].jointype = 0;
+  }
+}
+
+/*
 ** Begin a transaction
 */
 void sqlite3BeginTransaction(Parse *pParse, int type){

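A hypothetical illustration of the two new FROM-clause helpers, building the "A natural cross join B" case from the comment above the way the parser would: append each term, attach the join type to the left operand, then shift it onto the right one. The Token initialization and the JT_NATURAL|JT_CROSS value stand in for what the grammar actions and sqlite3JoinType() normally compute.

#include <string.h>
#include "sqliteInt.h"

static SrcList *fromNaturalCrossJoin(void){
  Token tA, tB;
  SrcList *p;
  memset(&tA, 0, sizeof(tA)); tA.z = (u8*)"A"; tA.n = 1;
  memset(&tB, 0, sizeof(tB)); tB.z = (u8*)"B"; tB.n = 1;
  p = sqlite3SrcListAppendFromTerm(0, &tA, 0, 0, 0, 0, 0);
  if( p ) p->a[0].jointype = JT_NATURAL|JT_CROSS;  /* parser puts it on A */
  p = sqlite3SrcListAppendFromTerm(p, &tB, 0, 0, 0, 0, 0);
  sqlite3SrcListShiftJoinType(p);                  /* move it onto B */
  return p;
}
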
Modified: freeswitch/trunk/libs/sqlite/src/callback.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/callback.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/callback.c	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 ** This file contains functions used to access the internal hash tables
 ** of user defined functions and collation sequences.
 **
-** $Id: callback.c,v 1.15 2006/05/24 12:43:27 drh Exp $
+** $Id: callback.c,v 1.16 2007/02/02 12:44:37 drh Exp $
 */
 
 #include "sqliteInt.h"
@@ -195,6 +195,11 @@
 **
 ** If the entry specified is not found and 'create' is true, then create a
 ** new entry.  Otherwise return NULL.
+**
+** A separate function sqlite3LocateCollSeq() is a wrapper around
+** this routine.  sqlite3LocateCollSeq() invokes the collation factory
+** if necessary and generates an error message if the collating sequence
+** cannot be found.
 */
 CollSeq *sqlite3FindCollSeq(
   sqlite3 *db,

Modified: freeswitch/trunk/libs/sqlite/src/date.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/date.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/date.c	Thu Feb 22 17:09:42 2007
@@ -16,7 +16,7 @@
 ** sqlite3RegisterDateTimeFunctions() found at the bottom of the file.
 ** All other code has file scope.
 **
-** $Id: date.c,v 1.58 2006/09/25 18:05:04 drh Exp $
+** $Id: date.c,v 1.60 2007/01/08 16:19:07 drh Exp $
 **
 ** NOTES:
 **
@@ -840,7 +840,7 @@
           y.M = 1;
           y.D = 1;
           computeJD(&y);
-          nDay = x.rJD - y.rJD;
+          nDay = x.rJD - y.rJD + 0.5;
           if( zFmt[i]=='W' ){
             int wd;   /* 0=Monday, 1=Tuesday, ... 6=Sunday */
             wd = ((int)(x.rJD+0.5)) % 7;
@@ -860,7 +860,7 @@
           j += strlen(&z[j]);
           break;
         }
-        case 'S':  sprintf(&z[j],"%02d",(int)(x.s+0.5)); j+=2; break;
+        case 'S':  sprintf(&z[j],"%02d",(int)x.s); j+=2; break;
         case 'w':  z[j++] = (((int)(x.rJD+1.5)) % 7) + '0'; break;
         case 'Y':  sprintf(&z[j],"%04d",x.Y); j+=strlen(&z[j]); break;
         case '%':  z[j++] = '%'; break;

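A small driver, illustrative only, for the two strftime() fixes above: %j now guards the day-of-year computation against floating-point rounding, and %S truncates fractional seconds instead of rounding them up.

#include <stdio.h>
#include "sqlite3.h"

static int show(void *p, int n, char **az, char **azCol){
  printf("%%j=%s  %%S=%s\n", az[0], az[1]);
  return 0;
}

int main(void){
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  /* expect 365 for the last day of a non-leap year, and 59 (not 60) */
  sqlite3_exec(db,
      "SELECT strftime('%j','2006-12-31'),"
      "       strftime('%S','2006-01-01 00:00:59.7');",
      show, 0, 0);
  sqlite3_close(db);
  return 0;
}
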
Modified: freeswitch/trunk/libs/sqlite/src/delete.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/delete.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/delete.c	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 ** This file contains C code routines that are called by the parser
 ** in order to generate code for DELETE FROM statements.
 **
-** $Id: delete.c,v 1.127 2006/06/19 03:05:10 danielk1977 Exp $
+** $Id: delete.c,v 1.128 2007/02/07 01:06:53 drh Exp $
 */
 #include "sqliteInt.h"
 
@@ -106,7 +106,8 @@
   AuthContext sContext;  /* Authorization context */
   int oldIdx = -1;       /* Cursor for the OLD table of AFTER triggers */
   NameContext sNC;       /* Name context to resolve expressions in */
-  int iDb;
+  int iDb;               /* Database number */
+  int memCnt = 0;        /* Memory cell used for change counting */
 
 #ifndef SQLITE_OMIT_TRIGGER
   int isView;                  /* True if attempting to delete from a view */
@@ -204,7 +205,8 @@
   ** we are counting rows.
   */
   if( db->flags & SQLITE_CountRows ){
-    sqlite3VdbeAddOp(v, OP_Integer, 0, 0);
+    memCnt = pParse->nMem++;
+    sqlite3VdbeAddOp(v, OP_MemInt, 0, memCnt);
   }
 
   /* Special case: A DELETE without a WHERE clause deletes everything.
@@ -221,7 +223,7 @@
         sqlite3OpenTable(pParse, iCur, iDb, pTab, OP_OpenRead);
       }
       sqlite3VdbeAddOp(v, OP_Rewind, iCur, sqlite3VdbeCurrentAddr(v)+2);
-      addr2 = sqlite3VdbeAddOp(v, OP_AddImm, 1, 0);
+      addr2 = sqlite3VdbeAddOp(v, OP_MemIncr, 1, memCnt);
       sqlite3VdbeAddOp(v, OP_Next, iCur, addr2);
       sqlite3VdbeResolveLabel(v, endOfLoop);
       sqlite3VdbeAddOp(v, OP_Close, iCur, 0);
@@ -251,7 +253,7 @@
     sqlite3VdbeAddOp(v, IsVirtual(pTab) ? OP_VRowid : OP_Rowid, iCur, 0);
     sqlite3VdbeAddOp(v, OP_FifoWrite, 0, 0);
     if( db->flags & SQLITE_CountRows ){
-      sqlite3VdbeAddOp(v, OP_AddImm, 1, 0);
+      sqlite3VdbeAddOp(v, OP_MemIncr, 1, memCnt);
     }
 
     /* End the database scan loop.
@@ -354,6 +356,7 @@
   ** invoke the callback function.
   */
   if( db->flags & SQLITE_CountRows && pParse->nested==0 && !pParse->trigStack ){
+    sqlite3VdbeAddOp(v, OP_MemLoad, memCnt, 0);
     sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
     sqlite3VdbeSetNumCols(v, 1);
     sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "rows deleted", P3_STATIC);

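For context, a hypothetical program observing the row counter that the memCnt cell above feeds: with count_changes enabled, a DELETE returns one row whose single column is named "rows deleted".

#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db,
      "PRAGMA count_changes=ON;"
      "CREATE TABLE t(x);"
      "INSERT INTO t VALUES(1); INSERT INTO t VALUES(2);",
      0, 0, 0);
  sqlite3_prepare(db, "DELETE FROM t WHERE x>0", -1, &pStmt, 0);
  if( sqlite3_step(pStmt)==SQLITE_ROW ){
    printf("%s = %d\n", sqlite3_column_name(pStmt, 0),
                        sqlite3_column_int(pStmt, 0));   /* expect 2 */
  }
  sqlite3_finalize(pStmt);
  sqlite3_close(db);
  return 0;
}
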
Modified: freeswitch/trunk/libs/sqlite/src/expr.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/expr.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/expr.c	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 ** This file contains routines used for analyzing expressions and
 ** for generating VDBE code that evaluates expressions in SQLite.
 **
-** $Id: expr.c,v 1.268 2006/08/24 15:18:25 drh Exp $
+** $Id: expr.c,v 1.275 2007/02/07 13:09:46 drh Exp $
 */
 #include "sqliteInt.h"
 #include <ctype.h>
@@ -50,6 +50,24 @@
 }
 
 /*
+** Set the collating sequence for expression pExpr to be the collating
+** sequence named by pToken.   Return a pointer to the revised expression.
+** The collating sequence is marked as "explicit" using the EP_ExpCollate
+** flag.  An explicit collating sequence will override implicit
+** collating sequences.
+*/
+Expr *sqlite3ExprSetColl(Parse *pParse, Expr *pExpr, Token *pName){
+  CollSeq *pColl;
+  if( pExpr==0 ) return 0;
+  pColl = sqlite3LocateCollSeq(pParse, (char*)pName->z, pName->n);
+  if( pColl ){
+    pExpr->pColl = pColl;
+    pExpr->flags |= EP_ExpCollate;
+  }
+  return pExpr;
+}
+
+/*
 ** Return the default collation sequence for the expression pExpr. If
 ** there is no default collation type, return 0.
 */
@@ -158,9 +176,20 @@
 ** type.
 */
 static CollSeq* binaryCompareCollSeq(Parse *pParse, Expr *pLeft, Expr *pRight){
-  CollSeq *pColl = sqlite3ExprCollSeq(pParse, pLeft);
-  if( !pColl ){
-    pColl = sqlite3ExprCollSeq(pParse, pRight);
+  CollSeq *pColl;
+  assert( pLeft );
+  assert( pRight );
+  if( pLeft->flags & EP_ExpCollate ){
+    assert( pLeft->pColl );
+    pColl = pLeft->pColl;
+  }else if( pRight->flags & EP_ExpCollate ){
+    assert( pRight->pColl );
+    pColl = pRight->pColl;
+  }else{
+    pColl = sqlite3ExprCollSeq(pParse, pLeft);
+    if( !pColl ){
+      pColl = sqlite3ExprCollSeq(pParse, pRight);
+    }
   }
   return pColl;
 }
@@ -205,8 +234,18 @@
   if( pToken ){
     assert( pToken->dyn==0 );
     pNew->span = pNew->token = *pToken;
-  }else if( pLeft && pRight ){
-    sqlite3ExprSpan(pNew, &pLeft->span, &pRight->span);
+  }else if( pLeft ){
+    if( pRight ){
+      sqlite3ExprSpan(pNew, &pLeft->span, &pRight->span);
+      if( pRight->flags & EP_ExpCollate ){
+        pNew->flags |= EP_ExpCollate;
+        pNew->pColl = pRight->pColl;
+      }
+    }
+    if( pLeft->flags & EP_ExpCollate ){
+      pNew->flags |= EP_ExpCollate;
+      pNew->pColl = pLeft->pColl;
+    }
   }
   return pNew;
 }
@@ -890,23 +929,26 @@
             /* Substitute the rowid (column -1) for the INTEGER PRIMARY KEY */
             pExpr->iColumn = j==pTab->iPKey ? -1 : j;
             pExpr->affinity = pTab->aCol[j].affinity;
-            pExpr->pColl = sqlite3FindCollSeq(db, ENC(db), zColl,-1, 0);
-            if( pItem->jointype & JT_NATURAL ){
-              /* If this match occurred in the left table of a natural join,
-              ** then skip the right table to avoid a duplicate match */
-              pItem++;
-              i++;
+            if( (pExpr->flags & EP_ExpCollate)==0 ){
+              pExpr->pColl = sqlite3FindCollSeq(db, ENC(db), zColl,-1, 0);
             }
-            if( (pUsing = pItem->pUsing)!=0 ){
-              /* If this match occurs on a column that is in the USING clause
-              ** of a join, skip the search of the right table of the join
-              ** to avoid a duplicate match there. */
-              int k;
-              for(k=0; k<pUsing->nId; k++){
-                if( sqlite3StrICmp(pUsing->a[k].zName, zCol)==0 ){
-                  pItem++;
-                  i++;
-                  break;
+            if( i<pSrcList->nSrc-1 ){
+              if( pItem[1].jointype & JT_NATURAL ){
+                /* If this match occurred in the left table of a natural join,
+                ** then skip the right table to avoid a duplicate match */
+                pItem++;
+                i++;
+              }else if( (pUsing = pItem[1].pUsing)!=0 ){
+                /* If this match occurs on a column that is in the USING clause
+                ** of a join, skip the search of the right table of the join
+                ** to avoid a duplicate match there. */
+                int k;
+                for(k=0; k<pUsing->nId; k++){
+                  if( sqlite3StrICmp(pUsing->a[k].zName, zCol)==0 ){
+                    pItem++;
+                    i++;
+                    break;
+                  }
                 }
               }
             }
@@ -945,7 +987,9 @@
             cnt++;
             pExpr->iColumn = iCol==pTab->iPKey ? -1 : iCol;
             pExpr->affinity = pTab->aCol[iCol].affinity;
-            pExpr->pColl = sqlite3FindCollSeq(db, ENC(db), zColl,-1, 0);
+            if( (pExpr->flags & EP_ExpCollate)==0 ){
+              pExpr->pColl = sqlite3FindCollSeq(db, ENC(db), zColl,-1, 0);
+            }
             pExpr->pTab = pTab;
             break;
           }
@@ -1045,7 +1089,7 @@
       n = sizeof(Bitmask)*8-1;
     }
     assert( pMatch->iCursor==pExpr->iTable );
-    pMatch->colUsed |= 1<<n;
+    pMatch->colUsed |= ((Bitmask)1)<<n;
   }
 
 lookupname_end:
@@ -1180,7 +1224,7 @@
       }else{
         is_agg = pDef->xFunc==0;
       }
-#ifndef SQLITE_OMIT_AUTHORIZER
+#ifndef SQLITE_OMIT_AUTHORIZATION
       if( pDef ){
         auth = sqlite3AuthCheck(pParse, SQLITE_FUNCTION, 0, pDef->zName, 0);
         if( auth!=SQLITE_OK ){
@@ -2207,6 +2251,7 @@
   
 
   switch( pExpr->op ){
+    case TK_AGG_COLUMN:
     case TK_COLUMN: {
       /* Check to see if the column is in one of the tables in the FROM
       ** clause of the aggregate query */

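A short illustration of the EP_ExpCollate behaviour introduced above: an explicit COLLATE on either side of a comparison now takes precedence over the operands' implicit collating sequences. Table and data are made up.

#include <stdio.h>
#include "sqlite3.h"

static int show(void *p, int n, char **az, char **azCol){
  printf("matches = %s\n", az[0]);   /* expect 1 */
  return 0;
}

int main(void){
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db,
      "CREATE TABLE t(a TEXT COLLATE BINARY);"
      "INSERT INTO t VALUES('ABC');"
      /* implicit BINARY would reject this; explicit NOCASE wins */
      "SELECT count(*) FROM t WHERE a = 'abc' COLLATE NOCASE;",
      show, 0, 0);
  sqlite3_close(db);
  return 0;
}
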
Modified: freeswitch/trunk/libs/sqlite/src/func.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/func.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/func.c	Thu Feb 22 17:09:42 2007
@@ -16,7 +16,7 @@
 ** sqliteRegisterBuildinFunctions() found at the bottom of the file.
 ** All other code has file scope.
 **
-** $Id: func.c,v 1.134 2006/09/16 21:45:14 drh Exp $
+** $Id: func.c,v 1.136 2007/01/29 17:58:28 drh Exp $
 */
 #include "sqliteInt.h"
 #include <ctype.h>
@@ -273,6 +273,25 @@
 }
 
 /*
+** Implementation of randomblob(N).  Return a random blob
+** that is N bytes long.
+*/
+static void randomBlob(
+  sqlite3_context *context,
+  int argc,
+  sqlite3_value **argv
+){
+  int n;
+  unsigned char *p;
+  assert( argc==1 );
+  n = sqlite3_value_int(argv[0]);
+  if( n<1 ) n = 1;
+  p = sqlite3_malloc(n);
+  sqlite3Randomness(n, p);
+  sqlite3_result_blob(context, (char*)p, n, sqlite3_free);
+}
+
+/*
 ** Implementation of the last_insert_rowid() SQL function.  The return
 ** value is the same as the sqlite3_last_insert_rowid() API function.
 */
@@ -548,6 +567,12 @@
   sqlite3_result_text(context, sqlite3_version, -1, SQLITE_STATIC);
 }
 
+/* Array for converting from half-bytes (nybbles) into ASCII hex
+** digits. */
+static const char hexdigits[] = {
+  '0', '1', '2', '3', '4', '5', '6', '7',
+  '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' 
+};
 
 /*
 ** EXPERIMENTAL - This is not an official function.  The interface may
@@ -573,10 +598,6 @@
       break;
     }
     case SQLITE_BLOB: {
-      static const char hexdigits[] = { 
-        '0', '1', '2', '3', '4', '5', '6', '7',
-        '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' 
-      };
       char *zText = 0;
       int nBlob = sqlite3_value_bytes(argv[0]);
       char const *zBlob = sqlite3_value_blob(argv[0]);
@@ -622,11 +643,41 @@
   }
 }
 
+/*
+** The hex() function.  Interpret the argument as a blob.  Return
+** a hexadecimal rendering as text.
+*/
+static void hexFunc(
+  sqlite3_context *context,
+  int argc,
+  sqlite3_value **argv
+){
+  int i, n;
+  const unsigned char *pBlob;
+  char *zHex, *z;
+  assert( argc==1 );
+  pBlob = sqlite3_value_blob(argv[0]);
+  n = sqlite3_value_bytes(argv[0]);
+  z = zHex = sqlite3_malloc(n*2 + 1);
+  if( zHex==0 ) return;
+  for(i=0; i<n; i++, pBlob++){
+    unsigned char c = *pBlob;
+    *(z++) = hexdigits[(c>>4)&0xf];
+    *(z++) = hexdigits[c&0xf];
+  }
+  *z = 0;
+  sqlite3_result_text(context, zHex, n*2, sqlite3_free);
+}
+
 #ifdef SQLITE_SOUNDEX
 /*
 ** Compute the soundex encoding of a word.
 */
-static void soundexFunc(sqlite3_context *context, int argc, sqlite3_value **argv){
+static void soundexFunc(
+  sqlite3_context *context,
+  int argc,
+  sqlite3_value **argv
+){
   char zResult[8];
   const u8 *zIn;
   int i, j;
@@ -1022,8 +1073,10 @@
     { "coalesce",          -1, 0, SQLITE_UTF8,    0, ifnullFunc },
     { "coalesce",           0, 0, SQLITE_UTF8,    0, 0          },
     { "coalesce",           1, 0, SQLITE_UTF8,    0, 0          },
+    { "hex",                1, 0, SQLITE_UTF8,    0, hexFunc    },
     { "ifnull",             2, 0, SQLITE_UTF8,    1, ifnullFunc },
     { "random",            -1, 0, SQLITE_UTF8,    0, randomFunc },
+    { "randomblob",         1, 0, SQLITE_UTF8,    0, randomBlob },
     { "nullif",             2, 0, SQLITE_UTF8,    1, nullifFunc },
     { "sqlite_version",     0, 0, SQLITE_UTF8,    0, versionFunc},
     { "quote",              1, 0, SQLITE_UTF8,    0, quoteFunc  },

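The two new built-ins registered above compose naturally; for example, a random 32-character hexadecimal identifier (illustrative driver only):

#include <stdio.h>
#include "sqlite3.h"

static int show(void *p, int n, char **az, char **azCol){
  printf("id = %s\n", az[0]);
  return 0;
}

int main(void){
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  /* 16 random bytes rendered as 32 hex digits */
  sqlite3_exec(db, "SELECT hex(randomblob(16));", show, 0, 0);
  sqlite3_close(db);
  return 0;
}
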
Modified: freeswitch/trunk/libs/sqlite/src/loadext.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/loadext.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/loadext.c	Thu Feb 22 17:09:42 2007
@@ -75,6 +75,20 @@
 # define sqlite3_declare_vtab 0
 #endif
 
+#ifdef SQLITE_OMIT_SHARED_CACHE
+# define sqlite3_enable_shared_cache 0
+#endif
+
+#ifdef SQLITE_OMIT_TRACE
+# define sqlite3_profile       0
+# define sqlite3_trace         0
+#endif
+
+#ifdef SQLITE_OMIT_GET_TABLE
+# define sqlite3_free_table    0
+# define sqlite3_get_table     0
+#endif
+
 /*
 ** The following structure contains pointers to all SQLite API routines.
 ** A pointer to this structure is passed into extensions when they are
@@ -154,7 +168,7 @@
   sqlite3_get_autocommit,
   sqlite3_get_auxdata,
   sqlite3_get_table,
-  sqlite3_global_recover,
+  0,     /* Was sqlite3_global_recover(), but that function is deprecated */
   sqlite3_interrupt,
   sqlite3_last_insert_rowid,
   sqlite3_libversion,
@@ -218,28 +232,6 @@
 };
 
 /*
-** The windows implementation of shared-library loaders
-*/
-#if defined(_WIN32) || defined(WIN32) || defined(__MINGW32__) || defined(__BORLANDC__)
-# include <windows.h>
-# define SQLITE_LIBRARY_TYPE     HANDLE
-# define SQLITE_OPEN_LIBRARY(A)  LoadLibrary(A)
-# define SQLITE_FIND_SYMBOL(A,B) GetProcAddress(A,B)
-# define SQLITE_CLOSE_LIBRARY(A) FreeLibrary(A)
-#endif /* windows */
-
-/*
-** The unix implementation of shared-library loaders
-*/
-#if defined(HAVE_DLOPEN) && !defined(SQLITE_LIBRARY_TYPE)
-# include <dlfcn.h>
-# define SQLITE_LIBRARY_TYPE     void*
-# define SQLITE_OPEN_LIBRARY(A)  dlopen(A, RTLD_NOW | RTLD_GLOBAL)
-# define SQLITE_FIND_SYMBOL(A,B) dlsym(A,B)
-# define SQLITE_CLOSE_LIBRARY(A) dlclose(A)
-#endif
-
-/*
 ** Attempt to load an SQLite extension library contained in the file
 ** zFile.  The entry point is zProc.  zProc may be 0 in which case a
 ** default entry point name (sqlite3_extension_init) is used.  Use
@@ -257,11 +249,10 @@
   const char *zProc,    /* Entry point.  Use "sqlite3_extension_init" if 0 */
   char **pzErrMsg       /* Put error message here if not 0 */
 ){
-#ifdef SQLITE_LIBRARY_TYPE
-  SQLITE_LIBRARY_TYPE handle;
+  void *handle;
   int (*xInit)(sqlite3*,char**,const sqlite3_api_routines*);
   char *zErrmsg = 0;
-  SQLITE_LIBRARY_TYPE *aHandle;
+  void **aHandle;
 
   /* Ticket #1863.  To avoid a creating security problems for older
   ** applications that relink against newer versions of SQLite, the
@@ -280,7 +271,7 @@
     zProc = "sqlite3_extension_init";
   }
 
-  handle = SQLITE_OPEN_LIBRARY(zFile);
+  handle = sqlite3OsDlopen(zFile);
   if( handle==0 ){
     if( pzErrMsg ){
       *pzErrMsg = sqlite3_mprintf("unable to open shared library [%s]", zFile);
@@ -288,20 +279,20 @@
     return SQLITE_ERROR;
   }
   xInit = (int(*)(sqlite3*,char**,const sqlite3_api_routines*))
-                   SQLITE_FIND_SYMBOL(handle, zProc);
+                   sqlite3OsDlsym(handle, zProc);
   if( xInit==0 ){
     if( pzErrMsg ){
        *pzErrMsg = sqlite3_mprintf("no entry point [%s] in shared library [%s]",
                                    zProc, zFile);
     }
-    SQLITE_CLOSE_LIBRARY(handle);
+    sqlite3OsDlclose(handle);
     return SQLITE_ERROR;
   }else if( xInit(db, &zErrmsg, &sqlite3_apis) ){
     if( pzErrMsg ){
       *pzErrMsg = sqlite3_mprintf("error during initialization: %s", zErrmsg);
     }
     sqlite3_free(zErrmsg);
-    SQLITE_CLOSE_LIBRARY(handle);
+    sqlite3OsDlclose(handle);
     return SQLITE_ERROR;
   }
 
@@ -317,14 +308,8 @@
   sqliteFree(db->aExtension);
   db->aExtension = aHandle;
 
-  ((SQLITE_LIBRARY_TYPE*)db->aExtension)[db->nExtension-1] = handle;
+  db->aExtension[db->nExtension-1] = handle;
   return SQLITE_OK;
-#else
-  if( pzErrMsg ){
-    *pzErrMsg = sqlite3_mprintf("extension loading is disabled");
-  }
-  return SQLITE_ERROR;
-#endif
 }
 
 /*
@@ -332,13 +317,11 @@
 ** to clean up loaded extensions
 */
 void sqlite3CloseExtensions(sqlite3 *db){
-#ifdef SQLITE_LIBRARY_TYPE
   int i;
   for(i=0; i<db->nExtension; i++){
-    SQLITE_CLOSE_LIBRARY(((SQLITE_LIBRARY_TYPE*)db->aExtension)[i]);
+    sqlite3OsDlclose(db->aExtension[i]);
   }
   sqliteFree(db->aExtension);
-#endif
 }
 
 /*

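The loader now goes through the sqlite3OsDlopen()/Dlsym()/Dlclose() hooks rather than per-platform macros; from the application side the public API is unchanged. A minimal sketch follows (the library path is made up, and loading must be enabled first because of the ticket #1863 default):

#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3 *db;
  char *zErr = 0;
  sqlite3_open(":memory:", &db);
  sqlite3_enable_load_extension(db, 1);   /* off by default */
  if( sqlite3_load_extension(db, "./libmyext.so", 0, &zErr)!=SQLITE_OK ){
    fprintf(stderr, "load failed: %s\n", zErr ? zErr : "(no message)");
    sqlite3_free(zErr);
  }
  sqlite3_close(db);
  return 0;
}
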
Modified: freeswitch/trunk/libs/sqlite/src/main.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/main.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/main.c	Thu Feb 22 17:09:42 2007
@@ -14,7 +14,7 @@
 ** other files are for internal use by SQLite and should not be
 ** accessed by users of the library.
 **
-** $Id: main.c,v 1.358 2006/09/16 21:45:14 drh Exp $
+** $Id: main.c,v 1.360 2006/12/19 18:57:11 drh Exp $
 */
 #include "sqliteInt.h"
 #include "os.h"
@@ -942,7 +942,7 @@
   /* Load automatic extensions - extensions that have been registered
   ** using the sqlite3_automatic_extension() API.
   */
-  sqlite3AutoLoadExtensions(db);
+  (void)sqlite3AutoLoadExtensions(db);
 
 #ifdef SQLITE_ENABLE_FTS1
   {
@@ -951,6 +951,13 @@
   }
 #endif
 
+#ifdef SQLITE_ENABLE_FTS2
+  {
+    extern int sqlite3Fts2Init(sqlite3*);
+    sqlite3Fts2Init(db);
+  }
+#endif
+
 opendb_out:
   if( SQLITE_NOMEM==(rc = sqlite3_errcode(db)) ){
     sqlite3_close(db);

Modified: freeswitch/trunk/libs/sqlite/src/os.h
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/os.h	(original)
+++ freeswitch/trunk/libs/sqlite/src/os.h	Thu Feb 22 17:09:42 2007
@@ -81,9 +81,21 @@
 ** prefix to reflect your program's name, so that if your program exits
 ** prematurely, old temporary files can be easily identified. This can be done
 ** using -DTEMP_FILE_PREFIX=myprefix_ on the compiler command line.
+**
+** 2006-10-31:  The default prefix used to be "sqlite_".  But then
+** McAfee started using SQLite in their anti-virus product and it
+** started putting files with the "sqlite" name in the c:/temp folder.
+** This annoyed many Windows users.  Those users would then do a 
+** Google search for "sqlite", find the telephone numbers of the
+** developers and call to wake them up at night and complain.
+** For this reason, the default name prefix is changed to be "sqlite" 
+** spelled backwards.  So the temp files are still identified, but
+** anybody smart enough to figure out the code is also likely smart
+** enough to know that calling the developer will not help get rid
+** of the file.
 */
 #ifndef TEMP_FILE_PREFIX
-# define TEMP_FILE_PREFIX "sqlite_"
+# define TEMP_FILE_PREFIX "etilqs_"
 #endif
 
 /*
@@ -110,6 +122,9 @@
 #define sqlite3OsRealloc            sqlite3GenericRealloc
 #define sqlite3OsFree               sqlite3GenericFree
 #define sqlite3OsAllocationSize     sqlite3GenericAllocationSize
+#define sqlite3OsDlopen             sqlite3UnixDlopen
+#define sqlite3OsDlsym              sqlite3UnixDlsym
+#define sqlite3OsDlclose            sqlite3UnixDlclose
 #endif
 #if OS_WIN
 #define sqlite3OsOpenReadWrite      sqlite3WinOpenReadWrite
@@ -132,6 +147,9 @@
 #define sqlite3OsRealloc            sqlite3GenericRealloc
 #define sqlite3OsFree               sqlite3GenericFree
 #define sqlite3OsAllocationSize     sqlite3GenericAllocationSize
+#define sqlite3OsDlopen             sqlite3WinDlopen
+#define sqlite3OsDlsym              sqlite3WinDlsym
+#define sqlite3OsDlclose            sqlite3WinDlclose
 #endif
 #if OS_OS2
 #define sqlite3OsOpenReadWrite      sqlite3Os2OpenReadWrite
@@ -154,6 +172,9 @@
 #define sqlite3OsRealloc            sqlite3GenericRealloc
 #define sqlite3OsFree               sqlite3GenericFree
 #define sqlite3OsAllocationSize     sqlite3GenericAllocationSize
+#define sqlite3OsDlopen             sqlite3Os2Dlopen
+#define sqlite3OsDlsym              sqlite3Os2Dlsym
+#define sqlite3OsDlclose            sqlite3Os2Dlclose
 #endif
 
 
@@ -337,6 +358,9 @@
 void *sqlite3OsRealloc(void *, int);
 void sqlite3OsFree(void *);
 int sqlite3OsAllocationSize(void *);
+void *sqlite3OsDlopen(const char*);
+void *sqlite3OsDlsym(void*, const char*);
+int sqlite3OsDlclose(void*);
 
 /*
 ** If the SQLITE_ENABLE_REDEF_IO macro is defined, then the OS-layer
@@ -381,16 +405,26 @@
   void *(*xRealloc)(void *, int);
   void (*xFree)(void *);
   int (*xAllocationSize)(void *);
+
+  void *(*xDlopen)(const char*);
+  void *(*xDlsym)(void*, const char*);
+  int (*xDlclose)(void*);
 };
 
 /* Macro used to comment out routines that do not exists when there is
-** no disk I/O 
+** no disk I/O or extension loading
 */
 #ifdef SQLITE_OMIT_DISKIO
 # define IF_DISKIO(X)  0
 #else
 # define IF_DISKIO(X)  X
 #endif
+#ifdef SQLITE_OMIT_LOAD_EXTENSION
+# define IF_DLOPEN(X)  0
+#else
+# define IF_DLOPEN(X)  X
+#endif
+
 
 #ifdef _SQLITE_OS_C_
   /*
@@ -416,7 +450,10 @@
     sqlite3OsMalloc,
     sqlite3OsRealloc,
     sqlite3OsFree,
-    sqlite3OsAllocationSize
+    sqlite3OsAllocationSize,
+    IF_DLOPEN( sqlite3OsDlopen ),
+    IF_DLOPEN( sqlite3OsDlsym ),
+    IF_DLOPEN( sqlite3OsDlclose ),
   };
 #else
   /*

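The unix-side implementations of the three new loader hooks are not shown in this excerpt; a minimal sketch of what sqlite3UnixDlopen()/Dlsym()/Dlclose() presumably look like, mirroring the dlopen flags the removed loadext.c macros used:

#include <dlfcn.h>

void *sqlite3UnixDlopen(const char *zFilename){
  return dlopen(zFilename, RTLD_NOW | RTLD_GLOBAL);
}
void *sqlite3UnixDlsym(void *pHandle, const char *zSymbol){
  return dlsym(pHandle, zSymbol);
}
int sqlite3UnixDlclose(void *pHandle){
  return dlclose(pHandle);
}
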
Modified: freeswitch/trunk/libs/sqlite/src/os_os2.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/os_os2.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/os_os2.c	Thu Feb 22 17:09:42 2007
@@ -12,6 +12,12 @@
 **
 ** This file contains code that is specific to OS/2.
 */
+
+#if (__GNUC__ > 3 || __GNUC__ == 3 && __GNUC_MINOR__ >= 3) && defined(OS2_HIGH_MEMORY)
+/* os2safe.h has to be included before os2.h, needed for high mem */
+#include <os2safe.h>
+#endif
+
 #include "sqliteInt.h"
 #include "os.h"
 
@@ -290,7 +296,14 @@
   SimulateIOError( return SQLITE_IOERR );
   TRACE3( "READ %d lock=%d\n", ((os2File*)id)->h, ((os2File*)id)->locktype );
   DosRead( ((os2File*)id)->h, pBuf, amt, &got );
-  return (got == (ULONG)amt) ? SQLITE_OK : SQLITE_IOERR;
+  if (got == (ULONG)amt)
+    return SQLITE_OK;
+  else if (got < 0)
+    return SQLITE_IOERR_READ;
+  else {
+    memset(&((char*)pBuf)[got], 0, amt-got);
+    return SQLITE_IOERR_SHORT_READ;
+  }
 }
 
 /*
@@ -768,6 +781,40 @@
 ** with other miscellanous aspects of the operating system interface
 ****************************************************************************/
 
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+/*
+** Interfaces for opening a shared library, finding entry points
+** within the shared library, and closing the shared library.
+*/
+void *sqlite3Os2Dlopen(const char *zFilename){
+  UCHAR loadErr[256];
+  HMODULE hmod;
+  APIRET rc;
+  rc = DosLoadModule(loadErr, sizeof(loadErr), zFilename, &hmod);
+  if (rc != NO_ERROR) return 0;
+  return (void*)hmod;
+}
+void *sqlite3Os2Dlsym(void *pHandle, const char *zSymbol){
+  PFN pfn;
+  APIRET rc;
+  rc = DosQueryProcAddr((HMODULE)pHandle, 0L, zSymbol, &pfn);
+  if (rc != NO_ERROR) {
+    /* if the symbol itself was not found, search again for the same
+     * symbol with an extra underscore, that might be needed depending
+     * on the calling convention */
+    char _zSymbol[256] = "_";
+    strncat(_zSymbol, zSymbol, 255);
+    rc = DosQueryProcAddr((HMODULE)pHandle, 0L, _zSymbol, &pfn);
+  }
+  if (rc != NO_ERROR) return 0;
+  return pfn;
+}
+int sqlite3Os2Dlclose(void *pHandle){
+  return DosFreeModule((HMODULE)pHandle);
+}
+#endif /* SQLITE_OMIT_LOAD_EXTENSION */
+
+
 /*
 ** Get information to seed the random number generator.  The seed
 ** is written into the buffer zBuf[256].  The calling function must

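The os2Read change above follows a convention this commit applies in all three OS back ends: when fewer bytes arrive than were requested, the tail of the buffer is zeroed and the distinct code SQLITE_IOERR_SHORT_READ is returned, so the pager can tell a read past the end of a short or empty file apart from a hard I/O failure. A hedged POSIX sketch of the same pattern, with made-up return codes:

    #include <string.h>
    #include <unistd.h>

    #define DEMO_OK            0
    #define DEMO_IOERR_READ    1   /* hard error: read() itself failed        */
    #define DEMO_IOERR_SHORT   2   /* short read: buffer tail has been zeroed */

    static int demoRead(int fd, void *pBuf, int amt){
      int got = (int)read(fd, pBuf, (size_t)amt);
      if( got==amt ) return DEMO_OK;
      if( got<0 )   return DEMO_IOERR_READ;
      memset(&((char*)pBuf)[got], 0, (size_t)(amt-got));
      return DEMO_IOERR_SHORT;     /* callers may choose to tolerate this */
    }
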
Modified: freeswitch/trunk/libs/sqlite/src/os_unix.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/os_unix.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/os_unix.c	Thu Feb 22 17:09:42 2007
@@ -565,7 +565,7 @@
   lockInfo.l_whence = SEEK_SET;
   lockInfo.l_type = F_RDLCK;
   
-  if (fcntl(fd, F_GETLK, (int) &lockInfo) != -1) {
+  if (fcntl(fd, F_GETLK, &lockInfo) != -1) {
     return posixLockingStyle;
   } 
   
@@ -1000,10 +1000,14 @@
 */
 static int seekAndRead(unixFile *id, void *pBuf, int cnt){
   int got;
+  i64 newOffset;
 #ifdef USE_PREAD
   got = pread(id->h, pBuf, cnt, id->offset);
 #else
-  lseek(id->h, id->offset, SEEK_SET);
+  newOffset = lseek(id->h, id->offset, SEEK_SET);
+  if( newOffset!=id->offset ){
+    return -1;
+  }
   got = read(id->h, pBuf, cnt);
 #endif
   if( got>0 ){
@@ -1026,12 +1030,13 @@
   TRACE5("READ    %-3d %5d %7d %d\n", ((unixFile*)id)->h, got,
           last_page, TIMER_ELAPSED);
   SEEK(0);
-  SimulateIOError( got=0 );
+  SimulateIOError( got = -1 );
   if( got==amt ){
     return SQLITE_OK;
   }else if( got<0 ){
     return SQLITE_IOERR_READ;
   }else{
+    memset(&((char*)pBuf)[got], 0, amt-got);
     return SQLITE_IOERR_SHORT_READ;
   }
 }
@@ -1042,10 +1047,14 @@
 */
 static int seekAndWrite(unixFile *id, const void *pBuf, int cnt){
   int got;
+  i64 newOffset;
 #ifdef USE_PREAD
   got = pwrite(id->h, pBuf, cnt, id->offset);
 #else
-  lseek(id->h, id->offset, SEEK_SET);
+  newOffset = lseek(id->h, id->offset, SEEK_SET);
+  if( newOffset!=id->offset ){
+    return -1;
+  }
   got = write(id->h, pBuf, cnt);
 #endif
   if( got>0 ){
@@ -1159,13 +1168,26 @@
 #if HAVE_FULLFSYNC
   if( fullSync ){
     rc = fcntl(fd, F_FULLFSYNC, 0);
-  }else
-#endif /* HAVE_FULLFSYNC */
+  }else{
+    rc = 1;
+  }
+  /* If the FULLFSYNC failed, fall back to attempting an fsync().
+   * It shouldn't be possible for fullfsync to fail on the local 
+   * file system (on OSX), so failure indicates that FULLFSYNC
+   * isn't supported for this file system. So, attempt an fsync 
+   * and (for now) ignore the overhead of a superfluous fcntl call.  
+   * It'd be better to detect fullfsync support once and avoid 
+   * the fcntl call every time sync is called.
+   */
+  if( rc ) rc = fsync(fd);
+
+#else 
   if( dataOnly ){
     rc = fdatasync(fd);
   }else{
     rc = fsync(fd);
   }
+#endif /* HAVE_FULLFSYNC */
 #endif /* defined(SQLITE_NO_SYNC) */
 
   return rc;
@@ -2445,12 +2467,12 @@
   const char *zFilename,  /* Name of the file being opened */
   int delFlag             /* Delete-on-or-before-close flag */
 ){
-  sqlite3LockingStyle lockStyle;
+  sqlite3LockingStyle lockingStyle;
   unixFile *pNew;
   unixFile f;
   int rc;
 
-  lockingStyle = sqlite3DetectLockingStyle(zFilename, f.h);
+  lockingStyle = sqlite3DetectLockingStyle(zFilename, h);
   if ( lockingStyle == posixLockingStyle ) {
     sqlite3OsEnterMutex();
     rc = findLockInfo(h, &f.pLock, &f.pOpen);
@@ -2485,7 +2507,7 @@
     return SQLITE_NOMEM;
   }else{
     *pNew = f;
-    switch(lockStyle) {
+    switch(lockingStyle) {
       case afpLockingStyle:
         /* afp locking uses the file path so it needs to be included in
         ** the afpLockingContext */
@@ -2581,6 +2603,23 @@
 ****************************************************************************/
 
 
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
+/*
+** Interfaces for opening a shared library, finding entry points
+** within the shared library, and closing the shared library.
+*/
+#include <dlfcn.h>
+void *sqlite3UnixDlopen(const char *zFilename){
+  return dlopen(zFilename, RTLD_NOW | RTLD_GLOBAL);
+}
+void *sqlite3UnixDlsym(void *pHandle, const char *zSymbol){
+  return dlsym(pHandle, zSymbol);
+}
+int sqlite3UnixDlclose(void *pHandle){
+  return dlclose(pHandle);
+}
+#endif /* SQLITE_OMIT_LOAD_EXTENSION */
+
 /*
 ** Get information to seed the random number generator.  The seed
 ** is written into the buffer zBuf[256].  The calling function must

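sqlite3UnixDlopen/Dlsym/Dlclose are thin wrappers over the standard dlfcn interface. For reference, a minimal self-contained usage sketch of that interface (the library name and symbol are placeholders; link with -ldl on glibc):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void){
      /* Open the math library and look up cos(), mirroring dlopen/dlsym use. */
      void *h = dlopen("libm.so.6", RTLD_NOW | RTLD_GLOBAL);
      if( h==0 ){
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
      }
      double (*pCos)(double) = (double(*)(double))dlsym(h, "cos");
      if( pCos ) printf("cos(0) = %f\n", pCos(0.0));
      dlclose(h);
      return 0;
    }
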
Modified: freeswitch/trunk/libs/sqlite/src/os_win.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/os_win.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/os_win.c	Thu Feb 22 17:09:42 2007
@@ -40,6 +40,7 @@
 */
 #if defined(_WIN32_WCE)
 # define OS_WINCE 1
+# define AreFileApisANSI() 1
 #else
 # define OS_WINCE 0
 #endif
@@ -124,16 +125,14 @@
 #endif /* OS_WINCE */
 
 /*
-** Convert a UTF-8 string to UTF-32.  Space to hold the returned string
-** is obtained from sqliteMalloc.
+** Convert a UTF-8 string to Microsoft Unicode (UTF-16).
+**
+** Space to hold the returned string is obtained from sqliteMalloc.
 */
 static WCHAR *utf8ToUnicode(const char *zFilename){
   int nChar;
   WCHAR *zWideFilename;
 
-  if( !isNT() ){
-    return 0;
-  }
   nChar = MultiByteToWideChar(CP_UTF8, 0, zFilename, -1, NULL, 0);
   zWideFilename = sqliteMalloc( nChar*sizeof(zWideFilename[0]) );
   if( zWideFilename==0 ){
@@ -148,7 +147,7 @@
 }
 
 /*
-** Convert UTF-32 to UTF-8.  Space to hold the returned string is
+** Convert Microsoft Unicode to UTF-8.  Space to hold the returned string is
 ** obtained from sqliteMalloc().
 */
 static char *unicodeToUtf8(const WCHAR *zWideFilename){
@@ -169,6 +168,91 @@
   return zFilename;
 }
 
+/*
+** Convert an ANSI string to Microsoft Unicode, based on the
+** current codepage settings for the file APIs.
+** 
+** Space to hold the returned string is obtained
+** from sqliteMalloc.
+*/
+static WCHAR *mbcsToUnicode(const char *zFilename){
+  int nByte;
+  WCHAR *zMbcsFilename;
+  int codepage = AreFileApisANSI() ? CP_ACP : CP_OEMCP;
+
+  nByte = MultiByteToWideChar(codepage, 0, zFilename, -1, NULL,0)*sizeof(WCHAR);
+  zMbcsFilename = sqliteMalloc( nByte*sizeof(zMbcsFilename[0]) );
+  if( zMbcsFilename==0 ){
+    return 0;
+  }
+  nByte = MultiByteToWideChar(codepage, 0, zFilename, -1, zMbcsFilename, nByte);
+  if( nByte==0 ){
+    sqliteFree(zMbcsFilename);
+    zMbcsFilename = 0;
+  }
+  return zMbcsFilename;
+}
+
+/*
+** Convert Microsoft Unicode to a multibyte character string, based on
+** the user's ANSI codepage.
+**
+** Space to hold the returned string is obtained from
+** sqliteMalloc().
+*/
+static char *unicodeToMbcs(const WCHAR *zWideFilename){
+  int nByte;
+  char *zFilename;
+  int codepage = AreFileApisANSI() ? CP_ACP : CP_OEMCP;
+
+  nByte = WideCharToMultiByte(codepage, 0, zWideFilename, -1, 0, 0, 0, 0);
+  zFilename = sqliteMalloc( nByte );
+  if( zFilename==0 ){
+    return 0;
+  }
+  nByte = WideCharToMultiByte(codepage, 0, zWideFilename, -1, zFilename, nByte,
+                              0, 0);
+  if( nByte == 0 ){
+    sqliteFree(zFilename);
+    zFilename = 0;
+  }
+  return zFilename;
+}
+
+/*
+** Convert multibyte character string to UTF-8.  Space to hold the
+** returned string is obtained from sqliteMalloc().
+*/
+static char *mbcsToUtf8(const char *zFilename){
+  char *zFilenameUtf8;
+  WCHAR *zTmpWide;
+
+  zTmpWide = mbcsToUnicode(zFilename);
+  if( zTmpWide==0 ){
+    return 0;
+  }
+  zFilenameUtf8 = unicodeToUtf8(zTmpWide);
+  sqliteFree(zTmpWide);
+  return zFilenameUtf8;
+}
+
+/*
+** Convert UTF-8 to multibyte character string.  Space to hold the 
+** returned string is obtained from sqliteMalloc().
+*/
+static char *utf8ToMbcs(const char *zFilename){
+  char *zFilenameMbcs;
+  WCHAR *zTmpWide;
+
+  zTmpWide = utf8ToUnicode(zFilename);
+  if( zTmpWide==0 ){
+    return 0;
+  }
+  zFilenameMbcs = unicodeToMbcs(zTmpWide);
+  sqliteFree(zTmpWide);
+  return zFilenameMbcs;
+}
+
 #if OS_WINCE
 /*************************************************************************
 ** This section contains code for WinCE only.
@@ -476,6 +560,23 @@
 #endif /* OS_WINCE */
 
 /*
+** Convert a UTF-8 filename into whatever form the underlying
+** operating system wants filenames in.  Space to hold the result
+** is obtained from sqliteMalloc and must be freed by the calling
+** function.
+*/
+static void *convertUtf8Filename(const char *zFilename){
+  void *zConverted = 0;
+  if( isNT() ){
+    zConverted = utf8ToUnicode(zFilename);
+  }else{
+    zConverted = utf8ToMbcs(zFilename);
+  }
+  /* caller will handle out of memory */
+  return zConverted;
+}
+
+/*
 ** Delete the named file.
 **
 ** Note that windows does not allow a file to be deleted if some other
@@ -489,25 +590,28 @@
 */
 #define MX_DELETION_ATTEMPTS 3
 int sqlite3WinDelete(const char *zFilename){
-  WCHAR *zWide = utf8ToUnicode(zFilename);
   int cnt = 0;
   int rc;
-  if( zWide ){
+  void *zConverted = convertUtf8Filename(zFilename);
+  if( zConverted==0 ){
+    return SQLITE_NOMEM;
+  }
+  if( isNT() ){
     do{
-      rc = DeleteFileW(zWide);
-    }while( rc==0 && GetFileAttributesW(zWide)!=0xffffffff 
+      rc = DeleteFileW(zConverted);
+    }while( rc==0 && GetFileAttributesW(zConverted)!=0xffffffff 
             && cnt++ < MX_DELETION_ATTEMPTS && (Sleep(100), 1) );
-    sqliteFree(zWide);
   }else{
 #if OS_WINCE
     return SQLITE_NOMEM;
 #else
     do{
-      rc = DeleteFileA(zFilename);
-    }while( rc==0 && GetFileAttributesA(zFilename)!=0xffffffff
+      rc = DeleteFileA(zConverted);
+    }while( rc==0 && GetFileAttributesA(zConverted)!=0xffffffff
             && cnt++ < MX_DELETION_ATTEMPTS && (Sleep(100), 1) );
 #endif
   }
+  sqliteFree(zConverted);
   TRACE2("DELETE \"%s\"\n", zFilename);
   return rc!=0 ? SQLITE_OK : SQLITE_IOERR;
 }
@@ -517,17 +621,20 @@
 */
 int sqlite3WinFileExists(const char *zFilename){
   int exists = 0;
-  WCHAR *zWide = utf8ToUnicode(zFilename);
-  if( zWide ){
-    exists = GetFileAttributesW(zWide) != 0xffffffff;
-    sqliteFree(zWide);
+  void *zConverted = convertUtf8Filename(zFilename);
+  if( zConverted==0 ){
+    return SQLITE_NOMEM;
+  }
+  if( isNT() ){
+    exists = GetFileAttributesW((WCHAR*)zConverted) != 0xffffffff;
   }else{
 #if OS_WINCE
     return SQLITE_NOMEM;
 #else
-    exists = GetFileAttributesA(zFilename) != 0xffffffff;
+    exists = GetFileAttributesA((char*)zConverted) != 0xffffffff;
 #endif
   }
+  sqliteFree(zConverted);
   return exists;
 }
 
@@ -554,10 +661,14 @@
 ){
   winFile f;
   HANDLE h;
-  WCHAR *zWide = utf8ToUnicode(zFilename);
+  void *zConverted = convertUtf8Filename(zFilename);
+  if( zConverted==0 ){
+    return SQLITE_NOMEM;
+  }
   assert( *pId==0 );
-  if( zWide ){
-    h = CreateFileW(zWide,
+
+  if( isNT() ){
+    h = CreateFileW((WCHAR*)zConverted,
        GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL,
@@ -566,7 +677,7 @@
        NULL
     );
     if( h==INVALID_HANDLE_VALUE ){
-      h = CreateFileW(zWide,
+      h = CreateFileW((WCHAR*)zConverted,
          GENERIC_READ,
          FILE_SHARE_READ | FILE_SHARE_WRITE,
          NULL,
@@ -575,7 +686,7 @@
          NULL
       );
       if( h==INVALID_HANDLE_VALUE ){
-        sqliteFree(zWide);
+        sqliteFree(zConverted);
         return SQLITE_CANTOPEN;
       }
       *pReadonly = 1;
@@ -585,16 +696,15 @@
 #if OS_WINCE
     if (!winceCreateLock(zFilename, &f)){
       CloseHandle(h);
-      sqliteFree(zWide);
+      sqliteFree(zConverted);
       return SQLITE_CANTOPEN;
     }
 #endif
-    sqliteFree(zWide);
   }else{
 #if OS_WINCE
     return SQLITE_NOMEM;
 #else
-    h = CreateFileA(zFilename,
+    h = CreateFileA((char*)zConverted,
        GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE,
        NULL,
@@ -603,7 +713,7 @@
        NULL
     );
     if( h==INVALID_HANDLE_VALUE ){
-      h = CreateFileA(zFilename,
+      h = CreateFileA((char*)zConverted,
          GENERIC_READ,
          FILE_SHARE_READ | FILE_SHARE_WRITE,
          NULL,
@@ -612,6 +722,7 @@
          NULL
       );
       if( h==INVALID_HANDLE_VALUE ){
+        sqliteFree(zConverted);
         return SQLITE_CANTOPEN;
       }
       *pReadonly = 1;
@@ -620,6 +731,9 @@
     }
 #endif /* OS_WINCE */
   }
+
+  sqliteFree(zConverted);
+
   f.h = h;
 #if OS_WINCE
   f.zDeleteOnClose = 0;
@@ -652,8 +766,11 @@
 int sqlite3WinOpenExclusive(const char *zFilename, OsFile **pId, int delFlag){
   winFile f;
   HANDLE h;
-  int fileflags;
-  WCHAR *zWide = utf8ToUnicode(zFilename);
+  DWORD fileflags;
+  void *zConverted = convertUtf8Filename(zFilename);
+  if( zConverted==0 ){
+    return SQLITE_NOMEM;
+  }
   assert( *pId == 0 );
   fileflags = FILE_FLAG_RANDOM_ACCESS;
 #if !OS_WINCE
@@ -661,10 +778,10 @@
     fileflags |= FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE;
   }
 #endif
-  if( zWide ){
+  if( isNT() ){
     int cnt = 0;
     do{
-      h = CreateFileW(zWide,
+      h = CreateFileW((WCHAR*)zConverted,
          GENERIC_READ | GENERIC_WRITE,
          0,
          NULL,
@@ -673,14 +790,13 @@
          NULL
       );
     }while( h==INVALID_HANDLE_VALUE && cnt++ < 2 && (Sleep(100), 1) );
-    sqliteFree(zWide);
   }else{
 #if OS_WINCE
     return SQLITE_NOMEM;
 #else
     int cnt = 0;
     do{
-      h = CreateFileA(zFilename,
+      h = CreateFileA((char*)zConverted,
         GENERIC_READ | GENERIC_WRITE,
         0,
         NULL,
@@ -691,14 +807,18 @@
     }while( h==INVALID_HANDLE_VALUE && cnt++ < 2 && (Sleep(100), 1) );
 #endif /* OS_WINCE */
   }
+#if OS_WINCE
+  if( delFlag && h!=INVALID_HANDLE_VALUE ){
+    f.zDeleteOnClose = zConverted;
+    zConverted = 0;
+  }
+  f.hMutex = NULL;
+#endif
+  sqliteFree(zConverted);
   if( h==INVALID_HANDLE_VALUE ){
     return SQLITE_CANTOPEN;
   }
   f.h = h;
-#if OS_WINCE
-  f.zDeleteOnClose = delFlag ? utf8ToUnicode(zFilename) : 0;
-  f.hMutex = NULL;
-#endif
   TRACE3("OPEN EX %d \"%s\"\n", h, zFilename);
   return allocateWinFile(&f, pId);
 }
@@ -713,10 +833,13 @@
 int sqlite3WinOpenReadOnly(const char *zFilename, OsFile **pId){
   winFile f;
   HANDLE h;
-  WCHAR *zWide = utf8ToUnicode(zFilename);
+  void *zConverted = convertUtf8Filename(zFilename);
+  if( zConverted==0 ){
+    return SQLITE_NOMEM;
+  }
   assert( *pId==0 );
-  if( zWide ){
-    h = CreateFileW(zWide,
+  if( isNT() ){
+    h = CreateFileW((WCHAR*)zConverted,
        GENERIC_READ,
        0,
        NULL,
@@ -724,12 +847,11 @@
        FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS,
        NULL
     );
-    sqliteFree(zWide);
   }else{
 #if OS_WINCE
     return SQLITE_NOMEM;
 #else
-    h = CreateFileA(zFilename,
+    h = CreateFileA((char*)zConverted,
        GENERIC_READ,
        0,
        NULL,
@@ -739,6 +861,7 @@
     );
 #endif
   }
+  sqliteFree(zConverted);
   if( h==INVALID_HANDLE_VALUE ){
     return SQLITE_CANTOPEN;
   }
@@ -804,9 +927,21 @@
       strncpy(zTempPath, zMulti, SQLITE_TEMPNAME_SIZE-30);
       zTempPath[SQLITE_TEMPNAME_SIZE-30] = 0;
       sqliteFree(zMulti);
+    }else{
+      return SQLITE_NOMEM;
     }
   }else{
-    GetTempPathA(SQLITE_TEMPNAME_SIZE-30, zTempPath);
+    char *zUtf8;
+    char zMbcsPath[SQLITE_TEMPNAME_SIZE];
+    GetTempPathA(SQLITE_TEMPNAME_SIZE-30, zMbcsPath);
+    zUtf8 = mbcsToUtf8(zMbcsPath);
+    if( zUtf8 ){
+      strncpy(zTempPath, zUtf8, SQLITE_TEMPNAME_SIZE-30);
+      zTempPath[SQLITE_TEMPNAME_SIZE-30] = 0;
+      sqliteFree(zUtf8);
+    }else{
+      return SQLITE_NOMEM;
+    }
   }
   for(i=strlen(zTempPath); i>0 && zTempPath[i-1]=='\\'; i--){}
   zTempPath[i] = 0;
@@ -866,15 +1001,16 @@
 static int winRead(OsFile *id, void *pBuf, int amt){
   DWORD got;
   assert( id!=0 );
-  SimulateIOError(return SQLITE_IOERR);
+  SimulateIOError(return SQLITE_IOERR_READ);
   TRACE3("READ %d lock=%d\n", ((winFile*)id)->h, ((winFile*)id)->locktype);
   if( !ReadFile(((winFile*)id)->h, pBuf, amt, &got, 0) ){
-    got = 0;
+    return SQLITE_IOERR_READ;
   }
   if( got==(DWORD)amt ){
     return SQLITE_OK;
   }else{
-    return SQLITE_IOERR;
+    memset(&((char*)pBuf)[got], 0, amt-got);
+    return SQLITE_IOERR_SHORT_READ;
   }
 }
 
@@ -886,7 +1022,7 @@
   int rc = 0;
   DWORD wrote;
   assert( id!=0 );
-  SimulateIOError(return SQLITE_IOERR);
+  SimulateIOError(return SQLITE_IOERR_READ);
   SimulateDiskfullError(return SQLITE_FULL);
   TRACE3("WRITE %d lock=%d\n", ((winFile*)id)->h, ((winFile*)id)->locktype);
   assert( amt>0 );
@@ -946,7 +1082,7 @@
 ** than UNIX.
 */
 int sqlite3WinSyncDirectory(const char *zDirname){
-  SimulateIOError(return SQLITE_IOERR);
+  SimulateIOError(return SQLITE_IOERR_READ);
   return SQLITE_OK;
 }
 
@@ -957,7 +1093,7 @@
   LONG upperBits = nByte>>32;
   assert( id!=0 );
   TRACE3("TRUNCATE %d %lld\n", ((winFile*)id)->h, nByte);
-  SimulateIOError(return SQLITE_IOERR);
+  SimulateIOError(return SQLITE_IOERR_TRUNCATE);
   SetFilePointer(((winFile*)id)->h, nByte, &upperBits, FILE_BEGIN);
   SetEndOfFile(((winFile*)id)->h);
   return SQLITE_OK;
@@ -969,7 +1105,7 @@
 static int winFileSize(OsFile *id, i64 *pSize){
   DWORD upperBits, lowerBits;
   assert( id!=0 );
-  SimulateIOError(return SQLITE_IOERR);
+  SimulateIOError(return SQLITE_IOERR_FSTAT);
   lowerBits = GetFileSize(((winFile*)id)->h, &upperBits);
   *pSize = (((i64)upperBits)<<32) + lowerBits;
   return SQLITE_OK;
@@ -1024,20 +1160,24 @@
 */
 int sqlite3WinIsDirWritable(char *zDirname){
   int fileAttr;
-  WCHAR *zWide;
+  void *zConverted;
   if( zDirname==0 ) return 0;
   if( !isNT() && strlen(zDirname)>MAX_PATH ) return 0;
-  zWide = utf8ToUnicode(zDirname);
-  if( zWide ){
-    fileAttr = GetFileAttributesW(zWide);
-    sqliteFree(zWide);
+
+  zConverted = convertUtf8Filename(zDirname);
+  if( zConverted==0 ){
+    return SQLITE_NOMEM;
+  }
+  if( isNT() ){
+    fileAttr = GetFileAttributesW((WCHAR*)zConverted);
   }else{
 #if OS_WINCE
     return 0;
 #else
-    fileAttr = GetFileAttributesA(zDirname);
+    fileAttr = GetFileAttributesA((char*)zConverted);
 #endif
   }
+  sqliteFree(zConverted);
   if( fileAttr == 0xffffffff ) return 0;
   if( (fileAttr & FILE_ATTRIBUTE_DIRECTORY) != FILE_ATTRIBUTE_DIRECTORY ){
     return 0;
@@ -1226,7 +1366,7 @@
     if( locktype==SHARED_LOCK && !getReadLock(pFile) ){
       /* This should never happen.  We should always be able to
       ** reacquire the read lock */
-      rc = SQLITE_IOERR;
+      rc = SQLITE_IOERR_UNLOCK;
     }
   }
   if( type>=RESERVED_LOCK ){
@@ -1260,24 +1400,33 @@
   /* WinCE has no concept of a relative pathname, or so I am told. */
   zFull = sqliteStrDup(zRelative);
 #else
-  char *zNotUsed;
-  WCHAR *zWide;
   int nByte;
-  zWide = utf8ToUnicode(zRelative);
-  if( zWide ){
-    WCHAR *zTemp, *zNotUsedW;
-    nByte = GetFullPathNameW(zWide, 0, 0, &zNotUsedW) + 1;
+  void *zConverted;
+  zConverted = convertUtf8Filename(zRelative);
+  if( isNT() ){
+    WCHAR *zTemp;
+    nByte = GetFullPathNameW((WCHAR*)zConverted, 0, 0, 0) + 3;
     zTemp = sqliteMalloc( nByte*sizeof(zTemp[0]) );
-    if( zTemp==0 ) return 0;
-    GetFullPathNameW(zWide, nByte, zTemp, &zNotUsedW);
-    sqliteFree(zWide);
+    if( zTemp==0 ){
+      sqliteFree(zConverted);
+      return 0;
+    }
+    GetFullPathNameW((WCHAR*)zConverted, nByte, zTemp, 0);
+    sqliteFree(zConverted);
     zFull = unicodeToUtf8(zTemp);
     sqliteFree(zTemp);
   }else{
-    nByte = GetFullPathNameA(zRelative, 0, 0, &zNotUsed) + 1;
-    zFull = sqliteMalloc( nByte*sizeof(zFull[0]) );
-    if( zFull==0 ) return 0;
-    GetFullPathNameA(zRelative, nByte, zFull, &zNotUsed);
+    char *zTemp;
+    nByte = GetFullPathNameA((char*)zConverted, 0, 0, 0) + 3;
+    zTemp = sqliteMalloc( nByte*sizeof(zTemp[0]) );
+    if( zTemp==0 ){
+      sqliteFree(zConverted);
+      return 0;
+    }
+    GetFullPathNameA((char*)zConverted, nByte, zTemp, 0);
+    sqliteFree(zConverted);
+    zFull = mbcsToUtf8(zTemp);
+    sqliteFree(zTemp);
   }
 #endif
   return zFull;
@@ -1359,6 +1508,45 @@
 ** with other miscellaneous aspects of the operating system interface
 ****************************************************************************/
 
+#if !defined(SQLITE_OMIT_LOAD_EXTENSION)
+/*
+** Interfaces for opening a shared library, finding entry points
+** within the shared library, and closing the shared library.
+*/
+void *sqlite3WinDlopen(const char *zFilename){
+  HANDLE h;
+  void *zConverted = convertUtf8Filename(zFilename);
+  if( zConverted==0 ){
+    return 0;
+  }
+  if( isNT() ){
+    h = LoadLibraryW((WCHAR*)zConverted);
+  }else{
+#if OS_WINCE
+    return 0;
+#else
+    h = LoadLibraryA((char*)zConverted);
+#endif
+  }
+  sqliteFree(zConverted);
+  return (void*)h;
+  
+}
+void *sqlite3WinDlsym(void *pHandle, const char *zSymbol){
+#if OS_WINCE
+  /* The GetProcAddressA() routine is only available on WinCE. */
+  return GetProcAddressA((HANDLE)pHandle, zSymbol);
+#else
+  /* All other Windows platforms expect GetProcAddress() to take
+  ** an ANSI string regardless of the _UNICODE setting. */
+  return GetProcAddress((HANDLE)pHandle, zSymbol);
+#endif
+}
+int sqlite3WinDlclose(void *pHandle){
+  return FreeLibrary((HANDLE)pHandle);
+}
+#endif /* !SQLITE_OMIT_LOAD_EXTENSION */
+
 /*
 ** Get information to seed the random number generator.  The seed
 ** is written into the buffer zBuf[256].  The calling function must

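Most of the os_win.c churn above routes filenames through convertUtf8Filename(), which on NT uses the two-pass MultiByteToWideChar idiom of utf8ToUnicode(): call once with a NULL output buffer to learn the required length, allocate, then convert for real. A standalone sketch of that idiom using plain malloc() instead of sqliteMalloc():

    #include <windows.h>
    #include <stdlib.h>

    /* Returns a malloc()ed UTF-16 copy of a UTF-8 string, or NULL on failure.
    ** The caller is responsible for free()ing the result. */
    static WCHAR *demoUtf8ToUnicode(const char *zUtf8){
      int nChar = MultiByteToWideChar(CP_UTF8, 0, zUtf8, -1, NULL, 0);
      WCHAR *zWide = (WCHAR*)malloc(nChar*sizeof(WCHAR));
      if( zWide==0 ) return NULL;
      nChar = MultiByteToWideChar(CP_UTF8, 0, zUtf8, -1, zWide, nChar);
      if( nChar==0 ){ free(zWide); return NULL; }
      return zWide;
    }
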
Modified: freeswitch/trunk/libs/sqlite/src/pager.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/pager.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/pager.c	Thu Feb 22 17:09:42 2007
@@ -18,7 +18,7 @@
 ** file simultaneously, or one process from reading the database while
 ** another is writing.
 **
-** @(#) $Id: pager.c,v 1.274 2006/10/03 19:05:19 drh Exp $
+** @(#) $Id: pager.c,v 1.282 2007/01/05 02:00:47 drh Exp $
 */
 #ifndef SQLITE_OMIT_DISKIO
 #include "sqliteInt.h"
@@ -31,6 +31,7 @@
 ** Macros for troubleshooting.  Normally turned off
 */
 #if 0
+#define sqlite3DebugPrintf printf
 #define TRACE1(X)       sqlite3DebugPrintf(X)
 #define TRACE2(X,Y)     sqlite3DebugPrintf(X,Y)
 #define TRACE3(X,Y,Z)   sqlite3DebugPrintf(X,Y,Z)
@@ -350,7 +351,9 @@
 /*
 ** The default size of a disk sector
 */
-#define PAGER_SECTOR_SIZE 512
+#ifndef PAGER_SECTOR_SIZE
+# define PAGER_SECTOR_SIZE 512
+#endif
 
 /*
 ** Page number PAGER_MJ_PGNO is never used in an SQLite database (it is
@@ -376,8 +379,8 @@
     static int cnt = 0;
     if( !pager3_refinfo_enable ) return;
     sqlite3DebugPrintf(
-       "REFCNT: %4d addr=%p nRef=%d\n",
-       p->pgno, PGHDR_TO_DATA(p), p->nRef
+       "REFCNT: %4d addr=%p nRef=%-3d total=%d\n",
+       p->pgno, PGHDR_TO_DATA(p), p->nRef, p->pPager->nRef
     );
     cnt++;   /* Something to set a breakpoint on */
   }
@@ -848,6 +851,23 @@
 }
 
 /*
+** Unlock the database file.
+**
+** Once all locks have been removed from the database file, other
+** processes or threads might change the file.  So make sure all of
+** our internal cache is invalidated.
+*/
+static void pager_unlock(Pager *pPager){
+  if( !MEMDB ){
+    sqlite3OsUnlock(pPager->fd, NO_LOCK);
+    pPager->dbSize = -1;
+  }
+  pPager->state = PAGER_UNLOCK;
+  assert( pPager->pAll==0 );
+}
+
+
+/*
 ** Unlock the database and clear the in-memory cache.  This routine
 ** sets the state of the pager back to what it was when it was first
 ** opened.  Any outstanding pages are invalidated and subsequent attempts
@@ -871,11 +891,9 @@
   if( pPager->state>=PAGER_RESERVED ){
     sqlite3pager_rollback(pPager);
   }
-  sqlite3OsUnlock(pPager->fd, NO_LOCK);
-  pPager->state = PAGER_UNLOCK;
-  pPager->dbSize = -1;
+  pager_unlock(pPager);
   pPager->nRef = 0;
-  assert( pPager->journalOpen==0 );
+  assert( pPager->errCode || (pPager->journalOpen==0 && pPager->stmtOpen==0) );
 }
 
 /*
@@ -927,6 +945,7 @@
   pPager->setMaster = 0;
   pPager->needSync = 0;
   pPager->pFirstSynced = pPager->pFirst;
+  pPager->dbSize = -1;
   return rc;
 }
 
@@ -1421,6 +1440,7 @@
   if( pPager->state>=PAGER_EXCLUSIVE ){
     rc = pager_truncate(pPager, pPager->stmtSize);
   }
+  assert( pPager->state>=PAGER_SHARED );
   pPager->dbSize = pPager->stmtSize;
 
   /* Figure out how many records are in the statement journal.
@@ -1798,14 +1818,19 @@
 ** response is to zero the memory at pDest and continue.  A real IO error 
 ** will presumably recur and be picked up later (Todo: Think about this).
 */
-void sqlite3pager_read_fileheader(Pager *pPager, int N, unsigned char *pDest){
+int sqlite3pager_read_fileheader(Pager *pPager, int N, unsigned char *pDest){
+  int rc = SQLITE_OK;
   memset(pDest, 0, N);
   if( MEMDB==0 ){
     disable_simulated_io_errors();
     sqlite3OsSeek(pPager->fd, 0);
-    sqlite3OsRead(pPager->fd, pDest, N);
     enable_simulated_io_errors();
+    rc = sqlite3OsRead(pPager->fd, pDest, N);
+    if( rc==SQLITE_IOERR_SHORT_READ ){
+      rc = SQLITE_OK;
+    }
   }
+  return rc;
 }
 
 /*
@@ -1965,9 +1990,15 @@
 */
 static int pager_wait_on_lock(Pager *pPager, int locktype){
   int rc;
+
+  /* The OS lock values must be the same as the Pager lock values */
   assert( PAGER_SHARED==SHARED_LOCK );
   assert( PAGER_RESERVED==RESERVED_LOCK );
   assert( PAGER_EXCLUSIVE==EXCLUSIVE_LOCK );
+
+  /* If the file is currently unlocked then the size must be unknown */
+  assert( pPager->state>=PAGER_SHARED || pPager->dbSize<0 || MEMDB );
+
   if( pPager->state>=locktype ){
     rc = SQLITE_OK;
   }else{
@@ -1986,6 +2017,7 @@
 */
 int sqlite3pager_truncate(Pager *pPager, Pgno nPage){
   int rc;
+  assert( pPager->state>=PAGER_SHARED || MEMDB );
   sqlite3pager_pagecount(pPager);
   if( pPager->errCode ){
     rc = pPager->errCode;
@@ -2032,7 +2064,6 @@
 ** to the caller.
 */
 int sqlite3pager_close(Pager *pPager){
-  PgHdr *pPg, *pNext;
 #ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT
   /* A malloc() cannot fail in sqlite3ThreadData() as one or more calls to 
   ** malloc() must have already been made by this thread before it gets
@@ -2044,46 +2075,10 @@
   assert( pTsd && pTsd->nAlloc );
 #endif
 
-  switch( pPager->state ){
-    case PAGER_RESERVED:
-    case PAGER_SYNCED: 
-    case PAGER_EXCLUSIVE: {
-      /* We ignore any IO errors that occur during the rollback
-      ** operation. So disable IO error simulation so that testing
-      ** works more easily.
-      */
-      disable_simulated_io_errors();
-      sqlite3pager_rollback(pPager);
-      enable_simulated_io_errors();
-      if( !MEMDB ){
-        sqlite3OsUnlock(pPager->fd, NO_LOCK);
-      }
-      assert( pPager->errCode || pPager->journalOpen==0 );
-      break;
-    }
-    case PAGER_SHARED: {
-      if( !MEMDB ){
-        sqlite3OsUnlock(pPager->fd, NO_LOCK);
-      }
-      break;
-    }
-    default: {
-      /* Do nothing */
-      break;
-    }
-  }
-  for(pPg=pPager->pAll; pPg; pPg=pNext){
-#ifndef NDEBUG
-    if( MEMDB ){
-      PgHistory *pHist = PGHDR_TO_HIST(pPg, pPager);
-      assert( !pPg->alwaysRollback );
-      assert( !pHist->pOrig );
-      assert( !pHist->pStmt );
-    }
-#endif
-    pNext = pPg->pNextAll;
-    sqliteFree(pPg);
-  }
+  disable_simulated_io_errors();
+  pPager->errCode = 0;
+  pager_reset(pPager);
+  enable_simulated_io_errors();
   TRACE2("CLOSE %d\n", PAGERID(pPager));
   assert( pPager->errCode || (pPager->journalOpen==0 && pPager->stmtOpen==0) );
   if( pPager->journalOpen ){
@@ -2665,8 +2660,7 @@
        */
        rc = sqlite3OsLock(pPager->fd, EXCLUSIVE_LOCK);
        if( rc!=SQLITE_OK ){
-         sqlite3OsUnlock(pPager->fd, NO_LOCK);
-         pPager->state = PAGER_UNLOCK;
+         pager_unlock(pPager);
          return pager_error(pPager, rc);
        }
        pPager->state = PAGER_EXCLUSIVE;
@@ -2681,8 +2675,7 @@
        */
        rc = sqlite3OsOpenReadOnly(pPager->zJournal, &pPager->jfd);
        if( rc!=SQLITE_OK ){
-         sqlite3OsUnlock(pPager->fd, NO_LOCK);
-         pPager->state = PAGER_UNLOCK;
+         pager_unlock(pPager);
          return SQLITE_BUSY;
        }
        pPager->journalOpen = 1;
@@ -2789,19 +2782,10 @@
       }
       TRACE3("FETCH %d page %d\n", PAGERID(pPager), pPg->pgno);
       CODEC1(pPager, PGHDR_TO_DATA(pPg), pPg->pgno, 3);
-      if( rc!=SQLITE_OK ){
-        i64 fileSize;
-        int rc2 = sqlite3OsFileSize(pPager->fd, &fileSize);
-        if( rc2!=SQLITE_OK || fileSize>=pgno*pPager->pageSize ){
-	  /* An IO error occured in one of the the sqlite3OsSeek() or
-          ** sqlite3OsRead() calls above. */
-          pPg->pgno = 0;
-          sqlite3pager_unref(PGHDR_TO_DATA(pPg));
-          return rc;
-        }else{
-          clear_simulated_io_error();
-          memset(PGHDR_TO_DATA(pPg), 0, pPager->pageSize);
-        }
+      if( rc!=SQLITE_OK && rc!=SQLITE_IOERR_SHORT_READ ){
+        pPg->pgno = 0;
+        sqlite3pager_unref(PGHDR_TO_DATA(pPg));
+        return rc;
       }else{
         TEST_INCR(pPager->nRead);
       }
@@ -2973,8 +2957,7 @@
     */
     sqlite3OsDelete(pPager->zJournal);
   }else{
-    sqlite3OsUnlock(pPager->fd, NO_LOCK);
-    pPager->state = PAGER_UNLOCK;
+    pager_reset(pPager);
   }
   return rc;
 }
@@ -3233,6 +3216,7 @@
 
   /* Update the database size and return.
   */
+  assert( pPager->state>=PAGER_SHARED );
   if( pPager->dbSize<(int)pPg->pgno ){
     pPager->dbSize = pPg->pgno;
     if( !MEMDB && pPager->dbSize==PENDING_BYTE/pPager->pageSize ){
@@ -3308,6 +3292,7 @@
   assert( pPg!=0 );  /* We never call _dont_write unless the page is in mem */
   pPg->alwaysRollback = 1;
   if( pPg->dirty && !pPager->stmtInUse ){
+    assert( pPager->state>=PAGER_SHARED );
     if( pPager->dbSize==(int)pPg->pgno && pPager->origDbSize<pPager->dbSize ){
      /* If this page is the last page in the file and the file has grown
       ** during the current transaction, then do NOT mark the page as clean.
@@ -3337,7 +3322,8 @@
   PgHdr *pPg = DATA_TO_PGHDR(pData);
   Pager *pPager = pPg->pPager;
 
-  if( pPager->state!=PAGER_EXCLUSIVE || pPager->journalOpen==0 ) return;
+  assert( pPager->state>=PAGER_RESERVED );
+  if( pPager->journalOpen==0 ) return;
   if( pPg->alwaysRollback || pPager->alwaysRollback || MEMDB ) return;
   if( !pPg->inJournal && (int)pPg->pgno <= pPager->origDbSize ){
     assert( pPager->aInJournal!=0 );
@@ -3405,14 +3391,12 @@
     ** if there have been no changes to the database file. */
     assert( pPager->needSync==0 );
     rc = pager_unwritelock(pPager);
-    pPager->dbSize = -1;
     return rc;
   }
   assert( pPager->journalOpen );
   rc = sqlite3pager_sync(pPager, 0, 0);
   if( rc==SQLITE_OK ){
     rc = pager_unwritelock(pPager);
-    pPager->dbSize = -1;
   }
   return rc;
 }
@@ -3470,7 +3454,6 @@
 
   if( !pPager->dirtyCache || !pPager->journalOpen ){
     rc = pager_unwritelock(pPager);
-    pPager->dbSize = -1;
     return rc;
   }
 
@@ -3546,6 +3529,7 @@
   int rc;
   char zTemp[SQLITE_TEMPNAME_SIZE];
   assert( !pPager->stmtInUse );
+  assert( pPager->state>=PAGER_SHARED );
   assert( pPager->dbSize>=0 );
   TRACE2("STMT-BEGIN %d\n", PAGERID(pPager));
   if( MEMDB ){

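The pager changes lean on the new short-read convention: sqlite3pager_read_fileheader() pre-zeroes the destination, and because the OS layer zero-fills whatever it could not read, a SQLITE_IOERR_SHORT_READ (for example from a brand-new, empty database file) can safely be mapped back to SQLITE_OK. A rough POSIX sketch of the same idea, with illustrative names:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Read up to N header bytes; a short read simply leaves trailing zeros. */
    static int demoReadHeader(int fd, unsigned char *pDest, int N){
      memset(pDest, 0, (size_t)N);                /* pre-zero, as the pager does */
      ssize_t got = pread(fd, pDest, (size_t)N, 0);
      if( got<0 ) return 1;                       /* hard I/O error              */
      return 0;                                   /* full or short read is fine  */
    }

    int main(int argc, char **argv){
      unsigned char aHdr[100];
      int fd = open(argc>1 ? argv[1] : "test.db", O_RDONLY | O_CREAT, 0644);
      if( fd<0 ) return 1;
      if( demoReadHeader(fd, aHdr, (int)sizeof(aHdr))==0 ){
        printf("first header byte: 0x%02x\n", aHdr[0]);
      }
      close(fd);
      return 0;
    }
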
Modified: freeswitch/trunk/libs/sqlite/src/pager.h
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/pager.h	(original)
+++ freeswitch/trunk/libs/sqlite/src/pager.h	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 ** subsystem.  The page cache subsystem reads and writes a file a page
 ** at a time and provides a journal for rollback.
 **
-** @(#) $Id: pager.h,v 1.51 2006/08/08 13:51:43 drh Exp $
+** @(#) $Id: pager.h,v 1.52 2006/11/06 21:20:26 drh Exp $
 */
 
 #ifndef _PAGER_H_
@@ -75,7 +75,7 @@
 void sqlite3pager_set_destructor(Pager*, void(*)(void*,int));
 void sqlite3pager_set_reiniter(Pager*, void(*)(void*,int));
 int sqlite3pager_set_pagesize(Pager*, int);
-void sqlite3pager_read_fileheader(Pager*, int, unsigned char*);
+int sqlite3pager_read_fileheader(Pager*, int, unsigned char*);
 void sqlite3pager_set_cachesize(Pager*, int);
 int sqlite3pager_close(Pager *pPager);
 int sqlite3pager_get(Pager *pPager, Pgno pgno, void **ppPage);

Modified: freeswitch/trunk/libs/sqlite/src/parse.y
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/parse.y	(original)
+++ freeswitch/trunk/libs/sqlite/src/parse.y	Thu Feb 22 17:09:42 2007
@@ -14,7 +14,7 @@
 ** the parser.  Lemon will also generate a header file containing
 ** numeric codes for all of the tokens.
 **
-** @(#) $Id: parse.y,v 1.210 2006/09/21 11:02:17 drh Exp $
+** @(#) $Id: parse.y,v 1.215 2007/02/02 12:44:37 drh Exp $
 */
 
 // All token codes are small integers with #defines that begin with "TK_"
@@ -205,6 +205,7 @@
 %left PLUS MINUS.
 %left STAR SLASH REM.
 %left CONCAT.
+%left COLLATE.
 %right UMINUS UPLUS BITNOT.
 
 // And "ids" is an identifier-or-string.
@@ -249,14 +250,14 @@
 carglist ::= .
 carg ::= CONSTRAINT nm ccons.
 carg ::= ccons.
-carg ::= DEFAULT term(X).            {sqlite3AddDefaultValue(pParse,X);}
-carg ::= DEFAULT LP expr(X) RP.      {sqlite3AddDefaultValue(pParse,X);}
-carg ::= DEFAULT PLUS term(X).       {sqlite3AddDefaultValue(pParse,X);}
-carg ::= DEFAULT MINUS term(X).      {
+ccons ::= DEFAULT term(X).            {sqlite3AddDefaultValue(pParse,X);}
+ccons ::= DEFAULT LP expr(X) RP.      {sqlite3AddDefaultValue(pParse,X);}
+ccons ::= DEFAULT PLUS term(X).       {sqlite3AddDefaultValue(pParse,X);}
+ccons ::= DEFAULT MINUS term(X).      {
   Expr *p = sqlite3Expr(TK_UMINUS, X, 0, 0);
   sqlite3AddDefaultValue(pParse,p);
 }
-carg ::= DEFAULT id(X).              {
+ccons ::= DEFAULT id(X).              {
   Expr *p = sqlite3Expr(TK_STRING, 0, 0, &X);
   sqlite3AddDefaultValue(pParse,p);
 }
@@ -444,7 +445,10 @@
 // A complete FROM clause.
 //
 from(A) ::= .                                 {A = sqliteMalloc(sizeof(*A));}
-from(A) ::= FROM seltablist(X).               {A = X;}
+from(A) ::= FROM seltablist(X).               {
+  A = X;
+  sqlite3SrcListShiftJoinType(A);
+}
 
 // "seltablist" is a "Select Table List" - the content of the FROM clause
 // in a SELECT statement.  "stl_prefix" is a prefix of this list.
@@ -455,31 +459,12 @@
 }
 stl_prefix(A) ::= .                           {A = 0;}
 seltablist(A) ::= stl_prefix(X) nm(Y) dbnm(D) as(Z) on_opt(N) using_opt(U). {
-  A = sqlite3SrcListAppend(X,&Y,&D);
-  if( Z.n ) sqlite3SrcListAddAlias(A,&Z);
-  if( N ){
-    if( A && A->nSrc>1 ){ A->a[A->nSrc-2].pOn = N; }
-    else { sqlite3ExprDelete(N); }
-  }
-  if( U ){
-    if( A && A->nSrc>1 ){ A->a[A->nSrc-2].pUsing = U; }
-    else { sqlite3IdListDelete(U); }
-  }
+  A = sqlite3SrcListAppendFromTerm(X,&Y,&D,&Z,0,N,U);
 }
 %ifndef SQLITE_OMIT_SUBQUERY
   seltablist(A) ::= stl_prefix(X) LP seltablist_paren(S) RP
                     as(Z) on_opt(N) using_opt(U). {
-    A = sqlite3SrcListAppend(X,0,0);
-    if( A && A->nSrc>0 ) A->a[A->nSrc-1].pSelect = S;
-    if( Z.n ) sqlite3SrcListAddAlias(A,&Z);
-    if( N ){
-      if( A && A->nSrc>1 ){ A->a[A->nSrc-2].pOn = N; }
-      else { sqlite3ExprDelete(N); }
-    }
-    if( U ){
-      if( A && A->nSrc>1 ){ A->a[A->nSrc-2].pUsing = U; }
-      else { sqlite3IdListDelete(U); }
-    }
+    A = sqlite3SrcListAppendFromTerm(X,0,0,&Z,S,N,U);
   }
   
   // A seltablist_paren nonterminal represents anything in a FROM that
@@ -490,6 +475,7 @@
   %destructor seltablist_paren {sqlite3SelectDelete($$);}
   seltablist_paren(A) ::= select(S).      {A = S;}
   seltablist_paren(A) ::= seltablist(F).  {
+     sqlite3SrcListShiftJoinType(F);
      A = sqlite3SelectNew(0,F,0,0,0,0,0,0,0);
   }
 %endif  SQLITE_OMIT_SUBQUERY
@@ -530,24 +516,21 @@
 
 orderby_opt(A) ::= .                          {A = 0;}
 orderby_opt(A) ::= ORDER BY sortlist(X).      {A = X;}
-sortlist(A) ::= sortlist(X) COMMA sortitem(Y) collate(C) sortorder(Z). {
-  A = sqlite3ExprListAppend(X,Y,C.n>0?&C:0);
+sortlist(A) ::= sortlist(X) COMMA sortitem(Y) sortorder(Z). {
+  A = sqlite3ExprListAppend(X,Y,0);
   if( A ) A->a[A->nExpr-1].sortOrder = Z;
 }
-sortlist(A) ::= sortitem(Y) collate(C) sortorder(Z). {
-  A = sqlite3ExprListAppend(0,Y,C.n>0?&C:0);
+sortlist(A) ::= sortitem(Y) sortorder(Z). {
+  A = sqlite3ExprListAppend(0,Y,0);
   if( A && A->a ) A->a[0].sortOrder = Z;
 }
 sortitem(A) ::= expr(X).   {A = X;}
 
 %type sortorder {int}
-%type collate {Token}
 
 sortorder(A) ::= ASC.           {A = SQLITE_SO_ASC;}
 sortorder(A) ::= DESC.          {A = SQLITE_SO_DESC;}
 sortorder(A) ::= .              {A = SQLITE_SO_ASC;}
-collate(C) ::= .                {C.z = 0; C.n = 0;}
-collate(C) ::= COLLATE id(X).   {C = X;}
 
 %type groupby_opt {ExprList*}
 %destructor groupby_opt {sqlite3ExprListDelete($$);}
@@ -657,6 +640,9 @@
   Expr *pExpr = A = sqlite3Expr(TK_VARIABLE, 0, 0, pToken);
   sqlite3ExprAssignVarNumber(pParse, pExpr);
 }
+expr(A) ::= expr(E) COLLATE id(C). {
+  A = sqlite3ExprSetColl(pParse, E, &C);
+}
 %ifndef SQLITE_OMIT_CAST
 expr(A) ::= CAST(X) LP expr(E) AS typetoken(T) RP(Y). {
   A = sqlite3Expr(TK_CAST, E, 0, &T);
@@ -892,6 +878,10 @@
 }
 idxitem(A) ::= nm(X).              {A = X;}
 
+%type collate {Token}
+collate(C) ::= .                {C.z = 0; C.n = 0;}
+collate(C) ::= COLLATE id(X).   {C = X;}
+
 
 ///////////////////////////// The DROP INDEX command /////////////////////////
 //
@@ -907,14 +897,15 @@
 ///////////////////////////// The PRAGMA command /////////////////////////////
 //
 %ifndef SQLITE_OMIT_PRAGMA
-cmd ::= PRAGMA nm(X) dbnm(Z) EQ nm(Y).  {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
+cmd ::= PRAGMA nm(X) dbnm(Z) EQ nmnum(Y).  {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
 cmd ::= PRAGMA nm(X) dbnm(Z) EQ ON(Y).  {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
-cmd ::= PRAGMA nm(X) dbnm(Z) EQ plus_num(Y). {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
 cmd ::= PRAGMA nm(X) dbnm(Z) EQ minus_num(Y). {
   sqlite3Pragma(pParse,&X,&Z,&Y,1);
 }
-cmd ::= PRAGMA nm(X) dbnm(Z) LP nm(Y) RP. {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
+cmd ::= PRAGMA nm(X) dbnm(Z) LP nmnum(Y) RP. {sqlite3Pragma(pParse,&X,&Z,&Y,0);}
 cmd ::= PRAGMA nm(X) dbnm(Z).             {sqlite3Pragma(pParse,&X,&Z,0,0);}
+nmnum(A) ::= plus_num(X).             {A = X;}
+nmnum(A) ::= nm(X).                   {A = X;}
 %endif SQLITE_OMIT_PRAGMA
 plus_num(A) ::= plus_opt number(X).   {A = X;}
 minus_num(A) ::= MINUS number(X).     {A = X;}

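With the grammar change above, COLLATE is now an ordinary left-associative operator on expressions rather than a special token recognized only in ORDER BY, so a collation can be attached wherever an expression appears. A small sketch of what that allows at the SQL level (the table and data are made up; error handling trimmed):

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void){
      sqlite3 *db;
      char *zErr = 0;
      if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
      sqlite3_exec(db, "CREATE TABLE t(name TEXT);", 0, 0, 0);
      sqlite3_exec(db, "INSERT INTO t VALUES('b');", 0, 0, 0);
      sqlite3_exec(db, "INSERT INTO t VALUES('A');", 0, 0, 0);

      /* COLLATE applied to expressions in both WHERE and ORDER BY */
      sqlite3_exec(db,
        "SELECT name FROM t WHERE name = 'a' COLLATE NOCASE "
        "ORDER BY name COLLATE NOCASE;",
        0, 0, &zErr);
      if( zErr ){ fprintf(stderr, "%s\n", zErr); sqlite3_free(zErr); }
      sqlite3_close(db);
      return 0;
    }
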
Modified: freeswitch/trunk/libs/sqlite/src/pragma.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/pragma.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/pragma.c	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 *************************************************************************
 ** This file contains code used to implement the PRAGMA command.
 **
-** $Id: pragma.c,v 1.124 2006/09/25 18:01:57 drh Exp $
+** $Id: pragma.c,v 1.127 2007/01/27 02:24:56 drh Exp $
 */
 #include "sqliteInt.h"
 #include "os.h"
@@ -483,14 +483,12 @@
       sqlite3ViewGetColumnNames(pParse, pTab);
       for(i=0, pCol=pTab->aCol; i<pTab->nCol; i++, pCol++){
         const Token *pDflt;
-        static const Token noDflt =  { (unsigned char*)"", 0, 0 };
         sqlite3VdbeAddOp(v, OP_Integer, i, 0);
         sqlite3VdbeOp3(v, OP_String8, 0, 0, pCol->zName, 0);
         sqlite3VdbeOp3(v, OP_String8, 0, 0,
            pCol->zType ? pCol->zType : "", 0);
         sqlite3VdbeAddOp(v, OP_Integer, pCol->notNull, 0);
-        pDflt = pCol->pDflt ? &pCol->pDflt->span : &noDflt;
-        if( pDflt->z ){
+        if( pCol->pDflt && (pDflt = &pCol->pDflt->span)->z ){
           sqlite3VdbeOp3(v, OP_String8, 0, 0, (char*)pDflt->z, pDflt->n);
         }else{
           sqlite3VdbeAddOp(v, OP_Null, 0, 0);
@@ -642,9 +640,13 @@
     }
   }else
 
+#ifndef SQLITE_INTEGRITY_CHECK_ERROR_MAX
+# define SQLITE_INTEGRITY_CHECK_ERROR_MAX 100
+#endif
+
 #ifndef SQLITE_OMIT_INTEGRITY_CHECK
   if( sqlite3StrICmp(zLeft, "integrity_check")==0 ){
-    int i, j, addr;
+    int i, j, addr, mxErr;
 
     /* Code that appears at the end of the integrity check.  If no error
     ** messages have been generated, output OK.  Otherwise output the
@@ -662,7 +664,16 @@
     if( sqlite3ReadSchema(pParse) ) goto pragma_out;
     sqlite3VdbeSetNumCols(v, 1);
     sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "integrity_check", P3_STATIC);
-    sqlite3VdbeAddOp(v, OP_MemInt, 0, 0);  /* Initialize error count to 0 */
+
+    /* Set the maximum error count */
+    mxErr = SQLITE_INTEGRITY_CHECK_ERROR_MAX;
+    if( zRight ){
+      mxErr = atoi(zRight);
+      if( mxErr<=0 ){
+        mxErr = SQLITE_INTEGRITY_CHECK_ERROR_MAX;
+      }
+    }
+    sqlite3VdbeAddOp(v, OP_MemInt, mxErr, 0);
 
     /* Do an integrity check on each database file */
     for(i=0; i<db->nDb; i++){
@@ -673,6 +684,9 @@
       if( OMIT_TEMPDB && i==1 ) continue;
 
       sqlite3CodeVerifySchema(pParse, i);
+      addr = sqlite3VdbeAddOp(v, OP_IfMemPos, 0, 0);
+      sqlite3VdbeAddOp(v, OP_Halt, 0, 0);
+      sqlite3VdbeJumpHere(v, addr);
 
       /* Do an integrity check of the B-Tree
       */
@@ -687,28 +701,28 @@
           cnt++;
         }
       }
-      assert( cnt>0 );
-      sqlite3VdbeAddOp(v, OP_IntegrityCk, cnt, i);
-      sqlite3VdbeAddOp(v, OP_Dup, 0, 1);
-      addr = sqlite3VdbeOp3(v, OP_String8, 0, 0, "ok", P3_STATIC);
-      sqlite3VdbeAddOp(v, OP_Eq, 0, addr+7);
+      if( cnt==0 ) continue;
+      sqlite3VdbeAddOp(v, OP_IntegrityCk, 0, i);
+      addr = sqlite3VdbeAddOp(v, OP_IsNull, -1, 0);
       sqlite3VdbeOp3(v, OP_String8, 0, 0,
          sqlite3MPrintf("*** in database %s ***\n", db->aDb[i].zName),
          P3_DYNAMIC);
       sqlite3VdbeAddOp(v, OP_Pull, 1, 0);
-      sqlite3VdbeAddOp(v, OP_Concat, 0, 1);
+      sqlite3VdbeAddOp(v, OP_Concat, 0, 0);
       sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
-      sqlite3VdbeAddOp(v, OP_MemIncr, 1, 0);
+      sqlite3VdbeJumpHere(v, addr);
 
       /* Make sure all the indices are constructed correctly.
       */
-      sqlite3CodeVerifySchema(pParse, i);
       for(x=sqliteHashFirst(pTbls); x; x=sqliteHashNext(x)){
         Table *pTab = sqliteHashData(x);
         Index *pIdx;
         int loopTop;
 
         if( pTab->pIndex==0 ) continue;
+        addr = sqlite3VdbeAddOp(v, OP_IfMemPos, 0, 0);
+        sqlite3VdbeAddOp(v, OP_Halt, 0, 0);
+        sqlite3VdbeJumpHere(v, addr);
         sqlite3OpenTableAndIndices(pParse, pTab, 1, OP_OpenRead);
         sqlite3VdbeAddOp(v, OP_MemInt, 0, 1);
         loopTop = sqlite3VdbeAddOp(v, OP_Rewind, 1, 0);
@@ -716,7 +730,7 @@
         for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){
           int jmp2;
           static const VdbeOpList idxErr[] = {
-            { OP_MemIncr,     1,  0,  0},
+            { OP_MemIncr,    -1,  0,  0},
             { OP_String8,     0,  0,  "rowid "},
             { OP_Rowid,       1,  0,  0},
             { OP_String8,     0,  0,  " missing from index "},
@@ -741,13 +755,16 @@
              { OP_MemLoad,      1,  0,  0},
              { OP_MemLoad,      2,  0,  0},
              { OP_Eq,           0,  0,  0},  /* 6 */
-             { OP_MemIncr,      1,  0,  0},
+             { OP_MemIncr,     -1,  0,  0},
              { OP_String8,      0,  0,  "wrong # of entries in index "},
              { OP_String8,      0,  0,  0},  /* 9 */
              { OP_Concat,       0,  0,  0},
              { OP_Callback,     1,  0,  0},
           };
           if( pIdx->tnum==0 ) continue;
+          addr = sqlite3VdbeAddOp(v, OP_IfMemPos, 0, 0);
+          sqlite3VdbeAddOp(v, OP_Halt, 0, 0);
+          sqlite3VdbeJumpHere(v, addr);
           addr = sqlite3VdbeAddOpList(v, ArraySize(cntIdx), cntIdx);
           sqlite3VdbeChangeP1(v, addr+1, j+2);
           sqlite3VdbeChangeP2(v, addr+1, addr+4);
@@ -759,6 +776,7 @@
       } 
     }
     addr = sqlite3VdbeAddOpList(v, ArraySize(endCode), endCode);
+    sqlite3VdbeChangeP1(v, addr+1, mxErr);
     sqlite3VdbeJumpHere(v, addr+2);
   }else
 #endif /* SQLITE_OMIT_INTEGRITY_CHECK */
@@ -896,6 +914,7 @@
       sqlite3VdbeChangeP1(v, addr, iDb);
       sqlite3VdbeChangeP2(v, addr, iCookie);
       sqlite3VdbeSetNumCols(v, 1);
+      sqlite3VdbeSetColName(v, 0, COLNAME_NAME, zLeft, P3_TRANSIENT);
     }
   }
 #endif /* SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS */

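The nmnum production added to parse.y and the mxErr logic above let PRAGMA integrity_check accept a bare number that caps how many problems are reported (the default remains SQLITE_INTEGRITY_CHECK_ERROR_MAX, i.e. 100). A minimal usage sketch, assuming a database file named test.db already exists:

    #include <sqlite3.h>
    #include <stdio.h>

    static int printRow(void *pArg, int nCol, char **azVal, char **azCol){
      (void)pArg; (void)nCol; (void)azCol;
      printf("%s\n", azVal[0] ? azVal[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      if( sqlite3_open("test.db", &db)!=SQLITE_OK ) return 1;
      /* Stop after at most 10 reported problems instead of the default 100. */
      sqlite3_exec(db, "PRAGMA integrity_check(10);", printRow, 0, 0);
      sqlite3_close(db);
      return 0;
    }
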
Modified: freeswitch/trunk/libs/sqlite/src/prepare.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/prepare.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/prepare.c	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 ** interface, and routines that contribute to loading the database schema
 ** from disk.
 **
-** $Id: prepare.c,v 1.40 2006/09/23 20:36:02 drh Exp $
+** $Id: prepare.c,v 1.43 2007/01/09 14:01:13 drh Exp $
 */
 #include "sqliteInt.h"
 #include "os.h"
@@ -445,12 +445,13 @@
 /*
 ** Compile the UTF-8 encoded SQL statement zSql into a statement handle.
 */
-int sqlite3_prepare(
+int sqlite3Prepare(
   sqlite3 *db,              /* Database handle. */
   const char *zSql,         /* UTF-8 encoded SQL statement. */
   int nBytes,               /* Length of zSql in bytes. */
+  int saveSqlFlag,          /* True to copy SQL text into the sqlite3_stmt */
   sqlite3_stmt **ppStmt,    /* OUT: A pointer to the prepared statement */
-  const char** pzTail       /* OUT: End of parsed string */
+  const char **pzTail       /* OUT: End of parsed string */
 ){
   Parse sParse;
   char *zErrMsg = 0;
@@ -503,7 +504,9 @@
   if( sqlite3MallocFailed() ){
     sParse.rc = SQLITE_NOMEM;
   }
-  if( pzTail ) *pzTail = sParse.zTail;
+  if( pzTail ){
+    *pzTail = sParse.zTail;
+  }
   rc = sParse.rc;
 
 #ifndef SQLITE_OMIT_EXPLAIN
@@ -521,13 +524,16 @@
       sqlite3VdbeSetColName(sParse.pVdbe, 3, COLNAME_NAME, "p2", P3_STATIC);
       sqlite3VdbeSetColName(sParse.pVdbe, 4, COLNAME_NAME, "p3", P3_STATIC);
     }
-  } 
+  }
 #endif
 
   if( sqlite3SafetyOff(db) ){
     rc = SQLITE_MISUSE;
   }
   if( rc==SQLITE_OK ){
+    if( saveSqlFlag ){
+      sqlite3VdbeSetSql(sParse.pVdbe, zSql, sParse.zTail - zSql);
+    }
     *ppStmt = (sqlite3_stmt*)sParse.pVdbe;
   }else if( sParse.pVdbe ){
     sqlite3_finalize((sqlite3_stmt*)sParse.pVdbe);
@@ -546,14 +552,74 @@
   return rc;
 }
 
+/*
+** Rerun the compilation of a statement after a schema change.
+** Return true if the statement was recompiled successfully.
+** Return false if there is an error of some kind.
+*/
+int sqlite3Reprepare(Vdbe *p){
+  int rc;
+  Vdbe *pNew;
+  const char *zSql;
+  sqlite3 *db;
+  
+  zSql = sqlite3VdbeGetSql(p);
+  if( zSql==0 ){
+    return 0;
+  }
+  db = sqlite3VdbeDb(p);
+  rc = sqlite3Prepare(db, zSql, -1, 0, (sqlite3_stmt**)&pNew, 0);
+  if( rc ){
+    assert( pNew==0 );
+    return 0;
+  }else{
+    assert( pNew!=0 );
+  }
+  sqlite3VdbeSwap(pNew, p);
+  sqlite3_transfer_bindings((sqlite3_stmt*)pNew, (sqlite3_stmt*)p);
+  sqlite3VdbeResetStepResult(pNew);
+  sqlite3VdbeFinalize(pNew);
+  return 1;
+}
+
+
+/*
+** Two versions of the official API.  Legacy and new use.  In the legacy
+** version, the original SQL text is not saved in the prepared statement
+** and so if a schema change occurs, SQLITE_SCHEMA is returned by
+** sqlite3_step().  In the new version, the original SQL text is retained
+** and the statement is automatically recompiled if a schema change
+** occurs.
+*/
+int sqlite3_prepare(
+  sqlite3 *db,              /* Database handle. */
+  const char *zSql,         /* UTF-8 encoded SQL statement. */
+  int nBytes,               /* Length of zSql in bytes. */
+  sqlite3_stmt **ppStmt,    /* OUT: A pointer to the prepared statement */
+  const char **pzTail       /* OUT: End of parsed string */
+){
+  return sqlite3Prepare(db,zSql,nBytes,0,ppStmt,pzTail);
+}
+int sqlite3_prepare_v2(
+  sqlite3 *db,              /* Database handle. */
+  const char *zSql,         /* UTF-8 encoded SQL statement. */
+  int nBytes,               /* Length of zSql in bytes. */
+  sqlite3_stmt **ppStmt,    /* OUT: A pointer to the prepared statement */
+  const char **pzTail       /* OUT: End of parsed string */
+){
+  return sqlite3Prepare(db,zSql,nBytes,1,ppStmt,pzTail);
+}
+
+
 #ifndef SQLITE_OMIT_UTF16
 /*
 ** Compile the UTF-16 encoded SQL statement zSql into a statement handle.
 */
-int sqlite3_prepare16(
+static int sqlite3Prepare16(
   sqlite3 *db,              /* Database handle. */ 
   const void *zSql,         /* UTF-16 encoded SQL statement. */
   int nBytes,               /* Length of zSql in bytes. */
+  int saveSqlFlag,          /* True to save SQL text into the sqlite3_stmt */
   sqlite3_stmt **ppStmt,    /* OUT: A pointer to the prepared statement */
   const void **pzTail       /* OUT: End of parsed string */
 ){
@@ -570,7 +636,7 @@
   }
   zSql8 = sqlite3utf16to8(zSql, nBytes);
   if( zSql8 ){
-    rc = sqlite3_prepare(db, zSql8, -1, ppStmt, &zTail8);
+    rc = sqlite3Prepare(db, zSql8, -1, saveSqlFlag, ppStmt, &zTail8);
   }
 
   if( zTail8 && pzTail ){
@@ -585,4 +651,32 @@
   sqliteFree(zSql8); 
   return sqlite3ApiExit(db, rc);
 }
+
+/*
+** Two versions of the official API.  Legacy and new use.  In the legacy
+** version, the original SQL text is not saved in the prepared statement
+** and so if a schema change occurs, SQLITE_SCHEMA is returned by
+** sqlite3_step().  In the new version, the original SQL text is retained
+** and the statement is automatically recompiled if a schema change
+** occurs.
+*/
+int sqlite3_prepare16(
+  sqlite3 *db,              /* Database handle. */ 
+  const void *zSql,         /* UTF-16 encoded SQL statement. */
+  int nBytes,               /* Length of zSql in bytes. */
+  sqlite3_stmt **ppStmt,    /* OUT: A pointer to the prepared statement */
+  const void **pzTail       /* OUT: End of parsed string */
+){
+  return sqlite3Prepare16(db,zSql,nBytes,0,ppStmt,pzTail);
+}
+int sqlite3_prepare16_v2(
+  sqlite3 *db,              /* Database handle. */ 
+  const void *zSql,         /* UTF-16 encoded SQL statement. */
+  int nBytes,               /* Length of zSql in bytes. */
+  sqlite3_stmt **ppStmt,    /* OUT: A pointer to the prepared statement */
+  const void **pzTail       /* OUT: End of parsed string */
+){
+  return sqlite3Prepare16(db,zSql,nBytes,1,ppStmt,pzTail);
+}
+
 #endif /* SQLITE_OMIT_UTF16 */

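In practice the new _v2 interfaces differ from the legacy ones only in that the original SQL text is kept with the statement, so sqlite3_step() can re-prepare transparently after a schema change instead of returning SQLITE_SCHEMA to the caller. A short sketch of that behaviour:

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void){
      sqlite3 *db;
      sqlite3_stmt *pStmt = 0;
      if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
      sqlite3_exec(db, "CREATE TABLE t(x);", 0, 0, 0);
      sqlite3_prepare_v2(db, "SELECT count(*) FROM t", -1, &pStmt, 0);

      /* A schema change between prepare and step ... */
      sqlite3_exec(db, "CREATE INDEX i ON t(x);", 0, 0, 0);

      /* ... is absorbed by an automatic recompile.  With the legacy
      ** sqlite3_prepare() this step would have returned SQLITE_SCHEMA. */
      if( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("count = %d\n", sqlite3_column_int(pStmt, 0));
      }
      sqlite3_finalize(pStmt);
      sqlite3_close(db);
      return 0;
    }
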
Modified: freeswitch/trunk/libs/sqlite/src/printf.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/printf.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/printf.c	Thu Feb 22 17:09:42 2007
@@ -857,7 +857,7 @@
   va_start(ap, zFormat);
   base_vprintf(0, 0, zBuf, sizeof(zBuf), zFormat, ap);
   va_end(ap);
-  fprintf(stdout,"%d: %s", getpid(), zBuf);
+  fprintf(stdout,"%s", zBuf);
   fflush(stdout);
 }
 #endif

Modified: freeswitch/trunk/libs/sqlite/src/random.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/random.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/random.c	Thu Feb 22 17:09:42 2007
@@ -15,7 +15,7 @@
 ** Random numbers are used by some of the database backends in order
 ** to generate random integer keys for tables or random filenames.
 **
-** $Id: random.c,v 1.15 2006/01/06 14:32:20 drh Exp $
+** $Id: random.c,v 1.16 2007/01/05 14:38:56 drh Exp $
 */
 #include "sqliteInt.h"
 #include "os.h"
@@ -37,7 +37,7 @@
 ** (Later):  Actually, OP_NewRowid does not depend on a good source of
 ** randomness any more.  But we will leave this code in all the same.
 */
-static int randomByte(){
+static int randomByte(void){
   unsigned char t;
 
   /* All threads share a single random number generator.

Modified: freeswitch/trunk/libs/sqlite/src/select.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/select.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/select.c	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 ** This file contains C code routines that are called by the parser
 ** to handle SELECT statements in SQLite.
 **
-** $Id: select.c,v 1.321 2006/09/29 14:01:05 drh Exp $
+** $Id: select.c,v 1.326 2007/02/01 23:02:45 drh Exp $
 */
 #include "sqliteInt.h"
 
@@ -301,8 +301,8 @@
     /* When the NATURAL keyword is present, add WHERE clause terms for
     ** every column that the two tables have in common.
     */
-    if( pLeft->jointype & JT_NATURAL ){
-      if( pLeft->pOn || pLeft->pUsing ){
+    if( pRight->jointype & JT_NATURAL ){
+      if( pRight->pOn || pRight->pUsing ){
         sqlite3ErrorMsg(pParse, "a NATURAL join may not have "
            "an ON or USING clause", 0);
         return 1;
@@ -320,7 +320,7 @@
 
     /* Disallow both ON and USING clauses in the same join
     */
-    if( pLeft->pOn && pLeft->pUsing ){
+    if( pRight->pOn && pRight->pUsing ){
       sqlite3ErrorMsg(pParse, "cannot have both ON and USING "
         "clauses in the same join");
       return 1;
@@ -329,10 +329,10 @@
     /* Add the ON clause to the end of the WHERE clause, connected by
     ** an AND operator.
     */
-    if( pLeft->pOn ){
-      setJoinExpr(pLeft->pOn, pRight->iCursor);
-      p->pWhere = sqlite3ExprAnd(p->pWhere, pLeft->pOn);
-      pLeft->pOn = 0;
+    if( pRight->pOn ){
+      setJoinExpr(pRight->pOn, pRight->iCursor);
+      p->pWhere = sqlite3ExprAnd(p->pWhere, pRight->pOn);
+      pRight->pOn = 0;
     }
 
     /* Create extra terms on the WHERE clause for each column named
@@ -342,8 +342,8 @@
     ** Report an error if any column mentioned in the USING clause is
     ** not contained in both tables to be joined.
     */
-    if( pLeft->pUsing ){
-      IdList *pList = pLeft->pUsing;
+    if( pRight->pUsing ){
+      IdList *pList = pRight->pUsing;
       for(j=0; j<pList->nId; j++){
         char *zName = pList->a[j].zName;
         if( columnIndex(pLeftTab, zName)<0 || columnIndex(pRightTab, zName)<0 ){
@@ -1309,13 +1309,13 @@
 
             if( i>0 ){
               struct SrcList_item *pLeft = &pTabList->a[i-1];
-              if( (pLeft->jointype & JT_NATURAL)!=0 &&
+              if( (pLeft[1].jointype & JT_NATURAL)!=0 &&
                         columnIndex(pLeft->pTab, zName)>=0 ){
                 /* In a NATURAL join, omit the join columns from the 
                 ** table on the right */
                 continue;
               }
-              if( sqlite3IdListIndex(pLeft->pUsing, zName)>=0 ){
+              if( sqlite3IdListIndex(pLeft[1].pUsing, zName)>=0 ){
                 /* In a join with a USING clause, omit columns in the
                 ** using clause from the table on the right. */
                 continue;
@@ -1936,6 +1936,7 @@
         }
         sqlite3VdbeChangeP2(v, addr, nCol);
         sqlite3VdbeChangeP3(v, addr, (char*)pKeyInfo, P3_KEYINFO);
+        pLoop->addrOpenEphm[i] = -1;
       }
     }
 
@@ -1951,10 +1952,9 @@
       apColl = pKeyInfo->aColl;
       for(i=0; i<nOrderByExpr; i++, pOTerm++, apColl++, pSortOrder++){
         Expr *pExpr = pOTerm->pExpr;
-        char *zName = pOTerm->zName;
-        assert( pExpr->op==TK_COLUMN && pExpr->iColumn<nCol );
-        if( zName ){
-          *apColl = sqlite3LocateCollSeq(pParse, zName, -1);
+        if( (pExpr->flags & EP_ExpCollate) ){
+          assert( pExpr->pColl!=0 );
+          *apColl = pExpr->pColl;
         }else{
           *apColl = aCopy[pExpr->iColumn];
         }
@@ -2175,7 +2175,7 @@
   **
   ** which is not at all the same thing.
   */
-  if( pSubSrc->nSrc>1 && iFrom>0 && (pSrc->a[iFrom-1].jointype & JT_OUTER)!=0 ){
+  if( pSubSrc->nSrc>1 && (pSubitem->jointype & JT_OUTER)!=0 ){
     return 0;
   }
 
@@ -2192,8 +2192,7 @@
   ** But the t2.x>0 test will always fail on a NULL row of t2, which
   ** effectively converts the OUTER JOIN into an INNER JOIN.
   */
-  if( iFrom>0 && (pSrc->a[iFrom-1].jointype & JT_OUTER)!=0 
-      && pSub->pWhere!=0 ){
+  if( (pSubitem->jointype & JT_OUTER)!=0 && pSub->pWhere!=0 ){
     return 0;
   }
 
@@ -2232,7 +2231,7 @@
       pSrc->a[i+iFrom] = pSubSrc->a[i];
       memset(&pSubSrc->a[i], 0, sizeof(pSubSrc->a[i]));
     }
-    pSrc->a[iFrom+nSubSrc-1].jointype = jointype;
+    pSrc->a[iFrom].jointype = jointype;
   }
 
   /* Now begin substituting subquery result set expressions for 
@@ -2478,8 +2477,14 @@
     Expr *pE = pOrderBy->a[i].pExpr;
     if( sqlite3ExprIsInteger(pE, &iCol) ){
       if( iCol>0 && iCol<=pEList->nExpr ){
+        CollSeq *pColl = pE->pColl;
+        int flags = pE->flags & EP_ExpCollate;
         sqlite3ExprDelete(pE);
         pE = pOrderBy->a[i].pExpr = sqlite3ExprDup(pEList->a[iCol-1].pExpr);
+        if( pColl && flags ){
+          pE->pColl = pColl;
+          pE->flags |= flags;
+        }
       }else{
         sqlite3ErrorMsg(pParse, 
            "%s BY column number %d out of range - should be "
@@ -2605,7 +2610,14 @@
     }
   }
 
-  return SQLITE_OK;
+  /* If this is one SELECT of a compound, be sure to resolve names
+  ** in the other SELECTs.
+  */
+  if( p->pPrior ){
+    return sqlite3SelectResolve(pParse, p->pPrior, pOuterNC);
+  }else{
+    return SQLITE_OK;
+  }
 }
 
 /*
@@ -2907,23 +2919,15 @@
   }
 #endif
 
-  /* If there is an ORDER BY clause, resolve any collation sequences
-  ** names that have been explicitly specified and create a sorting index.
-  **
-  ** This sorting index might end up being unused if the data can be 
+  /* If there is an ORDER BY clause, then this sorting
+  ** index might end up being unused if the data can be 
   ** extracted in pre-sorted order.  If that is the case, then the
   ** OP_OpenEphemeral instruction will be changed to an OP_Noop once
   ** we figure out that the sorting index is not needed.  The addrSortIndex
   ** variable is used to facilitate that change.
   */
   if( pOrderBy ){
-    struct ExprList_item *pTerm;
     KeyInfo *pKeyInfo;
-    for(i=0, pTerm=pOrderBy->a; i<pOrderBy->nExpr; i++, pTerm++){
-      if( pTerm->zName ){
-        pTerm->pExpr->pColl = sqlite3LocateCollSeq(pParse, pTerm->zName, -1);
-      }
-    }
     if( pParse->nErr ){
       goto select_end;
     }
@@ -3293,3 +3297,99 @@
   sqliteFree(sAggInfo.aFunc);
   return rc;
 }
+
+#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG)
+/*
+*******************************************************************************
+** The following code is used for testing and debugging only.  The code
+** that follows does not appear in normal builds.
+**
+** These routines are used to print out the content of all or part of a
+** parse structure such as Select or Expr.  Such printouts are useful
+** for helping to understand what is happening inside the code generator
+** during the execution of complex SELECT statements.
+**
+** These routines are not called anywhere from within the normal
+** code base.  They are intended to be called from within the debugger
+** or from temporary "printf" statements inserted for debugging.
+*/
+void sqlite3PrintExpr(Expr *p){
+  if( p->token.z && p->token.n>0 ){
+    sqlite3DebugPrintf("(%.*s", p->token.n, p->token.z);
+  }else{
+    sqlite3DebugPrintf("(%d", p->op);
+  }
+  if( p->pLeft ){
+    sqlite3DebugPrintf(" ");
+    sqlite3PrintExpr(p->pLeft);
+  }
+  if( p->pRight ){
+    sqlite3DebugPrintf(" ");
+    sqlite3PrintExpr(p->pRight);
+  }
+  sqlite3DebugPrintf(")");
+}
+void sqlite3PrintExprList(ExprList *pList){
+  int i;
+  for(i=0; i<pList->nExpr; i++){
+    sqlite3PrintExpr(pList->a[i].pExpr);
+    if( i<pList->nExpr-1 ){
+      sqlite3DebugPrintf(", ");
+    }
+  }
+}
+void sqlite3PrintSelect(Select *p, int indent){
+  sqlite3DebugPrintf("%*sSELECT(%p) ", indent, "", p);
+  sqlite3PrintExprList(p->pEList);
+  sqlite3DebugPrintf("\n");
+  if( p->pSrc ){
+    char *zPrefix;
+    int i;
+    zPrefix = "FROM";
+    for(i=0; i<p->pSrc->nSrc; i++){
+      struct SrcList_item *pItem = &p->pSrc->a[i];
+      sqlite3DebugPrintf("%*s ", indent+6, zPrefix);
+      zPrefix = "";
+      if( pItem->pSelect ){
+        sqlite3DebugPrintf("(\n");
+        sqlite3PrintSelect(pItem->pSelect, indent+10);
+        sqlite3DebugPrintf("%*s)", indent+8, "");
+      }else if( pItem->zName ){
+        sqlite3DebugPrintf("%s", pItem->zName);
+      }
+      if( pItem->pTab ){
+        sqlite3DebugPrintf("(table: %s)", pItem->pTab->zName);
+      }
+      if( pItem->zAlias ){
+        sqlite3DebugPrintf(" AS %s", pItem->zAlias);
+      }
+      if( i<p->pSrc->nSrc-1 ){
+        sqlite3DebugPrintf(",");
+      }
+      sqlite3DebugPrintf("\n");
+    }
+  }
+  if( p->pWhere ){
+    sqlite3DebugPrintf("%*s WHERE ", indent, "");
+    sqlite3PrintExpr(p->pWhere);
+    sqlite3DebugPrintf("\n");
+  }
+  if( p->pGroupBy ){
+    sqlite3DebugPrintf("%*s GROUP BY ", indent, "");
+    sqlite3PrintExprList(p->pGroupBy);
+    sqlite3DebugPrintf("\n");
+  }
+  if( p->pHaving ){
+    sqlite3DebugPrintf("%*s HAVING ", indent, "");
+    sqlite3PrintExpr(p->pHaving);
+    sqlite3DebugPrintf("\n");
+  }
+  if( p->pOrderBy ){
+    sqlite3DebugPrintf("%*s ORDER BY ", indent, "");
+    sqlite3PrintExprList(p->pOrderBy);
+    sqlite3DebugPrintf("\n");
+  }
+}
+/* End of the structure debug printing code
+*****************************************************************************/
+#endif /* defined(SQLITE_TEST) || defined(SQLITE_DEBUG) */
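
A minimal sketch of how these debug helpers might be used while chasing a
code-generator problem (the call site is an assumption for illustration, not
part of this commit):

    /* temporary debugging aid: drop into any routine that holds a Select*,
    ** e.g. near the top of sqlite3Select(), and remove before committing */
    #ifdef SQLITE_DEBUG
      sqlite3PrintSelect(p, 2);        /* dump the whole SELECT tree, indented */
      sqlite3DebugPrintf("----\n");    /* separator between successive dumps */
    #endif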

Modified: freeswitch/trunk/libs/sqlite/src/shell.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/shell.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/shell.c	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 ** This file contains code to implement the "sqlite" command line
 ** utility for accessing SQLite databases.
 **
-** $Id: shell.c,v 1.150 2006/09/25 13:09:23 drh Exp $
+** $Id: shell.c,v 1.158 2007/01/08 14:31:36 drh Exp $
 */
 #include <stdlib.h>
 #include <string.h>
@@ -61,6 +61,18 @@
 #endif
 
 /*
+** If the following flag is set, then command execution stops
+** at an error if we are not interactive.
+*/
+static int bail_on_error = 0;
+
+/*
+** Treat stdin as interactive input if the following variable
+** is true.  Otherwise, assume stdin is connected to a file or pipe.
+*/
+static int stdin_is_interactive = 1;
+
+/*
 ** The following is the open SQLite database.  We make a pointer
 ** to this database a static variable so that it can be accessed
 ** by the SIGINT handler to interrupt database processing.
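
The bail_on_error and stdin_is_interactive flags added in this hunk pair up
with the new -bail, -batch and -interactive options further down.  A
hypothetical batch run (file names are illustrative):

    sqlite3 -batch -bail test.db < populate.sql

With -batch the shell treats stdin as non-interactive even on a terminal, and
with -bail the first SQL error stops processing; the error count returned by
process_input() becomes the shell's exit status.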
@@ -184,10 +196,7 @@
 }
 
 /*
-** Retrieve a single line of input text.  "isatty" is true if text
-** is coming from a terminal.  In that case, we issue a prompt and
-** attempt to use "readline" for command-line editing.  If "isatty"
-** is false, use "local_getline" instead of "readline" and issue no prompt.
+** Retrieve a single line of input text.
 **
 ** zPrior is a string of prior text retrieved.  If not the empty
 ** string, then issue a continuation prompt.
@@ -216,6 +225,7 @@
   int showHeader;
   int colWidth[100];
 };
+
 /*
 ** A pointer to an instance of this structure is passed from
 ** the main program to the callback.  This is used to communicate
@@ -227,6 +237,7 @@
   int cnt;               /* Number of records displayed so far */
   FILE *out;             /* Write results here */
   int mode;              /* An output mode setting */
+  int writableSchema;    /* True if PRAGMA writable_schema=ON */
   int showHeader;        /* True to show column names in List or Column mode */
   char *zDestTable;      /* Name of destination table when MODE_Insert */
   char separator[20];    /* Separator character for MODE_List */
@@ -351,18 +362,56 @@
 }
 
 /*
+** If a field contains any character identified by a 1 in the following
+** array, then the string must be quoted for CSV.
+*/
+static const char needCsvQuote[] = {
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+  1, 0, 1, 0, 0, 0, 0, 1,   0, 0, 0, 0, 0, 0, 0, 0, 
+  0, 0, 0, 0, 0, 0, 0, 0,   0, 0, 0, 0, 0, 0, 0, 0, 
+  0, 0, 0, 0, 0, 0, 0, 0,   0, 0, 0, 0, 0, 0, 0, 0, 
+  0, 0, 0, 0, 0, 0, 0, 0,   0, 0, 0, 0, 0, 0, 0, 0, 
+  0, 0, 0, 0, 0, 0, 0, 0,   0, 0, 0, 0, 0, 0, 0, 0, 
+  0, 0, 0, 0, 0, 0, 0, 0,   0, 0, 0, 0, 0, 0, 0, 1, 
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+  1, 1, 1, 1, 1, 1, 1, 1,   1, 1, 1, 1, 1, 1, 1, 1,   
+};
+
+/*
 ** Output a single term of CSV.  Actually, p->separator is used for
 ** the separator, which may or may not be a comma.  p->nullvalue is
 ** the null value.  Strings that contain a character flagged in the
 ** needCsvQuote table are wrapped in double-quotes with embedded quotes
 ** doubled; all other values appear outside of quotes.
 */
 static void output_csv(struct callback_data *p, const char *z, int bSep){
+  FILE *out = p->out;
   if( z==0 ){
-    fprintf(p->out,"%s",p->nullvalue);
-  }else if( isNumber(z, 0) ){
-    fprintf(p->out,"%s",z);
+    fprintf(out,"%s",p->nullvalue);
   }else{
-    output_c_string(p->out, z);
+    int i;
+    for(i=0; z[i]; i++){
+      if( needCsvQuote[((unsigned char*)z)[i]] ){
+        i = 0;
+        break;
+      }
+    }
+    if( i==0 ){
+      putc('"', out);
+      for(i=0; z[i]; i++){
+        if( z[i]=='"' ) putc('"', out);
+        putc(z[i], out);
+      }
+      putc('"', out);
+    }else{
+      fprintf(out, "%s", z);
+    }
   }
   if( bSep ){
     fprintf(p->out, p->separator);
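
A worked example of the quoting loop above (the field value is hypothetical):
a field such as

    say "hi"

contains a space and a double-quote, both flagged in needCsvQuote, so it is
emitted as

    "say ""hi"""

with the whole field wrapped in quotes and the embedded quote doubled; a field
made up only of unflagged characters is written through unchanged.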
@@ -587,7 +636,7 @@
 ** If the third argument, quote, is not '\0', then it is used as a 
 ** quote character for zAppend.
 */
-static char * appendText(char *zIn, char const *zAppend, char quote){
+static char *appendText(char *zIn, char const *zAppend, char quote){
   int len;
   int i;
   int nAppend = strlen(zAppend);
@@ -628,6 +677,9 @@
 /*
 ** Execute a query statement that has a single result column.  Print
 ** that result column on a line by itself with a semicolon terminator.
+**
+** This is used, for example, to show the schema of the database by
+** querying the SQLITE_MASTER table.
 */
 static int run_table_dump_query(FILE *out, sqlite3 *db, const char *zSelect){
   sqlite3_stmt *pSelect;
@@ -669,6 +721,19 @@
     fprintf(p->out, "ANALYZE sqlite_master;\n");
   }else if( strncmp(zTable, "sqlite_", 7)==0 ){
     return 0;
+  }else if( strncmp(zSql, "CREATE VIRTUAL TABLE", 20)==0 ){
+    char *zIns;
+    if( !p->writableSchema ){
+      fprintf(p->out, "PRAGMA writable_schema=ON;\n");
+      p->writableSchema = 1;
+    }
+    zIns = sqlite3_mprintf(
+       "INSERT INTO sqlite_master(type,name,tbl_name,rootpage,sql)"
+       "VALUES('table','%q','%q',0,'%q');",
+       zTable, zTable, zSql);
+    fprintf(p->out, "%s\n", zIns);
+    sqlite3_free(zIns);
+    return 0;
   }else{
     fprintf(p->out, "%s;\n", zSql);
   }
@@ -702,7 +767,7 @@
       zSelect = appendText(zSelect, zText, '"');
       rc = sqlite3_step(pTableInfo);
       if( rc==SQLITE_ROW ){
-        zSelect = appendText(zSelect, ") || ', ' || ", 0);
+        zSelect = appendText(zSelect, ") || ',' || ", 0);
       }else{
         zSelect = appendText(zSelect, ") ", 0);
       }
@@ -721,15 +786,14 @@
       rc = run_table_dump_query(p->out, p->db, zSelect);
     }
     if( zSelect ) free(zSelect);
-    if( rc!=SQLITE_OK ){
-      return 1;
-    }
   }
   return 0;
 }
 
 /*
-** Run zQuery.  Update dump_callback() as the callback routine.
+** Run zQuery.  Use dump_callback() as the callback routine so that
+** the contents of the query are output as SQL statements.
+**
 ** If we get a SQLITE_CORRUPT error, rerun the query after appending
 ** "ORDER BY rowid DESC" to the end.
 */
@@ -757,6 +821,7 @@
 ** Text of a help message
 */
 static char zHelp[] =
+  ".bail ON|OFF           Stop after hitting an error.  Default OFF\n"
   ".databases             List names and files of attached databases\n"
   ".dump ?TABLE? ...      Dump the database in an SQL text format\n"
   ".echo ON|OFF           Turn command echo on or off\n"
@@ -793,7 +858,7 @@
 ;
 
 /* Forward reference */
-static void process_input(struct callback_data *p, FILE *in);
+static int process_input(struct callback_data *p, FILE *in);
 
 /*
 ** Make sure the database is open.  If it is not, then open it.  If
@@ -854,10 +919,27 @@
 }
 
 /*
+** Interpret zArg as a boolean value.  Return either 0 or 1.
+*/
+static int booleanValue(char *zArg){
+  int val = atoi(zArg);
+  int j;
+  for(j=0; zArg[j]; j++){
+    zArg[j] = tolower(zArg[j]);
+  }
+  if( strcmp(zArg,"on")==0 ){
+    val = 1;
+  }else if( strcmp(zArg,"yes")==0 ){
+    val = 1;
+  }
+  return val;
+}
+
+/*
 ** If an input line begins with "." then invoke this routine to
 ** process that line.
 **
-** Return 1 to exit and 0 to continue.
+** Return 1 on error, 2 to exit, and 0 otherwise.
 */
 static int do_meta_command(char *zLine, struct callback_data *p){
   int i = 1;
@@ -892,6 +974,10 @@
   if( nArg==0 ) return rc;
   n = strlen(azArg[0]);
   c = azArg[0][0];
+  if( c=='b' && n>1 && strncmp(azArg[0], "bail", n)==0 && nArg>1 ){
+    bail_on_error = booleanValue(azArg[1]);
+  }else
+
   if( c=='d' && n>1 && strncmp(azArg[0], "databases", n)==0 ){
     struct callback_data data;
     char *zErrMsg = 0;
@@ -914,19 +1000,15 @@
     char *zErrMsg = 0;
     open_db(p);
     fprintf(p->out, "BEGIN TRANSACTION;\n");
+    p->writableSchema = 0;
     if( nArg==1 ){
       run_schema_dump_query(p, 
         "SELECT name, type, sql FROM sqlite_master "
-        "WHERE sql NOT NULL AND type=='table' AND rootpage!=0", 0
-      );
-      run_schema_dump_query(p, 
-        "SELECT name, type, sql FROM sqlite_master "
-        "WHERE sql NOT NULL AND "
-        "  AND type!='table' AND type!='meta'", 0
+        "WHERE sql NOT NULL AND type=='table'", 0
       );
       run_table_dump_query(p->out, p->db,
         "SELECT sql FROM sqlite_master "
-        "WHERE sql NOT NULL AND rootpage==0 AND type='table'"
+        "WHERE sql NOT NULL AND type IN ('index','trigger','view')"
       );
     }else{
       int i;
@@ -935,19 +1017,20 @@
         run_schema_dump_query(p,
           "SELECT name, type, sql FROM sqlite_master "
           "WHERE tbl_name LIKE shellstatic() AND type=='table'"
-          "  AND rootpage!=0 AND sql NOT NULL", 0);
-        run_schema_dump_query(p,
-          "SELECT name, type, sql FROM sqlite_master "
-          "WHERE tbl_name LIKE shellstatic() AND type!='table'"
-          "  AND type!='meta' AND sql NOT NULL", 0);
+          "  AND sql NOT NULL", 0);
         run_table_dump_query(p->out, p->db,
           "SELECT sql FROM sqlite_master "
-          "WHERE sql NOT NULL AND rootpage==0 AND type='table'"
+          "WHERE sql NOT NULL"
+          "  AND type IN ('index','trigger','view')"
           "  AND tbl_name LIKE shellstatic()"
         );
         zShellStatic = 0;
       }
     }
+    if( p->writableSchema ){
+      fprintf(p->out, "PRAGMA writable_schema=OFF;\n");
+      p->writableSchema = 0;
+    }
     if( zErrMsg ){
       fprintf(stderr,"Error: %s\n", zErrMsg);
       sqlite3_free(zErrMsg);
@@ -957,37 +1040,15 @@
   }else
 
   if( c=='e' && strncmp(azArg[0], "echo", n)==0 && nArg>1 ){
-    int j;
-    char *z = azArg[1];
-    int val = atoi(azArg[1]);
-    for(j=0; z[j]; j++){
-      z[j] = tolower((unsigned char)z[j]);
-    }
-    if( strcmp(z,"on")==0 ){
-      val = 1;
-    }else if( strcmp(z,"yes")==0 ){
-      val = 1;
-    }
-    p->echoOn = val;
+    p->echoOn = booleanValue(azArg[1]);
   }else
 
   if( c=='e' && strncmp(azArg[0], "exit", n)==0 ){
-    rc = 1;
+    rc = 2;
   }else
 
   if( c=='e' && strncmp(azArg[0], "explain", n)==0 ){
-    int j;
-    static char zOne[] = "1";
-    char *z = nArg>=2 ? azArg[1] : zOne;
-    int val = atoi(z);
-    for(j=0; z[j]; j++){
-      z[j] = tolower((unsigned char)z[j]);
-    }
-    if( strcmp(z,"on")==0 ){
-      val = 1;
-    }else if( strcmp(z,"yes")==0 ){
-      val = 1;
-    }
+    int val = nArg>=2 ? booleanValue(azArg[1]) : 1;
     if(val == 1) {
       if(!p->explainPrev.valid) {
         p->explainPrev.valid = 1;
@@ -1018,21 +1079,9 @@
     }
   }else
 
-  if( c=='h' && (strncmp(azArg[0], "header", n)==0
-                 ||
+  if( c=='h' && (strncmp(azArg[0], "header", n)==0 ||
                  strncmp(azArg[0], "headers", n)==0 )&& nArg>1 ){
-    int j;
-    char *z = azArg[1];
-    int val = atoi(azArg[1]);
-    for(j=0; z[j]; j++){
-      z[j] = tolower((unsigned char)z[j]);
-    }
-    if( strcmp(z,"on")==0 ){
-      val = 1;
-    }else if( strcmp(z,"yes")==0 ){
-      val = 1;
-    }
-    p->showHeader = val;
+    p->showHeader = booleanValue(azArg[1]);
   }else
 
   if( c=='h' && strncmp(azArg[0], "help", n)==0 ){
@@ -1069,6 +1118,7 @@
     if( rc ){
       fprintf(stderr,"Error: %s\n", sqlite3_errmsg(db));
       nCol = 0;
+      rc = 1;
     }else{
       nCol = sqlite3_column_count(pStmt);
     }
@@ -1089,7 +1139,7 @@
     if( rc ){
       fprintf(stderr, "Error: %s\n", sqlite3_errmsg(db));
       sqlite3_finalize(pStmt);
-      return 0;
+      return 1;
     }
     in = fopen(zFile, "rb");
     if( in==0 ){
@@ -1135,6 +1185,7 @@
       if( rc!=SQLITE_OK ){
         fprintf(stderr,"Error: %s\n", sqlite3_errmsg(db));
         zCommit = "ROLLBACK";
+        rc = 1;
         break;
       }
     }
@@ -1180,6 +1231,7 @@
     if( rc!=SQLITE_OK ){
       fprintf(stderr, "%s\n", zErrMsg);
       sqlite3_free(zErrMsg);
+      rc = 1;
     }
   }else
 #endif
@@ -1214,7 +1266,7 @@
         set_table_name(p, "table");
       }
     }else {
-      fprintf(stderr,"mode should be on of: "
+      fprintf(stderr,"mode should be one of: "
          "column csv html insert line list tabs tcl\n");
     }
   }else
@@ -1251,7 +1303,7 @@
   }else
 
   if( c=='q' && strncmp(azArg[0], "quit", n)==0 ){
-    rc = 1;
+    rc = 2;
   }else
 
   if( c=='r' && strncmp(azArg[0], "read", n)==0 && nArg==2 ){
@@ -1403,6 +1455,8 @@
         }
         printf("\n");
       }
+    }else{
+      rc = 1;
     }
     sqlite3_free_table(azResult);
   }else
@@ -1482,24 +1536,40 @@
 ** is coming from a file or device.  A prompt is issued and history
 ** is saved only if input is interactive.  An interrupt signal will
 ** cause this routine to exit immediately, unless input is interactive.
+**
+** Return the number of errors.
 */
-static void process_input(struct callback_data *p, FILE *in){
+static int process_input(struct callback_data *p, FILE *in){
   char *zLine;
   char *zSql = 0;
   int nSql = 0;
   char *zErrMsg;
   int rc;
-  while( fflush(p->out), (zLine = one_input_line(zSql, in))!=0 ){
+  int errCnt = 0;
+  int lineno = 0;
+  int startline = 0;
+
+  while( errCnt==0 || !bail_on_error || (in==0 && stdin_is_interactive) ){
+    fflush(p->out);
+    zLine = one_input_line(zSql, in);
+    if( zLine==0 ){
+      break;  /* We have reached EOF */
+    }
     if( seenInterrupt ){
       if( in!=0 ) break;
       seenInterrupt = 0;
     }
+    lineno++;
     if( p->echoOn ) printf("%s\n", zLine);
     if( (zSql==0 || zSql[0]==0) && _all_whitespace(zLine) ) continue;
     if( zLine && zLine[0]=='.' && nSql==0 ){
-      int rc = do_meta_command(zLine, p);
+      rc = do_meta_command(zLine, p);
       free(zLine);
-      if( rc ) break;
+      if( rc==2 ){
+        break;
+      }else if( rc ){
+        errCnt++;
+      }
       continue;
     }
     if( _is_command_terminator(zLine) ){
@@ -1516,6 +1586,7 @@
           exit(1);
         }
         strcpy(zSql, zLine);
+        startline = lineno;
       }
     }else{
       int len = strlen(zLine);
@@ -1534,14 +1605,20 @@
       open_db(p);
       rc = sqlite3_exec(p->db, zSql, callback, p, &zErrMsg);
       if( rc || zErrMsg ){
-        /* if( in!=0 && !p->echoOn ) printf("%s\n",zSql); */
+        char zPrefix[100];
+        if( in!=0 || !stdin_is_interactive ){
+          sprintf(zPrefix, "SQL error near line %d:", startline);
+        }else{
+          sprintf(zPrefix, "SQL error:");
+        }
         if( zErrMsg!=0 ){
-          printf("SQL error: %s\n", zErrMsg);
+          printf("%s %s\n", zPrefix, zErrMsg);
           sqlite3_free(zErrMsg);
           zErrMsg = 0;
         }else{
-          printf("SQL error: %s\n", sqlite3_errmsg(p->db));
+          printf("%s %s\n", zPrefix, sqlite3_errmsg(p->db));
         }
+        errCnt++;
       }
       free(zSql);
       zSql = 0;
@@ -1552,6 +1629,7 @@
     if( !_all_whitespace(zSql) ) printf("Incomplete SQL: %s\n", zSql);
     free(zSql);
   }
+  return errCnt;
 }
 
 /*
@@ -1642,7 +1720,7 @@
   }
   in = fopen(sqliterc,"rb");
   if( in ){
-    if( isatty(fileno(stdout)) ){
+    if( stdin_is_interactive ){
       printf("Loading resources from %s\n",sqliterc);
     }
     process_input(p,in);
@@ -1659,7 +1737,11 @@
   "   -init filename       read/process named file\n"
   "   -echo                print commands before execution\n"
   "   -[no]header          turn headers on or off\n"
+  "   -bail                stop after hitting an error\n"
+  "   -interactive         force interactive I/O\n"
+  "   -batch               force batch I/O\n"
   "   -column              set output mode to 'column'\n"
+  "   -csv                 set output mode to 'csv'\n"
   "   -html                set output mode to HTML\n"
   "   -line                set output mode to 'line'\n"
   "   -list                set output mode to 'list'\n"
@@ -1698,6 +1780,7 @@
   const char *zInitFile = 0;
   char *zFirstCmd = 0;
   int i;
+  int rc = 0;
 
 #ifdef __MACOS__
   argc = ccommand(&argv);
@@ -1705,6 +1788,7 @@
 
   Argv0 = argv[0];
   main_init(&data);
+  stdin_is_interactive = isatty(0);
 
   /* Make sure we have a valid signal handler early, before anything
   ** else is done.
@@ -1718,7 +1802,10 @@
   ** and the first command to execute.
   */
   for(i=1; i<argc-1; i++){
+    char *z;
     if( argv[i][0]!='-' ) break;
+    z = argv[i];
+    if( z[0]=='-' && z[1]=='-' ) z++;
     if( strcmp(argv[i],"-separator")==0 || strcmp(argv[i],"-nullvalue")==0 ){
       i++;
     }else if( strcmp(argv[i],"-init")==0 ){
@@ -1769,6 +1856,7 @@
   */
   for(i=1; i<argc && argv[i][0]=='-'; i++){
     char *z = argv[i];
+    if( z[1]=='-' ){ z++; }
     if( strcmp(z,"-init")==0 ){
       i++;
     }else if( strcmp(z,"-html")==0 ){
@@ -1779,6 +1867,9 @@
       data.mode = MODE_Line;
     }else if( strcmp(z,"-column")==0 ){
       data.mode = MODE_Column;
+    }else if( strcmp(z,"-csv")==0 ){
+      data.mode = MODE_Csv;
+      strcpy(data.separator,",");
     }else if( strcmp(z,"-separator")==0 ){
       i++;
       sprintf(data.separator,"%.*s",(int)sizeof(data.separator)-1,argv[i]);
@@ -1791,9 +1882,15 @@
       data.showHeader = 0;
     }else if( strcmp(z,"-echo")==0 ){
       data.echoOn = 1;
+    }else if( strcmp(z,"-bail")==0 ){
+      bail_on_error = 1;
     }else if( strcmp(z,"-version")==0 ){
       printf("%s\n", sqlite3_libversion());
       return 0;
+    }else if( strcmp(z,"-interactive")==0 ){
+      stdin_is_interactive = 1;
+    }else if( strcmp(z,"-batch")==0 ){
+      stdin_is_interactive = 0;
     }else if( strcmp(z,"-help")==0 || strcmp(z, "--help")==0 ){
       usage(1);
     }else{
@@ -1821,7 +1918,7 @@
   }else{
     /* Run commands received from standard input
     */
-    if( isatty(fileno(stdout)) && isatty(fileno(stdin)) ){
+    if( stdin_is_interactive ){
       char *zHome;
       char *zHistory = 0;
       printf(
@@ -1836,7 +1933,7 @@
 #if defined(HAVE_READLINE) && HAVE_READLINE==1
       if( zHistory ) read_history(zHistory);
 #endif
-      process_input(&data, 0);
+      rc = process_input(&data, 0);
       if( zHistory ){
         stifle_history(100);
         write_history(zHistory);
@@ -1844,7 +1941,7 @@
       }
       free(zHome);
     }else{
-      process_input(&data, stdin);
+      rc = process_input(&data, stdin);
     }
   }
   set_table_name(&data, 0);
@@ -1853,5 +1950,5 @@
       fprintf(stderr,"error closing database: %s\n", sqlite3_errmsg(db));
     }
   }
-  return 0;
+  return rc;
 }

Modified: freeswitch/trunk/libs/sqlite/src/sqlite.h.in
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/sqlite.h.in	(original)
+++ freeswitch/trunk/libs/sqlite/src/sqlite.h.in	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 ** This header file defines the interface that the SQLite library
 ** presents to client programs.
 **
-** @(#) $Id: sqlite.h.in,v 1.194 2006/09/16 21:45:14 drh Exp $
+** @(#) $Id: sqlite.h.in,v 1.198 2007/01/26 00:51:44 drh Exp $
 */
 #ifndef _SQLITE3_H_
 #define _SQLITE3_H_
@@ -125,7 +125,7 @@
 ** value then the query is aborted, all subsequent SQL statements
 ** are skipped and the sqlite3_exec() function returns the SQLITE_ABORT.
 **
-** The 4th parameter is an arbitrary pointer that is passed
+** The 1st parameter is an arbitrary pointer that is passed
 ** to the callback function as its first parameter.
 **
 ** The 2nd parameter to the callback function is the number of
@@ -315,13 +315,30 @@
 ** currently locked by another process or thread.  If the busy callback
 ** is NULL, then sqlite3_exec() returns SQLITE_BUSY immediately if
 ** it finds a locked table.  If the busy callback is not NULL, then
-** sqlite3_exec() invokes the callback with three arguments.  The
-** second argument is the name of the locked table and the third
-** argument is the number of times the table has been busy.  If the
+** sqlite3_exec() invokes the callback with two arguments.  The
+** first argument to the handler is a copy of the void* pointer which
+** is the third argument to this routine.  The second argument to
+** the handler is the number of times that the busy handler has
+** been invoked for this locking event.  If the
 ** busy callback returns 0, then sqlite3_exec() immediately returns
 ** SQLITE_BUSY.  If the callback returns non-zero, then sqlite3_exec()
 ** tries to open the table again and the cycle repeats.
 **
+** The presence of a busy handler does not guarantee that
+** it will be invoked when there is lock contention.
+** If SQLite determines that invoking the busy handler could result in
+** a deadlock, it will return SQLITE_BUSY instead.
+** Consider a scenario where one process is holding a read lock that
+** it is trying to promote to a reserved lock and
+** a second process is holding a reserved lock that it is trying
+** to promote to an exclusive lock.  The first process cannot proceed
+** because it is blocked by the second and the second process cannot
+** proceed because it is blocked by the first.  If both processes
+** invoke the busy handlers, neither will make any progress.  Therefore,
+** SQLite returns SQLITE_BUSY for the first process, hoping that this
+** will induce the first process to release its read lock and allow
+** the second process to proceed.
+**
 ** The default busy callback is NULL.
 **
 ** Sqlite is re-entrant, so the busy handler may start a new query. 
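
A minimal sketch of the two-argument handler described above (the handler body
and its retry limit are illustrative only):

    static int busyHandler(void *pArg, int nTries){
      /* pArg is the 3rd argument given to sqlite3_busy_handler(); nTries
      ** counts invocations for this locking event.  Returning 0 makes the
      ** blocked call fail with SQLITE_BUSY; non-zero asks SQLite to retry. */
      if( nTries>=10 ) return 0;   /* give up after ten attempts */
      return 1;
    }

    /* registration, assuming an open handle sqlite3 *db */
    sqlite3_busy_handler(db, busyHandler, 0);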
@@ -693,6 +710,31 @@
 );
 
 /*
+** Newer versions of the prepare API work just like the legacy versions
+** but with one exception:  a copy of the SQL text is saved in the
+** sqlite3_stmt structure that is returned.  If this copy exists, it
+** modifies the behavior of sqlite3_step() slightly.  First, sqlite3_step()
+** will no longer return an SQLITE_SCHEMA error but will instead automatically
+** rerun the compiler to rebuild the prepared statement.  Second,
+** sqlite3_step() now returns a full result code - the result code that
+** you previously had to call sqlite3_reset() to obtain.
+*/
+int sqlite3_prepare_v2(
+  sqlite3 *db,            /* Database handle */
+  const char *zSql,       /* SQL statement, UTF-8 encoded */
+  int nBytes,             /* Length of zSql in bytes. */
+  sqlite3_stmt **ppStmt,  /* OUT: Statement handle */
+  const char **pzTail     /* OUT: Pointer to unused portion of zSql */
+);
+int sqlite3_prepare16_v2(
+  sqlite3 *db,            /* Database handle */
+  const void *zSql,       /* SQL statement, UTF-16 encoded */
+  int nBytes,             /* Length of zSql in bytes. */
+  sqlite3_stmt **ppStmt,  /* OUT: Statement handle */
+  const void **pzTail     /* OUT: Pointer to unused portion of zSql */
+);
+
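+
A minimal usage sketch for the _v2 interface declared above (the table and
column names are hypothetical):

    sqlite3_stmt *pStmt = 0;
    int rc = sqlite3_prepare_v2(db, "SELECT name FROM t1", -1, &pStmt, 0);
    if( rc==SQLITE_OK ){
      while( (rc = sqlite3_step(pStmt))==SQLITE_ROW ){
        printf("%s\n", (const char*)sqlite3_column_text(pStmt, 0));
      }
      /* rc now holds the full result code; there is no need to call
      ** sqlite3_reset() just to learn why the loop ended, and SQLITE_SCHEMA
      ** is handled by an automatic re-prepare. */
    }
    sqlite3_finalize(pStmt);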
+/*
 ** Pointers to the following two opaque structures are used to communicate
 ** with the implementations of user-defined functions.
 */
@@ -1143,9 +1185,13 @@
 ** SQLITE_TRANSIENT value means that the content will likely change in
 ** the near future and that SQLite should make its own private copy of
 ** the content before returning.
+**
+** The typedef is necessary to work around problems in certain
+** C++ compilers.  See ticket #2191.
 */
-#define SQLITE_STATIC      ((void(*)(void *))0)
-#define SQLITE_TRANSIENT   ((void(*)(void *))-1)
+typedef void (*sqlite3_destructor_type)(void*);
+#define SQLITE_STATIC      ((sqlite3_destructor_type)0)
+#define SQLITE_TRANSIENT   ((sqlite3_destructor_type)-1)
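
For illustration, typical bind calls using the typedef-based constants (the
statement and values are assumptions):

    char zName[64];
    strcpy(zName, "alice");                       /* stack buffer, reused later */
    sqlite3_bind_text(pStmt, 1, zName, -1, SQLITE_TRANSIENT);    /* SQLite copies */
    sqlite3_bind_text(pStmt, 2, "a literal", -1, SQLITE_STATIC); /* no copy made  */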
 
 /*
 ** User-defined functions invoke the following routines in order to

Modified: freeswitch/trunk/libs/sqlite/src/sqlite3ext.h
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/sqlite3ext.h	(original)
+++ freeswitch/trunk/libs/sqlite/src/sqlite3ext.h	Thu Feb 22 17:09:42 2007
@@ -15,7 +15,7 @@
 ** as extensions by SQLite should #include this file instead of 
 ** sqlite3.h.
 **
-** @(#) $Id: sqlite3ext.h,v 1.7 2006/09/22 23:38:21 shess Exp $
+** @(#) $Id: sqlite3ext.h,v 1.8 2007/01/09 14:37:18 drh Exp $
 */
 #ifndef _SQLITE3EXT_H_
 #define _SQLITE3EXT_H_
@@ -92,7 +92,7 @@
   void * (*get_auxdata)(sqlite3_context*,int);
   int  (*get_table)(sqlite3*,const char*,char***,int*,int*,char**);
   int  (*global_recover)(void);
-  void  (*interrupt)(sqlite3*);
+  void  (*interruptx)(sqlite3*);
   sqlite_int64  (*last_insert_rowid)(sqlite3*);
   const char * (*libversion)(void);
   int  (*libversion_number)(void);
@@ -222,7 +222,7 @@
 #define sqlite3_get_auxdata            sqlite3_api->get_auxdata
 #define sqlite3_get_table              sqlite3_api->get_table
 #define sqlite3_global_recover         sqlite3_api->global_recover
-#define sqlite3_interrupt              sqlite3_api->interrupt
+#define sqlite3_interrupt              sqlite3_api->interruptx
 #define sqlite3_last_insert_rowid      sqlite3_api->last_insert_rowid
 #define sqlite3_libversion             sqlite3_api->libversion
 #define sqlite3_libversion_number      sqlite3_api->libversion_number

Modified: freeswitch/trunk/libs/sqlite/src/sqliteInt.h
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/sqliteInt.h	(original)
+++ freeswitch/trunk/libs/sqlite/src/sqliteInt.h	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 *************************************************************************
 ** Internal interface definitions for SQLite.
 **
-** @(#) $Id: sqliteInt.h,v 1.529 2006/09/23 20:36:02 drh Exp $
+** @(#) $Id: sqliteInt.h,v 1.536 2007/02/13 12:49:24 drh Exp $
 */
 #ifndef _SQLITEINT_H_
 #define _SQLITEINT_H_
@@ -463,7 +463,7 @@
     u8 busy;                    /* TRUE if currently initializing */
   } init;
   int nExtension;               /* Number of loaded extensions */
-  void *aExtension;             /* Array of shared libraray handles */
+  void **aExtension;            /* Array of shared library handles */
   struct Vdbe *pVdbe;           /* List of active virtual machines */
   int activeVdbeCnt;            /* Number of vdbes currently executing */
   void (*xTrace)(void*,const char*);        /* Trace function */
@@ -1021,6 +1021,7 @@
 #define EP_VarSelect    0x20  /* pSelect is correlated, not constant */
 #define EP_Dequoted     0x40  /* True if the string has been dequoted */
 #define EP_InfixFunc    0x80  /* True for an infix function: LIKE, GLOB, etc */
+#define EP_ExpCollate  0x100  /* Collating sequence specified explicitly */
 
 /*
 ** These macros can be used to test, set, or clear bits in the 
@@ -1078,8 +1079,12 @@
 
 /*
 ** The bitmask datatype defined below is used for various optimizations.
+**
+** Changing this from a 64-bit to a 32-bit type limits the number of
+** tables in a join to 32 instead of 64.  But it also reduces the size
+** of the library by 738 bytes on ix86.
 */
-typedef unsigned int Bitmask;
+typedef u64 Bitmask;
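
A rough illustration of what the wider type buys (the variable names here are
illustrative, not the library's):

    /* one bit per FROM-clause table: with Bitmask as u64 a join can now
    ** reference up to 64 tables instead of 32 */
    Bitmask tabUsed = 0;
    int iTable = 40;                      /* 41st table in the join */
    tabUsed |= ((Bitmask)1)<<iTable;      /* would overflow a 32-bit mask */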
 
 /*
 ** The following structure describes the FROM clause of a SELECT statement.
@@ -1091,6 +1096,11 @@
 ** is modified by an INSERT, DELETE, or UPDATE statement.  In standard SQL,
 ** such a table must be a simple name: ID.  But in SQLite, the table can
 ** now be identified by a database name, a dot, then the table name: ID.ID.
+**
+** The jointype starts out showing the join type between the current table
+** and the next table on the list.  The parser builds the list this way.
+** But sqlite3SrcListShiftJoinType() later shifts the jointypes so that each
+** jointype expresses the join between the table and the previous table.
 */
 struct SrcList {
   i16 nSrc;        /* Number of tables or subqueries in the FROM clause */
@@ -1102,8 +1112,8 @@
     Table *pTab;      /* An SQL table corresponding to zName */
     Select *pSelect;  /* A SELECT statement used in place of a table name */
     u8 isPopulated;   /* Temporary table associated with SELECT is populated */
-    u8 jointype;      /* Type of join between this table and the next */
-    i16 iCursor;      /* The VDBE cursor number used to access this table */
+    u8 jointype;      /* Type of join between this table and the previous */
+    int iCursor;      /* The VDBE cursor number used to access this table */
     Expr *pOn;        /* The ON clause of a join */
     IdList *pUsing;   /* The USING clause of a join */
     Bitmask colUsed;  /* Bit N (1<<N) set if column N or pTab is used */
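
To make the shifted-jointype convention above concrete: for a hypothetical
FROM clause "a LEFT JOIN b", the parser first records the LEFT JOIN flags
(JT_OUTER among them) on the entry for "a"; after sqlite3SrcListShiftJoinType()
runs they live on the entry for "b", so each entry describes how it joins to
the table before it.  The pLeft[1].jointype change in select.c earlier in this
diff depends on exactly this convention.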
@@ -1617,7 +1627,9 @@
 IdList *sqlite3IdListAppend(IdList*, Token*);
 int sqlite3IdListIndex(IdList*,const char*);
 SrcList *sqlite3SrcListAppend(SrcList*, Token*, Token*);
-void sqlite3SrcListAddAlias(SrcList*, Token*);
+SrcList *sqlite3SrcListAppendFromTerm(SrcList*, Token*, Token*, Token*,
+                                      Select*, Expr*, IdList*);
+void sqlite3SrcListShiftJoinType(SrcList*);
 void sqlite3SrcListAssignCursors(Parse*, SrcList*);
 void sqlite3IdListDelete(IdList*);
 void sqlite3SrcListDelete(SrcList*);
@@ -1766,6 +1778,7 @@
 CollSeq *sqlite3FindCollSeq(sqlite3*,u8 enc, const char *,int,int);
 CollSeq *sqlite3LocateCollSeq(Parse *pParse, const char *zName, int nName);
 CollSeq *sqlite3ExprCollSeq(Parse *pParse, Expr *pExpr);
+Expr *sqlite3ExprSetColl(Parse *pParse, Expr *, Token *);
 int sqlite3CheckCollSeq(Parse *, CollSeq *);
 int sqlite3CheckIndexCollSeq(Parse *, Index *);
 int sqlite3CheckObjectName(Parse *, const char *);
@@ -1876,6 +1889,7 @@
 int sqlite3VtabBegin(sqlite3 *, sqlite3_vtab *);
 FuncDef *sqlite3VtabOverloadFunction(FuncDef*, int nArg, Expr*);
 void sqlite3InvalidFunction(sqlite3_context*,int,sqlite3_value**);
+int sqlite3Reprepare(Vdbe*);
 
 #ifdef SQLITE_SSE
 #include "sseInt.h"

Modified: freeswitch/trunk/libs/sqlite/src/tclsqlite.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/tclsqlite.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/tclsqlite.c	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 *************************************************************************
 ** A TCL Interface to SQLite
 **
-** $Id: tclsqlite.c,v 1.173 2006/09/02 14:17:00 drh Exp $
+** $Id: tclsqlite.c,v 1.176 2007/02/01 01:53:44 drh Exp $
 */
 #ifndef NO_TCL     /* Omit this whole file if TCL is unavailable */
 
@@ -1055,7 +1055,7 @@
       return TCL_ERROR;
     }
     nByte = strlen(zSql);
-    rc = sqlite3_prepare(pDb->db, zSql, 0, &pStmt, 0);
+    rc = sqlite3_prepare(pDb->db, zSql, -1, &pStmt, 0);
     sqlite3_free(zSql);
     if( rc ){
       Tcl_AppendResult(interp, "Error: ", sqlite3_errmsg(pDb->db), 0);
@@ -1081,7 +1081,7 @@
     }
     zSql[j++] = ')';
     zSql[j] = 0;
-    rc = sqlite3_prepare(pDb->db, zSql, 0, &pStmt, 0);
+    rc = sqlite3_prepare(pDb->db, zSql, -1, &pStmt, 0);
     free(zSql);
     if( rc ){
       Tcl_AppendResult(interp, "Error: ", sqlite3_errmsg(pDb->db), 0);
@@ -1173,6 +1173,7 @@
   ** default.
   */
   case DB_ENABLE_LOAD_EXTENSION: {
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
     int onoff;
     if( objc!=3 ){
       Tcl_WrongNumArgs(interp, 2, objv, "BOOLEAN");
@@ -1183,6 +1184,11 @@
     }
     sqlite3_enable_load_extension(pDb->db, onoff);
     break;
+#else
+    Tcl_AppendResult(interp, "extension loading is turned off at compile-time",
+                     0);
+    return TCL_ERROR;
+#endif
   }
 
   /*
@@ -2055,7 +2061,7 @@
   sqlite3_open(zFile, &p->db);
   Tcl_DStringFree(&translatedFilename);
   if( SQLITE_OK!=sqlite3_errcode(p->db) ){
-    zErrMsg = strdup(sqlite3_errmsg(p->db));
+    zErrMsg = sqlite3_mprintf("%s", sqlite3_errmsg(p->db));
     sqlite3_close(p->db);
     p->db = 0;
   }
@@ -2065,7 +2071,7 @@
   if( p->db==0 ){
     Tcl_SetResult(interp, zErrMsg, TCL_VOLATILE);
     Tcl_Free((char*)p);
-    free(zErrMsg);
+    sqlite3_free(zErrMsg);
     return TCL_ERROR;
   }
   p->maxStmt = NUM_PREPARED_STMTS;

Modified: freeswitch/trunk/libs/sqlite/src/test1.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/test1.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/test1.c	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 ** is not included in the SQLite library.  It is used for automated
 ** testing of the SQLite library.
 **
-** $Id: test1.c,v 1.222 2006/09/15 07:28:51 drh Exp $
+** $Id: test1.c,v 1.228 2007/02/05 14:21:48 danielk1977 Exp $
 */
 #include "sqliteInt.h"
 #include "tcl.h"
@@ -63,6 +63,22 @@
   return TCL_OK;
 }
 
+/*
+** Decode a pointer to an sqlite3 object.
+*/
+static int getDbPointer(Tcl_Interp *interp, const char *zA, sqlite3 **ppDb){
+  struct SqliteDb *p;
+  Tcl_CmdInfo cmdInfo;
+  if( Tcl_GetCommandInfo(interp, zA, &cmdInfo) ){
+    p = (struct SqliteDb*)cmdInfo.objClientData;
+    *ppDb = p->db;
+  }else{
+    *ppDb = (sqlite3*)sqlite3TextToPtr(zA);
+  }
+  return TCL_OK;
+}
+
+
 const char *sqlite3TestErrorName(int rc){
   const char *zName = 0;
   switch( rc & 0xff ){
@@ -122,14 +138,6 @@
 }
 
 /*
-** Decode a pointer to an sqlite3 object.
-*/
-static int getDbPointer(Tcl_Interp *interp, const char *zA, sqlite3 **ppDb){
-  *ppDb = (sqlite3*)sqlite3TextToPtr(zA);
-  return TCL_OK;
-}
-
-/*
 ** Decode a pointer to an sqlite3_stmt object.
 */
 static int getStmtPointer(
@@ -228,6 +236,65 @@
 }
 
 /*
+** Usage:  sqlite3_exec  DB  SQL
+**
+** Invoke the sqlite3_exec interface using the open database DB
+*/
+static int test_exec(
+  void *NotUsed,
+  Tcl_Interp *interp,    /* The TCL interpreter that invoked this command */
+  int argc,              /* Number of arguments */
+  char **argv            /* Text of each argument */
+){
+  sqlite3 *db;
+  Tcl_DString str;
+  int rc;
+  char *zErr = 0;
+  char zBuf[30];
+  if( argc!=3 ){
+    Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0], 
+       " DB SQL", 0);
+    return TCL_ERROR;
+  }
+  if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+  Tcl_DStringInit(&str);
+  rc = sqlite3_exec(db, argv[2], exec_printf_cb, &str, &zErr);
+  sprintf(zBuf, "%d", rc);
+  Tcl_AppendElement(interp, zBuf);
+  Tcl_AppendElement(interp, rc==SQLITE_OK ? Tcl_DStringValue(&str) : zErr);
+  Tcl_DStringFree(&str);
+  if( zErr ) sqlite3_free(zErr);
+  if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+  return TCL_OK;
+}
+
+/*
+** Usage:  sqlite3_exec_nr  DB  SQL
+**
+** Invoke the sqlite3_exec interface using the open database DB.  Discard
+** all results.
+*/
+static int test_exec_nr(
+  void *NotUsed,
+  Tcl_Interp *interp,    /* The TCL interpreter that invoked this command */
+  int argc,              /* Number of arguments */
+  char **argv            /* Text of each argument */
+){
+  sqlite3 *db;
+  int rc;
+  char *zErr = 0;
+  if( argc!=3 ){
+    Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0], 
+       " DB SQL", 0);
+    return TCL_ERROR;
+  }
+  if( getDbPointer(interp, argv[1], &db) ) return TCL_ERROR;
+  rc = sqlite3_exec(db, argv[2], 0, 0, &zErr);
+  if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+  return TCL_OK;
+}
+
+/*
 ** Usage:  sqlite3_mprintf_z_test  SEPARATOR  ARG0  ARG1 ...
 **
 ** Test the %z format of sqliteMPrintf().  Use multiple mprintf() calls to 
@@ -544,6 +611,46 @@
 }
 
 /*
+** Implementation of tkt2213func(), a scalar function that takes exactly
+** one argument. It has two interesting features:
+**
+** * It calls sqlite3_value_text() 3 times on the argument sqlite3_value*.
+**   If the three pointers returned are not the same, an SQL error is raised.
+**
+** * Otherwise it returns a copy of the text representation of its
+**   argument in such a way that the VDBE representation is a Mem* cell
+**   with the MEM_Term flag clear. 
+**
+** Ticket #2213 can therefore be tested by evaluating the following
+** SQL expression:
+**
+**   tkt2213func(tkt2213func('a string'));
+*/
+static void tkt2213Function(
+  sqlite3_context *context, 
+  int argc,  
+  sqlite3_value **argv
+){
+  int nText;
+  unsigned char const *zText1;
+  unsigned char const *zText2;
+  unsigned char const *zText3;
+
+  nText = sqlite3_value_bytes(argv[0]);
+  zText1 = sqlite3_value_text(argv[0]);
+  zText2 = sqlite3_value_text(argv[0]);
+  zText3 = sqlite3_value_text(argv[0]);
+
+  if( zText1!=zText2 || zText2!=zText3 ){
+    sqlite3_result_error(context, "tkt2213 is not fixed", -1);
+  }else{
+    char *zCopy = (char *)sqlite3_malloc(nText);
+    memcpy(zCopy, zText1, nText);
+    sqlite3_result_text(context, zCopy, nText, sqlite3_free);
+  }
+}
+
+/*
 ** Usage:  sqlite_test_create_function DB
 **
 ** Call the sqlite3_create_function API on the given database in order
@@ -584,6 +691,10 @@
     rc = sqlite3_create_function(db, "hex16", 1, SQLITE_ANY, 0, 
           hex16Func, 0, 0);
   }
+  if( rc==SQLITE_OK ){
+    rc = sqlite3_create_function(db, "tkt2213func", 1, SQLITE_ANY, 0, 
+          tkt2213Function, 0, 0);
+  }
 
 #ifndef SQLITE_OMIT_UTF16
   /* Use the sqlite3_create_function16() API here. Mainly for fun, but also 
@@ -693,6 +804,30 @@
 }
 
 
+/*
+** Usage:  printf TEXT
+**
+** Send output to printf.  Use this rather than puts to merge the output
+** in the correct sequence with debugging printfs inserted into C code.
+** Puts uses a separate buffer and debugging statements will be out of
+** sequence if it is used.
+*/
+static int test_printf(
+  void *NotUsed,
+  Tcl_Interp *interp,    /* The TCL interpreter that invoked this command */
+  int argc,              /* Number of arguments */
+  char **argv            /* Text of each argument */
+){
+  if( argc!=2 ){
+    Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0],
+       " TEXT\"", 0);
+    return TCL_ERROR;
+  }
+  printf("%s\n", argv[1]);
+  return TCL_OK;
+}
+
+
 
 /*
 ** Usage:  sqlite3_mprintf_int FORMAT INTEGER INTEGER INTEGER
@@ -1348,7 +1483,7 @@
 ){
   sqlite3_stmt *pStmt;
   int rc;
-  sqlite3 *db;
+  sqlite3 *db = 0;
 
   if( objc!=2 ){
     Tcl_AppendResult(interp, "wrong # args: should be \"",
@@ -2489,7 +2624,60 @@
 }
 
 /*
-** Usage: sqlite3_prepare DB sql bytes tailvar
+** Usage: sqlite3_prepare_v2 DB sql bytes tailvar
+**
+** Compile up to <bytes> bytes of the supplied SQL string <sql> using
+** database handle <DB>. The parameter <tailvar> is the name of a global
+** variable that is set to the unused portion of <sql> (if any). A
+** STMT handle is returned.
+*/
+static int test_prepare_v2(
+  void * clientData,
+  Tcl_Interp *interp,
+  int objc,
+  Tcl_Obj *CONST objv[]
+){
+  sqlite3 *db;
+  const char *zSql;
+  int bytes;
+  const char *zTail = 0;
+  sqlite3_stmt *pStmt = 0;
+  char zBuf[50];
+  int rc;
+
+  if( objc!=5 ){
+    Tcl_AppendResult(interp, "wrong # args: should be \"", 
+       Tcl_GetString(objv[0]), " DB sql bytes tailvar", 0);
+    return TCL_ERROR;
+  }
+  if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+  zSql = Tcl_GetString(objv[2]);
+  if( Tcl_GetIntFromObj(interp, objv[3], &bytes) ) return TCL_ERROR;
+
+  rc = sqlite3_prepare_v2(db, zSql, bytes, &pStmt, &zTail);
+  if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+  if( zTail ){
+    if( bytes>=0 ){
+      bytes = bytes - (zTail-zSql);
+    }
+    Tcl_ObjSetVar2(interp, objv[4], 0, Tcl_NewStringObj(zTail, bytes), 0);
+  }
+  if( rc!=SQLITE_OK ){
+    assert( pStmt==0 );
+    sprintf(zBuf, "(%d) ", rc);
+    Tcl_AppendResult(interp, zBuf, sqlite3_errmsg(db), 0);
+    return TCL_ERROR;
+  }
+
+  if( pStmt ){
+    if( sqlite3TestMakePointerStr(interp, zBuf, pStmt) ) return TCL_ERROR;
+    Tcl_AppendResult(interp, zBuf, 0);
+  }
+  return TCL_OK;
+}
+
+/*
+** Usage: sqlite3_prepare16 DB sql bytes tailvar
 **
 ** Compile up to <bytes> bytes of the supplied SQL string <sql> using
 ** database handle <DB>. The parameter <tailval> is the name of a global
@@ -2547,6 +2735,64 @@
 }
 
 /*
+** Usage: sqlite3_prepare16_v2 DB sql bytes tailvar
+**
+** Compile up to <bytes> bytes of the supplied SQL string <sql> using
+** database handle <DB>. The parameter <tailvar> is the name of a global
+** variable that is set to the unused portion of <sql> (if any). A
+** STMT handle is returned.
+*/
+static int test_prepare16_v2(
+  void * clientData,
+  Tcl_Interp *interp,
+  int objc,
+  Tcl_Obj *CONST objv[]
+){
+#ifndef SQLITE_OMIT_UTF16
+  sqlite3 *db;
+  const void *zSql;
+  const void *zTail = 0;
+  Tcl_Obj *pTail = 0;
+  sqlite3_stmt *pStmt = 0;
+  char zBuf[50]; 
+  int rc;
+  int bytes;                /* The integer specified as arg 3 */
+  int objlen;               /* The byte-array length of arg 2 */
+
+  if( objc!=5 ){
+    Tcl_AppendResult(interp, "wrong # args: should be \"", 
+       Tcl_GetString(objv[0]), " DB sql bytes tailvar", 0);
+    return TCL_ERROR;
+  }
+  if( getDbPointer(interp, Tcl_GetString(objv[1]), &db) ) return TCL_ERROR;
+  zSql = Tcl_GetByteArrayFromObj(objv[2], &objlen);
+  if( Tcl_GetIntFromObj(interp, objv[3], &bytes) ) return TCL_ERROR;
+
+  rc = sqlite3_prepare16_v2(db, zSql, bytes, &pStmt, &zTail);
+  if( sqlite3TestErrCode(interp, db, rc) ) return TCL_ERROR;
+  if( rc ){
+    return TCL_ERROR;
+  }
+
+  if( zTail ){
+    objlen = objlen - ((u8 *)zTail-(u8 *)zSql);
+  }else{
+    objlen = 0;
+  }
+  pTail = Tcl_NewByteArrayObj((u8 *)zTail, objlen);
+  Tcl_IncrRefCount(pTail);
+  Tcl_ObjSetVar2(interp, objv[4], 0, pTail, 0);
+  Tcl_DecrRefCount(pTail);
+
+  if( pStmt ){
+    if( sqlite3TestMakePointerStr(interp, zBuf, pStmt) ) return TCL_ERROR;
+  }
+  Tcl_AppendResult(interp, zBuf, 0);
+#endif /* SQLITE_OMIT_UTF16 */
+  return TCL_OK;
+}
+
+/*
 ** Usage: sqlite3_open filename ?options-list?
 */
 static int test_open(
@@ -3617,6 +3863,12 @@
   Tcl_SetVar2(interp, "sqlite_options", "fts1", "0", TCL_GLOBAL_ONLY);
 #endif
 
+#ifdef SQLITE_ENABLE_FTS2
+  Tcl_SetVar2(interp, "sqlite_options", "fts2", "1", TCL_GLOBAL_ONLY);
+#else
+  Tcl_SetVar2(interp, "sqlite_options", "fts2", "0", TCL_GLOBAL_ONLY);
+#endif
+
 #ifdef SQLITE_OMIT_GLOBALRECOVER
   Tcl_SetVar2(interp, "sqlite_options", "globalrecover", "0", TCL_GLOBAL_ONLY);
 #else
@@ -3828,6 +4080,8 @@
      { "sqlite3_mprintf_n_test",        (Tcl_CmdProc*)test_mprintf_n        },
      { "sqlite3_last_insert_rowid",     (Tcl_CmdProc*)test_last_rowid       },
      { "sqlite3_exec_printf",           (Tcl_CmdProc*)test_exec_printf      },
+     { "sqlite3_exec",                  (Tcl_CmdProc*)test_exec             },
+     { "sqlite3_exec_nr",               (Tcl_CmdProc*)test_exec_nr          },
      { "sqlite3_get_table_printf",      (Tcl_CmdProc*)test_get_table_printf },
      { "sqlite3_close",                 (Tcl_CmdProc*)sqlite_test_close     },
      { "sqlite3_create_function",       (Tcl_CmdProc*)test_create_function  },
@@ -3849,6 +4103,7 @@
      { "sqlite3_get_autocommit",        (Tcl_CmdProc*)get_autocommit        },
      { "sqlite3_stack_used",            (Tcl_CmdProc*)test_stack_used       },
      { "sqlite3_busy_timeout",          (Tcl_CmdProc*)test_busy_timeout     },
+     { "printf",                        (Tcl_CmdProc*)test_printf           },
   };
   static struct {
      char *zName;
@@ -3877,6 +4132,8 @@
 
      { "sqlite3_prepare",               test_prepare       ,0 },
      { "sqlite3_prepare16",             test_prepare16     ,0 },
+     { "sqlite3_prepare_v2",            test_prepare_v2    ,0 },
+     { "sqlite3_prepare16_v2",          test_prepare16_v2  ,0 },
      { "sqlite3_finalize",              test_finalize      ,0 },
      { "sqlite3_reset",                 test_reset         ,0 },
      { "sqlite3_expired",               test_expired       ,0 },

Modified: freeswitch/trunk/libs/sqlite/src/test3.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/test3.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/test3.c	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 ** is not included in the SQLite library.  It is used for automated
 ** testing of the SQLite library.
 **
-** $Id: test3.c,v 1.67 2006/08/13 18:39:26 drh Exp $
+** $Id: test3.c,v 1.70 2007/02/10 19:22:36 drh Exp $
 */
 #include "sqliteInt.h"
 #include "pager.h"
@@ -567,6 +567,7 @@
   int nRoot;
   int *aRoot;
   int i;
+  int nErr;
   char *zResult;
 
   if( argc<3 ){
@@ -576,16 +577,16 @@
   }
   pBt = sqlite3TextToPtr(argv[1]);
   nRoot = argc-2;
-  aRoot = malloc( sizeof(int)*(argc-2) );
+  aRoot = (int*)malloc( sizeof(int)*(argc-2) );
   for(i=0; i<argc-2; i++){
     if( Tcl_GetInt(interp, argv[i+2], &aRoot[i]) ) return TCL_ERROR;
   }
 #ifndef SQLITE_OMIT_INTEGRITY_CHECK
-  zResult = sqlite3BtreeIntegrityCheck(pBt, aRoot, nRoot);
+  zResult = sqlite3BtreeIntegrityCheck(pBt, aRoot, nRoot, 10000, &nErr);
 #else
   zResult = 0;
 #endif
-  free(aRoot);
+  free((void*)aRoot);
   if( zResult ){
     Tcl_AppendResult(interp, zResult, 0);
     sqliteFree(zResult); 
@@ -1051,6 +1052,7 @@
   rc = sqlite3BtreeData(pCur, 0, n, zBuf);
   if( rc ){
     Tcl_AppendResult(interp, errorName(rc), 0);
+    free(zBuf);
     return TCL_ERROR;
   }
   zBuf[n] = 0;
@@ -1184,6 +1186,7 @@
 **   aResult[7] =  Header size in bytes
 **   aResult[8] =  Local payload size
 **   aResult[9] =  Parent page number
+**   aResult[10]=  Page number of the first overflow page
 */
 static int btree_cursor_info(
   void *NotUsed,
@@ -1195,7 +1198,7 @@
   int rc;
   int i, j;
   int up;
-  int aResult[10];
+  int aResult[11];
   char zBuf[400];
 
   if( argc!=2 && argc!=3 ){
@@ -1224,6 +1227,76 @@
 }
 
 /*
+** Copied from btree.c:
+*/
+static u32 get4byte(unsigned char *p){
+  return (p[0]<<24) | (p[1]<<16) | (p[2]<<8) | p[3];
+}
+
+/*
+**   btree_ovfl_info  BTREE  CURSOR
+**
+** Given a cursor, return the sequence of page numbers that form the
+** overflow pages for the data of the entry that the cursor is pointing
+** to.
+*/ 
+static int btree_ovfl_info(
+  void *NotUsed,
+  Tcl_Interp *interp,    /* The TCL interpreter that invoked this command */
+  int argc,              /* Number of arguments */
+  const char **argv      /* Text of each argument */
+){
+  Btree *pBt;
+  BtCursor *pCur;
+  Pager *pPager;
+  int rc;
+  int n;
+  int dataSize;
+  u32 pgno;
+  void *pPage;
+  int aResult[11];
+  char zElem[100];
+  Tcl_DString str;
+
+  if( argc!=3 ){
+    Tcl_AppendResult(interp, "wrong # args: should be \"", argv[0], 
+                    " BTREE CURSOR", 0);
+    return TCL_ERROR;
+  }
+  pBt = sqlite3TextToPtr(argv[1]);
+  pCur = sqlite3TextToPtr(argv[2]);
+  if( (*(void**)pCur) != (void*)pBt ){
+    Tcl_AppendResult(interp, "Cursor ", argv[2], " does not belong to btree ",
+       argv[1], 0);
+    return TCL_ERROR;
+  }
+  pPager = sqlite3BtreePager(pBt);
+  rc = sqlite3BtreeCursorInfo(pCur, aResult, 0);
+  if( rc ){
+    Tcl_AppendResult(interp, errorName(rc), 0);
+    return TCL_ERROR;
+  }
+  dataSize = sqlite3BtreeGetPageSize(pBt) - sqlite3BtreeGetReserve(pBt);
+  Tcl_DStringInit(&str);
+  n = aResult[6] - aResult[8];
+  n = (n + dataSize - 1)/dataSize;
+  pgno = (u32)aResult[10];
+  while( pgno && n-- ){
+    sprintf(zElem, "%d", pgno);
+    Tcl_DStringAppendElement(&str, zElem);
+    if( sqlite3pager_get(pPager, pgno, &pPage)!=SQLITE_OK ){
+      Tcl_DStringFree(&str);
+      Tcl_AppendResult(interp, "unable to get page ", zElem, 0);
+      return TCL_ERROR;
+    }
+    pgno = get4byte((unsigned char*)pPage);
+    sqlite3pager_unref(pPage);
+  }
+  Tcl_DStringResult(interp, &str);
+  return SQLITE_OK;
+}
+
+/*
 ** The command is provided for the purpose of setting breakpoints.
 ** in regression test scripts.
 **
@@ -1438,6 +1511,7 @@
      { "btree_from_db",            (Tcl_CmdProc*)btree_from_db            },
      { "btree_set_cache_size",     (Tcl_CmdProc*)btree_set_cache_size     },
      { "btree_cursor_info",        (Tcl_CmdProc*)btree_cursor_info        },
+     { "btree_ovfl_info",          (Tcl_CmdProc*)btree_ovfl_info          },
      { "btree_cursor_list",        (Tcl_CmdProc*)btree_cursor_list        },
   };
   int i;

Modified: freeswitch/trunk/libs/sqlite/src/test8.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/test8.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/test8.c	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 ** is not included in the SQLite library.  It is used for automated
 ** testing of the SQLite library.
 **
-** $Id: test8.c,v 1.43 2006/10/08 18:56:57 drh Exp $
+** $Id: test8.c,v 1.44 2007/01/03 23:37:29 drh Exp $
 */
 #include "sqliteInt.h"
 #include "tcl.h"
@@ -639,6 +639,7 @@
   */
   zQuery = sqlite3_mprintf("SELECT count(*) FROM %Q", pVtab->zTableName);
   rc = sqlite3_prepare(pVtab->db, zQuery, -1, &pStmt, 0);
+  sqlite3_free(zQuery);
   if( rc!=SQLITE_OK ){
     return rc;
   }

Modified: freeswitch/trunk/libs/sqlite/src/test_autoext.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/test_autoext.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/test_autoext.c	Thu Feb 22 17:09:42 2007
@@ -11,10 +11,10 @@
 *************************************************************************
 ** Test extension for testing the sqlite3_auto_extension() function.
 **
-** $Id: test_autoext.c,v 1.1 2006/08/23 20:07:22 drh Exp $
+** $Id: test_autoext.c,v 1.2 2006/12/19 18:57:11 drh Exp $
 */
-#ifndef SQLITE_OMIT_LOAD_EXTENSION
 #include "tcl.h"
+#ifndef SQLITE_OMIT_LOAD_EXTENSION
 #include "sqlite3ext.h"
 static SQLITE_EXTENSION_INIT1
 

Modified: freeswitch/trunk/libs/sqlite/src/tokenize.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/tokenize.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/tokenize.c	Thu Feb 22 17:09:42 2007
@@ -15,7 +15,7 @@
 ** individual tokens and sends those tokens one-by-one over to the
 ** parser for analysis.
 **
-** $Id: tokenize.c,v 1.124 2006/08/12 12:33:14 drh Exp $
+** $Id: tokenize.c,v 1.125 2007/01/26 19:31:01 drh Exp $
 */
 #include "sqliteInt.h"
 #include "os.h"
@@ -394,16 +394,16 @@
   int tokenType;
   int lastTokenParsed = -1;
   sqlite3 *db = pParse->db;
-  extern void *sqlite3ParserAlloc(void*(*)(int));
+  extern void *sqlite3ParserAlloc(void*(*)(size_t));
   extern void sqlite3ParserFree(void*, void(*)(void*));
-  extern int sqlite3Parser(void*, int, Token, Parse*);
+  extern void sqlite3Parser(void*, int, Token, Parse*);
 
   if( db->activeVdbeCnt==0 ){
     db->u1.isInterrupted = 0;
   }
   pParse->rc = SQLITE_OK;
   i = 0;
-  pEngine = sqlite3ParserAlloc((void*(*)(int))sqlite3MallocX);
+  pEngine = sqlite3ParserAlloc((void*(*)(size_t))sqlite3MallocX);
   if( pEngine==0 ){
     return SQLITE_NOMEM;
   }

Modified: freeswitch/trunk/libs/sqlite/src/trigger.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/trigger.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/trigger.c	Thu Feb 22 17:09:42 2007
@@ -668,12 +668,12 @@
     pParse->trigStack->orconf = orconf;
     switch( pTriggerStep->op ){
       case TK_SELECT: {
-	Select * ss = sqlite3SelectDup(pTriggerStep->pSelect);		  
-	assert(ss);
-	assert(ss->pSrc);
-        sqlite3SelectResolve(pParse, ss, 0);
-	sqlite3Select(pParse, ss, SRT_Discard, 0, 0, 0, 0, 0);
-	sqlite3SelectDelete(ss);
+	Select *ss = sqlite3SelectDup(pTriggerStep->pSelect);
+        if( ss ){
+          sqlite3SelectResolve(pParse, ss, 0);
+          sqlite3Select(pParse, ss, SRT_Discard, 0, 0, 0, 0, 0);
+          sqlite3SelectDelete(ss);
+        }
 	break;
       }
       case TK_UPDATE: {

Modified: freeswitch/trunk/libs/sqlite/src/update.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/update.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/update.c	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 ** This file contains C code routines that are called by the parser
 ** to handle UPDATE statements.
 **
-** $Id: update.c,v 1.133 2006/06/27 13:20:21 drh Exp $
+** $Id: update.c,v 1.134 2007/02/07 01:06:53 drh Exp $
 */
 #include "sqliteInt.h"
 
@@ -103,6 +103,7 @@
   AuthContext sContext;  /* The authorization context */
   NameContext sNC;       /* The name-context to resolve expressions in */
   int iDb;               /* Database containing the table being updated */
+  int memCnt = 0;        /* Memory cell used for counting rows changed */
 
 #ifndef SQLITE_OMIT_TRIGGER
   int isView;                  /* Trying to update a view */
@@ -311,7 +312,8 @@
   /* Initialize the count of updated rows
   */
   if( db->flags & SQLITE_CountRows && !pParse->trigStack ){
-    sqlite3VdbeAddOp(v, OP_Integer, 0, 0);
+    memCnt = pParse->nMem++;
+    sqlite3VdbeAddOp(v, OP_MemInt, 0, memCnt);
   }
 
   if( triggers_exist ){
@@ -469,7 +471,7 @@
   /* Increment the row counter 
   */
   if( db->flags & SQLITE_CountRows && !pParse->trigStack){
-    sqlite3VdbeAddOp(v, OP_AddImm, 1, 0);
+    sqlite3VdbeAddOp(v, OP_MemIncr, 1, memCnt);
   }
 
   /* If there are triggers, close all the cursors after each iteration
@@ -514,6 +516,7 @@
   ** invoke the callback function.
   */
   if( db->flags & SQLITE_CountRows && !pParse->trigStack && pParse->nested==0 ){
+    sqlite3VdbeAddOp(v, OP_MemLoad, memCnt, 0);
     sqlite3VdbeAddOp(v, OP_Callback, 1, 0);
     sqlite3VdbeSetNumCols(v, 1);
     sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "rows updated", P3_STATIC);
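
A minimal sketch (not part of this commit, and assuming a build where the
count_changes pragma is still honoured) of the code path the update.c hunks
above generate: with row counting enabled, the UPDATE returns a single
"rows updated" row whose value is now accumulated in a dedicated memory
cell (OP_MemInt / OP_MemIncr / OP_MemLoad) instead of on the VDBE stack:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Print the single "rows updated" row produced by the UPDATE. */
    static int show(void *arg, int nCol, char **azVal, char **azCol){
      (void)arg; (void)nCol;
      printf("%s = %s\n", azCol[0], azVal[0] ? azVal[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db, "CREATE TABLE t(a);", 0, 0, 0);
      sqlite3_exec(db, "INSERT INTO t VALUES(1);", 0, 0, 0);
      sqlite3_exec(db, "INSERT INTO t VALUES(2);", 0, 0, 0);
      sqlite3_exec(db, "PRAGMA count_changes=ON;", 0, 0, 0);
      sqlite3_exec(db, "UPDATE t SET a=a+1;", show, 0, 0);  /* prints 2 */
      sqlite3_close(db);
      return 0;
    }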

Modified: freeswitch/trunk/libs/sqlite/src/utf.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/utf.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/utf.c	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 ** This file contains routines used to translate between UTF-8, 
 ** UTF-16, UTF-16BE, and UTF-16LE.
 **
-** $Id: utf.c,v 1.42 2006/10/05 11:43:53 drh Exp $
+** $Id: utf.c,v 1.43 2006/10/19 01:58:44 drh Exp $
 **
 ** Notes on UTF-8:
 **
@@ -64,7 +64,7 @@
 
 /*
 ** This table maps from the first byte of a UTF-8 character to the number
-** of trailing bytes expected. A value '255' indicates that the table key
+** of trailing bytes expected. A value '4' indicates that the table key
 ** is not a legal first byte for a UTF-8 character.
 */
 static const u8 xtra_utf8_bytes[256]  = {
@@ -79,10 +79,10 @@
 0, 0, 0, 0, 0, 0, 0, 0,     0, 0, 0, 0, 0, 0, 0, 0,
 
 /* 10wwwwww */
-255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
-255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
-255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
-255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+4, 4, 4, 4, 4, 4, 4, 4,     4, 4, 4, 4, 4, 4, 4, 4,
+4, 4, 4, 4, 4, 4, 4, 4,     4, 4, 4, 4, 4, 4, 4, 4,
+4, 4, 4, 4, 4, 4, 4, 4,     4, 4, 4, 4, 4, 4, 4, 4,
+4, 4, 4, 4, 4, 4, 4, 4,     4, 4, 4, 4, 4, 4, 4, 4,
 
 /* 110yyyyy */
 1, 1, 1, 1, 1, 1, 1, 1,     1, 1, 1, 1, 1, 1, 1, 1,
@@ -92,7 +92,7 @@
 2, 2, 2, 2, 2, 2, 2, 2,     2, 2, 2, 2, 2, 2, 2, 2,
 
 /* 11110yyy */
-3, 3, 3, 3, 3, 3, 3, 3,     255, 255, 255, 255, 255, 255, 255, 255,
+3, 3, 3, 3, 3, 3, 3, 3,     4, 4, 4, 4, 4, 4, 4, 4,
 };
 
 /*
@@ -101,11 +101,24 @@
 ** read by a naive implementation of a UTF-8 character reader. The code
 ** in the READ_UTF8 macro explains things best.
 */
-static const int xtra_utf8_bits[4] =  {
-0,
-12416,          /* (0xC0 << 6) + (0x80) */
-925824,         /* (0xE0 << 12) + (0x80 << 6) + (0x80) */
-63447168        /* (0xF0 << 18) + (0x80 << 12) + (0x80 << 6) + 0x80 */
+static const int xtra_utf8_bits[] =  {
+  0,
+  12416,          /* (0xC0 << 6) + (0x80) */
+  925824,         /* (0xE0 << 12) + (0x80 << 6) + (0x80) */
+  63447168        /* (0xF0 << 18) + (0x80 << 12) + (0x80 << 6) + 0x80 */
+};
+
+/*
+** If a UTF-8 character contains N extra bytes (N bytes follow
+** the initial byte so that the total character length is N+1) then
+** masking the character with utf_mask[N] must produce a non-zero
+** result.  Otherwise, we have an (illegal) overlong encoding.
+*/
+static const int utf_mask[] = {
+  0x00000000,
+  0xffffff80,
+  0xfffff800,
+  0xffff0000,
 };
 
 #define READ_UTF8(zIn, c) { \
@@ -113,11 +126,14 @@
   c = *(zIn)++;                                        \
   xtra = xtra_utf8_bytes[c];                           \
   switch( xtra ){                                      \
-    case 255: c = (int)0xFFFD; break;                  \
+    case 4: c = (int)0xFFFD; break;                    \
     case 3: c = (c<<6) + *(zIn)++;                     \
     case 2: c = (c<<6) + *(zIn)++;                     \
     case 1: c = (c<<6) + *(zIn)++;                     \
     c -= xtra_utf8_bits[xtra];                         \
+    if( (utf_mask[xtra]&c)==0                          \
+        || (c&0xFFFFF800)==0xD800                      \
+        || (c&0xFFFFFFFE)==0xFFFE ){  c = 0xFFFD; }    \
   }                                                    \
 }
 int sqlite3ReadUtf8(const unsigned char *z){
@@ -181,6 +197,7 @@
     int c2 = (*zIn++);                                                \
     c2 += ((*zIn++)<<8);                                              \
     c = (c2&0x03FF) + ((c&0x003F)<<10) + (((c&0x03C0)+0x0040)<<10);   \
+    if( (c & 0xFFFF0000)==0 ) c = 0xFFFD;                             \
   }                                                                   \
 }
 
@@ -191,6 +208,7 @@
     int c2 = ((*zIn++)<<8);                                           \
     c2 += (*zIn++);                                                   \
     c = (c2&0x03FF) + ((c&0x003F)<<10) + (((c&0x03C0)+0x0040)<<10);   \
+    if( (c & 0xFFFF0000)==0 ) c = 0xFFFD;                             \
   }                                                                   \
 }
 
@@ -556,7 +574,7 @@
 ** characters in each encoding are inverses of each other.
 */
 void sqlite3utfSelfTest(){
-  unsigned int i;
+  unsigned int i, t;
   unsigned char zBuf[20];
   unsigned char *z;
   int n;
@@ -568,7 +586,10 @@
     n = z-zBuf;
     z = zBuf;
     READ_UTF8(z, c);
-    assert( c==i );
+    t = i;
+    if( i>=0xD800 && i<=0xDFFF ) t = 0xFFFD;
+    if( (i&0xFFFFFFFE)==0xFFFE ) t = 0xFFFD;
+    assert( c==t );
     assert( (z-zBuf)==n );
   }
   for(i=0; i<0x00110000; i++){
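
For context on what the revised table values and the new checks in
READ_UTF8 are guarding against, here is a standalone sketch of the same
table-driven idea (written for this note, not the macro itself): the lead
byte selects a trailing-byte count, the sentinel 4 marks an illegal lead
byte, and overlong encodings, UTF-16 surrogates and 0xFFFE/0xFFFF all
collapse to the replacement character U+FFFD:

    #include <stdio.h>

    /* Decode one UTF-8 character starting at z, store the code point in
    ** *pc, and return the number of bytes consumed.  Illegal lead bytes,
    ** overlong encodings, surrogates and 0xFFFE/0xFFFF become U+FFFD,
    ** mirroring the checks added above. */
    static int readUtf8(const unsigned char *z, unsigned int *pc){
      static const unsigned int minVal[4] = { 0, 0x80, 0x800, 0x10000 };
      unsigned int c = z[0];
      int n, i;
      if( c<0x80 ){ n = 0; }
      else if( (c&0xE0)==0xC0 ){ c &= 0x1F; n = 1; }
      else if( (c&0xF0)==0xE0 ){ c &= 0x0F; n = 2; }
      else if( (c&0xF8)==0xF0 ){ c &= 0x07; n = 3; }
      else { *pc = 0xFFFD; return 1; }   /* 10xxxxxx or 11111xxx lead byte */
      for(i=0; i<n; i++){
        c = (c<<6) | (z[1+i] & 0x3F);
      }
      if( c<minVal[n]                    /* overlong encoding */
       || (c&0xFFFFF800)==0xD800         /* UTF-16 surrogate */
       || (c&0xFFFFFFFE)==0xFFFE ){      /* 0xFFFE or 0xFFFF */
        c = 0xFFFD;
      }
      *pc = c;
      return n+1;
    }

    int main(void){
      const unsigned char overlong[] = { 0xC0, 0x80, 0 };  /* overlong NUL */
      unsigned int c;
      readUtf8(overlong, &c);
      printf("U+%04X\n", c);             /* prints U+FFFD */
      return 0;
    }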

Modified: freeswitch/trunk/libs/sqlite/src/vacuum.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/vacuum.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/vacuum.c	Thu Feb 22 17:09:42 2007
@@ -14,7 +14,7 @@
 ** Most of the code in this file may be omitted by defining the
 ** SQLITE_OMIT_VACUUM macro.
 **
-** $Id: vacuum.c,v 1.63 2006/09/21 11:02:18 drh Exp $
+** $Id: vacuum.c,v 1.66 2007/01/03 23:37:29 drh Exp $
 */
 #include "sqliteInt.h"
 #include "vdbeInt.h"
@@ -22,20 +22,6 @@
 
 #ifndef SQLITE_OMIT_VACUUM
 /*
-** Generate a random name of 20 character in length.
-*/
-static void randomName(unsigned char *zBuf){
-  static const unsigned char zChars[] =
-    "abcdefghijklmnopqrstuvwxyz"
-    "0123456789";
-  int i;
-  sqlite3Randomness(20, zBuf);
-  for(i=0; i<20; i++){
-    zBuf[i] = zChars[ zBuf[i]%(sizeof(zChars)-1) ];
-  }
-}
-
-/*
 ** Execute zSql on database db. Return an error code.
 */
 static int execSql(sqlite3 *db, const char *zSql){
@@ -92,59 +78,25 @@
 */
 int sqlite3RunVacuum(char **pzErrMsg, sqlite3 *db){
   int rc = SQLITE_OK;     /* Return code from service routines */
-  const char *zFilename;  /* full pathname of the database file */
-  int nFilename;          /* number of characters  in zFilename[] */
-  char *zTemp = 0;        /* a temporary file in same directory as zFilename */
   Btree *pMain;           /* The database being vacuumed */
-  Btree *pTemp;
-  char *zSql = 0;
-  int saved_flags;       /* Saved value of the db->flags */
-  Db *pDb = 0;           /* Database to detach at end of vacuum */
+  Btree *pTemp;           /* The temporary database we vacuum into */
+  char *zSql = 0;         /* SQL statements */
+  int saved_flags;        /* Saved value of the db->flags */
+  Db *pDb = 0;            /* Database to detach at end of vacuum */
+  char zTemp[SQLITE_TEMPNAME_SIZE+20];  /* Name of the TEMP file */
 
   /* Save the current value of the write-schema flag before setting it. */
   saved_flags = db->flags;
   db->flags |= SQLITE_WriteSchema | SQLITE_IgnoreChecks;
 
+  sqlite3OsTempFileName(zTemp);
   if( !db->autoCommit ){
     sqlite3SetString(pzErrMsg, "cannot VACUUM from within a transaction", 
        (char*)0);
     rc = SQLITE_ERROR;
     goto end_of_vacuum;
   }
-
-  /* Get the full pathname of the database file and create a
-  ** temporary filename in the same directory as the original file.
-  */
   pMain = db->aDb[0].pBt;
-  zFilename = sqlite3BtreeGetFilename(pMain);
-  assert( zFilename );
-  if( zFilename[0]=='\0' ){
-    /* The in-memory database. Do nothing. Return directly to avoid causing
-    ** an error trying to DETACH the vacuum_db (which never got attached)
-    ** in the exit-handler.
-    */
-    return SQLITE_OK;
-  }
-  nFilename = strlen(zFilename);
-  zTemp = sqliteMalloc( nFilename+100 );
-  if( zTemp==0 ){
-    rc = SQLITE_NOMEM;
-    goto end_of_vacuum;
-  }
-  strcpy(zTemp, zFilename);
-
-  /* The randomName() procedure in the following loop uses an excellent
-  ** source of randomness to generate a name from a space of 1.3e+31 
-  ** possibilities.  So unless the directory already contains on the order
-  ** of 1.3e+31 files, the probability that the following loop will
-  ** run more than once or twice is vanishingly small.  We are certain
-  ** enough that this loop will always terminate (and terminate quickly)
-  ** that we don't even bother to set a maximum loop count.
-  */
-  do {
-    zTemp[nFilename] = '-';
-    randomName((unsigned char*)&zTemp[nFilename+1]);
-  } while( sqlite3OsFileExists(zTemp) );
 
   /* Attach the temporary database as 'vacuum_db'. The synchronous pragma
   ** can be set to 'off' for this file, as it is not recovered if a crash
@@ -307,10 +259,9 @@
     pDb->pSchema = 0;
   }
 
-  if( zTemp ){
-    sqlite3OsDelete(zTemp);
-    sqliteFree(zTemp);
-  }
+  sqlite3OsDelete(zTemp);
+  strcat(zTemp, "-journal");
+  sqlite3OsDelete(zTemp);
   sqliteFree( zSql );
   sqlite3ResetInternalSchema(db, 0);
 

Modified: freeswitch/trunk/libs/sqlite/src/vdbe.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/vdbe.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/vdbe.c	Thu Feb 22 17:09:42 2007
@@ -43,7 +43,7 @@
 ** in this file for details.  If in doubt, do not deviate from existing
 ** commenting and indentation practices when changing or adding code.
 **
-** $Id: vdbe.c,v 1.577 2006/09/23 20:36:02 drh Exp $
+** $Id: vdbe.c,v 1.588 2007/01/27 13:37:22 drh Exp $
 */
 #include "sqliteInt.h"
 #include "os.h"
@@ -454,6 +454,21 @@
   p->resOnStack = 0;
   db->busyHandler.nBusy = 0;
   CHECK_FOR_INTERRUPT;
+#ifdef SQLITE_DEBUG
+  if( (p->db->flags & SQLITE_VdbeListing)!=0
+    || sqlite3OsFileExists("vdbe_explain")
+  ){
+    int i;
+    printf("VDBE Program Listing:\n");
+    sqlite3VdbePrintSql(p);
+    for(i=0; i<p->nOp; i++){
+      sqlite3VdbePrintOp(stdout, i, &p->aOp[i]);
+    }
+  }
+  if( sqlite3OsFileExists("vdbe_trace") ){
+    p->trace = stdout;
+  }
+#endif
   for(pc=p->pc; rc==SQLITE_OK; pc++){
     assert( pc>=0 && pc<p->nOp );
     assert( pTos<=&p->aStack[pc] );
@@ -1812,32 +1827,31 @@
 
 /* Opcode: IsNull P1 P2 *
 **
-** If any of the top abs(P1) values on the stack are NULL, then jump
-** to P2.  Pop the stack P1 times if P1>0.   If P1<0 leave the stack
-** unchanged.
+** Check the top of the stack and jump to P2 if the top of the stack
+** is NULL.  If P1 is positive, then pop P1 elements from the stack
+** regardless of whether or not the jump is taken.  If P1 is negative,
+** pop -P1 elements from the stack only if the jump is taken and leave
+** the stack unchanged if the jump is not taken.
 */
 case OP_IsNull: {            /* same as TK_ISNULL, no-push */
-  int i, cnt;
-  Mem *pTerm;
-  cnt = pOp->p1;
-  if( cnt<0 ) cnt = -cnt;
-  pTerm = &pTos[1-cnt];
-  assert( pTerm>=p->aStack );
-  for(i=0; i<cnt; i++, pTerm++){
-    if( pTerm->flags & MEM_Null ){
-      pc = pOp->p2-1;
-      break;
+  if( pTos->flags & MEM_Null ){
+    pc = pOp->p2-1;
+    if( pOp->p1<0 ){
+      popStack(&pTos, -pOp->p1);
     }
   }
-  if( pOp->p1>0 ) popStack(&pTos, cnt);
+  if( pOp->p1>0 ){
+    popStack(&pTos, pOp->p1);
+  }
   break;
 }
 
 /* Opcode: NotNull P1 P2 *
 **
-** Jump to P2 if the top P1 values on the stack are all not NULL.  Pop the
-** stack if P1 times if P1 is greater than zero.  If P1 is less than
-** zero then leave the stack unchanged.
+** Jump to P2 if the top abs(P1) values on the stack are all not NULL.  
+** Regardless of whether or not the jump is taken, pop the stack
+** P1 times if P1 is greater than zero.  But if P1 is negative,
+** leave the stack unchanged.
 */
 case OP_NotNull: {            /* same as TK_NOTNULL, no-push */
   int i, cnt;
@@ -2010,7 +2024,9 @@
         pC->aRow = 0;
       }
     }
-    assert( zRec!=0 || avail>=payloadSize || avail>=9 );
+    /* The following assert is true in all cases except when
+    ** the database file has been corrupted externally.
+    **    assert( zRec!=0 || avail>=payloadSize || avail>=9 ); */
     szHdrSz = GetVarint((u8*)zData, offset);
 
     /* The KeyFetch() or DataFetch() above are fast and will get the entire
@@ -2501,6 +2517,8 @@
   }
   if( rc==SQLITE_OK && iMeta!=pOp->p2 ){
     sqlite3SetString(&p->zErrMsg, "database schema has changed", (char*)0);
+    sqlite3ResetInternalSchema(db, pOp->p1);
+    sqlite3ExpirePreparedStatements(db);
     rc = SQLITE_SCHEMA;
   }
   break;
@@ -2907,7 +2925,7 @@
 **
 ** The top of the stack holds a blob constructed by MakeRecord.  P1 is
 ** an index.  If no entry exists in P1 that matches the blob then jump
-** to P1.  If an entry does existing, fall through.  The cursor is left
+** to P2.  If an entry does exist, fall through.  The cursor is left
 ** pointing to the entry that matches.  The blob is popped from the stack.
 **
 ** The difference between this operation and Distinct is that
@@ -3081,6 +3099,9 @@
     pC->rowidIsValid = res==0;
     pC->nullRow = 0;
     pC->cacheStatus = CACHE_STALE;
+    /* res might be uninitialized if rc!=SQLITE_OK.  But if rc!=SQLITE_OK
+    ** processing is about to abort so we really do not care whether or not
+    ** the following jump is taken. */
     if( res!=0 ){
       pc = pOp->p2 - 1;
       pC->rowidIsValid = 0;
@@ -3852,38 +3873,6 @@
   break;
 }
 
-/* Opcode: IdxIsNull P1 P2 *
-**
-** The top of the stack contains an index entry such as might be generated
-** by the MakeIdxRec opcode.  This routine looks at the first P1 fields of
-** that key.  If any of the first P1 fields are NULL, then a jump is made
-** to address P2.  Otherwise we fall straight through.
-**
-** The index entry is always popped from the stack.
-*/
-case OP_IdxIsNull: {        /* no-push */
-  int i = pOp->p1;
-  int k, n;
-  const char *z;
-  u32 serial_type;
-
-  assert( pTos>=p->aStack );
-  assert( pTos->flags & MEM_Blob );
-  z = pTos->z;
-  n = pTos->n;
-  k = sqlite3GetVarint32((u8*)z, &serial_type);
-  for(; k<n && i>0; i--){
-    k += sqlite3GetVarint32((u8*)&z[k], &serial_type);
-    if( serial_type==0 ){   /* Serial type 0 is a NULL */
-      pc = pOp->p2-1;
-      break;
-    }
-  }
-  Release(pTos);
-  pTos--;
-  break;
-}
-
 /* Opcode: Destroy P1 P2 *
 **
 ** Delete an entire database table or index whose root page in the database
@@ -3906,9 +3895,9 @@
 */
 case OP_Destroy: {
   int iMoved;
-  Vdbe *pVdbe;
   int iCnt;
 #ifndef SQLITE_OMIT_VIRTUALTABLE
+  Vdbe *pVdbe;
   iCnt = 0;
   for(pVdbe=db->pVdbe; pVdbe; pVdbe=pVdbe->pNext){
     if( pVdbe->magic==VDBE_MAGIC_RUN && pVdbe->inVtabMethod<2 && pVdbe->pc>=0 ){
@@ -4032,10 +4021,14 @@
   break;
 }
 
-/* Opcode: ParseSchema P1 * P3
+/* Opcode: ParseSchema P1 P2 P3
 **
 ** Read and parse all entries from the SQLITE_MASTER table of database P1
-** that match the WHERE clause P3.
+** that match the WHERE clause P3.  P2 is the "force" flag.   Always do
+** the parsing if P2 is true.  If P2 is false, then this routine is a
+** no-op if the schema is not currently loaded.  In other words, if P2
+** is false, the SQLITE_MASTER table is only parsed if the rest of the
+** schema is already loaded into the symbol table.
 **
 ** This opcode invokes the parser to create a new virtual machine,
 ** then runs the new virtual machine.  It is thus a reentrant opcode.
@@ -4047,7 +4040,9 @@
   InitData initData;
 
   assert( iDb>=0 && iDb<db->nDb );
-  if( !DbHasProperty(db, iDb, DB_SchemaLoaded) ) break;
+  if( !pOp->p2 && !DbHasProperty(db, iDb, DB_SchemaLoaded) ){
+    break;
+  }
   zMaster = SCHEMA_TABLE(iDb);
   initData.db = db;
   initData.iDb = pOp->p1;
@@ -4125,11 +4120,16 @@
 
 
 #ifndef SQLITE_OMIT_INTEGRITY_CHECK
-/* Opcode: IntegrityCk * P2 *
+/* Opcode: IntegrityCk P1 P2 *
 **
 ** Do an analysis of the currently open database.  Push onto the
 ** stack the text of an error message describing any problems.
-** If there are no errors, push a "ok" onto the stack.
+** If no problems are found, push a NULL onto the stack.
+**
+** P1 is the address of a memory cell that contains the maximum
+** number of allowed errors.  At most mem[P1] errors will be reported.
+** In other words, the analysis stops as soon as mem[P1] errors are 
+** seen.  Mem[P1] is updated with the number of errors remaining.
 **
 ** The root page numbers of all tables in the database are integer
 ** values on the stack.  This opcode pulls as many integers as it
@@ -4138,13 +4138,15 @@
 ** If P2 is not zero, the check is done on the auxiliary database
 ** file, not the main database file.
 **
-** This opcode is used for testing purposes only.
+** This opcode is used to implement the integrity_check pragma.
 */
 case OP_IntegrityCk: {
   int nRoot;
   int *aRoot;
   int j;
+  int nErr;
   char *z;
+  Mem *pnErr;
 
   for(nRoot=0; &pTos[-nRoot]>=p->aStack; nRoot++){
     if( (pTos[-nRoot].flags & MEM_Int)==0 ) break;
@@ -4152,6 +4154,10 @@
   assert( nRoot>0 );
   aRoot = sqliteMallocRaw( sizeof(int*)*(nRoot+1) );
   if( aRoot==0 ) goto no_mem;
+  j = pOp->p1;
+  assert( j>=0 && j<p->nMem );
+  pnErr = &p->aMem[j];
+  assert( (pnErr->flags & MEM_Int)!=0 );
   for(j=0; j<nRoot; j++){
     Mem *pMem = &pTos[-j];
     aRoot[j] = pMem->i;
@@ -4159,12 +4165,12 @@
   aRoot[j] = 0;
   popStack(&pTos, nRoot);
   pTos++;
-  z = sqlite3BtreeIntegrityCheck(db->aDb[pOp->p2].pBt, aRoot, nRoot);
-  if( z==0 || z[0]==0 ){
-    if( z ) sqliteFree(z);
-    pTos->z = "ok";
-    pTos->n = 2;
-    pTos->flags = MEM_Str | MEM_Static | MEM_Term;
+  z = sqlite3BtreeIntegrityCheck(db->aDb[pOp->p2].pBt, aRoot, nRoot,
+                                 pnErr->i, &nErr);
+  pnErr->i -= nErr;
+  if( nErr==0 ){
+    assert( z==0 );
+    pTos->flags = MEM_Null;
   }else{
     pTos->z = z;
     pTos->n = strlen(z);
@@ -4675,9 +4681,9 @@
   assert( (pTos[0].flags&MEM_Int)!=0 && pTos[-1].flags==MEM_Int );
   nArg = pTos[-1].i;
 
-  /* Invoke the xFilter method if one is defined. */
-  if( pModule->xFilter ){
-    int res;
+  /* Invoke the xFilter method */
+  {
+    int res = 0;
     int i;
     Mem **apArg = p->apArg;
     for(i = 0; i<nArg; i++){
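
The OP_IntegrityCk rework above is the VDBE half of the error-limited
integrity_check pragma: mem[P1] holds the maximum number of problems to
report and is decremented as errors are found. A short sketch of how that
surfaces through the C API (the test.db filename is only an example):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Print each row returned by the pragma; a healthy database prints
    ** a single row containing "ok". */
    static int show(void *arg, int nCol, char **azVal, char **azCol){
      (void)arg; (void)nCol; (void)azCol;
      printf("%s\n", azVal[0] ? azVal[0] : "NULL");
      return 0;
    }

    int main(void){
      sqlite3 *db;
      if( sqlite3_open("test.db", &db)!=SQLITE_OK ) return 1;
      /* Stop the analysis after at most 10 reported problems. */
      sqlite3_exec(db, "PRAGMA integrity_check(10);", show, 0, 0);
      sqlite3_close(db);
      return 0;
    }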

Modified: freeswitch/trunk/libs/sqlite/src/vdbe.h
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/vdbe.h	(original)
+++ freeswitch/trunk/libs/sqlite/src/vdbe.h	Thu Feb 22 17:09:42 2007
@@ -15,7 +15,7 @@
 ** or VDBE.  The VDBE implements an abstract machine that runs a
 ** simple program to access and modify the underlying database.
 **
-** $Id: vdbe.h,v 1.105 2006/06/13 23:51:35 drh Exp $
+** $Id: vdbe.h,v 1.108 2007/01/09 14:01:14 drh Exp $
 */
 #ifndef _SQLITE_VDBE_H_
 #define _SQLITE_VDBE_H_
@@ -129,12 +129,16 @@
 void sqlite3VdbeResolveLabel(Vdbe*, int);
 int sqlite3VdbeCurrentAddr(Vdbe*);
 void sqlite3VdbeTrace(Vdbe*,FILE*);
+void sqlite3VdbeResetStepResult(Vdbe*);
 int sqlite3VdbeReset(Vdbe*);
 int sqliteVdbeSetVariables(Vdbe*,int,const char**);
 void sqlite3VdbeSetNumCols(Vdbe*,int);
 int sqlite3VdbeSetColName(Vdbe*, int, int, const char *, int);
 void sqlite3VdbeCountChanges(Vdbe*);
 sqlite3 *sqlite3VdbeDb(Vdbe*);
+void sqlite3VdbeSetSql(Vdbe*, const char *z, int n);
+const char *sqlite3VdbeGetSql(Vdbe*);
+void sqlite3VdbeSwap(Vdbe*,Vdbe*);
 
 #ifndef NDEBUG
   void sqlite3VdbeComment(Vdbe*, const char*, ...);

Modified: freeswitch/trunk/libs/sqlite/src/vdbeInt.h
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/vdbeInt.h	(original)
+++ freeswitch/trunk/libs/sqlite/src/vdbeInt.h	Thu Feb 22 17:09:42 2007
@@ -15,6 +15,8 @@
 ** 6000 lines long) it was split up into several smaller files and
 ** this header information was factored out.
 */
+#ifndef _VDBEINT_H_
+#define _VDBEINT_H_
 
 /*
 ** intToKey() and keyToInt() used to transform the rowid.  But with
@@ -328,6 +330,8 @@
   u8 inVtabMethod;        /* See comments above */
   int nChange;            /* Number of db changes made since last reset */
   i64 startTime;          /* Time when query started - used for profiling */
+  int nSql;             /* Number of bytes in zSql */
+  char *zSql;           /* Text of the SQL statement that generated this */
 #ifdef SQLITE_SSE
   int fetchId;          /* Statement number used by sqlite3_fetch_statement */
   int lru;              /* Counter used for LRU cache replacement */
@@ -401,3 +405,5 @@
 int sqlite3VdbeFifoPush(Fifo*, i64);
 int sqlite3VdbeFifoPop(Fifo*, i64*);
 void sqlite3VdbeFifoClear(Fifo*);
+
+#endif /* !defined(_VDBEINT_H_) */

Modified: freeswitch/trunk/libs/sqlite/src/vdbeapi.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/vdbeapi.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/vdbeapi.c	Thu Feb 22 17:09:42 2007
@@ -153,9 +153,13 @@
 /*
 ** Execute the statement pStmt, either until a row of data is ready, the
 ** statement is completely executed or an error occurs.
+**
+** This routine implements the bulk of the logic behind the sqlite3_step()
+** API.  The only thing omitted is the automatic recompile if a 
+** schema change has occurred.  That detail is handled by the
+** outer sqlite3_step() wrapper procedure.
 */
-int sqlite3_step(sqlite3_stmt *pStmt){
-  Vdbe *p = (Vdbe*)pStmt;
+static int sqlite3Step(Vdbe *p){
   sqlite3 *db;
   int rc;
 
@@ -172,7 +176,8 @@
     if( p->rc==SQLITE_OK ){
       p->rc = SQLITE_SCHEMA;
     }
-    return SQLITE_ERROR;
+    rc = SQLITE_ERROR;
+    goto end_of_step;
   }
   db = p->db;
   if( sqlite3SafetyOn(db) ){
@@ -254,9 +259,42 @@
 
   sqlite3Error(p->db, rc, 0);
   p->rc = sqlite3ApiExit(p->db, p->rc);
+end_of_step:
   assert( (rc&0xff)==rc );
+  if( p->zSql && (rc&0xff)<SQLITE_ROW ){
+    /* This behavior occurs if sqlite3_prepare_v2() was used to build
+    ** the prepared statement.  Return error codes directly */
+    return p->rc;
+  }else{
+    /* This is for legacy sqlite3_prepare() builds and when the code
+    ** is SQLITE_ROW or SQLITE_DONE */
+    return rc;
+  }
+}
+
+/*
+** This is the top-level implementation of sqlite3_step().  Call
+** sqlite3Step() to do most of the work.  If a schema error occurs,
+** call sqlite3Reprepare() and try again.
+*/
+#ifdef SQLITE_OMIT_PARSER
+int sqlite3_step(sqlite3_stmt *pStmt){
+  return sqlite3Step((Vdbe*)pStmt);
+}
+#else
+int sqlite3_step(sqlite3_stmt *pStmt){
+  int cnt = 0;
+  int rc;
+  Vdbe *v = (Vdbe*)pStmt;
+  while( (rc = sqlite3Step(v))==SQLITE_SCHEMA
+         && cnt++ < 5
+         && sqlite3Reprepare(v) ){
+    sqlite3_reset(pStmt);
+    v->expired = 0;
+  }
   return rc;
 }
+#endif
 
 /*
 ** Extract the user data from a sqlite3_context structure and return a
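
The sqlite3Step()/sqlite3_step() split above is what lets statements built
with sqlite3_prepare_v2() survive schema changes: because the SQL text is
retained, the wrapper can re-prepare and retry (up to five times) instead
of handing SQLITE_SCHEMA back to the caller. A minimal sketch of the
resulting behaviour, with error handling omitted:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void){
      sqlite3 *db;
      sqlite3_stmt *pStmt;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db, "CREATE TABLE t(a);", 0, 0, 0);
      sqlite3_exec(db, "INSERT INTO t VALUES(1);", 0, 0, 0);
      sqlite3_prepare_v2(db, "SELECT count(*) FROM t", -1, &pStmt, 0);

      /* A schema change that expires the prepared statement. */
      sqlite3_exec(db, "CREATE INDEX i1 ON t(a);", 0, 0, 0);

      /* With prepare_v2 the statement is re-prepared internally and the
      ** step succeeds; a legacy sqlite3_prepare() statement would fail
      ** with a schema-change error here instead. */
      if( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("count = %d\n", sqlite3_column_int(pStmt, 0));
      }
      sqlite3_finalize(pStmt);
      sqlite3_close(db);
      return 0;
    }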

Modified: freeswitch/trunk/libs/sqlite/src/vdbeaux.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/vdbeaux.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/vdbeaux.c	Thu Feb 22 17:09:42 2007
@@ -49,6 +49,46 @@
 }
 
 /*
+** Remember the SQL string for a prepared statement.
+*/
+void sqlite3VdbeSetSql(Vdbe *p, const char *z, int n){
+  if( p==0 ) return;
+  assert( p->zSql==0 );
+  p->zSql = sqlite3StrNDup(z, n);
+}
+
+/*
+** Return the SQL associated with a prepared statement
+*/
+const char *sqlite3VdbeGetSql(Vdbe *p){
+  return p->zSql;
+}
+
+/*
+** Swap all content between two VDBE structures.
+*/
+void sqlite3VdbeSwap(Vdbe *pA, Vdbe *pB){
+  Vdbe tmp, *pTmp;
+  char *zTmp;
+  int nTmp;
+  tmp = *pA;
+  *pA = *pB;
+  *pB = tmp;
+  pTmp = pA->pNext;
+  pA->pNext = pB->pNext;
+  pB->pNext = pTmp;
+  pTmp = pA->pPrev;
+  pA->pPrev = pB->pPrev;
+  pB->pPrev = pTmp;
+  zTmp = pA->zSql;
+  pA->zSql = pB->zSql;
+  pB->zSql = zTmp;
+  nTmp = pA->nSql;
+  pA->nSql = pB->nSql;
+  pB->nSql = nTmp;
+}
+
+/*
 ** Turn tracing on or off
 */
 void sqlite3VdbeTrace(Vdbe *p, FILE *trace){
@@ -812,21 +852,6 @@
     p->aMem[n].flags = MEM_Null;
   }
 
-#ifdef SQLITE_DEBUG
-  if( (p->db->flags & SQLITE_VdbeListing)!=0
-    || sqlite3OsFileExists("vdbe_explain")
-  ){
-    int i;
-    printf("VDBE Program Listing:\n");
-    sqlite3VdbePrintSql(p);
-    for(i=0; i<p->nOp; i++){
-      sqlite3VdbePrintOp(stdout, i, &p->aOp[i]);
-    }
-  }
-  if( sqlite3OsFileExists("vdbe_trace") ){
-    p->trace = stdout;
-  }
-#endif
   p->pTos = &p->aStack[-1];
   p->pc = -1;
   p->rc = SQLITE_OK;
@@ -1425,6 +1450,14 @@
 }
 
 /*
+** Each VDBE holds the result of the most recent sqlite3_step() call
+** in p->rc.  This routine sets that result back to SQLITE_OK.
+*/
+void sqlite3VdbeResetStepResult(Vdbe *p){
+  p->rc = SQLITE_OK;
+}
+
+/*
 ** Clean up a VDBE after execution but do not delete the VDBE just yet.
 ** Write any error messages into *pzErrMsg.  Return the result code.
 **
@@ -1574,6 +1607,7 @@
   sqliteFree(p->aStack);
   releaseMemArray(p->aColName, p->nResColumn*COLNAME_N);
   sqliteFree(p->aColName);
+  sqliteFree(p->zSql);
   p->magic = VDBE_MAGIC_DEAD;
   sqliteFree(p);
 }
@@ -1892,14 +1926,13 @@
     idx2 += GetVarint( aKey2+idx2, serial_type2 );
     if( d2>=nKey2 && sqlite3VdbeSerialTypeLen(serial_type2)>0 ) break;
 
-    /* Assert that there is enough space left in each key for the blob of
-    ** data to go with the serial type just read. This assert may fail if
-    ** the file is corrupted.  Then read the value from each key into mem1
-    ** and mem2 respectively.
+    /* Extract the values to be compared.
     */
     d1 += sqlite3VdbeSerialGet(&aKey1[d1], serial_type1, &mem1);
     d2 += sqlite3VdbeSerialGet(&aKey2[d2], serial_type2, &mem2);
 
+    /* Do the comparison
+    */
     rc = sqlite3MemCompare(&mem1, &mem2, i<nField ? pKeyInfo->aColl[i] : 0);
     if( mem1.flags & MEM_Dyn ) sqlite3VdbeMemRelease(&mem1);
     if( mem2.flags & MEM_Dyn ) sqlite3VdbeMemRelease(&mem2);

Modified: freeswitch/trunk/libs/sqlite/src/vdbemem.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/vdbemem.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/vdbemem.c	Thu Feb 22 17:09:42 2007
@@ -137,6 +137,7 @@
     }
     pMem->xDel = 0;
     pMem->z = z;
+    pMem->flags |= MEM_Term;
   }
   return SQLITE_OK;
 }

Modified: freeswitch/trunk/libs/sqlite/src/vtab.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/vtab.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/vtab.c	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 *************************************************************************
 ** This file contains code used to help implement virtual tables.
 **
-** $Id: vtab.c,v 1.37 2006/09/18 20:24:03 drh Exp $
+** $Id: vtab.c,v 1.39 2007/01/09 14:01:14 drh Exp $
 */
 #ifndef SQLITE_OMIT_VIRTUALTABLE
 #include "sqliteInt.h"
@@ -230,7 +230,7 @@
 
     sqlite3VdbeAddOp(v, OP_Expire, 0, 0);
     zWhere = sqlite3MPrintf("name='%q'", pTab->zName);
-    sqlite3VdbeOp3(v, OP_ParseSchema, iDb, 0, zWhere, P3_DYNAMIC);
+    sqlite3VdbeOp3(v, OP_ParseSchema, iDb, 1, zWhere, P3_DYNAMIC);
     sqlite3VdbeOp3(v, OP_VCreate, iDb, 0, pTab->zName, strlen(pTab->zName) + 1);
   }
 
@@ -340,7 +340,6 @@
 */
 int sqlite3VtabCallConnect(Parse *pParse, Table *pTab){
   Module *pMod;
-  const char *zModule;
   int rc = SQLITE_OK;
 
   if( !pTab || !pTab->isVirtual || pTab->pVtab ){
@@ -348,7 +347,6 @@
   }
 
   pMod = pTab->pMod;
-  zModule = pTab->azModuleArg[0];
   if( !pMod ){
     const char *zModule = pTab->azModuleArg[0];
     sqlite3ErrorMsg(pParse, "no such module: %s", zModule);

Modified: freeswitch/trunk/libs/sqlite/src/where.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/src/where.c	(original)
+++ freeswitch/trunk/libs/sqlite/src/where.c	Thu Feb 22 17:09:42 2007
@@ -16,7 +16,7 @@
 ** so is applicable.  Because this module is responsible for selecting
 ** indices, you might also think of this module as the "query optimizer".
 **
-** $Id: where.c,v 1.228 2006/06/27 13:20:22 drh Exp $
+** $Id: where.c,v 1.237 2007/02/06 13:26:33 drh Exp $
 */
 #include "sqliteInt.h"
 
@@ -43,6 +43,7 @@
 /* Forward reference
 */
 typedef struct WhereClause WhereClause;
+typedef struct ExprMaskSet ExprMaskSet;
 
 /*
 ** The query generator uses an array of instances of this structure to
@@ -106,6 +107,7 @@
 */
 struct WhereClause {
   Parse *pParse;           /* The parser context */
+  ExprMaskSet *pMaskSet;   /* Mapping of table indices to bitmasks */
   int nTerm;               /* Number of terms */
   int nSlot;               /* Number of entries in a[] */
   WhereTerm *a;            /* Each a[] describes a term of the WHERE cluase */
@@ -138,7 +140,6 @@
 ** numbers all get mapped into bit numbers that begin with 0 and contain
 ** no gaps.
 */
-typedef struct ExprMaskSet ExprMaskSet;
 struct ExprMaskSet {
   int n;                        /* Number of assigned cursor values */
   int ix[sizeof(Bitmask)*8];    /* Cursor assigned to each bit */
@@ -157,28 +158,42 @@
 #define WO_GT     (WO_EQ<<(TK_GT-TK_EQ))
 #define WO_GE     (WO_EQ<<(TK_GE-TK_EQ))
 #define WO_MATCH  64
+#define WO_ISNULL 128
 
 /*
-** Value for flags returned by bestIndex()
-*/
-#define WHERE_ROWID_EQ       0x0001   /* rowid=EXPR or rowid IN (...) */
-#define WHERE_ROWID_RANGE    0x0002   /* rowid<EXPR and/or rowid>EXPR */
-#define WHERE_COLUMN_EQ      0x0010   /* x=EXPR or x IN (...) */
-#define WHERE_COLUMN_RANGE   0x0020   /* x<EXPR and/or x>EXPR */
-#define WHERE_COLUMN_IN      0x0040   /* x IN (...) */
-#define WHERE_TOP_LIMIT      0x0100   /* x<EXPR or x<=EXPR constraint */
-#define WHERE_BTM_LIMIT      0x0200   /* x>EXPR or x>=EXPR constraint */
-#define WHERE_IDX_ONLY       0x0800   /* Use index only - omit table */
-#define WHERE_ORDERBY        0x1000   /* Output will appear in correct order */
-#define WHERE_REVERSE        0x2000   /* Scan in reverse order */
-#define WHERE_UNIQUE         0x4000   /* Selects no more than one row */
-#define WHERE_VIRTUALTABLE   0x8000   /* Use virtual-table processing */
+** Value for flags returned by bestIndex().  
+**
+** The least significant byte is reserved as a mask for WO_ values above.
+** The WhereLevel.flags field is usually set to WO_IN|WO_EQ|WO_ISNULL.
+** But if the table is the right table of a left join, WhereLevel.flags
+** is set to WO_IN|WO_EQ.  The WhereLevel.flags field can then be used as
+** the "op" parameter to findTerm when we are resolving equality constraints.
+** ISNULL constraints will then not be used on the right table of a left
+** join.  Tickets #2177 and #2189.
+*/
+#define WHERE_ROWID_EQ     0x000100   /* rowid=EXPR or rowid IN (...) */
+#define WHERE_ROWID_RANGE  0x000200   /* rowid<EXPR and/or rowid>EXPR */
+#define WHERE_COLUMN_EQ    0x001000   /* x=EXPR or x IN (...) */
+#define WHERE_COLUMN_RANGE 0x002000   /* x<EXPR and/or x>EXPR */
+#define WHERE_COLUMN_IN    0x004000   /* x IN (...) */
+#define WHERE_TOP_LIMIT    0x010000   /* x<EXPR or x<=EXPR constraint */
+#define WHERE_BTM_LIMIT    0x020000   /* x>EXPR or x>=EXPR constraint */
+#define WHERE_IDX_ONLY     0x080000   /* Use index only - omit table */
+#define WHERE_ORDERBY      0x100000   /* Output will appear in correct order */
+#define WHERE_REVERSE      0x200000   /* Scan in reverse order */
+#define WHERE_UNIQUE       0x400000   /* Selects no more than one row */
+#define WHERE_VIRTUALTABLE 0x800000   /* Use virtual-table processing */
 
 /*
 ** Initialize a preallocated WhereClause structure.
 */
-static void whereClauseInit(WhereClause *pWC, Parse *pParse){
+static void whereClauseInit(
+  WhereClause *pWC,        /* The WhereClause to be initialized */
+  Parse *pParse,           /* The parsing context */
+  ExprMaskSet *pMaskSet    /* Mapping from table indices to bitmasks */
+){
   pWC->pParse = pParse;
+  pWC->pMaskSet = pMaskSet;
   pWC->nTerm = 0;
   pWC->nSlot = ARRAYSIZE(pWC->aStatic);
   pWC->a = pWC->aStatic;
@@ -354,7 +369,7 @@
   assert( TK_LT>TK_EQ && TK_LT<TK_GE );
   assert( TK_LE>TK_EQ && TK_LE<TK_GE );
   assert( TK_GE==TK_EQ+4 );
-  return op==TK_IN || (op>=TK_EQ && op<=TK_GE);
+  return op==TK_IN || (op>=TK_EQ && op<=TK_GE) || op==TK_ISNULL;
 }
 
 /*
@@ -388,9 +403,12 @@
   assert( allowedOp(op) );
   if( op==TK_IN ){
     c = WO_IN;
+  }else if( op==TK_ISNULL ){
+    c = WO_ISNULL;
   }else{
     c = WO_EQ<<(op-TK_EQ);
   }
+  assert( op!=TK_ISNULL || c==WO_ISNULL );
   assert( op!=TK_IN || c==WO_IN );
   assert( op!=TK_EQ || c==WO_EQ );
   assert( op!=TK_LT || c==WO_LT );
@@ -422,7 +440,7 @@
        && pTerm->leftColumn==iColumn
        && (pTerm->eOperator & op)!=0
     ){
-      if( iCur>=0 && pIdx ){
+      if( iCur>=0 && pIdx && pTerm->eOperator!=WO_ISNULL ){
         Expr *pX = pTerm->pExpr;
         CollSeq *pColl;
         char idxaff;
@@ -451,7 +469,7 @@
 }
 
 /* Forward reference */
-static void exprAnalyze(SrcList*, ExprMaskSet*, WhereClause*, int);
+static void exprAnalyze(SrcList*, WhereClause*, int);
 
 /*
 ** Call exprAnalyze on all terms in a WHERE clause.  
@@ -460,12 +478,11 @@
 */
 static void exprAnalyzeAll(
   SrcList *pTabList,       /* the FROM clause */
-  ExprMaskSet *pMaskSet,   /* table masks */
   WhereClause *pWC         /* the WHERE clause to be analyzed */
 ){
   int i;
   for(i=pWC->nTerm-1; i>=0; i--){
-    exprAnalyze(pTabList, pMaskSet, pWC, i);
+    exprAnalyze(pTabList, pWC, i);
   }
 }
 
@@ -580,23 +597,27 @@
 */
 static void exprAnalyze(
   SrcList *pSrc,            /* the FROM clause */
-  ExprMaskSet *pMaskSet,    /* table masks */
   WhereClause *pWC,         /* the WHERE clause */
   int idxTerm               /* Index of the term to be analyzed */
 ){
   WhereTerm *pTerm = &pWC->a[idxTerm];
+  ExprMaskSet *pMaskSet = pWC->pMaskSet;
   Expr *pExpr = pTerm->pExpr;
   Bitmask prereqLeft;
   Bitmask prereqAll;
   int nPattern;
   int isComplete;
+  int op;
 
   if( sqlite3MallocFailed() ) return;
   prereqLeft = exprTableUsage(pMaskSet, pExpr->pLeft);
-  if( pExpr->op==TK_IN ){
+  op = pExpr->op;
+  if( op==TK_IN ){
     assert( pExpr->pRight==0 );
     pTerm->prereqRight = exprListTableUsage(pMaskSet, pExpr->pList)
                           | exprSelectTableUsage(pMaskSet, pExpr->pSelect);
+  }else if( op==TK_ISNULL ){
+    pTerm->prereqRight = 0;
   }else{
     pTerm->prereqRight = exprTableUsage(pMaskSet, pExpr->pRight);
   }
@@ -608,13 +629,13 @@
   pTerm->leftCursor = -1;
   pTerm->iParent = -1;
   pTerm->eOperator = 0;
-  if( allowedOp(pExpr->op) && (pTerm->prereqRight & prereqLeft)==0 ){
+  if( allowedOp(op) && (pTerm->prereqRight & prereqLeft)==0 ){
     Expr *pLeft = pExpr->pLeft;
     Expr *pRight = pExpr->pRight;
     if( pLeft->op==TK_COLUMN ){
       pTerm->leftCursor = pLeft->iTable;
       pTerm->leftColumn = pLeft->iColumn;
-      pTerm->eOperator = operatorMask(pExpr->op);
+      pTerm->eOperator = operatorMask(op);
     }
     if( pRight && pRight->op==TK_COLUMN ){
       WhereTerm *pNew;
@@ -622,6 +643,10 @@
       if( pTerm->leftCursor>=0 ){
         int idxNew;
         pDup = sqlite3ExprDup(pExpr);
+        if( sqlite3MallocFailed() ){
+          sqliteFree(pDup);
+          return;
+        }
         idxNew = whereClauseInsert(pWC, pDup, TERM_VIRTUAL|TERM_DYNAMIC);
         if( idxNew==0 ) return;
         pNew = &pWC->a[idxNew];
@@ -659,7 +684,7 @@
       pNewExpr = sqlite3Expr(ops[i], sqlite3ExprDup(pExpr->pLeft),
                              sqlite3ExprDup(pList->a[i].pExpr), 0);
       idxNew = whereClauseInsert(pWC, pNewExpr, TERM_VIRTUAL|TERM_DYNAMIC);
-      exprAnalyze(pSrc, pMaskSet, pWC, idxNew);
+      exprAnalyze(pSrc, pWC, idxNew);
       pTerm = &pWC->a[idxTerm];
       pWC->a[idxNew].iParent = idxTerm;
     }
@@ -688,9 +713,9 @@
     WhereTerm *pOrTerm;
 
     assert( (pTerm->flags & TERM_DYNAMIC)==0 );
-    whereClauseInit(&sOr, pWC->pParse);
+    whereClauseInit(&sOr, pWC->pParse, pMaskSet);
     whereSplit(&sOr, pExpr, TK_OR);
-    exprAnalyzeAll(pSrc, pMaskSet, &sOr);
+    exprAnalyzeAll(pSrc, &sOr);
     assert( sOr.nTerm>0 );
     j = 0;
     do{
@@ -715,23 +740,22 @@
     if( ok ){
       ExprList *pList = 0;
       Expr *pNew, *pDup;
+      Expr *pLeft = 0;
       for(i=sOr.nTerm-1, pOrTerm=sOr.a; i>=0 && ok; i--, pOrTerm++){
         if( (pOrTerm->flags & TERM_OR_OK)==0 ) continue;
         pDup = sqlite3ExprDup(pOrTerm->pExpr->pRight);
         pList = sqlite3ExprListAppend(pList, pDup, 0);
+        pLeft = pOrTerm->pExpr->pLeft;
       }
-      pDup = sqlite3Expr(TK_COLUMN, 0, 0, 0);
-      if( pDup ){
-        pDup->iTable = iCursor;
-        pDup->iColumn = iColumn;
-      }
+      assert( pLeft!=0 );
+      pDup = sqlite3ExprDup(pLeft);
       pNew = sqlite3Expr(TK_IN, pDup, 0, 0);
       if( pNew ){
         int idxNew;
         transferJoinMarkings(pNew, pExpr);
         pNew->pList = pList;
         idxNew = whereClauseInsert(pWC, pNew, TERM_VIRTUAL|TERM_DYNAMIC);
-        exprAnalyze(pSrc, pMaskSet, pWC, idxNew);
+        exprAnalyze(pSrc, pWC, idxNew);
         pTerm = &pWC->a[idxTerm];
         pWC->a[idxNew].iParent = idxTerm;
         pTerm->nChild = 1;
@@ -768,10 +792,10 @@
     }
     pNewExpr1 = sqlite3Expr(TK_GE, sqlite3ExprDup(pLeft), pStr1, 0);
     idxNew1 = whereClauseInsert(pWC, pNewExpr1, TERM_VIRTUAL|TERM_DYNAMIC);
-    exprAnalyze(pSrc, pMaskSet, pWC, idxNew1);
+    exprAnalyze(pSrc, pWC, idxNew1);
     pNewExpr2 = sqlite3Expr(TK_LT, sqlite3ExprDup(pLeft), pStr2, 0);
     idxNew2 = whereClauseInsert(pWC, pNewExpr2, TERM_VIRTUAL|TERM_DYNAMIC);
-    exprAnalyze(pSrc, pMaskSet, pWC, idxNew2);
+    exprAnalyze(pSrc, pWC, idxNew2);
     pTerm = &pWC->a[idxTerm];
     if( isComplete ){
       pWC->a[idxNew1].iParent = idxTerm;
@@ -817,6 +841,25 @@
 #endif /* SQLITE_OMIT_VIRTUALTABLE */
 }
 
+/*
+** Return TRUE if any of the expressions in pList->a[iFirst...] contain
+** a reference to any table other than the iBase table.
+*/
+static int referencesOtherTables(
+  ExprList *pList,          /* Search expressions in this list */
+  ExprMaskSet *pMaskSet,    /* Mapping from tables to bitmaps */
+  int iFirst,               /* Begin searching with the iFirst-th expression */
+  int iBase                 /* Ignore references to this table */
+){
+  Bitmask allowed = ~getMask(pMaskSet, iBase);
+  while( iFirst<pList->nExpr ){
+    if( (exprTableUsage(pMaskSet, pList->a[iFirst++].pExpr)&allowed)!=0 ){
+      return 1;
+    }
+  }
+  return 0;
+}
+
 
 /*
 ** This routine decides if pIdx can be used to satisfy the ORDER BY
@@ -839,6 +882,7 @@
 */
 static int isSortingIndex(
   Parse *pParse,          /* Parsing context */
+  ExprMaskSet *pMaskSet,  /* Mapping from table indices to bitmaps */
   Index *pIdx,            /* The index we are testing */
   int base,               /* Cursor number for the table to be sorted */
   ExprList *pOrderBy,     /* The ORDER BY clause */
@@ -857,22 +901,43 @@
 
   /* Match terms of the ORDER BY clause against columns of
   ** the index.
+  **
+  ** Note that indices have pIdx->nColumn regular columns plus
+  ** one additional column containing the rowid.  The rowid column
+  ** of the index is also allowed to match against the ORDER BY
+  ** clause.
   */
-  for(i=j=0, pTerm=pOrderBy->a; j<nTerm && i<pIdx->nColumn; i++){
+  for(i=j=0, pTerm=pOrderBy->a; j<nTerm && i<=pIdx->nColumn; i++){
     Expr *pExpr;       /* The expression of the ORDER BY pTerm */
     CollSeq *pColl;    /* The collating sequence of pExpr */
     int termSortOrder; /* Sort order for this term */
+    int iColumn;       /* The i-th column of the index.  -1 for rowid */
+    int iSortOrder;    /* 1 for DESC, 0 for ASC on the i-th index term */
+    const char *zColl; /* Name of the collating sequence for i-th index term */
 
     pExpr = pTerm->pExpr;
     if( pExpr->op!=TK_COLUMN || pExpr->iTable!=base ){
       /* Can not use an index sort on anything that is not a column in the
       ** left-most table of the FROM clause */
-      return 0;
+      break;
     }
     pColl = sqlite3ExprCollSeq(pParse, pExpr);
-    if( !pColl ) pColl = db->pDfltColl;
-    if( pExpr->iColumn!=pIdx->aiColumn[i] || 
-        sqlite3StrICmp(pColl->zName, pIdx->azColl[i]) ){
+    if( !pColl ){
+      pColl = db->pDfltColl;
+    }
+    if( i<pIdx->nColumn ){
+      iColumn = pIdx->aiColumn[i];
+      if( iColumn==pIdx->pTable->iPKey ){
+        iColumn = -1;
+      }
+      iSortOrder = pIdx->aSortOrder[i];
+      zColl = pIdx->azColl[i];
+    }else{
+      iColumn = -1;
+      iSortOrder = 0;
+      zColl = pColl->zName;
+    }
+    if( pExpr->iColumn!=iColumn || sqlite3StrICmp(pColl->zName, zColl) ){
       /* Term j of the ORDER BY clause does not match column i of the index */
       if( i<nEqCol ){
         /* If an index column that is constrained by == fails to match an
@@ -888,8 +953,8 @@
     }
     assert( pIdx->aSortOrder!=0 );
     assert( pTerm->sortOrder==0 || pTerm->sortOrder==1 );
-    assert( pIdx->aSortOrder[i]==0 || pIdx->aSortOrder[i]==1 );
-    termSortOrder = pIdx->aSortOrder[i] ^ pTerm->sortOrder;
+    assert( iSortOrder==0 || iSortOrder==1 );
+    termSortOrder = iSortOrder ^ pTerm->sortOrder;
     if( i>nEqCol ){
       if( termSortOrder!=sortOrder ){
         /* Indices can only be used if all ORDER BY terms past the
@@ -901,13 +966,29 @@
     }
     j++;
     pTerm++;
+    if( iColumn<0 && !referencesOtherTables(pOrderBy, pMaskSet, j, base) ){
+      /* If the indexed column is the primary key and everything matches
+      ** so far and none of the ORDER BY terms to the right reference other
+      ** tables in the join, then we are assured that the index can be used 
+      ** to sort because the primary key is unique and so none of the other
+      ** columns will make any difference
+      */
+      j = nTerm;
+    }
   }
 
-  /* The index can be used for sorting if all terms of the ORDER BY clause
-  ** are covered.
-  */
+  *pbRev = sortOrder!=0;
   if( j>=nTerm ){
-    *pbRev = sortOrder!=0;
+    /* All terms of the ORDER BY clause are covered by this index so
+    ** this index can be used for sorting. */
+    return 1;
+  }
+  if( pIdx->onError!=OE_None && i==pIdx->nColumn
+      && !referencesOtherTables(pOrderBy, pMaskSet, j, base) ){
+    /* All terms of this index match some prefix of the ORDER BY clause
+    ** and the index is UNIQUE and no terms on the tail of the ORDER BY
+    ** clause reference other tables in a join.  If this is all true then
+    ** the order by clause is superfluous. */
     return 1;
   }
   return 0;
@@ -921,6 +1002,7 @@
 static int sortableByRowid(
   int base,               /* Cursor number for table to be sorted */
   ExprList *pOrderBy,     /* The ORDER BY clause */
+  ExprMaskSet *pMaskSet,  /* Mapping from tables to bitmaps */
   int *pbRev              /* Set to 1 if ORDER BY is DESC */
 ){
   Expr *p;
@@ -928,8 +1010,8 @@
   assert( pOrderBy!=0 );
   assert( pOrderBy->nExpr>0 );
   p = pOrderBy->a[0].pExpr;
-  if( pOrderBy->nExpr==1 && p->op==TK_COLUMN && p->iTable==base
-          && p->iColumn==-1 ){
+  if( p->op==TK_COLUMN && p->iTable==base && p->iColumn==-1
+    && !referencesOtherTables(pOrderBy, pMaskSet, 1, base) ){
     *pbRev = pOrderBy->a[0].sortOrder;
     return 1;
   }
@@ -1232,6 +1314,7 @@
   int rev;                    /* True to scan in reverse order */
   int flags;                  /* Flags associated with pProbe */
   int nEq;                    /* Number of == or IN constraints */
+  int eqTermMask;             /* Mask of valid equality operators */
   double cost;                /* Cost of using pProbe */
 
   TRACE(("bestIndex: tbl=%s notReady=%x\n", pSrc->pTab->zName, notReady));
@@ -1246,7 +1329,7 @@
   */
   if( pProbe==0 &&
      findTerm(pWC, iCur, -1, 0, WO_EQ|WO_IN|WO_LT|WO_LE|WO_GT|WO_GE,0)==0 &&
-     (pOrderBy==0 || !sortableByRowid(iCur, pOrderBy, &rev)) ){
+     (pOrderBy==0 || !sortableByRowid(iCur, pOrderBy, pWC->pMaskSet, &rev)) ){
     *pFlags = 0;
     *ppIndex = 0;
     *pnEq = 0;
@@ -1308,7 +1391,7 @@
   /* If the table scan does not satisfy the ORDER BY clause, increase
   ** the cost by NlogN to cover the expense of sorting. */
   if( pOrderBy ){
-    if( sortableByRowid(iCur, pOrderBy, &rev) ){
+    if( sortableByRowid(iCur, pOrderBy, pWC->pMaskSet, &rev) ){
       flags |= WHERE_ORDERBY|WHERE_ROWID_RANGE;
       if( rev ){
         flags |= WHERE_REVERSE;
@@ -1323,6 +1406,17 @@
     bestFlags = flags;
   }
 
+  /* If the pSrc table is the right table of a LEFT JOIN then we may not
+  ** use an index to satisfy IS NULL constraints on that table.  This is
+  ** because columns might end up being NULL if the table does not match -
+  ** a circumstance which the index cannot help us discover.  Ticket #2177.
+  */
+  if( (pSrc->jointype & JT_LEFT)!=0 ){
+    eqTermMask = WO_EQ|WO_IN;
+  }else{
+    eqTermMask = WO_EQ|WO_IN|WO_ISNULL;
+  }
+
   /* Look at each index.
   */
   for(; pProbe; pProbe=pProbe->pNext){
@@ -1337,7 +1431,7 @@
     flags = 0;
     for(i=0; i<pProbe->nColumn; i++){
       int j = pProbe->aiColumn[i];
-      pTerm = findTerm(pWC, iCur, j, notReady, WO_EQ|WO_IN, pProbe);
+      pTerm = findTerm(pWC, iCur, j, notReady, eqTermMask, pProbe);
       if( pTerm==0 ) break;
       flags |= WHERE_COLUMN_EQ;
       if( pTerm->eOperator & WO_IN ){
@@ -1381,7 +1475,7 @@
     */
     if( pOrderBy ){
       if( (flags & WHERE_COLUMN_IN)==0 &&
-           isSortingIndex(pParse,pProbe,iCur,pOrderBy,nEq,&rev) ){
+           isSortingIndex(pParse,pWC->pMaskSet,pProbe,iCur,pOrderBy,nEq,&rev) ){
         if( flags==0 ){
           flags = WHERE_COLUMN_RANGE;
         }
@@ -1431,7 +1525,7 @@
   *ppIndex = bestIdx;
   TRACE(("best index is %s, cost=%.9g, flags=%x, nEq=%d\n",
         bestIdx ? bestIdx->zName : "(none)", lowestCost, bestFlags, bestNEq));
-  *pFlags = bestFlags;
+  *pFlags = bestFlags | eqTermMask;
   *pnEq = bestNEq;
   return lowestCost;
 }
@@ -1476,30 +1570,18 @@
 }
 
 /*
-** Generate code that builds a probe for an index.  Details:
-**
-**    *  Check the top nColumn entries on the stack.  If any
-**       of those entries are NULL, jump immediately to brk,
-**       which is the loop exit, since no index entry will match
-**       if any part of the key is NULL. Pop (nColumn+nExtra) 
-**       elements from the stack.
-**
-**    *  Construct a probe entry from the top nColumn entries in
-**       the stack with affinities appropriate for index pIdx. 
-**       Only nColumn elements are popped from the stack in this case
-**       (by OP_MakeRecord).
+** Generate code that builds a probe for an index.
 **
+** There should be nColumn values on the stack.  The index
+** to be probed is pIdx.  Pop the values from the stack and
+** replace them all with a single record that is the index
+** probe.
 */
 static void buildIndexProbe(
-  Vdbe *v, 
-  int nColumn, 
-  int nExtra, 
-  int brk, 
-  Index *pIdx
+  Vdbe *v,        /* Generate code into this VM */
+  int nColumn,    /* The number of columns to check for NULL */
+  Index *pIdx     /* Index that we will be searching */
 ){
-  sqlite3VdbeAddOp(v, OP_NotNull, -nColumn, sqlite3VdbeCurrentAddr(v)+3);
-  sqlite3VdbeAddOp(v, OP_Pop, nColumn+nExtra, 0);
-  sqlite3VdbeAddOp(v, OP_Goto, 0, brk);
   sqlite3VdbeAddOp(v, OP_MakeRecord, nColumn, 0);
   sqlite3IndexAffinityStr(v, pIdx);
 }
@@ -1523,15 +1605,17 @@
   WhereLevel *pLevel  /* When level of the FROM clause we are working on */
 ){
   Expr *pX = pTerm->pExpr;
-  if( pX->op!=TK_IN ){
-    assert( pX->op==TK_EQ );
+  Vdbe *v = pParse->pVdbe;
+  if( pX->op==TK_EQ ){
     sqlite3ExprCode(pParse, pX->pRight);
+  }else if( pX->op==TK_ISNULL ){
+    sqlite3VdbeAddOp(v, OP_Null, 0, 0);
 #ifndef SQLITE_OMIT_SUBQUERY
   }else{
     int iTab;
     int *aIn;
-    Vdbe *v = pParse->pVdbe;
 
+    assert( pX->op==TK_IN );
     sqlite3CodeSubselect(pParse, pX);
     iTab = pX->iTable;
     sqlite3VdbeAddOp(v, OP_Rewind, iTab, 0);
@@ -1603,17 +1687,20 @@
 
   /* Evaluate the equality constraints
   */
-  for(j=0; j<pIdx->nColumn; j++){
+  assert( pIdx->nColumn>=nEq );
+  for(j=0; j<nEq; j++){
     int k = pIdx->aiColumn[j];
-    pTerm = findTerm(pWC, iCur, k, notReady, WO_EQ|WO_IN, pIdx);
+    pTerm = findTerm(pWC, iCur, k, notReady, pLevel->flags, pIdx);
     if( pTerm==0 ) break;
     assert( (pTerm->flags & TERM_CODED)==0 );
     codeEqualityTerm(pParse, pTerm, brk, pLevel);
+    if( (pTerm->eOperator & WO_ISNULL)==0 ){
+      sqlite3VdbeAddOp(v, OP_IsNull, termsInMem ? -1 : -(j+1), brk);
+    }
     if( termsInMem ){
       sqlite3VdbeAddOp(v, OP_MemStore, pLevel->iMem+j+1, 1);
     }
   }
-  assert( j==nEq );
 
   /* Make sure all the constraint values are on the top of the stack
   */
@@ -1776,7 +1863,7 @@
   ** subexpression is separated by an AND operator.
   */
   initMaskSet(&maskSet);
-  whereClauseInit(&wc, pParse);
+  whereClauseInit(&wc, pParse, &maskSet);
   whereSplit(&wc, pWhere, TK_AND);
     
   /* Allocate and initialize the WhereInfo structure that will become the
@@ -1807,7 +1894,7 @@
   for(i=0; i<pTabList->nSrc; i++){
     createMask(&maskSet, pTabList->a[i].iCursor);
   }
-  exprAnalyzeAll(pTabList, &maskSet, &wc);
+  exprAnalyzeAll(pTabList, &wc);
   if( sqlite3MallocFailed() ){
     goto whereBeginNoMem;
   }
@@ -1850,8 +1937,7 @@
     for(j=iFrom, pTabItem=&pTabList->a[j]; j<pTabList->nSrc; j++, pTabItem++){
       int doNotReorder;  /* True if this table should not be reordered */
 
-      doNotReorder =  (pTabItem->jointype & (JT_LEFT|JT_CROSS))!=0
-                   || (j>0 && (pTabItem[-1].jointype & (JT_LEFT|JT_CROSS))!=0);
+      doNotReorder =  (pTabItem->jointype & (JT_LEFT|JT_CROSS))!=0;
       if( once && doNotReorder ) break;
       m = getMask(&maskSet, pTabItem->iCursor);
       if( (m & notReady)==0 ){
@@ -1986,7 +2072,9 @@
       sqlite3VdbeOp3(v, OP_OpenRead, iIdxCur, pIx->tnum,
                      (char*)pKey, P3_KEYINFO_HANDOFF);
     }
-    if( (pLevel->flags & WHERE_IDX_ONLY)!=0 ){
+    if( (pLevel->flags & (WHERE_IDX_ONLY|WHERE_COLUMN_RANGE))!=0 ){
+      /* Only call OP_SetNumColumns on the index if we might later use
+      ** OP_Column on the index. */
       sqlite3VdbeAddOp(v, OP_SetNumColumns, iIdxCur, pIx->nColumn+1);
     }
     sqlite3CodeVerifySchema(pParse, iDb);
@@ -2025,7 +2113,7 @@
     ** initialize a memory cell that records if this table matches any
     ** row of the left table of the join.
     */
-    if( pLevel->iFrom>0 && (pTabItem[-1].jointype & JT_LEFT)!=0 ){
+    if( pLevel->iFrom>0 && (pTabItem[0].jointype & JT_LEFT)!=0 ){
       if( !pParse->nMem ) pParse->nMem++;
       pLevel->iLeftJoin = pParse->nMem++;
       sqlite3VdbeAddOp(v, OP_MemInt, 0, pLevel->iLeftJoin);
@@ -2159,7 +2247,6 @@
       int btmEq=0;        /* True if btm limit uses ==. False if strictly > */
       int topOp, btmOp;   /* Operators for the top and bottom search bounds */
       int testOp;
-      int nNotNull;       /* Number of rows of index that must be non-NULL */
       int topLimit = (pLevel->flags & WHERE_TOP_LIMIT)!=0;
       int btmLimit = (pLevel->flags & WHERE_BTM_LIMIT)!=0;
 
@@ -2181,7 +2268,6 @@
       ** operator and the top bound is a < or <= operator.  For a descending
       ** index the operators are reversed.
       */
-      nNotNull = nEq + topLimit;
       if( pIdx->aSortOrder[nEq]==SQLITE_SO_ASC ){
         topOp = WO_LT|WO_LE;
         btmOp = WO_GT|WO_GE;
@@ -2206,6 +2292,7 @@
         pX = pTerm->pExpr;
         assert( (pTerm->flags & TERM_CODED)==0 );
         sqlite3ExprCode(pParse, pX->pRight);
+        sqlite3VdbeAddOp(v, OP_IsNull, -(nEq+1), brk);
         topEq = pTerm->eOperator & (WO_LE|WO_GE);
         disableTerm(pLevel, pTerm);
         testOp = OP_IdxGE;
@@ -2216,7 +2303,7 @@
       if( testOp!=OP_Noop ){
         int nCol = nEq + topLimit;
         pLevel->iMem = pParse->nMem++;
-        buildIndexProbe(v, nCol, nEq, brk, pIdx);
+        buildIndexProbe(v, nCol, pIdx);
         if( bRev ){
           int op = topEq ? OP_MoveLe : OP_MoveLt;
           sqlite3VdbeAddOp(v, op, iIdxCur, brk);
@@ -2244,6 +2331,7 @@
         pX = pTerm->pExpr;
         assert( (pTerm->flags & TERM_CODED)==0 );
         sqlite3ExprCode(pParse, pX->pRight);
+        sqlite3VdbeAddOp(v, OP_IsNull, -(nEq+1), brk);
         btmEq = pTerm->eOperator & (WO_LE|WO_GE);
         disableTerm(pLevel, pTerm);
       }else{
@@ -2251,7 +2339,7 @@
       }
       if( nEq>0 || btmLimit ){
         int nCol = nEq + btmLimit;
-        buildIndexProbe(v, nCol, 0, brk, pIdx);
+        buildIndexProbe(v, nCol, pIdx);
         if( bRev ){
           pLevel->iMem = pParse->nMem++;
           sqlite3VdbeAddOp(v, OP_MemStore, pLevel->iMem, 1);
@@ -2278,8 +2366,10 @@
           sqlite3VdbeChangeP3(v, -1, "+", P3_STATIC);
         }
       }
-      sqlite3VdbeAddOp(v, OP_RowKey, iIdxCur, 0);
-      sqlite3VdbeAddOp(v, OP_IdxIsNull, nNotNull, cont);
+      if( topLimit | btmLimit ){
+        sqlite3VdbeAddOp(v, OP_Column, iIdxCur, nEq);
+        sqlite3VdbeAddOp(v, OP_IsNull, 1, cont);
+      }
       if( !omitTable ){
         sqlite3VdbeAddOp(v, OP_IdxRowid, iIdxCur, 0);
         sqlite3VdbeAddOp(v, OP_MoveGe, iCur, 0);
@@ -2305,7 +2395,7 @@
       /* Generate a single key that will be used to both start and terminate
       ** the search
       */
-      buildIndexProbe(v, nEq, 0, brk, pIdx);
+      buildIndexProbe(v, nEq, pIdx);
       sqlite3VdbeAddOp(v, OP_MemStore, pLevel->iMem, 0);
 
       /* Generate code (1) to move to the first matching element of the table.
@@ -2326,8 +2416,6 @@
         sqlite3VdbeOp3(v, OP_IdxGE, iIdxCur, brk, "+", P3_STATIC);
         pLevel->op = OP_Next;
       }
-      sqlite3VdbeAddOp(v, OP_RowKey, iIdxCur, 0);
-      sqlite3VdbeAddOp(v, OP_IdxIsNull, nEq, cont);
       if( !omitTable ){
         sqlite3VdbeAddOp(v, OP_IdxRowid, iIdxCur, 0);
         sqlite3VdbeAddOp(v, OP_MoveGe, iCur, 0);
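
Summarising the optimizer-facing part of the where.c changes above: IS NULL
terms (WO_ISNULL) may now drive an index lookup, except when the
constrained column belongs to the right table of a LEFT JOIN, where a NULL
can simply mean "no matching row" (tickets #2177 and #2189). A small sketch
of the two cases as an application would write them (table and index names
are invented for illustration):

    #include <sqlite3.h>

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db,
        "CREATE TABLE t1(a, b);"
        "CREATE INDEX i1 ON t1(b);"
        "CREATE TABLE t2(x, y);"
        /* Eligible: the IS NULL term on the indexed column b can be
        ** satisfied by searching i1 for its NULL entries. */
        "SELECT a FROM t1 WHERE b IS NULL;"
        /* Not eligible: t1 is the right table of a LEFT JOIN, so t1.b may
        ** be NULL because no row matched, which the index cannot reveal. */
        "SELECT * FROM t2 LEFT JOIN t1 ON t2.x=t1.a WHERE t1.b IS NULL;",
        0, 0, 0);
      sqlite3_close(db);
      return 0;
    }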

Modified: freeswitch/trunk/libs/sqlite/test/all.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/all.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/all.test	Thu Feb 22 17:09:42 2007
@@ -10,7 +10,7 @@
 #***********************************************************************
 # This file runs all tests.
 #
-# $Id: all.test,v 1.35 2006/01/17 15:36:33 danielk1977 Exp $
+# $Id: all.test,v 1.36 2006/11/23 21:09:11 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -56,6 +56,7 @@
   malloc.test
   misuse.test
   memleak.test
+  speed1.test
 }
 
 # Files to include in the test.  If this list is empty then everything

Modified: freeswitch/trunk/libs/sqlite/test/alter2.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/alter2.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/alter2.test	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 # file format change that may be used in the future to implement
 # "ALTER TABLE ... ADD COLUMN".
 #
-# $Id: alter2.test,v 1.5 2006/01/03 00:33:50 drh Exp $
+# $Id: alter2.test,v 1.6 2007/01/04 14:36:02 drh Exp $
 #
 
 set testdir [file dirname $argv0]
@@ -25,7 +25,7 @@
 # These tests do not work if there is a codec.  The
 # btree_open command does not know how to handle codecs.
 #
-if {[catch {sqlite3 -has_codec} r] || $r} return
+#if {[catch {sqlite3 -has_codec} r] || $r} return
 
 # The file format change affects the way row-records stored in tables (but 
 # not indices) are interpreted. Before version 3.1.3, a row-record for a 
@@ -68,17 +68,13 @@
 #
 proc alter_table {tbl sql {file_format 2}} {
   sqlite3 dbat test.db
-puts one
   dbat eval {
     PRAGMA writable_schema = 1;
     UPDATE sqlite_master SET sql = $sql WHERE name = $tbl AND type = 'table';
     PRAGMA writable_schema = 0;
   }
-puts two
   dbat close
-puts three
   set_file_format 2
-puts four
 }
 
 #-----------------------------------------------------------------------
@@ -96,7 +92,6 @@
   # ALTER TABLE abc ADD COLUMN c;
   alter_table abc {CREATE TABLE abc(a, b, c);}
 } {}
-exit
 do_test alter2-1.3 {
   execsql {
     SELECT * FROM abc;
@@ -127,7 +122,7 @@
   execsql {
     SELECT sum(a), c FROM abc GROUP BY c;
   }
-} {8.0 {} 1.0 10}
+} {8 {} 1 10}
 do_test alter2-1.9 {
   # ALTER TABLE abc ADD COLUMN d;
   alter_table abc {CREATE TABLE abc(a, b, c, d);}
@@ -234,12 +229,12 @@
 
 #---------------------------------------------------------------------
 # Check that an error occurs if the database is upgraded to a file
-# format that SQLite does not support (in this case 4). Note: The 
+# format that SQLite does not support (in this case 5). Note: The 
 # file format is checked each time the schema is read, so changing the
 # file format requires incrementing the schema cookie.
 #
 do_test alter2-4.1 {
-  set_file_format 4
+  set_file_format 5
 } {}
 do_test alter2-4.2 {
   catchsql {
@@ -341,7 +336,7 @@
   execsql {
     SELECT a, typeof(a), b, typeof(b), c, typeof(c) FROM t1 LIMIT 1;
   }
-} {1 integer -123.0 real 5 text}
+} {1 integer -123 integer 5 text}
 
 #-----------------------------------------------------------------------
 # Test that UPDATE trigger tables work with default values, and that when
@@ -367,11 +362,11 @@
     UPDATE t1 SET c = 10 WHERE a = 1;
     SELECT a, typeof(a), b, typeof(b), c, typeof(c) FROM t1 LIMIT 1;
   }
-} {1 integer -123.0 real 10 text}
+} {1 integer -123 integer 10 text}
 ifcapable trigger {
   do_test alter2-8.3 {
     set ::val
-  } {-123 real 5 text -123 real 10 text}
+  } {-123 integer 5 text -123 integer 10 text}
 }
 
 #-----------------------------------------------------------------------
@@ -395,7 +390,7 @@
       DELETE FROM t1 WHERE a = 2;
     }
     set ::val
-  } {-123 real 5 text}
+  } {-123 integer 5 text}
 }
 
 #-----------------------------------------------------------------------

Modified: freeswitch/trunk/libs/sqlite/test/btree.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/btree.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/btree.test	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 # This file implements regression tests for SQLite library.  The
 # focus of this script is btree database backend
 #
-# $Id: btree.test,v 1.37 2006/08/16 16:42:48 drh Exp $
+# $Id: btree.test,v 1.38 2007/01/03 23:37:29 drh Exp $
 
 
 set testdir [file dirname $argv0]
@@ -548,7 +548,6 @@
 } {}
 btree_page_dump $::b1 1
 btree_page_dump $::b1 2
-btree_page_dump $::b1 3
 do_test btree-8.1.1 {
   lindex [btree_pager_stats $::b1] 1
 } {1}

Modified: freeswitch/trunk/libs/sqlite/test/capi2.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/capi2.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/capi2.test	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 # This file implements regression tests for SQLite library.  The
 # focus of this script testing the callback-free C/C++ API.
 #
-# $Id: capi2.test,v 1.32 2006/08/16 16:42:48 drh Exp $
+# $Id: capi2.test,v 1.33 2007/01/03 23:37:29 drh Exp $
 #
 
 set testdir [file dirname $argv0]
@@ -71,7 +71,7 @@
 do_test capi2-1.7 {
   list [sqlite3_column_count $VM] [get_row_values $VM] [get_column_names $VM]
 } {2 {} {name rowid text INTEGER}}
-do_test capi2-1.8 {
+do_test capi2-1.8-misuse {
   sqlite3_step $VM
 } {SQLITE_MISUSE}
 
@@ -208,7 +208,7 @@
   sqlite3_finalize $VM
 } {SQLITE_OK}
 do_test capi2-3.11b {db changes} {1}
-do_test capi2-3.12 {
+do_test capi2-3.12-misuse {
   sqlite3_finalize $VM
 } {SQLITE_MISUSE}
 do_test capi2-3.13 {

Modified: freeswitch/trunk/libs/sqlite/test/capi3.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/capi3.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/capi3.test	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 # This file implements regression tests for SQLite library.  The
 # focus of this script testing the callback-free C/C++ API.
 #
-# $Id: capi3.test,v 1.46 2006/08/16 16:42:48 drh Exp $
+# $Id: capi3.test,v 1.47 2007/01/03 23:37:29 drh Exp $
 #
 
 set testdir [file dirname $argv0]
@@ -152,14 +152,14 @@
 do_test capi3-3.5 {
   sqlite3_close $db2
 } {SQLITE_OK}
-do_test capi3-3.6.1 {
+do_test capi3-3.6.1-misuse {
   sqlite3_close $db2
 } {SQLITE_MISUSE}
-do_test capi3-3.6.2 {
+do_test capi3-3.6.2-misuse {
   sqlite3_errmsg $db2
 } {library routine called out of sequence}
 ifcapable {utf16} {
-  do_test capi3-3.6.3 {
+  do_test capi3-3.6.3-misuse {
     utf8 [sqlite3_errmsg16 $db2]
   } {library routine called out of sequence}
 }
@@ -612,7 +612,7 @@
 do_test capi3-6.3 {
   sqlite3_finalize $STMT
 } {SQLITE_OK}
-do_test capi3-6.4 {
+do_test capi3-6.4-misuse {
   db cache flush
   sqlite3_close $DB
 } {SQLITE_OK}
@@ -991,7 +991,7 @@
 
 # Ticket #1219:  Make sure binding APIs can handle a NULL pointer.
 #
-do_test capi3-14.1 {
+do_test capi3-14.1-misuse {
   set rc [catch {sqlite3_bind_text 0 1 hello 5} msg]
   lappend rc $msg
 } {1 SQLITE_MISUSE}

Added: freeswitch/trunk/libs/sqlite/test/capi3c.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/capi3c.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,1216 @@
+# 2006 November 08
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.  
+#
+# This is a copy of the capi3.test file that has been adapted to
+# test the new sqlite3_prepare_v2 interface.
+#
+# $Id: capi3c.test,v 1.6 2007/01/12 23:43:43 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Return the UTF-16 representation of the supplied UTF-8 string $str.
+# If $nt is true, append two 0x00 bytes as a nul terminator.
+proc utf16 {str {nt 1}} {
+  set r [encoding convertto unicode $str]
+  if {$nt} {
+    append r "\x00\x00"
+  }
+  return $r
+}
+
+# Return the UTF-8 representation of the supplied UTF-16 string $str. 
+proc utf8 {str} {
+  # If $str ends in two 0x00 0x00 bytes, knock these off before
+  # converting to UTF-8 using TCL.
+  binary scan $str \c* vals
+  if {[lindex $vals end]==0 && [lindex $vals end-1]==0} {
+    set str [binary format \c* [lrange $vals 0 end-2]]
+  }
+
+  set r [encoding convertfrom unicode $str]
+  return $r
+}
+
+# These tests complement those in capi2.test. They are organized
+# as follows:
+#
+# capi3c-1.*: Test sqlite3_prepare_v2 
+# capi3c-2.*: Test sqlite3_prepare16_v2 
+# capi3c-3.*: Test sqlite3_open
+# capi3c-4.*: Test sqlite3_open16
+# capi3c-5.*: Test the various sqlite3_result_* APIs
+# capi3c-6.*: Test that sqlite3_close fails if there are outstanding VMs.
+#
+
+set DB [sqlite3_connection_pointer db]
+
+do_test capi3c-1.0 {
+  sqlite3_get_autocommit $DB
+} 1
+do_test capi3c-1.1 {
+  set STMT [sqlite3_prepare_v2 $DB {SELECT name FROM sqlite_master} -1 TAIL]
+  sqlite3_finalize $STMT
+  set TAIL
+} {}
+do_test capi3c-1.2 {
+  sqlite3_errcode $DB
+} {SQLITE_OK}
+do_test capi3c-1.3 {
+  sqlite3_errmsg $DB
+} {not an error}
+do_test capi3c-1.4 {
+  set sql {SELECT name FROM sqlite_master;SELECT 10}
+  set STMT [sqlite3_prepare_v2 $DB $sql -1 TAIL]
+  sqlite3_finalize $STMT
+  set TAIL
+} {SELECT 10}
+do_test capi3c-1.5 {
+  set sql {SELECT namex FROM sqlite_master}
+  catch {
+    set STMT [sqlite3_prepare_v2 $DB $sql -1 TAIL]
+  }
+} {1}
+do_test capi3c-1.6 {
+  sqlite3_errcode $DB
+} {SQLITE_ERROR}
+do_test capi3c-1.7 {
+  sqlite3_errmsg $DB
+} {no such column: namex}
+
+ifcapable {utf16} {
+  do_test capi3c-2.1 {
+    set sql16 [utf16 {SELECT name FROM sqlite_master}]
+    set STMT [sqlite3_prepare16_v2  $DB $sql16 -1 ::TAIL]
+    sqlite3_finalize $STMT
+    utf8 $::TAIL
+  } {}
+  do_test capi3c-2.2 {
+    set sql [utf16 {SELECT name FROM sqlite_master;SELECT 10}]
+    set STMT [sqlite3_prepare16_v2  $DB $sql -1 TAIL]
+    sqlite3_finalize $STMT
+    utf8 $TAIL
+  } {SELECT 10}
+  do_test capi3c-2.3 {
+    set sql [utf16 {SELECT namex FROM sqlite_master}]
+    catch {
+      set STMT [sqlite3_prepare16_v2  $DB $sql -1 TAIL]
+    }
+  } {1}
+  do_test capi3c-2.4 {
+    sqlite3_errcode $DB
+  } {SQLITE_ERROR}
+  do_test capi3c-2.5 {
+    sqlite3_errmsg $DB
+  } {no such column: namex}
+
+  ifcapable schema_pragmas {
+    do_test capi3c-2.6 {
+      execsql {CREATE TABLE tablename(x)}
+      set sql16 [utf16 {PRAGMA table_info("TableName")}]
+      set STMT [sqlite3_prepare16_v2  $DB $sql16 -1 TAIL]
+      sqlite3_step $STMT
+    } SQLITE_ROW
+    do_test capi3c-2.7 {
+      sqlite3_step $STMT
+    } SQLITE_DONE
+    do_test capi3c-2.8 {
+      sqlite3_finalize $STMT
+    } SQLITE_OK
+  }
+
+} ;# endif utf16
+
+# rename sqlite3_open sqlite3_open_old
+# proc sqlite3_open {fname options} {sqlite3_open_new $fname $options}
+
+do_test capi3c-3.1 {
+  set db2 [sqlite3_open test.db {}]
+  sqlite3_errcode $db2
+} {SQLITE_OK}
+# FIX ME: Should test that the db handle works.
+do_test capi3c-3.2 {
+  sqlite3_close $db2
+} {SQLITE_OK}
+do_test capi3c-3.3 {
+  catch {
+    set db2 [sqlite3_open /bogus/path/test.db {}]
+  }
+  sqlite3_errcode $db2
+} {SQLITE_CANTOPEN}
+do_test capi3c-3.4 {
+  sqlite3_errmsg $db2
+} {unable to open database file}
+do_test capi3c-3.5 {
+  sqlite3_close $db2
+} {SQLITE_OK}
+do_test capi3c-3.6.1-misuse {
+  sqlite3_close $db2
+} {SQLITE_MISUSE}
+do_test capi3c-3.6.2-misuse {
+  sqlite3_errmsg $db2
+} {library routine called out of sequence}
+ifcapable {utf16} {
+  do_test capi3c-3.6.3-misuse {
+    utf8 [sqlite3_errmsg16 $db2]
+  } {library routine called out of sequence}
+}
+
+# rename sqlite3_open ""
+# rename sqlite3_open_old sqlite3_open
+
+ifcapable {utf16} {
+do_test capi3c-4.1 {
+  set db2 [sqlite3_open16 [utf16 test.db] {}]
+  sqlite3_errcode $db2
+} {SQLITE_OK}
+# FIX ME: Should test that the db handle works.
+do_test capi3c-4.2 {
+  sqlite3_close $db2
+} {SQLITE_OK}
+do_test capi3c-4.3 {
+  catch {
+    set db2 [sqlite3_open16 [utf16 /bogus/path/test.db] {}]
+  }
+  sqlite3_errcode $db2
+} {SQLITE_CANTOPEN}
+do_test capi3c-4.4 {
+  utf8 [sqlite3_errmsg16 $db2]
+} {unable to open database file}
+do_test capi3c-4.5 {
+  sqlite3_close $db2
+} {SQLITE_OK}
+} ;# utf16
+
+# This proc is used to test the following API calls:
+#
+# sqlite3_column_count
+# sqlite3_column_name
+# sqlite3_column_name16
+# sqlite3_column_decltype
+# sqlite3_column_decltype16
+#
+# $STMT is a compiled SQL statement. $test is a prefix
+# to use for test names within this proc. $names is a list
+# of the column names that should be returned by $STMT.
+# $decltypes is a list of column declaration types for $STMT.
+#
+# Example:
+#
+# set STMT [sqlite3_prepare_v2 $DB "SELECT 1, 2, 3;" -1 DUMMY]
+# check_header $STMT test1.1 {1 2 3} {"" "" ""}
+#
+proc check_header {STMT test names decltypes} {
+
+  # Use the return value of sqlite3_column_count() to build
+  # a list of column indexes. i.e. If sqlite3_column_count
+  # is 3, build the list {0 1 2}.
+  set ::idxlist [list]
+  set ::numcols [sqlite3_column_count $STMT]
+  for {set i 0} {$i < $::numcols} {incr i} {lappend ::idxlist $i}
+
+  # Column names in UTF-8
+  do_test $test.1 {
+    set cnamelist [list]
+    foreach i $idxlist {lappend cnamelist [sqlite3_column_name $STMT $i]} 
+    set cnamelist
+  } $names
+
+  # Column names in UTF-16
+  ifcapable {utf16} {
+    do_test $test.2 {
+      set cnamelist [list]
+      foreach i $idxlist {
+        lappend cnamelist [utf8 [sqlite3_column_name16 $STMT $i]]
+      }
+      set cnamelist
+    } $names
+  }
+
+  # Column names in UTF-8
+  do_test $test.3 {
+    set cnamelist [list]
+    foreach i $idxlist {lappend cnamelist [sqlite3_column_name $STMT $i]} 
+    set cnamelist
+  } $names
+
+  # Column names in UTF-16
+  ifcapable {utf16} {
+    do_test $test.4 {
+      set cnamelist [list]
+      foreach i $idxlist {
+        lappend cnamelist [utf8 [sqlite3_column_name16 $STMT $i]]
+      }
+      set cnamelist
+    } $names
+  }
+
+  # Column names in UTF-8
+  do_test $test.5 {
+    set cnamelist [list]
+    foreach i $idxlist {lappend cnamelist [sqlite3_column_decltype $STMT $i]} 
+    set cnamelist
+  } $decltypes
+
+  # Column declaration types in UTF-16
+  ifcapable {utf16} {
+    do_test $test.6 {
+      set cnamelist [list]
+      foreach i $idxlist {
+        lappend cnamelist [utf8 [sqlite3_column_decltype16 $STMT $i]]
+      }
+      set cnamelist
+    } $decltypes
+  }
+
+
+  # Test some out of range conditions:
+  ifcapable {utf16} {
+    do_test $test.7 {
+      list \
+        [sqlite3_column_name $STMT -1] \
+        [sqlite3_column_name16 $STMT -1] \
+        [sqlite3_column_decltype $STMT -1] \
+        [sqlite3_column_decltype16 $STMT -1] \
+        [sqlite3_column_name $STMT $numcols] \
+        [sqlite3_column_name16 $STMT $numcols] \
+        [sqlite3_column_decltype $STMT $numcols] \
+        [sqlite3_column_decltype16 $STMT $numcols]
+    } {{} {} {} {} {} {} {} {}}
+  }
+} 
+
+# This proc is used to test the following API calls:
+#
+# sqlite3_column_origin_name
+# sqlite3_column_origin_name16
+# sqlite3_column_table_name
+# sqlite3_column_table_name16
+# sqlite3_column_database_name
+# sqlite3_column_database_name16
+#
+# $STMT is a compiled SQL statement. $test is a prefix
+# to use for test names within this proc. $dbs, $tables and $cols
+# are lists of the database, table and column names from which each
+# result column returned by $STMT originates.
+#
+# Example:
+#
+# set STMT [sqlite3_prepare_v2 $DB "SELECT a FROM t1;" -1 DUMMY]
+# check_origin_header $STMT test1.1 {main} {t1} {a}
+#
+proc check_origin_header {STMT test dbs tables cols} {
+  # If sqlite3_column_origin_name() and friends are not compiled into
+  # this build, this proc is a no-op.
+ifcapable columnmetadata {
+
+    # Use the return value of sqlite3_column_count() to build
+    # a list of column indexes. i.e. If sqlite3_column_count
+    # is 3, build the list {0 1 2}.
+    set ::idxlist [list]
+    set ::numcols [sqlite3_column_count $STMT]
+    for {set i 0} {$i < $::numcols} {incr i} {lappend ::idxlist $i}
+  
+    # Database names in UTF-8
+    do_test $test.8 {
+      set cnamelist [list]
+      foreach i $idxlist {
+        lappend cnamelist [sqlite3_column_database_name $STMT $i]
+      } 
+      set cnamelist
+    } $dbs
+  
+    # Database names in UTF-16
+    ifcapable {utf16} {
+      do_test $test.9 {
+        set cnamelist [list]
+        foreach i $idxlist {
+          lappend cnamelist [utf8 [sqlite3_column_database_name16 $STMT $i]]
+        }
+        set cnamelist
+      } $dbs
+    }
+  
+    # Table names in UTF-8
+    do_test $test.10 {
+      set cnamelist [list]
+      foreach i $idxlist {
+        lappend cnamelist [sqlite3_column_table_name $STMT $i]
+      } 
+      set cnamelist
+    } $tables
+  
+    # Table names in UTF-16
+    ifcapable {utf16} {
+      do_test $test.11 {
+        set cnamelist [list]
+        foreach i $idxlist {
+          lappend cnamelist [utf8 [sqlite3_column_table_name16 $STMT $i]]
+        }
+        set cnamelist
+      } $tables
+    }
+  
+    # Origin names in UTF-8
+    do_test $test.12 {
+      set cnamelist [list]
+      foreach i $idxlist {
+        lappend cnamelist [sqlite3_column_origin_name $STMT $i]
+      } 
+      set cnamelist
+    } $cols
+  
+    # Origin declaration types in UTF-16
+    ifcapable {utf16} {
+      do_test $test.13 {
+        set cnamelist [list]
+        foreach i $idxlist {
+          lappend cnamelist [utf8 [sqlite3_column_origin_name16 $STMT $i]]
+        }
+        set cnamelist
+      } $cols
+    }
+  }
+}
+
+# This proc is used to test the following APIs:
+#
+# sqlite3_data_count
+# sqlite3_column_type
+# sqlite3_column_int
+# sqlite3_column_text
+# sqlite3_column_text16
+# sqlite3_column_double
+#
+# $STMT is a compiled SQL statement for which the previous call 
+# to sqlite3_step returned SQLITE_ROW. $test is a prefix to use 
+# for test names within this proc. $types is a list of the 
+# manifest types for the current row. $ints, $doubles and $strings
+# are lists of the integer, real and string representations of
+# the values in the current row.
+#
+# Example:
+#
+# set STMT [sqlite3_prepare_v2 $DB "SELECT 'hello', 1.1, NULL" -1 DUMMY]
+# sqlite3_step $STMT
+# check_data $STMT test1.2 {TEXT REAL NULL} {0 1 0} {0 1.1 0} {hello 1.1 {}}
+#
+proc check_data {STMT test types ints doubles strings} {
+
+  # Use the return value of sqlite3_column_count() to build
+  # a list of column indexes. i.e. If sqlite3_column_count
+  # is 3, build the list {0 1 2}.
+  set ::idxlist [list]
+  set numcols [sqlite3_data_count $STMT]
+  for {set i 0} {$i < $numcols} {incr i} {lappend ::idxlist $i}
+
+# types
+do_test $test.1 {
+  set types [list]
+  foreach i $idxlist {lappend types [sqlite3_column_type $STMT $i]}
+  set types
+} $types
+
+# Integers
+do_test $test.2 {
+  set ints [list]
+  foreach i $idxlist {lappend ints [sqlite3_column_int64 $STMT $i]}
+  set ints
+} $ints
+
+# bytes
+set lens [list]
+foreach i $::idxlist {
+  lappend lens [string length [lindex $strings $i]]
+}
+do_test $test.3 {
+  set bytes [list]
+  set lens [list]
+  foreach i $idxlist {
+    lappend bytes [sqlite3_column_bytes $STMT $i]
+  }
+  set bytes
+} $lens
+
+# bytes16
+ifcapable {utf16} {
+  set lens [list]
+  foreach i $::idxlist {
+    lappend lens [expr 2 * [string length [lindex $strings $i]]]
+  }
+  do_test $test.4 {
+    set bytes [list]
+    set lens [list]
+    foreach i $idxlist {
+      lappend bytes [sqlite3_column_bytes16 $STMT $i]
+    }
+    set bytes
+  } $lens
+}
+
+# Blob
+do_test $test.5 {
+  set utf8 [list]
+  foreach i $idxlist {lappend utf8 [sqlite3_column_blob $STMT $i]}
+  set utf8
+} $strings
+
+# UTF-8
+do_test $test.6 {
+  set utf8 [list]
+  foreach i $idxlist {lappend utf8 [sqlite3_column_text $STMT $i]}
+  set utf8
+} $strings
+
+# Floats
+do_test $test.7 {
+  set utf8 [list]
+  foreach i $idxlist {lappend utf8 [sqlite3_column_double $STMT $i]}
+  set utf8
+} $doubles
+
+# UTF-16
+ifcapable {utf16} {
+  do_test $test.8 {
+    set utf8 [list]
+    foreach i $idxlist {lappend utf8 [utf8 [sqlite3_column_text16 $STMT $i]]}
+    set utf8
+  } $strings
+}
+
+# Integers
+do_test $test.9 {
+  set ints [list]
+  foreach i $idxlist {lappend ints [sqlite3_column_int $STMT $i]}
+  set ints
+} $ints
+
+# Floats
+do_test $test.10 {
+  set utf8 [list]
+  foreach i $idxlist {lappend utf8 [sqlite3_column_double $STMT $i]}
+  set utf8
+} $doubles
+
+# UTF-8
+do_test $test.11 {
+  set utf8 [list]
+  foreach i $idxlist {lappend utf8 [sqlite3_column_text $STMT $i]}
+  set utf8
+} $strings
+
+# Types
+do_test $test.12 {
+  set types [list]
+  foreach i $idxlist {lappend types [sqlite3_column_type $STMT $i]}
+  set types
+} $types
+
+# Test that an out of range request returns the equivalent of NULL
+do_test $test.13 {
+  sqlite3_column_int $STMT -1
+} {0}
+do_test $test.14 {
+  sqlite3_column_text $STMT -1
+} {}
+
+}
+
+ifcapable !floatingpoint {
+  finish_test
+  return
+}
+
+do_test capi3c-5.0 {
+  execsql {
+    CREATE TABLE t1(a VARINT, b BLOB, c VARCHAR(16));
+    INSERT INTO t1 VALUES(1, 2, 3);
+    INSERT INTO t1 VALUES('one', 'two', NULL);
+    INSERT INTO t1 VALUES(1.2, 1.3, 1.4);
+  }
+  set sql "SELECT * FROM t1"
+  set STMT [sqlite3_prepare_v2 $DB $sql -1 TAIL]
+  sqlite3_column_count $STMT
+} 3
+
+check_header $STMT capi3c-5.1 {a b c} {VARINT BLOB VARCHAR(16)}
+check_origin_header $STMT capi3c-5.1 {main main main} {t1 t1 t1} {a b c}
+do_test capi3c-5.2 {
+  sqlite3_step $STMT
+} SQLITE_ROW
+
+check_header $STMT capi3c-5.3 {a b c} {VARINT BLOB VARCHAR(16)}
+check_origin_header $STMT capi3c-5.3 {main main main} {t1 t1 t1} {a b c}
+check_data $STMT capi3c-5.4 {INTEGER INTEGER TEXT} {1 2 3} {1.0 2.0 3.0} {1 2 3}
+
+do_test capi3c-5.5 {
+  sqlite3_step $STMT
+} SQLITE_ROW
+
+check_header $STMT capi3c-5.6 {a b c} {VARINT BLOB VARCHAR(16)}
+check_origin_header $STMT capi3c-5.6 {main main main} {t1 t1 t1} {a b c}
+check_data $STMT capi3c-5.7 {TEXT TEXT NULL} {0 0 0} {0.0 0.0 0.0} {one two {}}
+
+do_test capi3c-5.8 {
+  sqlite3_step $STMT
+} SQLITE_ROW
+
+check_header $STMT capi3c-5.9 {a b c} {VARINT BLOB VARCHAR(16)}
+check_origin_header $STMT capi3c-5.9 {main main main} {t1 t1 t1} {a b c}
+check_data $STMT capi3c-5.10 {FLOAT FLOAT TEXT} {1 1 1} {1.2 1.3 1.4} {1.2 1.3 1.4}
+
+do_test capi3c-5.11 {
+  sqlite3_step $STMT
+} SQLITE_DONE
+
+do_test capi3c-5.12 {
+  sqlite3_finalize $STMT
+} SQLITE_OK
+
+do_test capi3c-5.20 {
+  set sql "SELECT a, sum(b), max(c) FROM t1 GROUP BY a"
+  set STMT [sqlite3_prepare_v2 $DB $sql -1 TAIL]
+  sqlite3_column_count $STMT
+} 3
+
+check_header $STMT capi3c-5.21 {a sum(b) max(c)} {VARINT {} {}}
+check_origin_header $STMT capi3c-5.22 {main {} {}} {t1 {} {}} {a {} {}}
+do_test capi3c-5.23 {
+  sqlite3_finalize $STMT
+} SQLITE_OK
+
+
+set ::ENC [execsql {pragma encoding}]
+db close
+
+do_test capi3c-6.0 {
+btree_breakpoint
+  sqlite3 db test.db
+  set DB [sqlite3_connection_pointer db]
+btree_breakpoint
+  sqlite3_key $DB xyzzy
+  set sql {SELECT a FROM t1 order by rowid}
+  set STMT [sqlite3_prepare_v2 $DB $sql -1 TAIL]
+  expr 0
+} {0}
+do_test capi3c-6.1 {
+  db cache flush
+  sqlite3_close $DB
+} {SQLITE_BUSY}
+do_test capi3c-6.2 {
+  sqlite3_step $STMT
+} {SQLITE_ROW}
+check_data $STMT capi3c-6.3 {INTEGER} {1} {1.0} {1}
+do_test capi3c-6.3 {
+  sqlite3_finalize $STMT
+} {SQLITE_OK}
+do_test capi3c-6.4 {
+  db cache flush
+  sqlite3_close $DB
+} {SQLITE_OK}
+do_test capi3c-6.99-misuse {
+  db close
+} {}
+
+if {![sqlite3 -has-codec]} {
+  # Test what happens when the library encounters a newer file format.
+  # Do this by updating the file format via the btree layer.
+  do_test capi3c-7.1 {
+    set ::bt [btree_open test.db 10 0]
+    btree_begin_transaction $::bt
+    set meta [btree_get_meta $::bt]
+    lset meta 2 5
+    eval [concat btree_update_meta $::bt [lrange $meta 0 end]]
+    btree_commit $::bt
+    btree_close $::bt
+  } {}
+  do_test capi3c-7.2 {
+    sqlite3 db test.db
+    catchsql {
+      SELECT * FROM sqlite_master;
+    }
+  } {1 {unsupported file format}}
+  db close
+}
+
+if {![sqlite3 -has-codec]} {
+  # Now test that the library correctly handles bogus entries in the
+  # sqlite_master table (schema corruption).
+  do_test capi3c-8.1 {
+    file delete -force test.db
+    file delete -force test.db-journal
+    sqlite3 db test.db
+    execsql {
+      CREATE TABLE t1(a);
+    }
+    db close
+  } {}
+  do_test capi3c-8.2 {
+    set ::bt [btree_open test.db 10 0]
+    btree_begin_transaction $::bt
+    set ::bc [btree_cursor $::bt 1 1]
+
+    # Build a 5-field row record consisting of 5 null records. This is
+    # officially black magic.
+    catch {unset data}
+    set data [binary format c6 {6 0 0 0 0 0}]
+    btree_insert $::bc 5 $data
+
+    btree_close_cursor $::bc
+    btree_commit $::bt
+    btree_close $::bt
+  } {}
+  do_test capi3c-8.3 {
+    sqlite3 db test.db
+    catchsql {
+      SELECT * FROM sqlite_master;
+    }
+  } {1 {malformed database schema}}
+  do_test capi3c-8.4 {
+    set ::bt [btree_open test.db 10 0]
+    btree_begin_transaction $::bt
+    set ::bc [btree_cursor $::bt 1 1]
+  
+    # Build a 5-field row record. The first field is a string 'table', and
+    # subsequent fields are all NULL. Replace the other broken record with
+    # this one and try to read the schema again. The broken record uses
+    # either UTF-8 or native UTF-16 (if this file is being run by
+    # utf16.test).
+    if { [string match UTF-16* $::ENC] } {
+      set data [binary format c6a10 {6 33 0 0 0 0} [utf16 table]]
+    } else {
+      set data [binary format c6a5 {6 23 0 0 0 0} table]
+    }
+    btree_insert $::bc 5 $data
+  
+    btree_close_cursor $::bc
+    btree_commit $::bt
+    btree_close $::bt
+  } {};
+  do_test capi3c-8.5 {
+    db close 
+    sqlite3 db test.db
+    catchsql {
+      SELECT * FROM sqlite_master;
+    }
+  } {1 {malformed database schema}}
+  db close
+}
+file delete -force test.db
+file delete -force test.db-journal
+
+
+# Test the english language string equivalents for sqlite error codes
+set code2english [list \
+SQLITE_OK         {not an error} \
+SQLITE_ERROR      {SQL logic error or missing database} \
+SQLITE_PERM       {access permission denied} \
+SQLITE_ABORT      {callback requested query abort} \
+SQLITE_BUSY       {database is locked} \
+SQLITE_LOCKED     {database table is locked} \
+SQLITE_NOMEM      {out of memory} \
+SQLITE_READONLY   {attempt to write a readonly database} \
+SQLITE_INTERRUPT  {interrupted} \
+SQLITE_IOERR      {disk I/O error} \
+SQLITE_CORRUPT    {database disk image is malformed} \
+SQLITE_FULL       {database or disk is full} \
+SQLITE_CANTOPEN   {unable to open database file} \
+SQLITE_PROTOCOL   {database locking protocol failure} \
+SQLITE_EMPTY      {table contains no data} \
+SQLITE_SCHEMA     {database schema has changed} \
+SQLITE_CONSTRAINT {constraint failed} \
+SQLITE_MISMATCH   {datatype mismatch} \
+SQLITE_MISUSE     {library routine called out of sequence} \
+SQLITE_NOLFS      {kernel lacks large file support} \
+SQLITE_AUTH       {authorization denied} \
+SQLITE_FORMAT     {auxiliary database format error} \
+SQLITE_RANGE      {bind or column index out of range} \
+SQLITE_NOTADB     {file is encrypted or is not a database} \
+unknownerror      {unknown error} \
+]
+
+set test_number 1
+foreach {code english} $code2english {
+  do_test capi3c-9.$test_number "sqlite3_test_errstr $code" $english
+  incr test_number
+}
+
+# Test the error message when a "real" out of memory occurs.
+if {[info command sqlite_malloc_stat]!=""} {
+set sqlite_malloc_fail 1
+do_test capi3c-10-1 {
+  sqlite3 db test.db
+  set DB [sqlite3_connection_pointer db]
+  sqlite_malloc_fail 1
+  catchsql {
+    select * from sqlite_master;
+  }
+} {1 {out of memory}}
+do_test capi3c-10-2 {
+  sqlite3_errmsg $::DB
+} {out of memory}
+ifcapable {utf16} {
+  do_test capi3c-10-3 {
+    utf8 [sqlite3_errmsg16 $::DB]
+  } {out of memory}
+}
+db close
+sqlite_malloc_fail 0
+}
+
+# The following tests - capi3c-11.* - test that a COMMIT or ROLLBACK
+# statement issued while there are still outstanding VMs that are part of
+# the transaction fails.
+sqlite3 db test.db
+set DB [sqlite3_connection_pointer db]
+sqlite_register_test_function $DB func
+do_test capi3c-11.1 {
+  execsql {
+    BEGIN;
+    CREATE TABLE t1(a, b);
+    INSERT INTO t1 VALUES(1, 'int');
+    INSERT INTO t1 VALUES(2, 'notatype');
+  }
+} {}
+do_test capi3c-11.1.1 {
+  sqlite3_get_autocommit $DB
+} 0
+do_test capi3c-11.2 {
+  set STMT [sqlite3_prepare_v2 $DB "SELECT func(b, a) FROM t1" -1 TAIL]
+  sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3c-11.3 {
+  catchsql {
+    COMMIT;
+  }
+} {1 {cannot commit transaction - SQL statements in progress}}
+do_test capi3c-11.3.1 {
+  sqlite3_get_autocommit $DB
+} 0
+do_test capi3c-11.4 {
+  sqlite3_step $STMT
+} {SQLITE_ERROR}
+do_test capi3c-11.5 {
+  sqlite3_finalize $STMT
+} {SQLITE_ERROR}
+do_test capi3c-11.6 {
+  catchsql {
+    SELECT * FROM t1;
+  }
+} {0 {1 int 2 notatype}}
+do_test capi3c-11.6.1 {
+  sqlite3_get_autocommit $DB
+} 0
+do_test capi3c-11.7 {
+  catchsql {
+    COMMIT;
+  }
+} {0 {}}
+do_test capi3c-11.7.1 {
+  sqlite3_get_autocommit $DB
+} 1
+do_test capi3c-11.8 {
+  execsql {
+    CREATE TABLE t2(a);
+    INSERT INTO t2 VALUES(1);
+    INSERT INTO t2 VALUES(2);
+    BEGIN;
+    INSERT INTO t2 VALUES(3);
+  }
+} {}
+do_test capi3c-11.8.1 {
+  sqlite3_get_autocommit $DB
+} 0
+do_test capi3c-11.9 {
+  set STMT [sqlite3_prepare_v2 $DB "SELECT a FROM t2" -1 TAIL]
+  sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3c-11.9.1 {
+  sqlite3_get_autocommit $DB
+} 0
+do_test capi3c-11.9.2 {
+  catchsql {
+    ROLLBACK;
+  }
+} {1 {cannot rollback transaction - SQL statements in progress}}
+do_test capi3c-11.9.3 {
+  sqlite3_get_autocommit $DB
+} 0
+do_test capi3c-11.10 {
+  sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3c-11.11 {
+  sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3c-11.12 {
+  sqlite3_step $STMT
+} {SQLITE_DONE}
+do_test capi3c-11.13 {
+  sqlite3_finalize $STMT
+} {SQLITE_OK}
+do_test capi3c-11.14 {
+  execsql {
+    SELECT a FROM t2;
+  }
+} {1 2 3}
+do_test capi3c-11.14.1 {
+  sqlite3_get_autocommit $DB
+} 0
+do_test capi3c-11.15 {
+  catchsql {
+    ROLLBACK;
+  }
+} {0 {}}
+do_test capi3c-11.15.1 {
+  sqlite3_get_autocommit $DB
+} 1
+do_test capi3c-11.16 {
+  execsql {
+    SELECT a FROM t2;
+  }
+} {1 2}
+
+# Sanity check on the definition of 'outstanding VM'. This means any VM
+# that has had sqlite3_step() called more recently than sqlite3_finalize() or
+# sqlite3_reset(). So a VM that has just been prepared or reset does not
+# count as an active VM.
+do_test capi3c-11.17 {
+  execsql {
+    BEGIN;
+  }
+} {}
+do_test capi3c-11.18 {
+  set STMT [sqlite3_prepare_v2 $DB "SELECT a FROM t1" -1 TAIL]
+  catchsql {
+    COMMIT;
+  }
+} {0 {}}
+do_test capi3c-11.19 {
+  sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3c-11.20 {
+  catchsql {
+    BEGIN;
+    COMMIT;
+  }
+} {1 {cannot commit transaction - SQL statements in progress}}
+do_test capi3c-11.20.1 {
+  sqlite3_reset $STMT
+  catchsql {
+    COMMIT;
+  }
+} {0 {}}
+do_test capi3c-11.21 {
+  sqlite3_finalize $STMT
+} {SQLITE_OK}
+
+# The following tests - capi3c-12.* - check that it's Ok to start a
+# transaction while other VMs are active, and that it's Ok to execute
+# atomic updates in the same situation 
+#
+do_test capi3c-12.1 {
+  set STMT [sqlite3_prepare_v2 $DB "SELECT a FROM t2" -1 TAIL]
+  sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3c-12.2 {
+  catchsql {
+    INSERT INTO t1 VALUES(3, NULL);
+  }
+} {0 {}}
+do_test capi3c-12.3 {
+  catchsql {
+    INSERT INTO t2 VALUES(4);
+  }
+} {0 {}}
+do_test capi3c-12.4 {
+  catchsql {
+    BEGIN;
+    INSERT INTO t1 VALUES(4, NULL);
+  }
+} {0 {}}
+do_test capi3c-12.5 {
+  sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3c-12.5.1 {
+  sqlite3_step $STMT
+} {SQLITE_ROW}
+do_test capi3c-12.6 {
+  sqlite3_step $STMT
+} {SQLITE_DONE}
+do_test capi3c-12.7 {
+  sqlite3_finalize $STMT
+} {SQLITE_OK}
+do_test capi3c-12.8 {
+  execsql {
+    COMMIT;
+    SELECT a FROM t1;
+  }
+} {1 2 3 4}
+
+# Test cases capi3c-13.* test the sqlite3_clear_bindings() and 
+# sqlite3_sleep APIs.
+#
+if {[llength [info commands sqlite3_clear_bindings]]>0} {
+  do_test capi3c-13.1 {
+    execsql {
+      DELETE FROM t1;
+    }
+    set STMT [sqlite3_prepare_v2 $DB "INSERT INTO t1 VALUES(?, ?)" -1 TAIL]
+    sqlite3_step $STMT
+  } {SQLITE_DONE}
+  do_test capi3c-13.2 {
+    sqlite3_reset $STMT
+    sqlite3_bind_text $STMT 1 hello 5
+    sqlite3_bind_text $STMT 2 world 5
+    sqlite3_step $STMT
+  } {SQLITE_DONE}
+  do_test capi3c-13.3 {
+    sqlite3_reset $STMT
+    sqlite3_clear_bindings $STMT
+    sqlite3_step $STMT
+  } {SQLITE_DONE}
+  do_test capi3c-13-4 {
+    sqlite3_finalize $STMT
+    execsql {
+      SELECT * FROM t1;
+    }
+  } {{} {} hello world {} {}}
+}
+if {[llength [info commands sqlite3_sleep]]>0} {
+  do_test capi3c-13-5 {
+    set ms [sqlite3_sleep 80]
+    expr {$ms==80 || $ms==1000}
+  } {1}
+}
+
+# Ticket #1219:  Make sure binding APIs can handle a NULL pointer.
+#
+do_test capi3c-14.1 {
+  set rc [catch {sqlite3_bind_text 0 1 hello 5} msg]
+  lappend rc $msg
+} {1 SQLITE_MISUSE}
+
+# Ticket #1650:  Honor the nBytes parameter to sqlite3_prepare.
+#
+do_test capi3c-15.1 {
+  set sql {SELECT * FROM t2}
+  set nbytes [string length $sql]
+  append sql { WHERE a==1}
+  set STMT [sqlite3_prepare_v2 $DB $sql $nbytes TAIL]
+  sqlite3_step $STMT
+  sqlite3_column_int $STMT 0
+} {1}
+do_test capi3c-15.2 {
+  sqlite3_step $STMT
+  sqlite3_column_int $STMT 0
+} {2}
+do_test capi3c-15.3 {
+  sqlite3_finalize $STMT
+} {SQLITE_OK}
+
+# Make sure code is always generated even if an IF EXISTS or 
+# IF NOT EXISTS clause is present and the table in question does or
+# does not exist.  That way we will always have a prepared statement
+# to expire when the schema changes.
+#
+do_test capi3c-16.1 {
+  set sql {DROP TABLE IF EXISTS t3}
+  set STMT [sqlite3_prepare_v2 $DB $sql -1 TAIL]
+  sqlite3_finalize $STMT
+  expr {$STMT!=""}
+} {1}
+do_test capi3c-16.2 {
+  set sql {CREATE TABLE IF NOT EXISTS t1(x,y)}
+  set STMT [sqlite3_prepare_v2 $DB $sql -1 TAIL]
+  sqlite3_finalize $STMT
+  expr {$STMT!=""}
+} {1}
+
+# But still we do not generate code if there is no SQL
+#
+do_test capi3c-16.3 {
+  set STMT [sqlite3_prepare_v2 $DB {} -1 TAIL]
+  sqlite3_finalize $STMT
+  expr {$STMT==""}
+} {1}
+do_test capi3c-16.4 {
+  set STMT [sqlite3_prepare_v2 $DB {;} -1 TAIL]
+  sqlite3_finalize $STMT
+  expr {$STMT==""}
+} {1}
+
+# Ticket #2154.
+#
+do_test capi3c-17.1 {
+  set STMT [sqlite3_prepare_v2 $DB {SELECT max(a) FROM t2} -1 TAIL]
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-17.2 {
+  sqlite3_column_int $STMT 0
+} 4
+do_test capi3c-17.3 {
+  sqlite3_step $STMT
+} SQLITE_DONE
+do_test capi3c-17.4 {
+  sqlite3_reset $STMT
+  db eval {CREATE INDEX i2 ON t2(a)}
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-17.5 {
+  sqlite3_column_int $STMT 0
+} 4
+do_test capi3c-17.6 {
+  sqlite3_step $STMT
+} SQLITE_DONE
+do_test capi3c-17.7 {
+  sqlite3_reset $STMT
+  db eval {DROP INDEX i2}
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-17.8 {
+  sqlite3_column_int $STMT 0
+} 4
+do_test capi3c-17.9 {
+  sqlite3_step $STMT
+} SQLITE_DONE
+do_test capi3c-17.10 {
+  sqlite3_finalize $STMT
+  set STMT [sqlite3_prepare_v2 $DB {SELECT b FROM t1 WHERE a=?} -1 TAIL]
+  sqlite3_bind_int $STMT 1 2
+  db eval {
+    DELETE FROM t1;
+    INSERT INTO t1 VALUES(1,'one');
+    INSERT INTO t1 VALUES(2,'two');
+    INSERT INTO t1 VALUES(3,'three');
+    INSERT INTO t1 VALUES(4,'four');
+  }
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-17.11 {
+  sqlite3_column_text $STMT 0
+} two
+do_test capi3c-17.12 {
+  sqlite3_step $STMT
+} SQLITE_DONE
+do_test capi3c-17.13 {
+  sqlite3_reset $STMT
+  db eval {CREATE INDEX i1 ON t1(a)}
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-17.14 {
+  sqlite3_column_text $STMT 0
+} two
+do_test capi3c-17.15 {
+  sqlite3_step $STMT
+} SQLITE_DONE
+do_test capi3c-17.16 {
+  sqlite3_reset $STMT
+  db eval {DROP INDEX i1}
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-17.17 {
+  sqlite3_column_text $STMT 0
+} two
+do_test capi3c-17.18 {
+  sqlite3_step $STMT
+} SQLITE_DONE
+do_test capi3c-17.99 {
+  sqlite3_finalize $STMT
+} SQLITE_OK
+
+# On the mailing list it has been reported that finalizing after
+# an SQLITE_BUSY return leads to a segfault.  Here we test that case.
+#
+do_test capi3c-18.1 {
+  sqlite3 db2 test.db
+  set STMT [sqlite3_prepare_v2 $DB {SELECT max(a) FROM t1} -1 TAIL]
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-18.2 {
+  sqlite3_column_int $STMT 0
+} 4
+do_test capi3c-18.3 {
+  sqlite3_reset $STMT
+  db2 eval {BEGIN EXCLUSIVE}
+  sqlite3_step $STMT
+} SQLITE_BUSY
+do_test capi3c-18.4 {
+  sqlite3_finalize $STMT
+} SQLITE_BUSY
+do_test capi3c-18.5 {
+  db2 eval {COMMIT}
+  db2 close
+} {}
+
+# Ticket #2158.  The sqlite3_step() will still return SQLITE_SCHEMA
+# if the database schema changes in a way that makes the statement
+# no longer valid.
+#
+do_test capi3c-19.1 {
+  db eval {
+     CREATE TABLE t3(x,y);
+     INSERT INTO t3 VALUES(1,2);
+  }
+  set STMT [sqlite3_prepare_v2 $DB {SELECT * FROM t3} -1 TAIL]
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-19.2 {
+  sqlite3_column_int $STMT 0
+} 1
+do_test capi3c-19.3 {
+  sqlite3_step $STMT
+} SQLITE_DONE
+do_test capi3c-19.4 {
+  sqlite3_reset $STMT
+  db eval {DROP TABLE t3}
+  sqlite3_step $STMT
+} SQLITE_SCHEMA
+do_test capi3c-19.4.2 {
+  sqlite3_errmsg $DB
+} {no such table: t3}
+do_test capi3c-19.5 {
+  sqlite3_reset $STMT
+  db eval {
+     CREATE TABLE t3(x,y);
+     INSERT INTO t3 VALUES(1,2);
+  }
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-19.6 {
+  sqlite3_column_int $STMT 1
+} 2
+do_test capi3c-19.99 {
+  sqlite3_finalize $STMT
+} SQLITE_OK
+
+# Make sure a change in a separate database connection does not
+# cause an SQLITE_SCHEMA return.
+#
+do_test capi3c-20.1 {
+  set STMT [sqlite3_prepare_v2 $DB {SELECT * FROM t3} -1 TAIL]
+  sqlite3 db2 test.db
+  db2 eval {CREATE TABLE t4(x)}
+  sqlite3_step $STMT
+} SQLITE_ROW
+do_test capi3c-20.2 {
+  sqlite3_column_int $STMT 1
+} 2
+do_test capi3c-20.3 {
+  sqlite3_step $STMT
+} SQLITE_DONE
+do_test capi3c-20.4 {
+  db2 close
+  sqlite3_finalize $STMT
+} SQLITE_OK
+
+finish_test
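
The capi3c.test file above re-runs the capi3.test cases through sqlite3_prepare_v2(). As an illustrative sketch only (hypothetical table name; the committed tests use t2 and t3 for this, and the db/$DB handles are assumed open as in the tests), the property that capi3c-17.* and capi3c-19.* rely on is that sqlite3_step() transparently re-prepares a _v2 statement after a benign schema change, and reports SQLITE_SCHEMA only when the statement can no longer be compiled at all:

    db eval {CREATE TABLE demo(x); INSERT INTO demo VALUES(1)}
    set STMT [sqlite3_prepare_v2 $DB {SELECT x FROM demo} -1 TAIL]
    db eval {CREATE INDEX demo_i ON demo(x)}
    sqlite3_step $STMT     ;# SQLITE_ROW, the index creation is absorbed
    sqlite3_reset $STMT
    db eval {DROP TABLE demo}
    sqlite3_step $STMT     ;# SQLITE_SCHEMA, the table is gone
    sqlite3_finalize $STMT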

Modified: freeswitch/trunk/libs/sqlite/test/collate1.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/collate1.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/collate1.test	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 # This file implements regression tests for SQLite library.  The
 # focus of this script is page cache subsystem.
 #
-# $Id: collate1.test,v 1.4 2005/11/01 15:48:25 drh Exp $
+# $Id: collate1.test,v 1.5 2007/02/01 23:02:46 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -91,6 +91,21 @@
 } {{} 0x2D 0x119}
 do_test collate1-1.5 {
   execsql {
+    SELECT c2 COLLATE hex FROM collate1t1 ORDER BY 1
+  }
+} {{} 0x2D 0x119}
+do_test collate1-1.6 {
+  execsql {
+    SELECT c2 COLLATE hex FROM collate1t1 ORDER BY 1 ASC
+  }
+} {{} 0x2D 0x119}
+do_test collate1-1.7 {
+  execsql {
+    SELECT c2 COLLATE hex FROM collate1t1 ORDER BY 1 DESC
+  }
+} {0x119 0x2D {}}
+do_test collate1-1.99 {
+  execsql {
     DROP TABLE collate1t1;
   }
 } {}
@@ -133,7 +148,59 @@
         ORDER BY 1 COLLATE binary ASC, 2 COLLATE hex ASC;
   }
 } {{} {} 11 0x11 11 0x101 5 0xA 5 0x11 7 0xA}
-do_test collate1-2.7 {
+do_test collate1-2.12.1 {
+  execsql {
+    SELECT c1 COLLATE numeric, c2 FROM collate1t1 
+     ORDER BY 1, 2 COLLATE hex;
+  }
+} {{} {} 5 0xA 5 0x11 7 0xA 11 0x11 11 0x101}
+do_test collate1-2.12.2 {
+  execsql {
+    SELECT c1 COLLATE hex, c2 FROM collate1t1 
+     ORDER BY 1 COLLATE numeric, 2 COLLATE hex;
+  }
+} {{} {} 5 0xA 5 0x11 7 0xA 11 0x11 11 0x101}
+do_test collate1-2.12.3 {
+  execsql {
+    SELECT c1, c2 COLLATE hex FROM collate1t1 
+     ORDER BY 1 COLLATE numeric, 2;
+  }
+} {{} {} 5 0xA 5 0x11 7 0xA 11 0x11 11 0x101}
+do_test collate1-2.12.4 {
+  execsql {
+    SELECT c1 COLLATE numeric, c2 COLLATE hex
+      FROM collate1t1 
+     ORDER BY 1, 2;
+  }
+} {{} {} 5 0xA 5 0x11 7 0xA 11 0x11 11 0x101}
+do_test collate1-2.13 {
+  execsql {
+    SELECT c1 COLLATE binary, c2 COLLATE hex
+      FROM collate1t1
+     ORDER BY 1, 2;
+  }
+} {{} {} 11 0x11 11 0x101 5 0xA 5 0x11 7 0xA}
+do_test collate1-2.14 {
+  execsql {
+    SELECT c1, c2
+      FROM collate1t1 ORDER BY 1 COLLATE binary DESC, 2 COLLATE hex;
+  }
+} {7 0xA 5 0xA 5 0x11 11 0x11 11 0x101 {} {}}
+do_test collate1-2.15 {
+  execsql {
+    SELECT c1 COLLATE binary, c2 COLLATE hex
+      FROM collate1t1 
+     ORDER BY 1 DESC, 2 DESC;
+  }
+} {7 0xA 5 0x11 5 0xA 11 0x101 11 0x11 {} {}}
+do_test collate1-2.16 {
+  execsql {
+    SELECT c1 COLLATE hex, c2 COLLATE binary
+      FROM collate1t1 
+     ORDER BY 1 COLLATE binary ASC, 2 COLLATE hex ASC;
+  }
+} {{} {} 11 0x11 11 0x101 5 0xA 5 0x11 7 0xA}
+do_test collate1-2.99 {
   execsql {
     DROP TABLE collate1t1;
   }
@@ -180,6 +247,12 @@
     SELECT a as c1, b as c2 FROM collate1t1 ORDER BY c1 COLLATE binary;
   }
 } {{} {} 0x45 69 0x5 5 1 1}
+do_test collate1-3.5.1 {
+  execsql {
+    SELECT a COLLATE binary as c1, b as c2
+      FROM collate1t1 ORDER BY c1;
+  }
+} {{} {} 0x45 69 0x5 5 1 1}
 do_test collate1-3.6 {
   execsql {
     DROP TABLE collate1t1;
@@ -220,6 +293,11 @@
     SELECT c1||'' FROM collate1t1 ORDER BY 1;
   }
 } {{} 1 101 12}
+do_test collate1-4.4.1 {
+  execsql {
+    SELECT (c1||'') COLLATE numeric FROM collate1t1 ORDER BY 1;
+  }
+} {{} 1 12 101}
 do_test collate1-4.5 {
   execsql {
     DROP TABLE collate1t1;
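
The new collate1-1.5 through collate1-4.4.1 cases above apply the COLLATE operator directly to result columns and expressions. For context, an illustrative sketch only (hypothetical collation name and temp table; the hex and numeric collations used by the real tests are presumably registered earlier in collate1.test, outside the hunks shown) of how a Tcl-level collation is registered and then named in a COLLATE clause:

    proc nocase_cmp {a b} {string compare -nocase $a $b}
    db collate mynocase nocase_cmp
    db eval {
      CREATE TEMP TABLE demo(t);
      INSERT INTO demo VALUES('B');
      INSERT INTO demo VALUES('a');
      SELECT t FROM demo ORDER BY t COLLATE mynocase;  -- a B
      SELECT t FROM demo ORDER BY t;                   -- B a (binary)
    }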

Modified: freeswitch/trunk/libs/sqlite/test/collate2.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/collate2.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/collate2.test	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 # This file implements regression tests for SQLite library.  The
 # focus of this script is page cache subsystem.
 #
-# $Id: collate2.test,v 1.4 2005/01/21 03:12:16 danielk1977 Exp $
+# $Id: collate2.test,v 1.5 2007/02/01 23:02:46 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -98,16 +98,67 @@
     SELECT a FROM collate2t1 WHERE a > 'aa' ORDER BY 1;
   }
 } {ab bA bB ba bb}
+do_test collate2-1.1.1 {
+  execsql {
+    SELECT a FROM collate2t1 WHERE a COLLATE binary > 'aa' ORDER BY 1;
+  }
+} {ab bA bB ba bb}
+do_test collate2-1.1.2 {
+  execsql {
+    SELECT a FROM collate2t1 WHERE b COLLATE binary > 'aa' ORDER BY 1;
+  }
+} {ab bA bB ba bb}
+do_test collate2-1.1.3 {
+  execsql {
+    SELECT a FROM collate2t1 WHERE c COLLATE binary > 'aa' ORDER BY 1;
+  }
+} {ab bA bB ba bb}
 do_test collate2-1.2 {
   execsql {
     SELECT b FROM collate2t1 WHERE b > 'aa' ORDER BY 1, oid;
   }
 } {ab aB Ab AB ba bA Ba BA bb bB Bb BB}
+do_test collate2-1.2.1 {
+  execsql {
+    SELECT b FROM collate2t1 WHERE a COLLATE nocase > 'aa'
+     ORDER BY 1, oid;
+  }
+} {ab aB Ab AB ba bA Ba BA bb bB Bb BB}
+do_test collate2-1.2.2 {
+  execsql {
+    SELECT b FROM collate2t1 WHERE b COLLATE nocase > 'aa'
+     ORDER BY 1, oid;
+  }
+} {ab aB Ab AB ba bA Ba BA bb bB Bb BB}
+do_test collate2-1.2.3 {
+  execsql {
+    SELECT b FROM collate2t1 WHERE c COLLATE nocase > 'aa'
+     ORDER BY 1, oid;
+  }
+} {ab aB Ab AB ba bA Ba BA bb bB Bb BB}
 do_test collate2-1.3 {
   execsql {
     SELECT c FROM collate2t1 WHERE c > 'aa' ORDER BY 1;
   }
 } {ba Ab Bb ab bb}
+do_test collate2-1.3.1 {
+  execsql {
+    SELECT c FROM collate2t1 WHERE a COLLATE backwards > 'aa'
+    ORDER BY 1;
+  }
+} {ba Ab Bb ab bb}
+do_test collate2-1.3.2 {
+  execsql {
+    SELECT c FROM collate2t1 WHERE b COLLATE backwards > 'aa'
+    ORDER BY 1;
+  }
+} {ba Ab Bb ab bb}
+do_test collate2-1.3.3 {
+  execsql {
+    SELECT c FROM collate2t1 WHERE c COLLATE backwards > 'aa'
+    ORDER BY 1;
+  }
+} {ba Ab Bb ab bb}
 do_test collate2-1.4 {
   execsql {
     SELECT a FROM collate2t1 WHERE a < 'aa' ORDER BY 1;

Modified: freeswitch/trunk/libs/sqlite/test/conflict.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/conflict.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/conflict.test	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 # This file implements tests for the conflict resolution extension
 # to SQLite.
 #
-# $Id: conflict.test,v 1.27 2006/01/17 09:35:02 danielk1977 Exp $
+# $Id: conflict.test,v 1.28 2007/01/03 23:37:29 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -309,6 +309,7 @@
     if {$conf1!=""} {set conf1 "ON CONFLICT $conf1"}
     execsql {pragma temp_store=file}
     set ::sqlite_opentemp_count 0
+if {$i==2} btree_breakpoint
     set r0 [catch {execsql [subst {
       DROP TABLE t1;
       CREATE TABLE t1(a,b,c, UNIQUE(a) $conf1);

Modified: freeswitch/trunk/libs/sqlite/test/date.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/date.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/date.test	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 # This file implements regression tests for SQLite library.  The
 # focus of this file is testing date and time functions.
 #
-# $Id: date.test,v 1.17 2006/09/25 18:03:29 drh Exp $
+# $Id: date.test,v 1.19 2007/01/08 16:19:07 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -123,6 +123,17 @@
 datetest 3.11.12 {strftime('%W','2004-12-31')} 52
 datetest 3.11.13 {strftime('%W','2007-12-31')} 53
 datetest 3.11.14 {strftime('%W','2007-01-01')} 01
+datetest 3.11.15 {strftime('%W %j',2454109.04140970)} {02 008}
+datetest 3.11.16 {strftime('%W %j',2454109.04140971)} {02 008}
+datetest 3.11.17 {strftime('%W %j',2454109.04140972)} {02 008}
+datetest 3.11.18 {strftime('%W %j',2454109.04140973)} {02 008}
+datetest 3.11.19 {strftime('%W %j',2454109.04140974)} {02 008}
+datetest 3.11.20 {strftime('%W %j',2454109.04140975)} {02 008}
+datetest 3.11.21 {strftime('%W %j',2454109.04140976)} {02 008}
+datetest 3.11.22 {strftime('%W %j',2454109.04140977)} {02 008}
+datetest 3.11.23 {strftime('%W %j',2454109.04140978)} {02 008}
+datetest 3.11.24 {strftime('%W %j',2454109.04140979)} {02 008}
+datetest 3.11.25 {strftime('%W %j',2454109.04140980)} {02 008}
 datetest 3.12 {strftime('%Y','2003-10-31 12:34:56.432')} 2003
 datetest 3.13 {strftime('%%','2003-10-31 12:34:56.432')} %
 datetest 3.14 {strftime('%_','2003-10-31 12:34:56.432')} NULL
@@ -284,5 +295,19 @@
   }
 } {{2006-09-24 10:50:26.047}}
 
+# Ticket #2153
+datetest 13.2 {strftime('%Y-%m-%d %H:%M:%S', '2007-01-01 12:34:59.6')} \
+  {2007-01-01 12:34:59}
+datetest 13.3 {strftime('%Y-%m-%d %H:%M:%f', '2007-01-01 12:34:59.6')} \
+  {2007-01-01 12:34:59.600}
+datetest 13.4 {strftime('%Y-%m-%d %H:%M:%S', '2007-01-01 12:59:59.6')} \
+  {2007-01-01 12:59:59}
+datetest 13.5 {strftime('%Y-%m-%d %H:%M:%f', '2007-01-01 12:59:59.6')} \
+  {2007-01-01 12:59:59.600}
+datetest 13.6 {strftime('%Y-%m-%d %H:%M:%S', '2007-01-01 23:59:59.6')} \
+  {2007-01-01 23:59:59}
+datetest 13.7 {strftime('%Y-%m-%d %H:%M:%f', '2007-01-01 23:59:59.6')} \
+  {2007-01-01 23:59:59.600}
+
 
 finish_test
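
The ticket #2153 cases added above pin down that %S never rounds the fractional part of a second upward (so 12:59:59.6 must not print as 12:59:60), while the fraction itself remains available through %f. A minimal sketch, reusing the same input as datetest 13.6 and 13.7:

    db eval {
      SELECT strftime('%H:%M:%S', '2007-01-01 23:59:59.6'),
             strftime('%H:%M:%f', '2007-01-01 23:59:59.6');
    }
    # expected: 23:59:59 and 23:59:59.600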

Added: freeswitch/trunk/libs/sqlite/test/fts1e.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts1e.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,85 @@
+# 2006 October 19
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  The
+# focus of this script is testing deletions in the FTS1 module.
+#
+# $Id: fts1e.test,v 1.1 2006/10/19 23:28:35 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS1 is not defined, omit this file.
+ifcapable !fts1 {
+  finish_test
+  return
+}
+
+# Construct a full-text search table containing keywords which are the
+# ordinal numbers of the bit positions set for a sequence of integers,
+# which are used for the rowid.  There are a total of 30 INSERT and
+# DELETE statements, so that we'll test both the segmentMerge() merge
+# (over the first 16) and the termSelect() merge (over the level-1
+# segment and 14 level-0 segments).
+db eval {
+  CREATE VIRTUAL TABLE t1 USING fts1(content);
+  INSERT INTO t1 (rowid, content) VALUES(1, 'one');
+  INSERT INTO t1 (rowid, content) VALUES(2, 'two');
+  INSERT INTO t1 (rowid, content) VALUES(3, 'one two');
+  INSERT INTO t1 (rowid, content) VALUES(4, 'three');
+  DELETE FROM t1 WHERE rowid = 1;
+  INSERT INTO t1 (rowid, content) VALUES(5, 'one three');
+  INSERT INTO t1 (rowid, content) VALUES(6, 'two three');
+  INSERT INTO t1 (rowid, content) VALUES(7, 'one two three');
+  DELETE FROM t1 WHERE rowid = 4;
+  INSERT INTO t1 (rowid, content) VALUES(8, 'four');
+  INSERT INTO t1 (rowid, content) VALUES(9, 'one four');
+  INSERT INTO t1 (rowid, content) VALUES(10, 'two four');
+  DELETE FROM t1 WHERE rowid = 7;
+  INSERT INTO t1 (rowid, content) VALUES(11, 'one two four');
+  INSERT INTO t1 (rowid, content) VALUES(12, 'three four');
+  INSERT INTO t1 (rowid, content) VALUES(13, 'one three four');
+  DELETE FROM t1 WHERE rowid = 10;
+  INSERT INTO t1 (rowid, content) VALUES(14, 'two three four');
+  INSERT INTO t1 (rowid, content) VALUES(15, 'one two three four');
+  INSERT INTO t1 (rowid, content) VALUES(16, 'five');
+  DELETE FROM t1 WHERE rowid = 13;
+  INSERT INTO t1 (rowid, content) VALUES(17, 'one five');
+  INSERT INTO t1 (rowid, content) VALUES(18, 'two five');
+  INSERT INTO t1 (rowid, content) VALUES(19, 'one two five');
+  DELETE FROM t1 WHERE rowid = 16;
+  INSERT INTO t1 (rowid, content) VALUES(20, 'three five');
+  INSERT INTO t1 (rowid, content) VALUES(21, 'one three five');
+  INSERT INTO t1 (rowid, content) VALUES(22, 'two three five');
+  DELETE FROM t1 WHERE rowid = 19;
+  DELETE FROM t1 WHERE rowid = 22;
+}
+
+do_test fts1e-1.1 {
+  execsql {SELECT COUNT(*) FROM t1}
+} {14}
+
+do_test fts1e-2.1 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one'}
+} {3 5 9 11 15 17 21}
+
+do_test fts1e-2.2 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'two'}
+} {2 3 6 11 14 15 18}
+
+do_test fts1e-2.3 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'three'}
+} {5 6 12 14 15 20 21}
+
+do_test fts1e-2.4 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'four'}
+} {8 9 11 12 14 15}
+
+do_test fts1e-2.5 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'five'}
+} {17 18 20 21}
+
+finish_test
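
Because the content words encode the bit positions of each rowid (as described in the header comment of fts1e.test above), the expected MATCH results can be cross-checked mechanically. A sketch of that cross-check for the 'one' term, using the same rowids and deletions as the script above:

    # Rowids still present after the DELETEs in the script above.
    set survivors {2 3 5 6 8 9 11 12 14 15 17 18 20 21}
    set expect {}
    foreach r $survivors {
      if {$r & 1} {lappend expect $r}   ;# bit 0 set <=> contains 'one'
    }
    # $expect is now {3 5 9 11 15 17 21}, matching test fts1e-2.1.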

Added: freeswitch/trunk/libs/sqlite/test/fts1f.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts1f.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,90 @@
+# 2006 October 19
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  The
+# focus of this script is testing updates in the FTS1 module.
+#
+# $Id: fts1f.test,v 1.1 2006/10/19 23:28:35 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS1 is not defined, omit this file.
+ifcapable !fts1 {
+  finish_test
+  return
+}
+
+# Construct a full-text search table containing keywords which are the
+# ordinal numbers of the bit positions set for a sequence of integers,
+# which are used for the rowid.  There are a total of 31 INSERT,
+# UPDATE, and DELETE statements, so that we'll test both the
+# segmentMerge() merge (over the first 16) and the termSelect() merge
+# (over the level-1 segment and 15 level-0 segments).
+db eval {
+  CREATE VIRTUAL TABLE t1 USING fts1(content);
+  INSERT INTO t1 (rowid, content) VALUES(1, 'one');
+  INSERT INTO t1 (rowid, content) VALUES(2, 'two');
+  INSERT INTO t1 (rowid, content) VALUES(3, 'one two');
+  INSERT INTO t1 (rowid, content) VALUES(4, 'three');
+  INSERT INTO t1 (rowid, content) VALUES(5, 'one three');
+  INSERT INTO t1 (rowid, content) VALUES(6, 'two three');
+  INSERT INTO t1 (rowid, content) VALUES(7, 'one two three');
+  DELETE FROM t1 WHERE rowid = 4;
+  INSERT INTO t1 (rowid, content) VALUES(8, 'four');
+  UPDATE t1 SET content = 'update one three' WHERE rowid = 1;
+  INSERT INTO t1 (rowid, content) VALUES(9, 'one four');
+  INSERT INTO t1 (rowid, content) VALUES(10, 'two four');
+  DELETE FROM t1 WHERE rowid = 7;
+  INSERT INTO t1 (rowid, content) VALUES(11, 'one two four');
+  INSERT INTO t1 (rowid, content) VALUES(12, 'three four');
+  INSERT INTO t1 (rowid, content) VALUES(13, 'one three four');
+  DELETE FROM t1 WHERE rowid = 10;
+  INSERT INTO t1 (rowid, content) VALUES(14, 'two three four');
+  INSERT INTO t1 (rowid, content) VALUES(15, 'one two three four');
+  UPDATE t1 SET content = 'update two five' WHERE rowid = 8;
+  INSERT INTO t1 (rowid, content) VALUES(16, 'five');
+  DELETE FROM t1 WHERE rowid = 13;
+  INSERT INTO t1 (rowid, content) VALUES(17, 'one five');
+  INSERT INTO t1 (rowid, content) VALUES(18, 'two five');
+  INSERT INTO t1 (rowid, content) VALUES(19, 'one two five');
+  DELETE FROM t1 WHERE rowid = 16;
+  INSERT INTO t1 (rowid, content) VALUES(20, 'three five');
+  INSERT INTO t1 (rowid, content) VALUES(21, 'one three five');
+  INSERT INTO t1 (rowid, content) VALUES(22, 'two three five');
+  DELETE FROM t1 WHERE rowid = 19;
+  UPDATE t1 SET content = 'update' WHERE rowid = 15;
+}
+
+do_test fts1f-1.1 {
+  execsql {SELECT COUNT(*) FROM t1}
+} {16}
+
+do_test fts1f-2.0 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'update'}
+} {1 8 15}
+
+do_test fts1f-2.1 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one'}
+} {1 3 5 9 11 17 21}
+
+do_test fts1f-2.2 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'two'}
+} {2 3 6 8 11 14 18 22}
+
+do_test fts1f-2.3 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'three'}
+} {1 5 6 12 14 20 21 22}
+
+do_test fts1f-2.4 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'four'}
+} {9 11 12 14}
+
+do_test fts1f-2.5 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'five'}
+} {8 17 18 20 21 22}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts1i.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts1i.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,88 @@
+# 2007 January 17
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite fts1 library.  The
+# focus here is testing handling of UPDATE when using UTF-16-encoded
+# databases.
+#
+# $Id: fts1i.test,v 1.2 2007/01/24 03:43:20 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS1 is not defined, omit this file.
+ifcapable !fts1 {
+  finish_test
+  return
+}
+
+
+# Return the UTF-16 representation of the supplied UTF-8 string $str.
+# If $nt is true, append two 0x00 bytes as a nul terminator.
+# NOTE(shess) Copied from capi3.test.
+proc utf16 {str {nt 1}} {
+  set r [encoding convertto unicode $str]
+  if {$nt} {
+    append r "\x00\x00"
+  }
+  return $r
+}
+
+db eval {
+  PRAGMA encoding = "UTF-16le";
+  CREATE VIRTUAL TABLE t1 USING fts1(content);
+}
+
+do_test fts1i-1.0 {
+  execsql {PRAGMA encoding}
+} {UTF-16le}
+
+do_test fts1i-1.1 {
+  execsql {INSERT INTO t1 (rowid, content) VALUES(1, 'one')}
+  execsql {SELECT content FROM t1 WHERE rowid = 1}
+} {one}
+
+do_test fts1i-1.2 {
+  set sql "INSERT INTO t1 (rowid, content) VALUES(2, 'two')"
+  set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  execsql {SELECT content FROM t1 WHERE rowid = 2}
+} {two}
+
+do_test fts1i-1.3 {
+  set sql "INSERT INTO t1 (rowid, content) VALUES(3, 'three')"
+  set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  set sql "UPDATE t1 SET content = 'trois' WHERE rowid = 3"
+  set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  execsql {SELECT content FROM t1 WHERE rowid = 3}
+} {trois}
+
+do_test fts1i-1.4 {
+  set sql16 [utf16 {INSERT INTO t1 (rowid, content) VALUES(4, 'four')}]
+  set STMT [sqlite3_prepare16 $DB $sql16 -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  execsql {SELECT content FROM t1 WHERE rowid = 4}
+} {four}
+
+do_test fts1i-1.5 {
+  set sql16 [utf16 {INSERT INTO t1 (rowid, content) VALUES(5, 'five')}]
+  set STMT [sqlite3_prepare16 $DB $sql16 -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  set sql "UPDATE t1 SET content = 'cinq' WHERE rowid = 5"
+  set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  execsql {SELECT content FROM t1 WHERE rowid = 5}
+} {cinq}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts1j.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts1j.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,89 @@
+# 2007 February 6
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  This
+# tests creating fts1 tables in an attached database.
+#
+# $Id: fts1j.test,v 1.1 2007/02/07 01:01:18 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS1 is not defined, omit this file.
+ifcapable !fts1 {
+  finish_test
+  return
+}
+
+# Clean up anything left over from a previous pass.
+file delete -force test2.db
+file delete -force test2.db-journal
+sqlite3 db2 test2.db
+
+db eval {
+  CREATE VIRTUAL TABLE t3 USING fts1(content);
+  INSERT INTO t3 (rowid, content) VALUES(1, "hello world");
+}
+
+db2 eval {
+  CREATE VIRTUAL TABLE t1 USING fts1(content);
+  INSERT INTO t1 (rowid, content) VALUES(1, "hello world");
+  INSERT INTO t1 (rowid, content) VALUES(2, "hello there");
+  INSERT INTO t1 (rowid, content) VALUES(3, "cruel world");
+}
+
+# This has always worked because the t1_* tables used by fts1 will be
+# the defaults.
+do_test fts1j-1.1 {
+  execsql {
+    ATTACH DATABASE 'test2.db' AS two;
+    SELECT rowid FROM t1 WHERE t1 MATCH 'hello';
+    DETACH DATABASE two;
+  }
+} {1 2}
+# Make certain we're detached if there was an error.
+catch {db eval {DETACH DATABASE two}}
+
+# In older code, this appears to work fine, but the t2_* tables used
+# by fts1 will be created in database 'main' instead of database
+# 'two'.  It appears to work fine only because those tables end up in
+# the default database anyway, but it is badly broken if you hope to
+# use the table outside of this exact ATTACH setup.
+do_test fts1j-1.2 {
+  execsql {
+    ATTACH DATABASE 'test2.db' AS two;
+    CREATE VIRTUAL TABLE two.t2 USING fts1(content);
+    INSERT INTO t2 (rowid, content) VALUES(1, "hello world");
+    INSERT INTO t2 (rowid, content) VALUES(2, "hello there");
+    INSERT INTO t2 (rowid, content) VALUES(3, "cruel world");
+    SELECT rowid FROM t2 WHERE t2 MATCH 'hello';
+    DETACH DATABASE two;
+  }
+} {1 2}
+catch {db eval {DETACH DATABASE two}}
+
+# In older code, this broke because the fts1 code attempted to create
+# t3_* tables in database 'main', but they already existed.  Normally
+# those tables would not exist unless t3 itself already existed, in
+# which case the fts1 code would never have been called in the first
+# place.
+do_test fts1j-1.3 {
+  execsql {
+    ATTACH DATABASE 'test2.db' AS two;
+
+    CREATE VIRTUAL TABLE two.t3 USING fts1(content);
+    INSERT INTO two.t3 (rowid, content) VALUES(2, "hello there");
+    INSERT INTO two.t3 (rowid, content) VALUES(3, "cruel world");
+    SELECT rowid FROM two.t3 WHERE t3 MATCH 'hello';
+
+    DETACH DATABASE two;
+  } db2
+} {2}
+catch {db eval {DETACH DATABASE two}}
+
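+# A quick, optional way to see where the backing tables ended up
+# (an illustrative sketch only; it assumes fts1 keeps its index in
+# t3_content and t3_term shadow tables):
+#
+#   db  eval {SELECT name FROM sqlite_master WHERE name LIKE 't3_%'}
+#   db2 eval {SELECT name FROM sqlite_master WHERE name LIKE 't3_%'}
+#
+# Each connection should report only its own copies, since t3 was
+# created separately in 'main' and in test2.db above.
+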
+catch {db2 close}
+file delete -force test2.db
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2a.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2a.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,186 @@
+# 2006 September 9
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library.  The
+# focus of this script is testing the FTS2 module.
+#
+# $Id: fts2a.test,v 1.1 2006/10/19 23:36:26 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+# Construct a full-text search table containing five keywords:
+# one, two, three, four, and five, in various combinations.  The
+# rowid for each will be a bitmask for the elements it contains.
+#
+db eval {
+  CREATE VIRTUAL TABLE t1 USING fts2(content);
+  INSERT INTO t1(content) VALUES('one');
+  INSERT INTO t1(content) VALUES('two');
+  INSERT INTO t1(content) VALUES('one two');
+  INSERT INTO t1(content) VALUES('three');
+  INSERT INTO t1(content) VALUES('one three');
+  INSERT INTO t1(content) VALUES('two three');
+  INSERT INTO t1(content) VALUES('one two three');
+  INSERT INTO t1(content) VALUES('four');
+  INSERT INTO t1(content) VALUES('one four');
+  INSERT INTO t1(content) VALUES('two four');
+  INSERT INTO t1(content) VALUES('one two four');
+  INSERT INTO t1(content) VALUES('three four');
+  INSERT INTO t1(content) VALUES('one three four');
+  INSERT INTO t1(content) VALUES('two three four');
+  INSERT INTO t1(content) VALUES('one two three four');
+  INSERT INTO t1(content) VALUES('five');
+  INSERT INTO t1(content) VALUES('one five');
+  INSERT INTO t1(content) VALUES('two five');
+  INSERT INTO t1(content) VALUES('one two five');
+  INSERT INTO t1(content) VALUES('three five');
+  INSERT INTO t1(content) VALUES('one three five');
+  INSERT INTO t1(content) VALUES('two three five');
+  INSERT INTO t1(content) VALUES('one two three five');
+  INSERT INTO t1(content) VALUES('four five');
+  INSERT INTO t1(content) VALUES('one four five');
+  INSERT INTO t1(content) VALUES('two four five');
+  INSERT INTO t1(content) VALUES('one two four five');
+  INSERT INTO t1(content) VALUES('three four five');
+  INSERT INTO t1(content) VALUES('one three four five');
+  INSERT INTO t1(content) VALUES('two three four five');
+  INSERT INTO t1(content) VALUES('one two three four five');
+}
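+
+# The bitmask encoding above makes the expected answers easy to derive:
+# for example, rowid 21 is 10101 in binary, so its content is
+# 'one three five'.  The helper below is an illustrative sketch (it is
+# not used by the tests in this file): it computes, from the encoding
+# alone, the rowids that should match a conjunction of keywords.
+proc fts2a_expected_rowids {words} {
+  set keywords {one two three four five}
+  set mask 0
+  foreach w $words {
+    # Each keyword contributes the bit matching its position in the list.
+    set mask [expr {$mask | (1 << [lsearch -exact $keywords $w])}]
+  }
+  set res {}
+  for {set i 1} {$i<=31} {incr i} {
+    if {($i & $mask) == $mask} {lappend res $i}
+  }
+  return $res
+}
+# E.g. [fts2a_expected_rowids {one two three}] returns 7 15 23 31,
+# matching the expected result of fts2a-1.4 below.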
+
+do_test fts2a-1.1 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts2a-1.2 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two'}
+} {3 7 11 15 19 23 27 31}
+do_test fts2a-1.3 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'two one'}
+} {3 7 11 15 19 23 27 31}
+do_test fts2a-1.4 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two three'}
+} {7 15 23 31}
+do_test fts2a-1.5 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one three two'}
+} {7 15 23 31}
+do_test fts2a-1.6 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'two three one'}
+} {7 15 23 31}
+do_test fts2a-1.7 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'two one three'}
+} {7 15 23 31}
+do_test fts2a-1.8 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'three one two'}
+} {7 15 23 31}
+do_test fts2a-1.9 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'three two one'}
+} {7 15 23 31}
+do_test fts2a-1.10 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two THREE'}
+} {7 15 23 31}
+do_test fts2a-1.11 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '  ONE    Two   three  '}
+} {7 15 23 31}
+
+do_test fts2a-2.1 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one"'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts2a-2.2 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one two"'}
+} {3 7 11 15 19 23 27 31}
+do_test fts2a-2.3 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"two one"'}
+} {}
+do_test fts2a-2.4 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one two three"'}
+} {7 15 23 31}
+do_test fts2a-2.5 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three two"'}
+} {}
+do_test fts2a-2.6 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one two three four"'}
+} {15 31}
+do_test fts2a-2.7 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three two four"'}
+} {}
+do_test fts2a-2.8 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three five"'}
+} {21}
+do_test fts2a-2.9 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three" five'}
+} {21 29}
+do_test fts2a-2.10 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'five "one three"'}
+} {21 29}
+do_test fts2a-2.11 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'five "one three" four'}
+} {29}
+do_test fts2a-2.12 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'five four "one three"'}
+} {29}
+do_test fts2a-2.13 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one three" four five'}
+} {29}
+
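+# Tests of the '-' prefix, which excludes rows containing the prefixed
+# term: 'one -two' matches every row that contains 'one' but not 'two'.
+#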
+do_test fts2a-3.1 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts2a-3.2 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one -two'}
+} {1 5 9 13 17 21 25 29}
+do_test fts2a-3.3 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '-two one'}
+} {1 5 9 13 17 21 25 29}
+
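+# Tests of the OR operator.  Note that OR binds more tightly than the
+# implicit AND between space-separated terms, so 'one two OR three' in
+# fts2a-4.4 means one AND (two OR three).
+#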
+do_test fts2a-4.1 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one OR two'}
+} {1 2 3 5 6 7 9 10 11 13 14 15 17 18 19 21 22 23 25 26 27 29 30 31}
+do_test fts2a-4.2 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH '"one two" OR three'}
+} {3 4 5 6 7 11 12 13 14 15 19 20 21 22 23 27 28 29 30 31}
+do_test fts2a-4.3 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'three OR "one two"'}
+} {3 4 5 6 7 11 12 13 14 15 19 20 21 22 23 27 28 29 30 31}
+do_test fts2a-4.4 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two OR three'}
+} {3 5 7 11 13 15 19 21 23 27 29 31}
+do_test fts2a-4.5 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'three OR two one'}
+} {3 5 7 11 13 15 19 21 23 27 29 31}
+do_test fts2a-4.6 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one two OR three OR four'}
+} {3 5 7 9 11 13 15 19 21 23 25 27 29 31}
+do_test fts2a-4.7 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'two OR three OR four one'}
+} {3 5 7 9 11 13 15 19 21 23 25 27 29 31}
+
+# Test the ability to handle NULL content
+#
+do_test fts2a-5.1 {
+  execsql {INSERT INTO t1(content) VALUES(NULL)}
+} {}
+do_test fts2a-5.2 {
+  set rowid [db last_insert_rowid]
+  execsql {SELECT content FROM t1 WHERE rowid=$rowid}
+} {{}}
+do_test fts2a-5.3 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH NULL}
+} {}
+
+
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2b.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2b.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,147 @@
+# 2006 September 13
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library.  The
+# focus of this script is testing the FTS2 module.
+#
+# $Id: fts2b.test,v 1.1 2006/10/19 23:36:26 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+# Fill the full-text index "t1" with phrases in english, spanish,
+# and german.  For the i-th row, fill in the names of the bits
+# that are set in the value of i; the least-significant bit
+# corresponds to the first word.  For example, the value 5 is 101
+# in binary, which is converted to "one three" in the english
+# column.  (An illustrative check of this encoding appears just
+# before test fts2b-1.1 below.)
+#
+proc fill_multilanguage_fulltext_t1 {} {
+  set english {one two three four five}
+  set spanish {un dos tres cuatro cinco}
+  set german {eine zwei drei vier funf}
+  
+  for {set i 1} {$i<=31} {incr i} {
+    set cmd "INSERT INTO t1 VALUES"
+    set vset {}
+    foreach lang {english spanish german} {
+      set words {}
+      for {set j 0; set k 1} {$j<5} {incr j; incr k $k} {
+        if {$k&$i} {lappend words [lindex [set $lang] $j]}
+      }
+      lappend vset "'$words'"
+    }
+    set sql "INSERT INTO t1(english,spanish,german) VALUES([join $vset ,])"
+    # puts $sql
+    db eval $sql
+  }
+}
+
+# Construct a full-text search table whose three columns hold the
+# keywords for each language, in various combinations.  The rowid
+# for each row is a bitmask of the keywords it contains.
+#
+db eval {
+  CREATE VIRTUAL TABLE t1 USING fts2(english,spanish,german);
+}
+fill_multilanguage_fulltext_t1
+
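+# Illustrative sanity check of the encoding above (a minimal sketch,
+# not part of the upstream fts2b test numbering): rowid 5 is 101 in
+# binary, so each column should hold the first and third words of its
+# language.
+do_test fts2b-1.0 {
+  execsql {SELECT english, spanish, german FROM t1 WHERE rowid=5}
+} {{one three} {un tres} {eine drei}}
+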
+do_test fts2b-1.1 {
+  execsql {SELECT rowid FROM t1 WHERE english MATCH 'one'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts2b-1.2 {
+  execsql {SELECT rowid FROM t1 WHERE spanish MATCH 'one'}
+} {}
+do_test fts2b-1.3 {
+  execsql {SELECT rowid FROM t1 WHERE german MATCH 'one'}
+} {}
+do_test fts2b-1.4 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'one'}
+} {1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31}
+do_test fts2b-1.5 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'one dos drei'}
+} {7 15 23 31}
+do_test fts2b-1.6 {
+  execsql {SELECT english, spanish, german FROM t1 WHERE rowid=1}
+} {one un eine}
+do_test fts2b-1.7 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH '"one un"'}
+} {}
+
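+# Column names that are SQL keywords ('from' and 'to') must work when
+# quoted with square brackets.
+#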
+do_test fts2b-2.1 {
+  execsql {
+    CREATE VIRTUAL TABLE t2 USING fts2(from,to);
+    INSERT INTO t2([from],[to]) VALUES ('one two three', 'four five six');
+    SELECT [from], [to] FROM t2
+  }
+} {{one two three} {four five six}}
+
+
+# Compute an SQL string that contains the words one, two, three,... to
+# describe bits set in the value $i.  Only the lower 5 bits are examined.
+#
+proc wordset {i} {
+  set x {}
+  for {set j 0; set k 1} {$j<5} {incr j; incr k $k} {
+    if {$k&$i} {lappend x [lindex {one two three four five} $j]}
+  }
+  return '$x'
+}
+
+# Create a new FTS table with three columns:
+#
+#    norm:      words for the bits of rowid
+#    plusone:   words for the bits of rowid+1
+#    invert:    words for the bits of ~rowid
+#
+db eval {
+   CREATE VIRTUAL TABLE t4 USING fts2([norm],'plusone',"invert");
+}
+for {set i 1} {$i<=15} {incr i} {
+  set vset [list [wordset $i] [wordset [expr {$i+1}]] [wordset [expr {~$i}]]]
+  db eval "INSERT INTO t4(norm,plusone,invert) VALUES([join $vset ,]);"
+}
+
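+# The "column:term" syntax restricts a term to a single column.
+# Compare fts2b-4.1 (norm:one) with fts2b-4.3, where an unqualified
+# 'one' matches the term in any of the three columns.
+#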
+do_test fts2b-4.1 {
+  execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'norm:one'}
+} {1 3 5 7 9 11 13 15}
+do_test fts2b-4.2 {
+  execsql {SELECT rowid FROM t4 WHERE norm MATCH 'one'}
+} {1 3 5 7 9 11 13 15}
+do_test fts2b-4.3 {
+  execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'one'}
+} {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15}
+do_test fts2b-4.4 {
+  execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'plusone:one'}
+} {2 4 6 8 10 12 14}
+do_test fts2b-4.5 {
+  execsql {SELECT rowid FROM t4 WHERE plusone MATCH 'one'}
+} {2 4 6 8 10 12 14}
+do_test fts2b-4.6 {
+  execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'norm:one plusone:two'}
+} {1 5 9 13}
+do_test fts2b-4.7 {
+  execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'norm:one two'}
+} {1 3 5 7 9 11 13 15}
+do_test fts2b-4.8 {
+  execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'plusone:two norm:one'}
+} {1 5 9 13}
+do_test fts2b-4.9 {
+  execsql {SELECT rowid FROM t4 WHERE t4 MATCH 'two norm:one'}
+} {1 3 5 7 9 11 13 15}
+
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2c.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2c.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,1213 @@
+# 2006 September 14
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for the SQLite library.  The
+# focus of this script is testing the FTS2 module.
+#
+# $Id: fts2c.test,v 1.1 2006/10/19 23:36:26 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+# Create a table of sample email data.  The data comes from email
+# archives of Enron executives that were published as part of the
+# litigation against that company.
+#
+do_test fts2c-1.1 {
+  db eval {
+    CREATE VIRTUAL TABLE email USING fts2([from],[to],subject,body);
+    BEGIN TRANSACTION;
+INSERT INTO email([from],[to],subject,body) VALUES('savita.puthigai at enron.com', 'traders.eol at enron.com, traders.eol at enron.com', 'EnronOnline- Change to Autohedge', 'Effective Monday, October 22, 2001 the following changes will be made to the Autohedge functionality on EnronOnline.
+
+The volume on the hedge will now respect the minimum volume and volume increment settings on the parent product. See rules below: 
+
+?	If the transaction volume on the child is less than half of the parent''s minimum volume no hedge will occur.
+?	If the transaction volume on the child is more than half the parent''s minimum volume but less than half the volume increment on the parent, the hedge will volume will be the parent''s minimum volume.
+?	For all other volumes, the same rounding rules will apply based on the volume increment on the parent product.
+
+Please see example below:
+
+Parent''s Settings:
+Minimum: 	5000
+Increment:  1000
+
+Volume on Autohedge transaction			Volume Hedged
+1      - 2499							0
+2500 - 5499							5000
+5500 - 6499							6000');
+INSERT INTO email([from],[to],subject,body) VALUES('dana.davis at enron.com', 'laynie.east at enron.com, lisa.king at enron.com, lisa.best at enron.com,', 'Leaving Early', 'FYI:  
+If it''s ok with everyone''s needs, I would like to leave @4pm. If you think 
+you will need my assistance past the 4 o''clock hour just let me know;  I''ll 
+be more than willing to stay.');
+INSERT INTO email([from],[to],subject,body) VALUES('enron_update at concureworkplace.com', 'louise.kitchen at enron.com', '<<Concur Expense Document>> - CC02.06.02', 'The following expense report is ready for approval:
+
+Employee Name: Christopher F. Calger
+Status last changed by: Mollie E. Gustafson Ms
+Expense Report Name: CC02.06.02
+Report Total: $3,972.93
+Amount Due Employee: $3,972.93
+
+
+To approve this expense report, click on the following link for Concur Expense.
+http://expensexms.enron.com');
+INSERT INTO email([from],[to],subject,body) VALUES('jeff.duff at enron.com', 'julie.johnson at enron.com', 'Work request', 'Julie,
+
+Could you print off the current work request report by 1:30 today?
+
+Gentlemen,
+
+I''d like to review this today at 1:30 in our office.  Also, could you provide 
+me with your activity reports so I can have Julie enter this information.
+
+JD');
+INSERT INTO email([from],[to],subject,body) VALUES('v.weldon at enron.com', 'gary.l.carrier at usa.dupont.com, scott.joyce at bankofamerica.com', 'Enron News', 'This could turn into something big.... 
+http://biz.yahoo.com/rf/010129/n29305829.html');
+INSERT INTO email([from],[to],subject,body) VALUES('mark.haedicke at enron.com', 'paul.simons at enron.com', 'Re: First Polish Deal!', 'Congrats!  Things seem to be building rapidly now on the Continent.  Mark');
+INSERT INTO email([from],[to],subject,body) VALUES('e..carter at enron.com', 't..robinson at enron.com', 'FW: Producers Newsletter 9-24-2001', '
+The producer lumber pricing sheet.
+ -----Original Message-----
+From: 	Johnson, Jay  
+Sent:	Tuesday, October 16, 2001 3:42 PM
+To:	Carter, Karen E.
+Subject:	FW: Producers Newsletter 9-24-2001
+
+
+
+ -----Original Message-----
+From: 	Daigre, Sergai  
+Sent:	Friday, September 21, 2001 8:33 PM
+Subject:	Producers Newsletter 9-24-2001
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('david.delainey at enron.com', 'kenneth.lay at enron.com', 'Greater Houston Partnership', 'Ken, in response to the letter from Mr Miguel San Juan, my suggestion would 
+be to offer up the Falcon for their use; however, given the tight time frame 
+and your recent visit with Mr. Fox that it would be difficult for either you 
+or me to participate.
+
+I spoke to Max and he agrees with this approach.
+
+I hope this meets with your approval.
+
+Regards
+Delainey');
+INSERT INTO email([from],[to],subject,body) VALUES('lachandra.fenceroy at enron.com', 'lindy.donoho at enron.com', 'FW: Bus Applications Meeting Follow Up', 'Lindy,
+
+Here is the original memo we discussed earlier.  Please provide any information that you may have.
+
+Your cooperation is greatly appreciated.
+
+Thanks,
+
+lachandra.fenceroy at enron.com
+713.853.3884
+877.498.3401 Pager
+
+ -----Original Message-----
+From: 	Bisbee, Joanne  
+Sent:	Wednesday, September 26, 2001 7:50 AM
+To:	Fenceroy, LaChandra
+Subject:	FW: Bus Applications Meeting Follow Up
+
+Lachandra, Please get with David Duff today and see what this is about.  Who are our TW accounting business users?
+
+ -----Original Message-----
+From: 	Koh, Wendy  
+Sent:	Tuesday, September 25, 2001 2:41 PM
+To:	Bisbee, Joanne
+Subject:	Bus Applications Meeting Follow Up
+
+Lisa brought up a TW change effective Nov 1.  It involves eliminating a turnback surcharge.  I have no other information, but you might check with the business folks for any system changes required.
+
+Wendy');
+INSERT INTO email([from],[to],subject,body) VALUES('danny.mccarty at enron.com', 'fran.fagan at enron.com', 'RE: worksheets', 'Fran,
+    If Julie''s merit needs to be lump sum, just move it over to that column.  Also, send me Eric Gadd''s sheets as well.  Thanks.
+Dan
+
+ -----Original Message-----
+From: 	Fagan, Fran  
+Sent:	Thursday, December 20, 2001 11:10 AM
+To:	McCarty, Danny
+Subject:	worksheets
+
+As discussed, attached are your sheets for bonus and merit.
+
+Thanks,
+
+Fran Fagan
+Sr. HR Rep
+713.853.5219
+
+
+ << File: McCartyMerit.xls >>  << File: mccartyBonusCommercial_UnP.xls >> 
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('bert.meyers at enron.com', 'shift.dl-portland at enron.com', 'OCTOBER SCHEDULE', 'TEAM,
+
+PLEASE SEND ME ANY REQUESTS THAT YOU HAVE FOR OCTOBER.  SO FAR I HAVE THEM FOR LEAF.  I WOULD LIKE TO HAVE IT DONE BY THE 15TH OF THE MONTH.  ANY QUESTIONS PLEASE GIVE ME A CALL.
+
+BERT');
+INSERT INTO email([from],[to],subject,body) VALUES('errol.mclaughlin at enron.com', 'john.arnold at enron.com, bilal.bajwa at enron.com, john.griffith at enron.com,', 'TRV Notification:  (NG - PROPT P/L - 09/27/2001)', 'The report named: NG - PROPT P/L <http://trv.corp.enron.com/linkFromExcel.asp?report_cd=11&report_name=NG+-+PROPT+P/L&category_cd=5&category_name=FINANCIAL&toc_hide=1&sTV1=5&TV1Exp=Y&current_efct_date=09/27/2001>, published as of 09/27/2001 is now available for viewing on the website.');
+INSERT INTO email([from],[to],subject,body) VALUES('patrice.mims at enron.com', 'calvin.eakins at enron.com', 'Re: Small business supply assistance', 'Hi Calvin
+
+
+I spoke with Rickey (boy, is he long-winded!!).  Gave him the name of our 
+credit guy, Russell Diamond.
+
+Thank for your help!');
+INSERT INTO email([from],[to],subject,body) VALUES('legal <.hall at enron.com>', 'stephanie.panus at enron.com', 'Termination update', 'City of Vernon and Salt River Project terminated their contracts.  I will fax these notices to you.');
+INSERT INTO email([from],[to],subject,body) VALUES('d..steffes at enron.com', 'richard.shapiro at enron.com', 'EES / ENA Government Affairs Staffing & Outside Services', 'Rick --
+
+Here is the information on staffing and outside services.  Call if you need anything else.
+
+Jim
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('gelliott at industrialinfo.com', 'pcopello at industrialinfo.com', 'ECAAR (Gavin), WSCC (Diablo Canyon), & NPCC (Seabrook)', 'Dear Power Outage Database Customer, 
+Attached you will find an excel document. The outages contained within are forced or rescheduled outages. Your daily delivery will still contain these outages. 
+In addition to the two excel documents, there is a dbf file that is formatted like your daily deliveries you receive nightly. This will enable you to load the data into your regular database. Any questions please let me know. Thanks. 
+Greg Elliott 
+IIR, Inc. 
+713-783-5147 x 3481 
+outages at industrialinfo.com 
+THE INFORMATION CONTAINED IN THIS E-MAIL IS LEGALLY PRIVILEGED AND CONFIDENTIAL INFORMATION INTENDED ONLY FOR THE USE OF THE INDIVIDUAL OR ENTITY NAMED ABOVE.  YOU ARE HEREBY NOTIFIED THAT ANY DISSEMINATION, DISTRIBUTION, OR COPY OF THIS E-MAIL TO UNAUTHORIZED ENTITIES IS STRICTLY PROHIBITED. IF YOU HAVE RECEIVED THIS 
+E-MAIL IN ERROR, PLEASE DELETE IT.
+ - OUTAGE.dbf 
+ - 111201R.xls 
+ - 111201.xls ');
+INSERT INTO email([from],[to],subject,body) VALUES('enron.announcements at enron.com', 'all_ena_egm_eim at enron.com', 'EWS Brown Bag', 'MARK YOUR LUNCH CALENDARS NOW !
+
+You are invited to attend the EWS Brown Bag Lunch Series
+
+Featuring:   RAY BOWEN, COO
+
+Topic:  Enron Industrial Markets
+
+Thursday, March 15, 2001
+11:30 am - 12:30 pm
+EB 5 C2
+
+
+You bring your lunch,           Limited Seating
+We provide drinks and dessert.          RSVP  x 3-9610');
+INSERT INTO email([from],[to],subject,body) VALUES('chris.germany at enron.com', 'ingrid.immer at williams.com', 'Re: About St Pauls', 'Sounds good to me.  I bet this is next to the Warick?? Hotel.
+
+
+
+
+"Immer, Ingrid" <Ingrid.Immer at Williams.com> on 12/21/2000 11:48:47 AM
+To: "''chris.germany at enron.com''" <chris.germany at enron.com>
+cc:  
+Subject: About St Pauls
+
+
+
+
+ <<About St Pauls.url>>  
+? 
+?http://www.stpaulshouston.org/about.html 
+
+Chris, 
+
+I like the looks of this place.? What do you think about going here Christmas 
+eve?? They have an 11:00 a.m. service and a candlelight service at 5:00 p.m., 
+among others.
+
+Let me know.?? ii 
+
+ - About St Pauls.url
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('nas at cpuc.ca.gov', 'skatz at sempratrading.com, kmccrea at sablaw.com, thompson at wrightlaw.com,', 'Reply Brief filed July 31, 2000', ' - CPUC01-#76371-v1-Revised_Reply_Brief__Due_today_7_31_.doc');
+INSERT INTO email([from],[to],subject,body) VALUES('gascontrol at aglresources.com', 'dscott4 at enron.com, lcampbel at enron.com', 'Alert Posted 10:00 AM November 20,2000: E-GAS Request Reminder', 'Alert Posted 10:00 AM November 20,2000: E-GAS Request Reminder
+As discussed in the Winter Operations Meeting on Sept.29,2000, 
+E-Gas(Emergency Gas) will not be offered this winter as a service from AGLC.  
+Marketers and Poolers can receive gas via Peaking and IBSS nominations(daisy 
+chain) from other marketers up to the 6 p.m. Same Day 2 nomination cycle.
+');
+INSERT INTO email([from],[to],subject,body) VALUES('dutch.quigley at enron.com', 'rwolkwitz at powermerchants.com', '', ' 
+
+Here is a goody for you');
+INSERT INTO email([from],[to],subject,body) VALUES('ryan.o''rourke at enron.com', 'k..allen at enron.com, randy.bhatia at enron.com, frank.ermis at enron.com,', 'TRV Notification:  (West VaR - 11/07/2001)', 'The report named: West VaR <http://trv.corp.enron.com/linkFromExcel.asp?report_cd=36&report_name=West+VaR&category_cd=2&category_name=WEST&toc_hide=1&sTV1=2&TV1Exp=Y&current_efct_date=11/07/2001>, published as of 11/07/2001 is now available for viewing on the website.');
+INSERT INTO email([from],[to],subject,body) VALUES('mjones7 at txu.com', 'cstone1 at txu.com, ggreen2 at txu.com, timpowell at txu.com,', 'Enron / HPL Actuals for July 10, 2000', 'Teco Tap       10.000 / Enron ; 110.000 / HPL IFERC
+
+LS HPL LSK IC       30.000 / Enron
+');
+INSERT INTO email([from],[to],subject,body) VALUES('susan.pereira at enron.com', 'kkw816 at aol.com', 'soccer practice', 'Kathy-
+
+Is it safe to assume that practice is cancelled for tonight??
+
+Susan Pereira');
+INSERT INTO email([from],[to],subject,body) VALUES('mark.whitt at enron.com', 'barry.tycholiz at enron.com', 'Huber Internal Memo', 'Please look at this.  I didn''t know how deep to go with the desk.  Do you think this works.
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('m..forney at enron.com', 'george.phillips at enron.com', '', 'George,
+Give me a call and we will further discuss opportunities on the 13st floor.
+
+Thanks,
+JMForney
+3-7160');
+INSERT INTO email([from],[to],subject,body) VALUES('brad.mckay at enron.com', 'angusmcka at aol.com', 'Re: (no subject)', 'not yet');
+INSERT INTO email([from],[to],subject,body) VALUES('adam.bayer at enron.com', 'jonathan.mckay at enron.com', 'FW: Curve Fetch File', 'Here is the curve fetch file sent to me.  It has plenty of points in it.  If you give me a list of which ones you need we may be able to construct a secondary worksheet to vlookup the values.
+
+adam
+35227
+
+
+ -----Original Message-----
+From: 	Royed, Jeff  
+Sent:	Tuesday, September 25, 2001 11:37 AM
+To:	Bayer, Adam
+Subject:	Curve Fetch File
+
+Let me know if it works.   It may be required to have a certain version of Oracle for it to work properly.
+
+ 
+
+Jeff Royed
+Enron 
+Energy Operations
+Phone: 713-853-5295');
+INSERT INTO email([from],[to],subject,body) VALUES('matt.smith at enron.com', 'yan.wang at enron.com', 'Report Formats', 'Yan,
+
+The merged reports look great.  I believe the only orientation changes are to 
+"unmerge" the following six reports:  
+
+31 Keystone Receipts
+15 Questar Pipeline
+40 Rockies Production
+22 West_2
+23 West_3
+25 CIG_WIC
+
+The orientation of the individual reports should be correct.  Thanks.
+
+Mat
+
+PS.  Just a reminder to add the "*" by the title of calculated points.');
+INSERT INTO email([from],[to],subject,body) VALUES('michelle.lokay at enron.com', 'jimboman at bigfoot.com', 'Egyptian Festival', '---------------------- Forwarded by Michelle Lokay/ET&S/Enron on 09/07/2000 
+10:08 AM ---------------------------
+
+
+"Karkour, Randa" <Randa.Karkour at COMPAQ.com> on 09/07/2000 09:01:04 AM
+To: "''Agheb (E-mail)" <Agheb at aol.com>, "Leila Mankarious (E-mail)" 
+<Leila_Mankarious at mhhs.org>, "''Marymankarious (E-mail)" 
+<marymankarious at aol.com>, "Michelle lokay (E-mail)" <mlokay at enron.com>, "Ramy 
+Mankarious (E-mail)" <Mankarious at aol.com>
+cc:  
+
+Subject: Egyptian Festival
+
+
+ <<Egyptian Festival.url>>
+
+ http://www.egyptianfestival.com/
+
+ - Egyptian Festival.url
+');
+INSERT INTO email([from],[to],subject,body) VALUES('errol.mclaughlin at enron.com', 'sherry.dawson at enron.com', 'Urgent!!! --- New EAST books', 'This has to be done..................................
+
+Thanks
+---------------------- Forwarded by Errol McLaughlin/Corp/Enron on 12/20/2000 
+08:39 AM ---------------------------
+   
+	
+	
+	From:  William Kelly @ ECT                           12/20/2000 08:31 AM
+	
+
+To: Kam Keiser/HOU/ECT at ECT, Darron C Giron/HOU/ECT at ECT, David 
+Baumbach/HOU/ECT at ECT, Errol McLaughlin/Corp/Enron at ENRON
+cc: Kimat Singla/HOU/ECT at ECT, Kulvinder Fowler/NA/Enron at ENRON, Kyle R 
+Lilly/HOU/ECT at ECT, Jeff Royed/Corp/Enron at ENRON, Alejandra 
+Chavez/NA/Enron at ENRON, Crystal Hyde/HOU/ECT at ECT 
+
+Subject: New EAST books
+
+We have new book names in TAGG for our intramonth portfolios and it is 
+extremely important that any deal booked to the East is communicated quickly 
+to someone on my team.  I know it will take some time for the new names to 
+sink in and I do not want us to miss any positions or P&L.  
+
+Thanks for your help on this.
+
+New:
+Scott Neal :         East Northeast
+Dick Jenkins:     East Marketeast
+
+WK 
+');
+INSERT INTO email([from],[to],subject,body) VALUES('david.forster at enron.com', 'eol.wide at enron.com', 'Change to Stack Manager', 'Effective immediately, there is a change to the Stack Manager which will 
+affect any Inactive Child.
+
+An inactive Child with links to Parent products will not have their 
+calculated prices updated until the Child product is Activated.
+
+When the Child Product is activated, the price will be recalculated and 
+updated BEFORE it is displayed on the web.
+
+This means that if you are inputting a basis price on a Child product, you 
+will not see the final, calculated price until you Activate the product, at 
+which time the customer will also see it.
+
+If you have any questions, please contact the Help Desk on:
+
+Americas: 713 853 4357
+Europe: + 44 (0) 20 7783 7783
+Asia/Australia: +61 2 9229 2300
+
+Dave');
+INSERT INTO email([from],[to],subject,body) VALUES('vince.kaminski at enron.com', 'jhh1 at email.msn.com', 'Re: Light reading - see pieces beginning on page 7', 'John,
+
+I saw it. Very interesting.
+
+Vince
+
+
+
+
+
+"John H Herbert" <jhh1 at email.msn.com> on 07/28/2000 08:38:08 AM
+To: "Vince J Kaminski" <Vince_J_Kaminski at enron.com>
+cc:  
+Subject: Light reading - see pieces beginning on page 7
+
+
+Cheers and have a nice weekend,
+
+
+JHHerbert
+
+
+
+
+ - gd000728.pdf
+
+
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('matthew.lenhart at enron.com', 'mmmarcantel at equiva.com', 'RE:', 'i will try to line up a pig for you ');
+INSERT INTO email([from],[to],subject,body) VALUES('jae.black at enron.com', 'claudette.harvey at enron.com, chaun.roberts at enron.com, judy.martinez at enron.com,', 'Disaster Recovery Equipment', 'As a reminder...there are several pieces of equipment that are set up on the 30th Floor, as well as on our floor, for the Disaster Recovery Team.  PLEASE DO NOT TAKE, BORROW OR USE this equipment.  Should you need to use another computer system, other than yours, or make conference calls please work with your Assistant to help find or set up equipment for you to use.
+
+Thanks for your understanding in this matter.
+
+T.Jae Black
+East Power Trading
+Assistant to Kevin Presto
+off. 713-853-5800
+fax 713-646-8272
+cell 713-539-4760');
+INSERT INTO email([from],[to],subject,body) VALUES('eric.bass at enron.com', 'dale.neuner at enron.com', '5 X 24', 'Dale,
+
+Have you heard anything more on the 5 X 24s?  We would like to get this 
+product out ASAP.
+
+
+Thanks,
+
+Eric');
+INSERT INTO email([from],[to],subject,body) VALUES('messenger at smartreminders.com', 'm..tholt at enron.com', '10% Coupon - PrintPal Printer Cartridges - 100% Guaranteed', '[IMAGE]
+[IMAGE][IMAGE][IMAGE] 
+Dear  SmartReminders Member,
+       [IMAGE]         [IMAGE]        [IMAGE]     [IMAGE]    [IMAGE]    [IMAGE]        [IMAGE]      [IMAGE]     	
+
+
+  
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+ 
+
+We respect  your privacy and are a Certified Participant of the BBBOnLine
+ Privacy Program.  To be removed from future offers,click  here. 
+SmartReminders.com  is a permission based service. To unsubscribe click  here .  ');
+INSERT INTO email([from],[to],subject,body) VALUES('benjamin.rogers at enron.com', 'mark.bernstein at enron.com', '', 'The guy you are talking about left CIN under a "cloud of suspicion" sort of 
+speak.  He was the one who got into several bad deals and PPA''s in California 
+for CIN, thus he left on a bad note.  Let me know if you need more detail 
+than that, I felt this was the type of info you were looking for.  Thanks!
+Ben');
+INSERT INTO email([from],[to],subject,body) VALUES('enron_update at concureworkplace.com', 'michelle.cash at enron.com', 'Expense Report Receipts Not Received', 'Employee Name: Michelle Cash
+Report Name:   Houston Cellular 8-11-01
+Report Date:   12/13/01
+Report ID:     594D37C9ED2111D5B452
+Submitted On:  12/13/01
+
+You are only allowed 2 reports with receipts outstanding.  Your expense reports will not be paid until you meet this requirement.');
+INSERT INTO email([from],[to],subject,body) VALUES('susan.mara at enron.com', 'ray.alvarez at enron.com, mark.palmer at enron.com, karen.denne at enron.com,', 'CAISO Emergency Motion -- to discontinue market-based rates for', 'FYI.  the latest broadside against the generators.
+
+Sue Mara
+Enron Corp.
+Tel: (415) 782-7802
+Fax:(415) 782-7854
+----- Forwarded by Susan J Mara/NA/Enron on 06/08/2001 12:24 PM -----
+
+
+	"Milner, Marcie" <MMilner at coral-energy.com> 06/08/2001 11:13 AM 	   To: "''smara at enron.com''" <smara at enron.com>  cc:   Subject: CAISO Emergency Motion	
+
+
+Sue, did you see this emergency motion the CAISO filed today?  Apparently
+they are requesting that FERC discontinue market-based rates immediately and
+grant refunds plus interest on the difference between cost-based rates and
+market revenues received back to May 2000.  They are requesting the
+commission act within 14 days.  Have you heard anything about what they are
+doing?
+
+Marcie
+
+http://www.caiso.com/docs/2001/06/08/200106081005526469.pdf 
+');
+INSERT INTO email([from],[to],subject,body) VALUES('fletcher.sturm at enron.com', 'eloy.escobar at enron.com', 'Re: General Brinks Position Meeting', 'Eloy,
+
+Who is General Brinks?
+
+Fletch');
+INSERT INTO email([from],[to],subject,body) VALUES('nailia.dindarova at enron.com', 'richard.shapiro at enron.com', 'Documents for Mark Frevert (on EU developments and lessons from', 'Rick,
+
+Here are the documents that Peter has prepared for Mark Frevert. 
+
+Nailia
+---------------------- Forwarded by Nailia Dindarova/LON/ECT on 25/06/2001 
+16:36 ---------------------------
+
+
+Nailia Dindarova
+25/06/2001 15:36
+To: Michael Brown/Enron at EUEnronXGate
+cc: Ross Sankey/Enron at EUEnronXGate, Eric Shaw/ENRON at EUEnronXGate, Peter 
+Styles/LON/ECT at ECT 
+
+Subject: Documents for Mark Frevert (on EU developments and lessons from 
+California)
+
+Michael,
+
+
+These are the documents that Peter promised to give to you for Mark Frevert. 
+He has now handed them to him in person but asked me to transmit them 
+electronically to you, as well as Eric and Ross.
+
+Nailia
+
+
+
+
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('peggy.a.kostial at accenture.com', 'dave.samuels at enron.com', 'EOL-Accenture Deal Sheet', 'Dave -
+
+Attached are our comments and suggested changes. Please call to review.
+
+On the time line for completion, we have four critical steps to complete:
+     Finalize market analysis to refine business case, specifically
+     projected revenue stream
+     Complete counterparty surveying, including targeting 3 CPs for letters
+     of intent
+     Review Enron asset base for potential reuse/ licensing
+     Contract negotiations
+
+Joe will come back to us with an updated time line, but it is my
+expectation that we are still on the same schedule (we just begun week
+three) with possibly a week or so slippage.....contract negotiations will
+probably be the critical path.
+
+We will send our cut at the actual time line here shortly. Thanks,
+
+Peggy
+
+(See attached file: accenture-dealpoints v2.doc)
+ - accenture-dealpoints v2.doc ');
+INSERT INTO email([from],[to],subject,body) VALUES('thomas.martin at enron.com', 'thomas.martin at enron.com', 'Re: Guadalupe Power Partners LP', '---------------------- Forwarded by Thomas A Martin/HOU/ECT on 03/20/2001 
+03:49 PM ---------------------------
+
+
+Thomas A Martin
+10/11/2000 03:55 PM
+To: Patrick Wade/HOU/ECT at ECT
+cc:  
+Subject: Re: Guadalupe Power Partners LP  
+
+The deal is physically served at Oasis Waha or Oasis Katy and is priced at 
+either HSC, Waha or Katytailgate GD at buyers option three days prior to 
+NYMEX  close.
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('judy.townsend at enron.com', 'dan.junek at enron.com, chris.germany at enron.com', 'Columbia Distribution''s Capacity Available for Release - Sum', '---------------------- Forwarded by Judy Townsend/HOU/ECT on 03/09/2001 11:04 
+AM ---------------------------
+
+
+agoddard at nisource.com on 03/08/2001 09:16:57 AM
+To: "        -         *Koch, Kent" <kkoch at nisource.com>, "        -         
+*Millar, Debra" <dmillar at nisource.com>, "        -         *Burke, Lynn" 
+<lburke at nisource.com>
+cc: "        -         *Heckathorn, Tom" <theckathorn at nisource.com> 
+Subject: Columbia Distribution''s Capacity Available for Release - Sum
+
+
+Attached is Columbia Distribution''s notice of capacity available for release
+for
+the summer of 2001 (Apr. 2001 through Oct. 2001).
+
+Please note that the deadline for bids is 3:00pm EST on March 20, 2001.
+
+If you have any questions, feel free to contact any of the representatives
+listed
+at the bottom of the attachment.
+
+Aaron Goddard
+
+
+
+
+ - 2001Summer.doc
+');
+INSERT INTO email([from],[to],subject,body) VALUES('rhonda.denton at enron.com', 'tim.belden at enron.com, dana.davis at enron.com, genia.fitzgerald at enron.com,', 'Split Rock Energy LLC', 'We have received the executed EEI contract from this CP dated 12/12/2000.  
+Copies will be distributed to Legal and Credit.');
+INSERT INTO email([from],[to],subject,body) VALUES('kerrymcelroy at dwt.com', 'jack.speer at alcoa.com, crow at millernash.com, michaelearly at earthlink.net,', 'Oral Argument Request', ' - Oral Argument Request.doc');
+INSERT INTO email([from],[to],subject,body) VALUES('mike.carson at enron.com', 'rlmichaelis at hormel.com', '', 'Did you come in town this wk end..... My new number at our house is : 
+713-668-3712...... my cell # is 281-381-7332
+
+the kid');
+INSERT INTO email([from],[to],subject,body) VALUES('cooper.richey at enron.com', 'trycooper at hotmail.com', 'FW: Contact Info', '
+
+-----Original Message-----
+From: Punja, Karim 
+Sent: Thursday, December 13, 2001 2:35 PM
+To: Richey, Cooper
+Subject: Contact Info
+
+
+Cooper,
+
+Its been a real pleasure working with you (even though it was for only a small amount of time)
+I hope we can stay in touch.
+
+Home# 234-0249
+email: kpunja at hotmail.com
+
+Take Care, 
+
+Karim.
+  ');
+INSERT INTO email([from],[to],subject,body) VALUES('bjm30 at earthlink.net', 'mcguinn.k at enron.com, mcguinn.ian at enron.com, mcguinn.stephen at enron.com,', 'email address change', 'Hello all.
+
+I haven''t talked to many of you via email recently but I do want to give you
+my new address for your email file:
+
+    bjm30 at earthlink.net
+
+I hope all is well.
+
+Brian McGuinn');
+INSERT INTO email([from],[to],subject,body) VALUES('shelley.corman at enron.com', 'steve.hotte at enron.com', 'Flat Panels', 'Can you please advise what is going on with the flat panels that we had planned to distribute to our gas logistics team.  It was in the budget and we had the okay, but now I''m hearing there is some hold-up & the units are stored on 44.
+
+Shelley');
+INSERT INTO email([from],[to],subject,body) VALUES('sara.davidson at enron.com', 'john.schwartzenburg at enron.com, scott.dieball at enron.com, recipients at enron.com,', '2001 Enron Law Conference (Distribution List 2)', '    Enron Law Conference
+
+San Antonio, Texas    May 2-4, 2001    Westin Riverwalk
+
+                   See attached memo for more details!!
+
+
+? Registration for the law conference this year will be handled through an 
+Online RSVP Form on the Enron Law Conference Website at 
+http://lawconference.corp.enron.com.  The website is still under construction 
+and will not be available until Thursday, March 15, 2001.  
+
+? We will send you another e-mail to confirm when the Law Conference Website 
+is operational. 
+
+? Please complete the Online RSVP Form as soon as it is available  and submit 
+it no later than Friday, March 30th.  
+
+
+
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('tori.kuykendall at enron.com', 'heath.b.taylor at accenture.com', 'Re:', 'hey - thats funny about john - he definitely remembers him - i''ll call pat 
+and let him know - we are coming on saturday - i just havent had a chance to 
+call you guys back --  looking forward to it -- i probably need the 
+directions again though');
+INSERT INTO email([from],[to],subject,body) VALUES('darron.giron at enron.com', 'bryce.baxter at enron.com', 'Re: Feedback for Audrey Cook', 'Bryce,
+
+I''ll get it done today.  
+
+DG    3-9573
+
+
+   
+	
+	
+	From:  Bryce Baxter                           06/12/2000 07:15 PM
+	
+
+To: Darron C Giron/HOU/ECT at ECT
+cc:  
+Subject: Feedback for Audrey Cook
+
+You were identified as a reviewer for Audrey Cook.  If possible, could you 
+complete her feedback by end of business Wednesday?  It will really help me 
+in the PRC process to have your input.  Thanks.
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('casey.evans at enron.com', 'stephanie.sever at enron.com', 'Gas EOL ID', 'Stephanie,
+
+In conjunction with the recent movement of several power traders, they are changing the names of their gas books as well.  The names of the new gas books and traders are as follows:
+
+PWR-NG-LT-SPP:  Mike Carson
+PWR-NG-LT-SERC:  Jeff King
+
+If you need to know their power desk to map their ID to their gas books, those desks are as follows:
+
+EPMI-LT-SPP:  Mike Carson
+EPMI-LT-SERC:  Jeff King
+
+I will be in training this afternoon, but will be back when class is over.  Let me know if you have any questions.
+
+Thanks for your help!
+Casey');
+INSERT INTO email([from],[to],subject,body) VALUES('darrell.schoolcraft at enron.com', 'david.roensch at enron.com, kimberly.watson at enron.com, michelle.lokay at enron.com,', 'Postings', 'Please see the attached.
+
+
+ds
+
+
+  
+
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('mcominsky at aol.com', 'cpatman at bracepatt.com, james_derrick at enron.com', 'Jurisprudence Luncheon', 'Carrin & Jim --
+
+It was an honor and a pleasure to meet both of you yesterday.  I know we will
+have fun working together on this very special event.
+
+Jeff left the jurisprudence luncheon lists for me before he left on vacation.
+ I wasn''t sure whether he transmitted them to you as well.  Would you please
+advise me if you would like them sent to you?  I can email the MS Excel files
+or I can fax the hard copies to you.   Please advise what is most convenient.
+
+I plan to be in town through the holidays and can be reached by phone, email,
+or cell phone at any time.  My cell phone number is 713/705-4829.
+
+Thanks again for your interest in the ADL''s work.  Martin.
+
+Martin B. Cominsky
+Director, Southwest Region
+Anti-Defamation League
+713/627-3490, ext. 122
+713/627-2011 (fax)
+MCominsky at aol.com');
+INSERT INTO email([from],[to],subject,body) VALUES('phillip.love at enron.com', 'todagost at utmb.edu, gbsonnta at utmb.edu', 'New President', 'I had a little bird put a word in my ear.  Is there any possibility for Ben 
+Raimer to be Bush''s secretary of HHS?  Just curious about that infamous UTMB 
+rumor mill.  Hope things are well, happy holidays.
+PL');
+INSERT INTO email([from],[to],subject,body) VALUES('marie.heard at enron.com', 'ehamilton at fna.com', 'ISDA Master Agreement', 'Erin:
+
+Pursuant to your request, attached are the Schedule to the ISDA Master Agreement, together with Paragraph 13 to the ISDA Credit Support Annex.  Please let me know if you need anything else.  We look forward to hearing your comments.
+
+Marie
+
+Marie Heard
+Senior Legal Specialist
+Enron North America Corp.
+Phone:  (713) 853-3907
+Fax:  (713) 646-3490
+marie.heard at enron.com
+
+				 ');
+INSERT INTO email([from],[to],subject,body) VALUES('andrea.ring at enron.com', 'beverly.beaty at enron.com', 'Re: Tennessee Buy - Louis Dreyfus', 'Beverly -  once again thanks so much for your help on this.
+
+           
+
+                                                                     ');
+INSERT INTO email([from],[to],subject,body) VALUES('karolyn.criado at enron.com', 'j..bonin at enron.com, felicia.case at enron.com, b..clapp at enron.com,', 'Price List week of Oct. 8-9, 2001', '
+Please contact me if you have any questions regarding last weeks prices.
+
+Thank you,
+Karolyn Criado
+3-9441
+
+
+ 
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('kevin.presto at enron.com', 'edward.baughman at enron.com, billy.braddock at enron.com', 'Associated', 'Please begin working on filling our Associated short position in 02.   I would like to take this risk off the books.
+
+In addition, please find out what a buy-out of VEPCO would cost us.   With Rogers transitioning to run our retail risk management, I would like to clean up our customer positions.
+
+We also need to continue to explore a JEA buy-out.
+
+Thanks.');
+INSERT INTO email([from],[to],subject,body) VALUES('stacy.dickson at enron.com', 'gregg.penman at enron.com', 'RE: Constellation TC 5-7-01', 'Gregg, 
+
+I am at home with a sick baby.  (Lots of fun!)  I will call you about this 
+tomorrow.
+
+Stacy');
+INSERT INTO email([from],[to],subject,body) VALUES('joe.quenet at enron.com', 'dfincher at utilicorp.com', '', 'hey big guy.....check this out.....
+
+ w ww.gorelieberman-2000.com/');
+INSERT INTO email([from],[to],subject,body) VALUES('k..allen at enron.com', 'jacqestc at aol.com', '', 'Jacques,
+
+I sent you a fax of Kevin Kolb''s comments on the release.  The payoff on the note would be $36,248 ($36090(principal) + $158 (accrued interest)).
+This is assuming we wrap this up on Tuesday.  
+
+Please email to confirm that their changes are ok so I can set up a meeting on Tuesday to reach closure.
+
+Phillip');
+INSERT INTO email([from],[to],subject,body) VALUES('kourtney.nelson at enron.com', 'mike.swerzbin at enron.com', 'Adjusted L/R Balance', 'Mike,
+
+I placed the adjusted L/R Balance on the Enronwest site.  It is under the "Staff/Kourtney Nelson".  There are two links:  
+
+1)  "Adj L_R" is the same data/format from the weekly strategy meeting. 
+2)  "New Gen 2001_2002" link has all of the supply side info that is used to calculate the L/R balance
+	-Please note the Data Flag column, a value of "3" indicates the project was cancelled, on hold, etc and is not included in the calc.  
+
+Both of these sheets are interactive Excel spreadsheets and thus you can play around with the data as you please.  Also, James Bruce is working to get his gen report on the web.  That will help with your access to information on new gen.
+
+Please let me know if you have any questions or feedback,
+
+Kourtney
+
+
+
+Kourtney Nelson
+Fundamental Analysis 
+Enron North America
+(503) 464-8280
+kourtney.nelson at enron.com');
+INSERT INTO email([from],[to],subject,body) VALUES('d..thomas at enron.com', 'naveed.ahmed at enron.com', 'FW: Current Enron TCC Portfolio', '
+
+-----Original Message-----
+From: Grace, Rebecca M. 
+Sent: Monday, December 17, 2001 9:44 AM
+To: Thomas, Paul D.
+Cc: Cashion, Jim; Allen, Thresa A.; May, Tom
+Subject: RE: Current Enron TCC Portfolio
+
+
+Paul,
+
+I reviewed NY''s list.  I agree with all of their contracts numbers and mw amounts.
+
+Call if you have any more questions.
+
+Rebecca
+
+
+
+ -----Original Message-----
+From: 	Thomas, Paul D.  
+Sent:	Monday, December 17, 2001 9:08 AM
+To:	Grace, Rebecca M.
+Subject:	FW: Current Enron TCC Portfolio
+
+ << File: enrontccs.xls >> 
+Rebecca,
+Let me know if you see any differences.
+
+Paul
+X 3-0403
+-----Original Message-----
+From: Thomas, Paul D. 
+Sent: Monday, December 17, 2001 9:04 AM
+To: Ahmed, Naveed
+Subject: FW: Current Enron TCC Portfolio
+
+
+
+
+-----Original Message-----
+From: Thomas, Paul D. 
+Sent: Thursday, December 13, 2001 10:01 AM
+To: Baughman, Edward D.
+Subject: Current Enron TCC Portfolio
+
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('stephanie.panus at enron.com', 'william.bradford at enron.com, debbie.brackett at enron.com,', 'Coastal Merchant Energy/El Paso Merchant Energy', 'Coastal Merchant Energy, L.P. merged with and into El Paso Merchant Energy, 
+L.P., effective February 1, 2001, with the surviving entity being El Paso 
+Merchant Energy, L.P.  We currently have ISDA Master Agreements with both 
+counterparties.  Please see the attached memo regarding the existing Masters 
+and let us know which agreement should be terminated.
+
+Thanks,
+Stephanie
+');
+INSERT INTO email([from],[to],subject,body) VALUES('kam.keiser at enron.com', 'c..kenne at enron.com', 'RE: What about this too???', ' 
+
+ -----Original Message-----
+From: 	Kenne, Dawn C.  
+Sent:	Wednesday, February 06, 2002 11:50 AM
+To:	Keiser, Kam
+Subject:	What about this too???
+
+
+ << File: Netco Trader Matrix.xls >> 
+ ');
+INSERT INTO email([from],[to],subject,body) VALUES('chris.meyer at enron.com', 'joe.parks at enron.com', 'Centana', 'Talked to Chip.  We do need Cash Committe approval given the netting feature of your deal, which means Batch Funding Request.  Please update per my previous e-mail and forward.
+
+Thanks
+
+chris
+x31666');
+INSERT INTO email([from],[to],subject,body) VALUES('debra.perlingiere at enron.com', 'jworman at academyofhealth.com', '', 'Have a great weekend!   Happy Fathers Day!
+
+
+Debra Perlingiere
+Enron North America Corp.
+1400 Smith Street, EB 3885
+Houston, Texas 77002
+dperlin at enron.com
+Phone 713-853-7658
+Fax  713-646-3490');
+INSERT INTO email([from],[to],subject,body) VALUES('outlook.team at enron.com', '', 'Demo by Martha Janousek of Dashboard & Pipeline Profile / Julia  &', 'CALENDAR ENTRY:	APPOINTMENT
+
+Description:
+	Demo by Martha Janousek of Dashboard & Pipeline Profile / Julia  & Dir Rpts. - 4102
+
+Date:		1/5/2001
+Time:		9:00 AM - 10:00 AM (Central Standard Time)
+
+Chairperson:	Outlook Migration Team
+
+Detailed Description:');
+INSERT INTO email([from],[to],subject,body) VALUES('diana.seifert at enron.com', 'mark.taylor at enron.com', 'Guest access Chile', 'Hello Mark,
+
+Justin Boyd told me that your can help me with questions regarding Chile.
+We got a request for guest access through MG.
+The company is called Escondida and is a subsidiary of BHP Australia.
+
+Please advise if I can set up a guest account or not.
+F.Y.I.: MG is planning to put a "in w/h Chile" contract for Copper on-line as 
+soon as Enron has done the due diligence for this country.
+Thanks !
+
+
+Best regards
+
+Diana Seifert
+EOL PCG');
+INSERT INTO email([from],[to],subject,body) VALUES('enron_update at concureworkplace.com', 'mark.whitt at enron.com', '<<Concur Expense Document>> - 121001', 'The Approval status has changed on the following report:
+
+Status last changed by: Barry L. Tycholiz
+Expense Report Name: 121001
+Report Total: $198.98
+Amount Due Employee: $198.98
+Amount Approved: $198.98
+Amount Paid: $0.00
+Approval Status: Approved
+Payment Status: Pending
+
+
+To review this expense report, click on the following link for Concur Expense.
+http://expensexms.enron.com');
+INSERT INTO email([from],[to],subject,body) VALUES('kevin.hyatt at enron.com', '', 'Technical Support', 'Outside the U.S., please refer to the list below:
+
+Australia:
+1800 678-515
+support at palm-au.com
+
+Canada:
+1905 305-6530
+support at palm.com
+
+New Zealand:
+0800 446-398
+support at palm-nz.com
+
+U.K.:
+0171 867 0108
+eurosupport at palm.3com.com
+
+Please refer to the Worldwide Customer Support card for a complete technical support contact list.');
+INSERT INTO email([from],[to],subject,body) VALUES('geoff.storey at enron.com', 'dutch.quigley at enron.com', 'RE:', 'duke contact?
+
+ -----Original Message-----
+From: 	Quigley, Dutch  
+Sent:	Wednesday, October 31, 2001 10:14 AM
+To:	Storey, Geoff
+Subject:	RE: 
+
+bp corp	Albert LaMore	281-366-4962
+
+running the reports now
+
+
+ -----Original Message-----
+From: 	Storey, Geoff  
+Sent:	Wednesday, October 31, 2001 10:10 AM
+To:	Quigley, Dutch
+Subject:	RE: 
+
+give me a contact over there too
+BP
+
+
+ -----Original Message-----
+From: 	Quigley, Dutch  
+Sent:	Wednesday, October 31, 2001 9:42 AM
+To:	Storey, Geoff
+Subject:	
+
+Coral	Jeff Whitnah	713-767-5374
+Relaint	Steve McGinn	713-207-4000');
+INSERT INTO email([from],[to],subject,body) VALUES('pete.davis at enron.com', 'pete.davis at enron.com', 'Start Date: 4/22/01; HourAhead hour: 3;  <CODESITE>', 'Start Date: 4/22/01; HourAhead hour: 3;  No ancillary schedules awarded.  
+Variances detected.
+Variances detected in Load schedule.
+
+    LOG MESSAGES:
+
+PARSING FILE -->> O:\Portland\WestDesk\California Scheduling\ISO Final 
+Schedules\2001042203.txt
+
+---- Load Schedule ----
+$$$ Variance found in table tblLoads.
+     Details: (Hour: 3 / Preferred:   1.92 / Final:   1.89)
+  TRANS_TYPE: FINAL
+  LOAD_ID: PGE4
+  MKT_TYPE: 2
+  TRANS_DATE: 4/22/01
+  SC_ID: EPMI
+
+');
+INSERT INTO email([from],[to],subject,body) VALUES('john.postlethwaite at enron.com', 'john.zufferli at enron.com', 'Reference', 'John, hope things are going well up there for you. The big day is almost here for you and Jessica. I was wondering if I could use your name as a job reference if need be. I am just trying to get everything in order just in case something happens. 
+
+John');
+INSERT INTO email([from],[to],subject,body) VALUES('jeffrey.shankman at enron.com', 'lschiffm at jonesday.com', 'Re:', 'I saw you called on the cell this a.m.  Sorry I missed you.  (I was in the 
+shower).  I have had a shitty week--I suspect my silence (not only to you, 
+but others) after our phone call is a result of the week.  I''m seeing Glen at 
+11:15....talk to you');
+INSERT INTO email([from],[to],subject,body) VALUES('litebytz at enron.com', '', 'Lite Bytz RSVP', '
+This week''s Lite Bytz presentation will feature the following TOOLZ speaker:
+
+Richard McDougall
+Solaris 8
+Thursday, June 7, 2001
+
+If you have not already signed up, please RSVP via email to litebytz at enron.com by the end of the day Tuesday, June 5, 2001.
+
+*Remember: this is now a Brown Bag Event--so bring your lunch and we will provide cookies and drinks.
+
+Click below for more details.
+
+http://home.enron.com:84/messaging/litebytztoolzprint.jpg');
+    COMMIT;
+  }
+} {}
+
+###############################################################################
+# Everything above just builds an interesting test database.  The actual
+# tests come after this comment.
+###############################################################################
+
+do_test fts2c-1.2 {
+  execsql {
+    SELECT rowid FROM email WHERE email MATCH 'mark'
+  }
+} {6 17 25 38 40 42 73 74}
+do_test fts2c-1.3 {
+  execsql {
+    SELECT rowid FROM email WHERE email MATCH 'susan'
+  }
+} {24 40}
+do_test fts2c-1.4 {
+  execsql {
+    SELECT rowid FROM email WHERE email MATCH 'mark susan'
+  }
+} {40}
+do_test fts2c-1.5 {
+  execsql {
+    SELECT rowid FROM email WHERE email MATCH 'susan mark'
+  }
+} {40}
+do_test fts2c-1.6 {
+  execsql {
+    SELECT rowid FROM email WHERE email MATCH '"mark susan"'
+  }
+} {}
+do_test fts2c-1.7 {
+  execsql {
+    SELECT rowid FROM email WHERE email MATCH 'mark -susan'
+  }
+} {6 17 25 38 42 73 74}
+do_test fts2c-1.8 {
+  execsql {
+    SELECT rowid FROM email WHERE email MATCH '-mark susan'
+  }
+} {24}
+do_test fts2c-1.9 {
+  execsql {
+    SELECT rowid FROM email WHERE email MATCH 'mark OR susan'
+  }
+} {6 17 24 25 38 40 42 73 74}
+
+# Some simple tests of the automatic "offsets(email)" column.  In the sample
+# data set above, only one message, number 20, contains the words
+# "gas" and "reminder" in both body and subject.
+#
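+# (Editorial note, not part of the original test file: offsets() reports
+# each hit as a group of four integers -- column number, term number
+# within the MATCH expression, byte offset of the match, and byte length.
+# The helper below is a hypothetical sketch for decoding that flat list;
+# nothing in this file calls it.)
+proc decode_offsets {offs} {
+  set hits {}
+  # Walk the list four integers at a time: column, term, byte offset, size.
+  foreach {col term byte size} $offs {
+    lappend hits [list $col $term $byte $size]
+  }
+  return $hits
+}
+# Example: decode_offsets {2 0 42 3 2 1 54 8} returns {2 0 42 3} {2 1 54 8},
+# i.e. two hits in column 2 (the subject column).
+#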
+do_test fts2c-2.1 {
+  execsql {
+    SELECT rowid, offsets(email) FROM email
+     WHERE email MATCH 'gas reminder'
+  }
+} {20 {2 0 42 3 2 1 54 8 3 0 42 3 3 1 54 8 3 0 129 3 3 0 143 3 3 0 240 3}}
+do_test fts2c-2.2 {
+  execsql {
+    SELECT rowid, offsets(email) FROM email
+     WHERE email MATCH 'subject:gas reminder'
+  }
+} {20 {2 0 42 3 2 1 54 8 3 1 54 8}}
+do_test fts2c-2.3 {
+  execsql {
+    SELECT rowid, offsets(email) FROM email
+     WHERE email MATCH 'body:gas reminder'
+  }
+} {20 {2 1 54 8 3 0 42 3 3 1 54 8 3 0 129 3 3 0 143 3 3 0 240 3}}
+do_test fts2c-2.4 {
+  execsql {
+    SELECT rowid, offsets(email) FROM email
+     WHERE subject MATCH 'gas reminder'
+  }
+} {20 {2 0 42 3 2 1 54 8}}
+do_test fts2c-2.5 {
+  execsql {
+    SELECT rowid, offsets(email) FROM email
+     WHERE body MATCH 'gas reminder'
+  }
+} {20 {3 0 42 3 3 1 54 8 3 0 129 3 3 0 143 3 3 0 240 3}}
+
+# Document 32 contains 5 instances of the word "child".  But only
+# 3 of them are paired with "product".  Make sure only those instances
+# that match the phrase appear in the offsets(email) list.
+#
+do_test fts2c-3.1 {
+  execsql {
+    SELECT rowid, offsets(email) FROM email
+     WHERE body MATCH 'child product' AND +rowid=32
+  }
+} {32 {3 0 94 5 3 0 114 5 3 0 207 5 3 1 213 7 3 0 245 5 3 1 251 7 3 0 409 5 3 1 415 7 3 1 493 7}}
+do_test fts2c-3.2 {
+  execsql {
+    SELECT rowid, offsets(email) FROM email
+     WHERE body MATCH '"child product"'
+  }
+} {32 {3 0 207 5 3 1 213 7 3 0 245 5 3 1 251 7 3 0 409 5 3 1 415 7}}
+
+# Snippet generator tests
+#
+do_test fts2c-4.1 {
+  execsql {
+    SELECT snippet(email) FROM email
+     WHERE email MATCH 'subject:gas reminder'
+  }
+} {{Alert Posted 10:00 AM November 20,2000: E-<b>GAS</b> Request <b>Reminder</b>}}
+do_test fts2c-4.2 {
+  execsql {
+    SELECT snippet(email) FROM email
+     WHERE email MATCH 'christmas candlelight'
+  }
+} {{<b>...</b> place.? What do you think about going here <b>Christmas</b> 
+eve?? They have an 11:00 a.m. service and a <b>candlelight</b> service at 5:00 p.m., 
+among others. <b>...</b>}}
+
+do_test fts2c-4.3 {
+  execsql {
+    SELECT snippet(email) FROM email
+     WHERE email MATCH 'deal sheet potential reuse'
+  }
+} {{EOL-Accenture <b>Deal</b> <b>Sheet</b> <b>...</b> intent
+     Review Enron asset base for <b>potential</b> <b>reuse</b>/ licensing
+     Contract negotiations <b>...</b>}}
+do_test fts2c-4.4 {
+  execsql {
+    SELECT snippet(email,'<<<','>>>',' ') FROM email
+     WHERE email MATCH 'deal sheet potential reuse'
+  }
+} {{EOL-Accenture <<<Deal>>> <<<Sheet>>>  intent
+     Review Enron asset base for <<<potential>>> <<<reuse>>>/ licensing
+     Contract negotiations  }}
+do_test fts2c-4.5 {
+  execsql {
+    SELECT snippet(email,'<<<','>>>',' ') FROM email
+     WHERE email MATCH 'first things'
+  }
+} {{Re: <<<First>>> Polish Deal!  Congrats!  <<<Things>>> seem to be building rapidly now on the  }}
+do_test fts2c-4.6 {
+  execsql {
+    SELECT snippet(email) FROM email
+     WHERE email MATCH 'chris is here'
+  }
+} {{<b>chris</b>.germany at enron.com <b>...</b> Sounds good to me.  I bet this <b>is</b> next to the Warick?? Hotel. <b>...</b> place.? What do you think about going <b>here</b> Christmas 
+eve?? They have an 11:00 a.m. <b>...</b>}}
+do_test fts2c-4.7 {
+  execsql {
+    SELECT snippet(email) FROM email
+     WHERE email MATCH '"pursuant to"'
+  }
+} {{Erin:
+
+<b>Pursuant</b> <b>to</b> your request, attached are the Schedule to <b>...</b>}}
+do_test fts2c-4.8 {
+  execsql {
+    SELECT snippet(email) FROM email
+     WHERE email MATCH 'ancillary load davis'
+  }
+} {{pete.<b>davis</b>@enron.com <b>...</b> Start Date: 4/22/01; HourAhead hour: 3;  No <b>ancillary</b> schedules awarded.  
+Variances detected.
+Variances detected in <b>Load</b> schedule.
+
+    LOG MESSAGES:
+
+PARSING <b>...</b>}}
+
+# Combinations of AND and OR operators:
+#
+do_test fts2c-5.1 {
+  execsql {
+    SELECT snippet(email) FROM email
+     WHERE email MATCH 'questar enron OR com'
+  }
+} {{matt.smith@<b>enron</b>.<b>com</b> <b>...</b> six reports:  
+
+31 Keystone Receipts
+15 <b>Questar</b> Pipeline
+40 Rockies Production
+22 West_2 <b>...</b>}}
+do_test fts2c-5.2 {
+  execsql {
+    SELECT snippet(email) FROM email
+     WHERE email MATCH 'enron OR com questar'
+  }
+} {{matt.smith@<b>enron</b>.<b>com</b> <b>...</b> six reports:  
+
+31 Keystone Receipts
+15 <b>Questar</b> Pipeline
+40 Rockies Production
+22 West_2 <b>...</b>}}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2d.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2d.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,65 @@
+# 2006 October 1
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  The
+# focus of this script is testing the FTS2 module, and in particular
+# the Porter stemmer.
+#
+# $Id: fts2d.test,v 1.1 2006/10/19 23:36:26 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+do_test fts2d-1.1 {
+  execsql {
+    CREATE VIRTUAL TABLE t1 USING fts2(content, tokenize porter);
+    INSERT INTO t1(rowid, content) VALUES(1, 'running and jumping');
+    SELECT rowid FROM t1 WHERE content MATCH 'run jump';
+  }
+} {1}
+do_test fts2d-1.2 {
+  execsql {
+    SELECT snippet(t1) FROM t1 WHERE t1 MATCH 'run jump';
+  }
+} {{<b>running</b> and <b>jumping</b>}}
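+
+# The porter tokenizer stems over-long terms by keeping only a few bytes
+# from each end of the word -- roughly the first and last ten bytes for a
+# long alphabetic term, and three from each end when the term contains
+# digits -- which is why the truncated and garbled query terms in the
+# tests below still match the indexed terms.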
+do_test fts2d-1.3 {
+  execsql {
+    INSERT INTO t1(rowid, content) 
+          VALUES(2, 'abcdefghijklmnopqrstuvwyxz');
+    SELECT rowid, snippet(t1) FROM t1 WHERE t1 MATCH 'abcdefghijqrstuvwyxz'
+  }
+} {2 <b>abcdefghijklmnopqrstuvwyxz</b>}
+do_test fts2d-1.4 {
+  execsql {
+    SELECT rowid, snippet(t1) FROM t1 WHERE t1 MATCH 'abcdefghijXXXXqrstuvwyxz'
+  }
+} {2 <b>abcdefghijklmnopqrstuvwyxz</b>}
+do_test fts2d-1.5 {
+  execsql {
+    INSERT INTO t1(rowid, content) 
+          VALUES(3, 'The value is 123456789');
+    SELECT rowid, snippet(t1) FROM t1 WHERE t1 MATCH '123789'
+  }
+} {3 {The value is <b>123456789</b>}}
+do_test fts2d-1.6 {
+  execsql {
+    SELECT rowid, snippet(t1) FROM t1 WHERE t1 MATCH '123000000789'
+  }
+} {3 {The value is <b>123456789</b>}}
+
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2e.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2e.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,85 @@
+# 2006 October 19
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  The
+# focus of this script is testing deletions in the FTS2 module.
+#
+# $Id: fts2e.test,v 1.1 2006/10/19 23:36:26 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+# Construct a full-text search table containing keywords which are the
+# ordinal numbers of the bit positions set for a sequence of integers,
+# which are used for the rowid.  There are a total of 30 INSERT and
+# DELETE statements, so that we'll test both the segmentMerge() merge
+# (over the first 16) and the termSelect() merge (over the level-1
+# segment and 14 level-0 segments).
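+#
+# For example, rowid 5 is binary 101, so bits 1 and 3 are set and the row's
+# content is 'one three'; rowid 14 is binary 1110, giving 'two three four'.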
+db eval {
+  CREATE VIRTUAL TABLE t1 USING fts2(content);
+  INSERT INTO t1 (rowid, content) VALUES(1, 'one');
+  INSERT INTO t1 (rowid, content) VALUES(2, 'two');
+  INSERT INTO t1 (rowid, content) VALUES(3, 'one two');
+  INSERT INTO t1 (rowid, content) VALUES(4, 'three');
+  DELETE FROM t1 WHERE rowid = 1;
+  INSERT INTO t1 (rowid, content) VALUES(5, 'one three');
+  INSERT INTO t1 (rowid, content) VALUES(6, 'two three');
+  INSERT INTO t1 (rowid, content) VALUES(7, 'one two three');
+  DELETE FROM t1 WHERE rowid = 4;
+  INSERT INTO t1 (rowid, content) VALUES(8, 'four');
+  INSERT INTO t1 (rowid, content) VALUES(9, 'one four');
+  INSERT INTO t1 (rowid, content) VALUES(10, 'two four');
+  DELETE FROM t1 WHERE rowid = 7;
+  INSERT INTO t1 (rowid, content) VALUES(11, 'one two four');
+  INSERT INTO t1 (rowid, content) VALUES(12, 'three four');
+  INSERT INTO t1 (rowid, content) VALUES(13, 'one three four');
+  DELETE FROM t1 WHERE rowid = 10;
+  INSERT INTO t1 (rowid, content) VALUES(14, 'two three four');
+  INSERT INTO t1 (rowid, content) VALUES(15, 'one two three four');
+  INSERT INTO t1 (rowid, content) VALUES(16, 'five');
+  DELETE FROM t1 WHERE rowid = 13;
+  INSERT INTO t1 (rowid, content) VALUES(17, 'one five');
+  INSERT INTO t1 (rowid, content) VALUES(18, 'two five');
+  INSERT INTO t1 (rowid, content) VALUES(19, 'one two five');
+  DELETE FROM t1 WHERE rowid = 16;
+  INSERT INTO t1 (rowid, content) VALUES(20, 'three five');
+  INSERT INTO t1 (rowid, content) VALUES(21, 'one three five');
+  INSERT INTO t1 (rowid, content) VALUES(22, 'two three five');
+  DELETE FROM t1 WHERE rowid = 19;
+  DELETE FROM t1 WHERE rowid = 22;
+}
+
+do_test fts2e-1.1 {
+  execsql {SELECT COUNT(*) FROM t1}
+} {14}
+
+do_test fts2e-2.1 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one'}
+} {3 5 9 11 15 17 21}
+
+do_test fts2e-2.2 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'two'}
+} {2 3 6 11 14 15 18}
+
+do_test fts2e-2.3 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'three'}
+} {5 6 12 14 15 20 21}
+
+do_test fts2e-2.4 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'four'}
+} {8 9 11 12 14 15}
+
+do_test fts2e-2.5 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'five'}
+} {17 18 20 21}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2f.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2f.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,90 @@
+# 2006 October 19
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  The
+# focus of this script is testing updates in the FTS2 module.
+#
+# $Id: fts2f.test,v 1.1 2006/10/19 23:36:26 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+# Construct a full-text search table containing keywords which are the
+# ordinal numbers of the bit positions set for a sequence of integers,
+# which are used for the rowid.  There are a total of 31 INSERT,
+# UPDATE, and DELETE statements, so that we'll test both the
+# segmentMerge() merge (over the first 16) and the termSelect() merge
+# (over the level-1 segment and 15 level-0 segments).
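+#
+# (The rowid/keyword bit-pattern encoding is the same as in fts2e.test; see
+# the worked example there.)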
+db eval {
+  CREATE VIRTUAL TABLE t1 USING fts2(content);
+  INSERT INTO t1 (rowid, content) VALUES(1, 'one');
+  INSERT INTO t1 (rowid, content) VALUES(2, 'two');
+  INSERT INTO t1 (rowid, content) VALUES(3, 'one two');
+  INSERT INTO t1 (rowid, content) VALUES(4, 'three');
+  INSERT INTO t1 (rowid, content) VALUES(5, 'one three');
+  INSERT INTO t1 (rowid, content) VALUES(6, 'two three');
+  INSERT INTO t1 (rowid, content) VALUES(7, 'one two three');
+  DELETE FROM t1 WHERE rowid = 4;
+  INSERT INTO t1 (rowid, content) VALUES(8, 'four');
+  UPDATE t1 SET content = 'update one three' WHERE rowid = 1;
+  INSERT INTO t1 (rowid, content) VALUES(9, 'one four');
+  INSERT INTO t1 (rowid, content) VALUES(10, 'two four');
+  DELETE FROM t1 WHERE rowid = 7;
+  INSERT INTO t1 (rowid, content) VALUES(11, 'one two four');
+  INSERT INTO t1 (rowid, content) VALUES(12, 'three four');
+  INSERT INTO t1 (rowid, content) VALUES(13, 'one three four');
+  DELETE FROM t1 WHERE rowid = 10;
+  INSERT INTO t1 (rowid, content) VALUES(14, 'two three four');
+  INSERT INTO t1 (rowid, content) VALUES(15, 'one two three four');
+  UPDATE t1 SET content = 'update two five' WHERE rowid = 8;
+  INSERT INTO t1 (rowid, content) VALUES(16, 'five');
+  DELETE FROM t1 WHERE rowid = 13;
+  INSERT INTO t1 (rowid, content) VALUES(17, 'one five');
+  INSERT INTO t1 (rowid, content) VALUES(18, 'two five');
+  INSERT INTO t1 (rowid, content) VALUES(19, 'one two five');
+  DELETE FROM t1 WHERE rowid = 16;
+  INSERT INTO t1 (rowid, content) VALUES(20, 'three five');
+  INSERT INTO t1 (rowid, content) VALUES(21, 'one three five');
+  INSERT INTO t1 (rowid, content) VALUES(22, 'two three five');
+  DELETE FROM t1 WHERE rowid = 19;
+  UPDATE t1 SET content = 'update' WHERE rowid = 15;
+}
+
+do_test fts2f-1.1 {
+  execsql {SELECT COUNT(*) FROM t1}
+} {16}
+
+do_test fts2f-2.0 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'update'}
+} {1 8 15}
+
+do_test fts2f-2.1 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'one'}
+} {1 3 5 9 11 17 21}
+
+do_test fts2f-2.2 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'two'}
+} {2 3 6 8 11 14 18 22}
+
+do_test fts2f-2.3 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'three'}
+} {1 5 6 12 14 20 21 22}
+
+do_test fts2f-2.4 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'four'}
+} {9 11 12 14}
+
+do_test fts2f-2.5 {
+  execsql {SELECT rowid FROM t1 WHERE content MATCH 'five'}
+} {8 17 18 20 21 22}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2g.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2g.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,77 @@
+# 2006 October 19
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  The focus
+# of this script is testing handling of edge cases for various doclist
+# merging functions in the FTS2 module query logic.
+#
+# $Id: fts2g.test,v 1.1 2006/10/25 20:27:40 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+db eval {
+  CREATE VIRTUAL TABLE t1 USING fts2(content);
+  INSERT INTO t1 (rowid, content) VALUES(1, 'this is a test');
+}
+
+# No hits at all.  Returns empty doclists from termSelect().
+do_test fts2g-1.1 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'something'}
+} {}
+
+# Empty left in docListExceptMerge().
+do_test fts2g-1.2 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH '-this something'}
+} {}
+
+# Empty right in docListExceptMerge().
+do_test fts2g-1.3 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'this -something'}
+} {1}
+
+# Empty left in docListPhraseMerge().
+do_test fts2g-1.4 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH '"this something"'}
+} {}
+
+# Empty right in docListPhraseMerge().
+do_test fts2g-1.5 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH '"something is"'}
+} {}
+
+# Empty left in docListOrMerge().
+do_test fts2g-1.6 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'something OR this'}
+} {1}
+
+# Empty right in docListOrMerge().
+do_test fts2g-1.7 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'this OR something'}
+} {1}
+
+# Empty left in docListAndMerge().
+do_test fts2g-1.8 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'something this'}
+} {}
+
+# Empty right in docListAndMerge().
+do_test fts2g-1.9 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'this something'}
+} {}
+
+# No support for all-except queries.
+do_test fts2g-1.10 {
+  catchsql {SELECT rowid FROM t1 WHERE t1 MATCH '-this -something'}
+} {1 {SQL logic error or missing database}}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2h.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2h.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,76 @@
+# 2006 October 31 (scaaarey)
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  The focus
+# here is testing correct handling of excessively long terms.
+#
+# $Id: fts2h.test,v 1.1 2006/11/29 21:03:01 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+# Generate a term of len copies of char.
+proc bigterm {char len} {
+  for {set term ""} {$len>0} {incr len -1} {
+    append term $char
+  }
+  return $term
+}
+
+# Generate a document of bigterms based on characters from the list
+# chars.
+proc bigtermdoc {chars len} {
+  set doc ""
+  foreach char $chars {
+    append doc " " [bigterm $char $len]
+  }
+  return $doc
+}
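+# For example, [bigterm a 3] returns "aaa" and [bigtermdoc {a b} 3]
+# returns " aaa bbb" (note the leading space).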
+
+set len 5000
+set doc1 [bigtermdoc {a b c d} $len]
+set doc2 [bigtermdoc {b d e f} $len]
+set doc3 [bigtermdoc {a c e} $len]
+
+set aterm [bigterm a $len]
+set bterm [bigterm b $len]
+set xterm [bigterm x $len]
+
+db eval {
+  CREATE VIRTUAL TABLE t1 USING fts2(content);
+  INSERT INTO t1 (rowid, content) VALUES(1, $doc1);
+  INSERT INTO t1 (rowid, content) VALUES(2, $doc2);
+  INSERT INTO t1 (rowid, content) VALUES(3, $doc3);
+}
+
+# No hits at all.  Returns empty doclists from termSelect().
+do_test fts2h-1.1 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH 'something'}
+} {}
+
+do_test fts2h-1.2 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH $aterm}
+} {1 3}
+
+do_test fts2h-1.2 {
+  execsql {SELECT rowid FROM t1 WHERE t1 MATCH $xterm}
+} {}
+
+do_test fts2h-1.3 {
+  execsql "SELECT rowid FROM t1 WHERE t1 MATCH '$aterm -$xterm'"
+} {1 3}
+
+do_test fts2h-1.4 {
+  execsql "SELECT rowid FROM t1 WHERE t1 MATCH '\"$aterm $bterm\"'"
+} {1}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2i.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2i.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,87 @@
+# 2007 January 17
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite fts2 library.  The
+# focus here is testing handling of UPDATE when using UTF-16-encoded
+# databases.
+#
+# $Id: fts2i.test,v 1.2 2007/01/24 03:46:35 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+# Return the UTF-16 representation of the supplied UTF-8 string $str.
+# If $nt is true, append two 0x00 bytes as a nul terminator.
+# NOTE(shess) Copied from capi3.test.
+proc utf16 {str {nt 1}} {
+  set r [encoding convertto unicode $str]
+  if {$nt} {
+    append r "\x00\x00"
+  }
+  return $r
+}
+
+db eval {
+  PRAGMA encoding = "UTF-16le";
+  CREATE VIRTUAL TABLE t1 USING fts2(content);
+}
+
+do_test fts2i-1.0 {
+  execsql {PRAGMA encoding}
+} {UTF-16le}
+
+do_test fts2i-1.1 {
+  execsql {INSERT INTO t1 (rowid, content) VALUES(1, 'one')}
+  execsql {SELECT content FROM t1 WHERE rowid = 1}
+} {one}
+
+do_test fts2i-1.2 {
+  set sql "INSERT INTO t1 (rowid, content) VALUES(2, 'two')"
+  set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  execsql {SELECT content FROM t1 WHERE rowid = 2}
+} {two}
+
+do_test fts2i-1.3 {
+  set sql "INSERT INTO t1 (rowid, content) VALUES(3, 'three')"
+  set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  set sql "UPDATE t1 SET content = 'trois' WHERE rowid = 3"
+  set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  execsql {SELECT content FROM t1 WHERE rowid = 3}
+} {trois}
+
+do_test fts2i-1.4 {
+  set sql16 [utf16 {INSERT INTO t1 (rowid, content) VALUES(4, 'four')}]
+  set STMT [sqlite3_prepare16 $DB $sql16 -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  execsql {SELECT content FROM t1 WHERE rowid = 4}
+} {four}
+
+do_test fts2i-1.5 {
+  set sql16 [utf16 {INSERT INTO t1 (rowid, content) VALUES(5, 'five')}]
+  set STMT [sqlite3_prepare16 $DB $sql16 -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  set sql "UPDATE t1 SET content = 'cinq' WHERE rowid = 5"
+  set STMT [sqlite3_prepare $DB $sql -1 TAIL]
+  sqlite3_step $STMT
+  sqlite3_finalize $STMT
+  execsql {SELECT content FROM t1 WHERE rowid = 5}
+} {cinq}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/fts2j.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/fts2j.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,89 @@
+# 2007 February 6
+#
+# The author disclaims copyright to this source code.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  This
+# tests creating fts2 tables in an attached database.
+#
+# $Id: fts2j.test,v 1.1 2007/02/07 01:01:18 shess Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# If SQLITE_ENABLE_FTS2 is not defined, omit this file.
+ifcapable !fts2 {
+  finish_test
+  return
+}
+
+# Clean up anything left over from a previous pass.
+file delete -force test2.db
+file delete -force test2.db-journal
+sqlite3 db2 test2.db
+
+db eval {
+  CREATE VIRTUAL TABLE t3 USING fts2(content);
+  INSERT INTO t3 (rowid, content) VALUES(1, "hello world");
+}
+
+db2 eval {
+  CREATE VIRTUAL TABLE t1 USING fts2(content);
+  INSERT INTO t1 (rowid, content) VALUES(1, "hello world");
+  INSERT INTO t1 (rowid, content) VALUES(2, "hello there");
+  INSERT INTO t1 (rowid, content) VALUES(3, "cruel world");
+}
+
+# This has always worked because the t1_* tables used by fts2 will be
+# the defaults.
+do_test fts2j-1.1 {
+  execsql {
+    ATTACH DATABASE 'test2.db' AS two;
+    SELECT rowid FROM t1 WHERE t1 MATCH 'hello';
+    DETACH DATABASE two;
+  }
+} {1 2}
+# Make certain we're detached if there was an error.
+catch {db eval {DETACH DATABASE two}}
+
+# In older code, this appears to work fine, but the t2_* tables used
+# by fts2 will be created in database 'main' instead of database
+# 'two'.  It appears to work fine because the tables end up being the
+# defaults, but it is obviously broken if you hope to use the table
+# anywhere other than in the exact same ATTACH setup.
+do_test fts2j-1.2 {
+  execsql {
+    ATTACH DATABASE 'test2.db' AS two;
+    CREATE VIRTUAL TABLE two.t2 USING fts2(content);
+    INSERT INTO t2 (rowid, content) VALUES(1, "hello world");
+    INSERT INTO t2 (rowid, content) VALUES(2, "hello there");
+    INSERT INTO t2 (rowid, content) VALUES(3, "cruel world");
+    SELECT rowid FROM t2 WHERE t2 MATCH 'hello';
+    DETACH DATABASE two;
+  }
+} {1 2}
+catch {db eval {DETACH DATABASE two}}
+
+# In older code, this broke because the fts2 code attempted to create
+# t3_* tables in database 'main', but they already existed.  Normally
+# this wouldn't happen without t3 itself existing, in which case the
+# fts2 code would never be called in the first place.
+do_test fts2j-1.3 {
+  execsql {
+    ATTACH DATABASE 'test2.db' AS two;
+
+    CREATE VIRTUAL TABLE two.t3 USING fts2(content);
+    INSERT INTO two.t3 (rowid, content) VALUES(2, "hello there");
+    INSERT INTO two.t3 (rowid, content) VALUES(3, "cruel world");
+    SELECT rowid FROM two.t3 WHERE t3 MATCH 'hello';
+
+    DETACH DATABASE two;
+  } db2
+} {2}
+catch {db eval {DETACH DATABASE two}}
+
+catch {db2 close}
+file delete -force test2.db
+
+finish_test

Modified: freeswitch/trunk/libs/sqlite/test/func.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/func.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/func.test	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 # This file implements regression tests for SQLite library.  The
 # focus of this file is testing built-in functions.
 #
-# $Id: func.test,v 1.55 2006/09/16 21:45:14 drh Exp $
+# $Id: func.test,v 1.57 2007/01/29 17:58:28 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -296,6 +296,35 @@
     SELECT random() is not null;
   }
 } {1}
+do_test func-9.2 {
+  execsql {
+    SELECT typeof(random());
+  }
+} {integer}
+do_test func-9.3 {
+  execsql {
+    SELECT randomblob(32) is not null;
+  }
+} {1}
+do_test func-9.4 {
+  execsql {
+    SELECT typeof(randomblob(32));
+  }
+} {blob}
+do_test func-9.5 {
+  execsql {
+    SELECT length(randomblob(32)), length(randomblob(-5)),
+           length(randomblob(2000))
+  }
+} {32 1 2000}
+
+# The "hex()" function was added in order to be able to render blobs
+# generated by randomblob().  So this seems like a good place to test
+# hex().
+#
+do_test func-9.10 {
+  execsql {SELECT hex(x'00112233445566778899aAbBcCdDeEfF')}
+} {00112233445566778899AABBCCDDEEFF}
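+# (The two combine naturally: e.g. SELECT hex(randomblob(4)) returns an
+# 8-character hexadecimal string.)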
 
 # Use the "sqlite_register_test_function" TCL command which is part of
 # the text fixture in order to verify correct operation of some of

Modified: freeswitch/trunk/libs/sqlite/test/ioerr.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/ioerr.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/ioerr.test	Thu Feb 22 17:09:42 2007
@@ -15,7 +15,7 @@
 # The tests in this file use special facilities that are only
 # available in the SQLite test fixture.
 #
-# $Id: ioerr.test,v 1.27 2006/09/15 07:28:51 drh Exp $
+# $Id: ioerr.test,v 1.29 2007/01/04 14:58:14 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -46,6 +46,9 @@
   DELETE FROM t1 WHERE a<100;
 } -exclude [expr [string match [execsql {pragma auto_vacuum}] 1] ? 4 : 0]
 
+finish_test
+return
+
 # Test for IO errors during a VACUUM. 
 #
 # The first IO call is excluded from the test. This call attempts to read
@@ -165,6 +168,7 @@
 # These tests can't be run on windows because the windows version of 
 # SQLite holds a mandatory exclusive lock on journal files it has open.
 #
+btree_breakpoint
 if {$tcl_platform(platform)!="windows"} {
   do_ioerr_test ioerr-7 -tclprep {
     db close

Modified: freeswitch/trunk/libs/sqlite/test/malloc.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/malloc.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/malloc.test	Thu Feb 22 17:09:42 2007
@@ -14,7 +14,7 @@
 # special feature is used to see what happens in the library if a malloc
 # were to really fail due to an out-of-memory situation.
 #
-# $Id: malloc.test,v 1.35 2006/10/04 11:55:50 drh Exp $
+# $Id: malloc.test,v 1.36 2006/10/18 23:26:39 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -229,7 +229,10 @@
   CREATE TABLE t1(a,b);
   CREATE TABLE t2(x,y);
   CREATE TRIGGER r1 AFTER INSERT ON t1 BEGIN
-  INSERT INTO t2(x,y) VALUES(new.rowid,1);
+    INSERT INTO t2(x,y) VALUES(new.rowid,1);
+    UPDATE t2 SET y=y+1 WHERE x=new.rowid;
+    SELECT 123;
+    DELETE FROM t2 WHERE x=new.rowid;
   END;
   INSERT INTO t1(a,b) VALUES(2,3);
   COMMIT;

Modified: freeswitch/trunk/libs/sqlite/test/misc5.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/misc5.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/misc5.test	Thu Feb 22 17:09:42 2007
@@ -13,7 +13,7 @@
 # This file implements tests for miscellaneous features that were
 # left out of other test files.
 #
-# $Id: misc5.test,v 1.15 2006/08/12 12:33:15 drh Exp $
+# $Id: misc5.test,v 1.16 2007/01/03 23:37:29 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -573,7 +573,7 @@
 
 # Check the MISUSE return from sqlite3_busy_timeout
 #
-do_test misc5-8.1 {
+do_test misc5-8.1-misuse {
   set DB [sqlite3_connection_pointer db]
   db close
   sqlite3_busy_timeout $DB 1000

Modified: freeswitch/trunk/libs/sqlite/test/pragma.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/pragma.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/pragma.test	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 #
 # This file implements tests for the PRAGMA command.
 #
-# $Id: pragma.test,v 1.44 2006/08/14 14:23:43 drh Exp $
+# $Id: pragma.test,v 1.51 2007/01/27 14:26:07 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -42,7 +42,8 @@
 # that the "all.test" script does.
 #
 db close
-file delete test.db
+file delete test.db test.db-journal
+file delete test3.db test3.db-journal
 sqlite3 db test.db; set DB [sqlite3_connection_pointer db]
 
 ifcapable pager_pragmas {
@@ -258,12 +259,143 @@
     btree_close $db
     execsql {PRAGMA integrity_check}
   } {{rowid 1 missing from index i2} {wrong # of entries in index i2}}
-}
-do_test pragma-3.3 {
-  execsql {
-    DROP INDEX i2;
-  } 
-} {}
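+  # The integer argument to "PRAGMA integrity_check=N" limits the report to
+  # at most N error messages; a zero or non-numeric argument behaves like a
+  # plain integrity_check.  The tests below exercise both cases.
+  #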
+  do_test pragma-3.3 {
+    execsql {PRAGMA integrity_check=1}
+  } {{rowid 1 missing from index i2}}
+  do_test pragma-3.4 {
+    execsql {
+      ATTACH DATABASE 'test.db' AS t2;
+      PRAGMA integrity_check
+    }
+  } {{rowid 1 missing from index i2} {wrong # of entries in index i2} {rowid 1 missing from index i2} {wrong # of entries in index i2}}
+  do_test pragma-3.5 {
+    execsql {
+      PRAGMA integrity_check=3
+    }
+  } {{rowid 1 missing from index i2} {wrong # of entries in index i2} {rowid 1 missing from index i2}}
+  do_test pragma-3.6 {
+    execsql {
+      PRAGMA integrity_check=xyz
+    }
+  } {{rowid 1 missing from index i2} {wrong # of entries in index i2} {rowid 1 missing from index i2} {wrong # of entries in index i2}}
+  do_test pragma-3.7 {
+    execsql {
+      PRAGMA integrity_check=0
+    }
+  } {{rowid 1 missing from index i2} {wrong # of entries in index i2} {rowid 1 missing from index i2} {wrong # of entries in index i2}}
+
+  # Add additional corruption by appending unused pages to the end of
+  # the database file testerr.db
+  #
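+  # (The corruption is created by writing the original file's contents
+  # twice: the copy's extra pages are not referenced by any btree, so the
+  # integrity check reports them as "Page N is never used".)
+  #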
+  do_test pragma-3.8 {
+    execsql {DETACH t2}
+    file delete -force testerr.db testerr.db-journal
+    set out [open testerr.db w]
+    fconfigure $out -translation binary
+    set in [open test.db r]
+    fconfigure $in -translation binary
+    puts -nonewline $out [read $in]
+    seek $in 0
+    puts -nonewline $out [read $in]
+    close $in
+    close $out
+    execsql {REINDEX t2}
+    execsql {PRAGMA integrity_check}
+  } {ok}
+  do_test pragma-3.9 {
+    execsql {
+      ATTACH 'testerr.db' AS t2;
+      PRAGMA integrity_check
+    }
+  } {{*** in database t2 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used} {rowid 1 missing from index i2} {wrong # of entries in index i2}}
+  do_test pragma-3.10 {
+    execsql {
+      PRAGMA integrity_check=1
+    }
+  } {{*** in database t2 ***
+Page 4 is never used}}
+  do_test pragma-3.11 {
+    execsql {
+      PRAGMA integrity_check=5
+    }
+  } {{*** in database t2 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used} {rowid 1 missing from index i2} {wrong # of entries in index i2}}
+  do_test pragma-3.12 {
+    execsql {
+      PRAGMA integrity_check=4
+    }
+  } {{*** in database t2 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used} {rowid 1 missing from index i2}}
+  do_test pragma-3.13 {
+    execsql {
+      PRAGMA integrity_check=3
+    }
+  } {{*** in database t2 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used}}
+  do_test pragma-3.14 {
+    execsql {
+      PRAGMA integrity_check(2)
+    }
+  } {{*** in database t2 ***
+Page 4 is never used
+Page 5 is never used}}
+  do_test pragma-3.15 {
+    execsql {
+      ATTACH 'testerr.db' AS t3;
+      PRAGMA integrity_check
+    }
+  } {{*** in database t2 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used} {rowid 1 missing from index i2} {wrong # of entries in index i2} {*** in database t3 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used} {rowid 1 missing from index i2} {wrong # of entries in index i2}}
+  do_test pragma-3.16 {
+    execsql {
+      PRAGMA integrity_check(9)
+    }
+  } {{*** in database t2 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used} {rowid 1 missing from index i2} {wrong # of entries in index i2} {*** in database t3 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used} {rowid 1 missing from index i2}}
+  do_test pragma-3.17 {
+    execsql {
+      PRAGMA integrity_check=7
+    }
+  } {{*** in database t2 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used} {rowid 1 missing from index i2} {wrong # of entries in index i2} {*** in database t3 ***
+Page 4 is never used
+Page 5 is never used}}
+  do_test pragma-3.18 {
+    execsql {
+      PRAGMA integrity_check=4
+    }
+  } {{*** in database t2 ***
+Page 4 is never used
+Page 5 is never used
+Page 6 is never used} {rowid 1 missing from index i2}}
+}
+do_test pragma-3.99 {
+  catchsql {DETACH t3}
+  catchsql {DETACH t2}
+  file delete -force testerr.db testerr.db-journal
+  catchsql {DROP INDEX i2}
+} {0 {}}
 
 # Test modifying the cache_size of an attached database.
 ifcapable pager_pragmas {
@@ -351,12 +483,20 @@
     pragma table_info(t2)
   }
 } {0 a {} 0 {} 0 1 b {} 0 {} 0 2 c {} 0 {} 0}
+db nullvalue <<NULL>>
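+# ("db nullvalue <<NULL>>" makes a true NULL in table_info's dflt_value
+# column -- i.e. no default at all -- print as <<NULL>>, so it can be told
+# apart from an explicit DEFAULT NULL, which prints as the text NULL.)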
 do_test pragma-6.2.2 {
   execsql {
-    CREATE TABLE t5(a TEXT DEFAULT CURRENT_TIMESTAMP, b DEFAULT (5+3));
+    CREATE TABLE t5(
+      a TEXT DEFAULT CURRENT_TIMESTAMP, 
+      b DEFAULT (5+3),
+      c TEXT,
+      d INTEGER DEFAULT NULL,
+      e TEXT DEFAULT ''
+    );
     PRAGMA table_info(t5);
   }
-} {0 a TEXT 0 CURRENT_TIMESTAMP 0 1 b {} 0 5+3 0}
+} {0 a TEXT 0 CURRENT_TIMESTAMP 0 1 b {} 0 5+3 0 2 c TEXT 0 <<NULL>> 0 3 d INTEGER 0 NULL 0 4 e TEXT 0 '' 0}
+db nullvalue {}
 ifcapable {foreignkey} {
   do_test pragma-6.3 {
     execsql {
@@ -438,10 +578,10 @@
   }
 } {}
 do_test pragma-8.1.2 {
-  execsql {
+  execsql2 {
     PRAGMA schema_version;
   }
-} 105
+} {schema_version 105}
 do_test pragma-8.1.3 {
   execsql {
     PRAGMA schema_version = 106;
@@ -540,20 +680,20 @@
 # Now test that the user-version can be read and written (and that we aren't
 # accidentally manipulating the schema-version instead).
 do_test pragma-8.2.1 {
-  execsql {
+  execsql2 {
     PRAGMA user_version;
   }
-} {0}
+} {user_version 0}
 do_test pragma-8.2.2 {
   execsql {
     PRAGMA user_version = 2;
   }
 } {}
 do_test pragma-8.2.3.1 {
-  execsql {
+  execsql2 {
     PRAGMA user_version;
   }
-} {2}
+} {user_version 2}
 do_test pragma-8.2.3.2 {
   db close
   sqlite3 db test.db
@@ -686,7 +826,7 @@
   execsql { 
     PRAGMA temp_store_directory;
   }
-} [pwd]
+} [list [pwd]]
 do_test pragma-9.7 {
   catchsql { 
     PRAGMA temp_store_directory='/NON/EXISTENT/PATH/FOOBAR';

Modified: freeswitch/trunk/libs/sqlite/test/quick.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/quick.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/quick.test	Thu Feb 22 17:09:42 2007
@@ -6,7 +6,7 @@
 #***********************************************************************
 # This file runs all tests.
 #
-# $Id: quick.test,v 1.45 2006/06/23 08:05:38 danielk1977 Exp $
+# $Id: quick.test,v 1.47 2006/11/23 21:09:11 drh Exp $
 
 proc lshift {lvar} {
   upvar $lvar l
@@ -50,6 +50,7 @@
   memleak.test
   misuse.test
   quick.test
+  speed1.test
 
   autovacuum_crash.test
   btree8.test
@@ -63,9 +64,17 @@
   #  conflict.test
 }
 
+
+# Files to include in the test.  If this list is empty then everything
+# that is not in the EXCLUDE list is run.
+#
+set INCLUDE {
+}
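+# For example, "set INCLUDE {func.test pragma.test}" would restrict the run
+# to just those two files.
+#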
+
 foreach testfile [lsort -dictionary [glob $testdir/*.test]] {
   set tail [file tail $testfile]
   if {[lsearch -exact $EXCLUDE $tail]>=0} continue
+  if {[llength $INCLUDE]>0 && [lsearch -exact $INCLUDE $tail]<0} continue
   source $testfile
   catch {db close}
   if {$sqlite_open_file_count>0} {

Added: freeswitch/trunk/libs/sqlite/test/schema2.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/schema2.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,334 @@
+# 2006 November 08
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file tests the various conditions under which an SQLITE_SCHEMA
+# error should be returned.  This is a copy of schema.test that
+# has been altered to use sqlite3_prepare_v2 instead of sqlite3_prepare
+#
+# $Id: schema2.test,v 1.1 2006/11/09 00:24:55 drh Exp $
+
+#---------------------------------------------------------------------
+# When any of the following types of SQL statements or actions are 
+# executed, all pre-compiled statements are invalidated. An attempt
+# to execute an invalidated statement always returns SQLITE_SCHEMA.
+#
+# CREATE/DROP TABLE...................................schema2-1.*
+# CREATE/DROP VIEW....................................schema2-2.*
+# CREATE/DROP TRIGGER.................................schema2-3.*
+# CREATE/DROP INDEX...................................schema2-4.*
+# DETACH..............................................schema2-5.*
+# Deleting a user-function............................schema2-6.*
+# Deleting a collation sequence.......................schema2-7.*
+# Setting or changing the authorization function......schema2-8.*
+#
+# Test cases schema2-9.* and schema2-10.* test some specific bugs
+# that came up during development.
+#
+# Test cases schema2-11.* test that it is impossible to delete or
+# change a collation sequence or user-function while SQL statements
+# are executing. Adding new collations or functions is allowed.
+#
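+# Because sqlite3_prepare_v2() statements are transparently recompiled
+# after a schema change, the sqlite3_step() calls below are expected to
+# return SQLITE_ROW or SQLITE_DONE rather than an SQLITE_SCHEMA error.
+#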
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test schema2-1.1 {
+  set ::STMT [sqlite3_prepare_v2 $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+  execsql {
+    CREATE TABLE abc(a, b, c);
+  }
+  sqlite3_step $::STMT
+} {SQLITE_ROW}
+do_test schema2-1.2 {
+  sqlite3_finalize $::STMT
+} {SQLITE_OK}
+do_test schema2-1.3 {
+  set ::STMT [sqlite3_prepare_v2 $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+  execsql {
+    DROP TABLE abc;
+  }
+  sqlite3_step $::STMT
+} {SQLITE_DONE}
+do_test schema2-1.4 {
+  sqlite3_finalize $::STMT
+} {SQLITE_OK}
+
+
+ifcapable view {
+  do_test schema2-2.1 {
+    set ::STMT [sqlite3_prepare_v2 $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+    execsql {
+      CREATE VIEW v1 AS SELECT * FROM sqlite_master;
+    }
+    sqlite3_step $::STMT
+  } {SQLITE_ROW}
+  do_test schema2-2.2 {
+    sqlite3_finalize $::STMT
+  } {SQLITE_OK}
+  do_test schema2-2.3 {
+    set ::STMT [sqlite3_prepare_v2 $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+    execsql {
+      DROP VIEW v1;
+    }
+    sqlite3_step $::STMT
+  } {SQLITE_DONE}
+  do_test schema2-2.4 {
+    sqlite3_finalize $::STMT
+  } {SQLITE_OK}
+}
+
+ifcapable trigger {
+  do_test schema2-3.1 {
+    execsql {
+      CREATE TABLE abc(a, b, c);
+    }
+    set ::STMT [sqlite3_prepare_v2 $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+    execsql {
+      CREATE TRIGGER abc_trig AFTER INSERT ON abc BEGIN
+        SELECT 1, 2, 3;
+      END;
+    }
+    sqlite3_step $::STMT
+  } {SQLITE_ROW}
+  do_test schema2-3.2 {
+    sqlite3_finalize $::STMT
+  } {SQLITE_OK}
+  do_test schema2-3.3 {
+    set ::STMT [sqlite3_prepare_v2 $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+    execsql {
+      DROP TRIGGER abc_trig;
+    }
+    sqlite3_step $::STMT
+  } {SQLITE_ROW}
+  do_test schema2-3.4 {
+    sqlite3_finalize $::STMT
+  } {SQLITE_OK}
+}
+
+do_test schema2-4.1 {
+  catchsql {
+    CREATE TABLE abc(a, b, c);
+  }
+  set ::STMT [sqlite3_prepare_v2 $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+  execsql {
+    CREATE INDEX abc_index ON abc(a);
+  }
+  sqlite3_step $::STMT
+} {SQLITE_ROW}
+do_test schema2-4.2 {
+  sqlite3_finalize $::STMT
+} {SQLITE_OK}
+do_test schema2-4.3 {
+  set ::STMT [sqlite3_prepare_v2 $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+  execsql {
+    DROP INDEX abc_index;
+  }
+  sqlite3_step $::STMT
+} {SQLITE_ROW}
+do_test schema2-4.4 {
+  sqlite3_finalize $::STMT
+} {SQLITE_OK}
+
+#---------------------------------------------------------------------
+# Tests 5.1 to 5.4 check that prepared statements are invalidated when
+# a database is DETACHed (but not when one is ATTACHed).
+#
+do_test schema2-5.1 {
+  set sql {SELECT * FROM abc;}
+  set ::STMT [sqlite3_prepare_v2 $::DB $sql -1 TAIL]
+  execsql {
+    ATTACH 'test2.db' AS aux;
+  }
+  sqlite3_step $::STMT
+} {SQLITE_DONE}
+do_test schema2-5.2 {
+  sqlite3_reset $::STMT
+} {SQLITE_OK}
+do_test schema2-5.3 {
+  execsql {
+    DETACH aux;
+  }
+  sqlite3_step $::STMT
+} {SQLITE_DONE}
+do_test schema2-5.4 {
+  sqlite3_finalize $::STMT
+} {SQLITE_OK}
+
+#---------------------------------------------------------------------
+# Tests 6.* check that prepared statements are invalidated when
+# a user-function is deleted (but not when one is added).
+do_test schema2-6.1 {
+  set sql {SELECT * FROM abc;}
+  set ::STMT [sqlite3_prepare_v2 $::DB $sql -1 TAIL]
+  db function hello_function {}
+  sqlite3_step $::STMT
+} {SQLITE_DONE}
+do_test schema2-6.2 {
+  sqlite3_reset $::STMT
+} {SQLITE_OK}
+do_test schema2-6.3 {
+  sqlite_delete_function $::DB hello_function
+  sqlite3_step $::STMT
+} {SQLITE_DONE}
+do_test schema2-6.4 {
+  sqlite3_finalize $::STMT
+} {SQLITE_OK}
+
+#---------------------------------------------------------------------
+# Tests 7.* check that prepared statements are invalidated when
+# a collation sequence is deleted (but not when one is added).
+#
+ifcapable utf16 {
+  do_test schema2-7.1 {
+    set sql {SELECT * FROM abc;}
+    set ::STMT [sqlite3_prepare_v2 $::DB $sql -1 TAIL]
+    add_test_collate $::DB 1 1 1
+    sqlite3_step $::STMT
+  } {SQLITE_DONE}
+  do_test schema2-7.2 {
+    sqlite3_reset $::STMT
+  } {SQLITE_OK}
+  do_test schema2-7.3 {
+    add_test_collate $::DB 0 0 0 
+    sqlite3_step $::STMT
+  } {SQLITE_DONE}
+  do_test schema2-7.4 {
+    sqlite3_finalize $::STMT
+  } {SQLITE_OK}
+}
+
+#---------------------------------------------------------------------
+# Tests 8.1 and 8.2 check that prepared statements are invalidated when
+# the authorization function is set.
+#
+ifcapable auth {
+  do_test schema2-8.1 {
+    set ::STMT [sqlite3_prepare_v2 $::DB {SELECT * FROM sqlite_master} -1 TAIL]
+    db auth {}
+    sqlite3_step $::STMT
+  } {SQLITE_ROW}
+  do_test schema2-8.3 {
+    sqlite3_finalize $::STMT
+  } {SQLITE_OK}
+}
+
+#---------------------------------------------------------------------
+# schema2-9.1: Test that if a table is dropped by one database connection, 
+#             other database connections are aware of the schema change.
+# schema2-9.2: Test that if a view is dropped by one database connection,
+#             other database connections are aware of the schema change.
+#
+do_test schema2-9.1 {
+  sqlite3 db2 test.db
+  execsql {
+    DROP TABLE abc;
+  } db2
+  db2 close
+  catchsql {
+    SELECT * FROM abc;
+  }
+} {1 {no such table: abc}}
+execsql {
+  CREATE TABLE abc(a, b, c);
+}
+ifcapable view {
+  do_test schema2-9.2 {
+    execsql {
+      CREATE VIEW abcview AS SELECT * FROM abc;
+    }
+    sqlite3 db2 test.db
+    execsql {
+      DROP VIEW abcview;
+    } db2
+    db2 close
+    catchsql {
+      SELECT * FROM abcview;
+    }
+  } {1 {no such table: abcview}}
+}
+
+#---------------------------------------------------------------------
+# Test that if a CREATE TABLE statement fails because there are other
+# btree cursors open on the same database file it does not corrupt
+# the sqlite_master table.
+#
+do_test schema2-10.1 {
+  execsql {
+    INSERT INTO abc VALUES(1, 2, 3);
+  }
+  set sql {SELECT * FROM abc}
+  set ::STMT [sqlite3_prepare_v2 $::DB $sql -1 TAIL]
+  sqlite3_step $::STMT
+} {SQLITE_ROW}
+do_test schema2-10.2 {
+  catchsql {
+    CREATE TABLE t2(a, b, c);
+  }
+} {1 {database table is locked}}
+do_test schema2-10.3 {
+  sqlite3_finalize $::STMT
+} {SQLITE_OK}
+do_test schema2-10.4 {
+  sqlite3 db2 test.db
+  execsql {
+    SELECT * FROM abc
+  } db2
+} {1 2 3}
+do_test schema2-10.5 {
+  db2 close
+} {}
+
+#---------------------------------------------------------------------
+# Attempting to delete or replace a user-function or collation sequence 
+# while there are active statements returns an SQLITE_BUSY error.
+#
+# schema2-11.1 - 11.4: User function.
+# schema2-11.5 - 11.8: Collation sequence.
+#
+do_test schema2-11.1 {
+  db function tstfunc {}
+  set sql {SELECT * FROM abc}
+  set ::STMT [sqlite3_prepare_v2 $::DB $sql -1 TAIL]
+  sqlite3_step $::STMT
+} {SQLITE_ROW}
+do_test schema2-11.2 {
+  sqlite_delete_function $::DB tstfunc
+} {SQLITE_BUSY}
+do_test schema2-11.3 {
+  set rc [catch {
+    db function tstfunc {}
+  } msg]
+  list $rc $msg
+} {1 {Unable to delete/modify user-function due to active statements}}
+do_test schema2-11.4 {
+  sqlite3_finalize $::STMT
+} {SQLITE_OK}
+do_test schema2-11.5 {
+  db collate tstcollate {}
+  set sql {SELECT * FROM abc}
+  set ::STMT [sqlite3_prepare_v2 $::DB $sql -1 TAIL]
+  sqlite3_step $::STMT
+} {SQLITE_ROW}
+do_test schema2-11.6 {
+  sqlite_delete_collation $::DB tstcollate
+} {SQLITE_BUSY}
+do_test schema2-11.7 {
+  set rc [catch {
+    db collate tstcollate {}
+  } msg]
+  list $rc $msg
+} {1 {Unable to delete/modify collation sequence due to active statements}}
+do_test schema2-11.8 {
+  sqlite3_finalize $::STMT
+} {SQLITE_OK}
+
+finish_test

Modified: freeswitch/trunk/libs/sqlite/test/select6.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/select6.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/select6.test	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 # focus of this file is testing SELECT statements that contain
 # subqueries in their FROM clause.
 #
-# $Id: select6.test,v 1.24 2006/06/11 23:41:56 drh Exp $
+# $Id: select6.test,v 1.26 2006/11/30 13:06:00 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl

Modified: freeswitch/trunk/libs/sqlite/test/select7.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/select7.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/select7.test	Thu Feb 22 17:09:42 2007
@@ -10,7 +10,7 @@
 # focus of this file is testing compound SELECT statements and nested
 # views.
 #
-# $Id: select7.test,v 1.7 2005/03/29 03:11:00 danielk1977 Exp $
+# $Id: select7.test,v 1.8 2006/10/13 15:34:17 drh Exp $
 
 
 set testdir [file dirname $argv0]
@@ -71,5 +71,39 @@
     }
   } [list 0 [execsql {SELECT * FROM sqlite_master ORDER BY name}]]
 }
-finish_test
 
+# Ticket #2018 - Make sure names are resolved correctly on all
+# SELECT statements of a compound subquery.
+#
+ifcapable {subquery && compound} {
+  do_test select7-4.1 {
+    execsql {
+      CREATE TABLE IF NOT EXISTS photo(pk integer primary key, x);
+      CREATE TABLE IF NOT EXISTS tag(pk integer primary key, fk int, name);
+    
+      SELECT P.pk from PHOTO P WHERE NOT EXISTS ( 
+           SELECT T2.pk from TAG T2 WHERE T2.fk = P.pk 
+           EXCEPT 
+           SELECT T3.pk from TAG T3 WHERE T3.fk = P.pk AND T3.name LIKE '%foo%'
+      );
+    }
+  } {}
+  do_test select7-4.2 {
+    execsql {
+      INSERT INTO photo VALUES(1,1);
+      INSERT INTO photo VALUES(2,2);
+      INSERT INTO photo VALUES(3,3);
+      INSERT INTO tag VALUES(11,1,'one');
+      INSERT INTO tag VALUES(12,1,'two');
+      INSERT INTO tag VALUES(21,1,'one-b');
+      SELECT P.pk from PHOTO P WHERE NOT EXISTS ( 
+           SELECT T2.pk from TAG T2 WHERE T2.fk = P.pk 
+           EXCEPT 
+           SELECT T3.pk from TAG T3 WHERE T3.fk = P.pk AND T3.name LIKE '%foo%'
+      );
+    }
+  } {2 3}
+
+}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/speed1.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/speed1.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,280 @@
+# 2006 November 23
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#*************************************************************************
+# This file implements regression tests for SQLite library.  The
+# focus of this script is measuring executing speed.
+#
+# $Id: speed1.test,v 1.2 2006/11/30 13:06:00 drh Exp $
+#
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+set sqlout [open speed1.txt w]
+proc tracesql {sql} {
+  puts $::sqlout $sql\;
+}
+db trace tracesql
+
+# The number_name procedure below converts its argument (an integer)
+# into a string which is the English-language name for that number.
+#
+# Example:
+#
+#     puts [number_name 123]   ->  "one hundred twenty three"
+#
+set ones {zero one two three four five six seven eight nine
+          ten eleven twelve thirteen fourteen fifteen sixteen seventeen
+          eighteen nineteen}
+set tens {{} ten twenty thirty forty fifty sixty seventy eighty ninety}
+proc number_name {n} {
+  if {$n>=1000} {
+    set txt "[number_name [expr {$n/1000}]] thousand"
+    set n [expr {$n%1000}]
+  } else {
+    set txt {}
+  }
+  if {$n>=100} {
+    append txt " [lindex $::ones [expr {$n/100}]] hundred"
+    set n [expr {$n%100}]
+  }
+  if {$n>=20} {
+    append txt " [lindex $::tens [expr {$n/10}]]"
+    set n [expr {$n%10}]
+  }
+  if {$n>0} {
+    append txt " [lindex $::ones $n]"
+  }
+  set txt [string trim $txt]
+  if {$txt==""} {set txt zero}
+  return $txt
+}
+
+# Create a database schema.
+#
+do_test speed1-1.0 {
+  execsql {
+    PRAGMA page_size=4096;
+    CREATE TABLE t1(a INTEGER, b INTEGER, c TEXT);
+    CREATE TABLE t2(a INTEGER, b INTEGER, c TEXT);
+    CREATE INDEX i2a ON t2(a);
+    CREATE INDEX i2b ON t2(b);
+    SELECT name FROM sqlite_master ORDER BY 1;
+  }
+} {i2a i2b t1 t2}
+
+
+# 50000 INSERTs on an unindexed table
+#
+set sql {}
+for {set i 1} {$i<=50000} {incr i} {
+  set r [expr {int(rand()*500000)}]
+  append sql "INSERT INTO t1 VALUES($i,$r,'[number_name $r]');\n"
+}
+db eval BEGIN
+speed_trial speed1-insert1 50000 row $sql
+db eval COMMIT
+
+# 50000 INSERTs on an indexed table
+#
+set sql {}
+for {set i 1} {$i<=50000} {incr i} {
+  set r [expr {int(rand()*500000)}]
+  append sql "INSERT INTO t2 VALUES($i,$r,'[number_name $r]');\n"
+}
+db eval BEGIN
+speed_trial speed1-insert2 50000 row $sql
+db eval COMMIT
+
+
+
+# 50 SELECTs on an integer comparison.  There is no index so
+# a full table scan is required.
+#
+set sql {}
+for {set i 0} {$i<50} {incr i} {
+  set lwr [expr {$i*100}]
+  set upr [expr {($i+10)*100}]
+  append sql "SELECT count(*), avg(b) FROM t1 WHERE b>=$lwr AND b<$upr;"
+}
+db eval BEGIN
+speed_trial speed1-select1 [expr {50*50000}] row $sql
+db eval COMMIT
+
+# 50 SELECTs on a LIKE comparison.  There is no index so a full
+# table scan is required.
+#
+set sql {}
+for {set i 0} {$i<50} {incr i} {
+  append sql \
+    "SELECT count(*), avg(b) FROM t1 WHERE c LIKE '%[number_name $i]%';"
+}
+db eval BEGIN
+speed_trial speed1-select2 [expr {50*50000}] row $sql
+db eval COMMIT
+
+# Create indices
+#
+db eval BEGIN
+speed_trial speed1-createidx 150000 row {
+  CREATE INDEX i1a ON t1(a);
+  CREATE INDEX i1b ON t1(b);
+  CREATE INDEX i1c ON t1(c);
+}
+db eval COMMIT
+
+# 5000 SELECTs on an integer comparison where the integer is
+# indexed.
+#
+set sql {}
+for {set i 0} {$i<5000} {incr i} {
+  set lwr [expr {$i*100}]
+  set upr [expr {($i+10)*100}]
+  append sql "SELECT count(*), avg(b) FROM t1 WHERE b>=$lwr AND b<$upr;"
+}
+db eval BEGIN
+speed_trial speed1-select3 5000 stmt $sql
+db eval COMMIT
+
+# 100000 random SELECTs against rowid.
+#
+set sql {}
+for {set i 1} {$i<=100000} {incr i} {
+  set id [expr {int(rand()*50000)+1}]
+  append sql "SELECT c FROM t1 WHERE rowid=$id;"
+}
+db eval BEGIN
+speed_trial speed1-select4 100000 row $sql
+db eval COMMIT
+
+# 100000 random SELECTs against a unique indexed column.
+#
+set sql {}
+for {set i 1} {$i<=100000} {incr i} {
+  set id [expr {int(rand()*50000)+1}]
+  append sql "SELECT c FROM t1 WHERE a=$id;"
+}
+db eval BEGIN
+speed_trial speed1-select5 100000 row $sql
+db eval COMMIT
+
+# 50000 random SELECTs against an indexed text column
+#
+set sql {}
+db eval {SELECT c FROM t1 ORDER BY random() LIMIT 50000} {
+  append sql "SELECT c FROM t1 WHERE c='$c';"
+}
+db eval BEGIN
+speed_trial speed1-select6 50000 row $sql
+db eval COMMIT
+
+
+# Vacuum
+speed_trial speed1-vacuum 100000 row VACUUM
+
+# 5000 updates of ranges where the field being compared is indexed.
+#
+set sql {}
+for {set i 0} {$i<5000} {incr i} {
+  set lwr [expr {$i*2}]
+  set upr [expr {($i+1)*2}]
+  append sql "UPDATE t1 SET b=b*2 WHERE a>=$lwr AND a<$upr;"
+}
+db eval BEGIN
+speed_trial speed1-update1 5000 stmt $sql
+db eval COMMIT
+
+# 50000 single-row updates.  An index is used to find the row quickly.
+#
+set sql {}
+for {set i 0} {$i<50000} {incr i} {
+  set r [expr {int(rand()*500000)}]
+  append sql "UPDATE t1 SET b=$r WHERE a=$i;"
+}
+db eval BEGIN
+speed_trial speed1-update2 50000 row $sql
+db eval COMMIT
+
+# 1 big text update that touches every row in the table.
+#
+speed_trial speed1-update3 50000 row {
+  UPDATE t1 SET c=a;
+}
+
+# Many individual text updates.  Each row in the table is
+# touched through an index.
+#
+set sql {}
+for {set i 1} {$i<=50000} {incr i} {
+  set r [expr {int(rand()*500000)}]
+  append sql "UPDATE t1 SET c='[number_name $r]' WHERE a=$i;"
+}
+db eval BEGIN
+speed_trial speed1-update4 50000 row $sql
+db eval COMMIT
+
+# Delete all content in a table.
+#
+speed_trial speed1-delete1 50000 row {DELETE FROM t1}
+
+# Copy one table into another
+#
+speed_trial speed1-copy1 50000 row {INSERT INTO t1 SELECT * FROM t2}
+
+# Delete all content in a table, one row at a time.
+#
+speed_trial speed1-delete2 50000 row {DELETE FROM t1 WHERE 1}
+
+# Refill the table yet again
+#
+speed_trial speed1-copy2 50000 row {INSERT INTO t1 SELECT * FROM t2}
+
+# Drop the table and recreate it without its indices.
+#
+db eval BEGIN
+speed_trial speed1-drop1 50000 row {
+   DROP TABLE t1;
+   CREATE TABLE t1(a INTEGER, b INTEGER, c TEXT);
+}
+db eval COMMIT
+
+# Refill the table yet again.  This copy should be faster because
+# there are no indices to deal with.
+#
+speed_trial speed1-copy3 50000 row {INSERT INTO t1 SELECT * FROM t2}
+
+# Select 20000 rows from the table at random.
+#
+speed_trial speed1-random1 50000 row {
+  SELECT rowid FROM t1 ORDER BY random() LIMIT 20000
+}
+
+# Delete 20000 random rows from the table.
+#
+speed_trial speed1-random-del1 20000 row {
+  DELETE FROM t1 WHERE rowid IN
+    (SELECT rowid FROM t1 ORDER BY random() LIMIT 20000)
+}
+do_test speed1-1.1 {
+  db one {SELECT count(*) FROM t1}
+} 30000
+
+    
+# Delete 20000 more rows at random from the table.
+#
+speed_trial speed1-random-del2 20000 row {
+  DELETE FROM t1 WHERE rowid IN
+    (SELECT rowid FROM t1 ORDER BY random() LIMIT 20000)
+}
+do_test speed1-1.2 {
+  db one {SELECT count(*) FROM t1}
+} 10000
+
+finish_test

Modified: freeswitch/trunk/libs/sqlite/test/tableapi.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/tableapi.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/tableapi.test	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 # focus of this file is testing the sqlite_exec_printf() and
 # sqlite_get_table_printf() APIs.
 #
-# $Id: tableapi.test,v 1.11 2006/06/27 20:39:05 drh Exp $
+# $Id: tableapi.test,v 1.12 2007/01/05 00:14:28 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -208,7 +208,7 @@
 
 do_test tableapi-6.1 {
   sqlite3_get_table_printf $::dbx {PRAGMA user_version} {}
-} {0 1 1 {} 0}
+} {0 1 1 user_version 0}
 
 do_test tableapi-99.0 {
   sqlite3_close $::dbx

Modified: freeswitch/trunk/libs/sqlite/test/tester.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/tester.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/test/tester.tcl	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 # This file implements some common TCL routines used for regression
 # testing the SQLite library
 #
-# $Id: tester.tcl,v 1.69 2006/10/04 11:55:50 drh Exp $
+# $Id: tester.tcl,v 1.72 2007/01/04 14:58:14 drh Exp $
 
 # Make sure tclsqlite3 was compiled correctly.  Abort now with an
 # error message if not.
@@ -78,6 +78,9 @@
 set skip_test 0
 set failList {}
 set maxErr 1000
+if {![info exists speedTest]} {
+  set speedTest 0
+}
 
 # Invoke the do_test procedure to run a single test 
 #
@@ -118,6 +121,21 @@
   }
 }
 
+# Run an SQL script.  
+# Print the number of microseconds per statement and the statements per second.
+#
+proc speed_trial {name numstmt units sql} {
+  puts -nonewline [format {%-20.20s } $name...]
+  flush stdout
+  set speed [time {sqlite3_exec_nr db $sql}]
+  set tm [lindex $speed 0]
+  set per [expr {$tm/(1.0*$numstmt)}]
+  set rate [expr {1000000.0*$numstmt/$tm}]
+  set u1 us/$units
+  set u2 $units/s
+  puts [format {%20.3f %-7s %20.5f %s} $per $u1 $rate $u2]
+}
+
 # The procedure uses the special "sqlite_malloc_stat" command
 # (which is only available if SQLite is compiled with -DSQLITE_DEBUG=1)
 # to see how many malloc()s have not been free()ed.  The number
@@ -334,10 +352,13 @@
   set ::ioerropts(-start) 1
   set ::ioerropts(-cksum) 0
   set ::ioerropts(-erc) 0
+  set ::ioerropts(-count) 100000000
   array set ::ioerropts $args
 
   set ::go 1
   for {set n $::ioerropts(-start)} {$::go} {incr n} {
+    incr ::ioerropts(-count) -1
+    if {$::ioerropts(-count)<0} break
  
     # Skip this IO error if it was specified with the "-exclude" option.
     if {[info exists ::ioerropts(-exclude)]} {

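The speed_trial helper added above is what drives the speed1.test trials earlier in this commit: the statements for a trial are appended into one large script, wrapped in a single transaction so the per-statement numbers are not dominated by journal syncs, and then handed to speed_trial, which times the whole script with [time] and prints microseconds per statement and statements per second.  A minimal sketch of the calling pattern (the trial name, loop count, and values below are made up for illustration; t1(a,b,c) and number_name are the table and helper that speed1.test itself uses):

  set sql {}
  for {set i 0} {$i<1000} {incr i} {
    append sql "INSERT INTO t1 VALUES($i,[expr {$i*2}],'[number_name $i]');"
  }
  db eval BEGIN                          ;# pay the transaction cost only once
  speed_trial demo-insert 1000 stmt $sql
  db eval COMMIT
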
Modified: freeswitch/trunk/libs/sqlite/test/threadtest2.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/threadtest2.c	(original)
+++ freeswitch/trunk/libs/sqlite/test/threadtest2.c	Thu Feb 22 17:09:42 2007
@@ -39,12 +39,13 @@
 ** global variable to stop all other activity.  Print the error message
 ** or print OK if the string "ok" is seen.
 */
-int check_callback(void *notUsed, int argc, char **argv, char **notUsed2){
+int check_callback(void *pid, int argc, char **argv, char **notUsed2){
+  int id = (int)pid;
   if( strcmp(argv[0],"ok") ){
     all_stop = 1;
-    fprintf(stderr,"pid=%d. %s\n", getpid(), argv[0]);
+    fprintf(stderr,"%d: %s\n", id, argv[0]);
   }else{
-    /* fprintf(stderr,"pid=%d. OK\n", getpid()); */
+    /* fprintf(stderr,"%d: OK\n", id); */
   }
   return 0;
 }
@@ -53,13 +54,13 @@
 ** Do an integrity check on the database.  If the first integrity check
 ** fails, try it a second time.
 */
-int integrity_check(sqlite *db){
+int integrity_check(sqlite *db, int id){
   int rc;
   if( all_stop ) return 0;
-  /* fprintf(stderr,"pid=%d: CHECK\n", getpid()); */
+  /* fprintf(stderr,"%d: CHECK\n", id); */
   rc = sqlite3_exec(db, "pragma integrity_check", check_callback, 0, 0);
   if( rc!=SQLITE_OK && rc!=SQLITE_BUSY ){
-    fprintf(stderr,"pid=%d, Integrity check returns %d\n", getpid(), rc);
+    fprintf(stderr,"%d, Integrity check returns %d\n", id, rc);
   }
   if( all_stop ){
     sqlite3_exec(db, "pragma integrity_check", check_callback, 0, 0);
@@ -70,21 +71,24 @@
 /*
 ** This is the worker thread
 */
-void *worker(void *notUsed){
+void *worker(void *workerArg){
   sqlite *db;
+  int id = (int)workerArg;
   int rc;
   int cnt = 0;
+  fprintf(stderr, "Starting worker %d\n", id);
   while( !all_stop && cnt++<10000 ){
-    if( cnt%1000==0 ) printf("pid=%d: %d\n", getpid(), cnt);
+    if( cnt%100==0 ) printf("%d: %d\n", id, cnt);
     while( (sqlite3_open(DB_FILE, &db))!=SQLITE_OK ) sched_yield();
     sqlite3_exec(db, "PRAGMA synchronous=OFF", 0, 0, 0);
-    integrity_check(db);
+    /* integrity_check(db, id); */
     if( all_stop ){ sqlite3_close(db); break; }
-    /* fprintf(stderr, "pid=%d: BEGIN\n", getpid()); */
+    /* fprintf(stderr, "%d: BEGIN\n", id); */
     rc = sqlite3_exec(db, "INSERT INTO t1 VALUES('bogus data')", 0, 0, 0);
-    /* fprintf(stderr, "pid=%d: END rc=%d\n", getpid(), rc); */
+    /* fprintf(stderr, "%d: END rc=%d\n", id, rc); */
     sqlite3_close(db);
   }
+  fprintf(stderr, "Worker %d finished\n", id);
   return 0;
 }
 
@@ -100,7 +104,7 @@
     char *zJournal = sqlite3_mprintf("%s-journal", DB_FILE);
     unlink(DB_FILE);
     unlink(zJournal);
-    free(zJournal);
+    sqlite3_free(zJournal);
   }  
   sqlite3_open(DB_FILE, &db);
   if( db==0 ){
@@ -114,7 +118,7 @@
   }
   sqlite3_close(db);
   for(i=0; i<sizeof(aThread)/sizeof(aThread[0]); i++){
-    pthread_create(&aThread[i], 0, worker, 0);
+    pthread_create(&aThread[i], 0, worker, (void*)i);
   }
   for(i=0; i<sizeof(aThread)/sizeof(aThread[i]); i++){
     pthread_join(aThread[i], 0);

Added: freeswitch/trunk/libs/sqlite/test/tkt2141.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt2141.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,57 @@
+# 2007 January 03
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #2141 has been
+# fixed.  
+#
+#
+# $Id: tkt2141.test,v 1.1 2007/01/04 01:20:29 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+do_test tkt2141-1.1 {
+  execsql {
+      CREATE TABLE tab1 (t1_id integer PRIMARY KEY, t1_desc);
+      INSERT INTO tab1 VALUES(1,'rec 1 tab 1');
+      CREATE TABLE tab2 (t2_id integer PRIMARY KEY, t2_id_t1, t2_desc);
+      INSERT INTO tab2 VALUES(1,1,'rec 1 tab 2');
+      CREATE TABLE tab3 (t3_id integer PRIMARY KEY, t3_id_t2, t3_desc);
+      INSERT INTO tab3 VALUES(1,1,'aa');
+      SELECT *
+      FROM tab1 t1 LEFT JOIN tab2 t2 ON t1.t1_id = t2.t2_id_t1
+      WHERE t2.t2_id IN
+           (SELECT t2_id FROM tab2, tab3 ON t2_id = t3_id_t2
+             WHERE t3_id IN (1,2) GROUP BY t2_id);
+  }
+} {1 {rec 1 tab 1} 1 1 {rec 1 tab 2}}
+do_test tkt2141-1.2 {
+  execsql {
+      SELECT *
+      FROM tab1 t1 LEFT JOIN tab2 t2 ON t1.t1_id = t2.t2_id_t1
+      WHERE t2.t2_id IN
+           (SELECT t2_id FROM tab2, tab3 ON t2_id = t3_id_t2
+             WHERE t3_id IN (1,2));
+  }
+} {1 {rec 1 tab 1} 1 1 {rec 1 tab 2}}
+do_test tkt2141-1.3 {
+  execsql {
+      SELECT *
+      FROM tab1 t1 LEFT JOIN tab2 t2
+      WHERE t2.t2_id IN
+           (SELECT t2_id FROM tab2, tab3 ON t2_id = t3_id_t2
+             WHERE t3_id IN (1,2));
+  }
+} {1 {rec 1 tab 1} 1 1 {rec 1 tab 2}}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/tkt2192.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt2192.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,136 @@
+# 2007 January 26
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #2192 has been
+# fixed.  
+#
+#
+# $Id: tkt2192.test,v 1.1 2007/01/26 19:04:00 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+
+do_test tkt2192-1.1 {
+  execsql {
+    -- Raw data (RBS) --------
+    
+    create table records (
+      date          real,
+      type          text,
+      description   text,
+      value         integer,
+      acc_name      text,
+      acc_no        text
+    );
+    
+    -- Direct Debits ----------------
+    create view direct_debits as
+      select * from records where type = 'D/D';
+    
+    create view monthly_direct_debits as
+      select strftime('%Y-%m', date) as date, (-1 * sum(value)) as value
+        from direct_debits
+       group by strftime('%Y-%m', date);
+    
+    -- Expense Categories ---------------
+    create view energy as
+      select strftime('%Y-%m', date) as date, (-1 * sum(value)) as value
+        from direct_debits
+       where description like '%NPOWER%'
+       group by strftime('%Y-%m', date);
+    
+    create view phone_internet as
+      select strftime('%Y-%m', date) as date, (-1 * sum(value)) as value
+        from direct_debits
+       where description like '%BT DIRECT%'
+          or description like '%SUPANET%'
+          or description like '%ORANGE%'
+       group by strftime('%Y-%m', date);
+    
+    create view credit_cards as
+      select strftime('%Y-%m', date) as date, (-1 * sum(value)) as value
+        from direct_debits where description like '%VISA%'
+       group by strftime('%Y-%m', date);
+    
+    -- Overview ---------------------
+    
+    create view expense_overview as
+      select 'Energy' as expense, date, value from energy
+      union
+      select 'Phone/Internet' as expense, date, value from phone_internet
+      union
+      select 'Credit Card' as expense, date, value from credit_cards;
+    
+    create view jan as
+      select 'jan', expense, value from expense_overview
+       where date like '%-01';
+    
+    create view nov as
+      select 'nov', expense, value from expense_overview
+       where date like '%-11';
+    
+    create view summary as
+      select * from jan join nov on (jan.expense = nov.expense);
+  }
+} {}
+do_test tkt2192-1.2 {
+  # set ::sqlite_addop_trace 1
+  execsql {
+    select * from summary;
+  }
+} {}
+do_test tkt2192-2.1 {
+  execsql {
+    CREATE TABLE t1(a,b);
+    CREATE VIEW v1 AS
+      SELECT * FROM t1 WHERE b%7=0 UNION SELECT * FROM t1 WHERE b%5=0;
+    INSERT INTO t1 VALUES(1,7);
+    INSERT INTO t1 VALUES(2,10);
+    INSERT INTO t1 VALUES(3,14);
+    INSERT INTO t1 VALUES(4,15);
+    INSERT INTO t1 VALUES(1,16);
+    INSERT INTO t1 VALUES(2,17);
+    INSERT INTO t1 VALUES(3,20);
+    INSERT INTO t1 VALUES(4,21);
+    INSERT INTO t1 VALUES(1,22);
+    INSERT INTO t1 VALUES(2,24);
+    INSERT INTO t1 VALUES(3,25);
+    INSERT INTO t1 VALUES(4,26);
+    INSERT INTO t1 VALUES(1,27);
+ 
+    SELECT b FROM v1 ORDER BY b;
+  }
+} {7 10 14 15 20 21 25}
+do_test tkt2192-2.2 {
+  execsql {
+    SELECT * FROM v1 ORDER BY a, b;
+  }
+} {1 7 2 10 3 14 3 20 3 25 4 15 4 21}
+do_test tkt2192-2.3 {
+  execsql {
+    SELECT x.a || '/' || x.b || '/' || y.b
+      FROM v1 AS x JOIN v1 AS y ON x.a=y.a AND x.b<y.b
+     ORDER BY x.a, x.b, y.b
+  }
+} {3/14/20 3/14/25 3/20/25 4/15/21}
+do_test tkt2192-2.4 {
+  execsql {
+    CREATE VIEW v2 AS
+    SELECT x.a || '/' || x.b || '/' || y.b AS z
+      FROM v1 AS x JOIN v1 AS y ON x.a=y.a AND x.b<y.b
+     ORDER BY x.a, x.b, y.b;
+    SELECT * FROM v2;
+  }
+} {3/14/20 3/14/25 3/20/25 4/15/21}
+
+finish_test

Added: freeswitch/trunk/libs/sqlite/test/tkt2213.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/tkt2213.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,30 @@
+# 2007 February 05
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.
+#
+# This file implements tests to verify that ticket #2213 has been
+# fixed.  
+#
+#
+# $Id: tkt2213.test,v 1.1 2007/02/05 14:21:48 danielk1977 Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+do_test tkt2213-1 {
+  sqlite3_create_function db
+  catchsql {
+    SELECT tkt2213func(tkt2213func('abcd'));
+  }
+} {0 abcd}
+
+finish_test
+

Modified: freeswitch/trunk/libs/sqlite/test/trigger4.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/trigger4.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/trigger4.test	Thu Feb 22 17:09:42 2007
@@ -194,7 +194,7 @@
 } {101 1001 102 2002 227 2127 228 2128}
 
 integrity_check trigger4-99.9
-
+db close
 file delete -force trigtest.db trigtest.db-journal
 
 finish_test

Modified: freeswitch/trunk/libs/sqlite/test/utf16.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/utf16.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/utf16.test	Thu Feb 22 17:09:42 2007
@@ -10,7 +10,7 @@
 #***********************************************************************
 # This file runs all tests.
 #
-# $Id: utf16.test,v 1.5 2006/01/09 23:40:26 drh Exp $
+# $Id: utf16.test,v 1.6 2007/01/04 16:37:04 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -23,7 +23,7 @@
   set argv [list]
 } else {
   set F {
-    alter.test alter2.test alter3.test
+    alter.test alter3.test
     auth.test bind.test blob.test capi2.test capi3.test collate1.test
     collate2.test collate3.test collate4.test collate5.test collate6.test
     conflict.test date.test delete.test expr.test fkey1.test func.test

Modified: freeswitch/trunk/libs/sqlite/test/vtab1.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/vtab1.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/vtab1.test	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 # This file implements regression tests for SQLite library.  The
 # focus of this file is creating and dropping virtual tables.
 #
-# $Id: vtab1.test,v 1.38 2006/09/16 21:45:14 drh Exp $
+# $Id: vtab1.test,v 1.39 2007/01/09 14:01:14 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -96,6 +96,29 @@
   }
 } {}
 
+# Ticket #2156.  Using the sqlite3_prepare_v2() API, make sure that
+# a CREATE VIRTUAL TABLE statement can be used multiple times.
+#
+do_test vtab1-1.2152.1 {
+  set DB [sqlite3_connection_pointer db]
+  set sql {CREATE VIRTUAL TABLE t2152a USING echo(t2152b)}
+  set STMT [sqlite3_prepare_v2 $DB $sql -1 TAIL]
+  sqlite3_step $STMT
+} SQLITE_ERROR
+do_test vtab1-1.2152.2 {
+  sqlite3_reset $STMT
+  sqlite3_step $STMT
+} SQLITE_ERROR
+do_test vtab1-1.2152.3 {
+  sqlite3_reset $STMT
+  db eval {CREATE TABLE t2152b(x,y)}
+  sqlite3_step $STMT
+} SQLITE_DONE
+do_test vtab1-1.2152.4 {
+  sqlite3_finalize $STMT
+  db eval {DROP TABLE t2152a; DROP TABLE t2152b}
+} {}
+
 # Test to make sure nothing goes wrong and no memory is leaked if we 
 # select an illegal table-name (i.e a reserved name or the name of a
 # table that already exists).

Modified: freeswitch/trunk/libs/sqlite/test/vtab_err.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/vtab_err.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/vtab_err.test	Thu Feb 22 17:09:42 2007
@@ -9,11 +9,19 @@
 #
 #***********************************************************************
 #
-# $Id: vtab_err.test,v 1.3 2006/08/15 14:21:16 drh Exp $
+# $Id: vtab_err.test,v 1.4 2007/01/02 18:41:58 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
 
+# Only run these tests if memory debugging is turned on.
+#
+if {[info command sqlite_malloc_stat]==""} {
+  puts "Skipping vtab_err tests: not compiled with -DSQLITE_MEMDEBUG=1"
+  finish_test
+  return
+}
+
 ifcapable !vtab {
   finish_test
   return

Modified: freeswitch/trunk/libs/sqlite/test/where.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/where.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/where.test	Thu Feb 22 17:09:42 2007
@@ -11,7 +11,7 @@
 # This file implements regression tests for SQLite library.  The
 # focus of this file is testing the use of indices in WHERE clauses.
 #
-# $Id: where.test,v 1.38 2005/11/14 22:29:06 drh Exp $
+# $Id: where.test,v 1.41 2007/02/06 23:41:34 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -589,22 +589,22 @@
   cksort {
     SELECT y FROM t1 ORDER BY rowid, y LIMIT 3;
   }
-} {4 9 16 sort}
+} {4 9 16 nosort}
 do_test where-6.22 {
   cksort {
     SELECT y FROM t1 ORDER BY rowid, y DESC LIMIT 3;
   }
-} {4 9 16 sort}
+} {4 9 16 nosort}
 do_test where-6.23 {
   cksort {
     SELECT y FROM t1 WHERE y>4 ORDER BY rowid, w, x LIMIT 3;
   }
-} {9 16 25 sort}
+} {9 16 25 nosort}
 do_test where-6.24 {
   cksort {
     SELECT y FROM t1 WHERE y>=9 ORDER BY rowid, x DESC, w LIMIT 3;
   }
-} {9 16 25 sort}
+} {9 16 25 nosort}
 do_test where-6.25 {
   cksort {
     SELECT y FROM t1 WHERE y>4 AND y<25 ORDER BY rowid;
@@ -619,7 +619,7 @@
   cksort {
     SELECT y FROM t1 WHERE y<=25 ORDER BY _rowid_, w+y;
   }
-} {4 9 16 25 sort}
+} {4 9 16 25 nosort}
 
 
 # Tests for reverse-order sorting.
@@ -793,7 +793,7 @@
   cksort {
     SELECT y FROM t1 WHERE y<25 AND y>4 ORDER BY rowid DESC, y DESC
   }
-} {16 9 sort}
+} {16 9 nosort}
 do_test where-7.35 {
   cksort {
     SELECT y FROM t1 WHERE y<25 AND y>=4 ORDER BY rowid DESC
@@ -874,7 +874,6 @@
 # that array.
 #
 do_test where-11.1 {
-btree_breakpoint
   execsql {
    CREATE TABLE t99(Dte INT, X INT);
    DELETE FROM t99 WHERE (Dte = 2451337) OR (Dte = 2451339) OR
@@ -902,6 +901,224 @@
   }
 } {}
 
+# Ticket #2116:  Make sure sorting by index works well with nn INTEGER PRIMARY
+# KEY.
+#
+do_test where-12.1 {
+  execsql {
+    CREATE TABLE t6(a INTEGER PRIMARY KEY, b TEXT);
+    INSERT INTO t6 VALUES(1,'one');
+    INSERT INTO t6 VALUES(4,'four');
+    CREATE INDEX t6i1 ON t6(b);
+  }
+  cksort {
+    SELECT * FROM t6 ORDER BY b;
+  }
+} {4 four 1 one nosort}
+do_test where-12.2 {
+  cksort {
+    SELECT * FROM t6 ORDER BY b, a;
+  }
+} {4 four 1 one nosort}
+do_test where-12.3 {
+  cksort {
+    SELECT * FROM t6 ORDER BY a;
+  }
+} {1 one 4 four nosort}
+do_test where-12.4 {
+  cksort {
+    SELECT * FROM t6 ORDER BY a, b;
+  }
+} {1 one 4 four nosort}
+do_test where-12.5 {
+  cksort {
+    SELECT * FROM t6 ORDER BY b DESC;
+  }
+} {1 one 4 four nosort}
+do_test where-12.6 {
+  cksort {
+    SELECT * FROM t6 ORDER BY b DESC, a DESC;
+  }
+} {1 one 4 four nosort}
+do_test where-12.7 {
+  cksort {
+    SELECT * FROM t6 ORDER BY b DESC, a ASC;
+  }
+} {1 one 4 four sort}
+do_test where-12.8 {
+  cksort {
+    SELECT * FROM t6 ORDER BY b ASC, a DESC;
+  }
+} {4 four 1 one sort}
+do_test where-12.9 {
+  cksort {
+    SELECT * FROM t6 ORDER BY a DESC;
+  }
+} {4 four 1 one nosort}
+do_test where-12.10 {
+  cksort {
+    SELECT * FROM t6 ORDER BY a DESC, b DESC;
+  }
+} {4 four 1 one nosort}
+do_test where-12.11 {
+  cksort {
+    SELECT * FROM t6 ORDER BY a DESC, b ASC;
+  }
+} {4 four 1 one nosort}
+do_test where-12.12 {
+  cksort {
+    SELECT * FROM t6 ORDER BY a ASC, b DESC;
+  }
+} {1 one 4 four nosort}
+do_test where-13.1 {
+  execsql {
+    CREATE TABLE t7(a INTEGER PRIMARY KEY, b TEXT);
+    INSERT INTO t7 VALUES(1,'one');
+    INSERT INTO t7 VALUES(4,'four');
+    CREATE INDEX t7i1 ON t7(b);
+  }
+  cksort {
+    SELECT * FROM t7 ORDER BY b;
+  }
+} {4 four 1 one nosort}
+do_test where-13.2 {
+  cksort {
+    SELECT * FROM t7 ORDER BY b, a;
+  }
+} {4 four 1 one nosort}
+do_test where-13.3 {
+  cksort {
+    SELECT * FROM t7 ORDER BY a;
+  }
+} {1 one 4 four nosort}
+do_test where-13.4 {
+  cksort {
+    SELECT * FROM t7 ORDER BY a, b;
+  }
+} {1 one 4 four nosort}
+do_test where-13.5 {
+  cksort {
+    SELECT * FROM t7 ORDER BY b DESC;
+  }
+} {1 one 4 four nosort}
+do_test where-13.6 {
+  cksort {
+    SELECT * FROM t7 ORDER BY b DESC, a DESC;
+  }
+} {1 one 4 four nosort}
+do_test where-13.7 {
+  cksort {
+    SELECT * FROM t7 ORDER BY b DESC, a ASC;
+  }
+} {1 one 4 four sort}
+do_test where-13.8 {
+  cksort {
+    SELECT * FROM t7 ORDER BY b ASC, a DESC;
+  }
+} {4 four 1 one sort}
+do_test where-13.9 {
+  cksort {
+    SELECT * FROM t7 ORDER BY a DESC;
+  }
+} {4 four 1 one nosort}
+do_test where-13.10 {
+  cksort {
+    SELECT * FROM t7 ORDER BY a DESC, b DESC;
+  }
+} {4 four 1 one nosort}
+do_test where-13.11 {
+  cksort {
+    SELECT * FROM t7 ORDER BY a DESC, b ASC;
+  }
+} {4 four 1 one nosort}
+do_test where-13.12 {
+  cksort {
+    SELECT * FROM t7 ORDER BY a ASC, b DESC;
+  }
+} {1 one 4 four nosort}
+
+# Ticket #2211.
+#
+# When optimizing out ORDER BY clauses, make sure that trailing terms
+# of the ORDER BY clause do not reference other tables in a join.
+#
+do_test where-14.1 {
+  execsql {
+    CREATE TABLE t8(a INTEGER PRIMARY KEY, b TEXT UNIQUE);
+    INSERT INTO t8 VALUES(1,'one');
+    INSERT INTO t8 VALUES(4,'four');
+  }
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.a, y.b
+  } 
+} {1/4 1/1 4/4 4/1 sort}
+do_test where-14.2 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.a, y.b DESC
+  } 
+} {1/1 1/4 4/1 4/4 sort}
+do_test where-14.3 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.a, x.b
+  } 
+} {1/1 1/4 4/1 4/4 nosort}
+do_test where-14.4 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.a, x.b DESC
+  } 
+} {1/1 1/4 4/1 4/4 nosort}
+btree_breakpoint
+do_test where-14.5 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, x.a||x.b
+  } 
+} {4/1 4/4 1/1 1/4 nosort}
+do_test where-14.6 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, x.a||x.b DESC
+  } 
+} {4/1 4/4 1/1 1/4 nosort}
+do_test where-14.7 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, y.a||y.b
+  } 
+} {4/1 4/4 1/1 1/4 sort}
+do_test where-14.7.1 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, x.a, y.a||y.b
+  } 
+} {4/1 4/4 1/1 1/4 sort}
+do_test where-14.7.2 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, x.a, x.a||x.b
+  } 
+} {4/1 4/4 1/1 1/4 nosort}
+do_test where-14.8 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, y.a||y.b DESC
+  } 
+} {4/4 4/1 1/4 1/1 sort}
+do_test where-14.9 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, x.a||y.b
+  } 
+} {4/4 4/1 1/4 1/1 sort}
+do_test where-14.10 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, x.a||y.b DESC
+  } 
+} {4/1 4/4 1/1 1/4 sort}
+do_test where-14.11 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, y.a||x.b
+  } 
+} {4/1 4/4 1/1 1/4 sort}
+do_test where-14.12 {
+  cksort {
+    SELECT x.a || '/' || y.a FROM t8 x, t8 y ORDER BY x.b, y.a||x.b DESC
+  } 
+} {4/4 4/1 1/4 1/1 sort}
+
 
 integrity_check {where-99.0}
 

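The sort/nosort markers in the where-12 through where-14 results come from the cksort helper defined near the top of where.test, outside this diff.  The idea, sketched below under the assumption that the test build exposes the ::sqlite_sort_count counter like the other SQLITE_TEST instrumentation variables, is simply to run the query and report whether the VDBE had to run a separate sorting pass to satisfy the ORDER BY:

  proc cksort {sql} {
    set ::sqlite_sort_count 0            ;# counts explicit sort operations
    set data [execsql $sql]
    if {$::sqlite_sort_count==0} {
      lappend data nosort                ;# ORDER BY satisfied by index/rowid order
    } else {
      lappend data sort                  ;# a separate sorting pass was required
    }
    return $data
  }

So {4 four 1 one nosort} in where-12.1 means the rows came back already ordered by the t6i1 index, with no sorter involved.
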
Modified: freeswitch/trunk/libs/sqlite/test/where2.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/where2.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/where2.test	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 # focus of this file is testing the use of indices in WHERE clauses
 # based on recent changes to the optimizer.
 #
-# $Id: where2.test,v 1.9 2006/05/11 13:26:26 drh Exp $
+# $Id: where2.test,v 1.10 2006/11/06 15:10:06 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -451,4 +451,33 @@
     }
   } {}
 }  
+
+# Make sure WHERE clauses of the form A=1 AND (B=2 OR B=3) are optimized
+# when we have an index on A and B.
+#
+ifcapable or_opt {
+  do_test where2-9.1 {
+    execsql {
+      BEGIN;
+      CREATE TABLE t10(a,b,c);
+      INSERT INTO t10 VALUES(1,1,1);
+      INSERT INTO t10 VALUES(1,2,2);
+      INSERT INTO t10 VALUES(1,3,3);
+    }
+    for {set i 4} {$i<=1000} {incr i} {
+      execsql {INSERT INTO t10 VALUES(1,$i,$i)}
+    }
+    execsql {
+      CREATE INDEX i10 ON t10(a,b);
+      COMMIT;
+      SELECT count(*) FROM t10;
+    }
+  } 1000
+  do_test where2-9.2 {
+    count {
+      SELECT * FROM t10 WHERE a=1 AND (b=2 OR b=3)
+    }
+  } {1 2 2 1 3 3 7}
+}
+
 finish_test

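The trailing 7 in the where2-9.2 result is appended by the count helper that where2.test defines near its top (not part of this diff; where4.test later in this commit carries an identical copy).  The helper tacks the VDBE search counter onto the query results, so a small trailing number is consistent with both OR branches being satisfied by probes of the new i10(a,b) index rather than a scan of the 1000-row table.  A sketch of the helper, assuming the same ::sqlite_search_count instrumentation used in where4.test below:

  proc count sql {
    set ::sqlite_search_count 0          ;# MoveTo/Next operations in the VDBE
    return [concat [execsql $sql] $::sqlite_search_count]
  }
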
Modified: freeswitch/trunk/libs/sqlite/test/where3.test
==============================================================================
--- freeswitch/trunk/libs/sqlite/test/where3.test	(original)
+++ freeswitch/trunk/libs/sqlite/test/where3.test	Thu Feb 22 17:09:42 2007
@@ -12,7 +12,7 @@
 # focus of this file is testing the join reordering optimization
 # in cases that include a LEFT JOIN.
 #
-# $Id: where3.test,v 1.2 2006/06/06 11:45:55 drh Exp $
+# $Id: where3.test,v 1.3 2006/12/16 16:25:17 drh Exp $
 
 set testdir [file dirname $argv0]
 source $testdir/tester.tcl
@@ -78,4 +78,85 @@
   }
 } {1 {Value for C1.1} {Value for C2.1} 2 {} {Value for C2.2} 3 {Value for C1.3} {Value for C2.3}}
 
+# This procedure executes the SQL and then appends the value of
+# the ::sqlite_query_plan variable to the result it returns.
+#
+proc queryplan {sql} {
+  set ::sqlite_sort_count 0
+  set data [execsql $sql]
+  return [concat $data $::sqlite_query_plan]
+}
+
+
+# If you have a from clause of the form:   A B C left join D
+# then make sure the query optimizer is able to reorder the 
+# A B C part anyway it wants. 
+#
+# Following the fix to ticket #1652, there was a time when
+# the C table would not reorder.  So the following reorderings
+# were possible:
+#
+#            A B C left join D
+#            B A C left join D
+#
+# But these reorders were not allowed
+#
+#            C A B left join D
+#            A C B left join D
+#            C B A left join D
+#            B C A left join D
+#
+# The following tests are here to verify that the latter four
+# reorderings are allowed again.
+#
+do_test where3-2.1 {
+  execsql {
+    CREATE TABLE tA(apk integer primary key, ax);
+    CREATE TABLE tB(bpk integer primary key, bx);
+    CREATE TABLE tC(cpk integer primary key, cx);
+    CREATE TABLE tD(dpk integer primary key, dx);
+  }
+  queryplan {
+    SELECT * FROM tA, tB, tC LEFT JOIN tD ON dpk=cx
+     WHERE cpk=bx AND bpk=ax
+  }
+} {tA {} tB * tC * tD *}
+do_test where3-2.2 {
+  queryplan {
+    SELECT * FROM tA, tB, tC LEFT JOIN tD ON dpk=cx
+     WHERE cpk=bx AND apk=bx
+  }
+} {tB {} tA * tC * tD *}
+do_test where3-2.3 {
+  queryplan {
+    SELECT * FROM tA, tB, tC LEFT JOIN tD ON dpk=cx
+     WHERE cpk=bx AND apk=bx
+  }
+} {tB {} tA * tC * tD *}
+do_test where3-2.4 {
+  queryplan {
+    SELECT * FROM tA, tB, tC LEFT JOIN tD ON dpk=cx
+     WHERE apk=cx AND bpk=ax
+  }
+} {tC {} tA * tB * tD *}
+do_test where3-2.5 {
+  queryplan {
+    SELECT * FROM tA, tB, tC LEFT JOIN tD ON dpk=cx
+     WHERE cpk=ax AND bpk=cx
+  }
+} {tA {} tC * tB * tD *}
+do_test where3-2.5 {
+  queryplan {
+    SELECT * FROM tA, tB, tC LEFT JOIN tD ON dpk=cx
+     WHERE bpk=cx AND apk=bx
+  }
+} {tC {} tB * tA * tD *}
+do_test where3-2.6 {
+  queryplan {
+    SELECT * FROM tA, tB, tC LEFT JOIN tD ON dpk=cx
+     WHERE cpk=bx AND apk=cx
+  }
+} {tB {} tC * tA * tD *}
+
+
 finish_test

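The expected values of the where3-2.x tests are the query results (empty here, since the tables hold no rows) followed by the contents of ::sqlite_query_plan.  As far as I can tell from the SQLITE_TEST instrumentation in where.c, that variable lists each table in the join order the optimizer chose together with the index used to reach it, where {} marks a table with no index (a full scan) and * marks access through the rowid / INTEGER PRIMARY KEY.  Read that way, where3-2.2's expected value decodes as:

  # {tB {} tA * tC * tD *}
  #
  #   tB {}   -> tB is the outermost table and is scanned ("{}" = no index)
  #   tA *    -> tA is reached by rowid / INTEGER PRIMARY KEY lookup
  #   tC *    -> likewise for tC
  #   tD *    -> likewise for tD (the LEFT JOINed tD stays last, as required)
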
Added: freeswitch/trunk/libs/sqlite/test/where4.test
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/test/where4.test	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,180 @@
+# 2006 October 27
+#
+# The author disclaims copyright to this source code.  In place of
+# a legal notice, here is a blessing:
+#
+#    May you do good and not evil.
+#    May you find forgiveness for yourself and forgive others.
+#    May you share freely, never taking more than you give.
+#
+#***********************************************************************
+# This file implements regression tests for SQLite library.  The
+# focus of this file is testing the use of indices in WHERE clauses.
+# This file was created when support for optimizing IS NULL phrases
+# was added.  And so the principal purpose of this file is to test
+# that IS NULL phrases are correctly optimized.  But you can never
+# have too many tests, so some other tests are thrown in as well.
+#
+# $Id: where4.test,v 1.2 2007/01/25 16:56:08 drh Exp $
+
+set testdir [file dirname $argv0]
+source $testdir/tester.tcl
+
+# Build some test data
+#
+do_test where4-1.0 {
+  execsql {
+    CREATE TABLE t1(w, x, y);
+    CREATE INDEX i1wxy ON t1(w,x,y);
+    INSERT INTO t1 VALUES(1,2,3);
+    INSERT INTO t1 VALUES(1,NULL,3);
+    INSERT INTO t1 VALUES('a','b','c');
+    INSERT INTO t1 VALUES('a',NULL,'c');
+    INSERT INTO t1 VALUES(X'78',x'79',x'7a');
+    INSERT INTO t1 VALUES(X'78',NULL,X'7A');
+    INSERT INTO t1 VALUES(NULL,NULL,NULL);
+    SELECT count(*) FROM t1;
+  }
+} {7}
+
+# Do an SQL statement.  Append the search count to the end of the result.
+#
+proc count sql {
+  set ::sqlite_search_count 0
+  return [concat [execsql $sql] $::sqlite_search_count]
+}
+
+# Verify that queries use an index.  We are using the special variable
+# "sqlite_search_count" which tallies the number of executions of MoveTo
+# and Next operators in the VDBE.  By verifying that the search count is
+# small we can be assured that indices are being used properly.
+#
+do_test where4-1.1 {
+  count {SELECT rowid FROM t1 WHERE w IS NULL}
+} {7 2}
+do_test where4-1.2 {
+  count {SELECT rowid FROM t1 WHERE +w IS NULL}
+} {7 6}
+do_test where4-1.3 {
+  count {SELECT rowid FROM t1 WHERE w=1 AND x IS NULL}
+} {2 2}
+do_test where4-1.4 {
+  count {SELECT rowid FROM t1 WHERE w=1 AND +x IS NULL}
+} {2 3}
+do_test where4-1.5 {
+  count {SELECT rowid FROM t1 WHERE w=1 AND x>0}
+} {1 2}
+do_test where4-1.6 {
+  count {SELECT rowid FROM t1 WHERE w=1 AND x<9}
+} {1 3}
+do_test where4-1.7 {
+  count {SELECT rowid FROM t1 WHERE w=1 AND x IS NULL AND y=3}
+} {2 2}
+do_test where4-1.8 {
+  count {SELECT rowid FROM t1 WHERE w=1 AND x IS NULL AND y>2}
+} {2 2}
+do_test where4-1.9 {
+  count {SELECT rowid FROM t1 WHERE w='a' AND x IS NULL AND y='c'}
+} {4 2}
+do_test where4-1.10 {
+  count {SELECT rowid FROM t1 WHERE w=x'78' AND x IS NULL}
+} {6 2}
+do_test where4-1.11 {
+  count {SELECT rowid FROM t1 WHERE w=x'78' AND x IS NULL AND y=123}
+} {1}
+do_test where4-1.12 {
+  count {SELECT rowid FROM t1 WHERE w=x'78' AND x IS NULL AND y=x'7A'}
+} {6 2}
+do_test where4-1.13 {
+  count {SELECT rowid FROM t1 WHERE w IS NULL AND x IS NULL}
+} {7 2}
+do_test where4-1.14 {
+  count {SELECT rowid FROM t1 WHERE w IS NULL AND x IS NULL AND y IS NULL}
+} {7 2}
+do_test where4-1.15 {
+  count {SELECT rowid FROM t1 WHERE w IS NULL AND x IS NULL AND y<0}
+} {2}
+do_test where4-1.16 {
+  count {SELECT rowid FROM t1 WHERE w IS NULL AND x IS NULL AND y>=0}
+} {1}
+
+do_test where4-2.1 {
+  execsql {SELECT rowid FROM t1 ORDER BY w, x, y}
+} {7 2 1 4 3 6 5}
+do_test where4-2.2 {
+  execsql {SELECT rowid FROM t1 ORDER BY w DESC, x, y}
+} {6 5 4 3 2 1 7}
+do_test where4-2.3 {
+  execsql {SELECT rowid FROM t1 ORDER BY w, x DESC, y}
+} {7 1 2 3 4 5 6}
+
+
+# Ticket #2177
+#
+# Suppose you have a left join where the right table of the left
+# join (the one that can be NULL) has an index on two columns.
+# The first indexed column is used in the ON clause of the join.
+# The second indexed column is used in the WHERE clause with an IS NULL
+# constraint.  It is not allowed to use the IS NULL optimization to
+# optimize the query because the second column might be NULL because
+# the right table did not match - something the index does not know
+# about.
+#
+do_test where4-3.1 {
+  execsql {
+    CREATE TABLE t2(a);
+    INSERT INTO t2 VALUES(1);
+    INSERT INTO t2 VALUES(2);
+    INSERT INTO t2 VALUES(3);
+    CREATE TABLE t3(x,y,UNIQUE(x,y));
+    INSERT INTO t3 VALUES(1,11);
+    INSERT INTO t3 VALUES(2,NULL);
+ 
+    SELECT * FROM t2 LEFT JOIN t3 ON a=x WHERE +y IS NULL;
+  }
+} {2 2 {} 3 {} {}}
+do_test where4-3.2 {
+  execsql {
+    SELECT * FROM t2 LEFT JOIN t3 ON a=x WHERE y IS NULL;
+  }
+} {2 2 {} 3 {} {}}
+
+# Ticket #2189.  Probably the same bug as #2177.
+#
+do_test where4-4.1 {
+  execsql {
+    CREATE TABLE test(col1 TEXT PRIMARY KEY);
+    INSERT INTO test(col1) values('a');
+    INSERT INTO test(col1) values('b');
+    INSERT INTO test(col1) values('c');
+    CREATE TABLE test2(col1 TEXT PRIMARY KEY);
+    INSERT INTO test2(col1) values('a');
+    INSERT INTO test2(col1) values('b');
+    INSERT INTO test2(col1) values('c');
+    SELECT * FROM test t1 LEFT OUTER JOIN test2 t2 ON t1.col1 = t2.col1
+      WHERE +t2.col1 IS NULL;
+  }
+} {}
+do_test where4-4.2 {
+  execsql {
+    SELECT * FROM test t1 LEFT OUTER JOIN test2 t2 ON t1.col1 = t2.col1
+      WHERE t2.col1 IS NULL;
+  }
+} {}
+do_test where4-4.3 {
+  execsql {
+    SELECT * FROM test t1 LEFT OUTER JOIN test2 t2 ON t1.col1 = t2.col1
+      WHERE +t1.col1 IS NULL;
+  }
+} {}
+do_test where4-4.4 {
+  execsql {
+    SELECT * FROM test t1 LEFT OUTER JOIN test2 t2 ON t1.col1 = t2.col1
+      WHERE t1.col1 IS NULL;
+  }
+} {}
+    
+
+integrity_check {where4-99.0}
+
+finish_test

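Several of the where4 tests come in pairs that differ only by a unary "+" in front of a column ("w IS NULL" versus "+w IS NULL").  The "+" keeps that term out of consideration for index use, so each pair checks that the IS NULL index optimization returns the same rows as a plain scan, while the appended search count shows which plan was taken.  Taken straight from the tests above:

  count {SELECT rowid FROM t1 WHERE w IS NULL}    ;# -> {7 2}: probe of the i1wxy index
  count {SELECT rowid FROM t1 WHERE +w IS NULL}   ;# -> {7 6}: full scan of the 7 rows
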
Added: freeswitch/trunk/libs/sqlite/tool/fragck.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/tool/fragck.tcl	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,149 @@
+# Run this TCL script using "testfixture" to get a report that shows
+# the sequence of database pages used by a particular table or index.
+# This information is used for fragmentation analysis.
+#
+
+# Get the name of the database to analyze
+#
+
+if {[llength $argv]!=2} {
+  puts stderr "Usage: $argv0 database-name table-or-index-name"
+  exit 1
+}
+set file_to_analyze [lindex $argv 0]
+if {![file exists $file_to_analyze]} {
+  puts stderr "No such file: $file_to_analyze"
+  exit 1
+}
+if {![file readable $file_to_analyze]} {
+  puts stderr "File is not readable: $file_to_analyze"
+  exit 1
+}
+if {[file size $file_to_analyze]<512} {
+  puts stderr "Empty or malformed database: $file_to_analyze"
+  exit 1
+}
+set objname [lindex $argv 1]
+
+# Open the database
+#
+sqlite3 db [lindex $argv 0]
+set DB [btree_open [lindex $argv 0] 1000 0]
+
+# This proc is a wrapper around the btree_cursor_info command. The
+# second argument is an open btree cursor returned by [btree_cursor].
+# The first argument is the name of an array variable that exists in
+# the scope of the caller. If the third argument is non-zero, then
+# info is returned for the page that lies $up entries upwards in the
+# tree-structure. (i.e. $up==1 returns the parent page, $up==2 the 
+# grandparent etc.)
+#
+# The following entries in that array are filled in with information retrieved
+# using [btree_cursor_info]:
+#
+#   $arrayvar(page_no)             =  The page number
+#   $arrayvar(entry_no)            =  The entry number
+#   $arrayvar(page_entries)        =  Total number of entries on this page
+#   $arrayvar(cell_size)           =  Cell size (local payload + header)
+#   $arrayvar(page_freebytes)      =  Number of free bytes on this page
+#   $arrayvar(page_freeblocks)     =  Number of free blocks on the page
+#   $arrayvar(payload_bytes)       =  Total payload size (local + overflow)
+#   $arrayvar(header_bytes)        =  Header size in bytes
+#   $arrayvar(local_payload_bytes) =  Local payload size
+#   $arrayvar(parent)              =  Parent page number
+# 
+proc cursor_info {arrayvar csr {up 0}} {
+  upvar $arrayvar a
+  foreach [list a(page_no) \
+                a(entry_no) \
+                a(page_entries) \
+                a(cell_size) \
+                a(page_freebytes) \
+                a(page_freeblocks) \
+                a(payload_bytes) \
+                a(header_bytes) \
+                a(local_payload_bytes) \
+                a(parent) \
+                a(first_ovfl) ] [btree_cursor_info $csr $up] break
+}
+
+# Determine the page-size of the database. This global variable is used
+# throughout the script.
+#
+set pageSize [db eval {PRAGMA page_size}]
+
+# Find the root page of table or index to be analyzed.  Also find out
+# if the object is a table or an index.
+#
+if {$objname=="sqlite_master"} {
+  set rootpage 1
+  set type table
+} else {
+  db eval {
+    SELECT rootpage, type FROM sqlite_master
+     WHERE name=$objname
+  } break
+  if {![info exists rootpage]} {
+    puts stderr "no such table or index: $objname"
+    exit 1
+  }
+  if {$type!="table" && $type!="index"} {
+    puts stderr "$objname is something other than a table or index"
+    exit 1
+  }
+  if {![string is integer -strict $rootpage]} {
+    puts stderr "invalid root page for $objname: $rootpage"
+    exit 1
+  } 
+}
+
+# The cursor $csr is pointing to an entry.  Print out information
+# about the page that $up levels above that page that contains
+# the entry.  If $up==0 use the page that contains the entry.
+# 
+# If information about the page has been printed already, then
+# this is a no-op.
+# 
+proc page_info {csr up} {
+  global seen
+  cursor_info ci $csr $up
+  set pg $ci(page_no)
+  if {[info exists seen($pg)]} return
+  set seen($pg) 1
+
+  # Do parent pages first
+  #
+  if {$ci(parent)} {
+    page_info $csr [expr {$up+1}]
+  }
+
+  # Find the depth of this page
+  #
+  set depth 1
+  set i $up
+  while {$ci(parent)} {
+    incr i
+    incr depth
+    cursor_info ci $csr $i
+  }
+
+  # print the results
+  #
+  puts [format {LEVEL %d:  %6d} $depth $pg]
+}  
+
+  
+  
+
+# Loop through the object and print out page numbers
+#
+set csr [btree_cursor $DB $rootpage 0]
+for {btree_first $csr} {![btree_eof $csr]} {btree_next $csr} {
+  page_info $csr 0
+  set i 1
+  foreach pg [btree_ovfl_info $DB $csr] {
+    puts [format {OVFL %3d: %6d} $i $pg]
+    incr i
+  }
+}
+exit 0

Modified: freeswitch/trunk/libs/sqlite/tool/lemon.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/tool/lemon.c	(original)
+++ freeswitch/trunk/libs/sqlite/tool/lemon.c	Thu Feb 22 17:09:42 2007
@@ -361,8 +361,6 @@
   rc = ap1->sp->index - ap2->sp->index;
   if( rc==0 ) rc = (int)ap1->type - (int)ap2->type;
   if( rc==0 ){
-    assert( ap1->type==REDUCE || ap1->type==RD_RESOLVED || ap1->type==CONFLICT);
-    assert( ap2->type==REDUCE || ap2->type==RD_RESOLVED || ap2->type==CONFLICT);
     rc = ap1->x.rp->index - ap2->x.rp->index;
   }
   return rc;
@@ -1019,6 +1017,10 @@
   struct symbol *spx, *spy;
   int errcnt = 0;
   assert( apx->sp==apy->sp );  /* Otherwise there would be no conflict */
+  if( apx->type==SHIFT && apy->type==SHIFT ){
+    apy->type = CONFLICT;
+    errcnt++;
+  }
   if( apx->type==SHIFT && apy->type==REDUCE ){
     spx = apx->sp;
     spy = apy->x.rp->precsym;
@@ -3166,7 +3168,7 @@
   if( z==0 ) return "";
   while( n-- > 0 ){
     c = *(zText++);
-    if( c=='%' && zText[0]=='d' ){
+    if( c=='%' && n>0 && zText[0]=='d' ){
       sprintf(zInt, "%d", p1);
       p1 = p2;
       strcpy(&z[used], zInt);
@@ -3196,7 +3198,7 @@
   lhsused = 0;
 
   append_str(0,0,0,0);
-  for(cp=rp->code; *cp; cp++){
+  for(cp=(rp->code?rp->code:""); *cp; cp++){
     if( isalpha(*cp) && (cp==rp->code || (!isalnum(cp[-1]) && cp[-1]!='_')) ){
       char saved;
       for(xp= &cp[1]; isalnum(*xp) || *xp=='_'; xp++);
@@ -3259,8 +3261,10 @@
       }
     }
   }
-  cp = append_str(0,0,0,0);
-  rp->code = Strsafe(cp);
+  if( rp->code ){
+    cp = append_str(0,0,0,0);
+    rp->code = Strsafe(cp?cp:"");
+  }
 }
 
 /* 
@@ -3623,7 +3627,7 @@
   n = acttab_size(pActtab);
   for(i=j=0; i<n; i++){
     int action = acttab_yyaction(pActtab, i);
-    if( action<0 ) action = lemp->nsymbol + lemp->nrule + 2;
+    if( action<0 ) action = lemp->nstate + lemp->nrule + 2;
     if( j==0 ) fprintf(out," /* %5d */ ", i);
     fprintf(out, " %4d,", action);
     if( j==9 || i==n-1 ){
@@ -3828,7 +3832,7 @@
 
   /* Generate code which execution during each REDUCE action */
   for(rp=lemp->rule; rp; rp=rp->next){
-    if( rp->code ) translate_code(lemp, rp);
+    translate_code(lemp, rp);
   }
   for(rp=lemp->rule; rp; rp=rp->next){
     struct rule *rp2;

Modified: freeswitch/trunk/libs/sqlite/tool/lempar.c
==============================================================================
--- freeswitch/trunk/libs/sqlite/tool/lempar.c	(original)
+++ freeswitch/trunk/libs/sqlite/tool/lempar.c	Thu Feb 22 17:09:42 2007
@@ -476,7 +476,6 @@
   }
 #endif /* NDEBUG */
 
-#ifndef NDEBUG
   /* Silence complaints from purify about yygotominor being uninitialized
   ** in some cases when it is copied into the stack after the following
   ** switch.  yygotominor is uninitialized when a rule reduces that does
@@ -484,9 +483,15 @@
   ** value of the nonterminal uninitialized is utterly harmless as long
   ** as the value is never used.  So really the only thing this code
   ** accomplishes is to quieten purify.  
+  **
+  ** 2007-01-16:  The wireshark project (www.wireshark.org) reports that
+  ** without this code, their parser segfaults.  I'm not sure what their
+  ** parser is doing to make this happen.  This is the second bug report
+  ** from wireshark this week.  Clearly they are stressing Lemon in ways
+  ** that it has not been previously stressed...  (SQLite ticket #2172)
   */
   memset(&yygotominor, 0, sizeof(yygotominor));
-#endif
+
 
   switch( yyruleno ){
   /* Beginning here are the reduction cases.  A typical example

Modified: freeswitch/trunk/libs/sqlite/tool/spaceanal.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/tool/spaceanal.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/tool/spaceanal.tcl	Thu Feb 22 17:09:42 2007
@@ -26,6 +26,10 @@
   exit 1
 }
 
+# Maximum distance between pages before we consider it a "gap"
+#
+set MAXGAP 3
+
 # Open the database
 #
 sqlite3 db [lindex $argv 0]
@@ -53,7 +57,8 @@
    ovfl_pages int,   -- Number of overflow pages used
    int_unused int,   -- Number of unused bytes on interior pages
    leaf_unused int,  -- Number of unused bytes on primary pages
-   ovfl_unused int   -- Number of unused bytes on overflow pages
+   ovfl_unused int,  -- Number of unused bytes on overflow pages
+   gap_cnt int       -- Number of gaps in the page layout
 );}
 mem eval $tabledef
 
@@ -105,7 +110,8 @@
                 a(payload_bytes) \
                 a(header_bytes) \
                 a(local_payload_bytes) \
-                a(parent) ] [btree_cursor_info $csr $up] {}
+                a(parent) \
+                a(first_ovfl) ] [btree_cursor_info $csr $up] break
 }
 
 # Determine the page-size of the database. This global variable is used
@@ -145,6 +151,8 @@
   set ovfl_pages $wideZero           ;# Number of overflow pages used
   set leaf_pages $wideZero           ;# Number of leaf pages
   set int_pages $wideZero            ;# Number of interior pages
+  set gap_cnt 0                      ;# Number of holes in the page sequence
+  set prev_pgno 0                    ;# Last page number seen
 
   # As the btree is traversed, the array variable $seen($pgno) is set to 1
   # the first time page $pgno is encountered.
@@ -180,6 +188,9 @@
       set n [expr {int(ceil($ovfl/($pageSize-4.0)))}]
       incr ovfl_pages $n
       incr unused_ovfl [expr {$n*($pageSize-4) - $ovfl}]
+      set pglist [btree_ovfl_info $DB $csr]
+    } else {
+      set pglist {}
     }
 
     # If this is the first table entry analyzed for the page, then update
@@ -191,6 +202,7 @@
       set seen($ci(page_no)) 1
       incr leaf_pages
       incr unused_leaf $ci(page_freebytes)
+      set pglist "$ci(page_no) $pglist"
 
       # Now check if the page has a parent that has not been analyzed. If
       # so, update the $int_pages, $cnt_int_entry and $unused_int statistics
@@ -210,7 +222,19 @@
         incr int_pages
         incr cnt_int_entry $ci(page_entries)
         incr unused_int $ci(page_freebytes)
+
+        # parent pages come before their first child
+        set pglist "$ci(page_no) $pglist"
+      }
+    }
+
+    # Check the page list for fragmentation
+    #
+    foreach pg $pglist {
+      if {$pg!=$prev_pgno+1 && $prev_pgno>0} {
+        incr gap_cnt
       }
+      set prev_pgno $pg
     }
   }
   btree_close_cursor $csr
@@ -250,6 +274,7 @@
   append sql ",$unused_int"
   append sql ",$unused_leaf"
   append sql ",$unused_ovfl"
+  append sql ",$gap_cnt"
   append sql );
   mem eval $sql
 }
@@ -279,6 +304,8 @@
   set mx_payload $wideZero           ;# Maximum payload size
   set ovfl_pages $wideZero           ;# Number of overflow pages used
   set leaf_pages $wideZero           ;# Number of leaf pages
+  set gap_cnt 0                      ;# Number of holes in the page sequence
+  set prev_pgno 0                    ;# Last page number seen
 
   # As the btree is traversed, the array variable $seen($pgno) is set to 1
   # the first time page $pgno is encountered.
@@ -324,6 +351,11 @@
       set seen($ci(page_no)) 1
       incr leaf_pages
       incr unused_leaf $ci(page_freebytes)
+      set pg $ci(page_no)
+      if {$prev_pgno>0 && $pg!=$prev_pgno+1} {
+        incr gap_cnt
+      }
+      set prev_pgno $ci(page_no)
     }
   }
   btree_close_cursor $csr
@@ -355,6 +387,7 @@
   append sql ",0"
   append sql ",$unused_leaf"
   append sql ",$unused_ovfl"
+  append sql ",$gap_cnt"
   append sql );
   mem eval $sql
 }
@@ -420,7 +453,8 @@
       int(sum(ovfl_pages)) AS ovfl_pages,
       int(sum(leaf_unused)) AS leaf_unused,
       int(sum(int_unused)) AS int_unused,
-      int(sum(ovfl_unused)) AS ovfl_unused
+      int(sum(ovfl_unused)) AS ovfl_unused,
+      int(sum(gap_cnt)) AS gap_cnt
     FROM space_used WHERE $where" {} {}
 
   # Output the sub-report title, nicely decorated with * characters.
@@ -476,6 +510,10 @@
   if {[info exists avg_fanout]} {
     statline {Average fanout} $avg_fanout
   }
+  if {$total_pages>1} {
+    set fragmentation [percent $gap_cnt [expr {$total_pages-1}] {fragmentation}]
+    statline {Fragmentation} $fragmentation
+  }
   statline {Maximum payload per entry} $mx_payload
   statline {Entries that use overflow} $ovfl_cnt $ovfl_cnt_percent
   if {$int_pages>0} {
@@ -731,6 +769,14 @@
     category on a per-entry basis.  This is the number of unused bytes on
     all pages divided by the number of entries.
 
+Fragmentation
+
+    The percentage of pages in the table or index that are not
+    consecutive in the disk file.  Many filesystems are optimized
+    for sequential file access so smaller fragmentation numbers 
+    sometimes result in faster queries, especially for larger
+    database files that do not fit in the disk cache.
+
 Maximum payload per entry
 
     The largest payload size of any entry.

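The new gap_cnt statistic is computed while walking each btree: whenever the next page visited is not exactly one greater than the previous page number, a gap is counted, and the report shows gap_cnt/(total_pages-1) under "Fragmentation".  A small worked example of that loop (standalone, not part of the tool):

  set prev_pgno 0
  set gap_cnt 0
  foreach pg {3 4 5 9 10 14} {               ;# pages in btree traversal order
    if {$pg!=$prev_pgno+1 && $prev_pgno>0} {
      incr gap_cnt                           ;# 5->9 and 10->14 are jumps
    }
    set prev_pgno $pg
  }
  # gap_cnt is now 2; with 6 pages the report would show 2/(6-1) = 40%
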
Modified: freeswitch/trunk/libs/sqlite/www/capi3ref.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/www/capi3ref.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/www/capi3ref.tcl	Thu Feb 22 17:09:42 2007
@@ -1,4 +1,4 @@
-set rcsid {$Id: capi3ref.tcl,v 1.45 2006/09/15 16:58:49 drh Exp $}
+set rcsid {$Id: capi3ref.tcl,v 1.51 2007/01/10 12:57:29 drh Exp $}
 source common.tcl
 header {C/C++ Interface For SQLite Version 3}
 puts {
@@ -157,7 +157,7 @@
   #define SQLITE_STATIC      ((void(*)(void *))0)
   #define SQLITE_TRANSIENT   ((void(*)(void *))-1)
 } {
- In the SQL strings input to sqlite3_prepare() and sqlite3_prepare16(),
+ In the SQL strings input to sqlite3_prepare_v2() and sqlite3_prepare16_v2(),
 one or more literals can be replaced by a parameter "?" or ":AAA" or 
  "@AAA" or "\$VVV"
  where AAA is an alphanumeric identifier and VVV is a variable name according
@@ -166,7 +166,7 @@
  can be set using the sqlite3_bind_*() routines.
 
  The first argument to the sqlite3_bind_*() routines always is a pointer
- to the sqlite3_stmt structure returned from sqlite3_prepare().  The second
+ to the sqlite3_stmt structure returned from sqlite3_prepare_v2().  The second
  argument is the index of the parameter to be set.  The first parameter has
  an index of 1. When the same named parameter is used more than once, second
  and subsequent
@@ -194,7 +194,7 @@
  routine returns.
 
  The sqlite3_bind_*() routines must be called after
- sqlite3_prepare() or sqlite3_reset() and before sqlite3_step().
+ sqlite3_prepare_v2() or sqlite3_reset() and before sqlite3_step().
  Bindings are not cleared by the sqlite3_reset() routine.
  Unbound parameters are interpreted as NULL.
 
@@ -247,8 +247,10 @@
  upon encountering the lock.
  If the busy callback is not NULL, then the
  callback will be invoked with two arguments.  The
- second argument is the number of prior calls to the busy callback
- for the same lock.  If the
+ first argument to the handler is a copy of the void* pointer which
+ is the third argument to this routine.  The second argument to
+ the handler is the number of times that the busy handler has
+ been invoked for this locking event. If the
  busy callback returns 0, then no additional attempts are made to
  access the database and SQLITE_BUSY is returned.
  If the callback returns non-zero, then another attempt is made to open the
@@ -381,7 +383,7 @@
  These routines return information about the information
  in a single column of the current result row of a query.  In every
  case the first argument is a pointer to the SQL statement that is being
- executed (the sqlite_stmt* that was returned from sqlite3_prepare()) and
+ executed (the sqlite_stmt* that was returned from sqlite3_prepare_v2()) and
  the second argument is the index of the column for which information 
  should be returned.  iCol is zero-indexed.  The left-most column has an
  index of 0.
@@ -859,7 +861,7 @@
  value then the query is aborted, all subsequent SQL statements
 are skipped and the sqlite3_exec() function returns SQLITE_ABORT.
 
- The 4th argument is an arbitrary pointer that is passed
+ The 1st argument is an arbitrary pointer that is passed
  to the callback function as its first argument.
 
  The 2nd argument to the callback function is the number of
@@ -894,8 +896,9 @@
 int sqlite3_finalize(sqlite3_stmt *pStmt);
 } {
  The sqlite3_finalize() function is called to delete a prepared
- SQL statement obtained by a previous call to sqlite3_prepare()
- or sqlite3_prepare16(). If the statement was executed successfully, or
+ SQL statement obtained by a previous call to sqlite3_prepare(),
+ sqlite3_prepare_v2(), sqlite3_prepare16(), or sqlite3_prepare16_v2().
+ If the statement was executed successfully, or
  not executed at all, then SQLITE_OK is returned. If execution of the
  statement failed then an error code is returned. 
 
@@ -1123,6 +1126,22 @@
 }
 
 api {} {
+int sqlite3_prepare_v2(
+  sqlite3 *db,            /* Database handle */
+  const char *zSql,       /* SQL statement, UTF-8 encoded */
+  int nBytes,             /* Length of zSql in bytes. */
+  sqlite3_stmt **ppStmt,  /* OUT: Statement handle */
+  const char **pzTail     /* OUT: Pointer to unused portion of zSql */
+);
+int sqlite3_prepare16_v2(
+  sqlite3 *db,            /* Database handle */
+  const void *zSql,       /* SQL statement, UTF-16 encoded */
+  int nBytes,             /* Length of zSql in bytes. */
+  sqlite3_stmt **ppStmt,  /* OUT: Statement handle */
+  const void **pzTail     /* OUT: Pointer to unused portion of zSql */
+);
+
+/* Legacy Interfaces */
 int sqlite3_prepare(
   sqlite3 *db,            /* Database handle */
   const char *zSql,       /* SQL statement, UTF-8 encoded */
@@ -1139,14 +1158,13 @@
 );
 } {
  To execute an SQL query, it must first be compiled into a byte-code
- program using one of the following routines. The only difference between
- them is that the second argument, specifying the SQL statement to
- compile, is assumed to be encoded in UTF-8 for the sqlite3_prepare()
- function and UTF-16 for sqlite3_prepare16().
+ program using one of these routines. 
 
  The first argument "db" is an SQLite database handle. The second
  argument "zSql" is the statement to be compiled, encoded as either
- UTF-8 or UTF-16 (see above). If the next argument, "nBytes", is less
+ UTF-8 or UTF-16.  The sqlite3_prepare_v2()
+ interface uses UTF-8 and sqlite3_prepare16_v2()
+ uses UTF-16. If the next argument, "nBytes", is less
  than zero, then zSql is read up to the first nul terminator.  If
  "nBytes" is not less than zero, then it is the length of the string zSql
  in bytes (not characters).
@@ -1163,6 +1181,38 @@
  using sqlite3_finalize() after it has finished with it.
 
  On success, SQLITE_OK is returned.  Otherwise an error code is returned.
+
+ The sqlite3_prepare_v2() and sqlite3_prepare16_v2() interfaces are
+ recommended for all new programs. The two older interfaces are retained
+ for backwards compatibility, but their use is discouraged.
+ In the "v2" interfaces, the prepared statement
+ that is returned (the sqlite3_stmt object) contains a copy of the original
+ SQL. This causes the sqlite3_step() interface to behave differently in
+ two ways:
+
+ <ol>
+ <li>
+ If the database schema changes, instead of returning SQLITE_SCHEMA as it
+ always used to do, sqlite3_step() will automatically recompile the SQL
+ statement and try to run it again.  If the schema has changed in a way
+ that makes the statement no longer valid, sqlite3_step() will still
+ return SQLITE_SCHEMA.  But unlike the legacy behavior, SQLITE_SCHEMA is
+ now a fatal error.  Calling sqlite3_prepare_v2() again will not make the
+ error go away.  Note: use sqlite3_errmsg() to find the text of the parsing
+ error that results in an SQLITE_SCHEMA return.
+ </li>
+
+ <li>
+ When an error occurs, 
+ sqlite3_step() will return one of the detailed result-codes
+ like SQLITE_IOERR or SQLITE_FULL or SQLITE_SCHEMA directly. The
+ legacy behavior was that sqlite3_step() would only return a generic
+ SQLITE_ERROR code and you would have to make a second call to
+ sqlite3_reset() in order to find the underlying cause of the problem.
+ With the "v2" prepare interfaces, the underlying reason for the error is
+ returned directly.
+ </li>
+ </ol>
 }
 
 api {} {
@@ -1200,8 +1250,9 @@
 int sqlite3_reset(sqlite3_stmt *pStmt);
 } {
  The sqlite3_reset() function is called to reset a prepared SQL
- statement obtained by a previous call to sqlite3_prepare() or
- sqlite3_prepare16() back to it's initial state, ready to be re-executed.
+ statement obtained by a previous call to 
+ sqlite3_prepare_v2() or
+ sqlite3_prepare16_v2() back to its initial state, ready to be re-executed.
  Any SQL statement variables that had values bound to them using
  the sqlite3_bind_*() API retain their values.
 }
@@ -1271,7 +1322,7 @@
 #define SQLITE_IGNORE 2   /* Don't allow access, but don't generate an error */
 } {
  This routine registers a callback with the SQLite library.  The
- callback is invoked by sqlite3_prepare() to authorize various
+ callback is invoked by sqlite3_prepare_v2() to authorize various
  operations against the database.  The callback should
  return SQLITE_OK if access is allowed, SQLITE_DENY if the entire
  SQL statement should be aborted with an error and SQLITE_IGNORE
@@ -1302,10 +1353,10 @@
  The return value of the authorization function should be one of the
  constants SQLITE_OK, SQLITE_DENY, or SQLITE_IGNORE.  A return of
  SQLITE_OK means that the operation is permitted and that 
- sqlite3_prepare() can proceed as normal.
- A return of SQLITE_DENY means that the sqlite3_prepare()
+ sqlite3_prepare_v2() can proceed as normal.
+ A return of SQLITE_DENY means that the sqlite3_prepare_v2()
  should fail with an error.  A return of SQLITE_IGNORE causes the 
- sqlite3_prepare() to continue as normal but the requested 
+ sqlite3_prepare_v2() to continue as normal but the requested 
  operation is silently converted into a no-op.  A return of SQLITE_IGNORE
  in response to an SQLITE_READ or SQLITE_FUNCTION causes the column
  being read or the function being invoked to return a NULL.
@@ -1320,11 +1371,22 @@
 int sqlite3_step(sqlite3_stmt*);
 } {
  After an SQL query has been prepared with a call to either
- sqlite3_prepare() or sqlite3_prepare16(), then this function must be
+ sqlite3_prepare_v2() or sqlite3_prepare16_v2() or to one of
+ the legacy interfaces sqlite3_prepare() or sqlite3_prepare16(),
+ then this function must be
  called one or more times to execute the statement.
 
- The return value will be either SQLITE_BUSY, SQLITE_DONE, 
- SQLITE_ROW, SQLITE_ERROR, or SQLITE_MISUSE.
+ The details of the behavior of this sqlite3_step() interface depend
+ on whether the statement was prepared using the newer "v2" interface
+ sqlite3_prepare_v2() and sqlite3_prepare16_v2() or the older legacy
+ interface sqlite3_prepare() and sqlite3_prepare16().  The use of the
+ new "v2" interface is recommended for new applications but the legacy
+ interface will continue to be supported.
+
+ In the legacy interface, the return value will be either SQLITE_BUSY,
+ SQLITE_DONE, SQLITE_ROW, SQLITE_ERROR, or SQLITE_MISUSE.  With the "v2"
+ interface, any of the other SQLite result-codes might be returned as
+ well.
 
  SQLITE_BUSY means that the database engine attempted to open
  a locked database and there is no busy callback registered.
@@ -1338,15 +1400,16 @@
  If the SQL statement being executed returns any data, then 
  SQLITE_ROW is returned each time a new row of data is ready
  for processing by the caller. The values may be accessed using
- the sqlite3_column_*() functions. sqlite3_step()
- is called again to retrieve the next row of data.
+ the sqlite3_column_int(), sqlite3_column_text(), and similar functions.
+ sqlite3_step() is called again to retrieve the next row of data.
  
  SQLITE_ERROR means that a run-time error (such as a constraint
  violation) has occurred.  sqlite3_step() should not be called again on
  the VM. More information may be found by calling sqlite3_errmsg().
  A more specific error code (example: SQLITE_INTERRUPT, SQLITE_SCHEMA,
  SQLITE_CORRUPT, and so forth) can be obtained by calling
- sqlite3_reset() on the prepared statement.
+ sqlite3_reset() on the prepared statement.  In the "v2" interface,
+ the more specific error code is returned directly by sqlite3_step().
 
  SQLITE_MISUSE means that this routine was called inappropriately.
  Perhaps it was called on a virtual machine that had already been
@@ -1355,35 +1418,17 @@
  is being used by a different thread than the one it was created in.
 
  <b>Goofy Interface Alert:</b>
- The sqlite3_step() API always returns a generic error code,
+ In the legacy interface, 
+ the sqlite3_step() API always returns a generic error code,
  SQLITE_ERROR, following any error other than SQLITE_BUSY and SQLITE_MISUSE.
  You must call sqlite3_reset() (or sqlite3_finalize()) in order to find
- the specific error code that better describes the error.  We admit that
- this is a goofy design.  Sqlite3_step() would be much easier to use if
- it returned the specific error code directly.  But we cannot change that
- now without breaking backwards compatibility.
-
- Note that there is never any harm in calling sqlite3_reset() after
- getting back an SQLITE_ERROR from sqlite3_step().  Any API that can
- be used after an sqlite3_step() can also be used after sqlite3_reset().
- You may want to create a simple wrapper around sqlite3_step() to make
- this easier.  For example:
-
- <blockquote><pre>
-    int less_goofy_sqlite3_step(sqlite3_stmt *pStatement){
-      int rc;
-      rc = sqlite3_step(pStatement);
-      if( rc==SQLITE_ERROR ){
-        rc = sqlite3_reset(pStatement);
-      }
-      return rc;
-    }
- </pre></blockquote>
-
- Simply substitute the less_goofy_sqlite3_step() call above for 
- the normal sqlite3_step() everywhere in your code, and you will
- always get back the specific error code rather than a generic
- SQLITE_ERROR error code.
+ one of the specific result-codes that better describes the error.
+ We admit that this is a goofy design.  The problem has been fixed
+ with the "v2" interface.  If you prepare all of your SQL statements
+ using either sqlite3_prepare_v2() or sqlite3_prepare16_v2() instead
+ of the legacy sqlite3_prepare() and sqlite3_prepare16(), then the 
+ more specific result-codes are returned directly by sqlite3_step().
+ The use of the "v2" interface is recommended.
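+
+ <p>For statements prepared with the legacy interface, the pattern
+ described in the paragraph above is roughly this (sketch only):</p>
+
+ <blockquote><pre>
+    int rc = sqlite3_step(pStmt);
+    if( rc==SQLITE_ERROR ){
+      /* Ask sqlite3_reset() for the specific result code,
+      ** e.g. SQLITE_CONSTRAINT or SQLITE_SCHEMA. */
+      rc = sqlite3_reset(pStmt);
+    }
+ </pre></blockquote>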
 }
 
 api {} {
@@ -1391,7 +1436,7 @@
 } {
  Register a function that is called each time an SQL statement is evaluated.
  The callback function is invoked on the first call to sqlite3_step() after
- calls to sqlite3_prepare() or sqlite3_reset().
+ calls to sqlite3_prepare_v2() or sqlite3_reset().
  This function can be used (for example) to generate
  a log file of all SQL executed against a database.  This can be
  useful when debugging an application that uses SQLite.
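+
+ <p>A minimal sketch (the callback name and the use of stderr as the
+ log destination are arbitrary choices):</p>
+
+ <blockquote><pre>
+    static void traceCallback(void *pNotUsed, const char *zSql){
+      fprintf(stderr, "SQL: %s\n", zSql);
+    }
+
+    /* ... */
+    sqlite3_trace(db, traceCallback, 0);
+ </pre></blockquote>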
@@ -1570,7 +1615,7 @@
 
   When the shared cache is enabled, the
   following routines must always be called from the same thread:
-  sqlite3_open(), sqlite3_prepare(), sqlite3_step(), sqlite3_reset(),
+  sqlite3_open(), sqlite3_prepare_v2(), sqlite3_step(), sqlite3_reset(),
   sqlite3_finalize(), and sqlite3_close().
   This is due to the fact that the shared cache makes use of
   thread-specific storage so that it will be available for sharing

Modified: freeswitch/trunk/libs/sqlite/www/changes.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/www/changes.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/www/changes.tcl	Thu Feb 22 17:09:42 2007
@@ -25,6 +25,99 @@
   puts "<DD><P><UL>$desc</UL></P></DD>"
 }
 
+chng {2007 February 13 (3.3.13)} {
+<li>Add a "fragmentation" measurement in the output of sqlite3_analyzer.</li>
+<li>Add the COLLATE operator used to explicitly set the collating sequence
+used by an expression.  This feature is considered experimental pending
+additional testing.</li>
+<li>Allow up to 64 tables in a join - the old limit was 32.</li>
+<li>Added two new experimental functions:
+<a href="lang_expr.html#randomblobFunc">randomBlob()</a> and
+<a href="lang_expr.html#hexFunc">hex()</a>.
+Their intended use is to facilitate generating 
+<a href="http://en.wikipedia.org/wiki/UUID">UUIDs</a>.
+</li>
+<li>Fix a problem where
+<a href="pragma.html#pragma_count_changes">PRAGMA count_changes</a> was
+causing incorrect results for updates on tables with triggers</li>
+<li>Fix a bug in the ORDER BY clause optimizer for joins where the
+left-most table in the join is constrained by a UNIQUE index.</li>
+<li>Fixed a bug in the "copy" method of the TCL interface.</li>
+<li>Bug fixes in fts1 and fts2 modules.</li>
+}
+
+chng {2007 January 27 (3.3.12)} {
+<li>Fix another bug in the IS NULL optimization that was added in
+version 3.3.9.</li>
+<li>Fix an assertion fault that occurred on deeply nested views.</li>
+<li>Limit the amount of output that
+<a href="pragma.html#pragma_integrity_check">PRAGMA integrity_check</a>
+generates.</li>
+<li>Minor syntactic changes to support a wider variety of compilers.</li>
+}
+
+chng {2007 January 22 (3.3.11)} {
+<li>Fix another bug in the implementation of the new 
+<a href="capi3ref.html#sqlite3_prepare_v2">sqlite3_prepare_v2()</a> API.
+We'll get it right eventually...</li>
+<li>Fix a bug in the IS NULL optimization that was added in version 3.3.9 -
+the bug was causing incorrect results on certain LEFT JOINs that included
+in the WHERE clause an IS NULL constraint for the right table of the
+LEFT JOIN.</li>
+<li>Make AreFileApisANSI() a no-op macro in winCE since winCE does not
+support this function.</li>
+}
+
+chng {2007 January 9 (3.3.10)} {
+<li>Fix bugs in the implementation of the new 
+<a href="capi3ref.html#sqlite3_prepare_v2">sqlite3_prepare_v2()</a> API
+that can lead to segfaults.</li>
+<li>Fix 1-second round-off errors in the 
+<a href="http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions">
+strftime()</a> function</li>
+<li>Enhance the windows OS layer to provide detailed error codes</li>
+<li>Work around a win2k problem so that SQLite can use single-character
+database file names</li>
+<li>The
+<a href="pragma.html#pragma_user_version">user_version</a> and
+<a href="pragma.html#pragma_schema_version">schema_version</a> pragmas 
+correctly set their column names in the result set</li>
+<li>Documentation updates</li>
+}
+
+chng {2007 January 4 (3.3.9)} {
+<li>Fix bugs in pager.c that could lead to database corruption if two
+processes both try to recover a hot journal at the same instant</li>
+<li>Added the <a href="capi3ref.html#sqlite3_prepare_v2">sqlite3_prepare_v2()</a>
+API.</li>
+<li>Fixed the ".dump" command in the command-line shell to show
+indices, triggers and views again.</li>
+<li>Change the table_info pragma so that it returns NULL for the default
+value if there is no default value</li>
+<li>Support for non-ASCII characters in win95 filenames</li>
+<li>Query optimizer enhancements:
+<ul>
+<li>Optimizer does a better job of using indices to satisfy ORDER BY
+clauses that sort on the integer primary key</li>
+<li>Use an index to satisfy an IS NULL operator in the WHERE clause</li>
+<li>Fix a bug that was causing the optimizer to miss an OR optimization
+opportunity</li>
+<li>The optimizer has more freedom to reorder tables in the FROM clause
+even if there are LEFT joins.</li>
+</ul>
+<li>Extension loading support added to winCE</li>
+<li>Allow constraint names on the DEFAULT clause in a table definition</li>
+<li>Added the ".bail" command to the command-line shell</li>
+<li>Make CSV (comma-separated value) output from the command-line shell
+more closely aligned to accepted practice</li>
+<li>Experimental FTS2 module added</li>
+<li>Use sqlite3_mprintf() instead of strdup() to avoid libc dependencies</li>
+<li>VACUUM uses a temporary file in the official TEMP folder, not in the
+same directory as the original database</li>
+<li>The prefix on temporary filenames on windows is changed from "sqlite"
+to "etilqs".</li>
+}
+
 chng {2006 October 9 (3.3.8)} {
 <li>Support for full text search using the
 <a href="http://www.sqlite.org/cvstrac/wiki?p=FullTextIndex">FTS1 module</a>

Modified: freeswitch/trunk/libs/sqlite/www/different.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/www/different.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/www/different.tcl	Thu Feb 22 17:09:42 2007
@@ -1,4 +1,4 @@
-set rcsid {$Id: different.tcl,v 1.7 2006/05/11 13:33:15 drh Exp $}
+set rcsid {$Id: different.tcl,v 1.8 2006/12/18 14:12:21 drh Exp $}
 source common.tcl
 header {Distinctive Features Of SQLite}
 puts {
@@ -108,11 +108,15 @@
   PRIMARY KEY column may only store integers.  And SQLite attempts to coerce
   values into the declared datatype of the column when it can.)
   <p>
-  The SQL language specification calls for static typing.  So some people
+  As far as we can tell, the SQL language specification allows the use
+  of manifest typing.   Nevertheless, most other SQL database engines are
+  statically typed and so some people
   feel that the use of manifest typing is a bug in SQLite.  But the authors
-  of SQLite feel very strongly that this is a feature.  The authors argue
-  that static typing is a bug in the SQL specification that SQLite has fixed
-  in a backwards compatible way.
+  of SQLite feel very strongly that this is a feature.  The use of manifest
+  typing in SQLite is a deliberate design decision which has proven in practice
+  to make SQLite more reliable and easier to use, especially when used in
+  combination with dynamically typed programming languages such as Tcl and
+  Python.
 }
 
 feature flex {Variable-length records} {

Modified: freeswitch/trunk/libs/sqlite/www/index.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/www/index.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/www/index.tcl	Thu Feb 22 17:09:42 2007
@@ -27,9 +27,10 @@
 <li>A complete database is stored in a single disk file.</li>
 <li>Database files can be freely shared between machines with
     different byte orders.</li>
-<li>Supports databases up to 2 terabytes
+<li>Supports databases up to 2 tebibytes
     (2<sup><small>41</small></sup> bytes) in size.</li>
-<li>Sizes of strings and BLOBs limited only by available memory.</li>
+<li>Strings and BLOBs up to 2 gibibytes (2<sup><small>31</small></sup> bytes)
+    in size.</li>
 <li>Small code footprint: less than 250KiB fully configured or less
     than 150KiB with optional features omitted.</li>
 <li><a href="speed.html">Faster</a> than popular client/server database
@@ -66,49 +67,33 @@
   puts "<hr width=\"50%\">"
 }
 
-newsitem {2006-Oct-9} {Version 3.3.8} {
-  Version 3.3.8 adds support for full-text search using the 
-  <a href="http://www.sqlite.org/cvstrac/wiki?p=FtsOne">FTS1
-  module.</a>  There are also minor bug fixes.  Upgrade only if
-  you want to try out the new full-text search capabilities or if
-  you are having problems with 3.3.7.
-}
-
-newsitem {2006-Aug-12} {Version 3.3.7} {
-  Version 3.3.7 includes support for loadable extensions and virtual
-  tables.  But both features are still considered "beta" and their
-  APIs are subject to change in a future release.  This release is
-  mostly to make available the minor bug fixes that have accumulated
-  since 3.3.6.  Upgrading is not necessary.  Do so only if you encounter
-  one of the obscure bugs that have been fixed or if you want to try
-  out the new features.
-}
-
-newsitem {2006-Jun-19} {New Book About SQLite} {
-  <a href="http://www.apress.com/book/bookDisplay.html?bID=10130">
-  <i>The Definitive Guide to SQLite</i></a>, a new book by
-  <a href="http://www.mikesclutter.com">Mike Owens</a>.
-  is now available from <a href="http://www.apress.com">Apress</a>.
-  The books covers the latest SQLite internals as well as
-  the native C interface and bindings for PHP, Python,
-  Perl, Ruby, Tcl, and Java.  Recommended.
-}
-
-newsitem {2006-Jun-6} {Version 3.3.6} {
-  Changes include improved tolerance for windows virus scanners
-  and faster :memory: databases.  There are also fixes for several
-  obscure bugs.  Upgrade if you are having problems.
-}
-
-newsitem {2006-Apr-5} {Version 3.3.5} {
-  This release fixes many minor bugs and documentation typos and
-  provides some minor new features and performance enhancements.
-  Upgrade only if you are having problems or need one of the new features.
+newsitem {2007-Feb-13} {Version 3.3.13} {
+  This version fixes a subtle bug in the ORDER BY optimizer that can 
+  occur when using joins.  There are also a few minor enhancements.
+  Upgrading is recommended.
 }
 
+newsitem {2007-Jan-27} {Version 3.3.12} {
+  The first published build of the previous version used the wrong
+  set of source files.  Consequently, many people downloaded a build
+  that was labeled as "3.3.11" but was really 3.3.10.  Version 3.3.12
+  is released to clear up the ambiguity.  A couple more bugs have
+  also been fixed and <a href="pragma.html#pragma_integrity_check">
+  PRAGMA integrity_check</a> has been enhanced.
+}
+
+newsitem {2007-Jan-22} {Version 3.3.11} {
+  Version 3.3.11 fixes a few more problems in version 3.3.9 that
+  version 3.3.10 failed to catch.  Upgrading is recommended.
+}
+
+newsitem {2007-Jan-9} {Version 3.3.10} {
+  Version 3.3.10 fixes several bugs that were introduced by the previous
+  release.  Upgrading is recommended.
+}
 
 puts {
 <p align="right"><a href="oldnews.html">Old news...</a></p>
 </td></tr></table>
 }
-footer {$Id: index.tcl,v 1.143 2006/10/08 18:56:57 drh Exp $}
+footer {$Id: index.tcl,v 1.150 2007/02/13 02:03:24 drh Exp $}

Modified: freeswitch/trunk/libs/sqlite/www/lang.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/www/lang.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/www/lang.tcl	Thu Feb 22 17:09:42 2007
@@ -1,7 +1,7 @@
 #
 # Run this Tcl script to generate the lang-*.html files.
 #
-set rcsid {$Id: lang.tcl,v 1.118 2006/09/23 20:46:23 drh Exp $}
+set rcsid {$Id: lang.tcl,v 1.122 2007/02/13 02:03:25 drh Exp $}
 source common.tcl
 
 if {[llength $argv]>0} {
@@ -1007,7 +1007,8 @@
 <expr> [NOT] IN [<database-name> .] <table-name> |
 [EXISTS] ( <select-statement> ) |
 CASE [<expr>] LP WHEN <expr> THEN <expr> RPPLUS [ELSE <expr>] END |
-CAST ( <expr> AS <type> )
+CAST ( <expr> AS <type> ) |
+<expr> COLLATE <collation-name>
 } {like-op} {
 LIKE | GLOB | REGEXP | MATCH
 }
@@ -1032,12 +1033,17 @@
 OR</font>
 </pre></blockquote>
 
-<p>Supported unary operators are these:</p>
+<p>Supported unary prefix operators are these:</p>
 
 <blockquote><pre>
 <font color="#2c2cf0"><big>-    +    !    ~    NOT</big></font>
 </pre></blockquote>
 
+<p>The COLLATE operator can be thought of as a unary postfix
+operator.  The COLLATE operator has the highest precedence.
+It always binds more tightly than any prefix unary operator or
+any binary operator.</p>
+
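+<p>For instance, in the sketch below (the database handle, table, and
+column names are only illustrative) the COLLATE operator binds to the
+string literal rather than to the whole comparison, so the equality
+test is performed using the NOCASE collating sequence:</p>
+
+<blockquote><pre>
+/* The WHERE clause is parsed as:  x = ('abc' COLLATE NOCASE)  */
+sqlite3_prepare_v2(db, "SELECT * FROM t1 WHERE x = 'abc' COLLATE NOCASE",
+                   -1, &pStmt, 0);
+</pre></blockquote>
+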
 <p>The unary operator [Operator +] is a no-op.  It can be applied
 to strings, numbers, or blobs and it always gives as its result the
 value of the operand.</p>
@@ -1265,8 +1271,9 @@
 </tr>
 
 <tr>
+<td valign="top" align="right">
 <a name="globFunc"></a>
-<td valign="top" align="right">glob(<i>X</i>,<i>Y</i>)</td>
+glob(<i>X</i>,<i>Y</i>)</td>
 <td valign="top">This function is used to implement the
 "<b>X GLOB Y</b>" syntax of SQLite.  The
 <a href="capi3ref.html#sqlite3_create_function">sqlite3_create_function()</a> 
@@ -1283,6 +1290,14 @@
 </tr>
 
 <tr>
+<td valign="top" align="right">
+<a name="hexFunc"></a>
+hex(<i>X</i>)</td>
+<td valign="top">The argument is interpreted as a BLOB.  The result
+is a hexadecimal rendering of the content of that blob.</td>
+</tr>
+
+<tr>
 <td valign="top" align="right">last_insert_rowid()</td>
 <td valign="top">Return the ROWID of the last row insert from this
 connection to the database.  This is the same value that would be returned
@@ -1297,8 +1312,9 @@
 </tr>
 
 <tr>
+<td valign="top" align="right">
 <a name="likeFunc"></a>
-<td valign="top" align="right">like(<i>X</i>,<i>Y</i> [,<i>Z</i>])</td>
+like(<i>X</i>,<i>Y</i> [,<i>Z</i>])</td>
 <td valign="top">
 This function is used to implement the "<b>X LIKE Y [ESCAPE Z]</b>"
 syntax of SQL. If the optional ESCAPE clause is present, then the
@@ -1374,6 +1390,14 @@
 </tr>
 
 <tr>
+<td valign="top" align="right">
+<a name="randomblobFunc"></a>
+randomblob(<i>N</i>)</td>
+<td valign="top">Return an <i>N</i>-byte blob containing pseudo-random bytes.
+<i>N</i> should be a positive integer.</td>
+</tr>
+
+<tr>
 <td valign="top" align="right">round(<i>X</i>)<br>round(<i>X</i>,<i>Y</i>)</td>
 <td valign="top">Round off the number <i>X</i> to <i>Y</i> digits to the
 right of the decimal point.  If the <i>Y</i> argument is omitted, 0 is 

Modified: freeswitch/trunk/libs/sqlite/www/oldnews.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/www/oldnews.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/www/oldnews.tcl	Thu Feb 22 17:09:42 2007
@@ -10,6 +10,60 @@
 }
 
 
+newsitem {2007-Jan-4} {Version 3.3.9} {
+  Version 3.3.9 fixes bugs that can lead to database corruption under
+  obscure and difficult to reproduce circumstances.  See
+  <a href="http://www.sqlite.org/cvstrac/wiki?p=DatabaseCorruption">
+  DatabaseCorruption</a> in the
+  <a href="http://www.sqlite.org/cvstrac/wiki">wiki</a> for details.
+  This release also adds the new
+  <a href="capi3ref.html#sqlite3_prepare_v2">sqlite3_prepare_v2()</a>
+  API and includes important bug fixes in the command-line
+  shell and enhancements to the query optimizer.  Upgrading is
+  recommended.
+}
+
+newsitem {2006-Oct-9} {Version 3.3.8} {
+  Version 3.3.8 adds support for full-text search using the 
+  <a href="http://www.sqlite.org/cvstrac/wiki?p=FtsOne">FTS1
+  module.</a>  There are also minor bug fixes.  Upgrade only if
+  you want to try out the new full-text search capabilities or if
+  you are having problems with 3.3.7.
+}
+
+newsitem {2006-Aug-12} {Version 3.3.7} {
+  Version 3.3.7 includes support for loadable extensions and virtual
+  tables.  But both features are still considered "beta" and their
+  APIs are subject to change in a future release.  This release is
+  mostly to make available the minor bug fixes that have accumulated
+  since 3.3.6.  Upgrading is not necessary.  Do so only if you encounter
+  one of the obscure bugs that have been fixed or if you want to try
+  out the new features.
+}
+
+newsitem {2006-Jun-19} {New Book About SQLite} {
+  <a href="http://www.apress.com/book/bookDisplay.html?bID=10130">
+  <i>The Definitive Guide to SQLite</i></a>, a new book by
+  <a href="http://www.mikesclutter.com">Mike Owens</a>,
+  is now available from <a href="http://www.apress.com">Apress</a>.
+  The book covers the latest SQLite internals as well as
+  the native C interface and bindings for PHP, Python,
+  Perl, Ruby, Tcl, and Java.  Recommended.
+}
+
+
+newsitem {2006-Jun-6} {Version 3.3.6} {
+  Changes include improved tolerance for windows virus scanners
+  and faster :memory: databases.  There are also fixes for several
+  obscure bugs.  Upgrade if you are having problems.
+}
+
+newsitem {2006-Apr-5} {Version 3.3.5} {
+  This release fixes many minor bugs and documentation typos and
+  provides some minor new features and performance enhancements.
+  Upgrade only if you are having problems or need one of the new features.
+}
+
 newsitem {2006-Feb-11} {Version 3.3.4} {
   This release fixes several bugs, including a 
   a blunder that might cause a deadlock on multithreaded systems.
@@ -348,4 +402,4 @@
   Plans are to continue to support SQLite version 2.8 with
   bug fixes.  But all new development will occur in version 3.0.
 }
-footer {$Id: oldnews.tcl,v 1.16 2006/08/12 14:38:47 drh Exp $}
+footer {$Id: oldnews.tcl,v 1.19 2007/02/13 02:03:25 drh Exp $}

Modified: freeswitch/trunk/libs/sqlite/www/pragma.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/www/pragma.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/www/pragma.tcl	Thu Feb 22 17:09:42 2007
@@ -1,7 +1,7 @@
 #
 # Run this Tcl script to generate the pragma.html file.
 #
-set rcsid {$Id: pragma.tcl,v 1.18 2006/06/20 00:22:38 drh Exp $}
+set rcsid {$Id: pragma.tcl,v 1.20 2007/02/02 12:33:17 drh Exp $}
 source common.tcl
 header {Pragma statements supported by SQLite}
 
@@ -228,11 +228,11 @@
     flag.  When this flag is on, new SQLite databases are created in
     a file format that is readable and writable by all versions of
     SQLite going back to 3.0.0.  When the flag is off, new databases
-    are created using the latest file format which might to be
+    are created using the latest file format which might not be
     readable or writable by older versions of SQLite.</p>
 
-    <p>This flag only effects newly created databases.  It has no
-    effect on databases that already exists.</p>
+    <p>This flag only affects newly created databases.  It has no
+    effect on databases that already exist.</p>
 </li>
 
 
@@ -487,12 +487,16 @@
 puts {
 <ul>
 <a name="pragma_integrity_check"></a>
-<li><p><b>PRAGMA integrity_check;</b></p>
+<li><p><b>PRAGMA integrity_check;
+    <br>PRAGMA integrity_check(</b><i>integer</i><b>)</b></p>
     <p>The command does an integrity check of the entire database.  It
     looks for out-of-order records, missing pages, malformed records, and
     corrupt indices.
-    If any problems are found, then a single string is returned which is
-    a description of all problems.  If everything is in order, "ok" is
+    If any problems are found, then strings are returned (as multiple
+    rows with a single column per row) which describe
+    the problems.  At most <i>integer</i> errors will be reported
+    before the analysis quits.  The default value for <i>integer</i>
+    is 100.  If no errors are found, a single row with the value "ok" is
     returned.</p></li>
 
 <a name="pragma_parser_trace"></a>

Modified: freeswitch/trunk/libs/sqlite/www/sqlite.tcl
==============================================================================
--- freeswitch/trunk/libs/sqlite/www/sqlite.tcl	(original)
+++ freeswitch/trunk/libs/sqlite/www/sqlite.tcl	Thu Feb 22 17:09:42 2007
@@ -1,23 +1,23 @@
 #
 # Run this Tcl script to generate the sqlite.html file.
 #
-set rcsid {$Id: sqlite.tcl,v 1.24 2006/08/19 13:32:05 drh Exp $}
+set rcsid {$Id: sqlite.tcl,v 1.25 2007/01/08 14:31:36 drh Exp $}
 source common.tcl
-header {sqlite: A command-line access program for SQLite databases}
+header {sqlite3: A command-line access program for SQLite databases}
 puts {
-<h2>sqlite: A command-line access program for SQLite databases</h2>
+<h2>sqlite3: A command-line access program for SQLite databases</h2>
 
 <p>The SQLite library includes a simple command-line utility named
-<b>sqlite</b> that allows the user to manually enter and execute SQL
+<b>sqlite3</b> that allows the user to manually enter and execute SQL
 commands against an SQLite database.  This document provides a brief
-introduction on how to use <b>sqlite</b>.
+introduction on how to use <b>sqlite3</b>.
 
 <h3>Getting Started</h3>
 
-<p>To start the <b>sqlite</b> program, just type "sqlite" followed by
+<p>To start the <b>sqlite3</b> program, just type "sqlite3" followed by
 the name the file that holds the SQLite database.  If the file does
 not exist, a new one is created automatically.
-The <b>sqlite</b> program will
+The <b>sqlite3</b> program will
 then prompt you to enter SQL.  Type in SQL statements (terminated by a
 semicolon), press "Enter" and the SQL will be executed.</p>
 
@@ -39,8 +39,8 @@
 }
 
 Code {
-$ (((sqlite ex1)))
-SQLite version 2.0.0
+$ (((sqlite3 ex1)))
+SQLite version 3.3.10
 Enter ".help" for instructions
 sqlite> (((create table tbl1(one varchar(10), two smallint);)))
 sqlite> (((insert into tbl1 values('hello!',10);)))
@@ -52,13 +52,13 @@
 }
 
 puts {
-<p>You can terminate the sqlite program by typing your systems
+<p>You can terminate the sqlite3 program by typing your system's
 End-Of-File character (usually a Control-D) or the interrupt
 character (usually a Control-C).</p>
 
 <p>Make sure you type a semicolon at the end of each SQL command!
-The sqlite looks for a semicolon to know when your SQL command is
-complete.  If you omit the semicolon, sqlite will give you a
+The sqlite3 program looks for a semicolon to know when your SQL command is
+complete.  If you omit the semicolon, sqlite3 will give you a
 continuation prompt and wait for you to enter more text to be
 added to the current SQL command.  This feature allows you to
 enter SQL commands that span multiple lines.  For example:</p>
@@ -85,8 +85,8 @@
 }
 
 Code {
-$ (((sqlite ex1)))
-SQlite vresion 2.0.0
+$ (((sqlite3 ex1)))
+SQLite version 3.3.10
 Enter ".help" for instructions
 sqlite> (((select * from sqlite_master;)))
     type = table
@@ -114,13 +114,13 @@
 "sqlite_temp_master" table is temporary itself.
 </p>
 
-<h3>Special commands to sqlite</h3>
+<h3>Special commands to sqlite3</h3>
 
 <p>
-Most of the time, sqlite just reads lines of input and passes them
+Most of the time, sqlite3 just reads lines of input and passes them
 on to the SQLite library for execution.
 But if an input line begins with a dot ("."), then
-that line is intercepted and interpreted by the sqlite program itself.
+that line is intercepted and interpreted by the sqlite3 program itself.
 These "dot commands" are typically used to change the output format
 of queries, or to execute certain prepackaged query statements.
 </p>
@@ -132,27 +132,36 @@
 
 Code {
 sqlite> (((.help)))
+.bail ON|OFF           Stop after hitting an error.  Default OFF
 .databases             List names and files of attached databases
-.dump ?TABLE? ...      Dump the database in a text format
+.dump ?TABLE? ...      Dump the database in an SQL text format
 .echo ON|OFF           Turn command echo on or off
 .exit                  Exit this program
 .explain ON|OFF        Turn output mode suitable for EXPLAIN on or off.
 .header(s) ON|OFF      Turn display of headers on or off
 .help                  Show this message
+.import FILE TABLE     Import data from FILE into TABLE
 .indices TABLE         Show names of all indices on TABLE
-.mode MODE             Set mode to one of "line(s)", "column(s)", 
-                       "insert", "list", or "html"
-.mode insert TABLE     Generate SQL insert statements for TABLE
-.nullvalue STRING      Print STRING instead of nothing for NULL data
+.load FILE ?ENTRY?     Load an extension library
+.mode MODE ?TABLE?     Set output mode where MODE is one of:
+                         csv      Comma-separated values
+                         column   Left-aligned columns.  (See .width)
+                         html     HTML <table> code
+                         insert   SQL insert statements for TABLE
+                         line     One value per line
+                         list     Values delimited by .separator string
+                         tabs     Tab-separated values
+                         tcl      TCL list elements
+.nullvalue STRING      Print STRING in place of NULL values
 .output FILENAME       Send output to FILENAME
 .output stdout         Send output to the screen
 .prompt MAIN CONTINUE  Replace the standard prompts
 .quit                  Exit this program
 .read FILENAME         Execute SQL in FILENAME
 .schema ?TABLE?        Show the CREATE statements
-.separator STRING      Change separator string for "list" mode
+.separator STRING      Change separator used by output mode and .import
 .show                  Show the current values for various settings
-.tables ?PATTERN?      List names of tables matching a pattern
+.tables ?PATTERN?      List names of tables matching a LIKE pattern
 .timeout MS            Try opening locked tables for MS milliseconds
 .width NUM NUM ...     Set column widths for "column" mode
 sqlite> 
@@ -161,8 +170,9 @@
 puts {
 <h3>Changing Output Formats</h3>
 
-<p>The sqlite program is able to show the results of a query
-in five different formats: "line", "column", "list", "html", and "insert".
+<p>The sqlite3 program is able to show the results of a query
+in eight different formats: "csv", "column", "html", "insert",
+"line", "list", "tabs", and "tcl".
 You can use the ".mode" dot command to switch between these output
 formats.</p>
 
@@ -287,7 +297,7 @@
 }
 
 puts {
-<p>The last output mode is "html".  In this mode, sqlite writes
+<p>The last output mode is "html".  In this mode, sqlite3 writes
 the results of the query as an XHTML table.  The beginning
 &lt;TABLE&gt; and the ending &lt;/TABLE&gt; are not written, but
 all of the intervening &lt;TR&gt;s, &lt;TH&gt;s, and &lt;TD&gt;s
@@ -298,7 +308,7 @@
 puts {
 <h3>Writing results to a file</h3>
 
-<p>By default, sqlite sends query results to standard output.  You
+<p>By default, sqlite3 sends query results to standard output.  You
 can change this using the ".output" command.  Just put the name of
 an output file as an argument to the .output command and all subsequent
 query results will be written to that file.  Use ".output stdout" to
@@ -319,7 +329,7 @@
 puts {
 <h3>Querying the database schema</h3>
 
-<p>The sqlite program provides several convenience commands that
+<p>The sqlite3 program provides several convenience commands that
 are useful for looking at the schema of the database.  There is
 nothing that these commands do that cannot be done by some other
 means.  These commands are provided purely as a shortcut.</p>
@@ -336,16 +346,19 @@
 }
 
 puts {
-<p>The ".tables" command is the same as setting list mode then
+<p>The ".tables" command is similar to setting list mode then
 executing the following query:</p>
 
 <blockquote><pre>
-SELECT name FROM sqlite_master WHERE type='table' 
-UNION ALL SELECT name FROM sqlite_temp_master WHERE type='table'
-ORDER BY name;
+SELECT name FROM sqlite_master 
+WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%'
+UNION ALL 
+SELECT name FROM sqlite_temp_master 
+WHERE type IN ('table','view') 
+ORDER BY 1
 </pre></blockquote>
 
-<p>In fact, if you look at the source code to the sqlite program
+<p>In fact, if you look at the source code to the sqlite3 program
 (found in the source tree in the file src/shell.c) you'll find
 exactly the above query.</p>
 
@@ -395,16 +408,27 @@
 SELECT sql FROM
    (SELECT * FROM sqlite_master UNION ALL
     SELECT * FROM sqlite_temp_master)
-WHERE tbl_name LIKE '%s' AND type!='meta'
-ORDER BY type DESC, name
+WHERE type!='meta' AND sql NOT NULL AND name NOT LIKE 'sqlite_%'
+ORDER BY substr(type,2,1), name
 </pre></blockquote>
 
-<p>The <b>%s</b> in the query above is replaced by the argument
-to ".schema", of course.  Notice that the argument to the ".schema"
-command appears to the right of an SQL LIKE operator.  So you can
-use wildcards in the name of the table.  For example, to get the
-schema for all tables whose names contain the character string
-"abc" you could enter:</p>}
+<p>
+You can supply an argument to the .schema command.  If you do, the
+query looks like this:
+</p>
+
+<blockquote><pre>
+SELECT sql FROM
+   (SELECT * FROM sqlite_master UNION ALL
+    SELECT * FROM sqlite_temp_master)
+WHERE tbl_name LIKE '%s'
+  AND type!='meta' AND sql NOT NULL AND name NOT LIKE 'sqlite_%'
+ORDER BY substr(type,2,1), name
+</pre></blockquote>
+
+<p>The "%s" in the query is replaced by your argument.  This allows you
+to view the schema for some subset of the database.</p>
+}
 
 Code {
 sqlite> (((.schema %abc%)))
@@ -436,13 +460,13 @@
 
 <p>Use the ".dump" command to convert the entire contents of a
 database into a single ASCII text file.  This file can be converted
-back into a database by piping it back into <b>sqlite</b>.</p>
+back into a database by piping it back into <b>sqlite3</b>.</p>
 
 <p>A good way to make an archival copy of a database is this:</p>
 }
 
 Code {
-$ (((echo '.dump' | sqlite ex1 | gzip -c >ex1.dump.gz)))
+$ (((echo '.dump' | sqlite3 ex1 | gzip -c >ex1.dump.gz)))
 }
 
 puts {
@@ -452,37 +476,18 @@
 }
 
 Code {
-$ (((zcat ex1.dump.gz | sqlite ex2)))
+$ (((zcat ex1.dump.gz | sqlite3 ex2)))
 }
 
 puts {
-<p>The text format used is the same as used by
-<a href="http://www.postgresql.org/">PostgreSQL</a>, so you
+<p>The text format is pure SQL so you
 can also use the .dump command to export an SQLite database
-into a PostgreSQL database.  Like this:</p>
+into other popular SQL database engines.  Like this:</p>
 }
 
 Code {
 $ (((createdb ex2)))
-$ (((echo '.dump' | sqlite ex1 | psql ex2)))
-}
-
-puts {
-<p>You can almost (but not quite) go the other way and export
-a PostgreSQL database into SQLite using the <b>pg_dump</b> utility.
-Unfortunately, when <b>pg_dump</b> writes the database schema information,
-it uses some SQL syntax that SQLite does not understand.
-So you cannot pipe the output of <b>pg_dump</b> directly 
-into <b>sqlite</b>.
-But if you can recreate the
-schema separately, you can use <b>pg_dump</b> with the <b>-a</b>
-option to list just the data
-of a PostgreSQL database and import that directly into SQLite.</p>
-}
-
-Code {
-$ (((sqlite ex3 <schema.sql)))
-$ (((pg_dump -a ex2 | sqlite ex3)))
+$ (((sqlite3 ex1 .dump | psql ex2)))
 }
 
 puts {
@@ -521,33 +526,33 @@
 
 puts {
 
-<p>The ".timeout" command sets the amount of time that the <b>sqlite</b>
+<p>The ".timeout" command sets the amount of time that the <b>sqlite3</b>
 program will wait for locks to clear on files it is trying to access
 before returning an error.  The default value of the timeout is zero so
 that an error is returned immediately if any needed database table or
 index is locked.</p>
 
 <p>And finally, we mention the ".exit" command which causes the
-sqlite program to exit.</p>
+sqlite3 program to exit.</p>
 
-<h3>Using sqlite in a shell script</h3>
+<h3>Using sqlite3 in a shell script</h3>
 
 <p>
-One way to use sqlite in a shell script is to use "echo" or
-"cat" to generate a sequence of commands in a file, then invoke sqlite 
+One way to use sqlite3 in a shell script is to use "echo" or
+"cat" to generate a sequence of commands in a file, then invoke sqlite3
 while redirecting input from the generated command file.  This
 works fine and is appropriate in many circumstances.  But as
-an added convenience, sqlite allows a single SQL command to be
+an added convenience, sqlite3 allows a single SQL command to be
 entered on the command line as a second argument after the
-database name.  When the sqlite program is launched with two
+database name.  When the sqlite3 program is launched with two
 arguments, the second argument is passed to the SQLite library
 for processing, the query results are printed on standard output
 in list mode, and the program exits.  This mechanism is designed
-to make sqlite easy to use in conjunction with programs like
+to make sqlite3 easy to use in conjunction with programs like
 "awk".  For example:</p>}
 
 Code {
-$ (((sqlite ex1 'select * from tbl1' |)))
+$ (((sqlite3 ex1 'select * from tbl1' |)))
 > ((( awk '{printf "<tr><td>%s<td>%s\n",$1,$2 }')))
 <tr><td>hello<td>10
 <tr><td>goodbye<td>20
@@ -561,17 +566,17 @@
 SQLite commands are normally terminated by a semicolon.  In the shell 
 you can also use the word "GO" (case-insensitive) or a slash character 
 "/" on a line by itself to end a command.  These are used by SQL Server 
-and Oracle, respectively.  These won't work in <b>sqlite_exec()</b>, 
+and Oracle, respectively.  These won't work in <b>sqlite3_exec()</b>, 
 because the shell translates these into a semicolon before passing them 
 to that function.</p>
 }
 
 puts {
-<h3>Compiling the sqlite program from sources</h3>
+<h3>Compiling the sqlite3 program from sources</h3>
 
 <p>
-The sqlite program is built automatically when you compile the
-sqlite library.  Just get a copy of the source tree, run
+The sqlite3 program is built automatically when you compile the
+SQLite library.  Just get a copy of the source tree, run
 "configure" and then "make".</p>
 }
 footer $rcsid

Added: freeswitch/trunk/libs/sqlite/www/typesafe.tcl
==============================================================================
--- (empty file)
+++ freeswitch/trunk/libs/sqlite/www/typesafe.tcl	Thu Feb 22 17:09:42 2007
@@ -0,0 +1,61 @@
+set rcsid {$Id: different.tcl,v 1.7 2006/05/11 13:33:15 drh Exp $}
+source common.tcl
+header {SQLite Is Typesafe}
+puts {
+<p>
+As of December 2006,
+<a href="http://en.wikipedia.org/">Wikipedia</a> defines
+<a href="http://en.wikipedia.org/wiki/Type_safety">typesafe</a>
+as a property of programming languages that detects and prevents
+"type errors".
+Wikipedia says:
+</p>
+
+<blockquote>
+The behaviors classified as type errors by any given programming language
+are generally those that result from attempts to perform on some value
+(or values) an operation that is not appropriate to its type (or their
+types). The fundamental basis for this classification is to a certain
+ extent a matter of opinion: some language designers and programmers 
+take the view that any operation not leading to program crashes, 
+security flaws or other obvious failures is legitimate and need not be 
+considered an error, while others consider any contravention of the
+programmer's intent (as communicated via typing annotations) to be
+erroneous and deserving of the label "unsafe".
+</blockquote>
+
+<p>
+We, the developers of SQLite, take the first and more liberal view of
+type safety expressed above - specifically that anything that does
+not result in a crash or security flaw or other obvious failures is
+not a type error.
+Given this viewpoint, SQLite is easily shown to be typesafe since
+almost all operations are valid and well-defined for operands of
+all datatypes, and in those few cases where an operation is only valid
+for a subset of the available datatypes (for example, inserting a
+value into an INTEGER PRIMARY KEY column), type errors are detected and
+reported at run-time prior to performing the operation.
+</p>
+
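+<p>
+A small sketch of that run-time detection using the C interface
+(the database handle "db" is hypothetical and error handling is
+omitted):
+</p>
+
+<blockquote><pre>
+sqlite3_exec(db, "CREATE TABLE t1(a INTEGER PRIMARY KEY)", 0, 0, 0);
+
+/* The value is not an integer and cannot be losslessly converted
+** into one, so SQLite reports a type error ("datatype mismatch")
+** at run-time rather than storing the value; rc will not be SQLITE_OK. */
+int rc = sqlite3_exec(db, "INSERT INTO t1 VALUES('not an integer')", 0, 0, 0);
+</pre></blockquote>
+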
+<p>
+Some commentators hold that type safety implies static typing.
+SQLite uses dynamic typing and thus cannot be typesafe in the eyes
+of those who believe that only a statically typed language can be
+typesafe.  But we believe that static typing is a distinct property
+from type safety.  To quote again from the
+<a href="http://en.wikipedia.org/wiki/Type_safety">wikipedia</a>:
+</p>
+
+<blockquote>
+[T]ype safety and dynamic typing are not mutually exclusive. A 
+dynamically typed language can be seen as a statically-typed language
+with a very permissive type system under which any syntactically 
+correct program is well-typed; as long as its dynamic semantics 
+ensures that no such program ever "goes wrong" in an appropriate 
+sense, it satisfies the definition above and can be called type-safe.
+</blockquote>
+
+
+
+
+}


