Re: DBF<-->SQLite Exporter
I think you didn't understand.
It is about the new datatypes in DBF fields. Before, there were only C, L, N, M and D, but now we have a few more. The auto-increment type is "+" and is very useful.
What I wanted to say is that the DBF<-->SQLite Exporter utility does not know this new "+" datatype, so it reports an error ("type not recognized" or something like that...)
I was asking whether it would be possible for you to update your nice utility to understand the new DBF datatypes. Do you think it is possible? I like your utility and I think it could support these new datatypes...
By the way, I am considering trying some special features of SQLite:
1) Integration with ICU, with localization features so that "A" and "á" are found as the same thing: http://www.sqlite.org/src/artifact?ci=t ... README.txt
(also "a", "ã", "Â", "À", "â", etc., and the same for "o", "O", "ó", etc.)
2) Integration with SQLCipher: http://sqlcipher.net/
(I remember you succeeded with this one, if I am not wrong.)
3) The FTS (Full Text Search) feature: http://www.sqlite.org/fts3.html
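Until the ICU extension from item 1 is compiled in, the same accent-folding idea can be approximated with a custom SQL function. A minimal Python sketch (the `unaccent` function name, the table and the sample data are my own invention, not part of SQLite or the exporter):

```python
import sqlite3
import unicodedata

def unaccent(text):
    """Strip combining accents so 'á', 'Â', 'ã' all fold to 'a'/'A'."""
    if text is None:
        return None
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

conn = sqlite3.connect(":memory:")
conn.create_function("unaccent", 1, unaccent)
conn.execute("CREATE TABLE table1 (field1 TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?)",
                 [("São Paulo",), ("coração",), ("core dump",)])

# Case- and accent-insensitive search: fold both sides before comparing.
rows = conn.execute(
    "SELECT field1 FROM table1 "
    "WHERE lower(unaccent(field1)) LIKE lower(unaccent(?))",
    ("%CORAÇAO%",)).fetchall()
print(rows)   # the accented row matches despite the mixed accents in the pattern
```

This only does per-character folding; the real ICU extension also gives proper locale-aware collation for ORDER BY, which a trick like this does not.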
I think SQLite is very good, and what I like most is its simplicity (zero configuration), a single file for all tables and, of course, that it is blazing fast. A query like this one on a big database is super fast:
SELECT * FROM table1 WHERE field1 LIKE '%core%' OR field2 LIKE '%core%' OR field3 LIKE '%core%' ORDER BY field4
The above query looks for the word "core" in several fields, in any position within the field, way faster than using DBFNSX with OrdWildSeek().
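For anyone who wants to try it, here is a small self-contained Python script running the same kind of query (table and field names taken from the query above, the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE table1
                (field1 TEXT, field2 TEXT, field3 TEXT, field4 TEXT)""")
conn.executemany(
    "INSERT INTO table1 VALUES (?, ?, ?, ?)",
    [("hardcore", "none",      "none",  "b"),
     ("none",     "multicore", "none",  "a"),
     ("none",     "none",      "other", "c")])

# 'core' is matched anywhere inside the field, in any of the three columns.
rows = conn.execute(
    "SELECT * FROM table1 "
    "WHERE field1 LIKE '%core%' OR field2 LIKE '%core%' OR field3 LIKE '%core%' "
    "ORDER BY field4").fetchall()
for row in rows:
    print(row)   # two matching rows, ordered by field4
```

Note that a pattern with a leading `%` cannot use an ordinary index, so SQLite scans the table; that is exactly the kind of workload the FTS module is designed for.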
Of course it demands a lot of research and trial and error, but a program using SQLite with the above features would be very nice. I would appreciate hearing about any previous experience with any (or all) of them.
Right now I cannot use the above query to search for the following data (I am missing the ICU component):
By the way, you can use the ICU component with DBF files as well:
2012-04-20 17:52 UTC+0200 Przemyslaw Czerpak (druzus/at/poczta.onet.pl)
* replaced UTF8ASC with new CP: UTF8EX
This CP uses character indexes instead of bytes one
and operates on unicode characters flags.
Tables for upper/lower conversions and upper/lower/alpha/digit
flags were generated automatically from http://www.unicode.org/Public/UNIDATA/UnicodeData.txt
It also uses custom collation rules. It's very simple one
level sorting based on UTF8 C collation.
If someone needs some advanced sorting rules, then it's enough
to create copy of this cp with user custom version of UTF8_cmp()
and UTF8_cmpi() functions, i.e. they can be redirected to some
external library like ICU (icu-project.org).
Also, I cannot encrypt the database, so anyone can open it and see/modify it with a simple utility, which could cause inconsistencies, not to mention that it would not protect the information in the database from being accessed by anyone (especially in a shared folder on the network).
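If the application were built against SQLCipher (item 2 above), keying the database is just a PRAGMA issued right after opening the file. A sketch (the passphrase is of course invented):

```sql
-- SQLCipher: the key must be supplied before any other operation on the connection.
PRAGMA key = 'my-secret-passphrase';
-- Any subsequent query fails if the key was wrong, e.g.:
SELECT count(*) FROM sqlite_master;
```

The file on the shared folder is then encrypted at rest, so a plain SQLite utility cannot open or modify it.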
The FTS feature would just be an extra.
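Even as an extra, FTS is easy to try if the SQLite build includes it. A minimal sketch using the FTS4 variant from the linked page (the table name and sample rows are mine; this assumes FTS4 was compiled in, which it usually is):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts4(field1, field2)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [("the core engine", "notes"),
     ("nothing here",    "another core feature"),
     ("unrelated row",   "unrelated")])

# MATCH searches all indexed columns at once -- no per-field LIKE needed.
rows = conn.execute(
    "SELECT * FROM docs WHERE docs MATCH 'core'").fetchall()
print(len(rows))   # 2
```

One difference from LIKE '%core%': FTS matches whole tokens, so 'core' does not match "hardcore"; a prefix query like 'core*' covers part of that gap.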
Thanks for your help