Tech GURU
Joined: 18 Oct 2000 Posts: 2733 Location: Atlanta, USA
Posted: Fri Aug 01, 2008 9:43 pm
[3.0] Mapper Conversion Problems
When converting my map, I had several messages pop up saying "Can not convert TStringField to a TBitMap". This map used several custom images from zMapper. I'll email the original file to you later.
_________________ Asati di tempari!
Dumas Enchanter
Joined: 11 Feb 2003 Posts: 511 Location: USA
Posted: Sun Aug 03, 2008 2:24 pm
Getting the following error:
Code:
SQL Error: near "Query": syntax error
CREATE TABLE ObjectTbl Query ([Name] VARCHAR(255), [Color] INTEGER, [Cost] INTEGER)
And it seems to pretty much hang up at that point.
Need me to email the map file?
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Tue Aug 12, 2008 10:37 pm
YES! I *need* the map files. Email the *.MDB file to sales@zuggsoft.com
Last edited by Zugg on Wed Aug 13, 2008 12:59 am; edited 1 time in total
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Wed Aug 13, 2008 12:59 am
Fixed the link to be sales@zuggsoft.com. Guess I just had "support" on my brain ;)
Tech GURU
Joined: 18 Oct 2000 Posts: 2733 Location: Atlanta, USA
Posted: Wed Aug 13, 2008 1:30 pm
On its way... as someone suggested in another forum post, it looks like the issue occurs when you have custom images for rooms created in zMapper. I was able to get to my normal rooms just fine, and my other MUD without custom room images loads with no issues.
_________________ Asati di tempari!
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Wed Aug 13, 2008 4:44 pm
Dumas: I got your file, thanks.
And yes, I think there is definitely an issue with custom images. I need to figure out how to convert the raw binary blob fields used for the images in ADO to the proper field type in SQLite.
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Mon Aug 18, 2008 8:21 pm
OK, I think I have this fixed for the next version.
Geez Dumas, that is a *huge* map file you sent me. Probably the biggest I have seen. It takes forever to convert ;) I tried to improve the conversion speed, but it seems that SQLite just gets slower and slower as you do bulk INSERT commands. Even separating them into transactions didn't seem to help much. So I guess when you have an 80 MB map file (after running the Compress Map in zMUD) it's just going to take a while to convert it. At least it should only need to be done once.
Btw, when I release the 3.01 preview at the end of this week, you'll need to delete *any* existing *.DBM map files that were created by the 3.00 version. Sorry for the inconvenience, but it just wasn't worth all of the effort to try and convert the "bad" files from 3.00 into correct files for 3.01. It involved too much difficulty with blob fields. Guess that's just the price to be paid for alpha-testing the new map converter. Sorry.
Dumas Enchanter
Joined: 11 Feb 2003 Posts: 511 Location: USA
Posted: Mon Aug 18, 2008 9:15 pm
I don't know if this is already possible, but is there any way to dynamically load maps as you walk? The main reason my Achaea map is so large is that I have a large amount of the game's wilderness mapped (and believe me, I've only mapped a fraction of it). It would be nice if I could leave that in a separate map file, have it load only when I enter the wilderness (or by using an event command), and then have the main map reload when I return to it.
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Mon Aug 18, 2008 10:20 pm
No, unfortunately not. In the old days, zMUD would only have a single "zone" in memory at once. But that ended up being way too slow. So now the design requires the entire map to be loaded into memory. The problem with any dynamic loading is that the mapper would not be able to speedwalk into zones that are not loaded. And the load time when you enter the wilderness area and swap maps would be too long for most people.
Once I get your map converted into SQLite, I will look more into what can be done to improve this. Especially since handling large wilderness areas is one of the goals in the new mapper.
I ended up spending most of this afternoon looking for various optimizations for SQLite that would speed up the conversion. The problem is that it gets exponentially slower the more SQL INSERT statements are done. When I first start converting your map, it is inserting about 400 records per second. At the half-way point, it is only inserting 10 records a second. Since it gets exponentially slower and slower, I actually haven't yet managed to convert your file.
I am using the following PRAGMA statements for SQLite:
PRAGMA temp_store=MEMORY
PRAGMA count_changes=OFF
PRAGMA cache_size=200000
PRAGMA synchronous=OFF
PRAGMA locking_mode=EXCLUSIVE
and I am doing 800 INSERT statements between each BEGIN TRANSACTION and COMMIT TRANSACTION.
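For anyone who wants to experiment with this batching approach outside of CMUD, here is a minimal sketch using Python's built-in sqlite3 module. The mapper itself uses Delphi/ZeosLib, and the ObjectTbl schema here is just the simplified one from Dumas's error message, so treat this purely as an illustration of the PRAGMAs plus transactions-of-800-INSERTs pattern:

```python
import sqlite3

# isolation_level=None puts the driver in autocommit mode so we can
# issue BEGIN/COMMIT ourselves, mirroring the conversion loop described above.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()

# The PRAGMA settings from the post (count_changes is a no-op in modern SQLite)
for pragma in ("temp_store=MEMORY", "count_changes=OFF", "cache_size=200000",
               "synchronous=OFF", "locking_mode=EXCLUSIVE"):
    cur.execute("PRAGMA " + pragma)

cur.execute("CREATE TABLE ObjectTbl ([Name] VARCHAR(255), [Color] INTEGER, [Cost] INTEGER)")

BATCH = 800  # INSERTs per transaction, as in the post
rows = [("Room %d" % i, i % 16, i) for i in range(4000)]

for start in range(0, len(rows), BATCH):
    cur.execute("BEGIN TRANSACTION")
    cur.executemany("INSERT INTO ObjectTbl VALUES (?,?,?)", rows[start:start + BATCH])
    cur.execute("COMMIT TRANSACTION")

cur.execute("SELECT COUNT(*) FROM ObjectTbl")
print(cur.fetchone()[0])  # prints 4000
```

Run this way, SQLite handles the inserts essentially instantly, which is consistent with the finding later in this thread that SQLite itself was never the bottleneck.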
But what is really weird is that the "cache_size" is supposed to determine how much memory SQLite can use for its cache. And no matter what I set it to, the Windows memory usage of CMUD remains the same. So it doesn't seem to be having the effect that it should.
At the point that I'm inserting this data, there are no indexes or anything. So I really don't understand why it is slowing down so much and why I can't force it to use more memory. Maybe there is a compiled maximum cache size in SQLite that I'm already hitting or something.
But this is really irritating. I wasn't going to worry about this until I noticed the exponential slowdown problem. It wouldn't bother me if it took 5-10 minutes to convert a map of this size. But at this rate it's going to take hours to convert, and that's not acceptable.
If anyone has any other suggestions on how to improve this SQLite performance, let me know. I'd hate to think that I picked the wrong database for this and that it's not going to handle big maps at all.
Btw, Dumas: How long does the current CMUD 2.36 take to load your existing *.MDB map database?
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Mon Aug 18, 2008 10:50 pm
Looks like it takes about 30 minutes to convert your file. Of course, I had a bug in my new routine for adding the indexes, so it crashed after 30 minutes and I'll have to try it again :(
While it was converting, I played around with the SQLite Admin program and didn't have any speed problems with it at all. I ran an SQL statement with 1000 INSERT commands over and over again, with the database well over 10 MB in size, and didn't get any slowdown at all.
Now, I know that SQLite Admin is written in Delphi and uses the same ZeosLib routines that I do. So I need to figure out what they are doing differently than I am.
It's almost like something is causing the entire database to get reloaded after every batch of SQL INSERT statements or something. Very very strange. But I would really like to figure this out!
Dumas Enchanter
Joined: 11 Feb 2003 Posts: 511 Location: USA
Posted: Mon Aug 18, 2008 11:10 pm
Zugg wrote:
Btw, Dumas: How long does the current CMUD 2.36 take to load your existing *.MDB map database?
10 seconds or so, with exits being the longest wait.
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Mon Aug 18, 2008 11:11 pm
OMG this is ridiculous!!
OK, so when I was doing some testing, I actually commented out the line that actually executes the SQL statement. So now with NO DATABASE ACCESS, it still takes the same SLOW amount of time!
In tracking this down, apparently this is all the fault of the stupid Microsoft ADO/MDAC stuff. Just the simple loop of:
ADOTable.Open
while not ADOTable.EOF do
ADOTable.Next
takes exponentially longer and longer to move to the next record in the dataset. I'm going to try bypassing the Delphi TDataset stuff and access the ADO record set directly to see if that helps.
So none of this speed problem has anything to do with SQLite. All of the conversion speed is actually the slowness of ADO. It's going to be REALLY NICE to get rid of this ADO crap!
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Mon Aug 18, 2008 11:28 pm
Woot! That did it. So it wasn't all ADO's fault. It was actually the fault of the Delphi ADOTable object. The slow code was this:
Code:
ADOTable.Open;
while not(ADOTable.Eof) do begin
  ...
  ADOTable.Next;
end;
The fast code is this:
Code:
rs: _RecordSet;
...
ADOTable.Open;
rs := ADOTable.Recordset;
while not(rs.Eof) do begin
  ...
  rs.MoveNext;
end;
With this new code, I am able to convert your database in less than a minute! So this was all very good to learn.
Now I'm getting to the point of actually *loading* your map, and that part is still slow. I honestly don't know how you have suffered with this huge map file all this time. But in any case, the code that I'm using to load the map database into memory now seems to be the bottleneck.
In any case, your huge file has been a great test case so far!
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Mon Aug 18, 2008 11:48 pm
Btw, just so people know what size file I am talking about here, I gathered some statistics:
Old ADO *.MDB filesize: 87 MB
New SQLite3 filesize: 82 MB
Number of rooms: 60,190
Number of exits: 409,910
Conversion time: 82 seconds
It's still taking too long to load. It's the initial Dataset.Open that seems to be causing the slowdown, so I need to look more into that.
Once I got it loaded it seemed to display just fine. I took a look at the Wilderness zone and noticed that only a fraction of it is actually mapped. Just the main paths between areas. So I can see that I'm going to need to come up with a new design to handle mapping large wilderness areas, especially with any new tile-based graphical mapping. The current room/exits record structure just isn't efficient for this task.
Tech GURU
Joined: 18 Oct 2000 Posts: 2733 Location: Atlanta, USA
Posted: Tue Aug 19, 2008 12:25 am
Glad to hear you found one major bottleneck. I'm excited to see what you do with the rest of the stuff.
At the risk of sounding like a fanboy, you sling some pretty mean code. I'd like to think I'm pretty good, but I don't think I could have developed z/CMUD to the extent you did. It's a huge amount of code to process and work with. Keep up the good work. Can't wait to see what you come up with next.
_________________ Asati di tempari!
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Tue Aug 19, 2008 12:47 am
Well, I located the loading problem. In the mapper I have two datasets. One dataset is used to load and cache the data needed to actually draw the map. This only loads certain fields from the database. In particular, it doesn't need to load the Description field for the rooms (since that isn't used to draw the map).
The second dataset does a "SELECT * FROM ObjectTbl" and is used to display the room properties dialog, and is used as the "master" dataset for locating rooms, updating rooms, etc. All of the %roomXXX scripting functions use this dataset, for example. (A third dataset can sometimes also be used when %mapfilter applies a set of filters to the normal dataset).
Anyway, it's the second dataset (the SELECT * FROM) that is slow. If I change it to "SELECT * FROM ObjectTbl LIMIT 10" then the rest of the map will load and display just fine (Load time about 14 seconds total).
So, I'm going to need to do a bit more redesign than I expected. I can change the Room Properties dialog so that it doesn't need to access the entire database (simply using an SQL parameter for the room id to be displayed in the dialog). But the second dataset is also used by all of the scripting functions, and that's a lot more work to change. In the best scenario, I probably need to change everything to use raw SQL instead of using the Delphi Dataset stuff.
For example, normally when using the Delphi Dataset if you want to fetch a particular room name, you just do this:
Code:
if ObjectTbl.Locate( 'ObjId', ID, []) then Result := ObjectTblName.Value
However, that requires that the ObjectTbl dataset has the entire database loaded into memory. In plain SQL, this would just look something like
Code:
rs := SQLQuery( 'SELECT Name FROM ObjectTbl WHERE ObjId = '+IntToStr(ID));
Result := rs.Fields[0].Value
Tonight I need to think more about whether I want to do this or if there is an alternative. The nice thing about using the TDataset stuff in Delphi is that it isolates me from the detailed database driver stuff. If I change this to raw SQL then it might be a lot harder to use other databases in the future (like for any sort of remote map database server).
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Tue Aug 19, 2008 9:30 pm
I spent most of today trying to get bitmap (and other binary data) converting properly. There was a problem in my ADO conversion that wasn't handling blob fields properly.
For better code compatibility, I decided to use a parameterized SQL INSERT statement instead of creating the string myself. So now I do:
SQL := 'INSERT INTO Table (field1,field2,...) VALUES (:p1,:p2,...)'
and then loop through the fields and do:
Query.Params[I].Value := rs.Fields[I].Value
for most fields, but for blob fields I can use Query.Params[I].SetBlobData. Internally this seems to encode the data with NULLs being %0. But this ensures that the data is encoded the way the internal SQLite database expects it, so I don't have to worry about the encoding myself.
This is a bit slower than building a giant SQL string of 1000 INSERT statements all at once. But it's still not too bad. The speed of converting Dumas's huge map went from about 80 seconds to 130 seconds. So just over 2 minutes. And that's not too bad for a one-time conversion of a huge map file.
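As a rough illustration of why the parameterized form matters for blobs, here is the same idea sketched in Python's sqlite3 (the actual mapper code is Delphi/ZeosLib using SetBlobData, and the Image column here is hypothetical): binary data containing NUL bytes round-trips cleanly through a bound parameter, where a hand-built SQL string would be truncated or corrupted.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical simplified schema; the real ObjectTbl has many more fields
cur.execute("CREATE TABLE ObjectTbl (ObjId INTEGER PRIMARY KEY, Name TEXT, Image BLOB)")

# Bitmap-like data containing NUL bytes -- exactly what breaks string-built SQL
bitmap = b"BM\x00\x01\x02\x00\xffpixels"

# The driver handles the blob encoding; no manual escaping needed
cur.execute("INSERT INTO ObjectTbl (Name, Image) VALUES (?, ?)", ("Custom Room", bitmap))
conn.commit()

cur.execute("SELECT Image FROM ObjectTbl WHERE Name = ?", ("Custom Room",))
assert cur.fetchone()[0] == bitmap  # round-trips byte-for-byte
```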
With this change, the mapper is correctly drawing room bitmaps and meta shapes now.
Now that the conversion and display seem to be working better, it's time to figure out how to redesign the scripting stuff so that I don't need to load the entire database into memory. What I am thinking of doing is to have %mapfilter create a dataset like this:
SELECT ObjId FROM ObjectTbl WHERE FilterString
where "FilterString" is the string given by %mapfilter. This is a bit different than in zMUD where the string in %mapfilter is an ADO filter string instead of an SQL WHERE string. But this change will actually make the syntax in CMUD a bit more consistent. It also actually allows much more powerful queries to be done to the map.
So if %mapfilter creates a dataset that just contains the list of ObjId values (which is fast), then when you use a function like %roomname I can just set up a second dataset with a parameterized query like:
SELECT * FROM ObjectTbl WHERE ObjID=:id
and assign the id parameter to the value from the %mapfilter dataset that I want to query. This loads the full record for just the room that I am interested in, rather than keeping all of the data in the dataset.
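The two-query design described above can be sketched like this (Python sqlite3 purely for illustration; the real code uses Delphi datasets, and the Cost/Description fields are stand-ins for whatever a %mapfilter condition might reference):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ObjectTbl (ObjId INTEGER PRIMARY KEY, Name TEXT, "
            "Cost INTEGER, Description TEXT)")
cur.executemany("INSERT INTO ObjectTbl (Name, Cost, Description) VALUES (?,?,?)",
                [("Temple", 0, "A quiet temple."),
                 ("Market", 5, "A busy market."),
                 ("Gate", 2, "The city gate.")])

# %mapfilter equivalent: fetch only the matching ObjId values (cheap)
filter_string = "Cost > 0"  # in the new design this is an SQL WHERE fragment
ids = [row[0] for row in
       cur.execute("SELECT ObjId FROM ObjectTbl WHERE " + filter_string)]

# %roomname equivalent: a parameterized query loads one full record on demand
def room_name(obj_id):
    cur.execute("SELECT Name FROM ObjectTbl WHERE ObjId = ?", (obj_id,))
    return cur.fetchone()[0]

print(len(ids), [room_name(i) for i in ids])  # prints: 2 ['Market', 'Gate']
```

The id list stays small no matter how big the map is, and the wide rows (with descriptions) are only ever loaded one at a time.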
I can also use this for the Room Properties display, again by just setting the id parameter to the desired value. Eventually I plan to convert the room properties to a non-database form, but right now all of the fields point to a dataset and I want to get that working first before I tear it apart completely.
Probably more detail than anyone is interested in, but it's good for me to think out loud sometimes.
Tech GURU
Joined: 18 Oct 2000 Posts: 2733 Location: Atlanta, USA
Posted: Tue Aug 19, 2008 10:28 pm
What happens if you don't use a %mapfilter? I almost never do (granted, my maps are much smaller). Won't you then have the same problem for properties?
Would you still load the map into memory, or would you use a delayed-load approach, i.e. load the map's initial zone plus any rooms needed for immediate access by the user (through the UI, speedwalk, etc.)? Is it any slower or faster to load the master dataset via SQL and then assign the appropriate values to your dataset (if that's even possible)?
And while it's probably getting ahead of your thought process, how would tile-based mapping handle non-standard exits like in/out, enter cave, eat corn, etc.? Those could probably be done the same way as today. But what about more exotic exits (I think one fellow mentioned nne 'north northeast' and use 'up southeast')? It might have just been the interface, but I always got the impression that we could only have 10 exits (9 cardinal and the extra other).
I just checked; it seems I was wrong about that.
My brain's frazzled from work and I feel like I'm rambling now, so that's it for now.
_________________ Asati di tempari!
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Tue Aug 19, 2008 11:22 pm
All of the room and exit data needed to display the map is always loaded fully into memory (that's what the "Loading Rooms..." and "Loading Exits..." progress bars are for).
The problem I need to solve is that the current mapper also has a "SELECT * FROM ObjectTbl" dataset that loads *all* data into memory (which includes the long Description fields of the rooms). With ADO, I had a trick to prevent this from loading the entire database into memory by using "Server-based" Cursors in the dataset. And I used this dataset to locate rooms when I needed to fetch stuff like the Description field, and also used this dataset to store any new values or changes to the map (the dataset used to display the map is ReadOnly for improved speed).
All of the %roomXXX functions access this full dataset. That allows you to use %roomdesc to get the room description (for example). It lets you loop through all of the rooms using #LOOP 1,%numrooms to access properties of any room in the database.
The %mapfilter function added a "filter" to this "SELECT *" dataset so that the %roomXXX functions would only access the subset of the map data that matched the filter. Basically, %numrooms is fetching the Dataset.RecordCount value. So it was still using the "SELECT * FROM ObjectTbl" but then applying a Filter to the results.
Horribly inefficient!
As I mentioned above, I think I can implement this better by making %mapfilter just do a "SELECT ObjID FROM ObjectTbl WHERE FilterString". Then %numrooms would still return this Dataset.RecordCount value. But then when you use a %roomXXX function, it would fetch the ObjID from this Dataset and then use that in a second dataset that just does "SELECT * FROM ObjectTbl WHERE id=:id".
With this implementation, the large fields like the room description would only be loaded for a single room. It would never load the entire set of descriptions into memory at once.
If you don't use %mapfilter then it just does a "SELECT ObjID FROM ObjectTbl" to fetch the list of ID values for all of the rooms. But this is fast. It's only the "SELECT * FROM ObjectTbl" stuff that I need to get rid of since that loads *all* of the data into memory.
As for non-standard exits, you can have as many as you want for each room. And yes, that will still be allowed for tile-based mapping. What I'm thinking of doing for wilderness maps is adding a flag to a room that says "Has all standard exits" which would automatically allow the rooms nearby to be linked without actually creating and storing the link records for all of those exits. In other words, instead of storing a record in the LinkTbl for n,s,e,w,ne,se,nw,sw it would just query the ObjectTbl for the room in the desired direction (for example, X+1,Y for the room to the east).
Then, I'll set up a "Walls" array for each room that can be used to determine which standard exits are not linked. I need a Walls array anyway in order to allow colored walls, which is another feature desired for the new mapper (allowing each wall in the room to be a different color/texture).
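The "has all standard exits" idea can be sketched as a coordinate lookup instead of a stored link record. This is only a hypothetical illustration of the design (Python sqlite3; the direction-to-offset convention and the index name are my assumptions, not anything from the mapper):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ObjectTbl (ObjId INTEGER PRIMARY KEY, X INTEGER, Y INTEGER)")
cur.execute("CREATE INDEX idx_xy ON ObjectTbl (X, Y)")  # coordinate lookups need an index

# A 3x3 wilderness grid: no LinkTbl records are stored at all
cur.executemany("INSERT INTO ObjectTbl (X, Y) VALUES (?,?)",
                [(x, y) for x in range(3) for y in range(3)])

# Offsets for the 8 standard compass directions (assumed axis convention:
# north decreases Y, east increases X)
DIRS = {"n": (0, -1), "s": (0, 1), "e": (1, 0), "w": (-1, 0),
        "ne": (1, -1), "se": (1, 1), "nw": (-1, -1), "sw": (-1, 1)}

def neighbor(x, y, direction):
    """Find the adjoining room purely by coordinates, with no exit record."""
    dx, dy = DIRS[direction]
    cur.execute("SELECT ObjId FROM ObjectTbl WHERE X = ? AND Y = ?", (x + dx, y + dy))
    row = cur.fetchone()
    return row[0] if row else None

print(neighbor(1, 1, "e"))   # the room at (2, 1)
print(neighbor(0, 0, "nw"))  # prints None: off the map edge, so no implicit exit
```

A per-room Walls array, as described above, would then veto individual directions without ever materializing the eight link records.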
Anyway, my brain is completely frazzled now too, so I'm going to break for dinner. I'm still in the middle of the conversion to this new way of handling all of the data sets so it will be at least another day before I can really test it all. It's a lot of editing changes, and will require testing almost every feature and function in the mapper to make sure nothing is broken. So, unfortunately, I expect the next 3.01 version to be pretty buggy. This is just a massive change to the internal mapper design. But it should all work out better in the end.
Vijilante SubAdmin
Joined: 18 Nov 2001 Posts: 5182
Posted: Wed Aug 20, 2008 2:59 am
I am quite certain that the single query method will be faster. Also, querying again for the specific data field needed will be nearly as fast as the full memory load. My experience with a simple COM-based query from CMud shows ADO's response time around 15ms. I am going to guess that 2/3rds of that is the CMud parse, COM, and synchronization. So I would expect any relatively simple SQL query done through a direct linkage to be under 5ms on my system. It is definitely a good choice to move the %roomxxx functions to a load-on-demand design.
One of the things I noticed with CMud since I started directly querying the map was that the %roomxxx functions weren't just slower, they were massively slower, and with a significant loop an exponential curve could be noticed. I always figured this was because each call to a %room function required synchronization into the main thread. I am wondering if there is any way you can smooth out those functions to have a nice linear progression with looped calls?
_________________ The only good questions are the ones we have never answered before.
Search the Forums
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Wed Aug 20, 2008 5:30 pm
I think the slowness of the %roomxxx functions within a loop is related to the same slowness with ADO that I found when I was doing the loop to copy the data into the SQLite database. The %roomxxx functions all use the same Delphi ADODataset.Next routine that had the exponential increase in time that I saw above.
So, I am very hopeful that the new SQLite system won't have this problem. Using the load-on-demand design should also make it more scalable.
The %roomxxx functions *do* require synchronization with the main thread because they have the possibility to change the map database and not just query it. I might be able to improve this in the future so that a simple query does not require the synchronization. But that shouldn't be the cause of any exponential slowdown behavior.
Rainchild Wizard
Joined: 10 Oct 2000 Posts: 1551 Location: Australia
Posted: Thu Aug 21, 2008 12:46 am
SELECT * is always a resource-hungry thing to do, as any DBA will tell you. Programmers (myself included, though I'm trying to train myself out of the habit) tend to think of a database as a "bag of holding": stick as much stuff in there as you want, and get it back instantaneously any time you want. That's the case if you have 100 records, but not if you have 100,000. And you should always have a WHERE clause; there's very seldom any reason why a user would need to see all 100,000 records at once.
I'm not sure how zeos/sqlite works, but MSSQL can "prepare" (compile) a parameterized query to speed up subsequent operations (including inserts) which can help when you're doing a lot of the same thing - bulk insert, or just repeatedly using the same query with different data.
Anyway, on to my actual point/question... slow/safe walk matches the room name or the first line of the description, so you might want to have a compiled query which only fetches those two columns rather than doing a select * where room = :id. Also (and this is an MSSQL optimization, so it may not apply to sqlite), in MSSQL the large "MEMO" and "BLOB" fields are actually stored in a different part of the database from the rest of the row, which means there's an extra cost because the hard drive has to perform a seek operation to read the data from somewhere else. So if you do a select *, you're actually doing multiple seeks, since it has to retrieve multiple MEMO/BLOB objects.
Also, if it does turn out to be the case that it has to do that extra seek to load the description when you only need the first line, you might consider having a short text field which contains a copy of the first 80 characters of the room description so it is stored "in-line", and only retrieve the full description if the room properties dialog is open.
Obviously you're not up to the optimization part of mapper development yet, but it's something to keep in the back of your mind in case it does end up being a problem.
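Rainchild's inline-first-line suggestion can be sketched like this (Python sqlite3, purely illustrative; the FirstLine column name and the 80-character cutoff are taken from the suggestion above, not from any actual CMUD schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical schema: FirstLine is a short copy kept "in-line" with the row,
# while the full Description may live in overflow storage
cur.execute("""CREATE TABLE ObjectTbl (
                   ObjId INTEGER PRIMARY KEY,
                   Name TEXT,
                   FirstLine TEXT,
                   Description TEXT)""")

desc = "A narrow trail winds through the pines.\nBirds call overhead."

# Maintain the short copy at insert time
cur.execute("INSERT INTO ObjectTbl (Name, FirstLine, Description) VALUES (?,?,?)",
            ("Trail", desc.splitlines()[0][:80], desc))
conn.commit()

# Slow/safe walking only needs the cheap columns...
cur.execute("SELECT Name, FirstLine FROM ObjectTbl WHERE ObjId = ?", (1,))
print(cur.fetchone())  # prints ('Trail', 'A narrow trail winds through the pines.')

# ...and the full Description is fetched only for the properties dialog
cur.execute("SELECT Description FROM ObjectTbl WHERE ObjId = ?", (1,))
assert cur.fetchone()[0] == desc
```

The trade-off is the usual denormalization one: the short copy must be kept in sync whenever the description changes.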
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Thu Aug 21, 2008 5:40 pm
Quote:
And you should always have a WHERE clause, there's very seldom any reason why a user would need to see all 100,000 records at once.
Yes, I agree and I know about that. But it isn't how the Delphi Dataset stuff was designed. When you use ZeosLib to assign an SQL database to a Delphi TDataSet, the TDataSet only has access to the data given in the SQL command. For example, if you use LIMIT 100 to limit the SQL, then using TDataset.Locate can only locate records within this 100 records and not in the rest of the dataset. Yes, it's a stupid way to design database components. Originally the Delphi TDataSet was only designed for their own BDE Interbase engine. And then they added ADO support. In both of those cases you could set the "cursor" to be on the server to prevent the SQL statement from bringing in the entire database. But that all breaks when using the ZeosLib components to tie any SQL database to a TDataSet.
Also, with the Delphi TDataset, if the original SQL statement doesn't include the field you want (as happens whenever you aren't using SELECT *), then there is no way to access that field via the TDataset. Again, it's because TDataset was designed back in the local database era when it was tied directly to a table in a file on disk.
Delphi has gotten around a lot of this in their Enterprise edition with their new ClientDataset and fancy 3-tiered database models. But that is all completely overkill for CMUD and causes too much overhead and slowdown. Also, their Enterprise edition is too expensive. So that's why I use the free ZeosLib with the normal TDataset instead.
In a way this is all good because it now forces me to design the database application correctly and always have a WHERE clause and limit the data that I need. It's just a lot more redesign and re-coding work than I originally anticipated for the mapper conversion.
Quote:
I'm not sure how zeos/sqlite works, but MSSQL can "prepare" (compile) a parameterized query
Yep, ZeosLib does that, and that is what I'm using. When I gave the example of "SELECT * FROM ObjectTbl WHERE ObjId=:id", the ":id" at the end creates a parameter for the SQL query. The statement gets prepared (compiled) and then I use Query.Params[0].Value := 123 or whatever I want to set it to and then execute the query. I am doing this in my INSERT commands as well since it's more straightforward how to handle binary blob data when using parameters.
Quote:
Anyway, on to my actual point/question...
That's a good point. I know that Memo and Blob fields are slower. I don't think SQLite stores them differently, but in order to support other databases in the future I'll consider this. Storing the first line in a regular character field is an interesting idea too. I probably won't do that right away because right now I'm trying to do the conversion without performing any database changes. But when I start adding new features to the mapper that require new database fields, that would be a good time to add it. Again, for SQLite it's not going to make a noticeable difference, but it might be good for future other databases.
Zugg MASTER
Joined: 25 Sep 2000 Posts: 23379 Location: Colorado, USA
Posted: Thu Aug 21, 2008 11:30 pm
This is so COOL!!!
OK, this is getting a bit easier than I thought. There were two main ways in which I was using a Delphi TDataset in the mapper. I'll call them the "query" and the "append" methods.
In the "query" method, the code often looked like this:
Code:
if Dataset.Locate('id', ID, []) then begin
  Name := DatasetName.Value;
  X := DatasetX.Value;
  Y := DatasetY.Value;
  { ...now do something with this data }
  Dataset.Edit;
  DatasetX.Value := NewX;
  DatasetY.Value := NewY;
  Dataset.Post;
end;
In this example, DatasetName, DatasetX, and DatasetY are the "field variables" that Delphi creates for the Dataset. The Edit/Post methods allow the current database record to be modified.
In the "append" method, the code would look something like this:
Code:
Dataset.Append;
DatasetName.Value := 'New Record';
DatasetX.Value := NewX;
DatasetY.Value := NewY;
Dataset.Post;
NewID := DatasetID.Value;
This adds a new record to the database and sets the values of the fields for that record. After the Post, the ID field can be queried to get the ID of the new record (like when using an Auto Increment ID field).
As I have mentioned in this thread, in the past I had the Dataset.SQL set to something like "SELECT * FROM ObjectTbl" to bind the dataset to the SQLite database file.
Now, as I have also mentioned, the "query" code can be easily replaced by "SELECT * FROM ObjectTbl WHERE ObjID=:id". So it's easy to change the "Dataset.Locate" method to just my own "ObjectTblLocate" that executes the above SQL query and returns the resulting Dataset.
However, I wasn't sure if that parameterized SQL would work for the "append" method. I was worried that the WHERE statement would constrain the query so that I couldn't add a new record to the database. But it turns out that it works just fine!
So even with the parameterized query, I can still call "Dataset.Append...DataSet.Post" to add a new record. Until I "requery" the dataset, it now has two records in it...the original record with the original ID value, along with a second record with a NEW ID value that does not match the original ID. When I "requery" the dataset, then the new record is no longer fetched (but it is still saved in the underlying database file).
Using this parameterized query fixes the problem of loading too much data. Instead of loading all of the data and letting the Delphi Dataset.Locate find the proper record, I just use this parameterized query so that only a single record is loaded in memory. It's very simple and easy.
Now, I went ahead and performed the other optimizations like Rainchild mentioned. So I don't actually use "SELECT *" anymore. I have a list of fields that are fetched. And for Memo fields, I have some additional routines to fetch/set those values separately. And of course the data needed to actually draw the map is still all loaded into memory and cached. So there is still a small map loading time (about the same as in zMUD), and the map still takes up as much memory as needed. But it's less memory usage than before because we've gotten rid of the second copy of the database (from the SELECT *) and have gotten rid of the Description data that isn't needed (which is loaded on-demand now). Although some of this memory savings doesn't show up in the Task Manager because in zMUD some of the data was being stored by the ADO/MDAC system and didn't show up as zMUD/CMUD memory usage directly.