The simFlight Network Forums

martinboehme (Members · 15 posts)
  1. I've just tried it out, and it seems to work perfectly. Thanks again! Martin
  2. Seems our postings crossed... ;) Wow, you're quick... thanks a lot! I'll try it out and report back! Best regards, Martin
  3. I think I checked that a while ago, and it wasn't what I needed... ELAPSED_SECONDS does speed up and slow down, just the way I'd want it to. To give you an idea of what I'm trying to do: I want to implement a custom autopilot. Ultimately, I'll probably want to run it as a gauge, but at the moment I'm doing it externally via FSUIPC because it's easier to debug that way. I need ELAPSED_SECONDS (I think) because I want to use PID controllers... for which I need to compute the integral and the derivative of certain values I'm observing. For example, I want to know how quickly airspeed is increasing or decreasing, and that's why I need to know how much simulated time has passed between two airspeed measurements. Anyway, since this is only for the "prototype autopilot", it's not worth making it accessible through FSUIPC if that would mean a lot of effort on your part or if it would slow FSUIPC down a lot. I was just hoping it was maybe a variable you hadn't made available yet because there didn't seem to be a need... But it seems the easiest solution is probably just to use the "ticks" and to compensate for time acceleration myself. Thanks for the pointer -- I haven't done any work on FSX yet (still using FS9 exclusively), but I'll surely look into it at some point... Cheers, Martin
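The PID idea described above can be sketched as follows. This is a minimal illustration of why sub-second simulated time matters, not vasFMC or FSUIPC code; the class, gains, and method names are hypothetical.

```python
# Minimal PID controller sketch: integrates and differentiates an observed
# error (e.g. airspeed error) using the elapsed *simulated* time dt, so the
# controller behaves consistently under FS time acceleration.
class PIDController:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """dt is simulated seconds since the last call (sub-second resolution)."""
        self.integral += error * dt
        derivative = 0.0
        if self.prev_error is not None and dt > 0:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The one-second clock at offsets 0238-023A would make `derivative` uselessly coarse between closely spaced samples, which is the whole motivation for a sub-second time source.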
  4. Dear Pete, I'm looking for a way to access the FS9 SDK's ELAPSED_SECONDS variable using FSUIPC. This contains the number of seconds of simulated time that have elapsed since midnight -- and the important thing for me is that it does it with sub-second resolution. I see that offsets 0238 to 023A give me the current time, but only with a resolution of one second; offset 0310 does give sub-second resolution, but it does not speed up or slow down with time acceleration, which is what I would want... Any chance you could expose ELAPSED_SECONDS through FSUIPC? Best regards, Martin
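For reference, the one-second-resolution clock mentioned above can be assembled from the three byte offsets like this. A sketch only; the layout (0238 = hour, 0239 = minute, 023A = second) is my reading of the FSUIPC offset list and should be treated as an assumption.

```python
def seconds_since_midnight(hour: int, minute: int, second: int) -> int:
    # Values as read from FSUIPC offsets 0238 (hour), 0239 (minute),
    # 023A (second) -- one byte each, hence one-second resolution only,
    # which is exactly the limitation discussed in this post.
    return hour * 3600 + minute * 60 + second
```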
  5. Don't apologize for going on holiday... :wink: Hope you had a good time! That's understood... I was just using the "external" interface for the time being because I was lazy... but I've now converted to the internal interface, and it works a treat. OK, that's good to hear... My experience seems to bear this out -- I've been running the code for a while now without any problems. Thanks for the quick answer! Martin
  6. Dear Pete, hope you are enjoying your holiday! I'm working on converting vasFMC (http://vas-project.org/), a standalone FMC, so it will work as a gauge within FS9. vasFMC uses OpenGL to draw its display, and I have to use a rather roundabout route to get the result to display in a gauge. As a result, a gauge update takes relatively long (around 30 ms), so I have decided to run all of the vasFMC code in a separate thread to minimize the impact on FS frame rates. OK, after this rather long introduction... here's my question: vasFMC uses FSUIPC to query FS9 (after all, it's originally a standalone program), so this means that I am doing FSUIPC_Process calls from a separate thread. Judging from the discussion in this thread: viewtopic.php?f=54&t=6980&view=next it seems that this works OK because FSUIPC uses SendMessageTimeout to call into FS9 -- so everything should synchronize nicely. I just wanted to confirm with you that what I'm doing is in fact OK... I realize that SendMessageTimeout may block my thread for a while until FS9's message loop can process the message, but that's not a major concern for me... Thanks, and best regards, Martin
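The threading architecture described above can be sketched like this. Everything here is a stand-in: `fsuipc_process` is a hypothetical placeholder for the real `FSUIPC_Process` call, and the dummy reply is invented; the point is only the shape of the design, with all FSUIPC traffic confined to one worker thread.

```python
import threading
import queue
import time

def fsuipc_process(request):
    # Hypothetical stand-in for the real FSUIPC_Process call. The real
    # call marshals the request into FS9 via SendMessageTimeout, which
    # serialises it with FS9's message loop -- it may block this worker
    # thread until the message is processed, but that is acceptable here.
    return {"airspeed": 250}  # dummy reply for illustration

def worker(results: queue.Queue, stop: threading.Event):
    # All FSUIPC polling lives on this thread, keeping the slow (~30 ms)
    # gauge update work off the simulator's frame-rate-critical path.
    while not stop.is_set():
        results.put(fsuipc_process({"read": "airspeed"}))
        time.sleep(0.01)
```

Usage is the obvious start/stop pattern: create a `Queue` and an `Event`, start the thread, and set the event to shut it down.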
  7. OK... would have been nice if it had been a variable in memory... but I've got a Plan B: I don't think I'll have to scan BGLs... I'll use NAV2 to monitor an ILS that may be tuned (the only other thing I need NAV2 for is for audio identing navaids), so I'll notice very quickly if the localizer comes alive. Then, I can just compare the localizer deviation with the angular deviation to compute the beam width... Anyway, thanks for the help! Martin
  8. Pete, thanks for looking into this -- and on a Sunday! Makes sense -- just wanted to check that the behaviour I'm getting is normal and that there's no way to speed it up... Thanks also for the information about the 388 offset... Yep, the "beam width" certainly has an effect. Actually, I only started looking for this field after I observed that different ILSs gave me different "localizer deviation units per degree". I then had a look in AFCAD and sure enough, I found this "beam width" field... so there must be some way of encoding this in a BGL, because it has an effect in the simulator... By the way, the reason why beam width is adjustable is to ensure that the width of the beam at the runway threshold is always 700 feet. Because the localizer is situated at the far end of the runway, different runway lengths require different beam widths to fulfill this requirement. See here for details: http://www.flightsimaviation.com/aviatim_ILS.html The glideslope, on the other hand, always seems to have a fixed beam width of 1.4 degrees (according to this website), which makes sense, because the glideslope transmitter is always more or less the same distance from the threshold. So far, this also seems to be true in MSFS, but I haven't tested my code on very many ILSs yet... So if the glideslope has that mystery two-byte field as well, it doesn't seem very likely that it would be the beam width. Two bytes would be the right size for an angle, though... maybe they included a beam width for the glideslope, too, and then only noticed afterwards that they could always set it to a constant value? Is this a two-byte field in the BGL file or in MSFS's data segment? If it's easy to do, maybe you could take a look at the values contained in those fields? If it really is a beam width, the glideslope value should be something like 1.4/360*65536=255, or maybe half of that if they're giving the half-width of the beam... 
hmm, unlucky coincidence, because even if we do find 255 or 256 in that field, it could be a lot of other things, too... I've just looked up the beam width for the localizer at Seattle (KSEA) on runway 34R, that's the ISEA ILS. The beam width in the stock AFCAD file is 3.3 degrees, which corresponds to an FS angle value of 601. So maybe that could help us disambiguate this... Only if it's not too much trouble, though... I've got some ideas about how I can proceed even if I can't read out the beam width explicitly... Cheers, Martin
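The angle arithmetic used in these two posts (3.3 degrees → 601, 1.4 degrees → 255) is just a scaling of degrees into a 16-bit fraction of a full circle; a quick sketch:

```python
def degrees_to_fs_angle(deg: float) -> int:
    # FS encodes many angles as a 16-bit fraction of a full circle:
    # 65536 units == 360 degrees.
    return round(deg / 360.0 * 65536)
```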
  9. Pete, I've been doing some more work on this and have unfortunately hit upon some discouraging results... see the discussion here: http://forums.avsim.net/dcboard.php?az=pic_id=885 Basically, the three different types of VOR (high, low, and terminal) all have different attenuation functions for governing how signal strength changes with height over the ground. It should be possible to work out which of these three functions is being used (see Bob's suggestion), except in rare circumstances when the functions intersect, but I've decided that working out the intricacies of how the signal strength calculation works is probably more work than an alternative approach: Write my own code not only for computing DME distance but also for the VOR radial and localizer and glideslope deviation. Once I have that, I don't really need FS's built in NAV radios for anything except determining signal strength, so I'll cycle NAV1 through all the stations I'm interested in and use that to update the signal strength measurement -- it's enough if this happens every few seconds. I've actually got my own VOR and ILS code working already, except for one small point: ILSs in MSFS (as in real life) have a variable localizer "beam width", which governs how far off the localizer centreline you have to be (in degrees) to get full deflection on the localizer deviation indicator. (This beam width can, for example, be set in AFCAD.) Is there any variable in FSUIPC that I can query to find out this beam width? Of course, once the aircraft is actually on the localizer, I can just compare the localizer deviation reading that I get from MSFS with the deviation in degrees that I compute myself and then use that to deduce the beam width. However, that obviously doesn't work before the intercept when the localizer needle is "pinned". So is there any other way I can get at this value? 
Another small question: I've already experimented with cycling through different frequencies by setting the frequency in offsets 0x350 or 0x352. What I've noticed is that it takes about a second after I've set the frequency before I get valid data. I was hoping that I would be able to reduce this delay by writing a 2 to 0x388 (the nav radio activation offset), but this doesn't seem to have any effect. Am I just misinterpreting the effect that this offset is supposed to have? Cheers, Martin
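The fallback described above — deducing the beam width once the needle is alive by comparing FS's deviation reading with a self-computed angular deviation — can be sketched like this. The assumption that full-scale deflection occurs at half the beam width is mine (the posts leave open whether the stored value is the full width or the half width) and should be verified.

```python
def estimate_beam_width(angular_dev_deg: float, needle_fraction: float) -> float:
    """Estimate the localizer beam width (degrees) from one observation.

    angular_dev_deg: deviation from the centreline, computed from aircraft
                     and localizer positions.
    needle_fraction: needle deflection as a fraction of full scale (0..1],
                     derived from the FS localizer deviation reading.
    Assumes full-scale deflection occurs at half the beam width, i.e. the
    beam width spans both sides of the centreline.
    """
    if needle_fraction <= 0:
        raise ValueError("needle must be off-centre and not pinned")
    half_width = angular_dev_deg / needle_fraction
    return 2.0 * half_width
```

This only works after localizer intercept, exactly as the post notes: while the needle is pinned, `needle_fraction` saturates at 1 and the estimate is only a lower bound.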
  10. Pete, I'm back... :wink: Ah... well, I'm afraid that disproves my tentative theory. One of the locations where I made measurements was Queenstown in New Zealand... I made two measurements there, both at 10 nm from the station (which itself is only a few nm from Queenstown airport), so Queenstown airport should definitely have been the nearest airport in both cases (there's not really anything else in the vicinity). However, the "reference altitudes" that I reconstructed from my regressions were quite different (and reflected the elevation difference of about 3000 feet between the two locations). "Bolted on" is a good expression... :wink: They always kind of reminded me of those "squiggly bits" that climbers bolt onto walls to make artificial rock faces... Yes, I realize that... Unfortunately, I'm not really that "into" FSX yet... my computer isn't quite up to it, and besides, the Tinmouse (which all of this is for) doesn't work with it. It would probably still be a good idea to test it all the same... could provide useful information. We'll see... For the moment, I think I'll try gathering more data... maybe a coherent picture will emerge at some point... Martin
  11. Well, I'm sure there are those types of effects going on... and who knows, maybe there's code in MSFS to account for it. I have my doubts, though... it just doesn't fit in with the rest of the picture. I mean, they use a 1/d law instead of a 1/d^2 law... and when you're right on top of a station, signal strength actually increases with altitude instead of decreasing as it should... I find it hard to believe that they should leave those kinds of obvious flaws in the model uncorrected but at the same time go to great pains to model small variations caused by ground clutter... I've been thinking about this a bit more since my last post, and I've come up with an idea (admittedly very speculative) for what might be going on here. Bear with me for a moment. I'm going to go with the following assumption: The code that calculates DME and signal strength is old code, at least at its core. I recall that at one point (circa FS4), MSFS didn't take altitude into account when calculating DME. You could overfly a station at, say, 6000 feet, and you would get a zero DME reading when you were right over the station. (I remember being impressed when ATP corrected this when it came out...) Now, from what I've seen of the signal strength calculation, it seems to me as if this, too, depended only on "horizontal distance" at one point. The way that signal strength varies with altitude (and the strange effects that result when one is right over the station, see above) seems to me as if it was bolted on as an afterthought -- as a "quick fix" to make signal strength do the right thing (at least qualitatively) in most cases. Now, going back into MSFS's history again, I remember there was a time, before "mesh" existed, when ground elevation did not change continuously... I know this was the case with FS4, ATP, and even AS2. (This is the line of FS genealogy that I'm most familiar with... 
I never owned FS98, FS2000 or FS2002, so I'm not sure which of those versions introduced mesh, but you can probably enlighten me... :wink:) Anyway, as you're probably well aware, what happened in those early days was that there were discrete regions, within each of which the ground elevation was constant. Then, when you passed from one of those regions to the next, the ground elevation would suddenly jump from one value to another. Now, my speculative assumption is this: (a) The version of MSFS that introduced an altitude-dependent effect on signal strength was pre-mesh, i.e. it used the regions-with-constant-elevation model described above, and (b) somewhere in MSFS, this primitive elevation model persists to this very day, and the signal strength code was never changed to use the more refined elevation values supplied by the mesh. I realize this is a very, very long shot... and I don't really believe it myself. It's just that I can't seem to come up with a sensible explanation for the data I'm observing... Either use the little "Quote" button provided for the purpose, or enter the closing quote part with a / instead of the \ you are using. ;-) Argh... :oops: hm, good point... Martin
  12. Pete, thanks for your input on this! No... what I've been doing so far in this particular department implicitly assumes the earth is flat. What I did there was to take the DME distance reported by MSFS along with the difference between the elevation of the DME station and the ground elevation under the aircraft. Assuming the earth is flat, the DME distance gives me the hypotenuse of a right triangle, and the elevation difference gives me one of its legs. The "horizontal distance" is the other leg, which I can compute using the Pythagorean theorem. Bear with me, though, there's more to come... :wink: That's what I did originally. What I did, basically, was to take a few signal strength measurements at different (DME) distances from the station; I did these at an altitude of 10,000 feet to make sure the attenuation that kicks in closer to the ground wouldn't affect me. The resulting curve looked very much like 1/dme, so multiplying my signal strength measurements by dme should give me a constant value. This worked very well when I was more than a few nautical miles away from the station, but the values weren't quite constant when I was closer in. So I tested the idea that the variable might be "horizontal distance" (computed from the MSFS-reported DME as above) not "slant distance", and that worked much better. Close to the station, I still got some fluctuations, but not more than I would expect given that MSFS reports DME to an accuracy of 0.1 nm, and the closest measurement I took was at a "horizontal distance" of 0.4 nm. (By the way, as a side note, a strange consequence of the way that MSFS computes signal strength is that if you place your aircraft right onto a DME station at ground level, or as close as you can get, then start slewing upwards, signal strength will actually increase even though you're getting further and further away from the station. 
Another observation: If your lat/lon position is really close to that of the station then, at altitude, you can get really large signal strength values. I wonder if MSFS checks for a divide-by-zero for the case where the lat/lon position of the aircraft matches that of the station exactly. If I get really desperate, I might try provoking a divide-by-zero to pinpoint the position in the code where signal strength gets computed.) Anyway... I've been going on about "horizontal distance" for a while, but in fact, I'm not too worried about that component of the formula. In fact, for the purpose of determining what happens in the vertical direction, I'm independent of the exact way in which "horizontal distance" gets computed. This is because, for each set of measurements that I take at a given lat/lon position, I divide the measured signal strength values by the signal strength observed at altitude. In effect, what I'm doing is dividing x out of the f(x, h) function I describe in the thread on the Tinmouse forum. Maybe I should give a more detailed description of my measurement process to make this clearer. What I do is that, for a given lat/lon position, I place the aircraft at different altitudes (I record altitude, not height when making my measurements, but I do, of course, make a separate note of ground elevation) and record the corresponding signal strengths. If I plot these measurements, I get a plot that has a straight-line segment, then a kink, then another straight-line segment, and then another kink, after which the plot is horizontal (i.e. signal strength stays constant beyond a certain altitude). What I then do is I divide all of my measured signal strengths by the "final" signal strength in the constant segment. I then fit the two straight-line segments using linear regression. 
The thing is this: For all of my measurements so far, the functions that I fit to the measured values are identical (up to a residual error of a couple of units of signal strength), except for an offset along the altitude axis. The slopes of the segments are identical; the distance between the "kinks" is always 4666 feet, give or take a couple of feet. Also, I can explain most of the variation along the altitude direction using ground elevation -- up to about +/- 100 feet, my mystery constant c, which I can't resolve. This is where earth curvature could come in -- and for a moment I thought that it might explain this discrepancy -- but here's why I don't think it does: I've taken several measurements at the same distance from the station (10 nm), so the influence of earth curvature should be the same, but the constants c differ. Conversely, I've taken two measurements over sea, at different distances from the station, but the constants c are the same to within one or two feet. I really should do one or two more measurements over sea to make sure this is not a coincidence, but I don't really think it is... Anyway, enough for one post, I think... :wink: BTW, I'm going on holiday tomorrow for a week, so if I don't answer, that's why... Cheers, Martin P.S. Can't seem to make the "quote" tags work... sorry... P.P.S. Edited slightly for grammar and clarity P.P.P.S. Edited again to use a forward slash instead of backslash on closing "quote" tags...
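The flat-earth triangle described at the start of this post can be written down directly. A sketch under the units I am assuming here (DME in nautical miles, vertical separation in feet):

```python
import math

FEET_PER_NM = 6076.12  # one nautical mile in feet

def horizontal_distance_nm(dme_nm: float, vertical_sep_ft: float) -> float:
    """Flat-earth horizontal distance from a DME reading.

    The DME slant range is the hypotenuse of a right triangle, the
    vertical separation is one leg, so Pythagoras gives the other leg
    (the "horizontal distance" used throughout these posts)."""
    dh_nm = abs(vertical_sep_ft) / FEET_PER_NM
    if dme_nm < dh_nm:
        raise ValueError("slant range smaller than vertical separation")
    return math.sqrt(dme_nm * dme_nm - dh_nm * dh_nm)
```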
  13. Hi Pete, thanks for the quick reply! Yep, seems odd... but I've tested a station that has an elevation of around 1000 feet, and the variation in the parameter c that I get relative to a station that is at sea level is only about 100 feet... which is about the same amount of variation that I get between different locations for the same DME station... The formula doesn't seem to be too "physically accurate" anyway... if I've got my physics at all right, the signal strength should be proportional to 1/d^2, not 1/d... shouldn't it? I've cleared that up in the meantime -- sorry for not mentioning it. I was doing something wrong in my own code -- a check with FSInterrogate (which I should have done in the first place) revealed that all was well with the values delivered by FSUIPC. My apologies... Ahem... you're right, of course. :oops: However, I haven't written any code yet for the signal strength issue -- I collected all of the data using FSInterrogate... so it's not my code, at least... but it could well be that I'm missing something obvious. Which was part of my motivation for posting about it... sometimes the act of putting things down in words gives me new ideas. Not so in this case, unfortunately... at least not yet... but I'll keep at it. Cheers, Martin
  14. Hi Pete, I'm working on implementing a DME hold feature, which I want to contribute to the Tinmouse project. The actual DME distance calculation works fine, but I also want to reverse engineer the formula that MSFS uses internally for the signal strength calculation, so I can take a known signal strength (at the time the user presses the DME hold button) and use that to compute the range of the station. Anyway... to get to the point: I've been able to work out most of the formula, but I've run into a problem that has me stumped, and I was thinking you or someone else on the forum might have come across something similar before. (Don't take this as a request to do any research -- if you don't know, I'll do the digging myself. Just wanted to avoid duplicating work on something that someone else might have cracked already.) The situation is this: I've been able to find out that signal strength depends on horizontal distance from the station, and height of the aircraft above ground. The only snag is that I have to adjust the height above ground by an amount of about +/- 100 feet to get an accurate prediction. I've been able to rule out the alternative possibilities that signal strength depends on altitude (instead of height AGL) or height above the station, so that's not the reason for the variation. If you're interested, the details are here: http://forums.avsim.net/dcboard.php?az=pic_id=885 My suspicion is that, internally, MSFS is using some sort of "ground elevation" value that is slightly different from the ones reported in FSUIPC offsets 0020 and 0B4C (which seem to deliver the same values, albeit with different precision). One possibility could be that MSFS simply uses the elevation of the closest mesh grid point, instead of interpolating between grid points, as I assume the values in 0020 and 0B4C do. So I guess my question is: In your forays through the MSFS code, have you ever come across something like this? 
Doesn't seem very likely, but I thought I'd ask... Cheers, Martin
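The stated goal of this post — turning the signal strength known at the moment DME hold is pressed into a station range — reduces, under the 1/d law discussed in the earlier posts, to a simple proportionality. A sketch only: the 1/d model and the idea of a minimum usable signal threshold are assumptions carried over from that discussion, not a confirmed MSFS formula.

```python
def station_range(signal_strength: float, distance_nm: float,
                  min_signal: float) -> float:
    """Estimate reception range under an assumed 1/d signal-strength law.

    When the DME hold button is pressed, both the signal strength and the
    currently displayed DME distance are known, fixing the station
    constant k = s * d.  The station then drops out of reception where
    the strength falls below min_signal, i.e. at distance k / min_signal.
    """
    k = signal_strength * distance_nm
    return k / min_signal
```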