Category Archives: Electronics


Micrometer and Force: a time-stamped interface

For a special application that I can’t name here, we need to measure both deflection (length) with about 0.01 mm resolution, and force in the range of 200 N (≈20 kg). The force measurement is needed with a temporal resolution of a few tens of Hz; for the deflection, ~10 Hz will be sufficient.

All this needs to be accomplished on a budget, so I decided to use a Chinese load cell (20 kg range) with an HX711 converter board, and a digital micrometer dial (13 mm range, 0.01 mm resolution).

First, to the HX711. As per the datasheet, it can be run from an external clock source. Not sure if this is necessary, but I thought I’d give it a try. The specified range for the clock is up to 20 MHz (normally it runs at about 11 MHz), but the HX711 works well up to 70 MHz and above. You don’t need to couple in a lot of power; even at -10 dBm, it is still working. Anyway, by setting the clock control pin HI, the HX711 will provide a data rate of 80 Hz. As with all analog-digital converters, there is a trade-off between sample rate and noise-free resolution, but for the application discussed here, even 10 bits would serve the purpose.
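
For readers who want to process the readings on the host side: the HX711 shifts out a 24 bit two’s complement number, which needs sign extension before use. A minimal sketch in plain C (independent of the actual bit-banging on the AVR, which is hardware-specific):

```c
#include <stdint.h>

/* Convert a raw 24-bit two's-complement HX711 reading (as shifted in
   MSB-first over its serial interface) to a signed 32-bit value, by
   sign-extending bit 23 into the upper byte. */
int32_t hx711_to_signed(uint32_t raw24)
{
    raw24 &= 0xFFFFFF;              /* keep the 24 data bits only */
    if (raw24 & 0x800000)           /* negative: sign-extend */
        return (int32_t)(raw24 | 0xFF000000u);
    return (int32_t)raw24;
}
```

The signed count then only needs to be scaled by the load cell calibration factor to get Newtons.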

For the digital dial – unfortunately, the industry (the Chinese digital caliper and dial industry) has not yet settled on a commonly available connector to get the signal of a digital caliper or dial into a cable. At least they offer various types of data protocols, and with not too much detective work, you can figure out how to interpret the data. I used a Micro-USB connector, carefully soldered to the data output pads of the micrometer dial.

Some small wires were used to connect the USB connector to the board, to allow for some movement, and to ensure longevity even in an environment with vibration, and people connecting the USB cable without much care.
The dial runs on 1.5 V (a single button cell), so we need a small converter board, using half of an MC3302P quad comparator, to convert the 1.5 V logic to good old 5 V TTL logic. You can use any type of comparator or logic conversion circuit; even a single transistor may work. Anyway, I didn’t want to load the dial any more than necessary, and to improve noise immunity, I added a 20 MHz low-pass (330 Ohm with 22 pF) to the input.

Here are the rising edges: logic conversion board input (blue) vs. output (yellow). For the faster traces, the internal pull-up resistors of the ATMega8 were enabled. Not much effect anyway.

All the data are collected by an ATMEGA8-16PU, and sent to a host PC via a 115.2k RS232 link. This even allows wireless connections, with a serial-to-RS232 converter. Data are sent in one direction only, from the ATMEGA8 to the PC. All measurements carry a 16 bit time stamp, using the ATMEGA8’s 16 bit timer.

The 8 bit timer of the ATMEGA8 is used to capture the data from the micrometer, which uses a synchronous clock to transmit data – the rising edge of the clock triggers an interrupt, INT0, and the timer ensures that each data frame is received properly (the timer overflows after each received data package, to signal that the next clock edge will be the start of a new data package).

The dial transmission pattern: one data block is sent every ~100 ms.

Some detail of the timing: for the current dial indicator interface, data are valid at the rising edge of the clock.

For the last bit, there is no falling edge of the clock.

For the “0” logic state, the dial interface pulls down the data line for each clock pulse, and then releases it again to high. Not sure why that is; maybe it saves some battery power?

The data is transmitted as a binary number, 24 bits, 0.01 mm or 0.005″ per count. The last 4 bits are used for status, and indicate a “-” display, and mm vs. inch.
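
A sketch of the decoding step, with the caveat that the exact status bit positions used below (bit 20 for the sign, bit 23 for inch mode) are assumptions for illustration – check your own dial with a scope before relying on them:

```c
#include <stdint.h>

/* Decode one assembled 24-bit frame from the dial: the lower 20 bits
   are taken as the count, the upper 4 bits as status. Bit positions of
   the sign and mm/inch flags are assumed here, not verified. */
typedef struct { double value; int is_inch; } dial_reading;

dial_reading dial_decode(uint32_t frame)
{
    dial_reading r;
    uint32_t count = frame & 0xFFFFF;           /* lower 20 bits: count */
    int negative   = (frame >> 20) & 1;         /* assumed sign flag    */
    r.is_inch      = (frame >> 23) & 1;         /* assumed mm/inch flag */
    double step    = r.is_inch ? 0.005 : 0.01;  /* per-count step, as above */
    r.value = (negative ? -1.0 : 1.0) * (double)count * step;
    return r;
}
```

On the AVR, the same logic runs after the INT0 handler has shifted in all 24 clocked bits.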

That’s the full pattern of a transmission sequence from the HX711. Blue is data, yellow is clock.

Timing – data line changes states quickly after the rising edge of the clock signal.

This is the final data stream as received. Notice the time-stamped load cell and dial transmissions.

For those interested, here is the MCU code (AVR-GCC): loadc_d0.c

Workshop upgrade: Light fittings, and luminous efficacies

With the workshop basic repairs complete, how to set up a cost-effective lighting system?

Some items to consider:
-I will be working there mostly after work, late in the evening, and mostly in the dark winter months. So I need daylight-like, bright light to keep focused.
-Diffuse light alone will not be enough. I like the feeling of incandescent lights, so there need to be some bright, direct work lights.
-Some areas, like the stairs, will need a separate light, which needs to be “ON” immediately after the switch is pushed. The same applies to the other lights – there should be no dim start-up, flicker, or start-up delay.
-The light level needs to be high, because the main purpose of this room is to assemble, fix, and test high frequency/microwave assemblies, and fine-mechanical devices.
-Because this is a hobby, we need to keep expenses down, both the initial cost (light fittings and lamps), and the running cost – electric power.

This is a plan of the room layout, rectangles mark the position of tables.

Essentially, two kinds of light sources have been considered – T8 fluorescent tubes for the background illumination, 8 pcs., 25200 lm total (EUR 5 per piece, including lamp and electronic starter – a bargain), and 4 PAR38 30° LED down-spots (these replace the 108 W halogen lights, and still have a nice glass body).

licht parathom

These are about EUR 13 per piece, and the light fittings are simple screw sockets E27 size, EUR 1.50 per piece.

licht raumplan

Some calculations: a total of 31 kilo-lm, at 363 Watts (which quite precisely matches the total power of the workshop, checked with a wattmeter). That’s ~85 lumen per Watt – a very good value.
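
The efficacy figure is just total luminous flux over total electrical power; as a quick sanity check of the numbers above:

```c
/* Overall luminous efficacy of the planned installation:
   total luminous flux (lm) divided by total electrical power (W).
   Figures taken from the text: 31 kilo-lm at 363 W. */
double efficacy_lm_per_w(double total_lumen, double total_watt)
{
    return total_lumen / total_watt;
}
```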

We will need to check the actual illuminance at the work surfaces later, because the lumens of the fluorescent tubes are not all going downwards (some are reflected from the white ceiling and walls, some are lost in the light fitting).

licht efficiacy

Lifetime: these lights are rated for 25000 hours (LED), or 12000 hours (T8 tubes). This means 3000~6000 hobby days (counted at 4 hours each), so there should be no need to change these bulbs and tubes any time soon.

New test lab and workshop: Renovation update

It has been a bit quiet here recently, not because of a lack of activity, but rather the opposite. Currently, my workshop and test labs occupy one room in an apartment, 3rd floor (2. OG in German counting), and 3 basement rooms (mostly for soldering, assembly, and mechanical fabrication); plus 2 basement rooms in a house 2 hours’ drive from here, with some of the not-so-often-used heavy metal working machinery.

After some negotiation, I was able to get another room, which is on the ground floor, well heated and at a rather constant temperature all over the year, and has daylight – all this in a beautiful building that is even listed in the historic monument directory of the state. It has about 28 m2 of floor space, quite ideal for my needs: a clean working area for the assembly of equipment, and detailed testing. It is also much closer to the soldering workshop and parts storage in the basement, and will save me from walking up and down 3 floors several times a day during the final assembly phase or during repair works, which often require a combination of soldering and difficult testing that so far could only be done on the 3rd floor. The basement is great, but it is too humid for operating a vector network analyzer, and the like.

Now to the laborious part, the renovation. The room had been used in the past as a meeting room for a motorcycle club (probably in the 80s), and later as a workshop for remote-controlled model aircraft. During the last 3 years, it had mainly been used for storage, with the floor cover, walls, etc., all aging away. Also, there was no internet connection or safe electric outlet available in the room. It took about 7 full days of hard work to get it up to requirement, including removing all the junk, cleaning and fixing the walls, door, and wooden panels, removing several layers of old floor covers, and putting it all back together again. Another 3 days for all the cables, in particular the network cables (all done using CAT7 LSZH 4x2xAWG23 S-FTP; one short section is CAT6 CCA PVC shielded) and ethernet (CAT6) wall outlets.

Here is a network map, mostly for my own reference, with the two servers (HTTP, SAMBA): the active server, running at 192.168.130 – a Dell Optiplex FX160 with an ATOM processor, running Ubuntu, a really power-efficient way to run such a system, 2 TB RAID0 – and arctur, a Dell Poweredge server, 2×3 TB RAID1, used for backup of the active server, and also as a backup webserver in case of hardware failure of the FX160. All the switches were selected for low power consumption, to keep things green and the running cost low. There are two WLAN transmitters, so now there is good bandwidth all around the (large) house, and even in the basement and garage.

The WAN connection is via a cable modem, which is located in the 3rd floor apartment (2. OG), and works at 100 Mbps down, 6 Mbps up (this is good for now; usually getting about 80~90 Mbps down, and the full 6 Mbps up; probably upgrading to a 100/100 Mbps connection next year).

A quick test of the network speed – measuring the transfer speed from and to a ramdisk on the acrux server: 50~70 Mbytes per second, from all locations of the network. That’s certainly fast enough.

werkstatt network map

This is a view of the power distribution and network distribution box.

Micro-Tel MSR-902C Receiver: root cause analysis, and a volt meter

Finally, some time to deal with the MSR-902C repairs. After replacing the 7401 TTL, and a 7404 TTL, the band select logic seems to work well, except for two bands. This could be traced to a dead transistor on the A3A5 band control board. Still a mystery – what caused all these defects? Tracing the line going to the dead transistor (which appears to be a simple +15 V on/off switch), it only goes to one place – a circuit far inside the receiver. As it turns out, this is a hand-wired circuit, not really a circuit board, but a piece of sheet metal with various solder posts. And, on the other side, two filters. One filter is mounted properly, the other is tied to it with some thread. As you can see, this holds the filter in place, but it can still move around the other filter – and cause a short on the 15 V rail, or of the signal coming from the transistor switch.



To avoid similar defects in the future, I put some plastic sheet around the filter, and fixed it in place with better ties.

Finally, time for some alignment of the YIG filter, using a fairly complex setup: a microwave signal generator, a scope to test the receiver output, etc. – see the picture below.


The YIG filter needs to be aligned for each band, same for the YTO band edge frequencies. This is all done on the A3B7 board. Not much adjustment needed, fortunately, only some fine tuning of the YIG preselectors.


Receiving… quite fun to operate the receiver, easy to tune over the full range of frequencies. Maybe this is what makes it so suitable for detecting microwave bugs.


One last repair relates to the frequency display. It did work in some bands originally; not sure how the defect came about – maybe I slipped with a screwdriver, or some other mishap, or an already damaged part, I can’t tell. But now it only shows erratic values, and without a schematic, it is a tough task to fix.


A fairly complex assembly – keep in mind, it is just a volt meter for the frequency display… so much easier nowadays…


The LED display: hand-wired with Teflon-coated wires. Sure, this receiver was never intended for the layman, but for some agencies that don’t care about cost, or taxpayers’ money.


After some tests and checks – the voltmeter uses a voltage to frequency/time converter, and a MIC5005 integrated timer! Quite a nice and complex chip for its age!


Two hours later – found the issue: a bad reference diode, 1N821. Unfortunately, no such diode in stock, but the 1N827 is quite similar – the latter is just more precise, and more expensive, and only available as a used part in my bin. But it is easy to check: just put a resistor in series, run it at about 1 mA, and check the voltage drop over the diode. All good.
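
The series resistor for such a quick test is simple Ohm’s law; the 15 V supply below is just an assumption for illustration (the 1N821/1N827 are 6.2 V temperature-compensated references):

```c
/* Series resistor to run a reference diode at a given test current:
   R = (V_supply - V_diode) / I. With an assumed 15 V supply, a 6.2 V
   reference, and 1 mA target current, this gives about 8.8 kOhm. */
double series_resistor_ohm(double v_supply, double v_diode, double i_amp)
{
    return (v_supply - v_diode) / i_amp;
}
```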



Finally, reception is pretty good over all bands. No detailed tests of noise levels done yet, but already it is clear that this is a pretty capable receiver, built with only the best components of its time – just the style is not quite service friendly.

The demodulators work as well – receiving a 1 kHz demodulated signal, all looking pretty good and clean.


Ultra-cheap LED Spot Lights: Failure mode analysis, some reverse engineering, and some concerns

Something amazing about the advent of LED technology for general lighting is not only the brightness, efficiency, and so on, but also the amazingly low price. Here: 20 light fixtures, with 3 LED elements each, 34 EUR total. That’s a bargain a friend of mine could not resist. But think twice – after about 1 year of occasional use, several of these lights failed. The brightness is gone; only some faint flashing remains.


Still, the price is amazing – considering the price of a single 1 W LED element, about 1 EUR retail. Plus the case, heat sink, aluminum circuit board, heat conduction paste, external case, 3 lenses!! No idea how this is made in China, for about EUR 1.50 a piece, delivered.


The first suspect – the drivers: each lamp has its own little driver box, type S3W-0103.



The parts, and a good quality aluminum board, marked CQ-LV8072. This is a universal board, found in many kinds of Chinese LED light fixtures.


Tested the LEDs – it turns out one of the LED elements is dead, and this ruins the whole thing, as all three LEDs are arranged in a series circuit. We can fix this easily by replacing all three LED elements with some good quality parts – albeit at almost non-economic cost. Hint – the case can be unscrewed, with the heatsink turning vs. the outer case. No need to apply brute force, like I did, to open it up.


Some reverse engineering reveals a rather simple, but practical circuit. Using S8050 and MJE13003 TO-92 transistors, and a little transformer.


As you can see, there are no protection elements. What if the input capacitor shorts out, or some overvoltage blows the transistor? Could it set your flat on fire? Well, my guess is: yes.

Digital Delay Line: sawtooth corrections of an ultra-precise GPS-reference 1 pps signal, and thermal effects

In an earlier post, I introduced the Motorola M12+ timing receiver, which is really a nice and affordable gadget for everyone who needs a precise and accurate time signal. We are talking about nanoseconds here. All these timing receivers have something called a sawtooth error, linked to their internal clock. See the earlier post: M12 perfect time.

Various methods exist to account for this sawtooth error, first and foremost, correction in software. However, I felt the need for a hardware solution here, to simplify the usage of the 1 pps trigger as a reference signal for phase measurements, and for other purposes where recording the sawtooth correction values would be rather troublesome.
With any such attempt at the nanosecond scale, considerable thought needs to be put into the system to avoid introducing errors larger than those we want to correct. In particular, thermal effects can lead to large long-term jitter, aka randomly wandering phase.

How can we achieve compensation of the sawtooth error? Well, rather easily: by introducing a variable delay element in the signal chain, and adjusting its delay, second by second, to the expected sawtooth error, in ns. Fortunately, the M12+ can be programmed to send out a message, the @@Hn TRAIM Status Msg, which provides, every second, the expected sawtooth error of the next second. One single command is needed to make the M12+ send out this message, every second, from now and forever – until other instructions are received, or until the M12+ backup battery is taken out…

See the diagram below: an AVR processor taps the TxD line running from the GPS receiver to any host controller or PC (if connected), and whatever messages are sent out are checked for the @@Hn message (and the @@Ha message, just to display the current time, UTC, and date on an LCD connected to the AVR). Note that this works perfectly fine, even when another host or PC is used to control/read/monitor the M12+. The M12+ uses 3 V logic, but an AVR input can easily handle this as a valid signal, even with the AVR running at 5 V.


Glad a processor is doing the decoding work… the GPS messages are a bit too cryptic for me:


Rather than implementing a discrete solution with various delay lines made of coax cables, switches, etc., Maxim Integrated provides a marvelous chip, a silicon delay line, the DS1023 series, at not marginal, but still acceptable cost: USD 8 per piece.


This chip comes in various versions, differing in the delay-per-step, with an 8 bit register to set the actual delay. Sure, the minimum delay is not “0 ns”, but some odd number, corresponding to the delay of the signal before and after the actual delay line.
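
To illustrate how the sawtooth correction could map onto the register of the DS1023-50 (0.5 ns per step): since the delay line can only add delay, one sits at a mid-scale baseline and moves up or down from there. The baseline of 64 steps and the sign convention below are arbitrary choices for illustration, not necessarily what the actual firmware uses:

```c
#include <stdint.h>

/* Map a sawtooth correction (in ns, as reported by the M12+ @@Hn
   message for the upcoming pulse) onto the 8-bit delay register of a
   DS1023-50 with 0.5 ns per step, sitting at an assumed mid-scale
   baseline of 64 steps (32 ns standing delay). */
#define STEP_NS        0.5
#define BASELINE_STEPS 64

uint8_t sawtooth_to_register(double correction_ns)
{
    double steps = BASELINE_STEPS + correction_ns / STEP_NS;
    if (steps < 0.0)   steps = 0.0;      /* clamp to register range */
    if (steps > 255.0) steps = 255.0;
    return (uint8_t)(steps + 0.5);       /* round to nearest step */
}
```

With the ±13 ns sawtooth of the M12+, the register then moves between 38 and 90, well within range.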


According to the datasheet, this chip is trimmed for best accuracy and high thermal stability. Further documents also say that the thermal drift is non-linear, and that no temperature coefficient can be provided. Rather, the delay is specified as an absolute number over the full temperature range. Well, fair enough, but what does this mean for our present case and the actual device under test? With no information available anywhere, it seems the only way to find out is to measure it. The datasheet maximum error would be a bit more than we want.


The schematic is nothing to write home about: a 74F04 is used to buffer the input signal, and the same F04 is used as an output buffer, providing a nice fast-rise (or, respectively, fast-fall) 1 pps signal.
The only specialty: a thermistor and two resistors, epoxy-glued to the top surface of the DS1023-50! These can be used to heat up the device rather quickly, to 60 degC or more, by providing power from a regulated DC power supply.


Note the heating element and the thermistor (a rather small, fast response, 100 kOhm NTC) – red frame.


The test setup to measure the temperature effects runs without the GPS, but with a ~1 kHz fast rise-time pulse from an HP 8012B pulse generator. Both input and output are connected to an HP 5370B Time Interval Counter. The latter is a great device, with single-shot accuracy of 20~30 ps – if you are into any precision timing tasks, it is very much worthwhile to get one of these, or a Stanford Research Systems SR620. Time intervals are then recorded as averages of 1k measurements, giving very stable readings with high resolution, certainly to 0.01 ns. For the test purposes, the AVR monitoring the RS232 signal can also be programmed via USB, to set any delay value from 0..255, corresponding to a 0..128 ns delay, plus the baseline delay of the gates and the DS1023-50.



All connected to a PC via GPIB, and recording the delay values at various settings.


Rather than many words, please inspect these diagrams, which will give you a feeling for the delay and drift to be expected with temperature cycling of the device at various rates (slow cooling, fast heating, slow heating, etc.). These were all recorded at the maximum delay, register set to 0xff (255). The diagrams show delay, in ns, vs. time, as MJD.



In absolute numbers: 152.1~152.7 ns variation. Not much – about 1 step. So maybe it is good enough as-is, with no need to apply any temperature compensation, or to put everything into a thermostated box.

Avantek S081-0321 YIG oscillator: not oscillating at all

One of the best sources of microwave signals still is the YIG oscillator (YTO). These do require a good amount of power, magnetic coils, etc., but provide stable and rather low noise output, and good modulation capability. The core element is a small YIG sphere, placed in a magnetic field.

However, for the current unit under investigation (from an 18-26 GHz frontend), type S081-0321, 8.0-13.4 GHz, all the magnetic field and effort is wasted – no output detectable at all, not even a faint signal (checked with various equipment). Knocking it with a (small!) hammer – no effect. Varying the coil current – no effect.
Current consumption on the 15 V rail is normal.



Well, with all the basics checked, what to do with such a hermetically sealed unit, other than using it to satisfy my curiosity about its internals? I hope to trace the defect to some specific part.

But before we consider more destructive measures, let’s try to re-tune the YIG by slightly adjusting the YIG sphere. This is possible through the side opening, which is usually welded shut, but can be drilled open rather easily.


Still no luck, no signal, even after turning the YIG quite a bit.

To look inside, I carefully removed the top weld seam on a lathe, and then you can pry the case open.


What you can see is pretty straightforward, despite all the gold wires. There is an input voltage regulator, from the +15 V rail down to 8 volts (measured about 8.15 V); this is then distributed to the 4 active parts via resistors (the bluish elements). The voltage at the resistors is about 4.3 V, so all stages seem to be adequately powered, with current flowing as usual. Still no signal. Also probed other parts of the circuit, with a thin wire, under the microscope. No obvious defect. The gold wires and contact points reveal a good amount of adjustment done by placing/removing bond wires as needed, to adjust bias currents, and probably also frequency response, etc.


The coil – rather, the coils. The thick wire is the main tuning coil, which accepts 0.4~0.6 Amps; the small coil around the magnetic center pole is the FM modulation coil. This one is for much lower current, but higher bandwidth modulation. All is sealed and soaked with epoxy resin. Note the hand-made labels, which may explain the cost of these units if purchased new… looks like US-style handwriting to me.


Well, it seems that fixing this is beyond what I can do here with the tools at hand. So I will need to look for a spare/used 8-13.4 GHz YIG/YTO somewhere.

A precision pendulum clock, and an even more precise time interval counter

Several years ago, I managed to get hold of a rather special piece, an electromechanical master clock, with an Invar temperature-compensated precision pendulum. Such clocks were used to control various remote clocks at train stations, in large factories, or huge governmental offices, etc.

These clocks have long been superseded by crystal oscillators; nevertheless, they are marvelous pieces of precision engineering, and it has been a long-held thought of mine to measure how accurately this time-piece performs, not only long-term (which can easily be timed by daily checks), but also short-term, for each individual tick.

The pendulum has a 1/2 period of 0.75 seconds, which is quite common for such kinds of electromechanical clocks. Only the wealthiest businesses opted for a 1 m (full seconds) pendulum, and the instrument is large enough anyway, even at 3/4 length.
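
For a feel of the sizes involved, the simple-pendulum formula T = 2π·sqrt(L/g) gives the equivalent length for a given half-period. A real compensated pendulum is a physical pendulum, so this is only approximate, but it shows why the 0.75 s instrument is so much shorter than a full “seconds pendulum”:

```c
#include <math.h>

/* Equivalent length of a simple pendulum for a given half-period:
   T = 2*pi*sqrt(L/g)  =>  L = g*(T_half/pi)^2.
   With g = 9.81 m/s^2: half-period 1.0 s -> ~0.99 m,
   half-period 0.75 s -> ~0.56 m. */
double pendulum_length_m(double half_period_s)
{
    const double g  = 9.81;                   /* m/s^2 */
    const double pi = 3.14159265358979323846;
    return g * (half_period_s / pi) * (half_period_s / pi);
}
```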


The dial is quite beautiful, and every piece appears to be hand-made. There is no date on the clock, but the age of the capacitor suggests a 1920~1930 time-frame. I fully rebuilt the mechanics about 3 years ago: all degreased, checked, and freshly lubricated with clock oil. There was no need for any other repair; all parts and bearings are still in good shape.


The clock has a rewind mechanism that is activated once per minute, and also turns a polarity reversal switch, used to steer the remote clocks.


To get the signal out of the clock, a small light gate has been set up inside, with a 1 mm wire connected to the lower end of the pendulum (in a way that ensures virtually no movement of the wire). The wire interrupts the light gate approximately at the lowest point of the pendulum, i.e., where it has its highest speed – this is to ensure sharp edges of the signal.


The setup – currently it is just a set of boards running the TIC4 and LOGGER5 software discussed earlier. A Dell OptiPlex FX160 is used to collect the data, but you can use any kind of computer that can handle RS232 input.


Here are some first results – more to come. Phase is given in seconds, and the horizontal axis shows the tick count – about 115k ticks per day. The software uses narrow time gating to sort out any incorrect ticks, caused by electrical interference or other random disturbances. There are no more than 1-2 such events every day. The phase reconstruction algorithm also handles any missing ticks, and the measurement accuracy is not compromised if one or more tick events are not registered, for whatever random reason.


Removing the linear part – not a lot of residual phase error left: plus/minus a fraction of a second a day. I have now slightly slowed down the clock by removing a tuning weight from the pendulum.


As described earlier, the LOGGER5 setup also records real time (not to a high degree of precision, just to keep track of time and day), and temperature/pressure. See the earlier posts, LOGGER5, and TIC4.



Below, an interesting feature of the data – with the re-winding of the clock every minute, there is some slight variation of the frequency. It is really not much, considering the notable “CLICK” with every re-wind, done by a large magnetic coil actuating a lever mechanism.


The hourly variation is most likely related to the travel of the minute hand – I will check this later, simply by removing the hands!


Some first correlation of frequency vs. pressure – but I will need to collect much more data, and then correlate with both pressure and temperature.


Finally, the Allan deviation of the clock, determined from a few days’ worth of data. Short-term stability is compromised by the bi-directional pick-up of the pendulum (detection is at the rising edge of the pulse, which corresponds to two different positions of the pendulum relative to the light gate – because of the finite thickness of the wire).


Perfect time: upgrade to a Motorola M12+ receiver, and new GPS antenna

For years, a Motorola UT+ GPS timing receiver has served me well as a frequency reference and source of accurate time (and location). While I primarily use a DCF77-locked 10 MHz OCXO, the GPS time is useful for various purposes – be it only to confirm that DCF77 is actually delivering the proper time.

One drawback of the Motorola UT+ is the rather large “sawtooth” error, which is caused by the quantization of the 1 pps signal, derived from a 9.54 MHz clock. This results in a ±52 ns phase inaccuracy – which can be corrected, but only with further effort.
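
The ±52 ns follows directly from the clock: the 1 pps edge can only be placed on a 9.54 MHz clock edge, so it lands within half a clock period of the true second.

```c
/* Sawtooth amplitude of a quantized 1 pps output: half the period of
   the clock it is derived from, in ns. For 9.54 MHz this gives ~52 ns. */
double sawtooth_ns(double clock_hz)
{
    return 0.5e9 / clock_hz;
}
```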

The later model, by now dated and available at low cost, the Motorola M12+, is much better in this respect, featuring a ±13 ns sawtooth – which is not a lot, and good enough for most purposes without any further corrections.

Below, some tests of an OCXO vs. the GPS 1 pps pulses, for an OCXO under test (10 MHz, divided down to 100 kHz, and phase displayed in microseconds).



This is the small board – not a thing of beauty, but working. The only parts needed are +3.0 V and +5 V (actually using +4.4 V) voltage regulators: 3.0 V for the M12+, 4.4 V for the GPS antenna.
The 3.0 V rail also powers a MAX3232 TTL-to-RS232 converter.


Also procured a second-hand GPS timing antenna – this one has a nice radome, a quadrifilar helix element, and a 26 dB amplifier to compensate any cable losses. The cable, LMR-195, features N to SMA connectors, and a considerable amount of PVC tape was used to protect the N connector from the elements. It would still be better to use special outdoor N connectors, but, sorry, I don’t have any.



A handy program to control the GPS – TAC32. The usual procedure is to carry out a location survey, which will take about 2-3 hours, and then continue in position hold/timing mode.

One drawback of my location – there is no way to get a full 360 deg view, so reception is limited to the more southern satellites. But usually, 6-8 satellites are in sight.

Still contemplating whether it is worthwhile to put this in a larger box, together with a 10 MHz OCXO, and possibly a DS1023-50 delay line to implement a hardware sawtooth correction. Maybe a good project for winter time.

DCF77 vs. GPS time comparison: not a lot of uncertainty…

Some folks were asking about the accuracy of the DCF77 10 MHz standard described earlier, DCF77 10 MHz – which has a Piezo brand OCXO, steered by a long-time-constant PLL, locked to the DCF77 77.5 kHz carrier.

But how to assess the short and long term stability of such a ‘standard’ in practical terms? Well, the short-term accuracy will simply be that of the Piezo OCXO, plus some noise injected by the power supply. Mid- and long-term, the drift will be determined by the DCF77 master clock (which is dead accurate), and the propagation conditions of the long wave signal (which are far worse).

With my location at Ludwigshafen, Germany, I’m reasonably close to the DCF77 transmitter – maybe 70 miles? So there is hope that the transmission induced effects are not all that bad.

To measure the mid and long term stability, see below two plots of the DCF77-locked phase of the Piezo OCXO, vs. the instantaneous phase of GPS – stable to 40 ns or better, and obtained from a Motorola M12+ timing receiver. The measurements were done with an HP 5335A counter, measuring the time interval from the GPS 1 PPS signal to the rising edge of a 10 kHz signal, derived from the 10 MHz OCXO by a good divider (using an ADF41020 REF input – R divider routed to the MUX output).

dcf dcf vs gps time day 57603

dcf dcf vs gps time day 57604

In short – DCF77 tracks GPS extremely well, and the OCXO phase is stable to within a few tens to 100 ns. In practical terms, 1 second of observation time would be well enough to calibrate any frequency standard to 1 ppm or better, by comparison with the DCF77-locked OCXO. In other words, the DCF77-locked OCXO’s instability appears to be dominated by the propagation of the DCF77 signal, more than anything else.