We are currently testing a DMM7510. Our requirement is to take 1650 measurements starting from the rising edge of an external trigger. The DMM measurement rate is set to 30303 readings/s, so each measurement takes 33 µs and the 1650 measurements take about 54 ms. When the expected number of measurements is in the default buffer (defbuffer1), we read them. The trigger signal is at 10 Hz (period = 100 ms), which leaves us 100 - 54 = 46 ms to read the data before the next trigger. The DMM is configured to send its data in binary form (8 bytes per value). When reading the data we also request the relative time of each measurement, so we need to read 26400 bytes (1650 * 2 * 8). Our problem is that quite often the readout takes more than 40 ms and the next trigger is not taken into account.
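To make the timing budget explicit, here is the arithmetic from the numbers above (all figures are taken from the post itself):

```python
# Timing budget for the acquisition described above.
RATE_HZ = 30303            # digitize rate -> ~33 us per reading
N_READINGS = 1650
TRIGGER_PERIOD_MS = 100.0  # 10 Hz external trigger
BYTES_PER_VALUE = 8        # double-precision binary
VALUES_PER_READING = 2     # READ + REL timestamp

acquire_ms = N_READINGS / RATE_HZ * 1000           # ~54.5 ms of acquisition
transfer_budget_ms = TRIGGER_PERIOD_MS - acquire_ms  # ~45.5 ms left for readback
payload_bytes = N_READINGS * VALUES_PER_READING * BYTES_PER_VALUE  # 26400 bytes

print(f"acquisition:        {acquire_ms:.1f} ms")
print(f"readback budget:    {transfer_budget_ms:.1f} ms")
print(f"payload per burst:  {payload_bytes} bytes")
```

A ~35 ms stall inside a ~46 ms budget leaves almost no margin, which is why the next trigger gets missed.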
The computer reading the data is an Intel i7 host (running Ubuntu) doing only this, so I don't think the problem is on the host side. Analyzing the traffic on the wire with Wireshark makes it clear that when the transfer takes more than 40 ms, it is because the DMM suddenly, and for no apparent reason, stops sending its data for around 35 ms.
Do you have any idea what the problem could be?
Thanks for your help.
PS: The SCPI commands used to run the DMM are:
SENSe:TRIGger:DIGitize:STIMulus EXTernal (only once)
And to get the data, the following runs in an infinite loop:
TRACe:ACTual? (to query how many readings are in the buffer)
TRACe:DATA? 1, 1650, "defbuffer1", READ, REL (to read the data)
TRACe:CLEar (to clear the buffer once the data has been read)
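For reference, the readback loop above can be sketched in Python. The instrument I/O is injected as callables so the control flow is self-contained; with pyvisa you would pass the instrument's `query` and `query_binary_values` methods here (resource address and session setup are omitted, and the `*OPC?` appended to the clear is my addition to keep every transaction a query):

```python
import time

N_READINGS = 1650

def read_one_burst(query, query_binary_values):
    """Poll until the buffer holds a full burst, then fetch and clear it.

    query              -- callable taking a SCPI query string, returning a string
    query_binary_values -- callable taking a SCPI query string and a datatype,
                           returning a list of floats (pyvisa-style)
    """
    # Poll TRACe:ACTual? until all readings from this trigger are buffered.
    while int(float(query("TRACe:ACTual?"))) < N_READINGS:
        time.sleep(0.001)

    # READ and REL values interleaved, 8-byte doubles -> 2 * N_READINGS values.
    values = query_binary_values(
        f'TRACe:DATA? 1, {N_READINGS}, "defbuffer1", READ, REL',
        datatype="d",
    )

    # Clear the buffer before the next trigger arrives.
    query("TRACe:CLEar; *OPC?")

    return values[0::2], values[1::2]  # readings, relative timestamps
```

This only restates the poster's sequence; it does not fix the 35 ms stall, but it makes the per-iteration transaction count (two queries plus the readback) easy to see.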
- Keithley Applications
I don't have a lot of experience with the 7510, but I would try several approaches. Switching to TSP rather than SCPI might help. Over GPIB, a single transaction typically takes anywhere from 5 ms to 20 ms. A query requires two actions, a GPIB write followed by a GPIB read, and your sequence performs two queries per loop, so that transaction overhead is doubled. Given your timing budget, you may simply be paying for system overhead.
Using TSP, you can create a function within a script that accepts a trigger, performs the required number of readings, and automatically writes the data to the output buffer, then repeats. The operation then no longer conforms to the IEEE 488.2 command/response model, but it helps overcome the communications overhead.
You can use the status system and generate a service request in your code to wait for readings. The simplest approach is to perform a GPIB read every 100 ms. Set the GPIB timeout to at least 300 ms, and set the number of bytes to read higher than the number actually expected for 1650 readings; the GPIB read will terminate as soon as EOI is received, regardless of the byte count requested.
Do you need double precision? Depending on the configured precision, an ASCII format can reduce the number of bytes per reading and hence the time needed to output the data.
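A quick back-of-the-envelope comparison of the payload sizes involved. The ~15 characters per ASCII value is an assumption (actual width depends on the instrument's configured ASCII precision), and the 4-byte figure assumes the instrument's 32-bit real output format is acceptable for your measurement:

```python
# Rough payload comparison for 1650 readings with two values each (READ + REL).
N = 1650 * 2

double_bytes = N * 8    # 8-byte binary doubles, as the poster uses now
single_bytes = N * 4    # 32-bit real format, if single precision is enough
ascii_bytes  = N * 15   # assumed ~15 chars per value including separator

print(f"double: {double_bytes} B, single: {single_bytes} B, ascii: ~{ascii_bytes} B")
```

With full-precision ASCII the payload is not necessarily smaller than 8-byte binary, so the win comes from reducing precision, whether that is done in ASCII or by switching to a 32-bit binary format.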