Tektronix Technical Forums are maintained by community involvement. Feel free to post questions or respond to questions by other members. Should you require a time-sensitive answer, please contact your local Tektronix support center here.
In the datasheet for the Keithley 3706A it is written that the sampling time can be set between 10 microseconds and 250 milliseconds.
I am using a Keithley 3706A with a 3730 matrix card.
The problem is that when I perform 100 measurements with a 0.1 s sampling time on a single channel (3730 matrix card), my total measurement time is 9.905572891 seconds.
Error[%] = ((Real_time - Ideal_time) / Ideal_time) * 100
Error = ((9.905572891 - 9.9) / 9.9) * 100 = 0.0563%
On the other hand, when I perform 100 measurements with a 0.2 ms sampling time on a single channel (3730 matrix card), my total measurement time is 0.025583506 seconds.
Error = ((0.025583506 - 0.02) / 0.02) * 100 = 28%
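For reference, both error figures can be reproduced with a short host-side calculation (a minimal sketch; the timing numbers are taken directly from the measurements quoted above):

```python
def percent_error(real_time, ideal_time):
    """Relative timing error in percent: ((real - ideal) / ideal) * 100."""
    return (real_time - ideal_time) / ideal_time * 100.0

# 100 readings at 0.1 s interval: ideal total 9.9 s, measured 9.905572891 s
print(round(percent_error(9.905572891, 9.9), 4))   # -> 0.0563 (%)

# 100 readings at 0.2 ms interval: ideal total 0.02 s, measured 0.025583506 s
print(round(percent_error(0.025583506, 0.02), 1))  # -> 27.9 (%), i.e. ~28%
```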
As you can see, the error is really large.
I would like to know how I can avoid these delays when the sampling time is shorter.
Please find attached Lua code.
- Keithley Applications
Even with delays and autozero turned off, there is still some overhead.
Using your code with the 200 µs aperture and printing out all the timestamps from the 100 readings, taking the difference between successive timestamps shows an average delta of 256 µs versus your desired 200 µs.
Code:
-- Print the relative timestamp of each reading in the buffer
for i = 1, reading_buffer.n do
  print(reading_buffer.relativetimestamps[i])
end
Over 100 measurements, that extra 56 µs per reading accounts for the extra 5.6 ms in your final timestamp (0.025583506 s vs. the ideal 0.020000 s).
If the 200 µs sampling interval is important, you will need to set dmm.aperture = (desired interval - overhead).
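As a back-of-the-envelope illustration of that correction (a sketch only — the 256 µs figure comes from the timestamp analysis above, and the actual overhead on your unit may differ, so remeasure the timestamp deltas after adjusting):

```python
desired_interval = 200e-6      # target time between readings, in seconds
measured_avg_delta = 256e-6    # average timestamp delta observed over 100 readings

# Per-reading overhead is whatever the instrument adds on top of the aperture
overhead = measured_avg_delta - desired_interval   # about 56 us

# Shorten the aperture by the overhead so the total interval lands on target
corrected_aperture = desired_interval - overhead   # about 144 us
print(corrected_aperture)
```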
Keep in mind that the timestamp accuracy is 100 µs.