Hi all,
I am in the midst of a control systems upgrade that involves Modbus serial comms to a Bently Nevada 3300 vibration monitoring system.
I've found the following detail in the system manual:
"Internally the Serial Interface converts the backplane multiplexed static data signal into a digital value using an 8-bit analog-to-digital convertor. When an analog data request message is received on Modbus protocol, this 8-bit value is shifted left 4 bits to represent a 12-bit value. Since the maximum value the 8-bit convertor can have is 255, then the maximum 12-bit value is 255(16) = 4080. The least significant four bits of the 12-bit binary word are always zero."
The Modbus function code reads this in as a 16-bit integer, though, so what happens to the other 4 bits? Are the most significant 4 bits zeroed, leaving the 12-bit value in the low 12 bits (so the original 8-bit word, 0-255, sits in the middle of the 16 bits)? Or are the least significant bits zeroed, so the low byte is all zeros and the original 8-bit word sits in the most significant byte? And if the signal going into the converter is scaled -40mm to +40mm, how would the conversion back to engineering units (EU) be done once it arrives as a 16-bit integer?
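In case it helps frame the question, here's a sketch of the two readings and how I'd scale each back to EU. Note the scaling assumption is mine, not from the manual: raw 0 = -40mm and raw full scale = +40mm, linearly in between.

```python
# The two interpretations I can see, sketched in plain Python.
# Assumed scaling (mine, not confirmed by the manual):
# raw 0 -> -40 mm, raw full scale -> +40 mm, linear in between.

def eu_if_low_justified(reg: int) -> float:
    """Interpretation A: 12-bit value in bits 0-11, top 4 bits zero.
    Full scale is 4080 (255 << 4)."""
    return -40.0 + (reg / 4080.0) * 80.0

def eu_if_high_justified(reg: int) -> float:
    """Interpretation B: original byte in bits 8-15, low byte zero.
    Full scale is 65280 (255 << 8)."""
    return -40.0 + (reg / 65280.0) * 80.0

# A mid-scale ADC reading of 128 would arrive as 2048 or 32768
# depending on the layout; either way it should map back to ~0.16 mm:
print(eu_if_low_justified(128 << 4))   # ~0.157 mm
print(eu_if_high_justified(128 << 8))  # ~0.157 mm
```

So the EU value comes out the same either way, provided I divide by the right full-scale count, which is exactly why I'd like to confirm which layout the 3300 actually uses.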
Appreciate any experience anyone has with this.
Thanks,
Michael