Report on initial tests with the VA-HDR1 chip and a TYPE 1 sensor.

 

H. Pernegger (Oct. 28, 1997)

 

 

This test report summarizes initial tests on the new 128-channel VA-HDR1 chip received on Oct 7. Most of the tests were performed on an assembly of a type 1 sensor and a VA-HDR1 128-channel chip.

 

Some general comments on the tests:

 

The hybrid is a single-chip PCB hybrid. The detector was typically at 70V bias during the tests. I used the following parameters for the chip:

 

Tests done until now were:

 

The tests mainly focused on:

 

Tests using the internal calibration feature of the chip:

 

I tested 3 chips and the calibration worked for all channels of all chips. They were mounted on a simple PCB hybrid I got from Alan Rude/CERN. The calibration signal is a good and fast test to check whether all channels on the chip are working and whether their gain variation is acceptable. It will require substantial work and a good understanding of the control cards if we want to do an absolute calibration to better than 5% with it.

 

The channel-to-channel gain variation without a sensor was better than the 5% specification; typical variations were 2%.

 

I changed the peaking time only by changing the VFS setting, over a range from +1.7V (shortest peaking time: 600ns) to -0.2V (longest possible peaking time: 2.0us). Intermediate values are:

+130mV: 1.2us

+20mV: 1.5us

 

Figure 1 shows the output signal in response to a 2.2 mV test signal on the external 1.8pF test capacitor at different VFS settings. The amplitude at 1.2us peaking (the setting used in the testbeam) is 45mV. The signal is the differential output of a video amplifier on the CERN repeater board. Using the test pulse calibration I found a calibration constant of 549 e-/mV output signal. I estimate the error on this value at 10%, resulting from the small signal size and the capacitor tolerances.
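The calibration constant follows from the injected charge (test voltage step times test capacitance) divided by the output amplitude; a minimal sketch of the arithmetic, assuming only the elementary charge:

```python
# Test-pulse calibration: injected charge divided by output amplitude.
E_CHARGE = 1.602e-19  # C per electron

def calib_constant(v_test_mV, c_test_pF, v_out_mV):
    """Calibration constant in electrons per mV of output signal."""
    q_injected = (v_test_mV * 1e-3) * (c_test_pF * 1e-12)  # injected charge in coulombs
    return q_injected / E_CHARGE / v_out_mV

# 2.2 mV step on the 1.8 pF test capacitor, 45 mV output at 1.2 us peaking
print(calib_constant(2.2, 1.8, 45.0))  # ~549 e-/mV
```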

 

Figure 1: calibration signal at different settings of VFS

 

 

Tests with a 241Am source:

 

I used an Am source because it provides a clear energy deposition of 60keV in silicon. The detector was operated at 110V during this test. The peaking time was set to 1.2us.

The scope was triggered on the output signal with a 50% threshold on the amplitude.

Figure 2 shows a single signal (top) and the average signal for 128 triggers. The amplitude is 29.3mV. The charge deposition from the Am source is 16700 e-, which yields a second calibration constant of 570 e-/mV.
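As a cross-check, the same constant follows from the deposited charge divided by the measured amplitude; a minimal sketch, assuming roughly 3.6 eV per electron-hole pair in silicon:

```python
# 241Am cross-check: 60 keV deposited in silicon, 29.3 mV average amplitude.
EV_PER_EH_PAIR = 3.6  # approximate energy per electron-hole pair in Si

n_electrons = 60.0e3 / EV_PER_EH_PAIR   # ~16700 e-
print(n_electrons / 29.3)               # ~570 e-/mV
```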

 

The noise was measured as the r.m.s. of the output signal over a period of 100us.

An r.m.s. noise of 2.23mV was measured for channel 48, which is connected to the sensor. The last 5 channels on the chip were not connected to the detector; their average noise was approximately 1.9mV. This measurement includes all common mode noise.
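For reference, a minimal sketch of this r.m.s. estimate on a digitized trace (the synthetic trace is a placeholder, not the actual scope data):

```python
import numpy as np

def rms_noise(trace_mV):
    """R.m.s. of the output signal about its mean, e.g. over a 100 us scope trace."""
    trace = np.asarray(trace_mV, dtype=float)
    return float(np.sqrt(np.mean((trace - trace.mean()) ** 2)))

# Synthetic example trace standing in for the digitized scope data
rng = np.random.default_rng(0)
print(rms_noise(rng.normal(0.0, 2.2, size=1000)))  # ~2.2 mV
```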

 

The chip pedestal is shown in figure 3. It shows a peak-to-peak variation of 640mV at 220mA buffer current. At 1.2us signal peaking time this variation corresponds to 364.6 ke-, i.e. 15 MIPs.
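In electrons this is simply the peak-to-peak variation times the Am-source calibration constant; a minimal sketch, where the MIP charge of roughly 24000 e- for a fully depleted sensor is my assumption:

```python
# Pedestal variation expressed in electrons and MIP equivalents.
CALIB_E_PER_MV = 570.0   # Am-source calibration constant at 1.2 us peaking
MIP_ELECTRONS = 24000.0  # assumed MIP signal for a fully depleted sensor

pedestal_pp_e = 640.0 * CALIB_E_PER_MV               # close to the 364.6 ke- quoted above
print(pedestal_pp_e, pedestal_pp_e / MIP_ELECTRONS)  # ~365000 e-, ~15 MIPs
```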

 

 

Testbeam measurements:

 

Two hybrids, one with a type 1 sensor and one with a type 5 sensor, were tested in a 100 GeV/c pion beam. The type 5 detector showed extremely high noise due to its high leakage current (40-70uA). Data from the type 5 detector will have to be analyzed using the 4 available reference planes.

 

The reference telescope consists of 4 x and y reference planes with 2x2 cm silicon strip detectors. They were operated at a nominal signal-to-noise ratio of 130-200 (2us peaking time) and an intrinsic spatial resolution of 1.3um. They can be used to obtain a high-precision prediction of the hit position for studies of cross talk and signal sharing.

 

Data were taken in the following configurations:

 

RUN #      Bias voltage [V]   Peaking time [us]   Beam position type 1 / type 5   # of triggers
658, 659   70                 2.0                 Center / 2nd row                200 k
683        70                 1.2                 Center / 2nd row                100 k
684        50                 1.2                 Center / 2nd row                32 k
685        110                1.2                 Center / 2nd row                29 k
686        110                1.2                 Center / 2-3rd row              70 k
687        110                1.2                 Center / 4th row                30 k
708        70                 0.6                 Center / 3-4th row              84 k
709        70                 0.6                 Center / 1-2nd row              120 k

 

The data from these runs are awaiting analysis. I had a very brief first look at RUN 683 to verify that the type 1 detector worked during the tests. In this "first look" the reference system was not used; it relies on the type 1 detector signals only. I processed only the first 2000 events of run 683 with a very basic noise estimate. I still expect improvements, especially in the noise figure, once the common mode noise is properly treated in the analysis program.

 

Figure 4: Pedestal recorded for the type 1 and type 5 detectors

Figure 4 shows the pedestal variation as measured in the testbeam at a peaking time of 1.2us and Vbias = 70V on the detector. I use the earlier Am-source measurement to establish a calibration of the ADCs used in the testbeam. The peak-to-peak pedestal variation of 640mV translates into a maximum pedestal variation of 984 ADC counts, i.e. 1.54 ADC counts/mV of output signal.

Combining this with the average of the Am-source calibration and the test pulse calibration yields a calibration constant for the testbeam ADC of 363 e-/ADC count.
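The chain from output voltage to ADC counts to electrons is a short calculation; a minimal sketch of the arithmetic used above:

```python
# Testbeam ADC calibration chain: mV -> ADC counts -> electrons.
pedestal_pp_mV = 640.0     # peak-to-peak pedestal variation at the repeater output
pedestal_pp_adc = 984.0    # the same variation as seen by the testbeam ADC

adc_per_mV = pedestal_pp_adc / pedestal_pp_mV   # ~1.54 ADC counts/mV
e_per_mV = 0.5 * (570.0 + 549.0)                # average of Am-source and test pulse calibrations
e_per_adc = e_per_mV / adc_per_mV               # close to the 363 e-/ADC count used in the text
print(adc_per_mV, e_per_adc)
```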

 

Figure 5 shows the noise obtained in the testbeam for the type 1 and type 5 detectors. The solid line shows the r.m.s. of the ADC-count pedestal distribution without any common mode noise correction. The superimposed dots indicate the r.m.s. noise after one iteration of common mode noise correction. In this correction all channels with a signal larger than 2.7 times the r.m.s. of the ADC pedestal distribution were rejected; the average signal of the remaining channels was then calculated and subtracted from every channel.
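A minimal sketch of one iteration of this correction on a single event (the array shapes, the synthetic example and the exact bookkeeping are mine, not the actual analysis program):

```python
import numpy as np

def common_mode_correct(signals_adc, pedestal_rms_adc, cut=2.7):
    """One iteration of common mode correction for a single event:
    channels above cut * pedestal r.m.s. are excluded, the mean of the
    remaining channels is taken as the common mode shift and subtracted."""
    signals = np.asarray(signals_adc, dtype=float)
    rms = np.asarray(pedestal_rms_adc, dtype=float)
    quiet = np.abs(signals) < cut * rms
    common_mode = signals[quiet].mean() if quiet.any() else 0.0
    return signals - common_mode

# Example: 128 pedestal-subtracted channels with a 5 ADC count common mode shift
rng = np.random.default_rng(1)
event = rng.normal(0.0, 3.2, size=128) + 5.0
corrected = common_mode_correct(event, np.full(128, 3.2))
```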

 

On the type 5 detector only every second VA channel is connected to the sensor, which allows the noise of the chip alone to be estimated. On the type 1 detector the last 5 channels (channels 124-128) of the VA chip are not connected to the sensor. The following table gives the average noise for connected and unconnected channels on the type 1 and type 5 detectors.

 

 

                                   Before common mode correction   After common mode correction
VA only (type 1, not connected)    3.2 ADC                         2.1 ADC
VA with type 1                     3.44 ADC                        2.73 ADC
VA only (type 5, not connected)    3.014 ADC                       2.23 ADC

 

I estimate the noise of the VA in the testbeam from the average noise after common mode correction and, with the calibration constant of 363 e-/ADC count, obtain the following results; a short sketch of the conversion follows the table.

 

 

                   ENC (e-) @ 1.2us
VA only            785
VA with type 1     990
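These numbers are simply the common-mode-corrected ADC noise multiplied by the 363 e-/ADC count constant; a minimal sketch, where averaging the unconnected type 1 and type 5 channels for the VA-only figure is my assumption:

```python
# ENC from common-mode-corrected ADC noise, using 363 e-/ADC count.
E_PER_ADC = 363.0

va_only_adc = 0.5 * (2.1 + 2.23)   # assumed: average of the unconnected type 1 and type 5 channels
va_type1_adc = 2.73                # channels bonded to the type 1 sensor

print(va_only_adc * E_PER_ADC)     # ~785 e-
print(va_type1_adc * E_PER_ADC)    # ~990 e-
```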

 

The type 1 detector had a combined leakage current of approximately 5-8uA for guard ring and active area. The two polysilicon resistors measured on its wafer are 24MOhm and 6MOhm.

 

Figure 6 displays the single pad signal together with a Landau fit to the distribution. The peak value is obtained from the Landau fit, which yields a peak charge deposition of 19600 e-. This value is significantly below the expected MIP signal. In later tests it was found that the detector is not fully depleted at 70V bias.

 

 

Figure 7 shows the single pad signal-to-noise distribution for the type 1 sensor. The distribution peaks at 19:1.

 

Source Measurements at MIT

 

I repeated the tests at MIT after the testbeam. The test station uses the new NI-ADC card to read out the detector. The card generates all necessary control signals and digitizes the repeater analog output at 1.3MHz. The hybrid and repeater boards are the same as in the testbeam.

 

The trigger is provided by a 1x1x1cm3 scintillator below the sensor. I tried a low intensity 106Ru source and a high intensity, collimated 90Sr source. The results are identical in terms of the signals measured from the Si detector.

 

After I contacted IDE about their results, they informed me that they measured a noise behaviour of ENC = 660 + 4.2*C e- at a peaking time of 1.5us, where C is the input load capacitance. I therefore repeated my measurements at their settings to get comparable results.
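A minimal sketch of evaluating this parametrisation for a given input load; the assumption that C is in pF (with the slope in e-/pF) is mine, following the usual convention for such specifications:

```python
# IDE-quoted noise parametrisation at 1.5 us peaking time.
def enc_electrons(load_pF, offset=660.0, slope=4.2):
    """ENC = 660 + 4.2 * C, assuming C is the input load capacitance in pF."""
    return offset + slope * load_pF

print(enc_electrons(10.0))  # ~702 e- for a hypothetical 10 pF load
```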

 

 

 

 

 

Noise measurement:

 

I tested the noise on the testbeam type 1 hybrid as well as on a second hybrid supplied by IDE, which carries two 128-channel chips without a sensor.

I used the internal calibration to determine the gain of the system with the IDE hybrid: a 21.2 mV voltage step on the 1.8pF test capacitor resulted in a 617mV output signal, which yields a calibration constant of 386e-/mV.
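The same test-pulse arithmetic as for the external calibration applies; a minimal, self-contained sketch covering both this hybrid and the type 1 hybrid discussed further below:

```python
# Internal calibration: electrons injected per mV of output signal.
E_CHARGE = 1.602e-19  # C per electron

def e_per_mV(v_step_mV, c_test_pF, v_out_mV):
    """Calibration constant in electrons per mV of output signal."""
    return (v_step_mV * 1e-3) * (c_test_pF * 1e-12) / E_CHARGE / v_out_mV

print(e_per_mV(21.2, 1.8, 617.0))  # ~386 e-/mV on the IDE two-chip hybrid
print(e_per_mV(21.2, 1.8, 570.0))  # ~418 e-/mV on the testbeam type 1 hybrid (417 quoted below)
```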

 

The noise distribution after common mode correction is shown on the left in figure 8. The top right plot shows the r.m.s. noise versus the 256 channels.

 

Figure 8: rms noise after common mode correction on the two-chip IDE hybrid

 

The measured average noise corresponds to a (VA-only) noise of 610e- at a peaking time of 1.5us.

 

I repeated this measurement on the type 1 hybrid. The same calibration procedure (a 21.2 mV step on the 1.8pF capacitor giving a 570mV output amplitude) yields a calibration constant of 417e-/mV.

Figure 9 gives the r.m.s. noise after common mode correction measured on the testbeam type 1 sensor and chip. Channel 0 is sampled before the first channel is switched to the VA output nodes; it therefore estimates the noise contribution coming from the repeater and cables.

Channels 124 through 128 are not connected to the sensor, so for these channels the plot shows the VA-chip noise alone.

 

 

The following table gives the ENC for bonded and unbonded channels:

 

 

                                 ENC [e-] @ 1.5us
VA only                          659
VA + type 1 sensor @ 70V bias    863

 

Figure 9: r.m.s. noise of the VA + type 1 sensor in the 90Sr source test.

 

I attribute the noise improvement with respect to the testbeam to the slower shaping (1.5us instead of 1.2us). The noise variation shows a clear structure in groups of 11 channels; these groups correspond to the pad columns on the sensor.

 

Signal measurement

 

Figure 10 shows the single pad signal including the pedestal for a peaking time of 1.5us and a sensor bias voltage of 70V. The ADC data were corrected for their pedestal and common mode shift. The common mode shift measured during the source tests is comparable to the one measured in the testbeam, i.e. of the same order as the intrinsic chip noise.

 

 

The signal distribution shows three components: the pedestal peak, hits with charge shared between two pads, and the single pad signal. A relatively large fraction of hits share their charge between pads since the pad size is only 1x1mm2.

 

 

Figure 10: 90Sr source test with type 1 sensor at 70V bias.

 

Figure 11 shows a comparison of the signal spectra recorded with the type 1 sensor at 70V bias (lower spectrum) and 110V bias (upper spectrum). The clear shift confirms that the detector does not fully deplete at 70V.

 

The signal in electrons is obtained by using the calibration constant (417e-/mV) determined from the internal calibration. At 110V the signal distribution peaks at 22000 electrons.

 

In the usual capacitance measurements carried out on this wafer no depletion point could be found. The capacitance measurement covers the range of 0 to 100V.

To determine where the detector depletes I made a bias voltage scan, recording the most probable signal (peak) for different values of the backplane voltage. The voltages given are corrected for the voltage drop across the resistor in the backplane bias line (180kOhm). The most probable signal is displayed in figure 12. The measurement is not yet completed; I am still working with Pradeep on the spread. At each voltage point 5000 triggers were recorded. The peak value is determined by taking the average signal of the 5 most populated bins.
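A minimal sketch of this peak estimate from a histogram of the signal; the binning and the count-weighted averaging of the 5 most populated bins are my assumptions about the details:

```python
import numpy as np

def peak_from_histogram(signals_e, bin_width_e=500.0, n_bins_used=5):
    """Estimate the most probable signal as the count-weighted mean
    of the centres of the most populated histogram bins."""
    signals = np.asarray(signals_e, dtype=float)
    bins = np.arange(signals.min(), signals.max() + bin_width_e, bin_width_e)
    counts, edges = np.histogram(signals, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    top = np.argsort(counts)[-n_bins_used:]   # indices of the most populated bins
    return np.average(centres[top], weights=counts[top])

# Example with a synthetic, Landau-like sample standing in for the 5000 triggers per point
rng = np.random.default_rng(2)
fake_signals = 20000 + 2000 * rng.standard_gamma(2.0, size=5000)
print(peak_from_histogram(fake_signals))
```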

 

 

 

 

Figure 11: Signal on type 1 detector at 70V bias (lower spectrum) and 110 V bias (upper spectrum).

 

 

 

 

 

 

 

Figure 12: Peak signal versus bias voltage

 

This preliminary bias voltage scan indicates a depletion voltage of 100V, which might explain why it was not possible to observe a clear kink in the CV measurements.

 

At full depletion the type 1 sensor operated at a signal-to-noise ratio of 25:1.

 

 

Summary

 

To summarize the test results, the table below compares the IDE specifications for the 128-channel VA-HDR1 with my measurements.

 

 

 

Peaking time
    IDE nominal:       nominal 1.0us
    This measurement:  minimum 0.6us, maximum 2.0us

ENC noise
    IDE nominal:       original specification ENC = 500 + 7*C; they quote results at 1.5us peaking time of ENC = 660 + 4.2*C
    This measurement:  measured at 1.5us: ENC = 610 to 660; I don't have a measurement for the slope yet

Linear range
    IDE nominal:       +/- 80 MIP
    This measurement:  not yet fully tested, but certainly above 40 MIPs

Channel-to-channel gain variation
    IDE nominal:       < 5%
    This measurement:  typically 2%

Chip-to-chip gain variation
    IDE nominal:       < 10%
    This measurement:  no real measurement yet, but the gain variation of the 3 chips tested so far was on the order of 10% or better

Pedestal variation channel-to-channel
    IDE nominal:       less than 15% of full range
    This measurement:  15 MIP at 1.2us, 11 MIP at 1.5us