Testing Semiconductor Memories
An Open Notation for Memory Tests
Aad Offerman and Ad J. van de Goor: An Open Notation for Memory Tests (also available in PostScript format)
(presented at the IEEE International Workshop on Memory Technology, Design and Testing, 1997).
Cited by:
- Michael Linder, Alfred Eder, Ulf Schlichtmann, Klaus Oberlander: An Analysis of Industrial SRAM Test Results – A Comprehensive Study on Effectiveness and Classification of March Test Algorithms, IEEE Design & Test 31(3):42-53, June 2014;
- Michael Linder: Test Set Optimization for Industrial SRAM Testing, Dissertation, Fakultät für Elektrotechnik und Informationstechnik, Technische Universität München, 2012-2013;
- Sonal Sharma, Vishal Moyal: FSM Based BIST Architecture, International Journal of Engineering Sciences & Management, April-June, 2012;
- M. Linder, A. Eder, K. Oberlander, M. Huch: Variations of fault manifestation during Burn-In — A case study on industrial SRAM test results, IEEE 17th International On-Line Testing Symposium (IOLTS), 2011, pages 218-221;
- A. Kokrady, R. Mehrotra, T.J. Powell, S. Ramakrishnan: Reducing design verification cycle time through testbench redundancy, 19th International Conference on VLSI Design, 2006, 6 pages (held jointly with the 5th International Conference on Embedded Systems and Design);
- Zaid Al-Ars: DRAM Fault Analysis and Test Generation, Dissertation, Delft University of Technology, June 2005;
- K. Thaller, A. Steininger: A transparent online memory test for simultaneous detection of functional faults and soft errors in memories, IEEE Transactions on Reliability, Volume 52, Issue 4, pages 413-422, December 2003;
- Farbod Karimi, V. Swamy Irrinki, T. Crosby, Fabrizio Lombardi: Parallel testing of multi-port static random access memories, Microelectronics Journal 34(1):3-21, January 2003;
- F. Karimi, F. Lombardi: A scan-BIST environment for testing embedded memories, Proceedings of the Eighth IEEE International On-Line Testing Workshop, 2002, pages 211-217;
- Z. Al-Ars, A.J. van de Goor, J. Braun, D. Richter: A memory specific notation for fault modeling, 10th Asian Test Symposium, 2001, Proceedings pages 43-48;
- Karl Thaller: A Highly-Efficient Transparent Online Memory Test, Proceedings of the 2001 IEEE International Test Conference, p.230, October 30 - November 1, 2001;
- Programmable Embedded Memory BIST Using Embedded Processor;
- Programmable BIST Architecture to find faults based on defect injection;
- US Patent 6496950: Testing content addressable static memories;
- US Patent 6550032: Detecting interport faults in multiport static memories;
- US Patent 6757854: Detecting faults in dual port FIFO memories.
Aad Offerman and Ad J. van de Goor: An Open Notation for Memory Tests (also available in PostScript format)
(Technical Report No.1-68340-44(1997)07, Delft University of Technology)
Cited by:
- US Patent 8829898: Method and apparatus for testing.
This paper describes a language that was intended to be open and, at the same time, to handle most of the memory tests in use today. The result may be a little overwhelming.
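To give an idea of the kind of tests such a notation must be able to express, consider the classic MATS+ test, written here in the conventional march notation commonly used in the literature (not in the open notation proposed in the paper):

    MATS+ = { ⇕(w0); ⇑(r0,w1); ⇓(r1,w0) }

Here ⇑ and ⇓ denote ascending and descending address orders, ⇕ either order; w0 and w1 write a 0 or a 1 into the current cell, and r0 and r1 read the current cell expecting a 0 or a 1, respectively.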
We have realized that building on top of this open notation is not the way to go in the long run, and are thinking about better solutions at the moment. The next step will probably be a language that looks more like a general programming language, providing facilities commonly used in memory tests. Furthermore, it is tempting to design notations for the specification of faults and of memory architectures at the same time, and to make good use of the fact that the three are so closely related. However, all of this exists only in our heads for now, so no paper on it can be provided at the moment.
GPGPU: Modern Commodity Hardware for Parallel Computing
Thesis Project: Modern Commodity Hardware for Parallel Computing, and Opportunities for Artificial Intelligence
This thesis report is part of the program for my Master's degree in Psychology. It elaborates on current developments in hardware and the opportunities they create for every research field using High-Performance Computing (HPC) in general, and for Artificial Intelligence (AI) in particular.
Abstract
Over the last years, there has been a fundamental change in the way manufacturers of general-purpose processors improve the performance of their products. Physical and technical limitations no longer allow manufacturers to increase the clock speeds of their chips as they did over the past decades. Performance improvements will have to come mainly from the higher transistor counts that smaller chip features bring. Since developments in Instruction-Level Parallelism (ILP) are lagging, more explicit parallelism is the only way to go.
Intel believes in many-core processors, supporting tens or hundreds of threads. After testing the waters with hyper-threading and dual-core technologies, CPU manufacturers have now definitively entered the multi-core era. In the long term, general-purpose processors will consist of tens, hundreds, or even thousands of cores.
Nvidia says it is already there, with its graphics processors containing hundreds of cores and supporting thousands of mini-threads. GPUs, currently separate chips on motherboards or on dedicated graphics cards, are increasingly being utilized by application programmers. For specific problems, they have found mappings onto these graphics engines that result in speedups of two orders of magnitude. Manufacturers of graphics processors have recognized this opportunity and are increasingly making their products accessible to others than graphics programmers.
From a programmer's perspective, CPUs offer a multi-threaded model that allows plenty of control-flow instructions, while GPUs offer a rigid stream-processing model that puts a large performance penalty on control-flow changes. For the former, the complexity is in the application logic. New programming languages, and extensions to existing languages, are currently being developed that support both explicit and implicit parallelism. Stream processing only works for problems that exhibit massive parallelism with limited communication between elements.
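To make the stream-processing model concrete, the sketch below shows a minimal CUDA program (an illustrative example, not taken from the thesis): every element of the data is processed by its own lightweight thread, all threads execute the same instructions, and the only control flow is a bounds check, so there is no divergence to penalize.

#include <stdio.h>
#include <cuda_runtime.h>

/* One thread per element: the stream-processing model. All threads run
   the same instructions; the only branch is the bounds guard. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                 /* one million elements */
    size_t bytes = n * sizeof(float);
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    /* Launch thousands of lightweight threads, 256 per block. */
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);          /* expect 4.0 */
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}

Where a CPU would loop over the million elements, the GPU launches a million short-lived threads and relies on the hardware scheduler to keep its cores busy.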
AMD's Fusion strategy brings x86 cores and a GPU together onto a single die, possibly extended with other specialized processing engines. Meanwhile, completely new architectures and topologies are being researched by Intel (Larrabee processor), IBM (Cell Broadband Engine), and Sun (UltraSPARC T series), all searching for the next hardware/software paradigm.
In HPC, performance-per-Watt has become the most important design parameter. Current commodity GPUs provide a cheap computing resource in which the smallest possible number of transistors dissipate power without contributing to the actual computation. We are waiting for graphics processors to become a standard part of the product portfolios of manufacturers of high-end computer systems. Then, standard building blocks can be bought, together with support, training, and other professional services.
Although these developments in hardware bring huge advantages to every research field using High-Performance Computing (HPC) in general, they are of particular interest to research in Artificial Intelligence (AI). The speedup of one or two orders of magnitude that is generally reported across research fields when using GPUs is also representative for neural networks, which natively use massively parallel processing.
Algorithms in AI, more and more mimicking their biological originals, are massively parallel by nature. This goes for all types of neural networks we simulate on computers, but also for visual systems and for all sorts of object recognition, feature extraction, and other processing of visual data.
The latter especially promises to benefit greatly from developments in graphics processors. Some researchers in this area report speedups of up to three orders of magnitude. In HPC terms, this is comparable to the next step DARPA (Defense Advanced Research Projects Agency) has in mind when it asks companies like IBM, Cray, and SGI for a long-term vision and fundamental research into the next era of HPC.
Another field that can be expected to profit from these developments is robotics. The ability to operate autonomously and independently requires intelligence, compactness, and mobility. This translates directly into higher densities (both at the silicon and at the system level), higher performance, and lower power consumption, all of which drive current developments in hardware.
Even with relatively small computer systems, several researchers in this area now report being able to run applications in real time, or to provide interactive visualization where this was not possible before, presenting not only a quantitative but also a qualitative breakthrough.
In combination with the continuing pressure on power dissipation and density, GPGPU provides tremendous opportunities for robotics, and for related areas like the development of intelligent portable devices or prostheses.
However, at this moment, GPGPU is not yet a mature technology. Over the next years, graphics processors will become better suited to generic stream-processing applications. Work remains to be done on generic memory access and on double-precision floating-point operations.
Furthermore, until recently, only proprietary programming toolkits tied to a specific GPU were available. Nvidia's CUDA toolkit has become the de facto standard, but it is not portable. Today, all important players in this market, i.e. AMD, IBM, Intel, and Nvidia, support the OpenCL programming language initiated by Apple. However, its performance is not yet as good as CUDA's. Furthermore, source code still contains topology-specific programming, inhibiting the portability of applications across hardware platforms.
Despite these limitations, OpenCL will in the near future be the standard language for GPGPU (and possibly many-core) computing. And even where applications themselves are not portable, programmers will have a single language and development platform to work with.

Development of a Funding Mechanism for Sustaining Open Source Software for European Public Services
Working with the newly established Open Source Programme Office (OSPO) of the European Commission on the development of a funding mechanism to sustain the critical open-source software used in the European public sector, as well as to support innovation in new open-source software created by individuals, start-ups, and micro-communities that currently receive no financial support. Non-financial sustainability issues and possible solutions are an integral part of this exercise.
This project is part of the Commission's new Open Source Software Strategy 2020-2023 'Think Open'. The Open Source Programme Office (OSPO) at DIGIT is the facilitator of this strategy and action plan, involving all directorates-general.
Date: October 2020 – April 2022.
Project report: https://joinup.ec.europa.eu/collection/fosseps/news/funding-sustainability
EU-FOSSA 2
The EU-FOSSA project – short for Free and Open Source Software Auditing – aims to increase the security and integrity of critical open-source software. It was launched by the European Commission at the instigation of the European Parliament after the discovery of the Heartbleed bug in 2014.
Following the success of the initial pilot, the project was renewed for another three years. EU-FOSSA 2 builds on the pilot project by extending the auditing of free and open-source software through setting up bug bounty programmes, organising hackathons and conferences, and engaging with developer communities. In addition, EU-FOSSA 2 expands its scope to a wider range of software projects and communities.
Date: January 2019 – August 2020.
Joinup project page: https://joinup.ec.europa.eu/collection/eu-fossa-2
Project deliverables: https://joinup.ec.europa.eu/collection/eu-fossa-2/eu-fossa-2-deliverables
Open Source Observatory (OSOR)
The Open Source Observatory (previously OSOR: Open Source Repository and Observatory) is a collaborative platform created by the European Commission and funded by the European Union via the Interoperability Solutions for Public Administrations (ISA) Programme. It aims to help professionals exchange information, experiences, and best practices around open-source solutions for use in public administrations, and to support them in finding, choosing, re-using, developing, and implementing open-source software, interoperability solutions, and semantic interoperability assets.
Date: October 2011 – October 2020.
Joinup project page: https://joinup.ec.europa.eu/collection/open-source-observatory-osor