University of Primorska, Faculty of Mathematics, Natural Sciences and Information Technologies

Abstracts


Regular Sessions

Session A - Chair: Andrej Brodnik

Lower bound on on-line bin-packing algorithms

Gábor Galambos1, János Balogh1, József Békési1
1Department of Informatics' Applications, Faculty of Teachers Training, University of Szeged, Hungary

keywords: bin packing, online algorithms, analysis of algorithms

One of the most frequently studied combinatorial problems is one-dimensional bin packing: we are given a list L = {x1, x2, ..., xn} of real numbers in [0,1) and an infinite supply of unit-capacity bins. Each number xi has to be assigned to a unique bin such that the sum of the elements in each bin does not exceed 1. The aim is to minimize the number of bins used. It is well known that finding an optimal packing is NP-hard. Consequently, a large number of papers have looked for polynomial-time algorithms with acceptable approximation behavior. Some of these algorithms are on-line algorithms: they put items into bins as the items appear, knowing nothing about the subsequent elements (neither their sizes nor their number).
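The talk concerns lower bounds rather than a particular algorithm, but a minimal sketch of First Fit, a classic on-line bin-packing heuristic, makes the on-line rule concrete: each item is placed on arrival, with no knowledge of later items (illustrative Python, not from the talk):

```python
# Illustrative sketch: First Fit places each arriving item into the first
# open bin with enough room, opening a new bin only when none fits.

def first_fit(items):
    """items: iterable of reals in [0, 1). Returns a list of bins (lists)."""
    bins = []          # bins[i] holds the items packed into bin i
    loads = []         # loads[i] is the current sum of bin i
    for x in items:
        for i, load in enumerate(loads):
            if load + x <= 1.0:      # item fits into bin i
                bins[i].append(x)
                loads[i] += x
                break
        else:                        # no open bin fits: open a new one
            bins.append([x])
            loads.append(x)
    return bins

print(first_fit([0.6, 0.5, 0.4, 0.3, 0.2]))
# -> [[0.6, 0.4], [0.5, 0.3, 0.2]]
```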

In this talk we first survey the history of lower bounds for on-line algorithms. Thereafter we consider a special class of lists: we suppose that the elements arrive preordered in decreasing order, and they must be packed by the on-line rule. We show an improvement of a 26-year-old result of Csirik et al., who proved an 8/7 lower bound for this type of list. Our new lower bound is 54/47.

 

Session B - Chair: Janez Žibert

A flexible method for driver scheduling in public transportation

Attila Tóth1
1Department of Informatics' Applications, Faculty of Teachers Training, University of Szeged, Hungary

keywords: public transportation, driver scheduling

Nowadays the scheduling of public transportation is a crucial question for transportation companies. However, the development of an automatic scheduling system is a very complex task, since the public transportation of even a middle-sized city induces a large-scale NP-hard scheduling problem. Usually two subproblems are distinguished: vehicle scheduling and driver scheduling. Both are special assignment problems, but for drivers the number of constraints and their complexity are significantly higher. This talk describes the problem statement of driver scheduling and introduces a flexible system. This system divides the scheduling into five sequential steps and provides the possibility of combining different methods to produce the solution.
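As a hedged illustration of the underlying model only: stripped of the real-world constraints the talk addresses, matching drivers to duties is an assignment problem. The cost matrix below is invented; a real system would derive costs from duty lengths, breaks, labour rules, and so on.

```python
# Hypothetical sketch: driver scheduling reduced to its assignment-problem
# core, solved with SciPy's Hungarian-style solver. All numbers invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([          # cost[i][j] = cost of giving duty j to driver i
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])
drivers, duties = linear_sum_assignment(cost)
print(list(zip(drivers, duties)), cost[drivers, duties].sum())
# -> [(0, 1), (1, 0), (2, 2)] with total cost 5
```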

 

Design Space Exploration for Embedded Parallel System-on-Chip Platforms using modeFRONTIER

C. Kavka1, L. Onesti2, P. Avasare3, G. Vanmeerbeeck3, M. Wouters3 and H. Posadas4
1ESTECO SRL, Trieste, Italy, carlos.kavka@esteco.com
2ESTECO SRL, Trieste, Italy, luka.onesti@esteco.com
3IMEC vzw, Leuven, Belgium, {avasare, vanmeerb, woutersm}@imec.be
4University of Cantabria, Santander, Spain, posadash@teisa.unican.es

A complete design space analysis of a System-on-Chip (SoC) architecture is practically prohibitive due to the large number of possible architectural configurations and the long time required to perform system simulations. The problem is usually made more complex by the existence of multiple and often conflicting objectives to optimize. In problems of this type, carefully chosen optimization algorithms must be applied which explore the design space wisely by considering all objectives at the same time. These optimization algorithms do not produce a single solution, but a set of non-dominated solutions, named the Pareto front, which represents a good compromise between all conflicting objectives.
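For illustration (not MULTICUBE code), a minimal sketch of what "a set of non-dominated solutions" means, with all objectives minimized and invented sample points:

```python
# Illustrative sketch: extract the non-dominated points (Pareto front)
# from a list of objective vectors, every objective to be minimized.

def pareto_front(points):
    """points: list of tuples of objective values (all minimized)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# invented (power, latency) pairs for four SoC configurations
print(pareto_front([(3, 9), (4, 4), (6, 2), (7, 5)]))
# -> [(3, 9), (4, 4), (6, 2)]; (7, 5) is dominated by (4, 4) and (6, 2)
```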


Many methods have been proposed in the literature to identify the Pareto front for SoC design problems. However, the use of powerful design exploration methods by itself does not guarantee that optimum solutions will be found in reasonable time with a reasonable use of resources. It is important for SoC designers to have access to a design environment that provides the ability to express the design problem in clear terms, to understand the relevant characteristics of the problem, and to discover how these characteristics change with the problem specification and parameter values. In this way, it is possible to avoid unrealistic combinations of parameters, choose representative points and prune the design space as much as possible.
 

The modeFRONTIER design environment is one of the most widely used tools for multi-objective optimization in complex engineering domains. In the EU MULTICUBE project, modeFRONTIER is being retargeted to the domain of embedded parallel SoC design. The interaction between modeFRONTIER and the high-level simulators is performed using an open XML specification, which allows integration with various simulators (or models) for SoC platforms and architectures. During the optimization process, modeFRONTIER provides values for the system configuration parameters and expects back from the simulator the corresponding system metrics. The work presents how an engineering-specific workflow can be schematized within the modeFRONTIER framework, how local or remote applications can be transparently executed, and how the complex output (measures) can be assessed using traditional and innovative data mining techniques.


Initial optimization experiments have been performed using two high-level simulators (IMEC-HLsim and MULTICUBE-SCoPE) running an MPEG4 encoder multimedia application. The configuration parameters considered so far are the number of CPUs, the instruction cache size and the processor frequency. System metrics like power consumption, latency and execution time are used as optimization objectives. The Design of Experiments (DOE) module helps to define the initial set of designs for the exploration and the set of points for reliable Response Surface Models (RSM). The RSM module allows users to save simulation time by creating statistically validated mathematical models that approximate the behavior of the simulator. Post-processing tools allow users to obtain information from the SoC system and determine correlations, extracting useful design features. The optimization experiments performed so far show promising results, suggesting that modeFRONTIER can provide an easy-to-use and flexible design environment for SoC platforms.
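A hypothetical sketch of the RSM idea, with invented data points: fit a cheap surrogate model to a handful of expensive simulator runs and query the surrogate instead of the simulator. modeFRONTIER's actual RSM module offers far richer, statistically validated models.

```python
# Hypothetical sketch: a quadratic surrogate fitted to five simulated
# (cache size, latency) points; data values are invented.
import numpy as np

cache_kb = np.array([8, 16, 32, 64, 128])        # design points (DOE)
latency  = np.array([9.1, 7.4, 6.2, 5.8, 5.7])   # simulated metric

coeffs = np.polyfit(np.log2(cache_kb), latency, deg=2)  # quadratic in log size
surrogate = np.poly1d(coeffs)

# estimate the metric for an unsimulated configuration
print(surrogate(np.log2(48)))
```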

 

Session C - Chair: Miklós Krész

Improved Analysis of an Algorithm for the Coupled Task Problem with UET Jobs

József Békési1, Gábor Galambos1, Marcus Oswald2, Gerhard Reinelt2
1Department of Informatics' Applications, Faculty of Teachers Training, University of Szeged, Hungary
2Institute of Informatics, University of Heidelberg, Germany

keywords: scheduling, coupled task problem, analysis of algorithms
 
The coupled task problem is to schedule n jobs on a single machine, where each job consists of two subtasks with an exact delay time between them. In this talk we present a new lower bound for the problem variant with unit execution times and correct an analysis of Ageev and Baburin.
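For readers unfamiliar with the model, here is a hedged sketch of the problem statement (UET variant) with invented data: each job consists of two unit-time subtasks separated by an exact delay, all sharing one machine.

```python
# Illustrative sketch: verify that chosen start times give a feasible
# schedule for coupled tasks with unit execution times and exact delays.

def is_feasible(starts, delays):
    """starts[j]: start slot of job j's first subtask; delays[j]: exact gap."""
    slots = []
    for s, d in zip(starts, delays):
        slots += [s, s + 1 + d]      # first subtask at s, second at s + 1 + d
    return len(slots) == len(set(slots))   # no two subtasks share a slot

delays = [2, 2, 2]
print(is_feasible([0, 1, 2], delays))  # True: interleaved schedule, makespan 6
print(is_feasible([0, 0, 1], delays))  # False: jobs 0 and 1 collide at slot 0
```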

 

A framework for a flexible vehicle scheduling system

David Paš1, József Békési2, Miklós Krész2, Andrej Brodnik1
1Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, Slovenia
2Department of Informatics' Applications, Faculty of Teachers Training, University of Szeged, Hungary

keywords: transportation, multiple depot vehicle scheduling
 
A flexible framework for vehicle scheduling is presented, which was developed for public bus transportation systems. The framework builds upon the theoretical foundations established in this area and extends them to incorporate additional real-world constraints. We have applied our approach to the urban bus systems of two middle-sized cities and computed substantially better schedules than the previous ones prepared by hand.

 

Medial axis approximation of simple polygons

Gregor Smogavec1, Borut Žalik1
1Faculty of Electrical Engineering and Computer Science, University of Maribor, Slovenia

keywords: medial axis, triangulation, centers of maximal disks

The medial axis is the set of centers of maximal inscribed discs [1]. It is used as an essential component in a variety of applications such as character recognition, GIS, robotics, collision detection, and image description and analysis. Since the exact computation of the medial axis is difficult and time-consuming, it limits these applications in the sense of real-time execution [2]. In this paper we propose an algorithm which finds an approximation of the medial axis. The algorithm works in two steps. First, a constrained Delaunay triangulation is constructed. The polygon is then further triangulated with Steiner points to meet the criteria of the introduced heuristics. Finally, the centers of gravity of neighboring triangles are connected, yielding an approximation of the medial axis of the polygon.

Experiments show that the approximation of the medial axis is obtained very rapidly, which makes it possible to use the algorithm in real-time applications, for example in robotics.
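A rough sketch of the core idea only: triangulate and join centroids of neighbouring triangles. For brevity this uses SciPy's unconstrained Delaunay triangulation on a convex polygon; the paper's method uses a constrained Delaunay triangulation refined with Steiner points, which is what makes it work on arbitrary simple polygons.

```python
# Illustrative sketch: connect centroids of neighbouring triangles of a
# Delaunay triangulation of a convex polygon's vertices.
import numpy as np
from scipy.spatial import Delaunay

poly = np.array([[0, 0], [4, 0], [5, 2], [2, 4], [-1, 2]])  # convex example
tri = Delaunay(poly)
centroids = poly[tri.simplices].mean(axis=1)    # one point per triangle

edges = []                                      # the medial-axis approximation
for i, nbrs in enumerate(tri.neighbors):
    for j in nbrs:
        if j > i:                               # each shared edge once; -1 = hull
            edges.append((centroids[i], centroids[j]))
print(len(edges), "axis segments")
```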

References:
[1] E. Remy, E. Thiel, Exact medial axis with Euclidean distance, Image and Vision Computing, vol. 23, no. 2, pp. 167-175, 2005.
[2] T. Culver, J. Keyser, D. Manocha, Accurate computation of the medial axis of a polyhedron, Proceedings of the Fifth Symposium on Solid Modeling and Applications (ACM Solid Modeling '99), pp. 179-190, 1999.
 

Session D - Chair: József Békési

An efficient graph reduction method

Miklós Krész1, Miklós Bartha2
1Department of Informatics' Applications, Faculty of Teachers Training, University of Szeged, Hungary
2Memorial University of Newfoundland, St. John's, Canada

keywords: reduction, unique perfect matching, depth-first trees, implied redex

A redex in a graph G is a triple r = (u, c, v) of distinct vertices that determine a 2-star. Shrinking r means deleting the center c and merging u with v into one vertex. Reduction of G entails shrinking all of its redexes recursively and, at the same time, deleting all loops created during this process. It is shown that reduction can be implemented in linear time when performed on depth-first trees in a smart way.
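An illustrative sketch of a single shrinking step (naive dictionary-based graph, not the linear-time depth-first-tree implementation from the talk):

```python
# Illustrative sketch: shrink one redex (u, c, v) in a graph stored as a
# dict mapping each vertex to its set of neighbours.

def shrink_redex(g, u, c, v):
    """Delete the centre c and merge u with v into u, avoiding loops."""
    for x in (u, v):
        g[x].discard(c)
    del g[c]
    for x in g[v]:                  # redirect v's remaining edges to u
        g[x].discard(v)
        if x != u:                  # skipping x == u deletes would-be loops
            g[x].add(u)
            g[u].add(x)
    del g[v]
    g[u].discard(v)

g = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}     # path 1-2-3-4
shrink_redex(g, 1, 2, 3)                       # vertex 2 is a centre
print(g)                                       # {1: {4}, 4: {1}}: path shrank
```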

 

Speaker Tracking in Broadcast News: a Case Study

Janez Žibert1
1Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, Slovenia

A system for tracking speakers in the audio data of broadcast-news shows is presented, and the impact of the main components of the system on the overall speaker-tracking performance is evaluated. The process of speaker tracking in continuous audio streams involves several processing tasks and is therefore treated as a multistage process. The main building blocks of such a system are the components for audio segmentation, speech detection, speaker clustering and speaker identification. The aim of the first three processes is to find homogeneous regions in continuous audio streams that belong to one speaker and to join the regions of the same speaker together. The task of organizing audio data in this way is known as speaker diarization and plays an important role in various speech-processing applications. In our case the impact of speaker diarization was assessed in a speaker-based audio indexing system by performing a comparative study of how each of the components influenced the overall speaker-detection results. The evaluation experiments were performed on broadcast-news audio data with a speaker-tracking system capable of detecting 41 target speakers. We implemented several different approaches in each component of the system and compared their performances by inspecting the final speaker-tracking results. The evaluation results indicate the importance of the audio-segmentation and speech-detection components, while no significant improvement of the overall results was achieved by additionally including a speaker-clustering component in the speaker-tracking system.
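A schematic sketch of the multistage structure described above. All four stages are toy stand-ins (real systems use acoustic features, statistical segmentation and speaker models); the point is the composition segment → detect speech → cluster → identify.

```python
# Schematic sketch only: the pipeline shape of a speaker-tracking system.

def segment_audio(stream):                   # toy: stream is pre-segmented
    return stream

def detect_speech(segments):                 # drop non-speech regions
    return [s for s in segments if s["speech"]]

def cluster_speakers(segments):              # toy: group by a cluster tag
    clusters = {}
    for s in segments:
        clusters.setdefault(s["cluster"], []).append(s)
    return list(clusters.values())

def identify(cluster):                       # toy: match cluster to a target
    return cluster[0]["cluster"].upper()

def track_speakers(stream):
    clusters = cluster_speakers(detect_speech(segment_audio(stream)))
    return {identify(c): [(s["t0"], s["t1"]) for s in c] for c in clusters}

stream = [{"t0": 0, "t1": 5, "speech": True,  "cluster": "spk1"},
          {"t0": 5, "t1": 7, "speech": False, "cluster": "-"},
          {"t0": 7, "t1": 9, "speech": True,  "cluster": "spk1"}]
print(track_speakers(stream))    # {'SPK1': [(0, 5), (7, 9)]}
```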


 

Student Sessions

Session IPTOR-1 - Chair: Miklós Krész

Gene network analysis

David Božič, University of Primorska

In this research I investigated strategies for identifying genetic networks from gene expression patterns derived by gene disruptions and gene overexpressions, using a Boolean network model. I proved mathematically a lower bound and an upper bound on the number of expression patterns required to identify the network correctly. I did not assume that time series of expression patterns are observable, and thus the derived bounds on experimental complexity are too high to be practical if applied directly. However, recent progress in biotechnology is making it possible to observe time series of gene expression patterns. Therefore, in this paper I studied mathematically the number of gene expression patterns required to identify the genetic network using the Boolean network model.

The contribution of this paper is a simple algorithm for identifying the original Boolean network from state transition pairs (i.e., INPUT/OUTPUT expression pattern pairs) and its mathematical analysis. Its usefulness in practice is also verified by computational experiments. The algorithm is much simpler than REVEAL, although its time and memory efficiency may be worse. The simplicity of this algorithm makes its mathematical analysis possible. Moreover, the algorithm can be modified for counting or enumerating the networks consistent with given examples (i.e., state transition pairs).

It was proved mathematically in this paper that O(log n), more precisely Θ(log n), transition pairs are necessary and sufficient for the algorithm to identify the original Boolean network of n nodes with high probability, if the maximum indegree is bounded by a constant and the transition pairs are drawn uniformly at random from the 2^n possible pairs, where log denotes the base-2 logarithm. In order to expose the constant factor involved in the O(log n) notation, computational experiments were made with the algorithm. They reveal that the constant hidden in the O(log n) notation is practically small: for example, Boolean networks of bounded indegree 2 with 100,000 nodes can be identified from only 100 random state transition pairs. In order to investigate more practical situations, computational experiments were also made in which state transition pairs were generated from attractors. Although the number of pairs required for identification increases, it is still proportional to log n.

Although real genetic networks are different from Boolean networks, the theoretical and practical results in this paper may extend to more realistic models. Of course, real biological systems differ from Boolean networks: nodes in a Boolean network take binary values which are updated synchronously, whereas quantities of gene expression in real cells are not binary and change continuously in time. Since the proposed algorithm is conceptually very simple, it is highly extensible to various situations.
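A minimal reconstruction of the simple identification idea, assuming bounded indegree k = 2 (my own sketch, with an invented three-gene example): for each gene, enumerate candidate regulator pairs and truth tables and keep those consistent with every observed INPUT/OUTPUT pair.

```python
# Illustrative sketch: identify a Boolean function of bounded indegree 2
# for one node from observed state-transition pairs. Example data invented.
from itertools import combinations, product

def consistent_functions(pairs, node, n):
    """pairs: list of (input_state, output_state) bit-tuples of length n."""
    found = []
    for inputs in combinations(range(n), 2):          # candidate regulators
        for table in product((0, 1), repeat=4):       # candidate truth table
            if all(table[2 * x[inputs[0]] + x[inputs[1]]] == y[node]
                   for x, y in pairs):
                found.append((inputs, table))
    return found

# toy network on n = 3 genes; gene 0 computes AND of genes 1 and 2
pairs = [((0, 0, 0), (0, 0, 0)), ((0, 1, 1), (1, 0, 0)),
         ((0, 1, 0), (0, 0, 0)), ((1, 0, 1), (0, 0, 0)),
         ((0, 0, 1), (0, 0, 0))]
print(consistent_functions(pairs, node=0, n=3))
# -> [((1, 2), (0, 0, 0, 1))]: gene 0 = AND(gene 1, gene 2)
```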

  

Networks of Evolutionary Processors

David Paš, University of Primorska

This paper is a survey of the young field of networks of evolutionary processors. Networks of evolutionary processors are introduced as a powerful model of computation that can solve NP-hard problems in linear time. The focus here is on Accepting Hybrid Networks of Evolutionary Processors (AHNEP), since important characterizations have been made of their computational power compared to Turing machines; in particular, it has been shown that AHNEPs of small constant size are comparable in power to nondeterministic Turing machines. Beyond their use as a model of computation, networks of evolutionary processors have also been analyzed as language-generating devices. Although current research is centered on Generating Hybrid Networks of Evolutionary Processors (GHNEP), the focus in this paper is on the networks that are simply referred to as NEPs in the research community. For this kind of network a simple correspondence can be made between the languages they generate and the classic Chomsky hierarchy. Nevertheless, the incomplete results that have been obtained for GHNEPs will also be presented.
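A toy sketch (my own illustration, not from the survey) of one evolution step followed by one communication step in a two-node network; rules are single-symbol substitutions and the filter is a set of symbols a word must contain:

```python
# Toy sketch: one evolution step (substitutions) and one communication
# step (filtered migration) between two nodes holding sets of words.

def evolve(words, rules):
    """Apply each substitution rule (a -> b) at every position, once."""
    out = set(words)
    for w in words:
        for a, b in rules:
            out |= {w[:i] + b + w[i+1:] for i in range(len(w)) if w[i] == a}
    return out

def passes(word, required):
    return required <= set(word)        # permitting input filter

node1, node2 = {"aab"}, set()
node1 = evolve(node1, [("a", "c")])     # {"aab", "cab", "acb"}
moving = {w for w in node1 if passes(w, {"c"})}
node2 |= moving                         # words containing 'c' migrate
node1 -= moving
print(node1, node2)                     # {'aab'} {'cab', 'acb'}
```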

 

Complexity Theory and Randomized Algorithms: The Perspective of Grafted Evolutionary Algorithm

Milan Djordjević, University of Primorska

This presentation describes the impact of grafting an evolutionary algorithm for solving the Traveling Salesman Problem (TSP). It also presents the polynomial local search (PLS) completeness of the classes of problems that arise when an evolutionary algorithm (EA) and a grafted evolutionary algorithm (GEA) are applied to the TSP using a range of mutation, crossover and hybrid local search operators.
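A hedged sketch of the grafting idea: an evolutionary algorithm whose offspring are improved by a local-search operator (here 2-opt) before re-entering the population. All parameters and distances below are invented.

```python
# Illustrative sketch: an EA for the TSP "grafted" with 2-opt local search.
import random

def tour_len(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, d):                      # the graft: local search
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_len(cand, d) < tour_len(tour, d):
                    tour, improved = cand, True
    return tour

def mutate(tour):                          # simple swap mutation
    a, b = random.sample(range(len(tour)), 2)
    tour = tour[:]
    tour[a], tour[b] = tour[b], tour[a]
    return tour

def gea(d, pop_size=8, generations=100):
    n = len(d)
    pop = [two_opt(random.sample(range(n), n), d) for _ in range(pop_size)]
    for _ in range(generations):
        parent = min(pop, key=lambda t: tour_len(t, d))
        pop.append(two_opt(mutate(parent), d))           # improved offspring
        pop.remove(max(pop, key=lambda t: tour_len(t, d)))  # drop the worst
    return min(pop, key=lambda t: tour_len(t, d))

d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(tour_len(gea(d), d))   # almost surely 18, the optimum of this toy
```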

 

DNA computation via Self-Assembly

Peter Race, University of Primorska

The paper describes some basic notions of DNA computation and the mechanism of DNA self-assembly, which can perform universal computation. There are three models of computation by self-assembly; the paper focuses mostly on the graph-theoretic model. The paper also examines the computational capabilities inherent in the hybridization of DNA molecules. First, theoretical models are considered, and it is shown that the self-assembly of oligonucleotides into linear duplex DNA can only generate sets of sequences equivalent to regular languages. If branched DNA is used for the self-assembly of dendrimer structures, only sets of sequences equivalent to context-free languages can be achieved. In contrast, the self-assembly of double-crossover molecules into two-dimensional sheets or three-dimensional solids is theoretically capable of universal computation.

 

Session IPTOR-2 - Chair: David Paš

Quantum Computing

Jakob Bartolj, University of Primorska

As we all know very well, conventional (electronic) computers have enormous power which, moreover, is constantly increasing. With their help gigantic quantities of data can be handled and evaluated. This is of crucial importance, for instance, in modern high-energy physics experiments. On the other hand, computers can be used to model very complex systems with the aim of predicting, for example, future developments in our environment, notably climatic changes. Thanks to their incredible speed, computers are, of course, destined to treat very efficiently mathematical problems for which specific algorithms exist. To mention a simple example, the digits of π are readily evaluated up to a very high order. Last, but not least, their potential to solve differential equations numerically, even when they are nonlinear, makes computers indispensable in modern research.

So, one might ask, is there any reason to look for fundamentally new ways of computing, as they are seemingly offered by quantum processes? What advantages can be expected when the laws of quantum mechanics, rather than those of classical physics, are exploited for computation? Two answers were given. First, the simulation of complex quantum systems on classical computers meets serious difficulties that might be overcome with the help of quantum computers. This idea was suggested in 1982 by R. Feynman, and in the 1990s it was indeed shown by several groups of researchers that quantum mechanical systems for which no efficient simulation on a classical computer is known can be efficiently simulated on a quantum computer. Second, and more surprisingly, it was shown for a few special cases that a quantum computer will be superior to a classical computer in solving selected mathematical problems, notably the problem of finding the prime factors of an integer. Here too the quantum computer excels in efficiency.

The efficiency criterion is the time (more precisely, the number of computational steps) needed for the calculation. When this time grows as a polynomial with the size of the input, we speak of an efficient algorithm. On the other hand, when this growth is superpolynomial (typically exponential), the algorithm is considered inefficient. In this sense, the classical algorithm for factorizing integers, for example, is inefficient: for very large integers the calculation is so extremely time-consuming that the problem, in fact, becomes intractable. There are actually two different topics of research: new efficient algorithms have to be devised that will run (only) on quantum computers, and physical concepts of quantum computers have to be discussed that, hopefully, might lead to a realization. Admittedly, in view of the desired complexity there is no great hope that a quantum computer will be constructed some day that can actually beat a classical computer. Nevertheless, it is an exciting task to study quantum mechanics from a completely novel point of view. Instead of striving to find out the physical laws governing microscopic systems and utilizing this knowledge to develop technical devices, we now want to use the quantum laws to do calculations that are impossible to perform on conventional computers.
 

 

Communication complexity as a lower bound for learning in games

Gregor Ambrožic, University of Primorska

Learning in games is addressed by a fast-growing body of research in the AI and machine-learning communities, concerning settings where there are multiple learners with different interests. This research adds to the more established research on learning in games conducted in economics. Partly because of a clash of fields, there are widely varying requirements on learning algorithms in this domain. The goal of the original paper is to demonstrate how communication complexity can be used as a lower bound on the required learning time or cost. Because this lower bound does not assume any requirements on the learning algorithm, it is universal, applying under any set of requirements on the learning algorithm. The paper characterizes exactly the communication complexity of various solution concepts from game theory, namely Nash equilibrium, iterated dominant strategies (both strict and weak), and backwards induction. This gives the tightest lower bounds on learning in games that can be obtained with this method.

 

Parsing formal languages

Aleš Kalin, University of Primorska

This talk provides an introduction to the theory of automata and formal languages. The elements are presented in a historical perspective and the links with other areas are underlined. In particular, applications of the field to linguistics, software design, text processing, computational algebra and computational biology are given.


Session IPPS-1 - Chair: Andrej Brodnik

Finger Search Trees with Constant Insertion Time

Jakob Bartolj, University of Primorska

Finger Trees (Hinze and Paterson 2006) are a general-purpose persistent data structure with good performance. Their genericity permits developing a wealth of structures, like ordered sequences or interval trees, on top of a single implementation. However, the type systems used by current functional languages do not guarantee the coherent parameterization and specialization of Finger Trees, let alone the correctness of their implementation. We consider the problem of implementing finger search trees on the pointer machine, i.e., how to maintain a sorted list such that searching for an element x, starting the search at any arbitrary element f in the list, only requires logarithmic time in the distance between x and f in the list. We present the first pointer-based implementation of finger search trees allowing new elements to be inserted at any arbitrary position in the list in worst-case constant time. Previously, the best known insertion time on the pointer machine was O(log* n), where n is the total length of the list. On a unit-cost RAM, constant insertion time has been achieved by Dietz and Raman by using standard techniques of packing small problem sizes into a constant number of machine words. Deletion of a list element is supported in O(log* n) time, which matches the previous best bounds. Our data structure requires linear space.
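To illustrate the finger-search idea itself (on a plain sorted list, not the authors' pointer-machine structure): gallop outwards from the finger in doubling steps, then binary-search the bracketed range, for O(log d) total work where d is the distance from the finger to the target.

```python
# Illustrative sketch: finger search on a sorted list via galloping.
from bisect import bisect_left

def finger_search(a, f, x):
    """Return the insertion index of x in sorted list a, starting near f."""
    lo, hi, step = f, f + 1, 1
    while lo > 0 and a[lo - 1] > x:        # gallop left while x is smaller
        lo, step = max(0, lo - step), step * 2
    step = 1
    while hi < len(a) and a[hi - 1] < x:   # gallop right while x is larger
        hi, step = min(len(a), hi + step), step * 2
    return bisect_left(a, x, lo, hi)       # binary search the bracket

a = list(range(0, 100, 2))
print(finger_search(a, f=10, x=33))        # 17: 33 belongs before a[17] = 34
```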

 

Funnel Heap - A Cache Oblivious Priority Queue

David Božič, University of Primorska

As the memory systems of modern computers become more complex, it is increasingly important to design algorithms that are sensitive to the structure of memory. In order to amortize the large access time of memory levels far away from the processor, memory systems often transfer data between memory levels in large blocks. The standard approach to obtaining good locality is to design algorithms parameterized by several aspects of the memory hierarchy, such as the size of each memory level, and the speed and block sizes of memory transfers between levels. As a result, these algorithms are inflexible and not portable. To avoid the complexity of having too many parameters, a lot of research has been done on simpler two-level memory models. It has been shown that if such a so-called cache-oblivious algorithm works optimally on a two-level hierarchy, then it works optimally on all levels of a multilevel memory hierarchy. Arge et al. presented the first optimal cache-oblivious priority queue and demonstrated the importance of this result by providing the first cache-oblivious algorithms for graph problems. Their structure uses cache-oblivious sorting and selection as subroutines. This paper describes an alternative optimal cache-oblivious priority queue, developed by Brodal and Fagerberg, based only on binary merging.
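A toy sketch of the basic building block (my illustration): a binary merger repeatedly moves the smaller of two stream heads into an output buffer. Funnel heaps compose such mergers into trees ("funnels") with carefully chosen buffer sizes; none of that tuning is shown here.

```python
# Toy sketch: one binary merger filling its output buffer from two inputs.
from collections import deque

def binary_merge_step(left, right, out, k):
    """Move up to k smallest elements from deques left/right into out."""
    for _ in range(k):
        if not left and not right:
            break
        if not right or (left and left[0] <= right[0]):
            out.append(left.popleft())
        else:
            out.append(right.popleft())

a, b, out = deque([1, 4, 6]), deque([2, 3, 9]), deque()
binary_merge_step(a, b, out, k=4)
print(list(out))        # [1, 2, 3, 4]; remaining inputs wait for later fills
```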

 

Dynamic LCA Queries on Trees

Marko Grgurovič, University of Primorska

In this paper, we provide an overview of the implementation of dynamic LCA queries on trees as given by Cole and Hariharan [1].
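As a baseline illustration only (not Cole and Hariharan's scheme): with plain parent pointers an LCA query walks both nodes towards the root, and "dynamic" insertions are trivial because nothing is precomputed. Their data structure answers such queries far faster under updates.

```python
# Naive baseline: LCA by walking ancestor chains (O(depth) per query).

def lca(parent, u, v):
    """parent: dict mapping child -> parent (root maps to None)."""
    ancestors = set()
    while u is not None:
        ancestors.add(u)
        u = parent[u]
    while v not in ancestors:
        v = parent[v]
    return v

parent = {"a": None, "b": "a", "c": "a", "d": "b", "e": "b"}
print(lca(parent, "d", "c"))   # "a"
print(lca(parent, "d", "e"))   # "b"
```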

References:

[1] Cole, R., Hariharan, R. 19??. Dynamic LCA Queries on Trees.

 

Optimal Pointer Algorithms for Finding Nearest Common Ancestors in Dynamic Trees

Marko Mikolavčič, University of Primorska

This document explains how the algorithm for finding the nearest common ancestor in a dynamic tree works. It proceeds from the basic problem on a static tree to the fully dynamic case. We also explain some results from a variety of other authors that helped the authors of the algorithm, Stephen Alstrup and Mikkel Thorup.

 

Session IPPS-2 - Chair: Milan Djordjević

Overcoming the Memory Bottleneck in Suffix Tree Construction

Gregor Tomazin, University of Primorska

In this paper, we provide an overview of the techniques for overcoming the memory bottleneck in suffix tree construction given by Farach, Ferragina and Muthukrishnan [1].

References:

[1] Martin Farach, Paolo Ferragina, S. Muthukrishnan. "Overcoming the Memory Bottleneck in Suffix Tree Construction."

 

Linear Time Construction of Suffix Arrays

David Paš, University of Primorska

In 2003, three groups of researchers independently published algorithms for the direct construction of suffix arrays in linear time. Up to that point, the only known method for constructing a suffix array in linear time actually resorted to suffix trees, for which linear-time algorithms already existed. As all three algorithms build on the ideas of Farach's linear-time algorithm for suffix trees, they exhibit a similar structure. Nevertheless, each of them takes its own approach to the problem. In this paper, I introduce suffix arrays as a data structure that efficiently supports full-text searches. Then I present the three algorithms for the construction of suffix arrays.
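For illustration only, a naive O(n² log n) construction, not one of the three linear-time algorithms; it makes the object concrete: the suffix array lists the starting positions of all suffixes in lexicographic order, after which full-text search reduces to binary search over this array.

```python
# Naive illustration: build a suffix array by sorting suffixes directly.

def suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])

s = "banana"
sa = suffix_array(s)
print(sa)                        # [5, 3, 1, 0, 4, 2]
print([s[i:] for i in sa])       # ['a', 'ana', 'anana', 'banana', 'na', 'nana']
```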

 

Sorting and Searching in Faulty Memories

Aleš Kalin, University of Primorska

In this paper I summarize the paper "Sorting and Searching in Faulty Memories" by Irene Finocchi and Giuseppe F. Italiano. They investigate the design and analysis of algorithms resilient to memory faults, focusing on algorithms that, despite the corruption of some memory values during their execution, are nevertheless able to produce a correct output at least on the set of uncorrupted values. In this framework, they consider two fundamental problems: sorting and searching. In particular, they prove that any O(n log n) comparison-based sorting algorithm can tolerate the corruption of at most O((n log n)^{1/2}) keys. Furthermore, they present one comparison-based sorting algorithm with optimal space and running time that is resilient to O((n log n)^{1/3}) memory faults. They also prove polylogarithmic lower and upper bounds on resilient searching.

 

Cache-Oblivious Priority Queue and Graph Algorithm Applications

Peter Race, University of Primorska

The cache-oblivious model of computation is a two-level memory model with the assumption that the parameters of the model are unknown to the algorithms. A consequence of this assumption is that an algorithm efficient in the cache-oblivious model is automatically efficient in a multi-level memory model. The article describes the development of an optimal cache-oblivious priority queue data structure supporting insertion, deletion, and delete-min operations. The structure described in the article is as efficient as several previously developed external-memory (cache-aware) priority queue data structures, which all rely crucially on knowledge of M and B. Priority queues are a critical component in many of the best known external-memory graph algorithms, and using this cache-oblivious priority queue the authors develop several cache-oblivious graph algorithms.

 

Cache-Oblivious Data Structures for Orthogonal Range Searching

Tomasz Świderek

This article describes a cache-oblivious data structure for orthogonal range searching, the problem of finding all T points of an N-point set in R^d that lie in a query hyper-rectangle. The article starts by introducing the external-memory model and then explains how the cache-oblivious model is constructed. This is followed by an introduction of the kd-tree and the van Emde Boas layout; this part of the paper deals with how the data structure is represented in the memory system. The final part describes a dynamic linear-space cache-oblivious data structure which answers d-dimensional queries in O((N/B)^{1-1/d} + T/B) memory transfers, where B is the block size between any two levels of a multilevel memory system.
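A small sketch of the van Emde Boas layout on a complete binary tree (my own toy; the article applies the layout to kd-trees): split the tree at half its height, lay out the top subtree first and then each bottom subtree, recursively. Any root-to-leaf path then touches O(log_B N) blocks simultaneously for every block size B, without the code ever knowing B.

```python
# Toy sketch: memory order of a complete binary tree of the given height
# under the recursive van Emde Boas layout.

def veb_order(height):
    """Return node labels (level, index within level) in vEB memory order."""
    if height == 1:
        return [(0, 0)]
    top_h = height // 2
    bottom_h = height - top_h
    layout = veb_order(top_h)                       # top subtree first
    for leaf in range(2 ** top_h):                  # then each bottom subtree
        layout += [(lvl + top_h, leaf * 2 ** lvl + idx)
                   for lvl, idx in veb_order(bottom_h)]
    return layout

print(veb_order(4))   # 15 nodes; each height-2 bottom triple is contiguous
```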