Constraint Systems
Variable Symmetries
Variable Symmetries
Symmetries may force a solver to visit equivalent solutions
- This is bad for proving optimality/infeasibility
- May lead to thrashing (time wasted to recover from a mistake)
Symmetries are a big topic in Constraint Programming
- A thorough discussion is too much for this course
- If you are interested: a few papers on the course web-site
Nevertheless, we will try to be more systematic
- Formal definitions (for one special case)
- One frequently employed symmetry breaking technique
Symmetries and Permutations
Symmetries are defined based on the concept of permutation
A permutation π over a discrete set D is a 1-1 function from D to D
- Intuitively: just a re-arrangement of the elements. E.g.:

  i:    (1 2 3 4 5)
  π(i): (3 4 2 1 5)
Variable Symmetries
With the permutation concept, we can say that:
A problem has a variable symmetry iff:
- there exists a permutation π of the variable indices
- s.t. for each feasible solution...
- ...we can re-arrange the variables according to π...
- ...and obtain another feasible solution
- Swapping variables = re-assigning the values
- The permutation π identifies a specific symmetry
Variable Symmetries - Example
E.g. given a solution for our nqueens model:
- We can swap variable 0 with n-1, 1 with n-2...
- ...and obtain a new feasible solution (board flipped on the y-axis)
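To make the symmetry concrete, here is a minimal plain-Python check (an illustration only, not the Lab model): reversing the assignment vector of an n-queens solution, i.e. swapping variable i with n-1-i, yields another feasible solution.

```python
# queens[i] = column of the queen in row i; mirroring the board corresponds
# to reversing the vector of variables.
def is_feasible(queens):
    n = len(queens)
    for i in range(n):
        for j in range(i + 1, n):
            same_col = queens[i] == queens[j]
            same_diag = abs(queens[i] - queens[j]) == abs(i - j)
            if same_col or same_diag:
                return False
    return True

solution = [2, 0, 3, 1]            # a feasible 4-queens assignment
mirrored = solution[::-1]          # swap variable 0 with n-1, 1 with n-2, ...
print(is_feasible(solution), is_feasible(mirrored))  # True True
```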
Symmetry Breaking
There are other forms of symmetries
- We will see an example in the lecture about search...
- ...But we will not define all the possible cases
Instead, we will describe briefly some symmetry breaking techniques
Symmetry breaking = avoid visiting equivalent solutions
There are three main approaches for symmetry breaking in CP:
- Model reformulation
- Static symmetry breaking
- Dynamic symmetry breaking (in the lecture about search)
Model Reformulation
A first approach to remove symmetries:
- Formulate an alternative model...
- ...And make sure that the symmetries are not there
An example: model 2 in Lab4!
A few caveats:
- The alternative model may have drawbacks (e.g. bad propagation)
- Some symmetries may still be there
It's all ok: the goal is finding the most effective trade-off
Static Symmetry Breaking
A second approach to remove symmetries:
- Make some of the equivalent solutions invalid...
- ...By adding symmetry breaking constraints
Main idea:
- If a problem contains symmetries π_0, π_1, ...
- ...Make sure that applying the permutations leads to infeasibility
Can this be done automatically? In some cases. E.g.:
- We know the permutation for each symmetry
- We are dealing with variable symmetries
Lex-Leader Method
Lex-Leader: a generic method for breaking variable symmetries:
- For each symmetry (i.e. permutation) π
- Make sure that only one of the symmetric solutions is valid
- By adding a lexicographic ordering constraint:

(x_0, x_1, \ldots) \preceq_{lex} (x_{\pi(0)}, x_{\pi(1)}, \ldots)

An example of lexicographic ordering:

(x_0 < x_{\pi(0)}) \vee (x_0 = x_{\pi(0)} \wedge x_1 < x_{\pi(1)}) \vee \ldots

- Either the first pair of variables in the two vectors is strictly ordered...
- ...Or it is equal and the following pair must be, and so on
Lex-Leader Method
In lex-leader, we break symmetries one by one:
- PRO: it is very general!
- PRO: breaks all variable symmetries
- CON: complex constraints
- CON: what if we have many symmetries?
Example: model 1 in Lab4
- If we have an order for n product units
- Every permutation corresponds to a symmetry!
- Total: n! symmetries ⇒ n! constraints
Lex-Leader: a Special Case
But in model 1 we did not add n! constraints
Model 1 was an instance of a special case:
- If the symmetric variables must be all different...
- E.g. they are part of an ALLDIFFERENT constraint
- ...The lex-leader constraints become much simpler!
(Proof) Consider:

(x_0 < x_{\pi(0)}) \vee (x_0 = x_{\pi(0)} \wedge x_1 < x_{\pi(1)}) \vee \ldots

- None of the reified constraints x_i = x_{\pi(i)} can be true...
- ...Because the variables must be all different...
- ...Therefore, the first constraint (i.e. x_0 < x_{\pi(0)}) must hold
Lex-Leader: a Special Case
Long story short:
- If the symmetric variables must be all different...
- ...Each symmetry is broken by a single < constraint...
- ...And at most n−1 constraints are necessary
If all n! permutations are feasible:
- We just need to pick a variable ordering s (e.g. s = [0, 1, 2, ...])
- And post the constraints:

x_{s(0)} < x_{s(1)} < x_{s(2)} < \ldots

This is what we did in model 1 in Lab4 (a sketch follows below)
- Actually, we also added x_{s(0)} < x_{s(2)}, x_{s(0)} < x_{s(3)}, \ldots
- ...They were not actually necessary!
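As a hedged illustration of this special case (OR-Tools CP-SAT with made-up variables, not the actual Lab4 model), a single chain of strict inequalities breaks all n! symmetries among interchangeable, all-different variables:

```python
from ortools.sat.python import cp_model

n = 5
model = cp_model.CpModel()
x = [model.NewIntVar(0, n - 1, f"x{i}") for i in range(n)]
model.AddAllDifferent(x)

# The variables are interchangeable: n-1 '<' constraints are enough,
# since the lex-leader constraints collapse to a single ordering chain.
for i in range(n - 1):
    model.Add(x[i] < x[i + 1])

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(v) for v in x])   # only one representative left: [0, 1, 2, 3, 4]
```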
Constraint Systems
SUM Constraint
and Incremental Propagation
Global SUM Constraint
Many solvers provide a global SUM constraint, e.g.:
SUM(z, X), where X is a vector of variables representing the terms of the sum, and z is a variable representing the sum result
The constraint enforces Bound Consistency on the relation:

z = \sum_{x_i \in X} x_i

- This is also the notation that we will use in the slides
But why would we need it? We already have "+"...
SUM Constraint: an Example
Let's consider a practical example:

z = x_0 + x_1 + x_2 + x_3 \qquad x_0, x_1, x_2, x_3 \in \{0, 1\}

- Let us focus on computing bounds for z
With binary sums, this is equivalent to a set of constraints:

z = y_0 + x_3 \qquad y_0 = y_1 + x_2 \qquad y_1 = x_0 + x_1
SUM Constraint: an Example
The network of constraints can be represented visually:
- Orange = constraints
- Green = original vars
- Blue = introduced vars/expressions
SUM Constraint: an Example
Let us assume we want to filter the maximum of D(z)
- By reasoning on the individual constraints, we can deduce:

\begin{align}
& \overline{y}_1 = \overline{x}_0 + \overline{x}_1 \longrightarrow \overline{y}_1 = 1 + 1 = 2\\
& \overline{y}_0 = \overline{y}_1 + \overline{x}_2 \longrightarrow \overline{y}_0 = 2 + 1 = 3\\
& \overline{z} = \overline{y}_0 + \overline{x}_3 \longrightarrow \overline{z} = 3 + 1 = 4
\end{align}

- And by reasoning on the sum as a whole?

\overline{z} = \overline{x}_0 + \overline{x}_1 + \overline{x}_2 + \overline{x}_3 = 4

We deduce exactly the same bound (of course)
Why introduce a global constraint, then?
Why a SUM Constraint?
How many operations with individual sums?

\begin{align}
& \overline{y}_1 = \overline{x}_0 + \overline{x}_1 \\
& \overline{y}_0 = \overline{y}_1 + \overline{x}_2 \\
& \overline{z} = \overline{y}_0 + \overline{x}_3
\end{align}

- Read access: 6
- Write access: 3
- Sum: 3
Even more if the constraints are not processed in the right order!
- E.g. if z = y_0 + x_3 is propagated before y_0 = y_1 + x_2
Why a SUM Constraint?
How many operations with a global constraint?
\overline{z} = \overline{x}_0 + \overline{x}_1 + \overline{x}_2 + \overline{x}_3

- Read access: 4 (improved)
- Write access: 1 (improved)
- Sum: 3 (same)
And the processing order no longer matters
Sometimes, we use global constraints to perform
the same propagation in a more efficient fashion
Why a SUM Constraint?
For a sum with n terms:
Individual sums
- Read: 2(n−1)
- Write: n−1
- Sum: n−1
SUM constraint
- Read: n
- Write: 1
- Sum: n−1
If n is large, the efficiency gap may grow pretty big!
But we can do even more
by exploiting incremental computation
Incremental Computation
A filtering algorithm is usually called multiple times
- In a search tree node (until the fix point is reached)
- On multiple search tree nodes
Hence, incremental computation can improve the efficiency
- The first time the algorithm is called, everything is as usual
- Except that we cache some partial results
- When the algorithm is invoked again we exploit the cached data
This requires access to more details about propagation
- Which variable has been pruned
- Which values have been pruned
Incremental Computation for SUM
In the case of the SUM constraint:
At the first invocation of the filtering algorithm, we compute:

ub_z = \sum_{x_i \in X} \overline{x}_i

- If ub_z < \overline{z}, then we update the maximum of z (as usual)
- But we also cache the value of ub_z
- Let the cached value be ub_z^\$ ("\$" stands for "cache" :-))
The algorithm will be activated again when a variable x_i \in X is pruned
Incremental Computation for SUM
For the next activations, we assume that:
- The solver provides the index of the pruned variable
- The solver allows access to the pruned values
In particular, let us assume that:
- x_j is the pruned variable
- old(\overline{x}_i) is the old maximum of any x_i
And we have that:
- old(\overline{x}_i) > \overline{x}_i if the maximum of x_i was pruned
- old(\overline{x}_i) = \overline{x}_i if some other value in D(x_i) has been pruned
Incremental Computation for SUM
- When the domain of x_j is updated, we have that...
- ...Our cached ub_z^\$ has been computed using old domains:

ub_z^\$ = \sum_{x_i \in X} old(\overline{x}_i)

And therefore we can compute a new upper bound on z as:

ub_z = ub_z^\$ - old(\overline{x}_j) + \overline{x}_j

- Intuitively, we replace the old maximum of x_j...
- ...With the new maximum \overline{x}_j
This computation is done with O(1) complexity!
- Because we relied on the cached bound
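A minimal sketch of this incremental scheme in plain Python (the class and method names are assumptions for illustration, not a real propagator API):

```python
class IncrementalSum:
    def __init__(self, x_max, z_max):
        self.x_max = list(x_max)   # current maxima of the x_i (our old(max) cache)
        self.z_max = z_max
        self.ub_cached = None      # ub_z^$ in the slides

    def first_call(self):
        # Non-incremental pass: compute and cache ub_z as the sum of the maxima
        self.ub_cached = sum(self.x_max)
        if self.ub_cached < self.z_max:
            self.z_max = self.ub_cached
        return self.z_max

    def on_prune(self, j, new_max_j):
        # x_j was pruned: replace its old contribution in O(1)
        self.ub_cached = self.ub_cached - self.x_max[j] + new_max_j
        self.x_max[j] = new_max_j
        if self.ub_cached < self.z_max:
            self.z_max = self.ub_cached
        return self.z_max

# Usage: z = x0 + x1 + x2 + x3 with x_i in {0, 1}
prop = IncrementalSum(x_max=[1, 1, 1, 1], z_max=10)
print(prop.first_call())      # 4
print(prop.on_prune(2, 0))    # x2 fixed to 0 -> new bound 3
```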
Incremental Computation for SUM
For a sum with n terms, starting from the second algorithm activation:
Classical SUM
- Read: n
- Write: 1 if ub_z < \overline{z}
- Sum: n−1
Incremental propagator
- Read: 3
- Write: 1 (+1 if ub_z < \overline{z})
- Sum: 2
Now the efficiency gap is huge
- We have reduced the asymptotic complexity from O(n) to O(1)
Filtering the x_i variables in SUM
What about the x_i variables?
A (non-incremental) upper bound on variable x_i is given by:

ub_{x_i} = \overline{z} - \sum_{x_h \in X,\, h \neq i} \underline{x}_h

Intuitively:
- We start from the largest possible value for z
- We subtract the lowest possible value of all other x_h variables
It's a generalization of our filtering rule for binary sums
Filtering the x_i variables in SUM
We can use incremental computation to speed up the process
Assuming that x_j has been pruned, our incremental bound is:

ub_{x_i} = \overline{z} - (lb_z^\$ - \underline{x}_i - old(\underline{x}_j) + \underline{x}_j)

Which is based on the fact that the cached lower bound on z is given by:

lb_z^\$ = \sum_{x_h \in X} old(\underline{x}_h)

- We subtract \underline{x}_i to remove the contribution of the current variable
- We subtract old(\underline{x}_j) and add \underline{x}_j to update the contribution of x_j
In terms of complexity:
- We can prune a single variable with O(1) complexity
- And all variables with complexity O(n)
The complexity of a naive approach is O(n^2)!
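A small sketch of the incremental bound on x_i, again in plain Python with illustrative names:

```python
# lb_cached plays the role of lb_z^$ (computed from the old minima); when x_j
# is pruned we patch its contribution and drop x_i's own minimum.
def ub_xi(z_max, lb_cached, x_min, old_min_j, j, i):
    """Upper bound for x_i after x_j's minimum changed from old_min_j."""
    patched = lb_cached - old_min_j + x_min[j]   # updated sum of minima
    return z_max - (patched - x_min[i])          # remove x_i's own contribution

# z = x0 + x1 + x2, x_i in {0..5}, z <= 6; x2's minimum was raised from 0 to 3
x_min = [0, 0, 3]
print(ub_xi(z_max=6, lb_cached=0, x_min=x_min, old_min_j=0, j=2, i=0))  # 3
```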
Filtering the x_i variables in SUM
In practice, however:
- There are non-incremental propagators with complexity O(n)
- Hence incremental propagation for the x vars is less effective...
But still it helps a lot
- The best solvers implement more efficient incremental propagators
- It is usually much better to use SUM rather than binary sums
- Especially if the sum contains many terms!
Some examples:
- Our hole-counting bound in model 2 from Lab 4
- The dominance rule for model 2 in Lab 4
Constraint Systems
Solver Support
for Incremental Computation
Solver Support for Incremental Computation
From the last batch of slides:
- Incremental computation can be very powerful...
- ...but requires solver support
What do we need, exactly?
- The ability to cache values
- Knowledge of the variable that has been pruned
- The ability to access pruned values
Let's see how we can provide support for these features...
Caching Values
How to cache a value over multiple propagator executions?
A first solution: use a simple variable
Caching via a Simple Variable - Example
Let's see an example:
- At the root node, we write a value
- We take the left branch
- We update the value
- We update the value again
- And then we backtrack... and we make a mistake
Caching Values
How to cache a value over multiple propagator executions?
A first solution: use a simple variable
- It works when moving from parent to child in the search tree
- We only need access to the most recent value
- It does not work on backtrack
- No way to access an old value once it has been overwritten
What do we need?
- We need to keep track of past values
- So that they can be restored on backtracking
This can be done by storing values in a stack
Caching via a Stack - Example
Let's see our example again:
- At the root node, we write a value and push it on the stack
- We take the left branch
- We update the value and push it on the stack
- We update the value again and push it again
- And then we backtrack... and pop some values
Caching Values
The stack idea works
Two main issues:
- How to keep track of the number of values to pop?
- Are all the push operations necessary?
We can solve both issues using a timestamp mechanism
- The timestamp is just an integer value
- The timestamp is incremented every time we branch/backtrack
- We push values only if the timestamp has been incremented
- On backtrack, we always pop a single value
Caching via a Stack & Timestamp - Example
Let's see our example again:
- At the root node, we write a value and push it on the stack
- We take the left branch
- We update the value and push it on the stack
- We update the value again and replace the top value
- And then we backtrack... and pop one value
Caching Values
And we got it (more or less)!
- This technique is usually called trailing
- The stack with timestamps is called trail
Trailing is also used for:
- Keeping track of the domains of the decision variables
- I.e. one of the critical operations performed by a solver
Most of the state-of-the-art CP solvers are trailing based:
- Or-tools, IBM-ILOG CP Optimizer, Choco (default behavior)...
- An important exception: Gecode
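A simplified sketch of trailing with timestamps (plain Python; it assumes the cell is written at least once in the abandoned node, as in the example above):

```python
class TrailedInt:
    """A reversible integer cell that saves its value at most once per timestamp."""
    def __init__(self, value):
        self.value = value
        self.timestamp = 0          # incremented on every branch/backtrack
        self.trail = []             # stack of (timestamp, old value) pairs

    def set(self, value):
        # Push the old value only on the first write after a new timestamp
        if not self.trail or self.trail[-1][0] != self.timestamp:
            self.trail.append((self.timestamp, self.value))
        self.value = value

    def branch(self):
        self.timestamp += 1

    def backtrack(self):
        # Simplified: always pop a single value, as in the slides
        self.timestamp += 1
        if self.trail:
            _, self.value = self.trail.pop()

cell = TrailedInt(10)
cell.set(7)                 # root node: old value 10 goes on the trail
cell.branch()
cell.set(5); cell.set(3)    # two writes in the child, only one trail entry (value 7)
cell.backtrack()
print(cell.value)           # 7: the value the cell had at the root
```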
Improving our Propagation Algorithm
Second requirement: know which variable has been pruned
For this, we need to modify our basic propagation algorithm:
- dirty = true
- while dirty:
  - dirty = false
  - for c_j ∈ C:
    - D' = filter_{c_j}(D)
    - if D' ≠ D: dirty = true
- This was called AC1
- It is veeery basic
- And it is definitely time to improve it
Which modifications do we need? Let's start from two of them
Improving our Propagation Algorithm
1) The constraints should know about the pruned variable
This can be done by adding a new filtering method filter_{c_j}(i, D)
- The index of the pruned variable x_i is passed as an argument
- Such information can be employed for incremental computation
2) The constraints should communicate the pruned variable
We can introduce a function to prune values: prune(i, v)
- Where i is the index of the variable and v is the value to be pruned
- The method must notify the other constraints about pruning events
Pruning and Notification - Naive Approach
How should the prune method work?
A possible implementation:
- function prune(i, v):
  - D(x_i) = D(x_i) ∖ {v}
  - for c_j ∈ C:
    - filter_{c_j}(i, D)
- Calling filter may lead to new pruned values...
- ...and therefore to recursive calls of prune
This is messy and inefficient: can we avoid recursive calls?
- Instead of calling filter...
- ...we store the "pruning event" into an external data structure
Propagation Queue
For example, we can use a FIFO queue Q:
- Q keeps track of the filter calls that we still need to make
- Each element in Q is a pair (c_j, x_i)
- Q is often called propagation queue (or propagator queue)
We can now redefine our prune function:
- function prune(i, v):
  - D(x_i) = D(x_i) ∖ {v}
  - for c_j ∈ C:
    - if x_i ∈ X(c_j) ∧ (c_j, x_i) ∉ Q: Q.push((c_j, x_i))
- Instead of calling filter_{c_j}(i, D) immediately, we add an item to Q
- Since we know that variable x_i is being pruned...
- ...we can notify only the constraints that have x_i in their scope
- Since we are not calling filter immediately...
- ...we can also avoid some redundant calls (and stack overflows)
- Dedicated methods are often available for pruning min/max values
- The main idea however stays the same
A Modified Propagation Algorithm
Our goal now is just processing the queue:
- while len(Q) > 0:
  - (c_j, x_i) = Q.pop()
  - filter_{c_j}(i, D)
- New items are inserted in Q iff filter prunes something
- Hence, the fix-point is reached iff Q becomes empty
At this point, only a few pieces are missing...
- What is the initial content of the queue?
- What about non-incremental propagators?
- When do we perform the initial setup of incremental propagators?
A Modified Propagation Algorithm
Let's add the possibility to have (c_j, ∗) items in Q
The wild-card ∗ means that the pruned variable is unspecified
- An item (c_j, ∗) is processed by calling filter_{c_j}(D)...
- ...i.e. our old filtering function...
- ...which is available also for non-incremental propagators
At the root of the search tree, no variable has been pruned:
- Hence, Q should contain (c_j, ∗) items for all constraints
- Incremental constraints may perform their setup in filter_{c_j}(D)
AC3 Propagation Algorithm
Overall, we get:
- Q = [(c_j, ∗) for c_j ∈ C]
- while len(Q) > 0:
  - (c_j, x_i) = Q.pop()
  - if x_i = ∗: filter_{c_j}(D)
  - else: filter_{c_j}(i, D)
This algorithm is often called AC3
- (Even if the original AC3 was a bit different)
- It's probably the most widespread propagation algorithm
- There are many other propagation algorithms (AC5, AC7, AC2000...)
- We will not see them (most work for extensional constraints)
- If you are interested, there is a paper on the course web site
Instead, let's see an example of how AC3 works (a sketch of the full loop follows, then the worked trace)...
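Before the worked trace, here is a compact runnable sketch of the whole machinery in plain Python: a prune step that only enqueues (constraint, variable) items, plus the AC3-style loop above. The toy constraints and dictionary of domains are illustrative, not a real solver API:

```python
from collections import deque

# Toy CSP: x0 < x1 and x1 < x2, all domains {0, 1, 2}
# (no failure/empty-domain handling: this only illustrates the queue)
domains = {v: {0, 1, 2} for v in ("x0", "x1", "x2")}

def make_less(a, b):
    """Constraint a < b as (name, scope, filter); the filter prunes bounds
    and returns the set of variables it has pruned."""
    def filt(D, pruned_var=None):          # pruned_var unused: non-incremental
        changed = set()
        for v in [w for w in D[a] if w >= max(D[b])]:
            D[a].discard(v); changed.add(a)
        for v in [w for w in D[b] if w <= min(D[a])]:
            D[b].discard(v); changed.add(b)
        return changed
    return (f"{a}<{b}", {a, b}, filt)

constraints = [make_less("x0", "x1"), make_less("x1", "x2")]

# AC3-style loop: start with one (c, '*') item per constraint
Q = deque((c, "*") for c in constraints)
queued = set()                             # avoids duplicate (c, x) items
while Q:
    (name, scope, filt), xi = Q.popleft()
    queued.discard((name, xi))
    for pruned in filt(domains, None if xi == "*" else xi):
        for c2 in constraints:             # notify constraints with 'pruned' in scope
            if pruned in c2[1] and (c2[0], pruned) not in queued:
                Q.append((c2, pruned)); queued.add((c2[0], pruned))

print(domains)                             # x0 -> {0}, x1 -> {1}, x2 -> {2}
```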
AC3 - An Example
Let's consider this simple CSP
- For the sums, we use the global SUM constraint
- Initially, Q contains a (c_j, ∗) item for each constraint
AC3 - An Example
- We pop and process the first item in Q
- No variable can be pruned
AC3 - An Example
- We pop and process the first item in Q
- We use the non-incremental filtering algorithm (due to the ∗)
- We initialize the values of ub_z^\$ and lb_z^\$
- We manage to prune x_0 and x_1
AC3 - An Example
- We add a (c_j, x_0) item for each c_j having x_0 in the scope
- We add a (c_j, x_1) item for each c_j having x_1 in the scope
AC3 - An Example
We have not seen an incremental propagator for ALLDIFFERENT
- But an incremental propagator does exist
- And it is very important for the ALLDIFFERENT performance
- However, we will not see it in this course
AC3 - An Example
x_0, x_1 have been pruned by c_1
- And the (c_1, x_0) and (c_1, x_1) items are inserted in Q
This may look redundant, but it is not:
- Some propagators need multiple passes to reach convergence
AC3 - An Example
- We pop and process the first queue item
- No variable can be pruned
AC3 - An Example
- We pop and process the first queue item
- We can prune x_2
AC3 - An Example
- We add a (c_j, x_2) event for each c_j having x_2 in the scope
AC3 - An Example
- We pop and process the first queue item
- No variable can be pruned
AC3 - An Example
- We pop and process the first queue item
- No variable can be pruned
AC3 - An Example
- We pop and process the first queue item
- No variable can be pruned, even though x_1 = 2 is infeasible
- Because we are using the incremental propagator for SUM...
- ...And we are using information about x_1 to filter x_2...
- ...And not vice-versa
AC3 - An Example
- We pop and process the first queue item
- No variable can be pruned
AC3 - An Example
- We pop and process the first queue item
- And we can prune x_1...
- ...Because now we are using information about x_2 to filter x_1
AC3 - An Example
- We insert new items in the queue, as usual
AC3 - An Example
- We pop and process the first queue item...
- ...And we prune x_0
AC3 - An Example
- We insert new items as usual
AC3 - An Example
- The solution is now feasible, but the solver does not know it
- Even when all variables are bound there may be an inconsistency
- Hence, we must process all the items in Q!
AC3 - An Example
- Now the queue is empty!
- Hence, we have reached the fix-point
- Incidentally, we also have a solution
Accessing Pruned Values
We are missing only one ingredient: accessing pruned values
If we want to enable this, we need to:
- Design an API for accessing the values
- Store the pruned values
- Forget the pruned values once they have been processed
The first two problems are simple:
- The API is (mostly) a matter of design choices
- Storing the values can be done in a number of ways
The tricky part is choosing when to forget the stored values
Accessing Pruned Values
Consider this step of our AC3 example:
- Value 3 has been pruned from x_0 (and x_1)
- And we can store it in some way
Accessing Pruned Values
Consider this step of our AC3 example
The removal triggered the insertion of items in Q:
- Once items (c_0, ∗) and (c_1, x_0) have been processed
- The removal of 3 from x_0 has been acknowledged
Accessing Pruned Values
Consider this step of our AC3 example
Hence, we just need to:
- Mark the last generated item for each pruned variable, i.e. (c_1, x_0)
- Once it has been processed, we can forget the stored value
Constraint Systems
ELEMENT Constraint
and Support Caching
Modeling Assignment Costs
A simple problem: we need to assign a certain object
x ∈ {0, 1, 2} represents the possible assignment choices
- Assigning the object leads to a cost z
- The cost depends on the value of x:
  - If x = 0, the cost is 1
  - If x = 1, the cost is 3
  - If x = 2, the cost is 4
This is simple, but important:
- This kind of relation appears in many complex problems
How can we model this situation?
Sum Based Model
A first possible model:
z = 1 \cdot (x = 0) + 3 \cdot (x = 1) + 4 \cdot (x = 2)

This is how we have handled costs (and capacities) so far:
- It works
- Filtering can be quite efficient thanks to SUM
However, the model has poor propagation:
- The z bounds computed by SUM are 0 and 8
- But the true bounds for z are 1 and 4!
The problem, as usual, is the reified constraints...
Sum Based Model #2
A better model:
z = 1 + 2 \cdot (x = 1) + 3 \cdot (x = 2)

Here's the main underlying idea:
x must take a value
- Hence, in the best case the cost will be 1 (for value 0)
- Other values lead to higher costs
The propagation is a bit better:
But we are still far from the true bounds!
A Change of Perspective
Let's look at the problem from another perspective:
- We are trying to access a vector of costs (i.e. [1, 3, 4])
- And the x variable acts as an index
Formally, this can be written as:

z = v_x

where V = [v_0, v_1, v_2, \ldots] is the vector of costs
- In this format, it is explicit that z must take a value from V
- But we need a custom propagator!
ELEMENT Constraint
The relation z = v_x is captured by the ELEMENT constraint
ELEMENT(z, V, x), where:
- z is an "output" variable
- V is a vector of values (or variables)
- x is an "index" variable
- In most solvers the constraint is represented as an expression...
- ...and using a special syntax
We will use the syntax z = v_x
ELEMENT - Usage Examples
ELEMENT is a very important constraint
Some usage examples:
"The global cost is the sum of the assignment cost of variables in X"

z = \sum_{x_i \in X} c_{x_i}

C is the vector of assignment costs
- The summation can be modeled (as usual) with SUM
Assignment costs are present in many practical problems
ELEMENT - Usage Examples
ELEMENT is a very important constraint
Some usage examples:
"The product in position 0 must have lower weight than position 1"

w_{x_0} < w_{x_1}

x_i represents the product at position i
W is a vector containing the weight of all products
ELEMENT can be used to map items to their properties
- This greatly simplifies the definition of complex constraints
- Especially with some modeling styles
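A small illustrative example of the assignment-cost case with OR-Tools CP-SAT, where the ELEMENT relation is posted via AddElement (the variable names are mine, not course material):

```python
from ortools.sat.python import cp_model

costs = [1, 3, 4]                     # the cost vector V
model = cp_model.CpModel()
x = model.NewIntVar(0, 2, "x")        # "index" variable
z = model.NewIntVar(0, 10, "z")       # "output" variable
model.AddElement(x, costs, z)         # z = costs[x]
model.Minimize(z)

solver = cp_model.CpSolver()
solver.Solve(model)
print(solver.Value(x), solver.Value(z))   # 0 1: the cheapest assignment
```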
A Propagator for the ELEMENT Constraint
How do we write a propagator for z = v_x?
A simplifying assumption: V is a vector of values
- Bound consistency for the z variable:

\begin{align}
& ub_z = \max\{ v_u \ :\ u \in D(x) \}\\
& lb_z = \min\{ v_u \ :\ u \in D(x) \}
\end{align}

- GAC for the z variable:

w \in D(z) \text{ is not pruned iff } \exists u \in D(x) \ :\ v_u = w

BC can be enforced in O(|D(x)|), GAC in O(|D(z)|\,|D(x)|)
A Propagator for the ELEMENT Constraint
How do we write a propagator for z = v_x?
A simplifying assumption: V is a vector of values
- Bound consistency for the x variable:

\begin{align}
& ub_x = \max\{u \in D(x) : \underline{z} \leq v_u \leq \overline{z} \}\\
& lb_x = \min\{u \in D(x) : \underline{z} \leq v_u \leq \overline{z} \}
\end{align}

- GAC for the x variable:

u \in D(x) \text{ is not pruned iff } \exists w \in D(z) \ :\ v_u = w

- If V contains variables the propagator is a bit more complex...
- ...But the basic reasoning is the same
Incremental Propagator for ELEMENT
Can we use incremental computation for ELEMENT?
- Yes, we can, via a simple and very generalizable approach
- Main idea: store the supports
Let's see how to do it when enforcing GAC on z:
- At the first propagator execution, we do the usual stuff:

w \in D(z) \text{ is not pruned iff } \exists u \in D(x) \ :\ v_u = w

- But for each w \in D(z), we store the corresponding support u
- Let us refer to it as u(w)
- Whenever x gets pruned, we check if u(w) is still in the domain
- If it is, then w still has a support
- Otherwise, we search for another support
- And of course the new value becomes u(w)
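A minimal sketch of support caching for GAC on z (plain Python over explicit domain sets; the function names and data structures are assumptions for illustration):

```python
V = [1, 3, 4]                            # z = V[x], with V a vector of constants

def filter_z_first(Dz, Dx):
    """First run: prune unsupported values of z and cache a support u(w)."""
    support = {}
    for w in set(Dz):
        u = next((u for u in Dx if V[u] == w), None)
        if u is None:
            Dz.discard(w)                # no index maps to w
        else:
            support[w] = u               # cache the witness
    return support

def filter_z_incremental(Dz, Dx, support):
    """Later runs (x was pruned): re-check only the cached supports."""
    for w in set(Dz):
        if support[w] in Dx:
            continue                     # cached support still valid: nothing to do
        u = next((u for u in Dx if V[u] == w), None)
        if u is None:
            Dz.discard(w); support.pop(w)
        else:
            support[w] = u               # new support found

Dz, Dx = {1, 2, 3, 4}, {0, 1, 2}
sup = filter_z_first(Dz, Dx)             # Dz becomes {1, 3, 4}
Dx.discard(1)                            # value 1 pruned from D(x)
filter_z_incremental(Dz, Dx, sup)        # Dz becomes {1, 4}
print(Dz)
```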
Incremental Propagator for ELEMENT
This idea is at the basis of an algorithm called GAC-Schema
- It is often sub-optimal
- But it is also simple and very general
For those interested in the actual GAC-Schema algorithm:
- There is a paper on the course web site
- It's the survey about global constraints
Incremental Propagator for ELEMENT
In many cases (including ours), this approach has a nice property
There is no need to restore the u(w) values on backtrack
When moving from a child to the parent node, the domains can only grow larger
- Hence, if u(w) is a support for w in a child node...
- ...it will still be a support in the parent node
We can use normal variables to store the u(w) values!
- There is no need of trailing for u(w)
- Hence, no stack to store and manage for the variable
- This is much more efficient
Constraint Systems
A Few More Global Constraints
MIN/MAX Constraint
- We have introduced a global constraint for sums with many terms
- We can do the same for minimum/maximum operations
MIN(z, X), where z is an output variable and X is a vector of input variables
The constraint enforces BC on the relation:

z = \min(X)

- which is often used as a special syntax for its definition
The constraint has an efficient propagator
- Incremental propagation is performed via support caching
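An illustrative snippet (OR-Tools CP-SAT, where the MIN relation is available as AddMinEquality; the model itself is made up):

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
X = [model.NewIntVar(2, 9, f"x{i}") for i in range(4)]
z = model.NewIntVar(0, 9, "z")
model.AddMinEquality(z, X)            # z = min(X): propagation yields 2 <= z <= 9

solver = cp_model.CpSolver()
solver.Solve(model)
print(solver.Value(z), [solver.Value(v) for v in X])
```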
TABLE Constraint
- Remember our "theoretical" definition of constraint?
- It turns out it is available as a global constraint, too!
TABLE(X, T), where:
- X is a vector of variables
- T is a vector of tuples, corresponding to the valid assignments
- The variables in X can take a certain combination of values...
- ...iff such a combination appears as a tuple in T
The constraint can also be specified using invalid assignments
TABLE Constraint
TABLE allows modeling arbitrary constraints, including:
- Vector-to-scalar functions
- Scalar-to-vector functions
- Vector-to-vector functions
Every constraint over discrete variables can be modeled with TABLE
- Of course, the table size may grow very large
- But TABLE is often very well-optimized
- In or-tools: tables with 1M rows can be handled very efficiently
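A closing illustrative snippet (OR-Tools CP-SAT, where TABLE is exposed as AddAllowedAssignments; the tuples are made up):

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
X = [model.NewIntVar(0, 2, f"x{i}") for i in range(2)]
T = [(0, 1), (1, 2), (2, 0)]          # the valid (x0, x1) assignments
model.AddAllowedAssignments(X, T)     # TABLE(X, T)

solver = cp_model.CpSolver()
solver.Solve(model)
print([solver.Value(v) for v in X])   # one of the tuples in T
```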