Reducing set size while maintaining minimum frequency

Date: 2014-09-12 05:19:20

Tags: python algorithm

Say I have the following:

{(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)}

This gives the following frequency for each number:

2: 4, 3: 4, 4: 3, 1: 2

Can you suggest a way of reducing the set so that each number is still present at least 2 times, but the number of tuples in the set is reduced to a minimum?

For instance, the tuple (3, 4) could be removed from the set, giving these frequencies:

2: 4, 3: 3, 4: 2, 1: 2

Here is my very feeble attempt at solving this:

from collections import Counter

def reduce(a, limit):
    while True:
        remove = None
        for i in a:
            # recount frequencies over the tuples currently in the set
            c = Counter(x for s in a for x in s)

            # only consider removing a tuple that contains the most common number
            if c.most_common(1)[0][0] in i:
                # removal must not push any of its numbers below the limit
                if min(c[j] for j in i) > limit:
                    remove = i
                    break

        if remove:
            a.remove(remove)
        else:
            break

reduce(a, 2) # we want at least two of each number

The problem with this solution is that it may reduce the set, but it doesn't necessarily leave me with the smallest possible set.

For my particular example, the set I would like reduced contains strings, let's say:

a = [("one","eighty one","three"), ("eighty five","sixty one","three", "eleven"), ...]

where the length of a is 1000 and each tuple in a has a length of 3 to 9. There are 100 unique values that the tuples can be made up of; "one", for example, is one such value. I would like each unique value to be represented at least 25 times after the set has been reduced. How long might a PC take to compute the reduced set? Are we talking seconds or minutes?

4 Answers:

Answer 0: (score: 7)

As mentioned in the comments, the NP-hard problem Set Cover is the special case of this problem where the minimum frequency is k = 1, which makes this problem NP-hard as well. I would recommend a library like PuLP with the following integer program.

minimize sum over tuples T of x(T)
subject to
y(e): for all elements e, (sum over tuples T of (count of e in T) * x(T)) >= k
z(T): for all tuples T, x(T) in {0, 1}

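For concreteness, here is a rough sketch of what that program could look like in PuLP (my own illustration, assuming PuLP with its bundled CBC solver is installed; reduce_with_pulp and the variable names are made up):

from collections import Counter
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

def reduce_with_pulp(tuples, k):
    tuples = list(tuples)
    elements = {e for t in tuples for e in t}
    prob = LpProblem("reduce_set", LpMinimize)
    # x[i] = 1 if tuple i is kept, 0 otherwise
    x = [LpVariable("x_%d" % i, cat=LpBinary) for i in range(len(tuples))]
    prob += lpSum(x)  # minimize the number of kept tuples
    for e in elements:
        # element e must appear at least k times across the kept tuples
        prob += lpSum(Counter(t)[e] * x[i] for i, t in enumerate(tuples)) >= k
    prob.solve()
    return {t for i, t in enumerate(tuples) if x[i].value() > 0.5}

print(reduce_with_pulp({(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)}, 2))
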
One drawback of PuLP is that it requires an external solver. I was in the mood to hack, however, so I wrote a (very lightly tested) pure Python solver. It uses depth-first search with best-first backtracking, together with a simple propagation strategy for determining which tuples must or must not be chosen, and a heuristic function based on a primal-dual approximation of the following dual of the preceding program (so it's a sophisticated toy, but still a toy).

maximize (sum over elements e of k * y(e)) - (sum over tuples T of z(T))
subject to
x(T): for all tuples T, (sum over elements e in T of y(e)) - z(T) <= 1
for all elements e, y(e) >= 0
for all tuples T, z(T) >= 0

The primal-dual strategy is to increase, at the same rate, those values of y whose increase does not require an unprofitable corresponding increase in z.

from collections import Counter, defaultdict, namedtuple
from fractions import Fraction
from heapq import heappop, heappush
from math import ceil
from operator import itemgetter


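# Generic branch-and-bound driver: descend into the most promising child
# (depth-first), push the remaining children onto a heap keyed by their lower
# bound, and prune any node whose bound cannot beat the best solution so far.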
class _BestFirstSearchDepthFirstBacktracking:
    def optimize(self):
        node = self._make_root_node()
        heap = []
        upper_bound = None
        while True:
            lower_bound = ceil(node.lower_bound)
            if upper_bound is None or lower_bound < upper_bound:
                child_nodes = list(self._make_child_nodes(node))
                if child_nodes:
                    i, node = min(enumerate(child_nodes), key=itemgetter(1))
                    del child_nodes[i]
                    for child_node in child_nodes:
                        heappush(heap, child_node)
                    continue
                upper_bound = lower_bound
                solution = node
            if not heap:
                return (upper_bound, solution)
            node = heappop(heap)


Node = namedtuple('Node', ('lower_bound', 'index', 'maybes', 'yeses', 'variable'))


class UnsolvableException(Exception):
    pass


class _Optimizer(_BestFirstSearchDepthFirstBacktracking):
    def __init__(self, tuples, min_freq):
        self._index = 0
        self._tuples = set(tuples)
        self._min_freq = min_freq
        self._elements = set()
        for t in self._tuples:
            self._elements.update(t)

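    # Constraint propagation: force in any maybe-tuple whose exclusion would
    # leave some element unable to reach min_freq, then drop maybe-tuples that
    # no longer cover any element still short of min_freq.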
    def _propagate(self, maybes, yeses):
        upper_count = Counter()
        for t in maybes:
            upper_count.update(t)
        for t in yeses:
            upper_count.update(t)
        if any(upper_count[e] < self._min_freq for e in self._elements):
            raise UnsolvableException()
        forced_yeses = {t for t in maybes if any(upper_count[e] - k < self._min_freq for e, k in Counter(t).items())}
        maybes = maybes - forced_yeses
        yeses = yeses | forced_yeses
        lower_count = Counter()
        for t in yeses:
            lower_count.update(t)
        residual = {e for e in self._elements if lower_count[e] < self._min_freq}
        maybes = {t for t in maybes if any(e in residual for e in t)}
        return (maybes, yeses)

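    # Primal-dual heuristic: a lower bound on how many more tuples must be
    # added to the current partial solution, plus the tuple to branch on next.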
    def _compute_heuristic(self, maybes, yeses):
        lower_count = Counter()
        for t in yeses:
            lower_count.update(t)
        residual_count = {e: max(self._min_freq - lower_count[e], 0) for e in self._elements}
        y = defaultdict(int)
        z = defaultdict(int)
        variable = None
        while True:
            slack = {t: 1 + z[t] - sum(y[e] for e in t) for t in maybes}
            assert all(s >= 0 for s in slack.values())
            inactive_maybes = {t for t, s in slack.items() if s > 0}
            if not inactive_maybes:
                break
            active_maybes = {t for t, s in slack.items() if s == 0}
            active_count = Counter()
            for t in active_maybes:
                active_count.update(t)
            dy = {e: 1 for e, k in residual_count.items() if active_count[e] < k}
            if not dy:
                break
            delta_inverse, variable = max(((Fraction(sum(dy.get(e, 0) for e in t), slack[t]), t) for t in inactive_maybes), key=itemgetter(0))
            delta = Fraction(1, delta_inverse)
            for e, dy_e in dy.items():
                y[e] += delta * dy_e
            for t in active_maybes:
                z[t] += delta * sum(dy.get(e, 0) for e in t)
        return (sum(residual_count[e] * y_e for e, y_e in y.items()) - sum(z.values()), variable)

    def _make_node(self, maybes, yeses):
        maybes, yeses = self._propagate(maybes, yeses)
        heuristic, variable = self._compute_heuristic(maybes, yeses)
        node = Node(len(yeses) + heuristic, self._index, maybes, yeses, variable)
        self._index += 1
        return node

    def _make_root_node(self):
        return self._make_node(self._tuples, set())

    def _make_child_nodes(self, node):
        if node.variable is None:
            return
        variable = {node.variable}
        maybes = node.maybes - variable
        yield self._make_node(maybes, node.yeses)
        yield self._make_node(maybes, node.yeses | variable)


def optimize(tuples, min_freq):
    optimizer = _Optimizer(tuples, min_freq)
    node = optimizer.optimize()[1]
    print('Nodes examined:', optimizer._index)
    return node.yeses


print(optimize({(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)}, 2))
print(optimize({(1, 2, 3, 4, 5, 6, 7), (8, 9, 10, 11, 12, 13, 14), (1, 2, 3, 4, 8, 9, 10, 11), (5, 6, 12, 13), (7, 14)}, 1))

Answer 1: (score: 2)

Here's a quick and dirty approach. Hopefully enough to get you going.

Unfortunately, it isn't guaranteed to produce the exact minimum result set. It gets rid of smaller tuples first, so if there tend to be more small tuples and fewer large ones, it may work for you.

It also starts with an ordered set (a list), but doesn't get around to restoring the order. It needs to be sorted at least within the function so the computed values are associated correctly. I would like to clean it up and refactor, but it's late.

def reduce(source, min_count=2):
    print "source: {}".format(source)
    # [(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)]
    answer = []

    freq = {}
    lens = []
    for t in source:
        lens.append(len(t))
        for i in t:
            freq[i] = freq.get(i, 0) + 1
    print "freq: {}".format(freq) # {1: 2, 2: 4, 3: 4, 4: 3}
    print "lens: {}".format(lens) # [1, 1, 2, 3, 2, 2, 2]

    from collections import defaultdict
    slens = defaultdict(list)
    for l, t in zip(lens, source):
        slens[l].append(t)
    print "slens: {}".format(slens)
    # {1: [(2,), (3,)], 2: [(1, 4), (2, 3), (3, 4), (2, 4)], 3: [(1, 2, 3)]}

    for l in sorted(slens.keys()):
        for t in slens[l]:
            save = False
            for i in t:
                if (freq[i] <= min_count):
                    save = True
                freq[i] -= 1
            if save:
                answer.append(t)
    print "answer: {}".format(answer) # [(1, 4), (1, 2, 3), (3, 4), (2, 4)]

    freq = {}
    for t in answer:
        for i in t:
            freq[i] = freq.get(i, 0) + 1
    print "freq: {}".format(freq) # {1: 2, 2: 2, 3: 2, 4: 3}

My initial thought was to iterate, saving any tuples that fall below min_count and reducing the working set. Then score the remaining tuples, where lower-frequency elements count for more. Then discard the lowest-scoring tuples that, when removed, do not reduce the frequency of any of their components below min_count. Then recompute the frequencies and start over.
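
A rough, untested sketch of that idea might look like this (reduce_by_score is just an illustrative name, not the code above):

from collections import Counter

def reduce_by_score(source, min_count=2):
    kept = list(source)
    while True:
        freq = Counter(x for t in kept for x in t)
        # rarer elements contribute more, so a high score means "keep this tuple"
        def score(t):
            return sum(1.0 / freq[x] for x in t)
        # tuples that can be dropped without pushing any element below min_count
        removable = [t for t in kept
                     if all(freq[x] - 1 >= min_count for x in t)]
        if not removable:
            return kept
        # discard the lowest-scoring removable tuple and start over
        kept.remove(min(removable, key=score))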

Answer 2: (score: 1)

The problem is at least NP-hard, which means you will not be able to find an efficient (polynomial-time) algorithm. However, there are ways to reduce the constant time factors. Besides using a better algorithm, you could also consider a faster runtime, such as PyPy.

The following code, if run to completion, will return a subset of the smallest possible size. Additionally, it only considers valid inputs, and it can progressively print out increasingly small covering subsets.

from collections import defaultdict
from itertools import product, combinations

def covering_set(sets, min_freq, print_approximations=False):

    # dictionary mapping each unique value to the sets that contain it
    d = defaultdict(list)
    for set_ in sets:
        for elem in set_:
            d[elem].append(set_)

    # we need min_freq number of each unique values
    combos = [combinations(x, min_freq) for x in d.values()]

    #initial solution
    min_cover = sets
    min_length = len(sets)

    #iterate through valid solutions
    #cover is a list of list of sets
    for cover in product(*combos):

        #we must flatten and remove the duplicates in the cover
        covering_set = set()
        for elem_cover in cover:
            for set_ in elem_cover:
                if set_ not in covering_set:
                    covering_set.add(set_)

        #now, we check if it the smallest current solution            
        if len(covering_set) < min_length:
            min_cover = covering_set
            min_length = len(covering_set)
            if print_approximations:
                print(min_length, min_cover)

    return min_cover
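
For the small example from the question, a call might look like this (it is an exhaustive search, so only practical for very small inputs):

result = covering_set([(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)], 2)
print(len(result), result)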

Answer 3: (score: 1)

So here is my solution. I saw that you are working with a set of at most 1000 elements, so I decided to implement the algorithm recursively.

First, let's define the function that gets the frequency of each number in the tuple list:

def get_frequency(tuple_list):
    frequency = {}
    def mapper(element):
        if frequency.has_key(element):
            frequency[element] += 1
        else:
            frequency[element] = 1
    map(lambda t: map(mapper, t), tuple_list)
    return frequency

This one is fairly self-explanatory, so I won't spend much time on it. After that, I decided to implement the main function, called recursive. This function returns a tuple consisting of the list of elements that can be deleted and the maximum depth the algorithm can reach.

Here is the pseudocode I wrote before the implementation:

if tuple_list is null : return ([], iteration)
best_deletion = None
for elements:
     if element can't be deleted : continue
     launch the next recursion without the current element in the list
     if the iteration is better than best_iteration or best_iteration is None :
         set the result of recursion in best_deletion
if best_deletion is None : return ([], iteration)
return the best_iteration with adding the current Node inside, and increment the iteration

Here is the result:

def recursive(tuple_list, limit, iteration):
    if tuple_list == []:
        return ([], iteration)

    frequency = get_frequency(tuple_list)

    value = None

    for i in xrange(len(tuple_list)):

        # check whether tuple i can be removed without dropping any number
        # below the limit; restore the counts afterwards so later candidates
        # are checked against the correct frequencies
        impossible_to_delete = False
        decremented = []
        for number in tuple_list[i]:
            frequency[number] -= 1
            decremented.append(number)
            if frequency[number] < limit:
                impossible_to_delete = True
                break
        for number in decremented:
            frequency[number] += 1

        if impossible_to_delete:
            continue

        next_recursion_list = tuple_list[:]
        next_recursion_list.pop(i)

        maximum_deletion = recursive(next_recursion_list, limit, iteration + 1)

        if value == None:
            maximum_deletion[0].insert(0, tuple_list[i])
            value = (maximum_deletion[0], maximum_deletion[1] + 1)
        else:
            if value[1] < maximum_deletion[1]:
                maximum_deletion[0].insert(0, tuple_list[i])
                value = (maximum_deletion[0], maximum_deletion[1] + 1)

    if value == None:
        return ([], iteration)
    return value

After that, just call the function:

items_to_delete = recursive(list(tuple_set), 2, 0)

Hope it helps. If I have some time, I'll test which of the preceding algorithms is the fastest.