24 votes

Programming Challenge: Merge an arbitrary number of arrays in sorted order.

It looks like it's been over a week and a half since our last coding challenge, so let's get one going. This challenge is a relatively simple one, but it's complex enough that you can take a variety of different approaches to it.

As the title suggests, write a program that accepts an arbitrary number of arrays, in whatever form or manner you see fit (if you want to e.g. parse a potentially massive CSV file, then go nuts!), and returns a single array containing all of the elements of the other arrays in sorted order. That's it!

Bonus points for creative, efficient, or generalized solutions!

31 comments

  1. [3]
    mironimous
    (edited )
    Link

    Here is my sed solution:

    :a
    s/[^0-9,]//g
    s/(,|$)+/%,/g
    :b
    s/$/@0@1@2@3@4@5@6@7@8@9@/
    :c
    s/^([0-9]*)([0-9])%([0-9]*,)(.*@\2[^@]*)@/\4\1%\2\3@/
    tc
    s/@(.|$)//g
    /(^|,)[^%]/{s/(,|^)%/\10%/g;
    bb}
    s/%0*([^,])|,$/\1/g
    s/^,//
    N
    s/\n/,/
    ba
    

    Inputs are comma-separated lists of numbers; at each newline the currently sorted list is output, so if you want the whole list, pipe it into tail -n1.

    Internally it uses radix sort (@0@1@2@3@4@5@6@7@8@9@ are the marks for the buckets and it adds padding zeros in front of the numbers as needed and removes them at the end).

    Edit: Edited it so now it only outputs the list at EOF.
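    For the curious, here is the same idea sketched in Python — an illustration of LSD radix sort as an editorial aside, not a translation of the sed script:

    ```python
    def radix_sort(nums):
        """Sort non-negative integers by bucketing on each digit,
        least significant digit first (LSD radix sort)."""
        width = len(str(max(nums))) if nums else 0
        for digit in range(width):
            # Ten buckets, one per decimal digit -- the sed script's
            # @0@1@...@9@ markers play the same role.
            buckets = [[] for _ in range(10)]
            for n in nums:
                buckets[(n // 10 ** digit) % 10].append(n)
            # Stable concatenation is what makes each pass build on the last.
            nums = [n for bucket in buckets for n in bucket]
        return nums
    ```

    Because each pass is stable, after processing the highest digit the whole list is in order.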

    22 votes
    1. balooga
      Link Parent

      You monster.

      That is amazing.

      6 votes
    2. apoctr
      Link Parent

      This is absolutely disgusting and abhorrent. Brilliant.

      4 votes
  2. [3]
    min
    (edited )
    Link

    I used Scala for this problem, and it almost feels like cheating.

    My solution is:

    import scala.reflect.ClassTag
    def merge[T: Ordering: ClassTag](arrays: Array[T]*): Array[T] = 
        arrays.flatten.sorted.toArray
    

    Yep, that's all.

    I implemented it as a generic function that works on any array whose element type has an Ordering instance, which is what lets it use the sorted function. The * in Array[T]* marks the parameter as variadic and allows an arbitrary number of arguments. ClassTag allows the result to be converted back into an Array in order to be returned. flatten turns the Seq parameter, arrays, into a single flat sequence containing all of the elements.

    Here are some assertions showing it working with various different types:

    // Characters
    assert(merge(Array('G', 'V', 'K', 'T', 'X'),
                 Array('r', 'y', 'b', 's'),
                 Array('#', '%', '!')) sameElements
                 Array('!', '#', '%', 'G', 'K', 'T', 'V', 'X', 'b', 'r', 's', 'y'))
    
    // Integers
    assert(merge(Array(9, 2, 3),
                 Array(7, 5, 4), 
                 Array(8, 6, 1)) sameElements 
                 Array(1, 2, 3, 4, 5, 6, 7, 8, 9))
    
    // Doubles
    assert(merge(Array(7.64, 2.31, 3.28),
                 Array(1.31, -7.2, 1.03), 
                 Array(5.82, 6.33, 8.73)) sameElements
                 Array(-7.2, 1.03, 1.31, 2.31, 3.28, 5.82, 6.33, 7.64, 8.73))
    

    I also tried to find the complexity:

    val startTime = System.nanoTime()
    
    merge(
        (for (i <- 1 to 1000)
            yield Array(
                (for (i <- 1 to 1000) 
                    yield scala.util.Random.nextInt(100)): _*)): _*)
    
    println("Time to merge and sort 1,000 Arrays with 1,000 random (0 to 100) Integers: " + ((System.nanoTime() - startTime) / 1e6) + " ms")
    
    
    val startTime2 = System.nanoTime()
    
    merge(
        (for (i <- 1 to 10000)
            yield Array(
                (for (i <- 1 to 1000) 
                    yield scala.util.Random.nextInt(100)): _*)): _*)
    
    println("Time to merge and sort 10,000 Arrays with 1,000 random (0 to 100) Integers: " + ((System.nanoTime() - startTime2) / 1e6) + " ms")
    

    I'd like to mention the _* in the function calls, which I think is pretty cool. It's called the splat operator, and it unpacks the arguments so the function call acts as if all of those Arrays were being passed as individual parameters. You might recognize it as * from various dynamic languages like Ruby and Python, or as the spread operator (...) from JavaScript.

    Each array fed into the merge function has 1,000 elements, which are all random Integers between 0 and 100. On my PC (i7 7700k/16GB) it does the 1,000 arrays in about 0.35 seconds, and 10,000 arrays in about 1.3 seconds. Seems like some pretty good optimization: a 10x increase in elements only increases the runtime by about 4x. The sort itself should still be O(n log n), so I suspect the sub-linear scaling comes from measurement effects like JVM warm-up on the first run rather than the algorithm itself; I'm not quite sure.

    And finally, here is an online compiler showing it working: https://scastie.scala-lang.org/uEvBr82dTMyMKfoZmQhosQ

    On Scastie, my average times were 0.4 seconds for 1,000 arrays and 3.5 seconds for 10,000 arrays. In other words, about a 9x increase in runtime for a 10x increase in the number of elements. So, not as great as it ran on my machine, but still slightly better than linear scaling.

    Overall, this was a fun problem to do. Even though I didn't do most of the work (that credit goes to the Scala developers for the stdlib), it was great practice for reading documentation (and Stack Overflow :) ). It did take me an embarrassingly long time to find all that documentation for a language I barely know, but I still enjoyed it.

    I'll keep this write-up and the source code on Github for anybody who wants to look at it later: https://github.com/mnxn/Merge-and-Sort-Arrays.Scala

    7 votes
    1. [2]
      jacebot
      Link Parent

      This is pretty sexy too. 3 lines and done. 2 if you don't count the import. Nice.

      1. min
        (edited )
        Link Parent

        I could rewrite it as

        def merge[T: Ordering: scala.reflect.ClassTag](arrays: Array[T]*): Array[T] = 
            arrays.flatten.sorted.toArray
        

        But that seemed pretty unnecessary given how much elegance and readability is lost.

        Edit: If I remove the return type (and use type inference instead) and line break I can get it down to one line:

        def merge[T: Ordering: scala.reflect.ClassTag](arrays: Array[T]*) = arrays.flatten.sorted.toArray
        
        1 vote
  3. [3]
    666
    Link

    This is the least efficient I can make it (Python 3), without adding irrelevant code in between:

    def sort_arrays(*a):
      return list(sorted(__import__('functools').reduce(__import__('operator').concat, a)))
    

    Just call it like this: sort_arrays([5,7,1], [2,8,3], [9,5,7])

    7 votes
    1. jacebot
      Link Parent

      Now this is elegant, and mostly readable.

      3 votes
  4. jonluca
    Link

    Hmm fun.

    Naive (straightforward):

    def sort_arrays(*arrays):
        compiled = list()
        for array in arrays:
            compiled += array
        compiled.sort()  # list.sort() sorts in place and returns None
        print(compiled)
    

    It's got a time complexity of O(n log n), where n is the total number of elements across all the arrays. I'll try to come up with some more fun ones when I get home (maybe a generator? multithreaded?)
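    One way the generator idea might look, sketched with the standard library's heapq.merge (the function name sort_arrays_gen is hypothetical):

    ```python
    import heapq

    def sort_arrays_gen(*arrays):
        """Generator variant: sort each input once, then lazily merge the
        sorted runs with a heap instead of building one big list."""
        yield from heapq.merge(*(sorted(a) for a in arrays))
    ```

    Consumers can then pull elements one at a time, e.g. `list(sort_arrays_gen([5, 7, 1], [2, 8, 3]))`.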

    5 votes
  5. Celeo
    (edited )
    Link

    Super simple Python. Read a line, create a new list of those values as ints, throw it onto a master list, then unpack everything and sort it.

    import sys
    
    
    arrays = []
    for line in sys.stdin:
        line = line.strip()
        if not line:
            break
        arrays.append([int(e) for e in line.split(' ') if e])
    print(sorted(i for j in arrays for i in j))
    

    The easiest thing to do is cast to ints when reading off stdin, but sorted supports a key parameter to specify how to sort the data as well.
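    For example, key can impose numeric order on the raw strings without converting the values up front:

    ```python
    # Sorting string tokens: the default order is lexicographic, while
    # key=int compares the parsed numeric values instead.
    tokens = ["10", "2", "33", "4"]
    print(sorted(tokens))            # lexicographic: ['10', '2', '33', '4']
    print(sorted(tokens, key=int))   # numeric order: ['2', '4', '10', '33']
    ```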


    Similar thing in Go, but only keeping a single slice of ints and appending to it each time. Not likely a performant usage of append.

    package main
    
    import (
    	"bufio"
    	"fmt"
    	"os"
    	"sort"
    	"strconv"
    	"strings"
    )
    
    func main() {
    	var ints []int
    	reader := bufio.NewReader(os.Stdin)
    	for {
    		text, err := reader.ReadString('\n')
    		if err != nil {
    			panic(err)
    		}
    		text = strings.TrimSpace(text)
    		if text == "" {
    			break
    		}
    		for _, part := range strings.Split(text, " ") {
    			partVal, err := strconv.Atoi(strings.TrimSpace(part))
    			if err != nil {
    				panic(err)
    			}
    			ints = append(ints, partVal)
    		}
    	}
    	sort.Ints(ints)
    	fmt.Println(ints)
    }
    
    
    5 votes
  6. spit-evil-olive-tips
    Link

    Here's one that assumes each list is already sorted (or sorts them at the beginning, out of laziness), then does an N-way merge between them:

    def sort(arrays):
        for array in arrays:
            array.sort()
    
        indexes = [0] * len(arrays)
        output = []
        while True:
            source_array = find_next(arrays, indexes)
            if source_array is None:
                return output
    
            index = indexes[source_array]
            output.append(arrays[source_array][index])
            indexes[source_array] += 1
    
    
    def find_next(arrays, indexes):
        """
        Given an array of arrays to sort and their current index, return the index
        of the array with the smallest element at the current index.
        """
        winning_index = None
        winning_item = None
        for array_index, array in enumerate(arrays):
            index = indexes[array_index]
            if index < len(arrays[array_index]):
                item = array[index]
                if winning_item is None or item < winning_item:
                    winning_item = item
                    winning_index = array_index
    
        return winning_index
    

    Plus test cases (feel free to use these for your own Python solution; they only assume that the sort function is importable as sort and takes a list of lists):

    from collections import namedtuple
    import unittest
    
    from merge_sort import sort
    
    TestCase = namedtuple('TestCase', ['input', 'expected'])
    
    
    class MergeSortTests(unittest.TestCase):
        def test_sort(self):
            test_cases = [
                TestCase(input=[[]], expected=[]),
                TestCase(input=[[1]], expected=[1]),
                TestCase(input=[[1, 2]], expected=[1, 2]),
                TestCase(input=[[2, 1]], expected=[1, 2]),
                TestCase(input=[[1], [2]], expected=[1, 2]),
                TestCase(input=[[2], [1]], expected=[1, 2]),
                TestCase(input=[[1], [2], [3, 4]], expected=[1, 2, 3, 4]),
                TestCase(input=[[9, 8, 7], [6, 5, 4], [3, 2, 1]],
                    expected=[1, 2, 3, 4, 5, 6, 7, 8, 9]),
                TestCase(input=[[2, 3, 4, 6, 7, 8, 9], [1, 5]],
                    expected=[1, 2, 3, 4, 5, 6, 7, 8, 9]),
            ]
    
            for test_case in test_cases:
                actual = sort(test_case.input)
                self.assertEqual(test_case.expected, actual, test_case.input)
    

    To run, python3 -m unittest test_merge_sort (assumes both merge_sort.py and test_merge_sort.py are in the same directory)
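    As an aside, the linear scan in find_next costs O(k) per output element for k input arrays; a heap brings that down to O(log k). A sketch of the same N-way merge (the name sort_heap is hypothetical, not part of the solution above):

    ```python
    import heapq

    def sort_heap(arrays):
        """N-way merge that tracks the smallest head element with a heap
        instead of scanning every array on each step."""
        for array in arrays:
            array.sort()
        # Heap entries: (value, array index, element index).
        heap = [(a[0], i, 0) for i, a in enumerate(arrays) if a]
        heapq.heapify(heap)
        output = []
        while heap:
            value, i, j = heapq.heappop(heap)
            output.append(value)
            if j + 1 < len(arrays[i]):
                heapq.heappush(heap, (arrays[i][j + 1], i, j + 1))
        return output
    ```

    It should pass the same test cases, since it takes a list of lists and returns the merged result.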

    4 votes
  7. RiderOfGiraffes
    Link

    Python2

    def sort_list_of_lists( lol ):
        combined = sum( lol, [] )
        combined.sort()
        return combined
    

    That's just the function, I leave the I/O to the front-end people.

    4 votes
  8. [3]
    balooga
    Link

    Just because of all the PHP umbrage around here lately, I wrote my solution in it. 😝

    function combineAndSortArrays(...$params){
            $arr = array_reduce($params, 'array_merge', []);
            sort($arr);
            return $arr;
    }
    

    combineAndSortArrays() is a variadic function that accepts as many arrays as you can throw at it. It combines them into a single array and then sorts it by value.

    4 votes
    1. [2]
      Klayy
      Link Parent

      You can actually do
      $arr = array_merge(...$params);

      1 vote
  9. [6]
    Soptik
    (edited )
    Link

    I'll use my own sorting algorithm (but I'm sure it already exists):

    Theory

    Let's have a method that takes an unlimited number of arrays (params int[][] in C#). We build some collection, probably a binary tree. For each element in each of the arrays, I'll insert the number into the tree (or a custom collection that works very similarly to a binary tree). Because the arrays are not ordered, we can safely choose the first element of the first array as the tree root, as it's equivalent to choosing a random element. I don't know if it's worth balancing the tree in the process.

    After we have all elements, we return the tree (or convert it to array and return it).

    Time complexity should be O(N log N) on average, where N is the total number of elements, plus O(N) if we convert the tree to an array afterwards. (Without balancing, an already-sorted input would degrade this to O(N²).)

    It would be interesting to compare it to the simplest way to do it: just add everything to a single array and then call .Sort().

    Code

            static void Main(string[] args)
            {
                Random rnd = new Random();
                int[][] input = new int[10][];
                for(int i = 0; i < input.Length; i++)
                {
                    input[i] = new int[10];
                    for(int j = 0; j < input[i].Length; j++)
                    {
                        input[i][j] = rnd.Next(101);
                    }
                }
                Stopwatch sw = new Stopwatch();
                const int tries = 100;
                List<long> t1 = new List<long>(tries);
                List<long> t2 = new List<long>(tries);
                for (int i = 0; i < tries; i++)
                {
                    sw.Start();
                SortArraysSimpleMethod(input).ToArray(); // Convert the result to an array to force the query to run; LINQ uses deferred execution
                    sw.Stop();
                    t1.Add(sw.ElapsedMilliseconds);
                    sw.Reset();
    
                    sw.Start();
                    SortArraysComplexMethod(input);
                    sw.Stop();
                    t2.Add(sw.ElapsedMilliseconds);
                    sw.Reset();
    
                    Console.Write(".");
                }
                Console.Write("\n");
    
                Console.WriteLine(t1.Average());
                Console.WriteLine(t2.Average());
                Console.ReadKey();
                return;
            }
    
    
            private static IEnumerable<int> SortArraysSimpleMethod(params int[][] arrays)
            {
                int[] result = new int[arrays.Select(array => array.Length).Sum()];
                int i = 0;
                foreach(int[] array in arrays)
                {
                    foreach(int element in array)
                    {
                        result[i] = element;
                        i++;
                    }
                }
                return result.OrderBy(element => element);
            }
    
            private static IEnumerable<int> SortArraysComplexMethod(params int[][] arrays)
            {
                Node tree = null;
                foreach(int[] array in arrays)
                {
                    foreach(int element in array)
                    {
                        if(tree == null)
                        {
                            tree = new Node() { value = element };
                        }
                        else
                        {
                            tree.AddChild(element);
                        }
                    }
                }
    
                // Read child
                List<int> values = new List<int>(arrays.Select(array => array.Length).Sum());
                tree.ReadValues(values);
                return values;
            }
    
    
            class Node
            {
                public Node leftChild;
                public Node rightChild;
                public Node parent;
                public int value;
    
                public void AddChild(int value)
                {
                    if(value >= this.value)
                    {
                        if(rightChild == null)
                        {
                            rightChild = new Node() { parent = this, value = value };
                        }
                        else
                        {
                            rightChild.AddChild(value);
                        }
                    }
                    else
                    {
                        if(leftChild == null)
                        {
                            leftChild = new Node() { parent = this, value = value };
                        }
                        else
                        {
                            leftChild.AddChild(value);
                        }
                    }
                }
    
                public void ReadValues(List<int> collection)
                {
                    if(leftChild != null)
                    {
                        leftChild.ReadValues(collection);
                    }
                    collection.Add(value);
                    if(rightChild != null)
                    {
                        rightChild.ReadValues(collection);
                    }
                }
            }
    

    Results

    For small inputs, like 10*10, the results were surprising: the simple method averaged 0.07 ms (over 100 tries), the complex one 0.01 ms. For input 100*50, the simple one averaged 1.62 ms and the complex one 2.13 ms. For input 100*1000 (this time an average of 10 tries), the simple one averaged 53.9 ms and the complex one 1274.2 ms.

    Edit: Added syntax highlighting

    4 votes
    1. [5]
      Emerald_Knight
      Link Parent

      Those results aren't surprising at all, actually! Let's say we assume O(n log n) for both approaches and completely disregard any additional algorithmic factors. With a simple array, the allocated memory is sequential and thus your array accesses benefit from locality of reference, whereas your tree isn't constructed of sequentially-allocated memory and thus doesn't have that same benefit. Temporal and spatial locality of reference are concepts that are used to optimize caching behavior, reducing the impact of a cache miss and the subsequent retrieval from slower RAM. It would absolutely have an expected performance impact :)

      3 votes
      1. [4]
        Soptik
        Link Parent

        I was surprised about the result for 10*10. I expected the simple solution to be faster, so I was quite surprised when the complex one won. I tested it multiple times, but the complex one is always faster for 10*10. I'm not sure why the complex solution is faster, but I think the LINQ methods I use in the simple solution have some overhead, and the tree isn't that much slower with just 100 elements.

        I thought that the tree approach would be slower just because it has to visit log(n) children when inserting new element, I didn't know about the sequentially-allocated memory :)

        If I created something like linked list instead of tree, allocated all required nodes at once and connected the nodes afterwards - the slowing effect of not-sequentially-allocated memory would disappear, right?

        2 votes
        1. [3]
          Emerald_Knight
          Link Parent

          Regarding the tree being more efficient with 10*10, that could easily be due to amortized time complexity of your inserts. That amortized time complexity likely makes a large difference at the beginning, but ends up being completely overshadowed by the impacts of locality of reference with larger arrays.

          Regarding the linked list structure, I wouldn't expect the sequential nature of the nodes to necessarily translate directly into a sequential allocation of memory. It's possible, for example, that a sequence of nodes a -> b -> c -> d -> e -> f is allocated in memory in the order e -> b -> d -> a -> c -> f. In such a case, in an array you would expect that accessing e then accessing f would be efficient, but in this example scenario of a linked list you would find that accessing e then accessing f would be inefficient since they're so far apart in memory.

          Locality of reference has nothing to do with how we structure the data representationally, and everything to do with where that data lives in actual, physical memory, on the hardware.

          1 vote
          1. [2]
            Soptik
            (edited )
            Link Parent

            This is really interesting.

            A common example is matrix multiplication:

            for i in 0..n
              for j in 0..m
                for k in 0..p
                  C[i][j] = C[i][j] + A[i][k] * B[k][j];
            

            By switching the looping order for j and k, the speedup in large matrix multiplications becomes dramatic, at least for languages that put contiguous array elements in the last dimension.

            On a year 2014 processor, the second case is approximately five times faster than the first case

            wiki

            However, it looks a bit like black magic to me. When you're writing algorithms that need to run fast, do you use these techniques? Is it really worth diving into?
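            For concreteness, here are the two loop orders from the quoted example transcribed into Python (an editorial sketch; Python itself shows little of the cache effect, but it makes the transformation explicit):

            ```python
            # Both functions compute the same matrix product C = A * B.
            # In a compiled language, the i-k-j order is the cache-friendly one
            # because the inner loop walks B's rows (and C's rows) contiguously.
            def matmul_ijk(A, B):
                n, m, p = len(A), len(B[0]), len(B)
                C = [[0] * m for _ in range(n)]
                for i in range(n):
                    for j in range(m):
                        for k in range(p):
                            C[i][j] += A[i][k] * B[k][j]
                return C

            def matmul_ikj(A, B):
                n, m, p = len(A), len(B[0]), len(B)
                C = [[0] * m for _ in range(n)]
                for i in range(n):
                    for k in range(p):   # j and k swapped relative to above
                        for j in range(m):
                            C[i][j] += A[i][k] * B[k][j]
                return C
            ```

            The swap is legal because addition is order-independent here; only the memory access pattern changes.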

            1 vote
            1. Emerald_Knight
              Link Parent

              I personally don't need to handle these kinds of optimizations. But yes, these are used if needed. They're only worth diving into if your applications need to squeeze out every bit of performance possible, particularly in processing-intensive programs.

              It may seem like black magic, but once you know how data is structured and how it's loaded into cache, it makes perfect sense. It's all just a matter of taking advantage of the architecture you're working with.

              Good on you for doing some additional research on the subject!

              1 vote
  10. Leo
    Link

    Now for Python:

    fn = lambda x: sorted(sum(x, []))

    3 votes
  11. Leo
    (edited )
    Link

    Don't have time to test this quite yet but it should work:

    #include <algorithm>
    #include <iterator>
    #include <vector>
    
    template <typename T>
    std::vector<T> flatten_and_sort(std::vector<std::vector<T>> arrays) {
    	std::vector<T> result;
    	auto it = std::back_inserter(result);
    	
    	for (auto &array : arrays) {
    		std::move(array.begin(), array.end(), it);
    	}
    	
    	std::sort(result.begin(), result.end());
    	
    	return result;
    }
    

    Passing the argument by value allows the caller to choose whether to pass an lvalue to be copied or an rvalue to be consumed.

    Flattening is O(n) where n is the total number of elements in all the arrays, and sorting is O(n log n).

    2 votes
  12. Emerald_Knight
    (edited )
    Link

    I decided to take a database approach to this problem using MongoDB.

    Given a collection arrays where each document has a field values containing an array of numbers, the following aggregation pipeline will produce a document with the desired result contained in a final merged field:

    db.arrays.aggregate([
        {$unwind: "$values"},
        {$sort: {
            values: 1
        }},
        {$group: {
            _id: null,
            merged: {$push: "$values"}
        }}
    ]);
    
    2 votes
  13. [3]
    onyxleopard
    Link

    Not sure if using Python's itertools.chain is cheating…

    Python 3.7.0 (default, Jul 15 2018, 14:01:53) 
    Type 'copyright', 'credits' or 'license' for more information
    IPython 6.5.0 -- An enhanced Interactive Python. Type '?' for help.
    
    In [1]: from operator import itemgetter
    
    In [2]: from itertools import chain
    
    In [3]: def multi_sort_gen(*iterables, **kwargs):
       ...:     yield from sorted(chain(*iterables), **kwargs)
       ...:     
       
    In [4]: list(multi_sort_gen([(1,3), (1,2), (1,1)], [(2,10), (2,9), (2,8)], key=itemgetter(1)))
    Out[4]: [(1, 1), (1, 2), (1, 3), (2, 8), (2, 9), (2, 10)]
    
    In [5]: list(multi_sort_gen([(1,3), (1,2), (1,1)], [(2,10), (2,9), (2,8)], key=itemgetter(1), reverse=True))
    Out[5]: [(2, 10), (2, 9), (2, 8), (1, 3), (1, 2), (1, 1)]
    
    In [6]: from random import randint
    
    In [7]: arrays = [[randint(0,9) for _ in range(10 ** 3)] for _ in range(10 ** 3)]
    
    In [8]: %timeit result = list(multi_sort_gen(*arrays))
    263 ms ± 11.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
    
    In [9]: arrays = [[(i, randint(0,9)) for _ in range(10 ** 3)] for i in range(10 ** 3)]
    
    In [10]: %timeit result = list(multi_sort_gen(*arrays, key=itemgetter(1)))
    570 ms ± 46.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
    

    I learned that heapq.merge exists and it claims:

    Help on function merge in module heapq:
    
    merge(*iterables, key=None, reverse=False)
        Merge multiple sorted inputs into a single sorted output.
        
        Similar to sorted(itertools.chain(*iterables)) but returns a generator,
        does not pull the data into memory all at once, and assumes that each of
        the input streams is already sorted (smallest to largest).
        
        >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25]))
        [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25]
        
        If *key* is not None, applies a key function to each element to determine
        its sort order.
        
        >>> list(merge(['dog', 'horse'], ['cat', 'fish', 'kangaroo'], key=len))
        ['dog', 'cat', 'fish', 'horse', 'kangaroo']
    

    This has the disadvantage that the inputs must be pre-sorted. The advantage, though, is that it does not require all inputs to fit in memory (whereas my solution above does). Nice to know that the tradeoff of space for time can be made here.

    In [12]: import heapq
    
    In [14]: %timeit result = list(heapq.merge(*(sorted(a, key=itemgetter(1)) for a in arrays), key=itemgetter(1)))      
    3.24 s ± 252 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
    
    2 votes
    1. [2]
      Emerald_Knight
      Link Parent

      Not really cheating, no. Otherwise we would have to decide where to draw the line at abstractions, which would be a large and fruitless debate not worth getting into.

      1. onyxleopard
        Link Parent

        Yeah, it’s just a matter of if it’s part of the challenge to re-implement something like itertools.chain, which is probably good practice in its own right.

        1 vote
  14. [3]
    jacebot
    Link

    The magic that is JavaScript ES6... When working through this, I realized I'd had a similar real-world problem this week, so it's still rather fresh: get a messy array into a new array of arrays with groups of 3 objects. Fun times.

    Don't hate on the web dev... we program too. Sometimes.
    let overallArr = [];
    const arr = [[1,2,3], [3,4,5,], [5,6,7,8]]
    
    function sortArr (current, arr) {
      let currentArr = current;
      let tempArr = [];
      
      for (let i=0; i < arr.length; i++) {
        let innerArr = arr[i];
        tempArr = [...tempArr, ...innerArr]
      }
    
    // Not the most efficient in large sets
    // https://gist.github.com/telekosmos/3b62a31a5c43f40849bb#gistcomment-1739870
      // Numeric comparator -- the default sort compares elements as strings
      return overallArr = new Set(tempArr.sort((a, b) => a - b))
    }
    
    console.log(sortArr(overallArr, arr))
    
    2 votes
    1. min
      Link Parent

      You can get it even shorter using concat:

      function mergeAndSort(arr) {
         return [].concat(...arr).sort((a, b) => a - b) // numeric sort
      }
      

      Or if you still want it in a set:

      function mergeAndSort(arr) {
         return new Set([].concat(...arr).sort((a, b) => a - b))
      }
      
      1 vote
    2. Emerald_Knight
      Link Parent

      No hate here. Despite all of the flak it gets, I work primarily with PHP, and occasionally with JS.

      The only thing I would hate on, if this were in a merge request, would be some minor naming issues. For a programming challenge like this, though, it's not really an important point. Also, just a heads up, the new Set() call would be undesired for a simple merging of array elements as it strips the duplicates that you would expect to see. Great for getting unique values, though! :)
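      The same pitfall is easy to demonstrate in Python terms, where a set likewise drops repeats:

      ```python
      # Merging arrays that share elements: a plain sort keeps duplicates,
      # while routing through a set silently removes them.
      arrays = [[1, 2, 3], [3, 4, 5], [5, 6, 7, 8]]
      flat = [x for a in arrays for x in a]
      print(sorted(flat))       # keeps duplicates: [1, 2, 3, 3, 4, 5, 5, 6, 7, 8]
      print(sorted(set(flat)))  # drops them:       [1, 2, 3, 4, 5, 6, 7, 8]
      ```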