Dispatch::Source objects are used to monitor a variety of system objects and events, including file descriptors, Mach ports, processes, virtual filesystem nodes, signal delivery, and timers.
When a state change occurs, the dispatch source submits its event handler block to its target queue.
All the Dispatch::Source types share the same initialization scheme:
- new(type, handle, mask, queue) { ... } -> Source
- [PARAM] type:
- The type of the dispatch source.
- [PARAM] handle:
- The handle to monitor. If `DATA_ADD` or `DATA_OR` is passed
as the type, pass `0` in this argument.
- [PARAM] mask:
- The mask of flags.
- [PARAM] queue:
- The dispatch queue to which the event handler block is submitted.
- [RETURN]
- Returns a Dispatch::Source instance.
Well, there is no rule without an exception; in this case the exception is Dispatch::Source.timer:
- timer(delay, interval, leeway, queue) { ... } -> Source
- [PARAM] delay:
- The start time of the timer.
- [PARAM] interval:
- The interval, in seconds, for the timer.
- [PARAM] leeway:
- The amount of time, in seconds, that the system can defer the timer.
- [PARAM] queue:
- The dispatch queue to which the event handler block is submitted.
- [RETURN]
- Returns a Dispatch::Source instance.
Just like Dispatch::Queue objects, Dispatch::Source objects can be cancelled and tested for cancellation:
- cancel!
- Asynchronously cancels the dispatch source, preventing any
further invocation of its event handler block.
- cancelled? -> bool
- [RETURN]
- Returns `true` if cancelled, otherwise `false`.
Other implemented methods are:
- handle
- [RETURN]
- Returns the underlying Ruby handle for the dispatch source.
- mask
- [RETURN]
- Returns the set of flags that were specified at source
creation time via the mask argument.
- data
- [RETURN]
- Returns the currently pending data for the dispatch source.
Grand Central Dispatch (GCD) comes with 11 different types of dispatch sources, and at the moment only 8 of them are accessible from [MacRuby][macrury] and RubyMotion:
Dispatch::Source::DATA_ADD
Dispatch::Source::DATA_OR
Dispatch::Source::Timer
Dispatch::Source::PROC
Dispatch::Source::READ
Dispatch::Source::WRITE
Dispatch::Source::SIGNAL
Dispatch::Source::VNODE
Dispatch::Source::MACH_SEND
Dispatch::Source::MACH_RECV
Dispatch::Source::MEMORYPRESSURE
1. Dispatch::Source::DATA_ADD and Dispatch::Source::DATA_OR
Both sources allow applications to manually trigger the source’s event action via a call to Dispatch::Source#<<
. The data is merged with the source’s pending data via an atomic add or a logical OR, depending on the source type. The merge happens on the target queue.
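A minimal sketch of such a progress counter, assuming MacRuby's Dispatch wrapper (the queue label and the `@progress` name are chosen here for illustration):

```ruby
# Merge manually triggered events through a DATA_ADD source.
queue     = Dispatch::Queue.new('org.macruby.examples.progress')
@progress = 0
adder = Dispatch::Source.new(Dispatch::Source::DATA_ADD, 0, 0, queue) do |source|
  @progress += source.data  # data holds the coalesced sum of all pending adds
end

# Trigger the source concurrently from the global queue.
Dispatch::Queue.concurrent.apply(10) { |i| adder << 1 }

queue.sync { }              # drain the target queue before reading the result
puts @progress
```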
explanation: the example increments a shared progress
object; even though the numbers are generated concurrently, the source
serializes the merges on its target queue, so the progress count stays
consistent without any extra thread-safety code on our side.
2. Dispatch::Source::Timer
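A sketch of such a timer, assuming MacRuby's Dispatch wrapper:

```ruby
# Fire every second, allowing the system a leeway of 0.5 seconds.
timer = Dispatch::Source.timer(0, 1.0, 0.5, Dispatch::Queue.main) do |source|
  puts "tick (#{source.data} firing(s) coalesced since the last run)"
end
# timer.cancel! stops the timer when it is no longer needed.
```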
explanation: the above source is a timer that fires every second, but it has a leeway of 0.5 seconds, which means it may sometimes fire after an interval of up to 1.5 seconds. The leeway is the tolerance the system is granted.
3. Dispatch::Source::PROC
This type of source monitors process state changes; the handle is the process identifier of the monitored process, and the mask may be one or more of the following flags:
Flag | Meaning
---|---
Dispatch::Source::PROC_EXIT | The process has exited and is available to wait.
Dispatch::Source::PROC_FORK | The process has created one or more child processes.
Dispatch::Source::PROC_EXEC | The process has become another executable image.
Dispatch::Source::PROC_SIGNAL | A Unix signal was delivered to the process.
How to use it?
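A sketch of watching iTunes, assuming MacRuby on OS X (looking the process up via `NSRunningApplication` is one way to obtain its pid; the original may have done this differently):

```ruby
# Watch the iTunes process and react when it exits.
apps = NSRunningApplication.runningApplicationsWithBundleIdentifier('com.apple.iTunes')
queue = Dispatch::Queue.new('org.macruby.examples.proc')

if (itunes = apps.first)
  watcher = Dispatch::Source.new(Dispatch::Source::PROC,
                                 itunes.processIdentifier,
                                 Dispatch::Source::PROC_EXIT, queue) do |source|
    puts 'iTunes has quit'
    source.cancel!
  end
end
```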
explanation: in our example we observe the iTunes process; the block is called whenever the application quits.
4. Dispatch::Source::READ
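A sketch of an asynchronous reader, assuming MacRuby's Dispatch wrapper (`filename` is a placeholder path from the surrounding text):

```ruby
# Read the file asynchronously through a READ source.
@result = ''
file    = File.open(filename, 'r')
queue   = Dispatch::Queue.new('org.macruby.examples.read')

reader = Dispatch::Source.new(Dispatch::Source::READ, file, 0, queue) do |source|
  begin
    # source.data approximates the number of bytes available to read
    @result << file.read_nonblock(source.data)
  rescue EOFError
    source.cancel!
    file.close
  end
end
```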
explanation: the above source reads the content of our file filename
asynchronously; the content of the file is copied into our @result
variable.
5. Dispatch::Source::WRITE
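A sketch of the asynchronous writer, assuming MacRuby's Dispatch wrapper (`filename` is again a placeholder path):

```ruby
# Write the content of @msg to the file whenever the descriptor is writable.
@msg  = "GCD is mighty\n"
file  = File.open(filename, 'w')
queue = Dispatch::Queue.new('org.macruby.examples.write')

writer = Dispatch::Source.new(Dispatch::Source::WRITE, file, 0, queue) do |source|
  if @msg.empty?
    source.cancel!        # nothing left to write, tear the source down
    file.close
  else
    written = file.write_nonblock(@msg)
    @msg    = @msg[written..-1]   # keep whatever did not fit this round
  end
end
```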
explanation: the above source writes the content of the variable @msg
asynchronously to the file filename.
6. Dispatch::Source::VNODE
Flag | Meaning
---|---
Dispatch::Source::VNODE_WRITE | The data of the filesystem object changed.
Dispatch::Source::VNODE_DELETE | The filesystem object was deleted from the namespace.
Dispatch::Source::VNODE_EXTEND | The filesystem object changed in size.
Dispatch::Source::VNODE_RENAME | The filesystem object was renamed in the namespace.
Dispatch::Source::VNODE_ATTRIB | The metadata of the filesystem object changed.
Dispatch::Source::VNODE_REVOKE | Access to the filesystem object was revoked.
Dispatch::Source::VNODE_LINK | The link count of the filesystem object changed.
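A sketch of a directory watcher, assuming MacRuby's Dispatch wrapper (`dirPath` is a placeholder from the surrounding text):

```ruby
# Watch a directory for writes and deletions.
dir   = File.open(dirPath, 'r')
queue = Dispatch::Queue.new('org.macruby.examples.vnode')
mask  = Dispatch::Source::VNODE_WRITE | Dispatch::Source::VNODE_DELETE

watcher = Dispatch::Source.new(Dispatch::Source::VNODE, dir, mask, queue) do |source|
  puts "#{dirPath} changed (flags: #{source.data})"
end
```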
explanation: the above source observes the directory dirPath;
whenever the content of the directory changes (a write or a deletion), the source triggers its block.
As you can see, GCD is a mighty tool whose advantages are concurrency and asynchronicity. Whenever you have critical tasks to get done, you should consider GCD.
This is the last part of the GCD Series. The coming posts will cover other topics. I hope you enjoyed it.
The Auto Layout system lets us define layout constraints for user interface elements. These constraints represent relationships between user interface elements, such as “these views line up head to tail” or “this button should move with this split-view subview”. When laying out the user interface, a constraint satisfaction system arranges the elements in the way that most closely meets the constraints. If you configure constraints that the system cannot satisfy, an exception is thrown.
Constraints are rules for laying out the elements in your user interface. For example, they help you specify that a text label should be centered on its superview and keep the same proportions even when the superview’s size changes. Let’s illustrate it with real-life scenarios: - Localization example (image) - Auto-rotation example (image)
Constraints themselves are objects, instances of NSLayoutConstraint, that you install on view objects (instances of UIView on iOS 6, or instances of NSView on Mac OS X ≥ 10.7). Typically you specify the constraints in Interface Builder, but we can do better :-): we will create them programmatically, using an ASCII-art² inspired format string and a form that looks very much like a linear equation³:
H:|-[input_field]-[action_button]-|
view1.attr1 < relation > view2.attr2 * multiplier + constant
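A sketch of turning the format string above into installed constraints, assuming RubyMotion on iOS (the view ivars are illustrative names):

```ruby
# Build H:|-[input_field]-[action_button]-| programmatically.
# Views created in code must opt out of autoresizing-mask translation.
views = { 'input_field' => @input_field, 'action_button' => @action_button }
views.each_value { |v| v.translatesAutoresizingMaskIntoConstraints = false }

constraints = NSLayoutConstraint.constraintsWithVisualFormat(
  'H:|-[input_field]-[action_button]-|',
  options: 0, metrics: nil, views: views)
@input_field.superview.addConstraints(constraints)
```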
We have a UIViewController with 5 subviews:
1. UILabel (title label): attached directly to the top of its superview.
2. UILabel (subtitle label): attached directly to the bottom of the title label.
3. UITextField (symbol input field): placed 5 pts below the bottom of the subtitle label.
4. UIButton (action button): attached directly to the right side of the input field.
5. UILabel (disclaimer label): its bottom edge is placed 5 pts above the bottom of the superview.
We want all of these relationships to hold, no matter how the superview’s proportions change.
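A trimmed sketch of constraints implementing the five relationships above, assuming RubyMotion (view ivars and format strings are illustrative; the full example on GitHub is longer):

```ruby
# Visual-format constraints for the five subviews.
views = { 'title'      => @title_label,
          'subtitle'   => @subtitle_label,
          'input'      => @input_field,
          'button'     => @action_button,
          'disclaimer' => @disclaimer_label }
views.each_value { |v| v.translatesAutoresizingMaskIntoConstraints = false }

formats = [
  'V:|[title][subtitle]-5-[input]',  # title at top, subtitle below, input 5 pts lower
  'V:[disclaimer]-5-|',              # disclaimer 5 pts above the bottom
  'H:|[title]|',
  'H:|[subtitle]|',
  'H:|-[input]-[button]-|',          # button attached to the input's right side
  'H:|-[disclaimer]-|'
]

formats.each do |format|
  view.addConstraints(
    NSLayoutConstraint.constraintsWithVisualFormat(format,
      options: 0, metrics: nil, views: views))
end
```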
This code works the same on all devices [iPhone, Retina iPhone, and iPhone 5]. The example code on GitHub includes some localization tweaks; it shows the benefits of Auto Layout for localization, among other things.
A dispatch semaphore is GCD’s implementation of a traditional counting semaphore. It is not to be confused with Ruby’s Mutex, which only implements a simple lock; a counting semaphore also allows coordinated access to shared data from multiple threads. Traditional semaphores always require calling down into the kernel to test the semaphore, but a dispatch semaphore tests its counter in user space and only traps into the kernel when the test fails and the thread needs to block. This makes dispatch semaphores efficient and lightweight.
A dispatch semaphore object mainly responds to two methods:
- semaphore#signal
- semaphore#wait
When a semaphore is signaled, the counter is incremented. When a thread waits on a semaphore, it blocks, if necessary, until the counter is greater than 0, and then decrements the counter.
Let’s look at some code for a better understanding. In this example I’ll try to solve the “Dining Philosophers Problem”, an example problem often used in concurrent algorithm design to illustrate synchronization issues and techniques for resolving them.
The dining philosophers problem was invented by E. W. Dijkstra. Imagine five philosophers who spend their lives just thinking and eating. In the middle of the dining room is a circular table with five chairs. On the table is a big plate of spaghetti, but only five chopsticks are available, as shown in the following figure. Each philosopher thinks; when he gets hungry, he sits down and picks up the two chopsticks that are closest to him. If a philosopher can pick up both chopsticks, he eats for a while. When a philosopher finishes eating, he puts down the chopsticks and starts thinking again.
In this example the chopsticks are our finite resource: each philosopher has one, but to eat he needs two. Whenever a philosopher picks up a chopstick he waits on its semaphore, and when he is done he signals it to wake the others waiting for that chopstick.
Let’s take a look into how semaphores really work:
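A sketch of the philosophers built on dispatch semaphores, assuming MacRuby's Dispatch wrapper (note that this naive version can still deadlock if all five grab their left chopstick at once; a full solution must also break that symmetry, for example by having one philosopher pick up his chopsticks in the opposite order):

```ruby
# Five philosophers, five chopsticks, one semaphore per chopstick.
n          = 5
chopsticks = Array.new(n) { Dispatch::Semaphore.new(1) }
queue      = Dispatch::Queue.concurrent

n.times do |i|
  queue.async do
    left, right = chopsticks[i], chopsticks[(i + 1) % n]
    left.wait                      # pick up the first chopstick...
    right.wait                     # ...then the second
    puts "philosopher #{i} is eating"
    right.signal                   # put them back down,
    left.signal                    # waking anyone waiting on them
  end
end
```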
Another example is the sleeping barber problem.
As you can see, dispatch semaphores are a great way to control access to a limited resource; this is the fundamental usage of a dispatch semaphore. In the last blog post of this series I’ll show the principles of Dispatch::Source.
Now that we know how to use Grand Central Dispatch to make our application concurrent and parallel, this time I’ll try to show you how GCD makes it easy for us to synchronize blocks and queued tasks.
A dispatch group is a way to monitor a set of block objects for completion. (You can monitor the blocks synchronously or asynchronously depending on your needs.) Groups provide a useful synchronization mechanism for code that depends on the completion of other tasks. Dispatch::Group acts more or less like Thread#join in plain ruby.
Let’s look at some examples to figure out how Dispatch::Group works:
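A sketch of the scenario described below, assuming MacRuby's Dispatch wrapper (Derp, Derpina, and the kitchen queue label are illustrative):

```ruby
# Derpina waits for the whole group before pressing play.
group   = Dispatch::Group.new
kitchen = Dispatch::Queue.new('org.macruby.examples.kitchen')

kitchen.async(group) { puts 'Derp: making popcorn' }
kitchen.async(group) { puts 'Derp: grabbing drinks' }

group.wait                   # Derpina blocks here until Derp is back
puts 'Derpina: pressing play'
```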
In this example, Derpina waits until Derp is back from the kitchen before pressing the play button. You don’t have to use two queues; both tasks could be executed on the same queue.
How useful are groups in GCD? Well, let’s take a look at another example. Do you know how difficult it is to implement promises and futures in plain Ruby? Here is how you can implement a future in MacRuby or RubyMotion:
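A minimal sketch of such a future built on a dispatch group (a condensed take on the approach credited below; the original implementation may differ in detail):

```ruby
# A Future computes its value in the background and blocks only on demand.
class Future
  def initialize(&block)
    @group = Dispatch::Group.new
    # Compute the value concurrently, tracked by the group...
    Dispatch::Queue.concurrent.async(@group) { @value = block.call }
  end

  def value
    @group.wait   # ...and block only when the result is actually needed.
    @value
  end
end
```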
Now it’s easy to schedule long-running tasks in the background:
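For example (a sketch using the Future class from this post; the sleep stands in for expensive work):

```ruby
future = Future.new do
  sleep 2            # some long-running computation
  42
end
# ... do something else in the meantime ...
puts future.value    # blocks at most until the computation is done
```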
This example is brought to you by patrickt (Patrick Thomson) and benstiglitz (Benjamin Stiglitz).
I think we have had enough of groups; now let’s take a look at something new. Barriers were introduced with OS X Lion and iOS 5. A barrier is a specialized version of the Dispatch::Queue#async method. When a block enqueued with barrier_async reaches the front of a private concurrent queue, it waits until all previously enqueued blocks finish executing, at which point the block runs alone. No block submitted after a call to barrier_async is executed until the barrier block finishes. The call itself returns immediately. From the MacRuby source code: if the provided queue is not a private concurrent queue, this method behaves identically to #async.
Let’s look at some code:
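A sketch, assuming a MacRuby build where `Dispatch::Queue.concurrent` accepts a label to create a private concurrent queue (the API surface here is an assumption; as noted later in the post, barrier support depends on the MacRuby build):

```ruby
queue = Dispatch::Queue.concurrent('org.macruby.examples.barrier')

queue.async { puts 'reader 1' }               # these two may run concurrently
queue.async { puts 'reader 2' }
queue.barrier_async { puts 'runs alone, after both readers' }
queue.async { puts 'reader 3 (only after the barrier)' }
```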
When should we use #barrier_async? Barriers are pretty useful, for example, for manipulating data structures that can be read concurrently but not written concurrently.
There is a second barrier method, barrier_sync, which blocks until the provided block has executed, e.g.:
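A sketch under the same assumption about creating a private concurrent queue:

```ruby
queue = Dispatch::Queue.concurrent('org.macruby.examples.barrier')

queue.async { puts 'in-flight reader' }
queue.barrier_sync { puts 'barrier block done' }  # returns only after this ran
puts 'past the barrier'
```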
Here is one more barrier example, inspired by Mike Ash. Let’s imagine we have a Hash that’s being used as a cache. A Hash is safe for concurrent reading, but doesn’t allow any concurrent access while its contents are being modified, not even if the other access is a simple read.
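A condensed sketch of such a cache in the spirit of Mike Ash's pattern (class and method names are illustrative; the same private-concurrent-queue assumption as above applies):

```ruby
# A Hash used as a cache: concurrent reads, exclusive (barrier) writes.
class Cache
  def initialize
    @store = {}
    @queue = Dispatch::Queue.concurrent('org.macruby.examples.cache')
  end

  def [](key)
    value = nil
    @queue.sync { value = @store[key] }           # reads may overlap freely
    value
  end

  def []=(key, value)
    @queue.barrier_async { @store[key] = value }  # writes run alone
  end
end
```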
To get a better understanding, you should try both versions (the GCD version and the plain Ruby version); one of them will hang. Unfortunately the current MacRuby release is not compiled for Mac OS X Lion, so Dispatch::Queue#barrier_async is not available, but you can compile it yourself… or ask me on Twitter whether I can send you an installer package ;-)
Since one of the biggest differences between Android and iOS applications is UI responsiveness, I hope this motivates you to use GCD to improve the user experience of your RubyMotion application.
Grand Central Dispatch is Apple’s way of doing concurrent programming: by using it, you are able to divide your program into pieces (tasks) that can be executed by a queue concurrently or serially. Since GCD is a low-level C API, you can’t talk to it directly from Ruby, but MacRuby ships a wrapper for it.
You can think of Dispatch::Queues as workers waiting to execute arbitrary tasks; they can execute tasks either concurrently or serially. A serial queue executes a single task at a time; a concurrent queue is capable of executing as many tasks simultaneously as your system allows.
GCD comes with three different types of queues:
1) The main queue: the main queue is bound to the application’s main thread. You can get this queue by using:
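For example (MacRuby sketch):

```ruby
main = Dispatch::Queue.main   # the queue bound to the application's main thread
```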
2) Global / concurrent queues: in pre-Lion / pre-iOS 5 versions of GCD there was only one concurrent queue with three defined priorities, but this has changed. You can now obtain as many concurrent queues as you want, each executing multiple blocks at the same time¹.
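For example (MacRuby sketch of the priority-based global queues):

```ruby
default = Dispatch::Queue.concurrent          # same as Dispatch::Queue.concurrent(:default)
high    = Dispatch::Queue.concurrent(:high)
low     = Dispatch::Queue.concurrent(:low)
```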
3) Custom queues: lightweight lists of blocks that are executed one at a time in FIFO order. They can be compared to a Ruby Mutex or a traditional Ruby thread, and they are perfectly suited as a synchronization mechanism without having to deal with lock and unlock.² If you want to ensure that tasks execute in a predictable order, you should use a custom queue.
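For example (MacRuby sketch; the reverse-DNS label is a convention, not a requirement):

```ruby
queue = Dispatch::Queue.new('org.macruby.examples.serial')  # private FIFO queue
```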
There are two ways to submit a block to a queue. The first is asynchronous execution, which submits a block to a queue and returns immediately, e.g.:
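A sketch of asynchronous submission (MacRuby):

```ruby
queue = Dispatch::Queue.new('org.macruby.examples.async')
queue.async { puts 'runs later' }   # returns immediately, before the block runs
```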
The second is the Dispatch::Queue#sync method, which submits a block to a dispatch queue and waits until that block completes. Unlike the Dispatch::Queue#async method, the block is executed synchronously, e.g.:
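A sketch of synchronous submission (MacRuby):

```ruby
queue = Dispatch::Queue.new('org.macruby.examples.sync')
queue.sync { puts 'runs now' }      # returns only after the block completed
```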
Dispatch::Queue#after submits a block asynchronously to the given queue after the given delay (in seconds) has passed.
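For example (MacRuby sketch):

```ruby
Dispatch::Queue.main.after(2.5) { puts 'fired 2.5 seconds later' }
```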
Dispatch::Queue#apply submits a block to a dispatch queue for multiple executions; if the target queue is concurrent, the iterations are executed concurrently.
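A sketch of #apply (MacRuby; note that the call blocks until all iterations have run):

```ruby
results = []
Dispatch::Queue.concurrent.apply(5) { |i| results[i] = i * i }
p results   # all five iterations are done by the time apply returns
```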
Dispatch objects allow you to manage block execution by cancelling, suspending, and resuming it.
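A sketch of suspending and resuming a queue (MacRuby; a suspended queue finishes the block it is currently running, but starts no new ones):

```ruby
queue = Dispatch::Queue.new('org.macruby.examples.control')

queue.suspend!                     # newly submitted blocks are held back...
queue.async { puts 'deferred' }
queue.resume!                      # ...until the queue is resumed
```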
Sometimes, when dealing with the Cocoa / Cocoa Touch APIs, you will need access to the underlying queue object. For this purpose, MacRuby’s GCD wrapper provides a method to get it: Dispatch::Queue#dispatch_object.
Dispatch::TIME_FOREVER: means infinity; a queue or semaphore will wait until the blocks are done.
Dispatch::TIME_NOW: means zero; a queue or semaphore will not wait for blocks at all.
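A sketch of the difference, assuming MacRuby's Semaphore#wait accepts these timeout constants (as dispatch_semaphore_wait does in C, returning non-zero when the wait times out):

```ruby
sem = Dispatch::Semaphore.new(0)

sem.wait(Dispatch::TIME_NOW)       # polls: returns immediately without blocking
# sem.wait(Dispatch::TIME_FOREVER) would block until someone calls sem.signal
```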
- An Introduction to GCD with MacRuby by Patrick Thomson
- Intro to Grand Central Dispatch, Part I: Basics and Dispatch Queues by Mike Ash