Git Autocomplete setup

Autocomplete in almost every shell I use has spoiled me, so I expect the same from git. Sadly, it doesn’t come packaged with the default git installation on Mac OS X (El Capitan).

1. Download the git-completion.bash script and store it in a known location

curl https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash -o ~/.git-completion.bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 57021  100 57021    0     0  31591      0  0:00:01  0:00:01 --:--:-- 31590

2. Add the script to your .profile

if [ -f ~/.git-completion.bash ]; then
  . ~/.git-completion.bash
fi

3. You are done.
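
If you don’t want to open a new terminal, you can reload the profile in your current session (assuming your shell actually sources ~/.profile; bash skips it if a ~/.bash_profile exists):

source ~/.profile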

Hitting tab after the git command now shows

$git 
add                  bundle               config               gc                   log                  push                 reset                stash                wtf 
am                   checkout             describe             get-tar-commit-id    merge                rebase               revert               status               
annotate             cherry               diff                 grep                 mergetool            reflog               rm                   submodule            
apply                cherry-pick          difftool             help                 mv                   relink               send-email           subtree              
archive              citool               fetch                imap-send            name-rev             remote               shortlog             svn                  
bisect               clean                filter-branch        init                 notes                repack               show                 tag                  
blame                clone                format-patch         instaweb             p4                   replace              show-branch          verify-commit        
branch               commit               fsck                 interpret-trailers   pull                 request-pull         stage                whatchanged   

iOS Error Series: Library not loaded: @rpath/….. Reason: no suitable image found

I was able to run the app in the simulator; however, running it on one of the devices ran into the error

dyld: Library not loaded: @rpath/libswiftCore.dylib
  Referenced from: /var/mobile/Containers/Bundle/Application/3FC2DC5C-A908-42C4-8508-1320E01E0D5B/Stylist.app/Stylist
  Reason: no suitable image found.  Did find:
    /private/var/mobile/Containers/Bundle/Application/3FC2DC5C-A908-42C4-8508-1320E01E0D5B/testapp.app/Frameworks/libswiftCore.dylib: mmap() errno=1 validating first page of '/private/var/mobile/Containers/Bundle/Application/3FC2DC5C-A908-42C4-8508-1320E01E0D5B/testapp.app/Frameworks/libswiftCore.dylib'
(lldb) 

It turns out that Xcode caches some device-specific data, which can get mixed up if you are running your apps on multiple devices. The simple fix is to delete the Xcode caches. The following commands clean them up for you

rm -rf "$(getconf DARWIN_USER_CACHE_DIR)/org.llvm.clang/ModuleCache"
rm -rf ~/Library/Developer/Xcode/DerivedData
rm -rf ~/Library/Caches/com.apple.dt.Xcode

Local Kafka setup on Mac OS X

The local Kafka setup guide is available at: http://kafka.apache.org/documentation.html#quickstart

However, this out-of-the-box setup on Mac OS X (Yosemite) did not work for me directly. When trying to publish a message to a newly created topic, it would fail as follows

kafka08.client.ClientUtils$ - Successfully fetched metadata for 1 topic(s) Set(my-topic)
kafka08.producer.BrokerPartitionInfo - Getting broker partition info for topic my-topic
kafka08.producer.BrokerPartitionInfo  - Partition [my-topic,0] has leader 0
kafka08.producer.async.DefaultEventHandler - Broker partitions registered for topic: my-topic are 0
kafka08.producer.async.DefaultEventHandler - Sending 1 messages with compression codec 2 to [my-topic,0]
kafka08.producer.async.DefaultEventHandler - Producer sending messages with correlation id 7 for topics [my-topic,0] to broker 0 on developer-mbp:9092
kafka08.producer.SyncProducer - Connected to developer-mbp:9092 for producing
kafka08.producer.SyncProducer - Disconnecting from developer-mbp:9092
kafka08.producer.async.DefaultEventHandler - Failed to send producer request with correlation id 7 to broker 0 with data for partitions [my-topic,0]
java.nio.channels.ClosedChannelException
	at kafka08.network.BlockingChannel.send(BlockingChannel.scala:100)
	at kafka08.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
	at kafka08.producer.SyncProducer.kafka08$producer$SyncProducer$$doSend(SyncProducer.scala:72)
	at kafka08.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
	at kafka08.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
	at kafka08.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
	at kafka08.metrics.KafkaTimer.time(KafkaTimer.scala:33)
	at kafka08.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
	at kafka08.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
	at kafka08.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
	at kafka08.metrics.KafkaTimer.time(KafkaTimer.scala:33)
	at kafka08.producer.SyncProducer.send(SyncProducer.scala:101)
	at kafka08.producer.async.DefaultEventHandler.kafka08$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
	at kafka08.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
	at kafka08.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)

After banging my head against the wall for a few hours, I couldn’t figure out whether my application wasn’t able to talk to Kafka or Kafka wasn’t able to talk to ZooKeeper. netstat claimed that both Kafka and ZooKeeper were running fine, listening on the default ports, and there were no errors in the logs.

SOLUTION

It turns out that even if localhost is provided to the Kafka client (running with Java 8), it tries to resolve the broker via my machine name, which in this case was developer-mbp:9092. This should not be a problem, since my machine should be accessible via its machine name; however, because the machine name had been changed on my MacBook Pro, /etc/hosts had no entry for it.

This was the default file

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1	localhost
255.255.255.255	broadcasthost
::1             localhost

Notice how the machine name is not configured here. Pointing 127.0.0.1 to the machine name fixed the problem. This was the updated /etc/hosts file

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1	localhost developer-mbp
255.255.255.255	broadcasthost
::1             localhost

Now the Kafka client was able to resolve developer-mbp and publish to my-topic.
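
For context, the producer in question was just the plain Kafka 0.8 producer pointed at localhost. Here is a minimal sketch of it (the shaded kafka08 package names in the logs above correspond to the standard kafka.* classes). Note that metadata.broker.list is only used for the initial metadata fetch; produce requests then go to whatever host name the broker advertises, which is why the /etc/hosts entry matters:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class MyTopicProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // only used to bootstrap metadata; subsequent requests use the broker's advertised host name
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("my-topic", "hello kafka"));
        producer.close();
    }
}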

Making your scripts chkconfig aware

If you want to have your scripts run at startup and shutdown, there is a specific way of doing it using chkconfig (or /sbin/chkconfig).

The first thing to do is to get to know runlevels. Runlevels informally define the state your system boots up to. Runlevel 5 in Fedora/RedHat/CentOS is the default and means multi-user with X. Runlevel 1 typically means single-user mode, and runlevel 3 is multi-user mode without X. There are 7 runlevels, 0 through 6. The file /etc/inittab tells you the runlevel your system boots to by default.

First, figure out the runlevels in which your startup script needs to run. Typically you’d run your scripts in runlevels 3, 4, and 5.

Then add the following comment block near the top of your script

# chkconfig: 345 98 02
# description: This is what my script does.

- The first set of numbers after chkconfig: lists the runlevels in which you want your script to run at startup.
- The second number is the priority of the script at start time, i.e. 98 in this case. It means that your script will run after all scripts with a lower start priority have already run.
- The third number is the priority of the script at shutdown, i.e. 02 in this case.

When you add your script using the command chkconfig --add <script>, 7 symlinks are created, one per runlevel. For the runlevels you specified for startup, symlinks prefixed with S are placed in the corresponding /etc/rc.d/rc<runlevel>.d directories. In the remaining /etc/rc.d/rc<runlevel>.d directories, symlinks prefixed with K (kill) are created.
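
Putting it together, an init script registered with chkconfig typically looks something like this minimal skeleton (the service name and the commands are placeholders for your own). Drop it in /etc/init.d, make it executable, and run chkconfig --add myservice:

#!/bin/bash
# chkconfig: 345 98 02
# description: Starts and stops myservice at boot and shutdown.

case "$1" in
  start)
    echo "Starting myservice"
    # /usr/local/bin/myservice &
    ;;
  stop)
    echo "Stopping myservice"
    # kill "$(cat /var/run/myservice.pid)"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
exit 0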

Parse SDK in Swift

When I was a newbie to Swift, I spent a lot of hours getting this working.

Setup: Xcode 6+, Parse SDK 1.6.2+, iOS 8+

Setting up Parse SDK in Xcode

  1. Create a new account at Parse.com
  2. Create a new Application
  3. Once your app is set up, Parse will provide you with your keys. Note these keys and keep them safe
  4. From the Downloads section of Parse.com, download the SDK

Set up your Xcode project

  1. Create a new Xcode project and remember to select the Language as Swift
  2. Once the application is created, click on your project and go to “Build Phases”.
    In the list “Link Binary With Libraries” you will have to add these frameworks to use Parse:
    AudioToolbox.framework
    CFNetwork.framework
    CoreGraphics.framework
    CoreLocation.framework
    libz.dylib
    MobileCoreServices.framework
    QuartzCore.framework
    Security.framework
    StoreKit.framework
    SystemConfiguration.framework
    libsqlite3.dylib
  3. Now drag the Parse.framework you downloaded before into your Xcode Project. Remember to check “Copy Items if needed” when you bring in the framework file

Make the Parse SDK usable in Swift

The Parse SDK is written in Objective-C. In order to use Objective-C code in a Swift project, you need to expose the Objective-C headers through a special bridging header. You can make Xcode do the work of setting up an empty bridging file.

  1. Create a new File (File -> New -> File) of type Objective-C File.
  2. You can use any name for this file. I am going to call it Dummy.m. We are not going to use this file; however, it will make Xcode ask you to create the bridging file
  3. Select yes. Xcode will add 2 files to your project: Dummy.m and the bridging file. Let’s call it BridgingHeader.h
  4. Add the following to your BridgingHeader.h
    #import <Parse/Parse.h>
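
With the bridging header in place, you can initialize Parse from your AppDelegate. Here is a minimal sketch (the keys are placeholders for the ones from your Parse dashboard, and the test object is just a quick sanity check):

import UIKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
        // Replace with the Application ID and Client Key from your Parse dashboard
        Parse.setApplicationId("YOUR_APPLICATION_ID", clientKey: "YOUR_CLIENT_KEY")

        // Quick sanity check: save a test object to verify the setup
        let testObject = PFObject(className: "TestObject")
        testObject["foo"] = "bar"
        testObject.saveInBackground()

        return true
    }
}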

 

Using Apache HTTPClient 4.x for MultiPart uploads with Jersey 1.x Server

You can easily find a lot of articles on the web describing how to use the Jersey client with a Jersey 1.x server to do multi-part uploads. However, trying to use the Apache HTTP client instead uncovers a bug in Jersey that causes a NullPointerException – https://java.net/jira/browse/JERSEY-1658

SEVERE: The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
java.lang.NullPointerException
    at     com.sun.jersey.multipart.impl.MultiPartReaderClientSide.unquoteMediaTypeParameters(MultiPartReaderClientSide.java:227)
    at com.sun.jersey.multipart.impl.MultiPartReaderClientSide.readMultiPart(MultiPartReaderClientSide.java:154)
    at com.sun.jersey.multipart.impl.MultiPartReaderServerSide.readMultiPart(MultiPartReaderServerSide.java:80)
    at com.sun.jersey.multipart.impl.MultiPartReaderClientSide.readFrom(MultiPartReaderClientSide.java:144)
    at com.sun.jersey.multipart.impl.MultiPartReaderClientSide.readFrom(MultiPartReaderClientSide.java:82)
    at com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:488)
    at com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123)
    at com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
    at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
    at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
    at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)

Here’s the relevant piece of code from Jersey

protected static MediaType unquoteMediaTypeParameters(final MediaType mediaType, final String... parameters) {
    if (parameters == null || parameters.length == 0) {
        return mediaType;
    }

    final HashMap unquotedParams = new HashMap(mediaType.getParameters());

    for (final String parameterName : parameters) {
        String parameterValue = mediaType.getParameters().get(parameterName);

        if (parameterValue.startsWith("\"")) {
            parameterValue = parameterValue.substring(1, parameterValue.length() - 1);
            unquotedParams.put(parameterName, parameterValue);
        }
    }

    return new MediaType(mediaType.getType(), mediaType.getSubtype(), unquotedParams);
}

The error occurs because the Jersey server expects the boundary parameter to be set as part of the Content-Type header, and the Apache HTTP client does not set it. This can be verified by comparing the request made by the Jersey client with the one made by the Apache client

Jersey Client

Content-Type=multipart/form-data;boundary=Boundary-1234567890

Apache HTTP Client

Content-Type=multipart/form-data

And since the boundary parameter is missing, it ends up throwing an NPE.

SOLUTION

I was able to manually hack the boundary parameter into the Content-Type header of the request, making it available to the Jersey parser and thus avoiding the NPE. The catch with this fix is that the class MultipartFormEntity is package-private; therefore, the utility class described below needs to be created in the package org.apache.http.entity.mime

package org.apache.http.entity.mime;

import org.apache.commons.lang3.Validate;
import org.apache.http.HttpEntity;

public class MultiPartEntityUtil {
	
	public static String getBoundaryValue(HttpEntity entity) {
		Validate.notNull(entity);
		
		if( entity instanceof MultipartFormEntity ) {
			MultipartFormEntity formEntity = (MultipartFormEntity)entity;

			AbstractMultipartForm form =  formEntity.getMultipart();
			Validate.notNull(form);
			
			return form.getBoundary();
		}
		
		throw new IllegalArgumentException("Provided entity is of type: " + entity.getClass() + " instead of expected: MultipartFormEntity");
	}

}

With this utility class, we can simply set the Content-Type header as follows

 MultipartEntityBuilder builder = MultipartEntityBuilder.create();
 builder.setMode(HttpMultipartMode.BROWSER_COMPATIBLE);

for (File file : files) {
    builder.addBinaryBody(file.getName(), file, ContentType.DEFAULT_BINARY, file.getName());
}

HttpEntity entity = builder.build();
String boundary= MultiPartEntityUtil.getBoundaryValue(entity);

...

request.addHeader(HttpHeaders.CONTENT_TYPE, "multipart/form-data;boundary="+boundary);

This hack makes sure that the Jersey server finds the appropriate boundary parameter. Now you can successfully do multipart uploads with the Apache client against a Jersey 1.x server.
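
For completeness, the server side in my case was a plain Jersey 1.x multipart resource, roughly like the sketch below (the resource path and the logging are hypothetical placeholders, not the exact code from my project):

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import com.sun.jersey.multipart.BodyPart;
import com.sun.jersey.multipart.FormDataMultiPart;

@Path("/upload")
public class UploadResource {

    @POST
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    public Response upload(FormDataMultiPart multiPart) {
        // each binary body added on the client shows up as a body part here
        for (BodyPart part : multiPart.getBodyParts()) {
            System.out.println("Received part: " + part.getContentDisposition().getFileName());
        }
        return Response.ok().build();
    }
}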

Zookeeper Leader election and timeouts

My cluster of 3 nodes had been running fine for a while until one of the nodes died. This node was the LEADER. I guessed the cluster would still be fine since 2/3 nodes were still healthy. However, it looked like it was unable to elect a leader and set up a quorum properly.

Here’s what I was getting:

2014-11-11 12:09:36,101 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:382)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:241)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:228)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:365)
        at java.net.Socket.connect(Socket.java:527)
        at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:225)
        at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:71)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)

and

2014-11-11 12:09:36,102 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
        at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:790)

There’s a configuration setting, initLimit, which defines the amount of time (in ticks) that the initial synchronization phase can take. This value defaults to 10 in ZooKeeper. It turns out that my cluster had enough data to sync in the initial phase that it took longer than the initLimit specified. Increasing initLimit to about 50 fixed the issue, though I do wonder about the side effects of a much higher initLimit value on the cluster.
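
For reference, the change is a single line in zoo.cfg (the tickTime and syncLimit values below are just illustrative context; only initLimit was changed):

# zoo.cfg
tickTime=2000
initLimit=50
syncLimit=5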

More details after searching the net:

What happened here was that the server being elected as leader did go through the leader election process successfully. It then started to send a snapshot of its state to the follower; however, before that process could complete and the follower could finish syncing to the leader’s state, the initLimit timeout was reached and the leader thread decided it had to give up. So increasing initLimit to a value that allowed the snapshot transfer to complete fixed the problem.

S3 Multipart uploads with InputStream

The AWS documentation provides an example of uploading a file using the S3 Multipart Upload feature. This is available here

In one of my projects, I had a system using an InputStream to talk to S3. While upgrading it to use the S3 multipart feature, I was happy to see that UploadPartRequest takes an InputStream, which meant that I could easily create the request as follows

UploadPartRequest uploadRequest = new UploadPartRequest().withUploadId(uploadId)
                .withBucketName(s3Bucket)
                .withKey(s3Key)
                .withInputStream(in)
                .withPartNumber(partNumber)
                .withPartSize(partSize)
                .withLastPart(lastPart);

The code compiled fine but, interestingly, it would not upload any object with more than one part. AmazonS3Client contains the following in its uploadPart() method

 finally {
            if (inputStream != null) {
                try {inputStream.close();}
                catch (Exception e) {}
            }
        }

i.e. the client closes the stream after every part. This is pretty interesting behavior from the AWS SDK. Taking a deeper look at how file-based uploads work with the SDK reveals the secret sauce

        InputStream inputStream = null;
        if (uploadPartRequest.getInputStream() != null) {
            inputStream = uploadPartRequest.getInputStream();
        } else if (uploadPartRequest.getFile() != null) {
            try {
                inputStream = new InputSubstream(new RepeatableFileInputStream(uploadPartRequest.getFile()),
                        uploadPartRequest.getFileOffset(), partSize, true);
            } catch (FileNotFoundException e) {
                throw new IllegalArgumentException("The specified file doesn't exist", e);
            }
        } else {
            throw new IllegalArgumentException("A File or InputStream must be specified when uploading part");
        }

i.e. for file-based uploads, it creates an InputSubstream for each part to be uploaded and closes that after the part is uploaded successfully. In order to make it work with a provided InputStream, it is your responsibility to provide an InputStream that can be closed for each part.

My first hack was to make it so that the client could not close the stream. A very simple way of achieving this is

import java.io.FilterInputStream;
import java.io.InputStream;

/**
 * The caller must explicitly close() the original stream
 */
public class NonCloseableInputStream extends FilterInputStream {

    public NonCloseableInputStream(InputStream inputStream) {
        super(inputStream);
    }

    @Override
    public void close() {
        // do nothing - the caller owns the underlying stream
    }

}

By providing an InputStream wrapped in a NonCloseableInputStream, the uploadPart() call wouldn’t be able to close the stream, and the same stream could be passed to all the UploadPartRequests.

The code ran fine for a while; however, we would see a larger number of failed uploads relative to the previous upload scheme. This was confusing, since the client was configured with a RetryPolicy to retry individual parts the same number of times as before. Scanning through the logs, I found the problem with the hack

private void resetRequestAfterError(Request request, Exception cause) throws AmazonClientException {
        if ( request.getContent() == null ) {
            return; // no reset needed
        }
        if ( ! request.getContent().markSupported() ) {
            throw new AmazonClientException("Encountered an exception and stream is not resettable", cause);
        }
        try {
            request.getContent().reset();
        } catch ( IOException e ) {
            // This exception comes from being unable to reset the input stream,
            // so throw the original, more meaningful exception
            throw new AmazonClientException(
                    "Encountered an exception and couldn't reset the stream to retry", cause);
        }
    }

The expectation that every upload part is provided with its own InputStream is built into the retry logic of the client. When an error occurred while uploading a part, the resetRequestAfterError() method would reset the stream to the beginning. With a single shared stream this would normally lead to silently corrupted uploads; however, since my stream couldn’t be reset to the beginning, it failed with the error message “Encountered an exception and couldn’t reset the stream to retry”

What’s the workaround?

I ended up reading each part into a byte[] and then wrapping it in a ByteArrayInputStream for the UploadPartRequest. This increases the memory requirements for the app but works like a charm.

byte[] part;
List<PartETag> partETags = new ArrayList<PartETag>();

long uploaded = 0;
int partNumber = 1;

for ( ; partNumber < numParts; partNumber++ ) {
   // make sure you read exactly the data corresponding to the part, as InputStream.read()
   // may return less data than asked for
   part = IOUtils.read(in, partSize);
   ByteArrayInputStream bais = new ByteArrayInputStream(part);

   UploadPartRequest uploadRequest = createUploadPartRequest(uploadId, s3Bucket, s3Key, bais, partNumber, partSize, false);
   UploadPartResult result = getS3Client().uploadPart(uploadRequest);
   partETags.add(result.getPartETag());
   uploaded += partSize;
}

// read the remaining data into the buffer and upload it as the last part
long remaining = size - uploaded;
part = IOUtils.read(in, remaining);
ByteArrayInputStream bais = new ByteArrayInputStream(part);

UploadPartRequest uploadRequest = createUploadPartRequest(uploadId, s3Bucket, s3Key, bais, partNumber, remaining, true);
UploadPartResult result = getS3Client().uploadPart(uploadRequest);
partETags.add(result.getPartETag());
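
The createUploadPartRequest call above is just a small convenience helper in my code, not an SDK method; a minimal sketch of it, using the same fluent setters as the earlier snippet, would be:

private UploadPartRequest createUploadPartRequest(String uploadId, String s3Bucket, String s3Key,
        InputStream in, int partNumber, long partSize, boolean lastPart) {
    return new UploadPartRequest()
            .withUploadId(uploadId)
            .withBucketName(s3Bucket)
            .withKey(s3Key)
            .withInputStream(in)
            .withPartNumber(partNumber)
            .withPartSize(partSize)
            .withLastPart(lastPart);
}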

If memory is a big concern, then you could instead create a SlicedInputStream over the range of each part. Note that in this case a retry would need to reset to the start of the slice, which could mean skipping through the input stream from the beginning to the start of the slice, depending on the underlying stream in your application.

Zookeeper Error Guide : Part 1

The last few weeks with ZooKeeper/Curator have been a good experience. I am going to maintain a running list of errors that come up with ZooKeeper and how I fixed or stepped over them.

Running out of connections
WARN [NIOServerCxn.Factory: 0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@352] - Too many connections from /ab.cd.ef.ghi - max is 60

This indicates that a single client host has used up its allowed connections to the server; by default ZooKeeper allows 60 connections per client IP. In your ZooKeeper config file, raise the limit:

maxClientCnxns=500

Unable to load database – disk corruption
FATAL Unable to load database on disk !  java.io.IOException: Failed to process transaction type: 2 error: KeeperErrorCode = NoNode for  at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:152)!

This typically implies either disk corruption on the server or that the process was restarted while snapshotting. There are a few bugs filed against ZooKeeper in this area. The easiest fix, provided the other nodes in your cluster are running, is to wipe out the version-2/ directory on the affected node; it will rebuild itself from the other nodes.

Unable to load database – Unreasonable length
FATAL Unable to load database on disk java.io.IOException: Unreasonable length = 1048583 at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:100)

Some versions of ZooKeeper allowed the client to write data larger than the maximum size the server will read back. Increasing the max buffer size via a JVM property fixes the issue.

-Djute.maxbuffer=xxx
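
One place to set this, assuming you start ZooKeeper with the stock zkServer.sh (which picks up JVMFLAGS from conf/java.env), is:

# conf/java.env -- replace <bytes> with a limit larger than your biggest znode
export JVMFLAGS="-Djute.maxbuffer=<bytes> $JVMFLAGS"
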
Failure to follow the leader
WARN org.apache.zookeeper.server.quorum.Learner: Exception when following the leader java.net.SocketTimeoutException: Read timed out

This is observed when the system is under stress. The stress might be caused by disk contention, network delays, etc. If you cannot reduce the load on the system, try increasing your hardware spec. On EC2, I switched over to High I/O instances and the response was much better.

Java Garbage Collection Statistics

Anyone who has done even a moderate-sized project in Java knows about the GC hell that comes with it. It’s simple enough to collect these GC statistics from the beginning rather than adding them later, after the process has already run into GC issues. Here’s the piece that I typically add to the process, turning up the log4j level when the process shows signs of GC trouble.


import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

private final ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1); // single thread for logging

// in your initialization code - schedule the logger to run every minute
executor.scheduleWithFixedDelay(new GCStatLogger(), 60000, 60000, TimeUnit.MILLISECONDS);

private class GCStatLogger implements Runnable {

    @Override
    public void run() {
        logGCStats();
    }

    private void logGCStats() {
        long gcCount = 0;
        long gcTime = 0;
        // sum counts and times across all collectors (e.g. young and old generation)
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();
            if (count >= 0) {
                gcCount += count;
            }

            long time = gc.getCollectionTime();
            if (time >= 0) {
                gcTime += time;
            }
        }

        // log is the enclosing class's log4j logger
        log.debug("Total Garbage Collections: " + gcCount);
        log.debug("Total Garbage Collection Time (ms): " + gcTime);
    }
}